Azure Storage Blob Service
Since Camel 3.3
Both producer and consumer are supported
The Azure Storage Blob component is used for storing and retrieving blobs from the Azure Storage Blob Service using Azure APIs v12. Support for API versions beyond v12 will be evaluated as they become available, depending on how many breaking changes they introduce.
Prerequisites
You must have a valid Windows Azure Storage account. More information is available at the Azure Documentation Portal.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-azure-storage-blob</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
URI Format
azure-storage-blob://accountName[/containerName][?options]
For the consumer, accountName and containerName are required. For the producer, it depends on the operation being requested: for container-level operations, e.g. createContainer, only accountName and containerName are required, whereas for blob-level operations, e.g. getBlob, accountName, containerName and blobName are required.
The blob will be created if it does not already exist. You can append query options to the URI in the following format: ?option=value&option2=value&…
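For example, a minimal sketch of a container-level and a blob-level producer endpoint (the account, container and blob names are placeholders):
// container-level operation: accountName and containerName are enough
from("direct:createContainer")
    .to("azure-storage-blob://camelazure/container1?operation=createContainer");

// blob-level operation: accountName, containerName and blobName are all required
from("direct:getBlob")
    .to("azure-storage-blob://camelazure/container1?blobName=hello.txt&operation=getBlob");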
Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
Configuring Component Options
The component level is the highest level. It holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth.
Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
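For example, a minimal sketch of configuring this component directly in Java (assuming the BlobConfiguration setters mirror the component options listed below):
// create a shared configuration that endpoints of this component inherit
BlobConfiguration configuration = new BlobConfiguration();
configuration.setCredentialType(CredentialType.SHARED_ACCOUNT_KEY);
configuration.setAccessKey("yourAccessKey");

// look up the component from the CamelContext and apply the configuration
BlobComponent component = camelContext.getComponent("azure-storage-blob", BlobComponent.class);
component.setConfiguration(configuration);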
Configuring Endpoint Options
You will find yourself configuring endpoints the most, as endpoints often have many options that let you configure what you need the endpoint to do. The options are also categorized according to whether the endpoint is used as a consumer (from), as a producer (to), or both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java.
A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders let you externalize the configuration from your code, giving you more flexibility and reuse.
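For instance, a minimal sketch of a download route using property placeholders, so that account details and the access key are not hardcoded (the blob.* property names are arbitrary and used only for illustration):
from("azure-storage-blob://{{blob.accountName}}/{{blob.containerName}}?blobName={{blob.blobName}}&credentialType=SHARED_ACCOUNT_KEY&accessKey=RAW({{blob.accessKey}})")
    .to("file://blobdirectory");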
The following two sections list all the options, first for the component and then for the endpoint.
Component Options
The Azure Storage Blob Service component supports 34 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
The blob name, to consume a specific blob from a container. However, on the producer it is only required for operations at the blob level. | String | ||
Set the blob offset for the upload or download operations, default is 0. | 0 | long | |
The blob type in order to initiate the appropriate settings for each blob type. Enum values:
| blockblob | BlobType | |
Close the stream after read or keep it open, default is true. | true | boolean | |
The component configurations. | BlobConfiguration | ||
StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. | StorageSharedKeyCredential | ||
Determines the credential strategy to adopt. Enum values:
| AZURE_IDENTITY | CredentialType | |
How many bytes to include in the range. Must be greater than or equal to 0 if specified. | Long | ||
The file directory where the downloaded blobs will be saved to; this can be used by both the producer and the consumer. | String | ||
Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. | Integer | ||
Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body. | 0 | int | |
Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. | String | ||
Filters the results to return only blobs whose names match the specified regular expression. May be null to return all. If both prefix and regex are set, regex takes priority and prefix is ignored. | String | ||
Autowired Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String). | BlobServiceClient | ||
An optional timeout value beyond which a RuntimeException will be raised. | Duration | ||
Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean | |
A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0. | 0 | Long | |
Specifies which type of blocks to return. Enum values:
| COMMITTED | BlockListType | |
When using getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call. | Context | ||
When using getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. | OffsetDateTime | ||
When using getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. | OffsetDateTime | ||
Close the stream after write or keep it open, default is true. | true | boolean | |
When set to true, the staged blocks will not be committed directly. | true | boolean | |
When set to true, the append blocks will be created when committing append blocks. | true | boolean | |
When set to true, the page blob will be created when uploading a page blob. | true | boolean | |
Override the default expiration (millis) of URL download link. | Long | ||
Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean | |
The blob operation that can be used with this component on the producer. Enum values:
| listBlobContainers | BlobOperationsDefinition | |
Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. | 512 | Long | |
Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean | |
Used for enabling or disabling all consumer based health checks from this component. | true | boolean | |
Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. | true | boolean | |
Access key for the associated azure account name to be used for authentication with azure blob services. | String | ||
Source Blob Access Key: for the copyBlob operation we need an accessKey for the source blob we want to copy. Passing an accessKey as a header is unsafe, so it can be set through this option instead. | String |
Endpoint Options
The Azure Storage Blob Service endpoint is configured using URI syntax:
azure-storage-blob:accountName/containerName
with the following path and query parameters:
Query Parameters (49 parameters)
Name | Description | Default | Type |
---|---|---|---|
The blob name, to consume a specific blob from a container. However, on the producer it is only required for operations at the blob level. | String | ||
Set the blob offset for the upload or download operations, default is 0. | 0 | long | |
Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through getBlobContainerClient(String), and operations on a blob are available on BlobClient through getBlobContainerClient(String).getBlobClient(String). | BlobServiceClient | ||
The blob type in order to initiate the appropriate settings for each blob type. Enum values:
| blockblob | BlobType | |
Close the stream after read or keep it open, default is true. | true | boolean | |
StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. | StorageSharedKeyCredential | ||
Determines the credential strategy to adopt. Enum values:
| AZURE_IDENTITY | CredentialType | |
How many bytes to include in the range. Must be greater than or equal to 0 if specified. | Long | ||
The file directory where the downloaded blobs will be saved to; this can be used by both the producer and the consumer. | String | ||
Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. | Integer | ||
Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body. | 0 | int | |
Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. | String | ||
Filters the results to return only blobs whose names match the specified regular expression. May be null to return all. If both prefix and regex are set, regex takes priority and prefix is ignored. | String | ||
Autowired Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String). | BlobServiceClient | ||
An optional timeout value beyond which a RuntimeException will be raised. | Duration | ||
If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean | |
Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean | |
To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | ||
Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | ||
A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | ||
A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0. | 0 | Long | |
Specifies which type of blocks to return. Enum values:
| COMMITTED | BlockListType | |
When using getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call. | Context | ||
When using getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. | OffsetDateTime | ||
When using getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. | OffsetDateTime | ||
Close the stream after write or keep it open, default is true. | true | boolean | |
When set to true, the staged blocks will not be committed directly. | true | boolean | |
When set to true, the append blocks will be created when committing append blocks. | true | boolean | |
When set to true, the page blob will be created when uploading a page blob. | true | boolean | |
Override the default expiration (millis) of URL download link. | Long | ||
The blob operation that can be used with this component on the producer. Enum values:
| listBlobContainers | BlobOperationsDefinition | |
Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. | 512 | Long | |
Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean | |
The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. | int | ||
The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. | int | ||
To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | ||
Milliseconds before the next poll. | 500 | long | |
If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean | |
Milliseconds before the first poll starts. | 1000 | long | |
Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long | |
The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values:
| TRACE | LoggingLevel | |
Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | ||
To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object | |
To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | ||
Whether the scheduler should be auto started. | true | boolean | |
Time unit for initialDelay and delay options. Enum values:
| MILLISECONDS | TimeUnit | |
Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean | |
Access key for the associated azure account name to be used for authentication with azure blob services. | String | ||
Source Blob Access Key: for the copyBlob operation we need an accessKey for the source blob we want to copy. Passing an accessKey as a header is unsafe, so it can be set through this option instead. | String |
Required information options:
To use this component, you have multiple options in order to provide the required Azure authentication information:
- By providing your own BlobServiceClient instance, which can be injected into the blobServiceClient option. Note: you don't need to create a specific client, e.g. BlockBlobClient; the BlobServiceClient represents the upper level, which can be used to retrieve lower level clients.
- Via Azure Identity, by specifying credentialType=AZURE_IDENTITY and providing the required environment variables. This enables service principal (e.g. app registration) authentication with secret/certificate as well as username/password. Note that this is the default authentication strategy.
- Via shared storage account key, by specifying credentialType=SHARED_ACCOUNT_KEY and providing accountName and accessKey for your Azure account. This is the simplest way to get started; the accessKey can be generated through your Azure portal.
- Via shared storage account key, by specifying credentialType=SHARED_KEY_CREDENTIAL and providing a StorageSharedKeyCredential instance, which can be injected into the credentials option (see the sketch after this list).
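For example, a minimal sketch of the last option (the registry bean name credentials is arbitrary and used here only for illustration):
// build the credential with your account name and access key
StorageSharedKeyCredential credential = new StorageSharedKeyCredential("yourAccountName", "yourAccessKey");

// bind it into the Camel registry so the endpoint can resolve it via #credentials
context.getRegistry().bind("credentials", credential);

from("azure-storage-blob://yourAccountName/container1?blobName=hello.txt&credentialType=SHARED_KEY_CREDENTIAL&credentials=#credentials")
    .to("file://blobdirectory");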
Usage
For example, in order to download blob content from the block blob hello.txt, located in container1 in the camelazure storage account, use the following snippet:
from("azure-storage-blob://camelazure/container1?blobName=hello.txt&credentialType=SHARED_ACCOUNT_KEY&accessKey=RAW(yourAccessKey)").
to("file://blobdirectory");
Message Headers
The Azure Storage Blob Service component supports 63 message headers, which are listed below:
Name | Description | Default | Type |
---|---|---|---|
CamelAzureStorageBlobOperation (producer) Constant: | (All) Specify the producer operation to execute; see the producer operations section on this page. Enum values:
| BlobOperationsDefinition | |
CamelAzureStorageBlobHttpHeaders (producer) Constant: | (uploadBlockBlob, commitBlobBlockList, createAppendBlob, createPageBlob) Additional parameters for a set of operations. | BlobHttpHeaders | |
CamelAzureStorageBlobETag (consumer) Constant: | The E Tag of the blob. | String | |
CamelAzureStorageBlobCreationTime (consumer) Constant: | Creation time of the blob. | OffsetDateTime | |
CamelAzureStorageBlobLastModified (consumer) Constant: | Datetime when the blob was last modified. | OffsetDateTime | |
CamelAzureStorageBlobContentType (consumer) Constant: | Content type specified for the blob. | String | |
CamelAzureStorageBlobContentMD5 (common) Constant: | (producer) (Most operations related to upload blob) An MD5 hash of the block content. This hash is used to verify the integrity of the block during transport. When this header is specified, the storage service compares the hash of the content that has arrived with this header value. Note that this MD5 hash is not stored with the blob. If the two hashes do not match, the operation will fail. (consumer) Content MD5 specified for the blob. | byte[] | |
CamelAzureStorageBlobContentEncoding (consumer) Constant: | Content encoding specified for the blob. | String | |
CamelAzureStorageBlobContentDisposition (consumer) Constant: | Content disposition specified for the blob. | String | |
CamelAzureStorageBlobContentLanguage (consumer) Constant: | Content language specified for the blob. | String | |
CamelAzureStorageBlobCacheControl (consumer) Constant: | Cache control specified for the blob. | String | |
CamelAzureStorageBlobBlobSize (consumer) Constant: | The size of the blob. | long | |
CamelAzureStorageBlobBlobUploadSize (producer) Constant: | When uploading a blob with the uploadBlockBlob-operation this can be used to tell the client what the length of an InputStream is. | long | |
CamelAzureStorageBlobSequenceNumber (common) Constant: | (producer) (createPageBlob) A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0. (consumer) The current sequence number for a page blob. | Long | |
CamelAzureStorageBlobBlobType (consumer) Constant: | The type of the blob. Enum values:
| BlobType | |
CamelAzureStorageBlobLeaseStatus (consumer) Constant: | Status of the lease on the blob. Enum values:
| LeaseStatusType | |
CamelAzureStorageBlobLeaseState (consumer) Constant: | State of the lease on the blob. Enum values:
| LeaseStateType | |
CamelAzureStorageBlobLeaseDuration (consumer) Constant: | Type of lease on the blob. Enum values:
| LeaseDurationType | |
CamelAzureStorageBlobCopyId (consumer) Constant: | Identifier of the last copy operation performed on the blob. | String | |
CamelAzureStorageBlobCopyStatus (consumer) Constant: | Status of the last copy operation performed on the blob. Enum values:
| CopyStatusType | |
CamelAzureStorageBlobCopySource (consumer) Constant: | Source of the last copy operation performed on the blob. | String | |
CamelAzureStorageBlobCopyProgress (consumer) Constant: | Progress of the last copy operation performed on the blob. | String | |
CamelAzureStorageBlobCopyCompletionTime (consumer) Constant: | Datetime when the last copy operation on the blob completed. | OffsetDateTime | |
CamelAzureStorageBlobCopyStatusDescription (consumer) Constant: | Description of the last copy operation on the blob. | String | |
CamelAzureStorageBlobCopyDestinationSnapshot (consumer) Constant: | Snapshot identifier of the last incremental copy snapshot for the blob. | String | |
CamelAzureStorageBlobIsServerEncrypted (consumer) Constant: | Flag indicating if the blob’s content is encrypted on the server. | boolean | |
CamelAzureStorageBlobIsIncrementalCopy (consumer) Constant: | Flag indicating if the blob was incrementally copied. | boolean | |
CamelAzureStorageBlobAccessTier (common) Constant: | (producer) (uploadBlockBlob, commitBlobBlockList) Defines values for AccessTier. (consumer) Access tier of the blob. | AccessTier | |
CamelAzureStorageBlobIsAccessTierInferred (consumer) Constant: | Flag indicating if the access tier of the blob was inferred from properties of the blob. | boolean | |
CamelAzureStorageBlobArchiveStatus (consumer) Constant: | Archive status of the blob. | ArchiveStatus | |
CamelAzureStorageBlobaccessTierChangeTime (consumer) Constant: | Datetime when the access tier of the blob last changed. | OffsetDateTime | |
CamelAzureStorageBlobMetadata (common) Constant: | (producer) (Operations related to container and blob) Metadata to associate with the container or blob. (consumer) Additional metadata associated with the blob. | Map | |
CamelAzureStorageBlobCommittedBlockCount (consumer) Constant: | Number of blocks committed to an append blob. | Integer | |
CamelAzureStorageBlobAppendOffset (consumer) Constant: | The offset at which the block was committed to the block blob. | String | |
CamelAzureStorageBlobRawHttpHeaders (consumer) Constant: | Returns non-parsed httpHeaders that can be used by the user. | HttpHeaders | |
CamelAzureStorageBlobFileName (consumer) Constant: | The downloaded filename from the operation downloadBlobToFile. | String | |
CamelAzureStorageBlobDownloadLink (consumer) Constant: | The download link generated by downloadLink operation. | String | |
CamelAzureStorageBlobListBlobOptions (producer) Constant: | (listBlobs) Defines options available to configure the behavior of a call to listBlobsFlatSegment on a BlobContainerClient object. | ListBlobsOptions | |
CamelAzureStorageBlobListDetails (producer) Constant: | (listBlobs) The details for listing specific blobs. | BlobListDetails | |
CamelAzureStorageBlobPrefix (producer) Constant: | (listBlobs,getBlob) Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. | String | |
CamelAzureStorageBlobRegex (producer) Constant: | (listBlobs,getBlob) Filters the results to return only blobs whose names match the specified regular expression. May be null to return all. If both prefix and regex are set, regex takes the priority and prefix is ignored. | String | |
CamelAzureStorageBlobMaxResultsPerPage (producer) Constant: | (listBlobs) Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. | Integer | |
CamelAzureStorageBlobTimeout (producer) Constant: | (All) An optional timeout value beyond which a RuntimeException will be raised. | Duration | |
CamelAzureStorageBlobPublicAccessType (producer) Constant: | (createContainer) Specifies how the data in this container is available to the public. Pass null for no public access. | PublicAccessType | |
CamelAzureStorageBlobRequestCondition (producer) Constant: | (Operations related to container and blob) This contains values which will restrict the successful operation of a variety of requests to the conditions present. These conditions are entirely optional. | BlobRequestConditions | |
CamelAzureStorageBlobBlobContainerName (producer) Constant: | (Operations related to container and blob) Override/set the container name on the exchange headers. | String | |
CamelAzureStorageBlobBlobName (producer) Constant: | (Operations related to blob) Override/set the blob name on the exchange headers. | String | |
CamelAzureStorageBlobFileDir (producer) Constant: | (downloadBlobToFile) The file directory where the downloaded blobs will be saved to. | String | |
CamelAzureStorageBlobPageBlobRange (producer) Constant: | (Operations related to page blob) A PageRange object. Given that pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the end offset must be a modulus of 512 - 1. Examples of valid byte ranges are 0-511, 512-1023, etc. | PageRange | |
CamelAzureStorageBlobPageBlobSize (producer) Constant: | (createPageBlob, resizePageBlob) Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. | Long | |
CamelAzureStorageBlobCommitBlobBlockListLater (producer) Constant: | (stageBlockBlobList) When set to true, the staged blocks will not be committed directly. | boolean | |
CamelAzureStorageBlobBlockListType (producer) Constant: | (getBlobBlockList) Specifies which type of blocks to return. Enum values:
| BlockListType | |
CamelAzureStorageBlobCreateAppendBlob (producer) Constant: | (commitAppendBlob) When set to true, the append blocks will be created when committing append blocks. | boolean | |
CamelAzureStorageBlobCreatePageBlob (producer) Constant: | (uploadPageBlob) When set to true, the page blob will be created when uploading a page blob. | boolean | |
CamelAzureStorageBlobDeleteSnapshotsOptionType (producer) Constant: | (deleteBlob) Specifies the behavior for deleting the snapshots on this blob. Include will delete the base blob and all snapshots. Only will delete only the snapshots. If a snapshot is being deleted, you must pass null. Enum values:
| DeleteSnapshotsOptionType | |
CamelAzureStorageBlobListBlobContainersOptions (producer) Constant: | (listBlobContainers) A ListBlobContainersOptions which specifies what data should be returned by the service. | ListBlobContainersOptions | |
CamelAzureStorageBlobParallelTransferOptions (producer) Constant: | (downloadBlobToFile) ParallelTransferOptions to use to download to file. Number of parallel transfers parameter is ignored. | ParallelTransferOptions | |
CamelAzureStorageBlobDownloadLinkExpiration (producer) Constant: | (downloadLink) Override the default expiration (millis) of URL download link. | Long | |
CamelAzureStorageBlobSourceBlobAccountName (producer) Constant: | (copyBlob) The source blob account name to be used as source account name in a copy blob operation. | String | |
CamelAzureStorageBlobSourceBlobContainerName (producer) Constant: | (copyBlob) The source blob container name to be used as source container name in a copy blob operation. | String | |
CamelAzureStorageBlobChangeFeedStartTime (producer) Constant: | (getChangeFeed) It filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. | OffsetDateTime | |
CamelAzureStorageBlobChangeFeedEndTime (producer) Constant: | (getChangeFeed) It filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. | OffsetDateTime | |
CamelAzureStorageBlobContext (producer) Constant: | (getChangeFeed) This gives additional context that is passed through the Http pipeline during the service call. | Context |
Advanced Azure Storage Blob configuration
If your Camel Application is running behind a firewall or if you need to have more control over the BlobServiceClient
instance configuration, you can create your own instance:
StorageSharedKeyCredential credential = new StorageSharedKeyCredential("yourAccountName", "yourAccessKey");
String uri = String.format("https://%s.blob.core.windows.net", "yourAccountName");
BlobServiceClient client = new BlobServiceClientBuilder()
.endpoint(uri)
.credential(credential)
.buildClient();
// "context" is the CamelContext; bind the client into the Camel registry
context.getRegistry().bind("client", client);
Then refer to this instance in your Camel azure-storage-blob
component configuration:
from("azure-storage-blob://cameldev/container1?blobName=myblob&serviceClient=#client")
.to("mock:result");
Automatic detection of BlobServiceClient client in registry
The component is capable of detecting the presence of a BlobServiceClient bean in the registry. If it is the only instance of that type, it will be used as the client and you won't have to define it as a URI parameter, as in the example above. This can be really useful for smarter configuration of the endpoint.
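For example, a minimal sketch (the bean name client is arbitrary; because it is the only BlobServiceClient in the registry, the serviceClient URI parameter can be omitted):
// the single BlobServiceClient bean in the registry is detected and used automatically
BlobServiceClient client = new BlobServiceClientBuilder()
    .endpoint("https://cameldev.blob.core.windows.net")
    .credential(new StorageSharedKeyCredential("cameldev", "yourAccessKey"))
    .buildClient();

context.getRegistry().bind("client", client);

from("azure-storage-blob://cameldev/container1?blobName=myblob")
    .to("mock:result");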
Azure Storage Blob Producer operations
The Camel Azure Storage Blob component provides a wide range of operations on the producer side:
Operations on the service level
For these operations, accountName
is required.
Operation | Description |
---|---|
| Get the content of the blob. You can restrict the output of this operation to a blob range. |
| Returns transaction logs of all the changes that occur to the blobs and the blob metadata in your storage account. The change feed provides an ordered, guaranteed, durable, immutable, read-only log of these changes. |
Operations on the container level
For these operations, accountName
and containerName
are required.
Operation | Description |
---|---|
| Creates a new container within a storage account. If a container with the same name already exists, the producer will ignore it. |
| Deletes the specified container in the storage account. If the container doesn’t exist the operation fails. |
| Returns a list of blobs in this container, with folder structures flattened. |
Operations on the blob level
For these operations, accountName
, containerName
and blobName
are required.
Operation | Blob Type | Description |
---|---|---|
| Common | Get the content of the blob. You can restrict the output of this operation to a blob range. |
| Common | Delete a blob. |
| Common | Downloads the entire blob into a file specified by the path. The file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown. |
| Common | Generates the download link for the specified blob using shared access signatures (SAS). By default, the link only allows 1 hour of access. However, you can override the default expiration duration through the headers. |
| BlockBlob | Creates a new block blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with PutBlob; the content of the existing blob is overwritten with the new content. |
|
| Uploads the specified block to the block blob’s "staging area" to be later committed by a call to commitBlobBlockList. However in case header |
|
| Writes a blob by specifying the list of block IDs that are to make up the blob. In order to be written as part of a blob, a block must have been successfully written to the server in a prior |
|
| Returns the list of blocks that have been uploaded as part of a block blob using the specified block list filter. |
|
| Creates a 0-length append blob. Call the commitAppendBlob operation to append data to an append blob. |
|
| Commits a new block of data to the end of the existing append blob. In case of header |
|
| Creates a page blob of the specified length. Call |
|
| Writes one or more pages to the page blob. The write size must be a multiple of 512. In case of header |
|
| Resizes the page blob to the specified size (which must be a multiple of 512). |
|
| Frees the specified pages from the page blob. The size of the range must be a multiple of 512. |
|
| Returns the list of valid page ranges for a page blob or snapshot of a page blob. |
|
| Copy a blob from one container to another one, even from different accounts. |
Refer to the examples section on this page to learn how to use these operations in your Camel application.
Consumer Examples
To consume a blob into a file using the file component, this can be done as follows:
from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey").
to("file://blobdirectory");
However, you can also write to a file directly without using the file component. In that case, you need to specify the fileDir folder path in order to save your blob on your machine.
from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir").
to("mock:results");
Also, the component supports batch consumption, hence you can consume multiple blobs by only specifying the container name; the consumer will return multiple exchanges depending on the number of blobs in the container. Example:
from("azure-storage-blob://camelazure/container1?accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir").
to("mock:results");
Producer Operations Examples
- listBlobContainers:
from("direct:start")
.process(exchange -> {
// set the header you want the producer to evaluate, refer to the previous
// section to learn about the headers that can be set
// e.g:
exchange.getIn().setHeader(BlobConstants.LIST_BLOB_CONTAINERS_OPTIONS, new ListBlobContainersOptions().setMaxResultsPerPage(10));
})
.to("azure-storage-blob://camelazure?operation=listBlobContainers&client&serviceClient=#client")
.to("mock:result");
- createBlobContainer:
from("direct:start")
.process(exchange -> {
// set the header you want the producer to evaluate, refer to the previous
// section to learn about the headers that can be set
// e.g:
exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "newContainerName");
})
.to("azure-storage-blob://camelazure/container1?operation=createBlobContainer&serviceClient=#client")
.to("mock:result");
- deleteBlobContainer:
from("direct:start")
.process(exchange -> {
// set the header you want the producer to evaluate, refer to the previous
// section to learn about the headers that can be set
// e.g:
exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName");
})
.to("azure-storage-blob://camelazure/container1?operation=deleteBlobContainer&serviceClient=#client")
.to("mock:result");
- listBlobs:
from("direct:start")
.process(exchange -> {
// set the header you want the producer to evaluate, refer to the previous
// section to learn about the headers that can be set
// e.g:
exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName");
})
.to("azure-storage-blob://camelazure/container1?operation=listBlobs&serviceClient=#client")
.to("mock:result");
- getBlob:
We can either set an OutputStream in the exchange body and have the data written to it, e.g.:
from("direct:start")
.process(exchange -> {
// set the header you want the producer to evaluate, refer to the previous
// section to learn about the headers that can be set
// e.g:
exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName");
// set our body
exchange.getIn().setBody(outputStream);
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client")
.to("mock:result");
If we don't set a body, then this operation will give us an InputStream instance, which can be processed further downstream:
from("direct:start")
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client")
.process(exchange -> {
InputStream inputStream = exchange.getMessage().getBody(InputStream.class);
// We use Apache common IO for simplicity, but you are free to do whatever dealing
// with inputStream
System.out.println(IOUtils.toString(inputStream, StandardCharsets.UTF_8.name()));
})
.to("mock:result");
- deleteBlob:
from("direct:start")
.process(exchange -> {
// set the header you want the producer to evaluate, refer to the previous
// section to learn about the headers that can be set
// e.g:
exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName");
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=deleteBlob&serviceClient=#client")
.to("mock:result");
- downloadBlobToFile:
from("direct:start")
.process(exchange -> {
// set the header you want the producer to evaluate, refer to the previous
// section to learn about the headers that can be set
// e.g:
exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName");
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadBlobToFile&fileDir=/var/mydir&serviceClient=#client")
.to("mock:result");
- downloadLink:
from("direct:start")
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadLink&serviceClient=#client")
.process(exchange -> {
String link = exchange.getMessage().getHeader(BlobConstants.DOWNLOAD_LINK, String.class);
System.out.println("My link " + link);
})
.to("mock:result");
- uploadBlockBlob:
from("direct:start")
.process(exchange -> {
// set the header you want the producer to evaluate, refer to the previous
// section to learn about the headers that can be set
// e.g:
exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName");
exchange.getIn().setBody("Block Blob");
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadBlockBlob&serviceClient=#client")
.to("mock:result");
- stageBlockBlobList:
from("direct:start")
.process(exchange -> {
final List<BlobBlock> blocks = new LinkedList<>();
blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("Hello".getBytes())));
blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("From".getBytes())));
blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("Camel".getBytes())));
exchange.getIn().setBody(blocks);
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=stageBlockBlobList&serviceClient=#client")
.to("mock:result");
- commitBlockBlobList:
from("direct:start")
.process(exchange -> {
// We assume here that you already know which blocks you want to commit
final List<Block> blocksIds = new LinkedList<>();
blocksIds.add(new Block().setName("id-1"));
blocksIds.add(new Block().setName("id-2"));
blocksIds.add(new Block().setName("id-3"));
exchange.getIn().setBody(blocksIds);
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=commitBlockBlobList&serviceClient=#client")
.to("mock:result");
- getBlobBlockList:
from("direct:start")
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlobBlockList&serviceClient=#client")
.log("${body}")
.to("mock:result");
- createAppendBlob:
from("direct:start")
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=createAppendBlob&serviceClient=#client")
.to("mock:result");
- commitAppendBlob:
from("direct:start")
.process(exchange -> {
final String data = "Hello world from my awesome tests!";
final InputStream dataStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
exchange.getIn().setBody(dataStream);
// of course you can set whatever headers you like, refer to the headers section to learn more
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=commitAppendBlob&serviceClient=#client")
.to("mock:result");
- createPageBlob:
from("direct:start")
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=createPageBlob&serviceClient=#client")
.to("mock:result");
- uploadPageBlob:
from("direct:start")
.process(exchange -> {
byte[] dataBytes = new byte[512]; // we set range for the page from 0-511
new Random().nextBytes(dataBytes);
final InputStream dataStream = new ByteArrayInputStream(dataBytes);
final PageRange pageRange = new PageRange().setStart(0).setEnd(511);
exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange);
exchange.getIn().setBody(dataStream);
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadPageBlob&serviceClient=#client")
.to("mock:result");
- resizePageBlob:
from("direct:start")
.process(exchange -> {
final PageRange pageRange = new PageRange().setStart(0).setEnd(511);
exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange);
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=resizePageBlob&serviceClient=#client")
.to("mock:result");
- clearPageBlob:
from("direct:start")
.process(exchange -> {
final PageRange pageRange = new PageRange().setStart(0).setEnd(511);
exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange);
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=clearPageBlob&serviceClient=#client")
.to("mock:result");
- getPageBlobRanges:
from("direct:start")
.process(exchange -> {
final PageRange pageRange = new PageRange().setStart(0).setEnd(511);
exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange);
})
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getPageBlobRanges&serviceClient=#client")
.log("${body}")
.to("mock:result");
- copyBlob:
from("direct:copyBlob")
.process(exchange -> {
exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "file.txt");
exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_CONTAINER_NAME, "containerblob1");
exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_ACCOUNT_NAME, "account");
})
.to("azure-storage-blob://account/containerblob2?operation=copyBlob&sourceBlobAccessKey=RAW(accessKey)")
.to("mock:result");
In this way, file.txt in the container containerblob1 of the account 'account' will be copied to the container containerblob2 of the same account.
Development Notes (Important)
All integration tests use Testcontainers and run by default. An Azure accessKey and accountName are needed in order to run all integration tests against the real Azure services. In addition to the mocked unit tests, you will need to run the integration tests with every change you make, or even on a client upgrade, as the Azure client can break things even on minor version upgrades. To run the integration tests, run the following Maven command from this component's directory:
mvn verify -DaccountName=myacc -DaccessKey=mykey -DcredentialType=SHARED_ACCOUNT_KEY
where accountName is your Azure account name and accessKey is the access key generated from the Azure portal.
Spring Boot Auto-Configuration
When using azure-storage-blob with Spring Boot, make sure to use the following Maven dependency to have support for auto-configuration:
<dependency>
<groupId>org.apache.camel.springboot</groupId>
<artifactId>camel-azure-storage-blob-starter</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
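For example, a minimal sketch in application.properties (the property names follow the usual camel.component.<name>.<option> convention and should be checked against the options listed below):
camel.component.azure-storage-blob.credential-type = SHARED_ACCOUNT_KEY
camel.component.azure-storage-blob.access-key = yourAccessKey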
The component supports 35 options, which are listed below.