Azure Storage Data Lake Service
Since Camel 3.8
Both producer and consumer are supported
The Azure Storage Data Lake component is used for storing and retrieving files from Azure Storage Data Lake Service using the Azure APIs v12.
Prerequisites
You need to have a valid Azure account with Azure storage set up. More information can be found at the Azure Documentation Portal.
Maven users will need to add the following dependency to their pom.xml
for this component.
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-azure-storage-datalake</artifactId>
<version>x.x.x</version>
<!-- use the same version as your camel core version -->
</dependency>
Uri Format
azure-storage-datalake:accountName[/fileSystemName][?options]
In the case of the consumer, both accountName and fileSystemName are required. In the case of the producer, it depends on the operation being requested.
You can append query options to the URI in the following format: ?option1=value&option2=value&…
Configuring Options
Camel components are configured on two separate levels:

- component level
- endpoint level
Configuring Component Options
At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level.
For example, a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all.
You can configure components using:

- the Component DSL.
- in a configuration file (application.properties, *.yaml files, etc.).
- directly in the Java code.
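For instance, a minimal sketch of component-level configuration in a configuration file; the property names assume Camel's usual kebab-case convention for the options listed further below, and the values are placeholders:

camel.component.azure-storage-datalake.credential-type = SHARED_KEY_CREDENTIAL
camel.component.azure-storage-datalake.account-key = myAccountKey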
Configuring Endpoint Options
You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java.
A good practice when configuring options is to use Property Placeholders.
Property placeholders provide a few benefits:
- They help prevent using hardcoded urls, port numbers, sensitive information, and other settings.
- They allow externalizing the configuration from the code.
- They help the code to become more flexible and reusable.
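For example, a minimal sketch of an endpoint URI that resolves the account key from a property placeholder (the property name accountKey is an arbitrary choice):

from("azure-storage-datalake:cameltesting/filesystem?fileName=test.txt&accountKey={{accountKey}}")
    .to("mock:results");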
The following two sections list all the options, firstly for the component followed by the endpoint.
Component Options
The Azure Storage Data Lake Service component supports 38 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
clientId | client id for azure account. | | String |
close | Whether or not a file changed event raised indicates completion (true) or modification (false). | | Boolean |
closeStreamAfterRead | check for closing stream after read. | | Boolean |
configuration | configuration object for data lake. | | DataLakeConfiguration |
credentialType | Determines the credential strategy to adopt. Enum values: CLIENT_SECRET, SHARED_KEY_CREDENTIAL, AZURE_IDENTITY, AZURE_SAS, SERVICE_CLIENT_INSTANCE | CLIENT_SECRET | CredentialType |
dataCount | count number of bytes to download. | | Long |
directoryName | directory of the file to be handled in component. | | String |
downloadLinkExpiration | download link expiration time. | | Long |
expression | expression for queryInputStream. | | String |
fileDir | directory of file to do operations in the local system. | | String |
fileName | name of file to be handled in component. | | String |
fileOffset | offset position in file for different operations. | | Long |
maxResults | maximum number of results to show at a time. | | Integer |
maxRetryRequests | no of retries to a given request. | | int |
openOptions | set open options for creating file. | | Set |
path | path in azure data lake for operations. | | String |
permission | permission string for the file. | | String |
position | This parameter allows the caller to upload data in parallel and control the order in which it is appended to the file. | | Long |
recursive | recursively include all paths. | | Boolean |
regex | regular expression for matching file names. | | String |
retainUncommitedData | Whether or not uncommitted data is to be retained after the operation. | | Boolean |
serviceClient | Autowired data lake service client for azure storage data lake. | | DataLakeServiceClient |
sharedKeyCredential | shared key credential for azure data lake gen2. | | StorageSharedKeyCredential |
tenantId | tenant id for azure account. | | String |
timeout | Timeout for operation. | | Duration |
umask | umask permission for file. | | String |
userPrincipalNameReturned | whether or not to use upn. | | Boolean |
bridgeErrorHandler | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation | operation to be performed. Enum values: listFileSystem, createFileSystem, deleteFileSystem, listPaths, getFile, downloadToFile, downloadLink, deleteFile, appendToFile, flushToFile, openQueryInputStream, upload, uploadFromFile, createFile, deleteDirectory | listFileSystem | DataLakeOperationsDefinition |
autowiredEnabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
healthCheckConsumerEnabled | Used for enabling or disabling all consumer based health checks from this component. | true | boolean |
healthCheckProducerEnabled | Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. | true | boolean |
accountKey | account key for authentication. | | String |
clientSecret | client secret for azure account. | | String |
clientSecretCredential | client secret credential for authentication. | | ClientSecretCredential |
sasCredential | SAS token credential. | | AzureSasCredential |
sasSignature | SAS token signature. | | String |
Endpoint Options
The Azure Storage Data Lake Service endpoint is configured using URI syntax:
azure-storage-datalake:accountName/fileSystemName
With the following path and query parameters:
Query Parameters (53 parameters)
Name | Description | Default | Type |
---|---|---|---|
clientId | client id for azure account. | | String |
close | Whether or not a file changed event raised indicates completion (true) or modification (false). | | Boolean |
closeStreamAfterRead | check for closing stream after read. | | Boolean |
credentialType | Determines the credential strategy to adopt. Enum values: CLIENT_SECRET, SHARED_KEY_CREDENTIAL, AZURE_IDENTITY, AZURE_SAS, SERVICE_CLIENT_INSTANCE | CLIENT_SECRET | CredentialType |
dataCount | count number of bytes to download. | | Long |
dataLakeServiceClient | service client of data lake. | | DataLakeServiceClient |
directoryName | directory of the file to be handled in component. | | String |
downloadLinkExpiration | download link expiration time. | | Long |
expression | expression for queryInputStream. | | String |
fileDir | directory of file to do operations in the local system. | | String |
fileName | name of file to be handled in component. | | String |
fileOffset | offset position in file for different operations. | | Long |
maxResults | maximum number of results to show at a time. | | Integer |
maxRetryRequests | no of retries to a given request. | | int |
openOptions | set open options for creating file. | | Set |
path | path in azure data lake for operations. | | String |
permission | permission string for the file. | | String |
position | This parameter allows the caller to upload data in parallel and control the order in which it is appended to the file. | | Long |
recursive | recursively include all paths. | | Boolean |
regex | regular expression for matching file names. | | String |
retainUncommitedData | Whether or not uncommitted data is to be retained after the operation. | | Boolean |
serviceClient | Autowired data lake service client for azure storage data lake. | | DataLakeServiceClient |
sharedKeyCredential | shared key credential for azure data lake gen2. | | StorageSharedKeyCredential |
tenantId | tenant id for azure account. | | String |
timeout | Timeout for operation. | | Duration |
umask | umask permission for file. | | String |
userPrincipalNameReturned | whether or not to use upn. | | Boolean |
sendEmptyMessageWhenIdle | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
bridgeErrorHandler | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. | | ExceptionHandler |
exchangePattern | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut | | ExchangePattern |
pollStrategy | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel. | | PollingConsumerPollStrategy |
operation | operation to be performed. Enum values: listFileSystem, createFileSystem, deleteFileSystem, listPaths, getFile, downloadToFile, downloadLink, deleteFile, appendToFile, flushToFile, openQueryInputStream, upload, uploadFromFile, createFile, deleteDirectory | listFileSystem | DataLakeOperationsDefinition |
lazyStartProducer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
backoffErrorThreshold | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in. | | int |
backoffIdleThreshold | The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in. | | int |
backoffMultiplier | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | | int |
delay | Milliseconds before the next poll. | 500 | long |
greedy | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay | Milliseconds before the first poll starts. | 1000 | long |
repeatCount | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | TRACE | LoggingLevel |
scheduledExecutorService | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | | ScheduledExecutorService |
scheduler | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | | Map |
startScheduler | Whether the scheduler should be auto started. | true | boolean |
timeUnit | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | MILLISECONDS | TimeUnit |
useFixedDelay | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accountKey | account key for authentication. | | String |
clientSecret | client secret for azure account. | | String |
clientSecretCredential | client secret credential for authentication. | | ClientSecretCredential |
sasCredential | SAS token credential. | | AzureSasCredential |
sasSignature | SAS token signature. | | String |
Methods of authentication
To use this component, you will have to provide at least one of the specific credentialType parameters:

- SHARED_KEY_CREDENTIAL: Provide accountName and accessKey for your azure account, or provide a StorageSharedKeyCredential instance via the sharedKeyCredential option.
- CLIENT_SECRET: Provide a ClientSecretCredential instance via the clientSecretCredential option, or provide accountName, clientId, clientSecret and tenantId for authentication with Azure Active Directory.
- SERVICE_CLIENT_INSTANCE: Provide a DataLakeServiceClient instance via the serviceClient option.
- AZURE_IDENTITY: Use the Default Azure Credential Provider Chain.
- AZURE_SAS: Provide the sasSignature or sasCredential parameter to use the SAS mechanism.

The default is CLIENT_SECRET.
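As a sketch of the default CLIENT_SECRET strategy, the snippet below builds a ClientSecretCredential with the Azure Identity SDK and binds it to the registry; the bean name myCredential, the placeholder IDs, and the camelContext variable are assumptions for illustration:

import com.azure.identity.ClientSecretCredential;
import com.azure.identity.ClientSecretCredentialBuilder;

// a minimal sketch, assuming the Azure AD application IDs are supplied externally
ClientSecretCredential credential = new ClientSecretCredentialBuilder()
        .clientId("<clientId>")
        .clientSecret("<clientSecret>")
        .tenantId("<tenantId>")
        .build();

// bind it to the Camel registry so the endpoint can reference it by name
camelContext.getRegistry().bind("myCredential", credential);

// the endpoint can then use it:
// azure-storage-datalake:accountName/fileSystem?credentialType=CLIENT_SECRET&clientSecretCredential=#myCredential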
Usage
For example, to download content from the file test.txt located in the file system filesystem of the camelTesting storage account, use the following snippet:
from("azure-storage-datalake:camelTesting/filesystem?fileName=test.txt&accountKey=key").
to("file://fileDirectory");
Message Headers
The Azure Storage Data Lake Service component supports 63 message headers, which are listed below:
Name | Description | Default | Type |
---|---|---|---|
CamelAzureStorageDataLakeListFileSystemsOptions (producer) Constant: | Defines options available to configure the behavior of a call to listFileSystemsSegment on a DataLakeServiceAsyncClient object. Null may be passed. | ListFileSystemsOptions | |
CamelAzureStorageDataLakeTimeout (producer) Constant: | An optional timeout value beyond which a RuntimeException will be raised. | Duration | |
CamelAzureStorageDataLakeOperation (producer) Constant: | Specify the producer operation to execute. Different operations allowed are shown below. Enum values: listFileSystem, createFileSystem, deleteFileSystem, listPaths, getFile, downloadToFile, downloadLink, deleteFile, appendToFile, flushToFile, openQueryInputStream, upload, uploadFromFile, createFile, deleteDirectory | DataLakeOperationsDefinition |
CamelAzureStorageDataLakeFileSystemName (producer) Constant: | Name of the file system in azure data lake on which operation is to be performed. Please make sure that filesystem name is all lowercase. | String | |
CamelAzureStorageDataLakeDirectoryName (producer) Constant: | Name of the directory in azure data lake on which operation is to be performed. | String | |
CamelAzureStorageDataLakeFileName (producer) Constant: | Name of the file in azure data lake on which operation is to be performed. | String | |
CamelAzureStorageDataLakeMetadata (from both) Constant: | The metadata to associate with the file. | Map | |
CamelAzureStorageDataLakePublicAccessType (producer) Constant: | Defines options available to configure the behavior of a call to listFileSystemsSegment on a DataLakeServiceAsyncClient object. | PublicAccessType | |
CamelAzureStorageDataLakeRawHttpHeaders (consumer) Constant: | Non parsed http headers that can be used by the user. | HttpHeaders | |
CamelAzureStorageDataLakeRequestCondition (producer) Constant: | This contains values which will restrict the successful operation of a variety of requests to the conditions present. These conditions are entirely optional. | DataLakeRequestConditions | |
CamelAzureStorageDataLakeListPathOptions (producer) Constant: | Defines options available to configure the behavior of a call to listContainersSegment on a DataLakeFileSystemClient object. Null may be passed. | ListPathOptions | |
CamelAzureStorageDataLakePath (producer) Constant: | Path of the file to be used for upload operations. | String | |
CamelAzureStorageDataLakeRecursive (producer) Constant: | Specifies if the call to listContainersSegment should recursively include all paths. | Boolean | |
CamelAzureStorageDataLakeMaxResults (producer) Constant: | Specifies the maximum number of blobs to return, including all BlobPrefix elements. | Integer | |
CamelAzureStorageDataLakeUserPrincipalNameReturned (producer) Constant: | Specifies if the name of the user principal should be returned. | Boolean | |
CamelAzureStorageDataLakeRegex (producer) Constant: | Filter the results to return only those files that match the specified regular expression. | String |
CamelAzureStorageDataLakeFileDir (producer) Constant: | Directory in which the file is to be downloaded. | String | |
CamelAzureStorageDataLakeAccessTier (consumer) Constant: | Access tier of file. | AccessTier | |
CamelAzureStorageDataLakeContentMD5 (producer) Constant: | An MD5 hash of the content. The hash is used to verify the integrity of the file during transport. | byte[] | |
CamelAzureStorageDataLakeFileRange (producer) Constant: | This is a representation of a range of bytes on a file, typically used during a download operation. Passing null as a FileRange value will default to the entire range of the file. | FileRange | |
CamelAzureStorageDataLakeParallelTransferOptions (producer) Constant: | The configuration used to parallelize data transfer operations. | ParallelTransferOptions | |
CamelAzureStorageDataLakeOpenOptions (producer) Constant: | Set of OpenOption used to configure how to open or create a file. | Set | |
CamelAzureStorageDataLakeAccessTierChangeTime (consumer) Constant: | Datetime when the access tier of the blob last changed. | OffsetDateTime | |
CamelAzureStorageDataLakeArchiveStatus (consumer) Constant: | Archive status of file. | ArchiveStatus | |
CamelAzureStorageDataLakeCacheControl (consumer) Constant: | Cache control specified for the file. | String | |
CamelAzureStorageDataLakeContentDisposition (consumer) Constant: | Content disposition specified for the file. | String | |
CamelAzureStorageDataLakeContentEncoding (consumer) Constant: | Content encoding specified for the file. | String | |
CamelAzureStorageDataLakeContentLanguage (consumer) Constant: | Content language specified for the file. | String | |
CamelAzureStorageDataLakeContentType (consumer) Constant: | Content type specified for the file. | String | |
CamelAzureStorageDataLakeCopyCompletionTime (consumer) Constant: | Conclusion time of the last attempted Copy Blob operation where this file was the destination file. | OffsetDateTime | |
CamelAzureStorageDataLakeCopyId (consumer) Constant: | String identifier for this copy operation. | String | |
CamelAzureStorageDataLakeCopyProgress (consumer) Constant: | Contains the number of bytes copied and the total bytes in the source in the last attempted Copy Blob operation where this file was the destination file. | String | |
CamelAzureStorageDataLakeCopySource (consumer) Constant: | URL up to 2 KB in length that specifies the source file or file used in the last attempted Copy Blob operation where this file was the destination file. | String | |
CamelAzureStorageDataLakeCopyStatus (consumer) Constant: | Status of the last copy operation performed on the file. Enum values: pending, success, aborted, failed | CopyStatusType |
CamelAzureStorageDataLakeCopyStatusDescription (consumer) Constant: | The description of the copy’s status. | String | |
CamelAzureStorageDataLakeCreationTime (consumer) Constant: | Creation time of the file. | OffsetDateTime | |
CamelAzureStorageDataLakeEncryptionKeySha256 (consumer) Constant: | The SHA-256 hash of the encryption key used to encrypt the file. | String | |
CamelAzureStorageDataLakeETag (consumer) Constant: | The ETag of the file. | String |
CamelAzureStorageDataLakeFileSize (consumer) Constant: | Size of the file. | Long | |
CamelAzureStorageDataLakeLastModified (consumer) Constant: | Datetime when the file was last modified. | OffsetDateTime | |
CamelAzureStorageDataLakeLeaseDuration (consumer) Constant: | Type of lease on the file. Enum values: infinite, fixed | LeaseDurationType |
CamelAzureStorageDataLakeLeaseState (consumer) Constant: | State of the lease on the file. Enum values: available, leased, expired, breaking, broken | LeaseStateType |
CamelAzureStorageDataLakeLeaseStatus (consumer) Constant: | Status of the lease on the file. Enum values: locked, unlocked | LeaseStatusType |
CamelAzureStorageDataLakeIncrementalCopy (producer) Constant: | Flag indicating if the file was incrementally copied. | Boolean | |
CamelAzureStorageDataLakeServerEncrypted (consumer) Constant: | Flag indicating if the file’s content is encrypted on the server. | Boolean | |
CamelAzureStorageDataLakeDownloadLinkExpiration (producer) Constant: | Set the Expiration time of the download link. | Long | |
CamelAzureStorageDataLakeDownloadLink (consumer) Constant: | The link that can be used to download the file from data lake. | String | |
CamelAzureStorageDataLakeFileOffset (producer) Constant: | The position where the data is to be appended. | Long | |
CamelAzureStorageDataLakeLeaseId (producer) Constant: | By setting lease id, requests will fail if the provided lease does not match the active lease on the file. | String | |
CamelAzureStorageDataLakePathHttpHeaders (producer) Constant: | Additional parameters for a set of operations. | PathHttpHeaders | |
CamelAzureStorageDataLakeRetainCommitedData (producer) Constant: | Determines whether or not uncommitted data is to be retained after the operation. | Boolean |
CamelAzureStorageDataLakeClose (producer) Constant: | Whether or not a file changed event raised indicates completion (true) or modification (false). | Boolean | |
CamelAzureStorageDataLakePosition (producer) Constant: | The length of the file after all data has been written. | Long | |
CamelAzureStorageDataLakeExpression (producer) Constant: | The query expression on the file. | String | |
CamelAzureStorageDataLakeInputSerialization (producer) Constant: | Defines the input serialization for a file query request, either FileQueryJsonSerialization or FileQueryDelimitedSerialization. | FileQuerySerialization |
CamelAzureStorageDataLakeOutputSerialization (producer) Constant: | Defines the output serialization for a file query request, either FileQueryJsonSerialization or FileQueryDelimitedSerialization. | FileQuerySerialization |
CamelAzureStorageDataLakeErrorConsumer (producer) Constant: | Sets error consumer for file query. | Consumer | |
CamelAzureStorageDataLakeProgressConsumer (producer) Constant: | Sets progress consumer for file query. | Consumer | |
CamelAzureStorageDataLakeQueryOptions (producer) Constant: | Optional parameters for File Query. | FileQueryOptions | |
CamelAzureStorageDataLakePermission (producer) Constant: | Sets the permission for file. | String | |
CamelAzureStorageDataLakeUmask (producer) Constant: | Sets the umask for file. | String | |
CamelAzureStorageDataLakeFileClient (producer) Constant: | Sets the file client to use. | DataLakeFileClient | |
CamelAzureStorageDataLakeFlush (producer) Constant: | Sets whether to flush on append. | Boolean |
Automatic detection of a service client
The component is capable of automatically detecting the presence of a DataLakeServiceClient bean in the registry. Hence, if your registry has only one instance of type DataLakeServiceClient, it will be automatically used as the default client, and you won't have to explicitly define it as a URI parameter.
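For example, a minimal sketch of building and binding such a client with the Azure SDK's DataLakeServiceClientBuilder; the account name, key, bean name and camelContext variable are placeholders:

import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;

// a minimal sketch, assuming the credentials are supplied externally
DataLakeServiceClient client = new DataLakeServiceClientBuilder()
        .credential(new StorageSharedKeyCredential("cameltesting", "yourAccountKey"))
        .endpoint("https://cameltesting.dfs.core.windows.net")
        .buildClient();

// with this single bean of type DataLakeServiceClient in the registry,
// endpoints can omit the serviceClient parameter entirely
camelContext.getRegistry().bind("serviceClient", client);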
Azure Storage DataLake Producer Operations
The various operations supported by Azure Storage DataLake are given below:
Operations on Service level
For these operations, the accountName option is required.
Operation | Description |
---|---|
listFileSystem | List all the file systems that are present in the given azure account. |
Operations on File system level
For these operations, the accountName and fileSystemName options are required.
Operation | Description |
---|---|
createFileSystem | Create a new file system within the storage account |
deleteFileSystem | Delete the specified file system within the storage account |
listPaths | Returns list of all the files within the given path in the given file system, with folder structure flattened |
Operations on Directory level
For these operations, the accountName, fileSystemName and directoryName options are required.
Operation | Description |
---|---|
createFile | Create a new file in the specified directory within the fileSystem |
deleteDirectory | Delete the specified directory within the file system |
Operations on file level
For these operations, the accountName, fileSystemName and fileName options are required.
Operation | Description |
---|---|
getFile | Get the contents of a file |
downloadToFile | Download the entire file from the file system into a path specified by fileDir. |
downloadLink | Generate a download link for the specified file using Shared Access Signature (SAS). The expiration time for the link can be specified; otherwise 1 hour is taken as default. |
deleteFile | Delete the specified file. |
appendToFile | Appends the data passed to the specified file in the file system. Flush command is required after append. |
flushToFile | Flushes the data already appended to the specified file. |
openQueryInputStream | Opens an InputStream based on the query passed to the endpoint. For this operation, you must have already registered for query acceleration on the azure portal. |
Refer to the examples section below for more details on how to use these operations.
Consumer Examples
To consume a file from the storage datalake into a file using the file component, you can do the following:
from("azure-storage-datalake":cameltesting/filesystem?fileName=test.txt&accountKey=yourAccountKey").
to("file:/filelocation");
You can also directly write to a file without using the file component. For this, you will need to specify the path in the fileDir option to save it to your machine.
from("azure-storage-datalake":cameltesting/filesystem?fileName=test.txt&accountKey=yourAccountKey&fileDir=/test/directory").
to("mock:results");
This component also supports a batch consumer. So, you can consume multiple files from a file system by specifying the path from which you want to consume the files.
from("azure-storage-datalake":cameltesting/filesystem?accountKey=yourAccountKey&fileDir=/test/directory&path=abc/test").
to("mock:results");
Producer Examples
- listFileSystem
from("direct:start")
.process(exchange -> {
//required headers can be added here
exchange.getIn().setHeader(DataLakeConstants.LIST_FILESYSTEMS_OPTIONS, new ListFileSystemsOptions().setMaxResultsPerPage(10));
})
.to("azure-storage-datalake:cameltesting?operation=listFileSystem&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- createFileSystem
from("direct:start")
.process(exchange -> {
exchange.getIn().setHeader(DataLakeConstants.FILESYSTEM_NAME, "test1");
})
.to("azure-storage-datalake:cameltesting?operation=createFileSystem&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- deleteFileSystem
from("direct:start")
.process(exchange -> {
exchange.getIn().setHeader(DataLakeConstants.FILESYSTEM_NAME, "test1");
})
.to("azure-storage-datalake:cameltesting?operation=deleteFileSystem&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- listPaths
from("direct:start")
.process(exchange -> {
exchange.getIn().setHeader(DataLakeConstants.LIST_PATH_OPTIONS, new ListPathsOptions().setPath("/main"));
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=listPaths&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- getFile
This can be done in two ways. We can either set an OutputStream in the exchange body:
from("direct:start")
.process(exchange -> {
    // set an OutputStream where the file data should be written
exchange.getIn().setBody(outputStream);
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=getFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.to("mock:results");
Or, if the body is not set, the operation will return an InputStream, given that you have already registered for query acceleration in the azure portal.
from("direct:start")
.to("azure-storage-datalake:cameltesting/filesystem?operation=getFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.process(exchange -> {
InputStream inputStream = exchange.getMessage().getBody(InputStream.class);
    System.out.println(IOUtils.toString(inputStream, StandardCharsets.UTF_8.name()));
})
.to("mock:results");
- deleteFile
from("direct:start")
.to("azure-storage-datalake:cameltesting/filesystem?operation=deleteFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- downloadToFile
from("direct:start")
.to("azure-storage-datalake:cameltesting/filesystem?operation=downloadToFile&fileName=test.txt&fileDir=/test/mydir&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- downloadLink
from("direct:start")
.to("azure-storage-datalake:cameltesting/filesystem?operation=downloadLink&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.process(exchange -> {
String link = exchange.getMessage().getBody(String.class);
System.out.println(link);
})
.to("mock:results");
- appendToFile
from("direct:start")
.process(exchange -> {
final String data = "test data";
final InputStream inputStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
exchange.getIn().setBody(inputStream);
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=appendToFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- flushToFile
from("direct:start")
.process(exchange -> {
exchange.getIn().setHeader(DataLakeConstants.POSITION, 0);
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=flushToFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- openQueryInputStream
For this operation, you should have already registered for query acceleration on the azure portal:
from("direct:start")
.process(exchange -> {
exchange.getIn().setHeader(DataLakeConstants.QUERY_OPTIONS, new FileQueryOptions("SELECT * from BlobStorage"));
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=openQueryInputStream&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- upload
from("direct:start")
.process(exchange -> {
final String data = "test data";
final InputStream inputStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
exchange.getIn().setBody(inputStream);
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=upload&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- uploadFromFile
from("direct:start")
.process(exchange -> {
exchange.getIn().setHeader(DataLakeConstants.PATH, "test/file.txt");
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=uploadFromFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- createFile
from("direct:start")
.process(exchange -> {
exchange.getIn().setHeader(DataLakeConstants.DIRECTORY_NAME, "test/file/");
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=createFile&fileName=test.txt&dataLakeServiceClient=#serviceClient")
.to("mock:results");
- deleteDirectory
from("direct:start")
.process(exchange -> {
exchange.getIn().setHeader(DataLakeConstants.DIRECTORY_NAME, "test/file/");
})
.to("azure-storage-datalake:cameltesting/filesystem?operation=deleteDirectory&dataLakeServiceClient=#serviceClient")
.to("mock:results");
Testing
Please run all the unit tests and integration tests while making changes to the component, as changes or version upgrades can break things. To run all the tests in the component, you will need to obtain the azure accountName and accessKey. Once you have them, you can run the full test suite in this component's directory with the following maven command:
mvn verify -Dazure.storage.account.name=<accountName> -Dazure.storage.account.key=<accessKey>
You can also skip the integration tests and run only the basic unit tests by using the command:
mvn test
Spring Boot Auto-Configuration
When using azure-storage-datalake with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency>
<groupId>org.apache.camel.springboot</groupId>
<artifactId>camel-azure-storage-datalake-starter</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
The component supports 39 options, which are listed below.