Kamelets user guide
Technically speaking, a Kamelet is a resource that can be installed on any Kubernetes cluster. The following is an example of a Kamelet that we’ll use to discuss its various parts:
apiVersion: camel.apache.org/v1
kind: Kamelet
metadata:
  name: telegram-text-source (1)
  annotations: (2)
    camel.apache.org/kamelet.icon: "data:image/svg+xml;base64,PD94bW..."
  labels: (3)
    camel.apache.org/kamelet.type: "source"
spec:
  definition: (4)
    title: "Telegram Text Source"
    description: |-
      Receive all text messages that people send to your telegram bot.
      # Instructions
      Description can include Markdown and guide the final user to configure the Kamelet parameters.
    required:
      - botToken
    properties:
      botToken:
        title: Token
        description: The token to access your bot on Telegram
        type: string
        x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:password
  dataTypes: (5)
    out:
      default: text
      types:
        text:
          mediaType: text/plain
          # schema:
  template: (6)
    from:
      uri: telegram:bots
      parameters:
        authorizationToken: "#property:botToken"
      steps:
        - convert-body-to:
            type: "java.lang.String"
            type-class: "java.lang.String"
            charset: "UTF8"
        - filter:
            simple: "${body} != null"
        - log: "${body}"
        - to: "kamelet:sink"
1 | The Kamelet ID, to be used in integrations that want to leverage the Kamelet |
2 | Annotations such as icon provide additional display features to the Kamelet |
3 | Labels allow users to query Kamelets e.g. by kind ("source" vs. "sink") |
4 | Description of the Kamelet and its parameters in JSON-schema specification format |
5 | The data type that the Kamelet produces. Data type specifications contain the media type of the output and also may include a schema. |
6 | The route template defining the behavior of the Kamelet |
At a high level (more details are provided later), a Kamelet resource describes:
- A metadata section containing the ID (metadata → name) of the Kamelet and other information, such as the type of Kamelet ("source" or "sink")
- A JSON-schema specification (definition) containing a set of parameters that you can use to configure the Kamelet
- An optional section describing the input and output expected by the Kamelet (dataTypes)
- A Camel route template in YAML DSL containing the implementation of the Kamelet (template)
Once installed on a Kubernetes namespace, the Kamelet can be used by any integration in that namespace.
Kamelets can be installed on a Kubernetes namespace with a simple command:
kubectl apply -f yourkamelet.kamelet.yaml
Kamelets are standard YAML files, but their common extension is .kamelet.yaml
to help IDEs to recognize them and provide auto-completion (in the future).
Using Kamelets in Integrations
Kamelets can be used in integrations as if they were standard Camel components. For example, suppose that you’ve created the telegram-text-source Kamelet in the default namespace on Kubernetes; you can then write the following integration to use the Kamelet:
from('kamelet:telegram-text-source?botToken=XXXXYYYY')
.to('log:INFO')
URI properties ("botToken") match the corresponding parameters in the Kamelet definition |
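To try it out, you can save the route above in a file and run it with the Camel K CLI (the file name here is just an example):
  kamel run example.groovy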
Kamelets can also be used multiple times in the same route definition. This is usually the case with sink Kamelets.
Suppose that you’ve defined a Kamelet named "my-company-log-sink" in your Kubernetes namespace, then you can write a route like this:
from('kamelet:telegram-text-source?botToken=XXXXYYYY')
.to("kamelet:my-company-log-sink?bucket=general")
.filter().simple('${body} contains "Camel"')
.to("kamelet:my-company-log-sink?bucket=special")
The "my-company-log-sink" will obviously define what it means to write a log in the enterprise system and what is concretely a "bucket".
Configuration
When using a Kamelet, the instance parameters (e.g. "botToken", "bucket") can be passed explicitly in the URI, or you can use properties. Properties can also be loaded implicitly by the operator from Kubernetes secrets (see below).
1. URI based configuration
You can configure the Kamelet by passing the configuration parameters directly in the URI, as in:
from("kamelet:telegram-text-source?botToken=the-token-value")
// ...
In this case, "the-token-value" is passed explicitly in the URI (you can also pass a custom property placeholder as value).
2. Property based configuration
An alternative way to configure the Kamelet is to provide configuration parameters as properties of the integration.
Taking for example a different version of the integration above:
from('kamelet:telegram-text-source')
.to("kamelet:my-company-log-sink")
.filter().simple('${body} contains "Camel"')
.to("kamelet:my-company-log-sink/mynamedconfig")
The integration above does not contain URI query parameters, and the last URI ("kamelet:my-company-log-sink/mynamedconfig") contains a path parameter with the value "mynamedconfig" |
The integration above needs some configuration in order to run properly. The configuration can be provided in a property file:
# Configuration for the Telegram source Kamelet
camel.kamelet.telegram-text-source.botToken=the-token-value
# General configuration for the Company Log Kamelet
camel.kamelet.my-company-log-sink.bucket=general
# camel.kamelet.my-company-log-sink.xxx=yyy
# Specific configuration for the Company Log Kamelet corresponding to the named configuration "mynamedconfig"
camel.kamelet.my-company-log-sink.mynamedconfig.bucket=special
# When using "kamelet:my-company-log-sink/mynamedconfig", the bucket will be "special", not "general"
Then the integration can be run with the following command:
kamel run example.groovy --property file:example.properties
3. Implicit configuration using secrets
Property based configuration can also be applied implicitly by creating secrets in the namespace; the operator uses them to determine the Kamelet configuration.
To use implicit configuration via secret, we first need to create a configuration file holding only the properties of a named configuration.
# Only configuration related to the "mynamedconfig" named config
camel.kamelet.my-company-log-sink.mynamedconfig.bucket=special
# camel.kamelet.my-company-log-sink.mynamedconfig.xxx=yyy
We can create a secret from the file and label it so that it will be picked up automatically by the operator:
# Create the secret from the property file
kubectl create secret generic my-company-log-sink.mynamedconfig --from-file=mynamedconfig.properties
# Bind it to the named configuration "mynamedconfig" of the "my-company-log-sink" Kamelet
kubectl label secret my-company-log-sink.mynamedconfig camel.apache.org/kamelet=my-company-log-sink camel.apache.org/kamelet.configuration=mynamedconfig
You can now write an integration that uses the Kamelet with the named configuration:
from('timer:tick')
.setBody().constant('Hello')
.to('kamelet:my-company-log-sink/mynamedconfig')
You can run this integration without specifying other parameters: the Kamelet endpoint will be implicitly configured by the Camel K operator, which automatically mounts the secret into the integration Pod.
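For example, assuming the integration above is saved in a file (the file name is illustrative), it can be run without any --property flag, since the labeled secret is discovered automatically:

kamel run my-integration.groovy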
Binding Kamelets
In some contexts (for example "serverless") users often want to leverage the power of Apache Camel to connect to various sources/sinks, without doing additional processing (such as transformations or other enterprise integration patterns).
A common use case is that of Knative Sources, for which the Apache Camel developers maintain the Knative CamelSources. Kamelets represent an evolution of the model proposed in CamelSources, but they allow using the same declarative style of binding, via a resource named Pipe.
Binding to a Knative Destination
A Pipe allows you to declaratively move data from a system described by a Kamelet towards a Knative destination (or other kinds of destinations, in the future), or from a Knative channel/broker to another external system described by a Kamelet.
Here’s an example of a binding:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-text-source-to-channel
spec:
  source: (1)
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink: (2)
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: messages
1 | Reference to the source that provides data |
2 | Reference to the sink where data should be sent to |
This binding takes the telegram-text-source
Kamelet, configures it using specific properties ("botToken") and makes sure that messages produced by the Kamelet are forwarded to the Knative InMemoryChannel named "messages".
Note that source and sink are specified declaratively as standard Kubernetes object references.
The example shows how we can reference the "telegram-text-source" resource in a Pipe. It’s contained in the source section because it’s a Kamelet of type "source". A Kamelet of type "sink", by contrast, can only be used in the sink section of a Pipe.
Under the covers, a Pipe creates an Integration resource that implements the binding, but this is transparent to the end user.
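For instance, assuming the binding above is saved in a file (the file name is an assumption), you can apply it and then inspect both the Pipe and the Integration generated from it, which is expected to share the Pipe name:

kubectl apply -f telegram-text-source-to-channel.yaml
kubectl get pipe telegram-text-source-to-channel
kubectl get integration telegram-text-source-to-channel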
Binding to a Kafka Topic
The example seen in the previous paragraph can also be configured to push data to a Strimzi Kafka topic (Kamelets can also be configured to pull data from topics).
To do so, you need to:
- Install Strimzi on your cluster
- Create a Strimzi Kafka cluster using a plain listener and no authentication
- Create a Strimzi KafkaTopic named my-topic (a minimal example is shown below)
Refer to the Strimzi documentation for instructions on how to do that.
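As a reference for the last step, here is a minimal KafkaTopic sketch; the cluster name "my-cluster" is an assumption and the apiVersion may differ depending on the Strimzi version installed:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    # Must match the name of your Strimzi Kafka cluster
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1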
The following binding can be created to push data into the my-topic
topic:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-text-source-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink:
    ref: (1)
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
1 | Kubernetes reference to a Strimzi KafkaTopic |
After creating it, messages will flow from Telegram to Kafka.
Binding to an explicit URI
An alternative way to use a Pipe is to configure the source/sink to be an explicit Camel URI. For example, the following binding is allowed:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-text-source-to-channel
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink:
    uri: https://mycompany.com/the-service (1)
1 | Pipe with an explicit URI |
This Pipe explicitly defines a URI where data is going to be pushed.
The uri option is also conventionally used in Knative to specify a non-Kubernetes destination. To comply with the Knative specifications, when an "http" or "https" URI is used, Camel will send CloudEvents to the destination. |
Binding with data types
When referencing Kamelets in a binding, users may choose one of the supported input/output data types provided by the Kamelet. The supported data types are declared on the Kamelet itself and give additional information about the header names used, the content type and the content schema.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: my-sample-source-to-log
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: my-sample-source
    data-types: (1)
      out:
        format: text-plain (2)
  sink:
    uri: "log:info"
1 | Specify the output data type on the referenced Kamelet source. |
2 | Select text-plain as an output data type of the my-sample-source Kamelet. |
The very same Kamelet my-sample-source may also provide a CloudEvents specific data type as an output, which fits perfectly when binding to a Knative broker.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: my-sample-source-to-knative
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: my-sample-source
    data-types:
      out:
        format: application-cloud-events (1)
  sink:
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: default
1 | Select application-cloud-events as an output data type of the my-sample-source Kamelet. |
Information about the supported data types can be found on the Kamelet itself.
apiVersion: camel.apache.org/v1
kind: Kamelet
metadata:
  name: my-sample-source
  labels:
    camel.apache.org/kamelet.type: "source"
spec:
  definition:
    # ...
  dataTypes:
    out: (1)
      default: text-plain (2)
      types: (3)
        text-plain:
          description: Output type as plain text.
          mediaType: text/plain
        application-cloud-events:
          description: CloudEvents specific representation of the Kamelet output.
          mediaType: application/cloudevents+json
          schema: (4)
            # ...
          dependencies: (5)
            - "camel:cloudevents"
  template:
    from:
      uri: ...
      steps:
        - to: "kamelet:sink"
1 | Declared output data types of this Kamelet source |
2 | The output data type used by default |
3 | List of supported output types |
4 | Optional JSON schema describing the application/cloudevents+json data type |
5 | Optional list of additional dependencies that are required by the data type. |
This way users may choose the best Kamelet data type for a specific use case when referencing Kamelets in a binding.
Error Handling
You can configure an error handler in order to specify what to do when some event ends up with failure. See Pipes Error Handler User Guide for more detail.
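As a minimal sketch, a log error handler can be attached to a Pipe like this (the retry values are illustrative; refer to the Pipes Error Handler User Guide for the full set of options):

spec:
  source:
    # ...
  sink:
    # ...
  errorHandler:
    log:
      parameters:
        maximumRedeliveries: 3
        redeliveryDelay: 2000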
Trait via annotations
You can easily tune your Pipe with trait configuration by adding .metadata.annotations. Let’s have a look at the following example:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timer-2-log-annotation
  annotations: (1)
    trait.camel.apache.org/logging.level: DEBUG
    trait.camel.apache.org/logging.color: "false"
spec:
  source:
    uri: timer:foo
  sink:
    uri: log:bar
1 | Include .metadata.annotations to specify the list of traits we want to configure |
In this example, we’ve set the logging trait to specify the configuration we want to apply. You can do the same with any of the available traits, just by setting trait.camel.apache.org/trait-name.trait-property with the expected value.
If you need to specify an array of values, the syntax will be trait.camel.apache.org/trait.conf: "[\"opt1\", \"opt2\", …]" |
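For example, here is a sketch of an array-valued trait property configured via annotation; the camel trait properties option takes a list of strings, and the property values below are illustrative:

apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timer-2-log-annotation-array
  annotations:
    # Array values are passed as a JSON array encoded in a string
    trait.camel.apache.org/camel.properties: "[\"my.prop1=hello\", \"my.prop2=world\"]"
spec:
  source:
    uri: timer:foo
  sink:
    uri: log:bar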
Troubleshooting
A Kamelet is translated into a Route used by the Integration. In order to troubleshoot any possible issue, you can have a look at the dedicated troubleshooting section.
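As a starting point, you can inspect the integration logs and the generated route definition (the integration name below is illustrative):

kamel logs my-integration
kubectl get integration my-integration -o yaml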
Kamelet Specification
We’re now going to describe the various parts of the Kamelet in more detail.
Metadata
The metadata section contains important information related to the Kamelet as a Kubernetes resource.
name | Description | Type | Example |
---|---|---|---|
name | ID of the Kamelet, used to refer to the Kamelet in external routes | string | E.g. telegram-text-source |
namespace | The Kubernetes namespace where the resource is installed | string | E.g. default |
The following annotations and labels are also defined on the resource:
name | Description | Type | Example |
---|---|---|---|
camel.apache.org/kamelet.icon | An optional icon for the Kamelet in URI data format | string | E.g. data:image/svg+xml;base64,PD94bW... |
trait.camel.apache.org/trait-name.trait-property | An optional configuration setting for a trait | string | E.g. trait.camel.apache.org/logging.level: DEBUG |
name | Description | Type | Example |
---|---|---|---|
camel.apache.org/kamelet.type | Indicates if the Kamelet can be used as source, action or sink. | enum: source, action, sink | E.g. camel.apache.org/kamelet.type: "source" |
Definition
The definition part of a Kamelet contains a valid JSON-schema document describing general information about the Kamelet and all defined parameters.
name | Description | Type | Example |
---|---|---|---|
title | Display name of the Kamelet | string | E.g. "Telegram Text Source" |
description | A markdown description of the Kamelet | string | E.g. "Receive all text messages that people send to your telegram bot." |
required | List of required parameters (complies with JSON-schema spec) | array: string | |
properties | Map of properties that can be configured on the Kamelet | map: string → property schema | |
Each property defined in the Kamelet has its own schema (normally a flat schema, containing only 1 level of properties). The following table lists some common fields allowed for each property.
name | Description | Type | Example |
---|---|---|---|
title | Display name of the property | string | E.g. "Token" |
description | Simple text description of the property | string | E.g. "The token to access your bot on Telegram" |
type | JSON-schema type of the property | string | E.g. string, number |
x-descriptors | Specific aids for the visual tools | array: string | E.g. urn:alm:descriptor:com.tectonic.ui:password |
Data shapes
Kamelets are designed to be plugged as sources or sinks into more general routes, so they can accept data as input and/or produce their own data. To help visual tools and applications understand how to interact with the Kamelet, the specification of a Kamelet also includes information about the type of data that it manages.
# ...
spec:
  # ...
  dataTypes:
    out: (1)
      default: json
      types:
        json: (2)
          mediaType: application/json
          schema: (3)
            properties:
              # ...
1 | Defines the type of the output |
2 | Name of the data type |
3 | Optional JSON-schema definition of the output |
Data shape can be indicated for the following channels:
- in: the input of the Kamelet, in case the Kamelet is of type sink
- out: the output of the Kamelet, for both source and sink Kamelets
- error: an optional error data shape, for both source and sink Kamelets
Data shapes contain the following information:
name | Description | Type | Example |
---|---|---|---|
scheme | A specific component scheme that is used to identify the data shape | string | E.g. aws2-s3 |
format | The data shape name used to identify and reference the data type in a Pipe when choosing from multiple data type options. | string | E.g. json |
mediaType | The media type of the data | string | E.g. application/json |
headers | Optional map of message headers that get set with the data shape, where the map keys represent the header name and the value defines the header type information. | map | |
dependencies | Optional list of additional dependencies that are required for this data type (e.g. JSON marshal/unmarshal libraries) | array: string | E.g. camel:jackson |
schema | An optional JSON-schema definition for the data | object | |
Flow
Each Kamelet contains a YAML-based Camel DSL that provides the actual implementation of the connector.
For example:
spec:
  # ...
  template:
    from:
      uri: telegram:bots
      parameters:
        authorizationToken: "#property:botToken"
      steps:
        - convert-body-to:
            type: "java.lang.String"
            type-class: "java.lang.String"
            charset: "UTF8"
        - filter:
            simple: "${body} != null"
        - log: "${body}"
        - to: "kamelet:sink"
Source and sink flows connect to the outer route via the kamelet:source or kamelet:sink special endpoints:
- A source Kamelet must contain a call to kamelet:sink
- A sink Kamelet must start from kamelet:source
The kamelet:source and kamelet:sink endpoints are special endpoints that are only available in Kamelet route templates and will be replaced with actual references at runtime. |
Kamelets contain a single route template written in YAML DSL, as in the previous example.
Kamelets, however, can also contain additional sources in the spec → sources field. Those sources can be of any kind (not necessarily route templates) and will be added once to all the integrations where the Kamelet is used. Their main role is to do advanced configuration of the integration context where the Kamelet is used, such as registering beans in the registry or adding customizers.
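As a hypothetical sketch of that field, the following Kamelet fragment adds an extra YAML source that registers a bean in the registry; the source name, bean name and class are assumptions made for this example:

spec:
  definition:
    # ...
  sources:
    - name: extra-beans.yaml
      content: |-
        # Registers a bean that the route template can look up by name
        - beans:
            - name: myRegionResolver
              type: "com.mycompany.RegionResolver"
  template:
    # ...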
KEDA enabled Kamelets
Some Kamelets are enhanced with KEDA metadata to allow users to automatically configure autoscalers on them. Kamelets with KEDA features can be distinguished by the presence of the annotation camel.apache.org/keda.type
, which is set to the name of a specific KEDA autoscaler.
A KEDA enabled Kamelet can be used in the same way as any other Kamelet, in a binding or in an integration. KEDA autoscalers are not enabled by default: they need to be manually enabled by the user via the keda
trait.
In a Pipe, the KEDA trait can be enabled using annotations:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: my-keda-binding
  annotations:
    trait.camel.apache.org/keda.enabled: "true"
spec:
  source:
    # ...
  sink:
    # ...
In an integration, it can be enabled using kamel run
args, for example:
kamel run my-keda-integration.yaml -t keda.enabled=true
Make sure that the my-keda-integration uses at least one KEDA enabled Kamelet; otherwise enabling KEDA (without other options) will have no effect. |
For information on how to create KEDA enabled Kamelets, see the KEDA section in the development guide.