The Conduit Kafka Connect Wrapper makes it possible to use Kafka Connect plugins with Conduit. Whether the wrapper acts as a source or a destination depends on the Kafka Connect connector being used: if the Kafka Connect connector is a source connector, it can only be used as a source connector with this wrapper in Conduit.
The following are required:
- JDK 11
- Currently, only Unix-like OSes are supported.
The general process follows these steps:
- Build the Conduit Kafka Connect Wrapper.
- Download the Kafka Connect connector JAR you want to use, plus all of its dependencies.
- Drop the newly downloaded Kafka Connect connector JARs into the wrapper's libs directory.
- Create a connector in Conduit with your Kafka Connect configuration.
Building and using the wrapper
Start by cloning the conduit-kafka-connect-wrapper repository:
git clone [email protected]:ConduitIO/conduit-kafka-connect-wrapper.git
Then run scripts/dist.sh in the newly cloned repository to build an executable. scripts/dist.sh will create a directory called dist with the following contents:
- A script, conduit-kafka-connect-wrapper, which starts a connector instance.
- The connector JAR itself.
- A libs directory. This is where you put the Kafka Connect connector JARs and their dependencies (if any).
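For illustration, the resulting dist directory looks roughly like this (the JAR's exact file name depends on the build):

```
dist/
├── conduit-kafka-connect-wrapper       (script that starts a connector instance)
├── conduit-kafka-connect-wrapper.jar   (the connector JAR; name is illustrative)
└── libs/                               (Kafka Connect connector JARs and dependencies go here)
```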
When creating a Conduit connector, the plugin path you need to use is the path to the conduit-kafka-connect-wrapper script in dist. Here's a full working example of a Conduit connector configuration.
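The sketch below shows what such a configuration might look like when sent to Conduit's API; the plugin path, pipeline ID, and the Aiven JDBC sink connector class and settings are illustrative and depend on your setup:

```json
{
  "type": "TYPE_DESTINATION",
  "plugin": "/path/to/dist/conduit-kafka-connect-wrapper",
  "pipeline_id": "...",
  "config": {
    "name": "my-pg-destination",
    "settings": {
      "wrapper.connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
      "connection.url": "jdbc:postgresql://localhost/test-db",
      "connection.user": "user",
      "connection.password": "password12345"
    }
  }
}
```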
Note that wrapper.connector.class should be a class which is present on the classpath, i.e. in one of the JARs in the libs directory. For more information, check the Configuration section.
Loading Kafka Connect connectors
All Kafka Connect connectors, together with their dependencies, are loaded from a libs directory, which is expected to be in the same directory as the Conduit connector executable itself. For example, if the connector executable is at /path/to/conduit-kafka-connect-wrapper, then the dependencies are expected to be in /path/to/libs. The plugin will be able to find the dependencies as soon as they are put into libs. Please note that a JDBC connector generally requires a database-specific driver to work (for example, PostgreSQL requires the PostgreSQL JDBC driver).
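For example, a libs directory prepared for a JDBC connector writing to PostgreSQL might contain something like this (JAR names and versions are illustrative):

```
libs/
├── jdbc-connector-for-apache-kafka-6.7.0.jar
└── postgresql-42.5.4.jar
```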
This plugin's configuration consists of the configuration of the requested Kafka connector, plus:
|Name|Description|Required|Default|
|---|---|---|---|
|wrapper.connector.class|The class of the requested connector. It needs to be found on the classpath, i.e. in a JAR in the libs directory.|yes|none|
|wrapper.schema|The schema of the records which will be written to a destination connector.|the plugin doesn't require it, but the underlying Kafka connector may|none|
|wrapper.schema.autogenerate.enabled|Automatically generate schemas (destination connector). Cannot be true if a schema is set.|no|false|
|wrapper.schema.autogenerate.name|Name of automatically generated schema.|yes, if schema auto-generation is turned on|none|
|wrapper.schema.autogenerate.overrides|A (partial) schema which overrides types in the auto-generated schema.|no|none|
Here's a full example for a new Conduit destination connector, backed by a JDBC Kafka Connect sink connector.
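The settings sketched below assume the Aiven JDBC sink connector; the class name, connection details, and the remaining JDBC parameters are illustrative:

```json
{
  "wrapper.connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
  "wrapper.schema.autogenerate.enabled": "true",
  "wrapper.schema.autogenerate.name": "customers",
  "connection.url": "jdbc:postgresql://localhost/test-db",
  "connection.user": "user",
  "connection.password": "password12345",
  "auto.create": "true",
  "auto.evolve": "true",
  "batch.size": "10"
}
```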
All the configuration parameters prefixed with wrapper. belong to the Kafka Connect wrapper and are used to control its behavior. All other configuration parameters are forwarded to the underlying Kafka connector as-is. In this example, wrapper.connector.class tells the wrapper to instantiate a JDBC sink connector. connection.url and all the other parameters are specific to the JDBC sink connector.
If wrapper.schema.autogenerate.enabled is set to true, the plugin will try to automatically generate Kafka connector schemas for a destination. If schema auto-generation is enabled, then a schema name must be provided (through the wrapper.schema.autogenerate.name parameter).
Optionally, it's possible to override types for individual fields. This is useful in cases where the plugin's inferred type for a field is not suitable. To override types for individual fields, specify a schema through wrapper.schema.autogenerate.overrides. The specified schema can, of course, be partial.
Here's an example:
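The sketch below assumes the records have a field named joined whose auto-generated type should be replaced with an optional int64; the exact JSON encoding of the override is illustrative:

```json
{
  "wrapper.schema.autogenerate.overrides": "{\"type\": \"struct\", \"fields\": [{\"field\": \"joined\", \"type\": \"int64\", \"optional\": true}]}"
}
```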
In this example, we specify a partial schema in which a single field, joined, is defined. The schema generator will skip its own type inference for this field and use the provided definition instead.
Schema auto-generation works differently for records with structured data and records with raw data.
- Records with structured data: A record with structured data contains a google.protobuf.Struct. The mappings are as follows:

|Struct value|Kafka Connect schema|
|---|---|
|number (only double is supported)|OPTIONAL_FLOAT64_SCHEMA|
|ListValue|ARRAY, where the element type corresponds to the element type of the protobuf ListValue|
- Records with raw data, in JSON format: The mappings are as follows (see the sketch after this list):

|JSON value|Kafka Connect schema|
|---|---|
|NUMBER|the narrowest integer or float schema that fits the number|

- Records with raw data and no schema at all: not supported (yet).
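As an illustration of the raw-data mapping, a record whose raw payload is the JSON document below would get the narrowest integer schema for id (123 fits into a byte, so an INT8-style schema) and a float schema for balance; the field names and the exact schema constants chosen here are illustrative:

```json
{
  "id": 123,
  "balance": 33.5
}
```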
The complete server-side code for this plugin is not committed to the repo. Rather, it's generated from a proto file when the project is compiled. IDEs may not automatically add the generated sources to the classpath. If that's the case, you need to:
- Run mvn clean compile (so that the needed code is generated).
- Change the project settings in your IDE to include the generated sources. In IntelliJ, for example, you do that by going to File > Project Structure > Project Settings > Modules. Then, right-click on target/generated-sources and select "Sources".
Any issues, questions, or comments about this connector can be directed to the conduit-kafka-connect-wrapper repository.