The Conduit Kafka plugin provides both a source and a destination Kafka connector for Conduit.
How it works
Under the hood, the plugin uses Segment's Go client for Apache Kafka. It was chosen because it has no CGo dependency, making it possible to build the plugin for a wider range of platforms and architectures. It also supports contexts, which we will likely use in the future.
A Kafka source connector is represented by a single consumer in a Kafka consumer group. By virtue of that, a source's logical position is the respective consumer's offset in Kafka. Internally, though, we're not saving the offset as the position: instead, we're saving the consumer group ID, since that's all Kafka needs to find the offsets for our consumer.
A source is associated with a consumer group ID the first time its Read() method is called.
The destination connector uses synchronous writes to Kafka. Proper buffering support, which will enable asynchronous (and more performant) writes, is planned.
There's no global plugin configuration. Each connector instance is configured separately.
| name | part of | description | required | default value |
|------|---------|-------------|----------|---------------|
| `servers` | destination, source | A list of bootstrap servers to which the plugin will connect. | true | |
| `topic` | destination, source | The topic to read records from or write records to. | true | |
| `acks` | destination | The number of acknowledgments required before considering a record written to Kafka. Valid values: `0`, `1`, `all`. | false | |
| `deliveryTimeout` | destination | Message delivery timeout. | false | |
| `readFromBeginning` | source | Whether or not to read a topic from the beginning (i.e. existing messages, not only new messages). | false | |
The planned work is tracked through GitHub issues.