Kafka Connector

General

The Conduit Kafka plugin provides both a source and a destination Kafka connector for Conduit.

How it works

Under the hood, the plugin uses Segment's Go Client for Apache Kafka™. It was chosen because it has no CGo dependency, making it possible to build the plugin for a wider range of platforms and architectures. It also supports contexts, which we will likely use in the future.
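
To illustrate what context support buys us, here is a minimal sketch of reading a message with the Segment client; the broker address and topic name are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Create a reader for a single topic; broker address and
	// topic name are placeholders.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "example",
	})
	defer r.Close()

	// The client is context-aware: this read gives up after
	// 5 seconds instead of blocking indefinitely.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	msg, err := r.ReadMessage(ctx)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("offset %d: %s\n", msg.Offset, msg.Value)
}
```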

Source

A Kafka source connector is represented by a single consumer in a Kafka consumer group. By virtue of that, a source's logical position is the respective consumer's offset in Kafka. Internally, though, we're not saving the offset as the position: instead, we're saving the consumer group ID, since that's all Kafka needs to find the offsets for our consumer.

A source is associated with a consumer group ID the first time its Read() method is called.
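
The following sketch shows why the group ID alone is a sufficient position: restarting a consumer-group reader with the same ID resumes from the offsets committed under that group. The broker, topic, and group ID are placeholders, not the connector's actual values:

```go
package main

import (
	"context"
	"fmt"

	"github.com/segmentio/kafka-go"
)

func main() {
	// The group ID is the only state that needs to be persisted:
	// restarting with the same ID resumes from the committed offsets.
	groupID := "conduit-source-group"

	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: groupID,
		Topic:   "example",
	})
	defer r.Close()

	// With a GroupID set, ReadMessage commits the offset to Kafka
	// after the message is read, so the consumer group tracks our
	// progress for us.
	msg, err := r.ReadMessage(context.Background())
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("group %s read offset %d: %s\n", groupID, msg.Offset, msg.Value)
}
```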

Destination

The destination connector uses synchronous writes to Kafka. Proper buffering support, which will enable asynchronous (and more efficient) writes, is planned.
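
A synchronous write with the Segment client looks roughly like the sketch below: WriteMessages blocks until the broker acknowledges the message or the timeout expires. The broker address and topic are placeholders, and the timeout value mirrors the deliveryTimeout setting described under Configuration:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Broker address and topic are placeholders.
	w := &kafka.Writer{
		Addr:         kafka.TCP("localhost:9092"),
		Topic:        "example",
		RequiredAcks: kafka.RequireAll, // wait for all in-sync replicas
		WriteTimeout: 10 * time.Second, // roughly the deliveryTimeout setting
	}
	defer w.Close()

	// WriteMessages blocks until the write is acknowledged
	// (or the context/timeout expires), i.e. a synchronous write.
	err := w.WriteMessages(context.Background(),
		kafka.Message{Key: []byte("key"), Value: []byte("value")},
	)
	if err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("message acknowledged")
}
```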

Configuration

There is no global plugin configuration; each connector instance is configured separately.

| name | part of | description | required | default value |
| ---- | ------- | ----------- | -------- | ------------- |
| servers | destination, source | A list of bootstrap servers to which the plugin will connect. | true | |
| topic | destination, source | The topic to which records will be written (destination) or from which they will be read (source). | true | |
| acks | destination | The number of acknowledgments required before considering a record written to Kafka. Valid values: 0, 1, all. | false | all |
| deliveryTimeout | destination | Message delivery timeout. | false | 10s |
| readFromBeginning | source | Whether to read the topic from the beginning (i.e. including existing messages) or only new messages. | false | false |
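
For illustration, here is how a connector might validate such a key-value settings map. parseConfig is a hypothetical helper written for this sketch, not the plugin's actual code, and the values in main are placeholders:

```go
package main

import (
	"fmt"
	"strings"
	"time"

	"github.com/segmentio/kafka-go"
)

// parseConfig is a hypothetical helper showing how the settings in
// the table above could be validated; it is not the plugin's code.
func parseConfig(settings map[string]string) (brokers []string, acks kafka.RequiredAcks, timeout time.Duration, err error) {
	servers, ok := settings["servers"]
	if !ok {
		return nil, 0, 0, fmt.Errorf("servers is required")
	}
	brokers = strings.Split(servers, ",")

	// acks is optional and defaults to "all".
	switch settings["acks"] {
	case "", "all":
		acks = kafka.RequireAll
	case "1":
		acks = kafka.RequireOne
	case "0":
		acks = kafka.RequireNone
	default:
		return nil, 0, 0, fmt.Errorf("invalid acks value %q", settings["acks"])
	}

	// deliveryTimeout is optional and defaults to 10s.
	timeout = 10 * time.Second
	if v := settings["deliveryTimeout"]; v != "" {
		if timeout, err = time.ParseDuration(v); err != nil {
			return nil, 0, 0, fmt.Errorf("invalid deliveryTimeout: %w", err)
		}
	}
	return brokers, acks, timeout, nil
}

func main() {
	brokers, acks, timeout, err := parseConfig(map[string]string{
		"servers":         "localhost:9092,localhost:9093",
		"topic":           "example",
		"deliveryTimeout": "5s",
	})
	fmt.Println(brokers, acks, timeout, err)
}
```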

Planned work

The planned work is tracked through GitHub issues.