· 8 min read

In this article, we are going to walk through, step by step, how to build a Conduit connector.

Conduit connectors communicate with Conduit by writing records into the pipeline (source connector), by reading records out of the pipeline (destination connector), or both.

For this example, we are going to build an Algolia destination connector. The goal of this connector is to give the user the ability to send data to Algolia. In the context of search engines, this is called indexing. Since Conduit is a generic tool for moving data between data infrastructure, this new connector lets us index data from any Conduit source (PostgreSQL, Kafka, etc.).

You can find the full example on GitHub.

Let's build!

· 3 min read

The Conduit Kafka Connect Wrapper connector is a special connector that allows you to use Kafka Connect connectors with Conduit. Conduit doesn't come bundled with Kafka Connect connectors, but the wrapper lets you bring any Kafka Connect connector to Conduit.

This connector gives you the ability to:

  • Easily migrate from Kafka Connect to Conduit.
  • Remove Kafka as a dependency to move data between data infrastructure.
  • Leverage a datastore for which Conduit doesn't yet have a native connector.

Since the Conduit Kafka Connect Wrapper itself is written in Java, while most of Conduit's connectors are written in Go, it also serves as a good example of the flexibility of the Conduit Plugin SDK.

Let's begin.

How it works

To use the Kafka Connect wrapper connector, you'll need to:

  1. Clone the conduit-kafka-connect-wrapper repository.
  2. Build the Connector JAR.
  3. Download Kafka Connect JARs and any dependencies you would like to add.
  4. Create a Conduit pipeline.
  5. Add the connector to the pipeline (see the sketch below).
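
To make step 5 concrete, here is a minimal sketch of registering the wrapper on an existing pipeline through Conduit's REST API using Node.js and axios. The plugin path, the `wrapper.connector.class` setting, and the connector class name are illustrative assumptions; check the conduit-kafka-connect-wrapper README for the exact values your build expects.

import axios from 'axios'

// Hypothetical example: attach the Kafka Connect wrapper (built in step 2)
// as a source connector on an existing Conduit pipeline.
const config = {
  type: 'TYPE_SOURCE',
  // Assumed path to the wrapper built in step 2.
  plugin: '/path/to/conduit-kafka-connect-wrapper',
  pipelineId: 'your-pipeline-id',
  config: {
    name: 'kafka-connect-source',
    settings: {
      // Assumed setting name: the Kafka Connect connector class to wrap
      // (downloaded in step 3), followed by that connector's own settings.
      'wrapper.connector.class': 'io.example.SomeSourceConnector',
    },
  },
}

await axios.post('http://localhost:8080/v1/connectors', config)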

· 3 min read

By default, Conduit ships with a REST API that allows you to automate the creation of data pipelines and connectors. To make it easy to get started with the API, we have provided a Swagger UI to visualize and interact with Conduit without having to write any code...yet 😉.

After you start Conduit, if you navigate to http://localhost:8080/openapi/, you will see a page that looks like this:

Conduit in Terminal

Then, after you test the API, you can write code to make the equivalent request. For example, here is how you would make a request using the axios Node.js library.

const config = {
  type: 'TYPE_SOURCE',
  plugin: `${pkgPath}/pkg/plugins/pg/pg`,
  pipelineId: pipeline.id,
  config: {
    name: 'pg',
    settings: {
      table: pgTable,
      url: pgUrl,
      cdc: 'false',
    },
  },
}

const response = await axios.post(`http://localhost:8080/v1/connectors`, config)

Essentially, the API is everything you need to automate pipeline creation. Let's begin.

Starting Conduit

To get started, you need to install and start Conduit. You may even add Conduit to your $PATH.

./conduit

To open the Swagger UI, open your browser and navigate to http://localhost:8080/openapi. This UI allows you to interact with the API and create connectors. It also serves as a reference for the API.

Making a Request

The API lets you manage all parts of Conduit. For example, all we need to create and start a pipeline are these three endpoints:

  • Create Pipelines - POST /v1/pipelines
  • Create Connectors - POST /v1/connectors
  • Start/Stop Pipelines - POST /v1/pipelines/{id}/start

Let's use the Swagger UI to create a pipeline.

  1. First, find the create pipeline API, and select "Try it out":
Conduit in Terminal
  2. Update the body of the request with your new pipeline details:
Create a Conduit Pipeline

In this case, the config describes the name and the description of the new pipeline:

{
  "config": {
    "name": "string",
    "description": "string"
  }
}

  3. Select "Execute" and note the response to the request:
Conduit API Response

For every request, you will be able to try it out, see the body of the request, and the expected response.
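
Outside of the Swagger UI, the same three endpoints can be scripted. Below is a minimal sketch, following the axios pattern from the example above, that creates a pipeline, attaches a Postgres source connector, and starts the pipeline. The response shape (`pipeline.id`), plugin path, and settings values are assumptions for illustration, and a complete pipeline will also need a destination connector before it can move data.

import axios from 'axios'

const API = 'http://localhost:8080/v1'

// 1. Create the pipeline (POST /v1/pipelines) with the body shown above.
// We assume the response contains the created pipeline, including its id.
const { data: pipeline } = await axios.post(`${API}/pipelines`, {
  config: { name: 'my-pipeline', description: 'Moves data out of Postgres' },
})

// 2. Attach a source connector (POST /v1/connectors).
// The plugin path and settings values are placeholders.
await axios.post(`${API}/connectors`, {
  type: 'TYPE_SOURCE',
  plugin: '/path/to/pkg/plugins/pg/pg',
  pipelineId: pipeline.id,
  config: {
    name: 'pg',
    settings: { table: 'my_table', url: 'postgres://localhost/mydb', cdc: 'false' },
  },
})

// 3. Start the pipeline (POST /v1/pipelines/{id}/start).
await axios.post(`${API}/pipelines/${pipeline.id}/start`)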

What's Next

Now that you know how to try out the API, you can explore Conduit with these other resources:

· 3 min read

In this guide, we will build a data pipeline that moves data between files. This example is a great way to get started with Conduit on a local machine, but it's also the foundation of use cases such as log aggregation.

File to File Conduit Pipeline

Every time data is appended to src.log, it will be moved in real time to dest.log.
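
For a sense of how this pipeline could be wired up programmatically, here is a minimal sketch using the REST API and axios pattern from the previous post. The file plugin path and its `path` setting are assumptions modeled on the Postgres example; consult your Conduit build for the exact values.

import axios from 'axios'

const API = 'http://localhost:8080/v1'

// Create the pipeline that will connect the two files.
const { data: pipeline } = await axios.post(`${API}/pipelines`, {
  config: { name: 'file-to-file', description: 'Moves data between files' },
})

// File source: picks up new lines appended to src.log.
// Plugin path and the `path` setting are assumptions.
await axios.post(`${API}/connectors`, {
  type: 'TYPE_SOURCE',
  plugin: '/path/to/pkg/plugins/file/file',
  pipelineId: pipeline.id,
  config: { name: 'src', settings: { path: './src.log' } },
})

// File destination: writes each record to dest.log.
await axios.post(`${API}/connectors`, {
  type: 'TYPE_DESTINATION',
  plugin: '/path/to/pkg/plugins/file/file',
  pipelineId: pipeline.id,
  config: { name: 'dest', settings: { path: './dest.log' } },
})

// Start the pipeline; appends to src.log now flow to dest.log.
await axios.post(`${API}/pipelines/${pipeline.id}/start`)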