Connectors

Your Hub for Connecting Apache Kafka® to All Your Systems

Effective information and data management begins with centralizing data where it can grow organically. The effectiveness of microservices, streaming applications, and countless other uses of data depends on how easily data can be moved into and out of a central location. The Apache Kafka Connect APIs and connectors give developers, data engineers, and operators a supported, natural way to evolve an organization's data environment by providing ready-made ways to access and deliver data.

Why Connect?

  • Centralized Data Pipeline

    Use meaningful data abstractions to pull or push data to Apache Kafka.

  • Flexibility and Scalability

    Connectors for streaming and batch-oriented systems. Deploy on a single node or scale to a distributed service, serving your entire organization.

  • Reusability and Extensibility

    Leverage existing connectors and transformations, or easily add your own tailored to your needs, for faster time to production.

Kafka Connect API

The Kafka Connect API is an interface that simplifies and automates the integration of a new data source or sink into your Kafka cluster. The most popular data systems have connectors built by Confluent, its partners, or the Kafka community, and you can find them in Confluent Hub. You can leverage this work to save yourself time and energy. Tools such as Confluent Control Center and the Confluent CLI also make it easy to manage and monitor connectors.
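In practice, a connector is defined by a small JSON configuration and registered through the Connect REST API (which listens on port 8083 by default). The sketch below uses the FileStreamSource connector that ships with Apache Kafka; the file path and topic name are illustrative placeholders:

```json
{
  "name": "local-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/var/log/app/events.log",
    "topic": "app-events"
  }
}
```

POSTing this document to `http://localhost:8083/connectors` creates the connector; the same REST API can then pause, resume, reconfigure, or delete it.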

Learn More About Connect APIs

Data for All

Packaged connectors make it easy to move data between popular data sources and sinks and Kafka. A source connector can ingest entire databases and stream table updates to Kafka topics. It can also collect metrics from all of your application servers into Kafka topics, making the data available for stream processing. A sink connector can deliver data from Kafka topics into secondary indexes such as Elasticsearch or batch systems such as Hadoop for offline analysis.
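As a sketch of the database-to-search-index pipeline described above, a JDBC source connector can be paired with an Elasticsearch sink connector. The connector classes below are the ones published by Confluent; the connection URLs, table, and topic names are placeholders:

```json
{
  "name": "orders-db-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/orders",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "topic.prefix": "db-"
  }
}
```

```json
{
  "name": "orders-es-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "db-orders",
    "connection.url": "http://es-host:9200"
  }
}
```

The source streams each new row of the `orders` tables into topics prefixed with `db-`, and the sink indexes those records into Elasticsearch as they arrive.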

Flexible Deployment Options

Kafka Connect can run either as a standalone process for jobs on a single machine (e.g., log collection), or as a distributed, scalable, fault-tolerant service supporting an entire organization. This allows it to scale down to development, testing, and small production deployments with a low barrier to entry and low operational overhead, and to scale up to support a large organization's data pipeline.
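The two modes are started with scripts shipped in the Apache Kafka distribution. A rough sketch, assuming the default file names from the Kafka download (adjust paths to your installation):

```shell
# Standalone mode: a single process; connector configs are passed on the command line
bin/connect-standalone.sh config/connect-standalone.properties config/file-source.properties

# Distributed mode: each worker process joins a shared group;
# connectors are then managed through the REST API rather than local files
bin/connect-distributed.sh config/connect-distributed.properties
```

In distributed mode, adding capacity is a matter of starting more workers with the same worker configuration; Connect rebalances connector tasks across them automatically.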

Enterprise Level Performance

Kafka Connect is focused on streaming data to and from Kafka, making it simpler for you to write high quality, reliable, and high performance connector plugins. It also enables the framework to make guarantees that are difficult to achieve using other frameworks. Kafka Connect is an integral component of an ETL pipeline when combined with Kafka and a stream processing framework.

Confluent Hub Advantage

Why Confluent Hub

Confluent Hub provides the only supported, managed and curated repository of connectors and other components in the Apache Kafka ecosystem. Confluent Hub provides the following set of components at various levels of support:

  • Confluent connectors, supported by Confluent
  • Verified and Gold connectors, supported by Confluent partners
  • Community connectors, supported by the community (patches are welcome)
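Components from Confluent Hub can be installed with the Confluent Hub client. A sketch, assuming the client is installed alongside Confluent Platform; the connector name and version are illustrative:

```shell
# Install a connector from Confluent Hub into the local Connect plugin path
confluent-hub install confluentinc/kafka-connect-elasticsearch:latest
```

After restarting the Connect workers, the newly installed plugin appears in the list of available connector classes.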

Start the curated connector experience with
Confluent Hub
