Confluent Platform

Confluent Schema Registry

When organizing real-time data from multiple applications and systems into a streaming platform, a common challenge is coordinating teams of developers. Each application, and often each team, likely has its own event schema. What happens when one application uses data generated by a totally different team? Or worse, what happens if you want to change that schema after the application is already integrated into a pipeline feeding dozens or more downstream systems?

Written and open sourced by Confluent, the Schema Registry for Apache Kafka enables developers to define standard schemas for their events, share them across the organization, and safely evolve them in a way that is backward compatible and future-proof.


Why Confluent Schema Registry

Confluent Schema Registry stores a versioned history of all schemas and allows schemas to evolve according to the configured compatibility settings. It also provides plugins for Kafka clients that handle schema storage and retrieval for messages sent in Avro format.
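To make the "versioned history" idea concrete, here is a minimal in-memory sketch of how a registry can map a subject to an ordered list of schema versions. This is purely illustrative (the names `ToyRegistry`, `register`, and `latest` are invented for this sketch, not the real API):

```python
class ToyRegistry:
    """Illustrative in-memory sketch of versioned schema storage."""

    def __init__(self):
        self._subjects = {}  # subject name -> list of schemas; index = version - 1

    def register(self, subject, schema):
        """Register a schema under a subject, returning its version number.
        Re-registering an identical schema returns the existing version."""
        versions = self._subjects.setdefault(subject, [])
        if schema in versions:
            return versions.index(schema) + 1
        versions.append(schema)
        return len(versions)

    def latest(self, subject):
        """Return (version, schema) for the newest version of a subject."""
        versions = self._subjects[subject]
        return len(versions), versions[-1]


reg = ToyRegistry()
v1 = reg.register("orders-value", {"type": "record", "name": "Order",
                                   "fields": [{"name": "id", "type": "string"}]})
print(v1)  # 1
```

The real registry layers compatibility checks on top of this versioned storage, so a new version is only accepted if it satisfies the subject's configured compatibility setting.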

Deploy Reliably

Having a tool that protects your Apache Kafka deployment from breaking changes means you can let your developers deploy freely and stop sweating the small stuff, like schema compatibility.

Evolve Quickly

Need to add a new column to a downstream database? You don’t need an involved change process and at least 5 meetings to coordinate 20 teams.

Confluent Schema Registry lets you validate your changes as an integrated part of the development process, reducing coordination overhead and giving you the information you need right in your development environment.



Learn to Love Thy Schema


Manage your schema

Schemas are named, and you can have multiple versions of the same schema. The Confluent Schema Registry validates compatibility and warns about possible issues. This lets developers add and remove fields independently, so teams move faster and stay more loosely coupled. New schemas and versions are registered and validated automatically, so the whole process of pushing new schemas to production is seamless.
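As a rough sketch of what "validating compatibility" means: under backward compatibility, a new schema must still be able to read data written with the previous one, so any field added to an Avro record needs a default value. The toy check below captures only that one rule (the real registry's checks are far more thorough):

```python
def backward_compatible(new_schema, old_schema):
    """Toy check in the spirit of BACKWARD compatibility for Avro records:
    any field added in the new schema must carry a default, or old records
    would have no value to supply for it."""
    old_fields = {f["name"] for f in old_schema["fields"]}
    for field in new_schema["fields"]:
        if field["name"] not in old_fields and "default" not in field:
            return False  # new required field breaks readers of old data
    return True


old = {"fields": [{"name": "id", "type": "string"}]}
ok  = {"fields": [{"name": "id", "type": "string"},
                  {"name": "note", "type": "string", "default": ""}]}
bad = {"fields": [{"name": "id", "type": "string"},
                  {"name": "note", "type": "string"}]}
print(backward_compatible(ok, old), backward_compatible(bad, old))  # True False
```

Because these rules are mechanical, the registry can run them on every new version and warn developers before an incompatible change reaches production.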

Integrate with standard development tools

Confluent Schema Registry includes Maven and Gradle plugins, so you can integrate schema management and validation right into the development process and find out about compatibility issues as early as possible.
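For a rough idea of what that integration looks like, a Maven build might wire in Confluent's plugin along these lines. This is a sketch from memory, not a verified configuration; the exact coordinates, goal names, and options should be taken from the plugin's documentation for your version:

```xml
<plugin>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-schema-registry-maven-plugin</artifactId>
  <configuration>
    <schemaRegistryUrls>
      <param>http://localhost:8081</param>
    </schemaRegistryUrls>
    <subjects>
      <!-- subject name -> local schema file (illustrative paths) -->
      <orders-value>src/main/avro/Order.avsc</orders-value>
    </subjects>
  </configuration>
</plugin>
```

With configuration like this in place, a compatibility-check goal can run as part of the build, failing it before an incompatible schema ever leaves a developer's machine.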

Access your schema

Confluent Schema Registry comes with a REST API that allows any application to save or retrieve the schemas for the data it needs to access. Additionally, formatters provide command-line functionality for automatically converting JSON messages, keeping your data human-friendly.
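As a minimal sketch of the REST API, registering a new schema version is a POST to `/subjects/{subject}/versions`. The snippet below only builds the request without sending it, assuming a registry at `localhost:8081`; the subject and schema names are illustrative:

```python
import json
import urllib.request

REGISTRY = "http://localhost:8081"  # assumed local Schema Registry

def register_schema_request(subject, avro_schema):
    """Build (but do not send) the POST that registers a new schema
    version under a subject: POST /subjects/{subject}/versions."""
    body = json.dumps({"schema": json.dumps(avro_schema)}).encode()
    return urllib.request.Request(
        f"{REGISTRY}/subjects/{subject}/versions",
        data=body,
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
        method="POST",
    )


schema = {"type": "record", "name": "Order",
          "fields": [{"name": "id", "type": "string"}]}
req = register_schema_request("orders-value", schema)
print(req.full_url)  # http://localhost:8081/subjects/orders-value/versions
```

Against a running registry, passing this request to `urllib.request.urlopen` (or issuing the equivalent `curl` command) registers the schema and returns its globally unique ID.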

Automated serialization

Confluent Schema Registry is built right into the Kafka client serialization libraries, across languages. Writing serialized Apache Avro records to Kafka is as simple as configuring a producer with the Schema Registry serializers and sending Avro objects to Kafka. And if you accidentally use an incompatible schema? That's just an exception for the producer to handle. Incompatible data will never make it into Apache Kafka.
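For a Java producer, "configuring with the Schema Registry serializers" typically comes down to a few properties along these lines (a sketch assuming a local broker and a local registry at `localhost:8081`):

```properties
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
schema.registry.url=http://localhost:8081
```

With this configuration, the Avro serializer contacts the registry on the producer's behalf; application code just sends Avro objects and handles the serialization exception if a schema turns out to be incompatible.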

How Confluent Schema Registry Works

01

The serializer places a call to Schema Registry to check whether a schema is registered for the data the application wants to publish. If one is, Schema Registry returns it to the application's serializer, which uses it to filter out incorrectly formatted messages.

02

Once the schema checks out, the message is serialized automatically, with no extra effort on your part, and delivered to the Kafka topic as expected.

03

Your consumers handle deserialization, so your data pipeline can evolve quickly while staying clean. All you need is for every application to call Schema Registry when publishing.
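The three steps above can be sketched end to end in a few lines. Everything here is a toy stand-in (a dict for the registry, a list for the topic, JSON in place of Avro), not the real client API, but it shows the key property: incompatible data raises before anything reaches the topic.

```python
import json

class IncompatibleSchemaError(Exception):
    pass

def produce(record, registry, subject, topic):
    """Toy producer path mirroring the steps above:
    1) look up the subject's schema, 2) validate and serialize,
    then deliver to the topic."""
    schema = registry[subject]                         # step 1: fetch the schema
    expected = {f["name"] for f in schema["fields"]}
    if set(record) != expected:                        # step 2: validate fields
        raise IncompatibleSchemaError(f"expected fields {sorted(expected)}")
    topic.append(json.dumps(record).encode())          # JSON stands in for Avro

def consume(topic):
    """Step 3: consumers deserialize on read."""
    return [json.loads(msg) for msg in topic]


registry = {"orders-value": {"fields": [{"name": "id", "type": "string"}]}}
topic = []
produce({"id": "o-1"}, registry, "orders-value", topic)
print(consume(topic))  # [{'id': 'o-1'}]
```

Because the validation happens in the producer's serializer, consumers can trust that every message on the topic conforms to a registered schema.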