Log Compaction | Highlights in the Apache Kafka and Stream Processing Community | April 2016

Gwen Shapira

The Apache Kafka community was crazy-busy last month. We released a technical preview of Kafka Streams and then voted on a release plan for Kafka 0.10.0. We accelerated the discussion of a few key proposals in order to make the release, rolled out two release candidates, and then decided to put the release on hold in order to get a few more changes in.

  • Kafka Streams tech preview! If you are interested in a new, lightweight, easy-to-use way to process streams of data, I highly recommend you take a look. A minimal example of the Streams API appears after this list.
  • If you are interested in the theory of stream processing, check out Making Sense of Stream Processing and download the eBook while it’s still available. The book is written by Martin Kleppmann, and if you have been following Kafka and stream processing for a while, you know his work is always worth reading.
  • Wondering what will be included in the 0.10.0 release? Worried whether there are any critical issues left? Take a look at our release plan.
  • The pull request implementing KIP-36 was merged. KIP-36 adds rack awareness to Kafka: brokers can now be assigned to specific racks, and when topics and partitions are created, replicas are placed on brokers according to their rack placement. A sketch of the broker-side configuration appears after this list.
  • The pull request implementing KIP-51 was merged. KIP-51 is a very small change to the Kafka Connect REST API, allowing users to ask for a list of available connectors.
  • The pull request implementing KIP-45 was merged. KIP-45 is a small change to the new consumer API that standardizes the types of containers accepted by the various consumer API calls; see the short example after this list.
  • KIP-43, which adds support for standard SASL mechanisms in addition to Kerberos, was voted in. We will try to get this merged into Kafka in time for the 0.10.0 release.
  • There are quite a few KIPs under very active discussion:

    • KIP-4, adding an API for administrative actions such as creating new topics, requires some modifications to MetadataRequest.
    • KIP-35 adds a new protocol for getting the current versions of all requests supported by a Kafka broker. This protocol improvement will make it possible to write Kafka clients that work with brokers of different versions.
    • KIP-33 adds time-based indexes to Kafka, supporting both time-based log purging and time-based data lookup.

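For a taste of what Kafka Streams looks like, here is a minimal sketch in the spirit of the tech preview: it reads lines from one topic, maps each line to its length, and writes the results to another topic. The topic names, serde settings, and broker address are assumptions for illustration, and the API details may still shift between the tech preview and the final 0.10.0 release.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class LineLengthExample {

    public static void main(String[] args) {
        // Basic Streams configuration; the application id also serves as the
        // consumer group id and as a prefix for internal topics.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "line-length-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        // Build a simple topology: read "text-input", map each value to its
        // length, and write the result to "line-lengths".
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> lines = builder.stream("text-input");
        lines.mapValues(line -> String.valueOf(line.length()))
             .to("line-lengths");

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
    }
}
```

The point of the preview is exactly this: a stream processing application is an ordinary Java program with an embedded library, not a job submitted to a separate processing cluster.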
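For the rack-awareness change from KIP-36, the broker side amounts to telling each broker which rack it lives in via the new broker.rack setting. A hypothetical server.properties fragment might look like this (the rack identifier is site-specific; many deployments use the cloud availability zone):

```properties
# server.properties for a broker running in rack / availability zone "us-east-1a"
broker.id=1
broker.rack=us-east-1a
```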
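And for KIP-45, the practical effect is that the new consumer's subscription calls accept any Collection rather than a specific container type. A small sketch, with broker address, group id, and topic names assumed for illustration:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SubscribeExample {

    public static void main(String[] args) {
        // Minimal consumer configuration; the broker address and group id are assumptions.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // With KIP-45, subscribe() takes any Collection<String>, so a List,
        // a Set, or the result of Arrays.asList(...) all work.
        consumer.subscribe(Arrays.asList("topic-a", "topic-b"));

        consumer.close();
    }
}
```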
That’s all for now! Got a newsworthy item? Let us know. If you are interested in contributing to Apache Kafka, check out the contributor guide to help you get started.
