
Kafka Summit NYC: Streaming Pipelines Track – What to Expect

Kafka Summit NYC is just a few weeks away! Since we’re on the program committee for this event and also serve as track leads for the Streaming Pipelines track, we thought it would be fun to share how we picked the talks for the track, along with a few of the sessions we’re most excited to see in May.

For background, this track was designed for developers and users to discuss how they’ve used Apache Kafka® to build integrated data architectures and expand their use cases to the cloud.

When discussing the track as a program committee, we sought to cultivate a group of practitioners who could share their experiences, successes, and lessons with the greater Kafka and stream processing community. We looked for compelling Kafka use cases that would pique our interest, topics that would appeal to at least 30% of Summit attendees, and abstracts with strong titles and clear details on why members of the community would care to hear the talk. Taken as a whole, the sessions in the track should also cover various aspects and components of the Kafka ecosystem. Sounds easy enough, right?

More than 100 talks were submitted for Kafka Summit New York. With that many quality submissions, narrowing them down to eight per track was a daunting task. At the end of the process, we’re proud of the sessions chosen for this track and hope you will agree once you have a chance to attend them.

We’re excited to highlight our ‘can’t-miss’ sessions in this track:

Every Message Counts: Kafka as a Foundation for Highly Reliable Logging at Airbnb
Youssef Francis, Software Engineer & Jun He, Software Engineer, Airbnb
Airbnb is so popular that everyone will be familiar with their use case, making this talk a great way to introduce the audience to a high-reliability Kafka architecture.
Billions of Messages a Day: Yelp’s Real-time Data Pipeline
Justin Cunningham, Technical Lead, Yelp
Last year, Yelp described their Kafka-based real-time data pipeline in a multi-part blog series. We’ve since seen many companies implement a very similar data pipeline, and it has consistently proven successful. We want to expose a large audience to this architecture pattern for data pipelines, since it has worked so well for many different companies.
California Schemin’! How the Schema Registry has Ancestry Basking in Data
Chris Sanders, Director, Data Warehouse and Visualization, Ancestry
Ancestry had unique data integration challenges and took a very systematic approach to handling them, building a “factory” for reliable data pipelines. We’ve seen many organizations struggle to balance agile development practices with maintaining data quality, and we look forward to learning from Ancestry’s experience.

 

This event will be the largest gathering of Kafka experts across a wide range of industries, so we hope to see you there. Register early, and follow the event on Twitter at #kafkasummit.

 
