Conferences

Confluent is proud to participate in the following conferences, trade shows and meetups.

Meetup: Kafka in Kubernetes & Streaming ETL with Apache Kafka® & KSQL

6:00 pm Doors open
7:00 pm - 7:45 pm A Highly Available Kafka Cluster in 5 Minutes? Running Kafka in Kubernetes. - Charles Martinot, Grab
7:45 pm - 8:30 pm Streaming ETL with Apache Kafka® and KSQL - Nick Dearden, Confluent
8:30 pm - 9:00 pm Pizza, drinks, networking and additional Q&A

Session: A Highly Available Kafka Cluster in 5 Minutes? Running Kafka in Kubernetes.

We'll dive into the Kafka setup used by the Grab data engineering team and see how running Kafka in Kubernetes relieves the operational load associated with maintaining Kafka clusters.

Speaker: Charles Martinot, Data Engineer, Grab

As a DevOps engineer, Charles has experimented with and run production loads over the past five years on Mesos and Marathon, bare CoreOS and now Kubernetes. Now, as a data engineer at Grab, he works to make his teammates' lives easier by reducing the operational load they have to shoulder.

Session: Streaming ETL with Apache Kafka and KSQL

Companies new and old are all recognising the importance of a low-latency, scalable, fault-tolerant data backbone - in the form of the Apache Kafka streaming platform. With Kafka developers can integrate multiple systems and data sources, enabling low latency analytics, event-driven architectures and the population of multiple downstream systems. What's more, these data pipelines can be built using configuration alone.

In this talk, we'll see how easy it is to capture a stream of data changes in real-time from a database such as MySQL into Kafka using the Kafka Connect framework and then use KSQL to filter, aggregate and join it to other data, and finally stream the results from Kafka out into multiple targets such as Elasticsearch and MySQL. All of this can be accomplished without a single line of Java code!
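A minimal sketch of what such a pipeline can look like in KSQL. The topic, stream and column names here are hypothetical, and the MySQL CDC connector and Elasticsearch sink are configured separately in Kafka Connect:

```sql
-- Register the Kafka topic populated by the MySQL CDC connector as a stream
CREATE STREAM orders_raw (order_id INT, customer VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='mysql.orders', VALUE_FORMAT='JSON');

-- Continuously filter and reshape it into a new Kafka topic
CREATE STREAM big_orders AS
  SELECT order_id, customer, amount
  FROM orders_raw
  WHERE amount > 100;
```

The resulting `big_orders` topic can then be picked up by an Elasticsearch sink connector, completing the pipeline with configuration alone.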

Speaker: Nick Dearden, Director of Engineering, Confluent

Nick is a technology and product leader at Confluent, where he enjoys leveraging many years of experience in the world of data and analytic systems to help design and explain the power of a streaming platform for every business. Prior to Confluent, he led the data platform group for a leading online real-estate seller and was chief architect for a cloud-based financial analytics platform. His early career stretches all the way back through multiple data warehouse and business intelligence adventures to the green-screen days of mainframe banking systems.

Register Now

Strata Data London

Speaker: Michael Noll, Product Manager, Confluent
Session: Unlocking the World of Stream Processing with KSQL, the Streaming SQL Engine for Apache Kafka®
May 23, 14:05 – 14:45

We introduce KSQL, the open source streaming SQL engine for Apache Kafka. KSQL makes it easy to get started with a wide range of real-time use cases such as monitoring application behavior and infrastructure, detecting anomalies and fraudulent activities in data feeds, and real-time ETL. We cover how to get up and running with KSQL and also explore the under-the-hood details of how it all works.

Michael Noll is a product manager at Confluent, the company founded by the creators of Apache Kafka. Previously, Michael was the technical lead of DNS operator Verisign’s big data platform, where he grew the Hadoop, Kafka, and Storm-based infrastructure from zero to petabyte-sized production clusters spanning multiple data centers—one of the largest big data infrastructures in Europe at the time. He is a well-known tech blogger in the big data community. In his spare time, Michael serves as a technical reviewer for publishers such as Manning and is a frequent speaker at international conferences, including Strata, ApacheCon, and ACM SIGIR. Michael holds a PhD in computer science.

Session Details
Event Details

Online Talk: Apache Kafka® in Action with Carahsoft

Speaker: Will LaForest, Government Advisor on Streaming Data, Confluent

Processing real time data, at internet scale, requires a special set of tools. Apache Kafka lies at the heart of the largest data pipelines, handling trillions of messages and petabytes of data every day, all in real time. Despite the extraordinary scale and performance, Apache Kafka is incredibly easy to set up and use.

In this webcast, Will LaForest, Government Advisor on Streaming Data at Confluent, will explain:

    The fundamental principles of working with Kafka
    The process of standing up Kafka
    Streaming data from producer to consumer

Register Now

Meetup: Processing Streaming Data with KSQL

6:00 pm Doors open
6:00 pm - 6:30 pm Pizza, drinks and networking
6:30 pm - 7:30 pm Processing Streaming Data with KSQL - Tim Berglund, Confluent
7:30 pm - 8:00 pm Q&A

Session: Processing Streaming Data with KSQL

Apache Kafka® is a de facto standard streaming data processing platform. It is widely deployed as a messaging system and offers a robust data integration framework (Kafka Connect) and stream processing API (Kafka Streams) to meet the needs that commonly attend real-time message processing. But there’s more!

Kafka now offers KSQL, a declarative, SQL-like stream processing language that lets you define powerful stream-processing applications easily. What once took some moderately sophisticated Java code can now be done at the command line with a familiar and eminently approachable syntax. Come to this talk for an overview of KSQL with live coding on live streaming data.

Speaker: Tim Berglund, Senior Director of Developer Experience, Confluent

Tim Berglund is a teacher, author and technology leader with Confluent, where he serves as the Senior Director of Developer Experience. He can frequently be found at speaking at conferences in the United States and all over the world. He is the co-presenter of various O’Reilly training videos on topics ranging from Git to Distributed Systems, and is the author of "Gradle Beyond the Basics."

Register Now

Online Talk: The Future of ETL Isn't What It Used to Be

9:00 am - 10:00 am PT | 12:00 pm - 1:00 pm ET

Join Gwen Shapira, Apache Kafka® committer and co-author of "Kafka: The Definitive Guide," as she presents core patterns of modern data engineering and explains how you can use microservices, event streams and a streaming platform like Apache Kafka to build scalable and reliable data pipelines designed to evolve over time.

Speaker: Gwen Shapira, Principal Data Architect, Confluent

Gwen is a principal data architect at Confluent. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen is the author of “Kafka – The Definitive Guide” and “Hadoop Application Architectures,” and a frequent presenter at industry conferences. Gwen is a PMC member on the Apache Kafka project and committer on Apache Sqoop. When Gwen isn’t building data pipelines or thinking up new features, you can find her pedaling on her bike exploring the roads and trails of California, and beyond.

Register Now

Meetup: Empowering the Move to Data-driven Architecture

Speaker: Will LaForest, Senior Director of Federal, Confluent

In the past few years, Apache Kafka has established itself as the world's most popular real-time, large-scale messaging system. The platform is used by thousands of organizations across a wide range of industries and government agencies, including Netflix, Cisco, PayPal and Twitter. Kafka is empowering users to move from legacy infrastructures towards a modern, data-driven architecture.

In this presentation we will discuss:

    The new paradigm shift to data-driven architecture
    What is Apache Kafka
    How data driven architecture enables decoupled high velocity development

The speaker for this session is Will LaForest, Senior Director of Federal at Confluent. In his current position, Mr. LaForest evangelizes the benefits of event stream processing and the data-centric enterprise, and how open source software addresses mission challenges in government.

BTree Solutions Inc is proud to host this event at their office in Herndon, VA. Come join us during this session. Pizza and drinks will be served at the event, too.

Register Now

Online Talk: Stateful, Stateless and Serverless – Running Apache Kafka® on Kubernetes

10:00 am PT - 11:00 am PT | 1:00 - 2:00 pm ET

With the rapid adoption of microservices, there is a growing need for solutions to manage deployment, resources and data for fleets of microservices. Kubernetes is a resource management framework for containers that is rapidly growing in popularity. Apache Kafka is a streaming platform that makes data accessible to the edges of an organization. It's no wonder the question of running Kafka on Kubernetes keeps coming up!

In this online talk, Joe Beda, CTO of Heptio and co-creator of Kubernetes, and Gwen Shapira, principal data architect at Confluent and Kafka PMC member, will help you navigate through the hype, address frequently asked questions and deliver critical information to help you decide if running Kafka on Kubernetes is the right approach for your organization.

You will:

    Get an introduction to the basic concepts you need to know as you plan to deploy services on Kubernetes.
    Learn which parts of the Kafka ecosystem fit Kubernetes like a glove, and which require special attention.
    Pick up useful tips for getting started.
    See why Confluent Platform for Kubernetes is the simplest solution to deploying and orchestrating Kafka on Kubernetes, using container images and a Kubernetes operator.

Speaker: Joe Beda, Co-founder and CTO, Heptio

Joe Beda started his career at Microsoft working on Internet Explorer (he was young and naive). Throughout his 7 years at Microsoft and 10 years at Google, Joe has worked on GUI frameworks, real-time voice and chat, telephony, machine learning for ads and cloud computing. Most notably, while at Google Joe started the Google Compute Engine and, along with Brendan Burns and Heptio Co-founder Craig McLuckie, created Kubernetes.

Speaker: Gwen Shapira, Principal Data Architect, Confluent

Gwen has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen is the author of “Kafka – The Definitive Guide” and “Hadoop Application Architectures,” and a frequent presenter at industry conferences. Gwen is a PMC member on the Apache Kafka project and committer on Apache Sqoop.

Register Now

Meetup: Apache Kafka® and KSQL in Action

Speaker: Robin Moffatt, Developer Advocate, Confluent
12:00 - 14:00

Abstract: Apache Kafka is a distributed, scalable, and fault-tolerant streaming platform, providing low-latency pub-sub messaging coupled with native storage and stream processing capabilities. Integrating Kafka with RDBMS, NoSQL and object stores is simple with the Kafka Connect API, which is part of Apache Kafka. KSQL is the open-source SQL streaming engine for Apache Kafka, and makes it possible to build stream processing applications at scale, written using a familiar SQL interface.

In this session, we’ll introduce Apache Kafka and discuss the concept of a Streaming Platform. From there, we'll explore how easy it is to stream data from a database such as MySQL into Kafka using CDC and Kafka Connect. In addition, we’ll use KSQL to filter, aggregate and join it to other data, and then stream this from Kafka out into multiple targets such as Elasticsearch and S3. All of this can be accomplished without a single line of code!

About Robin: Robin is a developer advocate at Confluent, the company founded by the creators of Apache Kafka, as well as an Oracle ACE Director and Developer Champion. His career has always involved data, from the old worlds of COBOL and DB2, through the worlds of Oracle and Hadoop, and into the current world with Kafka. His particular interests are analytics, systems architecture, performance testing and optimization. Outside of work he enjoys drinking good beer and eating fried breakfasts, although generally not at the same time.

Register Now

Data in Action

Data-driven applications and business solutions are at the heart of business thinking today. Personalization (products, sales, marketing), optimization (processes, costs) and innovation (new business models, ecosystems) are all affected.

On May 29, 2018, you will have the opportunity to assess how data-driven your organization is and to learn about data-driven process models. Exchange ideas with your peers from major Swiss companies, and find out where the combination of data and intelligence has already led two companies: Generali Versicherung and Migros-Genossenschafts-Bund.

Event Details

Elastic Stack in a Day (Milan)

Speaker: Paolo Castagna, Account Executive, Confluent
Session: Elastic & Kafka, a lovely couple
14:25 - 14:50

Paolo's technical background is in big data, and he has witnessed first-hand the enormous change the software industry is going through, from batch and big data to stream processing and fast data.

Event Details

Online Talk: Capital One Delivers Customer Risk Insights in Real Time with Stream Processing

10:00 am - 11:00 am PT | 1:00 - 2:00 pm ET

Capital One supports interactions with real-time streaming transactional data using Apache Kafka®. Kafka helps deliver information to internal operation teams and bank tellers to assist with assessing risk and protect customers in a myriad of ways.

Inside the bank, Kafka allows Capital One to build a real-time system that takes advantage of modern data and cloud technologies without exposing customers to unnecessary data breaches, or violating privacy regulations. These examples demonstrate how a streaming platform enables Capital One to act on their visions faster and in a more scalable way through the Kafka solution, helping establish Capital One as an innovator in the banking space.

Join us for this online talk on lessons learned, best practices and technical patterns of Capital One’s deployment of Apache Kafka.

    Find out how Kafka delivers on a 5-second service-level agreement (SLA) for inside branch tellers.
    Learn how to combine and host data in-memory and prevent personally identifiable information (PII) violations of in-flight transactions.
    Understand how Capital One manages Kafka Docker containers using Kubernetes.
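One way to picture the in-flight PII handling described above is a small masking step applied to each record before it reaches downstream consumers. This is an illustrative sketch, not Capital One's implementation; the field names are hypothetical:

```python
import hashlib
import json

# Fields treated as personally identifiable (hypothetical schema)
PII_FIELDS = {"ssn", "card_number", "email"}

def mask_record(record: dict) -> dict:
    """Replace PII fields with a truncated one-way hash so downstream
    consumers never see the raw values of in-flight transactions."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            masked[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[key] = value
    return masked

event = {"account": "a-123", "amount": 42.5, "ssn": "078-05-1120"}
print(json.dumps(mask_record(event)))
```

In a real deployment a step like this would run inside the stream processor, so only masked records are ever persisted or cached in-memory.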

Speaker: Ravi Dubey, Senior Manager, Software Engineering, Capital One

Ravi is a senior manager working for Capital One in Virginia. Ravi has over 25 years of software development and management experience across a range of products in support of government and commercial industries. His most recent experience includes full stack development of web apps, cloud-based enterprise-facing support applications and a high-throughput, low-latency, distributed cloud-hosted data processing platform.

Speaker: Jeff Sharpe, Senior Software Engineer, Capital One

Jeff is a senior software engineer working for Capital One in Virginia. He’s been an engineer for almost 18 years, with major projects spanning five different languages. Though he began his work on kernel drivers and web applications, he’s been repeatedly drawn into high volume, high throughput data processing projects.

Register Now

Meetup: Zopa and Centrica Connected Homes talk Kafka

Register Now

6:00 pm Doors open
6:00 pm - 6:30 pm Pizza, drinks and networking
6:30 pm - 7:30 pm The Road to Success with Kafka Streams - Adrian McCague, Zopa
7:30 pm - 8:30 pm Real-time Analytics and Machine Learning with Kafka Streams and Kubernetes - Josep Casals, Centrica Connected Home
8:30 pm - 9:00 pm Additional Q&A and networking

Session: The Road to Success with Kafka Streams

The presentation will run around 35 minutes, with 10-15 minutes afterwards for questions.

Adrian will discuss why Zopa chose to adopt Kafka and the success they have had with it so far. Kafka Streams was a major part of this journey, but with its great power and flexibility come pitfalls that are easy to fall into when building a (sometimes) stateful microservice architecture. He will guide developers, whether new to Kafka Streams or already using it, down the road to success.

Zopa was the world’s first peer-to-peer consumer lending platform. Now we’re taking on a new challenge: building a next-generation bank in the UK.

As a company that’s been around for more than thirteen years, Zopa’s C#/MS-SQL/Windows-only tech stack has matured alongside the success of the business. With the decision to launch a bank in the UK, we committed to becoming a polyglot shop, adding Java and Python to our stack and adopting Docker with Kubernetes, while diversifying on tech, people and locations. This transition is all about helping us tackle one of the biggest challenges mature companies face: breaking up the monolith.

We’ve made an important call to break away from our sole dependency on a relational database, and added Kafka as our distributed log platform as the glue between all microservices. This has enabled some strong architectural decisions around stream processing, asynchronous design and reactive event processing. To complement our stack, we’re using key-value Redis clusters to handle the low latency and high throughput data access use cases in various pockets of the system.

Speaker: Adrian McCague, Software Engineer, Zopa

Adrian has experience developing software across the mobile, desktop and server platforms. Prior to working on Zopa's origination pipeline, he developed the HideMyAss! OS X Consumer VPN client. He is passionate about delivering software that does the right thing with respect to the user and applying best in breed technologies to the problem in hand.

Session: Real-Time Analytics and Machine Learning with Kafka Streams and Kubernetes

Centrica Connected Home is one of the largest connected home providers in the UK. It sells the Hive brand of products, which include smart lights, sensors and plugs in addition to its Active Heating system. We build real-time analytics and machine learning solutions for our customers using Kafka Streams, Kafka Connect and Spark. Kafka is the backbone of our real-time analytics and machine learning platform, and our applications are deployed on Kubernetes. We will present how we use Kafka Streams to build analytics and machine learning solutions, and our experience running Kafka Streams on Kubernetes. The talk will cover our reasons for picking Kafka Streams as our preferred streaming solution, our experiences with other streaming frameworks, the scalability and reliability Kafka Streams provides us and the kinds of problems we solve for our customers.

Speaker: Josep Casals, Head of Data and Analytics, Centrica Connected Home

Josep Casals is a creative architect and hands-on IT engineer who likes to design, lead and implement systems from first principles. His specialities include smart contracts, advanced system architectures, big data, the Internet of Things, containers and machine learning, across technical lead, CTO and startup roles. He is also an organiser of the London Apache Kafka® Meetup and a teacher at the London School of Economics MISDI CodeCamp.

Register Now

Building IoT 2018

Session: Process IoT Data with Apache Kafka, KSQL and Machine Learning
June 5, 10:45 - 11:25

IoT devices generate large amounts of data that must be continuously processed and analyzed. Apache Kafka is a highly scalable open-source streaming platform for reading, storing, processing and routing large amounts of data from thousands of IoT devices. KSQL is an open source streaming SQL engine natively based on Apache Kafka to enable stream processing to anyone using simple SQL commands.

This talk uses a health care scenario to show how Kafka and KSQL can help to continuously perform health checks on patients. A live demo shows how machine learning models - trained with frameworks such as TensorFlow, DeepLearning4J or H2O - can be deployed in a time-critical and scalable real-time application.

Previous Knowledge: Knowledge of distributed systems and architectures is helpful. Experience with machine learning is helpful, but not mandatory.

Learning Objectives:

    Apache Kafka is a streaming platform for reading, storing, processing and forwarding large volumes of data from thousands of IoT devices.
    KSQL allows continuous integration and analysis without external big-data clusters and without writing source code.
    Machine learning models can be easily trained and used in the Apache Kafka environment.
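To make the health-check idea concrete, a windowed KSQL aggregation along these lines could flag patients whose heartbeat events drop below a threshold. The stream, column names and threshold are hypothetical, and this assumes a KSQL version with windowed aggregation and HAVING support:

```sql
-- Count heartbeat events per patient in one-minute tumbling windows
-- and keep only patients whose rate falls below the expected minimum
CREATE TABLE heartbeat_alerts AS
  SELECT patient_id, COUNT(*) AS beats
  FROM heartbeats
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY patient_id
  HAVING COUNT(*) < 40;
```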

Session Details
Speaker: Kai Waehner, Technology Evangelist, Confluent

Kai works as a Technology Evangelist at Confluent. His main areas of expertise are Big Data Analytics, Machine Learning / Deep Learning, Messaging, Integration, Microservices, Stream Processing, Internet of Things and Blockchain.

Event Details

IQPC CDO Exchange

An Exchange is made up of innovative learning and networking opportunities that keep even the most senior business leaders engaged. The Exchange is an intimate environment that creates connections which become long-term partnerships. You will experience inspiring keynote addresses, in-depth case studies, structured networking and interactive discussion groups (our signature Think Tanks and BrainWeaves). The consultative one-to-one business meetings between attendees and solution providers are carefully scheduled throughout the Exchange to meet your specific business needs.

Event Details

Big Data Tech

Session: The Future of ETL (isn't what it used to be)

Gwen Shapira will share design and architecture patterns that are used to modernize data engineering. We will see how Apache Kafka®, microservices and event streams are used by modern engineering organizations to efficiently build data pipelines that are scalable, reliable and built to evolve.

We'll start the presentation with a discussion of how software engineering changed in the last 20 years - focusing on microservices, stream processing, cloud and the proliferation of data stores. These changes represent both a challenge and opportunity for data engineers.

Then we'll present 3 core patterns of modern data engineering:

    Building data pipelines from decoupled microservices
    Agile evolution of these pipelines using schemas as a contract for microservices
    Enriching data by joining streams of events

We'll give examples of how these patterns were used by different organizations to move faster, not break things and scale their data pipelines. We'll also show how these can be implemented with Apache Kafka.

Session Details
Speaker: Gwen Shapira, Principal Data Architect, Confluent

Gwen is a principal data architect at Confluent helping customers achieve success with their Apache Kafka® implementation. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. She currently specializes in building real-time reliable data processing pipelines using Apache Kafka. Gwen is an author of “Kafka - the Definitive Guide,” "Hadoop Application Architectures" and a frequent presenter at industry conferences. Gwen is also a committer on the Apache Kafka and Apache Sqoop projects. When Gwen isn't coding or building data pipelines, you can find her pedaling on her bike exploring the roads and trails of California, and beyond.

Event Details

DevNation Federal

Speaker: Will LaForest, Event Streaming Advocate for Federal, Confluent
Session: Understanding Apache Kafka

Processing real-time data, at internet scale, requires a special set of tools. Apache Kafka® lies at the heart of the largest data pipelines, handling trillions of messages and petabytes of data every day all in real-time. This breakout will explain the basic concepts behind Apache Kafka and how one works with it. It is complementary (but not required) to the workshop “Hands on With Apache Kafka.”

Speaker: Jeffrey Needham, Principal Systems Engineer, Confluent
Session: Hands on With Apache Kafka®

This workshop will walk participants through basic operation and usage through hands on exercises. In this session you will stand up an Apache Kafka cluster on OpenShift, publish a stream of events, operate on them and subscribe to the data. Bring your own laptop to fully participate. It is helpful, but not required, to attend the breakout session “Understanding Apache Kafka.”

Event Details

Online Talk: Steps to Building a Streaming ETL Pipeline with Apache Kafka® and KSQL

9:00 am - 10:00 am PT | 12:00 pm - 1:00 pm ET

In this talk, we'll build a streaming data pipeline using nothing but our bare hands, the Kafka Connect API and KSQL. We'll stream data in from MySQL, transform it with KSQL and stream it out to Elasticsearch. Options for integrating databases with Kafka using CDC and Kafka Connect will be covered as well.

Speaker: Robin Moffatt, Developer Advocate, Confluent

Robin is a developer advocate at Confluent, the company founded by the creators of Apache Kafka, as well as an Oracle ACE Director. His career has always involved data, from the old worlds of COBOL and DB2, through the worlds of Oracle and Hadoop, and into the current world with Kafka. His particular interests are analytics, systems architecture, performance testing and optimization.

Register Now

JEMS data connect

On the agenda:

    Conferences on Big Data, GDPR, Cloud, DataLake, IoT, DevOps
    Market trends in Big Data
    Exchanges with publishing partners

Speaker: Aurélien Goujet, Director of Solution Engineering, Southern Europe, Confluent
Event Details

Velocity Conference

Event Details
Session: Metrics Are Not Enough: Monitoring Apache Kafka®
Room: LL21 A/B
June 13, 11:25 am – 12:05 pm

Prerequisite knowledge: Some knowledge of Apache Kafka is important.

When you are running systems in production, clearly you want to make sure they are up and running at all times. But in a distributed system such as Apache Kafka… what does “up and running” even mean?

Experienced Apache Kafka users know what is important to monitor, which alerts are critical and how to respond to them. They don’t just collect metrics – they go the extra mile and use additional tools to validate availability and performance on both the Kafka cluster and their entire data pipelines.

In this presentation we’ll discuss best practices for monitoring Apache Kafka. We’ll look at which metrics are critical to alert on, which are useful in troubleshooting and which may actually be misleading. We’ll review a few “worst practices” – common mistakes that you should avoid. We’ll then look at what metrics don’t tell you – and how to cover those essential gaps.
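One gap that raw broker metrics alone don't cover is how far behind a consumer group actually is. A toy sketch of the consumer-lag calculation (all offset numbers are made up):

```python
# Hypothetical snapshot of broker log-end offsets and the offsets a
# consumer group has committed, keyed by partition number.
log_end_offsets = {0: 1500, 1: 980, 2: 2210}
committed_offsets = {0: 1500, 1: 955, 2: 2100}

def consumer_lag(log_end: dict, committed: dict) -> dict:
    """Lag per partition: how many messages the group still has to read.

    A partition with no committed offset counts as fully lagging."""
    return {p: log_end[p] - committed.get(p, 0) for p in log_end}

lag = consumer_lag(log_end_offsets, committed_offsets)
print(lag, "total:", sum(lag.values()))
```

In practice, tools compare these offsets continuously and alert when lag grows rather than shrinks; a single snapshot says little about the trend.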

Session Details
Speaker: Gwen Shapira, Principal Data Architect, Confluent

Gwen Shapira is a system architect at Confluent, where she helps customers achieve success with their Apache Kafka implementation. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen currently specializes in building real-time reliable data-processing pipelines using Apache Kafka. Gwen is an Oracle Ace Director, the coauthor of Hadoop Application Architectures, and a frequent presenter at industry conferences. She is also a committer on Apache Kafka and Apache Sqoop. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.

Speaker: Xavier Léauté, Software Engineer, Confluent

One of the first engineers to Confluent team, Xavier is responsible for analytics infrastructure, including real-time analytics in KafkaStreams. He was previously a quantitative researcher at BlackRock. Prior to that, he held various research and analytics roles at Barclays Global Investors and MSCI. He holds an MEng in Operations Research from Cornell University and a Masters in Engineering from École Centrale Paris.

Event Details

Big Data Analytics London

Speaker: Lyndon Hedderly, Director of Customer Solutions, Confluent

Lyndon helps organizations model how digital technology, and specifically data streaming, can enable new business models and improve company performance. He is most interested in changing perception from ‘big data’ to ‘fast data’, advising how a real-time data-streaming platform can become the central nervous system of an enterprise. Confluent’s mission is to build a streaming platform and put it at the heart of every company. Prior to joining Confluent, Lyndon was Director of Digital Strategy at Acquia (2014-2017), Head of Customer Success at Centrix, a UK digital start-up (2009-2014), and an IT Strategist and Enterprise Architect at Accenture (1996-2008). Lyndon holds an MSc in Neuroscience, including Neural Networks & AI, from Oxford University.

Event Details

AWS Public Sector Summit

On June 20-21, 2018, global leaders from government, education, and nonprofit organizations will come together for the ninth annual AWS Public Sector Summit in Washington, DC. The move to the cloud is unlike any other technology shift in our lifetime. Don’t miss this opportunity to learn how to use the cloud for complex, innovative, and mission-critical projects. With over 100 breakout sessions led by visionaries, experts, and peers, you’ll take home new strategies and tactics for shaping culture, building new skillsets, saving costs, and achieving your mission. Also, check back soon for an opportunity to register for technical bootcamps and workshops on the Summit Pre-day.

Event Details

Scala Days New York

Session: Journey to a Real-Time Enterprise
June 20, 9:00 am - 10:00 am

There is a monumental shift happening in how data powers a company's core business. This shift is about moving away from batch processing and to real-time data. Apache Kafka® was built with the vision to help companies traverse this change and become the central nervous system that makes data available in real time to all the applications that need to use it.

This talk explains how companies are using the concepts of events and streams to transform their business to meet the demands of this digital future and how Apache Kafka serves as the foundation for streaming data applications. You will learn how KSQL, Connect, and the Streams API with Apache Kafka capture the entire scope of what it means to put real time into practice.

Session Details
Speaker: Neha Narkhede, Co-founder and CTO, Confluent

Neha Narkhede is co-founder and CTO at Confluent, the company behind the popular Apache Kafka streaming platform. Prior to founding Confluent, Neha led streams infrastructure at LinkedIn, where she was responsible for LinkedIn’s streaming infrastructure built on top of Apache Kafka and Apache Samza. She is one of the initial authors of Apache Kafka and a committer and PMC member on the project.

Event Details

Online Talk: Streaming Transformations - Putting the T in Streaming ETL

9:00 am - 10:00 am PT | 12:00 pm - 1:00 pm ET

We’ll discuss how to leverage some of the more advanced transformation capabilities available in both KSQL and Kafka Connect, including how to chain them together into powerful combinations for handling tasks such as data-masking, restructuring and aggregations. Using KSQL, you can deliver the streaming transformation capability easily and quickly.
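As an illustration of chaining, the following hypothetical KSQL statements mask a sensitive column and then aggregate the masked stream. The MASK function assumes a KSQL release that ships the masking UDFs, and all stream and column names are made up:

```sql
-- Step 1: mask the sensitive column in-flight
CREATE STREAM purchases_masked AS
  SELECT MASK(card_number) AS card_number, store_id, amount
  FROM purchases;

-- Step 2: aggregate the masked stream into a continuously updated table
CREATE TABLE store_totals AS
  SELECT store_id, SUM(amount) AS total
  FROM purchases_masked
  GROUP BY store_id;
```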

Speaker: Nick Dearden, Director of Engineering, Confluent

Nick is a technology and product leader at Confluent, where he enjoys leveraging many years of experience in the world of data and analytic systems to help design and explain the power of a streaming platform for every business. Prior to Confluent, he led the data platform group for a leading online real-estate seller and was chief architect for a cloud-based financial analytics platform. His early career stretches all the way back through multiple data warehouse and business intelligence adventures to the green-screen days of mainframe banking systems.

Register Now

OSCON Open Source Convention

OSCON has been ground zero of the open source movement. Now in its 20th year, OSCON continues to be a catalyst for innovation and success for companies.

Speaker: Tim Berglund, Senior Director of Developer Experience, Confluent
Session: Tutorial
Event Details

Google Cloud Next

Google Cloud Next ’18 is your chance to unlock new opportunities for your business, uplevel your skills, and uncover what’s next for Cloud.

Event Details

Strata Data East

How do you drive business results with data?

Every year thousands of top data scientists, analysts, engineers, and executives converge at Strata Data Conference—the largest gathering of its kind. It's where technologists and decision makers turn data and algorithms into business advantage.

Event Details

SpringOne

Speaker: Neha Narkhede, Co-founder and CTO, Confluent

Neha is the co-founder of Confluent and one of the initial authors of Apache Kafka®. She’s an expert on modern, stream-based data processing.

Event Details

JAX London

Speaker: Tim Berglund, Senior Director of Developer Experience, Confluent
Session: Stream Processing with Apache Kafka® and KSQL

Apache Kafka is the de facto standard streaming data platform, widely deployed as a messaging system and offering a robust data integration framework (Kafka Connect) and a stream processing API (Kafka Streams) to meet the needs that commonly attend real-time message processing. But there’s more! Kafka now offers KSQL, a declarative, SQL-like stream processing language that lets you define powerful stream-processing applications easily. What once took some moderately sophisticated Java code can now be done at the command line with a familiar and eminently approachable syntax. In this workshop, you’ll get a thorough introduction to Apache Kafka, learn what sorts of architectures it supports, and most importantly, use the exciting new KSQL language to write real-time stream processing applications.
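To give a taste of that declarative style, a single KSQL statement (stream and column names are illustrative, not from the workshop) can replace what would otherwise be a hand-written Java consumer-filter-producer loop:

```sql
-- Continuously filter a hypothetical pageviews stream down to error responses
CREATE STREAM error_pageviews AS
  SELECT user_id,
         url,
         status_code
  FROM pageviews
  WHERE status_code >= 400;
```

The statement runs as a persistent query: every new event arriving on the source topic is filtered in real time, with no application code to deploy.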

Session Details
Speaker: Tim Berglund, Senior Director of Developer Experience, ConfluentSession: The Database Unbundled: Commit logs in an age of Microservices

Microservice architectures pose a fundamental challenge to the traditional centralized database we have come to understand. In this talk, we’ll explore the notion of unbundling that database and putting a distributed commit log at the center of our information architecture. As events impinge on our system, we store them in a durable, immutable log (happily provided by Apache Kafka), allowing each microservice to create a derived view of the data according to the needs of its clients. Event-based integration avoids the now well-known problems of RPC- and database-based service integration, and allows the information architecture of the future to take advantage of the growing functionality of stream processing systems like Apache Kafka. In this way we can create systems that adapt more easily to the changing needs of the enterprise and deliver the real-time results we are increasingly asked to provide.
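The "derived view" idea sketched above can be expressed in KSQL, treating a Kafka topic as the durable, immutable log and materializing a per-service view from it (topic, stream, and column names here are hypothetical):

```sql
-- Each microservice derives its own view of the shared event log.
-- Here, a customer service materializes per-customer order counts
-- from an append-only stream of order events.
CREATE TABLE customer_order_counts AS
  SELECT customer_id,
         COUNT(*) AS order_count
  FROM orders
  GROUP BY customer_id;
```

Other services can build entirely different views from the same log, each shaped to its own clients, without coordinating through a shared central database.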

Session Details

Tim is a teacher, author, and technology leader with Confluent, where he serves as the Senior Director of Developer Experience. He can frequently be found speaking at conferences in the United States and all over the world. He is the co-presenter of various O’Reilly training videos on topics ranging from Git to distributed systems, and is the author of "Gradle Beyond the Basics."

Event Details

Kafka Summit San Francisco

Discover the World of Streaming Data

As streaming platforms become central to data strategies, companies both small and large are rethinking their architecture with real-time context at the forefront. Monoliths are evolving into microservices. Data centers are moving to the cloud. What was once a ‘batch’ mindset is quickly being replaced with stream processing as the demands of the business impose more and more real-time requirements on developers and architects.

This revolution is transforming industries. What started at companies like LinkedIn, Uber, Netflix and Yelp has made its way to countless others in a variety of sectors. Today, thousands of companies across the globe build their businesses on top of Apache Kafka®. The developers responsible for this revolution need a place to share their experiences on this journey.

Kafka Summit is the premier event for data architects, engineers, devops professionals, and developers who want to learn about streaming data. It brings the Apache Kafka community together to share best practices, write code, and discuss the future of streaming technologies.

Welcome to Kafka Summit San Francisco!

Event Details

Big Data London

Speaker: Jay Kreps, Co-founder and CEO, Confluent
Room: Keynote Theater
13 November, 09:30

Jay Kreps is the CEO of Confluent, Inc., the company behind the popular Apache Kafka® messaging system. Prior to founding Confluent, he was the lead architect for data infrastructure at LinkedIn. He is among the original authors of several open source projects, including Project Voldemort (a key-value store), Apache Kafka (a distributed messaging system) and Apache Samza (a stream processing system).

Event Details

AWS re:INVENT

Join us for deeper technical content, more hands-on learning opportunities, keynote announcements, a bigger and better Partner Expo, exciting after-hours events, and the best party in technology—re:Play.

At re:Invent 2018, you can dive into solving challenges and working on a team in our two-hour workshops. In the chalk talks or builders sessions, you will have the opportunity to interact in a small group setting with AWS experts as they whiteboard through problems and solutions. In addition, we will be repeating our most popular sessions and offering late night sessions, so you get the most out of re:Invent.

Event Details

KubeCon + CloudNativeCon

The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities in Seattle, WA on December 11-13, 2018. Join Kubernetes, Prometheus, OpenTracing, Fluentd, gRPC, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Vitess, CoreDNS, NATS, and Linkerd as the community gathers for three days to further the education and advancement of cloud native computing.

Event Details

Ready to Talk to Us?

Have someone from Confluent contact you.

Contact Us