How Apache Kafka® Works

Presented by Michael Bingham, Technical Trainer, Confluent

About this talk

Pick up best practices for developing applications that use Apache Kafka, beginning with a high-level code overview of a basic producer and consumer. From there we’ll cover strategies for building powerful stream processing applications, including high availability through replication, data retention policies, and producer design and guarantees. We’ll delve into the details of delivery guarantees, including exactly-once semantics, partition strategies, and consumer group rebalances. The talk finishes with a discussion of compacted topics, troubleshooting strategies, and a security overview. This session is part 3 of 4 in our Fundamentals for Apache Kafka series.
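Since the session opens with a high-level code overview of a basic producer and consumer, here is a minimal sketch using Kafka's Java client. It is illustrative only: the broker address localhost:9092, the orders topic, the order-readers group id, and the record contents are assumptions, not details from the talk.

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class BasicProducerConsumer {
    public static void main(String[] args) {
        // Producer: records with the same key are hashed to the same
        // partition, preserving per-key ordering.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Stronger producer guarantees: wait for all in-sync replicas to
        // acknowledge, and let the broker deduplicate retried sends.
        producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
        producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "created")); // hypothetical topic and record
        }

        // Consumer: joining a group triggers a rebalance that divides the
        // topic's partitions among the group's members.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "order-readers"); // hypothetical group id
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("orders"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                        record.partition(), record.offset(), record.key(), record.value());
            }
        }
    }
}

Replication, retention, and compaction are configured per topic. As a companion sketch, the following creates a hypothetical compacted topic with the Java AdminClient; the topic name customer-profiles and the partition and replica counts are assumptions.

import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Three replicas give availability if a broker fails; compaction
            // retains the latest value per key rather than deleting by age.
            NewTopic topic = new NewTopic("customer-profiles", 6, (short) 3)
                    .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(Set.of(topic)).all().get();
        }
    }
}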

About Confluent
Confluent is building the foundational platform for data in motion. Our cloud-native offering is designed to be the intelligent connective tissue that enables real-time data from multiple sources to stream continuously across the organisation. With Confluent, organisations can create a central nervous system to innovate and win in a digital-first world.