Apache Kafka has become the default choice for real-time and batch data processing, facilitating parallel processing of messages. However, as message volumes scale, continuous optimization of Kafka is critical to maintaining optimal system performance.
IT managers, system architects, and data engineers are responsible for the successful deployment, adoption, and performance of a real-time streaming platform. Poor Kafka performance and reliability can negatively impact the usability, operation, and maintenance of the platform, as well as the data and devices connected to it. When something breaks, it can be difficult to restore service, or even to know where to begin.
This webinar discusses best practices for maintaining optimal performance of Kafka data streaming and covers the following topics:
– Apache Kafka cluster components: producers, consumers, and brokers
– Key Kafka performance metrics: throughput and latency
– Kafka performance tuning: tuning brokers, producers, and consumers
– Offline partitions: causes and recovery
– Balancing Apache Kafka clusters
– Optimizing Kafka performance
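As a small taste of the tuning topics above, producer throughput and latency are often traded off against each other through a handful of standard producer settings. The sketch below shows commonly adjusted Kafka producer properties (these are real Kafka configuration keys, but the values are illustrative starting points, not recommendations for any specific workload):

```properties
# Illustrative producer tuning sketch; values are workload-dependent.
# Wait up to 10 ms to batch records together (higher = better throughput,
# slightly higher latency).
linger.ms=10
# Allow larger batches per partition before sending (bytes).
batch.size=65536
# Compress batches to reduce network and disk I/O at some CPU cost.
compression.type=lz4
# Require acknowledgment from all in-sync replicas for durability;
# use acks=1 if lower latency matters more than durability.
acks=all
```

Broker and consumer tuning follow the same pattern of trading throughput, latency, and durability against one another; the webinar walks through the corresponding settings for each component.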