5 Kafka Best Practices

Presented by Alex Pierce

About this talk

Learn five ways to improve your Kafka operations' readiness and platform performance through proven Kafka best practices. The influx of data from a wide variety of sources is already straining your big data IT infrastructure. On top of that, data must be ingested, processed, and made available in near real time to support business-critical use cases. Kafka data streaming is used today by 30% of Fortune 500 companies because of its ability to feed data in real time into predictive analytics engines that support these use cases. However, Kafka also comes with critical challenges and limitations. By following the latest Kafka best practices, you can manage Kafka more easily and effectively. Join us for a webinar where we will discuss five specific ways to keep your Kafka deployment optimized and easily managed.

Best practices covered:
- Monitoring key component states to understand Kafka cluster health
- Measuring crucial metrics to understand Kafka cluster performance
- Observing critical building blocks in the Kafka hardware stack
- Tracking important metrics for Kafka capacity planning
- Knowing what to alert on and what can be monitored passively
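The last practice, separating metrics that warrant an alert from those that can be watched passively, can be sketched as a simple rule table. This is a minimal illustration, not material from the talk: the metric names follow common Kafka broker JMX metrics, and the thresholds and `classify` helper are assumptions chosen for the example.

```python
# Illustrative sketch: split a Kafka metrics snapshot into "alert" and
# "passive" buckets. Metric names mirror common Kafka broker JMX metrics;
# the thresholds below are assumed for illustration.

ALERT_RULES = {
    # Any under-replicated partition usually warrants an immediate alert.
    "UnderReplicatedPartitions": lambda v: v > 0,
    # Offline partitions mean some data is unavailable.
    "OfflinePartitionsCount": lambda v: v > 0,
    # Exactly one active controller should exist cluster-wide.
    "ActiveControllerCount": lambda v: v != 1,
}

def classify(metrics: dict) -> dict:
    """Return which metrics should alert now and which to monitor passively."""
    result = {"alert": [], "passive": []}
    for name, value in metrics.items():
        rule = ALERT_RULES.get(name)
        if rule is not None and rule(value):
            result["alert"].append(name)
        else:
            result["passive"].append(name)
    return result

snapshot = {
    "UnderReplicatedPartitions": 3,
    "OfflinePartitionsCount": 0,
    "ActiveControllerCount": 1,
    "BytesInPerSec": 1_250_000,  # throughput: watch trends, no hard threshold
}
print(classify(snapshot))
# → {'alert': ['UnderReplicatedPartitions'],
#    'passive': ['OfflinePartitionsCount', 'ActiveControllerCount', 'BytesInPerSec']}
```

In practice the snapshot would come from a JMX exporter or monitoring agent; the point of the sketch is only that alert-worthy conditions are a small, explicit subset of everything you collect.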

More from this channel

Pepperdata is the Big Data performance company. Fortune 1000 enterprises depend on Pepperdata to manage and optimize the performance of Hadoop and Spark applications and infrastructure. Developers and IT Operations use Pepperdata solutions to diagnose and solve performance problems in production, increase infrastructure efficiency, and maintain critical SLAs. Pepperdata automatically correlates performance issues between applications and operations, accelerates time to production, and increases infrastructure ROI. Pepperdata works with customer Big Data systems on-premises and in the cloud.