Best Practices: Optimizing your big data costs with Amazon EMR

Presented by

Kunal Agarwal, CEO, Unravel Data & Roy Hasson, Sr. Manager, Data Lakes, AWS

About this talk

Data is a core part of every business, and as data volumes grow, so do the costs of processing it. Whether you run your Apache Spark, Hive, or Presto workloads on-premises or on AWS, Amazon EMR can help you save money. In this session, we'll discuss best practices and new features, such as Managed Scaling and improved Apache Spark performance, that can cut your operating costs when processing vast amounts of data with Amazon EMR. Hear from Unravel Data how Unravel APM, a full-stack monitoring, tuning, and troubleshooting solution for big data workloads running on Amazon EMR, gives you visibility and reporting on your cluster resource utilization and cost savings, and watch a demo of how it can help you optimize your Amazon EMR cluster costs.
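To make Managed Scaling concrete, here is a minimal sketch of attaching a managed scaling policy to an EMR cluster. The helper function, the capacity numbers, and the cluster ID are illustrative assumptions, not values from the talk; the policy shape follows the EMR `PutManagedScalingPolicy` API.

```python
# Illustrative sketch: building an EMR Managed Scaling policy payload.
# All capacity values below are hypothetical examples.

def build_managed_scaling_policy(min_units: int, max_units: int,
                                 max_on_demand: int) -> dict:
    """Build a ManagedScalingPolicy payload for the EMR API.

    With UnitType 'Instances', the capacity units are instance counts.
    """
    return {
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": min_units,
            "MaximumCapacityUnits": max_units,
            # Cap the On-Demand portion; capacity above this limit can be
            # provisioned as Spot, which is typically the larger saving.
            "MaximumOnDemandCapacityUnits": max_on_demand,
        }
    }

policy = build_managed_scaling_policy(min_units=2, max_units=20, max_on_demand=5)

# With boto3 installed and AWS credentials configured, the policy would be
# attached to a running cluster like this (cluster ID is hypothetical):
#   import boto3
#   emr = boto3.client("emr")
#   emr.put_managed_scaling_policy(ClusterId="j-XXXXXXXXXXXXX",
#                                  ManagedScalingPolicy=policy)
print(policy["ComputeLimits"]["MaximumCapacityUnits"])
```

With a policy like this in place, EMR resizes the cluster between the minimum and maximum limits based on workload, so you pay for large capacity only while jobs actually need it.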
At Unravel, we see an urgent need to help every business understand and optimize the performance of their applications, while managing data operations with greater insight, intelligence, and automation. For these businesses, Unravel is the AI-powered data operations company. We offer novel solutions that leverage AI, machine learning, and advanced analytics to help you fully operationalize the way you drive predictable performance in your modern data applications and pipelines.