Reduce the Runaway Waste and Cost of Autoscaling

Presented by Kirk Lewis

About this talk

Autoscaling is the process of automatically increasing or decreasing the computational resources allocated to a cloud workload based on need. In practice, this means adding or removing the active servers (instances) that run your workload within an infrastructure. The promise of autoscaling is that workloads get exactly the cloud resources they require at any given time, and you pay only for the server capacity you need, when you need it.

Autoscaling provides the elasticity that customers require for their big data workloads, but it can also lead to exorbitant runaway waste and cost. Pepperdata provides automated deployment options that can be seamlessly added to your Amazon EMR, Google Dataproc, and Qubole environments to recapture waste and reduce cost.

Join us for this webinar where we will discuss how DevOps can use managed autoscaling to be even more efficient in the cloud (see the configuration sketch after this list). Topics include:
– Types of scaling
– What does autoscaling do well? When should you be using it?
– Is traditional autoscaling limiting your big data success?
– What is missing? Why is this problem important?
– Managed cloud autoscaling with Pepperdata Capacity Optimizer
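To make the idea of scaling boundaries concrete, here is a minimal sketch of how scale-out limits might be set on an Amazon EMR cluster using EMR managed scaling via boto3. The cluster ID, region, and capacity values are illustrative assumptions, not values from the talk, and Pepperdata Capacity Optimizer works as a separate layer on top of this kind of native policy.

# Minimal sketch: cap autoscaling growth on an EMR cluster with
# EMR managed scaling. Cluster ID, region, and limits are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Bound how far the cluster can scale out, so a burst of pending work
# cannot grow the fleet (and the bill) without limit.
emr.put_managed_scaling_policy(
    ClusterId="j-EXAMPLECLUSTERID",            # hypothetical cluster ID
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": 2,         # floor: keep a small core fleet
            "MaximumCapacityUnits": 20,        # ceiling: cap runaway scale-out
            "MaximumOnDemandCapacityUnits": 5, # beyond this, use Spot capacity
        }
    },
)

A static ceiling like this prevents unbounded scale-out, but it does not decide whether the instances already running are actually well utilized; that gap between provisioned and used capacity is the waste the webinar focuses on.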

More from this channel

Pepperdata is the Big Data performance company. Fortune 1000 enterprises depend on Pepperdata to manage and optimize the performance of Hadoop and Spark applications and infrastructure. Developers and IT Operations use Pepperdata solutions to diagnose and solve performance problems in production, increase infrastructure efficiencies, and maintain critical SLAs. Pepperdata automatically correlates performance issues between applications and operations, accelerates time to production, and increases infrastructure ROI. Pepperdata works with customer Big Data systems on-premises and in the cloud.