Managing Kubernetes cloud costs and recapturing resource waste is challenging for even the most experienced ITOps teams. The promise of autoscaling is that workloads receive exactly the compute resources they require at any given time, and you pay only for the server resources you need, when you need them. However, most autoscaling features aren’t granular enough to address today’s variable workload and application needs. Without granular, automatic control over your cloud instance resources, you are likely overpaying for your Kubernetes workloads.
All the major cloud providers offer autoscaling for Kubernetes, but each takes a different approach.
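As a concrete illustration of how coarse the built-in controls can be, the Kubernetes-native HorizontalPodAutoscaler scales a Deployment on a single aggregate signal such as average CPU utilization. The sketch below uses the official Python kubernetes client and assumes the autoscaling/v2 API is available in your cluster and client version; the namespace and the Deployment name web-frontend are hypothetical placeholders, not anything referenced in this post.

```python
# Minimal sketch: create a CPU-based HorizontalPodAutoscaler with the
# official Python kubernetes client. Assumes autoscaling/v2 is available;
# the namespace and Deployment name ("web-frontend") are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

hpa = client.V2HorizontalPodAutoscaler(
    api_version="autoscaling/v2",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web-frontend-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-frontend"
        ),
        min_replicas=2,
        max_replicas=10,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    # Add replicas when average CPU utilization across pods
                    # exceeds 70% of the requested CPU.
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=70
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

A single utilization threshold like this reacts to cluster-wide averages rather than to per-workload variation, which is exactly the kind of coarse control that leaves resource waste on the table.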
Join Pepperdata Field Engineer Alex Pierce for this discussion of the operational challenges of maintaining optimal big data performance in the cloud, with a focus on Kubernetes autoscaling best practices.