What Does a CTO Do When a 60PB Hadoop Cluster Devours the IT Budget?

In 2019, the CTO of a large global bank recognized a problem: their data continued to grow, costs for their Hadoop cluster were rapidly escalating, and those costs were eating into the annual IT budget. Moving off of Hadoop, or a “lift-and-shift,” was out of the question. They needed a way to cap their cost and growth without impacting their ability to remain market competitive.

Learn how you can expose, simplify, and solve the problems created by large big data clusters. Save time and money, and ensure compliance.
Recorded Dec 15 2020 36 mins
Presented by
Hitachi Vantara Sr Director of Product Marketing, Chuck Yarbrough and Pepperdata Field Engineer, Alex Pierce
  • The Future of Big Data: A Perspective from IT Leaders Transforming IT Ops Mar 2 2021 6:00 pm UTC 38 mins
    Ahmed Kamran Imadi, Fortune 100 Finserv, Mark Kidwell, Autodesk, Satish Nekkalapudi, Magnite, Joel Stewart, Pepperdata
    Have you changed the way you use big data in your business? Understanding the rapid pace of data usage across your organization and planning for the future of big data is a key skill. Sometimes we all need a little insight.

    During this webinar, hear from industry leaders Ahmed Kamran Imadi, Big Data Solutions Engineering at a Fortune 100 financial institution; Mark Kidwell, Chief Data Architect at Autodesk; Satish Nekkalapudi, Sr. Manager at Magnite; and Joel Stewart, VP of Customer Success at Pepperdata, about what role big data plays in their businesses today and how they are adapting their IT ops and development teams to keep pace with change.

    Topics include:
    What will be big data’s role in the future for business and how will IT adapt and grow?
    How will the growth in big data affect IT ops and developer processes today?
    Will this change skill sets for these roles?
    What skills will be needed in IT as the need for big data increases?
  • Controlling Cost and Complexity in the Cloud with Managed Autoscaling Feb 16 2021 6:00 pm UTC 45 mins
    Pepperdata Field Engineer, Alex Pierce
    Autoscaling automatically increases or decreases the computational resources delivered to a cloud workload based on need. This typically means adding or reducing active servers (instances) that are leveraged against your workload within an infrastructure. The promise of autoscaling is that workloads receive exactly the cloud computational resources they require at any given time, and you only pay for the server resources you need, when you need them.

    Autoscaling enables applications to perform their best when demand changes, but the definition of performance varies by application. While some workloads are constant and predictable, others are CPU- or memory-bound, or “spiky” in nature. Autoscaling automatically addresses these variables to ensure optimal application performance. Amazon EMR, Azure HDInsight, and Google Cloud Dataproc all provide autoscaling for big data and Hadoop, but each takes a different approach. Autoscaling provides the elasticity that customers require for their big data workloads, but it can also lead to runaway waste, cost, and management complexity. Estimating the right number of cluster nodes for a workload is difficult; user-initiated cluster scaling requires manual intervention, and mistakes are often costly and disruptive.

    Join Pepperdata field engineer Alex Pierce for this discussion of the operational challenges associated with maintaining optimal big data performance in the cloud, what milestones to set, and recommendations on how to create a successful cloud migration framework. Topics include:

    – Types of scaling
    – What does autoscaling do well?
    – When should you use it?
    – Does traditional autoscaling limit your success?
    – What is optimized cloud autoscaling?
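The scale-up/scale-down behavior described above can be sketched as a simple threshold rule. This is a minimal illustration only; the utilization thresholds and node limits are hypothetical, and it is not Pepperdata's or any cloud provider's actual algorithm.

```python
# Minimal sketch of a threshold-based autoscaling decision.
# All thresholds and limits below are illustrative assumptions.

def desired_node_count(current_nodes, cpu_utilization,
                       scale_up_at=0.80, scale_down_at=0.30,
                       min_nodes=2, max_nodes=20):
    """Return the node count a simple autoscaler would target."""
    if cpu_utilization > scale_up_at:
        # Demand is high: add a node, up to the configured ceiling.
        return min(current_nodes + 1, max_nodes)
    if cpu_utilization < scale_down_at:
        # Demand is low: remove a node, down to the configured floor.
        return max(current_nodes - 1, min_nodes)
    # Within the target band: hold steady.
    return current_nodes

print(desired_node_count(5, 0.92))  # spiky load: scale up
print(desired_node_count(5, 0.12))  # idle: scale down
```

A rule this simple is exactly where runaway cost can creep in: without the ceiling and floor, a spiky workload keeps adding nodes it never releases.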
  • Big Data Observability – What Is It and How Do I Get It? Feb 9 2021 6:00 pm UTC 21 mins
    Heidi Carson, Pepperdata PM
    Observability is an extremely popular topic these days. What's driving this interest? Why is observability needed? What is the difference between observability and monitoring?

    When IT Ops knows there is a problem but can't pinpoint it or quickly get to the root cause, traditional monitoring approaches are no longer enough. Achieving observability requires carefully correlating many different data sources, including logs, metrics, and traces. This can present additional challenges in distributed environments that use containers and microservices.

    In this webinar, you’ll get the answers to these questions:

    - Why is observability essential in distributed big data environments?
    - What are the critical challenges of the multi-cloud and containerized world?
    - How can analytics stack performance solutions help you move from monitoring to observability?
  • How to Implement Cloud Observability Like a Pro Jan 26 2021 6:00 pm UTC 45 mins
    Heidi Carson, Pepperdata Product Manager and Kirk Lewis, Pepperdata Field Engineer
    Do traditional on-prem observability techniques translate to the cloud? Many big data enterprises lack observability and thus struggle to manage and understand unprecedented amounts of data in the cloud. A monitoring solution may alert to a problem, but it can’t pinpoint the issue or quickly get to the root cause.

    Observability, by contrast, tells you why you have a problem and often provides a recommendation on how to quickly resolve it. Combined with ML and automation, observability delivers actionable answers to optimize cloud-native applications while also improving overall cluster performance. Observability is particularly challenging in cloud environments, where the old, manual, cluster-by-cluster approach may be insufficient and error-prone.

    In this webinar, you will learn three key techniques for achieving big data observability in the cloud.
  • Where is Big Data Going in 2021? Recorded: Jan 19 2021 42 mins
    Kirk Lewis, Pepperdata Field Engineer
    As corporate big data leaders look to improve data quality, turn around big data projects in 2021, and optimize application and cluster performance to meet business objectives, big data and analytics remain essential resources for companies to survive in a highly competitive environment.

    As you help your organization plan for the future and prepare for where big data is going in 2021, join Pepperdata Field Engineer Kirk Lewis for this webinar, where he will discuss the following:

    - How cloud technology will make big data more accessible
    - How cloud data will shape customer experiences
    - Kubernetes
    - Simplicity (one tool for each job)
    - Complexity (several tools)
    - Cost control (managing data and cloud sprawl)
  • Kafka Performance: Best Practices for Monitoring and Improving Recorded: Jan 12 2021 47 mins
    Kirk Lewis
    Kafka performance relies on implementing continuous intelligence and real-time analytics. It is important to be able to ingest, check the data, and make timely business decisions.

    Stream processing systems provide a unified, high-performance architecture. This architecture processes real-time data feeds and guarantees system health. But performance and reliability are challenging. IT managers, system architects, and data engineers must address challenges with Kafka capacity planning to ensure the successful deployment, adoption, and performance of a real-time streaming platform. When something breaks, it can be difficult to restore service, or even to know where to begin.

    This webinar discusses best practices to overcome critical performance challenges for Kafka data streaming that can negatively impact the usability, operation, and maintenance of the platform, as well as the data and devices connected to it. Topics include: Kafka data streaming architecture, key monitoring metrics, offline partitioning, broker, topics, consumer groups, and topic lag.
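The "topic lag" metric listed above has a simple definition worth spelling out: for each partition, consumer lag is the broker's log-end offset minus the offset the consumer group last committed. A minimal sketch, with made-up sample offsets:

```python
# Sketch of Kafka consumer-group lag: per-partition lag is the
# log-end offset minus the group's committed offset.
# All offset values below are hypothetical sample data.

def consumer_lag(log_end_offsets, committed_offsets):
    """Return per-partition lag and the total for a consumer group."""
    lag = {p: log_end_offsets[p] - committed_offsets.get(p, 0)
           for p in log_end_offsets}
    return lag, sum(lag.values())

log_end = {0: 1_500, 1: 2_300, 2: 900}    # latest offset written per partition
committed = {0: 1_450, 1: 2_300, 2: 700}  # last offset the group committed

per_partition, total = consumer_lag(log_end, committed)
print(per_partition)  # partitions 0 and 2 are behind; partition 1 is caught up
print(total)
```

In practice these offsets come from the broker (e.g. via the `kafka-consumer-groups` tool or a metrics exporter); a steadily growing total is the classic sign that consumers can't keep up with producers.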
  • Best Practices for Spark Performance Management Recorded: Dec 22 2020 28 mins
    Alex Pierce, Field Engineer at Pepperdata
    Gain the knowledge of Spark veteran Alex Pierce on how to manage the challenges of maintaining the performance and usability of your Spark jobs.

    Apache Spark provides more sophisticated ways for enterprises to leverage big data than Hadoop does. However, the increasing amount of data being analyzed and processed through the framework is massive and continues to push the boundaries of the engine.

    This webinar draws on experience across dozens of production deployments and explores best practices for managing Apache Spark performance. Learn how to avoid common mistakes and improve the usability, supportability, and performance of Spark.

    Topics include:

    – Serialization
    – Partition sizes
    – Executor resource sizing
    – DAG management
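Two of the topics above, partition sizes and executor resource sizing, can be illustrated with a small sketch. The configuration keys are standard Spark settings, but the 128 MB partition target, the 64 GiB input size, and the executor values are illustrative assumptions, not tuning recommendations:

```python
import math

# Rough partition-count estimate: aim for partitions near a target
# size (128 MB is a common rule of thumb; adjust for your workload).
def estimate_partitions(input_bytes, target_partition_bytes=128 * 1024 * 1024):
    return max(1, math.ceil(input_bytes / target_partition_bytes))

# Standard Spark configuration keys touching the webinar's topics;
# the values here are hypothetical examples.
conf = {
    "spark.serializer": "org.apache.spark.serializer.KryoSerializer",  # serialization
    "spark.sql.shuffle.partitions": str(estimate_partitions(64 * 1024**3)),  # partition sizes
    "spark.executor.memory": "8g",  # executor resource sizing
    "spark.executor.cores": "4",
}

print(conf["spark.sql.shuffle.partitions"])  # 64 GiB / 128 MiB
```

The same arithmetic works in reverse when diagnosing a slow job: a few enormous partitions point to under-partitioning, while tens of thousands of tiny ones mean scheduling overhead dominates.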
  • What Does a CTO Do When a 60PB Hadoop Cluster Devours the IT Budget? Recorded: Dec 15 2020 36 mins
    Hitachi Vantara Sr Director of Product Marketing, Chuck Yarbrough and Pepperdata Field Engineer, Alex Pierce
    In 2019, the CTO of a large global bank recognized a problem: their data continued to grow, costs for their Hadoop cluster were rapidly escalating, and those costs were eating into the annual IT budget. Moving off of Hadoop, or a “lift-and-shift,” was out of the question. They needed a way to cap their cost and growth without impacting their ability to remain market competitive.

    Learn how you can expose, simplify, and solve the problems created by large big data clusters. Save time and money, and ensure compliance.
  • How to Save Even More with Qubole Recorded: Nov 17 2020 13 mins
    Alex Pierce
    Cloud providers make managing big data look easy, but autoscaling is wasteful and inefficient. Qubole takes advantage of the separation between compute and storage to help their customers reduce their spend in the cloud. However, Qubole customers can use cloud computing resources even more efficiently, only pay for what they use, and avoid over-provisioning servers and virtual machines with managed autoscaling.

    In this webinar, presenter Alex Pierce will use customer examples to demonstrate how Qubole customers can automatically improve infrastructure utilization and gain more throughput with Pepperdata big data performance management solutions.
  • The Future of Big Data: A Perspective from IT Leaders Transforming IT Ops Recorded: Nov 10 2020 39 mins
    Ahmed Kamran Imadi, Fortune 100 Finserv, Mark Kidwell, Autodesk, Satish Nekkalapudi, Magnite, Joel Stewart, Pepperdata
    Have you changed the way you use big data in your business? Understanding the rapid pace of data usage across your organization and planning for the future of big data is a key skill. Sometimes we all need a little insight.

    During this webinar, hear from industry leaders Ahmed Kamran Imadi, Big Data Solutions Engineering at a Fortune 100 financial institution; Mark Kidwell, Chief Data Architect at Autodesk; Satish Nekkalapudi, Sr. Manager at Magnite; and Joel Stewart, VP of Customer Success at Pepperdata, about what role big data plays in their businesses today and how they are adapting their IT ops and development teams to keep pace with change.

    Topics include:
    What will be big data’s role in the future for business and how will IT adapt and grow?
    How will the growth in big data affect IT ops and developer processes today?
    Will this change skill sets for these roles?
    What skills will be needed in IT as the need for big data increases?
  • Autoscaling Big Data Operations in the Cloud Recorded: Oct 27 2020 29 mins
    Kirk Lewis
    The ability to scale the number of nodes in your cluster up and down on the fly is among the major features that make cloud deployments attractive. Estimating the right number of cluster nodes for a workload is difficult; user-initiated cluster scaling requires manual intervention, and mistakes are often costly and disruptive.

    Autoscaling enables applications to perform their best when demand changes. But the definition of performance varies, depending on the app. Some are CPU-bound, others memory-bound. Some are “spiky” in nature, while others are constant and predictable. Autoscaling automatically addresses these variables to ensure optimal application performance. Amazon EMR, Azure HDInsight, and Google Cloud Dataproc all provide autoscaling for big data and Hadoop, but each takes a different approach.

    Pepperdata field engineer Kirk Lewis will discuss the operational challenges associated with maintaining optimal big data performance and what milestones to set, and will offer recommendations on how to create a successful cloud migration framework. Topics include:

    – Types of scaling
    – What does autoscaling do well? When should you use it?
    – Does traditional autoscaling limit your success?
    – What is optimized cloud autoscaling?
  • Fix Spark Performance Issues Without Thinking Too Hard Recorded: Oct 13 2020 27 mins
    Heidi Carson and Alex Pierce
    This discussion explores the results of analyzing thousands of Spark jobs on many multi-tenant production clusters. We will discuss common issues we have seen, the symptoms of those issues, and how you can address and overcome them without thinking too hard.

    Pepperdata big data performance management gathers trillions of performance data points on hundreds of production clusters running Spark, covering a variety of industries, applications, and workload types.

    Based on analyzing the behavior and performance of thousands of Spark applications and use case data from the Pepperdata Big Data Performance report, Heidi and Alex will discuss key performance insights. Topics include best and worst practices, gotchas, machine learning, and tuning recommendations.
  • Kafka Performance: Best Practices for Monitoring and Improving Recorded: Sep 29 2020 48 mins
    Kirk Lewis
    Kafka performance relies on implementing continuous intelligence and real-time analytics. It is important to be able to ingest, check the data, and make timely business decisions.

    Stream processing systems provide a unified, high-performance architecture. This architecture processes real-time data feeds and guarantees system health. But performance and reliability are challenging. IT managers, system architects, and data engineers must address challenges with Kafka capacity planning to ensure the successful deployment, adoption, and performance of a real-time streaming platform. When something breaks, it can be difficult to restore service, or even to know where to begin.

    This webinar discusses best practices to overcome critical performance challenges for Kafka data streaming that can negatively impact the usability, operation, and maintenance of the platform, as well as the data and devices connected to it. Topics include: Kafka data streaming architecture, key monitoring metrics, offline partitioning, broker, topics, consumer groups, and topic lag.
  • Top Considerations When Choosing a Big Data Performance Management Solution Recorded: Sep 15 2020 22 mins
    Alex Pierce
    Growing adoption of Hadoop and Spark has increased demand for big data performance management solutions that operate at scale. However, enterprise organizations quickly realize that scaling from pilot projects to large-scale production clusters involves a steep learning curve. Despite progress, DevOps teams still struggle with multi-tenancy, cluster performance, and workflow monitoring.

    In this webinar, field engineer Alex Pierce discusses the key things to consider when choosing a big data performance management solution. Learn how to:

    – Maximize your infrastructure investment
    – Achieve up to a 50 percent increase in throughput and run more jobs on existing infrastructure
    – Ensure cluster stability and efficiency
    – Avoid overspending on unnecessary hardware
    – Spend less time in backlog queues

    Learn how to automatically tune and optimize your cluster resources, and recapture wasted capacity. Alex will walk through use case examples to demonstrate the types of results you can expect to achieve in your own big data environment.
  • Spark Recommendations – Optimize Application Performance and Build Expertise Recorded: Aug 25 2020 31 mins
    Pepperdata Product Manager Heidi Carson and Pepperdata Field Engineer Alex Pierce
    Does your big data analytics platform provide you with the Spark recommendations you need to optimize your application performance and improve your own skillset? Explore how you can use Spark recommendations to untangle the complexity of your Spark applications, reduce waste and cost, and enhance your own knowledge of Spark best practices.

    Topics include:

    - Avoiding contention by ensuring your Spark applications request the appropriate amount of resources
    - Preventing memory errors
    - Configuring Spark applications for optimal performance
    - Real-world examples of impactful recommendations
    - And more!

    Join Product Manager Heidi Carson and Field Engineer Alex Pierce from Pepperdata to gain real-world experience with a variety of Spark recommendations, and participate in the Q and A that follows.
  • Reduce the Runaway Waste and Cost of Autoscaling Recorded: Aug 11 2020 38 mins
    Kirk Lewis
    Autoscaling is the process of automatically increasing or decreasing the computational resources delivered to a cloud workload based on need. This typically means adding or reducing active servers (instances) that are leveraged against your workload within an infrastructure. The promise of autoscaling is that workloads should get exactly the cloud computational resources they require at any given time, and you only pay for the server resources you need, when you need them. Autoscaling provides the elasticity that customers require for their big data workloads, but it can also lead to exorbitant runaway waste and cost.

    Pepperdata provides automated deployment options that can be seamlessly added to your Amazon EMR, Google Dataproc, and Qubole environments to recapture waste and reduce cost. Join us for this webinar where we will discuss how DevOps can use managed autoscaling to be even more efficient in the cloud. Topics include:

    – Types of scaling
    – What does autoscaling do well? When should you be using it?
    – Is traditional autoscaling limiting your big data success?
    – What is missing? Why is this problem important?
    – Managed cloud autoscaling with Pepperdata Capacity Optimizer
  • IT Cost Optimization with Big Data Analytics Performance Management Recorded: Jul 28 2020 34 mins
    Alex Pierce, Pepperdata Field Engineer
    Big data analytics performance management is a competitive differentiator and a priority for data-driven companies. However, optimizing IT costs while guaranteeing performance and reliability in distributed systems is difficult. The complexity of distributed systems makes it critically important to have unified visibility into the entire stack. This webinar discusses how to maximize the business value of your big data analytics stack investment and achieve ROI while reducing expenses. Learn how to:

    - Correlate visibility across big data applications and infrastructure for a complete and transparent view of performance and cost.
    - Continuously tune your platform, and run up to 50% more jobs on Hadoop clusters.
    - Optimally utilize resources, and ensure customer satisfaction.
    - Simplify troubleshooting and problem resolution while resolving issues to meet SLAs.

    In this webinar, learn specific ways to automatically tune and optimize big data cluster resources, recapture wasted capacity, and improve ROI for your big data analytics stack.
  • Big Data Observability - What Is It and How Do I Get It? Recorded: Jul 14 2020 21 mins
    Heidi Carson, Pepperdata PM
    Observability is an extremely popular topic these days. What's driving this interest? Why is observability needed? What is the difference between observability and monitoring?

    When IT Ops knows there is a problem but can't pinpoint it or quickly get to the root cause, traditional monitoring approaches are no longer enough. Achieving observability requires carefully correlating many different data sources, including logs, metrics, and traces. This can present additional challenges in distributed environments that use containers and microservices.

    In this webinar, you’ll get the answers to these questions:

    - Why is observability essential in distributed big data environments?
    - What are the critical challenges of the multi-cloud and containerized world?
    - How can analytics stack performance solutions help you move from monitoring to observability?
  • Best Practices for Spark Performance Management Recorded: Jun 23 2020 29 mins
    Alex Pierce, Field Engineer at Pepperdata
    Gain the knowledge of Spark veteran Alex Pierce on how to manage the challenges of maintaining the performance and usability of your Spark jobs.

    Apache Spark provides more sophisticated ways for enterprises to leverage big data than Hadoop does. However, the increasing amount of data being analyzed and processed through the framework is massive and continues to push the boundaries of the engine.

    This webinar draws on experience across dozens of production deployments and explores best practices for managing Apache Spark performance. Learn how to avoid common mistakes and improve the usability, supportability, and performance of Spark.

    Topics include:

    – Serialization
    – Partition sizes
    – Executor resource sizing
    – DAG management
  • Proven Approaches to Hive Query Tuning Recorded: Jun 9 2020 46 mins
    Kirk Lewis, Pepperdata Field Engineer
    Apache Hive is a powerful tool frequently used to analyze data while handling ad-hoc queries and regular ETL workloads. Despite being one of the more mature solutions in the Hadoop ecosystem, developers, data scientists, and IT operators are still unable to avoid common inefficiencies when running Hive at scale. Inefficient queries can mean missed SLAs, negative impact on other users, and slow database resources. Poorly tuned platforms or poorly sized queues can cause even efficient queries to suffer.

    This webinar discusses proven approaches to Hive query tuning that improve query speed and reduce cost. Learn how to understand the detailed performance characteristics of query workloads and the infrastructure-wide issues that impact these workloads.

    Pepperdata Field Engineer Kirk Lewis will discuss:

    - Finding problem queries - Pinpointing delayed queries, expensive queries, and queries that waste CPU and memory
    - Improving query utilization and performance with database and infrastructure metrics
    - Ensuring your infrastructure is not adversely impacting query performance
Performance Management for Big Data
Pepperdata is the Big Data performance company. Fortune 1000 enterprises depend on Pepperdata to manage and optimize the performance of Hadoop and Spark applications and infrastructure. Developers and IT Operations use Pepperdata solutions to diagnose and solve performance problems in production, increase infrastructure efficiencies, and maintain critical SLAs. Pepperdata automatically correlates performance issues between applications and operations, accelerates time to production, and increases infrastructure ROI. Pepperdata works with customer Big Data systems on-premises and in the cloud.
