What is Pepperdata?

Pepperdata has engineered a big data APM solution that empowers operators to automatically optimize the performance and capacity of their big data infrastructure while enabling developers to improve the performance of their applications.

Unlike other APM tools that merely summarize static data and make application performance recommendations in isolation, Pepperdata delivers complete system analytics on hundreds of real-time operational metrics continuously collected from applications as well as the infrastructure — including CPU, RAM, disk I/O, and network usage metrics on every job, task, user, host, workflow, and queue.

The result is a comprehensive, intuitive dashboard that provides a holistic view of cluster resources, system alerts, and dynamic recommendations for more accurate and effective troubleshooting, capacity planning, reporting, and application performance management.

Pepperdata diagnoses problems quickly, automatically alerts about critical conditions affecting system performance, and provides recommendations for rightsizing containers, queues and other resources. Leveraging AI-driven resource management, Pepperdata tunes and optimizes infrastructure resources to recapture wasted capacity and get the most out of the infrastructure.

Welcome to the new world of real-time big data application and infrastructure performance management.

Welcome to Pepperdata.

Optimize your infrastructure, your applications, and your time — at scale.
Recorded: Mar 25 2019 2 mins
Presented by Pepperdata

  • Take the Guesswork out of Migrating to the Cloud Jan 15 2020 6:00 pm UTC 45 mins
    Panel: Ash Munshi, Charles Marker
    In the early days of cloud migration, it seemed all upside: operating a data center in the cloud would always be cheaper than dedicated on-premises servers, with nothing to worry about. Fast-forward a few years, and IT Ops is in a visibility crisis: many big data teams cannot tell what they are spending, or why.

    Ultimately, in the quest to control and understand cloud spend, analytics are critically important. Without powerful, in-depth insights, big data teams simply don’t have the information they need to do their job.

    Please join Pepperdata CEO Ash Munshi and VP of Engineering Charles Marker for a roundtable Q and A discussion on how to take the guesswork out of migrating to the cloud and reduce the runaway management costs of a hybrid data center.
  • Stop Manually Tuning and Start Getting ROI From Your Big Data Infrastructure Recorded: Dec 4 2019 26 mins
    Pepperdata Field Engineer, Eric Lotter
    Would your big data organization benefit from automatic capacity optimization that eliminates manual tuning and enables you to run 30-50% more jobs on your Hadoop clusters?

    As analytics platforms grow in scale and complexity, both on-prem and in the cloud, managing and maintaining efficiency becomes a critical challenge, and inefficiency wastes money.

    In this webinar, Pepperdata Field Engineer Eric Lotter discusses how your organization can:

    – Maximize your infrastructure investment
    – Achieve up to a 50 percent increase in throughput and run more jobs on existing infrastructure
    – Ensure cluster stability and efficiency
    – Avoid overspending on unnecessary hardware
    – Spend less time in backlog queues

    On a typical cluster, hundreds and even thousands of decisions are made per second, increasing enterprise cluster throughput by up to 50 percent. Even the most experienced operator dedicated to resource management can’t make manual configuration changes with the required precision and speed. Learn how to automatically tune and optimize your cluster resources, and recapture wasted capacity. Eric will provide relevant use case examples and the results achieved to show you how to get more out of your infrastructure investment.
  • Introduction to Platform Spotlight - Big Data Analytics Performance Management Recorded: Nov 20 2019 8 mins
    Pepperdata Field Engineer, Kirk Lewis
    You’re constantly looking at different tools to understand the performance of your clusters, manage and monitor resource capacity, maximize your existing infrastructure investment, and forecast resource needs. But it’s impossible to get an accurate view without access to the right data.

    Your operators are challenged with configuring and sizing critical resources running on multi-tenant clusters with mixed workloads, and you receive alerts without enough detail to isolate and resolve problems. Improving the performance of your clusters and successfully managing capacity requires an understanding of dozens of performance metrics and tuning parameters.

    Pepperdata Platform Spotlight continuously collects extensive unique data—that nobody else collects—about your hosts, queues, users, applications and all relevant resources, providing you with a 360° cluster view to quickly diagnose performance issues and make resource decisions.
  • How to Overcome the Five Most Common Spark Challenges Recorded: Nov 19 2019 33 mins
    Alex Pierce
    Apache Spark is a full-fledged data engineering toolkit that enables you to operate on large data sets without worrying about the underlying infrastructure. Spark is known for its speed, which comes from an improved implementation of MapReduce that keeps data in memory instead of persisting it to disk. Alongside its great benefits, however, Spark has its issues, including complex deployment and scaling. How do you best deal with these and other challenges and maximize the value you are getting from Spark?

    Drawing on experiences across dozens of production deployments, Pepperdata Field Engineer Alexander Pierce explores issues observed in cluster environments with Apache Spark and offers guidelines on how to overcome the most common Spark problems you are likely to encounter. Alex will also accompany his presentation with demonstrations and examples. Attendees can use this information to improve the usability and supportability of Spark in their projects and successfully overcome common challenges. During this webinar, attendees will learn about the following topics (a brief configuration sketch follows the list):

    – Serialization and its role in Spark performance
    – Partition recommendations and sizing
    – Executor resource sizing and heap utilization
    – Driver-side vs. executor-side processing: reducing idle executor time
    – Using shading to manage library conflicts
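
    For illustration only (not material from the webinar itself), here is a minimal Scala sketch of the kind of explicit executor and shuffle-partition sizing these topics touch on. The core, memory, and partition figures are assumptions for a hypothetical 16-core, 64 GB worker node, not recommendations for any particular cluster.

    ```scala
    import org.apache.spark.sql.SparkSession

    // Hypothetical sizing for a 16-core, 64 GB worker; adjust against observed metrics.
    val spark = SparkSession.builder()
      .appName("executor-sizing-sketch")
      .config("spark.executor.cores", "5")            // a few cores per executor, leaving headroom for OS daemons
      .config("spark.executor.memory", "14g")         // heap per executor
      .config("spark.executor.memoryOverhead", "2g")  // off-heap/overhead allowance per executor
      .config("spark.sql.shuffle.partitions", "400")  // target partitions of roughly 100-200 MB, not the default 200
      .getOrCreate()
    ```

    Undersized executors tend to show up as spills and long GC pauses, while oversized ones strand memory, so figures like these are normally iterated against observed job metrics rather than set once.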
  • Capacity Planning for Big Data Hadoop Environments Recorded: Jul 29 2019 19 mins
    Kirk Lewis, Pepperdata Field Engineer
    As the data analytics field matures, the amount of data generated is growing rapidly, and so is its use by enterprise organizations. This increase in data improves data analytics, creating a continuous cycle of data and information generation. To manage these new volumes of data, IT organizations and DevOps teams must understand resource usage and right-size their Hadoop clusters to balance OPEX and CAPEX.

    This presentation discusses capacity planning for big data Hadoop environments. Pepperdata field engineer Kirk Lewis explores big data Hadoop capacity planning at the cluster level, the queue level, and the application level via the Pepperdata big data performance management UI.
  • Optimizing the Performance of Your Critical Big Data Applications Recorded: Jun 27 2019 32 mins
    Pepperdata’s Bob Williams and Ryan Clark
    Moving Hadoop and Spark workloads and applications to the cloud is either a reality or a near-term goal for an overwhelming number of enterprises. For most organizations, optimizing cloud use to improve operational efficiency and achieve cost savings is a primary objective. But workload migration takes time, during which an organization must manage application performance both on-premises and in the cloud while maintaining a close watch on ROI.

    This webinar addresses key questions for organizations deploying big data workloads and applications:
    - Why is my application running slowly, or why has it stopped?
    - How can I achieve faster MTTR and reduce resource requirements?
    - How can I save up to 50% on infrastructure spend and still achieve SLAs?
    - How can I automatically correlate application and infrastructure performance metrics to get the “big picture”?
    - How accurate are my cloud migration and long-term deployment cost estimates?

    Join the Pepperdata performance optimization team to learn more...
  • Seven Steps to a Successful AWS Cloud Migration Recorded: May 22 2019 31 mins
    Ashrith Mekala, Head of Engineering at Cloudwick
    Cloud migration is more about processes than data. Even seemingly simple tasks like file distribution can require complex migration steps to ensure that the resulting cloud infrastructure matches the desired workflow. Most cloud benefits, from cost savings to scalability, are justifiable. But a proven methodology, a complete understanding of the risks, careful planning and flawless execution are necessary to realize those returns.

    Join presenter Ashrith Mekala, Head of Engineering at Cloudwick, as he shares his experience as a big data solutions architect who has successfully guided dozens of enterprises through the AWS cloud migration process. Attendees can apply these learnings to refine their own processes, avoid the risks, and optimize the benefits of current and planned cloud migrations.

    Topics include:

    – Migration models - forklift, hybrid, native
    – Framework - data migration, data validation and app integration
    – Methodology - including pre-migration state and cloud cost assessment using Pepperdata
    – Gap analysis and project planning
    – Moving from pilot to production
    – Key transition tasks and ongoing support
  • Five Mistakes to Avoid When Using Spark Recorded: May 15 2019 32 mins
    Alex Pierce, Pepperdata Field Engineer
    Apache Spark is playing a critical role in the adoption and evolution of Big Data technologies because it provides more sophisticated ways for enterprises to leverage Big Data than Hadoop alone. The amount of data being analyzed and processed through the framework is massive and continues to push the boundaries of the engine.

    Drawing on experiences across dozens of production deployments, Pepperdata Field Engineer Alexander Pierce explores issues observed in a cluster environment with Apache Spark and offers guidelines on how to avoid common mistakes. Attendees can use these observations to improve the usability and supportability of Spark and avoid such issues in their projects.

    Topics include:

    – Serialization
    – Partition sizes
    – Executor resource sizing
    – DAG management
    – Shading
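
    As a companion to the serialization topic above, here is a minimal, hedged Scala sketch of switching Spark to Kryo serialization and registering application classes. ClickEvent is a hypothetical class used purely for illustration, and the settings shown are assumptions rather than guidance taken from the webinar.

    ```scala
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Hypothetical application type; register your own domain classes instead.
    case class ClickEvent(userId: Long, url: String, ts: Long)

    val conf = new SparkConf()
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryo.registrationRequired", "true") // fail fast on unregistered classes during testing
      .registerKryoClasses(Array(classOf[ClickEvent]))

    val spark = SparkSession.builder()
      .config(conf)
      .appName("kryo-serialization-sketch")
      .getOrCreate()
    ```

    Registering classes keeps Kryo from writing full class names with every record, and requiring registration surfaces unregistered classes during testing instead of letting them silently inflate shuffle and cache sizes.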
  • Cloud Migration: Opportunities and Risks for the Business Unit Recorded: Apr 24 2019 31 mins
    John Armstrong, Head of Product Marketing
    The business case for cloud migration is compelling: cost reductions, ease of growth and expansion, outsourcing of infrastructure and maintenance, and improved access to the latest technologies. However, many organizations that are migrating to the cloud focus on technical factors and overlook the broader business implications of their projects. As a best practice, assessing the opportunities and risks for the organization should be a joint effort led by the IT and business unit teams. At the highest level, a business justification focuses on the return on investment (ROI) associated with a proposed technical change. However, many supporting data points are required to populate the formula and achieve a realistic calculation.

    In this webinar, we will address these and other critical questions:

    What are the key business expectations associated with a cloud migration?
    What are the implications if the cloud migration doesn’t go as planned?
    What business considerations should be included in any cloud migration plan?

    As always, this webinar will be followed by a short Q and A session with the audience. Please join us!
  • What is Pepperdata? Recorded: Mar 25 2019 2 mins
    Pepperdata
  • Pepperdata Application Spotlight FREE Recorded: Mar 25 2019
    Pepperdata
    Use Application Spotlight for free on up to 20 nodes
    Pepperdata Application Spotlight is a self-service APM solution that provides developers with a holistic and real-time view of their applications in the context of the entire big data cluster, allowing them to quickly identify and fix problems (failed Spark applications, for instance) to improve application runtime, predictability, performance and efficiency.
  • Successfully Migrating Big Data Workloads to the Cloud: What You Need to Know Recorded: Mar 20 2019 28 mins
    John Armstrong, Head of Product Marketing, Pepperdata
    Moving workloads to the cloud is either a reality or a near-term goal for an overwhelming number of enterprises. For most organizations, optimizing cloud use to improve operational efficiency and achieve cost savings is the primary objective. But navigating cloud adoption is a complex process that requires careful planning and analyses to achieve desired economic goals and ensure success. It’s a technology decision that has significant impact on the business.

    Economic benefits vs. costs must be accurately estimated and carefully weighed before making a move to the cloud: not just for the cluster, but for every workload queue. This webinar will take the guesswork out of calculating cloud migration costs and provide you with the detailed analyses you need to make fully informed technical and business decisions before embarking on your cloud migration journey.

    This webinar addresses critical questions for organizations considering or already deploying big data workloads in the cloud:

    - How accurate are my cloud migration and long-term deployment cost estimates?
    - Which queues will be more cost-effective in the cloud, and which ones are better left on-premises?
    - What AWS, Azure, Google, or IBM cloud instances will work best for each of my queues? CPU-optimized? Memory-optimized? General purpose?
    - How can I help my team to make a successful transition to deploying workloads using the public cloud?
  • Breaking Through Big Data Bottlenecks Recorded: Mar 6 2019 28 mins
    Kirk Lewis, Pepperdata Field Engineer
    Bottlenecks are a fact of life in IT. No matter how fast you build something, somebody will find a way to max it out. But bottlenecks can be crippling to organizations whose business operations depend on reliable and consistent service levels. Deploying an application performance management (APM) solution optimized to address big data challenges is essential for rapidly identifying and overcoming congestion within the operational environment.

    In this webinar, we will:
    · Walk through a number of bottlenecks, ranging from “easy to find” to “hard to find”
    · Discuss examples involving CPU (easy) as well as memory, network, and I/O (hard)
    · Show you how to quickly identify the root cause and resolve big data bottlenecks (a code sketch follows this list)
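
    As a rough, Spark-specific illustration of that last point (and not the Pepperdata product itself), the sketch below uses Spark's public listener API to flag tasks whose metrics hint at where a bottleneck lives; the 20 percent GC threshold is an arbitrary assumption chosen for demonstration.

    ```scala
    import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

    // Flags tasks whose metrics suggest a bottleneck: high GC time points to memory
    // pressure, spills to undersized executors, heavy shuffle reads to network or disk I/O.
    class BottleneckListener extends SparkListener {
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        val m = taskEnd.taskMetrics
        if (m != null) {
          val gcFraction =
            if (m.executorRunTime > 0) m.jvmGCTime.toDouble / m.executorRunTime else 0.0
          if (gcFraction > 0.2 || m.memoryBytesSpilled > 0) {
            println(
              f"stage ${taskEnd.stageId}: gcFraction=$gcFraction%.2f " +
                s"spilledBytes=${m.memoryBytesSpilled} shuffleReadBytes=${m.shuffleReadMetrics.totalBytesRead}")
          }
        }
      }
    }

    // Register on an existing SparkSession:
    // spark.sparkContext.addSparkListener(new BottleneckListener())
    ```

    The same signals (GC time, spilled bytes, shuffle volume) are also visible per task in the Spark UI; a listener simply makes them easy to log and aggregate across runs.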
  • 8 ROI Benefits of APM Recorded: Feb 20 2019 27 mins
    John Armstrong, Head of Product Marketing, Pepperdata
    For most enterprises, APM is considered an essential element of IT operations, bridging production and development with IT and digital business. As companies invest in new technology and projects in their digital transformation journeys, it’s critical to understand the ROI value of those investments.

    This webinar will look at eight ROI benefits of APM — both financial and non-financial — that organizations need to consider when evaluating APM solutions. These include increased developer productivity, reduced downtime, improved business continuity, and more.

    Attendees will learn:
    - 8 elements to consider when assessing APM solutions
    - How to evaluate the financial and non-financial benefits of technology solutions
    - How leading organizations measure ROI, often through hard lessons learned
  • Ensuring Uptime for Healthcare Recorded: Feb 4 2019 2 mins
    Dr. Charles Boicey, Clearsense Chief Innovation Officer
    “There is no tolerance for downtime in healthcare, which is why we bought Pepperdata. We started using Pepperdata on day one because Pepperdata instruments and monitors the resources as well as the applications running on the Clearsense Platform.

    "No else does that. We couldn’t do what we do without Pepperdata,”
    –Dr. Charles Boicey, Clearsense Chief Innovation Officer
  • Leveraging APM to Overcome Big Data Challenges Recorded: Jan 16 2019 40 mins
    John Armstrong, Head of Product Marketing, Pepperdata
    Leveraging APM to Overcome Big Data Development and Infrastructure Performance Challenges

    While businesses are deriving tremendous insights from ever-growing big data sets, development teams are challenged with increasingly resource-hungry workloads and overwhelming bottlenecks that impact productivity. This makes big data application performance management (APM) a must-have in today’s ecosystem. Join us to learn how APM can help enterprises overcome development and performance challenges associated with growing big data stores.
    Attendees will learn:
    - What is driving the demand for big data in application development
    - Challenges application developers face when working with increasingly larger workloads
    - How APM can mitigate these and other challenges, improve workflow productivity, and optimize resource effectiveness
  • Optimizing BI Workloads with Pepperdata Recorded: Dec 11 2018 36 mins
    Pepperdata
    BI workloads are an increasingly important part of your big data system and typically consist of large queries that analyze huge amounts of data. Because of this, BI users frequently complain about the responsiveness of their applications.

    Learn how Pepperdata enables you to tune your big data system and applications to meet SLAs for critical BI workloads.
  • Pepperdata Helps Clearsense Ensure 99.999% Uptime and Maximize Life-Saving Apps Recorded: Nov 14 2018 46 mins
    Charles Boicey, Clearsense Chief Innovation Officer
    Clearsense is a healthcare technology company that helps its customers realize measurable value from data with real-time analytics. Clearsense collects patient information — from monitors, ventilators and other biomedical devices — and provides real-time views of patient conditions and changes for early detection and prevention. With no room for downtime, Clearsense relies on Pepperdata to help them ensure uptime and optimize application performance.

    Join Pepperdata and Clearsense Chief Innovation Officer Charles Boicey for this informative webinar.

    Learn how Clearsense relies on Pepperdata to:
    - Ensure 99.999% uptime for life-saving applications
    - Enable clinicians to better monitor and alert on health issues and avoid catastrophic events
    - Provide customers with fast and reliable access to data and analytics
    - Run applications at maximum efficiency
    - Plan accurately for growth
    - And more

    “We have no tolerance for downtime, which is why we use Pepperdata.”
    – Clearsense Chief Innovation Officer Charles Boicey
  • Operations Manager Q and A – Do More with Your Big Data Platform Recorded: Oct 24 2018 24 mins
    Alex Pierce, Field Engineer
    Organizations are faced with countless obstacles to achieving big data success, including platform, application and user issues, as well as limited resources. This webinar will answer operational management questions around optimizing performance and maximizing capacity, such as “Who’s blowing up our cluster?”, “How can I run more applications?”, and more. You will learn from our expert, based on real-world deployments, how a complete APM solution provides:

    – Reduced mean time to problem resolution.
    – An accurate understanding of the most expensive users.
    – Improved platform throughput, uptime, efficiency and performance.
    – Reduced backlog.
    – And more.

    Presenter

    Alex Pierce joined Pepperdata in 2014. Previously, he worked as a senior solution architect at WanDisco. Before that, he was the senior solution architect at Red Hat. Alex has a strong background in system administration and big data.
  • Capacity Manager Q and A – How to Improve Productivity, Throughput, and Uptime Recorded: Oct 10 2018 34 mins
    Kirk Lewis, Field Engineer
    There are numerous challenges to leveraging your big data infrastructure for optimal performance. This webinar answers operational management questions around optimizing performance and maximizing capacity, such as “Who’s blowing up our cluster?”, “How can I run more applications?” and more. You will learn from our expert, based on real-world deployments, how a complete APM solution delivers:

    – Improved throughput, uptime, efficiency and performance.
    – Accurate capacity planning.
    – Accurately deployed capacity for predictable performance.
    – Recaptured wasted resources to maximize current infrastructure.

    Presenter

    Kirk joined Pepperdata in 2015. Previously, he was a Solutions Engineer at StackVelocity. Before that he was the lead technical architect for big data production platforms at American Express. Kirk has a strong background in big data.
Performance Management for Big Data
Pepperdata is the Big Data performance company. Fortune 1000 enterprises depend on Pepperdata to manage and optimize the performance of Hadoop and Spark applications and infrastructure. Developers and IT Operations use Pepperdata solutions to diagnose and solve performance problems in production, increase infrastructure efficiencies, and maintain critical SLAs. Pepperdata automatically correlates performance issues between applications and operations, accelerates time to production, and increases infrastructure ROI. Pepperdata works with customer Big Data systems on-premises and in the cloud.
