
Pepperdata

  • Creatively Visualizing Spark Data
    Christina Holland | Recorded: Jul 18 2017 | 26 mins
    Pepperdata tech talk by Pepperdata software engineer Christina Holland on creatively visualizing Spark data and designing new ways to see pipelines.
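
    For a taste of what such visualization can look like, here is a minimal sketch (ours, not from the talk) that reads per-task durations from a Spark event log and plots their distribution. The log path is hypothetical; the JSON event names follow Spark's standard event-log format.

      import json
      import matplotlib.pyplot as plt

      # Path to a Spark event log (hypothetical; produced when
      # spark.eventLog.enabled=true)
      EVENT_LOG = "/tmp/spark-events/app-20170718-0001"

      durations_ms = []
      with open(EVENT_LOG) as f:
          for line in f:
              event = json.loads(line)
              # SparkListenerTaskEnd events carry per-task timing in "Task Info"
              if event.get("Event") == "SparkListenerTaskEnd":
                  info = event["Task Info"]
                  durations_ms.append(info["Finish Time"] - info["Launch Time"])

      plt.hist(durations_ms, bins=50)
      plt.xlabel("Task duration (ms)")
      plt.ylabel("Task count")
      plt.title("Distribution of Spark task durations")
      plt.show()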
  • Production Spark Series Part 4: Spark Streaming Delivers Critical Patient Care
    Charles Boicey, Chief Innovation Officer, Clearsense | Recorded: Jun 22 2017 | 58 mins
    Clearsense is a pioneer in healthcare data science solutions, using Spark Streaming to provide real-time updates to healthcare providers for critical care needs. Clinicians can make timely decisions by assessing a patient's risk for Code Blue, sepsis, and other conditions, based on analysis of streaming physiological monitoring, streaming diagnostic data, and the patient's historical record. Additionally, this technology is used to monitor operational and financial processes for efficiency and cost savings. This talk discusses the architecture and the challenges of providing real-time SLAs along with 100% uptime expectations in a multi-tenant Hadoop cluster.
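
    As a flavor of how such a pipeline fits together, here is a minimal Structured Streaming sketch. It is our illustration, not Clearsense's architecture: the schema, input path, and alert threshold are all invented, and a real risk model (e.g., for sepsis) would be far richer than a threshold on heart rate.

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F
      from pyspark.sql.types import (StructType, StructField, StringType,
                                     DoubleType, TimestampType)

      spark = SparkSession.builder.appName("vitals-monitor").getOrCreate()

      # Hypothetical schema for streaming physiological readings
      schema = StructType([
          StructField("patient_id", StringType()),
          StructField("heart_rate", DoubleType()),
          StructField("event_time", TimestampType()),
      ])

      # Read newline-delimited JSON files as they arrive (path assumed)
      vitals = spark.readStream.schema(schema).json("/data/vitals/incoming")

      # Flag patients whose average heart rate over a one-minute window
      # crosses a toy threshold
      alerts = (vitals
          .withWatermark("event_time", "2 minutes")
          .groupBy(F.window("event_time", "1 minute"), "patient_id")
          .agg(F.avg("heart_rate").alias("avg_hr"))
          .where(F.col("avg_hr") > 130))

      query = alerts.writeStream.outputMode("update").format("console").start()
      query.awaitTermination()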
  • Spark Summit 2017 - Connect Code to Resource Consumption to Scale Production
    Vinod Nair, Director of Product Management | Recorded: Jun 6 2017 | 26 mins
    Apache Spark is a dynamic execution engine that can take relatively simple Scala code and create complex and optimized execution plans. In this talk, we will describe how user code translates into Spark drivers, executors, stages, tasks, transformations, and shuffles. We will also discuss various sources of information on how Spark applications use hardware resources, and show how application developers can use this information to write more efficient code. Finally, we will show how Pepperdata’s products can identify such resource usage, tie it to specific lines of code, and help Spark application owners quickly pinpoint the root causes of common problems such as job slowdowns, inadequate memory configuration, and Java garbage collection issues.
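
    To see that code-to-execution mapping for yourself, you can ask Spark for the physical plan it builds. A small sketch (ours, not Pepperdata's tooling): a groupBy is a wide transformation, so the plan contains an Exchange, which is the shuffle that splits the job into separate stages.

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("plan-demo").getOrCreate()

      df = spark.range(1000000)

      # Grouping forces a shuffle, visible in the physical plan as
      # "Exchange hashpartitioning"
      counts = df.groupBy((df.id % 10).alias("bucket")).count()
      counts.explain()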
  • Spark Summit 2017 – Spark Summit Bay Area Apache Spark Meetup
    Sean Suchter, Pepperdata Founder and CTO | Recorded: Jun 5 2017 | 98 mins
    Bay Area Apache Spark Meetup at the 10th Spark Summit, featuring tech talks about using Apache Spark at scale from Pepperdata’s CTO Sean Suchter, RISELab’s Dan Crankshaw, and Databricks’ Spark committers and contributors.
  • HDFS on Kubernetes: Lessons Learned
    Kimoon Kim, Engineer, Pepperdata | Recorded: Jun 2 2017 | 36 mins
    There is growing interest in running Spark natively on Kubernetes (see https://github.com/apache-spark-on-k8s/spark). Spark applications often access data in HDFS, and Spark supports HDFS data locality by scheduling tasks on nodes that hold the task's input data on their local disks. When running Spark on Kubernetes, if the HDFS daemons run outside Kubernetes, applications slow down because they must access the data remotely.

    In this webinar, we will demonstrate how to run HDFS inside Kubernetes to speed up Spark. In particular, we will show:

    - How the Spark scheduler can still provide HDFS data locality on Kubernetes, by discovering the mapping from Kubernetes containers to physical nodes and from those nodes to HDFS datanode daemons.
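
    One rough way to sanity-check that mapping yourself (our sketch, not the method from the webinar) is to compare the datanode hosts HDFS reports for a file with the nodes your datanode pods are scheduled on. The label selector is an assumption; adjust it for your deployment.

      import subprocess

      # Where HDFS says the blocks of a file live
      fsck = subprocess.run(
          ["hdfs", "fsck", "/data/input/part-00000",
           "-files", "-blocks", "-locations"],
          capture_output=True, text=True, check=True)
      print(fsck.stdout)

      # Which physical nodes the datanode pods run on (label assumed)
      pods = subprocess.run(
          ["kubectl", "get", "pods", "-l", "app=hdfs-datanode", "-o", "wide"],
          capture_output=True, text=True, check=True)
      print(pods.stdout)

      # Locality holds when the datanode addresses in the fsck output map to
      # the same nodes that host the Spark executor pods.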
  • Production Spark Series Part 3: Tuning Apache Spark Jobs
    Simon King, Engineer, Pepperdata | Recorded: May 30 2017 | 40 mins
    A Spark application that works well in a development environment or with sample data may not behave as expected when run against a much larger dataset in a production environment. Pepperdata Application Profiler, based on the open source Dr. Elephant project, can help you tune your Spark application based on current dataset characteristics and the cluster execution environment. Application Profiler uses a set of heuristics to provide actionable recommendations that help you quickly tune your applications.

    Occasionally an application will fail (or execute too slowly) due to circumstances outside your control: a busy cluster, another misbehaving YARN application, bad luck, or bad "cluster weather". We'll discuss ways to use Pepperdata's Cluster Analyzer to quickly determine when an application failure may not be your fault, and how to diagnose and fix the symptoms you can affect.
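
    As an illustration of what such a heuristic can look like (our sketch, not Dr. Elephant's actual rules), a tuning check might flag executors that spend too much of their run time in garbage collection:

      def gc_heuristic(run_time_ms, gc_time_ms, threshold=0.10):
          """Flag an executor whose GC time exceeds a fraction of run time.

          The 10% threshold is illustrative; real heuristics grade severity
          across several levels and suggest concrete config changes.
          """
          ratio = gc_time_ms / run_time_ms if run_time_ms else 0.0
          if ratio > threshold:
              return (f"GC is {ratio:.0%} of run time; consider increasing "
                      "spark.executor.memory or caching less data per executor")
          return "GC overhead looks healthy"

      # Example: 90 s of run time, 18 s spent in GC
      print(gc_heuristic(90_000, 18_000))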
  • Production Spark Series Part 2: Connecting Your Code to Spark Internals
    Sean Suchter, CTO/Co-Founder, Pepperdata | Recorded: May 9 2017 | 39 mins
    Spark is a dynamic execution engine that can take relatively simple Scala code and create complex and optimized execution plans. In this talk, we will describe how user code translates into Spark drivers, executors, stages, tasks, transformations, and shuffles. We will also explain why this tight interplay between user code and Spark internals is critical to Spark's design and allows very efficient execution. Users and operators who are aware of these concepts will become more effective in their interactions with Spark.
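
    One quick way to connect your own code to those internals (an illustration of the idea, not material from the talk) is to print an RDD's lineage: indentation changes mark shuffle boundaries, so the reduceByKey below splits the job into two stages.

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("lineage-demo").getOrCreate()
      sc = spark.sparkContext

      words = sc.parallelize(["a", "b", "a", "c", "b", "a"])
      counts = words.map(lambda w: (w, 1)).reduceByKey(lambda x, y: x + y)

      # reduceByKey introduces a ShuffledRDD; the indentation step in the
      # lineage below is the stage boundary
      print(counts.toDebugString().decode("utf-8"))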
  • Big Data for Big Data: Machine Learning Models of Hadoop Cluster Behavior
    Sean Suchter, CTO/Co-Founder, Pepperdata, and Shekhar Gupta, Software Engineer, Pepperdata | Recorded: Apr 10 2017 | 37 mins
    Learn how to use machine learning to improve cluster performance.

    This talk describes the use of very fine-grained performance data from many Hadoop clusters to build a model predicting excessive swapping events.

    Performance of batch processing systems such as YARN is generally measured by throughput: the amount of workload (tasks) completed in a given time window. For a given cluster size, throughput can be increased by running as much workload as possible on each host, utilizing all of its free resources. Because each node runs a complex mix of different tasks and containers, the performance characteristics of the cluster change dynamically. As a result, there is always a danger of overutilizing host memory, which can result in extreme swapping, or thrashing. The impact of thrashing can be severe: it can reduce throughput rather than increase it.

    By using very fine-grained (5-second) data from many production clusters running very different workloads, we have trained a generalized model that detects the onset of thrashing within seconds of the first symptom. This detection has proven fast enough to enable effective mitigation of thrashing's negative effects, allowing the hosts to continuously provide high throughput.

    To build this system we used hand-labeling of bad events combined with large-scale data processing using Hadoop, HBase, Spark, and IPython for experimentation. We will discuss the methods used as well as novel findings about Big Data cluster performance.
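
    To make the detection idea concrete, here is a toy sketch with synthetic data (ours; the production approach is a trained model, not a fixed threshold) that watches a 5-second swap-out counter and flags the onset of sustained swapping:

      # Pages swapped out per 5-second interval (synthetic samples)
      samples = [0, 0, 12, 0, 3, 850, 2200, 4100, 3900, 4500]

      WINDOW = 3          # consecutive intervals required to confirm onset
      THRESHOLD = 1000    # pages per interval; illustrative only

      def detect_onset(series, window=WINDOW, threshold=THRESHOLD):
          """Return the index where `window` consecutive samples exceed threshold."""
          run = 0
          for i, value in enumerate(series):
              run = run + 1 if value > threshold else 0
              if run == window:
                  return i - window + 1
          return None

      onset = detect_onset(samples)
      print(f"thrashing onset at sample {onset}" if onset is not None
            else "no thrashing detected")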
  • Production Spark Webinar Series - Part 1: Best Practices for Spark in Production
    Chad Carson, Co-Founder, and Ed Colonna, VP of Marketing | Recorded: Mar 7 2017 | 59 mins
    Join us for Part 1 of our Production Spark Webinar Series. This first installment gathers Spark experts and practitioners from varying backgrounds to discuss the top trends, challenges, and use cases for production Spark applications. Our expert panel will discuss several key considerations when running Spark in production and take questions directly from the audience.

    Our distinguished panel of industry experts is as follows:

    Dr. Babak Behzad, Senior Software Engineer, SAP/Altiscale
    Charles Boicey, Chief Innovation Officer, Clearsense
    Richard Williamson, Principal Engineer, Silicon Valley Data Science
    Andrew Ray, Principal Data Engineer, Silicon Valley Data Science
    Sean Suchter, CTO and Co-Founder, Pepperdata
  • Philips Wellcentive Cuts Hadoop Troubleshooting from Months to Hours
    Geovanie Marquez, Hadoop Architect at Philips Wellcentive | Recorded: Dec 6 2016 | 48 mins
    Philips Wellcentive, a SaaS health management and data analytics company, relies on a nightly MapReduce job to process and analyze data for their entire patient population, from birth to the current day. The job assesses a number of different patient characteristics and powers the analytics that physician organizations need to deliver better services. When this job began to fail repeatedly, the Hadoop team spent months trying to identify the root cause using existing monitoring tools, but was unable to come up with an explanation for the failures and slowdowns.

    Join our webinar to hear more about why existing Hadoop monitoring tools were insufficient to diagnose the root cause of Philips Wellcentive’s problems and how Pepperdata helped them to significantly improve their Big Data operations. The webinar will cover the different approaches that Philips Wellcentive took to rectify their missed SLAs, and how Pepperdata ultimately helped them quickly troubleshoot their performance problems and ensure their jobs complete on time.
