Better Sales Performance with Databricks

In this webinar, you will learn how Yesware used Databricks to radically improve the reliability, scalability, and ease of development of its Apache Spark data pipeline. Specifically, the Yesware team will cover the workflow of taking an idea from the prototyping stage in a Databricks notebook to the final, fully tested, peer-reviewed, and versioned production feature that produces high-quality data for Yesware customers on a daily basis.
Recorded: Jun 23 2016 57 mins
Presented by
Justin Mills and Anna Holschuh of Yesware

Network with like-minded attendees

  • [[ session.user.profile.displayName ]]
    Add a photo
    • [[ session.user.profile.displayName ]]
    • [[ session.user.profile.jobTitle ]]
    • [[ session.user.profile.companyName ]]
    • [[ userProfileTemplateHelper.getLocation(session.user.profile) ]]
  • [[ card.displayName ]]
    • [[ card.displayName ]]
    • [[ card.jobTitle ]]
    • [[ card.companyName ]]
    • [[ userProfileTemplateHelper.getLocation(card) ]]
  • Channel
  • Channel profile
  • 5 Keys to Build Machine Learning and Visualization Into Your Application Nov 8 2017 7:00 pm UTC 60 mins
    John Huang, Engineering and Data Analytics Lead at Handshake
    Machine learning has unlocked new possibilities that deliver significant business value. However, most companies don’t have the resources to build and maintain the supporting infrastructure or to apply data science to build smarter solutions.

    Join us for this webinar and hear from John Huang, engineering and data analytics lead at Handshake, as he shares how he quickly and cost-effectively scaled a small engineering team to build a machine-learning-powered recommendation engine that profiles users and behaviors to present relevant next steps; a rough illustrative sketch follows the list below. In this webinar you will learn how to:

    -Simplify and accelerate data engineering processes including data ingest and ETL
    -Incorporate machine learning into your production application without an army of data scientists
    -Choose an analytics engine that will enable key analytics such as attribution, step analysis, and linear regression
    -Embed visualizations into your application that drive stickiness
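    The webinar doesn’t share Handshake’s code. As a rough, hypothetical illustration of a Spark-based recommendation engine, here is a minimal collaborative-filtering sketch using MLlib’s ALS; the schema, data, and hyperparameters are invented for illustration, not Handshake’s implementation.

    ```python
    # Minimal collaborative-filtering sketch using Spark MLlib's ALS.
    # Hypothetical schema and data -- not Handshake's implementation.
    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("reco-sketch").getOrCreate()

    # Implicit-feedback events: (user_id, item_id, click_count)
    events = spark.createDataFrame(
        [(0, 10, 3), (0, 11, 1), (1, 10, 5), (2, 12, 2)],
        ["user_id", "item_id", "clicks"],
    )

    als = ALS(
        userCol="user_id", itemCol="item_id", ratingCol="clicks",
        implicitPrefs=True,        # treat click counts as implicit feedback
        rank=10,
        coldStartStrategy="drop",  # skip users/items unseen at training time
    )
    model = als.fit(events)

    # Top-3 recommended "next steps" per user.
    model.recommendForAllUsers(3).show(truncate=False)
    ```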
  • How to Put Cluster Management on Autopilot Recorded: Oct 19 2017 49 mins
    Prakash Chockalingam, Product Manager at Databricks
    A key obstacle to doing data engineering at scale is having a robust distributed infrastructure on which frameworks like Apache Spark can run efficiently. Beyond building that infrastructure, keeping it running automatically is another critical piece of operating production workloads.

    Join this webinar to learn how Databricks’ Unified Analytics Platform can simplify your data engineering by putting your distributed infrastructure in autopilot mode; a hypothetical cluster spec follows the list below. Learn how to:
    -Autoscale compute and storage independently with Databricks’ automated infrastructure.
    -Significantly reduce cloud costs through cutting-edge cluster management features.
    -Selectively control cluster management features to balance ease of use against manual control.
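    As a minimal sketch of what autopilot mode looks like in practice, the snippet below builds an autoscaling cluster spec for the Databricks Clusters REST API; the name, instance type, and limits are hypothetical placeholders, and the current Databricks documentation should be consulted for exact field names.

    ```python
    # Hypothetical autoscaling cluster spec for the Databricks Clusters REST API
    # (POST /api/2.0/clusters/create). Placeholder values throughout; check the
    # Databricks documentation for exact field names and current versions.
    import json

    cluster_spec = {
        "cluster_name": "autopilot-etl",
        "spark_version": "<runtime-version>",  # a Databricks runtime version string
        "node_type_id": "<instance-type>",     # a cloud instance type
        "autoscale": {                         # Databricks resizes within this range
            "min_workers": 2,
            "max_workers": 8,
        },
        "autotermination_minutes": 30,         # shut down automatically when idle
    }

    print(json.dumps(cluster_spec, indent=2))  # POST this body to the clusters API
    ```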
  • How CardinalCommerce Significantly Improved Data Pipeline Speeds by 200% Recorded: Sep 21 2017 61 mins
    Christopher Baird from CardinalCommerce
    CardinalCommerce was acquired by Visa earlier this year for its critical role in payments authentication. Through predictive analytics and machine learning, Cardinal measures performance and behavior of the entire authentication process across checkout, issuing and ecosystem partners to recommend actions, reduce fraud and drive frictionless digital commerce.

    With Databricks, CardinalCommerce simplified data engineering to improve the performance of their ETL pipeline by 200% while reducing operational costs significantly via automation, seamless integration with key technologies, and improved process efficiencies.

    Join this webinar to learn how CardinalCommerce was able to:
    -Simplify access to data across the organization
    -Accelerate data processing by 200%
    -Reduce EC2 costs through faster performance and automated infrastructure
    -Visualize performance metrics to customers and stakeholders
  • Performance Benchmarking Big Data Platforms in the Cloud Recorded: Aug 22 2017 47 mins
    Reynold Xin, Co-founder and Chief Architect at Databricks
    Performance is often a key factor in choosing big data platforms. Over the past few years, Apache Spark has seen rapid adoption by enterprises, making it the de facto data processing engine thanks to its performance and ease of use.


    Since starting the Spark project, our team at Databricks has been focusing on accelerating innovation by building the most performant and optimized Unified Analytics Platform for the cloud. Join Reynold Xin, Co-founder and Chief Architect of Databricks as he discusses the results of our benchmark (using TPC-DS industry standard requirements) comparing the Databricks Runtime (which includes Apache Spark and our DBIO accelerator module) with vanilla open source Spark in the cloud and how these performance gains can have a meaningful impact on your TCO for managing Spark.

    This webinar covers:
    - Differences between open source Spark and Databricks Runtime.
    - Details of the benchmark, including hardware configuration, dataset, etc.
    - A summary of the benchmark results, which reveal performance gains of up to 5x over open source Spark and other big data engines.
    - A live demo comparing processing speeds of Databricks Runtime vs. open source Spark.

    Special Announcement: We will also announce an experimental feature as part of the webinar that aims at drastically speeding up your workloads even more. Be the first to see this feature in action. Register today!
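    The webinar’s TPC-DS methodology is far more rigorous than anything that fits here; purely as a flavor of how one compares engine runtimes, here is a toy timing harness you could run on two different clusters or runtimes.

    ```python
    # Toy timing harness: run the same aggregation on different clusters/runtimes
    # and compare wall-clock time. A sketch only -- not the TPC-DS methodology.
    import time
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("bench-sketch").getOrCreate()
    df = spark.range(10_000_000).withColumnRenamed("id", "k")

    def timed(label, fn):
        start = time.perf_counter()
        fn()                       # force full evaluation inside the timer
        print(f"{label}: {time.perf_counter() - start:.2f}s")

    # Repeat a few runs to warm up caches and JIT before trusting the numbers.
    timed("group-count over 10M rows",
          lambda: df.groupBy((df.k % 100).alias("bucket")).count().collect())
    ```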
  • Productionizing Apache Spark™ MLlib Models for Real-time Prediction Serving Recorded: Aug 10 2017 52 mins
    Joseph Bradley and Sue Ann Hong
    Data science and machine learning tools traditionally focus on training models. When companies begin to employ machine learning in actual production workflows, they encounter new sources of friction such as sharing models across teams, deploying identical models on different systems, and maintaining featurization logic. In this webinar, we discuss how Databricks provides a smooth path for productionizing Apache Spark MLlib models and featurization pipelines.

    Databricks Model Scoring provides a simple API for exporting MLlib models and pipelines. These exported models can be deployed in many production settings, including:

    * External real-time low-latency prediction serving systems, without Spark dependencies,
    * Apache Spark Structured Streaming jobs, and
    * Apache Spark batch jobs.

    In this webinar, we overview our solution’s functionality, describe its architecture, and demonstrate how to use it to deploy MLlib models to production.
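    The Model Scoring export API itself is Databricks-proprietary and isn’t reproduced here; as a stand-in, this sketch shows standard MLlib pipeline persistence, which covers the save-and-reload half of the story with plain open-source Spark.

    ```python
    # Standard MLlib pipeline persistence (open-source Spark) -- a stand-in
    # sketch, not the proprietary Databricks Model Scoring export API.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline, PipelineModel
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("persist-sketch").getOrCreate()

    train = spark.createDataFrame(
        [(1.0, 0.0, 1.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0)],
        ["f1", "f2", "label"],
    )

    # Keep featurization and model in one Pipeline so they ship together.
    pipeline = Pipeline(stages=[
        VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
        LogisticRegression(featuresCol="features", labelCol="label"),
    ])
    model = pipeline.fit(train)

    # Save once; reload in any Spark job -- batch or Structured Streaming.
    model.write().overwrite().save("/tmp/my_pipeline_model")
    reloaded = PipelineModel.load("/tmp/my_pipeline_model")
    reloaded.transform(train).select("label", "prediction").show()
    ```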
  • Build, Scale, and Deploy Deep Learning Pipelines with Ease Recorded: Jul 27 2017 62 mins
    Sue Ann Hong, Tim Hunter and Jules S. Damji
    Deep Learning has shown tremendous success, yet it often requires substantial effort to leverage its power. Existing Deep Learning frameworks require writing a lot of code to work with a model, let alone in a distributed manner.

    This webinar is the first in a series in which we survey the state of Deep Learning at scale and introduce Deep Learning Pipelines, a new open-source package for Apache Spark. This package simplifies Deep Learning in three major ways:

    1. It has a simple API that integrates well with enterprise Machine Learning pipelines.
    2. It automatically scales out common Deep Learning patterns, thanks to Spark.
    3. It enables exposing Deep Learning models through the familiar Spark APIs, such as MLlib and Spark SQL.

    In this webinar, we will look at the complex problem of image classification, using Deep Learning and Spark. Using Deep Learning Pipelines, we will show (see the sketch after this list):

    * how to build deep learning models in a few lines of code;
    * how to scale common tasks like transfer learning and prediction; and
    * how to publish models in Spark SQL.
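    For flavor, here is the canonical transfer-learning pattern from the package’s early examples; the package has since been archived, so treat the API below as 2017-era, and the paths and labels as hypothetical.

    ```python
    # Transfer-learning sketch with Deep Learning Pipelines (sparkdl), following
    # its 2017-era examples. The package is now archived; paths are hypothetical.
    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression
    from pyspark.sql.functions import lit
    from sparkdl import DeepImageFeaturizer, readImages

    # One row per image, labeled by source directory (hypothetical data layout).
    tulips = readImages("/data/flowers/tulips").withColumn("label", lit(1))
    daisies = readImages("/data/flowers/daisy").withColumn("label", lit(0))
    train = tulips.union(daisies)

    # A pre-trained InceptionV3 acts as a fixed featurizer; only the small
    # logistic-regression head is trained -- transfer learning in a few lines.
    featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features",
                                     modelName="InceptionV3")
    head = LogisticRegression(maxIter=10, labelCol="label")
    model = Pipeline(stages=[featurizer, head]).fit(train)
    ```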
  • Accelerate Data Science with Better Data Engineering with Databricks Recorded: Jul 13 2017 63 mins
    Andrew Candela
    Whether you’re processing IoT data from millions of sensors or building a recommendation engine to provide a more engaging customer experience, the ability to derive actionable insights from massive volumes of diverse data is critical to success. MediaMath, a leading adtech company, relies on Apache Spark to process billions of data points spanning ads, user cookies, impressions, clicks, and more — translating to several terabytes of data per day. To support the needs of the data science teams, data engineering must build data pipelines for both ETL and feature engineering that are scalable, performant, and reliable.

    Join this webinar to learn how MediaMath leverages Databricks to simplify mission-critical data engineering tasks that surface data directly to clients and drive actionable business outcomes. This webinar will cover:

    - Transforming TBs of data with RDDs and PySpark responsibly
    - Using the JDBC connector to write results to production databases seamlessly
    - Comparisons with a similar approach using Hive
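    As a rough illustration of the JDBC write pattern mentioned above, here is a minimal sketch (assuming a notebook’s spark session); the paths, table, and connection details are placeholders, not MediaMath’s setup.

    ```python
    # Sketch: aggregate event data, then write the result to a production
    # database over JDBC. All paths and connection details are placeholders.
    result = (
        spark.read.parquet("/data/events/")   # hypothetical TB-scale input
             .groupBy("campaign_id")
             .count()
    )

    result.write.jdbc(
        url="jdbc:postgresql://db-host:5432/analytics",
        table="campaign_counts",
        mode="overwrite",
        properties={
            "user": "etl_user",
            "password": "...",                # supply via secrets, not source code
            "driver": "org.postgresql.Driver",
        },
    )
    ```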
  • How Databricks and Machine Learning Are Powering the Future of Genomics Recorded: May 25 2017 59 mins
    Frank Austin Nothaft, Genomics Data Engineer at Databricks
    With the drastic drop in the cost of sequencing a single genome, many organizations across biotechnology, pharmaceuticals, biomedical research, and agriculture have begun to make use of genome sequencing. While the sequence of a single genome may provide insight about the individual who was sequenced, to derive maximal insight from the genomic data, the ultimate goal is to query across a cohort of many hundreds to thousands of individuals.

    Join this webinar to learn how Databricks — powered by Apache Spark — enables interactive queries across a genomics database and simplifies the application of machine learning models and statistical tests to genomics data across patients, to derive more insight into the biological processes driven by genomic alterations.

    In this webinar, we will:

    - Demonstrate how Databricks can rapidly query annotated variants across a cohort of 1,000 samples.
    - Look at a case study using Databricks to improve the performance of running an expression quantitative trait loci (eQTL) test across samples from the GEUVADIS project.
    - Show how we can parallelize conventional genomics tools using Databricks.
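    As a hypothetical sketch of the kind of interactive, cohort-wide query described above (the table path and schema are invented, assuming a notebook’s spark session):

    ```python
    # Hypothetical variant query across a cohort -- invented schema, shown only
    # to illustrate the interactive-query pattern described in the webinar.
    variants = spark.read.parquet("/genomics/annotated_variants")

    (variants
     .filter(variants.gene == "BRCA1")           # restrict to one gene
     .filter(variants.allele_frequency < 0.01)   # keep rare variants
     .groupBy("sample_id")                       # per-individual counts
     .count()
     .orderBy("count", ascending=False)
     .show())
    ```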
  • Deploying Machine Learning Techniques at Petabyte Scale with Databricks Recorded: May 22 2017 61 mins
    Saket Mengle, Senior Principal Data Scientist at DataXu
    The central premise of DataXu is to apply data science to better marketing. At its core is the Real-time Bidding Platform, which processes 2 petabytes of data per day and responds to ad auctions at a rate of 2.1 million requests per second across 5 continents. Serving on top of this platform is DataXu’s analytics engine, which gives clients insightful analytics reports that address their marketing questions. Common requirements for both platforms include real-time processing, scalable machine learning, and ad-hoc analytics.

    This webinar will showcase DataXu’s successful use cases of the Apache® Spark™ framework and Databricks to address all of the above challenges, while maintaining the agility and rapid-prototyping strengths needed to take a product from the initial R&D phase to full production.

    We will also discuss in detail:

    - Challenges of using Apache Spark in a petabyte-scale machine learning system, and how we worked to solve them.
    - Best practices and key steps for large-scale Spark ETL processing and model testing, all the way through to interactive analytics.
  • Deep Learning on Apache® Spark™: Workflows and Best Practices Recorded: May 4 2017 47 mins
    Tim Hunter and Jules S. Damji
    The combination of Deep Learning with Apache Spark has the potential for tremendous impact in many sectors of the industry. This webinar, based on the experience gained in assisting customers with the Databricks Virtual Analytics Platform, will present some best practices for building deep learning pipelines with Spark.

    Rather than comparing deep learning systems or specific optimizations, this webinar will focus on issues that are common to deep learning frameworks when running on a Spark cluster, including:

    * optimizing cluster setup;
    * configuring the cluster;
    * ingesting data; and
    * monitoring long-running jobs.

    We will demonstrate the techniques we cover using Google’s popular TensorFlow library. More specifically, we will cover typical issues users encounter when integrating deep learning libraries with Spark clusters.

    Clusters can be configured to avoid task conflicts on GPUs and to allow using multiple GPUs per worker. Setting up pipelines for efficient data ingest improves job throughput, and monitoring helps both with tuning configuration and with keeping long-running deep learning jobs stable.
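    One widely used trick of that era for avoiding GPU task conflicts was to make each task claim every core on its executor, so only one task (and one GPU user) runs per executor at a time; the sketch below shows the idea with hypothetical core counts. (Later Spark 3.x releases added first-class GPU scheduling via the spark.executor.resource.gpu.* settings.)

    ```python
    # Sketch: force one task per executor so concurrent tasks cannot fight over
    # the executor's GPU. Core counts are hypothetical; tune to your hardware.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("dl-cluster-sketch")
        .config("spark.executor.cores", "8")  # cores available on each executor
        .config("spark.task.cpus", "8")       # each task claims all 8 => 1 task/executor
        .getOrCreate()
    )
    ```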
  • Databricks Product Demonstration Recorded: Apr 19 2017 48 mins
    Don Hilborn
    This is a live demonstration of the Databricks virtual analytics platform.
  • How to Increase Data Science Agility at Scale with Databricks Recorded: Mar 30 2017 51 mins
    Maddie Schults
    Apache® Spark™ has become an indispensable tool for data science teams. Its performance and flexibility enable data scientists to do everything from interactive exploration and feature engineering to model tuning with ease. In this webinar, Maddie Schults - Databricks product manager - will discuss how Databricks allows data science teams to use Apache Spark for their day-to-day work.

    You will learn:

    - Obstacles faced by data science teams in the era of big data;
    - How Databricks simplifies Spark development;
    - Key Databricks functionalities, demonstrated live, that help data scientists become more productive.
  • Databricks Product Demonstration Recorded: Mar 29 2017 63 mins
    Miklos Christine
    This is a live demonstration of the Databricks virtual analytics platform.
  • Databricks Product Demonstration Recorded: Mar 15 2017 45 mins
    Jason Pohl
    This is a live demonstration of the Databricks virtual analytics platform.
  • Apache® Spark™ MLlib 2.x: How to Productionize your Machine Learning Models Recorded: Mar 9 2017 61 mins
    Richard Garris and Jules S. Damji
    Apache Spark has rapidly become a key tool for data scientists to explore, understand, and transform massive datasets and to build and train advanced machine learning models. The question then becomes: how do I deploy these models to a production environment? How do I embed what I have learned into customer-facing data applications?

    In this webinar, we will discuss best practices from Databricks on how our customers productionize machine learning models, do a deep dive with actual customer case studies, and show live tutorials of a few example architectures and code in Python, Scala, Java and SQL.
  • How Smartsheet operationalized Apache Spark with Databricks Recorded: Feb 23 2017 61 mins
    Francis Lau, Senior Director, Product Intelligence at Smartsheet
    Apache Spark is red hot, but without the requisite skill sets, it can be a challenge to operationalize — making it difficult to build a robust production data pipeline that business users and data scientists across your company can use to unearth insights.

    Smartsheet is the world’s leading SaaS platform for managing and automating collaborative work. With over 90,000 companies and millions of users, it helps teams get work done ranging from managing simple task lists to orchestrating the largest sporting events and construction projects.

    In this webinar, you will learn how Smartsheet uses Databricks to overcome the complexities of Spark to build their own analysis platform that enables self-service insights at will, scale, and speed to better understand their customers’ diverse use cases. They will share valuable patterns and lessons learned in both technical and adoption areas to show how they achieved this, including:

    - How to build a robust metadata-driven data pipeline that processes application and business systems data to provide a 360-degree view of customers and to drive smarter business systems integrations.
    - How to provide an intuitive and valuable “pyramid” of datasets usable by both technical and business users.
    - Their roll-out approach and the materials used for company-wide adoption, allowing users to go from zero to insights with Spark and Databricks in minutes.
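    The webinar doesn’t publish Smartsheet’s pipeline code; as a minimal sketch of the metadata-driven idea (one generic job looping over table configs, with all names hypothetical, assuming a notebook’s spark session):

    ```python
    # Minimal metadata-driven pipeline sketch: table definitions live in
    # metadata, and a single generic job loops over them. Hypothetical names.
    table_configs = [
        {"source": "/raw/app_events", "target": "analytics.app_events",
         "partition_by": "event_date"},
        {"source": "/raw/billing", "target": "analytics.billing",
         "partition_by": "invoice_date"},
    ]

    for cfg in table_configs:
        df = spark.read.json(cfg["source"])      # ingest raw data
        (df.write
           .mode("overwrite")
           .partitionBy(cfg["partition_by"])     # layout driven by metadata
           .saveAsTable(cfg["target"]))          # lands in the curated "pyramid"
    ```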
  • Apache® Spark™ - The Unified Engine for All Workloads Recorded: Jan 12 2017 63 mins
    Tony Baer, Principal Analyst at Ovum
    The Apache® Spark™ compute engine has gone viral – not only is it the most active Apache big data open source project, but it is also the fastest growing big data analytics workload, on and off Hadoop. The major reason behind Spark’s popularity with developers and enterprises is its flexibility to support a wide range of workloads including SQL query, machine learning, streaming, and graph analysis.


    This webinar features Ovum analyst Tony Baer, who will explain the real-world benefits to practitioners and enterprises when they build a technology stack based on a unified approach with Apache Spark.

    This webinar will cover:
    - Findings around the growth of Spark and diverse applications using machine learning and streaming.
    - The advantages of using Spark to unify all workloads, rather than stitching together many specialized engines like Presto, Storm, MapReduce, Pig, and others.
    - Use case examples that illustrate the flexibility of Spark in supporting various workloads.
  • Apache® Spark™ MLlib 2.x: Migrating ML Workloads to DataFrames Recorded: Dec 8 2016 61 mins
    Joseph K. Bradley and Jules S. Damji
    In the Apache® Spark™ 2.x releases, Machine Learning (ML) is focusing on DataFrame-based APIs. This webinar is aimed at helping users take full advantage of the new APIs. Topics will include migrating workloads from RDDs to DataFrames, ML persistence for saving and loading models, and the roadmap ahead.
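    As a minimal before-and-after sketch of that migration for one estimator (assuming a notebook session with spark and sc available):

    ```python
    # Before: the RDD-based API (pyspark.mllib).
    from pyspark.mllib.classification import LogisticRegressionWithLBFGS
    from pyspark.mllib.regression import LabeledPoint

    rdd = sc.parallelize([LabeledPoint(1.0, [0.0, 1.1]),
                          LabeledPoint(0.0, [2.0, 1.0])])
    old_model = LogisticRegressionWithLBFGS.train(rdd)

    # After: the DataFrame-based API (pyspark.ml), with built-in ML persistence.
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.linalg import Vectors

    df = spark.createDataFrame(
        [(1.0, Vectors.dense([0.0, 1.1])), (0.0, Vectors.dense([2.0, 1.0]))],
        ["label", "features"],
    )
    new_model = LogisticRegression().fit(df)
    new_model.write().overwrite().save("/tmp/lr_model")  # save/load for models
    ```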
  • How to Evaluate Cloud-based Apache® Spark™ Platforms Recorded: Nov 16 2016 62 mins
    Nik Rouda - ESG Global
    Since its release, Apache Spark has quickly become the fastest growing big data processing engine. But few companies have the domain expertise and resources to build their own Spark-based infrastructure - often resulting in a mix of tools that are complex to stand up and time-consuming to maintain.

    There are several cloud-based platforms available that allow you to harness the power of Spark while reaping the advantages of the cloud. This webinar features ESG Global senior analyst Nik Rouda who will share research and best practices to help decision makers evaluate the most popular cloud-based Apache Spark solutions and to understand the differences between them.
  • Databricks for Data Engineers Recorded: Oct 26 2016 49 mins
    Prakash Chockalingam
    Apache Spark has become an indispensable tool for data engineering teams. Its performance and flexibility made ETL one of Spark’s most popular use cases. In this webinar, Prakash Chockalingam - seasoned data engineer and PM - will discuss how Databricks allows data engineering teams to overcome common obstacles while building production-quality data pipelines with Spark. Specifically, you will learn:

    - Obstacles faced by data engineering teams while building ETL pipelines;
    - How Databricks simplifies Spark development;
    - Key Databricks functionalities, demonstrated live, geared towards making data engineers more productive.
Making Big Data Simple
Databricks’ mission is to accelerate innovation for its customers by unifying Data Science, Engineering and Business. Founded by the team who created Apache Spark™, Databricks provides a Unified Analytics Platform for data science teams to collaborate with data engineering and lines of business to build data products. Users achieve faster time-to-value with Databricks by creating analytic workflows that go from ETL and interactive exploration to production. The company also makes it easier for its users to focus on their data by providing a fully managed, scalable, and secure cloud infrastructure that reduces operational complexity and total cost of ownership.
