Databricks' Data Pipelines: Journey and Lessons Learned

With components like Spark SQL, MLlib, and Streaming, Spark is a unified engine for building data applications. In this talk, we will take a look at how we use Spark on our own Databricks platform.

In this webinar, we discuss the role and importance of ETL and the common features of an ETL pipeline. We then show how the same ETL fundamentals are applied and (more importantly) simplified within Databricks’ data pipelines. With Apache Spark as the foundation, ETL processes can be built within a single framework. With Databricks, you can develop your pipeline code in notebooks, create Jobs to productionize your notebooks, and use REST APIs to turn all of this into a continuous integration workflow. We will share tips and tricks for doing ETL with Spark and lessons learned from our own pipeline.
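
Below is a minimal sketch, not taken from the webinar itself, of the workflow described above: triggering a notebook-backed Job from a continuous integration pipeline via the Databricks Jobs REST API. The workspace URL, job ID, and access token are placeholders.

    # Hypothetical CI step: kick off the ETL notebook's Job and report the run.
    import requests

    DOMAIN = "https://<your-workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"                          # injected by CI

    resp = requests.post(
        DOMAIN + "/api/2.0/jobs/run-now",
        headers={"Authorization": "Bearer " + TOKEN},
        json={"job_id": 42},  # hypothetical Job wrapping the ETL notebook
    )
    resp.raise_for_status()
    print("Triggered run:", resp.json()["run_id"])
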
Recorded: Aug 4 2016 58 mins
Presented by
Burak Yavuz

Channel profile
  • From Data to Insights in Seconds: How Rue La La built a streaming ETL pipeline Jun 13 2018 5:00 pm UTC 60 mins
    Ben Wilson
    Rue La La strives to be the most engaging e-commerce website in the world. Their goal is to create an individualized shopping experience through the use of big data and machine learning. With over 400GB of clickstream data generated per day, they needed a way to process that data and feed it into their models in near real time. Without the right tools and support, that can be a resource intensive and costly proposition.

    Join this webinar featuring Ben Wilson, data science architect at Rue La La, as he shares how he built a streaming ETL pipeline with Databricks Delta, a powerful new offering within the Databricks Unified Analytics Platform, allowing Rue La La to dramatically accelerate processing times while simplifying the ability to tap into the power of machine learning at scale. This talk will highlight:

    - The challenges Rue La La faced trying to build a data pipeline that could deliver the performance required of their near real-time use case.
    - How the Databricks Unified Analytics Platform allowed them to easily build a streaming ETL pipeline while simplifying data science at scale.
    - The engineering and business impact Databricks has had, including reducing ETL processing times from 30 minutes to 10 seconds and contributing to a 10x increase in purchase engagement.
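
    Not from the webinar, but a minimal sketch of the streaming ETL pattern it describes: reading raw clickstream JSON and writing continuously into a Databricks Delta table. The paths, the schema, and the `spark` session (predefined in Databricks notebooks) are assumptions.

      # Continuous hop from a raw landing zone into a Delta table.
      from pyspark.sql.types import StringType, StructType, TimestampType

      click_schema = (StructType()
          .add("user_id", StringType())
          .add("url", StringType())
          .add("ts", TimestampType()))

      (spark.readStream
          .schema(click_schema)
          .json("/raw/clickstream")   # assumed landing zone for raw events
          .writeStream
          .format("delta")            # Databricks Delta sink
          .option("checkpointLocation", "/delta/_checkpoints/clicks")
          .start("/delta/clicks"))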
  • Collaboration to Production with Apache Spark on Azure Databricks Apr 27 2018 10:00 am UTC 60 mins
    Sandy May, Data Shepherd at Elastacloud
    Sandy will highlight key aspects of the new Spark-as-a-Service offering in Azure from Databricks, leveraging the power of Databricks notebooks to showcase loading and cleaning data in SQL and Scala, exploratory analysis, and taking a model all the way into production.
  • Apache Spark™ for Machine Learning and AI Apr 26 2018 5:00 pm UTC 60 mins
    Brian Dirking, Senior Director of Partner Marketing at Databricks, and Nauman Fakhar, System Architect at Databricks
    Azure Databricks is an Apache Spark™ based platform, providing the scale, collaborative workspace, and integration with your Azure environment that make it the best place to run your ML and AI workloads on Azure. This webinar will include an in-depth demo of key AI and ML use cases.
  • How Viacom Revolutionized Audience Experiences with Real-Time Analytics and AI Apr 25 2018 5:00 pm UTC 60 mins
    Mark Cohen, VP of Data Platform Engineering at Viacom; Chris Burns, Machine Learning Solutions Architect at AWS
    With 170+ global networks, Viacom is focused on providing an amazing audience experience to its billions of viewers around the world. Core to this strategy is leveraging big data and AI to offer the right content to the right audience and deliver it flawlessly on any device. To make this possible, Viacom set out to build a real-time, scalable data analytics platform on Apache Spark™.

    Join this webinar to learn how Viacom overcame the complexities of Spark with Databricks and AWS to build an end-to-end scalable self-service insights platform that delivers on a wide range of analytics use cases.

    This webinar will cover:
    - The challenges Viacom faced building a scalable, real-time data insights and AI platform
    - How they overcame these challenges with Spark, AWS and Databricks
    - How they leverage a unified analytics platform for data pipelines, analytics and machine learning to reduce video start delays and improve content delivery with stream analytics at scale
    - What it takes to create a data driven culture with self-service analytics that meet the needs of business users, data analysts and data scientists
  • Getting Started with Apache Spark™ on Azure Databricks Recorded: Mar 27 2018 60 mins
    Brian Dirking, Senior Director of Partner Marketing at Databricks, and Nauman Fakhar, System Architect at Databricks
    Learn the basics of Apache Spark™ on Azure Databricks. Designed by Databricks in collaboration with Microsoft, Azure Databricks combines the best of Databricks and Azure to help customers accelerate innovation with one-click setup, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts.

    This webinar will cover the following topics:

    · RDDs, DataFrames, Datasets, and other fundamentals of Apache Spark.
    · How to quickly set up Azure Databricks, relieving you of DataOps duties.
    · How to use the Databricks interactive notebooks, which provide a collaborative space for your entire analytics team, and how you can schedule notebooks, immediately putting your work into production.
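
    As a taste of those fundamentals, here is a tiny sketch, not from the webinar, that would run in an Azure Databricks notebook where `spark` is predefined:

      # RDD -> DataFrame -> Spark SQL, the progression covered above.
      rdd = spark.sparkContext.parallelize([("alice", 34), ("bob", 29)])
      df = spark.createDataFrame(rdd, ["name", "age"])
      df.filter(df.age > 30).show()

      df.createOrReplaceTempView("people")
      spark.sql("SELECT name FROM people WHERE age > 30").show()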
  • Fast and Reliable ETL Pipelines with Databricks Recorded: Mar 7 2018 57 mins
    Prakash Chockalingam, Product Manager at Databricks
    Building multiple ETL pipelines is complex and time-consuming, making it an expensive endeavor. As the number of data sources and the volume of the data increase, the ETL time also increases, delaying when an enterprise can derive value from the data.

    Join Prakash Chockalingam, Product Manager and data engineering expert at Databricks, to learn how to avoid the common pitfalls of data engineering and how the Databricks Unified Analytics Platform can ensure performance and reliability at scale to lower total cost of ownership (TCO).

    In this webinar, you will learn how Databricks can help to:
    - Remove infrastructure configuration complexity to reduce DevOps efforts
    - Optimize your ETL data pipelines for performance without compromising reliability
    - Unify data engineering and data science to accelerate innovation for the business.
  • Azure Databricks: Accelerating Innovation with Microsoft Azure and Databricks Recorded: Feb 15 2018 52 mins
    Brian Dirking, Senior Director of Partner Marketing at Databricks
    Data scientists and data engineers need a secure and scalable platform to collaborate on analytics. Register for this webinar and see how Azure Databricks provides a platform that enables teams to accelerate innovation, providing:

    - A collaborative workspace to experiment with models and datasets, and then put jobs into action instantly.
    - An automated infrastructure that enables you to autoscale compute and storage independently.

    The live demo portion of the webinar will show how Azure Databricks can bring in streaming data, run it in a machine learning model, and then output the results to PowerBI for visualization.
  • What's New in the Upcoming Apache Spark 2.3 Release? Recorded: Feb 8 2018 49 mins
    Reynold Xin, Chief Architect at Databricks, and Jules Damji, Spark Community and Developer Advocate
    The upcoming Spark 2.3 release marks a big step forward in speed, unification, and API support.

    Reynold Xin and Jules Damji from Databricks will walk through how you can benefit from the upcoming improvements:

    - New DataSource APIs that enable developers to more easily read and write data for Continuous Processing in Structured Streaming.
    - PySpark support for vectorization, giving Python developers the ability to run native Python code fast.
    - Improved performance by taking advantage of NVMe SSDs.
    - Native Kubernetes support, marrying the best of container orchestration and distributed data processing.
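
    A minimal sketch, not from the webinar, of the PySpark vectorization feature (Pandas UDFs) as it appears in Spark 2.3; it assumes a `spark` session and the pyarrow package are available:

      from pyspark.sql.functions import pandas_udf, PandasUDFType

      @pandas_udf("double", PandasUDFType.SCALAR)
      def times_two(v):
          # v arrives as a pandas.Series, so the whole batch is
          # processed in native, vectorized Python code.
          return v * 2.0

      spark.range(1000000).select(times_two("id")).show(5)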
  • Ten Must-Haves to Deploy Machine Learning and AI in the Enterprise Recorded: Jan 25 2018 61 mins
    Forrester VP & Principal Analyst, Mike Gualtieri; Data Science Lead at Overstock, Chris Robison; PM at Overstock, Craig Kelly
    Enterprise data science teams are driving big innovations in machine learning, but this has put them under increased pressure to deliver more models, more frequently, and more rapidly.

    In this webinar, Forrester VP & Principal Analyst, Mike Gualtieri, will share data on the top trends in machine learning and lay out what data science teams need to do in order to maximize their output.

    Chris Robison, Head of Data Science at Overstock.com and Craig Kelly, Group Product Manager at Overstock.com, will showcase how they utilized big data and machine learning to

    - Create a one-to-one personalized shopping experience.
    - Decrease cost of moving models to production by nearly 50%.
    - Stand up new models 5x faster than before.
  • How Databricks helps iPass optimize for performance and availability Recorded: Jan 10 2018 60 mins
    Tomasz Magdanski, Director of Big Data and Analytics at iPass
    iPass is the world’s largest wifi network, serving over 160 network providers with more than 60 million hotspots in airports, hotels, airplanes, and public spaces in 120 countries across the globe.

    Analyzing the state of the world’s wifi in real time is a daunting task fraught with unpredictable challenges that can impact performance, reliability, and security. Join this webinar to learn why iPass moved from an on-premises Hadoop system to Databricks in the cloud and how they are able to deliver ground-breaking results with a small and nimble team.

    With Databricks, iPass can now focus on scalable business logic instead of building infrastructure. This newfound freedom has allowed their team to:
    - Monitor the performance of millions of wifi hotspots around the world.
    - Leverage machine learning and real-time analytics to understand the health of access points.
    - Make recommendations to customers on the best access point to use to ensure optimal performance.
  • Continuous Integration & Continuous Delivery with Databricks Recorded: Dec 7 2017 45 mins
    Prakash Chockalingam, Product Manager at Databricks
    Continuous integration and continuous delivery (CI/CD) enables an organization to rapidly iterate on software changes while maintaining stability, performance, and security. Many organizations have adopted various tools to follow the best practices around CI/CD to improve developer productivity, code quality, and software delivery. However, following the best practices of CI/CD is still challenging for many big data teams.

    This webinar will highlight:
    * Key challenges in building a data pipeline for CI/CD.
    * Key integration points in a data pipeline's CI/CD cycle.
    * How Databricks facilitates iterative development, continuous integration, and builds.
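
    One hedged example of such an integration point, not from the webinar: exporting a notebook as source during a CI build so it can be versioned and tested. The workspace URL, token, and notebook path are placeholders.

      import base64
      import requests

      resp = requests.get(
          "https://<workspace>.cloud.databricks.com/api/2.0/workspace/export",
          headers={"Authorization": "Bearer <token>"},
          params={"path": "/Production/etl_main", "format": "SOURCE"},
      )
      resp.raise_for_status()
      # The Workspace API returns the notebook source base64-encoded.
      with open("etl_main.py", "wb") as f:
          f.write(base64.b64decode(resp.json()["content"]))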
  • Unified Data Management: The Best of Data Lakes, Data Warehouses and Streaming Recorded: Nov 16 2017 61 mins
    Jason Pohl, Software Engineer at Databricks, and Bill Chambers, Product Manager at Databricks
    Current data management architectures are a complex combination of siloed, single-purpose tools. Data lakes offer low-cost storage but are difficult to use for data discovery; data warehouses are reliable and optimized for fast queries but costly to scale; and various streaming and batch systems shuffle data between them, often resulting in data integrity issues.

    Businesses have to create a patchwork of different tools, skillsets, and expertise just to solve one fundamental problem: How can I make data-driven decisions faster?

    Join this webinar to learn how Databricks Delta — a new unified data management system — takes advantage of the scale of a data lake, the reliability and performance of a data warehouse, and the low-latency updates of a streaming system, all in a unified and fully managed fashion.

    This webinar will cover:
    - How the need to process batch and streaming data creates challenges for enterprises with complex data architectures.
    - How Databricks Delta takes the best of data warehouses, data lakes, and streaming systems to provide a highly scalable, performant, and reliable data management system.
    - A live demonstration of Databricks Delta to showcase how easy it is to cost-efficiently scale without impacting query performance.
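
    A minimal sketch, not from the webinar, of the unified view Delta provides; `events` stands in for any DataFrame, and a Databricks runtime with Delta enabled is assumed:

      # Write once in Delta format...
      events.write.format("delta").partitionBy("date").save("/delta/events")

      # ...then batch and streaming readers share the same reliable table.
      spark.read.format("delta").load("/delta/events") \
           .groupBy("date").count().show()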
  • 5 Keys to Build Machine Learning and Visualization Into Your Application Recorded: Nov 8 2017 51 mins
    Databricks, Handshake, and Looker
    Machine learning has unlocked new possibilities that deliver significant business value. However, most companies don’t have the resources to build and maintain the supporting infrastructure, or to apply data science to build a smarter solution.

    Join us for this webinar and hear from John Huang, engineering and data analytics lead at Handshake, as he shares how he quickly and cost-effectively scaled a small engineering team to build a machine-learning-powered recommendation engine that profiles users and behaviors to present relevant next steps. In this webinar you will learn how to:

    - Simplify and accelerate data engineering processes, including data ingest and ETL
    - Incorporate machine learning into your production application without an army of data scientists
    - Choose an analytics engine that will enable key analytics such as attribution, step analysis, and linear regression
    - Embed visualizations into your application that drive stickiness
  • How to Put Cluster Management on Autopilot Recorded: Oct 19 2017 49 mins
    Prakash Chockalingam, Product Manager at Databricks
    A key obstacle to doing data engineering at scale is having a robust distributed infrastructure on which frameworks like Apache Spark can run efficiently. Beyond building that infrastructure, keeping it running properly and automatically is another critical piece of operating production workloads.

    Join this webinar to learn how the Databricks Unified Analytics Platform can simplify your data engineering problems by putting your distributed infrastructure in autopilot mode. Learn how:
    - Databricks’ automated infrastructure lets you autoscale compute and storage independently.
    - Cutting-edge cluster management features can significantly reduce cloud costs.
    - You can tune individual cluster management features to balance ease of use against manual control.
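
    A sketch, not from the webinar, of what autopilot-style configuration looks like through the Clusters REST API; the workspace URL, token, node type, and runtime version label are placeholders.

      import requests

      payload = {
          "cluster_name": "etl-autopilot",
          "spark_version": "<runtime-version>",   # placeholder label
          "node_type_id": "i3.xlarge",
          # Autoscaling bounds: workers are added/removed as load changes.
          "autoscale": {"min_workers": 2, "max_workers": 8},
          # Idle clusters shut themselves down to cut cloud costs.
          "autotermination_minutes": 30,
      }
      requests.post(
          "https://<workspace>.cloud.databricks.com/api/2.0/clusters/create",
          headers={"Authorization": "Bearer <token>"},
          json=payload,
      ).raise_for_status()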
  • How CardinalCommerce Significantly Improved Data Pipeline Speeds by 200% Recorded: Sep 21 2017 61 mins
    Christopher Baird from CardinalCommerce
    CardinalCommerce was acquired by Visa earlier this year for its critical role in payments authentication. Through predictive analytics and machine learning, Cardinal measures performance and behavior of the entire authentication process across checkout, issuing and ecosystem partners to recommend actions, reduce fraud and drive frictionless digital commerce.

    With Databricks, CardinalCommerce simplified data engineering to improve the performance of their ETL pipeline by 200% while reducing operational costs significantly via automation, seamless integration with key technologies, and improved process efficiencies.

    Join this webinar to learn how CardinalCommerce was able to:
    - Simplify access to data across the organization
    - Accelerate data processing by 200%
    - Reduce EC2 costs through faster performance and automated infrastructure
    - Visualize performance metrics for customers and stakeholders
  • Performance Benchmarking Big Data Platforms in the Cloud Recorded: Aug 22 2017 47 mins
    Reynold Xin, Co-founder and Chief Architect at Databricks
    Performance is often a key factor in choosing big data platforms. Over the past few years, Apache Spark has seen rapid adoption by enterprises, making it the de facto data processing engine for its performance and ease of use.

    Since starting the Spark project, our team at Databricks has been focusing on accelerating innovation by building the most performant and optimized Unified Analytics Platform for the cloud. Join Reynold Xin, Co-founder and Chief Architect of Databricks as he discusses the results of our benchmark (using TPC-DS industry standard requirements) comparing the Databricks Runtime (which includes Apache Spark and our DBIO accelerator module) with vanilla open source Spark in the cloud and how these performance gains can have a meaningful impact on your TCO for managing Spark.

    This webinar covers:
    - Differences between open source Spark and Databricks Runtime.
    - Details on the benchmark, including hardware configuration, dataset, etc.
    - A summary of the benchmark results, which reveal performance gains of up to 5x over open source Spark and other big data engines.
    - A live demo comparing processing speeds of Databricks Runtime vs. open source Spark.

    Special Announcement: We will also announce an experimental feature as part of the webinar that aims to drastically speed up your workloads even more. Be the first to see this feature in action. Register today!
  • Productionizing Apache Spark™ MLlib Models for Real-time Prediction Serving Recorded: Aug 10 2017 52 mins
    Joseph Bradley and Sue Ann Hong
    Data science and machine learning tools traditionally focus on training models. When companies begin to employ machine learning in actual production workflows, they encounter new sources of friction such as sharing models across teams, deploying identical models on different systems, and maintaining featurization logic. In this webinar, we discuss how Databricks provides a smooth path for productionizing Apache Spark MLlib models and featurization pipelines.

    Databricks Model Scoring provides a simple API for exporting MLlib models and pipelines. These exported models can be deployed in many production settings, including:

    * External real-time low-latency prediction serving systems, without Spark dependencies,
    * Apache Spark Structured Streaming jobs, and
    * Apache Spark batch jobs.

    In this webinar, we give an overview of our solution’s functionality, describe its architecture, and demonstrate how to use it to deploy MLlib models to production.
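
    The Model Scoring export API itself is Databricks-specific, so as a stand-in here is the open-source MLlib persistence pattern it builds on: featurization and model travel together in one saved pipeline. Paths and data are illustrative, and a `spark` session is assumed.

      from pyspark.ml import Pipeline, PipelineModel
      from pyspark.ml.classification import LogisticRegression
      from pyspark.ml.feature import HashingTF, Tokenizer

      train = spark.createDataFrame(
          [(0, "a b c d spark", 1.0), (1, "x y z", 0.0)],
          ["id", "text", "label"])

      # Featurization logic lives inside the pipeline, so it ships with the model.
      tokenizer = Tokenizer(inputCol="text", outputCol="words")
      hashing_tf = HashingTF(inputCol="words", outputCol="features")
      lr = LogisticRegression(maxIter=10)
      model = Pipeline(stages=[tokenizer, hashing_tf, lr]).fit(train)

      model.write().overwrite().save("/tmp/spark-model")   # export
      reloaded = PipelineModel.load("/tmp/spark-model")    # deploy elsewhere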
  • Build, Scale, and Deploy Deep Learning Pipelines with Ease Recorded: Jul 27 2017 62 mins
    Sue Ann Hong, Tim Hunter and Jules S. Damji
    Deep Learning has shown tremendous success, yet it often requires significant effort to leverage its power. Existing Deep Learning frameworks require writing a lot of code to work with a model, let alone in a distributed manner.

    This webinar is the first in a series in which we survey the state of Deep Learning at scale and introduce Deep Learning Pipelines, a new open-source package for Apache Spark. This package simplifies Deep Learning in three major ways:

    1. It has a simple API that integrates well with enterprise Machine Learning pipelines.
    2. It automatically scales out common Deep Learning patterns, thanks to Spark.
    3. It enables exposing Deep Learning models through the familiar Spark APIs, such as MLlib and Spark SQL.

    In this webinar, we will look at a complex problem of image classification, using Deep Learning and Spark. Using Deep Learning Pipelines, we will show:

    * how to build deep learning models in a few lines of code;
    * how to scale common tasks like transfer learning and prediction; and
    * how to publish models in Spark SQL.
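
    A transfer-learning sketch, not from the webinar, using the Deep Learning Pipelines package (sparkdl) as of its early 0.x releases; the image paths and labels are placeholders.

      from pyspark.ml import Pipeline
      from pyspark.ml.classification import LogisticRegression
      from pyspark.sql.functions import lit
      from sparkdl import DeepImageFeaturizer, readImages

      tulips = readImages("/data/flowers/tulips").withColumn("label", lit(1))
      daisies = readImages("/data/flowers/daisies").withColumn("label", lit(0))
      train = tulips.unionAll(daisies)

      # InceptionV3 minus its final layer turns each image into a feature
      # vector; a simple logistic regression is trained on top.
      featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features",
                                       modelName="InceptionV3")
      lr = LogisticRegression(maxIter=20, labelCol="label")
      model = Pipeline(stages=[featurizer, lr]).fit(train)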
  • Accelerate Data Science with Better Data Engineering with Databricks Recorded: Jul 13 2017 63 mins
    Andrew Candela
    Whether you’re processing IoT data from millions of sensors or building a recommendation engine to provide a more engaging customer experience, the ability to derive actionable insights from massive volumes of diverse data is critical to success. MediaMath, a leading adtech company, relies on Apache Spark to process billions of data points including ads, user cookies, impressions, and clicks, translating to several terabytes of data per day. To support the needs of the data science teams, data engineering must build data pipelines for both ETL and feature engineering that are scalable, performant, and reliable.

    Join this webinar to learn how MediaMath leverages Databricks to simplify mission-critical data engineering tasks that surface data directly to clients and drive actionable business outcomes. This webinar will cover:

    - Transforming TBs of data with RDDs and PySpark responsibly
    - Using the JDBC connector to write results to production databases seamlessly
    - Comparisons with a similar approach using Hive
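
    A minimal sketch, not from the webinar, of the JDBC write pattern mentioned above; `results` stands in for any computed DataFrame, and the connection details are placeholders.

      (results.write
          .format("jdbc")
          .option("url", "jdbc:postgresql://db.example.com:5432/analytics")
          .option("dbtable", "campaign_daily_summary")
          .option("user", "etl_user")
          .option("password", "<from-secrets>")
          .mode("append")   # append today's results to the production table
          .save())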
  • How Databricks and Machine Learning is Powering the Future of Genomics Recorded: May 25 2017 59 mins
    Frank Austin Nothaft, Genomics Data Engineer at Databricks
    With the drastic drop in the cost of sequencing a single genome, many organizations across biotechnology, pharmaceuticals, biomedical research, and agriculture have begun to make use of genome sequencing. While the sequence of a single genome may provide insight about the individual who was sequenced, deriving maximal insight from genomic data ultimately requires querying across a cohort of many hundreds to thousands of individuals.

    Join this webinar to learn how Databricks — powered by Apache Spark — enables queries across a genomics database in interactive time and simplifies the application of machine learning models and statistical tests to genomics data across patients, to derive more insight into the biological processes driven by genomic alterations.

    In this webinar, we will:

    - Demonstrate how Databricks can rapidly query annotated variants across a cohort of 1,000 samples.
    - Look at a case study using Databricks to improve the performance of running an expression quantitative trait loci (eQTL) test across samples from the GEUVADIS project.
    - Show how we can parallelize conventional genomics tools using Databricks.
Making Big Data Simple
Databricks’ mission is to accelerate innovation for its customers by unifying Data Science, Engineering and Business. Founded by the team who created Apache Spark™, Databricks provides a Unified Analytics Platform for data science teams to collaborate with data engineering and lines of business to build data products. Users achieve faster time-to-value with Databricks by creating analytic workflows that go from ETL and interactive exploration to production. The company also makes it easier for its users to focus on their data by providing a fully managed, scalable, and secure cloud infrastructure that reduces operational complexity and total cost of ownership.
