Scalable End-to-End Deep Learning using TensorFlow™ and Databricks

Presented by

Brooke Wenig, Data Science Solutions Consultant at Databricks, and Siddarth Murching, Software Engineer at Databricks

About this talk

Deep learning has shown tremendous success, and models generally improve as they are trained on more data. Eventually, however, we hit a bottleneck in how much data a single machine can process, which calls for a new approach: training neural networks in a distributed manner. In this webinar, we walk through how to use TensorFlow™ and Horovod (an open-source library from Uber that simplifies distributed model training) on Databricks to build a more effective recommendation system at scale.

We will cover:
- The new Databricks Runtime for ML, which ships with pre-installed libraries such as Keras, TensorFlow, Horovod, and XGBoost so that data scientists can get started with distributed machine learning more quickly
- The newly released HorovodEstimator API for distributed, multi-GPU training of deep learning models against data in Apache Spark™
- How to make predictions at scale with Deep Learning Pipelines
