Best practices in Deep Learning implementations

Presented by

IBM, NVIDIA and Google

About this talk

Join IBM, NVIDIA, and Google to explore Distributed Deep Learning. Beyond prototypes, organizations looking to put Deep Learning into production need AI infrastructures that can reliably keep pace with the speed of their businesses. Distributed Deep Learning extends training across multiple nodes, enabling implementations with considerably more scale and performance than is possible on a single node. This session will explore Distributed Deep Learning concepts, with an emphasis on how they can be applied to solve real-world problems. By attending this webinar, you'll learn about:

• Introduction to Distributed Deep Learning – We’ll kick off the session with an overview of what Distributed Deep Learning is and why it matters.

• Distributed Approaches – We’ll compare various approaches to distributed training: model and data parallelism, parameter servers, asynchronous and synchronous training, and graph replication.

• Performance Breakthroughs – We’ll explore the Deep Learning performance gains that are possible by combining POWER8 with NVLink, the NVIDIA P100 GPU, and the latest Distributed TensorFlow builds.
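To make the "Distributed Approaches" bullet concrete, here is a minimal, illustrative sketch (not code from the presenters) of synchronous data parallelism on a toy linear model: each worker holds a replica of the weights, computes gradients on its own shard of the data, and the gradients are averaged (the aggregation a parameter server or all-reduce performs) before a single synchronous update is applied. All names here (`grad`, `sync_data_parallel_step`) are hypothetical.

```python
def grad(w, x, y):
    # Gradient of the squared error 0.5 * (w*x - y)**2 with respect to w.
    return (w * x - y) * x

def sync_data_parallel_step(w, shards, lr=0.1):
    # Each "worker" computes the mean gradient over its own data shard...
    local_grads = [
        sum(grad(w, x, y) for x, y in shard) / len(shard)
        for shard in shards
    ]
    # ...then the gradients are averaged across workers (the parameter-server
    # or all-reduce step) and one synchronous update is applied everywhere.
    g = sum(local_grads) / len(local_grads)
    return w - lr * g

# Toy data satisfying y = 2x, sharded across two workers.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = sync_data_parallel_step(w, shards)
print(round(w, 3))  # converges toward the true weight, 2.0
```

In asynchronous training, by contrast, each worker would push its local gradient to the parameter server and pull updated weights without waiting for the others, trading gradient staleness for reduced synchronization overhead.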
