Saurabh Kulkarni, Head of Engineering for North America, Graphcore
We live in a world where large-scale systems for machine intelligence are increasingly being used to solve complex problems, ranging from natural language processing and computer vision to drug discovery and recommendation systems. A convergence of breakthrough research in Machine Learning (ML) modeling techniques, increasing accessibility of purpose-built hardware systems at cloud scale to researchers, and maturing software ecosystems is paving the way for an exponential increase in the size of ML models being trained and deployed in production. Models with trillions of parameters that need exaflop-scale compute and petabytes of memory are not far off in the future.
What are some of the scale challenges that the industry faces as it comes to terms with the enormous cost and time required to train some of the most sophisticated models of the future? What are the compute, memory and networking requirements to implement these models efficiently in a hyperscale environment?
Attend this session to learn more about how Graphcore is addressing these challenges. At the heart of our technology is the Intelligence Processing Unit (IPU), a purpose-built accelerator designed to address the most demanding compute and memory bandwidth needs of modern ML models. We take a disaggregated approach to building a scale-out architecture based on a commodity Ethernet network backbone. This allows customers to start small and then seamlessly scale to thousands of IPUs to tackle mega-models in the multi-trillion-parameter era, all while allowing independent scaling of CPU, IPU, networking and storage resources.