Build Production-Ready AI Infrastructure

Presented by

Gijsbert Janssen Van Doorn, Director of Technical Product Marketing & Robert Magno, Sales Engineer at Run:ai

About this talk

What’s the “right” way to build your AI infrastructure stack? Today, GPU infrastructure for building and training models is mostly built on bare metal using static resource allocation, a recipe for wasted resources and lost time. Hear from Gijsbert Janssen Van Doorn, Director of Technical Product Marketing at Run:ai, about how Run:ai Atlas together with VMware Tanzu can help you build a cloud-native AI platform that delivers value and ROI all the way from model build through to deployment.

Learn about:

• Smarter workload scheduling with Kubernetes
• AI orchestration concepts borrowed from the world of HPC to better manage expensive resources
• Autoscaling, from fractional GPUs to multiple nodes of GPUs for distributed training using batch workloads
• Productizing AI: taking models into production with ease while meeting tight SLAs
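To make the fractional-GPU idea above concrete, here is a minimal sketch of a Kubernetes pod spec that asks the scheduler for half a GPU rather than a whole device. The annotation key (`gpu-fraction`) and scheduler name (`runai-scheduler`) follow Run:ai's documented conventions but may differ by product version, so treat them as assumptions rather than a definitive manifest; a stock Kubernetes cluster without Run:ai would instead request whole GPUs via the `nvidia.com/gpu` resource limit.

```yaml
# Hypothetical pod spec: request half a GPU via a fractional-GPU
# annotation instead of a whole-device resource limit.
# Annotation key and scheduler name are assumptions and may vary
# by Run:ai version.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
  annotations:
    gpu-fraction: "0.5"        # assumed Run:ai fraction annotation
spec:
  schedulerName: runai-scheduler   # assumed Run:ai scheduler name
  containers:
    - name: trainer
      image: pytorch/pytorch:latest
      command: ["python", "train.py"]
```

Because the fraction is expressed as pod metadata rather than a hard device limit, the scheduler can pack several such workloads onto one physical GPU, which is the utilization gain the talk describes.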


About Run:ai
Run:ai’s Atlas platform provides a “Foundation for AI Clouds”, whether on-premises, across public clouds, or at the edge, allowing organizations to run their AI resources on a single, unified platform that supports AI at every stage of development, from building and training models to running inference in production. Organizations using Run:ai increase resource utilization by an average of 2x. Customers include Fortune 500 companies and cutting-edge AI startups from verticals such as finance, automotive, healthcare, and gaming, as well as leading academic AI research centers.