Optimizing Infrastructure for AI

Presented by

Shimon Ben-David (Weka), Darrin Johnson (NVIDIA), Scot Schultz (NVIDIA)

About this talk

As organizations scale up their use of AI/ML and feed it ever larger volumes of data, new technologies and methodologies must be put in place to extract its full value. Put simply, infrastructure is key to the success of AI/ML projects. Within that infrastructure, networking, compute, and storage are the factors that determine how a new environment performs and whether a project can grow and scale in the future.

In this fireside chat, WekaIO's Field CTO, Shimon Ben-David, talks with Darrin Johnson, Director of Solutions Architecture and Technical Marketing, Enterprise, at NVIDIA, and Scot Schultz, Sr. Director, Mellanox HPC and Technical Computing, at NVIDIA. Both Darrin and Scot have been involved in dozens of AI and ML projects at various stages of implementation, and they share their experience and insights.

What you can expect to learn:
- How to design an environment for high-performance AI/ML workloads
- Why most AI pipelines can't run on a standard 10G network (a rough bandwidth sketch follows below)
- The one thing companies neglect to consider when starting with AI, which usually ends up costing them time and money as they scale AI projects into production
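To make the 10G-network point concrete, here is a back-of-the-envelope sketch in Python. It is not taken from the talk; the GPU count and the per-GPU data-loading rate are illustrative assumptions, chosen only to show the arithmetic.

# Back-of-the-envelope check: can a standard 10GbE link keep a multi-GPU
# training node busy? All figures are illustrative assumptions, not benchmarks.

NIC_GBITS_PER_SEC = 10                       # standard 10G Ethernet link
NIC_GBYTES_PER_SEC = NIC_GBITS_PER_SEC / 8   # ~1.25 GB/s theoretical ceiling

GPUS_PER_NODE = 8                            # assumed dense training server
INGEST_PER_GPU_GBYTES = 1.0                  # assumed data-loading rate per GPU (GB/s)

required = GPUS_PER_NODE * INGEST_PER_GPU_GBYTES
print(f"Required ingest: {required:.1f} GB/s; 10GbE ceiling: {NIC_GBYTES_PER_SEC:.2f} GB/s")

if required > NIC_GBYTES_PER_SEC:
    shortfall = required / NIC_GBYTES_PER_SEC
    print(f"Network is the bottleneck by roughly {shortfall:.0f}x; GPUs stall waiting for data.")

Even with modest per-GPU numbers, a dense training node can demand several times what a single 10GbE link can deliver, which is why 100Gb-class Ethernet or InfiniBand fabrics are common in these environments.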

About Weka

Weka offers WekaFS, the modern file system that uniquely empowers organizations to solve the newest, biggest problems holding back innovation. Optimized for NVMe and the hybrid cloud, Weka handles the most demanding storage challenges in the most data-intensive technical computing environments, delivering truly epic performance at any scale. Its modern architecture unlocks the full capabilities of today’s data center, allowing businesses to maximize the value of their high-powered IT investments. Weka helps industry leaders reach breakthrough innovations and solve previously unsolvable problems.