As organizations expand their use of AI/ML and feed it with ever-larger volumes of data, they must adopt new technologies and methodologies to realize its full value. Put simply, infrastructure is key to the success of AI/ML projects. Within that infrastructure, networking, compute, and storage are the key factors that determine how a new environment will perform and whether a project can grow and scale in the future.
In this fireside chat, WekaIO Field CTO Shimon Ben David will talk with Darrin Johnson, Director of Solutions Architecture and Technical Marketing, Enterprise at NVIDIA, and Scot Schultz, Sr. Director, Mellanox HPC and Technical Computing at NVIDIA. Both Darrin and Scot have been involved in dozens of AI/ML projects at various stages of implementation and will share their experience and insights.
What you can expect to learn:
- How to design an environment for high-performance AI/ML workloads
- Why most AI pipelines can’t run on a standard 10G network
- The one consideration companies most often neglect when starting with AI, which typically costs them time and money as they scale AI projects into production