Production AI from data lake to server: Solving HW challenges in ML environments

Presented by Rui Vasconcelos and Michael Boros

About this talk

What you will learn:

Infrastructure is a critical component in enabling AI/ML teams to produce fast, valuable results on high-performance computing problems while maximising resource utilisation. Purpose-built workstations and servers solve interrelated hardware problems across the workflow, from prototyping on the workstation to deploying and scaling on the server, accelerating research on complex workloads.

We will discuss:
- Design and practical considerations from workstation to server, with practical examples
- Security, performance and cost priorities
- The role of Kubeflow in making AI work best for business needs

Who should attend:
AI/ML data engineers, data scientists, research leaders, product managers, developers and ops teams who want to maximise time spent producing results.
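The abstract itself contains no code; as a rough illustration of where Kubeflow fits in a workstation-to-server workflow, the sketch below uses the Kubeflow Pipelines (kfp) v2 SDK. The component names, parameters and placeholder steps are illustrative assumptions, not material from the talk.

```python
# Minimal Kubeflow Pipelines sketch: the same pipeline definition can be
# prototyped on a workstation and then compiled for a Kubeflow cluster.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def train_model(learning_rate: float) -> str:
    """Placeholder training step; replace with a real training routine."""
    print(f"Training with learning_rate={learning_rate}")
    return "model-v1"


@dsl.component(base_image="python:3.11")
def evaluate_model(model_name: str) -> float:
    """Placeholder evaluation step; replace with real metrics."""
    print(f"Evaluating {model_name}")
    return 0.9


@dsl.pipeline(name="workstation-to-server-demo")
def demo_pipeline(learning_rate: float = 0.01):
    # Steps are chained by passing outputs as inputs; Kubeflow schedules
    # each step as a container on the cluster.
    train_task = train_model(learning_rate=learning_rate)
    evaluate_model(model_name=train_task.output)


if __name__ == "__main__":
    # Compile to a package that a Kubeflow Pipelines cluster can run.
    compiler.Compiler().compile(demo_pipeline, package_path="demo_pipeline.yaml")
```

The compiled package can then be uploaded through the Kubeflow UI or submitted with the kfp client to run on GPU-backed server nodes, which is one way the workstation-to-server handoff described above can be realised.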

More from this channel

Get the most in-depth information about Ubuntu technology and services from Canonical. Learn why Ubuntu is the preferred Linux platform and how Canonical can help you make the most of your Ubuntu environment.