Enterprise MLOps in hybrid-cloud scenarios: best practices

Presented by

Charles Adetiloye - Co-founder & AI Consultant, Mavencode | Rui Vasconcelos - AI/ML Product Manager, Canonical

About this talk

Efficient allocation of compute resources and careful capacity planning are fast becoming necessities in many enterprise machine learning operations (MLOps) endeavors. Optimizing resource allocation, from both a cost and a technical perspective, is driving many organizations to strongly consider a hybrid-cloud infrastructure setup. Architectural best practices that have emerged in recent years around ML workflow pipelines, cloud-agnostic model deployment and serving, feature stores, data versioning, and more make it easy for companies looking in this direction to bootstrap and get up and running. In this webinar, we will cover:

1. How to effectively bring your models to production across clouds
2. How to make the best use of feature stores
3. How to use Kubeflow Pipelines with a feature store (see the first sketch after this list)
4. How to use Apache Hudi to unify historical and new data (see the Hudi sketch below)
5. How to use Kubeflow with the Apache Spark operator
6. How to leverage model-driven operators to deploy and manage your MLOps stack
7. Storage-agnostic best practices for S3, GS, and Azure storage in the public cloud and Ceph on-prem (see the storage sketch below)
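To make item 3 more concrete, the sketch below (not taken from the talk) shows one way a Kubeflow Pipelines v2 component might pull a point-in-time-correct training set from a Feast feature store. The feature view, entity names, and repo path (`driver_hourly_stats`, `driver_id`, `/mnt/feature_repo`) are placeholders in the style of Feast's quickstart, not anything prescribed by the presenters.

```python
from kfp import dsl, compiler


@dsl.component(base_image="python:3.10", packages_to_install=["feast", "pandas"])
def materialize_training_set(feature_repo: str, training_data: dsl.Output[dsl.Dataset]):
    """Build a training dataset from the feature store and emit it as a pipeline artifact."""
    import pandas as pd
    from feast import FeatureStore

    store = FeatureStore(repo_path=feature_repo)  # placeholder repo path

    # Entity rows with event timestamps drive Feast's point-in-time joins.
    entity_df = pd.DataFrame(
        {
            "driver_id": [1001, 1002],
            "event_timestamp": pd.to_datetime(["2024-01-01", "2024-01-01"]),
        }
    )

    training_df = store.get_historical_features(
        entity_df=entity_df,
        features=[
            "driver_hourly_stats:conv_rate",       # example feature references
            "driver_hourly_stats:avg_daily_trips",
        ],
    ).to_df()

    training_df.to_csv(training_data.path, index=False)


@dsl.pipeline(name="feature-store-training")
def pipeline(feature_repo: str = "/mnt/feature_repo"):
    materialize_training_set(feature_repo=feature_repo)


if __name__ == "__main__":
    # Compile to a YAML package that can be submitted to any Kubeflow Pipelines backend.
    compiler.Compiler().compile(pipeline, "pipeline.yaml")
```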
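For items 4 and 5, a minimal PySpark sketch of a Hudi upsert follows. It assumes the job runs with the Hudi Spark bundle on the classpath, for example in an image submitted through the Kubernetes Spark operator; the table path, record key, and columns are illustrative only.

```python
from pyspark.sql import SparkSession

# Kryo serialization is the configuration Hudi recommends for Spark jobs.
spark = (
    SparkSession.builder.appName("hudi-upsert-demo")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

table_path = "s3a://my-bucket/feature_events"  # placeholder object-store location

# A batch of fresh records to merge into the existing table.
updates = spark.createDataFrame(
    [("u1", "2024-01-02T00:00:00", 0.42, "us-east")],
    ["user_id", "event_ts", "score", "region"],
)

hudi_options = {
    "hoodie.table.name": "feature_events",
    "hoodie.datasource.write.recordkey.field": "user_id",
    "hoodie.datasource.write.partitionpath.field": "region",
    "hoodie.datasource.write.precombine.field": "event_ts",
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert new records into the table so historical and new data live in one place.
updates.write.format("hudi").options(**hudi_options).mode("append").save(table_path)

# A snapshot query returns the latest value per record key across old and new data.
spark.read.format("hudi").load(table_path).show()
```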
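For item 7, one common storage-agnostic pattern is to address objects by URI and let a filesystem abstraction such as fsspec resolve the backend; this is a sketch of that idea rather than the approach covered in the talk. It assumes the relevant backends (s3fs, gcsfs, adlfs) are installed, and the bucket name and Ceph RADOS Gateway endpoint are made up for illustration.

```python
import fsspec

# The same read path works for s3://, gs://, and az:// URIs in the public clouds,
# and for on-prem Ceph exposed through its S3-compatible RADOS Gateway.
MODEL_URI = "s3://models/churn/v3/model.pkl"  # placeholder; swap the scheme per environment


def load_bytes(uri, storage_options=None):
    """Read an object from any fsspec-supported store and return its bytes."""
    with fsspec.open(uri, "rb", **(storage_options or {})) as f:
        return f.read()


# For Ceph via RGW, point the S3 backend at the internal endpoint (assumed URL).
ceph_options = {"client_kwargs": {"endpoint_url": "http://rgw.internal:7480"}}
# payload = load_bytes(MODEL_URI, storage_options=ceph_options)
```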