Building a streamlined platform for Enterprise AI

Presented by

Will McGrath, Red Hat | Karl Eklund, Red Hat | Ryan Loney, Intel Corporation | Rachel Oberman, Intel Corporation

About this talk

Many conversations about implementing AI in the enterprise revolve around hardware performance, time to result, and overall business objectives, but there is a critical piece in the middle that can determine the success or failure of experimental AI: the development platform. Building models for training, or deploying efficient and accurate inference, is hard enough; managing the complex assortment of frameworks, algorithms, and optimization tools is another story entirely. Developers want the flexibility to pick the right tools without being locked in and, more importantly, to add and remove elements without restructuring an entire strategy. In this webcast, we'll dive into that complexity in the context of real use cases to understand why AI experiments fail and how they can instead thrive and move into production. At the heart of this conversation with experts from Red Hat and Intel is building on a Red Hat OpenShift base and using tools that prioritize AI productivity and flexibility, including OpenVINO and the oneAPI AI Kit, so teams can move seamlessly between experimentation and production with cutting-edge tools that optimize, streamline, and focus AI initiatives at any scale.
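For a concrete flavor of the experimentation-to-production handoff this session discusses, below is a minimal, illustrative sketch of running inference with the OpenVINO runtime Python API on CPU. It is not material from the talk: the model file name "model.xml", the 1x3x224x224 input shape, and the random input data are placeholder assumptions.

# Minimal OpenVINO inference sketch (illustrative only; "model.xml" and the
# 1x3x224x224 input shape are placeholder assumptions, not from this talk).
import numpy as np
from openvino.runtime import Core  # OpenVINO 2022.1+ Python API

core = Core()

# Read an Intermediate Representation (IR) produced during experimentation,
# e.g. by converting a trained TensorFlow or PyTorch model.
model = core.read_model("model.xml")

# Compile the model for a target device; "CPU" keeps the example portable,
# but other device plugins can be selected the same way.
compiled_model = core.compile_model(model, device_name="CPU")
output_layer = compiled_model.output(0)

# Run a single inference request on random data standing in for real input.
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled_model([input_data])[output_layer]
print(result.shape)

The same compiled-model pattern works whether the code runs in a local experiment or inside a container on OpenShift, which is the kind of portability the webcast focuses on.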
