Streamline AI from Prototype to Production

Presented by

Josiah Clark, Chief AI Architect

About this talk

As organizations increasingly look to AI for critical insights, deciding where to deploy it can be daunting. These data-hungry workloads leave researchers and IT teams facing an infrastructure conundrum: deploy in the public cloud for greater flexibility but pay dearly, or deploy on-prem for better cost control but struggle with inflexibility and poor resource utilization. Either way, conventional solutions carry too many trade-offs. Many AI teams are turning to composable disaggregated infrastructure (CDI) for cloud-like flexibility and agility on-prem. CDI disaggregates the elements of the data center (compute, GPU, FPGA, NVMe, storage-class memory) and uses software to compose them into systems that meet the requirements of the most demanding AI workloads, in minutes. Costs stay in check because resources are deployed and scaled only as workloads demand.

Join Liqid to learn how its Matrix CDI solution improves time to results for AI-driven workloads by dynamically composing and scaling resources such as NVIDIA’s A100 GPU from prototype to production. Discover how Liqid’s as-a-service approach accelerates time-to-value and radically improves operational efficiency.
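The compose-and-release cycle described above can be sketched in a few lines of Python. This is a minimal illustrative model, not Liqid's actual Matrix or Command Center API: the names (`ResourcePool`, `compose`, `decompose`) and the pool sizes are assumptions, showing only the idea of drawing disaggregated devices from a shared pool and returning them when a workload finishes.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    # Hypothetical shared pool of disaggregated devices.
    free: dict = field(default_factory=lambda: {"gpu": 8, "nvme": 16, "fpga": 4})
    machines: dict = field(default_factory=dict)

    def compose(self, name, **want):
        # Verify the pool can satisfy the whole request before allocating.
        if any(self.free.get(dev, 0) < n for dev, n in want.items()):
            raise RuntimeError(f"insufficient resources for {name}")
        for dev, n in want.items():
            self.free[dev] -= n
        self.machines[name] = dict(want)
        return self.machines[name]

    def decompose(self, name):
        # Return the machine's devices to the shared pool for reuse.
        for dev, n in self.machines.pop(name).items():
            self.free[dev] += n

pool = ResourcePool()
pool.compose("ai-train", gpu=4, nvme=8)  # scale up for a training run
print(pool.free["gpu"])                  # 4 GPUs left in the pool
pool.decompose("ai-train")               # release when the job ends
print(pool.free["gpu"])                  # back to 8
```

The point of the sketch is the economics in the abstract: because devices go back to the pool rather than sitting idle in a fixed server, the same hardware can serve prototype and production workloads in turn.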

Liqid provides the world’s most comprehensive software-defined composable infrastructure platform. The Liqid Composable platform empowers users to manage, scale, and configure physical, bare-metal server systems in seconds, then reallocate core data center devices on demand as workflows and business needs evolve. Liqid Command Center software enables users to dynamically right-size their IT resources on the fly: simply connect, click, and compose a bare-metal server.