xPU Deployment and Solutions Deep Dive

Presented by

Tim Michels, F5; Mario Baldi, AMD Pensando; Amit Radzi, NeuReality; John Kim, NVIDIA

About this talk

Our 1st and 2nd webcasts in this xPU series explained what xPUs are, how they work, and what they can do. In this 3rd webcast, we dive deeper into next steps for xPU deployment and solutions, discussing:

When to deploy
• Pros and cons of dedicated accelerator chips versus running everything on the CPU
• xPU use cases across hybrid, multi-cloud, and edge environments
• Cost and power considerations

Where to deploy
• Deployment operating models: edge, core data center, colocation, public cloud
• System location: in the server, with the storage, on the network, or in all of those locations?

How to deploy
• Mapping workloads to hyperconverged and disaggregated infrastructure
• Integrating xPUs into workload flows
• Applying offload and acceleration elements within an optimized solution
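The "when to deploy" question often comes down to the cost and power trade-off the talk highlights: CPU cores freed by offload versus the price and power draw of the xPU itself. The sketch below is a hypothetical first-order model of that comparison; the function name and all figures are illustrative assumptions, not vendor data or anything from the talk.

```python
# Hypothetical first-order cost model: compare the value of CPU cores
# freed by offloading infrastructure tasks (networking, storage, security)
# against the cost and power of the xPU that absorbs them.

def offload_savings(cores_freed: int,
                    cost_per_core: float,
                    core_watts: float,
                    xpu_cost: float,
                    xpu_watts: float,
                    power_cost_per_watt: float) -> float:
    """Return net savings in dollars; positive favors deploying the xPU."""
    cpu_side = cores_freed * (cost_per_core + core_watts * power_cost_per_watt)
    xpu_side = xpu_cost + xpu_watts * power_cost_per_watt
    return cpu_side - xpu_side

# Illustrative example: 8 cores freed at $300/core and 10 W each, versus
# a $1500 xPU drawing 60 W, with power valued at $5/W over system life.
savings = offload_savings(8, 300.0, 10.0, 1500.0, 60.0, 5.0)
print(f"net savings: ${savings:.2f}")
```

In practice the break-even point shifts with workload mix and deployment location (edge sites typically weight power more heavily than core data centers), which is why the talk treats cost and power as deployment-model-specific considerations.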
