xPU Deployment and Solutions Deep Dive

Presented by

Tim Michels, F5; Mario Baldi, AMD Pensando; Amit Radzi, NeuReality; John Kim, NVIDIA

About this talk

Our first and second webcasts in this xPU series explained what xPUs are, how they work, and what they can do. In this third webcast, we dive deeper into next steps for xPU deployment and solutions, discussing:

When to deploy
• Pros and cons of dedicated accelerator chips versus running everything on the CPU
• xPU use cases across hybrid, multi-cloud, and edge environments
• Cost and power considerations

Where to deploy
• Deployment operating models: edge, core data center, colocation, public cloud
• System location: in the server, with the storage, on the network, or in all of those locations?

How to deploy
• Mapping workloads to hyperconverged and disaggregated infrastructure
• Integrating xPUs into workload flows
• Applying offload and acceleration elements within an optimized solution