xPU Deployment and Solutions Deep Dive

Presented by

Tim Michels, F5; Mario Baldi, AMD Pensando; Amit Radzi, NeuReality; John Kim, NVIDIA

About this talk

Our first and second webcasts in this xPU series explained what xPUs are, how they work, and what they can do. In this third webcast, we will dive deeper into the next steps for xPU deployment and solutions, discussing:

When to deploy
• Pros and cons of dedicated accelerator chips versus running everything on the CPU
• xPU use cases across hybrid, multi-cloud and edge environments
• Cost and power considerations

Where to deploy
• Deployment operating models: Edge, Core Data Center, CoLo, Public Cloud
• System location: In the server, with the storage, on the network, or in all of those locations?

How to deploy
• Mapping workloads to hyperconverged and disaggregated infrastructure
• Integrating xPUs into workload flows
• Applying offload and acceleration elements within an optimized solution

More from this channel

With today’s pressure to lower carbon footprints and the cost constraints within organizations, IT departments are increasingly on the front line, expected to formulate and enact an IT strategy that greatly improves the energy efficiency and overall performance of data centers. This channel covers the strategic issues of ‘going green’ as well as practical tips and techniques for busy IT professionals managing their data centers. Channel discussion topics include:
• Data center efficiency, monitoring and infrastructure management
• Data center design, facilities management and convergence
• Cooling technologies and thermal management
And much more.