xPU Accelerator Offload Functions

Presented by

Joseph White, Dell; John Kim, NVIDIA; Mario Baldi, Pensando; Yadong Li, Intel; David McIntyre, Samsung

About this talk

In our first webcast, “SmartNICs and xPUs: Why is the Use of Accelerators Accelerating?”, we discussed the trend of deploying dedicated accelerator chips to assist or offload the main CPU. These new accelerators (xPUs) go by many names, such as SmartNIC, DPU, IPU, APU, and NAPU. This second webcast in the series takes a deeper dive into the accelerator offload functions of the xPU. We’ll discuss the problems xPUs are meant to solve, where in the system they live, and the functions they implement, focusing on:

Network Offloads
• Virtual switching and NPU
• P4 pipelines
• QoS and policy enforcement
• NIC functions
• Gateway functions (tunnel termination, load balancing, etc.)

Security
• Encryption
• Policy enforcement
• Key management and crypto
• Regular expression matching
• Firewall
• Deep Packet Inspection (DPI)

Compute
• AI calculations, model resolution
• General purpose processing (via local cores)
• Emerging use of P4 for general purpose processing

Storage
• Compression and data-at-rest encryption
• NVMe-oF offload
• Regular expression matching
• Storage stack offloads
