A Novel Packet-Based Accelerator for Resource-Constrained Edge Devices

Presented by

Sharad Chole, Chief Scientist, Expedera

About this talk

AI is rapidly moving from cloud platforms into edge devices, and edge processors are increasingly incorporating hardware accelerators for real-time AI processing. Faced with tight cost, bandwidth, and power constraints, edge AI processors must still deliver the high performance that system developers require. In this talk, Expedera co-founder and Chief Scientist Sharad Chole will discuss the problems chip designers and system architects face in applying AI within tight power, performance, and area budgets, and how state-of-the-art packet-based NPUs (Neural Processing Units) can be employed to meet and exceed system design goals.
About Expedera
Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI inference applications. Validated in third-party silicon, Expedera's solutions deliver superior performance and scale across a wide range of applications, from edge nodes and smartphones to automotive and data centers. Expedera's Origin deep learning accelerator products are easily integrated, readily scalable, and customizable to application requirements. The company is headquartered in Santa Clara, California. Visit expedera.com