Optimized Chip Design with Main Processors and AI Accelerators

Presented by

Paul Karazuba, VP of Marketing, Expedera & John Min, Director of Field Application Engineering, Andes Technology

About this talk

As AI capabilities begin large-scale deployment into edge devices, many designers wonder whether to use a specialized AI accelerator rather than simply relying on the system's main processor. In this first of two webinars on the topic, Paul Karazuba, VP of Marketing at Expedera, and John Min, Director of Field Application Engineering at Andes Technology, will explore the state of the art in CPUs and AI processing, and examine why leading semiconductor companies are adopting hybrid architectures that combine a main processor with a specialized AI processor.

Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI inference applications. Third-party silicon validated, Expedera’s solutions produce superior performance and are scalable to a wide range of applications from edge nodes and smartphones to automotive and data centers. Expedera’s Origin deep learning accelerator products are easily integrated, readily scalable, and can be customized to application requirements. The company is headquartered in Santa Clara, California. Visit expedera.com