An Ideal Neural Processing Engine for Always-sensing Deployments

Presented by

Sharad Chole, Expedera Chief Scientist and Co-founder

About this talk

Always-sensing cameras are emerging in smartphones, home appliances, and other consumer devices, much like the always-listening Siri or Google voice assistants. Always-on technologies enable a more natural and seamless user experience, allowing such features as automatic locking and unlocking of the device or display adjustment based on the user's gaze. However, camera data carries quality, richness, and privacy concerns that require specialized Artificial Intelligence (AI) processing, and existing system processors are ill-suited for always-sensing applications. Without careful attention to Neural Processing Unit (NPU) design, an always-sensing sub-system will consume excessive power, suffer from high latency, or put the user's privacy at risk, all leading to an unsatisfactory user experience. To process always-sensing data in a power-, latency-, and privacy-friendly manner, OEMs are turning to specialized “LittleNPU” AI processors. In this webinar, we’ll explore the architecture of always-sensing systems, discuss use cases, and offer guidance on how OEMs, chipmakers, and system architects can successfully evaluate, specify, and deploy an NPU in an always-on camera sub-system.
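To illustrate the kind of partitioning the talk describes, here is a minimal conceptual sketch of a two-stage always-sensing pipeline: a small detector running continuously on a low-power "LittleNPU" screens low-resolution frames inside the camera sub-system and wakes the main SoC only when something of interest appears. The function names (capture_low_res_frame, little_npu_face_present, wake_main_soc) and the threshold logic are illustrative assumptions for this sketch, not Expedera's API or any specific product behavior.

# Conceptual sketch (illustrative only, not Expedera's API): a two-stage
# always-sensing pipeline in which raw pixels never leave the low-power island.
import random
import time

def capture_low_res_frame():
    # Stand-in for the always-on camera front end (low resolution, low frame rate).
    # Returns a dummy frame of pixel-like values here.
    return [random.random() for _ in range(64)]

def little_npu_face_present(frame):
    # Stand-in for a tiny detection network running on the low-power NPU.
    # A real deployment would run a quantized model; here we fake a score.
    score = sum(frame) / len(frame)
    return score > 0.55  # hypothetical detection threshold

def wake_main_soc(event):
    # Only a compact event descriptor crosses the boundary, never raw frames,
    # which is what keeps power draw and privacy exposure low.
    print(f"Waking main SoC for event: {event}")

def always_sensing_loop(max_frames=100):
    for _ in range(max_frames):
        frame = capture_low_res_frame()
        if little_npu_face_present(frame):
            wake_main_soc("user_gaze_detected")
        # Frame buffers stay inside the always-sensing sub-system and are discarded.
        time.sleep(0.01)  # polling interval stand-in

if __name__ == "__main__":
    always_sensing_loop()

The design point of the sketch is the boundary: the continuously running stage is sized for the smallest model that can make a reliable wake decision, so the power-hungry main NPU and application processor stay asleep for the vast majority of frames.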
Expedera, Inc

267 subscribers · 9 talks
The official webinar feed of Expedera, Inc
Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI inference applications. Third-party silicon validated, Expedera’s solutions produce superior performance and are scalable to a wide range of applications from edge nodes and smartphones to automotive and data centers. Expedera’s Origin deep learning accelerator products are easily integrated, readily scalable, and can be customized to application requirements. The company is headquartered in Santa Clara, California. Visit expedera.com