Introducing NetsPresso – For Really Fast Inference on Cortex-A Devices

Presented by

Tae-Ho Kim, CTO & Co-Founder, Nota

About this talk

This talk takes you step by step through how Nota compressed a range of deep learning models for Arm devices, including ResNet18 on the Cortex-A72. By leveraging NetsPresso, a model compression platform built by Nota, you can optimize deep learning models for a variety of environments and use cases with no learning curve. You will also get a detailed look at how this technology is a key component of commercialized lightweight deep learning models for facial recognition, intelligent transportation systems, and more. If you need to deploy deep learning models on low-end devices, this talk is for you.
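As context for the kind of compression the talk covers: one widely used technique is magnitude-based filter pruning, which removes a convolutional layer's least important filters to shrink the model. The sketch below is illustrative only — NetsPresso's actual methods and APIs are not shown in this listing, and the function name and ratio here are assumptions.

```python
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """Illustrative magnitude-based filter pruning (not the NetsPresso API).

    weights: conv weight tensor shaped (out_channels, in_channels, kh, kw).
    Keeps the filters with the largest L1 norms and drops the rest.
    Returns the pruned tensor and the indices of the kept filters.
    """
    # Rank each output filter by the L1 norm of its weights.
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(weights.shape[0] * keep_ratio))
    # Take the n_keep highest-norm filters, in their original order.
    keep = np.sort(np.argsort(norms)[-n_keep:])
    return weights[keep], keep

# Toy example: a 3x3 conv layer with 8 output filters and 3 input channels.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
pruned, kept = prune_filters(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3) — half the filters removed
```

In a real pipeline, pruning a layer's output filters also requires slicing the matching input channels of the next layer, followed by fine-tuning to recover accuracy — steps a platform like NetsPresso automates.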

More from this channel

From providing the IP for the chip to delivering the cloud services that allow organizations to securely manage the deployment of products throughout their lifecycle, Arm delivers a complete Internet of Things (IoT) solution for our partners and customers. It’s rooted in our deep understanding of the future of compute and security.