This talk takes you step by step through how Nota compressed deep learning models for ARM devices, including ResNet18 on the Cortex-A72. By leveraging NetsPresso, a model compression platform built by Nota, you can optimize deep learning models for a range of environments and use cases with no learning curve.
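As a rough illustration of the kind of compression the talk covers, the sketch below applies generic L1 magnitude pruning to ResNet18 and exports it for an ARM target. This uses PyTorch's built-in pruning utilities and is an assumption for illustration only; it does not represent NetsPresso's actual workflow.

```python
# Minimal sketch: prune ResNet18 and export for ARM deployment.
# Assumes torch and torchvision are installed; NOT NetsPresso's pipeline.
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18

model = resnet18(weights=None)  # untrained model, for illustration
model.eval()

# Zero out 30% of the smallest-magnitude weights in every conv layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Trace to TorchScript so the model can run on an ARM board
# (e.g. a Cortex-A72 device) without a Python runtime.
example = torch.randn(1, 3, 224, 224)
torch.jit.trace(model, example).save("resnet18_pruned.pt")
```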
You will also get a detailed look at how this technology serves as a key component of commercialized lightweight deep learning models for facial recognition, intelligent transportation systems, and other applications.
If you need to deploy deep learning models on low-end devices, this talk is for you.