Deploy ML Models Faster on Arm Using Open Source Apache TVM with OctoML

Presented by

Jason Knight (CPO, OctoML), Andrew Reusch (Head of Embedded ML, OctoML) & Mary Bennion (AI Ecosystem Manager, Arm)

About this talk

Arm AI and OctoML Webinar

Hand-written operator libraries underpin most deep learning software solutions today. These libraries are often incomplete and can’t keep up with the rapid pace of machine learning innovation, which slows your time to market and limits application flexibility when building tinyML solutions on ultra-low-power devices.

In this webinar, OctoML shows you how to solve these challenges by making machine learning models faster and easier to put into production. We’ll discuss:

- How to use the open source Apache TVM project to generate and optimize machine learning code for your Arm processor.
- How to generate zero-dependency deep learning binaries ready to link into your application.
- How quantization to int8, int4, and even binary operators reduces compute and memory requirements.
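To make the last point concrete, here is a minimal sketch of what symmetric int8 quantization does to a float32 weight tensor: each value is mapped to an 8-bit integer via a single scale factor, cutting memory per weight by 4x and enabling integer arithmetic on the device. This is a generic illustration in pure Python, not TVM’s or OctoML’s actual quantization pass; the function names are hypothetical.

```python
# Hedged sketch: symmetric int8 quantization of a float weight tensor.
# One scale factor maps floats into [-127, 127]; dequantizing recovers
# an approximation of the original values.

def quantize_int8(values):
    """Quantize a list of floats to int8 range with a shared scale."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate floats."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.01, 1.0]          # pretend float32 weights
q, scale = quantize_int8(weights)           # 1 byte per weight vs 4
approx = dequantize(q, scale)               # close to the originals
```

Real compilers like TVM extend this idea per-channel and fold the scale factors into the surrounding operators, which is part of what the webinar covers.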


From providing the IP for the chip to delivering the cloud services that allow organizations to securely manage the deployment of products throughout their lifecycle, Arm delivers a complete Internet of Things (IoT) solution for our partners and customers. It’s rooted in our deep understanding of the future of compute and security.