Sun 18 Jun 2023, 10:40 - 11:00, at Magnolia 7-8 - LCTES: Code Gen. Chair(s): Bernhard Egger

Running machine learning inference on tiny devices, known as TinyML, is an emerging research area. It requires generating inference code that uses memory frugally, something standard ML frameworks are ill-suited for. A deployment framework for TinyML must (a) be parametric in the number representation, to take advantage of emerging representations such as posits; (b) carefully assign high precision to a few tensors, so that most tensors can be kept in low precision while model accuracy is maintained; and (c) avoid memory fragmentation. We describe MinUn, the first TinyML framework that holistically addresses these issues to generate efficient code for ARM microcontrollers (e.g., Arduino Uno, Arduino Due, and STM32H747) that outperforms prior TinyML frameworks.
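To make requirement (a) concrete, the sketch below shows one common way inference code can be made parametric in the number representation: a kernel is written once against an abstract scalar type, and a compile-time switch selects the concrete representation (a posit backend would slot in the same way via its add/mul routines). This is a minimal illustration, not MinUn's actual API; the names num_t, num_add, num_mul, the USE_FIXED16 flag, and the Q7.8 scale are all hypothetical choices.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical abstraction layer: the kernel below never names a
 * concrete type, so swapping representations needs no kernel changes. */
#if defined(USE_FIXED16)            /* 16-bit Q7.8 fixed-point backend */
typedef int16_t num_t;
#define NUM_SCALE 8
static inline num_t num_add(num_t a, num_t b) { return (num_t)(a + b); }
static inline num_t num_mul(num_t a, num_t b) {
    /* Widen to 32 bits for the product, then rescale back to Q7.8. */
    return (num_t)(((int32_t)a * (int32_t)b) >> NUM_SCALE);
}
#define NUM_FROM_DOUBLE(x) ((num_t)((x) * (1 << NUM_SCALE)))
#define NUM_TO_DOUBLE(x)   ((double)(x) / (1 << NUM_SCALE))
#else                               /* default: IEEE 754 float backend */
typedef float num_t;
static inline num_t num_add(num_t a, num_t b) { return a + b; }
static inline num_t num_mul(num_t a, num_t b) { return a * b; }
#define NUM_FROM_DOUBLE(x) ((num_t)(x))
#define NUM_TO_DOUBLE(x)   ((double)(x))
#endif

/* Dot product written once, reused unchanged by every backend. */
static num_t dot(const num_t *x, const num_t *w, int n) {
    num_t acc = NUM_FROM_DOUBLE(0.0);
    for (int i = 0; i < n; i++)
        acc = num_add(acc, num_mul(x[i], w[i]));
    return acc;
}

int main(void) {
    num_t x[3] = { NUM_FROM_DOUBLE(0.5), NUM_FROM_DOUBLE(1.25),
                   NUM_FROM_DOUBLE(-0.75) };
    num_t w[3] = { NUM_FROM_DOUBLE(2.0), NUM_FROM_DOUBLE(0.5),
                   NUM_FROM_DOUBLE(1.0) };
    printf("dot = %f\n", NUM_TO_DOUBLE(dot(x, w, 3)));  /* expect 0.875 */
    return 0;
}
```

Compiling with -DUSE_FIXED16 selects the fixed-point backend; requirement (b) then amounts to choosing, per tensor, which such backend (and bitwidth) each buffer uses.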

Sun 18 Jun

Displayed time zone: Eastern Time (US & Canada)

10:00 - 11:00
LCTES: Code Gen at Magnolia 7-8
Chair(s): Bernhard Egger (Seoul National University)


10:00 | 20m | Talk | LCTES
Facilitating the Bootstrapping of a New ISA
Abigail Mortensen (Florida State University), Scott Pomerville (Michigan Technological University), David B. Whalley (Florida State University), Soner Onder (Michigan Technological University), Gang-Ryung Uh (Florida State University)
DOI
10:20 | 20m | Talk | LCTES
Synchronization-aware NAS for an Efficient Collaborative Inference on Mobile Platforms
Beom Woo Kang (Hanyang University), Junho Wohn (Hanyang University), Seongju Lee (Hanyang University), Sunghyun Park (University of Michigan), Yung-Kyun Noh (Hanyang University), Yongjun Park (Yonsei University)
DOI
10:40 | 20m | Talk | LCTES
MinUn: Accurate ML Inference on Microcontrollers (Virtual)
Shikhar Jaiswal (Microsoft Research), Rahul Kranti Kiran Goli (Microsoft Research), Aayan Kumar (Microsoft Research), Vivek Seshadri (Microsoft Research), Rahul Sharma (Microsoft Research)
DOI · Pre-print · Media Attached