Proceedings of tinyML Research Symposium – 2023
MetaLDC: Meta Learning of Low-Dimensional Computing Classifiers for Fast On-Device Adaption – Yejia Liu, Shijin Duan, Xiaolin Xu, Shaolei Ren [arxiv] [paper]
Memory-Oriented Design-Space Exploration of Edge-AI Hardware for XR Applications – Vivek Parmar, Syed Shakib Sarwar, Ziyun Li, Hsien-Hsin S. Lee, Barbara De Salvo, Manan Suri [arxiv] [paper]
TinyRCE: Forward Learning under Tiny Constraints – Danilo Pau [arxiv] [paper]
Design and analysis of hardware friendly pruning algorithms to accelerate deep neural networks at the edge – Christin Bose [arxiv] [paper]
FMAS: Fast Multi-Objective SuperNet Architecture Search for Semantic Segmentation – Zhuoran Xiong, Marihan Amein, Olivier Therrien, Warren J. Gross, Brett H. Meyer [arxiv] [paper]
Classification of Depth-Map Image on AI Microcontroller – Jesse Santos [arxiv] [paper]
SSS3D: Fast Neural Architecture Search For Efficient Three-Dimensional Semantic Segmentation – Olivier Therrien [arxiv] [paper]
AugViT: Improving Vision Transformer Training by Marrying Attention and Data Augmentation – Zhongzhi Yu [arxiv] [paper]
Automatic Network Adaptation for Ultra-Low Uniform-Precision Quantization – Seongmin Park, Beomseok Kwon, Jieun Lim, Kyuyoung Sim, Tae-Ho Kim, Jungwook Choi [arxiv] [paper]
Fused Depthwise Tiling for Memory Optimization in TinyML Deep Neural Network Inference – Rafael Stahl [arxiv] [paper]
Data Aware Neural Architecture Search – Emil Njor [arxiv] [paper]
Benchmarking and modeling of analog and digital SRAM in-memory computing architectures – Pouya Houshmand [arxiv] [paper]
MEMA Runtime Framework: Minimizing External Memory Accesses for TinyML on Microcontrollers – Tianmu Li [arxiv] [paper]
Training Neural Networks for Execution on Approximate Hardware – Vikas Natesh [arxiv] [paper]
How Tiny Can Analog Filterbank Features Be Made for Ultra-low-power On-device Keyword Spotting? – Subhajit Ray [arxiv] [paper]