Quantization-Guided Training for Compact TinyML Models

Post date: February 14, 2021