SWIS – Shared Weight bIt Sparsity for Efficient Neural Network Acceleration

Post date: February 14, 2021