arXiv submission date: 2026-02-26
📄 Abstract - Bitwise Systolic Array Architecture for Runtime-Reconfigurable Multi-precision Quantized Multiplication on Hardware Accelerators

Neural network accelerators have been widely applied to edge devices for complex tasks like object tracking, image recognition, etc. Previous works have explored quantization techniques in lightweight accelerator designs to reduce hardware resource consumption. However, low precision leads to high accuracy loss during inference. Mixed-precision quantization is therefore an alternative solution, applying different precisions to different layers to trade off resource consumption against accuracy. Because regular hardware multiplier designs cannot support runtime precision reconfiguration for a multi-precision Quantized Neural Network (QNN) model, we propose a runtime-reconfigurable multi-precision multi-channel bitwise systolic array design for QNN accelerators. We have implemented and evaluated our work on the Ultra96 FPGA platform. Results show that our design achieves 1.3185 to 3.5671 times speedup when inferring mixed-precision models and has a shorter critical path delay, supporting a higher clock frequency (250 MHz).
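To make the core idea concrete, here is a minimal illustrative sketch (not the paper's actual hardware design) of why bitwise decomposition enables runtime precision reconfiguration: an N-bit by M-bit unsigned product can be computed entirely from 1-bit AND partial products plus shift-and-add, so the operand bit-widths become runtime parameters rather than a fixed property of the multiplier. The function name and loop structure below are hypothetical, chosen only for illustration.

```python
def bitwise_multiply(a: int, b: int, a_bits: int, b_bits: int) -> int:
    """Multiply two unsigned operands using only 1-bit ANDs and
    shift-adds -- the kind of primitive a bitwise processing element
    in a systolic array could implement. Precision (a_bits, b_bits)
    is a runtime parameter, not a hardware constant."""
    acc = 0
    for i in range(a_bits):              # walk the bits of a
        a_i = (a >> i) & 1
        for j in range(b_bits):          # walk the bits of b
            b_j = (b >> j) & 1
            acc += (a_i & b_j) << (i + j)  # 1-bit partial product, shifted
    return acc

# Runtime reconfiguration: the same loop serves different layer precisions.
print(bitwise_multiply(13, 11, 4, 4))    # 4-bit x 4-bit  -> 143
print(bitwise_multiply(200, 100, 8, 8))  # 8-bit x 8-bit  -> 20000
```

In hardware, each 1-bit partial product maps to a single AND gate, and the accumulation maps to an adder tree or an accumulation pass through the array; lowering a layer's precision then simply shortens the bit loops, which is the source of the speedup on mixed-precision models.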

Top-level tags: systems, model training, machine learning
Detailed tags: hardware accelerator, quantization, systolic array, FPGA, mixed-precision

Bitwise Systolic Array Architecture for Runtime-Reconfigurable Multi-precision Quantized Multiplication on Hardware Accelerators


1️⃣ One-sentence summary

This paper proposes a novel hardware architecture that can flexibly switch computation precision at runtime, allowing neural network accelerators deployed on edge devices to maintain high inference accuracy while running efficiently and with low energy consumption.

Source: arXiv: 2602.23334