AutoNeural: Co-Designing Vision-Language Models for NPU Inference
1️⃣ One-Sentence Summary
This paper proposes AutoNeural, a new vision-language model architecture co-designed around the hardware characteristics of Neural Processing Units (NPUs). It addresses the inefficiency of existing models on NPUs, enabling faster and more stable multimodal AI inference on edge devices.
While Neural Processing Units (NPUs) offer high theoretical efficiency for edge AI, state-of-the-art Vision-Language Models (VLMs) designed for GPUs often falter on this hardware. We attribute the hardware-model mismatch to two primary factors: the quantization brittleness of Vision Transformers (ViTs), and the I/O-bound nature of autoregressive attention, which fails to utilize the high arithmetic throughput of NPUs. To bridge this gap, we propose AutoNeural, an NPU-native VLM architecture co-designed for integer-only inference. We replace the standard ViT encoder with a MobileNetV5-style backbone built on depthwise separable convolutions, which keeps activation distributions bounded for stable INT4/8/16 quantization. Complementing this, our language backbone integrates State-Space Model (SSM) principles with Transformer layers, employing efficient gated convolutions to achieve linear-time complexity. This hybrid design eliminates the heavy memory I/O overhead of key-value caching during generation. Our approach delivers substantial efficiency gains, reducing the vision encoder's quantization error by up to 7x and end-to-end latency by 14x compared to conventional baselines. AutoNeural also achieves 3x faster decoding and a 4x longer context window than the baseline. We validate these improvements in a real-world automotive case study on the Qualcomm SA8295P SoC, demonstrating real-time performance for cockpit applications. Our results highlight that rethinking model topology for NPU constraints is a prerequisite for robust multimodal edge intelligence.
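To make the first design point concrete, here is a minimal sketch of the kind of quantization-friendly block the abstract describes. This is not the authors' released code; the layer sizes and the ReLU6 activation are illustrative assumptions in the spirit of MobileNet-family designs, where hard-clipped activations keep the dynamic range bounded so low-bit integer quantization stays stable.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Illustrative quantization-friendly block (hypothetical, not the paper's code)."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise 3x3: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # Pointwise 1x1: mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # ReLU6 clamps activations to [0, 6], so the quantizer's scale can be
        # fixed in advance instead of chasing long-tailed outliers.
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

# Usage: one block with stride 2 halves the spatial resolution.
block = DepthwiseSeparableBlock(32, 64, stride=2)
feat = block(torch.randn(1, 32, 112, 112))  # -> (1, 64, 56, 56)
```

Because every activation is capped at 6, per-tensor INT4/8/16 grids cover the full range without the activation outliers that the abstract identifies as the source of ViT quantization brittleness.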
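The claim that gated convolutions remove the KV-cache I/O bottleneck can be illustrated the same way. The layer below is a hypothetical stand-in, not the paper's exact SSM-hybrid layer: a depthwise causal convolution with a multiplicative gate. Its decode step needs only a rolling buffer of the last `kernel_size` inputs, so per-token state is constant-size, unlike attention's cache, which grows with context length.

```python
import torch
import torch.nn as nn

class GatedCausalConv(nn.Module):
    """Illustrative gated causal-convolution token mixer (hypothetical)."""
    def __init__(self, dim: int, kernel_size: int = 4):
        super().__init__()
        self.kernel_size = kernel_size
        # Depthwise conv over time; left-padding keeps it causal.
        self.conv = nn.Conv1d(dim, dim, kernel_size, groups=dim,
                              padding=kernel_size - 1)
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); trim the right-side padding for causality.
        h = self.conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return self.proj(h * torch.sigmoid(self.gate(x)))

    @torch.no_grad()
    def step(self, x_t: torch.Tensor, buf: torch.Tensor):
        # One decode step: buf holds the last kernel_size inputs (fixed size),
        # playing the role a growing KV cache would play in attention.
        buf = torch.cat([buf[:, 1:], x_t.unsqueeze(1)], dim=1)  # roll buffer
        w = self.conv.weight.squeeze(1)                 # (dim, kernel_size)
        h = (buf.transpose(1, 2) * w).sum(-1) + self.conv.bias
        return self.proj(h * torch.sigmoid(self.gate(x_t))), buf

# Usage: streaming decode with a constant-size state buffer.
layer = GatedCausalConv(dim=256)
buf = torch.zeros(1, layer.kernel_size, 256)        # state never grows
y, buf = layer.step(torch.randn(1, 256), buf)       # one token per call
```

Per generated token, this layer reads a fixed-size buffer rather than the entire history, which is one plausible mechanism behind the constant memory traffic and longer usable context the abstract reports.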