QuantVLA: Scale-Calibrated Post-Training Quantization for Vision-Language-Action Models
1️⃣ One-Sentence Summary
This paper proposes QuantVLA, a post-training quantization framework that, without any retraining, substantially compresses the memory footprint of vision-language-action models and accelerates inference, while matching or even exceeding the original models' performance, offering a practical route to deploying these complex AI models on resource-constrained devices.
Vision-language-action (VLA) models unify perception, language, and control for embodied agents but face significant challenges in practical deployment due to rapidly increasing compute and memory demands, especially as models scale to longer horizons and larger backbones. To address these bottlenecks, we introduce QuantVLA, a training-free post-training quantization (PTQ) framework that, to our knowledge, is the first PTQ approach for VLA systems and the first to successfully quantize a diffusion transformer (DiT) action head. QuantVLA incorporates three scale-calibrated components: (1) a selective quantization layout that integerizes all linear layers in both the language backbone and the DiT while keeping attention projections in floating point to preserve the original operator schedule; (2) attention temperature matching, a lightweight per-head scaling mechanism that stabilizes attention logits and is folded into the dequantization scales at inference; and (3) output head balancing, a per-layer residual interface calibration that mitigates post-projection energy drift. The framework requires no additional training, uses only a small unlabeled calibration buffer, and supports integer kernels for low-bit weights and activations while leaving the architecture unchanged. Across representative VLA models on LIBERO, QuantVLA exceeds the task success rates of full-precision baselines, achieves about 70% relative memory savings on the quantized components, and delivers a 1.22x speedup in end-to-end inference latency, providing a practical pathway toward scalable low-bit embodied intelligence under strict compute, memory, and power constraints.
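To make the quantization mechanics concrete, below is a minimal sketch of two ideas the abstract mentions: per-channel integer weight quantization of linear layers, and folding a per-head attention "temperature" into the dequantization scales so no extra multiply is needed at inference. All function names, shapes, and the symmetric int8 scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def quantize_weights_int8(w):
    # Per-output-channel symmetric int8 quantization, a standard PTQ recipe.
    # (The exact bit-width and scheme QuantVLA uses is not shown here.)
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)       # guard all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def fold_attention_temperature(dequant_scale, temperature):
    # "Attention temperature matching" sketch: a per-head scalar that
    # rescales attention logits. At inference it can be absorbed into the
    # dequantization scale of the query projection, so the stabilization
    # costs nothing extra. `temperature` is a hypothetical (n_heads,) array;
    # `dequant_scale` is (n_heads, head_dim).
    return dequant_scale * temperature[:, None]

# Toy usage: quantize a weight matrix, check reconstruction error, and
# fold a per-head temperature into the resulting scales.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16)).astype(np.float32)
q, s = quantize_weights_int8(w)
w_hat = q.astype(np.float32) * s                   # dequantized approximation
err = np.abs(w - w_hat).max()                      # bounded by 0.5 * max scale

fused = fold_attention_temperature(s.reshape(2, 4), np.array([1.1, 0.9]))
```

The folding step illustrates why the temperature mechanism is "lightweight": it changes only calibration-time constants, leaving the integer kernels and operator schedule untouched.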
Source: arXiv: 2602.20309