BFA++: Hierarchical Best-Feature-Aware Token Pruning for Multi-View Vision Language Action Models
1️⃣ One-sentence summary
This paper proposes BFA++, a dynamic token-pruning framework that uses a hierarchical strategy to intelligently select the key visual information across multi-view images, significantly accelerating robotic manipulation models while maintaining, and even improving, task success rates.
Vision-Language-Action (VLA) models have achieved significant breakthroughs by leveraging Large Vision Language Models (VLMs) to jointly interpret instructions and visual inputs. However, the substantial increase in visual tokens, particularly from multi-view inputs, poses serious challenges to real-time robotic manipulation. Existing acceleration techniques for VLMs, such as token pruning, often degrade performance when applied directly to VLA models, as they overlook the relationships between different views and fail to account for the dynamic, task-specific characteristics of robotic operation. To address this, we propose BFA++, a dynamic token pruning framework designed specifically for VLA models. BFA++ introduces a hierarchical pruning strategy guided by two-level importance predictors: an intra-view predictor highlights task-relevant regions within each image to suppress spatial noise, while an inter-view predictor identifies critical camera views throughout different manipulation phases to reduce cross-view redundancy. This design enables efficient token selection while preserving essential visual cues, resulting in improved computational efficiency and higher manipulation success rates. Evaluations on the RoboTwin benchmark and real-world robotic tasks demonstrate that BFA++ consistently outperforms existing methods. BFA++ improves the success rate by about 10% on both the π0 and RDT models, achieving speedups of 1.8× and 1.5×, respectively. Our results highlight that context-sensitive, task-aware token pruning is a more effective strategy than full visual processing, enabling faster inference and improved manipulation accuracy in real-world robotic systems.
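To make the two-level idea concrete, here is a minimal sketch of hierarchical token pruning. It is not the paper's implementation; the function name, the score inputs, and the proportional budget-allocation rule are illustrative assumptions. The inter-view scores decide how much of a global token budget each camera view receives, and the intra-view scores pick the top tokens within each view:

```python
import numpy as np

def hierarchical_prune(view_tokens, intra_scores, inter_scores, keep_ratio=0.5):
    """Two-stage token pruning sketch (hypothetical, not the paper's code).

    view_tokens : list of (N_v, D) arrays, one per camera view
    intra_scores: list of (N_v,) per-token importance scores
    inter_scores: (V,) per-view importance scores
    keep_ratio  : fraction of all tokens to keep globally
    """
    inter = np.asarray(inter_scores, dtype=float)
    weights = inter / inter.sum()                  # normalized view importance
    total = sum(len(t) for t in view_tokens)
    budget = int(total * keep_ratio)               # global token budget

    kept = []
    for tokens, scores, w in zip(view_tokens, intra_scores, weights):
        # Inter-view stage: important views get a larger share of the budget.
        k = min(len(tokens), max(1, int(round(budget * w))))
        # Intra-view stage: keep the top-k tokens by predicted importance.
        idx = np.argsort(np.asarray(scores))[::-1][:k]
        kept.append(tokens[np.sort(idx)])          # preserve token order
    return kept
```

With a 3:1 view-importance ratio and `keep_ratio=0.5`, a dominant view keeps three times as many tokens as a redundant one, which mirrors the abstract's point that cross-view redundancy, not just per-image noise, is what a VLA-specific pruner must exploit.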
Source: arXiv:2602.20566