ATLAS: An Annotation Tool for Long-horizon Robotic Action Segmentation
1️⃣ One-sentence summary
This paper presents ATLAS, an annotation tool designed specifically for long-horizon robotic manipulation tasks. It displays multi-view video in sync with sensor signals such as force and gripper state, supports the major robotics dataset formats, and, through a keyboard-shortcut-driven interface, reduces average per-action annotation time by at least 6% while markedly improving the agreement of annotated boundaries with expert annotations.
Annotating long-horizon robotic demonstrations with precise temporal action boundaries is crucial for training and evaluating action segmentation and manipulation policy learning methods. Existing annotation tools, however, are often limited: they are designed primarily for vision-only data, do not natively support synchronized visualization of robot-specific time-series signals (e.g., gripper state or force/torque), or require substantial effort to adapt to different dataset formats. In this paper, we introduce ATLAS, an annotation tool tailored for long-horizon robotic action segmentation. ATLAS provides time-synchronized visualization of multi-modal robotic data, including multi-view video and proprioceptive signals, and supports annotation of action boundaries, action labels, and task outcomes. The tool natively handles widely used robotics dataset formats such as ROS bags and the Reinforcement Learning Dataset (RLDS) format, and provides direct support for specific datasets such as REASSEMBLE. ATLAS can be easily extended to new formats via a modular dataset abstraction layer. Its keyboard-centric interface minimizes annotation effort and improves efficiency. In experiments on a contact-rich assembly task, ATLAS reduced the average per-action annotation time by at least 6% compared to ELAN, while the inclusion of time-series data improved temporal alignment with expert annotations by more than 2.8% and decreased boundary error fivefold compared to vision-only annotation tools.
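The abstract mentions a modular dataset abstraction layer that lets ATLAS be extended to new formats beyond ROS bags and RLDS. The paper does not specify its API, but such a layer is commonly a format-agnostic loader interface that the UI queries for time-aligned frames. The sketch below is a hypothetical illustration (all class and method names are assumptions, not from the paper):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Frame:
    """One time-synchronized slice of a demonstration."""
    timestamp: float
    images: Dict[str, bytes] = field(default_factory=dict)   # camera name -> encoded frame
    signals: Dict[str, float] = field(default_factory=dict)  # e.g. gripper width, force-z

class DatasetLoader(ABC):
    """Hypothetical format-agnostic interface an annotation UI could consume.
    A ROS-bag or RLDS backend would each subclass this."""

    @abstractmethod
    def camera_names(self) -> List[str]: ...

    @abstractmethod
    def signal_names(self) -> List[str]: ...

    @abstractmethod
    def duration(self) -> float: ...

    @abstractmethod
    def frame_at(self, t: float) -> Frame: ...

class InMemoryLoader(DatasetLoader):
    """Toy in-memory backend standing in for a real format adapter."""

    def __init__(self, frames: List[Frame]):
        self._frames = sorted(frames, key=lambda f: f.timestamp)

    def camera_names(self) -> List[str]:
        return sorted({c for f in self._frames for c in f.images})

    def signal_names(self) -> List[str]:
        return sorted({s for f in self._frames for s in f.signals})

    def duration(self) -> float:
        return self._frames[-1].timestamp if self._frames else 0.0

    def frame_at(self, t: float) -> Frame:
        # Nearest-timestamp lookup keeps video and proprioceptive
        # signals aligned on the shared timeline.
        return min(self._frames, key=lambda f: abs(f.timestamp - t))
```

Under this design, adding support for a new dataset format only requires implementing one loader subclass; the visualization and annotation layers stay unchanged.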
Source: arXiv: 2604.26637