arXiv submission date: 2026-04-23
📄 Abstract - EgoMAGIC: An Egocentric Video Field Medicine Dataset for Training Perception Algorithms

This paper introduces EgoMAGIC (Medical Assistance, Guidance, Instruction, and Correction), an egocentric medical activity dataset collected as part of DARPA's Perceptually-enabled Task Guidance (PTG) program. The dataset comprises 3,355 videos of 50 medical tasks, with at least 50 labeled videos per task. The primary objective of the PTG program was to develop virtual assistants integrated into augmented reality headsets that help users perform complex tasks. To encourage exploration and research using this dataset, the medical training data has been released along with an action detection challenge focused on eight medical tasks. The majority of the videos were recorded using a head-mounted stereo camera with integrated audio. From this dataset, 40 YOLO models were trained on 1.95 million labels to detect 124 medical objects, providing a robust starting point for developers working on medical AI applications. In addition to introducing the dataset, this paper presents baseline action detection results on the eight selected medical tasks across three models, with the best-performing method achieving an average mAP of 0.526. Although this paper primarily addresses action detection as the benchmark, the EgoMAGIC dataset is equally suitable for action recognition, object identification and detection, error detection, and other challenging computer vision tasks. The dataset is accessible via this http URL (DOI: https://doi.org/10.5281/zenodo.19239154).
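The benchmark figure above (average mAP of 0.526) refers to mean average precision, the standard detection metric: per-class average precision is the area under the precision-recall curve built from confidence-ranked detections, and mAP averages it over classes. A minimal sketch of that computation, using the common interpolated precision envelope (the toy detection lists are illustrative, not from EgoMAGIC):

```python
def average_precision(detections, num_gt):
    """AP for one class.

    detections: list of (confidence, is_true_positive) tuples,
                one per predicted box/segment.
    num_gt:     number of ground-truth instances for this class.
    """
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    precisions, recalls = [], []
    for _, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # Precision envelope: at each recall level, take the max precision
    # achievable at that recall or higher.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Area under the (interpolated) precision-recall curve.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap


def mean_average_precision(per_class):
    """per_class: list of (detections, num_gt) pairs, one entry per class."""
    aps = [average_precision(dets, n) for dets, n in per_class]
    return sum(aps) / len(aps)
```

For example, two detections for a class with two ground-truth instances, where a false positive is ranked between them, gives AP = 1·0.5 + (2/3)·0.5 ≈ 0.833.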

Top tags: computer vision, medical, benchmark
Detailed tags: egocentric video, action detection, object detection, medical dataset, yolo

EgoMAGIC: An Egocentric Video Field Medicine Dataset for Training Perception Algorithms


1️⃣ One-sentence summary

This paper introduces EgoMAGIC, a new egocentric medical activity video dataset comprising 3,355 videos of 50 medical tasks, along with pretrained YOLO models for detecting medical objects, providing an important benchmark for developing augmented-reality-assisted medical AI.

Source: arXiv:2604.22036