arXiv submission date: 2025-12-12
📄 Abstract - The N-Body Problem: Parallel Execution from Single-Person Egocentric Video

Humans can intuitively parallelise complex activities, but can a model learn this from observing a single person? Given one egocentric video, we introduce the N-Body Problem: how N individuals can hypothetically perform the same set of tasks observed in this video. The goal is to maximise speed-up, but naive assignment of video segments to individuals often violates real-world constraints, leading to physically impossible scenarios such as two people using the same object or occupying the same space. To address this, we formalise the N-Body Problem and propose a suite of metrics to evaluate both performance (speed-up, task coverage) and feasibility (spatial collisions, object conflicts, and causal constraints). We then introduce a structured prompting strategy that guides a Vision-Language Model (VLM) to reason about the 3D environment, object usage, and temporal dependencies to produce a viable parallel execution. On 100 videos from EPIC-Kitchens and HD-EPIC, our method for N = 2 boosts action coverage by 45% over a baseline prompt for Gemini 2.5 Pro, while simultaneously cutting collision rates, object conflicts, and causal conflicts by 55%, 45%, and 55% respectively.
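The feasibility metrics from the abstract can be illustrated with a minimal sketch. The data structures below (`Segment`, its fields, and the counting functions) are hypothetical illustrations, not the paper's actual representation: a parallel execution is modelled as scheduled segments, and we count concurrent segments by different people that share an object, and segments that start before a dependency has finished.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One action segment assigned to one person (hypothetical schema)."""
    start: float                                  # scheduled start time (s)
    end: float                                    # scheduled end time (s)
    person: int                                   # which of the N individuals acts
    objects: set = field(default_factory=set)     # objects the segment uses
    deps: set = field(default_factory=set)        # ids of segments that must finish first
    id: int = 0

def overlaps(a: Segment, b: Segment) -> bool:
    """True if the two segments run concurrently."""
    return a.start < b.end and b.start < a.end

def object_conflicts(schedule: list) -> int:
    """Count pairs of concurrent segments, by different people, sharing an object."""
    conflicts = 0
    for i, a in enumerate(schedule):
        for b in schedule[i + 1:]:
            if a.person != b.person and overlaps(a, b) and a.objects & b.objects:
                conflicts += 1
    return conflicts

def causal_violations(schedule: list) -> int:
    """Count segments scheduled to start before one of their dependencies ends."""
    by_id = {s.id: s for s in schedule}
    return sum(1 for s in schedule for d in s.deps if s.start < by_id[d].end)
```

For example, two people both using a pan over overlapping time intervals register one object conflict, and a "serve" segment that starts before the "cook" segment it depends on has finished registers one causal violation. Spatial collisions would need 3D trajectories and are omitted here.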

Top-level tags: agents, computer vision, multi-modal
Detailed tags: egocentric video, parallel execution, vision-language model, action planning, constraint reasoning

The N-Body Problem: Parallel Execution from Single-Person Egocentric Video


1️⃣ One-sentence summary

This paper proposes a method that analyses a single person's egocentric (first-person) video to plan how multiple people could safely and efficiently perform the same set of tasks in parallel, substantially improving throughput while avoiding real-world conflicts.


Source: arXiv:2512.11393