arXiv submission date: 2026-03-11
📄 Abstract - MoXaRt: Audio-Visual Object-Guided Sound Interaction for XR

In Extended Reality (XR), complex acoustic environments often overwhelm users, compromising both scene awareness and social engagement due to entangled sound sources. We introduce MoXaRt, a real-time XR system that uses audio-visual cues to separate these sources and enable fine-grained sound interaction. MoXaRt's core is a cascaded architecture that performs coarse, audio-only separation in parallel with visual detection of sources (e.g., faces, instruments). These visual anchors then guide refinement networks to isolate individual sources, separating complex mixes of up to 5 concurrent sources (e.g., 2 voices + 3 instruments) with ~2-second processing latency. We validate MoXaRt through a technical evaluation on a new dataset of 30 one-minute recordings featuring concurrent speech and music, and a 22-participant user study. Empirical results indicate that our system significantly enhances speech intelligibility, yielding a 36.2% (p < 0.01) increase in listening comprehension within adversarial acoustic environments while substantially reducing cognitive load (p < 0.001), thereby paving the way for more perceptive and socially adept XR experiences.
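To make the cascaded design concrete, here is a minimal Python sketch of the data flow the abstract describes: an audio-only coarse stage and a visual detector run as independent branches, and each detected anchor then conditions a per-source refinement step. All function names, shapes, and the placeholder logic below are illustrative assumptions, not the paper's published code.

```python
import numpy as np

# Hypothetical stage names for illustration only; each function is a
# stand-in for a learned network in the paper's cascaded architecture.

def coarse_audio_separation(mixture: np.ndarray, n_stems: int) -> np.ndarray:
    """Audio-only stage: split the mixture into rough stems.
    Placeholder: returns n_stems equal copies scaled by 1/n_stems."""
    return np.tile(mixture / n_stems, (n_stems, 1))

def detect_visual_anchors(frame: np.ndarray) -> list[dict]:
    """Visual stage (runs in parallel): detect sound-producing objects
    such as faces or instruments. Placeholder returns fixed boxes."""
    return [{"label": "face", "box": (0, 0, 64, 64)},
            {"label": "guitar", "box": (64, 0, 128, 64)}]

def refine_with_anchor(stem: np.ndarray, anchor: dict) -> np.ndarray:
    """Anchor-guided refinement network; here just an identity stub."""
    return stem

def separate(mixture: np.ndarray, frame: np.ndarray) -> dict[str, np.ndarray]:
    anchors = detect_visual_anchors(frame)                   # visual branch
    stems = coarse_audio_separation(mixture, len(anchors))   # audio branch
    return {a["label"]: refine_with_anchor(s, a)
            for s, a in zip(stems, anchors)}

# ~2 s chunk of a 48 kHz mono mixture plus one video frame
out = separate(np.random.randn(96_000), np.zeros((128, 128, 3)))
print({name: stem.shape for name, stem in out.items()})
```

Running the visual and audio branches in parallel, as sketched here, is consistent with the reported ~2-second latency: neither branch waits on the other, and only the lightweight refinement is serialized.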

Top-level tags: multi-modal systems, audio
Detailed tags: audio-visual separation, extended reality, sound interaction, real-time system, speech intelligibility

MoXaRt: Audio-Visual Object-Guided Sound Interaction for XR


1️⃣ One-Sentence Summary

This paper presents MoXaRt, a real-time extended-reality system that combines visual object detection with audio processing to separate up to five concurrent sound sources (such as voices and instruments) in complex, noisy environments, significantly improving users' listening comprehension in XR while reducing cognitive load.
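The "sound interaction" the system enables is not detailed in this summary, but one plausible reading is per-source gain control over the separated stems. The sketch below is a hypothetical illustration of that idea; the `remix` function, stem names, and gain values are assumptions, not the authors' interface.

```python
import numpy as np

def remix(stems: dict[str, np.ndarray], gains: dict[str, float]) -> np.ndarray:
    """Apply a per-source gain to each separated stem and sum the
    result back into a single playback signal (default gain 1.0)."""
    return sum(gains.get(name, 1.0) * stem for name, stem in stems.items())

# Illustrative stems standing in for MoXaRt's separated outputs
stems = {"voice_a": np.random.randn(96_000),
         "voice_b": np.random.randn(96_000),
         "guitar":  np.random.randn(96_000)}

# e.g., boost the conversation partner and duck the music
playback = remix(stems, {"voice_a": 1.5, "guitar": 0.3})
```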

Source: arXiv:2603.10465