MoXaRt: Audio-Visual Object-Guided Sound Interaction for XR
1️⃣ One-sentence summary
This paper presents MoXaRt, a real-time Extended Reality (XR) system that combines visual object detection with audio processing to separate up to five concurrent sound sources (e.g., voices and instruments) in complex, noisy environments, significantly improving users' speech intelligibility in XR while reducing cognitive load.
In Extended Reality (XR), complex acoustic environments often overwhelm users, compromising both scene awareness and social engagement due to entangled sound sources. We introduce MoXaRt, a real-time XR system that uses audio-visual cues to separate these sources and enable fine-grained sound interaction. MoXaRt's core is a cascaded architecture that performs coarse, audio-only separation in parallel with visual detection of sources (e.g., faces, instruments). These visual anchors then guide refinement networks to isolate individual sources, separating complex mixes of up to 5 concurrent sources (e.g., 2 voices + 3 instruments) with ~2 second processing latency. We validate MoXaRt through a technical evaluation on a new dataset of 30 one-minute recordings featuring concurrent speech and music, and a 22-participant user study. Empirical results indicate that our system significantly enhances speech intelligibility, yielding a 36.2% (p < 0.01) increase in listening comprehension within adversarial acoustic environments while substantially reducing cognitive load (p < 0.001), thereby paving the way for more perceptive and socially adept XR experiences.
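The cascaded architecture described above can be illustrated with a minimal sketch: a coarse audio-only stage produces rough stems, a visual stage detects sound-emitting anchors (faces, instruments), and an anchor-guided stage refines each stem. All function names and the toy refinement logic here are hypothetical illustrations, not the paper's actual networks.

```python
# Hypothetical sketch of a cascaded audio-visual separation pipeline
# in the spirit of MoXaRt. The functions below are placeholders:
# the real system uses learned separation and refinement networks.

def coarse_separate(mixture, max_sources=5):
    """Audio-only stage: split the mixture into rough candidate stems."""
    # Placeholder: evenly partition signal energy across candidate stems.
    return [[s / max_sources for s in mixture] for _ in range(max_sources)]

def detect_visual_anchors(frame):
    """Visual stage: keep detections of sound-emitting object classes."""
    return [obj for obj in frame if obj["kind"] in ("face", "instrument")]

def refine_with_anchor(stem, anchor):
    """Anchor-guided refinement: attenuate a stem whose anchor is silent."""
    gain = 1.0 if anchor["active"] else 0.1  # toy stand-in for a network
    return [s * gain for s in stem]

def moxart_pipeline(mixture, frame):
    """Run the full cascade: coarse stems -> visual anchors -> refinement."""
    stems = coarse_separate(mixture)
    anchors = detect_visual_anchors(frame)
    # Pair each detected anchor with one coarse stem and refine it.
    return [refine_with_anchor(stem, a) for stem, a in zip(stems, anchors)]
```

The key design point the sketch mirrors is that visual detection runs in parallel with coarse audio separation, and only the refinement stage couples the two modalities.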
Source: arXiv: 2603.10465