G-STAR:端到端的全局说话人追踪与属性识别 / G-STAR: End-to-End Global Speaker-Tracking Attributed Recognition
1️⃣ One-sentence summary
This paper proposes an end-to-end system called G-STAR, which couples a time-aware speaker-tracking module with a Speech-LLM transcription backbone to tackle two difficulties in long-form, multi-speaker overlapped speech: producing accurate timestamps and keeping speaker identities consistent across chunks.
We study timestamped speaker-attributed ASR for long-form, multi-party speech with overlap, where chunk-wise inference must preserve meeting-level speaker identity consistency while producing time-stamped, speaker-labeled transcripts. Prior Speech-LLM systems tend to prioritize either local diarization or global labeling, and often fail to capture fine-grained temporal boundaries or to link identities robustly across chunks. We propose G-STAR, an end-to-end system that couples a time-aware speaker-tracking module with a Speech-LLM transcription backbone. The tracker provides structured speaker cues with temporal grounding, and the LLM generates attributed text conditioned on these cues. G-STAR supports both component-wise optimization and joint end-to-end training, enabling flexible learning under heterogeneous supervision and domain shift. Experiments analyze cue fusion, local versus long-context trade-offs, and hierarchical objectives.
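The cross-chunk identity linking described above can be illustrated with a minimal sketch, assuming speakers are represented by embedding vectors: each chunk's local speakers are matched against a global registry by cosine similarity, reusing an existing global ID when similarity clears a threshold and allocating a new ID otherwise. The function name, data layout, and threshold below are hypothetical illustrations, not the paper's actual method.

```python
import math


def _cosine(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def link_chunk_speakers(chunk_embs, registry, threshold=0.7):
    """Map each local speaker in a chunk to a meeting-level global ID.

    chunk_embs: dict local_id -> embedding (list of floats)
    registry:   list of [global_id, centroid, count], mutated in place
    Returns:    dict local_id -> global_id
    """
    mapping = {}
    for local_id, emb in chunk_embs.items():
        best_idx, best_sim = None, threshold
        for i, (gid, centroid, count) in enumerate(registry):
            sim = _cosine(emb, centroid)
            if sim >= best_sim:
                best_idx, best_sim = i, sim
        if best_idx is None:
            # Unseen speaker: open a new global identity.
            gid = len(registry)
            registry.append([gid, list(emb), 1])
        else:
            # Known speaker: reuse the ID and update its running centroid.
            gid, centroid, count = registry[best_idx]
            centroid = [(c * count + e) / (count + 1)
                        for c, e in zip(centroid, emb)]
            registry[best_idx] = [gid, centroid, count + 1]
        mapping[local_id] = gid
    return mapping
```

Processing chunks in order through a shared `registry` is what gives each speaker one consistent label across the whole recording, even though each chunk is decoded independently.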
Source: arXiv:2603.10468