Hearing to Translate: The Effectiveness of Speech Modality Integration into LLMs
1️⃣ One-Sentence Summary
Through large-scale experiments, this paper finds that for speech translation, current approaches that integrate speech directly into large language models are still, overall, less reliable than traditional cascaded systems that first transcribe the speech to text and then translate it.
As Large Language Models (LLMs) expand beyond text, integrating speech as a native modality has given rise to SpeechLLMs, which aim to translate spoken language directly, thereby bypassing traditional transcription-based pipelines. Whether this integration improves speech-to-text translation quality over established cascaded architectures, however, remains an open question. We present Hearing to Translate, the first comprehensive test suite rigorously benchmarking 5 state-of-the-art SpeechLLMs against 16 strong direct and cascade systems that couple leading speech foundation models (SFMs) with multilingual LLMs. Our analysis spans 16 benchmarks, 13 language pairs, and 9 challenging conditions, including disfluent, noisy, and long-form speech. Across this extensive evaluation, we find that cascaded systems remain the most reliable overall, while current SpeechLLMs only match cascades in selected settings and SFMs lag behind both, highlighting that integrating an LLM, either within the model or in a pipeline, is essential for high-quality speech translation.
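The cascade-versus-direct distinction the abstract draws can be sketched as two call paths. This is a minimal illustrative sketch only; every function below is a hypothetical toy stub standing in for real models (an SFM for transcription, a multilingual LLM for translation, a SpeechLLM for direct translation), not the paper's actual systems:

```python
# Hedged sketch: cascaded vs. direct speech-to-text translation (ST).
# All components are toy stubs, not real model calls.

def sfm_transcribe(audio: bytes) -> str:
    """Stub speech foundation model: audio -> source-language transcript."""
    return "hello world"  # placeholder transcript

def llm_translate(text: str, target_lang: str) -> str:
    """Stub multilingual LLM: source text -> target-language text."""
    return {"de": "hallo welt"}.get(target_lang, text)

def cascade_st(audio: bytes, target_lang: str) -> str:
    """Cascade: transcribe first, then translate the transcript."""
    return llm_translate(sfm_transcribe(audio), target_lang)

def speechllm_st(audio: bytes, target_lang: str) -> str:
    """Direct SpeechLLM: maps audio straight to target-language text,
    with no intermediate transcript."""
    return "hallo welt"  # placeholder direct translation

print(cascade_st(b"<audio>", "de"))    # hallo welt
print(speechllm_st(b"<audio>", "de"))  # hallo welt
```

The paper's finding is that, despite the appeal of skipping the intermediate transcript, the two-step `cascade_st` path remains the more reliable of the two in most evaluated conditions.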
Source: arXiv:2512.16378