Measuring the Redundancy of Decoder Layers in SpeechLLMs
1️⃣ One-Sentence Summary
This paper finds that the decoder of speech large language models (SpeechLLMs) is highly redundant: up to 40% of its layers can be pruned while performance remains good, and the redundancy pattern is consistent across tasks and languages, opening the door to more efficient, lightweight multi-task speech models.
Speech Large Language Models route speech encoder representations into an LLM decoder that typically accounts for over 90% of total parameters. We study how much of this decoder capacity is actually needed for speech tasks. Across two LLM families and three scales (1-8B), we show that decoder redundancy is largely inherited from the pretrained LLM: text and speech inputs yield similar redundant blocks. We then measure excess capacity by pruning decoder layers and analysing post-pruning healing to increase robustness. Our findings show that 7-8B models retain good ASR performance with only 60% of decoder layers, and the same trend extends to smaller scales with reduced pruning tolerance. We then generalise to speech translation and show that the same blocks of layers are redundant across speech encoders, tasks and languages, indicating that a more global redundancy structure exists and enabling a single pruned, multi-task SpeechLLM backbone to be deployed.
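The abstract does not spell out how a redundant block of decoder layers is located, but a common approach in the layer-pruning literature is to compare the residual-stream activation entering a contiguous block with the activation leaving it: if the two are nearly identical (high cosine similarity), the block is approximately an identity map and can be removed. The sketch below illustrates this idea on synthetic activations; the function name `redundant_block` and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def redundant_block(hidden_states, block_len):
    """Return the start index of the contiguous block of `block_len`
    layers whose input and output residual-stream activations are most
    similar (i.e. the block acting closest to an identity map)."""
    # hidden_states: (num_layers + 1, dim) activations for one token,
    # where row 0 is the embedding output and row i is layer i's output.
    n = hidden_states.shape[0] - 1
    best_start, best_sim = 0, -1.0
    for start in range(n - block_len + 1):
        a = hidden_states[start]              # activation entering the block
        b = hidden_states[start + block_len]  # activation leaving the block
        sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        if sim > best_sim:
            best_start, best_sim = start, sim
    return best_start, best_sim

# Toy demo: layers 4-7 barely change the residual stream (redundant).
rng = np.random.default_rng(0)
states = [rng.normal(size=64)]
for layer in range(12):
    scale = 0.01 if 4 <= layer < 8 else 1.0
    states.append(states[-1] + scale * rng.normal(size=64))
states = np.stack(states)

start, sim = redundant_block(states, block_len=4)
print(start, round(sim, 3))  # identifies the near-identity block at layer 4
```

Once such a block is found, the pruned model simply skips those layers, and a short "healing" finetune (mentioned in the abstract) recovers most of the lost performance.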
Source: arXiv:2603.05121