arXiv submission date: 2026-01-26
📄 Abstract - SICL-AT: Another way to adapt Auditory LLM to low-resource task

Auditory Large Language Models (LLMs) have demonstrated strong performance across a wide range of speech and audio understanding tasks. Nevertheless, they often struggle when applied to low-resource or unfamiliar tasks. When labeled in-domain data is scarce or mismatched to the true test distribution, direct fine-tuning can be brittle. In-Context Learning (ICL) provides a training-free, inference-time alternative: the auditory LLM is adapted by conditioning on a few in-domain demonstrations. In this work, we first show that \emph{Vanilla ICL} improves zero-shot performance across diverse speech and audio tasks for selected models, which suggests that this ICL adaptation capability generalizes to the multimodal setting. Building on this, we propose \textbf{Speech In-Context Learning Adaptation Training (SICL-AT)}, a post-training recipe that uses only high-resource speech data to strengthen the model's in-context learning capability. The resulting improvement generalizes to audio understanding and reasoning tasks. Experiments indicate that our proposed method consistently outperforms direct fine-tuning in low-resource scenarios.
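To make the vanilla ICL setup concrete, below is a minimal sketch of how a few-shot prompt for an auditory LLM might be assembled at inference time. The chat-style message schema, the `Demo` helper, and the `auditory_llm.generate` call are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of vanilla in-context learning (ICL) for an auditory LLM.
# The message schema and the `auditory_llm.generate` call are hypothetical
# placeholders; the paper does not specify a concrete API.

from dataclasses import dataclass
from typing import List


@dataclass
class Demo:
    audio_path: str  # in-domain audio clip used as a demonstration
    label: str       # ground-truth answer for that clip


def build_icl_prompt(demos: List[Demo], test_audio: str,
                     instruction: str) -> List[dict]:
    """Interleave k labeled demonstrations before the unlabeled test clip.

    The model adapts purely by conditioning on the demonstrations at
    inference time; no gradient updates are performed.
    """
    messages = []
    for demo in demos:
        messages.append({
            "role": "user",
            "content": [{"type": "audio", "path": demo.audio_path},
                        {"type": "text", "text": instruction}],
        })
        messages.append({"role": "assistant", "content": demo.label})
    # The test clip comes last; the model completes the demonstrated pattern.
    messages.append({
        "role": "user",
        "content": [{"type": "audio", "path": test_audio},
                    {"type": "text", "text": instruction}],
    })
    return messages


# Example: 3-shot adaptation to a hypothetical low-resource keyword task.
demos = [Demo("demo1.wav", "yes"), Demo("demo2.wav", "no"),
         Demo("demo3.wav", "yes")]
prompt = build_icl_prompt(demos, "test.wav", "Which keyword is spoken?")
# answer = auditory_llm.generate(prompt)  # hypothetical model call
```

The key property this sketch illustrates is that adaptation happens entirely in the prompt: swapping in demonstrations from a new low-resource task changes the model's behavior without any fine-tuning.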

Top tags: llm audio model training
Detailed tags: in-context learning speech understanding low-resource adaptation multimodal llm post-training

SICL-AT: Another way to adapt Auditory LLM to low-resource task


1️⃣ One-sentence summary

This paper proposes a post-training method called SICL-AT, which uses only high-resource speech data to strengthen an auditory LLM's in-context learning capability, enabling it to outperform direct fine-tuning on low-resource audio understanding tasks where data is scarce or mismatched to the test distribution.

Source: arXiv:2601.18904