Confidence-Calibrated Small-Large Language Model Collaboration for Cost-Efficient Reasoning
1️⃣ One-Sentence Summary
This paper proposes a collaborative system called COREA: a low-cost small model first attempts to answer each question and assesses its own confidence; if that confidence is insufficient, the question is handed off to an expensive large model. This preserves high accuracy while significantly reducing the cost of invoking the large model.
Large language models (LLMs) demonstrate superior reasoning capabilities compared to small language models (SLMs), but incur substantially higher costs. We propose COllaborative REAsoner (COREA), a system that cascades an SLM with an LLM to achieve a balance between accuracy and cost in complex reasoning tasks. COREA first attempts to answer questions using the SLM, which outputs both an answer and a verbalized confidence score. Questions with confidence below a predefined threshold are deferred to the LLM for more accurate resolution. We introduce a reinforcement learning-based training algorithm that aligns the SLM's confidence through an additional confidence calibration reward. Extensive experiments demonstrate that our method jointly improves the SLM's reasoning ability and confidence calibration across diverse datasets and model backbones. Compared to using the LLM alone, COREA reduces cost by 21.5% and 16.8% on out-of-domain math and non-math datasets, respectively, with only an absolute pass@1 drop within 2%.
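The cascading mechanism described above can be sketched in a few lines. This is a minimal illustration of confidence-threshold routing, not the paper's actual implementation: the `cascade` function, the toy models, and the 0.8 threshold are all illustrative assumptions.

```python
# Sketch of COREA-style cascading, per the abstract: try the SLM first,
# defer to the LLM when the SLM's verbalized confidence is below a threshold.
# Interfaces and the threshold value are illustrative assumptions.

def cascade(question, slm, llm, threshold=0.8):
    """Route a question through the SLM-then-LLM cascade.

    `slm` returns (answer, confidence in [0, 1]); `llm` returns an answer.
    Returns (answer, which_model_answered).
    """
    answer, confidence = slm(question)
    if confidence >= threshold:
        return answer, "slm"          # confident enough: keep the cheap answer
    return llm(question), "llm"       # low confidence: defer to the large model


# Hypothetical stand-ins for the two models, for demonstration only.
def toy_slm(question):
    return ("42", 0.9 if "easy" in question else 0.2)

def toy_llm(question):
    return "42 (verified)"

print(cascade("easy arithmetic", toy_slm, toy_llm))  # answered by the SLM
print(cascade("hard proof", toy_slm, toy_llm))       # deferred to the LLM
```

In the paper, the confidence is not a heuristic like the toy model's but a verbalized score that the SLM is trained, via a reinforcement-learning calibration reward, to align with its actual correctness.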
Source: arXiv:2603.03752