📄 Abstract - RefineBench: Evaluating Refinement Capability of Language Models via Checklists

Can language models (LMs) self-refine their own responses? This question is increasingly relevant as a wide range of real-world user interactions involve refinement requests. However, prior studies have largely tested LMs' refinement abilities on verifiable tasks such as competition math or symbolic reasoning with simplified scaffolds, whereas users often pose open-ended queries and provide varying degrees of feedback on what they desire. The recent advent of reasoning models that exhibit self-reflection patterns in their chains-of-thought further motivates this question. To analyze this, we introduce RefineBench, a benchmark of 1,000 challenging problems across 11 domains paired with a checklist-based evaluation framework. We evaluate two refinement modes: (1) guided refinement, where an LM is provided natural language feedback, and (2) self-refinement, where LMs attempt to improve without guidance. In the self-refinement setting, even frontier LMs such as Gemini 2.5 Pro and GPT-5 achieve modest baseline scores of 31.3% and 29.1%, respectively, and most models fail to consistently improve across iterations (e.g., Gemini 2.5 Pro gains only +1.8%, while DeepSeek-R1 declines by 0.1%). By contrast, in guided refinement, both proprietary LMs and large open-weight LMs (>70B) can leverage targeted feedback to refine responses to near-perfect levels within five turns. These findings suggest that frontier LMs require breakthroughs to self-refine their incorrect responses, and that RefineBench provides a valuable testbed for tracking progress.
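The abstract describes a checklist-based evaluation with two refinement modes (guided vs. self-refinement) iterated over up to five turns. The following is a minimal Python sketch of how such a loop could be structured; the function names, prompts, and scoring rule are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a checklist-based refinement loop in the spirit of
# RefineBench. `model` and `judge` are assumed callables supplied by the user;
# the prompt wording and scoring rule are assumptions, not the paper's exact setup.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Problem:
    prompt: str
    checklist: List[str]  # natural-language criteria the response should satisfy


def checklist_score(response: str, checklist: List[str],
                    judge: Callable[[str, str], bool]) -> float:
    """Fraction of checklist items the judge deems satisfied (checklist assumed non-empty)."""
    passed = sum(judge(response, item) for item in checklist)
    return passed / len(checklist)


def refine(problem: Problem,
           model: Callable[[str], str],
           judge: Callable[[str, str], bool],
           guided: bool,
           max_turns: int = 5) -> float:
    """Iteratively refine a response for up to `max_turns` turns.

    guided=True  -> feed the unmet checklist items back as natural-language feedback
    guided=False -> only ask the model to improve its own answer (self-refinement)
    """
    response = model(problem.prompt)
    for _ in range(max_turns):
        unmet = [c for c in problem.checklist if not judge(response, c)]
        if not unmet:
            break  # all checklist items satisfied; stop refining
        if guided:
            feedback = "Your answer misses: " + "; ".join(unmet)
        else:
            feedback = "Review your answer and improve it if anything is wrong or missing."
        response = model(
            f"{problem.prompt}\n\nPrevious answer:\n{response}\n\n{feedback}"
        )
    return checklist_score(response, problem.checklist, judge)
```

Under this framing, the paper's contrast between the two modes reduces to whether the unmet checklist items are exposed to the model as feedback or withheld.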

Top-level tags: llm, model evaluation, benchmark
Detailed tags: refinement capability, self-correction, evaluation framework, feedback, reasoning models

RefineBench: Evaluating Refinement Capability of Language Models via Checklists


1️⃣ One-Sentence Summary

This paper introduces a new benchmark called RefineBench and finds that current state-of-the-art language models struggle to correct their own wrong answers without external guidance, yet improve substantially once given explicit feedback, revealing the limits of their self-refinement ability.

