LLM Olympiad: Why Model Evaluation Needs a Sealed Exam
1️⃣ One-Sentence Summary
This paper argues that, to avoid problems common in current LLM evaluation such as score-chasing and data leakage, the community should adopt an Olympiad-style "sealed exam" mechanism: keep the test problems secret until evaluation, freeze model submissions in advance, and run everything through a unified evaluation harness, so that results are trustworthy and reproducible.
Benchmarks and leaderboards are how NLP most often communicates progress, but in the LLM era they are increasingly easy to misread. Scores can reflect benchmark-chasing, hidden evaluation choices, or accidental exposure to test content, not just broad capability. Closed benchmarks delay some of these issues, but reduce transparency and make it harder for the community to learn from results. We argue for a complementary practice: an Olympiad-style evaluation event where problems are sealed until evaluation, submissions are frozen in advance, and all entries run through one standardized harness. After scoring, the full task set and evaluation code are released so results can be reproduced and audited. This design aims to make strong performance harder to "manufacture" and easier to trust.
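The paper describes the sealed-then-released workflow at the protocol level without prescribing an implementation. One way organizers could make the "sealed until evaluation" claim auditable is a simple commit-reveal scheme: publish a cryptographic hash of the task set before the event, then release the tasks afterward so anyone can verify they match. The sketch below illustrates this idea; the function names and task format are hypothetical, not from the paper.

```python
import hashlib
import json

def commit(tasks: list[dict]) -> str:
    """Hash a canonical serialization of the sealed task set.

    Organizers publish this digest *before* the evaluation event,
    committing to the tasks without revealing them.
    """
    blob = json.dumps(tasks, sort_keys=True, ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def verify(released_tasks: list[dict], published_digest: str) -> bool:
    """After the tasks are released, anyone can re-hash them and
    check that they match the pre-published commitment."""
    return commit(released_tasks) == published_digest

# Before the event: hash the sealed tasks and publish only the digest.
sealed = [
    {"id": 1, "prompt": "Translate the following passage..."},
    {"id": 2, "prompt": "Prove or disprove the claim..."},
]
digest = commit(sealed)

# After the event: release the tasks; auditors confirm nothing changed.
assert verify(sealed, digest)

# A tampered or substituted task set fails verification.
assert not verify([{"id": 1, "prompt": "an easier question"}], digest)
```

The same commitment idea extends to frozen submissions: hashing each model artifact or API configuration at the freeze deadline gives auditors a way to confirm that the evaluated system is the one that was submitted.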
From arXiv: 2603.23292