arXiv submission date: 2026-03-23
📄 Abstract - Silicon Bureaucracy and AI Test-Oriented Education: Contamination Sensitivity and Score Confidence in LLM Benchmarks

Public benchmarks increasingly govern how large language models (LLMs) are ranked, selected, and deployed. We frame this benchmark-centered regime as Silicon Bureaucracy and AI Test-Oriented Education, and argue that it rests on a fragile assumption: that benchmark scores directly reflect genuine generalization. In practice, however, such scores may conflate exam-oriented competence with principled capability, especially when contamination and semantic leakage are difficult to exclude from modern training pipelines. We therefore propose an audit framework for analyzing contamination sensitivity and score confidence in LLM benchmarks. Using a router-worker setup, we compare a clean-control condition with noisy conditions in which benchmark problems are systematically deleted, rewritten, and perturbed before being passed downstream. For a genuinely clean benchmark, noisy conditions should not consistently outperform the clean-control baseline. Yet across multiple models, we find widespread but heterogeneous above-baseline gains under noisy conditions, indicating that benchmark-related cues may be reassembled and can reactivate contamination-related memory. These results suggest that similar benchmark scores may carry substantially different levels of confidence. Rather than rejecting benchmarks altogether, we argue that benchmark-based evaluation should be supplemented with explicit audits of contamination sensitivity and score confidence.
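The audit idea in the abstract can be illustrated with a minimal sketch. The following code is a hypothetical toy implementation, not the paper's actual pipeline: `model` stands in for any downstream worker callable, and the two word-level perturbations are simplified stand-ins for the paper's deletion/rewriting/perturbation conditions. For a genuinely clean benchmark, the noisy-condition scores should not consistently exceed the clean-control baseline.

```python
import random

random.seed(0)  # fixed seed so the perturbations are reproducible

def delete_words(text, frac=0.2):
    """Noisy condition: randomly drop a fraction of the words."""
    return " ".join(w for w in text.split() if random.random() > frac)

def shuffle_words(text):
    """Noisy condition: perturb word order."""
    words = text.split()
    random.shuffle(words)
    return " ".join(words)

def audit(model, benchmark):
    """Score a model under a clean-control condition and two noisy
    conditions.  `benchmark` is a list of (question, answer) pairs;
    `model` maps a question string to an answer string.  Noisy scores
    persistently above the clean baseline would suggest that residual
    benchmark cues are reactivating contamination-related memory."""
    conditions = {
        "clean": lambda q: q,       # clean-control baseline
        "deleted": delete_words,
        "shuffled": shuffle_words,
    }
    return {
        name: sum(model(perturb(q)) == a for q, a in benchmark) / len(benchmark)
        for name, perturb in conditions.items()
    }
```

In this sketch the audit signal is simply the gap between each noisy-condition accuracy and the clean accuracy; the paper's framework additionally reasons about the confidence that a given score reflects genuine capability.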

Top tags: llm benchmark model evaluation
Detailed tags: contamination evaluation generalization robustness semantic leakage

Silicon Bureaucracy and AI Test-Oriented Education: Contamination Sensitivity and Score Confidence in LLM Benchmarks


1️⃣ One-Sentence Summary

This paper argues that the current practice of evaluating large language models via public benchmarks is risky, because high scores may stem from "memorization" of test data rather than genuine generalization, and it proposes an audit framework to quantify how this "test-oriented" contamination affects the confidence that can be placed in a score.

From arXiv: 2603.21636