arXiv submission date: 2026-01-19
📄 Abstract - SciCoQA: Quality Assurance for Scientific Paper–Code Alignment

We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases to ensure faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and to scale our dataset, we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze the paper-code discrepancies in detail and propose discrepancy types and categories to better understand the occurring mismatches. In total, our dataset consists of 611 paper-code discrepancies (81 real, 530 synthetic), spanning diverse computational science disciplines, including AI, Physics, Quantitative Biology, and others. Our evaluation of 21 LLMs highlights the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpus. The best performing model in our evaluation, GPT-5, can only detect 45.7% of real-world paper-code discrepancies.
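The headline numbers in the abstract are self-consistent, which can be checked with simple arithmetic. A minimal sketch (the per-instance scoring protocol is not given in the abstract, so the detected count here is a hypothetical value chosen to match the reported 45.7%):

```python
# Sketch of the dataset composition and the reported detection rate.
# Counts come from the abstract; `detected` is a hypothetical value
# consistent with the reported 45.7% on real-world instances.
real_instances = 81
synthetic_instances = 530
total = real_instances + synthetic_instances
assert total == 611  # matches the dataset size reported in the abstract

detected = 37  # hypothetical: 37 / 81 ≈ 45.7%
rate = detected / real_instances
print(f"GPT-5 real-world detection rate: {rate:.1%}")
```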

Top-level tags: llm model evaluation data
Detailed tags: scientific code, paper-code alignment, discrepancy detection, synthetic data generation, benchmark

SciCoQA: Quality Assurance for Scientific Paper–Code Alignment


1️⃣ One-sentence summary

This paper introduces SciCoQA, a new dataset for detecting discrepancies between scientific papers and their corresponding codebases, used to evaluate how well large language models can find such problems; the results show that even the best current models fail to identify most real-world paper-code inconsistencies.

Source: arXiv: 2601.12910