arXiv submission date: 2026-01-06
📄 Abstract - EpiQAL: Benchmarking Large Language Models in Epidemiological Question Answering for Enhanced Alignment and Reasoning

Reliable epidemiological reasoning requires synthesizing study evidence to infer disease burden, transmission dynamics, and intervention effects at the population level. Existing medical question answering benchmarks primarily emphasize clinical knowledge or patient-level reasoning, yet few systematically evaluate evidence-grounded epidemiological inference. We present EpiQAL, the first diagnostic benchmark for epidemiological question answering across diverse diseases, comprising three subsets built from open-access literature. The subsets respectively evaluate text-grounded factual recall, multi-step inference linking document evidence with epidemiological principles, and conclusion reconstruction with the Discussion section withheld. Construction combines expert-designed taxonomy guidance, multi-model verification, and retrieval-based difficulty control. Experiments on ten open models reveal that current LLMs show limited performance on epidemiological reasoning, with multi-step inference posing the greatest challenge. Model rankings shift across subsets, and scale alone does not predict success. Chain-of-Thought prompting benefits multi-step inference but yields mixed results elsewhere. EpiQAL provides fine-grained diagnostic signals for evidence grounding, inferential reasoning, and conclusion reconstruction.

Top-level tags: llm benchmark medical
Detailed tags: epidemiological reasoning question answering evidence grounding multi-step inference model evaluation

EpiQAL: Benchmarking Large Language Models in Epidemiological Question Answering for Enhanced Alignment and Reasoning


1️⃣ One-sentence summary

This paper introduces EpiQAL, the first benchmark dedicated to evaluating large language models' epidemiological reasoning ability. It finds that current models show limited performance in this domain, struggling most with tasks that require multi-step, evidence-grounded inference.

Source: arXiv:2601.03471