📄 Abstract - Generalist Large Language Models Outperform Clinical Tools on Medical Benchmarks

Specialized clinical AI assistants are rapidly entering medical practice, often framed as safer or more reliable than general-purpose large language models (LLMs). Yet, unlike frontier models, these clinical tools are rarely subjected to independent, quantitative evaluation, creating a critical evidence gap despite their growing influence on diagnosis, triage, and guideline interpretation. We assessed two widely deployed clinical AI systems (OpenEvidence and UpToDate Expert AI) against three state-of-the-art generalist LLMs (GPT-5, Gemini 3 Pro, and Claude Sonnet 4.5) using a 1,000-item mini-benchmark combining MedQA (medical knowledge) and HealthBench (clinician-alignment) tasks. Generalist models consistently outperformed clinical tools, with GPT-5 achieving the highest scores, while OpenEvidence and UpToDate demonstrated deficits in completeness, communication quality, context awareness, and systems-based safety reasoning. These findings reveal that tools marketed for clinical decision support may often lag behind frontier LLMs, underscoring the urgent need for transparent, independent evaluation before deployment in patient-facing workflows.
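The abstract does not publish the evaluation harness, but the MedQA half of the mini-benchmark is a standard multiple-choice accuracy measurement. The sketch below is a minimal illustration of how such scoring could work, not the paper's method: the file name `medqa_usmle.jsonl`, the per-item JSON schema, and the `ask` callable are all assumptions made for this example. HealthBench-style grading of free-text answers against clinician rubrics would require a separate, more involved grader.

```python
import json
import random
from typing import Callable

def load_medqa(path: str, n: int, seed: int = 0) -> list[dict]:
    """Sample n multiple-choice items from a MedQA-style JSONL file.

    Assumed schema per line (hypothetical, for illustration only):
    {"question": str, "options": {"A": str, ...}, "answer": "A"}
    """
    with open(path) as f:
        items = [json.loads(line) for line in f]
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible sample
    return items[:n]

def accuracy(ask: Callable[[str], str], items: list[dict]) -> float:
    """Score a model callable by exact match on the answer letter."""
    correct = 0
    for item in items:
        options = "\n".join(f"{k}. {v}" for k, v in sorted(item["options"].items()))
        prompt = (
            f"{item['question']}\n\n{options}\n\n"
            "Answer with the single letter of the best option."
        )
        reply = ask(prompt).strip()
        correct += reply[:1].upper() == item["answer"].upper()
    return correct / len(items)

if __name__ == "__main__":
    items = load_medqa("medqa_usmle.jsonl", n=500)
    # A random-guess baseline stands in for a real client here; in practice,
    # `ask` would wrap an API call to each system under test (GPT-5,
    # Gemini 3 Pro, Claude Sonnet 4.5) or the clinical tool's interface.
    guess = lambda _prompt: random.choice("ABCDE")
    print(f"random baseline: {accuracy(guess, items):.1%}")
```

Comparing systems then reduces to running `accuracy` once per model wrapper over the same sampled items, so that score differences reflect the models rather than the item mix.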

Top-level tags: medical llm model evaluation
Detailed tags: clinical ai benchmark medical knowledge decision support evaluation

Generalist Large Language Models Outperform Clinical Tools on Medical Benchmarks


1️⃣ One-Sentence Summary

This study finds that frontier general-purpose large language models such as GPT-5 outperform commercially deployed clinical decision-support AI tools on tests of medical knowledge and clinical reasoning, exposing the risk that such tools reach deployment without independent evaluation.
