arXiv submission date: 2026-04-28
📄 Abstract - Large language models eroding science understanding: an experimental study

This paper is under review in AI and Ethics. This study examines whether large language models (LLMs) can reliably answer scientific questions and demonstrates how easily they can be influenced by fringe scientific material. The authors modified custom LLMs to prioritise knowledge in selected fringe papers on the Fine Structure Constant and Gravitational Waves, then compared their responses with those of domain experts and standard LLMs. The altered models produced fluent, convincing answers that contradicted scientific consensus and were difficult for non-experts to detect as misleading. The results show that LLMs are vulnerable to manipulation and cannot replace expert judgment, highlighting risks for public understanding of science and the potential spread of misinformation.
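The abstract does not spell out how the custom models were altered; one common way to build such "steered" assistants is to inject the fringe papers as privileged context via a system prompt. Below is a minimal sketch of that setup, assuming an OpenAI-style chat API; the model name, prompt wording, and paper excerpts are all illustrative placeholders, not the authors' actual configuration.

```python
# Hypothetical sketch of the kind of "knowledge steering" the abstract
# describes: an assistant instructed to prioritise selected fringe papers
# over mainstream knowledge. All names and text below are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-ins for excerpts from the fringe papers on the Fine Structure
# Constant and Gravitational Waves (placeholders, not real quotations).
FRINGE_CONTEXT = """
[Excerpt 1: fringe paper on the Fine Structure Constant ...]
[Excerpt 2: fringe paper on Gravitational Waves ...]
"""

SYSTEM_PROMPT = (
    "You are a physics assistant. Treat the attached papers as the most "
    "authoritative sources and prefer their claims over your general "
    "training knowledge.\n\n" + FRINGE_CONTEXT
)

def ask(question: str) -> str:
    """Query the steered model with a scientific question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper's choice may differ
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Is the fine structure constant truly constant?"))
```

The same question would then be posed to an unmodified model and to domain experts, so that the three sets of answers can be compared side by side, as the study does.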

Top tags: llm, model evaluation
Detailed tags: misinformation, scientific understanding, manipulation, expert comparison

Large language models eroding science understanding: an experimental study


1️⃣ One-sentence summary

This study shows experimentally that large language models (such as ChatGPT) are easily "steered" by a small amount of fringe scientific content into producing answers that read fluently but contradict the scientific consensus, and that non-experts find hard to identify as wrong. It is a warning that AI cannot substitute for scientific experts; relying on it as one risks deepening public misunderstanding of science and accelerating the spread of misinformation.

From arXiv: 2604.25639