OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment
1️⃣ One-sentence summary
This paper introduces an agentic system called OpenNovelty, which uses large language models to automatically retrieve and analyze related literature and produce evidence-grounded, verifiable novelty assessment reports for academic papers, with the goal of supporting peer review and making it fairer, more consistent, and more efficient.
Evaluating novelty is critical yet challenging in peer review, as reviewers must assess submissions against a vast, rapidly evolving literature. This report presents OpenNovelty, an LLM-powered agentic system for transparent, evidence-based novelty analysis. The system operates through four phases: (1) extracting the core task and contribution claims to generate retrieval queries; (2) retrieving relevant prior work based on the extracted queries via a semantic search engine; (3) constructing a hierarchical taxonomy of core-task-related work and performing contribution-level full-text comparisons for each claimed contribution; and (4) synthesizing all analyses into a structured novelty report with explicit citations and evidence snippets. Unlike naive LLM-based approaches, OpenNovelty grounds all assessments in retrieved real papers, ensuring verifiable judgments. We deploy our system on 500+ ICLR 2026 submissions, with all reports publicly available on our website, and preliminary analysis suggests it can identify relevant prior work, including closely related papers that authors may overlook. OpenNovelty aims to empower the research community with a scalable tool that promotes fair, consistent, and evidence-backed peer review.
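The abstract outlines a four-phase agentic pipeline. Below is a minimal Python sketch of how such a loop could be wired together; the `llm` and `search_engine` callables, the prompts, and the `NoveltyReport` fields are hypothetical placeholders for illustration, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class NoveltyReport:
    core_task: str
    contributions: list[str]
    taxonomy: str            # hierarchical grouping of core-task-related work
    comparisons: list[dict]  # per-contribution evidence with citations/snippets
    summary: str

def assess_novelty(submission_text: str, llm, search_engine) -> NoveltyReport:
    """Sketch of a four-phase novelty analysis; `llm` maps a prompt string to text,
    `search_engine` maps a query to a list of paper dicts (both assumed here)."""
    # Phase 1: extract the core task and contribution claims, then derive queries.
    core_task = llm(f"Identify the core research task:\n{submission_text}")
    contributions = llm(f"List the claimed contributions, one per line:\n{submission_text}").splitlines()
    queries = [llm(f"Write a literature-search query for: {c}")
               for c in [core_task, *contributions]]

    # Phase 2: retrieve candidate prior work via a semantic search engine
    # (treated as an opaque callable returning paper metadata and full text).
    candidates = [paper for q in queries for paper in search_engine(q, limit=20)]

    # Phase 3: build a hierarchical taxonomy of core-task-related work and run
    # contribution-level full-text comparisons against the retrieved papers.
    taxonomy = llm(f"Group these papers into a hierarchical taxonomy for the task "
                   f"'{core_task}':\n{[p['title'] for p in candidates]}")
    comparisons = [
        {"contribution": c,
         "evidence": llm(f"Compare this claim against the retrieved papers and cite "
                         f"overlapping passages:\nClaim: {c}\nPapers: {candidates}")}
        for c in contributions
    ]

    # Phase 4: synthesize a structured, citation-grounded novelty report.
    summary = llm(f"Write a novelty assessment grounded only in this evidence:\n{comparisons}")
    return NoveltyReport(core_task, contributions, taxonomy, comparisons, summary)
```

The key design point the abstract emphasizes is that every judgment in phase 4 must be traceable to retrieved papers and evidence snippets from phases 2 and 3, rather than to the model's unsupported prior knowledge.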
Source: arXiv: 2601.01576