arXiv submission date: 2026-04-29
📄 Abstract - Benchmarking Complex Multimodal Document Processing Pipelines: A Unified Evaluation Framework for Enterprise AI

Most enterprise document AI today is a pipeline. Parse, index, retrieve, generate. Each of those stages has been studied to death on its own -- what's still hard is evaluating the system as a whole. We built EnterpriseDocBench to take a swing at it: parsing fidelity, indexing efficiency, retrieval relevance, and generation groundedness, all on the same corpus. The corpus is built from public, permissively licensed documents across six enterprise domains (five represented in the current pilot). We ran three pipelines through it -- BM25, dense embedding, and a hybrid -- all with the same GPT-5 generator. The headline numbers: hybrid retrieval narrowly beats BM25 (nDCG@5 of 0.92 vs. 0.91), and both beat dense embedding (0.83). Hallucination doesn't grow monotonically with document length -- short documents and very long ones both hallucinate more than medium ones (28.1% and 23.8% vs. 9.2%). Cross-stage correlations are very weak: parsing->retrieval r=0.14, parsing->generation r=0.17, retrieval->generation r=0.02. If quality were cascading the way most of us assume, those numbers would be much higher; they aren't. Design caveats are real (parsing fixed, generator shared, automated proxy metrics) and we don't oversell the result. One result that genuinely surprised us: factual accuracy on stated claims is 85.5%, but answer completeness averages 0.40. The system is right when it answers -- it just leaves things out. That gap matters more for real deployments than the headline accuracy number does. We also describe three reference architectures (ColPali, ColQwen2, agentic complexity-based routing) which are not yet integrated end-to-end. Framework, metrics, baselines, and collection scripts will be released open-source on acceptance.
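The nDCG@5 figures above (0.92 vs. 0.91 vs. 0.83) are the standard normalized discounted cumulative gain over the top five retrieved items. A minimal sketch of that metric for binary relevance labels -- function names are ours for illustration, not the benchmark's API:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: each relevance is damped by log2 of its rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=5):
    """nDCG@k: DCG of the actual ranking, normalized by the ideal ranking's DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# A ranking that puts both relevant documents first scores 1.0;
# pushing a relevant document down the list lowers the score.
perfect = ndcg_at_k([1, 1, 0, 0, 0])   # -> 1.0
swapped = ndcg_at_k([0, 1, 1, 0, 0])   # < 1.0
```

On graded (non-binary) relevance judgments the same formula applies; only the label values change.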

Top-level tags: systems model evaluation multi-modal
Detailed tags: document ai benchmark retrieval-augmented generation evaluation framework enterprise

Benchmarking Complex Multimodal Document Processing Pipelines: A Unified Evaluation Framework for Enterprise AI


1️⃣ One-sentence summary

This paper proposes a unified evaluation framework called EnterpriseDocBench for testing the end-to-end performance of enterprise document AI pipelines (parsing, indexing, retrieval, generation). It finds that hybrid retrieval slightly outperforms traditional BM25, that hallucination rates do not increase monotonically with document length, and that the system's answers, while accurate, frequently omit key content -- revealing that quality does not cascade between stages the way one might expect.
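The weak cross-stage coupling reported above is measured with Pearson correlation (r=0.14, 0.17, 0.02). A minimal sketch of that computation over per-document stage scores -- the variable names are illustrative, not the framework's actual API:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-document scores: if parsing quality drove retrieval
# quality, these two lists would correlate strongly.
parsing_scores = [0.9, 0.7, 0.8, 0.6]
retrieval_scores = [0.5, 0.8, 0.4, 0.9]
r = pearson_r(parsing_scores, retrieval_scores)
```

An r near zero, as the benchmark reports between stages, means knowing one stage's score tells you almost nothing about the next stage's score.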

Source: arXiv:2604.26382