arXiv submission date: 2026-04-13
📄 Abstract - Improving Layout Representation Learning Across Inconsistently Annotated Datasets via Agentic Harmonization

Fine-tuning object detection (OD) models on combined datasets assumes annotation compatibility, yet datasets often encode conflicting spatial definitions for semantically equivalent categories. We propose an agentic label harmonization workflow that uses a vision-language model to reconcile both category semantics and bounding box granularity across heterogeneous sources before training. We evaluate on document layout detection as a challenging case study, where annotation standards vary widely across corpora. Without harmonization, naïve mixed-dataset fine-tuning degrades a pretrained RT-DETRv2 detector: on SCORE-Bench, which measures how accurately the full document conversion pipeline reproduces ground-truth structure, table TEDS drops from 0.800 to 0.750. Applied to two corpora whose 16- and 10-category taxonomies share only 8 direct correspondences, harmonization yields consistent gains across content fidelity, table structure, and spatial consistency: detection F-score improves from 0.860 to 0.883, table TEDS improves to 0.814, and mean bounding box overlap drops from 0.043 to 0.016. Representation analysis further shows that harmonized training produces more compact and separable post-decoder embeddings, confirming that annotation inconsistency distorts the learned feature space and that resolving it before training restores representation structure.
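The core preprocessing step described above, remapping each corpus's labels onto a shared taxonomy before mixing the datasets, can be sketched roughly as below. The mapping table and category names here are purely illustrative assumptions; in the paper the correspondences are derived by a vision-language model, not hand-written.

```python
# Hedged sketch: harmonizing two corpora's category taxonomies before
# mixed-dataset fine-tuning. All names below are hypothetical examples.

# (corpus, source label) -> shared label; None means no correspondence
# in the shared schema, so the annotation is dropped.
HARMONIZED = {
    ("corpus_a", "paragraph"): "text",
    ("corpus_b", "body_text"): "text",
    ("corpus_a", "table"):     "table",
    ("corpus_b", "table"):     "table",
    ("corpus_a", "footnote"):  None,
}

def harmonize(annotations, corpus):
    """Remap each annotation's label to the shared taxonomy.

    Annotations whose category has no counterpart in the shared
    schema are dropped rather than trained on inconsistently.
    """
    remapped = []
    for ann in annotations:
        shared = HARMONIZED.get((corpus, ann["label"]))
        if shared is not None:
            remapped.append({**ann, "label": shared})
    return remapped
```

After this remapping, both corpora use one label space, so a mixed fine-tuning run no longer pits conflicting definitions of the same semantic category against each other.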

Top-level tags: computer vision, model training, data
Detailed tags: object detection, annotation harmonization, layout detection, vision-language model, representation learning

Improving Layout Representation Learning Across Inconsistently Annotated Datasets via Agentic Harmonization


1️⃣ One-sentence summary

This paper proposes an agentic label harmonization method that uses a vision-language model to unify category definitions and bounding box standards across datasets before training, which effectively improves the performance of document layout detection models and the feature representations they learn.

Source: arXiv 2604.11042