Efficient Document Parsing via Parallel Token Prediction
1️⃣ One-Sentence Summary
This paper proposes a general method called Parallel-Token Prediction, which lets vision-language models predict multiple future tokens simultaneously, significantly improving the speed and sample efficiency of document parsing while reducing model hallucinations and strengthening generalization.
Document parsing, a fundamental and crucial vision task, is being revolutionized by vision-language models (VLMs). However, the autoregressive (AR) decoding inherent to VLMs creates a significant bottleneck, severely limiting parsing speed. In this paper, we propose Parallel-Token Prediction (PTP), a pluggable, model-agnostic, and simple-yet-effective method that enables VLMs to generate multiple future tokens in parallel with improved sample efficiency. Specifically, we insert a set of learnable tokens into the input sequence and design corresponding training objectives to equip the model with parallel decoding capabilities for document parsing. Furthermore, to support effective training, we develop a comprehensive data generation pipeline that efficiently produces large-scale, high-quality document parsing training data for VLMs. Extensive experiments on OmniDocBench and olmOCR-bench demonstrate that our method not only significantly improves decoding speed (1.6x-2.2x) but also reduces model hallucinations and exhibits strong generalization abilities.
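The core idea of appending learnable placeholder tokens so that one forward pass yields several future tokens can be sketched as follows. This is a hypothetical toy illustration, not the paper's actual implementation: the "decoder" is a stand-in linear head, and all names (`learnable_tokens`, `parallel_decode`, the dimensions) are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, K = 50, 8, 4            # vocab size, hidden dim, tokens predicted per step

W_out = rng.normal(size=(DIM, VOCAB))          # shared output head (stand-in for a VLM)
learnable_tokens = rng.normal(size=(K, DIM))   # the K inserted placeholder embeddings

def forward(seq_emb):
    """One full-sequence pass producing position-wise logits."""
    return seq_emb @ W_out

def parallel_decode(prefix_emb):
    """Append K learnable tokens, run ONE forward pass, and read off
    the argmax at each placeholder position: K future tokens at once,
    instead of K sequential autoregressive steps."""
    seq = np.concatenate([prefix_emb, learnable_tokens], axis=0)
    logits = forward(seq)
    return logits[-K:].argmax(axis=-1)

prefix = rng.normal(size=(5, DIM))   # embeddings of already-decoded tokens
tokens = parallel_decode(prefix)
print(tokens.shape)                  # K token ids from a single decoding step
```

In a real VLM the placeholder embeddings would be trained jointly with a multi-token prediction objective; the point here is only that each decoding step emits K tokens rather than one, which is the source of the reported 1.6x-2.2x speedup.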
Source: arXiv:2603.15206