Iterative Multimodal Retrieval-Augmented Generation for Medical Question Answering
1️⃣ One-Sentence Summary
This paper proposes MED-VRAG, a new framework that retrieves whole page images from the medical literature (rather than only extracted text) and uses a vision-language model for multi-round reasoning with accumulated memory, substantially improving accuracy on several medical QA benchmarks and demonstrating the value of visual information for medical knowledge question answering.
Medical retrieval-augmented generation (RAG) systems typically operate on text chunks extracted from biomedical literature, discarding the rich visual content (tables, figures, structured layouts) of the original document pages. We propose MED-VRAG, an iterative multimodal RAG framework that retrieves and reasons over PMC document page images instead of OCR'd text. The system pairs ColQwen2.5 patch-level page embeddings with a sharded MapReduce LLM filter, scaling to ~350K pages while keeping Stage-1 retrieval under 30 ms via an offline coarse-to-fine index (C=8 centroids per page, ANN search over centroids, exact two-way scoring on the top-R shortlist). A vision-language model (VLM) then iteratively refines its query and accumulates evidence in a memory bank across up to 3 reasoning rounds; a single iteration costs ~15.9 s and the full three-round pipeline ~47.8 s on 4×A100 GPUs. Across four medical QA benchmarks (MedQA, MedMCQA, PubMedQA, MMLU-Med), MED-VRAG reaches 78.6% average accuracy. Under a controlled comparison with the same Qwen2.5-VL-32B backbone, retrieval contributes a +5.8 point gain over the no-retrieval baseline; we also note a +1.8 point edge over MedRAG + GPT-4 (76.8%), with the caveat that this is a cross-paper rather than head-to-head comparison. Ablations isolate +1.0 point from page-image vs. text-chunk retrieval, +1.5 points from iteration, and +1.0 point from the memory bank.
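The abstract describes two mechanisms concretely enough to sketch. The first is the offline coarse-to-fine page index: each page's patch embeddings are compressed to C=8 centroids, a cheap search over centroids produces a top-R shortlist, and only the shortlist is re-scored exactly against full patch embeddings. Below is a minimal sketch under stated assumptions; `TOP_R`, the brute-force centroid scan (the paper uses ANN over centroids), and the ColBERT-style MaxSim scorer are illustrative choices, not the paper's exact implementation.

```python
# Hypothetical sketch of a coarse-to-fine page-image index.
# Offline: compress each page's patch embeddings to C=8 centroids (k-means).
# Online: cheap scoring against centroids -> top-R shortlist -> exact
# late-interaction (MaxSim) scoring against full patch embeddings.
import numpy as np
from sklearn.cluster import KMeans

C = 8        # centroids kept per page (value from the paper)
TOP_R = 50   # shortlist size re-scored exactly (assumed value)

def build_page_centroids(page_patch_embs):
    """page_patch_embs: list of (num_patches, dim) arrays, one per page."""
    centroids = []
    for embs in page_patch_embs:
        k = min(C, len(embs))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embs)
        centroids.append(km.cluster_centers_)
    return centroids

def maxsim(query_embs, page_embs):
    """Late-interaction score: sum over query tokens of max patch similarity."""
    sims = query_embs @ page_embs.T          # (q_tokens, patches)
    return sims.max(axis=1).sum()

def search(query_embs, centroids, page_patch_embs, top_k=5):
    # Stage 1: coarse scores against per-page centroids only
    # (the paper uses ANN here; a brute-force scan is shown for clarity).
    coarse = np.array([maxsim(query_embs, c) for c in centroids])
    shortlist = np.argsort(-coarse)[:TOP_R]
    # Stage 2: exact scoring against full patch embeddings of the shortlist.
    exact = {int(p): maxsim(query_embs, page_patch_embs[p]) for p in shortlist}
    return sorted(exact, key=exact.get, reverse=True)[:top_k]
```

The second mechanism is the iterative retrieve-and-reason loop: the VLM answers from a growing memory bank of retrieved pages and, if unsure, reformulates the query for the next of up to three rounds. The sketch below assumes generic callables (`vlm_answer`, `vlm_refine_query`) standing in for prompts to the VLM backbone; these names are hypothetical.

```python
# Hypothetical sketch of the iterative loop with a memory bank.
MAX_ROUNDS = 3  # maximum reasoning rounds, as in the paper

def iterative_rag(question, encode_query, retrieve, vlm_answer, vlm_refine_query):
    memory = []                          # accumulated page images / evidence
    query = question
    answer = None
    for _ in range(MAX_ROUNDS):
        pages = retrieve(encode_query(query))
        memory.extend(pages)             # memory bank grows across rounds
        answer, confident = vlm_answer(question, memory)
        if confident:                    # VLM signals it has enough evidence
            break
        query = vlm_refine_query(question, memory)  # reformulate for next round
    return answer
```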
Source: arXiv:2604.27724