A Proposal-Free Query-Guided Network for Grounded Multimodal Named Entity Recognition
1️⃣ One-sentence summary
This paper proposes a novel proposal-free query-guided network that uses text to directly guide image-region grounding. It addresses the limitation of conventional methods, which rely on general-purpose object detectors and therefore struggle to identify specific fine-grained entities, achieving more precise grounding and stronger performance on the grounded multimodal named entity recognition task.
Grounded Multimodal Named Entity Recognition (GMNER) identifies named entities, including their spans and types, in natural language text and grounds them to the corresponding regions in associated images. Most existing approaches split this task into two steps: they first detect objects using a pre-trained general-purpose detector and then match named entities to the detected objects. However, these methods face a major limitation. Because pre-trained general-purpose object detectors operate independently of textual entities, they tend to detect common objects and frequently overlook specific fine-grained regions required by named entities. This misalignment between object detectors and entities introduces imprecision and can impair overall system performance. In this paper, we propose a proposal-free Query-Guided Network (QGN) that unifies multimodal reasoning and decoding through text guidance and cross-modal interaction. QGN enables accurate grounding and robust performance in open-domain scenarios. Extensive experiments demonstrate that QGN achieves top performance among compared GMNER models on widely used benchmarks.
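To make the "proposal-free, query-guided" idea concrete, the following is a minimal illustrative sketch, not the paper's actual architecture: it assumes the entity text embedding serves as an attention query over image patch features, and the attended visual context is regressed directly to box coordinates with no object detector or region proposals. All dimensions, weights, and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only, not taken from the paper)
d = 64          # shared text/vision embedding dimension
n_patches = 49  # image patch features (e.g., a 7x7 feature grid)

def query_guided_grounding(entity_query, patch_feats, W_box):
    """Proposal-free grounding sketch: the entity embedding is the
    attention query, image patches are keys/values, and the attended
    context is mapped straight to a bounding box."""
    # Scaled dot-product attention scores, one per patch
    scores = patch_feats @ entity_query / np.sqrt(d)   # shape (n_patches,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over patches
    context = weights @ patch_feats                    # attended visual context, (d,)
    # Direct regression to a normalized box (cx, cy, w, h) in (0, 1)
    box = 1.0 / (1.0 + np.exp(-(W_box @ context)))     # sigmoid squashing
    return box

entity_query = rng.standard_normal(d)                  # stand-in for a text encoder output
patch_feats = rng.standard_normal((n_patches, d))      # stand-in for a vision encoder output
W_box = rng.standard_normal((4, d)) * 0.1              # hypothetical regression head
box = query_guided_grounding(entity_query, patch_feats, W_box)
print(box.shape)  # (4,)
```

The key contrast with two-step pipelines is that no pre-trained detector ever produces candidate regions: the text query itself determines which image features contribute to the predicted box.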
Source: arXiv:2603.17314