arXiv submission date: 2026-03-03
📄 Abstract - LLandMark: A Multi-Agent Framework for Landmark-Aware Multimodal Interactive Video Retrieval

The increasing diversity and scale of video data demand retrieval systems capable of multimodal understanding, adaptive reasoning, and domain-specific knowledge integration. This paper presents LLandMark, a modular multi-agent framework for landmark-aware multimodal video retrieval to handle real-world complex queries. The framework features specialized agents that collaborate across four stages: query parsing and planning, landmark reasoning, multimodal retrieval, and reranked answer synthesis. A key component, the Landmark Knowledge Agent, detects cultural or spatial landmarks and reformulates them into descriptive visual prompts, enhancing CLIP-based semantic matching for Vietnamese scenes. To expand capabilities, we introduce an LLM-assisted image-to-image pipeline, where a large language model (Gemini 2.5 Flash) autonomously detects landmarks, generates image search queries, retrieves representative images, and performs CLIP-based visual similarity matching, removing the need for manual image input. In addition, an OCR refinement module leveraging Gemini and LlamaIndex improves Vietnamese text recognition. Experimental results show that LLandMark achieves adaptive, culturally grounded, and explainable retrieval performance.
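The abstract describes a four-stage agent pipeline: query parsing and planning, landmark reasoning, multimodal retrieval, and reranked answer synthesis. The stub below is a minimal, purely illustrative sketch of how such a staged collaboration could be wired together; the function names, the keyword-based landmark detection, and the caption-overlap scoring stand in for the paper's actual agents (LLM planning, the Landmark Knowledge Agent, and CLIP matching) and are not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str

def parse_and_plan(q: Query) -> dict:
    # Stage 1: query parsing and planning (stubbed as keyword extraction).
    return {"keywords": q.text.lower().split()}

def landmark_reasoning(plan: dict, known_landmarks: set) -> list:
    # Stage 2: detect landmark mentions and reformulate them into
    # descriptive visual prompts (the Landmark Knowledge Agent's role).
    hits = [w for w in plan["keywords"] if w in known_landmarks]
    return [f"a photo of the {h} in vietnam" for h in hits]

def multimodal_retrieval(prompts: list, index: dict) -> list:
    # Stage 3: semantic matching, stubbed here as word overlap between
    # prompts and video captions (the paper uses CLIP embeddings).
    scored = []
    for vid, caption in index.items():
        score = sum(1 for p in prompts
                    if set(p.split()) & set(caption.lower().split()))
        scored.append((vid, score))
    return scored

def rerank_and_answer(scored: list) -> list:
    # Stage 4: reranked answer synthesis (highest score first).
    return sorted(scored, key=lambda x: -x[1])
```

In the real system each stage would be a separate agent backed by an LLM or a vision encoder; the value of the staged design is that each stage's output (plan, prompts, scores) is inspectable, which supports the explainability the paper claims.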

Top-level tag: multi-modal agents systems
Detailed tags: video retrieval, multi-agent framework, landmark detection, CLIP, visual similarity

LLandMark: A Multi-Agent Framework for Landmark-Aware Multimodal Interactive Video Retrieval


1️⃣ One-Sentence Summary

This paper proposes LLandMark, a multi-agent framework in which specialized agents collaborate to process landmark information and understand multimodal queries, enabling smarter and more accurate retrieval, from a large-scale video corpus, of Vietnamese-scene videos matching complex queries that contain cultural or spatial landmark descriptions.
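The abstract's LLM-assisted image-to-image pipeline ends with CLIP-based visual similarity matching between a retrieved landmark image and video keyframes. Assuming both sides have already been encoded into embedding vectors (in the paper this encoding is done by CLIP; here the vectors are hand-made toy values), that final ranking step reduces to cosine similarity:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_keyframes(query_emb: list, keyframe_embs: dict) -> list:
    # Return keyframe ids sorted by descending similarity to the
    # query landmark image embedding.
    return sorted(keyframe_embs,
                  key=lambda k: -cosine_similarity(query_emb, keyframe_embs[k]))
```

For example, a query embedding `[1.0, 0.0]` ranks a nearby keyframe `[1.0, 0.1]` above an orthogonal one `[0.0, 1.0]`. The autonomous part of the pipeline (Gemini 2.5 Flash detecting the landmark and fetching representative images) happens upstream of this step and is not sketched here.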

Source: arXiv:2603.02888