arXiv submission date: 2026-01-07
📄 Abstract - FocusUI: Efficient UI Grounding via Position-Preserving Visual Token Selection

Vision-Language Models (VLMs) have shown remarkable performance in User Interface (UI) grounding tasks, driven by their ability to process increasingly high-resolution screenshots. However, screenshots are tokenized into thousands of visual tokens (e.g., about 4700 for 2K resolution), incurring significant computational overhead and diluting attention. In contrast, humans typically focus on regions of interest when interacting with UI. In this work, we pioneer the task of efficient UI grounding. Guided by practical analysis of the task's characteristics and challenges, we propose FocusUI, an efficient UI grounding framework that selects patches most relevant to the instruction while preserving positional continuity for precise grounding. FocusUI addresses two key challenges: (1) Eliminating redundant tokens in visual encoding. We construct patch-level supervision by fusing an instruction-conditioned score with a rule-based UI-graph score that down-weights large homogeneous regions to select distinct and instruction-relevant visual tokens. (2) Preserving positional continuity during visual token selection. We find that general visual token pruning methods suffer from severe accuracy degradation on UI grounding tasks due to broken positional information. We introduce a novel PosPad strategy, which compresses each contiguous sequence of dropped visual tokens into a single special marker placed at the sequence's last index to preserve positional continuity. Comprehensive experiments on four grounding benchmarks demonstrate that FocusUI surpasses GUI-specific baselines. On the ScreenSpot-Pro benchmark, FocusUI-7B achieves a performance improvement of 3.7% over GUI-Actor-7B. Even with only 30% visual token retention, FocusUI-7B drops by only 3.2% while achieving up to 1.44x faster inference and 17% lower peak GPU memory.
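The patch-selection step described above fuses two signals. The sketch below is an illustrative assumption, not the authors' code: `alpha`, the linear fusion form, and the top-k selection are hypothetical choices for showing how an instruction-conditioned score and a rule-based UI-graph score could combine into a keep mask at a given retention ratio.

```python
def fuse_scores(instr_score, graph_score, alpha=0.5, keep_ratio=0.3):
    """Hypothetical sketch of patch-level score fusion.

    instr_score: per-patch instruction-conditioned relevance scores.
    graph_score: per-patch rule-based UI-graph scores that down-weight
                 large homogeneous regions.
    Returns a boolean keep mask retaining the top keep_ratio of patches.
    alpha and the linear fusion are illustrative assumptions.
    """
    fused = [alpha * s + (1 - alpha) * g
             for s, g in zip(instr_score, graph_score)]
    k = max(1, int(len(fused) * keep_ratio))  # at least one patch survives
    ranked = sorted(range(len(fused)), key=lambda i: fused[i], reverse=True)
    keep = [False] * len(fused)
    for i in ranked[:k]:
        keep[i] = True
    return keep
```

For example, with four patches and `keep_ratio=0.5`, the two patches with the highest fused scores are retained and the rest are marked for dropping.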

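The PosPad strategy from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: kept tokens pass through with their original indices, and each maximal run of dropped tokens collapses into one special marker (here a hypothetical `"<PAD>"` string) placed at the run's last index, so the positional sequence fed downstream stays continuous.

```python
def pospad_select(tokens, keep_mask, marker="<PAD>"):
    """Illustrative sketch of PosPad: compress each contiguous run of
    dropped visual tokens into a single marker at the run's last index.

    tokens: list of visual tokens in raster order.
    keep_mask: list of bools, True where the token is retained.
    Returns a list of (position_index, token) pairs.
    """
    out = []
    i, n = 0, len(tokens)
    while i < n:
        if keep_mask[i]:
            out.append((i, tokens[i]))   # kept token keeps its position
            i += 1
        else:
            j = i                        # find the end of the dropped run
            while j + 1 < n and not keep_mask[j + 1]:
                j += 1
            out.append((j, marker))      # one marker at the run's last index
            i = j + 1
    return out
```

So dropping tokens 1-2 of a five-token sequence yields positions `(0, 2, 3, ...)` rather than a renumbered `(0, 1, 2, ...)`, which is the positional continuity the abstract says generic pruning methods break.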
Top-level tags: computer vision multi-modal model training
Detailed tags: ui grounding vision-language models token selection efficient inference positional encoding

FocusUI: Efficient UI Grounding via Position-Preserving Visual Token Selection


1️⃣ One-Sentence Summary

This paper proposes FocusUI, a method that selects the screenshot regions most relevant to the user's instruction while keeping their positional information continuous, significantly reducing computational overhead and memory usage while still grounding UI elements with high accuracy.

Source: arXiv 2601.03928