MMSkills: Towards Multimodal Skills for General Visual Agents
1️⃣ One-Sentence Summary
This paper proposes MMSkills, a framework that packages a visual agent's interaction experience into multimodal skill packages containing textual steps, state cards, and multi-view keyframes. This lets the agent consult visual evidence when making decisions at inference time, significantly improving its performance in settings such as graphical user interfaces and games.
Reusable skills have become a core substrate for improving agent capabilities, yet most existing skill packages encode reusable behavior primarily as textual prompts, executable code, or learned routines. For visual agents, however, procedural knowledge is inherently multimodal: reuse depends not only on what operation to perform, but also on recognizing the relevant state, interpreting visual evidence of progress or failure, and deciding what to do next. We formalize this requirement as multimodal procedural knowledge and address three practical challenges: (I) what a multimodal skill package should contain; (II) where such packages can be derived from public interaction experience; and (III) how agents can consult multimodal evidence at inference time without excessive image context or over-anchoring to reference screenshots. We introduce MMSkills, a framework for representing, generating, and using reusable multimodal procedures for runtime visual decision making. Each MMSkill is a compact, state-conditioned package that couples a textual procedure with runtime state cards and multi-view keyframes. To construct these packages, we develop an agentic trajectory-to-skill Generator that transforms public non-evaluation trajectories into reusable multimodal skills through workflow grouping, procedure induction, visual grounding, and meta-skill-guided auditing. To use them, we introduce a branch-loaded multimodal skill agent: selected state cards and keyframes are inspected in a temporary branch, aligned with the live environment, and distilled into structured guidance for the main agent. Experiments across GUI and game-based visual-agent benchmarks show that MMSkills consistently improve both frontier and smaller multimodal agents, suggesting that external multimodal procedural knowledge complements model-internal priors.
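The abstract describes each MMSkill as a compact, state-conditioned package coupling a textual procedure with state cards and multi-view keyframes, consulted via a branch-loaded agent that distills matched visual evidence into guidance for the main agent. A minimal sketch of that structure and consultation flow follows; all field and function names (`StateCard`, `MMSkill`, `distill_guidance`) are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass

# Hypothetical schema for a multimodal skill package; names are assumptions,
# not taken from the MMSkills paper.
@dataclass
class StateCard:
    description: str        # textual description of a runtime state
    keyframes: list         # paths to multi-view keyframes for this state

@dataclass
class MMSkill:
    name: str
    procedure: list         # ordered textual steps of the skill
    state_cards: list       # state-conditioned visual evidence

def distill_guidance(skill: MMSkill, live_state: str) -> str:
    """Simplified stand-in for branch-loaded consultation: match state
    cards against the live environment description and distill them,
    together with the procedure, into structured textual guidance."""
    matched = [c for c in skill.state_cards if c.description in live_state]
    steps = "; ".join(skill.procedure)
    evidence = ", ".join(c.description for c in matched) or "no matching state"
    return f"Steps: {steps} | Evidence: {evidence}"
```

In this sketch only the distilled text reaches the main agent, mirroring the paper's goal of consulting multimodal evidence without flooding the main context with images.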
From arXiv: 2605.13527