arXiv submission date: 2026-02-05
📄 Abstract - UI-Mem: Self-Evolving Experience Memory for Online Reinforcement Learning in Mobile GUI Agents

Online Reinforcement Learning (RL) offers a promising paradigm for enhancing GUI agents through direct environment interaction. However, its effectiveness is severely hindered by inefficient credit assignment in long-horizon tasks and repetitive errors across tasks due to the lack of experience transfer. To address these challenges, we propose UI-Mem, a novel framework that enhances GUI online RL with a Hierarchical Experience Memory. Unlike traditional replay buffers, our memory accumulates structured knowledge, including high-level workflows, subtask skills, and failure patterns. These experiences are stored as parameterized templates that enable cross-task and cross-application transfer. To effectively integrate memory guidance into online RL, we introduce Stratified Group Sampling, which injects varying levels of guidance across trajectories within each rollout group to maintain outcome diversity, driving the unguided policy toward internalizing guided behaviors. Furthermore, a Self-Evolving Loop continuously abstracts novel strategies and errors to keep the memory aligned with the agent's evolving policy. Experiments on online GUI benchmarks demonstrate that UI-Mem significantly outperforms traditional RL baselines and static reuse strategies, with strong generalization to unseen applications. Project page: this https URL
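The core data structure here is the parameterized experience template. The sketch below is a minimal illustration, assuming concrete names (`ExperienceTemplate`, `HierarchicalMemory`), a string-placeholder parameterization, and naive keyword retrieval, none of which the paper specifies; the abstract states only that high-level workflows, subtask skills, and failure patterns are stored as parameterized templates to enable cross-task and cross-application transfer.

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceTemplate:
    """One memory entry: a parameterized template whose placeholders
    can be re-bound for a new task or a new application."""
    level: str                       # "workflow" | "skill" | "failure" (assumed tiers)
    pattern: str                     # e.g. "open {app}; type {query} in search"
    params: list                     # placeholder names bound at retrieval time
    apps_seen: set = field(default_factory=set)  # provenance, for transfer

    def instantiate(self, **bindings) -> str:
        """Fill the placeholders for a concrete task instance."""
        return self.pattern.format(**bindings)

class HierarchicalMemory:
    """Three-level store matching the abstract's workflow/skill/failure split."""
    def __init__(self):
        self.store = {"workflow": [], "skill": [], "failure": []}

    def add(self, template: ExperienceTemplate):
        self.store[template.level].append(template)

    def retrieve(self, level: str, keyword: str):
        """Naive keyword match; a real system would use semantic retrieval."""
        return [t for t in self.store[level] if keyword in t.pattern]

# Usage: a skill abstracted from one shopping app transfers to another.
memory = HierarchicalMemory()
memory.add(ExperienceTemplate(
    level="skill",
    pattern="open {app}; tap the search bar; type {query}; tap first result",
    params=["app", "query"],
    apps_seen={"AppA"},
))
skill = memory.retrieve("skill", "search")[0]
print(skill.instantiate(app="AppB", query="wireless earbuds"))
```

Re-binding the same skill template with a different `app` value is what makes cross-application transfer cheap in this scheme: only the retrieval-time bindings change, not the stored template.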

Top-level tags: agents, reinforcement learning, systems
Detailed tags: gui agents, experience memory, online rl, hierarchical memory, cross-task transfer

UI-Mem: Self-Evolving Experience Memory for Online Reinforcement Learning in Mobile GUI Agents


1️⃣ One-Sentence Summary

This paper proposes a new framework called UI-Mem. By building a self-evolving experience memory that stores high-level workflows, subtask skills, and failure patterns and transfers them across tasks, it effectively addresses the two problems that mobile GUI agents face in online reinforcement learning, namely difficult credit assignment in long-horizon tasks and repeated occurrence of the same errors, thereby significantly improving the agent's learning efficiency and generalization ability.
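Of the abstract's three components, Stratified Group Sampling is the one that couples this memory to the RL update. The sketch below is a speculative illustration, assuming a GRPO-style group-relative advantage (reward minus group mean), three guidance tiers, a 50/25/25 mixing ratio, and a stubbed `rollout` function, all of which are my assumptions; the abstract says only that varying levels of memory guidance are injected across trajectories within each rollout group to maintain outcome diversity.

```python
import random

GUIDANCE_TIERS = ("none", "partial", "full")  # assumed guidance levels

def rollout(task: str, guidance: str) -> float:
    """Stub environment rollout: guided trajectories succeed more often."""
    bonus = {"none": 0.0, "partial": 0.2, "full": 0.4}[guidance]
    return min(1.0, random.random() + bonus)

def stratified_group_sampling(task: str, group_size: int = 8,
                              mix: tuple = (0.5, 0.25, 0.25)):
    """Sample one rollout group with mixed guidance tiers and compute
    group-relative advantages (reward minus group mean)."""
    # Allocate rollout slots per tier according to the mixing ratio.
    slots = []
    for tier, frac in zip(GUIDANCE_TIERS, mix):
        slots += [tier] * int(group_size * frac)
    slots += ["none"] * (group_size - len(slots))  # pad with unguided slots
    random.shuffle(slots)

    rewards = [rollout(task, tier) for tier in slots]
    baseline = sum(rewards) / len(rewards)
    # Guided successes raise the group baseline; an unguided rollout gets a
    # positive advantage only by matching guided behavior on its own.
    return [(tier, r, r - baseline) for tier, r in zip(slots, rewards)]

for tier, reward, adv in stratified_group_sampling("search for a flight"):
    print(f"{tier:8s} reward={reward:.2f} advantage={adv:+.2f}")
```

If this reading is right, the intended effect is that fully guided rollouts lift the group baseline, so unguided trajectories earn positive advantage only when they reproduce the guided behavior unaided, which is how the policy internalizes it.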

Source: arXiv:2602.05832