📄 Paper Summary
UI-S1: Advancing GUI Automation via Semi-online Reinforcement Learning
1️⃣ One-Sentence Summary
This paper proposes a new method called Semi-online Reinforcement Learning, which trains GUI automation agents by simulating online interaction on offline data, preserving training stability while improving multi-step task execution, and achieving leading performance on multiple benchmarks.
2️⃣ Abstract

Graphical User Interface (GUI) agents have demonstrated remarkable progress in automating complex user interface interactions through reinforcement learning. However, current approaches face a fundamental dilemma: offline RL enables stable training on pre-collected trajectories, but struggles with multi-step task execution due to the lack of trajectory-level reward signals; online RL captures these signals through environment interaction, but suffers from sparse rewards and prohibitive deployment costs. To address this dilemma, we present Semi-online Reinforcement Learning, a novel paradigm that simulates online RL on offline trajectories. During each rollout, we preserve the original model output within the multi-turn dialogue, where a Patch Module adaptively recovers the divergence between rollout and expert trajectories. To capture long-term training signals, Semi-online RL introduces discounted future returns into the reward computation and optimizes the policy with weighted step-level and episode-level advantages. We further introduce Semi-Online Performance (SOP), a metric that aligns better with true online performance, serving as a practical and effective proxy for real-world evaluation. Experiments show that our Semi-online RL achieves SOTA performance among 7B models across four dynamic benchmarks, with significant gains over the base model (e.g., +12.0% on AndroidWorld, +23.8% on AITW), demonstrating marked progress in bridging the gap between offline training efficiency and online multi-turn reasoning. The code is available at this https URL.
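To make the reward design in the abstract concrete, below is a minimal sketch of discounted future returns combined with weighted step-level and episode-level advantages. This is an illustration under stated assumptions, not the paper's implementation: the names `gamma` and `step_weight`, the mean-baseline centering, and the broadcast of the episode outcome to every step are all hypothetical choices.

```python
# Hedged sketch (NOT the authors' code): per-step rewards are folded into
# discounted future returns, then mixed with a trajectory-level signal.
from typing import List


def discounted_returns(step_rewards: List[float], gamma: float = 0.9) -> List[float]:
    """Propagate discounted future rewards back to each step."""
    returns = [0.0] * len(step_rewards)
    running = 0.0
    for t in reversed(range(len(step_rewards))):
        running = step_rewards[t] + gamma * running
        returns[t] = running
    return returns


def mixed_advantages(
    step_rewards: List[float],
    episode_reward: float,
    gamma: float = 0.9,
    step_weight: float = 0.5,  # assumed mixing coefficient
) -> List[float]:
    """Weighted combination of step-level and episode-level advantages.

    Step advantages are centered against the trajectory mean as a simple
    stand-in baseline; the episode reward is broadcast to every step.
    """
    returns = discounted_returns(step_rewards, gamma)
    baseline = sum(returns) / len(returns)
    step_adv = [r - baseline for r in returns]
    episode_adv = [episode_reward] * len(step_rewards)
    return [
        step_weight * s + (1.0 - step_weight) * e
        for s, e in zip(step_adv, episode_adv)
    ]


if __name__ == "__main__":
    # Three GUI steps, each rewarded, with the overall task completed.
    print(mixed_advantages([1.0, 1.0, 1.0], episode_reward=1.0))
```

In this toy setup, earlier steps receive larger discounted returns (they lead to more future reward), while the shared episode term pushes every step of a successful trajectory upward, which is the long-horizon signal the abstract says offline RL alone lacks.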