Abstract - SWE-MiniSandbox: Container-Free Reinforcement Learning for Building Software Engineering Agents
Reinforcement learning (RL) has become a key paradigm for training software engineering (SWE) agents, but existing pipelines typically rely on per-task containers for isolation. At scale, pre-built container images incur substantial storage overhead, slow environment setup, and require container-management privileges. We propose SWE-MiniSandbox, a lightweight, container-free method that enables scalable RL training of SWE agents without sacrificing isolation. Instead of relying on per-instance containers, SWE-MiniSandbox executes each task in an isolated workspace backed by kernel-level mechanisms, substantially reducing system overhead. It leverages lightweight environment pre-caching techniques to eliminate the need for bulky container images. As a result, our approach lowers disk usage to approximately 5% of that required by container-based pipelines and reduces environment preparation time to about 25% of the container baseline. Empirical results demonstrate that SWE-MiniSandbox achieves evaluation performance comparable to standard container-based pipelines. By removing the dependency on heavy container infrastructure, SWE-MiniSandbox offers a practical and accessible foundation for scaling RL-based SWE agents, particularly in resource-constrained research environments.
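The abstract does not specify which kernel-level mechanisms back each isolated workspace. A plausible minimal sketch on Linux uses unprivileged user, mount, and PID namespaces via the standard `unshare(1)` utility; the function names, flag choices, and per-task workspace layout below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: run one task in a fresh set of Linux namespaces
# instead of a per-task container. Assumes util-linux's unshare(1) is
# installed; this is NOT the paper's implementation, only an illustration
# of the "kernel-level isolation without containers" idea.
import subprocess
from pathlib import Path


def build_isolated_cmd(task_cmd: list[str]) -> list[str]:
    """Wrap task_cmd in an unshare(1) invocation creating fresh namespaces."""
    return [
        "unshare",
        "--user", "--map-root-user",  # unprivileged user namespace
        "--mount",                    # private mount table for the task
        "--pid", "--fork",            # isolated PID namespace
        *task_cmd,
    ]


def run_isolated(workspace: Path, task_cmd: list[str]) -> int:
    """Execute task_cmd inside its own namespaces, rooted in `workspace`."""
    workspace.mkdir(parents=True, exist_ok=True)
    result = subprocess.run(build_isolated_cmd(task_cmd), cwd=workspace)
    return result.returncode
```

Since user namespaces require no root privileges on most modern Linux distributions, a wrapper like this avoids the container-management privileges the abstract calls out, while each task still sees a private mount table and process tree.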
SWE-MiniSandbox: Container-Free Reinforcement Learning for Building Software Engineering Agents
1️⃣ One-Sentence Summary
This paper proposes a lightweight method called SWE-MiniSandbox, which replaces traditional containers with kernel-level isolation mechanisms. While preserving secure task isolation, it substantially reduces the storage overhead and startup time required to train software engineering agents, offering a more efficient path to scaling reinforcement learning in resource-constrained environments.