📄 Abstract - Fed-SE: Federated Self-Evolution for Privacy-Constrained Multi-Environment LLM Agents
LLM agents are widely deployed in complex interactive tasks, yet privacy constraints often preclude centralized optimization and co-evolution across dynamic environments. While Federated Learning (FL) has proven effective on static datasets, its extension to the open-ended self-evolution of agents remains underexplored. Directly applying standard FL is challenging: heterogeneous tasks and sparse, trajectory-level rewards introduce severe gradient conflicts, destabilizing the global optimization process. To bridge this gap, we propose Fed-SE, a Federated Self-Evolution framework for LLM agents. Fed-SE establishes a local evolution-global aggregation paradigm. Locally, agents employ parameter-efficient fine-tuning on filtered, high-return trajectories to achieve stable gradient updates. Globally, Fed-SE aggregates updates within a low-rank subspace that disentangles environment-specific dynamics, effectively reducing negative transfer across clients. Experiments across five heterogeneous environments demonstrate that Fed-SE improves average task success rates by approximately 18% over federated baselines, validating robust cross-environment knowledge transfer in privacy-constrained deployments.
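The abstract only sketches the local evolution-global aggregation pipeline. The toy sketch below illustrates what one federated round could look like under that description: each client filters its trajectories by return, derives a low-rank (LoRA-style) update from the kept data, and the server averages those updates. All function names, the return threshold, the per-client weighting, and the random low-rank update (a stand-in for an actual gradient-based PEFT fit) are assumptions for illustration, not the paper's concrete algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_trajectories(trajectories, return_threshold):
    """Keep only high-return trajectories for local fine-tuning (reward filtering)."""
    return [t for t in trajectories if t["return"] >= return_threshold]

def local_low_rank_update(base_weight, kept_trajectories, rank=4, lr=1e-2):
    """Toy stand-in for parameter-efficient local evolution: a rank-limited
    update A @ B. A real system would fit A, B by gradient descent (e.g. LoRA)
    on the filtered trajectories; here we only scale by the amount of kept data."""
    d_out, d_in = base_weight.shape
    A = rng.normal(scale=0.01, size=(d_out, rank))
    B = rng.normal(scale=0.01, size=(rank, d_in))
    return lr * len(kept_trajectories) * (A @ B)

def aggregate_updates(updates, weights):
    """Server-side aggregation: weighted average of clients' low-rank updates."""
    total = sum(weights)
    return sum(w * u for w, u in zip(weights, updates)) / total

# --- one federated self-evolution round over three hypothetical clients ---
base_weight = rng.normal(size=(8, 8))
clients = [
    [{"return": r} for r in rng.uniform(0, 1, size=20)]  # per-client trajectories
    for _ in range(3)
]

updates, weights = [], []
for trajs in clients:
    kept = filter_trajectories(trajs, return_threshold=0.7)
    updates.append(local_low_rank_update(base_weight, kept))
    weights.append(max(len(kept), 1))  # weight clients by usable data

base_weight = base_weight + aggregate_updates(updates, weights)
print("updated weight norm:", np.linalg.norm(base_weight))
```

Raw trajectories never leave the clients in this sketch; only the low-rank updates are communicated, which is what makes the aggregation compatible with the privacy constraint described in the abstract.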
Fed-SE: Federated Self-Evolution for Privacy-Constrained Multi-Environment LLM Agents
1️⃣ One-Sentence Summary
This paper proposes a new framework called Fed-SE that lets large language model agents deployed in different environments learn collaboratively, without sharing raw data and while preserving privacy, by combining local self-evolution with global knowledge aggregation; it resolves the performance conflicts that traditional federated learning methods encounter on dynamic, heterogeneous tasks and thereby significantly improves the agents' task success rates.