arXiv submission date: 2026-03-18
📄 Abstract - In Trust We Survive: Emergent Trust Learning

We introduce Emergent Trust Learning (ETL), a lightweight, trust-based control algorithm that can be plugged into existing AI agents, enabling them to reach cooperation in competitive game environments with shared resources. Each agent maintains a compact internal trust state, which modulates memory, exploration, and action selection. ETL requires only individual rewards and local observations and incurs negligible computational and communication overhead. We evaluate ETL in three environments. In a grid-based resource world, trust-based agents reduce conflicts and prevent long-term resource depletion while achieving competitive individual returns. In a hierarchical Tower environment with strong social dilemmas and randomised floor assignments, ETL sustains high survival rates and recovers cooperation even after extended phases of enforced greed. In the Iterated Prisoner's Dilemma, the algorithm generalises to a strategic meta-game, maintaining cooperation with reciprocal opponents while avoiding long-term exploitation by defectors. Code will be released upon publication.
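The abstract states that each agent keeps a compact internal trust state, updated from individual rewards and local observations only, which then modulates exploration and action selection. The paper's exact update rules are not given here, so the following is only a minimal sketch of that idea; the class name, constants, and the exponential-moving-average update are all assumptions for illustration.

```python
import random


class TrustAgent:
    """Hypothetical sketch of a trust-modulated agent.

    The real ETL update rules are not public; the learning rate,
    the reward-to-trust mapping, and the epsilon schedule below
    are illustrative assumptions, not the paper's algorithm.
    """

    def __init__(self, learning_rate=0.1, base_epsilon=0.3):
        self.trust = 0.5            # compact internal trust state in [0, 1]
        self.lr = learning_rate
        self.base_epsilon = base_epsilon

    def update_trust(self, reward):
        # Move trust toward 1 after positive local outcomes, toward 0
        # otherwise, using only the agent's own reward (no communication).
        target = 1.0 if reward > 0 else 0.0
        self.trust += self.lr * (target - self.trust)

    def epsilon(self):
        # Higher trust -> less exploration: exploit the cooperative regime.
        return self.base_epsilon * (1.0 - self.trust)

    def act(self):
        # Trust biases action selection toward cooperation.
        if random.random() < self.epsilon():
            return random.choice(["cooperate", "defect"])
        return "cooperate" if self.trust >= 0.5 else "defect"
```

Under this toy update, sustained positive rewards drive trust toward 1 and exploration toward 0, while repeated losses erode trust and push the agent back toward defection, mirroring the recovery-after-greed behaviour the abstract describes.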

Top-level tags: agents multi-agents reinforcement learning
Detailed tags: trust learning cooperation multi-agent systems social dilemmas emergent behavior

In Trust We Survive: Emergent Trust Learning


1️⃣ One-sentence summary

This paper proposes a lightweight algorithm called Emergent Trust Learning that lets AI agents in competitive game environments spontaneously learn to cooperate by maintaining nothing more than a simple internal trust state, achieving mutual benefit over shared resources while avoiding long-term depletion.

Source: arXiv: 2603.17564