arXiv submission date: 2026-02-02
📄 Abstract - Mitigating loss of control in advanced AI systems through instrumental goal trajectories

Researchers at artificial intelligence labs and universities are concerned that highly capable artificial intelligence (AI) systems may erode human control by pursuing instrumental goals. Existing mitigations remain largely technical and system-centric: tracking capability in advanced systems, shaping behaviour through methods such as reinforcement learning from human feedback, and designing systems to be corrigible and interruptible. Here we develop instrumental goal trajectories to expand these options beyond the model. Gaining capability typically depends on access to additional technical resources, such as compute, storage, data and adjacent services, which in turn requires access to monetary resources. In organisations, these resources can be obtained through three organisational pathways. We label these pathways the procurement, governance and finance instrumental goal trajectories (IGTs). Each IGT produces a trail of organisational artefacts that can be monitored and used as intervention points when a system's capabilities or behaviour exceed acceptable thresholds. In this way, IGTs offer concrete avenues for defining capability levels and for broadening how corrigibility and interruptibility are implemented, shifting attention from model properties alone to the organisational systems that enable them.
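The monitoring idea in the abstract can be illustrated with a minimal sketch (all class names, artefact kinds, and thresholds below are hypothetical, invented for illustration, not taken from the paper): each IGT leaves a trail of artefacts, and a monitor compares cumulative resource acquisition per trajectory against an acceptable threshold to flag an intervention point.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of monitoring instrumental goal trajectories (IGTs).
# Artefact kinds, units, and threshold values are invented for this example.

@dataclass
class Artefact:
    igt: str          # "procurement", "governance", or "finance"
    kind: str         # e.g. "compute_order", "budget_request"
    magnitude: float  # e.g. GPU-hours requested or dollars spent

@dataclass
class IGTMonitor:
    thresholds: dict            # acceptable cumulative magnitude per IGT
    log: list = field(default_factory=list)

    def record(self, artefact: Artefact) -> bool:
        """Log the artefact; return True if the cumulative magnitude for its
        IGT breaches the threshold, signalling an intervention point
        (e.g. pause, review, or interrupt the system)."""
        self.log.append(artefact)
        total = sum(a.magnitude for a in self.log if a.igt == artefact.igt)
        return total > self.thresholds.get(artefact.igt, float("inf"))

monitor = IGTMonitor(thresholds={"procurement": 1000.0, "finance": 50000.0})
first = monitor.record(Artefact("procurement", "compute_order", 400.0))
second = monitor.record(Artefact("procurement", "compute_order", 700.0))
print(first, second)  # False True — cumulative 1100 > 1000 triggers intervention
```

The point of the sketch is only that the artefact trail, not the model's internals, is what gets inspected: the same `record` call would apply whether the requester is a human team or an AI system acting through organisational channels.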

Top-level tags: agents systems theory
Detailed tags: ai safety control instrumental goals organizational governance corrigibility

Mitigating loss of control in advanced AI systems through instrumental goal trajectories


1️⃣ One-sentence summary

This paper proposes a new approach: by monitoring the three organisational pathways through which an AI system acquires key resources such as compute and funding, potential loss-of-control behaviour can be detected early and intervened upon, extending safety control from the model alone to the organisational system that surrounds it.

Source: arXiv:2602.01699