Learning the Value Systems of Agents with Preference-based and Inverse Reinforcement Learning
1️⃣ One-sentence summary
This paper proposes a new method that automatically learns the value systems of agents from observations and human demonstrations, helping them make decisions in their interactions that conform to ethical and moral principles.
Agreement Technologies refer to open computer systems in which autonomous software agents interact with one another, typically on behalf of humans, in order to come to mutually acceptable agreements. With the advance of AI systems in recent years, it has become apparent that such agreements, in order to be acceptable to the involved parties, must remain aligned with ethical principles and moral values. However, this is notoriously difficult to ensure, especially as different human users (and their software agents) may hold different value systems, i.e., they may weigh the importance of individual moral values differently. Furthermore, it is often hard to specify the precise meaning of a value in a particular context in a computational manner. Methods that estimate value systems from human-engineered specifications, e.g. value surveys, are limited in scale by the need for intensive human moderation. In this article, we propose a novel method to automatically learn value systems from observations and human demonstrations. In particular, we propose a formal model of the value system learning problem, its instantiation to sequential decision-making domains based on multi-objective Markov decision processes, and tailored preference-based and inverse reinforcement learning algorithms to infer value grounding functions and value systems. The approach is illustrated and evaluated on two simulated use cases.
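To make the abstract's key objects concrete (value grounding functions that score behavior on each moral value, a value system that weighs those values against each other, and preference-based inference of the weights), the following is a minimal Python sketch. It assumes a linear value system and a Bradley-Terry preference model; the grounding functions, the toy trajectories, and every name in it are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy grounding functions: each scores a trajectory on one moral value.
# These are illustrative stand-ins for learned value grounding functions.
def grounding_fairness(trajectory):
    return np.mean([1.0 if a == "share" else 0.0 for a in trajectory])

def grounding_efficiency(trajectory):
    return np.mean([1.0 if a == "keep" else 0.3 for a in trajectory])

GROUNDINGS = [grounding_fairness, grounding_efficiency]

def value_features(trajectory):
    """Per-value scores of a trajectory (its multi-objective return)."""
    return np.array([g(trajectory) for g in GROUNDINGS])

def fit_value_system(preferences, lr=0.5, steps=2000):
    """Fit value-system weights from pairwise trajectory preferences by
    maximizing a Bradley-Terry log-likelihood with gradient ascent."""
    w = np.zeros(len(GROUNDINGS))
    for _ in range(steps):
        grad = np.zeros_like(w)
        for preferred, other in preferences:
            diff = value_features(preferred) - value_features(other)
            p = 1.0 / (1.0 + np.exp(-(w @ diff)))  # P(preferred > other)
            grad += (1.0 - p) * diff               # log-likelihood gradient
        w += lr * grad / len(preferences)
    # Normalize to a convex combination so weights read as importances.
    return np.exp(w) / np.exp(w).sum()

# Synthetic demonstrations: the demonstrator consistently prefers sharing,
# revealing a value system that weighs fairness over efficiency.
prefs = [(["share", "share"], ["keep", "keep"]),
         (["share", "keep"], ["keep", "keep"])]
print(fit_value_system(prefs))  # weight on fairness should dominate
```

Under these assumptions, a demonstrator who consistently prefers sharing yields a fitted weight vector dominated by fairness, which is the sense in which observed preferences can reveal a value system.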
Source: arXiv: 2602.04518