Principal Prototype Analysis on Manifold for Interpretable Reinforcement Learning
1️⃣ One-sentence summary
This paper proposes a new method that automatically selects optimal prototypes from data, significantly improving the interpretability of reinforcement learning models while preserving their performance, without relying on prototypes manually defined by experts.
Recent years have witnessed the widespread adoption of reinforcement learning (RL), from solving real-time games to fine-tuning large language models on human preference data, significantly improving alignment with user expectations. However, as model complexity grows, the interpretability of these systems becomes increasingly challenging. While numerous explainability methods have been developed for computer vision and natural language processing to elucidate both local and global reasoning patterns, their application to RL remains limited. Direct extensions of these methods often struggle to maintain the delicate balance between interpretability and performance in RL settings. Prototype-Wrapper Networks (PW-Nets) have recently shown promise in bridging this gap, enhancing explainability in RL domains without sacrificing the efficiency of the original black-box models. However, these methods typically require manually defined reference prototypes, which often necessitate expert domain knowledge. In this work, we propose a method that removes this dependency by automatically selecting optimal prototypes from the available data. Preliminary experiments on standard Gym environments demonstrate that our approach matches the performance of existing PW-Nets while remaining competitive with the original black-box models.
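The prototype-wrapper idea can be sketched in a few lines: a frozen black-box encoder produces latent states, and actions are scored from similarities to a small set of prototype vectors, so each decision can be explained by its nearest prototypes. The sketch below is a minimal illustration, not the paper's implementation; in particular, `select_prototypes` uses simple farthest-point sampling from data as a hypothetical stand-in for the proposed manifold-based automatic selection.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_prototypes(latents, k):
    """Greedy farthest-point selection of k prototypes from data latents.
    A simple stand-in for automatic prototype selection (hypothetical)."""
    chosen = [int(rng.integers(len(latents)))]
    for _ in range(k - 1):
        # distance of each point to its nearest already-chosen prototype
        d = np.min(np.linalg.norm(latents[:, None] - latents[chosen], axis=2), axis=1)
        chosen.append(int(np.argmax(d)))
    return latents[chosen]

class PrototypeWrapper:
    """Wraps a frozen black-box encoder: actions are scored through
    similarities to data-derived prototypes, which serve as the
    human-inspectable explanation of each decision."""
    def __init__(self, prototypes, n_actions):
        self.prototypes = prototypes                        # (k, d)
        self.W = rng.normal(size=(len(prototypes), n_actions)) * 0.1

    def similarities(self, z):
        # RBF similarity between a latent state z and each prototype
        d2 = np.sum((self.prototypes - z) ** 2, axis=1)
        return np.exp(-d2)

    def act(self, z):
        s = self.similarities(z)                            # (k,)
        return int(np.argmax(s @ self.W)), s

# Toy latents standing in for black-box state embeddings.
latents = rng.normal(size=(200, 8))
protos = select_prototypes(latents, k=5)
wrapper = PrototypeWrapper(protos, n_actions=3)
action, sims = wrapper.act(latents[0])
```

The similarity vector `sims` is the explanation: the prototypes with the largest entries are the training states the current decision is "closest" to, while the frozen encoder keeps the black-box policy's representation intact.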
Source: arXiv: 2603.27971