Beyond Rewards in Reinforcement Learning for Cyber Defence
1️⃣ One-Sentence Summary
Through a systematic study, this paper finds that when training AI systems for cyber defence, simple, clearly goal-directed rewards (sparse rewards) produce more reliable, more effective, and lower-risk defence policies than complex, carefully engineered composite rewards (dense rewards).
Recent years have seen an explosion of interest in autonomous cyber defence agents trained to defend computer networks using deep reinforcement learning. These agents are typically trained in cyber gym environments using dense, highly engineered reward functions that combine many penalties and incentives for a range of (un)desirable states and costly actions. Dense rewards help alleviate the challenge of exploring complex environments, but they risk biasing agents towards suboptimal and potentially riskier solutions, a critical issue in complex cyber environments. We thoroughly evaluate the impact of reward function structure on learning and on policy behavioural characteristics using a variety of sparse and dense reward functions, two well-established cyber gyms, a range of network sizes, and both policy-gradient and value-based RL algorithms. Our evaluation is enabled by a novel ground-truth evaluation approach that allows direct comparison between different reward functions, illuminating the nuanced inter-relationships between rewards, the action space, and the risks of suboptimal policies in cyber environments. Our results show that sparse rewards, provided they are goal-aligned and can be encountered frequently, uniquely offer both enhanced training reliability and more effective cyber defence agents with lower-risk policies. Surprisingly, sparse rewards can also yield policies that are better aligned with cyber defender goals and make sparing use of costly defensive actions, without explicit reward-based numerical penalties.
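To make the contrast concrete, the sketch below shows the two reward styles in a toy gym-like defence setting. It is a minimal illustration only: the state fields, action costs, and penalty weights are hypothetical assumptions, not the reward functions or cyber gym APIs evaluated in the paper.

```python
# Hypothetical illustration of dense vs. sparse rewards for a cyber defender.
# All fields, weights, and costs below are assumed for this sketch; none are
# taken from the paper's gyms or experiments.

ACTION_COST = {"monitor": 0.0, "restore": 1.0, "isolate": 2.0}  # assumed action costs

def dense_reward(state: dict, action: str) -> float:
    """Dense, engineered reward: many per-step penalties and incentives."""
    r = 0.0
    r -= 1.0 * len(state["compromised_hosts"])    # penalty per compromised host
    r -= 5.0 * len(state["lost_critical_hosts"])  # heavier penalty on critical assets
    r -= ACTION_COST[action]                      # explicit cost of defensive actions
    r += 0.1 * len(state["clean_hosts"])          # small bonus for healthy hosts
    return r

def sparse_reward(state: dict, done: bool) -> float:
    """Sparse, goal-aligned reward: signal only when the episode ends."""
    if not done:
        return 0.0
    # +1 if all critical services survived the episode, -1 otherwise.
    return 1.0 if not state["lost_critical_hosts"] else -1.0
```

Note that the sparse form encodes only the defender's end goal. Per the abstract, rewards of this shape trained more reliable, lower-risk policies, provided the goal signal could be encountered frequently enough during exploration.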
Source: arXiv: 2602.04809