arXiv submission date: 2026-02-05
📄 Abstract - UAV Trajectory Optimization via Improved Noisy Deep Q-Network

This paper proposes an Improved Noisy Deep Q-Network (Noisy DQN) to enhance the exploration ability and stability of Unmanned Aerial Vehicles (UAVs) when applying deep reinforcement learning in simulated environments. The method strengthens exploration by combining a residual NoisyLinear layer with an adaptive noise scheduling mechanism, and improves training stability through a smooth loss and soft target network updates. Experiments show that the proposed model converges faster, earns up to $+40$ higher reward than standard DQN, and quickly reaches the minimum number of steps required for the task (28) in the 15 × 15 grid navigation environment. The results show that our comprehensive improvements to the NoisyNet network structure, exploration control, and training stability enhance the efficiency and reliability of deep Q-learning.
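The abstract names four ingredients: a residual NoisyLinear layer, adaptive noise scheduling, a smooth (Huber-style) loss, and soft target network updates. It does not give implementation details, so the sketch below is only a plausible PyTorch rendering under stated assumptions: `NoisyLinear` follows the standard factorized-Gaussian NoisyNet formulation; `ResidualNoisyBlock`, `adapt_noise`, `soft_update`, and the exponential decay schedule are hypothetical names and choices; and "smooth loss" is read as `smooth_l1_loss`.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Standard factorized-Gaussian NoisyNet layer (Fortunato et al., 2018)."""
    def __init__(self, in_features, out_features, sigma_init=0.5):
        super().__init__()
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        self.register_buffer("eps_in", torch.zeros(in_features))
        self.register_buffer("eps_out", torch.zeros(out_features))
        self.noise_scale = 1.0  # multiplier driven by the adaptive schedule below
        bound = 1.0 / math.sqrt(in_features)
        self.weight_mu.data.uniform_(-bound, bound)
        self.bias_mu.data.uniform_(-bound, bound)
        self.weight_sigma.data.fill_(sigma_init / math.sqrt(in_features))
        self.bias_sigma.data.fill_(sigma_init / math.sqrt(out_features))
        self.reset_noise()

    @staticmethod
    def _f(x):  # signed square-root transform used by factorized noise
        return x.sign() * x.abs().sqrt()

    def reset_noise(self):
        self.eps_in.normal_()
        self.eps_out.normal_()

    def forward(self, x):
        if self.training:
            w_eps = torch.outer(self._f(self.eps_out), self._f(self.eps_in))
            w = self.weight_mu + self.noise_scale * self.weight_sigma * w_eps
            b = self.bias_mu + self.noise_scale * self.bias_sigma * self._f(self.eps_out)
        else:  # deterministic at evaluation time
            w, b = self.weight_mu, self.bias_mu
        return F.linear(x, w, b)

class ResidualNoisyBlock(nn.Module):
    """Hypothetical residual wiring: identity skip around a noisy transform."""
    def __init__(self, dim):
        super().__init__()
        self.noisy = NoisyLinear(dim, dim)

    def forward(self, x):
        return x + F.relu(self.noisy(x))

def adapt_noise(noisy_layers, episode, decay=0.995, floor=0.1):
    """Hypothetical adaptive schedule: exponentially anneal the noise multiplier."""
    scale = max(floor, decay ** episode)
    for layer in noisy_layers:
        layer.noise_scale = scale

def soft_update(target_net, online_net, tau=0.005):
    """Polyak averaging: theta_target <- tau*theta_online + (1-tau)*theta_target."""
    for tp, op in zip(target_net.parameters(), online_net.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * op.data)

# Reading "smooth loss" as the Huber-style smooth L1 loss on TD targets:
# loss = F.smooth_l1_loss(q_pred, q_target)
```

Which quantity drives the noise schedule (episode index, TD error, or reward trend) is not specified in the abstract; an episode-indexed exponential decay is just the simplest stand-in.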

Top-level tags: reinforcement learning, robotics, model training
Detailed tags: uav trajectory optimization, noisy dqn, exploration enhancement, adaptive noise scheduling, training stability

UAV Trajectory Optimization via Improved Noisy Deep Q-Network


1️⃣ One-Sentence Summary

This paper proposes an improved Noisy Deep Q-Network that strengthens the agent's exploration ability and training stability, so that a UAV learning flight trajectories in a simulated environment finds the optimal path faster and earns higher task rewards.

Source: arXiv:2602.05644