arXiv submission date: 2026-02-05
📄 Abstract - Formal Synthesis of Certifiably Robust Neural Lyapunov-Barrier Certificates

Neural Lyapunov and barrier certificates have recently been used as powerful tools for verifying the safety and stability properties of deep reinforcement learning (RL) controllers. However, existing methods offer guarantees only under fixed, ideal, unperturbed dynamics, limiting their reliability in real-world applications where dynamics may deviate due to uncertainties. In this work, we study the problem of synthesizing *robust neural Lyapunov-barrier certificates* that maintain their guarantees under perturbations in system dynamics. We formally define a robust Lyapunov-barrier function and specify sufficient conditions based on Lipschitz continuity that ensure robustness against bounded perturbations. We propose practical training objectives that enforce these conditions via adversarial training, a Lipschitz neighborhood bound, and global Lipschitz regularization. We validate our approach in two practically relevant environments, Inverted Pendulum and 2D Docking. The former is a widely studied benchmark, while the latter is a safety-critical task in autonomous systems. We show that our methods significantly improve both certified robustness bounds (up to $4.6$ times) and empirical success rates under strong perturbations (up to $2.4$ times) compared to the baseline. Our results demonstrate the effectiveness of training robust neural certificates for safe RL under perturbations in dynamics.
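
The abstract names three training ingredients: adversarial training over bounded dynamics perturbations, a Lipschitz-based neighborhood margin, and a global Lipschitz regularizer. The sketch below (plain PyTorch, not the authors' code) illustrates one way these could be combined into a training loss for a certificate network $V(x)$. The toy dynamics `f`, perturbation bound `eps`, margin `gamma`, and regularization weight `reg` are illustrative assumptions, and the full Lyapunov-barrier conditions (positivity of $V$, barrier level sets) are omitted for brevity.

```python
# Minimal sketch, assuming a PyTorch MLP certificate V(x) and bounded additive
# perturbations on the nominal next state. Not the paper's exact objective.
import torch
import torch.nn as nn


class Certificate(nn.Module):
    """Small MLP certificate V(x); V should decrease along (perturbed) trajectories."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

    def global_lipschitz_bound(self) -> torch.Tensor:
        """Crude global Lipschitz upper bound: product of layer spectral norms
        (tanh is 1-Lipschitz). Used as a regularizer, not a tight certificate."""
        bound = torch.ones(())
        for layer in self.net:
            if isinstance(layer, nn.Linear):
                bound = bound * torch.linalg.matrix_norm(layer.weight, ord=2)
        return bound


def adversarial_next_state(V, f, x, eps, steps=5, step_size=0.02):
    """Worst-case bounded perturbation d (||d||_inf <= eps) of the nominal next
    state f(x), found by a few projected gradient-ascent steps on V(f(x) + d)."""
    d = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        obj = V(f(x) + d).sum()                  # maximize V at the perturbed successor
        (grad,) = torch.autograd.grad(obj, d)
        with torch.no_grad():
            d = (d + step_size * grad.sign()).clamp(-eps, eps)
        d.requires_grad_(True)
    return (f(x) + d).detach()


def robust_certificate_loss(V, f, x, eps=0.05, gamma=1e-3, reg=1e-3):
    """Hinge loss enforcing a decrease condition with an extra Lipschitz margin
    L * eps, so the decrease survives any perturbation of magnitude <= eps."""
    L = V.global_lipschitz_bound()
    x_next_adv = adversarial_next_state(V, f, x, eps)
    # Decrease condition with neighborhood margin: V(x') - V(x) <= -gamma - L * eps
    decrease = torch.relu(V(x_next_adv) - V(x) + gamma + L * eps).mean()
    return decrease + reg * L


if __name__ == "__main__":
    torch.manual_seed(0)
    nominal_f = lambda x: 0.95 * x               # toy stable nominal dynamics (assumption)
    V = Certificate(state_dim=2)
    opt = torch.optim.Adam(V.parameters(), lr=1e-3)
    for _ in range(10):
        x = 2.0 * torch.rand(256, 2) - 1.0       # states sampled from [-1, 1]^2
        loss = robust_certificate_loss(V, nominal_f, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.4f}")
```

The key design choice in this sketch is that the decrease margin is inflated by the (estimated) Lipschitz constant times the perturbation radius, so a condition verified on nominal samples still holds in an eps-neighborhood of the perturbed dynamics.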

Top-level tags: reinforcement learning, theory, systems
Detailed tags: neural certificates, robustness verification, lyapunov functions, safety-critical systems, adversarial training

Formal Synthesis of Certifiably Robust Neural Lyapunov-Barrier Certificates


1️⃣ One-sentence summary

This paper proposes a new method that trains a special kind of neural network certificate to ensure that deep reinforcement learning controllers remain safe and stable even when the system dynamics are subject to uncertainty and perturbations, and validates its effectiveness in two practical environments.

Source: arXiv: 2602.05311