Tracking Finite-Time Lyapunov Exponents to Robustify Neural ODEs
1️⃣ One-Sentence Summary
This paper proposes a new training method that improves robustness to adversarial attacks by suppressing the network's excessive sensitivity to input perturbations during the early stage of the dynamics, as measured by finite-time Lyapunov exponents; the method is more efficient than conventional full-interval regularization.
We investigate finite-time Lyapunov exponents (FTLEs), a measure of the exponential separation of input perturbations, of deep neural networks within the framework of continuous-depth neural ODEs. We demonstrate that FTLEs are powerful organizers of the input-output dynamics, allowing for better interpretability and the comparison of distinct model architectures. We establish a direct connection between Lyapunov exponents and adversarial vulnerability, and propose a novel training algorithm that improves robustness by FTLE regularization. The key idea is to suppress exponents far from zero in the early stage of the input dynamics. This approach enhances robustness and reduces computational cost compared to full-interval regularization, as it avoids a full "double" backpropagation.
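To make the regularization idea concrete, below is a minimal sketch in JAX. The framework choice, the explicit-Euler discretization, and all names and hyperparameters (`vector_field`, `flow`, `t_early`, `lam`) are illustrative assumptions, not the paper's implementation. It computes the FTLE λ_t(x) = (1/t) log σ_max(∂φ_t(x)/∂x) over a short early interval and penalizes its square, so the second-order (Jacobian) computation never spans the full integration interval:

```python
# A minimal sketch of early-stage FTLE regularization for a neural ODE.
# Assumptions: Euler-discretized flow, MLP vector field, illustrative
# names and hyperparameters -- not the paper's actual implementation.
import jax
import jax.numpy as jnp

def vector_field(params, x):
    # A small MLP vector field f(x); tanh keeps the dynamics smooth.
    W1, b1, W2, b2 = params
    return W2 @ jnp.tanh(W1 @ x + b1) + b2

def flow(params, x0, t, n_steps=20):
    # Explicit-Euler approximation of the flow map phi_t(x0).
    dt = t / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + dt * vector_field(params, x)
    return x

def ftle(params, x0, t):
    # FTLE: lambda_t(x) = (1/t) * log sigma_max(d phi_t(x) / dx),
    # the top singular value of the flow-map Jacobian.
    J = jax.jacfwd(flow, argnums=1)(params, x0, t)
    return jnp.log(jnp.linalg.norm(J, ord=2)) / t

def regularized_loss(params, x0, y, t_early=0.2, lam=0.1):
    # Task loss plus a penalty driving the early-interval FTLE toward
    # zero; the Jacobian is only propagated up to t_early, not the
    # full depth, which is what keeps the regularizer cheap.
    pred = flow(params, x0, t=1.0)
    return jnp.sum((pred - y) ** 2) + lam * ftle(params, x0, t_early) ** 2

# Example usage with random parameters (illustrative shapes):
key = jax.random.PRNGKey(0)
d, h = 2, 16
ks = jax.random.split(key, 3)
params = (0.1 * jax.random.normal(ks[0], (h, d)), jnp.zeros(h),
          0.1 * jax.random.normal(ks[1], (d, h)), jnp.zeros(d))
x0 = jax.random.normal(ks[2], (d,))
grads = jax.grad(regularized_loss)(params, x0, jnp.zeros(d))
```

Because the penalty term only differentiates the flow map up to `t_early`, the gradient of the regularizer involves second-order information over a fraction of the trajectory rather than a full "double" backpropagation through the whole depth.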
Source: arXiv:2602.09613