arXiv submission date: 2026-01-29
📄 Abstract - Error Amplification Limits ANN-to-SNN Conversion in Continuous Control

Spiking Neural Networks (SNNs) can achieve competitive performance by converting existing, well-trained Artificial Neural Networks (ANNs), avoiding further costly training. This property is particularly attractive in Reinforcement Learning (RL), where training through environment interaction is expensive and potentially unsafe. However, existing conversion methods perform poorly in continuous control, where suitable baselines are largely absent. We identify error amplification as the key cause: small action approximation errors become temporally correlated across decision steps, inducing cumulative state distribution shift and severe performance degradation. To address this issue, we propose Cross-Step Residual Potential Initialization (CRPI), a lightweight, training-free mechanism that carries residual membrane potentials over across decision steps to suppress temporally correlated errors. Experiments on continuous control benchmarks with both vector and visual observations demonstrate that CRPI can be integrated into existing conversion pipelines and substantially recovers lost performance. Our results highlight continuous control as a critical and challenging benchmark for ANN-to-SNN conversion, where small errors can be strongly amplified and impact performance.
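The carryover idea can be illustrated with a toy simulation (a minimal sketch, not the paper's implementation; the neuron model, constants, and function names below are illustrative assumptions). A soft-reset integrate-and-fire neuron approximates an analog activation by its spike rate over `T` timesteps; resetting the membrane potential at every decision step discards the sub-threshold remainder, so the same quantization error repeats at every step, while carrying the residual potential forward lets it eventually be emitted as an extra spike:

```python
THETA = 1.0  # firing threshold of the integrate-and-fire neuron (illustrative)

def if_decision_step(current, T, v_init):
    """Simulate a soft-reset IF neuron for T timesteps of constant input.

    Returns (spike count during the step, residual membrane potential).
    """
    v = v_init
    spikes = 0
    for _ in range(T):
        v += current          # integrate the input current
        if v >= THETA:
            spikes += 1
            v -= THETA        # soft reset: keep the sub-threshold remainder
    return spikes, v

I, T, K = 0.35, 4, 4  # input current, timesteps per decision step, decision steps

# Standard conversion: potential reset to 0 at every decision step, so the
# same quantization error repeats at each step (temporally correlated).
rate_reset = if_decision_step(I, T, 0.0)[0] * THETA / T   # 0.25 every step

# Carryover (CRPI-style): the residual potential is passed to the next step,
# so the leftover charge is eventually emitted as an extra spike.
v, total = 0.0, 0
for _ in range(K):
    s, v = if_decision_step(I, T, v)
    total += s
rate_carry = total * THETA / (K * T)                      # 5 spikes / 16 = 0.3125

print(rate_reset, rate_carry)  # prints: 0.25 0.3125
```

With the target analog output being the input current 0.35, the reset variant is stuck at 0.25 with the same bias at every decision step, whereas the carryover variant averages 0.3125 after four steps and keeps converging, which is the error-suppression effect the abstract describes.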

Top-level tags: agents, reinforcement learning, model training
Detailed tags: ann-to-snn conversion, spiking neural networks, continuous control, error amplification, reinforcement learning

Error Amplification Limits ANN-to-SNN Conversion in Continuous Control


1️⃣ One-sentence summary

This paper finds that when artificial neural networks are converted into spiking neural networks for continuous control tasks, small action approximation errors accumulate and amplify over time, causing severe performance degradation. To address this, it proposes a training-free method that suppresses these errors by carrying residual membrane potentials across decision steps, effectively recovering the lost performance.

Source: arXiv 2601.21778