Hierarchical Zeroth-Order Optimization for Deep Neural Networks
1️⃣ One-Sentence Summary
This paper proposes a new method called Hierarchical Zeroth-Order optimization, which trains deep neural networks by decomposing them along the depth (layer) dimension. Without computing any gradients, it significantly reduces computational cost while maintaining accuracy comparable to mainstream backpropagation.
Zeroth-order (ZO) optimization has long been favored for its biological plausibility and its capacity to handle non-differentiable objectives, yet its computational complexity has historically limited its application in deep neural networks. Challenging the conventional paradigm that gradients propagate layer-by-layer, we propose Hierarchical Zeroth-Order (HZO) optimization, a novel divide-and-conquer strategy that decomposes the depth dimension of the network. We prove that HZO reduces the query complexity from $O(ML^2)$ to $O(ML \log L)$ for a network of width $M$ and depth $L$, representing a significant leap over existing ZO methodologies. Furthermore, we provide a detailed error analysis showing that HZO maintains numerical stability by operating near the unitary limit ($L_{lip} \approx 1$). Extensive evaluations on CIFAR-10 and ImageNet demonstrate that HZO achieves competitive accuracy compared to backpropagation.
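To make the divide-and-conquer idea concrete, the following is a minimal illustrative sketch in Python: the depth range of a toy MLP is recursively split in half, and each leaf block of layers is updated with SPSA-style two-point zeroth-order estimates instead of backpropagated gradients. The toy network, the perturbation scheme, and names such as `hzo_update` and `zo_block_grad` are assumptions made for illustration; the paper's actual HZO procedure may differ in how blocks are formed and how queries are allocated.

```python
# Illustrative divide-and-conquer zeroth-order training of a toy MLP.
# NOT the paper's reference implementation: block splitting, SPSA-style
# two-point estimates, and all function names here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(width, depth):
    """Random weights for a depth-L, width-M fully connected network."""
    return [rng.standard_normal((width, width)) / np.sqrt(width) for _ in range(depth)]

def forward(weights, x):
    for W in weights:
        x = np.tanh(W @ x)  # bounded activation keeps the layer map close to non-expansive
    return x

def loss(weights, x, y):
    return 0.5 * np.mean((forward(weights, x) - y) ** 2)

def zo_block_grad(weights, x, y, lo, hi, mu=1e-3, queries=8):
    """Two-point zeroth-order gradient estimate for layers in [lo, hi)."""
    grads = [np.zeros_like(W) for W in weights[lo:hi]]
    for _ in range(queries):
        deltas = [rng.standard_normal(W.shape) for W in weights[lo:hi]]
        w_plus, w_minus = list(weights), list(weights)
        for i, d in enumerate(deltas):
            w_plus[lo + i] = weights[lo + i] + mu * d
            w_minus[lo + i] = weights[lo + i] - mu * d
        coeff = (loss(w_plus, x, y) - loss(w_minus, x, y)) / (2 * mu * queries)
        for g, d in zip(grads, deltas):
            g += coeff * d
    return grads

def hzo_update(weights, x, y, lo=0, hi=None, lr=0.1, min_block=1):
    """Recursively split the depth range; update each leaf block with ZO estimates."""
    if hi is None:
        hi = len(weights)
    if hi - lo <= min_block:
        for i, g in enumerate(zo_block_grad(weights, x, y, lo, hi)):
            weights[lo + i] -= lr * g
        return
    mid = (lo + hi) // 2
    hzo_update(weights, x, y, lo, mid, lr, min_block)
    hzo_update(weights, x, y, mid, hi, lr, min_block)

# Toy usage: width M=16, depth L=8, regress a random target without any gradients.
M, L = 16, 8
weights = init_mlp(M, L)
x, y = rng.standard_normal(M), rng.standard_normal(M)
for step in range(20):
    hzo_update(weights, x, y)
print("final loss:", loss(weights, x, y))
```

The recursion over the depth range is what the sketch is meant to convey: each layer block is queried independently rather than perturbing the full depth at once, which is the intuition behind reducing the query complexity from $O(ML^2)$ toward $O(ML \log L)$.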
Source: arXiv: 2602.10607