
arXiv submission date: 2026-04-20
📄 Abstract - Universally Empowering Zeroth-Order Optimization via Adaptive Layer-wise Sampling

Zeroth-order (ZO) optimization presents a promising memory-efficient paradigm for fine-tuning Large Language Models by relying solely on forward passes. However, its practical adoption is severely constrained by slow wall-clock convergence and high estimation variance. In this work, we dissect the runtime characteristics of ZO algorithms and identify a critical system bottleneck where the generation of perturbations and parameter updates accounts for over 40% of the training latency. We argue that the standard uniform exploration strategy is fundamentally flawed, as it fails to account for the heterogeneous sensitivity of layers in deep networks, resulting in computationally wasteful blind searches. To address this structural mismatch, we propose AdaLeZO, an Adaptive Layer-wise ZO optimization framework. By formulating the layer selection process as a non-stationary Multi-Armed Bandit problem, AdaLeZO dynamically allocates the limited perturbation budget to the most sensitive parameters. We further introduce an Inverse Probability Weighting mechanism based on sampling with replacement, which guarantees unbiased gradient estimation while effectively acting as a temporal denoiser to reduce variance. Extensive experiments on LLaMA and OPT models ranging from 6.7B to 30B parameters demonstrate that AdaLeZO achieves 1.7x to 3.0x wall-clock acceleration compared to state-of-the-art methods. Crucially, AdaLeZO functions as a universal plug-and-play module that seamlessly enhances the efficiency of existing ZO optimizers without incurring additional memory overhead.
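The core mechanism in the abstract (sample a layer from a bandit policy, perturb only that layer, and rescale the update by the inverse sampling probability so the gradient estimate stays unbiased) can be sketched on a toy quadratic objective. This is an illustrative sketch under my own assumptions, not the paper's implementation: the function name `zo_step_adaptive`, the EMA sensitivity score, and the 0.05 exploration floor are all hypothetical stand-ins for AdaLeZO's actual bandit policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_step_adaptive(params, loss_fn, probs, eps=1e-4, lr=1e-4):
    """One two-point ZO (SPSA-style) step that perturbs a single sampled layer.

    probs is the bandit policy over layers; the update is rescaled by
    1/probs[i] (inverse probability weighting), so in expectation it matches
    the full layer-wise ZO update, i.e. the estimate stays unbiased.
    """
    i = rng.choice(len(params), p=probs)        # bandit arm = layer index
    z = rng.standard_normal(params[i].shape)    # Gaussian perturbation
    params[i] += eps * z
    loss_plus = loss_fn(params)
    params[i] -= 2 * eps * z
    loss_minus = loss_fn(params)
    params[i] += eps * z                        # restore original weights
    g = (loss_plus - loss_minus) / (2 * eps)    # projected gradient estimate
    params[i] -= lr * (g / probs[i]) * z        # IPW-corrected update
    return i, abs(g)

# Toy "model": two layers with very different curvature (sensitivity).
target = [np.array([3.0, -1.0]), np.array([0.5])]
def loss_fn(params):
    return (100.0 * np.sum((params[0] - target[0]) ** 2)
            + np.sum((params[1] - target[1]) ** 2))

params = [np.zeros(2), np.zeros(1)]
probs = np.array([0.5, 0.5])                    # start with uniform sampling
scores = np.ones(2)                             # per-layer sensitivity estimates
for _ in range(500):
    i, s = zo_step_adaptive(params, loss_fn, probs)
    scores[i] = 0.9 * scores[i] + 0.1 * s       # EMA of observed |gradient|
    probs = 0.05 + 0.9 * scores / scores.sum()  # keep an exploration floor
    probs /= probs.sum()
```

In this toy run the policy quickly concentrates its perturbation budget on the high-curvature first "layer", which is the qualitative behavior the abstract attributes to the bandit formulation; the 1/probs[i] factor is what keeps the concentrated sampling unbiased.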

Top-level tags: llm, model training
Detailed tags: zeroth-order optimization, adaptive layer-wise sampling, multi-armed bandit, gradient estimation, memory-efficient fine-tuning

Universally Empowering Zeroth-Order Optimization via Adaptive Layer-wise Sampling


1️⃣ One-Sentence Summary

This paper proposes AdaLeZO, an adaptive layer-wise sampling method that dynamically allocates the perturbation budget of zeroth-order optimization to the most sensitive parameter layers of the network (akin to intelligently allocating resources), significantly accelerating large-model fine-tuning: it delivers a 1.7x to 3.0x wall-clock speedup while preserving the low memory footprint.

Source: arXiv: 2604.18264