arXiv submission date: 2026-04-25
📄 Abstract - A Layer Separation Optimization Framework for Cross-Entropy Training in Deep Learning

This paper investigates the deep learning optimization problem with softmax cross-entropy loss. We propose a layer separation strategy to alleviate the strong nonconvexity encountered when training deep networks. For cross-entropy models with fully connected and convolutional neural networks, we introduce auxiliary variables associated with the hidden-layer outputs and construct corresponding layer separation models, which decompose the original deeply nested optimization problem into a sequence of more tractable subproblems. We also conduct theoretical analyses, proving that the new layer separation loss provides an upper bound for the original cross-entropy loss. Moreover, we design alternating minimization algorithms and prove that, under appropriate conditions, these algorithms monotonically decrease the loss. Numerical experiments validate the effectiveness of the proposed methods and indicate improved optimization behavior for both fully connected and convolutional neural networks.
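The separation strategy described in the abstract can be sketched on a toy two-layer network: an auxiliary variable `U` stands in for the hidden-layer output, a quadratic penalty couples `U` to `relu(X @ W1)`, and the blocks `U`, `W2`, `W1` are updated in turn. All shapes, the penalty form, and the use of single gradient steps in place of exact subproblem solves are illustrative assumptions here, not the paper's actual construction.

```python
import numpy as np

# Minimal sketch of layer separation for a two-layer network (assumed setup,
# not the paper's exact model): minimize over (W1, W2, U)
#   CE(softmax(U @ W2), y) + (rho/2) * mean((U - relu(X @ W1))**2)
rng = np.random.default_rng(0)
n, d, h, c = 64, 10, 16, 3                  # samples, input dim, hidden dim, classes
X = rng.normal(size=(n, d))
y = rng.integers(0, c, size=n)
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=(h, c))
U = np.maximum(X @ W1, 0.0)                 # auxiliary variable for the hidden output
rho, lr = 1.0, 0.1                          # penalty weight and step size (assumed)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)    # shift for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def sep_loss(W1, W2, U):
    """Cross-entropy in (U, W2) plus the penalty coupling U to relu(X @ W1)."""
    P = softmax(U @ W2)
    ce = -np.log(P[np.arange(n), y] + 1e-12).mean()
    pen = 0.5 * rho * np.mean((U - np.maximum(X @ W1, 0.0)) ** 2)
    return ce + pen

losses = [sep_loss(W1, W2, U)]
for _ in range(200):
    H = np.maximum(X @ W1, 0.0)             # current hidden output relu(X @ W1)
    # (1) gradient step in U with the weights fixed
    G = softmax(U @ W2)
    G[np.arange(n), y] -= 1.0               # d(CE)/d(logits)
    U -= lr * (G @ W2.T / n + rho * (U - H) / (n * h))
    # (2) gradient step in W2 with U fixed (this subproblem is convex)
    G = softmax(U @ W2)
    G[np.arange(n), y] -= 1.0
    W2 -= lr * (U.T @ G / n)
    # (3) gradient step in W1: only the penalty term depends on it
    M = (X @ W1 > 0).astype(float)          # relu subgradient mask
    W1 -= lr * (-rho * X.T @ ((U - H) * M) / (n * h))
    losses.append(sep_loss(W1, W2, U))
```

Note how the decomposition pays off: with `U` fixed, the `W2` subproblem is an ordinary convex softmax regression, and the `W1` subproblem is a shallow regression onto `U`, so the deeply nested nonconvexity of the original loss never appears in any single block update.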

Top-level tag: machine learning theory
Detailed tags: optimization, cross-entropy, layer separation, alternating minimization, deep learning

A Layer Separation Optimization Framework for Cross-Entropy Training in Deep Learning


1️⃣ One-Sentence Summary

This work proposes a layer separation strategy that introduces auxiliary variables to decompose the complex optimization problem of deep networks into a sequence of simpler subproblems, thereby alleviating the nonconvexity of cross-entropy training, and establishes the method's effectiveness and convergence both theoretically and experimentally.

Source: arXiv 2604.23225