arXiv submission date: 2025-12-11
📄 Abstract - Stronger Normalization-Free Transformers

Although normalization layers have long been viewed as indispensable components of deep learning architectures, the recent introduction of Dynamic Tanh (DyT) has demonstrated that alternatives are possible. The point-wise function DyT constrains extreme values for stable convergence and reaches normalization-level performance; this work goes further, seeking function designs that can surpass it. We first study how the intrinsic properties of point-wise functions influence training and performance. Building on these findings, we conduct a large-scale search for a more effective function design. Through this exploration, we introduce $\mathrm{Derf}(x) = \mathrm{erf}(\alpha x + s)$, where $\mathrm{erf}(x)$ is the rescaled Gaussian cumulative distribution function, and identify it as the most performant design. Derf outperforms LayerNorm, RMSNorm, and DyT across a wide range of domains, including vision (image recognition and generation), speech representation, and DNA sequence modeling. Our findings suggest that the performance gains of Derf largely stem from its improved generalization rather than stronger fitting capacity. Its simplicity and stronger performance make Derf a practical choice for normalization-free Transformer architectures.
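To make the point-wise form concrete, below is a minimal PyTorch sketch of a Derf layer used as a drop-in replacement for LayerNorm. The parameter shapes (a scalar slope `alpha`, a scalar shift `s`, and a per-channel affine output `gamma`/`beta` in the style of DyT) and the initialization values are assumptions for illustration only; the paper's exact parameterization may differ.

```python
# Hedged sketch of a Derf layer: y = gamma * erf(alpha * x + s) + beta.
# Parameter shapes and init values are illustrative assumptions, not the paper's spec.
import torch
import torch.nn as nn


class Derf(nn.Module):
    def __init__(self, dim: int, alpha_init: float = 0.5):
        super().__init__()
        # Scalar slope and shift inside erf(alpha * x + s); scalar shapes are an assumption.
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.s = nn.Parameter(torch.zeros(()))
        # Per-channel affine parameters, mirroring the learnable scale/shift used by DyT.
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Point-wise squashing: erf saturates extreme values much like tanh,
        # but with Gaussian-CDF-shaped tails, which is the design the paper searches over.
        y = torch.erf(self.alpha * x + self.s)
        return self.gamma * y + self.beta


if __name__ == "__main__":
    layer = Derf(dim=768)
    x = torch.randn(2, 16, 768)   # (batch, tokens, channels)
    print(layer(x).shape)         # torch.Size([2, 16, 768])
```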

Top-level tags: model training, machine learning theory
Detailed tags: normalization-free transformer, architecture, activation function, function design, deep learning

Stronger Normalization-Free Transformers


1️⃣ One-Sentence Summary

This paper proposes Derf, a new point-wise function discovered through a large-scale design search and based on the Gaussian cumulative distribution function. Without traditional normalization layers, it surpasses existing methods across domains such as image recognition and generation, speech representation, and DNA sequence modeling, with the gains attributed mainly to its stronger generalization.


Source: arXiv:2512.10938