arXiv submission date: 2026-03-03
📄 Abstract - Joint Training Across Multiple Activation Sparsity Regimes

Generalization in deep neural networks remains only partially understood. Inspired by the stronger generalization tendency of biological systems, we explore the hypothesis that robust internal representations should remain effective across both dense and sparse activation regimes. To test this idea, we introduce a simple training strategy that applies global top-k constraints to hidden activations and repeatedly cycles a single model through multiple activation budgets via progressive compression and periodic reset. Using CIFAR-10 without data augmentation and a WRN-28-4 backbone, we find in single-run experiments that two adaptive keep-ratio control strategies both outperform dense baseline training. These preliminary results suggest that joint training across multiple activation sparsity regimes may provide a simple and effective route to improved generalization.
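The global top-k constraint described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the NumPy formulation, and magnitude-based selection are assumptions:

```python
import numpy as np

def global_topk_mask(activations: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the largest-magnitude `keep_ratio` fraction of activations,
    selected globally over the whole tensor, and zero out the rest."""
    flat = np.abs(activations).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # The k-th largest magnitude serves as the global threshold.
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    mask = np.abs(activations) >= threshold
    return activations * mask
```

Applied between layers during the forward pass, a hook like this enforces a given activation budget; `keep_ratio = 1.0` recovers ordinary dense training.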

Top-level tags: model training, machine learning theory
Detailed tags: activation sparsity, generalization, neural networks, joint training, top-k constraints

Joint Training Across Multiple Activation Sparsity Regimes


1️⃣ One-Sentence Summary

This paper proposes a simple method that cycles a neural network through dense and sparse activation states during training. Preliminary experiments suggest the approach improves performance on unseen data and may offer a new route to better generalization.
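The "progressive compression and periodic reset" cycling mentioned in the abstract can be illustrated with a keep-ratio schedule. The exact schedule is not specified in this summary, so the linear decay, the function name, and all parameters below are assumptions:

```python
def keep_ratio_schedule(num_cycles: int, steps_per_cycle: int,
                        min_ratio: float = 0.1):
    """Hypothetical schedule: within each cycle, decay the keep ratio
    linearly from 1.0 (dense) to min_ratio (sparse), then reset to dense."""
    for _ in range(num_cycles):
        for step in range(steps_per_cycle):
            frac = step / max(1, steps_per_cycle - 1)
            yield 1.0 - frac * (1.0 - min_ratio)
```

Each yielded value would set the global top-k budget for the next training step, so a single model repeatedly experiences the full range of activation budgets.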

Source: arXiv:2603.03131