arXiv submission date: 2026-02-16
📄 Abstract - Algorithmic Simplification of Neural Networks with Mosaic-of-Motifs

Large-scale deep learning models are well-suited for compression. Methods like pruning, quantization, and knowledge distillation achieve massive reductions in the number of model parameters with only marginal performance drops across a variety of architectures and tasks. This raises the central question: \emph{Why are deep neural networks suited for compression?} In this work, we take up the perspective of algorithmic complexity to explain this behavior. We hypothesize that the parameters of trained models have more structure and, hence, exhibit lower algorithmic complexity than the weights at (random) initialization, and that model compression methods harness this reduced algorithmic complexity to compress models. Although an unconstrained parameterization of model weights, $\mathbf{w} \in \mathbb{R}^n$, can represent arbitrary weight assignments, the solutions found during training exhibit repeatability and structure, making them algorithmically simpler than a generic program. To this end, we denote the Kolmogorov complexity of $\mathbf{w}$ by $\mathcal{K}(\mathbf{w})$. We introduce a constrained parameterization $\widehat{\mathbf{w}}$ that partitions the parameters into blocks of size $s$ and restricts each block to be selected from a set of $k$ reusable motifs, specified by a reuse pattern (or mosaic). The resulting method, $\textit{Mosaic-of-Motifs}$ (MoMos), yields an algorithmically simpler model parameterization than unconstrained models. Empirical evidence from multiple experiments shows that the algorithmic complexity of neural networks, measured using approximations to Kolmogorov complexity, can be reduced during training, resulting in models that perform comparably to unconstrained models while being algorithmically simpler.
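
To make the constrained parameterization concrete, here is a minimal Python sketch of how $\widehat{\mathbf{w}}$ could be assembled from $k$ reusable motifs of block size $s$ and a mosaic of block indices. The function name `assemble_weights`, all variable names, and the description-length comparison are illustrative assumptions for this note, not the paper's implementation.

```python
import numpy as np

def assemble_weights(motifs: np.ndarray, mosaic: np.ndarray) -> np.ndarray:
    """Assemble the constrained parameter vector w_hat from k reusable
    motifs (shape [k, s]) and a mosaic of block indices (shape [n_blocks])."""
    return motifs[mosaic].reshape(-1)

# --- toy example; all sizes are illustrative, not taken from the paper ---
rng = np.random.default_rng(0)
n, s, k = 24, 4, 3                 # n parameters, block size s, k motifs
n_blocks = n // s

motifs = rng.normal(size=(k, s))            # the k reusable motifs
mosaic = rng.integers(0, k, size=n_blocks)  # reuse pattern (the "mosaic")

w_hat = assemble_weights(motifs, mosaic)
assert w_hat.shape == (n,)

# Crude description-length comparison: the constrained form stores k*s motif
# values plus about log2(k) bits per block, versus n free 32-bit parameters.
bits_per_float = 32
constrained_bits = k * s * bits_per_float + n_blocks * int(np.ceil(np.log2(k)))
unconstrained_bits = n * bits_per_float
print(constrained_bits, unconstrained_bits)
```

The key design point the sketch highlights is that the number of free values grows with $k \cdot s$ plus one index per block rather than with $n$, which is where the reduced description length (and hence the lower algorithmic complexity) comes from.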

Top-level tags: model training, theory, machine learning
Detailed tags: algorithmic complexity, model compression, Kolmogorov complexity, parameterization, neural network simplification

Algorithmic Simplification of Neural Networks with Mosaic-of-Motifs


1️⃣ One-Sentence Summary

This paper proposes a new perspective: trained neural networks are easy to compress because their parameters exhibit regularity and therefore lower algorithmic complexity. Building on this, it designs a method called Mosaic-of-Motifs, which constructs a model by reusing a small number of parameter blocks (motifs), achieving algorithmic simplification while maintaining performance.
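
To illustrate the compressibility argument, the following sketch uses the zlib-compressed size of coarsely quantized weights as a crude upper-bound proxy for $\mathcal{K}(\mathbf{w})$, comparing i.i.d. random weights against weights assembled from reused motifs. The quantization scheme, sizes, and the use of zlib here are illustrative assumptions, not the paper's measurement protocol.

```python
import zlib
import numpy as np

def compressed_size(w: np.ndarray) -> int:
    """zlib-compressed byte size of a coarsely quantized weight vector,
    used here as a crude upper-bound proxy for Kolmogorov complexity."""
    q = np.clip(np.round(w * 127), -127, 127).astype(np.int8)  # 8-bit quantization
    return len(zlib.compress(q.tobytes(), 9))

rng = np.random.default_rng(0)
n, s, k = 4096, 16, 8

w_random = rng.normal(size=n)                 # unconstrained i.i.d. weights
motifs = rng.normal(size=(k, s))
mosaic = rng.integers(0, k, size=n // s)
w_momos = motifs[mosaic].reshape(-1)          # weights assembled from reused motifs

print("i.i.d. weights :", compressed_size(w_random), "bytes")
print("motif weights  :", compressed_size(w_momos), "bytes")  # expected to be much smaller
```

Because the motif-assembled vector repeats the same $s$-value blocks many times, a general-purpose compressor finds those repeats, so its compressed size should come out well below that of the i.i.d. vector, mirroring the paper's claim that structured parameters have lower approximate algorithmic complexity.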

Source: arXiv: 2602.14896