Decompose, Mix, Adapt: A Unified Framework for Parameter-Efficient Neural Network Recombination and Compression
1️⃣ One-Sentence Summary
This paper proposes a unified framework called CRISP that factorizes a pretrained model's weights into shared basis matrices and a small number of tunable mixing coefficients, simultaneously enabling efficient model compression and fast adaptation to new tasks. It is especially useful on resource-constrained devices.
Parameter Recombination (PR) methods aim to efficiently compose the weights of a neural network for applications such as Parameter-Efficient Fine-Tuning (PEFT) and Model Compression (MC), among others. Most methods focus on a single PR application, which can make composing them challenging. For example, when deploying a large model you may wish to both compress the model and quickly adapt it to new settings. However, PEFT methods can still contain millions of parameters. This may be small relative to the original model, but it becomes problematic in resource-constrained deployments such as edge devices, where the adapter takes up a larger share of the compressed model's parameters. To address this, we present Coefficient-gated weight Recombination by Interpolated Shared basis Projections (CRISP), a general approach that seamlessly integrates multiple PR tasks within the same framework. CRISP accomplishes this by factorizing pretrained weights into basis matrices and their component mixing projections. Sharing basis matrices across layers and adjusting their size enables MC, while the small size of the mixer weights (fewer than 200 in some experiments) enables CRISP to support PEFT. Experiments show CRISP outperforms prior methods capable of dual-task applications by 4-5%, while also outperforming the state of the art in PEFT by 1.5% and PEFT+MC combinations by 1%. Our code is available at: this https URL.
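The core factorization described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: all names, sizes, and the plain linear mixing are assumptions. Each layer's weight is reconstructed as a coefficient-gated mix of basis matrices shared across layers, so only the bases (sized for compression) and the tiny per-layer coefficients (tuned for adaptation) need to be stored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
d, num_layers, num_bases = 64, 12, 8

# Shared basis matrices, reused by every layer (frozen after compression).
bases = rng.standard_normal((num_bases, d, d))

# Per-layer mixing coefficients -- the only part that would be
# trained for PEFT-style adaptation in this sketch.
coeffs = rng.standard_normal((num_layers, num_bases))

def recombine(layer_idx: int) -> np.ndarray:
    """Reconstruct one layer's weight as a coefficient-weighted
    sum of the shared bases: W_l = sum_k coeffs[l, k] * bases[k]."""
    return np.einsum("k,kij->ij", coeffs[layer_idx], bases)

W0 = recombine(0)
assert W0.shape == (d, d)

# Parameter accounting: shared bases + coefficients vs. dense weights.
dense_params = num_layers * d * d
crisp_params = num_bases * d * d + num_layers * num_bases
print(dense_params, crisp_params)
```

Because the bases are shared, storage scales with `num_bases` rather than `num_layers`, giving compression whenever `num_bases < num_layers`; adapting to a new task touches only the `num_layers * num_bases` coefficients.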
Source: arXiv: 2603.27383