arXiv submission date: 2026-02-02
📄 Abstract - FlexRank: Nested Low-Rank Knowledge Decomposition for Adaptive Model Deployment

The growing scale of deep neural networks, encompassing large language models (LLMs) and vision transformers (ViTs), has made training from scratch prohibitively expensive and deployment increasingly costly. These models are typically used as computational monoliths with a fixed cost, a rigidity that fails to exploit their overparameterized architectures and largely hinders adaptive deployment across different cost budgets. We argue that importance-ordered nested components can be extracted from pretrained models and selectively activated based on the available computational budget. To this end, our proposed FlexRank method leverages low-rank weight decomposition with nested, importance-based consolidation to extract submodels of increasing capability. Our approach enables a "train-once, deploy-everywhere" paradigm that offers a graceful trade-off between cost and performance without training from scratch for each budget, advancing the practical deployment of large models.
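To make the core idea concrete, here is a minimal sketch of importance-ordered, nested low-rank decomposition, assuming truncated SVD as the factorization. The abstract does not specify FlexRank's actual consolidation procedure, and the function names (`decompose`, `submodel_weight`) are hypothetical illustrations, not the authors' API. The key property shown is nesting: every smaller-budget submodel is a prefix of every larger one, so no per-budget retraining is needed.

```python
import numpy as np

def decompose(weight: np.ndarray):
    """Factor a pretrained weight matrix into rank-1 components.
    SVD returns singular values in descending order, so component i
    is at least as important as component i+1 (importance-ordered)."""
    U, S, Vt = np.linalg.svd(weight, full_matrices=False)
    return U, S, Vt

def submodel_weight(U, S, Vt, rank: int) -> np.ndarray:
    """Reassemble the weight from the top-`rank` components only.
    Components are nested: a rank-32 submodel reuses exactly the
    first 32 components of the rank-128 submodel."""
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]

# Example: one pretrained layer served under three compute budgets.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))  # stand-in for a pretrained weight
U, S, Vt = decompose(W)

for rank in (32, 128, 512):  # small, medium, full budget
    W_r = submodel_weight(U, S, Vt, rank)
    err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
    print(f"rank={rank:3d}  relative reconstruction error={err:.3f}")
```

Because the components form a nested prefix hierarchy, deployment reduces to choosing a truncation rank per budget at load time; only one decomposition of the pretrained model is ever computed.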

Top-level tags: model training, systems, machine learning
Detailed tags: low-rank decomposition, adaptive deployment, model compression, cost-performance tradeoff, nested submodels

FlexRank: Nested Low-Rank Knowledge Decomposition for Adaptive Model Deployment


1️⃣ One-Sentence Summary

This paper proposes FlexRank, a method that extracts importance-ordered, nestable low-rank submodules from a pretrained large model, allowing a single model to flexibly adjust its size and performance to different computational budgets. This realizes a "train once, deploy everywhere" paradigm and lowers the practical deployment cost of large models.

Source: arXiv:2602.02680