📄 Paper Summary
Nemotron Elastic: Towards Efficient Many-in-One Reasoning LLMs
1️⃣ One-Sentence Summary
This paper proposes Nemotron Elastic, a framework that embeds multiple submodels of different sizes within a single parent model; each submodel can be deployed directly without any additional training, dramatically cutting the cost of building a family of reasoning LLMs at multiple scales.
2️⃣ Abstract
Training a family of large language models targeting multiple scales and deployment objectives is prohibitively expensive, requiring separate training runs for each different size. Recent work on model compression through pruning and knowledge distillation has reduced this cost; however, this process still incurs hundreds of billions of tokens worth of training cost per compressed model. In this paper, we present Nemotron Elastic, a framework for building reasoning-oriented LLMs, including hybrid Mamba-Attention architectures, that embed multiple nested submodels within a single parent model, each optimized for different deployment configurations and budgets. Each of these submodels shares weights with the parent model and can be extracted zero-shot during deployment without additional training or fine-tuning. We enable this functionality through an end-to-end trained router, tightly coupled to a two-stage training curriculum designed specifically for reasoning models. We additionally introduce group-aware SSM elastification that preserves Mamba's structural constraints, heterogeneous MLP elastification, normalized MSE-based layer importance for improved depth selection, and knowledge distillation enabling simultaneous multi-budget optimization. We apply Nemotron Elastic to the Nemotron Nano V2 12B model, simultaneously producing a 9B and a 6B model using only 110B training tokens; this results in over 360x cost reduction compared to training model families from scratch, and around 7x compared to SoTA compression techniques. Each of the nested models performs on par or better than the SoTA in accuracy. Moreover, unlike other compression methods, the nested capability of our approach allows having a many-in-one reasoning model that has constant deployment memory against the number of models in the family.
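To make the nesting idea concrete, below is a minimal PyTorch sketch of zero-shot submodel extraction by weight slicing. All names here (`extract_submodel_linear`, the budget arguments) and the keep-the-first-k slicing policy are illustrative assumptions, not the paper's actual code; the paper's group-aware SSM and heterogeneous MLP elastification impose structural constraints that this toy example does not model.

```python
# A minimal sketch of the zero-shot nested-submodel idea: a smaller child
# layer is carved out of the parent's weights with no retraining.
# Shapes, names, and the slicing policy are simplified assumptions.
import torch
import torch.nn as nn

def extract_submodel_linear(parent: nn.Linear, out_budget: int, in_budget: int) -> nn.Linear:
    """Build a smaller Linear whose weights are a slice of the parent's.

    Assumes channels are already ordered by importance, so the nested
    submodel keeps the leading out_budget x in_budget block.
    """
    child = nn.Linear(in_budget, out_budget, bias=parent.bias is not None)
    with torch.no_grad():
        child.weight.copy_(parent.weight[:out_budget, :in_budget])
        if parent.bias is not None:
            child.bias.copy_(parent.bias[:out_budget])
    return child

# Example: a parent projection sliced down to a smaller width budget.
parent = nn.Linear(4096, 4096)
child = extract_submodel_linear(parent, out_budget=3072, in_budget=3072)
# The child reuses the parent's learned values (a copied slice), so it can
# be deployed immediately, mirroring the zero-shot extraction claim above.
```

In the same spirit, here is a sketch of the simultaneous multi-budget distillation objective, assuming a simple sum of per-budget KL terms against the parent's (teacher's) logits; the paper's actual loss weighting and curriculum may differ.

```python
# A hypothetical multi-budget KD loss: every nested submodel is distilled
# from the shared parent in the same training step.
import torch
import torch.nn.functional as F

def multi_budget_kd_loss(teacher_logits: torch.Tensor,
                         student_logits_per_budget: list[torch.Tensor],
                         temperature: float = 1.0) -> torch.Tensor:
    """Sum of KL(teacher || student) over all nested budgets."""
    t = F.softmax(teacher_logits / temperature, dim=-1)
    loss = teacher_logits.new_zeros(())
    for s_logits in student_logits_per_budget:
        log_s = F.log_softmax(s_logits / temperature, dim=-1)
        loss = loss + F.kl_div(log_s, t, reduction="batchmean") * temperature ** 2
    return loss
```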