Nemotron-Flash: Towards Latency-Optimal Hybrid Small Language Models
1️⃣ One-Sentence Summary
This paper introduces Nemotron-Flash, a new family of hybrid small language models that optimizes the model's depth-width ratio, selects efficient operators, and improves the training method, significantly reducing real-device latency and increasing throughput while preserving accuracy.
Efficient deployment of small language models (SLMs) is essential for numerous real-world applications with stringent latency constraints. While previous work on SLM design has primarily focused on reducing the number of parameters to achieve parameter-optimal SLMs, parameter efficiency does not necessarily translate into proportional real-device speed-ups. This work aims to identify the key determinants of SLMs' real-device latency and offer generalizable principles and methodologies for SLM design and training when real-device latency is the primary consideration. Specifically, we identify two central architectural factors: depth-width ratios and operator choices. The former is crucial for small-batch-size latency, while the latter affects both latency and large-batch-size throughput. In light of this, we first study latency-optimal depth-width ratios, with the key finding that although deep-thin models generally achieve better accuracy under the same parameter budget, they may not lie on the accuracy-latency trade-off frontier. Next, we explore emerging efficient attention alternatives to evaluate their potential as candidate building operators. Using the identified promising operators, we construct an evolutionary search framework to automatically discover latency-optimal combinations of these operators within hybrid SLMs, thereby advancing the accuracy-latency frontier. In addition to architectural improvements, we further enhance SLM training using a weight normalization technique that enables more effective weight updates and improves final convergence. Combining these methods, we introduce a new family of hybrid SLMs, called Nemotron-Flash, which significantly advances the accuracy-efficiency frontier of state-of-the-art SLMs, e.g., achieving over +5.5% average accuracy, 1.3x/1.9x lower latency, and 18.7x/45.6x higher throughput compared to Qwen3-1.7B/0.6B, respectively.
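The abstract describes an evolutionary search over operator choices within hybrid SLMs but does not spell out the search procedure. The sketch below is a minimal, generic illustration of such a search: each candidate assigns one operator per layer, candidates are scored by an accuracy proxy combined with a latency term, and the fittest candidates are mutated to form the next generation. The operator list, layer count, scoring functions, and mutation scheme are all placeholder assumptions for illustration, not the paper's actual search space or objective.

```python
# Illustrative sketch of an evolutionary search over per-layer operator choices
# in a hybrid SLM. All names and numbers below are hypothetical placeholders;
# the paper's real search space, latency measurement, and accuracy proxy differ.
import random

OPERATORS = ["full_attention", "linear_attention", "sliding_window", "ssm"]  # assumed candidate set
NUM_LAYERS = 24
POP_SIZE = 32
GENERATIONS = 20
MUTATION_RATE = 0.15


def random_architecture():
    """One candidate = one operator choice per layer."""
    return [random.choice(OPERATORS) for _ in range(NUM_LAYERS)]


def score(arch):
    """Stand-in objective combining an accuracy proxy with a latency penalty.
    In practice these would come from short training runs or zero-cost proxies
    plus on-device latency profiling; here they are random placeholders."""
    proxy_accuracy = random.random()            # placeholder accuracy proxy
    measured_latency = random.uniform(1.0, 10)  # placeholder real-device latency (ms)
    return proxy_accuracy - 0.05 * measured_latency


def mutate(arch):
    """Randomly swap the operator of a few layers."""
    return [random.choice(OPERATORS) if random.random() < MUTATION_RATE else op
            for op in arch]


def evolutionary_search():
    population = [random_architecture() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=score, reverse=True)
        parents = ranked[: POP_SIZE // 4]  # keep the top quartile as parents
        children = [mutate(random.choice(parents))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=score)


if __name__ == "__main__":
    print(evolutionary_search())
```

The key design point this sketch mirrors is that the fitness function jointly rewards accuracy and penalizes measured latency, so the search converges toward operator mixes on the accuracy-latency frontier rather than the parameter-count frontier.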