Capacity-Aware Mixture Law Enables Efficient LLM Data Optimization
1️⃣ One-sentence summary
This paper proposes a new method called CAMEL: a law that captures the nonlinear relationship between model capacity and data mixture, allowing the optimal data composition for training large language models to be predicted at much lower compute cost. It halves the mixture optimization overhead while improving model performance by up to 3%.
A data mixture refers to how different data sources are combined to train large language models, and selecting an effective mixture is crucial for optimal downstream performance. Existing methods either conduct costly searches directly on the target model or rely on mixture scaling laws that fail to extrapolate well to large model sizes. We address these limitations by introducing a compute-efficient pipeline for data mixture scaling. First, we propose CAMEL, a capacity-aware mixture law that models validation loss via the nonlinear interplay between model size and mixture. We also introduce a loss-to-benchmark prediction law that estimates benchmark accuracy from validation loss, enabling end-to-end performance prediction for the target model. Next, we study how to allocate a fixed compute budget across model scales to fit the law and reduce prediction error. Finally, we apply our method to Mixture-of-Experts models with up to 7B-A150M parameters to fit the law, and verify the optimal mixture derived from the law by extrapolating to a 55B-A1.2B target model. Compared to prior methods, our approach reduces mixture optimization costs by 50\% and improves downstream benchmark performance by up to 3\%.
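The abstract does not give CAMEL's exact functional form, but the overall idea of a capacity-aware mixture law can be sketched as follows: fit a parametric loss model on cheap small-scale runs, where the effect of the mixture weight interacts nonlinearly with model size, then extrapolate to the large target model. Everything below (the functional form, `mixture_law`, the synthetic data) is an illustrative assumption, not the paper's actual law.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stand-in for a capacity-aware mixture law (the paper's
# exact form is not shown in the abstract): predicted validation loss
# depends on model size N and the mixture weight w of one data source,
# with a capacity-dependent (nonlinear) interaction term.
def mixture_law(X, a, b, alpha, gamma):
    log_n, w = X
    # Power law in model size, plus a mixture term whose influence
    # decays with capacity -- the "nonlinear interplay" the law models.
    return a * np.exp(-alpha * log_n) + b * w * np.exp(-gamma * log_n)

# Synthetic loss measurements from small proxy runs (illustrative only).
rng = np.random.default_rng(0)
log_n = np.log(np.array([1e7, 1e7, 1e8, 1e8, 1e9, 1e9, 1e7, 1e8]))
w = np.array([0.2, 0.8, 0.2, 0.8, 0.2, 0.8, 0.5, 0.5])
y = mixture_law((log_n, w), 20.0, 5.0, 0.10, 0.05)
y = y + rng.normal(0.0, 0.01, size=y.shape)

# Fit the law on the small-scale runs...
params, _ = curve_fit(mixture_law, (log_n, w), y,
                      p0=[10.0, 1.0, 0.2, 0.1], maxfev=20000)

# ...then extrapolate the loss of a much larger target model.
target_loss = mixture_law((np.log(np.array([55e9])), np.array([0.2])), *params)
print(target_loss)
```

In the paper this prediction would be chained with the loss-to-benchmark law to estimate downstream accuracy, and the mixture weights would be optimized against the fitted law rather than by retraining the target model.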
Source: arXiv: 2603.08022