arXiv submission date: 2026-03-16
📄 Abstract - Towards Next-Generation LLM Training: From the Data-Centric Perspective

Large language models (LLMs) have demonstrated remarkable performance across a wide range of tasks and domains, with data playing a central role in enabling these advances. Despite this success, the preparation and effective utilization of the massive datasets required for LLM training remain major bottlenecks. In current practice, LLM training data is often constructed using ad hoc scripts, and there is still a lack of mature, agent-based data preparation systems that can automatically construct robust and reusable data workflows, thereby freeing data scientists from repetitive and error-prone engineering efforts. Moreover, once collected, datasets are often consumed largely in their entirety during training, without systematic mechanisms for data selection, mixture optimization, or reweighting. To address these limitations, we advocate two complementary research directions. First, we propose building a robust, agent-based automatic data preparation system that supports automated workflow construction and scalable data management. Second, we argue for a unified data-model interaction training system in which data is dynamically selected, mixed, and reweighted throughout the training process, enabling more efficient, adaptive, and performance-aware data utilization. Finally, we discuss the remaining challenges and outline promising directions for future research and system development.
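The second direction — dynamically selecting, mixing, and reweighting data throughout training — can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's prescribed method: per-domain sampling weights are adjusted with a multiplicative (exponentiated-gradient-style) update that upweights domains where recent training loss is high, so harder data receives a larger share of the mixture.

```python
import math

def update_mixture_weights(weights, domain_losses, lr=0.5):
    """Multiplicative reweighting: upweight domains with higher recent loss.

    weights: dict mapping domain name -> current sampling probability (sums to 1)
    domain_losses: dict mapping domain name -> recent average training loss
    lr: step size controlling how aggressively the mixture shifts
    Returns a new normalized weight dict.
    """
    # Exponentiated-gradient-style update: w_i <- w_i * exp(lr * loss_i)
    raw = {d: w * math.exp(lr * domain_losses[d]) for d, w in weights.items()}
    total = sum(raw.values())
    return {d: v / total for d, v in raw.items()}

# Toy usage: three data domains start with equal sampling weights.
weights = {"web": 1 / 3, "code": 1 / 3, "books": 1 / 3}
losses = {"web": 2.0, "code": 3.0, "books": 1.5}  # "code" is hardest
weights = update_mixture_weights(weights, losses)
# After the update, the hardest domain ("code") has the largest weight.
```

In a real training loop this update would run periodically between data batches, with losses estimated on held-out per-domain samples; the names and the specific update rule here are illustrative assumptions.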

Top-level tags: llm, model training, data
Detailed tags: data-centric ai, training data, data preparation, data selection, workflow automation

Towards Next-Generation LLM Training: From the Data-Centric Perspective


1️⃣ One-Sentence Summary

This paper argues that current large language model training suffers from inefficient, insufficiently automated data preparation and data usage, and proposes two remedies: an automated, agent-based data preparation system, and a training framework that dynamically optimizes how data is used, to enable next-generation model training that is more efficient and more adaptive.

Source: arXiv 2603.14712