arXiv submission date: 2026-01-29
📄 Abstract - FineInstructions: Scaling Synthetic Instructions to Pre-Training Scale

Due to limited supervised training data, large language models (LLMs) are typically pre-trained via a self-supervised "predict the next word" objective on a vast amount of unstructured text data. To make the resulting model useful to users, it is further trained on a far smaller amount of "instruction-tuning" data comprised of supervised training examples of instructions and responses. To overcome the limited amount of supervised data, we propose a procedure that can transform the knowledge in internet-scale pre-training documents into billions of synthetic instruction and answer training pairs. The resulting dataset, called FineInstructions, uses ~18M instruction templates created from real user-written queries and prompts. These instruction templates are matched to and instantiated with human-written source documents from unstructured pre-training corpora. With "supervised" synthetic training data generated at this scale, an LLM can be pre-trained from scratch solely with the instruction-tuning objective, which is far more in-distribution with the expected downstream usage of LLMs (responding to user prompts). We conduct controlled token-for-token training experiments and find pre-training on FineInstructions outperforms standard pre-training and other proposed synthetic pre-training techniques on standard benchmarks measuring free-form response quality. Our resources can be found at this https URL .
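The abstract describes matching ~18M instruction templates (derived from real user queries) against human-written source documents and instantiating each match into a synthetic instruction/answer pair. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' released pipeline: the `embed` and `instantiate` functions, the template strings, and the similarity-based matching are all assumptions made for illustration.

```python
# Hypothetical sketch of a FineInstructions-style pipeline as described in the
# abstract: match instruction templates to source documents, then instantiate
# each matched template with the document to form an instruction/answer pair.
# `embed` and `instantiate` are placeholder stand-ins for a sentence encoder
# and an LLM-based rewriter, respectively.

from dataclasses import dataclass

import numpy as np


@dataclass
class TrainingPair:
    instruction: str
    answer: str


def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedding function; returns unit-normalized random vectors.
    In practice this would be any sentence-embedding model."""
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(len(texts), 128))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)


def instantiate(template: str, document: str) -> TrainingPair:
    """Placeholder instantiation step: an LLM would fill the template with
    content grounded in the document and generate the matching answer."""
    instruction = template.replace("{document_topic}", document[:40])
    answer = f"(answer grounded in the source document: {document[:80]}...)"
    return TrainingPair(instruction, answer)


def build_pairs(templates: list[str], documents: list[str], top_k: int = 2) -> list[TrainingPair]:
    """For each document, retrieve the top_k most similar templates by cosine
    similarity and instantiate them into instruction/answer pairs."""
    t_vecs, d_vecs = embed(templates), embed(documents)
    sims = d_vecs @ t_vecs.T  # cosine similarity (rows are unit-normalized)
    pairs = []
    for d_idx, doc in enumerate(documents):
        for t_idx in np.argsort(-sims[d_idx])[:top_k]:
            pairs.append(instantiate(templates[t_idx], doc))
    return pairs


if __name__ == "__main__":
    templates = [
        "Summarize the key findings about {document_topic}.",
        "Write a step-by-step guide based on {document_topic}.",
    ]
    documents = ["A web page describing how transformer language models are pre-trained..."]
    for pair in build_pairs(templates, documents):
        print(pair.instruction)
```

At pre-training scale, the retrieval step would of course use approximate nearest-neighbor search rather than a dense similarity matrix; the sketch only conveys the match-then-instantiate structure.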

Top tags: llm model training data
Detailed tags: instruction tuning synthetic data generation pre-training scaling language model

FineInstructions: Scaling Synthetic Instructions to Pre-Training Scale


1️⃣ One-Sentence Summary

This paper proposes a new method that automatically converts massive amounts of internet pre-training text into billions of high-quality instruction-answer pairs, allowing a large language model to be pre-trained from scratch with the instruction-tuning objective and ultimately outperforming standard pre-training on tasks that involve responding to user queries.

Source: arXiv:2601.22146