arXiv submission date: 2026-02-09
📄 Abstract - Next Concept Prediction in Discrete Latent Space Leads to Stronger Language Models

We propose Next Concept Prediction (NCP), a generative pretraining paradigm built on top of Next Token Prediction (NTP). NCP predicts discrete concepts that span multiple tokens, thereby forming a more challenging pretraining objective. Our model, ConceptLM, quantizes hidden states using Vector Quantization and constructs a concept vocabulary. It leverages both NCP and NTP to drive parameter updates and generates a concept to guide the generation of the following tokens. We train ConceptLM from scratch at scales ranging from 70M to 1.5B parameters on up to 300B tokens of training data, using Pythia and GPT-2 backbones. Results on 13 benchmarks show that NCP yields consistent performance gains over traditional token-level models. Furthermore, continual pretraining experiments on an 8B-parameter Llama model indicate that NCP can further improve an NTP-trained model. Our analysis suggests that NCP leads to more powerful language models by introducing a harder pretraining task, providing a promising path toward better language modeling.
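
To make the abstract's mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the two pieces it describes: quantizing hidden states against a learned concept codebook (vector quantization with a straight-through estimator), and training with both a next-token (NTP) and a next-concept (NCP) cross-entropy loss. The module and function names (`ConceptQuantizer`, `combined_loss`), the `ncp_weight` hyperparameter, and the straight-through trick are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the NCP idea: hidden states are snapped to the
# nearest entry of a discrete concept vocabulary, and the model is trained
# with both a next-token loss and a next-concept loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptQuantizer(nn.Module):
    """Maps hidden states to the nearest entry in a discrete concept vocabulary."""

    def __init__(self, num_concepts: int, hidden_dim: int):
        super().__init__()
        self.codebook = nn.Embedding(num_concepts, hidden_dim)

    def forward(self, hidden: torch.Tensor):
        # hidden: (batch, seq_len, hidden_dim)
        flat = hidden.reshape(-1, hidden.size(-1))              # (B*T, D)
        # Distance from each hidden state to every codebook vector.
        dists = torch.cdist(flat, self.codebook.weight)         # (B*T, K)
        concept_ids = dists.argmin(dim=-1)                      # (B*T,)
        quantized = self.codebook(concept_ids).view_as(hidden)
        # Straight-through estimator so gradients still reach the encoder.
        quantized = hidden + (quantized - hidden).detach()
        return quantized, concept_ids.view(hidden.shape[:-1])


def combined_loss(token_logits, token_targets, concept_logits, concept_targets,
                  ncp_weight: float = 1.0):
    """Sum of the standard next-token (NTP) loss and the next-concept (NCP) loss."""
    ntp = F.cross_entropy(token_logits.flatten(0, 1), token_targets.flatten())
    ncp = F.cross_entropy(concept_logits.flatten(0, 1), concept_targets.flatten())
    return ntp + ncp_weight * ncp
```

In this reading, the concept head gives the model a coarser, multi-token target in addition to the usual single-token target, which is what makes the pretraining task harder.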

Top-level tags: llm, model training, natural language processing
Detailed tags: next concept prediction, vector quantization, pretraining objective, language model scaling, discrete latent space

Next Concept Prediction in Discrete Latent Space Leads to Stronger Language Models


1️⃣ One-sentence summary

This paper proposes a new training method called "Next Concept Prediction", which has the model learn to predict complete "concepts" spanning multiple tokens rather than single tokens. By setting up a harder pretraining task, it consistently improves language model performance across a range of benchmarks.

Source: arXiv 2602.08984