
🤖 System
📄 Abstract - Every Token Counts: Generalizing 16M Ultra-Long Context in Large Language Models

This work explores the challenge of building "Machines that Can Remember", framing long-term memory as the problem of efficient ultra-long context modeling. We argue that this requires three key properties: sparsity, random-access flexibility, and length generalization. To address ultra-long-context modeling, we leverage Hierarchical Sparse Attention (HSA), a novel attention mechanism that satisfies all three properties. We integrate HSA into Transformers to build HSA-UltraLong, an 8B-parameter MoE model trained on over 8 trillion tokens and rigorously evaluated on tasks spanning in-domain and out-of-domain context lengths to demonstrate its capability in handling ultra-long contexts. Results show that our model performs comparably to full-attention baselines on in-domain lengths while achieving over 90% accuracy on most in-context retrieval tasks with contexts up to 16M tokens. This report outlines our experimental insights and open problems, contributing a foundation for future research in ultra-long context modeling.
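The abstract does not spell out the HSA mechanism itself, so the following is only a minimal illustrative sketch (in PyTorch) of the general chunk-then-select pattern behind hierarchical sparse attention: a coarse level scores mean-pooled chunk keys, and exact attention runs only inside each query's top-k chunks. The function and parameter names (`hierarchical_sparse_attention`, `chunk_size`, `top_k`) and the mean-pooling choice are hypothetical assumptions, not the paper's actual design.

```python
# Hypothetical sketch of chunk-level hierarchical sparse attention.
# Assumptions (not from the paper): fixed chunk size, mean-pooled chunk keys,
# per-query top-k chunk selection, softmax attention restricted to selected chunks.
import torch
import torch.nn.functional as F


def hierarchical_sparse_attention(q, k, v, chunk_size=64, top_k=4):
    """q: (T_q, d); k, v: (T_kv, d). Returns (T_q, d)."""
    T_kv, d = k.shape
    n_chunks = T_kv // chunk_size  # assume T_kv divisible by chunk_size for brevity

    k_chunks = k[: n_chunks * chunk_size].view(n_chunks, chunk_size, d)
    v_chunks = v[: n_chunks * chunk_size].view(n_chunks, chunk_size, d)

    # Level 1: coarse chunk summaries (mean-pooled keys) for cheap retrieval.
    chunk_keys = k_chunks.mean(dim=1)                       # (n_chunks, d)
    chunk_scores = q @ chunk_keys.T / d ** 0.5              # (T_q, n_chunks)
    top_chunks = chunk_scores.topk(min(top_k, n_chunks), dim=-1).indices  # (T_q, top_k)

    # Level 2: exact attention only inside each query's selected chunks (sparse access).
    k_sel = k_chunks[top_chunks].flatten(1, 2)              # (T_q, top_k*chunk_size, d)
    v_sel = v_chunks[top_chunks].flatten(1, 2)

    attn = torch.einsum('td,tsd->ts', q, k_sel) / d ** 0.5
    w = F.softmax(attn, dim=-1)
    return torch.einsum('ts,tsd->td', w, v_sel)


# Tiny usage example on random tensors.
q = torch.randn(8, 32)
k = torch.randn(1024, 32)
v = torch.randn(1024, 32)
out = hierarchical_sparse_attention(q, k, v)
print(out.shape)  # torch.Size([8, 32])
```

In this pattern, per-query cost scales with the number of chunk summaries plus top_k * chunk_size selected tokens rather than the full context length, which is the usual motivation for chunk-based sparse attention at multi-million-token contexts.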

Top-level tags: llm model training natural language processing
Detailed tags: long context sparse attention memory length generalization moe

Every Token Counts: Generalizing 16M Ultra-Long Context in Large Language Models


1️⃣ One-Sentence Summary

This paper proposes a new method called Hierarchical Sparse Attention and integrates it into the model, enabling an 8-billion-parameter AI model to efficiently process and remember ultra-long text of up to 16 million tokens, with strong performance across multiple benchmarks.

