📄 Abstract - Xmodel-2.5: 1.3B Data-Efficient Reasoning SLM

Large language models deliver strong reasoning and tool-use skills, yet their computational demands make them impractical for edge or cost-sensitive deployments. We present Xmodel-2.5, a 1.3-billion-parameter small language model designed as a drop-in agent core. Training with maximal-update parameterization (μP) allows hyper-parameters tuned on a 20M-parameter proxy to transfer directly to the full model, even under the parameter-tied tie-word-embedding architecture. A 1.4T-token Warmup-Stable-Decay curriculum is used, and we further show that switching from AdamW to Muon during the decay phase improves the 13-task reasoning average by 4.58% while keeping every other hyper-parameter fixed, verifying that early AdamW stability can be paired with late Muon sharpening for better downstream performance. FP8 mixed-precision training balances accuracy and throughput. All checkpoints, recipes, and evaluation code are released under the Apache-2.0 license; see this https URL and this https URL (training checkpoints). Training code and evaluation harness: this https URL.
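
The headline recipe detail is the mid-training optimizer switch: AdamW handles the warmup and stable phases, and Muon takes over for the decay phase with every other hyper-parameter unchanged. Below is a minimal sketch of how such a switch could be wired, assuming PyTorch and a user-supplied Muon implementation with the standard torch.optim.Optimizer constructor; it is not the authors' released code, and the learning rate, betas, and weight decay shown are illustrative placeholders.

```python
# Sketch only (not the authors' code): keep AdamW for the warmup and stable
# phases of the WSD curriculum, then rebuild the optimizer as Muon at the
# start of the decay phase, leaving all other settings untouched.
import torch


def build_optimizer(model, step, total_steps, lr, decay_frac=0.1, muon_cls=None):
    """Return the optimizer to use at the current step of a WSD run.

    `muon_cls` is any Muon implementation exposing the usual
    torch.optim.Optimizer constructor (params, lr=...); it is a placeholder
    here rather than a reference to a specific library.
    """
    decay_start = int(total_steps * (1.0 - decay_frac))
    if step >= decay_start and muon_cls is not None:
        # Switch point: the optimizer state is reinitialized, while the
        # learning rate and all other training hyper-parameters stay fixed.
        return muon_cls(model.parameters(), lr=lr)
    return torch.optim.AdamW(
        model.parameters(), lr=lr, betas=(0.9, 0.95), weight_decay=0.1
    )
```

In practice the switch happens exactly once, at the decay boundary; the function above just makes that phase boundary explicit so a training loop can call it when entering the decay phase.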

Top-level tags: llm model training systems
Detailed tags: small language model, parameter-efficient training, reasoning, edge deployment, mixed-precision training

Xmodel-2.5: 1.3B Data-Efficient Reasoning SLM


1️⃣ One-Sentence Summary

This paper presents Xmodel-2.5, a 1.3-billion-parameter small language model that achieves efficient reasoning through its training methods, such as maximal-update parameterization, a staged training curriculum, and an optimizer switch, and aims to stand in for large models at lower computational cost in edge or cost-sensitive settings.
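
The "staged training curriculum" here is the Warmup-Stable-Decay (WSD) schedule described in the abstract: a short warmup, a long flat plateau, and a final decay phase (where the optimizer switch above takes place). A small illustrative sketch of such a schedule follows; the phase fractions and learning rates are placeholders, not the paper's published settings.

```python
# Illustrative Warmup-Stable-Decay (WSD) learning-rate schedule.
# Phase fractions and learning rates are placeholders, not Xmodel-2.5's
# published hyper-parameters.
def wsd_lr(step, total_steps, peak_lr=3e-4, min_lr=3e-5,
           warmup_frac=0.01, decay_frac=0.1):
    warmup_steps = max(int(total_steps * warmup_frac), 1)
    decay_start = int(total_steps * (1.0 - decay_frac))
    if step < warmup_steps:                       # linear warmup
        return peak_lr * step / warmup_steps
    if step < decay_start:                        # long stable plateau
        return peak_lr
    frac = (step - decay_start) / max(total_steps - decay_start, 1)
    return peak_lr + frac * (min_lr - peak_lr)    # linear decay to min_lr


if __name__ == "__main__":
    total = 100_000
    for s in (0, 500, 50_000, 95_000, 99_999):
        print(s, round(wsd_lr(s, total), 6))
```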


📄 Open the Original PDF