arXiv submission date: 2026-01-11
📄 Abstract - Solar Open Technical Report

We introduce Solar Open, a 102B-parameter bilingual Mixture-of-Experts language model for underserved languages. Solar Open demonstrates a systematic methodology for building competitive LLMs by addressing three interconnected challenges. First, to train effectively despite data scarcity for underserved languages, we synthesize 4.5T tokens of high-quality, domain-specific, and RL-oriented data. Second, we coordinate this data through a progressive curriculum jointly optimizing composition, quality thresholds, and domain coverage across 20 trillion tokens. Third, to enable reasoning capabilities through scalable RL, we apply our proposed framework SnapPO for efficient optimization. Across benchmarks in English and Korean, Solar Open achieves competitive performance, demonstrating the effectiveness of this methodology for underserved language AI development.
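The abstract describes the architecture only at a high level. As a rough illustration of how token-level top-k expert routing works in a generic sparse Mixture-of-Experts layer, here is a minimal PyTorch sketch; the expert count, hidden sizes, and top-k value are placeholder assumptions, not Solar Open's published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k routed Mixture-of-Experts feed-forward layer.

    Illustrative only: sizes and the routing scheme are placeholders,
    not Solar Open's published configuration.
    """

    def __init__(self, d_model: int, d_ff: int, n_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); pick each token's top-k experts by router score.
        logits = self.router(x)                         # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # (tokens, top_k)
        weights = F.softmax(weights, dim=-1)            # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Toy usage: total parameters scale with n_experts while per-token compute
# stays roughly constant, which is the core trade-off behind sparse MoE.
layer = TopKMoE(d_model=64, d_ff=256, n_experts=8, top_k=2)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```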

Top tags: llm, model training, natural language processing
Detailed tags: mixture-of-experts, low-resource languages, data synthesis, reinforcement learning, bilingual model

Solar Open Technical Report


1️⃣ One-Sentence Summary

This paper introduces Solar Open, a 102B-parameter bilingual Mixture-of-Experts model developed for underserved languages. By synthesizing high-quality data, designing a progressive training curriculum, and applying an efficient reinforcement learning framework, it overcomes data scarcity and achieves competitive performance on English and Korean benchmarks.
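The abstract states that the curriculum jointly optimizes data composition, quality thresholds, and domain coverage across 20 trillion tokens. Below is a minimal sketch of what such a staged mixture schedule could look like, assuming a simple token-budget stage switch; the stage boundaries, domain names, and weights are hypothetical placeholders, not values from the report.

```python
# Hypothetical staged data-mixture schedule: each stage shifts sampling
# weights toward higher-quality, more domain-specific data. Stage budgets
# sum to the 20T-token total mentioned in the abstract; the domains and
# weights themselves are illustrative placeholders, not reported values.
CURRICULUM = [
    # (stage token budget, {domain: sampling weight})
    (8e12, {"web": 0.70, "code": 0.10, "synthetic": 0.10, "korean": 0.10}),
    (8e12, {"web": 0.45, "code": 0.20, "synthetic": 0.20, "korean": 0.15}),
    (4e12, {"web": 0.25, "code": 0.25, "synthetic": 0.30, "korean": 0.20}),
]

def mixture_at(tokens_seen: float) -> dict[str, float]:
    """Return the sampling weights in effect after `tokens_seen` tokens."""
    consumed = 0.0
    for budget, weights in CURRICULUM:
        consumed += budget
        if tokens_seen < consumed:
            return weights
    return CURRICULUM[-1][1]  # hold the final mixture past the last boundary

print(mixture_at(1e12))   # early stage: web-heavy mixture
print(mixture_at(19e12))  # late stage: synthetic/domain-heavy mixture
```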

Source: arXiv 2601.07022