arXiv submission date: 2026-01-05
📄 Abstract - K-EXAONE Technical Report

This technical report presents K-EXAONE, a large-scale multilingual language model developed by LG AI Research. K-EXAONE is built on a Mixture-of-Experts architecture with 236B total parameters, activating 23B parameters during inference. It supports a 256K-token context window and covers six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. We evaluate K-EXAONE on a comprehensive benchmark suite spanning reasoning, agentic, general, Korean, and multilingual abilities. Across these evaluations, K-EXAONE demonstrates performance comparable to open-weight models of similar size. K-EXAONE, designed to advance AI for a better life, is positioned as a powerful proprietary AI foundation model for a wide range of industrial and research applications.
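The gap between total and activated parameters (236B vs. 23B) comes from Mixture-of-Experts routing: each token is dispatched to only a few expert sub-networks, so most weights stay idle per forward pass. A minimal sketch of top-k expert routing is shown below; the expert count, dimensions, and k are hypothetical and do not reflect K-EXAONE's actual configuration.

```python
import numpy as np

def moe_forward(x, expert_weights, gate_weights, k=2):
    """Route a token vector x to its top-k experts by gate score.

    Illustrative only: real MoE layers add load-balancing losses,
    capacity limits, and run inside transformer blocks.
    """
    scores = gate_weights @ x                      # one gate score per expert
    topk = np.argsort(scores)[-k:]                 # indices of the k best experts
    probs = np.exp(scores[topk] - scores[topk].max())
    probs /= probs.sum()                           # softmax over selected experts
    # Only the chosen experts compute anything; the rest stay inactive,
    # which is why just a fraction of total parameters is used per token.
    return sum(p * (expert_weights[i] @ x) for p, i in zip(probs, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.standard_normal(d)
experts = rng.standard_normal((n_experts, d, d))   # one weight matrix per expert
gates = rng.standard_normal((n_experts, d))        # router projection
y = moe_forward(x, experts, gates, k=2)
```

With k=2 of 4 experts active, roughly half the expert parameters participate in each token's computation, mirroring (at toy scale) the 23B-of-236B activation ratio reported above.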

Top-level tags: llm, natural language processing, systems
Detailed tags: multilingual language model, mixture-of-experts, large language model, model evaluation, foundation model

K-EXAONE Technical Report


1️⃣ One-sentence summary

LG AI Research has developed K-EXAONE, a large multilingual Mixture-of-Experts model that performs strongly on reasoning, agentic, general, and multilingual tasks, positioned as a powerful proprietary foundation model for a wide range of industrial and research applications.

Source: arXiv: 2601.01739