arXiv submission date: 2026-01-26
📄 Abstract - Calibrating Beyond English: Language Diversity for Better Quantized Multilingual LLM

Quantization is an effective technique for reducing the storage footprint and computational costs of Large Language Models (LLMs), but it often results in performance degradation. Existing post-training quantization methods typically use small, English-only calibration sets; however, their impact on multilingual models remains underexplored. We systematically evaluate eight calibration settings (five single-language and three multilingual mixes) with two quantizers (GPTQ, AWQ) on data from 10 languages. Our findings reveal a consistent trend: non-English and multilingual calibration sets significantly improve perplexity compared to English-only baselines. Specifically, we observe notable average perplexity gains across both quantizers on Llama3.1 8B and Qwen2.5 7B, with multilingual mixes achieving the largest overall reductions of up to 3.52 points in perplexity. Furthermore, our analysis indicates that tailoring calibration sets to the evaluation language yields the largest improvements for individual languages, underscoring the importance of linguistic alignment. We also identify specific failure cases where certain language-quantizer combinations degrade performance, which we trace to differences in activation range distributions across languages. These results highlight that static one-size-fits-all calibration is suboptimal and that tailoring calibration data, both in language and diversity, plays a crucial role in robustly quantizing multilingual LLMs.
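To make the setup concrete, below is a minimal sketch of how a multilingual calibration set could be fed to a post-training quantizer. It assumes the Hugging Face transformers GPTQ integration, where `GPTQConfig` accepts a list of raw calibration strings; the model name, sample texts, and language mix are illustrative placeholders, not the paper's exact pipeline.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-3.1-8B"  # illustrative; the paper also studies Qwen2.5 7B
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical multilingual calibration mix: a few raw-text samples per language.
# The paper's finding is that such a mix (or evaluation-language data) yields
# lower perplexity after quantization than an English-only calibration set.
calibration_texts = [
    "The quick brown fox jumps over the lazy dog.",             # English
    "El rápido zorro marrón salta sobre el perro perezoso.",    # Spanish
    "敏捷的棕色狐狸跳过了懒狗。",                                  # Chinese
    "Der schnelle braune Fuchs springt über den faulen Hund.",  # German
    # ... more samples and languages in practice
]

# GPTQ post-training quantization driven by the multilingual calibration data.
gptq_config = GPTQConfig(
    bits=4,
    dataset=calibration_texts,  # custom calibration set instead of a default English corpus
    tokenizer=tokenizer,
)

quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)
```

The same idea applies to AWQ: only the calibration data changes, while the quantization algorithm and bit-width stay fixed, which is what lets the paper isolate the effect of calibration-language choice.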

Top-level tags: llm, model training, natural language processing
Detailed tags: quantization, multilingual models, calibration, perplexity, language diversity

Calibrating Beyond English: Language Diversity for Better Quantized Multilingual LLM


1️⃣ One-sentence summary

This paper finds that when compressing multilingual LLMs, using calibration data that spans multiple languages rather than English alone significantly improves model quality, showing that tailoring the calibration strategy to the target languages is crucial.
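The improvements are reported as perplexity reductions on held-out text per language. Below is a minimal sketch of such a per-language perplexity measurement (exponentiated teacher-forced cross-entropy); the function and variable names are illustrative, not the paper's evaluation code.

```python
import math
import torch

def perplexity(model, tokenizer, text: str, max_length: int = 2048) -> float:
    """Perplexity of `text` under `model`: exp of the mean next-token cross-entropy."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
    input_ids = enc.input_ids.to(model.device)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean LM loss.
        loss = model(input_ids, labels=input_ids).loss
    return math.exp(loss.item())

# Hypothetical usage: compare the quantized model's perplexity across languages.
# eval_texts = {"en": "...", "de": "...", "zh": "..."}
# for lang, text in eval_texts.items():
#     print(lang, perplexity(quantized_model, tokenizer, text))
```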

Source: arXiv:2601.18306