arXiv submission date: 2026-04-02
📄 Abstract - Adam's Law: Textual Frequency Law on Large Language Models

While textual frequency has been validated as relevant to human cognition in reading speed, its relationship to Large Language Models (LLMs) is seldom studied. We propose a novel research direction centered on textual data frequency, which is, to the best of our knowledge, an understudied topic. Our framework comprises three units. First, this paper proposes the Textual Frequency Law (TFL), which states that more frequent textual data should be preferred for LLMs in both prompting and fine-tuning. Since the training data of many LLMs is closed-source, we propose using online resources to estimate sentence-level frequency, and we then employ an input paraphraser to rewrite the input into a more frequent textual expression. Next, we propose Textual Frequency Distillation (TFD): we query LLMs to perform story completion by further extending the sentences in the datasets, and the resulting corpora are used to adjust the initial estimate. Finally, we propose Curriculum Textual Frequency Training (CTFT), which fine-tunes LLMs in increasing order of sentence-level frequency. Experiments are conducted on our curated Textual Frequency Paired Dataset (TFPD), covering math reasoning, machine translation, commonsense reasoning, and agentic tool calling. Results show the effectiveness of our framework.
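The CTFT step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the abstract does not specify the frequency estimator, so the mean-of-word-counts proxy and the `curriculum_order` helper below are assumptions introduced purely for illustration.

```python
from typing import Dict, List


def curriculum_order(sentences: List[str], freq_counts: Dict[str, int]) -> List[str]:
    """Order training sentences by estimated sentence-level frequency, ascending,
    so that fine-tuning proceeds from rarer to more frequent expressions (CTFT)."""

    def sentence_freq(s: str) -> float:
        # Hypothetical proxy for sentence-level frequency: the mean corpus
        # count of the sentence's words. The paper instead estimates this
        # from online resources, which are not detailed in the abstract.
        words = s.lower().split()
        if not words:
            return 0.0
        return sum(freq_counts.get(w, 0) for w in words) / len(words)

    return sorted(sentences, key=sentence_freq)


# Toy word counts from a hypothetical reference corpus.
counts = {"the": 100, "cat": 40, "sat": 30, "perched": 2, "feline": 3}
data = ["the cat sat", "the feline perched"]

# The rarer phrasing comes first in the curriculum.
print(curriculum_order(data, counts))  # → ['the feline perched', 'the cat sat']
```

The same frequency estimate could drive the prompting side of TFL: among candidate paraphrases of an input, prefer the one with the highest estimated frequency.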

Top-level tags: llm, model training, natural language processing
Detailed tags: textual frequency, data frequency, fine-tuning, prompting, curriculum learning

Adam's Law: Textual Frequency Law on Large Language Models


1️⃣ One-sentence summary

This paper proposes and validates the "Textual Frequency Law": using more common textual expressions, whether in prompting or in training, significantly improves the performance of large language models across tasks such as math reasoning and translation.

Source: arXiv:2604.02176