Heterogeneity in Formal Linguistic Competence of Language Models: Is Data the Real Bottleneck?
1️⃣ One-Sentence Summary
By injecting just 1% of targeted synthetic text into the training data, this work finds that small language models improve substantially on most grammatical phenomena where they previously performed poorly, indicating that data scarcity, rather than architectural flaws, is the main cause, although some stubborn grammatical errors cannot be fixed by data augmentation alone.
Large Language Models (LLMs) exhibit a puzzling disparity in their formal linguistic competence: while they learn some linguistic phenomena with near-perfect mastery, they often perform below chance on others, even after training on trillions of tokens. In this work, we investigate whether these failures stem from inherent architectural limitations or simply the scarcity of these specific grammatical constructions in web-scale corpora. We pre-train simple GPT-2 Small (124M) models on a 100M-token random sample of the FineWeb corpus and intervene by injecting a minimal amount (1%) of synthetic data targeting specific linguistic phenomena. We find that this targeted intervention substantially improves model performance in 8 out of the 9 worst-performing BLiMP paradigms; notably, the accuracy on a specific paradigm, only_npi_scope, surges from 20.9% to 69.4%. Furthermore, we observe that these interventions generally preserve or slightly improve aggregate performance. However, while we also identify a resistant phenomenon, principle_A_c_command, whose performance remains below chance even after our data augmentation, our findings do serve as an optimistic existence proof that even small language models can substantially improve on those linguistic phenomena on which models typically perform poorly, provided the pre-training data contains sufficient exposure to them. This suggests that efforts towards human-scale language modeling may benefit greatly by focusing on data composition. The code to reproduce our results is open-sourced at this https URL.
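The core intervention described above is a corpus-mixing step: diluting a base pre-training sample with targeted synthetic documents so that they account for roughly 1% of the total token count. A minimal sketch of such a mixing routine is shown below; the function name and the whitespace-based token approximation are illustrative assumptions, not the authors' actual implementation.

```python
import random


def mix_corpus(base_docs, synthetic_docs, target_fraction=0.01, seed=0):
    """Mix synthetic documents into a base corpus so synthetic text
    makes up about `target_fraction` of the total token count.

    Tokens are approximated by whitespace-separated words here; a real
    pipeline would use the model's tokenizer instead.
    """
    rng = random.Random(seed)
    base_tokens = sum(len(d.split()) for d in base_docs)

    # Solve s / (b + s) = f for the synthetic token budget s:
    #   s = f * b / (1 - f)
    budget = int(target_fraction * base_tokens / (1 - target_fraction))

    pool = list(synthetic_docs)
    rng.shuffle(pool)

    chosen, used = [], 0
    for doc in pool:
        n = len(doc.split())
        if used + n > budget:
            break
        chosen.append(doc)
        used += n

    mixed = base_docs + chosen
    rng.shuffle(mixed)  # interleave synthetic docs with the base sample
    return mixed, used / (base_tokens + used)
```

For a 100M-token FineWeb sample, `target_fraction=0.01` corresponds to about 1M synthetic tokens targeting the chosen BLiMP phenomena.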
Source: arXiv: 2604.17930