CxMP: A Linguistic Minimal-Pair Benchmark for Evaluating Constructional Understanding in Language Models
1️⃣ One-sentence summary
This paper introduces CxMP, a linguistic benchmark grounded in Construction Grammar. Through minimal-pair testing, it finds that although large language models acquire syntactic rules relatively early, their ability to understand the deeper semantic relations conveyed by grammatical forms develops slowly and remains clearly limited.
Recent work has examined language models from a linguistic perspective to better understand how they acquire language. Most existing benchmarks focus on judging grammatical acceptability, whereas the ability to interpret meanings conveyed by grammatical forms has received much less attention. We introduce the Linguistic Minimal-Pair Benchmark for Evaluating Constructional Understanding in Language Models (CxMP), a benchmark grounded in Construction Grammar that treats form-meaning pairings, or constructions, as fundamental linguistic units. CxMP evaluates whether models can interpret the semantic relations implied by constructions, using a controlled minimal-pair design across nine construction types, including the let-alone, caused motion, and ditransitive constructions. Our results show that while syntactic competence emerges early, constructional understanding develops more gradually and remains limited even in large language models (LLMs). CxMP thus reveals persistent gaps in how language models integrate form and meaning, providing a framework for studying constructional understanding and learning trajectories in language models.
Source: arXiv: 2602.21978