arXiv submission date: 2026-02-19
📄 Abstract - Differences in Typological Alignment in Language Models' Treatment of Differential Argument Marking

Recent work has shown that language models (LMs) trained on synthetic corpora can exhibit typological preferences that resemble cross-linguistic regularities in human languages, particularly for syntactic phenomena such as word order. In this paper, we extend this paradigm to differential argument marking (DAM), a semantic licensing system in which morphological marking depends on semantic prominence. Using a controlled synthetic learning method, we train GPT-2 models on 18 corpora implementing distinct DAM systems and evaluate their generalization using minimal pairs. Our results reveal a dissociation between two typological dimensions of DAM. Models reliably exhibit human-like preferences for natural markedness direction, favoring systems in which overt marking targets semantically atypical arguments. In contrast, models do not reproduce the strong object preference in human languages, in which overt marking in DAM more often targets objects rather than subjects. These findings suggest that different typological tendencies may arise from distinct underlying sources.
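The abstract's minimal-pair evaluation can be illustrated with a small sketch. The idea is to score both members of a pair under the trained LM and check which one receives higher probability. The toy bigram scorer, the token names, and the probability values below are all hypothetical stand-ins for the paper's actual GPT-2 models and synthetic corpora; the real evaluation would use model log-likelihoods instead.

```python
import math

# Hypothetical bigram "LM": (prev, next) -> probability. In the paper's setup
# this role is played by a GPT-2 model trained on a synthetic DAM corpus.
TOY_BIGRAMS = {
    ("rock", "MARK"): 0.6,  # overt marker on a semantically atypical subject
    ("dog", "MARK"): 0.1,   # overt marker on a typical (animate) subject
}
DEFAULT_P = 0.2  # fallback probability for bigrams not listed above

def sentence_logprob(tokens):
    """Sum of bigram log-probabilities under the toy model."""
    pairs = zip(tokens, tokens[1:])
    return sum(math.log(TOY_BIGRAMS.get(p, DEFAULT_P)) for p in pairs)

# Minimal pair: the sentences differ only in which argument carries the
# overt marker "MARK" (a hypothetical morpheme for illustration).
atypical_marked = ["the", "rock", "MARK", "bit", "the", "dog"]
typical_marked = ["the", "dog", "MARK", "bit", "the", "rock"]

# A model with human-like markedness preferences assigns higher probability
# to marking the atypical argument.
prefers_atypical = (
    sentence_logprob(atypical_marked) > sentence_logprob(typical_marked)
)
```

Under these toy probabilities the model prefers marking the atypical argument, mirroring the markedness-direction result the paper reports; the object-vs-subject dimension would be tested with an analogous pair contrasting marked objects and marked subjects.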

Top-level tags: llm, natural language processing, theory
Detailed tags: differential argument marking, typological alignment, synthetic corpora, semantic prominence, linguistic generalization

Differences in Typological Alignment in Language Models' Treatment of Differential Argument Marking


1️⃣ One-sentence summary

This study finds that although language models, like human languages, prefer to place overt marking on semantically atypical arguments, they do not reproduce the strong preference in human languages for marking objects rather than subjects, suggesting that different typological regularities may arise from distinct underlying mechanisms.

Source: arXiv:2602.17653