Revisiting Semantic Role Labeling: Efficient Structured Inference with Dependency-Informed Analysis
1️⃣ One-sentence summary
This paper proposes a modern encoder-based semantic role labeling framework that preserves explicit predicate-argument structure while delivering a 10× inference speedup. A dependency-informed analysis finds that this signal mainly improves the model's structural stability, and the framework's potential for multilingual applications is also demonstrated.
Semantic Role Labeling (SRL) provides an explicit representation of predicate-argument structure, capturing linguistically grounded relations such as who did what to whom. While recent NLP progress has been dominated by large language models (LLMs), these systems typically rely on implicit semantic representations, lacking explicit structural constraints and systematic explanatory mechanisms. Traditionally, SRL systems have often relied on AllenNLP; however, that framework entered maintenance mode in December 2022, limiting compatibility with evolving encoder architectures and modern inference requirements. We revisit structured SRL modeling, introducing a modernized encoder-based framework that preserves explicit predicate-argument structure while enabling inference 10 times faster. Using BERT-base, the model attains comparable predictive performance, and RoBERTa and DeBERTa further improve F1 within the same framework. We adopt a dependency-informed diagnostic methodology to characterize span-level inconsistencies and conduct a representation-level analysis of LLM behavior under dependency-informed structural signals. Results indicate that dependency cues primarily improve structural stability. Finally, we illustrate how the framework's explicit predicate-argument structure can support multilingual SRL projection as a downstream application.
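To make the "explicit predicate-argument structure" concrete, the sketch below shows what span-level SRL output looks like in the common BIO tagging scheme (PropBank-style labels such as ARG0, ARG1, ARGM-MNR). This is an illustrative decoder for grouping BIO tags into labeled spans, not the paper's actual framework; the example sentence and tags are invented.

```python
# Illustrative only: minimal BIO-to-span decoding for SRL output
# ("who did what to whom"). Labels follow PropBank conventions;
# the sentence and its tags are hypothetical, not from the paper.

def srl_spans(tokens, bio_tags):
    """Group BIO-tagged tokens into (role, span_text) pairs."""
    spans, cur_label, cur_toks = [], None, []
    for tok, tag in zip(tokens, bio_tags):
        if tag.startswith("B-"):
            if cur_label:                       # flush previous span
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)                # continue current span
        else:                                   # "O" or inconsistent I- tag
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = None, []
    if cur_label:                               # flush trailing span
        spans.append((cur_label, " ".join(cur_toks)))
    return spans

tokens = ["The", "chef", "sliced", "the", "onions", "quickly"]
tags   = ["B-ARG0", "I-ARG0", "B-V", "B-ARG1", "I-ARG1", "B-ARGM-MNR"]
print(srl_spans(tokens, tags))
# → [('ARG0', 'The chef'), ('V', 'sliced'), ('ARG1', 'the onions'), ('ARGM-MNR', 'quickly')]
```

Span-level inconsistencies of the kind the paper's dependency-informed diagnostics target (e.g. an `I-` tag whose label does not match the open span) surface naturally in this decoding step.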
Source: arXiv: 2605.02505