arXiv submission date: 2026-04-22
📄 Abstract - Evaluating Assurance Cases as Text-Attributed Graphs for Structure and Provenance Analysis

An assurance case is a structured argument document that justifies claims about a system's requirements or properties, supported by evidence. In regulated domains, assurance cases are crucial for meeting the compliance and safety requirements of industry standards. We propose a graph diagnostic framework for analysing the structure and provenance of assurance cases. We focus on two main tasks: (1) link prediction, to learn and identify connections between argument elements, and (2) graph classification, to differentiate between assurance cases created by a state-of-the-art large language model and those created by humans, aiming to detect bias. We compiled a publicly available dataset of assurance cases, represented as graphs with nodes and edges, supporting both link prediction and provenance analysis. Experiments show that graph neural networks (GNNs) achieve strong link prediction performance (ROC-AUC 0.760) on real assurance cases and generalise well across domains and semi-supervised settings. For provenance detection, GNNs effectively distinguish human-authored from LLM-generated cases (F1 0.94). We observed that LLM-generated assurance cases have different hierarchical linking patterns compared to human-authored cases. Furthermore, existing GNN explanation methods show only moderate faithfulness, revealing a gap between predicted reasoning and the true argument structure.
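The link-prediction setup described in the abstract can be sketched in a few lines: argument elements become nodes with text-embedding features, support links become edges, and a GNN-style message-passing step produces node embeddings whose dot products score candidate links. The sketch below is a minimal numpy illustration under assumed placeholders (random features in place of real text embeddings, untrained weights, a toy edge list), not the paper's actual model or dataset.

```python
import numpy as np

# Hypothetical toy assurance case: nodes are argument elements
# (claims, strategies, evidence); edges are support links.
# Random features stand in for text embeddings of each element.
rng = np.random.default_rng(0)
num_nodes, feat_dim = 5, 8
X = rng.normal(size=(num_nodes, feat_dim))

# Known support edges (parent, child), e.g. claim 0 supported by 1 and 2.
edges = [(0, 1), (0, 2), (1, 3), (2, 4)]

# Symmetric adjacency with self-loops, row-normalised: one step of
# mean-neighbour aggregation, the core GNN message-passing idea.
A = np.eye(num_nodes)
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)

# One propagation layer (untrained weights, for illustration only).
W = rng.normal(size=(feat_dim, feat_dim)) * 0.1
H = np.tanh(A_hat @ X @ W)

def link_score(u: int, v: int) -> float:
    """Dot-product score for a candidate edge (higher = more plausible)."""
    return float(H[u] @ H[v])

print(link_score(0, 1), link_score(3, 4))
```

In a trained model the weights `W` would be fit so that observed edges score higher than sampled non-edges (e.g. via a binary cross-entropy loss), which is what the reported ROC-AUC evaluates.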

Top-level tags: systems llm machine learning
Detailed tags: assurance cases graph neural networks link prediction provenance detection explainability

Evaluating Assurance Cases as Text-Attributed Graphs for Structure and Provenance Analysis


1️⃣ One-sentence summary

This paper proposes a graph diagnostic framework that represents assurance cases (structured argument documents justifying a system's safety and compliance) as text-attributed graphs; through link prediction and classification tasks, it finds that graph neural networks can effectively distinguish human-authored cases from those generated by large language models, and reveals differences in their hierarchical linking patterns.

Source: arXiv: 2604.20577