Why LLMs Fail: A Failure Analysis and Partial Success Measurement for Automated Security Patch Generation
1️⃣ One-Sentence Summary
This study finds that although large language models can generate syntactically correct code, more than half of the patches they produce for software security vulnerabilities fail on both security and functionality, largely because the models do not truly understand the nature of the vulnerability; generated patches must therefore undergo rigorous validation before deployment.
Large Language Models (LLMs) show promise for Automated Program Repair (APR), yet their effectiveness on security vulnerabilities remains poorly characterized. This study analyzes 319 LLM-generated security patches across 64 Java vulnerabilities from the Vul4J benchmark. Using a tri-axis evaluation (compilation, security via proof-of-vulnerability (PoV) tests, functionality via regression test suites), the analysis reveals that only 24.8% of patches achieve full correctness, while 51.4% fail on both security and functionality. The dominant failure mode is semantic misunderstanding: LLMs produce syntactically valid code but apply incorrect repair strategies. The proposed Security Repair Score (SRS) quantifies this gap, showing that LLMs preserve functionality (mean 0.832) but struggle with security (mean 0.251). Vulnerability type strongly predicts difficulty, with fix rates ranging from 0% (input validation) to 45% (infinite loop). These findings demonstrate that LLM-generated security patches require rigorous validation before deployment.
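The tri-axis evaluation described above can be sketched as a small scoring routine. The exact SRS formula is not given in this summary, so the aggregation below (per-axis pass rates over a patch set, with non-compiling patches counted as failing both remaining axes) is an illustrative assumption, not the paper's definition; the `PatchResult` fields and `srs` function are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class PatchResult:
    compiles: bool          # axis 1: does the patch compile?
    security_pass: bool     # axis 2: does the proof-of-vulnerability (PoV) test now pass?
    functional_pass: bool   # axis 3: does the regression test suite stay green?

def srs(results):
    """Illustrative scoring (assumed, not the paper's formula): mean
    security pass rate and mean functionality pass rate, where a
    non-compiling patch fails both axes."""
    if not results:
        return 0.0, 0.0
    sec = sum(r.compiles and r.security_pass for r in results) / len(results)
    fun = sum(r.compiles and r.functional_pass for r in results) / len(results)
    return sec, fun

patches = [
    PatchResult(True, True, True),     # fully correct
    PatchResult(True, False, True),    # keeps tests green but still vulnerable
    PatchResult(True, False, False),   # fails both security and functionality
    PatchResult(False, False, False),  # does not compile
]
sec, fun = srs(patches)
print(sec, fun)  # → 0.25 0.5
```

On this toy set the functionality score (0.5) exceeds the security score (0.25), mirroring the paper's observed gap between functionality preservation (mean 0.832) and security repair (mean 0.251).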
Source: arXiv: 2603.10072