arXiv submission date: 2026-03-23
📄 Abstract - SecureBreak -- A dataset towards safe and secure models

Large language models are becoming pervasive core components in many real-world applications. As a consequence, security alignment is a critical requirement for their safe deployment. Although previous related work has focused primarily on model architectures and alignment methodologies, these approaches alone cannot guarantee the complete elimination of harmful generations. This concern is reinforced by a growing body of scientific literature showing that attacks such as jailbreaking and prompt injection can bypass existing security alignment mechanisms. Additional security strategies are therefore needed, both to provide qualitative feedback on the robustness of the security alignment achieved at the training stage, and to create an "ultimate" defense layer that blocks unsafe outputs possibly produced by deployed models. As a contribution in this scenario, this paper introduces SecureBreak, a safety-oriented dataset designed to support the development of AI-driven solutions for detecting harmful LLM outputs caused by residual weaknesses in security alignment. The dataset is highly reliable thanks to careful manual annotation, with labels assigned conservatively to favor safety. It performs well in detecting unsafe content across multiple risk categories, and tests with pre-trained LLMs show improved results after fine-tuning on SecureBreak. Overall, the dataset is useful both for post-generation safety filtering and for guiding further model alignment and security improvements.
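The "post-generation safety filtering" mentioned in the abstract can be sketched as a thin wrapper around the generation call: run the model, classify its output, and withhold anything judged unsafe. The sketch below is a minimal illustration under stated assumptions; the `classify_safety` function is a trivial keyword stand-in for a classifier fine-tuned on SecureBreak, not the paper's actual model.

```python
def classify_safety(text: str) -> str:
    """Stand-in safety classifier (hypothetical).

    A real filter would score the text with a model fine-tuned on a
    dataset such as SecureBreak; keyword matching is used here only
    to keep the sketch self-contained.
    """
    unsafe_markers = ("how to build a weapon", "synthesize the toxin")
    lowered = text.lower()
    return "unsafe" if any(m in lowered for m in unsafe_markers) else "safe"


def safe_generate(prompt: str, generate) -> str:
    """Post-generation filtering: generate first, then gate the output.

    Conservative policy, matching the dataset's labeling philosophy:
    any output classified "unsafe" is withheld rather than released.
    """
    output = generate(prompt)
    if classify_safety(output) == "unsafe":
        return "[output withheld by safety filter]"
    return output


# Usage with a dummy generator standing in for an LLM call:
print(safe_generate("greet me", lambda p: "Hello! How can I help?"))
```

The key design point is that the filter sits outside the model, so it still applies when jailbreaking or prompt injection bypasses the model's own alignment.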

Top-level tags: llm model evaluation data
Detailed tags: safety alignment harmful output detection dataset jailbreaking security robustness

SecureBreak -- A dataset towards safe and secure models


1️⃣ One-sentence summary

This paper introduces SecureBreak, a high-quality safety dataset designed to help detect and filter harmful outputs that large language models produce due to insufficient security alignment, thereby strengthening model safety in real-world deployments.

Source: arXiv:2603.21975