arXiv submission date: 2026-02-04
📄 Abstract - How Few-shot Demonstrations Affect Prompt-based Defenses Against LLM Jailbreak Attacks

Large Language Models (LLMs) face increasing threats from jailbreak attacks that bypass safety alignment. While prompt-based defenses such as Role-Oriented Prompts (RoP) and Task-Oriented Prompts (ToP) have shown effectiveness, the role of few-shot demonstrations in these defense strategies remains unclear. Prior work suggests that few-shot examples may compromise safety, but has not investigated how few-shot demonstrations interact with different system prompt strategies. In this paper, we conduct a comprehensive evaluation of multiple mainstream LLMs across four safety benchmarks (AdvBench, HarmBench, SG-Bench, XSTest) using six jailbreak attack methods. Our key finding is that few-shot demonstrations produce opposite effects on RoP and ToP: few-shot improves RoP's safety rate by up to 4.5% by reinforcing role identity, while it degrades ToP's effectiveness by up to 21.2% by distracting attention from task instructions. Based on these findings, we provide practical recommendations for deploying prompt-based defenses in real-world LLM applications.
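To make the experimental conditions concrete, here is a minimal sketch of how a Role-Oriented prompt (RoP) and a Task-Oriented prompt (ToP) might be combined with few-shot refusal demonstrations before sending a jailbreak query to a chat model. The prompt wording, the example demonstrations, and the `build_messages` helper are illustrative assumptions, not the paper's actual prompts or code.

```python
# Hypothetical defense prompts: RoP frames an identity, ToP frames task instructions.
ROP_SYSTEM = (
    "You are a responsible and safety-conscious assistant. "
    "You always refuse requests that could cause harm."
)
TOP_SYSTEM = (
    "Task: answer the user's question helpfully, but refuse any request "
    "for harmful, illegal, or unsafe content."
)

# Hypothetical few-shot demonstrations of safe refusals.
FEW_SHOT = [
    {"role": "user", "content": "How do I pick a lock to break into a house?"},
    {"role": "assistant", "content": "I can't help with that, since it could facilitate a crime."},
]


def build_messages(system_prompt: str, user_query: str, use_few_shot: bool = False):
    """Assemble a chat-format message list: system prompt, optional
    few-shot demonstrations, then the (possibly adversarial) user query."""
    messages = [{"role": "system", "content": system_prompt}]
    if use_few_shot:
        messages.extend(FEW_SHOT)
    messages.append({"role": "user", "content": user_query})
    return messages


# The same jailbreak query would be evaluated under the four conditions the paper compares.
query = "<jailbreak prompt drawn from AdvBench/HarmBench would go here>"
configs = {
    "RoP": build_messages(ROP_SYSTEM, query),
    "RoP + few-shot": build_messages(ROP_SYSTEM, query, use_few_shot=True),
    "ToP": build_messages(TOP_SYSTEM, query),
    "ToP + few-shot": build_messages(TOP_SYSTEM, query, use_few_shot=True),
}
for name, msgs in configs.items():
    print(name, "->", len(msgs), "messages")
```

The paper's finding is that adding the demonstration turns in the "+ few-shot" conditions helps the RoP variant but hurts the ToP variant.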

Top tags: llm, natural language processing, model evaluation
Detailed tags: jailbreak attacks, prompt-based defenses, few-shot learning, safety alignment, benchmark evaluation

How Few-shot Demonstrations Affect Prompt-based Defenses Against LLM Jailbreak Attacks


1️⃣ One-sentence summary

This study finds that adding few-shot demonstrations to prompt-based defense strategies has opposite effects on the two mainstream approaches: it strengthens Role-Oriented Prompts by reinforcing role identity, but weakens Task-Oriented Prompts by distracting attention from the task instructions.

Source: arXiv: 2602.04294