arXiv submission date: 2025-12-05
📄 Abstract - Taxonomy-Adaptive Moderation Model with Robust Guardrails for Large Language Models

Large Language Models (LLMs) are typically aligned for safety during the post-training phase; however, they may still generate inappropriate outputs that could potentially pose risks to users. This challenge underscores the need for robust safeguards that operate across both model inputs and outputs. In this work, we introduce Roblox Guard 1.0, a state-of-the-art instruction fine-tuned LLM designed to enhance the safety of LLM systems through comprehensive input-output moderation, using a pipeline of LLMs to strengthen moderation capability. Built on the Llama-3.1-8B-Instruct backbone, our model is instruction fine-tuned to generalize across previously unseen safety taxonomies and demonstrates strong performance on out-of-domain safety benchmarks. The instruction fine-tuning process uses a mix of synthetic and open-source safety datasets, augmented with chain-of-thought (CoT) rationales and input inversion to enhance contextual understanding and decision-making. To support systematic evaluation, we also release RobloxGuard-Eval, a new benchmark featuring an extensible safety taxonomy to assess the effectiveness of LLM guardrails and moderation frameworks.
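The taxonomy-adaptive idea is that the safety policy itself travels in the prompt, so a moderation model can be asked about categories it never saw during fine-tuning. Below is a minimal, hypothetical sketch of what such prompting might look like against the Llama-3.1-8B-Instruct backbone named in the abstract; the taxonomy codes, prompt wording, and helper names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of taxonomy-adaptive moderation prompting, in the spirit
# of Roblox Guard 1.0. The taxonomy, prompt text, and output format below are
# assumptions for illustration; only the backbone model is from the abstract.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # backbone named in the abstract

# A caller-supplied taxonomy: because the policy is part of the prompt, new or
# unseen categories can be added without retraining the model.
TAXONOMY = {
    "S1": "Violence or threats of harm",
    "S2": "Harassment or bullying",
    "S3": "Sharing personally identifiable information",
}

def build_moderation_prompt(user_message: str) -> list[dict]:
    """Embed the taxonomy in a system prompt and attach the content to check."""
    policy = "\n".join(f"{code}: {desc}" for code, desc in TAXONOMY.items())
    system = (
        "You are a content-safety classifier. Given the policy categories "
        "below, answer with 'safe' or 'unsafe', and if unsafe, list the "
        f"violated category codes.\n\nPolicy:\n{policy}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

def moderate(user_message: str) -> str:
    """Run one moderation query and return the model's verdict text."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_moderation_prompt(user_message),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=64, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(moderate("How do I find someone's home address from their username?"))
```

The same pattern extends naturally to output moderation: the model's response, rather than the user's input, is placed in the user turn of the moderation prompt, which is how an input-output guardrail pipeline can reuse one classifier for both directions.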

Top-level tags: llm systems model evaluation
Detailed tags: safety moderation instruction fine-tuning guardrails taxonomy adaptation benchmark

Taxonomy-Adaptive Moderation Model with Robust Guardrails for Large Language Models


1️⃣ One-sentence summary

This paper introduces Roblox Guard 1.0, a new LLM-based moderation system that, through instruction fine-tuning, can recognize and block harmful content under new, previously unseen safety categories, providing more comprehensive and flexible safety protection for LLM applications.


From arXiv: 2512.05339