arXiv submission date: 2026-01-04
📄 Abstract - OpenRT: An Open-Source Red Teaming Framework for Multimodal LLMs

The rapid integration of Multimodal Large Language Models (MLLMs) into critical applications is increasingly hindered by persistent safety vulnerabilities. However, existing red-teaming benchmarks are often fragmented, limited to single-turn text interactions, and lack the scalability required for systematic evaluation. To address this, we introduce OpenRT, a unified, modular, and high-throughput red-teaming framework designed for comprehensive MLLM safety evaluation. At its core, OpenRT marks a paradigm shift in automated red-teaming by introducing an adversarial kernel that enables modular separation across five critical dimensions: model integration, dataset management, attack strategies, judging methods, and evaluation metrics. By standardizing attack interfaces, it decouples adversarial logic from a high-throughput asynchronous runtime, enabling systematic scaling across diverse models. Our framework integrates 37 diverse attack methodologies, spanning white-box gradients, multi-modal perturbations, and sophisticated multi-agent evolutionary strategies. Through an extensive empirical study on 20 advanced models (including GPT-5.2, Claude 4.5, and Gemini 3 Pro), we expose critical safety gaps: even frontier models fail to generalize across attack paradigms, with leading models exhibiting average Attack Success Rates as high as 49.14%. Notably, our findings reveal that reasoning models do not inherently possess superior robustness against complex, multi-turn jailbreaks. By open-sourcing OpenRT, we provide a sustainable, extensible, and continuously maintained infrastructure that accelerates the development and standardization of AI safety.

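The abstract's description of a modular design with standardized attack interfaces and an asynchronous runtime can be pictured as separate interfaces for target models, attacks, and judges, driven by an async loop that computes the Attack Success Rate (ASR). The sketch below is a minimal illustration under that assumption; the class and function names (`TargetModel`, `Attack`, `Judge`, `run_red_team`) are hypothetical and are not the actual OpenRT API.

```python
# Hypothetical sketch of the modular separation described in the abstract:
# models, attacks, and judges behind standard interfaces, driven by an
# asynchronous runtime. Names are illustrative, not the real OpenRT API.
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Attempt:
    prompt: str           # adversarial prompt produced by an attack
    response: str = ""    # target model's reply
    unsafe: bool = False  # judge's verdict


class TargetModel(ABC):
    @abstractmethod
    async def generate(self, prompt: str) -> str: ...


class Attack(ABC):
    @abstractmethod
    def transform(self, seed: str) -> str:
        """Turn a seed behavior into an adversarial prompt."""


class Judge(ABC):
    @abstractmethod
    async def is_unsafe(self, prompt: str, response: str) -> bool: ...


async def run_red_team(model: TargetModel, attack: Attack, judge: Judge,
                       seeds: list[str], concurrency: int = 8) -> float:
    """Run one attack against one model over a seed dataset; return ASR."""
    sem = asyncio.Semaphore(concurrency)  # throttle concurrent model calls

    async def one(seed: str) -> Attempt:
        async with sem:
            prompt = attack.transform(seed)
            response = await model.generate(prompt)
            unsafe = await judge.is_unsafe(prompt, response)
            return Attempt(prompt, response, unsafe)

    attempts = await asyncio.gather(*(one(s) for s in seeds))
    return sum(a.unsafe for a in attempts) / max(len(attempts), 1)
```

Because each dimension sits behind its own interface, swapping in a new model, attack method, or judge would not require changing the runtime loop, which is the kind of decoupling the abstract credits for scaling across diverse models.
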
Top-level tags: llm, multi-modal, model evaluation
Detailed tags: red teaming, safety evaluation, multimodal llms, adversarial attacks, benchmark

OpenRT: An Open-Source Red Teaming Framework for Multimodal LLMs


1️⃣ One-sentence summary

This paper presents OpenRT, an open-source, modular framework for systematically testing and evaluating the safety of multimodal large language models. It finds that even today's most advanced models contain significant safety vulnerabilities, with average attack success rates as high as 49.14%.

Source: arXiv 2601.01592