RedSage: A Cybersecurity Generalist LLM
1️⃣ One-Sentence Summary
This paper develops RedSage, an open-source, locally deployable cybersecurity AI assistant. By training on large-scale domain-specific data and simulating expert workflows, RedSage not only performs strongly on cybersecurity tasks but also improves general reasoning ability.
Cybersecurity operations demand assistant LLMs that support diverse workflows without exposing sensitive data. Existing solutions either rely on proprietary APIs with privacy risks or on open models lacking domain adaptation. To bridge this gap, we curate 11.8B tokens of cybersecurity-focused continual pretraining data via large-scale web filtering and manual collection of high-quality resources, spanning 28.6K documents across frameworks, offensive techniques, and security tools. Building on this, we design an agentic augmentation pipeline that simulates expert workflows to generate 266K multi-turn cybersecurity samples for supervised fine-tuning. Combined with general open-source LLM data, these resources enable the training of RedSage, an open-source, locally deployable cybersecurity assistant with domain-aware pretraining and post-training. To rigorously evaluate the models, we introduce RedSage-Bench, a benchmark with 30K multiple-choice and 240 open-ended Q&A items covering cybersecurity knowledge, skills, and tool expertise. RedSage is further evaluated on established cybersecurity benchmarks (e.g., CTI-Bench, CyberMetric, SECURE) and general LLM benchmarks to assess broader generalization. At the 8B scale, RedSage achieves consistently better results, surpassing the baseline models by up to +5.59 points on cybersecurity benchmarks and +5.05 points on Open LLM Leaderboard tasks. These findings demonstrate that domain-aware agentic augmentation and pre/post-training can not only enhance cybersecurity-specific expertise but also help to improve general reasoning and instruction-following. All models, datasets, and code are publicly available.
Source: arXiv: 2601.22159