Bielik Guard: Efficient Polish Language Safety Classifiers for LLM Content Moderation
1️⃣ One-Sentence Summary
This paper introduces Bielik Guard, a family of efficient Polish content-safety classifiers comprising one larger and one smaller model. Both accurately identify harmful content and favor delivering an appropriate response over simply blocking it, with the smaller model in particular achieving high precision and a very low false-positive rate.
As Large Language Models (LLMs) are increasingly deployed in Polish-language applications, the need for efficient and accurate content safety classifiers has become paramount. We present Bielik Guard, a family of compact Polish language safety classifiers comprising two model variants: a 0.1B-parameter model based on MMLW-RoBERTa-base and a 0.5B-parameter model based on PKOBP/polish-roberta-8k. Fine-tuned on a community-annotated dataset of 6,885 Polish texts, these models classify content across five safety categories: Hate/Aggression, Vulgarities, Sexual Content, Crime, and Self-Harm. Our evaluation demonstrates that both models achieve strong performance on multiple benchmarks. The 0.5B variant offers the best overall discrimination capability, with F1 scores of 0.791 (micro) and 0.785 (macro) on the test set, while the 0.1B variant demonstrates exceptional efficiency. Notably, Bielik Guard 0.1B v1.1 achieves superior precision (77.65%) and a very low false positive rate (0.63%) on real user prompts, outperforming HerBERT-PL-Guard (31.55% precision, 4.70% FPR) despite identical model size. The models are publicly available and designed to provide appropriate responses rather than simple content blocking, particularly for sensitive categories like self-harm.
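Since the abstract says the models are publicly available, here is a minimal inference sketch assuming the checkpoints are published in the standard Hugging Face sequence-classification format, with one output logit per safety category. The repository id, the sigmoid multi-label readout, and the 0.5 threshold are illustrative assumptions on my part, not details confirmed by the paper:

```python
# Hedged sketch: loading a Bielik Guard checkpoint and scoring a Polish text.
# MODEL_ID is a placeholder -- substitute the actual published model card name.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "speakleash/bielik-guard-0.1b"  # hypothetical repository id

# The five safety categories named in the abstract.
CATEGORIES = ["Hate/Aggression", "Vulgarities", "Sexual Content", "Crime", "Self-Harm"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def classify(text: str) -> dict[str, float]:
    """Return a probability per category, assuming a multi-label head
    (independent sigmoid per label rather than a softmax over labels)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, num_categories)
    probs = torch.sigmoid(logits).squeeze(0)
    return {cat: float(p) for cat, p in zip(CATEGORIES, probs)}

scores = classify("Przykładowy tekst do sprawdzenia.")
flagged = [c for c, p in scores.items() if p >= 0.5]  # illustrative threshold
print(scores, flagged)
```

A per-label sigmoid fits the multi-label setting implied by the category list, since a single text can, for example, be both vulgar and hateful; a deployment could then route flagged self-harm content to a supportive response instead of a plain block, as the paper advocates.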
Source: arXiv:2602.07954