arXiv submission date: 2026-03-09
📄 Abstract - AdaCultureSafe: Adaptive Cultural Safety Grounded by Cultural Knowledge in Large Language Models

With the widespread adoption of Large Language Models (LLMs), respecting indigenous cultures has become essential for models' cultural safety and responsible global deployment. Existing studies consider cultural safety and cultural knowledge separately, neglecting that the former should be grounded in the latter. This severely limits LLMs' ability to yield culture-specific, respectful responses, and adaptive cultural safety thus remains a formidable task. In this work, we propose to jointly model cultural safety and knowledge. First and foremost, paired cultural-safety and knowledge data are the key prerequisite for this research. However, the cultural diversity across regions and the subtlety of cultural differences pose significant challenges to creating such paired evaluation data. To address this issue, we propose a novel framework that integrates the curation of authoritative cultural knowledge descriptions, LLM-automated query generation, and extensive manual verification. Accordingly, we obtain a dataset named AdaCultureSafe, containing 4.8K manually decomposed fine-grained cultural descriptions and 48K corresponding manually verified safety- and knowledge-oriented queries. On the constructed dataset, we evaluate three families of popular LLMs for cultural safety and knowledge proficiency, and make a critical discovery: no significant correlation exists between the two. We then examine utility-related neuron activations within LLMs to investigate the potential cause of this absence of correlation, which can be attributed to the differing objectives of pre-training and post-alignment. Finally, we present a knowledge-grounded method that significantly enhances cultural safety by enforcing the integration of knowledge into the LLM response generation process.

Top-level tags: llm, model evaluation, natural language processing
Detailed tags: cultural safety, knowledge grounding, dataset creation, model alignment, responsible ai

AdaCultureSafe: Adaptive Cultural Safety Grounded by Cultural Knowledge in Large Language Models


1️⃣ One-sentence summary

This paper proposes a new framework that couples cultural knowledge with cultural safety; by constructing a high-quality dataset and a knowledge-guided response generation method, it effectively improves the safety of large language models in respecting different cultures.

Source: arXiv 2603.08275