arXiv submission date: 2026-03-07
📄 Abstract - Position: LLMs Must Use Functor-Based and RAG-Driven Bias Mitigation for Fairness

Biases in large language models (LLMs) often manifest as systematic distortions in associations between demographic attributes and professional or social roles, reinforcing harmful stereotypes across gender, ethnicity, and geography. This position paper advocates for addressing demographic and gender biases in LLMs through a dual-pronged methodology, integrating category-theoretic transformations and retrieval-augmented generation (RAG). Category theory provides a rigorous, structure-preserving mathematical framework that maps biased semantic domains to unbiased canonical forms via functors, ensuring bias elimination while preserving semantic integrity. Complementing this, RAG dynamically injects diverse, up-to-date external knowledge during inference, directly countering biases ingrained in model parameters. By combining structural debiasing through functor-based mappings with contextual grounding via RAG, we outline a comprehensive framework capable of delivering equitable and fair model outputs. Our synthesis of the current literature validates the efficacy of each approach individually, and our responses to potential critiques demonstrate the robustness of the integrated strategy. Ensuring fairness in LLMs therefore demands both the mathematical rigor of category-theoretic transformations and the adaptability of retrieval augmentation.
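
The abstract's functor claim can be made concrete with a toy example. Below is a minimal, runnable sketch of the structure-preserving idea: objects are concept labels, morphisms are (source, relation, target) associations, and a functor maps gendered variants onto canonical, gender-neutral forms while preserving composition. Every name here (CANON, F_obj, the sample triples) is an illustrative assumption, not code from the paper.

```python
# Toy "debiasing functor": map a biased semantic category onto a canonical one.
# Objects are concept strings; morphisms are (source, relation, target) triples.
# All concept names and mappings below are illustrative assumptions.

CANON = {
    "woman": "person",
    "man": "person",
    "nurse_female": "nurse",
    "nurse_male": "nurse",
}

def F_obj(obj: str) -> str:
    """Object map: send each biased concept to its canonical form."""
    return CANON.get(obj, obj)

def F_mor(mor: tuple) -> tuple:
    """Morphism map: re-point both endpoints via F_obj, keep the relation."""
    src, label, dst = mor
    return (F_obj(src), label, F_obj(dst))

def compose(f: tuple, g: tuple) -> tuple:
    """Compose f: A -> B with g: B -> C into a path A -> C."""
    assert f[2] == g[0], "morphisms must be composable"
    return (f[0], f[1] + ";" + g[1], g[2])

# Functoriality check: F(g . f) == F(g) . F(f), i.e. debiasing a composite
# association gives the same result as composing the debiased pieces.
f = ("woman", "stereotyped_as", "nurse_female")
g = ("nurse_female", "works_in", "hospital")
assert F_mor(compose(f, g)) == compose(F_mor(f), F_mor(g))
print(F_mor(f))  # ('person', 'stereotyped_as', 'nurse')
```

The functor laws are what the abstract means by "bias elimination while preserving semantic integrity": relations survive the mapping unchanged, and only the gendered endpoints are normalized.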

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: bias mitigation, fairness, retrieval-augmented generation, category theory, demographic bias

Position: LLMs Must Use Functor-Based and RAG-Driven Bias Mitigation for Fairness


1️⃣ One-Sentence Summary

This position paper argues for systematically removing gender, ethnic, and other social biases from large language models on two fronts, structural debiasing via category-theoretic functor mappings and dynamic knowledge injection via retrieval-augmented generation, in order to produce fairer model outputs.
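
The second prong, retrieval augmentation, can be sketched just as briefly. Under the assumption that debiasing-oriented RAG means grounding generation in diverse, counter-stereotypical evidence rather than in parametric associations, a minimal pipeline looks like the following; the corpus, the lexical-overlap retriever, and the prompt template are all illustrative placeholders, not the paper's system.

```python
# Minimal RAG sketch: retrieve external evidence and inject it into the
# prompt at inference time, so the answer is grounded in retrieved text
# rather than in the model's (possibly biased) parameters.
# The corpus and prompt format are illustrative assumptions.

CORPUS = [
    "Nurse and engineer roles are held by people of every gender.",
    "Women earn a growing share of engineering degrees worldwide.",
    "Many men work as nurses, and their share has grown over time.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(CORPUS, key=lambda p: -len(q & set(p.lower().split())))[:k]

def augmented_prompt(query: str) -> str:
    """Prepend retrieved evidence so generation is contextually grounded."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(augmented_prompt("Who holds nurse and engineer roles?"))
```

In a real system the toy retriever would be replaced by a dense or hybrid retriever over an up-to-date knowledge store, but the injection point is the same: the context block, not the model weights.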

Source: arXiv:2603.07368