
arXiv submission date: 2026-03-25
📄 Abstract - Exploring How Fair Model Representations Relate to Fair Recommendations

One of the many fairness definitions pursued in recent recommender system research targets mitigating demographic information encoded in model representations. Models optimized for this definition are typically evaluated on how well demographic attributes can be classified given model representations, with the (implicit) assumption that this measure accurately reflects *recommendation parity*, i.e., how similar recommendations given to different users are. We challenge this assumption by comparing the amount of demographic information encoded in representations with various measures of how the recommendations differ. We propose two new approaches for measuring how well demographic information can be classified given ranked recommendations. Our results from extensive testing of multiple models on one real and multiple synthetically generated datasets indicate that optimizing for fair representations positively affects recommendation parity, but also that evaluation at the representation level is not a good proxy for measuring this effect when comparing models. We also provide extensive insight into how recommendation-level fairness metrics behave for various models by evaluating their performances on numerous generated datasets with different properties.
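The two evaluation levels the abstract contrasts can be illustrated with a minimal sketch. Below, a logistic-regression probe measures how well a demographic attribute can be classified from user embeddings (representation level), while the average cross-group overlap of top-k recommendation lists gives a simple recommendation-parity measure. All names, the synthetic data, the `leak` parameter, and the Jaccard-overlap metric are illustrative assumptions, not the paper's actual models or metrics:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical setup: 500 users, 32-dim embeddings, a binary demographic attribute.
n_users, dim, n_items, k = 500, 32, 100, 10
group = rng.integers(0, 2, size=n_users)

# Synthetic embeddings that partially encode the attribute; `leak` controls how much.
leak = 0.5
emb = rng.normal(size=(n_users, dim)) + leak * group[:, None]

# Representation-level measure: accuracy of a classifier probing the attribute.
probe_acc = cross_val_score(
    LogisticRegression(max_iter=1000), emb, group, cv=5
).mean()

# Recommendation-level measure: top-k lists from a dot-product recommender.
item_emb = rng.normal(size=(n_items, dim))
scores = emb @ item_emb.T
topk = np.argsort(-scores, axis=1)[:, :k]

def mean_jaccard(lists_a, lists_b, n_pairs=2000):
    # Average Jaccard similarity of top-k lists over random cross-group user pairs;
    # lower overlap means recommendations differ more between groups.
    ia = rng.integers(0, len(lists_a), n_pairs)
    ib = rng.integers(0, len(lists_b), n_pairs)
    sims = [
        len(set(lists_a[i]) & set(lists_b[j])) / len(set(lists_a[i]) | set(lists_b[j]))
        for i, j in zip(ia, ib)
    ]
    return float(np.mean(sims))

cross_overlap = mean_jaccard(topk[group == 0], topk[group == 1])
print(f"probe accuracy: {probe_acc:.2f}, cross-group top-{k} Jaccard: {cross_overlap:.2f}")
```

The paper's finding is that these two numbers need not move together across models: a representation from which demographics are hard to classify does not guarantee similar recommendations across groups, which is why it argues for evaluating at the recommendation level directly.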

Top-level tags: model evaluation systems machine learning
Detailed tags: fairness recommender systems representation learning demographic parity evaluation metrics

Exploring How Fair Model Representations Relate to Fair Recommendations


1️⃣ One-sentence summary

This paper finds that while optimizing a model so that its internal representations are fairer (i.e., encode less user demographic information) does help improve the fairness of the resulting recommendations, evaluating fairness at the representation level alone is not an accurate predictor of how fair the final recommendations are; it therefore recommends evaluating the recommendations themselves directly.

Source: arXiv 2603.24396