arXiv submission date: 2026-02-18
📄 Abstract - Learning Preference from Observed Rankings

Estimating consumer preferences is central to many problems in economics and marketing. This paper develops a flexible framework for learning individual preferences from partial ranking information by interpreting observed rankings as collections of pairwise comparisons with logistic choice probabilities. We model latent utility as the sum of interpretable product attributes, item fixed effects, and a low-rank user-item factor structure, enabling both interpretability and information sharing across consumers and items. We further correct for selection in which comparisons are observed: a comparison is recorded only if both items enter the consumer's consideration set, inducing exposure bias toward frequently encountered items. We model pair observability as the product of item-level observability propensities and estimate these propensities with a logistic model for the marginal probability that an item is observable. Preference parameters are then estimated by maximizing an inverse-probability-weighted (IPW), ridge-regularized log-likelihood that reweights observed comparisons toward a target comparison population. To scale computation, we propose a stochastic gradient descent (SGD) algorithm based on inverse-probability resampling, which draws comparisons in proportion to their IPW weights. In an application to transaction data from an online wine retailer, the method improves out-of-sample recommendation performance relative to a popularity-based benchmark, with particularly strong gains in predicting purchases of previously unconsumed products.
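To make the estimation strategy concrete, here is a minimal NumPy sketch of the core idea, not the authors' implementation: latent utility as attributes plus an item fixed effect plus a low-rank user-item term, fit by SGD on a ridge-penalized pairwise logistic log-likelihood, with comparisons drawn in proportion to their inverse-probability weights. All dimensions, variable names, and data are hypothetical toys, and the item observability propensities are taken as given rather than estimated from a logistic observability model as in the paper.

```python
# Sketch (assumed names/shapes): IPW-resampled SGD for pairwise logistic preferences.
import numpy as np

rng = np.random.default_rng(0)

# ---- toy dimensions and data (all hypothetical) ----
n_users, n_items, n_attr, rank = 50, 30, 5, 3
X = rng.normal(size=(n_items, n_attr))          # item attributes
p_obs = rng.uniform(0.2, 0.9, size=n_items)     # item observability propensities
                                                # (estimated via a logistic model in
                                                # the paper; taken as given here)

# observed comparisons: (user, preferred item i, less-preferred item j)
comparisons = np.array([(rng.integers(n_users),) + tuple(
    rng.choice(n_items, size=2, replace=False)) for _ in range(2000)])

# IPW weight for a pair: inverse of the product of the two items' propensities
w = 1.0 / (p_obs[comparisons[:, 1]] * p_obs[comparisons[:, 2]])
sample_prob = w / w.sum()                       # resampling distribution for SGD

# ---- parameters of the latent-utility model ----
beta = np.zeros(n_attr)                          # attribute coefficients
alpha = np.zeros(n_items)                        # item fixed effects
G = 0.01 * rng.normal(size=(n_users, rank))      # user factors
D = 0.01 * rng.normal(size=(n_items, rank))      # item factors
lam, lr = 1e-3, 0.05                             # ridge penalty, learning rate

def utility(u, i):
    """Latent utility of item i for user u."""
    return X[i] @ beta + alpha[i] + G[u] @ D[i]

# ---- SGD via inverse-probability resampling ----
for step in range(20000):
    k = rng.choice(len(comparisons), p=sample_prob)
    u, i, j = comparisons[k]
    # logistic probability that i beats j for user u
    s = 1.0 / (1.0 + np.exp(-(utility(u, i) - utility(u, j))))
    g = 1.0 - s                                  # gradient scale of log sigma(u_i - u_j)
    # gradient ascent on the resampled log-likelihood minus the ridge penalty
    beta += lr * (g * (X[i] - X[j]) - lam * beta)
    alpha[i] += lr * (g - lam * alpha[i])
    alpha[j] += lr * (-g - lam * alpha[j])
    G[u] += lr * (g * (D[i] - D[j]) - lam * G[u])
    D[i] += lr * (g * G[u] - lam * D[i])
    D[j] += lr * (-g * G[u] - lam * D[j])
```

Drawing each comparison in proportion to its IPW weight makes the expected stochastic gradient proportional to the gradient of the IPW-weighted log-likelihood, which is the role the inverse-probability resampling step plays in the paper's SGD algorithm.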

Top-level tags: machine learning, data, model, training
Detailed tags: preference learning, ranking data, inverse probability weighting, exposure bias correction, recommendation systems

Learning Preference from Observed Rankings


1️⃣ One-Sentence Summary

This paper proposes a new method that learns individual consumer preferences from product ranking data, corrects for the exposure bias toward popular items that is common in such data, and thereby predicts consumers' purchases of previously unconsumed products more accurately.

Source: arXiv: 2602.16476