arXiv submission date: 2026-02-02
📄 Abstract - Designing Time Series Experiments in A/B Testing with Transformer Reinforcement Learning

A/B testing has become a gold standard for modern technological companies to conduct policy evaluation. Yet, its application to time series experiments, where policies are sequentially assigned over time, remains challenging. Existing designs suffer from two limitations: (i) they do not fully leverage the entire history for treatment allocation; (ii) they rely on strong assumptions to approximate the objective function (e.g., the mean squared error of the estimated treatment effect) for optimizing the design. We first establish an impossibility theorem showing that failure to condition on the full history leads to suboptimal designs, due to the dynamic dependencies in time series experiments. To address both limitations simultaneously, we next propose a transformer reinforcement learning (RL) approach which leverages transformers to condition allocation on the entire history and employs RL to directly optimize the MSE without relying on restrictive assumptions. Empirical evaluations on synthetic data, a publicly available dispatch simulator, and a real-world ridesharing dataset demonstrate that our proposal consistently outperforms existing designs.
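The design loop the abstract describes — condition each treatment allocation on the history so far, and use RL to directly minimize the MSE of the estimated treatment effect — can be illustrated in toy form. The sketch below is an assumption-laden simplification, not the paper's method: a logistic policy over hand-picked history features stands in for the transformer, REINFORCE for the RL algorithm, and a synthetic AR(1) outcome model (with a made-up `TRUE_EFFECT`) for the environment.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 1.0   # hypothetical ground-truth treatment effect
T = 20              # decision points per experiment

def run_experiment(theta):
    """One sequential experiment; returns REINFORCE reward and score gradient."""
    y_prev, grads = 0.0, np.zeros_like(theta)
    actions, outcomes = [], []
    for t in range(T):
        # toy history features (stand-in for a transformer's full-history encoding)
        x = np.array([1.0, y_prev, t / T])
        p = 1.0 / (1.0 + np.exp(-x @ theta))   # P(assign treatment)
        a = 1 if rng.random() < p else 0
        grads += (a - p) * x                   # d log pi(a|history) / d theta
        # AR(1) outcome: dynamic carryover from past outcomes, as in time series A/B tests
        y = 0.5 * y_prev + TRUE_EFFECT * a + rng.normal(0.0, 0.5)
        actions.append(a); outcomes.append(y); y_prev = y
    a, y = np.asarray(actions), np.asarray(outcomes)
    # naive difference-in-means estimator of the treatment effect
    est = (y[a == 1].mean() - y[a == 0].mean()) if 0 < a.sum() < T else 0.0
    return -(est - TRUE_EFFECT) ** 2, grads    # reward = negative squared error (MSE proxy)

theta = np.zeros(3)
for _ in range(200):            # REINFORCE ascent on expected reward
    r, g = run_experiment(theta)
    theta += 0.05 * r * g
```

The point of the sketch is the objective, not the architecture: the policy is rewarded directly by the (negative) squared estimation error, so no closed-form approximation of the MSE is needed — which is the role RL plays in the paper's proposal.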

Top-level tags: reinforcement learning, systems, model training
Detailed tags: A/B testing, time series, transformer, policy optimization, treatment effect estimation

Designing Time Series Experiments in A/B Testing with Transformer Reinforcement Learning


1️⃣ One-sentence summary

This paper proposes a new method combining Transformers and reinforcement learning for A/B tests in which policies are assigned sequentially over time: it exploits the full history of the experiment to optimize the design, yielding more accurate evaluation of policy effects.

Source: arXiv:2602.01853