arXiv submission date: 2026-02-17
📄 Abstract - Ensemble-size-dependence of deep-learning post-processing methods that minimize an (un)fair score: motivating examples and a proof-of-concept solution

Fair scores reward ensemble forecast members that behave like samples from the same distribution as the verifying observations. They are therefore an attractive choice as loss functions to train data-driven ensemble forecasts or post-processing methods when large training ensembles are either unavailable or computationally prohibitive. The adjusted continuous ranked probability score (aCRPS) is fair and unbiased with respect to ensemble size, provided forecast members are exchangeable and interpretable as conditionally independent draws from an underlying predictive distribution. However, distribution-aware post-processing methods that introduce structural dependency between members can violate this assumption, rendering aCRPS unfair. We demonstrate this effect using two approaches designed to minimize the expected aCRPS of a finite ensemble: (1) a linear member-by-member calibration, which couples members through a common dependency on the sample ensemble mean, and (2) a deep-learning method, which couples members via transformer self-attention across the ensemble dimension. In both cases, the results are sensitive to ensemble size and apparent gains in aCRPS can correspond to systematic unreliability characterized by over-dispersion. We introduce trajectory transformers as a proof-of-concept that ensemble-size independence can be achieved. This approach is an adaptation of the Post-processing Ensembles with Transformers (PoET) framework and applies self-attention over lead time while preserving the conditional independence required by aCRPS. When applied to weekly mean $T_{2m}$ forecasts from the ECMWF subseasonal forecasting system, this approach successfully reduces systematic model biases whilst also improving or maintaining forecast reliability regardless of the ensemble size used in training (3 vs 9 members) or real-time forecasts (9 vs 100 members).
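To make the abstract's central object concrete: a common form of the fair (ensemble-size-adjusted) CRPS estimator replaces the standard $1/(2M^2)$ weighting of the pairwise member term with $1/(2M(M-1))$, which is unbiased with respect to ensemble size $M$ only when members are conditionally independent draws. The sketch below is illustrative only; the function name and exact normalization are assumptions, not the paper's implementation.

```python
import numpy as np

def fair_crps(members, obs):
    """Fair (ensemble-size-adjusted) CRPS estimate for one forecast case.

    Assumes members are exchangeable, conditionally independent draws.
    The 1/(2M(M-1)) coefficient removes the finite-ensemble bias of the
    naive 1/(2M^2) estimator (a sketch of the aCRPS idea, hypothetical
    implementation, not the paper's code).
    """
    x = np.asarray(members, dtype=float)
    m = x.size
    # mean absolute error of each member against the observation
    term1 = np.mean(np.abs(x - obs))
    # sum of pairwise absolute differences between members (i != j terms
    # dominate; the i == j diagonal contributes zero)
    diffs = np.abs(x[:, None] - x[None, :])
    term2 = diffs.sum() / (2.0 * m * (m - 1))
    return term1 - term2
```

For example, a two-member ensemble `[0.0, 1.0]` verifying against `obs=0.5` scores exactly zero under this estimator, whereas the naive $1/(2M^2)$ form would penalize the spread; post-processing that couples members (e.g. through a shared ensemble mean) breaks the independence assumption this correction relies on, which is the unfairness the paper demonstrates.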

Top-level tags: machine learning, model evaluation, systems
Detailed tags: ensemble forecasting, score fairness, deep learning, post-processing, transformer architectures, probabilistic forecasting

Ensemble-size-dependence of deep-learning post-processing methods that minimize an (un)fair score: motivating examples and a proof-of-concept solution


1️⃣ One-sentence summary

This paper finds that when fair scores such as aCRPS are used as loss functions to train deep-learning post-processing models for weather forecasts, architectures that introduce dependencies between ensemble members make performance sensitive to the number of members used in training and in real-time forecasting, and so unreliable; it proposes a trajectory-transformer method that preserves conditional independence between members, ensuring forecast quality does not depend on ensemble size.

Source: arXiv: 2602.15830