Timer-S1: A Billion-Scale Time Series Foundation Model with Serial Scaling
1️⃣ One-Sentence Summary
This paper introduces Timer-S1, a powerful time series foundation model that overcomes the scaling bottleneck of existing models through a novel Serial Scaling approach. While preserving efficient inference, it leverages massive training data and a new training objective to achieve state-of-the-art performance on time series forecasting.
We introduce Timer-S1, a strong Mixture-of-Experts (MoE) time series foundation model with 8.3B total parameters, 0.75B parameters activated per token, and a context length of 11.5K. To overcome the scalability bottleneck in existing pre-trained time series foundation models, we perform Serial Scaling along three dimensions: model architecture, dataset, and training pipeline. Timer-S1 integrates sparse TimeMoE blocks and TimeSTP blocks for Serial-Token Prediction (STP), a generic training objective that adheres to the serial nature of forecasting. The proposed paradigm introduces serial computations to improve long-term predictions while avoiding the costly rolling-style inference and pronounced error accumulation of standard next-token prediction. Pursuing a high-quality and unbiased training dataset, we curate TimeBench, a corpus of one trillion time points, and apply meticulous data augmentation to mitigate predictive bias. We further pioneer a post-training stage, including continued pre-training and long-context extension, to enhance short-term and long-context performance. Evaluated on the large-scale GIFT-Eval leaderboard, Timer-S1 achieves state-of-the-art forecasting performance, attaining the best MASE and CRPS scores among pre-trained models. Timer-S1 will be released to facilitate further research.
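The distinction the abstract draws between rolling-style next-token inference and a serial multi-step objective can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual code: `step_model` and `multi_model` are stand-in persistence baselines, and the point is only the control flow, where rolling inference feeds each prediction back as input (so errors can compound over the horizon), while a direct multi-step pass emits the whole horizon in one call.

```python
import numpy as np

def rolling_forecast(history, step_model, horizon):
    """Autoregressive rolling: predict one point, append it, repeat.
    Each predicted value becomes input for the next step, so any error
    is carried forward and can accumulate across the horizon."""
    ctx = list(history)
    preds = []
    for _ in range(horizon):
        nxt = step_model(np.asarray(ctx))  # one-step-ahead prediction
        preds.append(nxt)
        ctx.append(nxt)                    # feed the prediction back in
    return np.asarray(preds)

def direct_forecast(history, multi_model, horizon):
    """Single forward pass: the model emits all `horizon` points at once,
    so no predicted value is re-consumed as input."""
    return multi_model(np.asarray(history), horizon)

# Toy "models": last-value persistence baselines (hypothetical stand-ins).
step_model = lambda ctx: ctx[-1]
multi_model = lambda ctx, h: np.full(h, ctx[-1])

history = np.array([1.0, 2.0, 3.0])
print(rolling_forecast(history, step_model, 4))   # [3. 3. 3. 3.]
print(direct_forecast(history, multi_model, 4))   # [3. 3. 3. 3.]
```

With these trivial baselines both paths agree; the design difference is that the rolling loop makes `horizon` sequential model calls on partially predicted inputs, whereas the direct pass makes one call on real history only, which is the inference-cost and error-accumulation contrast the STP objective targets.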
Source: arXiv:2603.04791