Multi-Modal Time Series Prediction via Mixture of Modulated Experts
1️⃣ One-Sentence Summary
This paper proposes a new method called "Expert Modulation," which lets textual information directly control the internal "expert" modules of the forecasting model. This sidesteps the difficulty traditional methods face in fusing text with time series data when paired data are scarce or series differ widely in scale and characteristics, and it substantially improves the accuracy of multi-modal time series prediction.
Real-world time series exhibit complex and evolving dynamics, making accurate forecasting extremely challenging. Recent multi-modal forecasting methods leverage textual information such as news reports to improve prediction, but most rely on token-level fusion that mixes temporal patches with language tokens in a shared embedding space. However, such fusion can be ill-suited when high-quality time-text pairs are scarce and when time series exhibit substantial variation in scale and characteristics, thus complicating cross-modal alignment. In parallel, Mixture-of-Experts (MoE) architectures have proven effective for both time series modeling and multi-modal learning, yet many existing MoE-based modality integration methods still depend on token-level fusion. To address this, we propose Expert Modulation, a new paradigm for multi-modal time series prediction that conditions both routing and expert computation on textual signals, enabling direct and efficient cross-modal control over expert behavior. Through comprehensive theoretical analysis and experiments, our proposed method demonstrates substantial improvements in multi-modal time series prediction. The current code is available at this https URL
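The abstract describes conditioning both the router and the expert computation on textual signals, rather than mixing language tokens into the temporal embedding space. Since the paper's actual architecture is not given here, the following is only a minimal sketch of what such text-conditioned routing and expert modulation could look like; all parameter names and shapes, and the FiLM-style scale-and-shift modulation, are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: time-series features, text embedding,
# number of experts, and per-expert output size.
d_ts, d_txt, n_experts, d_out = 8, 4, 3, 8

# Hypothetical parameters (random, for illustration only).
W_gate = rng.normal(size=(d_ts + d_txt, n_experts))   # router sees both modalities
W_exp  = rng.normal(size=(n_experts, d_ts, d_out))    # one linear map per expert
W_mod  = rng.normal(size=(d_txt, 2 * d_out))          # text -> (scale, shift)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def modulated_moe(x_ts, x_txt):
    # Routing conditioned on the text as well as the time-series features.
    gates = softmax(np.concatenate([x_ts, x_txt]) @ W_gate)
    # Text-derived scale and shift modulate every expert's computation
    # (a FiLM-style choice assumed here, not taken from the paper).
    scale, shift = np.split(x_txt @ W_mod, 2)
    outs = np.stack([(x_ts @ W_exp[e]) * (1.0 + scale) + shift
                     for e in range(n_experts)])
    # Gate-weighted combination of the modulated expert outputs.
    return gates @ outs

y = modulated_moe(rng.normal(size=d_ts), rng.normal(size=d_txt))
print(y.shape)  # (8,)
```

The key contrast with token-level fusion is that no language tokens enter the temporal representation: the text only steers which experts fire and how they transform the time-series features.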
Source: arXiv:2601.21547