TiMi: Empower Time Series Transformers with Multimodal Mixture of Experts
1️⃣ One-Sentence Summary
This paper proposes a new model named TiMi, which uses a large language model to generate causal reasoning about future trends and introduces a lightweight multimodal mixture-of-experts module, enabling time series forecasting models to effectively exploit textual and other multimodal information and achieve state-of-the-art predictive performance on multiple real-world datasets.
Multimodal time series forecasting has garnered significant attention for its potential to provide more accurate predictions than traditional single-modality models by leveraging rich information inherent in other modalities. However, due to fundamental challenges in modality alignment, existing methods often struggle to effectively incorporate multimodal data into predictions, particularly textual information that has a causal influence on time series fluctuations, such as emergency reports and policy announcements. In this paper, we reflect on the role of textual information in numerical forecasting and propose Time series transformers with Multimodal Mixture-of-Experts, TiMi, to unleash the causal reasoning capabilities of LLMs. Concretely, TiMi utilizes LLMs to generate inferences on future developments, which serve as guidance for time series forecasting. To seamlessly integrate both exogenous factors and time series into predictions, we introduce a Multimodal Mixture-of-Experts (MMoE) module as a lightweight plug-in to empower Transformer-based time series models for multimodal forecasting, eliminating the need for explicit representation-level alignment. Experimentally, our proposed TiMi demonstrates consistent state-of-the-art performance on sixteen real-world multimodal forecasting benchmarks, outperforming advanced baselines while offering both strong adaptability and interpretability.
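The abstract describes the MMoE module only at a high level: it fuses exogenous (e.g., LLM-derived textual) signals with time-series features through gated experts, without explicit representation-level alignment. A minimal NumPy sketch of such a gated multimodal mixture-of-experts layer is below; the dimensions, linear experts, and the `mmoe_forward` function are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
# d_ts: time-series feature size, d_txt: text-embedding size,
# d_out: forecast vector size, n_experts: number of experts.
d_ts, d_txt, d_out, n_experts = 8, 16, 4, 3
batch = 5

# Each expert is a linear map from the fused features to a forecast vector.
W_experts = rng.normal(0.0, 0.1, size=(n_experts, d_ts + d_txt, d_out))
# The gate scores each expert from the same fused features.
W_gate = rng.normal(0.0, 0.1, size=(d_ts + d_txt, n_experts))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mmoe_forward(ts_feat, txt_feat):
    """Concatenate modalities, gate over experts, mix expert outputs."""
    fused = np.concatenate([ts_feat, txt_feat], axis=-1)      # (B, d_ts+d_txt)
    gate = softmax(fused @ W_gate)                            # (B, n_experts)
    expert_out = np.einsum("bd,edo->beo", fused, W_experts)   # (B, E, d_out)
    return np.einsum("be,beo->bo", gate, expert_out), gate    # (B, d_out)

ts = rng.normal(size=(batch, d_ts))       # stand-in time-series features
txt = rng.normal(size=(batch, d_txt))     # stand-in LLM text embeddings
pred, gate = mmoe_forward(ts, txt)
```

Because the mixing happens at the output of per-expert transforms rather than by forcing the two modalities into a shared embedding space, a layer of this shape can be bolted onto an existing Transformer forecaster as a plug-in, which matches the "no explicit representation-level alignment" claim in the abstract.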
Source: arXiv: 2602.21693