arXiv submission date: 2026-02-24
📄 Abstract - T1: One-to-One Channel-Head Binding for Multivariate Time-Series Imputation

Imputing missing values in multivariate time series remains challenging, especially under diverse missing patterns and heavy missingness. Existing methods suffer from suboptimal performance as corrupted temporal features hinder effective cross-variable information transfer, amplifying reconstruction errors. Robust imputation requires both extracting temporal patterns from sparse observations within each variable and selectively transferring information across variables--yet current approaches excel at one while compromising the other. We introduce T1 (Time series imputation with 1-to-1 channel-head binding), a CNN-Transformer hybrid architecture that achieves robust imputation through Channel-Head Binding--a mechanism creating one-to-one correspondence between CNN channels and attention heads. This design enables selective information transfer: when missingness corrupts certain temporal patterns, their corresponding attention pathways adaptively down-weight based on remaining observable patterns while preserving reliable cross-variable connections through unaffected channels. Experiments on 11 benchmark datasets demonstrate that T1 achieves state-of-the-art performance, reducing MSE by 46% on average compared to the second-best baseline, with particularly strong gains under extreme sparsity (70% missing ratio). The model generalizes to unseen missing patterns without retraining and uses a consistent hyperparameter configuration across all datasets. The code is available at this https URL.
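The abstract's core mechanism, one-to-one correspondence between CNN channels and attention heads, can be illustrated with a minimal sketch. This is not the authors' code; it assumes scalar per-channel features and uses the raw features as queries, keys, and values purely to show the routing: with C channels and C heads, head i attends only over the sequence produced by channel i, so corruption in one channel cannot leak into another head's attention computation.

```python
# Hedged illustration of "channel-head binding" (hypothetical, simplified):
# each CNN channel's feature sequence is routed to exactly one attention head.
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def head_attention(seq):
    """Self-attention over one channel's sequence of scalars.
    For illustration, queries = keys = values = the raw features."""
    out = []
    for q in seq:
        weights = softmax([q * k for k in seq])  # attention scores for this query
        out.append(sum(w * v for w, v in zip(weights, seq)))
    return out

def channel_head_binding(features):
    """features: list of C channels, each a list of T scalar features.
    One head per channel: head i processes only channel i's sequence,
    so no cross-channel mixing happens inside the attention step."""
    return [head_attention(channel) for channel in features]
```

In the paper's actual architecture each head would operate on vector-valued channel features and cross-variable transfer happens elsewhere in the network; the sketch only shows the binding constraint itself.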

Top-level tags: machine learning, model training, data
Detailed tags: time series imputation, multivariate data, CNN-Transformer hybrid, attention mechanism, missing data

T1: One-to-One Channel-Head Binding for Multivariate Time-Series Imputation


1️⃣ One-sentence summary

This paper proposes T1, a new neural network model whose distinctive design binds convolutional channels one-to-one to attention heads. This lets it accurately impute multivariate time-series data even under heavy missingness, significantly outperforming existing methods across a wide range of datasets.

Source: arXiv:2602.21043