Learning Representations from Incomplete EHR Data with Dual-Masked Autoencoding
1️⃣ One-sentence summary
This paper proposes a new method called AID-MAE, which trains a model by simultaneously masking naturally missing values and a subset of observed values. This lets it learn more effective patient representations directly from sparse, irregular electronic health record data, and it outperforms existing methods on a range of clinical prediction tasks.
Learning from electronic health records (EHRs) time series is challenging due to irregular sampling, heterogeneous missingness, and the resulting sparsity of observations. Prior self-supervised methods either impute before learning, represent missingness through a dedicated input signal, or optimize solely for imputation, reducing their capacity to efficiently learn representations that support clinical downstream tasks. We propose the Augmented-Intrinsic Dual-Masked Autoencoder (AID-MAE), which learns directly from incomplete time series by applying an intrinsic missing mask to represent naturally missing values and an augmented mask that hides a subset of observed values for reconstruction during training. AID-MAE processes only the unmasked subset of tokens and consistently outperforms strong baselines, including XGBoost and DuETT, across multiple clinical tasks on two datasets. In addition, the learned embeddings naturally stratify patient cohorts in the representation space.
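The dual-masking idea in the abstract can be sketched in a few lines: an intrinsic mask marks naturally missing entries, while an augmented mask randomly hides a subset of the observed entries to serve as reconstruction targets; the encoder only ever sees the remaining visible tokens. The function name, the `augment_ratio` parameter, and the array layout below are illustrative assumptions, not the paper's actual API:

```python
import numpy as np

def dual_masks(x, augment_ratio=0.3, rng=None):
    """Build the two masks described in the abstract (illustrative sketch).

    intrinsic: True where the EHR value is naturally missing (NaN).
    augmented: True for a random subset of *observed* values, hidden
               during training and used as reconstruction targets.
    visible:   True for the observed values the encoder actually processes.
    """
    rng = rng or np.random.default_rng(0)
    intrinsic = np.isnan(x)                      # naturally missing entries
    observed = ~intrinsic
    augmented = observed & (rng.random(x.shape) < augment_ratio)
    visible = observed & ~augmented
    return intrinsic, augmented, visible

# toy "patient x time" matrix with natural missingness
x = np.array([[1.0, np.nan, 3.0, 4.0],
              [np.nan, 2.0, np.nan, 5.0]])
intrinsic, augmented, visible = dual_masks(x)

# the encoder would run only on x[visible]; the reconstruction loss
# is computed only at augmented positions, whose true values are known.
assert not np.any(intrinsic & augmented)         # the two masks are disjoint
assert np.all(intrinsic | augmented | visible)   # every entry is covered
```

The key distinction this preserves is that the loss is only computed where ground truth exists (augmented positions), so the model is never penalized for entries that were never measured.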
Source: arXiv: 2602.15159