Safe Distributionally Robust Feature Selection under Covariate Shift
1️⃣ One-Sentence Summary
This paper proposes a new method, safe-DRFS, for settings where the deployment environment may differ from the development environment: it safely selects a subset of sensors (features) guaranteed to contain every subset that could perform optimally across the plausible environments, avoiding the loss of critical sensors when the environment shifts.
In practical machine learning, the environments encountered during the model development and deployment phases often differ, especially when a model is used by many users in diverse settings. Learning models that maintain reliable performance across plausible deployment environments is known as distributionally robust (DR) learning. In this work, we study the problem of distributionally robust feature selection (DRFS), with a particular focus on sparse sensing applications motivated by industrial needs. In practical multi-sensor systems, a shared subset of sensors is typically selected prior to deployment based on performance evaluations using many available sensors. At deployment, individual users may further adapt or fine-tune models to their specific environments. When deployment environments differ from those anticipated during development, this strategy can result in systems lacking sensors required for optimal performance. To address this issue, we propose safe-DRFS, a novel approach that extends safe screening from conventional sparse modeling settings to a DR setting under covariate shift. Our method identifies a feature subset that encompasses all subsets that may become optimal across a specified range of input distribution shifts, with finite-sample theoretical guarantees of no false feature elimination.
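To make the screening idea concrete, here is a minimal illustrative sketch, not the paper's actual algorithm. It assumes a Lasso-style correlation test under covariate shift modeled as per-sample importance weights constrained to a box `[w_lo, w_hi]`: a feature is kept if its worst-case absolute weighted correlation with a reference residual can reach the regularization threshold for some admissible weight vector. The function names, the box-shaped uncertainty set, and the use of the zero solution's residual are all simplifying assumptions for illustration.

```python
import numpy as np

def worst_case_scores(X, residual, w_lo, w_hi):
    """Worst-case |sum_i w_i * x_ij * r_i| over the weight box, per feature.

    The score is linear in w, so its extremes lie at box corners:
    pick w_hi where the summand is positive and w_lo where it is negative
    (and vice versa for the minimum).
    """
    A = X * residual[:, None]                              # (n, d) summands
    s_max = (np.where(A > 0, w_hi[:, None], w_lo[:, None]) * A).sum(axis=0)
    s_min = (np.where(A > 0, w_lo[:, None], w_hi[:, None]) * A).sum(axis=0)
    return np.maximum(s_max, -s_min)                       # max of |score|

def screen(X, y, lam, w_lo, w_hi):
    """Keep features whose worst-case score can reach lam for some shift.

    Uses the residual at the zero solution (i.e., y itself) as an
    illustrative reference point; a real safe-screening rule would use a
    dual-feasible point with a certified optimality gap.
    """
    scores = worst_case_scores(X, y, w_lo, w_hi)
    return np.flatnonzero(scores >= lam)                   # surviving indices
```

Features eliminated this way could not enter the active set for any weight vector in the box, which mirrors the "no false feature elimination" guarantee in spirit: the kept subset covers all potentially optimal subsets under the modeled shifts.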
Source: arXiv:2603.16062