arXiv submission date: 2026-04-14
📄 Abstract - Fine-tuning Factor Augmented Neural Lasso for Heterogeneous Environments

Fine-tuning is a widely used strategy for adapting pre-trained models to new tasks, yet its methodology and theoretical properties in high-dimensional nonparametric settings with variable selection remain undeveloped. This paper introduces the fine-tuning factor augmented neural Lasso (FAN-Lasso), a transfer learning framework for high-dimensional nonparametric regression with variable selection that simultaneously handles covariate and posterior shifts. We use a low-rank factor structure to manage high-dimensional dependent covariates and propose a novel residual fine-tuning decomposition in which the target function is expressed as a transformation of a frozen source function and other variables to achieve transfer learning and nonparametric variable selection. This augmented feature from the source predictor allows for the transfer of knowledge to the target domain and reduces model complexity in the target domain. We derive minimax-optimal excess risk bounds for the fine-tuning FAN-Lasso, characterizing the precise conditions, in terms of relative sample sizes and function complexities, under which fine-tuning yields statistical acceleration over single-task learning. The proposed framework also provides a theoretical perspective on parameter-efficient fine-tuning methods. Extensive numerical experiments across diverse covariate- and posterior-shift scenarios demonstrate that the fine-tuning FAN-Lasso consistently outperforms standard baselines and achieves near-oracle performance even under severe target sample size constraints, empirically validating the derived rates.
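The residual fine-tuning idea above can be sketched numerically. The toy below is a minimal illustration under stated assumptions, not the paper's method: a plain linear least-squares model stands in for the factor-augmented neural network, an ISTA loop stands in for the Lasso solver, and all sample sizes, coefficients, and names (`f_src`, `ista_lasso`, etc.) are hypothetical. The key step it demonstrates is freezing the source predictor and feeding its output as an augmented feature into a sparse fit on the small target sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: sizes and coefficients are illustrative assumptions.
n_src, n_tgt, p = 500, 40, 20            # large source sample, tiny target sample
X_src = rng.normal(size=(n_src, p))
beta_src = np.zeros(p)
beta_src[:3] = [2.0, -1.5, 1.0]          # sparse true signal
y_src = X_src @ beta_src + 0.1 * rng.normal(size=n_src)

# Target domain with a posterior shift: same support, one perturbed coefficient.
X_tgt = rng.normal(size=(n_tgt, p))
beta_tgt = beta_src.copy()
beta_tgt[0] += 0.5
y_tgt = X_tgt @ beta_tgt + 0.1 * rng.normal(size=n_tgt)

# Step 1: fit the source predictor (least squares as a stand-in) and freeze it.
w_src, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)
def f_src(X):
    return X @ w_src                     # frozen: never updated on target data

# Step 2: residual fine-tuning -- augment the target covariates with the frozen
# source prediction, then fit an L1-penalized model on the augmented design.
Z = np.column_stack([f_src(X_tgt), X_tgt])

def ista_lasso(Z, y, lam=0.05, n_iter=3000):
    """Proximal gradient (ISTA) for (1/2n)||y - Zw||^2 + lam * ||w||_1."""
    w = np.zeros(Z.shape[1])
    step = len(y) / np.linalg.norm(Z, 2) ** 2   # 1/L with L = s_max(Z)^2 / n
    for _ in range(n_iter):
        grad = Z.T @ (Z @ w - y) / len(y)
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lam * step, 0.0)  # soft-threshold
    return w

w_tuned = ista_lasso(Z, y_tgt)
mse_transfer = np.mean((Z @ w_tuned - y_tgt) ** 2)
mse_source_only = np.mean((f_src(X_tgt) - y_tgt) ** 2)
print(mse_transfer, mse_source_only)     # fine-tuning should cut the target error
```

With only 40 target observations, the augmented feature carries most of the source signal, so the target-side Lasso only needs to learn a sparse correction, mirroring the reduced target-domain complexity the abstract describes.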

Top-level tags: machine learning theory model training
Detailed tags: transfer learning fine-tuning high-dimensional regression variable selection nonparametric

Fine-tuning Factor Augmented Neural Lasso for Heterogeneous Environments


1️⃣ One-Sentence Summary

This paper proposes a new method, FAN-Lasso, whose novel fine-tuning framework both selects the key variables and leverages existing knowledge to improve predictive performance on new tasks in high-dimensional settings with distribution shift, approaching oracle performance even when target data are scarce.

Source: arXiv:2604.12288