Multi-Task Learning with Additive U-Net for Image Denoising and Classification
1️⃣ One-Sentence Summary
This paper proposes a new network architecture called Additive U-Net, which connects the encoder and decoder through a simple additive fusion scheme. It not only performs strongly on image denoising, but also handles joint denoising and image classification more stably and efficiently, without increasing model complexity.
We investigate additive skip fusion in U-Net architectures for image denoising and denoising-centric multi-task learning (MTL). By replacing concatenative skips with gated additive fusion, the proposed Additive U-Net (AddUNet) constrains shortcut capacity while preserving fixed feature dimensionality across depth. This structural regularization induces controlled encoder-decoder information flow and stabilizes joint optimization. Across single-task denoising and joint denoising-classification settings, AddUNet achieves competitive reconstruction performance with improved training stability. In MTL, learned skip weights exhibit systematic task-aware redistribution: shallow skips favor reconstruction, while deeper features support discrimination. Notably, reconstruction remains robust even under limited classification capacity, indicating implicit task decoupling through additive fusion. These findings show that simple constraints on skip connections act as an effective architectural regularizer for stable and scalable multi-task learning without increasing model complexity.
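The core mechanism — replacing concatenative skips with gated additive fusion so that feature dimensionality stays fixed across depth — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the scalar gate parameterization below is an assumption, since this summary does not specify the exact form of the learned skip weights.

```python
import numpy as np

def concat_skip(dec, enc):
    # Conventional U-Net skip: channel concatenation doubles feature width,
    # so decoder blocks must grow to absorb the extra channels.
    return np.concatenate([dec, enc], axis=1)

def additive_skip(dec, enc, gate):
    # Gated additive fusion (sketch): a learned gate scales the encoder
    # shortcut before summation, constraining shortcut capacity while the
    # channel count stays unchanged at every depth.
    return dec + gate * enc

# Toy feature maps: (batch, channels, height, width)
dec = np.ones((1, 64, 32, 32))
enc = np.full((1, 64, 32, 32), 0.5)

print(concat_skip(dec, enc).shape)         # channels double: (1, 128, 32, 32)
print(additive_skip(dec, enc, 0.3).shape)  # channels fixed:  (1, 64, 32, 32)
```

Because the additive path preserves dimensionality, the decoder's per-level width is identical with or without the skip, which is what lets the skip weights act as a structural regularizer rather than an expansion of model capacity.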
Source: arXiv:2602.12649