GR4CIL: Gap-compensated Routing for CLIP-based Class Incremental Learning
1️⃣ One-sentence summary
This paper proposes GR4CIL, which preserves task-specific visual knowledge, maintains a stable shared textual semantic space, and introduces an orthogonal compensation mechanism to reduce bias induced by the modality gap. This enables more reliable task identification and knowledge routing when using CLIP for class-incremental learning, without sacrificing zero-shot generalization.
Class-Incremental Learning (CIL) aims to continuously acquire new categories while preserving previously learned knowledge. Recently, Contrastive Language-Image Pre-trained (CLIP) models have shown strong potential for CIL due to their powerful generalization ability. However, existing methods still face two key challenges: shared-parameter adaptation tends to cause old-knowledge drift, and task-specific knowledge organization often leads to poorly calibrated cross-task responses, making reliable routing difficult. To address these issues, we propose GR4CIL, a framework combining task discrimination and knowledge routing for CLIP-based CIL. GR4CIL preserves task-specific visual knowledge while maintaining an incrementally stable shared textual semantic space, thereby reducing interference across tasks. Moreover, we introduce an orthogonal compensation mechanism to mitigate modality-gap-induced bias, enhance within-task discrimination, and enlarge the score margin between the ground-truth task and competing tasks. As a result, GR4CIL enables more reliable task-aware routing over learned knowledge while retaining the zero-shot generalization capability. Experiments on multiple benchmarks show that GR4CIL consistently outperforms strong baselines.
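The abstract does not specify how the orthogonal compensation is implemented. As a purely illustrative sketch of one plausible interpretation (not the paper's actual method), the modality gap between CLIP image and text embeddings can be estimated as the difference of their mean directions, and image features can be projected onto the orthogonal complement of that gap direction before scoring:

```python
import numpy as np

def orthogonal_gap_compensation(img_feats: np.ndarray, txt_feats: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: remove the component of image features that lies
    along an estimated modality-gap direction, then re-normalize.

    img_feats, txt_feats: (N, d) and (M, d) arrays of L2-normalized embeddings.
    """
    # Estimate the gap direction as the difference of modality centroids
    g = img_feats.mean(axis=0) - txt_feats.mean(axis=0)
    g = g / np.linalg.norm(g)
    # Project each image feature onto the orthogonal complement of g
    comp = img_feats - np.outer(img_feats @ g, g)
    # Re-normalize so cosine-similarity scoring remains well-defined
    return comp / np.linalg.norm(comp, axis=1, keepdims=True)
```

After this projection, every compensated feature has zero component along the estimated gap direction, so cross-modal similarity scores are no longer dominated by the shared image-vs-text offset. Whether GR4CIL uses this particular projection is an assumption; the paper only states that the mechanism is orthogonal and gap-compensating.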
Source: arXiv: 2604.17822