ACIL: Active Class Incremental Learning for Image Classification
1️⃣ One-sentence summary
This paper proposes a new framework named ACIL, which combines active learning with class incremental learning: while continually learning new classes, it intelligently selects a small number of the most informative samples for human annotation, drastically reducing annotation cost while effectively preventing the model from forgetting previously learned knowledge.
Continual learning (or class incremental learning) is a realistic learning scenario for computer vision systems, where deep neural networks are trained on episodic data, and the data from previous episodes are generally inaccessible to the model. Existing research in this domain has primarily focused on avoiding catastrophic forgetting, which occurs due to the continuously changing class distributions in each episode and the inaccessibility of the data from previous episodes. However, these methods assume that all the training samples in every episode are annotated; this not only incurs a huge annotation cost, but also wastes annotation effort, since most of the samples in a given episode will not be accessible to the model in subsequent episodes. Active learning algorithms identify the salient and informative samples from large amounts of unlabeled data and are instrumental in reducing the human annotation effort required to train a deep neural network. In this paper, we propose ACIL, a novel active learning framework for class incremental learning settings. We exploit a criterion based on uncertainty and diversity to identify the exemplar samples that need to be annotated in each episode and appended to the data in the next episode. Such a framework can drastically reduce annotation cost and can also avoid catastrophic forgetting. Our extensive empirical analyses on several vision datasets corroborate the promise and potential of our framework against relevant baselines.
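The abstract only states that exemplars are chosen by "a criterion based on uncertainty and diversity" without giving the exact scoring rule. Below is a minimal, hypothetical sketch of how such a selection step could look, assuming predictive entropy as the uncertainty signal, a k-center-style farthest-point distance as the diversity signal, and an `alpha` trade-off weight; none of these specifics (function names, `alpha`, the combination rule) come from the paper itself.

```python
import numpy as np

def predictive_entropy(probs):
    """Uncertainty score: entropy of the model's softmax output per sample."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_exemplars(features, probs, budget, alpha=0.5):
    """Greedily pick `budget` samples that are both uncertain and diverse.

    features: (N, D) array of model embeddings for unlabeled samples
    probs:    (N, C) array of softmax predictions for the same samples
    alpha:    assumed trade-off between uncertainty and diversity
    """
    uncertainty = predictive_entropy(probs)
    uncertainty = uncertainty / (uncertainty.max() + 1e-12)

    selected = []
    # distance of every sample to its closest already-selected sample
    min_dist = np.full(len(features), np.inf)

    for _ in range(budget):
        if selected:
            diversity = min_dist / (min_dist.max() + 1e-12)
        else:
            diversity = np.ones(len(features))  # no exemplar picked yet
        score = alpha * uncertainty + (1 - alpha) * diversity
        score[selected] = -np.inf  # never re-pick a sample
        idx = int(np.argmax(score))
        selected.append(idx)
        # update nearest-selected distances (k-center greedy update)
        d = np.linalg.norm(features - features[idx], axis=1)
        min_dist = np.minimum(min_dist, d)

    return selected
```

The selected indices would then be sent for human annotation and carried over as exemplars into the next episode's training data, which is the role the abstract assigns to this selection step.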
Source: arXiv: 2602.04252