
📄 Abstract - Mitigating Intra- and Inter-modal Forgetting in Continual Learning of Unified Multimodal Models

Unified Multimodal Generative Models (UMGMs) unify visual understanding and image generation within a single autoregressive framework. However, their ability to continually learn new tasks is severely hindered by catastrophic forgetting, both within a modality (intra-modal) and across modalities (inter-modal). While intra-modal forgetting has been studied in prior continual learning (CL) work, inter-modal forgetting remains largely unexplored. In this paper, we identify and empirically validate this phenomenon in UMGMs and provide a theoretical explanation rooted in gradient conflict between modalities. To address both intra- and inter-modal forgetting, we propose Modality-Decoupled Experts (MoDE), a lightweight and scalable architecture that isolates modality-specific updates to mitigate the gradient conflict and leverages knowledge distillation to prevent catastrophic forgetting and preserve pre-trained capabilities. Unlike previous CL methods that remain modality-coupled and suffer from modality gradient conflict, MoDE explicitly decouples modalities to prevent interference. Experiments across diverse benchmarks demonstrate that MoDE significantly mitigates both inter- and intra-modal forgetting, outperforming prior CL baselines in unified multimodal generation settings. Codes will be publicly available: this https URL
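The abstract describes the core mechanism but no code is given in this summary. Below is a minimal, hypothetical sketch of the modality-decoupled-experts idea under stated assumptions: a frozen pre-trained backbone layer, one lightweight adapter ("expert") per modality so that understanding and generation updates never share trainable parameters, and a knowledge-distillation loss against the frozen model's outputs. The names `ModalityExpert`, `MoDELayer`, and `kd_loss`, and all hyperparameters, are illustrative and do not come from the paper or its released code.

```python
# Hypothetical sketch of modality-decoupled experts with knowledge distillation.
# Class/function names are illustrative, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityExpert(nn.Module):
    """Lightweight bottleneck adapter trained for a single modality only."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(F.relu(self.down(x)))


class MoDELayer(nn.Module):
    """Wraps a frozen backbone layer and routes activations to a per-modality
    expert, so gradients from generation tasks never update the understanding
    expert and vice versa (the decoupling intended to avoid gradient conflict)."""

    def __init__(self, backbone_layer: nn.Module, dim: int):
        super().__init__()
        self.backbone_layer = backbone_layer
        for p in self.backbone_layer.parameters():
            p.requires_grad = False  # keep pre-trained weights frozen
        self.experts = nn.ModuleDict({
            "understanding": ModalityExpert(dim),
            "generation": ModalityExpert(dim),
        })

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        h = self.backbone_layer(x)
        # Residual, modality-specific update from the selected expert.
        return h + self.experts[modality](h)


def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 2.0) -> torch.Tensor:
    """Distill from the frozen pre-trained model to preserve prior capabilities."""
    return F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature * temperature
```

In such a setup, a new continual-learning task would update only the expert matching its modality, while the distillation term is computed against the frozen original model's outputs; exact routing granularity and loss weighting would follow the paper.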

Top-level tags: multi-modal, model training, machine learning
Detailed tags: continual learning, catastrophic forgetting, multimodal models, gradient conflict, knowledge distillation

Mitigating Intra- and Inter-modal Forgetting in Continual Learning of Unified Multimodal Models


1️⃣ One-sentence summary

This paper proposes MoDE, a lightweight architecture that decouples the learning processes of different modalities. It addresses a key difficulty unified multimodal models face when continually learning new tasks: they not only forget old knowledge within a single modality, but also suffer cross-modal interference that causes forgetting across modalities.

