A Theoretical Framework for Modular Learning of Robust Generative Models
1️⃣ One-Sentence Summary
This paper proposes a theoretical framework showing that by composing multiple small domain-expert models and coordinating them with a single robust "gating" mechanism, one can build generative AI systems that match or even exceed a single large monolithic model, without hand-tuning data weights, while offering better interpretability and generalization.
Training large-scale generative models is resource-intensive and relies heavily on heuristic dataset weighting. We address two fundamental questions: can we train Large Language Models (LLMs) modularly, combining small, domain-specific experts to match monolithic performance, and can we do so robustly for any data mixture, eliminating heuristic tuning? We present a theoretical framework for modular generative modeling in which a set of pre-trained experts is combined via a gating mechanism. We define the space of normalized gating functions, $G_{1}$, and formulate the problem as a minimax game to find a single robust gate that minimizes divergence to the worst-case data mixture. We prove the existence of such a robust gate using Kakutani's fixed-point theorem and show that modularity acts as a strong regularizer, with generalization bounds scaling with the lightweight gate's complexity. Furthermore, we prove that this modular approach can theoretically outperform models retrained on the aggregate data, with the gap characterized by the Jensen-Shannon Divergence. Finally, we introduce a scalable Stochastic Primal-Dual algorithm and a Structural Distillation method for efficient inference. Empirical results on synthetic and real-world datasets confirm that our modular architecture effectively mitigates gradient conflict and can robustly outperform monolithic baselines.
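To make the minimax formulation and the primal-dual idea concrete, the sketch below implements a toy robust-gating game of the form $\min_{g} \max_{\lambda \in \Delta_K} \sum_k \lambda_k \, \mathrm{NLL}_k(g)$, where $\mathrm{NLL}_k$ is the negative log-likelihood of the gated mixture on data from domain $k$. This is a minimal illustration, not the paper's algorithm: the frozen 1-D Gaussian experts, the input-independent gate (a single point in the simplex rather than a full gating function in $G_1$), the NLL surrogate for the divergence, and the exponentiated-gradient dual step are all assumptions made here for readability.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation) of a
# stochastic primal-dual update for a robust gate over fixed experts.
import torch

torch.manual_seed(0)
K = 3                                     # number of domains / frozen experts
expert_mu = torch.tensor([-2.0, 0.0, 3.0])
expert_sigma = torch.tensor([1.0, 0.5, 1.5])

def expert_log_probs(x):
    """Log-density of each frozen Gaussian expert at points x, shape (batch, K)."""
    x = x.unsqueeze(-1)                   # (batch, 1), broadcasts against (K,)
    var = expert_sigma ** 2
    return -0.5 * ((x - expert_mu) ** 2 / var + torch.log(2 * torch.pi * var))

def domain_batch(k, n=256):
    """Sample a batch from domain k (here: the k-th expert's true distribution)."""
    return expert_mu[k] + expert_sigma[k] * torch.randn(n)

# Primal variable: lightweight gate logits (the frozen experts carry the capacity).
gate_logits = torch.zeros(K, requires_grad=True)
# Dual variable: adversarial mixture weights over domains, on the simplex.
lam = torch.full((K,), 1.0 / K)

opt = torch.optim.SGD([gate_logits], lr=0.05)
eta_dual = 0.5                            # step size for the exponentiated-gradient ascent

for step in range(500):
    gate = torch.softmax(gate_logits, dim=0)           # point in the simplex
    # Per-domain NLL of the gated mixture q(x) = sum_k gate_k * expert_k(x).
    per_domain_nll = torch.stack([
        -torch.logsumexp(torch.log(gate) + expert_log_probs(domain_batch(k)), dim=-1).mean()
        for k in range(K)
    ])
    # Primal step: descend on the lambda-weighted (worst-case surrogate) loss.
    loss = (lam * per_domain_nll).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual step: mirror ascent pushes lambda toward the currently worst-served domains.
    with torch.no_grad():
        lam = lam * torch.exp(eta_dual * per_domain_nll)
        lam = lam / lam.sum()

print("robust gate weights:", torch.softmax(gate_logits, dim=0).tolist())
print("adversarial mixture:", lam.tolist())
```

In this toy setting the dual ascent keeps shifting weight onto whichever domain the current gate serves worst, so the learned gate settles into a compromise that no single data mixture can exploit, which is the behavior the robust-gate existence result is about.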
Source: arXiv: 2602.17554