arXiv submission date: 2026-03-19
📄 Abstract - Correlation-Weighted Multi-Reward Optimization for Compositional Generation

Text-to-image models produce images that align well with natural language prompts, but compositional generation has long been a central challenge. Models often struggle to satisfy multiple concepts within a single prompt, frequently omitting some concepts and resulting in partial success. Such failures highlight the difficulty of jointly optimizing multiple concepts during reward optimization, where competing concepts can interfere with one another. To address this limitation, we propose Correlation-Weighted Multi-Reward Optimization, a framework that leverages the correlation structure among concept rewards to adaptively weight each attribute concept in optimization. By accounting for interactions among concepts, our method balances competing reward signals and emphasizes concepts that are partially satisfied yet inconsistently generated across samples, improving compositional generation. Specifically, we decompose multi-concept prompts into pre-defined concept groups (e.g., objects, attributes, and relations) and obtain reward signals from dedicated reward models for each concept. We then adaptively reweight these rewards, assigning higher weights to conflicting or hard-to-satisfy concepts using correlation-based difficulty estimation. By focusing optimization on the most challenging concepts within each group, our method encourages the model to consistently satisfy all requested attributes simultaneously. We apply our approach to train state-of-the-art diffusion models, SD3.5 and FLUX.1-dev, and demonstrate consistent improvements on challenging multi-concept benchmarks, including ConceptMix, GenEval 2, and T2I-CompBench.
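The core idea of correlation-based reweighting can be sketched in a few lines. The snippet below is an illustrative approximation, not the paper's exact formulation: given a batch of per-concept reward scores, it upweights concepts that are inconsistently satisfied across samples (high variance) or that conflict with other concepts (negative correlation), then aggregates rewards with those weights. The specific difficulty measure and softmax weighting here are assumptions for illustration.

```python
import numpy as np

def correlation_weighted_reward(rewards: np.ndarray) -> np.ndarray:
    """Sketch of correlation-weighted multi-reward aggregation.

    rewards: (n_samples, n_concepts) per-concept scores for a batch of
    generations from the same multi-concept prompt.
    Returns a (n_samples,) array of scalar rewards.
    """
    n_samples, n_concepts = rewards.shape
    # Correlation structure among concept rewards across the batch.
    corr = np.corrcoef(rewards, rowvar=False)
    corr = np.nan_to_num(corr, nan=0.0)  # guard against constant columns
    # Conflict: how negatively a concept correlates with the others on average.
    conflict = -(corr.sum(axis=1) - 1.0) / max(n_concepts - 1, 1)
    # Inconsistency: variance of each concept's reward across samples.
    inconsistency = rewards.var(axis=0)
    # Difficulty combines both signals (an assumed, illustrative choice).
    difficulty = np.clip(conflict, 0.0, None) + inconsistency
    # Softmax over difficulty: harder concepts get larger weights.
    weights = np.exp(difficulty)
    weights /= weights.sum()
    # Weighted sum of per-concept rewards for each sample.
    return rewards @ weights
```

Because the weights are nonnegative and sum to one, each sample's scalar reward is a convex combination of its per-concept scores, so no single easy concept can dominate the optimization signal.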

Top-level tags: model training, multi-modal, AIGC
Detailed tags: text-to-image, compositional generation, reward optimization, diffusion models, multi-reward learning

Correlation-Weighted Multi-Reward Optimization for Compositional Generation


1️⃣ One-sentence summary

This paper proposes a new method, Correlation-Weighted Multi-Reward Optimization, which analyzes the correlations among different concept rewards and adaptively adjusts their optimization weights, effectively improving text-to-image models' compositional generation on complex multi-concept prompts.

Source: arXiv:2603.18528