MultiBanana: A Challenging Benchmark for Multi-Reference Text-to-Image Generation
1️⃣ One-Sentence Summary
This paper introduces a new benchmark called MultiBanana, which systematically evaluates AI models' ability to generate new images from multiple reference images, and reveals the strengths and weaknesses of existing models when faced with complex variations in the number, style, and scale of reference images.
Recent text-to-image generation models have acquired the ability of multi-reference generation and editing: the ability to inherit the appearance of subjects from multiple reference images and re-render them in new contexts. However, existing benchmark datasets often focus on generation with a single or a few reference images, which prevents us from measuring how model performance advances, or from identifying weaknesses, under different multi-reference conditions. In addition, their task definitions remain vague, typically limited to axes such as "what to edit" or "how many references are given", and therefore fail to capture the intrinsic difficulty of multi-reference settings. To address this gap, we introduce $\textbf{MultiBanana}$, which is carefully designed to assess the edge of model capabilities by widely covering multi-reference-specific problems at scale: (1) varying the number of references, (2) domain mismatch among references (e.g., photo vs. anime), (3) scale mismatch between reference and target scenes, (4) references containing rare concepts (e.g., a red banana), and (5) multilingual textual references for rendering. Our analysis of a variety of text-to-image models reveals their relative strengths, typical failure modes, and areas for improvement. MultiBanana will be released as an open benchmark to push the boundaries of multi-reference image generation and to establish a standardized basis for fair comparison. Our data and code are available at this https URL .