📄 Abstract - VQ-VA World: Towards High-Quality Visual Question-Visual Answering

This paper studies Visual Question-Visual Answering (VQ-VA): generating an image, rather than text, in response to a visual question -- an ability that has recently emerged in proprietary systems such as NanoBanana and GPT-Image. To also bring this capability to open-source models, we introduce VQ-VA World, a data-centric framework built around an agentic pipeline for large-scale, targeted data construction. Leveraging web-scale deployment, this pipeline crawls a massive amount of ~1.8M high-quality, interleaved image-text samples for model training. For evaluation, we further release IntelligentBench, a human-curated benchmark that systematically assesses VQ-VA along the aspects of world knowledge, design knowledge, and reasoning. Training with VQ-VA World data yields strong empirical gains: it helps LightFusion attain 53.06 on IntelligentBench, substantially surpassing the best prior open-source baselines (i.e., 7.78 from vanilla LightFusion; 1.94 from UniWorld-V1), and significantly narrowing the gap toward leading proprietary systems (e.g., 81.67 from NanoBanana; 82.64 from GPT-Image). By releasing the full suite of model weights, datasets, and pipelines, we hope to stimulate future research on VQ-VA.

Top-level tags: computer vision, multi-modal, model training
Detailed tags: visual question answering, visual answering, data generation, benchmark evaluation, image editing

VQ-VA World: Towards High-Quality Visual Question-Visual Answering (a data-centric framework for the VQ-VA task)


1️⃣ One-Sentence Summary

This paper introduces VQ-VA World, a data-centric framework whose agentic pipeline collects ~1.8M high-quality interleaved image-text samples, and releases IntelligentBench, a human-curated benchmark; training on this data substantially improves open-source models on the visual question-visual answering task and narrows the gap to proprietary systems.


2️⃣ Key Contributions

1. VQ-VA World data framework: a data-centric framework built around large-scale, targeted construction of interleaved image-text training data (~1.8M samples).

2. Agentic pipeline design: a web-scale deployment that crawls and curates high-quality samples for model training.

3. IntelligentBench evaluation benchmark: a human-curated benchmark that systematically assesses VQ-VA along world knowledge, design knowledge, and reasoning.


3️⃣ Main Results and Value

Result highlights

- Training with VQ-VA World data helps LightFusion reach 53.06 on IntelligentBench, far surpassing the best prior open-source baselines (7.78 for vanilla LightFusion; 1.94 for UniWorld-V1).
- The gap to leading proprietary systems narrows significantly (NanoBanana: 81.67; GPT-Image: 82.64).

Practical value

- The full suite of model weights, datasets, and pipelines is released, bringing VQ-VA capability to open-source models and supporting future research.


4️⃣ Glossary

- VQ-VA (Visual Question-Visual Answering): generating an image, rather than text, in response to a visual question.
- IntelligentBench: a human-curated benchmark assessing VQ-VA along world knowledge, design knowledge, and reasoning.
- Agentic pipeline: an automated, agent-driven pipeline for large-scale, targeted data construction.
