📄 Paper Summary
VisPlay: Self-Evolving Vision-Language Models from Images
1️⃣ One-Sentence Summary
This paper proposes VisPlay, a self-evolving reinforcement learning framework that lets a vision-language model exploit large amounts of unlabeled image data: the model plays two roles, a questioner and an answerer, that train each other, autonomously improving its visual reasoning and significantly raising performance across multiple benchmarks.
Reinforcement learning (RL) provides a principled framework for improving Vision-Language Models (VLMs) on complex reasoning tasks. However, existing RL approaches often rely on human-annotated labels or task-specific heuristics to define verifiable rewards, both of which are costly and difficult to scale. We introduce VisPlay, a self-evolving RL framework that enables VLMs to autonomously improve their reasoning abilities using large amounts of unlabeled image data. Starting from a single base VLM, VisPlay assigns the model two interacting roles: an Image-Conditioned Questioner that formulates challenging yet answerable visual questions, and a Multimodal Reasoner that generates silver responses. These roles are jointly trained with Group Relative Policy Optimization (GRPO), which incorporates diversity and difficulty rewards to balance the complexity of generated questions against the quality of the silver answers. VisPlay scales efficiently across two model families. When trained on Qwen2.5-VL and MiMo-VL, VisPlay achieves consistent improvements in visual reasoning, compositional generalization, and hallucination reduction across eight benchmarks, including MM-Vet and MMMU, demonstrating a scalable path toward self-evolving multimodal intelligence. The project page is available at this https URL
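To make the GRPO training signal concrete, here is a minimal sketch of the two pieces the abstract mentions: group-relative advantage normalization, and a hypothetical difficulty-shaped reward for the Questioner that peaks when questions are "challenging yet answerable" (i.e., the Reasoner succeeds about half the time). The reward-shaping details and the `diversity_bonus` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import statistics


def group_relative_advantages(rewards):
    """GRPO core idea: normalize each sampled response's reward
    against the mean and std of its own sampled group, so no
    learned value model / critic is needed."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + 1e-8) for r in rewards]


def questioner_reward(answer_scores, difficulty_weight=1.0, diversity_bonus=0.0):
    """Hypothetical shaping (an assumption, not the paper's exact rule):
    a question earns high reward when the Reasoner sometimes succeeds
    and sometimes fails -- i.e., it is answerable but not trivial.
    `answer_scores` are 0/1 correctness scores over sampled answers;
    `diversity_bonus` stands in for a novelty term over question text."""
    p = sum(answer_scores) / len(answer_scores)  # Reasoner success rate
    difficulty = 1.0 - abs(2.0 * p - 1.0)        # peaks at p = 0.5, zero at p = 0 or 1
    return difficulty_weight * difficulty + diversity_bonus
```

A question every sampled answer gets right (or wrong) yields zero difficulty reward, pushing the Questioner toward the frontier of what the Reasoner can currently solve; the group-normalized advantages then update both roles without a separate critic network.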