Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation
1️⃣ One-Sentence Summary
This paper presents the first systematic exploration of applying reinforcement learning to text-to-3D generation: it studies reward design, improves RL algorithms, and introduces a new benchmark, ultimately producing a model that generates 3D objects from text descriptions, from coarse shapes to fine-grained textures.
Reinforcement learning (RL), previously shown to be effective in large language and multi-modal models, has recently been extended to enhance 2D image generation. However, applying RL to 3D generation remains largely unexplored due to the higher spatial complexity of 3D objects, which require globally consistent geometry and fine-grained local textures. This makes 3D generation highly sensitive to reward design and the choice of RL algorithm. To address these challenges, we conduct the first systematic study of RL for text-to-3D autoregressive generation across several dimensions. (1) Reward designs: We evaluate reward dimensions and model choices, showing that alignment with human preference is crucial, and that general multi-modal models provide robust signals for 3D attributes. (2) RL algorithms: We study GRPO variants, highlighting the effectiveness of token-level optimization, and further investigate the scaling of training data and iterations. (3) Text-to-3D benchmarks: Since existing benchmarks fail to measure implicit reasoning abilities in 3D generation models, we introduce MME-3DR. (4) Advanced RL paradigms: Motivated by the natural hierarchy of 3D generation, we propose Hi-GRPO, which optimizes global-to-local hierarchical 3D generation through dedicated reward ensembles. Based on these insights, we develop AR3D-R1, the first RL-enhanced text-to-3D model, proficient from coarse shape generation to texture refinement. We hope this study provides insights into RL-driven reasoning for 3D generation. Code is released at this https URL.
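To make the algorithmic ideas concrete, below is a minimal sketch, not the authors' implementation, of the two ingredients the abstract highlights: group-relative advantages in the style of GRPO and a token-level clipped objective. All function names, shapes, and the reward source are illustrative assumptions; the paper's Hi-GRPO additionally ensembles rewards over hierarchical (coarse-to-fine) generation stages, which this sketch does not cover.

```python
import torch

def grpo_token_loss(logp_new, logp_old, rewards, mask, clip_eps=0.2):
    """GRPO-style loss with token-level optimization (illustrative sketch).

    logp_new, logp_old: (G, T) per-token log-probs for G sampled 3D token
        sequences under the current and behavior policies.
    rewards: (G,) scalar reward per sequence, e.g. from a reward ensemble
        (human-preference alignment + multi-modal 3D-attribute scores).
    mask: (G, T) 1 for valid tokens, 0 for padding.
    """
    # Group-relative advantage: normalize rewards within the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # (G,)
    adv = adv.unsqueeze(1)                                      # broadcast over tokens

    # PPO-style clipped surrogate, computed per token.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv

    # Token-level optimization: average over all valid tokens in the group,
    # rather than first averaging per sequence.
    per_token = -torch.min(unclipped, clipped) * mask
    return per_token.sum() / mask.sum()
```

The key contrast with sequence-level GRPO is the final reduction: averaging over all tokens in the group weights long and short generations uniformly at the token level, which the abstract reports to be the more effective choice for 3D autoregressive generation.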
Source: arXiv:2512.10949