Feedforward 3D Editing via Text-Steerable Image-to-3D
1️⃣ One-Sentence Summary
This paper proposes a feedforward method called Steer3D that lets existing image-to-3D models edit their generated 3D assets quickly and at high quality through simple text instructions. Compared with existing methods, it follows instructions more faithfully, preserves consistency with the original asset better, and runs 2.4x to 28.5x faster.
Recent progress in image-to-3D has opened up immense possibilities for design, AR/VR, and robotics. However, to use AI-generated 3D assets in real applications, a critical requirement is the capability to edit them easily. We present a feedforward method, Steer3D, to add text steerability to image-to-3D models, which enables editing of generated 3D assets with language. Our approach is inspired by ControlNet, which we adapt to image-to-3D generation to enable text steering directly in a forward pass. We build a scalable data engine for automatic data generation, and develop a two-stage training recipe based on flow-matching training and Direct Preference Optimization (DPO). Compared to competing methods, Steer3D more faithfully follows the language instruction and maintains better consistency with the original 3D asset, while being 2.4x to 28.5x faster. Steer3D demonstrates that it is possible to add a new modality (text) to steer the generation of pretrained image-to-3D generative models with 100k data. Project website: this https URL
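The abstract mentions a two-stage training recipe whose first stage is flow-matching training. The paper's actual model and code are not shown here, so the following is only a minimal, generic sketch of a conditional flow-matching objective (linear interpolation path, velocity-prediction target); the function and argument names are illustrative assumptions, not Steer3D's API.

```python
import torch

def flow_matching_loss(model, x0, x1, cond):
    """Generic conditional flow-matching loss (illustrative sketch).

    x0: noise sample, x1: data sample (same shape),
    cond: conditioning input (e.g. a text embedding).
    """
    # Sample a random time t in [0, 1] for each example in the batch.
    t = torch.rand(x0.shape[0], device=x0.device)
    t_ = t.view(-1, *([1] * (x0.dim() - 1)))  # broadcastable shape

    # Point on the straight-line path between noise and data at time t.
    xt = (1 - t_) * x0 + t_ * x1

    # For the linear path, the target velocity field is constant: x1 - x0.
    target_v = x1 - x0

    # The network predicts the velocity given the noisy sample, time, and condition.
    pred_v = model(xt, t, cond)
    return ((pred_v - target_v) ** 2).mean()
```

In practice, the second stage (DPO) would then fine-tune such a model on preference pairs; that stage is omitted here.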
Source: arXiv: 2512.13678