arXiv submission date: 2026-03-18
📄 Abstract - Omni-3DEdit: Generalized Versatile 3D Editing in One-Pass

Most instruction-driven 3D editing methods rely on 2D models to guide the explicit, iterative optimization of 3D representations. This paradigm, however, suffers from two primary drawbacks. First, it lacks a universal design across different 3D editing tasks, because explicit manipulation of 3D geometry necessitates task-dependent rules: 3D appearance editing must preserve the source 3D geometry, whereas 3D removal alters it. Second, the iterative optimization process is highly time-consuming, often requiring thousands of 2D/3D update steps. We present Omni-3DEdit, a unified, learning-based model that generalizes across various 3D editing tasks implicitly. One key challenge in achieving this goal is the scarcity of paired source-edited multi-view assets for training. To address this issue, we construct a data pipeline that synthesizes a sizable collection of high-quality paired multi-view editing samples. We then adapt the pre-trained generative model SEVA as our backbone by concatenating source-view latents with conditional tokens along the sequence dimension. A dual-stream LoRA module is proposed to disentangle the cues from different views, substantially enhancing the model's representation-learning capability. As a learning-based model, Omni-3DEdit requires no time-consuming online optimization and can complete various 3D editing tasks in a single forward pass, reducing inference time from tens of minutes to approximately two minutes. Extensive experiments demonstrate the effectiveness and efficiency of Omni-3DEdit.
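The abstract mentions a "dual-stream LoRA" that routes source-view and edited-view tokens through separate low-rank adapters on top of a shared backbone weight. The paper does not give implementation details here, so the following is only a minimal NumPy sketch of that routing idea; all names (`dual_stream_lora`, the shapes, the boolean `stream_mask`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def dual_stream_lora(x, stream_mask, W, lora_src, lora_tgt, scale=1.0):
    """Shared frozen linear layer plus one of two LoRA branches per token.

    x           : (n_tokens, d_in) token features
    stream_mask : (n_tokens,) bool, True = source-view token
    W           : (d_in, d_out) frozen backbone weight
    lora_src/tgt: (A, B) pairs with A: (d_in, r), B: (r, d_out)
    """
    base = x @ W                          # frozen-path output, shared by both streams
    A_s, B_s = lora_src
    A_t, B_t = lora_tgt
    delta_src = (x @ A_s) @ B_s           # low-rank update for source-view tokens
    delta_tgt = (x @ A_t) @ B_t           # low-rank update for edited-view tokens
    delta = np.where(stream_mask[:, None], delta_src, delta_tgt)
    return base + scale * delta

# Toy usage: 2 source-view tokens and 3 target-view tokens in one sequence.
rng = np.random.default_rng(0)
d_in, d_out, r, n = 8, 8, 2, 5
x = rng.normal(size=(n, d_in))
W = rng.normal(size=(d_in, d_out))
# Standard LoRA init: B starts at zero, so the adapters are identity at step 0.
lora_src = (rng.normal(size=(d_in, r)), np.zeros((r, d_out)))
lora_tgt = (rng.normal(size=(d_in, r)), np.zeros((r, d_out)))
mask = np.array([True, True, False, False, False])
y = dual_stream_lora(x, mask, W, lora_src, lora_tgt)
```

With the usual LoRA initialization (B = 0), the output initially equals the frozen path `x @ W`, so the adapters start from the pre-trained SEVA behavior and only the per-stream deltas are learned.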

Top-level tags: computer vision, model training, AIGC
Detailed tags: 3D editing, generative model, multi-view synthesis, efficiency, instruction-driven

Omni-3DEdit: Generalized Versatile 3D Editing in One-Pass


1️⃣ One-Sentence Summary

This paper presents Omni-3DEdit, a unified learning-based model that quickly completes a variety of 3D editing tasks (such as changing appearance or removing objects) in a single pass, without the time-consuming iterative optimization required by traditional methods.

Source: arXiv:2603.17841