HorizonWeaver: Generalizable Multi-Level Semantic Editing for Driving Scenes
1️⃣ One-Sentence Summary
This paper introduces HorizonWeaver, a new method that photorealistically and controllably edits complex autonomous-driving scene images from text instructions, addressing the difficulty existing methods face in performing multi-level, fine-grained edits in dense, safety-critical road environments.
Ensuring safety in autonomous driving requires scalable generation of realistic, controllable driving scenes beyond what real-world testing provides. Yet existing instruction-guided image editors, trained on object-centric or artistic data, struggle with dense, safety-critical driving layouts. We propose HorizonWeaver, which tackles three fundamental challenges in driving scene editing: (1) multi-level granularity, requiring coherent object- and scene-level edits in dense environments; (2) rich high-level semantics, preserving diverse objects while following detailed instructions; and (3) ubiquitous domain shifts, handling changes in climate, layout, and traffic across unseen environments. The core of HorizonWeaver is a set of complementary contributions across data, model, and training: (1) Data: large-scale dataset generation, where we build a paired real/synthetic dataset from Boreas, nuScenes, and Argoverse2 to improve generalization; (2) Model: language-guided masks for fine-grained editing, where semantics-enriched masks and prompts enable precise, language-guided edits; and (3) Training: content preservation and instruction alignment, where joint losses enforce scene consistency and instruction fidelity. Together, these make HorizonWeaver a scalable framework for photorealistic, instruction-driven editing of complex driving scenes: we collect 255K images across 13 editing categories, outperform prior methods on L1, CLIP, and DINO metrics, achieve +46.4% user preference, and improve BEV segmentation IoU by +33%. Project page: this https URL
Source: arXiv: 2604.04887