Abstract - OmniLiDAR: A Unified Diffusion Framework for Multi-Domain 3D LiDAR Generation
LiDAR scene generation is increasingly important for scalable simulation and synthetic data creation, especially under diverse sensing conditions that are costly to capture at scale. Existing diffusion-based LiDAR generators are typically developed under single-domain settings, requiring separate models for different datasets or sensing conditions and hindering unified, controllable synthesis under heterogeneous distribution shifts. To address this, we present OmniLiDAR, a unified text-conditioned diffusion framework that generates LiDAR scans in a shared range-image representation across eight representative domains spanning three shift types: adverse weather, sensor-configuration changes (e.g., reduced beam counts), and cross-platform acquisition (vehicle, drone, and quadruped). To train a single model over heterogeneous domains without isolating optimization by domain, we introduce a Cross-Domain Training Strategy (CDTS) that mixes domains within each mini-batch and leverages conditioning to steer generation. We further propose Cross-Domain Feature Modeling (CDFM), which captures directional dependencies along the azimuth and elevation axes to reflect the anisotropic scanning structure of range images, and Domain-Adaptive Feature Scaling (DAFS), a lightweight modulation that accounts for structured, domain-dependent feature shifts during denoising. In the absence of a public consolidated benchmark, we construct an 8-domain dataset by combining real-world scans with physically based weather simulation and systematic beam reduction, following official splits. Extensive experiments demonstrate strong generation fidelity and consistent gains in downstream use cases, including generative data augmentation for LiDAR semantic segmentation and 3D object detection as well as robustness evaluation under corruptions, with particularly clear benefits in limited-label regimes.
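The abstract does not give implementation details, but the two training-time ideas can be illustrated in a minimal sketch: CDTS-style mini-batches that mix scans from several domains and pair each with a text condition, and a DAFS-style lightweight scale-and-shift of denoiser features. All names here (the domain list, the text template, the FiLM-like parameterization) are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical 8-domain pool covering the three shift types named in the
# abstract; the actual domain labels are assumptions, not from the paper.
DOMAINS = [
    "clear", "fog", "snow", "rain",   # adverse weather
    "64-beam", "32-beam",             # sensor configuration
    "drone", "quadruped",             # cross-platform acquisition
]

def sample_mixed_batch(datasets, batch_size, rng=random):
    """CDTS-style mini-batch: draw range images from several domains at
    once, pairing each with a text condition that names its domain."""
    batch = []
    for _ in range(batch_size):
        domain = rng.choice(DOMAINS)
        scan = rng.choice(datasets[domain])
        batch.append({"range_image": scan, "text": f"a LiDAR scan, {domain}"})
    return batch

def dafs_modulate(features, gamma, beta):
    """DAFS analog: lightweight per-domain scale-and-shift of denoiser
    features (FiLM-like; the paper's exact parameterization may differ)."""
    return [gamma * f + beta for f in features]
```

With domain-conditioned scale/shift parameters, the same denoiser backbone can adapt its feature statistics per domain at negligible parameter cost, which is one common way to realize the kind of "lightweight modulation" the abstract describes.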
OmniLiDAR: A Unified Diffusion Framework for Multi-Domain 3D LiDAR Generation
1️⃣ One-Sentence Summary
This paper proposes OmniLiDAR, a unified diffusion-model framework that uses text conditioning to generate LiDAR scans across multiple sensor configurations, adverse weather conditions, and platforms (e.g., vehicles, drones, and quadruped robots). It addresses the limitation that conventional single-domain models struggle to cover diverse real-world scenarios, and substantially improves data augmentation for downstream tasks such as semantic segmentation and object detection.