arXiv submission date: 2026-04-14
📄 Abstract - StructDiff: A Structure-Preserving and Spatially Controllable Diffusion Model for Single-Image Generation

This paper introduces StructDiff, a generative framework based on a single-scale diffusion model for single-image generation. Single-image generation aims to synthesize diverse samples with similar visual content to the source image by capturing its internal statistics, without relying on external data. However, existing methods often struggle to preserve the structural layout, especially for images with large rigid objects or strict spatial constraints. Moreover, most approaches lack spatial controllability, making it difficult to guide the structure or placement of generated content. To address these challenges, StructDiff introduces an *adaptive receptive field* module to maintain both global and local distributions. Building on this foundation, StructDiff incorporates 3D positional encoding (PE) as a spatial prior, allowing flexible control over the position, scale, and local details of generated objects. To our knowledge, this spatial control capability represents the first exploration of PE-based manipulation in single-image generation. Furthermore, we propose a novel evaluation criterion for single-image generation based on large language models (LLMs). This criterion specifically addresses the limitations of existing objective metrics and the high labor costs associated with user studies. StructDiff also demonstrates broad applicability across downstream tasks, such as text-guided image generation, image editing, outpainting, and paint-to-image synthesis. Extensive experiments demonstrate that StructDiff outperforms existing methods in structural consistency, visual quality, and spatial controllability. The project page is available at this https URL.
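The abstract does not specify how the 3D positional encoding is constructed; a common choice for such spatial priors is to concatenate standard sinusoidal encodings computed independently per axis (e.g. height, width, and a scale/depth axis). The sketch below illustrates that generic construction only; the function names, `dim_per_axis`, and the choice of sinusoidal features are assumptions, not the paper's actual design.

```python
import numpy as np

def sinusoidal_pe_1d(positions, dim):
    """Standard 1-D sinusoidal encoding (dim must be even): (N,) -> (N, dim)."""
    positions = np.asarray(positions, dtype=np.float64)[:, None]      # (N, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)     # (dim/2,)
    angles = positions * freqs                                        # (N, dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (N, dim)

def positional_encoding_3d(xs, ys, zs, dim_per_axis=16):
    """Hypothetical 3D PE: concatenate per-axis encodings -> (N, 3*dim_per_axis)."""
    return np.concatenate(
        [sinusoidal_pe_1d(a, dim_per_axis) for a in (xs, ys, zs)], axis=-1
    )

pe = positional_encoding_3d([0, 4], [0, 8], [0, 1])
print(pe.shape)  # (2, 48)
```

Under this scheme, shifting an object's target coordinates before encoding would shift where the spatial prior concentrates, which is one plausible mechanism for the position/scale control the paper describes.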

Top-level tags: computer vision, model training, AIGC
Detailed tags: single-image generation, diffusion models, spatial controllability, 3D positional encoding, structure preservation

StructDiff: A Structure-Preserving and Spatially Controllable Diffusion Model for Single-Image Generation


1️⃣ One-Sentence Summary

This paper proposes a new method called StructDiff, which lets an AI model generate a large number of structurally similar, layout-controllable new images from just a single reference image, and for the first time enables flexible control over the position, size, and other details of the generated objects.

Source: arXiv 2604.12575