arXiv submission date: 2026-04-02
📄 Abstract - Modular Energy Steering for Safe Text-to-Image Generation with Foundation Models

Controlling the behavior of text-to-image generative models is critical for safe and practical deployment. Existing safety approaches typically rely on model fine-tuning or curated datasets, which can degrade generation quality or limit scalability. We propose an inference-time steering framework that leverages gradient feedback from frozen pretrained foundation models to guide the generation process without modifying the underlying generator. Our key observation is that vision-language foundation models encode rich semantic representations that can be repurposed as off-the-shelf supervisory signals during generation. By injecting such feedback through clean latent estimates at each sampling step, our method formulates safety steering as an energy-based sampling problem. This design enables modular, training-free safety control that is compatible with both diffusion and flow-matching models and can generalize across diverse visual concepts. Experiments demonstrate state-of-the-art robustness against NSFW red-teaming benchmarks and effective multi-target steering, while preserving high generation quality on benign non-targeted prompts. Our framework provides a principled approach for utilizing foundation models as semantic energy estimators, enabling reliable and scalable safety control for text-to-image generation.
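The steering mechanism the abstract describes, i.e. computing a gradient of a semantic energy on the clean latent estimate at each sampling step and folding the correction back into the noisy latent, can be sketched in a heavily simplified form. Everything below is illustrative: the quadratic energy and `unsafe_dir` stand in for a frozen vision-language foundation model, and the linear "denoiser" and schedule are toy assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
unsafe_dir = np.zeros(dim)
unsafe_dir[0] = 1.0  # hypothetical "unsafe" concept direction (illustrative)

def energy_grad(x0_hat):
    # Analytic gradient of a toy energy E(x) = 0.5 * (unsafe_dir @ x)^2,
    # which is high when the clean estimate aligns with the unsafe direction.
    # A real system would backpropagate through a frozen foundation model.
    return (unsafe_dir @ x0_hat) * unsafe_dir

def sample(steps=50, guidance=1.0):
    x = rng.normal(size=dim)       # start from Gaussian noise
    x0_target = np.ones(dim)       # toy "denoiser" pulls toward this clean latent
    for t in range(steps):
        alpha = (t + 1) / steps    # crude noise schedule: 0 -> 1
        # Clean latent estimate at this step (toy linear denoiser).
        x0_hat = (1 - alpha) * x + alpha * x0_target
        # Energy steering: push the clean estimate downhill on the energy,
        # then fold the correction back into the noisy latent.
        x0_hat = x0_hat - guidance * energy_grad(x0_hat)
        x = (1 - alpha) * x + alpha * x0_hat
    return x

x = sample()
print(abs(unsafe_dir @ x))  # projection onto the unsafe direction is driven to zero
```

Because the generator (`x0_target` pull) is never modified and the energy term is purely additive at inference time, this toy loop mirrors the modular, training-free property the abstract claims: swapping in a different energy function changes the steering target without retraining anything.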

Top-level tags: aigc, model training, computer vision
Detailed tags: text-to-image, safety control, inference-time steering, energy-based sampling, foundation models

Modular Energy Steering for Safe Text-to-Image Generation with Foundation Models


1️⃣ One-Sentence Summary

This paper proposes a training-free method that steers text-to-image generation in real time during sampling, using semantic feedback from off-the-shelf foundation models to enforce safety while preserving high image quality.

Source: arXiv:2604.02265