arXiv submission date: 2026-02-11
📄 Abstract - Beyond VLM-Based Rewards: Diffusion-Native Latent Reward Modeling

Preference optimization for diffusion and flow-matching models relies on reward functions that are both discriminatively robust and computationally efficient. Vision-Language Models (VLMs) have emerged as the primary reward provider, leveraging their rich multimodal priors to guide alignment. However, their computational and memory costs can be substantial, and optimizing a latent diffusion generator through a pixel-space reward introduces a domain mismatch that complicates alignment. In this paper, we propose DiNa-LRM, a diffusion-native latent reward model that formulates preference learning directly on noisy diffusion states. Our method introduces a noise-calibrated Thurstone likelihood with diffusion-noise-dependent uncertainty. DiNa-LRM leverages a pretrained latent diffusion backbone with a timestep-conditioned reward head, and supports inference-time noise ensembling, providing a diffusion-native mechanism for test-time scaling and robust rewarding. Across image alignment benchmarks, DiNa-LRM substantially outperforms existing diffusion-based reward baselines and achieves performance competitive with state-of-the-art VLMs at a fraction of the computational cost. In preference optimization, we further demonstrate that DiNa-LRM improves optimization dynamics, enabling faster and more resource-efficient model alignment.
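To make the abstract's ingredients concrete, below is a minimal PyTorch sketch of the ideas it names: a timestep-conditioned reward head on top of a latent diffusion backbone, a noise-calibrated Thurstone preference loss with noise-dependent uncertainty, and inference-time noise ensembling. This is not the authors' code; the class and helper names (`DiffusionNativeRewardModel`, `thurstone_nll`, `noise_ensemble_reward`), the backbone interface, and all architectural details are illustrative assumptions based only on the abstract.

```python
# Hypothetical sketch based on the abstract, not the paper's implementation.
import torch
import torch.nn as nn


class DiffusionNativeRewardModel(nn.Module):
    """Timestep-conditioned reward head on a pretrained latent diffusion backbone (assumed interface)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, time_dim: int = 128):
        super().__init__()
        self.backbone = backbone  # assumed: maps (z_t, t) -> features of shape (B, feat_dim)
        self.time_embed = nn.Sequential(  # embeds the diffusion timestep t
            nn.Linear(1, time_dim), nn.SiLU(), nn.Linear(time_dim, time_dim)
        )
        self.reward_head = nn.Linear(feat_dim + time_dim, 1)  # scalar reward r(z_t, t)
        self.log_sigma_head = nn.Linear(time_dim, 1)          # noise-dependent uncertainty sigma(t)

    def forward(self, z_t: torch.Tensor, t: torch.Tensor):
        feats = self.backbone(z_t, t)                          # features of the noisy latent
        temb = self.time_embed(t.float().unsqueeze(-1))
        reward = self.reward_head(torch.cat([feats, temb], dim=-1)).squeeze(-1)
        sigma = self.log_sigma_head(temb).squeeze(-1).exp()    # positive uncertainty
        return reward, sigma


def thurstone_nll(r_win, s_win, r_lose, s_lose):
    """Noise-calibrated Thurstone loss: P(win > lose) = Phi((r_w - r_l) / sqrt(s_w^2 + s_l^2))."""
    normal = torch.distributions.Normal(0.0, 1.0)
    z = (r_win - r_lose) / torch.sqrt(s_win ** 2 + s_lose ** 2 + 1e-8)
    return -normal.cdf(z).clamp_min(1e-6).log().mean()


@torch.no_grad()
def noise_ensemble_reward(model, z0, timesteps, alphas_cumprod):
    """Inference-time noise ensembling: re-noise the clean latent at several timesteps
    and average the predicted rewards (one plausible form of diffusion-native test-time scaling)."""
    rewards = []
    for t in timesteps:
        a = alphas_cumprod[t]
        z_t = a.sqrt() * z0 + (1 - a).sqrt() * torch.randn_like(z0)  # forward diffusion to level t
        r, _ = model(z_t, torch.full((z0.shape[0],), t, device=z0.device))
        rewards.append(r)
    return torch.stack(rewards).mean(dim=0)
```

In this reading, the Thurstone likelihood replaces the usual Bradley-Terry sigmoid with a Gaussian CDF whose scale grows with the diffusion noise level, so noisier latents contribute softer preference gradients; the exact parameterization in the paper may differ.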

Top-level tags: model training, computer vision, multi-modal
Detailed tags: diffusion models, reward modeling, preference optimization, latent space, image alignment

Beyond VLM-Based Rewards: Diffusion-Native Latent Reward Modeling


1️⃣ One-Sentence Summary

This paper proposes a method called DiNa-LRM that evaluates image quality directly in the diffusion model's internal latent space, achieving image preference alignment on par with mainstream Vision-Language Models at a much lower computational cost.

Source: arXiv:2602.11146