Directional Textual Inversion for Personalized Text-to-Image Generation
1️⃣ One-sentence summary
This work proposes a new method, Directional Textual Inversion, that optimizes only the direction of the learned text embedding rather than its magnitude. This addresses the failure of existing techniques on complex text prompts, markedly improving alignment between generated images and their text descriptions while preserving similarity to the personalized subject.
Textual Inversion (TI) is an efficient approach to text-to-image personalization but often fails on complex prompts. We trace these failures to embedding norm inflation: learned tokens drift to out-of-distribution magnitudes, degrading prompt conditioning in pre-norm Transformers. Empirically, we show semantics are primarily encoded by direction in CLIP token space, while inflated norms harm contextualization; theoretically, we analyze how large magnitudes attenuate positional information and hinder residual updates in pre-norm blocks. We propose Directional Textual Inversion (DTI), which fixes the embedding magnitude to an in-distribution scale and optimizes only direction on the unit hypersphere via Riemannian SGD. We cast direction learning as MAP with a von Mises-Fisher prior, yielding a constant-direction prior gradient that is simple and efficient to incorporate. Across personalization tasks, DTI improves text fidelity over TI and TI-variants while maintaining subject similarity. Crucially, DTI's hyperspherical parameterization enables smooth, semantically coherent interpolation between learned concepts (slerp), a capability that is absent in standard TI. Our findings suggest that direction-only optimization is a robust and scalable path for prompt-faithful personalization.
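The abstract's core mechanics can be sketched concisely. Below is a minimal, illustrative numpy sketch (not the authors' implementation) of the two geometric ingredients: a Riemannian SGD step that keeps the embedding direction on the unit hypersphere (the full embedding is then this direction scaled by a fixed in-distribution norm), and slerp interpolation between two learned directions. The function names and the `fixed_norm` scaling are assumptions for illustration; the vMF prior would add a constant-direction term `kappa * mu` to the gradient, as the abstract notes.

```python
import numpy as np

def riemannian_sgd_step(v, grad, lr):
    """One Riemannian SGD step on the unit hypersphere.

    Projects the Euclidean gradient onto the tangent space at v,
    takes a step, then retracts back to the sphere by normalizing.
    """
    tangent_grad = grad - np.dot(grad, v) * v  # remove radial component
    v_new = v - lr * tangent_grad
    return v_new / np.linalg.norm(v_new)       # retraction to the sphere

def slerp(v0, v1, t):
    """Spherical linear interpolation between unit vectors v0 and v1."""
    cos_theta = np.clip(np.dot(v0, v1), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-6:                            # nearly parallel: just return v0
        return v0
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# The actual conditioning embedding would be: fixed_norm * v (assumed scaling),
# keeping the magnitude at an in-distribution scale while only v is optimized.
```

Because the update never changes the norm, the embedding cannot drift to out-of-distribution magnitudes, and because every learned concept lives on the same sphere, slerp between two concepts stays on-manifold, which is what enables the smooth interpolation the abstract describes.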
Source: arXiv: 2512.13672