UM-Text: A Unified Multimodal Model for Image Understanding and Editing
1️⃣ One-Sentence Summary
This paper proposes UM-Text, a unified multimodal model that understands image context from natural-language instructions and automatically generates visual text stylistically harmonious with the image, addressing the difficulty earlier methods had in jointly maintaining consistency across text content, layout, and image style.
2️⃣ Abstract

With the rapid advancement of image generation, visual text editing via natural-language instructions has received increasing attention. The main challenge of this task is to fully understand the instruction and the reference image, and then generate visual text that is style-consistent with the image. Previous methods often require complex steps to specify the text content and its attributes, such as font size, color, and layout, without considering stylistic consistency with the reference image. To address this, we propose UM-Text, a unified multimodal model for context understanding and visual text editing driven by natural-language instructions. Specifically, we introduce a Visual Language Model (VLM) to process the instruction and reference image, so that the text content and layout can be carefully designed according to the contextual information. To generate accurate and harmonious visual text images, we further propose the UM-Encoder, which combines the embeddings of various conditioning information, with the combination automatically configured by the VLM according to the input instruction. During training, we propose a regional consistency loss that offers more effective supervision for glyph generation in both the latent and RGB spaces, and design a tailored three-stage training strategy to further enhance model performance. In addition, we contribute UM-DATA-200K, a large-scale visual-text image dataset covering diverse scenes for model training. Extensive qualitative and quantitative results on multiple public benchmarks demonstrate that our method achieves state-of-the-art performance.
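The abstract names a regional consistency loss that supervises glyph generation in both the latent and the RGB space, but gives no formula. Below is a minimal PyTorch sketch of one plausible reading: a text-region-masked reconstruction term in each space. The function name, argument names, and the specific weighting are assumptions for illustration, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def regional_consistency_loss(pred_latent, gt_latent, pred_rgb, gt_rgb,
                              text_mask, lambda_rgb=1.0):
    """Hypothetical sketch: masked reconstruction penalty in latent and RGB space.

    text_mask: float tensor of shape (B, 1, H, W), 1.0 inside text regions.
    The exact formulation in the paper may differ; this only illustrates the
    idea of restricting supervision to glyph regions in both spaces.
    """
    def masked_term(pred, gt, mask, p):
        # Resize the binary text-region mask to this tensor's spatial size.
        m = F.interpolate(mask, size=pred.shape[-2:], mode="nearest")
        diff = (pred - gt).abs() ** p
        # Normalize by the number of supervised elements (mask area x channels).
        denom = (m.sum() * pred.size(1)).clamp(min=1.0)
        return (m * diff).sum() / denom

    latent_term = masked_term(pred_latent, gt_latent, text_mask, p=2)  # MSE-style
    rgb_term = masked_term(pred_rgb, gt_rgb, text_mask, p=1)           # L1-style
    return latent_term + lambda_rgb * rgb_term
```

Restricting the penalty to the text-region mask would keep the extra supervision focused on the glyphs themselves rather than the unedited background, which matches the abstract's motivation of "more effective supervision for glyph generation".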
Source: arXiv: 2601.08321