arXiv submission date: 2025-12-11
📄 Abstract - DuetSVG: Unified Multimodal SVG Generation with Internal Visual Guidance

Recent vision-language model (VLM)-based approaches have achieved impressive results on SVG generation. However, because they generate only text and lack visual signals during decoding, they often struggle with complex semantics and fail to produce visually appealing or geometrically coherent SVGs. We introduce DuetSVG, a unified multimodal model that jointly generates image tokens and corresponding SVG tokens in an end-to-end manner. DuetSVG is trained on both image and SVG datasets. At inference, we apply a novel test-time scaling strategy that leverages the model's native visual predictions as guidance to improve SVG decoding quality. Extensive experiments show that our method outperforms existing methods, producing visually faithful, semantically aligned, and syntactically clean SVGs across a wide range of applications.
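The abstract does not spell out how the test-time scaling strategy uses the model's native visual predictions. The sketch below illustrates one plausible reading as best-of-N candidate selection: the stub functions (`predict_image_tokens`, `sample_svg_candidate`), the agreement score, and the candidate count are all assumptions for illustration, not the paper's implementation.

```python
import random

# Hypothetical stand-ins for the model's two decoding heads; the real
# DuetSVG interfaces are not described in the abstract.
def predict_image_tokens(prompt: str) -> list[int]:
    """Stub: the model's native visual prediction for the prompt."""
    random.seed(hash(prompt) % (2**32))
    return [random.randint(0, 255) for _ in range(64)]

def sample_svg_candidate(prompt: str, seed: int) -> tuple[str, list[int]]:
    """Stub: one sampled SVG plus the image tokens implied by that SVG."""
    random.seed(seed)
    tokens = [random.randint(0, 255) for _ in range(64)]
    svg = f'<svg xmlns="http://www.w3.org/2000/svg"><!-- candidate {seed} --></svg>'
    return svg, tokens

def agreement(a: list[int], b: list[int]) -> float:
    """Similarity between two token sequences (higher is better)."""
    return -sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def guided_svg_decoding(prompt: str, n_candidates: int = 8) -> str:
    """Best-of-N SVG decoding guided by the model's own visual prediction."""
    reference = predict_image_tokens(prompt)
    best_score, best_svg = float("-inf"), ""
    for seed in range(n_candidates):
        svg, visual_tokens = sample_svg_candidate(prompt, seed)
        score = agreement(reference, visual_tokens)
        if score > best_score:
            best_score, best_svg = score, svg
    return best_svg

print(guided_svg_decoding("a red octopus icon"))
```

Under this reading, the visual prediction acts as an internal reference that candidate SVG decodings are scored against, so no external renderer or reward model is required at inference time.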

Top tags: multi-modal, computer vision, model training
Detailed tags: svg generation, multimodal generation, visual guidance, vector graphics, vision-language model

DuetSVG: Unified Multimodal SVG Generation with Internal Visual Guidance


1️⃣ One-Sentence Summary

This paper presents DuetSVG, a method that jointly generates images and SVG code and uses the model's own visual predictions to guide the generation process, addressing the difficulty existing approaches have in producing complex, visually appealing, and geometrically coherent vector graphics.


Source: arXiv:2512.10894