arXiv submission date: 2026-03-03
📄 Abstract - TikZilla: Scaling Text-to-TikZ with High-Quality Data and Reinforcement Learning

Large language models (LLMs) are increasingly used to assist scientists across diverse workflows. A key challenge is generating high-quality figures from textual descriptions, often represented as TikZ programs that can be rendered as scientific images. Prior research has proposed a variety of datasets and modeling approaches for this task. However, existing datasets for Text-to-TikZ are too small and noisy to capture the complexity of TikZ, causing mismatches between text and rendered figures. Moreover, prior approaches rely solely on supervised fine-tuning (SFT), which does not expose the model to the rendered semantics of the figure, often resulting in errors such as looping, irrelevant content, and incorrect spatial relations. To address these issues, we construct DaTikZ-V4, a dataset more than four times larger and substantially higher in quality than DaTikZ-V3, enriched with LLM-generated figure descriptions. Using this dataset, we train TikZilla, a family of small open-source Qwen models (3B and 8B) with a two-stage pipeline of SFT followed by reinforcement learning (RL). For RL, we leverage an image encoder trained via inverse graphics to provide semantically faithful reward signals. Extensive human evaluations with over 1,000 judgments show that TikZilla improves by 1.5-2 points over its base models on a 5-point scale, surpasses GPT-4o by 0.5 points, and matches GPT-5 in the image-based evaluation, while operating at much smaller model sizes. Code, data, and models will be made available.

Top-level tags: llm model training natural language processing
Detailed tags: text-to-tikz reinforcement learning dataset construction inverse graphics code generation

TikZilla: Scaling Text-to-TikZ with High-Quality Data and Reinforcement Learning


1️⃣ One-sentence summary

By building a larger, higher-quality dataset and training with a two-stage pipeline of supervised fine-tuning followed by reinforcement learning, this paper develops TikZilla, a family of small open-source models that, on the task of generating scientific-figure code (TikZ) from textual descriptions, outperform GPT-4o and match much stronger models.
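The abstract notes that, for the RL stage, an image encoder trained via inverse graphics supplies "semantically faithful reward signals." The paper's exact reward is not given here, but a common pattern is to embed the rendered figure and a reference figure and score their similarity. The sketch below is a minimal, hypothetical illustration of that idea (the function name, clipping, and epsilon are assumptions, not the authors' method):

```python
import numpy as np

def semantic_reward(pred_embedding, target_embedding):
    """Hypothetical RL reward: cosine similarity between the embedding of
    the rendered candidate figure and the embedding of the reference
    figure, clipped to [0, 1] so failed renders can be scored as 0."""
    a = np.asarray(pred_embedding, dtype=float)
    b = np.asarray(target_embedding, dtype=float)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return max(0.0, cos)
```

In such a setup, a TikZ program that fails to compile would typically receive the minimum reward, while compilable programs are scored by how closely their rendered image matches the target in embedding space.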

Source: arXiv 2603.03072