arXiv submission date: 2026-03-25
📄 Abstract - Improving Lean4 Autoformalization via Cycle Consistency Fine-tuning

Autoformalization, automatically translating natural-language mathematical text into a formal proof language such as Lean4, can help accelerate AI-assisted mathematical research, be it via proof verification or proof search. I fine-tune Qwen3.5-2B with LoRA for natural language to Lean4 formalization on FineLeanCorpus (FLC) and consider three training regimes: supervised fine-tuning (SFT) with curriculum learning (difficulty 1 to 10), SFT without curriculum ordering, and reinforcement learning using group relative policy optimization (GRPO) with a cycle consistency reward. Cycle consistency measures how well the meaning of a statement is preserved through an NL → Lean4 → NL' loop, computed as the cosine similarity of off-the-shelf sentence embeddings. On an unseen subset of FLC and on PutnamBench, RL substantially outperforms both SFT variants (mean cycle consistency 0.669 vs. 0.513 on FLC; 0.561 vs. 0.422 on PutnamBench), while increasing cross-entropy loss by only 0.011 nats, with minimal impact on formalization quality. Curriculum ordering provides no measurable benefit over shuffled training.
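The reward described in the abstract is simple enough to sketch: embed the original statement and the back-translated one, take their cosine similarity, and standardize rewards within each sampled group for GRPO. Below is a minimal Python sketch under stated assumptions: the embedding model name (`all-MiniLM-L6-v2`) and the back-translation step are not specified in the abstract and are stand-ins here, and this is an illustration of the general technique, not the author's exact pipeline.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Off-the-shelf sentence embedder; the specific model is an assumption,
# the abstract does not name one.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def cycle_consistency_reward(nl: str, nl_back: str) -> float:
    """Cosine similarity between the original NL statement and the NL'
    statement back-translated from the generated Lean4 formalization."""
    e1, e2 = embedder.encode([nl, nl_back])
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Standard GRPO group-relative advantage: standardize each sampled
    completion's reward against the mean and std of its group."""
    r = np.asarray(rewards, dtype=np.float64)
    return ((r - r.mean()) / (r.std() + 1e-8)).tolist()
```

In use, the group would be several sampled Lean4 formalizations of one NL statement; each is back-translated to NL' (by a model the abstract does not identify), scored with `cycle_consistency_reward`, and the standardized scores serve as the per-sample advantages in the GRPO policy update.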

Top-level tags: llm, natural language processing, model training
Detailed tags: autoformalization, cycle consistency, reinforcement learning, fine-tuning, mathematical reasoning

Improving Lean4 Autoformalization via Cycle Consistency Fine-tuning


1️⃣ One-Sentence Summary

This paper shows that a reinforcement learning approach built on a "cycle consistency reward" substantially improves the accuracy and semantic preservation of AI translation of natural-language mathematical text into the Lean4 formal proof language, outperforming traditional supervised fine-tuning.

Source: arXiv:2603.24372