arXiv submission date: 2026-03-25
📄 Abstract - Schema on the Inside: A Two-Phase Fine-Tuning Method for High-Efficiency Text-to-SQL at Scale

Applying large, proprietary API-based language models to text-to-SQL tasks poses a significant industry challenge: reliance on massive, schema-heavy prompts results in prohibitive per-token API costs and high latency, hindering scalable production deployment. We present a specialized, self-hosted 8B-parameter model designed for a conversational bot in CriQ, a sister app to Dream11, India's largest fantasy sports platform with over 250 million users, that answers user queries about cricket statistics. Our novel two-phase supervised fine-tuning approach enables the model to internalize the entire database schema, eliminating the need for long-context prompts. This reduces input tokens by over 99%, from a 17k-token baseline to fewer than 100, and replaces costly external API calls with efficient local inference. The resulting system achieves 98.4% execution success and 92.5% semantic accuracy, substantially outperforming a prompt-engineered baseline using Google's Gemini Flash 2.0 (95.6% execution, 89.4% semantic accuracy). These results demonstrate a practical path toward high-precision, low-latency text-to-SQL applications using domain-specialized, self-hosted language models in large-scale production environments.

Top-level tags: llm, natural language processing, systems
Detailed tags: text-to-sql, fine-tuning, database schema, model efficiency, production deployment

Schema on the Inside: A Two-Phase Fine-Tuning Method for High-Efficiency Text-to-SQL at Scale


1️⃣ One-Sentence Summary

This paper proposes a two-phase fine-tuning method that trains a specialized small model to internalize the database schema, so that text-to-SQL queries can be served by low-cost local inference instead of expensive large-model API calls, enabling high-accuracy, low-latency deployment at production scale.
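The core efficiency claim can be illustrated with a toy prompt comparison: a general-purpose API model must receive the full schema on every call (the paper cites a ~17k-token baseline), while a model that has internalized the schema through fine-tuning only needs the question itself. The sketch below is purely illustrative; the function names and the toy cricket schema are invented here and do not come from the paper.

```python
# Hypothetical illustration of why schema internalization saves tokens.
# The toy schema and helper names below are invented for this sketch;
# the paper's actual prompts and schema are not public in this summary.

TOY_SCHEMA = """
CREATE TABLE players (player_id INT, name TEXT, country TEXT);
CREATE TABLE matches (match_id INT, date DATE, venue TEXT);
CREATE TABLE batting_stats (match_id INT, player_id INT, runs INT, balls INT);
"""

def build_baseline_prompt(schema: str, question: str) -> str:
    """Prompt for a general API model: the schema rides along on every call."""
    return f"Given the schema:\n{schema}\nWrite SQL for: {question}"

def build_finetuned_prompt(question: str) -> str:
    """Prompt for the specialized model: the schema lives in the weights."""
    return f"Write SQL for: {question}"

if __name__ == "__main__":
    question = "How many runs did each player score in 2024?"
    baseline = build_baseline_prompt(TOY_SCHEMA, question)
    compact = build_finetuned_prompt(question)
    # Whitespace word count as a rough proxy for token count; with a real
    # production schema the baseline gap grows to thousands of tokens.
    print(len(baseline.split()), "vs", len(compact.split()))
```

With a real production schema spanning many tables, the baseline prompt dominates per-call cost, which is exactly the overhead the fine-tuned model removes.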

From arXiv: 2603.24023