Abstract - Parametric Knowledge and Retrieval Behavior in RAG Fine-Tuning for Electronic Design Automation
Retrieval-Augmented Generation (RAG) fine-tuning has shown substantial improvements over vanilla RAG, yet most studies target document question answering and often rely on standard NLP metrics that can obscure factual differences. We evaluate RAG fine-tuning for long-form text generation in electronic design automation, adapting a 7B model under five context augmentation strategies with varying retrieval conditions. We introduce TriFEX, a human-validated, triple-based evaluation pipeline that attributes generated claims to their origin (user query, context, or reference), and propose Parametric Knowledge Precision (PKP), which isolates internalized knowledge by filtering out claims leaked in the prompt. We show that ROUGE and BERTScore fail to detect factual differences that our triple-based evaluation reveals. Additionally, we demonstrate that an existing metric for knowledge internalization is retrieval-sensitive, with about 75% of its cross-condition variance driven by changes in the rate at which internal knowledge is expressed (PR) rather than by changes in its actual correctness (PKP). The fine-tuned 7B variants outperform a 72B baseline on most metrics, and further generalize across retrieval conditions and to a related benchmark. These results underscore the limitations of available metrics in RAG evaluation and show that smaller models can be adapted well to specialized tasks for cost-efficient, on-premises deployment.
Parametric Knowledge and Retrieval Behavior in RAG Fine-Tuning for Electronic Design Automation
1️⃣ One-Sentence Summary
By developing a human-validated, triple-based evaluation method (TriFEX) and a new metric (PKP), this paper reveals that conventional evaluation metrics cannot effectively detect factuality differences in the output of RAG fine-tuned models on electronic design automation tasks, and demonstrates that smaller models, when properly fine-tuned, can match or even surpass large models on specialized tasks, enabling more cost-efficient on-premises deployment.