arXiv submission date: 2026-02-05
📄 Abstract - Mitigating Hallucination in Financial Retrieval-Augmented Generation via Fine-Grained Knowledge Verification

In financial Retrieval-Augmented Generation (RAG) systems, models frequently rely on retrieved documents to generate accurate responses due to the time-sensitive nature of the financial domain. While retrieved documents help address knowledge gaps, model-generated responses still suffer from hallucinations that contradict the retrieved information. To mitigate this inconsistency, we propose a Reinforcement Learning framework enhanced with Fine-grained Knowledge Verification (RLFKV). Our method decomposes financial responses into atomic knowledge units and assesses the correctness of each unit to compute the fine-grained faithful reward. This reward offers more precise optimization signals, thereby improving alignment with the retrieved documents. Additionally, to prevent reward hacking (e.g., overly concise replies), we incorporate an informativeness reward that encourages the policy model to retain at least as many knowledge units as the base model. Experiments conducted on the public Financial Data Description (FDD) task and our newly proposed FDD-ANT dataset demonstrate consistent improvements, confirming the effectiveness of our approach.
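The abstract describes a fine-grained faithfulness reward: the response is decomposed into atomic knowledge units and each unit is checked against the retrieved documents. The sketch below is a minimal illustration of that idea, not the paper's implementation; the `verify` callback (e.g. an NLI model or LLM judge) and the `naive_verify` stand-in are assumptions introduced here for demonstration.

```python
from typing import Callable, List


def faithfulness_reward(
    units: List[str],
    retrieved_docs: List[str],
    verify: Callable[[str, List[str]], bool],
) -> float:
    """Fraction of atomic knowledge units supported by the retrieved documents.

    `verify` is a hypothetical checker (e.g. an NLI model or an LLM judge)
    that returns True when a unit is entailed by the documents.
    """
    if not units:
        return 0.0
    supported = sum(1 for u in units if verify(u, retrieved_docs))
    return supported / len(units)


# Toy usage with a naive substring-overlap verifier (illustrative stand-in only).
def naive_verify(unit: str, docs: List[str]) -> bool:
    return any(unit.lower() in d.lower() for d in docs)


docs = ["Q3 2025 revenue was $4.2B, up 8% year over year."]
units = ["q3 2025 revenue was $4.2b", "revenue grew 20% year over year"]
print(faithfulness_reward(units, docs, naive_verify))  # 0.5 (one of two units supported)
```

A higher score means more of the response's atomic claims are grounded in the retrieved evidence, which is the signal the RL objective optimizes.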

Top-level tags: llm financial model training
Detailed tags: retrieval-augmented generation, hallucination mitigation, reinforcement learning, knowledge verification, faithfulness

Mitigating Hallucination in Financial Retrieval-Augmented Generation via Fine-Grained Knowledge Verification


1️⃣ One-sentence summary

This paper proposes a reinforcement learning method with fine-grained knowledge verification: financial responses are decomposed into minimal knowledge units and each unit's correctness is verified, which effectively reduces "hallucinations" that contradict the retrieved material while preserving the informativeness of the answer.
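To discourage reward hacking through overly terse answers, the abstract also mentions an informativeness reward that pushes the policy to keep at least as many knowledge units as the base model. The snippet below is a hedged sketch of how the two signals could be combined; the clipped-ratio form, the `alpha` weight, and the `total_reward` helper are illustrative assumptions, not the paper's exact formulation.

```python
def informativeness_reward(num_policy_units: int, num_base_units: int) -> float:
    """Reward retaining at least as many knowledge units as the base model.

    Simple clipped-ratio form; the paper's exact formula is not given here,
    so this is an illustrative assumption.
    """
    if num_base_units == 0:
        return 1.0
    return min(num_policy_units / num_base_units, 1.0)


def total_reward(faithfulness: float, informativeness: float, alpha: float = 0.5) -> float:
    """Weighted combination of the two rewards (alpha is a hypothetical weight)."""
    return alpha * faithfulness + (1.0 - alpha) * informativeness


# Example: half the units are faithful, and the policy kept 2 of the base model's 4 units.
print(total_reward(0.5, informativeness_reward(2, 4)))  # 0.5
```

The point of the combined signal is that the policy cannot raise its faithfulness score simply by dropping claims, since doing so lowers the informativeness term.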

Source: arXiv:2602.05723