arXiv submission date: 2026-01-29
📄 Abstract - Token-Guard: Towards Token-Level Hallucination Control via Self-Checking Decoding

Large Language Models (LLMs) often hallucinate, generating content inconsistent with the input. Retrieval-Augmented Generation (RAG) and Reinforcement Learning with Human Feedback (RLHF) can mitigate hallucinations but require resource-intensive retrieval or large-scale fine-tuning. Decoding-based methods are lighter yet lack explicit hallucination control. To address this, we present Token-Guard, a token-level hallucination control method based on self-checking decoding. Token-Guard performs internal verification at each reasoning step to detect hallucinated tokens before they propagate. Candidate fragments are further evaluated in a latent space with explicit hallucination risk scoring, while iterative pruning and regeneration dynamically correct detected errors. Experiments on HALU datasets show Token-Guard substantially reduces hallucinations and improves generation accuracy, offering a scalable, modular solution for reliable LLM outputs. Our code is publicly available.

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: hallucination detection, decoding strategy, self-checking, reliable generation, token-level control

Token-Guard: Towards Token-Level Hallucination Control via Self-Checking Decoding


1️⃣ One-Sentence Summary

This paper proposes a method called Token-Guard that checks for and corrects hallucinated content in real time at every step of a large language model's text generation, effectively reducing the model's tendency to "make things up" without the heavy cost of large-scale fine-tuning or retrieval augmentation.
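The abstract describes a check-score-prune loop at the fragment level: sample a candidate, score its hallucination risk, and regenerate when the risk is too high. Below is a minimal Python sketch of that control flow only, under stated assumptions: `sample_fragment`, `score_risk`, `RISK_THRESHOLD`, and `MAX_RETRIES` are hypothetical placeholders, not the paper's actual components, and the real method scores candidates in the model's latent space rather than with the toy token-overlap rule used here.

```python
# Sketch of a token-level self-checking decoding loop (illustrative only, not
# the paper's implementation): generate a candidate fragment, score its
# hallucination risk, and prune/regenerate if the risk exceeds a threshold.

import random
from typing import Callable, List

RISK_THRESHOLD = 0.5   # fragments scoring above this are pruned and regenerated
MAX_RETRIES = 4        # after this many tries, keep the lowest-risk candidate
FRAGMENT_LEN = 3       # number of tokens per candidate fragment


def sample_fragment(context: List[str]) -> List[str]:
    """Stand-in for the LLM's next-fragment sampler (toy vocabulary)."""
    vocab = ["the", "model", "reduces", "hallucination", "unicorns", "tokens"]
    return [random.choice(vocab) for _ in range(FRAGMENT_LEN)]


def score_risk(context: List[str], fragment: List[str]) -> float:
    """Placeholder hallucination-risk scorer.

    The paper scores candidates in a latent space; here we fake it with a
    simple rule: tokens unsupported by the context count as 'risky'.
    """
    unsupported = sum(1 for tok in fragment if tok not in context)
    return unsupported / len(fragment)


def self_checking_decode(prompt: List[str], steps: int,
                         sampler: Callable = sample_fragment,
                         scorer: Callable = score_risk) -> List[str]:
    output = list(prompt)
    for _ in range(steps):
        best_fragment, best_risk = None, float("inf")
        for _ in range(MAX_RETRIES):
            fragment = sampler(output)
            risk = scorer(output, fragment)
            if risk < best_risk:
                best_fragment, best_risk = fragment, risk
            if risk <= RISK_THRESHOLD:      # fragment passes the self-check
                break
        output.extend(best_fragment)        # otherwise keep the lowest-risk retry
    return output


if __name__ == "__main__":
    prompt = ["the", "model", "reduces", "hallucination"]
    print(" ".join(self_checking_decode(prompt, steps=3)))
```

The key design point the sketch tries to convey is that verification happens before a fragment is committed to the output, so detected errors never propagate into later decoding steps.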

Source: arXiv 2601.21969