arXiv submission date: 2026-04-02
📄 Abstract - OSCAR: Orchestrated Self-verification and Cross-path Refinement

Diffusion language models (DLMs) expose their denoising trajectories, offering a natural handle for inference-time control; accordingly, an ideal hallucination mitigation framework should intervene during generation using this model-native signal rather than relying on an externally trained hallucination classifier. Toward this, we formulate commitment uncertainty localization: given a denoising trajectory, identify token positions whose cross-chain entropy exceeds an unsupervised threshold before factually unreliable commitments propagate into self-consistent but incorrect outputs. We introduce a suite of trajectory-level assessments, including a cross-chain divergence-at-hallucination (CDH) metric, for principled comparison of localization methods. We also introduce OSCAR, a training-free inference-time framework operationalizing this formulation. OSCAR runs N parallel denoising chains with randomized reveal orders, computes cross-chain Shannon entropy to detect high-uncertainty positions, and then performs targeted remasking conditioned on retrieved evidence. Ablations confirm that localization and correction contribute complementary gains, robust across N in {4, 8, 16}. On TriviaQA, HotpotQA, RAGTruth, and CommonsenseQA using LLaDA-8B and Dream-7B, OSCAR enhances generation quality by significantly reducing hallucinated content and improving factual accuracy through uncertainty-guided remasking, which also facilitates more effective integration of retrieved evidence. Its native entropy-based uncertainty signal surpasses that of specialized trained detectors, highlighting an inherent capacity of diffusion language models to identify factual uncertainty that is not present in the sequential token commitment structure of autoregressive models. We are releasing the codebase to support future research on localization and uncertainty-aware generation in DLMs.
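The localization step described in the abstract — run N parallel chains, measure cross-chain Shannon entropy per position, and flag positions above a threshold for remasking — can be sketched as follows. This is a simplified illustration, not the paper's implementation: it computes entropy over the hard tokens sampled by each chain rather than over model token distributions, and the function names and threshold value are assumptions for illustration only.

```python
import math
from collections import Counter

def cross_chain_entropy(chains):
    """Shannon entropy (bits) of the empirical token distribution at each
    position across N parallel denoising chains (all the same length)."""
    length = len(chains[0])
    entropies = []
    for pos in range(length):
        counts = Counter(chain[pos] for chain in chains)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

def flag_uncertain_positions(chains, threshold=1.0):
    """Positions whose cross-chain entropy exceeds the threshold become
    candidates for targeted remasking and evidence-conditioned redecoding."""
    return [pos for pos, h in enumerate(cross_chain_entropy(chains))
            if h > threshold]

# Four chains agree at positions 0 and 2 but disagree at position 1,
# so position 1 is flagged as a high-uncertainty commitment.
chains = [
    ["Paris", "is", "capital"],
    ["Paris", "was", "capital"],
    ["Paris", "became", "capital"],
    ["Paris", "is", "capital"],
]
print(flag_uncertain_positions(chains))  # → [1]
```

Agreement across chains drives entropy to zero, while divergent commitments at the same position raise it; in OSCAR this signal replaces an externally trained hallucination detector.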

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: diffusion language models, hallucination mitigation, uncertainty localization, inference-time control, factual accuracy

OSCAR: Orchestrated Self-verification and Cross-path Refinement


1️⃣ One-sentence summary

This paper proposes OSCAR, a training-free method that lets a diffusion language model actively detect and correct likely "hallucinations" (inaccurate or fabricated content) during generation through parallel inference and self-comparison, significantly improving the factual accuracy and reliability of its outputs.

Source: arXiv:2604.01624