arXiv submission date: 2026-03-18
📄 Abstract - FoMo-X: Modular Explainability Signals for Outlier Detection Foundation Models

Tabular foundation models, specifically Prior-Data Fitted Networks (PFNs), have revolutionized outlier detection (OD) by enabling unsupervised zero-shot adaptation to new datasets without training. However, despite their predictive power, these models typically function as opaque black boxes, outputting scalar outlier scores that lack the operational context required for safety-critical decision-making. Existing post-hoc explanation methods are often computationally prohibitive for real-time deployment or fail to capture the epistemic uncertainty inherent in zero-shot inference. In this work, we introduce FoMo-X, a modular framework that equips OD foundation models with intrinsic, lightweight diagnostic capabilities. We leverage the insight that the frozen embeddings of a pretrained PFN backbone already encode rich, context-conditioned relational information. FoMo-X attaches auxiliary diagnostic heads to these embeddings, trained offline using the same generative simulator prior as the backbone. This allows us to distill computationally expensive properties, such as Monte Carlo dropout based epistemic uncertainty, into a deterministic, single-pass inference. We instantiate FoMo-X with two novel heads: a Severity Head that discretizes deviations into interpretable risk tiers, and an Uncertainty Head that provides calibrated confidence measures. Extensive evaluation on synthetic and real-world benchmarks (ADBench) demonstrates that FoMo-X recovers ground-truth diagnostic signals with high fidelity and negligible inference overhead. By bridging the gap between foundation model performance and operational explainability, FoMo-X offers a scalable path toward trustworthy, zero-shot outlier detection.
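The core architectural idea above (auxiliary diagnostic heads reading frozen backbone embeddings, so that severity tiers and epistemic uncertainty come out of one deterministic forward pass) can be sketched minimally in NumPy. This is an illustrative sketch only: the embedding matrix and head weights are random placeholders, and all names and shapes are hypothetical, standing in for the pretrained PFN embeddings and the heads FoMo-X would train offline on the simulator prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in FoMo-X the embeddings come from the frozen
# PFN backbone, and the head weights are trained offline on the same
# generative simulator prior as the backbone.
n_points, d_embed, n_tiers = 8, 16, 4
embeddings = rng.normal(size=(n_points, d_embed))     # frozen backbone output
W_severity = rng.normal(size=(d_embed, n_tiers))      # Severity Head weights
w_uncertainty = rng.normal(size=(d_embed,))           # Uncertainty Head weights

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Single deterministic pass: both diagnostics are cheap linear reads of the
# same embeddings -- no Monte Carlo sampling at inference time.
tier_probs = softmax(embeddings @ W_severity)          # (n_points, n_tiers)
risk_tier = tier_probs.argmax(axis=1)                  # discrete risk tier per point
epistemic = 1.0 / (1.0 + np.exp(-(embeddings @ w_uncertainty)))  # confidence in (0, 1)
```

During offline training, the Uncertainty Head's targets could be the expensive quantity being distilled (e.g. the variance of outlier scores under Monte Carlo dropout), after which inference needs only the single pass shown here.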

Top-level tags: model evaluation, machine learning systems
Detailed tags: outlier detection, explainable AI, tabular data, foundation models, uncertainty quantification

FoMo-X: Modular Explainability Signals for Outlier Detection Foundation Models


1️⃣ One-sentence summary

This paper proposes a modular framework called FoMo-X that lets powerful zero-shot outlier detection foundation models deliver fast predictions while automatically providing easy-to-understand risk tiers and confidence estimates, making the AI's decision process more transparent and trustworthy.

Source: arXiv:2603.17570