arXiv submission date: 2026-02-19
📄 Abstract - When More Experts Hurt: Underfitting in Multi-Expert Learning to Defer

Learning to Defer (L2D) enables a classifier to abstain from predicting and defer to an expert, and has recently been extended to multi-expert settings. In this work, we show that multi-expert L2D is fundamentally more challenging than the single-expert case. With multiple experts, underfitting of the classifier becomes inherent and seriously degrades prediction performance, whereas in the single-expert setting it arises only under specific conditions. We theoretically show that this stems from an intrinsic expert-identifiability issue: learning which expert to trust from a diverse pool, a problem absent in the single-expert case, renders existing underfitting remedies ineffective. To tackle this issue, we propose PiCCE (Pick the Confident and Correct Expert), a surrogate-based method that adaptively identifies a reliable expert based on empirical evidence. PiCCE effectively reduces multi-expert L2D to a single-expert-like learning problem, thereby resolving multi-expert underfitting. We further prove its statistical consistency and its ability to recover class probabilities and expert accuracies. Extensive experiments across diverse settings, including real-world expert scenarios, validate our theoretical results and demonstrate improved performance.
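To make the L2D setup concrete, here is a minimal inference-time sketch of how a multi-expert L2D model typically acts: the model outputs one score per class plus one score per expert, and the argmax decides whether to predict a class itself or defer to a specific expert. This is only an illustration of the general multi-expert L2D decision rule under assumed names (`l2d_decision`, `num_classes`), not the paper's PiCCE training method.

```python
def l2d_decision(scores, num_classes):
    """Decide whether to predict or defer, given model output scores.

    scores: list of length num_classes + num_experts, where the first
    num_classes entries score the classes and the rest score deferral
    to each expert (hypothetical layout for illustration).
    """
    idx = max(range(len(scores)), key=lambda i: scores[i])
    if idx < num_classes:
        return ("predict", idx)          # classifier predicts class idx
    return ("defer", idx - num_classes)  # defer to that expert

# Example with 3 classes and 2 experts: the first expert slot wins,
# so the system defers to expert 0 instead of predicting a class.
print(l2d_decision([0.1, 0.2, 0.15, 0.4, 0.15], num_classes=3))
```

The paper's underfitting issue arises on the training side of this picture: with many expert slots competing for probability mass, the class scores can be systematically suppressed, which is what PiCCE is designed to counteract.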

Top-level tags: machine learning theory, model evaluation
Detailed tags: learning to defer, multi-expert systems, underfitting, expert identifiability, surrogate loss

When More Experts Hurt: Underfitting in Multi-Expert Learning to Defer


1️⃣ One-sentence summary

This paper finds that when an AI system must choose which of several human experts to hand a task to, it struggles to judge which expert to trust, causing severe "underfitting" that degrades prediction performance; the authors propose a new method, PiCCE, to resolve this problem.

Source: arXiv:2602.17144