arXiv submission date: 2026-02-25
📄 Abstract - Coarsening Bias from Variable Discretization in Causal Functionals

A class of causal effect functionals requires integration over conditional densities of continuous variables, as in mediation effects and nonparametric identification in causal graphical models. Estimating such densities and evaluating the resulting integrals can be statistically and computationally demanding. A common workaround is to discretize the variable and replace integrals with finite sums. Although convenient, discretization alters the population-level functional and can induce non-negligible approximation bias, even under correct identification. Under smoothness conditions, we show that this coarsening bias is first order in the bin width and arises at the level of the target functional, distinct from statistical estimation error. We propose a simple bias-reduced functional that evaluates the outcome regression at within-bin conditional means, eliminating the leading term and yielding a second-order approximation error. We derive plug-in and one-step estimators for the bias-reduced functional. Simulations demonstrate substantial bias reduction and near-nominal confidence interval coverage, even under coarse binning. Our results provide a simple framework for controlling the impact of variable discretization on parameter approximation and estimation.
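The core idea can be illustrated numerically. The following is a minimal sketch, not code from the paper: we take a hypothetical target functional θ = E[m(X)] with a known outcome regression m and a non-uniform density for X, so that the naive discretized functional (evaluating m at bin midpoints) carries a first-order-in-bin-width bias, while evaluating m at within-bin conditional means E[X | bin] removes the leading term. All variable names and the specific distribution are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not from the paper). Target: theta = E[m(X)] with
# X ~ Exp(1) and m(x) = exp(-x), so theta = E[exp(-X)] = 1/2 exactly.
rng = np.random.default_rng(0)

n = 1_000_000
x = rng.exponential(1.0, size=n)  # non-uniform density: midpoints are biased


def m(v):
    """Outcome regression (assumed known here for illustration)."""
    return np.exp(-v)


theta_true = 0.5

# Coarsen X into equal-width bins of width h; the tail beyond 8 is lumped
# into the last bin for simplicity.
h = 1.0
edges = np.arange(0.0, 8.0 + h, h)
idx = np.clip(np.digitize(x, edges) - 1, 0, len(edges) - 2)

counts = np.bincount(idx, minlength=len(edges) - 1)
probs = counts / n
midpoints = (edges[:-1] + edges[1:]) / 2

# Naive discretized functional: evaluate m at bin midpoints -> O(h) bias,
# because E[X | bin] differs from the midpoint under a non-uniform density.
theta_naive = np.sum(probs * m(midpoints))

# Bias-reduced functional: evaluate m at within-bin conditional means
# E[X | bin k], eliminating the leading term -> O(h^2) approximation error.
cond_means = np.bincount(idx, weights=x, minlength=len(edges) - 1) / np.maximum(counts, 1)
theta_br = np.sum(probs * m(cond_means))

print(f"true            = {theta_true:.4f}")
print(f"midpoint        = {theta_naive:.4f}  (error {abs(theta_naive - theta_true):.4f})")
print(f"within-bin mean = {theta_br:.4f}  (error {abs(theta_br - theta_true):.4f})")
```

Even with the coarse bin width h = 1, the within-bin-mean version lands markedly closer to the true value; the residual gap is the second-order (within-bin curvature) term the paper's error bound describes.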

Top-level tags: theory, machine learning, data
Detailed tags: causal inference, discretization bias, functional estimation, mediation analysis, nonparametric identification

Coarsening Bias from Variable Discretization in Causal Functionals


1️⃣ One-Sentence Summary

This paper shows that discretizing continuous variables for computational convenience in causal inference introduces non-negligible approximation bias even under correct identification, and proposes a simple fix: evaluating the outcome regression at within-bin conditional means, which eliminates the leading bias term and substantially improves estimation accuracy.

Source: arXiv 2602.22083