Hinge Regression Tree: A Newton Method for Oblique Regression Tree Splitting
1️⃣ One-Sentence Summary
This paper proposes a new method called the Hinge Regression Tree, which trains decision trees with an efficient Newton-type optimization algorithm. The resulting trees can learn more complex oblique decision boundaries while remaining interpretable, achieving better predictive performance with smaller tree structures.
Oblique decision trees combine the transparency of trees with the power of multivariate decision boundaries, but learning high-quality oblique splits is NP-hard, and practical methods still rely on slow search or theory-free heuristics. We present the Hinge Regression Tree (HRT), which reframes each split as a non-linear least-squares problem over two linear predictors whose max/min envelope induces ReLU-like expressive power. The resulting alternating fitting procedure is exactly equivalent to a damped Newton (Gauss-Newton) method within fixed partitions. We analyze this node-level optimization and, for a backtracking line-search variant, prove that the local objective decreases monotonically and converges; in practice, both fixed and adaptive damping yield fast, stable convergence and can be combined with optional ridge regularization. We further prove that HRT's model class is a universal approximator with an explicit $O(\delta^2)$ approximation rate, and show on synthetic and real-world benchmarks that it matches or outperforms single-tree baselines with more compact structures.
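To make the split procedure concrete, here is a minimal sketch, assuming NumPy, of the alternating fit the abstract describes: each sample is assigned to whichever of the two linear predictors attains the max envelope, and each predictor is then refit by ridge-regularized least squares on its own partition (the exact fixed-partition step; per the abstract, this alternation coincides with a damped Gauss-Newton update). The function name `fit_hinge_split`, the random initialization, and the stopping rule are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_hinge_split(X, y, lam=1e-6, n_iter=50, tol=1e-10, seed=0):
    """Alternately fit y ~ max(X1 @ w1, X1 @ w2), where X1 is X with an
    intercept column appended. A hypothetical sketch of the alternating
    scheme described in the abstract, not the authors' implementation."""
    rng = np.random.default_rng(seed)
    X1 = np.hstack([X, np.ones((len(X), 1))])  # append intercept column
    d = X1.shape[1]
    w1 = rng.normal(scale=0.1, size=d)
    w2 = rng.normal(scale=0.1, size=d)

    def ridge_fit(mask):
        # Exact ridge-regularized least squares on one partition.
        A = X1[mask].T @ X1[mask] + lam * np.eye(d)
        return np.linalg.solve(A, X1[mask].T @ y[mask])

    prev_loss = np.inf
    for _ in range(n_iter):
        # Partition: each sample follows whichever predictor attains the max.
        mask = X1 @ w1 >= X1 @ w2
        if mask.any():
            w1 = ridge_fit(mask)
        if (~mask).any():
            w2 = ridge_fit(~mask)
        loss = np.mean((np.maximum(X1 @ w1, X1 @ w2) - y) ** 2)
        if prev_loss - loss < tol:  # stop once the local objective plateaus
            break
        prev_loss = loss
    return w1, w2

# Toy check: the target |x| = max(x, -x) is exactly a hinge of two lines.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.abs(X[:, 0]) + 0.01 * np.random.default_rng(1).normal(size=200)
w1, w2 = fit_hinge_split(X, y)
print("predictor 1:", w1, "\npredictor 2:", w2)
```

On the toy target |x|, the two recovered predictors should approach slopes of roughly +1 and -1, the simplest instance of the ReLU-like envelope the abstract mentions; the paper's backtracking line-search variant additionally guarantees a monotone decrease of this local objective.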
Source: arXiv:2602.05371