Neural Network Conversion of Machine Learning Pipelines
1️⃣ One-Sentence Summary
This paper proposes a new approach that "teaches" the knowledge of a traditional non-neural machine learning model (such as a random forest) to a neural network student, so that the student network can match the original model's performance on most tasks while enabling unified, end-to-end optimization of the whole pipeline.
Transfer learning and knowledge distillation have recently gained a lot of attention in the deep learning community. One transfer approach, student-teacher learning, has been shown to successfully create "small" student neural networks that mimic the performance of a much bigger and more complex "teacher" network. In this paper, we investigate an extension of this approach and transfer from a non-neural machine learning pipeline as teacher to a neural network (NN) student, which allows for joint optimization of the various pipeline components and a single unified inference engine for multiple ML tasks. In particular, we explore replacing a random forest classifier by transferring its knowledge to a student NN. We experimented with various NN topologies on 100 OpenML tasks on which random forest was one of the best solutions. Our results show that for the majority of the tasks, the student NN can indeed mimic the teacher if one selects the right NN hyper-parameters. We also investigated the use of random forests for selecting the right NN hyper-parameters.
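The core idea of the transfer step can be sketched in a few lines: a random forest teacher is trained on hard labels, and a student network is then fit to the teacher's class probabilities (soft labels) rather than the original labels. This is a minimal illustration, not the paper's implementation; the dataset, model sizes, and hyper-parameters below are arbitrary placeholders.

```python
# Minimal sketch of random-forest-to-NN distillation (illustrative only):
# the student regresses onto the teacher's soft labels via MLPRegressor.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Toy classification task standing in for an OpenML task.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Teacher: random forest trained on the hard labels.
teacher = RandomForestClassifier(n_estimators=200, random_state=0)
teacher.fit(X_train, y_train)

# Soft labels: the teacher's per-class probabilities on the training set.
soft_labels = teacher.predict_proba(X_train)  # shape (n_samples, n_classes)

# Student: a small NN fit to the soft labels (the distillation step).
student = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0)
student.fit(X_train, soft_labels)

# The student predicts a class by taking the argmax over its outputs.
student_pred = np.argmax(student.predict(X_test), axis=1)

print("teacher accuracy:", accuracy_score(y_test, teacher.predict(X_test)))
print("student accuracy:", accuracy_score(y_test, student_pred))
```

In practice one would sweep the student's topology (depth, width, activation), which is exactly the hyper-parameter selection problem the abstract raises.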
Source: arXiv:2603.25699