Gradient Boosting Neural Networks: GrowNet


Abstract (in English)

A novel gradient boosting framework is proposed in which shallow neural networks are employed as "weak learners." General loss functions are considered under this unified framework, with specific examples presented for classification, regression, and learning to rank. A fully corrective step is incorporated to remedy the pitfalls of the greedy function approximation used in classic gradient boosting decision trees. The proposed model outperforms state-of-the-art boosting methods on all three tasks across multiple datasets. An ablation study is performed to shed light on the effect of each model component and of the model hyperparameters.
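To make the idea concrete, below is a minimal sketch (not the authors' implementation) of boosting with shallow neural networks for least-squares regression: each stage fits a small MLP to the current pseudo-residuals and adds it to the ensemble with a boosting rate. Names such as `make_weak_learner`, `num_stages`, and `boost_rate` are illustrative assumptions, and GrowNet-specific details (feeding penultimate-layer features of earlier learners to later ones, and the fully corrective step that refits all stages jointly) are omitted or only noted in comments.

```python
import torch
import torch.nn as nn

def make_weak_learner(in_dim, hidden_dim=16):
    # A shallow two-layer network serves as the "weak learner".
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))

def fit_stage(net, x, target, epochs=50, lr=1e-2):
    # Fit one weak learner to the current pseudo-residuals.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x).squeeze(-1), target)
        loss.backward()
        opt.step()
    return net

def boosted_regression(x, y, num_stages=10, boost_rate=0.3):
    ensemble, prediction = [], torch.zeros_like(y)
    for _ in range(num_stages):
        residual = y - prediction  # negative gradient of the squared loss
        net = fit_stage(make_weak_learner(x.shape[1]), x, residual)
        ensemble.append(net)
        with torch.no_grad():
            prediction = prediction + boost_rate * net(x).squeeze(-1)
        # A fully corrective step would jointly refit all stages (and the
        # boosting rate) here; it is left out to keep the sketch short.
    return ensemble, prediction

# Usage on synthetic data
x = torch.randn(256, 4)
y = torch.sin(x[:, 0]) + 0.5 * x[:, 1]
ensemble, pred = boosted_regression(x, y)
print(f"final MSE: {nn.functional.mse_loss(pred, y).item():.4f}")
```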
