A comparative analysis of gradient boosting algorithms



Candice Bentéjac 1 · Anna Csörgő 2 · Gonzalo Martínez‑Muñoz 3

© Springer Nature B.V. 2020

Abstract

The family of gradient boosting algorithms has recently been extended with several interesting proposals (namely XGBoost, LightGBM and CatBoost) that focus on both speed and accuracy. XGBoost is a scalable ensemble technique that has proven to be a reliable and efficient solver of machine learning challenges. LightGBM is an accurate model focused on providing extremely fast training through selective sampling of high-gradient instances. CatBoost modifies the computation of gradients to avoid prediction shift, thereby improving the accuracy of the model. This work presents a practical analysis of how these novel variants of gradient boosting behave in terms of training speed, generalization performance and hyper-parameter setup. In addition, a comprehensive comparison between XGBoost, LightGBM, CatBoost, random forests and gradient boosting has been carried out, both with carefully tuned models and with their default settings. The results of this comparison indicate that CatBoost obtains the best generalization accuracy and AUC on the studied datasets, although the differences are small. LightGBM is the fastest of all methods, but not the most accurate. XGBoost places second both in accuracy and in training speed. Finally, an extensive analysis of the effect of hyper-parameter tuning on XGBoost, LightGBM and CatBoost is carried out using two novel proposed tools.

Keywords XGBoost · LightGBM · CatBoost · Gradient boosting · Random forest · Ensembles of classifiers
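As a rough illustration of the kind of default-settings comparison described in the abstract, the sketch below trains the scikit-learn-compatible classifiers of the three libraries on a synthetic task and reports accuracy and AUC. The dataset, split and metrics here are placeholders for illustration only, not the paper's experimental setup.

```python
# Minimal sketch of a default-settings comparison of the three boosting
# libraries; this is an illustration, not the authors' experimental code.
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for the paper's datasets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "XGBoost": XGBClassifier(),
    "LightGBM": LGBMClassifier(),
    "CatBoost": CatBoostClassifier(verbose=0),  # silence per-iteration logs
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: accuracy={acc:.3f}, AUC={auc:.3f}")
```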

* Gonzalo Martínez‑Muñoz, [email protected]

Candice Bentéjac, candice.bentejac@u‑bordeaux.fr; [email protected]

Anna Csörgő, [email protected]

1 College of Science and Technology, University of Bordeaux, Bordeaux, France

2 Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary

3 Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain




1 Introduction

As machine learning becomes a critical part of the success of more and more applications, such as credit scoring (Xia et al. 2017), bioactive molecule prediction (Babajide Mustapha and Saeed 2016), solar and wind energy prediction (Torres-Barrán et al. 2017), oil price prediction (Gumus and Kiran 2017), classification of galactic unidentified sources and gravitationally lensed quasars (Mirabal et al. 2016; Khramtsov et al. 2019), sentiment analysis (Valdivia et al. 2018), and prediction of dementia from electronic health record data (Nori et al. 2019), it is essential to find models that can deal efficiently with complex data, and with large amounts of it. With that perspective in mind, ensemble methods have been a very effective tool to improve the performance of existing models (Breiman 2001; Friedman 2001; Yoav Freund 1999; Chen and Guestrin 2016). These methods combine the outputs of multiple base models to produce a prediction that is typically more accurate than that of any single member.
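To make the ensemble effect concrete, the following minimal sketch (not from the paper) compares a single decision tree against two tree ensembles, random forest and gradient boosting, both of which are among the methods studied in this work; on most tasks the ensembles outperform the single tree.

```python
# Hypothetical illustration of the ensemble effect discussed above: a single
# decision tree versus two ensembles of trees on the same synthetic task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    # 5-fold cross-validated accuracy, averaged over the folds.
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{type(model).__name__}: mean CV accuracy = {score:.3f}")
```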