Show simple item record

dc.contributor.advisor    Rue, Håvard
dc.contributor.author     Nielsen, Didrik
dc.date.accessioned       2017-03-13T07:58:50Z
dc.date.available         2017-03-13T07:58:50Z
dc.date.created           2016-12-26
dc.date.issued            2016
dc.identifier             ntnudaim:16128
dc.identifier.uri         http://hdl.handle.net/11250/2433761
dc.description.abstract   Tree boosting has empirically proven to be a highly effective approach to predictive modeling. It has shown remarkable results for a vast array of problems. For many years, MART has been the tree boosting method of choice. More recently, a tree boosting method known as XGBoost has gained popularity by winning numerous machine learning competitions. In this thesis, we will investigate how XGBoost differs from the more traditional MART. We will show that XGBoost employs a boosting algorithm which we will term Newton boosting. This boosting algorithm will further be compared with the gradient boosting algorithm that MART employs. Moreover, we will discuss the regularization techniques that these methods offer and the effect these have on the models. In addition, we will attempt to answer the question of why XGBoost seems to win so many competitions. To do this, we will provide some arguments for why tree boosting, and in particular XGBoost, seems to be such a highly effective and versatile approach to predictive modeling. The core argument is that tree boosting can be seen to adaptively determine the local neighbourhoods of the model. Tree boosting can thus be seen to take the bias-variance tradeoff into consideration during model fitting. XGBoost further introduces some subtle improvements that allow it to deal with the bias-variance tradeoff even more carefully.
dc.language               eng
dc.publisher              NTNU
dc.subject                Physics and Mathematics, Industrial Mathematics
dc.title                  Tree Boosting With XGBoost - Why Does XGBoost Win "Every" Machine Learning Competition?
dc.type                   Master thesis
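
To make the abstract's central distinction concrete: gradient boosting (MART) fits each new tree using only first-order information about the loss, whereas Newton boosting (XGBoost) also uses second-order information. Below is a minimal sketch of the two update rules, written in the notation of the XGBoost literature rather than necessarily the thesis's own (g_i and h_i are the first and second derivatives of the loss, I_j the instance set of leaf j, \lambda XGBoost's L2 penalty on leaf weights); it omits details such as MART's per-leaf line search and the shrinkage step.

% Derivatives of the loss at the current model \hat{f}^{(m-1)}
\[
g_i = \left.\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)}\right|_{f=\hat{f}^{(m-1)}},
\qquad
h_i = \left.\frac{\partial^2 L(y_i, f(x_i))}{\partial f(x_i)^2}\right|_{f=\hat{f}^{(m-1)}}
\]
% Gradient boosting (MART): fit tree m to the negative gradients by least squares
\[
\hat{t}_m = \operatorname*{arg\,min}_{t} \sum_{i=1}^{n} \bigl(-g_i - t(x_i)\bigr)^2
\]
% Newton boosting (XGBoost): weighted least squares with Hessian weights;
% adding the L2 penalty \lambda on leaf weights gives the optimal weight w_j^* of leaf j
\[
\hat{t}_m = \operatorname*{arg\,min}_{t} \sum_{i=1}^{n} h_i \Bigl(-\tfrac{g_i}{h_i} - t(x_i)\Bigr)^2,
\qquad
w_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}
\]

The penalty \lambda in the leaf weight is one of the additional regularization terms the abstract alludes to; MART has no analogous per-leaf penalty and relies on shrinkage and subsampling instead.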

