Introducing Gradient Boosting Machines

Gradient Boosting Machines (GBMs) are currently among the state-of-the-art algorithms in machine learning. A GBM can be used for regression, with decision trees serving as the base prediction models. In GBMs, the learning procedure consecutively fits new models to provide an increasingly accurate estimate of the response variable. The principal idea behind the algorithm is to construct each new base learner to be maximally correlated with the negative gradient of the loss function of the whole ensemble. The loss function can be chosen arbitrarily, but for intuition: if it is the classic squared-error loss, the learning procedure amounts to consecutively fitting the residual errors. Read more at: http://www.dataminingblog.com/
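The residual-fitting procedure described above can be sketched in a few lines. This is a minimal illustration, not a production implementation; it assumes scikit-learn is available and uses squared-error loss, so the negative gradient at each round is simply the current residual vector.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_rounds=50, learning_rate=0.1, max_depth=2):
    """Gradient boosting with squared-error loss: each round fits a
    shallow tree to the residuals (the negative gradient of the loss)."""
    base = y.mean()                       # initial constant model
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_rounds):
        residuals = y - pred              # negative gradient of (1/2)(y - pred)^2
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        pred += learning_rate * tree.predict(X)  # shrink each tree's contribution
        trees.append(tree)
    return trees, base

def gb_predict(trees, base, X, learning_rate=0.1):
    """Sum the base value and the shrunken predictions of all trees."""
    pred = np.full(len(X), base)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred

# Toy usage: fit a noisy sine curve and check the fit beats the constant model.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
trees, base = gradient_boost(X, y)
mse = np.mean((y - gb_predict(trees, base, X)) ** 2)
```

The learning rate (shrinkage) is what makes boosting gradual: each tree corrects only a fraction of the remaining error, which in practice improves generalization at the cost of more rounds.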