Random forest is an ensemble classifier that builds many decision trees and outputs the class that is the mode of the classes predicted by the individual trees. It is called "random" because it injects randomness at two levels: at the row level, each tree is trained on a bootstrap sample of the observations, and at the column level, only a random subset of the features is considered when splitting (see the sketch below). Although it is convenient for large datasets, it has a few disadvantages. On smaller datasets, a simpler model such as linear regression can perform better. The fitted model is also hard to interpret: it does not expose an explicit relationship between the response and the independent variables. In addition, training many trees is computationally expensive, and the model cannot extrapolate beyond the range of values seen in the training data. Even so, random forest is attractive because it reduces variance while keeping bias roughly constant, and it makes few assumptions about the data, such as linearity. Read more at: http://www.datasciencecentral.com/profiles/blogs/random-forests-explained-intuitively
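
As a minimal sketch of the two levels of randomness, here is a toy forest built on scikit-learn's DecisionTreeClassifier: each tree sees a bootstrap sample of the rows and a random subset of the columns, and the forest predicts by majority vote. Note one simplification: a real random forest (including scikit-learn's RandomForestClassifier) re-samples the candidate columns at every split inside each tree, whereas this sketch samples them once per tree to stay short. The class name TinyRandomForest is purely illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class TinyRandomForest:
    """Toy random forest: bootstrap rows + random column subset per tree.
    Illustrative only; real forests sample columns at every split."""

    def __init__(self, n_trees=100, max_features=None, random_state=0):
        self.n_trees = n_trees
        self.max_features = max_features   # columns sampled per tree
        self.rng = np.random.default_rng(random_state)
        self.trees = []                    # (fitted tree, column indices) pairs

    def fit(self, X, y):
        n_rows, n_cols = X.shape
        # Default column-subset size: sqrt of the feature count (common for classification).
        k = self.max_features or max(1, int(np.sqrt(n_cols)))
        self.trees = []
        for _ in range(self.n_trees):
            rows = self.rng.integers(0, n_rows, size=n_rows)        # row-level randomness (bootstrap)
            cols = self.rng.choice(n_cols, size=k, replace=False)   # column-level randomness
            tree = DecisionTreeClassifier(random_state=0)
            tree.fit(X[np.ix_(rows, cols)], y[rows])
            self.trees.append((tree, cols))
        return self

    def predict(self, X):
        # Each tree votes; the forest outputs the mode (majority class) per sample.
        votes = np.stack([tree.predict(X[:, cols]) for tree, cols in self.trees])
        preds = []
        for sample_votes in votes.T:
            values, counts = np.unique(sample_votes, return_counts=True)
            preds.append(values[np.argmax(counts)])
        return np.array(preds)

# Usage example on a standard dataset:
if __name__ == "__main__":
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    forest = TinyRandomForest(n_trees=50).fit(X_tr, y_tr)
    print("accuracy:", (forest.predict(X_te) == y_te).mean())
```

The row and column sampling is what decorrelates the trees: averaging many deep, decorrelated trees is how the forest cuts variance without materially changing bias.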