Random forests are ensembles of decision trees in which each tree depends on the values 
of a random vector, sampled independently and with the same distribution for 
all trees in the forest. As the number of trees increases, the generalization error of the 
forest converges almost surely to a fixed limit. The generalization error of a forest of 
tree classifiers is influenced by both the individual tree strengths and the correlation 
between them. By randomly selecting features for splitting each node, random forests 
achieve error rates that are competitive with AdaBoost (Y. Freund & R. Schapire, Machine 
Learning: Proceedings of the Thirteenth International Conference, ***, 148–156), but 
they are more resilient to noise. Internal estimates are used to track error, strength, and 
correlation, which also help assess the impact of using more features during splitting. 
These internal measures are also employed to evaluate the importance of variables. 
These concepts can also be applied to regression tasks.
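As a minimal illustrative sketch of the procedure described above, the following uses scikit-learn's `RandomForestClassifier` (an assumed dependency, not part of the paper itself); the dataset and parameter choices are arbitrary examples. Bootstrap resampling plays the role of the random vector, `max_features` controls the random feature selection at each node, and the out-of-bag score is one form of the internal error estimate mentioned:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Each tree is grown on an independent bootstrap sample of the data; at
# every node only a random subset of features (max_features) is considered
# for the split. oob_score=True computes an internal (out-of-bag)
# generalization-error estimate without a held-out test set.
forest = RandomForestClassifier(
    n_estimators=200,
    max_features="sqrt",
    oob_score=True,
    random_state=0,
)
forest.fit(X, y)

print(forest.oob_score_)            # internal estimate of accuracy
print(forest.feature_importances_)  # variable-importance measures
```

A `RandomForestRegressor` with the same parameters covers the regression setting noted in the last sentence.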