Ensemble learning is a branch of machine learning that studies algorithms and architectures for building collections, or ensembles, of statistical classifiers or regressors that are more accurate than any single classifier or regressor. The technique combines the outputs of several machine learning models, called “weak learners”, in order to achieve smaller prediction errors (in regression) or lower error rates (in classification). The individual estimators must exhibit different patterns of generalization, so diversity is deliberately encouraged during training; otherwise, the ensemble would consist of identical predictors and would offer no better accuracy than a single one. It has been shown that an ensemble performs best when each individual learner is accurate and makes errors on different examples. Methods of ensemble learning include bagging, boosting, stacking, subsampling, random subspaces, mixture of experts, and others.
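
As a concrete illustration, the sketch below shows a minimal bagging-style ensemble: shallow decision trees are trained on bootstrap samples (the source of diversity) and combined by majority vote. The synthetic dataset, the number of trees, and the tree depth are illustrative assumptions, not taken from the source; it assumes scikit-learn and NumPy are available.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data (illustrative only).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
learners = []
for _ in range(25):
    # Bootstrap sampling injects diversity: each tree sees a different sample.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeClassifier(max_depth=2).fit(X_train[idx], y_train[idx])
    learners.append(tree)

# Combine the weak learners by majority vote over their 0/1 predictions.
votes = np.mean([t.predict(X_test) for t in learners], axis=0)
ensemble_pred = (votes > 0.5).astype(int)

print("single tree accuracy:", learners[0].score(X_test, y_test))
print("ensemble accuracy:   ", (ensemble_pred == y_test).mean())
```

Because each tree is trained on a different bootstrap sample, the trees make errors on different examples, and the majority vote typically yields a lower error rate than any individual tree.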