In ensemble learning, we combine two or more models to build a model that is more accurate than any of its individual members. Boosting and bagging are two common approaches to ensemble learning.
Bagging: Bagging is short for bootstrap aggregating. In bagging, we reduce the variance of the model by drawing multiple bootstrap samples (random samples of the training data, taken with replacement), training a separate model on each sample, and aggregating their predictions by averaging (for regression) or majority vote (for classification). Because each model sees a slightly different dataset, their individual errors tend to cancel out when combined.
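The bootstrap-and-aggregate idea can be sketched in a few lines of plain Python. Here each "model" is just the mean of its bootstrap sample; the function names (`bootstrap_sample`, `bagged_predict`) and the toy dataset are illustrative, not from any library.

```python
import random

random.seed(0)

def bootstrap_sample(data):
    # Draw n points *with replacement* from an n-point dataset.
    return [random.choice(data) for _ in data]

def bagged_predict(data, n_models=50):
    # Train one trivial "model" (here: the sample mean) per bootstrap
    # sample, then aggregate the individual predictions by averaging.
    predictions = [sum(s) / len(s)
                   for s in (bootstrap_sample(data) for _ in range(n_models))]
    return sum(predictions) / len(predictions)

data = [2.0, 4.0, 6.0, 8.0, 10.0]
print(round(bagged_predict(data), 2))
```

The average of many bootstrap estimates stays close to the true mean (6.0 here) while varying less than any single estimate, which is exactly the variance reduction bagging is after.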
Boosting: Boosting is a sequential algorithm for ensemble learning. We start by training a weak model on the dataset, then train each subsequent model to focus on the examples the previous models got wrong. The weak models are finally combined into a single strong model, which boosts overall performance.
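As a minimal sketch, scikit-learn's AdaBoostClassifier (assuming scikit-learn is installed) implements exactly this sequential reweighting; the dataset here is a synthetic toy problem.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Toy classification problem. AdaBoost fits weak learners (decision
# stumps by default) one after another, upweighting the examples the
# earlier learners misclassified.
X, y = make_classification(n_samples=200, random_state=0)
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(round(clf.score(X, y), 2))
```

Each of the 50 stumps is individually weak, but their weighted combination fits the training data well.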
Data partition: In bagging, the training subsets are drawn at random with replacement. In boosting, misclassified examples are given higher weight in later rounds.
Goal: In bagging, the goal is to reduce the variance of the model. Boosting aims to reduce bias and thereby increase the prediction accuracy of the model.
Method: Bagging trains its models independently on random samples of the data (the related random subspace method samples the features instead). Boosting trains its models sequentially; gradient boosting, for example, fits each new model with a gradient-descent-like step on the loss.
Function: Bagging combines predictions with a simple, equal-weight average or majority vote. Boosting combines them with a weighted vote, in which more accurate learners receive larger weights.
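The contrasts above can be seen side by side by training both ensemble types on the same held-out split. This is a sketch using scikit-learn's BaggingClassifier and AdaBoostClassifier (assumed installed); the dataset parameters are arbitrary choices for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Bagging: 50 trees trained independently on bootstrap samples,
# combined by equal-weight majority vote.
bag = BaggingClassifier(n_estimators=50, random_state=1).fit(X_tr, y_tr)

# Boosting: 50 weak learners trained sequentially, combined by a
# weighted vote.
boost = AdaBoostClassifier(n_estimators=50, random_state=1).fit(X_tr, y_tr)

print(round(bag.score(X_te, y_te), 2), round(boost.score(X_te, y_te), 2))
```

Which approach wins depends on the data: bagging helps most when the base model is unstable (high variance), while boosting helps most when the base model is too simple (high bias).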