Recall in machine learning is also known as sensitivity.
Say we have a picture containing 12 apples and some oranges. Our model identifies 8 fruits as apples. Of these 8, only 5 are actually apples; these are the true positives. The other 3 fruits the model labelled as apples are false positives. The model found 5 of the 12 actual apples, so its recall is 5/12.
Another way of looking at recall is as a measure of completeness: of all the relevant results, how many did the model actually find? Equivalently, recall is the probability that a relevant item is predicted as positive by the model.
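The calculation above can be sketched in a few lines of Python. The numbers are the fruit example's (12 actual apples, of which the model finds 5):

```python
def recall(true_positives, false_negatives):
    """Recall = TP / (TP + FN): the fraction of actual positives found."""
    return true_positives / (true_positives + false_negatives)

# 12 actual apples; the model correctly finds 5 (TP),
# so the 7 apples it missed are false negatives (FN).
tp = 5
fn = 12 - 5
print(recall(tp, fn))  # 5/12 ≈ 0.4167
```

Note that the 3 false positives do not appear in the recall formula at all; they would lower precision, not recall.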
We use Regularization in machine learning to minimize the overfitting of the model. Two popular techniques of regularization are L1 and L2.
Linear regression with an L1 penalty is known as Lasso regression; with an L2 penalty it is known as Ridge regression.
Both of these techniques impose a penalty on coefficients to minimize overfitting.
L1 regularization encourages sparsity: it shrinks some coefficients exactly to zero, removing the impact of those features from the model. L2 regularization instead spreads the penalty across all coefficients, shrinking each of them toward zero without eliminating any.
If the data has many correlated features, L2 (Ridge) regression tends to work better. If only a few features are actually related to the prediction, L1 (Lasso) regression usually gives better results.
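The difference between the two penalties is easiest to see in one dimension. Below is a minimal sketch (not any particular library's implementation) using the closed-form minimizers of a single-coefficient problem: an L1 penalty leads to soft-thresholding, which can set the coefficient exactly to zero, while an L2 penalty only scales it down.

```python
import math

def l1_shrink(w0, lam):
    """Minimizer of 0.5*(w - w0)**2 + lam*abs(w): soft-thresholding.
    Coefficients with |w0| <= lam are set exactly to zero (sparsity)."""
    return math.copysign(max(abs(w0) - lam, 0.0), w0)

def l2_shrink(w0, lam):
    """Minimizer of 0.5*(w - w0)**2 + 0.5*lam*w**2: proportional shrinkage.
    Coefficients move toward zero but never reach it exactly."""
    return w0 / (1.0 + lam)

lam = 1.0
for w0 in [3.0, 0.5, -2.0]:
    print(w0, "->", "L1:", l1_shrink(w0, lam), "L2:", l2_shrink(w0, lam))
# The small coefficient 0.5 becomes exactly 0.0 under L1,
# but only shrinks to 0.25 under L2.
```

This is why Lasso acts as an implicit feature selector: unpenalized coefficients below the threshold `lam` drop out of the model entirely.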
Type I and Type II errors are used in machine learning to assess the effectiveness of a hypothesis. These concepts are derived from statistics.
Type I Error: A Type I error is the rejection of a true null hypothesis. It is also known as a false positive: the result indicates that a condition is present when it is actually not. E.g., if a test predicts that a person has diabetes but the person does not actually have diabetes, that is a Type I error.
Type II Error: A Type II error is the failure to reject a false null hypothesis. It is also known as a false negative: the result indicates that a condition is not present when it actually is. E.g., if a test predicts that a person does not have diabetes but the person actually has diabetes, that is a Type II error.
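In code, both error types fall out of a simple confusion-matrix count. The sketch below uses a made-up set of binary diabetes-test outcomes (the labels are illustrative, not real data), with 1 meaning the condition is present:

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP (Type I), FN (Type II), TN for binary labels (1 = positive)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # Type I error
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # Type II error
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

# Hypothetical outcomes: 1 = has diabetes, 0 = does not.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
print(f"Type I (false positives): {fp}, Type II (false negatives): {fn}")
# Type I (false positives): 1, Type II (false negatives): 2
```

Which count matters more depends on the application: for a disease screen, false negatives (missed cases) are usually costlier than false positives.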
In machine learning, we have to establish the acceptance criteria for a model based on the false-positive and false-negative rates we can tolerate. Type I and Type II errors are therefore quite useful when evaluating machine learning models.