In the previous segment, you learnt about the One vs One classification method. Now, in this segment, you will learn about the One vs Rest classification technique in detail. Similar to One vs One, this technique involves three steps, each serving a specific purpose. In the coming video, Ankit will provide a detailed explanation of all three steps.
As explained by Ankit, the One vs Rest technique involves three steps.
- Data sets creation
- Model training
- Model prediction
In the first step, data set creation, as explained in the video, multiple data sets are created from the original data, each with the following properties.
- All the data points will be present in each data set.
- Each created data set will have only two classes in its target variable.
In the video, Ankit took a data set of 10,000 rows whose target variable has ‘n’ classes, each represented by a different colour. As shown in the image below, multiple data sets are created, each with a binary target: the points of one particular colour are labelled 1 and all the remaining points are labelled 0. It is important to note that, unlike the One vs One technique, where each subset has fewer than 10,000 rows, in this technique every data set retains all 10,000 rows.
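To make the first step concrete, here is a minimal sketch of data set creation in Python. The label array and the colour names are illustrative assumptions, not the actual data from the video.

```python
import numpy as np

# Toy labels with three classes; the colour names are illustrative.
y = np.array(["blue", "red", "green", "blue", "red", "green"])

# For each class, relabel that class as 1 and every other class as 0.
# Each derived target keeps all the rows of the original data set.
binary_targets = {cls: (y == cls).astype(int) for cls in np.unique(y)}

for cls, target in binary_targets.items():
    print(cls, target)  # e.g. blue -> [1 0 0 1 0 0]
```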
In the second step, model training, a model is built and trained on each data set. Since the target variable has ‘n’ classes, ‘n’ data sets are created in total, and, thus, ‘n’ models are built (one model for each data set). In the image below, you can see that each yellow box represents a model built on a particular data set.
Similar to the One vs One method, it is important to note that the same algorithm must be used for all the classifiers. If you are using logistic regression for one model, then you will have to use logistic regression for all the models. You will learn about the implementation in the segment on Python coding.
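As a rough illustration of this step, the sketch below trains one logistic regression model per class on synthetic data. The data set, feature counts and `max_iter` value are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data with four classes, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

# One binary logistic regression per class, each trained on the full X
# with that class relabelled as 1 and the rest as 0. The same algorithm
# is used for every classifier, as the method requires.
models = {}
for cls in np.unique(y):
    models[cls] = LogisticRegression(max_iter=1000).fit(X, (y == cls).astype(int))
```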
In the third step, model prediction, the test sample is passed to each of the ‘n’ models, and each model classifies it accordingly. In the image below, you can see that a test sample, represented in green, is passed to all ‘n’ models. For each model, the output is a probability score for its own target class (the class represented as 1).
For example, consider the model built on dataset1. It gives the probability score for the test sample to be classified under the blue class. Similarly, the model built on dataset2 gives the probability score for the test sample to be classified under the red class.
Once all ‘n’ models have made their predictions, the test sample is classified under the class with the highest probability score.
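Continuing the training sketch above (the setup is repeated so the snippet runs on its own), a minimal sketch of the prediction step could look like this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Same illustrative setup as in the training sketch above.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
models = {cls: LogisticRegression(max_iter=1000).fit(X, (y == cls).astype(int))
          for cls in np.unique(y)}

x_test = X[:1]  # one sample standing in for the green test point

# Each model reports the probability that the sample belongs to its own
# '1' class; the sample is assigned to the class with the highest score.
scores = {cls: m.predict_proba(x_test)[0, 1] for cls, m in models.items()}
predicted_class = max(scores, key=scores.get)
print(scores)
print("Predicted class:", predicted_class)
```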
In the end, Ankit summarises the One vs Rest method diagrammatically. The image below summarises all three steps of the One vs Rest method in a single slide. You can see that Step 1 indicates data set creation, Step 2 indicates model training and the final step indicates model prediction.
The image below is quite useful if you want to revise the concept, as it explains the method both diagrammatically and in words. You can see that in Step 3 (model prediction), each model classifies the points based on the data set on which it was trained.
For example, classifier 1 in the image gives the probability score for the test sample to be classified under the class 1 category, and the same holds for the other classifiers. In the final box, you can see the probability table, where each class has a specific probability score. The test sample is classified under the class with the maximum probability score, as written in the box.
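Before the Python coding segment, you may also find it helpful to know that scikit-learn provides a ready-made OneVsRestClassifier that performs all three steps internally. The sketch below runs it on synthetic data; the data and parameter choices are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic four-class data, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The same base algorithm (logistic regression) is wrapped into one
# binary classifier per class; fit and predict handle all three steps.
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000))
ovr.fit(X_train, y_train)

print(ovr.predict(X_test[:5]))        # class with the highest score per sample
print(ovr.predict_proba(X_test[:5]))  # per-class probability scores
```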