In the previous segment, you were introduced to two classification techniques: One vs One and One vs Rest. You learnt that these are general techniques that can be implemented with any supervised machine learning classification model. In this segment, you will learn about the One vs One method in detail. The technique involves three steps, each with its own function. In the next video, Ankit will explain all three steps in detail.

As explained by Ankit, the One vs One technique involves three steps:

- Create subsets
- Model training
- Model prediction

In the first step, subset creation, the data set is divided into multiple subsets, each containing only two classes of the target variable. Hence, if the target variable has ‘n’ classes, we will have nC2 subsets.

In the video, Ankit took a data set with 1,000 rows and ‘n’ target classes, each represented in a different colour. As shown in the image below, the data set is divided into many subsets, and each subset contains only two target classes, which means there are only two colours in each subset, represented as 1 and 0. Hence, each subset will have fewer than 1,000 rows.

It is very important to note that each of these subsets is now a binary classification data set (only two classes).
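The subset-creation step can be sketched in a few lines of Python. This is a minimal illustration on a toy data set; the function name `make_subsets` and the data are invented for the example, not taken from the video.

```python
from itertools import combinations

import numpy as np

def make_subsets(X, y):
    """Split a multiclass data set into one binary subset per pair of classes."""
    subsets = {}
    for a, b in combinations(np.unique(y), 2):  # nC2 pairs of classes
        mask = (y == a) | (y == b)              # keep only rows of classes a and b
        subsets[(a, b)] = (X[mask], y[mask])
    return subsets

# Toy data: 6 rows, 3 classes -> 3C2 = 3 binary subsets
X = np.arange(12).reshape(6, 2)
y = np.array([0, 0, 1, 1, 2, 2])
subsets = make_subsets(X, y)
print(len(subsets))  # 3
```

Each subset is smaller than the full data set and contains exactly two classes, which is what turns the multiclass problem into several binary ones.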

In the second step, model training, a separate model is built and trained for each subset. Since the target variable has ‘n’ classes, we have nC2 subsets in total, and thus we build nC2 models (one model for each subset). In the image below, each yellow box represents a model built on a particular subset.

It is important to note that the same algorithm is to be used for all classifiers. If you are using logistic regression for one model, then you will have to use logistic regression for all the models. You will learn about the implementation/application part in the segment on Python coding.
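To make the “same algorithm for every subset” point concrete, here is a hedged sketch using scikit-learn’s `LogisticRegression` for every pairwise model; the toy data is invented for illustration.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: one feature, three classes
X = np.array([[0.0], [0.2], [1.0], [1.2], [2.0], [2.2]])
y = np.array([0, 0, 1, 1, 2, 2])

models = {}
for a, b in combinations(np.unique(y), 2):
    mask = (y == a) | (y == b)
    # The same algorithm (logistic regression) is used for every subset
    models[(a, b)] = LogisticRegression().fit(X[mask], y[mask])

print(len(models))  # nC2 = 3C2 = 3 models
```

If you swapped `LogisticRegression` for, say, a decision tree, you would swap it for all nC2 models, not just one.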

In the third step, model prediction, a test sample is passed to each of the nC2 models, and each model classifies it accordingly. In the image below, you can see that a test sample, represented in green, is passed to all the nC2 models. These models classify the test sample based on their probability scores.

For example, in subset 1, the model will classify the test sample as either blue or green. Now, let’s assume that we have built the model using logistic regression. The model will output two probability scores, one for blue and one for green. The test sample gets classified under the colour with the higher probability. After classification, the count variable, which keeps a record of the number of classifications for each class, is incremented for the winning class.

After all the nC2 models have made their classifications, the test sample gets classified under the class with the highest count.
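The voting described in this step can be sketched as follows. The training part repeats the earlier toy example so the snippet is self-contained; the helper name `predict_ovo` is illustrative, not from the video.

```python
from collections import Counter
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: one feature, three classes
X = np.array([[0.0], [0.2], [1.0], [1.2], [2.0], [2.2]])
y = np.array([0, 0, 1, 1, 2, 2])

models = {}
for a, b in combinations(np.unique(y), 2):
    mask = (y == a) | (y == b)
    models[(a, b)] = LogisticRegression().fit(X[mask], y[mask])

def predict_ovo(x):
    """Pass the test sample to every pairwise model and count the votes."""
    votes = Counter()
    for clf in models.values():
        winner = clf.predict(x.reshape(1, -1))[0]  # the class with the higher probability
        votes[winner] += 1                         # increment that class's count
    return votes.most_common(1)[0][0]              # class with the highest count

print(predict_ovo(np.array([2.1])))  # classified as class 2
```

Note that even the model trained on classes 0 and 1 must cast a vote for one of its two classes; the sample still ends up with class 2 because two of the three models vote for it.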

An important note: When the maximum count does not belong to a single class, i.e., two or more classes share the same maximum count, we follow a different approach. Let’s take an example to understand this.

Suppose, after passing the test sample to the One vs One classifier, both the red and the yellow classes have the same count. Now, the test sample can be either red or yellow. Recall that a class’s count increases whenever a model, based on its probability scores, votes for that class. So, if the counts of both red and yellow are three, it means three models gave a higher probability score to red and three models gave a higher probability score to yellow.

Let us assume the following probabilities:

- Red – [0.8879, 0.6656, 0.8891]
- Yellow – [0.91, 0.98711, 0.99991]

You can see that the maximum probability for red among its three scores is 0.8891, and similarly for yellow, it is 0.99991. Since yellow has the highest score among all the probability scores, the test sample is classified as yellow.
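The tie-break can be sketched directly with the numbers from the example. The dictionary name is illustrative; it simply pairs each tied class with the probability scores that earned its three votes.

```python
# Each tied class's winning probability scores (from the example above)
probs = {
    "red":    [0.8879, 0.6656, 0.8891],
    "yellow": [0.91, 0.98711, 0.99991],
}

# Pick the class whose single highest score is the largest overall
winner = max(probs, key=lambda c: max(probs[c]))
print(winner)  # yellow, because 0.99991 > 0.8891
```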

In the end, Ankit summarises the One vs One method diagrammatically. The image below summarises all three steps of the One vs One method in a single slide: Step 1 indicates subset creation, Step 2 indicates model training, and the final step indicates model prediction.

The image below is useful if you want to revise the concept as it explains the concept both diagrammatically and conceptually. You can see that in Step 3 (model prediction), the models classify the points based on the subsets on which they are trained.

For example, classifier 1 in the image classifies the test sample as 1 or 2. It is similar for other subsets as well. In the final box, you can see the count table, where for each class, there is a specific count. The test sample gets classified under the target variable with the maximum count as written in the box.
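As a closing note, scikit-learn ships a ready-made wrapper, `OneVsOneClassifier`, which performs these same three steps internally. Whether the Python coding segment uses this wrapper is an assumption here, but it is a convenient way to check your understanding of the counts.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier

X, y = load_iris(return_X_y=True)  # 3 classes -> 3C2 = 3 pairwise models

ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000))
ovo.fit(X, y)

print(len(ovo.estimators_))  # one fitted logistic regression per pair of classes
print(ovo.predict(X[:1]))    # vote-based prediction for the first sample
```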