In the previous session, you built a logistic regression model and arrived at the final set of features using RFE and manual feature elimination. You got an accuracy of about **80.475%** for the model. But the question now is – is accuracy enough to assess the goodness of the model? As you’ll see, the answer is a big **NO!**

To understand why accuracy is often not the best metric, consider this business problem –

“Let’s say that increasing churn is the most serious issue in the telecom company, and the company desperately wants to retain customers. To do that, the marketing head decides to roll out discounts and offers to all customers who are likely to churn – ideally, not a single ‘churn’ customer should be missed. Hence, it is important that the model identifies almost all the ‘churn’ customers correctly. It is fine if it incorrectly predicts some ‘non-churn’ customers as ‘churn’, since the worst that can happen is that the company offers discounts to customers who would have stayed anyway.”

Let’s take a look at the confusion matrix we got for our final model again – the actual labels are along the rows while the predicted labels are along the columns (e.g. 595 customers are actually ‘churn’ but were predicted as ‘not-churn’).

| Actual / Predicted | Not Churn | Churn |
| --- | --- | --- |
| **Not Churn** | 3269 | 366 |
| **Churn** | 595 | 692 |

From the table above, you can see that there are **595 + 692 = 1287** actual ‘churn’ customers, so ideally the model should predict all of them as ‘churn’ (corresponding to the business problem above). But out of these 1287, the current model predicts only 692 as ‘churn’. Thus, only 692 out of 1287, or **only about 53% of ‘churn’ customers, will be predicted by the model as ‘churn’**. This is very risky – the company won’t be able to roll out offers to the remaining 47% of ‘churn’ customers, and they could switch to a competitor!

So although the accuracy is about 80%, the model only predicts 53% of churn cases correctly.
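You can verify both of these numbers directly from the confusion matrix. Here is a minimal sketch in plain Python, with the cell values hard-coded from the table above:

```python
# Confusion matrix cells (rows = actual, columns = predicted)
tn, fp = 3269, 366   # actual 'Not Churn' row
fn, tp = 595, 692    # actual 'Churn' row

accuracy = (tn + tp) / (tn + fp + fn + tp)  # correct predictions / all predictions
churn_caught = tp / (fn + tp)               # churners the model actually flags

print(f"Accuracy:       {accuracy:.3%}")    # → Accuracy:       80.475%
print(f"Churn detected: {churn_caught:.3%}")  # → Churn detected: 53.768%
```

Note how the two numbers come from different slices of the same matrix: accuracy uses all four cells, while the churn-detection rate uses only the bottom row.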

In essence, what’s happening here is that you care more about one class (class = ‘churn’) than the other. This is a very common situation in classification problems – you almost always care more about one class than the other. Accuracy, on the other hand, tells you the model’s performance on both classes combined – which is useful, but often not the most important metric.

Consider another example – suppose you’re building a logistic regression model for cancer patients. Based on certain features, you need to predict whether the patient has cancer or not. In this case, incorrectly predicting many diseased patients as ‘not having cancer’ can be very risky. In such cases, instead of looking at the overall accuracy, it is better to focus on predicting the 1’s (the diseased) correctly.

Similarly, if you’re building a model to determine whether you should block (where blocking is a 1 and not blocking is a 0) a customer’s transactions or not based on his past transaction behaviour in order to identify frauds, you’d care more about getting the 0’s right. This is because you might not want to wrongly block a good customer’s transactions as it might lead to a very bad customer experience.

Hence, it is very crucial that you consider the **overall business problem** you are trying to solve to decide the metric you want to maximise or minimise.

This brings us to two of the most commonly used metrics to evaluate a classification model.

- Sensitivity
- Specificity

Let’s understand these metrics one by one. **Sensitivity** is defined as:

Sensitivity = (Number of actual Yeses correctly predicted) / (Total number of actual Yeses)

Here, ‘yes’ means ‘churn’ and ‘no’ means ‘non-churn’. Let’s look at the confusion matrix again.

| Actual / Predicted | Not Churn | Churn |
| --- | --- | --- |
| **Not Churn** | 3269 | 366 |
| **Churn** | 595 | 692 |

The different elements in this matrix can be labelled as follows.

| Actual / Predicted | Not Churn | Churn |
| --- | --- | --- |
| **Not Churn** | True Negatives | False Positives |
| **Churn** | False Negatives | True Positives |

- The first cell contains the actual ‘Not Churns’ correctly predicted as ‘Not Churn’ and hence is labelled **‘True Negatives’** (‘Negative’ implying that the class is ‘0’, here ‘Not Churn’).
- The second cell contains the actual ‘Not Churns’ incorrectly predicted as ‘Churn’ and hence is labelled **‘False Positives’** (predicted as ‘Churn’, i.e. Positive, but actually not a churn).
- Similarly, the third cell contains the actual ‘Churns’ incorrectly predicted as ‘Not Churn’, which is why we call it **‘False Negatives’**.
- And finally, the fourth cell contains the actual ‘Churns’ correctly predicted as ‘Churn’, and so it is labelled **‘True Positives’**.
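To make the four labels concrete, here is a tiny sketch in plain Python that counts each cell the way the table defines it. The labels here are made up purely for illustration, with 1 = ‘Churn’ and 0 = ‘Not Churn’:

```python
# Hypothetical actual and predicted labels (1 = Churn, 0 = Not Churn)
actual    = [0, 0, 1, 1, 0, 1]
predicted = [0, 1, 1, 0, 0, 1]

pairs = list(zip(actual, predicted))
tn = sum(a == 0 and p == 0 for a, p in pairs)  # True Negatives
fp = sum(a == 0 and p == 1 for a, p in pairs)  # False Positives
fn = sum(a == 1 and p == 0 for a, p in pairs)  # False Negatives
tp = sum(a == 1 and p == 1 for a, p in pairs)  # True Positives

print(tn, fp, fn, tp)  # → 2 1 1 2
```

The first word of each label describes whether the prediction was right (True/False); the second describes what was predicted (Positive = 1, Negative = 0).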

Now, to find the sensitivity, you first need the number of actual Yeses correctly predicted. This number can be found in the last row and last column of the matrix (the true positives): it is **692**. Next, you need the total number of actual Yeses. This is the sum of the numbers in the last row, i.e. the actual number of churns (the actual churns wrongly identified as not-churns, plus the actual churns correctly identified as churns). Hence, you get **595 + 692 = 1287**.

Now, when you replace these values in the sensitivity formula, you get:

Sensitivity = 692 / 1287 ≈ 53.768%

Thus, you can clearly see that although you had a high accuracy **(~80.475%)**, your sensitivity turned out to be quite low **(~53.768%)**.
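Plugging the cell values into the formula takes only a couple of lines; the numbers below are the ones from the confusion matrix above:

```python
# Sensitivity = True Positives / (True Positives + False Negatives)
tp, fn = 692, 595  # cell values from the confusion matrix above
sensitivity = tp / (tp + fn)
print(f"Sensitivity: {sensitivity:.3%}")  # → Sensitivity: 53.768%
```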

Now, similarly, **specificity** is defined as:

Specificity = (Number of actual Nos correctly predicted) / (Total number of actual Nos)

As you can now infer, this value is given by **True Negatives (3269)** divided by the total number of actual negatives, i.e. **True Negatives + False Positives (3269 + 366 = 3635)**. Hence, substituting these values into the formula, you get the specificity as:

Specificity = 3269 / 3635 ≈ 89.931%
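The same one-line check works for specificity, again using the cell values from the confusion matrix above:

```python
# Specificity = True Negatives / (True Negatives + False Positives)
tn, fp = 3269, 366  # cell values from the confusion matrix above
specificity = tn / (tn + fp)
print(f"Specificity: {specificity:.3%}")  # → Specificity: 89.931%
```

So the model is much better at recognising the customers who stay (specificity ≈ 90%) than the customers who leave (sensitivity ≈ 54%) – exactly the opposite of what the business problem needs.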