The model evaluation on the train set is complete, and the model seems to be doing a decent job. You saw two views of the evaluation metrics: the sensitivity-specificity view and the precision-recall view. Which view you work with is completely up to you.
In this session, we will go forward with the sensitivity-specificity view and make predictions based on the 0.3 cut-off that we decided earlier. Note that we are now making predictions on the test set, so we have to use a threshold that was determined during the training phase: either the 0.3 cut-off from the sensitivity-specificity view or the 0.42 cut-off from the precision-recall view. In this demonstration, we have chosen the sensitivity-specificity view and hence are going with the 0.3 cut-off. In the notebook, we have taken the cut-off as 0.42 to show the precision-recall view. You are free to pick either cut-off, although the two approaches will yield slightly different results.
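As a minimal sketch of this step (the variable names and values below are assumptions for illustration, not the notebook's exact ones), applying the chosen cut-off to the model's predicted churn probabilities on the test set could look like this:

```python
import pandas as pd

# Assumed input: y_test_pred_prob holds the model's predicted churn
# probabilities for the test set (e.g. from model.predict on the test data).
y_test_pred_prob = pd.Series([0.12, 0.45, 0.31, 0.78, 0.05])  # placeholder values

CUTOFF = 0.3  # sensitivity-specificity cut-off; use 0.42 for the precision-recall view

# A customer is predicted to churn when the probability exceeds the cut-off
y_test_pred = (y_test_pred_prob > CUTOFF).astype(int)
print(y_test_pred.tolist())  # [0, 1, 1, 1, 0]
```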
Note: At 2:45, Rahim mistakenly says, “On the main model we had accuracy about 79%.” It should be 77%, as you calculated in the ‘Model Evaluation Metrics – Exercise’ as well.
The metrics seem to hold on the test dataset as well, so it looks like you have created a reasonably good model for the churn dataset: performance is comparable across the training and test sets.
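If you want to recompute these metrics on the test set yourself, a sketch along the following lines would work, assuming scikit-learn is available and that y_test and y_test_pred hold the true and predicted labels (placeholder values shown):

```python
from sklearn.metrics import confusion_matrix

# Assumed inputs: true labels and predictions at the chosen cut-off
y_test = [0, 1, 0, 1, 0]       # placeholder values
y_test_pred = [0, 1, 1, 1, 0]  # placeholder values

tn, fp, fn, tp = confusion_matrix(y_test, y_test_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)  # true positive rate (recall)
specificity = tn / (tn + fp)  # true negative rate

print(f"Accuracy:    {accuracy:.2f}")
print(f"Sensitivity: {sensitivity:.2f}")
print(f"Specificity: {specificity:.2f}")
```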
You can also take the 0.42 cut-off you got from the precision-recall trade-off curve and try making predictions based on it on your own, as sketched below.
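For example, re-thresholding at 0.42 and checking precision and recall might look like this (again a sketch with assumed, placeholder inputs):

```python
import pandas as pd
from sklearn.metrics import precision_score, recall_score

# Same assumed predicted probabilities and true labels as in the sketches above
y_test_pred_prob = pd.Series([0.12, 0.45, 0.31, 0.78, 0.05])  # placeholder values
y_test = [0, 1, 0, 1, 0]                                      # placeholder values

# Apply the precision-recall cut-off instead of 0.3
y_test_pred_pr = (y_test_pred_prob > 0.42).astype(int)

print("Precision:", precision_score(y_test, y_test_pred_pr))
print("Recall:   ", recall_score(y_test, y_test_pred_pr))
```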