Let’s now proceed to model building. Recall that the first step in model building is to check the correlations between features to get an idea of how the independent variables are related to one another. In general, the feature selection process here is closely analogous to that in linear regression.

Looking at the correlations certainly helped, as you identified several features beforehand that wouldn’t have been useful for model building. Recall that Rahim dropped the following features after looking at the correlations in the heatmap.

- MultipleLines_No

- OnlineSecurity_No

- OnlineBackup_No

- DeviceProtection_No

- TechSupport_No

- StreamingTV_No

- StreamingMovies_No

If you look at the correlations between these dummy variables and their complementary dummy variables, i.e. **‘MultipleLines_No’** with **‘MultipleLines_Yes’** or **‘OnlineSecurity_No’** with **‘OnlineSecurity_Yes’**, you’ll find that they’re highly correlated. Have a look at the heatmap below.

If you check the highlighted portion, you’ll see that there are high correlations among the pairs of dummy variables created from the same column. For example, **‘StreamingTV_No’** has a correlation of **-0.64** with **‘StreamingTV_Yes’**. It is therefore better to drop one variable from each pair, as the second one won’t add much value to the model. Which variable of each pair you drop is entirely up to you; we’ve chosen to drop all the ‘No’ dummies, because the ‘Yes’ dummies are generally more interpretable and easier to work with.
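The drop described above can be sketched in pandas. This is a minimal, self-contained illustration: the `df` below is a synthetic stand-in for the actual churn DataFrame, with one raw column dummy-encoded to show the pattern of correlated pairs.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the churn data: one categorical column
rng = np.random.default_rng(0)
df = pd.DataFrame(
    {"StreamingTV": rng.choice(["Yes", "No", "No internet service"], size=500)}
)
dummies = pd.get_dummies(df["StreamingTV"], prefix="StreamingTV", dtype=int)

# The 'Yes' and 'No' dummies of the same source column are strongly
# negatively correlated, which is why one of each pair is redundant
corr = dummies.corr()
print(corr.loc["StreamingTV_Yes", "StreamingTV_No"])

# Drop every '_No' dummy, keeping the more interpretable '_Yes' columns
to_drop = [c for c in dummies.columns if c.endswith("_No")]
reduced = dummies.drop(columns=to_drop)
print(reduced.columns.tolist())
```

The same list-comprehension pattern extends to all seven ‘_No’ columns listed above when applied to the full feature DataFrame.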

Now that you have completed all the pre-processing steps, inspected the correlation values and eliminated a few variables, it’s time to build your first model.

So you finally built your first multivariate logistic regression model using all the features present in the dataset. This is the summary output for the different variables that you got:

In this table, our key focus is on the different **coefficients** and their respective **p-values**. As you can see, there are many variables whose p-values are high, implying that those variables are statistically insignificant. So we need to eliminate some of the variables in order to build a better model.

We’ll first eliminate a few features using Recursive Feature Elimination (RFE), and once we have reached a small set of variables to work with, we can then use manual feature elimination (i.e. manually eliminating features based on observing the p-values and VIFs).
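The two-stage elimination can be sketched with scikit-learn’s `RFE` and statsmodels’ VIF helper. Again, the data here is a synthetic stand-in, and the feature names and counts are illustrative, not taken from the course dataset.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in features and target
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 10)),
                 columns=[f"feat_{i}" for i in range(10)])
p = 1 / (1 + np.exp(-(1.5 * X["feat_0"] - 1.0 * X["feat_1"])))
y = rng.binomial(1, p)

# Stage 1: RFE keeps the strongest features according to a logistic model
rfe = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=5)
rfe.fit(X, y)
kept = X.columns[rfe.support_]

# Stage 2: compute VIFs on the surviving features; manually drop any
# feature with a very high VIF (or a high p-value in the model summary)
X_kept = X[kept]
vif = pd.DataFrame({
    "feature": kept,
    "VIF": [variance_inflation_factor(X_kept.values, i)
            for i in range(len(kept))],
})
print(vif)
```

The choice of `n_features_to_select=5` is arbitrary here; in practice you would pick a small working set (the course uses a larger number) and then iterate manually on p-values and VIFs.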

## Coming Up

In the next segment, you’ll use RFE to build another model and also look at some metrics with which you can check the goodness of fit.