All of the models we have discussed so far have been linear in the parameters (i.e., linear in the betas). For models in which the response variable is related nonlinearly to the parameters, or model coefficients, we use nonlinear regression. Let’s watch the upcoming video to learn more about nonlinear regression.
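To make “linear in the parameters” concrete, here is a minimal sketch (with illustrative, made-up coefficients and data, not values from the module): a quadratic model is nonlinear in the predictor x, yet each beta still enters linearly, so ordinary least squares applies unchanged.

```python
import numpy as np

# y = b0 + b1*x + b2*x**2 is a polynomial in x, but linear in the
# betas: each beta multiplies a fixed column of the design matrix.
# (Coefficients 2.0, 3.0, -0.5 are arbitrary illustrative choices.)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 + 3.0 * x - 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)

# Design matrix with columns [1, x, x^2]; fitting the betas is an
# ordinary linear-regression problem.
X = np.column_stack([np.ones_like(x), x, x**2])
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # close to the true values [2.0, 3.0, -0.5]
```

By contrast, a model such as y = b0 * exp(b1 * x) cannot be written as betas multiplying fixed columns, which is what pushes us toward nonlinear regression.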
We can see clearly that the relationship between the response variable and the model coefficients is nonlinear. Here, we can no longer use the linear regression framework to estimate the model coefficients. Note that nonlinear regression is beyond the scope of this module.
Of the three models we have discussed, polynomial regression and data transformation allow us to stay within the linear regression framework. Even after transforming the predictors, when we fit the model, we still need to check whether it satisfies the assumptions of linear regression so that we can trust the results we get from it.
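As a sketch of the data-transformation idea (with illustrative parameter values assumed for the example, not taken from the module): a model of the form y = a * exp(b * x) is nonlinear in its parameters, but taking logs gives log(y) = log(a) + b * x, which is linear in log(a) and b and can be fitted with ordinary least squares.

```python
import numpy as np

# Simulate data from y = a * exp(b*x) with multiplicative noise.
# (a = 1.5 and b = 0.8 are arbitrary illustrative choices.)
rng = np.random.default_rng(1)
x = np.linspace(0.1, 5, 80)
y = 1.5 * np.exp(0.8 * x) * np.exp(rng.normal(scale=0.05, size=x.size))

# Log-transform the response: log(y) = log(a) + b*x is linear in the
# parameters, so an ordinary least-squares fit recovers them.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
log_a, b = coef
print(np.exp(log_a), b)  # roughly 1.5 and 0.8
```

As the section notes, after fitting we would still inspect the residuals of the transformed model to confirm the linear-regression assumptions hold on the log scale.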
It may happen that the model you build and the one built by another person are different, yet both are equally valid. What matters is that your model is not overly complex and satisfies the assumptions of linear regression; only then can you draw correct inferences from it. Please remember that data science is as much an art as it is a science. In the next segment, we will look at some of the pitfalls of linear regression.