
Session Summary

In this session:

  • We discussed the need for regularisation, which helps a model perform well on unseen data while still capturing the necessary underlying patterns in the training data. Regularisation achieves this by adding a penalty term to the cost function used by OLS.
  • We discussed two such methods, Ridge and Lasso regression. Both accept a small increase in bias in exchange for a significant decrease in variance, which they achieve by shrinking the model coefficients towards 0.
  • You learnt that in Lasso, some of these coefficients become exactly 0, resulting in feature selection and, hence, easier interpretation, particularly when the number of coefficients is very large (see the sketch after this list).
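
The points above can be seen in a minimal sketch, assuming scikit-learn is available: it fits OLS, Ridge (L2 penalty, alpha · Σβ²) and Lasso (L1 penalty, alpha · Σ|β|) on synthetic data. The dataset, alpha values and variable names are illustrative choices, not from the session.

# A minimal sketch of Ridge and Lasso shrinkage using scikit-learn.
# The synthetic data and alpha values below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Synthetic data: 100 samples, 20 features, only 5 of which are informative
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=10.0, random_state=42)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)   # L2 penalty: alpha * sum(beta^2)
lasso = Lasso(alpha=1.0).fit(X, y)    # L1 penalty: alpha * sum(|beta|)

# Ridge shrinks all coefficients towards 0; Lasso sets some exactly to 0,
# which is the feature selection behaviour discussed above.
print("OLS   largest |coef|:", np.abs(ols.coef_).max().round(2))
print("Ridge largest |coef|:", np.abs(ridge.coef_).max().round(2))
print("Lasso zero coefficients:", int(np.sum(lasso.coef_ == 0)), "of 20")

Increasing alpha strengthens the penalty: Ridge coefficients shrink further towards 0, while Lasso zeroes out more of them.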
