Introduction
So far, the discussion has focused on the principles of model selection. Once you have chosen the class of models for a given problem, you need to build, tune, and evaluate the model to arrive at one that performs well. In this module, we will discuss in detail effective hyperparameter tuning and model evaluation strategies for the different types of models you have learnt so far.
In this session
The learning objectives of this session are to understand:
- Different utilities of cross-validation
- Hyperparameter tuning techniques (Grid search CV vs. Randomized search CV)
- Model evaluation
- Different cross-validation schemes
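To preview the difference between grid search and randomized search, here is a minimal sketch using only the standard library. The hyperparameter names and the `cv_score` function are illustrative stand-ins, not tied to any real model: in practice `cv_score` would be a k-fold cross-validated metric, as covered later in this session.

```python
import itertools
import random

# Hypothetical hyperparameter space (names are illustrative only)
param_grid = {
    "max_depth": [2, 4, 8],
    "learning_rate": [0.01, 0.1, 0.3],
}

def cv_score(params):
    # Stand-in for a k-fold cross-validated score of a model
    # trained with `params`; peaks at max_depth=4, learning_rate=0.1
    return -abs(params["max_depth"] - 4) - abs(params["learning_rate"] - 0.1)

# Grid search: evaluate every combination (3 x 3 = 9 fits here)
grid = [dict(zip(param_grid, combo))
        for combo in itertools.product(*param_grid.values())]
best_grid = max(grid, key=cv_score)

# Randomized search: evaluate a fixed budget of random samples (5 fits here)
random.seed(0)
samples = [{k: random.choice(v) for k, v in param_grid.items()}
           for _ in range(5)]
best_random = max(samples, key=cv_score)

print(best_grid)    # grid search always finds the best point on the grid
print(best_random)  # randomized search may miss it, but costs fewer fits
```

The trade-off this illustrates: grid search is exhaustive but its cost grows multiplicatively with each added hyperparameter, while randomized search fixes the budget up front at the risk of missing the best combination.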
Guidelines for in-module questions
The in-video and in-content questions for this module are not graded. The graded questions appear in a separate segment at the end of the session and follow this marking scheme:
| | First Attempt Marks | Second Attempt Marks |
|---|---|---|
| Question with 2 attempts | 10 | 5 |
| Question with 1 attempt | 10 | 0 |