In this session, you learnt about a variety of topics. Let us hear about them from Usha in the upcoming video.
As you saw in the video above, here is a summary of what you learnt in this module:
- First, you learnt about two different losses, i.e., the empirical and the theoretical loss, and you also saw some of the limitations of loss functions.
- Next, you learnt in detail about the cross-entropy loss function for classification problems and how it acts as a good surrogate function (see the first sketch after this list).
- Next, you learnt about the gradient descent algorithm and the concept of critical points (sketched below).
- You solved an example to compute the minima, maxima and saddle points of a given function (a similar worked example follows the list).
- You learnt about momentum-based optimisers: Adam, AdaGrad and RMSprop (an Adam update step is sketched below).
- You also learnt about the problem of vanishing and exploding gradients (illustrated numerically below).
- You learnt about initializations and their three main types (sketched below):
    - Zero initialization
    - Random initialization
    - He initialization (He et al.)
- Lastly, you learnt about the following regularization techniques (sketched after this list):
    - L1 and L2 norms
    - Dropout
    - Batch normalization
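To make the cross-entropy idea concrete, here is a minimal NumPy sketch; the function name and the example labels and probabilities are illustrative choices, not values from the session.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Average cross-entropy between one-hot labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)            # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Two samples, three classes: a confident-correct and an uncertain prediction
y_true = np.array([[1, 0, 0], [0, 1, 0]])
y_pred = np.array([[0.90, 0.05, 0.05], [0.40, 0.50, 0.10]])
print(cross_entropy(y_true, y_pred))                    # lower loss for better predictions
```

The loss shrinks as the predicted probability of the true class approaches 1, which is what makes it a smooth, differentiable surrogate for classification accuracy.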
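Gradient descent itself fits in a few lines. A minimal sketch on the one-dimensional function f(w) = (w - 3)^2, an illustrative choice rather than the course example:

```python
# Gradient descent on f(w) = (w - 3)^2, whose minimum lies at w = 3
grad = lambda w: 2 * (w - 3)      # derivative of f

w, lr = 0.0, 0.1                  # starting point and learning rate
for _ in range(100):
    w -= lr * grad(w)             # step against the gradient
print(w)                          # converges towards 3.0
```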
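For critical points, the second-derivative (Hessian) test can be automated with SymPy. The surface f(x, y) = x^3 - 3x + y^2 below is an assumed example, not necessarily the equation solved in the session:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x + y**2                                   # illustrative surface

# Critical points: where both partial derivatives vanish
points = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
H = sp.hessian(f, (x, y))
for p in points:
    eigs = list(H.subs(p).eigenvals())                  # eigenvalues of the Hessian
    if all(e > 0 for e in eigs):
        kind = 'local minimum'
    elif all(e < 0 for e in eigs):
        kind = 'local maximum'
    else:
        kind = 'saddle point'
    print(p, kind)                                      # (1, 0): minimum, (-1, 0): saddle
```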
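The optimisers differ only in how they turn a raw gradient into an update. Here is one Adam step as a NumPy sketch, using the commonly quoted default hyperparameters; the helper name adam_step is ours, not a library function:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: momentum on the gradient plus RMSprop-style scaling."""
    m = b1 * m + (1 - b1) * g                # first moment (momentum)
    v = b2 * v + (1 - b2) * g**2             # second moment (squared gradients)
    m_hat = m / (1 - b1**t)                  # bias correction for early steps
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimise f(w) = w^2, whose gradient is 2w
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.05)
print(w)                                     # approaches 0
```

Dropping the first moment gives an RMSprop-like update, and dropping the squared-gradient scaling gives plain momentum.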
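Why gradients vanish can be seen with a single number. Backpropagation through a stack of sigmoid layers multiplies in one sigmoid derivative (at most 0.25) per layer, so the signal shrinks geometrically with depth; this small demo assumes sigmoid activations and a typical pre-activation of 0.5:

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

z = 0.5
d = sigmoid(z) * (1 - sigmoid(z))        # sigmoid derivative at z, about 0.235
for depth in (1, 5, 10, 20):
    print(depth, d ** depth)             # ~0.235, 7e-4, 5e-7, 3e-13
```

When the per-layer factors exceed 1 instead (e.g. from large weights), the same product grows, giving exploding gradients.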
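The three initialization types from the list, sketched with NumPy; the layer sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 128

w_zero = np.zeros((fan_in, fan_out))               # zero init: all neurons identical, symmetry never breaks
w_rand = rng.normal(0.0, 0.01, (fan_in, fan_out))  # small random values break symmetry
w_he = rng.normal(0.0, np.sqrt(2 / fan_in),
                  (fan_in, fan_out))               # He et al.: variance scaled for ReLU layers

print(w_rand.std(), w_he.std())                    # He keeps activation variance stable in deep ReLU nets
```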
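Finally, the regularization techniques as a NumPy sketch (batch normalization is omitted for brevity); the helper names and penalty strengths are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def l1_l2_penalty(weights, l1=1e-4, l2=1e-3):
    """L1 and L2 norm penalties, added to the data loss to discourage large weights."""
    return sum(l1 * np.sum(np.abs(w)) + l2 * np.sum(w**2) for w in weights)

def dropout(a, p=0.5, training=True):
    """Inverted dropout: zero each activation with probability p, rescale survivors."""
    if not training:
        return a                          # dropout is disabled at inference time
    mask = rng.random(a.shape) > p
    return a * mask / (1 - p)             # rescaling keeps the expected activation unchanged

a = np.ones((2, 4))
print(dropout(a))                         # roughly half the entries zeroed, survivors scaled by 2
```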
With this, you have covered almost all the concepts required to move forward and learn about Convolutional Neural Networks and Recurrent Neural Networks.