Introduction

Welcome to the session on ‘Modifications in Neural Network’.

In this session

You will learn about other advanced topics related to neural networks and hyperparameter tuning in deep neural networks. Let us hear about this session from Usha in the upcoming video.

As you saw in the video above, you will learn about the following topics in this session:

  1. First, you will learn about loss functions in more depth.
  2. You will then be introduced to the concept of a surrogate function and see how cross-entropy loss works as a surrogate function.
  3. Next, you will learn about the gradient descent algorithm and the concept of critical points, including how to compute them.
  4. You will learn about the limitations of the gradient descent algorithm and how to improve its performance using optimisers such as momentum-based methods, Adam, Adagrad and RMSProp.
  5. You will learn about exploding and vanishing gradients and weight initialisations in neural networks.

You will also learn about different regularisation techniques for improving your neural networks.

People you will hear from in this session

Subject Matter Expert:

Usha Rengaraju

Data Science Consultant

Usha currently works as a data science consultant at Infinite-Sum Modelling Inc. She has more than four years of experience in the field of deep learning and is the chapter lead for the TensorFlow User Group and Google Developers Group, Mysore. She is also the co-organiser of Women in Machine Learning & Data Science, Bengaluru Chapter.
