
Code Implementation of Backpropagation

In the previous segment, you learnt how to implement forward propagation using TensorFlow. In this segment, you will learn how backpropagation is implemented using TensorFlow.
The code is present in the notebook file attached to the previous segment.

Before implementing backpropagation using TensorFlow, let's understand how TensorFlow performs some basic tasks that will be useful later for the implementation.

First, let’s understand how TensorFlow can help in calculating the gradients of any polynomial expression.

As mentioned in the video, we can find the gradient of a function and perform gradient descent in TensorFlow with ease. We used the tf.GradientTape() function as shown below:

with tf.GradientTape() as tape:
    y = f(x)
grad = tape.gradient(y, x)  ## dy/dx
x.assign_sub(lr * grad)

This is in correspondence with the formula given below:

$W_{\text{new}} = W_{\text{old}} - \alpha \frac{\partial L}{\partial w}$
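
For instance, here is a minimal sketch of a single update step; the function f(x) = x**2, the starting value of 3.0 and the learning rate of 0.1 are illustrative choices, not taken from the notebook:

import tensorflow as tf

x = tf.Variable(3.0)                # illustrative starting point
lr = 0.1                            # illustrative learning rate

with tf.GradientTape() as tape:
    y = x ** 2                      # y = f(x)
grad = tape.gradient(y, x)          # dy/dx = 2x = 6.0
x.assign_sub(lr * grad)             # x <- x - lr * dy/dx
print(x.numpy())                    # 3.0 - 0.1 * 6.0 = 2.4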

Now, let’s write the complete loop for gradient descent.

You have seen that we can take the previous code snippet and place it inside a loop so that repeated iterations drive the function towards its minimum. Now, let's understand how we can use the ideas that we have explored for gradients and gradient descent to implement backpropagation using TensorFlow on the housing price prediction example.
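
A rough sketch of such a loop is given below; the function (x - 4)**2, the learning rate and the number of iterations are assumptions made purely for illustration:

import tensorflow as tf

x = tf.Variable(0.0)                # illustrative starting point
lr = 0.1                            # illustrative learning rate

for step in range(50):              # illustrative number of iterations
    with tf.GradientTape() as tape:
        y = (x - 4.0) ** 2          # function whose minimum we want
    grad = tape.gradient(y, x)      # dy/dx at the current value of x
    x.assign_sub(lr * grad)         # gradient-descent update

print(x.numpy())                    # ends up close to the minimum at x = 4

Each pass of the loop recomputes the gradient at the current value of x and moves x a small step in the direction that reduces y.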

A summary of the steps implemented for backpropagation, taking the house pricing data set as an example, is given below; a sketch that puts these steps together follows the list.

1) Implement the forward pass:
y_pred = forward_prop(x, w1, b1, w2, b2)

2) Calculate the loss:
loss = 0.5*(y - y_pred)**2

3) Using the gradient tape functionality, calculate the gradients with respect to each of the parameters:
gw1, gb1, gw2, gb2 = tape.gradient(loss, [w1, b1, w2, b2])

4) Update the weights and biases from the computed gradients using the assign_sub function:
lr = 0.01
w1.assign_sub(lr*gw1)
w2.assign_sub(lr*gw2)
b1.assign_sub(lr*gb1)
b2.assign_sub(lr*gb2)
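
A rough, self-contained sketch that puts these four steps together is given below. The forward_prop shown here assumes a single hidden layer with a sigmoid activation, and the feature values, target and weight shapes are illustrative; the version in the attached notebook may differ in these details.

import tensorflow as tf

def forward_prop(x, w1, b1, w2, b2):
    h = tf.math.sigmoid(tf.matmul(x, w1) + b1)   # hidden-layer output
    return tf.matmul(h, w2) + b2                 # predicted price

x = tf.constant([[0.5, 0.3]])                    # one illustrative row of features
y = tf.constant([[1.0]])                         # illustrative target price for that row

w1 = tf.Variable(tf.random.normal([2, 3]))       # illustrative weight shapes
b1 = tf.Variable(tf.zeros([3]))
w2 = tf.Variable(tf.random.normal([3, 1]))
b2 = tf.Variable(tf.zeros([1]))

lr = 0.01
with tf.GradientTape() as tape:
    y_pred = forward_prop(x, w1, b1, w2, b2)     # step 1: forward pass
    loss = 0.5 * (y - y_pred) ** 2               # step 2: loss

# step 3: gradients with respect to each parameter
gw1, gb1, gw2, gb2 = tape.gradient(loss, [w1, b1, w2, b2])

# step 4: update the weights and biases
w1.assign_sub(lr * gw1)
b1.assign_sub(lr * gb1)
w2.assign_sub(lr * gw2)
b2.assign_sub(lr * gb2)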

Finally, we refactor all the code we have written so far into a single function that serves as the training loop through which the neural network learns.

We defined a training loop that takes one row of data along with all the weights and biases as inputs, implements the forward propagation step to get the predicted output and then performs gradient descent to obtain the updated weights and biases. The forward propagation and backpropagation steps are repeated till you get satisfactory predictions.
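
A rough sketch of such a training loop is shown below. It reuses the forward_prop function and the weights and biases from the previous sketch; the names train, n_epochs, X and Y are illustrative and not taken from the notebook.

import tensorflow as tf

def train(X, Y, w1, b1, w2, b2, lr=0.01, n_epochs=100):
    for epoch in range(n_epochs):
        for i in range(X.shape[0]):
            x_row = X[i:i + 1]                               # one row of data, kept 2-D
            y_row = Y[i:i + 1]
            with tf.GradientTape() as tape:
                y_pred = forward_prop(x_row, w1, b1, w2, b2) # forward propagation
                loss = 0.5 * (y_row - y_pred) ** 2           # loss for this row
            grads = tape.gradient(loss, [w1, b1, w2, b2])
            for var, grad in zip([w1, b1, w2, b2], grads):
                var.assign_sub(lr * grad)                    # backpropagation update
    return w1, b1, w2, b2

# Illustrative call with a tiny two-row dataset:
X = tf.constant([[0.5, 0.3], [0.1, 0.8]])
Y = tf.constant([[1.0], [0.5]])
train(X, Y, w1, b1, w2, b2)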

This completes the backpropagation algorithm that is used to train a neural network.
