Code Implementation of Feedforward Neural Network

In the previous sessions, you learnt about building a neural network, its major components, and the algorithms for forward propagation and backpropagation. You also learnt about TensorFlow, a useful library that helps build ANNs. Now, you will apply what you have learnt by using TensorFlow in a real-life application.

Please note that we are using TensorFlow primarily to understand what is happening behind the scenes. In practice, however, you are expected to be comfortable with the high-level Keras API, which you will learn about in the next session.

We will take the example of predicting house prices given the size of the house and the number of rooms available.

Here are the housing data and the Jupyter Notebook for ANN training on the data set for you to explore and experiment with.

In this video, you went through reading the housing data, the log transformation of the response variable and the scaling of the input data, using the code snippet given below. The log transformation tames the typically right-skewed price distribution, and standardising the inputs puts both features on a comparable scale.

import numpy as np
from sklearn.preprocessing import StandardScaler

X = df.copy()

# remove the target column from the features
Y = X.pop('price')

# perform a scaler transform of the input data
scaler = StandardScaler()
X = scaler.fit_transform(X)

# perform a log transformation of the target variable
Y = np.log(Y)
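One practical consequence of this design choice: since the network is trained on log(price), its predictions are on the log scale, and you need np.exp to recover actual prices. For a prediction y_pred on the log scale:

price_pred = np.exp(y_pred)  # invert the log transformation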

Next, you will learn how to do a forward pass with a single input observation from the data set using a single neuron.

You saw what happens when a single input is passed through a single neuron using a random initialisation of the weights and bias. The following code snippet shows the computation of the cumulative input z and the corresponding output h after applying the sigmoid activation function to the cumulative input.

# cumulative input
z = b + w1*x1 + w2*x2

# output after the sigmoid activation
h = tf.math.sigmoid(z)
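For reference, here is a minimal self-contained sketch of this step. The names w1, w2 and b and the use of tf.random.normal for initialisation are assumptions for illustration; X is the scaled feature matrix from the preprocessing step above.

import tensorflow as tf

# first observation from the scaled inputs (two features: size, rooms)
x1, x2 = X[0].astype('float32')

# random initialisation of the weights and bias (illustrative)
w1 = tf.Variable(tf.random.normal([]))
w2 = tf.Variable(tf.random.normal([]))
b = tf.Variable(tf.random.normal([]))

# cumulative input and sigmoid output
z = b + w1*x1 + w2*x2
h = tf.math.sigmoid(z)
print(h.numpy())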

Now, you will learn how the code changes when we process the single input data point using multiple neurons instead of a single one.

As you can see, each neuron processes the input with its own set of weights, and the outputs computed by the neurons in the hidden layer are consumed as inputs by the output neuron. Once we obtain the output, we compute the loss using the RSS method.

## forward pass: hidden-layer neuron 1
z1 = b1 + w11*x1 + w12*x2
h1 = tf.math.sigmoid(z1)

## forward pass: hidden-layer neuron 2
z2 = b2 + w21*x1 + w22*x2
h2 = tf.math.sigmoid(z2)

## forward pass: output neuron (its own bias and weights, linear activation)
z3 = b3 + w31*h1 + w32*h2
h3 = z3

y_true = Y[0]
y_pred = h3.numpy()

# squared-error loss for this observation
L = 0.5*(y_true - y_pred)**2
print("The error is", L)

Next, you will learn how you can represent the forward pass in the form of matrix multiplication.

We will now repeat the previous task using vectors and matrices for the weight and bias terms. First, the weights, biases, inputs and outputs will be represented as matrices.

Second, the forward pass operations are done using the matrix representations as explained in the following video.

The code for the matrix representation explained in the video is shown below:

## layer 1 weights
W1 = tf.Variable([[0.2, 0.15], [0.5, 0.6]], dtype=tf.float32)

## layer 1 bias
B1 = tf.Variable([[0.1], [0.25]], dtype=tf.float32)

## cast the scaled inputs to float32 so that they match the variable dtype
X = X.astype('float32')

## forward pass, layer 1
Z1 = tf.matmul(W1, tf.transpose(X)) + B1
H1 = tf.math.sigmoid(Z1)
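To complete the forward pass in the same matrix notation, a minimal sketch of the output layer and the loss is given below. The names W2 and B2 and their illustrative values are assumptions, not values from the video.

## layer 2 (output) weights and bias, illustrative values
W2 = tf.Variable([[0.3, 0.45]], dtype=tf.float32)
B2 = tf.Variable([[0.2]], dtype=tf.float32)

## forward pass, layer 2 (linear output for regression)
Z2 = tf.matmul(W2, H1) + B2

# squared-error loss for the first observation
y_pred = Z2.numpy()[0, 0]
L = 0.5*(Y[0] - y_pred)**2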

You have now seen how the data and the forward-pass operations of a neural network can be represented using matrices and vectors. You also learnt how to evaluate the loss for a given data point. So far, however, we manually initialised the weights and biases. In the next video, you will learn how to automate the random initialisation process.
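As a preview, a minimal sketch of such automated initialisation is shown below. The shapes match the two-neuron hidden layer and single output neuron used above; drawing from tf.random.normal with a small standard deviation and zero-initialising the biases are illustrative assumptions.

# random initialisation of the hidden layer (2 neurons, 2 inputs each)
W1 = tf.Variable(tf.random.normal([2, 2], stddev=0.1))
B1 = tf.Variable(tf.zeros([2, 1]))

# random initialisation of the output layer (1 neuron, 2 inputs)
W2 = tf.Variable(tf.random.normal([1, 2], stddev=0.1))
B2 = tf.Variable(tf.zeros([1, 1]))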

With this, we conclude the segment on forward propagation using TensorFlow. You learnt how the feedforward neural network takes an input, computes the output and measures the error between the prediction and the actual value. Now, you will learn how the neural network performs backpropagation using TensorFlow.
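As a pointer to what comes next: TensorFlow can compute the gradients that backpropagation needs automatically, by recording the forward pass on a tf.GradientTape. The sketch below reuses the matrix forward pass from above on a single observation; the variable names are assumptions for illustration.

# one observation as a (2, 1) column vector
x = tf.constant(X[:1].T, dtype=tf.float32)
y_true = tf.constant(Y[0], dtype=tf.float32)

with tf.GradientTape() as tape:
    Z1 = tf.matmul(W1, x) + B1
    H1 = tf.math.sigmoid(Z1)
    y_pred = tf.matmul(W2, H1) + B2
    loss = 0.5*(y_true - y_pred)**2

# gradients of the loss with respect to every parameter
grads = tape.gradient(loss, [W1, B1, W2, B2])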
