In this segment, you will learn how a single neuron receives its input, processes it and produces the corresponding output. In the video below, we will show you in detail the structure and working of an artificial neuron.
Now that you have seen how inputs are fed into a neuron and how outputs are obtained using activation functions, let’s reiterate the concepts with a short summary.
In the image above, you can see that x1, x2 and x3 are the inputs, and that their weighted sum, along with the bias, is fed into the neuron to produce the output.
To summarise, each input is multiplied by its corresponding weight, and the weighted sum, along with the bias, forms the cumulative input that is fed into the neuron. An activation function is then applied to the cumulative input to obtain the output of the neuron. We have seen some activation functions, such as softmax and sigmoid, in the previous segment, and we will explore other types in the next segment. These functions apply non-linearity to the cumulative input, enabling the neural network to identify complex non-linear patterns present in the data set.
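As a quick illustration (a minimal sketch written for this summary, not taken from the course material), the sigmoid and softmax activations mentioned above can be written as follows:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real-valued input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Converts a vector of scores into probabilities that sum to 1.
    # Subtracting max(z) before exponentiating improves numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(sigmoid(0.0))                       # 0.5
print(softmax(np.array([1.0, 2.0, 3.0])))  # probabilities summing to 1
```

Both functions are non-linear, which is what allows a network built from such neurons to model patterns that a purely linear model cannot.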
A detailed representation of how the cumulative input is transformed into the output is given below.
In the image above, z is the cumulative input. You can see how each weight scales its input depending on the weight's magnitude. Note that z is the dot product of the weights and the inputs, plus the bias.
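The computation of z described above can be sketched in a few lines (the weight, input and bias values here are illustrative, not the ones from the image):

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0])   # inputs x1, x2, x3 (illustrative values)
w = np.array([0.8, 0.1, -0.4])   # one weight per input (illustrative values)
b = 0.25                         # bias (illustrative value)

# Cumulative input: dot product of weights and inputs, plus the bias.
z = np.dot(w, x) + b             # z = -0.67 for these values

# Applying the sigmoid activation to z gives the neuron's output.
a = 1.0 / (1.0 + np.exp(-z))
print(z, a)
```

This mirrors the two-step picture in the image: first the linear combination z = w · x + b, then the non-linear activation applied to z.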
In this segment, you saw how a neuron takes an input and performs some operations on it to give the output. The output is obtained through an activation function. In the next segment, we will explore some popular activation functions.