
Artificial Neuron

Now that you have a basic understanding of perceptrons, let’s study the design of Artificial Neural Networks (ANNs). 

Neural networks are a collection of artificial neurons arranged in a particular structure. In this segment, you will understand how a single artificial neuron works, i.e., how it converts inputs into outputs. You will also understand the topology or structure of large neural networks. Let’s get started by understanding the basic structure of an artificial neuron.

As you learnt in the video above, a neuron is quite similar to a perceptron. However, in perceptrons, the commonly used activation (output) function is the step function, whereas in ANNs, the activation functions are non-linear.

Note: You will learn about these activation functions in the upcoming segments.

Take a look at the structure of an artificial neuron in the image given below.

Here, a’s represent the inputs, w’s represent the weights associated with the inputs, and b represents the bias of the neuron.
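The computation described above can be sketched in a few lines of code. This is a minimal illustration, not a full implementation: the sigmoid activation and the example input, weight and bias values are assumptions chosen purely for demonstration.

```python
import numpy as np

def sigmoid(z):
    """A commonly used non-linear activation function (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(a, w, b):
    """Output of a single artificial neuron:
    activation(weighted sum of inputs + bias)."""
    z = np.dot(w, a) + b      # cumulative input: w1*a1 + w2*a2 + ... + b
    return sigmoid(z)         # pass through the non-linear activation

# Hypothetical example values for a neuron with three inputs
a = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.4, 0.3, 0.1])    # weights
b = 0.2                          # bias
print(neuron_output(a, w, b))
```

Note that replacing `sigmoid` with a step function would recover the perceptron described earlier; the non-linear activation is what distinguishes the artificial neuron.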

In the next video, you will understand how large neural networks are designed using multiple individual neurons.

As you learned in the video above, multiple artificial neurons in a neural network are arranged in different layers. The first layer is known as the input layer, and the last layer is called the output layer. The layers in between these two are the hidden layers.

The number of neurons in the input layer is equal to the number of attributes in the data set, and the number of neurons in the output layer is determined by the number of classes of the target variable (for a classification problem).

For a regression problem, the number of neurons in the output layer would be 1 (a numeric variable). Take a look at the image given below to understand the topology of neural networks in the case of classification and regression problems. 

Note that the number of hidden layers, the number of neurons in each hidden layer and the activation functions used in the neural network change according to the problem, and these details determine the topology or structure of the neural network. We will discuss this in the subsequent segments.
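To make the topology concrete, here is a minimal sketch of a forward pass through a small feed-forward network. The layer sizes (4 input attributes, one hidden layer of 5 neurons, 3 output classes), the sigmoid activation and the random weights are all illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed topology: 4 attributes -> 5 hidden neurons -> 3 classes
layer_sizes = [4, 5, 3]

# One weight matrix and one bias vector per layer transition
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.standard_normal(n_out) for n_out in layer_sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Propagate one data point through every layer of the network."""
    for w, b in zip(weights, biases):
        x = sigmoid(w @ x + b)
    return x

x = rng.standard_normal(4)   # one data point with 4 attributes
y = forward(x)
print(y.shape)               # one output value per class
```

For a regression problem, the same sketch would simply end in a single output neuron, i.e., `layer_sizes = [4, 5, 1]`.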


Let’s watch the next video to understand what it means to specify a neural network completely, i.e., everything we need to specify in order to describe a neural network fully.

So far, you have understood the basic structure of artificial neural networks. To summarise, there are six main elements that must be specified for any neural network. They are as follows:

  1. Input layer
  2. Output layer
  3. Hidden layers
  4. Network topology or structure 
  5. Weights and biases
  6. Activation functions
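The six elements listed above can be gathered into a single, plain-data description of a network. The particular sizes and activation names below are hypothetical placeholders; how each choice is actually made is covered in the upcoming segments.

```python
# A hypothetical specification of a small classification network,
# covering the six elements described above. All values are
# illustrative assumptions.
network_spec = {
    "input_layer": 4,                  # one neuron per attribute
    "hidden_layers": [5, 5],           # two hidden layers of 5 neurons each
    "output_layer": 3,                 # one neuron per class
    "topology": "fully connected, feed-forward",
    "weights_and_biases": "learned during training",
    "activation_functions": {"hidden": "sigmoid", "output": "softmax"},
}

print(network_spec["hidden_layers"])
```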

You might have some questions, such as ‘How do we decide the number of neurons in a layer?’ or ‘How are weights and biases determined?’. You will be able to answer these questions in the next few segments, wherein you will learn about each of these specifications in depth.
