The developers of TensorFlow have created the TensorFlow Playground, a browser-based tool for visualising how a neural network's performance changes with its configuration. When you visit the app, observe how the model's performance changes as you vary the data set and the depth of the network, i.e., the number of hidden layers. You can also experiment with the many other options on the page, such as the activation function, the learning rate, regularisation, the input features and the type of data set.
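For reference, the knobs on the playground map directly onto the hyperparameters of a small Keras model. The sketch below is only an illustration of one such configuration (two input features, one hidden layer of four tanh units, L2 regularisation and a learning rate of 0.03); the specific values are assumptions, not part of the playground itself.

```python
import tensorflow as tf

# A minimal sketch of a playground-style configuration (values are illustrative):
# 2 input features, 1 hidden layer with 4 neurons, tanh activation,
# L2 regularisation and a learning rate of 0.03.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="tanh",
                          kernel_regularizer=tf.keras.regularizers.l2(0.001)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.03),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```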
In this video, Jaidev demonstrates the architectures of different neural networks using the TensorFlow Playground.
In this video, Jaidev demonstrated how a single logistic neuron can classify data that is linearly separable, as shown in the image below.
However, to deal with data that is not linearly separable, we need to combine multiple neurons, as shown in this image.
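To illustrate the difference, the hedged sketch below fits both models on a toy data set of concentric circles, generated with scikit-learn's make_circles as a stand-in for the playground's circular data set: a single sigmoid neuron can only draw a linear boundary, while adding a hidden layer lets the network bend the boundary around the inner class.

```python
import tensorflow as tf
from sklearn.datasets import make_circles

# Toy non-linearly separable data: two concentric circles
# (a stand-in for the playground's circular data set).
X, y = make_circles(n_samples=500, noise=0.05, factor=0.5, random_state=0)

# A single logistic neuron: equivalent to logistic regression, linear decision boundary.
single_neuron = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The same output neuron preceded by one hidden layer: can learn a non-linear boundary.
with_hidden_layer = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

for name, model in [("single neuron", single_neuron),
                    ("with hidden layer", with_hidden_layer)]:
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=100, verbose=0)
    _, acc = model.evaluate(X, y, verbose=0)
    print(f"{name}: training accuracy = {acc:.2f}")
```

On this data, the single neuron typically stays close to chance-level accuracy, whereas the model with a hidden layer separates the two circles almost perfectly.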
In the next video, Jaidev will demonstrate this network on more complicated data and show how adding neurons and layers affects the decision boundaries.
In this video, Jaidev considered some complex data sets that would be very difficult to handle using simple machine learning techniques. As you saw, the neural networks were able to classify most of the points correctly.
Finding the right architecture is extremely difficult, but with the help of neural networks, we can classify almost any kind of non-linear data.
In a neural network, the flow of information occurs in the following two ways, sketched in code after this list:
- Feedforward Propagation (Forward Pass)
- Backpropagation (Backward Pass)
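As a preview, the hedged sketch below shows the two passes explicitly in TensorFlow: the forward pass feeds the inputs through the layers to produce predictions and a loss value, and the backward pass (here via tf.GradientTape) flows the loss gradient back to every weight so the optimizer can update them. The toy inputs and layer sizes are purely illustrative.

```python
import tensorflow as tf

# Illustrative toy batch: 4 examples with 2 features each, and binary labels.
x = tf.constant([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = tf.constant([[1.0], [1.0], [0.0], [0.0]])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

# Forward pass: inputs flow layer by layer to produce predictions and a loss value.
with tf.GradientTape() as tape:
    predictions = model(x)
    loss = loss_fn(y, predictions)

# Backward pass: the loss gradient flows back through the layers to every weight,
# and the optimizer uses these gradients to update the weights.
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
```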
In the next segment, you will learn about feedforward propagation.