
CIFAR-10 Classification with Python – I

In the next few segments, you will train various CNN architectures on the CIFAR-10 dataset. It contains 60,000 RGB images of size (32, 32, 3) spread evenly across 10 classes: aeroplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. A similar dataset is CIFAR-100, which has 100 classes. You do not need to download the dataset separately; it can be downloaded directly through the Keras API, as shown in the sketch below.
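
The following is a minimal sketch, not the course notebook, of loading CIFAR-10 through the Keras API; the pixel scaling and one-hot encoding shown are typical preprocessing assumptions rather than steps prescribed by this lesson.

```python
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

# Downloads the dataset on first use and caches it locally
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

print(x_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000, 1)
print(x_test.shape, y_test.shape)    # (10000, 32, 32, 3) (10000, 1)

# Scale pixel values to [0, 1] and one-hot encode the 10 class labels
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
```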

Getting Started with Nimblebox GPU

We recommend that you use a GPU to run the CIFAR-10 notebooks (running each notebook locally will take 2-3 hours, whereas on a GPU it will take 8-10 minutes). You can download the instruction manual below to start the machine.

Note: For running the CNN notebooks, start the ‘CNN Assignment’ machine and not ‘RNN Assignments’ as mentioned in the instruction manual. Select GPU as the machine type. The notebook for this tutorial is already present in the ‘CNN Assignment’ machine.

An alternative to Nimblebox – Google Colab

In case Nimblebox is down, you can use Google Colab as an alternative. Google Colab is a free cloud service that provides free GPU access. You can learn how to get started with Google Colab here. Note that we recommend using Nimblebox as the primary cloud provider and Google Colab only when Nimblebox is down.

CIFAR-10 Experiments

In the coming few lectures, you will experiment with some hyperparameters and architectures and draw insights from the results. Some of the hyperparameters and architectural choices we will play with are listed below (a code sketch after the list shows where each one plugs into a Keras model):

  • Adding and removing dropouts in convolutional layers

  • Batch normalisation (BN)

  • L2 regularisation

  • Increasing the number of convolution layers

  • Increasing the number of filters in certain layers
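
As a hedged illustration (not the course notebook), the single Keras convolutional block below shows where each of these knobs plugs in: the filter count, an L2 kernel regulariser, batch normalisation and dropout. The specific values (32 filters, a 1e-4 regularisation factor, a 0.25 dropout rate) are assumptions for the sketch, not the course's settings.

```python
from tensorflow.keras import layers, regularizers

conv_block = [
    layers.Conv2D(32, (3, 3), padding="same",                  # 32 = number of filters
                  kernel_regularizer=regularizers.l2(1e-4)),   # L2 regularisation
    layers.BatchNormalization(),                               # batch normalisation (BN)
    layers.Activation("relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),                                      # dropout in a convolutional block
]
```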

Experiment – I: Using dropouts after conv and FC layers

In the first experiment, we will use dropouts both after the convolutional and fully connected layers. 
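
Below is a minimal sketch, under assumed settings, of a CNN that applies dropout both after the convolutional blocks and after the fully connected layer, in the spirit of Experiment I. The exact architecture, dropout rates and training settings used in the course notebook may differ.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", padding="same",
                  input_shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),          # dropout after the first convolutional block

    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),          # dropout after the second convolutional block

    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),           # dropout after the fully connected layer
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Trained on the preprocessed CIFAR-10 arrays from the loading sketch above,
# e.g. model.fit(x_train, y_train, batch_size=128, epochs=25, validation_split=0.1)
```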

Download – Notebook

You can download the notebook below.

The results of the experiments are as follows:

Experiment – I: Dropouts After Conv and FC layers

  • Training accuracy = 84%, validation accuracy = 79%

In the next few segments, we will conduct some more experiments (without dropouts, using batch normalisation, adding more convolutional layers, etc.) and compare the results.
