
CIFAR-10 Classification with Python – II

In the first experiment (using dropouts after both the convolutional and FC layers), we got training and validation accuracies of about 84% and 79% respectively. Let’s now run the three experiments described below and compare their performance.
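
To make the baseline concrete, here is a minimal Keras sketch of an Experiment-I style network. The filter counts, dense-layer width and dropout rates are illustrative assumptions, not the exact values used in the original run.

```python
# Experiment I (sketch): dropout after the conv block and after the FC layer.
# Filter counts, dense width and dropout rates are assumed for illustration.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    Dropout(0.25),                      # dropout after the convolutional block

    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),                       # dropout in the FC layer
    Dense(10, activation='softmax'),    # 10 CIFAR-10 classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```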

Experiment – II: Remove the dropouts after the convolutional layers (but retain them in the FC layers). Also, use batch normalization after every convolutional layer.
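
A sketch of how this changes the convolutional block is shown below (again with assumed filter counts): each conv layer is followed by a BatchNormalization layer, and the dropout after the block is removed, so dropout remains only in the FC head.

```python
# Experiment II (sketch): BN after every conv layer, no dropout in the conv block.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPooling2D

conv_block = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
    BatchNormalization(),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPooling2D((2, 2)),               # no Dropout here; it stays only in the FC head
])
```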

Recall that batch normalization (BN) normalizes the outputs of each layer using the mean and standard deviation of the current mini-batch. You may quickly revisit the lectures on BN here.
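
The toy NumPy snippet below illustrates the computation for a single unit over one mini-batch; the values and the gamma/beta settings (BN’s learnable scale and shift) are arbitrary and only for illustration.

```python
import numpy as np

# What BN computes at training time for one unit over a mini-batch of 4.
x = np.array([2.0, 4.0, 6.0, 8.0])      # activations of one unit over the batch
mu, sigma = x.mean(), x.std()            # batch mean and standard deviation
x_hat = (x - mu) / (sigma + 1e-5)        # normalized: ~zero mean, unit std
gamma, beta = 1.0, 0.0                   # learnable scale and shift (identity here)
print(gamma * x_hat + beta)              # ≈ [-1.34, -0.45,  0.45,  1.34]
```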

Experiment – III: Use batch normalization and dropouts after every convolutional layer. Also, retain the dropouts in the FC layer.
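
The convolutional block then looks like the sketch below, with BN followed by dropout after each conv layer; the filter counts and the 0.25 dropout rate are assumptions for illustration.

```python
# Experiment III (sketch): BN and dropout after every conv layer.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Dropout, MaxPooling2D

conv_block = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
    BatchNormalization(),
    Dropout(0.25),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    Dropout(0.25),
    MaxPooling2D((2, 2)),
])
```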

Experiment – IV: Remove the dropouts after the convolutional layers and use L2 regularization in the FC layer. Retain the dropouts in the FC layer.
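
Relative to Experiment II, only the FC head changes; a sketch is below. The L2 strength of 0.01, the dense width and the head’s input shape are assumptions for illustration.

```python
# Experiment IV (sketch): L2 regularization plus dropout in the FC head;
# no dropout in the conv blocks.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.regularizers import l2

fc_head = Sequential([
    Flatten(input_shape=(8, 8, 64)),     # output shape of the conv blocks (assumed)
    Dense(256, activation='relu', kernel_regularizer=l2(0.01)),
    Dropout(0.5),                        # dropout retained in the FC layer
    Dense(10, activation='softmax'),
])
```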

The results of the experiments done so far are summarized below. Note that ‘use BN’ refers to using BN after the convolutional layers, not after the FC layers.

  • Experiment – I (Use dropouts after conv and FC layers, no BN):
    • Training accuracy = 84%, validation accuracy = 79%
  • Experiment – II (Remove dropouts from conv layers, retain dropouts in FC, use BN):
    • Training accuracy = 98%, validation accuracy = 79%
  • Experiment – III (Use dropouts after conv and FC layers, use BN):
    • Training accuracy = 89%, validation accuracy = 82%
  • Experiment – IV (Remove dropouts from conv layers, use L2 + dropouts in FC, use BN):
    • Training accuracy = 94%, validation accuracy = 76%
