
Classification of Flowers Dataset

serispoorthi
  1. Use the flowers dataset:

    1. https://www.kaggle.com/alxmamaev/flowers-recognition

  2. The main goal is to improve the average accuracy across the five flower classes.

  3. Experiment with various hyperparameters.

  4. Network topology:

  • Number of neurons per layer (for example, 100 x 200 x 100, 200 x 300 x 100, …)

  • Number of layers (for example, 2 vs. 3 vs. 4, …)

  • Shape of the Conv2D kernels

  5. While experimenting, record the performance of each run so that you can create a bar chart comparing the configurations.
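As an illustration of step 5, each run's result could be logged in a dict and rendered as a quick chart. The configuration names and accuracy numbers below are hypothetical placeholders, not measured results:

```python
# Hypothetical results from a hyperparameter sweep (placeholder values).
results = {
    "2 conv blocks, Adam":  0.74,
    "3 conv blocks, Adam":  0.71,
    "2 conv blocks, Nadam": 0.78,
    "2 conv blocks, SGD":   0.69,
}

# Render a quick text bar chart (one '#' per percentage point) so the
# experiments can be compared at a glance; with matplotlib the same data
# would go straight into plt.bar().
for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:<22} {'#' * round(acc * 100)} {acc:.0%}")
```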


  • I used VGG-Net to classify the images, with a data generator to resize all images to the same size for training.

  • VGG-Net uses three convolutional layers in its deeper blocks; I used only two per block and got better results. I also tried multiple optimizers (Nadam, Adam, AdaGrad, and SGD), and used a learning-rate scheduler and early stopping.

  • The main challenge I faced was finding a good reference; I tried various architectures before settling on VGG-Net.

  • I have attached the training and validation graphs for my model.

  • I achieved a test accuracy of 78.79%.
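The learning-rate scheduler and early stopping mentioned above can be sketched in framework-agnostic form (in Keras these would be the `LearningRateScheduler` and `EarlyStopping` callbacks; the decay factor, interval, and patience values here are illustrative, not the ones used in the project):

```python
def step_decay(epoch, base_lr=1e-3, drop=0.5, every=10):
    """Step schedule: halve the learning rate every `every` epochs."""
    return base_lr * (drop ** (epoch // every))

def train_with_early_stopping(val_losses, patience=3):
    """Return the epoch index at which training stops, given the
    per-epoch validation losses: stop once the loss has not improved
    for `patience` consecutive epochs."""
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0       # new best: reset the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch           # no improvement for `patience` epochs
    return len(val_losses) - 1         # ran to completion

print(step_decay(0), step_decay(10))                     # 0.001 0.0005
print(train_with_early_stopping([1.0, 0.8, 0.9, 0.95, 0.99]))  # 4
```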


Model Details –

Model: "sequential_18"

Layer (type)                                  Output Shape          Param #
=================================================================
batch_normalization_118 (BatchNormalization)  (16, 224, 224, 3)     12
conv2d_94 (Conv2D)                            (16, 224, 224, 64)    1792
batch_normalization_119 (BatchNormalization)  (16, 224, 224, 64)    256
conv2d_95 (Conv2D)                            (16, 224, 224, 64)    36928
max_pooling2d_64 (MaxPooling2D)               (16, 111, 111, 64)    0
dropout_70 (Dropout)                          (16, 111, 111, 64)    0
conv2d_96 (Conv2D)                            (16, 111, 111, 128)   73856
batch_normalization_120 (BatchNormalization)  (16, 111, 111, 128)   512
conv2d_97 (Conv2D)                            (16, 111, 111, 128)   147584
batch_normalization_121 (BatchNormalization)  (16, 111, 111, 128)   512
max_pooling2d_65 (MaxPooling2D)               (16, 55, 55, 128)     0
dropout_71 (Dropout)                          (16, 55, 55, 128)     0
conv2d_98 (Conv2D)                            (16, 55, 55, 256)     295168
batch_normalization_122 (BatchNormalization)  (16, 55, 55, 256)     1024
conv2d_99 (Conv2D)                            (16, 55, 55, 256)     590080
batch_normalization_123 (BatchNormalization)  (16, 55, 55, 256)     1024
max_pooling2d_66 (MaxPooling2D)               (16, 27, 27, 256)     0
dropout_72 (Dropout)                          (16, 27, 27, 256)     0
conv2d_100 (Conv2D)                           (16, 27, 27, 512)     1180160
batch_normalization_124 (BatchNormalization)  (16, 27, 27, 512)     2048
dropout_73 (Dropout)                          (16, 27, 27, 512)     0
conv2d_101 (Conv2D)                           (16, 27, 27, 512)     2359808
batch_normalization_125 (BatchNormalization)  (16, 27, 27, 512)     2048
max_pooling2d_67 (MaxPooling2D)               (16, 13, 13, 512)     0
dropout_74 (Dropout)                          (16, 13, 13, 512)     0
flatten_21 (Flatten)                          (16, 86528)           0
dense_51 (Dense)                              (16, 1024)            88605696
batch_normalization_126 (BatchNormalization)  (16, 1024)            4096
dropout_75 (Dropout)                          (16, 1024)            0
dense_52 (Dense)                              (16, 512)             524800
batch_normalization_127 (BatchNormalization)  (16, 512)             2048
dropout_76 (Dropout)                          (16, 512)             0
dense_53 (Dense)                              (16, 5)               2565
batch_normalization_128 (BatchNormalization)  (16, 5)               20
activation_21 (Activation)                    (16, 5)               0
=================================================================
Total params: 93,832,037
Trainable params: 93,825,237
Non-trainable params: 6,800
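The parameter counts in the summary can be sanity-checked by hand: a Conv2D layer with a 3x3 kernel has (3*3*C_in + 1)*C_out parameters, a Dense layer has (n_in + 1)*n_out, and BatchNormalization has 4 per channel (gamma and beta are trainable; the moving mean and variance are not). A short script reproducing the totals:

```python
def conv2d_params(k, c_in, c_out):
    # (k*k*c_in weights per filter) plus one bias per filter
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    return (n_in + 1) * n_out

def bn_params(channels):
    # gamma, beta (trainable) + moving mean, moving variance (non-trainable)
    return 4 * channels

convs = [conv2d_params(3, c_in, c_out) for c_in, c_out in
         [(3, 64), (64, 64), (64, 128), (128, 128),
          (128, 256), (256, 256), (256, 512), (512, 512)]]
bns = [bn_params(c) for c in
       [3, 64, 128, 128, 256, 256, 512, 512, 1024, 512, 5]]
denses = [dense_params(13 * 13 * 512, 1024),   # flatten: 13*13*512 = 86528
          dense_params(1024, 512),
          dense_params(512, 5)]

total = sum(convs) + sum(bns) + sum(denses)
non_trainable = sum(bns) // 2  # moving mean/variance are half of each BN layer
print(total, total - non_trainable, non_trainable)
# → 93832037 93825237 6800, matching the summary above
```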




