Issue
I am doing image classification on 211 classes of similar objects: coins from around the world. My model consistently suffers from low accuracy and heavy overfitting. Is there any method to improve my model so that accuracy improves while overfitting is reduced?
Image size: 350x350

Normalization:

    normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)

Data augmentation:

    RandomFlip("horizontal"),
    RandomRotation(0.1),
    RandomZoom(0.1),

Layers:

    experimental.preprocessing.Rescaling(1./255),
    Conv2D(16, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Dropout(0.8),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(num_classes)

Compilation:

    model.compile(optimizer='adam',
                  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])

    epochs = 50
As for the results, overfitting sets in around epoch 10:
loss: 2.8354 - accuracy: 0.3566 - val_loss: 2.8626 - val_accuracy: 0.4017
At epoch 50:
loss: 1.0284 - accuracy: 0.7201 - val_loss: 2.2794 - val_accuracy: 0.6493
Solution
I suggest using transfer learning on top of a pretrained ResNet-50. An adjustable (decaying) learning rate also helps prevent overfitting.
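A minimal sketch of that suggestion, assuming TensorFlow/Keras with the 350x350 inputs and 211 classes from the question. Note that `weights=None` is used here only so the sketch runs without downloading anything; actual transfer learning requires `weights="imagenet"`. The dropout rate, head size, and `ReduceLROnPlateau` settings are illustrative choices, not tuned values:

```python
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 211  # from the question

# Pretrained backbone. In practice set weights="imagenet"; weights=None here
# only keeps this sketch runnable without downloading the pretrained weights.
base = tf.keras.applications.ResNet50(
    include_top=False, weights=None,
    input_shape=(350, 350, 3), pooling="avg")
base.trainable = False  # freeze the backbone; fine-tune later if needed

model = tf.keras.Sequential([
    tf.keras.Input(shape=(350, 350, 3)),
    # With pretrained weights, preprocess inputs in the data pipeline using
    # tf.keras.applications.resnet50.preprocess_input, not Rescaling(1./255).
    base,
    layers.Dropout(0.3),
    layers.Dense(num_classes),  # logits, matching from_logits=True below
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# One way to get the "adjustable learning rate" mentioned above: halve the
# learning rate whenever validation loss stalls. Pass this to model.fit(callbacks=[...]).
lr_schedule = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=3)
```

With the backbone frozen, only the small classification head is trained, which sharply reduces overfitting on 211 visually similar classes; once validation accuracy plateaus, you can unfreeze the top blocks and continue training at a much lower learning rate.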
You may also use an intermediate SVM classifier for better performance: the input is fed through the pretrained model, and the SVM is trained on activations taken from an earlier layer of the network (which layer works best has to be tuned).
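The SVM-on-activations idea can be sketched as follows. This is purely illustrative: a tiny, randomly initialized stand-in network and synthetic arrays replace the pretrained ResNet-50 and the coin images, and scikit-learn's `SVC` is just one possible SVM implementation:

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Tiny stand-in for the pretrained network; in practice use the ResNet-50 backbone.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D(name="feats")(x)
outputs = tf.keras.layers.Dense(5)(x)
model = tf.keras.Model(inputs, outputs)

# Feature extractor tapping an intermediate layer; which layer to tap must be tuned.
feature_extractor = tf.keras.Model(inputs, model.get_layer("feats").output)

# Synthetic data standing in for coin images and their class labels.
images = np.random.rand(20, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 5, size=20)

# Run images through the network, then fit the SVM on the extracted activations.
features = feature_extractor.predict(images, verbose=0)
clf = SVC(kernel="linear").fit(features, labels)
preds = clf.predict(features)
```

The SVM replaces the final dense softmax head, which can help when the number of training images per class is small, since SVMs often generalize better than a large dense layer on few examples.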
Answered By – sai_varshittha
This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.