Issue
I am writing code for the following problem:
- I have a fruit dataset containing train and test directories. Both directories contain the same 6 classes (fresh/rotten apples, fresh/rotten oranges, fresh/rotten bananas).
- I am using transfer learning on MobileNetV2 model
I am trying to get my data splits set up properly, but am confused about how to:
- Set up train, validation, and test splits
- Check that they are indeed set up properly (no overlap between splits, for example)
- Save progress through training (example: I run my script and it trains for 10 epochs. How do I make sure training continues from where I left off when I run my script again for x epochs?)
My code so far:
train_batches = ImageDataGenerator(preprocessing_function=mobilenet_v2.preprocess_input, validation_split=0.20).flow_from_directory(
    train_path, target_size=(im_height, im_width), batch_size=batch_size)
test_batches = ImageDataGenerator(preprocessing_function=mobilenet_v2.preprocess_input).flow_from_directory(
    test_path, target_size=(im_height, im_width), batch_size=batch_size)
mobv2 = tf.keras.applications.mobilenet_v2.MobileNetV2()
x = mobv2.layers[-2].output
output_layer = Dense(units=6, activation='softmax')(x)
model = tf.keras.models.Model(inputs=mobv2.input, outputs=output_layer)
for layer in model.layers[:-25]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=.0001),
loss='categorical_crossentropy',
metrics=['accuracy']
)
Here is my fit, but I haven’t completed it yet, as I’m not sure what to include for validation and test…
model.fit(train_batches, steps_per_epoch=4, )
Solution
See the code below:
train_batches = ImageDataGenerator(preprocessing_function=mobilenet_v2.preprocess_input, validation_split=0.20).flow_from_directory(
    train_path, target_size=(im_height, im_width), batch_size=batch_size,
    subset='training')
valid_batches = ImageDataGenerator(preprocessing_function=mobilenet_v2.preprocess_input, validation_split=0.20).flow_from_directory(
    train_path, target_size=(im_height, im_width), batch_size=batch_size,
    subset='validation')
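On the question of checking for overlap: when both generators are built with the same validation_split and subset arguments, Keras partitions the file list deterministically, but you can confirm it yourself by comparing each generator's filenames attribute. Here is a minimal sketch; check_disjoint is a hypothetical helper name, and the commented line shows how it would be applied to the generators defined above:

```python
def check_disjoint(train_files, valid_files):
    """Return True if the two file lists share no entries (no data leakage)."""
    return not (set(train_files) & set(valid_files))

# With the generators defined above, the actual check would be:
# assert check_disjoint(train_batches.filenames, valid_batches.filenames)
```

You can run the same check between the train and test generators; since those come from separate directories, any overlap there would indicate duplicated files on disk.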
epochs = 15  # set the number of epochs to run
history = model.fit(train_batches, epochs=epochs, verbose=1,
                    validation_data=valid_batches)
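Note that the test set is never passed to model.fit: it is held out entirely and scored once, after training finishes, with model.evaluate. A short sketch, assuming the model and the test_batches generator from the question above:

```python
# Evaluate once, after training, on the held-out test generator.
# model and test_batches are assumed to be defined as in the question.
test_loss, test_acc = model.evaluate(test_batches, verbose=1)
print(f"test loss: {test_loss:.4f}  test accuracy: {test_acc:.4f}")
```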
To get better results, I also recommend using an adjustable learning rate via the Keras ReduceLROnPlateau callback (see the Keras documentation). Set it up to monitor the validation loss. Use the code below:
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=10, verbose=1)
I also recommend you use the Keras EarlyStopping callback (see the Keras documentation). Use the code below:
es = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3, verbose=1,
                                      restore_best_weights=True)
Now, in model.fit, include:
callbacks=[rlronp, es]
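For the third part of the question (resuming training across runs), one common approach is the Keras ModelCheckpoint callback combined with load_model. Saving the full model in the native format preserves the optimizer state, so a later run picks up where the previous one stopped. A sketch, where the file path is a hypothetical choice:

```python
import os
import tensorflow as tf

ckpt_path = 'mobv2_checkpoint.keras'  # hypothetical path

# Save the full model (architecture + weights + optimizer state)
# whenever the validation loss improves.
ckpt = tf.keras.callbacks.ModelCheckpoint(ckpt_path, monitor='val_loss',
                                          save_best_only=True, verbose=1)

# On a later run, resume from the saved file instead of rebuilding from scratch.
if os.path.exists(ckpt_path):
    model = tf.keras.models.load_model(ckpt_path)

# Then train as before, adding the checkpoint to the callback list:
# model.fit(train_batches, epochs=epochs, validation_data=valid_batches,
#           callbacks=[rlronp, es, ckpt])
```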
Answered By – Gerry P
This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.