Issue
Which part of the TF/Keras documentation lists all the available string values for the monitor argument, with explanations? I have seen "val_acc" and "val_loss", but what are the others?
For instance, EarlyStopping:
tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',   # <------------
    min_delta=0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    restore_best_weights=False
)
Solution
Use the name attribute of the Keras metric instance that was specified at model.compile. If a metric is specified by its default string alias, e.g. "accuracy", then the monitor value is that default name attribute.
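As a quick sketch of this (the model, the data, and the metric name my_auc below are illustrative, not from the answer): whatever name is given to the metric instance becomes the key in the training history, and prefixing it with val_ gives the key a callback can monitor.

```python
import numpy as np
import tensorflow as tf

# Hypothetical minimal binary classifier; shapes and names are examples only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,)),
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="my_auc")],  # custom metric name
)

# The monitor key is the metric name, prefixed with "val_" for validation.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_my_auc", mode="max", patience=2,
)

X = np.random.rand(16, 4).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))
history = model.fit(
    X, y, validation_split=0.25, epochs=1, verbose=0,
    callbacks=[early_stopping],
)
print(sorted(history.history.keys()))
```

After one epoch, history.history contains both my_auc and val_my_auc, which is how the monitor string can be verified end to end.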
Experiment
Specify "hogehoge" as the name of a tf.keras.metrics.AUC instance.
METRIC_NAME = "hogehoge"
metrics = [tf.keras.metrics.AUC(name=METRIC_NAME), "accuracy"]
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=metrics
)
Then hogehoge and val_hogehoge appear among the metric names in the output of model.fit(). val_ is the prefix Keras prepends to validation metrics by convention.
model.fit(
    x=X,
    epochs=NUM_EPOCHS,
    batch_size=BATCH_SIZE,
    validation_data=V,
    callbacks=[
        ROCCallback(),
    ]
)
----------
...
Epoch 1/5
42/42 [==============================] - 30s 546ms/step - loss: 0.0018 - hogehoge: 1.0000 - accuracy: 1.0000 - val_loss: 0.0021 - val_hogehoge: 1.0000 - val_accuracy: 1.0000
Epoch 00001: val_hogehoge improved from -inf to 1.00000, saving model to /content/drive/MyDrive/home/repository/mon/huggingface/finetuning/output/run_HOGE/model/model.h5
loss:0.0018227609107270837
hogehoge:1.0
accuracy:1.0
val_loss:0.002101171063259244
val_hogehoge:1.0
val_accuracy:1.0
The logs argument of the callback method contains them, as the ROCCallback prints.
class ROCCallback(tf.keras.callbacks.Callback):
    """Print the metrics available in logs at the end of each epoch."""
    def on_epoch_end(self, epoch, logs=None):
        for k, v in (logs or {}).items():
            print(f"{k}:{v}")
monitor attribute
To monitor the metric of the tf.keras.metrics.AUC instance, set hogehoge for training or val_hogehoge for validation as the monitor argument of the callback.
MONITOR_METRIC = f"val_{METRIC_NAME}"
MONITOR_MODE = 'max'

class ModelCheckpointCallback(tf.keras.callbacks.ModelCheckpoint):
    def __init__(self, path_to_file, monitor, mode):
        super().__init__(
            filepath=path_to_file,
            monitor=monitor,
            mode=mode,
            save_best_only=True,
            save_weights_only=True,
            save_freq="epoch",
            verbose=1
        )
model.fit(
    x=X,
    epochs=NUM_EPOCHS,
    batch_size=BATCH_SIZE,
    validation_data=V,
    callbacks=[
        ModelCheckpointCallback(MODEL_FILE, monitor=MONITOR_METRIC, mode=MONITOR_MODE),
    ]
)
----------
Epoch 1/5
42/42 [==============================] - 30s 545ms/step - loss: 0.0368 - hogehoge: 0.9993 - accuracy: 0.9821 - val_loss: 0.0038 - val_hogehoge: 1.0000 - val_accuracy: 1.0000
Epoch 00001: val_hogehoge improved from -inf to 1.00000, saving model to /content/drive/MyDrive/model.h5
Conclusion
To verify the metric names to monitor, dump the name attribute of each of the model's metrics, or the model's metrics_names attribute. Prefix a name with val_ to monitor the corresponding validation metric.
for metric in model.metrics:
    print(metric.name)
----------
loss
hogehoge
accuracy
print(model.metrics_names)
----------
['loss', 'hogehoge', 'accuracy']
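As a final sketch, a hypothetical helper (validation_monitor_keys is not a Keras function, just an illustration of the convention) that maps the names reported by metrics_names to the keys a callback can monitor on the validation set:

```python
def validation_monitor_keys(metrics_names):
    """Derive the monitorable validation keys from a model's metrics_names.

    Keras simply prepends "val_" to each training metric name.
    """
    return [f"val_{name}" for name in metrics_names]

print(validation_monitor_keys(["loss", "hogehoge", "accuracy"]))
# ['val_loss', 'val_hogehoge', 'val_accuracy']
```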
Answered By – mon
This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.