Why is 'metrics = tf.keras.metrics.Accuracy()' giving an error but 'metrics=['accuracy']' isn't?

Issue

I'm using the given code example on the fashion_mnist dataset. It contains metrics=['accuracy'] and runs fine. Whenever I change it to metrics=tf.keras.metrics.Accuracy(), I get the following error:

ValueError: Shapes (32, 10) and (32, 1) are incompatible

What am I doing wrong? Is Accuracy() not the same metric?

import tensorflow as tf

fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# scale pixel values to the [0, 1] range
train_images = train_images / 255.
test_images = test_images / 255.

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation=tf.keras.activations.relu),
    tf.keras.layers.Dense(10)])  # 10 output logits, one per class

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])


model.fit(train_images, train_labels, epochs=10)

Solution

Based on the Keras docs for the metrics argument of compile():

When you pass the strings "accuracy" or "acc", we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape.

So when you pass "accuracy" here, Keras automatically converts it to SparseCategoricalAccuracy(), because the loss is SparseCategoricalCrossentropy and the labels are integer class indices. tf.keras.metrics.Accuracy() is a different metric: it counts how often y_pred is exactly equal to y_true, element by element, so it expects predicted labels with the same shape as the true labels. Your model outputs 10 logits per sample (shape (32, 10)) while the labels have shape (32, 1), which is exactly the shape mismatch reported in the error.
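To see the difference outside of model.fit(), here is a minimal standalone sketch (the labels and scores below are toy values made up for illustration):

import tensorflow as tf

# Accuracy() expects predicted *labels*, not per-class scores:
# it simply counts how often y_pred equals y_true element-wise.
acc = tf.keras.metrics.Accuracy()
acc.update_state([1, 2, 0], [1, 2, 1])   # both are class indices, same shape
print(acc.result().numpy())              # 0.6666667

# SparseCategoricalAccuracy() takes per-class scores, argmaxes them, and
# compares the winning index with the integer label -- the situation your
# model is in with (batch, 10) logits and (batch, 1) labels.
sca = tf.keras.metrics.SparseCategoricalAccuracy()
sca.update_state([[1], [2]], [[0.1, 0.8, 0.1],
                              [0.2, 0.3, 0.5]])
print(sca.result().numpy())              # 1.0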

So you can pass either of the following:

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])  # explicit metric instance
# or
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])  # string shorthand, converted automatically
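Note that SparseCategoricalAccuracy() shows up in the training logs and in history.history under its default name sparse_categorical_accuracy. If you want the same 'accuracy' key that the string form gives you, you can set the optional name argument (a standard keyword on Keras metric classes), for example:

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy')])
# history = model.fit(...)  ->  history.history['accuracy']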

Answered By – Kaveh

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
