AttributeError: module 'tensorflow._api.v2.train' has no attribute 'get_or_create_global_step'

Issue

I have this line of code as part of a function:

global_step = tf.train.get_or_create_global_step()

And this is the error I’m getting:

AttributeError                            Traceback (most recent call last)
<ipython-input-17-e0d01fc93072> in <module>()
    131     summary_writer = tf.summary.create_file_writer(log_path, flush_millis=10000)
    132     summary_writer.set_as_default()
--> 133     global_step = tf.get_or_create_global_step()
    134 
    135     # Load the dataset

AttributeError: module 'tensorflow' has no attribute 'get_or_create_global_step'

I understand that this function is deprecated, but I couldn’t figure out how to make the necessary adjustments, and the migration guide wasn’t very clear.

This is the entirety of my code:

# Make directories for this run
time_string = time.strftime("%Y-%m-%d-%H-%M-%S")
model_path = os.path.join(model_path, time_string)
results_path = os.path.join(results_path, time_string)
safe_makedirs(model_path)
safe_makedirs(results_path)

# Initialise logging
log_path = os.path.join('logs', exp_name, time_string)
summary_writer = tf.summary.create_file_writer(log_path, flush_millis=10000)
summary_writer.set_as_default()
global_step = tf.train.get_or_create_global_step()  # <- This is the line.

# Load the dataset
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], raw_size, raw_size, channels)

# Add noise for condition input
train_inputs = add_gaussian_noise(train_images, stdev=0.2, data_range=(0, 255)).astype('float32')
train_inputs = normalise(train_inputs, (-1, 1), (0, 255))
train_images = normalise(train_images, (-1, 1), (0, 255))
train_labels = train_images.astype('float32')

train_dataset = tf.data.Dataset.from_tensor_slices((train_inputs, train_labels)).shuffle(buffer_size).batch(batch_size)

# Test set
test_images = test_images.reshape(test_images.shape[0], raw_size, raw_size, channels)
test_inputs = add_gaussian_noise(test_images, stdev=0.2, data_range=(0, 255)).astype('float32')
test_inputs = normalise(test_inputs, (-1, 1), (0, 255))
test_images = normalise(test_images, (-1, 1), (0, 255))
test_labels = test_images.astype('float32')

# Set up some random (but consistent) test cases to monitor
num_examples_to_generate = 16
random_indices = np.random.choice(np.arange(test_inputs.shape[0]), num_examples_to_generate,replace=False)
selected_inputs = test_inputs[random_indices]
selected_labels = test_labels[random_indices]

# Set up the models for training
generator = make_generator_model_small()
discriminator = make_discriminator_model()

generator_optimizer = tf.train.AdamOptimizer(learning_rate)
discriminator_optimizer = tf.train.AdamOptimizer(learning_rate)

checkpoint_prefix = os.path.join(model_path, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer, discriminator_optimizer=discriminator_optimizer, generator=generator, discriminator=discriminator)

generate_and_save_images(None, 0, selected_inputs, selected_labels)  # baseline
print("\nTraining...\n")

# Compile training function into a callable TensorFlow graph (speeds up execution)
train_step = tf.contrib.eager.defun(train_step)
train(train_dataset, max_epoch)
print("\nTraining done\n")

Solution

From what I gathered in https://github.com/tensorflow/tensorflow/blob/v2.8.1/tensorflow/python/training/training_util.py and https://github.com/tensorflow/tensorflow/commit/0ea7b2bcc2f553b06859b7cbf6962dfc340c868d, the function itself still exists, but it now lives in the internal training_util module and is no longer exported under tf.train in the 2.x API. The public way to reach it is through the compatibility module:

global_step = tf.compat.v1.train.get_or_create_global_step()

(this is from https://github.com/tensorflow/tensorflow/blob/9e2ac684a641b236431d0602b80fc9391dd565c0/tensorflow/examples/speech_commands/train.py#L187)

Since that call sits under compat.v1 but still works on the 2.x API, I would hazard a guess that tf.compat.v1.train.get_or_create_global_step() is the drop-in replacement if you just want the old behaviour back.
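
For concreteness, here’s a minimal sketch of how that drop-in route would slot into your logging setup (the log directory here is just a placeholder, not your actual path):

import tensorflow as tf

summary_writer = tf.summary.create_file_writer("logs/demo", flush_millis=10000)
summary_writer.set_as_default()

# Same step variable the old tf.train.get_or_create_global_step() returned,
# just reached through the compat module.
global_step = tf.compat.v1.train.get_or_create_global_step()

# Optionally register it as the default step for tf.summary ops,
# so you don't have to pass step= on every call.
tf.summary.experimental.set_step(global_step)

# You're responsible for incrementing it inside the training loop, e.g.:
# global_step.assign_add(1)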

But there’s a note in the docstring of get_or_create_global_step() [https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/training_util.py#L267], and I quote:

With the deprecation of global graphs, TF no longer tracks variables in collections. In other words, there are no global variables in TF2. Thus, the global step functions have been removed (get_or_create_global_step, create_global_step, get_global_step). You have two options for migrating:

  1. Create a Keras optimizer, which generates an iterations variable. This variable is automatically incremented when calling apply_gradients.
  2. Manually create and increment a tf.Variable.

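If you’d rather follow that note than lean on the compat shim, here’s a minimal sketch of both options; the optimizer, loss, and log directory are placeholders, not your actual code:

import tensorflow as tf

# Option 1: let a Keras optimizer track the step.
# optimizer.iterations is an int64 tf.Variable that apply_gradients() increments.
optimizer = tf.keras.optimizers.Adam(1e-4)
step_from_optimizer = optimizer.iterations

# Option 2: create and increment your own step variable.
global_step = tf.Variable(0, dtype=tf.int64, trainable=False, name="global_step")

summary_writer = tf.summary.create_file_writer("logs/demo")
with summary_writer.as_default():
    for _ in range(3):                  # stand-in for the real training loop
        loss = tf.constant(0.0)         # stand-in for the real loss
        tf.summary.scalar("loss", loss, step=global_step)
        global_step.assign_add(1)       # manual increment (option 2)

Option 1 is probably the least code in your case, since tf.train.AdamOptimizer is also gone in 2.x and you’d be switching to tf.keras.optimizers.Adam anyway.
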
Answered By – ewong

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
