How are the new tf.contrib.summary summaries in TensorFlow evaluated?


I’m having a bit of trouble understanding the new tf.contrib.summary API. In the old one, it seemed that all one was supposed to do was call tf.summary.merge_all() and run the resulting op.
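To make the contrast concrete, this is roughly the old-style workflow I mean (a minimal sketch; loss, train, and logdir are assumed to already exist):

import tensorflow as tf

tf.summary.scalar("train/loss", loss)  # registers into the default collection
merged = tf.summary.merge_all()        # one op covering every registered summary
writer = tf.summary.FileWriter(logdir)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        _, summary_str = sess.run([train, merged])
        writer.add_summary(summary_str, global_step=step)  # written by hand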

But now we have things like tf.contrib.summary.record_summaries_every_n_global_steps, which can be used like this:

import tensorflow as tf
import tensorflow.contrib.summary as tfsum

summary_writer = tfsum.create_file_writer(logdir, flush_millis=3000)
summaries = []

# First we create one summary which runs every n global steps
with summary_writer.as_default(), tfsum.record_summaries_every_n_global_steps(30):
    summaries.append(tfsum.scalar("train/loss", loss))

# And then one that runs every single time?
with summary_writer.as_default(), tfsum.always_record_summaries():
    summaries.append(tfsum.scalar("train/accuracy", accuracy))

# Then create an optimizer which uses a global step
step = tf.train.create_global_step()
train = tf.train.AdamOptimizer().minimize(loss, global_step=step)

And now come a few questions:

  1. If we just run session.run(summaries) in a loop, I assume that the accuracy summary would get written every single time, while the loss one wouldn’t, because it only gets written if the global step is divisible by 30? (See the sketch after this list.)
  2. Assuming the summaries automatically evaluate their dependencies, I never need to run session.run([accuracy, summaries]) but can just run session.run(summaries), since they have a dependency in the graph, right?
  3. If 2) is true, can’t I just add a control dependency to the training step so that the summaries are written on every train run? Or is this a bad practice?
  4. Is there any downside to using control dependencies in general for things that are going to be evaluated at the same time anyway?
  5. Why does tf.contrib.summary.scalar (and others) take in a step parameter?
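
For concreteness, the loop I have in mind for 1) and 2) looks roughly like this (a sketch, assuming the graph built above; in graph mode, tf.contrib.summary.initialize sets up the file writer created earlier):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.contrib.summary.initialize(graph=tf.get_default_graph(), session=sess)
    for _ in range(100):
        sess.run(train)      # increments the global step
        sess.run(summaries)  # accuracy recorded every time,
                             # loss only when step % 30 == 0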

By adding a control dependency in 3) I mean doing this:

    with tf.control_dependencies(summaries):
        train = tf.train.AdamOptimizer().minimize(loss, global_step=step)
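
If that works, the training loop would only ever need to fetch the train op; a minimal sketch, assuming the session setup from above:

    for _ in range(100):
        sess.run(train)  # the control dependency forces the summary ops
                         # to be evaluated before each training step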


(Answer below moved from an edit into a self-answer, as requested.)

I just played around with this a little bit, and it seems that if one combines tf.control_dependencies with tf.contrib.summary.record_summaries_every_n_global_steps, it behaves as expected and the summary only gets recorded every nth step. But if the two are run together within a session, as in session.run([train, summs]), the summaries are stored every once in a while, but not exactly every nth step. I tested this with n=2: with the second approach the summary was often written at odd steps, while with the control-dependency approach it was always written on an even step. Presumably this is because session.run imposes no ordering between the summary op’s step check and the optimizer’s increment of the global step, so the check can see either the old or the new value.
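
For reference, a sketch of the two variants I compared (names as in the question, n=2, and sess an initialized session):

# Variant 1: train built under tf.control_dependencies(summaries), so the
# summary ops run, and check the global step, before it is incremented.
for _ in range(100):
    sess.run(train)               # summary written exactly every 2nd step

# Variant 2: train built without the dependency; fetching both in one call
# leaves their relative order up to the runtime.
for _ in range(100):
    sess.run([train, summaries])  # summary written roughly, not exactly,
                                  # every 2nd step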

Answered By – Jakub Arnold
