TensorFlow placement algorithm


I would like to know when the placement algorithm of TensorFlow (as described in the white paper) is actually employed. All of the examples for distributing TensorFlow that I have seen so far seem to specify manually, using tf.device(), on which devices the nodes should be executed.
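For concreteness, the manual-placement pattern those examples use looks roughly like this (a minimal sketch; the "/job:ps" and "/job:worker" task names are hypothetical cluster addresses):

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Pin the parameters to a parameter-server task...
    with tf.device("/job:ps/task:0"):
        weights = tf.zeros([784, 10], name="weights")
    # ...and pin the compute to a worker task.
    with tf.device("/job:worker/task:0"):
        logits = tf.matmul(tf.zeros([1, 784]), weights, name="logits")
```

Every op created inside a tf.device() scope carries that device string in its definition, rather than being assigned a device by any automatic placement algorithm.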


The dynamic placement algorithm described in Section 3.2.1 of the TensorFlow whitepaper was not included in the open-source release. Instead, the “simple placer” (whose implementation can be found in simple_placer.cc) is used, but it requires some explicit annotations (via tf.device()) to yield an efficient placement. Higher-level constructs like tf.train.replica_device_setter() wrap tf.device() to specify common policies such as “shard the variables across parameter servers, and otherwise put all ops on the worker device,” and we use this extensively in distributed training.
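A rough sketch of that policy (written against the tf.compat.v1 alias so it also runs under TensorFlow 2.x; in TensorFlow 1.x the function is simply tf.train.replica_device_setter, and the 2-task "ps" job is an illustrative assumption):

```python
import tensorflow as tf

tf1 = tf.compat.v1  # TF 1.x-style API surface

graph = tf1.Graph()
with graph.as_default():
    # The device setter round-robins variables across the ps tasks and
    # leaves all other ops on the specified worker device.
    setter = tf1.train.replica_device_setter(
        ps_tasks=2, worker_device="/job:worker/task:0")
    with tf1.device(setter):
        v1 = tf1.Variable(tf1.zeros([10]), name="v1")   # placed on a ps task
        v2 = tf1.Variable(tf1.zeros([10]), name="v2")   # placed on the next ps task
        total = tf1.add(v1, v2, name="total")           # stays on the worker
```

Passing a function (rather than a device string) to tf.device() is what makes this work: the setter is called once per op and returns a device string based on the op's type.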

In practice we have found that a small set of annotations usually yields a more efficient placement than the dynamic placer will determine, but improving the placement algorithm remains an area of active research.

Answered By – mrry

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.
