Why do we use tf.name_scope()

They are not the same thing; the scope becomes part of the op's name:

import tensorflow as tf
c1 = tf.constant(42)
with tf.name_scope('s1'):
    c2 = tf.constant(42)
print(c1.name)
print(c2.name)

prints

Const:0
s1/Const:0

So, as the name suggests, the scope functions create a scope for the names of the ops you create inside them. This affects how you refer to tensors, how variables are reused, how the graph is displayed in TensorBoard, and so on.
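
For example, the scoped name is how you fetch a tensor back out of the graph (a minimal sketch, assuming TF 1.x graph mode):

import tensorflow as tf

c1 = tf.constant(42)
with tf.name_scope('s1'):
    c2 = tf.constant(42)

graph = tf.get_default_graph()
t = graph.get_tensor_by_name('s1/Const:0')  # look up by the scoped name
assert t is c2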


I don't see the use case for reusing constants, but here is some relevant information on scopes and variable sharing.

Scopes

  • name_scope will add the scope name as a prefix to all operations

  • variable_scope will add the scope name as a prefix to all variables and operations (see the sketch after this list)
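
A quick sketch of how both scopes prefix op names (assuming TF 1.x graph mode):

import tensorflow as tf

with tf.name_scope("ns"):
    a = tf.add(tf.constant(1), tf.constant(1), name="add")
with tf.variable_scope("vs"):
    b = tf.add(tf.constant(1), tf.constant(1), name="add")

print(a.name)  # ns/add:0
print(b.name)  # vs/add:0 (variable_scope also opens a name scope for ops)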

Instantiating Variables

  • The tf.Variable() constructor prefixes the variable name with the current name_scope and variable_scope

  • The tf.get_variable() constructor ignores name_scope and prefixes the name only with the current variable_scope

For example:

with tf.variable_scope("variable_scope"):
    with tf.name_scope("name_scope"):
        var1 = tf.get_variable("var1", [1])

with tf.variable_scope("variable_scope"):
    with tf.name_scope("name_scope"):
        var2 = tf.Variable([1], name="var2")

Produces

var1 = <tf.Variable 'variable_scope/var1:0' shape=(1,) dtype=float32_ref>

var2 = <tf.Variable 'variable_scope/name_scope/var2:0' shape=(1,) dtype=int32_ref>

Reusing Variables

  • Always use tf.variable_scope to define the scope of a shared variable

  • The easiest way to reuse variables is to call reuse_variables(), as shown below

with tf.variable_scope("scope"):
    var1 = tf.get_variable("variable1", [1])
    # From here on, get_variable returns existing variables in this scope
    # instead of creating new ones.
    tf.get_variable_scope().reuse_variables()
    var2 = tf.get_variable("variable1", [1])
assert var1 is var2  # both names refer to the same variable object
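
Another option (a standalone sketch, assuming TF 1.x) is to re-enter the scope with reuse=True:

import tensorflow as tf

with tf.variable_scope("scope"):
    var1 = tf.get_variable("variable1", [1])

# Re-entering the scope with reuse=True returns the existing variable.
with tf.variable_scope("scope", reuse=True):
    var2 = tf.get_variable("variable1", [1])

assert var1 is var2
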
  • tf.Variable() always creates a new variable; when it is given a name that is already in use, it silently appends _1, _2, etc. to make the name unique, which can cause hard-to-track conflicts (see the sketch below)
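
A short standalone demonstration of that renaming (assuming TF 1.x):

import tensorflow as tf

v1 = tf.Variable(0, name="v")
v2 = tf.Variable(0, name="v")  # name already taken, so TF uniquifies it

print(v1.name)  # v:0
print(v2.name)  # v_1:0 (a brand-new variable, not a reuse of v1)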
