TensorFlow simple operations: tensors vs Python variables

The result is the same because every operator (tf.add, or __add__, which is the overload of +) calls tf.convert_to_tensor on its operands.
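
A minimal sketch of that conversion (the variable names are just illustrative):

    import tensorflow as tf

    a = tf.constant(10)

    # The Python int 1 is converted with tf.convert_to_tensor inside the op,
    # so all three produce an equivalent int32 tensor holding 11:
    x = tf.add(a, 1)
    y = a + 1
    z = tf.add(a, tf.convert_to_tensor(1))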

The difference between tf.add(a, b) and a + b is that the former lets you give a name to the operation via the name parameter. The latter does not, and it also makes it possible for the computation to be done by the Python interpreter itself rather than in the TensorFlow environment.
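
For example (a sketch; the name "my_sum" is only illustrative):

    import tensorflow as tf

    a = tf.constant(1)
    b = tf.constant(2)

    named = tf.add(a, b, name="my_sum")  # in graph mode this op is recorded under the name "my_sum"
    unnamed = a + b                      # the op name is auto-generated (e.g. "add")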

This happens if (and only if) neither a nor b is a Tensor object, in which case TensorFlow is not involved in the computation at all.
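
A sketch of that pure-Python case:

    import tensorflow as tf

    a = 1
    b = 2

    c = a + b         # plain Python addition; TensorFlow is never involved, c is the int 3
    d = tf.add(a, b)  # both ints are converted to tensors, so TensorFlow does the addition here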


They are all the same.

The Python + in a + b is captured by TensorFlow and actually generates the same op as tf.add(a, b) does.

tf.constant gives you more control, such as defining the shape, type and name of the created tensor. But again, TensorFlow owns the a in your example a = 1, and it is equivalent to tf.constant(1) (treating the constant as an int value in this case).
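
For instance, a sketch of what tf.constant lets you pin down (names are only illustrative):

    import tensorflow as tf

    a = tf.constant(1)                                # same tensor TensorFlow would create from the Python int 1
    b = tf.constant(1, name="b")                      # adds a readable name for the graph / TensorBoard
    c = tf.constant(0, shape=[2, 3], dtype=tf.int32)  # shape and element type set explicitly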


The four examples you gave will all give the same result, and generate the same graph (if you ignore that some of the operation names in the graph are different). TensorFlow will convert many different Python objects into tf.Tensor objects when they are passed as arguments to TensorFlow operators, such as tf.add() here. The + operator is just a simple wrapper on tf.add(), and the overload is used when either the left-hand or right-hand argument is a tf.Tensor (or tf.Variable).
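
A sketch of that overload in action:

    import tensorflow as tf

    a = tf.constant(1)

    # Each of these dispatches to TensorFlow because at least one operand is a tf.Tensor:
    x = tf.add(a, 2)
    y = a + 2   # Tensor.__add__
    z = 2 + a   # Tensor.__radd__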

Given that you can just pass many Python objects to TensorFlow operators, why would you ever use tf.constant()? There are a few reasons:

  • If you use the same Python object as the argument to multiple different operations, TensorFlow will convert it to a tensor multiple times, and represent each of those tensors in the graph. Therefore, if your Python object is a large NumPy array, you may run out of memory if you make too many copies of that array's data. In that case, you may wish to convert the array to a tf.Tensor once and reuse it.

  • Creating a tf.constant() explicitly allows you to set its name property, which can be useful for TensorBoard debugging and graph visualization. (Note though that the default TensorFlow ops will attempt to give a meaningful name to each automatically converted tensor, based on the name of the op's argument.)

  • Creating a tf.constant() explicitly allows you to set the exact element type of the tensor. TensorFlow will convert Python int objects to tf.int32, and float objects to tf.float32. If you want tf.int64 or tf.float64, you can get this by passing the same value to tf.constant() with an explicit dtype argument (see the sketch after this list).

  • The tf.constant() function also offers a useful feature when creating large tensors with a repeated value:

    c = tf.constant(17.0, shape=[1024, 1024], dtype=tf.float32)
    

    The tensor c above represents 4 * 1024 * 1024 bytes of data, but TensorFlow will represent it compactly in the graph as a single float 17.0 plus shape information that indicates how it should be interpreted. If you have many large, filled constants in your graph, it can be more efficient to create them this way.
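
As a sketch of the second and third points above (the names are only illustrative):

    import tensorflow as tf

    x = tf.constant(1)                                 # Python int   -> tf.int32
    y = tf.constant(1.0)                               # Python float -> tf.float32
    x64 = tf.constant(1, dtype=tf.int64, name="x64")   # explicit dtype, plus a name for TensorBoard
    y64 = tf.constant(1.0, dtype=tf.float64)           # explicit dtype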