Changing the scale of a tensor in TensorFlow

You are trying to normalize the data. A classic normalization formula is this one:

normalized_value = (value − min_value) / (max_value − min_value)

The implementation in TensorFlow looks like this:

tensor = tf.div(
   tf.subtract(
      tensor, 
      tf.reduce_min(tensor)
   ), 
   tf.subtract(
      tf.reduce_max(tensor), 
      tf.reduce_min(tensor)
   )
)

All the values of the tensor will be between 0 and 1.
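
For example, a minimal runnable sketch (assuming TensorFlow 1.x, where tf.div and tf.Session are available; the values are made up for illustration):

import tensorflow as tf

tensor = tf.constant([2.0, 4.0, 6.0, 1.0, 0.0])
normalized = tf.div(
   tf.subtract(tensor, tf.reduce_min(tensor)),
   tf.subtract(tf.reduce_max(tensor), tf.reduce_min(tensor))
)

with tf.Session() as sess:
    print(sess.run(normalized))  # approximately [0.333 0.667 1. 0.167 0.]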

IMPORTANT: make sure the tensor has float/double values, or the output tensor will contain just zeros and ones. If you have an integer tensor, call this first:

tensor = tf.to_float(tensor)

Update: as of TensorFlow 2, tf.to_float() is deprecated; use tf.cast() instead:

tensor = tf.cast(tensor, dtype=tf.float32)  # or any other tf.dtype that is precise enough
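
Putting both steps together in TensorFlow 2 style (a minimal sketch, assuming eager execution; the integer values are made up for illustration):

import tensorflow as tf

int_tensor = tf.constant([3, 9, 1, 7])           # integer input
tensor = tf.cast(int_tensor, dtype=tf.float32)   # cast first, as explained above
normalized = (tensor - tf.reduce_min(tensor)) / (tf.reduce_max(tensor) - tf.reduce_min(tensor))
print(normalized.numpy())  # [0.25 1. 0. 0.75]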

According to the feature scaling article on Wikipedia, you can also try scaling to unit length:

x' = x / ‖x‖  (each value divided by the Euclidean norm of the whole vector)

It can be implemented using this segment of code:

In [3]: a = tf.constant([2.0, 4.0, 6.0, 1.0, 0])
In [4]: b = a / tf.norm(a)
In [5]: b.eval()
Out[5]: array([ 0.26490647,  0.52981293,  0.79471946,  0.13245323,  0.        ], dtype=float32)
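
If you prefer a built-in op for this, tf.math.l2_normalize does essentially the same division by the Euclidean (L2) norm. A minimal sketch, assuming TensorFlow 2.x eager execution:

import tensorflow as tf

a = tf.constant([2.0, 4.0, 6.0, 1.0, 0.0])
b = tf.math.l2_normalize(a, axis=0)  # same as a / tf.norm(a) for a 1-D tensor (plus a tiny epsilon guard)
print(b.numpy())  # approximately [0.265 0.53 0.795 0.132 0.]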