Working with SSIM loss function in tensorflow for RGB images

I was able to solve the issue by passing a dynamic range of 2.0 to tf.image.ssim, since my images are scaled to [-1, 1]:

loss_rec = tf.reduce_mean(tf.image.ssim(truth, reconstructed, 2.0))
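The third argument is the dynamic range of the pixel values (max minus min), and the stabilizing constants inside SSIM are scaled by it, so passing 1.0 for [-1, 1] data gives a different score than 2.0. A quick sanity check (my own illustration, not part of the original code):

import numpy as np
import tensorflow as tf  # assumes TF 1.x, as in the answer

# Random image in [-1, 1] and a slightly noisy copy of it.
img = np.random.uniform(-1, 1, (1, 64, 64, 3)).astype(np.float32)
noisy = np.clip(img + np.random.normal(0, 0.1, img.shape), -1, 1).astype(np.float32)

# Same image pair, two different dynamic-range arguments.
ssim_right = tf.image.ssim(tf.constant(img), tf.constant(noisy), 2.0)  # correct for [-1, 1]
ssim_wrong = tf.image.ssim(tf.constant(img), tf.constant(noisy), 1.0)  # would fit [0, 1] data

with tf.Session() as sess:
    print(sess.run([ssim_right, ssim_wrong]))  # the two scores differ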

And since a higher SSIM value indicates better image quality, I had to minimize the negative of the SSIM to optimize my model:

optimizer = tf.train.AdamOptimizer(learning_rate).minimize(-1 * loss_rec)
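Putting both pieces together, here is a minimal self-contained sketch, assuming TensorFlow 1.x and using a toy trainable tensor in place of a real reconstruction network (shapes and learning rate are my own placeholders):

import numpy as np
import tensorflow as tf  # assumes TF 1.x, to match tf.train.AdamOptimizer above

# Toy target image in [-1, 1] and a trainable "reconstruction" standing in for a model's output.
truth = tf.constant(np.random.uniform(-1, 1, (1, 64, 64, 3)).astype(np.float32))
reconstructed = tf.Variable(np.zeros((1, 64, 64, 3), dtype=np.float32))

# Dynamic range is 2.0 because the images span [-1, 1].
loss_rec = tf.reduce_mean(tf.image.ssim(truth, reconstructed, 2.0))

# Higher SSIM means better quality, so minimize the negative.
optimizer = tf.train.AdamOptimizer(0.01).minimize(-1 * loss_rec)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        _, ssim_val = sess.run([optimizer, loss_rec])
    print("SSIM after training:", ssim_val)  # should climb toward 1.0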


SSIM was designed to measure the similarity between two luminance signals only. The RGB images are converted to greyscale before the similarity is measured, so if that score is fed back as the loss, the optimizer has no way to tell that the image is losing color saturation, because the lost saturation never shows up in the error metric. That's just a theory.
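One way to test that theory is to compare a color image against a fully desaturated copy of itself; the snippet below is only a hypothetical check, not part of the original setup. If the metric really ignored chrominance, the score should stay close to 1.0.

import numpy as np
import tensorflow as tf  # assumes TF 1.x, as in the snippets above

# A random "color" image in [-1, 1] and a desaturated copy:
# the per-pixel channel mean (a crude luminance) repeated across all three channels.
rgb = np.random.uniform(-1, 1, (1, 64, 64, 3)).astype(np.float32)
grey = np.repeat(rgb.mean(axis=-1, keepdims=True), 3, axis=-1)

score = tf.image.ssim(tf.constant(rgb), tf.constant(grey), 2.0)

with tf.Session() as sess:
    print("SSIM vs. desaturated copy:", sess.run(score))

# A score near 1.0 would support the theory that the metric ignores color;
# a clearly lower score would mean lost saturation does show up in the loss.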