TensorFlow: How to measure how much GPU memory each tensor takes?

Sorry for the slow reply. Unfortunately, right now the only way to set the log level is to edit tensorflow/core/platform/logging.h and recompile, e.g. with:

#define VLOG_IS_ON(lvl) ((lvl) <= 1)

There is an open issue (1258) to control logging more elegantly.

MemoryLogTensorOutput entries are logged at the end of each Op's execution and indicate the tensors that hold the Op's outputs. These tensors are worth tracking because their memory is not released until a downstream Op consumes them, which may happen much later in a large graph.
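As a rough cross-check against the sizes reported in the logs, a dense tensor's footprint can be estimated from its shape and dtype alone. This is a minimal sketch (tensor_bytes is a hypothetical helper, not a TensorFlow API), and it ignores any allocator padding or alignment overhead:

```python
import numpy as np

def tensor_bytes(shape, dtype=np.float32):
    # A dense tensor occupies num_elements * bytes_per_element;
    # float32 is 4 bytes per element.
    return int(np.prod(shape)) * np.dtype(dtype).itemsize

# A 1000x1000 float32 tensor takes about 4 MB:
print(tensor_bytes([1000, 1000]))      # 4000000
print(tensor_bytes([10], np.float64))  # 80
```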


Now that issue 1258 has been closed, you can enable memory logging from Python by setting an environment variable before importing TensorFlow:

import os
os.environ['TF_CPP_MIN_VLOG_LEVEL'] = '3'
import tensorflow as tf

This produces a lot of logging, so you'll want to grep the output for the relevant lines. For example:

grep MemoryLogTensorAllocation train.log
