CuDNNLSTM: Failed to call ThenRnnForward

I was facing this issue too with my model and TensorFlow 2.4.1 recently; I also found it is reproducible with e.g. the model from the tutorial Text generation with an RNN. Training runs fine on the CPU (consuming ~3 GB RAM), but on a GPU with 8 GB of memory it fails with the error

2021-02-12 18:45:48.482327: E tensorflow/stream_executor/dnn.cc:616] CUDNN_STATUS_EXECUTION_FAILED
in tensorflow/stream_executor/cuda/cuda_dnn.cc(1859): 'cudnnRNNForwardTraining( cudnn.handle(), rnn_desc.handle(), model_dims.max_seq_length, input_desc.handles(), input_data.opaque(), input_h_desc.handle(), input_h_data.opaque(), input_c_desc.handle(), input_c_data.opaque(), rnn_desc.params_handle(), params.opaque(), output_desc.handles(), output_data->opaque(), output_h_desc.handle(), output_h_data->opaque(), output_c_desc.handle(), output_c_data->opaque(), workspace.opaque(), workspace.size(), reserve_space.opaque(), reserve_space.size())'
2021-02-12 18:45:48.482405: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cudnn_rnn_ops.cc:1521 : Internal: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 3, 0, 0 , [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 256, 1024, 1, 100, 32, 0] 

I also observed the GPU memory filling up to the limit during the model.compile() call, before the error appeared.

I solved this by preventing TensorFlow from allocating the full GPU memory up front, by adding

# Allocate GPU memory on demand instead of reserving all of it at once
gpu_devices = tf.config.experimental.list_physical_devices("GPU")
for device in gpu_devices:
    tf.config.experimental.set_memory_growth(device, True)

early enough in the script (e.g. right after import tensorflow as tf). This instructs TensorFlow to allocate GPU memory on demand instead of reserving it all upfront. With that, training runs on the GPU, consuming only ~2.2 GB of memory.
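If you prefer a hard cap rather than on-demand growth, TensorFlow also lets you limit how much memory it may allocate on a device. A minimal sketch (the 4096 MB limit is just an illustrative value, and this must run before the GPUs are initialized):

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Cap TensorFlow's allocation on the first GPU at 4096 MB (illustrative value)
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])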


Probably you are running out of memory on the GPU. Your network is very large with 11 million trainable parameters. Do you really need a 512*2 output from your recurrent layer?

Furthermore, your embedding_dim is also quite large, while your vocabulary is quite small at 5k words. I suspect your network is too complex for your problem. I would suggest trying an embedding size of 32 and an LSTM size of 32 as a start; if your accuracy is still bad, you can increase the complexity.

from tensorflow.keras.layers import Bidirectional, LSTM

EMBEDDING_DIM = 32
lstm_out = Bidirectional(LSTM(32, return_sequences=False))(embedding)
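For context, a minimal functional-API sketch with these sizes could look like the following; the vocabulary size, sequence length, and output layer here are assumptions, since they depend on your data and task:

import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, Bidirectional, LSTM, Dense
from tensorflow.keras.models import Model

VOCAB_SIZE = 5000      # assumed from "5k words"
MAX_LEN = 100          # assumed sequence length
EMBEDDING_DIM = 32

inputs = Input(shape=(MAX_LEN,))
embedding = Embedding(VOCAB_SIZE, EMBEDDING_DIM)(inputs)
lstm_out = Bidirectional(LSTM(32, return_sequences=False))(embedding)
outputs = Dense(1, activation="sigmoid")(lstm_out)  # assumed binary classification

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

With these sizes the model stays at a fraction of the original 11 million parameters, which both reduces GPU memory pressure and lowers the risk of overfitting a 5k-word vocabulary.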