How to fix "ResourceExhaustedError: OOM when allocating tensor"

From the shape [800000,32,30,62] it seems your model is putting all the data into a single batch.

Try specifying a batch size, like this:

    history = model.fit(
        [trainimage, train_product_embd], train_label,
        validation_data=([validimage, valid_product_embd], valid_label),
        epochs=10,
        steps_per_epoch=100,
        validation_steps=10,
        batch_size=32)

If it still OOMs, try reducing the batch_size.
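For example, a minimal sketch of retrying with a smaller batch (same variable names as the call above; the value 16 is just an illustration, keep halving it until the model fits in memory):

    # Same fit call, only with a smaller batch_size; try 16, then 8, ...
    history = model.fit(
        [trainimage, train_product_embd], train_label,
        validation_data=([validimage, valid_product_embd], valid_label),
        epochs=10,
        batch_size=16)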


OOM stands for "out of memory". Your GPU is running out of memory, so it can't allocate memory for this tensor. There are a few things you can do:

  • Decrease the number of filters in your Conv2D layers (and units in your Dense layers)
  • Use a smaller batch_size (or increase steps_per_epoch and validation_steps)
  • Use grayscale images (you can use tf.image.rgb_to_grayscale; see the preprocessing sketch after this list)
  • Reduce the number of layers
  • Use MaxPooling2D layers after convolutional layers
  • Reduce the size of your images (you can use tf.image.resize for that)
  • Use a smaller float precision for your input, e.g. np.float32 instead of np.float64
  • If you're using a pre-trained model, freeze the first layers (a short sketch follows below)
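
To illustrate the grayscale / resize / precision points, here is a minimal preprocessing sketch. The variable trainimage is taken from the question; the 64x64 target size is only a placeholder:

    import numpy as np
    import tensorflow as tf

    # Convert RGB images to a single channel, shrink them, and cast to float32.
    # `trainimage` is assumed to have shape (num_samples, height, width, 3).
    images = tf.convert_to_tensor(trainimage, dtype=tf.float32)
    images = tf.image.rgb_to_grayscale(images)   # (N, H, W, 3) -> (N, H, W, 1)
    images = tf.image.resize(images, (64, 64))   # smaller spatial size, e.g. 64x64
    trainimage_small = images.numpy().astype(np.float32)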
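
For the last bullet, freezing the early layers of a pre-trained Keras model just means setting trainable = False on them. The choice of MobileNetV2 and of freezing everything except the last few layers is only an example:

    import tensorflow as tf

    # Load a pre-trained backbone without its classification head.
    base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet")

    # Freeze all but the last few layers, so no gradients or optimizer state
    # are kept for them during training.
    for layer in base.layers[:-10]:
        layer.trainable = False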

There is one more useful piece of information in this error:

OOM when allocating tensor with shape[800000,32,30,62]

This is a weird shape. If you're working with images, you should normally have 1 or 3 channels. On top of that, the leading dimension of 800000 suggests you are passing your entire dataset in one batch; you should instead pass it in smaller batches.
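
If the arrays are too large to hand to model.fit directly, one way to feed them in batches is a tf.data pipeline. This is a minimal sketch, assuming the same trainimage, train_product_embd, and train_label arrays as above and a batch size of 32:

    import tensorflow as tf

    # Build a dataset that yields (inputs, label) tuples in batches of 32
    # instead of sending the whole dataset to the GPU as one giant tensor.
    dataset = tf.data.Dataset.from_tensor_slices(
        ((trainimage, train_product_embd), train_label))
    dataset = dataset.shuffle(buffer_size=10_000).batch(32).prefetch(tf.data.AUTOTUNE)

    history = model.fit(dataset, epochs=10)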