Too Much Memory Issue with Semantic Image Segmentation NN (DeepLabV3+)

First question: DeepLabV3+ is a very large model (I assume you are using the Xception backbone?!), and 11 GB of required GPU memory is totally normal for a batch size of 32 with 200x300 pixel images :) (Training DeepLabV3+, I needed approx. 11 GB with a batch size of 5 and 500x500 pixel images.) One note on the second sentence of your question: the required GPU memory is influenced by many factors (model, optimizer, batch size, image crop size, preprocessing, etc.), but the actual size of your dataset shouldn't influence it. So it doesn't matter whether your dataset is 300 MB or 300 GB large.
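To illustrate that last point: if you stream your data with `tf.data`, only one batch at a time lives on the GPU, so memory usage depends on the batch size and crop size, not on how many images sit on disk. A minimal sketch, assuming grayscale PNGs (the file pattern, crop size, and batch size are placeholders, not taken from your setup):

```python
import tensorflow as tf

# Assumed placeholders -- adjust to your dataset.
IMAGE_PATTERN = "data/train/*.png"
CROP_SIZE = (200, 300)
BATCH_SIZE = 32

def load_image(path):
    # Decode a single grayscale image and resize it to the crop size.
    raw = tf.io.read_file(path)
    img = tf.io.decode_png(raw, channels=1)
    img = tf.image.resize(img, CROP_SIZE)
    return tf.cast(img, tf.float32) / 255.0

# Images are read lazily from disk; the GPU only ever holds
# BATCH_SIZE images at once, regardless of the total dataset size.
dataset = (
    tf.data.Dataset.list_files(IMAGE_PATTERN)
    .map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)
```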

General question: You are using a small dataset. DeepLabV3+ with the Xception backbone might not be a good fit, since the model might be too large for it, which can lead to overfitting. If you haven't obtained satisfying results yet, you could try a smaller network. If you want to stick with the DeepLab framework, you could switch the backbone from Xception to MobileNetV2 (it is already implemented in the official TensorFlow version). Alternatively, you could try a standalone network, e.g. an Inception encoder with an FCN head (see the sketch below for a lightweight alternative)...
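If you prefer to build a smaller model yourself rather than using the official DeepLab scripts, here is a rough sketch of a lightweight encoder-decoder in Keras with a pre-trained MobileNetV2 encoder. The input shape, number of classes, and the very simple decoder are assumptions for illustration, not part of your setup:

```python
import tensorflow as tf

NUM_CLASSES = 2              # assumption: adjust to your label set
INPUT_SHAPE = (224, 224, 3)  # assumption: MobileNetV2 default input size

# Pre-trained MobileNetV2 encoder (ImageNet weights, no classifier head).
encoder = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE, include_top=False, weights="imagenet"
)

# Very simple decoder: upsample the encoder output back to input resolution.
x = encoder.output  # 7x7 feature map for a 224x224 input
x = tf.keras.layers.Conv2D(256, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D(size=32, interpolation="bilinear")(x)
outputs = tf.keras.layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)

model = tf.keras.Model(inputs=encoder.input, outputs=outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

A real decoder would add skip connections (FCN/U-Net style) for sharper masks, but even this stripped-down version is far smaller than Xception-based DeepLabV3+.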

In either case it is essential to use a pre-trained encoder with a well-trained feature representation. If you can't find a good initialization of your desired model for grayscale input images, just take a model pre-trained on RGB images, extend the pre-training on a grayscale dataset (you can convert basically any large RGB dataset to grayscale), and fine-tune the weights on grayscale input before training on your own data.
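One practical way to do the grayscale conversion while keeping an RGB-pretrained encoder usable is to convert the images to grayscale and then replicate the single channel back to three channels, so the input shape still matches the pre-trained weights. A minimal sketch with TensorFlow (the helper names here are just illustrative, not an existing API):

```python
import tensorflow as tf

def rgb_to_pseudo_gray(image):
    """Convert an RGB image to grayscale, then back to a 3-channel tensor.

    The result looks grayscale but still matches the 3-channel input
    expected by encoders pre-trained on RGB data (e.g. ImageNet weights).
    """
    gray = tf.image.rgb_to_grayscale(image)   # (H, W, 1)
    return tf.image.grayscale_to_rgb(gray)    # (H, W, 3), all channels equal

# Example: build a grayscale version of an existing RGB dataset.
# `rgb_dataset` is assumed to yield (image, label) pairs.
def to_grayscale_pairs(image, label):
    return rgb_to_pseudo_gray(image), label

# gray_dataset = rgb_dataset.map(to_grayscale_pairs)
```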

I hope this helps! Cheers, Frank