PyTorch AssertionError: Torch not compiled with CUDA enabled

In my case, I had not installed PyTorch with CUDA enabled in my Anaconda environment. Note that you need a CUDA-enabled GPU for this to work.

Follow this link to install PyTorch for the specific version of CUDA you have: https://pytorch.org/get-started/locally/

In my case I installed it with this command: conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
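
If you are not sure whether the build you installed actually has CUDA support, you can check it from Python using the standard PyTorch attributes:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version the build was compiled against (None for a CPU-only build)
print(torch.cuda.is_available())  # True only if a CUDA build and a compatible NVIDIA GPU/driver are present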


If you look into the data.py file, you can see the function:

def get_iterator(data, batch_size=32, max_length=30, shuffle=True, num_workers=4, pin_memory=True):
    cap, vocab = data
    return torch.utils.data.DataLoader(
        cap,
        batch_size=batch_size, shuffle=shuffle,
        collate_fn=create_batches(vocab, max_length),
        num_workers=num_workers, pin_memory=pin_memory)

which is called twice in the main.py file to get iterators for the train and dev data. If you look at the DataLoader class in PyTorch, there is a parameter called:

pin_memory (bool, optional) – If True, the data loader will copy tensors into CUDA pinned memory before returning them.

which defaults to True in the get_iterator function, and that is why you are getting this error. You can simply pass pin_memory=False when calling get_iterator, as follows:

train_data = get_iterator(get_coco_data(vocab, train=True),
                          batch_size=args.batch_size,
                          ...,
                          ...,
                          ...,
                          pin_memory=False)
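
If you prefer not to change every call site, another option is to make pin_memory default to whatever torch.cuda.is_available() reports. This is just a sketch of how the repo's data.py could be modified (create_batches is the repo's own helper, reused as-is):

import torch

def get_iterator(data, batch_size=32, max_length=30, shuffle=True,
                 num_workers=4, pin_memory=None):
    # If pin_memory is not given explicitly, enable it only when CUDA is available,
    # so the same code also runs on CPU-only machines without this assertion error.
    if pin_memory is None:
        pin_memory = torch.cuda.is_available()
    cap, vocab = data
    return torch.utils.data.DataLoader(
        cap,
        batch_size=batch_size, shuffle=shuffle,
        collate_fn=create_batches(vocab, max_length),
        num_workers=num_workers, pin_memory=pin_memory)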

Removing .cuda() works for me on macOS.


So I'm using a Mac, trying to create a neural net with CUDA like this:

import torch.nn as nn

net = nn.Sequential(
    nn.Linear(28 * 28, 100),
    nn.ReLU(),
    nn.Linear(100, 100),
    nn.ReLU(),
    nn.Linear(100, 10),
    nn.LogSoftmax(dim=1)
).cuda()  # this .cuda() call raises the AssertionError on machines without CUDA

My mistake was calling .cuda() even though Macs don't support CUDA. So if anyone is facing the same problem, just remove the .cuda() call and your code should work.
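
A more portable pattern (standard PyTorch, not specific to this code) is to select the device at runtime, so the same script works both with and without CUDA:

import torch
import torch.nn as nn

# Use the GPU when CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = nn.Sequential(
    nn.Linear(28 * 28, 100),
    nn.ReLU(),
    nn.Linear(100, 100),
    nn.ReLU(),
    nn.Linear(100, 10),
    nn.LogSoftmax(dim=1)
).to(device)  # on a CPU-only machine this simply keeps the model on the CPU, so it never raises the CUDA assertion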

Edit:

You can't do GPU computations without CUDA, and unfortunately for people who have Intel integrated graphics, CUDA can't be installed because it only works with NVIDIA GPUs.

If you have an NVIDIA graphics card, CUDA is probably already installed on your system; if not, you can install it.

You can buy an external graphics card compatible with your computer, but that alone will cost around $300, not to mention the connectivity issues.

Otherwise you can use: Google Colaboratory, Kaggle Kernels (free);
AWS, GCP (free credits), Paperspace (paid).