Tensor type mismatch when moving to GPU

This is happening because you are re-initializing self.input_layer in your forward() function.

The call self.network.cuda() moves all of the model's parameters onto the GPU, so every layer you create when constructing your FeedForward object ends up in CUDA memory. But when you re-initialize self.input_layer inside forward(), that layer's parameters are allocated on the CPU, not the GPU. The same goes for self.output_layer.
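
For illustration, here is a minimal, self-contained snippet of the same mismatch (the sizes and names are placeholders, not taken from your code):

import torch
import torch.nn as nn

gpu_layer = nn.Linear(4, 8).cuda()   # parameters live in CUDA memory
cpu_layer = nn.Linear(4, 8)          # a freshly created layer stays on the CPU
x = torch.autograd.Variable(torch.randn(2, 4).cuda())
y = gpu_layer(x)                     # fine: weights and input are both CUDA tensors
# z = cpu_layer(x)                   # raises the type mismatch: CPU weights vs CUDA input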


Firstly, to compute on your GPU, you have to convert your data to a CUDA tensor type.

In this case, it can be done simply as follows.

dtype = torch.cuda.FloatTensor
x = torch.autograd.Variable(x.type(dtype))

You can make the corresponding change inside your tensor_to_Variable function.
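
For example, the helper could look roughly like this (a sketch only; your actual tensor_to_Variable isn't shown, so the signature is an assumption):

import torch

def tensor_to_Variable(t):
    if torch.cuda.is_available():
        t = t.cuda()                      # move the data onto the GPU first
    return torch.autograd.Variable(t)     # then wrap it for autograd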

Secondly, to tell your "network" to expect CUDA tensors, call network.cuda().
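
For instance, assuming your class is the FeedForward shown below (the sizes are placeholders):

network = FeedForward(feature_size=10, hidden_size=32, output_size=2)
network.cuda()  # moves every layer registered in __init__ onto the GPU in one call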

Lastly, although this is not part of your question, you need not specify the batch size while configuring your feed-forward network. To elucidate:

1) Forward pass:

def forward(self, x):
    # Only use the layers created in __init__; do not re-create them here
    x = self.input_layer(x)
    x = self.middle_layer(x)
    x = self.output_layer(x)
    return x

2) Network initialization:

def __init__(self, feature_size, hidden_size, output_size):
    super(FeedForward, self).__init__()  # required so the layers below are registered
    self.input_layer = nn.Linear(feature_size, hidden_size)
    self.middle_layer = nn.Linear(hidden_size, hidden_size)
    self.output_layer = nn.Linear(hidden_size, output_size)

3) Preprocessing your data before packing it into a CUDA Variable:

your_tensor.view(batch_size, feature_size)
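
Putting the pieces together, here is a rough end-to-end sketch; the dimensions, batch size, and variable names are placeholders rather than values from your code, and it assumes FeedForward subclasses nn.Module with the __init__ and forward shown above:

import torch

feature_size, hidden_size, output_size, batch_size = 10, 32, 2, 4  # placeholders

network = FeedForward(feature_size, hidden_size, output_size)
network.cuda()                                   # parameters now live on the GPU

raw = torch.randn(batch_size * feature_size)     # stand-in for your flat input data
x = raw.view(batch_size, feature_size)           # reshape to (batch, features)
x = torch.autograd.Variable(x.type(torch.cuda.FloatTensor))

out = network(x)                                 # output shape: (batch_size, output_size)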

Hope this helps!