What does -1 mean in PyTorch view()?

Yes, it does behave like -1 in numpy.reshape(), i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements.

For instance:

import torch

x = torch.arange(6)

print(x.view(3, -1))      # inferred size will be 2 as 6 / 3 = 2
# tensor([[0, 1],
#         [2, 3],
#         [4, 5]])

print(x.view(-1, 6))      # inferred size will be 1 as 6 / 6 = 1
# tensor([[0, 1, 2, 3, 4, 5]])

print(x.view(1, -1, 2))   # inferred size will be 3 as 6 / (1 * 2) = 3
# tensor([[[0, 1],
#          [2, 3],
#          [4, 5]]])

# print(x.view(-1, 5))    # throws an error as there's no integer N such that 5 * N = 6
# RuntimeError: shape '[-1, 5]' is invalid for input of size 6

# print(x.view(-1, -1, 3))  # throws an error as only one dimension can be inferred
# RuntimeError: only one dimension can be inferred

From the PyTorch documentation:

>>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])

I guess this works similarly to np.reshape:

The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.
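As a quick sanity check of that parallel (a minimal sketch; assumes NumPy is available alongside PyTorch):

import numpy as np
import torch

a = np.arange(6)
t = torch.arange(6)

print(a.reshape(2, -1).shape)   # (2, 3): numpy infers the -1 as 6 / 2 = 3
print(t.view(2, -1).shape)      # torch.Size([2, 3]): same inference rule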

If you have a = torch.arange(1, 19) (18 elements), you can view it in various ways, like a.view(-1, 6), a.view(-1, 9), a.view(3, -1), etc., as checked below.
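For instance (runnable as-is):

import torch

a = torch.arange(1, 19)          # 18 elements
print(a.view(-1, 6).size())      # torch.Size([3, 6]), -1 inferred as 18 / 6 = 3
print(a.view(-1, 9).size())      # torch.Size([2, 9]), -1 inferred as 18 / 9 = 2
print(a.view(3, -1).size())      # torch.Size([3, 6]), -1 inferred as 18 / 3 = 6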


I love the answer that Benjamin gives https://stackoverflow.com/a/50793899/1601580

Yes, it does behave like -1 in numpy.reshape(), i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements.

but I think the weird edge case that might not be intuitive for you (it wasn't for me, at least) is calling it with a single -1, i.e. tensor.view(-1). My guess is that it works exactly the same way as always, except that since you are giving view a single number, it assumes you want a single dimension. If you had tensor.view(-1, Dnew), it would produce a tensor of two dimensions/indices, but would make sure the first dimension has the correct size according to the original dimensions of the tensor. Say you had (D1, D2) and Dnew = D1 * D2; then the new first dimension would be 1.

For real examples with code you can run:

import torch

x = torch.randn(1, 5)
x = x.view(-1)            # (1, 5) -> (5,)
print(x.size())

x = torch.randn(2, 4)
x = x.view(-1, 8)         # (2, 4) -> (1, 8), first dim inferred as 8 / 8 = 1
print(x.size())

x = torch.randn(2, 4)
x = x.view(-1)            # (2, 4) -> (8,)
print(x.size())

x = torch.randn(2, 4, 3)
x = x.view(-1)            # (2, 4, 3) -> (24,)
print(x.size())

output:

torch.Size([5])
torch.Size([1, 8])
torch.Size([8])
torch.Size([24])

History/Context

I feel a good example (a common case early on in PyTorch, before the Flatten layer was officially added) was this common code:

import torch.nn as nn

class Flatten(nn.Module):
    def forward(self, input):
        # input.size(0) usually denotes the batch size, so we want to keep that
        return input.view(input.size(0), -1)

for use with nn.Sequential (a minimal usage sketch follows below). In this light, x.view(-1) is a weird flatten layer that also collapses the batch dimension, i.e. it is missing the unsqueeze (adding back a dimension of size 1). Adding or removing that dimension of 1 is usually important for the code to actually run.
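For instance, here is a minimal sketch of how such a Flatten was typically dropped into nn.Sequential (the layer sizes here are made up for illustration):

import torch
import torch.nn as nn

class Flatten(nn.Module):
    def forward(self, input):
        # keep the batch dimension, flatten everything else
        return input.view(input.size(0), -1)

model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3),   # (N, 1, 8, 8) -> (N, 4, 6, 6)
    Flatten(),                        # (N, 4, 6, 6) -> (N, 144)
    nn.Linear(4 * 6 * 6, 10),
)

x = torch.randn(2, 1, 8, 8)           # batch of 2
print(model(x).size())                # torch.Size([2, 10])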


Example 2

If you are wondering what x.view(-1) does, it flattens the tensor. Why? Because it has to construct a new view with only one dimension and infer that dimension's size, so it flattens the whole tensor. In addition, this operation seems to avoid the very nasty bugs .resize() brings, since the order of the elements is respected. FYI, PyTorch now has a dedicated op for flattening: https://pytorch.org/docs/stable/generated/torch.flatten.html

#%%
"""
Summary: view(-1, ...) keeps the remaining dimensions as given and infers the size at the -1 location so that it
respects the original number of elements in the tensor. If it's just .view(-1), then the inferred dimension is the
only one, so it ends up flattening the tensor.

ref: my answer https://stackoverflow.com/a/66500823/1601580
"""
import torch

x = torch.arange(6)
print(x)

x = x.reshape(3, 2)
print(x)

print(x.view(-1))

output:

tensor([0, 1, 2, 3, 4, 5])
tensor([[0, 1],
        [2, 3],
        [4, 5]])
tensor([0, 1, 2, 3, 4, 5])

Note that the original 1-D tensor is returned!
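And since torch.flatten was mentioned above, a quick check (a minimal sketch) that it matches .view(-1) here:

import torch

x = torch.arange(6).reshape(3, 2)
print(torch.equal(x.view(-1), torch.flatten(x)))  # True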