Different methods for initializing embedding layer weights in PyTorch

Both of the following are the same:

import torch
import torch.nn as nn

# Method 1: call the tensor's in-place uniform_ method directly
torch.manual_seed(3)
emb1 = nn.Embedding(5, 5)
emb1.weight.data.uniform_(-1, 1)

# Method 2: use the utility function from torch.nn.init
torch.manual_seed(3)
emb2 = nn.Embedding(5, 5)
nn.init.uniform_(emb2.weight, -1.0, 1.0)

# With the same seed, both produce identical weights
assert torch.sum(torch.abs(emb1.weight.data - emb2.weight.data)).numpy() == 0

Every tensor has a uniform_ method which initializes it in place with values drawn from a uniform distribution. The nn.init module also has a uniform_ function which takes a tensor and initializes it with values from a uniform distribution. Both are the same, except the first uses the tensor's member function and the second a general utility function.
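
As a side note, newer PyTorch code tends to avoid going through .data; a minimal sketch of the same in-place initialization wrapped in torch.no_grad() (the layer name here is just illustrative):

import torch
import torch.nn as nn

emb = nn.Embedding(5, 5)
# Mutate the weight in place; no_grad keeps autograd from tracking it
with torch.no_grad():
    emb.weight.uniform_(-1, 1)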


To my knowledge, both forms are identical in effect, as @mujjiga answered.

In general, my preference goes toward the second option because:

  1. You have to access the .data attribute in the manual case.

  2. Using torch.nn.init is more explicit and readable (a little subjective).

  3. Allows others to modify your source code more easily (if they want to change the initialization scheme to, say, xavier_uniform_, only the name has to change); see the sketch after this list.
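
For instance, a minimal sketch of that swap (the layer here is illustrative; only the function name differs between schemes):

import torch.nn as nn

emb = nn.Embedding(5, 5)
# Switching the initialization scheme is a one-name change:
nn.init.uniform_(emb.weight, -1.0, 1.0)  # uniform init
nn.init.xavier_uniform_(emb.weight)      # Xavier/Glorot init (overwrites the above)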

A little off-topic: TBH, I think the torch.nn.init functions should be callable on the layer itself, as that would help initialize torch.nn.Sequential models using a simple model.apply(torch.nn.init.xavier_uniform_). Furthermore, it might be beneficial for them to initialize the bias tensor as well (or take an appropriate argument), but it is what it is.
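
Until then, the usual workaround is a small wrapper passed to model.apply; a sketch (the init_weights helper and the model below are my own, not part of PyTorch):

import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))

def init_weights(m):
    # model.apply passes modules, not tensors, hence the isinstance check
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

model.apply(init_weights)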

Tags: Python, PyTorch