KL Divergence for two probability distributions in PyTorch

The function kl_div does not follow the convention used in the Wikipedia article: it expects its first argument to be log-probabilities and its second argument to be the target distribution, so the arguments appear swapped relative to KL(P || Q).

I use the following:

import torch
import torch.nn.functional as F

# the same example as in the Wikipedia article
P = torch.tensor([0.36, 0.48, 0.16])
Q = torch.tensor([0.333, 0.333, 0.333])

# manual computation of KL(P || Q)
(P * (P / Q).log()).sum()
# tensor(0.0863), 10.2 µs ± 508 ns

# built-in: first argument is log-probabilities, second is the target distribution
F.kl_div(Q.log(), P, reduction='sum')
# tensor(0.0863), 14.1 µs ± 408 ns

The manual computation matches kl_div and is even slightly faster here.
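
With reduction='sum', kl_div computes sum(target * (target.log() - input)), i.e. the first argument is expected to already be in log-space. A minimal sketch to check this against the manual formula (variable names follow the snippet above):

import torch
import torch.nn.functional as F

P = torch.tensor([0.36, 0.48, 0.16])
Q = torch.tensor([0.333, 0.333, 0.333])

# element-wise definition used by kl_div: target * (log(target) - input)
manual = (P * (P.log() - Q.log())).sum()
builtin = F.kl_div(Q.log(), P, reduction='sum')
print(torch.allclose(manual, builtin))  # True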


Yes, PyTorch has a function named kl_div under torch.nn.functional that directly computes the KL divergence between tensors. Suppose you have tensors a and b of the same shape, where a holds log-probabilities and b holds the target probabilities. You can use the following code:

import torch.nn.functional as F
out = F.kl_div(a, b)  # a: log-probabilities, b: target probabilities

For more details, see the documentation of that function.
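
Note that by default kl_div uses reduction='mean', which averages over every element and therefore does not match the mathematical definition; reduction='batchmean' or reduction='sum' is usually what is intended. A hedged sketch of the common case where a and b start out as raw scores (the shapes and names here are illustrative, not from the answer above):

import torch
import torch.nn.functional as F

a = torch.randn(4, 10)  # raw scores for the predicted distribution
b = torch.randn(4, 10)  # raw scores for the target distribution

log_p = F.log_softmax(a, dim=1)   # first argument must be log-probabilities
q = F.softmax(b, dim=1)           # second argument must be probabilities
loss = F.kl_div(log_p, q, reduction='batchmean')  # sums per row, averages over the batch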