How to implement the Softmax derivative independently from any loss function?

The other answers are great; here I'd like to share a simple implementation of the forward/backward pass that works regardless of the loss function.

Below is a brief derivation of the backward pass for softmax. Only the gradient of the loss with respect to the probabilities is loss-function dependent, and that factor is not part of our implementation.
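In plain notation (my symbols: p = softmax(x), L is the loss, δ_kj the Kronecker delta), the derivation is just the chain rule through the softmax Jacobian:

    dL/dx_j   = Σ_k (dL/dp_k) * (dp_k/dx_j)       # chain rule
    dp_k/dx_j = p_k * (δ_kj - p_j)                # softmax Jacobian

The loss-dependent factor dL/dp_k is computed by the layer above and passed into backward() as bp_err; the Jacobian itself is loss-agnostic.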

The backward pass is verified below by numerical (finite-difference) gradient checking.

import numpy as np

class Softmax:
    def forward(self, x):
        mx = np.max(x, axis=1, keepdims=True)
        x = x - mx  # log-sum-exp trick
        e = np.exp(x)
        probs = e / np.sum(e, axis=1, keepdims=True)
        return probs

    def backward(self, x, probs, bp_err):
        dim = x.shape[1]
        output = np.empty(x.shape)
        for j in range(dim):
            d_prob_over_xj = - (probs * probs[:,[j]])  # -p_k * p_j, whether or not k == j
            d_prob_over_xj[:,j] += probs[:,j]   # when k == j, add p_j
            output[:,j] = np.sum(bp_err * d_prob_over_xj, axis=1)
        return output

def compute_manual_grads(x, pred_fn):
    eps = 1e-3
    batch_size, dim = x.shape

    grads = np.empty(x.shape)
    for i in range(batch_size):
        for j in range(dim):
            x[i,j] += eps
            y1 = pred_fn(x)

            x[i,j] -= 2*eps
            y2 = pred_fn(x)

            grads[i,j] = (y1 - y2) / (2*eps)
            x[i,j] += eps
    return grads

def loss_fn(probs, ys, loss_type):
    batch_size = probs.shape[0]
    # simplified MSE: only the target entry is pushed toward 1
    if loss_type=="mse":
        loss = np.sum((np.take_along_axis(probs, ys.reshape(-1,1), axis=1) - 1)**2) / batch_size
        values = 2 * (np.take_along_axis(probs, ys.reshape(-1,1), axis=1) - 1) / batch_size

    # cross ent
    if loss_type=="xent":
        loss = - np.sum( np.take_along_axis(np.log(probs), ys.reshape(-1,1), axis=1) ) / batch_size
        values = -1 / np.take_along_axis(probs, ys.reshape(-1,1), axis=1) / batch_size

    err = np.zeros(probs.shape)
    np.put_along_axis(err, ys.reshape(-1,1), values, axis=1)
    return loss, err

if __name__ == "__main__":
    batch_size = 10
    dim = 5

    x = np.random.rand(batch_size, dim)
    ys = np.random.randint(0, dim, batch_size)
    for loss_type in ["mse", "xent"]:
        S = Softmax()
        probs = S.forward(x)
        loss, bp_err = loss_fn(probs, ys, loss_type)

        grads = S.backward(x, probs, bp_err)

        def pred_fn(x, ys):
            pred = S.forward(x)
            loss, err = loss_fn(pred, ys, loss_type)
            return loss

        manual_grads = compute_manual_grads(x, lambda x: pred_fn(x, ys))

        # compare both grads
        print(f"loss_type = {loss_type}, grad diff = {np.sum((grads - manual_grads)**2) / batch_size}")

It should be like this (x is the input to the softmax layer and dy is the delta coming from the loss above it):

    # y is the softmax output saved from the forward pass, dy the upstream delta
    dx = y * dy
    s = dx.sum(axis=dx.ndim - 1, keepdims=True)
    dx -= y * s

    return dx

But the way you compute the error should be:

    yact = activation.compute(x)
    ycost = cost.compute(yact)
    dsoftmax = activation.delta(x, cost.delta(yact, ycost, ytrue)) 

Explanation: Because the delta function is part of the backpropagation algorithm, its responsibility is to multiply the incoming error vector (dy in my code, outgoing in your case) by the Jacobian of the compute(x) function evaluated at x. If you work out what this Jacobian looks like for softmax [1] and then multiply it from the left by the vector dy, after a bit of algebra you'll find that you get something that corresponds to my Python code.

[1] https://stats.stackexchange.com/questions/79454/softmax-layer-in-a-neural-network
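To make that concrete, here is a small self-contained check (the softmax() helper and the names y, dy, J are my own, not from the answer) that an explicit Jacobian-vector product gives the same result as the three-line formula above:

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.random.randn(5)
y = softmax(x)             # softmax output (the y in the snippet above)
dy = np.random.randn(5)    # delta coming from the loss above

# explicit Jacobian J[k, j] = y_k * (delta_kj - y_j), multiplied from the left by dy
J = np.diag(y) - np.outer(y, y)
dx_via_jacobian = dy @ J

# the three-line formula
dx = y * dy
s = dx.sum(axis=dx.ndim - 1, keepdims=True)
dx -= y * s

print(np.allclose(dx_via_jacobian, dx))   # True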


Mathematically, the derivative of the softmax output σ(j) with respect to the logit Zi (for example, Zi = Wi*X) is

    ∂σ(j)/∂Zi = σ(j) * (δ_ij - σ(i))

where δ_ij is the Kronecker delta.

If you implement it iteratively:

def softmax_grad(s):
    # input s is the softmax of the original input x, given as a 1-D array of length n
    # e.g. s = np.array([0.3, 0.7]) for x = np.array([0, 1])

    # initialise the n x n Jacobian matrix
    jacobian_m = np.diag(s)

    for i in range(len(jacobian_m)):
        for j in range(len(jacobian_m)):
            if i == j:
                jacobian_m[i][j] = s[i] * (1 - s[i])
            else: 
                jacobian_m[i][j] = -s[i] * s[j]
    return jacobian_m

Test:

In [95]: x
Out[95]: array([1, 2])

In [96]: softmax(x)
Out[96]: array([ 0.26894142,  0.73105858])

In [97]: softmax_grad(softmax(x))
Out[97]: 
array([[ 0.19661193, -0.19661193],
       [-0.19661193,  0.19661193]])

If you implement it in a vectorized way:

soft_max = softmax(x)

def softmax_grad(softmax):
    # reshape the softmax output to 2-D (a column vector) so np.dot gives a matrix product
    s = softmax.reshape(-1,1)
    return np.diagflat(s) - np.dot(s, s.T)

softmax_grad(soft_max)

#array([[ 0.19661193, -0.19661193],
#       [-0.19661193,  0.19661193]])

Just in case you are processing in batches, here is a NumPy implementation (tested against TensorFlow). However, I would suggest avoiding the full Jacobian tensor by folding it into the cross-entropy loss, which leads to a very simple and efficient expression (see the sketch after the test below).

import numpy as np
import tensorflow as tf

def softmax(z):
  exps = np.exp(z - np.max(z, axis=1, keepdims=True))  # stabilise each row (log-sum-exp trick)
  return exps / np.sum(exps, axis=1, keepdims=True)

def softmax_jacob(s):
  return np.einsum('ij,jk->ijk', s, np.eye(s.shape[-1])) \
       - np.einsum('ij,ik->ijk', s, s)

def np_softmax_test(z):
  return softmax_jacob(softmax(z))

def tf_softmax_test(z):
  z = tf.constant(z, dtype=tf.float32)
  with tf.GradientTape() as g:
    g.watch(z)
    a = tf.nn.softmax(z) 
  jacob = g.batch_jacobian(a, z)
  return jacob.numpy()

z = np.random.randn(3, 5)
print(np.all(np.isclose(np_softmax_test(z), tf_softmax_test(z))))  # expect True
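To illustrate that last remark, here is a minimal sketch of the folded version (my own example: y_true and the variable names are not from the original answer, and it reuses softmax, softmax_jacob and z from the test above). For mean cross-entropy with one-hot targets, contracting the upstream gradient with the batched Jacobian collapses to (softmax(z) - y_true) / batch_size, so the Jacobian tensor never needs to be materialised:

y_true = np.eye(5)[np.random.randint(0, 5, size=3)]   # one-hot targets, shape (3, 5), matching z

p = softmax(z)
dL_dp = -y_true / p / z.shape[0]                       # gradient of mean cross-entropy w.r.t. the probs

# full route: contract the upstream gradient with the batched Jacobian
dL_dz_full = np.einsum('ij,ijk->ik', dL_dp, softmax_jacob(p))

# folded route: the closed form (p - y_true) / batch_size
dL_dz_folded = (p - y_true) / z.shape[0]

print(np.all(np.isclose(dL_dz_full, dL_dz_folded)))    # True

This is the same simplification behind fused ops such as tf.nn.softmax_cross_entropy_with_logits.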