Why is Python broadcasting in the example below slower than a simple loop?

Memory access.

First off, the broadcast version can be simplified to

def norm_bcast(M, v):
    return np.sum((M - v)**2, axis=1)

This still runs slightly slower than the looped version. Now, conventional wisdom says that vectorized code using broadcasting should always be faster, which in many cases isn't true (I'll shamelessly plug another of my answers here). So what's happening?

As I said, it comes down to memory access.

In the broadcast version, v is subtracted from every row of M. By the time the last row of M is processed, the results of processing the first row have already been evicted from cache, so for the second step these differences are loaded into cache again and squared. Finally, they are loaded and processed a third time for the summation. Since M is quite large, parts of the cache are cleared on each step to accommodate all of the data.
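To make these passes explicit, the same computation as norm_bcast can be spelled out with named temporaries (this is just a restatement of the code above, not a faster variant):

def norm_bcast_steps(M, v):
    diff = M - v               # pass 1: reads all of M, writes a full-size temporary
    sq = diff**2               # pass 2: re-reads diff (no longer in cache), writes another temporary
    return np.sum(sq, axis=1)  # pass 3: re-reads sq for the reduction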

In the looped version each row is processed completely in one smaller step, leading to fewer cache misses and overall faster code.
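The looped version is defined in the question and not reproduced here; it presumably looks roughly like this row-by-row sketch (an assumption, the exact code is in the question):

def norm_loop(M, v):
    n = M.shape[0]
    d = np.zeros(n)
    for i in range(n):
        # difference, square and sum happen while the row is still in cache
        d[i] = np.sum((M[i] - v)**2)
    return d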

Lastly, it is possible to avoid this with some array operations by using einsum. This function allows mixing matrix multiplications and summations. First, I'll point out it's a function that has rather unintuitive syntax compared to the rest of numpy, and potential improvements often aren't worth the extra effort to understand it. The answer may also be slightly different due to rounding errors. In this case it can be written as

def norm_einsum(M, v):
    tmp = M-v
    return np.einsum('ij,ij->i', tmp, tmp)

This reduces it to two operations over the entire array - a subtraction, and calling einsum, which performs the squaring and summation. This gives a slight improvement:

%timeit norm_bcast(M, v)
30.1 ms ± 116 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit norm_loop(M, v)
25.1 ms ± 37.3 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit norm_einsum(M, v)
21.7 ms ± 65.3 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
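As a sanity check, all three versions agree on the same input (stand-in random data below; the actual M and v are defined in the question, and einsum may differ in the last few bits due to rounding):

M = np.random.random_sample((1000, 100))
v = np.random.random_sample(100)
assert np.allclose(norm_bcast(M, v), norm_loop(M, v))
assert np.allclose(norm_bcast(M, v), norm_einsum(M, v))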

Squeezing out maximum performance

With the vectorized operations you clearly have bad cache behaviour. But the calculation itself is also slow because it doesn't exploit modern SIMD instructions (AVX2, FMA). Fortunately it isn't really complicated to overcome these issues.

Example

import numpy as np
import numba as nb

@nb.njit(fastmath=True, parallel=True)
def norm_loop_improved(M, v):
    n = M.shape[0]
    d = np.empty(n, dtype=M.dtype)

    # make the arrays contiguous; non-contiguous input
    # would prevent SIMD vectorization of the inner loop
    M = np.ascontiguousarray(M)
    v = np.ascontiguousarray(v)

    for i in nb.prange(n):  # parallel over the rows of M
        dT = 0.
        for j in range(v.shape[0]):
            dT += (M[i, j] - v[j]) * (M[i, j] - v[j])
        d[i] = dT
    return d
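Keep in mind that the first call to a Numba-jitted function includes JIT compilation time, so warm it up once before timing:

M = np.random.random_sample((1000, 1000))
v = np.random.random_sample(1000)
norm_loop_improved(M, v)          # first call triggers compilation
%timeit norm_loop_improved(M, v)  # subsequent calls measure the compiled code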

Performance

M = np.random.random_sample((1000, 1000))
norm_loop_improved: 0.11 ms**, 0.28 ms
norm_loop: 6.56 ms 
norm_einsum: 3.84 ms

M = np.random.random_sample((10000, 10000))
norm_loop_improved: 34 ms
norm_loop: 223 ms
norm_einsum: 379 ms

** Be careful when measuring performance

The first result (0.11 ms) comes from calling the function repeatedly with the same data. This would require 77 GB/s read throughput from RAM, which is far more than my dual-channel DDR3 RAM is capable of. Because calling a function with the same input parameters over and over isn't realistic at all, we have to modify the measurement.

To avoid this issue we have to call the function with at least two different data sets (8 MB L3 cache, 8 MB of data each), so that the caches are flushed between calls, and then divide the measured time by two.
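A minimal sketch of that measurement (the shapes are chosen so each array holds roughly 8 MB of float64 data):

import time

M1 = np.random.random_sample((1000, 1000))  # ~8 MB
M2 = np.random.random_sample((1000, 1000))  # ~8 MB
v = np.random.random_sample(1000)

runs = 100
start = time.perf_counter()
for _ in range(runs):
    norm_loop_improved(M1, v)   # working on M1 evicts M2 from the L3 cache...
    norm_loop_improved(M2, v)   # ...and vice versa, so each call starts cold
per_call_ms = (time.perf_counter() - start) / (2 * runs) * 1e3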

The relative performance of these methods also differs with array size (have a look at the einsum results).
