Comparing Python, Numpy, Numba and C++ for matrix multiplication

You can still optimize these loops by improving memory access; your function could look like this (assuming the matrices are 1000x1000):

import numpy as np

CS = 10        # chunk (tile) size
NCHUNKS = 100  # number of chunks per dimension (CS * NCHUNKS = 1000)

def dot_chunked(A, B):
    C = np.zeros((1000, 1000))

    for i in range(NCHUNKS):
        for j in range(NCHUNKS):
            for k in range(NCHUNKS):
                # multiply one CS x CS block of A with one CS x CS block of B
                for ii in range(i*CS, (i+1)*CS):
                    for jj in range(j*CS, (j+1)*CS):
                        for kk in range(k*CS, (k+1)*CS):
                            C[ii, jj] += A[ii, kk] * B[kk, jj]
    return C

Explanation: the loops i and ii together obviously perform the same iteration as the single loop i did before, and the same holds for j and k. But this time regions of A and B of size CSxCS can be kept in the cache (I guess) and used more than once.

You can play around with CS and NCHUNKS. For me, CS=10 and NCHUNKS=100 worked well. When using numba.jit, this accelerates the code from 7 s to 850 ms (note that I use 1000x1000 matrices, while the graphs above were run with 3x3x10^5, so it is a somewhat different scenario).
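For concreteness, here is a minimal sketch of how the Numba speedup can be applied to the chunked function above (assuming Numba is installed; njit is shorthand for numba.jit(nopython=True), and the timings above are from my machine):

import numpy as np
from numba import njit

# JIT-compile the chunked implementation defined above; the first call pays
# the compilation cost, subsequent calls run the compiled machine code.
dot_chunked_jit = njit(dot_chunked)

A = np.random.rand(1000, 1000)
B = np.random.rand(1000, 1000)

C = dot_chunked_jit(A, B)        # compiles on the first call
assert np.allclose(C, A @ B)     # sanity check against NumPy

When benchmarking, call the jitted function once before timing it, so that the compilation time is not counted.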


What I would recommend

If you want maximum efficiency, you should use a dedicated linear algebra library, the classic choice being the BLAS/LAPACK libraries. There are a number of implementations, e.g. Intel MKL. What you write yourself is NOT going to outperform these hyper-optimized libraries.

Matrix-matrix multiplication is going to be the dgemm routine: d stands for double, ge for general, and mm for matrix-matrix multiply. If your problem has additional structure, a more specific function may be called for additional speedup.

Note that NumPy's dot ALREADY calls dgemm! You're probably not going to do better.
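As a rough illustration (assuming SciPy is available), you can call dgemm directly through scipy.linalg.blas and check that it agrees with np.dot:

import numpy as np
from scipy.linalg.blas import dgemm

A = np.random.rand(1000, 1000)
B = np.random.rand(1000, 1000)

C_blas = dgemm(alpha=1.0, a=A, b=B)   # C = 1.0 * A @ B via the BLAS dgemm routine
C_np = np.dot(A, B)                   # NumPy dispatches to the same BLAS under the hood

assert np.allclose(C_blas, C_np)

Both calls end up in the same optimized BLAS kernel, which is why there is little to gain from calling dgemm yourself rather than just using np.dot.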

Why your C++ is slow

Your classic, intuitive algorithm for matrix-matrix multiplication turns out to be slow compared to what's possible. Writing code that takes advantage of how processors cache data, etc., yields important performance gains. The point is that tons of smart people have devoted their lives to making matrix-matrix multiplication extremely fast, and you should use their work rather than reinvent the wheel.


Definitely use -O3 for optimization. This turns on auto-vectorization, which should significantly speed up your code.

Numba is supposed to do that already.


In your current implementation the compiler is most likely unable to auto-vectorize the innermost loop because its size is 3. Also, m2 is accessed in a "jumpy" way. Swapping the loops so that iterating over p is the innermost loop will make it faster (col will not cause "jumpy" data access) and the compiler should be able to do a better job (auto-vectorize).

for (int row = 0; row < m; row++) {
    for (int k = 0; k < n; k++) {
        // innermost loop runs over col, so m2 and m3 are read/written contiguously
        for (int col = 0; col < p; col++) {
            m3.data_[p*row + col] += m1.data_[n*row + k] * m2.data_[p*k + col];
        }
    }
}

On my machine the original C++ implementation for p = 10^6 elements, built with the g++ dot.cpp -std=c++11 -O3 -o dot flags, takes 12 ms, and the above implementation with swapped loops takes 7 ms.