Determinant of large matrices: there's GOTTA be a faster way

No, this is not the way that any (sane) person would compute a determinant. It is not even the way a computer would calculate one! Cofactor expansion requires a sum over $n!$ terms, which becomes infeasible even for a computer around $n = 15$ or so. An elementary way to compute a determinant quickly is Gaussian elimination.

We know a few facts about the determinant:

  1. Adding a scalar multiple of one row to another does not change the determinant.
  2. Interchanging two rows negates the determinant.
  3. Scaling a row by a constant multiplies the determinant by that constant.
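These three facts are easy to check numerically. Here is a quick sketch with NumPy (the matrix is just an arbitrary example, chosen to match the worked example below):

```python
import numpy as np

# An example 3x3 matrix (the same one used in the worked example below).
A = np.array([[-4.0, 3, 3],
              [ 8.0, 7, 3],
              [ 4.0, 3, 3]])
d = np.linalg.det(A)

# Fact 1: adding a multiple of one row to another leaves det unchanged.
B = A.copy()
B[1] += 2 * B[0]
assert np.isclose(np.linalg.det(B), d)

# Fact 2: interchanging two rows negates the determinant.
C = A[[0, 2, 1]]
assert np.isclose(np.linalg.det(C), -d)

# Fact 3: scaling a row by c multiplies the determinant by c.
D = A.copy()
D[2] *= 5
assert np.isclose(np.linalg.det(D), 5 * d)
```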

So, now take the matrix

$$ A = \begin{bmatrix}-4 & 3 &3 \\ 8 & 7 & 3 \\ 4 & 3 & 3\end{bmatrix} $$

By fact (1) above, I can add twice the top row to the middle row, and also the top row to the bottom row, without affecting the determinant. So:

$$ \det A = \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 13 & 9 \\ 0 & 6 & 6\end{bmatrix}$$

Now, I can interchange the bottom two rows (picking up a sign) and pull a factor of $6$ out of the row of $6$'s, for a combined factor of $-6$ outside the determinant:

$$ \det A = - \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 6 & 6 \\ 0 & 13 & 9 \end{bmatrix} = - 6 \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 1 & 1 \\ 0 & 13 & 9 \end{bmatrix}$$

Now I can subtract $13$ times the middle row from the bottom row:

$$ \det A = - 6 \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 1 & 1 \\ 0 & 13 & 9 \end{bmatrix} = - 6 \det \begin{bmatrix}-4 & 3 &3 \\ 0 & 1 & 1 \\ 0 & 0 & -4 \end{bmatrix}$$

Now the matrix is upper-triangular, and so the determinant is just the product of the diagonal entries. So we have

$$ \det A = -6 (-4 \times 1 \times -4) = -96 $$

So there you have it: computing a determinant is as easy as reducing the matrix to row-echelon form, as long as you keep track of the row swaps and scalings along the way.
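The whole procedure can be sketched in a few lines of Python with NumPy. This is an illustration of the idea rather than production code; partial pivoting is included because it is what you'd want numerically:

```python
import numpy as np

def det_by_elimination(M):
    """Determinant via Gaussian elimination with partial pivoting (a sketch)."""
    A = np.array(M, dtype=float)
    n = A.shape[0]
    sign = 1.0
    for k in range(n):
        # Pivot: bring the largest entry in column k up to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if A[p, k] == 0:
            return 0.0                # a zero column below the diagonal: singular
        if p != k:
            A[[k, p]] = A[[p, k]]
            sign = -sign              # fact (2): each swap negates the determinant
        # Fact (1): adding multiples of the pivot row leaves the determinant alone.
        A[k+1:] -= np.outer(A[k+1:, k] / A[k, k], A[k])
    # The matrix is now upper triangular: det is the product of the diagonal.
    return sign * np.prod(np.diag(A))

A = [[-4, 3, 3], [8, 7, 3], [4, 3, 3]]
print(det_by_elimination(A))  # approximately -96, matching the hand computation
```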


In general, determinants of large matrices are not computed by cofactor expansion but rather by factoring the matrix into factors whose determinants are easy to compute.

For example, you can factor an $n$ by $n$ matrix $A$ as

$A=P^{T}LU$

where $P$ is a permutation matrix, $L$ is lower triangular, and $U$ is upper triangular. This computation takes $O(n^{3})$ time for an $n$ by $n$ matrix $A$.

Since

$\det(A)=\det(P^{T})\det(L)\det(U)$

and the determinants of $P^{T}$, $L$, and $U$ are easy to compute (the determinant of a lower or upper triangular matrix is the product of its diagonal entries, and the determinant of a permutation matrix is $\pm 1$ depending on the parity of the permutation, which you can read off by cofactor expansion), you can quickly find the determinant of $A$.
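As a sketch of this in Python, assuming SciPy's `lu_factor` (which returns the packed LU factors together with the pivot indices from partial pivoting):

```python
import numpy as np
from scipy.linalg import lu_factor

def det_via_lu(A):
    """det(A) from an LU factorization with partial pivoting (a sketch)."""
    lu, piv = lu_factor(A)
    # det(L) = 1 (unit diagonal), det(U) = product of U's diagonal (stored in lu),
    # det(P^T) = (-1)^(number of row swaps recorded in piv).
    sign = (-1.0) ** np.sum(piv != np.arange(len(piv)))
    return sign * np.prod(np.diag(lu))

A = np.array([[-4.0, 3, 3], [8, 7, 3], [4, 3, 3]])
print(det_via_lu(A))  # approximately -96, agreeing with np.linalg.det(A)
```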

If you want to try a computational experiment, test MATLAB's det() function on randomly generated matrices of size $n$ by $n$ for $n=1000, 2000,\ldots, 10000$ and use tic/toc to see how long the computation takes.
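If MATLAB isn't at hand, roughly the same experiment can be run in Python with NumPy, using `time.perf_counter` in the role of tic/toc (the sizes here are scaled down so it finishes quickly):

```python
import time
import numpy as np

# Time det() on random n-by-n matrices and watch for the O(n^3) growth:
# doubling n should roughly multiply the running time by 8.
for n in [500, 1000, 2000]:
    A = np.random.rand(n, n)
    t0 = time.perf_counter()
    np.linalg.det(A)
    dt = time.perf_counter() - t0
    print(f"n = {n:5d}: {dt:.3f} s")
```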


If the entries of your matrix belong to a field, then you can compute the determinant easily using either LPU decomposition or PLU decomposition. These algorithms take $O \left(n^3\right)$ time, where $n$ is the size of the matrix.

If the entries of your matrix belong to an arbitrary commutative ring, then there are still $O \left(n^4\right)$-time algorithms to compute the determinant. See Günter Rote, Division-free algorithms for the determinant and the Pfaffian: Algebraic and Combinatorial Approaches, §2. (If I understand correctly, the rough idea of at least one of the algorithms is to replace the matrix $A \in R^{n\times n}$ by the matrix $1 - At$ over the power series ring $R \left[\left[t\right]\right]$, then compute the determinant of the latter via LU decomposition (which always exists in the power series ring), and then obtain $\det A$ as a coefficient of this polynomial. Of course, power series per se are uncomputable, but here only the first few coefficients need to be taken care of.)
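A full implementation of Rote's algorithm is more than a comment can carry, but a related classical scheme in a similar spirit, the Bareiss fraction-free algorithm, shows how one can avoid leaving the base ring: every division it performs is exact. (To be clear, this is my own illustrative sketch, not the algorithm from the paper, and it requires an integral domain such as $\mathbb{Z}$ rather than an arbitrary commutative ring.)

```python
def det_bareiss(M):
    """Exact integer determinant via the Bareiss fraction-free algorithm.
    Every // below is an exact division, so all intermediates stay integers."""
    A = [row[:] for row in M]   # work on a copy
    n = len(A)
    sign = 1
    prev = 1                    # previous pivot (leading principal minor)
    for k in range(n - 1):
        if A[k][k] == 0:
            # Find a nonzero pivot to swap in; if none exists, det = 0.
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r] = A[r], A[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

print(det_bareiss([[-4, 3, 3], [8, 7, 3], [4, 3, 3]]))  # -96, exactly
```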

Of course, the algorithms cannot do magic. The running time estimates of $O \left(n^3\right)$ and $O \left(n^4\right)$ assume that the fundamental operations of the base ring ($+$, $\cdot$, $-$ and taking inverses in the case of a field) require constant time and that the overhead of storing and copying matrix entries is negligible. This assumption is justified if the base ring is a finite field or (to some extent) if the base "ring" is the floating-point reals (although these don't really form a ring, so you might end up with completely wrong results due to numerical instability), but not if the base ring is the integers (because integers get harder to work with the larger they become), the rational numbers or a polynomial ring. When the entries of your matrix are algebraically independent indeterminates, you should not expect anything too fast, at least if you want the result in expanded form; after all, the result will simply be the general formula for the determinant of an $n \times n$-matrix, which "contains" a list of all $n!$ permutations of $\left\{1,2,\ldots,n\right\}$, and clearly such a list requires at least $O \left(n!\right)$ time to write down! There might be some faster algorithms that produce non-expanded versions (similarly to Horner's scheme for polynomial evaluation), but I wouldn't expect anything with polynomial running time unless you allow the algorithm to return a recursion instead of an explicit sum-of-products-of-sums-of-products, etc.