Determining whether a symmetric matrix is positive-definite (algorithm)

Mathcast had it; in fact, in practical work one uses the Cholesky decomposition $\mathbf G\mathbf G^T$ to test efficiently whether a symmetric matrix is positive definite. The only change you need to make to turn your decomposition routine into a check for positive definiteness is to verify, before each of the required square roots, that the quantity to be rooted is positive. If it is zero, your matrix is at best positive semidefinite; if it is negative, then your symmetric matrix isn't positive (semi)definite at all. (Programming-wise, it should be easy to throw an exception within a loop! If your language has no way to break out of a loop, however, you have my pity.)
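To make that concrete, here is a minimal sketch in Python/NumPy; the function name, return strings, and tolerance are my own illustrative choices, not a standard API. It is a column-by-column Cholesky that bails out the moment the quantity to be square-rooted fails to be positive:

```python
import numpy as np

def cholesky_check(A, tol=1e-12):
    """Attempt A = G G^T column by column, bailing out early.

    Returns 'positive definite', 'positive semidefinite (at best)',
    or 'indefinite'.  Names, return strings and `tol` are illustrative.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    G = np.zeros((n, n))
    for j in range(n):
        # The quantity whose square root would become G[j, j].
        d = A[j, j] - G[j, :j] @ G[j, :j]
        if d < -tol:
            return "indefinite"                    # negative pivot: no Cholesky factor exists
        if d <= tol:
            return "positive semidefinite (at best)"  # zero pivot: stop before dividing by zero
        G[j, j] = np.sqrt(d)
        # Fill in the rest of column j of G.
        G[j + 1:, j] = (A[j + 1:, j] - G[j + 1:, :j] @ G[j, :j]) / G[j, j]
    return "positive definite"
```

If you'd rather not roll your own, calling `np.linalg.cholesky(A)` inside a `try`/`except np.linalg.LinAlgError` block gives you a plain yes/no answer for positive definiteness.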

Alternatively, one uses the $\mathbf L\mathbf D\mathbf L^T$ decomposition here (an equivalent approach, in the sense that $\mathbf G=\mathbf L\sqrt{\mathbf D}$); if any nonpositive entries show up in $\mathbf D$, then your matrix is not positive definite. Note that one can set things up so that the loop computing the decomposition is broken out of as soon as a nonpositive element of $\mathbf D$ is encountered, before the decomposition is finished!
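A corresponding sketch of the $\mathbf L\mathbf D\mathbf L^T$ variant, again with illustrative names and an unpivoted loop that breaks as soon as a nonpositive pivot appears:

```python
import numpy as np

def ldlt_is_positive_definite(A, tol=1e-12):
    """Unpivoted LDL^T with an early exit on a nonpositive entry of D.

    A minimal sketch; the name and `tol` are illustrative choices.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        # Pivot D[j] = A[j, j] - sum_k L[j, k]^2 D[k].
        D[j] = A[j, j] - (L[j, :j] ** 2) @ D[:j]
        if D[j] <= tol:
            return False                  # nonpositive pivot: break the loop early
        # Column j of L: L[i, j] = (A[i, j] - sum_k L[i, k] D[k] L[j, k]) / D[j].
        L[j + 1:, j] = (A[j + 1:, j] - (L[j + 1:, :j] * D[:j]) @ L[j, :j]) / D[j]
    return True
```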

In any event, I don't understand why people shy away from using Cholesky here; the relevant theorem is "a symmetric matrix is positive definite if and only if it possesses a Cholesky decomposition". It's a biconditional; exploit it! It's far cheaper than successively checking leading principal minors or computing an eigendecomposition, FWIW.


I don't think there is a simpler way than computing a decomposition or determinants unless your matrix has a special form. For example, if it is a sample covariance matrix, then it is positive semidefinite by construction.
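As a quick numerical illustration of that last point (sizes and seed are arbitrary), assuming NumPy: for a sample covariance matrix $\mathbf S$, $\mathbf v^T\mathbf S\mathbf v$ is a scaled squared norm of the centered data applied to $\mathbf v$, so it can never be negative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))        # 5 variables, 20 samples (illustrative sizes)
S = np.cov(X)                           # 5 x 5 sample covariance; PSD by construction
print(np.linalg.eigvalsh(S).min() >= -1e-12)   # True, up to rounding error
```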


Other possibilities include using the conjugate gradient algorithm to check positive definiteness: run it on $\mathbf A\mathbf x=\mathbf b$ and watch for a search direction $\mathbf p$ with $\mathbf p^T\mathbf A\mathbf p\le 0$, which certifies that the matrix is not positive definite. In theory, the method terminates after at most $n$ iterations ($n$ being the dimension of your matrix); in practice, it may have to run a bit longer. It is trivial to implement. You can also use a variant of the Lanczos method to estimate the smallest eigenvalue of your matrix (which is much easier than computing all the eigenvalues!). Pick up a book on numerical linear algebra (check the SIAM collection).
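A sketch of the conjugate gradient probe, assuming NumPy and a random right-hand side; the function name and tolerances are illustrative, and a `True` result is numerical evidence rather than a proof:

```python
import numpy as np

def cg_definiteness_probe(A, max_iter=None, tol=1e-10):
    """Run CG on A x = b with a random b; bail out if a direction of
    nonpositive curvature (p^T A p <= 0) shows up.

    Returns False if such a direction is found (A is not positive definite),
    True if CG runs to completion without one.  Illustrative names/tolerances.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    max_iter = n if max_iter is None else max_iter
    b = np.random.default_rng().standard_normal(n)
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        curvature = p @ Ap
        if curvature <= 0:
            return False                  # found v with v^T A v <= 0
        alpha = rs / curvature
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break                         # system solved: consistent with positive definiteness
        p = r + (rs_new / rs) * p
        rs = rs_new
    return True
```

For the Lanczos route, `scipy.sparse.linalg.eigsh(A, k=1, which='SA')` (ARPACK's implicitly restarted Lanczos) estimates the smallest eigenvalue without computing the whole spectrum.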

At any rate, recall that such methods (and the Cholesky decomposition) check numerical positive definiteness. It is possible for the smallest eigenvalue of your matrix to be, say, $10^{-16}$, and for cancellation errors due to finite-precision arithmetic to cause your Cholesky factorization (or conjugate gradient run) to break down anyway.