Using the Krylov method for Solve: Speeding up a SparseArray calculation

I found a way to dramatically improve the performance of this algorithm by using the undocumented function SparseArray`KrylovLinearSolve. The key advantage of this function is that it seems to be a near-analog of MATLAB's pcg, and as such accepts as a first argument either:

- a square matrix, or
- a function that returns a vector of length equal to the length of the second argument.

One may discover this by giving incorrect arguments and noting the advice given in the messages produced as a result, in much the same way as one discovers the correct arguments for any undocumented function. In this case the message is SparseArray`KrylovLinearSolve::krynfa.
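For example (a sketch only; the exact message and its wording are undocumented and may differ between versions), one can deliberately pass a nonsensical first argument and read off the accepted forms from the resulting message:

```mathematica
(* Deliberately pass a bad first argument to provoke an explanatory message *)
SparseArray`KrylovLinearSolve["not a matrix or function", {1., 2., 3.}]
(* the messages emitted describe the argument forms the function accepts *)
```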

You only need to change one line in your code to use it, namely:

s = SparseArray`KrylovLinearSolve[
     alph l.# + AT[A[#]] &, g, 
     Method -> "ConjugateGradient", "Preconditioner" -> (p.# &), 
     Tolerance -> tol, MaxIterations -> maxit
    ];

where maxit should preferably be Automatic (which means 10 times the size of the system to be solved) or larger. With the data given in your question it takes a few hundred iterations to converge to a tolerance of $10^{-4}$, but each iteration is quite fast, so if performance is still an issue it makes more sense to adjust the tolerance than the number of iterations. However, while I did not investigate this, needing this many iterations to reach a relatively loose tolerance may of course be symptomatic of a poorly conditioned system, so a different preconditioner, or the biconjugate gradient stabilized method ("BiCGSTAB"), could perhaps reduce the number of iterations required.
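As a self-contained illustration (a sketch with made-up data rather than the question's variables alph, l, A, AT, p, and g), here is the same kind of call applied to a small symmetric positive definite system, with the matrix supplied only as a matrix-vector product function:

```mathematica
(* Small SPD tridiagonal test system *)
n = 100;
A = SparseArray[{{i_, i_} -> 2., {i_, j_} /; Abs[i - j] == 1 -> -1.}, {n, n}];
b = ConstantArray[1., n];

(* First argument is a function computing A.x, not the matrix itself *)
x = SparseArray`KrylovLinearSolve[
     A.# &, b,
     Method -> "ConjugateGradient",
     Tolerance -> 10.^-8, MaxIterations -> Automatic
    ];

Norm[A.x - b]  (* residual; small if the iteration converged *)
```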

You will note that the options are exactly the same as for LinearSolve's "Krylov" method, so we may surmise that this function is probably called more or less directly by LinearSolve when Method -> "Krylov" is specified. In fact, if we assume that this is indeed the case and try

s = LinearSolve[
     alph l.# + AT[A[#]] &, g, 
     Method -> {"Krylov",
       Method -> "ConjugateGradient", "Preconditioner" -> (p.# &), 
       Tolerance -> tol, MaxIterations -> maxit
      }
    ];

we find that it works equally well, so evidently LinearSolve does in fact provide the same functionality as pcg as far as the first argument is concerned, although this does not appear to be documented anywhere as far as I can tell. So, the overall conclusion is that you can simply use LinearSolve directly after all.
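As a quick self-contained check (again a sketch with made-up data, not the question's variables), one can compare the function-argument form of LinearSolve against an ordinary matrix solve:

```mathematica
n = 100;
A = SparseArray[{{i_, i_} -> 2., {i_, j_} /; Abs[i - j] == 1 -> -1.}, {n, n}];
b = ConstantArray[1., n];

(* matrix supplied only through its matrix-vector product *)
x = LinearSolve[A.# &, b,
     Method -> {"Krylov", Method -> "ConjugateGradient", Tolerance -> 10.^-8}];

Norm[x - LinearSolve[A, b]]  (* difference from the direct solve; should be small *)
```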


This may not be an answer, but maybe it is :) I did not have time to fully understand the question and just picked up a few terms, but just in case, I thought I would mention this.

Mathematica 8 already has a Krylov method in LinearSolve! So if you are just looking to use these methods to solve Ax = b, it is already there.

Here is an example from my code (I used it in a demo for solving the 2D Poisson PDE):

Which[preconditioner == "ILU0", (* ILU0 takes no fill-in option *)
  x = LinearSolve[A, rightHandVector, 
    Method -> {"Krylov", Method -> nonStationarySolver, 
      "Preconditioner" -> preconditioner, MaxIterations -> Automatic, 
      Tolerance -> 10^-toleranceConstant}],
  True, (* other preconditioners, e.g. "ILUT", accept a "FillIn" option *)
  x = LinearSolve[A, rightHandVector, 
    Method -> {"Krylov", Method -> nonStationarySolver, 
      "Preconditioner" -> {preconditioner, "FillIn" -> fillIn}, 
      MaxIterations -> Automatic, Tolerance -> 10^-toleranceConstant}]
  ];

In the above, A is a sparse matrix (from the 2D discretization) and rightHandVector is the b vector in Ax = b.
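For completeness, here is a sketch of how such an A and rightHandVector can be assembled for the 2D Poisson problem with the standard 5-point stencil (the grid size n, the unit source term, and the "BiCGSTAB"/"ILU0" choices are my assumptions, not the demo's actual settings):

```mathematica
n = 20;  (* interior grid points per dimension; assumed *)
id  = SparseArray[{{i_, i_} -> 1.}, {n, n}];
t   = SparseArray[{{i_, i_} -> 4., {i_, j_} /; Abs[i - j] == 1 -> -1.}, {n, n}];
off = SparseArray[{{i_, j_} /; Abs[i - j] == 1 -> -1.}, {n, n}];

(* 5-point Laplacian on the n x n grid via Kronecker products *)
A = KroneckerProduct[id, t] + KroneckerProduct[off, id];
rightHandVector = ConstantArray[1., n^2];  (* e.g. a unit source term *)

x = LinearSolve[A, rightHandVector,
     Method -> {"Krylov", Method -> "BiCGSTAB", "Preconditioner" -> "ILU0"}];
```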

There are many other submethods and options for "Krylov". If this is what you are asking for, you can check my demo for more examples; I also implement the preconditioned conjugate gradient method (what MATLAB's pcg does).

If this is NOT what you are asking for, I can delete this. (A comment was too small to write all this in.)

P.S. The code for this is here: http://12000.org/my_notes/mma_demos/poisson_2D/index.htm