How is the saddle point approximation used in physics?

In the simplest form the saddle point method is used to approximate integrals of the form

$$I \equiv \int_{-\infty}^{\infty} dx\,e^{-f(x)}.$$

The idea is that the negative exponential function is so rapidly decreasing ($e^{-10}$ is roughly $8000$ times smaller than $e^{-1}$, a factor of $e^{9}$) that we only need to look at the contribution from where $f(x)$ is at its minimum. Let's say $f(x)$ attains its minimum at $x_0$. Then we can approximate $f(x)$ by the first terms of its Taylor expansion:

$$f(x) \approx f(x_0) + \frac{1}{2}(x- x_0)^2 f''(x_0) +\cdots.$$

There is no linear term because $x_0$ is a minimum. This may be a terrible approximation to $f(x)$ when $x$ is far from $x_0$, but if $f(x)$ is significantly bigger than its minimum value in that region then it doesn't really matter, since the contribution to the integral will be negligible either way. Anyway, plugging this into our integral gives

$$I \approx \int_{-\infty}^{+\infty} dx\, e^{-f(x_0) - \frac{1}{2}(x-x_0)^2 f''(x_0)}= e^{-f(x_0)}\int_{-\infty}^{\infty} dx\, e^{-\frac{1}{2}(x-x_0)^2 f''(x_0)}.$$

The Gaussian integral can be evaluated to give

$$I \approx e^{-f(x_0)}\sqrt{\frac{2\pi}{f''(x_0)}}.$$
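As a quick numerical sanity check, here is a minimal Python sketch; the test function $f(x) = N\cosh x$ is my own choice purely for illustration. Notice how the relative error shrinks as the minimum gets sharper.

```python
import numpy as np
from scipy.integrate import quad

def saddle_point_estimate(f0, fpp0):
    """Leading-order saddle point result: exp(-f(x0)) * sqrt(2*pi / f''(x0))."""
    return np.exp(-f0) * np.sqrt(2.0 * np.pi / fpp0)

# Illustrative test function: f(x) = N*cosh(x), with its minimum at x0 = 0,
# where f(x0) = N and f''(x0) = N.  Larger N means a sharper minimum.
for N in (1, 5, 10):
    # The integrand is utterly negligible beyond |x| = 20, so finite limits suffice.
    exact, _ = quad(lambda x: np.exp(-N * np.cosh(x)), -20.0, 20.0)
    approx = saddle_point_estimate(f0=N, fpp0=N)
    print(f"N = {N:2d}: exact = {exact:.6e}, saddle point = {approx:.6e}, "
          f"relative error = {abs(approx - exact) / exact:.1%}")
```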

So where does this come up in physics? Probably the first example is Stirling's approximation. In statistical mechanics we are always counting configurations of things, so we get all sorts of expressions involving $N!$ where $N$ is some tremendously huge number like $10^{23}$. Doing analytical manipulation with factorials is no fun, so it would be nice if there were some more tractable expression. Well, we can use the fact that:

$$N! =\int_0^\infty dx\, e^{-x}x^N = \int_0^\infty dx \exp(-x +N\ln x).$$

So now you can apply the saddle point approximation with $f(x) = x - N\ln x$. You can work out the result yourself. You should also convince yourself that in this case the approximation really does become better and better as $N\rightarrow \infty$. (Also, you have to extend the lower limit of the integral from $0$ to $-\infty$, which is harmless because the integrand is negligible there when $N$ is large.)
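If you want to check your answer numerically, here is a minimal Python sketch. It assumes the standard result that the saddle sits at $x_0 = N$, which reproduces Stirling's formula $N! \approx \sqrt{2\pi N}\,N^N e^{-N}$.

```python
import math

def stirling(N):
    """Saddle point estimate of N!: f(x) = x - N*ln(x) has its minimum at
    x0 = N, with f(x0) = N - N*ln(N) and f''(x0) = 1/N, giving
    N! ~ sqrt(2*pi*N) * N**N * exp(-N)."""
    return math.sqrt(2.0 * math.pi * N) * N**N * math.exp(-N)

# The ratio approaches 1 as N grows, illustrating that the approximation
# improves in the large-N limit.
for N in (5, 20, 100):
    print(f"N = {N:3d}: approx / exact = {stirling(N) / math.factorial(N):.6f}")
```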

There are lots of other examples, but I don't know your background, so it's hard to say what will be a useful reference. The WKB approximation can be thought of as a saddle point approximation. A common example is in partition functions/path integrals, where we want to calculate

$$\mathcal{Z} = \int d\phi_i \exp(-\beta F[\phi_i]),$$

where the $\phi_i$ are some local variables and $F[\cdot]$ is the free energy functional. We do the same as before but now with multiple variables. Again we can find the set $\{\phi_i^{(0)}\}$ that minimizes $F$ and then expand

$$F[\phi_i] \approx F[\phi_i^{(0)}] +\frac{1}{2}\sum_{ij}(\phi_i -\phi_i^{(0)})(\phi_j -\phi_j^{(0)})\left.\frac{\partial^2F}{\partial\phi_i\,\partial\phi_j}\right|_{\phi^{(0)}}.$$

This gives you the ground-state contribution times a Gaussian (free) theory, which you can handle by the usual means. Following the earlier remarks, we expect this to be good in the limit $\beta\rightarrow \infty$, although your mileage may vary.
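Explicitly, carrying out the resulting $n$-dimensional Gaussian integral (assuming the Hessian at $\{\phi_i^{(0)}\}$ is positive definite) gives the leading-order estimate

$$\mathcal{Z} \approx e^{-\beta F[\phi_i^{(0)}]}\,\sqrt{\frac{(2\pi)^n}{\det\!\big(\beta\,\partial^2 F/\partial\phi_i\,\partial\phi_j\big|_{\phi^{(0)}}\big)}},$$

which is the direct analogue of the one-variable formula above: the ground-state Boltzmann factor dressed by the Gaussian fluctuation determinant.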


BebopButUnsteady has explained the mathematics behind it, so I'll provide some references I've found useful and quite like, which get into the more technical mathematical details while remaining very readable. These deal more concretely with the complex analysis required and with how to properly pick the correct contour so you don't get divergences and whatnot. It's actually a fairly technical issue that I've found the more elementary references tend to gloss over.

One of my favourites is Advanced Mathematical Methods for Scientists and Engineers by Bender and Orszag. This is one of the canonical references for applied mathematicians and physicists; I took a graduate course in perturbation theory from it and I find its treatment to be pretty good. It's one of the standard texts for a course on asymptotics: it shies away from rigorously proving things and pitches the material at the right level for learning to calculate with the techniques. One thing I like about this book is that the exercises are superb.

A very good book on advanced complex analysis is Asymptotic Expansions of Integrals by Bleistein and Handelsman, although unfortunately it is out of print as far as I am aware. If you plan on doing a lot more of this type of mathematics, I highly recommend acquiring a copy. It is very technical, but it goes into all the nitty-gritty details of the method of steepest descent and is very complete.

A newer book I like is Applied Asymptotic Analysis. It isn't as mathematically technical as the other two, but it has nice pictures and lots of text explaining what is going on. Given that you aren't so much interested in the more technical details, this is probably the reference I'd recommend. The section on steepest descents is very chit-chatty. I haven't worked through all the exercises yet, so I can't comment on them.

Since you asked about where the steepest descent might come up, the method of steepest descents is a generalisation of some more elementary methods (Laplace's method for example) to cover more general cases. Thus it can come up whenever one needs to approximate an integral. An example I can think of immediately is a standard problem in elementary quantum field theory. It's evaluating the Klein-Gordon propagator and is covered in Peskin and Schroeder. However they don't explicitly work out the details, but they are using the ideas. One of my colleagues, who does work in molecular dynamics, wrote a paper that required approximating an integral using steepest descents. So it can arise in a variety of applications.