Why can we treat quantum scattering problems as time-independent?

This is fundamentally no more difficult than understanding how quantum mechanics describes particle motion using plane waves. A delocalized wavefunction $\exp(ipx)$ describes a particle moving to the right with velocity $p/m$, but such a particle is already everywhere at once, and only superpositions of such states actually move in time.

Consider

$$\int \psi_k(p) e^{ipx - iE(p) t} dp$$

where $\psi_k(p)$ is a sharp bump at $p=k$, not a delta function, but narrow. The superposition using this bump gives a wide spatial waveform centered about $x=0$ at $t=0$. At large negative times, the fast phase oscillation kills the bump at $x=0$, but it creates a new bump at those $x$ where the phase is stationary, that is, where

$${\partial\over\partial p}( p x - E(p)t ) = 0$$

or, since the superposition is sharp near k, where

$$ x = E'(k)t$$

which means that the bump moves with a steady speed determined by Hamilton's laws. Total probability is conserved, so the integral of $|\psi|^2$ over the bump is conserved.
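
As a quick numerical illustration of this stationary-phase argument, here is a minimal sketch (in units $\hbar=m=1$, with a Gaussian bump and toy grid parameters of my own choosing, none of which come from the argument above): the peak of the superposed packet tracks $x = E'(k)t = kt$ at large negative times.

```python
import numpy as np

# Sketch of the stationary-phase argument (assuming hbar = m = 1 and toy numbers):
# a narrow bump psi_k(p) centered at p = k, superposed as
# Psi(x,t) = int dp psi_k(p) exp(ipx - iE(p)t), gives a spatial bump whose peak
# follows x = E'(k) t = k t.

k, sigma = 1.0, 0.05                           # center momentum and bump width in p-space
p = np.linspace(k - 6*sigma, k + 6*sigma, 801)
dp = p[1] - p[0]
bump = np.exp(-(p - k)**2 / (2*sigma**2))      # sharp bump at p = k (not a delta function)
E = p**2 / 2                                   # free dispersion, so E'(k) = k

x = np.linspace(-150.0, 50.0, 4001)
for t in (-100.0, -50.0, 0.0):
    phases = np.exp(1j*(np.outer(p, x) - (E*t)[:, None]))
    Psi = np.sum(bump[:, None] * phases, axis=0) * dp
    peak = x[np.argmax(np.abs(Psi))]
    print(f"t = {t:6.1f}   peak at x = {peak:7.2f}   E'(k) t = {k*t:7.2f}")
```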

The actual time-dependent scattering event is a superposition of stationary states in the same way. Each stationary state describes a completely coherent process, where a particle in a perfect sinusoidal wave hits the target, and scatters outward, but because it is an energy eigenstate, the scattering is completely delocalized in time.

If you want a collision which is localized, you need to superpose, and the superposition produces a natural scattering event, where a wave-packet comes in, reflects and transmits, and goes out again. If the incoming wave-packet has an energy which is relatively sharply defined, all the properties of the scattering process can be extracted from the corresponding energy eigenstate.

Given the solutions to the stationary eigenstate problem $\psi_p(x)$ for each incoming momentum $p$, so that at large negative $x$, $\psi_p(x) = \exp(ipx) + A \exp(-ipx)$ and $\psi_p(x) = B\exp(ipx)$ at large positive $x$, superpose these waves in the same way as for a free particle

$$\int dp \psi_k(p) \psi_p(x) e^{-iE(p)t}$$

At large negative times, the phase is stationary only for the incoming part, not for the reflected or transmitted part. This is because each of the three parts describes free-particle motion, so if you work out where a free particle with that momentum would classically be at that time, that is where the wavepacket is nonzero. So at negative times, the wavepacket is centered at

$$ x = E'(k)t$$

For large positive $t$, there are two places where the phase is stationary: those $x$ where

$$ x = - E'(k) t$$

$$ x = E_2'(k) t$$

where $E_2'(k)$ is the group velocity of the transmitted $k$-wave (it can differ from $E'(k)$ if the potential has a different asymptotic value at $+\infty$ than at $-\infty$, because the transmitted wavenumber at the same energy is then different). These two stationary-phase regions are where the reflected and transmitted packets are located. The coefficients of the reflected and transmitted packets are $A$ and $B$. Conservation of total probability constrains their magnitudes ($|A|^2 + |B|^2 = 1$ when the asymptotic group velocities agree), so the reflection and transmission probabilities for a wavepacket are $|A|^2$ and $|B|^2$, as expected.
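
To see all of this concretely, here is a small numerical sketch for the simplest example I can think of, a delta-function potential $V(x) = \lambda\,\delta(x)$ in units $\hbar=m=1$ (the potential, envelope, and grids are toy choices of mine, not part of the argument above). Its stationary states have $A(k) = \lambda/(ik-\lambda)$ and $B(k) = ik/(ik-\lambda)$, so $|A|^2+|B|^2=1$, and superposing them and evolving to a large positive time puts a fraction $\approx |A(k_0)|^2$ of the probability into the reflected packet and $\approx |B(k_0)|^2$ into the transmitted one.

```python
import numpy as np

# Numerical sketch (hbar = m = 1, toy parameters): scattering off V(x) = lam*delta(x).
# Exact stationary-state amplitudes: A(k) = lam/(ik - lam), B(k) = ik/(ik - lam),
# so |A|^2 + |B|^2 = 1. Superpose the eigenstates with a narrow envelope a(k) and
# evolve to a large positive time: the probability left of the potential should be
# about |A(k0)|^2 and the probability to the right about |B(k0)|^2.

lam, k0, sig, t = 1.0, 2.0, 0.1, 40.0

k = np.linspace(k0 - 5*sig, k0 + 5*sig, 401)        # momentum grid (all k > 0)
dk = k[1] - k[0]
a = np.exp(-(k - k0)**2 / (2*sig**2))               # narrow envelope in k-space
A = lam / (1j*k - lam)                              # reflection amplitudes
B = 1j*k / (1j*k - lam)                             # transmission amplitudes

x = np.linspace(-200.0, 200.0, 4001)

# Stationary eigenstates: exp(ikx) + A exp(-ikx) for x < 0, B exp(ikx) for x > 0.
kx = np.outer(k, x)
psi_k = np.where(x[None, :] < 0,
                 np.exp(1j*kx) + A[:, None]*np.exp(-1j*kx),
                 B[:, None]*np.exp(1j*kx))

# Wave packet at time t, built from the stationary states exactly as in the text.
Psi = np.sum(a[:, None] * psi_k * np.exp(-0.5j*(k**2)[:, None]*t), axis=0) * dk

rho = np.abs(Psi)**2
P_refl = np.sum(rho[x < 0]) / np.sum(rho)
P_trans = np.sum(rho[x > 0]) / np.sum(rho)
print(P_refl, abs(A[200])**2)     # both ~ lam^2/(k0^2 + lam^2) = 0.2
print(P_trans, abs(B[200])**2)    # both ~ k0^2/(k0^2 + lam^2) = 0.8
```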


First suppose that the Hamiltonian $H(t) = H_0 + H_I(t)$ can be decomposed into free and interaction parts. It can be shown (I won't derive this equation here) that the retarded Green function for $H(t)$ obeys the equation $$G^{(+)}(t, t_0) = G_0^{(+)}(t, t_0) - {i \over \hbar} \int_{-\infty}^{\infty} {\rm d} t'\, G_0^{(+)}(t,t') H_I(t') G^{(+)}(t', t_0)$$ where $G_0^{(+)}$ is the retarded Green function for $H_0$. Letting this equation act on the state $\left| \psi(t_0) \right>$, it becomes $$\left| \psi(t) \right> = \left| \varphi(t) \right> - {i \over \hbar} \int_{-\infty}^{\infty} {\rm d} t'\, G_0^{(+)}(t,t') H_I(t')\left| \psi(t') \right> $$ where $\left| \varphi(t) \right> = G_0^{(+)}(t,t_0) \left| \psi(t_0) \right>$.

Now, we suppose that until $t_0$ there is no interaction, so we can write $\left |\psi(t_0) \right>$ as a superposition of momentum eigenstates $$\left| \psi(t_0) \right> = \int {\rm d}^3 \mathbf p\, a(\mathbf p) e^{-{i \over \hbar} E t_0} \left| \mathbf p \right>.$$ A similar decomposition also holds for $\left| \varphi(t) \right>$. This suggests writing $\left| \psi(t) \right >$ as $$\left| \psi(t) \right> = \int {\rm d}^3 \mathbf p\, a(\mathbf p) e^{-{i \over \hbar} E t} \left| \psi^{(+)}_{\mathbf p} \right>$$ where the states $\left| \psi^{(+)}_{\mathbf p} \right>$ are to be determined from the equation for $\left|\psi(t) \right>$. Now, the amazing thing (which I again won't derive, for lack of space) is that these states are actually eigenstates of $H$: $$H \left| \psi^{(+)}_{\mathbf p} \right> = E \left| \psi^{(+)}_{\mathbf p} \right>$$ with $E = {\mathbf p^2 \over 2m}$ (here we assumed that the free part is simply $H_0 = {{\mathbf p}^2 \over 2m}$ and that $H_I(t)$ is in fact independent of time).

Similarly, one can derive advanced eigenstates from the advanced Green function: $$H \left| \psi^{(-)}_{\mathbf p} \right> = E \left| \psi^{(-)}_{\mathbf p} \right>.$$

Now, in one dimension and for an interaction Hamiltonian of the form $\left< x \right| H_I \left| x' \right> = \delta(x - x') U(x)$ it can be further shown that $$\psi^{(+)}_p \sim \begin{cases} e^{{i \over \hbar}px} + A(p) e^{-{i \over \hbar}px} \quad x< -a \cr B(p)e^{{i \over \hbar}px} \quad x> a \end{cases}$$ where $a$ is such that the potential vanishes for $|x| > a$, and $A(p)$ and $B(p)$ are coefficients fully determined by the potential $U(x)$. A similar discussion again applies to the wavefunctions $\psi^{(-)}_p$. Thus we have succeeded in reducing the dynamical problem to a stationary one by expressing the non-stationary states $\psi(t, x)$ in terms of the stationary $\psi^{(+)}_p(x)$.
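
For what it's worth, this reduction can also be checked numerically. The following rough sketch (units $\hbar = m = 1$, writing $k$ for the wave number, with a Gaussian bump standing in for $U(x)$ and arbitrary grid sizes of my own choosing) discretizes the position-space form of the integral equation for $\psi^{(+)}_k$, namely $\psi^{(+)}_k(x) = e^{ikx} + \frac{1}{ik}\int {\rm d}x'\, e^{ik|x-x'|}\, U(x')\, \psi^{(+)}_k(x')$ in these units, solves the resulting linear system, and reads off $A(k)$ and $B(k)$ from the asymptotic form.

```python
import numpy as np

# Rough numerical sketch (hbar = m = 1; the Gaussian bump standing in for U(x)
# and the grid sizes are arbitrary choices): solve the position-space
# Lippmann-Schwinger equation
#     psi(x) = exp(ikx) + (1/(ik)) * int dx' exp(ik|x-x'|) U(x') psi(x')
# by discretizing it into a linear system, then read off A(k) and B(k) from
# psi ~ exp(ikx) + A exp(-ikx) (x << 0) and psi ~ B exp(ikx) (x >> 0).

def scatter_1d(U, k, a=6.0, n=1201):
    x, dx = np.linspace(-a, a, n, retstep=True)
    Ux = U(x)
    G0 = np.exp(1j*k*np.abs(x[:, None] - x[None, :])) / (1j*k)   # retarded free kernel
    M = np.eye(n) - G0 * Ux[None, :] * dx                        # (1 - G0 U dx)
    psi = np.linalg.solve(M, np.exp(1j*k*x))                     # psi^{(+)}_k on the grid
    A = np.sum(np.exp(1j*k*x) * Ux * psi) * dx / (1j*k)          # reflection amplitude
    B = 1 + np.sum(np.exp(-1j*k*x) * Ux * psi) * dx / (1j*k)     # transmission amplitude
    return A, B

# A smooth bump potential that is negligible outside |x| < 6.
A, B = scatter_1d(lambda x: 2.0*np.exp(-x**2), k=1.5)
print(abs(A)**2 + abs(B)**2)   # close to 1, as required by probability conservation
```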


Here I would like to expand on some of the arguments given in Ron Maimon's nice answer.

i) Setting. Let us divide the 1D $x$-axis into three regions $I$, $II$, and $III$, with a localized potential $V(x)$ of compact support in the middle region $II$. (Clearly, there are physically relevant potentials that do not have compact support, e.g. the Coulomb potential, but this assumption simplifies the following discussion of asymptotic states.)

ii) Time-independent and monochromatic. The particle is free in the regions $I$ and $III$, so we can solve the time-independent Schrödinger equation

$$\hat{H}\psi(x) ~=~E \psi(x), \qquad\qquad \hat{H}~=~ \frac{\hat{p}^2}{2m}+V(x),\qquad\qquad E> 0, \tag{1}$$

exactly there. We know that the 2nd order linear ODE has two linearly independent solutions, which in the free regions $I$ and $III$ are plane waves

$$ \psi_{I}(x) ~=~ \underbrace{a^{+}_{I}(k)e^{ikx}}_{\text{incoming right-mover}} + \underbrace{a^{-}_{I}(k)e^{-ikx}}_{\text{outgoing left-mover}}, \qquad k> 0, \tag{2} $$ $$ \psi_{III}(x) ~=~ \underbrace{a^{+}_{III}(k)e^{ikx}}_{\text{outgoing right-mover}} + \underbrace{a^{-}_{III}(k)e^{-ikx}}_{\text{incoming left-mover}}. \qquad\qquad\qquad \tag{3} $$

Just from linearity of the Schrödinger equation, even without solving the middle region $II$, we know that the four coefficients $a^{\pm}_{I/III}(k)$ are constrained by two linear conditions. This observation leads, by the way, to the time-independent notion of the scattering $S$-matrix and the transfer $M$-matrix

$$ \begin{pmatrix} a^{-}_{I}(k) \\ a^{+}_{III}(k) \end{pmatrix}~=~ S(k) \begin{pmatrix} a^{+}_{I}(k) \\ a^{-}_{III}(k) \end{pmatrix}, \tag{4} $$

$$ \begin{pmatrix} a^{+}_{III}(k) \\ a^{-}_{III}(k) \end{pmatrix}~=~ M(k) \begin{pmatrix} a^{+}_{I}(k) \\ a^{-}_{I}(k) \end{pmatrix}, \tag{5} $$

see e.g. my Phys.SE answer here.
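
As a toy illustration of eqs. $(4)$-$(5)$ (my own sketch, with $\hbar=m=1$ and a rectangular barrier of height $V_0$ on $0<x<d$ standing in for region $II$), one can build the transfer matrix $M(k)$ by matching $\psi$ and $\psi'$ at the two edges of region $II$, convert it to the $S$-matrix, and check that $S(k)$ comes out unitary:

```python
import numpy as np

# Toy sketch of eqs. (4)-(5) (hbar = m = 1): rectangular barrier of height V0 on
# 0 < x < d. W(kappa, x) maps plane-wave coefficients (a^+, a^-) to (psi, psi') at x.

def W(kappa, x):
    ep, em = np.exp(1j*kappa*x), np.exp(-1j*kappa*x)
    return np.array([[ep, em], [1j*kappa*ep, -1j*kappa*em]])

E, V0, d = 1.0, 1.5, 2.0
k = np.sqrt(2*E)                       # wave number in regions I and III
q = np.sqrt(2*(E - V0) + 0j)           # (possibly imaginary) wave number in region II

# Transfer matrix: match (psi, psi') at x = 0 and x = d across the three regions.
M = np.linalg.solve(W(k, d), W(q, d)) @ np.linalg.solve(W(q, 0.0), W(k, 0.0))

# Convert the M-matrix of eq. (5) into the S-matrix of eq. (4).
S = np.array([[-M[1, 0]/M[1, 1],                  1/M[1, 1]],
              [M[0, 0] - M[0, 1]*M[1, 0]/M[1, 1], M[0, 1]/M[1, 1]]])

print(np.allclose(S.conj().T @ S, np.eye(2)))   # S is unitary (probability conservation)
print(abs(S[1, 0])**2, abs(S[0, 0])**2)         # transmission and reflection probabilities
```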

iii) Time-dependence of monochromatic wave. The dispersion relation reads

$$ \frac{E(k)}{\hbar} ~\equiv~\omega(k)~=~\frac{\hbar k^2}{2m}. \tag{6} $$

The specific form on the right-hand side of the dispersion relation $(6)$ will not matter in what follows (although we will assume for simplicity that it is the same for right- and left-movers). The full time-dependent monochromatic solution in the free regions I and III becomes $$ \Psi_r(x,t) ~=~ \sum_{\sigma=\pm}a^{\sigma}_r(k)e^{\sigma ikx-i\omega(k)t} ~=~\underbrace{e^{-i\omega(k)t}}_{\text{phase factor}} \Psi_r(x,0), \qquad r ~\in~ \{I, III\}. \tag{7} $$

The solution $(7)$ is a sum of a right-mover ($\sigma=+$) and a left-mover ($\sigma=-$). For now the words right- and left-mover may be taken as semantic names without physical content. The solution $(7)$ is fully delocalized in the free regions I and III, with a probability density $|\Psi_r(x,t)|^2$ that is independent of time $t$, so naively it does not make sense to say that the waves are right- or left-moving, or even scatter! However, it turns out that we may view the monochromatic wave $(7)$ as a limit of a wave packet, and obtain a physical interpretation in that way, see the next section.

iv) Wave packet. We now take a wave packet

$$ A^{\sigma}_r(k)~=~0 \qquad \text{for} \qquad |k-k_0| ~\geq~ K, \qquad\sigma~\in~\{\pm\}, \qquad r ~\in~ \{I, III\},\tag{8} $$

narrowly peaked around some particular value $k_0$ in $k$-space, i.e. supported in the interval

$$|k-k_0| ~\leq~ K, \tag{9}$$

where $K$ is some small wave-number scale, so that we may Taylor expand the dispersion relation

$$\omega(k)~=~ \omega(k_0) + v_g(k_0)(k-k_0) + {\cal O}\left((k-k_0)^2\right), \tag{10} $$ and drop higher-order terms ${\cal O}\left((k-k_0)^2\right)$. Here

$$v_g(k)~:=~\frac{d\omega(k)}{dk}\tag{11}$$

is the group velocity. The wave packet (in the free regions I and III) is a sum of a right- and a left-mover,

$$ \Psi_r(x,t)~=~ \Psi^{+}_r(x,t)+\Psi^{-}_r(x,t), \qquad\qquad r ~\in~ \{I, III\},\tag{12} $$

where

$$ \Psi^{\sigma}_r(x,t)~:=~ \int dk~A^{\sigma}_r(k)e^{\sigma ikx-i\omega(k)t}, \qquad\qquad\sigma~\in~\{\pm\}, \qquad\qquad r ~\in~ \{I, III\}, $$ $$ ~\approx~ e^{i(k_0 v_g(k_0)-\omega(k_0))t} \int dk~A^{\sigma}_r(k)e^{ ik(\sigma x- v_g(k_0)t)}$$ $$~=~\underbrace{e^{i(k_0 v_g(k_0)-\omega(k_0))t}}_{\text{phase factor}} ~\Psi^{\sigma}_r\left(x-\sigma v_g(k_0)t,0\right).\tag{13}$$

The right- and left-movers $\Psi^{\sigma}$ will be very long spread-out wave trains of sizes $\geq \frac{1}{K}$ in $x$-space, but we are still able to identify via eq. $(13)$ their time evolution as just

  1. a collective motion with group velocity $\sigma v_g(k_0)$, and

  2. an overall time-dependent phase factor of modulus $1$ (which is the same for the right- and the left-mover).

In the limit $K \to 0$, with $K >0$, the approximation $(10)$ becomes better and better, and we recover the time-independent monochromatic wave,

$$ A^{\sigma}_r(k) ~\longrightarrow ~a^{\sigma}_r(k_0)~\delta(k-k_0)\qquad \text{for} \qquad K\to 0. \tag{14}$$

It thus makes sense to assign a group velocity to each of the $\pm$ parts of the monochromatic wave $(7)$, because it can be understood as an appropriate limit of the wave packet $(13)$. The previous sentence is, in a nutshell, the answer to OP's title question (v3).
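
For completeness, here is a small numerical check of eq. $(13)$ (with $\hbar=m=1$ and an arbitrary smooth envelope of my own choosing): the evolved right-mover equals the translated initial profile times the overall phase factor, up to an error that comes from the neglected ${\cal O}\left((k-k_0)^2\right)$ terms in eq. $(10)$ and shrinks as $K\to 0$.

```python
import numpy as np

# Numerical check of eq. (13) for a right-mover (hbar = m = 1; the cosine-squared
# envelope and the numbers are arbitrary choices). The error of the approximation
#   Psi(x,t) ~ exp(i(k0 v_g - omega(k0)) t) * Psi(x - v_g t, 0)
# comes from the neglected O((k-k0)^2) terms in eq. (10) and shrinks as K -> 0.

k0, t = 2.0, 30.0
omega = lambda k: k**2 / 2          # dispersion relation (6)
vg = k0                             # group velocity (11) evaluated at k0

x = np.linspace(-150.0, 250.0, 4001)

def right_mover(K, time, xx):
    k = np.linspace(k0 - K, k0 + K, 601)
    A = np.cos(np.pi*(k - k0)/(2*K))**2          # smooth amplitude supported on |k - k0| < K
    ph = np.exp(1j*(np.outer(k, xx) - omega(k)[:, None]*time))
    return np.sum(A[:, None]*ph, axis=0) * (k[1] - k[0])

for K in (0.2, 0.05):
    exact = right_mover(K, t, x)
    approx = np.exp(1j*(k0*vg - omega(k0))*t) * right_mover(K, 0.0, x - vg*t)
    print(K, np.max(np.abs(exact - approx)) / np.max(np.abs(exact)))   # relative error
```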