Schrödinger equation in position representation

Let me first say that I think Tobias Kienzler has done a great job of discussing the intuition behind your question in going from finite to infinite dimensions.

I'll, instead, attempt to address the mathematical content of Jackson's statements. My basic claim will be that

Whether you are working in finite or infinite dimension, writing the Schrodinger equation in a specific basis only involves making definitions.

To see this clearly without having to worry about possible mathematical subtleties, let's first consider

Finite dimension

In this case, we can be certain that there exists an orthonormal basis $\{|n\rangle\}_{n=1, \dots, N}$ for the Hilbert space $\mathcal H$. For any state $|\psi(t)\rangle$ we define the so-called matrix elements of the state and the Hamiltonian as follows: \begin{align} \psi_n(t) = \langle n|\psi(t)\rangle, \qquad H_{nm} = \langle n|H|m\rangle \end{align} Now take the inner product of both sides of the Schrodinger equation $i\hbar\frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle$ with $\langle n|$. Linearity of the inner product and of the derivative gives, for the left-hand side, \begin{align} i\hbar\langle n|\frac{d}{dt}|\psi(t)\rangle=i\hbar\frac{d}{dt}\langle n|\psi(t)\rangle=i\hbar\frac{d\psi_n}{dt}(t) \end{align} The fact that our basis is orthonormal tells us that we have the resolution of the identity \begin{align} I = \sum_{m=1}^N|m\rangle\langle m| \end{align} so that the right-hand side of the Schrodinger equation can be written as follows: \begin{align} \langle n|H|\psi(t)\rangle = \sum_{m=1}^N\langle n|H|m\rangle\langle m|\psi(t)\rangle = \sum_{m=1}^N H_{nm}\psi_m(t) \end{align} Putting this all together gives the Schrodinger equation in the $\{|n\rangle\}$ basis: \begin{align} i\hbar\frac{d\psi_n}{dt}(t) = \sum_{m=1}^NH_{nm}\psi_m(t) \end{align}
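As a concrete numerical sketch of the finite-dimensional case (my own illustration, with $\hbar = 1$ and a made-up two-level Hamiltonian, neither of which is from the text), the basis form of the equation is just a linear ODE for the coefficients $\psi_n(t)$, solved exactly by a matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# In the basis {|1>, |2>}, H_{nm} = <n|H|m> is just a Hermitian matrix.
# This particular H is an arbitrary illustrative choice.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)

psi0 = np.array([1.0, 0.0], dtype=complex)  # start in the state |1>

# With hbar = 1, i d(psi_n)/dt = sum_m H_{nm} psi_m is solved by
# psi(t) = exp(-i H t) psi(0).
def evolve(psi, t):
    return expm(-1j * H * t) @ psi

psi_t = evolve(psi0, 0.3)

# Since H is Hermitian, the evolution is unitary and the norm is conserved.
print(abs(np.vdot(psi_t, psi_t)))  # stays at 1.0
```

The matrix exponential is exact here; for large $N$ one would typically use an ODE integrator instead.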

Infinite dimension

With an infinite number of dimensions, we can write the Schrodinger equation either in a discrete (countable) basis for the Hilbert space $\mathcal H$, or in a continuous "basis" like the position "basis". A countable basis always exists, by the way, since quantum mechanical Hilbert spaces are separable and therefore possess a countable orthonormal basis. I put basis in quotes here because the position eigenstates are not actually elements of the Hilbert space: their would-be wavefunctions are delta functions, which are not square-integrable.

In the case of a countable orthonormal basis, the computation performed above for writing the Schrodinger equation in a basis goes through in precisely the same way, with $N$ replaced by $\infty$ everywhere.
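In practice, a countably infinite basis is handled by truncating the sum over $m$ at some large $N$ and checking convergence. A minimal sketch (my own example, with $\hbar = \omega = 1$): the harmonic oscillator in its number basis $\{|n\rangle\}$, where $H_{nm} = (n + \tfrac12)\delta_{nm}$, evaluated on a truncated coherent state.

```python
import math
import numpy as np

# Truncate the countable number basis {|n>} at N; H_{nm} = (n + 1/2) delta_{nm}.
N = 60
n = np.arange(N)
H = np.diag(n + 0.5)

# Coefficients psi_n = <n|psi> of a coherent state with alpha = 1
# (an illustrative choice), truncated and renormalized.
alpha = 1.0
c = np.array([alpha**k / math.sqrt(math.factorial(k)) for k in n])
psi = c / np.linalg.norm(c)

# The truncated double sum sum_{nm} conj(psi_n) H_{nm} psi_m should
# reproduce the exact expectation value |alpha|^2 + 1/2 = 1.5.
print(psi @ H @ psi)  # close to 1.5
```

The truncation error here is set by the neglected tail of the coefficients, which for a coherent state falls off factorially.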

In the case of the "basis" $\{|x\rangle\}_{x\in\mathbb R}$, the computation above carries through in almost exactly the same way (as your question essentially shows), except that the definitions we made at the beginning change slightly. In particular, we define functions $\psi:\mathbb R^2\to\mathbb C$ and $h:\mathbb R^2\to\mathbb C$ by \begin{align} \psi(x,t) = \langle x|\psi(t)\rangle, \qquad h(x,x') = \langle x|H|x'\rangle \end{align} The position space representation of the Schrodinger equation then follows by taking the inner product of both sides of the equation with $\langle x|$ and using the resolution of the identity \begin{align} I = \int_{-\infty}^\infty dx'\, |x'\rangle\langle x'| \end{align} which yields \begin{align} i\hbar\frac{\partial \psi}{\partial t}(x,t) = \int_{-\infty}^\infty dx'\, h(x,x')\psi(x',t) \end{align} The only real mathematical subtleties you have to worry about in this case are exactly what sorts of objects the symbols $|x\rangle$ represent (since they are not in the Hilbert space) and in what sense one can write a resolution of the identity for such objects. But once you have taken care of these issues, the conversion of the Schrodinger equation into its expression in a particular "representation" is just a matter of making the appropriate definitions.
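One way to make the continuous case tangible is to discretize $x$ on a grid, which turns the kernel $h(x,x')$ back into an ordinary matrix. A sketch under my own assumptions ($\hbar = m = 1$, an infinite square well on $[0, 1]$, and a finite-difference stencil for the kinetic term, none of which is from the text):

```python
import numpy as np

# Grid of M interior points on [0, L]; psi vanishes at the walls.
L, M = 1.0, 400
dx = L / (M + 1)
x = dx * np.arange(1, M + 1)

# Discretized kernel h(x,x') = <x|H|x'> for H = -1/2 d^2/dx^2:
# the second derivative becomes a tridiagonal stencil.
main = np.full(M, 1.0 / dx**2)
off = np.full(M - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Eigenvalues of the discretized kernel approximate the well's spectrum.
E = np.linalg.eigvalsh(H)
print(E[0])  # approaches pi^2/2 ~ 4.9348 as M grows
```

The integral over $x'$ in the continuous Schrodinger equation has become the matrix-vector product of this tridiagonal $H$ with the sampled wavefunction.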


Think of a linear operator as the continuum limit of an infinitely large matrix: the discrete indices become continuous "indices" called coordinates. $K$ would denote the matrix, while $k(x,x')$ is what one writes as $K_{x x'}$ for matrices. Applying the linear operator to a function is like multiplying a matrix by a vector, except that instead of summing over the discrete second index you now integrate over the continuous second coordinate, i.e. $\sum_{x'}K_{x x'}f_{x'} \to \int dx'\, k(x,x')f(x')$. There's a bit more to it, of course: going from $\{1,2,3,\dots,n\}$ via $\mathbb N$ to $\mathbb R$ as the "index" set involves some mathematical messiness, but in most cases it just works fine without any special consideration.
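The sum-to-integral replacement above can be seen directly on a grid, where the integral becomes a matrix-vector product weighted by the grid spacing. A minimal sketch (my own example; the Gaussian kernel is an illustrative choice of $k(x,x')$, not from the text):

```python
import numpy as np

# Sample x on a grid; the integral int dx' k(x,x') f(x') becomes
# the matrix-vector product sum_j K[i,j] f[j] * dx.
xs = np.linspace(-5, 5, 501)
dx = xs[1] - xs[0]

# A normalized Gaussian smoothing kernel as the "matrix" K_{x x'}.
sigma = 0.5
K = np.exp(-(xs[:, None] - xs[None, :])**2 / (2 * sigma**2))
K /= sigma * np.sqrt(2 * np.pi)   # each row now integrates to ~1

f = np.where(np.abs(xs) < 1, 1.0, 0.0)   # a step-like input function
g = K @ f * dx                           # "continuous" matrix multiplication

# Smoothing by a normalized kernel preserves the total integral
# (up to truncation at the grid boundary).
print(g.sum() * dx, f.sum() * dx)
```

The only new ingredient relative to the matrix case is the factor of $dx$, which is exactly the measure that turns the sum into a Riemann approximation of the integral.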