How can entropy increase in quantum mechanics?

The total entropy of an isolated system indeed does not change under Schrödinger time evolution. To see this, note that (assuming for simplicity that the Hamiltonian does not depend explicitly on time) the system's density matrix obeys the von Neumann equation, whose solution is $\rho(t) = e^{-i H t / \hbar}\, \rho(0)\, e^{i H t / \hbar}$, so $\rho(t)$ and $\rho(0)$ are always unitarily equivalent and therefore have the same eigenvalue spectrum. It follows that any entropy measure that depends only on the density matrix's eigenvalues (which, practically speaking, is all of them) is constant in time.
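This invariance is easy to check numerically. The sketch below (my own illustration: a random 4×4 Hermitian Hamiltonian, a random mixed state, and $\hbar = 1$ are arbitrary choices) confirms that the von Neumann entropy $S = -\operatorname{Tr}(\rho \log \rho)$ of $\rho(t) = U \rho(0) U^\dagger$ does not depend on $t$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_density_matrix(n, rng):
    """Random mixed state: rho = A A† / Tr(A A†) is positive with unit trace."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def von_neumann_entropy(rho):
    """S = -sum_i p_i log p_i over the eigenvalues p_i of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]          # discard numerically-zero weights
    return float(-np.sum(p * np.log(p)))

h = rng.normal(size=(4, 4))
H = (h + h.T) / 2             # Hermitian Hamiltonian (time-independent)
rho0 = random_density_matrix(4, rng)

for t in (0.0, 0.5, 2.0):
    U = expm(-1j * H * t)     # e^{-iHt} with hbar = 1
    rho_t = U @ rho0 @ U.conj().T
    print(t, von_neumann_entropy(rho_t))   # same entropy at every t
```

The three printed entropies agree to numerical precision, since the eigenvalues of $\rho$ are untouched by conjugation with a unitary.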

But the entanglement entropy between subsystems can indeed increase, because the subsystems are not isolated. If the system starts in, say, a product state with no spatial entanglement between its subsystems, then generic Schrödinger time evolution will lead to increasing entanglement between the subsystems, so the local entropy associated with each little piece of the whole system will indeed increase, even as the total entropy remains constant. This relies on a very non-classical feature of von Neumann entropy: the sum of the entropies of the subsystems can be greater than the entropy of the system as a whole. (Indeed, in studying the entanglement of ground states, we often consider systems whose subsystems have very large entanglement entropy, even though the system as a whole is in a pure state and so has zero entropy!)
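A minimal two-qubit example makes this concrete (the particular coupled Hamiltonian below is my own arbitrary choice; $\hbar = 1$): starting from the product state $|00\rangle$, the entanglement entropy of either qubit grows from zero, while the total state stays pure with zero entropy throughout.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# A coupled two-qubit Hamiltonian (illustrative choice, hbar = 1)
H = np.kron(X, X) + np.kron(Z, I2) + np.kron(I2, Z)

psi0 = np.kron([1, 0], [1, 0]).astype(complex)   # product state |00>

def entanglement_entropy(psi):
    """Entropy of one qubit's reduced state, via the Schmidt spectrum."""
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    p = s**2                  # eigenvalues of the reduced density matrix
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

for t in (0.0, 0.3, 0.8):
    psi_t = expm(-1j * H * t) @ psi0
    print(t, entanglement_entropy(psi_t))   # zero at t = 0, positive later
```

The total state $|\psi(t)\rangle$ is pure at all times, so the entropy of the whole system is identically zero even as each subsystem's entropy grows.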

The subfields of "eigenstate thermalization", "entanglement propagation", and "many-body localization" --all under very active research today-- study the ways in which Schrödinger time evolution of various systems does or does not lead to increasing entanglement entropy of subsystems, even as the entropy of the system as a whole always stays the same.


Although tparker expresses a commonly held view, I disagree. Entropy in closed systems --be they quantum or classical-- can indeed increase. This should be entirely obvious from a physical perspective: imagine a closed box of gas where, initially, all the gas is in the top left corner of the box; it will clearly homogenize over time, with a definite increase in entropy. This intuition is no different in the quantum context, as I will try to clarify in what follows.

Of course, if you define $S = k \log \left( \dim \mathcal H \right)$, it cannot change over time. But that is not necessarily the right expression for the entropy of a closed system. In general, what goes in the logarithm is the phase-space volume consistent with what you know about your closed system. If all you know about your system is its Hilbert space (classical analogue: all you know of a box is its classical phase space), then you necessarily have to assign the system its maximum-entropy state (classical analogue: you would guess that the gas in the box is homogeneous), in which case of course nothing changes in time.

In other cases, however, you might know more about your system, e.g. you might know some (initial) inhomogeneous spatial distribution (this specification is called a macrostate). Classically, the entropy of that state is calculated from the volume $\Omega$ spanned by all the microstates (i.e. points in phase space) consistent with that macrostate (one equivalently says that the macrostate variables partition phase space). Taking the logarithm of that volume gives the entropy of the closed system in that macrostate. This is the well-known Boltzmann entropy $S = k \log \Omega$, and it certainly increases over time for closed systems (unless, of course, you already started in a maximum-entropy state). [Interestingly, note that the Gibbs entropy cannot increase for closed systems (a consequence of Liouville's theorem), which illustrates that the Boltzmann entropy is, dare I say, better. Some say the Boltzmann entropy is a special case of the Gibbs entropy, but that is only in the boring case of maximal entropy.]
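The counting behind $S = k \log \Omega$ can be illustrated with a toy model of the gas-in-a-corner picture (my own example, not from the answer above): $N$ distinguishable particles, each in the left or right half of a box, with the macrostate specified by the count $k$ in the left half. Then $\Omega(k) = \binom{N}{k}$, and the Boltzmann entropy peaks at the homogeneous macrostate $k = N/2$.

```python
from math import comb, log

N = 100  # number of particles (arbitrary)

def boltzmann_entropy(k):
    """S = log Omega (k_B = 1) for the macrostate 'k of N particles in the left half'."""
    return log(comb(N, k))

print(boltzmann_entropy(0))       # all gas in one corner: log 1 = 0
print(boltzmann_entropy(N // 2))  # homogeneous macrostate: the maximum
```

The inhomogeneous macrostate has vastly fewer microstates than the homogeneous one, which is exactly why the dynamics carries the system toward the latter and the Boltzmann entropy rises.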

The same holds in the quantum case. Suppose you have an isolated system with Hilbert space $\mathcal H \cong \mathbb C^N$, and suppose all you know of the system is a list of expectation values of some observables. Consider the set of states $\mathcal S \subset \mathcal H$ consistent with that knowledge. We want to assign a sensible volume $\Omega_{\mathcal S}$ to this set so that we can define the Boltzmann entropy $S = k \log \Omega_{\mathcal S}$. If all we know is the Hilbert space (i.e. we have total ignorance, so that $\mathcal S = \mathcal H$), then it is sensible to define $\Omega = \dim \left( \mathcal H \right) = N$, since entropy is then additive: joining two decoupled systems, their Hilbert spaces combine as a tensor product, their dimensions multiply, and hence $\log \Omega$ is indeed additive. This is consistent with what your professor wrote down.

More generally, if $\mathcal S$ is merely a subregion of the total space, we would assign to $\Omega$ the corresponding fraction. This can be sensibly defined as follows: let $\pi: \mathbb C^N \to \mathbb CP^{N-1}$ denote the projection onto the projective Hilbert space (i.e. we mod out the gauge degree of freedom); we then have a compact manifold (with a submanifold $\pi(\mathcal S)$) on which we can assign $$ \Omega_{\mathcal S} = N \frac{\textrm{Vol}\left( \pi(\mathcal S) \right)_{\mathbb CP^{N-1}}}{\textrm{Vol}\left( \mathbb CP^{N-1} \right)_{\mathbb CP^{N-1}}} \; .$$

Anyway, the details of defining the volume are not so important for this discussion. The point is that we have a Boltzmann entropy --similar to the classical case-- and it will tend to its maximum --similar to the classical case. Note that its maximization is no mystic concept: it is the obvious statement that the system will flow to the macrostate with the largest number of corresponding microstates.
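The volume fraction in the formula above can be estimated by Monte Carlo, since normalized complex Gaussian vectors sample $\mathbb{CP}^{N-1}$ uniformly (with respect to the Fubini-Study measure). In the sketch below, the observable $A$, the macrostate constraint $\langle A \rangle \geq 0.2$, and the sample count are all my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
A = np.diag(np.linspace(-1.0, 1.0, N))   # a hypothetical observable

def random_pure_state(n, rng):
    """Normalized complex Gaussian vectors sample CP^{n-1} uniformly."""
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

# Macrostate S: all pure states with <A> >= 0.2 (an arbitrary constraint).
samples = 20000
hits = 0
for _ in range(samples):
    psi = random_pure_state(N, rng)
    if np.vdot(psi, A @ psi).real >= 0.2:
        hits += 1

fraction = hits / samples     # estimates Vol(pi(S)) / Vol(CP^{N-1})
omega_S = N * fraction        # Omega_S = N * volume fraction
S = np.log(omega_S)           # Boltzmann entropy of the macrostate (k = 1)
print(fraction, S)
```

Because the constrained macrostate occupies only a fraction of $\mathbb{CP}^{N-1}$, its entropy comes out strictly below the maximum $\log N$ of the totally ignorant macrostate, just as in the classical counting.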