Do the results of statistical mechanics depend upon the choice of macrostates?

This was too long for a comment, so I am posting it as an answer.

I second @MarkMitchison's advice to read E.T. Jaynes' work. His point is exactly the one you have made. If I have understood him correctly, entropy (in statistical mechanics) is a tool for statistical inference, which equips you to make the least biased decision regarding various macroscopic parameters based only on the information you have (that information being your knowledge of the macrostate) and nothing more. But just because you made a statistical inference doesn't mean that nature must conform to it. Whether your inference is correct is to be verified by experiment. As far as I am aware, in ordinary cases statistical inference based on maximizing entropy works excellently, but a priori it need not have. If you find that experimental results do not validate your inference, it means that the information you had was inadequate, irrelevant, or incorrect.

When entropy is interpreted this way, it becomes much more general. Let me give an example from my own research. I have used an entropy-maximization procedure to find the equilibrium diameter distribution of droplets in turbulent flow experiments, based only on the knowledge of the mean droplet volume (just as you would find the velocity distribution of molecules given the mean energy). In some cases it gives a good fit; in others it doesn't. Where it doesn't fit, this indicates that factors other than the mean droplet volume dictate the size distribution, and I have to introduce additional hypotheses to account for them.
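To make this concrete, here is a minimal sketch (in Python, with made-up numbers purely for illustration) of what maximizing entropy subject only to a fixed mean droplet volume gives: an exponential density in volume, which becomes a $d^2\,e^{-\mathrm{const}\cdot d^3}$ density in diameter after a change of variables.

```python
import numpy as np

def maxent_diameter_pdf(d, mean_volume):
    """Illustrative maximum-entropy droplet size density.

    Maximizing (differential) entropy in the volume variable v = (pi/6) d^3
    subject only to a fixed mean <v> gives an exponential density in v;
    changing variables to the diameter d brings in the Jacobian dv/dd ~ d^2.
    """
    v = (np.pi / 6.0) * d**3
    dv_dd = (np.pi / 2.0) * d**2
    return np.exp(-v / mean_volume) / mean_volume * dv_dd

# Hypothetical numbers, chosen only to make the plot/normalization sensible:
diameters = np.linspace(1e-6, 200e-6, 400)                       # metres
pdf = maxent_diameter_pdf(diameters,
                          mean_volume=(np.pi / 6.0) * (50e-6)**3)
print(np.trapz(pdf, diameters))   # ~1: the density is normalized
```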


For a system in thermal equilibrium, the only admissible macrostates are those of the form $\rho=e^{-S/k_B}$, where $S$ is a linear combination of additively conserved quantum numbers. This severely limits the possibilities to ensembles such as the canonical and grand canonical ensembles, and excludes your choice.
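For concreteness, the grand canonical ensemble is the standard instance of this form: fixing the expectation values of the conserved energy $H$ and particle number $N$ gives

$$\rho = e^{-S/k_B}, \qquad S = k_B\left[\beta(H-\mu N) + \ln Z\right], \qquad Z = \operatorname{Tr}\,e^{-\beta(H-\mu N)},$$

i.e. $\rho = Z^{-1}e^{-\beta(H-\mu N)}$, with $S$ a linear combination of conserved quantities (plus a constant fixing the normalization).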

Outside of equilibrium, the admissible macrostates are still of the form $\rho=e^{-S/k_B}$, but the choices for $S$ are more varied. See, e.g., Chapter 10 of my online book ''Classical and Quantum Mechanics via Lie algebras''. This chapter also contains a discussion of the relation between entropy and information.


What's wrong with this reasoning?

First of all let me say what is right: you are right that the definition of macrostates is a choice. In your example, we could split the paramagnet into two equal sections (in our minds) and describe the macrostate in terms of the magnetizations $M_1$ and $M_2$ of each section. If we keep subdividing in this way eventually we end up in the situation you describe, where we consider the magnetization of each spin separately.

What's wrong is your handling of the thermodynamic limit. The statement 'the physical macrostate minimizes free energy' is only true in this limit. In any finite-size system, statistical mechanics gives you a probability distribution over the macrostates of the system, and while the one with minimum free energy has maximum likelihood, there is no reason for the distribution to be sharp. In particular, if you consider each spin separately then the probability distribution is simply given by the Boltzmann factor, $P(\vec{s}) \sim \exp[\mu \vec{s}\cdot \vec{H}/k T]$.
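As a quick illustration (a sketch with $k_B=\mu=1$ and arbitrary numbers), the distribution for a single spin stays broad no matter what:

```python
import numpy as np

# Probability of a single spin s = ±1 in field H at temperature T:
# P(s) ∝ exp(mu * s * H / (kB * T)).  Illustrative units: kB = mu = 1.
H, T = 0.5, 1.0
s = np.array([+1, -1])
w = np.exp(s * H / T)
P = w / w.sum()
print(dict(zip(s, P)))   # roughly {+1: 0.73, -1: 0.27} -- nowhere near sharp
```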

Now, in order for the distribution to become sharp (and thus for the conclusion that the physical macrostate minimizes free energy to be valid), what is needed is that the number of microstates per macrostate becomes large. If I consider the macrostate to be described by the total magnetization, $M$, or the magnetizations $(M_1,M_2,\ldots)$ of a fixed finite number of different sections, then as the number of spins $N$ becomes large the number of microstates per macrostate also grows (exponentially in $N$), so the thermodynamic limit works fine. However, if I consider the macrostate as specifying the magnetization of every spin, then the number of microstates per macrostate is a constant (equal to one), and we have a problem.
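Here is a quick numerical check of that sharpening (a sketch with $k_B=\mu=1$ and an arbitrary field; the binomial coefficient is exactly the count of microstates for each value of $M$):

```python
import numpy as np
from scipy.stats import binom

# N independent spins s_i = ±1 in field H: each spin is up with probability
# p = e^{H/T} / (e^{H/T} + e^{-H/T}), and the macrostate M = (#up - #down)
# has binomial statistics, with multiplicity C(N, n_up) microstates per M.
def relative_width_of_M(N, H=0.5, T=1.0):
    p = np.exp(H / T) / (np.exp(H / T) + np.exp(-H / T))
    n_up = np.arange(N + 1)
    P = binom.pmf(n_up, N, p)          # already includes the microstate count
    M = 2 * n_up - N
    mean = np.sum(P * M)
    std = np.sqrt(np.sum(P * (M - mean) ** 2))
    return std / N                     # width of the distribution of M/N

for N in (10, 100, 1000, 10000):
    print(N, relative_width_of_M(N))   # shrinks like 1/sqrt(N): P(M/N) sharpens
```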

In short, your argument fails because you cannot assume the free energy is minimized as you have defined it.

Is it somehow illegal to make this macrostate choice?

No-one is going to arrest you, but in order for the thermodynamic limit to work, the number of microstates per macrostate must grow (exponentially) with $N$.

Could acquiring all of this information about the spins necessarily change how the magnet behaves, e.g. from something like Landauer's principle?

As I understand it, Landauer's principle implies that there is a minimum entropy cost to acquiring information, but it says nothing about where this excess entropy must be held. If you are considering the paramagnet in equilibrium with a thermal bath, nothing will change. Of course, in a real paramagnet, continuous measurement of each spin certainly would affect how the system behaves.

In general, can changing macrostate choice ever change the predictions of statistical mechanics?

It changes what your model can predict, yes. For example, if I define the macrostate in terms of the two magnetizations $(M_1,M_2)$ then I get more information (in principle) than if I define the macrostate in terms of the magnetization of the whole system, $M$. There are some cases where this might be significant, for example if the external field has spatial variation. However, the predictions must be compatible with each other, in the sense that $M = M_1 + M_2$ (in the thermodynamic limit) or, for a finite system, $P(M) = \sum_{M_1+M_2=M}P(M_1,M_2)$.
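If it's useful, here is a small brute-force illustration of that marginalization for a toy system of $N=8$ independent spins split into two halves (a sketch with $k_B=\mu=1$):

```python
import numpy as np
from itertools import product

# Check that the coarse macrostate distribution P(M) is the marginal of the
# finer one P(M1, M2) for N independent spins in a field (kB = mu = 1).
N, H, T = 8, 0.5, 1.0
boltz = lambda s: np.exp(s * H / T)

P_M, P_M1M2 = {}, {}
for spins in product([+1, -1], repeat=N):
    w = np.prod([boltz(s) for s in spins])
    M1, M2 = sum(spins[: N // 2]), sum(spins[N // 2 :])
    P_M[M1 + M2] = P_M.get(M1 + M2, 0.0) + w
    P_M1M2[(M1, M2)] = P_M1M2.get((M1, M2), 0.0) + w

Z = sum(P_M.values())
for M in sorted(P_M):
    marginal = sum(p for (M1, M2), p in P_M1M2.items() if M1 + M2 == M)
    print(M, P_M[M] / Z, marginal / Z)   # the two columns agree
```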