How is information related to energy in physics?

If one measures lack of information by the entropy (as usual in information theory), and equates it with the entropy in thermodynamics then the laws of thermodynamics say just the opposite: In a mechanically, chemically and thermally isolated system described in the hydrodynamic approximation, energy is conserved whereas entropy can be created, i.e., information can get lost. (In an exact microscopic Hamiltonian description, the concept of information makes no sense, as it assumes a coarse-grained description in which not everything is known.)

The mainstream view in physics (aside from speculations about future physics not yet checkable by experiment) is that on the most fundamental level energy is conserved (a consequence of the time-translation symmetry of the universe), while entropy is a concept of statistical mechanics that is applicable only to macroscopic bodies and constitutes an approximation, though at the human scale a very good one.


In contrast to @ArnoldNeumaier, I'd argue that the information content of the World could be constant: it almost certainly can't get smaller, and whether and how it gets bigger depends on the resolution of questions about the correct interpretation of what exactly happens when one makes a quantum measurement. I'll leave the latter (the resolution of quantum interpretation) aside and instead discuss situations wherein information is indeed constant. See here for a definition of "information": the information in a thing is essentially the size in bits of the smallest document one can write that still uniquely defines that thing. For the special case of a statistically independent string of symbols, the Shannon information is the mean of the negative logarithms of the probabilities $p_j$ with which the symbols appear in an infinite string:

$H=-\sum_j p_j \log p_j$

If the base of the logarithm is 2, $H$ is in bits. How this relates to the smallest defining document for the string is made precise by Shannon's noiseless coding theorem.
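As a quick illustration of the formula (my own toy sketch, not part of the original argument), here it is evaluated for two symbol distributions:

```python
import math

def shannon_entropy_bits(probs):
    """Shannon entropy H = -sum_j p_j * log2(p_j) of a symbol distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per toss; a biased coin carries less,
# so its output string admits a shorter defining document per symbol.
print(shannon_entropy_bits([0.5, 0.5]))  # 1.0
print(shannon_entropy_bits([0.9, 0.1]))  # ~0.469
```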

In the MaxEnt interpretation of the second law of thermodynamics, pioneered by E. T. Jaynes (also famous for the Jaynes–Cummings model of a two-level atom interacting with a single electromagnetic field mode), the wonted "observable" or "experimental" entropy $S_{exp}$ (this is what the Boltzmann H formula yields) of a system comprises what I would call the true Shannon information, or Kolmogorov complexity, $S_{Sha}$, plus the mutual information $M$ between the unknown states of distinguishable subsystems. In a gas, $M$ measures the predictability of the states of particles conditioned on knowledge about the states of other particles, i.e. it is a logarithmic measure of statistical correlation between particles:

$S_{exp} = S_{Sha} + M$ (see this reference, as well as many other works by E. T. Jaynes on this subject)

$S_{Sha}$ is the minimum information in bits needed to describe a system, and it is constant because the basic laws of physics are reversible: the World, therefore, has to "remember" how to undo any evolution of its state. $S_{Sha}$ cannot in general be measured; indeed, even given a full description of a system state, $S_{Sha}$ is not computable (i.e. one cannot compute the maximum reversible compression of that description). The Gibbs entropy formula calculates $S_{Sha}$ when the joint probability density function for the system state is known.
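To make the decomposition $S_{exp} = S_{Sha} + M$ concrete, here is a minimal numerical sketch (my own toy example, not Jaynes's) for two correlated two-state subsystems, identifying $S_{Sha}$ with the joint (Gibbs/Shannon) entropy, $S_{exp}$ with the sum of subsystem entropies, and $M$ with their difference; the joint distribution is made up for illustration:

```python
import math

def H(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution p(a, b) for two correlated two-state subsystems.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distributions of each subsystem.
pA = [sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)]
pB = [sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)]

S_Sha = H(joint.values())  # joint (Gibbs/Shannon) entropy, ~1.72 bits
S_exp = H(pA) + H(pB)      # sum of subsystem entropies, 2.0 bits
M = S_exp - S_Sha          # mutual information, ~0.28 bits

print(S_Sha, S_exp, M)     # S_exp = S_Sha + M, with M > 0 for correlated subsystems
```

For uncorrelated subsystems the joint distribution factorises, $M = 0$, and the two entropies coincide, which is exactly the starting assumption of Jaynes's argument below.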

The experimental (Boltzmann) entropy stays constant in a reversible process and rises in an irreversible one. Jaynes's "proof" of the second law assumes that a system begins with all its subsystems uncorrelated, so that $S_{Sha} = S_{exp}$. In this assumed state, the subsystems are all perfectly statistically independent. After an irreversible change (e.g. a gas is allowed to expand into a bigger container by opening a tap), the particles are subtly correlated, so that their mutual information is $M > 0$. Therefore the observable entropy $S_{exp}$ must rise. This ends Jaynes's proof.
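For the tap-opening example, the size of the rise in $S_{exp}$ is the textbook free-expansion result $\Delta S = N k_B \ln(V_2/V_1)$ (an illustrative aside of mine, not part of Jaynes's argument); a quick check for one mole of ideal gas doubling its volume:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
N = 6.02214076e23   # one mole of gas particles

# Free expansion into double the volume: Delta S = N * k_B * ln(V2/V1)
delta_S = N * k_B * math.log(2)
print(delta_S)  # ~5.76 J/K per mole
```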

See also this answer for an excellent description of the entropy changes in an irreversible process. This question is also relevant to you.

Energy is almost unrelated to information; however, there is a lower limit on the work one must do to "forget" information in a non-reversible algorithm: this is the Landauer limit, and it arises to uphold the second law of thermodynamics simply because any information must be encoded in a physical system's state: there is no other "ink" to write with in the material world. Therefore, if we wipe computer memory, the Kolmogorov complexity of the former memory state must get pushed into the state of the surrounding World.
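As a numerical sketch of the bound (room temperature assumed for illustration), erasing one bit must dissipate at least $k_B T \ln 2$ of heat:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

# Landauer limit: minimum heat dissipated per bit erased is k_B * T * ln 2.
per_bit = k_B * T * math.log(2)
print(per_bit)                # ~2.87e-21 J per bit
print(per_bit * 8 * 1024**3)  # wiping 1 GiB at the limit: ~2.5e-11 J
```

Real hardware dissipates many orders of magnitude more than this; the limit only marks the floor the second law imposes.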

Afterword: I should declare bias by saying I subscribe to many of the ideas of the MaxEnt interpretation, but disagree with Jaynes's "proof" of the second law. There is no problem with Jaynes's logic, but (in my opinion) after an irreversible change the system's substates are correlated, and one has to describe how they become uncorrelated again before one can apply Jaynes's argument again. So, sadly, I don't think we have a proof of the second law here.


Energy is the relationship between information regimes. That is, energy is manifested, at any level, between structures, processes and systems of information in all of its forms, and all entities in this universe are composed of information. To understand information and energy, consider a hypothetical universe consisting only of nothingness. In this universe, imagine the presence of the smallest, most fundamental possible instance of deformation, which constitutes a particle in this otherwise pristine firmament of nothingness. Imagine there is only one instance of this most fundamental particle, and let us dub it a Planck-Particle (PP). What caused this PP to exist is not known, but the existence of the PP constitutes the existence of one Planck-Bit (PB) of information. Resist the temptation to declare that energy is what caused our lone PP to exist. In this analogy, as in our reality, the 'big' bang that produced our single PP is not unlike the big bang that caused our known universe, in that neither can be described in terms of any energy relationship or regime known to the laws of physics in this universe.

This PB represents the most fundamental manifestation of information possible in this universe. Hence, the only energy that exists in this conceptual universe will be described by the relationship (there's that word again) between the lone PP and the rest of the firmament of nothingness that describes its universe. Call this energy a Planck-Quantum (PQ). Note that this PQ of energy exists only by virtue of the existence of the PP alone in relation to the surrounding nothingness. With only one PP there are few descriptions of energy that can be given. There is no kinetic energy, no potential energy, no gravity, no atomic or nuclear energy, no entropy, no thermodynamics, etc. However, there will be some very fundamental relationships pertaining to the degrees of freedom defined by our PP compared to its bleak environment that may be describable as energy.

Should we now introduce a second PP into our sparse universe, you may now define further relationships and energy regimes within our growing conceptual universe, and formulate Nobel-worthy theories and equations which describe these relationships. Kinetic energy suddenly manifests as the relationship of distance between our lonely PPs comes into existence. Likewise, energy as we know it describes the relationships manifested between information regimes, which are describable by the language of mathematics.