Why Does Entropy Increase?

Sometimes it’s useful to try to define our terms. Consider an isolated system for which we have both a macroscopic description (mass, temperature, pressure, etc.) and a microscopic description in terms of its constituents (molecules, whatever), so that each microscopic description can be encoded as a point in a gazillion-dimensional phase space, one dimension for each microscopic degree of freedom. In general a given macrostate will not correspond to a unique microstate; in fact we can think of the “ensemble” of microstates compatible with the macro parameters as a sort of blob of fluid in phase space.

The log of the volume of this fluid (equivalently, of the number of possible microstates) corresponding to the given macrostate is the entropy of that state. (We divide the phase space up into a bunch of mini-volumes, and the number of mini-volumes the fluid occupies becomes the number of states.) In a very concrete way, the entropy of the macrostate represents what we don’t know about the microstate, namely how many microstate possibilities remain for it. If we knew precisely which of these possible microstates was the real one, the entropy would be zero.
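
In symbols this is just Boltzmann’s formula: if W is the number of microstates (mini-volumes of phase fluid) compatible with the macrostate, then

```latex
% Boltzmann entropy: W = number of microstates compatible with the macrostate
S = k_B \ln W
```

so a single known microstate, W = 1, gives S = 0.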

If our system is truly isolated, though, there is a theorem of classical mechanics, Liouville’s theorem, which says that time evolution preserves phase-space volume, and that would appear to preclude entropy increase. The conventional explanation is that while our original blob of phase fluid doesn’t expand, it develops innumerable fingers penetrating other regions, coming ‘close’ to a much larger volume. If we now “coarse grain,” by re-dividing the phase space and counting as “accessible” every cell that contains a bit of the original phase fluid, we find that entropy has increased. It’s hard to avoid the feeling that we have witnessed some sleight of hand here, though, since coarse grained or not, the undisturbed system should still preserve the memory of its past and of its original volume.
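
A minimal numerical sketch of that coarse-graining story, assuming Arnold’s cat map on the unit torus as a stand-in for Hamiltonian time evolution (it is area-preserving, so Liouville’s theorem is respected), with an arbitrary 50-by-50 grid of coarse cells:

```python
import numpy as np

rng = np.random.default_rng(0)

def cat_map(q, p):
    """One step of Arnold's cat map: area-preserving but chaotic."""
    return (q + p) % 1.0, (q + 2.0 * p) % 1.0

def coarse_grained_entropy(q, p, cells=50):
    """Log of the number of coarse cells containing any phase fluid."""
    occupied = set(zip((q * cells).astype(int), (p * cells).astype(int)))
    return np.log(len(occupied))

# The "macrostate" we start from: a tiny blob of phase fluid.
n = 20000
q = 0.5 + 0.01 * rng.random(n)
p = 0.5 + 0.01 * rng.random(n)

for step in range(12):
    print(f"step {step:2d}: coarse-grained entropy = {coarse_grained_entropy(q, p):.2f}")
    q, p = cat_map(q, p)
```

The exact area of the blob never changes (the map’s Jacobian has determinant one), but the number of occupied coarse cells, and with it the coarse-grained entropy, climbs rapidly as the blob stretches into thin filaments, saturating near the log of the total number of cells.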

In practice, though, there never is a truly isolated system, and tiny disturbances can act as a kind of entropic viscosity, smearing the phase fluid through all the phase space it comes near. I said that entropy represents what we don’t know about a system, so the increase in entropy due to outside disturbance can equally be thought of as a leaking away of what we once knew about the system. In a perfectly classical world we could imagine chasing down all that leaking information and somehow keeping our entropy down, but in the relativistic cosmos, information can leak away irretrievably by going over horizons.
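
To see that leakage in the same toy model: the cat map is exactly reversible, so an undisturbed blob run forward and then backward returns to where it started and remembers its past perfectly. Adding a tiny random kick at each step, standing in for outside disturbances (the 1e-6 kick size below is an arbitrary assumption), spoils the round trip: the blob comes back smeared over the whole torus, and what we once knew about it is gone.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps = 20000, 15

def forward(q, p):   # Arnold cat map
    return (q + p) % 1.0, (q + 2.0 * p) % 1.0

def backward(q, p):  # exact inverse of the cat map
    return (2.0 * q - p) % 1.0, (p - q) % 1.0

for kick in (0.0, 1e-6):
    q = 0.5 + 0.01 * rng.random(n)   # the same tiny initial blob
    p = 0.5 + 0.01 * rng.random(n)
    for _ in range(steps):
        q, p = forward(q, p)
        q = (q + kick * rng.standard_normal(n)) % 1.0   # outside disturbance
        p = (p + kick * rng.standard_normal(n)) % 1.0
    for _ in range(steps):           # now try to undo the evolution exactly
        q, p = backward(q, p)
    spread = max(q.max() - q.min(), p.max() - p.min())
    print(f"kick = {kick:.0e}: spread after round trip = {spread:.3f}")
```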

There are two kinds of relevant horizon here: the event horizons of black holes and the cosmic “border” of our observable universe. Because the universe is expanding, with recession speeds that grow with distance (and an expansion that is accelerating), there are regions of the universe which we can never hope to reach, or even receive signals from.
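
For concreteness, the entropy conventionally assigned to a horizon is the Bekenstein-Hawking entropy, one quarter of the horizon area measured in Planck units:

```latex
% Bekenstein-Hawking entropy of a horizon with area A
S_{\mathrm{hor}} = \frac{k_B\, c^{3} A}{4 G \hbar}
```

and the same area formula is applied, following Gibbons and Hawking, to the cosmological event horizon of an accelerating universe.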

We associate an entropy with these horizons, which at first seems abstract and strange. What, though, if we recall that entropy is really just lost information? Then what if the entropy of these horizons is just the information about our universe that has leaked out through them? I’m throwing this out as speculation; I have no idea whether experts regard the idea as obvious or as obviously wrong. I kind of like thinking about these entropies as what the universe has forgotten about itself. If that idea is correct, though, should we really expect the black hole information loss “paradox” to have a happy solution?
