Time's arrow and Boltzmann's entropy

Joel L. Lebowitz (2008), Scholarpedia, 3(4):3448. doi:10.4249/scholarpedia.3448

Curator: Joel L. Lebowitz

The arrow of time expresses the fact that in the world about us the past is distinctly different from the future. Milk spills but doesn't unspill; eggs splatter but do not unsplatter; waves break but do not unbreak; we always grow older, never younger. These processes all move in one direction in time - they are called "time-irreversible" and define the arrow of time. It is therefore very surprising that the relevant fundamental laws of nature make no such distinction between the past and the future. This in turn leads to a great puzzle - if the laws of nature permit all processes to be run backwards in time, why don't we observe them doing so? Why does a video of an egg splattering run backwards look ridiculous? Put another way: how can time-reversible motions of atoms and molecules, the microscopic components of material systems, give rise to the observed time-irreversible behavior of our everyday world? The resolution of this apparent paradox is due to Maxwell, Thomson and (particularly) Boltzmann. These ideas also explain most other arrows of time - in particular, why we remember the past but not the future.


What is time?

Time is arguably among the most primitive concepts we have—there can be no action or movement, no memory or thought, except in time. Of course this does not mean that we understand, whatever is meant by that loaded word "understand", what time is. As put by Saint Augustine:

"What is time? If nobody asks me, I know; but if I were desirous to explain it to one that should ask me, plainly I know not."

In a book entitled Time's Arrow and Archimedes' Point, the Australian philosopher Huw Price describes well the "stock philosophical debates" about time. These have not changed much since the time of Saint Augustine or even earlier.

"... Philosophers tend to be divided into two camps. On one side there are those who regard the passage of time as an objective feature of reality, and interpret the present moment as the marker or leading edge of this advance. Some members of this camp give the present ontological priority, as well, sharing Augustine's view that the past and the future are unreal. Others take the view that the past is real in a way that the future is not, so that the present consists in something like the coming into being of determinate reality. .... Philosophers in the opposing camp regard the present as a subjective notion, often claiming that now is dependent on one's viewpoint in much the same way that here is. Just as "here" means roughly "this place", so "now" means roughly "this time", and in either case what is picked out depends where the speaker stands. In this view there is no more an objective division of the world into the past, the present, and the future than there is an objective division of a region of space into here and there.

Often this is called the block universe view, the point being that it regards reality as a single entity of which time is an ingredient, rather than as a changeable entity set in time."

A very good description of the block universe point of view is given by Kurt Vonnegut in his novel Slaughterhouse-Five. The coexistence of past, present and future forms one of the themes of the book. The hero, Billy Pilgrim, speaks of the inhabitants of Tralfamadore, a planet in a distant galaxy: "The Tralfamadorians can look at all different moments just the way we can look at a stretch of the Rocky Mountains, for instance. They can see how permanent all the moments are, and they can look at any moment that interests them. It is just an illusion we have here on earth that one moment follows another like beads on a string, and that once a moment is gone it is gone forever."

This view (with relativity properly taken into account) is certainly the one held by most physicists—at least when they think as physicists. It is well expressed in the often quoted passage from Einstein's letter of condolences upon the death of his youthful best friend Michele Besso: "Michele has left this strange world just before me. This is of no importance. For us convinced physicists the distinction between past, present and future is an illusion, although a persistent one."

There are however also more radical views about time among physicists. At a conference on the Physical Origins of Time Asymmetry which took place in Mazagon, Spain, in 1991, the physicist Julian Barbour conducted an informal poll about whether time is fundamental. Here is Barbour's account of it, from his book The End of Time:

"During the Workshop, I conducted a very informal straw-poll, putting the following question to each of the 42 participants: Do you believe time is a truly basic concept that must appear in the foundations of any theory of the world, or is it an effective concept that can be derived from more primitive notions in the same way that a notion of temperature can be recovered in statistical mechanics?

The results were as follows: 20 said there was no time at a fundamental level, 12 declared themselves to be undecided or wished to abstain, and 10 believed time did exist at the most basic level. However, among the 12 in the undecided/abstain column, 5 were sympathetic to or inclined to the belief that time should not appear at the most basic level of theory."

Matter in space-time

In this article, the intuitive notion of space-time as a primitive undefined concept is taken as a working hypothesis. This space-time continuum is the arena in which matter, radiation and all kinds of other fields exist and change.

Many of these changes have a uni-directional order "in time", or display an arrow of time. One might therefore expect, as Feynman puts it, that there is some fundamental law which says that "uxels only make wuxels and not vice versa." But we have not found such a law ... "so this manifest fact of our experience is not part of the fundamental laws of physics." The fundamental microscopic laws (with some, presumably irrelevant, exceptions) all turn out to be time symmetric. Newton's laws, the Schrödinger equation, the special and general theory of relativity, etc., make no distinction between the past and the future—they are "time-symmetric". As put by Brian Greene in his book The Fabric of the Cosmos: Space, Time and the Structure of Reality, "no one has ever discovered any fundamental law which might be called the Law of the Spilled Milk or the Law of the Splattered Egg."

It is only secondary laws, those describing the behavior of macroscopic objects containing many, many atoms, such as the second law of thermodynamics (discussed below), which explicitly contain this time asymmetry. The obvious question then is: how does one go from a time-symmetric description of the dynamics of atoms to a time-asymmetric description of the evolution of macroscopic systems made up of atoms?

In answering that question, one may mostly ignore relativity and quantum mechanics. These theories, while essential for understanding both the very large scale and the very small scale structure of the universe, have a "classical limit" which is adequate for a basic understanding of time's arrow. One may also, for simplicity, ignore radiation (made up of photons) and any entities smaller than atoms, and treat the atoms as point particles interacting with each other via some pair potential and evolving according to Newtonian laws.

In the context of Newtonian theory, the "theory of everything" at the time of Thomson, Maxwell and Boltzmann, the problem can be formally presented as follows: the complete microscopic (or micro) state of a classical system of \(N\) particles is represented by a point \(X\) in its phase space \(\Gamma\ ,\) \( X =(r_1, p_1, r_2, p_2, ..., r_N, p_N)\ ,\) \(r_i\) and \(p_i\) being three dimensional vectors representing the position and momentum (or velocity) of the \(i\)th particle. When the system is isolated, say in a box \(V\) with reflecting walls, its evolution is governed by Hamiltonian dynamics with some specified Hamiltonian \(H(X)\ ,\) which we will assume for simplicity to be an even function of the momenta: no magnetic fields. Given \(H(X)\ ,\) the microstate \(X(t_0)\) at time \(t_0\) determines the microstate \(X(t)\) at all future and past times \(t\) during which the system will be or was isolated. Let \(X(t_0)\) and \(X(t_0+\tau)\ ,\) with \(\tau\) positive, be two such microstates. Reversing (physically or mathematically) all velocities at time \(t_0+\tau\ ,\) we obtain a new microstate \(RX(t_0+\tau)\ ,\) where \[ RX = (r_1,-p_1, r_2,-p_2, ...,r_N,-p_N). \] If we now follow the evolution for another interval \(\tau\ ,\) we find that the new microstate at time \(t_0 + 2\tau\) is just \(RX(t_0)\ ,\) the microstate \(X(t_0)\) with all velocities reversed. Hence if there is an evolution, i.e. a trajectory \(X(t)\ ,\) in which some property of the system, specified by a function \(f(X(t))\ ,\) behaves in a certain way as \(t\) increases, then if \(f(X) = f(RX)\) there is also a trajectory in which the property evolves in the time-reversed direction. So why is one type of evolution, the one consistent with an entropy increase in accord with the "second law" of thermodynamics, common, while the other is never seen?
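
This reversibility is easy to exhibit numerically. The following is a minimal sketch (illustrative, not from the article; the two-particle harmonic pair potential and all parameters are assumptions): two particles are evolved with the time-reversible velocity-Verlet integrator, all momenta are then reversed, and the same evolution, run for the same time, returns the initial positions with reversed momenta, up to floating-point round-off.

    import numpy as np

    def force(r):
        # pair force for the assumed potential V(d) = 0.5*(d - 1)^2, d = |r1 - r2|
        d = r[0] - r[1]
        dist = np.linalg.norm(d)
        f = -(dist - 1.0) * d / dist
        return np.array([f, -f])

    def verlet(r, p, dt, steps):
        # velocity-Verlet integration (unit masses), a time-reversible scheme
        for _ in range(steps):
            p = p + 0.5 * dt * force(r)
            r = r + dt * p
            p = p + 0.5 * dt * force(r)
        return r, p

    r0 = np.array([[0.0, 0.0, 0.0], [1.3, 0.0, 0.0]])   # X(t0): positions...
    p0 = np.array([[0.1, 0.2, 0.0], [-0.1, 0.0, 0.0]])  # ...and momenta
    r, p = verlet(r0, p0, dt=1e-3, steps=5000)   # evolve X(t0) -> X(t0 + tau)
    r, p = verlet(r, -p, dt=1e-3, steps=5000)    # reverse momenta, evolve again
    print(np.allclose(r, r0), np.allclose(-p, p0))   # True True: RX(t0) recovered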

An example of the entropy increasing evolution is the approach to a uniform temperature of systems initially kept isolated at different temperatures, as exemplified by putting a glass of hot tea and a glass of cold water into an insulated container. It is common experience that after a while the two glasses and their contents will come to the same temperature.

This is one of the "laws" of thermodynamics, a subject developed in the eighteenth and nineteenth centuries, purely on the basis of macroscopic observations—primarily the workings of steam engines—so central to the industrial revolution then taking place. Thermodynamics makes no reference to atoms and molecules, and its validity remains independent of their existence and nature—classical or quantum. The high point in the development of thermodynamics came in 1865 when Rudolf Clausius pronounced his famous two fundamental theorems: 1. The energy of the universe is constant. 2. The entropy of the universe tends to a maximum.


The "second law" says that there is a quantity called entropy associated with macroscopic systems which can only increase, never decrease, in an isolated system. In Clausius' poetic language, the paradigm of such an isolated system is the universe itself. But even leaving aside the universe as a whole and just considering our more modest example of two glasses of water in an insulated container, this is clearly a law which is asymmetric in time. Entropy increase is identified with heat flowing from hot to cold regions leading to a uniformization of the temperature. But, if we look at the microscopic dynamics of the atoms making up the systems then, as noted earlier, if the energy density or temperature inside a box \(V\) gets more uniform as time increases, then, since the energy density profile is the same for \(X\) and \(RX\ ,\) there is also an evolution in which the temperature gets more nonuniform.

There is thus clearly a difficulty in deriving, or even showing the compatibility of, the second law with the microscopic dynamics. This is illustrated by the impossibility of time ordering of the snapshots in Fig. 1 using solely the microscopic dynamical laws: the time symmetry of the microscopic dynamics implies that if (a, b, c, d) is a possible ordering so is (d, c, b, a).

Figure 1: A sequence of "snapshots", a, b, c, d, taken at times \(t_a, t_b, t_c, t_d\ ,\) each representing a macroscopic state of a system, say a fluid with two "differently colored" atoms or a gas in which the shading indicates the local density. How would one order this sequence in time?

The explanation of this apparent paradox, due to Thomson, Maxwell and Boltzmann, shows that not only is there no conflict between reversible microscopic laws and irreversible macroscopic behavior, but, as clearly pointed out by Boltzmann in his later writings, there are extremely strong reasons to expect the latter from the former. (Boltzmann's early writings on the subject are sometimes unclear, wrong, and even contradictory. His later writings, however, are generally very clear). These reasons involve several interrelated ingredients which together provide the required distinction between microscopic and macroscopic variables and explain the emergence of definite time asymmetric behavior in the evolution of the latter despite the total absence of such asymmetry in the dynamics of the former.

Macrostates

To describe the macroscopic state of a system of \(N\) atoms in a box \(V\ ,\) say \(N \gtrsim 10^{20}\ ,\) we make use of a much cruder description than that provided by the microstate \(X\ .\) We shall denote by \(M\) such a macroscopic description or macrostate. As an example we may take \(M\) to consist of the specification, to within a given accuracy, of the energy and number of particles in each half of the box \(V\ .\) A more refined macroscopic description would divide \(V\) into \(K\) cells, where \(K\) is large but still \(K << N\ ,\) and specify the number of particles, the momentum, and the amount of energy in each cell, again with some tolerance.

Clearly \(M\) is determined by \(X\) but there are many \(X\)'s (in fact a continuum) which correspond to the same \(M\ .\) Let \(\Gamma_M\) be the region in \(\Gamma\) consisting of all microstates \(X\) corresponding to a given macrostate \(M\) and denote by \(|\Gamma_M|=(N! h^{3N})^{-1} \int_{\Gamma_M}\prod_{i=1}^N dr_i \, dp_i\ ,\) its symmetrized \(6N\) dimensional Liouville volume in units of \(h^{3N}\ .\) At this point this is simply an arbitrary choice of units. It is however a very convenient one for dealing with the classical limit of quantum systems.
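
To make the many-to-one character of the map \(X \to M(X)\) concrete, here is a minimal sketch (an illustration under assumed conventions, not from the article): the "macrostate" of a collection of positions in a unit box keeps only the fraction of particles in the left half, recorded to within a tolerance of 0.01, so a continuum of different microstates report the same \(M\ .\)

    import numpy as np

    def macrostate(positions):
        # keep only the fraction of particles with x < 1/2, to within 0.01;
        # a fuller macrostate would also record coarse-grained energy/momentum
        frac_left = float(np.mean(positions[:, 0] < 0.5))
        return round(frac_left, 2)

    rng = np.random.default_rng(0)
    X1 = rng.random((10**6, 3))      # two different "typical" microstates...
    X2 = rng.random((10**6, 3))
    print(macrostate(X1), macrostate(X2))   # ...same macrostate: 0.5 0.5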

Time evolution of macrostates: An example

Consider a situation in which a gas of \(N\) atoms with energy \(E\) (with some tolerance) is initially confined by a partition to the left half of the box \(V\ ,\) and suppose that this constraint is removed at time \(t_a\ ,\) see Fig. 1. The phase space volume available to the system for times \(t>t_a\) is then fantastically enlarged compared to what it was initially, roughly by a factor of \(2^N\ .\) If the system contains 1 mole of gas then the volume ratio of the unconstrained phase space region to the constrained one is far larger than the ratio of the volume of the known universe to the volume of one atom.
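
The sizes involved here can be checked with a few lines of arithmetic. The sketch below is illustrative (the universe and atom radii are rough assumed values, not figures from the article): it compares \(\log_{10} 2^N\) for one mole with the logarithm of the universe-to-atom volume ratio.

    import math

    N = 6.022e23                            # one mole of atoms
    log10_phase_ratio = N * math.log10(2)   # log10 of the 2^N volume factor

    # assumed round numbers: universe radius ~ 4.4e26 m, atomic radius ~ 1e-10 m
    log10_volume_ratio = 3 * math.log10(4.4e26 / 1e-10)

    print(f"2^N ~ 10^{log10_phase_ratio:.3g}")                 # 2^N ~ 10^(1.81e+23)
    print(f"V_universe/V_atom ~ 10^{log10_volume_ratio:.0f}")  # ~ 10^110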

Let us now consider the macrostate of this gas as given by \(M=\left({N_L \over N} , {E_L \over E}\right)\ ,\) the fraction of particles and energy in the left half of \(V\) (within some small tolerance). The macrostate at time \(t_a\ ,\) \(M=(1, 1)\ ,\) will be denoted by \(M_a\ .\) The phase-space region \(\Sigma_E\) available to the system for \(t> t_a\ ,\) i.e., the region in which \(H(X) \in (E, E + \delta E)\ ,\) \(\delta E << E\ ,\) will contain new macrostates, corresponding to various fractions of particles and energy in the left half of the box, with phase space volumes very large compared to the initial phase space volume available to the system. We can then expect (in the absence of any obstruction, such as a hidden conservation law) that as the phase point \(X\) evolves under the unconstrained dynamics and explores the newly available regions of phase space, it will with very high probability enter a succession of new macrostates \(M\) for which \(|\Gamma_{M}|\) is increasing. The set of all the phase points \(X_t\ ,\) which at time \(t_a\) were in \(\Gamma_{M_a}\ ,\) forms a region \(T_t \Gamma_{M_a}\) whose volume is, by Liouville's theorem, equal to \(|\Gamma_{M_a}|\ .\) The shape of \(T_t\Gamma_{M_a}\) will however change with \(t\ ,\) and as \(t\) increases \(T_t\Gamma_{M_a}\) will increasingly be contained in regions \(\Gamma_M\) corresponding to macrostates with larger and larger phase space volumes \(|\Gamma_M|\ .\) This will continue until almost all the phase points initially in \(\Gamma_{M_a}\) are contained in \(\Gamma_{M_{eq}}\ ,\) with \(M_{eq}\) the system's unconstrained macroscopic equilibrium state. This is the state in which approximately half the particles and half the energy will be located in the left half of the box, \(M_{eq} = ({1\over 2}, {1 \over 2})\ ,\) i.e. \(N_L /N\) and \(E_L/ E\) will each be in an interval \(\left({1 \over 2} - \epsilon, {1 \over 2} + \epsilon\right)\ ,\) \(N^{-1/2} << \epsilon << 1\ .\)

\(M_{eq}\) is characterized, in fact defined, by the fact that it is the unique macrostate, among all the \(M_\alpha\ ,\) for which \(|\Gamma_{M_{eq}}| / |\Sigma_E| \simeq 1\ ,\) where \(|\Sigma_E|\) is the total phase space volume available under the energy constraint \(H(X) \in (E, E + \delta E)\ .\) (Here the symbol \(\simeq\) means equality when \(N \to \infty\ .\)) That there exists a macrostate containing almost all of the microstates in \(\Sigma_E\) is a consequence of the law of large numbers. The fact that \(N\) is enormously large for macroscopic systems is absolutely critical for the existence of thermodynamic equilibrium states for any reasonable definition of macrostates, e.g., in the above example, for any \(\epsilon\) such that \(N^{-1/2} << \epsilon << 1\ .\) Indeed thermodynamics does not apply (is even meaningless) for isolated systems containing just a few particles. Nanosystems are interesting and important intermediate cases. Note however that in many cases an \(N\) of about 1,000 will already behave like a macroscopic system; see the related discussion about computer simulations below.
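
The concentration expressed by the law of large numbers can be seen in a toy count that keeps only the particle-number part of the macrostate (a minimal sketch, not from the article): among the \(2^N\) equally weighted ways of placing \(N\) labeled particles in the two halves of \(V\ ,\) the fraction with \(N_L/N\) within \(\epsilon = 0.05\) of \(1/2\) rapidly approaches one as \(N\) grows.

    from math import comb

    def fraction_near_half(N, eps):
        # fraction of the 2^N assignments with N_L/N in (1/2 - eps, 1/2 + eps)
        lo, hi = int(N * (0.5 - eps)), int(N * (0.5 + eps))
        return sum(comb(N, k) for k in range(lo, hi + 1)) / 2**N

    for N in (100, 1000, 10000):
        print(N, fraction_near_half(N, eps=0.05))
    # 100 -> ~0.73 ; 1000 -> ~0.998 ; 10000 -> ~1.0 (to machine precision)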

After reaching \(M_{eq}\) we will (mostly) see only small fluctuations in \(N_L(t) / N\) and \(E_L(t) / E\ ,\) about the value \({1 \over 2}\ :\) typical fluctuations in \(N_L\) and \(E_L\) being of the order of the square root of the number of particles involved. (Of course if the system remains isolated long enough we will occasionally also see a return to the initial macrostate—the expected time for such a Poincaré recurrence is however much longer than the age of the universe and so is of no practical relevance when discussing the approach to equilibrium of a macroscopic system.)

As already noted earlier, the scenario in which \(|\Gamma_{M(X(t))}|\) increases with time for the \(M_a\) shown in Fig. 1 cannot be true for all microstates \(X\in \Gamma_{M_a}\ .\) There will of necessity be \(X\)'s in \(\Gamma_{M_a}\) which will evolve for a certain amount of time into microstates \(X(t)\equiv X_t\) such that \(|\Gamma_{M(X_t)}|<|\Gamma_{M_a}|\ ,\) e.g. microstates \(X\in \Gamma_{M_a}\) which have all velocities directed away from the barrier which was lifted at \(t_a\ .\) What is true however is that the subset \(B\) of such "bad" initial states has a phase space volume which is very very small compared to that of \(\Gamma_{M_a}\ .\) This is what is meant by the statement that entropy increasing behavior is typical; a more extensive discussion of typicality is given later.

Boltzmann's entropy

The end result of the time evolution in the above example, that of the fraction of particles and energy becoming and remaining essentially equal in the two halves of the container when \(N\) is large enough (and "exactly equal" when \(N \to\infty\)), is of course what is predicted by the second law of thermodynamics.

It was Boltzmann's great insight to connect the second law with the above phase space volume considerations by making the observation that for a dilute gas \(\log |\Gamma_{M_{eq}}|\) is proportional, up to terms negligible in the size of the system, to the thermodynamic entropy of Clausius. Boltzmann then extended his insight about the relation between thermodynamic entropy and \(\log |\Gamma_{M_{eq}}|\) to all macroscopic systems, be they gas, liquid or solid. This provided for the first time a microscopic definition of the operationally measurable entropy of macroscopic systems in equilibrium.

Having made this connection Boltzmann then generalized it to define an entropy also for macroscopic systems not in equilibrium. That is, he associated with each microscopic state \(X\) of a macroscopic system a number \(S_B\) which depends only on \(M(X)\) given, up to multiplicative and additive constants (which can depend on \(N\)), by \[\tag{1} S_B(X) = S_B (M(X)) \]

with \[\tag{2} S_B(M) = k \log|\Gamma_{M}|. \]

This is the Boltzmann entropy of a classical system (Penrose 1970). N.B.: this definition uses two equations to emphasize their logical independence, which is important for the discussion of quantum systems.
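
For the two-halves example this definition can be evaluated explicitly if, as a simplification (an illustrative bookkeeping, not a computation from this article), one counts only the configurational part of the macrostate, i.e. which particles are in the left half. With \(p = N_L/N\) and Stirling's formula, \[ S_B(M) = k \log \binom{N}{N_L} \approx -N k \left[ p \log p + (1-p) \log (1-p) \right], \] which is maximal at \(p = 1/2\ .\) For the free expansion considered earlier this gives \(S_B(M_{eq}) - S_B(M_a) \approx N k \log 2\ ,\) the familiar thermodynamic entropy change of an ideal gas whose volume doubles at fixed energy.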

Boltzmann then used phase space arguments, like those given above, to explain (in agreement with the ideas of Maxwell and Thomson) the observation, embodied in the second law of thermodynamics, that when a constraint is lifted, an isolated macroscopic system will evolve toward a state with greater entropy. In effect Boltzmann argued that due to the large differences in the sizes of \(\Gamma_M\ ,\) \(S_B(X_t) = k \log |\Gamma_{M(X_t)}|\) will typically increase in a way which explains and describes qualitatively the evolution towards equilibrium of macroscopic systems.

These very large differences in the values of \(|\Gamma_M|\) for different \(M\) come from the very large number of particles (or degrees of freedom) which contribute, in an (approximately) additive way, to the specification of macrostates. This is also what gives rise to typical or almost sure behavior. Typical, as used here, means that the set of microstates corresponding to a given macrostate \(M\) for which the evolution leads to a macroscopic increase (or non-decrease) in the Boltzmann entropy during some fixed macroscopic time period \(\tau\) occupies a subset of \(\Gamma_M\) whose Liouville volume is a fraction of \(|\Gamma_M|\) which goes very rapidly (exponentially) to one as the number of atoms in the system increases. The fraction of "bad" microstates, which lead to an entropy decrease, thus goes to zero as \(N\to \infty\ .\)

Typicality is what distinguishes macroscopic irreversibility from the weak approach to equilibrium of probability distributions (ensembles) of systems with good ergodic properties having only a few degrees of freedom, e.g. two hard spheres in a cubical box. While the former is manifested in a typical evolution of a single macroscopic system the latter does not correspond to any appearance of time asymmetry in the evolution of an individual system. Maxwell makes clear the importance of the separation between microscopic and macroscopic scales when he writes: "the second law is drawn from our experience of bodies consisting of an immense number of molecules. ... it is continually being violated, ..., in any sufficiently small group of molecules ... . As the number ... is increased ... the probability of a measurable variation ... may be regarded as practically an impossibility."

On the other hand, because of the exponential increase of the phase space volume with particle number, even a system with only a few hundred particles, such as is commonly used in molecular dynamics computer simulations, will, when started in a nonequilibrium "macrostate" \(M\ ,\) with "random" \(X \in \Gamma_M\ ,\) appear to behave like a macroscopic system. After all, the likelihood of hitting, in the course of say one thousand tries, something which has probability of order \(2^{-N}\) is, for all practical purposes, the same whether \(N\) is a hundred or \(10^{23}\ .\) Of course the fluctuations in \(S_B\ ,\) both along the path towards equilibrium and in equilibrium, will be larger when \(N\) is small, cf. [2b]. This will be so even when integer arithmetic is used in the simulations so that the system behaves as a truly isolated one; when its velocities are reversed the system retraces its steps until it comes back to the initial state (with reversed velocities), after which it again proceeds (up to very long Poincaré recurrence times) in the typical way.
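
The effect of integer arithmetic is easy to demonstrate with a "bit-reversible" central-difference scheme of the kind used in such simulations (the sketch below is a one-dimensional toy with an assumed integer force law, not code from the article): because every update is an exact integer operation, running the map with the pair of stored positions swapped retraces the whole trajectory exactly, bit for bit.

    def F(x):
        return -(x // 64)        # toy integer force (roughly harmonic)

    def run(x_prev, x_curr, steps):
        # central-difference update in exact integer arithmetic:
        # x_{n+1} = 2*x_n - x_{n-1} + F(x_n)
        for _ in range(steps):
            x_prev, x_curr = x_curr, 2 * x_curr - x_prev + F(x_curr)
        return x_prev, x_curr

    a, b = 1000, 1010            # the "microstate": two successive positions
    p, q = run(a, b, 10_000)     # evolve forward
    print(run(q, p, 10_000) == (b, a))   # True: exact retrace after reversal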

We might take as a summary of such insights in the late part of the nineteenth century the statement by Gibbs, quoted by Boltzmann (in a German translation) on the cover of his book Lectures on Gas Theory II:

"In other words, the impossibility of an uncompensated decrease of entropy seems to be reduced to an improbability."

Initial conditions

Once we accept the statistical explanation of why macroscopic systems evolve in a manner that makes \(S_B\) increase with time, there remains the nagging problem (of which Boltzmann was well aware) of what we mean by "with time": since the microscopic dynamical laws are symmetric, the two directions of the time variable are a priori equivalent and thus must remain so a posteriori.

In terms of Fig. 1 this question may be put as follows: why can one use phase space arguments to predict the macrostate at time \(t\) of an isolated system whose macrostate at time \(t_b\) is \(M_b\ ,\) in the future, i.e. for \(t > t_b\ ,\) but not in the past, i.e. for \(t < t_b\ ?\) After all, if the macrostate \(M\) is invariant under velocity reversal of all the atoms, then the same argument should apply equally to \(t_b + \tau\) and \(t_b -\tau\ .\) A plausible answer to this question is to assume that the nonequilibrium macrostate \(M_b\) had its origin in an even more nonuniform macrostate \(M_a\ ,\) prepared by some experimentalist at some earlier time \(t_a < t_b\) (as is indeed the case in Figure 1) and that for states thus prepared we can apply our (approximately) equal a priori probability of microstates argument, i.e. we can assume its validity at time \(t_a\ .\) But what about events on the sun or in a supernova explosion where there are no experimentalists? And what, for that matter, is so special about the status of the experimentalist? Isn't he or she part of the physical universe?

Put differently, where ultimately do initial conditions, such as those assumed at \(t_a\ ,\) come from? In thinking about this we are led more or less inevitably to introduce cosmological considerations by postulating an initial "macrostate of the universe" having a very small Boltzmann entropy. To again quote Boltzmann: "That in nature the transition from a probable to an improbable state does not take place as often as the converse, can be explained by assuming a very improbable [small \(S_B\)] initial state of the entire universe surrounding us. This is a reasonable assumption to make, since it enables us to explain the facts of experience, and one should not expect to be able to deduce it from anything more fundamental". While this requires that the initial macrostate of the universe, call it \(M_0\ ,\) be very far from equilibrium with \(|\Gamma_{M_0}|<< |\Gamma_{M_{eq}}|\ ,\) it does not require that we choose a special microstate in \(\Gamma_{M_0}\ .\) As also noted by Boltzmann elsewhere "We do not have to assume a special type [read microstate] of initial condition in order to give a mechanical proof of the second law, if we are willing to accept a statistical viewpoint...if the initial state is chosen at random...entropy is almost certain to increase." This is a very important aspect of Boltzmann's insight: it is sufficient to assume that this microstate is typical of an initial macrostate \(M_0\) which is far from equilibrium.

This going back to the initial conditions, i.e. the existence of an early state of the universe (presumably close to the big bang) with a much lower value of \(S_B\) than the present universe, as an ingredient in the explanation of the observed time asymmetric behavior, bothers some scientists. A common question is: how does the mixing of the two colors after removing the partitions in Fig. 1 depend on the initial conditions of the universe? The answer is that once you accept that the microstate of the system in snapshot (a) of Fig. 1 is typical of its macrostate, the future evolution of the macrostates of this isolated system will indeed look like the one depicted in Fig. 1. It is the existence of inks of different colors separated in different compartments by an experimentalist, indeed the very existence of the solar system, etc., which depends on the initial conditions. In a "typical" universe everything would be in equilibrium.

It is the initial state of the universe plus the dynamics which determines what is happening at present. Conversely, we can deduce information about the initial state from what we observe now. As put by Feynman (1967): "It is necessary to add to the physical laws the hypothesis that in the past the universe was more ordered, in the technical sense, [i.e. low \(S_B\)] than it is today...to make an understanding of the irreversibility."

Figure 2: With a gas in a box, the maximum entropy state (thermal equilibrium) has the gas distributed uniformly; with a system of gravitating bodies, however, entropy can be increased from the uniform state by gravitational clumping, leading eventually to a black hole. (From Penrose 1990.)

A very clear discussion of initial conditions is given by Roger Penrose in connection with the "big bang" cosmology (Penrose 1990, 2005). He takes for the initial macrostate of the universe the smooth energy density state prevalent soon after the big bang: an equilibrium state (at a very high temperature) except for the gravitational degrees of freedom, which were totally out of equilibrium, as evidenced by the fact that the matter-energy density was spatially very uniform. That such a uniform density corresponds to a nonequilibrium state may seem at first surprising, but gravity, being purely attractive and long range, is unlike any of the other fundamental forces. When there is enough matter/energy around, it completely overcomes the tendency towards uniformization observed in ordinary objects at high energy densities or temperatures. Hence, in a universe dominated, like ours, by gravity, a uniform density corresponds to a state of very low entropy, or phase space volume, for a given total energy, see Fig. 2.

The local "order" or low entropy we see around us (and elsewhere)—from complex molecules to trees to the brains of experimentalists preparing macrostates—is perfectly consistent with (and possibly even a necessary consequence of, i.e. typical of) this initial macrostate of the universe. The value of \(S_B\) at the present time, \(t_p\ ,\) corresponding to \(S_B (M_{t_p})\) of our current clumpy macrostate describing a universe of planets, stars, galaxies, and black holes, is much much larger than \(S_B(M_0)\ ,\) the Boltzmann entropy of the "initial state", but still quite far away from \(S_B(M_{eq})\ ,\) its equilibrium value. The "natural" or "equilibrium" state of the universe, \(M_{eq}\ ,\) is, according to Penrose (1990, 2005), one with all matter and energy collapsed into one big black hole. Penrose gives the estimate \(S_B(M_0) : S_B(M_{t_p}) : S_B(M_{eq}) \sim 10^{88} : 10^{101} : 10^{123}\) in natural (Planck) units, see Fig. 3.

Figure 3: The creator locating the tiny region of phase-space—one part in \(10^{10^{123}}\)—needed to produce a \(10^{80}\)-baryon closed universe with a second law of thermodynamics in the form we know it. (From Penrose 1990.) If the initial state were chosen randomly it would, with overwhelming probability, have led to a universe in a state of maximal entropy. In such a universe there would be no stars, planets, people, or a second law.

It is this fact that we are still in a state of low entropy that permits the existence of relatively stable neural connections, of marks of ink on paper, which retain over relatively long periods of time shapes related to their formation. Such nonequilibrium states are required for memories - in fact, for the existence of living beings and of the earth itself.

We have no such records of the future and the best we can do is use statistical reasoning which leaves much room for uncertainty. Equilibrium systems, in which the entropy has its maximal value, do not distinguish between past and future.

Penrose's proposal of a very far from equilibrium uniform density "initial state" of the universe is quite plausible, but it is obviously far from proven. In any case it is, as Feynman says, both necessary and sufficient to assume a far from equilibrium initial state of the universe, and this is in accord with all cosmological evidence. The "true" equilibrium state of the universe may also be different from what Penrose proposes. There are alternate scenarios in which the black holes evaporate and leave behind mostly empty space, cf. Carroll and Chen (2004).

The question as to why the universe started out in such a very unusual low entropy initial state worries Penrose quite a lot (since it is not explained by any current theory), but such a state is just accepted as a given by Boltzmann. Clearly, it would be nice to have a theory which would explain the "cosmological initial state", but such a theory is not available at present. The "anthropic principle", according to which there are many universes and ours just happens to be right (or we would not be here), is too speculative for an encyclopedic article.


References

  • R. P. Feynman, The Character of Physical Law, MIT Press, Cambridge, Mass. (1967), ch. 5.
  • a) S. Goldstein and J. L. Lebowitz, On the Boltzmann Entropy of Nonequilibrium Systems, Physica D 193, 53–66 (2004); b) P. Garrido, S. Goldstein and J. L. Lebowitz, The Boltzmann Entropy of Dense Fluids Not in Local Equilibrium, Phys. Rev. Lett. 92, 050602 (2004).
  • J. L. Lebowitz, a) Macroscopic Laws and Microscopic Dynamics, Time's Arrow and Boltzmann's Entropy, Physica A 194, 1–27 (1993); b) Boltzmann's Entropy and Time's Arrow, Physics Today 46, 32–38 (1993), see also letters to the editor and response in Physics Today 47, 113–116 (1994); c) Microscopic Origins of Irreversible Macroscopic Behavior, Physica A 263, 516–527 (1999); d) A Century of Statistical Mechanics: A Selective Review of Two Central Issues, Reviews of Modern Physics 71, S346–S357 (1999); e) From Time-symmetric Microscopic Dynamics to Time-asymmetric Macroscopic Behavior: An Overview, ESI Lecture Notes in Mathematics and Physics, European Mathematical Publishing House, to appear.
  • O. Penrose, Foundations of Statistical Mechanics, Pergamon, Elmsford, N.Y. (1970); reprinted by Dover (2005).
  • R. Penrose, The Emperor's New Mind, Oxford University Press, New York (1990), ch. 7; The Road to Reality, A. A. Knopf, New York (2005), ch. 27–29.
  • S. M. Carroll and J. Chen, Spontaneous Inflation and the Origin of the Arrow of Time, arXiv:hep-th/0410270 (2004).



Recommended reading

  • For a general history of the subject and references to the original literature see S.G. Brush, The Kind of Motion We Call Heat, Studies in Statistical Mechanics, vol. VI, E.W. Montroll and J.L. Lebowitz, eds. North-Holland, Amsterdam, (1976).
  • For a historical discussion of Boltzmann and his ideas see the articles by M. Klein, E. Broda, and L. Flamm in The Boltzmann Equation, Theory and Application, E.G.D. Cohen and W. Thirring, eds., Springer-Verlag, 1973.
  • For interesting biographies of Boltzmann, which also contain many quotes and references, see E. Broda, Ludwig Boltzmann, Man—Physicist—Philosopher, Ox Bow Press, Woodbridge, Conn. (1983); C. Cercignani, Ludwig Boltzmann: The Man Who Trusted Atoms, Oxford University Press (1998); D. Lindley, Boltzmann's Atom: The Great Debate that Launched a Revolution in Physics, Simon & Schuster (2001).


See also

Entropy
