
Some Examples of Simple Systems [0]

D. A. Lavis
Department of Mathematics
King's College, Strand, London WC2R 2LS, U.K.
Email: [email protected]; Webpage: www.mth.kcl.ac.uk/~dlavis

1 Introduction

The aim of these notes is to provide a set of examples which may give some insight into the behaviour of 'real' statistical mechanical systems. Simulations are performed using MAPLE. In each case the time-evolution of the Boltzmann entropy is explored. This can be defined as follows. Consider a system, which at time t has microstate given by x(t) ∈ Γ_N. [1] Macrostates (observable states) are defined by a set Ξ of macroscopic variables. [2] Let the set of macrostates be {µ}_Ξ. They are so defined that every x ∈ Γ_N is in exactly one macrostate, denoted by µ(x), and the mapping x → µ(x) is many-one. Every macrostate µ is associated with its 'volume' V(µ) = V_µ in Γ_N. [3] We thus have the map x → µ(x) → V_µ(x) from Γ_N to ℝ⁺ or ℕ. The Boltzmann entropy is defined by

    S_b(x) = k_b ln[V_µ(x)].    (1)

This is a phase function depending on the choice of macroscopic variables Ξ, so how do we expect that it will behave? The point of view of the weakened second law approach can be expressed as follows:

    If the system starts at a phase point associated with low entropy then we expect the entropy to increase. [4] As it increases there is the possibility of fluctuations, which could be large. When the entropy gets near to its maximum value then it will still be expected to fluctuate, and those fluctuations could be large, but we don't expect large fluctuations to occur very often.

This assertion, of course, runs counter to the strict form of the second law, which does not allow any decreases in entropy of an isolated system.

[0] This file is: www.mth.kcl.ac.uk/~dlavis/papers/examples.pdf. © D. A. Lavis 2002.
[1] Both time and the space Γ_N may be continuous or discrete, and the 'dynamics' which drives x(t) in Γ_N will be either deterministic, or could be taken to be stochastic.
[2] These may include some thermodynamic variables (volume, number of particles etc.) but they can also include other variables, specifying, for example, the number of particles in a set of subvolumes. Ridderbos (2002) denotes these by the collective name of supra-thermodynamic variables.
[3] In the case where Γ_N is continuous the volume of µ will normally be its Lebesgue measure; when Γ_N is discrete the volume will be the number of points in µ.
[4] If it is the macrostate of least volume it will, of course, have to initially increase.


We shall see in our examples that an important feature of the system, which is determined by the dynamics, is the accessibility of one macrostate from another. By µ′ being 'accessible' from µ we shall mean directly accessible (without any intermediate macrostates): µ′ is accessible from µ if there exists an x ∈ µ which maps with time so that the next macrostate it visits is (in the case of deterministic dynamics), or has non-zero probability of being (in the case of stochastic dynamics), µ′. To clarify this point we introduce a number of definitions. Given a macrostate µ, the set of macrostates which are accessible from µ is denoted by S(µ). For reversible dynamic systems [5] it is normally the case that the phase points x and J x belong to the same macrostate. Then S_b(x) = S_b(J x), and if µ′ is accessible from µ then µ is accessible from µ′.

Now divide S(µ) into two subsets: S^(+)(µ), consisting of those macrostates with volumes greater than or equal [6] to V(µ), and S^(−)(µ), consisting of those macrostates with volumes less than V(µ). Entropy will increase/decrease in the transition µ → µ′ if µ′ ∈ S^(±)(µ), and the proportions of volume accessible from µ leading to an increase/decrease of entropy are

    v^(±)(µ) = V(S^(±)(µ)) / V(S(µ)),    (2)

where V(S(µ)) denotes the total volume of the members of S(µ) (and similarly for S^(±)(µ)). In some of our examples it is simple to compute v^(±)(µ) as functions of V.
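In the discrete case (2) is elementary to compute. As an illustration, here is a minimal Python sketch (the simulations reported below were done in MAPLE; the function name and data layout are ours, and the volumes and accessibility relation used in the example anticipate the random-walk model of Sect. 3):

```python
# Volume fractions v(+)/v(-) of (2) for a discrete phase space.

def volume_fractions(mu, volume, accessible):
    """volume: macrostate -> V(mu); accessible: macrostate -> the set S(mu)."""
    total = sum(volume[nu] for nu in accessible[mu])
    v_plus = sum(volume[nu] for nu in accessible[mu] if volume[nu] >= volume[mu])
    return v_plus / total, 1.0 - v_plus / total    # (v(+), v(-))

# The SRW model of Sect. 3 with m = 5: V(n) = n, transitions only to n +/- 1.
m = 5
volume = {n: n for n in range(1, m + 1)}
accessible = {n: {k for k in (n - 1, n + 1) if 1 <= k <= m} for n in range(1, m + 1)}
for n in range(1, m + 1):
    print(n, volume_fractions(n, volume, accessible))
```

For 1 ≤ n < m this reproduces v^(±)(n) = (n ± 1)/(2n) of equation (4) below.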

2 A Two-Dimensional Gas

Consider the pictures of N particles in a two-dimensional square box of side L shown in Fig. 1. If we were to encounter such a gas then the state in the first picture, with all the particles in one end of the box, would strike us as remarkable. So we suspect that at this time t = 0 the gas has been prepared this way. The next few pictures carry some signs of this remarkable property, which has completely disappeared by t = 6, with t = 7 being 'quite unremarkable'. States we find remarkable are such because they are unexpected, and they are unexpected because they are relatively few in number. But, of course, this is a macroscopic property. Our eye is noticing the difference between macrostates, which we must now define. To achieve this we divide the box into n vertical strips of equal width L/n. Suppose that at a particular time there are N_j particles in strip j = 1, 2, . . . , n. The macrostate will not be affected by where the particles are in each strip; nor will it change according to which particular particles are in each strip. So the volume of the macrostate designated by {N_j} is, to within a constant factor (L/n)^N,

    V({N_j}) = N! / (N_1! N_2! · · · N_n!).    (3)

At this point we should reveal the fact that the particles in the box, although they appear to be moving in a complicated manner, are, in fact, just moving in separated horizontal lines without collisions. [7] This makes it quite easy to compute the evolution of the Boltzmann entropy. If the initial velocities are incommensurate the motion will be quasi-periodic and, with N = 1000, n = 100, the picture looks like Fig. 2. [8]

[5] Meaning that there exists an operator J on the points of Γ_N such that, if x′ = φ_t x, then J⁻¹ φ_t J x′ = x.
[6] So we shall be using 'increase' to mean increase or be equal to, and decrease in the strict sense.

Figure 1: A gas of N = 1000 particles in a two-dimensional square box of side L (panels at t = 0, 1, . . . , 7).

Figure 2: The evolution of the Boltzmann entropy of the two-dimensional gas when the overall motion is quasi-periodic.

There are a number of interesting things about the behaviour of the entropy.

(i) It rises very quickly to a value above 0.98 after t = 5.

(ii) Within the range shown, it never attains its maximum value of unity.

(iii) It stays in the range [0.98, 1] for all subsequent values shown.

Property (i) is a consequence of the fact that most particle distributions are quite close to the uniform distribution N_j = N/n, and (ii) shows us that, although the entropy is largest for the uniform distribution, its volume is small relative to the volume of all the macrostates in some small interval around it. Actually achieving the uniform distribution exactly would be quite remarkable. From the observation (iii) it cannot, of course, be deduced that the system will never return to a remarkable state of low entropy. To achieve quasi-periodic motion the particle velocities were sampled randomly from a gaussian distribution. Thus although the system will not return exactly to the state shown for t = 0, it will pass through remarkable states arbitrarily close to it. That this does not show up on the graph is simply a result of the long time scales involved. If we are impatient to see a quick return to a remarkable state then this can be achieved by making the system periodic rather than quasi-periodic. If, in units of the length of the box, we randomly assign velocities of 0.1 and 0.05 to the particles then the system will reattain its remarkable initial state when t = 40 (Fig. 3).

[7] Hence our neglecting of the momentum space when specifying the macrostates.
[8] Here and henceforth we shall choose to plot S̄_b = S_b/(S_b)_max.

Figure 3: The evolution of the Boltzmann entropy of the two-dimensional gas when the overall motion is periodic.

Of course, choosing an unremarkable, high entropy, state which quickly attains a remarkable, low entropy, state is, in general, very difficult. However, there is one trick that we can play, and that is to start the system in a remarkable state, run it for some time t_0 and then reverse all the velocities. The system will then return to its initial remarkable state at 2t_0 (Fig. 4).

Figure 4: The evolution of the Boltzmann entropy of the two-dimensional gas when the motion is reversed at t = 25, so that the system returns to its initial state when t = 50.

In this discussion we have chosen the macrostates to reflect a particular remarkable macroscopic property.

Figure 5: The Boltzmann entropy of the two-dimensional gas for varying numbers of particles; n = 100 and (a) N = 1000, (b) N = 100.

At the outset this property was simply that all the particles were in one half of the box, but in order to quantify observations we introduced an (arbitrary) division of the box into strips. The entropy is then a measure of inhomogeneity in the distribution of particles on the strips. Of course, the remarkable property is not unique. We could, for example, number the particles. Then it would be remarkable if all the odd-numbered particles were in the left half of the box and all the even-numbered were in the right half. It would also be remarkable if all the prime-numbered molecules were in the left-hand half of the box. For each of these types of remarkableness one can define a Boltzmann entropy. Probably any distribution of particles could be viewed as remarkable in some way. [9]

Our computations have shown that, for randomly selected velocities and initial positions randomly selected in the left-hand half of the box, the appropriate Boltzmann entropy of the system of particles increases from a remarkable state and fluctuates near its maximum value. The question of what affects the 'nearness' is illustrated in Fig. 5. This exemplifies the point that 'typically' the entropy along a curve will fluctuate more, and take longer to approach its maximum value from a remarkable state, for a system with a smaller number of particles (degrees of freedom). Of course, a real system would have in the order of 10^23 particles, so the extent of the fluctuations would be very small compared to those shown in Fig. 5.
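The whole experiment is easy to reproduce. The sketch below is a Python reconstruction of the kind of MAPLE simulation described above; the Gaussian velocity spread (0.05 box lengths per time step) is a guessed parameter, not taken from the original:

```python
import random
from math import lgamma

def reflect(x):
    """Position in [0, 1] of a particle at unfolded coordinate x: elastic
    bounces off the walls make the motion a triangle wave of period 2."""
    x = x % 2.0
    return x if x <= 1.0 else 2.0 - x

def scaled_entropy(xs, n, s_max):
    """S_b/(S_b)_max for the strip occupation numbers of (3)."""
    counts = [0] * n
    for x in xs:
        counts[min(int(x * n), n - 1)] += 1
    s = lgamma(len(xs) + 1) - sum(lgamma(c + 1) for c in counts)
    return s / s_max

N, n = 1000, 100
x0 = [random.uniform(0.0, 0.5) for _ in range(N)]   # start in the left half
v = [random.gauss(0.0, 0.05) for _ in range(N)]     # incommensurate velocities
s_max = lgamma(N + 1) - n * lgamma(N / n + 1)       # uniform distribution N_j = N/n
for t in range(51):
    xs = [reflect(x + vi * t) for x, vi in zip(x0, v)]
    print(t, round(scaled_entropy(xs, n, s_max), 3))
```

Reversing all the velocities at time t_0 amounts to replacing v by −v and continuing, which by construction retraces the trajectory, while assigning only the two speeds 0.1 and 0.05 makes the motion periodic, as in Fig. 3.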

[9] In a similar way, I am told that there is an informal theorem of number theory which says that 'every number is interesting', an assertion borne out by the well-known story of Hardy, Ramanujan and the taxi with licence number 1729 = 1³ + 12³ = 9³ + 10³, the smallest number that can be written as the sum of two cubes in two different ways (recounted by C. P. Snow in his Foreword to the 1967 edition of Hardy (1940)).

Figure 6: Simulation of the SM model with m = 100.

Figure 7: The probabilities p^(±)(n), for the SRW model, plotted against n for the case of m = 100 states.

3 Simple Markovian (SM) and Simple Random Walk (SRW) Models

Suppose that a system has m macrostates and that the macrostate µ(n), for n = 1, 2, . . . , m, consists of V(n) = n phase points. In the phase space there will be ½m(m + 1) points. Let a trajectory of the system be a sequence of random jumps between phase points. Fig. 6 shows a simulation giving the scaled macrostate volumes for the system with m = 100, starting in µ(1). It is clear that whereas the initial entropy jump is necessarily positive, the subsequent behaviour is significantly different from the 'typical behaviour' of the two-dimensional gas. The band of frequently visited states has scaled volumes spreading over an approximate range [0.3, 1.0], with frequent fluctuations to lower values.

Figure 8: Simulation of the SRW model with m = 100.

The problem with this model arises from the fact that we have allowed accessibility between all pairs of macrostates. Given that, for large n, the majority of the phase space will be occupied by points in macrostates of smaller volumes, [10] unrestricted transitions will lead to large and persistent fluctuations in macrostate volume and entropy. If on the other hand transitions are allowed from macrostate n only to macrostates n ± 1, the model becomes a simple random walk (SRW) and the picture changes. The probabilities that, when the system is in macrostate µ(n), it moves to a macrostate with larger/smaller entropy are

    p^(±)(n) = π(n, n ± 1) = { (n ± 1)/(2n),  if 1 ≤ n < m,
                               ½(1 ∓ 1),      if n = m,
             = v^(±)(n),    (4)

where [11]

    π(n, n ± 1) = Prob[x(t + 1) = n ± 1 | x(t) = n].    (5)

Graphs of p^(±)(n) are given in Fig. 7. The probability of a transition leading to an increase in entropy falls to a value near to a half, converging with the probability of a drop in entropy, when the system is in a state of volume near to the maximum. Of course, when the system is in the macrostate of maximum volume then any transition must lead to a drop in entropy. These forms for the probabilities illuminate the reduction in fluctuations as macrostate volume (and entropy) increases, as shown in Fig. 8.
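Both models are a few lines to simulate; a minimal Python sketch (MAPLE in the original), which prints the scaled macrostate volumes of the kind plotted in Figs. 6 and 8:

```python
import random

def trajectory(m, steps, walk):
    """Scaled macrostate volumes V(n)/m, starting in mu(1).
    walk=False: SM model, each step jumps to a uniformly random phase point;
    walk=True:  SRW model, steps only to the macrostates n - 1 or n + 1."""
    n, out = 1, []
    for _ in range(steps):
        if walk:
            up = (n + 1) / (2 * n) if n < m else 0.0   # p(+)(n) of (4)
            n += 1 if random.random() < up else -1
        else:
            # A uniform phase point lies in macrostate n with probability
            # proportional to V(n) = n.
            k = random.randint(1, m * (m + 1) // 2)
            n = next(j for j in range(1, m + 1) if k <= j * (j + 1) // 2)
        out.append(n / m)
    return out

print(trajectory(100, 20, walk=False))   # large, persistent fluctuations
print(trajectory(100, 20, walk=True))    # steady climb, as in Fig. 8
```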

This behaviour is much closer to that for the entropy of the two-dimensional gas. Thus we are learning that accessibility between macrostates plays an important part in the behaviour of the model. In the case of the two-dimensional gas accessibility was determined by the dynamics of the model. A transition between two macrostates occurred whenever a particle passed from one strip to another. This, of course, meant that the consequent change in entropy was quite small. The probability ρ(n; t) that x(t) = n, for n = 1, 2, . . . , m, satisfies [12]

    ρ(n; t + 1) = ρ(n − 1; t) π(n − 1, n) + ρ(n + 1; t) π(n + 1, n),    (6)

[10] For m = 100 about half the phase space consists of points in macrostates with n < 70.
[11] As in Parzen (1962) we use the convention that π(n, k) is the transition probability from state n to state k. A random walk is a Markov chain for which π(n, k) = 0 if |k − n| > 1. In our examples we also have π(n, n) = 0 for all n.
[12] With ρ(n; t) = 0 for all t, when n is not in the range 1 ≤ n ≤ m.


and it is easy to see that

    ρ*(n) = V(n) / Σ_{k=1}^{m} V(k) = 2n / (m(m + 1))    (7)

is an equilibrium, meaning time-invariant, solution of (6).

4 The Dog-Flea Model

Two of the most well-known 'toy' models of statistical mechanics are the dog-flea model [13] of Ehrenfest and Ehrenfest (1907) (see also Emch and Liu, 2002, pp. 106–112, which contains further references) and the ring model of Kac (1959). We shall now see that the dog-flea model is a stochastic version of a special case of the ring model.

Two dogs called Plus and Minus share a population of N fleas, where for convenience we suppose that N is even. Each axis of the phase space Γ_N denotes the state of a flea and has two points +1 and −1, indicating that the flea is on Plus or Minus respectively. A macrostate of the system, for fixed N, is specified by one variable n, in the range [−½N, ½N], where N^(+) = ½N + n and N^(−) = ½N − n are respectively the numbers of fleas on Plus and Minus, and the volume of (number of phase points in) the macrostate is

    V(n) = N! / [(½N + n)! (½N − n)!].    (8)

The Boltzmann entropy S_b(n) = k_b ln[V(n)] is symmetric in n with maximum at n = 0 and zeros at n = ±½N.

Now suppose the fleas are numbered 1 to N and the flea-trainer chooses a number at random in this range and orders that flea to change dogs. This again is a random walk with transition probabilities

    π(n, n ± 1) = (N ∓ 2n) / (2N),    −½N ≤ n ≤ ½N.    (9)

The probabilities p^(±)(n) that the entropy and the macrostate volume increase/decrease are the probabilities that |n| decreases/increases respectively. That is,

    p^(±)(n) = { π(n, n ∓ 1),  if 0 < n ≤ ½N,
                 ½(1 ∓ 1),     if n = 0,
                 π(n, n ± 1),  if −½N ≤ n < 0,    (10)

where now the random variable for the system is x(t) = n. These probabilities, together with the scaled volumes given by (2), plotted as functions of the scaled macrostate volume V̄ = V(n)/V(0), are shown in Fig. 9. Unlike the SRW model, v^(±)(n) ≠ p^(±)(n). This is because the model has a structure imposed by the numbering of the fleas. Each point in V(n) corresponds to exactly one distribution of the numbered fleas. This restricts the number of points in V(n ± 1) which are accessible. However, p^(+)(V̄) still has the same monotonically decreasing character as that shown in the SRW model, and we would expect that, if the dogs begin their day with most fleas on one dog, then as the flea-trainer carries out his task the macrostate volume, and hence the entropy, will increase, rapidly at first, until they are near to their maximum values, when they will be subject to small fluctuations.

[13] A vermin-free version of this model replaces the dogs by urns and the fleas by balls.

Figure 9: The probabilities p^(±)(V̄) and the volume ratios v^(±)(V̄), for the dog-flea model, plotted against V̄ for a population of N = 100 fleas.

The expected numbers of fleas on Plus and Minus at time t + 1, given that there are N^(±)(t) at time t, are

    〈N^(±)(t + 1)〉 = π(n, n ± 1)[N^(±)(t) + 1] + π(n, n ∓ 1)[N^(±)(t) − 1]
                    = N^(±)(t)[1 − 1/N] + N^(∓)(t)(1/N).    (11)

Replacing N^(±)(t) by their expected numbers gives

    〈N^(±)(t + 1)〉 = 〈N^(±)(t)〉[1 − 1/N] + 〈N^(∓)(t)〉(1/N),    (12)

which have the solution

    〈N^(±)(t)〉 = ½N + (1 − 2/N)^t [N^(±)(0) − ½N].    (13)
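A simulation in the spirit of Fig. 10 then takes only a few lines; a Python sketch (the original simulations were in MAPLE), with N = 100 fleas and three initially on Plus:

```python
import random

N, steps = 100, 1000
n_plus = 3                          # N(+)(0) = 3, as in Figs. 10 and 11
expected = float(n_plus)
for t in range(1, steps + 1):
    # The trainer picks one of the N fleas at random; it changes dogs.
    if random.randrange(N) < n_plus:
        n_plus -= 1
    else:
        n_plus += 1
    expected = expected * (1 - 1 / N) + (N - expected) / N   # recursion (12)
    if t % 100 == 0:
        print(t, n_plus, round(expected, 2))   # both approach N/2 = 50
```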

Figs. 10 and 11 show simulations, together with results derived from (13), with 100 fleas beginning with 93 on Minus. In Fig. 10 we see that the numbers of fleas on the two dogs converge towards 50, where they exhibit small fluctuations about the expected values given by (12). Fig. 11 shows the behaviour of the scaled entropy and of its value derived from (12). The probability ρ(n; t) that x(t) = n again satisfies (6), except that now the transition probabilities are given by (9).

Figure 10: The numbers of fleas N^(±)(t) on the dogs Plus and Minus plotted against time t for a population of N = 100 fleas, beginning with three fleas on Plus. The smooth curve gives the values of 〈N^(±)(t)〉 derived from (13).

It is not difficult to show that this equation is now satisfied by the equilibrium distribution

    ρ*(n) = V(n) / Σ_{n=−N/2}^{N/2} V(n) = (1/2^N) · N! / [(½N + n)! (½N − n)!].    (14)
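In fact the chain satisfies detailed balance with respect to (14), that is, the probability flow from n to n + 1 exactly balances the flow from n + 1 to n, from which time-invariance under (6) follows at once; a minimal numerical check (Python):

```python
from math import comb

N = 100
rho = lambda n: comb(N, N // 2 + n) / 2 ** N     # equilibrium distribution (14)
pi = lambda n, s: (N - 2 * s * n) / (2 * N)      # (9): s = +1 up, s = -1 down

gap = max(abs(rho(n) * pi(n, +1) - rho(n + 1) * pi(n + 1, -1))
          for n in range(-N // 2, N // 2))
print(gap)    # zero, up to rounding error
```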

For this distribution a number of significant results can be established (see Emch and Liu, 2002, pp. 106–112):

(i)  Prob[x(t) = n | x(t + 1) = k] = π(n, k) = Prob[x(t + 1) = k | x(t) = n].    (15)

     The backward transition probability is equal to the forward transition probability, which is a form of statistical time-reversibility.

(ii) With

         Π(k, n, m) = Prob[x(t − 1) = k and x(t + 1) = m | x(t) = n],    (16)

     Π(n−1, n, n−1) : Π(n−1, n, n+1) : Π(n+1, n, n−1) : Π(n+1, n, n+1)
         = (½N + n)/(½N − n) : 1 : 1 : (½N − n)/(½N + n).    (17)

Figure 11: The scaled entropy S̄_b, for the dog-flea model, plotted against time t for a population of N = 100 fleas, beginning with three fleas on Plus. The smooth curve gives the value derived from (13).

     It follows that, in this equilibrium distribution, the most probable sequences of three states are reversions towards the long-term mean values n = 0, N^(+) = N^(−) given by (13).

(iii) Let ν(n; τ) be the probability that a trajectory starting with x(0) = n will return to state n for the first time at time τ. We define, for fixed n, the mean and variance of the recurrence time

          〈τ〉_n = Σ_{τ=1}^{∞} τ ν(n; τ)   and   Var_n(τ) = Σ_{τ=1}^{∞} (〈τ〉_n − τ)² ν(n; τ).    (18)

      It was shown by Kac (1947) that

      (a)  Σ_{τ=1}^{∞} ν(n; τ) = 1,    (19)

           which means that every state has a probability of one of recurring at some time.

      (b)  〈τ〉_n = 1/ρ*(n) = 2^N / V(n).    (20)

           The recurrence times for n = 0 are relatively small, but for n = ½N they are large, even for modest values of N. [14]

      (c)  For large N and n ≈ N/2 the variance is of the order of 〈τ〉_n. So the large mean recurrence time has limited significance.

[14] For N = 10, 〈τ〉_0 = 256/63 and 〈τ〉_5 = 1024; for N = 100, 〈τ〉_0 ≈ 12.56 and 〈τ〉_50 ≈ 12.68 × 10^29.
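The numbers in this footnote follow directly from (20):

```python
from math import comb

# Mean recurrence times <tau>_n = 2^N / V(n) of (20).
for N, n in [(10, 0), (10, 5), (100, 0), (100, 50)]:
    print(N, n, 2 ** N / comb(N, N // 2 + n))
# N = 10:  <tau>_0 = 1024/252 = 256/63 and <tau>_5 = 1024;
# N = 100: <tau>_0 ~ 12.56 and <tau>_50 ~ 1.27e30.
```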


5 The Ring Model

Suppose that we codify the dog-flea model by taking a ring of N sites and placing, in a random way, an up-spin on a site for every flea on Plus and a down-spin on a site for every flea on Minus. The flea jumping is now represented by choosing at random one point equidistant between two sites as a spin-flipper. Then all the spins move one site clockwise, with the one passing through the flipper being flipped. If the location of the spin-flipper is relocated randomly before each rotation, the model is exactly equivalent to the dog-flea model. It can be generalized by randomly distributing m spin-flippers; this corresponds to the flea-trainer choosing a team of m fleas with instructions to change dogs.

Now suppose that, having chosen the initial locations of the spin-flippers, we leave them fixed. After the initial distribution of spins and flippers, the model is deterministic. It is reversible simply by rotating the spins in the anticlockwise direction, and periodic with period N if m is even and 2N if m is odd. This is a version of the Kac ring model. [15] The 'magnetization' at time t is

    σ(t) = [N^(+)(t) − N^(−)(t)] / N = 2n(t)/N.    (21)

Suppose that each spin were equally likely to be flipped. Then the equations of motion would be

    N^(±)(t + 1) = N^(±)(t)[1 − m/N] + N^(∓)(t)(m/N),    t = 0, 1, 2, . . .    (22)

and it is not difficult to show that

    σ(t) = σ(0)(1 − 2m/N)^t.    (23)

The expectation value equations (12) for the dog-flea model are just the case m = 1 of (22), which would, of course, arise from the dog-flea model if m fleas were instructed to change dogs on each occasion. The (unjustified) equal-likelihood assumption leads to a 'mean-field theory' where the magnetization evolves monotonically from its initial value to zero and the Boltzmann entropy evolves monotonically to its maximum value. As with any mean-field theory, fluctuations are smoothed out. Figs. 12 show the evolution of a ring of 1000 spins with 44 spin-flippers. Figs. 13 show the same situation with the rotation reversed at t = 100, returning the system to its initial state. (The two halves of the graphs are, of course, mirror images.) In Figs. 14 we show the case of 50 spins, and finally in Figs. 15 the same situation where the rotation is reversed at t = 50. These figures clearly show the reversibility and recurrence of this deterministic model. As in the case of the two-dimensional gas, the simplicity of the model allows us to effect reversibility, with the consequent fall in entropy, in a way which could not be achieved with more 'realistic' systems. The recurrence, for which the time is simply related to the number of spins, again would not occur on realizable time scales for realistic systems.
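The ring model is also only a few lines to simulate; a Python sketch (MAPLE in the original), with the spin-flippers represented as marks on the bonds between neighbouring sites:

```python
import random

N, m = 1000, 44
spins = [1] * 6 + [-1] * (N - 6)         # N(+)(0) = 6, as in Figs. 12
random.shuffle(spins)
flip = [True] * m + [False] * (N - m)    # m spin-flippers, fixed on the bonds
random.shuffle(flip)

sigma0 = sum(spins) / N
for t in range(201):
    sigma = sum(spins) / N                        # magnetization (21)
    mean_field = sigma0 * (1 - 2 * m / N) ** t    # broken line of Figs. 12, eq. (23)
    if t % 20 == 0:
        print(t, round(sigma, 3), round(mean_field, 3))
    # One clockwise rotation: the spin crossing bond i - 1 -> i is flipped
    # if that bond carries a flipper.
    spins = [-spins[i - 1] if flip[i - 1] else spins[i - 1] for i in range(N)]
```

Reversal is implemented by rotating anticlockwise instead, and with m even the configuration recurs exactly at t = N.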

[15] See Dresden (1962), Thompson (1972), Bricmont (1995) and Edens (2001) for further discussion of the model and its variants.


6 The Baker’s Transformation

This is the transformation, shown in Fig. 16, where a unit square is stretched to twice its width and then cut in half, with the right-hand half used to restore the upper half of the unit square. As the mapping φ on the cartesian coordinates (x, y) of the unit square it is given by

    φ(x, y) = { (2x, ½y) mod 1,        0 ≤ x ≤ ½,
                (2x, ½(y + 1)) mod 1,  ½ ≤ x ≤ 1.    (24)
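In code the map and its inverse are one-liners; a minimal Python sketch:

```python
def phi(x, y):
    """The baker's transformation (24) on the unit square."""
    return ((2 * x) % 1.0, y / 2 if x < 0.5 else (y + 1) / 2)

def phi_inv(x, y):
    """The inverse: squeeze horizontally and stack the two halves."""
    return (x / 2 if y < 0.5 else (x + 1) / 2, (2 * y) % 1.0)

J = lambda x, y: (y, x)

p = (0.3141592653589793, 0.2718281828459045)
print(phi_inv(*p))           # the inverse applied to p ...
print(J(*phi(*J(*p))))       # ... agrees with J . phi . J: phi^(-1) = J phi J
print(phi_inv(*phi(*p)))     # and phi_inv recovers p from phi(p)
```

Note that iterating the floating-point version doubles away one binary digit of x per step, so after about fifty steps a double-precision trajectory is exhausted; the binary-string representation (25)–(27) below avoids this.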

Figure 12: The evolution of the magnetization and Boltzmann entropy, for the ring model, with N = 1000, m = 44, N^(+)(0) = 6. The mean-field value is given by the broken line.

A convenient way of writing this transformation is to express x and y as binary strings:

    x = 0·x_1x_2x_3 . . . ,
    y = 0·y_1y_2y_3 . . . ,    (25)

Figure 13: As in Figs. 12, for the ring model, except that now the rotation is reversed at t = 100.

where x_j and y_j take the values 0 or 1. Then the baker's transformation takes the form

    φ(0·x_1x_2x_3 . . . , 0·y_1y_2y_3 . . .) = (0·x_2x_3x_4 . . . , 0·x_1y_1y_2 . . .),

with

    φ⁻¹(0·x_1x_2x_3 . . . , 0·y_1y_2y_3 . . .) = (0·y_1x_1x_2 . . . , 0·y_2y_3y_4 . . .).    (26)

It is clear that the mapping is reversible, with φ⁻¹ = J φ J and J(x, y) = (y, x). It can be shown (Lasota and Mackey, 1994, pp. 54–56) that the baker's transformation is volume-preserving and thus that the Poincaré recurrence theorem applies.

Figure 14: The evolution of the magnetization and Boltzmann entropy, for the ring model, with N = 50, m = 5, N^(+)(0) = 2. The mean-field value is given by the broken line.

One way of representing the transformation is to write the digits for the initial point as

    . . . y_5y_4y_3y_2y_1 | x_1x_2x_3x_4x_5 . . . .    (27)

Then φ corresponds to moving the vertical bar one step to the right. Now suppose that a trajectory starts at a randomly chosen point in the small square σ given by 0 ≤ x < 2^−m, 0 ≤ y < 2^−m. This simply means that in (27) there are m entries of zero on each side of the bar. The trajectory will return to σ when, after some translations of the bar, this again happens. We now calculate the mean recurrence time to σ. Let f(n) be the probability that the point is in σ for the first time (following the starting value) after n steps, and let g(n) be the probability that after n steps the phase point is in σ irrespective of the intermediate values. Then, with g(0) = 1,

Figure 15: As in Figs. 14, for the ring model, except that now the rotation is reversed at t = 50.

Figure 16: The baker's transformation and its inverse.

    g(n) = Σ_{k=1}^{n} f(k) g(n − k).    (28)

Multiplying by z^n and summing over n gives

    G(z) = F(z) + F(z)G(z),    (29)

where

    F(z) = Σ_{n=1}^{∞} f(n) z^n,   G(z) = Σ_{n=1}^{∞} g(n) z^n.    (30)

Now

    g(n) = { 1/2^n,     1 ≤ n ≤ 2m,
             1/2^{2m},  2m ≤ n.    (31)

So

    G(z) = z[2^{2m}(1 − z) + z^{2m}] / [2^{2m}(2 − z)(1 − z)]    (32)

and, from (29),

    F(z) = z[2^{2m}(1 − z) + z^{2m}] / [2^{2m+1}(1 − z) + z^{2m+1}].    (33)

Of course, F(1) = 1 and F′(1) = 2^{2m} is the mean recurrence time to σ.

In the hierarchy of special dynamic properties (ergodic, mixing, Kolmogorov, Bernoulli) each implies the one preceding it. The baker's transformation is Bernoulli. [16]

Figure 17: A gas of N = 50 particles moving under the baker's transformation (panels at t = 0, 2, . . . , 14).

Figure 18: The evolution of the Boltzmann entropy of N = 50 particles moving under the baker's transformation.

To show this, divide the unit square into the partition B_2 = {b_0, b_1}, where b_0 is the set where 0 ≤ x < ½ and b_1 the set where ½ ≤ x < 1. Then (x, y) will be in b_{x_1}. The complete record of the labels of a trajectory is the string (27) (without the bar). This is distinct for each trajectory of points and the members are uncorrelated, so the partition is Bernoulli.

A rather more interesting application of the baker's transformation is to consider a 'gas' of N points. Just as with the two-dimensional gas investigated in Sect. 2, we can start all the points in some small subset of the unit square and watch them evolve. Suppose the square is divided into 2^{2m} equal square cells and all the particles begin in the bottom left-hand cell. Fig. 17 shows the evolution for a gas of N = 50 particles with m = 4 (256 cells). Now we suppose that the macrostates correspond to identifying the number of particles N_ij in each of the cells i, j = 1, 2, . . . , 2^m. Then

    V({N_ij}) = N! / Π_{i,j} N_ij!,    (34)

and Fig. 18 shows the evolution of the Boltzmann entropy. The entropy will return to its initial value if all the particles arrive in the same cell. The mean time for this to occur can be calculated by the same procedure given above, but

[16] A system is Bernoulli if there is a Bernoulli partition B_n of the invariant space Σ of the transformation. This is defined in the following way. Let B_n be a partition of Σ into n subsets. We label the members of B_n with the integers [0, 1, . . . , n − 1] and, for any trajectory and some ∆t, record the infinite sequence of numbers S = {. . . , s_−2, s_−1, s_0, s_1, s_2, . . .}, where the trajectory is in the set b_{s_k} labelled s_k at time k∆t. The partition is Bernoulli if all sequences S are uncorrelated and no two distinct trajectories have the same sequence.

with (31) replaced by

    g(n) = { 1/2^{n(N−1)},   1 ≤ n ≤ 2m,
             1/2^{2m(N−1)},  2m ≤ n.    (35)

This then yields the result that the mean time for the entropy to return to its initial value is 2^{2m(N−1)} steps. For m = 4, N = 50 this approximates to 10^{118}.
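The simulation behind Figs. 17 and 18 can be reproduced along the same lines as the two-dimensional gas. The sketch below (Python; the original used MAPLE) represents each particle by its bit string (27), rather than by floating-point coordinates, so that the map can be iterated indefinitely without loss of precision:

```python
import random
from math import lgamma

N, m, T = 50, 4, 100

# Each particle is the string (27), truncated to enough bits for T steps.
# bits[0 : m + t] lie to the left of the bar at time t (the y side, reversed);
# the rest lie to the right (the x side). Starting in the bottom left-hand
# cell means the m bits on each side of the initial bar are zero.
particles = [[0] * (2 * m) + [random.getrandbits(1) for _ in range(T + m)]
             for _ in range(N)]

s_max = lgamma(N + 1)   # maximum entropy: all N_ij <= 1, possible as N < 2^(2m)
for t in range(T + 1):
    bar = m + t         # phi moves the bar one step to the right per time step
    counts = {}
    for bits in particles:
        cell = (tuple(bits[bar:bar + m]), tuple(bits[bar - m:bar]))
        counts[cell] = counts.get(cell, 0) + 1
    s = lgamma(N + 1) - sum(lgamma(c + 1) for c in counts.values())
    print(t, round(s / s_max, 3))    # scaled entropy, cf. Fig. 18
```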

References

Bricmont, J. (1995). Science of chaos or chaos in science?, Physicalia 17: 159–208.

Dresden, M. (1962). A study of models in non-equilibrium statistical mechanics, in J. de Boer and G. E. Uhlenbeck (eds), Studies in Statistical Mechanics, Vol. 1, North-Holland, pp. 303–343.

Edens, B. (2001). Semigroups and symmetry: an investigation of Prigogine's theories, PhD thesis, Institute for History and Foundations of Science, Utrecht University. Available at: http://philsci-archive.pitt.edu/documents/disk0/00/00/04/36/.

Ehrenfest, P. and Ehrenfest, T. (1907). Über zwei bekannte Einwände gegen das Boltzmannsche H-Theorem, Phys. Zeitschrift 8: 311–314.

Emch, G. and Liu, C. (2002). The logic of thermostatistical physics, Springer.

Hardy, G. H. (1940). A mathematician's apology, Cambridge U. P. Reprinted with Foreword by C. P. Snow, 1967.

Kac, M. (1947). On the notion of recurrence in discrete stochastic processes, Bull. Amer. Math. Soc. 53: 1002–1010.

Kac, M. (1959). Probability and related topics in the physical sciences, Interscience.

Lasota, A. L. and Mackey, M. C. (1994). Chaos, fractals and noise, Springer.

Parzen, E. (1962). Stochastic Processes, Holden-Day.

Ridderbos, K. (2002). The coarse-graining approach to statistical mechanics: how blissful is our ignorance?, Stud. Hist. Phil. Mod. Phys. 33: 65–77.

Thompson, C. J. (1972). Mathematical statistical mechanics, Princeton U. P.
