Thermodynamics, Gibbs Method and Statistical Physics of Electron Gases
Bahram M. Askerov, Sophia Figarova (auth.)
This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and nonideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi–Dirac and Bose–Einstein quantum statistics and their application to different quantum gases, and the electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.
Publisher:
Springer-Verlag Berlin Heidelberg
Series:
Springer Series on Atomic, Optical, and Plasma Physics 57
Springer Series on Atomic, Optical, and Plasma Physics 57
The Springer Series on Atomic, Optical, and Plasma Physics covers in a comprehensive manner theory and experiment in the entire field of atoms and molecules and their interaction with electromagnetic radiation. Books in the series provide a rich source of new ideas and techniques with wide applications in fields such as chemistry, materials science, astrophysics, surface science, plasma technology, advanced optics, aeronomy, and engineering. Laser physics is a particular connecting theme that has provided much of the continuing impetus for new developments in the field. The purpose of the series is to cover the gap between standard undergraduate textbooks and the research literature with emphasis on the fundamental ideas, methods, techniques, and results in the field.
Please view available titles in Springer Series on Atomic, Optical, and Plasma Physics
on series homepage http://www.springer.com/series/411
Bahram M. Askerov
Sophia R. Figarova
Thermodynamics, Gibbs
Method and Statistical
Physics of Electron Gases
With 101 Figures
Prof. Bahram M. Askerov
Dr. Sophia Figarova
Baku State University
Zahid Khalilov St. 23
370148 Baku, Azerbaijan
bahram.mehrali@mail.ru
figarov@bsu.az
Springer Series on Atomic, Optical, and Plasma Physics
ISSN 1615-5653
ISBN 978-3-642-03170-0
e-ISBN 978-3-642-03171-7
DOI 10.1007/978-3-642-03171-7
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2009936788
© Springer-Verlag Berlin Heidelberg 2010
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Cover design: SPi Publisher Services
Printed on acid-free paper
Springer is part of Springer Science +Business Media (www.springer.com)
Preface
Thermodynamics and statistical physics study the physical properties (mechanical, thermal, magnetic, optical, electrical, etc.) of macroscopic systems. The tasks and objects of study in thermodynamics and statistical physics are identical. However, the methods of investigating macroscopic systems are different.
Thermodynamics is a phenomenological theory. It studies the properties of bodies without going into the mechanism of phenomena, i.e., without taking into consideration the relation between the internal structure of a substance and the phenomena; it generalizes experimental results. As a result of such generalization, the postulates and laws of thermodynamics made their appearance. These laws make it possible to find general relations between the different properties of macroscopic systems and the physical events occurring in them.
Statistical physics is a microscopic theory. On the basis of knowledge of the type of particles a system consists of, the nature of their interaction, and the laws of motion of these particles following from the structure of matter, it explains the properties observed in experiment and predicts new properties of systems. Using the laws of classical or quantum mechanics, and also the theory of probability, it establishes qualitatively new statistical regularities of the physical properties of macroscopic systems, substantiates the laws of thermodynamics, determines the limits of their applicability, gives a statistical interpretation of thermodynamic parameters, and also works out methods of calculating their mean values. The Gibbs method lies at the basis of statistical physics and is its most general method. Therefore, in this book, the exposition of the Gibbs method takes an important place.
Results stemming from phenomenological thermodynamics are general in character and can be applied to any macroscopic system; however, the internal mechanism of the physical phenomena and properties observed in experiment is not disclosed. In other words, thermodynamics only describes phenomena and establishes relations between them, but does not answer the question of why they happen just so.
Statistical physics relates the properties of bodies to their internal structure, creates the microscopic theory of physical phenomena, and answers the question of why they happen just so. The disadvantage of this method is that the results obtained are particular in character and valid only within the framework of the adopted model of the structure of matter.
Thermodynamics and statistical physics study not only equilibrium systems, but also systems in which currents and flows (the electric current, flows of energy and matter) exist. In this case, the theory is called the thermodynamics of nonequilibrium systems, or kinetics. Kinetics originates from the Boltzmann equation (1872) and has continued developing up to the present time.
The development of phenomenological thermodynamics started in the ﬁrst
half of the nineteenth century.
The ﬁrst law of thermodynamics was discovered by the German physiologist Julius Robert von Mayer (1842) and the English physicist James
Prescott Joule (1843). They showed the equivalence of heat and mechanical
work. The ﬁrst law of thermodynamics is a law of conservation of energy for
closed processes. In 1847, the German physicist and physiologist Hermann von
Helmholtz generalized this law for any nonclosed thermodynamic processes.
The second law of thermodynamics was discovered independently by the German physicist Rudolf Clausius (1850) and the English physicist William Thomson (Lord Kelvin). They introduced into the theory a new function of state – entropy – and discovered the law of increasing entropy.
The third law of thermodynamics was discovered in 1906 by the German physical chemist Walther Nernst. According to this law, the entropy of all systems, independently of external parameters, tends to the same value (zero) as the temperature approaches absolute zero.
Note that the ﬁrst law of thermodynamics is a law about energy, and the
second and the third ones are about entropy.
The founders of thermodynamics are J.R. von Mayer, J.P. Joule, H. von
Helmholtz, R. Clausius, W. Kelvin, and W. Nernst.
Statistical physics received its development only in the last quarter of the nineteenth century. The founders of classical statistical physics are R. Clausius, J.C. Maxwell, L. Boltzmann, and J.W. Gibbs. The culmination of classical statistical physics is the method of Josiah Willard Gibbs (1902).
The application of classical statistics to many problems gave results that did not coincide with the experimental facts of the time. Blackbody radiation (the thermodynamics of a photon gas), the heat capacity of metals, Pauli paramagnetism, etc. can serve as examples. These difficulties of classical statistics were circumvented only after the rise of quantum mechanics (L. de Broglie, W. Heisenberg, E. Schrödinger, and P. Dirac) and of quantum statistics, created on its basis (E. Fermi, P. Dirac, S.N. Bose, A. Einstein) during 1924–1926.
The method of thermodynamic functions and potentials, and also the Gibbs statistical method, or the method of free energy, being the keynote of the book, occupy an important place. It is shown that of all the thermodynamic functions the most important are the free energy and the grand thermodynamic potential, which are determined from the Gibbs canonical distribution. It is explained that the basic postulate of statistical physics – the microcanonical distribution of isolated systems – forms the basis of the statistical theory of the macroscopic properties of a system, and that all canonical distributions stem from it.
Knowing the free energy and the grand thermodynamic potential, it is easy to determine the entropy, the thermal and caloric equations of state, and also all thermodynamic coefficients measured experimentally. To do this in the case of classical systems, it is sufficient to know the Hamilton function – the energy as a function of the coordinates and momenta of the particles forming the system – and, for quantum systems, the energy spectrum, i.e., the dependence of energy on quantum numbers. This is the essence of the Gibbs method, which is applied to ideal and non-ideal gases, and also to a crystalline solid.
The exposition of the Fermi–Dirac and Bose–Einstein quantum statistics and their application to different quantum gases occupies a large place. It is shown how the difficulties of classical statistics, associated with its application to an electron gas in metals, are circumvented. The statistics of electron gases are considered in detail in this book.
A separate chapter is devoted to the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field. Note that the investigation of the properties of an electron gas under extreme conditions, in particular at ultralow temperatures and in strong quantizing magnetic fields, is one of the topical problems of contemporary physics.
In the last chapter, on the basis of the Boltzmann kinetic equation, the
electron gas in metals and semiconductors is considered in a nonequilibrium
state. Nonequilibrium processes are associated with charge carrier motion in
a crystal under external disturbances such as the electric ﬁeld and the temperature gradient in the magnetic ﬁeld. They include electric conductivity,
thermoelectric, galvanomagnetic, and thermomagnetic eﬀects.
Baku,
November 2009
Bahram M. Askerov
Sophia R. Figarova
Contents
1 Basic Concepts of Thermodynamics and Statistical Physics . . . . . . . . . . 1
1.1 Macroscopic Description of State of Systems: Postulates of Thermodynamics . . . . . . . . . . 1
1.2 Mechanical Description of Systems: Microscopic State: Phase Space: Quantum States . . . . . . . . . . 6
1.3 Statistical Description of Classical Systems: Distribution Function: Liouville Theorem . . . . . . . . . . 13
1.4 Microcanonical Distribution: Basic Postulate of Statistical Physics . . . . . . . . . . 19
1.5 Statistical Description of Quantum Systems: Statistical Matrix: Liouville Equation . . . . . . . . . . 22
1.6 Entropy and Statistical Weight . . . . . . . . . . 27
1.7 Law of Increasing Entropy: Reversible and Irreversible Processes . . . . . . . . . . 31
1.8 Absolute Temperature and Pressure: Basic Thermodynamic Relationship . . . . . . . . . . 35
2 Laws of Thermodynamics: Thermodynamic Functions . . . . . . . . . . 43
2.1 First Law of Thermodynamics: Work and Amount of Heat: Heat Capacity . . . . . . . . . . 43
2.2 Second Law of Thermodynamics: Carnot Cycle . . . . . . . . . . 50
2.3 Thermodynamic Functions of Closed Systems: Method of Thermodynamic Potentials . . . . . . . . . . 56
2.4 Thermodynamic Coefficients and General Relationships Between Them . . . . . . . . . . 63
2.5 Thermodynamic Inequalities: Stability of Equilibrium State of Homogeneous Systems . . . . . . . . . . 69
2.6 Third Law of Thermodynamics: Nernst Principle . . . . . . . . . . 74
2.7 Thermodynamic Relationships for Dielectrics and Magnetics . . . . . . . . . . 79
2.8 Magnetocaloric Eﬀect:
Production of UltraLow Temperatures . . . . . . . . . . . . . . . . . . . . . 83
2.9 Thermodynamics of Systems with Variable Number
of Particles: Chemical Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
2.10 Conditions of Equilibrium of Open Systems . . . . . . . . . . . . . . . . . 90
3 Canonical Distribution: Gibbs Method . . . . . . . . . . 93
3.1 Gibbs Canonical Distribution for Closed Systems . . . . . . . . . . . . 93
3.2 Free Energy: Statistical Sum and Statistical Integral . . . . . . . . . 99
3.3 Gibbs Method and Basic Objects of its Application . . . . . . . . . . 102
3.4 Grand Canonical Distribution for Open Systems . . . . . . . . . . . . . 103
4 Ideal Gas . . . . . . . . . . 109
4.1 Free Energy, Entropy and Equation
of the State of an Ideal Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.2 Mixture of Ideal Gases: Gibbs Paradox . . . . . . . . . . . . . . . . . . . . . 112
4.3 Law About Equal Distribution of Energy Over Degrees
of Freedom: Classical Theory of Heat Capacity
of an Ideal Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.3.1 Classical Theory of Heat Capacity of an Ideal Gas . . . . . 118
4.4 Quantum Theory of Heat Capacity of an Ideal Gas:
Quantization of Rotational and Vibrational Motions . . . . . . . . . 120
4.4.1 Translational Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.4.2 Rotational Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.4.3 Vibrational Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.4.4 Total Heat Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.5 Ideal Gas Consisting of Polar Molecules
in an External Electric Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
4.5.1 Orientational Polarization . . . . . . . . . . . . . . . . . . . . . . . . . . 133
4.5.2 Entropy: Electrocaloric Eﬀect . . . . . . . . . . . . . . . . . . . . . . . 137
4.5.3 Mean Value of Energy: Caloric Equation of State . . . . . . 138
4.5.4 Heat Capacity: Determination
of Electric Dipole Moment of Molecule . . . . . . . . . . . . . . . 139
4.6 Paramagnetic Ideal Gas in External Magnetic Field . . . . . . . . . . 141
4.6.1 Classical Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.6.2 Quantum Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.7 Systems with Negative Absolute Temperature . . . . . . . . . . . . . . . 150
5 Non-Ideal Gases . . . . . . . . . . 157
5.1 Equation of State of Rareﬁed Real Gases . . . . . . . . . . . . . . . . . . . 157
5.2 Second Virial Coeﬃcient and Thermodynamics
of Van Der Waals Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.3 Neutral Gas Consisting of Charged Particles: Plasma . . . . . . . . 169
6 Solids . . . . . . . . . . 175
6.1 Vibration and Waves in a Simple Crystalline Lattice . . . . . . . . . 175
6.1.1 OneDimensional Simple Lattice . . . . . . . . . . . . . . . . . . . . . 178
6.1.2 ThreeDimensional Simple Crystalline Lattice . . . . . . . . . 182
6.2 Hamilton Function of Vibrating Crystalline Lattice:
Normal Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.3 Classical Theory of Thermodynamic Properties of Solids . . . . . 187
6.4 Quantum Theory of Heat Capacity of Solids:
Einstein and Debye Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.4.1 Einstein’s Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
6.4.2 Debye’s Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
6.5 Quantum Theory of Thermodynamic Properties of Solids . . . . . 204
7 Quantum Statistics: Equilibrium Electron Gas . . . . . . . . . . 213
7.1 Boltzmann Distribution: Diﬃculties of Classical Statistics . . . . 214
7.2 Principle of Indistinguishability of Particles:
Fermions and Bosons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
7.3 Distribution Functions of Quantum Statistics . . . . . . . . . . . . . . . 229
7.4 Equations of States of Fermi and Bose Gases . . . . . . . . . . . . . . . . 234
7.5 Thermodynamic Properties of Weakly Degenerate
Fermi and Bose Gases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
7.6 Completely Degenerate Fermi Gas: Electron Gas:
Temperature of Degeneracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
7.7 Thermodynamic Properties
of Strongly Degenerate Fermi Gas: Electron Gas . . . . . . . . . . . . . 244
7.8 General Case: Criteria of Classicity
and Degeneracy of Fermi Gas: Electron Gas . . . . . . . . . . . . . . . . 249
7.8.1 Low Temperatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
7.8.2 High Temperatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
7.8.3 Moderate Temperatures: T ≈ T0 . . . . . . . . . . . . . . . . . . . . 251
7.9 Heat Capacity of Metals:
First Diﬃculty of Classical Statistics . . . . . . . . . . . . . . . . . . . . . . . 254
7.9.1 Low Temperatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
7.9.2 Region of Temperatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
7.10 Pauli Paramagnetism: Second Diﬃculty of Classical Statistics . 258
7.11 “Ultra-Relativistic” Electron Gas in Semiconductors . . . . . . . . . . 262
7.12 Statistics of Charge Carriers in Semiconductors . . . . . . . . . . . . . 265
7.13 Degenerate Bose Gas: Bose–Einstein Condensation . . . . . . . . . . 277
7.14 Photon Gas: Third Diﬃculty of Classical Statistics . . . . . . . . . . 282
7.15 Phonon Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
8 Electron Gas in Quantizing Magnetic Field . . . . . . . . . . 297
8.1 Motion of Electron in External Uniform Magnetic Field:
Quantization of Energy Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . 297
8.2 Density of Quantum States in Strong Magnetic Field . . . . . . . . . 302
8.3 Grand Thermodynamic Potential and Statistics
of Electron Gas in Quantizing Magnetic Field . . . . . . . . . . . . . . . 304
8.4 Thermodynamic Properties of Electron Gas
in Quantizing Magnetic Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
8.5 Landau Diamagnetism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
9 Non-Equilibrium Electron Gas in Solids . . . . . . . . . . 321
9.1 Boltzmann Equation and Its Applicability Conditions . . . . . . . . 321
9.1.1 Nonequilibrium Distribution Function . . . . . . . . . . . . . . . . 321
9.1.2 Boltzmann Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
9.1.3 Applicability Conditions of the Boltzmann Equation . . . 325
9.2 Solution of Boltzmann Equation in Relaxation
Time Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
9.2.1 Relaxation Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
9.2.2 Solution of the Boltzmann Equation in the Absence
of Magnetic Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
9.2.3 Solution of Boltzmann Equation with an Arbitrary
Nonquantizing Magnetic Field . . . . . . . . . . . . . . . . . . . . . . 336
9.3 General Expressions of Main Kinetic Coeﬃcients . . . . . . . . . . . . 340
9.3.1 Current Density and General Form
of Conductivity Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
9.3.2 General Expressions of Main Kinetic Coeﬃcients . . . . . . 342
9.4 Main Relaxation Mechanisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
9.4.1 Charge Carrier Scattering by Ionized Impurity Atoms . . 345
9.4.2 Charge Carrier Scattering by Phonons in Conductors
with Arbitrary Isotropic Band . . . . . . . . . . . . . . . . . . . . . . 348
9.4.3 Generalized Formula for Relaxation Time . . . . . . . . . . . . 357
9.5 Boltzmann Equation Solution for Anisotropic Band
in Relaxation Time Tensor Approximation . . . . . . . . . . . . . . . . . . 359
9.5.1 Current Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
9.5.2 The Boltzmann Equation Solution . . . . . . . . . . . . . . . . . . . 360
9.5.3 Current Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
Deﬁnite Integrals Frequently Met in Statistical Physics . . . . . . . . 363
A.1 Gamma-Function or Euler Integral of Second Kind . . . . . . . . . . 363
A.2 Integral of Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
A.3 Integral of Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
A.4 Integral of Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
A.5 Integral of Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Jacobian and Its Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
1 Basic Concepts of Thermodynamics and Statistical Physics
Summary. The basic concepts and postulates of thermodynamics and statistical physics are expounded in this chapter. Different ways of describing the state of macroscopic systems, consisting of a very large number of particles such as atoms, molecules, ions, electrons, photons, phonons, etc., are presented. Such concepts as the distribution function over microstates, the statistical weight of a preassigned macroscopic state of a system, absolute temperature, and pressure are also introduced.
1.1 Macroscopic Description of State of Systems:
Postulates of Thermodynamics
As noted in the Preface, thermodynamics and statistical physics study the physical properties of macroscopic systems with a large number of degrees of freedom. The lifetime of these systems ought to be sufficiently long to conduct experiments on them. An ordinary gas consisting of atoms or molecules, a photon gas, a plasma, a liquid, a crystal, and so on can serve as examples of such systems. A small but macroscopic part of the considered system is called a subsystem.
Macroscopic systems can interact with each other or with the surrounding medium through the following channels:
1. An interaction in which the considered system performs work on other systems, or vice versa, is called mechanical interaction (ΔA ≠ 0). In this case, the volume of the system changes.
2. An interaction in which the energy of the system changes only at the expense of heat transfer (without performing work) is called thermal interaction (ΔQ ≠ 0).
3. An interaction leading to an exchange of particles between systems or between the system and the surrounding medium is called material interaction (ΔN ≠ 0).
Depending on which of the above-indicated channels is open or closed, different types of macroscopic systems exist in nature.
A system is called isolated if energy and material exchange with the surrounding medium is absent (ΔA = 0, ΔQ = 0, ΔN = 0). For such systems
all channels of interaction are closed.
If a system is surrounded by a heat-insulated shell, the system is called an adiabatically isolated system (ΔQ = 0).
If a system does not exchange particles with the surrounding medium (ΔN = 0), such a system is called closed; if, on the other hand, exchange of particles (ΔN ≠ 0) takes place, the system is called open.
If the considered system is a small but macroscopic part of a large system,
physical processes occurring in it will hardly inﬂuence the thermodynamic
state of the large system. In this case, the large system is called a thermostat,
and the system interacting with it is a system in the thermostat.
The thermodynamic state of each system at preassigned external conditions can be described by a restricted number of physical quantities which can
be measured on test. Such quantities are called thermodynamic parameters.
The number of particles in a system N , its volume V , pressure P , absolute
temperature T , dielectric P and magnetic M polarization vectors, electric
E and magnetic H ﬁeld strengths are examples of thermodynamic parameters. These parameters characterize the system itself and also the external
conditions in which it is found.
Parameters that are determined by the coordinates of external bodies interacting with a system are called external parameters: the volume, the electric and magnetic field strengths, etc. Parameters that, apart from the coordinates of external bodies, also depend on the coordinates and momenta of the particles entering into the system are called internal parameters: pressure, temperature, the internal energy, and the dielectric and magnetic polarizations.
Internal parameters can be intensive or extensive. Parameters not depending on the number of particles in a system are called intensive: pressure,
temperature, etc. Parameters that are proportional to the number of particles
or the amount of substance are called extensive or additive: entropy, energy
and other thermodynamic potentials.
The state of a system determined by the totality of the above-enumerated thermodynamic parameters, measured experimentally, is called a macroscopic state of the system:
Macroscopic state ⇒ (N, V, P, T, P, M , E, H , . . .) .
It is evident that these macroscopic parameters determine the averaged state of a system, in that the details and the nature of the complex motion of the particles composing the system are disregarded. Such a description of a system bears a phenomenological, i.e. descriptive, character.
If thermodynamic parameters determining the state of a system do not
depend on time, such a state is called stationary. Moreover, if stationary ﬂows
and currents are absent in a system, such a state is called a thermodynamic
equilibrium. This state is the simplest macroscopic state of a system. It is to
be noted that even in this state, inside the system particles perform complex
chaotic motion; however, this motion is not of interest in thermodynamics.
After introducing the primary basic thermodynamic concepts, we pass on
to the exposition of two postulates comprising the basis of thermodynamics.
These postulates were established from generalizations of experimental data.
The ﬁrst postulate of thermodynamics states that each isolated system
has only one intrinsic state, that of thermodynamic equilibrium. If a system
is not in the equilibrium state, it tends toward its equilibrium state over a
period of time, and once it has achieved that state, can never come out of it
spontaneously without an external force being exerted on it.
Also called the general principle of thermodynamics, this postulate deﬁnes
the thermodynamic equilibrium state. This principle is demonstrated by an
example of the macroscopic parameter L (Fig. 1.1). The time τ during which
the parameter L(t) passes to the equilibrium state L0 is called the relaxation
time. The quantity τ depends on the nature of interaction and intensity of
the motion of the particles composing the system.
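As an illustration (a common simple model, not given explicitly in the text), the approach of the parameter L(t) to its equilibrium value L0 is often described by a linear relaxation equation in which τ sets the characteristic time scale:

```latex
\frac{\mathrm{d}L}{\mathrm{d}t} = -\,\frac{L - L_{0}}{\tau}
\quad\Longrightarrow\quad
L(t) = L_{0} + \bigl[L(0) - L_{0}\bigr]\, e^{-t/\tau}.
```

After a few relaxation times the deviation from L0 becomes negligible, which is the sense in which the system "passes to" the equilibrium state.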
The ﬁrst postulate of thermodynamics determines the limit of applicability of the laws of thermodynamics. Indeed, inasmuch as each system consists
of chaotically moving particles, the parameter L(t) can deviate from its mean
value, i.e. a ﬂuctuation occurs. These deviations are schematically shown in
Fig. 1.1. Thermodynamics disregards these ﬂuctuations and takes only the
mean values measured on test into consideration. Therefore the laws of thermodynamics are applicable only to systems in which deviations from the mean
values are much smaller than the mean values themselves. But this is possible
only in systems with a suﬃciently large number of particles.
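This qualitative argument admits a standard order-of-magnitude estimate (a sketch under the usual assumption of N nearly independent particles; the estimate itself is not part of the text): for an additive quantity L, the relative fluctuation scales as

```latex
\frac{\sqrt{\langle (\Delta L)^{2} \rangle}}{\langle L \rangle}
\;\sim\; \frac{1}{\sqrt{N}}.
```

For a macroscopic body with N ~ 10^22 particles this gives relative deviations of order 10^-11, far below experimental accuracy, whereas for small N the deviations can be comparable to the mean values themselves.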
If a system consists of a small number of particles, the relative ﬂuctuation
can be large and the system itself can move away from the equilibrium state.
In this case, the concept of “the equilibrium state” loses its sense, and the
ﬁrst postulate of thermodynamics is violated. It can be demonstrated by a
simple example. Let a rectangular vessel contain a gas of N particles. Mentally divide the vessel into two equal parts. In the equilibrium state, N/2 molecules ought to be found in each half. If N = 4,
Fig. 1.1. Fluctuation of the thermodynamic parameter L(t) about its equilibrium value L0 (τ is the relaxation time)
often the following picture can be observed: in the ﬁrst part there are three
molecules, and in the second there is one molecule; or, in the ﬁrst part there
are four molecules, and the second part is empty. Such a situation means that
the system itself is not in the equilibrium state, in which N/2 = 2 molecules
should be present in each part of the vessel.
Thus, from the ﬁrst postulate the following conclusion can be made: Laws
of thermodynamics are not applicable to systems consisting of a small number
of particles.
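The vessel example can be checked with elementary combinatorics: each of the N molecules independently sits in either half with probability 1/2, so the occupancy of one half follows a binomial distribution. A minimal sketch (illustrative only, not from the book; the function name is ours):

```python
from math import comb

def prob_left(n, k):
    """Probability that exactly k of n molecules sit in the left half,
    assuming each molecule independently occupies either half with p = 1/2."""
    return comb(n, k) / 2**n

# N = 4: the equal 2-2 split is observed only 37.5% of the time
print(prob_left(4, 2))                      # 0.375

# one half completely empty (4-0 or 0-4) occurs in 1/8 of observations
print(prob_left(4, 0) + prob_left(4, 4))    # 0.125

# N = 100: probability that one half deviates from N/2 = 50 by more than 25
p_large_dev = sum(prob_left(100, k) for k in range(101) if abs(k - 50) > 25)
print(p_large_dev < 1e-6)                   # True: such fluctuations are negligible
```

For N = 4 the "equilibrium" 2–2 split is seen in barely a third of observations, while already at N = 100 a deviation exceeding half the mean is vanishingly rare, which illustrates why the first postulate requires a large number of particles.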
The second postulate of thermodynamics states that if two systems A and B are separately found in thermodynamic equilibrium with a third system C, then A and B are also in thermodynamic equilibrium with each other, i.e.

A ∼ C, B ∼ C ⇒ A ∼ B.    (1.1)
This postulate is also called the zeroth law of thermodynamics and, as we will
see below, deﬁnes the concept of absolute temperature.
The second postulate determines the upper boundary of applicability of
the laws of thermodynamics. As seen from this postulate, when bringing into
thermal contact subsystems A, B, C, or while disconnecting them, the state
of equilibrium is not violated, i.e. the energy of interaction of the subsystems
is negligibly small and the energy of the whole system is an additive quantity:¹
E = Σα Eα ,  (1.2)
where Eα is the energy of the subsystem α.
Thus, the laws of thermodynamics are applicable only to systems for which
the condition of additivity of energy (1.2) is fulﬁlled. Naturally, the condition
of additivity is not fulfilled for large systems, individual parts of which interact through a gravitational field.² Therefore, the laws of thermodynamics are not
applicable to very large, complex systems, e.g. to the universe as a whole.
From the second postulate, besides the principle of additivity of energy, a second, no less important, conclusion stems. Indeed, from this postulate it follows that if A, B and C are subsystems of a large system in the equilibrium state, their state, besides the external parameters, ought to be characterized also by a common internal parameter. This internal intensive parameter is called temperature and is identical in all parts of a large system in the thermodynamic equilibrium state. Temperature is determined by the intensity of the thermal motion of the particles in the system. Thus, according to the second postulate, the thermodynamic equilibrium state of a system is determined by the totality of external parameters and temperature. Consequently,
¹ Equality (1.2) supposes that the energy of interaction between subsystems is negligibly small compared with the internal energy of a subsystem.
² In this case, the gravitational energy of interaction between parts cannot be neglected.
1.1 Macroscopic Description of State of Systems
according to the second postulate, each internal parameter is a function of
external parameters and temperature. This conclusion relates any internal
parameter Ai to temperature T and external parameters a1 , a2 , . . . , an :
Ai = Ai (a1 , a2 , . . . , an ; T ),  i = 1, 2, . . . , k,  (1.3)
where k is the number of internal parameters. This equation, written in the symbolic form, is called the equation of state. The number of such equations, naturally, equals the number of internal parameters k.
If we take the internal energy of a system as the internal parameter, Ai ≡ E, then (1.3) can be presented in the form
E = E(a1 , a2 , . . . , an ; T ).  (1.4)
This equation is called the caloric equation of state of a system.
If we take pressure as the internal parameter, Ai ≡ P , then from (1.3) we get the thermal equation of state:
P = P (a1 , a2 , . . . , an ; T ).  (1.5)
Thus, from the set of equations (1.3) it is seen that the thermodynamic state of a system is uniquely determined by (n + 1) independent parameters. Therefore, the number (n + 1) is called the thermodynamic degree of freedom of a system. Depending on the complexity of a system, n takes on the values n = 1, 2, 3, . . ..
In the simplest case of closed systems,³ if volume V is taken as the independent external parameter, the internal parameters pressure P and internal energy E, conforming to (1.3), can be expressed as follows:
P = P (V ; T );  E = E(V ; T ).  (1.6)
The explicit form of these equations for ideal gases is experimentally determined and theoretically substantiated by statistical methods:
P = (N/V ) k0 T ;  E = (3/2) N k0 T ,  (1.7)
where N is the number of particles of an ideal gas, and k0 is the Boltzmann
constant.
If from (1.4) we determine temperature and substitute it into (1.3),
all internal parameters can be expressed by E and external parameters
a1 , a2 , . . . , an . Thus, the second postulate of thermodynamics can also be expressed as follows: All internal parameters of a system in thermodynamic equilibrium are functions of the external parameters and energy:
Ai = Ai (a1 , a2 , . . . , an ; E).  (1.8)
³ A gas consisting of a preassigned number of neutral atoms or molecules can be considered as an example.
Systems satisfying this condition are called ergodic. Consequently, thermodynamics is applicable only to ergodic systems.
For an ideal gas, using (1.7), the equation of type (1.8) takes the following
explicit form:
P = 2E/3V .  (1.9)
Finally, note once more that the ﬁrst postulate of thermodynamics deﬁnes the
concept of thermodynamic equilibrium and the second one deﬁnes the concept
of absolute temperature.
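The relations (1.7) and (1.9) are easy to check numerically. The sketch below (Python; the values of N, V and T are illustrative choices for roughly 1 cm³ of gas under normal conditions, not taken from the text) computes the pressure from the thermal equation of state and from the energy, and confirms that the two expressions agree:

```python
# Numerical check of the ideal-gas relations (1.7) and (1.9), CGS units.
k0 = 1.38e-16       # Boltzmann constant, erg/K
N = 2.7e19          # number of molecules (illustrative)
V = 1.0             # volume, cm^3
T = 273.0           # temperature, K

P_thermal = N * k0 * T / V        # thermal equation of state, (1.7)
E = 1.5 * N * k0 * T              # caloric equation of state, (1.7)
P_caloric = 2.0 * E / (3.0 * V)   # relation (1.9)

print(P_thermal)                  # ~1.0e6 dyn/cm^2, i.e. about 1 atm
print(abs(P_thermal - P_caloric)) # ~0: the two expressions agree
```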
1.2 Mechanical Description of Systems. Microscopic State. Phase Space. Quantum States
It is known that any macroscopic system consists of a colossal but finite number of particles, and also that each particle can have a complex internal structure. Here the structure of the particles does not interest us, and we will regard the considered system as consisting of N chaotically moving material points interacting among themselves. Thus, the number of degrees of freedom of the considered system is 3N. Note that under normal conditions 1 cm³ of air contains 3 × 10¹⁹ molecules. The linear dimension of each molecule is 10⁻⁸ cm. In order to have a notion of the number of particles and their dimensions, we quote a known example by Kelvin, according to which the number of H2O molecules in a glass of water is 100 times the number of glasses of water available in all the oceans and seas of the world.
Naturally, it is impossible to describe in detail the state of such a macroscopic system with a small number of thermodynamic parameters, since these parameters disregard the internal structure of the system. For the complete description of a system, it is necessary to know which particles it consists of, what the nature of their interaction is, and by which equations their motion is described, i.e. whether the motion of the particles obeys classical or quantum mechanical laws. In conformity with this, two types of systems exist in nature: classical and quantum systems. We consider these cases separately.
Classical systems. The motion of particles forming such systems obeys
the laws of classical mechanics, and the state of each of them is determined
by three generalized coordinates qi (t) and by three corresponding generalized impulses pi (t),⁴ where i takes on the values i = 1, 2, 3. Consequently, the general state of a classical system consisting of N particles at the instant of time t is determined by 6N quantities:
microstate ⇒ (q, p) ≡ (q1 , q2 , . . . , q3N ; p1 , p2 , . . . , p3N ).  (1.10)
⁴ In classical statistical physics the motion of particles is characterized not by velocity but by impulse, since the Liouville theorem (Sect. 1.3) holds not in the space of coordinates and velocities (q, q̇), but in the phase space (q, p).
Such a state of a system, determined by 3N generalized coordinates and 3N generalized impulses, is called the microscopic state or, briefly, the microstate of the classical system. The quantities qi (t) and pi (t), i.e. the microstates of the system, are found from a solution of the system of Hamilton canonical equations:
q̇i = ∂H/∂pi ,  ṗi = −∂H/∂qi ;  i = 1, 2, 3, . . . , 3N,  (1.11)
where 3N is the number of degrees of freedom of the system, the dots over qi and pi denote time derivatives, and H is the Hamilton function of the system. For conservative systems, H coincides with the total energy E(q, p)
of the system:
H ≡ E(q, p) = Σ_{i=1}^{3N} pi²/2m + U (q1 , q2 , . . . , q3N ),  (1.12)
where m is the mass of a particle and U (q) is the potential energy of interaction
of particles; it is supposed that an external ﬁeld is absent.
To describe a microstate of classical systems, it is convenient to introduce the concept of phase space, or the Γ-space. Each system has its intrinsic phase space. For instance, the phase space of a classical system consisting of N particles represents an abstract 6N-dimensional space. The position of each "point" in this space is determined by 3N generalized coordinates qi and 3N generalized impulses pi , i.e. by 6N quantities.
Thus, a microstate of a classical system consisting of N particles has a corresponding "point" – a phase point – in the 6N-dimensional phase space.
Henceforth, for an elementary volume of the phase space dΓ we will use
the following symbolic notation:
dΓ = dq dp ≡ Π_{i=1}^{3N} dqi dpi .  (1.13)
Hence it is seen that the dimensionality of an element of "volume" of the phase space of a classical system consisting of N particles is (action)^{3N}.
The phase space can be subdivided into two subspaces: those of coordinates and of impulses. Then, for an element of volume of the phase space one can write dΓ = dΓq · dΓp .
For some systems (e.g. an ideal gas), instead of the Γ-space we may introduce the concept of the μ-space. The μ-space is a six-dimensional space, each point of which is determined by the coordinates and impulse components of one particle (x, y, z, px , py , pz ). It is evident that in the μ-space a microstate of an ideal gas consisting of N particles is described by a multitude of N points.
Coordinates qi (t) and impulses pi (t) of the particles forming a system continually change in conformity with the equations of motion (1.11); therefore, the microstate of the system changes with time. As a result, the position of the phase point changes, describing a specified "curve" in the 6N-dimensional phase space. This curve is called the phase trajectory.⁵ The equation of the phase trajectory, in principle, can be found from the solution of the system of equations (1.11). These solutions can be written symbolically in the
following form:
qi = qi (t; q01 , q02 , . . . , q03N ; p01 , p02 , . . . , p03N ),
pi = pi (t; q01 , q02 , . . . , q03N ; p01 , p02 , . . . , p03N ),  (1.14)
where q0i and p0i are the initial coordinates and impulses, respectively, of the particles.
Note that the phase trajectory can be closed, but it cannot intersect or touch itself. This result follows from the principle of determinism of classical mechanics, i.e. the uniqueness of the solution (1.14) of the equations of motion (1.11).⁶
The phase trajectory of even simple systems cannot be depicted graphically. It is sufficient to remember that the phase space of a system consisting of one particle is six-dimensional. The phase trajectory can be depicted graphically only for one particle moving in a one-dimensional space. In this case,
the phase space is two-dimensional. In Fig. 1.2, the phase trajectory of a freely moving particle with a mass m and a preassigned energy p²/2m = ε0 = const in "a one-dimensional box" with dimensions −L/2 ≤ q ≤ L/2 is presented: it consists of the two straight segments p = ±√(2mε0).
Fig. 1.2. The phase trajectory of one-dimensional free motion: the segments p = ±√(2mε0) between q = −L/2 and q = L/2
As seen from Fig. 1.3, the phase trajectory of a linear harmonic oscillator with a preassigned energy p²/2m + mω²q²/2 = ε0 = const represents an ellipse with semiaxes √(2ε0/mω²) and √(2mε0), where ω is the circular frequency of the linear harmonic oscillator with a mass m.
Fig. 1.3. The phase trajectory of a linear harmonic oscillator: an ellipse in the (q, p) plane with semiaxes √(2ε0/mω²) and √(2mε0)
⁵ This curve should not be confused with the trajectory of a particle's motion in the usual three-dimensional space.
⁶ Indeed, if the phase trajectory were to intersect itself, the intersection point could be accepted as an initial one, and the further change in the state of the system would not be single-valued.
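The ellipse of Fig. 1.3 can also be checked numerically. The sketch below (Python; the values of m, ω, the initial point, the step size and the leapfrog integrator are all illustrative choices, not the book's) integrates Hamilton's equations (1.11) for the oscillator and verifies that the phase point sweeps out exactly the semiaxes √(2ε0/mω²) and √(2mε0):

```python
import math

# Integrate Hamilton's equations (1.11) for the linear harmonic
# oscillator H = p^2/2m + m*w^2*q^2/2 with a leapfrog scheme, and check
# that the phase point stays on the ellipse of Fig. 1.3, whose semiaxes
# are sqrt(2*e0/(m*w^2)) along q and sqrt(2*m*e0) along p.
m, w = 1.0, 2.0
q, p = 1.0, 0.0                           # initial phase point
e0 = p**2 / (2*m) + m * w**2 * q**2 / 2   # conserved energy
dt = 1e-4

q_max = p_max = 0.0
for _ in range(200000):               # about six periods
    p -= 0.5 * dt * m * w**2 * q      # pdot = -dH/dq (half step)
    q += dt * p / m                   # qdot = +dH/dp (full step)
    p -= 0.5 * dt * m * w**2 * q      # second half step
    q_max = max(q_max, abs(q))
    p_max = max(p_max, abs(p))

# the largest excursions reproduce the two semiaxes of the ellipse
print(q_max, math.sqrt(2 * e0 / (m * w**2)))
print(p_max, math.sqrt(2 * m * e0))
```

The leapfrog step is used here because it is symplectic, so the numerical trajectory stays on (a slightly deformed version of) the true ellipse instead of spiralling in or out.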
Quantum systems. For quantum systems, i.e. systems in which the motion of particles is described by the equations of quantum mechanics, the concepts of phase space, phase point and phase trajectory have no sense. Indeed, according to the Heisenberg uncertainty principle, the coordinate q and impulse p of a particle cannot be simultaneously determined with arbitrary precision. The uncertainties Δq and Δp are related by the following relationship:
(Δq)² · (Δp)² ≥ ℏ²/4,  (1.15)
where ℏ = h/2π = 1.05 × 10⁻²⁷ erg s, h being the Planck constant.
The state of a system in nonrelativistic quantum mechanics is described by the stationary Schrödinger equation
ĤΨn = En Ψn .  (1.16)
Here Ψn and En are an eigenfunction and an eigenvalue of energy, respectively. The Hamilton operator Ĥ entering into (1.16) can be obtained from expression (1.12) by the replacement pi ⇒ p̂i = −iℏ ∂/∂qi . If in place of the generalized coordinates qi we take the Cartesian coordinates ri = (xi , yi , zi ), the Hamilton operator Ĥ takes the following form:
Ĥ = − Σ_{i=1}^{N} (ℏ²/2m) ∇i² + U (r1 , r2 , . . . , rN ),  (1.17)
where ∇i² is the Laplace operator acting on the coordinates of the ith particle. In (1.16), n is the totality of quantum numbers determining one quantum state of the system. The wave function of the system Ψn depends on the coordinates of all the particles, i.e. Ψn = Ψn (r1 , r2 , . . . , rN ).
Thus, the microscopic state of a quantum system is determined by the totality of all quantum numbers describing one quantum state Ψn . One value of the energy of the system En can correspond to each microstate n, or several microstates can correspond to one value of energy, i.e. quantum degeneracy is possible.
For real systems, we cannot exactly solve the Schrödinger equation (1.16)
and determine the microstate of a system. Here, we consider the simplest ideal
systems, allowing an exact solution.
Ideal gas in a rectangular box. Assume that in a rectangular box with
dimensions Lx , Ly , Lz , N noninteracting particles with a mass m are found.
It is known that the total energy of an ideal gas is the sum of the energies of
the individual particles:
En = Σ_{i=1}^{N} εni ,  (1.18)
where
εni = (ℏ²/2m)(kix² + kiy² + kiz²)  (1.19)
is the quantum mechanical energy of the ith particle;
kix = (π/Lx ) nix ,  kiy = (π/Ly ) niy ,  kiz = (π/Lz ) niz  (1.20)
are the components of the wave vector, where
nix , niy , niz = 1, 2, 3, . . .  (1.21)
are the quantum numbers of the ith particle, taking on any positive integer value; and n ⇒ (n1x , n1y , n1z ; n2x , n2y , n2z ; . . . ; nNx , nNy , nNz ) is the totality of quantum numbers determining the state of the system.
If we substitute the value of the wave vector (1.20) into (1.19), the energy of the ith particle εni can be expressed in terms of the quantum numbers:
εni = (π²ℏ²/2m) (nix²/Lx² + niy²/Ly² + niz²/Lz²).  (1.22)
As can be seen, the energy of the ith particle of an ideal gas is determined
by three quantum numbers nix , niy , niz (the spin of a particle is disregarded).
From quantum mechanics, it is known that in such a case each totality of
quantum numbers has only one corresponding wave function, i.e. one quantum
state, and the degeneracy is absent.
If the energy of a particle (1.22) is taken into account in (1.18), it can be
said that one microstate of an ideal gas in the box is determined by assigning
3N quantum numbers, i.e. for the considered system
microscopic state ⇒ (n1x , n1y , n1z ; n2x , n2y , n2z ; . . . ; nNx , nNy , nNz ).  (1.23)
System consisting of harmonic oscillators. Assume that the considered system
consists of N noninteracting linear harmonic oscillators with frequency ω.
Owing to the absence of interaction between the oscillators, expression (1.18) for the energy of the system remains in force; only in the given case the energy of the ith oscillator has the form
εi = (ni + 1/2) ℏω,  (1.24)
where ni = 0, 1, 2, . . . are the quantum numbers of an oscillator. Each value of ni has one corresponding wave function, i.e. one quantum state.
Thus, one microstate of an ideal gas consisting of N linear harmonic
oscillators is determined by the totality of N quantum numbers:
microstate ⇒ (n1 , n2 , . . . , nN ).  (1.25)
System consisting of rotators. Consider a system consisting of N noninteracting rotators, each formed by rigidly bound atoms of masses m′ and m″ at a distance r from each other (a diatomic molecule). Assume that the rotators rotate around the axes passing through the fixed centres of mass. Inasmuch as the rotators do not interact, the total energy of the system can be determined from expression (1.18); however, in the given case, the quantum mechanical energy of each rotator is preassigned by the expression
εi = (ℏ²/2I ) li (li + 1),  (1.26)
where li = 0, 1, 2, . . . is the azimuthal quantum number, I = mr² is the moment of inertia of a rotator (a molecule), and m = m′m″/(m′ + m″) is the reduced mass.
Note that the wave function of a rotator, apart from the azimuthal quantum number li , depends also on the magnetic quantum number mi , which
takes on integer values in the limits
−li ≤ mi ≤ li .  (1.27)
Hence it follows that each quantum state of a rotator is determined by two
quantum numbers (li and mi ). Inasmuch as the energy of a rotator does not depend on the quantum number mi , its energy levels are (2li + 1)-fold degenerate.
Thus, one microstate of a system consisting of N rotators is determined
by the totality of 2N quantum numbers:
microstate ⇒ (l1 , m1 ; l2 , m2 ; . . . ; lN , mN ).  (1.28)
From the examples given above, it is seen that the total number of quantum numbers determining one microstate of a system equals the number of its degrees of freedom. In fact, if we consider an ideal gas consisting of N diatomic molecules and take into account that each molecule has three translational, one vibrational and two rotational degrees of freedom, then the system as a whole has 6N degrees of freedom. From (1.23), (1.25) and (1.28), it is seen that for the considered system the total number of quantum numbers determining one microstate also equals 6N.
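The counting of rotator states in (1.27) and (1.28) can be verified directly. The sketch below (Python, with an illustrative cutoff l_max) enumerates the pairs (l, m), confirms the (2l + 1)-fold degeneracy of each level, and checks the cumulative count Σ(2l + 1) = (l_max + 1)²:

```python
# States of one rotator: pairs (l, m) with -l <= m <= l, per (1.27).
# The energy (1.26) depends only on l, so level l is (2l+1)-fold
# degenerate; the total number of states up to l_max is (l_max + 1)^2.
l_max = 10
states = [(l, m) for l in range(l_max + 1) for m in range(-l, l + 1)]

for l in range(3):
    g = sum(1 for li, mi in states if li == l)
    print(l, g)                         # degeneracies 1, 3, 5

print(len(states), (l_max + 1) ** 2)    # both 121
```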
Quasiclassical approximation. It is known that at speciﬁed conditions (the
quasiclassical approximation) the quantum mechanical description of a system can be replaced with its classical counterpart, i.e. in some particular cases,
quantum systems behave as classical ones. Consider the conditions of quasiclassicity. The quasiclassical approximation is applicable when the diﬀerence
of the energies of two adjacent quantum levels is much less than the energy of the particles:
[ε(n + 1) − ε(n)] ≪ ε(n).  (1.29)
For the case (1.22), this inequality takes the form (2n + 1) ≪ n², i.e. the quasiclassical approximation is applicable for very large (n ≫ 1) quantum numbers.
The condition of quasiclassicity (1.29) can obviously be expressed as follows: if the mean energy of the thermal motion of particles ε(n) = k0 T is much more than the discreteness of the energy spectrum, the motion can be regarded as classical. For free particles in a box with volume V = L³, i.e. for case (1.22), the condition of quasiclassicity (1.29) can be written down in the form
π²ℏ²/2mL² ≪ k0 T,  (1.30)
where T is the absolute temperature.
The condition of quasiclassicity (1.30) can also be presented as
L ≫ λ,  (1.31)
where λ = h/p is the de Broglie wavelength of a particle, and p = √(2mk0 T) is the mean impulse.
Thus, the free motion of particles is classical when the linear dimensions L of the space in which the motion occurs are much greater than the de Broglie wavelength of a particle.
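Condition (1.31) is easy to estimate numerically. The sketch below (Python; SI values, with the molecular mass of N2 and the box size L = 1 cm chosen purely for illustration) computes λ = h/√(2mk0T) at room temperature and shows that L exceeds λ by many orders of magnitude, so the motion is classical:

```python
import math

# Estimate for condition (1.31): de Broglie wavelength lambda = h/p with
# mean impulse p = sqrt(2*m*k0*T).  Illustrative SI values.
h = 6.626e-34          # Planck constant, J s
k0 = 1.381e-23         # Boltzmann constant, J/K
m = 4.65e-26           # mass of an N2 molecule, kg
T = 300.0              # temperature, K
L = 1e-2               # box size (1 cm), m

lam = h / math.sqrt(2 * m * k0 * T)
print(lam)             # ~3.4e-11 m, a small fraction of an angstrom
print(L / lam)         # ~3e8, i.e. L >> lambda: the motion is classical
```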
Consider the question of the number of microstates of a system. For quantum systems, the number of microstates in a preassigned range of energy equals the number of quantum states. In the quantum case, the number of microstates falling within the specified range of energy is finite because of the discreteness of the spectrum.
For classical systems, one phase point corresponds to each microstate and therefore, formally, an "infinite" number of microstates corresponds to any element of the phase space. If, from the point of view of quantum mechanics, it is taken into account that the least "volume" of the phase space h = 2πℏ (Fig. 1.4) corresponds to one state of a particle with one degree of freedom,
Fig. 1.4. The "volume" h = 2πℏ (an element dq dp of the (q, p) plane) of the quantum state of a particle with one degree of freedom
then the "volume" (2πℏ)^{3N} corresponds to a microstate of a system with 3N degrees of freedom in the phase space.
Thus, the number of quantum states corresponding to "an element of volume" dΓ = dq dp equals
dG = dΓ/(2πℏ)^{3N} = Π_{i=1}^{3N} dqi dpi /(2πℏ)^{3N} .  (1.32)
Note that we will use relation (1.32) between "an element of volume" of the phase space and the number of quantum states of a system for the unique determination of the entropy of a system in the quasiclassical case (Sect. 1.6).
1.3 Statistical Description of Classical Systems. Distribution Function. Liouville Theorem
As noted in Sect. 1.2, to determine a microstate of a classical system consisting
of N particles it is necessary to know 6N parameters. To do this, it is required
to solve a system of 6N equations (1.11). Even if the explicit form of the
Hamilton function (1.12) and initial conditions are known, it is very diﬃcult
or practically impossible to solve the system (1.11) owing to the huge number
of particles. Moreover, knowing the coordinates and impulses of all the particles gives no complete information about the properties of the system as a whole. This is associated with the fact that in the behaviour of macrosystems qualitatively new statistical regularities arise. Such regularities bear a probabilistic character and are distinct from mechanical laws. Hence it follows that the states of macrosystems should be described by statistical methods. In these methods, the aim is not the exact determination of the microstates of a system but their determination with a certain probability.
Assume that the considered system is a small but macroscopic part of a
large system – the thermostat. A microstate of a system interacting with the thermostat changes chaotically in the course of time, and we cannot exactly
determine the coordinates (q, p) of these states in the phase space. Then the
question can be posed in the following form: What is the probability that
microstates of a system may be found in a small element of volume (dq dp) of
the phase space? To determine this probability, mentally trace the change in
the microstates of a system in the course of a long time interval T . Assume
that over the course of a short time range dt a microstate of the system is
found in the element of volume (dq dp) taken near the point (q, p). If the time
of observation is regarded as very long, the quantity
lim_{T→∞} dt/T = dW  (1.33)
can be taken as the probability of the microstate of the system being found
in the element of volume (dq dp).
It is evident that the probability dW depends on the point (q, p) = (q1 , q2 , . . . , q3N ; p1 , p2 , . . . , p3N ) around which the element of volume (dq dp) = Π_{i=1}^{3N} dqi dpi is taken, and, naturally, the quantity dW ought to be directly proportional to this element of volume (dq dp):
dW = ρ(q, p) dq dp.  (1.34)
The coefficient of proportionality ρ(q, p) is the probability of the microstate being found in a unit volume of the phase space taken near the point (q, p) and is called the distribution function. From the geometrical point of view, ρ(q, p) is the density of phase points corresponding to microstates of the system near the point (q, p) of the phase space, i.e. it characterizes the distribution of microstates of the system in the phase space.
Knowing the distribution function, we can calculate the mean statistical value of any quantity L(q, p) depending on the coordinates and impulses of the particles of the system:
⟨L⟩ρ = ∫ L(q, p) ρ(q, p) dq dp.  (1.35)
Notice that in reality the state of a system is determined by macroscopic
quantities measured experimentally. Knowing the internal structure of the
system, the mean value of the measured quantities with respect to time can
be immediately calculated:
⟨L⟩t = lim_{T→∞} (1/T) ∫_0^T L(q(t), p(t)) dt.  (1.36)
To do this, besides the explicit form of the dependence of the quantity L on q
and p, it is necessary to know the dependences of the coordinates and impulses
of a particle on time, i.e. the functions q = q(t) and p = p(t). And this means solving the system of equations (1.11). Inasmuch as this system is unsolvable in practice, the immediate calculation of ⟨L⟩t by (1.36) is not possible. In order to overcome this difficulty, we assume that the mean value with respect to time (1.36) can be replaced with the statistical mean (1.35):
⟨L⟩t ⇒ ⟨L⟩ρ .  (1.37)
This supposition is called the ergodic hypothesis, and systems satisfying condition (1.37) are called ergodic systems. Henceforth, we will consider only ergodic systems and omit the statistical index ρ, i.e. ⟨L⟩ρ = ⟨L⟩.
An advantage of calculation of the statistical mean value (1.35) lies in the
fact that in this case it is suﬃcient to know dependences of the quantity L
on q and p, i.e. it is not necessary to know the time dependences of q and p.
The explicit form of the function L(q, p) for diﬀerent systems can be found
in classical mechanics. In a particular case, L can be the Hamilton function H = E(q, p). However, as seen from (1.35), in order to find ⟨L⟩, besides L(q, p) it is necessary to know the distribution function ρ(q, p). Finding the explicit form of this function is the basic task of statistical physics. It is necessary to find a distribution function such that the mean statistical value calculated with its aid coincides with the mean value with respect to time, i.e. condition (1.37) is satisfied.
Properties of the distribution function. In order to ﬁnd the explicit form
of the distribution function, note its following properties.
1. The distribution function ought to satisfy the normalization condition. It is evident that if we integrate expression (1.34) over the whole phase space, we get the normalization condition
∫ ρ(q, p) dq dp = 1,  (1.38)
which means that finding a microstate of the system at some point or other of the phase space is a certain event.
2. To define the second property of the distribution function, we introduce the concept of statistically independent subsystems. Assume that the considered system consists of two macroscopic subsystems. It is evident that in the interaction of these subsystems mainly the particles found near the boundary participate, and their number is much smaller than the total number of particles in the subsystems. Therefore, over a time range that is less than the relaxation time, these subsystems can be considered as independent, i.e. the change in the state of one subsystem in the given time interval does not influence the state of the other subsystem. Subsystems satisfying this condition are statistically independent.
Let the elements of volume of the phase space of the considered subsystems be denoted by dq (1) dp(1) and dq (2) dp(2) , respectively. Then the probabilities of microstates of the subsystems being found in these elements of volume have the form
dW (1) = ρ1 dq (1) dp(1) ;  dW (2) = ρ2 dq (2) dp(2) .  (1.39)
It is evident that the distribution function ρ1 depends on coordinates and
impulses of particles of the ﬁrst subsystem, and ρ2 depends on coordinates
and impulses of particles of the second subsystem.
Mathematically, statistical independence means that the probability dW = ρ dq dp of the microstate of the system consisting of two subsystems being found in the element of volume dq dp = dq (1) dp(1) dq (2) dp(2) ought to equal the product of the probability of the microstate of the first subsystem being found in the element of volume dq (1) dp(1) and the probability of the microstate of the second subsystem being found in the element of volume dq (2) dp(2) , i.e. dW = dW (1) · dW (2) . In the explicit form,
ρ dq dp = ρ1 dq (1) dp(1) · ρ2 dq (2) dp(2) ,  (1.40)
or
ρ = ρ1 · ρ2 .  (1.41)
In the general case of a large system consisting of n0 subsystems, equality (1.41) takes the form
ρ = ρ1 · ρ2 · . . . · ρn0 = Π_{α=1}^{n0} ρα .  (1.42)
Thus, the distribution function of a large system is the product of the distribution functions of the statistically independent subsystems forming the
large system. The converse is also true: if the distribution function of the
whole system can be presented in the form of the product of the distribution functions of individual subsystems, these subsystems are statistically
independent.
If we take the logarithm of the equality (1.42), we can obtain the second
important property of the distribution function:
ln ρ = Σ_{α=1}^{n0} ln ρα .  (1.43)
This means that the logarithm of the distribution function of a large system
equals the sum of the logarithms of the distribution functions of the individual
subsystems; in other words, the logarithm of the distribution function of the
system is an additive quantity.
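Properties (1.41) and (1.43) can be illustrated with two small discrete distributions standing in for the subsystem distribution functions (a Python sketch; the probability tables are arbitrary illustrative choices): the joint distribution built as a product is normalized, and its logarithm is the sum of the subsystem logarithms:

```python
import math

# Sketch of statistical independence, (1.41) and (1.43), with discrete
# "subsystem" distributions (illustrative probability tables).
rho1 = {0: 0.2, 1: 0.8}               # subsystem 1
rho2 = {0: 0.5, 1: 0.3, 2: 0.2}       # subsystem 2

# joint distribution of the composite system: rho = rho1 * rho2, (1.41)
rho = {(a, b): rho1[a] * rho2[b] for a in rho1 for b in rho2}

print(sum(rho.values()))              # normalized: 1 up to rounding
a, b = 1, 2                           # ln rho = ln rho1 + ln rho2, (1.43)
print(math.log(rho[(a, b)]), math.log(rho1[a]) + math.log(rho2[b]))
```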
3. Liouville theorem. The third property of the distribution function stems
from this theorem, according to which the distribution function is constant along the phase trajectory, i.e. ρ(q, p) = const. This is one of the
definitions of the Liouville theorem. To prove this theorem, let us mentally follow the microstates of the given subsystem for an extended time and subdivide the time of observation into very small identical intervals. Imagine that the phase points A1 , A2 , A3 , . . . , An correspond to microstates of the subsystem at the instants of time t1 , t2 , t3 , . . . , tn in the phase space.
Now assume that each phase point A1 , A2 , A3 , . . . , An corresponds to a
microstate of one of the subsystems at the instant of time t. It is evident
that the multitude of subsystems mentally formed in this way is a multitude
of states of identical subsystems (with identical Hamilton function) and is
called the Gibbs statistical ensemble. The number of subsystems n entering
into this ensemble ought to be very large.
The microstate of the statistical ensemble, i.e. the positions of the phase points A1 , A2 , . . . , An , changes in the course of time, and at the instant of time t′ it is described by the multitude of phase points A′1 , A′2 , . . . , A′n :
(A1 , A2 , . . . , An )t ⇒ (A′1 , A′2 , . . . , A′n )t′ .  (1.44)
If Δn phase points occupy an element of the phase volume ΔΓ, by deﬁnition
of the distribution function it can be written as
Δn = ρ(q, p, t) ΔΓ.  (1.45)
For the preassigned ensemble, phase points do not disappear or arise. Therefore, the distribution function ρ(q, p, t) in the phase space ought to satisfy
the continuity equation. In order to write the continuity equation in the 6N-dimensional phase space, first remember its form in the usual three-dimensional space:
∂ρ/∂t + div(ρυ) = 0.  (1.46)
Here ρ(x, y, z, t) and υ(x, y, z, t) are the density and velocity of the ﬂow of
points at the instant of time t, respectively. Equation (1.46) actually is the
law of conservation of substance and can be written down in the form
∂ρ/∂t + υ grad ρ + ρ div υ = 0.  (1.47)
If we take into account that the sum of the ﬁrst two terms is a total derivative
of the function ρ with respect to time, (1.47) takes the form:
dρ/dt + ρ div υ = 0.  (1.48)
Then, by analogy with (1.48), for the continuity equation in the 6N-dimensional phase space we have
dρ(q, p, t)/dt + ρ Div V = 0,  (1.49)
where V is the vector of "velocity" in the 6N-dimensional phase space with components q̇1 , q̇2 , . . . , q̇3N ; ṗ1 , ṗ2 , . . . , ṗ3N ; dρ/dt is the rate of change of the density of phase points around the point (q, p); and Div V is the symbol of divergence in the 6N-dimensional phase space:
Div V = Σ_{i=1}^{3N} (∂q̇i/∂qi + ∂ṗi/∂pi).  (1.50)
Taking into account the Hamilton canonical equations (1.11), we get
Σ_{i=1}^{3N} (∂q̇i/∂qi + ∂ṗi/∂pi) = Σ_{i=1}^{3N} (∂²H/∂qi ∂pi − ∂²H/∂pi ∂qi) = 0.  (1.51)
Thus, for the 6N-dimensional phase space
Div V = 0.  (1.52)
From (1.49) and (1.52), it is seen that the total derivative of the distribution function equals zero:
dρ(q, p, t)/dt = 0,  (1.53)
i.e. ρ is constant along the phase trajectory:
ρ(q, p, t) = const.  (1.54)
When proving this property of the distribution function, we used the equations of motion (1.11) describing the phase trajectory. Therefore, in conformity with the Liouville theorem (1.54), the third property of the distribution function can be formulated as follows: The distribution function along the phase trajectory does not depend on time, i.e. it is an integral of the motion. Note that ρ(q, p, t) remains constant along the phase trajectory even though the coordinates q(t) and impulses p(t) strongly depend on time, and their changes are described by a solution of (1.11).
Besides the above, one more equivalent deﬁnition of the Liouville theorem
can be given; it follows from expression (1.45) and condition (1.54): an element
of volume ΔΓ of the phase space occupied by phase points for the preassigned
part (Δn = const) of the ensemble does not depend on time, i.e. ΔΓ = const.
Thus, another deﬁnition of the Liouville theorem can be formulated thus:
The phase volume occupied by phase points corresponding to microstates of
the statistical ensemble is conserved, i.e.
ΔΓ = ΔΓ′, (1.55)
where ΔΓ and ΔΓ′ are elements of volume of the phase space occupied by phase points of the ensemble at the instants of time t and t′, respectively.
Consider the general conclusions stemming from the indicated properties of the distribution function. As is seen from (1.35), to calculate the mean statistical value of physical quantities it is necessary to know two functions. The explicit form of L(q, p) is found in classical mechanics. The distribution function ρ(q, p) is determined in statistical physics. It is clear that a universal form valid for all systems can be found for neither L(q, p) nor ρ(q, p). However, using the above-indicated properties of the distribution function, we can determine the general form applicable to any system.
According to the third property stemming from the Liouville theorem
(1.54), the distribution function ρ(q, p) along the trajectory is constant,
though its arguments q and p substantially depend on time. This means that
the distribution function ρ can depend on coordinates q and impulses p only
in terms of mechanical invariants I(q, p) – integrals of the motion:
ρ(q, p) = ρ(I(q, p)). (1.56)
From classical mechanics, it is known that systems have seven additive integrals of motion: the total internal energy of a system E(q, p); components
of the impulse Px , Py , Pz ; and components of the moment of the impulse
Mx , My , Mz of the system as a whole. Note that the frame of reference can be
so connected with the body (the system) that in the given frame of reference
P and M would be equal to zero. Then, in this frame of reference the only
mechanical invariant – the total internal energy E(q, p) – remains, and as a
result dependence (1.56) can be written down in the form
ρ(q, p) = ρ(E(q, p)). (1.57)
Such a dependence of the distribution function ρ(q, p) indicates the exclusively
important role of the internal energy E(q, p) in statistical physics.
Thus, we get the most general and fundamental property of the distribution function: The distribution function ρ(q, p) depends only on the total
internal energy of a system E(q, p).
And what is the explicit form of this dependence? A universal answer to this question, valid for any system, does not exist; it must be sought for each concrete system.
Assume that the considered system consists of several subsystems. Then,
taking into account the condition of additivity of energy (1.2) and ln ρ (1.43),
we see that the logarithm of the distribution function of any α-subsystem can
depend on its energy Eα as follows:
ln ρα = Aα + βEα(q, p). (1.58)
The constants Aα and β here are found from specified conditions. It is evident that the coefficient β ought not to depend on the number of the subsystem and must be identical for all subsystems. Only in this case do ln ρ(q, p) for the whole system and, consequently, the distribution function ρ(q, p) itself satisfy condition (1.57), i.e. ρ(q, p) depends only on the total internal energy of the system: E = Σ_α Eα.
It should be noted that just the distribution function of a system found in a thermostat, i.e. the Gibbs canonical distribution (see Chap. 4), has such a general appearance as (1.58), or
ρα = exp(Aα + βEα(q, p)). (1.59)
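The role of the common coefficient β can be illustrated with a tiny computation (toy numbers, not from the book): with one β shared by all subsystems, the product of the subsystem distributions exp(Aα + βEα) is sensitive to the individual energies only through their sum E.

```python
import math

# Toy check: a common beta makes the product of subsystem distributions
# rho_a = exp(A_a + beta*E_a) a function of the total energy E alone.
# The constants A_a, beta and the energy splittings are assumed/illustrative.
beta = -0.5
A = [0.1, 0.2, 0.3]

def rho_total(E_parts):
    return math.prod(math.exp(a + beta * e) for a, e in zip(A, E_parts))

# two different splittings of the same total energy E = 6.0
r1 = rho_total([1.0, 2.0, 3.0])
r2 = rho_total([3.0, 2.0, 1.0])
print(r1, r2)   # equal: rho depends only on E = sum of the E_a
```
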
1.4 Microcanonical Distribution:
Basic Postulate of Statistical Physics
The microcanonical distribution is concerned with completely isolated systems. Owing to the absence of any interaction with the surrounding medium (ΔA = 0, ΔQ = 0, ΔN = 0), its energy remains fixed at E0 = const, i.e. in whatever microstate the system is found, its total internal energy is
E(q, p) = E0. (1.60)
Microstates satisfying this condition are called possible states. In the abstract
phase space, (1.60) is an equation of a speciﬁed “hypersurface”. Energy of all
microstates corresponding to phase points on this surface is identical, and
E0 = const. Therefore this hypersurface is called the isoenergetic surface. It
is evident that the system cannot be found in microstates to which phase
points outside this surface E(q, p) = E0 correspond. These states are called
impossible states.
The explicit form of the distribution function for completely isolated classical systems that are found in the thermodynamic equilibrium is determined
on the basis of the postulate of statistical physics.
The basic postulate of statistical physics is as follows: If a completely isolated system is found in the thermodynamic equilibrium state, the probability
of it being found in any possible microstate is identical, i.e. preference can be
given to none of the possible states.
Mathematically, this postulate can be expressed in the form
ρ(q, p) = { C = const, at E(q, p) = E0; 0, at E(q, p) ≠ E0. (1.61)
For the distribution function, expression (1.61) has the appearance
ρ(q, p) = Cδ(E(q, p) − E0). (1.62)
This distribution function is called the microcanonical distribution. The constant C is determined from the normalization condition of the distribution
function (1.38):
∫ρ(q, p)dq dp = C ∫δ(E(q, p) − E0)dq dp = 1. (1.63)
Recall that the δ-function entering into expression (1.62) has the following properties:
1. ∫δ(x − a)dx = 1,
2. ∫f(x)δ(x − a)dx = f(a), (1.64)
3. ∫f(x)δ[φ(x)]dx = Σ_s f(xs)/|φ′(xs)|,
where xs are the roots of the equation φ(xs) = 0, and a is an arbitrary constant.
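These properties are easy to verify numerically by approximating the δ-function with a narrow Gaussian. The sketch below (illustrative choices, not from the book: φ(x) = x² − 4 with roots ±2, so |φ′(±2)| = 4, and f(x) = x² + 1) checks the third property:

```python
import numpy as np

# Check integral f(x) delta[phi(x)] dx = sum_s f(x_s)/|phi'(x_s)| with a
# narrow Gaussian as a nascent delta-function. phi and f are toy choices.
eps = 1e-3
x = np.linspace(-3.0, 3.0, 600_001)
dx = x[1] - x[0]
f = x**2 + 1.0
phi = x**2 - 4.0                                   # roots at +/-2, |phi'| = 4
delta = np.exp(-phi**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
lhs = np.sum(f * delta) * dx                       # numerical integral
rhs = 5.0 / 4.0 + 5.0 / 4.0                        # f(2)/4 + f(-2)/4
print(lhs, rhs)
```
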
In order to use the second property of the δ-function (1.64), in (1.63) we pass from the integral over dq dp ≡ dΓ to the integral over dE. Then we get
C ∫δ(E(q, p) − E0)dq dp = C ∫δ(E − E0)(dΓ/dE)dE = 1.
Fig. 1.5. The microcanonical distribution for isolated classical systems
Here
C = 1/(dΓ/dE)E=E0 = 1/Ω(E0) (1.65)
is the normalizing constant.
Hence, it is seen that the quantity Ω(E0 ) = (dΓ/dE )E=E0 represents
the “volume” of the phase space found between two hypersurfaces, diﬀering
from one another by one energetic unit and taken around the isoenergetic
hypersurface E(q, p) = E0 .
As a result, the distribution function for isolated classical systems – the microcanonical distribution – takes the form
ρ(q, p) = (1/Ω(E0)) δ(E(q, p) − E0). (1.66)
The microcanonical distribution is schematically depicted in Fig. 1.5.
Note that (1.66) is a mathematical expression of the basic postulate of statistical physics. This postulate is corroborated by the coincidence of the results obtained with the use of (1.66) with experimental results.
For isolated systems, with the aid of the microcanonical distribution the mean value of any physical quantity depending on energy, L(q, p) = L(E(q, p)), can be computed:
L̄ = (1/Ω(E0)) ∫L(E(q, p)) δ(E(q, p) − E0)dq dp. (1.67)
If with the aid of the replacement
dq dp = dΓ = (dΓ/dE) dE (1.68)
we pass from the integral over dΓ to the integral over dE and take into account (1.65), for the mean value we get
L̄ = L(E0). (1.69)
In the particular case when L(E(q, p)) = E(q, p), for the mean value of the energy of the system we have
Ē(q, p) = E0. (1.70)
1.5 Statistical Description of Quantum Systems:
Statistical Matrix: Liouville Equation
Imagine that the motion of the particles of the considered macroscopic system bears a quantum character. Inasmuch as the number of particles in the system is very large, the purely quantum mechanical description of the system, just as for classical systems, is practically impossible. In fact, for the quantum mechanical description of a system it is necessary to solve (1.16) and find the wave function Ψn, depending on 3N variables – the coordinates of all particles. Furthermore, the wave function Ψn can be used to find the quantum mechanical mean value L̄ of the physical quantity L. The practical impossibility of working with this problem is evident.
Another principal difficulty is associated with the fact that a system cannot be found in a purely stationary quantum state, inasmuch as its energy spectrum is continuous. In nature, a completely isolated system does not exist. Each system to some extent interacts with the surrounding medium, and the energy of this interaction is greater than the difference between energy levels. Therefore, the macroscopic system cannot be found in a stationary state. For this reason, the macroscopic system is found not in a purely quantum state but in a “mixed” state. According to quantum mechanics, in a “mixed” state the system is described not by the stationary wave function but by the density matrix. In statistical physics, it is called the statistical matrix.
The impossibility of a stationary state of a macroscopic system also follows from the uncertainty principle for energy. Indeed, the difference of energies of two adjacent levels ΔE = (En+1 − En) ought to be much greater than the uncertainty of energy:
δE ∼ ℏ/Δt. (1.71)
To fulﬁll this condition, the time of measurement Δt ought to be inﬁnitely
large. In reality, however, Δt is ﬁnite. Therefore, in the range of the uncertainty
of energy δE, several energy levels can be found and, consequently, it cannot
be asserted that the system is found in some speciﬁed stationary state.
Thus, inasmuch as the quantum mechanical description of the system is
impossible, the problem needs to be solved by statistical methods. To do this,
we proceed as follows. Separate one subsystem, which is a small part of a large
system. Suppose that the subsystem does not interact with the surroundings.
Then, we can speak about “the stationary state” of the subsystem. The wave
function of the subsystem in these stationary states is denoted by ψn (q).
Here, q are coordinates of all particles of the system, and n is the totality
of quantum numbers determining its stationary state. Let the energy of this
stationary state be En .
Assume that at a certain instant of time the state of the subsystem is
described by the wave function Ψ(t), which can be expanded into a series
with respect to the orthonormalized wave functions ψn(q):
Ψ(t) = Σ_n Cn ψn(q), (1.72)
where the coefficient Cn depends on time as
Cn ∼ exp(−iEn t/ℏ), (1.73)
and satisfies the normalization condition
Σ_n |Cn|² = 1. (1.74)
The mean value of a physical quantity L corresponding to the operator L̂ is defined as follows:
L̄ = ∫Ψ∗ L̂ Ψ dq. (1.75)
If we use the expansion (1.72), for L̄ we get
L̄ = Σ_{nm} Cn∗ Cm Lnm, (1.76)
where
Lnm = ∫ψn∗(q) L̂ ψm(q) dq (1.77)
is the matrix element of the physical quantity L corresponding to the operator L̂.
If we introduce the notation
Cn∗ Cm ⇒ Wmn, (1.78)
we get the formula to calculate the mean value:
L̄ = Σ_{mn} Wmn Lnm. (1.79)
The totality of quantities Wmn is called the statistical matrix. If we denote the statistical operator corresponding to this matrix by Ŵ, expression (1.79) takes the form
L̄ = Σ_n (Ŵ L̂)nn. (1.80)
Diagonal elements of the statistical matrix Wnn ≡ Wn show the probability of the system being found in the stationary state n. Therefore Wn in quantum statistics corresponds to the distribution function ρ(q, p) in classical statistics:
ρ(q, p) ⇒ Wn, (1.81)
and the normalization condition (1.38) has the appearance
Σ_n Wn = 1. (1.82)
Recall that for a classical system the distribution function ρ(q, p) determines the probability that the system is found in a microstate corresponding to the phase point (q, p). In quantum systems, Wn means the probability that the system is found in a microstate with energy En, which is determined by the totality of quantum numbers n.
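These formulas can be tried out on a toy example (an assumed three-level system and observable, not from the book): build the statistical matrix Wmn = Cn∗Cm from a set of expansion coefficients, compute the mean value both by the trace formula (1.80) and as a direct quantum mechanical expectation value, and check that the diagonal elements Wnn sum to unity as in (1.82).

```python
import numpy as np

# Toy 3-level pure state: W_mn = C_n* C_m, mean value via (1.79)/(1.80).
# The coefficients C and the observable L are illustrative choices.
C = np.array([0.6, 0.8j, 0.0])                 # assumed expansion coefficients
C /= np.linalg.norm(C)                         # normalization (1.74)
W = np.outer(C, C.conj())                      # W[m, n] = C_m C_n*
L = np.array([[1, 1, 0], [1, 2, 0], [0, 0, 3]], dtype=complex)  # Hermitian L

mean_trace = np.trace(W @ L).real              # L-bar = sum_n (W L)_nn
mean_direct = (C.conj() @ L @ C).real          # <Psi| L |Psi>
probs = np.diag(W).real                        # W_nn: occupation probabilities
print(mean_trace, mean_direct, probs.sum())
```
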
For classical systems, the distribution function ρ(q, p) possesses the property stemming from the Liouville theorem: the distribution function is an
integral of the motion and therefore depends only on mechanical invariants
(1.56). In quantum systems, for the statistical matrix Wmn a theorem analogous to the Liouville theorem can be proved. To do this, using (1.73) we write down the derivative of the statistical matrix with respect to time. Then we get
iℏ ∂(Cn∗ Cm)/∂t = (En − Em)Cn∗ Cm, (1.83)
or, if we use notations (1.78),
iℏ ∂Wmn/∂t = (En − Em)Wmn. (1.84)
The right-hand side of this equation can be presented in the form
(En − Em)Wmn = Σ_k (Wmk Hkn − Hmk Wkn). (1.85)
Here Hmn is the matrix element of the Hamilton operator Ĥ. In the energetic representation, Hmn is a diagonal matrix:
Hmn = En δmn. (1.86)
If we take this into account, the form of (1.85) becomes clear.
As a result, (1.84) takes the form
iℏ ∂Wmn/∂t = Σ_k (Wmk Hkn − Hmk Wkn). (1.87)
Equation (1.87) can also be written down in the matrix form, i.e. for the operator of the density matrix Ŵ:
iℏ ∂Ŵ/∂t = (Ŵ Ĥ − Ĥ Ŵ). (1.88)
This equation, as well as (1.87), is called the Liouville equation.
As seen from the Liouville equation, to fulfill the stationarity condition ∂Ŵ/∂t = 0, the operator Ŵ ought to commute with the Hamilton operator Ĥ of the system:
Ŵ Ĥ − Ĥ Ŵ = 0. (1.89)
Physical quantities corresponding to operators that commute with the Hamilton operator are conserved quantities. Therefore, according to (1.89), it can be asserted that the statistical matrix is an integral of the motion. In quantum statistics, this conclusion is an analogue of the Liouville theorem (1.54) in classical statistics.
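This stationarity condition can be checked numerically: a statistical matrix built as a function of the Hamiltonian alone, here W ∝ exp(βH) in the spirit of (1.96), commutes with H to machine precision. The Hamiltonian and β below are random/illustrative choices, not from the book.

```python
import numpy as np

# A statistical matrix that is a function of H alone commutes with H,
# so it is stationary by the quantum Liouville equation (1.88).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                     # random Hermitian "Hamiltonian"

E, U = np.linalg.eigh(H)                     # energetic representation
beta = -0.7                                  # illustrative constant
W = U @ np.diag(np.exp(beta * E)) @ U.conj().T
W /= np.trace(W).real                        # normalization, Tr W = 1

comm = W @ H - H @ W                         # [W, H], should vanish
print(np.linalg.norm(comm))                  # ~ 0 up to round-off
```
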
From the energetic representation of the statistical matrix, one more of its properties follows. Indeed, as seen from the Liouville equation (1.84), to fulfill the stationarity condition (∂Wmn/∂t = 0) the following ought to take place:
(En − Em)Wmn = 0. (1.90)
And to fulﬁll this condition, the matrix element Wmn ought to be diagonal:
Wmn = Wn δmn. (1.91)
With regard to (1.91), the formula for the mean value (1.79) takes the form
L̄ = Σ_n Wn Lnn. (1.92)
As can be seen, to calculate the mean value of any physical quantity L it is sufficient to know the distribution function Wn and only the diagonal elements of the matrix, Lnn. For the considered system, Lnn is found from quantum mechanics, while finding the explicit form of the distribution function Wn is the task of statistical physics. Naturally, a universal expression for Wn applicable to any system does not exist. However, as is known, in quantum statistics the Liouville theorem is fulfilled too, i.e. Wn is a conserved quantity. This means that the dependence of Wn on the totality of quantum numbers n is expressed through conserved quantities, namely, through En:
Wn = W(En). (1.93)
This property is an analogue of the property of the distribution function (1.61)
in classical statistics.
The explicit form of the function W(En) is different for different physical systems. Assume that the considered system with energy E = En consists of statistically independent subsystems. If we denote the energy of the α-subsystem by Enα, the energy of the complete system is
En = Σ_α Enα, (1.94)
where nα is the totality of quantum numbers determining the state of the α-subsystem, and n is the totality of quantum numbers determining the state of the whole system, i.e. n ⇒ n1, n2, . . . , nα.
Inasmuch as the subsystems are statistically independent, the distribution function Wn ought to possess the property analogous to the property of ρ(q, p) in classical statistics (1.43):
ln Wn = Σ_α ln Wnα, (1.95)
i.e. in the quantum case the logarithm of the distribution function is an additive quantity, too. Then, the logarithm of the distribution function can be
presented in the form
ln W(Enα) = Aα + βEnα, (1.96)
where Enα is the energy of the α-subsystem; Aα is a constant, which is found from
the normalization condition and depends on the number of the subsystem; and
the coeﬃcient β ought not to depend on the number of the subsystem, since
only in this case conditions (1.95) and (1.94) are fulﬁlled at the same time.
Notice that the canonical distribution for systems in the thermostat has
the same appearance as (1.96).
Consider the microcanonical distribution for isolated quantum systems
with the preassigned energy E = E0 = const. As noted above, the energy
spectrum of macroscopic systems is continuous. Denote by dG the number of
quantum states in an inﬁnitely small range of energy dE taken around the
given value of energy E.
If it is supposed that the system consists of several subsystems, then
dG = Π_α dGα, (1.97)
where dGα is the number of quantum states in an inﬁnitely small range of
energy dE, taken close to the given value of energy Eα of the subsystem with
number α.
Notice that (1.97) corresponds to the relationship
dΓ = Π_α dΓα (1.98)
for the classical case, which means that an element of volume of the phase
space of the whole system equals the product of elements of volumes of the
phase space of individual subsystems.
For an isolated system, the dG quantum states falling in the range of energy dE can be considered as possible states. According to the basic postulate of statistical physics, the probability that the system is found in any possible microstate is identical, i.e. preference can be given to none of them. On the other hand, the probability dW that the system is found in any of the dG states ought to be proportional to the number dG. Then it can be written as
dW = const δ(E − E0) Π_α dGα. (1.99)
Equation (1.99) is called the microcanonical distribution for quantum systems. Here the δ(E − E0) function shows the isolatedness of the system, and
E = Σ_α Eα (1.100)
is the total energy of the system.
1.6 Entropy and Statistical Weight
We introduce one of the basic concepts of thermodynamics and statistical physics – the entropy of a system. Entropy, as well as energy, is a function of state, i.e. it is determined by the macroscopic state of a system.
At ﬁrst, consider the concept of the statistical weight of a system, which
is closely associated with entropy. To do this, suppose that the considered
quantum system is found in the thermodynamic equilibrium state. Subdivide
this system into a multitude of subsystems. Let n be the totality of quantum
numbers determining a microstate of any subsystem with energy En , and
Wn = W(En) be the probability that the subsystem is found in the given microstate. We now pass from the distribution over microstates W(En) to the distribution over energy w(E). It is known that the energy spectrum of a macroscopic system
energy w(E). It is known that the energy spectrum of a macroscopic system
is almost continuous, and therefore a multitude of energy levels corresponding
to the quantum states accounts for a suﬃciently small range of energies. In
order to ﬁnd the probability w(E)dE of the system found in the state with
energy in the range of E and E + dE taken close to E, it is necessary to
multiply the function W (E) by the number of quantum states (microstates)
accounting for the range of energy dE.
If we take into account that the number of these microstates is
dG(E) = (dG(E)/dE) dE = g(E)dE, (1.101)
the distribution function over energies takes the form
w(E) = g(E)W(E). (1.102)
Here
g(E) = dG(E)/dE (1.103)
is the function of the density of quantum states, i.e. the number of quantum states per unit range of energy taken close to E, and G(E) is the total number of all quantum states with energy less than E.
Even without knowing the explicit form of the distribution function w(E), it can be asserted that a subsystem in thermodynamic equilibrium ought to be found most of the time in states close to the mean value of energy Ē.
Fig. 1.6. The distribution function over energy
Therefore, the distribution function w(E) over energy ought to have a sharp maximum at E = Ē (Fig. 1.6).
According to the normalization condition,
∫w(E)dE = 1. (1.104)
Geometrically, this means that the area under the curve w(E) ought to equal unity.
If the curve depicted in Fig. 1.6 is approximately replaced by a rectangle with height w(Ē) and width ΔE, condition (1.104) can be presented in the form
w(Ē)ΔE = 1, (1.105)
where ΔE is called the width of the distribution curve over energy.
Taking into account the distribution function (1.102) in (1.105), we get
W(Ē)ΔG = 1. (1.106)
Here,
ΔG = g(Ē)ΔE (1.107)
is the number of microstates falling within the range of energies ΔE of the subsystem and is called the statistical weight of a macrostate of the system with energy E = Ē.
The statistical weight ΔG shows the number of microstates corresponding
to one preassigned macrostate of the system. Therefore, ΔG characterizes
the degree of the internal chaoticity of the system.
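The sharpness of w(E), and hence the smallness of the relative width ΔE entering the statistical weight, can be illustrated by a small Monte Carlo experiment (not from the book; exponentially distributed subsystem energies are an arbitrary toy choice, and any distribution with finite variance behaves similarly): the relative width of the total energy of N independent subsystems falls off as 1/√N.

```python
import random
import statistics

# Relative width of the total energy of N independent subsystems ~ 1/sqrt(N):
# the reason w(E) is sharply peaked for macroscopic N.
random.seed(1)

def rel_width(N, samples=4000):
    totals = [sum(random.expovariate(1.0) for _ in range(N))
              for _ in range(samples)]
    return statistics.stdev(totals) / statistics.mean(totals)

w10, w1000 = rel_width(10), rel_width(1000)
print(w10, w1000)   # roughly 1/sqrt(10) and 1/sqrt(1000)
```
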
The statistical weight of a closed system, in conformity with (1.97), can be presented as the product of the statistical weights of the individual subsystems:
ΔG = Π_α ΔGα, (1.108)
where ΔGα = ΔG(Ēα) is the statistical weight of the α-subsystem.
In statistical physics, apart from the statistical weight, a more convenient function, also characterizing the degree of chaoticity of a system, is accepted. This function is defined as the logarithm of the statistical weight:
S = k0 ln ΔG (1.109)
and is called the entropy of a system. Here k0 is the Boltzmann constant. As is seen, entropy cannot be negative, i.e. S ≥ 0, since the statistical weight ΔG ≥ 1. Note that the entropy of a system, like its energy, in conformity with (1.108) and (1.109), possesses the property of additivity, i.e.
S = Σ_α Sα. (1.110)
Here, Sα = k0 ln ΔGα is the entropy of the α-subsystem.
Thus, it can be asserted that entropy is a function of the state of a macroscopic system and characterizes the degree of its internal chaoticity; entropy has only a statistical sense; one cannot speak of the entropy of a separate particle.
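The additivity (1.110) is just the logarithm turning the product (1.108) into a sum. A two-line check with assumed subsystem statistical weights (toy numbers, not from the book):

```python
import math

k0 = 1.380649e-23                   # Boltzmann constant, J/K

# DeltaG of the composite system is the product of subsystem weights (1.108),
# so S = k0 ln(DeltaG) is the sum of subsystem entropies (1.110).
dG = [10**6, 10**9, 10**4]          # assumed subsystem statistical weights
S_total = k0 * math.log(math.prod(dG))
S_sum = sum(k0 * math.log(g) for g in dG)
print(S_total, S_sum)               # equal: S = sum_a S_a
```
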
If we take into account (1.106) in (1.109), entropy can be expressed by the distribution function:
S = −k0 ln W(Ē). (1.111)
Inasmuch as, according to (1.96), the logarithm of the distribution function is a linear function of energy, ln W(Ē) can be replaced with the mean value ⟨ln W(E)⟩, i.e.
ln W(Ē) = ⟨ln W(E)⟩. (1.112)
Then, the expression of entropy (1.111) takes the form
S = −k0 Σ_n Wn ln Wn. (1.113)
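Formula (1.113) is easy to experiment with. The sketch below (illustrative probabilities, entropy measured in units of k0) also shows its connection with the basic postulate of Sect. 1.4: among distributions over n possible states, the uniform one, where no state is preferred, has the largest entropy, S = ln n.

```python
import math

def S(probs):                       # entropy in units of k0, eq. (1.113)
    return -sum(w * math.log(w) for w in probs if w > 0)

n = 8
uniform = [1.0 / n] * n             # microcanonical: all possible states equal
skewed = [0.5, 0.3, 0.1, 0.05, 0.025, 0.0125, 0.0125, 0.0]  # toy alternative
print(S(uniform), math.log(n), S(skewed))
```
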
We now consider classical systems in the quasiclassical approximation. In this case, using the normalization condition for classical systems consisting of N particles,
∫ρ(q, p)dΓ = 1, (1.114)
where dΓ = dq dp = Π_{i=1}^{3N} dqi dpi is an element of volume of the phase space, we can pass from the distribution over microstates ρ(q, p) to the distribution over energies ρ(E). To do this, rewrite the condition (1.114) in the form
∫ρ(E(q, p)) (dΓ/dE) dE = 1, (1.115)
or
∫ρ(E(q, p))(2πℏ)3N g0(E)dE = 1. (1.116)
Here,
g0(E) = (2πℏ)−3N dΓ/dE (1.117)
is the function of the density of states of quasiclassical systems.
As is seen from (1.116),
ρ(E) = (2πℏ)3N ρ(E(q, p))g0(E) (1.118)
is the distribution function over energy. Then condition (1.116) takes the form
∫ρ(E)dE = 1. (1.119)
Taking into account that ρ(E) takes on its maximum value at E = Ē, (1.119) can be roughly presented as
ρ(Ē)ΔE = 1. (1.120)
Here ΔE is the width of the distribution curve (Fig. 1.6). If we substitute (1.118) into (1.120), we get
(2πℏ)3N ρ(Ē(q, p))ΔG = 1, (1.121)
where
ΔG = g0(Ē)ΔE (1.122)
is the number of microstates falling within the range of energy ΔE taken close to the energy E = Ē in the quasiclassical case, i.e. the statistical weight of the macrostate with energy E = Ē. Then the entropy can be presented in the form
S = k0 ln ΔG = k0 ln[g0(Ē)ΔE]. (1.123)
If the value of g0(E) from (1.117) is taken into account, for the quasiclassical case the entropy can be presented as
S = k0 ln[ΔqΔp/(2πℏ)3N]. (1.124)
In the quasiclassical case, entropy can also be expressed by the distribution function over microstates. To do this, take into account (1.121) and (1.123). Then we get
S = −k0 ln[(2πℏ)3N ρ(Ē(q, p))]. (1.125)
On the basis of property (1.58) of the distribution function, we can replace
ln ρ(Ē(q, p)) = ⟨ln ρ(E(q, p))⟩. (1.126)
As a result, for the entropy we get
S = −k0 ∫ρ(q, p) ln[(2πℏ)3N ρ(q, p)] dq dp. (1.127)
From the additivity of entropy (1.110), one more of its properties stems. If we divide the width of the distribution function ΔE (Fig. 1.6) by the number of energy levels ΔG in this range, we get the distance between adjacent energy levels:
D(Ē) = ΔE/ΔG = ΔE e−S(Ē)/k0. (1.128)
From the property of additivity, it follows that the greater the amount of substance (the number of particles N) in a system, the greater is the entropy of the system S(Ē) and the denser are the energy levels. Thus, with an increasing amount of substance in a system, the distance between adjacent energy levels exponentially decreases.
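The exponential collapse of the level spacing in (1.128) can be made concrete with toy numbers (purely for illustration, take S/k0 ≈ N, in line with the additivity of entropy; ΔE is set to an arbitrary unit):

```python
import math

# Scaling sketch of (1.128): D = DeltaE * exp(-S/k0), with S/k0 ~ N assumed.
dE = 1.0                                       # width DeltaE, arbitrary units
spacing = {N: dE * math.exp(-N) for N in (10, 100, 1000)}
print(spacing)                                 # falls off exponentially with N
```

For N ≈ 10²³ the spacing is unimaginably small, which is why the energy spectrum of a macroscopic body is treated as continuous.
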
In conclusion, once more recall the basic properties of entropy:
1. Entropy is a function of state and characterizes the degree of a system's internal chaoticity. The greater the entropy, the greater the chaoticity; and vice versa, the smaller the entropy, the more ordered the system.
2. Entropy has only a statistical sense; it cannot be spoken of for a separate
particle.
3. Entropy cannot be negative, i.e. S ≥ 0.
4. Entropy is an additive quantity, i.e. S = Σ_α Sα.
5. Entropy characterizes the density of levels in the energy spectrum of a system. The greater the entropy, the denser are the energy levels.
6. Entropy of a system in the thermodynamic equilibrium state takes on a
maximum value. This property follows from the law of increasing entropy,
which is further discussed in Sect. 1.7 below.
1.7 Law of Increasing Entropy:
Reversible and Irreversible Processes
In Sect. 1.6, we introduced the concept of entropy of an isolated system found
in thermodynamic equilibrium. The question arises whether entropy can be
spoken of for a system that is not in thermodynamic equilibrium. A positive
answer to this question can be given, i.e. the concept of entropy for thermodynamic nonequilibrium systems can also be introduced. To substantiate this,
imagine that the considered closed system is not found in the thermodynamic
equilibrium state, and its relaxation time is τ . If we study the system in the
time range Δt < τ , it is evident that the system is in the nonequilibrium state
(Fig. 1.7a) and, naturally, we cannot speak of a speciﬁed value of entropy of
the system. However, if we subdivide the considered system into small subsystems, the relaxation time τα⁷ of each of them (say, the one with number α) will
be less than the time of observation Δt, i.e. τα < Δt (Fig. 1.7b). As can be
seen from the ﬁgure, for a very small time τα the subsystem passes onto its
local equilibrium state (the quantity Lα tends to its local equilibrium value
L0α ). And it can be said that for all the time of observation Δt, the subsystem
⁷ With a decrease in the dimensions of the subsystem, its relaxation time decreases.
Fig. 1.7. The relaxation of the thermodynamical parameter: (a) for the system; (b) for a subsystem
is found in this local equilibrium state. Consequently, the concept of entropy Sα = S(Eα) can be introduced for each subsystem found in the local equilibrium state. Inasmuch as entropy is an additive quantity, we can speak of the instantaneous value of entropy of a large nonequilibrium system, having defined it as S(t) = Σ_α Sα(Eα).
According to the ﬁrst postulate of thermodynamics, in the course of time
an isolated nonequilibrium system ought to pass into the equilibrium state.
The question arises as to how entropy of the system S(t) changes as a result
of this process.
In order to answer this question, we use the microcanonical distribution function (1.99) for isolated systems with energy E0 and pass from the distribution over microstates to the distribution over energy. Then, (1.99) takes the form
dW = const δ(E − E0) Π_α (dGα/dEα) dEα. (1.129)
If we replace the derivative dGα/dEα in this distribution by the ratio ΔGα/ΔEα and make use of the expression for the entropy of the α-subsystem stemming from definition (1.109),
ΔGα = exp[Sα(Eα)/k0], (1.130)
we get
dW = const δ(E − E0) eS/k0 Π_α (dEα/ΔEα), (1.131)
where S = Σ_α Sα(Eα) is the entropy of the isolated system. Inasmuch as the
range of energy ΔEα very weakly depends on energy compared with the factor
eS/k0 , it can be regarded as constant and the distribution function over energy
(1.131) takes the form
dW = const δ(E − E0) eS/k0 Π_α dEα. (1.132)
The obtained distribution function (1.132) gives the probability that the subsystems are found in states with energies E1, E2, . . . , Eα, . . . within the ranges dE1, dE2, . . . , dEα, . . ., respectively. Here the δ(E − E0) function provides the isolatedness of the system, i.e. E = Σ_α Eα = E0 = const.
As seen from (1.132), the distribution function over energies of an isolated system, having the above-indicated physical sense, depends very strongly (exponentially) on the entropy of the system, S = S(E1, E2, . . . , Eα, . . .). Therefore, the greater the probability of the considered macrostate of the system, the higher is the entropy of this state. It is known that the probability of an isolated system being found in the thermodynamic equilibrium state is maximal. In this state, the energy of each subsystem ought to be equal to its mean value, Eα = Ēα. Thus, the entropy of an isolated system in the thermodynamic equilibrium state has the maximum value:
S(Ē1, Ē2, . . . , Ēα, . . .) = Smax. (1.133)
Conclusion: If an isolated system at a certain instant of time is not found in the thermodynamic equilibrium state, in the course of time internal processes proceed in such a direction that the system comes to its equilibrium state, and at that the entropy of the system increases, reaching its maximum value. This assertion is called the Law of Increasing Entropy or the Second Law of Thermodynamics. The law in this form was formulated by Clausius in 1865, and statistically substantiated by Boltzmann in 1870.
Note that on a system such a process can be performed in which each macroscopic state is in thermodynamic equilibrium and the entropy does not change. Taking this case into account as well, the law of increasing entropy in the general form can be formulated as follows: The entropy of an isolated system never decreases; it either increases or, in the particular case, remains constant.
In conformity with this, processes proceeding in all macroscopic systems
can be subdivided into two groups (Fig. 1.8):
dS/dt > 0 – irreversible processes,
dS/dt = 0 – reversible processes. (1.134)
Fig. 1.8. The time dependence of entropy for irreversible and reversible processes
Irreversible processes cannot proceed in the reverse direction, inasmuch as in that case the entropy would decrease, which contradicts the law of increasing entropy. In nature, we frequently come across irreversible processes. Diffusion, thermal conductivity, expansion of a gas, etc. can serve as examples of irreversible processes.
A reversible process can proceed in the direct and reverse directions. In this case, the system passes through the same equilibrium macroscopic states, since the entropy does not change in either direction. It can be said that reversible processes do not exist in nature; they can only be realized approximately, by artificial means. Compression or expansion of a gas under a piston in a cylinder is one example of a simple adiabatic process.
Processes proceeding sufficiently slowly in adiabatically isolated (ΔQ = 0) systems are called adiabatic; an adiabatic process is a reversible process. We can show that in such processes the entropy does not change, dS/dt = 0, i.e. the process is reversible. To do this, consider the simplest case: a gas under a piston in an adiabatically isolated cylinder (Fig. 1.9).
As the external parameter, take the volume of the gas under the piston,
which for the given cylinder is determined by the height l. By changing this
height, we can increase or decrease the volume of the gas. Inasmuch as the
change in entropy with time is related to the change in volume, we can write
$$\frac{dS}{dt} = \frac{dS}{dl}\cdot\frac{dl}{dt}. \qquad (1.135)$$
Suppose that the piston moves sufficiently slowly; then the change in entropy with time can be expanded into a series in powers of $\dot{l} = dl/dt$:
$$\frac{dS}{dt} = A_0 + A_1\frac{dl}{dt} + A_2\left(\frac{dl}{dt}\right)^2 + \cdots. \qquad (1.136)$$
The constants $A_0$ and $A_1$ in this series ought to be equal to zero. The constant $A_0$ equals zero because at $\dot{l} = 0$ the state of the system does not change and the entropy remains constant, i.e. dS/dt = 0; $A_1$ equals zero because when the sign of $\dot{l}$ changes (the piston moving down or up), dS/dt would change its sign, which contradicts the law of increasing entropy (dS/dt ≥ 0). Thus,
$$\frac{dS}{dt} = A_2\left(\frac{dl}{dt}\right)^2. \qquad (1.137)$$

Fig. 1.9. An adiabatic process
If we take into account (1.135), we get
$$\frac{dS}{dl} = A_2\,\frac{dl}{dt}. \qquad (1.138)$$
Hence it is seen that as the velocity of the piston tends to zero ($\dot{l} \to 0$), the change in entropy of the system with respect to the external parameter l tends to zero: dS/dl = 0, i.e. when the change in volume of the adiabatically isolated gas is slow, the processes of expansion and compression are adiabatically reversible.
Thus, we have seen that the processes of expansion or compression are reversible (dS/dl = 0) if the piston moves with a very small velocity. The following question then arises: relative to which velocity ought the velocity of the piston to be small? To answer this question, consider the process of compression or expansion of the gas. If the piston moves down ($\dot{l} < 0$), the instantaneous density of the gas immediately under the piston increases. The velocity of the piston ought to be such that at any instant of time, i.e. at any position of the piston, the density of the gas is identical everywhere, i.e. the gas is in thermodynamic equilibrium. An analogous situation arises during the upward motion of the piston ($\dot{l} > 0$): as the piston moves up, the gas ought to fill up the rarefied region forming under the piston, so that the density is identical everywhere and equilibrium is maintained. During the motion of the piston, the process of regaining a uniform distribution of the gas particles occurs with the speed of sound. Therefore, in order that the process be adiabatic, the velocity of the piston ought to be much less than the speed of propagation of sound $\upsilon_0$ in the gas:
$$\left|\frac{dl}{dt}\right| \ll \upsilon_0. \qquad (1.139)$$
If we take into account that the speed of sound in most gases is on the order of 350 m/s, the piston can be moved with a fairly large velocity while still satisfying condition (1.139), so that the adiabaticity of the process is not violated.
1.8 Absolute Temperature and Pressure:
Basic Thermodynamic Relationship
In Sect. 1.7 we became acquainted with three thermodynamic quantities: volume V , internal energy E(q, p) and entropy S(E). Of these, volume is an
external parameter, and energy and entropy are internal ones. Energy has
both a mechanical and a statistical sense, and entropy has only a statistical
Fig. 1.10. The thermal equilibrium of two subsystems (subsystem 1: $S_1$, $E_1$, $V_1$; subsystem 2: $S_2$, $E_2$, $V_2$; dividing boundary ab)
sense. The change in one of these quantities induces a change in the others. Derivatives of these quantities are also thermodynamic parameters: for
instance, absolute temperature and pressure. We consider them separately.
Absolute temperature. Assume that an isolated system with energy E0 =
const in thermodynamic equilibrium consists of two subsystems with energy,
entropy and volume E1 , S1 , V1 and E2 , S2 , V2 , respectively (Fig. 1.10). As a
consequence of the additivity of energy and entropy, we have
$$E = E_1 + E_2 \qquad (1.140)$$
and
$$S = S_1(V_1, E_1) + S_2(V_2, E_2). \qquad (1.141)$$
Assume that the boundary ab dividing the two subsystems (Fig. 1.10) is immovable; therefore, the volumes $V_1$ and $V_2$ do not change, but the energies $E_1$ and $E_2$, and also the entropies $S_1$ and $S_2$, can change.
Taking into account $E_2 = E - E_1$, notice that the entropy of the isolated system depends on only one independent variable $E_1$, i.e. $S = S(E_1)$. According to the law of increasing entropy, in thermodynamic equilibrium the entropy of an isolated system ought to be maximum. For this, it is necessary to fulfill the condition
$$\frac{\partial S}{\partial E_1} = \left(\frac{\partial S_1}{\partial E_1}\right)_{V_1} + \left(\frac{\partial S_2}{\partial E_2}\right)_{V_2}\frac{\partial E_2}{\partial E_1} = 0. \qquad (1.142)$$
Since $\partial E_2/\partial E_1 = -1$, from (1.142) we get⁸
$$\left(\frac{\partial S_1}{\partial E_1}\right)_{V_1} = \left(\frac{\partial S_2}{\partial E_2}\right)_{V_2}. \qquad (1.143)$$
If an isolated system in thermodynamic equilibrium is subdivided into n arbitrary subsystems, the condition of thermodynamic equilibrium (1.143) in the
general form can be presented as
⁸ Here and everywhere, a quantity indicated at the foot of the bracket when taking the derivative is regarded as constant.
$$\left(\frac{\partial S_1}{\partial E_1}\right)_{V_1} = \left(\frac{\partial S_2}{\partial E_2}\right)_{V_2} = \cdots = \left(\frac{\partial S_n}{\partial E_n}\right)_{V_n}. \qquad (1.144)$$
Hence it is seen that if an isolated system is in thermodynamic equilibrium, there exists the quantity $(\partial S/\partial E)_V$, which is identical in any part of the system. The reciprocal of this quantity is denoted by T:
$$\left(\frac{\partial E}{\partial S}\right)_V = T \qquad (1.145)$$
and is called the absolute temperature or, in brief, temperature. Then the condition of thermodynamic equilibrium, or of maximum entropy, (1.144) takes the form
$$T_1 = T_2 = \cdots = T_n = T. \qquad (1.146)$$
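The chain (1.142)–(1.146) can be illustrated numerically: maximize the total entropy of two subsystems exchanging energy and verify that their temperatures coincide at the maximum. The model entropies $S_i = c_i \ln E_i$ (with $k_0 = 1$), the total energy, and the constants $c_1$, $c_2$ are hypothetical ideal-gas-like choices for illustration, not taken from the text.

```python
import math

# Numerical illustration of (1.144)-(1.146): maximize the total entropy
# S(E1) = S1(E1) + S2(E0 - E1) of two subsystems exchanging energy, and
# check that at the maximum their temperatures T = (dS/dE)^-1 coincide.
# The model entropies S_i = c_i * ln(E_i) (with k0 = 1) are a hypothetical
# ideal-gas-like choice, not from the text.

E0 = 10.0          # fixed total energy of the isolated system
c1, c2 = 3.0, 2.0  # model constants of the two subsystems

def total_entropy(E1):
    return c1 * math.log(E1) + c2 * math.log(E0 - E1)

# Grid search for the entropy maximum over the energy split.
E1_star = max((i * 0.001 for i in range(1, 10000)), key=total_entropy)

# Temperatures from T = (dS/dE)^-1 = E / c for S = c * ln(E).
T1 = E1_star / c1
T2 = (E0 - E1_star) / c2

assert abs(T1 - T2) < 1e-2        # temperatures equalize at the maximum
assert abs(E1_star - 6.0) < 1e-2  # energy splits in proportion c1 : c2
```

At the entropy maximum the energy divides so that $(\partial S_1/\partial E_1) = (\partial S_2/\partial E_2)$, which for this model gives $E_1/c_1 = E_2/c_2$, i.e. equal temperatures.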
If we take into account the definition of entropy (1.109), according to (1.145) temperature is measured in degrees.
If entropy is defined not by expression (1.109) but as S = ln ΔΓ, temperature (1.145) ought to be measured in energy units. Hence it is seen that the Boltzmann constant $k_0$ relates energy to temperature: 1 erg = $k_0$ · 1 deg. From experiment it has been determined that $k_0 = 1.38 \times 10^{-16}$ erg/deg. Thus, the Boltzmann constant $k_0$ is the number of ergs corresponding to one degree.
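The role of $k_0$ as a conversion factor can be sketched in a few lines, using the CGS value quoted in the text; the helper name `energy_to_temperature` is introduced here for illustration only.

```python
# The Boltzmann constant as a conversion factor between energy and
# temperature, in the CGS units used in the text: k0 = 1.38e-16 erg/deg.

K0 = 1.38e-16  # erg per degree

def energy_to_temperature(energy_erg):
    """Temperature (in degrees) corresponding to an energy in ergs."""
    return energy_erg / K0

# One erg corresponds to an enormous temperature, ~7.2e15 degrees:
assert abs(energy_to_temperature(1.0) - 7.246e15) < 1e13
# Conversely, room temperature (~300 deg) corresponds to a tiny energy,
# k0 * T ~ 4.14e-14 erg:
assert abs(300 * K0 - 4.14e-14) < 1e-16
```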
Note some properties of the absolute temperature:
1. In a system which is in thermodynamic equilibrium, the temperature at all
points is identical (1.146).
2. As wit
