Physics:Thermal fluctuations

From HandWiki
Short description: Random temperature-influenced deviations of particles from their average state
Atomic diffusion on the surface of a crystal. The shaking of the atoms is an example of thermal fluctuations. Likewise, thermal fluctuations provide the energy necessary for the atoms to occasionally hop from one site to a neighboring one. For simplicity, the thermal fluctuations of the blue atoms are not shown.

In statistical mechanics, thermal fluctuations are random deviations of an atomic system from its average state that occur in a system at equilibrium.[1] Thermal fluctuations become larger and more frequent as the temperature increases, and likewise they diminish as the temperature approaches absolute zero.

Thermal fluctuations are a basic manifestation of the temperature of systems: A system at nonzero temperature does not stay in its equilibrium microscopic state, but instead randomly samples all possible states, with probabilities given by the Boltzmann distribution.
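As a concrete sketch, the Boltzmann probabilities of a hypothetical two-level system can be computed directly (the energy levels and temperatures below are illustrative values, with the temperature expressed as [math]\displaystyle{ k_{\rm B}T }[/math] in the same units as the energies):

```python
import math

def boltzmann_probabilities(energies, kT):
    """Normalized Boltzmann probabilities for a list of energy levels,
    with the temperature given as k_B*T in the same units as the energies."""
    weights = [math.exp(-E / kT) for E in energies]
    Z = sum(weights)          # partition function (normalization constant)
    return [w / Z for w in weights]

# At high temperature the two states are nearly equally likely;
# near absolute zero the ground state dominates.
p_hot = boltzmann_probabilities([0.0, 1.0], kT=100.0)
p_cold = boltzmann_probabilities([0.0, 1.0], kT=0.01)
```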

Thermal fluctuations generally affect all the degrees of freedom of a system: There can be random vibrations (phonons), random rotations (rotons), random electronic excitations, and so forth.

Thermodynamic variables, such as pressure, temperature, or entropy, likewise undergo thermal fluctuations. For example, for a system that has an equilibrium pressure, the system pressure fluctuates to some extent about the equilibrium value.

Only the 'control variables' of statistical ensembles (such as the number of particles N, the volume V and the internal energy E in the microcanonical ensemble) do not fluctuate.

Thermal fluctuations are a source of noise in many systems. The random forces that give rise to thermal fluctuations are a source of both diffusion and dissipation (including damping and viscosity). The competing effects of random drift and resistance to drift are related by the fluctuation-dissipation theorem. Thermal fluctuations play a major role in phase transitions and chemical kinetics.

Central limit theorem

The volume of phase space [math]\displaystyle{ \mathcal{V} }[/math] occupied by a system of [math]\displaystyle{ 2m }[/math] degrees of freedom is the product of the configuration volume [math]\displaystyle{ V }[/math] and the momentum-space volume. Since the energy is a quadratic form in the momenta for a non-relativistic system, the radius of momentum space will be [math]\displaystyle{ \sqrt{E} }[/math], so that the volume of the hypersphere will vary as [math]\displaystyle{ \sqrt{E}^{2m}=E^m }[/math], giving a phase volume of

[math]\displaystyle{ \mathcal{V}=\frac{(C\cdot E)^m}{\Gamma(m+1)}, }[/math]

where [math]\displaystyle{ C }[/math] is a constant depending upon the specific properties of the system and [math]\displaystyle{ \Gamma }[/math] is the Gamma function. In the case that this hypersphere has a very high dimensionality, [math]\displaystyle{ 2m }[/math], which is the usual case in thermodynamics, essentially all the volume will lie near the surface

[math]\displaystyle{ \Omega(E)=\frac{\partial\mathcal{V}}{\partial E}=\frac{C^m\cdot E^{m-1}}{\Gamma(m)}, }[/math]

where we used the recursion formula [math]\displaystyle{ m\Gamma(m)=\Gamma(m+1) }[/math].
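As a numerical sanity check (a minimal Python sketch with the constant [math]\displaystyle{ C }[/math] set to 1), the surface area [math]\displaystyle{ \Omega(E) }[/math] can be compared against a finite-difference derivative of the phase volume:

```python
import math

# With C = 1: phase volume V(E) = E^m / Gamma(m+1), and its energy
# derivative is the density of states Omega(E) = E^(m-1) / Gamma(m).
def phase_volume(E, m):
    return E ** m / math.gamma(m + 1)

def density_of_states(E, m):
    return E ** (m - 1) / math.gamma(m)

# Central finite difference of the phase volume at an arbitrary energy.
m, E, h = 5, 2.0, 1e-6
numeric = (phase_volume(E + h, m) - phase_volume(E - h, m)) / (2 * h)
exact = density_of_states(E, m)
```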

The surface area [math]\displaystyle{ \Omega(E) }[/math] has its legs in two worlds: (i) the macroscopic one, in which it is considered a function of the energy and of the other extensive variables, like the volume, that have been held constant in the differentiation of the phase volume; and (ii) the microscopic world, where it represents the number of complexions that are compatible with a given macroscopic state. It is this quantity that Planck referred to as a 'thermodynamic' probability. It differs from a classical probability inasmuch as it cannot be normalized; that is, its integral over all energies diverges—but it diverges as a power of the energy and not faster. Since its integral over all energies is infinite, we might try to consider its Laplace transform

[math]\displaystyle{ \mathcal{Z}(\beta)=\int_0^{\infty}e^{-\beta E}\Omega(E)\,dE, }[/math]

which can be given a physical interpretation. The exponentially decreasing factor, in which [math]\displaystyle{ \beta }[/math] is a positive parameter, overpowers the rapidly increasing surface area, so that an enormously sharp peak develops at a certain energy [math]\displaystyle{ E^{\star} }[/math]. Most of the contribution to the integral comes from an immediate neighborhood of this value of the energy. This enables the definition of a proper probability density according to

[math]\displaystyle{ f(E;\beta)=\frac{e^{-\beta E}}{\mathcal{Z}(\beta)}\Omega(E), }[/math]

whose integral over all energies is unity on the strength of the definition of [math]\displaystyle{ \mathcal{Z}(\beta) }[/math], which is referred to as the partition function, or generating function. The latter name is due to the fact that the derivatives of its logarithm generate the moments, namely,

[math]\displaystyle{ \langle E\rangle =-\frac{\partial\ln\mathcal{Z}}{\partial\beta}, \qquad \ \langle(E-\langle E\rangle)^2\rangle=(\Delta E)^2=\frac{\partial^2\ln\mathcal{Z}}{\partial\beta^2}, }[/math]

and so on, where the first term is the mean energy and the second one is the dispersion in energy.
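These relations can be verified numerically. Taking [math]\displaystyle{ C=1 }[/math], so that [math]\displaystyle{ \mathcal{Z}(\beta)=\beta^{-m} }[/math], the mean [math]\displaystyle{ m/\beta }[/math] and dispersion [math]\displaystyle{ m/\beta^2 }[/math] follow from finite differences of [math]\displaystyle{ \ln\mathcal{Z} }[/math] (a Python sketch with illustrative parameters):

```python
import math

# With C = 1 the partition function is Z(beta) = beta^(-m), so
# ln Z = -m ln(beta); its derivatives give <E> = m/beta and
# <(Delta E)^2> = m/beta^2, checked here by finite differences.
def ln_Z(beta, m):
    return -m * math.log(beta)

m, beta, h = 3, 0.5, 1e-4
mean_E = -(ln_Z(beta + h, m) - ln_Z(beta - h, m)) / (2 * h)
var_E = (ln_Z(beta + h, m) - 2 * ln_Z(beta, m) + ln_Z(beta - h, m)) / h ** 2
# Expect mean_E ~ m/beta = 6 and var_E ~ m/beta^2 = 12.
```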

The fact that [math]\displaystyle{ \Omega(E) }[/math] increases no faster than a power of the energy ensures that these moments will be finite.[2] Therefore, we can expand the factor [math]\displaystyle{ e^{-\beta E}\Omega(E) }[/math] about the mean value [math]\displaystyle{ \langle E\rangle }[/math], which coincides with [math]\displaystyle{ E^{\star} }[/math] for Gaussian fluctuations (i.e. the average and most probable values coincide); retaining the lowest-order terms results in

[math]\displaystyle{ f(E;\beta)=\frac{e^{-\beta E}}{\mathcal{Z}(\beta)}\Omega(E)\approx\frac{\exp\{-(E-\langle E\rangle)^2/2(\Delta E)^2\}}{\sqrt{2\pi(\Delta E)^2}}. }[/math]

This is the Gaussian, or normal, distribution, which is defined by its first two moments. In general, one would need all the moments to specify the probability density, [math]\displaystyle{ f(E;\beta) }[/math], which is referred to as the canonical, or posterior, density in contrast to the prior density [math]\displaystyle{ \Omega }[/math], which is referred to as the 'structure' function.[2] This is the central limit theorem as it applies to thermodynamic systems.[3]
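The quality of this approximation can be checked numerically: with [math]\displaystyle{ C=1 }[/math] the exact canonical density is [math]\displaystyle{ \beta(\beta E)^{m-1}e^{-\beta E}/\Gamma(m) }[/math], and for large [math]\displaystyle{ m }[/math] it nearly coincides with the Gaussian of the same mean and dispersion. A Python sketch, working with logarithms to avoid overflow:

```python
import math

# Compare the exact canonical density f(E) = beta*(beta*E)^(m-1)
# * exp(-beta*E) / Gamma(m)  (the C = 1 case) with the Gaussian of the
# same mean m/beta and variance m/beta^2, evaluated at the mean energy.
# Logarithms are used because (beta*E)^(m-1) overflows for large m.
def log_canonical(E, m, beta):
    return math.log(beta) + (m - 1) * math.log(beta * E) - beta * E - math.lgamma(m)

def log_gaussian(E, mean, var):
    return -(E - mean) ** 2 / (2 * var) - 0.5 * math.log(2 * math.pi * var)

m, beta = 1000, 1.0
mean, var = m / beta, m / beta ** 2
ratio = math.exp(log_canonical(mean, m, beta) - log_gaussian(mean, mean, var))
```

The ratio of the two peak heights differs from 1 only by the Stirling correction [math]\displaystyle{ e^{-1/12m} }[/math], i.e. by less than 0.01% for [math]\displaystyle{ m=1000 }[/math].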

If the phase volume increases as [math]\displaystyle{ E^m }[/math], its Laplace transform, the partition function, will vary as [math]\displaystyle{ \beta^{-m} }[/math]. Rearranging the normal distribution so that it becomes an expression for the structure function and evaluating it at [math]\displaystyle{ E=\langle E\rangle }[/math] give

[math]\displaystyle{ \Omega(\langle E\rangle)=\frac{e^{\beta(\langle E\rangle)\langle E\rangle}\mathcal{Z}(\beta(\langle E\rangle))}{\sqrt{2\pi(\Delta E)^2}}. }[/math]

It follows from the expression of the first moment that [math]\displaystyle{ \beta(\langle E\rangle)=m/\langle E\rangle }[/math], while from the second central moment, [math]\displaystyle{ \langle(\Delta E)^2\rangle=\langle E\rangle^2/m }[/math]. Introducing these two expressions into the expression of the structure function evaluated at the mean value of the energy leads to

[math]\displaystyle{ \Omega(\langle E\rangle)=\frac{\langle E\rangle^{m-1}m}{\sqrt{2\pi m}m^me^{-m}} }[/math].

The denominator is exactly Stirling's approximation for [math]\displaystyle{ m!=\Gamma(m+1) }[/math], and if the structure function retains the same functional dependency for all values of the energy, the canonical probability density,

[math]\displaystyle{ f(E;\beta)=\beta\frac{(\beta E)^{m-1}}{\Gamma(m)}e^{-\beta E} }[/math]

will belong to the family of exponential distributions known as gamma densities. Consequently, the canonical probability density falls under the jurisdiction of the local law of large numbers, which asserts that a sum of independent and identically distributed random variables tends to the normal law as the number of terms increases without limit.
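This can be illustrated by sampling: a gamma variate with integer shape [math]\displaystyle{ m }[/math] is the sum of [math]\displaystyle{ m }[/math] independent exponential variables of rate [math]\displaystyle{ \beta }[/math], and a Monte Carlo estimate (a Python sketch with illustrative parameters) reproduces the mean [math]\displaystyle{ m/\beta }[/math] and dispersion [math]\displaystyle{ m/\beta^2 }[/math]:

```python
import random

random.seed(0)  # reproducible illustration

# A gamma(m, beta) variate is a sum of m independent exponential
# variables of rate beta; estimate its mean and variance by Monte Carlo.
m, beta, trials = 400, 1.0, 5000
samples = [sum(random.expovariate(beta) for _ in range(m)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
# Expect mean ~ m/beta = 400 and variance ~ m/beta^2 = 400.
```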

Distribution about equilibrium

The expressions given below are for systems that are close to equilibrium and have negligible quantum effects.[4]

Single variable

Suppose [math]\displaystyle{ x }[/math] is a thermodynamic variable. The probability distribution [math]\displaystyle{ w(x)dx }[/math] for [math]\displaystyle{ x }[/math] is determined by the entropy [math]\displaystyle{ S }[/math]:

[math]\displaystyle{ w(x) \propto \exp\left(\frac{S(x)}{k_{\rm B}}\right). }[/math]

If the entropy is Taylor expanded about its maximum (corresponding to the equilibrium state), the lowest order term is a Gaussian distribution:

[math]\displaystyle{ w(x) = \frac{1}{\sqrt{2\pi \langle x^2 \rangle}} \exp\left(-\frac{x^2}{2 \langle x^2 \rangle} \right). }[/math]

The quantity [math]\displaystyle{ \langle x^2 \rangle }[/math] is the mean square fluctuation.[4]
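A quick numerical check (a Python sketch with an arbitrary illustrative value of the mean-square fluctuation) confirms that this density is normalized and that its second moment reproduces [math]\displaystyle{ \langle x^2 \rangle }[/math]:

```python
import math

# Gaussian fluctuation density with mean-square fluctuation msq
# (an arbitrary illustrative value); verify normalization and <x^2>
# by a simple grid sum over a range wide enough to capture the tails.
def w(x, msq):
    return math.exp(-x ** 2 / (2 * msq)) / math.sqrt(2 * math.pi * msq)

msq, dx = 0.25, 1e-3
grid = [i * dx for i in range(-8000, 8001)]   # covers +/- 16 standard deviations
norm = sum(w(x, msq) for x in grid) * dx
second_moment = sum(x * x * w(x, msq) for x in grid) * dx
```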

Multiple variables

The above expression has a straightforward generalization to the probability distribution [math]\displaystyle{ w(x_1,x_2,\ldots,x_n)dx_1dx_2\ldots dx_n }[/math]:

[math]\displaystyle{ w = \frac{1}{\sqrt{(2\pi)^n\det A}} \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n}\left(A^{-1}\right)_{ij} x_i x_j\right), }[/math]

where [math]\displaystyle{ A_{ij}=\langle x_ix_j \rangle }[/math] is the matrix of mean values of the products [math]\displaystyle{ x_ix_j }[/math].[4]
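As a minimal two-variable sketch (with an illustrative matrix of second moments [math]\displaystyle{ A_{ij}=\langle x_ix_j\rangle }[/math]), the multivariate Gaussian built from the inverse and determinant of this matrix can be checked to be normalized by a grid sum (Python):

```python
import math

# Illustrative 2x2 matrix of second moments A_ij = <x_i x_j>;
# the fluctuation density is the multivariate Gaussian built from
# the inverse and determinant of A.
A = [[1.0, 0.5], [0.5, 1.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

def w(x1, x2):
    q = Ainv[0][0] * x1 * x1 + 2 * Ainv[0][1] * x1 * x2 + Ainv[1][1] * x2 * x2
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det))

# Grid sum over [-6, 6]^2; the tails outside carry negligible weight.
d = 0.05
norm = sum(w(i * d, j * d)
           for i in range(-120, 121)
           for j in range(-120, 121)) * d * d
```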

Fluctuations of the fundamental thermodynamic quantities

The table below gives the mean-square fluctuations of the thermodynamic variables [math]\displaystyle{ T,V,P }[/math] and [math]\displaystyle{ S }[/math] in any small part of a body. The small part must still be large enough, however, to have negligible quantum effects.

Averages [math]\displaystyle{ \langle x_ix_j \rangle }[/math] of thermodynamic fluctuations, where [math]\displaystyle{ C_P }[/math] is the heat capacity at constant pressure and [math]\displaystyle{ C_V }[/math] is the heat capacity at constant volume:[4]

|  | [math]\displaystyle{ \Delta T }[/math] | [math]\displaystyle{ \Delta V }[/math] | [math]\displaystyle{ \Delta S }[/math] | [math]\displaystyle{ \Delta P }[/math] |
|---|---|---|---|---|
| [math]\displaystyle{ \Delta T }[/math] | [math]\displaystyle{ \frac{k_{\rm B}T^2}{C_V} }[/math] | [math]\displaystyle{ 0 }[/math] | [math]\displaystyle{ k_{\rm B}T }[/math] | [math]\displaystyle{ \frac{k_{\rm B}T^2}{C_V}\left(\frac{\partial P}{\partial T}\right)_V }[/math] |
| [math]\displaystyle{ \Delta V }[/math] | [math]\displaystyle{ 0 }[/math] | [math]\displaystyle{ -k_{\rm B}T\left(\frac{\partial V}{\partial P}\right)_T }[/math] | [math]\displaystyle{ k_{\rm B}T\left(\frac{\partial V}{\partial T}\right)_P }[/math] | [math]\displaystyle{ -k_{\rm B}T }[/math] |
| [math]\displaystyle{ \Delta S }[/math] | [math]\displaystyle{ k_{\rm B}T }[/math] | [math]\displaystyle{ k_{\rm B}T\left(\frac{\partial V}{\partial T}\right)_P }[/math] | [math]\displaystyle{ k_{\rm B}C_P }[/math] | [math]\displaystyle{ 0 }[/math] |
| [math]\displaystyle{ \Delta P }[/math] | [math]\displaystyle{ \frac{k_{\rm B}T^2}{C_V}\left(\frac{\partial P}{\partial T}\right)_V }[/math] | [math]\displaystyle{ -k_{\rm B}T }[/math] | [math]\displaystyle{ 0 }[/math] | [math]\displaystyle{ -k_{\rm B}T\left(\frac{\partial P}{\partial V}\right)_S }[/math] |
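As an example of how to read the table: for a small region containing [math]\displaystyle{ N }[/math] atoms of a monatomic ideal gas, [math]\displaystyle{ C_V=\tfrac{3}{2}Nk_{\rm B} }[/math], so the entry [math]\displaystyle{ \langle(\Delta T)^2\rangle=k_{\rm B}T^2/C_V }[/math] gives a relative fluctuation [math]\displaystyle{ \Delta T/T=\sqrt{2/(3N)} }[/math], independent of temperature (a Python sketch):

```python
import math

# Relative temperature fluctuation of a region holding N atoms of a
# monatomic ideal gas: C_V = (3/2) N k_B, so <(dT)^2> = k_B T^2 / C_V
# gives dT/T = sqrt(2 / (3 N)) -- independent of T, vanishing as N grows.
def relative_temperature_fluctuation(N):
    return math.sqrt(2.0 / (3.0 * N))

small_cluster = relative_temperature_fluctuation(100)    # about 8 percent
macroscopic = relative_temperature_fluctuation(1e20)     # utterly negligible
```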

Notes

  1. In statistical mechanics they are often simply referred to as fluctuations.
  2. Khinchin 1949
  3. Lavenda 1991
  4. Landau & Lifshitz 1985
