Physics Made Easy

Physics Made Easy...?

Hello and welcome to Physics Made Easy, a site whose purpose is, unsurprisingly, to attempt to make physics, if not easy, at least less difficult. Unfortunately, making physics easy is no simple task in itself, and so the site is an ongoing project, continually adding and refining content in the hopes of reaching its stated goal.

Right now, the site is on version 1.0, in which creator Karura will steadily upload the revision notes she made at university; once this framework is in place, the next step will be to optimise the content, add in extra examples where needed, all in the hope that fewer people will have to search through both textbooks and Google in an increasingly frustrating and hopeless attempt to identify some way of getting started on a particular problem.

Also to be included on the site are A-Level revision notes for other science subjects, previously featured on Karura's old website.

Note: the reason this site has not been updated in ages is due to Karura being busy in real life. It remains unclear when she'll be able to get back to it.

GATE Syllabus for Physics (PH)

Mathematical Physics: Linear vector space; matrices; vector calculus; linear differential equations; elements of complex analysis; Laplace transforms, Fourier analysis, elementary ideas about tensors.

Classical Mechanics: Conservation laws; central forces, Kepler problem and planetary motion; collisions and scattering in laboratory and centre of mass frames; mechanics of system of particles; rigid body dynamics; moment of inertia tensor; noninertial frames and pseudo forces; variational principle; Lagrange's and Hamilton's formalisms; equation of motion, cyclic coordinates, Poisson bracket; periodic motion, small oscillations, normal modes; special theory of relativity - Lorentz transformations, relativistic kinematics, mass-energy equivalence.

Electromagnetic Theory: Solution of electrostatic and magnetostatic problems including boundary value problems; dielectrics and conductors; Biot-Savart's and Ampere's laws; Faraday's law; Maxwell's equations; scalar and vector potentials; Coulomb and Lorentz gauges; electromagnetic waves and their reflection, refraction, interference, diffraction and polarization; Poynting vector, Poynting theorem, energy and momentum of electromagnetic waves; radiation from a moving charge.

Quantum Mechanics: Physical basis of quantum mechanics; uncertainty principle; Schrodinger equation; one-, two- and three-dimensional potential problems; particle in a box, harmonic oscillator, hydrogen atom; linear vectors and operators in Hilbert space; angular momentum and spin; addition of angular momenta; time-independent perturbation theory; elementary scattering theory.

Thermodynamics and Statistical Physics: Laws of thermodynamics; macrostates and microstates; phase space; probability ensembles; partition function, free energy, calculation of thermodynamic quantities; classical and quantum statistics; degenerate Fermi gas; black body radiation and Planck's distribution law; Bose-Einstein condensation; first and second order phase transitions, critical point.

Atomic and Molecular Physics: Spectra of one- and many-electron atoms; LS and jj coupling; hyperfine structure; Zeeman and Stark effects; electric dipole transitions and selection rules; X-ray spectra; rotational and vibrational spectra of diatomic molecules; electronic transitions in diatomic molecules, Franck-Condon principle; Raman effect; NMR and ESR; lasers.

Solid State Physics: Elements of crystallography; diffraction methods for structure determination; bonding in solids; elastic properties of solids; defects in crystals; lattice vibrations and thermal properties of solids; free electron theory; band theory of solids; metals, semiconductors and insulators; transport properties; optical, dielectric and magnetic properties of solids; elements of superconductivity.

Nuclear and Particle Physics: Nuclear radii and charge distributions, nuclear binding energy, electric and magnetic moments; nuclear models, liquid drop model - semi-empirical mass formula, Fermi gas model of nucleus, nuclear shell model; nuclear force and two-nucleon problem; alpha decay, beta decay, electromagnetic transitions in nuclei; Rutherford scattering, nuclear reactions, conservation laws; fission and fusion; particle accelerators and detectors; elementary particles, photons, baryons, mesons and leptons; quark model.

Electronics: Network analysis; semiconductor devices; bipolar junction transistors, field effect transistors, amplifier and oscillator circuits; operational amplifier, negative feedback circuits, active filters and oscillators; rectifier circuits, regulated power supplies; basic digital logic circuits, sequential circuits, flip-flops, counters, registers, A/D and D/A conversion.

Physics Made Easy?

Astrophysics I Astrophysics II Astrophysics III Astrophysics IV Climate Physics I Climate Physics II Condensed Matter I Condensed Matter II Condensed Matter III Condensed Matter IV Condensed Matter V Condensed Matter VI Cosmology I Cosmology II Cosmology III Cosmology IV Fluid Flow I Fluid Flow II Photonics I Special Relativity Biology Notes

Biological Molecules Cell Structure and Function Ecology Enzymes Gas Exchange Hormones Maths for Biology Reproduction Respiration Taxonomy The Eye The Nervous System Transport Systems Chemistry Notes

Atomic Structure Bonding Kinetic Theory and Ideal Gases Reacting Quantities and Equations

Astrophysics I

Luminosity: L = 4πR_s²F, where F is the radiative flux at the stellar surface. Energy may also be lost due to neutrinos or direct mass loss.

Flux: at the Earth's surface, the observed flux is the stellar flux diluted by the inverse square law, f = L/(4πd²), where d is the distance to the star. Apparent magnitude, m, is based on the flux received at the Earth's surface, f_ν (the flux at frequency ν).

Fainter star has larger magnitude.

Absolute magnitude, M, is defined using the flux we would see from a star if it were 10 parsecs distant, F. This is a measure of the star's intrinsic brightness.

Bolometric magnitude is calculated using the total flux f integrated over all frequencies.

m = -2.5 log10(f) + constant (apparent)

M = -2.5 log10(F) + constant (absolute)

Distance modulus: taking the difference between the apparent and absolute magnitudes gives a measure of the distance to a star, m - M = 5 log10(d / 10 pc).

Distance measurements using trigonometric parallax: close stars appear to move against the background of fixed stars, such that their positions appear different to an observer on Earth when observed at the two extremes of the Earth's orbit.

Small-angle formula: parallax angle p = b/d, where b is one astronomical unit (AU), i.e. the distance between the Earth and the Sun; a star with p = 1 arcsecond is at d = 1 parsec.
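A quick numerical check of both distance measures - the parallax relation d = b/p and the distance modulus m - M = 5 log10(d/10 pc) quoted above; the star values are purely illustrative:

```python
import math  # not strictly needed here, kept for consistency with later snippets

def distance_from_parallax(parallax_arcsec):
    """Distance in parsecs from a trigonometric parallax in arcseconds (d = b/p with b = 1 AU)."""
    return 1.0 / parallax_arcsec

def distance_from_modulus(m_apparent, M_absolute):
    """Distance in parsecs from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((m_apparent - M_absolute) / 5.0 + 1.0)

print(distance_from_parallax(0.1))      # parallax 0.1" -> 10 pc
print(distance_from_modulus(5.0, 0.0))  # m - M = 5 -> 100 pc (m - M = 0 would be 10 pc)
```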

Distances in binary systems: for this we need the deprojected extent. i.e. the angle in the plane of the sky. Once we know this, everything should be easier.

d=distance to binary

a=semimajor axis of binary

Kepler's Laws

1. The orbit of each planet is an ellipse with the Sun at one of the foci.

2. A line joining the planet and the Sun sweeps out equal areas in equal times.

3. The square of the orbital period is proportional to the cube of its average distance from the Sun.
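A short numerical illustration of the third law in convenient units (P in years, a in AU, masses in solar masses), P² = a³/(M₁ + M₂); the planet values below are just examples:

```python
def orbital_period_years(a_au, total_mass_msun=1.0):
    """Kepler's third law in solar-system units: P^2 = a^3 / (M1 + M2)."""
    return (a_au ** 3 / total_mass_msun) ** 0.5

print(orbital_period_years(1.0))    # Earth: 1.0 yr
print(orbital_period_years(5.2))    # Jupiter: ~11.9 yr
print(orbital_period_years(30.1))   # Neptune: ~165 yr
```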

Effective temperature is the equivalent black-body temperature corresponding to a stars luminosity.

It ranges between 2000K and 100,000K.

Mass-luminosity relationship: L ∝ M³ (approximately).

Estimating the age of a cluster from its main-sequence turn-off rests on a few assumptions:

i) The cluster must be a galactic cluster (high-luminosity stars on the main sequence), hence the composition is roughly solar and the mass fraction of hydrogen is X_H ~ 0.7.

ii) Stars leave the main sequence when ~15% of their initial mass has been burnt to helium.

iii) The mass-luminosity relationship above applies, since these are stars somewhat brighter than the Sun.

Given that hydrogen burning converts ~0.7% of the rest mass into radiation, the total energy radiated per hydrogen atom consumed is ~0.007 m_H c² ≈ 6.6 MeV.

The total number of hydrogen atoms consumed is N ≈ f X_H M / m_H, with f ~ 0.15 and X_H ~ 0.7.

Lifetime of cluster ~ (total energy radiated up to the MS turn-off) / (luminosity at the turn-off point)

~ (total energy per hydrogen atom x no. of H atoms consumed) / luminosity
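Putting the estimate into numbers - a rough sketch only, with the turn-off mass and luminosity as assumed example inputs and f ~ 0.15, X_H ~ 0.7, ~6.6 MeV per H atom as above:

```python
M_SUN = 1.989e30      # kg
L_SUN = 3.828e26      # W
M_H = 1.673e-27       # kg
E_PER_H = 6.6e6 * 1.602e-19   # ~6.6 MeV per hydrogen atom consumed, in joules
YEAR = 3.156e7        # s

def turnoff_age_years(mass_msun, lum_lsun, f=0.15, x_h=0.7):
    """Cluster age ~ (energy radiated up to turn-off) / (luminosity at turn-off)."""
    n_h_burnt = f * x_h * mass_msun * M_SUN / M_H
    return n_h_burnt * E_PER_H / (lum_lsun * L_SUN) / YEAR

# Example: turn-off at 2 solar masses and ~16 L_sun (illustrative numbers only)
print(f"{turnoff_age_years(2.0, 16.0):.2e} yr")   # ~3e9 yr, the right order of magnitude
```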

More accurate age determination: use theoretical evolutionary tracks for stars of solar composition and various masses. Construct isochrones (lines of equal age) and fit them to observed HR diagrams. The best-fit isochrone determines the age of the cluster.

Uncertainties in this method include:

t=0 is taken to be the time when all stars in the cluster reach the main sequence, but of course they may not all reach it at the same time.

Errors may occur in determining which stars actually belong to the clusters.

Errors also occur in converting observing colours such as violet and blue to luminosities and effective temperatures for comparison with theory.

Astrophysics II

Fundamental principle I: stars are self-gravitating bodies in dynamical equilibrium due to a balance of gravity and internal pressure forces.

Equation of hydrostatic equilibrium: consider a small volume element at a distance r from the centre, with cross-section S and length δr. Balancing the pressure difference across the element against gravity gives dP/dr = -G M_r ρ_r / r².

Equation of distribution of mass: dM_r/dr = 4π r² ρ_r.

Dimensional analysis: use this to estimate the central pressure. Approximating dP/dr ~ -P_c/R_s, M_r ~ M_s and ρ_r ~ the mean density gives P_c ~ G M_s² / R_s⁴, to within factors of order unity.

Dynamical timescale: the dynamical timescale is the time it would take the star to collapse completely if pressure forces were negligible.

Equation of motion (neglecting pressure): d²r/dt² = -G M_r / r². For an inward displacement s, put s ~ R_s, r ~ R_s, M_r ~ M_s to estimate t_dyn ~ (R_s³ / G M_s)^(1/2).
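For scale, a quick evaluation of t_dyn ~ (R_s³/GM_s)^(1/2) for solar values (order-of-magnitude only):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m

t_dyn = (R_SUN ** 3 / (G * M_SUN)) ** 0.5   # seconds
print(f"t_dyn for the Sun ~ {t_dyn:.0f} s (~{t_dyn/60:.0f} minutes)")
```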

Virial theorem: start with

$4\pi[r^3 P_r]_{r=0}^{r=R_s}-3\int_0^{R_s}P_r 4\pi r^2 dr=-\int_0^{R_s}\frac{GM_r}{r}4\pi r^2 \rho_r dr$

We can cancel the first term as it is zero at both limits, leaving-

3∫₀^{R_s} P_r 4πr² dr = -Ω, where Ω = -∫₀^{R_s} (GM_r/r) 4πr² ρ_r dr is the total gravitational energy. (1)

Thermal energy per unit volume: u = (f/2) n k T (f represents the number of degrees of freedom).

Writing γ = (f+2)/f gives u = P/(γ-1). Substitute this in, recall the Ideal Gas Law P = nkT, and note that the total thermal energy is U = ∫ u 4πr² dr. Now back to (1): 3(γ-1)U = -Ω.

For a fully ionised, ideal gas γ = 5/3 -> 2U = -Ω. Total energy E = U + Ω = Ω/2 = -U.

Now for a quick summary to remind you of the steps to take if you ever need to derive this

1. Start with equation of hydrostatic equilibrium

2. Multiply by 4πr³ and integrate over the radius of the star

3. Substitute for gravitational energy

4. Consider thermal energy

Fundamental principle II- 'negative heat capacity': for the above case, if total energy decreases, thermal energy increases and the star heats up.

Nuclear burning is self-regulatory: if nuclear burning and thus total energy decrease, the core contracts and heats up; this causes nuclear burning to then increase again, as it is highly temperature dependent. Conversely, if there is an increase in nuclear burning, the core will expand and cool, decreasing it again.

Unstable stars: if γ = 4/3, then E = 0, i.e. the star has zero binding energy. In this case, the star is easily disrupted, causing rapid mass loss.

Periods when nuclear burning is not active: during star formation, energy is lost from the surface, so the proto-star contracts and heats up until hydrogen burning is ignited. When hydrogen fuel is exhausted, the same thing happens, and if the star is massive enough, fusion of heavier elements can occur.

Fundamental Principle III: since stars lose energy by radiation, stars supported by thermal pressure require an energy source to avoid collapse.

Thermal timescale (aka Kelvin-Helmholtz timescale): t_KH ~ G M_s² / (2 R_s L_s)

~ 15 million years, i.e. too short to provide energy for a stellar lifetime. Since thermal energy~gravitational energy, it is clear that for stars there must be another mechanism- this is where nuclear fusion comes in.
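Numerically, with the factor of two from the virial theorem, the solar values give a figure consistent with the ~15 million years quoted above:

```python
G = 6.674e-11
M_SUN = 1.989e30
R_SUN = 6.957e8
L_SUN = 3.828e26
YEAR = 3.156e7

t_kh = G * M_SUN ** 2 / (2 * R_SUN * L_SUN) / YEAR
print(f"Kelvin-Helmholtz timescale ~ {t_kh:.1e} yr")   # ~1.6e7 yr
```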

Nuclear timescale: t_nuc ~ η M_c c² / L_s, where η is the efficiency- η ≈ 0.007 for H -> He fusion- and M_c is the mass of the core.

Working this out gives us the timescale we would expect for a star.

Energy loss at a stellar surface is compensated for by energy release from nuclear reactions in the stellar interior.

L_s = ∫₀^{R_s} ε_r ρ_r 4πr² dr, where ε_r is the nuclear energy released per unit mass per second.

dL_r/dr = 4πr² ρ_r ε_r for any elementary shell.

Energy transport

1. Conduction: negligible except in degenerate matter.

2. Radiation: the interior consists of X-ray photons which undergo a random walk over ~5x10³ yr and are degraded to optical frequencies.

Consider a spherical shell, area A = 4πr², radius r, thickness dr. Radiation pressure is supplied by the photon momentum flux.

The rate of deposition of momentum in r -> r+dr gives the radiative transfer equation. (i)

Opacity, κ, is a measure of absorption. The solution introduces the optical depth τ = ∫ κρ dr; 1/(κρ) is the mean free path.

If τ >> 1, the material is optically thick.

If τ << 1, the material is optically thin.

3. Convection: important where the radiative temperature gradient becomes too steep.

Astrophysics III

Equation of state: it's easiest to treat the interior of the star as a perfect gas, P = n k T = ρ k T / (μ m_H), where n is the number density and μ is the mean particle mass in units of m_H.

Mass fractions and numbers of atoms

X = mass fraction of hydrogen (for the Sun, X = 0.70)

Y = mass fraction of helium (for the Sun, Y = 0.28)

Z = mass fraction of heavier elements, all of which astronomers call metals for ease of reference (for the Sun, Z = 0.02)

Obviously X + Y + Z = 1; if it didn't, there would be something very wrong.

If we assume that the material is fully ionised-

For hydrogen, the number of atoms per unit volume is Xρ/m_H and the number of electrons is Xρ/m_H.

For helium, the number of atoms is Yρ/(4m_H) and the number of electrons is Yρ/(2m_H).

For metals, the number of atoms is Zρ/(A m_H) and the number of electrons is approximately Zρ/(2m_H).

A is the average atomic weight of heavier elements present; each metal atom contributes ~A/2 electrons.

Degeneracy: in a completely degenerate gas all momentum states up to the Fermi momentum are completely filled. In this case, we can no longer use the perfect gas law.

Number density of electrons within a sphere of radius p₀ in momentum space at T = 0:

N_e = ∫₀^{p₀} (2/h³) 4πp² dp = (8π/3h³) p₀³

The factor of 2 in the integral covers spin states, the 1/h³ covers phase space density.

From kinetic theory, the pressure is P = (1/3) ∫₀^{p₀} p v_p n(p) dp.

Non-relativistic complete degeneracy (v = p/m_e):

P = (h²/20m_e)(3/π)^{2/3} N_e^{5/3}

Relativistic complete degeneracy (v ≈ c):

P = (hc/8)(3/π)^{1/3} N_e^{4/3}

Non-relativistic degeneracy is important in the degenerate cores of red giants and the interiors of white dwarfs. At high densities, we need to use the relativistic expression; this shows that the white dwarf collapses further to become a neutron star or black hole (there is no longer a stable minimum radius > 0).

Opacity is the cross-section per unit mass and it is a measure of the rate at which energy flows by radiative transfer (the maths was already covered in Astrophysics II). Sources of stellar opacity are-

1. Bound-bound absorption (negligible in interiors)

2. Bound-free absorption

3. Free-free absorption

4. Scattering by free electrons

The hydride ion is also important; it has only a single energy at -0.75eV. A lot of absorption is due to the hydride ion.

At low temperatures κ ≈ κ₁ ρ^{1/2} T⁴; at intermediate temperatures κ ≈ κ₂ ρ T^{-3.5} (Kramers opacity); at high temperatures κ ≈ κ₃, a constant (electron scattering). κ₁, κ₂ and κ₃ are constants for stars of a given composition.

Hydrogen burning- PPI chain: nuclear fusion will be covered in more detail in the Nuclear Physics section; here we will only focus on the relevant reactions

PPI chain reactions

1. ¹H + ¹H -> ²D + e⁺ + ν + 1.44 MeV

2. ²D + ¹H -> ³He + γ + 5.49 MeV

3. ³He + ³He -> ⁴He + ¹H + ¹H + 12.85 MeV

Overall: 4 ¹H -> ⁴He

In one cycle, reactions 1 and 2 will occur twice and reaction 3 will occur once.

Total energy released is 26.71 MeV; 0.26 MeV of this is carried away per neutrino and the remaining 26.19 MeV contributes to the luminosity.

We can also write the energy released as 0.007mc² (a quick numerical check follows the timescale list below).

Timescales for the reactions at a characteristic temperature T = 3x10⁷ K:

1. 14x10⁹ yr- reaction 1 is a weak interaction and the bottleneck of the reaction chain; this is the one that sets the lifetime of a hydrogen burning star.

2. 6 seconds- deuterium is burned up quickly.

3. 10⁶ yr

Obviously the exact values depend on density and mass fractions.
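The check promised above: 0.007 of the rest-mass energy of four hydrogen atoms does indeed reproduce the ~26.7 MeV per cycle quoted earlier (standard particle constants, proton mass used for the H atom):

```python
M_H = 1.6726e-27      # proton mass, kg (approximating the H atom)
C = 2.998e8           # m/s
MEV = 1.602e-13       # J per MeV

e_per_cycle = 0.007 * 4 * M_H * C ** 2 / MEV   # energy from one 4H -> He conversion, MeV
print(f"0.007 * (4 m_H) c^2 = {e_per_cycle:.1f} MeV (cf. 26.71 MeV total)")
print(f"per hydrogen atom consumed: {e_per_cycle / 4:.1f} MeV")
```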

PPII and PPIII chains can occur once 4He and other elements are sufficiently abundant.

PPII chain:

³He + ⁴He -> ⁷Be + γ + 1.59 MeV

⁷Be + e⁻ -> ⁷Li + ν + 0.86 MeV

⁷Li + ¹H -> ⁴He + ⁴He + 17.35 MeV

PPIII chain:

⁷Be + ¹H -> ⁸B + γ + 0.14 MeV

⁸B -> ⁸Be + e⁺ + ν

⁸Be -> ⁴He + ⁴He + 18.07 MeV

4He is acting as a catalyst for the conversion of hydrogen to helium. The total energy released is the same in each case, but the energy carried away by the neutrino is different.

PPI, PPII and PPIII operate simultaneously in a star containing sufficient 4He.

The CNO cycle: if a star starts burning helium, heavier elements start to build up. These elements (C, N, O) catalyse the conversion of hydrogen to helium.

¹²C + ¹H -> ¹³N + γ (2nd slowest reaction)

¹³N -> ¹³C + e⁺ + ν

¹³C + ¹H -> ¹⁴N + γ

¹⁴N + ¹H -> ¹⁵O + γ (slowest reaction)

¹⁵O -> ¹⁵N + e⁺ + ν

¹⁵N + ¹H -> ¹²C + ⁴He

The timescale of the cycle is determined by the slowest reaction, but the approach to equilibrium is determined by the second slowest reaction.

In equilibrium, λ(¹²C)n(¹²C) = λ(¹³C)n(¹³C) = λ(¹⁴N)n(¹⁴N) = λ(¹⁵N)n(¹⁵N), where λ is the reaction rate and n the number density.

Observational evidence for the CNO cycle

1. The ¹³C/¹²C ratio is ~1/5 in some red giants, compared to ~1/90 on Earth.

2. Nitrogen-rich stars have been discovered.

CN(O) bi- and tri-cycle: once every 2500 reactions or so, ¹⁵N + ¹H -> ¹⁶O + γ occurs. This leads to two other reaction cycles involving fluorine and oxygen; these are the bi- and tri-cycles, and they have an equilibration time of ~10¹¹ yr.

Helium burning: the onset of helium burning is believed to be accompanied by an explosive reaction in an electron degenerate hydrogen-exhausted stellar core. This is the helium flash.

The main problem with helium burning (which we can think of as fusing α-particles) is that the next stable nucleus that can be formed is ¹²C and after that ¹⁶O (there is no stable nucleus at A = 8)- but in order to form these nuclei from α-particles, we would need three or four to collide at the same time, as in the triple-α reaction-

⁴He + ⁴He + ⁴He -> ¹²C + γ

The reaction rate for this is negligible unless on resonance.

There is, however, another possibility. In ⁴He gas, a small concentration of ⁸Be can build up and interact with another α-particle before it decays-

⁸Be + ⁴He -> ¹²C + γ

This reaction also has to be resonant to be non-negligible, but a resonant state has been observed.

The characteristic temperature of helium burning is 10⁸ K.

Carbon burning: in large stars, further contraction and heating can occur, allowing fusion of heavier elements all the way up to iron; however, most of the possible energy release from fusion reactions comes from hydrogen and helium burning.

Structure of the Sun: due to its conveniently local position at the heart of our solar system, the Sun is the only star for which we can measure internal properties. Composition can be determined from meteorites; density and internal rotation from helioseismology and central conditions from neutrinos.

Helioseismology: the Sun acts as a resonant cavity, oscillating in millions of modes (acoustic, gravitational). These modes are excited by convective eddies, with periods of 1.5-20 min and v ~ 0.1 m s⁻¹. Just like earthquakes on Earth, these resonant modes can be used to reconstruct the internal density structure of the Sun. To do this, we need to measure Doppler shifts ~10⁻⁶ of the spectral linewidths- this is achieved with good spectrometers and long integration times (to average out noise).

Results of helioseismology1. Density structure and speed of sound in the Sun.

2. Depth of the outer convective zone is found to be ~0.28 R_⊙.

3. Core rotation is slow, so it must have been spun down with the envelope.

Solar neutrinos: we already saw that neutrinos are produced in the core of the Sun, and carry away 2-6% of the energy from H-burning. By detecting solar neutrinos in underground experiments, we can probe the solar core.

PP chain neutrino emitting reactions: just to refresh our memories-

¹H + ¹H -> ²D + e⁺ + ν, E_max = 0.42 MeV

⁷Be + e⁻ -> ⁷Li + ν, E_max = 0.86 MeV

⁸B -> ⁸Be + e⁺ + ν, E_max = 14.0 MeV

The Homestake (Davis) experiment (~1970) to detect neutrinos: an underground tank is filled with 600 tons of C₂Cl₄. Some neutrinos interact with Cl via the reaction ν_e + ³⁷Cl -> ³⁷Ar + e⁻. Every two months, the ³⁷Ar atoms are filtered out and counted.

There are two problems with this experiment-

1. Only a tiny number of neutrinos can be detected.

2. It is only sensitive to neutrinos from the 8B reaction, which is only a minor reaction in the Sun.

That aside, ~54 ³⁷Ar atoms are expected to be seen, but only ~17 are observed, i.e. the neutrino flux is only about 1/3 of what it should be. This is the solar neutrino problem.

Possible solutions to the solar neutrino problem

1. Astrophysical solutions- perhaps our picture of the Sun's core is wrong. Maybe the central temperature is 5% less than we think, although to achieve this we would need there to be mixing in the core, by convection or rotation.

Other proposed models include there being no nuclear reactions in the Sun (core is black hole/iron/degenerate), or WIMPs (weakly interacting massive particles) transporting away some of the energy in place of the neutrinos. Ultimately, however, helioseismology rules out most astrophysical solutions.

2. Nuclear physics- perhaps our reaction cross-sections are wrong. Improved experiments have since confirmed that the cross-sections are correct for the key nuclear reactions.

3. Particle physics. All neutrinos generated by the Sun are electron neutrinos, which are the only type the Davis experiment could detect. But if neutrinos have mass, and different neutrino types have different masses, then maybe neutrinos are changing type on the way to Earth (neutrino oscillations). This solution looks hopeful, but brings up more considerations- do the oscillations occur in matter or vacuum?

Recent solar neutrino experiments use more sensitive detectors that can also detect neutrinos from the main pp reaction.

1. Kamiokande experiment: 3000 tons of ultra-pure water (1680 tons of active medium) detecting elastic scattering ν + e⁻ -> ν + e⁻. This reaction is about six times more likely for ν_e than for ν_μ or ν_τ. The observed flux is half the predicted flux, perhaps due to the energy dependence of neutrino interactions.

2. Gallium experiments (GALLEX, SAGE) use Ga to directly measure the low-energy pp neutrinos:

ν_e + ⁷¹Ga -> ⁷¹Ge + e⁻ - 0.23 MeV

Predicted: 132 ± 7 SNU

Observed: 80 ± 10 SNU

(1 SNU = 10⁻³⁶ interactions per target atom per second)

3. Sudbury Neutrino Observatory (SNO): 1000 tons of D₂O in an acrylic plastic vessel with 9456 light sensors/PMTs, located 2070 m underground. The experiment detects Cherenkov radiation from electrons and photons produced in weak interactions and neutrino-electron scattering.

Results from 2001 seem to confirm neutrino oscillations (in matter- the MSW effect).

The structure of main sequence (hydrogen core burning) stars: when a star begins its main-sequence lifetime, it has a homogeneous composition. During its main-sequence lifetime, we can derive certain scaling relations (power laws of the form L ∝ M^α). Differential equations are replaced with characteristic quantities in order to derive these.

Hydrostatic equilibrium for the main sequence: P ~ G M² / R⁴ (1)

Radiative transfer for the main sequence: L ~ R⁴ T⁴ / (κ M) (2)

Luminosity-mass relationship for the main sequence: we first need to specify both the equation of state and the opacity law.

1. Massive stars: ideal gas law, electron scattering opacity (κ ~ κ_Th = constant). Using (1) with the ideal gas law, T ~ μM/R; using (2), L ~ μ⁴M³.

2. Low-mass stars: ideal gas law, Kramers opacity law -> L ~ μ^{7.5} M^{5.5} / R^{0.5}.

3. Very massive stars: radiation pressure, electron scattering opacity -> L ∝ M.

Most important thing to remember for stars near a solar mass: L rises steeply with mass, roughly L ∝ M³-M⁴.

Main-sequence lifetime: t_MS ∝ M/L.

During the main-sequence lifetime: the core temperature T_c is fixed at ~10⁷ K because that is the characteristic temperature of hydrogen burning. Pressure is inversely proportional to the mean molecular mass (μ). During hydrogen burning, μ increases from ~0.62 to ~1.34, whilst the star's radius increases by a factor of ~2.

Opacity and metallicity: at low temperatures, opacity depends strongly on metallicity (κ ∝ Z for bound-free absorption).

Low metallicity stars are much hotter and more luminous at a given mass, hence they have shorter lifetimes.

The mass-radius relationship is only weakly dependent on metallicity.

Subdwarfs are low metallicity stars lying just below the main sequence.

General properties of homogeneous stars

Upper main sequence: M > 1.5 M_⊙; core is convective and well mixed; energy release via the CNO cycle; opacity due to electron scattering; surface H is fully ionised, with energy transport by radiation.

Lower main sequence: M < 1.5 M_⊙; radiative core; energy release via the pp chain; Kramers opacity; convective outer envelope.

Oxygen burning: O -> Si, P, S, Mg

Silicon burning: Si -> Fe

Core collapse supernovae are triggered after the exhaustion of nuclear fuel in the core of a massive star if the iron core mass > Chandrasekhar mass. Gravitational energy from the collapsing core provides energy for the explosion, most of which is emitted in the form of neutrinos. By an unknown mechanism, ~1% of the energy is deposited in the stellar envelope, which is blown away (ejected), leaving a compact remnant (neutron star or black hole).

Thermonuclear explosions: if an accreting CO white dwarf reaches the Chandrasekhar mass, the carbon is ignited under degenerate conditions. The nuclear burning raises T, but not P, and just as with the helium flash we get thermonuclear runaway. This results in incineration and complete destruction of the star; with ~10⁴⁴ J of nuclear energy release, no remnant is expected.

These explosions are the main producers of iron and also act as standard candles (which we discussed in the Cosmology section).

Supernova classification: there are many different types of supernova, and if truth be told, not even the experts can decide how to classify them. For that reason, only a general overview will be given here.

1. Type I: no hydrogen lines in spectrum

2. Type II: hydrogen lines in spectrum

Theoretical types: the thermonuclear and core collapse models discussed above. The relationship between these types is no longer 1:1.

Neutron stars are the end products of core collapse of massive stars (between 8 and ~20 M_⊙). In the collapse, all nuclei are dissociated to produce a very compact remnant mainly composed of neutrons (with a few protons and electrons). The typical radius of the remnant is ~10 km, with a density of ~10¹⁸ kg m⁻³.

Just like with white dwarfs, the fact that neutrons, like electrons, are fermions leads to a maximum possible mass which can be supported by neutron degeneracy- this mass is estimated to be between 1.5 and 3 M_⊙.

The dissociation of the nuclei is endothermic, using some of the gravitational energy released in the collapse. These reactions undo all the previous nuclear fusion reactions.

Schwarzschild black holes: a black hole has a density such that the escape velocity is greater than the speed of light (and since not even light can escape, of course it's going to look black).

Take photons to have an effective mass m = E/c². Escape velocity: v_esc = (2GM/R)^{1/2}. If v_esc > c, then R < R_s = 2GM/c², the Schwarzschild radius (a quick evaluation follows the list below).

Final fates of stars:

1. Star develops a degenerate core before its nuclear fuel is exhausted -> degenerate (white) dwarf.

2. Star develops a degenerate core and ignites nuclear fuel explosively -> complete disruption in a supernova.

3. Star exhausts all of its nuclear fuel and the core exceeds the Chandrasekhar mass -> core collapse and a compact remnant (neutron star or black hole).
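The evaluation promised above - R_s = 2GM/c² for stellar-mass objects (standard constants, masses chosen for illustration):

```python
G = 6.674e-11
C = 2.998e8
M_SUN = 1.989e30

def schwarzschild_radius_km(mass_msun):
    """R_s = 2GM/c^2: the radius below which the escape velocity exceeds c."""
    return 2 * G * mass_msun * M_SUN / C ** 2 / 1e3

print(f"1 M_sun:  {schwarzschild_radius_km(1):.1f} km")    # ~3 km
print(f"10 M_sun: {schwarzschild_radius_km(10):.0f} km")   # ~30 km
```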

Summary- final fate as a function of initial mass (not the mass of the end state)

M < 0.08 M_⊙: no hydrogen burning; supported by degeneracy pressure and Coulomb forces -> planets, brown dwarfs.

0.08-0.48 M_⊙: hydrogen burning but no helium burning -> degenerate helium dwarf.

0.48-8 M_⊙: hydrogen and helium burning -> degenerate CO dwarf.

8-13 M_⊙: complicated burning sequences, no iron core -> neutron star.

13-80 M_⊙: iron core, core collapse -> neutron star or black hole.

M > 80 M_⊙: pair instability, complete disruption -> no remnant.

Binary stars: as explained earlier, to conserve angular momentum, most stars are members of binary (or multiple star) systems. The orbital period of such systems ranges from 11 minutes to 10⁶ years.

Most binaries are far apart with little interaction; in close binaries (short orbital periods), the stars can interact and exchange mass.

Climate Physics I

Composition of the atmosphere: the major constituents (N₂, O₂, Ar) are radiatively uninteresting ('boring'); the minor constituents are what matter.

CO₂: ~0.033% - important (see later); increasing in the atmosphere

CH₄: ~2x10⁻⁴ % - increasing in the atmosphere

N₂O: ~5.0x10⁻⁵ % - increasing in the atmosphere

O₃: variable (~5x10⁻⁵ %) - important

H₂O: variable (~0-0.1%)

Standard pressure is defined as 760 mm Hg = 1 atm = 1.013 bar = 1013 mbar = 1.013x10⁵ Pa. Atmospheric pressure is the weight of air above you (~1 kg cm⁻²)- that's one ton per square foot!

Pressure vs. height: as you would expect, pressure decreases with height. Still, this being physics, we want to express that more quantitatively.

Hydrostatic equation: dp/dz = -ρg. Ideal Gas Law (per mole): pV = RT, so ρ = Mp/RT, where M is the molar mass.

Combining these, dp/p = -(Mg/RT) dz = -dz/H, where H = RT/Mg is the scale height- the height over which the pressure drops by 1/e.

If the atmosphere were isothermal (T = const), as we find approximately in the lower stratosphere, then p = p₀ exp(-z/H).
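Plugging in numbers for H = RT/(Mg) with dry-air values (temperatures chosen as illustrative examples):

```python
R = 8.314      # J mol^-1 K^-1
M_AIR = 0.029  # kg mol^-1, mean molar mass of dry air
G = 9.81       # m s^-2

def scale_height_km(T_kelvin):
    """Isothermal scale height H = RT/(Mg)."""
    return R * T_kelvin / (M_AIR * G) / 1e3

print(f"H at 290 K: {scale_height_km(290):.1f} km")                      # ~8.5 km
print(f"H at 220 K (lower stratosphere): {scale_height_km(220):.1f} km") # ~6.4 km
```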

Troposphere- vertical temperature profile: the troposphere is the lowest layer of the atmosphere (0 -> 10 km).

Take a parcel of air rising adiabatically (same pressure as surroundings, but T may change).

For one mole of ideal gas, pV = RT (in case you hadn't guessed, we'll be using this a lot).

Now to use our other good friend, the First Law of Thermodynamics: dQ = C_V dT + p dV.

dQ = 0 (adiabatic), and differentiating the ideal gas law gives p dV + V dp = R dT, so

V dp = C_p dT

Now back to the hydrostatic equation: dp = -ρg dz, and for one mole ρV = M, so C_p dT = -Mg dz, i.e. -dT/dz = Mg/C_p = Γ_a.

Γ_a is called the adiabatic lapse rate.

Tropospheric lapse rate: Γ_a = 9.7 K km⁻¹ for dry air. C_p for water vapour is almost twice that of dry air- that and latent heat effects mean that Γ ~ 6-7 K km⁻¹ in reality. As the atmosphere is mostly heated from below, the actual lapse rate often exceeds the adiabatic one, inducing convection (more on this later).
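A quick check of the dry value: per unit mass Γ_a = g/c_p, which with standard dry-air numbers comes out close to the figure quoted above:

```python
G = 9.81          # m s^-2
CP_AIR = 1005.0   # J kg^-1 K^-1, specific heat of dry air at constant pressure

gamma_dry = G / CP_AIR * 1e3   # K per km
print(f"Dry adiabatic lapse rate ~ {gamma_dry:.1f} K/km")   # ~9.8 K/km
```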

Vertical temperature profile of the stratosphere: above 10 km, large-scale convection ceases and the temperature stops falling. This optically thin, radiative layer of the atmosphere is known as the stratosphere.

Stratosphere- optically thin, radiative, stratified (hence the name), T ≈ const.

Troposphere- optically thick, convective, dT/dz ≈ -Γ. Radiative transfer will be discussed further later.

Potential temperature is the temperature that a parcel of air at the point of observation would achieve if it were moved adiabatically to a standard pressure p0 (usually 1 bar).

Using the First Law of Thermodynamics and the Ideal Gas Law as earlier-

Integrate from p, T (the point of observation) to p₀, θ (the standard pressure): θ = T (p₀/p)^{R/C_p}.

Differentiate with respect to z: (1/θ) dθ/dz = (1/T) dT/dz - (R/C_p)(1/p) dp/dz.

To clean this up a bit more, recall that dp/dz = -ρg = -Mgp/RT and Γ_a = Mg/C_p. So (T/θ) dθ/dz = dT/dz + Γ_a.
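The integrated form θ = T (p₀/p)^{R/C_p} in code (κ = R/C_p ≈ 0.286 for dry air; the example parcel values are illustrative):

```python
KAPPA = 0.286   # R/C_p for dry air

def potential_temperature(T_kelvin, p_hpa, p0_hpa=1000.0):
    """theta = T * (p0/p)^(R/Cp): temperature of the parcel if brought adiabatically to p0."""
    return T_kelvin * (p0_hpa / p_hpa) ** KAPPA

# A parcel at 500 hPa and 250 K has a potential temperature of ~305 K
print(f"{potential_temperature(250.0, 500.0):.1f} K")
```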

Stability: a parcel of air is stable if, when it undergoes a small vertical displacement from equilibrium, it experiences a buoyancy force restoring it to its original position.

The atmosphere is stable if dθ/dz > 0 (lapse rate less than Γ_a), and unstable if dθ/dz < 0.

Entropy: now we are in a position to find the entropy profile of the atmosphere.

dS = dQ/T = C_p dT/T - (V/T) dp

Integrating: S = C_p ln T - R ln p + constant.

Now absorb R ln p₀ into the constant: S = C_p ln θ + constant.

ΔS = C_p Δ(ln θ), so ln θ can be used as an entropy scale.

Atmospheric humidity: the concentration of water varies with latitude and height from 0-10%. There are several different ways to quantify humidity.

Mass mixing ratio, xm, is the ratio of the mass of water vapour to the mass of dry air in a volume V with total pressure p.

Partial pressure and density of water vapour: p_H2O, ρ_H2O. Partial pressure and density of dry air: p_a, ρ_a. Then x_m = ρ_H2O/ρ_a.

Volume mixing ratio, xV, is the ratio of the volume of water vapour to the volume of dry air if the constituents were separated at total pressure p.

Relative humidity is the ratio of x to its value at saturation.

Dew/frost point is the temperature at which saturation occurs.

Molecular mass of a mixture: we'll be considering one mole of damp air from now on.

Hydrostatic equation for a mixture of gases: dp/dz = -M̄gp/RT,

where M̄ = Σᵢ Mᵢ pᵢ / p is the effective molecular mass (pressure-weighted mean).

Stability of moist air- without phase changes, the analysis is the same as for dry air (using the effective molecular mass above). With condensation:

First Law (adiabatic): C_p dT - V dp + L dm = 0, where the last term on the LHS accounts for condensation- L is of course the latent heat and m is the mass of water vapour per mole of air.

From the Ideal Gas Law, m = M_H2O p_H2O / p.

We'll now go ahead and find an expression for dm that we can substitute back in here:

dm = M_H2O (dp_H2O/p - (p_H2O/p²) dp)

Clausius-Clapeyron equation (taking V_v >> V_l): dp_sat/dT = L M_H2O p_sat / (R T²).

Whew- so we're finally ready to substitute; the result is the saturated adiabatic lapse rate, Γ_sat.

Γ_sat is always less than Γ_dry. If Γ < Γ_sat, moist air is absolutely stable.

If Γ_sat < Γ < Γ_dry, moist air is conditionally stable (so whether it convects depends on the relative humidity).

If forced above the condensation level, moist air becomes unstable and continues to rise, forming clouds.

Saturated vapour pressure over a convex surface of radius r: e_s(r) = e_s(∞) exp(2σ / (ρ_l R_v T r)),

where e_s(∞) is the saturated vapour pressure over a plane surface and σ is the surface tension.

Saturation ratio: S = e/e_s. Supersaturation: S - 1 (often quoted as a percentage).

Formation of cloud droplets: a droplet can only grow by condensation once it has reached a critical radius r*; smaller droplets just evaporate unless the surrounding air is highly supersaturated. It is very unlikely, however, that a drop of critical size would just form at random from a collection of water molecules- this would require the simultaneous collision of ~17 molecules at S ~ 750%. Even so, clouds form on a regular basis despite the fact that we don't live in quite such a waterlogged atmosphere, so there must be some other mechanism at work.

In fact, soluble particles in the atmosphere act as condensation nuclei, making the formation of large droplets possible. They reduce the SVP, thus attracting vapour.

Effect of solute on vapour pressure (Raoult's Law): e'/e_s ≈ n₀/(n₀ + n) ≈ 1 - n/n₀,

where n₀ is the number of water molecules and n is the number of solute molecules.

Critical radius: the saturation ratio over a droplet is the product of two terms- a curvature (Kelvin) term and a solute (Raoult) term.

The first term accounts for curvature and tends to increase S, the second term is the solute term and tends to decrease S.

We'll now expand this as a power series, conveniently neglecting higher order terms:

S(r) ≈ 1 + a/r - b/r³

S has a maximum S* at the critical radius r*: setting dS/dr = 0 gives r* = (3b/a)^{1/2} and S* = 1 + (4a³/27b)^{1/2}.

Above this radius, the droplet will tend to grow in order to reduce the vapour pressure above its surface (smaller droplets will tend to shrink).
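A small sketch of the curvature-plus-solute curve S(r) ≈ 1 + a/r - b/r³ above, locating the maximum numerically. The coefficients a and b here are arbitrary illustrative numbers, not measured values for any particular solute:

```python
import math

a = 1.2e-9   # curvature (Kelvin) coefficient, m   - illustrative
b = 1.0e-24  # solute (Raoult) coefficient, m^3    - illustrative

def saturation_ratio(r):
    """Approximate saturation ratio over a solution droplet of radius r."""
    return 1.0 + a / r - b / r ** 3

r_star = math.sqrt(3 * b / a)        # radius where dS/dr = 0
s_star = saturation_ratio(r_star)    # peak supersaturation
print(f"r* = {r_star * 1e6:.3f} um, S* = {s_star:.4f} "
      f"(supersaturation ~ {(s_star - 1) * 100:.2f}%)")
```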

Sources of condensation nuclei

1. The most common natural source is sea spray.

2. Anthropogenic (man-made) aerosols such as sulphates are also important.

3. Electrical charges can work in the same way, but this effect is rarely important in the atmosphere.

Rate of droplet growth is given by diffusion of vapour onto the droplet:

dR/dt = D (ρ_v(∞) - ρ_v(R)) / (ρ_l R)

The droplet has radius R, D is the diffusion coefficient of water vapour in air, and ρ_l is the liquid density.

Now substitute for ρ_v(R) using the expression we just found for the vapour pressure over a droplet; ρ_v = e M_H2O/RT is the vapour density.

Heat loss/gain of a droplet:

Heat gain due to latent heat, L: dQ/dt = L ρ_l 4πR² dR/dt.

Heat lost due to the thermal conductivity of air, K: dQ/dt = 4πR K (T_R - T_∞).

In equilibrium these two balance.

Coalescence: the growth rate of cloud droplets is very slow- it takes over half a day to make a 40 μm drop! But once the droplet begins to fall, larger particles will collide with smaller ones and grow by coalescence. This can be a very efficient process, especially if updrafts are also present in the clouds.

Growth of ice crystals: the SVP over ice is less than that over water, so at a given temperature an ice crystal will grow more rapidly at the expense of any water droplets present. Under the right conditions, crystals can grow to greater than 100 μm in only a few minutes. At mid-northern latitudes, most rain actually originates as ice particles which melt before reaching the ground.

Ozone layer: ozone, O3 is continually formed and destroyed in the upper atmosphere.

O₂ + hν -> O + O, O + O₂ + M -> O₃ + M, O₃ + hν -> O₂ + O; hν is a UV photon and M is an arbitrary molecule that carries off excess energy.

Oxides of N, H, Cl and Br (mostly produced by human activities) catalyse the breakdown of ozone.

Ocean- hydrological cycle: 71% of the Earth is covered by ocean (6% ice), and it is the main source of atmospheric water vapour. Precipitation from the atmosphere provides surface freshwater fluxes such as streams and rivers that of course flow back into the ocean and affect ocean currents.

Ocean- global energy balance: equatorial regions receive much more energy from the Sun than polar regions, although energy lost by radiation is much the same everywhere. The ocean plays an important part in transporting the excess heat around, although the atmosphere/condensation have equally important roles. The top 3.2m of the ocean has the same heat capacity as the entire atmosphere, with total ocean heat content being ~1000 times that of the atmosphere.

Ocean- amelioration of global warming takes place in two ways

1. Ocean takes up ~35% of carbon released from burning fossil fuels.

2. Large heat capacity and slow deep mixing delay the effects of warming trends.

Salinity: ocean salt comes from the weathering of rocks over geological timescales. Near the surface, salinity varies from 3.3-3.8%, 85% of which is sodium chloride. In the deeper ocean, salinity is much less variable. Note: unlike freshwater, saltwater gets denser as it gets colder. At high T the density is dominated by the temperature dependence, at low T by the salinity dependence.

Ocean equation of state: unfortunately, there is no ideal ocean law. It is usually conventional to linearise the equation of state:

ρ ≈ ρ₀ [1 - α(T - T₀) + β(S - S₀)] + higher order terms

Ocean measurements

1. The basic method (and yes, I do mean basic): fill a bucket with seawater and quickly take measurements before its temperature changes.

2. Slightly more advanced method: lower a cluster of sampling bottles (there, doesn't that sound better than 'bucket') into the water and fill them at different depths.

3. Acoustic thermometry. The ocean is practically opaque to other types of radiation, but the speed of sound in water can be used to work out T, S and p (temperature dependence dominates).

4. Surface temperature and altimetry can be measured by satellites.


Vertical structure of the ocean1. 0-10m. Mixed layer: rapid overturning by wind. As you can guess from the name, the layer is well mixed, so the temperature is much the same throughout.

2. 10-100m. Thermocline layer: slow overturning leads to steep gradients- temperature decreases with depth.

3. 100m-4km. Deep layer: small gradients, almost no vertical mixing. Temperature pretty much constant throughout.

Temperature gradient and depth of the thermocline layer depend on diffusion, surface winds, the surface heat budget etc.

Estimate of the thermocline depth: ∂T/∂t = -w ∂T/∂z + κ ∂²T/∂z², where w is the vertical (upwelling) velocity and κ the vertical diffusivity.

Where the first term on the RHS corresponds to advection, and the second to diffusion.

In the steady state, w ∂T/∂z = κ ∂²T/∂z². Dimensional analysis then gives the thermocline depth scale D ~ κ/w.
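Putting rough numbers into D ~ κ/w; the diffusivity and upwelling velocity below are assumed order-of-magnitude values, not measurements:

```python
kappa = 1e-4   # vertical diffusivity, m^2 s^-1   (assumed order of magnitude)
w = 1e-7       # upwelling velocity, m s^-1       (assumed order of magnitude)

D = kappa / w   # thermocline depth scale, metres
print(f"Thermocline depth scale ~ {D:.0f} m")   # ~1000 m
```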

Climate Physics II

Deep ocean circulation is predominantly adiabatic and salt conserving, allowing the use of soluble tracers. Results show that communication between the surface and the depths occurs only in isolated cold regions (North Atlantic, Antarctica).

Ocean structure and circulation- heat flux: solar energy is absorbed in the top few tens of metres; long wave radiation is then re-radiated. Latent heat fluxes also play an important part, and as you would expect, there are strong regional, seasonal and diurnal variations.

Ocean structure and circulation- freshwater flux: salinity is altered by freezing (rejects brine), river runoff and net evaporation minus precipitation.

Ocean structure and circulation- momentum flux: surface winds generate waves and cause turbulent mixing.

Geostrophic flow and thermohaline circulation: temperature and salinity gradients produce pressure gradients. These gradients are balanced by the Coriolis force (which acts perpendicular to the velocity). This balance creates a geostrophic flow, which gives rise to the thermohaline circulation. This is a kind of global conveyor belt that moves a lot of heat around the planet. The key features are massive sinking in the North Atlantic and Antarctic, and upwelling almost everywhere else.

Sources of energy

Sun (EM radiation): 236 W m⁻²

Energetic particles: 0.001 W m⁻²

Geothermal: 0.06 W m⁻²

Anthropogenic: 0.02 W m⁻²

As you might expect, the Sun is the main source of energy, and the nuclear reactions involved therein are covered in great detail in the Astrophysics section.

Planck Radiation Law: B_ν(T) = (2hν³/c²) / (exp(hν/kT) - 1), where B is the blackbody radiance.

Optically thick slab: emits radiance R ≈ B(T).

Optically thin slab: emits radiance R ≈ τ B(T), where τ << 1 is the optical depth.

Atmospheric absorption

1. Ozone absorbs UV, visible and IR (9.6 μm).

2. O2 absorbs UV (ozone production).

3. H2O absorbs IR and far IR.

4. CO2 absorbs near IR and IR.

5. CH4, CO, N2O and NO also absorb at various wavelengths.

6. There is also continuum absorption by water dimers, aerosols, cloud particles etc.

Sinks of energy: where does the Sun's energy go?

1. 50% is absorbed by the surface of the Earth, which re-emits long wave radiation- hence the atmosphere is mostly heated from below.

2. ~20% is absorbed by polar molecules in the atmosphere.

3. ~30% is reflected by clouds, dust and the Earth's surface.

Radiance is energy per unit time, area, wavenumber and solid angle; putting it all together, it is measured in units of W m⁻¹ sr⁻¹.

Radiative transfer equation: take monochromatic radiance passing through an element of absorbing medium.

T = temperature, ρ = density, k = absorption coefficient.

dR = -kρR dz + kρB dz,

where the first term on the RHS corresponds to absorption (assuming no scattering) and the second term corresponds to emission.

Schwarzschild's equation

At the top of the atmosphere, τ = 0 and R = R_atm (we see the radiance of the whole atmosphere).

At the Earth's surface, τ = τ_g and R = 0 (since there's no atmosphere below us, we don't see any upward atmospheric radiance).

So, the upward radiance from the atmosphere is found by integrating the emission over the whole atmosphere, weighted by the transmission to space.

Transmission: t = e⁻ᵗᵃᵘ.

Or, in terms of z, where z = 0 when τ = τ_g and z = ∞ when τ = 0: the integrand involves dt/dz, the weighting function.

Now, to get the total radiance, we need to add the blackbody radiance emitted by the surface and measured at z = ∞.

The weighting function peaks high or low in the atmosphere depending on whether the opacity is large or small at wavelength λ.

These weighting functions and their peaks are useful for working out atmospheric temperature profiles from measured radiances.

Instruments for remote sensing (obviously we use a satellite to house them).

Basic idea: the measured radiance in channel n is approximately B(T(z_n)), where T(z_n) is the temperature at the height z_n at which that channel's weighting function peaks.

Take the weighted mean over all n, using the weighting functions, to get a complete temperature profile.

Infrared radiometer: used for remote sounding.

1. The radiometer views the Earth through a plane mirror. The mirror can be rotated to view two calibration targets- cold space (zero radiance) and a calibration blackbody at temperature T_C > T_Earth. The calibration targets are used to produce a linear calibration curve of signal vs. radiance.

2. Chopper: mechanical shutter that interrupts the beam to chop it into pulses of frequency 100-1000 Hz. This avoids low frequency noise and drifting due to background, as well as generating an A.C. signal for the amplifier. Radiation can also be labelled by pulse.

3. Optics focus energy onto the detector. IR wavelengths are readily absorbed by many materials so usually reflecting optics are used. Where transmitting components are required, IR transparent materials such as Ge or diamond are used.

4. Filter: selects the wavelength range to which the instrument is sensitive. Consists of a stack of Fabry-Perot etalons comprised of ~100 thin layers of dielectric material deposited on a substrate of IR transmitting material. Can be cooled to cut down background.

5. Detector: either thermal detector or photon detector. Thermal detectors respond to the heating effect of the radiation (thermal resistance or thermoelectric effect). Photon detectors are semiconductors- the incoming photons excite transitions across the band gap. These need to be cooled to cut down on thermal transitions.

6. Amplifier etc: amplifies and outputs signal.

Output signal in the frequency band ν -> ν + Δν:

g = linear gain factor (V J⁻¹)

A = aperture area

Ω = solid angle (AΩ is also known as the étendue)

t_λ = filter transmission at wavelength λ; R_λ = radiance at wavelength λ. We usually take R = B, the blackbody radiance.

The signal can be written as V = XP + Y, where X is gain, Y is offset and P is the power reaching the detector (per unit bandwidth).

Optical throughput/transmission: a factor to take into account any optical losses in the system; usually we do this by replacing AΩ with τAΩ in any expression. A word of warning, however: in some books the term 'optical throughput' is used to mean what we are calling étendue.

Main sources of noise1. Thermal/Johnson noise due to resistance. Proportional to square root of absolute temperature.

2. Shot noise due to finite electron charge. Proportional to square root of current.

3. Photon noise- proportional to square root of flux.

4. Flicker noise- proportional to 1/f.

As the noise sources are random and independent, they add in quadrature.

Signal-to-noise ratio: S/N = signal / total noise.

Noise Equivalent Power (NEP) is the power that must fall on the detector to produce a signal equivalent to the sum of all sources of noise per unit bandwidth. So, despite its name, it's not exactly a power, as it has units of W Hz⁻¹/². NEP = √A / D*, where D* is the quality factor (detectivity) of the detector.

Noise Equivalent Radiance (NER) is the change in the target spectral radiance that produces a change in signal equal to the noise. Consider the power, P, falling on the detector in a particular frequency range ν -> ν + Δν.

Noise Equivalent Temperature (NET) is the change in the temperature of the target that produces a change in signal equal to the noise. If the instrument is viewing a region emitting with black-body radiance, B, then NET = NER / (dB/dT).

Integration time: averaging the signal for longer reduces the random noise and so improves the signal-to-noise ratio.

Greenhouse effect: a popular term for the energy balance at the Earth's surface, which affects its mean temperature. This balance is affected by the abundance of minor constituents in the atmosphere (H₂O vapour, CO₂, etc). Of course, these abundances are increasing because of human activities- theory predicts this may lead to an increase in mean temperature, although the exact changes are not easy to predict.

Solar constant: the flux at Earth due to radiation from the Sun.

Total flux at the Sun's surface: F_⊙ = σT_⊙⁴. Flux at Earth (D = 1 AU): F_E = σT_⊙⁴ (R_⊙/D)² ≈ 1370 W m⁻².

Albedo: a measure of cloud and dust cover, usually given as a fraction A. Only the fraction (1-A) of the solar power incident on the Earth is absorbed. Albedo is difficult to measure exactly (A ~ 0.3).

Effective temperature of the Earth

Total power arriving at Earth: P₁ = (1-A) F_E πR_E² (note that the collecting area is the projected disc πR_E², not the full surface 4πR_E²). Power radiated: P₂ = σ 4πR_E² T_eff⁴.

In equilibrium, P₁ = P₂, so T_eff = [(1-A) F_E / 4σ]^{1/4}.

But we know just from living here that the mean surface temperature is not -23 °C, it's about 10 °C. This is because the atmosphere acts like a blanket.

Simple greenhouse model

Model the atmosphere as a slab at temperature Ta.

Say the atmosphere is completely transparent to short-wave radiation from the Sun (λ < 4 μm) but absorbs the long-wave radiation emitted by the ground.

Balance fluxes in equilibrium.

At the top of the atmosphere: σT_a⁴ = (1-A)F_E/4, so T_a is the effective temperature T_eff (from above we know this balance holds at the top of the atmosphere).

At the surface: σT_s⁴ = (1-A)F_E/4 + σT_a⁴ = 2σT_a⁴, so T_s = 2^{1/4} T_eff.

Despite its simplicity, this model gives a reasonable answer- but we can make better models without too much more effort.
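A numerical sketch of the zero- and one-slab results above (solar constant and albedo are approximate standard values; the slab is assumed fully absorbing in the long-wave):

```python
SIGMA = 5.67e-8   # W m^-2 K^-4
F_E = 1370.0      # solar constant, W m^-2 (approximate)
A = 0.3           # Bond albedo (approximate)

t_eff = ((1 - A) * F_E / (4 * SIGMA)) ** 0.25   # no atmosphere
t_surf = 2 ** 0.25 * t_eff                      # single fully-absorbing slab
print(f"T_eff ~ {t_eff:.0f} K, one-slab surface temperature ~ {t_surf:.0f} K")
```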

Improved greenhouse model

Stratosphere: optically thin, completely transparent to shortwave radiation (λ < 4 μm). Balancing fluxes at the tropopause and at the surface then gives the stratospheric and surface temperatures.

Yet another greenhouse model

At each level the fluxes must balance when in equilibrium: at the top of the atmosphere, at the tropopause and at the surface. Should we wish to get more involved, we could keep on adding layers to build up a more complete profile.

Stratosphere temperature profile: up until now, we have always assumed that the stratosphere is at constant temperature, but in fact this is not quite the case. Ozone in the upper stratosphere absorbs UV radiation from the Sun, leading to an increase in temperature with height.

Main factors affecting climate change

1. Atmospheric pollution caused by human activity; this has three main effects:

a) Increased CO₂ levels contribute to the greenhouse effect. A global temperature rise has been observed, but it is still unclear how much of this is due to natural variation.

b) Increased sulphate concentration may contribute to increased cloud cover.

c) CFCs, ClOx, BrOx and NOx catalyse the breakdown of ozone in the stratosphere, so there is less absorption of solar UV radiation (note that as CFCs can remain in the atmosphere for up to 40 years, they are still present despite the widespread ban on their use).

2. Ocean variations. a) Short-term (decadal) effects include oscillations of wind-driven circulation such as the El Niño phenomenon, which has a definite local impact. b) Of great importance in the long term is the stability of the thermohaline circulation, which moves a lot of heat around the planet. THC shutdown would have a significant global impact.

3. Albedo variations: cloud cover works in opposition to the greenhouse effect. Although it is very difficult to measure albedo accurately, preliminary measurements indicate short-term fluctuations of ~20%.

4. Astronomical (Milankovitch) cycles: variations in the Earth's eccentricity, obliquity and precession of the equinoxes take place with periods of ~10⁴ years. Any associated climate change will be on a similar timescale.

5. Solar constant variations: a) In the short term, these are mainly due to sunspots. b) There also appear to be longer-term variations.

A simple ice-albedo model: assume that the albedo A(T) is constant (high) for an ice-covered Earth, constant (low) for an ice-free Earth and a linear function of T in between.

We get three solutions, A, B and C.

In regions 1 and 3, dT/dt > 0; in regions 2 and 4, dT/dt < 0 -> points A and C are stable fixed points, point B is unstable.

A: ice cover

B: partial ice cover

C: ice free

Obviously we don't live in A or C conditions, so this model is not very accurate (and after we did all that work!).

Stommel's 2-box model of ocean circulation

Each box is a reservoir of well-mixed water.

q, E are volume fluxes.

E is net evaporation minus precipitation.

Assume a linear equation of state.

k = friction coefficient; Δx = x₁ - x₂ for any variable x.

Now we want to set some simple, physically motivated boundary conditions for T and S.

We know that-

a) Air-sea heat exchange tends to restore ocean temperature to equilibrium values over short time scales.

b) Evaporation and precipitation rates do not depend on S.

So we have mixed boundary conditions- restoring for T, fixed flux for S.

Assume ΔT is maintained at a fixed value by air-sea fluxes, and take the flow to be q = k(αΔT - βΔS).

The salt budget for either box gives V dΔS/dt = 2ES₀ - |q|ΔS

(we're using modulus signs because the flux can be either way).

ΔS is small, so S₁ ~ S₂ ~ S₀.

Let q₀ = kαΔT and q_s = 2kβES₀.

Solutions at steady state satisfy q|q| = q₀|q| - q_s. Real solutions are possible given q₀, q_s > 0; there are only ever a maximum of three- two positive and one negative.

The two solutions in the +ve quadrant are thermally direct.

Sinking at high latitudes

The solution in the -ve quadrant is salinity driven.

Note that if qs or E increase, then the thermally direct solutions disappear.

Critical value: q_s = q₀²/4- above this, the thermally direct solutions disappear.

Extra calculations- how do we know we only get three solutions to that quadratic equation? Well, if you have the time and/or like to think about these things, let's go through it.

If q is +ve: q² - q₀q + q_s = 0, so q = [q₀ ± (q₀² - 4q_s)^{1/2}]/2- two roots, both positive, real provided q₀² > 4q_s.

If q is -ve: q² - q₀q - q_s = 0, so q = [q₀ - (q₀² + 4q_s)^{1/2}]/2

(we can't have the other root because |q| must be positive by definition, i.e. q must actually be negative on this branch).
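A short sketch of the steady states of q|q| = q₀|q| - q_s derived above; the units and parameter values are arbitrary and purely illustrative, but the root count (at most two positive, one negative) comes out as stated:

```python
import math

def stommel_roots(q0, qs):
    """Steady states of q*|q| = q0*|q| - qs for q0, qs > 0."""
    roots = []
    disc = q0 ** 2 - 4 * qs
    if disc >= 0:   # two thermally direct (q > 0) solutions
        roots += [(q0 + math.sqrt(disc)) / 2, (q0 - math.sqrt(disc)) / 2]
    # one salinity-driven (q < 0) solution always exists
    roots.append((q0 - math.sqrt(q0 ** 2 + 4 * qs)) / 2)
    return roots

print(stommel_roots(q0=2.0, qs=0.5))   # three roots: two positive, one negative
print(stommel_roots(q0=2.0, qs=2.0))   # qs above the critical value q0^2/4: only the negative root
```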

Condensed Matter I

Types of cohesive force

1. Ionic bonding: transfer of electrons from one atom to another, leading to charged atoms with filled shells, e.g. Na⁺Cl⁻.

2. Covalent bonding: unpaired valence electrons mix (hybridise) to give a lower energy state. Covalent bonding is very directional.

s, px, py, pz -> sp³ hybridisation, tetrahedral diamond structure.

3. Metallic bonding: outer atomic orbitals overlap several other atoms, so the electrons become mobile and delocalised (reducing KE). Metallic bonding is not directional, and is weaker than covalent bonding.

4. Hydrogen bonding: covalent bonds between different atomic species lead to polar molecules with electrostatic attraction between + and - of different molecules. Hydrogen bonding is much weaker than ionic or covalent bonding.

5. Van der Waals forces: the random motion of electrons in an atom leads to a temporarily induced dipole. This in turn induces a dipole on neighbouring atoms.

E-field at atom 2 due to the instantaneous dipole p₁: E ∝ p₁/r³; this induces p₂ = αE. Energy of interaction ∝ -p₁p₂/r³ ∝ -αp₁²/r⁶.

Lattice: an infinite array of points, each with an identical environment. Each point has position r=n1a1+n2a2+n3a3 where the ai are lattice vectors.

Basis: the structural unit (group of atoms) associated with one lattice point.

Crystal structure: the convolution of a lattice with a basis, e.g.

lattice x basis = crystal structure (each dot is a lattice point)

Unit cell: a polyhedral region of space, such that when many identical unit cells are tessellated, they form a crystal structure (i.e. the building block of said structure). The unit cell in 3D can be specified by three vectors, a, b and c. Positions within the unit cell can be specified by fractional coordinates (i.e. fractions of a, b and c).

1. Conventional unit cell: can contain more than one lattice point (is usually an easy to work with shape like a square or cube that makes the maths easier).

2. Primitive unit cell: contains only one lattice point.

3. Wigner-Seitz unit cell: contains one lattice point, and displays the same point symmetry as the lattice.

Condensed Matter II

Lattice dynamics- starting assumptions

1. Adiabatic approximation- assume electrons are attached rigidly to the nuclei.

2. Assume the amplitude of the vibrations is small.

3. Harmonic approximations- terms of order higher than u2 in the interatomic potential are neglected.

One-dimensional monatomic chain: we'll start by only including nearest neighbour forces.

Atomic positions r_n; equilibrium atomic separation r_n - r_{n-1} = a; displacements from equilibrium u_n. Interatomic pair potential φ(r). Potential energy of the nth atom: φ(r_n - r_{n-1}) + φ(r_{n+1} - r_n). Taylor expand about r_n - r_{n-1} = a.

φ'(a) = 0, so the leading term in the energy is (1/2)φ''(a)(u_n - u_{n-1})² for each bond, and the force on atom n is F_n = φ''(a)(u_{n+1} + u_{n-1} - 2u_n). Also note that φ''(a) can be written as the force constant, C.

In fact, having done all that, we could have just started from this point by remembering Hooke's Law F = Cx (where x is displacement from equilibrium).

Aside: just to check that we aren't fiddling our force constants to make everything neat, we can use Newton's third law to show that C_n = C_{-n}.

If we wanted to include more distant neighbour interactions (up to the Nth), our equation at this point would have the form m(d²u_n/dt²) = Σ_{p=1}^{N} C_p (u_{n+p} + u_{n-p} - 2u_n).

Going back to nearest neighbours, try a solution of the form u_n = A exp[i(kna - ωt)]

-> mω² = 2C(1 - cos ka) = 4C sin²(ka/2)

Dispersion relation (nearest neighbours only): ω = 2(C/m)^{1/2} |sin(ka/2)|

In the long wavelength limit (small k), ω ≈ (C/m)^{1/2} a k, so

ω/k = a(C/m)^{1/2} = the speed of sound.
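The dispersion relation evaluated numerically (the force constant, mass and spacing below are illustrative values), showing the linear small-k limit with sound speed a√(C/m):

```python
import math

C = 10.0       # force constant, N/m      (illustrative)
m = 4e-26      # atomic mass, kg          (illustrative)
a = 3e-10      # lattice spacing, m       (illustrative)

def omega(k):
    """Monatomic chain dispersion, nearest neighbours only: 2*sqrt(C/m)*|sin(ka/2)|."""
    return 2 * math.sqrt(C / m) * abs(math.sin(k * a / 2))

v_sound = a * math.sqrt(C / m)
k_small = 0.01 * math.pi / a
print(f"omega/k at small k = {omega(k_small) / k_small:.1f} m/s, sound speed = {v_sound:.1f} m/s")
print(f"zone-boundary frequency omega(pi/a) = {omega(math.pi / a):.3e} rad/s")
```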

Considering more distant neighbours: mω² = Σ_p 2C_p (1 - cos pka).

Rule of thumb- number of modulations = number of further neighbour interactions considered.

Brillouin Zones: all the physics of a system is contained within a Brillouin Zone. The standard definition of the BZ is that it is the Wigner-Seitz unit cell in reciprocal space, but there's no point just quoting this if you don't understand (or can't readily explain) what it means. For that reason, it also helps to remember this alternative definition-

The first Brillouin Zone is the set of points closer to the origin in reciprocal space than any other reciprocal lattice point.

The nth Brillouin Zone is the set of points reached from the origin in reciprocal space by crossing n-1 Bragg planes and no fewer.

Each Brillouin Zone has the same volume. Any normal mode can be represented by a wavevector in the first Brillouin Zone.

1D diatomic chain- intuitive method

For a diatomic chain, the repeat distance is doubled in real space, so it is halved in k-space (i.e. the size of the Brillouin Zone is halved). At first glance, this seems to mean we have lost the information contained in the region between the new and old zone boundaries. If we want to retain it, we will have to translate (fold) it back into the new 1st Brillouin Zone.

If the two types of atom have different masses, a gap opens up between the two branches.

As you'll see from the graphs, we've suddenly introduced the terms acoustic and optic branch, but don't panic. You didn't miss anything- these terms will be described below once we go through the maths of the 1D diatomic chain.

1D diatomic chain- the maths: use all the same assumptions as the monatomic chain.

Equation of motion for the white atoms (mass m₁, displacement u_n): m₁(d²u_n/dt²) = C(v_n + v_{n-1} - 2u_n). Equation of motion for the black atoms (mass m₂, displacement v_n): m₂(d²v_n/dt²) = C(u_{n+1} + u_n - 2v_n). Try solutions u_n = A exp[i(kna - ωt)], v_n = B exp[i(kna - ωt)].

Eliminating A and B leads to ω² = C(1/m₁ + 1/m₂) ± C[(1/m₁ + 1/m₂)² - 4sin²(ka/2)/(m₁m₂)]^{1/2}.

An alternate (and quite neat) method to get this answer without too much trouble is to use matrices as follows.

Start with trial solutions of the form

This gives

We can rewrite this in matrix form

For non-trivial solutions, the determinant of the 2x2 matrix acting on (A, B) must vanish.

This gives us a quadratic in ω² which we can solve easily enough to get the answer given above.
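The same result evaluated numerically: build the quadratic from the trace and determinant of the 2x2 dynamical matrix and solve for ω². The masses, force constant and repeat distance are assumed illustrative values; at k -> 0 one branch is acoustic (ω -> 0) and one optic, and a gap opens at the zone boundary when m₁ ≠ m₂:

```python
import math

C = 10.0                 # force constant, N/m              (illustrative)
M1, M2 = 4e-26, 6e-26    # the two atomic masses, kg        (illustrative)
A = 6e-10                # repeat distance of the chain, m  (illustrative)

def branches(k):
    """(acoustic, optic) frequencies of the 1D diatomic chain.

    From det(D - w^2 I) = 0, i.e. w^4 - tr*w^2 + det = 0, with
    tr = 2C(1/m1 + 1/m2) and det = 4C^2 sin^2(kA/2)/(m1 m2)."""
    tr = 2 * C * (1 / M1 + 1 / M2)
    det = 4 * C ** 2 * math.sin(k * A / 2) ** 2 / (M1 * M2)
    root = math.sqrt(tr ** 2 - 4 * det)
    return math.sqrt((tr - root) / 2), math.sqrt((tr + root) / 2)

print(branches(1e-9))           # near k = 0: acoustic ~ 0, optic finite
print(branches(math.pi / A))    # zone boundary: sqrt(2C/m1) and sqrt(2C/m2) -> a gap
```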

Acoustic and optic modes: first, we'll consider what is going on at points A to D. Throughout this section, m1 > m2.

A. kakT).

We get peaks in the absorption where there are transitions between two energy levels. As the well widens, the energy levels get closer and the absorption tends to the 3D absorption curve drawn earlier.

Quantum well laser: the laser transition is E1 electrons to E1 holes (these are the most populated levels, unsurprisingly). Quantum well lasers are used in fibre optic communications, CD players and laser pointers.

pn junction

Once again the chemical potential must be continuous across the junction. Say the potential across the junction is V0. In the central region, electrons and holes annihilate to produce a depletion layer of low carrier density (ionised impurities are still present). There is a net E-field in this region.

pn junctions are massively useful.

Depletion layer: pn junction, area A.

Potential across junction: V0. Boundary conditions: E=0 at the edges of the depletion layer (x = w_n and x = -w_p); E and V continuous at x=0. Charge density on the p-side: $-eN_A$; charge density on the n-side: $+eN_D$.

Integrate Poisson's equation and use the boundary conditions to get the field and potential across the depletion layer.
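The result of that integration isn't reproduced here, but the standard abrupt-junction expressions can be evaluated with a short Python sketch- the permittivity, doping densities and junction potential below are made-up example values:

import numpy as np

e = 1.602e-19             # electronic charge, C
eps = 11.7 * 8.854e-12    # permittivity (silicon-like value, assumed), F/m
NA, ND = 1e23, 1e22       # acceptor and donor densities, m^-3 (assumed)
V0 = 0.7                  # potential across the junction, V (assumed)

# Charge neutrality NA*wp = ND*wn plus the integrated Poisson equation give:
wn = np.sqrt(2*eps*V0*NA / (e*ND*(NA + ND)))
wp = np.sqrt(2*eps*V0*ND / (e*NA*(NA + ND)))
Emax = e*ND*wn / eps      # peak field, at x = 0

print("wn, wp (m):", wn, wp)
print("total depletion width (m):", wn + wp)
print("peak E-field (V/m):", Emax)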

Ferromagnetism: the exchange integral J>0 (electron-electron coupling). Spins align to create a spontaneous magnetisation at a critical temperature- we can see this using the molecular field (also called the mean field) model.

Say that the atoms look like they are in an effective flux $B_{eff} = B_0 + \lambda M$ (the first term on the RHS is the external field, the second is a molecular field due to exchange interactions). Note that Beff is not a real magnetic field that we can measure, it's just a way of thinking about these interactions.

Assume that Curie's Law holds for the total field Beff.

TC is the critical temperature where we see a spontaneous magnetisation.
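A minimal sketch of the argument in code (the Curie constant and molecular field constant are arbitrary illustrative values): applying Curie's Law $M = \frac{C}{T}B_{eff}$ with $B_{eff}=B_0+\lambda M$ gives a susceptibility that diverges at $T_C = \lambda C$.

# Mean (molecular) field sketch: Curie's Law applied to B_eff = B0 + lambda*M.
# M = (C/T)(B0 + lambda*M)  =>  chi = M/B0 = C/(T - lambda*C), diverging at Tc = lambda*C.
C_curie = 1.0     # Curie constant (arbitrary units, assumed)
lam = 500.0       # molecular field constant (arbitrary units, assumed)
Tc = lam * C_curie
print("Tc = lambda * C =", Tc)

def chi(T):
    return C_curie / (T - Tc)     # valid for T > Tc

for T in (2*Tc, 1.5*Tc, 1.1*Tc, 1.01*Tc):
    print("T =", T, " chi =", round(chi(T), 3))   # chi grows without bound as T -> Tc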

Domains: pure iron only magnetises in the presence of an external field. Why don't all the spins line up spontaneously in zero field? We need to consider the three types of energy in a magnetic crystal.

1. Magnetostatic energy = $\frac{1}{2\mu_0}\int B^2\,dV$. This will be large if all spins are lined up in the same direction.

2. Anisotropy energy: spins like to line up along particular crystal directions. There is an energy cost for any spins not aligned with this easy axis.

3. Exchange energy = $-2J\sum_{\langle ij\rangle}\mathbf{S}_i\cdot\mathbf{S}_j$. There is an energy cost whenever spins are not aligned.

1. All spins are aligned along easy axis but magnetostatic energy is large.

2. Magnetostatic energy is reduced.

3. Further reduction of magnetostatic energy.

4. All the magnetic flux is now contained within the crystal, but now there are lots of atoms with spins perpendicular to the easy axis.

5. Better still- now only a few atoms perpendicular to the easy axis.

So why doesn't this just go on until the domains get infinitely small? To answer this, we need to think about domain walls.

Domain walls (Bloch walls) look like this:

Most of the spins in the wall are not aligned with the easy axis; also, adjacent spins are at an angle to each other, so their exchange energy is higher. This means that each domain wall costs energy. What we have to do is find an optimum where we are not spending too much on any one type of energy.

Thickness of domain walls: energy is minimised by changing the spin direction slowly, in N steps of a small angle $\pi/N$.

For each line of spins, exchange energy cost

(here we're considering the difference in exchange energy when the N spins are at an angle to each other instead of being aligned).

The angle is small, so do a power series expansion.

per line of atoms.

We have $1/a^2$ lines of spins per unit area (a is just the atomic spacing).

Anisotropy energy cost ≈ KNa per unit area (it grows with the wall thickness Na). Total energy cost U_tot = exchange cost + anisotropy cost.

So U_tot is a minimum when dU_tot/dN = 0. For iron, a domain wall will be around 300 atoms thick (i.e. a thickness of about 300a, where a is the atomic spacing).
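Here is a minimal sketch of that minimisation, assuming the usual textbook forms $U_{ex} \approx \pi^2 J S^2/(N a^2)$ and $U_{anis} \approx KNa$ per unit area of wall- the exchange constant, spin, anisotropy constant and spacing are iron-like guesses, not values quoted in the notes:

import numpy as np

# Assumed, iron-like illustrative values
J = 1.0e-20    # exchange constant, J
S = 1.0        # spin
K = 5.0e4      # anisotropy energy density, J/m^3
a = 2.9e-10    # atomic spacing, m

def U_tot(N):
    # wall energy per unit area as a function of the number of steps N
    return np.pi**2 * J * S**2 / (N * a**2) + K * N * a

# dU/dN = 0  =>  N_opt = sqrt(pi^2 J S^2 / (K a^3))
N_opt = np.sqrt(np.pi**2 * J * S**2 / (K * a**3))
print("optimum thickness ~", int(round(N_opt)), "atomic spacings")

N = np.arange(50, 1000)
print("numerical minimum of U_tot at N =", N[np.argmin(U_tot(N))])

With these guessed numbers the optimum comes out at a few hundred lattice spacings- the same order as the ~300 quoted above.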

Impurities and hard magnets: pure iron is a soft magnetic material, i.e. it does not retain its magnetisation in zero field. In order to pin the domain walls and keep the magnetisation, we need to introduce impurities, e.g. carbon in steel.

Magnetisation of a hard magnet

1. Reversible boundary displacement- domain boundaries move a little, but they don't hit impurities so they can easily go back to their original position.

2. Irreversible boundary displacement- the applied field forces the domain walls through impurities. When the field is removed, they can't pass back through the impurities.

3. Magnetisation rotation- all spins line up with the field, i.e. saturation.

Because (2) is irreversible, the system shows hysteresis and an overall magnetisation is retained when the applied field is reduced to zero. A reverse field is then required to demagnetise the material.

Antiferromagnetism: if the exchange integral J<0, neighbouring spins prefer to align antiparallel, so there is no net spontaneous magnetisation.

Magnetic resonance experiments: the basic idea is to split up the atomic energy levels by applying a magnetic field and then observe the transitions between them.

We'll use a J=3/2 level for this example.

Selection rule for the transitions: $\Delta m_J = \pm 1$.

Expect to see three peaks corresponding to the three allowed transitions.

[Figure: magnetic resonance energy levels.] You may not expect there to be a splitting at zero field, but remember that this isn't an isolated atom- it is in a crystal, where there are other perturbations to consider.

Apparatus- Nuclear Magnetic Resonance (NMR)

For ESR (electron spin resonance), we are looking at atomic rather than nuclear transitions so we need a frequency in the GHz region. To achieve this, replace the coil with a cavity resonator. We also need to cool the sample to a low temperature to see a signal.

In both cases, either sweep the frequency through resonance whilst keeping the B-field constant, or vice versa.

Adiabatic demagnetisation (used for cooling): we can cool a solid containing magnetic ions by using a magnetic field. To do this, we apply a field, and then remove it adiabatically (keeping the populations of spin states the same).

Tf is limited by small interactions which split energy levels at T=0.

Superconductivity: in a superconductor, resistivity goes to zero below the critical temperature TC. But a superconductor is not just a perfect conductor, it is also a perfect diamagnet (χ = -1)- it expels magnetic flux (Meissner effect).

Superconductivity can be destroyed by a critical magnetic field BC or critical current JC.

Meissner effect and flux expulsion: when a superconductor is cooled below TC (which is usually in the range of mK to ~160K), surface currents are set up which expel magnetic flux. For a zero resistance metal cooled below TC, the flux becomes trapped rather than expelled.

Type I superconductors

Sharp transition to normal state at BC.

Type II superconductors e.g. Nb

1. Superconducting state.

2. Vortex state: a few lines of flux get through the superconductor surrounded by helical screening currents.

Type II are of more practical use.

Penetration depth: from London theory, surface currents fall off as $e^{-x/\lambda}$, where $\lambda$ is the penetration depth. Magnetisation energy occurs over this penetration depth.

BCS theory, Cooper pairs and energy gap: specific heat, IR absorption and tunnelling all indicate that superconductivity is related to an energy gap (this idea will be discussed further a bit later).

Electrons experience an attraction caused by the interaction with the crystal lattice, leading to binding in pairs (Cooper pairs). Electrons of wavevector k1 and k2 can exchange virtual phonons. This interaction is strongest when k1 = -k2, so electrons bind together in pairs with wavevectors $+k_F$ and $-k_F$.

The pair wavefunction is a spin singlet.

The pair has charge 2e, mass 2m and binding energy $\Delta$ per electron ($2\Delta$ per pair).

Pairs are destroyed when there is enough energy to excite electrons across the gap.

We have a perfect conductor because no scattering can occur until there is sufficient energy to excite pairs across the gap.

Coherence length: say (roughly) that the momentum spread of the paired electrons is $\delta p \sim \Delta/v_F$, so the pair size is $\xi \sim \hbar/\delta p \sim \hbar v_F/\Delta$.

Flux quantisation

Current density for the general wavefunction of a Cooper pair (mass 2m, charge 2e), written as $\psi(\mathbf{r}) = |\psi(\mathbf{r})|\,e^{i\theta(\mathbf{r})}$:

$\mathbf{j}(\mathbf{r})=\frac{-e}{2m}|\psi(\mathbf{r})|^2(\hbar\nabla \theta+2e\mathbf{A})$

Far from the surface of a superconductor in its Meissner state, j=0, so $\hbar\nabla\theta = -2e\mathbf{A}$.

Integrate around closed curve C inside superconductor.

$\psi(\mathbf{r})$ is single-valued, so around a closed loop $\theta$ changes by $2\pi n$ -> $\Phi=\pm\frac{2\pi n \hbar}{2e}=\pm\frac{nh}{2e}=\pm n\Phi_0$

where $\Phi_0 = 2.07\times10^{-15}$ T m² is the flux quantum. Flux around a closed loop is quantised in units of $\Phi_0$.
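A quick check of that number:

h = 6.626e-34      # Planck constant, J s
e = 1.602e-19      # electronic charge, C
print(h / (2*e))   # ~2.07e-15 T m^2, the flux quantum Phi_0 = h/2e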

Evidence for the 2Δ band gap in superconductors

1. Infrared absorption. Analogous to results for the semiconductor band gap; absorption only occurs when $h\nu > 2\Delta$.

2. Tunnelling between two superconductors with a thin barrier (~10^-9 m).

Tunnel current shows features due to alignment of energy levels either side of the barrier- measures energy gaps.

3. The form of the heat capacity indicates a 2Δ band gap- the electronic contribution falls off exponentially below TC.

Experiment to measure the flux quantum

Place sample in a magnetic field of ~10T.

Cool sample through TC.

Vibrate sample between search coils to find the trapped flux.

Cosmology I

What is cosmology?: Cosmology is the study of the origin and evolution of the Universe, including its contents, dynamics and future. Of course, as there is only one universe available to us, we can't just take it down to the lab and run experiments on it; instead, we must rely on observation.

Olbers' Paradox: our most basic observation is that the sky is dark at night- yet in an infinite, homogeneous, static universe with n stars per unit volume (neglecting absorption), we expect to see an infinite total flux from the Universe. Flux from one star of luminosity L at distance r: $\frac{L}{4\pi r^2}$. Total flux: $\int_0^\infty \frac{L}{4\pi r^2}\,n\,4\pi r^2\,dr = nL\int_0^\infty dr \rightarrow \infty$.

Cosmic Microwave Background (CMB): a(n almost) uniform background glow. In the past, the universe was hot enough that everywhere was ionised, and all space was one photosphere. Since then, expansion has caused that radiation to be red-shifted to a lower characteristic temperature of 2.7K. So in fact, the night sky is glowing with uniform brightness- only it is in the microwave spectrum rather than the visible.

Dipole anisotropy: anisotropy in the CMB arising from the absolute space velocity, v, of the Earth. The COBE (Cosmic Microwave Background Explorer) satellite measured v=371 km s^-1, giving ΔT ~ 3 mK (hotter in the direction of motion, colder in the opposite direction).

Consequences of the Earth's motion: the temperature is boosted due to the motion of the observer with respect to the CMB, but the thermal spectrum is retained.

Observed temperature: $T_{obs}(\theta) = \frac{T}{\gamma(1-(v/c)\cos\theta)}$ (the denominator is the Doppler factor). Expand as a series of multipoles: $T_{obs}(\theta) \approx T\left(1 + \frac{v}{c}\cos\theta + \ldots\right)$.
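The first-order (dipole) term has amplitude roughly $(v/c)T$; a quick check of the number quoted above:

T = 2.728          # CMB temperature, K
v = 371e3          # Earth's velocity measured by COBE, m/s
c = 3.0e8          # speed of light, m/s
print((v/c) * T * 1e3, "mK")   # ~3.4 mK, consistent with the ~3 mK dipole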

Recombination and decoupling: after the Big Bang, temperatures were sufficiently high that matter was fully ionised. Thomson scattering was highly efficient, and the universe was in thermal equilibrium. As the universe expanded and cooled, however, it reached a temperature where electrons and protons could combine to form atoms (recombination). Once neutral hydrogen was formed, the universe became transparent to CMB photons. Matter and radiation effectively evolved separately from there (decoupling).

Epoch of last scattering is when a typical CMB photon underwent its last scattering from an electron.

Surface of last scattering: the radiation we see as the CMB appears to come from a spherical surface around the observer such that the radius of the shell is the distance each photon has travelled since it was last scattered at (or after) the epoch of decoupling. This is the surface of last scattering. The surface of last scattering as seen by observers in, say, two different galaxies has the same radius, but they are not the same surface.

Anisotropies in the CMB: the dipole anisotropy is due to the Earth's motion through space, but higher order anisotropies are properties of the CMB itself, and should tell us about the precursors of the structures (e.g. galaxies) that we see today.

Gravitational/Sachs-Wolfe anisotropies: photons climbing out of a gravitational potential well will experience a gravitational red-shift. Photons emitted from regions of low density roll down the gravitational potential and are blue-shifted. All these photons also suffer a time dilation. Both red-shift and time dilation contribute to ΔT/T with terms linear in the potential perturbation. Overall, cold spots correspond to overdensities (seeds of clusters and superclusters) and hot spots correspond to underdensities (seeds of giant voids).

These perturbations have effect over large angular scales (3 degrees).

Intrinsic/Adiabatic perturbations: high density regions will be at a higher temperature. Those denser regions recombine later, so they are less redshifted and appear hotter.

These have an effect over angular scales of 0.1-1 degrees.

Doppler perturbations: at recombination, the plasma has non-zero velocity, leading to Doppler shifts in frequency and hence temperature. These have an effect on the smallest angular scales.

Power spectrum: quantifies the amplitude of temperature fluctuations on different angular scales. Fluctuations about the mean across the sky can be analysed in terms of the auto-correlation function C(θ).

We can express this as a sum of Legendre polynomials, $P_l(\cos\theta)$:

l=0 is the mean temperature over the observed sky

l=1 dipole anisotropy (see earlier)

l≥2: fluctuations on angular scales θ ~ 180°/l. The angular power spectrum quantifies the amplitude ($a_l^2$) of temperature fluctuations on different angular scales.

Cosmic variance affects measurements at low l. No matter how precisely we make our measurements, there is still a large uncertainty: the angular scale is large at low l, and with only 4π steradians of sky available, there is not much scope to make independent measurements.

Geometry of the Universe: predictions from cosmological simulations-

1. If the universe is flat, images will be dominated by hot and cold spots of ~1 degree in size.

2. If the universe is closed, parallel lines converge, thus images will be magnified by the curvature and structure will appear larger than 1 degree.

3. If the universe is open, parallel lines diverge and images will appear smaller.

Spectrum of primordial sound: temperature variations can be regarded as sound waves in the primordial plasma. The angular power spectrum of these images reveals the characteristic scale which dominates them. This physical scale is determined by the product of the sound speed at this time and the age of the universe. The angle that this subtends on the sky will depend on how curved space-time is; the more open the universe, the smaller the angle that will be subtended, leading to an acoustic peak at higher l.

Homogeneity and history: CMB irregularities are on the order of 1 part in 10^5 (equivalent to a swimming pool whose largest ripples are only a hundredth of a millimetre high). The CMB tells us about decoupling, when the universe was ~10^5 years old, and the scale of these irregularities indicates that it was a mostly homogeneous place. Even so, the current universe is a lot lumpier, with stars clustered together in galaxies- we need to explain what happened between then and now to change it.

Gravitational Instabilities: after considering the above, we conclude that structures in the universe evolve through gravitational instability, using the following reasoning.

Start by invoking a long range force, i.e. gravity. If there are small irregularities in the distribution of matter at decoupling, the regions with the highest density will tend to attract matter from the surrounding regions, making them denser still. Therefore, an irregular distribution of matter is unstable under gravity, which is why the universe is no longer the homogeneous place it was at decoupling.

Galaxy spectra

1. A younger galaxy (stars ~10^8 years old) shows strong Balmer absorption lines of hydrogen.

2. An older galaxy has C, N, Mg and Ca lines, with an abrupt falloff at 4000 angstroms.

Galaxy intensity profiles

1. The brightness of the discs of spirals, S0 and many irregular galaxies is reasonably well described by an exponential disc, $I(r) = I_0 e^{-r/r_0}$,

where r0 is the scale length.

2. Bulges of spiral and elliptical galaxies are more centrally concentrated, and need a more empirical law, such as the de Vaucouleurs or $R^{1/4}$ law.

Hubble's Law

The distances and velocities of distant objects are determined independently; allowing for scatter, the general law seems to be $v = H_0 d$,

where H0 is known as Hubble's constant.

Velocity is determined from redshift; distance is determined from luminosity.

Fainter objects (presumably more distant) appear to have higher redshift.

H0 ≈ 50-100 km s^-1 Mpc^-1. The best estimates suggest a value of around 72, but because it is quite difficult to get accurate values of d, H0 still needs calibrating.
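A small worked example of the law, taking H0 = 72 km s^-1 Mpc^-1 as above (the 100 Mpc distance is just an example):

H0 = 72.0                      # km s^-1 Mpc^-1
Mpc = 3.086e22                 # metres in a megaparsec
H0_SI = H0 * 1e3 / Mpc         # s^-1

d = 100.0                      # an example distance, Mpc
print("recession velocity v = H0*d =", H0 * d, "km/s")
print("Hubble time 1/H0 =", 1/H0_SI / 3.15e16, "Gyr")   # ~13.6 Gyr, a rough age scale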

Hubble's Law represents the expansion of the universe (more on this later).

Hubble flow describes the general recession of galaxies on scales greater than tens of megaparsecs. On smaller scales, other mechanisms are at work; for example, the Andromeda galaxy is actually moving towards us.

Expansion of the universe: at this point, it helps to define exactly what is expanding; in a cosmological context, we mean the separation between galaxies. As we will see later, even if everything appears to be moving away from where we are, that does not mean that we are the centre of the expansion. In fact, we do not even have to invoke a centre of expansion at all.

Co-moving coordinates are a coordinate system where the coordinates are carried along with the expansion. As the expansion is uniform, the physical distance r and the co-moving distance x are related by $r = a(t)\,x$.

a(t) is called the scale factor; basically a time dependent magnification factor.

A given volume of coordinate space always contains the same number of galaxies, so the physical number density falls as $a^{-3}$.

a at t=now is called a0 and is usually taken to have a value of 1.

A and B get further apart in space, but they are always separated by one grid length.

Hubble's Parameter:

As $r = ax$ (with x fixed), $\dot{r} = \dot{a}x = \frac{\dot{a}}{a}r$. But from Hubble's Law, $\dot{r} = Hr$, so $H = \frac{\dot{a}}{a}$. This is Hubble's Parameter- what we quoted earlier was Hubble's Constant H0, which is Hubble's Parameter at t=now. H is a constant in space but not in time- H would decrease in time due to the gravitational attraction of matter in the universe.

Redshift due to expansion: consider two galaxies with relative velocity dv

Doppler shift (using the relative velocity of the emitting and observing galaxies): $\frac{d\lambda}{\lambda_{em}} = \frac{dv}{c}$.

With the time between emission and observation dt = dr/c and dv = (ȧ/a)dr, this becomes $\frac{d\lambda}{\lambda_{em}}=\frac{\dot{a}}{a}\,dt = \frac{da}{a}$.

Integrate: $\lambda \propto a$.

Definition of redshift: $1+z = \frac{\lambda_{obs}}{\lambda_{em}} = \frac{a(t_{obs})}{a(t_{em})}$.
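A two-line example of using this definition (the emission-time scale factor is an arbitrary example value):

a0 = 1.0        # scale factor now, by convention
a_em = 0.25     # scale factor when the light was emitted (example value)
z = a0/a_em - 1
print("redshift z =", z, "; wavelengths are stretched by a factor of", 1 + z)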

Masses of spiral galaxies: consider a star orbiting around the centre of a galaxy at a distance R from the centre, and assume that the galaxy has mass M(R) within radius R. Equate gravitational acceleration with centripetal acceleration: $\frac{GM(R)}{R^2} = \frac{v^2}{R}$, so $v = \sqrt{\frac{GM(R)}{R}}$.

Beyond the visible disc, where M(R) should be roughly constant, we would therefore expect $v \propto R^{-1/2}$ for stars as a function of R.

(Note: rotational velocities beyond the visible part of the galactic disc are obtained by measuring the Doppler shifts in the 21 cm hydrogen emission line.)

Orbital velocities are not proportional to $R^{-1/2}$, but instead seem to be roughly constant out to the largest radii. It seems that luminous matter is not the only thing making a contribution to M(R)- the measured velocities imply that there is ten times as much matter as is directly seen. This is dark matter.
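A rough numerical illustration- the luminous mass and the flat rotation speed below are invented, broadly Milky-Way-like numbers:

import numpy as np

G = 6.674e-11                  # gravitational constant, SI units
kpc = 3.086e19                 # metres per kiloparsec

M_lum = 1e41                   # assumed luminous mass, kg (~5e10 solar masses)
R = np.array([5, 10, 20, 40]) * kpc

# Keplerian expectation if all the mass were within the visible disc
print("expected v (km/s):", np.sqrt(G * M_lum / R) / 1e3)

# A flat rotation curve v ~ const instead implies M(R) = v^2 R / G, growing with R
v_obs = 220e3                  # assumed flat rotation speed, m/s
print("implied M(R) (solar masses):", v_obs**2 * R / G / 2e30)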

Of course, our basic assumption is that the laws of physics apply on these scales.

Cosmology II

Horizons arise from causality, and delimit any observer's zone of vision (past and future). They are dependent on the observer's position.

Particle horizon: encloses all the particles an observer could have seen at the present time.

Co-ordinate distance travelled by a light ray from the beginning of the universe up to a time t: $x = \int_0^t \frac{c\,dt'}{a(t')}$.

Physical distance: $d_p(t) = a(t)\int_0^t \frac{c\,dt'}{a(t')}$.

This is the maximum straight-line distance which could have been travelled by a light ray since t=0 (i.e. dp is the radius of a sphere enclosing all the other particles that could have been seen). dp(t) is the particle horizon distance.

Event horizon encloses all parts of the universe which in principle can be reached (although in practice it could take a while to get there).

Copernican principle: the universe is the same whoever and wherever you are. Obviously this is not exact, but it holds better on increasingly larger scales.

Homogeneity and isotropy:

1. Homogeneity: the universe looks the same at every point.

2. Isotropy: the universe looks the same in all directions- there is no need to invoke any centre of the expansion.

Note: homogeneity does not imply isotropy.

Friedmann equation (non-rigorous derivation): this equation describes the expansion of the universe.

Method: find the kinetic and gravitational potential energy of a test particle in a uniformly expanding medium with mass density ρ - then use conservation of energy.

As the universe is homogeneous, we can take any point as the centre. The test particle has mass m, and from Gauss's Law, only feels a force from the material at smaller radii.

Mass of force-exerting material: $M = \frac{4}{3}\pi r^3\rho$. Force: $F = \frac{GMm}{r^2}$. GPE: $V = -\frac{GMm}{r} = -\frac{4\pi G\rho r^2 m}{3}$. KE: $T = \frac{1}{2}m\dot{r}^2$. Energy conservation: $U = T + V$.

U is a constant for a particular particle (not necessarily the same for particles at different r).

Substitute r = ax and multiply through by $\frac{2}{ma^2x^2}$.

Put $kc^2 = -\frac{2U}{mx^2}$, where k is the curvature (see later).

This is the Friedmann equation: $\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2}$.

The fluid equation: now we need to describe the density of material in the universe, and how it evolves. Consider a volume V with unit co-moving radius (so r=a).

Energy: $E = \frac{4}{3}\pi a^3\rho c^2$.

First Law of Thermodynamics: $dE = TdS - pdV$. For a reversible expansion, dS=0.

So: $\dot{\rho} + 3\frac{\dot{a}}{a}\left(\rho + \frac{p}{c^2}\right) = 0$.

This is the fluid equation.

Dust: cold, non-relativistic matter which exerts negligible pressure (p=0).

Radiation: light has pressure $p = \frac{\rho c^2}{3}$. These are the two equations of state we will need for the next section.

For a dust dominated universe, solve the fluid equation for the dust equation of state (p=0): $\dot{\rho} + 3\frac{\dot{a}}{a}\rho = 0$, so $\rho \propto a^{-3}$.

Density falls off in inverse proportion to the volume- not unreasonable.

Coefficient of proportionality: $\rho = \frac{\rho_0 a_0^3}{a^3} = \frac{\rho_0}{a^3}$.

Remember, when t=0, a=0, and a0=1 by convention.

Substitute into the Friedmann equation with k=0: $\dot{a}^2 = \frac{8\pi G\rho_0}{3a}$.

To solve, either-

Try: $a \propto t^q$.

Equate powers of t: $q = \frac{2}{3}$. Or-

Full solution: $a(t) = \left(\frac{t}{t_0}\right)^{2/3}$.

In this solution, the universe continues to expand forever, but the rate of expansion becomes infinitely slow with time.

For a radiation dominated universe, the fluid equation now becomes $\dot{\rho} + 4\frac{\dot{a}}{a}\rho = 0$, so $\rho \propto a^{-4}$.

Friedmann equation: $\dot{a}^2 = \frac{8\pi G\rho_0}{3a^2}$.

To solve, either-

Try: $a \propto t^q$.

Equate powers of t: $q = \frac{1}{2}$. Or-

Full solution: $a(t) = \left(\frac{t}{t_0}\right)^{1/2}$.

Universe still expands forever, just more slowly.
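A quick numerical sanity check, in arbitrary units with the constant 8πGρ0/3 set to 1, that integrating the k=0 Friedmann equation really does give these power laws:

import math

# a_dot = sqrt(const) * a^(1 - n/2) follows from the k=0 Friedmann equation
# with rho = rho0 * a^-n (n = 3 for dust, n = 4 for radiation).
def expand(n, steps, dt=1e-5):
    const = 1.0                      # stands in for 8*pi*G*rho0/3
    a, t = 1e-4, 0.0                 # start from a very small scale factor
    for _ in range(steps):
        a += math.sqrt(const) * a**(1 - n/2) * dt
        t += dt
    return t, a

for n, label, expected in ((3, "dust", "2/3"), (4, "radiation", "1/2")):
    t1, a1 = expand(n, 200000)
    t2, a2 = expand(n, 400000)
    slope = math.log(a2/a1) / math.log(t2/t1)   # effective power-law index of a(t)
    print("%s: a ~ t^%.2f (expected %s)" % (label, slope, expected))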

Note here that $\rho_{rad} \propto a^{-4}$, whereas $\rho_{dust} \propto a^{-3}$- an extra power is picked up here because the wavelength is stretched by the scale factor.

Mixture: total density $\rho = \rho_{dust} + \rho_{rad}$. We can solve the equations exactly for a mixture, but to get a feel for what is actually going on, consider the long-term evolution of a universe where one or the other dominates.

If radiation dominates: $a \propto t^{1/2}$, so $\rho_{rad} \propto t^{-2}$ and $\rho_{dust} \propto t^{-3/2}$.

Density of dust falls off more slowly and will come to dominate.

If dust dominates: $a \propto t^{2/3}$, so $\rho_{dust} \propto t^{-2}$ and $\rho_{rad} \propto t^{-8/3}$.

Dust remains dominant.

Matter-radiation equality

As dust comes to dominate, the expansion rate speeds up from $a \propto t^{1/2}$ to $a \propto t^{2/3}$.

Destiny of the Universe: For the expansion of the universe to stop, we need the Hubble Parameter H=0.

If k=0: in this case, the Universe expands forever, although the expansion is continually slowing down. This is known as a flat, Euclidean universe.

If k<0: the RHS of the Friedmann equation is always positive, so H never reaches zero and the universe expands forever -> free expansion.

If k>0: H can now be zero if the RHS terms in the Friedmann equation cancel. At large times, the second RHS term dominates, but is now negative; because gravitational attraction will still persist, collapse is inevitable (the Big Crunch).

Scale factor vs. time

Density parameter and critical density: once again, we go back to the Friedmann equation.

For a given value of H, there is a special value of the density which would mean k=0 (flat universe). This is known as the critical density, $\rho_c$, given by $\rho_c = \frac{3H^2}{8\pi G}$.

If ρ exceeds this, k>0 and the Big Crunch is inevitable.

The density parameter, $\Omega = \rho/\rho_c$, is just a way of quoting densities in terms of the critical density.
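For a concrete number (again taking H0 = 72 km s^-1 Mpc^-1 as an example value):

import math

G = 6.674e-11                  # gravitational constant, SI units
H0 = 72e3 / 3.086e22           # 72 km/s/Mpc converted to s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)
print("critical density:", rho_c, "kg/m^3")
print("equivalent protons per cubic metre:", rho_c / 1.67e-27)   # only a few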

The cosmological constant: we can introduce a new term into the Friedmann equation, $\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda}{3}$,

where Λ is the cosmological constant. This may be thought of as representing a zero-point or vacuum energy, or perhaps even dark energy.

Matter contribution to the density parameter, $\Omega_M$: this is not easy to measure.

1. First, count stars- $\Omega_{stars} \approx 0.005$ to 0.01.

2. Cold gas content- as well as stars, there is also cold gas which has failed to form stars, and low-mass non-luminous stars (Jupiters). Light element abundances and nucleosynthesis models give 0.01c, then B and C have spacelike separation.

Active galaxies- models: Any model must explain the high luminosity- a small but efficient energy source is needed. Possibilities are-

1. Star cluster- luminosity due to supernova. But observed variability is too coherent for a supernova, the lifetime is too short and there is no explanation for the radio jets.

2. Supermassive stars- again unlikely, because these are unstable to collapse.

3. Supermassive black hole (fuelled by accretion) provides the best possibility- it is small and efficient with a stable potential well and a stable direction for radio jets. There is evidence for such enormous masses in nearby galaxies.

Cosmology IV

Temperature, energy and scale factor

Define $H_0 = 100h$ km s^-1 Mpc^-1. Define the energy density for black body radiation as $\epsilon = \alpha T^4$, where α is the radiation constant. Using the observed CMB temperature, this gives the present radiation energy density, which can be compared with the critical density.

CMB radiation is a small but not insignificant fraction of the critical density.

We know $T \propto \frac{1}{a}$, since photon wavelengths are stretched by the expansion.

Or to put it simply, the universe cools as it expands. The particle distribution continues to correspond to a thermal distribution, but with a lower temperature.

Photon to baryon ratio: if interactions are negligible, particles cannot simply disappear, and so particle number densities reduce in inverse proportion to the volume ($n \propto a^{-3}$). This is true for protons and neutrons (baryons) and CMB photons; the ratio of photons to baryons is therefore a constant.

Photon number density- present CMB energy density is at the top of the page.

Mean photon energy $\approx 2.7 k_B T \approx 1.0\times10^{-22}$ J for T=2.728 K.

Number density of photons $n_\gamma = \epsilon/\langle E\rangle \approx 4\times10^8$ m^-3.

Baryon number density- from nucleosynthesis, the density parameter for baryons is $\Omega_B \approx 0.02h^{-2}$. Energy density $\rho_B c^2 = \Omega_B \rho_c c^2$; rest mass ~938 MeV -> number density $n_B \approx 0.22$ m^-3. The total baryon energy density considerably exceeds that of the photons, but when it comes to numbers there are ~10^9 photons for every baryon.
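These numbers can be checked with a short sketch- it uses the standard black-body result $n_\gamma = \frac{2\zeta(3)}{\pi^2}\left(\frac{k_BT}{\hbar c}\right)^3$ for the photon number density and the 0.22 m^-3 baryon estimate from above:

import math

k_B, hbar, c = 1.381e-23, 1.055e-34, 3.0e8
T = 2.728                                   # CMB temperature, K

zeta3 = 1.202                               # Riemann zeta(3), approximate
n_gamma = (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3
n_B = 0.22                                  # baryons per m^3, from the text

print("photons per m^3:", n_gamma)          # ~4e8
print("photons per baryon:", n_gamma / n_B) # ~2e9, i.e. of order 10^9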

Neutrino energy density is ~0.68x the photon energy density, so the density parameter for relativistic particles is $\Omega_{rel} \approx 1.68\,\Omega_\gamma$ (which scales as $a^{-4}$). Non-relativistic matter scales as $a^{-3}$.

At recombination z~1000

$a = \frac{a_0}{1+z} \approx 10^{-3}$, as a0=1.

At decoupling, unless $\Omega_0 h^2$ is very much smaller than we currently think it is, the Universe will be matter dominated.

At matter-radiation equality, $\rho_{rel} = \rho_{non-rel}$.

Thermal history: assume an instantaneous transition between radiation domination and matter domination. For matter domination and a flat universe, $a \propto t^{2/3}$, so $T \propto a^{-1} \propto t^{-2/3}$.

T0=2.728 K, t0=3×10^17 s.

This holds for $t > t_{eq}$. So the time of matter-radiation equality is given by

At temperatures above Teq, radiation domination takes over, and from the expansion law $a \propto t^{1/2}$, we get $T \propto t^{-1/2}$,

where the constants of proportionality are fixed at matter-radiation equality.

Ignoring the dependence on $\Omega_0$ and h, we have $T \approx 2\times10^{10}\,\mathrm{K}\,\left(\frac{t}{1\,\mathrm{s}}\right)^{-1/2}$.

When the universe was 1 second old, the temperature would have been about 2×10^10 K and the typical energy would be about 2 MeV, almost ready for nucleosynthesis cookery (not as tasty as real cookery- see the earlier graph of log(T) vs. log(t) in the matter-radiation equality section).
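A one-line check of that energy scale:

k_B = 1.381e-23                 # J/K
eV = 1.602e-19                  # J per eV
print(k_B * 2e10 / eV / 1e6, "MeV")   # k_B*T at 2e10 K is ~1.7 MeV, i.e. ~2 MeV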

Absorption and optical depth: any process which removes photons from a beam will be called absorption here (including scattering).

$dI_\lambda = -\kappa_\lambda\,\rho\,I_\lambda\,ds$, where $dI_\lambda$ is the change in intensity at wavelength λ, $\kappa_\lambda$ is the absorption coefficient (opacity), ρ is the density, $I_\lambda$ is the intensity and s is the distance travelled,

or, for a gas of uniform opacity and density, $I_\lambda = I_{\lambda,0}\,e^{-\kappa_\lambda\rho s}$.

Characteristic distance: $1/(\kappa_\lambda\rho)$. For scattered photons, the characteristic distance is the photon mean free path $1/(n\sigma)$. $\kappa_\lambda\rho$ or $n\sigma$ can be thought of as the gas target area encountered by a photon for every unit length it travels.

Define optical depth as $\tau_\lambda = \int \kappa_\lambda\,\rho\,ds$.

If the optical depth of the ray's starting point is 1, the intensity of the ray will decline by a factor of e^-1 before escaping.

Optical depth may be thought of as the number of mean free paths from the original position to some other significant point (e.g. the surface of a star or the boundary of a gas cloud).

If τ >> 1, the material is optically thick; if τ << 1, it is optically thin.
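A tiny worked example- the number density and path length are invented, and the cross-section is the Thomson value:

import math

n = 1e6             # scatterers per m^3 (assumed)
sigma = 6.65e-29    # Thomson cross-section, m^2
s = 3.1e21          # path length, m (roughly 100 kpc, assumed)

tau = n * sigma * s
print("optical depth tau =", tau)
print("fraction of photons surviving exp(-tau) =", math.exp(-tau))
print("mean free path 1/(n*sigma) =", 1/(n*sigma), "m")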

Conservation of mass: the total inflow of mass through the surface of a container = the rate of increase of mass in the container. For a fixed container, V=constant, so the time derivative can be taken inside the volume integral.

$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0$; using a vector identity, $\frac{D\rho}{Dt} + \rho\nabla\cdot\mathbf{u} = 0$.

Following a blob of incompressible fluid, $\frac{D\rho}{Dt}=0$, so $\nabla\cdot\mathbf{u}=0$. Incompressibility is a good approximation for liquids, and for gases where the flow speed is much less than the speed of sound. Under incompressible conditions, sound waves can be neglected.

Rectilinear flow between parallel plates: take the flow to be in the x-direction, $\mathbf{u} = (u, 0, 0)$, with the plates normal to the z-axis.

Now use the Navier-Stokes equation, but neglect gravity.

x-component: $\rho\left(\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}\right) = -\frac{\partial p}{\partial x} + \mu\nabla^2 u$

-> $\rho\frac{\partial u}{\partial t} = -\frac{\partial p}{\partial x} + \mu\left(\frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right)$, as the other terms are zero (continuity gives $\frac{\partial u}{\partial x} = 0$).

If we take the x-derivative of the above, all terms involving u must vanish, because $\frac{\partial u}{\partial x}=0$. So $\frac{\partial^2 p}{\partial x^2}=0$ and $\frac{\partial p}{\partial x}$ is independent of x.

y and z components: these give $\frac{\partial p}{\partial y} = \frac{\partial p}{\partial z} = 0$ (gravity is being neglected), so p depends only on x and t. Now make the further assumption that the flow is steady and y-independent.

$\frac{\partial u}{\partial t} = 0$ (steady) and $\frac{\partial u}{\partial y} = 0$. For steady flow, p is also independent of t, so $\frac{\partial p}{\partial x}$ is a constant.

Define $G = -\frac{dp}{dx}$, the pressure gradient.

Putting this all into the original x-component equation, we end up with $\mu\frac{d^2u}{dz^2} = -G$.

Boundary conditions: for viscous flow, the no slip condition means that the fluid comes to rest at the walls, i.e. u=0 when z=±h.

We get a parabolic flow profile- an example of Poiseuille flow.

Volume flux per unit y distance for the above flow is $\int u\,dz$.

Flow in a circular pipe

Assume the flow is independent of θ and t -> u=u(r). Taking the radial part of del-squared, the axial component of the Navier-Stokes equation becomes $\frac{\mu}{r}\frac{d}{dr}\left(r\frac{du}{dr}\right) = -G$.

Boundary conditions: u=0 at r=a (pipe wall- no slip condition again) and no singularity at r=0.

Integrate once: $r\frac{du}{dr} = \frac{-Gr^2}{2\mu} + const$; putting r=0 -> const=0.

$u=\frac{-Gr^2}{4\mu}+const$

As u=0 at r=a, const = $\frac{Ga^2}{4\mu}$, so $u = \frac{G(a^2 - r^2)}{4\mu}$.
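A quick check of this profile against the volume flux and average speed quoted next- the pressure gradient, viscosity and radius are illustrative values, and the closed forms are the standard Poiseuille-pipe results:

import math

G = 100.0      # pressure gradient -dp/dx, Pa/m (assumed)
mu = 1.0e-3    # viscosity, Pa s (roughly water, assumed)
a = 5.0e-3     # pipe radius, m (assumed)

def u(r):
    return G * (a**2 - r**2) / (4 * mu)

# Numerical integral of u(r) * 2*pi*r dr over the cross-section
N = 10000
dr = a / N
flux = sum(u((i + 0.5)*dr) * 2*math.pi * (i + 0.5)*dr * dr for i in range(N))

print("numerical volume flux:", flux, "m^3/s")
print("pi*G*a^4/(8*mu)      :", math.pi * G * a**4 / (8*mu))
print("average speed G*a^2/(8*mu):", G * a**2 / (8*mu), "m/s")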

Volume flux = $\frac{\pi G a^4}{8\mu}$; average speed = $\frac{Ga^2}{8\mu}$.

Reynolds number, Re, is a dimensionless number used to quantify the importance of viscosity. We do this by considering the ratio of inertial forces to viscous forces, and then simplifying with scale analysis.

$Re = \frac{\rho U L}{\mu}$, where U is a typical velocity scale and L a typical length scale of variation.

If Re>>1, viscosity is usually unimportant.

If Re<<1, viscous forces dominate the flow.

Flow between concentric rotating cylinders: for a purely azimuthal flow $\mathbf{u} = u(r)\,\mathbf{i}_\theta$, the two terms on the LHS of the steady Navier-Stokes equation are the centrifugal and pressure gradient terms respectively, whilst the RHS terms are viscous.

The $\mathbf{i}_r$ and $\mathbf{i}_\theta$ components must vanish separately.

For the $\mathbf{i}_\theta$ component: $0 = \frac{d}{dr}\left(\frac{1}{r}\frac{d(ru)}{dr}\right)$.

Try $u \propto r^n$

-> $n = \pm 1$, giving the general solution $u = Ar + \frac{B}{r}$.

-> Boundary conditions: u=0 at r=a (inner cylinder at rest) and u=Ωb at r=b (outer cylinder rotating at angular velocity Ω).

At r=a: $Aa + \frac{B}{a} = 0$.

At r=b: $Ab + \frac{B}{b} = \Omega b$.

We can now work out the torque needed to keep the inner cylinder at rest.

Tangential viscous stress on inner cylinder:

per unit area of cylinder

Torque per unit z-distance on inner cylinder:

This can be used as a way of measuring the viscosity μ (a viscometer).
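A hedged numerical sketch of the viscometer calculation- it assumes the boundary conditions above, uses the standard expression $\sigma_{r\theta} = \mu r \frac{d}{dr}\left(\frac{u}{r}\right)$ for the tangential viscous stress, and the dimensions, viscosity and rotation rate are invented:

import math
import numpy as np

mu = 1.0e-3       # viscosity, Pa s (assumed)
a, b = 0.05, 0.06 # inner and outer cylinder radii, m (assumed)
Omega = 10.0      # angular velocity of the outer cylinder, rad/s (assumed)

# Solve the boundary-condition equations A*a + B/a = 0 and A*b + B/b = Omega*b
A, B = np.linalg.solve([[a, 1/a], [b, 1/b]], [0.0, Omega*b])

# Tangential stress on the inner cylinder, mu*r*d/dr(u/r) at r=a, is -2*mu*B/a^2
sigma = -2 * mu * B / a**2
torque_per_length = sigma * a * (2*math.pi*a)    # stress x circumference x lever arm
print("A, B:", A, B)
print("torque per unit length:", torque_per_length)
print("closed form 4*pi*mu*Omega*a^2*b^2/(b^2-a^2):",
      4*math.pi*mu*Omega*a**2*b**2/(b**2 - a**2))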

As Ω increases, laminar flow breaks down and we see waves and turbulence.

Finally, to find the pressure distribution, use the radial component: $\frac{dp}{dr} = \frac{\rho u^2}{r}$.

Vorticity, ω, is the curl of the velocity field: $\boldsymbol{\omega} = \nabla\times\mathbf{u}$.

Vorticity is the measure of the local spin or rotation of the fluid, but it is not in general a measure of large scale rotation.
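To illustrate the "local spin, not large-scale rotation" point with a small example: solid-body rotation has uniform vorticity 2Ω, while a straight shear flow with no large-scale rotation still has non-zero vorticity.

import numpy as np

x = np.linspace(-1, 1, 101)
y = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, y, indexing="xy")

def vorticity_z(u, v):
    # z-component of curl(u) for a 2D flow: dv/dx - du/dy
    dv_dx = np.gradient(v, x, axis=1)
    du_dy = np.gradient(u, y, axis=0)
    return dv_dx - du_dy

Omega = 3.0
print(vorticity_z(-Omega*Y, Omega*X).mean())   # solid-body rotation: ~2*Omega = 6

gamma = 2.0
print(vorticity_z(gamma*Y, 0*X).mean())        # simple shear u=(gamma*y, 0): ~ -gamma = -2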