Fundamental Physics on board
the International Space Station
A roadmap for physics aboard the ISS
A report of the Topical Team
Hansjörg DITTUS¹, Claus LÄMMERZAHL¹, and Nick LOCKERBIE²
¹ ZARM, University of Bremen, Am Fallturm, 28359 Bremen, Germany
² Department of Physics, University of Strathclyde, Glasgow G4 0NG, Scotland, UK
Members of the Topical Team
Giovanni AMELINO-CAMELIA
Dipartimento di Fisica, University Roma La Sapienza (Roma 1),
Piazzale Moro 2, 00185 Roma, Italy
Christian BORDÉ
Laboratoire de Physique des Lasers, UMR 7538, Institut Galilée
Université Paris 13, 99 Avenue Jean - Baptiste Clément
93430 Villetaneuse, France
Hansjörg DITTUS
ZARM, University of Bremen, Am Fallturm, 28359 Bremen, Germany
Barry KENT
1.24 R68 Rutherford Appleton Laboratory, Chilton Didcot,
Oxon OX11 0XQ, UK
Claus LÄMMERZAHL
ZARM, University of Bremen, Am Fallturm, 28359 Bremen, Germany
Nicholas LOCKERBIE
Department of Physics, University of Strathclyde, 107 Rottenrow, Glasgow
G4 0NG, Scotland, UK
Frank LÖFFLER
Physikalisch-Technische Bundesanstalt PTB, Bundesallee 100,
38116 Braunschweig, Germany
Ernst RASEL
Institute for Quantum Optics, University Hannover, Im Welfengarten 1,
Hannover, Germany
Christophe SALOMON
Laboratoire Kastler Brossel, Departement de Physique de l'ENS,
24 rue Lhomond, 75231 Paris, France
Stephan SCHILLER
Institute for Experimental Physics, Heinrich-Heine-University of Düsseldorf,
Universitätsstr. 1, Building 25.42, Düsseldorf, Germany
Tim SUMNER
Imperial College of Science, Technology & Medicine, Prince Consort Road,
London SW7 2BZ, UK
Pierre TOUBOUL
ONERA, Département Mesures Physiques,
29 Ave de la Division Leclerc, BP 72, 92322 Châtillon Cedex, France
Stefano VITALE
Department of Physics, University of Trento, 38050 Povo, Trento, Italy
Introduction
Physics Needs Space
We are living through a period in which Physics faces exciting new
challenges, which might result in a completely new understanding of our picture of
the world. High precision laboratory-based experiments (and observational astronomy
of increasing sensitivity) have now offered us tantalising glimpses into the structure of
our universe, both on the large and microscopic scales. Some of the results obtained
so far have come as revelations, causing much re-examination of established theories
in the domain of Fundamental Physics—the part that deals with the very foundations
of physics—and the consequences may be far-reaching. For example, the required
unification of quantum theory and gravity would potentially give a new framework to
physics. This endeavour on the theoretical side has been underpinned largely by the
experimental work mentioned above, but experimental Fundamental Physics today
faces many challenges, both intrinsic and practical (creating a new definition for the
kilogramme, for example, would fall into the latter category), and in order to make
further progress we require experiments to be carried out increasingly under
conditions that cannot be realised on Earth. Indeed, in fundamental physics
experiments a very low-noise environment is mandatory, whilst in many cases it is
necessary to 'switch off' gravity as well, since this would otherwise be a disturbing or
a competing interaction. Furthermore, larger differences in height, or changes in the
velocity, of the laboratory than are obtainable on Earth are essential in order to
perform certain experiments with sufficiently high precision: in reality, all of these
exacting conditions can only be met in space. Consequently, there are many high
precision experiments in Fundamental Physics (FP) that await a suitable space
platform, before they may be attempted.
Space Stimulates Physics
The desire to improve our knowledge of the natural world at its most fundamental
level acts as a stimulus to the development of both science and technology, in a
mutually beneficial fashion. New experimental results always influence the
development not only of theoretical physics, but also of experimental physics. New
technologies are developed and applied, which in their turn promise new results for
physics, as well as being applicable to, say, industrial production.
What is Fundamental Physics?
Today's FP can be divided into two categories: the category of universal theories and
the category of interactions. The universal theories are
Quantum Theory [characterized by ħ]
Special Relativity [characterized by c]
General Relativity [characterized by κ = G/c²]
Statistical physics [characterized by kB]
and the four interactions are
The Gravitational interaction [characterized by G]
The Electromagnetic interaction [characterized by the fine structure constant, α]
The Weak interaction [characterized by αweak]
The Strong interaction [characterized by αstrong]
As can be seen from the list of interactions, the gravitational interaction plays a
double rôle both as a universal theory, and as a particular interaction, which is
certainly one reason for the difficulty so far in quantizing gravity or unifying it with
the other interactions (indeed, all attempts in this direction seem to lead to a violation
of its universality).
Figure 1: The scheme of Fundamental Physics. We have three fundamental physical
phenomena, Newtonian gravity, Special Relativity and quantum mechanics. The
unification of gravity and relativity yields General Relativity, the unification of quantum
mechanics and relativity gives quantum field theory, and the unification of all three is
assumed to result in a final quantum gravity theory. [Figure self-made]
What are the important problems in Fundamental Physics?
The most outstanding problem in the first category is the incompatibility of quantum
theory and gravity, and some consequential examples of where our present notion of
physics breaks down are in the information loss in black holes, in the nature of such
singularities, and in the 'big bang' at the birth of our universe. This implies the need
for a unified description of the underlying physics, that is, a theory of quantum
gravity.
There are several approaches to such a unified description, namely string theory and
the canonical quantization of gravity or loop gravity. While the canonical
quantization procedure focuses on the quantization of gravity, string theory, in
addition, aims also to solve another major problem in this category, namely the
unification of the interactions. There are good reasons in support of such a unification
in the form of the universality principles of Special and General Relativity, that is to
say the relativity principle, the Weak Equivalence Principle, and the universality of
the gravitational red shift. In the second category of FP there are problems involving
theories of the four interactions themselves, such as the time-dependence of the fine
structure constant, whilst realisation of the Watt balance would be an example of a
practical problem that needs to be addressed in the area of Metrology (see Fig.2).
The practical importance of Fundamental Physics
It should be stressed that FP is not only a question of the foundations of physical
theories but also covers areas of practical purposes. One of these areas is modern
metrology. As can be seen from Fig.2, the validity of basic principles underlying
Special and General Relativity, and, of course, also Quantum Theory are necessary
for the uniqueness of the definition of the most important physical units, the second,
the meter and the kilogram. If, for example, the velocity of light turned out not to be
constant, then the definition of the meter could no longer rely on the definition of the
second, and the theory of Special Relativity would no longer be valid. In this case the
meter would probably have to be defined once again by the platinum meter stick
produced in Paris in 1799, Fig.3.
Fig.2: Fundamental principles of physics and their importance for modern metrology.
On the right side, today's physical units are depicted together with their
interdependences. In particular, the definitions of the second, the meter and the
kilogram require the validity of basic principles of Special and General Relativity.
[Figure self-made.]
Similarly, if the Weak Equivalence Principle (the equivalence of free fall of bodies in
a gravitational field) were violated, then the foundation of General Relativity itself
would be destroyed (see Fig.2), and a gravitational definition of the unit of mass
could no longer work. In addition, the uniqueness of the gravitational redshift, one
of the foundations of General Relativity, ensures the uniqueness of the definition of
the second.
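The size of the gravitational redshift referred to here can be made concrete with the weak-field formula Δν/ν ≈ gΔh/c². The following sketch uses standard constants; the one-metre height difference is our own illustrative choice, not a figure from the text:

```python
C = 299_792_458.0   # speed of light, m/s
G_SURFACE = 9.81    # gravitational acceleration at Earth's surface, m/s^2

def fractional_redshift(height):
    """Weak-field gravitational redshift between two clocks separated
    vertically by `height` metres: dnu/nu ~ g*h/c^2."""
    return G_SURFACE * height / C ** 2

# One metre of height difference already shifts clock rates at the
# 1e-16 level, within reach of today's best clocks.
print(f"{fractional_redshift(1.0):.2e} per metre")
```

At this level of precision, the definition of the second becomes entangled with the gravitational potential of the laboratory, which is why the uniqueness of the redshift matters for metrology.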
Figure 3: Paris metre stick. [Figure from PTB web-page]
The definitions of the second and the meter are only two examples of the influence of
Fundamental Physics in metrology. Due to the high-precision repeatability of
quantum-mechanical effects, it is a current programme in modern metrology to base all
units on quantum effects. As an example we mention the quantum definitions of
electrical resistance and electrical voltage, which are now based on the von Klitzing
constant realized in the quantum Hall effect, and on the Josephson effect, an
interference effect in superconductivity. The extraction of the corresponding units for
the resistance and the voltage depends heavily on the underlying theory. If the theory
turned out not to be exactly valid – which might happen if Special Relativity, for
example, proved to be invalid – then this connection between experimental definition
and theoretical concept would lose its meaning and, once again, each quantity would
have to be defined by one principal standard, with all others fabricated by
comparison only.
Metrology is only one application of Fundamental Physics. Other applications are the
Global Positioning System (GPS) and geodesy, both of which rely heavily on the
validity of Special and General Relativity. Spectroscopy, too, which is important for
material analysis, for example, needs Special Relativity in order to interpret in a
consistent and correct way the multitude of spectral lines in atomic and molecular
spectra. Furthermore, for reliable data transfer around the world the special and
general relativistic time delays have to be taken into account.
In conclusion, Fundamental Physics is not only a branch of physics dealing with
esoteric questions such as the structure of physical theories under extreme
conditions, but also has many practical purposes. Present daily life would be much
more complicated if the many techniques directly related to Fundamental Physics were
not available.
Why Fundamental Physics in Space?
Space offers the following advantages for FP experiments over ground-based
laboratories. These advantages can be broken down into basic or intrinsic
advantages or requirements (space conditions), and practical experimental
requirements:
Very long ‗free fall‘ under microgravity conditions
Long exposure to very weak interactions is possible, rendering their effects
measurable
No seismic noise—quiet environmental conditions
The cosmic particle content of space itself
Huge distances and variations in altitude
Large velocities and large velocity variations
Large potential differences may be used
Repeatability (aboard the ISS only) and continuous improvement of the
experimental hardware
Past FP missions in Space
Mission                                           Duration       Space agency
GP-A           Gravity Probe A                    1976           NASA
Viking                                            1976–82        NASA
LLR            Lunar Laser Ranging                1969–present   NASA
LAGEOS I & II  Laser Geodynamics Satellites       1972–present   NASA
GP-B           Gravity Probe B                    2004–present   NASA
FP missions in Space under development

Mission                                                      Projected launch   Space agency
MICROSCOPE       Test of the Weak Equivalence Principle      2007               CNES–ESA
LISA             Laser Interferometer Space Antenna          2012               ESA–NASA
LISA-Pathfinder  LISA technology mission                     2007               ESA
ASTROD           Astrodynamical Test of Relativity
                 using Optical Devices                       >2010              PMO/China
Why Fundamental Physics on the ISS?
The International Space Station (ISS), developed by the international space agencies,
is still under construction. It is now being offered as an opportunity for scientific
experimentation under conditions of weightlessness, in a quite unique environment
that offers many of the advantages of space listed above. However, one of the most
powerful arguments for the utilization of the ISS for FP experimentation is that once
it is fully operational it will provide unrivalled opportunity for quicker, easier, and
repeatable access to the experimental apparatus for adjustment, repair, or repetition of
experiments. This is something that is quite impossible in the case of dedicated
satellites. In consequence, a facility like the ISS will reduce considerably the
timescales and costs associated with the practical realisation of such experiments.
There are also circumstances in which the Earth itself (or, at least its atmosphere) may
be used as a part of the experimental set-up. This is the case for the EUSO
experiment, for example, which is described in more detail elsewhere in this article.
Here, downward-pointing detectors aboard the ISS will study extremely high energy
particle interactions with the upper atmosphere, via UV emissions.
Fundamental Physics on the ISS: Projects under study
Several FP projects have been approved or are under study for the ISS, and these are described in
more detail below.
Mission       Name                                             Projected launch   Space agency
SUMO          Superconducting Microwave Oscillator                                NASA
BEST          Boundary Effects Near Superfluid Transitions                        NASA–DLR
ACES/PHARAO   Atomic Clock Ensemble in Space                   2006               CNES/ESA
AMS           Alpha Magnetic Spectrometer                      2006               NASA
EUSO          Extreme Universe Space Observatory               2009               ESA
SUE           Superfluid Universality Experiment                                  NASA–DLR
PARCS         Primary Atomic Reference Clock in Space                             NASA
MISTE         Microgravity Scaling Theory Experiment
DYNAMX        Critical Dynamics in Microgravity                                   NASA
EXACT         EXperiments Along Coexistence near criTicality                      NASA
RACE          Rubidium Atomic Clock Experiment                                    NASA
Questions in Fundamental Physics

In this section some general issues in Fundamental Physics that are worthy of further
consideration, and of testing in space aboard the ISS, are presented and discussed in
more detail. It is the potential resolution of these issues that may lead to further
insight into physics.
Quantum Theory
What the Equivalence Principle is to General Relativity, the Superposition Principle is
to Quantum Mechanics. The Superposition Principle describes the wave nature of
matter—which is the central feature of Quantum Mechanics. Therefore, as in the case
of the Equivalence Principle, it is essential to test the Superposition Principle with
ever-increasing precision, and this is one of the main points treated in more detail
below.
Since Quantum Mechanics is the physics of small spatial scales, in most cases, as in
spectroscopy, for example, only short timescales are involved.
However, due to enormous advances in the precise manipulation of single quantum
systems, such as laser cooling, it is possible nowadays to prepare and isolate quantum
systems with very low velocities—on the order of mm/s. Additionally, these quantum
systems may have lifetimes of many seconds, or even longer, so that it should be
possible in principle to perform interference experiments. This may be feasible, for
example, in situations where coherently split wave functions remain separated for at
least several seconds before they recombine again. However, this is possible only if
the wave functions do not fall outside the interferometer during this time. Therefore it
is necessary to carry out experiments under conditions of weightlessness, in order to
take advantage of a long free evolution time, and this has the potential to increase
enormously the experimental accuracy of tests on the Superposition Principle.
Among a number of issues in quantum mechanics there are some which are devoted
solely to quantum aspects of nature. They are possible candidates for space
experiments. These issues are:
Decoherence. Long free evolution times are important for studies of the decoherence
of quantum systems. The question of whether quantum systems suffer decoherence in
vacuum may be addressed with interferometry, by searching for a decrease in the
visibility of interference fringes as a function of the free evolution time before
recombination of the coherently split wave function. The longer the free evolution
time, the better the sensitivity for studying these kinds of phenomena. Indeed, there
are hypotheses that decoherence is a 'quantum gravity' effect, due to quantum-gravity-
induced fluctuations of space-time.
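A toy model makes the advantage of long free evolution times explicit. If fringe visibility decayed exponentially with a (hypothetical, purely illustrative) decoherence time of 100 s, a one-second ground experiment would barely notice it, whereas tens of seconds of free evolution in microgravity would show a clearly measurable drop:

```python
import math

def fringe_visibility(t_free, tau_dec):
    """Toy model: interference-fringe visibility decaying exponentially
    with the free evolution time, V(t) = exp(-t / tau)."""
    return math.exp(-t_free / tau_dec)

# Hypothetical decoherence time of 100 s (an assumption, not a
# measured value): contrast loss vs free evolution time.
for t in (1.0, 10.0, 60.0):
    print(f"t = {t:4.0f} s  ->  V = {fringe_visibility(t, 100.0):.3f}")
```

Any real decoherence mechanism would have its own functional form; the point of the sketch is only that sensitivity to such effects grows directly with the accessible free evolution time.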
Linearity of Quantum Mechanics. The property of quantum systems to show
interference is one of the miracles of physics in the quantum domain. It states that
quantum particles, e.g. electrons, protons, neutrons, etc, when released either in a
cloud or one after another before moving through a double slit, are not found at places
on a screen behind the double slit as predicted by ordinary classical mechanics, see
Fig.4. Classical mechanics predicts that there will be a distribution of hits of these
particles on the screen which has two distinct peaks. Instead, what one observes is an
interference pattern with many peaks — an observation that has been confirmed by
many experiments. Furthermore, the same result comes about even if one particle is
released only after the previous particle has already arrived at the screen. This means
that, in a probabilistic sense, the succeeding particles "know" what the previous
particles have done, even though they had not yet been created when the previous
particle was absorbed. This cannot be explained; it can only be explored
experimentally and then described mathematically. The same result also arises when
all these particles are released simultaneously in different double-slit setups, and one
then sums up all the results of these individual experiments. Again, the particles
somehow "know" what the other particles are doing.
The effects described above have been confirmed many times in a variety of
interferometric setups, and to date this principle has been shown to govern the laws
of microscopic physics with very high accuracy. Any hypothetical non-linear
quantum mechanical effect has been ruled out by neutron interferometry, within the
precision of the experiment. However, a considerable improvement on this result is
conceivable through the use of atom interferometry in space, and by taking advantage
of a long free evolution time that microgravity conditions permit.
Figure 4: The Superposition principle: Evolution of interference fringes. [Figure self-made]
Entanglement/correlations. Another prominent feature of quantum mechanics is the
entanglement of states. This is connected with non-local behaviour of quantum
systems. Many features of quantum theory, like quantum teleportation, the Einstein-
Podolsky-Rosen paradox, Greenberger-Horne-Zeilinger states, quantum computing,
etc., are connected with entanglement. Today, quantum entanglement has been proven
to exist over macroscopic distances, that is, over several kilometers.
The Process of Measurement. The understanding of the measurement process is one
of the unsolved problems in quantum mechanics. In the conventional interpretation of
quantum mechanics it is described by the postulate of the reduction of the wave
function. This seems to be contradictory if the physical system under consideration
and the measuring apparatus (which should be part of the physical world) are both
described quantum mechanically. Solutions to that problem are, e.g., the many-worlds
interpretation of Quantum Mechanics, or the decoherence of the physical state under
consideration through its interaction with the measurement apparatus, so that the final
state of the physical system effectively (after the measurement process) looks as if a
reduction of the wave packet had taken place. There are speculations where the
reduction is ascribed to a fundamental decoherence or to a modified dynamics for
large systems.
Figure 5: The Casimir effect. Electromagnetic waves above a particular wavelength
cannot exist between two capacitor plates. The greater number of waves, and thus the
excess of energy, outside the plates leads to a pressure from outside, resulting in an
attractive force between the two plates. This effect, which has been confirmed by
experiments, can be understood only in the framework of quantum field theory. [Figure self-
made]
Casimir effect. The Casimir effect is a macroscopic effect which can be explained
only by means of second quantization of the Maxwell field describing the creation
and annihilation of photons. The main consequence of second quantization is the
prediction of vacuum fluctuations, whose nature depends on the physical boundary
conditions of the system. They are therefore different for unrestricted free space
compared with a physically restricted region of space, and this difference leads
directly to the famous Casimir effect. The difference in the energies of the vacuum
fluctuations between the closely-spaced plates of a capacitor, and the same plates
when they are separated by a larger gap, leads to a distance-dependent force between
such neutral capacitor plates: please refer to figure 5. This effect has been verified by
experiment. However, it is desirable to improve the accuracy of such experiments,
not least because of the fundamental importance of this effect, and also to explore
further related effects.
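For ideal, perfectly conducting parallel plates the Casimir pressure has the well-known closed form P = π²ħc/(240 d⁴). A short numerical sketch (standard constants; the gap sizes are our own illustrative choices) shows how steeply the force grows as the plates approach:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 299_792_458.0        # speed of light, m/s

def casimir_pressure(gap):
    """Magnitude of the Casimir pressure between ideal, perfectly
    conducting parallel plates: P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * HBAR * C / (240.0 * gap ** 4)

# The 1/d^4 law: halving the gap multiplies the pressure by 16.
for gap_nm in (1000, 100, 10):
    p = casimir_pressure(gap_nm * 1e-9)
    print(f"gap {gap_nm:5d} nm -> pressure {p:.3g} Pa")
```

At a 100 nm gap the pressure is of order 10 Pa, which is why precision Casimir measurements are performed at sub-micron separations; real metals and finite temperature modify this ideal result.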
Figure 6: Creation of a Bose-Einstein condensate. This picture shows the cooling process of an
atomic cloud. The left part shows the laser-cooled atomic cloud. Then, evaporative cooling
decreases the mean velocity of the atoms. In the state of condensation, all atoms possess velocities
near zero, limited only by the uncertainty relation.
Bose-Einstein condensation. If identical bosonic quantum systems are cooled down
to very low temperatures (of the order of nK), then each individual system falls into
the lowest possible energy state. As a consequence, all the wavefunctions of the
individual systems overlap, and the whole system begins to become one single giant
quantum state, the Bose-Einstein condensate. In this condensate all atoms are linked
coherently, and they behave as a single quantum state with one phase, see Fig.6.
This leads to new phenomena, which have important applications in interferometry
and metrology. For example, Bose-Einstein condensates may be the source for a
coherent atomic beam, thus serving as an atom laser. The use of such coherent atomic
beams in atom interferometers will increase the sensitivity of these devices by orders
of magnitude, and can furthermore be used for a more precise determination of such
fundamental constants as, e.g., the fine structure constant.
An important issue necessary for the understanding of this phenomenon, and in need
of exploration, is the way in which the large number of atoms builds up a single phase
and becomes one quantum state. This requires the observation of a free BEC for a long
time, which is something that can be done only if the BEC is freely falling, and so
such an experiment can be realized best in space — particularly aboard the ISS.
Indeed, under these conditions even lower BEC temperatures can be achieved, that is,
temperatures in the femto-Kelvin region (10⁻¹⁵ K). We may very well speculate as to
whether under these really extreme conditions new physical phenomena may arise¹.
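How cold "cold" has to be can be estimated from the ideal-gas condensation temperature T_c = (2πħ²/mk_B)·(n/ζ(3/2))^(2/3). The sketch below uses standard constants; the choice of ⁸⁷Rb and the density of 10¹⁹ atoms/m³ are typical trapped-gas values of our own, not figures from the text:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
KB = 1.380649e-23        # Boltzmann constant, J/K
ZETA_3_2 = 2.6123753     # Riemann zeta(3/2)

def bec_critical_temperature(mass, density):
    """Ideal-gas Bose-Einstein condensation temperature:
    T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))**(2/3)."""
    return (2 * math.pi * HBAR ** 2 / (mass * KB)) * (density / ZETA_3_2) ** (2 / 3)

# Illustrative case: Rb-87 atoms at n = 1e19 atoms per cubic metre.
m_rb87 = 86.909 * 1.66053907e-27   # atomic mass in kg
tc = bec_critical_temperature(m_rb87, 1e19)
print(f"T_c ~ {tc * 1e9:.0f} nK")
```

The transition temperature comes out in the 100 nK range, which makes clear what a large further step the femto-Kelvin regime mentioned above would represent.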
The 2001 Nobel Prize was awarded for the creation of Bose-Einstein Condensates to
E.A. Cornell, W. Ketterle, and C.E. Wieman, underlining how the physics community
acknowledges the importance of this area of physics. A prerequisite for these
studies was, and is, the laser cooling of the atoms — indeed it is the most important
step in creating a BEC. The importance of this technique has also been emphasized by
the award of the 1997 Nobel Prize to S. Chu, C. Cohen-Tannoudji, and W.D. Phillips.
Other issues in the quantum domain are concerned with the coupling of quantum
matter to external fields, in particular the coupling to inertial and gravitational fields,
and the coupling of the spin of elementary particles to these fields. These questions
will be considered further in the gravity section, below.
1 It makes sense, perhaps, to remind oneself of the fact that the quantum Hall effect, which initiated a
revolution in metrology and for which K. von Klitzing was awarded the Nobel Prize, was also achieved
"only" by cooling down samples of thin metallic films. In other words the preparation of extreme
conditions and subsequent examination of cooled samples is worth carrying out in any case.
Special Relativity
The theory of Special Relativity plays a rôle of central significance in our
understanding of nature, as well as being of great practical importance. Special
Relativity is a theory that has to be incorporated into all other physical theories, such
as, for example, the theory of gravity: General Relativity. And although we may not
be aware of it, Special Relativity is also absolutely necessary in our everyday lives:
for the proper functioning of the Global Positioning System (GPS), Fig.7, and also for
the forthcoming European Galileo system, for example, which has to take the effects
of Special Relativity into account. In fact, if all special relativistic effects were
neglected in this case then positional errors of the order of 2 km per day would
accumulate—certainly an effect that must be mitigated in circumstances where, for
example, an aircraft has to land at a fog-bound airfield. Another practical example
where Special Relativity is important is in the field of high-precision metrology.
Industrial production sometimes requires a very precise measure of length, and this
can be realized only thanks to the constancy of the speed of light, one of the
cornerstones of Special Relativity. It is noteworthy that such requirements of
metrology cannot be met by the 'primary' metre stick in Paris. We need instead the
new 'portable' definition of the metre, which is underpinned by Special Relativity.
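The "2 km per day" figure quoted for GPS can be checked with a back-of-envelope estimate of the special-relativistic clock slowdown of an orbiting satellite. This is only a sketch: the full GPS correction also includes the larger gravitational blueshift of opposite sign, which is not computed here, and the orbital radius is the standard nominal value:

```python
import math

C = 299_792_458.0        # speed of light, m/s
GM_EARTH = 3.986004e14   # Earth's gravitational parameter, m^3/s^2

# GPS satellites orbit at a radius of roughly 26 560 km.
r = 26_560e3
v = math.sqrt(GM_EARTH / r)          # circular orbital speed, ~3.9 km/s

# Special-relativistic rate offset of the moving clock, ~v^2 / (2 c^2):
rate = v ** 2 / (2 * C ** 2)
clock_offset = rate * 86_400         # seconds accumulated per day
range_error = clock_offset * C       # equivalent ranging error, metres

print(f"clock offset ~ {clock_offset * 1e6:.1f} us/day")
print(f"ranging error ~ {range_error / 1e3:.1f} km/day")
```

A few microseconds of uncorrected clock drift per day translate, at the speed of light, into kilometre-scale position errors per day, consistent with the figure given in the text.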
Figure 7: The forthcoming Galileo positioning system. [Figure from ESA web page]
Although it is seen to be very important for everyday life, Special Relativity is based
on, and displays, some extraordinary features—when viewed from the perspective of
our everyday experience. One of the principles that Special Relativity is based upon is
the independence of the speed of light 'c' from the velocity of the light source, a
feature that is completely counter-intuitive. Our intuition tells us that if a ball were
thrown out of a moving car, then the ball would have a larger velocity when thrown in
the direction of motion of the car than in the opposite direction. However, if instead
of a ball we take light, then we must infer that light emitted by the headlights of (e.g.)
a speeding Ferrari, when crossing some traffic lights, should possess a larger velocity
than the light emitted by the traffic lights, see Fig.8. Experimentation, however, tells
us that this is not true: both the light emitted by the Ferrari, as well as that emitted
from the stationary traffic lights, propagates with exactly the same velocity. There is
only one velocity of light.
Figure 8: The velocities of light from a stationary traffic light and from a fast-moving car are
measured to be identical. The velocity of light does not depend on the velocity of the car. [self-made figure]
This feature of the propagation of light has been tested to its extreme with high energy
particles in the particle accelerators at CERN. There, it was possible to create a
particular species of particles which possessed a very high energy and that travelled
with a velocity that was 99.975 % of the speed of light. The same species of particles
could also be produced in a different way, so that they were at rest. In both cases after
a very short time the particles subsequently decayed into photons, i.e. into light. The
velocity of the light coming from these particles, namely from the ones at rest and the
ones moving at approximately the speed of light, was then measured. The
experimental discovery was that in both cases the same velocity of light 'c' was
measured (this statement can also be interpreted as a special case of the famous
Einstein velocity addition theorem).
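The Einstein velocity addition theorem, w = (u + v)/(1 + uv/c²), makes this result quantitative: composing the speed of light with the source's speed returns exactly c. A minimal sketch, using the 99.975 % of c figure from the CERN experiment described above:

```python
C = 299_792_458.0   # speed of light, m/s

def add_velocities(u, v):
    """Einstein's velocity addition theorem:
    w = (u + v) / (1 + u*v / c^2), which never exceeds c."""
    return (u + v) / (1 + u * v / C ** 2)

# Light emitted by a source moving at 99.975% of c still travels at c:
w = add_velocities(C, 0.99975 * C)
print(f"w / c = {w / C:.12f}")

# Even two speeds of 0.9c combine to less than c, not 1.8c:
print(f"{add_velocities(0.9 * C, 0.9 * C) / C:.4f} c")
```

For everyday speeds uv/c² is vanishingly small, so the formula reduces to the familiar Galilean sum w ≈ u + v, which is why the relativistic behaviour of light is so counter-intuitive.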
Figure 9: Special Relativity has been tested to its extreme with high energy particles at CERN.
There is no more convincing experiment in favour of the constancy of the speed of
light than this one. Since all the measurements were carried out with the same
apparatus, and since only a comparison of velocities was made, the conclusion
reached is inescapable. Moreover, there is absolutely no room for differing
theoretical interpretations—it is merely a statement of fact. This experimental result
can only be explained by Special Relativity. Consequently, this very unusual
behaviour of light forced science into a new understanding of space and time, which
culminated in the theory of Special Relativity.
Not only is the velocity of light a constant, but it is also the same if light with another
wavelength, or with another polarization, is employed. Furthermore, the maximum
velocity of macroscopic matter like rockets, or even of particles like electrons,
protons, or neutrons, cannot ever exceed the speed of light. It is really a mystery why
all of these different bodies or particles share this same property.
This result implies that at every point in space, and at all times, there is one and only
one 'lightcone' that can be drawn in space and time. All light paths are the same,
irrespective of the motion of the source or from where they have been emitted. This
is the fundamental geometrical nature of Special Relativity.
Figure 10: Light cones. Light propagates with the same velocity in all directions, and the velocity
of light is independent of the state of motion of the observer and of the source. [self-made
Figure]
There are two further ingredients to Special Relativity: the isotropy of the velocity of
light, and the relativity principle. The isotropy of the velocity of light means that light
has the same velocity when travelling in any direction (Fig.10). This feature of light
was first tested by Michelson and Morley (Fig.11) at the end of the 19th century,
during the search for an 'ether', a medium through which light was thought to
propagate, and a natural consequence of a nonrelativistic framework of physics. It is
well known that
no such ether was, or has been, observed, and thus the old Newtonian mechanical
framework had to be considered as being merely approximately correct, valid only for
low velocities.
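The sensitivity of the Michelson–Morley experiment can be estimated from the fringe shift an 'ether wind' of speed v would produce when the interferometer is rotated by 90 degrees, ΔN = 2Lv²/(λc²). The arm length and wavelength below are the commonly quoted values for the 1887 apparatus, taken as illustrative assumptions here:

```python
C = 299_792_458.0   # speed of light, m/s

def expected_fringe_shift(arm_length, ether_speed, wavelength):
    """Fringe shift an 'ether wind' of speed v would cause when the
    interferometer is rotated by 90 degrees: dN = 2*L*v^2 / (lambda*c^2)."""
    return 2 * arm_length * ether_speed ** 2 / (wavelength * C ** 2)

# ~11 m effective arm length, ~500 nm light, and the Earth's orbital
# speed of ~30 km/s as the minimum expected ether wind.
dn = expected_fringe_shift(11.0, 30e3, 500e-9)
print(f"expected shift ~ {dn:.2f} fringes; none was observed")
```

An expected shift of a substantial fraction of a fringe was well within the resolution of the apparatus, which is why the null result carried such weight against the ether hypothesis.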
Figure 11: Portraits of A.A. Michelson (1852 – 1931) and E. Morley (1838 – 1923).
Today there are much more accurate tests of the isotropy of the velocity of light. The
current relative precision is 10⁻¹⁵. This means, for example, that present experiments
have the capability of detecting that a light ray travelling a distance of 1 light year
(that is, 10 000 billion km) in one direction may cover this distance in 30 ns (30
billionths of a second) less than another light ray travelling this same distance in
some other direction. Until now, no such differences in light speed have been
observed, and nowadays experiments of even higher accuracy are under construction.
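The 30 ns figure quoted in the text follows directly from the stated relative precision: light takes one year to cover one light year, so an anisotropy at the 10⁻¹⁵ level changes the travel time by about one part in 10¹⁵ of a year. A one-line check:

```python
SECONDS_PER_YEAR = 365.25 * 86_400   # ~3.16e7 s
REL_PRECISION = 1e-15                # current relative precision of isotropy tests

# Travel-time difference over one light year at this precision level:
dt = SECONDS_PER_YEAR * REL_PRECISION
print(f"dt ~ {dt * 1e9:.0f} ns")     # roughly 30 ns, as quoted
```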
Figure 12: Albert Einstein (1879 - 1955), the inventor of Special and General Relativity. [Figure
from http://www.th.physik.uni-frankfurt.de/~jr/physpiceinstein.html]
The isotropy of the velocity of light forms a part of the so-called isotropy of space.
This means that there is in no way any preferred direction in space. Physics must be
the same irrespective of the spatial orientation of the laboratory. On the level of
massive particles this means the following: if a particle is always given the same
energy (for charged particles by means of electrical acceleration, or for atoms by
acceleration in strong laser fields, for example), then, were space anisotropic, the
velocity of that particle would be different in different spatial directions, and
identical particles would behave differently in different directions. This hypothesis
has been tested using
nuclear spectroscopy with incredible accuracy – even more precisely than the optical
analogue described above. Needless to say, no effect has been found. If not only
orientations but also velocities are concerned, then this principle of spatial isotropy
extends to the relativity principle.
The relativity principle, which has the status of a Universality Principle, states that there is no physical phenomenon which singles out or distinguishes any particular
inertial (that is to say, non-accelerating and non-rotating) frame of reference from any
other. In the words of a universality principle: all phenomena happen in the same
way in all inertial frames. If in two inertial frames we have the same initial and
boundary conditions with respect to the corresponding coordinate systems, then the
dynamics of all physical systems is identical in both inertial frames. This principle is
also true in Newtonian mechanics but, when applied to the constancy of the speed of
light, enforces the unification of space and time — one of the fundamental
implications of Special Relativity. In other words, a transformation between two moving inertial frames necessarily involves a mixing of space and time coordinates, see Fig.13. This mixing is not present in Newtonian mechanics.
The relativity principle includes the fact that all particles can attain as their maximum
speed the velocity of light: a different maximum speed for one particle would single
out a preferred frame, thus violating the relativity principle. Accordingly, all tests of Special Relativity are mainly tests of the relativity principle.
Figure 13: An essential feature of SRT: space and time get mixed under transformations between moving frames. [self-made figure]
There is one feature of Special Relativity which stands out: the effect of so-called 'time dilation', see Figs.14 and 15. This effect underlies the twin paradox, see Fig.16, and is tested in experiments on the relativistic Doppler effect. Time dilation is
in fact an effect of the mixing of time and space between two moving inertial systems.
It means that the time in a moving frame seems to go slower when observed from
another frame. This is a symmetrical effect: both observers see the slowing down of
the clock in the other frame. Although this seems to be just an artificial effect due to the synchronization procedures of clocks, it has a real physical meaning, as exemplified
by the twin paradox, see Fig.16. This has been tested many times with moving
elementary particles or atoms.
Figure 14: The special relativistic time dilation. A clock can be realized by a photon travelling back and forth between two mirrors at a distance L. A unit of time is then defined by the time the photon needs for one round trip. Such a clock is called a light clock. According to Special Relativity, atomic clocks and light clocks show the same time. If a light clock is moving with velocity v, as shown in the right part of the figure, then the distance the light has to travel is longer. Taking the constancy of the velocity of light for granted, the photon then needs longer for one round trip, so that the unit of time of the moving clock is longer than for the clock at rest. The corresponding dilation of the time unit of the moving clock is described by the famous formula on the right. [self-made figure]
Fig.15: Time dilation. The time read from a moving clock runs slower than the time given by the inertial system. Lower part: at first, a moving clock shows the same time as the clocks in the observer's frame. Upper part: after a while, the time of the moving clock, when read by the observer, lags behind. [self-made figure]
t′ = t / √(1 − v²/c²)
Fig.16: The twin paradox is based on time dilation. At the beginning the moving clock shows the same time as the observer's clocks. After some time, another clock, moving in the opposite direction, takes over the time of the moving clock. Coming back to the starting point, the moving clock lags behind in time. This effect has been observed in storage rings, where the time unit was the decay time of elementary particles. [self-made figure]
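The storage-ring observation mentioned in the caption can be checked numerically. A minimal sketch, assuming illustrative parameters of the CERN muon storage-ring experiments (Lorentz factor γ ≈ 29.3, muon rest-frame lifetime 2.197 μs); these particular numbers are not taken from the report:

```python
import math

# Time dilation of a muon's mean lifetime in a storage ring.
# Assumed illustrative values (CERN muon storage-ring experiments):
gamma = 29.3                  # Lorentz factor of the stored muons
tau_rest = 2.197e-6           # muon mean lifetime at rest, seconds

tau_lab = gamma * tau_rest    # dilated lifetime seen in the laboratory
v_over_c = math.sqrt(1 - 1 / gamma**2)

print(f"v/c          = {v_over_c:.6f}")         # ~0.99942
print(f"lab lifetime = {tau_lab*1e6:.1f} us")   # ~64.4 us, vs 2.2 us at rest
```

The thirtyfold stretching of the decay time is exactly the effect the caption describes.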
There are of course other experiments on the foundations of Special Relativity, like
the Hughes-Drever experiments, which search for any anisotropy of space using
quantum mechanical interactions. Since these experiments probe nuclear energy levels, they are much more precise than the optical experiments discussed above. Needless to say, these experiments have also found no hint of a violation of Special Relativity.
In summary, it is fair to say that to date Special Relativity has passed so many tests, and has failed none, that it really can be considered to be a fully established and accepted theory. Since the theory is in accordance with all relevant experiments, there might seem to be no need to question it further. Indeed, this position can be likened to the state of physics at the beginning of the 20th century. However, in the last few years
questions have begun to emerge, as a result of attempts at unifying Quantum Theory
and Gravity. From the quintessential incompatibility of these two theories the need
for modifying at least one of them has become inescapable, and ensuing efforts have
resulted in different approaches to a unified theory of Quantum Gravity, such as 'string theory', 'canonical quantum gravity', 'non-commutative geometry', etc. The one quite astonishing outcome from all of these approaches has been that they all predict a violation of, or a modification to, Special Relativity. Such violations will
occur only at very high energies, certainly energies that have been inaccessible up
until now; but the quest is on to try to approach yet higher energies, and to try to
improve still further the experimental basis of Special Relativity. Improvements of
such experimental tests are therefore crucially important for the direction towards, and
the construction of, a new theory of space and time. And the path to improving these
tests lies through the use of space conditions.
General Relativity and gravitation
After Special Relativity, the theory of gravitation, General Relativity, was the next big
milestone that Einstein achieved. It was the lack of an 'ether' that made Special Relativity necessary, and since Special Relativity was theoretically inconsistent with the "old" Newtonian theory of gravity, a new theory for the gravitational interaction
had to be found.
As in the case of Special Relativity, General Relativity was also absolutely
revolutionary as regards our understanding of space and time, and for the structure of
physics as a whole; and like Special Relativity it is also important for our daily lives
for, once again, the Global Positioning System (GPS) could not function without
taking general relativistic effects into account. Indeed, if the general relativistic time dilation (or redshift) of clocks were not taken into account, then errors, again of the order of 10 km per day, would result. Geodesy, that is, the measurement of the shape of the Earth, see Figs.17 and 18, which is important for e.g. the study of continental drift, navigation, etc., also relies heavily on Special and General Relativity. Geodetic
engineers are interested in (i) establishment of precise 3D positions, (ii) navigation,
(iii) determination and modelling of the earth's gravity field, and (iv) measurement
and modelling of geodynamic phenomena (polar motion, earth rotation, crustal
deformation). For all this, Special and General Relativity are needed.
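The quoted positioning error can be reproduced from the gravitational clock-rate shift alone; a rough sketch, assuming textbook values for the Earth's gravitational parameter and the GPS orbital radius (these numbers are assumptions, not figures from the report):

```python
# Rough size of the GPS error if gravitational time dilation were ignored.
GM = 3.986004e14        # Earth's gravitational parameter, m^3/s^2 (assumed)
c = 2.99792458e8        # speed of light, m/s
r_earth = 6.371e6       # mean Earth radius, m
r_gps = 2.6561e7        # GPS orbital radius, m (assumed)

# Gravitational (redshift) part: a satellite clock runs faster by
# GM*(1/r_earth - 1/r_gps)/c^2 relative to a ground clock.
rate = GM * (1/r_earth - 1/r_gps) / c**2

drift_per_day = rate * 86400            # seconds of clock error per day
range_error = c * drift_per_day / 1000  # corresponding ranging error, km

print(f"clock drift ~ {drift_per_day*1e6:.0f} us/day")   # ~46 us/day
print(f"range error ~ {range_error:.0f} km/day")         # ~14 km: "order 10 km"
```

The velocity time dilation of the satellites partly counteracts this, but the gravitational term alone already sets the order of magnitude.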
Figure 17: Geodesy. From measurements of the relative positions between the satellite and the Earth's surface, and between the satellite and GPS satellites, one can determine the geographical shape of the Earth as well as its gravitational shape, that is, the surfaces of equal gravitational potential. The latter is called the geoid. [with modifications from: C. Audoin and B. Guinot, The Measurement of Time (Cambridge 2001)]
Figure 18: Geodesy - the "Potsdam potato". In contrast to a sphere made up of a homogeneous material, where the surfaces of constant gravitational potential have the form of a sphere, the surfaces of constant gravitational potential of the Earth, due to its mountains and its inhomogeneities, are irregularly shaped. The geoid has the form of a deformed sphere, like a potato. [from some ESA web page]
More profoundly, however, General Relativity (GR) is absolutely necessary to our
understanding of the evolution of our universe, see Fig.19. The big bang is a direct
consequence of General Relativity. Furthermore, one of the most outstanding, fascinating, and rather strange predictions of GR is the existence of Black Holes, Fig.20, which according to observations now seem to reside at the centres of most galaxies. Of course it is also well known that effects like the bending of light in the neighbourhood of the Sun, the perihelion shift of Mercury, the gravitational redshift, and gravitationally-induced time delays can only be explained with the help of GR. Furthermore, gravitational waves are a core prediction of GR and, when they are detected, will open a wide new window onto the universe, allowing its very first phase to be explored far more effectively than by optical means.
Figure 19: The evolution of our Universe is strongly influenced by the gravitational interaction. If small modifications to the gravitational interaction have to be applied, then all the astrophysical and cosmological data have to be re-interpreted, and the whole history of our universe has to be written anew. [from http://particleadventure.org]
Figure 20: Black Hole. The accretion disc encircling a black hole. Due to the curvature of space-
time around a black hole it is possible to see the accretion disc behind the black hole.
In the development of the formalism of GR, the Universality of free-fall (also called
the Weak Equivalence Principle or, for short, just the Equivalence Principle) played a
central rôle. The Universality of free-fall means that all kinds of pointlike matter fall
in a gravitational field along the same spacetime path. Their motion is identically the
same, see Fig.21. Therefore, this motion is universal, and thus can be ascribed to a
geometry. The Universality of free-fall is the reason for geometrizing gravity.
Gravity is peculiar in this respect: it is the only interaction that can be so geometrized.
In electrodynamics, on the other hand, the path taken by matter depends on the charge
of the particles that constitute it, and thus cannot be geometrized. This exceptional
property of gravity creates the geometry of space and time (in a Newtonian framework, where the force, defined as the inertial mass times the acceleration, is equal to the gravitational force, given by the gravitational mass times the gradient of the Newtonian gravitational potential, this is equivalent to the equality of inertial and gravitational mass).
Figure 21: In vacuum, a feather falls in the same way as a ball of lead. [self-made figure]
The Universality of free-fall is fulfilled in the old Newtonian theory of gravity, but it has to be made compatible with Special Relativity. It turns out that this is possible only if Special Relativity is considered to be valid only locally, that is, in small space-time regions. In consequence, fascinating geometrical structures arise naturally, namely:
space-time geometry as a consequence of the matter content, the evolution of the
universe, and the formation of Black Holes. The reason is exemplified by the
following consideration: since all kinds of matter accelerate in the same way, e.g., in a
laboratory located on the surface of the Earth, a new reference frame can be
constructed which falls towards the Earth with just this same acceleration, and in this
frame the gravitational acceleration disappears. This means that gravity can be
simulated or compensated for by an accelerated frame of reference, see Fig.22. This is the famous 'Einstein elevator': in a box without a window there is no way to decide, i.e. there is no experiment whose outcome can distinguish, whether a perceived acceleration is due to gravity or is due to a force accelerating the whole box.
Figure 22: Simulation and compensation of gravity. Left: The gravitational force related to the acceleration g that a mass (blue ball) experiences at the bottom of a box located at the surface of the Earth is the same as if mass and box were in space and the box were accelerating with an acceleration a = g. Consequently, gravity can be simulated by acceleration. Right: Gravity can be transformed away. If, near the Earth, not only the mass but also the surrounding box is falling, then this represents a situation equivalent to one where both the mass and the box are freely floating in outer space. [self-made figure]
Since the gravitational acceleration can be simulated or transformed away, this quantity is not a meaningful representative of the gravitational field. The gravitational field is instead given by the curvature of space-time. This curvature is a measure of the relative acceleration of two particles or light rays moving in free fall in space-time, see Fig.23. This relative acceleration cannot be transformed away. This means that measuring the curvature is one of the most prominent tasks of General Relativity. One effect of the curvature of space-time is the Earth's tides, see Fig.24. Gradiometers are apparatuses which can measure or map the gravitational field in terms of the space-time curvature, see Fig.25. In a two-dimensional setting, the curvature represents the bending of a surface. Fig. 26 shows the curved space-time of a spherically symmetric gravitating body like the Sun, represented as a two-dimensional surface embedded in three-dimensional space.
Fig.23: Particles or light rays moving along different trajectories in a gravitational field, e.g. that of the Earth, will be accelerated in different ways; they show a differential acceleration. This is a direct observational aspect of space-time curvature. This effect also leads to the ocean tides, see Fig.24. [self-made figure]
Figure 24: Nonlocal effects of gravity: the curvature describes gravity as a relative acceleration, i.e. as a tidal force. Parts of the blue body which are near to the gravitating Moon feel a slightly larger attracting force than the parts farther away from the Moon. Furthermore, since the gravitational force is directed towards one point, the centre of the Moon, different parts feel slightly different directions of the gravitational acceleration. Therefore, the Earth in the gravitational field of the Moon feels a small stretching force in the Earth-Moon direction, and a small compression in the perpendicular directions. The force is so small that it would be very difficult to see in a solid but, since a good part of the Earth is covered by water, this effect can be seen as the oceanic tides. This is also the reason why the corresponding influence of the gravitational field of the Earth on the shape of the Moon cannot be observed. [self-made figure]
Fig.25: Curvature can be measured with a gravity gradiometer, which measures the different accelerations experienced by masses placed at nearby but different positions. One type of gradiometer can be based on the comparison of the torques on two pendula (upper figure). The upper left figure is a photograph of such a gravity gradiometer; on the right the scheme of the setup is shown. Another type of measurement consists of the simultaneous measurement of the accelerations of two bodies. A modern version of this uses atoms at two heights whose accelerations can be sensed by the same laser field at the same time, which leads to an automatic cancellation of many systematic errors (lower figure). [upper figures from: Misner, Thorne, Wheeler: Gravitation, Freeman, San Francisco 1973; lower figure from M.J. Snadden et al., Phys. Rev. Lett. 81, 971 (1998)]
Figure 26: In GR, space and time are unified to give a curved space-time. The gravitational field of a body like the Sun can be visualized as a curved surface. [self-made figure]
Since reference frames located at the same position are now related by the laws of
Special Relativity, it turns out that the structure of the gravitational field can be
inferred to be a Riemannian geometry, and Gravity is thus described by a metrical
tensor. This metric is determined by all the matter in the universe, and serves as a
guiding field for the motion of particles, as well as a tool for measuring distances.
The fact that everywhere in the universe we have a unique metrical field as a
description of gravity is ensured by the principle of the Universality of the
Gravitational Redshift. This means that all physical laws have the same structure everywhere and at all times. A consequence of this is that all kinds of (nongravitational) clocks behave in the same way in gravitational fields. In consequence, by moving different clocks, e.g. an atomic clock, a quartz clock, an optical resonator clock, etc., through a gravitational field and comparing them, one cannot determine where in the gravitational field one is, see Figs. 27-29.
Figure 27: The gravitational redshift is a gravitationally induced slowing of clocks. Clocks in stronger gravitational fields (represented in the figure by a clock down in the valley) run slower than clocks in weaker gravitational fields (represented by a clock on top of a mountain). In terms of frequencies, clocks down in the valley are redshifted. [self-made figure]
Figure 28: The gravitational redshift. Left: A laser at the ground emitting a beam of green light upwards, with no change in the frequency of the laser light with height. Right: In Einsteinian gravity, the frequency of the laser light decreases and the wavelength increases, which means that the colour of the light changes from green to yellow to red. [self-made figure]
Figure 29: Universality of the gravitational redshift. Though influenced by the gravitational field, two different clocks always show the same time, irrespective of where they are located in the gravitational field. [self-made figure]
Therefore, the structure of gravity is determined by two universality principles,
namely the Universality of Free Fall and the Universality of the Gravitational Redshift,
and by the fact that all local frames are related by Lorentz transformations. Together,
these make up the Einstein Equivalence Principle.
For an experimental exploration of gravity it is therefore of primary importance to test
the validity of the Einstein Equivalence Principle. A second step in the exploration of
gravity is then to test the predictions of the theory of gravity, namely the bending of
light rays, the perihelion shift, the red shift, the time delay and the Lense-Thirring or
frame-dragging effect.
The Universality of Free Fall. The Universality of free-fall, see Fig.21, means that a sphere made of lead falls in the same way or, equivalently, feels the same gravitational acceleration, as a sphere made of aluminium, for example. This has been confirmed on the Earth with the incredibly high relative accuracy of 10⁻¹². It means that any difference in the gravitational accelerations on the Earth is less than 10⁻¹¹ m/s². In order to get a feel for what this implies, consider the following example: if two bodies fell freely side by side with this differential acceleration, then even after an hour of free fall they would have drifted less than a tenth of a millimetre apart. The sensitive terrestrial technique used most often for this kind of comparison of accelerations is that of the torsion pendulum, originally designed by Eötvös at the end of the 19th century. A violation of the Universality of Free Fall would therefore amount to a new interaction which would allow gravity to distinguish between different substances; such an interaction goes under the name of a 'fifth force'.
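The separation built up by a differential acceleration of 10⁻¹¹ m/s² follows from elementary kinematics, s = ½Δa·t²; a minimal sketch:

```python
# Separation built up by a differential acceleration da over time t: s = 0.5*da*t^2
da = 1e-11   # acceleration difference, m/s^2 (10^-12 of g ~ 10 m/s^2)

def drift(t_seconds: float) -> float:
    """Relative displacement of two co-falling bodies after t seconds."""
    return 0.5 * da * t_seconds**2

print(f"after 1 hour: {drift(3600)*1e3:.3f} mm")    # ~0.065 mm
print(f"after 1 day : {drift(86400)*1e3:.1f} mm")   # ~37 mm
```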
A striking challenge is to explore whether anti-matter also obeys the Universality of
Free Fall, since this has never been tested. Only recently has anti-Hydrogen been produced at CERN, so that with this kind of matter a test of the Universality of Free Fall is at least feasible, though not yet with the high accuracy achieved in tests on ordinary matter.
Further questions concerning gravity are whether matter with polarization (or spin), or
with charge, also feels gravity in the usual way. In principle, these kinds of matter do
violate the Universality of Free Fall, since these are extended bodies (note that the
Universality of Free Fall only holds for pointlike particles). Polarization and charge
create extended electromagnetic fields which belong to the particle, but which ―feel‖
the gravitational field far away from the body. However, the corresponding effects
are very small and far beyond any experimental significance. What one is looking for
in these cases are anomalous couplings of the gravitational field to polarization or
charge, which are outside the expected coupling to the gravitational field. Any such
anomalous coupling would lead to a fifth force. Anomalous couplings of spin or
charge to gravity may not only influence the free fall of polarized or charged matter,
but may also modify the spectral properties of atoms. Therefore, searches for this
kind of fifth force have also been carried out using high precision spectroscopy.
Tests of the Universality of Free Fall are especially important because a violation may
signal a quantum gravity effect. It has been shown that violations of the Universality
of free-fall are predicted, e.g., by string theory. Therefore, any better experimental
test of this principle than is available at present, e.g. by the dedicated MICROSCOPE
experiment, or by the far more sensitive STEP proposal, is very important. It is clear
that a long time of free fall in orbit around the Earth will enhance the effect. Thus
space conditions are very appropriate for improving the sensitivities of these kinds of
experiments.
The Universality of the Gravitational Redshift. The gravitational redshift principle
states that clocks near to gravitating bodies run slower than clocks far away: a clock
on a mountain or in an aeroplane or in a satellite runs slightly faster than the same
clock located at the surface of the Earth, see Fig.27. This effect has indeed been
observed using atomic clocks in aeroplanes, and has to be taken into account in
analysing GPS data for precise positioning. The clock runs faster the further away it
is from any gravitating body. The first Fundamental Physics space mission, Gravity
Probe A (GP-A), launched in 1976, was devoted to measuring the gravitational redshift. To the present day this experiment has yielded the most precise measurement of this effect.
Figure 30: Atomic, optical, and molecular clocks. Metaphorically speaking, the second defined by an atomic clock (left) is the time the electron needs to orbit the nucleus; the second defined by an optical clock or resonator (middle) is the time a photon needs to move back and forth between the two mirrors at the two ends of the optical cavity; and the second defined by a molecular clock is the time for one vibration or one rotation. (The variety of possible definitions of clocks is even wider than given here.) Each of these clocks is based on different physical principles: while the first one is based on the motion of an electron, the optical clock is based on the motion of photons, and the last one on the motion of whole atoms. It is a miraculous but well proven principle of physics, underlying Einstein's theory of gravity, that each clock behaves in gravitational fields in exactly the same way as the others. [self-made figure]
The gravitational redshift is not present in Newton‘s theory of gravity. And once
again it is an absolutely astonishing fact that all kinds of clocks (see Fig.30) behave in
the same way in a gravitational field. All kinds of clocks will speed up in the same
way when they are taken further away from a gravitating body. It follows that it is
immaterial whether the clock is based on atomic or on other kinds of transitions, or on
photons running back and forth in a cavity (as in a light clock), or on molecules being in a certain vibrational or rotational state defining a certain frequency. All of these
clocks are based on different physical principles (i.e., laws), yet they nevertheless
behave in the same way — meaning that the physical laws are themselves influenced
by gravity in a universal way, once again. In Fig.31 the case of a violation of the
Universality of the Gravitational Redshift is visualized.
Figure 31: Violation of the Universality of the Gravitational Redshift: two different clocks, even when started at the same time, begin to show different times. [self-made figure]
One important point in this connection is that different clocks depend in a different
way on physical constants: on the fine structure constant in the case of the Coulomb interaction, or on the ratio of the electron to proton mass, for example. Time units based on different physical processes depend differently on these various constants, and are based upon different physical principles. Thus the Universality of the Gravitational Redshift states that these constants are really constant in both space and time. Again, present approaches to a theory of quantum gravity predict a time-
dependency of these constants, and thereby a violation of the Universality of the
Gravitational redshift.
Due to the relative weakness of the gravitational interaction (gravity is only strong if
huge masses like the mass of the Sun are participating) the gravitational redshift, and
in turn the universality of this effect, has only been tested to a level of approximately
0.1 ‰. Although clocks today have a relative precision beyond 10⁻¹⁵, the gravitational
field, being so very weak, reduces the achievable level of precision by approximately
10 orders of magnitude. Therefore it is important either to improve the precision of
the clocks, or to go to stronger gravitational fields like the gravitational field of the
Sun, or to make longer-term tests in order to improve the statistics. ISS projects like
ACES/PHARAO and SUMO are aiming to improve the verification of the
gravitational redshift. Indeed, with the new generation of space clocks like PHARAO
the accuracy will be such that second order effects in the gravitational redshift can be
measured. This will amount to a new test of Einstein‘s General Relativity.
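For a clock in low Earth orbit, such as one aboard the ISS, the two competing relativistic rate shifts can be estimated with the standard weak-field formulas; a sketch assuming a 400 km circular orbit (the altitude and constants are assumptions, not figures from the report):

```python
# Net fractional rate of an orbiting clock relative to one on the ground:
#   gravitational blueshift  +GM*(1/r_earth - 1/r)/c^2  (higher clock runs faster)
#   velocity time dilation   -v^2/(2 c^2)               (moving clock runs slower)
GM = 3.986004e14          # Earth's gravitational parameter, m^3/s^2 (assumed)
c = 2.99792458e8          # m/s
r_earth = 6.371e6         # m
r_iss = r_earth + 4.0e5   # ~400 km ISS altitude (assumed)

grav = GM * (1/r_earth - 1/r_iss) / c**2
kin = -GM / r_iss / (2 * c**2)        # v^2 = GM/r for a circular orbit
net = grav + kin

print(f"gravitational: {grav:+.2e}")
print(f"kinematic    : {kin:+.2e}")
print(f"net          : {net*86400*1e6:+.1f} us/day")   # negative: ISS clock lags
```

For so low an orbit the velocity term dominates, so, unlike for the high GPS satellites, the orbiting clock runs slow on balance, by a few tens of microseconds per day.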
Since clocks can be based on Hydrogen atoms, an interesting point here might be to
base a clock on anti-Hydrogen atoms, that is, to make a universality experiment between clocks and anti-clocks.
It should be stated that for Special Relativity, as well as for General Relativity, clocks are absolutely basic and provide an almost complete test of these two theories. Special Relativity indeed can be tested completely with clocks: the isotropy of the velocity of light can be tested by comparing a clock with an intrinsic direction, like the light clock, with another light clock with a different orientation, or with an atomic clock with no orientation dependence. The constancy of the speed of light can be tested by comparing a light clock with an atomic clock with no velocity dependence, and time dilation is a comparison of identical clocks in different states of motion. As far as General Relativity is concerned, the Universality of the Gravitational Redshift compares different clocks at the same position (and a measurement of the absolute value of the gravitational redshift comes from a comparison of identical clocks at different positions in the gravitational field). Therefore, with the exception of the Universality of Free Fall, all tests underlying metrical theories of gravity can be carried through by the use of clocks. Clocks are therefore an almost universal tool for fundamental relativity and gravity tests.
Testing the predictions of General Relativity. There are many very important, and
physically highly relevant, phenomena connected with General Relativity: black
holes, cosmology, gravitational lensing, gravitational waves, and, of course, the
effects which can be observed within our solar system. These latter phenomena have
been essential for the very first verification of the theory of General Relativity, as
proposed by Einstein. These solor system effects are the perihelion shift of Mercury,
the bending of light in the vicinity of the Sun, the gravitational red shift, the
gravitationally-induced time delay, and the Lense-Thirring effect. All of these effects
can be explained by General Relativity. Newton‘s theory of gravity is unable to
describe any of them.
Figure 32: Perihelion shift: In contrast to Newtonian gravitational theory, the semimajor axis of the elliptical orbit does not keep its orientation but, instead, rotates around the focus of the ellipse, where the Sun is. [self-made figure]
The first effect which was recognized to be of general relativistic origin was the perihelion shift of Mercury, see Fig.32. Though there were competing contributions to the perihelion shift from the outer planets, and also from the flattening of the Sun due to its own rotation, there was a small part that could not be explained before the advent of General Relativity. The first real test of General Relativity, however, was the test of the prediction that the path of light from distant stars should be bent by the gravitational field of the Sun, see Fig.33. This effect was confirmed by Eddington shortly after General Relativity was first described. Much later, beginning in the 1960s, the redshift of light emitted from the bottom of a tower to its top was observed using the Mößbauer effect. A further effect, which is related to the redshift, is that light near the Sun propagates more slowly than it does far away from the Sun. This has been observed by comparing the round-trip time-of-flight of a signal from the Earth to the Cassini spacecraft, on its way to Saturn, and back again, under conditions where the Sun was both near to the connecting path and far away, see Fig.34. The essential point for this very precise measurement was the use of a method – a multifrequency link – that made it possible to eliminate all solar disturbances of the frequency and, thus, of the time of flight of the photons.
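The sizes of two of these classic effects follow from simple closed-form expressions of GR; a sketch using standard solar-system constants (the numerical values are assumptions, not figures from the report): the deflection of a light ray grazing the Sun, δ = 4GM/(c²R), and Mercury's relativistic perihelion advance, Δφ = 6πGM/(a(1−e²)c²) per orbit.

```python
import math

GM_SUN = 1.32712e20      # solar gravitational parameter, m^3/s^2 (assumed)
c = 2.99792458e8         # m/s
R_SUN = 6.957e8          # solar radius, m (assumed)
RAD_TO_ARCSEC = 180 / math.pi * 3600

# Deflection of a light ray grazing the Sun's limb: 4GM/(c^2 R)
deflection = 4 * GM_SUN / (c**2 * R_SUN) * RAD_TO_ARCSEC

# Relativistic perihelion advance of Mercury: 6*pi*GM/(a(1-e^2)c^2) per orbit
a = 5.7909e10            # semimajor axis, m (assumed)
e = 0.2056               # eccentricity (assumed)
T_yr = 0.2408            # orbital period, years
per_orbit = 6 * math.pi * GM_SUN / (a * (1 - e**2) * c**2)
per_century = per_orbit * (100 / T_yr) * RAD_TO_ARCSEC

print(f"light deflection at the limb: {deflection:.2f} arcsec")          # ~1.75
print(f"perihelion advance          : {per_century:.1f} arcsec/century") # ~43
```

These are the famous 1.75 arcsec measured by Eddington's expedition and the 43 arcsec per century left unexplained by Newtonian perturbation theory.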
Figure 33: Light deflection: Light passing the Sun will be deflected towards the Sun. As a result, distant stars observed from the Earth appear to be displaced away from the Sun. [self-made figure]
Figure 34: The present best test of the gravitational time delay has been performed recently with the Cassini spacecraft. Using a multi-frequency microwave link, all disturbances of the electromagnetic waves, in particular those due to the corona of the Sun, could be eliminated. [self-made figure]
Most of the solar system effects have by now been confirmed by observations with an accuracy better than 0.1 ‰. Only recently, the precession of the orbital plane of an artificial satellite has been observed, though with a very low accuracy of 20%, mainly due to the fact that this satellite was designed only for Earth observation, and thus possessed orbital parameters that were not ideally adapted to measuring the Lense-Thirring effect.
A very important class of effects predicted by Einstein's General Relativity is related to the gravitomagnetic field. The gravitomagnetic field is a component of the gravitational field which possesses no Newtonian analogue. This field appears when gravitating masses are rotating, like, e.g., the Earth. Rotating gravitating masses
produce a post-Newtonian gravitational field, the gravitomagnetic field, in the same
way as a rotating electrically charged sphere creates a magnetic field. The
gravitomagnetic field of the Earth is illustrated in Fig.35.
Figure 35: The gravitomagnetic field of the rotating Earth. This field is not present in Newtonian
gravity so that its measurement clearly is an impressive confirmation of the new feature of
Einsteinian gravity. This field gives rise to the Lense-Thirring effect (the precession of satellite
orbits) and the Schiff or frame-dragging effect (the precession of gyroscopes).
This gravitomagnetic field implies further gravitational effects: (i) the
Lense-Thirring effect, (ii) the Schiff effect, and (iii) the gravitomagnetic clock effect.
The Lense-Thirring effect influences the orbit of satellites in such a way that the normal of
the orbital plane starts to precess, see Fig.36. This is not foreseen by Newton's theory
of gravity and is also different from the perihelion shift, where the orbital plane
remains constant and only the perihelion rotates. This effect has been seen, with,
however, very low accuracy, using the LAGEOS orbital data. The same
gravitomagnetic field also affects the orientation of inertial systems: a gyroscope starts
to precess when compared with the direction of distant stars, see Fig.37. This
effect, which is also called the "frame dragging effect", is currently under exploration
with the Gravity Probe B mission. Finally, it has been recognized by Mashhoon and
coworkers that the gravitomagnetic field also influences the time shown by, e.g.,
atomic clocks: if two satellites orbit the rotating Earth in opposite directions along a
circular path, then they show a different time when they meet again at the starting
point. Though the effect is well within the capabilities of today's atomic clocks, the
orbits must be known at the mm level, which is very hard with present-day
technology.
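The size of the gravitomagnetic clock effect can be sketched with the often-quoted approximation Δt ≈ 4πJ/(Mc²) for the desynchronization of clocks on co- and counter-rotating circular equatorial orbits; the Earth parameters below are round illustrative values.

```python
import math

# Rough size of the gravitomagnetic (Mashhoon) clock effect for the Earth,
# using the approximation dt ~ 4*pi*J/(M*c^2) for equatorial circular orbits.
J_earth = 5.86e33   # Earth's spin angular momentum, kg m^2 / s (approx.)
M_earth = 5.972e24  # Earth's mass, kg
c = 2.998e8         # speed of light, m/s

dt = 4 * math.pi * J_earth / (M_earth * c**2)
print(f"clock desynchronization per encounter ~ {dt:.1e} s")
```

At roughly 10⁻⁷ s the effect is well above the resolution of modern atomic clocks, which is why the limiting factor is the mm-level orbit knowledge mentioned above, not the clocks themselves.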
Figure 36: The Lense-Thirring effect: The orbital plane precesses around a rotating gravitating
source. While in the perihelion shift the orbital plane remains in the same orientation, here the
normal of the plane changes. In the simple example given in this Figure, the apogee (as well as the
perigee) moves on a circle.
Figure 37: The Schiff- or frame dragging effect: The axis of rotation of a gyroscope starts to
precess in a gravitomagnetic field. This effect will soon be measured with Gravity Probe B.
A further outstanding prediction of General Relativity is the existence of gravitational
waves. Their existence has been deduced indirectly via the observed energy loss of
an isolated binary neutron-star system. Direct proof of their existence is still missing
at the time of writing, although considerable experimental effort is being expended in
the form of interferometric- and bar- types of gravitational wave detectors.
Gravitational wave astronomy would open up a new window into the universe, which
would enable us to study, e.g., the dynamics of black holes, the merging of black
holes and the very early universe.
Further tests. Tests which do not question the metrical structure of gravity, but rather
the way that matter determines the gravitational field, are tests of the structure of the
gravitational potential. In this case unifying and quantum gravity theories predict the
existence of an additional Yukawa-like gravitational potential. This implies a
gravitational force different from what Newton's and Einstein's theories predict.
These Yukawa potentials are also connected with higher-dimensional theories of
gravity.
Indeed, there has been considerable interest recently in the possibility that the
Newtonian inverse square law of gravitation may not be respected by nature at
particle separations of less than 1 mm. It has been suggested that there is a possibility
that the energy scale for unification of gravitation and the other fundamental forces
may be substantially lower than the accepted Planck scale, Mp, of 10^19 GeV. This situation
could come about if gravitational interactions extend into one or more additional
macroscopic dimensions. The Strong, Weak and electromagnetic forces are confined
to the brane, being mediated by open strings, whereas gravity and perhaps other gauge
forces can move freely in the bulk as they are mediated by closed strings. The relative
weakness of gravity is then explained by its being ‗diluted‘ by the higher dimensions
of the bulk. Specifically, in a simple model the introduction of n new compact
dimensions gives a new energy scale for unification. At particle separations smaller
than the radius of the compactified manifold, Gauss' theorem dictates that the
gravitational force varies as 1/r^(2+n). For two extra dimensions, a test of the inverse
square law at 1 μm would be a test of M-theory at an energy scale of about 10 TeV.
Theoretical developments motivated by the need for a unified theory of gravitation
and the other quantum-based forces are developing at a rapid pace. It can be argued
that, irrespective of the particular theoretical framework chosen for the interpretation
of a non-null result, testing of the inverse square law of gravitation is of fundamental
importance. It seems natural, therefore, to perform as precise an experimental test as
possible given available technology. This is the aim of this proposal.
It is convenient to parametrise deviations from Newton's law by adding an additional
Yukawa potential, of strength α and range λ, to the potential energy of interaction
between two point masses:

V(r) = −(G m1 m2 / r) [1 + α exp(−r/λ)]

Fig.38 shows the current experimental limits on the inverse square law of gravitation,
together with the theoretical predictions, in (λ, α) parameter space.
As a result of Lunar Laser Ranging measurements, deviations from the ordinary
Newtonian potential at Earth-Moon distances are now excluded to very high
precision. Modern theories, however, predict large deviations in the sub-millimetre
range, and due to the weakness of the gravitational force this range is
experimentally extremely difficult to explore, although many experiments are planned
or under way to improve the present status.
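The parametrisation above can be turned into a one-line numerical check of why laboratory and sub-millimetre experiments probe entirely different regions of the (λ, α) plane; the α and λ values below are purely illustrative.

```python
import math

def fractional_deviation(r, alpha, lam):
    """Fractional deviation of the Yukawa-modified potential
    V(r) = -(G*m1*m2/r) * (1 + alpha*exp(-r/lam))
    from the pure Newtonian 1/r form, at separation r."""
    return alpha * math.exp(-r / lam)

# A hypothetical term with alpha = 1 and a 100-micrometre range would be a
# large effect at 50 um but utterly invisible at the Earth-Moon distance:
print(fractional_deviation(50e-6, 1.0, 100e-6))   # order 0.6
print(fractional_deviation(3.8e8, 1.0, 100e-6))   # underflows to zero
```

This exponential suppression is why Lunar Laser Ranging, for all its precision, says nothing about sub-millimetre-range forces.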
Fig.38: Exploration of the Newtonian potential at sub-mm distances.
Furthermore, theories of Quantum Gravity predict fluctuations in the space-time
continuum, see Fig.39. Since it is the space-time structure itself that acts
gravitationally upon material objects, these fluctuations should be exhibited in
propagation phenomena of e.g. light or quantum objects. There is now some
experimental effort aimed at searching for such space-time fluctuations.
Figure 39: Space-time fluctuations. On scales of the order of the Planck length and the Planck
time, space-time is assumed to show changes, i.e. fluctuations, in the topology (here shown in a
two-dimensional analogy where bridges and holes are created and destroyed).
Condensed matter
Systems consisting of many particles are described by statistical methods which serve
as theoretical tools for the derivation of averaged quantities that are accessible to
experiments. Such quantities are temperature, pressure, specific heat, density, etc.
The basic quantity underlying the calculation of these quantities is the partition
function from which all else can be derived. A method for the determination and
analysis of the structure of the theoretical results is the renormalization group theory,
which also has applications in other branches of physics.
The main result of this theory is that critical phenomena (phase transitions) appearing
through manipulation of parameters show a universality and a certain scaling
behaviour. The critical temperature between the normal and the superfluid state of a
system depends on the pressure. However, when we approach the critical temperature
other thermodynamic quantities like the specific heat, the magnetization, or the
density, behave universally, independently of the pressure. This means that near the
critical temperature certain quantities describing the many particle system depend on
the temperature alone. In these equations there is no reference to the actual system
used. Therefore all systems must behave in the same way. This is a very surprising
prediction.
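The scaling statement can be made concrete with the standard power-law form for the specific heat near a critical point. In the sketch below the exponent value is the small negative one usually quoted for the superfluid ⁴He lambda transition, while the amplitudes A and B are illustrative, system-dependent numbers.

```python
def specific_heat(t, A, B, alpha=-0.013):
    """Standard critical form C(t) = (A/alpha) * (|t|^(-alpha) - 1) + B,
    with reduced temperature t = (T - Tc)/Tc.  The exponent alpha is
    universal; only the amplitudes A and B depend on the system."""
    return (A / alpha) * (abs(t) ** (-alpha) - 1.0) + B

# With a small negative alpha the specific heat rises slowly (a cusp, almost
# logarithmic) as the critical point is approached:
for t in (1e-2, 1e-4, 1e-6, 1e-8):
    print(t, specific_heat(t, A=1.0, B=0.0))
```

The system-independence of the exponent is precisely the universality claim in the text; only A and B change from one substance to another.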
Another issue in condensed matter physics is that the behaviour of many-particle
systems should also depend on the size of the system. However, once again, the
functions which are thought to depend on size are actually found to be independent of
it. Again, we have a universal behaviour for these systems.
Superfluid helium (4He) is best suited to an experimental study of these predictions of
statistical physics and renormalization group theory, and many experiments have
been carried out using superfluid 4He on Earth. However, gravity induces
inhomogeneities in the system which greatly limit the attainable accuracy of these
terrestrial results.
The 2003 Nobel Prize was given to Alexei A. Abrikosov, Vitaly L. Ginzburg and
Anthony J. Leggett for their research on superfluidity and superconductivity.
Electromagnetic interaction
In our daily lives the electromagnetic interaction is vitally important; indeed, it is a
sine qua non. Electromagnetism governs telecommunications such as television and
broadcasting, and the GPS system functions via the exchange of
electromagnetic signals. The physics of all atoms and molecules is also governed
by the laws of the electromagnetic field. In order to handle all these issues in a proper
way, and to make predictions, it is important to know very precisely the form of the
laws for the electromagnetic field, that is, the Maxwell equations, see Fig.40.
Fig.40: James C. Maxwell (1831 - 1879) finally formulated the complete set of equations,
today called the Maxwell equations, which describe the physics of all electromagnetic
phenomena and which are still valid today as the equations describing the dynamics of the
classical electromagnetic field.
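For reference, the set in question, the Maxwell equations with sources ρ and J (written here in SI form), reads:

```latex
\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla\cdot\mathbf{B} = 0, \qquad
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad
\nabla\times\mathbf{B} = \mu_0\mathbf{J}
  + \mu_0\varepsilon_0\,\frac{\partial\mathbf{E}}{\partial t}.
```

The tests discussed below constrain modifications of precisely this structure: extra terms, a photon mass, or non-linearities.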
Furthermore, the particular form of Maxwell‘s equations is very important for the
validity of Special and General Relativity. Since the behaviour of light should be a
consequence of Maxwell's equations, all the tests designed to test the validity of
Special Relativity, that is the Michelson-Morley and Kennedy-Thorndike as well as
the Doppler-shift experiments, are also tests of Maxwell's equations. From the very
beginning, Maxwell's equations were the correct special relativistic equations for
electromagnetic phenomena. Redshift and UFF experiments are also sensitive to
modifications of these equations. In fact, it has been shown that only one particular
modification to Maxwell‘s equations is compatible with the UFF. In other words, any
other modification to Maxwell‘s equations leads to an electromagnetically induced
fifth force.
Although there is as yet not a single experimental hint of a violation of Maxwell's
equations, quantum gravity predicts tiny modifications to them. Therefore, it is very
important to increase the accuracy of tests of these equations. Here we review briefly
the experiments that can be carried out in order to test the validity of Maxwell‘s
equations. Owing to the variety of possible effects in this generalized theory, the
points discussed below will certainly not cover all the aspects needed for carrying out
a complete test of Maxwell equations, that is, tests which uniquely single out the
special or general relativistic form.
Linearity of Maxwell's equations. In a similar way to quantum mechanical
phenomena, electrodynamical phenomena respect the superposition principle. Strictly
speaking, Maxwell's equations must be modified to include the non-linear terms
predicted by the Heisenberg-Euler theory, the effective low-energy formulation of
quantum electrodynamics (QED). Such non-linearities have been observed through
light-by-light scattering. There are other non-linear versions of Maxwell‘s equations,
the Born-Infeld theory, for example, which was invented in order to avoid the
infinities of a point-like charge. However, for ordinary laboratory experiments, or
experiments on a satellite, such non-linearities play no rôle. For ordinary energies,
the superposition principle of the electromagnetic field can be taken as granted to very
high accuracy. The question is now whether there are other non-linearities present in
the equations for the electromagnetic field.
Mass of photon and electromagnetic "fifth" force. Visible matter consists mainly of
massive particles such as the proton, the neutron, and the electron. On the other
hand, the electromagnetic force acting between charged particles is mediated by a
massless particle. This masslessness is related to the fact that the range
of this interaction is infinite: plane electromagnetic waves can propagate to infinity
without any loss in their intensity. It has often been speculated that the photon may
also possess a mass which, of course, should be very small indeed in order to be
compatible with current observation. Even a small mass for the photon may
contribute considerably to the total mass in the universe. A non-zero mass for the
photon would lead to dispersion in its propagation, so that different frequencies would
propagate with different velocities. A further very distinguished feature of a photon
mass would be a Yukawa-like modification of the ordinary Coulomb law of
electrostatics. All astrophysical data and laboratory experiments limit the mass of the
photon to values smaller than 10^-53 kg. In a broader approach this search also runs
under the name "electromagnetic fifth force". In the same way as in searches for a
gravitational fifth force, the strength of the electromagnetic fifth force may depend on
its range. From
classical experiments exploring the electric field of charged spheres at distances of 1
cm, from astrophysical analyses over astrophysical distances, and from atomic
spectra exploring the electric field of the proton at atomic distances, there are very
strong limits on any hypothetical electromagnetic fifth force. Only in the micrometre
range are there no experimental data; this region may be accessible by plasma
physics.
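The quoted mass limit translates directly into the range of the corresponding Yukawa screening, λ = ħ/(m_γ c), the reduced Compton wavelength of the photon. A quick numerical sketch (the mass value is the rough limit quoted above):

```python
import math

hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
m_gamma = 1e-53     # kg, roughly the quoted upper limit on the photon mass

lam = hbar / (m_gamma * c)          # Yukawa range of the screened Coulomb law, m
# Fractional suppression of the Coulomb potential at laboratory scale (1 cm):
r = 0.01
deviation = -math.expm1(-r / lam)   # = 1 - exp(-r/lam), computed accurately
print(f"range ~ {lam:.1e} m, deviation at 1 cm ~ {deviation:.1e}")
```

With the current mass limit the screening length is tens of millions of kilometres, so any laboratory-scale modification of Coulomb's law from this source lies far below parts in 10¹².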
Birefringence of the vacuum. Birefringence means that the two polarization states of
electromagnetic radiation propagate with different velocities. This phenomenon is
well known from the propagation of electromagnetic waves, e.g. light or X-rays,
through crystalline solids. The question here is whether the vacuum possesses such a
property. This is a question concerning the nature of the vacuum which, as we
know from quantum theory and its zero-point energies, is not an entity without any
structure. Therefore, the vacuum may, in principle, also be equipped with anisotropic
may also lead to an energy dependent birefringence. Birefringence can be seen by
observing the propagation of different polarization states of waves originating at the
same event. For this purpose nothing is better than astrophysical observations of
polarized light — without the attendant effects due to the Earth‘s atmosphere.
Charge conservation. Charge conservation has many aspects: (i) The electron
disappearance factor, which states that at most one electron will disappear within
5.3 × 10^23 years. (ii) It has also been claimed that there is a strong connection between
charge conservation and the Pauli exclusion principle, in the sense that one of these
principles cannot be violated without a violation of the other. (iii) The neutrality of
atoms, or the equality of the electron and proton charges, which has been confirmed to
one part in 10^19, and the neutrality of the neutron (it has been tested that the charge of
the neutron is less than 5 parts in 10^18 of the charge of the electron). The first result
only states that the electron and proton charges are equal but may still change their
absolute values. (iv) A time-dependence of the fine structure constant α = e²/ħc. A
measurement of a hypothetical time dependence of the fine structure constant can be
obtained by comparing different time or length standards which depend in different
ways on α. In laboratory experiments, variations of the fine structure constant can be
detected by comparing rates between clocks based on hyperfine transitions in alkali
atoms with different atomic numbers. The comparison of an H-maser with a Hg+
clock resulted in a limit of 3.7 × 10^-14 per year. In a new proposal the time-dependence
may be measured with very high precision using monolithic resonators. (v) In the
frame of the generalized Maxwell equations, a charge non-conservation is connected
with a photon mass tensor. (vi) The behaviour of atomic clocks. Since any
modification to the potential of a point charge modifies the structure of the atomic
energy levels, spectroscopy, or the time given by an atomic clock, also reflects a charge
non-conservation. Moreover, since atomic clocks are sensitive to the fine structure
constant, a charge non-conservation additionally gives rise to a temporal change in the
energy levels.
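A drift bound of this kind is easy to translate into a target for clock comparisons. The numbers below are illustrative: a hypothetical bound of 3.7 × 10⁻¹⁴ per year on |α̇/α|, and a hypothetical clock pair resolving fractional frequency shifts at the 10⁻¹⁶ level.

```python
drift_bound_per_year = 3.7e-14   # illustrative bound on |alpha_dot / alpha|
years = 10.0

# Largest fractional change of alpha allowed by the bound over a decade:
max_change = drift_bound_per_year * years

# A clock comparison resolving fractional frequency shifts at this level
# over the same period could tighten the bound by roughly this factor:
clock_resolution = 1e-16
improvement_factor = max_change / clock_resolution
print(max_change, improvement_factor)
```

This is why clock-comparison experiments, on the ground or in space, are the natural instrument for this class of tests.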
Quantum gravity modifications. In general, quantum gravity leads to space-time
fluctuations which will modify the dynamics of fields in space-time. Quantum
gravity induced modifications of Maxwell‘s equations lead in most cases to
birefringence and dispersion. The latter means that light with a higher frequency
propagates with a velocity different from the velocity of light of a lower frequency.
The actual difference in these velocities depends on the quantum gravity theory under
consideration, and thus might be used in order to distinguish between competing
theories of quantum gravity. However, the size of the effect scales with the Planck
length and is too small to have been detected up until now. Even the frequency
analysis of high-energy gamma-ray burst events has not shown any deviation from
standard physics.
However there is another quite different consequence of a modified dispersion
relationship for photons, which may indicate that there should indeed be a quantum
gravity induced modification to the dispersion relationship. According to the standard
dispersion relationship, high energy cosmic ray protons will interact with the cosmic
microwave background photons, leading to the creation of particles; the original
proton thereby loses energy. Thus the free path of such high energy protons is
limited to the order of 100 Mpc. However, this so-called GZK cutoff (Greisen-
Zatsepin-Kuzmin) does not seem to exist, because we do indeed observe such high
energy cosmic rays. One way of explaining this observation is through a quantum
gravity modification of the dispersion relationship.
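The GZK cutoff follows from simple relativistic kinematics: photopion production p + γ → p + π on a CMB photon becomes possible above a threshold proton energy. A head-on-collision estimate (particle rest energies are standard values; the CMB photon energy is a typical, illustrative choice):

```python
# Threshold proton energy for photopion production on the CMB (head-on):
#   E_th = m_pi * (2*m_p + m_pi) / (4 * E_gamma)      (all energies in eV)
m_p = 938.27e6     # proton rest energy, eV
m_pi = 139.57e6    # charged pion rest energy, eV
E_gamma = 6.4e-4   # typical CMB photon energy at ~2.7 K, eV (illustrative)

E_th = m_pi * (2 * m_p + m_pi) / (4 * E_gamma)
print(f"GZK threshold ~ {E_th:.1e} eV")
```

The estimate gives roughly 10²⁰ eV; cosmic rays at and above this energy are nevertheless observed, which is the puzzle mentioned in the text.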
Weak interaction
Parity violation in molecules. There are theoretical predictions that, due to the
existence of the weak interaction, right- and left-handed versions of molecules give
rise to slightly different energy levels; see xx for a recent review. There are
experimental proposals to test these slight differences spectroscopically.
Although the violation of parity has been observed in many physical systems, a
violation of the invariance of physics against time reversal has been detected only
once, up until now. A violation of time reversal invariance is connected with the
existence of a permanent electric dipole moment of elementary particles. Since it is
not yet understood which physical mechanism may be responsible for time reversal
violation, a search for an electric dipole moment may shed more light on this
problem. Experiments of this
kind have the capability of changing the Standard Model of particle physics.
Strong interaction
An exploration of the physics of the strong interaction needs high energies, and thus
huge accelerators. Even higher energies, of up to 10^21 eV, are available in space.
However, although the detection of high energy cosmic rays in space is a very
interesting and strongly evolving area of modern physics, it is a case for observation
rather than for an experiment with a well defined set of initial conditions, and we
shall therefore not be concerned with it further here.
Astroparticle physics
In addition to the list of missions under study above, the ESA Topical Team on
Fundamental Physics has identified a further xx FP experiments that would benefit
from using the facilities of the ISS.
New Technologies for the ISS
Here we present a few key technologies whose development will greatly help to
improve the accuracy of fundamental physics tests on the ISS. Furthermore, most
of these techniques are also key technologies for satellite tests and are thus universal.
The development and space qualification of any of these technologies will benefit
all future space missions.
Atomic interferometry. By using atom interferometers it is now possible to determine
very precisely phase shifts due to acceleration, and, in particular, rotation. This has
naturally resulted in the development of atom interferometer based accelerometers
and gyroscopes. The present-day sensitivity of atom interferometers used as
accelerometers is δa ≈ 10^-9 m s^-2/√Hz, and as gyroscopes δΩ ≈ 6 × 10^-10 rad s^-1/√Hz.
Using atom interferometers a very precise measurement of ħ/m is also planned.
Furthermore, there is the possibility of measuring the Lense-Thirring effect with this
technique, and exploring the WEP in the quantum domain (in the dedicated HYPER
mission).
Atomic clocks. Space-qualified atomic clocks based on rubidium or caesium are
already used in space. Their intrinsic relative stability is of the order of 10^-15.
Ion clocks. Clocks based on Hg+, Cd+ or Yb+ ions have been proven to have a
stability which approaches the 10^-16 level over days, and therefore provide the most
stable clocks. However, they are not yet space proven.
H-maser. A clock with a relative accuracy of the order of 10^-15. This kind of clock
has been used for the most precise verification of the gravitational redshift (GP-A).
Fig.41: A silicon cavity [Figure from our lab]
Cavities and resonators. Optical resonators are today the most stable length
standards. For cryogenic resonators, the stability is Δl/l ≈ 2.3 × 10^-15 over
20 s. The cavities can be made of ULE (Ultra Low Expansion) materials with thermal
expansion coefficients of 10^-9/K, or of silicon (see Fig.41), which possesses a
vanishing thermal expansion coefficient at 140 K. Frequency drifts due to aging
effects can be reduced to the order of 1 Hz per day.
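To put an aging drift of order 1 Hz per day in context, it can be referenced to an optical carrier; the sketch below assumes a 1064 nm Nd:YAG laser locked to the cavity.

```python
c = 2.998e8                    # speed of light, m/s
wavelength = 1064e-9           # Nd:YAG wavelength, m (assumed carrier)
f_optical = c / wavelength     # optical carrier frequency, ~2.8e14 Hz

drift_hz_per_day = 1.0         # assumed residual aging drift
fractional_drift = drift_hz_per_day / f_optical
print(f"fractional frequency drift ~ {fractional_drift:.1e} per day")
```

A few parts in 10¹⁵ per day is comparable to the short-term stability quoted above, which is why aging, not noise, often sets the long-term limit.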
Ultrastable lasers. Laser light with a very high intensity stability and frequency
stability can be produced by diode-pumped Nd:YAG lasers. Such lasers have already
been accredited as ‗space-qualified‘, and are planned for use in the dedicated LISA
mission.
Optical frequency comb. This recently invented device is of great importance for
further improvements to Kennedy-Thorndike tests, as well as for tests of the
Universality of the Gravitational Red Shift. With the help of a mode-locked femto-
second laser, emitting a series of very short laser pulses with a well-defined repetition
rate, a frequency comb can be generated which makes it possible to compare the
microwave frequencies of atomic clocks (about 10^10 Hz) with optical frequencies
(about 10^15 Hz) with an accuracy of 10^-15.
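The comparison works because every comb tooth sits at f_n = f_ceo + n·f_rep: both the repetition rate and the offset are microwave frequencies, yet the integer n carries them up to the optical domain. A sketch with illustrative numbers:

```python
f_rep = 100e6        # repetition rate, Hz (microwave domain; illustrative)
f_ceo = 20e6         # carrier-envelope offset frequency, Hz (illustrative)
f_optical = 4.56e14  # some optical frequency to be measured, Hz (illustrative)

# Comb tooth nearest the optical line, and the residual (countable) beat note:
n = round((f_optical - f_ceo) / f_rep)
f_tooth = f_ceo + n * f_rep
beat = f_optical - f_tooth
print(n, beat)
```

Measuring the radio-frequency beat against the nearest tooth thus ties the optical frequency to the microwave standard defining f_rep.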
SQUIDs. SQUID (Superconducting Quantum Interference Device) based
measurements rely directly on two phenomena: (i) flux quantization in
superconducting loops, and (ii) the Josephson effect. Both effects are only observable
in the presence of superconductivity. A SQUID is the most sensitive magnetic flux
detector known today, and the utility of SQUIDs arises because almost any low-
frequency signal can be converted into a corresponding magnetic flux, and this can
then be detected with very high precision. Therefore, a great many SQUID
applications are known in modern experimental physics. For fundamental physics,
SQUIDs are mainly used to measure displacements, and, indirectly, linear and angular
accelerations. For these measurements any movement of (e.g.) a test mass induces an
inductive signal that is then coupled to the SQUID. Indirectly, the same phenomena
that allow SQUIDs to function as sensors make superconductive shields exceptionally
effective screens against external magnetic fields (via the Meissner effect), leading to
shielding factors of order 100
per shield.
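The flux quantization underlying the SQUID is set by the magnetic flux quantum Φ0 = h/2e, whose small size is what makes the device so sensitive:

```python
h = 6.626e-34        # Planck constant, J s
e = 1.602e-19        # elementary charge, C

phi0 = h / (2 * e)   # magnetic flux quantum, Wb
print(f"flux quantum = {phi0:.3e} Wb")
```

A good SQUID resolves a small fraction of Φ0 (of order 10⁻⁶ Φ0/√Hz), i.e. flux changes around 10⁻²¹ Wb, which is what allows displacement and acceleration signals coupled into the loop to be read out with such precision.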
The conditions on the ISS for fundamental physics
ISS environment and operational modes
The International Space Station (ISS), developed by the international space agencies, is
still under construction. The completion of its assembly is expected within the next
five years. It is now being offered as an opportunity for scientific experimentation
under conditions of weightlessness, in a quite unique environment that cannot be
attained in terrestrial laboratories. Although the range of 1st generation experimental
facilities on board the ISS has nearly been fixed, it is worth emphasizing the
possibility of carrying out more high precision experiments in Fundamental Physics
on board the ISS, in order to develop guidelines and requirements for the next
generation of ISS facilities, and for future operation of the ISS. During the last few
years experimental capabilities have increased hugely through advances in technology
and improvements in our scientific understanding. These give us very good reasons
for exploring standard physics with much higher precision than hitherto. New
experimental devices for performing greatly improved high precision tests of the basic
tenets of physics have been developed. Laser-cooling, atomic interferometry, and
atomic fountain clocks, are examples of new tools for exploring the interaction of
quantum matter with gravitational and inertial fields. And these may even be
improved further using Bose-Einstein condensates as coherent atomic sources. Very
high precision frequency standards are now provided by ultrastable resonators, new
devices for measuring tiny forces have been developed, and machining techniques
have been improved tremendously, so that sub-μm accuracy can be achieved in the
dimensions of metre-scale parts. Partly as a result of this technical progress, the
domain of Fundamental Physics has become a burgeoning, dynamic, and hugely
exciting area of science — driven by the potential for new discovery.
Advantages of free fall conditions
In many cases the sensitivity of measuring devices and/or the accuracy of the
measurement itself will increase if the experiments can be performed under conditions
of free fall, that is, under conditions of weightlessness. The advantages of such
conditions are:
The infinitely long, and periodic, free-fall. As an example, long free fall
conditions enable high precision tests of the Universality of Free Fall for all
kinds of structureless (i.e. pointlike) matter.
Long interaction times: This is, for example, hugely advantageous in atomic or
molecular interferometers, where the atoms or molecules may interact with
other external fields for a long time without falling out of the experimental
volume.
High potential differences. In a large class of experiments (e.g. tests of the
gravitational redshift), the search for signals depends on the difference in the
gravitational potential. It is obvious that this can be achieved best in space.
Large velocity changes. For macroscopic devices, e.g. testing the dependence
of light speed with respect to the laboratory velocity (Kennedy-Thorndike-
tests) the maximum velocity on Earth might be of the order a few thousand
km/h. In space this can be increased by about one order of magnitude. For
example, the velocity variations along the orbit (e.g. in a highly elliptical Earth
orbit) are 30 times higher than one can attain using the Earth's rotation.
Long distance measurements. In space, much longer distances are available
than in any laboratory on Earth, and this may be essential, e.g., for the study of
low frequency (10^-3 Hz) gravity waves using interferometric techniques, where
the strain of spacetime is to be measured at or below the 10^-21 level.
A low noise / vibration environment. Seismic noise is a limiting factor for
many experiments on Earth (e.g. for gravitational wave detectors and for
torsion balances) in the frequency range below 10 Hz.
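The velocity advantage quoted above can be checked with round numbers by comparing the orbital speed in low Earth orbit with the Earth's equatorial rotation speed (all values illustrative):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # Earth's mass, kg
R_earth = 6.371e6    # Earth's radius, m
h_orbit = 400e3      # assumed ISS-like altitude, m

v_orbit = math.sqrt(G * M_earth / (R_earth + h_orbit))   # circular orbit speed
v_rotation = 2 * math.pi * R_earth / 86164.0             # equator, sidereal day
print(v_orbit / v_rotation)
```

A circular low orbit already gives a factor of roughly 16 over the Earth's rotation; the factor of about 30 quoted in the text refers to the velocity variation attainable along a highly elliptical orbit.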
It is clear that many, but not all, of these advantages are realized on board the ISS.
Furthermore, there are some disadvantages due to the very existence and construction
of the ISS: Due to atmospheric drag, the true free-fall inside the ISS is rather short
and, due to the circular orbit, the difference in the gravitational potential of the Earth
is small. In addition, the large structure and movable parts on the ISS create a rather
large vibrational noise, and the non-negligible Earth's gravitational gradient as well as
the gravitational field of the ISS itself gives a relatively high level of residual
acceleration. Therefore, there are many high-precision experiments in Fundamental
Physics that must be carried out on specially designed satellites, having highly precise
attitude and orbital (drag-free) control. Equally, however, there are still important
advantages so that there is a substantial number of experiments in the area of
Fundamental Physics which, if carried out aboard the ISS, may yield remarkable
improvements compared to existing terrestrial laboratory results. Moreover, the ISS
may be used as an important and very appropriate test bed for certain dedicated
Fundamental Physics mission satellites. On the other hand, the ISS environment
enables experiments to be conducted in a way that would be quite impossible using
satellites. Due to the necessity for regular servicing of the space station, exchange,
repair, and improvements to experimental facilities on board the ISS are possible.
Facilities also can be brought back to Earth for post-mission analysis of effects that
may have been causing (e.g.) potential systematic errors, and, from a physical point of
view, it is of prime importance to have the capacity to repeat experiments, and to test
the reproducibility of results. Undeniably, one of the most powerful arguments for
the utilization of the ISS for FP experimentation, notwithstanding the less than ideal
environmental conditions on board, is the unrivalled opportunity for quicker and
easier access to the experimental apparatus than is conceivably possible using
dedicated satellites. In consequence, this facility must reduce considerably the time-
scales and costs involved in the realisation of such experiments.
We now discuss briefly the present International Space Station's design and operation.
Although it will become clear that the ISS environment currently circumscribes what
may be achievable as regards high precision experiments, it is interesting nevertheless
to discuss proposals on how the Space Station's operation and the facilities on board
could be improved with respect to the feasibility of carrying out even higher precision
experiments in the future.
ISS environment and operational modes
The ISS is a manned space platform. Therefore, its design, orbit, structure, and
operation has been determined by safety and logistical considerations. It is a multi-
purpose facility which therefore demands compromises from any experimental
activity. Nevertheless, the ISS has been designed to be a laboratory, although it also
has to serve as a home for the astronauts. Crew motion, ventilation systems, motors,
pumps etc. disturb the weightless environment and cause a relatively high level of
residual acceleration acting on any experiments. These internal sources as well as
external ones (radiation, gravity gradient, charging, drag, cosmic rays etc.) need a
careful analysis for high precision experiments. A comprehensive description of the
general conditions can be found in different ISS Users Guides.
ISS structure and cabin environment
At the end of its construction period the ISS will consist of several pressurized modules
with a total volume of ca. 1,200 m³. The modules and truss, including the supporting
structures for the solar panels, will cover an area of about 100 × 100 m². The total mass
will be about 420 t. This huge structure causes a great deal of vibration in the
frequency range below 0.1 Hz, and a non-negligible gravity gradient along the
laboratory modules. The complicated structure makes it difficult to analyse the
influence of vibrational noise in the low frequency range for a specific experiment
and location.
Usually, experiments are carried out in experimental racks located inside the
pressurized modules. The standard double rack structure is approximately 2,000 mm
high, 1,000 mm wide, and about 850 mm deep. This size limits the ability to carry
out free fall experiments. Any free-flying platform inside a rack would hit the wall of
the rack within a short time, due to the permanent atmospheric drag to which the station
is subjected. The total mass of an internal rack at launch cannot exceed
2,500 kg. The in-orbit mass can be increased up to ca. 10,000 kg, however. Electrical
power supply, data management, gas supply, or vacuum venting is designed
individually, and depends on the experimental requirements of the rack type. The
nominal atmosphere onboard the ISS is an Earth-normal 101.4 kPa.
ISS orbit, periodic manoeuvres, and operation modes
The ISS flies on a near-circular orbit inclined at 51.6° with an eccentricity of only
7 × 10⁻⁴. This very small eccentricity does not allow experiments that require a
variation of the gravitational potential (e.g. in-situ clock tests of the universality of the
gravitational redshift with respect to the gravitational field of the Earth). The orbital
height varies between 340 and 460 km, depending on the solar activity: it is strongly
correlated with the 11-year solar cycle, which expands the Earth's atmosphere and
increases its density at the ISS altitude around the solar activity maximum. Due to the
large cross-section of the station, atmospheric drag causes an altitude decrease of 150
to 200 m per day, which means that the station must be raised every 10 to 45 days.
Fig.42 shows the ISS altitude as a function of time. Reboost and rendezvous
manoeuvres, as well as station maintenance, require a timeline of operational modes.
Quiescent periods (microgravity mode) are interrupted frequently by periods of
maintenance and manoeuvres. A typical cycle time is around 100 days: following a
period of around 15 to 25 days for the rendezvous of spacecraft supplying the ISS, the
reboost to a higher orbit, and checkout procedures, the ISS will be operated in the
microgravity mode for about 30 days; 10 days of maintenance (standard mode), a
further 30 days of microgravity mode, and an additional 10 days of standard mode will
follow. Therefore, any experimental operation requiring a low residual acceleration
level has to be stopped from time to time.
Figure 42: The orbital height varies with the solar activity. The saw-tooth shape of the curve
results from the altitude decrease due to atmospheric drag and from the reboost manoeuvres,
which occur once every 10 to 45 days and last about 1.5 to 3 hours.
The truss of the ISS is the supporting structure for the solar panels, and it is oriented
perpendicular to most of the laboratory modules. The ISS itself flies in an attitude
such that the truss is always oriented approximately perpendicular to the orbital
plane; the tangential velocity vector therefore points along the station axis. To
optimize the thermal control, the power generation, and the communication links, the
ISS has to be re-oriented about its roll, pitch, and yaw axes, which causes additional
centrifugal accelerations. For example, the solar panels change position continuously
in order to face towards the sun.
Residual acceleration and gravity gradient
The level of the residual acceleration acting on experiments is depending on their
location within the ISS. The huge structure has many eigenfrequencies of varying
amplitudes. In addition to these station movements a huge number of sources (fans,
gyros, pumps, and crew motion) create higher frequeny vibrational noise. Fig.43
shows the mean residual acceleration as function of the frequency. The anticipated
level (red line) exceeds the station requirement in most parts of the frequency range
above 0.1 Hz. According to the ISS microgrvaity control plan, the rms amplitude
should not exceed 1 μg0 between 0.01 and 0.1 Hz, and for frequencies up to 100 Hz
the acceleration can increase to about 2 mg0. Transient disturbances should not
exceed a magnitude of atrans 1,000 g0 per axis within a bandwidth up to 300 Hz and
an integration limit of 10s atrans d t 10 μg0 s.
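As a rough illustration, the transient limits quoted above can be encoded as a simple acceptance check on a sampled single-axis acceleration trace. The function name and the rectangular sliding window are our own choices; the numerical limits are those quoted in the text (a sketch, not a flight-qualified verification tool):

```python
import numpy as np

G0 = 9.81          # m/s^2, standard gravity
UG0 = 1e-6 * G0    # one microgravity unit, ~1e-5 m/s^2

def check_transient(accel, dt, peak_limit_ug0=1_000.0, integral_limit_ug0_s=10.0):
    """Check a single-axis acceleration trace (m/s^2, sampled at step dt)
    against the transient limits quoted in the text: peak <= 1,000 ug0
    and a 10-s integral of |a| <= 10 ug0*s."""
    a = np.asarray(accel, dtype=float)
    peak_ok = np.max(np.abs(a)) <= peak_limit_ug0 * UG0
    # sliding 10-s rectangular integral of |a|
    win = max(1, int(round(10.0 / dt)))
    integral = np.convolve(np.abs(a), np.ones(win), mode="full") * dt
    integral_ok = np.max(integral) <= integral_limit_ug0_s * UG0
    return peak_ok and integral_ok

# A single 1-ms, 500-ug0 spike: well under both limits.
dt = 1e-3
trace = np.zeros(1000)
trace[500] = 500 * UG0
print(check_transient(trace, dt))  # True
```

A spike above 1,000 μg0 would fail the peak test even though its 10 s integral remains tiny, which is why both conditions are checked separately.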
Figure 43: The anticipated level of the residual acceleration (in units of 1 × 10⁻⁶ g0 ≈ 10⁻⁵ m/s²)
as a function of frequency. The dashed line marks the ISS requirement.
Due to the large extension of the ISS, a gravity gradient of up to 2 μg0 occurs
perpendicular to the station axis as a quasi-static disturbance acceleration. The
gravity gradient sets the quasi-steady acceleration limit for experiments mounted on
the station. It is obvious that high-precision gravitational experiments, which require a
residual acceleration level of less than 10⁻⁸ m/(s²·√Hz), cannot be carried out under
these circumstances. Active damping of experimental racks on the station is possible
in principle with the Microgravity Isolation Mounts (MIM), but these systems are not
effective in the low-frequency range below 0.01 Hz. Operation of these kinds of
precision experiment is only possible on specially designed free-flying platforms
orbiting the ISS, which will be discussed in detail below.
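The quoted 2 μg0 can be cross-checked with the standard tidal-acceleration estimate: at an offset d from the station's centre of mass, the radial gravity-gradient acceleration is about 3(μ/r³)d. A short sketch, with the orbital radius assumed for ~350 km altitude:

```python
MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R = 6.371e6 + 3.5e5    # orbital radius for ~350 km altitude, m
G0 = 9.81              # m/s^2

n2 = MU / R**3         # square of the orbital mean motion, 1/s^2
grad = 3 * n2          # radial tidal acceleration per metre of offset, 1/s^2

# offset at which the quasi-static tidal acceleration reaches 2 ug0
d = 2e-6 * G0 / grad
print(f"tidal acceleration: {grad:.2e} (m/s^2)/m; 2 ug0 reached at {d:.1f} m")
```

The gradient comes out near 4 × 10⁻⁶ m/s² per metre, so 2 μg0 corresponds to an offset of only about 5 m from the centre of mass, consistent with the extent of the laboratory modules.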
External conditions
Many external effects may influence experiments inside and outside the ISS. One has
to distinguish here between effects caused by external sources, like the
electromagnetic and particle radiation of the sun or the South Atlantic Anomaly of the
Earth's magnetic field, and effects caused by the station itself, like outgassing or
thermal radiation. The most prominent effects are summarized in the following.
Temperature. Although the temperature inside the ISS modules (average temperature
17 to 18 °C) is stabilized and can be controlled very well, any experiment mounted on
the outside structure or on an orbiting platform is subject to strong temperature
variations. Due to the inclined orbit, the ISS is exposed to large variations between 120
and 420 K. Experiments requiring temperature gradients of less than 1 mK/√Hz need
active thermal control and thermal shielding. Water cooling is provided at each
payload rack, and this enables a local power dissipation of ca. 6 kW. The ISS will also
carry a liquid helium Dewar (Low Temperature Microgravity Physics Facility,
LTMPF), see below.
Charging. Charging due to high-energy particle radiation is a serious problem for all
high-precision experiments in space. For gravitational experiments in particular, free-
flying test masses without electrical conduction to the outside experience non-negligible
electrostatic forces. Continuous discharging by ultraviolet light sources, which excite
electrons out of the test mass and its housing, might allow charging control, but needs
careful analysis. The charging rate is generally proportional to the radiation dose on
the test masses. Charging results from solar particle radiation, trapped particle belts,
and cosmic rays. The ISS frequently passes through the South Atlantic Anomaly, a
region of enhanced radiation caused by the misalignment of the Earth's rotation axis
with its geomagnetic axis. Because the orbital height of the ISS is relatively low, and
permanently below 1,000 km and the Van Allen belts, the energetic proton flux
originating from solar flares is nearly completely shielded and does not normally
affect experiments on the ISS. Nevertheless, highly energetic ions from cosmic ray
fluxes cannot be shielded.
Drag and the ISS environment. It has been mentioned above that the drag of the
residual gas atmosphere in the relatively low orbit of the ISS causes a permanent
deceleration and a decay of the station's orbital altitude. The station's cross-section
with respect to the flight direction varies with attitude corrections, and with changes
of the position of the huge solar panels (between ca. 850 and 3,700 m² when the ISS
is finally completed). For an average gas density of 7 × 10⁻¹² kg/m³ (corresponding to
an orbital height of 350 km), the drag force varies between ca. 0.5 and 2 N. With the
station's total mass of 420 t, the resulting drag acceleration is calculated to be 1 × 10⁻⁶
to 5 × 10⁻⁶ m/s². This value is related to the centre of mass of the entire station, and it
also sets an upper quasi-steady acceleration limit for any experiment on free-flying
platforms orbiting the station. It is a strong restriction, in particular for gravitational
experiments.
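These figures can be cross-checked with the standard flat-plate drag formula and a simple energy-balance estimate of the orbital decay. The drag coefficient C_d = 2.2 is our assumption (the text does not state one), so the numbers reproduce the quoted ranges only approximately:

```python
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R = 6.371e6 + 3.5e5  # circular orbit radius at ~350 km altitude, m
M_ISS = 420e3        # station mass, kg
RHO = 7e-12          # gas density at 350 km, from the text, kg/m^3

def drag_accel(area_m2, cd=2.2):
    """Quasi-steady drag acceleration a = (1/2) rho cd A v^2 / m."""
    v = math.sqrt(MU / R)        # circular orbital velocity, ~7.7 km/s
    return 0.5 * RHO * cd * area_m2 * v**2 / M_ISS

def decay_m_per_day(a_drag):
    """Altitude decay rate from the energy balance dE/dt = -m a_drag v."""
    da_dt = 2.0 * a_drag * R**1.5 / math.sqrt(MU)   # m/s
    return da_dt * 86400.0

a_min = drag_accel(850.0)     # solar panels edge-on
a_max = drag_accel(3700.0)    # solar panels face-on
print(a_min, a_max)           # roughly the 1e-6 .. 5e-6 m/s^2 quoted in the text
print(decay_m_per_day(a_min)) # of the order of 150 m/day, as quoted earlier
```

The decay estimate follows from equating the power lost to drag with the change of orbital energy E = −μm/(2a), which gives da/dt = 2 a_drag a^{3/2}/√μ for a near-circular orbit.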
The residual gas atmosphere around the station is also contaminated by outgassing,
venting, leaks, and thruster exhausts, which degrade surfaces and lower the vacuum
quality. In addition, atomic oxygen, the dominant component of the Earth's atmosphere
at the ISS orbital altitude, causes significant erosion of surfaces, which might be
important for experiments on platforms outside the laboratory modules. Controls on
these effects include the specification that ISS contamination sources shall contribute
no more than 10¹⁴ molecules cm⁻² to the molecular column density along any
unobstructed line of sight, and shall produce no more than 10⁻¹⁶ g/(cm² a) of total
deposition on sampling surfaces at 300 K.
On board resources and facilities for Fundamental Physics experiments
The ISS will offer some facilities and infrastructure which could be used for specific
experiments in Fundamental Physics.
Low Temperature Microgravity Physics Facility (LTMPF). The LTMPF is a self-
contained, reusable, cryogenic facility filled with 270 l liquid helium that will
accommodate low temperature experiments. It will be installed on the Japanese
Experiment Module (JEM) Exposed Facility of the ISS, i.e. on the outer side of the
JEM module. LTMPF allows access to temperatures down to 1.4 K for durations up
to six months, and enables parallel operation of two experiments. Each experiment
52
attached to the Dewar probe can occupy a volume 19 cm in diameter and 70 cm long,
and may weigh up to 6 kg. Standard electronics are available to measure and control
temperatures at the 10⁻¹⁰ K level at 2 K. Attached to each probe are the
cells and sensors for each experiment. Each probe can have several stages of isolation
platform, with separate temperature regulation on each stage, to provide the maximum
temperature stability. Ultra high-resolution temperature and pressure sensors are
based on SQUID (Super-conducting Quantum Interference Devices) magnetometers.
There are up to 12 SQUIDs shared between the two experiments. The high-resolution
thermometers have demonstrated sub-nK temperature resolution in past space
experiments. Other existing measurement techniques include resistance
thermometers, precision heaters, capacitance bridges, precision clocks and frequency
counters, modular gas handling systems, and an optical access capability. An onboard
flight computer controls all facility and instrument electronics, all ISS interfaces,
command, telemetry, and data storage during on-orbit operations.
Typical experiments to be carried out in the LTMPF are the SUMO Special Relativity
tests, the experiments to study predictions of statistical physics and renormalization
theory (BEST and SUE), and the Microgravity Scaling Theory Experiment (MISTE).
Most LTMPF experiments are sensitive to random vibrations, charged particles, and
stray magnetic fields. The rms amplitude of random vibrations at frequencies less
than 0.1 Hz is several μg0. A passive vibration isolation system attenuates higher-
frequency vibration inputs from the ISS to a level of about 500 μg0. Several layers of
magnetic shielding are built into the instrument probe to protect the experiments from
on-orbit variations in the magnetic field environment. Vibration and radiation
monitors will provide experimenters with near real-time data. The facility will be
launched filled with liquid helium, and retrieved when the helium is depleted, in an
approximately 16 to 24 months cycle.
Atomic clocks. At present there are two atomic clock ensembles under development
to be mounted on the ISS: (i) Atomic Clock Ensemble in Space / Projet d'Horloge
Atomique par Refroidissement d'Atomes en Orbite (ACES / PHARAO), developed by
a team of French research institutes and (ii) Primary Atomic Reference Clock in
Space (PARCS), developed by a team of US research institutes. The microgravity
environment of space affords the possibility of slowing down atoms to speeds well
below those used in terrestrial atomic clocks, allowing for substantial improvement in
clock accuracy.
The PHARAO clock will be operated on board ISS together with a hydrogen maser,
to establish a time scale which can be compared with terrestrial clocks to an accuracy
of 10⁻¹⁶, which would be an enormous improvement over the present level of
synchronization that is possible using GPS (Global Positioning System) clocks. Thus
an ultra-high-performance global time-synchronization system should be realised,
which will make possible new navigation and positioning applications. The clock
ensemble consists of an atomic clock based on a fountain of cold Cesium atoms, and
the hydrogen-maser clock. An additional component is the MWL (MicroWave Link),
which exchanges short signal bursts (100 ps) between clocks on Earth and the clocks
on the ISS in order to compare and synchronize them. The payload will be placed
outside the cabin space of
the ISS on an external platform. In this mission — for the first time — laser cooling
techniques and atom traps will be established and tested in space. Furthermore, also
for the first time, the performance of atom-optical elements can be tested in space.
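To put the 10⁻¹⁶ figure in perspective: a constant fractional frequency offset y accumulates a time error y·τ over an averaging interval τ. The helper below is just this one-line conversion (the function name is ours):

```python
def time_offset_ps(fractional_offset, interval_s):
    """Time error accumulated by a constant fractional frequency offset,
    returned in picoseconds."""
    return fractional_offset * interval_s * 1e12

# A clock comparison at the 1e-16 level corresponds to only ~8.6 ps
# accumulated over one day.
print(time_offset_ps(1e-16, 86400.0))
```

Resolving picosecond-level offsets over a day is what drives the need for the dedicated microwave link rather than standard GPS time transfer.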
PARCS is a similar atomic-clock mission scheduled to fly on the ISS in 2008. The
mission involves a laser-cooled cesium atomic clock, and a time-transfer system using
Global Positioning System (GPS) satellites. PARCS will fly concurrently with
SUMO (Superconducting Microwave Oscillator), a different sort of clock that will be
compared against the PARCS clock to carry out the same type of Special Relativity
tests mentioned above.
Exposed facilities. Besides the standard racks inside the laboratory modules, there are
exterior attachment points for carrying exposed experiments. There are four
dedicated sites on the starboard side of the truss where external payloads can be
attached: two attachment points on the nadir, or Earth-facing, side of the truss, and
two on the opposite, or zenith, side. These points can be used for detectors like the
Alpha Magnetic Spectrometer (AMS), a detector searching for antimatter and dark
matter by measuring the composition of cosmic rays with high accuracy.
There are also some exposed platforms attached to the laboratory modules which can
carry additional experimental facilities, such as EUSO (Extreme Universe Space
Observatory), an experiment to observe highly energetic cosmic rays by detecting the
reflected Cherenkov radiation produced when an extremely energetic cosmic ray
interacts with the Earth's atmosphere. EUSO is to be mounted onto the Columbus
External Payload Facility, from where an unobstructed nadir view is possible.
Proposals for improvement
A very low residual acceleration level in the low frequency range is a strong
requirement for many fundamental physics experiments, and it seems to be impossible
to carry out these kinds of experiment on the ISS with enough precision.
Nevertheless, changes in the operational concept could improve the conditions
tremendously, and these will be discussed briefly here.
Operational concept
The operational concept of the ISS is based on the combination of laboratory modules
and astronauts' habitats. This concept results in many compromises with respect to the
experimental conditions. Although many parameters (e.g. the orbital height and
inclination) are fixed and cannot be changed, it is worth discussing how laboratory
operations and crew activities could be separated. Experiments running on small free-
flying platforms co-orbiting with the ISS would experience a much lower level of
residual acceleration. Because the platforms would have to be serviced from the
station, they cannot be in perfect free fall, but could be bound to the station in such a
way that they are operated in the free fall mode for periods of at least several days,
before having to be pulled back to the station. A docking mechanism on the ISS
would enable exchange, repair, or recovery of the platforms, whilst allowing
platforms to be rigidly clamped during reboost manoeuvres. Depending on their
masses, the platforms could be separated from the station by between 500 and
2,000 m. High-precision attitude and orbit control would guarantee an acceleration
level of better than 10⁻¹⁰ m/(s²·√Hz) in the frequency range above 10⁻³ Hz. Tidal
forces (gravity gradients) resulting from the ISS (mass: 420 t) are negligible at
distances of more than 100 m.
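Whether the station's own gravity matters at these separations can be estimated by treating the ISS as a 420 t point mass (a crude idealisation of its real, extended mass distribution):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_ISS = 420e3   # station mass, kg

def iss_pull(distance_m):
    """Quasi-static gravitational attraction of the ISS, point-mass model."""
    return G * M_ISS / distance_m**2

for d in (100.0, 500.0, 2000.0):
    print(f"{d:6.0f} m: {iss_pull(d):.1e} m/s^2")
```

At the proposed 500 to 2,000 m separations the quasi-static pull is of order 10⁻¹⁰ m/s² or below, i.e. at or beneath the targeted acceleration level, and in any case it is a slowly varying bias rather than noise in the measurement band above 10⁻³ Hz.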
This concept would not only improve the conditions for fundamental experiments, but
would also allow repetition of high precision experiments after checking them on
Earth — something which is never possible for normal satellite experiments.
Free-flying platforms with drag-free control
The fine attitude control, also called drag-free control, of the free-flying platform
should be carried out with an accuracy of at least 10⁻¹⁰ m/(s²·√Hz). The general
principle of drag-free control is to make the trajectory of the satellite's or the
platform's centre of mass as close as possible to a geodesic. To this end, a gravity
reference sensor is used, consisting of a test mass whose movements relative to the
satellite are measured in all 6 degrees of freedom (see Fig. 44).
Figure 44: Schematic of a drag-free satellite. The red test mass is shielded by the satellite against
all disturbance forces. Following the test mass, the satellite moves on a geodesic. [from
http://www.nrich.maths.org.uk/mathsf/journalf/mar00/art1/]
A common concept is to detect the movement capacitively. Electrodes for sensing
and active servo-control surround the freely floating test mass. The control signal is
then used to control the satellite's movement and attitude using a set of thrusters. For
low Earth orbits, colloidal thrusters with a maximum thrust of about 0.2 mN are
applicable. The thrust force must be controlled with an accuracy of 0.1 μN. To
control all 6 degrees of freedom, a minimum of 3 clusters of 4 thrusters each (i.e., 12
thrusters) is necessary.
The most important external disturbance in the ISS orbit is the atmospheric drag.
Depending on the orbital altitude, the aerodynamic drag on the platform would be
between 10 and 150 mN, the resulting torques on it being between 10 and 100 μNm.
In addition, several other disturbing effects would occur:
- Disturbing forces from interactions with electromagnetic fields: solar radiation,
Earth albedo, and terrestrial thermal radiation cause forces on the satellite's surface.
- Gravitational forces: tidal forces have a strong influence, because the drag-free
reference point is usually not identical with the test mass's centre of gravity. These
effects must be determined from models such as the NASA Earth Gravity Model of
1996 (EGM96). Additional effects arise through the mutual attraction of satellite and
reference (test) mass, because the positions of their centres of mass vary with
temperature variations in orbit.
- Coupling between test mass and satellite: feedback reactions between the test mass
position sensors and the test mass cage cause disturbances.
- Interactions with the Earth's magnetic field: these effects need modelling (e.g. using
the International Geomagnetic Reference Field, IGRF).
- Charging: electrostatic forces are caused by interactions with highly energetic
cosmic rays, but also by proton charging during the frequent passages through the
South Atlantic Anomaly. These passages can create charges of more than
10⁻¹⁵ C/kg; solar flares can cause charging of up to 10⁻⁹ C/kg.
Drag-free control of the satellite and the test mass dynamics clearly need to be
modelled, and the control concept has to minimize all non-gravitational accelerations
inside the satellite. The acceleration must be minimized at the test mass location,
which is denoted the drag-free reference point. Usually, the states of satellite and test mass
are measured directly or they may be determined by an observer model, based on real
observations. The estimated states, the output of the observer model, are fed into the
controller, which then commands the thrusters. The external disturbances have in
most cases a constant or modellable part. The estimated values can then be fed back
directly to cancel out these disturbances, the residuals only being compensated by the
controller.
Drag-free control is a state-of-the-art concept for high precision attitude and orbit
control and can easily be adapted to small free-flying platforms.
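The control principle described above can be illustrated with a toy one-dimensional simulation: the satellite is decelerated by drag, the shielded test mass is not, and a PD controller acting on the measured gap commands a compensating thruster acceleration. Gains, time step, and drag level are illustrative choices, not flight values:

```python
import numpy as np

def simulate_drag_free(steps=20000, dt=0.01, a_drag=4e-6, kp=4.0, kd=4.0):
    """Toy 1-D drag-free loop: the satellite is pushed by drag, the test
    mass is not; a PD controller on the measured gap commands thrust so
    the satellite follows the free-falling test mass."""
    x_s = v_s = x_t = v_t = 0.0   # satellite / test-mass position, velocity
    gaps = []
    for _ in range(steps):
        gap = x_s - x_t
        a_cmd = -kp * gap - kd * (v_s - v_t)   # thruster command, m/s^2
        v_s += (a_drag + a_cmd) * dt
        x_s += v_s * dt
        x_t += v_t * dt                        # free-falling reference
        gaps.append(gap)
    return np.array(gaps)

gaps = simulate_drag_free()
# After the transient, the gap settles near the steady-state offset a_drag/kp.
print(gaps[-1])  # ~1e-6 m for these illustrative gains
```

The residual steady-state offset a_drag/kp shows why a real controller also feeds an estimate of the (largely modellable) drag directly forward, leaving only the unmodelled residuals to the feedback loop, as described in the text.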
Conclusion and outlook
Fundamental Physics, and in particular gravitational physics, needs experiments to be
carried out in a space environment, and through the course of many studies various
satellite-based experiments have been developed. Nevertheless, as a result of
financial constraints it seems questionable whether these experiments can be carried
out using dedicated satellite missions in the near future. It is therefore necessary also
to work on concepts that will allow the realization of fundamental physics
experiments on the ISS. Although the ISS may not be an ideal laboratory, there are
ways of improving the situation, such as the use of free-flying platforms orbiting the
station. These concepts are based on existing state-of-the-art technology, and so could
be realized within a short time. Provided that the operational concept of the ISS is not
immutable, it makes sense to study these concepts intensively now.
List of possible experimental projects
The following list contains proposals and ideas for Fundamental Physics experiments
under weightlessness, as they have been identified so far by the Topical Team
"Fundamental Physics on ISS". Some of these proposals will be described in more
detail in the next section. Authors and relevant publications for each proposal are
given:
Testing the Universality of Free Fall for Anti-hydrogen
o J. Walz and T.W. Hänsch: A Proposal to Measure Antimatter Gravity
Using Antihydrogen Atoms, Gen. Rel. Grav. 36, 523 (2003).
Testing the Universality of Free Fall for charged matter
o H. Dittus, C. Lämmerzahl, A proposal for testing the Weak
Equivalence Principle for charged particles in space, in A. Macias, F.
Uribe and E. Diaz (eds.), Developments in Mathematical and
Experimental Physics (Kluwer/Academic Press, New York 2002),
p.257.
o H. Dittus, C. Lämmerzahl and H. Selig: Testing the Universality of
Free Fall for Charged Particles in Space, Gen. Rel. Grav. 36, 571
(2003).
Testing Newton's law for very short distances
o H.J. Paik, M.V. Moody, D.M. Strayer, Short-range Inverse-square
Law Experiment in Space, Gen. Rel. Grav. 36, 523 (2003).
o C.C. Speake, G.D. Hammond, C. Trenkel, The Feasibility of Testing
the Inverse Square Law of Gravitation at Newtonian Strength and at
Mass Separation of 1 µm, Gen. Rel. Grav. 36, 503 (2003).
Tests of Newton's law and Measurement of the Gravitational constant
o N. Lockerbie, ISLAND - Inverse Square Law Acceleration
Measurement using iNertial Drift, Internal Report, Gen. Rel. Grav. 36,
593 (2003).
Special Relativity and General Relativity tests on ISS (Cavities)
Universality of Gravitational Redshift
o H. Müller, C. Braxmaier, S. Herrmann, O. Pradl, C. Lämmerzahl, J.
Mlynek, S. Schiller, A. Peters, Testing the foundations of
relativity using cryogenic optical resonators, Int. J. Mod. Phys. D 11,
1101 (2002).
o See the review: C. Lämmerzahl, G. Ahlers, N. Ashby, M. Barmatz, P.
L. Biermann, H. Dittus, V. Dohm R. Duncan, K. Gibble, J. Lipa, N.
Lockerbie, N. Mulders, C. Salomon: Experiments in Fundamental
Physics Scheduled and in Development for the ISS, Gen. Rel. Grav.
36, 615 (2004).
o V. Alan Kostelecký and Matthew Mewes: Signals for Lorentz violation
in electrodynamics, Phys. Rev. D 66, 056005 (2002).
o Robert Bluhm, V. Alan Kostelecký, Charles D. Lane, and Neil Russell:
Probing Lorentz and CPT violation with space-based experiments,
Phys. Rev. D 68, 125008 (2003).
Superfluidity / Critical coefficients / Tests of Renormalization Group
Theory Experiments SUE / BEST / DYNAMX / MISTE
o See the review: C. Lämmerzahl, G. Ahlers, N. Ashby, M. Barmatz, P.
L. Biermann, H. Dittus, V. Dohm R. Duncan, K. Gibble, J. Lipa, N.
Lockerbie, N. Mulders, C. Salomon: Experiments in Fundamental
Physics Scheduled and in Development for the ISS, Gen. Rel. Grav.
36, 615 (2004)
Coulomb potential
o D.F. Bartlett and S. Lögl: Limits on an electromagnetic fifth force,
Phys. Rev. Lett. 61, 2285 (1988). The interesting range between 1 μm
and 1 mm may be explored using cold charged plasmas.
Watt balance / Determination of mass related to Planck's constant
o E.R. Williams, R.L. Steiner, D.B. Newell, P-T. Olsen, Accurate
Measurement of the Planck Constant, Phys.Rev.Lett. 81, 2404 (1998)
Spin coupling
o Anomalous spin couplings have been discussed generally in, e.g., C.
Lämmerzahl: Quantum tests of the foundations of General Relativity,
Class. Quantum Grav. 15, 13 (1998), C. Lämmerzahl: Quantum tests
of space-time structure, in P.G. Bergmann, V. DeSabbata, G.T. Gillies,
P. Pronin (eds.): Spin in gravity -- Is it possible to give an experimental
basis to torsion?, (World Scientific, Singapore 1998).
Cold hydrogen clocks
o R.L. Walsworth, I.F. Silvera, H.P. Godfried, C.C. Agosta, R.F.C.
Vessot, and E.M. Mattison, Hydrogen masers at temperatures below 1
K, Phys. Rev. A 34, 2550 (1986).
Entanglement
o M. Aspelmeyer, T. Jennewein, H.R. Böhm, C. Brukner, R.
Kaltenbaeck, M. Lindenthal, G. Molina-Terriza, J. Petschinka, R.
Ursin, P. Walther, A. Zeilinger, M. Pfennigbauer, W.R. Leeb:
Quantum Communications in Space, prepared for the European Space
Agency under ESTEC/Contract. No. 16358/02/NL/SFe.
o A. Zeilinger et al., QUEST Proposal (QUantum Entanglement for
Space ExperimenTs).
Space-time fluctuations
o G. Amelino-Camelia: Gravity-wave interferometers as probes of a
low-energy effective quantum gravity, Phys. Rev. D 62, 024015
(2000).
o S. Schiller, Probing the quantum fluctuations of space with an
ISS-based free-flyer, Internal Report of the ESA Topical Team
"Fundamental Physics on ISS" (2004).
o S. Schiller, C. Lämmerzahl, H. Müller, C. Braxmaier, S. Herrmann, A.
Peters, Experimental limits for low frequency space-time fluctuations
from ultrastable optical resonators, Phys. Rev. D 69, 027504 (2004).
CPT on anti-hydrogen – charged anti-hydrogen
o R. Bluhm, V. A. Kostelecký, and N. Russell: CPT and Lorentz Tests in
Hydrogen and Antihydrogen, Phys. Rev. Lett. 82, 2254-2257 (1999)
Parity violation in molecules
o A particular aspect of this is the search for an electric dipole moment
of the electron (EDM); see, e.g., J.J. Hudson, B.E. Sauer, M.R. Tarbutt
and E.A. Hinds: Measurement of the electron electric dipole moment
using YbF molecules, Phys. Rev. Lett. 89, 023003 (2002).
Cold atoms
o Cold atoms are used for precise clocks in space, like PHARAO, PARCS
and RACE. These clocks are described in the review: C. Lämmerzahl,
G. Ahlers, N. Ashby, M. Barmatz, P. L. Biermann, H. Dittus, V. Dohm,
R. Duncan, K. Gibble, J. Lipa, N. Lockerbie, N. Mulders, C. Salomon:
Experiments in Fundamental Physics Scheduled and in Development
for the ISS, Gen. Rel. Grav. 36, 615 (2004), and in the references cited
therein.
o see forthcoming report of the ATOPIS Topical Team
Ion interferometry
o Until now, charged particle interferometry has been realized with
electrons only. For atomic and molecular ions, the intensity of the
corresponding sources is too low. See, e.g., F. Hasselbach: Selected
topics in charged particle interferometry, Scanning Microscope 11, 234
(1997).
o See forthcoming report of the ATOPIS Topical Team
Atom laser and amplification of matter waves
o See forthcoming report of the ATOPIS Topical Team
Bose-Einstein-Condensates
o See forthcoming report of the ATOPIS Topical Team
Further references:
- C. Lämmerzahl, H. Dittus, Fundamental Physics in Space: A Guide to Present
Projects, Ann.Phys. (Leipzig) 11, 95 (2002)
- General Relativity and Gravitation, Special issue: Fundamental Physics on the
ISS (eds.: C. Lämmerzahl, H. Dittus), vol.36, no.3 (2004)
Possible future fundamental physics experiments on the ISS and on an ISS-based freely flying platform
In this section we describe some of the above proposals in more detail.
Testing the Equivalence Principle for Anti-hydrogen
The recent production of cold anti-hydrogen atoms at CERN opens a new field of
experiments on anti-matter gravity. The big advantage of anti-hydrogen over
anti-protons is that anti-hydrogen is neutral and thus not subject to stray
electromagnetic fields, so that in equivalence principle tests the free-fall behaviour is
much easier to observe. Furthermore, anti-hydrogen also makes it possible to perform
spectroscopy experiments searching for violations of the CPT theorem and, thus, of
the Universality of the Gravitational Redshift, or LPI.
Until now, the temperature of the anti-hydrogen atoms has been a critical parameter.
In experiments on anti-matter gravity, temperatures in the 10 – 100 micro-Kelvin
range are desirable. This is still about six orders of magnitude lower than what is
achievable for anti-hydrogen with present-day techniques.
Laser-cooling of magnetically trapped anti-hydrogen should be feasible in the future
and will enable very precise tests of fundamental symmetries by high-resolution
spectroscopy. However, the photon recoil on the Lyman-alpha transition sets a
temperature limit at a few milli-Kelvin which is not sufficiently low for experiments
in anti-matter gravity.
However, a new scheme (see Fig.45) has been proposed to produce ultracold anti-
hydrogen atoms. This scheme is based on the sympathetic cooling of positive anti-
hydrogen ions using laser-cooled ions of ordinary matter, and subsequent photo-
detachment. The proposed scheme has the additional advantage that it readily lends
itself to a time-of-flight technique for measuring the gravitational acceleration of anti-
matter.
Fig.45: Scheme for testing the Universality of Free Fall for Anti-hydrogen. [self-made
figure]
Testing Newton's law for very short distances
Fig.46 shows the current experimental limits on the inverse square law of gravitation,
together with the theoretical predictions, in the (α, λ) parameter space. We propose
here a test with the potential sensitivities indicated in the plot. We suggest carrying
out a space-based test of the inverse square law of gravitation at Newtonian strength
and at particle separations of 1 μm. An analysis of systematic uncertainties, such as
spurious forces due to the Casimir force, and of various sources of random
uncertainty, including those due to patch fields and detector noise, suggests that a
cryogenic, drag-free space environment is necessary, as thermal and vibrational noise
could potentially be reduced to the required level in such an environment.
We adopt a geometry that is similar to that of the source mass in Chiaverini et al., but
here both the source and test mass plates are density-modulated. The current
experiment is designed such that the interacting masses generate transverse forces on
the plates, and in this respect it is therefore similar to that of Hoyle et al. The
separation between the surfaces of the source and test masses is of crucial importance,
and it sets the magnitude of the residual Casimir interactions and of the patch-field
forces. In addition, the demands on machining tolerances are set by this parameter.
We have chosen a value of 1.3 μm for the surface separation, as a compromise
between signal strength and these spurious sources of noise.
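Deviations from the inverse square law are conventionally parametrized by a Yukawa term, V(r) = −(G m₁ m₂ / r)(1 + α e^(−r/λ)); "Newtonian strength" then means α ≈ 1 at a range λ ≈ 1 μm. The ratio of the Yukawa force correction to the Newtonian force between two point masses follows directly (a point-mass sketch; the real experiment integrates this over the stripe geometry):

```python
import math

def yukawa_force_ratio(r, alpha, lam):
    """Ratio of the Yukawa force correction to the Newtonian force between
    two point masses, for V(r) = -(G m1 m2 / r)(1 + alpha*exp(-r/lam)).
    Differentiating V gives F = (G m1 m2 / r^2) *
    [1 + alpha*(1 + r/lam)*exp(-r/lam)]; this returns the bracketed
    correction relative to 1."""
    return alpha * (1.0 + r / lam) * math.exp(-r / lam)

lam = 1e-6   # 1 um range
print(yukawa_force_ratio(1e-6, 1.0, lam))   # 2/e ~ 0.74 at r = lambda
print(yukawa_force_ratio(3e-6, 1.0, lam))   # ~0.2 at r = 3*lambda
```

Even for α = 1 the signal is already reduced to about 74% of the Newtonian force at r = λ and to about 20% at r = 3λ, which is why the 1.3 μm surface separation is such a sensitive design parameter.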
The test mass, shown in Fig.47, is a cube that could possibly be manufactured
from single-crystal silicon. The top and bottom surfaces are recessed to leave a box
structure with a plane across the equator (the x-z plane). A stripe pattern made from
two metals of different density is produced photolithographically on the external
faces of the cube whose normals are parallel to the z and -z directions. The stripes are
parallel to the x-axis. A good choice for these two metals is gold and copper, as they
have similar conductivities, although a pair with a larger density contrast could
possibly be used. The test mass cube sits in a channel, also possibly constructed from
silicon, shown in Fig.47. Patterned onto the inner surfaces of the channel are
arrays of source masses of similar form to the test masses. Linear motion of the
channel in the y direction will produce a gravitational torque on the test mass about
the x-axis, provided that one of the patterns on the channel (or cube) is offset in the y
direction by half the pitch of the stripe pattern.
Fig.46: This proposal aims at a sensitivity shown by the thick, blue dashed line
(labelled "current proposal" in the plot). [self-made figure]
The range of the inverse square law that can be tested clearly depends on the width
and depth chosen for the cross-section of the strips. However, given a separation of
the surfaces of the mass distributions of 1.3 μm, the depth of the strips should also be
approximately of this size. A value of 3 μm for both width and depth offers the
possibility of achieving an experimental sensitivity equal to gravitational strength at
1 μm and a signal-to-noise ratio at 3 μm of approximately 10. In order that any spurious
signal can be unambiguously identified as non-Newtonian, the gravitational
background interaction must be reliably calculated to an accuracy of about 10%. We
suppose that this is possible, for example, by using electron microscopy to determine
the dimensions of the stripe profiles. The transverse force for a range of 1 μm is
maximised for a pitch (periodicity) of the pattern of 9 μm. The interleaved strips are
therefore 6 μm in width. We have set the area of the patterned faces of the cube to be
10⁻⁴ m². It
is important to cover the gold/copper pattern with a continuous metallic layer. This
serves to reduce residual Casimir forces that are produced by differences in the
conductivities of gold and copper and to eliminate electrostatic forces due to the
difference between their contact potentials. These issues and further design details are
discussed in reference 1.
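Deviations of this kind are conventionally parametrized by a Yukawa addition to the Newtonian potential, V(r) = -G m₁ m₂ (1 + α e^(-r/λ)) / r. The following minimal sketch (unit masses and a 1 μm range are assumed for illustration; it is not part of the proposal itself) shows how a gravitational-strength Yukawa term (α = 1) modifies the force at separations comparable to the range λ:

```python
import math

G = 6.674e-11   # Newtonian gravitational constant [m^3 kg^-1 s^-2]

def yukawa_force(m1, m2, r, alpha, lam):
    """Radial force from V(r) = -G m1 m2 (1 + alpha exp(-r/lam)) / r.

    Differentiating gives F = (G m1 m2 / r^2) (1 + alpha (1 + r/lam) exp(-r/lam)).
    """
    newton = G * m1 * m2 / r**2
    return newton * (1.0 + alpha * (1.0 + r / lam) * math.exp(-r / lam))

# Ratio of total to Newtonian force for unit masses at a separation equal to
# the range lam, for a gravitational-strength Yukawa term (alpha = 1):
lam = 1e-6      # 1 micron, the range targeted by this proposal
r = lam
ratio = yukawa_force(1.0, 1.0, r, 1.0, lam) / (G / r**2)
print(ratio)    # 1 + 2/e ~ 1.74: a ~74% force excess at r = lam
```

At separations much larger than λ the extra term dies away exponentially and the force reverts to its Newtonian value, which is why the micron-scale surface separation chosen above is essential.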
Fig.47: The test mass cube and channel structure (axes x, y, z; scale bar 1 cm). The
facing surfaces are patterned with strips of alternate metals (Cu and Au). [self-made
figure]
Motion of the cube is sensed using detectors located on either side of the
equatorial plane. These sensors would possibly be interferometric, and the equatorial
plane would then be reflective. In addition, levitation forces, which could be magnetic
or electrostatic, are applied to the equatorial plane, the internal surfaces normal to the
z-axis, and the external surfaces perpendicular to the y-axis. The experimental
procedure then comprises the measurement of the rotation of the cube as a function of
the motion of the channel source mass structure in the y direction. If the amplitude of
this motion were, say, 10 times the pitch of the stripe pattern (~0.1 mm), the resulting
torques could be examined for signals with the periodicity of the pattern. This would
enable signals to be spectrally distinguished from the disturbances due to motion of
the source mass and other random surface forces.
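This spectral separation can be illustrated with a toy model. The numbers below (drive span, torque amplitude, noise shape) are hypothetical, not design values; the point is only that a torque component at the stripe period stands out cleanly in the spatial spectrum of the recorded signal, even on top of a slow drive-related disturbance:

```python
import numpy as np

# Toy model of the proposed signal extraction. All numbers here are
# hypothetical illustrations, except the 9 um pattern pitch.
pitch = 9e-6                                             # stripe pattern pitch [m]
y = np.linspace(0.0, 10 * pitch, 4096, endpoint=False)   # channel displacement
signal = 1e-3 * np.sin(2 * np.pi * y / pitch)            # torque at the pattern period
drift = 1e-3 * y / y[-1]                                 # slow, drive-related disturbance
torque = signal + drift

# The periodic component stands out at the spatial frequency 1/pitch:
spectrum = np.abs(np.fft.rfft(torque - torque.mean()))
k = np.fft.rfftfreq(len(y), d=y[1] - y[0])               # spatial frequencies [1/m]
peak = k[np.argmax(spectrum[1:]) + 1]                    # skip the DC bin
print(peak * pitch)                                      # ~1.0: peak at the stripe period
```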
Fig.48: Noise contributions [self-made figure]
Fig.48 shows various contributions to the random noise that would be experienced
assuming a temperature of 2 K. Curve 1 represents the signal level achievable after an
integration period of 12 days (equivalent to a bandwidth of 10⁻⁶ Hz). Curve 2 shows
the thermal noise due to viscous gas damping. Curve 3 shows the thermal noise
assuming a fused silica suspension. Curves 4 and 5 show the expected noise
limitations due to SQUID and interferometric sensors. We have included the inherent
stiffness of the SQUID sensor, whereas the interferometer has no stiffness. The final
curve is an estimate of the residual ground vibrations on a terrestrial vibration
isolation stage.
It appears that a relatively low frequency is required to reduce the stiffness of the
suspension and, therefore, its intrinsic thermal noise. The major justification for the
space environment is the need for vibration isolation at frequencies below 1 Hz, where
the fundamental thermal noise is reduced. This can only be achieved in a drag-free
environment. We also suggest that, as the space-based suspension does not support
the weight of the test mass, its stiffness and thermal noise could be further reduced.
[Plot: acceleration noise limits, log10(acceleration in m/s²) versus log10(frequency in
Hz); curves: 1 signal, 2 gas thermal noise, 3 suspension noise, 4 SQUID, 5
interferometer, 6 ground vibrations.]
This may reduce loss mechanisms and enable the experiment to be performed at a
higher temperature.
Test of G and Newton's inverse square law for medium distances
The gravitational constant and Newton's 1/r potential in the range of 1 m may be
determined and tested using the Inverse Square Law Acceleration measurement
iNertial Draft (ISLAND) setup. In this setup, one has a ring-shaped gravitating source
mass and a target mass which moves along the symmetry axis through the source.
From the continuous measurement of the target mass's velocity and the
corresponding changes in the velocity due to the gravitational interaction with the
ring mass, one can deduce the strength and the form of the gravitational attraction,
that is, Newton's constant and whether gravity acts through a 1/r or a Yukawa
potential.
The target mass of about 1 kg will be launched, e.g., by an induction driver, and is
coated with a high-resistivity material. Its velocity, which for an initial velocity of
about 1 cm/s changes by about 2% during the flight through the ring-shaped mass of
about 100 kg, will be measured using laser-interferometric techniques. During one
measurement the apparatus is free-flying inside a vacuum can so that it does not
experience the acceleration of the ISS. Each measurement will take between 20 s and
2 min.
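The principle can be sketched with energy conservation on the ring's axis, where the Newtonian potential of a thin ring of mass M and radius a is Φ(z) = -GM/√(a² + z²). The ring radius and launch position below are assumed, illustrative values (the report does not specify the geometry), so the computed fractional velocity change is only indicative:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]

def ring_potential(M, a, z):
    """Newtonian potential on the symmetry axis of a thin ring of mass M, radius a."""
    return -G * M / math.sqrt(a * a + z * z)

def speed(v0, M, a, z0, z):
    """Axial speed at z from energy conservation, starting at z0 with speed v0."""
    return math.sqrt(v0**2 + 2.0 * (ring_potential(M, a, z0) - ring_potential(M, a, z)))

M, a = 100.0, 0.05   # ring mass [kg] and assumed ring radius [m] (illustrative)
v0, z0 = 0.01, 1.0   # launch speed [m/s] and assumed start position [m]

v_centre = speed(v0, M, a, z0, 0.0)
print((v_centre - v0) / v0)        # fractional speed-up at the ring centre
print(speed(v0, M, a, z0, -z0))    # equal to v0 again, by symmetry
```

The symmetry of the outgoing and incoming legs is what allows local gravity gradients to be eliminated by reversing the apparatus, as noted below.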
Fig.49: The set-up of the ISLAND project for measuring G and testing Newton's
inverse square law at cm distances. [self-made figure]
The experiment may be carried out over months, and for each run 10⁶ data points may
be taken. The expected accuracy in the determination of G is 10⁻⁶, which is almost
three orders of magnitude better than present determinations. The same is expected
for the test of Newton's 1/r law. Reversal of the entire apparatus will eliminate local
gravity gradients. It is clear that microgravity is a prerequisite for this project.
Optical tests of the constancy of the speed of light
Tests of the constancy of the speed of light benefit from large variations in the
velocity of the whole apparatus. On Earth, only velocities up to about 1000 km/h can
be achieved. In contrast, in space velocities of up to about 30 000 km/h can be
achieved. In particular, the ISS with its low orbit (the lower the orbit, the higher the
velocity) is well suited for that.
Since it is very difficult to maintain cryogenic temperatures on board the ISS, room-
temperature resonators or resonators cooled to 140 K (that is, about 130 degrees
below 0 °C) are available, exhibiting stability characteristics similar to those of
cryogenic resonators. These resonators are made of ULE (Ultra-Low Expansion)
glass ceramic or of silicon, which has a vanishing first-order thermal expansion
coefficient near 140 K. With these resonators and the velocity variations of the ISS it
should be possible to improve tests of the constancy of the velocity of light by three
orders of magnitude.
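One ingredient of this advantage can be made quantitative. In resonator tests of this kind the relevant figure of merit scales with the modulation of (v/c)², which is dominated by the cross term between the platform's own speed and the Earth's orbital motion around the Sun. A rough sketch with round numbers (the values below are assumptions for illustration, not figures from this report):

```python
c = 2.998e8          # speed of light [m/s]
v_earth = 2.98e4     # Earth's orbital speed around the Sun [m/s]
v_iss = 7.7e3        # ISS orbital speed [m/s]
v_lab = 1000 / 3.6   # ~1000 km/h, the velocity variation attainable on Earth [m/s]

# The dominant periodic term in the squared laboratory velocity is the cross
# term between the platform's speed and the Earth's orbital motion:
mod_iss = 2 * v_iss * v_earth / c**2
mod_lab = 2 * v_lab * v_earth / c**2
print(mod_iss)            # ~5e-9 fractional modulation per orbit
print(mod_iss / mod_lab)  # the ISS gains roughly a factor v_iss / v_lab ~ 28
```

The full three-orders-of-magnitude improvement quoted above combines this velocity gain with the long integration times and stable resonators available in orbit.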
Search for space-time fluctuations
Within a wide range of models and theories attempting to combine gravity and
quantum mechanics, it is found that space and time exhibit intrinsic spatio-temporal
fluctuations. These fluctuations would pose a fundamental limit to the ability to
measure distances with arbitrary precision, beyond any limitations due to standard
quantum mechanics. The possibility that these fluctuations have observable
consequences, such as the decoherence of quantum states or a limit to distance
measurement precision, has been discussed, and their detection proposed, e.g., by
means of particle and light interferometers. Motivated by the concepts of space-time
"foam", or "fuzziness", G. Amelino-Camelia has recently discussed model-
independent characteristics of the influence of space-time fluctuations on length
measurements. He proposed general forms of the power spectral density S(f) for the
relative measurement uncertainty δL/L in a measurement of a length L. Of particular
interest are fluctuations with strength monotonically increasing with falling
frequency, for example random-walk-type fluctuations with S_RW(f) ∝ f⁻².
It is important to initiate a program aimed at constraining the magnitude and spectral
dependence of the hypothetical quantum space fluctuations at all experimentally
accessible frequencies. We propose to use a laser interferometer on a free-flyer
operated from the ISS to search for space fluctuations with a sensitivity at least one
million times higher than possible on Earth. A suitable tool for the search for space
fluctuations is the Michelson interferometer. Here, the fluctuations of the difference
in the distances of two mirrors from a common beamsplitter are measured; the two
light paths are orthogonal. An extension of the method that vastly enhances
sensitivity consists in employing an arrangement of two independent, orthogonal
Fabry-Perot cavities. To each cavity a laser is resonantly coupled. The difference of
the two laser frequencies is obtained by a heterodyne beat between waves split off
from each laser and then superposed on a photodetector. The beat frequency is
counted using a frequency counter (these tasks are implemented in LS1 and LS2 in
Fig.50 below). The time dependence of this beat provides information about the
variations of the difference of the measured effective cavity lengths, including any
quantum space noise. It is a fundamental assumption that the space fluctuations in the
two cavities/arms are uncorrelated, so that the power spectrum of the difference
variations can be interpreted as a measure of the power spectrum of the length
measurement noise of an individual cavity/arm.
In the laboratory, two types of interferometers have so far been employed to search
for quantum fluctuations of space and set upper limits for their strength:
(1) The first are the medium- and large-scale interferometers intended to search for
gravitational waves. These instruments are extremely sensitive, thanks to the large
dimensions and the high laser power employed to interrogate them. However, the
accessible frequency range is limited to f > 100 Hz, due to the presence of seismic
noise. For example, the Japanese TAMA 300 detector set a limit S < 2×10⁻⁴¹ Hz⁻¹ at
f = 1000 Hz, implying S_RW(f) < 2×10⁻³⁵ Hz / f².
(2) The second type are rigid interferometers. These are much less sensitive to seismic
noise and thus can probe the low-frequency regime. When operated at cryogenic
temperature, their exceptional dimensional stability allows one to reach the milli- to
micro-Hertz range and below. In this regime other environmental noise sources
appear, which are responsible for the sensitivity limitations. The limit set at f = 2 mHz
is S < 1×10⁻²⁸ Hz⁻¹, so that the random-walk model is constrained by
S_RW(f) < 3×10⁻³⁴ Hz / f². In applying rigid interferometers to the search for quantum
space fluctuations, the assumption is made that the quantum fluctuations of empty
space (described by a fluctuating index of refraction n(t), with <n(t)> = 1) are not
cancelled by any quantum fluctuations of the macroscopic length of the
interferometer, and thus of the bond lengths between the atoms contained within it.
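The arithmetic behind such constraints is simple: a white-noise limit S(f₀) measured at frequency f₀ caps the prefactor C of a random-walk spectrum S_RW(f) = C/f² at C ≤ S(f₀)·f₀². A short sketch reproducing the numbers above:

```python
def random_walk_bound(s_limit, f0):
    """Constraint on C in S_RW(f) = C / f^2 from a measured PSD limit
    S(f0) <= s_limit at frequency f0 (s_limit in 1/Hz, f0 in Hz, C in Hz)."""
    return s_limit * f0**2

# TAMA 300 gravitational-wave detector: S < 2e-41 / Hz at f = 1000 Hz
print(random_walk_bound(2e-41, 1000.0))   # 2e-35 Hz, as quoted above

# Cryogenic rigid interferometer: S < 1e-28 / Hz at f = 2 mHz
print(random_walk_bound(1e-28, 2e-3))     # ~4e-34 Hz, the same order as the
                                          # quoted 3e-34 Hz bound
```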
An ISS-based free-flyer provides a unique opportunity for an improvement of many
orders of magnitude in the sensitivity of the search for space fluctuations, thanks to
the low level of residual acceleration. A schematic of a possible configuration is
shown in Fig.50. In this configuration, the free-flyer contains a drag-free proof mass
1. This proof mass carries a rigid interferometer, consisting of two orthogonal high-
finesse Fabry-Perot cavities. They are interrogated by narrow-linewidth lasers
contained within LS1. A second proof mass 2 within the first one is required to
implement an interferometer with free end mirrors. The two cavities of this second
interferometer are interrogated by the laser system LS2. The relative position of proof
masses 1 and 2 is not controlled during data taking.
Durations of true free flight for the drag-free proof masses lasting up to hours can be
envisioned. Moreover, cavities with very high finesse (several 100 000) can be
employed. For such an instrument, the sensitivity to f⁻²-type space fluctuations is
estimated to be at least six orders of magnitude higher than the terrestrial results
obtained thus far.
The aim is to perform an analysis of the achievable sensitivity as a function of various
parameters (cavity finesse, laser power, drag-free stabilization), to identify potential
sources of noise, and to discuss possible experimental implementations (planar
interferometers vs. 3D-interferometers, release and long-term control of the proof
masses).
At a later stage, development of a prototype is required. Such a prototype can be
tested in a near-zero-g environment in parabolic flights.
Fig.50: Concept of a free-flyer quantum space fluctuation sensor containing two
interferometers, each consisting of a pair of orthogonal optical cavities. LS1 and LS2
are laser systems that interrogate the resonance frequencies of the optical cavities and
determine the fluctuations in the difference of their lengths. [self-made figure]
Search for anomalous spin interaction
According to Special Relativity, spin and mass are both fundamental properties of
elementary particles, with equal significance: both quantities are characteristic of a
particular representation of the Lorentz group. Standard quantum theory, i.e., the
Dirac equation in Riemannian space-times, states that particles with spin in the
quasi-classical regime effectively couple to the gravitational field via a spin-
curvature coupling. This coupling is, however, far from being detectable by present
technology.
Since mass couples to gravity via the Newtonian potential, one may speculate
whether there is an anomalous coupling between spin and gravity. This problem is
also intimately connected with the question of whether the discrete symmetries C, P,
and T are still valid in the presence of a gravitational field. Starting with Okun in 1964
and pushed forward by, e.g., Moody and Wilczek, theories predicting such anomalous
couplings are an important topic in modern gravity. Anomalous couplings have been
searched for in spectroscopic experiments, e.g., by Venema and coworkers. Other
experiments use polarized macroscopic objects and look for effects such as changes
of the polarization. If anomalous couplings are present, then this should result in a
rotation (orbital angular momentum) of the polarized body when hanging on a torsion
fibre. Experiments in this direction have been carried out by Ni, Ritter, Gillies and co-
workers, and were proposed in the original STEP proposal. In atom interferometers,
these effects may be studied on the level of elementary particles with spin.
In this proposal we suggest exploring the question of an anomalous spin coupling
using a classical Ramsey-type interferometer, as used for cold atom clocks. The beam
splitter can be identified with the process of two successive spin flips: each time, the
atoms undergo a spin flip with 50% probability and are thus in a superposition of
two spin states. An anomalous spin coupling would add an additional frequency shift,
which would be detected with respect to a precision frequency reference. A particular
feature of this atom interferometer is that no spatial beam splitting is required;
indeed, it should be avoided in order to be insensitive to mechanical perturbations.
Space conditions, with very long times of flight, strongly enhance these effects, as for
atomic clocks. The experimental setup requires superb shielding of stray magnetic
fields, e.g., by superconductors.
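The gain from long free evolution can be sketched with the ideal Ramsey line shape (instantaneous π/2 pulses are assumed; the interrogation times below are illustrative, not from this report):

```python
import math

def ramsey_probability(detuning_hz, T):
    """Central Ramsey fringe: transition probability versus detuning after a
    free evolution time T (ideal two-pulse sequence, pi/2 pulses of
    negligible length)."""
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * detuning_hz * T))

def fringe_fwhm(T):
    """Full width at half maximum of the central fringe: 1 / (2 T), in Hz."""
    return 1.0 / (2.0 * T)

# A longer free evolution gives proportionally narrower fringes, and hence a
# proportionally better resolution of any anomalous frequency shift:
print(fringe_fwhm(0.01))   # 50 Hz for a 10 ms lab-scale interrogation
print(fringe_fwhm(10.0))   # 0.05 Hz for a 10 s free flight in microgravity
```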
Testing the Equivalence Principle for charged matter
The Universality of Free Fall states that all structureless point particles fall in a
gravitational field in the same way. This should also hold for charged particles
(contributions from the non-local Coulomb field can be neglected). The only
experiment so far aimed at testing the UFF for charged matter is the Witteborn-
Fairbank experiment, which confirmed the UFF with an accuracy of 10%. The main
obstructions in the setup used were gravity-induced stray fields such as the Schiff-
Barnhill field or the DMRT field (see Fig.51). Other errors are due to patch effects,
which might be minimized by precise machining. It is clear that in space all gravity-
induced errors can be made much smaller. It has been estimated that the accuracy of
a test of the Universality of Free Fall for charged ions can be improved by four orders
of magnitude if the corresponding experiment is carried out on an ISS-based free-
flying platform.
Fig.51: Schematic for a test of the Universality of Free Fall for charged particles. [self-
made figure]
Definition of the mass (Watt balance)
One of the big problems in metrology is the definition of the kilogram. While the
second and, via the constancy of the speed of light, the meter could be based on
highly reproducible quantum effects, the kilogram is still defined in terms of the
kilogram prototype in Paris. It is known that this prototype and the other identically
produced prototypes located in other countries show an unexplained drift: the
differences between the prototypes have increased over the last decades. Therefore, it
is a major goal of modern metrology to base the mass unit, too, on some quantum
procedure. One way to do so is to use the Watt balance and the relation between a
mass unit and Planck's constant h in order to express the mass in terms of h. A
knowledge of Planck's constant at the 10⁻⁹ level, which a space experiment would
bring, would be a major breakthrough in the definition of basic units and in the
determination of other fundamental constants.
On Earth, the presently best determination of Planck's constant, with an uncertainty
of 8.7×10⁻⁸, rests on an experiment using a setup called a Watt balance. This device,
see Fig.52, is based on the comparison of an electric to a mechanical power in a
two-step experiment. In a first step, the gravitational force on a reference mass is
balanced by the electric force on a current-carrying coil in an inhomogeneous
magnetic field. In a second step, the same coil moves uniformly in the same field,
thus generating a voltage by induction. The electric quantities are measured through
the Josephson and quantum Hall effects. A combination of both steps gives a relation
between mechanical and electric power, thus relating the macroscopic reference mass
to quantum units, here Planck's constant. Importantly, uninteresting parameters of the
experiment, such as the field geometry, are eliminated.
Fig.52: The Watt balance.
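The cancellation of the geometry factor can be shown in a few lines. The sketch below uses made-up numbers for the mass, coil velocity, and flux gradient Bl; in the real experiment the voltage U and current I are tied to Planck's constant through the Josephson and quantum Hall effects:

```python
def watt_balance(m, g, v, Bl):
    """Simulate the two modes of a Watt balance with flux gradient Bl = dPhi/dz.

    Weighing mode:  m g = I * Bl   ->  I = m g / Bl
    Moving mode:    U   = Bl * v
    The product U * I equals the mechanical power m g v, independent of Bl.
    """
    I = m * g / Bl        # current balancing the weight of the reference mass
    U = Bl * v            # voltage induced at coil velocity v
    return U * I

m, g, v = 1.0, 9.81, 2e-3               # mass [kg], local g [m/s^2], coil speed [m/s]
print(watt_balance(m, g, v, Bl=50.0))   # 0.01962 W = m g v
print(watt_balance(m, g, v, Bl=75.0))   # the same: the geometry factor drops out
```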
This determination of Planck's constant, or, equivalently, of the relation between
mass and Planck's constant, requires an accurate measurement of the gravitational
acceleration. This sets a limit at the 10⁻⁸ level on Earth, since many local corrections
have to be applied, as well as non-local ones due to air pressure variations, etc.
Compared to that, in a space experiment the gravitational field can be replaced by a
very well characterized inertial field, produced either by an acceleration or by a
rotation. Furthermore, inertial fields can be changed at will, from zero g to several g.
The accuracy potential of a space experiment is thus far greater than on Earth in a
key area of fundamental metrology.
Atomic interferometry
There are also possible experiments using atomic interferometry which may be
performed on the ISS. Since effects in interferometry scale with the interaction time
of the quantum matter with external forces between the beam splitter and the
recombiner, it is clear in this case too that free-fall conditions enable much longer
interaction times than on Earth. Consequently, atomic or molecular interferometry
may be used for a better test of the UFF in the quantum domain, for a search for
anomalous spin interactions, for decoherence effects, and for nonlinearities.
Furthermore, this device may also be sensitive to predictions like the Schiff effect.
There is also a huge practical potential: atom interferometers may serve as very
precise gyroscopes, accelerometers, and gravity gradiometers.
Fig.53: Atomic interferometry with laser beams as atom-optical elements (beam
splitter, mirror and recombiner).
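The scaling with interaction time can be made explicit for a light-pulse Mach-Zehnder geometry, where the acceleration-induced phase is φ = k_eff·a·T². The wavelength and drift times below are illustrative assumptions (a rubidium-like atom with two-photon Raman beam splitters), not parameters from this report:

```python
import math

def mach_zehnder_phase(a, T, wavelength=780e-9):
    """Phase shift of a light-pulse Mach-Zehnder atom interferometer,
    phi = k_eff * a * T^2, with two-photon effective wavevector
    k_eff = 4 pi / lambda (rubidium D2 line assumed for illustration)."""
    k_eff = 4.0 * math.pi / wavelength
    return k_eff * a * T**2

a = 1e-8                # acceleration to resolve [m/s^2] (~1 nano-g)
T_ground = 0.3          # pulse separation limited by free fall in a fountain [s]
T_space = 10.0          # pulse separation conceivable in microgravity [s]

gain = mach_zehnder_phase(a, T_space) / mach_zehnder_phase(a, T_ground)
print(gain)             # (10 / 0.3)^2 ~ 1100-fold gain in phase per unit acceleration
```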
Bose-Einstein Condensates
Ultracold quantum gases, such as Bose-Einstein condensates (BECs), are systems of
typically 10⁶ to 10⁸ atoms occupying identical bosonic states. Cooled down to the nK
range, the wave functions of the individual atoms form a macroscopic quantum
system in which all atoms are coherently linked. Therefore, BECs offer a unique
insight into the quantum world. BEC tests in laboratories on Earth are limited by
gravity, because the condensates fall down and hit the container wall after several
seconds. Microgravity permits extending the free evolution time by 1 to 3 orders of
magnitude. One can also expect that BECs can be cooled under microgravity to
temperatures far below the present terrestrial limits.
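The terrestrial limit is set by simple kinematics: a released condensate falls, and the observation time is the drop time over the available apparatus height. A sketch with an assumed chamber size (illustrative, not from this report):

```python
import math

g = 9.81   # terrestrial gravitational acceleration [m/s^2]

def drop_time(height_m):
    """Time for a released condensate to fall a given height: t = sqrt(2 h / g)."""
    return math.sqrt(2.0 * height_m / g)

# In a typical chamber the condensate reaches the wall in a fraction of a
# second; taller dedicated drop setups reach of order a second:
t_lab = drop_time(0.3)     # assumed ~30 cm of free space below the trap
print(t_lab)               # ~0.25 s; microgravity extends this by 1-3 orders
```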
Gradiometry
Gradiometry is the measurement of the gravitoelectric and gravitomagnetic
components of the gravitational field, that is, of the space-time curvature. According
to the definition of the curvature of space-time, a measurement of it requires the
comparison of the accelerations of two nearby test masses. This can be done by
using, e.g., two test masses connected by a spring. Such a device has been described
by Paik and Mashhoon. As an alternative, atom interferometry is also capable of
measuring the space-time curvature directly. Indeed, the functioning of a
corresponding device on Earth has been demonstrated by M. Kasevich and his group
at Stanford.
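The size of the measured quantity can be estimated from the Newtonian gravity gradient at orbital altitude: for two test masses separated radially by d, the differential acceleration is Δa = 2GMd/r³. A quick sketch (an ISS altitude of roughly 400 km is assumed):

```python
G = 6.674e-11             # gravitational constant [m^3 kg^-1 s^-2]
M_earth = 5.972e24        # Earth's mass [kg]
r = 6.371e6 + 4.0e5       # orbital radius: Earth radius + ~400 km altitude [m]

def tidal_acceleration(d):
    """Differential (tidal) acceleration of two test masses separated radially
    by d metres: Delta a = 2 G M d / r^3, the quantity a gradiometer measures."""
    return 2.0 * G * M_earth * d / r**3

print(tidal_acceleration(1.0))   # ~2.6e-6 m/s^2 per metre of separation
```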
Scientific and technical outreach
If these experiments are carried out on the ISS, various implications for other
branches of physics will arise, beyond the pure improvement over previous results:
Improvements of the foundations of metrology. The valid definition of the meter in
terms of time, and thus the present set of physical units, relies on the constancy of the
speed of light.
Furthermore, the UGR ensures the uniqueness of timekeeping. If this universality is
violated, then different types of clocks read different times in gravitational fields
(which cannot be switched off). In this case, one has to single out one particular clock
as the primary clock.
Earth models. By tracking the very precisely defined geodetic path of an ISS-based
free flyer, the Earth's gravitational field can be determined very precisely. This can be
used to improve Earth gravity models.
Quantum gravity. All versions of quantum gravity theories, such as canonical or loop
quantum gravity, string theory, or non-commutative geometry, predict violations of
the UFF, SR, and the UGR. Important for the development of such quantum gravity
theories is the improvement of the corresponding tests.
Clocks. Clocks based on optical techniques are likely to become the most precise
frequency standards in the near future. Implementing such techniques in space
missions will contribute to an improved world-wide clock network of time standards
(TAI). Better clocks can also improve the operating mode of GALILEO.
Furthermore, many of the techniques needed for carrying through the proposed
experiments on the ISS are already developed for ground use and will be of great
importance for other space missions.
Education and Public Outreach
Science is an experience that constantly stimulates the imagination. In particular, the
basic principles of relativity, Special or General, have, if one regards them without
any preconception, the status of a miracle:
It is absolutely counter-intuitive that the velocity of light is constant, that is,
that one measures the same velocity of light irrespective of the velocity of
the source or the velocity of the observer. Furthermore, it is also not clear why
the maximum velocity of all particles is the same, namely the velocity of light.
One may ask how the electron, for example, knows about properties of
the neutron or the proton. All these phenomena cannot be explained; they just
have to be accepted, however counterintuitive they are. These
properties of particles, on the other hand, have been taken as the basis for
building the Theory of Special Relativity.
In an analogous manner, the Universality of the Gravitational Redshift is also a
wonder of physics. One may ask why the time kept by an optical clock, based on
a photon going back and forth between mirrors, is exactly the same
as the time defined by an atomic clock, symbolized by an electron orbiting the
nucleus. The two clocks are based on completely different principles, motions, etc.,
so that the universality is a miracle.
Also the Universality of Free Fall is an outstanding fact of physics. All other
interactions violate this principle. Different electric charges, for example, are
influenced differently by electromagnetic fields. Therefore, it is a miracle that
all types of materials behave in a gravitational field in exactly the same way.
Furthermore, the universality principle in fluid helium physics near the
lambda point also attracts people's attention.
These "miracles" underlying fundamental theories can stimulate the
scientific activities of pupils at school and also attract the interest of "ordinary"
people.
In a purely didactical approach, the state of free fall can be used to illustrate the
physics in an inertial system. That means that in an inertial system all bodies in
force-free motion move along straight lines, something that can never be observed
on Earth. It is only in space, in free fall, that one can realize an inertial system. And
inertial systems are an absolutely necessary and basic notion in classical mechanics,
the basis of all physical theories. Furthermore, the condition of vanishing acceleration
can be used to visualize the physics of mechanical tops: only if accelerations act on a
top does it start to precess. Many other examples can be imagined which can be of
great help in visualizing basic principles for the classroom and also for teaching
courses at universities.
Of course, any successful scientific experiment on the ISS will also lead to a
multitude of articles in popular-science journals and to popular talks. All these
activities will help to show the usefulness and the importance of fundamental
physics under microgravity conditions.
We strongly suggest and ask the space agencies to use the ISS much more for these
pedagogical aims. One should seriously think about establishing something like an
inertial classroom or a microgravity classroom, where basic school experiments
which need weightlessness can be carried out. We believe that such activities could
be used by many school and even university courses for certain pedagogical
experiments which cannot be performed on Earth and which, at the same time, would
also stimulate a lot of scientific activity among younger people.
Industrial Impact
The requirement for advanced technology always has a big impact on industry.
First, industry is always involved in the construction of space experiments, carrying
out major parts of the implementation. Second, the high requirements always lead to
new technologies which have applications not only in space projects but also on
Earth.
Summary and Outlook
It is clear that experimental physics has developed to the point where many new
techniques, possessing extraordinary potential compared with what was possible just
a few short years ago, are awaiting application in the micro-g environment to be
found aboard the ISS. It is not simply a question of technological refinement,
although that has played its part; it is the birth of altogether new techniques, such as
atomic interferometry. Such technological advances stimulate new physics, which in
turn generates novel technology, and so the cycle continues. Fundamental Physics
theory, on a parallel developmental path to the experimental work, has also
undergone very considerable change over this same period, although here the net
result so far has been an even longer list of more profound, unanswered questions. It
is therefore absolutely clear that the search for answers to many of these questions
should be directed towards space. Seven Fundamental Physics experiments have
been chosen so far for flight programmes aboard the ISS, but the Topical Team on
Fundamental Physics has identified xx more that may be appropriate for further
study. The ISS needs a strong science programme: Fundamental Physics is ideally
placed to guarantee this.
Acknowledgements
The authors are indebted to K. Danzmann, V. Dohm, W. Ertmer, C.F.W. Everitt, Th.
Konrad, L. Maleki, B. Mashhoon, J. Mester, W.-T. Ni, and G. Schäfer for many
discussions and much help. We thank M. Siemer for producing many of the figures.
List of Acronyms used in the text
ACES Atomic Clock Ensemble in Space
AOCS Attitude and Orbit Control System
ATHENA AnTi HydrogEN Apparatus (at CERN)
ATOPIS ATomic OPtics and Interferometry in Space
BEST Boundary Effects near Superfluid Transitions
FEEP Field Electrical Emission Propulsion
FP Fundamental Physics
FPAG Fundamental Physics Advisory Group of ESA
GP-A Gravity Probe A
GP-B Gravity Probe B
GR General Relativity
HYPER HYPER precision atomic interferometer in space
ISS International Space Station
JPL Jet Propulsion Laboratory
LAGEOS LAser GEOdynamic Satellite
LLR Lunar Laser Ranging
LTMPF Low Temperature Microgravity Physics Facility (on the ISS)
MICROSCOPE Micro-Satellite à traînée Compensée pour l'Observation du Principe
d'Équivalence
MOT Magneto Optical Trap
MWL MicroWave Link
ONERA Office National d'Études et de Recherches Aérospatiales
PHARAO Projet d'Horloge Atomique par Refroidissement d'Atomes en Orbite
PPARC Particle Physics and Astronomy Research Council
SMART Small Mission for Advanced Research in Technology
SQUID Superconducting Quantum Interference Device
SR Special Relativity
STEP Satellite Test of the Equivalence Principle
STM SpaceTime Mission
SUE Superfluid Universality Experiment
SUMO Superconducting Microwave Oscillator
T2L2 Time Transfer by Laser Link
UFF Universality of Free Fall
UGR Universality of Gravitational Redshift
WEP Weak Equivalence Principle