

Chapter 4. Weather, macroweather, the climate and beyond

4.1 The weather: nothing but turbulence… and don’t mind the gap

“This afternoon, the sky will start to clear, with cloud shreds, runners and thin bars followed by flocks”. If Jean-Baptiste Lamarck (1744–1829) had had his way, this might have been an uplifting early morning weather forecast announcing the coming of a sunny day. Unfortunately for poetry, in 1803, several months after Lamarck proposed this first cloud classification, the “namer of clouds” Luke Howard (1772–1864) introduced his own staid Latin nomenclature that is still with us today, including “cumulus”, “stratus”, and “cirrus”a.

For a long time, human scale observation of clouds was the primary source of scientific knowledge of atmospheric morphologies and dynamics. This didn’t change for fifty years, until the appearance of the first weather maps based on meagre collections of ground station measurements. This was the beginning of the map-based field of “synoptic” (“map scale”) meteorology. Under the leadership of Wilhelm Bjerknes (1862-1951) it spawned the Norwegian school of meteorology that notably focused on sharp gradients, “fronts”. This was the situation when, in the mid-1920s, Richardson proposed his scaling 4/3 diffusion law. The resolution of these “synoptic scale” maps was so low that features smaller than a thousand kilometers or so could not be discerned. Between these and the kilometric human “microscales”, virtually nothing was known. Richardson’s claim that a single scaling law might hold from thousands of kilometers down to millimeters didn’t seem so daring: not only was it compatible with the scale free equations that he had elaborated, but there were no scalebound paradigms to contradict it.

By the late 40’s and 50’s, the development of radar finally opened a window onto the intermediate scales. During the war, the first radars had picked up precipitation as annoying noise that regularly ruined the signals. In 1943, in an attempt to better understand the problem, the Canadian Army Operational Research Group initiated “project stormy weather”. After the war, the team - headed by John Stuart Marshall - set up the “Stormy Weather Group” at McGill, which - thanks to the “Marshall-Palmer relationb” - soon

a Luke not only had a more scientific sounding jargon, but was soon given PR in the form of a poem by Goethe; Lamarck’s names didn’t stand a chance.
b This is still the name used by meteorologists for the humble exponential distribution of raindrops as a function of drop size. In 1948 Marshall’s graduate student Palmer had used chemically coated blotting paper to relate the size of a drop to the diameter of a “blot”. Marshall and Palmer had used many such small pieces placed in the bottom of jars in order to establish the relative number of small and large drops, information needed to interpret radar backscatter. But they had assumed that the drops were uniformly distributed in space whereas - thanks to turbulence - they were in fact distributed in a hierarchical (cascade-like) manner. Forty years later, using a huge (128x128 cm) piece of blotting paper, a student and I recalibrated the blotting paper in the same McGill staircase, only this time showing that the spatial distribution of drops was a fractal set: 1 Marshall, J. S. & Palmer, W. M. The distribution of raindrops with size. Journal of Meteorology 5, 165-166 (1948); 2 Lovejoy, S. & Schertzer, D. Fractals, rain drops and resolution dependence of rain measurements. Journal of Applied Meteorology 29, 1167-1170 (1990).

A decade later, tens of thousands of drops in volumes of 10 m3 were analyzed using stereophotography. This confirmed that Marshall’s homogeneity assumption is only valid up to about 40-50 cm, not up to kilometers as is still routinely assumed: 3 Desaulniers-Soucy, N., Lovejoy, S. & Schertzer, D. The continuum limit in rain and the HYDROP experiment. Atmos. Resear. 59-60, 163-197 (2001); 4 Lovejoy, S. & Schertzer, D. Turbulence, rain drops and


established the quantitative basis for interpreting radar precipitation scans: the famous “Z-R” relation (reflectivity-rain rate)c. Beyond this quantification of precipitation, the key advance of the radar was the ability to image the first weather patterns in the range 1-100 kilometers in size: the discovery of structures and motions in the middle (“meso”) scales between the human micro and the synoptic map scales.

As this atmospheric window opened, the path pioneered by Richardson, statistical theories of turbulence, was rapidly advancing. The idea of turbulence theory was to derive high level statistical laws governing the behaviour of strongly nonlinear flows such as those in the atmosphere, where the nonlinear terms were typically a thousand billion times larger than the linear onesd. In order to make progress, three important simplifications were made. First, only incompressible fluids were considered. Since gravity only acts on density variations, this had the effect of eliminating the main real world source of anisotropy and stratification at the very outsete. Second, boundaries, walls - and for the atmosphere, the earth’s surface and north-south temperature gradients - are also sources of anisotropy, so an additional assumption of statistical isotropy was made: that the flow itself was on average the same in all directions5. Third, although at any instant in time the actual turbulent flow would be highly variable from one place to another, it was assumed that on average the turbulence was everywhere the same: that it was statistically homogeneous.
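Footnote d identifies this ratio of nonlinear to linear (viscous) terms as the Reynolds number, Re = UL/ν. A back-of-the-envelope sketch; the numerical values below are generic textbook figures of mine, not taken from this chapter:

```python
# Rough, illustrative estimate of the atmospheric Reynolds number Re = U*L/nu.
# These values are typical textbook figures (my assumption, not the author's).
U = 10.0       # characteristic wind speed, m/s
L = 1.0e7      # planetary scale, m (~10,000 km)
nu = 1.5e-5    # kinematic viscosity of air, m^2/s

Re = U * L / nu
print(f"Re ~ {Re:.1e}")  # of order 10^13: the "thousand billion" of the text
```

Any comparable choice of U and L gives a similarly enormous number, which is the point: the nonlinear terms overwhelm the viscous ones at essentially all meteorological scales.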

It is important to take a moment to examine the notions of homogeneity and isotropy more closely. In common parlance, something that is homogeneous is spatially uniform, the same everywhere, constant. Similarly, something isotropic is the same in all directions; it is spherically symmetric. If the atmosphere were literally - in this deterministic sense - both homogeneous and isotropic, then the wind, temperature, pressure and other atmospheric parameters would have identical values everywhere, a useless approximation.

The notion of a turbulence that is statistically homogeneous and statistically isotropic is much more subtle than this: it has to do with the same symmetries - translational and rotational invariance - but over statistical averages. A statistical average is neither a spatial nor a temporal average; it is rather an average over a statistical ensemble. To understand an ensemble, one must imagine re-enacting (almost) exactly the same experiment a large number of times under identical conditions. For each experiment, the details of the resulting turbulent flow would be different because infinitesimally small differences are amplified by the strongly nonlinear character of the flow (the “butterfly effect”, ch. ?). Statistical averages would then be obtained by averaging the flow over this huge (in principle infinite) ensemble of experiments.

Each member – “realization” - of such a statistically homogeneous and statistically isotropic ensemble could easily be extremely inhomogeneous in space and could have a strong preferred directionf. However, the preferred locations of turbulent “hotspots”, or the preferred orientations of vortices would be different on each experiment, so that the average over all the experiments would be a constant everywhere and would display no

the l^1/2 number density law. New J. of Physics 10, 075017 (32pp), doi:10.1088/1367-2630/10/7/075017 (2008).
c After Marshall retired in 1977, my PhD supervisor Geoff Austin succeeded him as leader of the Stormy Weather Group and as director of McGill’s (later baptised) John S. Marshall radar observatory, which was then attached to the physics department. When, in 1980, I gave my first seminar on fractal models of rain in the McGill meteorology department, Marshall attended as a still somewhat active emeritus professor.
d The ratio is the “Reynolds number”.
e Later, various incompressible approximations were made, but the main theory was based on incompressibility.
f Indeed, the breakthrough due to cascades and multifractals was precisely the realization that we should expect extreme variations from one realization of a turbulent process to another.


preferred direction. The problem with empirically testing this idea is that no one ever performs an infinite number of identical experiments; and when it comes to the weather and climate, there is only one planet earth (although, in many respects, Mars comes pretty close, as a second realization, see below!). Often, we have to somehow figure out what “typical” inhomogeneities and “typical” anisotropies might be expected on single realizations even of processes that are known to be statistically homogeneous and isotropic. For example, the multifractals generated by cascades might easily be constructed to be statistically homogeneous and isotropic but nevertheless to be far more wildly variable than anyone had imagined!
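The wild single-realization variability of a statistically homogeneous process can be sketched with a toy multiplicative cascade. The construction and the weights 0.4/1.6 are my illustrative choices, not the chapter's model:

```python
# Toy multiplicative cascade: statistically homogeneous (the ensemble mean is
# flat), yet each individual realization is wildly intermittent. The mean-1
# weights 0.4/1.6 are illustrative choices, not taken from the chapter.
import random

def cascade(levels, rng):
    """One cascade realization on 2**levels cells: repeatedly split each cell
    in two and multiply each half by an independent mean-1 random weight."""
    field = [1.0]
    for _ in range(levels):
        field = [v * rng.choice([0.4, 1.6]) for v in field for _ in (0, 1)]
    return field

one = cascade(10, random.Random(0))          # a single realization: very spiky
ensemble = [cascade(10, random.Random(s)) for s in range(200)]
mean_field = [sum(col) / len(col) for col in zip(*ensemble)]

spike = max(one) / sorted(one)[len(one) // 2]   # max over median: intermittency
flatness = sum(mean_field) / len(mean_field)    # ensemble mean: close to 1
print(f"max/median in one realization: {spike:.0f}; ensemble mean level: {flatness:.2f}")
```

Each realization has intense localized "hotspots" (a max far above its median), but averaging many independent realizations cell by cell gives a field hovering around the constant value 1: homogeneity holds only in the ensemble sense.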

While these assumptions may sound academic, they are not unreasonable approximations to appropriately stirred water in a tank - or even the coffee in your cup. Of course, in practical terms it is impossible to either stir your coffee in exactly the same way throughout the cup (homogeneously) or to do so in a way that is the same in all directions (isotropically). However, there are reasonable arguments to the effect that if one was far enough from boundaries and at small enough scales, these anisotropies and inhomogeneities would no longer be feltg, so that the theorists were emboldened to apply these ideas to the atmosphere - even if over limited ranges of scales. Unfortunately for isotropy, gravity acts at all scales, so that even if the boundaries only affect the nearby flow and the north-south temperature gradients are important only at the largest scales, the presence of gravity is sufficient to render the isotropic theories of the atmosphere academic.

We have considered statistical constancy in space and in directions; what about in time? A vigorous stirring of your coffee might lead to some approximation of statistical homogeneity and isotropy, but if the stirring stopped, then due to friction - viscosity - the motions would die down. Therefore, an even simpler situation was usually considered: “quasi-steady” homogeneous and isotropic turbulence in which the fluid was stirred constantly so that the stirring energy would be dissipated as heat at - on average - the same rate at which it was input by the stirringh.

Since large structures (“eddies”) tended to be unstable and to break up into smaller ones, it was enough for the stirring to create large whirls and let the turbulence do the rest: create smaller and smaller structures until eventually dissipation took over. This hierarchical transfer of energy from large to small was what Richardson had referred to in his poem, “big whirls have little whirls that feed on their velocity”; it was the basic cascade ideai. Such a quasi-steady state means that on average everything is the same at all times; it was an approximation to the temporal equivalent of statistical homogeneity: statistical “stationarity”.

The paradigm of “isotropic, homogeneous turbulence” emerged by the end of the 1930’sj. During this time, the Soviet mathematician and physicist Andrei Kolmogorov (1903-1987) was axiomatizing probability theory7, thus laying the mathematical basis for the treatment of random processes. By the end of the 1930’s, Kolmogorov had begun to turn his attention to turbulence. The breakthrough was the recognition that the key parameter controlling the flow of energy from the large scale stirring to the small scale dissipation was the energy rate density (ch. 1). Using this quantity one immediately obtains

g For example, the idea of “return to isotropy” was interpreted in this way: 6 Rotta, J. C. Statistische Theorie nichthomogener Turbulenz. Z. Phys. 129, 547-572 (1951).
h I say on average because typical experiments would be far from smooth, with energy dissipation occurring very unevenly in both space and time. This was the phenomenon of intermittency discussed earlier - the “spottiness” of turbulence - but the full significance of this was not understood until much later.
i Note that obtaining such a steady state required a constant input of energy, so that the overall system was very far from thermodynamic equilibrium.
j Notably in the form of the Karman-Howarth equations (1938).


the Kolmogorov law8 that relates the turbulent velocity fluctuations across a structure to its scalek:

(Velocity Fluctuations) = (Energy Rate Density)^1/3 x (Scale)^1/3

During the war scientific exchanges were limited, and it appears that the Kolmogorov law was independently discovered no less than five times! One of the discoveries came at almost exactly the same time - by another Soviet, Obukhov9 - but in the (equivalent) spectral domain (where it has the form k^-5/3, where k is an inverse length, a wavenumber, the spatial equivalent of a frequency). As a consequence, the law is also referred to as the “5/3 law” or the “Kolmogorov-Obukhov” law. The next to publish was Onsager10 (1945), and he was the first to explicitly link the law to a cascade of energy flux from large to small scalesl. But Onsager’s American publication was no more than a short abstract; it was no more visible than the earlier Soviet papers had been. This led the physicists Heisenberg12 (1948) and von Weizsäcker13 (1948) to their own rediscoveries. The pattern of independent Soviet and nearly concurrent western discoveries continued with the closely analogous laws of turbulent mixing, the (also scaling) “Corrsin14-Obukhov15 law”:

(Temperature Fluctuations) = (Turbulent Fluxes) x (Scale)^1/3

and again (1959) with the Bolgiano16-Obukhov17 law for buoyancy driven turbulence that we discussed in ch. 2:

(Velocity Fluctuations) = (Turbulent Fluxes) x (Scale)^3/5
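All three laws share the same power-law form, fluctuation ≈ (flux) x (scale)^H, differing only in the exponent H: 1/3 for Kolmogorov and Corrsin-Obukhov, 3/5 for Bolgiano-Obukhov. A minimal numeric sketch, with the fluxes set to 1 for illustration:

```python
# Fluctuation-vs-scale power laws for the three turbulence laws in the text.
# Fluxes are set to 1 (arbitrary units); only the scaling exponent H matters.

def fluctuation(scale, H, flux=1.0):
    """Amplitude of fluctuations across a structure of the given scale."""
    return flux * scale ** H

laws = {"Kolmogorov": 1 / 3, "Corrsin-Obukhov": 1 / 3, "Bolgiano-Obukhov": 3 / 5}
for name, H in laws.items():
    # enlarging the scale by a factor of 100 multiplies fluctuations by 100**H
    ratio = fluctuation(1000.0, H) / fluctuation(10.0, H)
    print(f"{name} (H = {H:.2f}): scale x100 -> fluctuations x{ratio:.1f}")
```

For H = 1/3 the factor is about 4.6; for H = 3/5 about 15.9. The spectral k^-5/3 form quoted earlier corresponds to the same H = 1/3 exponent, translated into the wavenumber domain.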

By 1953, the theory of isotropic homogeneous turbulence had evolved to the point that it was already the subject of a landmark book18, “The Theory of Homogeneous Turbulence”, by George Batchelor (1920-2000). By then, the role of isotropy had subtly changed. Whereas it had originally been introduced as a way of simplifying theoretical treatments of turbulence, it now had a life of its own. While the main application of the theory was to the atmosphere, Kolmogorov noted that the rather stringent “inertial rangem” assumptions that he had used to derive it - including the neglect of gravitational forces - would only be valid up to scales of several hundred meters, a conclusion amplified by Batchelor, who speculated that the range might only be between 100 m and 0.2 cm.

While Richardson had been blissfully ignorant of isotropic theory and had dared to propose that his scaling law would hold over the whole range of atmospheric scales, now the - nearly equivalent - Kolmogorov law was claimed to be limited to a tiny range. This drastic limitation was not due to any evidence, nor to the discovery of any scale breaking mechanism. Rather, the restriction and its implied scale break were hypothesized because atmospheric stratification contradicted isotropy. Isotropy had come to dominate the theory and the reason for its introduction had been forgotten: theoretical simplicity! Rather

k This led to Richardson’s 4/3 law of turbulent diffusion, proposed by Richardson largely on empirical grounds.
l Many years later, in an important update on his law (taking intermittency into account), Kolmogorov explained that during the period 1939-1941 both he and Obukhov had been inspired by Richardson’s cascades: 11 Kolmogorov, A. N. A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. Journal of Fluid Mechanics 83, 349 (1962).
m So called because the law was valid only when the inertial terms in the equations dominated the dissipation/friction terms.


than finding the best theory to fit reality, the theorists were trying to find realities to fit their theories.

The full irony had to wait over fifty years to be savoured, for the analysis of the large scale dropsonde campaign discussed in ch. 1 involving 238 sondes. It turned out that scientists had been so confident in isotropy that they hadn’t bothered to check the Kolmogorov law in the vertical direction - and when they finally did so at the end of the 1960’s (see ch. 2), the contrary Bolgiano-Obukhov results had simply been ignoredn. The change (difference) of the horizontal wind was calculated over layers of increasing thickness (fig. 4.1), first only for the near surface region (bottom), and then from the surface to higher and higher altitudes; the set of points shows the result using all the layers between the ground and 12.8 km (roughly the tropopause). For all altitude ranges, one obtains nearly perfect straight lines, indicating scaling over layers with thicknesses ranging from 5 m to nearly 10 km. Even at small scales, the Kolmogorov law (the line with the 1/3 slope at the bottom of the figure, in red) is completely unrealistic, with the real data being very close to the Bolgiano-Obukhov law (slope 3/5).

Although the slopes in the figure increase a little at higher altitudes, even the theories predicting a slope of 1 can be rejected (see the line marked “GW”, for gravity waves). This slope is predicted both by gravity wave theories19 and by Charney’s quasi-geostrophic turbulence theory, which is thus also seen to be quite unrealistic. The conclusion from the data is unequivocal: the original (isotropic) Kolmogorov law simply does not hold anywhere in the atmosphere (unless it is hiding at scales below 5 m!). Kolmogorov and Batchelor’s speculation that the Kolmogorov law would hold up to hundreds of meters was doubly wrong: in reality it holds at most up to 5 m in the vertical… but - as we will see below - up to planetary scales in the horizontal.
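The analysis behind fig. 4.1 amounts to averaging absolute wind differences across layers of increasing thickness and reading off the log-log slope. A sketch of the method on a synthetic profile; here a random walk, whose known exponent H = 1/2 conveniently sits between the Kolmogorov (1/3) and Bolgiano-Obukhov (3/5) values, stands in for real sonde data:

```python
# Structure-function analysis in the style of Fig. 4.1, on a synthetic profile.
# A random walk (exponent H = 1/2) stands in for a real dropsonde wind profile.
import math, random

def structure_function(profile, lag):
    """Mean absolute difference of the profile across a lag of `lag` points."""
    diffs = [abs(profile[i + lag] - profile[i]) for i in range(len(profile) - lag)]
    return sum(diffs) / len(diffs)

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x)."""
    lx = [math.log(x) for x in xs]; ly = [math.log(y) for y in ys]
    mx = sum(lx) / len(lx); my = sum(ly) / len(ly)
    return sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)

rng = random.Random(42)
profile = [0.0]
for _ in range(20000):
    profile.append(profile[-1] + rng.gauss(0, 1))

lags = [2, 4, 8, 16, 32, 64, 128]
S = [structure_function(profile, lag) for lag in lags]
print(f"estimated H = {loglog_slope(lags, S):.2f}")  # near 0.5 for a random walk
```

Run on the sonde profiles, the same slope estimate distinguishes 1/3 from 3/5 directly; the straightness of the log-log points is itself the evidence for scaling.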

As a secondary - but also fundamental - discovery, the data showed that the atmosphere was divided into a fractal hierarchy of stable and unstable layers (fig. 4.2). Rather than the traditional low resolution view that the atmosphere was generally unstable near the surface and then stable at higher altitudes, the dropsondes showed that within each apparently stable layer there were unstable sub-layers, and within the unstable sub-layers there were stable sub-sub-layers, etc. This discovery proved to be difficult for various theories (in particular of the propagation of gravity waves) that assumed that wide stable layers existed.
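Such Cantor-set-like layering can be quantified by box counting: count the boxes of size ε that touch the set and read the dimension D from N(ε) ~ ε^-D. A sketch on the classical middle-thirds Cantor set, a stand-in for the sonde-derived unstable layers rather than the actual data, whose dimension is log2/log3 ≈ 0.63:

```python
# Box-counting dimension of a Cantor-like set of "layers". Integer (base-3)
# arithmetic keeps the counting exact. The middle-thirds set is a stand-in
# for the dropsonde-derived unstable layers, not the real measurements.
import math

def cantor_cells(depth):
    """Indices m of occupied cells [m*3**-depth, (m+1)*3**-depth] of the
    middle-thirds Cantor construction after `depth` splitting steps."""
    cells = [0]
    for _ in range(depth):
        cells = [c for m in cells for c in (3 * m, 3 * m + 2)]
    return cells

def box_count(cells, depth, k):
    """Number of occupied boxes of size 3**-k (requires k <= depth)."""
    return len({m // 3 ** (depth - k) for m in cells})

cells = cantor_cells(10)
for k in (2, 4, 6, 8):
    N = box_count(cells, 10, k)
    D = math.log(N) / math.log(3 ** k)  # N = 2**k, so D = log2/log3 ~ 0.631
    print(f"eps = 3^-{k}: N = {N}, D = {D:.3f}")
```

A dimension strictly between 0 and 1 is exactly what "layers within layers at every resolution" looks like quantitatively: the set is sparser than an interval but richer than isolated points.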

n Actually, their confidence in isotropy was so strong that often, they didn’t bother checking it in the horizontal: they measured the spectrum at a fixed location and then converted the result to space as described below.


Fig. 4.1 The mean absolute difference in the horizontal wind from 238 dropsondes over the Pacific Ocean taken in 2004. The data were analyzed over regions from the surface to higher and higher altitudes (the different lines from bottom to top, separated by a factor of 10 for clarity). Mean differences over layers of thickness increasing from 5 m up to the thickness spanning each region were estimated, and lines were fit corresponding to power laws with the exponents as indicated. At the bottom, reference lines with slopes 1/3 (Kolmogorov, K), 3/5 (Bolgiano-Obukhov, BO), and 1 (gravity waves, GW, and quasi-geostrophic turbulence) are shown. Reproduced from20.


Figure 4.2: The stability of the atmosphere as determined by a dropsonde using the stability criterion Ri > 1/4, where the Richardson number (Ri) is estimated using increasingly thick layers: 5, 20, 80, 320 m thick (black, red, blue, cyan respectively). The figure shows atmospheric columns, the left one from the ocean surface to 11520 m (just below the aircraft), the right a blow-up from 8000-9000 m. The left side of each column indicates dynamically unstable conditions (Ri < 1/4) whereas the right hand side indicates dynamically stable conditions (Ri > 1/4). The figure reveals a Cantor set-like (fractal) structure of unstable regions. Reproduced from 21.
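The classification in fig. 4.2 can be sketched as a bulk Richardson number, Ri = N²/(du/dz)², evaluated over layers of increasing thickness, with Ri > 1/4 flagged stable. The profiles below are invented for illustration; the chapter of course used real dropsonde data:

```python
# Sketch of the Fig. 4.2 stability classification: bulk Richardson number over
# layers of increasing thickness, Ri > 1/4 flagged stable. The profiles are
# made up for illustration, not taken from the dropsonde measurements.
G = 9.81  # gravitational acceleration, m/s^2

def richardson(theta, u, z, i, steps):
    """Bulk Ri = N^2/(du/dz)^2 over the layer z[i]..z[i+steps], from potential
    temperature theta (K) and horizontal wind u (m/s), by finite differences."""
    dz = z[i + steps] - z[i]
    N2 = (G / theta[i]) * (theta[i + steps] - theta[i]) / dz  # Brunt-Vaisala^2
    shear2 = ((u[i + steps] - u[i]) / dz) ** 2
    return N2 / shear2 if shear2 > 0 else float("inf")

# Hypothetical 5 m resolution profiles: weakly stable temperature gradient,
# wind alternating between strong-shear and weak-shear 40 m blocks.
z = [5.0 * i for i in range(201)]
theta = [300.0 + 0.003 * h for h in z]
u = [10.0 + (2.0 if (i // 8) % 2 else 0.2) * (i % 8) * 0.1 for i in range(201)]

fracs = {}
for steps, label in [(1, "5 m"), (4, "20 m"), (16, "80 m")]:
    stable = [richardson(theta, u, z, i, steps) > 0.25 for i in range(len(z) - steps)]
    fracs[label] = sum(stable) / len(stable)
    print(f"{label} layers: {fracs[label]:.0%} stable")
```

At coarse (80 m) resolution this made-up column looks stable everywhere, while the 5 m layers reveal embedded unstable sub-layers: the resolution effect that the high resolution sondes exposed.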

The final important early turbulence development was the discovery by Fjørtoft (1953) that completely flat, two dimensional turbulence was fundamentally different from three dimensional isotropic turbulenceo. Although Fjørtoft was cautious in interpreting his results in terms of real atmospheric flows, the seed had been planted for the isotropic 2D-3D model that followed a decade or so later.

By the mid 1950’s, established empirically based synoptic scale meteorology had already relegated the “microscales” to mere turbulence, but this had been done mostly for practical reasons. Similarly, the new mesoscale was pragmatically viewed as the connection between the two, while simultaneously promising a better understanding of thunderstorms and other previously inaccessible meteorological phenomena. The emerging synoptic, meso and micro scale regimes were thus not theoretically ordained, they were practical distinctions awaiting theoretical clarification. Yet, the theorists were loath to drop their isotropy assumptions, and were happy to find convenient justifications for dividing up the range of scales into small scale isotropic three dimensional turbulence and something stratified - albeit not yet clearly discerned - at the larger scales. It was already tempting to knit all this together and to identify the microscales with 3D isotropic turbulence and the weather with the larger stratified ones.

This was the situation when Panofsky and Van der Hoven began their famous measurements of the wind spectrum that they published between 1955 and 195723,24. At this point, wind data at sub-second scales for durations of minutes had already confirmed the Kolmogorov lawp, but data were lacking at longer time scales. Given the lack of computers, the researchers averaged their data at 10 s intervals using “eye averages” and collected data in this way for an hour or so. The spectrum was then laboriously calculated by hand. Finally, knowing the average wind speed allowed the scientists to make a rough conversion from time to space. For example, if the average wind over a minute was 10 m/s, then the variability at one minute was interpreted as information about the variability at spatial scales of 600 m. To investigate the mesoscale between 1 and 100 km, such data were needed spanning periods of minutes to several hours.
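The time-to-space conversion described here (structures carried past the instrument at the mean wind speed, nowadays usually called Taylor's frozen-turbulence hypothesis) is simply scale = speed x time:

```python
# Converting an averaging time to a spatial scale, assuming structures are
# advected past the sensor essentially frozen, at the mean wind speed.

def time_to_space(mean_wind_ms, averaging_time_s):
    """Spatial scale (m) sampled by a time average under frozen advection."""
    return mean_wind_ms * averaging_time_s

# The chapter's example: a 10 m/s mean wind over one minute probes 600 m.
print(time_to_space(10.0, 60.0))
```

The conversion depends on the mean wind, so the spatial scale attributed to a given averaging time (an hour, say) varies from one experiment to another.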

The new element was the use of lower resolution series that could be eye averaged at 5 minute resolutions and that lasted several days. When the spectrum from this analysis was plotted on the same graph as a one minute spectrum that had been taken from a completely different experiment under different conditions, Panofsky and Van der Hoven discovered that there was a dearth of variability precisely in the range of about 1-100 km, centered on time scales of 1 hour, corresponding to 10 km. In the authors’ words: “The spectral gap suggests a rather convenient separation of mean and turbulent flow in the atmosphere: flow averaged over periods of about an hour… is to be regarded as 'mean'

o This was not quite the discovery of Kraichnan’s law of two dimensional turbulence: 22 Kraichnan, R. H. Inertial ranges in two-dimensional turbulence. Physics of Fluids 10, 1417-1423 (1967).
p The confirmation was indirect since the measurements were in time.


motion, deviations from such a mean as 'turbulence'.” The meso-scale gap was born. However, the first (1955) paper was based on a single location and on only two experiments, and their figures were not very convincing. This led Van der Hoven to perform another series of four experiments that produced what later became the iconic mesoscale gap spectrum (fig. 4.3). The gap cleanly and “conveniently” separated the synoptic weather scales from the small scale turbulence. With the development of the first computer weather models, whose resolutions - even today - don’t include the microscales, this gap became even more seductive. At first, it justified simply ignoring these scales; later, it justified “parameterising” them. The gap idea was so popular that Van der Hoven’s spectrum was reworked and republished many times, notably in meteorological textbooks, through the 1970’s. Soon, the actual data points were replaced with smooth artists’ impressions (see e.g. fig. 4.3), thus inadvertently hiding the fact that his spectrum was actually a composite taken under four different sets of conditions; even today, his paper is still frequently and approvingly cited.

Fig. 4.3: The famous “meso-scale gap” between the right-most bump and the “synoptic maximum” (left-most bump), adapted from Van der Hoven 1957 in25. The ellipses show the rough ranges of the four experiments which were combined to give the composite spectrum (the actual data points had already been replotted from the original). The vertical line corresponds to about 6 minutes and was often superposed to indicate the limit of 3D isotropic turbulence.

Yet within ten years, the gap was strongly criticized26,27,28, with critics pointing out that it was essentially based on a single high frequency bulge (fig. 4.3, near scales of 100 s) due to data from a single experiment taken under “near hurricane” conditions. By the end of the 1970’s, satellites were routinely imaging interesting mesoscale featuresq. Practitioners of the nascent field of mesoscale meteorology such as Atkinson29 were highly sceptical of any supposedly barren “gap” that could relegate their entire field to a mere footnoter.

Although for many the gap was too convenient to kill, the mesoscale itself underwent a transformation. The new developments were the theory of two dimensional isotropic turbulence by

q Due to the curvature of the earth, ground based weather radars start looking above the weather at distances of around 100 km; satellites gave a much more satisfactory range of scales.
r Atkinson approvingly quotes the early critic Robinson, who noted “… I find it unconvincing the argument that disturbances on scales between the cyclone and the thunderstorm do not exist because we do not see them on synoptic charts”: 26 Robinson, G. D. Some current projects for global meteorological observation and experiment. Quart. J. Roy. Meteor. Soc. 93, 409-418 (1967).


Robert Kraichnan (1928-2008) in 196730,22, and in 1971 its extension to quasi-geostrophic turbulences by Jules Charney (1917-1981)31. While modern data had filled in the gap with lots of structures and variability, these isotropic turbulence theories supported a new interpretation of the mesoscale as a regime transitional between isotropic 3D and isotropic 2D (quasi-geostrophic) turbulence, with the transition supposedly near the atmospheric thickness of about 10 km.

While in some quarters the gap lived on, the development of two dimensional turbulence changed the focus: rather than searching for the gap, the new goal was to search for signs of large scale isotropic 2D turbulence. Such a discovery promised to transform the meso-scale from a barren gap into the site of a 3D-2D “dimensional transition”32. When Kraichnan published his paper, the idea of 2D isotropic turbulence was so seductive that claims of 2D turbulence sprang up almost immediately. Whereas 3D isotropic turbulence followed the k^-5/3 law, 2D turbulence was expected to have a k^-3 regimet, so that anything resembling a k^-3 regime was considered to be a “smoking gun” for the purported 2D behaviour. Even tiny ranges with only two or three data points that were vaguely aligned with the right slope were soon interpreted as confirmation of the theoryu.
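Why a few "vaguely aligned" points prove little can be sketched numerically: on a noisy but genuinely k^-5/3 spectrum, a broad-range fit recovers the true slope, while slopes fit over adjacent point pairs scatter wildly and can easily land near -3. The spectrum and noise level below are synthetic choices of mine:

```python
# Broad-range vs narrow-range slope fitting on a synthetic noisy k^-5/3 spectrum.
import math, random

rng = random.Random(1)
k = [10 ** (3 * i / 99) for i in range(100)]                    # 1..1000, log-spaced
E = [kk ** (-5 / 3) * math.exp(rng.gauss(0, 0.3)) for kk in k]  # multiplicative noise

def fit_slope(ks, Es):
    """Least-squares slope of log E versus log k."""
    lx = [math.log(v) for v in ks]; ly = [math.log(v) for v in Es]
    mx = sum(lx) / len(lx); my = sum(ly) / len(ly)
    return sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)

broad = fit_slope(k, E)                                          # close to -5/3
pairs = [fit_slope(k[i:i + 2], E[i:i + 2]) for i in range(99)]   # 2-point "fits"
print(f"broad fit: {broad:.2f}; 2-point fits span {min(pairs):.1f} to {max(pairs):.1f}")
```

Over three decades of scale, the estimate is tight; over a factor of ~1.2 in scale, almost any slope can be "eyeballed" out of the noise.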

The first experiment devoted to testing this 2D/3D model was the EOLE experimentv (1974). It used the dispersion of 480 constant density balloons34 (at about 12 km altitudew) dispersed over the southern hemisphere; this was essentially an updated version of one of Richardson’s methods, used to obtain fig. 2.8. Since Kolmogorov’s 5/3 law was essentially the same as Richardson’s 4/3 law, the dispersion of the balloons was an indirect test of the former. But contrary to Richardson, the original analysis of the EOLE data by Morel and Larchevêque34 concluded that the turbulence in the 100 km - 1000 km range did not follow his law, but rather the behaviour predicted by two dimensional turbulence.

But the matter didn’t rest there. Even in the mid 1970’s, internal discrepancies in the EOLE analysis had been notedy. More importantly, the EOLE conclusions soon contradicted those of the GASP (1983) and later MOZAIC (1999) analyses that found Kolmogorov turbulence out to hundreds of kilometres. Two decades later, the original (and still unique) EOLE data set was reanalyzed by Lacorata et al.35, who concluded that the original EOLE conclusions were unfounded and that, on the contrary, the data vindicated Richardson over the range of about 200 - 2000 km.

But the saga was still not over. Strangely, in spite of supporting Richardson at the largest scales, the Lacorata et al. reanalysis contradicted him at the smallest EOLE scales - from 200 km down to the smallest available scales (50 km) - over which they claimed to have validated the original 2D turbulence interpretation! A decade later, this conclusion prompted a re-revisit that found an error in the smaller scale analysis, eliminating any evidence for two dimensional turbulence up to the largest scale covered by EOLE (1000 km) and thus (finally!) vindicating Richardson in 2013z.

s This was effectively a derivation of Kraichnan’s pure two dimensional turbulence starting from a series of nontrivial approximations to the governing fluid equations.
t It also generally had a k-5/3 regime - only this was at very large scales - a fact that was often conveniently forgotten.
u Some authors admitted to “eyeballing” their spectra over a factor of 2 in scales to back up such claims: 33 Julian, P. R., Washington, W. M., Hembree, L. & Ridley, C. On the spectral distribution of large-scale atmospheric energy. J. Atmos. Sci., 376-387 (1970).
v After the Greek wind god.
w More precisely, close to the 200 mb pressure level.
x The balloons stayed (nearly) on isopycnals (i.e. surfaces of constant density), not on isobars (surfaces of constant pressure), the key difference being that while the latter are gradually sloping, the former are highly variable with large scale average slopes diminishing at larger and larger scales.
y Between the relative diffusivity and the velocity structure function results.

***Science is a quintessentially human activity. At each epoch, it depends on the available technology, on the reigning scientific theories, on the key scientific problems and on its historical level of development. Yet it also depends on society’s attitude and on its willingness to allocate resources to it. In order to understand the trajectory of atmospheric science that followed the heady 1980’s decade of nonlinear science, it’s necessary to take a short interlude to discuss the changing fortunes of fundamental science in the advanced western countries. Although the following account is coloured by my own situation in Canada, the scientifically advanced countries followed nearly identical economic policies and it is fairly representative.

The post World War Two élan of scientific optimism was vividly summed up in the title of Vannevar Bush’s report: “Science, the endless frontier”aa. It was the beginning of the era of “Big Science”, i.e. of science being directly harnessed by big business. But it was seen as a partnership between publicly and privately funded efforts. There was a general recognition that investment in inappropriate scientific concepts or unrealistic models could lead to the squandering of huge sums of money; no matter how urgent a problem might be, its solution required a balance between fundamental and applied research. But fundamental research was expensive and accountants were unable to determine the corresponding rate of return on investments. Corporations were generally happy to let academic or other publicly funded institutionsbb pick up the bill. By the 1990’s, high costs and risks had created a situation where only a handful of giant corporations still carried out any fundamental research. Marking the end of an epoch, in 1996 even the famous Bell Labs were sold offcc, while others were downsized or refocused toward more practical mattersdd. Today, the fundamental research required for technological advance is virtually entirely publicly funded.

This evolution came at a price. Whereas in the past, fundamental sector scientists had been given free rein to investigate the areas of greatest scientific significance - research was largely “curiosity driven” - now businesses lobbied governments for direct control over public research, over both its priorities and the management of its funds. Violating its very nature as a long-term enterprise, funding of fundamental science was retargeted towards short-term corporate gain, with public research agencies reoriented accordingly. To reassure the public, administrations and politicians mouthed the new mantra of “excellence”, according to which pandering to special interests was also excellent and less could be better. At the same time, official R&D expenditure figures were doped by including tax shelters for businesses claiming to invest in high techee, effectively hiding the pirating of public funds from close scrutiny.

z See appendix 2A of: 25 Lovejoy, S. & Schertzer, D. The Weather and Climate: Emergent Laws and Multifractal Cascades. (Cambridge University Press, 2013).
aa The title of a report in July 1945 by Vannevar Bush, Director of the Office of Scientific Research and Development, US.
bb In the US, especially the military.
cc In 1996, the parent company AT&T sold Bell Labs to Lucent Technologies; in 2006, Lucent was acquired by Alcatel and in 2015, by Nokia.
dd In my case, Canada’s “branch plant economy” had never enjoyed much corporate research in the first place.

Concomitant with the industrial focus was a growing lack of interest - even among scientists - in fundamental issues, including those that the nonlinear revolution had promised to solve. In atmospheric science, resources were tightly focused on the development of numerical weather models (General Circulation Models, GCMs)ff. The only justification for funding became the promise of improving GCM outputs. In the past, it had been possible to obtain support for an applied science project and - thanks to deliberately loose controls - scientists would regularly siphon off some of the funds to (illicitly) do “real science”. But in this brave new world of excellence, sponsors required rigorous accountability. Not only were research priorities imposed from without, but every dollar had to be spent exactly as specified in increasingly detailed proposals that had often been written years earliergg. University accounting departments administering the grants played a police role, protecting the sponsors’ money from scientists irresponsibly trying to advance science rather than following the sponsors’ dictates. Academic scientists were slowly being transformed into cheap labour.

But technology continued to advance, and rapidly increasing computer sizes and speeds, combined with improved algorithms, ensured that the 1990’s were a golden age for atmospheric science. Fundamental GCM issues were also being resolved; by the decade’s end, weather forecasts had significantly improved. This was largely thanks to advances in data assimilation that had opened the door to the widespread “ingestion” of satellite and other disparate, hitherto unexploited sources of datahh. For atmospheric science, GCMs increasingly appeared to be the only way of the future.

Atmospheric science had always suffered from a gulf between the idealized smooth, calm models concocted by theoreticians and the wild irregularity of the real worldii. In the 1970’s, a popular adage was: “No one believes a theory except the person who invented it. Everyone believes the data except the person who took them”. In the new ambiance, theorists themselves were not immune to this cynicism: discussing atmospheric scaling in the early 1990’s, a well known colleague commented: “if no progress is made for long enough, the problem is considered solved and we move on.” It was thus not surprising that when the budgetary screws were turned, this ambient negativism hit the nonlinear revolution and conventional turbulence theory alike: interest was scant and funding scarcer. Theory of any kind was increasingly seen as superfluous - either irrelevant or a luxury that could no longer be afforded. Any and all atmospheric questions were answered using the now standard numerical models.

ee Specific examples that stick in my memory include a case where hundreds of millions of dollars in tax relief was given to banks who had “invested” in science and technology simply by upgrading their office equipment. Another example, also from the mid 1990’s, was tax relief given to a company for research into new flavours of beer. This type of accounting trick allowed the government to boast of stable levels of support for science.
ff In Canada for example, the staff of the federal weather office, the “Atmospheric Environment Service” (now the Meteorological Service of Canada, part of Environment Canada), was cut by 50% during the 1990’s, and this included the elimination of a small fund used to support academic research.
gg Since the outcome of research was intrinsically unpredictable, the best strategy for scientists was to somehow get ahead of the game and ask for funding for research that they had already performed, guaranteeing that they could fulfill the precise terms of the grant.
hh The development of revolutionary new data assimilation techniques: spatial “3D var”, and later space-time “4D var”.
ii In my view, this was because the theories made many conventional smoothness and regularity assumptions that were totally unrealistic and that the scaling approach would overcome.


Unfortunately for the advancement of science, GCMs are massive constructs built by teams of scientists spanning generations. They became increasingly “black boxes”, and even when they answered questions, they did not deliver understanding. Atmospheric science was being gradually transformed from an effort at comprehending the atmosphere into one of numerically imitating it, i.e. into a purely applied field. New areas - such as the climate - were being totally driven by applications and technology: climate change and computers. In this brave new world, few felt the need or had the resources to solve the basic scientific problems of the field.

***So it was that the excitement engendered by the nonlinear revolution in the 1980’s slowly faded and with it, the general scientific interest in geosystem scales and scaling. Yet the revolution had succeeded in establishing a beach-head: a community of like-minded scientists organized first in the European Geophysical Society’sjj Nonlinear Processes divisionkk (1989), then around the journal Nonlinear Processes in Geophysics (1994), and a little later in the American Geophysical Union’s Nonlinear Geophysics focus group (1997). Following a 2009 workshop on geocomplexityll, a dozen scientists published a kind of nonlinear manifesto entitled “Nonlinear Geophysics: Why we need it”36. It proclaimed that “the disciplines coalescing in the NG movement are united by the fact that many disparate phenomena show similar behaviours when seen in a proper nonlinear prism. This hints at some fundamental laws of self-organization and emergence that describe the real nature instead of linear, reductive paradigms that at best capture only small perturbations to a solved state of problem…”.

It was largely in nonlinear geophysicsmm that evidence for wide range atmospheric scaling slowly accumulated, notably through studies of radar rain reflectivities37,41 and satellite cloud radiances38-43 (see e.g. fig. 4.4). But these analyses were generally restricted to scales smaller than a thousand kilometers and, crucially, they didn’t involve the wind field, which could not be reliably sensed by remote means. For the wind field, the only alternative to aircraft data was the analysis of the outputs of numerical models, and at the time these didn’t have a wide enough range of scales to settle the issue eithernn.

jj Since 2002, following a merger with the European Union of Geosciences, it became the European Geosciences Union.
kk This precocious development was greatly helped by EGS’s visionary director Arne Richter (1941-2015).
ll At York University, Toronto, Canada.
mm At the European Geosciences Union, “Nonlinear Processes”.
nn A partial exception was a paper by Straus and Ditlevsen in 1999 that analyzed reanalyses. A reanalysis is a complex data-model hybrid that effectively fills in holes in the data (and partially corrects for errors) by constraining the system using the equations of the atmosphere as embodied in a numerical model. Although these authors strongly criticized the reigning 2D picture, rather than simply analyzing the scaling directly, they analyzed more complex, theoretically inspired constructs. 44 Straus, D. M. & Ditlevsen, P. Two-dimensional turbulence properties of the ECMWF reanalyses. Tellus 51A, 749-772 (1999).


Fig. 4.4: Spectra from three different satellites over largely cloudy regions: Meteosat (geostationary, 8 km resolution), Landsat (83 m resolution) and channels 1-5 from the NOAA-9 satellite, with channel 1 in the visible, channel 5 in the thermal infrared and channels 2-4 at wavelengths in between. Scaling (power law) spectra are straight lines on the log-log plot; the range 1 - 100 km is indicated as the “mesoscale” and it shows no signs of a break in the scaling. Overall, the figure covers the range 166 m to 5000 km. Reproduced from45.

The next major advance appeared in the late 2000’s with the beginning of widespread availabilityoo of global scale atmospheric data sets, notably massive satellite data sets, and these invariably showed excellent wide range scaling, although again not of the hard to measure wind field (see fig 4.5). Also at this time, the numerical models were getting big enough to analyse, and again wide range scaling was found46 (see the reanalyses, fig. 4.5 lower right). By 2008, it seemed that the only evidence that wide range scaling could not explain was the spectra of aircraft wind measurements.

oo The key advance was not so much the actual existence of large scale data sets - this was not so new - it was rather due to the rapidly increasing internet speed and the online availability.


Fig. 4.5: Upper left: Spectra from over 1000 orbits of the Tropical Rainfall Measurement Mission (TRMM) for five channels at visible through thermal IR wavelengths, displaying very accurate scaling down to scales of the order of the sensor resolution (≈ 10 km). Upper right: Spectra from five other (microwave) channels from the same satellite. These data are at lower resolutions that depend on the wavelength; again, the scaling is accurate up to the resolution. Lower left: The zonal, meridional and temporal spectra of 1386 images (~ two months of data, September and October 2007) of radiance fields measured by a thermal infrared channel (10.3-11.3 μm) on the geostationary satellite MTSAT over the south-west Pacific at resolutions of 30 km and 1 hr, over latitudes 40°S - 30°N and longitudes 80°E - 200°E. With the exception of the (small) diurnal peak (and harmonics), the rescaled spectra are nearly identical and are also nearly perfectly scaling (the black line shows exact power law scaling after taking into account the finite image geometry). Lower right: Zonal spectra of reanalyses from the European Centre for Medium-Range Weather Forecasts (ECMWF), once daily for the year 2008 over the band ±45° latitude. These figures are adapted from ones in the review 25.

Fig. 4.6: The GASP spectrum from long-haul flights (more than 4800 km) with reference lines corresponding to the horizontal and vertical behaviour, with slopes indicated. The rough position of the scale break is shown; it is near 1000 km, much larger than any possible 2D-3D transition scale. Adapted from Gage and Nastrom47 (in particular, the lines with slope -2.4 were added).

Fig. 4.6a: Trajectories of an aircraft following lines of constant pressure to within 0.1%, near the 200 mb pressure level (height in meters). Data were taken every second (≈ 280 m). The trajectories are far from constant in altitude, as can be seen in the blow-up of one of them below. Adapted from48.


Fig. 4.6b: The trajectory (black) of a scientific aircraft following a constant pressure level (to within 0.1%), with data at 1 s resolution (≈ 280 m). The purple shows the variation in the longitudinal component of the horizontal velocity (in m/s, deviations from 24.5 m/s), and the orange the transverse component (in m/s, deviations from 1.2 m/s). Moving from upper left to right, top to bottom, we show three successive blowups (over the regions indicated by the parentheses), each by a factor of 8 in scale. Black shows the deviations of the altitude (z) from the aircraft’s altitude of 12700 m (in m), but divided by 8, 4, 2, 1 respectively (the overall trajectory thus changes altitude by over 160 m). This is for flight leg 15, but it was typical. It can be seen that the changes in horizontal wind cannot be dissociated from the variability of the aircraft altitude. Reproduced from49.

Fig. 4.7a: Spectra from the Pacific Winter Storm experiment: the average over 24 legs, each of length 1120 km at 280 m resolution, at 200 mb (about 12 km altitude). The longitudinal and transverse components of the horizontal wind are shown along with reference lines indicating the horizontal (Kolmogorov) exponent 5/3 and the vertical exponent 2.4 (close to the Bolgiano-Obukhov value 11/5). Adapted from48.


Fig. 4.7b: Same as fig. 4.7a except for temperature (T), humidity (h) and log potential temperature (logθ). A reference line corresponding to a k-2 spectrum is shown in red. The mesoscale (1 - 100 km) is shown between the dashed blue lines. Adapted from48.

Even if one accepted the original interpretation of the EOLE experiment in terms of 2D turbulence, it measured the dispersion of balloons, not the wind speeds needed for a direct test of the 2D-3D model. It was therefore only with the first large scale aircraft campaigns in the 1980’s that the theory was seriously tested - and this turned out to be the beginning of another multidecadal saga. The first and still most famous experiment was GASPpp, whose data were analyzed by Nastrom and Gage in 198350. On this basis, it was claimed that the empirical spectrum did indeed show a transition from Kolmogorov 3D isotropic turbulence to two dimensional isotropic turbulenceqq.

The most glaring problem with the GASP results was that the apparent 2D-3D transition scale was typically at several hundred kilometers51,52,53,54. At such distances, 3D isotropic turbulence would imply that clouds and other structures extended well into outer space! Simply calling the phenomenon “squeezed 3D isotropic turbulence”55, or “escaped” 3D turbulence51, explained nothing. In 1999, MOZAIC, an update of the GASP experiment, found essentially the same results, which were widely interpreted as support for the 2D-3D theory56,57.

In order to get to the bottom of this, with the help of Daniel Schertzer and Adrian Tuckrr, we reanalysed48 the published wind spectra and showed that a key point had been overlooked: the spectra did not transition between k-5/3 and k-3 but rather between k-5/3 and k-2.4 (see e.g. fig. 4.7a). In the 2009 paper “Reinterpreting aircraft measurements in anisotropic scaling turbulence48” we proposed a simple explanation: that the low wavenumber (k-2.4) part of the spectrum was not simply a poorly discerned k-3 signature of isotropic two dimensional turbulence, but rather the spectrum of the wind in the vertical rather than the horizontal direction! If the turbulence was never isotropic but rather anisotropic with different exponents in the horizontal and vertical directions, then everything could be easily explained by aircraft following gently sloping isobars (rather than isoheights), fig. 4.6a,b.
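The mechanism can be illustrated with a toy calculation (the prefactors and slope below are invented for illustration; they are not taken from the paper). Along a trajectory of small slope s, the measured velocity difference mixes a horizontal Kolmogorov term varying as Δx^(1/3) with a vertical term varying as Δz^(3/5) (the Bolgiano-Obukhov exponent, whose spectral slope 11/5 is close to the observed 2.4). Because 3/5 > 1/3, the vertical term inevitably dominates at large enough lags, mimicking a spectral transition:

```python
import numpy as np

# Toy illustration of the 2009 reinterpretation (all coefficients invented
# for illustration, not fitted to aircraft data). Along a gently sloping
# trajectory, dz = s*dx, so the measured velocity difference mixes:
#   horizontal (Kolmogorov) term    ~ dx^(1/3)  -> spectral slope 5/3
#   vertical (Bolgiano-Obukhov) term ~ dz^(3/5) -> spectral slope 11/5
# Since 3/5 > 1/3, the vertical term must dominate at large enough lags.
s = 1e-4                      # assumed mean trajectory slope (dimensionless)
Ch, Cv = 1.0, 9.0             # assumed prefactors (arbitrary units)
dx = np.logspace(3, 6, 200)   # horizontal lags: 1 km to 1000 km (metres)

horiz = Ch * dx**(1.0 / 3.0)
vert = Cv * (s * dx)**(3.0 / 5.0)

# scale at which the vertical contribution takes over
crossover = dx[np.argmin(np.abs(np.log(horiz / vert)))]
print(f"vertical term takes over near {crossover / 1e3:.0f} km")
```

With these assumed numbers the takeover occurs at a few hundred kilometres: a spurious “transition” scale with no 2D turbulence involved.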

pp GASP = Global Atmospheric Sampling Program.
qq Interestingly, Nastrom and Gage themselves interpreted their results as support for yet another theory based on waves.
rr Tuck was a pioneer in aircraft measurements. At the time, he was head of the atmospheric chemistry group at NOAA, Boulder. He had been responsible for the Antarctic aircraft campaign in the late 1980’s that conclusively established the existence of the ozone hole and its link to CFCs.


Fig. 4.8: This shows typical variations in the transverse (top) and longitudinal (bottom) components of the wind; black are the measurements, purple are the theoretical contours for a 23/9D atmosphere. Reproduced from WC fig. 2.15a and 58.

But even this didn’t satisfy the die-hard 2D-3D theorists, notably Erik Lindborg. He incited experimentalist colleagues at the National Center for Atmospheric Research (NCAR), Frehlich and Sharman59, to use the big Aircraft Meteorological Data Relay (AMDAR) aircraft data base to disprove our hypothesis by attempting to demonstrate empirically the statistical equivalence of wind data at constant heights and at constant pressures (isoheights versus isobars). But the AMDAR technology didn’t include the GPS altitude determinations that were needed to distinguish accurately enough between the two. In order to prove that our explanation was correct and to close the debate60, 59, it was necessary to determine the joint (horizontal-vertical) velocity structure functionss. This was finally done with the help of wind data from 14,500 aircraft trajectories that allowed its first direct determination (fig. 4.8). The aircraft used in our study carried GPS-equipped Tropospheric Airborne Meteorological Data Reporting (TAMDAR) technology that provided altitude measurements with metre-scale accuracy. The results of this massive study58 showed that the horizontal wind was scaling with an anisotropic, in-between “elliptical dimension” of 2.56±0.02, close to the theoretical value 23/9 discussed in ch. 2 and 3.

Almost exactly sixty years after the meso-scale gap had brought Richardson’s wide range scaling into question, the only thing preventing closure was the development of theory showing that the observed anisotropic scaling was indeed compatible with the equations. As discussed in ch. 3, this was done at nearly the same time, so that the theoretical debate between the 3D-2D isotropic model and the anisotropic scaling alternative (the 23/9D model, notably with 61) was finally nearing closure.

***“Good afternoon Martians, I greet you as an Earthling. I have good news: we’re twins!” So began my presentation to a room packed with Mars specialists at a session of the European Geosciences Union in April 2016. Mars may be our sister planet, but when it comes to its atmosphere, scientists had until now focused on the differences: for example, the strong control of Martian atmospheric temperature by dust, the larger role of topography, the stronger diurnal and annual cycles and the larger role of atmospheric tides. But these differences mostly affect the forces driving the system and the nature of the boundaries: if the turbulence approach was correct, then at small enough scales and far enough from the boundaries we would expect the same statistics, the same scaling: the behaviour was expected to be independent of the details - “universal”.

ss This means that we estimated the typical change in the horizontal wind for arbitrary displacements in vertical cross-sections.

The horizontal scaling of the atmosphere is fundamentally important in its own right, but - due to the wind - it implies temporal scaling, at least up to scales of about ten days (the size of the planet divided by the typical large scale wind). Fig. 4.5 (lower left) shows that indeed, up to scales of about 5000 km in space and 7 days in time, the spatial and temporal spectra are essentially indistinguishable from each other and also scaling.

In order to understand this, and in particular to work out the fundamental time scale at which the weather regime breaks down and makes a transition to the lower frequency macroweather regime, we can again go back to [Van der Hoven, 1957]. Aside from the ill-starred “spectral gap”, his spectrum also showed a more robust feature: a drastic change in atmospheric statistics at time scales of several days (fig. 4.2, the “bump” on the left). At first ascribed to “migratory pressure systems”, later termed the “synoptic maximum”62, it was eventually theorized as a baroclinic instability63. However, neither its presence in all the atmospheric fields nor its true origin and fundamental implications could be explained until - as explained above - the turbulent laws were extended to planetary scales.

The key feature of anisotropic scaling is that the vertical is controlled by the buoyancy force variance flux and the horizontal dynamics by the energy flux ε to smaller scales (units W/kg, also known as the “energy rate density”). This is the same dimensional quantity upon which the Kolmogorov law is based (Δv = ε^(1/3)L^(1/3) for the velocity difference Δv across a horizontal distance L), although here the law only holds in the horizontal, not the vertical. The classical lifetime (τ) - size (L) relation is then obtained by using dimensional analysis on ε: τ ≈ ε^(-1/3)L^(2/3). This is the Lagrangian space-time relation (64 and ch. 8 of the review 25); the Eulerian (fixed frame) one - as deduced for example from the space-time IR imagery in fig. 4.10 - is linear (for some theory see 65); fig. 1.2.3 shows the Eulerian space-time relationship that this spectrum implies: it is linear and corresponds to about 900 km/day (≈ 10 m/s).
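The dimensional argument can be checked mechanically. A minimal sketch (mine, not from the book) encodes each quantity as exponents of (mass, length, time) and verifies that ε^(1/3)L^(1/3) is a velocity while ε^(-1/3)L^(2/3) is a time:

```python
from fractions import Fraction as F

# Minimal dimensional-analysis check (a sketch, not from the book): encode
# dimensions as exponents of (mass, length, time). The energy rate density
# epsilon has units W/kg = m^2 s^-3; L is a length.
eps = (F(0), F(2), F(-3))
L = (F(0), F(1), F(0))

def power_product(a, pa, b, pb):
    """Dimensions of a**pa * b**pb."""
    return tuple(x * pa + y * pb for x, y in zip(a, b))

dv = power_product(eps, F(1, 3), L, F(1, 3))    # Kolmogorov: eps^(1/3) L^(1/3)
tau = power_product(eps, F(-1, 3), L, F(2, 3))  # lifetime:   eps^(-1/3) L^(2/3)

print(dv, tau)   # (0, 1, -1) is a velocity; (0, 0, 1) is a time
```

No other combination of powers of ε and L yields a pure time, which is what makes the lifetime-size relation unique up to a dimensionless constant.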

Fig. 4.10: The space-time diagram obtained from the satellite pictures analyzed in fig. 4.2, lower left, reproduced from 66.


If one estimates ε by dividing the total solar power that is transformed into mechanical energy (about 4%) by the total tropospheric mass, one finds ε ≈ 1 mW/kg, which is close to the directly estimated empirical value (including latitudinal variations, see fig. 1.2.4, bottom). Combining this with L = 20,000 km (the largest great circle distance), this value implies that the lifetime of planetary structures (and hence the weather-macroweather transition) is τw ≈ 5 - 10 days. When the theory is applied to the ocean (which is similarly turbulent with ε ≈ 10-8 W/kg), one obtains a transition at about 1 - 2 years (also observed, 67, fig. 1.2.5). Finally, it can be used to estimate ε ≈ 40 mW/kg on Mars and hence the corresponding Martian transition scale at about 1.8 sols (fig. 1.2.5).
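These order-of-magnitude estimates are easy to reproduce. The sketch below plugs the rough ε values quoted here into τw = ε^(-1/3)L^(2/3); the Mars half-circumference (~10,700 km) is my own added number, and the exact outputs depend on the figures assumed (the Mars estimate lands near the quoted ≈ 1.8 sols only to within tens of percent):

```python
# Order-of-magnitude sketch of the weather-macroweather transition time
# tau_w = eps^(-1/3) * L^(2/3), using the rough figures quoted in the text;
# the Mars half-circumference (~10,700 km) is an added assumption.
def tau_w(eps, L):
    """Transition time (s) for energy rate density eps (W/kg) and scale L (m)."""
    return eps**(-1.0 / 3.0) * L**(2.0 / 3.0)

DAY = 86400.0     # seconds
SOL = 88775.0     # Martian day, seconds

earth_days = tau_w(1e-3, 2.0e7) / DAY    # eps ~ 1 mW/kg, L = 20,000 km
ocean_days = tau_w(1e-8, 2.0e7) / DAY    # eps ~ 1e-8 W/kg
mars_sols = tau_w(4e-2, 1.07e7) / SOL    # eps ~ 40 mW/kg

print(f"atmosphere ~ {earth_days:.0f} days, "
      f"ocean ~ {ocean_days / 365:.1f} years, Mars ~ {mars_sols:.1f} sols")
```

With these inputs the atmosphere comes out in the 5 - 10 day range, the ocean at about a year, and Mars at under two sols, matching the three observed transitions.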

From the point of view of turbulent laws, the transition from weather to macroweather is a “dimensional transition”: at time scales longer than τw, the spatial degrees of freedom are essentially “quenched” so that the system’s dimension is effectively reduced from (space + time) to (time) alone. Both turbulence models and GCM control runs (i.e. runs with constant external forcings) reproduce the transition and also produce realistic low frequency variability. The fact that these weather models reproduce this justifies the term “macroweather”. However, in forced GCMs - as with instrumental and multiproxy data - beyond a critical time scale τc the variability starts to increase again (as in the weather regime) and the true climate regime begins; τc ≈ 10 years in the anthropocene and τc ≈ 100 years or more in the pre-industrial epoch (see fig. 1.2.4, top curves).

Fig. 1.2.5: The three known weather-macroweather transitions: air over the Earth (black and upper purple), the sea surface temperature (SST, ocean) at 5° resolution (lower blue) and air over Mars (green and orange). The air-over-Earth curves are from 30 years of daily data from a French station (Macon, black) and from air temperatures for the last 100 years (5°x5° resolution NOAA NCDC, purple); the spectrum of monthly averaged SST is from the same data base (blue, bottom). The Mars spectra are from Viking lander data (orange) as well as from MACDA Mars reanalysis data (green) based on thermal infrared retrievals from the Thermal Emission Spectrometer (TES) on the Mars Global Surveyor satellite. The strong green and orange “spikes” at the right are the Martian diurnal cycle and its harmonics. Adapted from 68.


Fig. 1.2.2b: A comparison of spectra from terrestrial and Martian reanalyses (left and right columns respectively) showing the universality of the scaling behaviour; the top row shows the zonal, the bottom row the meridional spectra. Adapted from 69.

Fig. 1.2.2c: A comparison of trace moments (M = <φλq>) for terrestrial (left) and Martian (right) reanalyses, for moments q = 0.2, 0.4, 0.6, …, 1.8, 2 and scale ratio λ = (half planet circumference)/(spatial resolution); the same reanalyses as in fig. 1.2.3a. The graphs (top to bottom) are for the surface pressure, meridional wind, zonal wind and temperature (the latter three at about 70% surface pressure). The Martian trace moments should be compared to the terrestrial ones to the left of the thin black dashed line (the points to the right are at scales not represented in the lower resolution Martian reanalysis).

20

Page 21: Chapter 4.Weather, macroweather, the climate and … · Web viewThe new element was the use of lower resolution series that could be eye averaged at 5 minute resolutions and that

y-05-08 6:15 PM

Fig. 1b: The three known weather - macroweather transitions: air over the Earth (black and upper purple), the Sea Surface Temperature (SST, ocean) at 5o resolution (lower blue) and air over Mars (Green and orange). The air over earth curve is from 30 years of daily data from a French station (Macon, black) and from air temps for last 100 years (5ox5o resolution NOAA NCDC), the spectrum of monthly averaged SST is from the same data base (blue, bottom). The Mars spectra are from Viking lander data (orange) as well as MACDA Mars reanalysis data (Green) based on thermal infrared retrievals from the Thermal Emission Spectrometer (TES) for the Mars Global Surveyor satellite. The strong green and orange “spikes” at the right are the Martian diurnal cycle and its harmonics. Adapted from 68.


Fig. 2: The weather-macroweather transition scale τw estimated directly from break points in the spectra for the temperature (red) and precipitation (green) as a function of latitude, with the longitudinal variations determining the dashed one-standard-deviation limits. The data are from the 138 year long Twentieth Century reanalysis (20CR, 70); the τw estimates were made by performing bilinear log-log regressions on spectra from 180 day long segments, averaged over 280 segments per grid point. The blue curve is the theoretical τw obtained by estimating the distribution of ε from the ECMWF reanalyses for the year 2006 (using τw = ε^(-1/3) L^(2/3), where L = half the Earth's circumference); it agrees very well with the temperature τw. τw is particularly high near the equator since the winds there tend to be lower, hence lower ε. Similarly, τw is particularly low for precipitation since it is usually associated with high turbulence (high ε). Reproduced from 25.
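The eddy turnover time formula behind the blue curve, τw = ε^(-1/3) L^(2/3), is easy to evaluate. The sketch below plugs in an assumed representative tropospheric energy flux ε of about 1 mW/kg (the figure uses the full latitude-by-latitude distribution, not a single value):

```python
# Sketch: tau_w = eps**(-1/3) * L**(2/3), the lifetime of planetary
# scale eddies.  eps here is an assumed typical value, not a measurement.
L = 2.0e7                 # half the Earth's circumference, in metres
eps = 1.0e-3              # turbulent energy flux, m^2/s^3 (~1 mW/kg)
tau_w = eps ** (-1.0 / 3.0) * L ** (2.0 / 3.0)   # seconds
print(tau_w / 86400)      # about 8.5 days
```

The result, a bit over a week, is the natural timescale of the weather-macroweather transition: it is the typical lifetime of the largest structures that fit on the planet.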


Fig. 3: The zonal, meridional and temporal spectra of 1386 images (~two months of data, September and October 2007) of radiance fields measured by a thermal infrared channel (10.3-11.3 μm) on the geostationary satellite MTSAT over the south-west Pacific, at resolutions of 30 km and 1 hr, over latitudes 40°S-30°N and longitudes 80°E-200°E. With the exception of the (small) diurnal peak (and its harmonics), the rescaled spectra are nearly identical and are also nearly perfectly scaling (the black line shows exact power law scaling after taking into account the finite image geometry). Reproduced from 64.


Fig. 5c: Haar fluctuation analysis of globally, annually averaged outputs of past Millennium simulations over the pre-industrial period (1500-1900) using the NASA GISS E2R model with various forcing reconstructions. Also shown (thick black) are the fluctuations of the pre-industrial multiproxies, which have stronger multicentennial variability. Finally (bottom, thin black) are the results of the control run (no forcings), showing that macroweather (slope < 0) continues to millennial scales. Reproduced from 71.
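Haar fluctuations themselves are simple to compute: over an interval of length Δt, the fluctuation is the difference between the mean over the second half of the interval and the mean over the first half. The sketch below is a minimal version under that convention (some analyses include an extra calibration factor); it is an illustration, not the code behind the figure:

```python
import numpy as np

def haar_rms(x, lag):
    """Root-mean-square Haar fluctuation at an even lag: for each
    disjoint interval of length lag, take the mean of the second half
    minus the mean of the first half.  On log-log axes versus lag, a
    negative slope indicates macroweather-type variability (averaging
    reduces fluctuations), a positive slope weather- or climate-type."""
    half = lag // 2
    n = (len(x) // lag) * lag                   # trim to whole intervals
    blocks = np.asarray(x[:n], dtype=float).reshape(-1, lag)
    dif = blocks[:, half:].mean(axis=1) - blocks[:, :half].mean(axis=1)
    return np.sqrt(np.mean(dif ** 2))

# For a pure linear trend of unit slope, the Haar fluctuation is lag/2:
print(haar_rms(np.arange(100.0), 10))   # 5.0
```

One can check the sign convention on the slope: for uncorrelated noise, doubling the lag halves nothing but averages more, so the RMS Haar fluctuation decreases with lag (slope -1/2), exactly the macroweather-like behaviour of the control run in the figure.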


Fig. 5d: Haar fluctuation analysis of Climate Research Unit (CRU, HadCRUT3) temperature fluctuations, and of globally, annually averaged outputs of past Millennium simulations over the same period (1880-2008) using the NASA GISS E2R model with various forcing reconstructions (dashed). Also shown are the fluctuations of the pre-industrial multiproxies, showing the much smaller centennial and millennial scale variability of the pre-industrial epoch. Reproduced from 71.


Fig. 6: Variation of τw (bottom) and τc (top) as a function of latitude, as estimated from the 138 year long 20CR reanalysis 700 mb temperature field (the τc estimates are only valid in the anthropocene). The bottom red and thick blue curves for τw are from fig. 2; also shown at the bottom is the effective external scale (τeff) of the temperature cascade estimated from the European Centre for Medium-Range Weather Forecasts interim reanalysis for 2006 (thin blue). The top τc curves were estimated by bilinear log-log fits on the Haar structure functions applied to the same 20CR temperature data. The macroweather regime lies between the top and bottom curves.

Readers of the blog "23/9 D atmospheric motions: an unwitting constraint on Numerical Weather Models" will recall that atmospheric structures become flatter and flatter at larger and larger scales, but that they do so in a scaling (power law) way. Contrary to the postulates of the classical 3D/2D model of isotropic turbulence, there is no drastic scale transition in the atmosphere's statistics. However, ever since the famous Global Atmospheric Sampling Program (GASP) experiment (fig. 2), there have been repeated reports of drastic transitions in aircraft statistics (spectra) of the horizontal wind, typically at scales of several hundred kilometers. We are now in a position to resolve the apparent contradiction between scaling 23/9D dynamics and observations with broken scaling. At some critical scale, one that depends on the aircraft characteristics as well as on the turbulent state of the atmosphere, the aircraft "wanders" sufficiently off level that the wind it measures changes more because of the level change than because of the horizontal displacement of the aircraft. It turns out that this effect can easily explain the observations. Rather than a transition from isotropic 3D to isotropic 2D behavior (spectra with transitions from k^(-5/3) to k^(-3), where k is a wavenumber, an inverse distance), one instead has a transition from k^(-5/3) (small scales) to k^(-2.4) at larger scales (fig. 2), the latter being the typical exponent found in the vertical direction (for example by dropsondes, 72).
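The spectral exponents quoted here are estimated in practice by linear regression of log E(k) against log k. The sketch below is a minimal single-slope version (the bilinear fits mentioned in the captions do the same thing piecewise, on either side of a break point); the synthetic test signals are assumptions for illustration only:

```python
import numpy as np

def spectral_slope(x, dt=1.0):
    """Estimate beta in E(k) ~ k**(-beta) by least-squares regression of
    log spectral power on log wavenumber (single slope, no break)."""
    x = np.asarray(x, dtype=float)
    E = np.abs(np.fft.rfft(x - x.mean())) ** 2   # periodogram
    k = np.fft.rfftfreq(len(x), d=dt)            # wavenumbers
    mask = k > 0                                  # drop the k = 0 mean term
    slope = np.polyfit(np.log(k[mask]), np.log(E[mask]), 1)[0]
    return -slope

# Brownian motion has beta = 2 (near the k^(-2.4) vertical exponent),
# while uncorrelated noise has beta = 0:
rng = np.random.default_rng(0)
print(spectral_slope(np.cumsum(rng.standard_normal(8192))))   # near 2
```

A "bilinear" estimate simply runs this regression twice, over the wavenumbers below and above a candidate break, and picks the break that minimizes the combined residuals; that is how the transition scales in figs. 2 and 6 were located.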

Since the 1980s, the wide range scaling of the atmosphere in both the horizontal and the vertical was increasingly documented; many examples are shown in WC, ch. 1. By around 2010, the only remaining empirical support for the 3D/2D model was the interpretation of fig. 2 (and others like it) in terms of a "dimensional transition" from 3D to 2D. This interpretation was already implausible, since a re-examination of the literature had shown that the large scale exponents were closer to k^(-2.4) than k^(-3), as expected for "wandering" aircraft trajectories. Finally, just last year, with the help of ≈14,500 commercial aircraft flights with high accuracy GPS altitude measurements, it was possible for the first time to determine the typical variability of the wind in vertical sections (fig. 3), and this was almost exactly the predicted 23/9 = 2.555… value: the measured "elliptical dimension" was ≈2.57. It is hard to see how the 3D/2D model can survive this finding.

So next time you buckle up, celebrate the fact that the turbulence you feel is still stimulating scientific progress!

1 Marshall, J. S. & Palmer, W. M. The distribution of raindrops with size. Journal of Meteorology 5, 165-166 (1948).

2 Lovejoy, S. & Schertzer, D. Fractals, rain drops and resolution dependence of rain measurements. Journal of Applied Meteorology 29, 1167-1170 (1990).

3 Desaulniers-Soucy, N., Lovejoy, S. & Schertzer, D. The continuum limit in rain and the HYDROP experiment. Atmos. Res. 59-60, 163-197 (2001).

4 Lovejoy, S. & Schertzer, D. Turbulence, rain drops and the l^(1/2) number density law. New J. of Physics 10, 075017 (32pp), doi:10.1088/1367-2630/10/7/075017 (2008).

5 Taylor, G. I. Statistical theory of turbulence. Proc. Roy. Soc. I-IV, A151, 421-478 (1935).

6 Rotta, J. C. Statistische Theorie nichthomogener Turbulenz. Z. Phys. 129, 547-572 (1951).

7 Kolmogorov, A. N. Grundbegriffe der Wahrscheinlichkeitsrechnung (1933).


8 Kolmogorov, A. N. Local structure of turbulence in an incompressible liquid for very large Reynolds numbers. (English translation: Proc. Roy. Soc. A434, 9-17, 1991). Proc. Acad. Sci. URSS., Geochem. Sect. 30, 299-303 (1941).

9 Obukhov, A. M. On the distribution of energy in the spectrum of turbulent flow. Dokl. Akad. Nauk SSSR 32, 22-24 (1941).

10 Onsager, L. The distribution of energy in turbulence (abstract only). Phys. Rev. 68, 286 (1945).

11 Kolmogorov, A. N. A refinement of previous hypotheses concerning the local structure of turbulence in viscous incompressible fluid at high Reynolds number. Journal of Fluid Mechanics 83, 349 (1962).

12 Heisenberg, W. On the theory of statistical and isotropic turbulence. Proc. of the Roy. Soc. A 195, 402-406 (1948).

13 von Weizsäcker, C. F. Das Spektrum der Turbulenz bei grossen Reynolds'schen Zahlen. Z. Phys. 124, 614 (1948).

14 Corrsin, S. On the spectrum of Isotropic Temperature Fluctuations in an isotropic Turbulence. Journal of Applied Physics 22, 469-473 (1951).

15 Obukhov, A. Structure of the temperature field in a turbulent flow. Izv. Akad. Nauk. SSSR. Ser. Geogr. I Geofiz 13, 55-69 (1949).

16 Bolgiano, R. Turbulent spectra in a stably stratified atmosphere. J. Geophys. Res. 64, 2226 (1959).

17 Obukhov, A. Effect of archimedean forces on the structure of the temperature field in a turbulent flow. Dokl. Akad. Nauk SSSR 125, 1246 (1959).

18 Batchelor, G. K. The theory of homogeneous turbulence. (Cambridge University Press, 1953).

19 Dewan, E. Saturated-cascade similitude theory of gravity wave spectra. J. Geophys. Res. 102, 29799-29817 (1997).

20 Lovejoy, S., Tuck, A. F., Hovde, S. J. & Schertzer, D. Is isotropic turbulence relevant in the atmosphere? Geophys. Res. Lett. doi:10.1029/2007GL029359, L14802 (2007).

21 Lovejoy, S., Tuck, A. F., Hovde, S. J. & Schertzer, D. Do stable atmospheric layers exist? Geophys. Res. Lett. 35, L01802, doi:10.1029/2007GL032122 (2008).

22 Kraichnan, R. H. Inertial ranges in two-dimensional turbulence. Physics of Fluids 10, 1417-1423 (1967).

23 Panofsky, H. A. & Van der Hoven, I. Spectra and cross-spectra of velocity components in the mesometeorological range. Quarterly J. of the Royal Meteorol. Soc. 81, 603-606 (1955).

24 Van der Hoven, I. Power spectrum of horizontal wind speed in the frequency range from 0.0007 to 900 cycles per hour. Journal of Meteorology 14, 160-164 (1957).

25 Lovejoy, S. & Schertzer, D. The Weather and Climate: Emergent Laws and Multifractal Cascades. (Cambridge University Press, 2013).

26 Robinson, G. D. Some current projects for global meteorological observation and experiment. Quart. J. Roy. Meteor. Soc. 93, 409-418 (1967).

27 Vinnichenko, N. K. The kinetic energy spectrum in the free atmosphere for 1 second to 5 years. Tellus 22, 158 (1969).


28 Goldman, J. L. The power spectrum in the atmosphere below macroscale. (Institute of Desert Research, University of St. Thomas, Houston Texas, 1968).

29 Atkinson, B. W. Meso-scale atmospheric circulations. (Academic Press, 1981).

30 Fjortoft, R. On the changes in the spectral distribution of kinetic energy in two dimensional, nondivergent flow. Tellus 7, 168-176 (1953).

31 Charney, J. G. Geostrophic Turbulence. J. Atmos. Sci. 28, 1087 (1971).

32 Schertzer, D. & Lovejoy, S. in Turbulent Shear Flow (ed L. J. S. Bradbury et al.) 7-33 (Springer-Verlag, 1985).

33 Julian, P. R., Washington, W. M., Hembree, L. & Ridley, C. On the spectral distribution of large-scale atmospheric energy. J. Atmos. Sci., 376-387 (1970).

34 Morel, P. & Larchevêque, M. Relative dispersion of constant level balloons in the 200 mb general circulation. J. of the Atmos. Sci. 31, 2189-2196 (1974).

35 Lacorta, G., Aurell, E., Legras, B. & Vulpiani, A. Evidence for a k^-5/3 spectrum from the EOLE Lagrangian balloons in the lower stratosphere. J. of the Atmos. Sci. 61, 2936-2942 (2004).

36 Lovejoy, S. et al. Nonlinear geophysics: why we need it. EOS, 456-457 (2009).

37 Schertzer, D. & Lovejoy, S. Physical modeling and analysis of rain and clouds by anisotropic scaling of multiplicative processes. Journal of Geophysical Research 92, 9693-9714 (1987).

38 Gabriel, P., Lovejoy, S., Schertzer, D. & Austin, G. L. Multifractal Analysis of resolution dependence in satellite imagery. Geophys. Res. Lett. 15, 1373-1376 (1988).

39 Lovejoy, S., Schertzer, D. & Tsonis, A. A. Functional Box-Counting and Multiple Elliptical Dimensions in rain. Science 235, 1036-1038 (1987).

40 Tessier, Y. Multifractal objective analysis of rain and clouds. PhD thesis, McGill (1993).

41 Lovejoy, S., Schertzer, D., Silas, P., Tessier, Y. & Lavallée, D. The unified scaling model of atmospheric dynamics and systematic analysis in cloud radiances. Annales Geophysicae 11, 119-127 (1992).

42 Lovejoy, S. & Schertzer, D. Multifractals, Universality classes and satellite and radar measurements of cloud and rain fields. Journal of Geophysical Research 95, 2021 (1990).

43 Lovejoy, S., Schertzer, D. & Stanway, J. D. Direct evidence of planetary scale atmospheric cascade dynamics. Phys. Rev. Lett. 86, 5200-5203 (2001).

44 Strauss, D. M. & Ditlevsen, P. Two-dimensional turbulence properties of the ECMWF reanalyses. Tellus 51A, 749-772 (1999).

45 Tessier, Y., Lovejoy, S. & Schertzer, D. Universal Multifractals: theory and observations for rain and clouds. Journal of Applied Meteorology 32, 223-250 (1993).

46 Stolle, J., Lovejoy, S. & Schertzer, D. The stochastic cascade structure of deterministic numerical models of the atmosphere. Nonlin. Proc. in Geophys. 16, 1–15 (2009).

47 Gage, K. S. & Nastrom, G. D. Theoretical Interpretation of atmospheric wavenumber spectra of wind and temperature observed by commercial aircraft during GASP. J. of the Atmos. Sci. 43, 729-740 (1986).


48 Lovejoy, S., Tuck, A. F., Schertzer, D. & Hovde, S. J. Reinterpreting aircraft measurements in anisotropic scaling turbulence. Atmos. Chem. and Phys. 9, 1-19 (2009).

49 Lovejoy, S., Tuck, A. F., Schertzer, D. & Hovde, S. J. Reinterpreting aircraft measurements in anisotropic scaling turbulence. Atmos. Chem. Phys. Discuss., 9, 3871-3920 (2009).

50 Nastrom, G. D. & Gage, K. S. A first look at wave number spectra from GASP data. Tellus 35, 383 (1983).

51 Lilly, D. K. Two-dimensional turbulence generated by energy sources at two scales. J. Atmos. Sci. 46, 2026–2030 (1989).

52 Bacmeister, J. T. et al. Stratospheric horizontal wavenumber spectra of winds, potential temperature, and atmospheric tracers observed by high-altitude aircraft. J. Geophys. Res. 101, 9441-9470 (1996).

53 Gao, X. & Meriwether, J. W. Mesoscale spectral analysis of in situ horizontal and vertical wind measurements at 6 km. J. of Geophysical Res. 103, 6397-6404 (1998).

54 Lindborg, E. Can the atmospheric kinetic energy spectrum be explained by two-dimensional turbulence? J. Fluid Mech. 388, 259-288 (1999).

55 Högström, U., Smedman, A. N. & Bergström, H. A Case Study Of Two-Dimensional Stratified Turbulence. J. Atmos. Sci. 56, 959-976 (1999).

56 Lindborg, E. & Cho, J. Horizontal velocity structure functions in the upper troposphere and lower stratosphere ii. Theoretical considerations. J. Geophys. Res. 106, 10233-10241 (2001).

57 Cho, J. & Lindborg, E. Horizontal velocity structure functions in the upper troposphere and lower stratosphere i: Observations. J. Geophys. Res. 106, 10223-10232 (2001).

58 Pinel, J., Lovejoy, S., Schertzer, D. & Tuck, A. F. Joint horizontal - vertical anisotropic scaling, isobaric and isoheight wind statistics from aircraft data. Geophys. Res. Lett. 39, L11803, doi:10.1029/2012GL051698 (2012).

59 Frehlich, R. G. & Sharman, R. D. Equivalence of velocity statistics at constant pressure or constant altitude. Geophys. Res. Lett. 37, L08801, doi:10.1029/2010GL042912 (2010).

60 Lindborg, E., Tung, K. K., Nastrom, G. D., Cho, J. Y. N. & Gage, K. S. Comment on "Reinterpreting aircraft measurements in anisotropic scaling turbulence" by Lovejoy et al. 2009. Atmos. Chem. Phys. Discuss. 9, 22331-22336 (2009).

61 Lindborg, E., Tung, K. K., Nastrom, G. D., Cho, J. Y. N. & Gage, K. S. Interactive comment on “Comment on “Reinterpreting aircraft measurements in anisotropic scaling turbulence" by Lovejoy et al. (2009)”. Atmos. Chem. Phys. Discuss. 9 C9797–C9798 (2010).

62 Kolesnikov, V. N. & Monin, A. S. Spectra of meteorological field fluctuations. Izvestiya, Atmospheric and Oceanic Physics 1, 653-669 (1965).

63 Vallis, G. in Stochastic Physics and Climate Modelling (eds T. Palmer & P. Williams) 1-34 (Cambridge University Press, 2010).

64 Pinel, J., Lovejoy, S. & Schertzer, D. The horizontal space-time scaling and cascade structure of the atmosphere and satellite radiances. Atmos. Res. 140-141, 95-114, doi:10.1016/j.atmosres.2013.11.022 (2014).


65 Radkevitch, A., Lovejoy, S., Strawbridge, K. B., Schertzer, D. & Lilley, M. Scaling turbulent atmospheric stratification, Part III: empirical study of space-time stratification of passive scalars using lidar data. Quart. J. Roy. Meteor. Soc., doi:10.1002/qj.1203 (2008).

66 Pinel, J. The space-time cascade structure of the atmosphere. PhD thesis, McGill (2012).

67 Lovejoy, S. & Schertzer, D. Towards a new synthesis for atmospheric dynamics: space-time cascades. Atmos. Res. 96, 1-52, doi:10.1016/j.atmosres.2010.01.004 (2010).

68 Lovejoy, S., Muller, J. P. & Boisvert, J. P. On Mars too, expect macroweather. Geophys. Res. Lett. 41, 7694-7700, doi:10.1002/2014GL061861 (2014).

69 Chen, W., Lovejoy, S. & Muller, J. P. Mars’ atmosphere: the sister planet, our statistical twin. J. Geophys. Res. Atmos. 121, doi:10.1002/2016JD025211 (2016).

70 Compo, G. P. et al. The Twentieth Century Reanalysis Project. Quarterly J. Roy. Meteorol. Soc. 137, 1-28, doi:10.1002/qj.776 (2011).

71 Lovejoy, S., Schertzer, D. & Varon, D. Do GCMs predict the climate… or macroweather? Earth Syst. Dynam. 4, 1-16, doi:10.5194/esd-4-1-2013 (2013).

72 Lovejoy, S., Tuck, A. F., Hovde, S. J. & Schertzer, D. The vertical cascade structure of the atmosphere and multifractal dropsonde outages. J. Geophys. Res. 114, D07111, doi:10.1029/2008JD010651 (2009).
