
The Precautionary Principle

Yaneer Bar-Yam*, Rupert Read†, Nassim Nicholas Taleb‡

*New England Complex Systems Institute
†School of Philosophy, University of East Anglia
‡School of Engineering, New York University

Abstract—The precautionary principle is useful only in certain contexts and can justify only a certain type of action. We present a non-naive, fragility-based version of the precautionary principle, placing it under the formal statistical and probabilistic structure of "ruin" problems, in which an entire system is at risk of total failure. We discuss the implications this definition has for current questions about the use of nuclear energy and the creation of GMOs, and address common counterarguments to our claims.

CONTENTS

I    Introduction
II   Decision Making and Types of Risk
III  A Non-Naive PP
     III-A  Harm vs. Ruin: When the PP is necessary
     III-B  Naive Interventionism
IV   Fat Tails and Fragility
     IV-A   Thin and Fat Tails
     IV-B   Fragility as a Nonlinear Response
V    Why is fragility the general rule?
     V-A    Fragility and Replicating Organisms
     V-B    Fragility, Dose response and the 1/n rule
VI   Why are GMOs to be put under the PP but not nuclear energy?
     VI-A   GMOs
     VI-B   Risk of famine without GMOs
     VI-C   Nuclear
VII  Preventive Strikes
VIII Fallacious arguments against the PP
     VIII-A Crossing the road (the paralysis argument)
     VIII-B The Loch Ness fallacy
     VIII-C The fallacy of misusing the naturalistic fallacy
     VIII-D The "Butterfly in India" fallacy
     VIII-E The potato fallacy
     VIII-F The Russian roulette fallacy (the counterexamples in the risk domain)
     VIII-G The Carpenter fallacy
     VIII-H The pathologization fallacy
IX   Conclusions

I. INTRODUCTION

The aim of the precautionary principle (PP) is to prevent decision makers from putting society as a whole, or a significant segment of it, at risk from the unseen side effects of a certain type of decision. The PP states that if an action or policy has a suspected risk of causing severe harm to the public domain (such as general health or the environment), then, in the absence of scientific near-certainty about the safety of the action, the burden of proof about absence of harm falls on those proposing the action. It is meant to deal with the effects of absence of evidence and the incompleteness of scientific knowledge in some risky domains.1

We believe that it should be used only in extreme situations: when the potential harm is systemic (rather than isolated), and the consequences can involve total irreversible ruin, such as the extinction of human beings or even of all life on the planet.

The aim of this paper is to place the concept of precaution within a formal statistical and risk-based structure, grounding it in probability theory and the properties of complex systems. Our aim is to allow decision makers to discern which courses of events warrant the use of the PP, and in which cases one may be acting out of paranoia and using the PP inappropriately, in a way that restricts benign (and necessary) risk-taking.

II. DECISION MAKING AND TYPES OF RISK

Decision and policy makers tend to assume all risks are created equal, and thus that all potential sources of randomness are subject to the same set of approaches (for instance, standard risk-management techniques or the invocation of the precautionary principle). However, taking into account the structure of randomness in a given system can have a dramatic effect on which kinds of actions are or are not appropriate and justified.

Two kinds of potential harm must be considered when determining an appropriate approach to risk: 1) localized, non-spreading errors, and 2) systemic, spreading errors that propagate through a system, resulting in irreversible damage. When the potential for harm is localized and non-systemic, and the risk is easy to calculate from past data, risk-management techniques, cost-benefit analyses and standard mitigation techniques are appropriate, as any error in the calculations will be non-spreading and the potential harm from miscalculation will be bounded. In these situations of idiosyncratic, i.e., non-systemic harm, cost-benefit analysis enables balancing of potential benefits against potential losses.

1The Rio Declaration presents it as follows: "In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation."


Table I encapsulates the central idea of the paper and shows the differences between decisions with a risk of harm (warranting regular risk-management techniques) and decisions with a risk of total ruin (warranting the PP).

Standard Risk Management          | Precautionary Approach
----------------------------------|--------------------------------
localized harm                    | systemic ruin
nuanced cost-benefit              | avoid at all costs
statistical                       | probabilistic, non-statistical
variations                        | ruin
convergent probabilities          | divergent probabilities
local                             | systemic
recoverable                       | irreversible
independent factors               | interconnected factors
evidence based                    | precautionary
thin tails                        | fat tails
bottom-up, tinkering, evolution   | top-down, human-made

Table I: The two different types of risk and their respective characteristics compared.

III. A NON-NAIVE PP

Taking risks is not just unavoidable, but necessary for the functioning and advancement of society; accordingly, the aim of the PP is precisely to avoid constraining such risk-taking while protecting ourselves from its most severe consequences.

Various critics of the PP have expressed concern that it will be applied in an overreaching manner, eliminating the ability to take reasonable risks, those that are needed for individual or societal gains. Indeed, one can naively invoke the precautionary principle to justify constraining risk in an indiscriminate manner, given the abstract nature of the risks of events that did not take place.

Likewise, one can make the error of suspending the PP in cases where it is vital.

Hence, a non-naive view of the precautionary principle is one in which it is invoked only when necessary, and only to justify reasonable interventions that prevent a certain variety of very precisely defined risks based on probabilistic structures. But, in that view, the PP should also never be omitted when needed.

This section will outline the difference between the naive and non-naive approaches.

A. Harm vs. Ruin: When the PP is necessary

The purpose of the PP is to avoid a certain class of what are called, in probability and insurance, "ruin" problems, rather than regular fluctuations and variations that do not represent a severe existential threat. Regular variations within a system, even drastic ones, differ from "ruin" problems in a fundamental way: once a system at some scale reaches an absolute termination point, it cannot recover. A gambler who has lost his entire fortune cannot bounce back into the game; a species that has gone extinct cannot spring back into existence. While an individual may be advised not to "bet the farm", whether or not he does so is a matter of individual preference. Policy makers have a responsibility to avoid catastrophic harm for society as a whole; the focus is on the aggregate, not on single individuals, and on global-systemic, not idiosyncratic, harm. This is the domain of collective "ruin" problems. On the level of the ecosystem, the "ruin" is ecocide: a systemwide, irreversible extinction of life at some scale, which could be the planet.

Even if the risk of ruin from a specific exposure were minuscule, with enough exposures ruin becomes essentially guaranteed. Taking one such risk in a "one-off" manner may sound reasonable, but it also means that an additional one is reasonable. For this reason the risk of ruin is not sustainable. This can be quantified as the probability of ruin approaching 1 as the number of exposures increases (see Fig. 1). The good news is that not all classes of systems present such a risk of ruin; some have a probability of practically zero per single exposure. For example, the planet must have taken close to zero risks of ecocide in trillions of trillions of variations over 3 billion years, otherwise we would not be here.2 For this reason we must consider any genuine risk of total ruin as if it were inevitable.
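
To make the compounding concrete, here is a minimal sketch (ours, not from the paper) of how even a tiny per-exposure ruin probability becomes near-certain ruin under repetition; the per-exposure probability used is an illustrative assumption:

```python
# Minimal sketch (ours, not from the paper): a small per-exposure ruin
# probability compounds into near-certain ruin over repeated exposures.
# The per-exposure probability p = 1e-4 is an illustrative assumption.

def probability_of_ruin(p: float, exposures: int) -> float:
    """Probability of at least one ruin event over independent exposures."""
    return 1.0 - (1.0 - p) ** exposures

p = 1e-4
for n in (100, 1_000, 10_000, 100_000):
    print(f"exposures = {n:>7,}   P(ruin) = {probability_of_ruin(p, n):.5f}")
# P(ruin) is ~0.01 after 100 exposures but ~0.99995 after 100,000:
# what looks survivable "one-off" becomes essentially guaranteed ruin when repeated.
```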

[Figure 1 plot: Probability of Ruin (0 to 1.0) versus Exposure (0 to 10,000), rising toward 1.]

Figure 1: Why Ruin is not a Renewable Resource. No matter how small the probability, with enough exposures ruin becomes guaranteed.

For humanity, global devastation cannot be measured on a standard scale in which harm is proportional to the level of devastation. The harm due to complete destruction is not the same as 10 times the destruction of 1/10 of the system. As the percentage of destruction approaches 100%, the assessment of harm diverges to infinity (instead of converging to a particular number) due to the value placed on a future that ceases to exist.

Because the "cost" of ruin is effectively infinite, cost-benefit analysis (in which the potential harm and potential gain are multiplied by their probabilities and weighed against each other) is no longer a useful paradigm. The potential harm is so substantial that everything else in the equation ceases to matter. In this case, we must do everything we can to avoid the catastrophe.

2We can demonstrate that the probability of ruin from endogenous variation (that is, not taking into account external shocks such as meteorites) is practically zero, even adjusting for 1) survivorship bias, 2) risks taken by the system in its early stages, and 3) such catastrophic events as the Permian mass extinction.

A formalization of the ruin problem identifies harm not as a proportion of destruction, but as a measure of the integrated level of destruction over the time it persists. When the impact of harm extends to all future times, the harm is an infinite quantity. When the harm is infinite, the product of the risk probability and the harm is also infinite, and cannot be balanced against any potential gains, which are necessarily finite. This strategy of evaluating harm as involving the duration of destruction can be extended to localized harms for better assessment in risk management. Our focus here is on the case where destruction is complete for a system, or for an irreplaceable aspect of a system, and therefore the harm diverges.
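
A compact way to write this formalization, in our own notation (the specific symbols are not from the paper), is as a time integral of the level of destruction:

```latex
% Sketch of the harm formalization; the notation is ours, not the authors'.
% d(t) \in [0,1] is the level of destruction persisting at time t.
H(T) \;=\; \int_{0}^{T} d(t)\,\mathrm{d}t ,
\qquad
H(T) \longrightarrow \infty \ \text{ as } T \to \infty \ \text{ whenever } d(t) \not\to 0 .
```

With H infinite, the expected-harm term p·H is infinite for any non-zero probability p, so no finite expected gain can balance it; this is the sense in which cost-benefit analysis ceases to apply.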

Just as the imperative of decision making changes when there is a divergent harm even for a finite (non-zero) risk, so too is there a fundamental change in the ability to apply conventional scientific methods to the evaluation of that harm. This influences the way we evaluate both the existence of, and the risk associated with, ruin.

Critically, traditional empirical approaches do not apply to ruin problems. Standard evidence-based approaches cannot work. In an evidentiary approach to risk (relying on evidence-based methods), the existence of a risk or harm is established only when we experience that risk or harm.

In the case of ruin, by the time evidence comes it will, by definition, be too late to avoid it. Nothing in the past may predict one large fatal deviation, as illustrated in Fig. 3.

Statistical-evidentiary approaches to risk analysis and mitigation assume that the risk itself (i.e., the likelihood or probabilities of outcomes) is well known. However, the level of risk may be hard to gauge, the error attending its evaluation may be high, its probability may be unknown, and, in the case of an essentially infinite harm, the uncertainty about both probability and harm becomes itself a random variable, so we face the consequences of the severely intractable "probability that the model may be wrong".3

Structural and Incompressible Unpredictability: It has been shown that the complexity of real-world systems limits the ability of empirical observations to determine the outcomes of actions (Bar-Yam, 2013). This means that a certain class of systemic risks will remain inherently unknown. Those who want to introduce innovations use controlled experiments to evaluate impact. But in some classes of complex systems, controlled experiments cannot evaluate all of the possible systemic consequences under real-world conditions. In these circumstances, efforts to provide assurance of the "lack of harm" are insufficiently reliable for one to take action.

Since there are mathematical limitations to predictability in a complex system, the central point to determine is whether the threat is local (hence globally benign) or carries systemic consequences. Local risks can handle mistakes without spreading through the entire system. Scientific analysis can robustly determine whether a risk is systemic, i.e., by evaluating the connectivity of the system to the propagation of harm, without determining the specifics of such a risk. If the consequences are systemic, the associated uncertainty of risks must be treated differently. In such cases, precautionary action is not based on "evidence" but purely on analytical approaches. It relies on probability theory without computing probabilities.

3Statistical-evidentiary approaches to risk management are split into two general methods. In pure statistical approaches, devoid of experiments, risk assessors rely on time series analysis, computing various attributes of past data. They can either count the frequency of past events (robust statistics) or calibrate parameters that allow the building of statistical distributions from which to generate probabilities of future events (the parametric approach), or both. Experimental evidentiary methods follow the model of medical trials, computing probabilities of harm from the side effects of drugs or interventions by observing the reactions in a variety of animal and human models.

B. Naive Interventionism

Often when a risk is perceived as having the potential for ruin, it is assumed that any preventive measure is justified. There are at least two problems with such a perspective.

First, as outlined above, localized harm is often mistaken for ruin, and the PP is wrongly invoked where risk-management techniques should be employed. When a risk is not systemic, overreaction will typically cause more harm than benefit, like undergoing dangerous surgery to remove a benign growth.

Second, even if the threat of ruin is real, taking specific (positive) action in order to ward off the perceived threat may introduce new systemic risks, fragilizing the system further. It is often wiser to reduce or remove activity that is generating or supporting the threat and to allow natural variations to play out in localized ways. For example, preemptive U.S. military interventions which have been justified as threat reduction may ultimately embolden anti-American sentiment and amplify the very threat they purport to minimize.

IV. FAT TAILS AND FRAGILITY

A. Thin and Fat Tails

To understand whether a given decision involves the risk of ruin and thus warrants the use of the PP, we must first understand the relevant underlying probabilistic structures. There are two broad types of probability domains: one where the event is accompanied by well-behaved, mild effects (Thin Tails, or "Mediocristan"), the other where small probabilities are associated with large and unpredictable consequences that have no characteristic scale (Fat Tails, or "Extremistan").4 The demarcation between the two is as follows.5

• In Thin Tailed domains, i.e., Mediocristan, harm comes from the collective effect of many, many events; no event alone can be consequential enough to affect the aggregate. It is practically impossible for a single day to account for 99% of all heart attacks in a given year (the probability is small enough to be practically zero); see Fig. 2 for an illustration. Examples of well-known statistical distributions that belong squarely to the thin-tailed domain are: Gaussian, Binomial, Bernoulli, Standard Poisson, Gamma, Beta, and Exponential.

• In Fat Tailed domains, i.e., Extremistan, the aggregate is determined by the largest variation (see Fig. 3). While no human being can be heavier than, say, ten adults (since weight is thin-tailed), a single one can be richer than the bottom two billion humans (since wealth is fat-tailed). Examples of statistical distributions: the Pareto distribution, Levy-Stable distributions with infinite variance, the Cauchy distribution, and mixture distributions with power-law jumps. (A numerical sketch contrasting the two domains follows below, after the footnotes.)

4The designations Mediocristan and Extremistan were presented in The Black Swan to illustrate that in one domain the bulk of the variations comes from the collective effect of the "mediocre", that is, from the center of the distribution, while in the other domain, Extremistan, changes result from jumps and exceptions, the extreme events.

5More technically, in Silent Risk, Taleb (2014) distinguishes between different classes ranging between extreme thin-tailed (Bernoulli) and extreme fat-tailed: 1) compact support but not degenerate, 2) subgaussian, 3) Gaussian, 4) subexponential, 5) power laws with exponent > 2, 6) power laws with exponent 2, 7) power law tails with exponents 3. The borderline between Mediocristan and Extremistan is defined along the class of subexponential distributions with a certain parametrization worse than lognormal, i.e., distributions not having any exponential moments.
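
To make the distinction concrete, here is a minimal numerical sketch (ours, not from the paper); the sample size, seed, and Pareto tail exponent are illustrative assumptions:

```python
# Minimal numerical sketch (ours, not from the paper): in a thin-tailed
# sample the largest observation is a sliver of the total, while in a
# fat-tailed sample it can dominate the sum. Sample size, seed, and the
# Pareto tail exponent (1.1) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
alpha = 1.1

thin = np.abs(rng.normal(size=n))                    # Mediocristan-like
fat = (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha)  # Pareto(alpha), Extremistan-like

for name, sample in (("thin (Gaussian)", thin), ("fat (Pareto)", fat)):
    print(f"{name:>16}: max/sum = {sample.max() / sample.sum():.4%}")
# Typical output: the Gaussian maximum is a few thousandths of a percent of
# the sum; the Pareto maximum can amount to a sizeable fraction of the sum,
# i.e., the aggregate is driven by a single observation.
```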

Nature, on the largest scale, is a case of thin tails: no single variation represents a large share of the sum of the total variation; even occasional mass extinctions are a blip in the total variation. This is characteristic of a bottom-up, tinkering design, where things change only mildly and iteratively. Working backwards, we can see that were this not the case we would not be alive today, since a single one of the trillions, perhaps trillions of trillions, of variations would have terminated life on the planet and we would not have been able to recover from extinction. Therefore, while tails can be fat for any particular isolated subsystem, nature remains thin-tailed at the level of the planet (Taleb, 2014).

Human-made systems, by contrast (those constructed in a top-down manner), tend to have fat-tailed variations. A single deviation will eventually dominate the sum.

Figure 2: Thin Tails from Tinkering, Bottom-Up, Broad Design. A typical manifestation in Mediocristan.

Figure 3: Fat Tails from Systemic Effects, Top-Down, Concentrated Design. A typical distribution in which the sum is dominated by a single data point.

Interdependence is frequently a feature of fat tails at a systemic scale.6 Consider the global financial crash of 2008. As financial firms became increasingly interdependent, the system started exhibiting periods of calm and efficiency, masking the fact that, overall, the system had become very vulnerable, as an error could spread through the entire economy. Instead of a local shock in an independent section, we experienced a global shock with cascading effects.

The crisis of 2008, in addition, illustrates the failure of evidentiary risk management, since data from time series exhibited more stability than ever before, causing the period to be dubbed "the great moderation" and fooling those relying on statistical risk management.

B. Fragility as a Nonlinear Response

Everything that has survived is necessarily non-linear to harm. If I fall from a height of 10 meters I am injured more than 10 times as much as if I fell from a height of 1 meter, and more than 1000 times as much as if I fell from a height of 1 centimeter; hence I am fragile. Every additional meter, up to the point of my destruction, hurts me more than the previous one. If I were not fragile (if I were susceptible to harm only linearly), I would be destroyed even by accumulated small events, and thus would not survive.

Similarly, if I am hit by a big stone I will be harmed far more than if I were pelted serially with pebbles of the same total weight. Everything that is fragile and still in existence (that is, unbroken) will be harmed more by a certain stressor of intensity X than by k times a stressor of intensity X/k, up to the point of breaking. This non-linear response is central for everything on planet earth, from objects to ideas to companies to technologies.
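
As an illustration only (ours, not from the paper), take a convex harm function, say harm(x) = x²; the functional form is an assumption chosen to exhibit the X versus k times X/k asymmetry, not a claim about real dose-response curves:

```python
# Illustration only (ours, not from the paper): a convex harm function.
# harm(x) = x**2 is an assumed form chosen to exhibit convexity, not a claim
# about real dose-response curves.

def harm(x: float) -> float:
    return x ** 2  # convex: each additional unit of stress hurts more than the last

one_big_stone = harm(10.0)                                  # a single stressor of intensity 10
many_pebbles = sum(harm(10.0 / 100.0) for _ in range(100))  # 100 stressors of intensity 0.1
print(one_big_stone, many_pebbles)  # 100.0 vs 1.0: same total "weight", ~100x less harm
```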

This explains the necessity of using scale when invoking the PP. Polluting in a small way does not warrant the PP, because it is exponentially less harmful than polluting in large quantities, since harm is non-linear.

We should be careful, however, of actions that may seem small and local but then lead to systemic consequences.

Figure 4: The non-linear response compared to the linear.

6Interdependence causes fat tails, but not all fat tails come from interdependence.


V. WHY IS FRAGILITY THE GENERAL RULE?

The statistical structure of stressors is such that small deviations are much, much more frequent than large ones. Look at the coffee cup on the table: there are millions of recorded earthquakes every year. Simply, if the coffee cup were linearly sensitive to earthquakes, it would not have existed at all, as it would have been broken in the early stages of its life. The coffee cup, however, is non-linear to harm, so that the small earthquakes only make it wobble, whereas one large one would break it forever.

This nonlinearity is necessarily present in everything fragile.

A. Fragility and Replicating Organisms

In the world of coffee cups, it is acceptable if one breaks when exposed to a large quake; we simply replace it with another that will last until the next extreme shock. Because of the infrequency of large events, the cup serves its purpose through most events, and is not terribly missed when the big one strikes.

Biological organisms share a similar characteristic. They are able to survive many, many events that incur small amounts of harm (or stress), but break when exposed to extreme shocks. Unlike coffee cups, organisms enjoy the feature of (self-)replication, such that no external agent is necessary to make a new one. This allows the exposures of two similar organisms to become decorrelated as they move essentially independently through space. Exposure of one organism of a given species to an extreme event therefore does not imply exposure of all individuals of the species. By the time an extreme shock rolls around, replication has confined the exposure to a small subset of individuals; risk has been localized.

Variations among replicated organisms allow the system of organisms as a whole to develop new and better ways of surviving the typical stressors individuals are exposed to. The fragility of the individual provides information for the system: what does not work. An overly fragile organism is not able to replicate itself sufficiently, just as a coffee cup that breaks when set on the table would not enjoy wide popularity.

B. Fragility, Dose response and the 1/n rule

Another area where we see non-linear responses to harm is the dose-response relationship. As the dose of some chemical or stressor increases, the response to it grows non-linearly. Many low-dose exposures do not cause great harm, but a single large dose can cause irreversible damage to the system, like an overdose of painkillers.

In decision theory, the 1/n heuristic is a simple rule in which an agent invests equally across n funds (or sources of risk) rather than weighting the investments according to some optimization criterion such as mean-variance or Modern Portfolio Theory (MPT), which dictate some amount of concentration in order to increase the potential payoff. The 1/n heuristic mitigates the risk of suffering ruin due to an error in the model: there is no single asset whose failure can bring down the ship. While the potential upside of a large payoff is dampened, ruin due to an error in prediction is avoided. The heuristic works best when the sources of variation are uncorrelated; in the presence of correlation or dependence between the various sources of risk, the total exposure needs to be reduced.
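
A minimal sketch (ours, not from the paper) of the point: with equal 1/n weights, the total loss from being wrong about any single source is capped at 1/n of the stake; the failure scenario is an illustrative assumption:

```python
# Minimal sketch (ours, not from the paper) of the 1/n heuristic: equal
# spreading caps the loss from any single model error at 1/n of the stake.
# The failure scenario (source 0 fails completely) is an illustrative assumption.
n = 10
stake = 1.0

concentrated = {0: stake}                    # everything on the supposedly "best" source
spread = {i: stake / n for i in range(n)}    # the 1/n heuristic

failed = 0                                   # the model was wrong about source 0
print("concentrated loss:", concentrated.get(failed, 0.0))  # 1.0 -> ruin
print("1/n loss:         ", spread.get(failed, 0.0))        # 0.1 -> bounded, recoverable
```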

Hence, because of non-linearities, it is preferable to spread pollutants, or more generally our effect on the planet, across the broadest number of uncorrelated sources of harm, rather than to concentrate them. In this way, we avoid the risk of an unforeseen, disproportionately harmful response to a pollutant deemed "safe" by virtue of responses observed only at relatively small doses.

VI. WHY ARE GMOS TO BE PUT UNDER THE PP BUT NOT NUCLEAR ENERGY?

A. GMOs

Genetically Modified Organisms (GMOs) and their risks are currently the subject of debate. Here we argue that they fall squarely under the PP, not because of the potential harm to the consumer, but because the nature of their risk is systemic. In addition to intentional cultivation, GMOs have the propensity to spread uncontrollably, and thus their risks cannot be localized. The cross-breeding of wild-type plants with genetically modified ones prevents their disentangling, leading to irreversible system-wide effects with unknown downsides.

One argument in favor of GMOs is that they are no more "unnatural" than the selective farming our ancestors have been doing for generations. In fact, the ideas developed in this paper show that this is not the case. Selective breeding is a process in which change still happens in a bottom-up way and results in a thin-tailed distribution. If there is a mistake, some harmful mutation, it will simply not spread throughout the whole system but will end up dying out in isolation.

Top-down modifications to the system (through GMOs) are categorically and statistically different from bottom-up ones. Bottom-up modifications do not remove the crops from their coevolutionary context, enabling the push and pull of the ecosystem to locally extinguish harmful mutations. Top-down modifications that bypass this evolutionary pathway manipulate large sets of interdependent factors at a time. They thus result in fat-tailed distributions and place a huge risk on the food system as a whole. We should exert the precautionary principle here, in our non-naive version, because we do not want to discover errors after considerable and irreversible environmental damage.

Labeling the GMO approach "scientific" betrays a very poor, indeed warped, understanding of probabilistic payoffs and risk management. A lack of observations of explicit harm does not show the absence of hidden risks. In complex systems it is often difficult to identify the relevant variables, and models contain only the subset of reality that is deemed relevant by the scientist. Nature is much richer than any model of it. To expose an entire system to something whose potential harm is not understood because extant models do not predict a negative outcome is not justifiable; the relevant variables may not have been adequately identified.


B. Risk of famine without GMOs

Invoking the risk of "famine" as an alternative to GMOs is a deceitful strategy, no different from urging people to play Russian roulette in order to get out of poverty. While hunger is a serious threat to human welfare, as long as the threat remains localized it falls under risk management, not the PP.

Some attempts have been made to counter GMO skeptics on grounds of "morality". A GMO variety, "golden rice", supposedly delivers needed vitamins to consumers, as if those vitamins could not be offered separately. Aside from the defects in the logic of the argument, we fail to see the morality of putting people at risk with untested methods instead of focusing on (less profitable) but safer mechanisms.

C. Nuclear

In large quantities we should worry about an unseen risk from nuclear energy and certainly invoke the PP. In small quantities, however, it may simply be a matter of risk management. Although exactly where the cutoff lies has yet to be determined, we must make sure threats never cease to be local. It is important to keep in mind that small mistakes with the storage of nuclear energy are compounded by the length of time they stay around. The same reasoning applies to fossil fuels and other sources of pollution.

If the dose remains small in each source, then the response (the damage) will also be relatively small. In this case, we are the ones throwing rocks at the environment; if we want to minimize harm, we should be throwing lots of little pebbles instead of one big rock. Unfortunately, the scientists evaluating the safety of current methods are limited in their view to one source at a time. Taking a systemic view reveals that it is important to consider the global effects when evaluating the consequences of any one source.

VII. PREVENTIVE STRIKES

Preventive action needs to be limited to correcting situations via negativa in order to bring them back in line with a statistical structure that avoids ruin. It is often better to remove structure, or to allow natural variation to take place, rather than to add something to the system.

When one takes the opposite approach, taking specific action designed to diminish some perceived threat, one is almost guaranteed to induce unforeseen consequences. One might imagine a straight line from a specific action to a specific preventive outcome, but the web of causality extends outwards from the action in complex paths far from the intended goal. These unintended consequences can often have counter-intuitive effects: generating new vulnerabilities or strengthening the very source of risk one is hoping to diminish. My coffee cups are fragile, so I put them in a large, heavy-duty box to protect them from shocks in the outside world. When the whole box tumbles, all of the coffee cups smash together. Whenever possible, one should focus on removing fragilizing interdependencies rather than imposing additional structure and activity that will only increase the fragility of the system as a whole.

VIII. FALLACIOUS ARGUMENTS AGAINST THE PP

What follows is a continuously updated list of arguments against the PP that we find flawed.

A. Crossing the road (the paralysis argument)

Many have countered invocation of the PP with "nothing is ever totally safe", or "I take risks crossing the road every day, so according to you I should stay home in a state of paralysis". The answer is that we don't cross the street blindfolded; we use sensory information to mitigate risks and reduce exposure to extreme shocks.

Even more importantly in the context of the PP, the probability distribution of death from road accidents at the population level is thin-tailed; I do not incur the risk of generalized human extinction by crossing the street, since a human life is bounded and its unavoidable termination is part of the logic of the system (Taleb, 2007). In fact, the very idea of the PP is to avoid such a frivolous focus. The error of my crossing the street at the wrong time and meeting an untimely demise does not, in general, cause others to do the same; the error does not spread. If anything, one might expect the opposite effect: that others in the system benefit from my mistake by adapting their behavior to avoid exposing themselves to similar risks.

The paralysis argument is also used to present our idea as incompatible with progress. This is untrue: tinkering, bottom-up progress, where mistakes are bounded, is how true progress has taken place in history. The non-naive PP simply asserts that the risks we take as we innovate must not extend to the entire system; local failure serves as information for improvement.

B. The Loch Ness fallacy

Many have countered that we have no evidence that the Loch Ness monster doesn't exist, and that, since evidence of absence is different from absence of evidence, we should act as if the Loch Ness monster existed. The argument is a corruption of the absence-of-evidence problem (paranoia is not risk management); it is certainly not part of the PP, but rather of risk management.

If the Loch Ness monster did exist, there would still be no reason to invoke the PP, as the harm it might cause is limited in scope to Loch Ness itself and does not present a risk of ruin.

C. The fallacy of misusing the naturalistic fallacy

Some people invoke "the naturalistic fallacy", a philosophical concept that is limited to the moral domain. We do not claim to use nature to derive a notion of how things "ought" to be organized. Rather, as scientists, we respect nature for its statistical significance; a large n cannot be ignored. Nature may not have arrived at the most intelligent solution, but there is reason, based only on statistical significance, to believe that it is smarter than our technology.

The question of what kinds of systems work (as demonstrated by nature) is different from the question of what working systems ought to do. We can take a lesson from nature, and from time, about what kinds of organizations are robust against, or even benefit from, shocks, and in that sense systems should be structured in ways that allow them to function. Conversely, we cannot derive the structure of a functioning system from what we believe the outcomes ought to be.

To take one example, Cass Sunstein, a skeptic of the precautionary principle, claims that agents have a "false belief that nature is benign." However, his papers fail to distinguish between thin and fat tails (Sunstein, 2003). The method of analysis misses both the statistical significance of nature and the fact that it is not necessary to believe in the perfection of nature, or in its "benign" attributes, but rather in its track record, its sheer statistical power as a risk manager.

D. The "Butterfly in India" fallacyThe statement “if I move my finger to scratch my nose, by

the butterfly-in-India effect, owing to non-linearities, I mayterminate life on earth" is known to be flawed, but there hasbeen no explanation of why it is flawed. Our thesis, can rebut itwith the argument that in the aggregate, nature has experiencedtrillions of such small actions and yet it survives. Thereforewe know that the effects of scratching one’s nose fall into thethin tailed domain and thus does not warrant the precautionaryprinciple.

Not every small action sets off a chain of events thatculminates in some disproportionately large-scale event. Thisis because nature has found a way of decorrelating small eventssuch that their effect balances out in the aggregate. Unlikethe typical statistical assumptions of independence amongcomponents, natural systems display a high-degree of inter-dependence among components. Understanding how systemswith a high-degree of connectivity achieve decorrelation inthe aggregate, such that a butterfly in India does not causecatastrophe, is essential for understanding when it is and isn’tappropriate to use the PP.

E. The potato fallacy

Many species were abruptly introduced into the Old World starting in the 16th Century without causing catastrophic environmental consequences. Some use that fact in defense of GMOs. However, the argument is fallacious at two levels:

First, by the fragility argument, potatoes, tomatoes, and similar "New World" goods were developed locally through progressive bottom-up tinkering in a complex system, in the context of its interactions with its environment. Had they had a harmful impact on the environment, it would have caused adverse consequences that would have prevented their continued spread.

Second, a counterexample is not evidence in the risk domain, particularly when the evidence is that taking a similar action previously did not lead to ruin. This is the Russian roulette fallacy, detailed below.

F. The Russian roulette fallacy (the counterexamples in the risk domain)

The potato example, even assuming potatoes had not been generated top-down by some engineers, would still not be sufficient. Nobody says "look, the other day there was no war, so we don't need an army", as we know better in real-life domains. Nobody argues that a giant Russian roulette with many barrels is "safe" and a great money-making opportunity because it didn't blow someone's brains out last time.

There are many reasons a previous action may not have led to ruin while still having the potential to do so; there are many reasons one might have 'gotten away' with actions that carry inherent risk. If you attempt to cross the street with a blindfold and earmuffs on, you may get lucky and make it across, but this is not evidence that such an action carries no risk.

More generally, one needs a large sample to support claims of absence of risk in the presence of a small probability of ruin, while a single "n = 1" example is sufficient to counter claims of safety—this is the Black Swan argument. Simply put, systemic modifications require a very long history in order for the evidence of lack of harm to carry any weight.
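
One back-of-the-envelope way (ours, not from the paper) to see this asymmetry uses the classical "rule of three": with zero failures observed in n independent trials, an approximate 95% upper confidence bound on the per-trial failure probability is 3/n, so very small ruin probabilities can only be bounded with enormous clean samples, while one observed failure refutes the safety claim outright:

```python
# Back-of-the-envelope sketch (ours, not from the paper): the "rule of three".
# With zero failures observed in n independent trials, an approximate 95%
# upper confidence bound on the per-trial failure probability is 3/n.

def rule_of_three_upper_bound(n_trials: int) -> float:
    return 3.0 / n_trials

for n in (100, 10_000, 3_000_000):
    print(f"n = {n:>9,} clean trials  ->  p <= {rule_of_three_upper_bound(n):.1e} (95% CL)")
# Bounding a ruin probability below ~1e-6 requires millions of clean
# observations, while a single observed failure refutes the safety claim.
```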

G. The Carpenter fallacy

Risk managers who are skeptical of the experts' understanding of the risks of biological processes such as GMOs are sometimes asked "are you a biologist?". But nobody asks a probabilist dealing with roulette sequences whether he is a carpenter. To understand the gambler's ruin problem arising from the miscalibration of roulette betting, we know to ask a probabilist, not a carpenter. No amount of expertise in carpentry can replace probabilistic rigor in understanding the properties of long sequences of small-probability bets. Likewise, no amount of expertise in the details of biological processes can be a substitute for probabilistic rigor.

The track record of the experts in understanding biological risks has been extremely poor, and we need the system to remain robust to their miscalculations. There has been an "expert problem": a very poor record historically in understanding the risks of innovations in biological products, from misestimated risks of biofuels to transfat to nicotine, etc. Consider the recent major drug recalls such as Thalidomide, Fen-Phen, Tylenol, and Vioxx—all of these show chronic Black Swan blindness on the part of the specialists. Yet these risks were local, not systemic: with systemic risks the recall happens too late, which is why we need this strong version of the PP.

H. The pathologization fallacy

Often narrow models reveal biases that in fact turn out to be rational positions, except that it is the modeler who is using an incomplete representation. Often the modelers are not familiar with the dynamics of complex systems, or they use Gaussian statistical methods that do not take into account fat tails and make inferences that would not be acceptable under different classes of probability distributions. Many "biases," such as the ones cited by Cass Sunstein (mentioned above) about the overestimation of the probabilities of rare events, in fact correspond to the testers using a bad probability model that is thin-tailed. See Silent Risk, Taleb (2014), for a deeper discussion.


It has become popular to claim irrationality for GMO and other skepticism on the part of the general public, without realizing that there is in fact an "expert problem" and that such skepticism is healthy and even necessary for survival. For instance, in The Rational Animal,7 the authors pathologize people for not accepting GMOs although "the World Health Organization has never found evidence of ill effects," a standard confusion of evidence of absence and absence of evidence. Such pathologizing is similar to behavioral researchers labeling hyperbolic discounting as "irrational" when, in fact, it is largely the researcher who has a very narrow model, and richer models make the "irrationality" go away.

These researchers fail to understand that humans may have precautionary principles against systemic risks, and can be skeptical of the untested for deeply rational reasons.

IX. CONCLUSIONS

This formalization of the two different types of uncertainty about risk (local and systemic) makes clear when the precautionary principle is, and when it isn't, appropriate. The examples of GMOs and nuclear energy help to elucidate the application of these ideas. We hope this will help decision makers to avoid ruin in the future.

REFERENCES, FURTHER READING, & TECHNICAL BACKUP

Bar-Yam, Y., 2013, The Limits of Phenomenology: From Behaviorism to Drug Testing and Engineering Design, arXiv:1308.3094.

Rauch, E. M. and Y. Bar-Yam, 2006, Long-range interactions and evolutionary stability in a predator-prey system, Physical Review E 73, 020903.

Hutchinson, P. and Read, R., 2014, What's wrong with GM food?, The Philosophers' Magazine, Issue 65, 2nd Quarter 2014.

Taleb, N.N., 2014, Silent Risk: Lectures on Fat Tails, (Anti)Fragility, and Asymmetric Exposures, SSRN.

Taleb, N.N. and Tetlock, P.E., 2014, On the Difference between Binary Prediction and True Exposure with Implications for Forecasting Tournaments and Decision Making Research, http://dx.doi.org/10.2139/ssrn.2284964.

Sunstein, Cass R., 2003, Beyond the Precautionary Principle, U Chicago Law & Economics, Olin Working Paper No. 149; U of Chicago, Public Law Working Paper No. 38.

CONFLICTS OF INTEREST

One of the authors (Taleb) reports having received monetary compensation for lecturing on risk management and Black Swan risks from the Institute of Nuclear Power Operations, INPO, the main association in the United States, in 2011, in the wake of the Fukushima accident.

7Kenrick, D. T. and V. Griskevicius (2013), The Rational Animal: How Evolution Made Us Smarter Than We Think, Basic Books.
