
Essay 5

Believers in the Land of Glory

Nature has the deep cunning which hides itself under the appearance ofopenness, so that simple people think they can see through her quite well,and all the while she is secretly preparing a refutation of their confidentprophecies. --- George Eliot, The Mill on the Floss

(i)

Noam Chomsky has emphasized the importance of grammatical systematicity within language learning, for without underlying rules we could never learn to manage the wide variety of sentences encountered in the courses of everyday life. As impressive as the bank of early learning skills related to grammatical parsing must be, we should not neglect the wider ranges of duties to which our limited brains must eventually attend. The mere capacity to convert a bunch of sentences into some kind of mental representation doesn’t advance our biological cause much unless we can swiftly extract useful answers to important questions from these representations through inferential probes of some manner or other. From this point of view, many of our most elaborate computational capacities (plotting a viable path across a terrain, for example) must be acquired to a substantive degree quite early in life, long before the brain is prepared to bother much with the problem of how sentential structure should be deciphered. And these early blocks of inferential skill must embody a fair degree of modular systematicity as well, a crucial fact that underlies our remarkable capacities to transfer fairly elaborate inferential routines wholesale to novel applications, often in an abrupt “Ah ha! Now I see it!” manner. Such capacities for wholesale inferential transfer plainly lie at the core of the conceptual plasticity that most sharply distinguishes homo sapiens from our cousin animals.

It is often astonishing how dramatically little adjustments can shift the underlying strategic purpose of an original block of inferential skills. My favorite example of this phenomenon is discussed at greater length in essay 1: the manner in which a “marching method” scheme suited to plotting cannon ball flight paths can be readily converted to a “shooting method” methodology for computing the static shape of a loaded clothesline. The main adjustment required in this transfer of inferential


technology is that the cannonball scheme must get embedded within an iterative guessing process that eventually hones in on the correct clothesline shape. This example may look rather technical, but closely allied forms of inferential adaptation are quite ancient (which is why there’s a caveman in the picture) and can be encountered within the humblest forms of everyday reasoning. The fact that little syntactic adjustments sometimes mask very major changes in underlying strategy will emerge as a central theme in this essay.
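The marching-to-shooting conversion can be sketched numerically (a hypothetical toy model of my own, not the book’s computation; all constants are invented for illustration): a forward-Euler “marching” routine integrates a hanging-cable equation y'' = c·√(1 + y'²) from a guessed initial slope, and a “shooting” loop bisects on that guess until the far end of the clothesline lands back on its support.

```python
import math

def march(slope, L=1.0, c=2.0, n=4000):
    """Marching method: forward-Euler integration of y'' = c*sqrt(1 + y'^2)
    from x = 0 to x = L, starting at y = 0 with the guessed slope."""
    h = L / n
    y, yp = 0.0, slope
    for _ in range(n):
        y += h * yp
        yp += h * c * math.sqrt(1.0 + yp * yp)
    return y  # cable height at the far end

def shoot(lo=-3.0, hi=0.0):
    """Shooting method: embed the march inside an iterative guessing
    process, bisecting on the initial slope until y(L) = 0."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if march(mid) > 0.0:
            hi = mid   # endpoint too high: steepen the initial descent
        else:
            lo = mid   # endpoint too low: flatten it
    return 0.5 * (lo + hi)

best_slope = shoot()
```

The exact catenary solution on these (invented) parameters has initial slope −sinh(1) ≈ −1.175, which the discrete shooting scheme recovers closely; note that no line of the marching routine changed, only the guessing harness around it.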

However, neither of these basic requirements on early learning (that is, grammatical parsing and inferential capacity) supplies firm assurance that the behaviors of the real world will conform to the computational skills we can readily acquire. The macroscopic world around us is a very complex place and the devoted efforts of our parents and our animal ancestors cannot prepare us fully for the fine-grained descriptive and inferential challenges to be faced in the years ahead (especially as the projects in which we engage become technologically more demanding). Mom and Dad can get us off to a good start, but we must seek our descriptive fortunes ourselves, through plucky and adventitious improvisations upon old policies in manners that generate inferential procedures that may operate in significantly different ways than their parentally installed progenitors. Amongst all else, it is these swift skills in conceptual plasticity that distinguish us most from our animal cousins.

However, adaptive inferential transfer from one topic to another carries significant risks as well, as is evidenced by the many bold schemes that have foundered on the shoals of ill-considered analogy. Prudent conceptual advance requires the assistance of additional management tools, amongst which fall our satellite capacities for critically scrutinizing the descriptive strategies that underlie the inferential gambits we exploit (skills that, as we’ll soon discover, sometimes involve substantial dollops of sophisticated mathematics). The importance of these transition-facilitating tools is central in the essay ahead of us.1

Nonetheless, as long as these supplementary allowances for monitored applicational adjustment are acknowledged, Chomsky’s central observation remains well-taken: swift language acquisition demands a large degree of systematicity with respect to parsing and inferential capacity within learning’s earliest stages. And it


Acquiring the early a priori

may happen that these early preparedness conditions fit together into an integrated set of skills that can be plausibly dubbed the “early a priori”: an entangled set of parsing and inferential capacities with respect to tense, modality and so forth whose governing rules are closely linked to grammatical structure (if we learn “Monty has an attached tail,” infer that “Monty has an attached part”). Indeed, these elaborate patterns of early reasoning with respect to tense, causal connection and so forth comprise the main capacities we look for when we wonder whether a young person has become “competent in English” or not. As such, organized blocks of “early a priori” skills fully merit the intense degree of attention that linguists and cognitive scientists have devoted to their unraveling and characterization.

However, thinking as philosophers, we must avoid inflating these initial footholds on inferential procedure into firm and permanent conceptual necessities, for such extrapolations foster the illusion that we are more stoutly prepared for the world’s computational vicissitudes than we really are. If these deflationary warnings are ignored, we may readily presume that a wonderful discipline of armchair speculation can be founded, devoted to the ageless “metaphysical necessities” allegedly enshrined within our early a priori thinking. As I write, a tidal wave of necessitarian propensities has swept across our subject, establishing an array of dubious enterprises collectively called “analytic metaphysics” by their practitioners, rooted in an excessively rigidified conception of language, where words are supposed to attach to physical world “meanings” in a simple and immutable way that the enterprising philosopher can look past when she directly scrutinizes the vital “metaphysical categories” that allegedly lie behind common terms such as “object,” “attribute,” “cause,” “part,” “whole,” “possibility” and “law of nature.”2 In point of fact, all of these words embody crucial issues of strategic language management that need to remain at the forefront of every philosopher’s attention. The key ambition of this essay is to link the distorted projects of present day “analytic metaphysics” to an underlying failure to appreciate the unavoidable linguistic requirements of the post-“early a priori” restrategizations just discussed.

Here’s a preliminary sketch of how the debate will unfold. Early in life, most of us learn to talk proficiently of the “parts” of solids and liquids (including their “surfaces” and “interfaces”) in innocent manners that rarely get directly challenged within the rounds of everyday affairs. Furthermore, natural talk of “very small parts” provides an easy avenue for introducing recondite dominions of molecules, atoms or “material points” to our children. These “early a priori” proficiencies encourage the


hope that an armchair discipline entitled “mereology” might be founded, devoted to the tenets that “parts” and “wholes” must necessarily obey. Here is how L.A. Paul praises the benefits to be obtained from this dedicated study:

An ontological theory of parts and wholes (a mereology) of physical objects describes more fundamental and more general constructional principles than physics or chemistry does, for it gives general principles that govern all the physical objects with parts, including microparticles, atoms and molecules. For example, chemistry may tell us that the physical structure of a polycarbonate is causally created by arranging elements a certain way, and that its physical parts consist of these arrangements of elements and the attractive forces between them. Mereology contributes the additional claim that the molecule just is (say), the mereological fusion of its arranged parts (the elements and the attractive forces). The polycarbonate molecule is created by this mereological fusion, but not in a causal sense. Rather, it is created in the compositional or ontological sense: it exists when the parts arranged in the right way exist. So the metaphysics tells us what it is to be a sum or physical object composed of these structured arrangements of parts, and thus tells us how the physical object is metaphysically constructed (composed) from its parts. In contrast, chemistry tells us what some of the parts and the arrangements of the parts are for different kinds of molecules, and it also tells us how to causally manipulate the world in order to bring such arrangements into existence. Chemistry tells us how molecules are physically or causally constructed from attractive forces and smaller particles.

On her view, mereology comprises a specimen of conceptual pre-science that must be set in place before scientific inquiry proper can commence (in a moment we shall inspect a more overt articulation of this claim). Analytic metaphysicians generally presume (without evident success, insofar as I can see) that the traditional philosophical riddles involving “parts” and “wholes” will readily unravel once the True Principles of armchair mereology become suitably articulated.

Within real life applicational settings, those “traditional philosophical riddles” (which we shall survey in essay 7) do not comprise mere metaphysical whimsies, for their presence is closely allied to serious mathematical challenges that confront the modelers who attempt to track, with concrete numerical values, the evolving


behaviors of real life materials such as melting ice packs, for such unfolding histories typically involve internal boundaries (= interfaces) that enlarge from nothingness, travel across the floe and eventually disappear. To address these evolving patterns productively, applied mathematicians have gradually recognized, after many long years of hard-to-gain reflection, that the strategic underpinnings of distinctions that we take for granted within ordinary life--specifically, our policies for dividing behaviors into “surface” and “interior” modes--have to be substantially rethought. And they discovered that, at bottom, our successful invocations of these classificatory words (and, mutatis mutandis, “parts” and “wholes” as well) rely upon subtle allocations of linguistic labor in which descriptive responsibilities assigned to “surfaces” and “interiors” secretly operate in a surprising “cooperative family” manner revolving around the concept of energy storage. Simple assumptions of “Fido”/Fido semantic attachment cannot render justice to the subtleties of “meanings” that possess this “cooperative family” character. Yet recent successes in numerically auguring how cracks appear and disappear on the surface of a lake rely precisely upon a deep appreciation of the “cooperative family” policies that instruct us as to how energy allocations should be shifted about within evolving circumstances such as these. Viewed retrospectively, moreover, one can often discern that even our everyday thinking about crack development has long been tacitly monitored by crude forms of similar inferential controls. The final “semantic” outcome is that familiar physical classifiers such as “mass,” “pressure” and “force” must be regarded as covertly encoding information about the physical world following far more sophisticated strategies than “Fido”/Fido thinking anticipates. As these unexpected complexities come to the fore, the mereological verities cherished by analytic metaphysicians fall by the wayside.

However, we must delay further discussion of “surfaces” and “interiors” toessay 7, which can concentrate upon the “cooperative family” techniques required. Inthe present essay, we’ll examine the somewhat simpler anti-apriorist behaviors of theword “cause,” which frequently casts aside its early apriorist chains through directingour attention to novel strategies of data utilization. In so doing, it partially serves asone of those tools of “language management” to which I earlier alluded.


Neurath’s boat

To a philosophical anti-necessitarian such as W.V. Quine, these declinations from a priori grace are only to be expected, as the natural concomitants of the “Neurath’s boat” currents that continually redirect our descriptive vessels towards unanticipated compass points. Unfortunately, Quine’s writings simultaneously introduce a second form of destructive semantic rigidity into our discussion, stemming from the presumption that scientific methodology can be usefully “reconstructed” as a process that begins with the hypothetical introduction of a firm set of “theory T” postulates that implicitly fix the meanings of the parochial terms they introduce. To be sure, writers of this “theory T” persuasion recognize that, in real life, “foundational postulates” of the requisite strength never appear in this stern once-and-for-all fashion, but they also fancy that no serious distortions are introduced through this “convenient idealization.” But this is mistaken: an exaggerated trust in mythological acts of this ilk usually indicates that the writer has not attended sufficiently to science’s substantive skills in blending together partial information tracing to conflicting sources (allied to size and time scales, as canvassed in the previous essay). Avoiding unnecessary reliance upon unproven “posits” through the clever “physics avoidance” exploitation of proven descriptive resources represents a potent characteristic that distinguishes true science from the proposals of cranks, who can lay down “postulates” as ably as the greatest scientist yet care not a whit about building reliability into their schemes.3 The problem with a crank is not simply that nature fails to conform empirically to his provisos, but that considerations of modeling reliability have not been adequately integrated into his speculations.
Yet we hear nothing of these concerns within the methodological narratives provided by Quine or the analytic metaphysicians, who approach the historiography of scientific development with the same naive “man proposes: nature disposes” convictions as a run-of-the-mill pseudo-scientist. And therein hangs much of our tale.

In a closely allied manner, theory T thinkers tacitly assume that all forms of successful science modeling operate in an entirely “bottom up” manner (at least “in principle”4) working entirely with the conceptual tools articulated by their innovators at the hypothetical time of “postulation.” Thereafter they patiently await the verdict of experiment, proposing applicational models framed entirely within the conceptual resources laid down within the syntax of T until such time as a replacement theory T′ comes along (which is why I characterized such opinions as a “man proposes; nature disposes” portrait of scientific endeavor). Within the span of time under T’s


maintenance, the “meanings” of the technical vocabulary utilized will remain securely fixed by T’s postulations. This is the Quinean ersatz “semantic rigidity” of which I earlier complained.

Well, you’ll have a lot of bridges falling down on you over the intervening years if you trust excessively to an incautious methodology of this purist theory T ilk. In practical life, one’s “postulations” should be tempered by a capacity to blend together whatever funds of reliable knowledge you possess, on any scale of time or size, to insure, as best you can, that the numbers you’ll reach in your final calculations are trustworthy. But these prudent requirements of intelligent data amalgamation lead directly to many delicate forms of strategic concern, for, as we noted in the previous essay, funds of knowledge obtained in quasi-independent ways frequently do not mix together well without clever forms of computational amelioration.

In short, the wise employment of theoretical principle within real life applications requires a much greater allowance for the cross-fertilizations of data types that clever strategies of “physics avoidance” offer. We can’t understand the generally improving reliability of science otherwise. Therefore, our suspicions should be alerted whenever we encounter writers who disdain the somewhat nitty-gritty computational considerations emphasized in this book with an airy “We are only interested in science’s ontological posits, not its detailed inferential techniques,” for such presumptions originate in a failure to appreciate the methodological and strategic freedoms that must be left open within a well-devised descriptive practice. Consider, in this light, the following extract from Ted Sider wherein an exclusive cultivation of early a priori propensities becomes converted into a science unto itself--or, more accurately, into a conceptual prescience that needs to be locked in place before science proper can commence:

A realistic picture of science leaves room for a metaphysics tempered by humility. Just like scientists, metaphysicians begin with observations, albeit quite mundane ones: there are objects, these objects have properties, they last over time, and so on. And just like scientists, metaphysicians go on to construct general theories based upon those observations, even though the observations do not logically settle which theory is correct. In doing so, metaphysicians use standards for choosing theories that are like the standards used by scientists (simplicity, comprehensiveness, elegance, and so on).

Emphasizing continuity with science helps dispel radical skepticism about metaphysics; the humility comes in when we remember the discontinuities. Observation bears on metaphysics in a very indirect way,


and it is far less clear how to employ standards of theory choice (like simplicity) in metaphysics than it is in science. But metaphysicians can, and should, acknowledge this. Metaphysics is speculative, and rarely if ever results in certainty. Who would have thought otherwise?... The philosopher must therefore live with uncertainty whether her life’s work is ultimately meaningful--that is the cost of the breadth of reflection demanded by philosophy. Philosophy’s reflective nature is generally a good thing, but the down side is that it can lead to paralysis. Don’t let it. You don’t need to have answers to all meta-questions before you can ask first-order questions.5

Observe that Sider’s alleged “standards of theory choice” are framed entirely upon a “choose your theoretical postulates wisely” plan, without a smidgen of “leave room for prudent physics avoidance.” Furthermore, the “observations” central to his project focus upon the “early learning” behaviors of key words such as “object,” “attribute,” “cause,” “part,” “whole,” “possibility,” or “law of nature,” without evident concern that these words might display somewhat altered behaviors within future stages of conceptual development. Through such breezy invocations, Sider locks in place the two modes of overly wrought semantic fixation of which I have complained, leaving to science only the task of filling in his allotted slots: determining which specific catalogs of “objects,” “properties,” etc., suit our universe best. Like L.A. Paul, he presumes that we can’t properly appreciate the nature of scientific endeavor unless we regard its workings as merely filling in the prior conceptual categories carved out by the metaphysicians. The supposition that anything mattering to metaphysics might be gained by asking the physical chemists down the hall how they handle the “parts and wholes” of a slab of melting ice will likely strike them both as preposterous: “Surely, the question of what these framework concepts signify must be resolved long before we can contemplate the best way of filling in their particulars.”

In proceeding in this blithe manner, Sider and his companions smooth over the detailed strategic developments that might warn them to be more circumspect in their necessitarian speculations.

(ii)

The first thing that we must do in taming this wild-eyed vision of the metaphysical enterprise is to correct some substantive misapprehensions about the varieties of terminology to be encountered within the workings of a sagaciously employed science. Once again, these wrong-headed conceptions descend from the methodological distortions characteristic of Theory T thinking. Specifically, the inherited doctrines that trouble me are these. (1). When scientists “postulate a theory


T,” at least a T that purports to be “fundamental” in some ill-described sense, they simultaneously lay out the “kind terms” that they expect to employ in all of the explanations that fall within the ambit of T. (2). Moreover, all of these “kind terms” fall within the tight orbit of terminology that can be constructed from T’s basic non-logical vocabulary through logical operations.6

In addition, many of our metaphysicians subscribe to the characteristically Quinean presumption that the “postulations” of pure mathematics should be treated on all fours with those of physics, partially because this grouping allows the overall “ontological commitments” of the positing party to be clearly gauged in one fell swoop. Following the “man proposes; nature disposes” precepts of which I have already complained, we are quickly led to the presumption that the posited “ontology and ideology” of science and mathematics will remain fixed until such time as these establishing postulates become altered. Presumption (2) of the previous paragraph then assumes that the full collection of coordinated physical and mathematical “kind terms” P provided by these conjoined postulates will arrange themselves in a natural hierarchy of definability according to how P’s internal syntax gets compounded from T’s founding funds of physical and mathematical vocabulary. Within these tidily delineated orbits, grammatically acceptable combinations can be graded a priori as directly physically descriptive (“the force between these planets obeys Newton’s Gm₁m₂/r² rule”), semi-artifactual (“the planet sits within a parabolic potential well”) and altogether useless (“the planet travels through the position (7, -8 + 6i, -37i)”).

Such static “postulate everything all at once” presumptions figure centrally in the turmoil over “naturalism” that we will visit in Essay 9 (in truth, these concerns strike me as a philosophical parallel to the Dutch tulip mania of 1635).

But for present concerns we need only to recognize that clinging to such limited orbits of “kind terms” can’t render proper justice to the many devices of cross-disciplinary fertilization that physics employs to keep its modes of physical description advancing along profitable and reliable pathways. Such a moral, indeed, framed the central theme of the previous essay, where we investigated some of the ways in which physics cleverly homogenizes data drawn from different scale levels to greatly enhance its modeling reliability. What we must now recognize is that science commonly pins linguistic labels on the important quantities that frequently hide within the convoluted behaviors of target systems by following the strategic pathways that led to their uncovering.

In this essay (and elsewhere in the book) we shall particularly


concentrate upon the strategic advantages offered by factoring--uncovering a set of descriptive parameters (if they exist) that allow us to decompose complicated behaviors into simpler sub-behaviors. Within the theory of numbers, for example, our capacities for decomposing general numbers into prime factors so much simplify proofs in the subject that mathematicians immediately attempt to regain the virtues of unique factorization when studying domains D that lack the property (e.g., the algebraic ring of √−15) by shifting to a richer domain D* to which enough extra numbers have been added that unique factorization has been restored. The hope is to harness the “easy proof” capacities of D* to find theorems that can hopefully be pulled back down into the original D. But once this strategic gambit has proved successful, it allows us to pull down the properties uncovered in D* and apply them meaningfully to the original membership of D proper.
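For readers who want to see the failure of unique factorization concretely, here is a small sketch (my illustration, not the author’s): in the ring of numbers a + b√−15, the integer 16 factors into irreducibles in two genuinely different ways, 2·2·2·2 and (1 + √−15)(1 − √−15).

```python
def mul(x, y):
    """(a + b*sqrt(-15)) * (c + d*sqrt(-15)) = (ac - 15bd) + (ad + bc)*sqrt(-15).
    Elements are stored as integer pairs (a, b)."""
    a, b = x
    c, d = y
    return (a * c - 15 * b * d, a * d + b * c)

def norm(x):
    """Multiplicative norm N(a + b*sqrt(-15)) = a^2 + 15*b^2."""
    a, b = x
    return a * a + 15 * b * b

# Two factorizations of 16:
sixteen_a = mul(mul((2, 0), (2, 0)), mul((2, 0), (2, 0)))  # 2 * 2 * 2 * 2
sixteen_b = mul((1, 1), (1, -1))                           # (1+sqrt(-15))(1-sqrt(-15))
assert sixteen_a == sixteen_b == (16, 0)

# Why both factorizations are into irreducibles: a nontrivial factor of 2
# (norm 4) would need norm 2, and a nontrivial factor of 1 +/- sqrt(-15)
# (norm 16) would need norm 2 or 8 -- or else norm 4, i.e. +/-2, which does
# not divide 1 + sqrt(-15).  A brute-force scan confirms norms 2 and 8 never occur.
norms_found = {norm((a, b)) for a in range(-6, 7) for b in range(-3, 4)}
assert 2 not in norms_found and 8 not in norms_found
```

The repair strategy described above then enlarges this ring (classically, by passing to its ideals) until factorizations become unique again, and pulls the resulting theorems back down.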

One of the central techniques cited frequently in these essays is that of Fourier analysis7--the factoring of the behavior of a vibrating string into independent overtone motions. Mathematicians commonly picture a Fourier transform as a map between a representation of the target system in “position space” terms (i.e., “What is the elevation of the string at position x?”) into “energy (or momentum) space” terms (“How much energy has been allocated to the vibrational mode p?”). The inferential pathways to rich conclusions about the target are more easily found within the latter setting.
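The overtone factoring can be mimicked in a few lines (an illustrative sketch with an invented example, a string plucked at its midpoint): we numerically extract the Fourier sine coefficients b_n that report how strongly each vibrational mode is represented in the string’s shape.

```python
import math

def pluck(x):
    """Shape of a string of length 1 plucked at its midpoint."""
    return x if x < 0.5 else 1.0 - x

def sine_coeff(f, n, samples=4000):
    """b_n = 2 * integral_0^1 f(x) sin(n*pi*x) dx, by the trapezoid rule.
    b_n reports 'how much' of overtone n the shape f contains."""
    h = 1.0 / samples
    total = 0.0
    for k in range(samples + 1):
        w = 0.5 if k in (0, samples) else 1.0
        total += w * f(k * h) * math.sin(n * math.pi * k * h)
    return 2.0 * h * total

# Position-space shape -> mode-space coefficients (the 'factored' description):
coeffs = {n: sine_coeff(pluck, n) for n in range(1, 6)}

# Pulling back: a handful of modes already reconstructs the shape at x = 0.25
# to within 0.01 of the true value pluck(0.25) = 0.25.
recon = sum(coeffs[n] * math.sin(n * math.pi * 0.25) for n in coeffs)
```

For this shape the exact coefficients are b_n = (4/(n²π²))·sin(nπ/2): the even modes vanish and the odd ones decay rapidly, which is precisely why the mode-space description is so economical.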

But the basic strategic logic behind these searches for unfamiliar quantities can be illustrated in an entirely untechnical way. Consider the following sequence of simple situations.

Scenario 1: Suppose that we confront a landscape that contains two perfectly conical volcanos. One readily appreciates that the most computationally convenient policy of describing this tableau is to employ three


different coordinate maps as follows: local map A, which lays down conical coordinates with their origin at the apex of volcano A; local map B, which follows the same policy with respect to volcano B; and global map C, a Cartesian style rectilinear map that locates the two volcanos on a common map where distances between them can be readily calculated. Why is this particular partitioning of descriptive practice advisable? Most likely, because of the way that volcanos grow and the importance of elevation to plant life, the soil and vegetation around each peak will prove to be pretty much the same at the same radial distances rA and rB from each peak (whereas a location’s angular location θA or θB will display no allied regularities whatsoever). This alignment means that for many planning purposes--e.g., plotting how to effectively “gather wild flowers from the hillside”--one only needs to pay attention to the rA and rB coordinate values and can largely ignore their θA or θB values (mathematicians say that our problem “admits separation of variables”--it can be factored into separately addressable questions).
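Scenario 1’s descriptive economy can be sketched in a few lines (all positions, thresholds and vegetation names invented for illustration): two local polar charts plus a global Cartesian map C, with the planning rule consulting only the radial coordinates and ignoring the angles entirely.

```python
import math

PEAK_A = (0.0, 0.0)    # map C (Cartesian) position of apex A -- invented
PEAK_B = (10.0, 0.0)   # map C position of apex B -- invented

def local_coords(p, peak):
    """Convert a map C point into the (r, theta) conical chart at `peak`."""
    dx, dy = p[0] - peak[0], p[1] - peak[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def vegetation(p):
    """Toy planning rule: flora depends only on the radial distance to the
    nearer peak; theta_A and theta_B never enter (separation of variables)."""
    r_a, _ = local_coords(p, PEAK_A)
    r_b, _ = local_coords(p, PEAK_B)
    r = min(r_a, r_b)
    if r < 2.0:
        return "wildflowers"
    if r < 4.0:
        return "scrub"
    return "valley grass"

# Angular position is irrelevant: same r, any theta, same prediction.
assert vegetation((0.0, 1.5)) == vegetation((1.5, 0.0)) == "wildflowers"
```

The wildflower-gathering problem thus collapses from a two-variable question per peak to a one-variable question, which is the whole computational point of the factoring.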

Scenario 2: We now consider a less perfect landscape whose twinned mountains are less perfectly conical due to the influence of various perturbing factors. As long as the degree of disturbance is not too great, it will still be advantageous to utilize the same conical coordinates, despite the fact that they no longer coincide with the physical boundaries of the terrain. And our simple rules for vegetation and soil will still capture the central facts about the terrain (“to first order,” as the mathematicians say), allowing us to introduce corrective factors as “perturbations” upon the simpler distribution patterns. The continued algorithmic advantages of radial factoring persist even within this messier situation.

But now we need to handle both the errors in altitude and the transition region. Such extrapolations become especially fraught if the valley region between A and B becomes filled with fallen rubble, creating a boundary region unlikely to conform to our simple A and B rules of mountainside vegetation (mathematically, the clean separatrix that segregates domain A from domain B in scenario 1 has enlarged into what is technically called a “mushy region”). The most common policy for handling these intermediate dominions in actual practice is to supply a suitable rule that smoothly extrapolates between our A and B rules, based upon their map C separations.8 Such an A and B dependent descriptive policy often introduces substantive referential complications within a physical vocabulary within these extrapolated boundary regions (as we’ll witness when we discuss “pressure” below).

We still wish to “see,” as it were, the perfect conical volcanos embedded within the messier contours of scenario 2. Even in these muffled circumstances, the conical coordinates continue to capture the dominant behaviors on display, although they now hide beneath an obscurant layer of rocky complication. The trick is to handle these irregular variations in height as perturbations upon the underlying factoring behaviors, possibly treated in a stochastic manner.

In a more abstract way, we will naturally factor the “phase portrait” of a dissipative system in the same way (say, a spinner gradually slowed by friction whose “portrait” appears as illustrated, where the two basins of motion correspond to spinners twirled to the left or right). The obvious advantages of employing coordinates centered upon the two “valley basins”9 evident in the diagram remain in play, although the evolving trajectory of any real-life system will trace out only one of these possible curves. Following Poincaré’s celebrated advice on the merits of “qualitative physics,” we will “understand” our system best if we view it as traveling within the mountain, valley and ridge topography of its surrounding possible states (this usually requires that we pick the coordinates of our “possibilities” in a canny way–see Essay 6 for more on this). And these descriptive advantages persist even when we ponder the celebrated rattleback toy, whose trajectory can hop rather chaotically from one valley basin to the other.

In truth, when we apply an eigenfunction decomposition to a violin string, we actually find ourselves in scenario 2 circumstances, because (for reasons outlined in Essay X) real wires of this type don’t fully manage to convert their traveling wave energies into standing wave format. Instead, our tonal decomposition primarily applies to the so-called “Helmholtz envelope” mapped out by the moving string, to which we might add a supplementary stochastic modeling of the likely deviations that the real string will display around its predicted Helmholtz envelope positions.


Nevertheless, no one doubts that the customary tonal spectrum reports upon a genuine set of properties possessed by the string–they capture, after all, the chief characteristics that the sluggishly responding sounding box “sees”–but they possess the same off-center qualities as the fact that a scenario 2 volcano enjoys the property of being shaped almost like a cone.

Of course, at some point of deterioration (perhaps in the scenario sketched as 3), our strategy of factorization will lose its “dominant behavior” punch and we will need to look around for some other strategic handle on the situation. But this is the usual fate of “dominance” within the real world, as overbearing despots often discover: one day you wake up and all of your regal commands have faded to feeble gestures.

Now the linguistic lesson to be learned here is that the syntactic labels we construct for important physical behaviors often derive from the mathematical topography of the general strategy we follow in confronting the physical situation rather than welling up from below, as it were, plucked from the fixed orbits of quantities spun out by the “fundamental postulates” of whatever theoretical framework one is working with. To be sure, we will generally wish to “cross-fertilize” the intimations of strategy we have extracted from other considerations with constraints stemming directly from our “theory,” but this is almost always a process that involves the blending of diverse data drawn from a variety of sources, rather than proceeding according to the monotheocratic convictions of Theory T thinking.

As we shall discuss in a more fulsome manner in Essay 9, the strategy-suggestive mountain peaks and valleys that guide us in our choice of suitable nomenclature to designate a newly discovered physical characteristic may lie very far away in Mathematicsland, in ways that are impossible to augur before they are uncovered. Here’s a simple example. The familiar notion of resistance (as supplied by a resistor) extends in a straightforward, direct manner from individual electrical elements to DC circuits, where the enriched notion of attenuation characteristic is called the circuit’s “impedance.” From an a priori perspective, one doesn’t anticipate that claims of the form “the impedance of this circuit is ½ + i ohms” should make sense, but that is exactly the format we should apply to our circuits once we start running AC currents through them. What is the motive for this strange way of talking? It’s because the attenuation in these circuits is intimately entangled with the time lag, as we quickly realize once we factor the circuit’s behavior into poles and zeros living on the complex plane with a Laplace transform (which represents the best strategic setting for answering all sorts of circuit design questions). Viewed within this faraway setting, a complex number represents the natural device for capturing the codependence we witness, so it is natural to carry the complex nomenclature back to our real-life circuit.
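The entanglement of attenuation with time lag can be sketched concretely. In a series resistor-inductor circuit driven at angular frequency ω, the impedance is the complex number Z = R + iωL; its magnitude scales the current down and its argument gives the phase lag of current behind voltage. The component values below are assumptions chosen so that Z comes out to the text’s “½ + i ohms”:

```python
import cmath

# Illustrative values only: R = 0.5 ohm, L = 1 henry, w = 1 rad/s,
# picked so the impedance reproduces the essay's "1/2 + i ohms" example.
R, L, w = 0.5, 1.0, 1.0

Z = complex(R, w * L)          # series R-L impedance: Z = R + i*w*L

attenuation = abs(Z)           # |Z| -- how much the current amplitude drops
phase_lag = cmath.phase(Z)     # arg(Z) -- the time lag, as a phase angle
```

One complex number thus packages the two codependent pieces of behavioral information that no single real “resistance” could carry.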

Here we might coin a slogan: frequently, effective strategy is the mother of denotation.

A generally pragmatic approach to linguistic meaning is implicit within these ways of thinking: a word manages to attach itself to the world, not through the baptismal intentions or able postulations of its employers, but through the practical entanglements of word usage with worldly affairs that lock into place as we gradually develop improving methods of calculation to forward our practical objectives. If we can perform these feats repeatedly, then in the words of one of my favorite mottos: “a method which leads to true results must have its logic.”10 In this context, the phrase “must have its logic” indicates both that the strategy underlying the computation must roughly accord with the computational opportunities offered by nature and that the central syntactic nodes within the computational stream must encode robust stores of physical information, albeit in potentially convoluted ways. To be sure, able baptismal intentions and able axiomatics can initially situate a network of descriptive words in a provisional position adequate for allowing the hidden hand of calculation to refine their tightening word/world linkages unto a higher standard. In the course of this “tightening,” unexpected requirements of linguistic monitoring often come to the fore, upsetting the apple carts of early a priori presumption along the way.

The fact that the analytical metaphysicians pay little attention to these unavoidable linguistic developments explains why I earlier complained of the “excessive semantic rigidity” ingrained within their thinking.

(iii)

In Essay 1 we observed that, in paradigmatically evolutionary modeling contexts (such as progressive wave motion), the terms “cause and effect” attach themselves quite firmly to the vital physical developments that mathematicians emphasize under the heading of “characteristic flows” (which we shall visit again within the more sophisticated contexts of Essay 8). Barnacle-like attachments of this type stand behind our (correct) intuitions that “causal processes” form an important part of objective nature, contrary to the aspersions of doubters such as Bertrand Russell and Ernst Mach, who contended that “physics has no need for such notions.”11 But we also observed that “cause” semantically behaves like the cat that keeps coming back no matter how earnestly one tries to get rid of it. Thus, even when we resolutely adopt an equilibrium-centered explanatory strategy (whose great advantages in terms of reliability trace precisely to the fact that it studiously avoids tracking causal developments through sagacious injection of “physics avoidance”), we keep finding the word “cause” perched on our applicational doorstep, now focusing on other aspects of the physical circumstances before us.

Why does this happen? Largely because we also employ the terminology to arrange our thought processes in a productive ordering (which is why parents invest so much effort in teaching “cause and effect” to children who would prefer not to have their thoughts so productively arranged, thank you). In the rest of this essay, I will set forth two specific manners in which we employ “cause and effect” discriminations as a method of setting in place clever stratagems for combining different types of physical data together in useful and reliability-heightened ways. In the same directive manner that our section (i) caveman employs, we hope to orient our audience’s attention to the correct ordering in which various available scraps of data can be blended for fruitful purposes. As it happens, the causal claims we employ to these managerial purposes also capture important swatches of experimental data in their own right, but these contents are apt to seem thin and excessively experiment-centered to anyone whose semantic proclivities have been shaped by the fact that our “cause and effect” talk attaches to a deeper set of underlying processes when we speak of wave motion and allied types of robustly evolutionary circumstance.

Is this a wise way to design a language? Probably not, but we must resign ourselves to the fact that we are forever doomed to cobbling together novel computational strategies for dealing with a difficult world, working from a sorry set of inadequate starting materials fashioned within our hunter-gatherer past and through our early a priori instruction.

However this may be, “cause”’s complex semantic career provides us with an excellent window into why the characteristic emphases of our analytic metaphysicians prove so woefully misguided. We will be greatly assisted in this project by the far-reaching studies of my colleague-across-the-hall Jim Woodward, who has documented in great detail the many ways in which experimentally-centered “cause and effect” talk contributes to the architecture of effective science.12 In particular, he focuses our attention upon a certain class of manipulation conditionals of the form “If factor A were to be experimentally altered to condition A*, then a condition B would result” and explicates the standard procedures that diagnosticians employ for extracting “cause and effect” claims from data of this sort.

Here I might remark that Woodward’s classes of conditionals usually include actual and counterfactual reports, for our data sets need to be rich enough to fulfill various directive purposes that we shall outline later. And these data sets must also be carefully framed for the same reasons–we shouldn’t admit any stray experimental conclusion into our framing sets of conditionals, no matter how firm its experimental credentials. The main novelty within this note is to firmly correlate the exploitation of these special Woodwardian data sets with the strategic architecture of certain reliability-enhancing (and “physics avoiding”) ploys commonly adopted within physical practice.

Once this connection is forged, ample documentation of the wayward inclinations of early a priori thinking becomes easy, because Woodward’s observations have commonly been haughtily dismissed by starry-eyed analytic metaphysicians as trivial, beside the point, or circular. Trivial, because his explications don’t penetrate to the robust heart of the causal processes we directly witness when a cataract of water careens along a mountain hillside. Beside the point, because we employ “cause” fruitfully in contexts that plainly do not fit Woodward’s “manipulationist” emphases at all. Circular, because what might a “manipulation” be, except a specific form of “causal intervention”?

In contrast, these same critics frequently favor some adaptation of a point of view first articulated by Nelson Goodman and developed further by David Lewis, Robert Stalnaker and others. It works like this. Let’s begin with the counterfactuals, whose antecedents specify some altered condition A* (“the match is struck”) contrasting with the real-world circumstances A (“the match is not struck”). Build a new set of initial conditions around A* in a “cotenable manner” (that is, we shift to conditions rather like those of A) and then determine whether nature’s laws carry one from these adjusted initial conditions to B (“the match lights”) or not. If so, the counterfactual claim A* → B should be regarded as true; otherwise not. Goodman despaired whether the criteria we employ to construct A*’s initial conditions from those of A could be rendered pellucid, but Lewis and Stalnaker posited that this transition comprises a modal primitive: we natively understand how to evaluate the “closeness” of one possible world to another. Such assumptions allow them to set up a proper logic for reasoning with counterfactuals (the details differ slightly between the two authors), which seems a useful enterprise even if we remain somewhat in the dark about those “cotenability” judgements.

Upon this basis, Lewis then proposes a one-size-fits-all analysis of our causal talk, whose details are less widely accepted than the portrait of counterfactual evaluation just sketched and won’t prove important in the discussion yet to come.

Why do our analytic metaphysicians presume that these considerations establish the poverty of Woodward’s suggestions? They typically presume that Lewis’ studies have shown that our everyday causal claims must be somehow grounded within the metaphysically deeper notion of “law,” in whatever manner that fundamental notion demands further analysis of its own.13 Woodward’s failure to provide any comparable form of metaphysical “grounding” is thus regarded as a glaring inadequacy of his proposals.

But something must have gone amiss in this reasoning because, in the cases we shall now investigate, the primary motive for invoking a salient set of causal claims, backed by an appropriate set of Woodwardian conditionals, is precisely to skirt any dependency upon “laws” of the hypothesized sort. By such “law avoiding” procedures practical science gains considerably in reliability.

Before we look into the reasons why this is so, let me quickly remark on some of the Theory T inadequacies that lie behind analyses of this sort (much of the discussion in the next two paragraphs will rely upon the materials presented in Essay 1 and can be skipped by readers who have skipped those prior materials). Goodman and crew have patently collapsed widely different forms of explanatory structure into what mathematicians categorize as an initial-value problem, whose salient modeling equations (which is roughly what the metaphysicians have in mind when they talk of “laws”) possess an evolutionary character (= are of hyperbolic signature and move the target system forward in time). They likewise presume that the equation’s annexed set of initial conditions reports the facts available on a time-slice of the entire universe (or, more exactly, specifies the system’s current phase upon a so-called Cauchy surface), chosen at whatever time one seeks to make the counterfactual antecedent A true. Conventional boundary conditions are not mentioned at all (or the term is applied incorrectly if they are), and it seems to be assumed that the hypothetical “laws” will induce the appropriate time-like conditions by promulgating the initial conditions forward in such a way that a violin string will remain immobile at the bridge and nut.

Through succumbing to these “reduce every explanatory setting to evolutionary modeling circumstances” propensities, one overlooks the fact that counterfactual claims often make perfectly good sense within explanatory circumstances that are strongly equilibrium-centered (our examples to come are of this type) and throws away the possibility of attending to standard applied mathematics distinctions that are almost certainly relevant to the manner in which we construct concrete A*-favoring situations (example: the effacement considerations that always accompany wise selections of boundary conditions). I submit that we’ll never understand general counterfactual behavior ably if we continue to practice such a coarse form of philosophy of science diagnosis.

(iv)

Turning away from these negative misapprehensions, let us accentuate the positive and inspect the important architectural role that helpful sets of Woodwardian causal claims play in implementing one of the most effective (and widely celebrated within the right social circles!) schemes for avoiding excessive dependence upon lower-scale “law”: Lagrange’s “virtual work” techniques.

We can employ a simple, if somewhat contrived, example to illustrate the technique. Suppose that we have three seesaws linked together in the manner illustrated (seesaw #1 is linked to seesaw #2 with a rigid connecting rod whereas #2 and #3 are connected with a spring). All of the children except the girl on the far left sit on sliders that will become pulled back and forth because of the cross-teeter-totter ties. How will this system behave? If one attempted to answer this question in a bottom-up Newtonian way, one would need to articulate a host of principles that determine how forces operate inside the seesaws and how they interact in turn with the fulcrums. But Lagrange is going to show us how to “walk around” these speculative matters with some easy-to-check large-scale information: seesaws don’t distort (much) under these interactions and children retain their weights. The only laws we will require are that of simple terrestrial gravitation and a Hooke’s law rule for the force generated by a spring when stretched.


What is the trick? Well, it appears immediately intuitive that there should be one (or more) equilibrium configurations open to this arrangement–some relocation of the moveable children to positions where the configuration will remain stable and the overall potential energy captured within the system gets (locally) minimized. Call such a pure mode of potential energy storage a “sweet spot” configuration for the seesaw. A glance at our diagram shows that locating such an arrangement will not prove an altogether easy task. Of course, with real children, there’ll be a lot of random twitching and horsing around that will prevent the apparatus from ever resting in a “sweet spot,” but the key to understanding generic teeter-totter behaviors turns upon our first locating the possible “sweet spot” properties of the composite system. Why is this so? Lagrange appealed to a second intuitive thesis commonly known as “d’Alembert’s principle.” Suppose that we are confronted with a moving seesaw–what must we do to bring the contraption to rest? Obviously, by applying a requisite degree of force to each of our rowdy children’s heads. But how much force will be needed for each child? Figure out how much more or less the children would need to weigh to convert their present locations into a “sweet spot” configuration, and the differences between these statically computed “sweet spot” weights and their actual masses tell you the forces Fi required to bring each child i to a screeching halt. But since the contraption isn’t at rest, each of the children must be presently accelerating with a speed adjustment ai given by Newton’s Fi = mi ai rule. These accelerations, combined with the children’s current positions and velocities, allow us to plot the ongoing motions of the teeter-totters through standard finite difference techniques.
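The finite-difference step just mentioned can be sketched in miniature. The force law below is an invented one-dimensional restoring force standing in for whatever Fi the d’Alembert bookkeeping delivers; the point is only the update pattern: compute a from F = ma, nudge the velocity, nudge the position, repeat.

```python
# One semi-implicit (Euler-Cromer) finite-difference step: given the force
# acting on a body, advance its velocity and position by a small time dt.
def step(x, v, mass, dt, force):
    a = force(x) / mass        # Newton's rule: a = F / m
    v_new = v + a * dt         # update velocity first...
    x_new = x + v_new * dt     # ...then position, using the new velocity
    return x_new, v_new

# Usage sketch (assumed setup): unit mass, invented restoring force F = -x,
# released from rest at x = 1 and stepped forward to t = 10.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = step(x, v, 1.0, 0.01, lambda pos: -pos)
```

Iterating such steps from the current positions and velocities is exactly how the “ongoing motions of the teeter-totters” get plotted in practice, once the forces Fi are in hand.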

In short, the skeleton key to understanding the complete dynamics of our target system in Lagrange’s manner reduces to the task of locating the “sweet spot” characteristics of a large set of similar seesaws loaded with children of different masses–they represent the central mountain peak(s) we want to locate within the usual complexities of their behaviors. This centrality leads us to regard a “sweet spot” trait φ as an important continuing characteristic of our loaded seesaws–they continue to retain φ (if we don’t alter the weights of the children) even when the contraption is not at rest. We regard the various eigenfunction modes of a string in the same way: we picture the wave form in its potential energy extreme–at the turnaround point when it reverses its motion–rather than in the flat configuration when it crosses the axis. But through all of its shifting spatial configurations, it endures.14 Note, as well, that our “sweet spot” configurations represent collective behaviors locked together across the entire system, in the same sense discussed in Essay 4 with respect to the Fourier factoring modes of a vibrating string. And this is why the “sweet spot” rest positions are hard to find. It is easy to find the equilibrium positions of solitary teeter-totters–if the boy on seesaw #1 weren’t connected to seesaw #2 by a connecting rod, then he could easily wiggle his way to balance with girl #1, following the venerable “law of the lever.” And the same would happen within seesaws #2 and #3. But the ties that bind require that the children reach a collective compromise on where they should be seated. Finding that collective resolution, accordingly, requires a search.

How should it be conducted? Here’s Lagrange’s answer. Go back to the basic requirements of a stable equilibrium: any attempt to level seesaw i further by shifting child j’s position a little bit will be answered by forced adjustments in the other children’s positions that will make the overall situation energetically worse. On this basis, let’s begin with an arbitrary trial guess φ0 as to how the children might be positioned and investigate whether we can lower the overall potential energy by wiggling a particular child j one way or the other. If we can, we adopt this altered configuration as a fresh guess φ1 and apply our wiggling test again, focusing now upon some other child. In favorable circumstances, the resulting sequence of improving guesses φ0, φ1, φ2, ... will converge to a “fixed point” guess φ (= the potential energy no longer lowers under our wiggling tests) that will capture one of the hidden equilibria residing in our coupled seesaws.
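This “wiggling” search is, in modern terms, a coordinate descent toward a fixed point, and it can be sketched in a few lines. The potential function below is invented for illustration (two coupled “children” with a spring-like interaction term), not a model of the actual seesaws:

```python
# Assumed illustrative potential: two coordinates, each with its own
# preferred position, coupled by a spring-like term that forces a
# collective compromise (analytic minimum: x1 = 0.25, x2 = -1.25).
def potential(x):
    x1, x2 = x
    return (x1 - 1.0) ** 2 + (x2 + 2.0) ** 2 + 0.5 * (x1 - x2) ** 2

def coordinate_descent(x, step=0.1, sweeps=500):
    x = list(x)
    for _ in range(sweeps):
        improved = False
        for j in range(len(x)):            # wiggle child j one way or other
            for dx in (step, -step):
                trial = list(x)
                trial[j] += dx
                if potential(trial) < potential(x):
                    x = trial              # adopt the improved guess phi_k+1
                    improved = True
        if not improved:
            step /= 2                      # no wiggle helps: refine the probe
            if step < 1e-8:
                break                      # fixed point (to working accuracy)
    return x

phi = coordinate_descent([0.0, 0.0])       # arbitrary trial guess phi_0
```

The sequence of adopted guesses plays the role of φ0, φ1, φ2, ..., and the returned configuration is the “fixed point” φ at which no further wiggle lowers the potential energy.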

Observe that the primary data that we employ in our searches represent hypothetical propositions about what would happen if we were to manipulate the various children in a specific manner:

If the seesaws were configured in position C and if child j were shifted to a new position p, the overall potential energy of the system would be lowered/raised by an amount r.

But all of these conditionals–or, more exactly, slightly tweaked versions of them in which the manipulations are rendered “virtual”15–are quite easy to resolve, and Lagrange’s “virtual work” methodology exploits this ready knowledge to lay down a computational pathway that leads to the desired sweet spot characteristics. Each of these required conditionals reports upon a very specific form of counterfactual alteration of the system, with exactly the “solitary manipulation” characteristics emphasized by Woodward. As such, the entire lot collects into what mathematicians call a “space with an associated norm” (which, in our circumstances, simply reflects our ability to assess the potential energy content of the tolerated variations in configuration16). If this norm is properly coercive, our conditionals will squeeze in on our sweet points φ via successive approximations.

Throughout these essays we shall often encounter similar circumstances where we must squeeze out a sought-after property (if it indeed exists) through an infinite sequence of often irregular approximations. The properties φ that emerge at the end of this (very) long trail frequently display considerably different final characteristics from the along-the-pathway traits φi we employ to tease them out (stock illustration from Fourier analysis: perfectly smooth traits that produce disconnected sawtooth curves with big jumps in them in the final limit). Such considerations show why the “fixed orbit of kind terms” notion fails to render justice to the actual strategic policies that working physics adopts to locate the quantities of greatest interest to it.
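The stock Fourier illustration can be made concrete: every partial sum below is a perfectly smooth trigonometric function, yet the limit they squeeze in on is the discontinuous sawtooth f(x) = x on (-π, π), which jumps at the endpoints. The series used is the standard one for that sawtooth:

```python
import math

# Partial Fourier sum for the sawtooth f(x) = x on (-pi, pi):
#   f(x) ~ sum over n of 2 * (-1)^(n+1) * sin(n x) / n.
# Each partial sum is smooth; the limit has jumps of size 2*pi.
def sawtooth_partial_sum(x, n_terms):
    return sum(2 * (-1) ** (n + 1) * math.sin(n * x) / n
               for n in range(1, n_terms + 1))

x = 1.0
approx = sawtooth_partial_sum(x, 2000)   # smooth trait phi_i, near f(x) = x
```

The along-the-pathway traits φi (the smooth partial sums) thus differ in kind from the discontinuous limit property φ they converge to, which is exactly the moral of the paragraph above.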

Observe as well that the Woodwardian data we employ are largely large-scale in character: seesaws (scarcely) bend or stretch when loaded with children; fulcrums (scarcely) compress, etc. Physicists call such upper-scale provisos “constraints,” and their mathematical characteristics are further detailed in the book’s postscript. Of course, from a molecular point of view, these constraint conditions can’t be fully satisfied and, within a truly Newtonian treatment, the molecular forces that valiantly resist juvenile compressions within a wooden beam must be explicitly delineated, employing so-called “constitutive modeling equations” that relate stress to strain in the timber (such “constitutive equations” roughly capture the sorts of data that our metaphysicians have in mind when they talk loosely of “laws”). But upon a macroscopic scale of observation, those seesaw timbers don’t look as if they are altering shape and, through employing our “virtual manipulation” techniques, we implement our hunch that “such puny effects can’t make any significant difference to overall teeter-totter behavior, so let’s work out a ‘physics avoidance’ policy that exploits our upper-scale knowledge that timbers scarcely compress to good advantage.” And that is exactly what Lagrange does, working from a very trustworthy allotment of upper-scale Woodwardian manipulations. For these top-down evasions of purist bottom-up modeling policies, civil engineers continue to drink toasts to our Frenchman’s memory.

The top-down character of Lagrange’s strategies becomes even more evident when we investigate the “multiplier” technique he employs to calculate internal stresses when needed. But to avoid piling technicalities upon technicalities, I’ll delay discussion of these to an appendix.

All of this brings us ’round to my original contention that the analytic metaphysicians have gotten the dependencies of counterfactual discourse almost completely backwards, at least in circumstances such as these. We plainly do not rely upon some hypothetical set of lower-scale laws to ground our counterfactual claims; we employ the latter precisely as a means of evading substantive reliance upon such speculative matters. How, then, do we ascertain which of our Woodwardian counterfactuals are true and which false? In most cases, from direct experiment and by simple induction thereupon (plus some “virtual manipulation” twiddling over those experiments). Indeed, even “experiment” seems a pretentious term for codifying the data that we have exploited in the circumstances above, for we’ve known from earliest days that timbers rarely shrink or bend and that children can slide only laterally along seesaws.

In any case, the conceit that counterfactual mastery conceptually depends upon either laws or causes should have always seemed a bit strange. Children master the construction “If I had that big red sucker, I’d be a happier boy” long before they handle “cause” or (God help us) “law” with any proficiency.

(v)

We can observe another collection of Woodwardian conditionals performing their “set up a wise explanatory architecture” duties if we inspect a multi-scalar situation of the sort discussed in the previous essay (although I won’t require any of the details of that discussion). Suppose that we wish to calculate what is likely to happen if a strong shock wave hits a large block of subterranean granite: what sort of internal waves will form? Will the rock become permanently damaged, etc.? At first glance, this looks like a straightforward initial-boundary value problem, of the same sort to which Theory T thinkers tend to assimilate all explanatory forms (recall their bias for squeezing all counterfactuals into this narrowed format). That is, we first describe the incoming wave and the rock in its pre-stressed state and proceed from there. But, as stated, this modeling project only makes formal sense if we work with a modeling that operates on a single scale level only–either (1) employing the standard modeling tools of simple continuum mechanics in a decidedly macroscopic manner or (2) treating the whole complex affair as a problem in bottom-up molecular physics (which plainly will not be feasible in any practical sense).

To implement our first modeling recipe, we require a rule for how granite responds to stresses and heat (including the very high values that accompany shock waves). Within a range of temperate cases, reasonable principles of this sort can be extracted from laboratory experiment upon smallish samples of granite. But we know that some of the intervening structures within granite become damaged under the extreme conditions contemplated: some of its component mineral crystals may plastically flatten out into the layered sheets one witnesses in metamorphosed granite (otherwise known as gneiss) or, deep within these little structures, the dislocations may pile up along the interfacial barriers and render the material far more brittle than it otherwise would be. Material scientists now possess pretty good localized models of how these structural changes occur, largely employing the tools of standard continuum mechanics, but adjusted to suit problems posed upon a mesoscopic level (= betwixt macroscopic and molecular scales). But large accumulations of subterranean damage of this type can radically alter the basic parameters involved within a macroscopic approach to wave movement (example: disturbances move through isotropic materials such as normal granite in much simpler patterns than through anisotropic stuffs such as gneiss, so determining where and when such a “phase shift” in the governing parameters might arise becomes critical). For many years (Duhem was centrally involved in these efforts), material scientists tried to develop single-level macroscopic rules for plastic flow and phase change, but the results often proved unsatisfactory. Advances in modern computing have made feasible various tactics for directly exploiting our rich mesoscale knowledge of granite (rather than seeking completely reliable upper-scale stress/strain rules in the old-fashioned manner), and great successes have been achieved along these lines.
But the implementation of any of these schemes requires a certain computational architecture, in which one level of modeling feeds its conclusions to the other size scales under consideration (e.g., if a local investigation of a granite crystal indicates that a strong directional shearing might occur, it sends this data to the upper-scale wave modeling, which adjusts its parameters accordingly). And with this "computational architecture," Woodwardian conditionals enter the scene in a very pronounced way.

Before we look into these matters, let us first note how careless about formal matters is the Theory T thinking that makes analytic metaphysicians prone to viewing our problematic as a simple "initial value problem" at all. Suppose that we somehow have all the "laws" of molecular physics available to us; how do we set proper "initial conditions" for a problem such as this? First consider the incoming shock wave. How do we set initial conditions for its current state? Well, we generally know how to represent its position and momentum on a macroscopic scale, but not at a molecular level at all (indeed, the shock is commonly modeled as a wave front singularity, which indicates the dramatic amount of data compression involved in our knowledge of its position). And the same holds for the present conditions of the granite as well: we normally only know the values required for a well-posed macroscopic modeling: a smeared-out assessment of the temperatures and pressures to which the rock is currently subject. Ascertaining microscopic conditions suitable for a well-posed molecular modeling is a different kettle of fish, and we require a procedure that can somehow convert our upper-scale "initial conditions" into data suited to our lower-scale modeling.

Sloppy thinking can make one suppose that this translation procedure should prove relatively easy: "Oh, since we know the molecular materials of which our rock is composed, we should be able to figure out how it is likely to bunch together and, upon this basis, we should be able to develop a pretty good set of microscopic correlates to our macroscopic knowledge." But a fleeting memory of one's childhood rock collection reveals the flaw within this thinking. There are a wide variety of rocks of essentially the same material composition as granite (they all derive from the same volcanic magma) that possess quite different material properties on a macroscopic scale (compare our granite with basalt, pumice, obsidian or gneiss). These significant behavioral differences trace to the manner in which the magma was tossed about and cooled long ago and the history of how the solidified glob has been stressed and reheated subsequently. The result is that the dominant mesoscopic structures that distinguish granite from its cousin rocks arise, not because of any inherent predilection on the part of its component molecules to favor such patterns, but because the macroscopic rates and pressures at which the magma originally cooled have left behind fossilized fingerprints of their formative operations within the solidified rock. To be sure, if the original magma had been granted space enough and time, it would have developed into a colony of the orderly crystal whiskers that one can sometimes grow in favorable conditions. But it didn't; the cruel intervention of rapid cooling and internal entrapment halted its chemical components early in their progression towards the beatific equilibrium of a perfected molecular lattice. The upshot is a host of "frozen disorder" structures within our rock, characteristically arranged at scale levels that correspond to the interventionist level at which outside environmental factors had arrested the march towards true equilibrium (e.g., for a little grain of feldspar, the size scale at which crowding by its independently nucleating neighbors begins to inhibit its capacity to form into an undistorted crystal). The fact that normal solids contain so many "fossils of historical formation" explains why working out rules for their thermodynamic behavior is considerably harder than for a simple gas or a perfect crystal.

So our earlier hunch that "since we know the molecular materials of which our rock is composed, we should be able to figure out how it is likely to bunch together" is plainly mistaken, and our only hope for establishing the internal conditions of our granite from proper "initial conditions" would be to go back to the primordial days of its volcanic manufacture and trace those evolving histories through. Obviously, this would represent a very tall modeling order indeed, and the conclusions we would reach would be highly speculative and unreliable.

How do we evade these problems within a modern multi-scale treatment? Answer: the only part of the modeling that gets set up as a proper initial value problem is the macroscopic part suited to the smeared-out initial conditions that we actually know (including describing the shock as an incoming singular surface). We then calculate a preliminary appraisal of the altering stress and temperature conditions across our rock, relying upon a simple "elastic behavior" rule for the granite's behavior. But we then send the results of this preliminary assessment down to localized models of, e.g., the individual grains within the rock matrix. At this level, we repose our query as a pure boundary value problem with no mention of time whatsoever: if the bounding environment of our little mineral changes from X to Y, into what new equilibrium state will the grain adjust? If the upshot is a simple elastic shearing, we do nothing, but if we instead decide that an irreversible layering of the grain is likely, then we may need to send a message to our original modeling asking it to modify its modeling parameters from isotropic to anisotropic. In fact, even at the level of the granite grain, we probably can't decide whether the flattening will prove irreversible or not, so we now need to set up several additional layers of local lower-scale investigation until we reach the level where the dislocations become the key objects of interest (see Essay 4 for more on all of this) and where the decision about "irreversibility" hinges upon how they will move within the local stress environment they were assigned within the modeling at the next scale up. Computationally, we send these little messages back and forth between our various scale-linked submodels until a self-consistent solution is reached, at which point we'll know how our wave will progress through the rock. Through such a modeling architecture, we can reach much better results than were obtainable through old-fashioned single-level techniques, because we are directly availing ourselves of our rich knowledge of the lower-scale structures to be found in granite. Rarely must these downward-going probes reach the lowest scale of molecular behavior; often potential dislocation movement is the critical factor we need to check.
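The back-and-forth "message passing" just described can be pictured in a few lines of code. What follows is only a schematic sketch under invented assumptions (toy stress numbers, a single isotropic/anisotropic switch, one mesoscale layer), not any actual materials-science software:

```python
def macro_wave_solve(params):
    """Upper-scale step: estimate the stress each region of rock sees,
    given the current continuum parameters (a crude stand-in for a
    genuine macroscopic wave solver)."""
    base = 1.0 if params["symmetry"] == "isotropic" else 0.6
    return {region: base * load for region, load in params["loads"].items()}


def grain_equilibrium(stress, threshold=0.8):
    """Mesoscale check, posed as a boundary value problem with no time
    variable: does a grain placed in this stress environment merely
    shear elastically, or flatten irreversibly?"""
    return "irreversible" if stress > threshold else "elastic"


def run_to_self_consistency(loads, max_rounds=10):
    """Pass messages between the scale-linked submodels until no
    further parameter changes are requested."""
    params = {"symmetry": "isotropic", "loads": loads}
    for _ in range(max_rounds):
        stresses = macro_wave_solve(params)                    # macro level acts
        verdicts = {r: grain_equilibrium(s) for r, s in stresses.items()}
        if "irreversible" in verdicts.values() and params["symmetry"] == "isotropic":
            params["symmetry"] = "anisotropic"                 # mesoscale talks back
            continue                                           # re-run the upper scale
        return params["symmetry"], verdicts                    # self-consistent
    raise RuntimeError("no self-consistent solution reached")


symmetry, verdicts = run_to_self_consistency({"west": 0.5, "east": 1.2})
```

In a real treatment each "grain" check may itself open further nested layers (down to dislocation movement), but the control flow remains this same loop of downward queries and upward parameter revisions.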

But upon what basis do we determine how this "architecture" of modeling dependencies should be assembled? Here's where Woodward's characteristic collections of manipulation conditionals enter the picture: our modelers must decide, usually on the basis of direct empirical tests, what are the likely effects upon a scale level ΔL due to a dominant behavioral change (= "manipulation") upon the appropriate scale level ΔL* just above. These choices of "cause and effect" dependency reflect the hierarchy of important levels of intermediate structures found within our rock and thus allow us to set up a well-posed modeling problem without requiring the impossible-to-obtain data that a genuine single-level initial value problem would demand.

In other words, a reasonable approach to our original "what happens to the granite?" problem involves computational arrangements possessing a hybrid evolutionary/equilibrium character, stitched together coherently via a multi-scalar architecture that directly reflects, courtesy of our Woodwardian facts, our empirical knowledge of the hierarchy of downward-going dependencies that reign within the material. The analytic metaphysics literature is full of murky discussions of whether, e.g., granite behaviors are "grounded" within their molecular underpinnings or somehow prove "emergent." I rarely understand what they are arguing about, but it would surely help matters if such issues were prosecuted with the same attention to the computational engagement of modeling components that applied mathematicians customarily apply. Through such keener eyes, one can easily determine that a successful treatment of the subject is likely to require direct input from mesoscale experimental determination.

So here is how our philosophical scorecard stands. (1) Analytic metaphysicians reject the salience of Woodward's studies on the grounds that they fail to reveal the "grounding" of our counterfactual claims and, upon this basis, direct our attention to a seriously deficient understanding of explanatory structure. (2) Woodward, striving instead to capture real-life practice in a directly descriptive manner, alerts us to some central, but often unnoticed, diagnostic tools that we regularly employ in devising wiser forms of investigative architecture.


(vi)

Many of the computational methods that produce accurate and reliable answers are rather sophisticated from a strategic point of view, and it is not surprising that, historically, working scientists have often cobbled their ways to rough implementations of such explanatory forays without a full recognition of the investigative innovations they have tacitly adopted. As this happens, conceptual–and even outright philosophical–confusions commonly arise due to these mismatches between an agent's actual inferential skills and his/her own portrayals of what they have been doing. A set of diagnostic tools such as Woodward's manipulation conditionals has truly proven its worth, in my opinion, if it allows us to retrospectively understand why these characteristic confusions might have arisen as a natural collateral effect of intellectual advance. Only in this way can we effectively address the worries about the "rationality of scientific progress" posed by Thomas Kuhn and others, for such critics characteristically regard the side effects as more important than the computational gains. Although Woodward's suggestions haven't been extensively employed to these ends, they can assist greatly in unraveling key historical controversies, as I shall briskly illustrate in the appendix.

Let us instead return to the foibles of exaggerating the permanence of our early a priori patterns of thought in the "pre-scientific" manner advocated by Ted Sider and L.A. Paul in section (i). I remarked that such attitudes derive from a lethal combination of two popular varieties of ersatz "semantic rigidity": that of presuming that "fundamental theories" implicitly settle the denotations of the "kind terms" that can be employed under their aegis and that basic classificatory categories such as "cause" and "if ... then" get their extensions and truth-conditions fixed in some early stage of pre-scientific determination. Both opinions overlook the ongoing need to tailor our inferential policies to suit the strategic opportunities that nature sends our way, and these adaptations generally pull the syntax involved into tighter alignments with external circumstance. In order to do so, we require vocabulary to serve in a partially managerial capacity, in the sense that we often need to direct an audience's attention to a particular means of piecing together a novel effective computational policy. This is the chief purpose to which the caveman of section (i) employs the word "cause": to outline the order of steps to be followed in computing the shape of a loaded clothesline. He tells his aboriginal companions: "On the clothesline side of this diagram provided, think of the starting position and slope on the left-hand side as operating in conjunction with gravity to cause a curvature in the next section of rope to the right." Through this brisk appeal he can quickly transfer a familiar reasoning appropriate to cannonball flight into a new applicational setting. As this happens, the evolutionary "causal process" significance that attaches to "cause" within the original setting happens to get left behind and is replaced by a more abstract notion of "locally reduces the energy norm attaching to the system"–a notion of genuine physical import, to be sure, but rather different from the "causal process" employment from which it sprang.
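The caveman's directive is, in effect, a marching method joined to a "shooting" adjustment. Here is a hedged sketch (the uniform-load rule, span, and step counts are invented for illustration; a uniform load per unit length yields the classical catenary): march rightward from a starting position and slope, letting gravity set the curvature of each next little section, then adjust the unknown starting slope until the computed rope lands on the far post:

```python
import math

def march(slope0, w_over_h=1.0, length=2.0, steps=2000):
    """March left to right: the position and slope entering each little
    section, together with gravity, "cause" the curvature of the next
    section (the catenary equation y'' = (w/H) * sqrt(1 + y'**2))."""
    dx = length / steps
    y, yp = 0.0, slope0
    for _ in range(steps):
        ypp = w_over_h * math.sqrt(1.0 + yp * yp)   # gravity sets curvature
        y += yp * dx                                # advance the height
        yp += ypp * dx                              # advance the slope
    return y                                        # height at the right post

def shoot(target=0.0, lo=-5.0, hi=0.0, tol=1e-9):
    """The shooting step: bisect on the unknown starting slope until the
    marched rope lands on the right-hand attachment point."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if march(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

slope = shoot()   # close to -sinh(1) ≈ -1.175, the exact catenary value here
```

The inner loop is exactly the cannonball routine (current state plus force yields the next segment); only the outer bisection over the free slope is new, which is why the transfer of inferential skill comes so cheaply.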

In an allied way, we employ a well-chosen set of Woodward's manipulated cause-and-effect claims to select the downward-aiming submodelings we plan to utilize in our computational scheme and to arrange them in the proper computational order. But these same arrangements carry physical significance as well, for we are essentially mocking up the main stages of energy transfer that gradually convert much of the directionally coherent macroscopic work of the original eastward-heading shock wave into increasingly incoherent forms of energy, finally bottoming out as random molecular motion (= heat; see the discussion of "energy cascades" in Essay 4 for more on this). So we will want to mimic this downward-trending hierarchy of energy degradations in our reasonings, because both totally ordered motion (our shock wave) and totally random motion (heat) can be captured with a relatively small number of descriptive parameters, whereas halfway houses such as plastic flow or dislocation movement require many more. So our multi-scalar schemes can gain greatly in efficiency through concentrating initially upon the order-preserving patterns that usually prevail upon the macrolevel and opening up the lower-scale degrees of freedom associated with material damage only as needed.

This is why "cause" behaves so wantonly in its denotational associations: once ensconced in a concrete applicational setting, it aligns itself with physical significance readily enough, but it simultaneously serves as one of the opening wedges (= "leading principles" in C. S. Peirce's sense) that pulls our language into that setting in the first place. For this reason, the dependency relationships we highlight with "cause" in these architectural employments should be carefully framed–we shouldn't cite any old dependency relationship that might be legitimately considered a "cause and effect" connection within another applicational context (that is, the various "cause and effect" claims we group together should act as a "cooperative family" in the sense described in other essays in this collection). For such reasons, various writers have declared that our judgements of "cause and effect" are "interest relative," but I find such a gloss misleading. It isn't to our "interests" that "cause" usually tailors its varied ministrations, but to the varied computational opportunities that nature makes available to us (such as the fact that children don't internally damage seesaws much, allowing us to describe the apparatus in large-scale terms).

Unfortunately, orthodox philosophical approaches to "semantics" haven't found ways to incorporate managerial function of the sort witnessed here into their concepts of "word meaning" (even though those architectural utilities appear to represent the strongest invariants that the term preserves across its ever-multiplying sets of employments). Certainly the "rigid semantics" assumption that all of the truth-values that will ever attach to the word must be fully fixed through our processes of early a priori instruction should seem almost laughable and a great abuse of the normal critical utility of the term "truth value." An example from Postscript 2 may make the point. The reasoning considerations outlined there that allowed the early scientific employers of the term "pressure" to extend its descriptive reach from wall pressures to internal pressures also failed, in a very clear sense, to install firm truth-values upon a wide range of "pressure" assertions, although the scientists in question did not recognize this. And this is exactly how engineers report such facts to their physics classes: "In contexts where one only considers incompressible flows, all claims about 'the absolute pressure' lack coherent truth-values." By insisting that all such sentences should "possess truth values" once a certain level of linguistic proficiency has been reached, we rob a very useful discrimination of its real-life utility.

To be sure, the contrasts between the forms of physical significance that "cause" acquires within its different applicational settings can appear so vast that we may decide, after the fact, that "cause" might not have proved the optimal word of choice for shifting our audience's algorithmic horizons in the desired way. Perhaps our caveman might come to suffer pangs of semantic conscience of this variety later on. Granted: we can burn our bridges after we cross them, but they're still required to get across the river in the first place. For we are perpetually doomed to cobbling together effective computational policies founded upon the peculiar allotment of imperfect reasoning skills we have acquired from our biological heritage and early childhood instruction (drum roll; enter Neurath's boat, stage left).

(vii)

Within their panegyrics, the analytic school views its own endeavors as a triumphal return to the "metaphysics" of days of yore, after suffering through a twentieth-century dark age in which these noble endeavors were squelched by puritanical positivists. This self-assessment might apply with some justice if we confine our attention to the Scholastics of medieval times, but not if we embrace venerable "metaphysical" names such as Descartes, Leibniz, Kant, Peirce or Cassirer. For all of the latter, in different ways, were concerned with the perverse and often puzzling policies we must follow to persuade mathematical physics to disgorge numerical computations that can tell us something useful about the opaque universe spread before us. Descartes, for example, was a scientific realist in any reasonable sense of the term, but he also believed that our computational abilities to follow physical process by mathematical methods were limited and confined to special opportunities in which the physical world accidentally falls into matching accord with the representational structures available to us within mathematics. As detailed more fully in Essay 9, he believed that, as coarse finite beings, we are incapable of following the particles of a fluid through a sharp constriction in any detail (he contended that they instead entered a zone of "indefiniteness" with respect to our inferential capacities) and that we could only pick up the descriptive threads on the other side of the pipe, relying upon the conservation of bulk mass and momentum on the neighboring sides of the pipe (the connection with the problems of Bernoulli's principle discussed in Postscript 2 should seem obvious). In this pessimistic assessment, he was hampered by not having the infinitesimal calculus available to him but, even with such resources, we still require today a certain degree of allied "cooperative family" interdependence to carry a fluid system past the counterflow region of a sharp restriction (see Essay 7 on this).
Likewise, as noted in Essay 2, many of Leibniz's metaphysical worries about the "labyrinth of the continuum" are closely allied with the troublesome descriptive overreach typical of calculus-based physical models (essentially the same problems we treated under the heading of "the greediness of scales" in the eponymous chapter). And so it goes with the other historical metaphysicians I have mentioned.

In blithely presuming that we "posit mathematics" on exactly the same static footing as our "foundational" set of "physical laws," our analytic philosophers have pushed all of these significant dimensions of traditional concern off the table of metaphysics, leaving behind only wan worries about whether we might "avoid ontic commitment to abstract objects" by engaging in strange policies of grammatical circumlocution. This, I submit, is not the sturdy stuff of which a sound metaphysics should be comprised. See Essay 9 for more on this theme.

Insofar as I can determine, few within the analytic school recognize how dramatically they have reshaped the playing field of traditional metaphysical concern. In a nice stroke of unintended irony, Sider extracts the title of his chief doctrinal exposition from a famous quotation by Galileo in The Assayer:

Philosophy is written in this grand book — I mean the universe — which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.17

In point of fact, perfect triangles have proved a great nuisance to mathematical physics as it has developed since Galileo's time, because such structures do not accord well with the descriptive tools of the differential calculus. For example, suppose that we had such a perfect triangle made of iron, say, and pulled on two of its faces as illustrated. The normal differential equations of elasticity demand an infinite stress in the sharp corner. The easiest (but not the optimal) way of evading this difficulty is to insist that such figures cannot exist–that surface tension effects will immediately round off any potential sharp corner before it actualizes. On the other hand, such "impossible" figures offer a fair number of computational advantages, such as the fact that we can sometimes find "exact solutions" that make good sense everywhere except the corners and which offer valuable blocks of "analytical" information that simply cannot be obtained from raw numerical methods. Furthermore, a natural strategy for approximating the stresses within a complex shape such as the notched pie plate illustrated is to decompose (or "tessellate") the object into an array of flat triangles, in a manner pioneered by aeroscience engineers in the 1940s. So Galileo's "triangles" nicely illustrate the severe tensions that often arise between the conceptual machinery with which we can reason most handily (mathematically perfect triangles) and the physical structures that nature herself appears to tolerate. Because of the needs of the airplane industry (amongst others), applied mathematicians eventually uncovered subtler strategies for resolving our "infinite stress in a notch" problem, wherein fancy functional analysis tools are used to impose better "cooperative family" behavior upon our little triangles (details will be provided in Essay 7). But many of the "facts" that we learned about triangles in the happy, early a priori days of yore need to be reevaluated in this rethinking.
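The "infinite stress in a notch" behavior can be made concrete with a standard back-of-envelope model. In the simplified antiplane-shear (Laplace) idealization (a deliberate simplification on my part; full plane elasticity shows the same qualitative blow-up with messier exponents), the field near a notch of opening angle α > π scales like r^(π/α), so the stress, being its gradient, scales like r^(π/α − 1) and diverges as the corner is approached:

```python
import math

def notch_stress(r, alpha):
    """Model stress magnitude at distance r from a notch of opening
    angle alpha (radians). In the antiplane-shear (Laplace)
    idealization the field scales like r**lam with lam = pi/alpha, so
    its gradient -- the stress -- scales like r**(lam - 1), which
    blows up as r -> 0 whenever alpha > pi (a re-entrant corner)."""
    lam = math.pi / alpha
    return lam * r ** (lam - 1.0)

# A right-angled re-entrant notch (alpha = 3*pi/2, so lam = 2/3):
# the stress grows like r**(-1/3) as we close in on the corner.
for r in (1e-1, 1e-2, 1e-3):
    print(notch_stress(r, alpha=1.5 * math.pi))
```

Each tenfold approach to the corner multiplies the model stress by 10^(1/3), which is why mesh refinement alone can never "converge" at the notch and subtler weak-solution machinery becomes necessary.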
Such examples nicely display the adaptive concerns that often bring long-suppressed and subtle misalignments within our descriptive policies to the surface. These conceptual tensions have lain latent in our practices since our earliest days of conceptual instruction, but their emergent clashes and paradoxes are not to be cured through traditional armchair musings on "misused meanings." Indeed, such conflicts rarely arise from words being used wrongly, but because the natural processes of word development carry us helplessly into unanticipated applicational clashes.18 And their proper cures frequently require much deeper levels of thinking about the strategic guidance that mathematical thinking provides within our evolving descriptive practices. How should different forms of coordinate representation relate to one another? How does an abstract entity such as a differential manifold make its presence evident to us? Why do mathematical singularities often serve as the most potent repositories of physical information? Why do asymptotic expansions often supply the best answers to our "what is the dominant behavior witnessed here?" questions? All of these questions relate intimately to the issues to be discussed in Essay 9 and none of them accept trivial answers. When Gauss complained about the blinkered necessitarian thinking of his own day in these terms--

But, methinks, in spite of the meaningless Word-Wisdom of the metaphysicians, we know too little or nothing at all about the real meaning of space to stamp anything appearing unnatural to us as Absolutely Impossible.19

--he had in mind an unwillingness to think through some of the underlying strategic issues just mentioned; he was not merely articulating the coarse conclusion that "the properties of space must be empirically established." Indeed, Gauss himself was an early pioneer in recognizing some of the central specimens of "cooperative family" dependence highlighted in this book, which demand that we evaluate familiar descriptive strategies from a deeper--and more "philosophical," in its proper sense--perspective. But most early a priori verities collapse under the withering eye of this probing scrutiny.

In any case, the methodological concerns central both here and within much of traditional metaphysics share little affinity with the "anti-metaphysical" suppressions of a Rudolf Carnap or A.J. Ayer. Our worry is rather that, in neglecting the intricate puzzles of real-life inferential technique, the analytic metaphysicians have whittled the glories of venerable inquiry down to puny dimensions, and that many robust metaphysical babies have gotten cast out along with the philosophy of science, mathematics and language waters in which they formerly bathed.

In the final analysis, it shouldn't be surprising that our earliest vocabularies conceal "semantic time bombs" of the sort we have highlighted. Attempting to engineer a language perfectly from the beginning, so that it will never encounter any adaptive tensions, is straightforwardly foolhardy, as our brains lack the storage capacity to hold the elaborate instructions required for such a farseeing task (we can't instruct our babies: "This is a triangle, and in its corners you'll need to impose a weak solution à la Laurent Schwartz"). We readily accept that we'd never learn to play a sport such as baseball ably if the idiosyncratic motor skills required had to arrive hard-wired into our genetic codes. Flexible muscular adaptivity instead requires a loose set of inherited skills that can be locally tuned to suit the diverse requirements of various varieties of sportive enterprise. The same moral holds in language as well: prudent linguistic engineering begins with rough computational capacities that can be later reshaped to suit the more exacting demands of novel applications. Semi-managerial words such as "cause" help us arrange these adaptations wisely and on the fly, so that we can swiftly outline the computational architecture required to blend theory together with important injections of reliable empirical data, in a manner that suits the specialized opportunity currently before us. As I've stressed, the objective of many of these higher-scale data insertions is precisely to avoid an unreliable dependency upon lower-scale "grounding" laws of a speculative nature. For such reasons, we can't properly appreciate the many good offices that "cause" serves on our behalf unless we turn a cold eye upon the "grounding" entreaties of the recent crop of analytic metaphysicians.

It is my impression that few members of this new school recognize how many areas of traditional metaphysical concern they have tacitly set aside by ignoring the motivating puzzles of mathematical practice. Such obtuseness, I think, is largely due to the unreflective habit of suppressing real-life strategic diversity under large dollops of Theory T thinking. In these respects, a passage from our opening Sider quote is revealing: "A realistic picture of science leaves room for a metaphysics tempered by humility." What does he mean by "a realistic picture of science"? In light of the considerations raised in this book, not "an unbiased portrait of scientific method based upon a close observation of real life practice," for then Sider would not dismiss Woodward's work so airily. Nor can he intend "from a non-instrumentalist point of view that maintains that science talks about a genuine external world," for Descartes was surely a "realist" in this sense, yet he suspected that the inferential powers afforded to us within mathematics were limited and could not keep pace with the external world except in fits and starts (many of the concerns raised in Essay 4 can be regarded as more up-to-date refinements upon these fundamental Cartesian doubts). No, what Sider actually means is "a picture of science that entirely conforms to the simple Theory T precepts I have been taught." That is, he presumes, in the face of much presumptive evidence to the contrary, that, upon our final day of epistemological reckoning, the Almighty will present us with a stone tablet inscribed with an array of "laws" that fully accord, structurally, with every syntactic precept anticipated by David Lewis in his writings on science. This, I submit, does not qualify as "humility," but reflects a philosopher's ungrounded presumption that we can ably pronounce upon the "basic structures of science" without bothering to study them in any detail whatsoever.

From a formal point of view, Sider's conception of metaphysics as a "pre-scientific" enterprise almost certainly rests upon the syntactic conviction that his "final physics" will assuredly assume the structural contours of what was called an "evolutionary physics of hyperbolic signature operating on a cosmological level" in Essay 1. In that rosy future, all of today's annoying cobwebs of scales that "talk to" one another through homogenizations, and of interiors and boundaries that must be described in unequal but complementary ways, will have entirely vanished, replaced by a resplendent theory T of impeccable first-order logic credentials. For it is only within such a beautifully amalgamated future that Sider will find the pristine "laws, properties, lasting through time" he requires to ground his present-day speculations. In the previous essay, I cheerfully acknowledged that such a futuristic eventuality might come to pass–the matter lies ultimately in nature's hands, not ours. But I discern no evident necessity within, nor strong inductive support behind, these expectations, and feel that contemporary philosophy will perform better if it concentrates a bit more on the here-and-now. For that project I heartily recommend the diagnostic value of Woodward's manipulation conditionals.

I write these passages with some sarcasm, because, in my limited dealings withanalytic metaphysicians of stout timber, I have been struck by their lack of interest in,e.g., the recent innovations within continuum physics with respect to a viable treatmentof “boundaries” and “interiors.” But perhaps I’ve merely had the misfortune ofrunning into an especially dogmatic subgroup, although I suspect thattheir isolationalist propensities trace to more fundamental sources,such as the semantic presumptions unpacked here. On the surface, atleast, the entire “analytic metaphysics” movement gives theimpression of a creed that rebuffs the counter-suggestions ofempirical study as robustly as any other sect that promises itsmembers similar allotments of contentment and self-satisfaction.

In one of those cruel whimsies of philosophical fate, Quine’s unfortunate,Theory T-laced suggestions on “ontological commitment” in From a Logical Point ofView have become central dogma within this new theology, in a manner that invokesentirely unQuinean visions of a kingdom of necessitarian doctrine wherein a cleverphilosopher can luxuriate without knowing much of anything about anything else. As ithappens, Quine’s book title derives from a classic calypso composed by the RoaringLion (“From a logical point of view/Always marry a woman uglier than you”). In thatsame spirit of light irony and in tribute to Quine himself, let me borrow my own titlefrom another of Lion’s evocative compositions, this time satirizing the enthusiasms ofreligious cults:

Page 35: Nature has the deep cunning which hides itself under the ... · there’s a caveman in the picture) and can be encountered within the humblest forms of everyday reasoning. The fact

-35-

Yes, believers! We come from the glory--we come from the glorious kingdom.In present circumstances, the relevant “kingdom” represents the permanentenshrinement and glorification of the genuine linguistic considerations, familiar to all,that I have labeled “the early a priori.” In the process, a legitimate frog has puffeditself up into an unsustainable ox.

.

Appendix: Woodwardian conditionals and the evolution of conceptual disparities.

Conceptual paradoxes often emerge as the collateral side effects of otherwiseheathy patterns of linguistic improvement and a hallmark of true philosophical insightlies in its capacities to explicate the emergence of longstanding muddles. In theserespects, Woodward’s manipulation conditionals earn their spurs admirably in theserespects. In the previous essay, we noted that data articulated upon differentcharacteristic scales commonly induce formal inconsistencies upon amalgamation dueto the fact that the two modes often employ identical vocabulary in mildly discordantways tracing to adaptation to local descriptive advantage. As we also stressed,articulating a descriptive practice in contextually sensitive layers is often wise policyfrom an efficient data registration point of view, but can result in confusion if the layersdo not “talk to one another” in one of the subtle manners canvassed in Essay 4. Achief merit of Woodward’s work, in my assessment,lies in its identification of the vital roles thatcontrolled cause-and-effect considerations play inarranging the computational architecture of profitablescience. As this transpires, central words mayslightly shift their data registration focus and variousconfusions may ensue, despite the fact that the terminological extension itselfrepresents a great step forward in science.

Here’s a simple illustration of these policies in action. Our understanding of thenotion of “pressure” historically begins with the palpable resistance that a fluiddisplays in response to any exterior attempt to compress its bounding surface in a way(say, with a piston). Let us observe that water in bulk appears to observe a “constantvolume” constraint:

No manipulation, including increased pressure on its boundaries, upon aflask of water can cause any change in the volume of any packet of the fluid.

This tenet directly reflects “manipulations upon the boundaries of a system” of exactly

Page 36: Nature has the deep cunning which hides itself under the ... · there’s a caveman in the picture) and can be encountered within the humblest forms of everyday reasoning. The fact

-36-

the character that Woodward highlights. The constant volume presumption thensupplies us with a criterion for extending the application of the term “pressure”differentially to interior parts of the inward, allowing us to speak meaningfully of an“internal pressure” varying throughout the fluid, far away from its bounding edges. Here’s the explanation I extracted from Wikipedia on the day that I looked up“pressure”:

Bernoulli's principle for steady state flow can be derived directly fromNewton's 2nd law. If a small volume of fluid is flowing horizontally from aregion of high pressure to a region of low pressure, then there is morepressure behind than in front. This gives a net force on the volume,accelerating it along the streamline... If a fluid is flowing horizontally andalong a section of a streamline, where the speed increases it can only bebecause the fluid on that section has moved from a region of higher pressureto a region of lower pressure.

Note the central role of our constant volume constraint within this argumentation.Every high school physics student is taught to reason in this fashion and the linkbetween conserved volume and pressure allows to talk of a “pressure” distributedunevenly throughout the interior of the fluid..

But such reasoning readily generates a “paradox” thatsometimes occurs to these budding scientists:

Gee, the volumes that must remain constant in a fluid canbe relatively large and complicated and fluids aren’t verysmart. But it’s very hard to distort any object on one sidewithout temporarily affecting its volume. Thus consider two balls of fluid Aand B situated next to one another. Waves travel through water at a finiterate, so when I first press on ball A, won’t it need to first contract for a shortperiod and then reexpand, so that it transmit a heightened force to B whenthe two balls again regain contact? In other words, water can only maintaina simulacrum of our constant volume principle. Over short intervals of time,

the principle must fail. Indeed, one can sometimes directly observe thetime lag a fluid must endure before it regains itsformer volume by considering the “die swell”within a tube of tooth paste: it takes a short periodof time before the paste fully “remembers” itsuncompressed volume inside the tube. When wethink in this vein, we have tacitly shifted to an

Page 37: Nature has the deep cunning which hides itself under the ... · there’s a caveman in the picture) and can be encountered within the humblest forms of everyday reasoning. The fact

-37-

alterative form of “cause” and “effect” investigation linked to a fixed scale level thatponders the processes that allow waves and allieddisturbances to ripple through interiors ofcontinuous media. We are naturally led to a subtlyrevised notion of “internal pressure” that musteschew its “constant volume” antecedents and allowa small packet of water to temporarily store a certainquantity of compressive strain energy internallylinked to its degree of volume compression (which,ipso facto, implies that it cannot fully retain itsoriginal volume during this interval–the issues arequite similar to those involved in billiard ball impactas outlined in Essay 2). Once we employ “pressure” in this traction-force-related-to-distortion manner, we are well on the way to recogizing that a pressure represents anartificially simplified reduction of the general (and subtle) concept of stress, as firstarticulated by Cauchy and which allow our little blobs of fluid to press on one anotherin oblique directions related to shear.

Unfortunately but inevitably, old and new employments of “pressure” oftennestle together promiscuously in modern descriptive practice. Thus, it is common (anddeductively efficient) to off-handedly ask normal water, in its mid-stream flow, to actas an “incompressible fluid.” In doing so, we have backtracked to a stance where weare once again enforcing a “constant volume” demand upon our water. Mathematically, we thereby introduce an upper-scale constraint that, throughdownward projection, induces new Lagrangian multiplier “forces” inside our little fluidblobs (the inductive procedure involved is outlined in the appendix to Essay 6) able toimmediately puff them back to their original volumes. By introducing this innocuousconstraint (and all of the “reaction forces” that follow in its wake), we derive ourselvesof all capacity to determine what the “absolute pressure” inside our blobs are like (=the stored energies responsible for wave motion). In the usual jargon, the concept of“absolute pressure” becomes indeterminate in constant volume circumstances.

Because pedagogy often recapitulates ontogeny, young engineers are taught the“constant volume” employment of “pressure” first and only later warned that they mustbeware of confusing this non-absolutist notion of “head pressure” from a compressive“absolute pressure.” Every student of the subject at some point experiences somevertigo with respect to the true “meaning” of “pressure,” illustrating my observation isthat the asking price of an effective reasoning architecture is often some form ofconceptual headache.

In fact, usage becomes increasingly nuanced as another round of Woodwardian

Page 38: Nature has the deep cunning which hides itself under the ... · there’s a caveman in the picture) and can be encountered within the humblest forms of everyday reasoning. The fact

-38-

“cause and effect” consideration enters the scene. These appear as “friction” (whichrepresents a ragtag collection of concerns) gradually assumes an important role. Thermodynamics demands that we distinguish a “pressure” upon a blob of fluid from a“heat flux” because the former is able to perform coherent work on its surroundings,whereas randomized heat cannot. The notion of “coherent effect” proves rather subtleitself in the case of a fluid, because when we track the coherent movements of a“packet of viscous fluid,” the groupings we follow are not determined by thetrajectories of the individual molecules currently within the packet, for these may comeand go, but by the overall coherence (with respect to momentum) of the evolving blob. As a gas becomes rarified, sharp distinctions between “pressure” and “heat flux”become amorphous.

There are several “semantic morals” illustrated in this example. (1) Words like“pressure” acquire their specific capacities for data registration through their immersionwithin the broader context of an investigative framework (= what questions do weaddress in our modelings?). (2) Basic “cause and effect” considerations of aWoodwardian stripe mark out the basic contours of these “investigative frames.” (3)In doing so, “cause” and “effect” serve as much as “instruments of languagemanagement” (= they articulate important pieces of the local investigative architecture)as they encode direct descriptive information. (4) Once such a context has been fixed,then all of these words–“pressure, “cause” and “effect”–will specialize to capturesalient physical factors, but what they will be may differ substantially from oneinvestigative context to another. In the circumstances of our wave motion concerns,“pressure” will firmly record important energetic considerations and “cause” will settleupon the notion of hyperbolic “causal process” discussed in Essay 1. In constantvolume circumstances, these same words fasten upon questions of comparative“pressure head” for which absolutist conceptions of stored energy only serve ascomputational annoyances.

As implied above, one of the chief merits of Woodward’s studies is that he hasably documented, across an impressively wide swath of productive science, themanagerial roles that “cause” and “effect” discussion play in setting up pertinent formsof investigative architecture. To complain, as our analytic metaphysicians are wont todo, that Woodward has failed to isolate the “core metaphysical meaning” for “cause”that we must have surely imbibed during our periods of early apriori inoculation is todeeply misunderstand the meandering careers of both science and language.

Endnotes:

Page 39: Nature has the deep cunning which hides itself under the ... · there’s a caveman in the picture) and can be encountered within the humblest forms of everyday reasoning. The fact

-39-

1. In the present essay, we shall concentrate upon the policies that aid the uncoveringof helpful strategies of effective “physics avoidance.” But understanding why thesestrategies work often represents a significantly different kettle of fish, which we shalldiscuss more intently in Essay 9.

2. In their own hagiographies, it is common to claim that much of twentieth centuryanalytic philosophy erred in overemphaszing linguistic considerations and neglectedthe metaphysics underneath.

3. Such a foible, I think, attends the pamphlet I once ran across that lays down “TheAxioms of Scientology.” For more on this score, consult Martin Gardner’s classicFads and Fallacies in the Name of Science (New York: Dover, 1957).

4. It would be better if we regarded the frequent philosophical invocations of thistelltale phrase with the wary skepticism that we apply to assurances of a patentmedicine salesman: “In principle, this elixir will cure your rheumatism.”

5. Theodore Sider, “Introduction” in Sider, Hawthorne and Zimmerman, eds., Contemporary Debates in Metaphysics (Hoboken: Wiley-Blackwell, 2007), p. XXX.

6. An early concern within the battles over “functionalism,” “supervenience,”“grounding” and so forth focused upon the question whether a syntactic expressionthat contains an abundance of quantifiers (especially of a functional stripe) lifted theexpression beyond the allowable orbits of “physical kinds” and into the nether spheresof psychology or the other “special sciences.” Hilary Putnam started much of this,presumably inspired by his work in recursion theory (where layered stacks ofquantifers do correspond to important hierarchies of mathematical behavior). In my“Honorable Intensions” (in Wagner and Warner, eds, Naturalism: A Critical Appraisal(South Bend” Notre Dame Press, 1993)), I argue that this entire project of sorting out“kinds” through synatx is misguided in a physical context.

7. Or, more generally, a Sturm-Liouville decomposition into eigenfunctions. Theunderpinning of these techniques are discussed in Essay 9.

8. The great paradigm for this kind of treat is L. Prandt’s “intermediate asymptotics”approach to boundary layer theory

9. In dealing with potential wells of this character, mountain peaks naturally turn intovalley floors.

Page 40: Nature has the deep cunning which hides itself under the ... · there’s a caveman in the picture) and can be encountered within the humblest forms of everyday reasoning. The fact

-40-

10. I found this quote in H. J. S. Smith, “On Some of the Methods at Present in Use inPure Geometry” in Collected Mathematical Papers Vol. 1(Providence: AMS Chelsea,1965) p. 6). He credits the remark to an earlier British mathematician, RichardWoodhouse.

11. As Jacques Hadamard stressed, the salient distinctions are directly encoded withinthe signatures of the very differential equations that Russell and Mach regard asobligatory, so how could the processes in question prove “unnecessary for science”? If we restore this dispute to its original scientific context, Mach’s remarks can beinterpreted in the more favorite light of his battles over thermal physics, in a mannersimilar to our story in Essay 3. Russell, in contrast, appears to be simply borrowingfrom Mach without similar contextual justification.

12. James Woodward, Making Things Happen (Oxford: Oxford University Press,2005).

13. I find this assurance amusing because I doubt that any of these metaphysicianscould riffle through the pages of a standard physics primer and reliably locate the“laws” therein. In truth, the doctrines we call “laws” in real life comprise a rag-tagcollection of classifications tied together through various forms of historical accident. I find it astonishing that this fuzzy notion is now viewed as a key ingredient within ourearly apriori arsenal (in contrast to “laws of causality” of earlier times). eras). Havethey ever attempted to teach the notion to a seven year old?

14. Lillian Gish: The Night of the Hunter.

15. For our teeter-toter scheme to work properly, our counterfactuals need to betweaked into the “virtual manipulation” format explicated in Essay 6: even when ourseesaws are turned, we keep the downward force of the children’s weight the same(rather than correcting for the change in oblique angle).

16. That is, we must “satisfy the constraints of the problem”–we’re not to considerthe possibilities where the teeter-totters bend or stretch.

17. Galileo, “Selections from ‘The Assayer’” in Maurice A. Finocchiaro, ed, TheEssential Galileo (Indianapolis: Hackett, 2008).

18. In WS, I stress the outright “multi-valuedness” that sometimes arises within ourclassifications and analogize these developments to the the “anholonomies” thatautomatically arise under analytic continuation upon the complex plane.

Page 41: Nature has the deep cunning which hides itself under the ... · there’s a caveman in the picture) and can be encountered within the humblest forms of everyday reasoning. The fact

-41-

19.Gauss in Carslaw p. 25 Find source