

HISTORICAL PERSPECTIVE

From Turing and von Neumann to the Present

by Necia G. Cooper

automaton—a mechanism that is relatively self-operating; a device or machine designed to follow automatically a predetermined sequence of operations or respond to encoded instructions.

The notion of automata in the sense of machines that operate on their own from encoded instructions is very ancient, and one might say that mechanical clocks and music boxes fall under this category. The idea of computing machines is also very old. For instance, Pascal and Leibniz outlined various schematics for such machines. In the latter part of the 18th century Baron de Kempelen built what was alleged to be the first chess-playing machine. Remarkable as it appeared, alas, it was a fake operated by a person hidden within it!

The modern theory of automata can be traced to two giants in the field of mathematics: Alan Turing and John von Neumann. These two men laid much of the logical foundation for the development of present-day electronic computers, and both were involved in the practical design of real computing machines.

Before World War II Turing had proved the logical limits of computability and on the basis of this work had designed in idealized terms a universal computer, a machine that could perform all possible numerical computations. This idealized machine is now known as a Turing machine. (All modern computers have capabilities equivalent to some of the universal Turing machines.) During World War II Turing successfully applied his logical talent to the real and urgent problem of breaking the Nazi intelligence code, a feat that played a crucial role in the Allied victory.

Prior to World War II von Neumann was aware of Turing’s work on computing machines and realized how useful such machines would be for investigating nonlinear problems in mathematical physics, in particular, the fascinating problem of turbulence. Numerical calculations might, for example, elucidate the mysterious role of the Reynolds number in turbulent phenomena. (The Reynolds number gives roughly the ratio of the inertial forces to the viscous forces. A flow that is regular becomes turbulent when this number is about 2000.) He was convinced that the best mathematics proceeds from empirical science and that numerical calculation on electronic computers might provide a new kind of empirical data on the properties of nonlinear equations. Stan Ulam suggests that the final impetus for von Neumann to work energetically on computer methods and design came from wartime Los Alamos, where it became obvious that analytical work alone was often not sufficient to provide even qualitative answers about the behavior of an atomic bomb. The best way to construct a computing machine thus presented a practical as well as a theoretical problem.

Starting in 1944 von Neumann formulated methods of translating a set of mathematical procedures into a language of instructions for a computing machine. Before von Neumann’s work on the logical design of computers, the few existing electronic machines had to be rewired for each new problem. Von Neumann developed the idea of a fixed “flow diagram” and a stored “code,” or program, that would enable a machine with a fixed set of connections to solve a great variety of problems.

22 Fall 1983 LOS ALAMOS SCIENCE

Von Neumann was also interested, as was Turing, in discovering the logical elements and organization required to perform some of the more general types of functions that human beings and other life forms carry out and in trying to construct, at least at an abstract level, machines that contained such capabilities. But whereas Turing was primarily interested in developing “intelligent” automata that would imitate the thinking and decision-making abilities of the human brain, von Neumann focused on the broader problem of developing a general theory of complicated automata, a theory that would encompass both natural automata (such as the human nervous system and living organisms) and artificial automata (such as digital computers and communication networks).

What is meant by the term “complicated”? As von Neumann put it, it is not a question of how complicated an object is but rather of how involved or difficult its purposive operations are. In a series of lectures delivered at the University of Illinois in 1949, von Neumann explored ideas about what constitutes complexity and what kind of a theory might be needed to describe complicated automata. He suggested that a new theory of information would be needed for such systems, one that would bear a resemblance to both formal logic and thermodynamics. It was at these lectures that he explained the logical machinery necessary to construct an artificial automaton that could carry out one very specific complicated function, namely, self-reproduction. Such an automaton was also logically capable of constructing automata more complex than itself. Von Neumann actually began constructing several models of self-reproducing automata. Based on an inspired suggestion by Ulam, one of these models was in the form of a “cellular” automaton (see the preceding article in this issue by Stephen Wolfram for the definition of a cellular automaton).

From the Illinois lectures it is clear that von Neumann was struggling to arrive at a correct definition of complexity. Although his thoughts were still admittedly vague, they do seem, at least in some respects, related to the present efforts of Wolfram to find universal features of cellular automaton behavior and from these to develop new laws, analogous to those of thermodynamics, to describe self-organizing systems.

Von Neumann suggested that a theory of information appropriate to automata would build on and go beyond the results of Turing, Gödel, Szilard, and Shannon.

Turing had shown the limits of what can be done with certain types of information—namely, anything that can be described in rigorously logical terms can be done by an automaton, and, conversely, anything that can be done by an automaton can be described in logical terms. Turing constructed, on paper, a universal automaton that could perform anything that any other automaton could do. It consisted of a finite automaton, one that exists in a finite number of states, plus an indefinitely extendible tape containing instructions. “The importance of Turing’s research is just this:” said von Neumann, “that if you construct an automaton right, then any additional requirements about the automaton can be handled by sufficiently elaborate instructions. This is true only if [the automaton] is sufficiently complicated, if it reaches a certain minimum level of complexity” (John von Neumann, Theory of Self-Reproducing Automata, edited and completed by Arthur W. Burks, University of Illinois Press, 1966, p. 50).

Turing also proved that there are some things an automaton cannot do. For example, “you cannot construct an automaton which can predict in how many steps another automaton which can solve a certain problem will actually solve it. . . . In other words, you can build an organ which can do anything that can be done, but you cannot build an organ which tells you whether it can be done” (ibid., p. 51). This result of Turing’s is connected with Gödel’s work on the hierarchy of types in formal logic. Von Neumann related this result to his notion of complexity. He suggested that for objects of low complexity, it is easier to predict their properties than to build them, but for objects of high complexity, the opposite is true.

Von Neumann stated that the new theory of information should include not only the strict and rigorous considerations of formal logic but also statistical considerations. The reason one needs statistical considerations is to include the possibility of failure. The actual structure of both natural and artificial automata is dictated by the need to achieve a state in which a majority of all failures will not be lethal. To include failure, one must develop a probabilistic system of logic. Von Neumann felt that the theory of entropy and information in thermodynamics and Shannon’s information theory would be relevant.

Szilard had shown in 1929 that entropy in a physical system measures the lack of information; it gives the total amount of missing information on the microscopic structure of the system. Entropy defined as a physical quantity measures the degree of degradation suffered by any form of energy. “There are strong indications that information is similar to entropy and that the degenerative processes of entropy are paralleled by degenerative processes in the processing of information” (ibid., p. 62).

Shannon’s work focused on the problem of transmitting information. He had developed a quantitative theory of measuring the capacity of a communication channel, a theory that included the role of redundancy. Redundancy makes it possible to correct errors and “is the only thing which makes it possible to write a text which is longer than, say, ten pages. In other words, a language which has maximum compression would actually be completely unsuited to conveying information beyond a certain degree of complexity, because you could never find out whether a text is right or wrong” (ibid., p. 60).

Von Neumann emphasized the ability of living organisms to operate across errors. Such a system “is sufficiently flexible and well organized that as soon as an error shows up in any one part of it, the system automatically senses whether this error matters or not. If it doesn’t matter, the system continues to operate without paying any attention to it. If the error seems to be important, the system blocks that region out, by-passes it, and proceeds along other channels. The system then analyzes the region separately at leisure and corrects what goes on there, and if correction is impossible the system just blocks the region off and by-passes it forever. . . .

“To apply the philosophy underlying natural automata to artificial automata we must understand complicated mechanisms better than we do, we must have elaborate statistics about what goes wrong, and we must have much more perfect statistical information about the milieu in which a mechanism lives than we now have. An automaton cannot be separated from the milieu to which it responds” (ibid., pp. 71-72).

From artificial automata “one gets a very strong impression that complication, or productive potentiality in an organization, is degenerative, that an organization which synthesizes something is necessarily more complicated, of a higher order, than the organization it synthesizes” (ibid., p. 79).

But life defeats degeneracy. Although the complicated aggregation of many elementary parts necessary to form a living organism is thermodynamically highly improbable, once such a peculiar accident occurs, the rules of probability do not apply because the organism can reproduce itself provided the milieu is reasonable—and a reasonable milieu is thermodynamically much less improbable. Thus probability leaves a loophole that is pierced by self-reproduction.

Is it possible for an artificial automaton to reproduce itself? Further, is it possible for a machine to produce something that is more complicated than itself in the sense that the offspring can perform more difficult and involved tasks than the progenitor? These questions arise from looking at natural automata. In what sense can a gene contain a description of the human being that will come from it? How can an organism at a low level in the phylogenetic order develop into a higher level organism?

A three-dimensional object grown from a single cube to the thirtieth generation (dark cubes). The model shows only one octant of the three-dimensional structure. This figure and the two others illustrating this article are from R. G. Schrandt and S. M. Ulam, “On Recursively Defined Geometrical Objects and Patterns of Growth,” Los Alamos Scientific Laboratory report LA-3762, November 1967, and are also reprinted in Arthur W. Burks, editor, Essays on Cellular Automata, University of Illinois Press, 1970.

From his comparison of natural and artificial automata, von Neumann suggested that complexity has one decisive property, namely, a critical size below which the process of synthesis is degenerative and above which the process is explosive in the sense that an automaton can produce others that are more complex and of higher potentiality than itself. However, to get beyond the realm of vague statements and develop a correct formulation of complexity, he felt it was necessary to construct examples that exhibit the “critical and paradoxical properties of complication” (ibid., p. 80).

To this end he set out to construct, in principle, self-reproducing automata, automata “which can have outputs something like themselves” (ibid., p. 75). (All artificial automata discussed up to that point, such as Turing machines, computing machines, and the network of abstract neurons discussed by McCulloch and Pitts (“A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics, 1943), had inputs and outputs of completely different media than the automata themselves.)

“There is no question of producing matter out of nothing. Rather, one imagines automata which can modify objects similar to themselves, or effect syntheses by picking up parts and putting them together, or take synthesized entities apart” (ibid., p. 75).

Von Neumann drew up a list of unambiguously defined parts for the kinematic model of a self-reproducing automaton. Although this model ignored mechanical and chemical questions of force and energy, it did involve problems of movement, contact, positioning, fusing, and cutting of elements.

Von Neumann changed his initial approach after extensive discussions with Ulam. Ulam suggested that the proof of existence and construction of a self-reproducing automaton might be done in a simpler, neater way that retained the logical and combinatorial aspects of the problem but eliminated complicated aspects of geometry and motion. Ulam’s idea was to construct the automaton in an indefinitely large space composed of cells. In two dimensions such a cellular structure is equivalent to an infinite checkerboard. The elements of the automaton are a set of allowable states for each cell, including an empty, or quiescent, state, and a transition rule for transforming one state into another. The rule defines the state of a cell at time interval t+1 in terms of its own state and the states of certain neighboring cells at time interval t. Motion is replaced by transmitting information from cell to cell; that is, the transition rule can change a quiescent cell into an active cell.

A “contest” between two patterns, one of lines within squares (shaded) and one of dots within squares, growing in a 23 by 23 checkerboard. Both patterns grow by a recursive rule stating that the newest generation (represented by diagonal lines or by dots in an x shape) may occupy a square if that square is orthogonally contiguous to one and only one square occupied by the immediately preceding generation (represented by perpendicularly bisecting lines or by dots in a + shape). In addition, no piece of either pattern may survive more than two generations. Initially, the line pattern occupied only the lower left corner square, and the dot pattern occupied only the square immediately to the left of the upper right corner square. (a) At generation 16 the two patterns are still separate. (b) At generation 25 the two patterns engage. (c) At 32 generations the dot pattern has penetrated enemy territory. (d) At 33 generations the dot pattern has won the contest.
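The update scheme just described (each cell's next state a function only of its own state and the states of certain neighbors at the previous time interval) can be sketched in a few lines of modern code. The following Python is an illustration of the general mechanism, not a reconstruction of von Neumann's machine; the two-state `grow` rule and all names are invented for the example:

```python
# A minimal synchronous cellular-automaton step on a finite grid.
# States: 0 (quiescent) or 1 (active); neighborhood: the four
# orthogonal cells (the von Neumann neighborhood); cells outside
# the grid are treated as quiescent.

def step(grid, rule):
    """Apply rule(own_state, neighbor_states) to every cell at once."""
    rows, cols = len(grid), len(grid[0])

    def state(r, c):
        return grid[r][c] if 0 <= r < rows and 0 <= c < cols else 0

    return [
        [
            rule(grid[r][c],
                 (state(r - 1, c), state(r + 1, c),
                  state(r, c - 1), state(r, c + 1)))
            for c in range(cols)
        ]
        for r in range(rows)
    ]

# Example rule: a quiescent cell becomes active when exactly one
# orthogonal neighbor is active; "motion" is information transfer.
def grow(own, neighbors):
    return 1 if own == 1 or sum(neighbors) == 1 else own

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1           # a single active seed
grid = step(grid, grow)  # the seed's four neighbors switch on
```

All cells are updated simultaneously from the old grid, which is the defining feature of the cellular model: the new configuration depends only on the previous one.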

Von Neumann’s universal self-reproducing cellular automaton, begun in 1952, was a rather baroque construction in which each cell had twenty-nine allowable states and a neighborhood consisting of the four cells orthogonal to it. Influenced by the work of McCulloch and Pitts, von Neumann used a physiological simile of idealized neurons to help define these states. The states and transition rules among them were designed to perform both logical and growth operations. He recognized, of course, that his construction might not be the minimal or optimal one, and it was later shown by Edwin Roger Banks that a universal self-reproducing automaton was possible with only four allowed states per cell.

The logical trick employed to make the automaton universal was to make it capable of reading any axiomatic description of any other automaton, including itself, and to include its own axiomatic description in its memory. This trick was close to that used by Turing in his universal computing machine. The basic organs of the automaton included a tape unit that could store information on and read from an indefinitely extendible linear array of cells, or tape, and a constructing unit containing a finite control unit and an indefinitely long constructing arm that could construct any automaton whose description was stored in the tape unit. Realization of the 29-state self-reproducing cellular automaton required some 200,000 cells.

Von Neumann died in 1957 and did not complete this construction (it was completed by Arthur Burks). Neither did he complete his plans for two other models of self-reproducing automata. In one, based on the 29-state cellular automaton, the basic element was to be neuron-like and have fatigue mechanisms as well as a threshold for excitation. The other was to be a continuous model of self-reproduction described by a system of nonlinear partial differential equations of the type that govern diffusion in a fluid. Von Neumann thus hoped to proceed from the discrete to the continuous. He was inspired by the abilities of natural automata and emphasized that the nervous system was not purely digital but was a mixed analog-digital system.


Much effort since von Neumann’s time has gone into investigating the simulation capabilities of cellular automata. Can one define appropriate sets of states and transition rules to simulate natural phenomena? Ulam was among the first to use cellular automata in this way. He investigated growth patterns of simple finite systems, simple in that each cell had only two states and obeyed some simple transition rule. Even very simple growth rules may yield highly complex patterns, both periodic and aperiodic. “The main feature of cellular automata,” Ulam points out, “is that simple recipes repeated many times may lead to very complicated behavior. Information analysts might look at some final pattern and infer that it contains a large amount of information, when in fact the pattern is generated by a very simple process. Perhaps the behavior of an animal or even ourselves could be reduced to two or three pages of simple rules applied in turn many times!” (private conversation, October 1983).

Ulam’s study of the growth patterns of cellular automata had as one of its aims “to throw a sidelight on the question of how much ‘information’ is necessary to describe the seemingly enormously elaborate structures of living objects” (ibid.). His work with Holladay and with Schrandt on an electronic computing machine at Los Alamos in 1967 produced a great number of such patterns. Properties of their morphology were surveyed in both space and time. Ulam and Schrandt experimented with “contests” in which two starting configurations were allowed to grow until they collided. Then a fight would ensue, and sometimes one configuration would annihilate the other. They also explored three-dimensional automata.
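The recursive rule of these contests is stated in full in the figure caption earlier: a pattern gains an empty square when that square is orthogonally contiguous to exactly one square of the pattern's newest generation, and no square survives more than two generations. The sketch below is my own reconstruction of that rule for a single pattern on an unbounded board (the 1967 program ran on a bounded 23 by 23 board, and a contest would track two such patterns and resolve collisions):

```python
# Sketch of the Ulam-Schrandt growth rule: each step, an empty
# square is born if it is orthogonally adjacent to exactly ONE
# square of the newest generation, and every square dies after
# living through two generations.

def next_generation(newest, previous):
    """Return the set of (row, col) squares born this step."""
    counts = {}
    for (r, c) in newest:
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            counts[nb] = counts.get(nb, 0) + 1
    alive = newest | previous
    # Born: adjacent to exactly one newest square, and not occupied.
    return {sq for sq, n in counts.items() if n == 1 and sq not in alive}

def run(seed, steps):
    """Grow a pattern from `seed`, keeping only the two living generations."""
    newest, previous = set(seed), set()
    history = [newest]
    for _ in range(steps):
        # Shifting newest into previous discards the generation before
        # it, which enforces the two-generation lifetime.
        newest, previous = next_generation(newest, previous), newest
        history.append(newest)
    return history

history = run({(0, 0)}, 2)  # grow from a single seed square
```

From one seed the pattern expands as four arms along the axes; on a bounded board seeded in two corners, two such patterns eventually meet, which is where the "fight" begins.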

Another early investigator of cellular automata was Ed Fredkin. Around 1960 he began to explore the possibility that all physical phenomena down to the quantum mechanical level could be simulated by cellular automata. Perhaps the physical world is a discrete space-time lattice of information bits that evolve according to simple rules. In other words, perhaps the universe is one enormous cellular automaton.

A pattern grown according to a recursive rule from three noncontiguous squares at the vertices of an approximately equilateral triangle. A square of the next generation is formed if (a) it is contiguous to one and only one square of the current generation, and (b) it touches no other previously occupied square except if the square should be its “grandparent.” In addition, of this set of prospective squares of the (n+1)th generation satisfying condition (b), all squares that would touch each other are eliminated. However, squares that have the same parent are allowed to touch.

There have been many other workers in this field. Several important mathematical results on cellular automata were obtained by Moore and Holland (University of Michigan) in the 1960s. The “Game of Life,” an example of a two-dimensional cellular automaton with very complex behavior, was invented by Conway (Cambridge University) around 1970 and extensively investigated for several years thereafter.
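Conway's rules, which the article does not spell out, are compact: a live cell survives when two or three of its eight surrounding cells are live, and a dead cell is born when exactly three are. A one-step sketch in Python (the set-of-cells representation is my own choice, not anything prescribed by the game):

```python
# One synchronous step of Conway's Game of Life on an unbounded
# plane, with the live cells stored as a set of (row, col) pairs.
from collections import Counter

def life_step(live):
    """Survival with 2-3 live neighbors; birth with exactly 3."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The "blinker" oscillates between a row and a column of three cells.
blinker = {(1, 0), (1, 1), (1, 2)}
after_one = life_step(blinker)  # a vertical column of three
```

Despite this tiny rule set, Life supports gliders, oscillators, and unbounded growth, which is what made it such a vivid illustration of Ulam's "simple recipes" remark.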


Cellular automata have been used in biological studies (sometimes under the names of “tessellation automata” or “homogeneous structures”) to model several aspects of the growth and behavior of organisms. They have been analyzed as parallel-processing computers (often under the name of “iterative arrays”). They have also been applied to problems in number theory under the name “stunted trees” and have been considered in ergodic theory, as endomorphisms of the “dynamical” shift system.

A workshop on cellular automata at Los Alamos in March 1983 was attended by researchers from many different fields. The proceedings of this workshop will be published in the journal Physica D and will also be issued as a book by North-Holland Publishing Co.

In all this effort the work of Stephen Wolfram most closely approaches von Neumann’s dream of abstracting from examples of complicated automata new concepts relevant to information theory and analogous to the concepts of thermodynamics. Wolfram has made a systematic study of one-dimensional cellular automata and has identified four general classes of behavior, as described in the preceding article.
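An elementary one-dimensional cellular automaton of the kind Wolfram studied has two states per cell and a neighborhood of the cell itself plus its two nearest neighbors, so a rule is an eight-entry lookup table, conventionally named by the integer formed from its entries. The sketch below (an illustration of the setup, not code from the article) evolves rule 90, for which the new state is simply the XOR of the two neighbors, from a single active cell:

```python
# An elementary (one-dimensional, two-state, nearest-neighbor)
# cellular automaton.  The integer rule number encodes the new
# state for each of the 8 possible (left, self, right) triples,
# following Wolfram's numbering convention.

def evolve(rule, cells, steps):
    """Return successive rows of the automaton (fixed-width row;
    cells beyond the edges are treated as 0)."""
    table = [(rule >> i) & 1 for i in range(8)]
    rows = [cells]
    for _ in range(steps):
        padded = [0] + cells + [0]
        cells = [
            table[(padded[i] << 2) | (padded[i + 1] << 1) | padded[i + 2]]
            for i in range(len(cells))
        ]
        rows.append(cells)
    return rows

# Rule 90 (new state = left XOR right) grown from one active cell.
width = 15
start = [0] * width
start[width // 2] = 1
for row in evolve(90, start, 7):
    print("".join(".#"[c] for c in row))
```

Grown from a single cell, rule 90 traces out a self-similar triangular pattern: a pattern that looks informationally rich but is generated by a trivial recipe, which is precisely Ulam's point above.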

Three of these classes exhibit behavior analogous to the limit points, limit cycles, and strange attractors found in studies of nonlinear ordinary differential equations and transformation iterations. Such equations characterize dissipative systems, systems in which structure may arise spontaneously even from a disordered initial state. Fluids and living organisms are examples of such systems. (Non-dissipative systems, in contrast, tend toward disordered states of maximal entropy and are described by the laws of thermodynamics.) The fourth class mimics the behavior of universal Turing machines. Wolfram speculates that his identification of universal classes of behavior in cellular automata may represent a first step

in the formulation of general laws for complex self-organizing systems. He says that what he is looking for is a new concept—maybe it will be complexity or maybe something else—that like entropy will be always increasing (or decreasing) in such a system and will be manifest in both the microscopic laws governing evolution of the system and in its macroscopic behavior. It may be closest to what von Neumann had in mind as he sought a correct definition of complexity. We can never know.

Further Reading

John von Neumann. Theory of Self-Reproducing Automata. Edited and completed by Arthur W. Burks. Urbana: University of Illinois Press, 1966. Part I is an edited version of the lectures delivered at the University of Illinois. Part II is von Neumann’s manuscript describing the construction of his 29-state self-reproducing automaton.

Arthur W. Burks, editor. Essays on Cellular Automata. Urbana: University of Illinois Press, 1970. This volume contains early papers on cellular automata, including those of Ulam and his coworkers.

Andrew Hodges. “Alan Turing: Mathematician and Computer Builder.” New Scientist, 15 September 1983, pp. 789-792. This contains a wonderful illustration, “A Turing Machine in Action.”

Martin Gardner. “On Cellular Automata, Self-Reproduction, the Garden of Eden, and the Game ‘Life.’” Scientific American, October 1971.

The following publications deal with complexity per se:

W. A. Beyer, M. L. Stein, and S. M. Ulam. “The Notion of Complexity.” Los Alamos Scientific Laboratory report LA-4822, December 1971.

S. Winograd. Arithmetic Complexity of Computations. Philadelphia: Society for Industrial and Applied Mathematics, 1980.

Acknowledgment

I wish to thank Arthur W. Burks for permission to reprint quotations from Theory of Self-Reproducing Automata. We are indebted to him for editing and completing von Neumann’s manuscripts in a manner that retains the patterns of thought of a great mind.
