Vol. 12 No. 2 Featured Article

For decades now computer scientists and futurists have been telling us that computers will achieve human-level artificial intelligence soon. That day appears to be off in the distant future. Why? In this penetrating skeptical critique of AI, computer scientist Peter Kassan reviews the numerous reasons why this problem is harder than anyone anticipated. — Michael Shermer

A.I. Gone Awry
The Futile Quest for Artificial Intelligence
by Peter Kassan

On March 24, 2005, an announcement was made in newspapers across the country, from the New York Times1 to the San Francisco Chronicle,2 that a company3 had been founded to apply neuroscience research to achieve human-level artificial intelligence. The reason the press release was so widely picked up is that the man behind it was Jeff Hawkins, the brilliant inventor of the PalmPilot, an invention that made him both wealthy and respected.4 You'd think from the news reports that the idea of approaching the pursuit of artificial human-level intelligence by modeling the brain was a novel one. Actually, a Web search for "computational neuroscience" finds over a hundred thousand webpages and several major research centers.5 At least two journals are devoted to the subject.6 Over 6,000 papers are available online. Amazon lists more than 50 books about it. A Web search for "human brain project" finds more than eighteen thousand matches.7 Many researchers think of modeling the human brain or creating a "virtual" brain as a feasible project, even if a "grand challenge."8 In other words, the idea isn't a new one. Hawkins' approach sounds simple. Create a machine with artificial "senses" and then allow it to learn, build a model of its world, see analogies, make predictions, solve problems, and give us
their solutions.9 This sounds eerily similar to what Alan Turing10 suggested in 1948. He, too, proposed to create an artificial "man" equipped with senses and an artificial brain that could "roam the countryside," like Frankenstein's monster, and learn whatever it needed to survive.11 The fact is, we have no unifying theory of neuroscience. We don't know what to build, much less how to build it.12 As one observer put it, neuroscience appears to be making "antiprogress" — the more information we acquire, the less we seem to know.13 Thirty years ago, the estimated number of neurons was between three and ten billion. Nowadays, the estimate is 100 billion. Thirty years ago it was assumed that the brain's glial cells, which outnumber neurons by nine times, were purely structural and had no other function. In 2004, it was reported that this wasn't true.14 Even the most ardent artificial intelligence (A.I.) advocates admit that, so far at least, the quest for human-level intelligence has been a total failure.15 Despite its checkered history, however, Hawkins concludes A.I. will happen: "Yes, we can build intelligent machines."16

A Brief History of A.I.

Duplicating or mimicking human-level intelligence is an old notion — perhaps as old as humanity itself. In the 19th century, as Charles Babbage conceived of ways to mechanize calculation, people started thinking it was possible — or arguing that it wasn't. Toward the middle of the 20th century, as mathematical geniuses Claude Shannon,17 Norbert Wiener,18 John von Neumann,19 Alan Turing, and others laid the foundations of the theory of computing, the necessary tool seemed available. In 1955, a research project on artificial intelligence was proposed; a conference the following summer is considered the official inauguration of the field. The proposal20 is fascinating for its assertions, assumptions, hubris, and naïveté, all of which have characterized the field of A.I. ever since. The authors proposed that ten people could make significant progress in the field in two months. That ten-person, two-month project is still going strong — 50 years later. And it's involved the efforts of more like tens of thousands of people. A.I. has splintered into three largely independent and mutually contradictory areas (connectionism, computationalism, and robotics), each of which has its own subdivisions and contradictions. Much of the activity in each of the areas has little to do with the original goals of mechanizing (or computerizing) human-level intelligence. However, in pursuit of that original goal, each of the three has its own set of problems, in addition to the many that they share.

1. Connectionism

Connectionism is the modern version of a philosophy of mind known as associationism.21 Connectionism has applications to psychology and cognitive science, as well as underlying the schools of A.I.22 that include both artificial neural networks23 (ubiquitously said to be "inspired by" the nervous system) and the attempt to model the brain. The latest estimates are that the human brain contains about 30 billion neurons in the cerebral cortex — the part of the brain associated with consciousness and intelligence. The 30 billion neurons of the cerebral cortex contain about a thousand trillion synapses (connections between neurons).24 Without a detailed model of how synapses work on a neurochemical level, there's no hope of modeling how the brain works.25 Unlike the idealized and simplified connections in so-called artificial neural networks, those synapses are extremely variable in nature — they can have different cycle times, they can use different neurotransmitters, and so on. How much data must be gathered about each synapse? Somewhere between kilobytes (tens of thousands of numbers) and megabytes (millions of numbers).26 And since the cycle time of synapses can be more than a
thousand cycles per second, we may have to process those numbers a thousand times each second. Have we succeeded in modeling the brain of any animal, no matter how simple? The nervous system of a nematode (worm) known as C. (Caenorhabditis) elegans has been studied extensively for about 40 years. Several websites27 and probably thousands of scientists are devoted exclusively or primarily to it. Although C. elegans is a very simple organism, it may be the most complicated creature to have its nervous system fully mapped. C. elegans has just over three hundred neurons, and they’ve been studied exhaustively. But mapping is not the same as modeling. No one has created a computer model of this nervous system — and the number of neurons in the human cortex alone is 100 million times larger. C. elegans has about seven thousand synapses.28 The number of synapses in the human cortex alone is over 100 billion times larger. The proposals to achieve human-level artificial intelligence by modeling the human brain fail to acknowledge the lack of any realistic computer model of a synapse, the lack of any realistic model of a neuron, the lack of any model of how glial cells interact with neurons, and the literally astronomical scale of what is to be simulated. The typical artificial neural network consists of no more than 64 input “neurons,” approximately the same number of “hidden neurons,” and a number of output “neurons” between one and 256.29
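
To make concrete just how spare such a network is, here is a minimal illustrative sketch in Python. The 64-input and 64-hidden layer sizes and the winner-takes-all output step follow the description above; the choice of 8 output units, the random weights, and all the names are assumptions for this sketch, not any published system.

```python
import random

# A minimal three-layer network of threshold units, sized to match the "typical"
# network described above: 64 inputs, 64 hidden units, and (arbitrarily) 8 outputs.
# Each "synapse" is a single number (a weight); each "neuron" a single number (a threshold).
N_IN, N_HIDDEN, N_OUT = 64, 64, 8

w_ih = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
w_ho = [[random.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]
hidden_thresholds = [0.0] * N_HIDDEN

def threshold_layer(inputs, weights, thresholds):
    # A unit "fires" (outputs 1) when its weighted input sum exceeds its threshold.
    return [1 if sum(w * x for w, x in zip(row, inputs)) > t else 0
            for row, t in zip(weights, thresholds)]

def forward(inputs):
    hidden = threshold_layer(inputs, w_ih, hidden_thresholds)
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w_ho]
    winner = max(range(N_OUT), key=lambda i: scores[i])
    # "Winner-takes-all": only the single most active output unit fires.
    return [1 if i == winner else 0 for i in range(N_OUT)]

print(forward([random.choice([0, 1]) for _ in range(N_IN)]))
```

The entire network here is specified by fewer than five thousand numbers, which is the kind of object the comparisons that follow are made against.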

Such modest scale persists despite a 1988 prediction by one computer guru that by now the world should be filled with "neuroprocessors" containing about 100 million artificial neurons.30 Even if every neuron in each layer of a three-layer artificial neural net with 64 neurons in each layer is connected to every neuron in the succeeding layer, and if all the neurons in the output layer are connected to each other (to allow creation of a "winner-takes-all" arrangement permitting only a single output neuron to fire), the total number of "synapses" can be no more than about 17 million, although most artificial neural networks typically contain far fewer — usually no more than a hundred or so. Furthermore, artificial neurons resemble generalized Boolean logic gates more than actual neurons. Each neuron can be described by a single number — its "threshold." Each synapse can be described by a single number — the strength of the connection — rather than the estimated minimum of ten thousand numbers required for a real synapse. Thus, the human cortex is at least 600 billion times more complicated than any artificial neural network yet devised. It is impossible to say how many lines of code the model of the brain would require; conceivably, the program itself might be relatively simple, with all the complexity in the data for each neuron and each synapse. But the distinction between the program and the data is unimportant. If each synapse were handled by the equivalent of only a single line of code, the program to simulate the cerebral cortex would be roughly 25 million times larger than what's probably the largest software product ever written, Microsoft Windows, said to be about 40 million lines of code.31 As a software project grows in size, the probability of failure increases.32 The probability of successfully completing a project 25 million times more complex than Windows is effectively zero. Moore's "Law" is often invoked at this stage in the A.I. argument.33 But Moore's Law is more of an observation than a law, and it is often misconstrued to mean that about every 18 months computers and everything associated with them double in capacity, speed, and so on. But Moore's Law won't solve the complexity problem at all. There's another "law," this one attributed to Niklaus Wirth: Software gets slower faster than hardware gets faster.34 Even though, according to Moore's Law, your personal computer should be about a hundred thousand
times more powerful than it was 25 years ago, your word processor isn't. Moore's Law doesn't apply to software. And perhaps last, there is the problem of testing. The minimum number of software errors observed has been about 2.5 errors per function point.35 A software program large enough to simulate the human brain would contain about 20 trillion errors. Testing conventional software (such as a word processor or Windows) involves, among many other things, confirming that its behavior matches detailed specifications of what it is intended to do in the case of every possible input. If it doesn't, the software is examined and fixed. Connectionistic software comes with no such specifications — only the vague description that it is to "learn" a "pattern" or act "like" a natural system, such as the brain. Even if one discovers that a connectionistic software program isn't acting the way you want it to, there's no way to "fix" it, because the behavior of the program is the result of an untraceable and unpredictable network of interconnections. Testing connectionistic software is also impossible due to what's known as the combinatorial explosion. The retina (of a single eye) contains about 120 million rods and 7 million cones.36 Even if each of those 127 million neurons were merely binary, like the beloved 8x8 input grid of the typical artificial neural network (that is, either responded or didn't respond to light), the number of different possible combinations of input is a number greater than 1 followed by 38,230,809 zeroes. (The number of particles in the universe has been estimated to be about 1 followed by only 80 zeroes.37) Testing an artificial neural network with input consisting of an 8x8 binary grid is, by comparison, a small job: such a grid can assume any of 18,446,744,073,709,551,616 configurations — orders of magnitude smaller, but still impossible.
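
The arithmetic behind these figures is easy to verify. The short sketch below (Python, purely a back-of-the-envelope check using the article's own estimates) reproduces the numbers quoted above.

```python
import math

# Back-of-the-envelope checks of the figures quoted above (the estimates are the article's own).

# One line of code per synapse (~10**15 synapses) versus ~40 million lines for Windows:
synapses = 10 ** 15
print(synapses // 40_000_000)        # 25,000,000: "roughly 25 million times larger"

# About 125 lines of code per function point, about 2.5 observed errors per function point:
function_points = synapses // 125
print(int(function_points * 2.5))    # 20,000,000,000,000: "about 20 trillion errors"

# Possible test inputs for an 8x8 binary grid:
print(2 ** 64)                       # 18,446,744,073,709,551,616

# Possible test inputs for ~127 million binary photoreceptors: 2**127,000,000.
# The number is too large to print, so we just count its decimal digits.
digits = math.floor(127_000_000 * math.log10(2)) + 1
print(digits)                        # 38,230,810 digits: about 1 followed by 38,230,809 zeroes
```
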

2. Computationalism

Computationalism was originally defined as the "physical symbol system hypothesis," meaning that "A physical symbol system has the necessary and sufficient means for general intelligent action."38 (This is actually a "formal symbol system hypothesis," because the actual physical implementation of such a system is irrelevant.) Although that definition wasn't published until 1976, it co-existed with connectionism from the very beginning. It has also been referred to as "G.O.F.A.I." (good old-fashioned artificial intelligence). Computationalism is also referred to as the computational theory of mind.39 The assumption behind computationalism is that we can achieve A.I. without having to simulate the brain. The mind can be treated as a formal symbol system, and the symbols can be manipulated on a purely syntactic level — without regard to their meaning or their context. If the symbols have any meaning at all (which, presumably, they do — or else why bother manipulating them?), that can be ignored until we reach the end of the manipulation. The symbols are at a recognizable level, more-or-less like ordinary words — a so-called "language of thought."40 The basic move is to treat the informal symbols of natural language as formal symbols. Although, during the early years of computer programming (and A.I.), this was an innovative idea, it has now become a routine practice in computer programming — so ubiquitous that it's barely noticeable. Unfortunately, natural language — which may not literally be the language of thought, but which any human-level A.I. program has to be able to handle — can't be treated as a system of formal symbols. To give a simple example, "day" sometimes means "day and night" and sometimes means "day as opposed to night" — depending on context.
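
To see what treating words as formal symbols amounts to, consider a toy sketch (illustrative only; the lexicon, sentences, and function names are invented for this example and are not any real natural-language system) in which every token is mapped to a single fixed sense and manipulated with no reference to context:

```python
# A toy "formal symbol" treatment of words: every token gets exactly one fixed sense,
# and sentences are processed purely syntactically, with no reference to context.
LEXICON = {"day": "daylight hours (as opposed to night)"}

def interpret(sentence):
    return [LEXICON.get(word, word) for word in sentence.lower().split()]

print(interpret("the store is open all day"))              # intended sense: the whole 24 hours
print(interpret("we worked all day and slept at night"))   # intended sense: daylight only
# Both occurrences of "day" receive the identical sense; the substitution is blind to context.
```
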

Joseph Weizenbaum41 observes that a young man asking a young woman, “Will you come to dinner with me this evening?”42 could, depending on context, simply express the young man’s interest in dining, or his hope to satisfy a desperate longing for love. The context — the so-called “frame” — needed to make sense of even a single sentence may be a person’s entire life. An essential aspect of the computationalist approach to natural language is to determine the syntax of a sentence so that its semantics can be handled. As an example of why that is impossible, Terry Winograd43 offers a pair of sentences:

The committee denied the group a parade permit because they advocated violence.
The committee denied the group a parade permit because they feared violence.44

The sentences differ by only a single word (of exactly the same grammatical form). Disambiguating these sentences can't be done without extensive — potentially unlimited — knowledge of the real world.45 No program can do this without recourse to a "knowledge base" about committees, groups seeking marches, etc. In short, it is not possible to analyze a sentence of natural language syntactically until one resolves it semantically. But since one needs to parse the sentence syntactically before one can process it at all, it seems that one has to understand the sentence before one can understand the sentence. In natural language, the boundaries of the meaning of words are inherently indistinct, whereas the boundaries of formal symbols aren't. For example, in binary arithmetic, the difference between 0 and 1 is absolute. In natural language, the boundary between day and night is indistinct, and arbitrarily set for different purposes. To have a purely algorithmic system for natural language, we need a system that can manipulate words as if they were meaningless symbols while preserving the truth-value of the propositions, as we can with formal logic. When dealing with words — with natural language — we just can't use conventional logic, since one new "axiom" can affect the "axioms" we already have: birds can fly, but penguins and ostriches are birds that can't fly. Since the goal is to automate human-style reasoning, the next move is to try to develop a different kind of logic — so-called non-monotonic logic. What used to be called logic without qualification is now called "monotonic" logic. In this kind of logic, the addition of a new axiom doesn't change any axioms that have already been processed or inferences that have already been drawn. The attempt to formalize the way people reason is quite recent — and entirely motivated by A.I. And although the motivation can be traced back to the early years of A.I., the field essentially began with the publication of three papers in 1980.46 However, according to one survey of the field in 2003, despite a quarter-century of work, all that we have are prospects and hope.47 An assumption of computationalists is that the world consists of unambiguous facts that can be manipulated algorithmically. But what is a fact to you may not be a fact to me, and vice versa.48 Furthermore, the computationalist approach assumes that experts apply a set of explicit, formalizable rules. The task of computationalists, then, is simply to debrief the experts on their rules. But, as numerous studies of actual experts have shown,49 only beginners behave that way. At the highest level of expertise, people don't even recognize that they're making decisions. Rather, they are fluidly interacting with the changing situation, responding to patterns that they recognize. Thus, the computationalist approach leads to what should be called "beginner systems" rather than "expert systems." The way people actually reason can't be reduced to an algorithmic procedure like arithmetic or formal logic.
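
To make the non-monotonic point above concrete, here is a minimal sketch (illustrative Python only; the rule, the facts, and the names are invented for this example, and real non-monotonic logics are far more elaborate) in which a conclusion drawn by default is withdrawn when a new fact is added:

```python
# Toy default ("non-monotonic") reasoning: "birds fly" is a default rule, and the
# conclusion it licenses is withdrawn when a new fact (Tweety is a penguin) arrives.

def can_fly(facts):
    # Default rule: a bird flies unless it is known to be a flightless kind.
    flightless = {"penguin", "ostrich"}
    return "bird" in facts and not (facts & flightless)

facts = {"bird"}
print(can_fly(facts))        # True: inferred by default from "Tweety is a bird"

facts.add("penguin")         # one new "axiom" about Tweety is added
print(can_fly(facts))        # False: the earlier inference no longer holds
# In ordinary (monotonic) logic, adding an axiom can never invalidate a prior inference.
```
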

Even the most ardent practitioners of formal logic spend most of their time explaining and justifying the formal proofs scattered through their books and papers — using natural language (or their own unintelligible versions of it). Even more ironically, none of these practitioners of formal logic — all claiming to be perfectly rational — ever seem to agree with each other about any of their formal proofs. Computationalist A.I. is plagued by a host of other problems. First of all, its systems don't have any common sense.50 Then there's "the symbol-grounding problem."51 The analogy is trying to learn a language from a dictionary (without pictures) — every word (symbol) is simply defined using other words (symbols), so how does anything ever relate to the world? Then there's the "frame problem" — which is essentially the problem of which context to apply to a given situation.52 Some researchers consider it to be the fundamental problem in both computationalist and connectionist A.I.53 The most serious computationalist attempt to duplicate human-level intelligence — perhaps the only serious attempt — is known as CYC54 — short for enCYClopedia (but certainly meant also to echo "psych"). The head of the original project and the head of CYCORP, Douglas Lenat,55 has been making public claims about its imminent success for more than twenty years. The stated goal of CYC is to capture enough human knowledge — including common sense — to, at the very least, pass an unrestricted Turing Test.56 If any computationalist approach could succeed, it would be this mother of all expert systems. Lenat had made some remarkable predictions: at the end of ten years, he projected (that is, by 1994), the CYC knowledge base would contain 30–50% of consensus reality.57 (It is difficult to say what this prediction means, because it assumes that we know what the totality of consensus reality is and that we know how to quantify and measure it.) The year 1994 would represent another milestone in the project: CYC would, by that time, be able to build its knowledge base by reading online materials and to ask questions about it, rather than having people enter information.58 And by 2001, Lenat said, CYC would have become a system with human-level breadth and depth of knowledge.59 In 1990, CYC produced what it termed "A Midterm Report."60 Given that the effort started in 1984, calling it this implied that the project would be successfully completed by 1996, although in the section labeled "Conclusion" it refers to three possible outcomes that might occur by the end of the 1990s. One would hope that by that time CYC would at least be able to do simple arithmetic. In any case, the three scenarios are labeled "good" (totally failing to meet any of the milestones), "better" (which shifts the achievements to "the early twenty-first century" and that still consists of "doing research"), and "best" (in which the achievement still isn't "true A.I." but only the "foundation for … true A.I." in — 2015). Even as recently as 2002 (one year after CYC's predicted achievement of human-level breadth and depth of knowledge), CYC's website was still quoting Lenat making promises for the future: "This is the most exciting time we've ever seen with the project. We stand on the threshold of success."61 Perhaps most tellingly, Lenat's principal coworker, R.V. Guha,62 left the team in 1994, and was quoted in 1995 as saying, "CYC is generally viewed as a failed project. The basic idea of typing in a lot of knowledge is interesting but their knowledge representation technology seems poor."63

In the same article, Guha is further quoted as saying of CYC, as could be said of so many other A.I. projects, "We were killing ourselves trying to create a pale shadow of what had been promised." It's no wonder that G.O.F.A.I. has been declared "brain-dead."64

3. Robotics

The third and last major branch of the river of A.I. is robotics — the attempt to build a machine capable of autonomous intelligent behavior. Robots, at least, appear to address many of the problems
of connectionism and computationalism: embodiment,65 lack of goals,66 the symbol-grounding problem, and the fact that conventional computer programs are "bedridden."67 However, when it comes to robots, the disconnect between the popular imagination and reality is perhaps the most dramatic. The notion of a fully humanoid robot is ubiquitous not only in science fiction but in supposedly non-fictional books, journals, and magazines, often written by respected workers in the field. This branch of the river has two sub-branches, one of which (cybernetics) has gone nearly dry, the other of which (computerized robotics) has in turn forked into three sub-branches. Remarkably, although robotics would seem to be the most purely down-to-earth engineering approach to A.I., its practitioners spend as much time publishing papers and books as do the connectionists and the computationalists.

Cybernetic Robotics

While Turing was speculating about building his mechanical man, W. Grey Walter68 built what was probably the first autonomous vehicle, the robot "turtles" or "tortoises," Elsie and Elmer. Following a cybernetic approach rather than a computational one, Walter's turtles were controlled by a simple electronic circuit with a couple of vacuum tubes. Although the actions of this machine were trivial and exhibited nothing that even suggested intelligence, Walter has been described as a robotics "pioneer" whose work was "highly successful and inspiring."69 On the basis of experimentation with a device that, speaking generously, simulated an organism with two neurons, he published two articles in Scientific American70 (one per neuron!), as well as a book.71 Cybernetics was the research program founded by Norbert Wiener,72 and was essentially analog in its approach. In comparison with (digital) computer science, it is moribund if not quite dead. Like so many other approaches to artificial intelligence, the cybernetic approach simply failed to scale up.73

Computerized Robots

The history of computerized robotics closely parallels the history of A.I. in general:

• Grand theoretical visions, such as Turing's musings (already discussed) about how his mechanical creature would roam the countryside.
• Promising early results, such as Shakey, said to be "the first mobile robot to reason about its actions."74
• A half-century of stagnation and disappointment.75
• Unrepentant grand promises for the future.

What a roboticist like Hans Moravec predicts for robots is the stuff of science fiction, as is evident from the title of his book, Robot: Mere Machine to Transcendent Mind.76 For example, in 1997 Moravec asked the question, "When will computer hardware match the human brain?" and answered "in the 2020s."77 This belief that robots will soon transcend human intelligence is echoed by many others in A.I.78 In the field of computerized robots, there are three major approaches:

TOP-DOWN  The approach taken with Shakey and its successors, in which a computationalist computer program controls the robot’s activities.79 Under the covers, the programs take the same approach as good old-fashioned artificial intelligence, except that instead of printing out answers, they cause the robot to do something.

OUTSIDE-IN  Consists of creating robots that imitate the superficial behavior of people, such as responding to the presence of people nearby, tracking eye movement, and
so on. This is the approach largely taken recently by people working under Rodney A. Brooks.80

BOTTOM-UP  Consists of creating robots that have no central control, but relatively simple mechanisms to control parts of their behavior. The notion is that by putting together enough of these simple mechanisms (presumably in the right arrangement), intelligence will “emerge.” Brooks has written extensively in support of this approach.81
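
To give a flavor of what "no central control, just simple mechanisms" means in practice, here is a minimal sketch in the spirit of such bottom-up, subsumption-style control (illustrative only; the sensors, behaviors, and priority ordering are invented for this example and are not taken from Brooks's actual systems):

```python
# A toy, bottom-up controller: no central model or planner, just small independent
# behaviors, with higher-priority behaviors overriding ("subsuming") lower ones.

def avoid(sensors):
    # Highest priority: turn away if something is directly ahead.
    if sensors.get("obstacle_ahead"):
        return "turn_left"
    return None

def seek_light(sensors):
    # Middle priority: head toward the brighter side.
    if sensors.get("light_right", 0) > sensors.get("light_left", 0):
        return "turn_right"
    return None

def wander(sensors):
    # Lowest priority: default behavior when nothing else fires.
    return "go_forward"

BEHAVIORS = [avoid, seek_light, wander]   # ordered from highest to lowest priority

def act(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:           # the first behavior that fires wins
            return command

print(act({"obstacle_ahead": False, "light_left": 0.2, "light_right": 0.7}))  # turn_right
print(act({"obstacle_ahead": True}))                                          # turn_left
```
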

The claims of roboticists of all camps range from the unintelligible to the unsupportable. As an example of the unintelligible, consider MIT's Cog (short for "cognition"). The claim was that Cog displayed the intelligence (and behavior) of, initially, a six-month-old infant. The goal was for Cog to eventually display the intelligence of a two-year-old child.82 A basic concept of intelligence — to the extent that anyone can agree on what the word means — is that (all things being equal) it stays constant throughout life. What changes as a child or animal develops is only the behavior. So, to make this statement at all intelligible, it would have to be translated into something like this: the initial goal is only that Cog will display the behavior of a six-month-old child that people consider indicative of intelligence, and later the behavior of a two-year-old child. Even as corrected, this notion is also fallacious. Whatever behaviors a two-year-old child happens to display, as that child continues to grow and develop it will eventually display all the behavior of a normal adult, because the two-year-old has an entire human brain. However, even if we manage to create a robot that mimics all the behavior of a two-year-old child, there's no reason to believe that that same robot will, without any further programming, ten years later display the behavior of a 12-year-old child, or later, display the behavior of an adult. Cog never even displayed the intelligent behavior of a typical six-month-old baby.83 For it to behave like a two-year-old child, of course, it would have to use and understand natural language — thus far an insurmountable barrier for A.I. The unsupportable claim is sometimes made that some robots have achieved "insect-level intelligence," or at least that some robots duplicate the behavior of insects.84 Such claims seem plausible simply because very few people are entomologists, and most of us are unfamiliar with how complex and sophisticated insect behavior actually is.85 Other experts, however, are not sure that we've achieved even that level.86 According to the roboticists and their fans, Moore's Law will come to the rescue. The implication is that we have the programs and the data all ready to go, and all that's holding us back is a lack of computing power. After all, as soon as computers got powerful enough, they were able to beat the world's best human chess player, weren't they? (Well, no — a great deal of additional programming and chess knowledge was also needed.) Sad to say, even if we had unlimited computer power and storage, we wouldn't know what to do with it. The programs aren't ready to go, because there aren't any programs. Even if it were true that current robots or computers had attained insect-level intelligence, this wouldn't indicate that human-level artificial intelligence is attainable. The number of neurons in an insect brain is about 10,000 and in a human cerebrum about 30,000,000,000. But if you put together 3,000,000 cockroaches (this seems to be the A.I. idea behind "swarms"), you get a large cockroach colony, not human-level intelligence. If you somehow managed to graft together 3,000,000 natural or artificial cockroach brains, the results certainly wouldn't be anything like a human brain, and it is unlikely that it would be any more "intelligent" than the cockroach colony would be. Other species have brains as large as or larger than humans, and none of them display human-level intelligence — natural language, conceptualization, or the ability to reason
abstractly.87 The notion that human-level intelligence is an "emergent property" of brains (or other systems) of a certain size or complexity is nothing but hopeful speculation.

Conclusions

With admirable can-do spirit, technological optimism, and a belief in inevitability, psychologists, philosophers, programmers, and engineers are sure they shall succeed, just as people dreamed that heavier-than-air flight would one day be achieved.88 But 50 years after the Wright brothers succeeded with their proof-of-concept flight in 1903, aircraft had been used decisively in two world wars; the helicopter had been invented; several commercial airlines were routinely flying passengers all over the world; the jet airplane had been invented; and the speed of sound had been broken. After more than 50 years of pursuing human-level artificial intelligence, we have nothing but promises and failures. The quest has become a degenerating research program89 (or actually, an ever-increasing number of competing ones), pursuing an ever-increasing number of irrelevant activities as the original goal recedes ever further into the future — like the mirage it is.

References & Notes

1. Markoff, John, “A New Company to Focus on Artificial Intelligence,” New York Times, March 24, 2005, available at www.nytimes.com/2005/03/24/technology/24think.html

2. Raine, George, “Palm Founders to Start New Firm,” San Francisco Chronicle, March 24, 2005, available at www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2005/03/24/BUG07BTARA1.DTL&type=business

3. www.numenta.com
4. Hawkins is also the author (with Sandra Blakeslee) of On Intelligence, 2004, as well as the director of the Redwood Neuroscience Institute (www.rni.org), a research company affiliated with the Helen Wills Neuroscience Institute (http://neuroscienceberkeley.edu) at the University of California at Berkeley. See www.rni.org/HelenWillisNeuroscienceInstitute.html

5. For example: The Organization for Computational Neurosciences (www.cnsorg.org); The Swartz Center for Computational Neuroscience at the University of California at San Diego (www.sccn.ucsd.edu); The Computational Neuroscience Program at the University of Minnesota (www.compneuro.umn.edu); and The Laboratory for Computational Neuroscience at the Department of Neurosurgery of Presbyterian University Hospital (www.neuronet.pitt.edu/groups/lcn).

6. The Journal of Computational Neuroscience, published by Springer, and another with the identical title from Kluwer Academic Publishers.

7. These include pages from the project sponsored by the federal government's National Institutes of Mental Health that started in 1993 (www.nimh.nih.gov/neuro.informatics), as well as human brain projects (many of them funded by the NIMH itself) at Washington University (sig.biostr.washington.edu/projects/brain), the California Institute of Technology (www.gg.caltech.edu/hbp), University of Southern California (www-hbp.usc.edu), University of California at Davis and San Diego (neuroscience.ucdavis.edu/hbp), Cornell University (neocortex.med.cornell.edu), Stanford University (spnl.stanford.edu/tools/human_brain_proj.htm), and elsewhere across the country and the globe.

8. See, for example: Adams, Bruce and Stephen Ottley, "Brain Simulation," 2000, available at www.cybernetics.demon.co.uk/brainsim.html and "The Virtual Brain Machine Project," 2001, available at www.cybernetics.demon.co.uk/VBM.html; Sloman, Aaron (moderator), "Grand Challenge 5: The Architecture of Brain and Mind," 2004 [based on a draft by Mike Denham, 2003], available at www.nesc.ac.uk/esi/events/Grand_Challenges/proposals/ArchitectureOfBrainAndMind.pdf; and Borisyuk, Roman, "The Grand Challenge for the 21st Century: A Theory of the Brain," ND (2002?), available at www.nesc.ac.uk/esi/events/Grand_Challenges/paneld/d2.pdf.

9. Hawkins, Jeff, with Sandra Blakeslee, On Intelligence, 2004.
10. The mathematician generally considered to be the founder of computer science.
11. Turing, Alan, "Intelligent Machinery," 1948, available at www.turingarchive.org/browse.php/C/11. This is one of at least five versions of what was originally a talk given in 1948 and eventually published. This and several other versions are available online at the Turing Archive, www.turingarchive.org.

12. As John R. Searle (professor of the philosophy of the mind and language at the University of California at Berkeley) writes in The Mystery of Consciousness, 1997: “The dirty secret of contemporary neuroscience is … [that] [s]o far we do not have a unifying theoretical principle of neuroscience. In the way that we have an atomic theory of matter, a germ theory of disease, a genetic theory of inheritance, a tectonic plate theory of geology, a natural selection theory of evolution, a blood- pumping theory of the heart, and even a contraction theory of the muscles, we do not in that sense have a theory of how the brain works. We know a lot of facts about what actually goes on in the brain, but we do not yet have a unifying theoretical account of how what goes on at the level of the neurobiology enables the brain to do what it does by way of causing, structuring, and organizing our mental life.”

13. Horgan, John, The Undiscovered Mind: How the Human Brain Defies Replication, Medication, and Explanation, 1999. The term "anti-progress" is Horgan's.

14. As R. Douglas Fields reports in "The Other Half of the Brain," Scientific American, April, 2004, "Mounting evidence suggests that glial cells, overlooked for half a century, may be nearly as critical to thinking and learning as neurons are." Claudia Krebs, Kerstin Hüttmann and Christian Steinhäuser, "The Forgotten Brain Emerges," Scientific American [Mind] Special, Vol. 14, No. 5, 2004, write: "After disregarding them for decades, neuroscientists now say glial cells may be nearly as important to thinking as neurons are."

15. As James P. Hogan wrote in Mind Matters: Exploring the World of Artificial Intelligence (1997), "[W]e haven't really come a long way. … [T]he early A.I. vision of reproducing all-around humanlike reasoning and perception remains as elusive as ever." In "When Machines Outsmart Humans," 2000, Futures, Vol. 35:7, available at www.nickbostrom.com/2050/outsmart.html, Nick Bostrom, Faculty of Philosophy, Oxford University, England, wrote: "The annals of artificial intelligence are littered with broken promises. Half a century after the first electric computer, we still have nothing that even resembles an intelligent machine, if by 'intelligent' we mean possessing the kind of general-purpose smartness that we humans pride ourselves on." In "The Complexity Ceiling," in The Next Fifty Years (edited by John Brockman, 2002), Jaron Lanier (an eminent computer scientist known for coining the phrase "virtual reality" and
for founding VPL Research, probably the first virtual reality company, later acquired by Sun Microsystems — see www.sun.com/smi/Press/sunflash/1998-02/sunflash.980223.1.html) wrote, “The first fifty years of general computation, which roughly spanned the second half of the twentieth century, were characterized by extravagant swings between giddy overstatement and embarrassing near-paralysis. The practice of overstatement continues … Accompanying the parade of quixotic overstatements of theoretical computer power has been a humiliating and unending sequence of disappointments in the performance of real information systems.”

16. Hawkins, Jeff, with Sandra Blakeslee, On Intelligence, 2004.
17. A mathematician at Bell Telephone Laboratories and the inventor of information theory.
18. A mathematician and founder of the science of cybernetics.
19. A mathematician credited, among other things, with the architecture of serial computers.
20. McCarthy, J., M. L. Minsky, N. Rochester, and C. E. Shannon, "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence," 1955, available at www.formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

21. Associationism is paradoxically both attributed to and rejected by John Locke. Others trace elements of this idea back as far as Aristotle and Plato. See Young, Robert M., “Association of Ideas,” Dictionary of the History of Ideas, available at etext.lib.virginia.edu/cgi-local/DHI/dhiana.cgi?id=dv1-19

22. See: Eliasmith, Chris, "Connectionism," 2004, Dictionary of Philosophy of Mind, available at artsci.wustl.edu/~philos/MindDict/connectionism.html; Garson, James, "Connectionism" [1997–2002], The Stanford Encyclopedia of Philosophy, 2002, Edward N. Zalta (ed.), available at plato.stanford.edu/archives/win2002/entries/connectionism; "Connectionism," 2003, FOLDOP 3.0 Free On-Line Dictionary of Philosophy, Gian Paolo Terravecchia, Chief Editor and Project Coordinator, available at www.swif.it/foldop/dizionario.php?ind=fconnectionism; Aizawa, Ken, "History of Connectionism," 2004, available at www.artsci.wustl.edu/~philos/MindDict/connectionismhistory.html; "History of Connectionism," 2003, FOLDOP 3.0 Free On-Line Dictionary of Philosophy, Gian Paolo Terravecchia, Chief Editor and Project Coordinator, available at www.swif.it/foldop/dizionario.php?query=connectionism+history+of; Pollack, Jordan B., "Connectionism: Past, Present, and Future," 1989, available at demo.cs.brandeis.edu/papers/nnhistory.pdf; Berkeley, Istvan S. N., "A Revisionist History of Connectionism," 1997, available at www.ucs.louisiana.edu/~isb9112/dept/phil341/histconn.html.

23. Generally credited as having been invented by Warren McCulloch (a psychiatrist at the University of Chicago) and Walter Pitts (a mathematician and logician at the same university). See McCulloch, Warren and Walter Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, Vol. 5, 1943. For a detailed analysis of this difficult-to-find paper, see Piccinini, Gualtiero, "The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts's 'Logical Calculus of Ideas Immanent in Nervous Activity'," ND, available at www.artsci.wustl.edu/~gpiccini/First%20Computational%20Theory.pdf. However, their ideas were similar in many respects to the work of W. Ross Ashby, a British psychiatrist (see Ashby, W. Ross, "Principles of the Self-Organizing Dynamic System," Journal of General Psychology, Vol. 37, 1947, and Ashby, W. Ross,
Design for a Brain: The Origin of Adaptive Behaviour, 1952 [First Edition]); Donald Hebb, a psychologist at McGill University in Canada (see Hebb, Donald, The Organization of Behavior: A Neuropsychological Theory, 1949); as well, of course, as Norbert Wiener. Later seminal work on artificial neural networks includes the Perceptron, credited to Frank Rosenblatt, a professor of Neurobiology & Behavior at Cornell, working at the Cornell Aeronautical Laboratory in the period 1957–1960. See Rosenblatt, Frank, "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," Cornell Aeronautical Laboratory, Psychological Review, Vol. 65, No. 6, 1958, and Rosenblatt, Frank, Principles of Neurodynamics; Perceptrons and the Theory of Brain Mechanisms, 1962. See also Minsky, Marvin, and Papert, Seymour, Perceptrons: An Introduction to Computational Geometry, 1969 [First Edition], 1987 [Expanded Edition].

24. These statistics are from Gerald M. Edelman and Giulio Tononi, A Universe of Consciousness: How Matter Becomes Imagination, 2000, but they’re consistent with estimates from a variety of other sources.

25. See www.ninds.nih.gov/news_and_events/proceedings/computationalwkshp_technical.htm.

26. Kristen M. Harris, a professor in the department of neurology at the Medical College of Georgia and the principal investigator at the Laboratory of the Synapse Structure and Function at the Medical College of Georgia (synapses.mcg.edu/lab/lab.htm) and the creator of the Synapse Web (synapses.mcg.edu), suggests that the numbers are in the megabytes (personal communication). Mark Ellisman, a professor of both neuroscience and bioengineering and the Director of the National Center for Microscopy and Imaging Research at the Center for Research in Biological Systems of the University of California, San Diego (www-ncmir.ucsd.edu), suggests that there are probably between five and twenty thousand macromolecules involved in the structure, function, and dynamics of each synapse (personal communication).

27. Such as www.wormbase.org, www.wormatlas.org, and www.biologie.ens.fr/bcsgnce.
28. See Bessereau, Jean-Louis (group leader), "Genetics and Neurobiology of C. elegans" website, www.biologie.ens.fr/bcsgnce.
29. For an example of an artificial neural network with 64 input neurons (and apparently only one output neuron), see "Who Wrote The Book of Life?: Picking Up Where D'Arcy Thompson Left Off," 1999, on the NASA website, science.nasa.gov/newhome/headlines/ast28may99_1.htm. This artificial neural network, it is said, will require the resources of a supercomputer to process its 10,000 training cases.

30. James Martin, an eminent computer scientist known as "the Guru of the Information Age" and credited with writing over one hundred textbooks, predicted this in PC Week, November 21, 1988.

31. As James A. Whittaker, an associate professor and director of the Center for Software Engineering Research at the computer sciences department at the Florida Institute of Technology, wrote in How to Break Software: A Practical Guide to Testing, 2003: "Building nontrivial software is an enormously difficult endeavor and usually results in software that fails once it gets fielded." As Jaron Lanier wrote in "The Complexity Ceiling," in John Brockman (ed.), The Next Fifty Years, 2002: "… [T]he complexity of software is currently limited by the ability of human engineers to explicitly analyze and
manage it, we can be said to have already reached the complexity ceiling of software as we know it. If we don’t find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast, plentiful, or exotic our processors become.”

32. As industry expert Capers Jones, Chief Scientist Emeritus at a company called Software Productivity Research (www.spr.com) put it in “Conflict and Litigation Between Clients and Developers,” 2004, unpublished manuscript: “[T]he development of large applications in excess of 10,000 function points [at about 125 lines of code per function point, that’s about 1,250,000 lines of code] is one of the most hazardous and risky undertakings of the modern world.” According to statistics reported by Jones, the probability of a project of 100,000 function points [that is, about 12,500,000 lines of code] failing is about two out of three. As Jaron Lanier wrote in “One-Half of a Manifesto: Why Stupid Software will save the future from neo-Darwinian machines,” Wired, December 2000, available at www.wired.com/wired/archive/8.12/lanier.html, “Getting computers to perform specific tasks of significant complexity in a reliable … way … is essentially impossible.”

33. Moore, Gordon E., “Cramming more components onto integrated circuits,” Electronics, April 1965, available at download.intel.com/research/silicon/moorespaper.pdf. See also: Brampton, Martin, “Devil’s Advocate: More to Moore’s Law,” 2004, www.silicon.com/comment/0,39024711,39117869,00.htm; Tuomi, Ilkka, “The Lives and Death of Moore’s Law,” 2002, First Monday, Vol. 7, No. 11, 2002 available at firstmonday.org/issues/issue7_11/tuomi/index.html.

34. An eminent computer scientist, now retired, formerly Assistant Professor of Computer Science at Stanford University and at the University of Zurich.

35. Jones, Capers, Applied Software Measurement, 1996, and Software Quality: Analysis and Guidelines for Success, 1997. A function point is equivalent to about 125 lines of code.

36. Hoffman, Donald D., Visual Intelligence, 1998.
37. See www.cs.wpi.edu/~kfisler/Courses/Rice/210/Labs/lab09/univSize.html.
38. Newell, A. and Herbert A. Simon [both researchers at the RAND Corporation, perhaps the world's first "think tank"], "Computer science as empirical enquiry: symbols and search," Communications of the ACM, Vol. 19, No. 3, 1976.

39. Just as connectionism has its antecedents in the philosophy of mind known as associationism, computationalism has been said to be the modern equivalent of rationalist psychology, which has been traced back to Kant and said to have its roots as early as Aristotle. See Fodor, Jerry, The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology, 2000; Schwarz, Georg, “What is Computationalism?,” 1990, available at www.aec.at/en/archiv_files/19902/E1990b_107.pdf; Horst, Steven, “The Computational Theory of Mind,” 2003, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), available at plato.stanford.edu/archives/fall2003/entries/computational-mind.

40. As Jerry Fodor (a professor of philosophy at Rutgers University) explains the idea in The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology, 2000, "Mental processes (including … thinking) are computations, that is, they are operations defined on the syntax of mental representations, and they are reliably truth preserving in indefinitely many cases." See also Aydede, Murat, "The Language of
Thought Hypothesis,” 2004, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), available at plato.stanford.edu/archives/fall2004/entries/language-thought.

41. A prominent computer scientist and author, currently Emeritus Professor of Computer Science at the Laboratory for Computer Science at MIT. Weizenbaum is the author of the notorious computer program ELIZA.

42. Weizenbaum, Joseph, Computer Power and Human Reason, 1976.

43. A professor of computer science at Stanford University.
44. Winograd, Terry, Understanding Natural Language, 1972.
45. As Daniel C. Dennett (a professor of philosophy and director of the Center for Cognitive Studies at Tufts University) points out in Brainchildren: Essays on Designing Minds, 1998, one could imagine a world in which the committee advocated violence and the group seeking the permit feared it.

46. Thomason, Richmond, “Logic and Artificial Intelligence,” The Stanford Encyclopedia of Philosophy (2003), Edward N. Zalta (ed.), plato.stanford.edu/archives/fall2003/entries/logic-ai

47. As Richmond Thomason wrote in “Logic and Artificial Intelligence,” The Stanford Encyclopedia of Philosophy, 2003, Edward N. Zalta (ed.), plato.stanford.edu/archives/fall2003/entries/logic-ai: “[T]here is reason to hope that the combination of logical methods with planning applications in A.I. can enable the development of a far more comprehensive and adequate theory of practical reasoning than has heretofore been possible. As with many problems having to do with common sense reasoning, the scale and complexity of the formalizations that are required are beyond the traditional techniques of philosophical logic.”

48. As Alexander Riegler (postdoctoral fellow at the Center Leo Apostel at the Vrije Universiteit Brussel) puts it in “When is a Cognitive System Embodied?” 2002, Cognitive Systems Research 3, available at www.univie.ac.at/constructivism/people/riegler/papers/riegler02embodiment.pdf: “The wrong assumption [of computationalist approaches] is that the world is a collection of facts that could be arbitrarily combined with each other. Even if we managed the combinatorial complexity, a question would remain: what is a fact?”

49. See, for example: Klein, Gary, Sources of Power: How People Make Decisions, 1999; and Dreyfus, Hubert L. and Stuart E. Dreyfus, Mind over Machine, 1986.

50. As Jerry Fodor wrote in The Mind Doesn’t Work That Way, 2000: “[T]he failure of artificial intelligence to produce successful simulations of routine commonsense cognitive competences is notorious, not to say scandalous.” For one example of an attempted solution, see McCarthy, John, “Artificial Intelligence, Logic and Formalizing Common Sense,” 1990, available at www-formal.stanford.edu/jmc/ailogic/ailogic.html. See also the “Open Mind Common Sense” page at commonsense.media.mit.edu/cgi-bin/search.cgi and the “Commonsense Computing @ Media” page at csc.media.mit.edu.

51. As Steven Harnad (at the School of Electronics and Computer Science at the University of Southampton, England) puts it in "The Symbol Grounding Problem," 1990, Physica D 42, 1990, available at www.ecs.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html, "How can the
meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?"

52. See Shanahan, Murray, “The Frame Problem,” 2004, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), available at plato.stanford.edu/archives/spr2004/entries/frame-problem.

53. See, for example, Fodor, Jerry, The Mind Doesn't Work That Way, 2000.
54. The story of CYC has been told many times in many books and on many websites, including on CYC's own website (www.cyc.com). There is also an open-source version, known as OPENCYC (www.opencyc.org). Microelectronics and Computer Technology Corporation was (and apparently still is) part of the University of Texas, Austin. (In 1995, the effort was spun off into the separate corporation now known as CYCORP.)

55. A computer science pioneer and a professor at Stanford University and elsewhere.
56. As Lanier put it, the Turing Test "is the creation myth of artificial intelligence." For a thorough introduction, see Oppy, Graham, and David Dowe, "The Turing Test," 2003, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), available at plato.stanford.edu/archives/sum2003/entries/turing-test.

57. Quoted in Copeland, Jack, Artificial Intelligence, 1993.
58. Copeland, Jack, Artificial Intelligence, 1993. A small portion of Copeland's work on CYC is available at ejap.louisiana.edu/EJAP/1997.spring/copeland976.2.html.
59. Copeland, Jack, Artificial Intelligence, 1993.
60. www.cyc.com/doc/articles/midterm_report_1990.pdf
61. See www.opencyc.org/cyc/company/news/APArticle060902.
62. A Ph.D. in computer science (Stanford University) and the coauthor of several papers about CYC, including the previously mentioned midterm report, as well as a book about the effort (Lenat, D. B. and R.V. Guha, Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project, 1990).

63. Guha, R.V., quoted in D. Stipp, “2001 is just around the corner. Where’s Hal?” Fortune, 1995, cited by Deniz Yuret, available at home.ku.edu.tr/~dyuret/pub/cyc96/node1.html

64. Baard, Mark, “A.I. Founder Blasts Modern Research,” Wired News, 2003, available at www.wired.com/news/technology/0,1282,58714,00.html

65. See Lakoff, G., and Mark Johnson, Philosophy In the Flesh: The Embodied Mind And Its Challenge To Western Thought, 1999. See also Cowart, Monica, “Embodied Cognition,” ND, The Internet Encyclopedia of Philosophy, available at www.iep.utm.edu/e/embodcog.htm.

66. As Pinker wrote in How the Mind Works, 1997, “Without goals, the very concept of intelligence is meaningless.”

67. Dennett, Daniel, Brainchildren, 1998.
68. A neurophysiologist working at the Burden Neurological Institute in Bristol, England.
69. www.stcroixstudios.com/wilder/fastkarl/drgreywalter.html
70. Walter, W. Grey, "An Imitation of Life," Scientific American, May 1950, and "A Machine that Learns," Scientific American, August 1951.
71. Walter, W. Grey, The Living Brain, 1963.
72. Wiener, Norbert, Cybernetics, 1948, 1961.
73. As Hans Moravec (an adjunct research professor at the Robotics Institute of Carnegie Mellon University) wrote in Moravec, Hans, Robot: Mere Machine to Transcendent Mind, 1998: "Cybernetics attempted to copy nervous system function by imitating its
physical structure. The approach stumbled in the 1960s on the difficulty of constructing all but the simplest artificial nervous systems…”

74. This characterization is ubiquitous. See for example www.sri.com/about/timeline/shakey.html. Shakey is widely discussed in the popular histories of A.I. and robots. For the official history, see www.ai.sri.com/shakey.

75. As Moravec wrote in Robot: Mere Machine to Transcendent Mind, 1998, the results of all this work “…were like a cold shower … [T]he best robot-control programs, besides being … difficult to write, took hours to find and pick up a few blocks on a table-top, and often failed completely, performing much worse than a six-month-old child.”

76. Moravec, Hans, Robot: Mere Machine to Transcendent Mind, 1998.

77. Moravec, Hans, “When Will Computer Hardware Match the Human Brain?,” 1997, Journal of Evolution and Technology, 1998, Vol. 1, available at www.transhumanist.com/volume1/mora.

88. The Defense Advanced Research Projects Agency of the Department of Defense. See www.darpa.mil.

78. Again, this is evident from mere book titles, such as Kurzweil, Ray, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, 1999.

79. See Dean, Thomas, “Robot Architectures,” 2002, available at www.cs.brown.edu/people/tld/courses/cs148/02/architectures.html.

80. A professor of computer science at MIT and director of the MIT Computer Science and Artificial Intelligence Laboratory (www.ai.mit.edu/projects/humanoid-robotics-group).

81. For example, Brooks, Rodney A., “Intelligence without Representation,” 1987, Artificial Intelligence 47, available at people.csail.mit.edu/u/b/brooks/public_html/papers/representation.pdf.

82. See for example, www.eecs.mit.edu/100th/images/Brooks.

83. For an indication of what those behaviors are, see www.childdevelopmentinfo.com/development/normaldevelopment.shtml and www.envisagedesign.com/ohbaby/develop.html.

84. See, for example, www.uh.edu/engines/epi434.htm. Brooks had expressed this as a two-year goal in Brooks, Rodney A., “Intelligence without Representation,” 1987, Artificial Intelligence 47, 1991, available at people.csail.mit.edu/u/b/brooks/public_html/papers/representation.pdf. Quoting Hans Moravec in Robot again: “Today’s best commercial robots are controlled by computers just powerful enough to produce insect-grade behavior.”

85. For a readable introduction to one type of insect, see Gordon, David George, The Compleat Cockroach: A Comprehensive Guide to the Most Despised (And Least Understood) Creature on Earth, 1996.

86. Nils Nilsson (principal researcher on Shakey, eminent and pioneering computer scientist, and now professor of engineering, emeritus, at the Artificial Intelligence Laboratory of the Department of Computer Science at Stanford University), personal communication.

87. The issue of how to compare the brains of animals of different species is extremely complicated. See Walker, S.F., Animal Thought, 1983, of which Chapter 4, “The Phylogenetic Scale, Brain Size and Brain Cells,” is available online at www.psyc.bbk.ac.uk/people/academic/walker_s/pubs/an-thought/at-ch4.html. For indications that even our closest living relative, the chimpanzee, is incapable of reasoning abstractly and forming true concepts, see Povinelli, Daniel J., Folk Physics for Apes: The Chimpanzee’s Theory of How the World Works, 2000.

88. The airplane is every A.I. advocate’s favorite analogy. As Edward A. Feigenbaum and Julian Feldman wrote in “What Are the Limits of Artificial Intelligence Research?” in Edward A. Feigenbaum and Julian Feldman (eds.), Computers and Thought, 1963: “Today, despite our ignorance, we can point to that biological milestone, the thinking brain, in the same spirit as the scientists many hundreds of years ago pointed to the bird as a demonstration in nature that mechanisms heavier than air could fly.”

89. The concept is that of Imre Lakatos, a philosopher of science at the London School of Economics until his death in 1974. See Lakatos, Imre, “Falsification and the Methodology of Scientific Research Programmes,” in Lakatos, Imre and Alan Musgrave (eds.), Criticism and the Growth of Knowledge: Vol. 4: Proceedings of the International Colloquium in the Philosophy of Science, 1965.

Transformers - The Movie & Evolution of Machine Intelligence
June 18, 2007

For many, 2007 was the year that big movies stopped happening. Lord of the Rings, Star Wars, Star Trek and the like had stopped or finished their runs. There was supposed to be a lull. Thankfully, director Michael Bay decided not to listen to what the 'populace' was saying, and has created what is looking to be one of the greatest geek movies of all time. To be released this June, Transformers has been gaining momentum week after week, as new trailers and images are released over the internet, and fan hype is continually stoked. It stars Shia LaBeouf as Sam Witwicky, who discovers the map to the Allspark: the source of life for which the heroic Autobots and evil Decepticons wage their war. So, with the intensity at its height, and the movie just around the corner, we thought that we would take a deeper look at the movie. To be specific, the science behind the fiction.

Unlike Earth, the home of a variety of organic-based lifeforms, the planet of Cybertron is the homeworld of a race of robots with the ability to transform into other mechanisms, each Transformer having its own unique disguise. Transformers are divided into two separate camps: the good and just Autobots, who are led by Optimus Prime (whose disguise is a red 18-wheel semi truck); and the evil Decepticons, who are led by Megatron (who transforms into a gun; there's a good deal of size-shifting involved with Megatron as well). With fuel supplies (called Energon Cubes) on Cybertron running low, both forces travel through space looking for a new source, which leads them to Earth, which from their perspective is rich in the minerals and chemicals they need. Disguising themselves as cars, airplanes and boats, familiar and commonplace to humans, the Transformers engage in a secret war for control of Earth's bountiful natural resources.

Sounds farfetched? It is... but according to no less an authority than Steven Dick, chief historian for NASA, "if, as is often assumed, intelligent life is millions or billions of years old, cultural evolution may have resulted in a 'postbiological universe,' in which flesh and blood intelligence has been superseded by artificial intelligence. Carnegie Mellon AI pioneer Hans Moravec has famously postulated a postbiological Earth in the next few generations. Given the time scales of the universe, it seems much more likely to have already happened in outer space."

All of these outcomes, Dick continues, "have implications for human destiny. It may be our destiny to populate the universe, or to interact with its flesh-and-blood intelligence in many forms. Or, in the postbiological universe, we may have to interact with machine intelligence."


But Transformers has taken these scientific possibilities and presented us with a future that many science fiction authors have also jumped upon. Possibly the most famous author to posit a future with robots is Isaac Asimov, one of whose most popular works was the series of short stories known as the Robot series.

As simply as possible, artificial intelligence is, at the moment, based around what are termed 'automated inference engines'. These AIs are divided into two types: classifiers and controllers. Classifiers are further divided into two separate categories: Conventional AI and Computational Intelligence. Without devolving into a lesson in either, Conventional AI is essentially classified as 'machine learning', and is, at its heart, the ability to sift through a sizable amount of data and apply reasoning capabilities to reach a conclusion. Computational Intelligence is based around repetitive development, or 'learning': the ability to fine-tune one's parameters to reach a conclusion, just as a child would repeatedly test what he would be allowed to get away with, and base his actions accordingly.

The fact of the matter is that, as of today, Transformers is a long way in our future. We're lucky to have a robot that is able to distinguish a necklace from a fern frond. The artificial intelligence within the universe of the Transformers has moved well away from simple science fiction and evolved into Philosophical Artificial Intelligence. It is not just the ability to determine an outcome from evidence provided, but the belief that within technology a soul can not only exist, but form. Throughout the comics and cartoon series, issues such as death and birth are often discussed. So while there exists today a level of artificial intelligence, Transformers is still, thankfully, well within the realm of scientific and philosophical fantasy and fiction, at least on Earth.

Posted by Josh Hill with Casey Kazan. Posted at 12:04 AM in Extraterrestrial Life
http://www.dailygalaxy.com/my_weblog/2007/06/transformers_th.html#more
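The distinction the article draws between conventional AI and computational intelligence (learning by repeatedly adjusting one's own parameters from feedback) can be made concrete with a toy sketch. The Python below is purely illustrative and is not taken from the article or from any system it mentions: a single artificial neuron nudges its weights after each mistake until it reproduces a simple logical rule.

```python
# A toy illustration (not from the article) of "computational intelligence" as
# iterative parameter tuning: a single artificial neuron (perceptron) adjusts
# its weights from repeated feedback until it reproduces the AND function.

def train_perceptron(samples, epochs=20, learning_rate=0.1):
    """Return (weights, bias) tuned by simple error-correction updates."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Prediction: fire (1) if the weighted sum crosses the threshold.
            output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
            error = target - output
            # "Fine-tune one's parameters": nudge weights toward the target.
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error
    return weights, bias

if __name__ == "__main__":
    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(and_gate)
    print("learned weights:", w, "bias:", b)
    for (x1, x2), _ in and_gate:
        print((x1, x2), "->", 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)
```

The loop only shows the "adjust and retry" cycle the article describes; real systems tune millions of parameters by the same general principle.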

The Coming Superbrain
By JOHN MARKOFF - Published: May 23, 2009
http://www.nytimes.com/2009/05/24/weekinreview/24markoff.html?_r=1

Mountain View, Calif. — It’s summertime and the Terminator is back. A sci-fi movie thrill ride, “Terminator Salvation” comes complete with a malevolent artificial intelligence dubbed Skynet, a military R.&D. project that gained self-awareness and concluded that humans were an irritant — perhaps a bit like athlete’s foot — to be dispatched forthwith.

Photo caption: FRIEND OR FOE? In “Terminator Salvation,” computers surpass and plan to eliminate humans.

The notion that a self-aware computing system would emerge spontaneously from the interconnections of billions of computers and computer networks goes back in science fiction at least as far as Arthur C. Clarke’s “Dial F for Frankenstein.” A prescient short story that appeared in 1961, it foretold an ever-more-interconnected telephone network that spontaneously acts like a newborn baby and leads to global chaos as it takes over financial, transportation and military systems.


Today, artificial intelligence, once the preserve of science fiction writers and eccentric computer prodigies, is back in fashion and getting serious attention from NASA and from Silicon Valley companies like Google as well as a new round of start-ups that are designing everything from next-generation search engines to machines that listen or that are capable of walking around in the world. A.I.’s new respectability is turning the spotlight back on the question of where the technology might be heading and, more ominously, perhaps, whether computer intelligence will surpass our own, and how quickly.

The concept of ultrasmart computers — machines with “greater than human intelligence” — was dubbed “The Singularity” in a 1993 paper by the computer scientist and science fiction writer Vernor Vinge. He argued that the acceleration of technological progress had led to “the edge of change comparable to the rise of human life on Earth.” This thesis has long struck a chord here in Silicon Valley.

Artificial intelligence is already used to automate and replace some human functions with computer-driven machines. These machines can see and hear, respond to questions, learn, draw inferences and solve problems. But for the Singularitarians, A.I. refers to machines that will be both self-aware and superhuman in their intelligence, and capable of designing better computers and robots faster than humans can today. Such a shift, they say, would lead to a vast acceleration in technological improvements of all kinds.

The idea is not just the province of science fiction authors; a generation of computer hackers, engineers and programmers have come to believe deeply in the idea of exponential technological change as explained by Gordon Moore, a co-founder of the chip maker Intel. In 1965, Dr. Moore first described the repeated doubling of the number of transistors on silicon chips with each new technology generation, which led to an acceleration in the power of computing. Since then “Moore’s Law” — which is not a law of physics, but rather a description of the rate of industrial change — has come to personify an industry that lives on Internet time, where the Next Big Thing is always just around the corner.

Several years ago the artificial-intelligence pioneer Raymond Kurzweil took the idea one step further in his 2005 book, “The Singularity Is Near: When Humans Transcend Biology.” He sought to expand Moore’s Law to encompass more than just processing power and to simultaneously predict with great precision the arrival of post-human evolution, which he said would occur in 2045.

In Dr. Kurzweil’s telling, rapidly increasing computing power in concert with cyborg humans would then reach a point when machine intelligence not only surpassed human intelligence but took over the process of technological invention, with unpredictable consequences.

Profiled in the documentary “Transcendent Man,” which had its premiere last month at the TriBeCa Film Festival, and with his own Singularity movie due later this year, Dr. Kurzweil has become a one-man marketing machine for the concept of post-humanism. He is the co-founder of Singularity University, a school supported by Google that will open in June with a grand goal — to “assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity’s grand challenges.”

Not content with the development of superhuman machines, Dr. Kurzweil envisions “uploading,” or the idea that the contents of our brain and thought processes can somehow be translated into a computing environment, making a form of immortality possible — within his lifetime.


That has led to no shortage of raised eyebrows among hard-nosed technologists in the engineering culture here, some of whom describe the Kurzweilian romance with supermachines as a new form of religion.

The science fiction author Ken MacLeod described the idea of the singularity as “the Rapture of the nerds.” Kevin Kelly, an editor at Wired magazine, notes, “People who predict a very utopian future always predict that it is going to happen before they die.”

However, Mr. Kelly himself has not refrained from speculating on where communications and computing technology is heading. He is at work on his own book, “The Technium,” forecasting the emergence of a global brain — the idea that the planet’s interconnected computers might someday act in a coordinated fashion and perhaps exhibit intelligence. He just isn’t certain about how soon an intelligent global brain will arrive.

Others who have observed the increasing power of computing technology are even less sanguine about the future outcome. The computer designer and venture capitalist William Joy, for example, wrote a pessimistic essay in Wired in 2000 that argued that humans are more likely to destroy themselves with their technology than create a utopia assisted by superintelligent machines.

Mr. Joy, a co-founder of Sun Microsystems, still believes that. “I wasn’t saying we would be supplanted by something,” he said. “I think a catastrophe is more likely.”

Moreover, there is a hot debate here over whether such machines might be the “machines of loving grace,” of the Richard Brautigan poem, or something far darker, of the “Terminator” ilk.

“I see the debate over whether we should build these artificial intellects as becoming the dominant political question of the century,” said Hugo de Garis, an Australian artificial-intelligence researcher, who has written a book, “The Artilect War,” that argues that the debate is likely to end in global war.

Concerned about the same potential outcome, the A.I. researcher Eliezer S. Yudkowsky, an employee of the Singularity Institute, has proposed the idea of “friendly artificial intelligence,” an engineering discipline that would seek to ensure that future machines would remain our servants or equals rather than our masters.

Nevertheless, this generation of humans, at least, is perhaps unlikely to need to rush to the barricades. The artificial-intelligence industry has advanced in fits and starts over the past half-century, since the term “artificial intelligence” was coined by the Stanford University computer scientist John McCarthy in 1956. In 1964, when Mr. McCarthy established the Stanford Artificial Intelligence Laboratory, the researchers informed their Pentagon backers that the construction of an artificially intelligent machine would take about a decade. Two decades later, in 1984, that original optimism hit a rough patch, leading to the collapse of a crop of A.I. start-up companies in Silicon Valley, a time known as “the A.I. winter.”

Such reversals have led the veteran Silicon Valley technology forecaster Paul Saffo to proclaim: “never mistake a clear view for a short distance.”

Indeed, despite this high-technology heartland’s deeply held consensus about exponential progress, the worst fate of all for the Valley’s digerati would be to be the generation before the generation that lives to see the singularity.

“Kurzweil will probably die, along with the rest of us not too long before the ‘great dawn,’ ” said Gary Bradski, a Silicon Valley roboticist. “Life’s not fair.”
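For readers who want a feel for the exponential growth Markoff attributes to Moore's Law, here is a minimal back-of-envelope sketch. It is not from the article; the 1971 starting count (roughly 2,300 transistors for an early microprocessor) and the fixed two-year doubling period are illustrative assumptions.

```python
# Hypothetical illustration of Moore's Law as described above: transistor
# counts doubling roughly every two years. The starting figure and the
# doubling period are assumptions for the sake of the example, not data
# taken from the article.

START_YEAR = 1971
START_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(year: int) -> float:
    """Project a transistor count for `year` under a fixed doubling period."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * (2 ** doublings)

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2009):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Under these assumptions the count grows from a few thousand in 1971 to on the order of a billion by 2009, which is the kind of runaway curve the Singularity argument extrapolates from.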

The "Blue Brain" & Human Consciousness -Scientists Create Artificial BrainMay 24, 2007 - http://www.dailygalaxy.com/my_weblog/2007/05/the_blue_brain_.html


A network of artificial nerves is evolving right now in a Swiss supercomputer. This bizarre creation is capable of simulating a natural brain, cell-for-cell. The Swiss scientists, who created what they have dubbed "Blue Brain", believe it will soon offer a better understanding of human consciousness. This is no sci-fi flick; it’s an actual ‘computer brain’ that may eventually have the ability to think for itself. Exciting? Scary? It could be a little of both.

The designers say that "Blue Brain" was willful and unpredictable from day one. When it was first fed electrical impulses, strange patterns began to appear with lightning-like flashes produced by ‘cells’ that the scientists recognized from living human and animal processes. Neurons started interacting with one another until they were firing in rhythm. "It happened entirely on its own," says biologist Henry Markram, the project's director. "Spontaneously."

The project essentially has its own factory to produce artificial brains. Their computers can clone nerve cells quickly. The system allows for the production of whole series of neurons of all different types. Because in natural brains no two cells are exactly identical, the scientists make sure the artificial cells used for the project are also random and unique.

Does this ‘brain’ have a soul? If it does, it is likely to be the shadowy remnants of thousands of sacrificed rats whose brains were almost literally fed into the computer. After opening the rat skulls and slicing their brains into thin sections, the scientists kept the slices alive. Tiny sensors picked up individual neurons, recording how the cells fired and how adjacent cells responded. In this way the scientists were able to collect entire repertoires of actual rat behavior: basically, how a rat would respond in different situations throughout a rat's life.

The researchers say it wouldn't present much of a technological challenge to bring the brain to life. "We could simply connect a robot to the brain model," says Markram. "Then we could see how it reacts to real environments." Hmm, are rats capable of revenge? What I’m wondering is what this brain would do to those researchers if it was attached to a giant metallic rat body and equipped with teeth and claws… now there’s a good sci-fi movie.

Although over ten thousand artificial nerve cells have already been woven in, the researchers plan to increase the number to one million by next year. The researchers are already working with IBM experts on plans for a computer that would operate at inconceivable speeds, something fast enough to simulate the human brain. The project is scheduled to last beyond 2015, at which point the team hopes to be ready for their primary goal: a computer model of an entire human brain. So, whose brain will they be slicing up for that one? Let's hope it’s not a psychopath.
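To give a sense of what "simulating neurons until they fire in rhythm" can mean at the most basic level, the toy sketch below steps a handful of leaky integrate-and-fire neurons with random connections and prints when each one spikes. It is a deliberately crude illustration, nothing like the detailed biophysical models the Blue Brain project actually uses; every constant in it is an arbitrary assumption.

```python
import random

# Toy leaky integrate-and-fire network: each "neuron" accumulates input,
# leaks charge over time, and emits a spike when it crosses a threshold.
# This is an illustrative simplification, not the Blue Brain project's
# cell-for-cell biophysical model.

random.seed(1)

NUM_NEURONS = 5
THRESHOLD = 1.0
LEAK = 0.9          # fraction of charge kept each step
SPIKE_WEIGHT = 0.4  # charge delivered to a connected neuron by one spike
STEPS = 50

# Random sparse connectivity: neuron i excites the neurons listed in targets[i].
targets = [random.sample(range(NUM_NEURONS), 2) for _ in range(NUM_NEURONS)]
potential = [0.0] * NUM_NEURONS

for step in range(STEPS):
    # External drive: a small random input current to every neuron.
    for i in range(NUM_NEURONS):
        potential[i] = potential[i] * LEAK + random.uniform(0.0, 0.3)

    spikes = [i for i in range(NUM_NEURONS) if potential[i] >= THRESHOLD]
    for i in spikes:
        potential[i] = 0.0               # reset after firing
        for j in targets[i]:
            potential[j] += SPIKE_WEIGHT  # excite downstream neurons

    if spikes:
        print(f"step {step:2d}: neurons {spikes} fired")
```

Even this crude model shows spontaneous, loosely synchronized firing once the neurons start driving one another, which is the qualitative behavior the article's researchers describe, though at a vastly smaller scale and fidelity.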


Are we on the brink of creating a computer with a human brain?
By Michael Hanlon
Last updated at 8:45 AM on 11th August 2009

http://www.dailymail.co.uk/sciencetech/article-1205677/Are-brink-creating-human-brain.html#ixzz0O1xFM7J1

There are only a handful of scientific revolutions that would really change the world. An immortality pill would be one. A time machine would be another. Faster-than-light travel, allowing the stars to be explored in a human lifetime, would be on the shortlist, too.

To my mind, however, the creation of an artificial mind would probably trump all of these - a development that would throw up an array of bewildering and complex moral and philosophical quandaries. Amazingly, it might also be within reach. For while time machines, eternal life potions and Star Trek-style warp drives are as far away as ever, a team of scientists in Switzerland is claiming that a fully-functioning replica of a human brain could be built by 2020.

This isn't just pie-in-the-sky. The Blue Brain project, led by computer genius Henry Markram - who is also the director of the Centre for Neuroscience & Technology and the Brain Mind Institute - has for the past five years been engineering the mammalian brain, the most complex object known in the Universe, using some of the most powerful supercomputers in the world. And last month, Professor Markram claimed, at a conference in Oxford, that he plans to build an electronic human brain 'within ten years'. If he is right, nothing will be the same again. But can such an extraordinary claim be credible?

When we think of artificial minds, we inevitably think of the sort of machines that have starred in dozens of sci-fi movies. Indeed, most scientists - and science fiction writers - have tended to concentrate on the nuts and bolts of robotics: how you make artificial muscles; how you make a machine see and hear; how you give it realistic skin and enough tendons and ligaments underneath that skin to allow it to smile convincingly. But what tends to be glossed over is by far the most complex problem of all: how you make a machine think.

This problem is one of the central questions of modern philosophy and goes to the very heart of what we know, or rather do not know, about the human mind. Most of us imagine that the brain is rather like a computer. And in many ways, it is. It processes data and can store quite prodigious amounts of information. But in other ways, a brain is quite unlike a computer. For while our computers are brilliant at calculating the weather forecast and modelling the effects of nuclear explosions - tasks most often assigned to the most powerful machines - they still cannot 'think'. We cannot be sure this is the case. But no one thinks that the laptop on your desk or even the powerful mainframes used by the Met Office can, in any meaningful sense, have a mind.


So what is it, in that three pounds of grey jelly, that gives rise to the feeling of conscious self-awareness, the thoughts and emotions, the agonies and ecstasies that comprise being a human being? This is a question that has troubled scientists and philosophers for centuries. The traditional answer was to assume that some sort of 'soul' pervades the brain, a mysterious 'ghost in the machine' which gives rise to the feeling of self and consciousness. If this is the case, then computers, being machines not flesh and blood, will never think. We will never be able to build a robot that will feel pain or get angry, and the Blue Brain project will fail.

But very few scientists still subscribe to this traditional 'dualist' view - 'dualist' because it assumes 'mind' and 'matter' are two separate things. Instead, most neuroscientists believe that our feelings of self-awareness, pain, love and so on are simply the result of the countless billions of electrical and chemical impulses that flit between its equally countless billions of neurons. So if you build something that works exactly like a brain, consciousness, at least in theory, will follow.

In fact, several teams are working to prove this is the case by attempting to build an electronic brain. They are not attempting to build flesh and blood brains like modern-day Dr Frankensteins. They are using powerful mainframe computers to 'model' a brain. But, they say, the result will be just the same.

Two years ago, a team at IBM's Almaden research lab at Nevada University used a BlueGene/L supercomputer to model half a mouse brain. Half a mouse brain consists of about eight million neurons, each of which can form around 8,000 links with neighbouring cells. Creating a virtual version of this pushes a computer to the limit, even machines which, like the BlueGene, can perform 20 trillion calculations a second.

The 'mouse' simulation was run for about ten seconds at a speed a tenth as fast as an actual rodent brain operates. Nevertheless, the scientists said they detected tell-tale patterns believed to correspond with the 'thoughts' seen by scanners in real-life mouse brains. It is just possible a fleeting, mousey 'consciousness' emerged in the mind of this machine.

But building a thinking, remembering human mind is more difficult. Many neuroscientists claim the human brain is too complicated to copy. Markram's team is undaunted. They are using one of the most powerful computers in the world to replicate the actions of the 100 billion neurons in the human brain. It is this approach - essentially copying how a brain works without necessarily understanding all of its actions - that will lead to success, the team hopes.

And if so, what then? Well, a mind, however fleeting and however shorn of the inevitable complexities and nuances that come from being embedded in a body, is still a mind, a 'person'. We would effectively have created a 'brain in a vat'. Conscious, aware, capable of feeling, pain, desire. And probably terrified.

And if it were modelled on a human brain, we would then have real ethical dilemmas. If our 'brain' - effectively just a piece of extremely impressive computer software - could be said to know it exists, then do we assign it rights? Would turning it off constitute murder? Would performing experiments upon it constitute torture?

And there are other questions, too, questions at the centre of the nurture versus nature debate. Would this human mind, for example, automatically feel guilt or would it need to be 'taught' a sense of morality first? And how would it respond to religion? Indeed, are these questions that a human mind asks of its own accord, or must it be taught to ask them first?

Thankfully, we are probably a long way from having to confront these issues. It is important to stress that not one scientist has provided anything like a convincing explanation for how the brain works, let alone shown for sure that it would be possible to replicate this in a machine. Not one computer or robot has come near passing the famous 'Turing Test', devised by the brilliant Cambridge scientist Alan Turing in 1950, to prove whether a machine could think. It is a simple test in which someone is asked to communicate, using a screen and keyboard, with a computer trying to mimic a human, and another, real human. If the judge cannot tell the machine from the other person, the computer has 'passed' the test. So far, every computer we have built has failed.

Yet, if the Blue Brain project succeeds, in a few decades - perhaps sooner - we will be looking at the creation of a new intelligent lifeform on Earth. And the ethical dilemmas we face when it comes to experimenting on animals in the name of science will pale into insignificance when faced with the potential torments of our new machine mind.
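The article's figures for the half-mouse-brain run invite a rough sanity check. The sketch below simply turns the quoted numbers (8 million neurons, about 8,000 connections each, 20 trillion calculations per second, ten seconds at one-tenth speed) into back-of-envelope estimates; the per-synapse figure and the human-scale extrapolation are assumptions for illustration, not claims made by the article.

```python
# Back-of-envelope arithmetic on the simulation figures quoted in the article.
# The "operations per synapse" bound and the human-brain extrapolation are
# illustrative assumptions, not results reported anywhere in the piece.

NEURONS_HALF_MOUSE = 8_000_000
SYNAPSES_PER_NEURON = 8_000
MACHINE_OPS_PER_SEC = 20e12          # BlueGene/L figure quoted in the article
RUN_SECONDS = 10
SLOWDOWN = 10                        # ran at one-tenth real speed

synapses = NEURONS_HALF_MOUSE * SYNAPSES_PER_NEURON
simulated_seconds = RUN_SECONDS / SLOWDOWN
total_ops_budget = MACHINE_OPS_PER_SEC * RUN_SECONDS

# Upper bound: if every machine operation went to synapses, how many are
# available per synapse per simulated second of brain time?
ops_per_synapse_per_sim_second = total_ops_budget / (synapses * simulated_seconds)

# Crude extrapolation to the 100 billion neurons the article cites for humans,
# assuming similar connectivity and a real-time (no slowdown) target.
scale_up = (100e9 / NEURONS_HALF_MOUSE) * SLOWDOWN
human_ops_per_sec = MACHINE_OPS_PER_SEC * scale_up

print(f"synapses in half a mouse brain: {synapses:.2e}")
print(f"ops per synapse per simulated second (upper bound): "
      f"{ops_per_synapse_per_sim_second:,.0f}")
print(f"machine speed implied for a real-time human-scale run: "
      f"{human_ops_per_sec:.1e} ops/s")
```

Under these assumptions the half-mouse run had only a few thousand operations to spend on each synapse per simulated second, and a real-time human-scale version would demand machines on the order of a billion billion operations per second, which helps explain why the article's skeptics think the 2020 timetable is optimistic.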

COMMENTS:
I'm a researcher in Artificial Intelligence. The later paragraphs of your article hit the nail on the head. Prof. Markram is on a typical silly season funding mission. If there is as little progress in the next ten years as there has been in the last ten, then we can expect to wait at least fifty years before we see the changes you talk about. My experience is that the more we discover about AI, the more difficult the task of producing human-comparable performance looks, and the more wonderful our abilities seem to be.
- Dr Andy Edmonds, Woburn Sands, 11/8/2009 2:57
