
  • COGNITIVE PSYCHOLOGY

  • To Christine with love (M.W.E.)

    To Ruth with love all ways (M.K.)

    The only means of strengthening one's intellect is to make up one's mind about nothing, to let the mind be a thoroughfare for all thoughts. Not a select party.

    (John Keats)

  • Cognitive Psychology: A Student's Handbook

    Fourth Edition

    Michael W. Eysenck (Royal Holloway, University of London, UK)

    Mark Keane (University College Dublin, Ireland)

    HOVE AND NEW YORK

  • First published 2000 by Psychology Press Ltd, 27 Church Road, Hove, East Sussex BN3 2FA

    www.psypress.co.uk

    Simultaneously published in the USA and Canada by Taylor & Francis Inc.

    325 Chestnut Street, Philadelphia, PA 19106

    Psychology Press is an imprint of the Taylor & Francis Group

    This edition published in the Taylor & Francis e-Library, 2005.

    To purchase your own copy of this or any of Taylor & Francis or Routledge's collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.

    Reprinted 2000, 2001

    Reprinted 2002 (twice) and 2003 by Psychology Press

    27 Church Road, Hove, East Sussex BN3 2FA
    29 West 35th Street, New York, NY 10001

    © 2000 by Psychology Press Ltd

    All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

    British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.

    ISBN 0-203-62630-3 Master e-book ISBN

    ISBN 0-203-62636-2 (Adobe eReader Format)
    ISBN 0-86377-550-0 (hbk)
    ISBN 0-86377-551-9 (pbk)

    Cover design by Hybert Design, Waltham St Lawrence, Berkshire

  • Contents

    Preface xii

    1. Introduction 1

    Cognitive psychology as a science 1

    Cognitive science 5

    Cognitive neuropsychology 13

    Cognitive neuroscience 18

    Outline of this book 25

    Chapter summary 26

    Further reading 27

    2. Visual perception: Basic processes 28

    Introduction 28

    Perceptual organisation 28

    Depth and size perception 34

    Colour perception 43

    Brain systems 48

    Chapter summary 56

    Further reading 57

    3. Perception, movement, and action 58

    Introduction 58

    Constructivist theories 59

    Direct perception 64

    Theoretical integration 68

    Motion, perception, and action 70

    Visually guided action 71

  • Perception of object motion 79

    Chapter summary 87

    Further reading 89

    4. Object recognition 90

    Introduction 90

    Pattern recognition 91

    Marr's computational theory 96

    Cognitive neuropsychology approach 106

    Cognitive science approach 109

    Face recognition 116

    Chapter summary 128

    Further reading 129

    5. Attention and performance limitations 130

    Introduction 130

    Focused auditory attention 132

    Focused visual attention 136

    Divided attention 147

    Automatic processing 155

    Action slips 160

    Chapter summary 165

    Further reading 166

    6. Memory: Structure and processes 167

    Introduction 167

    The structure of memory 167

    Working memory 172

    Memory processes 182

    Theories of forgetting 187

    Theories of recall and recognition 194

    Chapter summary 203

    Further reading 204


  • 7. Theories of long-term memory 205

    Introduction 205

    Episodic and semantic memory 205

    Implicit memory 208

    Implicit learning 211

    Transfer appropriate processing 213

    Amnesia 216

    Theories of amnesia 223

    Chapter summary 234

    Further reading 235

    8. Everyday memory 236

    Introduction 236

    Autobiographical memory 238

    Memorable memories 245

    Eyewitness testimony 249

    Superior memory ability 256

    Prospective memory 261

    Evaluation of everyday memory research 263

    Chapter summary 264

    Further reading 265

    9. Knowledge: Propositions and images 266

    Introduction 266

    What is a representation? 267

    What is a proposition? 270

    Propositions: Objects and relations 271

    Schemata, frames, and scripts 276

    What is an image? Some evidence 282

    Propositions versus images 287

    Kosslyn's computational model of imagery 293

    The neuropsychology of visual imagery 298


  • Connectionist representations 299

    Chapter summary 304

    Further reading 305

    10. Objects, concepts, and categories 306

    Introduction 306

    Evidence on categories and categorisation 307

    The defining-attribute view 313

    The prototype view 317

    The exemplar-based view 320

    Explanation-based views of concepts 322

    Conceptual combination 325

    Concepts and similarity 326

    Evaluating theories of categorisation 331

    Neurological evidence on concepts 332

    Chapter summary 333

    Further reading 334

    11. Speech perception and reading 335

    Introduction 335

    Listening to speech 336

    Theories of word recognition 340

    Cognitive neuropsychology 345

    Basic reading processes 348

    Word identification 352

    Routes from print to sound 357

    Chapter summary 365

    Further reading 367

    12. Language comprehension 368

    Introduction 368

    Sentence processing 368

    Capacity theory 376


  • Discourse processing 379

    Story processing 386

    Chapter summary 397

    Further reading 398

    13. Language production 399

    Introduction 399

    Speech as communication 399

    Speech production processes 401

    Theories of speech production 403

    Cognitive neuropsychology: Speech production 410

    Cognitive neuroscience: Speech production 412

    Writing: Basic processes 414

    Cognitive neuropsychology: Writing 419

    Speaking and writing compared 425

    Language and thought 426

    Chapter summary 428

    Further reading 430

    14. Problem solving: Puzzles, insight, and expertise 431

    Introduction 431

    Early research: The Gestalt school 433

    Newell and Simon's problem-space theory 438

    Evaluating research on puzzles 446

    Re-interpreting the Gestalt findings 449

    From puzzles to expertise 452

    Evaluation of expertise research 461

    Learning to be an expert 461

    Cognitive neuropsychology of thinking 465

    Chapter summary 466

    Further reading 467

    15. Creativity and discovery 468


  • Introduction 468

    Genius and talent 468

    General approaches to creativity 469

    Discovery using mental models 473

    Discovery by analogy 476

    Scientific discovery by hypothesis testing 480

    Evaluating problem-solving research 483

    Chapter summary 486

    Further reading 487

    16. Reasoning and deduction 488

    Introduction 488

    Theoretical approaches to reasoning 491

    How people reason with conditionals 492

    Abstract-rule theory 502

    Mental models theory 506

    Domain-specific rule theories 513

    Probabilistic theory 515

    Cognitive neuropsychology of reasoning 518

    Rationality and evaluation of theories 519

    Chapter summary 520

    Further reading 521

    17. Judgement and decision making 522

    Introduction 522

    Judgement research 523

    Decision making 531

    How flawed are judgement and decision making? 534

    Chapter summary 535

    Further reading 536

    18. Cognition and emotion 537

    Introduction 537


  • Does affect require cognition? 537

    Theories of emotional processing 543

    Emotion and memory 549

    Emotion, attention, and perception 556

    Conclusions on emotional processing 561

    Chapter summary 563

    Further reading 564

    19. Present and future 565

    Introduction 565

    Experimental cognitive psychology 565

    Cognitive neuropsychology 568

    Cognitive science 570

    Cognitive neuroscience 573

    Present and future directions 575

    Chapter summary 576

    Further reading 577

    Glossary 579

    References 591

    Author index 657

    Subject index 680


  • Preface

    Cognitive psychology has changed in several exciting ways in the few years since the third edition of this textbook. Of all the changes, the most dramatic has been the huge increase in the number of studies making use of sophisticated techniques (e.g., PET scans) to investigate human cognition. During the 1990s, such studies probably increased tenfold, and are set to increase still further during the early years of the third millennium. As a result, we now have four major approaches to cognitive psychology: experimental cognitive psychology, based mainly on laboratory experiments; cognitive neuropsychology, which points up the effects of brain damage on cognition; cognitive science, with its emphasis on computational modelling; and cognitive neuroscience, which uses a wide range of techniques to study brain functioning. It is a worthwhile (but challenging) business to try to integrate information from these four approaches, and that is exactly what we have tried to do in this book. As before, our busy professional lives have made it essential for us to work hard to avoid chaos. For example, the first author wrote several parts of the book in China, and other parts were written in Mexico, Poland, Russia, Israel, and the United States. The second author followed Joyce's ghost, writing parts of the book between Dublin and Trieste.

    I (Michael Eysenck) would like to express my profound gratitude to my wife Christine, to whom this book (in common with the previous edition) is appropriately dedicated. I am also very grateful to our three children (Fleur, William, and Juliet) for their tolerance and understanding, just as was the case with the previous edition of this book. However, when I look back to the writing of the third edition of this textbook, it amazes me how much they have changed over the last five years.

    Since I (Mark Keane) first collaborated on Cognitive Psychology: A Student's Handbook in 1990, my professional life has undergone considerable change, from a post-doc in psychology to Professor of Computer Science. My original motivation in writing this text was to influence the course of cognitive psychology as it was then developing, to encourage its extension in a computational direction. Looking back over the last 10 years, I am struck by the slowness of change in the introduction of these ideas. The standard psychology undergraduate degree does a very good job at giving students the tools for the empirical exploration of the mind. However, few courses give students the tools for the theoretical elaboration of the topic. In this respect, the discipline gets a "could do better" rather than an "excellent" on the mark sheet.

    We are very grateful to several people for reading an entire draft of this book, and for offering valuable advice on how it might be improved. They include Ruth Byrne, Liz Styles, Trevor Harley, and Robert Logie. We would also like to thank those who commented on various chapters: John Towse, Steve Anderson, James Hampton, Fernand Gobet, Evan Heit, Alan Parkin, David Over, Ken Manktelow, Ken Gilhooly, Peter Ayton, Clare Harries, George Mather, Mark Georgeson, Gerry Altmann, Nick Wade, Mick Power, David Hardman, John Richardson, Vicki Bruce, Gillian Cohen, and Jonathan St. B.T. Evans.

    Michael Eysenck and Mark Keane

  • 1. Introduction

    COGNITIVE PSYCHOLOGY AS A SCIENCE

    In the years leading up to the millennium, people made increased efforts to understand each other and their own inner, mental space. This concern was marked by a tidal wave of research in the field of cognitive psychology, and by the emergence of cognitive science as a unified programme for studying the mind.

    In the popular media, there are numerous books, films, and television programmes on the more accessible aspects of cognitive research. In scientific circles, cognitive psychology is currently a thriving area, dealing with a bewildering diversity of phenomena, including topics like attention, perception, learning, memory, language, emotion, concept formation, and thinking.

    In spite of its diversity, cognitive psychology is unified by a common approach based on an analogy between the mind and the digital computer; this is the information-processing approach. This approach is the dominant paradigm or theoretical orientation (Kuhn, 1970) within cognitive psychology, and has been for some decades.

    Historical roots of cognitive psychology

    The year 1956 was critical in the development of cognitive psychology. At a meeting at the Massachusetts Institute of Technology, Chomsky gave a paper on his theory of language, George Miller presented a paper on the magic number seven in short-term memory (Miller, 1956), and Newell and Simon discussed their very influential computational model called the General Problem Solver (discussed in Newell, Shaw, & Simon, 1958; see also Chapter 15). In addition, the first systematic attempt to consider concept formation from a cognitive perspective was reported (Bruner, Goodnow, & Austin, 1956).

    The field of Artificial Intelligence was also founded in 1956 at the Dartmouth Conference, which was attended by Chomsky, McCarthy, Minsky, Newell, Simon, and Miller (see Gardner, 1985). Thus, 1956 witnessed the birth of both cognitive psychology and cognitive science as major disciplines. Books devoted to aspects of cognitive psychology began to appear (e.g., Broadbent, 1958; Bruner et al., 1956). However, it took several years before the entire information-processing viewpoint reached undergraduate courses (Lachman, Lachman, & Butterfield, 1979; Lindsay & Norman, 1977).

    Information processing: Consensus

    Broadbent (1958) argued that much of cognition consists of a sequential series of processing stages. When a stimulus is presented, basic perceptual processes occur, followed by attentional processes that transfer some of the products of the initial perceptual processing to a short-term memory store. Thereafter, rehearsal serves to maintain information in the short-term memory store, and some of the information is transferred to a long-term memory store. Atkinson and Shiffrin (1968; see also Chapter 6) put forward one of the most detailed theories of this type.

    This theoretical approach provided a simple framework for textbook writers. The stimulus input could be followed from the sense organs to its ultimate storage in long-term memory by successive chapters on perception, attention, short-term memory, and long-term memory. The crucial limitation with this approach is its assumption that stimuli impinge on an inactive and unprepared organism. In fact, processing is often affected substantially by the individual's past experience, expectations, and so on.

    We can distinguish between bottom-up processing and top-down processing. Bottom-up or stimulus-driven processing is directly affected by stimulus input, whereas top-down or conceptually driven processing is affected by what the individual contributes (e.g., expectations determined by context and past experience). As an example of top-down processing, it is easier to read the word "well" in poor handwriting if it is presented in the sentence context, "I hope you are quite ___", than when it is presented on its own. The sequential stage model deals primarily with bottom-up or stimulus-driven processing, and its failure to consider top-down processing adequately is its greatest limitation.

    During the 1970s, theorists such as Neisser (1976) argued that nearly all cognitive activity consists of interactive bottom-up and top-down processes occurring together (see Chapter 4). Perception and remembering might seem to be exceptions, because perception depends heavily on the precise stimuli presented (and thus on bottom-up processing), and remembering depends crucially on stored information (and thus on top-down processing). However, perception is influenced by the perceiver's expectations about to-be-presented stimuli (see Chapters 2, 3, and 4), and remembering is influenced by the precise environmental cues to memory that are available (see Chapter 6).

    By the end of the 1970s, most cognitive psychologists agreed that the information-processing paradigm was the best way to study human cognition (see Lachman et al., 1979):

    People are autonomous, intentional beings interacting with the external world.

    The mind through which they interact with the world is a general-purpose, symbol-processing system (symbols are patterns stored in long-term memory which "designate or point to structures outside themselves"; Simon & Kaplan, 1989, p. 13).

    Symbols are acted on by processes that transform them into other symbols that ultimately relate to things in the external world.

    The aim of psychological research is to specify the symbolic processes and representations underlying performance on all cognitive tasks.

    Cognitive processes take time, and predictions about reaction times can often be made.

    The mind is a limited-capacity processor having structural and resource limitations.

    The symbol system depends on a neurological substrate, but is not wholly constrained by it.

    Many of these ideas stemmed from the view that human cognition resembles the functioning of computers. As Herb Simon (1980, p. 45) expressed it, "It might have been necessary a decade ago to argue for the commonality of the information processes that are employed by such disparate systems as computers and human nervous systems. The evidence for that commonality is now overwhelming." (See Simon, 1995, for an update of this view.)

    The information-processing framework continues to develop as information technology develops: the computational metaphor is extended with each advance in computer technology. In the 1950s and 1960s, researchers mainly used the general properties of the computer to understand the mind (e.g., that it had a central processor and memory registers). Many different programming languages had been developed by the 1970s, leading to various aspects of computer software and languages being used (e.g., Johnson-Laird, 1977, on analogies to language understanding). After that, as massively parallel machines were developed, theorists returned to the notion that cognitive theories should be based on the parallel processing capabilities of the brain (Rumelhart, McClelland, & the PDP Research Group, 1986).

    Information processing: Diversity

    Cognitive science is a trans-disciplinary grouping of cognitive psychology, artificial intelligence, linguistics, philosophy, neuroscience, and anthropology. The common aim of these disciplines is the understanding of the mind. To simplify matters, we will focus mainly on the relationship between cognitive psychology and artificial intelligence.

    At the risk of oversimplification, we can identify four major approaches within cognitive psychology:

    Experimental cognitive psychology: it follows the experimental tradition of cognitive psychology, and involves no computational modelling.

    Cognitive science: it develops computational models to understand human cognition.

    Cognitive neuropsychology: it studies patterns of cognitive impairment shown by brain-damaged patients to provide valuable information about normal human cognition.

    Cognitive neuroscience: it uses several techniques for studying brain functioning (e.g., brain scans) to understand human cognition.

    There are various reasons why these distinctions are less neat and tidy in reality than we have implied. First, terms such as "cognitive science" and "cognitive neuroscience" are sometimes used in a broader and more inclusive way than we have done. Second, there has been a rapid increase in recent years in studies that combine elements of more than one approach. Third, some have argued that experimental cognitive psychologists and cognitive scientists are both endangered species, given the galloping expansion of cognitive neuropsychology and cognitive neuroscience.

    In this book, we will provide a synthesis of the insights emerging from all four approaches. The approach taken by experimental cognitive psychologists has been in existence for several decades, so we will focus mainly on the approaches of cognitive scientists, cognitive neuropsychologists, and cognitive neuroscientists in the following sections. Before doing so, however, we will consider some traditional ways of obtaining evidence about human cognition.

    Empirical methods

    In most of the research discussed in this book, cognitive processes and structures were inferred from participants' behaviour (e.g., speed and/or accuracy of performance) obtained under well-controlled conditions. This approach has proved to be very useful, and the data thus obtained have been used in the development and subsequent testing of most theories in cognitive psychology. However, there are two major potential problems with the use of such data:

    1. Measures of the speed and accuracy of performance provide only indirect information about the internal processes and structures of central interest to cognitive psychologists.


    2. Behavioural data are usually gathered in the artificial surroundings of the laboratory. The ways in which people behave in the laboratory may differ greatly from the ways they behave in everyday life (see Chapter 19).

    Cognitive psychologists do not rely solely on behavioural data to obtain useful information from their participants. An alternative way of studying cognitive processes is by making use of introspection, which is defined by the Oxford English Dictionary as "examination or observation of one's own mental processes". Introspection depends on conscious experience, and each individual's conscious experience is personal and private. In spite of this, it is often assumed that introspection can provide useful evidence about some mental processes.

    Nisbett and Wilson (1977) argued that introspection is practically worthless, supporting their argument with examples. In one study, participants were presented with a display of five essentially identical pairs of stockings, and decided which pair was the best. After they had made their choice, they indicated why they had chosen that particular pair. Most participants chose the rightmost pair, and so their decisions were actually affected by relative spatial position. However, the participants strongly denied that spatial position had played any part in their decision, referring instead to slight differences in colour, texture, and so on among the pairs of stockings as having been important.

    Nisbett and Wilson (1977, p. 248) claimed that people are generally unaware of the processes influencing their behaviour: "When people are asked to report how a particular stimulus influenced a particular response, they do so not by consulting a memory of the mediating process, but by applying or generating causal theories about the effects of that type of stimulus on that type of response." This view was supported by the discovery that an individual's introspections about what is determining his or her behaviour are often no more accurate than the guesses made by others.

    The limitations of introspective evidence are becoming increasingly clear. For example, consider research on implicit learning, which involves learning complex material without the ability to verbalise what has been learned. There is reasonable evidence for the existence of implicit learning (see Chapter 7). There is even stronger evidence for implicit memory, which involves memory in the absence of conscious recollection. Normal and brain-damaged individuals can exhibit excellent memory performance even when they show no relevant introspective evidence (see Chapter 7).

    Ericsson and Simon (1980, 1984) argued that Nisbett and Wilson (1977) had overstated the case against introspection. They proposed various criteria for distinguishing between valid and invalid uses of introspection:

    It is preferable to obtain introspective reports during the performance of a task rather than retrospectively, because of the fallibility of memory.

    Participants are more likely to produce accurate introspections when describing what they are attending to, or thinking about, than when required to interpret a situation or their own thought processes.

    People cannot usefully introspect about several kinds of processes (e.g., neuronal processes; recognition processes).

    Careful consideration of the studies that Nisbett and Wilson (1977) regarded as striking evidence of the worthlessness of introspection reveals that participants generally provided retrospective interpretations about information that had probably never been fully attended to. Thus, their findings are consistent with the proposed guidelines for the use of introspection (Crutcher, 1994; Ericsson & Simon, 1984).


  • In sum, introspection is sometimes useful, but there is no conscious awareness of many cognitive processes or their products. This point is illustrated by the phenomena of implicit learning and implicit memory, but numerous other examples of the limitations of introspection will be presented throughout this book.

    COGNITIVE SCIENCE

    Cognitive scientists develop computational models to understand human cognition. A decent computational model can show us that a given theory can be specified and allow us to predict behaviour in new situations. Mathematical models were used in experimental psychology long before the emergence of the information-processing paradigm (e.g., in IQ testing). These models can be used to make predictions, but often lack an explanatory component. For example, committing three traffic violations is a good predictor of whether a person is a bad risk for car insurance, but it is not clear why. One of the major benefits of the computational models developed in cognitive science is that they can provide both an explanatory and predictive basis for a phenomenon (e.g., Keane, Ledgeway, & Duff, 1994; Costello & Keane, 2000). We will focus on computational models in this section, because they are the hallmark of the cognitive science approach.

    FIGURE 1.1

    A flowchart of a bad theory about how we understand sentences.


  • Computational modelling: From flowcharts to simulations

    In the past, many experimental cognitive psychologists stated their theories in vague verbal statements. This made it hard to decide whether the evidence fitted the theory. In contrast, cognitive scientists produce computer programs to represent cognitive theories with all the details made explicit. In the 1960s and 1970s, cognitive psychologists tended to use flowcharts rather than programs to characterise their theories. Computer scientists use flowcharts as a sort of plan or blueprint for a program, before they write the detailed code for it. Flowcharts are more specific than verbal descriptions, but can still be underspecified if not accompanied by a coded program.

    An example of a very inadequate flowchart is shown in Figure 1.1. This is a flowchart of a bad theory about how we understand sentences. It assumes that a sentence is encoded in some form and then stored. After that, a decision process (indicated by a diamond) determines if the sentence is too long. If it is too long, then it is broken up and we return to the encode stage to re-encode the sentence. If it is ambiguous, then its two senses are distinguished, and we return to the encode stage. If it is not ambiguous, then it is stored in long-term memory. After one sentence is stored, we return to the encode stage to consider the next sentence.

    In the days when cognitive psychologists only used flowcharts, sarcastic questions abounded, such as, "What happens in the boxes?" or "What goes down the arrows?". Such comments point to genuine criticisms. We need to know what is meant by "encode sentence", how long is "too long", and how sentence ambiguity is tested. For example, after deciding that only a certain length of sentence is acceptable, it may turn out that it is impossible to decide whether the sentence portions are ambiguous without considering the entire sentence. Thus, the boxes may look all right at a superficial glance, but real contradictions may appear when their contents are specified.
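    To make the point concrete, here is the Figure 1.1 flowchart rendered as a minimal program sketch (our own illustration in Python; every stub and the length threshold are hypothetical placeholders, since the theory itself never specifies them):

```python
# A minimal, hypothetical rendering of the Figure 1.1 flowchart as a program.
# Every placeholder below marks a decision the flowchart leaves unspecified.

def encode(item):
    return item                       # placeholder: "encode sentence" is never defined

def too_long(sentence):
    return len(sentence.split()) > 8  # placeholder threshold: how long is "too long"?

def split_sentence(sentence):
    words = sentence.split()
    mid = len(words) // 2
    return [" ".join(words[:mid]), " ".join(words[mid:])]

def ambiguous(sentence):
    return False                      # placeholder: the ambiguity test is unspecified

def senses(sentence):
    return [sentence]                 # placeholder: how are the two senses produced?

def understand(sentences):
    long_term_memory = []
    queue = list(sentences)
    while queue:
        item = encode(queue.pop(0))               # re-encode on every return to this stage
        if too_long(item):
            queue = split_sentence(item) + queue  # break it up, return to encode
        elif ambiguous(item):
            queue = senses(item) + queue          # distinguish senses, return to encode
        else:
            long_term_memory.append(item)         # store, then take the next sentence
    return long_term_memory

print(understand(["I hope you are quite well."]))
```

    Each placeholder marks a decision that the flowchart leaves open; trying to fill them in is exactly where the theory starts to break down, as the following paragraphs show.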

    In similar fashion, exactly what goes down the arrows is critical. If one examines all the arrows converging on the "encode sentence" box, it is clear that more needs to be specified. There are four different kinds of thing entering this box: an encoded sentence from the environment; a sentence that has been broken up into bits by the split-sentence box; a sentence that has been broken up into several senses; and a command to consider the next sentence. Thus, the encode box has to perform several specific operations. In addition, it may have to record the fact that an item is either a sentence or a possible meaning of a sentence. Several other complex processes have to be specified within the encode box to handle these tasks, but the flowchart sadly fails to address these issues. The gaps in the flowchart show some similarities with those in the formula shown in Figure 1.2.

    Not all theories expressed as flowcharts possess the deficiencies of the one described here. However, implementing a theory as a program is a good method for checking that it contains no hidden assumptions or vague terms. In the previous example, this would involve specifying the form of the input sentences, the nature of the storage mechanisms, and the various decision processes (e.g., those about sentence length and ambiguity). These computer programs are written in artificial intelligence programming languages, usually LISP (Norvig, 1992) or PROLOG (Shoham, 1993).

    There are many issues surrounding the use of computer simulations and the ways in which they do and do not simulate cognitive processes (Cooper, Fox, Farrington, & Shallice, 1996; Costello & Keane, 2000; Palmer & Kimchi, 1986). Palmer and Kimchi (1986) argued that it should be possible to decompose a theory successively through a number of levels (from descriptive statement to flowchart to specific functions in a program) until one reaches a written program. In addition, they argued that it should be possible to draw a line at some level of decomposition, and say that everything above that line is psychologically plausible or meaningful, whereas everything below it is not. This issue of separating psychological aspects of the program from other aspects arises because there will always be parts of the program that have little to do with the psychological theory, but which are there simply because of the particular programming language being used and the machine on which the program is running. For example, in order to see what the program is doing, it is necessary to have print commands in the program which show the outputs of various stages on the computer's screen. However, no-one would argue that such print commands form part of the psychological model. Cooper et al. (1996) argue that psychological theories should not be described using natural language at all, but that a formal specification language should be used. This would be a very precise language, like a logic, that would be directly executable as a program.

    Three issues surrounding computer simulation:

    Is it possible to decompose a theory until one reaches the level of a written program?

    Is it possible to separate psychological aspects of a program from other aspects?

    Are there differences in reaction time between programs and human participants?

    Other issues arise about the relationship between the performance of the program and the performance of human participants (Costello & Keane, 2000). For example, it is seldom meaningful to relate the speed of the program doing a simulated task to the reaction time taken by human participants, because the processing times of programs are affected by psychologically irrelevant features. Programs run faster on more powerful computers, or if the program's code is compiled rather than interpreted. However, the various materials that are presented to the program should result in differences in program operation time that correlate closely with differences in participants' reaction times in processing the same materials. At the very least, the program should be able to reproduce the same outputs as participants when given the same inputs.

    FIGURE 1.2

    The problem of being specific. Copyright 1977 by Sidney Harris in American Scientist Magazine. Reproduced with permission of the author.

    Computational modelling techniques

    The general characteristics of computational models of cognition have been discussed at some length. It is now time to deal with some of the main types of computational model that have been used in recent years. Three main types are outlined briefly here: semantic networks; production systems; and connectionist networks.

    Semantic networks

    Consider the problem of modelling what we know about the world (see Chapter 9). There is a long tradition from Aristotle and the British empiricist school of philosophers (Locke, Hume, Mill, Hartley, Bain) which proposes that all knowledge is in the form of associations. Three main principles of association have been proposed:

    Contiguity: two things become associated because they occurred together in time.

    Similarity: two things become associated because they are alike.

    Contrast: two things become associated because they are opposites.

    There is a whole class of cognitive models owing their origins to these ideas; they are called associative or semantic or declarative networks. Semantic networks have the following general characteristics:

    Concepts are represented by linked nodes that form a network.

    These links can be of various kinds; they can represent very general relations (e.g., is-associated-with or is-similar-to), specific, simple relations like is-a (e.g., John is-a policeman), or more complete relations like play, hit, kick.

    The nodes themselves and the links among nodes can have various activation strengths representing the similarity of one concept to another. Thus, for example, a dog and a cat node may be connected by a link with an activation of 0.5, whereas a dog and a pencil may be connected by a link with a strength of 0.1.

    Learning takes the form of adding new links and nodes to the network or changing the activation values on the links between nodes. For example, in learning that two concepts are similar, the activation of a link between them may be increased.

    Various effects (e.g., memory effects) can be modelled by allowing activation to spread throughout the network from a given node or set of nodes.

    The way in which activation spreads through a network can be determined by a variety of factors. For example, it can be affected by the number of links between a given node and the point of activation, or by the amount of time that has passed since the onset of activation.

    Part of a very simple network model is shown in Figure 1.3. It corresponds closely to the semantic network model proposed by Collins and Loftus (1975). Such models have been successful in accounting for various findings. Semantic priming effects, in which the word "dog" is recognised more readily if it is preceded by the word "cat" (Meyer & Schvaneveldt, 1971), can be easily modelled using such networks (see Chapter 12). Ayers and Reder (1998) have used semantic networks to understand misinformation effects in eyewitness testimony (see Chapter 8). At their best, semantic networks are both flexible and elegant modelling schemes.
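    The following toy Python sketch (ours, not from the text; only the dog-cat 0.5 and dog-pencil 0.1 strengths come from the passage above, and the other links are invented) shows how nodes, weighted links, and spreading activation can be realised, and how a priming effect falls out of them:

```python
# Toy semantic network: nodes linked with activation strengths. The dog-cat
# (0.5) and dog-pencil (0.1) strengths come from the text; the other links
# are hypothetical additions for illustration.
links = {
    ("dog", "cat"): 0.5,
    ("dog", "pencil"): 0.1,
    ("cat", "animal"): 0.8,    # hypothetical link
    ("dog", "animal"): 0.8,    # hypothetical link
}

def neighbours(node):
    """All nodes directly linked to `node`, with their link strengths."""
    for (a, b), strength in links.items():
        if a == node:
            yield b, strength
        elif b == node:
            yield a, strength

def spread(source, depth=2, activation=1.0, result=None):
    """Spread activation outward from a node; it weakens with every link
    crossed, so activation falls off with distance in the network."""
    if result is None:
        result = {}
    if activation > result.get(source, 0.0):
        result[source] = activation
        if depth > 0:
            for node, strength in neighbours(source):
                spread(node, depth - 1, activation * strength, result)
    return result

# Priming sketch: activating "cat" leaves "dog" far more active than "pencil".
print(spread("cat"))
```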

    Production systems

    Another popular approach to modelling cognition involves production systems. These are made up of productions, where a production is an "IF…THEN" rule. These rules can take many forms, but an example that is very useful in everyday life is, "If the green man is lit up, then cross the road." In a typical production system model, there is a long-term memory that contains a large set of these IF…THEN rules. There is also a working memory (i.e., a system holding information that is currently being processed). If information from the environment that the green man is lit up reaches working memory, it will match the IF-part of the rule in long-term memory, and trigger the THEN-part of the rule (i.e., cross the road).

    Production systems have the following characteristics:

    They have numerous IF…THEN rules.

    They have a working memory containing information.

    The production system operates by matching the contents of working memory against the IF-parts of the rules and executing the THEN-parts.

    If some information in working memory matches the IF-part of many rules, there may be a conflict-resolution strategy selecting one of these rules as the best one to be executed.

    FIGURE 1.3

    A schematic diagram of a simple semantic network with nodes for various concepts (i.e., dog, cat), and links between these nodes indicating the differential similarity of these concepts to each other.


  • Consider a very simple production system operating on lists of letters involving As and Bs (see Figure 1.4). The system has two rules:

    1. IF a list in working memory has an A at the end THEN replace the A with AB.
    2. IF a list in working memory has a B at the end THEN replace the B with an A.

    If we give this system different inputs in the form of different lists of letters, then different things happen. If we give it CCC, this will be stored in working memory but will remain unchanged, because it does not match either of the IF-parts of the two rules. If we give it A, then it will be modified by the rules after the A is stored in working memory. This A is a list of one item and as such it matches rule 1. Rule 1 has the effect of replacing the A with AB, so that when the THEN-part is executed, working memory will contain an AB. On the next cycle, AB does not match rule 1 but it does match rule 2. As a result, the B is replaced by an A, leaving an AA in working memory. The system will next produce AAB, then AAA, then AAAB, and so on.
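    Because the two rules above are fully explicit, the system can be run directly. Here is a minimal illustrative sketch (our Python rendering, not the notation production-system modellers actually use) of the recognise-act cycle on exactly those rules:

```python
# The two productions described above, each as an (IF-test, THEN-action) pair.
rules = [
    (lambda wm: wm.endswith("A"), lambda wm: wm[:-1] + "AB"),  # Rule 1
    (lambda wm: wm.endswith("B"), lambda wm: wm[:-1] + "A"),   # Rule 2
]

def cycle(wm):
    """One recognise-act cycle: fire the first rule whose IF-part matches
    working memory; if nothing matches (e.g., "CCC"), leave it unchanged."""
    for test, action in rules:
        if test(wm):
            return action(wm)
    return wm

wm = "A"
for _ in range(6):
    wm = cycle(wm)
    print(wm)   # AB, AA, AAB, AAA, AAAB, AAAA
```

    Running the sketch from "A" prints AB, AA, AAB, AAA, AAAB, and so on, making each recognise-act cycle of the informal description explicit.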

    Many aspects of cognition can be specified as sets of IF…THEN rules. For example, chess knowledge can readily be represented as a set of productions based on rules such as, "If the Queen is threatened, then move the Queen to a safe square." In this way, people's basic knowledge of chess can be modelled as a collection of productions, and gaps in this knowledge as the absence of some productions. Newell and Simon (1972) first established the usefulness of production system models in characterising cognitive processes like problem solving and reasoning (see Chapter 14). However, these models have a wider applicability. Anderson (1993) has modelled human learning using production systems (see Chapter 14), and others have used them to model reinforcement behaviour in rats, and semantic memory (Holland et al., 1986).

    Connectionist networks

    Connectionist networks, neural networks, or parallel distributed processing models, as they are variously called, are relative newcomers to the computational modelling scene. All previous techniques were marked by the need to program explicitly all aspects of the model, and by their use of explicit symbols to represent concepts. Connectionist networks, on the other hand, can to some extent program themselves, in that they can learn to produce specific outputs when certain inputs are given to them. Furthermore, connectionist modellers often reject the use of explicit rules and symbols and use distributed representations, in which concepts are characterised as patterns of activation in the network (see Chapter 9).

    Early theoretical proposals about the feasibility of learning in neural-like networks were made by McCulloch and Pitts (1943) and by Hebb (1949). However, the first neural network models, called Perceptrons, were shown to have several limitations (Minsky & Papert, 1988). By the late 1970s, hardware and software developments in computing offered the possibility of constructing more complex networks overcoming many of these original limitations (e.g., Rumelhart, McClelland, & the PDP Research Group, 1986; McClelland, Rumelhart, & the PDP Research Group, 1986).

    FIGURE 1.4

    A schematic diagram of a simple production system.

    Connectionist networks typically have the following characteristics (see Figure 1.5):

    The network consists of elementary or neuron-like units or nodes connected together so that a single unit has many links to other units.

    Units affect other units by exciting or inhibiting them.

    The unit usually takes the weighted sum of all of the input links, and produces a single output to another unit if the weighted sum exceeds some threshold value.

    The network as a whole is characterised by the properties of the units that make it up, by the way they are connected together, and by the rules used to change the strength of connections among units.

    Networks can have different structures or layers; they can have a layer of input units, intermediate layers (of so-called hidden units), and a layer of output units.

    A representation of a concept can be stored in a distributed manner by a pattern of activation throughout the network.

    The same network can store many patterns without them necessarily interfering with each other if they are sufficiently distinct.

    An important learning rule used in networks is called backward propagation of errors (BackProp).

    FIGURE 1.5

    A multi-layered connectionist network with a layer of input units, a layer of internal representation units or hidden units, and a layer of output units. Input patterns can be encoded, if there are enough hidden units, in a form that allows the appropriate output pattern to be generated from a given input pattern. Reproduced with permission from David E. Rumelhart & James L. McClelland, Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1), published by the MIT Press, 1986, © the Massachusetts Institute of Technology.

    In order to understand connectionist networks fully, let us consider how individual units act when activation impinges on them. Any given unit can be connected to several other units (see Figure 1.6). Each of these other units can send an excitatory or an inhibitory signal to the first unit. This unit generally takes a weighted sum of all these inputs. If this sum exceeds some threshold, it produces an output. Figure 1.6 shows a simple diagram of just such a unit, which takes the inputs from a number of other units and sums them to produce an output if a certain threshold is exceeded.
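    The behaviour of such a unit is easy to make concrete. The sketch below assumes the values given in the caption of Figure 1.6 (a threshold of 1, with responses of +1 and -1); the particular input patterns and weights are invented for illustration:

```python
# The unit of Figure 1.6: weighted sum of inputs, threshold of 1,
# responding +1 if the net input exceeds the threshold and -1 otherwise.
def unit_response(inputs, weights, threshold=1.0):
    net_input = sum(i * w for i, w in zip(inputs, weights))
    return +1 if net_input > threshold else -1

# Three sending units with illustrative excitatory (+) and inhibitory (-) weights.
print(unit_response([1, 1, 0], [0.9, 0.4, -0.7]))   # net input 1.3 -> responds +1
print(unit_response([1, 1, 1], [0.9, 0.4, -0.7]))   # net input 0.6 -> responds -1
```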

    These networks can model cognitive behaviour without recourse to the kinds of explicit rules found in production systems. They do this by storing patterns of activation in the network that associate various inputs with certain outputs. The models typically make use of several layers to deal with complex behaviour. One layer consists of input units that encode a stimulus as a pattern of activation in those units. Another layer is an output layer, which produces some response as a pattern of activation. When the network has learned to produce a particular response at the output layer following the presentation of a particular stimulus at the input layer, it can exhibit behaviour that looks as if it had learned a rule of the form "IF such-and-such is the case THEN do so-and-so". However, no such rules exist explicitly in the model.

    Networks learn the association between different inputs and outputs by modifying the weights on the links between units in the net. In Figure 1.6, we see that the weight on the links to a unit, as well as the activation of other units, plays a crucial role in computing the response of that unit. Various learning rules modify these weights in systematic ways. When we apply such learning rules to a network, the weights on the links are modified until the net produces the required output patterns given certain input patterns.

    One such learning rule is called backward propagation of errors, or BackProp. BackProp allows a network to learn to associate a particular input pattern with a given output pattern. At the start of the learning period, the network is set up with random weights on the links among the units. During the early stages of learning, after the input pattern has been presented, the output units often produce the incorrect pattern or response. BackProp compares the imperfect pattern with the known required response, noting the errors that occur. It then back-propagates activation through the network so that the weights between the units are adjusted to produce the required pattern. This process is repeated with a particular stimulus pattern until the network produces the required response pattern. Thus, the model can be made to learn the behaviour with which the cognitive scientist is concerned, rather than being explicitly programmed to do so.
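    As an illustrative sketch of this cycle (a bare-bones Python version with an invented toy input-output pattern, not the full algorithm as normally derived), the following code starts from random weights, runs a forward pass, notes the errors, and repeatedly adjusts the weights until the output approximates the required pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy association (invented for illustration): one input pattern that the
# network must learn to map onto one required output pattern.
stimulus = np.array([1.0, 0.0, 1.0])
required = np.array([0.0, 1.0])

# Random weights at the start of the learning period, as described above.
W_in_hid = rng.normal(size=(3, 4))    # input units -> hidden units
W_hid_out = rng.normal(size=(4, 2))   # hidden units -> output units

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    # Forward pass: present the input; early on the output is incorrect.
    hidden = sigmoid(stimulus @ W_in_hid)
    output = sigmoid(hidden @ W_hid_out)
    # Compare the imperfect pattern with the required response.
    error = output - required
    # Back-propagate: adjust the weights, output layer first, then hidden layer.
    delta_out = error * output * (1 - output)
    delta_hid = (delta_out @ W_hid_out.T) * hidden * (1 - hidden)
    W_hid_out -= 0.5 * np.outer(hidden, delta_out)
    W_in_hid -= 0.5 * np.outer(stimulus, delta_hid)

print(np.round(output, 2))   # close to the required pattern [0. 1.]
```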

    Networks have been used to produce very interesting results. Several examples will be discussed throughout the text (see, for example, Chapters 2, 10, and 16), but one concrete example will be mentioned here. Sejnowski and Rosenberg (1987) produced a connectionist network called NETtalk, which takes an English text as its input and produces reasonable English speech output. Even though the network is trained on a limited set of words, it can pronounce the words from new text with about 90% accuracy. Thus, the network seems to have learned the rules of English pronunciation, but it has done so without having explicit rules that combine and encode sounds.

    Connectionist models such as NETtalk have great "Wow!" value, and are the subject of much research interest. Some researchers might object to our classification of connectionist networks as merely one among a number of modelling techniques. However, others have argued that connectionism represents an alternative to the information-processing paradigm (Smolensky, 1988; Smolensky, Legendre, & Miyata, 1993). Indeed, if one examines the fundamental tenets of the information-processing framework, then connectionist schemes violate one or two. For example, symbol manipulation of the sort found in production systems does not seem to occur in connectionist networks. We will return to the complex issues raised by connectionist networks later in the book.

    COGNITIVE NEUROPSYCHOLOGY

    Cognitive neuropsychology is concerned with the patterns of cognitive performance in brain-damaged patients. Those aspects of cognition that are intact or impaired are identified, with this information being of value for two main reasons. First, the cognitive performance of brain-damaged patients can often be explained by theories within cognitive psychology. Such theories specify the processes or mechanisms involved in normal cognitive functioning, and it should be possible in principle to account for many of the cognitive impairments of brain-damaged patients in terms of selective damage to some of those mechanisms.

    Second, it may be possible to use information from brain-damaged patients to reject theories proposed by cognitive psychologists, and to propose new theories of normal cognitive functioning. According to Ellis and Young (1988, p. 4), a major aim of cognitive neuropsychology:

    is to draw conclusions about normal, intact cognitive processes from the patterns of impaired and intact capabilities seen in brain-injured patients… the cognitive neuropsychologist wishes to be in a position to assert that observed patterns of symptoms could not occur if the normal, intact cognitive system were not organised in a certain way.

    FIGURE 1.6

    Diagram showing how the inputs from a number of units are combined to determine the overall input to unit-i. Unit-i has a threshold of 1; so if its net input exceeds 1 then it will respond with +1, but if the net input is less than 1 then it will respond with -1.

    The intention is that there should be bi-directional influences of cognitive psychology on cognitive neuropsychology, and of cognitive neuropsychology on cognitive psychology. Historically, the former influence was the greater one, but the latter has become more important.

    Before discussing the cognitive neuropsychological approach in more detail, we will discuss a concrete example of cognitive neuropsychology in operation. Atkinson and Shiffrin (1968) argued that there is an important distinction between a short-term memory store and a long-term memory store, and that information enters into the long-term store through rehearsal and other processing activities in the short-term store (see Chapter 6). Relevant evidence was obtained by Shallice and Warrington (1970). They studied a brain-damaged patient, KF, who seemed to have severely impaired short-term memory, but essentially intact long-term memory.

    The study of this patient served two important purposes. First, it provided evidence to support the theoretical distinction between two memory systems. Second, it pointed to a real deficiency in the theoretical model of Atkinson and Shiffrin (1968). If, as this model suggests, long-term learning and memory depend on the short-term memory system, then it is surprising that someone with a grossly deficient short-term memory system also has normal long-term memory.

    The case of KF shows very clearly the potential power of cognitive neuropsychology. The study of this one patient provided strong evidence that the dominant theory of memory at the end of the 1960s was seriously deficient. This is no mean achievement for a study on one patient!

    Cognitive neuropsychological evidence

  • How do cognitive neuropsychologists set about the task of understanding how the cognitive system functions? A crucial goal is the discovery of dissociations, which occur when a patient performs normally on one task but is impaired on a second task. In the case of KF, a dissociation was found between performance on short-term memory tasks and on long-term memory tasks. Such evidence can be used to argue that normal individuals possess at least two separate memory systems.

    There is a potential problem in drawing sweeping conclusions from single dissociations. A patient may perform poorly on one task and well on a second task simply because the first task is more complex than the second, rather than because the first task involves specific skills that have been affected by brain damage. The solution to this problem is to look for double dissociations. A double dissociation between two tasks (1 and 2) is shown when one patient performs normally on task 1 and at an impaired level on task 2, and another patient performs normally on task 2 and at an impaired level on task 1. If a double dissociation can be shown, then the results cannot be explained in terms of one task being harder than the other.

    In the case of short-term and long-term memory, such a double dissociation has been shown. KF had impaired short-term memory but intact long-term memory, whereas amnesic patients have severely deficient long-term memory but intact short-term memory (see Chapter 7). These findings suggest there are two distinct memory systems which can suffer damage separately from each other.

    If brain damage were usually very limited in scope, and affected only a single cognitive process or mechanism, then cognitive neuropsychology would be a fairly simple enterprise. In fact, brain damage is often rather extensive, so that several cognitive systems are all impaired to a greater or lesser extent. This means that much ingenuity is needed to make sense of the tantalising glimpses of human cognition provided by brain-damaged patients.

    Theoretical assumptions

    Most cognitive neuropsychologists subscribe to the following assumptions (with the exception of the last one):

    The cognitive system exhibits modularity, i.e., there are several relatively independent cognitive processes or modules, each of which functions to some extent in isolation from the rest of the processing system; brain damage typically impairs only some of these modules.

    There is a meaningful relationship between the organisation of the physical brain and that of the mind; this assumption is known as isomorphism.

    Investigation of cognition in brain-damaged patients can tell us much about cognitive processes in normal individuals; this assumption is closely bound up with the other assumptions.

    Most patients can be categorised in terms of syndromes, each of which is based on co-occurring sets of symptoms.

    Syndromes

    The traditional approach within neuropsychology made much use of syndromes. It was claimed that certain sets of symptoms or impairments are usually found together, and each set of co-occurring symptoms was used to define a separate syndrome (e.g., amnesia; dyslexia). This syndrome-based approach allows us to impose some order on the numerous brain-damaged patients who have been studied by assigning them to a fairly small number of categories. It is also of use in identifying those areas of the brain mainly responsible for cognitive functions such as language, because we can search for those parts of the brain damaged in all those patients having a given syndrome.

    In spite of its uses, the syndrome-based approach has substantial problems. It exaggerates the similarities among different patients allegedly suffering from the same syndrome. In addition, those symptoms or impairments said to form a syndrome may be found in the same patients solely because the underlying cognitive processes are anatomically adjacent.

    There have been attempts to propose more specific syndromes or categories based on our theoretical understanding of cognition. However, the discovery of new patients with unusual patterns of deficits, and the occurrence of theoretical advances, mean that the categorisation system is constantly changing. As Ellis (1987) pointed out, a syndrome thought at time t to be due to damage to a single unitary module "is bound to have fractionated by time t+2 years into a host of awkward subtypes".

    How should cognitive neuropsychologists react to these problems? Some cognitive neuropsychologists (e.g., Parkin, 1996) argue that it makes sense to carry out group studies in which patients with the same syndrome are considered together. Parkin introduced what he called the "significance implies homogeneity [uniformity]" rule. According to this rule, "if a group of subjects exhibits significant heterogeneity [variability] then they will not be capable of generating statistically significant group differences" (Parkin, 1996, p. 16). The potential problem with this rule is that a group of patients can show a significant effect even though a majority of the individual patients fail to show the effect.

    Ellis (1987) argued that cognitive neuropsychology should proceed on the basis of intensive single-case studies in which individual patients are studied on a wide range of tasks. An adequate theory of cognition should be as applicable to the individual case as to groups of individuals, and so single-case studies provide a perfectly adequate test of cognitive theories. The great advantage of this approach is that there is no need to make simplifying assumptions about which patients do and do not belong to the same syndrome.

    Another argument for single-case studies is that it is often not possible to find a group of patients showing very similar cognitive deficits. As Shallice (1991, p. 432) pointed out, "as finer and finer aspects of the cognitive architecture are investigated in attempts to infer normal function, neuropsychology will be forced to resort more and more to single-case studies".

    Ellis (1987) may have overstated the value of single-case studies. If our theoretical understanding of an area is rather limited, it may make sense to adopt the syndrome-based approach until the major theoretical issues have been clarified. Furthermore, many experimental cognitive psychologists disapprove of attaching great theoretical significance to findings from individuals who may not be representative even of brain-damaged patients. As Shallice (1991, p. 433) argued:

A selective impairment found in a particular task in some patient could just reflect: the patient's idiosyncratic strategy, the greater difficulty of that task compared with the others, a premorbid lacuna [gap] in that patient, or the way a reorganised system but not the original normal system operates.

A reasonable compromise position is to carry out a number of single-case studies. If a theoretically crucial dissociation is found in a single patient, then there are various ways of interpreting the data. However, if the same dissociation is obtained in a number of individual patients, it is less likely that all the patients had atypical cognitive systems prior to brain damage, or that they have all made use of similar compensatory strategies.

    Modularity

The whole enterprise of cognitive neuropsychology is based on the assumption that there are numerous modules or cognitive processors in the brain. These modules function relatively independently, so that damage to one module does not directly affect other modules. Modules are anatomically distinct, so that brain damage will often affect some modules while leaving others intact. Cognitive neuropsychology may help the discovery of these major building blocks of cognition. A double dissociation indicates that two tasks make use of different modules or cognitive processors, and so a series of double dissociations can be used to provide a sketch-map of our modular cognitive system.

Syndrome-based approach vs. single-case studies

Syndrome-based approach
Advantages: Provides a means of imposing order and categorising patients. Allows identification of the cognitive functions of brain areas. Useful while major theoretical issues remain to be clarified.
Disadvantages: Oversimplification based on theoretical assumptions. Exaggeration of similarities among patients.

Single-case studies
Advantages: Avoids oversimplifying assumptions. No need to find groups of patients with very similar cognitive deficits.
Disadvantages: Evidence lacks generalisability and can even be misleading.


The notion of modularity was emphasised by Fodor (1983), who identified the following distinguishing features of modules:

• Informational encapsulation: each module functions independently of the functioning of other modules.
• Domain specificity: each module can process only one kind of input (e.g., words; faces).
• Mandatory or compulsory operation: the functioning of a module is not under any form of voluntary control.
• Innateness: modules are inborn.

Fodor's ideas have been influential. However, many psychologists have criticised mandatory operation and innateness as criteria for modularity. Some modules may operate automatically, but there is little evidence to suggest that they all do. It is implausible to assume the innateness of modules underlying skills such as reading and writing, as these are skills that the human race has developed only comparatively recently.

From the perspective of cognitive neuropsychologists, these criticisms do not pose any special problems. If the assumptions of informational encapsulation and domain specificity remain tenable, then data from brain-damaged patients can continue to be used in the hunt for cognitive modules. This would still be the case even if it turned out that several modules or cognitive processors were neither mandatory nor innate.

It is not only cognitive neuropsychologists who subscribe to the notion of modularity. Most experimental cognitive psychologists, cognitive scientists, and cognitive neuroscientists also believe in modularity. The four groups differ mainly in terms of their preferred methods for showing modularity.

    Isomorphism

Cognitive neuropsychologists assume there is a meaningful relationship between the way in which the brain is organised at a physical level and the way in which the mind and its cognitive modules are organised. This assumption has been called isomorphism, meaning that two things (e.g., brain and mind) have the same shape or form. Thus, it is expected that each module will have a different physical location within the brain. If this expectation is disconfirmed, then cognitive neuropsychology and cognitive neuroscience will become more complex enterprises.

An assumption that is related to isomorphism is that there is localisation of function, meaning that any specific function or process occurs in a given location within the brain (Figure 1.7). The notion of localisation of function seems to be in conflict with the connectionist account, according to which a process (e.g., activation of a concept) can be distributed over a wide area of the brain. There is as yet no definitive evidence to support one view over the other.

    Evaluation

Are the various theoretical assumptions underlying cognitive neuropsychology correct? It is hard to tell. Modules do not actually exist, but are convenient theoretical devices used to clarify our understanding. Therefore, the issue of whether the theoretical assumptions are valuable or not is probably best resolved by considering the extent to which cognitive neuropsychology is successful in increasing our knowledge of cognition. In other words, the proof of the pudding is in the eating. Farah (1994) argued that the evidence does not support what she termed the locality assumption, according to which damage to one module has only local effects. According to Farah (1994, p. 101), "The conclusion that the locality assumption may be false is a disheartening one. It undercuts much of the special appeal of neuropsychological architecture."


One of the most serious problems with cognitive neuropsychology stems from the difficulty in carrying out group studies. This has led to the increasing use of single-case studies. Such studies are sometimes very revealing. However, they can provide misleading evidence if the patient had specific cognitive deficits prior to brain damage, or if he or she has developed unusual compensatory strategies to cope with the consequences of brain damage.

    COGNITIVE NEUROSCIENCE

Some cognitive psychologists argue that we can understand cognition by relying on observations of people's performance on cognitive tasks and ignoring the neurophysiological processes occurring within the brain. For example, Baddeley (1997, p. 7) expressed some scepticism about the relevance of neurophysiological processes to the development of psychological theories:

A theory giving a successful account of the neurochemical basis of long-term memory would be unlikely to offer an equally elegant and economical account of the psychological characteristics of memory. While it may in principle one day be possible to map one theory onto the other, it will still be useful to have both a psychological and a physiological theory…Neurophysiology and neurochemistry are interesting and important areas, but at present they place relatively few constraints on psychological theories and models of human memory.

Why was Baddeley doubtful that neurophysiological evidence could contribute much to psychological understanding? The main reason was that psychologists and neurophysiologists tend to focus on different levels of analysis. In the same way that a carpenter does not need to know that wood consists mainly of atoms moving around rapidly in space, so it is claimed that cognitive psychologists do not need to know the fine-grain neurophysiological workings of the brain.

FIGURE 1.7. PET scans can be used to show localisation of function within the brain. This three-dimensional PET scan shows the metabolic activity within the brain during a hand exercise. The exercise involved moving the fingers of the right hand. The front of the brain is at the left. The most active area appears white; this is the motor cortex in the cerebral cortex, where movement is coordinated. Photo credit: Montreal Neurological Institute/McGill University/CNRI/Science Photo Library.

    A different position was advocated by Churchland and Sejnowski (1991, p. 17), who suggested:

It would be convenient if we could understand the nature of cognition without understanding the nature of the brain itself. Unfortunately, it is difficult, if not impossible, to theorise effectively on these matters in the absence of neurobiological constraints. The primary reason is that computational space is consummately vast, and there are many conceivable solutions to the problems of how a cognitive operation could be accomplished. Neurobiological data provide essential constraints on computational theories, and they consequently provide an efficient means for narrowing the search space. Equally important, the data are also richly suggestive in hints concerning what might really be going on.

In line with these proposals, there are some psychological theories that are being fairly closely constrained by findings in the neurosciences (see Hummel & Holyoak, 1997, and Chapter 15).

Neurophysiologists have provided several kinds of valuable information about the brain's structure and functioning. In principle, it is possible to establish where in the brain certain cognitive processes occur, and when these processes occur. Such information can allow us to determine the order in which different parts of the brain become active when someone is performing a task. It also allows us to find out whether two tasks involve the same parts of the brain in the same way, or whether there are important differences. As we will see, this can be very important theoretically.

The various techniques for studying brain functioning differ in their spatial and temporal resolution (Churchland & Sejnowski, 1991). Some techniques provide information at the single-cell level, whereas others tell us about activity over much larger groups of cells. In similar fashion, some techniques provide information about brain activity on a millisecond-by-millisecond basis (which corresponds to the timescale for thinking), whereas others indicate brain activity only over much longer time periods such as minutes or hours.

FIGURE 1.8. The spatial and temporal ranges of some techniques used to study brain functioning. Adapted from Churchland and Sejnowski (1991).


Some of the main techniques will be discussed to give the reader some idea of the weapons available to cognitive neuroscientists. The spatial and temporal resolutions of some of these techniques are shown in Figure 1.8. High spatial and temporal resolutions are advantageous if a very detailed account of brain functioning is required, but low spatial and temporal resolutions can be more useful if a more general view of brain activity is required.

    Single-unit recording

Single-unit recording is a fine-grain technique developed over 40 years ago to permit the study of single neurons. A micro-electrode about one 10,000th of a millimetre in diameter is inserted into the brain of an animal to obtain a record of extracellular potentials. A stereotaxic apparatus is used to fix the animal's position, and to provide the researcher with precise information about the location of the electrode in three-dimensional space. Single-unit recording is a very sensitive technique, as electrical charges of as little as one-millionth of a volt can be detected.

The best known application of this technique was by Hubel and Wiesel (1962, 1979). They used it with cats and monkeys to study the neurophysiology of basic visual processes. Hubel and Wiesel found there were simple and complex cells in the primary visual cortex, but there were many more complex cells. These two types of cells both respond maximally to straight-line stimuli in a particular orientation (see Chapter 4). The findings of Hubel and Wiesel were so clear-cut that they constrained several subsequent theories of visual perception, including that of Marr (1982; see Chapter 2).

    Evaluation

The single-unit recording technique has the great value that it provides detailed information about brain functioning at the neuronal level, and is thus more fine-grain than other techniques (see Figure 1.8). Another advantage is that information about neuronal activity can be obtained over a very wide range of time periods, from small fractions of a second up to several hours or days. A major limitation is that it is an invasive technique, and so would be unpleasant to use with humans. Another limitation is that it can only provide information about activity at the neuronal level, and so other techniques are needed to assess the functioning of larger areas of the cortex.

    Event-related potentials (ERPs)

The electroencephalogram (EEG) is based on recordings of electrical brain activity measured at the surface of the scalp. Very small changes in electrical activity within the brain are picked up by scalp electrodes. These changes can be shown on the screen of a cathode-ray tube by means of an oscilloscope. A key problem with the EEG is that there tends to be so much spontaneous or background brain activity that it obscures the impact of stimulus processing on the EEG recording.

A solution to this problem is to present the same stimulus several times. After that, the segment of EEG following each stimulus is extracted and lined up with respect to the time of stimulus onset. These EEG segments are then simply averaged together to produce a single waveform. This method produces event-related potentials (ERPs) from EEG recordings, and allows us to distinguish genuine effects of stimulation from background brain activity.
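The averaging logic just described can be sketched in a few lines of Python. This is a toy illustration using simulated numbers (the evoked waveform, noise level, and sampling details are all hypothetical assumptions, not taken from the text); it shows why averaging time-locked epochs makes a small evoked response visible against much larger background activity.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_presentations, n_samples = 100, 300     # e.g., 300 samples at 1 kHz = 300 ms
t = np.arange(n_samples)

# Hypothetical evoked response: a small deflection peaking around 100 ms.
evoked = 2.0 * np.exp(-((t - 100) ** 2) / (2 * 15**2))

# Each epoch = the same evoked response buried in much larger background EEG.
epochs = evoked + rng.normal(0.0, 10.0, size=(n_presentations, n_samples))

# Averaging the time-locked epochs: random background activity cancels out
# (its standard deviation shrinks by sqrt(n)), leaving the evoked waveform.
erp = epochs.mean(axis=0)

print("assumed background sd in a single epoch: ~10 (arbitrary units)")
print(f"residual noise sd after averaging: ~{10 / np.sqrt(n_presentations):.1f}")
print(f"ERP peak (should be near 2.0): {erp.max():.2f}")
```

Because only the stimulus-locked component survives the averaging, the technique trades away single-trial information in exchange for a clean estimate of the evoked waveform.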

ERPs are particularly useful for assessing the timing of certain cognitive processes. For example, some attention theorists have argued that attended and unattended stimuli are processed differently at an early stage of processing, whereas others have claimed that they are both analysed fully in a similar way (see Chapter 5). Studies using ERPs have provided good evidence in favour of the former position. For example, Woldorff et al. (1993) found that ERPs were greater to attended than unattended auditory stimuli about 20–50 milliseconds after stimulus onset.

    Evaluation

ERPs provide more detailed information about the time course of brain activity than do most other techniques, and they have many medical applications (e.g., diagnosis of multiple sclerosis). However, ERPs do not indicate with any precision which regions of the brain are most involved in processing. This is due in part to the fact that the presence of skull and brain tissue distorts the electrical fields emerging from the brain. Furthermore, ERPs are mainly of value when the stimuli are simple and the task involves basic processes (e.g., target detection) occurring at a certain time after stimulus onset. As a result of these constraints (and the necessity of presenting the same stimulus several times), it would not be feasible to study most complex forms of cognition (e.g., problem solving; reasoning) with the use of ERPs.

    Positron emission tomography (PET)

Of all the new methods, the one that has attracted the most media interest is positron emission tomography, or the PET scan. The technique is based on the detection of positrons, which are atomic particles emitted by some radioactive substances. Radioactively labelled water is injected into the body, and rapidly gathers in the brain's blood vessels. When part of the cortex becomes active, the labelled water moves rapidly to that place. A scanning device next measures the positrons emitted from the radioactive water. A computer then translates this information into pictures of the activity levels of different parts of the brain. It may sound dangerous to inject a radioactive substance into someone. However, only tiny amounts of radioactivity are involved.

Raichle (1994b) has described the typical way in which PET has been used by cognitive neuroscientists. It is based on a subtractive logic. Brain activity is assessed during an experimental task, and is also assessed during some control or baseline condition (e.g., before the task is presented). The brain activity during the control condition is then subtracted from that during the experimental task. It is assumed that this allows us to identify those parts of the brain that are active only during the performance of the task. This technique has been used in several studies designed to locate the parts of the brain most involved in episodic memory, which is long-term memory involving conscious recollection of the past (see Chapter 7). There is more activity in the right prefrontal cortex when participants are trying to retrieve episodic memories than when they are trying to retrieve other kinds of memories (see Wheeler, Stuss, & Tulving, 1997, for a review).
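The subtractive logic is simple enough to express directly in code. The sketch below uses made-up numbers on a toy grid of "voxels" (the grid size, activity values, and threshold are all hypothetical assumptions); it is intended only to show the voxel-by-voxel subtraction, not the full statistical machinery of real PET analysis.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
shape = (8, 8, 8)                             # a toy three-dimensional grid of "voxels"

control = rng.normal(100.0, 5.0, shape)       # activity during the control condition
task = control + rng.normal(0.0, 5.0, shape)  # task shares the same background pattern
task[2:4, 2:4, 2:4] += 30.0                   # hypothetical task-specific cluster

difference = task - control                   # the subtraction image
active = difference > 15.0                    # simple threshold on the difference

print(f"voxels flagged as task-specific: {int(active.sum())} of {task.size}")
```

The shared background activity cancels in the difference image, leaving (approximately) only the cluster added during the task; the interpretive problems noted below arise because anything else that differs between the two conditions (e.g., motivation) survives the subtraction too.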

    Evaluation

One of the major advantages of PET is that it has reasonable spatial resolution, in that any active area within the brain can be located to within about 3 or 4 millimetres. It is also a fairly versatile technique, in that it can be used to identify the brain areas involved in a wide range of different cognitive activities.

PET has several limitations. First, the temporal resolution is very poor. PET scans indicate the total amount of activity in each region of the brain over a period of 60 seconds or longer, and so cannot reveal the rapid changes in brain activity accompanying most cognitive processes. Second, PET provides only an indirect measure of neural activity. As Anderson, Holliday, Singh, and Harding (1996, p. 423) pointed out, "changes in regional cerebral blood flow, reflected by changes in the spatial distribution of intravenously administered positron emitted radioisotopes, are assumed to reflect changes in neural activity". This assumption may be more applicable at early stages of processing. Third, it is an invasive technique, because participants have to be injected with radioactively labelled water. Fourth, it can be hard to interpret the findings from use of the subtraction technique. For example, it may seem plausible to assume that those parts of the brain active during retrieval of episodic memories but not other kinds of memories are directly involved in episodic memory retrieval. However, the participants may have been more motivated to retrieve such memories than other memories, and so some of the brain activity may reflect the involvement of motivational rather than memory systems.

    Magnetic resonance imaging (MRI and fMRI)

What happens in magnetic resonance imaging (MRI) is that radio waves are used to excite atoms in the brain. This produces magnetic changes which are detected by an 11-ton magnet surrounding the patient. These changes are then interpreted by a computer and turned into a very precise three-dimensional picture. MRI scans (Figure 1.9) can be used to detect very small brain tumours. MRI scans can be obtained from numerous different angles. However, they only tell us about the structure of the brain rather than about its functions.

The MRI technology has been applied to the measurement of brain activity to provide functional MRI (fMRI). Neural activity in the brain produces increased blood flow in the active areas, and there is oxygen and glucose within the blood. According to Raichle (1994a, p. 41), "the amount of oxygen carried by haemoglobin (the molecule that transports oxygen) affects the magnetic properties of the haemoglobin…MRI can detect the functionally induced changes in blood oxygenation in the human brain". The approach based on fMRI provides three-dimensional images of the brain with areas of high activity clearly indicated. It is more useful than PET, because it provides more precise spatial information, and shows changes over shorter periods of time. However, it shares with PET a reliance on the subtraction technique, in which brain activity during a control task or situation is subtracted from brain activity during the experimental task.

FIGURE 1.9. MRI scan showing a brain tumour. The tumour appears in bright contrast to the surrounding brain tissue. Photo credit: Simon Fraser/Neuroradiology Department, Newcastle General Hospital/Science Photo Library.

A study showing the usefulness of fMRI was reported by Tootell et al. (1995b). It involves the so-called waterfall illusion, in which lengthy viewing of a stimulus moving in one direction (e.g., a waterfall) is followed immediately by the illusion that stationary objects are moving in the opposite direction. There were two key findings. First, the gradual reduction in the size of the waterfall illusion over the first 60 seconds of observing the stationary stimulus was closely paralleled by the reduction in the area of activation observed in the fMRI. Second, most of the brain activity produced by the waterfall illusion was in V5, which is an area of the visual cortex known to be much involved in motion perception (see Chapter 2). Thus, the basic brain processes underlying the waterfall illusion are similar to those underlying normal motion perception.

    Evaluation

    Raichle (1994a, p. 350) argued that fMRI has several advantages over other techniques:

The technique has no known biological risk except for the occasional subject who suffers claustrophobia in the scanner (the entire body must be inserted into a relatively narrow tube). MRI provides both anatomical and functional information, which permits an accurate anatomical identification of the regions of activation in each subject. The spatial resolution is quite good, approaching the 1–2 millimetre range.

One limitation with fMRI is that it provides only an indirect measure of neural activity. As Anderson et al. (1996, p. 423) pointed out, "With fMRI, neural activity is reflected by changes in the relative concentrations of oxygenated and deoxygenated haemoglobin in the vicinity of the activity". Another limitation is that it has poor temporal resolution of the order of several seconds, so we cannot track the time course of cognitive processes. A final limitation is that it relies on the subtraction technique, and this may not accurately assess brain activity directly involved in the experimental task.

    Magneto-encephalography (MEG)

In recent years, a new technique known as magneto-encephalography or MEG has been developed. It involves using a superconducting quantum interference device (SQUID), which measures the magnetic fields produced by electrical brain activity. The evidence suggests that "it can be regarded as a direct measure of cortical neural activity" (Anderson et al., 1996, p. 423). It provides very accurate measurement of brain activity, in part because the skull is virtually transparent to magnetic fields. Thus, magnetic fields are little distorted by intervening tissue, which is an advantage over the electrical activity assessed by the EEG.

Anderson et al. used MEG in combination with MRI to study the properties of an area of the visual cortex known as V5 (see Chapter 2). They found with MEG that motion-contrast patterns produced large responses from V5, but that V5 did not seem to be responsive to colour. These data, in conjunction with previous findings from PET and fMRI studies, led Anderson et al. (1996, p. 429) to conclude that "these findings provide strong support for the hypothesis that a major function of human V5 is the rapid detection of objects moving relative to their background". In addition, Anderson et al. obtained evidence that V5 was active approximately 20 milliseconds after V1 (the primary visual cortex) in response to motion-contrast patterns. This is more valuable information than simply establishing that V1 and V5 are both active during this task, because it helps to clarify the sequence in which different brain areas contribute towards visual processing.

    Evaluation

MEG possesses several valuable features. First, the magnetic signals reflect neural activity reasonably directly. In contrast, PET and fMRI signals reflect blood flow, which is assumed in turn to reflect neural activity. Second, MEG supplies fairly detailed information at the millisecond level about the time course of cognitive processes. This matters because it makes it possible to work out the sequence of activation in different areas of the cortex.

Techniques used by cognitive neuroscientists

Single-unit recording
Strengths: Fine-grain detail. Information obtained over a wide range of time periods.
Weaknesses: Invasive. Only neuronal-level information is obtained.

ERPs
Strengths: Detailed information about the time course of brain activity.
Weaknesses: Lacks precision in identifying specific areas of the brain. Can only be used to study basic cognitive processes.

PET
Strengths: Active areas can be located to within 3–4 mm. Can identify a wide range of cognitive activities.
Weaknesses: Cannot reveal rapid changes in brain activity. Provides only an indirect measure of neural activity. Findings from a subtraction technique can be hard to interpret.

MRI and fMRI
Strengths: No known biological risk. Obtains accurate anatomical information. fMRI provides good information about timing.
Weaknesses: Indirect measure of neural activity. Cannot track the time course of most cognitive processes.

MEG
Strengths: Provides a reasonably direct measure of neural activity. Gives detailed information about the time course of cognitive processes.
Weaknesses: Irrelevant sources of magnetism may interfere with measurement. Does not give accurate information about brain areas active at a given time.

There are some major technical problems associated with the use of MEG. The magnetic field generated by the brain when thinking is about 100 million times weaker than the Earth's magnetic field, and a million times weaker than the magnetic fields around overhead power cables, and it is very hard to prevent irrelevant sources of magnetism from interfering with the measurement of brain activity. Superconductivity requires temperatures close to absolute zero, which means the SQUID has to be immersed in liquid helium at four degrees above the absolute zero of −273°C. However, these technical problems have been largely (or entirely) resolved. The major remaining disadvantage is that MEG does not provide structural or anatomical information. As a result, it is necessary to obtain an MRI as well as MEG data in order to locate the active brain areas.
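As a rough sanity check on these magnitudes, the arithmetic can be run directly. The Earth-field value of about 50 microtesla is an assumed round figure (it is not given in the text); the two ratios are those stated above.

```python
# Back-of-the-envelope check of the field strengths discussed above.
earth_field_tesla = 50e-6                          # assumed: ~50 microtesla
brain_field_tesla = earth_field_tesla / 100e6      # ~100 million times weaker
cable_field_tesla = brain_field_tesla * 1e6        # cables ~1 million times stronger

print(f"brain field ~ {brain_field_tesla:.1e} T")  # ~5e-13 T (sub-picotesla)
print(f"cable field ~ {cable_field_tesla:.1e} T")  # ~5e-7 T
```

A signal in the sub-picotesla range makes clear why magnetically shielded rooms and superconducting sensors are needed.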

    Section summary

All the techniques used by cognitive neuroscientists possess strengths and weaknesses. Thus, it is often desirable to use a number of different techniques to study any given aspect of human cognition. If similar findings are obtained from two techniques, this is known as converging evidence. Such evidence is of special value, because it suggests that the techniques are not providing distorted information. For example, studies using PET, fMRI, and MEG (e.g., Anderson et al., 1996; Tootell et al., 1995a, b) all indicate clearly that area V5 is much involved in motion perception.

It can also be of value to use two techniques differing in their particular strengths. For example, the ERP technique has good temporal resolution but poor spatial resolution, whereas the opposite is the case with fMRI. Their combined use offers the prospect of discovering the detailed time course and location of the processes involved in a cognitive task.

The techniques used within cognitive neuroscience are most useful when applied to areas of the brain that are organised in functionally discrete ways (S. Anderson, personal communication). For example, as we have seen, there is evidence that area V5 forms such an area for motion perception. It is considerably less clear that higher-order cognitive functions are organised in a similarly neat and tidy fashion. As a result, the various techniques discussed in this section may prove less informative when applied to such functions.

You may have got the impression that cognitive neuroscience consists mainly of various techniques for studying brain functioning. However, there is more to cognitive neuroscience than that. As Rugg (1997, p. 5) pointed out, "The distinctiveness [of cognitive neuroscience] arises from a lack of commitment to a single level of explanation, and the resulting tendency for explanatory models to combine functional and physiological concepts". Various examples of this explanatory approach are considered during the course of this book.

    OUTLINE OF THIS BOOK

One problem with writing a textbook of cognitive psychology is that virtually all the processes and structures of the cognitive system are interdependent. Consider, for example, the case of a student reading a book to prepare for an examination. The student is learning, but there are several other processes going on as well. Visual perception is involved in the intake of information from the printed page, and there is attention to the content of the book (although attention may be captured by irrelevant stimuli). In order for the student to profit from the book, he or she must possess considerable language skills, and must also have rich knowledge representations that are relevant to the material in the book. There may be an element of problem solving in the student's attempts to relate what is in the book to the possibly conflicting information he or she has learned elsewhere. Furthermore, what the student learns will depend on his or her emotional state. Finally, the acid test of whether the learning has been effective and has produced long-term memory comes during the examination itself, when the material contained in the book must be retrieved.

The words italicised in the previous paragraph indicate some of the main ingredients of human cognition, and form the basis of our coverage of cognitive psychology. In view of the interdependent functioning of all aspects of the cognitive system, there is an emphasis in this book on the ways in which each process (e.g., perception) depends on other processes and structures (e.g., attention; long-term memory; stored representations). This should aid the task of making sense of the complexities of the human cognitive system.


CHAPTER SUMMARY

Cognitive psychology as a science. Cognitive psychology is unified by a common approach based on an analogy between the mind and the computer. This information-processing approach views the mind as a general-purpose, symbol-processing system of limited capacity. There are four main types of cognitive psychologists: experimental cognitive psychologists; cognitive scientists; cognitive neuropsychologists; and cognitive neuroscientists, who use various techniques to study brain functioning.

Cognitive science. Cognitive scientists focus on computational models, in which theoretical assumptions have to be made explicit. These models are expressed in computer programs, which should produce the same outputs as people when given the same inputs. Three of the main types of computational model are semantic networks, production systems, and connectionist networks. Semantic networks consist of concepts, which are linked by various relations (e.g., is-similar-to). They are useful for modelling the structure of people's conceptual knowledge. Production systems are made up of productions in the form of IF…THEN rules. Connectionist networks differ from previous approaches in that they