
Neuropsychologia 40 (2002) 801–807

Speechreading circuits in people born deaf

Mairéad MacSweeney a,∗, Gemma A. Calvert b, Ruth Campbell c, Philip K. McGuire d, Anthony S. David d,e, Steven C.R. Williams d, Bencie Woll f, Michael J. Brammer d

a BBSU, Institute of Child Health, University College London, 30 Guilford Street, London WC1N 1EH, UK
b FMRIB, University of Oxford, John Radcliffe Hospital, Oxford OX3 9DU, UK
c Department of Human Communication Science, University College London, London WC1N 1PG, UK
d Institute of Psychiatry, De Crespigny Park, London SE5 8AF, UK
e Guy’s, King’s and Thomas’ School of Medicine, London SE5 9PJ, UK
f Department of Language and Communication Science, City University, London EC1V 0HB, UK

∗ Corresponding author. Tel.: +44-20-7905-2164; fax: +44-20-7831-7050. E-mail address: [email protected] (M. MacSweeney).

Received 20 March 2001; accepted 24 August 2001

Abstract

In hearing people, silent speechreading generates bilateral activation in superior temporal regions specialised for the perception of auditory speech [Science 276 (1997) 593; Neuroreport 11 (2000) 1729; Proceedings of the Royal Society London B 268 (2001) 451]. In the present study, FMRI data were collected from deaf and hearing volunteers while they speechread numbers and during a control task in which they counted nonsense mouth movements (gurns). Brain activation for silent speechreading in oral deaf participants was found primarily in posterior cingulate cortex and hippocampal/lingual gyri. In contrast to the pattern observed in the hearing group, deaf participants showed no speechreading-specific activation in left lateral temporal regions.

These data suggest that acoustic experience shapes the functional circuits for analysing speech. We speculate on the functional role the posterior cingulate gyrus may play in speechreading by profoundly congenitally deaf people. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: FMRI; Posterior cingulate; Superior temporal sulcus; Hippocampal gyrus; Plasticity

1. Introduction

The majority of people born deaf are born to hearing parents. Exposure to seen speech, therefore, constitutes the primary language input in infancy for the majority of deaf people. While seeing speech can never be as efficient for comprehension as hearing speech, many deaf people become good speechreaders. The majority are at least equivalent to hearing people in the perceptual discrimination of phonological contrasts carried by eye [2] and in the recruitment of conceptual information to speechreading [23]. However, similar behavioural performance does not necessarily imply that the same strategies or neural systems are used by both groups.

Functional neuroimaging has shown that hearing people consistently activate left temporal regions during silent speechreading [8,17,18]. This activation can include the lateral tip of Heschl’s gyrus at the junction of primary and secondary auditory cortex (BA41/42). We have previously reported that, as a group, people who are deaf from birth do not systematically recruit these regions when presented with silent speech [18]. Rather, temporal activation, especially in the left hemisphere, in individual deaf participants is dispersed across different sites. We argued that in hearing people, the left superior temporal sulcus (STS) ‘manages’ and modulates audio–visual speech integration. During presentation of congruent and incongruent audio–visual speech, cross-modal facilitation and depression observed in this area co-occurs with response enhancement and suppression, respectively, in primary visual and auditory cortices [7,9]. Therefore, in hearing people, who have extensive experience of concurrent audio–visual speech, this area can play a crucial role when speech is delivered by visual input alone. This cross-modal integrative system does not appear to be readily available to people born deaf since they have not been exposed to concurrent audio–visual speech.

MacSweeney et al. [18] focused on the differences in temporal cortex activation between deaf and hearing groups. The aim of the present study is to extend this investigation and obtain a more global picture of the cortical regions recruited by deaf people during speechreading. In our previous study, speechreading of spoken numbers was compared with a simple baseline condition, in which participants were required to count the occurrence of a small visual cue that appeared on the image of the resting speaker. This controlled for the internal speech component of the speechreading task. When speechreading was contrasted with this baseline, the deaf group predominantly activated posterior regions including the cuneus/parahippocampal gyrus (BA18), precuneus (BA31) and posterior cingulate (BA23).

Deaf people can show enhanced cortical activation to displays of visual movement when compared with hearing people [1]¹ and this is commensurate with greater behavioural sensitivity [21]. Although not traditionally considered a movement processing area, Cornette et al. [11] found the posterior cingulate to be sensitive to motion onset. Therefore, it is possible that the posterior activation observed in our previous study may reflect differential sensitivity to movement, rather than speechreading-specific differences. The study reported here attempted to control for this by employing a baseline task that matched more closely the visual dynamic information contained in the speechreading condition. This comprised nonsense lower face movements (gurns), performed at the same rate as speaking, which the participant was required to count. Differential activation between these two conditions should clarify the circuits used by deaf people in watching silent speech compared with watching similar, but non-linguistic, facial actions.

¹ Bavelier et al. focused on MT/MST and found an effect of deafness on peripheral motion detection. It is nevertheless possible that deaf and hearing people show differential cortical sensitivity to movement in demanding tasks of motion detection in central vision, such as those in the present study.

2. Methods

2.1. Participants

Thirteen right-handed participants were tested. Seven were normally hearing adults (two males and five females aged between 21 and 55 years; mean = 29 years) and six were congenitally profoundly deaf (three males, three females aged between 22 and 38; mean = 30 years). All subjects gave written informed consent to participate in the study, which was approved by the Local Research Ethics Committee. The hearing loss (HL) of each deaf participant was quantified using audiometry. Mean HL in the better ear was 110 dB HL (range 101–116 dB HL). All were, therefore, profoundly deaf and had been so since birth. All deaf participants had hearing parents and had attended either mainstream schools, where speech was their main form of communication, or ‘oral’ schools for the deaf, which used speech-based teaching methods. Their dominant form of communication with hearing people in everyday life was through speechreading. All deaf participants performed at or above an age-appropriate level on a test of non-verbal IQ (Block Design, WAIS-R). Educational level was closely matched across the two groups. Five deaf subjects and five hearing subjects had earned degrees. A test of adult speechreading [13] was administered to participants prior to the scan. All deaf subjects (n = 6) scored above 80% on this test. Three hearing subjects also scored above 80% and one scored 48% (three hearing subjects were not tested because of time limitations).

2.2. Experimental design

The tasks were presented in alternating 21 s blocks of experimental and baseline conditions in a run lasting 4 min 54 s. The stimulus material comprised a silent video of a female speaker (including torso) with full face looking straight at the camera.

2.2.1. Experimental condition: silent speechreading
Subjects viewed 10 numbers between 1 and 9 being spoken at random at a rate of one every 2 s. Prior to entering the scanner, each participant had watched the speaker on videotape and repeated each number aloud as they saw it being spoken. Participants performed the same task within the scanner but were instructed to repeat each number covertly to avoid production confounds and maintain their focus on the task. Covert repetition also enabled comparison with previous studies from our group.

2.2.2. Baseline condition: gurning face
The female speaker made closed-mouth ‘gurning’ movements at the same rate as numbers were spoken in the speechreading condition. Participants counted the number of gurns. The task was performed aloud out of the scanner and covertly when in the scanner. The baseline condition, thus, controlled for covert number naming, attention to the face and facial movement in the stimuli.

The videotaped stimuli were projected onto a screen located at the base of the scanner table via a Proxima 8300 LCD projector and viewed in a mirror angled above the subject’s head in the scanner.

2.3. Imaging parameters

Gradient echo echoplanar MRI data were acquired with a 1.5 T General Electric MR system fitted with advanced NMR hardware and software, using a standard quadrature head coil. Head movement was minimised by positioning the subject’s head between cushioned supports. Ninety-eight T2*-weighted images depicting BOLD contrast were acquired with a slice thickness of 7 mm (with 0.7 mm interslice gap). Fourteen axial slices were acquired in each volume to cover the whole brain (TR = 3 s, TE = 40 ms). An inversion recovery EPI dataset was also acquired to facilitate registration of each individual’s FMRI dataset to Talairach and Tournoux space [25]. This comprised 43 slices of 3 mm thickness


(0.3 mm gap), acquired parallel to the AC–PC line (TE = 80 ms, TI = 180 ms, TR = 16 s).
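As a consistency check, the design and acquisition parameters reported above fit together exactly: 21 s blocks sampled at TR = 3 s give seven volumes per block, and the 4 min 54 s run yields the 98 volumes acquired. A minimal sketch of this arithmetic (Python used here purely for illustration; the original analysis did not involve it):

```python
# Consistency check on the reported block design and acquisition parameters.
run_s = 4 * 60 + 54        # run length: 4 min 54 s = 294 s
block_s = 21               # one condition block, in seconds
tr_s = 3                   # repetition time, in seconds

vols_per_block = block_s // tr_s   # 7 volumes per 21 s block
n_blocks = run_s // block_s        # 14 alternating blocks (7 per condition)
n_volumes = run_s // tr_s          # 98 volumes, matching the 98 images acquired

assert n_volumes == n_blocks * vols_per_block == 98
print(vols_per_block, n_blocks, n_volumes)   # -> 7 14 98
```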

2.4. Data analysis

2.4.1. Analysis of group data
Prior to time series analysis, data were processed to remove low-frequency signal changes and motion-related artefacts [5]. The responses at each voxel were then analysed by regressing the corrected time series data on a linear model produced by convolving each contrast vector to be studied with two Poisson functions parameterising haemodynamic delays of 4 and 8 s [14]. Following least squares fitting of this model, a goodness-of-fit statistic composed of the ratio of model to residual sum of squares was calculated [12] for each contrast. The distribution of this statistic under the null hypothesis of no experimental effect was then calculated by randomly permuting the time series at each voxel, refitting the models and combining the resulting data following random permutation at all intracerebral voxels [4]. Activations for any contrast at any required P-value can then be determined by obtaining the appropriate critical values from the null distribution. Generic group activation maps were constructed by mapping the observed and randomised test statistics for each individual into the standard stereotactic space of Talairach and Tournoux [25] and computing and testing median activation maps (as previously described by Brammer et al. [3]).²

² Since the data were smoothed, it is possible that some type 1 error voxels may form clusters. For this reason, only clusters greater than four voxels are reported. Further details of the bootstrap experiment used to determine this cluster size as the appropriate random level for this type of experiment are reported in MacSweeney et al. [18].
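To make the regression and permutation scheme concrete, the sketch below reproduces its logic at a single synthetic voxel: a boxcar for the alternating 21 s conditions is convolved with two Poisson kernels (delays of 4 and 8 s), fitted by least squares, and the ratio of model to residual sum of squares is compared against a null distribution built by permuting the time series. This is a minimal illustration assuming numpy/scipy and simulated data; the published analysis additionally involved motion correction, FPQ estimation and combination across voxels and subjects, all omitted here.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
TR, n_vols, block_vols = 3.0, 98, 7

# Alternating 21 s blocks: 1 = speechreading, 0 = gurn counting.
boxcar = np.tile(np.r_[np.ones(block_vols), np.zeros(block_vols)],
                 n_vols // (2 * block_vols))

def poisson_hrf(delay_s, tr=TR, length=8):
    """Poisson kernel parameterising a haemodynamic delay (in seconds)."""
    t = np.arange(length)
    return poisson.pmf(t, mu=delay_s / tr)

# Linear model: the contrast vector convolved with the two kernels, plus a constant.
X = np.column_stack([
    np.convolve(boxcar, poisson_hrf(4.0))[:n_vols],
    np.convolve(boxcar, poisson_hrf(8.0))[:n_vols],
    np.ones(n_vols),
])

def fit_stat(y, X):
    """Ratio of model to residual sum of squares after least-squares fitting."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    return np.sum((fitted - fitted.mean()) ** 2) / np.sum(resid ** 2)

# One synthetic voxel time series: a weak response plus noise.
y = 0.5 * X[:, 0] + rng.standard_normal(n_vols)

observed = fit_stat(y, X)
# Null distribution: randomly permute the time series and refit.
null = np.array([fit_stat(rng.permutation(y), X) for _ in range(1000)])
p = np.mean(null >= observed)
print(f"fit statistic = {observed:.3f}, permutation p = {p:.3f}")
```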

2.4.2. Group contrast: analysis of variance
The following analysis of variance (ANOVA) model was fitted at each intracerebral voxel in standard space:

FPQ_i,j = β0 + β1 G_j + β2 ΔFPQ_i,j + e_i,j

where FPQ_i,j is the observed fundamental power quotient (FPQ) at the ith voxel in the jth subject, G_j the experimental group factor coding (G_j = 1 for hearing and G_j = −1 for deaf), ΔFPQ_i,j a motion-related covariate and e_i,j an error term (see [5] for details of the regression model incorporating motion correction).

The null hypothesis was tested by comparing the coefficient β1 to critical values of its non-parametrically obtained null distribution [6]. Critical values for a two-tailed test of size α are the 100 × (α/2)th and 100 × (1 − (α/2))th percentile values of this distribution. This ANOVA model tested between-group differences in brain activation maps at a significance level of P = 0.01.
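A voxel-level sketch of this test on synthetic data for the 13 subjects (seven hearing, six deaf): the group coefficient β1 is re-estimated under random relabelling of the groups to build the null distribution, and the observed value is compared against the percentile critical values described above. Variable names and the simulated data are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def group_beta(fpq, g, dfpq):
    """Fit FPQ = b0 + b1*G + b2*dFPQ + e at one voxel; return b1."""
    X = np.column_stack([np.ones_like(g), g, dfpq])
    beta, *_ = np.linalg.lstsq(X, fpq, rcond=None)
    return beta[1]

# Synthetic data for one voxel: 7 hearing (G = +1), 6 deaf (G = -1).
g = np.r_[np.ones(7), -np.ones(6)]
dfpq = rng.standard_normal(13)                # motion-related covariate
fpq = 0.8 * g + 0.3 * dfpq + rng.standard_normal(13)

observed = group_beta(fpq, g, dfpq)

# Non-parametric null distribution: permute group labels and refit.
null = np.array([group_beta(fpq, rng.permutation(g), dfpq)
                 for _ in range(5000)])

# Two-tailed test of size alpha = 0.01: 0.5th and 99.5th percentiles.
lo, hi = np.percentile(null, [0.5, 99.5])
print(f"beta1 = {observed:.3f}, critical values = ({lo:.3f}, {hi:.3f})")
print("significant" if not (lo <= observed <= hi) else "not significant")
```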

2.4.3. Conjunction analysis
This was carried out on the current study and our previous study of speechreading in deaf people [18] (for more details see Section 3.3). A regression model was fitted to data from both experiments to determine the residual levels of activation in each voxel after removal of experiment-dependent effects. Conjunctions, context-independent activations, are areas in which these residual activations have an acceptably low voxel-wise probability (P < 0.001) under the null hypothesis. Here, the null hypothesis is that speechreading in both studies does not produce significant neural activation following suitable multiple comparison correction. For more details of the analysis methodology see Rubia et al. [24].
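One way to read this procedure, sketched per voxel on synthetic data: fit a model containing an experiment-dependent term, take the shared (context-independent) component that remains, and test it against a permutation null in which neither study produces activation. This is an interpretive sketch under stated assumptions (numpy, illustrative data, a sign-flipping null), not the exact pipeline of Rubia et al. [24].

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative per-subject activation statistics at one voxel for the
# six deaf subjects, measured in the two experiments.
exp1 = rng.normal(0.6, 1.0, 6)   # current experiment (gurn baseline)
exp2 = rng.normal(0.7, 1.0, 6)   # previous experiment (still baseline)

y = np.concatenate([exp1, exp2])
experiment = np.r_[np.zeros(6), np.ones(6)]      # experiment indicator

# Model: a shared activation term plus an experiment-dependent effect.
X = np.column_stack([np.ones(12), experiment - experiment.mean()])

def shared_activation(yy):
    """Shared component remaining after removing the experiment effect."""
    beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
    return beta[0]

observed = shared_activation(y)

# Null: no activation in either study; build the distribution by
# randomly flipping the sign of each subject's statistic.
null = np.array([shared_activation(y * rng.choice([-1, 1], size=y.size))
                 for _ in range(10000)])
p = np.mean(null >= observed)
print(f"shared activation = {observed:.3f}, voxel-wise p = {p:.4f}")
```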

3. Results

3.1. Hearing and deaf: group effects

The hearing group showed extensive activation of the auditory speech processing network within left lateral temporal cortices during speechreading. They demonstrated bilateral activation of the superior temporal sulci (see Table 1) extending into middle and superior temporal gyri, incorporating BA42 (secondary auditory cortex). In contrast, the deaf group did not display any significant activation in the temporal lobes during speechreading. This difference was supported statistically by the ANOVA, which showed significantly greater activation in the hearing group than the deaf group in the superior temporal sulci bilaterally during speechreading (see Table 2).

Despite these differences in temporal lobe activation during speechreading, both groups showed very similar patterns of temporal lobe activation during gurn counting. Both deaf and hearing groups activated inferior posterior temporal cortex (BA37) during gurn counting (data not shown; see [10]). This was bilateral in the deaf (right: x, y, z = 43, −58, −7; left: x, y, z = −55, −56, −2) and right-lateralised in the hearing (x, y, z = 43, −61, −7).

3.2. Speechreading in the deaf group

Of primary interest were the areas activated by deaf people during speechreading. The largest clusters of significant activation were in cingulate cortex. Activation of the posterior cingulate was significantly greater in the deaf group than the hearing during speechreading (x, y, z = 3, −44, 26; see Table 2). Activation of this region during speechreading concurs with our previous study in which the deaf group, but not the hearing group, activated posterior cingulate (x, y, z = −3, −56, 15) during speechreading, compared with a low level baseline task.

3.3. Additional conjunction analysis

Given the replication of our previous results, in which the same subjects were tested but a different baseline task was used, we carried out a conjunction analysis of the deaf data from this study and the earlier one [18].


Table 1. Regions active during speechreading within each group relative to gurning.ᵃ

Cerebral region (Brodmann area)            No. of voxels   P<          x     y     z

Hearing group
Right STS (21/22)                          35              0.00005    59   −20    −7
Left STS (21/22)                           27              0.00005   −56   −17    −1
Left superior temporal gyrus (22)           9              0.00005   −61   −37     8
Left superior temporal gyrus (22)           6              0.0005    −59   −25     7
Right anterior cingulate (32)               6              0.001       4    16    35
Left lingual gyrus (17)                     5              0.001      −2   −86     4
Left middle temporal gyrus (21)             5              0.0005    −54   −34     4

Deaf group
Right posterior cingulate (23)             52              0.00005     3   −45    25
Right anterior cingulate (24/32)           28              0.00005     9    36    −5
Right cuneus (18)                          15              0.0005     11   −86    15
Right precuneus (7)                         9              0.0005      3   −58    42
Left posterior cingulate (31)               6              0.0005     −9   −58    26
Right cuneus (17)                           5              0.0005      2   −80     9
Left posterior cingulate/cuneus (31/17)     5              0.0005     −1   −63     9
Left lingual gyrus (18)                     5              0.001      −9   −91    −8

ᵃ Co-ordinates (mm) represent the centroid of each three-dimensional cluster.

Table 2. ANOVA: regions that differ in speechreading-specific activation between deaf and hearing groups (P < 0.01).ᵃ

Cerebral region (Brodmann area)                No. of voxels     x     y     z

Hearing greater than deaf
Right superior temporal sulcus (21/22)         14               55   −17    −7
Right superior temporal sulcus (22)             6               61   −19    −2
Left superior temporal sulcus (21/22)           5              −61   −31     9
Left superior temporal sulcus (21/22)           5              −58   −17    −2
Right posterior inferior temporal lobe (37)     5               46   −67    −7

Deaf greater than hearing
Left inferior/middle occipital gyrus (18)       8              −35   −83     4
Right posterior cingulate (31)                  7                3   −44    26

ᵃ Co-ordinates (mm) represent foci of the peak activation in each two-dimensional cluster.

Table 3. Conjunction analysis: regions activated by speechreading in deaf subjects in the current experiment (gurning vs. speechreading) and the previously reported experiment (still baseline vs. speechreading [18]) (P < 0.001).ᵃ

Cerebral region (Brodmann area)        No. of voxels     x     y     z

Left posterior cingulate (23/30)       27               −1   −58    17
Left lingual gyrus (18/19)             24              −16   −56     5
Right hippocampal gyrus (19/30)        18               22   −48     4
Right superior temporal gyrus (22)     15               59    −4     6
Right posterior cingulate (23)          9                3   −36    26
Left postcentral gyrus (1/2/3)          9              −42   −18    36
Right posterior cingulate (30)          7               11   −54    10
Left cuneus (18)                        7              −10   −87    18
Right postcentral gyrus (1/2)           7               42   −22    48
Right medial frontal gyrus (6)          5               12   −18    48

ᵃ Co-ordinates (mm) represent centroids of three-dimensional clusters.


Fig. 1. Conjunction analyses: regions significantly activated by speechreading in deaf people across two experiments with differing baselines. These include posterior cingulate (BA23/30), bilateral lingual/hippocampal gyri (BA18/19) and the right superior temporal gyrus (BA22) (see also Table 3). Contiguous axial sections are shown from z = +3.5 to +20 mm. The data are shown superimposed on a high-resolution anatomical image in radiological convention, so that the left of the image corresponds to the right side of the brain.

The aim was to identify the essential areas recruited by deaf people during speechreading. This analysis identifies the regions significantly activated by speechreading, independent of the baseline conditions used.

This analysis showed extensive activation of the posterior cingulate (BA23/30) and basal occipito-temporal regions bilaterally, incorporating the lingual/hippocampal gyri (BA18/19). There was also significant activation in the right superior temporal gyrus (BA22) (see Table 3, Fig. 1). A conjunction analysis of data from hearing participants in the current and previous experiments did not generate activation in the posterior regions identified in the deaf group. Rather, as predicted from previous studies, the analysis highlighted extensive bilateral superior temporal activation (BA22/42) focusing on the superior temporal sulci.

4. Discussion

In contrast to hearing people, the deaf group showed no consistent pattern of activation in the left temporal lobe during speechreading, but rather increased posterior cerebral activation. These data support our previous findings [18]. Moreover, the conjunction analysis suggests that this pattern is not a reflection of contrasts with particular baseline tasks. The group differences observed here appear to be specific to linguistic mouth movements, since both deaf and hearing groups recruited occipito-temporal regions during gurn processing (BA37).³ Although our behavioural data are incomplete, it seems unlikely that these results can be accounted for in terms of differences in task performance. Previous research suggests that deaf and hearing speechreaders perform similarly on identification of items when the target set is limited (e.g. numbers 1–9; for further discussion see [18]).

³ We discuss the possible role of this region in gurn processing in hearing individuals at length elsewhere [10].

The main purpose of this study was to explore the system commonly used by deaf people to process silent speech. The current data strengthen our hypothesis that if hearing is absent from birth, the speech processing system develops idiosyncratically within the left temporal lobe. In contrast, although no significant activation was observed in the right temporal lobe in the deaf group in the study reported here, activation in this area (BA22) was observed in the more sensitive conjunction analysis of this group together with the previous study. Right temporal lobe speech processing circuits may be less affected by profound congenital deafness than those in the left temporal lobe. Right temporal activation in the deaf group extended into the tip of Heschl’s gyrus (x, y, z = 61, −11, 9), traditionally classed as secondary auditory cortex (BA42). This area has recently been shown to be responsive to sign language in native deaf signers [19,22]. The current data support the view that this may be a key region for the analysis of segmental aspects of visible language, whether signed or spoken.

The data presented here, and in the previous study, have highlighted the posterior cingulate cortex as activated by silent speechreading in deaf but not hearing people. This activation appears to be specific to speechreading rather than facial movement in general. We conclude this report with a speculative account of the function of the posterior cingulate in relation to these findings.

4.1. The possible role of posterior cingulate cortex in speechreading by deaf people

Lockwood et al. [16] reported activation in the posterior cingulate cortex during the detection of quiet (30 dB) tones. They argued that this area may act as a ‘volume control’, increasing the signal-to-noise ratio in non-optimal processing conditions. The posterior cingulate also plays a role in memory processing [26]. Maguire et al. [20] report activation in posterior cingulate specific to the integration of contextual, stored information with on-line processing of an auditory story. Both these studies suggest a role of the posterior cingulate in the co-ordination of stored information and modulation of on-line stimulus processing, whether visual or auditory, under non-optimal conditions. Under these circumstances, stored representations may be invoked by the posterior cingulate to enhance on-line processing. In deaf speechreaders, the posterior cingulate may be responsible for integrating stored speech knowledge with the incoming visual speech stimulus. Co-activation of visual association/memory areas in lingual/hippocampal gyri may be due to modulation of responses in this area by the posterior cingulate. The hippocampal gyri (BA18/19) have been implicated in visual imagery, indicating their sensitivity to such ‘top–down’ influences [15]. Moreover, in this study, the deaf group activated the right hippocampus (x, y, z = 29, −42, 4; number of voxels = 2), suggesting a role for memory processing during speechreading by deaf people. Deaf people may recruit this hypothesised ‘top–down’ system for all speechreading tasks regardless of the level of difficulty. In contrast, the data from Lockwood et al. [16] suggest that hearing individuals may only recruit this system under more demanding speech processing conditions, for example auditory speech processing in noise, or silent speechreading of an open stimulus set.

Whether or not this interpretation of the function of the posterior cingulate in speechreading by deaf people is correct remains to be seen. However, it seems clear from the results reported here that the speech processing network in people born deaf is considerably different from that seen in hearing people. As might be predicted, there is greater dependence on visual processing areas to understand silent speech. The speechreading processing system in deaf people appears to be predominantly right-lateralised, extending into superior temporal structures, including secondary auditory cortex. Failure to find left temporal activation in deaf speechreaders supports the suggestion that auditory speech experience is necessary to shape the speech processing network in the left temporal lobe.

Acknowledgements

This work was supported by the Medical Research Council (UK) Grant No. G 9702921N. M. MacSweeney is currently supported by a Wellcome Trust Advanced Training Fellowship. We are grateful for the help of Trudi Collier in the testing of deaf volunteers and to Dr. Dominic Ffytche for comments on an earlier draft of this paper.

References

[1] Bavelier D, Tomann A, Hutton C, Mitchell T, Corina D, Liu G, et al. Visual attention to the periphery is enhanced in congenitally deaf individuals. Journal of Neuroscience 2000;20:U1–6.

[2] Bernstein LE, Demorest ME, Tucker PE. Speech perception without hearing. Perception and Psychophysics 2000;62:233–52.

[3] Brammer MJ, Bullmore ET, Simmons A, Williams SCR, Grasby PM, Howard RJ, et al. Generic brain activation mapping in functional magnetic resonance imaging: a nonparametric approach. Magnetic Resonance Imaging 1997;15:763–70.

[4] Bullmore ET, Brammer MJ, Williams SCR, Rabe-Hesketh S, Janot N, David AS, et al. Statistical methods of estimation and inference for functional MR image analysis. Magnetic Resonance in Medicine 1996;35:261–77.

[5] Bullmore ET, Brammer MJ, Rabe-Hesketh S, Curtis VA, Morris RG, Williams SCR, et al. Methods for diagnosis and treatment of stimulus correlated motion in generic brain activation studies using FMRI. Human Brain Mapping 1999;7:38–48.

[6] Bullmore ET, Suckling J, Overmeyer S, Rabe-Hesketh S, Taylor E, Brammer MJ. Global, voxel and cluster tests, by theory and permutation, for a difference between two groups of structural MR images of the brain. IEEE Transactions on Medical Imaging 1999;18:32–42.

[7] Calvert GA, Brammer MJ, Bullmore ET, Campbell R, Iversen SD, David AS. Response amplification in sensory-specific cortices during cross-modal binding. Neuroreport 1999;10:2619–23.

[8] Calvert GA, Bullmore ET, Brammer MJ, Campbell R, Williams SCR, McGuire PK, et al. Activation of auditory cortex during silent lipreading. Science 1997;276:593–6.

[9] Calvert GA, Campbell R, Brammer MJ. Evidence from functional magnetic resonance imaging of cross-modal binding in the human heteromodal cortex. Current Biology 2000;10:649–57.

[10] Campbell R, MacSweeney M, Surguladze S, Calvert GA, McGuire PK, Brammer MJ, David AS, Suckling J. Cortical substrates for the perception of face actions: an FMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning). Cognitive Brain Research 2001;12:233–43.

[11] Cornette L, Dupont P, Spileers W, Sunaert S, Michiels J, Van Hecke P, et al. Human cerebral activity evoked by motion reversal and motion onset: a PET study. Brain 1998;121:143–57.

[12] Edgington ES. Randomisation tests. 3rd ed. New York: Dekker, 1995.

[13] Ellis TJ. Test of adult speechreading (TAS). BSc Thesis, University of Newcastle-upon-Tyne, 1999, unpublished.

[14] Friston KJ, Josephs O, Rees G, Turner R. Nonlinear event-related responses in FMRI. Magnetic Resonance in Medicine 1998;39:41–52.

[15] Howard RJ, Ffytche DH, Barnes J, McKeefry D, Ha Y, Woodruff PW, et al. Colour imagery and perception. Neuroreport 1998;6:1019–23.

[16] Lockwood AH, Salvi RJ, Coad ML, Arnold SA, Wack DS, Murphy BW, et al. The functional anatomy of the normal human auditory system: responses to 0.5 and 4.0 kHz tones at varied intensities. Cerebral Cortex 1999;9:65–76.

[17] MacSweeney M, Amaro E, Calvert GA, Campbell R, David AS, McGuire PK, et al. Activation of auditory cortex by silent speechreading in the absence of scanner noise: an event-related FMRI study. Neuroreport 2000;11:1729–34.

[18] MacSweeney M, Campbell R, Calvert GA, McGuire PK, David AS, Suckling J, et al. Dispersed activation in left temporal cortex for speechreading in congenitally deaf people. Proceedings of the Royal Society London B 2001;268:451–7.

[19] MacSweeney M, Woll B, Campbell R, McGuire PK, David AS, Williams SCR, et al. Neural systems underlying British sign language and audiovisual English processing in native users, submitted for publication.

[20] Maguire EA, Frith CD, Morris RGM. The functional neuroanatomy of comprehension and memory: the importance of prior knowledge. Brain 1999;122:1839–50.

[21] Neville HJ, Lawson D. Attention to central and peripheral visual space in a movement detection task: an event-related potential and behavioural study. Part I. Hearing subjects. Part II. Congenitally deaf adults. Brain Research 1987;405:253–83.


[22] Petitto LA, Zatorre RJ, Gauna K, Nikelski EJ, Dostie D, Evans AC. Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proceedings of the National Academy of Sciences USA 2000;97:13961–6.

[23] Rönnberg J. What makes a skilled speechreader? In: Plant G, Spens K, editors. Profound deafness and speech communication. London: Whurr Publications, 1995. p. 393–416.

[24] Rubia K, Russell T, Overmeyer S, Brammer MJ, Bullmore ET, Sharma T, et al. Mapping motor inhibition: conjunctive brain activations across different versions of go/no-go and stop tasks. Neuroimage 2001;13:250–61.

[25] Talairach J, Tournoux P. Co-planar stereotaxic atlas of the human brain. New York: Thieme Medicine, 1988.

[26] Wiggs CL, Weisberg J, Martin A. Neural correlates of semantic and episodic memory retrieval. Neuropsychologia 1999;37:103–18.