
Can the Right Hemisphere Speak?

Chris Code

School of Communication Disorders

Faculty of Health Sciences

University of Sydney

Lidcombe, NSW 2141

Australia

Ph: (02) 646 6450

Fax: (02) 646 4853

Email: [email protected]

(Paper appeared in Special Issue of

BRAIN & LANGUAGE Edited by Diana Van Lancker)


Abstract

While a right hemisphere capacity for language and language-related functions is established, a role for the right hemisphere in speech production is controversial. The question of the nature of a possible right hemisphere speech production capability has centred mainly on the Jacksonian notion of nonpropositional speech. In this paper I examine whether the right hemisphere does have a particular role in nonpropositional speech through an exploration of the neurophysiological evidence, research on aphasic speech automatisms and the degree of propositionality in the retained speech of adult left hemispherectomy patients.


Introduction

The final dogma of the dominance model is the notion that, while the left

hemisphere may not control all language processing, what it does do that the

right cannot do is speak. While there is little doubt that the normal right

hemisphere is engaged in language processing, a role for the right hemisphere

in speech production is controversial.

After some 30 years of research it now appears certain that the right cerebral

hemisphere in human beings is involved in language processing (for reviews see

Code, 1987; Chiarello, 1988; Joanette, Goulet & Hannequin, 1990). However, for

100 years, since the earliest days of modern neuropsychology, it was considered

impossible for the right hemisphere to possess any language functioning at all

(see Bogen, 1969). The question currently concerns the extent and nature of

right hemisphere involvement in language. The significant barrier to answering

questions concerning extent and nature results in large measure from the

population and methodological differences between studies. There has been

research with normal subjects, stroke and head injured patients, epileptic

patients, hemispherectomies and commissurotomies, in groups and as single

cases. Investigators have used a range of methodologies which have included

neuropathology, lesion studies, cerebral bloodflow and metabolism, electrical activity and electrical stimulation, and the effects of anaesthetizing the

hemispheres. A variety of behavioural methods that are thought to measure 'lateral advantage' for a wide range of language tasks and materials have been developed. These methods have been used to determine the contribution of

the left and right hemisphere to language processing by examining and

measuring laterality effects in the auditory, visual and tactile modalities; and by

measuring eye and eyebrow movements, finger tapping, and degree of lateral

mouth opening during speech. Thus, there is built-in incompatibility in much

neuropsychological research with a lack of agreement between studies and a

poor replication record (Code, 1987). So, the data from these studies are mainly inconclusive. A further complication is that there are significant

individual differences in cognitive style and response to experimental tasks

(Segalowitz and Bryden, 1983), brain organisation and representation of

language (Ojemann, 1979), and cerebral circulation (Hart, Lesser, Fisher,

Schwerdt, Bryan and Gordon 1991).


Holding these substantial obstacles to firm conclusions in mind, what does the

research tell us about the right hemisphere's role in language processing in the

normal brain? The study of aphasic symptomatology over the last one hundred

years or so confirms that damage to the left hemisphere for the great majority of

right handers and most left handers results in impairments of those aspects of

language which can be characterised through a formal unit-and-rule

generative linguistic model. The foundations of neuropsychology, from the time

of Broca to the present day, are based predominantly on the data that brain

damaged people provide. The brain, of course, can suffer damage in a number

of unfortunate ways, and the nature of the damage that it suffers bears directly

upon the character of the disruption to cognition and the behaviour that is

observed. The neural organization of language, or any other aspect of

cognition, is unlikely to be the same in a brain with a history of epilepsy where

the two hemispheres have been split by surgery, a brain where a hemisphere has been isolated by surgical removal of its neighbour, and a brain damaged by

cerebrovascular accident or progressive disease process. The source of the

evidence then should be borne in mind when evaluating its relevance to the

role of the right hemisphere in speech.

In this paper we examine the question of a right hemisphere speech capability

through a critical evaluation of the main evidence. We will focus mainly on the

claim, originating with Hughlings Jackson, that the right hemisphere has a role in

the origination and generation of nonpropositional and automatic speech. First,

we outline the nature of automatic and nonpropositional speech; second, we

examine neurophysiological evidence for right hemisphere involvement in

automatic speech production; the third section assesses the idea that aphasic

speech automatisms are products of the right hemisphere; lastly, the nature of

remaining speech in adult left hemispherectomy subjects is evaluated.

1. Nonpropositional and Automatic Speech

Much of our general behaviour is routine and automatically produced

(MacNeilage, 1970; Shallice, 1988). A great deal of our mental and motor

activity is not under conscious control. In speech production there is much that is

automatic and routine, despite the originality and creativity of human language

(MacNeilage, 1979). Lenneberg (1967) pointed out over 25 years ago that


thousands of muscular contractions take place during every second of speech,

and these involve complex interactive muscular activity at respiratory,

articulatory, laryngeal and pharyngeal levels. Much of our speech is not under

moment-to-moment segmental control, with each segment being individually

and sequentially executed. The rapid and proficient production of speech that

we are capable of would be physiologically impossible if we had to plan

and perform each segment individually. Speech appears to be under a mixture

of closed-loop and open-loop control (Kozhevnikov & Chistovitch, 1966). Under

closed-loop control, speech is feedback-controlled, segmentally planned and

produced whereas in open-loop control whole chunks are holistically formulated

and executed. Despite the physiological and mechanico-inertial constraints of

the neuromuscular system, normal speech is produced with relative speed and

fluency. This means that a significant amount of automaticity characterizes

speech production.
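The closed-loop/open-loop distinction can be caricatured in a few lines of code. The sketch below is purely illustrative (the function names and the 'feedback check' are my own stand-ins, not a model from Kozhevnikov & Chistovitch, 1966): novel material is executed segment by segment under monitoring, while an overlearnt formula is run off as a single stored unit.

```python
# Purely illustrative caricature of the two speech control modes; all names
# and the 'feedback check' are stand-ins, not from the cited literature.

def execute_closed_loop(segments):
    """Closed-loop control: plan, execute and feedback-check each segment
    individually and sequentially."""
    produced = []
    for seg in segments:
        produced.append(seg)            # plan and execute one segment
        assert produced[-1] == seg      # stand-in for a sensory feedback check
    return "".join(produced)

def execute_open_loop(stored_chunk):
    """Open-loop control: a holistically formulated chunk is retrieved and
    run off as a unit, with no segment-by-segment monitoring."""
    return stored_chunk

print(execute_closed_loop(list("a novel utterance")))  # segment by segment
print(execute_open_loop("have a nice day"))            # one sealed unit
```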

Jackson’s (1874) observations of brain damaged patients convinced him that

language could be contrasted in terms of its propositionality and automaticity.

He also introduced the idea that the left hemisphere was responsible for

processing propositional language whereas both right and left were involved in

the processing of nonpropositional language: 'The right hemisphere is the one for the most automatic use of words, and the left the one in which automatic use of words merges into voluntary use of words - into speech' (Jackson, 1874, pp. 81-82).

The idea that language can be distinguished in terms of its automaticity and

propositionality has been utilised by a number of writers since Jackson

(Goldstein, 1948; Goldman-Eisler, 1968; Van Lancker, 1987, 1993; Code, 1987,

1991; Wray, 1992). It is argued (Van Lancker, 1987, 1993; Code, 1987, 1991; Wray,

1992) that nonpropositional, holistically processed, formulaic language does not

entail straight linguistic, unit-and-rule analysis and synthesis and does not

engage components of a generative grammar. Propositionality appears to be a

feature of natural language use, although it is not a variable which can be

easily manipulated in the psycholinguistics laboratory.

The main features of more nonpropositional and automatic language are

invariance of production and a nonsegmental and holistic construction. Verbal


activities like recitation, counting, listing the days of the week and the months of

the year and rote repetition of arithmetic tables are low in propositionality; they

do not involve the generation and processing of new ideas and their conversion

into original utterances and they are very familiar. Van Lancker (1993) recently

listed examples of nonpropositional speech as idioms, slang, cliches, social

formulaic speech, rote learnt recited speech and serial speech. Such regularly

used idioms as 'Have a nice day', 'Good to see you' and 'By the way' are most

probably processed as single lexical items, as a complete holistic package, as

'sealed units'. Van Lancker (1993) suggests that the idiom is the clearest example

of nonliteral language as it is not interpretable by the simple application of

grammatical rules. The contents of popular songs, particularly the titles and

choruses of popular songs, often utilize idioms such as 'Walkin' the Dog', 'What

Goes Up Must Come Down', 'What the World Needs Now', 'Give Peace a

Chance', 'The Way We Were', 'Here Comes the Sun'. The known role of the right

hemisphere in the processing of music, especially in non-musicians (see Code,

1987), suggests a more than coincidental relationship between successful

popular songs and idiomatic and slang language. Proverbs are interesting as

they appear to have a literal and a nonliteral meaning. For instance 'Rome

wasn't built in a day' at a literal concrete level means that it took more than a

day to build Rome. A more abstract nonliteral meaning is that it takes a long

time to complete major tasks. However, proverbs are never used in their literal

sense. Right hemisphere damaged patients (Hier & Kaplan, 1980) and patients

with recently diagnosed dementia (Code & Lodge, 1987) have problems with

nonliteral interpretation of proverbs.

Formulaic language is not devoid of meaning, and when such speech is used

more propositionally and appropriately, then the phonological representation

for the utterance would be activated by semantic representation. Much of it is

very high in expressive significance. Many nonpropositional communications, like

social greetings, have a major pragmatic function, and idioms express familiar

messages. Although automatic, therefore, the degree of 'propositionality'

inherent in automatically produced language must be variable and situation-

specific. Features of nonpropositional language may be evolutionarily pre-

linguistic. Much of it appears to be concerned with social and emotional

aspects of communication and expression which pre-exist the capacity in

human beings to generate fully predicative propositional language.


It was Jackson's observations of aphasic speech automatisms and recurring

utterances which led him to propose the idea of propositionality. Jackson (1889)

saw aphasic speech automatisms as primitive and automatic behaviour and the

expression of levels lower down the neural hierarchy which have been released

from higher level inhibition. We explore these utterances in a later section, but

first examine the evidence from investigations using a range of

neurophysiological techniques.

2. The Neurophysiological Evidence

There is little evidence that electrical stimulation of the right hemisphere

produces speech (see Ojemann, 1983). Stimulation of motor cortex impairs the

ability to mimic single orofacial movements, sometimes with arrest of speech.

Right hemisphere stimulation rarely causes changes in mimicry of more

sequential oral movements. The classic studies of Penfield and Roberts (1959)

showed that stimulation of right and left hemisphere lip, tongue and jaw areas

precentrally (and to some degree post-centrally) caused vocalization of a fairly

unspecified and sustained vowel cry, 'which at times may have a consonant

component' (p.120). The first studies during speech using the regional cerebral

bloodflow (rCBF) technique by the Lund group in Sweden (Ingvar & Schwartz,

1974; Larsen, Skinhoj and Lassen, 1978; Skinhoj and Larsen, 1980) showed that the

right hemisphere was active during automatic speech tasks. Larsen et al. (1978)

examined automatic counting in 18 right handed, apparently neurologically

normal, subjects. The rCBF showed no significant differences between right and

left hemispheres. The bloodflow was predominantly in the upper premotor and

sensorimotor mouth areas and the auditory areas of the temporal lobes, with no

significant activation of Broca's areas on either side. Right hemisphere bloodflow

appears to be more diffuse during automatic speech tasks than in the left

(Skinhoj and Larsen, 1980). More recently, using the same technique, the same

group examined 15 nonaphasic subjects with some neurological symptoms or

history, all right handed, on reciting the days of the week and humming a

nursery rhyme with a closed mouth (Ryding, Bradvik & Ingvar, 1987). Significantly

more activity was observed in the right than left hemisphere during automatic

speech (p<.001) but not for humming which showed equal bilateral activation.

The investigators concluded that, while the left hemisphere showed more activity in Broca's area, suggesting greater concern with control of the mouth and tongue, the right hemisphere appeared to be mainly involved in laryngeal motor control.

Is there any evidence for a superior role for the right hemisphere in automatic

speech production? Using the Wada technique, Milner and associates (Milner,

Branch and Rasmussen, 1966; Milner, 1974) showed that 7 from 17 left handed

(neurologically impaired) subjects with bilateral representation for speech

production made errors in serial counting forwards and backwards and reciting

the days of the week following right side anaesthesia. However, following left

side injection the subjects made errors in naming and not in automatic speech.

For 2 other subjects from the same group, naming errors occurred with right

hemisphere anaesthesia and automatic speech errors with left hemisphere

injection. Kurthen, Linke, Elger and Schramm (1992) reported on a small proportion of

left dominant epileptic subjects (5 out of 148) undergoing Wada investigation

who perseverated on counting while the left hemisphere was anaesthetized.

They suggest that this surprising finding is best explained as a continuation by the

right hemisphere of a programme originating in the left hemisphere.

Speedie, Wertman, Ta'ir and Heilman (1993) recently described a right-handed

Hebrew-French bilingual patient whose automatic speech was disrupted

following haemorrhage involving the right basal ganglia. The patient was not

aphasic but had marked difficulties counting up to 20 and reciting the Hebrew

prayers and blessings before eating that were so familiar to him that he had

recited them daily throughout his life. He was unable to recite the intoned

prayers or sing highly familiar songs, although he was able to correctly hum

some. His comprehension of emotional prosody was intact but he could not

produce emotional prosody. While he had never been an habitual swearer he

had sworn occasionally. His ability to swear and curse was also impaired

following the basal ganglia lesion. He was unable to provide the appropriate

expletive for particular situations or complete a curse. Despite these

impairments in production he was able to comprehend the automatic and

nonpropositional speech he could not produce. At 3 years post-onset he had

not recovered these automatic and nonpropositional speech abilities. This case

would appear to be the first to demonstrate a dissociation between

nonpropositional and propositional speech and provide evidence of actual right


hemisphere dominance for automatic and nonpropositional aspects of speech.

Speedie et al. (1993) consider a possible explanation for their patient's

impairments is that the lesion disrupted limbic system input to automatic speech

processes which impaired production while leaving comprehension intact. The

available evidence therefore suggests that a) there are individual differences

and that some people may have far superior right hemisphere processing of

nonpropositional speech or b) the nature, as well as the extent, of the neural

damage in some patients determines that the right hemisphere is involved to a

larger extent in nonpropositional speech than in other patients.

3. Aphasic Speech Automatisms

Broca's (1861) aphasic patient Leborgne is probably the most familiar aphasic

patient in history; so familiar that he is best known to most simply by his nickname 'Tan'. He acquired the name because this was the meaningless utterance

that he produced most times he attempted speech. Tan is often cited as the first

example in the literature of an aphasic speech automatism, although Lebrun

(1986) cites a patient with the expletive 'Sacré nom de Dieu' ('holy name of God') described by

Aubertin just one week before Broca's case at a meeting of the same French

Anthropological Society. The patient with such a speech automatism, like

Leborgne, is often described as severely, often globally, aphasic in all modalities

(Alajouanine, 1956; Code, 1982a), with severe deficits in the ability to utilise

syntax, semantics and phonology in expression or comprehension in any

modality. For most patients the lesion has effectively destroyed most of the

language system. However, there are indications that certain processes, mainly

basic writing skills in the reported cases, can be partially preserved in some

individuals with CV type recurring utterances (Blanken, Dittmann, Hass &

Wallesch, 1988; Blanken, Wallesch & Papagno, 1990; Blanken, de Langen,

Dittmann & Wallesch, 1989; Kremin, 1987). Despite these retained abilities, these

utterances are associated with severe 'motor' aphasia, Broca's aphasia in

classical terminology, or severe apraxia of speech and aphasia, and they do

not occur in the fluent types of aphasia. The utterance is not an occasional one

for most patients and for some it is the only utterance they can produce (Code,

1982a).

General support for Jackson's view that automatic speech can be significantly


preserved in aphasia comes from recent research. Lum and Ellis (1994)

examined the nonpropositional speech skills of a group of 28 aphasic subjects

ranging in type and severity. They compared performance on a range of

nonpropositional tasks (e.g., counting, reciting days of the week, months of the

year, nursery rhymes, repetition of familiar phrases and cued picture naming of

familiar phrases) with some matched propositional tasks. They found that

counting 1-10 and cued naming (e.g., 'As green as...') showed a particularly

clear advantage over their propositional counterparts. More severely aphasic

patients showed a greater advantage for counting and reciting days of the

week. They too agree that nonpropositional speech is mediated via

phonological or motoric mechanisms and involves little or no semantic

mediation.

Jackson (1874, 1879) was the first to write extensively about speech

automatisms, and a number of terms have been used to describe them

(Wallesch, 1990; Code, 1991, 1994a). In contemporary research speech

automatism is the general term used for stereotyped and inappropriate

utterances, whether lexical or non lexical utterances, whereas recurring

utterance is used to refer to the non lexical variety made up of concatenated

CV syllables (Blanken, 1991; Wallesch, 1990). A range of pathological reiterative

utterances can be observed in neurological and psychiatric populations,

including verbal perseveration, echolalia, palilalia, ictal speech automatisms in

epilepsy and coprolalia in Gilles de la Tourette syndrome (see Wallesch, 1990;

and Code, 1991; for reviews). These utterances are not of central concern to this

paper, although many share features and may be closely related. The central

concern here is with those speech automatisms which are a common symptom

of 'motor' forms of aphasia.

Common identifiable subtypes of lexical speech automatisms are expletives,

proper names, yes/no and serial numbers. Interestingly, while there are usually

no apparent semantic or pragmatic connections between the utterance and

the patient's world, proper names are sometimes traceable to a relative of the

patient (Code, 1982a). The most common subtype observed is probably the

pronoun+verb type (Code, 1982a). Here a pronoun is combined with an auxiliary

or modal verb, and sometimes, one or two other words. Additionally, and

intriguingly, the most common word in this subtype is "I". These utterances


appear as very personal and emotional expressions, often executed with great

feeling and frustration. Often they are functionally as well as syntactically

incomplete, although of those speech automatisms which do make complete, if

simple, sentences, the pronoun+verb subtype is the most common. An

interesting fact is that in Code (1982a) 3 separate patients from 3 separate

clinical settings produced the same utterance ("I want to.."). The probability of

this happening purely by chance would appear to be very low indeed. This

subtype in particular illustrates the very restricted semantic range utilised in

lexical speech automatisms.
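A rough illustration of why this coincidence is striking (the repertoire size N below is my assumption, not a figure from Code, 1982a, and the calculation ignores how many patient triples might have been compared): if each patient's automatism were drawn independently and uniformly from even a modest repertoire of N candidate utterance openings, the chance that two further patients reproduce the first patient's exact utterance would be

```latex
% Illustrative only: N = 500 is a hypothetical repertoire size.
P = \left(\frac{1}{N}\right)^{2} = \frac{1}{500^{2}} = 4 \times 10^{-6}
```

That is, of the order of one in a quarter of a million, which is the intuition behind the claim.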

Lexical speech automatisms are syntactically correct structures in the

overwhelming majority of cases. The utterances do not break the syntactic rules

of English. With few exceptions the initial words are syntactically stressed content

words. Although it is not possible to be sure of the syntactic function of words in

speech automatisms, or even if the words have a syntactic function, taking the

words at their face value shows that most are either nouns, pronouns or verbs.

With the exception of expletives and proper names (which do not appear in

frequency counts of normal conversational English) the words which make up

these utterances are all high frequency. Although many automatisms are single,

often repeated, lexical items there are some syntactically complete sentences,

for instance 'now wait a minute', and 'I bin to town' (Code, 1982a). The majority

of pronoun+modal/aux. utterances may fail to complete because of an

inadequate lexical specification of the main verb of an intended utterance, but

many of the utterances observed that do make up complete sentences are of this subtype. Although made up of recognisable words, the lexical speech

automatism, with the interesting exception of the few personally relevant names

which were traceable to relatives of the patients, has no apparent referential or

contextual connection with the patient's world; the utterance appears to be

phonologically, syntactically and semantically identical each time it is produced

(Code, 1982a).

There is a marked reduction in articulatory complexity compared to normal

conversational English, an increase in the ratio of vowel to consonant

articulations, an increase in stops and nasals and a decrease in fricatives. Where

/n/ and /t/ are the most common phonemes in normal English (Mines, Hanson &


Shoup, 1978) and in the lexical type, in non lexical recurring utterances /i/,

schwa, /b/ and /d/ are the most common. This pattern suggests an increase in

use of the motorically 'easier', and unmarked articulations and a reduction of

articulations which are motorically more complex and marked.
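This shift can be expressed as a simple manner-class tally. The sketch below is my own toy illustration (the orthographic segment inventory and the example strings are hypothetical, not data from Code, 1982a, or Mines et al., 1978):

```python
# Toy sketch of the manner-class tally behind the simplification claim:
# recurring utterances favour vowels, stops and nasals over fricatives.
# The inventory and example strings are illustrative, not patient data.
from collections import Counter

MANNER = {**{c: "stop" for c in "pbtdkg"},
          **{c: "nasal" for c in "mn"},
          **{c: "fricative" for c in "fvsz"},
          **{c: "vowel" for c in "aeiou"}}

def manner_profile(utterance: str) -> Counter:
    return Counter(MANNER[seg] for seg in utterance if seg in MANNER)

print(manner_profile("bubudu"))    # CV cycling: Counter({'stop': 3, 'vowel': 3})
print(manner_profile("sizasiza"))  # fricative-heavy string, for contrast
```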

Syllabification in non lexical speech automatisms adheres to the sonority

principle, at least in English and German examples of these utterances (Code

and Ball, 1994). Sound segments can be ordered along a sonority hierarchy or

scale from most to least sonorous. All such scales that have been devised agree

with an order that places obstruents (stops, fricatives and affricates) at the least

sonorant end, followed by nasals, liquids, glides to vowels (at the most sonorant

end) (Christman, 1992). The sonority sequencing principle (SSP) aims to account

for segment ordering within syllables, by positing a syntagmatic relationship

between the segments defined by relative sonority. Thus the syllable peak

(normally a vowel) is highlighted by there being an increase of sonority from the

syllable onset to the peak, and then a decrease of sonority from the syllable

peak to the coda. The ideal exponence of such a principle would be for

obstruents to take the onset and coda positions, thus resulting in a maximum

difference in sonority between those positions and the peak. In the case that

onsets and/or codas contain more than one segment, the SSP predicts that the

sonority hierarchy comes into play. In onsets an initial obstruent would be

followed by other segments increasing in sonority until we reach the syllable

peak (i.e. obstruent-nasal-liquid-glide-vowel, or O-N-L-G-V), while in syllable

codas we expect the reverse ordering (i.e. V-G-L-N-O). This ordering would

account for commonly occurring syllable types in natural language such as /tra,

dva, sma, mla/, while excluding /*rta, *vda, *msa, *lma/ (Clements, 1988, 1990).
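As a concrete illustration, the sketch below (my own; it uses a toy orthographic inventory and, to exclude the starred forms, assumes the common finer-grained refinement in which stops rank below fricatives) checks a syllable's segment ordering against the SSP:

```python
# Toy SSP checker. Assumptions: orthographic segments stand in for phonemes,
# and stops are ranked below fricatives (a refinement of the scale quoted in
# the text) so that forms like *vda are excluded.

SONORITY = {}
SONORITY.update({c: 1 for c in "pbtdkg"})   # stops: least sonorous
SONORITY.update({c: 2 for c in "fvsz"})     # fricatives
SONORITY.update({c: 3 for c in "mn"})       # nasals
SONORITY.update({c: 4 for c in "lr"})       # liquids
SONORITY.update({c: 5 for c in "wj"})       # glides
SONORITY.update({c: 6 for c in "aeiou"})    # vowels: most sonorous

def obeys_ssp(syllable: str) -> bool:
    """Sonority must rise from onset to the vocalic peak, then fall to the coda."""
    ranks = [SONORITY[seg] for seg in syllable]
    peak = ranks.index(max(ranks))
    rising = all(a < b for a, b in zip(ranks[:peak + 1], ranks[1:peak + 1]))
    falling = all(a > b for a, b in zip(ranks[peak:], ranks[peak + 1:]))
    return rising and falling

print([obeys_ssp(s) for s in ("tra", "dva", "sma", "mla")])  # all True
print([obeys_ssp(s) for s in ("rta", "vda", "msa", "lma")])  # all False
```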

So, what evidence is there to support Jackson's original notion that the right

hemisphere plays a major role in the production of aphasic speech

automatisms? One question concerns the possible origins of the lexical type.

Over the years a number of authors have proposed that the utterance has some

special relationship to the actual moment of brain damage. Jackson (1879,

p.178) states 'I believe them to represent what was, or to represent part of what

was, the last proposition the patient uttered or was about to utter when taken ill'.

Jackson's well known example is the man who was compiling a catalogue when

his stroke occurred and was left with the automatism 'list complete'. Gowers


(1887) believed that the utterance was a thought that had already been expressed; i.e., the last thing the patient said before the cerebral incident.

Critchley (1970) supports Gowers' position. Alajouanine (1956) proposed that the

utterance was a thought in the process of being organized into an utterance at

the time of the incident. A range of possible origins related to environmental

stimuli experienced by the patient following the cerebral incident are also

possible (Code, 1982b). There would appear to be an incompatibility between

some of these explanations and the very restricted nature of some of the

utterances that we have considered above. It is unlikely, for instance, that 3

separate individuals in separate clinics were all planning to say or had just said 'I

want to...' (Code, 1982a) when their stroke occurred. The major subtypes that

these utterances appear to fall in would also suggest that many were not

'ordinary' utterances just produced or in the process of being produced.

A further possible explanation is that the utterance originates not at the time of

the insult through these cognitive-behavioural activities in operation before or

during the stroke, but as a result of the neuro-chemical activity accompanying

the CVA. The origins of different subtypes may lie in the disturbance of or

disruption to electrical or neurotransmitter activity during or immediately

following the CVA. Thus, some, like the expletive utterances, may have their

origin following a sudden release of limbic mechanisms from the normal neural

balance or inhibition. Clinicians often report patients who were not in the habit

of swearing before their CVA, like the church minister who could produce

only an expletive following his stroke, anecdotally supporting this idea that

expletives may result from release of inhibition of limbic structures. What we may

be observing is the fractionated output of different sub-systems reflecting

contributions from neuronal structures and mechanisms at different

organisational and representational levels throughout the brain.

Large anterior lesions have been found in globally aphasic patients with non

lexical types (Blunk, de Bleser, Willmes & Zeumer, 1981), whereas globally

affected patients without automatisms had a more posterior pattern of

damage. Non lexical types may therefore occur more often with lesions of the

greater Broca's area, and are accompanied by severe aphasia and apraxia of

speech. Subcortical and limbic structures have been implicated in automatism

production, and the basal ganglia have received particular attention. The


structure is seen as the site of a motor program generator (Darley, Aronson &

Brown, 1975; Kornhuber, 1977), and damage here has been implicated in the

production of non lexical recurring utterances, lexical speech automatisms,

coprolalia of Tourette's syndrome and pallilalia as found in Parkinson's disease

(Darley, Aronson & Brown, 1975; Leckman et al., 1991) and impaired automatic

speech production (Speedie, Wertman, Ta'ir and Heilman, 1993). Brunner, Kornhuber, Seemuller, Suger and Wallesch (1982) found that basal ganglia damage is essential for the production of speech automatisms. Using CT they

examined 26 patients with basal ganglia involvement of whom 12 had either

lexical or non lexical utterances. Neither type of utterance occurred in the patients without basal ganglia damage, but neither did automatisms occur in patients who had only subcortical (including basal ganglia) damage. In other

words, a large left hemisphere lesion incorporating both the cortex and basal

ganglia appears to be required in patients with automatisms. Of the 12 patients

with automatisms, 9 had both anterior and posterior damage involving the basal

ganglia and 3 had only anterior damage involving the basal ganglia. Haas,

Blanken, Mezger and Wallesch (1988) used CT scans to examine 49 subjects with

damage including more than 2% of forebrain volume who were more than 4

months post-onset. Sixteen had non lexical and 2 had lexical automatisms and all

18 had lesions in the deep fronto-parietal white matter of the left hemisphere. A

relationship between automatism production and structures in the depth of the

area of supply of the middle cerebral artery was found. Automatisms were also

associated with older patients, suggesting that they may be associated

with degenerative processes not visible to the CT scan, or some diffuse and

progressive vascular pathology.

It has been hypothesised that some lexical speech automatisms fit well with

what we know of right hemisphere-limbic interactions (Code, 1987). The limbic

forebrain makes a large contribution to human communication (Lamendella,

1977). In all mammals the phylogenetically ancient limbic system is centrally

involved in the expression of emotional and affective signals of rage, fear,

surprise and social expressions of dominance, submission and aggression, as well

as inter-gender and mother-child relationships. The right hemisphere may have a phylogenetically old and unique relationship with the affective subsystems of

the limbic system, a relationship that the left hemisphere does not have. The

limbic system is 'the obvious candidate for the level of brain activity likely to be


responsible for the bulk of nonpropositional human communication'

(Lamendella, 1977, p.159). Tourette's syndrome presents with clear limbic

features (Leckman et al., 1991). Van Lancker (Van Lancker, 1991; Van Lancker and Nicklay, 1992; Van Lancker and Klein, 1990) has suggested that the right

hemisphere has a special role in processing personally relevant entities such as

familiar faces, persons, topography, voices and names. Two globally aphasic individuals with massive left hemisphere lesions studied by Van Lancker and Nicklay (1992) were consistently better at recognizing familiar, intimate and personal names than words that were not familiar, intimate or personal.

In people with Tourette's syndrome we observe probably the only example of

spontaneous, involuntary lexical speech produced by a conscious individual.

The obscene coprolalic nature of the utterance has parallels with expletive

speech automatisms. Although individuals with Tourette's syndrome appear

unable to suppress the emergence of the utterance, they are aware that they

are producing the utterance. It may be that the utterances emerge during stressful episodes which reduce the individual's power to suppress the utterance (Sweet, Solomon, Wayne, Shapiro & Shapiro, 1973). Why it is the

foulest of expletives which should emerge in Tourette’s is unknown, but the

disorder presents with clear limbic features (Lamendella, 1977; Leckman et al.,

1991), and basal ganglia involvement has been suggested (Darley, Aronson & Brown, 1975; Kent, 1984; Leckman et al., 1991). These utterances, too, are invariantly and holistically produced.

The emotionally charged pronoun+verb subtype of speech automatism and the

expletive subtype are obvious candidates for central limbic involvement, being

holistically produced without formal linguistic input. I have suggested elsewhere

that 'assuming that the limbic system has no linguistic or phonetic programming

capability, but is simply the motivational force behind the utterance, then the

right hemisphere, through its capacity to provide a motor Gestalt, controls the

actual motor speech activity of the phono-articulatory mechanisms' (Code,

1987; p.73). Similar arguments can be made for coprolalia; also a fragment of

emotionally charged, holistically structured and invariantly produced speech

which could implicate a limbic-right hemisphere interaction.

Non lexical speech automatisms are clearly nonpropositional: they have minimal linguistic structure, do not appear to engage linguistic processes in their continued production, and do not entail affective-emotional processing. They

are not arbitrary syllables, as has been suggested (Critchley, 1970), but, as shown

earlier, concatenated CV syllables governed by phonotactic constraints, and their syllable structure adheres to the sonority sequencing principle. They appear

to reflect articulatory simplification where only high frequency and motorically

unmarked articulations taken from the phonetic inventory of the speaker's

language are produced to conform to phonotactic rules. The fact that non lexical types do not break phonotactic constraints or the sonority sequencing principle may suggest that they access a phonological output lexicon the first time they are produced. For these reasons the initial production of a non lexical

recurring utterance may be by a severely damaged left hemisphere

phonological system which does not have access to limbic-right hemisphere

input.

Code (1994a, 1994b) outlines a model which attempts to characterise some of

the features of speech automatisms. The model acknowledges the lack of

linguistic input, reflects the apparent holistic preparation and the invariance of

production of the utterance, distinguishes between the two major types and

accounts for a) the initial production and b) the subsequent productions of the

utterance. The model assumes, as the evidence suggests (Blanken, 1991; Code,

1991), that most of the language production system is severely damaged for

most patients. The model therefore attempts to capture what remains of the

language system that can account for speech automatisms. The first

components of the model are labelled intention to communicate and

expression of state. An utterance may be formulated as either a result of an

intention to communicate or as an expression of internal state, where for the

latter the utterer has no intention to communicate to another party. The model

assumes that for an initial production of either type of utterance an intention to

communicate or an expression of state is generated and a speech act (e.g.,

question, command, statement, etc.) is formulated by the component labelled

speech act formulation. This component determines the essential message of

the utterance. In the case of lexical speech automatisms, the form of the

utterance is then generated by an holistic speech lexicon and passed to an

articulatory buffer before final production. Initial non lexical recurring

utterances are generated by an articulatory formulation module which inputs to the articulatory buffer before final production. For these non lexical utterances,

however, it is the most unmarked and higher frequency phones that are

selected for production. For subsequent productions of both types of utterance,

shown by the thick arrow, an intention to communicate or an expression of state

inputs to speech act formulation and then directly into the articulatory buffer

before expression as an utterance.

The holistic speech lexicon is seen as a store of holistic-automatic schemas

which holds the representations of lexical speech automatisms like expletives,

serial speech, pronoun+verb. An assumption of the model is that a non lexical

utterance will ensue at initial attempts to speak if the route through the holistic

lexicon is blocked by extensive neural damage. The lack of semantic range

observed in many lexical automatisms could be a reflection of the holistic

lexicon’s responsibility for very restricted and automatic output.

The model includes an articulatory buffer, after Blanken (1991), which could be

the locus of continued production. The buffer temporarily stores the phonetic plan of an utterance; such storage is required in most models of speech production because the neural planning of an utterance is much faster than the ability of the phonoarticulatory mechanism to realise it. The buffer stores prepared utterances for short

time durations. Neural damage prevents changing the program within the

buffer so that input into the buffer causes it to generate the same stored

utterance each time.
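A minimal sketch of this dataflow follows (my own illustration of the account in Code, 1994a, 1994b, not the author's implementation; the example plans are hypothetical):

```python
# Minimal sketch of the Code (1994a, 1994b) account, not an implementation
# from the paper. Key assumption modelled: damage leaves the buffer unable
# to overwrite its stored phonetic plan, so any input replays the utterance.

class ArticulatoryBuffer:
    def __init__(self, damaged: bool):
        self.damaged = damaged
        self.plan = None

    def load(self, plan: str):
        if self.plan is None or not self.damaged:
            self.plan = plan               # an intact buffer accepts new plans

    def emit(self) -> str:
        return self.plan

def produce(buffer: ArticulatoryBuffer, lexical_route_intact: bool) -> str:
    """Initial production routes through speech act formulation and either the
    holistic speech lexicon (lexical automatism) or articulatory formulation
    (non lexical, unmarked CV syllables); subsequent productions input the
    buffer directly and simply replay the stored plan."""
    if buffer.plan is None:                # initial production only
        if lexical_route_intact:
            plan = "I want to..."          # holistic-automatic schema
        else:
            plan = "tu tu tu"              # concatenated unmarked CV syllables
        buffer.load(plan)
    return buffer.emit()

buf = ArticulatoryBuffer(damaged=True)
print(produce(buf, lexical_route_intact=True))  # 'I want to...'
print(produce(buf, lexical_route_intact=True))  # same utterance every time
```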

The failure of the patient with an expletive speech automatism to inhibit the utterance may be accounted for by the model. The patient may be unable to inhibit an expression of limbic state on initial production. This inputs to the model at the level of the speech act, and its continued production is due to impairment at

the level of the articulatory buffer.

Lexical speech automatisms may originate as holistically created products of a

subcortical right hemisphere-limbic system mechanism. If they do then the

linguistic system of the left hemisphere is not engaged during their genesis. Non

lexical recurrent utterances show little evidence of right hemisphere language

structure and I have suggested that they might be the product of severely


compromised left hemisphere mechanisms disconnected from right hemisphere

mechanisms. This might suggest that access to the phonology was so impaired

that only very primitive CV syllables (usually one repeated syllable) are

produced. The individual's continuing attempts to produce more than the automatism are frustrated by an almost total apraxia of the phonoarticulatory mechanisms.

Code (1987, 1991, 1994a) has suggested that the fact that non lexical types do not break phonotactic constraints may imply that they access a phonological output module the first time they are produced. Unlike lexical speech automatisms, recurring utterances do not involve words which might implicate right hemisphere processing; as argued above, their initial production may be by a severely damaged left hemisphere phonological system which does not have access to limbic-right hemisphere input.

This notion gains support from Sussman (1984) who suggests that the reason

phonotactic constraints are not seen to be violated in even the most severely

aphasic patients, and syllabification is unaffected by extensive brain damage, is

because syllabification is 'hard-wired', and hard-wired specifically in the left

hemisphere. Sussman suggests it is this that helps to provide the automaticity

characteristic of phonological organization. A neuronal model is outlined by

Sussman (1984) 'where each consonant and vowel position is associated with a

specific cell assembly network' (p.169). The model is supported by Code and Ball (1994), which confirms that phonotactic constraints are rigidly adhered to in both English and German recurring utterances and that recurring utterances retain strict syllabification.

The evidence from left hemisphere damaged aphasic individuals with speech

automatisms appears to suggest that the right hemisphere is responsible for

producing their lexical speech automatisms, providing support for the hypothesis

that nonpropositional and automatic aspects of speech entail right hemisphere

processing. If an individual has an entire left hemisphere removed in adult life

what happens to speech? The hemispherectomy cases EC and NF demonstrate

that the isolated right hemisphere is capable of producing speech.


4. Left Hemispherectomy in Adulthood

Strictly, the term hemispherectomy describes the surgical procedure to remove

an entire cerebral hemisphere and should be reserved for this purpose.

Hemidecortication describes the removal of only the neocortex while leaving

intact subcortical nuclear masses such as components of the basal ganglia and

the thalamus. This distinction is important where discussion of the role of

remaining subcortical areas in cognition is concerned. Hemispherectomy and

hemidecortication are radical surgical procedures performed either on adults

with large life-threatening neoplastic tumours, or on children to reduce the

effects on congenital infantile hemiplegia. The hemispherectomy operation

entails a brutal insult to the brain which can have a massive effect upon the

patient's cognitive and behavioural functions. It is important to note that these

adult patients had tumours which precipitated their radical surgery and

therefore, unlike commissurotomy subjects who had long histories of severe

epilepsy, there is no reason to suspect a neurological abnormality in early life

which may have interfered with the ‘normal’ establishment and balance of

hemispheric specialisation for language or any other aspect of cognition during

development.

There have been relatively few left hemidecortications in adults reported in the

literature, and the detail and care with which the language impairment is recorded in most reports is disappointing. Zollinger's (1935) early case (AC) was a 43 year old right-handed woman whose surgery left intact the medial part of the thalamus and a small portion of the globus pallidus. AC retained 'an

elementary vocabulary which was partially increased by training' (p.1063), but

comprehension is not discussed. Observed speech included 'all right' to all

questions several hours after surgery, but Zollinger does not say if this was

appropriately used. In the three days following surgery AC was able to say 'yes'

and 'no', 'thank you' and 'sleep', 'goodbye' and 'please'. On the third day AC is

reported to have shown 'a more accurate use of words' (p.1060). AC survived

just 17 days.

Crockett and Estridge (1951) described GS, a male patient whose surgery

spared half the globus pallidus, a third of the caudate nucleus, and all of the

thalamus. This hemidecorticated patient was able to say 'yes' and 'no' some


hours after surgery, although it is not clear if this was appropriately used. Two

weeks later the patient was able to say 'No, I don't want any' and 'Put me back

to bed'. Other simple words were observed until about one month post-surgery

when he began to deteriorate and could utter only 'caw' and 'aw-caw' (if this is

a non lexical recurring utterance, it is the only example reported following left

hemispherectomy), as well as 'yes' and 'no'. GS survived a further four months.

Hillier's (1954) patient was a 15-year-old boy who underwent two operations to

remove a glioblastoma within a year before a left hemidecortication 'sparing as

much of the basal ganglia as possible' (p.720). Sixteen days after the

hemidecortication he said 'mother', 'father', 'nurse' and other words. The patient

was discharged at 1 month and he was said to have normal auditory

comprehension and daily improvement in vocabulary. The patient survived for

27 months. Before his death he is described as being slightly euphoric with an

improving 'motor' aphasia and anomia.

In a series of studies, Smith and Burklund describe in detail the cases of EC and

NF (Smith, 1966; Smith & Burklund, 1966; Burklund, 1972; Burklund & Smith, 1977).

These studies not only include detailed examinations, but the surgery was sufficiently radical to permit the conclusion that only the right hemisphere could account for the patients' speech. They therefore allow us to make some

confident inferences on the role of the isolated right hemisphere in speech.

Despite the complete absence of a left hemisphere, EC had significant, if limited,

speech. EC was a 47 year old male who was experiencing attacks of

speechlessness and right sided signs in November, 1964. In March, 1965 Burklund

removed a neoplasm from the left sensori-motor area. A moderate 'expressive'

aphasia was evident but there was little evidence of receptive difficulties. The

neoplasm recurred and EC underwent a complete hemispherectomy in

December 1965, some 12 months following the emergence of the original

symptoms and 7 months after the first operation.

Following the second operation EC had a right hemiplegia and hemianopia and

global aphasia. His speech attempts were mostly unsuccessful at this time, but

expletives and emotional phrases were produced. 'He would open his mouth

and utter isolated words, and after apparently struggling to organise words for


meaningful speech, recognised his inability and would utter expletives or short

emotional phrases (e.g., "Goddamit!")' (Smith, 1966; p.468).

Zangwill (1967) examined EC about 18 months after surgery. He observed

appropriately used ‘yes’, ‘no’, ‘don’t know’, and the emotional expressions ‘No, God damn it, that’s...’, ‘Yes, but I cannot’, ‘God damn it, yes’, and ‘Oh my God!’

Zangwill observed that EC could repeat some words, occasionally name objects

and colours, but with semantic paraphasic errors (he called a ‘pen’ a ‘pencil’,

and ‘black’ he called ‘red’). But he also made errors in ‘automatic speech’ and

was able to count to 20 with some errors. He could read some object and colour

words, but could not even read a simple sentence. He was able only to

print his name. Zangwill was convinced that some of EC's utterances must be

considered 'as essentially propositional' (p.1018). (But see also Bogen's

contribution to this Special Issue.)

Burklund and Smith reported NF in 1977. NF was also a right handed male in his

forties (age 41 years) and showed a similar recovery to EC. However, his language recovered more rapidly than EC's. NF began to experience episodes of

right hemiparesis and speechlessness 5 years before Burklund performed his first

surgery. The first operation was an 'inferior subtotal left frontal lobectomy with

removal of gross neoplasm' (Burklund and Smith, 1977; p. 628). The entire frontal

operculum was included in the resection. There was 'some difficulty expressing

words, especially names' and some dysarthria at discharge, but recovery of

speech was rapid. Eighteen months after the left lobectomy there was

recurrence of the neoplasm in the left frontal area with extension into the

temporal lobe and the right frontal lobe, and Burklund performed a left

hemispherectomy at this time. This surgery left some left thalamus but included

removal of some right frontal cortex and occlusion of both anterior cerebral

arteries. NF was very ill following this extensive surgery, but nonverbal response to

simple verbal commands was appropriate and prompt. Speech which was

described as appropriate but dysarthric emerged three days post-surgery.

Problems with fatigue and focusing attention during formal testing are reflected

in his performance at one month post-surgery where he could repeat words and

phrases but was unable to repeat the same utterances just a few moments later.

But he could sing the words and melody of "Jingle Bells" and respond 'yes' and

'no' appropriately. Spontaneous speech was observed about one month post-


surgery when he asked a nurse, "You got a match?". Three weeks after this he

responded with, "Well, it was OK", when asked if he liked his breakfast. But he

was unable to name objects, read object names, copy his name or simple

designs, or write single letters to dictation at this time. There was erratic

improvement in both verbal and nonverbal response to testing for a further 9

months.

Formal testing on the Minnesota Test of Differential Diagnosis of Aphasia at 8

months post-hemispherectomy showed just one error out of 32 in repeating

phrases, no errors in counting and reciting days of the week and two errors in

completing 8 sentences. Two months after this he was able to repeat 4 digits forward and 3 backwards, and made 5 errors from 32 in matching words to objects and 8

from 32 in matching printed to spoken words. Writing was slower to recover but

by 6 months post-surgery he could print letters, his name and concrete words

(e.g., cat, dog, banana) but errors in spelling were frequent. He could spell single

words to dictation and his name with a letter board, but made errors on words

of more than 4 letters. NF died 18 months following hemispherectomy.

Unquestionably, the right hemispheres of EC and NF were able to speak. For NF

language recovery was more rapid than for EC. This may be due to the 5 year history of NF's neoplasm, which may have caused the right hemisphere to

become gradually more involved over time in left hemisphere functions. Further,

as in Hillier's case, there was an interval between operations. In NF's case there

was a period of 18 months between the first left frontal lobectomy on NF and the

complete hemispherectomy. Considerable shifting of language processing to

the right hemisphere may have taken place a) during the intervening 5 years

between commencement of symptoms and the lobectomy and b) as a

consequence of the staging of the surgery, with 18 months between first and

second operation.

The plasticity of the young (i.e., before puberty) brain has been appreciated for

some time. The staging of the surgery over time in NF allows us also to re-assess

the plasticity of the older brain and its abilities to reorganize. The permissible inference appears to be that the completely mature brain may also have significant capacity for reorganization.


What these cases appear to substantiate is that the right hemisphere can

indeed speak, at least nonpropositionally, and at least in the absence of a left

hemisphere. Questions remain concerning the degree of propositionality in the

speech of the left hemispherectomy subject. Zangwill described EC's speech as 'essentially propositional' at times, but utterances like “You got a match?” and “Well, it was OK”, while certainly sentences, would appear to be situation-specific and responsive. Such utterances may be propositionally produced in some circumstances but relatively automatically and responsively produced in others. “You got a match?” for a smoker would be a very habitually produced

utterance and not at all uniquely generated. So while the isolated adult right

hemisphere has been shown to be engaged in speech production, the

evidence does not permit us to claim that it is capable of generating

propositional speech.

As with aphasic speech automatisms, which we have suggested essentially

reflect right hemisphere processing, the phonotactic constraints of the language

are not broken by the adult left hemispherectomy patients we have discussed.

For left hemispherectomy subjects, too, syllabification is organized according to

the sonority sequencing principle discussed earlier. So, removal of the entire left

hemisphere in adulthood, while catastrophic for speech and language

processing, does not appear to obstruct the syllabification of speech

production. Why is this? Two alternative explanations seem plausible.

First, the left hemisphere model for syllabification proposed by Sussman (1984)

may not be supported by the left hemispherectomy evidence. Syllabification, if

hard-wired, is either hard-wired subcortically or diffusely represented

throughout the brain. This would support the position that sonority is not an

intrinsic feature of phonological processing but simply an artifact of speech

production (Ohala, 1984, cited in Christman, 1992). Christman (1992) too

suggests that sonority may be 'well-distributed' neurologically and linguistically

and may in fact be accessed at lexical levels to organise word syllabification, at

sub-segmental levels to organise phoneme sequencing, and also at the motor

instantiation level.

If syllabification is not hard-wired in the left hemisphere but is diffusely

represented, this would suggest that it does not enjoy a fully abstract cognitive representation or mental reality, and is simply an inevitable by-product of

speech production, an epiphenomenon, a non causal consequence of

neurophysiology and the mechanico-inertial properties and limitations of the

speech production mechanism. This could be why it survives even the most

serious brain damage, and even complete left hemispherectomy.

A second possibility is that the nonpropositional utterances of left

hemispherectomy patients, as well as the lexical speech automatisms of aphasic

patients, were generated originally by a left hemisphere system in early

development. The processing of automatic and nonpropositional speech by the

right hemisphere may be part of a task-sharing metasystem. For the child

learning to speak there is no overlearnt, familiar, formulaic, automatic speech.

For the child all speech is newly generated by the left hemisphere's linguistic

system. Counting and days of the week, for instance, will acquire their

morphological and phonological framework, including syllabification perhaps,

during this early learning period. With time and (over)use, these utterances, like

other overlearnt and familiar utterances, will not require left hemisphere

processing. The left hemisphere’s role will have become redundant, and the

utterances may be passed to a right hemisphere holistic speech lexicon in order

to free up the left hemisphere to allow it more processing space to do the most

demanding and exacting of human activities, the generation of novel language

for the expression of semantic states.

Conclusions

The right hemisphere is not mute. Evidence from several perspectives supports

this idea. The kind of speech that the right hemisphere is capable of appears to

be confined to the automatic, familiar, nonpropositional. There may be a

particular role for the right hemisphere in automatic and nonpropositional

aspects of speech production, perhaps as part of a task-sharing metasystem in

collaboration with the left hemisphere. The strongest evidence seems to come

from severely aphasic subjects with speech automatisms and adults with only a

right hemisphere. But does the evidence from severely aphasic and left

hemispherectomy patients converge?


A severely damaged left hemisphere in many aphasic patients appears to result

in an intact right hemisphere producing an unaltering nonpropositional lexical

speech automatism. The complete removal of the left hemisphere leaves the

individual with the ability to produce occasional, automatic, nonpropositional

speech. The right hemisphere in both groups would appear capable of

nonpropositional speech production. It may be, therefore, that the severely

aphasic individual with extensive left hemisphere damage is using a similarly depleted neurocognitive system to the left hemispherectomy patient. Extensive left

hemisphere damage may, in many practical respects, be equivalent to a

hemispherectomy (Smith and Sugar, 1975; Cummings, Benson, Walsh and

Levine, 1979; Cappa and Vallar, 1992; Landis, Cummings and Benson, 1980).

Investigation of the contribution of the right hemisphere to the recovery of

aphasia indicates that it is in severely aphasic patients that the right appears to

be involved, rather than in milder patients, and that this involvement increases as

a function of time since onset of damage (Code, 1987; Cappa & Vallar, 1992;

Gainotti, 1993). Milder patients of recent onset may not therefore utilize the right

hemisphere to the same degree as the severely aphasic individual with extensive

left hemisphere damage. The degree of right hemisphere involvement in the

hemispherectomy patients, too, appears to be related to (a) a gradual

deterioration of left hemisphere processing due to the insidious advancement of the

tumour, and (b) the staging of surgery.

For Zangwill, EC was indistinguishable from a severely aphasic patient with left

hemisphere damage. He concluded:

The general impression made upon me by this patient was very much like that of

a case of severe motor aphasia and right hemiplegia from left-sided

cerebrovascular accident. There was a typical severe motor aphasia, though

with some retention of emotional and automatic utterance; good oral

comprehension; evidence of insight into the speech defect, and almost total

absence of paraphasia or jargon (Zangwill, 1967, p. 1017).

These two groups may, therefore, have more in common than has previously

been thought.


References

Alajouanine, T. (1956) Verbal realization in aphasia. Brain, 79, 1-28.
Blanken, G. (1991) The functional basis of speech automatisms (recurring utterances). Aphasiology, 5, 103-127.
Blanken, G., Dittmann, J., Haas, J-C. and Wallesch, C-W. (1988) Producing speech automatisms (recurring utterances): looking for what is left. Aphasiology, 2, 545-556.
Blanken, G., De Langen, E.G., Dittmann, J. and Wallesch, C-W. (1989) Implications of preserved written language abilities for the functional basis of speech automatisms (recurring utterances): a single case study. Cognitive Neuropsychology, 6, 211-249.
Blanken, G., Wallesch, C-W. and Papagno, C. (1990) Dissociations of language functions in aphasics with speech automatisms (recurring utterances). Cortex, 26, 41-63.
Blunk, R., De Bleser, R., Willmes, K. and Zeumer, H. (1981) A refined method to relate morphological and functional aspects of aphasia. European Neurology, 30, 68-79.
Bogen, J.E. (1969) The other side of the brain II: an appositional mind. Bulletin of the Los Angeles Neurological Societies, 34, 135-162.
Broca, P. (1861) Remarques sur le siège de la faculté du langage articulé, suivies d'une observation d'aphémie (perte de la parole). Bulletin de la Société d'Anatomie de Paris, 36, 330-357.
Brunner, R.J., Kornhuber, H.H., Seemuller, E., Suger, G. and Wallesch, C-W. (1982) Basal ganglia participation in language pathology. Brain and Language, 16, 281-299.
Burklund, C.W. (1972) Cerebral hemisphere function in the human: fact versus tradition. In: W.L. Smith (ed.), Drugs, Development, and Cerebral Function. Springfield: C.C. Thomas.
Burklund, C.W. & Smith, A. (1977) Language and the cerebral hemispheres. Neurology, 27, 627-633.
Cappa, S.F. & Vallar, G. (1992) The role of the left and right hemispheres in recovery from aphasia. Aphasiology, 6, 359-372.
Chase, R.A., Cullen, J.K., Niedermeyer, E.F., Stark, R.E. and Blumer, D.P. (1967) Ictal speech automatisms and swearing: studies on the auditory feedback control of speech. The Journal of Nervous and Mental Disease, 144, 406-420.
Chiarello, C. (ed.) (1988) Right Hemisphere Contributions to Lexical Semantics. New York: Springer-Verlag.
Christman, S.S. (1992) Uncovering phonological regularity in neologisms: contributions of sonority theory. Clinical Linguistics & Phonetics, 6, 219-247.
Clements, G.N. (1988) The role of the sonority cycle in core syllabification. Working Papers of the Cornell Phonetics Laboratory, No. 2. Ithaca, NY.
Clements, G.N. (1990) The role of the sonority cycle in core syllabification. In: J. Kingston and M. Beckman (eds.), Papers in Laboratory Phonology I: Between the Grammar and the Physics of Speech. Cambridge: Cambridge University Press.
Code, C. (1982a) Neurolinguistic analysis of recurrent utterances in aphasia. Cortex, 18, 141-152.
Code, C. (1982b) On the origins of recurrent utterances in aphasia. Cortex, 18, 161-164.
Code, C. (1987) Language, Aphasia and the Right Hemisphere. Chichester: Wiley.
Code, C. (1991) Speech automatisms and recurring utterances. In: C. Code (ed.), The Characteristics of Aphasia. Hove: Lawrence Erlbaum Associates.
Code, C. (1994a) Speech automatisms in aphasia. Journal of Neurolinguistics, 8, 135-148.
Code, C. (1994b) Modelling aphasic speech automatisms and recurring utterances. In: B.S. Weekes, C. Haslam, J. Ewing, U. Johns & R. Fernbach (eds.), Cognitive Function in Health, Disease and Disorder. Sydney: Academic Press.
Code, C. & Ball, M.J. (1994) Syllabification in aphasic recurring utterances: contributions of sonority theory. Journal of Neurolinguistics.
Code, C. & Lodge, B. (1987) Language in dementia of recent referral. Age & Ageing, 16, 366-372.
Critchley, M. (1970) Aphasiology and Other Aspects of Language. London: Edward Arnold.
Crockett, H.G. & Estridge, N.M. (1951) Cerebral hemispherectomy. Bulletin of the Los Angeles Neurological Societies, 16, 71-87.
Cummings, J.L., Benson, D.F., Walsh, M.J. and Levine, H.L. (1979) Left-to-right transfer of language dominance: a case study. Neurology, 29, 1547-1550.
Darley, F.L., Aronson, A.E. and Brown, J.R. (1975) Motor Speech Disorders. Philadelphia: Saunders.
De Bleser, R. & Poeck, K. (1985) Analysis of prosody in the spontaneous speech of patients with CV-recurring utterances. Cortex, 21, 405-416.
Gainotti, G. (1993) The riddle of the right hemisphere's contribution to the recovery of aphasia. European Journal of Disorders of Communication, 28, 227-246.
Goldman-Eisler, F. (1968) Psycholinguistics: Experiments in Spontaneous Speech. London: Academic Press.
Goldstein, K. (1948) Language and Language Disturbances. New York: Grune & Stratton.
Haas, J-C., Blanken, G., Mezger, G. and Wallesch, C-W. (1988) Is there an anatomical basis for the production of speech automatisms? Aphasiology, 2, 552-565.
Hart, J., Lesser, R.P., Fisher, R.S., Schwerdt, P., Bryan, R.N. & Gordon, B. (1991) Dominant-side intracarotid amobarbital spares comprehension of word meaning. Archives of Neurology, 48, 55-58.
Hillier, W.F. (1954) Total left cerebral hemispherectomy for malignant glioma. Neurology, 4, 718-721.
Ingvar, D.H. & Schwartz, M.S. (1974) Blood flow patterns induced in the dominant hemisphere by speech and reading. Brain, 97, 273-288.
Jackson, J.H. (1874) On the nature of the duality of the brain. In: J. Taylor (ed.) (1958), Selected Writings of John Hughlings Jackson, Vol. II. London: Staples Press.
Jackson, J.H. (1879) On affections of speech from disease of the brain. In: J. Taylor (ed.) (1958), Selected Writings of John Hughlings Jackson, Vol. II. London: Staples Press.
Joanette, Y., Goulet, P. & Hannequin, D. (1990) Right Hemisphere and Verbal Communication. New York: Springer-Verlag.
Kent, R.D. (1984) Brain mechanisms of speech and language with special reference to emotional interactions. In: R.C. Naremore (ed.), Language Science, pp. 281-335. San Diego: College-Hill Press.
Kornhuber, H.H. (1977) A reconstruction of the cortical and subcortical mechanisms involved in speech and aphasia. In: J.E. Desmedt (ed.), Language and Hemispheric Specialization in Man: Cerebral ERPs. Basel: Karger.
Kozhevnikov, V.A. and Chistovitch, L.A. (1966) Speech: Articulation and Perception. Springfield, VA: US Department of Commerce, Joint Publications Research Service, Vol. 30.
Kremin, H. (1987) Is there more than ah-oh-ah? Alternative strategies for writing and repeating lexically. In: M. Coltheart, R. Job and G. Sartori (eds.), The Cognitive Neuropsychology of Language, pp. 295-335. Hove: Lawrence Erlbaum Associates.
Kurthen, M., Linke, D.B., Elger, C.E. and Schramm, J. (1992) Linguistic perseveration in dominant-side intracarotid amobarbital tests. Cortex, 28, 209-219.
Lamendella, J.T. (1977) The limbic system in human communication. In: H. Whitaker and H.A. Whitaker (eds.), Studies in Neurolinguistics, Vol. III, pp. 157-222. London: Academic Press.
Landis, T., Cummings, J.L. and Benson, D.F. (1980) Le passage de la dominance du langage à l'hémisphère droit: une interprétation de la récupération tardive lors d'aphasies globales. Revue Médicale de la Suisse Romande, 100, 171-177.
Larsen, B., Skinhoj, E. and Lassen, N. (1978) Variations in regional cortical blood flow in the right and left hemispheres during automatic speech. Brain, 101, 193-209.
Lebrun, Y. (1986) Aphasia with recurrent utterance: a review. British Journal of Disorders of Communication, 21, 3-10.
Leckman, J.F., Knorr, A.M., Rasmusson, A.M. and Cohen, D.J. (1991) Basal ganglia research and Tourette's syndrome. Trends in Neurosciences (Letter), 14, 94.
Lenneberg, E. (1967) Biological Foundations of Language. New York: John Wiley.
Lum, C. & Ellis, A.W. (1994) Is "nonpropositional" speech preserved in aphasia? Brain & Language, 46, 368-391.
MacNeilage, P. (1970) Motor control of serial ordering of speech. Psychological Review, 77, 182-196.
Milner, B. (1974) Hemispheric specialization: scope and limits. In: F.O. Schmitt & F. Worden (eds.), The Neurosciences, Vol. III. Cambridge, MA: MIT Press.
Milner, B., Branch, C. and Rasmussen, T. (1966) Evidence for bilateral speech representation in some non-righthanders. Transactions of the American Neurological Association, 91, 306-308.
Mines, M.A., Hanson, B.F. and Shoup, J.E. (1978) Frequency of occurrence of phonemes in conversational English. Language and Speech, 21, 221-241.
Ohala, J. (1984) Cited in Christman (1992).
Ojemann, G.A. (1979) Individual variability in cortical localization of language. Journal of Neurosurgery, 50, 164-169.
Penfield, W. and Roberts, L. (1959) Speech and Brain Mechanisms. Princeton: Princeton University Press.
Ryding, E., Bradvik, B. and Ingvar, D. (1987) Changes in regional cerebral blood flow measured simultaneously in the right and left hemispheres during automatic speech and humming. Brain, 110, 1345-1358.
Segalowitz, S.J. & Bryden, M.P. (1983) Individual differences in hemispheric representation of language. In: S.J. Segalowitz (ed.), Language Functions and Brain Organization. London: Academic Press.
Shallice, T. (1988) From Neuropsychology to Mental Structure. New York: Cambridge University Press.
Skinhoj, E. & Larsen, B. (1980) The pattern of cortical activation during speech and listening in normals and different types of aphasic patients as revealed by regional cerebral blood flow (rCBF). In: M.T. Sarno & O. Hook (eds.), Aphasia: Assessment and Treatment. New York: Masson Publishing.
Smith, A. (1966) Speech and other functions after left (dominant) hemispherectomy. Journal of Neurology, Neurosurgery and Psychiatry, 29, 467-471.
Smith, A. & Burklund, C.W. (1966) Dominant hemispherectomy. Science, 153, 1280-1282.
Smith, A. & Sugar, O. (1975) Development of above normal language and intelligence 21 years after left hemispherectomy. Neurology, 25, 813-818.
Speedie, L.J., Wertman, E., Ta'ir, J. & Heilman, K.M. (1993) Disruption of automatic speech following a right basal ganglia lesion. Neurology, 43, 1768-1774.
Sussman, H.M. (1984) A neuronal model for syllable representation. Brain and Language, 22, 167-177.
Sweet, R.D., Solomon, G., Wayne, H., Shapiro, E. and Shapiro, A. (1973) Neurological features of Gilles de la Tourette's syndrome. Journal of Neurology, Neurosurgery and Psychiatry, 36, 1-9.
Van Lancker, D. (1975) Heterogeneity in language and speech. UCLA Working Papers in Phonetics, 29.
Van Lancker, D. (1987) Nonpropositional speech: neurolinguistic studies. In: A.W. Ellis (ed.), Progress in the Psychology of Language, Vol. III. London: Lawrence Erlbaum Associates.
Van Lancker, D. (1993) Nonpropositional speech in aphasia. In: G. Blanken, J. Dittmann, H. Grimm, J.C. Marshall and C-W. Wallesch (eds.), Linguistic Disorders and Pathologies. Berlin: Walter de Gruyter.
Wallesch, C-W. (1990) Repetitive verbal behaviour: functional and neurological considerations. Aphasiology, 4, 133-154.
Wray, A. (1992) The Focusing Hypothesis. Amsterdam: John Benjamins.
Zangwill, O. (1967) Speech and the minor hemisphere. Acta Neurologica et Psychiatrica Belgica, 67, 1013-1020.
Zollinger, R. (1935) Removal of left cerebral hemisphere: a report of a case. Archives of Neurology and Psychiatry, 34, 1055-1064.