
Gesture can take on many characteristics of language when it is produced in the absence of speech. A summary of past findings in this area is complemented by recent data that demonstrate the psychological validity of a linguistic structure for gesture when it carries the full burden of communication.

Gesture When There Is No Speech Model

Jill P. Morford

Author's note: The work described here was supported by a grant from the Cusson Foundation and a McGill University Graduate Faculty research grant. The author would like to thank the participants in her studies and their families for their enthusiastic participation.

Whether they are aware of it or not, most speakers gesture when they speak. This unconscious act is a boon to scientists because it provides a "window onto the mind" (McNeill, 1997). Gestures express information that is in the speaker's mind. In some cases that information is also expressed in speech; in some cases it is not. But in either case, David McNeill (Chapter One) claims, gesture and speech form a single system, because the manner in which speech and gesture are produced indicates that the timing of their production is carefully coordinated, and in such a way that the meaning expressed by each modality can be traced to a single thought process. On closer inspection he finds that speech encodes a thought via discrete units that are combined sequentially. Spontaneous gesture, by contrast, generally encodes a thought in a global and synthetic form.

Why do speech and gesture encode information differently? A possible explanation is that speech and gesture are inherently different because they are produced in different modalities—speech is auditory and gesture is visual. However, the characteristics that McNeill has observed in speech can also be identified in signed languages, despite the fact that they, like gesture, are produced in the visual modality. Thus modality alone does not suffice as an explanation for this difference in the form of representation. A second explanation is that speech and gesture are coordinated parts of a whole communicative act. In other words, gesture and speech do not act independently to simultaneously encode a thought. Rather, they each contribute to a single transformational process that yields expression from thought. We might gain greater insight into this process by observing the representational form of gesture when it is not produced together with speech. When gesture stands alone, does it still encode information in a global, synthetic form? Or does it exhibit the characteristics of spoken representation—namely, combination of discrete units?

Rarely do we have the opportunity to observe gesture when it is not "cooperating" with speech to encode a speaker's thoughts. Although speech occurs without gesture, nonconventional gestures (see Chapter One) rarely occur without speech (McNeill, 1992). Further, when gesture gets ahead of speech it "waits" for speech to catch up (Kita, 1993). Even when speech is severely disrupted and gesture would provide a useful form of compensation, as in the case of stuttering, gesture likewise "waits" for the disfluency to pass (see Chapter Five). Indeed, spontaneous gesture is so closely linked to speech that we only observe it functioning independently when speech is completely absent. In this chapter I summarize much of what we have learned about what happens to gesture when it is produced without speech. Interestingly, we will find that gesture looks much more like speech when speech is absent. This evidence can help us to understand why the gesture-speech relationship exists and how the human mind shapes the form of language.

When Speakers Do Not Speak

It is rare for a speaker to combine a string of gestures in the absence of speech. To increase the occurrence of this phenomenon, some investigators have asked speakers to communicate without speaking. Bloom (1979) and Dufour (1992) each requested that subjects recount a well-known story using only gestures. Both investigators found that subjects began by using very elaborate pantomime and gradually reduced their gestures so that by the end of their narrative, or over the course of several retellings, their gestures began to look more like the signs of a conventional signed language than like pantomime. Gestures became simplified in form and more systematic across multiple productions.

A similar approach was taken by Goldin-Meadow, McNeill, and Singleton (1996) to elicit gesture from speakers. These investigators showed subjects a videotape of short action vignettes and asked them to describe the scenes using speech; once the subjects had completed this task they were asked to describe the same scenes without using speech. This approach allowed the investigators to compare the gestures the subjects used when speaking with the gestures they used when not speaking. These researchers found that the subjects primarily used single gestures when they were speaking but produced strings of gestures in the absence of speech. Moreover, when speech was absent, the quality of the gestured responses differed from that of gestures combined with speech in a variety of ways. First, the subjects were much more likely to encode in gesture each of the semantic elements expressed in the action vignette when they gestured without speaking than when they gestured with speech. Second, when the subjects were not speaking they were much more likely to segment the expression of those semantic elements into separate gestures rather than combining their expression in a single gesture. Third, the gesture strings the subjects produced when they did not speak exhibited ordering preferences. Interestingly, the noun-like and predicate-like constituents of these gesture strings were typically ordered differently from the canonical order of their counterparts in the subjects' native language, English.

In McNeill's typology (1992; Chapter One), gesture is typically global and synthetic. These studies demonstrate the potential of gesture to become linear and segmented, characteristics McNeill attributes to speech. Goldin-Meadow, McNeill, and Singleton (1996) have proposed a model to account for the conditions under which gesture plays the role typically assumed by speech. Very generally, they claim that gesture will take on the properties of segmentation and hierarchical combination only when it carries the full burden of communication. We have seen that this is indeed the case when people do not speak and communicate only in gesture. However, the nonspeaking behavior studied by Bloom, Dufour, Goldin-Meadow, and others has not been naturally occurring. In the remainder of this chapter I focus on a population of individuals for whom communication by gesture alone is a necessity and a regular part of their daily lives.

When Deaf Individuals Do Not Speak or Sign

The overwhelming majority of deaf children are born to hearing parents and thus do not naturally come into contact with a signed language in the home. When parents choose not to include a signed language in the home, their deaf children are exposed only to spoken language. In many such cases these children are unable to master enough speech to fulfill their communicative needs. In all instances that my colleagues and I have observed to date, a deaf child who is not able to communicate via speech and who has not had any contact with sign language will use gesture to communicate instead. I refer to this type of gesture as "homesign," a direct translation of an American Sign Language (ASL) expression used to refer to the gestures of a deaf person who communicates via idiosyncratic signs developed in the home.

Homesign is a phenomenon documented by many researchers in different parts of the world, including such distant places as Belgium, Japan, Nicaragua, and the Rennell Islands (Morford, 1996a). Most researchers agree that homesign includes at least two types of gestures: pointing gestures and iconic, or descriptive, gestures that are based on pantomime (Goldin-Meadow and Mylander, 1984; Kendon, 1980; Kuschel, 1973; Tervoort, 1961). Homesign gestures are combined into gesture strings to communicate propositions in a productive manner. In cases where the parents of a deaf child prefer to use speech for communication, the gestures of the parents do not exhibit the structural properties observed in the child's homesign (Goldin-Meadow and Mylander, 1984, 1990a), presumably because their gesticulation is confined to the structure determined by the accompanying speech. Thus the structure found in homesign systems is not structure that is projected onto the manual modality by an individual with knowledge of a spoken language. It appears to emerge primarily as a result of the conditions placed on communication—specifically, the use of gesture when there is no speech model.

One universal characteristic of human languages is that they are structured at multiple levels. Studies of homesign have identified three levels of structure: lexical, sublexical, and syntactic. I will address each of these areas in turn, detailing two types of evidence for each. First I will describe linguistic analyses of homesign corpora that identify structure at each level. These studies provide direct evidence of the systematicity within and across individual homesigners' gesture systems. I rely primarily on evidence provided by Goldin-Meadow and her colleagues, who carried out a longitudinal study of ten homesigners between the ages of one and five (see Goldin-Meadow and Mylander, 1990a). Next I will describe my own research on the language acquisition patterns of homesigners who have used homesign throughout childhood and then entered a community at adolescence where they were able to learn a conventional signed language. If homesign is represented as language in the minds of these individuals, then we should see evidence of second-language learning effects of homesign on the newly acquired signed language. This approach provides converging evidence of the identified linguistic structures of homesign by demonstrating their psychological validity for the language learner.

Lexical Structure in Homesign

One of the characteristics of speech that McNeill (1992) has identified is that it "segments" the information it encodes. By this he means that an idea is divided into component parts. Thus to refer to a scene in which a person is walking we use one word, a noun, to refer to the person and a second word, a verb, to refer to the action of the person. In gesture the two parts of this idea could be expressed in a single unit—for example, by moving your hand forward while wiggling two fingers as though they were walking legs. One question that has been asked about homesign systems is whether they distinguish between classes of words, such as nouns and verbs, that would allow the segmentation of the various roles included in a single proposition.

Several analyses of homesign corpora find there is a distinction between nouns and verbs in homesign systems (DeVilliers, Bibeau, Ramos, and Gatty, 1993; Goldin-Meadow, Butcher, Mylander, and Dodge, 1994; Goldin-Meadow and Mylander, 1984; Macleod, 1973). A common finding across these studies is that pointing gestures are primarily used to represent nouns. Points are a type of gesture that develops very early in the gestural repertoire of most children (see Chapters Two and Three) and several months prior to a child's first spoken word. Further, pointing gestures differ from some of children's first words in that they depend on the context in which they are used to convey meaning. Thus it may seem that they could not be linguistically equivalent to nouns. This is a question that has plagued sign language acquisition studies, because points are prevalent in signed languages as well (see, for example, Petitto, 1987). Is a point a full linguistic symbol or not? Does it act as a constituent within a phrase? Several studies support the view that pointing gestures do in fact "segment" information encoded by nouns in spoken language.

Goldin-Meadow and Mylander (1984) compared the pattern of pointing gestures and descriptive gestures to the semantic roles represented in the gesture utterances of ten homesigners. Pointing gestures were used primarily to represent noun-like constituents (such as entities, actors, and patients), whereas descriptive gestures were used primarily to represent predicate-like constituents (such as actions and descriptors). (Homesigners eventually begin to use descriptive gestures for noun-like constituents, but not until they distinguish between nouns and verbs grammatically instead of lexically; see Goldin-Meadow, Butcher, Mylander, and Dodge, 1994.) A typical example of a homesign utterance would consist of pointing at a jar of bubble liquid, followed by a "blowing" gesture, to request that someone blow the bubbles. From a developmental perspective, homesigners' early pointing gestures resemble nouns because they are used to refer to the same semantic categories as the early nouns of children learning to speak (Feldman, Goldin-Meadow, and Gleitman, 1978). Further evidence for this view is gained by looking at the way that pointing gestures and descriptive gestures are combined into strings. If we assume that points are nouns and descriptive gestures are predicates, then these gesture strings exhibit the structure of early word combinations found in children learning to speak (Goldin-Meadow and Mylander, 1984).

Now let's turn to the case of homesigners who have learned ASL and ask whether there is any evidence from their acquisition of ASL that the pointing gestures they use in their homesign systems function as nouns. Consider the consequences of using pointing gestures only to direct an interlocutor's attention to actual objects and places in the environment rather than using them as linguistic symbols. When a homesigner is exposed to ASL, we would expect such an individual to continue using points in this way. That is, the individual's pointing gestures should either be used in addition to ASL nouns or to the exclusion of them. Alternatively, if points function as nouns in the homesigner's communication system, we should see a functional replacement of at least some of them by ASL noun signs (since ASL permits the use of points as pronouns, we would not expect all points to vanish).

I addressed this question in a longitudinal study of the ASL acquisition of two homesigners of different national origins (Morford, 1996c). The two subjects were both born profoundly deaf, never used hearing aids, and were exposed only to spoken language throughout childhood. Because they were raised in regions where there were no educational services for deaf children, they had no contact with other deaf children, nor were they exposed to a signed language. These individuals moved with their families to North America to escape economic or political hardship. At the time of their arrival neither of the two deaf subjects could speak any words of their family's native spoken language or sign any signs of a signed language. However, both individuals had developed a homesign system to communicate with their families.

In a spontaneous sample collected in the home before the subjects were exposed to ASL, I found that their use of pointing gestures and descriptive gestures replicated results reported in other studies of homesign. Specifically, both subjects tended to use points to express noun constituents (actors, entities, and patients) and descriptive gestures to express predicate constituents (actions and descriptors). They expressed nouns with points about 80 percent of the time, rarely using descriptive gestures for this purpose. By contrast, they expressed verbs with points only 10 percent of the time, using descriptive gestures instead.

Once the two subjects entered an environment where ASL was being used, I evaluated their noun and verb acquisition by showing them a random sample of twenty object pictures and twenty action pictures and asking them to name the pictures. After one month of exposure to ASL the two homesigners had learned only a few signs (30 percent of the noun sample and 15 percent of the verb sample), but after two years of exposure they were producing the correct sign for almost all of the pictures (78 percent of the noun sample and 93 percent of the verb sample). When the subjects did not produce the expected response for this task they substituted a different ASL sign instead (for example, one subject used the sign "clothes" instead of the desired sign "shirt"). The results indicated that the homesigners were acquiring ASL nouns and verbs at a similar rate. There was no indication that the homesigners were learning only ASL verbs to combine with the use of pointing gestures.

In a second task the subjects were asked to use their newly learned vocabulary to retell a story. Although the subjects had used points to represent 80 percent of all nouns in the spontaneous homesign sample collected just two months earlier, they used such gestures to represent only 30 percent of the nouns in the storytelling task. I found that the homesigners had replaced many of their pointing gestures with ASL noun signs. Interestingly, despite the similar rate of acquisition of ASL nouns and verbs in the labeling task described previously, these subjects used many more of their newly acquired nouns than verbs in the storytelling task. Sixty percent of the nouns produced in the story context were expressed with ASL signs, but only 10 percent of the verbs were expressed using ASL signs. (The remaining 10 percent of the nouns were expressed with homesign descriptive gestures. Two percent of predicates were expressed with pointing gestures, and 88 percent of predicates were expressed with homesign descriptive gestures.) Since points are an acceptable form of pronominalization in ASL, the subjects' noun production was very similar to that of other ASL signers. In contrast, their production of predicates had hardly changed as a result of their exposure to a signed language.
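To make the kind of tally behind these percentages concrete, here is a minimal sketch of how production proportions can be computed from coded gesture tokens. The tokens and the helper function are hypothetical illustrations, not the coding scheme or data of the study, which coded transcribed video.

```python
# A minimal, hypothetical sketch of the tally behind proportions like
# "30 percent of nouns were expressed with points." The tokens below are
# invented for illustration only.
from collections import Counter

# Each token: (constituent type, form used to express it)
story_tokens = [
    ("noun", "ASL sign"), ("noun", "ASL sign"), ("noun", "point"),
    ("noun", "homesign gesture"), ("verb", "homesign gesture"),
    ("verb", "homesign gesture"), ("verb", "ASL sign"),
    ("verb", "homesign gesture"), ("verb", "point"),
]

def form_proportions(tokens, constituent):
    """Return the share of each expressive form for one constituent type."""
    forms = Counter(form for kind, form in tokens if kind == constituent)
    total = sum(forms.values())
    return {form: count / total for form, count in forms.items()}

print(form_proportions(story_tokens, "noun"))  # how often nouns get ASL signs
print(form_proportions(story_tokens, "verb"))  # predicates remain homesign
```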

From the storytelling task alone it would have been easy to assume that the homesigners were learning ASL nouns much more quickly than ASL verbs. However, their ability to provide verbs in the picture-labeling task demonstrates that this is not the case. Clearly the homesigners were acquiring ASL verbs just as readily as ASL nouns. In past work I have proposed that this pattern of high production of ASL nouns does not reflect what the subjects were able to learn but rather what they were choosing to use (Morford, 1995a).

This explanation involves two parts. First, why do homesigners choose to use their newly acquired nouns? Homesign pointing gestures are easily replaced by ASL noun signs because the linguistic frame in which each of these occurs is identical (just as ASL nouns can be replaced by pronominal points in ASL). Second, why might homesigners avoid using ASL predicates? A likely explanation is that homesign verbs and ASL verbs have different syntactic frames. Whereas ASL predicates morphologically mark person, number, and aspect, the gesture morphology that has been identified in homesign to date is quite different. Homesign predicate morphemes typically encode semantic elements of the predicate such as path and instrument information. They rarely encode person, number, and aspect. Notably, the morphemes encoded by ASL predicates (see the following section) require that the predicate agree with other constituents in the phrase. Thus homesigners acquiring ASL may persist in using their own predicate-like gestures, which allow them to focus less on the structural integrity of the entire phrase and more on the lexical semantics encoded in their communication.

In sum, the acquisition pattern observed is consistent with the view that homesign points are represented like nouns, despite their simple form, in the mind of the homesigner. As a result, homesigners acquire new ASL nouns and verbs with relative ease. Homesigners' use of newly acquired ASL nouns is more prevalent than their use of ASL verbs, presumably because the syntactic structure is not as constraining for nouns as for verbs. These results demonstrate the psychological validity of the structural analyses of nouns and verbs in homesign systems performed by Goldin-Meadow and others.

Sublexical Structure in Homesign

One way in which a proposition can be segmented is into nouns and verbs, but further segmentation also occurs inside words. These subparts of words are called morphemes. A morpheme is the smallest meaning-bearing unit in a language. For example, in English the suffix -er on the end of an adjective is a morpheme indicating the comparative (for example, hard versus harder). A gesture can also be broken down into parts—the shape of the hand (or handshape) and the movement of the hand are two separate parts. These parts of the gesture can be considered morphemes if they are associated with a specific meaning in a systematic way.

Goldin-Meadow, Mylander, and Butcher (1995) identified morphological structure in the homesign systems of four deaf children by comparing the meaning associated with a specific handshape across a variety of gestures and then comparing the meaning associated with a specific movement across a variety of gestures. In all four cases they found that the children used a limited set of handshapes and movements to represent specific meanings in their homesign systems. Most important, these children combined the various handshapes and movements in a productive manner to create gestures with composite meanings. This analysis indicates that there is structure in homesign below the level of the lexical item.
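The logic of this analysis can be pictured with a small sketch. The handshape and movement meanings below are invented placeholders, not the morphemes Goldin-Meadow, Mylander, and Butcher identified; the point is only that composite gesture meanings fall out predictably when each part carries a fixed meaning.

```python
# Hypothetical handshape and movement morphemes; the real inventories
# were identified empirically for each child.
HANDSHAPE_MEANINGS = {
    "fist": "handle a small object",
    "flat-hand": "handle a flat, wide object",
}
MOVEMENT_MEANINGS = {
    "arc": "transfer to another location",
    "shake": "move back and forth in place",
}

def gesture_meaning(handshape: str, movement: str) -> str:
    """Compose a gesture's meaning from its two morphemes. Predictable
    composite meanings are the hallmark of productive morphology."""
    return f"{HANDSHAPE_MEANINGS[handshape]}, {MOVEMENT_MEANINGS[movement]}"

# Any handshape combines with any movement: 2 x 2 = 4 composite gestures.
for hs in HANDSHAPE_MEANINGS:
    for mv in MOVEMENT_MEANINGS:
        print(f"{hs} + {mv} -> {gesture_meaning(hs, mv)}")
```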

A second related finding is that the morphological structure of homesign systems is stable over time. Goldin-Meadow and Mylander (1990b) observed one child's spontaneous interactions between the ages of two years ten months and four years ten months and described the handshape and movement regularities in his homesign. Singleton, Morford, and Goldin-Meadow (1993) were able to demonstrate that this child continued to build homesign gestures out of these same handshape and motion morphemes five years later, at the age of nine. They showed the child short action vignettes and predicted how he would respond according to the morphological description in Goldin-Meadow and Mylander (1990b). Over two-thirds of the child's responses were correctly predicted in this manner. Thus there is evidence of sublexical consistency within homesigners' gestures, whether viewed synchronically or diachronically.

The individual who participated in the two studies just described learned ASL as an adult and came back to the laboratory to perform some of the same tasks he had performed as a nine-year-old, but answering questions in ASL instead of homesign (see Morford, Singleton, and Goldin-Meadow, 1995). In this study we were able to investigate whether or not his homesign morphology influenced his acquisition of ASL morphology. We tried to predict his adult responses based on the morphology of ASL. We found he made a number of interesting errors that are atypical of individuals who acquire ASL from birth. Rather than making errors evenly distributed across the task, the subject produced many errors in the production of certain morphemes but few errors in the production of other morphemes. Why were certain morphemes particularly hard for him to learn?

Second-language research suggests that individuals sometimes transfer knowledge from their first language to their second language. To investigate whether language transfer could explain the resulting pattern, we classified the ASL morphemes according to how closely they matched the morphemes of the homesign this individual had used at age five, to determine whether closely matching morphemes were more easily acquired (positive transfer) or less easily acquired (interference).

Morphemes can "match" in one of two ways. First, they can have the same form. For example, the suffix -er is also used as a morpheme in languages other than English. In German this suffix can be added to a word to indicate the plural (the plural of Haus is Häuser). These two morphemes have the same form (but a different meaning) across two different languages. Morphemes can also have the same meaning. Staying with the English-German comparison, the meaning of the progressive morpheme -ing in English is expressed by the suffix -end in German (the progressive of laufen ["to run"] is laufend ["running"]). These two morphemes have the same meaning (but a different form) across two different languages.

Morphemes in homesign and ASL can similarly be matched according to form or meaning. After classifying the ASL morphemes in this way we found a surprising result. When the homesign and ASL morphemes had a similar form, the subject's performance varied between 36 and 74 percent correct. When the morphemes had a different form in ASL and homesign they still exhibited variation between 38 and 66 percent correct. However, when the morphemes had a similar meaning the subject's performance varied only between 66 and 74 percent. When the morphemes had a different meaning the subject performed much more poorly, between 36 and 38 percent correct. Thus form seemed to affect the process of acquisition very little. Whether the ASL signs looked like the homesign gestures or not was inconsequential to how easily they were learned. Meaning, on the other hand, had considerable influence on acquisition. ASL morphemes with identical meanings to homesign morphemes were acquired much more systematically than ASL morphemes with different meanings. These results indicate that similarity in the meaning of the morphemes of ASL and homesign may have had a positive transfer effect on the subject's ability to learn ASL, whereas a difference in the meaning of morphemes in ASL and homesign could have interfered with his learning to use ASL morphemes appropriately.
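The classification amounts to a two-by-two analysis: each ASL morpheme is coded for whether its form and its meaning match a homesign morpheme, and accuracy is then averaged along each dimension. The sketch below uses one illustrative accuracy value per cell, chosen to be consistent with the ranges reported above; it is not the study's item-level data.

```python
from statistics import mean

# Each record: (form matches homesign?, meaning matches homesign?,
# proportion correct). One illustrative value per cell, consistent with
# the reported ranges; the actual analysis used per-morpheme scores.
records = [
    (True,  True,  0.74),   # same form, same meaning
    (True,  False, 0.36),   # same form, different meaning
    (False, True,  0.66),   # different form, same meaning
    (False, False, 0.38),   # different form, different meaning
]

def split_accuracy(dim, records):
    """Average accuracy for records that match vs. differ on one
    dimension (dim 0 = form, dim 1 = meaning)."""
    same = mean(r[2] for r in records if r[dim])
    diff = mean(r[2] for r in records if not r[dim])
    return same, diff

print("form:    same %.2f vs. different %.2f" % split_accuracy(0, records))
print("meaning: same %.2f vs. different %.2f" % split_accuracy(1, records))
# Form barely matters (0.55 vs. 0.52); meaning matters a lot (0.70 vs. 0.37).
```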

In sum, there is evidence of the psychological validity of the sublexical structure of homesign in that it affects the way homesigners go on to acquire other languages. If homesign morphology were simply due to the iconic form of the gestures, it would not interfere with the acquisition of a natural human language in such a systematic way.

Syntactic Structure in Homesign

In the previous two sections I have demonstrated that when gesture is used in the absence of speech to represent a proposition, the idea is segmented into lexical and morphological subcomponents, just as it is when expressed in speech. Recall that segmentation is one of two primary characteristics proposed by Goldin-Meadow, McNeill, and Singleton (1996) in their model of the expression of grammatical properties by gesture when it carries the full burden of communication. A second characteristic, combination, will be discussed now. Combination is related to segmentation, because once the elements of a proposition have been separated they must be put back together again. Languages conjoin segments in rule-governed ways. The term syntax is used to refer to the rules that govern how morphemes and words are combined to create a phrase.

Word order is one way in which languages express syntactic structure. By placing the constituents of the phrase in a relatively fixed order, the speaker can distinguish between the subject and the object. English is a language that relies heavily on word order. It is called an SVO language because the typical order of the constituents is subject-verb-object. For example, in the sentence "Gabriel tickled Jocelyn" the syntax determines who was tickling and who was being tickled. Gabriel is the subject, and thus also the tickler, because he is placed before the verb. Jocelyn is the object, and thus also the one being tickled, because she is placed after the verb.
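A toy function makes the positional logic explicit. This is only an illustration of word order as a cue, under the simplifying assumption of a bare three-word SVO clause; it is not a model of English syntax.

```python
def assign_roles_svo(clause: str) -> dict:
    """Read grammatical roles directly off linear position in a bare
    three-word SVO clause (a deliberate oversimplification)."""
    subject, verb, obj = clause.split()
    return {"subject": subject, "verb": verb, "object": obj}

print(assign_roles_svo("Gabriel tickled Jocelyn"))
# {'subject': 'Gabriel', 'verb': 'tickled', 'object': 'Jocelyn'}

# Reversing the nouns reverses the roles: order alone changes who did what.
print(assign_roles_svo("Jocelyn tickled Gabriel"))
```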

Not all of the world's languages use word order as a syntactic device, but it does happen to be common in many homesign systems that have been studied. Macleod's pioneering study (1973) analyzed the homesign system of a man living in England and found that he typically placed agents, patients, sources, and goals before actions and states. In grammatical terms, this man's homesign system exhibited an SOV (subject-object-verb) order. More recent studies of children (Goldin-Meadow and Mylander, 1984) and adolescent homesigners (Emmorey, Grant, and Ewan, 1994; Morford, 1996b) provide converging evidence of this pattern, with the exception that one individual studied by Goldin-Meadow and Mylander placed transitive agents (but not intransitive agents) and recipients after the action. None of the spoken languages that these homesigners were exposed to exhibited the constituent-ordering pattern of the individual's homesign system (for example, English uses SVO order, not SOV order). Thus we can be relatively certain that the gesture order patterns were not introduced by the hearing individuals in contact with these homesigners. However, it could be that gesture has taken on combinatorial structure because it is carrying the full burden of communication for these individuals.

Those who doubt that homesign is functioning as a language might argue that the ordering tendencies observed in homesign are generated by a more general cognitive process. Humans, after all, are analytic thinkers, and children show ordered behavior in a variety of domains, not just in language. To demonstrate that ordering preferences in homesign are a syntactic device we need evidence that homesigners show sensitivity to syntactic roles in an alternative language task. In my research with homesigners who have acquired ASL I have investigated whether or not the homesigners' mastery of ASL syntax is influenced by the syntactic preferences these individuals expressed in their homesign prior to their exposure to ASL (Morford, 1996c). If we can demonstrate that there are transfer effects from homesign to ASL we will have additional support for the view that ordering preferences in homesign systems are indeed an expression of syntactic structure.

Before we proceed it is important to understand the syntactic properties of the target language, ASL. ASL, like English, uses SVO sign order (Liddell, 1980; Padden, 1988) to encode syntactic relations for a subset of verbs called plain verbs. A second class of verbs, called agreement verbs, uses a different device, called spatial inflection. When the signer produces the sign for the subject or the object, it can be associated with a location in space in front of the signer. Subsequently, when the signer produces the sign for the action, the path of the action sign is modified such that it begins at the location associated with the subject and ends at the location associated with the object. It has also been argued, for Israeli Sign Language, that the syntactic structure is encoded in the direction in which the hands are facing. According to this account the direction of the path movement encodes the thematic structure (see Meir, 1995). To date no one has found evidence of spatial inflection for person in homesign. Inflection for location has been found in homesign by Goldin-Meadow, Butcher, Mylander, and Dodge (1994).

Given the current research findings it would be plausible to assume that word order is the primary device used to encode syntactic relations in homesign. If this is indeed the case, homesigners learning ASL will encounter agreement for the first time when exposed to ASL. Studies of second language learning indicate that individuals who are exposed to their second language fairly late have a strong tendency to transfer their syntactic processing strategies from their first to their second language, whereas individuals who are exposed to their second language earlier are more likely to differentiate according to language or even transfer processing strategies from their second language to their first language (Liu, Bates, and Li, 1992). These investigators note specifically that English speakers who do not learn Chinese until the age of twenty are very likely to apply word order strategies to Chinese despite the fact that word order is not an important cue to syntactic structure in Chinese.

If the sign order preferences exhibited in homesign are linguistic rules, then we should predict one of two outcomes. For early learners of ASL we would expect homesigners to learn to use both agreement and word order to mark syntactic roles in ASL and word order alone when using homesigns. In contrast, homesigners who have not been exposed to ASL until relatively late should perform like the English speakers learning Chinese described previously. They should use word order to encode syntactic relations in ASL and be less adept at using spatial inflection. Note that in this case the homesigner is transferring the generalized rule of using sign order to ASL, not the specific sign order that was used with homesigns. Thus a homesigner acquiring ASL after childhood should learn the appropriate order for ASL (namely, SVO).

If the use of sign order in homesign is not a syntactic device, we should predict a third outcome. Namely, the homesigners might attempt to continue using the exact same order they used with homesigns with their newly acquired ASL signs. Specifically, they would not use the appropriate SVO sign order with ASL; instead, they would use the sign order they used with their homesign system, OSV or SOV.

To investigate these possibilities I asked two homesigners who were first exposed to ASL in adolescence to recount a story from a wordless storybook, Frog, Where Are You? by Mercer Mayer (1969). The homesigners told the story at the end of their second school year in an ASL environment. The stories were videotaped, transcribed, and subsequently analyzed for syntactic structure. In particular, I evaluated how the subjects marked the subject and the object of the sentence. Did they use spatial inflection? Or ASL word order? Or homesign word order? Prior to their entry to an ASL environment, both individuals expressed an OSV gesture order preference in homesign (Morford, 1996b).


All ASL verbs were classified as "plain" or "agreement" verbs. Both subjects produced an equal number of tokens of plain and agreement verbs. Recall that plain verbs cannot be inflected. Only the agreement verbs, which accounted for half of the corpus, were analyzed to determine whether the subjects had mastered this novel syntactic device. On average the homesigners used spatial inflection to identify the subject in only 3.5 percent of their utterances and to identify the object in only 12.5 percent of their utterances. Thus after almost two years of exposure to ASL the homesigners were still rarely using the syntactic device most commonly used to mark subjects and objects with agreement verbs.

Subsequently, all verbs were pooled to investigate the use of word order. Note that for the majority of verbs the homesigners depended on discourse structure to identify the subject and object. Over half of the verbs were produced without a subject or an object. For the remainder of the verbs, the subject was placed before the verb in 87 percent of the utterances on average, and the object was placed after the verb in 81 percent of the utterances on average. Thus there was a relatively consistent use of ASL word order. Placing the subject prior to the verb is also consistent with homesign word order, but in only 19 percent of the utterances on average was the object also placed before the verb, as would have been predicted had the homesigners been maintaining their ordering heuristic from homesign.
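The sketch below illustrates how such order rates can be computed from coded utterances. The utterances are hypothetical; each lists its overt constituents in production order, and verbs without an overt subject or object simply drop out of the relevant count, as in the analysis above.

```python
# Hypothetical coded utterances: constituents in production order.
utterances = [
    ["S", "V", "O"],  # ASL-like SVO
    ["S", "V"],       # no overt object
    ["V", "O"],       # no overt subject
    ["O", "S", "V"],  # homesign-like OSV
]

def order_rates(utterances):
    """Rate of subject-before-verb and object-after-verb placement,
    counting only utterances where the constituent is overt."""
    s_pre = [u.index("S") < u.index("V") for u in utterances if "S" in u]
    o_post = [u.index("O") > u.index("V") for u in utterances if "O" in u]
    return sum(s_pre) / len(s_pre), sum(o_post) / len(o_post)

subject_before_verb, object_after_verb = order_rates(utterances)
print(f"subject before verb: {subject_before_verb:.0%}")  # 100% here
print(f"object after verb:   {object_after_verb:.0%}")    # 67% here
```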

This pattern of results supports the second hypothesized outcome, that the homesigners would look just like English speakers learning Chinese as a second language fairly late in life. They were attentive to sign order in ASL because it was an important cue to syntactic structure in homesign. They did not transfer the actual sign order they used in homesign (OSV) to their use of ASL signs. They transferred only the generalized rule that sign order expresses the relation between constituents. The subjects were less successful in mastering a new cue to syntactic structure, spatial agreement. These results corroborate evidence from linguistic analyses of homesign systems that identify sign order as the primary device for marking syntactic structure in homesign.

Conclusion

This chapter has summarized research on the structure of gesture produced in the absence of speech. The gestures both of hearing individuals who have been asked not to speak and of deaf individuals who depend solely on gesture to communicate exhibit characteristics typically associated with speech. Specifically, gestures produced in the absence of speech are segmented and linear rather than global and synthetic like the gestures that accompany speech. One explanation of the transfer of these characteristics from speech to gesture is that symbolic communication requires these features (Goldin-Meadow, McNeill, and Singleton, 1996). It is only when speech is absent, and thus unable to assume the required features of symbolic communication, that we see these features in spontaneous gesture.


Several decades of research on homesign systems have contributed to the growing consensus that homesign systems—that is, the gestural communication systems generated by deaf individuals who have not been exposed to a signed language—are structured at the lexical, sublexical, and syntactic levels. Past studies have consisted of structural analyses of homesign corpora from children (DeVilliers, Bibeau, Ramos, and Gatty, 1993; Goldin-Meadow and Mylander, 1984), adolescents (Emmorey, Grant, and Ewan, 1994; Morford, 1996c), and adults (Kendon, 1980; Kuschel, 1973; Macleod, 1973). These studies have documented a distinction in homesign between nouns and verbs (DeVilliers, Bibeau, Ramos, and Gatty, 1993; Goldin-Meadow, Butcher, Mylander, and Dodge, 1994; Macleod, 1973), the presence of handshape and motion morphemes that are combined productively to create gestures with composite meanings (Goldin-Meadow, Mylander, and Butcher, 1995), and gesture-ordering preferences that are akin to syntax (Emmorey, Grant, and Ewan, 1994; Goldin-Meadow and Mylander, 1984; Macleod, 1973; Morford, 1996b). Further, the structure of homesign has been shown to subserve functions typically associated with conventional languages (Morford, 1995b; Morford and Goldin-Meadow, 1997).

In this chapter I have summarized recent evidence of the psychological validity of these structural features of homesign systems. Namely, homesigners who are exposed to a signed language in adolescence exhibit language-learning tendencies that are familiar from the literature on second language acquisition. They are attentive to the grammatical distinctions in ASL that are also present in their homesign systems, such as the noun-verb distinction, regularities in morphological structure, and devices for marking grammatical role. In several instances there is evidence of forward transfer from homesign to ASL. Specifically, homesigners master ASL morphemes that share a common meaning with homesign morphemes more readily than they master ASL morphemes that do not. Moreover, homesigners generalize the use of word order for marking syntactic roles to verbs in ASL that typically mark grammatical role through agreement. The sum of these acquisition phenomena indicates that homesign functions as a first language for individuals who have not been exposed to a language in childhood, thereby strengthening the claim that the structures observed in homesign systems are linguistic in nature.

With a variety of research approaches documenting the segmented and combinatorial structure of gesture, we are left with the question of why gesture does not maintain these characteristics when it is combined with speech. One possibility is that the task demands are simply too great. If both speech and gesture were used to express the same idea via linearly combined segments simultaneously, it would be similar to speaking two languages at once. Numerous attempts to speak a spoken language and sign a signed language simultaneously have already demonstrated that the simultaneous production of two languages is error-ridden and asymmetrical in the inclusion of linguistic features. Maxwell, Bernstein, and Mear (1991) conclude that attempts to produce two languages simultaneously are better characterized as bimodal production, since the proposition is encoded across the two channels rather than fully and equally encoded within each channel.

An additional explanation for the lack of segmental and combinatorial structure in gesture that accompanies speech (despite its potential to encode meaning in this way) is that it is actually beneficial to speakers and listeners to use gesture and speech in a complementary fashion (see Goldin-Meadow and McNeill, forthcoming). Although gestures are primarily unconscious, they do facilitate both the expression and the comprehension of ideas. There are numerous instances in which gestures cannot be seen by the speaker (see Chapter Six) or the listener (Iverson and Goldin-Meadow, 1997) but are produced nevertheless. McNeill (1992, p. 11) states, "Comparing speech to gestures produces an effect on our understanding of language and thought something like the effect of triangulation in vision. Many details, previously hidden, spring out in new dimension." This is certainly the case for scientists investigating the speech-gesture relationship. Most likely this is also the case for everyday interactions. The global-synthetic expression of an idea in gesture reveals more than can be encoded in the spoken modality alone.

References

Bloom, R. "Language Creation in the Manual Modality: A Preliminary Investigation." Unpublished manuscript, University of Chicago, 1979.

DeVilliers, J., Bibeau, L., Ramos, E., and Gatty, J. "Gestural Communication in Oral Deaf Mother-Child Pairs: Language with a Helping Hand?" Applied Psycholinguistics, 1993, 14, 319-347.

Dufour, R. "The Use of Gestures for Communicative Purposes: Can Gestures Become Grammatical?" Unpublished doctoral dissertation, University of Illinois, 1992.

Emmorey, K., Grant, R., and Ewan, B. "A New Case of Linguistic Isolation: Preliminary Report." Paper presented at the Boston University Conference on Language Development, Boston, 1994.

Feldman, H., Goldin-Meadow, S., and Gleitman, L. "Beyond Herodotus: The Creation of Language by Linguistically Deprived Deaf Children." In A. Lock (ed.), Action, Gesture and Symbol: The Emergence of Language. Orlando: Academic Press, 1978.

Goldin-Meadow, S., Butcher, C., Mylander, C., and Dodge, M. "Nouns and Verbs in a Self-Styled Gesture System: What's in a Name?" Cognitive Psychology, 1994, 27, 259-319.

Goldin-Meadow, S., and McNeill, D. "The Role of Gesture and Mimetic Representation in Making Language the Province of Speech." In M. C. Corballis and S. Lea (eds.), Evolution of the Hominid Mind. Oxford, England: Oxford University Press, forthcoming.

Goldin-Meadow, S., McNeill, D., and Singleton, J. L. "Silence Is Liberating: Removing the Handcuffs on Grammatical Expression in the Manual Modality." Psychological Review, 1996, 103, 34-55.

Goldin-Meadow, S., and Mylander, C. "Gestural Communication in Deaf Children: The Effects and Non-effects of Parental Input on Early Language Development." Monographs of the Society for Research in Child Development, 1984, 49, 1-121.

Goldin-Meadow, S., and Mylander, C. "Beyond the Input Given: The Child's Role in the Acquisition of Language." Language, 1990a, 66, 323-355.

Goldin-Meadow, S., and Mylander, C. "The Role of Parental Input in the Development of a Morphological System." Journal of Child Language, 1990b, 17, 527-563.

Goldin-Meadow, S., Mylander, C., and Butcher, C. "The Resilience of Combinatorial Structure at the Word Level: Morphology in Self-Styled Gesture Systems." Cognition, 1995, 56, 195-262.

Iverson, J. M., and Goldin-Meadow, S. "Congenitally Blind Children Gesture as They Speak Even Though They Have Never Seen Anyone Gesture." Unpublished manuscript, Indiana University, 1997.

Kendon, A. "A Description of a Deaf-Mute Sign Language from the Enga Province of Papua New Guinea with Some Comparative Discussion. Part I: The Formational Properties of Enga Signs." Semiotica, 1980, 31 (1-2), 1-34.

Kita, S. "Language and Thought Interface: A Study of Spontaneous Gestures and Japanese Mimetics." Unpublished doctoral dissertation, Departments of Psychology and Linguistics, University of Chicago, 1993.

Kuschel, R. "The Silent Inventor: The Creation of a Sign Language by the Only Deaf-Mute on a Polynesian Island." Sign Language Studies, 1973, 3, 1-27.

Liddell, S. American Sign Language Syntax. Hawthorne, N.Y.: Mouton de Gruyter, 1980.

Liu, H., Bates, E., and Li, P. "Sentence Interpretation in Bilingual Speakers of English and Chinese." Applied Psycholinguistics, 1992, 13, 451-484.

Macleod, C. "A Deaf Man's Sign Language—Its Nature and Position Relative to Spoken Languages." Linguistics, 1973, 101, 72-88.

Maxwell, M., Bernstein, M. E., and Mear, K. M. "Bimodal Language Production." In P. Siple and S. D. Fischer (eds.), Theoretical Issues in Sign Language Research. Vol. 2: Psychology. Chicago: University of Chicago Press, 1991.

Mayer, M. Frog, Where Are You? New York: Dial, 1969.

McNeill, D. Hand and Mind. Chicago: University of Chicago Press, 1992.

McNeill, D. "Imagery in Motion Event Descriptions: Gestures as Part of Thinking-for-Speaking in Three Languages." Paper presented at the twenty-third annual meeting of the Berkeley Linguistics Society, Berkeley, Calif., February 1997.

Meir, I. "Explaining Backwards Verbs in ISL: Syntactic-Semantic Interaction." In H. Bos and T. Schermer (eds.), Sign Language Research 1994: Proceedings of the Fourth European Congress on Sign Language Research in Munich. Hamburg, Germany: Signum Press, 1995.

Morford, J. P. "How Is Gesture Structured in the Absence of Speech? New Evidence of Structure in Homesign." Paper presented at a meeting of the Society for Research in Child Development, Indianapolis, Ind., 1995a.

Morford, J. P. "How to Hunt an Iguana: The Gestured Narratives of Non-signing Deaf Children." In H. Bos and T. Schermer (eds.), Sign Language Research 1994: Proceedings of the Fourth European Congress on Sign Language Research in Munich. Hamburg, Germany: Signum Press, 1995b.

Morford, J. P. "Insights to Language from the Study of Gesture: A Review of Research on the Gestural Communication of Non-signing Deaf People." Language and Communication, 1996a, 16 (2), 165-178.

Morford, J. P. "Tendance d'Ordre dans un Système de Signes Domestiques" ("Ordering Tendencies in a Homesign System"). In C. Dubuisson and D. Bouchard (eds.), Spécificités de la Recherche Linguistique sur les Langues Signées [Linguistic Research on Sign Languages]. Montreal: Association Canadienne-Française pour l'Avancement des Sciences, 1996b.

Morford, J. P. "Recharacterizing the Critical Period for Language Acquisition: Evidence from Linguistic Isolates." Paper presented at an invited colloquium sponsored by the Linguistics Department, University of New Mexico, Albuquerque, 1996c.

Morford, J. P., and Goldin-Meadow, S. "From Here and Now to There and Then: The Development of Displaced Reference in Homesign and English." Child Development, 1997, 68 (3), 420-435.

Morford, J. P., Singleton, J. L., and Goldin-Meadow, S. "From Homesign to ASL: Identifying the Influences of a Self-Generated Childhood Gesture System upon Language Proficiency in Adulthood." In D. MacLaughlin and S. McEwen (eds.), Proceedings of the 19th Boston University Conference on Language Development. Somerville, Mass.: Cascadilla Press, 1995.

Padden, C. Interaction of Morphology and Syntax in American Sign Language. New York: Garland, 1988.

Petitto, L. A. "On the Autonomy of Language and Gesture: Evidence from the Acquisition of Personal Pronouns in American Sign Language." Cognition, 1987, 27 (1), 1-52.

Singleton, J. L., Morford, J. P., and Goldin-Meadow, S. "Once Is Not Enough: Standards of Well-Formedness in Manual Communication Created over Three Timespans." Language, 1993, 69, 683-715.

Tervoort, B. T. "Esoteric Symbolism in the Communication Behavior of Young Deaf Children." American Annals of the Deaf, 1961, 106, 436-480.

Jill P. Morford is assistant professor of linguistics at the University of New Mexico, Albuquerque.