
Infant and Child Development, Inf. Child Dev. 21: 555–578 (2012). Published online 2 May 2012 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/icd.1757

Distinct Facial Characteristics Differentiate Communicative Intent of Infant-Directed Speech



Kate Georgelas Shepard*, Melanie J. Spence and Noah J. Sasson
School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA

Adults and infants can differentiate communicative messages using the nonlinguistic acoustic properties of infant-directed (ID) speech. Although the distinct prosodic properties of ID speech have been explored extensively, it is currently unknown whether the visual properties of the face during ID speech similarly convey communicative intent and thus represent an additional source of social information for infants during early interactions. To examine whether the dynamic facial movement associated with ID speech confers affective information independent of the acoustic signal, adults' differentiation of the visual properties of speakers' communicative messages was examined in two experiments in which the adults rated silent videos of approving and comforting ID and neutral adult-directed speech. In Experiment 1, adults differentiated the facial speech groups on ratings of the intended recipient and the speaker's message. In Experiment 2, an original coding scale identified facial characteristics of the speakers. Discriminant correspondence analysis revealed two factors differentiating the facial speech groups on various characteristics. Implications for perception of ID facial movements in relation to speakers' communicative intent are discussed for both typically and atypically developing infants. Copyright © 2012 John Wiley & Sons, Ltd.

Key words: infant-directed speech; facial speech; face perception; emotion perception

*Correspondence to: Kate Shepard, School of Behavioral and Brain Sciences, Mailstop GR41, The University of Texas at Dallas, Richardson, TX 75080, USA. E-mail: kateshepard@utdallas.edu

This research was completed as a partial fulfilment of the qualifying thesis requirement for the doctoral degree in Psychological Sciences for the first author.

Copyright © 2012 John Wiley & Sons, Ltd.


    INTRODUCTION

Adults communicate with infants using infant-directed (ID) speech, regardless of their prior experience with babies (Jacobson, Boersma, Fields & Olson, 1983). ID speech, compared with adult-directed (AD) speech, has higher fundamental frequency (F0), more variation in frequency range, longer pauses and shorter utterances (Fernald & Simon, 1984), which directly impact the prosody, or the rhythm and intonation patterns of speech. Adults use ID speech for a variety of functions, such as modulating infants' affect and attention in the early months of life and facilitating language learning later on (Fernald, 1992). Adults communicate distinct intents or messages to infants, such as approving, using different prosodic properties (Papousek, Papousek & Symmes, 1991); these intents are distinguished by adult listeners on the basis of the acoustic prosodic properties in the absence of the linguistic content of the speech (Fernald, 1989). Moreover, 6-month-old infants categorize the acoustic signals of different intents (Moore, Spence & Katz, 1997; Spence & Moore, 2003), which suggests that preverbal infants may use nonlinguistic properties of speech to facilitate their interpretation of caregivers' messages.

A significant property of speech that has been understudied is the visual component of ID speech, or the facial movements and expressions that accompany the acoustic signal (Chong, Werker, Russell & Carroll, 2003). Speakers often use facial expressions to further communicate their intended message, and adults may be more visually expressive when addressing children versus adults (Swerts & Krahmer, 2010). A good listener is able to incorporate facial cues with the auditory signal to interpret the intent of the message, and failure to do so is often considered detrimental to the social interaction, as seen in certain social disorders (e.g. autism spectrum disorders). A casual observer of a mother speaking to her infant will quickly note the exaggerated, engaging expressions portrayed in the mother's face (Werker & McLeod, 1989). The facial movements that accompany ID speech may portray distinct facial expressions that capture infants' attention differently than AD faces (Chong et al., 2003), and ID faces may be especially salient to young infants because most of their early social experiences involve close face-to-face interactions with their caregivers.

While speaking to an infant, an adult's face often displays such characteristics as exaggerated smiles, wide eyes and raised eyebrows (Werker & McLeod, 1989). Chong and her colleagues (2003) quantified facial expressions made by mothers during interactions with their infants, which differed significantly from the mothers' adult-oriented facial expressions. Mothers from either Chinese-speaking or English-speaking backgrounds were video-recorded while speaking to their infants. Naïve raters identified three distinct facial expressions, or ID expressions, used by all 20 mothers in the study, regardless of language background. These findings suggest that distinct facial characteristics are portrayed during mother–infant interactions; whether these vary depending on the speaker's communicative intent or spoken message has not been examined.

Although the unimodal auditory signal of ID speech has been thoroughly investigated (e.g. Cooper, Abraham, Berman & Staska, 1997; Fernald, 1985, 1989, 1992, 1993; Golinkoff & Alioto, 1995; Jacobson et al., 1983; Kitamura & Burnham, 2003; Kitamura, Thanavishuth, Burnham & Luksaneeyanawin, 2002; Papousek, Bornstein, Nuzzo, Papousek & Symmes, 1990; Papousek et al., 1991), variations of ID facial movements and expressions are less understood. Fernald (1989), for example, demonstrated that the prosody, or melodic rhythm, of the acoustic speech signal carries the intended message in the absence of the linguistic content of the message. Alternatively, Graf et al. (2002) (as cited in Blossom & Morgan, 2006)



suggested that the visual prosody of the rhythmic movements of the face and head during speech production may highlight the speech signal. It is well known that visual speech perception affects auditory speech perception even in young infants, such as during a test of the McGurk effect (Burnham & Dodd, 2004; McGurk & MacDonald, 1976), which suggests that infants are capable of integrating auditory and visual information. Further, speech movements made during ID speech, such as vowel-specific exaggerations of the lips, differ from those of AD speech (Green, Nip, Wilson, Mefferd & Yunusova, 2010), which may facilitate infants' visual perception of the acoustic signal.

Beyond the visual linguistic properties, Walker-Andrews (1986) found that 7-month-old infants matched the happy or angry affective visual properties of a face to the corresponding auditory speech signal while watching the speaker's face with the mouth occluded. Thus, the infants demonstrated integration of the nonlinguistic characteristics of the face with the acoustic speech signal, an indication that facial expressions, as portrayed in areas such as the eyes, may have provided affective communicative information to the infants.

Despite these demonstrations of infants' integration of audiovisual information, little is known concerning the degree to which speakers modulate nonlinguistic visual properties of the face when communicating distinct affective messages. Modulation of more general actions has been demonstrated, which may be analogous to modifications in ID speech (Meyer, Hard, Brand, McGarvey & Baldwin, 2011). The term 'motionese' has been used to describe mothers' actions during interactions with their infants; when showing a novel object to an infant or an adult, mothers' ID actions are more exaggerated in motion, simplistic and repetitive compared with their AD actions (Brand, Baldwin & Ashburn, 2002). Such actions may be particularly important in caregivers' use of 'acoustic packaging', or the multimodal alignment of speech with actions (Hirsh-Pasek & Golinkoff, 1996; Meyer et al., 2011). Meyer and colleagues (2011) found, for example, that mothers' action demonstrations to their infants were often accompanied by action-describing utterances, providing multimodal input that may facilitate infants' understanding of goal-directed actions. The combination of visual and auditory stimulation may provide salient redundant multimodal information to infants, as proposed by the intersensory redundancy hypothesis (Bahrick, Lickliter & Flom, 2004). Yet, the relative contribution of visual facial modifications in relation to specific ID messages remains unclear.

Infants' prelinguistic differentiation of ID speech intents may be supported by the dynamic visual properties of the ID face, along with or in place of the acoustic message. The current study seeks to identify the distinct properties of facial movements portrayed during adults' production of ID speech to examine whether adults vary facial movement during ID speech dependent on communicative intent. Adults were asked in Experiment 1 to distinguish silent videos of women speaking in AD speech and in approving (e.g. 'Good girl!') or comforting (e.g. 'Poor baby') ID speech, and to quantify specific facial characteristics that helped make those distinctions in Experiment 2. Thus, the current study differs from and extends upon previous reports examining visual properties of faces during ID speech (e.g. Chong et al., 2003) by (1) assessing if the facial expressions