
Journal of Personality and Social Psychology, 1986, Vol. 51, No. 4, 690-699
Copyright 1986 by the American Psychological Association, Inc. 0022-3514/86/$00.75

Cues and Channels in Emotion Recognition

Harald G. Wallbott and Klaus R. Scherer
University of Giessen, Giessen, West Germany

This article addresses methodological issues pertinent to judgment studies in nonverbal communication research, in general, and to the perception and attribution of emotions, in particular. We investigated which behavioral cues are used in portraying various emotions and to what extent the channel of presentation and encoding differences between actors affect judgment accuracy. In an encoding study the nonverbal behaviors of 6 different actors (3 male, 3 female) portraying four emotions (joy, sadness, anger, surprise) were analyzed from a videotape. In a decoding study these portrayals were shown, using 4 channels of presentation (audio-video, video only, audio only, filtered audio), to groups of naive judges. The results indicate that different nonverbal cues are used to portray the various emotions and that differences between channels and between actors strongly affect decoding accuracy. Specifically, overemphasis of behavioral cues characteristic for certain emotions results in reduced decoding accuracy.

We would like to thank Ursula Scherer for contributing to the design and running of this study; Jurek Karylowski for assisting with the statistical analyses and for critical comments; Arvid Kappas and Thomas Staufenbiel for helping with data collection and analysis; an anonymous reviewer for very insightful comments; and, last but not least, the actors of the Giessen Municipal Theatre Company who participated in this project.
Correspondence concerning this article should be addressed to Klaus R. Scherer, Department of Psychology, University of Giessen, Otto-Behaghel-Strasse 10, 6300 Giessen, West Germany.

Given the widespread assumption that nonverbal signal systems are particularly suitable for communicating affective information, it is not surprising that many researchers have attempted to study how well individuals can recognize different emotions from various nonverbal cues (for an overview of such studies, see Ekman, 1982; Scherer, 1981). One of the major obstacles researchers have encountered in this area has been the problem of obtaining realistic stimulus material to present to judges; that is, obtaining nonverbal expressions that represent valid indicators of various emotional states. The ethical and practical difficulties in experimentally inducing strong emotional states, or even recording naturally occurring emotional experiences, have been one of the perennial problems in the scientific study of emotion.

In studying the role of vocal cues in the communication of emotional meaning, this difficulty is aggravated because the researcher, in order to avoid interference from verbal content, usually wants to use standard verbal utterances with the same content for different emotions so as to focus on the different paralinguistic cues that carry the emotional meaning. Given these constraints, most researchers have taken recourse to emotional expression simulated or portrayed by actors using standard verbal material such as standard sentences, nonsense syllables, letters of the alphabet, and the like (see summary in Scherer, 1981).

A serious objection that is immediately raised against studies of this kind concerns the lack of spontaneity and naturalness inherent in this procedure. The question of whether one is merely studying conventional rituals of affect expression on the stage, shared by actors and spectators, is an issue of major importance in the field and one that cannot be dealt with in the present context. For the purposes of the present study we assumed that there is some value in studying simulated portrayals of emotional expression by professional or lay actors.

One justification for the use of this method of elicitation is that although simulated emotional expressions are clearly not natural enough, the natural expressions of emotion obtained in most studies to date have not been emotional enough. Inasmuch as we obviously should continue the attempt to obtain naturalistic recordings of emotional expressions in the field (and possibly in the laboratory), insofar as this is ethically possible, in the meantime we may have to rely on the continued use of simulation studies. The second justification is that the study of ritualized affect expression conventions, theatrical or otherwise, is of major interest in and of itself, particularly if, at a later stage, it can be compared with more natural and spontaneous affective communication via nonverbal cues.

If we start from the assumption that there is some value in studying simulated portrayals of emotion because, for certain purposes, particularly for the study of vocal cues, it is necessary to obtain some standard verbal material for different emotions, it is of great methodological interest to devise an elicitation design that, on the one hand, comes as close as possible to a real-life communication situation and that, on the other hand, makes optimal use of the possibilities for experimental variation designs. Most previous simulation studies are found wanting on both of these criteria.

As far as naturalness is concerned, one of the major problems is the persistent use of monologues for the emotional portrayal. In most cases, senders are given a set of standard verbal material and are asked to deliver this text with various emotional meanings to the microphone, as it were. This is, of course, a rather untypical situation, particularly for nonprofessional actors, and it is quite possible that individual conceptions of declamatory style rather than attempts at realistic emotional portrayal are obtained.

A second major problem is that the senders or encoders are generally just given emotion labels to describe the emotional state they are to simulate (e.g., "Read this text as if you were very angry"). The problem here, of course, is that there are many different kinds of anger and that different encoders may well envisage rather different anger-inducing situations when delivering the text, some of which may be highly idiosyncratic.

Both of these problems can be avoided by taking a scenario approach, like that used by Williams and Stevens (1972), for example. In this method, actors are given a situation and a plot to act out in an interactive fashion. The actual dialogue is to be improvised during the course of the scene. Most likely, the use of a realistic setting and the interaction between several actors increases the chance that the resulting emotional expression will be somewhat closer to real life than is the monologic delivery of a standard text (let alone nonsense syllables or numbers). One obvious drawback of improvised dialogue is the absence of standard verbal material, which may be needed for certain types of studies on the vocal communication of affect. In the present study we have attempted a compromise by asking the actors to use a standard sentence at the apex of each improvised emotion dialogue.

With regard to the second criterion, the optimal use of experimental variation in simulation studies, most previous studies neglected to use sufficient replication over senders and over emotional situations. In many studies, only one or a very few actors were used, and generally, they were required to simulate each emotion only once. Once one is willing to tolerate all the disadvantages of unnatural simulated material, one might as well exploit the methodological advantages of this method, for example by studying intra- and interindividual differences in encoders as well as differences between several situations of the same type of emotion. In this study, we tried to assess the relative strength of the effect of these variables on the accuracy of the emotional communication. In addition, we used a channel separation approach (audiovisual, audio only, video only, filtered audio) to obtain judgments of the simulated emotional utterances in order to study different effects of the factors discussed above on emotion recognition in different channels.

Specifically, we were interested in the following questions:
1. Do emotions differ with respect to the behaviors encoded by actors?
2. How strong are the differences between professional actors in terms of their encoding ability for emotional expression?
3. Do such differences result from the sex of the actor or the type of emotion simulated?
4. Are these differences dependent on the channel of communication (i.e., audio, video, or audiovisual)?

Given the large number of studies that have looked at individual differences in the encoding and decoding of emotional meaning (cf. Friedman, 1979; Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979; Zuckerman, Hall, DeFrank, & Rosenthal, 1976; Zuckerman, Lipets, Hall, & Rosenthal, 1975), one would expect rather dramatic differences. On the other hand, much of the earlier research, in which only one or a very few actors were used, seemed to rely on the assumption that actors would be able to represent emotional expression in such a standardized way that the use of several actors would be superfluous. Indeed, if one assumes that what is communicated in such simulation studies are cultural conventions of affect expression, this assumption seems justified. The present study seeks to establish the extent to which this is really the case.

Method

Encoding Study

Actors. Three male and 3 female professional actors were recruited from the Municipal Theatre Giessen, a repertory company. All had graduated from professional acting schools and had performed in many different plays at the Giessen theatre and in other professional repertory companies in Germany. They were paid for their services.

Scenarios. Two scenarios for each of the following four emotions were enacted: joy, anger, sadness, surprise. All of the scenarios involved a young couple. For each scenario the actors were provided with a short vignette of the antecedent events and the nature of the situation. On the basis of this vignette they were to improvise a dialogue reflecting the emotional feeling inherent in the situation. Each scene was to last between 1/2 and 3 min. During this time, the actors could develop the scenario in any way they wanted, provided they stayed within the constraints of the situational description.

When the actors felt that they had reached the apex of the emotional feeling required by the situation, they were to use the sentence "Ich kann es nicht glauben" ("I can't believe it") at their turn in the dialogue. After this they were to continue the dialogue to conclude the situation. The roles were then reversed and, using the same scenario again, the other actor had to use the sentence at the apex of the emotional feeling. Thus, each scenario was played twice, with one actor uttering the sentence during the first run and the other actor uttering it during the second run.

Recording of the scenarios. Three dyads, each consisting of a male and a female actor, were formed to act out the scenarios. They were seated around a table in the recording studio of the laboratory, which closely resembled a living room setting. Except for a microphone on the table before the actors, there was no sign of recording. All video cameras were hidden behind one-way mirrors that were partially masked by curtains. Before each scenario, the actors were given a short vignette describing the situation, and they were asked to prepare for the improvisation. When they thought that they were ready, the actors gave a sign to the experimenter, who then started the audio and video recording for that scenario. When the actors had finished a scene, they again gave a sign, and the recording was stopped.

Video recordings and audio recordings were made on professional equipment. Full shots of the actors were taken with two cameras, one camera for each actor, and the two pictures were combined into one picture using a split-screen device.

Judgment Study

Stimulus tape. Stimuli for the judgment of emotional expression in different channel conditions were obtained by editing the standard sentence ("I can't believe it"), which was used by each actor at the apex of the emotional simulation, onto a stimulus tape. This standard passage procedure (see Williams & Stevens, 1972) was chosen to keep verbal content comparable across actors and scenes. Because both semantic and syntactic factors affect pausing, tempo, intensity, and fundamental frequency of the voice, as well as gestural illustrating, the study of scenarios with varying verbal content would require a much more complex approach.


Accordingly, a sequence of the emotional expressions from different actors and different emotions was arranged in a random order on the stimulus tape.

Four groups of judges were used to evaluate the emotion expressed in the following four conditions: (a) the audiovisual condition, providing both the audio and the video information (10 judges); (b) the video-only condition, showing the visual information on a monitor with the sound turned off (12 judges); (c) the audio-only condition, playing only the soundtrack of the videotape with the monitor turned black (11 judges); and (d) a filtered-audio condition (10 judges; using a low-pass filter with a cut-off frequency of 400 Hz). The filter condition was used to reduce the cues available to the judges to changes of fundamental frequency of the voice (F0) and sequential speech cues; voice quality cues were no longer audible (see Scherer, 1982).

Judges. Students in psychology and other fields were recruited by posters advertising the task. Twenty-one of the 43 judges were female and 22 were male. Their mean age was 24.8 years, with a range from 20 to 32 years. Eleven of the judges studied medical science, 9 education, 9 social science, 8 psychology, and 6 natural science. They were paid for their participation, and prizes of 30, 20, and 10 German marks (about $12, $8, and $4) were promised to the three most accurate judges.
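For readers who want to approximate the filtered-audio manipulation, the following sketch applies a 400-Hz low-pass filter to a waveform. It is a minimal illustration, assuming Python with numpy and scipy; the original study used studio filtering equipment, and the digital Butterworth design and filter order chosen here are assumptions, not the authors' procedure.

```python
# A minimal illustration of the filtered-audio condition: a 400-Hz low-pass
# filter that leaves fundamental-frequency (F0) and sequence cues intact while
# removing voice-quality information. Filter type and order are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_400hz(waveform: np.ndarray, sample_rate: int) -> np.ndarray:
    """Zero-phase 4th-order Butterworth low-pass filter with a 400 Hz cutoff."""
    nyquist = sample_rate / 2.0
    b, a = butter(N=4, Wn=400.0 / nyquist, btype="low")
    return filtfilt(b, a, waveform)

# Example: a 150-Hz "voice" component survives; a 2500-Hz component is removed.
rate = 16000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)
filtered = lowpass_400hz(signal, rate)
```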

Procedure. Judges were told that their task would be to evaluate the emotions expressed by actors in a simulated scenario, and they were given a detailed background description of the nature of the stimuli. They were then shown the 48 stimuli (4 emotions by 2 types of scenarios by male or female actor by 3 pairs of actors) in the respective channel condition (audiovisual, video only, audio only, filtered audio). Brightness and contrast of the video monitor (66 cm) were held constant for all video conditions, and the power of the audio amplifier was also held constant for all audio conditions. Following each presentation, judges had to indicate the perceived emotion on a forced-choice answer sheet by selecting one of the four emotions.

After they had finished this task, judges were given a second emotion judgment task consisting of sound sequences that had been synthetically produced with the Moog synthesizer (see Scherer, 1974). This intermediate task was supposed to minimize carryover and memory effects by separating the two sessions that used the actual stimulus tape from each other. Following this, judges were again shown the 48 test stimuli, this time in the full audiovisual condition for all groups of judges in all channel conditions. The judges in the full audiovisual condition who had seen the stimuli before were told that this was done to check for possible learning effects in their group. Data on these two judgment tasks are not reported here.

After these sessions judges were again told that the purpose of the experiment was to see how well they could recognize emotions from different types of nonverbal cues and to determine whether there were individual differences in that ability. They were told that lists with individual judges' accuracy in percentages would be displayed on the notice board in the student cafeteria (with judge codes) and that the best three judges could obtain their money prizes in due course.

Cue Measurement

Apart from the general question of whether the different emotions could be accurately recognized on the basis of the stimuli as presented in the different channels, we were interested in how the accuracy of recognition was mediated by specific aspects of the actors' behavior. In order to assess the nonverbal behavior of the actors in the different video scenes, both a subjective and an objective measurement approach were used to yield estimates of proximal cues and distal cues, respectively (cf. Scherer & Wallbott, 1985).

Objective measurement of movement behavior. Movement characteristics, gestural behavior (hand movements), head movements, head orientation, body movements, and body position of the actors during the short utterance were coded by two independent coders using the Giessen system for movement notation (cf. Scherer, Wallbott, & Scherer, 1979). This system is based on earlier work by Ekman and Friesen (1972) and Mehrabian (1972). The Giessen system is used to code movement behavior into discrete categories such as hand movement illustrators or adaptors or body orientation toward or away from the interaction partner. Furthermore, for each scene, the general amount of movement activity was judged on a scale from 0 (no movement activity) to 5 (extensive movement activity) by two independent observers.

Clearly, the results of the objective measurement procedure were limited by the short duration of the utterance. Although fundamental frequency is a continuous measure and apparently very stable even for short utterances, only very few molar discrete movements were likely to occur during this short period.

The overall agreement reached by the independent coders was computed by using the gamma coefficient (Goodman & Kruskal, 1954). This coefficient reached .77 for gestural behavior, .58 for head movements, .80 for head orientation, and .54 for body movements. The overall agreement for the judgment of general movement activity attained .93 (Pearson correlation). Thus, all coding, except the coding of body movements, and all judgments reached very high reliability. Because of the low reliability and the generally low frequency of occurrence for body movement categories, these behavioral aspects were not included in further analyses. For the other data, the coding and judgments of one of the two coders were randomly chosen to be included in further statistical analysis.

Subjective impressions of movement behavior. A group of 12 judges (psychology students, 6 male, 6 female), while watching the videotape without sound, judged the movement behavior of the actors within the 48 video scenes on the following scales: slow/fast, small/expansive, weak/energetic, small movement activity/large movement activity, and unpleasant/pleasant. For each of the scenes the judges had to indicate whether the movement behavior they saw was low, medium, or high on the respective scales. The interobserver agreement was calculated as .87 (Cronbach's alpha averaged across judgment scales).

Objective measurement of vocal behavior. The 48 sentences ("I can't believe it") were digitized and analyzed for mean fundamental frequency, standard deviation of fundamental frequency, and length of utterance by using the Giessen system for speech analysis implemented on PDP-11 computers (GISYS; Standke, 1981).

Subjective impressions of vocal behavior. The same group of 12 judges that had evaluated the movement behavior was used to evaluate vocal behavior. The judges had to listen to each scene without the video picture and judge the vocal behavior on the scales slow/fast, weak/intense, low pitched/high pitched, monotonous/melodious, and unpleasant/pleasant, again using the three possible scale values of low, medium, and high on the respective scale. Here, the average interobserver agreement reached .80.

Because the emotional nature of the stimuli was clearly recognizable, there is a possibility that the judges inferred the type of emotion first and then judged the proximal cues on the basis of stereotypical notions concerning emotion-specific voice and movement behavior. However, data that will be reported later seem to indicate that this was not the case.
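As an illustration of the intercoder agreement statistic used above, the following sketch computes the Goodman-Kruskal gamma coefficient from two coders' ordinal codings. This is a generic textbook implementation, not the authors' software, and the example codings are invented.

```python
# Goodman-Kruskal gamma: agreement between two ordinal codings.
# Gamma = (C - D) / (C + D), where C and D are the numbers of concordant and
# discordant pairs of observations; tied pairs are ignored, per the standard
# definition. Example data are invented.
def goodman_kruskal_gamma(x, y):
    concordant = discordant = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            product = (x[i] - x[j]) * (y[i] - y[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0  # every pair tied on at least one variable
    return (concordant - discordant) / (concordant + discordant)

# Two coders' ordinal codings of, say, gestural behavior for six scenes.
coder_a = [1, 2, 2, 3, 1, 3]
coder_b = [1, 2, 3, 3, 1, 2]
print(goodman_kruskal_gamma(coder_a, coder_b))  # 0.8
```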

Results

First, we will report results relevant to the question of to what extent discrete emotion portrayals differ in terms of nonverbal behavior characteristics, and second, we will report data on the differences between actors. After this we will present data on decoding accuracy in relation to nonverbal behavior cues, with consideration of the mediating factors emotion, actor, and condition (channel of presentation).


Table 1
Significant Emotion Main Effects for the Behavior Cues

Cues                                F       p      Joy    Anger   Sadness  Surprise

Objective behavior
  Number of hand movements          3.80   .019     .33     .50    1.17a     .42
  Head orientation                  7.16   .001    1.67    1.58    2.67a    1.25
  Mean fundamental frequency (Hz)   3.71   .021   168.8   182.6   136.9    177.0

Proximal behavior
  Movement behavior judged as:
    Fast                            2.81   .055    2.02    2.38a   1.67a    2.04
    Expansive                       2.53   .074    1.63    1.93    1.40     2.10
    Energetic                      10.23   .0001   2.12    2.41    1.49a    2.27
    Active                          2.68   .064    1.88    2.14    1.49     2.16
    Pleasant                       19.08   .0001   2.23    1.72a   1.75a    2.29
  Voice behavior judged as:
    Fast                            3.73   .021    2.17    2.35a   1.68a    2.03
    Intense                         6.48   .002    2.10    2.50    1.46a    2.29
    High pitched                    5.68   .003    2.19    2.10    1.67a    2.28
    Melodious                       4.18   .013    2.18    2.03    1.73a    2.24
    Pleasant                        4.14   .014    2.02    1.76a   2.3a     1.90

Note. All proximal cues were based on judgments using a scale that ranged from 1 (low) to 3 (high). Twelve scenes represented each of the four emotions studied.
a Significantly different from the other emotions according to post hoc Student Newman-Keuls test with p < .05.

Differences Between Emotion Portrayals in Behavior Cues

An analysis of variance (ANOVA) approach with additional post hoc comparisons (Student Newman-Keuls procedure with p < .05) was used to analyze the relative effect of the different factors, specifically emotion, sex of actor, situation type (scenario with or without child), and actor (six different actors), on the behavior cues that were measured or obtained in the judgment studies. We first examined whether the four emotions encoded differed with respect to vocal and nonvocal behavior of the actors. In the following sections, whenever it is stated that emotions are "characterized" by certain behavioral cues, this does not, of course, imply that the emotions generally differ with respect to these cues. Given the design of the present study, this only means that the actors differentially portrayed the respective emotion by using these cues.

Table 1 presents all significant main effects for emotion. The results indicate that the emotions in fact differed on three of the cue measurements (number of hand movements, head orientation, and mean F0). No significant differences for emotion were obtained for the number of head movements, total movement activity, standard deviation of the fundamental frequency, or utterance length.
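The following sketch illustrates this analysis pattern for a single cue: a one-way ANOVA across the four emotions followed by pairwise post hoc comparisons. The data are invented, and Tukey's HSD (readily available in statsmodels) is used here as a stand-in for the Student Newman-Keuls procedure reported in the article.

```python
# One-way ANOVA on a single cue (mean F0 per scene) across the four emotions,
# followed by pairwise post hoc comparisons. Illustrative data only; Tukey's
# HSD substitutes for the Student Newman-Keuls test used by the authors.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "emotion": ["joy"] * 3 + ["anger"] * 3 + ["sadness"] * 3 + ["surprise"] * 3,
    "mean_f0": [170, 165, 172, 180, 185, 182, 135, 140, 136, 175, 178, 177],
})

# Omnibus F test: does mean F0 differ across emotions?
groups = [g["mean_f0"].to_numpy() for _, g in df.groupby("emotion")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post hoc comparisons at alpha = .05.
print(pairwise_tukeyhsd(df["mean_f0"], df["emotion"], alpha=0.05))
```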

In terms of the objective measurements, sadness seemed to be different from the other three emotions, joy, anger, and surprise. Sadness was characterized by a relatively low fundamental frequency of the voice, greater head orientation down or away from the interaction partner, and relatively frequent hand movements (although hand movements in general were quite infrequent). Most of these hand movements were shrugs (movements indicating helplessness; cf. Ekman & Friesen, 1972) or adaptors (i.e., self-manipulations).

A similarly distinctive pattern for sadness also held for the judgment data on movement and vocal behavior. Significant differences or tendencies in this direction between emotions were obtained for all judgment scales. Again, these effects were generally due to the difference between sadness portrayals and other emotions. When encoding sadness, actors used less energetic and less active movements (for activity, post hoc comparisons failed to reach significance). Pleasantness of movements also differed between emotions. Whereas movements associated with anger and sadness (i.e., the negative emotions) were judged as more unpleasant, movements shown when encoding joy or surprise (i.e., positive emotions) were judged as significantly more pleasant. Expansive movements, finally, again tended to differentiate sadness, on the one hand, from anger and surprise, on the other hand, although the post hoc comparison was not significant. In sadness, movements were less expansive (i.e., quite small in terms of space), whereas in anger, and especially in surprise, movements were more expansive. Judgments of movement quality thus characterized sadness as being different from the other emotions, especially anger. Sadness movements were less expansive, less energetic, less active, and less pleasant than movements associated with other emotions, thus characterizing sadness as a passive, slow emotion.

This general trend was confirmed by the judgment data on voice quality. Here, too, significant differences between emotions were mostly due to sadness and anger as the extremes. In sadness, actors talked very slowly, with a voice characterized by low intensity, low pitch, and lack of melodiousness. Vocal behavior in anger was usually judged the opposite way; that is, as being very fast, very intense, and high pitched.

It is interesting to note that judgments of pleasantness of voice did not confirm the results for judgments of pleasantness of movement behavior. Although for movement behavior anger and sadness were judged as being quite unpleasant, the main effect for pleasantness of the voice indicated that only an angry voice was judged as unpleasant, whereas a sad voice, on the contrary, was judged as being quite pleasant. Thus, anger was unpleasant in terms of both movement behavior and voice behavior, whereas the impression of unpleasantness for sadness was due only to movement behavior, not to vocal behavior. It may be noted parenthetically that this result seems to indicate that the judges who produced the proximal cue ratings actually focused on their impression of the behavior style rather than on stereotypical notions concerning emotion-specific behavior patterns. In general, the judgments of voice behavior strongly confirm the impression that sadness and anger are the most different emotions in terms of nonverbal behavior.


Table 2
Significant Actor and Sex of Actor Main Effects for the Behavior Cues

Cues                                      F       p

Actor main effects
Objective behavior
  Number of head movements                3.33   .032
  Amount of total movement activity       2.89   .051
  Mean fundamental frequency             10.37   .0001
  SD fundamental frequency                5.56   .004
Proximal behavior
  Movement behavior judged as:
    Fast                                  5.16   .006
    Expansive                             7.62   .001
    Energetic                             2.79   .056
    Active                                7.29   .001
    Pleasant                             10.00   .0001
  Voice behavior judged as:
    Intense                               4.12   .015
    High pitched                          5.24   .006
    Melodious                             4.44   .011

Sex of actor main effects
Objective behavior
  Mean fundamental frequency             35.84   .00001
  SD fundamental frequency               14.75   .0005
  Utterance length                       18.35   .0002
Proximal behavior
  Movements judged as pleasant            6.51   .016
  Voice judged as high pitched           15.74   .0004
  Voice judged as pleasant                4.56   .040

Actor Differences in Behavior Cues

Our second question concerned the effect of actor on the nonverbal cues of emotion. Table 2 presents the significant main effects for actor. Again, a considerable number of significant effects was obtained. Actors differed in terms of the number of head movements shown, in the amount of movement activity, in mean fundamental frequency, and in the standard deviation of fundamental frequency. Although the main effects for fundamental frequency and standard deviation of fundamental frequency were particularly due to gender differences (women generally have a higher fundamental frequency and higher standard deviation owing to anatomic reasons; compare Table 2), the results suggest different nonverbal behavior styles of the different actors, both with respect to vocal and nonvocal behavior.

This hypothesis was confirmed by the judgment data. Actors differed in terms of judged velocity of movements and with respect to expansiveness, energy of movements, activity, and pleasantness, where the differences between actors were most marked. The same was true for judgments of voice behavior, where actor differences were significant for loudness or intensity, high-pitched voice (although this effect is again due to gender differences), and melodious voice.

Thus, actors differed considerably in their nonverbal behavior, both vocal and nonvocal. It is interesting and important to note that these actor differences usually surfaced in main effects. Interactions of actor with emotion rarely occurred (only for number of hand movements). This implies that actor differences did not depend on the type of emotion encoded but were general nonverbal styles of the actors, independent of emotion or other characteristics of the encoding task (like, for instance, situation type). Furthermore, these behavioral styles were not dependent on actors' gender. Besides the trivial main effects of sex for the voice parameters mean F0 and standard deviation of F0 (SD F0) mentioned earlier, there were only three more main effects attributable to sex, again with respect to vocal behavior: Female voices were judged as being more high pitched (obviously on the basis of the higher F0) and more melodious (most likely because of the higher SD F0) compared with male voices, and women tended to talk longer (utterance length). The different actors' portrayals seemed to be characterized by individual behavior styles, which were shown irrespective of encoding task and which did not depend on gender. Such behavior styles of different actors may have considerable influence on the way in which emotions are encoded and, thus, on the accuracy of decoding by observers. This will be discussed later.

Effect of Actor, Channel, and Emotion on Decoding Accuracy

To determine the effect of the experimental variations in this study on decoding accuracy, ANOVAs with the factors emotion (four emotions), condition (audiovisual, video only, audio only, filtered audio), actor (six different actors), and situation type (scenes involving a child vs. scenes without child) were computed. Decoding accuracy was determined as the number of judges with the correct choice of emotion for each scene in each condition.

Given the large number of factors in the ANOVA, we decided not to add sex of judge as a factor because the focus of interest in this study was directed at encoding factors. Furthermore, there was no significant main effect for sex of actors (mean accuracy for female actors = .53, for male actors = .50). These ANOVAs resulted in a very large number of significant effects (see Table 3). Decoding accuracy in this study seemed to be influenced by all factors involved and by nearly all possible interactions between factors. First, decoding accuracy differed significantly for the four emotions studied. The average accuracy was highest for anger (.74), followed by sadness (.56), whereas the accuracy for joy (.43) and especially surprise (.37) was fairly low. Given an expected chance accuracy of .25, all four emotions were recognized better than chance, but anger was recognized most accurately.
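To make the chance comparison concrete, the sketch below computes an accuracy proportion and tests it against the .25 chance level of a four-alternative forced choice with a one-sided binomial test. The counts are hypothetical illustrations, not the study's raw data; assumes scipy 1.7 or later.

```python
# Accuracy of a four-alternative forced choice versus the .25 chance level.
# Counts are invented for illustration.
from scipy.stats import binomtest

n_judgments = 120  # e.g., 10 judges x 12 scenes of one emotion (hypothetical)
n_correct = 89     # hypothetical number of correct emotion choices
accuracy = n_correct / n_judgments

# One-sided binomial test: is accuracy above the .25 chance level?
result = binomtest(n_correct, n_judgments, p=0.25, alternative="greater")
print(f"accuracy = {accuracy:.2f}, p vs. chance = {result.pvalue:.2g}")
```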

Second, with respect to the channel conditions used, there was a highly significant difference between conditions. Although decoding accuracy was quite high for the audiovisual condition (.62) and the video-only condition (.63), it was significantly lower for the two audio conditions (audio only, .47; filtered audio, .35). Thus, the conditions involving video resulted in a far better accuracy than the conditions involving audio.

Third, a highly significant main effect was found for actor. This implies that actors not only differed in the behavior shown in the respective scenes (see above), but also with respect to how accurately their portrayals could be decoded.


Table 3
Significant ANOVA Effects for Mean Accuracy

Effects                                           F       p
Emotion                                         65.48   .00001
Condition                                       52.08   .00001
Situation Type                                   4.51   .05
Actor                                           12.12   .00001
Emotion × Situation Type                        15.71   .00001
Emotion × Situation Type × Condition             3.38   .002
Actor × Condition                                1.79   .05
Actor × Emotion                                 17.05   .00001
Actor × Situation Type                           9.69   .00001
Actor × Situation Type × Emotion                 3.28   .05
Actor × Emotion × Condition                      3.77   .00001
Actor × Emotion × Condition × Situation Type     2.49   .00001

Some actors were decoded far better than others. We will return to this issue when we discuss the interaction effects found. Finally, there was also a significant, but somewhat weaker, main effect for situation type. Scenes involving a child were recognized better (.54) than scenes not involving a child (.50). Although there is no obvious explanation for this finding, it might be possible that actors were better able to encode such scenes because the involvement of a child stimulated their expressiveness and empathy, resulting in greater accuracy.

Besides these highly significant main effects, the ANOVA resulted in eight more or less complex interactions, most of them highly significant. Table 4 shows the means for the interaction Emotion × Situation Type × Condition. We will not discuss all of these interactions in detail, primarily because actor is involved in most of them. It is difficult to provide explanations for these effects, as no relevant information on the different actors is available.

These interaction effects indicate that different actors encoding different emotions and different situation types in different conditions are more or less easily decoded by observers. Some actors are better in the audio condition, others in the video condition; some are better for anger, others for sadness or joy; some are better able to encode scenes involving a child, and others are better with scenes not involving a child. Thus, the assumption relied on in some studies, namely that actors are able to represent emotional expression in such a standardized way that the use of several actors seems superfluous, is severely challenged by these findings. Actors not only differ in their nonverbal behavior styles (see above), but they also differ, to a large degree, in their ability to encode different emotions and different situations in different conditions.

Relations of Behavior Cues to Decoding Accuracy

On the basis of the findings reported earlier, one would predict that in scenes in which nonverbal cues are used incorrectly by an actor, accuracy of emotion recognition will be low, and vice versa. For example, because sadness seems to be characterized by very slow movements (see above), one can predict that in sadness scenes wherein actors use fast movements, accuracy will be considerably lower than in the scenes wherein they use slow movements. In order to test this assumption, correlations between nonvocal cue measurements and mean accuracy for the video-only condition, as well as correlations between vocal characteristics and mean accuracy for the audio-only condition, were computed. We will discuss these results separately for each of the four emotions studied (see Table 5).
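The correlations in Table 5 are of the kind sketched below: a Pearson r between one judged cue and mean accuracy over the 12 scenes of one emotion in one channel. The values are invented to mimic the predicted negative relation between judged movement speed and sadness accuracy; this is an illustration, not the study's data.

```python
# Pearson correlation between a judged proximal cue and mean decoding
# accuracy over the 12 scenes of one emotion (video-only condition).
# All values are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

judged_fast = np.array([1.2, 1.4, 1.3, 1.8, 2.0, 1.5, 1.1, 1.7, 1.6, 1.9, 1.3, 1.5])
video_accuracy = np.array([.70, .62, .66, .40, .35, .58, .74, .45, .50, .38, .64, .55])

r, p = pearsonr(judged_fast, video_accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")  # faster sadness movements, lower accuracy
```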

For surprise, mean accuracy in all conditions was fairly low. The correlations indicate that decoding accuracy for surprise, both in the audio and in the video conditions, was not much influenced by the behavioral cues of the actors. In the audio condition, only judgments of the voice as being intense were positively related to mean audio accuracy, whereas for video, movements judged as being fast were negatively related to mean video accuracy. Thus, the generally low accuracy for surprise was not much influenced by the behavioral cues, nor was this low accuracy due to incorrect use of the cues measured. This implies that other behavioral cues of the actors not measured here may account for the low accuracy of decoding surprise.

The data are more interesting for the other three emotions studied. For joy, mean accuracy for both the video and (to a lesser degree) the audio conditions was significantly related to cue measurements. For the audio condition, mean joy accuracy was positively related to voice judgments of being high pitched and melodious. As joy was, in fact, characterized by a relatively high-pitched voice (see ANOVA results above), one can infer that if this behavior cue was used correctly by actors, accuracy for recognizing joy would be high. The same is true for the video condition. Here mean video accuracy for joy was correlated significantly and positively with judgments of movements as being energetic, active, fast, and expansive. The only negative correlation occurred between number of hand movements and mean video accuracy. In fact, joy was characterized by a low amount of hand movements, which means that if actors encoding joy scenes used a lot of hand movements, accuracy of decoding joy was reduced, because the frequency of hand movements is in fact not a valid cue for joy.

For sadness, where mean accuracy was intermediate, relatively few correlations between accuracy and behavioral cues reached statistical significance. In the audio condition, only judgments of the voice as being pleasant were negatively related to mean audio accuracy.

Table 4
Mean Accuracies for the Significant Interaction Emotion × Situation Type × Condition

            Audiovisual    Video only     Audio only     Filtered audio
Emotion     WC    W/O C    WC    W/O C    WC    W/O C    WC    W/O C     Total

Joy         .48   .52      .67   .56      .30   .44      .18   .20       .43
Anger       .87   .77      .85   .83      .82   .47      .80   .45       .74
Sadness     .62   .70      .61   .65      .49   .65      .30   .43       .56
Surprise    .55   .43      .54   .35      .32   .31      .27   .18       .37
Total          .62            .63            .47            .35          .52

Note. WC = with child; W/O C = without child.


Table 5
Significant Correlations of Behavior Cues With Mean Accuracy for the Four Emotions and the Video-Only and Audio-Only Conditions Separately

Cues                                   Joy     Anger   Sadness  Surprise

Audio-only condition
Proximal behavior
  Voice behavior judged as:
    Fast                                       -.46a
    Intense                                    -.42a              .43a
    High pitched                       .42a
    Melodious                          .54
    Pleasant                                           -.62

Video-only condition
Proximal behavior
  Movement behavior judged as:
    Fast                               .44a    -.?9     .55      -.54
    Expansive                          .69     -.62
    Energetic                          .55     -.?9     .57
    Active                             .54     -.?9     .54
    Pleasant                           .51
Objective movement
  Amount of total movement activity    .42a    -.47a
  Number of hand movements            -.72

Note. N = 12 for all correlations. For correlations with subscript a, p < .10; for all other correlations, p < .05.


Table 6
Differences in Behavior Cues Between High-Accuracy (HA) and Low-Accuracy (LA) Scenes for Video-Only and Audio-Only Conditions Separately

                              Joy            Anger          Sadness        Surprise
Proximal behavior cues    LA     HA       LA     HA      LA     HA      LA     HA

Audio-only condition (a)
Voice behavior judged as:
  Fast                   (2.20   2.13)    2.73   1.97   (1.50   1.85)  (1.88   2.18)
  Intense                (1.93   2.27)    2.88   2.12   (1.15   1.77)   1.97   2.62
  High pitched           (2.00   2.38)   (2.23   1.97)  (1.53   1.80)  (2.30   2.27)
  Melodious              (1.93   2.43)   (2.03   2.03)  (1.63   1.82)  (2.08   2.40)
  Pleasant               (1.97   2.08)   (1.50   2.02)  (2.27   1.98)  (1.88   1.92)

Video-only condition (b)
Movement behavior judged as:
  Fast                   (1.77   2.27)    2.67   2.08    1.35   1.98    2.33   1.75
  Expansive               1.17   2.10     2.31   1.55   (1.18   1.62)  (2.32   1.88)
  Energetic              (1.90   2.33)    2.63   2.18    1.27   1.72    2.43   2.10
  Active                  1.48   2.27     2.47   1.82    1.18   1.80   (2.37   1.95)
  Pleasant               (2.18   2.28)   (1.57   1.87)  (1.80   1.70)  (2.32   2.27)

Note. For means in parentheses there were no significant t-test differences between low- and high-accuracy scenes. For all other means, p < .05.
(a) There were no significant differences for objective voice cues. (b) There were no significant differences for objective movement cues.
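Table 6 rests on a median-split comparison: within each emotion, scenes are divided at the median of decoding accuracy, and the low- and high-accuracy halves are compared on each cue with an independent-samples t test. The sketch below illustrates that logic with invented values; it is not the original analysis code.

```python
# Median split on decoding accuracy, then an independent-samples t test on a
# judged cue for low-accuracy (LA) versus high-accuracy (HA) scenes.
# All values are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

accuracy = np.array([.30, .35, .40, .45, .50, .60, .65, .70, .75, .80, .85, .90])
judged_fast = np.array([2.9, 2.7, 2.8, 2.6, 2.4, 2.2, 2.0, 2.1, 1.9, 2.0, 1.8, 1.9])

median = np.median(accuracy)
low = judged_fast[accuracy < median]    # LA scenes
high = judged_fast[accuracy >= median]  # HA scenes

t_stat, p_value = ttest_ind(low, high)
print(f"LA mean = {low.mean():.2f}, HA mean = {high.mean():.2f}, p = {p_value:.3f}")
```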

Judges apparently relied on such behavioral cues to infer the nature of the emotion. This interpretation was supported by the pattern of correlations shown in Table 5.

The finding that these results were more pronounced for the movement judgments than for the voice judgments might be due to the fact that in this study the video condition generally had more impact on the judgments than did the audio condition (see ANOVA results above). On the whole, decoders seem to have paid more attention to the video than to the audio information, or they may have been better able to use this information.

However, the most important finding seems to be that (especially for the video condition) overemphasis of certain behavioral characteristics of emotion (depending on the emotion) did not improve transmission. Encoding an emotion in a very stereotypical fashion (e.g., sadness with very slow and inactive-appearing movement behavior, or anger with very active, fast, and energetic movements) sharply reduced decoding accuracy. In future studies, in addition to an attempt at replicating the present findings, it would be interesting to check whether similar results are obtained when actors are asked to express a standard utterance out of context (i.e., without its being embedded in a scenario).

Discussion

The research reported here addresses some methodological questions that are pertinent to judgment studies in nonverbal communication research, in general, and to the perception and attribution of emotions, in particular. As many studies in this area rely on very few actors, if not only one, for presenting emotions, the main question of this article was whether actors would differ in their encoding ability as well as in their nonverbal behavior style and how this would affect decoding accuracy. The results obtained here indicate that actors indeed differ to a large extent in terms of encoding skill. Furthermore, the variable actor is involved in many interactions with other experimental factors. The highly significant interaction Actor × Emotion, for instance, indicates that actors differ in general encoding ability, and that different actors are differentially able to encode specific emotions adequately. An actor obtaining high accuracy for joy, for instance, might succeed to a lesser degree when encoding anger, and vice versa.

This implies that relying on only one actor in decoding studies can result in severe artifacts in terms of recognition of emotion. Results obtained in one-actor studies that show that one emotion is recognized better than other emotions might be due exclusively to the skills of that actor and might change considerably if another actor is used. It is important to note that the actors' differences in terms of encoding ability reported in this article did not result from the actors' gender, but from other (as yet unknown) characteristics of the actors. We have shown that actors differ considerably not only in terms of encoding ability but also in terms of the nonverbal behavior they show when they encode the different emotions. The important point is that these actor differences are mostly independent of the different emotions encoded. Thus, different actors show different, yet consistent, nonverbal behavior styles independent of the emotion encoded. These idiosyncratic behavior styles may in turn determine the different encoding abilities for specific emotions (or differentially affect the decoding accuracy of judges). This is reminiscent of the demeanor effect that Zuckerman, DeFrank, Hall, Larrance, and Rosenthal (1979) demonstrated for encoders in a lie detection study.


The finding that the different conditions of presenting the stimuli (channels) led to differences in decoding accuracy, with the video condition being superior to the audio condition, replicates findings reported in the literature (cf. Burns & Beier, 1973; Mehrabian, 1972). Although this is a replication of a common result in decoding studies, in this study condition was involved, beyond its main effect, in important and highly significant interactions with the other factors. The general superiority of the video condition over the audio condition was mediated by the factors actor, situation type, and emotion. Thus, as discussed above with respect to actor differences, the effects of different stimulus conditions cannot be evaluated without considering all the other factors involved (cf. also Ekman, Friesen, O'Sullivan, & Scherer, 1980).

The differences between emotions in decoding accuracy also, to a large degree, replicate earlier findings (Ekman, 1982; Scherer, 1979). Anger was recognized best, followed by sadness, whereas joy and especially surprise were decoded with the least accuracy. In addition to this main effect, again, interactions with other factors have to be considered. For instance, although some emotions were decoded with higher accuracy when the scripts involved a child, other emotions were decoded better when no child was involved. Furthermore, these differences seemed to depend on the stimulus presentation condition as well as on the specific actor.

Besides the differences between emotions in decoding accuracy, this study demonstrated that the encoded emotions also differ in terms of many behavior cues used by the actors in portraying the respective emotion. Here, anger and sadness especially differed to a large extent. Whereas anger was depicted by the actors as an energetic, active, and intense emotion, sadness was characterized as a slow, nonenergetic, and passive emotion. These results confirm findings in other studies, but they are also reminiscent of common stereotypes about the respective emotions.

These differences between emotions, with respect both to behavior cues used by the actors and to the differential decoding accuracy, as well as the differences between actors generally, led to the question of whether decoding accuracy was influenced by the behavior used by the actors. Median-split comparisons between scenes with high decoding accuracy and scenes with low decoding accuracy suggest that the nonverbal behaviors used by the actors and their general behavioral style influence decoding accuracy to a large degree. This is particularly true for qualitative aspects of movement as captured in subjective ratings. Overemphasis of salient qualities for certain emotions reduced decoding accuracy sharply. Again, these effects were most pronounced for the emotions sadness and anger.

It is difficult to assess the mechanism that underlies these results. If an actor merely overemphasizes the appropriate cues, this by itself should not lead judges to choose one of the other emotions. One possibility is that overemphasis coincides with a disturbance in the proper combination or synchronization of those cues.

These findings suggest that portrayals of the different emotions in a highly stereotypical fashion do not help observers to recognize emotions; they may actually decrease decoding accuracy. This suggests that in decoding studies using standard emotion stimuli, the latter have to be very carefully constructed and selected in order to avoid artifacts resulting from overemphasis. Furthermore, in encoding studies one has to beware of actors who try to encode emotions in a very exaggerated way.

With respect to the use of judgment studies in nonverbal behavior research, the results obtained here indicate that the use of actors to encode emotions is of some value, as the differences in nonverbal behavior between emotions suggest. However, it may be misleading, or even dangerous in terms of artifacts, to use only one actor for such studies. Actors, like every other human being, seem to have their own idiosyncratic behavior styles, which can strongly affect decoding accuracy, often in interaction with the type of emotion and the channel of presentation. As soon as a larger number of studies using different numbers of emotion-expressing encoders becomes available in the literature, it would be useful to conduct a meta-analysis to discover which types of research designs and hypotheses are most affected by the number of encoders. Furthermore, it would be interesting to look at possible interactions between encoder style and decoder characteristics.

References

Burns, K. L., & Beier, E. G. (1973). Significance of vocal and visual channels in the decoding of emotional meaning. Journal of Communication, 23, 118-130.
Ekman, P. (Ed.). (1982). Emotion in the human face. Cambridge, England: Cambridge University Press.
Ekman, P., & Friesen, W. V. (1972). Hand movements. Journal of Communication, 22, 353-374.
Ekman, P., Friesen, W. V., O'Sullivan, M., & Scherer, K. R. (1980). Relative importance of face, body, and speech in judgments of personality and affect. Journal of Personality and Social Psychology, 38, 270-277.
Friedman, H. S. (1979). The interactive effects of facial expression of emotion and verbal messages on perceptions of affective meaning. Journal of Experimental Social Psychology, 15, 453-469.
Goodman, L. A., & Kruskal, W. H. (1954). Measures of association for cross-classification. Journal of the American Statistical Association, 49, 732-764.
Mehrabian, A. (1972). Nonverbal communication. Chicago: Aldine-Atherton.
Rosenthal, R., Hall, J. A., DiMatteo, M. R., Rogers, P. L., & Archer, D. (1979). Sensitivity to nonverbal communication. Baltimore, MD: Johns Hopkins University Press.
Scherer, K. R. (1974). Acoustic concomitants of emotional dimensions: Judging affect from synthesized tone sequences. In S. Weitz (Ed.), Nonverbal communication (pp. 105-111). New York: Oxford University Press.
Scherer, K. R. (1979). Nonlinguistic vocal indicators of emotion and psychopathology. In C. E. Izard (Ed.), Emotions in personality and psychopathology (pp. 493-529). New York: Plenum Press.
Scherer, K. R. (1981). Speech and emotional states. In J. K. Darby (Ed.), The evaluation of speech in psychiatry (pp. 189-220). New York: Grune & Stratton.
Scherer, K. R. (1982). Methods of research on vocal communication: Paradigms and parameters. In K. R. Scherer & P. Ekman (Eds.), Handbook of methods in nonverbal behavior research (pp. 136-198). Cambridge, England: Cambridge University Press.
Scherer, K. R., & Wallbott, H. G. (1985). Analysis of nonverbal behavior. In T. A. van Dijk (Ed.), Handbook of discourse analysis (pp. 199-230). London: Academic Press.


Scherer, K. R., Wallbott, H. G., & Scherer, U. (1979). Methoden zur Klassifikation von Bewegungsverhalten: Ein funktionaler Ansatz [Methods for the classification of movement behavior: A functional approach]. Zeitschrift für Semiotik, 1, 177-192.
Standke, R. (1981). GISYS: Ein Software-Editor zur fileorientierten digitalen Sprachverarbeitung im Zeitbereich [GISYS: A software editor for file-based digital speech processing in the time domain]. In W. Michaelis (Ed.), Bericht über den 32. Kongress der Deutschen Gesellschaft für Psychologie in Zürich 1980 (pp. 197-200). Göttingen, West Germany: Hogrefe.
Williams, C. E., & Stevens, K. N. (1972). Emotions and speech: Some acoustical correlates. Journal of the Acoustical Society of America, 52, 1238-1250.
Zuckerman, M., DeFrank, R. S., Hall, J. A., Larrance, D. T., & Rosenthal, R. (1979). Facial and vocal cues of deception and honesty. Journal of Experimental Social Psychology, 15, 378-396.
Zuckerman, M., Hall, J. A., DeFrank, R. S., & Rosenthal, R. (1976). Encoding and decoding of spontaneous and posed facial expressions. Journal of Personality and Social Psychology, 34, 966-977.
Zuckerman, M., Lipets, M. S., Hall, J. A., & Rosenthal, R. (1975). Encoding and decoding nonverbal cues of emotion. Journal of Personality and Social Psychology, 32, 1068-1076.

Received July 3, 1984
Revision received April 1, 1985