
Workshop on Music Cognition, Emotion and Audio Technology

Date and Time: 7th Nov., 13:00-18:00
Venue: Advanced Research Laboratory 410, Komaba, The University of Tokyo
Organizers: Tomoya Nakai, Kazuo Okanoya
Contact: [email protected]
Supported by ERC CREAM (Cracking the emotional code of music) StG 335536, the Center for Evolutionary Cognitive Science, and the Okanoya Laboratory, The University of Tokyo

Program

13:00  Introduction
       Tomoya Nakai, The University of Tokyo, Graduate School of Arts and Sciences
       Jean-Julien Aucouturier, Institut de Recherche et Coordination Acoustique/Musique (IRCAM)

Session 1: Music & Non-verbal communication (talks at 13:10, 13:40, 14:10, and 14:40)

Laughing rats, crying rats
Yumi Saito, The University of Tokyo, Graduate School of Arts and Sciences

Chills and tears as two types of psychophysiological responses to music
Kazuma Mori, National Institute of Information and Communications Technology, Center for Information and Neural Networks (CiNet)

Psychological stress disorganizes dexterous movements in musical performance
Shinichi Furuya, Sophia University, Faculty of Science and Technology

Is musical consonance a signal of social affiliation?
Jean-Julien Aucouturier, Institut de Recherche et Coordination Acoustique/Musique (IRCAM)

15:10  Break

Session 2: Speech (talks at 15:40, 16:10, 16:40, and 17:10)

Cross-language interaction in emotion recognition
Tomoya Nakai, The University of Tokyo, Graduate School of Arts and Sciences

Emotional mimicry induced by manipulated speech
Pablo Arias, Institut de Recherche et Coordination Acoustique/Musique (IRCAM)

Uncovering mental representations of social traits in the voice
Emmanuel Ponsot, Institut de Recherche et Coordination Acoustique/Musique (IRCAM)

Effects of speaker identity on emotion-related auditory change detection
Laura Rachman, Institut de Recherche et Coordination Acoustique/Musique (IRCAM)

17:40  Conclusion
       Kazuo Okanoya, The University of Tokyo, Graduate School of Arts and Sciences
