
The FaceReader: Affect Recognition from Facial Representations





In daily life, emotions prepare us to deal with the most vital events around us, allowing us to make quick decisions without having to think in detail about what to do or when to do it (Ekman, 2007). If we were to turn off our emotions completely for a period of time, our interactions would be worse than if we were emotional all the time; the people around us would think we were detached, or worse, inhuman. In general, emotions play a critical role in attention, planning, reasoning, learning, memory, and decision making, and they strongly influence our capacity for perception, cognition, coping, and creativity (Johnson, Rickel & Lester, 2000; Picard, 1997).

EMOTIONS IN HUMANS

FACIAL EXPRESSIONS OF EMOTION

The FaceReader is a software system that measures facial expressions from video of the user. It is fully automatic and distinguishes six basic emotions (happy, angry, sad, surprised, scared, and disgusted) plus neutral (Terzis, Moridis & Economides, 2010). According to Den Uyl and van Kuilenburg (2005), the FaceReader has an accuracy of 89%. The system is based on Ekman and Friesen's Facial Action Coding System (FACS), which holds that the basic emotions correspond to specific configurations of facial muscle movements (Zaman & Shrimpton-Smith, 2006).
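As a sketch of what this classification step amounts to, the snippet below (illustrative only, not Noldus code) picks the dominant expression from hypothetical per-frame intensity scores over the six basic emotions plus neutral:

```python
# Illustrative sketch, not Noldus FaceReader code: select the dominant
# expression from hypothetical per-frame intensity scores (0.0-1.0)
# for the six basic emotions plus neutral that FaceReader reports.

EMOTIONS = ["neutral", "happy", "angry", "sad", "surprised", "scared", "disgusted"]

def dominant_emotion(scores):
    """Return the label with the highest intensity; missing labels score 0."""
    return max(EMOTIONS, key=lambda label: scores.get(label, 0.0))

frame_scores = {"happy": 0.82, "surprised": 0.07, "neutral": 0.05}
print(dominant_emotion(frame_scores))  # happy
```

Note that when all scores tie (e.g., an empty frame), `max` falls back to the first label in the list, here "neutral".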

THE FACEREADER

The system consists of the Noldus FaceReader software and a webcam in a well-illuminated room. The camera collects a video feed while participants interact with the simulation environment, and the software extracts facial expression information from the participant's face. The output of the system is a text file with time-stamped facial expression information.
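That time-stamped text output could then be post-processed with a short script. The tab-separated layout assumed below is purely illustrative; the actual FaceReader export format may differ:

```python
# Hypothetical parser for a FaceReader-style export. The poster only states
# that the output is a text file with time-stamped facial expression
# information; the tab-separated columns assumed here are illustrative.
import csv
from io import StringIO

def parse_log(text):
    """Return (timestamp, emotion, intensity) tuples from a tab-separated log."""
    rows = csv.reader(StringIO(text), delimiter="\t")
    return [(ts, emotion, float(score)) for ts, emotion, score in rows]

sample = "00:00:01.0\thappy\t0.82\n00:00:02.0\tneutral\t0.55\n"
for record in parse_log(sample):
    print(record)  # e.g. ('00:00:01.0', 'happy', 0.82)
```

From such tuples one could compute, for instance, the proportion of session time each participant spent in each expression.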

PROCESS

REFERENCES

Den Uyl, M. J., & van Kuilenburg, H. (2005). The FaceReader: Online facial expression recognition. In Proceedings of MB 2005, 589-590.

Ekman, P., & Friesen, W.V. (2003). Unmasking the face: A guide to recognizing emotions from facial expressions. Cambridge, MA: Malor Books.

Ekman, P. (2007). Emotions revealed. New York, NY: Holt Paperbacks.

Johnson, W.L., Rickel, J.W., & Lester, J.C. (2000). Animated pedagogical agents: Face-to-face interaction in interactive learning environments. International Journal of Artificial Intelligence in Education, 11, 47-78.

Picard, R.W. (1997). Affective Computing. Cambridge, MA: MIT Press.

Terzis, V., Moridis, C.N., & Economides, A.A. (2010). Measuring instant emotions during a self-assessment test: The use of FaceReader. In Proceedings of Measuring Behavior 2010, 7th International Conference on Methods and Techniques in Behavioral Research. Eindhoven, The Netherlands.

Zaman, B., & Shrimpton-Smith, T. (2006). The FaceReader: Measuring instant fun of use. In Proceedings of NordiCHI 2006, 457-460.

CONTACT

Enilda J. Romero

Ph.D. Candidate

Instructional Design & Technology Program

Darden College of Education

Old Dominion University

[email protected]

Studies of the expression of emotions in humans have established that the visual channel gathers information from four specific sources: the face, tilts of the head, overall body posture, and the skeletal muscle movements of the arms, hands, legs, and feet (Ekman & Friesen, 2003). However, the face has been shown to provide emotionally charged information with far more precision than the rest of the body (Ekman & Friesen, 2003). The face is a multi-signal, multi-message system that can convey different types of messages. These messages are shared through facial signals, which in turn send emblematic messages that describe the meaning of the signal; an emblematic message is the non-verbal equivalent of a common word or phrase.

The FaceReader system has been used as a reliable measuring tool in human-computer interaction studies, but few approaches have applied affect recognition to learning (Terzis et al., 2010). The human ability to read emotions from facial expressions may form the basis of facial affect processing that can expand interfaces with emotional communication and, in turn, yield more flexible, adaptable, and natural interaction between the learner and the elements of the interface (Den Uyl & van Kuilenburg, 2005).

Process flow (from the poster diagram): Participant → video feed → FaceReader → data → text file → output data.
