Lecture Notes in Computer Science 5611
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
Julie A. Jacko (Ed.)
Human-Computer Interaction
Novel Interaction Methods and Techniques
13th International Conference, HCI International 2009
San Diego, CA, USA, July 19-24, 2009
Proceedings, Part II
Volume Editor
Julie A. Jacko
University of Minnesota, Institute of Health Informatics
MMC 912, 420 Delaware Street S.E., Minneapolis, MN 55455, USA
E-mail: [email protected]
Library of Congress Control Number: 2009929048
CR Subject Classification (1998): H.5, I.3, I.7.5, I.5, I.2.10
LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
ISSN 0302-9743
ISBN-10 3-642-02576-5 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-02576-1 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.
springer.com
© Springer-Verlag Berlin Heidelberg 2009
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper SPIN: 12707218 06/3180 5 4 3 2 1 0
Foreword
The 13th International Conference on Human–Computer Interaction, HCI International 2009, was held in San Diego, California, USA, July 19–24, 2009, jointly with the Symposium on Human Interface (Japan) 2009, the 8th International Conference on Engineering Psychology and Cognitive Ergonomics, the 5th International Conference on Universal Access in Human–Computer Interaction, the Third International Conference on Virtual and Mixed Reality, the Third International Conference on Internationalization, Design and Global Development, the Third International Conference on Online Communities and Social Computing, the 5th International Conference on Augmented Cognition, the Second International Conference on Digital Human Modeling, and the First International Conference on Human Centered Design.
A total of 4,348 individuals from academia, research institutes, industry and governmental agencies from 73 countries submitted contributions, and 1,397 papers that were judged to be of high scientific quality were included in the program. These papers address the latest research and development efforts and highlight the human aspects of design and use of computing systems. The papers accepted for presentation thoroughly cover the entire field of human–computer interaction, addressing major advances in the knowledge and effective use of computers in a variety of application areas.
This volume, edited by Julie A. Jacko, contains papers in the thematic area of Human–Computer Interaction, addressing the following major topics:
• Multimodal User Interfaces
• Gesture, Eye Movements and Expression Recognition
• Human–Robot Interaction
• Touch and Pen-Based Interaction
• Brain Interfaces
• Language, Voice, Sound and Communication
• Visualization, Images and Pictures
The remaining volumes of the HCI International 2009 proceedings are:
• Volume 1, LNCS 5610, Human–Computer Interaction––New Trends (Part I), edited by Julie A. Jacko
• Volume 3, LNCS 5612, Human–Computer Interaction––Ambient, Ubiquitous and Intelligent Interaction (Part III), edited by Julie A. Jacko
• Volume 4, LNCS 5613, Human–Computer Interaction––Interacting in Various Application Domains (Part IV), edited by Julie A. Jacko
• Volume 5, LNCS 5614, Universal Access in Human–Computer Interaction––Addressing Diversity (Part I), edited by Constantine Stephanidis
• Volume 6, LNCS 5615, Universal Access in Human–Computer Interaction––Intelligent and Ubiquitous Interaction Environments (Part II), edited by Constantine Stephanidis
• Volume 7, LNCS 5616, Universal Access in Human–Computer Interaction––Applications and Services (Part III), edited by Constantine Stephanidis
• Volume 8, LNCS 5617, Human Interface and the Management of Information––Designing Information Environments (Part I), edited by Michael J. Smith and Gavriel Salvendy
• Volume 9, LNCS 5618, Human Interface and the Management of Information––Information and Interaction (Part II), edited by Gavriel Salvendy and Michael J. Smith
• Volume 10, LNCS 5619, Human Centered Design, edited by Masaaki Kurosu
• Volume 11, LNCS 5620, Digital Human Modeling, edited by Vincent G. Duffy
• Volume 12, LNCS 5621, Online Communities and Social Computing, edited by A. Ant Ozok and Panayiotis Zaphiris
• Volume 13, LNCS 5622, Virtual and Mixed Reality, edited by Randall Shumaker
• Volume 14, LNCS 5623, Internationalization, Design and Global Development, edited by Nuray Aykin
• Volume 15, LNCS 5624, Ergonomics and Health Aspects of Work with Computers, edited by Ben-Tzion Karsh
• Volume 16, LNAI 5638, The Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience, edited by Dylan Schmorrow, Ivy Estabrooke and Marc Grootjen
• Volume 17, LNAI 5639, Engineering Psychology and Cognitive Ergonomics, edited by Don Harris
I would like to thank the Program Chairs and the members of the Program Boards of all thematic areas, listed below, for their contribution to the highest scientific quality and the overall success of HCI International 2009.
Ergonomics and Health Aspects of Work with Computers
Program Chair: Ben-Tzion Karsh
Arne Aarås, Norway; Pascale Carayon, USA; Barbara G.F. Cohen, USA; Wolfgang Friesdorf, Germany; John Gosbee, USA; Martin Helander, Singapore; Ed Israelski, USA; Waldemar Karwowski, USA; Peter Kern, Germany; Danuta Koradecka, Poland; Kari Lindström, Finland; Holger Luczak, Germany; Aura C. Matias, Philippines; Kyung (Ken) Park, Korea; Michelle M. Robertson, USA; Michelle L. Rogers, USA; Steven L. Sauter, USA; Dominique L. Scapin, France; Naomi Swanson, USA; Peter Vink, The Netherlands; John Wilson, UK; Teresa Zayas-Cabán, USA
Human Interface and the Management of Information
Program Chair: Michael J. Smith
Gunilla Bradley, Sweden; Hans-Jörg Bullinger, Germany; Alan Chan, Hong Kong; Klaus-Peter Fähnrich, Germany; Michitaka Hirose, Japan; Jhilmil Jain, USA; Yasufumi Kume, Japan; Mark Lehto, USA; Fiona Fui-Hoon Nah, USA; Shogo Nishida, Japan; Robert Proctor, USA; Youngho Rhee, Korea; Anxo Cereijo Roibás, UK; Katsunori Shimohara, Japan; Dieter Spath, Germany; Tsutomu Tabe, Japan; Alvaro D. Taveira, USA; Kim-Phuong L. Vu, USA; Tomio Watanabe, Japan; Sakae Yamamoto, Japan; Hidekazu Yoshikawa, Japan; Li Zheng, P.R. China; Bernhard Zimolong, Germany
Human–Computer Interaction
Program Chair: Julie A. Jacko
Sebastiano Bagnara, Italy; Sherry Y. Chen, UK; Marvin J. Dainoff, USA; Jianming Dong, USA; John Eklund, Australia; Xiaowen Fang, USA; Ayse Gurses, USA; Vicki L. Hanson, UK; Sheue-Ling Hwang, Taiwan; Wonil Hwang, Korea; Yong Gu Ji, Korea; Steven Landry, USA; Gitte Lindgaard, Canada; Chen Ling, USA; Yan Liu, USA; Chang S. Nam, USA; Celestine A. Ntuen, USA; Philippe Palanque, France; P.L. Patrick Rau, P.R. China; Ling Rothrock, USA; Guangfeng Song, USA; Steffen Staab, Germany; Wan Chul Yoon, Korea; Wenli Zhu, P.R. China
Engineering Psychology and Cognitive Ergonomics
Program Chair: Don Harris
Guy A. Boy, USA; John Huddlestone, UK; Kenji Itoh, Japan; Hung-Sying Jing, Taiwan; Ron Laughery, USA; Wen-Chin Li, Taiwan; James T. Luxhøj, USA; Nicolas Marmaras, Greece; Sundaram Narayanan, USA; Mark A. Neerincx, The Netherlands; Jan M. Noyes, UK; Kjell Ohlsson, Sweden; Axel Schulte, Germany; Sarah C. Sharples, UK; Neville A. Stanton, UK; Xianghong Sun, P.R. China; Andrew Thatcher, South Africa; Matthew J.W. Thomas, Australia; Mark Young, UK
Universal Access in Human–Computer Interaction
Program Chair: Constantine Stephanidis
Julio Abascal, Spain; Ray Adams, UK; Elisabeth André, Germany; Margherita Antona, Greece; Chieko Asakawa, Japan; Christian Bühler, Germany; Noelle Carbonell, France; Jerzy Charytonowicz, Poland; Pier Luigi Emiliani, Italy; Michael Fairhurst, UK; Dimitris Grammenos, Greece; Andreas Holzinger, Austria; Arthur I. Karshmer, USA; Simeon Keates, Denmark; Georgios Kouroupetroglou, Greece; Sri Kurniawan, USA; Patrick M. Langdon, UK; Seongil Lee, Korea; Zhengjie Liu, P.R. China; Klaus Miesenberger, Austria; Helen Petrie, UK; Michael Pieper, Germany; Anthony Savidis, Greece; Andrew Sears, USA; Christian Stary, Austria; Hirotada Ueda, Japan; Jean Vanderdonckt, Belgium; Gregg C. Vanderheiden, USA; Gerhard Weber, Germany; Harald Weber, Germany; Toshiki Yamaoka, Japan; Panayiotis Zaphiris, UK
Virtual and Mixed Reality
Program Chair: Randall Shumaker
Pat Banerjee, USA; Mark Billinghurst, New Zealand; Charles E. Hughes, USA; David Kaber, USA; Hirokazu Kato, Japan; Robert S. Kennedy, USA; Young J. Kim, Korea; Ben Lawson, USA; Gordon M. Mair, UK; Miguel A. Otaduy, Switzerland; David Pratt, UK; Albert "Skip" Rizzo, USA; Lawrence Rosenblum, USA; Dieter Schmalstieg, Austria; Dylan Schmorrow, USA; Mark Wiederhold, USA
Internationalization, Design and Global Development
Program Chair: Nuray Aykin
Michael L. Best, USA; Ram Bishu, USA; Alan Chan, Hong Kong; Andy M. Dearden, UK; Susan M. Dray, USA; Vanessa Evers, The Netherlands; Paul Fu, USA; Emilie Gould, USA; Sung H. Han, Korea; Veikko Ikonen, Finland; Esin Kiris, USA; Masaaki Kurosu, Japan; Apala Lahiri Chavan, USA; James R. Lewis, USA; Ann Light, UK; James J.W. Lin, USA; Rungtai Lin, Taiwan; Zhengjie Liu, P.R. China; Aaron Marcus, USA; Allen E. Milewski, USA; Elizabeth D. Mynatt, USA; Oguzhan Ozcan, Turkey; Girish Prabhu, India; Kerstin Röse, Germany; Eunice Ratna Sari, Indonesia; Supriya Singh, Australia; Christian Sturm, Spain; Adi Tedjasaputra, Singapore; Kentaro Toyama, India; Alvin W. Yeo, Malaysia; Chen Zhao, P.R. China; Wei Zhou, P.R. China
Online Communities and Social Computing
Program Chairs: A. Ant Ozok, Panayiotis Zaphiris
Chadia N. Abras, USA; Chee Siang Ang, UK; Amy Bruckman, USA; Peter Day, UK; Fiorella De Cindio, Italy; Michael Gurstein, Canada; Tom Horan, USA; Anita Komlodi, USA; Piet A.M. Kommers, The Netherlands; Jonathan Lazar, USA; Stefanie Lindstaedt, Austria; Gabriele Meiselwitz, USA; Hideyuki Nakanishi, Japan; Anthony F. Norcio, USA; Jennifer Preece, USA; Elaine M. Raybourn, USA; Douglas Schuler, USA; Gilson Schwartz, Brazil; Sergei Stafeev, Russia; Charalambos Vrasidas, Cyprus; Cheng-Yen Wang, Taiwan
Augmented Cognition
Program Chair: Dylan D. Schmorrow
Andy Bellenkes, USA; Andrew Belyavin, UK; Joseph Cohn, USA; Martha E. Crosby, USA; Tjerk de Greef, The Netherlands; Blair Dickson, UK; Traci Downs, USA; Julie Drexler, USA; Ivy Estabrooke, USA; Cali Fidopiastis, USA; Chris Forsythe, USA; Wai Tat Fu, USA; Henry Girolamo, USA; Marc Grootjen, The Netherlands; Taro Kanno, Japan; Wilhelm E. Kincses, Germany; David Kobus, USA; Santosh Mathan, USA; Rob Matthews, Australia; Dennis McBride, USA; Robert McCann, USA; Jeff Morrison, USA; Eric Muth, USA; Mark A. Neerincx, The Netherlands; Denise Nicholson, USA; Glenn Osga, USA; Dennis Proffitt, USA; Leah Reeves, USA; Mike Russo, USA; Kay Stanney, USA; Roy Stripling, USA; Mike Swetnam, USA; Rob Taylor, UK; Maria L. Thomas, USA; Peter-Paul van Maanen, The Netherlands; Karl van Orden, USA; Roman Vilimek, Germany; Glenn Wilson, USA; Thorsten Zander, Germany
Digital Human Modeling
Program Chair: Vincent G. Duffy
Karim Abdel-Malek, USA; Thomas J. Armstrong, USA; Norm Badler, USA; Kathryn Cormican, Ireland; Afzal Godil, USA; Ravindra Goonetilleke, Hong Kong; Anand Gramopadhye, USA; Sung H. Han, Korea; Lars Hanson, Sweden; Pheng Ann Heng, Hong Kong; Tianzi Jiang, P.R. China; Kang Li, USA; Zhizhong Li, P.R. China; Timo J. Määttä, Finland; Woojin Park, USA; Matthew Parkinson, USA; Jim Potvin, Canada; Rajesh Subramanian, USA; Xuguang Wang, France; John F. Wiechel, USA; Jingzhou (James) Yang, USA; Xiu-gan Yuan, P.R. China
Human Centered Design
Program Chair: Masaaki Kurosu
Gerhard Fischer, USA; Tom Gross, Germany; Naotake Hirasawa, Japan; Yasuhiro Horibe, Japan; Minna Isomursu, Finland; Mitsuhiko Karashima, Japan; Tadashi Kobayashi, Japan; Kun-Pyo Lee, Korea; Loïc Martínez-Normand, Spain; Dominique L. Scapin, France; Haruhiko Urokohara, Japan; Gerrit C. van der Veer, The Netherlands; Kazuhiko Yamazaki, Japan
In addition to the members of the Program Boards above, I also wish to thank the following volunteer external reviewers: Gavin Lew from the USA, Daniel Su from the UK, and Ilia Adami, Ioannis Basdekis, Yannis Georgalis, Panagiotis Karampelas, Iosif Klironomos, Alexandros Mourouzis, and Stavroula Ntoa from Greece.
This conference could not have been possible without the continuous support and advice of the Conference Scientific Advisor, Prof. Gavriel Salvendy, as well as the dedicated work and outstanding efforts of the Communications Chair and Editor of HCI International News, Abbas Moallem.
I would also like to thank the members of the Human–Computer Interaction Laboratory of ICS-FORTH, and in particular Margherita Antona, George Paparoulis, Maria Pitsoulaki, Stavroula Ntoa, and Maria Bouhli, for their contribution toward the organization of the HCI International 2009 conference.
Constantine Stephanidis
HCI International 2011
The 14th International Conference on Human–Computer Interaction, HCI International 2011, will be held jointly with the affiliated conferences in the summer of 2011. It will cover a broad spectrum of themes related to human–computer interaction, including theoretical issues, methods, tools, processes and case studies in HCI design, as well as novel interaction techniques, interfaces and applications. The proceedings will be published by Springer. More information about the topics, as well as the venue and dates of the conference, will be announced through the HCI International Conference series website: http://www.hci-international.org/
General Chair
Professor Constantine Stephanidis
University of Crete and ICS-FORTH Heraklion, Crete, Greece
Email: [email protected]
Table of Contents
Part I: Multimodal User Interfaces
Using Acoustic Landscapes for the Evaluation of Multimodal Mobile Applications . . . 3
Wolfgang Beinhauer and Cornelia Hipp

Modeling and Using Salience in Multimodal Interaction Systems . . . 12
Ali Choumane and Jacques Siroux

Exploring Multimodal Interaction in Collaborative Settings . . . 19
Luís Duarte, Marco de Sá, and Luís Carriço

Towards a Multidimensional Approach for the Evaluation of Multimodal Application User Interfaces . . . 29
José E.R. de Queiroz, Joseana M. Fechine, Ana E.V. Barbosa, and Danilo de S. Ferreira

Multimodal Shopping Lists . . . 39
Jhilmil Jain, Riddhiman Ghosh, and Mohamed Dekhil

Value of Using Multimodal Data in HCI Methodologies . . . 48
Jhilmil Jain

Effective Combination of Haptic, Auditory and Visual Information Feedback in Operation Feeling . . . 58
Keiko Kasamatsu, Tadahiro Minami, Kazuki Izumi, and Hideo Jinguh

Multi-modal Interface in Multi-Display Environment for Multi-users . . . 66
Yoshifumi Kitamura, Satoshi Sakurai, Tokuo Yamaguchi, Ryo Fukazawa, Yuichi Itoh, and Fumio Kishino

Reliable Evaluation of Multimodal Dialogue Systems . . . 75
Florian Metze, Ina Wechsung, Stefan Schaffer, Julia Seebode, and Sebastian Möller

Evaluation Proposal of a Framework for the Integration of Multimodal Interaction in 3D Worlds . . . 84
Héctor Olmedo-Rodríguez, David Escudero-Mancebo, and Valentín Cardeñoso-Payo

Building a Practical Multimodal System with a Multimodal Fusion Module . . . 93
Yong Sun, Yu (David) Shi, Fang Chen, and Vera Chung

Modeling Multimodal Interaction for Performance Evaluation . . . 103
Emile Verdurand, Gilles Coppin, Franck Poirier, and Olivier Grisvard

Usability Evaluation of Multimodal Interfaces: Is the Whole the Sum of Its Parts? . . . 113
Ina Wechsung, Klaus-Peter Engelbrecht, Stefan Schaffer, Julia Seebode, Florian Metze, and Sebastian Möller
Part II: Gesture, Eye Movements and Expression Recognition

An Open Source Framework for Real-Time, Incremental, Static and Dynamic Hand Gesture Learning and Recognition . . . 123
Todd C. Alexander, Hassan S. Ahmed, and Georgios C. Anagnostopoulos

Gesture-Controlled User Input to Complete Questionnaires on Wrist-Worn Watches . . . 131
Oliver Amft, Roman Amstutz, Asim Smailagic, Dan Siewiorek, and Gerhard Tröster

UbiGesture: Customizing and Profiling Hand Gestures in Ubiquitous Environment . . . 141
Ayman Atia, Shin Takahashi, Kazuo Misue, and Jiro Tanaka

The Gestural Input System for Living Room Digital Devices . . . 151
Wen-Shan Chang and Fong-Gong Wu

Touchless Interaction: Novel Chances and Challenges . . . 161
René de la Barré, Paul Chojecki, Ulrich Leiner, Lothar Mühlbach, and Detlef Ruschin

Did I Get It Right: Head Gestures Analysis for Human-Machine Interactions . . . 170
Jürgen Gast, Alexander Bannat, Tobias Rehrl, Gerhard Rigoll, Frank Wallhoff, Christoph Mayer, and Bernd Radig

Interactive Demonstration of Pointing Gestures for Virtual Trainers . . . 178
Yazhou Huang and Marcelo Kallmann

Anthropometric Facial Emotion Recognition . . . 188
Julia Jarkiewicz, Rafał Kocielnik, and Krzysztof Marasek

Real-Time Face Tracking and Recognition Based on Particle Filtering and AdaBoosting Techniques . . . 198
Chin-Shyurng Fahn, Ming-Jui Kuo, and Kai-Yi Wang

A Real-Time Hand Interaction System for Image Sensor Based Interface . . . 208
SeIn Lee, Jonghoon Seo, Soon-bum Lim, Yoon-Chul Choy, and TackDon Han

Gesture-Based Interface for Connection and Control of Multi-device in a Tabletop Display Environment . . . 216
Hyunglae Lee, Heeseok Jeong, Joongho Lee, Ki-Won Yeom, and Ji-Hyung Park

Shadow Awareness: Bodily Expression Supporting System with Use of Artificial Shadow . . . 226
Yoshiyuki Miwa, Shiroh Itai, Takabumi Watanabe, Koji Iida, and Hiroko Nishi

An Approach to Glove-Based Gesture Recognition . . . 236
Farid Parvini, Dennis McLeod, Cyrus Shahabi, Bahareh Navai, Baharak Zali, and Shahram Ghandeharizadeh

cfHMI: A Novel Contact-Free Human-Machine Interface . . . 246
Tobias Rehrl, Alexander Bannat, Jürgen Gast, Gerhard Rigoll, and Frank Wallhoff

Fly! Little Me: Localization of Body-Image within Reduced-Self . . . 255
Tatsuya Saito and Masahiko Sato

New Interaction Concepts by Using the Wii Remote . . . 261
Michael Schreiber, Margeritta von Wilamowitz-Moellendorff, and Ralph Bruder

Wireless Data Glove for Gesture-Based Robotic Control . . . 271
Nghia X. Tran, Hoa Phan, Vince V. Dinh, Jeffrey Ellen, Bryan Berg, Jason Lum, Eldridge Alcantara, Mike Bruch, Marion G. Ceruti, Charles Kao, Daniel Garcia, Sunny Fugate, and LorRaine Duffy

PALMbit-Silhouette: A User Interface by Superimposing Palm-Silhouette to Access Wall Displays . . . 281
Goshiro Yamamoto, Huichuan Xu, Kazuto Ikeda, and Kosuke Sato

Potential Limitations of Multi-touch Gesture Vocabulary: Differentiation, Adoption, Fatigue . . . 291
Wendy Yee
Part III: Human-Robot Interaction
A Multimodal Human-Robot-Interaction Scenario: Working together with an Industrial Robot . . . 303
Alexander Bannat, Jürgen Gast, Tobias Rehrl, Wolfgang Rösel, Gerhard Rigoll, and Frank Wallhoff

Robotic Home Assistant Care-O-bot® 3 – Product Vision and Innovation Platform . . . 312
Birgit Graf, Christopher Parlitz, and Martin Hägele

Designing Emotional and Interactive Behaviors for an Entertainment Robot . . . 321
Yo Chan Kim, Hyuk Tae Kwon, Wan Chul Yoon, and Jong Cheol Kim

Emotions and Messages in Simple Robot Gestures . . . 331
Jamy Li, Mark Chignell, Sachi Mizobuchi, and Michiaki Yasumura

Life with a Robot Companion: Video Analysis of 16-Days of Interaction with a Home Robot in a "Ubiquitous Home" Environment . . . 341
Naoko Matsumoto, Hirotada Ueda, Tatsuya Yamazaki, and Hajime Murai

Impression Evaluation of a Conversational Robot Playing RAKUGO . . . 351
Akihiro Ogino, Noritaka Moriya, Park Seung-Joon, and Hirotada Ueda

Performance Assessment of Swarm Robots . . . 361
Ercan Oztemel, Cemalettin Kubat, Ozer Uygun, Tuba Canvar, Tulay Korkusuz, Vinesh Raja, and Anthony Soroka

A Robotic Introducer Agent Based on Adaptive Embodied Entrainment Control . . . 368
Mutsuo Sano, Kenzaburo Miyawaki, Ryohei Sasama, Tomoharu Yamaguchi, and Keiji Yamada

Robot Helps Teachers for Education of the C Language Beginners . . . 377
Haruaki Tamada, Akihiro Ogino, and Hirotada Ueda

An Interactive Robot Butler . . . 385
Yeow Kee Tan, Dilip Limbu Kumar, Ridong Jiang, Liyuan Li, Kah Eng Hoe, Xinguo Yu, Li Dong, Chern Yuen Wong, and Haizhou Li
Part IV: Touch and Pen-Based Interaction
A Study on Fundamental Information Transmission Characteristics of an Air-Jet Driven Tactile Display . . . 397
Takafumi Asao, Hiroaki Hayashi, Masayoshi Hayashi, Kentaro Kotani, and Ken Horii

VersaPatch: A Low Cost 2.5D Capacitive Touch Sensor . . . 407
Ray Bittner and Mike Sinclair

From Implicit to Touching Interaction by Identification Technologies: Towards Tagging Context . . . 417
Jose Bravo, Ramon Hervas, Carmen Fuentes, Vladimir Villarreal, Gabriel Chavira, Salvador Nava, Jesus Fontecha, Gregorio Casero, Rocio Peña, and Marcos Vergara

VTouch: A Vision-Base Dual Finger Touched Inputs for Large Displays . . . 426
Ching-Han Chen and Cun-Xian Nian

Overview of Meta-analyses Investigating Vibrotactile versus Visual Display Options . . . 435
Linda R. Elliott, Michael D. Coovert, and Elizabeth S. Redden

Experimental Study about Effect of Thermal Information Presentation to Mouse . . . 444
Shigeyoshi Iizuka and Sakae Yamamoto

Preliminary Study on Vibrotactile Messaging for Sharing Brief Information . . . 451
Teruaki Ito

Orientation Responsive Touch Interaction . . . 461
Jinwook Kim, Jong-gil Ahn, and Heedong Ko

Representation of Velocity Information by Using Tactile Apparent Motion . . . 470
Kentaro Kotani, Toru Yu, Takafumi Asao, and Ken Horii

Tactile Spatial Cognition by the Palm . . . 479
Misa Grace Kwok

A Study on Effective Tactile Feeling of Control Panels for Electrical Appliances . . . 486
Miwa Nakanishi, Yusaku Okada, and Sakae Yamamoto

Facilitating the Design of Vibration for Handheld Devices . . . 496
Taezoon Park, Jihong Hwang, and Wonil Hwang

Interaction Technique for a Pen-Based Interface Using Finger Motions . . . 503
Yu Suzuki, Kazuo Misue, and Jiro Tanaka

A Basic Study of Sensory Characteristics toward Interaction with a Box-Shaped Interface . . . 513
Noriko Suzuki, Tosirou Kamiya, Shunsuke Yoshida, and Sumio Yano

TACTUS: A Hardware and Software Testbed for Research in Multi-Touch Interaction . . . 523
Paul Varcholik, Joseph J. LaViola Jr., and Denise Nicholson

Low Cost Flexible Wrist Touch UI Solution . . . 533
Bin Wang, Chenguang Cai, Emilia Koskinen, Tang Zhenqi, Huayu Cao, Leon Xu, and Antti O. Salo

Grasping Interface with Photo Sensor for a Musical Instrument . . . 542
Tomoyuki Yamaguchi and Shuji Hashimoto
Part V: Brain Interfaces
Ensemble SWLDA Classifiers for the P300 Speller . . . 551
Garett D. Johnson and Dean J. Krusienski

The I of BCIs: Next Generation Interfaces for Brain–Computer Interface Systems That Adapt to Individual Users . . . 558
Brendan Allison

Mind-Mirror: EEG-Guided Image Evolution . . . 569
Nima Bigdely Shamlo and Scott Makeig

BEXPLORER: Computer and Communication Control Using EEG . . . 579
Mina Mikhail, Marian Abdel-Shahid, Mina Guirguis, Nadine Shehad, Baher Soliman, and Khaled El-Ayat

Continuous Control Paradigms for Direct Brain Interfaces . . . 588
Melody Moore Jackson, Rudolph Mappus, Evan Barba, Sadir Hussein, Girish Venkatesh, Chetna Shastry, and Amichai Israeli

Constructive Adaptive User Interfaces Based on Brain Waves . . . 596
Masayuki Numao, Takayuki Nishikawa, Toshihito Sugimoto, Satoshi Kurihara, and Roberto Legaspi

Development of Symbiotic Brain-Machine Interfaces Using a Neurophysiology Cyberworkstation . . . 606
Justin C. Sanchez, Renato Figueiredo, Jose Fortes, and Jose C. Principe

Sensor Modalities for Brain-Computer Interfacing . . . 616
Gerwin Schalk

A Novel Dry Electrode for Brain-Computer Interface . . . 623
Eric W. Sellers, Peter Turner, William A. Sarnacki, Tobin McManus, Theresa M. Vaughan, and Robert Matthews

Effect of Mental Training on BCI Performance . . . 632
Lee-Fan Tan, Ashok Jansari, Shian-Ling Keng, and Sing-Yau Goh

The Research on EEG Coherence around Central Area of Left Hemisphere According to Grab Movement of Right Hand . . . 636
Mincheol Whang, Jincheol Woo, and Jongwha Kim
Part VI: Language, Voice, Sound and Communication
A Speech-Act Oriented Approach for User-Interactive Editing and Regulation Processes Applied in Written and Spoken Technical Texts . . . 645
Christina Alexandris

Interacting with a Music Conducting System . . . 654
Carlos Rene Argueta, Ching-Ju Ko, and Yi-Shin Chen

Hierarchical Structure: A Step for Jointly Designing Interactive Software Dialog and Task Model . . . 664
Sybille Caffiau, Patrick Girard, Laurent Guittet, and Dominique L. Scapin

Breaking of the Interaction Cycle: Independent Interpretation and Generation for Advanced Dialogue Management . . . 674
David del Valle-Agudo, Javier Calle-Gómez, Dolores Cuadra-Fernández, and Jessica Rivero-Espinosa

SimulSort: Multivariate Data Exploration through an Enhanced Sorting Technique . . . 684
Inkyoung Hur and Ji Soo Yi

WeMe: Seamless Active and Passive Liquid Communication . . . 694
Nicolas Masson and Wendy E. Mackay

Study of Feature Values for Subjective Classification of Music . . . 701
Masashi Murakami and Toshikazu Kato

Development of Speech Input Method for Interactive VoiceWeb Systems . . . 710
Ryuichi Nisimura, Jumpei Miyake, Hideki Kawahara, and Toshio Irino

Non-verbal Communication System Using Pictograms . . . 720
Makiko Okita, Yuki Nakaura, and Hidetsugu Suto

Modeling Word Selection in Predictive Text Entry . . . 725
Hamed H. Sad and Franck Poirier

Using Pictographic Representation, Syntactic Information and Gestures in Text Entry . . . 735
Hamed H. Sad and Franck Poirier

Embodied Sound Media Technology for the Enhancement of the Sound Presence . . . 745
Kenji Suzuki

Compensate the Speech Recognition Delays for Accurate Speech-Based Cursor Position Control . . . 752
Qiang Tong and Ziyun Wang
Effectiveness of the Text Display in Bilingual Presentation of JSL/JT for Emergency Information . . . 761
Shunichi Yonemura, Shin-ichiro Eitoku, and Kazuo Kamata
Part VII: Visualisation, Images, and Pictures
Specifying the Representation of Non-geometric Information in 3DVirtual Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
Kaveh Bazargan and Gilles Falquet
Prompter “•” Based Creating Thinking Support Communication System That Allows Hand-Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
Li Jen Chen, Jun Ohya, Shunichi Yonemura, Sven Forstmann, andYukio Tokunaga
A Zoomable User Interface for Presenting Hierarchical Diagrams on Large Screens . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
Christian Geiger, Holger Reckter, Roman Dumitrescu,Sascha Kahl, and Jan Berssenbrügge
Phorigami: A Photo Browser Based on Meta-categorization and Origami Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
Shuo-Hsiu Hsu, Pierre Cubaud, and Sylvie Jumpertz
Sphere Anchored Map: A Visualization Technique for Bipartite Graphs in 3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
Takao Ito, Kazuo Misue, and Jiro Tanaka
Motion Stroke: A Tablet-Based Interface for Motion Design Tool Using Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
Haruki Kouda, Ichiroh Kanaya, and Kosuke Sato
Tooling the Dynamic Behavior Models of Graphical DSLs . . . . . . . . . . . . 830
Tihamér Levendovszky and Tamás Mészáros
Pattern Recognition Strategies for Interactive Sketch Composition . . . . . 840
Sébastien Macé and Eric Anquetil
Specification of a Drawing Facility for Diagram Editors . . . . . . . . . . . . . . 850
Sonja Maier and Mark Minas
A Basic Study on a Drawing-Learning Support System in the Networked Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
Takashi Nagai, Mizue Kayama, and Kazunori Itoh
Benefit and Evaluation of Interactive 3D Process Data Visualization for the Presentation of Complex Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 869
Dorothea Pantförder, Birgit Vogel-Heuser, and Karin Schweizer
Modeling the Difficulty for Centering Rectangles in One and Two Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
Robert Pastel
Composing Visual Syntax for Domain Specific Languages . . . . . . . . . . . . . 889
Luis Pedro, Matteo Risoldi, Didier Buchs, Bruno Barroca, and
Vasco Amaral
The Effectiveness of Interactivity in Computer-Based Instructional Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
Lisa Whitman
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
Part I
Multimodal User Interfaces
J.A. Jacko (Ed.): Human-Computer Interaction, Part II, HCII 2009, LNCS 5611, pp. 3–11, 2009. © Springer-Verlag Berlin Heidelberg 2009
Using Acoustic Landscapes for the Evaluation of Multimodal Mobile Applications
Wolfgang Beinhauer and Cornelia Hipp
Fraunhofer Institute for Industrial Engineering, Nobelstrasse 12, 70569 Stuttgart, Germany
{wolfgang.beinhauer,cornelia.hipp}@iao.fraunhofer.de
Abstract. Multimodal mobile applications are gaining momentum in the field of location-based services for special purposes, among them navigation systems and tourist guides for pedestrians. In some cases, when visibility is limited or blind people require guidance, acoustic rather than visual landmarks are used for macro-navigation. Likewise, micro-navigation supported by pedestrian navigation systems must comply with the user's expectations. In this paper, we present an acoustic landscape that allows the emulation of arbitrary outdoor situations for the evaluation of navigation systems. We present the evaluation capabilities and limitations of the laboratory as well as an example evaluation of a pedestrian navigation system that uses acoustic and haptic feedback.
Keywords: Acoustic landscape, test bed, pedestrian navigation, haptic feedback.
1 Introduction
User Centered Design (UCD) [1] is a widely accepted product development process in which specific emphasis is put on the active involvement of users at each step of the design process. In particular, the UCD process comprises iterative cycles of product prototyping, real-world tests with actual users, evaluation, and refined prototyping until a satisfying design has been found. While UCD is well established as a method for the development of hardware devices or user interfaces for software solutions, it encounters difficulties with other types of applications, such as location-based services, especially when those are in a premature stage.
When it comes to mobile devices and location-based services, a thorough evaluation in real-world outdoor tests is hardly practicable and difficult to manage. Ubiquitous network access is not given. The test setting is far from the development labs. In many cases, it is impossible to maintain equal test conditions due to rapidly changing environmental factors that are not under control. Quantitative evaluation data might be blurred by imperfect test conditions. In the worst case, the evaluation of premature prototypes in outdoor scenarios involving handicapped test persons might even put them at risk.
With the Fraunhofer Interaction Laboratories, we introduce a test environment that emulates realistic outdoor conditions without the drawbacks of public space. The laboratory contains several indoor installations for testing and prototyping, one of them being a large-scale multimodal immersive environment that combines a 3D stereo projection with a novel surround sound system. The sound system allows for enhanced spatial audio perception and serves as a test bed for the development and evaluation of new interaction techniques, notably mobile applications. Within the immersive environment, ubiquitous network access is given, as well as camera systems for easy evaluation and data processing, paving the way for rapid design and short test cycles.
Simple two-channel stereophony, quadrophonic or 5.1 sound systems fall short of the requirements for precise sound-based localization as introduced below. A more accurate and more flexible sound positioning is needed. The sound system presented in this paper does not deliver the quality and accuracy of high-end acoustic installations such as wave field synthesis [2], but enhances conventional multi-channel sound generation in a way that is easy to implement and suitable for test purposes. The system is capable of emulating real-world audio conditions in a laboratory environment and creating repeatable test conditions for the evaluation of third-party services. This paper presents the laboratory itself and shows how the lab is used as an acoustic landscape for the development of alternative multimodal pedestrian navigation.
2 Related Work
Spatial hearing relies on interaural time differences, interaural intensity differences and head-related transfer functions. Acoustic landscapes that build on these effects have been around for quite a while. Most previous work aimed at special audio effects. Mauney and Walker [3] introduced the idea of livable “soundscapes”, and Mynatt et al. [4] introduced the term “audio aura”. In distinction to those contributions, we focus on acoustic environments that can be used for the development and evaluation of interactive systems such as a navigation system.
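The interaural time difference mentioned above has a simple closed-form approximation. As an illustrative sketch (not part of the system described here; the function name and default head radius are our assumptions), Woodworth's spherical-head formula gives the arrival-time difference between the two ears for a source at a given azimuth:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference in seconds for a far-field source,
    using Woodworth's spherical-head model: ITD = (r / c) * (theta + sin theta)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))
```

For a source directly to one side (90° azimuth) this yields roughly 0.66 ms, the commonly cited maximum ITD for an average head.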
The use of audio interfaces or other modalities for navigation tasks has been examined by various groups. Loomis and Klatzky [5, 6] dealt with self-orientation, related spatial orientation and multimodal spatial representations, and examined their relevance for sensory substitution.
Walker and Lindsay presented a rapid prototyping tool for auditory navigation displays based on virtual reality [7]. An approach to interactive acoustic environments that are capable of streaming spatiotemporal audio data is introduced by Heimrich et al. [8].
3 The Acoustic Landscape
The idea of the lightweight acoustic landscape is to create a test environment that allows for spatial sound generation beyond stereophonic reproduction. To this end, several independent 7.1 speaker systems are stacked on multiple layers, providing spatial
perception in all three dimensions beyond psychoacoustic effects. A dedicated software controller enables the positioning of mono sounds within the laboratory area.
3.1 Installation
The physical installation of the acoustic landscape consists of two 7.1 speaker systems mounted at different height levels in a rectangular room. While the upper annulus is located at a height of y1 = 282 cm, the lower one sits at y2 = 44 cm, sandwiching the average elevation of the listeners' ears. The speakers are controlled by specific software that creates a spatial sound perception. The set-up emulates a cylindrical sound space with a base radius of 256 cm. Due to the given room geometry and limited fixation possibilities, an aberration from the ideal cylindrical shape is inevitable and leads to some distortion in angular sound perception. This effect can be reduced by a fine calibration of the amplifiers. An empirical evaluation has shown that satisfying results are obtained within a large sweet spot extending close to the boundaries of the room.
The speakers deployed are conventional high-end consumer devices (GigaSet S750) that feature no special directional characteristics. The configuration leaves a rectangular experimentation space of roughly 25 m². Additionally, at the right-hand border, a stereoscopic display extending to the full width of the laboratory is available, making up a multimodal immersive environment.
Fig. 1. The acoustic landscape consists of two 7.1 speaker systems mounted above each other (only the tweeters depicted). The set-up emulates a cylindrical sound space with a base radius of 256 cm within the rectangular room. The aberration from the ideal cylindrical shape leads to some distorted angular sound perception. View from above; all dimensions are given in centimeters (cm).
3.2 Control Software
In order to put the acoustic landscape into operation, dedicated control software was developed that makes use of the speakers beyond stereophonic, quadrophonic or 5.1 deployments, which build on pseudoacoustic effects to create the impression of distance or movement. Besides the subwoofers, 14 tweeters have to be controlled in order to generate a sound composition that allows for near-reality test conditions.
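The paper does not detail how a sound is rendered at an arbitrary azimuth on a ring of tweeters. A common technique for this kind of set-up, shown here purely as a hypothetical sketch, is constant-power panning between the two ring speakers adjacent to the target direction (all names and the speaker layout are illustrative assumptions):

```python
import math

def ring_gains(source_az_deg, speaker_az_deg):
    """Constant-power pairwise panning on a horizontal speaker ring.
    Only the two speakers adjacent to the source azimuth receive signal;
    speaker azimuths are assumed sorted and pairwise distinct."""
    n = len(speaker_az_deg)
    gains = [0.0] * n
    for i in range(n):
        a = speaker_az_deg[i]
        b = speaker_az_deg[(i + 1) % n]
        span = (b - a) % 360          # angular width of this speaker pair
        off = (source_az_deg - a) % 360
        if off <= span:
            frac = off / span
            # cos/sin law keeps the summed power constant across the pan
            gains[i] = math.cos(frac * math.pi / 2)
            gains[(i + 1) % n] = math.sin(frac * math.pi / 2)
            break
    return gains
```

A source exactly at a speaker position gets gain 1.0 on that speaker; halfway between two speakers, both receive about 0.707, so the radiated power stays constant while the phantom source moves.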
One possible approach to the control software would be direct control of each individual speaker. Sound samples could move along a given trajectory through the audio laboratory by hand-overs between the speakers. However, this approach is not practical, since it would require separate sound cards and amplifiers for each speaker.
The key to a simpler approach is the segmentation of the acoustic environment. Existing libraries such as OpenAL by Creative Labs or Microsoft DirectSound ease the development of psychoacoustic effects in the periphery of the soundscape. In the Fraunhofer installation, this periphery is the space above and below the upper and lower planes of 7.1 systems, as well as the outside of the cylindrical base model. Proper generation of pseudoacoustic effects requires an equidistant mounting of the speakers along the circumference of the cylinder – a requirement that is not perfectly fulfilled due to the given room architecture, but that has been partly compensated for by a calibration procedure.
In between the speaker levels, pseudoacoustic effects in the vertical dimension are eliminated and replaced by manual cross-fading. The same holds for the horizontal plane. Moreover, the Doppler effect for moving sound sources is calculated and applied to the sounds before their reproduction. This way, up to 256 sound samples (mono-recorded WAV, 44.1 kHz, 16 bit) can be moved along predefined trajectories through the room. The situation is depicted in Fig. 2. The system is extensible in case a neighboring system is installed, which would require cross-fading between the two systems.
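The vertical cross-fade between the two rings and the Doppler correction described above might be sketched as follows. The linear fade law and the function names are our illustrative assumptions; the paper does not specify the exact formulas used:

```python
# Ring heights from the installation description, in metres
Y_LOWER, Y_UPPER = 0.44, 2.82
C_SOUND = 340.0  # approximate speed of sound in air, m/s

def layer_weights(y):
    """Linear cross-fade between the lower and upper 7.1 ring for a source
    at height y; heights outside the ring interval are clamped."""
    t = max(0.0, min(1.0, (y - Y_LOWER) / (Y_UPPER - Y_LOWER)))
    return 1.0 - t, t  # (lower-ring weight, upper-ring weight)

def doppler_factor(radial_velocity):
    """Frequency scaling for a moving source and a static listener;
    radial_velocity > 0 means the source approaches the listener."""
    return C_SOUND / (C_SOUND - radial_velocity)
```

A source at ear height between the rings is fed to both rings with roughly equal weight, which is what places the phantom image vertically between the two speaker planes.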
Fig. 2. Segmentation of the room with two sound systems. Human ears will usually be located in the middle between the two planes, allowing for the most realistic sound impression.
3.3 Building an Acoustic Landscape
Usability and modularity were important aspects in the design of the user interface of the authoring tool for sound composition. The generation of a sound landscape is a two-step process. First, the trajectory and playback parameters of a single sound have to be defined. This can be done graphically as shown in Fig. 3. A sound file is selected and its positioning in space is defined. Additionally, the dimensions of the laboratory can be altered here in order to adapt the system to new geometries. Several sound sources or trajectories can be saved as macros and re-used for the generation of more complex sounds.
The second step consists of the orchestration of the various sounds into a landscape. For this purpose, the preset sounds are triggered on a timeline and follow their respective trajectories. Figure 4 shows an example of an audio scene composed of 18 different sounds.
Fig. 3. User interface for presetting sounds. A trajectory through the sound space is attached to each sound source (graphical tool to the right). Alternatively, a series of coordinate quadruples in space and time can be defined, which are approached consecutively and between which a linear interpolation is performed.
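The coordinate-quadruple trajectories with linear interpolation described in the caption above can be sketched as follows (a minimal illustration under our own naming; clamping outside the defined time interval is our assumption):

```python
def position_at(waypoints, t):
    """Piecewise-linear interpolation of a trajectory given as (x, y, z, t)
    quadruples sorted by time t; positions outside the interval are clamped
    to the first or last waypoint."""
    if t <= waypoints[0][3]:
        return waypoints[0][:3]
    if t >= waypoints[-1][3]:
        return waypoints[-1][:3]
    for (x0, y0, z0, t0), (x1, y1, z1, t1) in zip(waypoints, waypoints[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # fraction of the segment elapsed
            return (x0 + a * (x1 - x0),
                    y0 + a * (y1 - y0),
                    z0 + a * (z1 - z0))
```

Sampling such a function at the audio controller's update rate moves a sound source smoothly from waypoint to waypoint.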
Fig. 4. User interface for composing a landscape. In the example, a landscape consisting of 18 sound sources is created.
4 Discussion
4.1 Evaluation
So far, no detailed evaluation with professional audio measurement equipment has been performed. However, an empirical study has produced some first preliminary results. The relevant criteria for the functionality of the sound landscape are the direction from which a certain sound is heard and the perceived distance from which it is estimated to originate. Moreover, it was tested how the transition of a moving sound source from the inner area of interpolated positioning to the outer area of psychoacoustic effects is perceived.
The precision with which positioned sounds are perceived relative to their set point was evaluated in a series of tests. First, a test person was placed in the sweet spot, i.e. in the centre of the enveloping cylinder. A sequence of pulsating sound signals was played, originating from different pre-defined set points. The test person pointed towards the direction from which she heard the sound. The series was repeated with different persons and different sound sources.
Despite the rather inaccurate measurement and the inherent subjectivity of the procedure, the analysis delivered some first results: as expected, the anisotropy of the speaker installation leads to a distortion of the sound characteristics in the horizontal dimension. In the right-hand section of the lab (see Fig. 1), a good precision of about a 10° angle of aperture is reached, which is in line with the subjective perception tolerance. In the left part, where the installation deviates most from the ideal cylindrical set-up, the distortion reduces the accuracy to approximately 30°. In the vertical dimension, the precision remains at 10° even in the distorted area.
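Pointing tests like the one described are typically scored by the smallest angle between the pointed and set directions, taking the 360° wrap-around into account. A hypothetical scoring sketch (not the authors' analysis procedure):

```python
def angular_error(pointed_deg, target_deg):
    """Smallest absolute angle (0..180 degrees) between two directions,
    handling the wrap-around at 360 degrees."""
    d = abs(pointed_deg - target_deg) % 360
    return min(d, 360 - d)

def mean_angular_error(pairs):
    """Average pointing error over (pointed, target) direction pairs."""
    return sum(angular_error(p, t) for p, t in pairs) / len(pairs)
```

The wrap-around matters: naively subtracting 350° from 10° would report a 340° error where the true deviation is only 20°.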
The perceived distance could not be captured satisfyingly so far. However, no discontinuity was recognized by the test persons when transitioning between the two sound effect areas. The physically correct interpolation inside and the psychoacoustic effects outside are integrated seamlessly.
4.2 Future Development
A precise evaluation of the system using acoustic measurement equipment is in progress in order to minimize the current distortions. The next step will be an enhancement of the interactivity of the acoustic landscape. Currently, only predefined sound sources can be moved along predefined trajectories. It is planned to stream arbitrary sound sources to arbitrary positions in real time. This way, the localizable sound output could be combined with object tracking to create new possibilities of immersive interaction behavior.
5 Exemplary Development
5.1 Test Conditions
As an example, we introduce a system that has been developed and evaluated in the acoustic landscape. The exemplary development makes use of the laboratory's facilities and shows the benefits of an indoor test environment.
Navigation systems are nowadays broadly used. The use of pedestrian navigation systems, however, is rather restricted to touristic areas or larger sports events like the Olympics. Other systems use tagging technologies like geoannotation [9]. However, these approaches based on PDA computers fail when it comes to blind people, who need pedestrian navigation aids the most. In the following, we introduce a navigation system for the blind whose development and evaluation was supported by the creation of an acoustic landscape in the laboratory.
The acoustic landscape can be a powerful tool for the simulation of realistic test conditions, in particular for overlaying desired sound signals with unwanted background noise. Those conditions are completely repeatable and ease evaluation procedures.
Background noise is not necessarily intrusive or distracting; it can be very informative as well. Especially for visually impaired and blind people, the sounds of a person's surroundings are valuable indicators for orientation. Outdoors in particular, blind people rely heavily on background sound for their macro-navigation. Steady noises from distant sound sources, such as the roaring of a motorway, are perceived as acoustic landmarks, just as a spire would be for sighted people. By contrast, micro-orientation is performed with the aid of the immediate and direct environment within a short distance of a few meters, whereas macro-navigation is aimed at the environment at a greater distance, e.g. the location of the next bus station. The acoustic landscape can therefore simulate the surroundings that blind people use for orientation in their macro-navigation. Additionally, disturbing sound sources can be blended in, such as the abrupt appearance of a rattling motorbike. In the following, two systems for blind people are tested in the acoustic landscape.
5.2 A Pedestrian Navigation System
Since the visual channel is not available, other modalities have to be found in order to create an effective navigation system for the blind. One means of reaching blind people and signaling hints for their orientation is the use of haptic interaction. Another would be audio feeds. Haptic interaction has advantages over audio signals especially in very noisy surroundings, such as pedestrian areas or bus stations. Also, haptic feedback does not distract bystanders. The first test case presents the idea of haptic navigation wristbands.
The guided person wears one wristband on each arm, equipped with vibration motors that can be controlled by a central navigation system to direct him or her to the right or left, or to stop him or her in case of danger. Likewise, the person could wear a smartphone on the belt, which receives GPS information about the user's location and subsequently calculates and sends the correct signals to the wristbands. To enhance the accuracy, a pedometer can be integrated as well.
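The mapping from GPS-derived bearing to a left/right wristband cue could be sketched as follows. The dead-zone width and all names are illustrative assumptions, not the authors' implementation:

```python
def steering_cue(current_heading_deg, target_bearing_deg, dead_zone_deg=15.0):
    """Map the heading error onto a haptic cue: 'left', 'right', or None
    when the user already walks within the dead zone around the target
    bearing (so the wristbands stay silent on a good course)."""
    # Signed error in (-180, 180]; positive means the target lies to the right
    err = (target_bearing_deg - current_heading_deg + 180) % 360 - 180
    if abs(err) <= dead_zone_deg:
        return None
    return "right" if err > 0 else "left"
```

Normalizing the error into (-180, 180] avoids the classic bug of telling a user heading 5° to turn all the way right to reach a target at 355° instead of nudging slightly left.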
These wristbands are very useful for blind people when the surroundings are noisy and they cannot listen properly to an audio navigation aid. To prove this and to enhance the wristbands, a noisy situation can be simulated with the help of the acoustic landscape, ensuring that the test situation always contains the same interfering noise, which makes audio information less useful.
The second example represents a sound-based navigation system for pedestrians. Especially for visually impaired and blind people, orientation with the help of their hearing is very important. Beside the usage of a speech system which can tell
Fig. 5. The concept of haptic wristbands for pedestrian navigation. A GPS device attached to a belt receives localization data and guides the user by means of haptic stimuli transmitted by the wristbands.
information explicitly, information can also be given via sounds. This can be less intrusive than speech commands, because speech is intensely attention-grabbing. A continuously repeated command can be distracting and annoying, whereas a discrete sound might be much less intrusive. A comfortable short sound at this point can improve the audio navigation system. Moreover, this idea was pursued in order to improve macro-navigation by sounds. In laboratory testing in the acoustic landscape it was found which kinds of information blind users are interested in: information for orientation, safety-related information, and temporary barriers like construction sites. Furthermore, adequate sounds could be identified, some of them earcons and some auditory icons. Earcons are arbitrary melodies designed for information that has no natural sound, like “stairs”. In contrast, auditory icons refer to reality; for the information “construction work”, for example, we used a recording of a real construction site.
The acoustic landscape proved to be a valuable tool to test whether the sounds are actually heard and understood, and not ignored, when other sounds of the real environment interfere. Furthermore, it can be tested how often and exactly when the sounds should be played in interfering situations to make sure the user receives the information. Additionally, it can be investigated whether auditory icons are always useful or whether users have problems identifying that an auditory icon is coming from the earphones and not from the real surroundings.
The ongoing development of smartphones and personal digital assistants continues the trend towards mobile usage of systems. The demand for methodologies and tools for testing devices and applications in mobile situations will therefore increase. The acoustic landscape will be one such solution.
Acknowledgements. We would like to thank all contributors to the development of the acoustic landscape, in particular Frank Falkenberg, Till Issler, Jan Roehrich
and Andreas Friedel. Moreover, thanks to Romy Kniewel and Brigitte Ringbauer, who contributed decisively to the development of the pedestrian navigation system.
References
1. Norman, D.A.: The Design of Everyday Things. Perseus Books Group, New York (1990)
2. Brandenburg, K., Brix, S., Sporer, T.: Wave Field Synthesis: From Research to Applications. In: Proceedings of the 12th European Signal Processing Conference (EUSIPCO), Vienna, Austria, August 7-10 (2004)
3. Mauney, B.S., Walker, B.N.: Creating functional and livable soundscapes for peripheral monitoring of dynamic data. In: Proceedings of ICAD 2004, the Tenth International Conference on Auditory Display, Sydney, Australia (2004)
4. Mynatt, E.D., Back, M., Want, R., Baer, M., Ellis, J.B.: Designing audio aura. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Los Angeles, California, United States, April 18-23, 1998, pp. 566–573 (1998)
5. Klatzky, R.L., Loomis, J.M., Beall, A.C., Chance, S.S., Golledge, R.G.: Spatial updating of self-position and orientation during real, imagined, and virtual locomotion. Psychological Science 9(4), 293–298 (1998)
6. Loomis, J.M., Klatzky, R.L.: Functional equivalence of spatial representations from vision, touch and hearing: Relevance for sensory substitution. In: Rieser, J., Ashmead, D., Ebner, F., Corn, A. (eds.) Blindness, Brain Plasticity and Spatial Function, pp. 155–184. Erlbaum, Mahwah (2007)
7. Walker, B.N., Lindsay, J.: Using virtual reality to prototype auditory navigation displays. Assistive Technology Journal 17(1), 72–81 (2005)
8. Heimrich, T., Reichelt, K., Rusch, H., Sattler, K., Schröder, T.: Modelling and Streaming Spatiotemporal Audio Data. In: Meersman, R., Tari, Z., Herrero, P. (eds.) OTM-WS 2005. LNCS, vol. 3762, pp. 7–8. Springer, Heidelberg (2005)
9. Beeharee, A., Steed, A.: Minimising Pedestrian Navigation Ambiguities Through Geoannotation and Temporal Tagging. In: Jacko, J. (ed.) Human-Computer Interaction: Proceedings of the 12th International Conference, HCI International 2007, Beijing, China, July 22-27 (2007)