
Multipresence-Enabled Mobile Spatial Audio Interfaces


Page 1: Multipresence-Enabled Mobile Spatial Audio Interfaces

Multipresence-Enabled Mobile Spatial Audio Interfaces

PI: Adrian David Cheok
Co-PI: Owen Noel Newton Fernando
Organisation: National University of Singapore
Collaborator: Michael Cohen
Organisation: University of Aizu

Page 2: Multipresence-Enabled Mobile Spatial Audio Interfaces

OBJECTIVES

• The main objective of this research is to develop multipresence-enabled audio windowing systems for visualization, attention, and privacy awareness of narrowcasting (selection) functions in collaborative virtual environments (CVEs) on GSM mobile devices and 3rd- and 4th-generation mobile phones.

• The deployment of audio narrowcasting operations encourages modernization of office- and mobile-based conferencing, leveraging session integration across coextensive spaces and anticipating multipresence enabled by higher bandwidth and more durable mobile connectivity for effectively persistent sessions.

• This research can be considered an extension of presence technology, and anticipates deployment of such narrowcasting protocols into session protocols like SIP or the internet infrastructure (routers, etc.) itself.

Page 3: Multipresence-Enabled Mobile Spatial Audio Interfaces

SCOPE

• The mobile audio windowing system is a multidisciplinary project that will focus on research, definitions, and applications of new types of communication in collaborative virtual environments.
• Implement narrowcasting operations in mobile collaborative environments.
• Implement multipresence functionality.
• Implement an autofocus algorithm to determine the best multipresent sink for each source (see the sketch after this list).
• Implement a best-source algorithm to determine the best multipresent source for each sink.
• Implement clipboard operations for teleporting and cloning.
• Design and implement a GUI (graphical user interface) for multiple spaces using time- or space-division multiplexing.
• Implement realtime voice communication for multiple spaces (initially, up to five spaces).
• Validate the efficacy of auditory interfaces, including usability testing.
• Deploy spatial sound in mobile applications.
• Integrate SIP-based media mixing of narrowcast audio streams.
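The autofocus algorithm itself is not specified in this proposal; as a placeholder, the sketch below picks, for each source, the nearest of a listener's multipresent sinks, so each source is rendered through exactly one sink. The nearest-sink metric and all names here are our illustrative assumptions, not the proposal's design.

```java
// Hypothetical autofocus sketch: choose one "best" multipresent sink per
// source. Distance is an assumed stand-in for whatever coupling metric
// the eventual system uses.
import java.util.List;

class Autofocus {
    record Point(double x, double y) {
        double dist(Point o) { return Math.hypot(x - o.x, y - o.y); }
    }

    // Returns the index of the best (here: nearest) sink for the source.
    static int bestSink(Point source, List<Point> sinks) {
        int best = 0;
        for (int i = 1; i < sinks.size(); i++)
            if (sinks.get(i).dist(source) < sinks.get(best).dist(source))
                best = i;
        return best;
    }

    public static void main(String[] args) {
        Point source = new Point(0, 0);
        List<Point> sinks = List.of(new Point(5, 5), new Point(1, -1));
        System.out.println("best sink index: " + bestSink(source, sinks)); // 1
    }
}
```

The complementary best-source algorithm would run the same selection in the opposite direction, choosing one multipresent source per sink.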

Page 4: Multipresence-Enabled Mobile Spatial Audio Interfaces

DELIVERABLES

• We propose to perform appropriate R&D to enable delivery of a multipresence-enabled mobile telephony system comprising three main subsystems:

• control: We will extend narrowcasting to mobile interfaces (by porting the previously developed “ι·Con” program to a local GSM mobile platform), including multispatial idioms (clipboard functions for teleporting/cloning).

• communication: The above-described interface will control realtime polyphonic audio (probably mostly voice) streams, extending to more than two conversants. We plan to apply SIP-based presence protocols (described later) to such chat sessions.

• display: We plan to deploy spatial audio on mobile platforms for rich chat capability, based on JSR-234, a proposed standard for enabling advanced multimedia on mobile platforms, including models of spatial sound (a sketch of this API follows at the end of this list).

• An audio windowing system for NTT DoCoMo mobile phones has been developed as a partially working prototype using Java ME and NTT DoCoMo Java (DoJa) libraries. As a first step in the proposed research, we will port these prototypes to GSM mobile phones, as supported by local mobile network operators.

• The narrowcasting formalization, validated by a workstation proof-of-concept, and the infrastructure of the CVE client/server architecture, extended by a servent proxy to support mobile transactions, are robust enough to support the research and development described in this proposal.
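As a gloss on the display subsystem, here is a hedged Java ME sketch of per-conversant spatial rendering with JSR-234 (AMMS). The locator, coordinates, and class name are placeholders, and handset support for these controls varies; this sketches the intended API usage rather than the delivered system.

```java
// Hedged JSR-234 (AMMS) sketch: place one remote conversant's voice
// stream in 3D space relative to the listener (the Spectator).
import javax.microedition.amms.GlobalManager;
import javax.microedition.amms.SoundSource3D;
import javax.microedition.amms.Spectator;
import javax.microedition.amms.control.audio3d.LocationControl;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;

public class SpatialChat {
    void positionConversant() throws Exception {
        // One stream per remote conversant (RTSP locator is a placeholder).
        Player voice = Manager.createPlayer("rtsp://example.net/voice1");
        voice.realize();

        // Wrap the player in a 3D source and place it in the soundscape.
        SoundSource3D source = GlobalManager.createSoundSource3D();
        source.addPlayer(voice);
        LocationControl srcLoc = (LocationControl) source.getControl(
            "javax.microedition.amms.control.audio3d.LocationControl");
        srcLoc.setCartesian(1000, 0, 0);   // AMMS units are millimetres

        // The spectator is the listener's own (sink) position.
        Spectator listener = GlobalManager.getSpectator();
        LocationControl earLoc = (LocationControl) listener.getControl(
            "javax.microedition.amms.control.audio3d.LocationControl");
        earLoc.setCartesian(0, 0, 0);

        voice.start();
    }
}
```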

Page 5: Multipresence-Enabled Mobile Spatial Audio Interfaces

IMPACT

Applications

This research will enable different applications, including teleconferences, chatspaces, virtual concerts, location-based services, sonic cursors, and entertainment & culture (like online role-playing games).

• Mobile teleconferencing: audio windowing system
• Multipresence-enabled interaction systems
  • Domestic (family)
  • Social (friends)
  • Vocational (office)
• Mobile spatial audio system
  • Karaoke and virtual concerts
  • MMORPG: massively multiplayer online role-playing games
  • LBS: location-based services
  • Mobile “talking books”
  • Audio navigation and way-finding

Page 6: Multipresence-Enabled Mobile Spatial Audio Interfaces

Possible contributions to research/industry

• A unique feature of our system is the ability of a human pilot to delegate multiple avatars simultaneously, increasing the quantity of presence; such multipresence enables us to overcome some fundamental constraints of the human condition.

• Presence awareness is rapidly becoming an important component of many collaborative applications. One serious limitation is that no existing presence-awareness system can handle multiply present sources and sinks. The narrowcasting operations presented in this proposal suggest an elegant approach to such multipresence environments.

• We are the only group working on multipresence and the protocols required to articulate privacy and attention across the multiple spaces users will virtually inhabit with the near-constant connectivity (“ABC”: always best connected) afforded by mobile networks.

• Sound spatialization has particular potential when developing applications for small-screen displays. To the best of our knowledge, no existing mobile interface supports multiple simultaneous audio streams with multipresence. Our system will allow users to multicast their voices to multiple receivers and control crowded soundscapes using narrowcasting operations.


Page 7: Multipresence-Enabled Mobile Spatial Audio Interfaces

Exploitation Potential / Commercialisation

• The proposed system will enhance human communication, allowing users to interact with friends, family, and colleagues “anytime, anywhere.” One will have virtual presence in multiple different virtual places, with the ability to control privacy and shift attention back and forth. For instance, one's family members, schoolmates, and friends will have persistent virtual copresence, and one can virtually go back and forth among the different spaces, encouraging synchronous social interaction even amid a busy life.

• Mobile phones have become a ubiquitous technology and for many people an important tool for communication and information access. Mobile telephony offers an interesting platform for building multipresence-enabled applications that utilize the phone as a social or commercial assistant. We expect that commercial development of this research will involve partnerships with network providers, who might license such technology to offer to their subscribers as an added-value service.

Future Work

• Multiuser interfaces generally provide shared workspaces for human users to collaborate. Role-based collaborative user interfaces can enhance CSCW systems, allowing human users to collaborate with each other easily and productively. We should explore special interface design strategies and provide user requirements for interfaces supporting role-based applications, based on role assignments and role-transition mechanisms.


Page 8: Multipresence-Enabled Mobile Spatial Audio Interfaces

Principal Investigator (PI)

Adrian David Cheok is Director of the Mixed Reality Lab and an Associate Professor at the National University of Singapore, where he leads a team of over 20 researchers and students. He has been a keynote and invited speaker at numerous international and local conferences and events. He was invited to exhibit for two years in the Ars Electronica Museum of the Future, launching at the Ars Electronica Festival 2003. He was IEEE Singapore Section Chairman 2003, and is presently ACM SIGCHI Chapter President. He was awarded the Hitachi Fellowship 2003, the A-STAR Young Scientist of the Year Award 2003, and the SCS Singapore Young Professional of the Year Award 2004. In 2004 he was invited to be the Singapore representative of the United Nations body IFIP SG 16 on Entertainment Computing, and is the founding and present Chairman of the Singapore Computer Society Special Interest Group on Entertainment Computing. Also in 2004, he was awarded an Associate of the Arts award by the Minister for Information, Communications and the Arts, Singapore.

Co-PI

Owen Noel Newton Fernando is a Research Fellow in the Mixed Reality Lab at the National University of Singapore. He received his B.Sc. in Computer Science from the University of Colombo, Sri Lanka, and M.Sc. & Ph.D. in Computer Science and Engineering from the University of Aizu, Japan. He previously worked as a Systems Analyst in the IT department of People's Bank, a leading government bank in Sri Lanka. He was awarded the Japanese Government (Monbukagakusho: MEXT) scholarship in 2004. Fernando is the author or coauthor of five journal and fifteen conference publications.

Collaborator

Michael Cohen is Professor of Computer Arts at the University of Aizu in Japan, where he heads the Spatial Media Group, teaching information theory, audio interfaces, and computer music, and researching interactive multimedia, including virtual & mixed reality, computer music, spatial audio & stereotelephony, stereography, ubicomp, and mobile computing. He received an Sc.B. in EE from Brown University (Providence, Rhode Island), an M.S. in CS from the University of Washington (Seattle), and a Ph.D. in EECS from Northwestern University (Evanston, Illinois). He is co-developer of the Sonic (sonic.u-aizu.ac.jp) online audio courseware; author or coauthor of over one hundred publications, four book chapters, and two patents; and inventor or co-inventor of multipresence (virtual cloning algorithm), the Schaire, nearphones, SQTVR, and Zebrackets.

Page 9: Multipresence-Enabled Mobile Spatial Audio Interfaces

Q & A Session

Page 10: Multipresence-Enabled Mobile Spatial Audio Interfaces

Narrowcasting and selection functions

Narrowcasting and selection functions can be formalized in predicate calculus notation, where ‘¬’ means “not,” ‘∧’ means conjunction (logical “and”), ‘∃’ means “there exists,” and ‘⇒’ means “implies.” The general expression of inclusive selection is

active(media processor x) = ¬ exclude(x) ∧ (∃ y (include(y) ∧ self(x) == self(y)) ⇒ include(x))

So, for mute and select (solo), the relation is

active(source x) = ¬ mute(x) ∧ (∃ y (select(y) ∧ self(x) == self(y)) ⇒ select(x))

mute explicitly turning off a source, and select disabling the collocated (same room/window) complement of the selection (in the spirit of “anything not mandatory is forbidden”). For deafen and attend, the relation is

active(sink x) = ¬ deafen(x) ∧ (∃ y (attend(y) ∧ self(x) == self(y)) ⇒ attend(x))
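The activity predicate translates directly into code. Below is a minimal Java sketch (class and member names are ours, not the proposal's): each media processor carries an exclude flag (mute/deafen) and an include flag (select/attend), and activity is evaluated against the peers sharing the same self, which we read as the human pilot behind the avatar.

```java
// Minimal sketch of the activity predicate above (names illustrative):
// active(x) = ¬exclude(x) ∧ (∃y (include(y) ∧ self(x)==self(y)) ⇒ include(x))
import java.util.List;

class MediaProcessor {
    final String self;   // identifies the human pilot behind this avatar
    boolean exclude;     // mute (for a source) or deafen (for a sink)
    boolean include;     // select/solo (for a source) or attend (for a sink)

    MediaProcessor(String self) { this.self = self; }

    boolean isActive(List<MediaProcessor> all) {
        if (exclude) return false;              // ¬exclude(x)
        boolean anyIncluded = all.stream()      // ∃y with the same self
            .anyMatch(y -> y.include && y.self.equals(this.self));
        return !anyIncluded || this.include;    // a ⇒ b  ≡  ¬a ∨ b
    }
}
```

With all flags clear, every processor is active; setting select on one of a pilot's sources implicitly deactivates that pilot's other collocated sources, matching the “anything not mandatory is forbidden” reading above.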

Page 11: Multipresence-Enabled Mobile Spatial Audio Interfaces

Media mixing and delivery (P1 mutes P2 and deafens P4)
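Since the original figure is not reproduced here, the scenario can be traced in a small sketch. One plausible reading (our assumption): muting P2 removes P2's stream from P1's mix, and deafening P4 withholds P1's stream from P4's mix; the party names and per-sink summation model are illustrative only.

```java
// Assumed per-sink mixing under the page-11 scenario:
// P1 mutes P2 (P1 no longer hears P2) and P1 deafens P4
// (P4 no longer hears P1). Everyone else is unaffected.
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class NarrowcastMixer {
    public static void main(String[] args) {
        List<String> parties = List.of("P1", "P2", "P3", "P4");
        // mutes.get(sink)   = sources that sink refuses to hear
        Map<String, Set<String>> mutes = Map.of("P1", Set.of("P2"));
        // deafens.get(source) = sinks that source withholds its stream from
        Map<String, Set<String>> deafens = Map.of("P1", Set.of("P4"));

        for (String sink : parties) {
            Set<String> mix = new TreeSet<>();
            for (String source : parties) {
                if (source.equals(sink)) continue;   // no self-feed
                if (mutes.getOrDefault(sink, Set.of()).contains(source)) continue;
                if (deafens.getOrDefault(source, Set.of()).contains(sink)) continue;
                mix.add(source);                     // source joins this sink's mix
            }
            System.out.println(sink + " hears " + mix);
            // P1 hears [P3, P4]; P2 hears [P1, P3, P4];
            // P3 hears [P1, P2, P4]; P4 hears [P2, P3]
        }
    }
}
```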

Page 12: Multipresence-Enabled Mobile Spatial Audio Interfaces

Policy configuration, evaluation, media mixing, and delivery for selective privacy