
P. van den Besselaar and S. Koizumi (Eds.): Digital Cities 2003, LNCS 3081, pp. 204-216, 2005.
© Springer-Verlag Berlin Heidelberg 2005

Virtual Cities for Real-World Crisis Management

Hideyuki Nakanishi¹, Satoshi Koizumi², Toru Ishida¹,²

¹ Department of Social Informatics, Kyoto University
[email protected], [email protected]

² JST CREST Digital City Project
Kyoto 606-8501, Japan
[email protected]

Abstract. In this paper, we present an evacuation simulation system that combines a virtual city with a crisis management simulation. The system allows users to become virtual evacuees in an evacuation simulation to learn about crowd behavior. In experimental use of the system, we found synergistic effects between a bird's-eye view and a first-person view in learning emergency escape behaviors. Based on this result, we designed a novel communication system that allows a remote leader to guide escaping crowds in an emergency situation. We deployed our prototype in Kyoto Station.

1 Introduction

The increased graphical performance of PCs and the proliferation of broadband networks have accelerated R&D on virtual cities [10, 10a]. Typical applications include route guidance in an urban area, link collections of regional Web sites, and graphical chat environments. It has also become popular to represent crisis management simulations through 3D graphics; for example, an emergency situation can be simulated in a realistic 3D model of a building [1]. If we could find a way to use virtual cities for visualizing crisis management simulations, we would be able to build more useful simulations at lower development cost.

In the Digital City project [4], we are pursuing a new method of constructing crisis management simulations. In this paper, we propose a way to connect virtual cities with crisis management simulations. If this succeeds, we can extend crisis management simulations beyond 3D animated representations of emergency situations. First, a virtual city can become an evacuation simulation system for education and training: users become avatars escaping in the virtual city. Second, a virtual city can become an evacuation guidance system: a remote leader and on-site escaping people can communicate with one another through the virtual city.

2 The Evacuation Simulation System

Multi-agent simulation is a typical method for evacuation simulation [3]. Virtual cities are basically multi-user environments, but few of them support multi-agent simulation. To develop evacuation simulation systems, we need a technique that combines the multi-user and multi-agent functions. FreeWalk is a good example of such a technique. We originally developed FreeWalk to support communication [12], and for the current application we added a function to support multi-agent simulation [5, 5a]. In the next section, we describe the system architecture, which handles both multi-user and multi-agent functions.

2.1 Multi-user Multi-agent Architecture

To let users participate in a multi-agent simulation of a virtual crowd, each member of the crowd can be an 'avatar' or an 'agent'. An 'avatar' is an element of the virtual crowd that is manipulated by a user through a keyboard, mouse, or other device. An 'agent' is an element of the virtual crowd controlled by an external program connected to the command interface of FreeWalk. We use the term 'character' for either of them.

Figure 1 illustrates our multi-user multi-agent architecture and the relations among FreeWalk, users, and programs. The figure shows that FreeWalk administers only the external states of the characters. The external state is a set of visually and acoustically observable parameters such as position, posture, and utterances. The internal state is a set of invisible, causal elements such as intention, belief, knowledge, emotion, and personal characteristics. The internal state of each character is administered by either the user who manipulates it or the program that controls it. From users and programs, FreeWalk accepts requests to change the external states.
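This separation can be made concrete with a small data structure. The following is a minimal sketch in Python, not FreeWalk's actual API; every name in it (ExternalState, Character, request_change) is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ExternalState:
        """Observable parameters of a character (hypothetical names)."""
        position: tuple             # (x, y) coordinates in the virtual city
        heading: float              # facing direction in degrees
        posture: str = "standing"   # e.g. "standing", "walking", "pointing"
        utterance: str = ""         # sentence currently being spoken, if any

    class Character:
        """FreeWalk administers only the external state; the internal state
        (intention, belief, emotion, ...) lives in the user or the program."""
        def __init__(self, name: str, state: ExternalState):
            self.name = name
            self.state = state

        def request_change(self, **updates):
            # Both users (via input devices) and programs (via commands)
            # change a character only through requests like this one.
            for key, value in updates.items():
                setattr(self.state, key, value)

For example, Character("evacuee-1", ExternalState((0.0, 0.0), 90.0)).request_change(posture="walking") would express one such request, regardless of whether a user or a program issued it.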

The exclusion of internal mechanisms is an important design choice for FreeWalk. If agents and avatars can be defined separately in the simulation beforehand, it is possible to incorporate internal mechanisms such as a planning engine into the simulator [15]. In FreeWalk, however, it must be easy to switch an agent to an avatar and vice versa. Since avatars are manipulated by humans, who have their own internal mechanisms, we designed FreeWalk to exclude any internal mechanism. In Figure 1, you can see that the boundary between FreeWalk and a program is the same as that between FreeWalk and a user. There has already been a successful example of applying the same design principle to a single-user single-agent simulation [9].

Another important design choice is the distributed architecture of FreeWalk. If a multi-agent simulation does not require very heavy computation, a single computer is sufficient to run it. However, the multi-agent simulation of FreeWalk is executed in a distributed style, since the simulation must be compatible with the multi-user function of FreeWalk. Figure 1 illustrates this compatibility. Each agent can be assigned to any machine, including one being used by a user; an avatar and multiple agents can run simultaneously on the same machine. Their external states are changed by the FreeWalk process running on that machine, and the changes are transmitted between machines so that the external states of all characters are shared by all of them.
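The resulting replication scheme can be sketched as follows, assuming a simple JSON-over-UDP message format that is our illustration, not FreeWalk's actual protocol.

    import json
    import socket

    def broadcast_change(sock: socket.socket, peers, character: str, updates: dict):
        """Send a local character's external-state change to every peer machine,
        so all FreeWalk processes converge on the same shared crowd state."""
        message = json.dumps({"character": character, "updates": updates}).encode()
        for peer in peers:              # peers: list of (host, port) tuples
            sock.sendto(message, peer)  # one UDP datagram per peer

    def apply_remote_change(world: dict, message: bytes):
        """Merge a change received from another machine into the local copy
        of the external states of all characters."""
        data = json.loads(message)
        world.setdefault(data["character"], {}).update(data["updates"])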


2.2 Unified Control Mechanism of Avatars and Agents

In the multi-user multi-agent architecture, a difference between the control mechanisms of avatars and agents may result in unequal behavioral abilities. To make an evacuation simulation fair to both human and computer participants, the control mechanisms should be designed to be as similar as possible. For example, in FreeWalk, the calculation process that determines the next position is designed identically for avatars and agents.

To move avatars, users manipulate input devices to indicate the direction in which to proceed. To move agents, programs call the command to begin walking toward the indicated coordinates. The subsequent process that determines the exact next position is identical for avatars and agents. This process includes collision avoidance [13] and gait animation generation [16]. Furthermore, movements are automatically adjusted to social manners such as forming a line to go through a doorway and forming a circle to have a conversation [8]. Figure 2 shows the data flow of this unified control mechanism.
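The shared movement pipeline can be summarized in a few lines. The sketch below is an illustrative stand-in, under assumed units and a made-up personal radius; the actual models are the collision avoidance of [13], the social manners of [8], and the gait generation of [16].

    import math

    def next_position(pos, goal, speed, dt, neighbors):
        """One movement step, computed identically for avatars and agents.
        pos, goal: (x, y) in meters; neighbors: positions of nearby characters."""
        dx, dy = goal[0] - pos[0], goal[1] - pos[1]
        dist = math.hypot(dx, dy)
        if dist < 1e-6:
            return pos                   # already at the goal
        step = min(speed * dt, dist)     # advance a little each frame
        nx = pos[0] + step * dx / dist
        ny = pos[1] + step * dy / dist
        # Crude stand-in for the collision avoidance of [13]: do not step
        # inside another character's personal radius (0.5 m, assumed).
        for ox, oy in neighbors:
            if math.hypot(nx - ox, ny - oy) < 0.5:
                return pos
        return (nx, ny)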

The control mechanism of gestures is also unified. Since deictic gestures play an important role in crowd behavior, characters have the ability to use facing and pointing gestures. Both avatars and agents can equally control the angles of their faces and arms: users indicate the angles through input devices, while programs indicate the angles as arguments of the facing and pointing commands.

[Figure: three FreeWalk processes (A, B, C) connected by a computer network; each administers the external state of its local character, and the external states of all characters are shared among them. A human user manipulates his/her avatar; a program controls its agent.]
Fig. 1. Multi-user multi-agent architecture


2.3 Command Interface

The multi-user multi-agent architecture provides a command interface through which outside programs control agents. The command interface was designed to simulate social interaction. Some of the action and perception commands are listed below.

walk: walk to the coordinates.
face: face in the direction.
point: point in the direction.
speak: speak the sentence.
see: perceive seeing the character.
hear: perceive hearing the sentence.

When programs call commands, the commands are stored in a list in memory shared by FreeWalk and the outside programs. FreeWalk repeats a cycle of changing the external states of the characters and drawing them based on the changes. Before beginning the next cycle, FreeWalk loads the commands listed in the shared memory and begins running them.

[Figure: data flow of the unified control mechanism. A human user's input devices and a program's 'walk' command both feed the command interface of a virtual city; the next position is adjusted by social manners and by collision avoidance, passed to gait animation generation, and the resulting position and posture are transmitted and displayed.]
Fig. 2. Unified control mechanism

The detailed mechanism of command execution is as follows. If a loaded command is an action command that takes some period of time to complete, FreeWalk continues to change the external state across cycles. The degree of change in each cycle is determined by the time that has passed since the previous cycle. For example, the 'walk' command repeatedly draws the character a little farther forward than in the previous frame until it reaches the coordinates indicated by the caller program. For the convenience of caller programs, the command interface has two calling modes: a blocking mode and a non-blocking mode. In the blocking mode, an agent can begin the next command only after finishing the ongoing one, while in the non-blocking mode, an agent can begin the next command immediately.
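The cycle and the two calling modes can be illustrated as follows. This is a hypothetical Python rendering of the shared command memory, not FreeWalk's implementation; caller programs are assumed to run in their own threads so that the blocking mode does not stall the simulation cycle.

    import math
    import queue
    import threading

    class CommandInterface:
        """Hypothetical stand-in for FreeWalk's shared command memory.
        Caller programs run in their own threads; cycle() runs in the
        simulation/drawing thread."""
        def __init__(self):
            self.pending = queue.Queue()
            self.ongoing = []                 # actions spanning several cycles

        def walk(self, agent: dict, target, blocking: bool = False):
            done = threading.Event()
            self.pending.put((agent, target, done))
            if blocking:
                done.wait()                   # blocking mode: wait for completion
            return done                       # non-blocking mode: return at once

        def cycle(self, dt: float, speed: float = 1.2):
            """One FreeWalk cycle: load newly listed commands, then advance
            every ongoing 'walk' by an amount scaled by the elapsed time dt."""
            while not self.pending.empty():
                self.ongoing.append(self.pending.get())
            for entry in list(self.ongoing):
                agent, target, done = entry
                dx, dy = target[0] - agent["x"], target[1] - agent["y"]
                dist = math.hypot(dx, dy)
                step = min(speed * dt, dist)  # a little farther than last frame
                if dist > 1e-6:
                    agent["x"] += step * dx / dist
                    agent["y"] += step * dy / dist
                if dist <= speed * dt:        # reached the indicated coordinates
                    self.ongoing.remove(entry)
                    done.set()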

Through the command interface, FreeWalk is currently connected with the scenario description language Q [6]. This language represents the behavioral rules of each agent as a scenario. In a scenario, the simulation is divided into several scenes, and each scene has a set of rules like "if the agent perceives event A, then the agent executes action B."
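For illustration, one scene's rules might look like the following Python sketch. Q itself is not Python, and every agent method shown here is a hypothetical stand-in for the corresponding Q perception and action commands.

    def evacuation_scene(agent):
        """One scene of a scenario: each rule pairs a perception with an action,
        'if the agent perceives event A, then the agent executes action B'."""
        if agent.hears("follow me"):                   # perception ('hear')
            agent.walk(agent.position_of("leader"))    # action ('walk')
        elif agent.sees("exit"):                       # perception ('see')
            agent.walk(agent.position_of("exit"))
        else:
            agent.face(agent.direction_of("crowd"))    # default rule of the scene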

3 Evacuation Simulation Experiment

3.1 Hypothesis

Our evacuation simulation system enables people to experience crowd behavior and observe it from a first-person view (Figure 3(b)). Such experience should be valuable for learning about crowd behavior. However, a bird's-eye view (Figure 3(a)) may be more effective than a first-person view for understanding crowd behavior, as it is for navigation [2]. The two views probably have different efficacies. To compare them, and also to derive their synergistic effects, we tested each view alone and the combination of the two in both orders. We compared four groups: experiencing a first-person view (FP group); observing a bird's-eye view (BE group); experiencing a first-person view before observing a bird's-eye view (FP-BE group); and observing a bird's-eye view before experiencing a first-person view (BE-FP group). The subjects were 96 college students, divided evenly among the four groups. Six subjects participated in each simulation run (Figure 3(c)), so four simulations were conducted in each group.

3.2 Measure

A previous experiment [14] gave us a gauge to measure subjects' understanding of crowd behavior. That study demonstrated how the following two group-leading methods cause different crowd behaviors.

Follow-direction method: The leaders point their arms at the exit and shout, "The exit is over there!" to indicate the direction. They begin escaping only after all evacuees have gone out.

Follow-me method: The leaders whisper "follow me" to a few of the nearest evacuees and proceed to the exit. This behavior forms a flow toward the exit.


The simulation is based on this study [11]. At the beginning of the simulation, everyone was in the left part of the room, which was divided into left and right parts by the center wall, as shown in Figure 3(a). The four leaders had to guide the sixteen evacuees to the correct exit in the right part and prevent them from going out through the incorrect exit in the left part. In the FP simulations, six evacuees were subjects and the others were agents. In the BE simulations, all evacuees and leaders were agents.

[Figure: (a) bird's-eye view of the simulated room; (b) a first-person view; (c) six subjects participating in the simulation.]
Fig. 3. Evacuation simulation experiment

In the experiment, subjects observed and experienced the two different crowd behaviors caused by the two methods. In a questionnaire with 17 questions, subjects read descriptions of crowd behavior and had to select which of the two methods caused each behavior. The questionnaire was completed both before and after the experiment. A one-sided paired t-test was used to find significant differences between the scores of the pre-test and the post-test. A significant difference indicates that subjects learned about the nature of crowd behavior through their observation and experience.
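Such an analysis can be reproduced with a standard paired t-test, as in the following sketch; the scores are made up for illustration and are not the experiment's data.

    from scipy import stats

    # Hypothetical data: for one question, 1 = correct, 0 = incorrect, for the
    # 24 subjects of one group (hence df = 23), before and after the simulation.
    pre  = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1]
    post = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1]

    # One-sided paired t-test: did scores improve from pre-test to post-test?
    result = stats.ttest_rel(post, pre, alternative="greater")
    print(f"t(23) = {result.statistic:.2f}, one-sided p = {result.pvalue:.4f}")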

3.3 Results

Table 1 summarizes the results of the t-test for nine questions. Since no group could correctly answer the other eight questions, they are omitted here. The results seem to indicate that bird's-eye observation was necessary to grasp the crowd behavior: the FP group could not correctly answer questions 3 to 9, which concern the evacuees' behavior. However, first-person experience is not worthless. It is interesting that the BE-FP group did learn the behaviors described in questions 6 and 7, while the BE and FP-BE groups did not. These questions seem to be related to the dense nature of crowd behavior. This result implies that background knowledge of the overall behavior enables subjects to interpret gathering behavior based on their first-person experiences of density.

We conclude that a bird’s-eye view is effective in understanding the spatial movements of crowds, and that this understanding can be increased by first-person experiences.

Table 1. Summary of the results of the questionnaire (one-sided paired t-test)

No.  Question (the correct answer is the follow-me method)          FP     BE     FP-BE  BE-FP
 1   Leaders are the first to escape.                               ***    *      *      ***
 2   Leaders do not observe evacuees.                               **     ***    ***    ***
 3   Leaders escape like evacuees.                                         *      *      **
 4   One's escape behavior is caused by others' escape behavior.           *      **     **
 5   Nobody prevents evacuees from going to the incorrect exit.            ***    ***    ***
 6   Evacuees follow other evacuees.                                                     *
 7   Evacuees form a group.                                                              *
 8   Leaders and evacuees escape together.                                 *             **
 9   Evacuees try to behave the same as other evacuees.                           *

*p<.05, **p<.01, ***p<.001 (df=23)


4 The Evacuation Guidance System

The result described above shows that communication between a person who can observe a bird's-eye view of an emergency situation and another person who is inside the situation is meaningful for grasping the situation. We therefore designed a communication interface that allows a remote leader to guide escaping crowds in an emergency situation.

Recent advances in wireless communication and sensor devices will allow virtual cities to simulate the current state of real cities synchronously. Such synchronous virtual cities can be used to observe what is happening in a real-world emergency situation. We developed an evacuation guidance system, a synchronized virtual Kyoto Station, to connect the station staff and the passengers. The staff can watch the behavior of the real passengers represented in the virtual station and guide them.

4.1 The Transcendent Communication Interface

The evacuation guidance system provides a bird's-eye view that is appropriate for observational tasks. To explain our system, we propose a new communication style called transcendent communication. A first-person view is immanent, since the user is supposed to exist inside the virtual city as an avatar. Conversely, a bird's-eye view is transcendent, since the user is supposed to look at the virtual city from outside. The evacuation guidance system is a user interface for transcendent communication: a seamless combination of a visualization interface for observing the real world and a pointing interface for choosing people to talk to.

A sensor that determines the locations of people is necessary to implement a transcendent communication interface. Currently, the most widely available tools for this purpose are a map, a mouse, and GPS. A transcendent communication interface that combines them can work as follows: people's telephone numbers and locations are transmitted to the interface and represented as icons on the map; when the user clicks one of the icons, the interface establishes a vocal connection between the user's microphone and the mobile phone of the clicked person. To implement the evacuation guidance system, we used a virtual city instead of a map and a vision sensor network instead of GPS [7].
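This generic map-based interface can be sketched as follows; the flow (icons from transmitted locations, click, voice connection) follows the description above, while every name in the code is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class PersonIcon:
        phone_number: str    # transmitted together with the person's location
        x: float             # map coordinates from GPS (or, in our deployed
        y: float             # system, from the vision sensor network)

    def on_click(icons, cx: float, cy: float, radius: float = 10.0):
        """When the user clicks near an icon on the map, open a voice channel
        between the user's microphone and that person's mobile phone."""
        for icon in icons:
            if (icon.x - cx) ** 2 + (icon.y - cy) ** 2 <= radius ** 2:
                open_voice_channel(icon.phone_number)
                break

    def open_voice_channel(number: str):
        # Placeholder: in a real system this would bridge the audio devices.
        print(f"voice channel opened to {number}")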

Figure 4 is a photo of the evacuation guidance system. In this figure, a remote leader looks over the virtual Kyoto Station and freely chooses people to talk to. In the virtual station, virtual crowds walk along the trajectory data continuously transmitted from the vision sensor network of Kyoto Station. The bird's-eye view of the virtual station is displayed on a large-scale touch screen so that the leader can grasp the entire crowd situation. When the leader touches people displayed on the screen, the system establishes voice channels between the leader's microphone and the mobile phones of those people.

The evacuation guidance system enables the leader to guide several groups of crowds separately, which is nearly impossible with conventional announcement facilities. Our system brings a distributed fashion to evacuation guidance announcements.


Fig. 4. Evacuation guidance system

Fig. 5. Vision sensor: (a) the sensor installed in the station; (b) an image taken by the sensor


4.2 The Vision Sensor Network

To synchronize the virtual Kyoto Station with the real Kyoto Station, we installed a vision sensor network. Figure 5(a) is a picture of a vision sensor installed in the station. In this picture, you can see a CCD camera and a reflector with a special shape. This reflector is necessary to cover a wide area with a small number of sensors. If we could expand the field of view of each camera, we could reduce the number of cameras. However, a widened field of view causes barrel distortion in the images taken by conventional cameras. The reflector of our vision sensor eliminates such distortion: its shape is designed so that a plane perpendicular to the optical axis of the camera is projected perspectively onto the camera plane. This optical device makes it possible to have a large field of view without distortion. Figure 5(b) is an image taken by one of our vision sensors attached to the ceiling of the station platform.

[Figure: positions of the vision sensors in the concourse area and on the platform.]
Fig. 6. Installed positions

We installed 12 sensors in the concourse area and 16 sensors on the platform. The small black circles in Figure 6 show the positions of the sensors. Figure 7 shows how these sensors are installed at the station. The vision sensor network can track passengers between the platform and the ticket gate.

[Figure: photographs of the vision sensors installed in the concourse area and on the platform.]
Fig. 7. Installation of vision sensors

The facilities installed at the station include 28 vision sensors, 7 quad processors, 7 PCs for image processing, and one PC for trajectory detection. Images are processed as follows. First, a quad processor assembles the images taken by four sensors into one video image and sends it to an image-processing PC. Next, that PC extracts the regions of moving objects with a background subtraction technique and sends the results to the trajectory-detection PC. Finally, the trajectory-detection PC detects the positions of the moving objects based on geographical knowledge such as the positions of the cameras, the occlusion edges in the cameras' views, and the boundaries of the areas.
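The extraction stage can be illustrated with a common background subtraction routine; the sketch below is a plausible reconstruction using OpenCV, not the deployed code, and its thresholds are assumptions.

    import cv2

    def detect_moving_regions(frame, subtractor, min_area: int = 200):
        """Extract moving-object regions from one assembled camera image by
        background subtraction, returning blob centroids in pixel coordinates."""
        mask = subtractor.apply(frame)              # foreground/background mask
        mask = cv2.medianBlur(mask, 5)              # suppress pixel noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for contour in contours:
            if cv2.contourArea(contour) < min_area: # drop tiny blobs (assumed)
                continue
            m = cv2.moments(contour)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids

    subtractor = cv2.createBackgroundSubtractorMOG2()  # learned background model
    # The trajectory-detection stage would then map these pixel centroids to
    # floor positions using the camera positions, occlusion edges, and area
    # boundaries described above.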

5 Conclusions

In this paper, we presented a multi-user multi-agent architecture. The architecture enables a virtual city to function as a participatory crisis management simulation. We found that the bird's-eye view of the simulation provided an overall understanding of crowd behavior, whereas the first-person view provided a more in-depth understanding. This result showed the value of the transcendent communication interface, which enables a virtual city to serve as an evacuation guidance system. Future work will explore the potential of our approach of using virtual cities for crisis management in real-world cities.

Acknowledgements

We thank the Municipal Transportation Bureau and the General Planning Bureau of Kyoto City for their cooperation. Hiroshi Ishiguro advised us on the deployment of the vision sensor network. We received a lot of support in the construction of the simulation from Toshio Sugiman, Shigeyuki Okazaki, and Ken Tsutsuguchi. Thanks to Reiko Hishiyama, Hideaki Ito, Tomoyuki Kawasoe, Toyokazu Itakura, CRC Solutions, Mathematical Systems, and CAD Center for their efforts in the development of the evacuation simulation and guidance systems.

References

1. CAD center. Virtual Reality Simulation Program for Architectural Performances (VR-SPAP). http://www.cadcenter.co.jp/en/webgallery/webgallery_vr5.html

2. S. Fukatsu, Y. Kitamura, T. Masaki, and F. Kishino. Intuitive Control of "Bird's Eye" Overview Images for Navigation in an Enormous Virtual Environment. ACM Symposium on Virtual Reality Software and Technology (VRST98), pp. 67-76, 1998.

3. D. Helbing, I.J. Farkas, and T. Vicsek. Simulating Dynamical Features of Escape Panic. Nature, Vol. 407, No. 6803, pp. 487-490, 2000.

4. T. Ishida, H. Ishiguro, and H. Nakanishi. Connecting Digital and Physical Cities. M. Tanabe, P. van den Besselaar, and T. Ishida, Eds., Digital Cities II. Lecture Notes in Computer Science 2362, Springer-Verlag, pp. 246-256, 2002.


5. T. Ishida. Digital City Kyoto: Social Information Infrastructure for Everyday Life. Communications of the ACM (CACM), Vol. 45, No. 7, pp. 76-81, 2002.

5a. T. Ishida. Activities and Technologies in Digital City Kyoto. P. van den Besselaar and S. Koizumi, Eds., Digital Cities 3: Information Technologies for Social Capital. Lecture Notes in Computer Science 3081, Springer-Verlag, pp. 162-183, 2005.

6. T. Ishida. Q: A Scenario Description Language for Interactive Agents. IEEE Computer, Vol. 35, No. 11, pp. 54-59, 2002.

7. P.H. Kelly, A. Katkere, D.Y. Kuramura, S. Moezzi, and S. Chatterjee. An Architecture for Multiple Perspective Interactive Video, International Conference on Multimedia, (Multimedia95), pp. 201-212, 1995.

8. A. Kendon. Spatial Organization in Social Encounters: the F-formation System. A. Kendon, Ed., Conducting Interaction: Patterns of Behavior in Focused Encounters, Cambridge University Press, pp. 209-237, 1990.

9. J.E. Laird. It Knows What You’re Going To Do: Adding Anticipation to a Quakebot. International Conference on Autonomous Agents (AAMAS2001), pp. 385-392, 2001.

10. R. Linturi, M. Koivunen, and J. Sulkanen. Helsinki Arena 2000 - Augmenting a Real City to a Virtual One. T. Ishida and K. Isbister, Eds., Digital Cities: Technologies, Experiences, and Future Perspectives. Lecture Notes in Computer Science 1765, Springer-Verlag, pp. 83-96, 2000.

10a. R. Linturi and T. Simula. Virtual Helsinki. P. van den Besselaar and S. Koizumi, Eds., Digital Cities 3: Information Technologies for Social Capital. Lecture Notes in Computer Science 3081, Springer-Verlag, pp. 110-137, 2005.

11. Y. Murakami, T. Ishida, T. Kawasoe, and R. Hishiyama. Scenario Description for Multi-Agent Simulation. International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS2003), pp. 369-376, 2003.

12. H. Nakanishi, C. Yoshida, T. Nishimura and T. Ishida. FreeWalk: A 3D Virtual Space for Casual Meetings. IEEE Multimedia, Vol.6, No.2, pp. 20-28, 1999.

13. S. Okazaki and S. Matsushita. A Study of Simulation Model for Pedestrian Movement with Evacuation and Queuing. International Conference on Engineering for Crowd Safety, pp. 271-280, 1993.

14. T. Sugiman and J. Misumi. Development of a New Evacuation Method for Emergencies: Control of Collective Behavior by Emergent Small Groups. Journal of Applied Psychology, Vol. 73, No. 1, pp. 3-10, 1988.

15. W. Swartout, R. Hill, J. Gratch, W.L. Johnson, C. Kyriakakis, K. Labore, R. Lindheim, S. Marsella, D. Miraglia, B. Moore, J. Morie, J. Rickel, M. Thiebaux, L. Tuch, R. Whitney and J. Douglas. Toward the Holodeck: Integrating Graphics, Sound, Character and Story. International Conference on Autonomous Agents (AAMAS2001), pp. 409-416, 2001.

16. K. Tsutsuguchi, S. Shimada, Y. Suenaga, N. Sonehara, and S. Ohtsuka. Human Walking Animation based on Foot Reaction Force in the Three-dimensional Virtual World. Journal of Visualization and Computer Animation, Vol. 11, No. 1, pp. 3-16, 2000.