ARTICLE IN PRESS
0097-8493/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
doi:10.1016/j.cag.2005.09.002
Corresponding author. Tel.: +44 1642 342 657; fax: +44 1642 230 527.
E-mail address: [email protected] (M. Cavazza).
Computers & Graphics 29 (2005) 852–861
www.elsevier.com/locate/cag
Intelligent virtual environments for virtual reality art
Marc Cavazza (a,*), Jean-Luc Lugrin (a), Simon Hartley (a), Marc Le Renard (b), Alok Nandi (c), Jeffrey Jacobson (d), Sean Crooks (a)
(a) School of Computing, University of Teesside, Middlesbrough, TS1 3BA, UK
(b) CLARTE, 4 Rue de l'Ermitage, 53000 Laval, France
(c) Commediastra, 182, av. W. Churchill, 1180 Brussels, Belgium
(d) Department of Information Sciences, University of Pittsburgh, 135 North Bellefield, PA 15260, USA
Abstract
The development of virtual reality (VR) art installations is faced with considerable difficulties, especially when one
wishes to explore complex notions related to user interaction. We describe the development of a VR platform, which
supports the development of such installations, from an art+science perspective. The system is based on a CAVETM-
like immersive display using a game engine to support visualisation and interaction, which has been adapted for
stereoscopic visualisation and real-time tracking. In addition, some architectural elements of game engines, such as their
reliance on event-based systems have been used to support the principled definition of alternative laws of Physics. We
illustrate this research through the development of a fully implemented artistic brief that explores the notion of causality
in a virtual environment. After describing the hardware architecture supporting immersive visualisation we show how
causality can be redefined using artificial intelligence technologies inspired from action representation in planning and
how this symbolic definition of behaviour can support new forms of user experience in VR.
© 2005 Elsevier Ltd. All rights reserved.
Keywords: Virtual reality art; Artificial intelligence; Causal perception; Immersive displays
1. Introduction and objectives
Virtual reality (VR) art has emerged in the last decade
as an unexpected application for high-end VR systems
as well as a new direction for digital arts [1,2].
However, the development of VR art installations is
an extremely complex process. Leading VR artists have
often benefited from a supportive technical environment
for the development of their major installations. Some of
them were able to hire teams of systems developers,
while others were affiliated to academic institutions,
which brought together artists and scientists or engi-
neers. The level of complexity and cost of such
development is certainly a limitation to the development
of VR art. As such there is a rationale for new tools that
would facilitate the development of VR art installations.
However, the strategy for creating such tools has to be
carefully considered, as one can only feel bemused at
how diverse the relation to technology is among various
artists. Some advocate a strong technical involvement
and even participation in programming tasks while
others tend to follow a production model in which
technical developments are subordinated to the artistic
objectives. This makes the prospect of generic tools
rather unrealistic. Another approach consists in observing that artistic concepts often revisit fundamental
aspects of interactivity, or question essential concepts
such as reality, physical experience or even the perceived
nature of life. In other words, as these interrogations
also happen to be scientific ones, they open the way to
what has been recently described as the art+science
approach, in which VR artists have otherwise played a
prominent role. In this paper, we describe such research,
whose aim is to facilitate the development of VR art
installations in an art+science context [3].
This is why, rather than simply developing a ‘‘toolkit’’
to lower the accessibility threshold of VR art technol-
ogy, we propose a system where artistic and scientific
simulation can meet at the level of conceptual repre-
sentations, while still generating technical output in the
form of implemented VR installations.
2. Intelligent virtual environment: knowledge layer and
programming principles
The notion of ‘‘behaviour’’ of a virtual environment
normally encompasses all reactions of the environment
to the user’s physical intervention. This in turn
corresponds to the physical processes triggered by the
user, when for instance s/he grasps, then drops an
object. More often, it will consist of all device behaviours that are ultimately not derived from physical
simulations (for obvious reasons related to optimal
levels of description), but scripted within the system’s
implementation. In both cases, such behaviour is
encoded procedurally and the concepts underlying
behaviours (e.g., patterns of motion, physical concepts,
etc.) are not explicitly represented other than through
variables embedded in equations or scripts. VR art is
often concerned with the creation of virtual worlds that
exhibit idiosyncratic behaviours, which might violate the
traditional laws of Physics, such behaviours often being
described in the installation briefs in abstract or
metaphorical terms only. This makes it rather tedious
to implement non-standard behaviours directly in terms
of the low-level primitives (physical or procedural) that
animate the world objects. This process could be
facilitated if behaviours could be described at a more
abstract, conceptual level, in the VR system itself. The
creation of alternative behaviours could take place
directly in this representation layer, which would also
support iterative explorations of initial ideas. The use of
an AI layer to define the behaviour of a virtual
environment implements the notion of an intelligent
virtual environment [4]. This experimental technology
should bring numerous benefits to the development of
VR art installations: it supports the redefinition of non-
realistic and alternative behaviour from first principles,
it allows rapid prototyping and experimentation and,
finally it is well adapted to an art+science approach as it
explicitly represents those concepts that are the object of
artistic or scientific experimentation.
3. The illustrative briefs
To illustrate the technical presentation we will use
examples from a fully implemented artistic installation,
‘‘Ego.geo.Graphies’’ by Alok Nandi. This brief is
situated in an imaginary world governed by alternative
laws of Physics [5]. The Ego.geo.Graphies brief explores interaction and navigation in a non-anthropomorphic world, blurring the boundaries between
organic and inorganic. Its installation involves an
immersive VR world with which the user can interact.
The virtual world comprises a landscape in which the
user can navigate, populated by autonomous entities
(floating spheres), which are actually all part of the same
organism. In this world, two sorts of interaction take
place: those involving elements of the world (spheres and
landscape) and those involving the user. The first type of
interaction is essentially mediated by collisions and will
be perceived in terms of causality. The second is based
on navigation and position and will be sensed by the
world in terms of ‘‘empathy’’, as a high-level, emotional
translation of the user exploration.
Through the staging of the Ego.geo.Graphies installa-
tion, we are interested in exploring aspects related to
predictability/non-predictability and hence some kind of
narrative accessibility, from the perspective of user
interaction. On one hand, this brief is an exploration of
the notion of context through the variable behaviour of the
environment which itself responds to the user involvement.
But on the other hand, it constitutes an exploration of
causality. As such, it requires mechanisms varying the
physical effects of collisions (bouncing, merging, bursting,
exploding, altering neighbouring objects, etc.), taking into
account the semantics of the environment.
This also implies that we explore how the user can be
affected by causality. The spontaneous movements of
the spheres focus the user attention, within the
constraints of his/her visual and physical exploration
of the landscape. The user will perceive consequences of
spheres colliding with each other, which are equivalent
to an emotional state of the world (as these multiple
spheres still constitute one single organism) responding
to perceived user empathy.
As a consequence, a dialogue should emerge from this
situation: user exploration will affect world behaviour
through levels of perceived empathy, and in return the
kind of observed causality will influence user exploration
and navigation.
4. System overview
The system presents itself as an immersive installation
supporting alternative worlds with which the user can
interact and, through this interaction, experience the
nature of the fantasy worlds created by the artistic brief.
Fig. 1. Immersive visualisation in the SAS CubeTM.
Fig. 2. The SAS CubeTM installation.
The choice of an immersive hardware platform was
dictated by the necessity to match state-of-the-art VR
installations. The vast majority of them are based on
CAVETM-like systems [6], which are multi-screen im-
mersive projection displays. The advantages of CA-
VETM-like systems for VR art are well established: they
constitute an optimal compromise between user immer-
sion in visual content and the ability for physical
navigation (although in a limited space) and interaction.
In addition, CAVETM-based installations can be explored
by a small audience of up to four spectators (Fig. 1).
The software architecture implements the notion of an
intelligent virtual environment, in which alternative
reality can be defined through a symbolic description
of the virtual world’s behaviour. This software archi-
tecture is based on an integration layer, which consists in
an event-based system, relating the visualisation engine
to the behavioural layer. We use a state-of-the-art game
engine, Unreal Tournament 2003TM (UT), as a visua-
lisation engine. Game engines provide sophisticated
visualisation features and most importantly constitute a
software development environment in which further
components can be integrated. This aspect explains why game engines are increasingly used in VR research [7].
The behavioural layer is in turn composed of two
modules, one for alternative Physics (using qualitative
Physics) and another for artificial causality. In this paper
we shall concentrate on the latter component. Through
this event-based system, real-time interaction with the
visualisation engine can trigger alternative behaviours
calculated by the intelligent virtual environment.
5. The VR architecture: stereoscopic visualisation in the SAS CubeTM
The immersive display we have used for this research
is known as the SAS CubeTM (Fig. 2) and is a four-sided
CAVETM-like projection system in which the front, left,
right and floor sides (each 3m wide) are used as
projection screens, receiving a back-projected image
produced by four BarcoTM projectors.
This immersive display supports the use of a game
engine as a visualisation engine through specific soft-
ware known as CaveUTTM [8]. A multi-screen display
based on CaveUTTM requires a server computer
connected by a standard LAN to a number of client
computers, at least one for each screen in the display.
Stereo visualisation is an essential feature of immer-
sive displays and CaveUTTM supports stereographic
display by using two computers per screen, one to render
the left eye view and one to render the right eye view,
with an average frame rate of 60 frames/s per eye in most
experiments reported here. The camera view can be
offset from the viewer’s default configuration by a set
value equal to half the inter-pupillary distance. Active
stereo requires a single stereographic projector that will
alternate between the left and right eye views at 120
frames per second. The user wears ''shutter glasses'' in which each lens alternates between black and clear, also
at 120 frames per second. The glasses switch in time with
the display, and the result is that each eye gets the view it
is supposed to at 60 fps—the left view for the left eye and
the right view for the right eye. All the screens in the
composite display must also switch view at exactly the
same time, a desirable state called ‘‘genlock’’.
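The per-eye camera offset described above can be sketched as follows. This is our own minimal reconstruction, not the CaveUTTM source: each eye's camera sits at the tracked head position, displaced by half the inter-pupillary distance (IPD) along the viewer's right-hand axis; the IPD value and function names are assumptions.

```python
# Hypothetical sketch (not CaveUT code): offset each eye's camera by half
# the inter-pupillary distance along the viewer's right-hand axis.

IPD = 0.065  # assumed average inter-pupillary distance, in metres

def eye_positions(head_pos, right_axis, ipd=IPD):
    """Return (left_eye, right_eye) camera positions for stereo rendering."""
    half = ipd / 2.0
    left = tuple(p - half * r for p, r in zip(head_pos, right_axis))
    right = tuple(p + half * r for p, r in zip(head_pos, right_axis))
    return left, right

# Head at 1.7 m height, looking down -z, so the right axis is +x.
left, right = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
```

Each client would render its view from one of these two positions, with the DVG cards combining the resulting signals for the stereographic projector.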
The CaveUTTM installation in the SAS CubeTM
platform uses two computers for each screen, one for
each eye view, and uses the DVG (video) cards in their
ORADTM (PC) cluster to mix the two video signals and
send the combined signal to a single stereographic
projector. The DVG cards also handle the genlock
synchronisation across all screens of the composite
display. The overall hardware/software architecture
supporting CaveUTTM in the SAS CubeTM is depicted
in Fig. 3.
CaveUTTM supports real-time tracking in physical
space, using the IntersenseTM IS900 system or any
similar devices. Tracking the player’s head allows
CaveUTTM to generate a stable view of the virtual
world, while the player is free to move around inside the
Fig. 3. Stereoscopic visualisation in the SAS CubeTM with CaveUTTM.
display (which has the size of a traditional CAVETM).
From a system integration perspective, CaveUTTM uses
another freeware package, Virtual Reality Peripheral
Network (VRPN) to handle input from all control
peripherals such as joysticks, buttons, gamepads and the
tracking system itself. All controllers are physically
attached to the server machine, and data from the
peripherals are collected by the VRPN server, which
runs in parallel to the UT game server. The VRPN
server converts data from the control peripherals into a
generic normalised form and sends it to the CaveUTTM
code in the UT game server, via a UDP port. The
modified UT game server uses this information to
update the user’s location in the virtual world from the
head tracker and to process commands from the other
control peripherals. The VRPN server also broadcasts
the user’s new location to each one of the UT clients,
and the information is received by a VRPN client. Then,
the VRPN client sends the tracking information via
another UDP port to the VRGL code attached to the
UT client. VRGL uses this information to adjust
the perspective correction, in real-time, to preserve the
perspective depth illusion. The overall result is that the
user's view into the virtual world looks stable, and the correspondence between the virtual world and the real one is maintained. (VRPN was released by the Department of Computer Science at the University of North Carolina at Chapel Hill.)
6. Software architecture: the event interception system
The choice made for the software architecture also
reflects our philosophy of relating technical implementa-
tion to high-level concepts of interactivity. This is why
the software architecture, which integrates the visualisa-
tion components with those in charge of interactivity
and world behaviour, is based on the notion of ‘‘event’’
as a basic unit of interaction. The role of events as
formalism for VR is well established [9] and, in addition,
it plays a crucial role in the implementation of
interactivity in game engines. Event systems are generally developed on top of graphics engine primitives that detect collisions between objects or between objects
and graphic volumes. In particular, and this aspect is
central to our own use of the concept of event, events
tend to be used to discretise behaviours taking place in
the virtual world. This can be illustrated with a simple example: the behaviour of moving objects is dictated by Physics until they interact (e.g. collide) with other
objects or surfaces. Upon collision, behaviour ceases
to be determined by physical calculations (such as
continuous mechanics); instead a ‘‘collision’’ event is
created using impact velocities as input parameters. The
pre-calculated outcome of this collision is directly
associated to the event and triggered upon event
activation. This approach saves considerable computing
power in current game engines. More importantly, the
mechanisms behind event systems constitute an ideal
API for the integration of the kind of behavioural layers
we have developed. In this context, the overall software
architecture is represented in Fig. 4.
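The discretisation mechanism just described can be caricatured in a few lines. This is an illustrative sketch under our own names and thresholds, not UT code: continuous Physics drives an object until a collision fires a discrete event, parameterised by impact velocity, whose pre-calculated outcome is then triggered directly.

```python
# Illustrative sketch of event-based discretisation (names and threshold are
# ours): upon collision, a discrete event carrying the impact speed replaces
# physical calculation, and a pre-associated outcome is triggered.

def outcome_for(event):
    # The outcome is keyed on impact speed rather than simulated continuously.
    if event["speed"] > 5.0:
        return "shatter"
    return "bounce"

def on_collision(obj, speed):
    event = {"type": "collision", "object": obj, "speed": speed}
    return outcome_for(event)

print(on_collision("sphere1", 7.2))  # a high-speed impact
```

It is precisely this association point between event and outcome that the behavioural layer intercepts.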
In standard event-based virtual environments, beha-
viours tend to be encoded directly from low-level events.
Event systems are generally derived from the low-level
graphical event systems for collision detection (between
objects, between objects and volumes). Here, the native UT event system provides a large collection of events (called native events), such as Bump( ), Landed( ), HitWall( ), EncroachedBy( ), etc. For each object class and/or state, ''event-effect'' relations are embedded in native event procedures (a call-back system) associated with one or more effect procedures. When a native event is detected by the visualisation engine, its effect procedures are immediately instantiated and triggered, generating animations or object movements as a response. Moreover, to obtain realistic animations, most virtual environments are coupled to a powerful Physics/particle engine, as is the case with the KarmaTM engine used by UT.
Such an ad-hoc definition of causality (cause-effect
association) in a virtual environment raises a certain
number of problems. Firstly, as the ''event-effect'' relations are dispersed in the code, their identification requires expertise in both the environment and its platform
(visualisation/Physics engines). Secondly, such hard-
coded associations cannot support dynamic alterations
of causality. As a result, in its default implementation,
causality is static, basic and hardly accessible. The Event
Interception System (EIS), which we have developed on top of the UT event system, corrects this limitation of the native formalism. In addition, it provides a complete
interface between the event formalism, where causal
relations are expressed through context event (CE)
structures, and the UT visualisation/Physics engines,
which is central to our software architecture. In our
system, native low-level engine events are not directly
linked to effect functions. The EIS module processes
occurrences of the game engine’s low-level native events,
to produce intermediate-level events, such as Hit( ),
Push( ), Touch( ), Press( ), Enter( ), Exit( ), etc. For instance, the magnitude of a colliding object's momentum can be used to instantiate a Hit(?obj, ?surface) event from the system-level Bump(?obj, ?surface) event. Basic events constitute a base from
which the derivation of higher-level events is possible.
On the other hand, CEs provide a proper semantic
description of events, which clearly identifies actions and
their consequences and therefore supports the modifica-
tion of such actions to generate alternative effects. Such
high-level events explicitly encode default object beha-
viours in the environment. This module constitutes one
of the most innovative aspects of our approach, in which
Fig. 4. The software architecture for an intelligent virtual environment.
an ontology for actions serves as a representation layer
for the virtual world.
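The promotion of native events to intermediate-level events can be sketched as follows. This is our own minimal reconstruction of the EIS idea, not the actual system: the event dictionary shape and the momentum threshold are assumptions.

```python
# Hedged sketch of the EIS principle (names and threshold are assumptions):
# a low-level native event such as Bump(?obj, ?surface) is promoted to an
# intermediate-level Hit( ) event when the colliding object's momentum
# magnitude exceeds a threshold, and to a gentler Touch( ) event otherwise.

HIT_MOMENTUM_THRESHOLD = 10.0  # assumed tuning constant

def intercept(native_event):
    """Map a native engine event to an intermediate-level event."""
    if native_event["name"] == "Bump":
        momentum = native_event["mass"] * native_event["speed"]
        if momentum >= HIT_MOMENTUM_THRESHOLD:
            return {"name": "Hit", "args": native_event["args"]}
        return {"name": "Touch", "args": native_event["args"]}
    return native_event  # pass through events the EIS does not reinterpret

bump = {"name": "Bump", "args": ("sphere1", "wall"), "mass": 2.0, "speed": 8.0}
print(intercept(bump)["name"])  # momentum 16.0 exceeds the threshold
```

The intermediate events produced this way are what populate the trigger fields of the context event structures described next.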
Typically, a CE is represented using an action
formalism inspired from those serving similar functions
in planning and robotics. These representations originally describe operators responsible for transforming states of affairs in the world. They tend to be organised
around pre-conditions, i.e. conditions that should be
satisfied for them to take place and effects or post-
conditions, i.e. those world changes induced by their
application. Our CE formalism has been inspired from
the SIPE planning representation [10], which clearly
distinguishes the triggering conditions of an action and
its effects. Fig. 5 shows an example of CE formalism
used in the ‘‘Ego.geo.Graphies’’ artistic installation. The
triggering conditions correspond to basic events detected
by the EIS (for instance, the collision between two
spheres), while the effects field contains procedures
corresponding to the consequences of the collision. The set of CEs defines an ontology of possible events in the virtual world. This ontology is authored as part of the artistic installation being developed. The CE
represented corresponds to the default consequence of
a collision between two spheres, which consists of these
spheres merging. Dynamic modifications of this CE can
produce new consequences and create new forms of
causality.
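A CE of this kind can be rendered as a small data structure. The field names below are ours, loosely following the trigger/conditions/effects organisation described in the text, not the SIPE or ALTERNE syntax.

```python
# Minimal reconstruction of a context event (CE); field names are ours.
# The default consequence of a sphere-sphere collision is merging.

sphere_collision_ce = {
    "name": "SphereCollision",
    "trigger": "Hit(?sphere1, ?sphere2)",
    "conditions": ["is_sphere(?sphere1)", "is_sphere(?sphere2)"],
    "effects": ["merge(?sphere1, ?sphere2)"],  # default consequence
}

def apply_alternative(ce, new_effects):
    """Rewrite the effect field to create a new form of causality."""
    altered = dict(ce)  # leave the authored default CE untouched
    altered["effects"] = list(new_effects)
    return altered

bursting = apply_alternative(sphere_collision_ce, ["explode(?sphere1)"])
```

Because the default CE is copied rather than mutated, the authored ontology remains the stable baseline from which alternative causality is derived.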
Fig. 5. The recognition of high-level actions from low-level
system events.
Fig. 5 also demonstrates the instantiation of a CE
from the stream of low-level events intercepted by the
system. The collision between two spheres is recognised
as a Hit(?sphere1, ?sphere2) event (step 1). Since this low-level event is part of the trigger field of the CE, it can prompt the CE's instantiation by the system, provided the objects involved satisfy the conditions defined in the CE
(step 2). Upon recognition of the CE, the control of the
objects’ behaviours depends on the effect field. The
default effects can be applied to the colliding spheres
(step 3). Alternatively, these effects can be modified
during CE instantiation, which will result in a new
cause-effect association, perceived as alternative caus-
ality by the user.
7. The techniques of alternative reality
Our concept of alternative reality, which is at the
heart of the ALTERNE Project [11], encompasses all
descriptions of fantasy worlds in which the elements of
behaviour underlying alternative laws of Physics or
imaginary life forms have been described from first
principles, using precisely those conceptual representa-
tions common to art+science. In the search for
techniques supporting the implementation of alternative
reality, we have focused our effort on two aspects. The
first one is the use of qualitative reasoning, which can
generate interactive behaviours from the description of
qualitative laws as generic principles. While qualitative
physics in itself can address the consequences of user
interaction in an alternative world, we have indepen-
dently identified the perception of causality [12] as an
important element of user experience, already the target
of contemporary art experiments, despite the difficulties attached to its exploration [13]. This is why we have
developed, independently of the qualitative physics
system, a causal engine, whose goal is to support specific
installations in which the user can be faced with causal
illusions. Both systems are integrated in the overall
event-based architecture described above and operate interactively in real time.
The concept of causality is central to our under-
standing of the physical world and this is why it has been
for many years a topic of discussion for physicists,
psychologists and philosophers alike. Because we use
causality to make sense of the conjunction of events
taking place in our environments, any system that could
create an illusion of causality would be a powerful tool
for the creation of alternative realities.
This environment specifically supports the elicitation
of causal perception by supporting the creation of event
co-occurrences, in real time, in the virtual world. These
co-occurrences can be generated from high-level princi-
ples, such as analogies between object physical proper-
ties. The original idea behind this research was that such
high-level principles could be used to implement the
artistic intentions described in artistic briefs.
The technical approach for this ‘‘artificial causality’’
can be described as follows: as the behaviour of objects in a virtual environment is under the control of event systems, we can use these event systems to associate
arbitrary outcomes to a given action. This in turn
generates event co-occurrences that would be perceived
as causally related by human subjects. In that sense,
artificial causality is potentially a powerful tool to create
VR experiences, including specific illusions.
The causal engine operates continuously through a
sampling cycle, during which it receives low-level events
and parses them into candidate action representations.
The essential point is that these action representations
are ‘‘frozen’’ during any given cycle, i.e. their conse-
quences are not enacted in the virtual world. During this
period of time (unnoticed by the user), the engine can
substitute new outcomes for the action, prior to its
reactivation. This substitution is performed by ''macro-operators'' (MOps): knowledge structures which, applied to a CE representation, modify the effect part of that CE so as to generate a new outcome for the frozen action.
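The freeze/substitute/reactivate cycle can be sketched as below. This is our paraphrase of the mechanism, not the engine's code; the MOp shown and the action shape are hypothetical.

```python
# Sketch of the causal engine's sampling cycle (our paraphrase): recognised
# actions are held "frozen"; a macro-operator (MOp) may rewrite their effects
# before reactivation enacts them in the world.

def swap_effect_mop(action, alternative):
    """A hypothetical MOp replacing an action's default effect."""
    action["effects"] = [alternative]
    return action

def sampling_cycle(frozen_actions, mop=None, alternative=None):
    enacted = []
    for action in frozen_actions:
        if mop is not None:                 # substitution, while frozen
            action = mop(action, alternative)
        enacted.extend(action["effects"])   # reactivation enacts the effects
    return enacted

frozen = [{"name": "SphereCollision", "effects": ["merge"]}]
print(sampling_cycle(frozen, swap_effect_mop, "bounce"))  # ['bounce']
```

When no MOp applies, the cycle simply enacts the default effects, so the world behaves exactly as authored.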
We can now illustrate this, more specifically, through
several examples involving collisions between spheres in
the Ego.geo.Graphies brief. It can be noted that
(although the brief was in no way influenced by this
fact) collision between moving objects is the best-studied
Fig. 6. Alternative effects inducing causal perception. The default effect consists of colliding spheres merging. Various levels of disruption of causality correspond to alternative effects.
phenomenon in causal perception. In the world of
Ego.geo.Graphies, sphere-shaped object-actors may
collide with one another or with elements of the
landscape. The effects of a collision between spheres are normally expected to be felt on the spheres themselves
and the nature of the effect will depend on visual cues as
to their physical properties (i.e. soft/hard, deformable,
etc.), which can be conveyed to some extent by their
textures and animations. Because the spheres are all part
of the same organism, when they collide the basic effect
should be that they coalesce into a bigger sphere. This is
represented as the baseline action for sphere–sphere
collision (Fig. 6).
The causal engine can apply various transformations
to this baseline action. It can for instance replace the
merging effect with the explosion of one or both spheres
(by applying a ‘‘swap effect’’ MOp). As an alternative,
both spheres can also bounce back from each other
(Fig. 6). Another way of inducing causal perception is to
propagate effects to elements of the landscape itself (a
specific class of operators exists in the system for
propagating effects). In that instance, the collision
between two spheres will result in the explosion of
landscape elements (Fig. 6).
Fig. 7 details the operation of the causal engine on
the collision event between two spheres at the level of the
CE formalism [5]. First, the causal engine recognises the
collision event and instantiates the default action repre-
sentation for merging spheres (the default consequence),
Fig. 7. The causal engine operates by dissociating actions and their default effects. Actions represented as CEs are modified by ''macro-operators'', which alter the action parameters while these actions are intercepted by the EIS layer. Upon re-activation the action triggers an alternative effect.
while at the same time it freezes its execution. This
representation can thus be modified to create alternative
outcomes for that collision: the nature of this modifica-
tion derives from some parameters of the user interac-
tion history, thus implementing the ‘‘dialogue’’ between
empathy and causality discussed above.
8. The final installation: user experience
The user experience obtained with our platform
compares favourably with state-of-the-art VR art
installations in terms of visual aesthetics and user
interaction. The VR experience can be described as
resulting from interactive visualisation, from physical
interaction triggering environmental responses and from
the observation of autonomous behaviours (of the
environment or agents that populate it). The world of
Ego.geo.Graphies blurs the boundaries between the
organic and the inorganic: the sphere-shaped creatures
that populate it are constantly generated in various
regions of the world: they navigate the environment and,
as they reach a certain density, start colliding with each
other. The consequences of these collisions correspond
to ‘‘levels of causality’’, which in turn are affected by the
user interaction.
The overall user experience of the Ego.geo.Graphies installation consists in navigating the environment and perceiving its responses to one's exploration, in the form of variations of causality induced by the environment's perceived empathy. The concept of empathy
captures the relation between the user and the world on
the basis of his/her interaction with the world’s
creatures. An empathy value is computed as a function
of different parameters, measuring the amount of time
spent in close contact with spheres and the number of
spheres interacted with. The presence of explicit paths
(Fig. 8) inside the world facilitates user navigation and
localisation. They also direct the user to potential
‘‘action zones’’, like creature emission/collision zones.
The user navigation is not limited to paths; the user can
also freely explore the whole terrain, including ‘‘swamp
areas''. At a human scale, the surface of the map would be equivalent to 17,000 m² (approximately 130 m × 130 m), supporting significant navigation and exploration of the environment.
User interaction consists of navigation and also direct
physical intervention, as the user can ‘‘push’’ creatures
moving around him/her (pressing some controls on the
tracker), prompting further reactions from the environ-
ment.
The behaviour of the environment reflects an overall
level of causality, which manifests itself in the con-
sequences of collisions between spheres that the user can
observe. Examples of these consequences include:
spheres merging (the default world behaviour), spheres
bouncing away, spheres exploding, and sphere collisions affecting other elements of the landscape (Fig. 6). The
Fig. 8. Explicit navigation paths in the Ego.geo.Graphies virtual world.
Fig. 9. Perceived empathy affects the laws of causality in the Ego.geo.Graphies world.
actual consequences are computed dynamically by
modifying the default CE describing the spheres' behaviour. The user experience and the world behaviour
are related through the notion of ‘‘level of disruption’’,
which defines how different the perceived causality
should be from the default behaviour. This level of
disruption directly controls the kind of transformations
applied to intercepted collisions in the causal engine.
In the Ego.geo.Graphies world, the level of causality
disruption is dynamically updated in relation to the
perceived empathy, which is calculated from two objective parameters: user-creature proximity and user agitation (movement amplitude and frequency).
The user-creature proximity is a value in the [0,1] interval corresponding to the average distance between the user and the creatures present within a specific radius around
him/her. The smaller the value, the closer the user is to
the creatures.
The ''user-agitation'' represents an estimate of the user's movement (velocity and/or rotation speed), expressed again as a value in [0,1]. A value of 0 means that
the user is immobile; a value close to 1 denotes a user
moving fast.
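The two measures above can be combined into a single empathy score. The sketch below (Python, illustrative only) normalises both values to [0,1]; inverting proximity so that closeness raises empathy, and averaging the two terms, are our assumptions rather than the paper's actual formula.

```python
import math

def empathy_score(user_pos, creature_positions, radius, speed, max_speed):
    # User-creature proximity: average distance to the creatures within
    # `radius`, normalised to [0,1] (smaller = closer, as in the text).
    distances = [math.dist(user_pos, c) for c in creature_positions]
    nearby = [d for d in distances if d <= radius]
    proximity = (sum(nearby) / len(nearby)) / radius if nearby else 1.0

    # User agitation: movement speed normalised to [0,1]
    # (0 = immobile, close to 1 = moving fast).
    agitation = min(speed / max_speed, 1.0)

    # Assumed combination rule: closeness and agitation both raise
    # the empathy score.
    return ((1.0 - proximity) + agitation) / 2.0
```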
This level of disruption is updated at regular intervals
(every 25 s). We use a simple matrix (depicted in Fig. 9)
to determine the amplitude of the causality transformation
in relation to the user's empathy (i.e. user-creature
proximity and user-agitation). In turn,
the level of disruption affects causality by determining
the kind of MOps that will transform the CE representa-
tions associated with ongoing actions. Each type of MOp
uses the current level of disruption to constrain its
action, and hence the amplitude of the transformation it
produces.
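The matrix lookup can be sketched as follows; the matrix values and the three-band discretisation are hypothetical (the paper's Fig. 9 is not reproduced here), but the orientation follows the text: a distant, immobile user (low empathy) yields a high level of disruption.

```python
# Hypothetical 3x3 disruption matrix, indexed by banded proximity and
# agitation scores. Values are placeholders, not the paper's.
DISRUPTION = [
    # agitation: low   mid   high
    [0.9, 0.7, 0.5],   # user far away (high proximity value)
    [0.6, 0.4, 0.3],   # user at mid range
    [0.4, 0.2, 0.1],   # user close (low proximity value)
]

def band(x):
    # Map a [0,1] score to one of three bands: 0, 1 or 2.
    return min(int(x * 3), 2)

def disruption_level(proximity, agitation):
    # Rows are ordered from far (high proximity value) to close,
    # hence the inverted row index.
    return DISRUPTION[2 - band(proximity)][band(agitation)]
```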
We can illustrate this with a MOp that ''swaps'' effects,
replacing the default effect of a collision with an
alternative one (that is still compatible with the type of
object considered). As the list of alternative effects is
ordered according to a heuristic (for instance, from the
most plausible to the least plausible alternative), the
level of disruption determines the position of the effect
to swap in. In the Ego.geo.Graphies brief, alternative
effects serve as an expression of an ''emotional'' state
of the actor: their ordering translates an escalation of
emotional state, from calm to aggressive. For instance,
for a collision between two spheres, the default effect
''merge'' will be replaced by ''expand'' at a low level of
disruption and by ''explode'' at a very high level of
disruption.
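The swap operator above can be sketched as follows; the alternative-effect list is hypothetical, ordered from most plausible (calm) to least plausible (aggressive) as described in the text.

```python
# Assumed alternative-effect table: default effect -> alternatives
# ordered from most to least plausible (calm to aggressive).
ALTERNATIVE_EFFECTS = {
    "merge": ["expand", "bounce", "explode"],
}

def swap_effect(default_effect, disruption):
    # Higher disruption selects an entry further down the ordered
    # list, i.e. a less plausible, more aggressive effect.
    alternatives = ALTERNATIVE_EFFECTS.get(default_effect)
    if not alternatives or disruption <= 0.0:
        return default_effect  # no change at zero disruption
    index = min(int(disruption * len(alternatives)), len(alternatives) - 1)
    return alternatives[index]
```

With this ordering, a low disruption level (e.g. 0.1) yields ''expand'' and a very high one (e.g. 0.95) yields ''explode'', matching the example in the text.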
By relating the level of disruption to perceived
empathy, we obtain a complex interaction loop in which
the whole world reacts to the user’s interaction history.
The user's approach to the world, in terms of navigation
and interaction modes, will determine the overall world
behaviour, which the user will perceive as a more or less
predictable and more or less agitated environment.
Fig. 9 illustrates the processing cycle with its
updating of the disruption level as a function of
perceived empathy, and the implications in terms of
alternative causality. The empathy score is updated at
regular intervals and is used to compute the level of
disruption using a matrix representation, which associ-
ates levels of causality with empathy scores. Low empathy
scores are associated with significant alterations of
causality, which translate into the selection and
application of MOps. This provides a unified principle
to relate user interaction to user experience through the
concept of causality.
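The overall processing cycle can be condensed into a single update step; the `world` facade and its method names are assumptions for illustration, not the platform's actual API.

```python
UPDATE_INTERVAL = 25.0  # seconds between disruption updates, as in the text

def update_step(world, last_update, disruption, now):
    """One tick of the causal interaction loop (illustrative sketch).

    `world` is a hypothetical facade over the causal engine; its methods
    (compute_empathy, lookup_disruption, intercepted_collisions,
    swap_effect, reinject) are assumptions.
    """
    # Re-evaluate perceived empathy and the disruption level at a
    # fixed interval.
    if now - last_update >= UPDATE_INTERVAL:
        disruption = world.lookup_disruption(world.compute_empathy())
        last_update = now
    # Rewrite every intercepted collision's effect before it is
    # re-injected into the game engine.
    for collision in world.intercepted_collisions():
        collision.effect = world.swap_effect(collision.effect, disruption)
        world.reinject(collision)
    return last_update, disruption
```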
9. Conclusions
One of the objectives of this research was to facilitate
the development of art+science experiments, or VR art
installations whose briefs address fundamental concepts
of interaction. This can only be achieved by providing
systems that support high-level representations whose
concepts can be as close as possible to those used in the
early steps of brief creation. In other words, we have
tried to evolve the development of VR art installations
from a software engineering process, in which the artistic
specifications have to be interpreted by a team of
developers and ‘‘compiled’’ into low-level representa-
tions, to a knowledge engineering process, in which the
system representations remain closer to the original
abstractions of the artistic brief. This prototype envir-
onment remains significantly complex and is only
usable within art+science approaches where VR artists
have a genuine concern about philosophical issues in
interaction, realism or artificial life. However, in this
context the system’s sophistication is not an obstacle to
artists’ involvement. Many of the causal representations
we have formalised can be elicited from simple natural
language descriptions of tables. In addition, we have
recently developed authoring tools that enable artists to
browse and modify the conceptual representations
underlying the system.
Acknowledgements
This research has been funded in part by the
European Commission through the ALTERNE project,
IST-38575.
References
[1] Moser MA, editor. Immersed in technology: art and
virtual environments. Cambridge, MA: MIT Press; 1996.
[2] Grau O. Virtual Art: from illusion to immersion. Cam-
bridge: MIT Press; 2003.
[3] Sommerer C, Mignonneau L, editors. Art @ science. New
York: Springer; 1998.
[4] Aylett R, Cavazza M. Intelligent virtual environments: a
state-of-the-art report. In: Eurographics 2001 conference.
STAR Reports: 2001.
[5] Cavazza M, Lugrin J-L, Hartley S, Libardi P, Barnes MJ,
Le Bras M, Le Renard M, Bec L, Nandi A. New ways of
worldmaking: the ALTERNE platform for VR art. New York,
USA: ACM Multimedia; 2004.
[6] Cruz-Neira C, Sandin DJ, DeFanti TA. Surround-screen
projection-based virtual reality: the design and implemen-
tation of the CAVE. In: Proceedings of the ACM
SIGGRAPH 1993 conference. 1993. p. 135–42.
[7] Lewis M, Jacobson J. Game engines in scientific research.
Communications of the ACM 2002;45(1):27–31.
[8] Jacobson J, Hwang Z. Unreal tournament for immersive
interactive theater. Communications of the ACM
2002;45(1):39–42.
[9] Jiang H, Kessler GD, Nonnemaker J. DEMIS: a dynamic
event model for interactive systems. Hong Kong: ACM
Virtual Reality Software Technology; 2002.
[10] Wilkins DE. Causal reasoning in planning. Computational
Intelligence 1988;4(4):373–80.
[11] Cavazza M, Hartley S, Lugrin J-L, Le Bras M. Alternative
reality: a new platform for digital arts. In: ACM
symposium on Virtual Reality Software and Technology
(VRST2003). Osaka, Japan: 2003. p. 100–8.
[12] Michotte A. The perception of causality [Translated from
the French by T. R. and E. Miles]. New York: Basic
Books; 1963.
[13] Sato M, Makiura N. Amplitude of chance: the horizon of
occurrences. Kawasaki, Japan: Kinyosya Printing Co.;
2001.