
jesshiderhonours.files.wordpress.com  · Web viewWhilst playing video games, players are exposed to visual, audition and occasionally haptic (tactile) feedback, however, each of


1. Introduction
2. Contextual Review
   2.1 The Issue
   2.2 Current Solutions/Thinking
      2.2.1 Relaxed Behaviour
      2.2.2 Humanity Systems
   2.3 Contextual Review Summary
3. Methodology
4. Discussion
   4.1 Discussion Intro
   4.2 Visual Coherence
   4.3 Skilful Performance
      4.3.1 12 Principles of Animation
      4.3.2 7 Essential Acting Principles
   4.4 Awareness
   4.5 Discussion Summary?
5. Proposed Solution
   5.1 Proposed Solution Introduction
   5.2 Situational Awareness
   5.3 Definition of Surroundings
      5.3.1 The Environment
      5.3.2 Other Characters
      5.3.3 Player Agency
      5.3.4 Narrative Events
   5.4 Prioritisation
      5.4.1 A Worked Example
      5.4.2 Layered Behaviour
   5.5 Personality
   5.6 SA Practical Application Summary
      5.6.1 Timing – not impeding player agency
      5.6.2 No generic!
      5.6.3 Focus on emotion over physical (code restricted)
      5.6.4 Posture (Power Centres, Line of Action, Camera Angles, Symbolism)
      5.6.5 Contextualisation (Architecture)
6. Conclusion
   6.1 Conclusion
   6.2 Future Work

Can current animations be advanced so the avatar elicits a reaction to both the physicality and tone of their surroundings, which reflects their personality and mood?

1. Introduction
2. Contextual Review
   2.1 The Issue
   2.2 Current Solutions/Thinking
      2.2.1 Relaxed Behaviour
      2.2.2 Humanity Systems
   2.3 Contextual Review Summary
3. Methodology
4. Development of Solution????
   4.1 Visual Coherence
   4.2 Skilful Performance
      4.2.1 12 Principles of Animation
      4.2.2 7 Essential Acting Principles
   4.3 Awareness
   4.4 Development of Solution Summary
5. Situational Awareness
   5.1 Theory of Situational Awareness
   5.2 Situational Awareness in Games
      5.2.1 Definition of Surroundings
         5.2.1.1 The Environment
         5.2.1.2 Other Characters
         5.2.1.3 Player Agency
         5.2.1.4 Narrative Events
      5.2.2 Demonstrating Personality
6. Practical Application of Situational Awareness
   6.1 Reacting to Surroundings
      6.1.1 The Environment (Contextualisation)
      6.1.2 Other Characters (Head tracking)
      6.1.3 Player Agency (Camera angles, Timing)
      6.1.4 Narrative Events
   6.2 Prioritisation
   6.3 Performance (Power centres, line of action)

1. Introduction
Whilst playing video games, players are exposed to visual, auditory and occasionally haptic (tactile) feedback; however, these stimuli are not received equally. Gaulin and McBurney (2001) explain that visual stimuli prevail over auditory stimuli, calling this phenomenon visual dominance. As visual stimulus is the most influential factor in a player's experience, importance should be placed on making sure all the viewed elements work together cohesively.

There are several theories for devising visual coherence (Gibbs 2002; Monaco 2009; Worch and Smith 2010), yet one area that is often overlooked when aligning the visuals is the character's gameplay animations; Hooks (2011) encountered this when teaching at a games company. When he asked why the character did not respond to the creepy atmosphere of the castle it was exploring, the answer was that the animators had not thought about how the location would affect the character's behaviour.

This discord between the character and their surroundings can lead to unrealistic behaviour from the character and, if severe enough, break the player's immersion (Develop 2013b). The discord is noticeable because humans change their behaviour in reaction to their surroundings: a person at home with the lights on will act differently than if they were in a graveyard at midnight; a shy person's behaviour will differ in a large crowd compared to being alone.

If game characters are to become compelling, then, like humans, they need to change their behaviour based on their surroundings (Perkins 2008). Several games (Assassin's Creed III 2012; The Last of Us 2013; Tomb Raider 2013) already have their characters exhibit spatial awareness, responding to the physical environment. Yet there is minimal demonstration, in gameplay animations, of the character interacting with their surroundings and other characters in a way that reflects the tone of the scenario as well as their personality.

Therefore, this research project aims to explore the following topic:

Can current animations be advanced so the avatar elicits a reaction to both the physicality and tone of their surroundings, which reflects their personality and mood?

This project focuses on the gameplay animations of the player-controlled character, the avatar, in third person games. When referring to the avatar moving towards a goal, it is implied that the player is controlling this movement, and the avatar is not moving of its own accord.

The proposed solution for rectifying the current visual discord is to apply a model of Situational Awareness designed by Mica Endsley (2000) to the avatar's animations. This concept is demonstrated and evaluated through theorised applications and frameworks, and through practice-based research media tests.

2. Contextual Review

2.1 The Issue
Whilst reviewing games such as The Last of Us (2013), Tomb Raider (2013), and Wind Waker [date], the visual discord between gameplay animations and cut-scene animations is clear to observe. In Wind Waker [date], during the cut-scene in which Link rescues his sister from the Forsaken Fortress, he sneaks into the room she is in, taking his time to look around and creep forward. However, this behaviour is not represented in his gameplay animations during the rest of the dungeon, even though the objectives and atmosphere are the same.

There is no reason why gameplay animations should exclude expressive behaviour; Disney has long imbued its characters' movement with personality and attitude, with Thomas and Johnston ([date], p.347) stating, "Walks... are one of the animator's key tools in communication. Many actors feel that the first step in getting hold of a character is to analyse how he will walk."

Many current video games fail to utilise performance-based locomotion and instead focus on emphasising lifelike adaptive movement through procedural animation (Sloan 2011). Hooks (2011) explains that the lack of expressive animation is due to budgets of finance and time: believable actions cost more to animate, and producers assume they are unlikely to be noticed in a game environment.

However, as games have advanced, players are now calling for realistic reactions from the avatar, including demonstrating situationally dependent behaviour (Develop 2013a). Murray (1997) concurs with this idea, highlighting that expressive gestures beyond physical movement are needed in order to build comprehensive worlds. Murray (1997, p.150) believes, "there is no reason why gestures could not be animated in a way that very closely matches the visual display with the interactor's movement and heightens the dramatic impact of the story".

To clarify, this project is not trying to guess what the player is feeling and display that emotion through the avatar's animations. It is solely focused on how the avatar should react in relation to its own personality, mood or goals. Rather than breaking immersion, having the avatar display emotions that are uncontrolled by the player creates distance, which in turn allows empathy with the avatar [jungbluth and hooks ref].

2.2 Current Solutions & Thinking
Although there is little documentation on expressive avatar animation, there are many systems and theories for creating expressive behaviour for non-playable characters (NPCs) and buddy characters. These systems provided a base knowledge of AI and adaptive behaviour design, which influenced the development of the avatar's reactive animation framework.

Sections 2.2.1 and 2.2.2 detail two different approaches to building character AI and how they combat the issue of a visual discord between the character and its surroundings.

2.2.1 Relaxed Behaviour
Bobby Anguelov and Jeet Shroff explained their approach to developing expressive, interactive NPC AI behaviour in their 2015 Game Developers Conference (GDC) talk [ref], for games such as Just Cause 3 [date]. They detail how relaxed behaviour is needed for creating life within a space, immersing the player and keeping them engaged, as "if the world ignores the player, the player ignores the world" [ref]. Having alterable NPC behaviour also offers the opportunity to express the narrative of the game or world outside of cut-scenes.

The system Anguelov and Shroff developed is called external actions, where all the environmental and contextual AI is embedded in the environment rather than in the NPC's AI [see XXXX for further details]. This makes the system highly reusable, as it is not tied to one character. It also allows behaviours and animations to be layered over a character efficiently, without disrupting the core AI, so the NPC can react to events happening in the world, such as being shot or moving towards a fire for warmth, or interact with the environment, for example leaning on a wall or sitting on a bench.

As this system has been designed for use by multiple NPCs, it does not focus on personality-based performance and is instead populated with contextual actions. Thus, external actions allow the NPC to contextually react to their environment and to events happening around them.
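The core idea of external actions, as described above, can be illustrated with a short sketch. Anguelov and Shroff's actual implementation is not public, so every class, method and action name below is hypothetical; the point is only that contextual behaviour lives on world objects, not inside each NPC:

```python
# Hypothetical sketch of an "external action" system: context-specific
# behaviours are embedded in environment objects, so any nearby NPC
# can use them and the core character AI stays untouched.

class ExternalAction:
    """An action carried by the world that a nearby NPC may perform."""
    def __init__(self, name, animation, radius):
        self.name = name            # e.g. "lean_on_wall"
        self.animation = animation  # clip to layer onto the NPC
        self.radius = radius        # how close an NPC must be to use it

    def available_to(self, npc_pos, action_pos):
        dx = npc_pos[0] - action_pos[0]
        dy = npc_pos[1] - action_pos[1]
        return (dx * dx + dy * dy) ** 0.5 <= self.radius


class WorldObject:
    """A piece of the environment carrying its own contextual actions."""
    def __init__(self, position, actions):
        self.position = position
        self.actions = actions


def actions_for(npc_pos, world_objects):
    """Collect every external action an NPC could currently perform.

    Because the actions belong to the environment, the same NPC AI
    works unchanged in any level: the world, not the character,
    decides what contextual behaviour is on offer.
    """
    found = []
    for obj in world_objects:
        for action in obj.actions:
            if action.available_to(npc_pos, obj.position):
                found.append(action.name)
    return found


wall = WorldObject((0, 0), [ExternalAction("lean_on_wall", "lean.anim", 2.0)])
fire = WorldObject((10, 0), [ExternalAction("warm_hands", "warm.anim", 3.0)])
print(actions_for((1, 0), [wall, fire]))   # -> ['lean_on_wall']
print(actions_for((9, 0), [wall, fire]))   # -> ['warm_hands']
```

The reusability claim falls out directly: adding a new bench or fire to a level adds behaviour to every NPC at once, with no per-character AI changes.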

2.2.2 Humanity System
During his talk at GDC 2014 [ref], Shawn Robertson explained the AI and animation systems behind the buddy character Elizabeth from Bioshock Infinite [ref]. The developers wanted the player to build a relationship with Elizabeth, as well as to give her the illusion of life. Robertson explains that to create the illusion of life, they needed an AI that reacted to the environment, other AIs and the player.

To achieve this, the developers set up the 'Liz squad', a multidisciplinary team whose purpose was to bring Elizabeth to life. They did this by creating her humanity system, a combination of five subsystems focusing on different areas of interaction and performance: the Emotion system, Gesture system, Head and eye tracking, Smart Terrain and Combat system [see XXX for further details].

The humanity system is systemic, playing and interrupting based on the player's movements and decisions, so player agency is never sacrificed. The humanity system also makes use of layered animations. Hence, if Elizabeth is displaying an angry emotion with her arms folded and then needs to move to keep up with the player, instead of snapping between angry idle, walk, and angry idle again, her arms stay folded as she walks.

Altogether, this means Elizabeth’s humanity system allows her to react to the physicality and tone of the environment, as well as display her personality and mood without sacrificing the player’s agency.
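The layering behaviour described above can be sketched in a few lines. This is not Irrational Games' implementation; the class and clip names are hypothetical, and the "blend" is reduced to string composition purely to show the principle that an upper-body emotion layer persists while the locomotion layer changes underneath it:

```python
# Sketch of animation layering: an upper-body emotion layer masks the
# base locomotion layer, so a character never snaps from "angry idle"
# to a neutral walk and back just because she has to move.

class AnimationLayers:
    def __init__(self):
        self.locomotion = "idle"   # full-body base layer
        self.upper_body = None     # optional override layer

    def set_emotion(self, clip):
        self.upper_body = clip

    def set_locomotion(self, clip):
        self.locomotion = clip     # deliberately does not touch upper body

    def current_pose(self):
        """Blend result: the upper-body layer overrides the base where set."""
        if self.upper_body:
            return f"{self.locomotion}+{self.upper_body}"
        return self.locomotion


liz = AnimationLayers()
liz.set_emotion("arms_folded_angry")
liz.set_locomotion("walk")         # she follows the player...
print(liz.current_pose())          # -> walk+arms_folded_angry
```

In a real engine this corresponds to masked animation layers (an upper-body mask over a full-body base), but the state separation shown here is the essential trick.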

2.3 Contextual Review Summary
Identifying the issue of the avatar not responding to their surroundings, and framing it as a discord between the animations and the surroundings, allows the project to focus on giving the avatar personality and mood in accordance with their character, rather than trying to second-guess what the player is feeling.

As can be seen in the examination of current solutions used for the AI behaviour of NPCs and buddy characters, it is possible to create interactive, performance-based behaviour in a systemic manner that does not disrupt player agency. Therefore, it should be possible to propose a model for the avatar which allows the demonstration of performance in relation to their surroundings whilst adhering to player agency.

3. Methodology

Figure _____ Project methodology

Figure ___ depicts the methodology followed during this project. With the issue of a visual discord identified, an extensive literature and contextual review was carried out to examine current models and systems used in games. These models were then reshaped when compared with research into human behaviour and psychology.

Following the reshape came testing. This was carried out using practical tests to explore different areas of animation. The practical tests were evaluated against a set of aesthetic criteria based on existing theories (the 12 principles of animation [ref], the framed image [ref] and the 7 essential acting principles [ref]) and in relation to the current model for rectifying the visual discord. The findings from these tests led to further research into selected games, which then yielded further practical tests, and thus the cycle repeated in an iterative format.

At many points, the testing revealed flaws in the model, and thus the model was reshaped before the testing cycle began again.

The entire process was documented through a blog and this dissertation, which allowed for critical analysis and evaluation of the processes undertaken and results.

[Figure: methodology cycle diagram. Labels: Establish current models; Test the model; Case studies; Practical tests; Evaluate against aesthetic criteria; Reshape the model; Document (Dissertation, Blog).]

4. Solution Development
The current solutions discussed in section 2.2 focused on NPCs or buddy characters, as little data was found on methods applied to the avatar. Due to the lack of avatar-related systems, the initial development of a solution began by comparing how films and games achieve visual coherence, which led to examining skilful performance in games and the concept of avatar awareness.

4.1 Visual Coherence
There are techniques that can aid in creating visual coherence, such as mise-en-scène and environmental storytelling. Mise-en-scène, which originated in film, is concerned with all the visual elements that make up a shot; it can be utilised to influence the audience's mood as well as advance the story through the interplay of elements such as lighting, colour, props, framing, décor and performance (Gibbs 2002).

Yet due to the interactive nature of games, the designer does not have as much freedom with the camera as in film; Falstein (2004), a 24-year game industry veteran, explains that although the position of the camera can influence the emotional involvement of the player, playability must come first. This is why many games, such as Tomb Raider (2013) and The Legend of Zelda: Skyward Sword (2011), have cameras set behind and slightly above the player, sacrificing emotional involvement but improving gameplay by giving the player a wider field of view.

In this regard, a sub-section of mise-en-scène called the 'framed image', described by Monaco (2009), is more appropriate when analysing games, as it focuses on everything within the frame regardless of the position of the camera.

Another technique, similar to mise-en-scène, that ensures visual coherence among the elements but accounts for dynamic movement through a space is environmental storytelling. This technique is used when designing theme parks (Gamasutra 2000) but can also be applied to virtual worlds. According to Worch and Smith (2010), by using this technique the environment should self-narrate the history of the place, the functional purpose of the place, what might happen next, and the mood.

As Worch and Smith (2010) described, environmental storytelling is an effective method for narrative exposition and creating atmosphere yet, for games, it misses out a key component of mise-en-scène - action and performance. Gibbs (2002, p.12) states, “At an important base level, mise-en-scène is concerned with the action and the significance it might have. Whilst thinking about décor, lighting and the use of colour, we should not forget how much can be expressed through the direction of action and through skilful performance.”

If skilful performance can be aligned with the framed image and environmental storytelling, then a visual coherence between the avatar, environment and tone in games, in theory, should be achievable.

4.2 Skilful Performance

4.2.1 Twelve Principles of Animation
How do you achieve skilful performance in animation? This is something animators have worked on for years, with Disney's twelve principles of animation, developed back in the ____, still at the core of animation (Thomas and Johnston [date]):

1. Squash and stretch
2. Anticipation
3. Staging
4. Straight ahead and pose to pose
5. Follow through and overlapping action
6. Slow in and slow out
7. Arcs
8. Secondary action
9. Timing
10. Exaggeration
11. Solid drawing
12. Appeal

Although these are at the heart of animated films, TV series and the like, not all of the principles translate across into dynamic gameplay; Mariel Cartwright [ref] talks about the strict limitations her team faced when animating the characters of Skullgirls [date], a 2D fighting game, where they can only have three frames for the avatar to move from an idle to 'hit' the other character, or the game does not feel responsive. Thus, the team has to sacrifice anticipation for gameplay, and instead focus on overlapping action and exaggeration to bring performance to the game.

Staging is also restricted during gameplay. As previously mentioned, the intimacy of the camera may have to be sacrificed in order to aid the player's view (Falstein 2004). However, there are ways to provide artistic staging and retain playability. An example of this is found in Journey [date], during the sand-sliding stage of the game. Towards the end of the segment, the player loses control of the camera as it frames the silhouette of the avatar against the glimmering sand, architecture and distant mountain, yet it still allows the player enough view of the world to avoid obstacles.

4.2.2 Seven Essential Acting Principles
Although Disney's 12 principles of animation explain how to animate the performance of an action, they do not detail how to design the action. Instead, we can look to Hooks' [date] seven essential acting principles:

1. Thinking tends to lead to conclusions, and emotion tends to lead to action.
2. We humans empathize only with emotion. Your job as a character animator is to create in the audience a sense of empathy with your character.
3. Theatrical reality is not the same thing as regular reality.
4. Acting is doing; acting is also reacting.

5. Your character should play an action until something happens to make him play a different action.
6. Scenes begin in the middle, not at the beginning.
7. A scene is a negotiation.

Aside from principles 6 and 7, which are based on scripted action, all of the others could be applied to dynamic animation in games.

Principle 2, the idea of empathy, can work in games as long as there is distance between the avatar and the player (as previously discussed in section 2.1). Principle 3 is clearly seen in games like Wind Waker [date], where the design of the world creates a stylistic reality, such as not dying when jumping from great heights. Principle 5 can be applied in the technical implementation of animation and used in the logic of the state machines.
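Principle 5 maps almost directly onto a finite state machine: the current action persists until an event matches a transition. The sketch below is a deliberately minimal, hypothetical illustration of this (the states and events are invented for the example, not taken from any shipped game):

```python
# Minimal state machine illustrating acting principle 5: the avatar
# keeps playing its current action until an event forces a new one.

class AvatarStateMachine:
    # (current state, event) -> next state; anything unlisted is ignored,
    # which is exactly "play an action until something happens".
    TRANSITIONS = {
        ("explore", "enemy_spotted"): "flee",
        ("creep", "enemy_spotted"): "flee",
        ("flee", "enemy_lost"): "explore",
        ("explore", "entered_spooky_area"): "creep",
        ("creep", "left_spooky_area"): "explore",
    }

    def __init__(self):
        self.state = "explore"

    def handle(self, event):
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state


avatar = AvatarStateMachine()
avatar.handle("entered_spooky_area")  # explore -> creep
avatar.handle("footstep_sound")       # no matching rule: keeps creeping
print(avatar.state)                   # -> creep
```

Each state would drive a distinct animation set (a creeping walk cycle, a panicked run), so the acting principle and the technical implementation share one structure.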

It is principles 1 and 4 that stand out in relation to the project issue. To be able to think and react, which is what is needed to rectify the current visual discord, the avatar would first need to be aware of their surroundings.

4.3 Awareness
Most current game avatars demonstrate a high level of spatial awareness, responding realistically to the physical world around them. In Assassin's Creed III (2012), Desmond had complex animation systems that allowed him to have predictive foot placement (to allow adaptation to rapid changes in terrain height) and to respond to different degrees of gradation in a lifelike way (Gamasutra, 2013).

Yet it is often buddy characters and NPCs that demonstrate a higher level of awareness of their surroundings. The Last of Us (2013) took spatial awareness further and created characters that reacted to real-time and narrative events happening around them. Ellie, the buddy character for most of the game, will shy away if a torch is shone in her face, become startled by gunfire, or complain if you are needlessly shooting objects and wasting ammunition (EuroGamer 2013).

Although these actions aid realism, they contain little personality. By contrast, in Wind Waker [date], an enemy NPC called a Moblin not only shows an awareness of other characters by being able to differentiate between friend and foe, but demonstrates it in a way that is full of personality. When it spots Link, it charges toward him and begins attacking recklessly. If during this charge it accidentally hits another enemy, such as a fellow Moblin, it looks around in shock and surprise with its mouth wide open. These actions add to the idea that this character has more brawn than brain, which is also reflected in its character design.

Of the games researched [see appendices for details], Captain Toad, from Captain Toad: Treasure Tracker [date], was the avatar that demonstrated the highest level of awareness of his surroundings. Captain Toad is aware of his enemies; he panics when he has been spotted, and this distress continues in his locomotion animations while he is being chased. He also demonstrates awareness of the tone and physicality of his surroundings: shivering in fright during spooky levels, shaking with cold in ice levels and holding his breath when underwater. However, he does not react to events happening around him, such as the player moving parts of the level.

As shown by these game examples, avatars and characters are becoming increasingly aware of their surroundings. Focusing on the avatar, however, there seems to be no consistency in the awareness demonstrated; spatial movement might be highly polished, yet there is no distinction between other characters or the tone of the environment. Compared to buddy characters and NPCs, there is also a lack of personality in the animations.

Therefore, to bring visual coherence through skilful performance to the avatar, there needs to be an awareness model that can be applied to all of the surroundings, in order to maintain consistency and offer the opportunity to express personality.

4.4 Solution Development Summary
Through the examination of mise-en-scène and environmental storytelling, it has been identified that skilful performance is an important part of visual coherence. To create a skilful performance for dynamic gameplay animations, the twelve principles of animation [ref] and the seven essential acting principles [ref] can be used as aesthetic criteria for creation and evaluation.

However, to fully achieve a skilful performance, the avatar needs to demonstrate a consistent awareness of its surroundings and display its personality.

5. Situational Awareness
As highlighted in the previous section, awareness and personality are key to creating a skilful performance and thus resolving the visual discord. The proposed model for obtaining an aware avatar is based on a model of situational awareness.

This section gives a brief outline of the theory behind situational awareness before explaining its application to an avatar's animations, including descriptions of the surroundings the avatar should react to. Following this is a discussion of how situational awareness can demonstrate personality.

5.1 Theory of Situational Awareness
Situational awareness (SA) is a phrase that originated with modern fighter pilots where, simply put, "it is the ability to know what is going on around you all of the time" (Hendrick 1999, p.10). Endsley (2000, p.3), the current Chief Scientist of the United States Air Force, gives a more in-depth explanation: "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future."

Endsley’s Model of SA in Dynamic Decision Making (Endsley 2000)

According to Endsley's model, there are three levels of SA:
- Perception: acquiring all of the relevant facts.
- Comprehension: understanding the facts in relation to current knowledge, motivations and goals.
- Projection: forecasting the future status of events.

Therefore, in a decision process, the person takes in cues from visual, aural, tactile and other stimuli and relates them to their goals or expectations before forecasting what future situations may occur. Following these outcomes, they make their decision and act upon it.
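The three-level process described above can be sketched as a small pipeline. The cue categories, thresholds and reaction names below are illustrative assumptions for a game context, not part of Endsley's model itself:

```python
# Sketch of Endsley's three SA levels driving an animation choice.
# Cue kinds, the 0.2 intensity threshold, and the reaction names are
# all invented for illustration.

def perceive(stimuli):
    """Level 1 (perception): gather the relevant cues, ignoring faint ones."""
    return [s for s in stimuli if s["intensity"] > 0.2]

def comprehend(cues, goals):
    """Level 2 (comprehension): relate cues to the avatar's current goals."""
    return [{"cue": c, "threat": c["kind"] == "enemy" and "fight" not in goals}
            for c in cues]

def project(assessments):
    """Level 3 (projection): forecast what happens next, pick a reaction."""
    if any(a["threat"] for a in assessments):
        return "play_flee_animation"
    if assessments:
        return "look_toward_stimulus"
    return "relaxed_idle"


stimuli = [{"kind": "enemy", "intensity": 0.8},
           {"kind": "birdsong", "intensity": 0.1}]
print(project(comprehend(perceive(stimuli), goals={"explore"})))
# -> play_flee_animation
```

The useful property for animation is that each level has a visible counterpart: perception can drive head turns, comprehension can drive posture changes, and projection can drive anticipatory actions.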

5.2 Situational Awareness in Games
In relation to gameplay animations, SA would be used to visualise the perception, comprehension and projection of stimuli that the avatar is experiencing. Thus, alongside the stimuli the player can directly receive (visual, audio and haptic), the player can indirectly experience smell, taste, balance, heat and so on through the reactions of the avatar.

The three levels of SA provide a framework in which these reactions can be designed and built in a scalable way to suit different-sized projects, whilst maintaining a level of consistency. In its simplest form, SA would only require the avatar to look towards stimuli in its surroundings, thus demonstrating perception of the world. For any further advancement, the level of complexity is dictated by the design and scope of the project.

SA can also easily be applied to any aspect of the avatar's surroundings, due to the simplicity of the three levels. As SA is an open model, the definition of the environment it is applied to will change with its setting. The following section details the surroundings that SA should be applied to when used in the context of the avatar's gameplay animations. It also offers examples of applying SA to each category in relation to the three levels of SA (perception, comprehension and projection).

5.2.1 Definition of Surroundings
To define the surroundings, case studies of several games were performed in order to find common elements with which the avatar interacted. Seven games were selected because they were third-person games where the player controlled a single avatar. A wide scope of styles was also chosen, in order to observe whether there were any broad commonalities in animations. The chosen games were:

- Wind Waker
- Journey
- The Last of Us
- Captain Toad: Treasure Tracker
- Ni no Kuni: Wrath of the White Witch
- Assassin's Creed III
- Tomb Raider

After studying these games closely and observing what the avatars reacted to [see appendix for results], the following four categories are offered as a definition of surroundings:

- The Environment
- Other Characters
- Player Agency
- Narrative Events

For further clarification, each of these four categories will be discussed in turn, demonstrating the application of SA in relation to the avatar's gameplay animations and detailing example behaviours from the previously listed games.

5.2.1.1 The Environment
The environment consists of all parts of the landscape surrounding the avatar, both physical and tonal. This includes, but is not limited to, the terrain, foliage, buildings, props, climate and atmosphere.

In relation to the physicality of the environment, the perception level of SA can be applied to avatar navigation. The avatar can look towards obstacles, points of interest the developers have predetermined to reinforce an avatar's objective, or other environment-based stimuli. This can already be found in games like Bioshock Infinite [date], as previously discussed in section 2.2.2.
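This look-toward behaviour is, at its core, a nearest-target query over developer-tagged points of interest. A minimal sketch, with hypothetical point names and an assumed 2D layout for simplicity:

```python
# Sketch of level-1 SA (perception) for navigation: the avatar turns
# its head toward the nearest developer-tagged point of interest
# within its view radius. Names and coordinates are illustrative.

import math

def nearest_point_of_interest(avatar_pos, points, view_radius):
    """Return the closest tagged point within view_radius, else None."""
    best, best_dist = None, view_radius
    for name, pos in points.items():
        dist = math.dist(avatar_pos, pos)
        if dist <= best_dist:
            best, best_dist = name, dist
    return best


points = {"dungeon_door": (4.0, 0.0), "treasure_chest": (0.0, 9.0)}
print(nearest_point_of_interest((0.0, 0.0), points, view_radius=6.0))
# -> dungeon_door
```

A head-tracking or aim constraint would then blend the avatar's head bone toward the returned target, giving visible perception without touching locomotion.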

For environmental elements such as weather, the avatar can demonstrate comprehension as well as perception. In Ni no Kuni [date], when Olly enters the Winter Isles, he shivers and tries to protect himself against the cold, demonstrating that he has comprehension of heat stimuli. Comprehension can also come from the avatar responding to environmental dangers, such as protecting themselves from falling debris (Tomb Raider [date]).

Projection requires the avatar to be forward-thinking, which is difficult in an open-world environment, as it would be very hard for the avatar to guess where the player wants to go. Thus, projection will most likely only be applied in areas where player movement is limited. For example, when Lara has to make a treacherous crossing of a gorge by climbing a fallen plane at the beginning of Tomb Raider [date], she demonstrates projection by saying to herself, "I can do this." There is nowhere else the player can meaningfully go, so this becomes an appropriate place to use projection.

As highlighted by Hooks [date], awareness of tone is often underrepresented in games. As atmosphere is intangible, and thus cannot be directly perceived, the importance lies in the avatar demonstrating comprehension. This can be achieved through most of the avatar's gameplay animations, by altering the character's movement to reflect the surrounding tone. Captain Toad (Captain Toad: Treasure Tracker [date]) will shiver in fright in a spooky haunted house or happily toddle about in the sunshine, highlighting his comprehension of the tone of the level.

5.3.2 Other Characters

In most games, the avatar will interact with another character in some way. Yet behaviour that differs depending on who the other character is, is rarely seen in current games (Schell 2008), although humans constantly change their behaviour depending on whom they are interacting with (source).

By using the three core concepts of SA (perception, comprehension and projection), it is possible to build a categorisation system for other characters, to which the avatar can then appropriately respond.

Entity
- New
  - Appears Harmless?
  - Appears Dangerous?
- Seen Before
  - Friend
    - Royalty
    - Civilian: Comrade, Family, Trader, Pet
    - Dweller: Older, Same Age, Younger
  - Foe
    - Highly Dangerous
    - Dangerous
    - Annoying
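In engine terms, the top level of this categorisation could be sketched as a simple decision function. The following Python sketch is illustrative only; the category and reaction names are assumptions, not taken from any existing game codebase.

```python
from enum import Enum, auto

class Reaction(Enum):
    CAUTIOUS_APPROACH = auto()   # unknown and threatening-looking
    OPEN_APPROACH = auto()       # unknown but harmless-looking
    FRIENDLY_GREETING = auto()   # known friend
    DEFENSIVE_STANCE = auto()    # known foe

def categorise_entity(seen_before: bool, looks_dangerous: bool, is_friend: bool) -> Reaction:
    """Mirror the figure's top-level split: unknown entities are judged
    on appearance, known entities on the friend/foe relationship."""
    if not seen_before:
        return Reaction.CAUTIOUS_APPROACH if looks_dangerous else Reaction.OPEN_APPROACH
    return Reaction.FRIENDLY_GREETING if is_friend else Reaction.DEFENSIVE_STANCE
```

The sub-categories (Royalty, Dweller age, foe threat level) would then refine the chosen reaction further, for example by selecting a more specific animation set.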

Figure (number) details a proposed high-level categorisation system for a third-person adventure game, such as Windwaker [date], where the avatar has encountered another character. For ease of working through this example, the other character the avatar has encountered shall be referred to as the Entity.

The first categorisation is: has the avatar seen the Entity before? If the avatar has not, how does it appear? If the Entity looks threatening, by carrying weapons, breathing fire or roaring loudly for example, the avatar should be more cautious in its approach. On the flip side, if the Entity resembles something small and fluffy, the avatar can approach without hesitation.

If the avatar has seen the Entity before, does it fall into the friend or foe category? These top-level categories can then be further divided based on the status of the Entity. The idea of the avatar being aware of status was proposed by Schell (2008) in order for the avatar to appear more alive.

The animations demonstrating this awareness do not have to be deeply complex; reaching for a weapon when approaching something dangerous, or dropping down to head height when speaking to a small child, subtly shows an awareness of the Entity the avatar is interacting with.

5.3.3 Player Agency

Applying SA to player agency allows the avatar to respond intelligently to the player's input. This is mainly accomplished by the avatar showing projection: forecasting the player's movements and adjusting the response accordingly.

An example of how this can be applied in game is in a preview of the latest (unreleased at time of writing) Zelda game, where the developers comment that Epona, a horse, will not collide with trees when she is being ridden because "real horses don't run into trees" (GamersPrey HD 2014). In this case, the player still retains agency, as Epona will move in the inputted direction, yet she responds intelligently to that input in the context of the surroundings by projecting that she will hit a tree and altering course to avoid it.
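The Epona behaviour can be approximated by a projection step in the movement code: look ahead along the inputted direction and, if the projected position intersects an obstacle, bend the heading just enough to clear it. This is a minimal 2D sketch with invented names and constants, not the actual implementation.

```python
import math

def steer_around(pos, direction, obstacles, look_ahead=3.0, radius=1.0):
    """Project the avatar's path look_ahead units along the input direction;
    if an obstacle (x, y, size) lies on it, rotate the heading away from the
    obstacle centre. Otherwise the player's input passes through untouched."""
    ahead = (pos[0] + direction[0] * look_ahead, pos[1] + direction[1] * look_ahead)
    for (ox, oy, osize) in obstacles:
        if math.hypot(ahead[0] - ox, ahead[1] - oy) < osize + radius:
            # Nudge the heading away from the obstacle centre
            away = math.atan2(ahead[1] - oy, ahead[0] - ox)
            heading = math.atan2(direction[1], direction[0])
            new_heading = heading + 0.3 * math.copysign(1.0, math.sin(away - heading))
            return (math.cos(new_heading), math.sin(new_heading))
    return direction
```

The key design point is that the player's intent (the input direction) is preserved; the projection only adjusts the fine-grained heading, so agency is retained while the collision is avoided.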

5.3.4 Narrative Events

Narrative events are triggered real-time or pre-rendered cut-scene events that advance the story or world of the game. This could be a critical narrative point, such as when Aryll is kidnapped in Windwaker [date], or a minor side-quest moment, such as turning the lighthouse's light back on at Windfall Island in Windwaker [date].

Perception can be used to direct the player's attention to these moments. The Last of Us [date] had an optional mechanic where, during a background in-game cut-scene, the player could focus the avatar's attention in the direction of the event by clicking a button, helpfully framing the cut-scene for the player under the illusion that the avatar was watching.

The next stage would be for the avatar to react to the cut-scene, showing comprehension of what happened. At the beginning of Tomb Raider, Lara is impaled on a piece of bone during a quick-time event. After she pulls herself free, she holds her wound during her gameplay animations until she rests for the night and heals. Not only does she maintain consistency between cut-scenes and gameplay, she is also reacting to what happened during that cut-scene, demonstrating comprehension of the event.

As most cutscenes have little or no player agency, projection will most likely be included in the bespoke animations. In the above case of Aryll’s kidnapping, Link shows projection when he yells in fright just before Aryll is taken away.

5.5 Personality


SA offers the opportunity of expressing the avatar's personality whilst being aware, thus improving the avatar's believability alongside visual coherence. SA does this as it incorporates a person's experience, training and goals into the decisions they make; meaning that if two people had the same information and the same decision to make, they may still arrive at different outcomes as, "no two individuals react identically, since no two are the same" (Egri 1960, p.38).
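In a behaviour system, this could mean routing the same stimulus through per-character personality traits before choosing an animation, so two avatars with the same information react differently. A minimal sketch, assuming hypothetical trait, stimulus and animation names:

```python
def choose_reaction(stimulus: str, traits: dict) -> str:
    """Pick an animation for a stimulus, modulated by trait values in [0, 1].

    Two characters receiving the same stimulus select different animations
    because their trait values differ (all names here are illustrative)."""
    if stimulus == "spooky_room":
        return "shiver" if traits.get("timid", 0.0) > 0.5 else "confident_stride"
    if stimulus == "stranger_approaches":
        return "wave" if traits.get("friendly", 0.0) > 0.5 else "step_back"
    return "idle"
```

Here a timid character shivers in the spooky room while a bold one strides through it, which is exactly the "same information, different outcome" property the paragraph above describes.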

Being able to express personality is key to believability (Thomas and Johnston 1981; Loyall 1997; Sandercock, Padgham and Zambetta 2006; Laird 2000). Having a believable avatar can benefit the game in two ways. Firstly, "believable characters significantly increase the immersion and the 'fun' that a player has" (Sandercock, Padgham and Zambetta 2006, p.357). Secondly, believability is vital in creating empathy in the player, which in turn means the player has a stronger engagement with the avatar (Hall et al. 2005).

One more point to consider when designing interactions for personality through SA is that the avatar's behaviour should be consistent in relation to their emotional and motivational state (Ortony [no date]). An (extreme) example of non-coherent behaviour would be if Lara (Tomb Raider [date]) started laughing whilst stabbing another human, as if she were enjoying it. This would undermine her sense of vulnerability and remorse, which the developers had been aiming for (GameNewsOfficial 2012).

5.6 Practical Application of Situational Awareness

5.4 Prioritisation

With the surroundings defined, a prioritisation model is needed so the avatar can respond appropriately as surroundings change. This logic would be built into the avatar's AI or state machine to aid switching between animations.

Figure ____ Prioritisation Model (layered behaviour built from The Environment, Player Agency, Other Characters and Narrative Events)

Figure ___ depicts the prioritisation model, which concentrates on an upward path of layered behaviour. This allows for easy interruption or layering of behaviours in response to the changing surroundings, so the avatar can respond appropriately to whichever stimulus is currently most pressing.


It was previously mentioned that avatars rarely demonstrate awareness of the tone of the environment (Hooks [date]), but by using this model the avatar should always have an awareness of the environment, and thus a baseline tonal awareness is always present.

Layered behaviour can be technically achieved using feathered blending, animation masks, additive animation or different sets of animation.
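As a rough illustration of the masked and weighted blending mentioned above, the sketch below blends an overlay pose (e.g. a shiver) onto a base pose (e.g. a walk cycle) using a per-joint weight mask, so only the masked joints are affected. Joint rotations are simplified to single floats; this is a hedged sketch, not any particular engine's API.

```python
def blend_poses(base: dict, overlay: dict, mask: dict) -> dict:
    """Blend an overlay pose onto a base pose joint by joint.

    base/overlay map joint name -> rotation (simplified to one float);
    mask maps joint name -> weight in [0, 1]. An upper-body mask lets a
    'shiver' layer play over a walk cycle while the legs keep walking."""
    blended = {}
    for joint, value in base.items():
        w = mask.get(joint, 0.0)  # unmasked joints keep the base pose
        blended[joint] = (1.0 - w) * value + w * overlay.get(joint, value)
    return blended
```

A half-weight spine mask, for example, yields a pose midway between the walk and the shiver for the spine while the legs follow the walk cycle untouched.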

5.4.1 A worked example

To help explain how this model would function, the following details a short scenario with an explanation of how the prioritisation model would be applied in-game.

The avatar approaches a shopkeeper NPC in a cheerful, busy town. The avatar talks to the NPC and listens to how there have been many robberies around the town recently. Suddenly there is a loud explosion, and a cloud of smoke rises up behind some buildings across the square. The avatar heads towards the smoke to investigate.

Using the prioritisation model, the prioritisation state (PS) begins with player agency. When the avatar engages with the NPC and has stopped moving, the PS moves to other characters. When the explosion happens, the other characters animation is interrupted by the narrative events PS, which visually could mean a surprised reaction and a look towards the area of the explosion. This animation is then interrupted by the player agency PS as the avatar moves towards the smoke.
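The scenario above can be traced with a minimal state machine in which each incoming stimulus interrupts the current prioritisation state. The event and state names are invented for illustration and do not come from any engine:

```python
class PrioritisationState:
    """Track which surrounding category currently drives the animation."""

    TRANSITIONS = {
        "talk_to_npc": "other_characters",
        "explosion": "narrative_events",
        "move_input": "player_agency",
    }

    def __init__(self):
        self.state = "player_agency"  # default: follow the player's input
        self.history = [self.state]

    def stimulus(self, event: str) -> str:
        # An unrecognised event leaves the current state in place
        self.state = self.TRANSITIONS.get(event, self.state)
        self.history.append(self.state)
        return self.state

# Replay the worked example: talk, explosion, then movement input
ps = PrioritisationState()
for event in ("talk_to_npc", "explosion", "move_input"):
    ps.stimulus(event)
```

Each transition would, in practice, trigger a blend into the animation set belonging to the new state, giving the interruption behaviour the worked example describes.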


6. Conclusion

6.1 Conclusion

This project began with the question:

Can current animations be advanced so the avatar elicits a reaction to both the physicality and tone of their surroundings, which reflects their personality and mood?

The answer is: probably. As no testing was carried out as part of this project, the question can be neither validated nor dismissed. However, an extensive contextual review determined that this behaviour should be applied to the avatar to aid player immersion (Develop 2013b; Perkins 2008) and improve the standing of game worlds (Hooks 2011; Murray 1997).

Research into current theories and practices [gdc refs] discovered that NPCs and buddy characters can already technically achieve expressive behaviour. As the behaviour systems used for these characters were mindful of not disrupting playability, it was concluded that such systems could be applied to the avatar to allow performance without impeding player agency.

Exploration into the design of a behaviour system for the avatar began with a focus on visual coherence. Through examining the techniques of mise-en-scène and environmental storytelling, it was identified that visual coherence in games relied on the alignment of the framed image (Monaco 2009), environmental storytelling (Worch and Smith 2010) and the area that seemed to be forgotten, skilful performance (Gibbs 2002).

To create a skilful performance for dynamic gameplay animations, the 12 principles of animation [ref] and the 7 essential acting principles (Hooks 2011) were discussed for their creation and evaluation purposes. Yet, in relation to a system built for expressive avatar animations, it was the concept of awareness that was deemed most applicable.

Thus, Endsley's (2000) model of situational awareness was proposed in order to provide a framework for a system which allowed the avatar to demonstrate an awareness of its surroundings. Centred on three levels of SA, the model was scalable, maintained a level of consistency and could be applied to any aspect of the avatar's surroundings. Through case studies, the surroundings were defined as the environment, other characters, narrative events and player agency. These elements were then prioritised so the avatar could intelligently respond to the stimuli around them. Finally, how personality could be incorporated into SA, and the benefits of doing so, were explained.

Through a successful practical application test of SA, it was shown how the levels of SA could be built in relation to the surroundings. It also demonstrated how expressive behaviour that reflects the avatar's personality and mood can be applied to locomotion without disrupting player agency. The avatar also demonstrated a consistent awareness of their surroundings by reacting to another character, the physical and tonal environment, and narrative events.

6.2 Future Study


Whilst it was determined that having an avatar display uncontrollable emotions created the distance required for empathy, and thus did not initially break the player's immersion, it raised the question of how far awareness can be pushed before it becomes immersion-breaking. What would happen if an avatar refused to kill another character because their personality would not let them? Or if they ran away from a gang of enemies because they were too frightened?