
TARDIS
Time and Relative Dimensions in Space

The Possibilities of Utilising Virtual[ly Impossible] Environments in Architecture

CHRIS KELLY
Diploma in Architecture

School of Architecture and Landscape
University of Greenwich

2013

Tutor: Mike Aling


CONTENTS

i        Abstract
ii       Methods and Methodologies Statement
iii      Introduction

01       Experiencing and Processing Physical Space
         01.01  Multisensory Experience
                Multi-sensory
                Sight
                Proprioception/Kinaesthetics
         01.02  Cognitive Processing
                Visual Perception
                Kinaesthetic Perception
                Sensorimotor Experience
                Cognitive Mapping

02       Processing Image Space
         02.01  Visual Perception of Spatial Relationships
         02.02  Projecting into the Image
                Mental Imagery
                Kinaesthetic Projection

03       Illusions
         The Uncanny
         Perception and Expectations
         Distortion and Adaptation

04       Immersive Environments
         Defining an Impossible Space
         Techniques of Immersion
                Visual Immersion
                Kinaesthetic Immersion
                Simulator Sickness and Aesthetic Distance
         Sensorimotor Navigation
         Presence in Immersive Environments
         Redirection Techniques

05       Virtual[ly Impossible] Spaces
         Utilising the Virtual TARDIS
         Cause and Effect
         Test 01
         Test 02
         Test 03

06       Conclusion

         Bibliography

Appendix  Virtual Reality and Immersion of the Body


VICTOR ENRICH, Looping, 2012, WWW.VICTORENRICH.COM


CHAPTER i

Abstract

Our understanding of space is not a direct function of the sensory input received from our sense organs but a perceptual undertaking in the brain, where we are constantly making subconscious judgements that accept or reject possibilities supplied to us by our sensory receptors. This process can lead to illusions or manipulations of space that the brain perceives to be reality. Much of the recent research in this field utilises virtual reality (VR) immersive environments to create spaces that would be impossible in the physical world. Neuroscientists and psychologists are using these spaces to conduct further research into how far our perception can differ from the measured reality of our senses. This ability to manipulate the illusion and perception of presence and space within an environment provides interesting opportunities in the field of architecture. This paper begins by bringing together current and past research in the fields of neuroscience, psychology, physiology and philosophy and looks at the current and developing technology for the creation of immersive virtual environments (VEs). The paper then applies these findings to an architectural context, speculating on the possible opportunities for the built environment.


Immersive VR environments that appeal to multiple senses of the body span a wide research area in fields such as neuroscience, computing, art, gaming and cybernetics. This paper aims to bring together research from these fields to address the opportunities that immersive VEs present to future architecture. Currently this research is relatively segregated into its separate fields, but each field presents insights and opportunities that will be explored and combined to create an overall picture of the current state of research on immersive environments. This will be used to develop an argument for the use of VEs within the physical world of architecture and to assess the possibility of using VEs to create an architecture that is ‘impossible’ in the physical world. This combination of research fields creates a broad range of sources, and by bringing these sources together in one paper the outcome should feed back into each field with possibilities for further research.

CHAPTER ii

Method and Methodologies Statement



The underlying aim of this paper is to assess the opportunities that immersive VR environments present to architecture in terms of interventions in the physical world. The question arose from a research paper entitled Impossible Spaces: Maximising Natural Walking in Virtual Environments with Self-Overlapping Architecture,1 which identified the possibility of creating impossible virtual spaces that can be physically explored by natural locomotion in a physical space much smaller than the VE. This creates the possibility of designing a virtual TARDIS: a space larger than it initially appears, which would be physically impossible in a real world governed by the laws of physics. Without access to the technology required to create sophisticated VEs, this paper focuses on the current bodies of research to create links and new ideas within the field; where possible this information has been supplemented with primary research gathered from empirical experiments. These experiments utilised videos in place of fully immersive virtual environments, and their limitations are documented in the appendix of this thesis. With further time and access to the required technologies it would have been beneficial to develop a series of immersive VEs that could be used to test the theories of this paper.
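To make the idea of self-overlapping architecture more concrete, the sketch below models, with entirely hypothetical dimensions, how two virtual rooms can be assigned largely overlapping physical footprints so that their combined virtual floor area exceeds that of the tracking space. It is an illustration of the general principle only, not a reproduction of the method or figures in Suma et al.

```python
# Illustrative sketch only: a toy model of self-overlapping virtual rooms.
# All dimensions are hypothetical and chosen purely for demonstration.

from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # physical x position of the room's origin (metres)
    y: float  # physical y position of the room's origin (metres)
    w: float  # width (metres)
    d: float  # depth (metres)

    @property
    def area(self) -> float:
        return self.w * self.d

# A hypothetical 10 m x 10 m physical tracking space.
tracking_space = Rect(0.0, 0.0, 10.0, 10.0)

# Two virtual rooms mapped onto largely overlapping physical footprints.
# Only the room the user currently occupies is rendered, so the overlap
# is never visible to them.
room_a = Rect(0.5, 0.5, 8.0, 8.0)
room_b = Rect(1.5, 0.5, 8.0, 8.0)   # overlaps room_a by 7 m x 8 m

virtual_area = room_a.area + room_b.area
print(f"Physical tracking area: {tracking_space.area:.0f} m2")
print(f"Combined virtual area:  {virtual_area:.0f} m2")
# The virtual environment is 'larger on the inside' than the physical space.
```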

There are a number of interpretations of the term impossible space, but in this paper it is taken to mean any space that violates the laws of physics and could not physically exist or be constructed in the physical world. The physical world is taken to be the natural or man-made world which we live in and which is governed by what we understand to be the laws of physics; it is what we consider to be the ‘actual world’. Opposed to this idea of the physical world is the VR environment, which in this paper is taken to mean an immersive, multisensory environment that simulates physical presence in an artificial environment.

Before looking at our perception of VR environments it is important to understand how we experience the physical world, both emotively and cognitively. The paper begins by drawing from architectural and neurological sources on phenomenology, vision, kinaesthetics, sensorimotor theories and perception, using them to bridge the gap between emotive and cognitive responses to spatial environments. To fully understand how these responses are developed it is important to know how the brain processes information from the senses and creates the illusion of perception. It is also important to understand how those experiences and reactions are then committed to memory and re-accessed for use when perceiving new spaces.

Visual experience accounts for a large proportion of our perception of space, so within VEs it is essential to create a constant stream of visual information to create a convincing simulation of physicality.

1 Suma, E. et al., Impossible Spaces: Maximising Natural Walking in Virtual Environments with Self-Overlapping Architecture, IEEE Transactions on Visualization and Computer Graphics, Vol 18(4), 555-564, April 2012


Therefore, before attempting to create or utilise a VE it is important that we understand how images are processed in relation to our experience of physical space. This relationship will provide one of the key tools in VR that can strengthen or destroy the illusion. The perception of spatial relationships within images, and how this relates to our perception of physical space, will be explored. Optical illusions will also be investigated as a system that creates a considerable conflict between our sensory stimulus and the perception of that stimulus in the brain. Understanding illusions is key to understanding perception and being able to manipulate it in impossible environments.

An immersive VE is one that appeals to multiple senses and envelops the body in a virtual world. It is therefore important to understand how our visual perception works in relation to our kinaesthetic senses of proprioception and equilibrioception. This paper investigates current and past research into how these senses can be combined with vision to form a VE that allows for natural locomotion and a greater feeling of presence.

The final part of this paper looks directly at immersive VR environments. The success of an immersive VE depends upon the way it convincingly stimulates the senses to make the brain believe that the virtual world could be part of the physical world, even if the virtual world would be impossible in the physical world. The tools used to create this presence in VEs are therefore important factors in any VR setup. There are numerous techniques and technologies currently being developed, such as virtual retinal displays and bionic contact lenses. The strength of these methods of immersion lies in their ability to stimulate multiple senses and the perception of them; the more subtle the physical technology is to the user, the more convincing it may become. This paper will outline some of the current and developmental techniques for immersion using published research papers that had the means to physically test them and provide qualitative and quantitative results.

The concluding part of this paper combines all of the above research with case studies on existing environments and their uses, to speculate on the future uses of immersive VEs as a virtual TARDIS. Currently VR environments are used for gaming, military, medical, educational and artistic applications, to name a few. Many of these applications have been tried and tested: some were unsuccessful, some proved useful and others became tools for further development. However, there is limited precedent for the use of impossible virtual spaces outside of these research fields. This paper will evaluate a number of opportunities that these environments present, which will hopefully stimulate debate within all of the mentioned fields and lead to a number of possibilities for further research in architecture and urbanism.


CHAPTER iii

Introduction


‘I enter a building, see a room, and – in a fraction of a second – have this feeling about it....we are capable of immediate appreciation, of spontaneous emotional response, of rejecting things in a flash.’2 - Peter Zumthor

2 Zumthor, P., Atmospheres, Birkhäuser, Boston, 2006

ROB CARTWRIGHT, 365.360° Day 91, 2011, WWW.ROBCARTWRIGHTPHOTOGRAPHY.WORDPRESS.COM


Our bodies inhabit environments that produce an almost instantaneous emotive response within us. Be it positive or negative, we are constantly being affected by the spaces that present themselves to us. Individually the sense organs have a limited range of measurements and receptors, and the information they each gather is inherently specialised. Even when multiple sense organs are combined, the relationship between the information they gather from a space and the effect that space has on us is not a direct one. It is not until the sensory signals reach the brain and the complex process of perception takes place that we have the ability to interpret the available information to create an understanding of our environment and develop a response to it – all taking place in a fraction of a second.

This paper is intended as an introduction to human sensory perception, specifically visual and kinaesthetic, and to how an understanding of this system can lead to illusions and perceived manipulations of space that can be utilised in immersive VEs by spatial designers. The first chapter focuses on the process from stimulation of the senses through to perception of that information when experiencing the physical world. The paper then progresses to investigate how we develop spatial relationships in image space and how our perceptive system differs from or matches that of physical space. This leads on to a study of illusions and how differences between our sensory stimulus and our perception can lead to contradictions, where the brain acts as a mediator to select and reject possible solutions. These first three chapters are primarily used to create an understanding of perception that is then used to explain the techniques described in the following part of the paper.

The second part of the paper looks at immersive VEs. Whilst these are many and varied in their types, this paper is specifically looking at VEs that appeal to multiple senses as a sensorimotor experience and convey a feeling of presence in a place that is different to the person’s physical location. This section investigates how immersive environments can be created and how they appeal to the senses to develop an illusion of presence. This is then extended to look at impossible VEs, environments that could not exist in the physical world, that are created through a manipulation of our sensory perception, utilising the difference between what we sense and what we perceive. This is related to a multisensory experience of space that primarily focuses on the visual and kinaesthetic senses described in the first section. The final part of the paper assesses how valuable these impossible spaces could be in future architectures and if and how they could be integrated into the physical world.


CHAPTER 01

Experiencing and Processing Physical Space


01.01 MULTI-SENSORY EXPERIENCE

01.01.01 Multi-sensory

‘I confront the city with my body; my legs measure the length of the arcade and the width of the square; my gaze unconsciously projects my body onto the facade of the cathedral, where it roams over the mouldings and contours, sensing the size of recesses and projections; my body weight meets the mass of the cathedral door, and my hand grasps the door pull as I enter the dark void behind.’3 - Juhani Pallasmaa

We each perceive the world in relation to ourselves, with our bodies at the centre, as this is the only information available to us from our senses. Stimuli in our environments are read by our sensory receptors. Stimuli are categorised at different levels: a distal stimulus is the object or property that we perceive, and the proximal stimulus is the specific thing that stimulates the sense organ. When we view a room the distal stimulus would be the wall, ceiling and floor planes and the proximal stimulus would be the light entering the eye. It is important to differentiate the two stimuli, for when we view them we are not actually sensing the walls, ceiling and floor as objects but are sensing the light bouncing off them into our eyes. It is only when the brain interprets those signals that we build up what we perceive to be a room. Distal stimuli are infinite in their possibilities whereas proximal stimuli are limited to the functions of our sensory receptors.

Currently there are widely accepted to be nine senses in the human body: vision, smell, hearing, taste, touch, thermoception (temperature), nociception (pain), equilibrioception (balance) and proprioception (relative body position). There is much on-going debate among neuroscientists and philosophers as to what constitutes a sense and whether there are in fact many more than nine.4

3 Pallasmaa, J., The Eyes of the Skin: Architecture and the Senses, Wiley, Chichester, 2005
4 Macpherson, F., The Senses: Classical and Contemporary Philosophical Perspectives, Oxford University Press, Oxford, 2011



The experience of environments is a multi-sensory, phenomenological one in which the senses combine to give us a feeling of being within the world.

‘The problem is to understand these strange relationships which are woven between the parts of the landscape, or between it and me as incarnate subject, and through which an object perceived can concentrate in itself a whole scene or become the imago of a whole segment of life. Sense experience is that vital communication with the world which makes it present as a familiar setting of our life. It is to it that the perceived object and the perceiving subject owe their thickness.’5 - Maurice Merleau-Ponty

This immersion of our body and mind within the world of sensory stimuli convinces us of our presence within it: if we can experience an object or environment with multiple senses that support one another, then we can generally accept that thing to be real and to exist within the same physical world as ourselves. Our sense of place is derived from all of our senses; when we enter a room we not only see the boundaries and contents of that room but are also greeted by smells, sounds, temperature and the tactility of its surfaces. Even if we don’t consciously appreciate all of the information collected by our senses, we are still constantly processing a vast amount of data via our sensory organs and the brain. For the purpose of this essay, however, the senses of sight and proprioception/kinaesthetics will be concentrated on for their relevance to the research question. A piece of writing of this length would not be able to do justice to a study of all nine senses and their perceptions; by focusing on vision and proprioception/kinaesthetics they can be investigated at a higher level of detail.

5 Merleau-Ponty, M., Phenomenology of Perception, Routledge, London, 2002

JONATHAN LUCAS, Crossings, 2010, WWW.JONATHANLUCAS.COM


NICK KANE, Overall from the West, Clay Field, Eco-Friendly Social Housing, WWW.NICKKANE.CO.UK


01.01.02 Vision

It can be argued that vision is the sense that the majority of healthy people would say has the most effect on everyday life. Like all senses, vision conveys four types of information when stimulated: modality, location, intensity and timing;6 each sensory receptor is dedicated to gathering one type of information from a specific area of the body, is sensitive to different intensities of stimulation and measures the duration of stimulation. Vision, however, has far more resources dedicated to it than any other bodily sense. The optic nerve contains 18 times more nerve endings than the cochlear nerve, which has the second highest amount,7 and over 50% of neural tissue is devoted to vision directly or indirectly, with 2 billion of the 3 billion total firings in the brain every second relating to the visual sense.8 Vision is becoming more and more important in the modern world. Cultural theorist Fredric Jameson talks about the ‘depthlessness’ of the modern world, a world that is obsessed with the images of things rather than the things themselves.9 With film, television, advertising, the internet and smart phones we are constantly being bombarded with visual stimulus.

‘Computer imaging tends to flatten our magnificent, multisensory, simultaneous and synchronic capacities of imagination by turning the design process into a passive visual manipulation, a retinal journey...In our culture of pictures, the gaze itself flattens into a picture and loses its plasticity. Instead of experiencing our being in the world, we behold it from outside as spectators of images projected on the surface of the retina.’10 - Juhani Pallasmaa

The proliferation of vision as a symptom of modern society is set to develop further and further with the development of products such as Google Glasses and augmented reality. However, vision on its own will always fall short of a true experience of space. Our understanding of spaces is one of multiple points of view, of moving through a space and exploring it. We are not static viewers of a physical world but appreciate our environments as a sensorimotor experience.

6 Gardner, E., Principles of Neuroscience, Fourth Edition, McGraw-Hill, New York, 2000
7 Jay, M., Downcast Eyes: The Denigration of Vision in Twentieth-Century French Thought, University of California Press, London, 1994
8 Bowan, M., Integrating Vision with Other Senses, OEP Foundation post-graduate curriculum, Vol 40(2), 1-10, Dec 1999
9 Jameson, F., Postmodernism, or The Cultural Logic of Late Capitalism, Duke University Press, Durham, 1991
10 Pallasmaa, J., The Eyes of the Skin: Architecture and the Senses, Wiley, Chichester, 2005


01.01.03 Proprioception/Kinaesthetics

‘The outline of my body is a frontier which ordinary spatial relations do not cross. This is because its parts are inter-related in a peculiar way: they are not spread out side by side, but enveloped in each other...Similarly my whole body for me is not an assemblage of organs juxtaposed in space. I am in undivided possession of it and I know where each of my limbs is through a body image in which all are included.’11 - Maurice Merleau-Ponty

Proprioception, often used interchangeably with kinaesthesia, is the sense that allows us to locate different parts of our body in space; for example, if we make a fist behind our back we cannot see the fist but we are aware of it. It differs slightly from kinaesthesia, which is not generally accepted as a sense in its own right. Kinaesthesia is also the ability to know where parts of our body are in space, but it can rely on other senses as well, such as sight, touch or the vestibular system. Proprioception is very closely linked with equilibrioception, allowing us to judge acceleration, rotation and balance and to take appropriate actions with our bodies in these circumstances.

‘The problems arise from the isolation of the eye outside its natural interaction with other sense modalities, and from the elimination and suppression of the other senses, which increasingly reduce the world into the sphere of vision... reinforcing a sense of detachment and alienation’12 - Juhani Pallasmaa

When combined, vision and proprioception give us a strong sensorimotor experience of space. It is hoped that by contextualising vision with proprioception and kinaesthesia to create a sensorimotor understanding of our experience of space, the question of immersion and spatial illusion can be intellectually explored and evaluated.

11 Merleau-Ponty, M., Phenomenology of Perception, Routledge, London, 2002
12 Pallasmaa, J., The Eyes of the Skin: Architecture and the Senses, Wiley, Chichester, 2005

JONATHAN LUCAS, Holding Court, 2010, WWW.JONATHANLUCAS.COM


01.02 COGNITIVE PROCESSING

01.02.01 Visual Perception

What we see and what we believe we know about what we are viewing can be very different. Our perceptions are unconscious inferences from sensory data, or ‘predictive, never entirely certain, hypothesis of what might be there.’13 Perception is separate from cognitive problem solving; it is an unconscious, almost instantaneous translation of the signals from our senses into what we perceive as reality. The role of visual perception can be better understood by looking at optical illusions, where what we believe we are seeing and what is actually in front of us are clearly two different things. We will return to illusions later, but here we will look briefly at some key parts of the process of visual perception.

The eyes are very rarely still; we are constantly scanning our environments, and tests have shown that if an image is fixed on the retina it will gradually fade and eventually disappear.14 Scanning our environments not only allows us to take in a wide view of our immediate surroundings but also means there is constant motion of images over the retina, repeatedly refreshing the signals that are being sent to the brain. Despite this constant movement of the eyes the world seems to be still; it doesn’t appear to spin around us, because during saccades (rapid eye movements) the signals from the eye are suppressed, meaning we do not perceive the period of motion.

Although humans have a field of view of approximately 200 degrees, we can only focus on a small proportion of this in the centre of our view. We use successive fixations to scan our environments and focus on the areas of our vision that we consider to be most important.

13 Gregory, R., Eye and Brain: The Psychology of Seeing, Fifth Edition, Oxford University Press, Oxford, 1998
14 Ibid



top from: ALFRED YARBUS, Eye Movements and Vision, Plenum, New York, 1967
bottom: R&B GROUP, Gaze Plot, 2010, WWW.EYETRACKING.COM
Eye scan-paths when viewing images of faces
More recently eye tracking has been utilised to analyse visual attention in advertising


When this perception of what our brain deems to be important is combined with the suppression of signals from the optic nerve during saccades, a phenomenon called change blindness occurs. When two similar images are presented to a viewer and a saccade takes place between each image, it is possible to change significant portions of the image without the viewer being able to immediately identify them.15 This occurs as long as the parts of the image that are changed are not deemed highly important by the brain and are therefore not receiving high levels of attention. It has been seen that change blindness also occurs during other interruptions of images, such as when two images flicker with a blank image between them.16 It may be that instead of building up a full image of our environments we select a few important properties of each view and create a partial representation through successive fixations.17 There are other theories that argue that we do actually store more information but are unable to access it, that the original information is overwritten, or that we produce a perceived third image that is a combination of the two images,18 but these all result in the same effect of change blindness.
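As a minimal sketch of the flicker paradigm described above, and assuming two hypothetical image files that differ in a single detail, the following snippet only logs a presentation schedule rather than drawing to a display; it illustrates how a blank interruption between the two versions masks the change.

```python
# A minimal sketch of the flicker paradigm: alternate two nearly identical
# images with a blank interruption between them. Filenames are hypothetical.

import time

def flicker(image_a: str, image_b: str, cycles: int = 10,
            show_ms: int = 240, blank_ms: int = 80) -> None:
    """Log a flicker-paradigm presentation schedule for a change blindness test."""
    for _ in range(cycles):
        for image in (image_a, image_b):
            print(f"show {image} for {show_ms} ms")
            time.sleep(show_ms / 1000)
            print(f"show blank frame for {blank_ms} ms")  # masks the transient
            time.sleep(blank_ms / 1000)

# flicker("scene_original.png", "scene_modified.png")  # hypothetical files
```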

We visually perceive movement in two ways: either by an image moving across the retina, such as when an object passes across our vision while our eyes remain stationary (the image-retina system), or by eye movements when we track a moving object with our eyes (the eye-head system). During normal eye movement over a stationary scene the two systems cancel each other out, giving stability to the world as we look around it. Visually, movement is relative and we must make judgements on what it is that is moving. For example, when you are sitting on a train that is stopped at a station and a stationary train beside you starts to move, it is hard to judge which of the two trains is actually in motion. It has been seen through experiments that generally we perceive the larger of two objects to be moving, and that this perception is developed from our knowledge of the world, where large objects such as buildings are less likely to move than people or cars.19 When this perception is wrong it can lead to disorientation and motion sickness. This effect is common when we are in passive motion, being transported by another body, but when the body is in active motion proprioceptive cues and contact with the surfaces it is moving over offer confirmation that it is the body that is in motion and not the environment.

15 Grimes, J., On the Failure to Detect Changes in Scenes across Saccades, in Perception, Oxford University Press, Oxford, 1996
16 Rensink, R. et al., On the Failure to Detect Changes in Scenes Across Brief Interruptions, Visual Cognition, Vol 7(1/2/3), 127-145, 2000
17 Ibid
18 Simons, D., Current Approaches to Change Blindness, Visual Cognition, Vol 7(1-3), 1-15, 2000
19 Gregory, R., Eye and Brain: The Psychology of Seeing, Fifth Edition, Oxford University Press, Oxford, 1998


There are two systems for visually perceiving movement:
A - Image-retina system: the image moves across the retina while the eye remains stationary
B - Eye-head system: the image remains stationary on the retina while the eye moves to track the object
Drawings by Author


These images, produced for this paper, were tested on 3 participants, with the time taken to find the single difference varying between 46 and 98 seconds
Images by Author

Examples Of Change Blindness 01




These images took the viewers between 164 and 270 seconds to spot the difference

Images by Author

Examples Of Change Blindness 02


PETER LUDVIG PANUM, Panum Grating, WWW.NEUROPORTRAITS.EU
When the first two images [top and middle] are presented to each eye separately, the third image is perceived by the viewer as a result of retinal rivalry


Stereoscopic vision is caused by the slightly different views of the two eyes, giving us the ability to perceive depth as objects are triangulated from their distance from each eye. The brain processes the signals from both eyes and combines them so that we perceive one image. When we are presented with two distinctly different views in each eye, a process called retinal rivalry occurs: the brain tries to combine aspects of the two images, repeatedly selecting and rejecting parts of them.20 It is thought that the limit for stereoscopic fusion of images is a difference of 1 degree within the visual field; any difference beyond this will lead to the brain rejecting that section of the view from one of the eyes and replacing it solely with the view from the other, meaning it is possible for the brain to completely mask part of what our eyes are seeing.

In recognising objects and spatial relationships within the visual field we are responding to the reflective properties of those objects. Cells in the retina, called ganglion cells, detect differences in contrast. The receptive fields of these cells overlap in the retina and vary in size, with different sizes responsible for the recognition of different scales of contrast in the visual field.21 The rate of information sent from the eye decreases when we are viewing large areas of constant intensity but rises dramatically when viewing high-contrast scenes; our eyes are tuned to collect information about borders and edges so that we can perceive relationships in our environment.22 There are a number of theories of how we then recognise these borders and edges as specific objects, one being that we process the scene in three stages: as a 2D image, then as a 2.5D image identifying which planes are closest, and then as a 3D image connecting all the planes together and using their orientation and scale to identify their physical properties.23 Other theories include projective invariance, geon structural description and active shape matching; whichever theory is correct, however, perception plays a huge part in how we translate the different intensities and wavelengths of light projected onto the retina into a cognitive understanding of spatial relationships.

20 Gregory, R., Eye and Brain: The Psychology of Seeing, Fifth Edition, Oxford University Press, Oxford, 1998
21 Wallis, G. et al., Learning to Recognise Objects, Trends in Cognitive Sciences, Vol 3(1), 22-31, Jan 1999
22 Gregory, R., Eye and Brain: The Psychology of Seeing, Fifth Edition, Oxford University Press, Oxford, 1998
23 Marr, D., Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, MIT Press, London, 2010
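The 1-degree fusion limit mentioned above can be illustrated with a rough worked example. Assuming a typical interpupillary distance of about 63 mm and the small-angle approximation for binocular disparity, the sketch below estimates the disparity between a fixated depth and a second depth and compares it with that limit; the figures are illustrative, not experimental values from the cited sources.

```python
# Rough worked example of binocular disparity against an assumed ~1 degree
# fusion limit. Uses the small-angle approximation IPD * (1/d2 - 1/d1).

import math

IPD = 0.063  # assumed interpupillary distance in metres

def disparity_deg(fixation_m: float, target_m: float) -> float:
    """Approximate binocular disparity, in degrees, between two viewing depths."""
    return math.degrees(IPD * abs(1.0 / target_m - 1.0 / fixation_m))

for target in (0.9, 1.1, 2.0, 5.0):
    d = disparity_deg(fixation_m=1.0, target_m=target)
    outcome = "fusible" if d <= 1.0 else "likely rejected from one eye"
    print(f"fixate 1.0 m, target {target} m: disparity ~{d:.2f} deg -> {outcome}")
```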


01.02.02 Kinaesthetic Perceptions

Our understanding of the position and location of our bodies in time and space allows us to explore our environments and move freely within them, gaining different viewpoints and orientating our visual sense. It has already been stated how important relative movement is in terms of the perception of our visual stimulus. Whilst proprioception is the body’s internal sense of where each element of the body is in relation to the others, kinaesthesia is our external awareness of where our body is in relation to our surroundings. As previously mentioned, proprioception is regarded as a sense in itself but kinaesthesia draws on other senses, sight and touch being very important in locating ourselves in relation to other objects. Whilst vision is a function of electromagnetic waves, proprioception is a mechanical sense based on the displacement of certain receptors. These include muscle spindle receptors that monitor stretching of the muscles; Golgi tendon organs that sense the force exerted by a muscle; joint receptors that sense flexion or extension of a joint; and Ruffini endings, Merkel cells and field receptors in the skin that are sensitive to stretching.24

When we move or reorientate parts of the body we have a reference of where we want to move that part of the body to, often visual, and a proprioceptive reference of where the limb currently is. As we move, vision and proprioception work together to track the position of the body in reference to itself and the environment. When we reach the desired position this is confirmed again using both senses, which signal the movement to cease. This is called sensorimotor coupling.25 We are consciously aware of our desire to undertake the movement at a given time, but the intricacies of a movement, the physical act of contracting specific muscles, are not ones we are consciously aware of. The body is capable of undertaking complex motor tasks by relying on information gathered from the senses; for example, a cricketer hitting a ball is aware of the physical position of the bat in their hands and the trajectory of the ball, but the process of specifically selecting the muscles that will react to that information is largely an automatic response.

24 Gardner, E., Principles of Neuroscience, Fourth Edition, McGraw-Hill, New York, 2000
25 Flanders, M., What is the Biological Basis of Sensorimotor Integration?, Biological Cybernetics, Vol 104(1/2), 1-8, 2011
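The closed loop of sensorimotor coupling described above can be caricatured in a few lines of code: a visual reference specifies the target, a proprioceptive estimate reports the current limb position, and corrective movements continue until the two agree within a tolerance. The gain, tolerance and target values below are invented for illustration; this is a sketch of the idea, not a physiological model.

```python
# A highly simplified sketch of sensorimotor coupling for a 1-D reaching task.
# All numbers are illustrative only.

def reach(target: float, limb: float, gain: float = 0.4,
          tolerance: float = 0.01, max_steps: int = 50) -> float:
    """Move a limb position towards a visually specified target position."""
    for step in range(max_steps):
        error = target - limb        # visual reference vs proprioceptive estimate
        if abs(error) < tolerance:   # both senses confirm the goal is reached
            print(f"movement ceases after {step} corrections at {limb:.3f} m")
            return limb
        limb += gain * error         # issue a partial corrective movement
    return limb

reach(target=0.30, limb=0.00)   # e.g. reaching 30 cm towards a door pull
```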


MIKE KAPLAN, Air Force Baseball Player Swinging, 2012, WWW.GOAIRFORCEFALCONS.COM


01.02.03 Sensorimotor Experience

Experience of our environments is dynamic: our bodies are in constant motion and our orientation adjusts and varies, as do our environments. How much effect this motor action has directly on our perception of space is being debated in recent research. Sensorimotor theories put forward that perception and motor action are intrinsically linked and interconnected; we perceive objects not only for their properties but also for the potential actions upon them. This supports spatial experience being a phenomenological experience of awareness of oneself in an environment and one’s active engagement with it.26 The opposing theory is the two visual systems hypothesis, which suggests that humans have two distinct visual systems relating to different pathways in the brain: a vision-for-perception system and a vision-for-action system. The vision-for-perception system is deemed to be responsible for experiential awareness and object and colour recognition, whereas the vision-for-action system is responsible for direct control of action resulting from visual information.27 It is an argument that is on-going in the philosophical and neuroscience fields and is as yet unresolved. There is evidence for both systems, but some key pieces of evidence that seem to suggest a sensorimotor theory of perception will be looked at later in this essay.

26 Madary, M. et al., Perception, Action, and Consciousness: Sensorimotor Dynamics and Two Visual Systems, Oxford University Press, Oxford, 2010
27 Ibid


01.02.04 Cognitive Mapping

Cognitive maps arrange the information gained through sense and perception into a mental framework that can be accessed in real time. A cognitive map is not necessarily the equivalent of a cartographic map as the name suggests,

‘it is not assumed that a cognitive map is an equivalent of a cartographic map, nor is it assumed that there is a simple Euclidean one-to-one mapping between a piece of objective reality and a person’s cognitive map of that reality. Cognitive maps are generally assumed to be incomplete, distorted, mixed-metric representations of real-world environments, but they can also be maps of the imaginary environments represented in literature, folk tales, legends, song, paintings, or film.’28 - Reginald Golledge

Cognitive maps can be accessed, reproduced and reorientated depending on the context of the task required. They span over all the senses and include information on other environments, such as memories, relationships, social and political meanings.

Specific cells have been identified in the brain that contribute to the process of cognitive mapping, such as place cells, which repeatedly become active when an animal is in a specific environment. It has been seen that the same place cells can activate for constructing cognitive maps of varying environments, but their relationship to other place cells varies with each map.29 Spatial view cells become active when viewing a specific section of an environment or object; different cells represent different directional views towards a specific object. They differ from place cells in that they do not include any locational data but rather a specific view of a scene. Research has identified these cells as responding when a specific scene is being recalled from memory. Head direction cells act much like a compass to record the direction of the head using proprioception and the vestibular system; they work with the spatial view cells and place cells to orient views in relation to one another. The process of path integration and dead reckoning30 takes place in grid cells, and the specific views from the spatial view cells and place cells are orientated in relation to the specific locations.
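Path integration, or dead reckoning, lends itself to a short numerical illustration: each position estimate is obtained from the previous one plus the heading and distance of the latest movement, with no external landmarks. The step data below are invented, and the small heading errors show how such estimates drift, one reason a cognitive map need not match the measured geometry of an environment.

```python
# A minimal sketch of path integration (dead reckoning) with invented step data.

import math

def integrate_path(steps):
    """Accumulate (heading_deg, distance_m) steps into an estimated position."""
    x, y = 0.0, 0.0
    for heading_deg, distance_m in steps:
        heading = math.radians(heading_deg)
        x += distance_m * math.cos(heading)
        y += distance_m * math.sin(heading)
    return x, y

# Walking a rough square: small heading and distance errors accumulate,
# so the estimated end position drifts away from the true start point (0, 0).
steps = [(0, 5.0), (92, 5.1), (181, 4.9), (269, 5.0)]
print("estimated end position:", integrate_path(steps))
```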

The process of spatial cognition, from sensory input, to perception, to cognitive mapping and memory, is by no means an accurate representation of the physical world. Perceptions can differ from reality through ambiguous information provided by the senses, and that information is then arranged into a cognitive map which can be warped, non-Euclidean and combined with information from other spaces. When we try to recall a cognitive map of an environment we can be failed by our memory and the mixing of other memories. This process leaves a potential fracture between what we believe to be the arrangement of an environment and its physical properties.

28 Golledge, R. et al., Spatial Behaviour: A Geographic Perspective, Guilford Press, New York, 1997
29 O’Keefe, J. et al., The Hippocampus as a Cognitive Map, Oxford University Press, Oxford, 1978
30 The process of identifying one’s orientation and position from that of a previous position.



CHAPTER 02

Processing Image Space


02.01 VISUAL PERCEPTION OF SPATIAL RELATIONSHIPS

2D images presented to the eye, such as computer images, photographs and artwork, present us with different problems in the perception of spatial relationships. Not only are they 2D representations of 3D environments, but through image manipulation relationships that are impossible in the physical environment can be visually represented in images, producing illusions and irrational spaces.

When viewing objects in the physical world we use both eyes, drawing on binocular disparity and convergence, as well as judging motion parallax and accommodation to focus on near and far objects, to judge the spatial relationships of objects. When viewing 2D images these are all things we cannot draw upon, except in the case of motion pictures and motion parallax, so we need to identify other cues that aid our perception of 3D spatial relationships in 2D images. We inherently attempt to perceive 3D relationships from 2D representations as we attempt to make sense of images from our experience of our 3D physical world. If we fail to draw enough 3D cues from an image, however, we will make assumptions that can lead to optical illusions such as the Necker cube.

There are a series of pictorial cues that we draw upon, including perspective, texture, shading and shadow, reference frames, learnt sizes and aerial perspective. Perspective relates to the understanding that an object’s apparent size is inversely proportional to its distance from the viewer; texture relates to visual gradients, such as those seen on curved or slanted surfaces. Reference frames relate to the visual information that surrounds objects in a scene, and learnt sizes draw on our experience of the physical world and our perception of objects within it; for example, we would expect a tower block to be larger than an apple. This can, however, lead to confusion in images where perspective and relative sizes are deliberately manipulated, such as in the Ames room. Aerial perspective is the effect whereby contrast and colour saturation decrease with depth as a result of haze in the atmosphere. Tests have found that when identifying both the position and orientation of objects in images, perspective is the dominant factor, with shadow also important in determining relative positions.31

31 Wagner, L. et al, Perceiving Spatial Relationships in Computer-Generated Images, Computer Graphics and Applications, Vol 12(3), 44-58, May 1992
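The dominance of perspective as a pictorial cue rests on the regularity noted above: for a fixed physical size, the visual angle an object subtends falls roughly in inverse proportion to its distance. The short calculation below illustrates this with an assumed 2 m high doorway; the figures are purely illustrative and are not drawn from the cited study.

```python
# Numerical illustration of the perspective cue: visual angle versus distance.

import math

def angular_size_deg(height_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object of a given height at a given distance."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

door_height = 2.0  # metres (assumed)
for distance in (2.0, 4.0, 8.0, 16.0):
    print(f"{distance:5.1f} m -> {angular_size_deg(door_height, distance):5.2f} deg")
# Doubling the distance roughly halves the visual angle, the regularity the
# perceptual system exploits when reading depth from a flat image.
```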



ALI ERTURK, Berlin Holocaust Memorial, 2011, WWW.ARTOFHDR.COM
The 3-dimensional relationships of the Necker Cube remain ambiguous and we perceive two possible cubes, one with point A in the front plane and one with point B at the front
We are able to deduce 3-dimensional relationships from complex 2-dimensional images


02.02 PROJECTING INTO THE IMAGE

‘The crucial faculty of the image is its magical capacity to mediate between physical and mental, perceptual and imagery. Factual and affectual. Poetic images, especially, are embodied and lived as part of our existential world and sense of self’32 - Juhani Pallasmaa

Mental imagery is the process of recalling previous experiences or predicting and imagining future or potential experiences. It can be perceived as a function of all the senses but without the stimulus usually involved in sensual experience. Mental images can be produced from one’s own experiences or constructed from the description of someone else’s experience. The process of perceptual experience is an unconscious act, but producing mental imagery in the brain requires a conscious effort. Mental imagery is, as its name suggests, described as images, or pictures, in the brain. There are two opposing theories on the structure of mental images: the first is that we experience these mental images as pictures in the brain, as if we have an inner eye inspecting and perceiving them; the opposing theory suggests that the images are more abstract representations, similar to the way we use text and language to describe things, with the brain translating images into coded, propositional representations.33 It is an argument of content and format. This debate is still unresolved by experimentation, but what is known is that when we draw upon mental imagery the visual cortex of the brain becomes active in the same way that it would when we view a similar picture or environment, suggesting that however the image is stored in the brain, the experience and perception of the mental imagery is one of actually experiencing that image. This is also true when we dream and hallucinate,34 which could go some way to explaining why these episodes can feel so real.

32 Pallasmaa, J., The Embodied Image, Wiley, Chichester, 2011
33 Kosslyn, S. et al., When Is Early Visual Cortex Activated During Visual Mental Imagery?, Psychological Bulletin, Vol 129(5), 723-746, September 2003
34 Gregory, R., Eye and Brain: The Psychology of Seeing, Fifth Edition, Oxford University Press, Oxford, 1998

AHN JUN, Self-Portrait, 2011, WWW.AHNJUN.COM
By projecting ourselves into an image we can induce perceived sensations without any physical stimulus, such as the feeling of vertigo



02.02.02 Kinaesthetic Projection

‘...the quality of architectural reality seems to depend on peripheral vision, which enfolds the subject in the space. A forest of context, and richly moulded architectural space, provide ample stimuli for peripheral vision, and these settings centre us in the space. The preconscious perceptual realm, which is experienced outside the sphere of focused vision, seems to be just as important existentially as the focused image.’35 - Juhani Pallasmaa

Similarly to how the visual cortex of the brain becomes active when experiencing mental imagery, research involving fMRI scanning has found that other aspects of brain function are recreated when undertaking other tasks. Studies have found that when reading stories we produce neural representations of visual and motor actions relating to those described in the text;36 this discovery seems to support the theory of perception as a sensorimotor experience rather than the two visual systems hypothesis. If the two visual systems hypothesis were correct and there were two distinct systems of vision-for-perception and vision-for-action, then as we read stories we would be processing the text through our vision-for-perception stream, whereas the activation of areas of the brain responsible for motor functions suggests that we are also perceiving the actions involved in the descriptions, integrating motor function and visual perception into one experience. Another study, of the effect of still imagery, in which participants rated their perceived kinaesthetic experience when viewing images of dancers, found that generally people’s kinaesthetic awareness was aroused by the images and they perceived a feeling of motion or anticipated motion from viewing still imagery of bodies in motion.37

These findings, as well as supporting the perception of experiences as a sensorimotor function, suggest a projection of the self into the image or text. When viewing imagery we draw on our own experiences, and the activation of areas of the brain associated with the content of the image supports a theory of our perceptual system trying to enter the image and experience it. We may try to project ourselves and our experiences onto an image, but without appropriate responses to motor functions it will still fall short of an experience of physical space, given the lack of motion parallax and of induced or relative motion of objects, and the perception of the image border.

35 Pallasmaa, J., The Eyes of the Skin: Architecture and the Senses, Wiley, Chichester, 2005
36 Speer, N., Reading Stories Activates Neural Representations of Visual and Motor Experiences, Psychological Science, Vol 20(8), 989-999, 2009
37 Jola, C., Moved by Stills: Kinesthetic Sensory Experiences in Viewing Dance Photographs, Seeing and Perceiving, Vol 25, 80-81, 2012


NICOLA SELBY, Dancer, 2012, WWW.NICOLASELBY.COM
It has been found that viewing images of dancers arouses the viewer’s own kinaesthetic awareness


CHAPTER 03

Illusions


M. C. ESCHER, Ascending and Descending, 1960


03.01 THE UNCANNY

The uncanny is used to describe something that is at once both homely and unhomely.38 An analogy for it is the haunted house: the home, which is associated with comfort and safety, becomes an object of fear and the unknown when inhabited by an unwelcome being.39 A similar effect is caused when we experience an ambiguous sensual stimulation which leads to difficulties in perception. This can be seen in the drawings of Escher, such as his infinite staircases: at first the images strike us as familiar, and we identify the staircase through recognition, but on closer inspection the relative spatial relationships cannot be resolved and the image becomes uncanny. Through our learnt knowledge of spatial relationships and the tools we draw upon when viewing imagery, we inherently try to make sense of it in terms of the 3D physical world. When presented with ambiguous or insufficient sensory information, our perceptive system makes assumptions which can shift the relationship between the sensory stimulus and what we perceive to exist in the world.

03.02 PERCEPTION AND EXPECTATIONS

Illusion is a term used to describe when our perception of a sensory input differs from the physical stimulus creating that input. Illusions can be identified by using tools or measurements to quantify the physical stimulus. Illusions are not created by the sensory receptors themselves but are creations of our perception of these sensory inputs, arising from our brain’s expectations of relationships within the physical world and our learnt understanding of general rules of interaction, often producing a perception that is mismatched with reality. This has important implications for our perception of reality and our trust in what our brain perceives to be real. We are willing to take significant bets based on the odds that our perception of sensory inputs is generally accepted and confirmed to be true through combinations of senses and measurements. However, when these bets are wrong and our perception differs from reality, a void is opened up which offers a significant opportunity for our perception of reality to be manipulated. If we can understand how to open up this void then there are huge implications for the manipulation of perceived space.

38 Freud, S., The Uncanny, in The Standard Edition of the Complete Psychological Works of Sigmund Freud, Volume XVII (1917-1919): An Infantile Neurosis and Other Works, 1919
39 Vidler, A., The Architectural Uncanny: Essays in the Modern Unhomely, MIT Press, London, 1992

03.01 THE UNCANNY

03.02 PERCEPTION AND EXPECTATIONS


As previously mentioned, the Ames room produces issues with our perception of relative positions and size constancy. An Ames room is a tapered, warped room that, from a single viewing position, appears to be a simple rectangular room; because we are accustomed to rooms having simple planar geometry we perceive the room as rectangular rather than in its actual physical geometry. If the viewer changes their point of view the illusion falls apart, as the perspectival relationships change with the viewing position. When two identical objects are placed in the two back corners of the room, which we perceive to lie on a wall parallel to the front of the room, the one that is in reality positioned closer to us is perceived as much larger than the one positioned further away. The brain must weigh up two main factors in viewing this situation: the first is size constancy and the second is perspective. Usually we perceive objects to be a constant size, and if two of the same objects are in our view and one appears much smaller than the other we usually infer that the visually smaller object is positioned further away, rather than being physically smaller, as a result of perspective. However, the brain is also assessing the perspectival relationships of the room, and in this case it wrongly perceives the room to be rectangular and the objects to be of different physical sizes.
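A minimal geometric sketch of the judgement being made (the figures are illustrative and not taken from any particular Ames room): an object of height h viewed from distance d subtends a visual angle of roughly

    theta = 2 arctan( h / 2d ) which is approximately h / d for small angles

so a person standing twice as far away projects a retinal image roughly half the size. If the perspectival cues of the room persuade the brain that both people stand at the same distance, the only interpretation consistent with the two retinal sizes is that one person is physically about half the height of the other, which is the illusion the Ames room produces.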

03.03 DISTORTION AND ADAPTATION

If we directly manipulate the light entering the eyes using prisms or lenses so that our view is distorted or shifted, we can produce an effect similar to that of visual illusions. The distortions produce a visual stimulus different from the one we are used to (the light entering the eyes could, for example, be mirrored or offset) and our brains will attempt to make sense of the distorted images in relation to our normal understanding of vision. Often in this case we are able to adapt our other senses and motor functions to work with the distorted visual perception. As discussed earlier, when we undertake motor functions we use a combination of senses; for the basis of this explanation we will use vision and proprioception. If the visual sense is distorted we need to adapt our perception to adjust for that distortion during motor functions. This has been tested numerous times, such as in experiments conducted by K. U. Smith.40

40 Welch, R., Adaption to Prism-Displaced Vision, Perception and Psychophysics, Vol 5(5), 305-309, 1969. Smith used cameras to displace the visual position of a participant's hand that was hidden behind a screen. The participant would see the view of the camera on a screen that would flip their view horizontally or vertically and displace it in both directions, as well as scaling it, in a series of tests. The participant was then asked to use that hand to write with only proprioceptive and tactile cues as to its location. Tests have also been undertaken that involve pointing to a target whilst wearing prism goggles.


It has been revealed that not only is there a sensorimotor recalibration that adjusts our movements based on the perceived distortions, but also that the visual distortion results in a partial proprioceptive recalibration, where our perception of limb positions shifts in line with the shift of visual information.41 This suggests that we are able to recalibrate our visuomotor functions, remapping our cognitive map to account for variations in perception, and also that manipulation of our retinal images leads to an adjustment of our proprioceptive sense, so we are able to fool the body into believing its position has shifted.

Distortions and displacements of images in space, as above, generally lead to a rapid recalibration of our visuomotor functions that accounts for the distortion and restores normal action to our movements after just a few attempts. However, experiments that displace our visual stimuli in time have shown very different results. In an adaptation of the tests carried out by Smith, where the visual stimulus was displaced in time but not spatially, participants showed jerky and ill-coordinated movements when the delay became greater than 0.5 seconds, and little or no improvement was seen over time through practice.42

41 Cressman, E. et al, Sensory Recalibration of Hand Position Following Visuomotor Adaptation, Journal of Neurophysiology, Vol 102, 3505-3518, Oct 2009; Cressman, E. et al, Motor Adaptation and Proprioceptive Recalibration, Progress in Brain Research, Vol 191, 2011
42 Gregory, R., Eye and Brain; The Psychology of Seeing, Fifth Edition, Oxford University Press, Oxford, 1998
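To make the idea of a stimulus 'displaced in time' concrete, the sketch below simulates a fixed visual delay with a simple frame buffer. It is a schematic illustration written for this discussion, not code from the experiments cited, and the 60 frames-per-second figure is an assumption.

    from collections import deque

    def make_delayed_view(delay_seconds, frame_rate=60):
        """Show each incoming camera frame only after a fixed interval has
        passed, displacing the visual stimulus in time rather than in space."""
        delay_frames = int(round(delay_seconds * frame_rate))
        buffer = deque()

        def show(frame):
            buffer.append(frame)
            if len(buffer) <= delay_frames:
                # Not enough history yet: keep showing the oldest frame we have
                return buffer[0]
            return buffer.popleft()

        return show

    # A 0.5 second delay at 60 fps means the hand is always seen 30 frames behind
    delayed_view = make_delayed_view(0.5)
    for t in range(90):
        visible = delayed_view("frame %d" % t)  # a real setup would pass camera images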

Ames Room at the Camera Obscura and World of Illusions, WWW.EDINBURGH.ORG


Drawings Of A Variation Of The Ames Room And An Equivalent Rectangular Room
A: View from point X
B: Perspective Plan
C: Parallel Projection Section
D: Perspective Section
Although the two rooms differ spatially, when viewed from point X they both appear exactly the same, as if they were both rectangular with walls at right angles to one another. In the case of the Ames room our brain makes a judgement to perceive the room as rectangular, leading to the illusion that the people are of differing sizes.
Drawings by Author



03.04 CAUSE AND EFFECT

Whilst not purely illusory, it is necessary to mention in this section the perception of cause and effect. Making judgements about cause and effect helps us to understand the past and present and to estimate future occurrences. However, like other forms of perception, it is open to judgement; we cannot always be completely certain that one event caused another. What we can do is draw on our understanding of physical relationships and known laws of science to make informed hypotheses about likely causes and effects. Michotte carried out early experiments to test theories of cause and effect, involving a pair of lines on a spinning disc that converge and then move away from one another; when viewed through a small slit the lines appear as small squares. As the squares appear to move together and then apart, his participants identified one of the squares as striking the other, causing it to ricochet in the opposite direction.43 In further research it has been found that in similar circumstances we perceive a cause and effect between two objects even when they don't physically or visually come into contact with one another.44 As the modern world becomes more and more technologically advanced we have to make more judgements about causes and effects. Bulky mechanical inputs and outputs have been replaced with smart interfaces such as motion tracking and verbal inputs, so in some cases the link between cause and effect is blurred as interfaces become more intuitive.45

43 Gregory, R., Eye and Brain; The Psychology of Seeing, Fifth Edition, Oxford University Press, Oxford, 1998
44 Goldstein, E. et al, Encyclopedia of Perception, Sage, London, 2010
45 Ibid


An example of Michotte's experiment using coloured lines on a spinning disc. When viewed through a thin slot viewers perceived the red square to strike the blue one, causing it to deviate from its original path.
Drawing by Author


4.0

CHAPTER 04

Virtual Reality and Immersion of the Body

‘The expression ‘Virtual Reality’ is a paradox, a contradiction in terms, and it describes a space of

possibility or impossibility formed by illusionary addresses to the senses.'46 - Oliver Grau

46 Grau, O., Virtual Art: From Illusion to Immersion, MIT Press, Cambridge, 2003, p.15

'Physically remaining in the real world, the user steers his course through the


incredibly large or indefinitely small virtual space by stimulating head, hand and body movements with his brain. His mind thereby functions as an eye that goes on a voyage of discovery and crosses the boundaries of the material world to enter VR... i.e. the space of imagination.'47 - Wolfgang Strauss

04.01 IMMERSIVE ENVIRONMENTS

An immersive VE is one that stimulates your senses in a way that leads your brain to perceive the virtual world as a possible physical reality. This does not mean that the virtual world has to conform to the same rules as the physical world, but that the sensory perception of it is convincing to the brain. The definition of virtual is 'having the essence or effect but not the appearance or form',48 so a VE could be described as an environment that captures the sensual essence of a physical environment but does not have its physical form. A VE does not have to conform to the laws of the physical world, meaning it can be manipulated and tweaked to adjust our sensory perception. We can use immersive VEs to isolate and adjust certain senses in a controlled environment. They have been used comprehensively by neuroscientists researching brain activity to develop an understanding of how we perceive space.

There are numerous types of immersive environment in the modern world; social media, smart phones, even books can be seen as immersive worlds. There are four widely accepted categories of immersion: tactical immersion, when performing tactile operations that require deep concentration and skill; strategic immersion, when undertaking a mental challenge; narrative immersion, within a story or description; and spatial immersion, when a simulated world becomes perceptually convincing. For the purpose of this paper the argument will be centred on spatial immersion that uses specific techniques and equipment to artificially manipulate the senses, such as a CAVE or head mounted display and motion tracking. It is an environment that leads to one's perception of their body becoming submerged in an encompassing artificial environment, and relies on the ability of that environment to suspend feelings of doubt by strongly and convincingly stimulating the senses to create an illusion of presence.

47 Strauss, W., Virtual Architecture and Imagination in Film and Arc 1, Artimage Graz, 1993, p.4
48 Collins English Dictionary 11th Ed., HarperCollins, London, 2011

04.02 PRESENCE IN IMMERSIVE ENVIRONMENTS

‘... presence in a virtual environment necessitates a belief that the participant no longer inhabits the physical space but now occupies the computer generated virtual environment as a ‘place’’49 - Woodrow Barfield

An immersive VE is one that conveys a feeling of presence within it. We may still be fully aware that the virtual world we are experiencing is virtual, but our perception of presence within it is a 'strong illusion of being in a place in spite of the sure knowledge that you are not there.'50 Presence is therefore not necessarily a belief that you have been transported through time and space to another dimension; it is more a perception that the VE could exist as a real environment perceived by the senses. Many users of VR environments will describe their experience of visiting distinct 'places' whilst at the same time knowing they have not left the physical room in which they are actually located. The ability of a VE to suspend your knowledge of the physical space you inhabit, and to encourage your senses and brain to perceive your body as present within the VE, is the key to creating an effective immersive VE.

Carrie Heeter identifies three categories of presence in VEs that can enhance our perception of being there: personal, social and environmental.51 Personal presence relates to your own bodily experience; this may be seeing yourself or parts of your body within the space, haptic feedback systems, or being able to identify consistent patterns of action and interaction of the body.52 Personal presence is effectively the measure of how vivid the artificial sensory feedback is to each of our bodily senses;53 when this feedback becomes stronger and more identifiable the feeling of presence is increased.

49 Barfield, W. et al, The Sense of Presence Within Virtual Environments: A Conceptual Framework, in Human-Computer Interaction: Software and Hardware Interfaces, Vol B, Elsevier, p.702
50 Slater, M., Place Illusion and Plausibility can Lead to Realistic Behaviour in Immersive Virtual Environments, Philosophical Transactions of The Royal Society B, Vol 364, 3549-3557, 2009
51 Heeter, C., Being There: The Subjective Experience of Presence, Presence: Teleoperators and Virtual Environments, Vol 1(2), 267-271, 1992
52 These consistent patterns of action do not have to comply with the same actions as the physical world, but as long as they can be identified and become consistent with our body they will create a feeling of presence. For example, if we move our head to the left and the virtual display appears to move in the opposite direction, as long as it consistently does that we are able to build an understanding of how we interact with the environment and the resulting sensory stimulus.
53 Steuer, J., Defining Virtual Reality: Dimensions Determining Telepresence, Journal of Communication, Vol 42(4), 73-93, 1992



Inhabiting a virtual space with other beings will enhance our feeling of social presence; this is strengthened if the other beings, whether they are human, animal or abstract, acknowledge our presence and interact with us.54 Without this interaction and acknowledgement we can begin to question our own presence in the space; this is when the other two factors become more important in convincing us that we are actually part of the environment, as our experience becomes more that of a fly on the wall with regard to the beings within it.

Environmental presence is similar to social presence in that it is strengthened by acknowledgement and interaction, but in this case with the environment itself rather than with the population of that environment. This can be achieved with proximity sensors or interactional objects such as switches, but it is often the simpler forms of interaction that convince us of the reality of a VE.55 A virtual world can be designed to be highly interactive, exaggerating or surpassing the physical world and leading to an overpowering, sometimes unsettling conviction of presence as our sensory organs are overloaded with data.

Even without direct sensory inputs affecting our experience of an immersive VE we can perceive induced sensations and a feeling of presence. For example, in a second person VE where we control the movements of a computer generated human, it has been found that 71% of participants rated the computer generated person as giving a greater feeling of being themselves than their physical body whilst in the VE. In the same test it was found that when they directed the computer generated version of themselves to touch a virtual animal, 86% of participants felt an emotional response and 76% perceived a physical stimulation of touch, even though there was no stimulus except the visual one.56 This is built up from our expectations and illusions of cause and effect: in the real world if we reach out and touch something we expect to feel a sensory stimulus, and if our

54 There is a difference in this case between acknowledgement and interaction. A being can acknowledge our presence simply by avoiding collisions with us as we move or by rotating to face us. Interaction with other beings in a populated space is the hardest technically to create but also gives the highest feeling of presence; if we are able to speak to, touch or in other ways interact with them, the beings appear to become more real, and the quality of their recognition of and interaction with us as part of their environment leads us to believe we must be part of that environment.
55 Many objects in the physical world do not involve active interaction, such as consciously deciding to turn on a switch, but rather involve unintentional interactions resulting from the physicality of the world, such as colliding with a wall. Physical properties such as solidness and immovability can be the key properties that we use to identify objects. These are properties that seem simple and straightforward, but it is often this solidness that is difficult to simulate in a non-physical environment; we have all experienced computer games where it is possible to walk through objects that visually we would usually identify as solid.
56 Heeter, C., Being There: The Subjective Experience of Presence, Presence: Teleoperators and Virtual Environments, Vol 1(2), 267-271, 1992


feeling of expectation is strong we can begin to perceive a sensory stimulus without it actually being there. There is no unique formula for creating a feeling of presence within the virtual world; there are guides that can be used to strengthen a feeling of immersion by appealing to multiple senses, but our complex system of perception can also subconsciously aid us by filling in the blanks and creating a feeling of immersion from strong stimuli to specific senses.

Slater presents two theories of presence: 'place illusion' and 'plausibility illusion'.57 Place illusions relate to our perception of being within the VE and are affected by how well our motor actions are affected by, and in turn affect, our sensory stimuli; our 'sensorimotor contingencies'.58 Place illusion is about how the virtual world is perceived as an environment; plausibility illusion is about what is perceived. It is the illusion that what is apparently happening is really happening, even though you may be fully aware that this is not the case. Place illusions that are developed through sensorimotor relations between the body, its senses and the environment are forms of perception dealing with a direct stimulation of the senses. Plausibility illusions are 'correlations between external events not directly caused by the participant and his/her own sensations.'59 They are more likely to evoke responses that we associate with emotions, such as an increased heart rate causing excitement or anxiety. We can overcome these feelings by reassuring ourselves that the situation is not actually real, but as with sensory perception our brain creates an unconscious initial response and it takes a conscious effort to overcome it.60 The stronger both the place illusion, that you are there, and the plausibility illusion, that what appears to be happening is happening, the more strongly you are likely to respond to a VE as if it were an actual, physical environment.

‘The more intensely a participant is involved, interactively and emotionally in a VE, the less the computer generated world seems a construction, rather it is constructed as a personal experience.’61 - Oliver Grau

57 Slater, M., Place Illusion and Plausibility can Lead to Realistic Behaviour in Immersive Virtual Environments, Philosophical Transactions of The Royal Society B, Vol 364, 3549-3557, 2009
58 Ibid
59 Slater, M., Place Illusion and Plausibility can Lead to Realistic Behaviour in Immersive Virtual Environments, Philosophical Transactions of The Royal Society B, Vol 364, 3549-3557, 2009
60 A popular example of this is the visual cliff scenario originally proposed by Gibson (Gibson, E. et al, The Visual Cliff, Scientific American, Vol 202, 64-67, 1960). Variations of this are deployed in a VE where the floor is a narrow circular ledge around a large drop; participants are asked to walk to the opposite side of the space and the majority do so by carefully shuffling around the edge of the perceived ledge, showing signs of anxiety, even though the entire space is actually flat (and they know it to be flat) and the pit is just a visual illusion.
61 Grau, O., Virtual Art: From Illusion to Immersion, MIT Press, Cambridge, 2003, p.200


04.03 TECHNIQUES OF IMMERSION

As previously seen, the human perceptual system has developed to make sense of the sensory stimuli provided to us by physical three-dimensional environments; immersion in a virtual space can be measured by its ability to appeal to those same perceptual and sensory systems, allowing us to read the virtual space as a similar physical construct. When developing an immersive VE it is not always possible, due to technical constraints such as computer processing power or access to appropriate equipment, to create an environment that stimulates all of the senses in the same way and at the same time as we experience physical space. The development of the VE is a trade-off between the perceived importance of a certain sense in producing a feeling of immersion and the technical knowledge and ability to do so. It is also a fight between our perception of the physical world that our body is actually present in and the illusion of presence within the developer's virtual world;62 this is why a kinaesthetic understanding of our presence in a VE is key to being able to lift the body out of the physical world and into a virtual one. This section will look into current and developing techniques that are being used to stimulate a perception of visual and kinaesthetic presence within a VE.

04.03.01 Visual Immersion

'Peripheral vision integrates us within a space, while focused vision pushes us out of the space'63 - Juhani Pallasmaa

In image space we project our understanding of physical space onto the image and perceive it in relation to that understanding; even though when we look at images we experience brain activity similar to that of experiencing their content, we do not get a feeling of immersion within them. Immersion is multisensory; it encompasses the body and is a feeling of being within the image. This section will briefly look at some widely researched techniques for creating a sense of visual immersion, and the later chapter on kinaesthetic immersion will look at how this can be combined with bodily immersion techniques to achieve a strong feeling of presence. Whilst this is not intended as an exhaustive list of immersive technologies, it is intended to give a brief overview of popular and emerging technologies. I aim to focus on the effectiveness of these technologies in creating a perception of immersion and, where appropriate, refer to the technology within the equipment, but the focus of this paper is not directed at the technologies themselves but rather at the effect they give.

62 Steuer, J., Defining Virtual Reality: Dimensions Determining Telepresence, Journal of Communication, Vol 42(4), 73-93, 1992
63 Pallasmaa, J., The Eyes of the Skin; Architecture and the Senses, Wiley, Chichester, 2005, p.13


The most important aspect of visual immersion is to create a visual stimulus similar to what we experience through the eyes in normal conditions. It is therefore important that any virtual display addresses both eyes, allowing us to experience depth with binocular vision. Stereoscopic displays have become more widespread in recent times with 3D televisions and cinema. Anaglyph, polarisation, LCD shutter and interference filter techniques all utilise glasses worn by the viewer in order to see a different version of the image in each eye. All of these glasses work on variations of the same principle: within the display there are two offset images corresponding to the views from each eye, and each lens of the glasses filters out one of the images, meaning we see in each eye an image slightly offset from the one seen by the opposing eye.
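As a sketch of the anaglyph variant of this principle (a generic illustration of the technique, not the software used to produce the images later in this chapter), the two eye views can be combined by taking the red channel from the left-eye render and the green and blue channels from the right-eye render:

    import numpy as np

    def make_anaglyph(left_rgb, right_rgb):
        """Combine two offset renderings into a single red/cyan anaglyph image.
        Both inputs are uint8 arrays of shape (height, width, 3) rendered from
        the two eye positions; red/cyan glasses then deliver a slightly
        different image to each eye and depth is perceived."""
        anaglyph = np.empty_like(left_rgb)
        anaglyph[..., 0] = left_rgb[..., 0]     # red channel from the left-eye view
        anaglyph[..., 1:] = right_rgb[..., 1:]  # green and blue from the right-eye view
        return anaglyph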

Autostereoscopy describes methods of viewing stereoscopic images without the need to wear glasses. One way this can be achieved is by arranging two images in narrow alternating strips, where one set of strips is obscured from the view of each eye using a parallax barrier or lenticular lens that is only effective from the viewing angle of that eye. Although this would seem more natural without the need for glasses, it is limited in that it requires the user to view the display from an angle almost perpendicular to it, and the effect is diminished at very acute angles to either side.
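The strip arrangement behind a parallax barrier or lenticular lens can be sketched in the same way; real displays match the strip width to the barrier geometry, but assuming single-pixel columns for simplicity:

    import numpy as np

    def interleave_columns(left_rgb, right_rgb):
        """Interleave two eye views into alternating single-pixel columns, the
        image arrangement placed behind a parallax barrier or lenticular lens
        so that each eye is shown only one of the two sets of strips."""
        combined = left_rgb.copy()
        combined[:, 1::2, :] = right_rgb[:, 1::2, :]  # odd columns taken from the right view
        return combined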

So far the techniques mentioned above are all ways of displaying two still or moving images in such a way that we perceive them as three-dimensional. On their own they do not allow the viewer to explore the visual environment; they present a predefined view over which the viewer has no control. These displays need to be combined with tracking data if the viewer is to be able to explore the space on their own viewing terms. Motion tracking will be looked at later in this chapter, but there are also forms of autostereoscopy that produce the effect of binocular vision not by creating two separate images that project onto the eyes but by creating three-dimensional representations that we view as we would a physical three-dimensional object under normal conditions. Examples of this technique include volumetric displays and holograms. Holograms involve recording the detailed properties of light scattered from an object, whether the object is physical or virtual; the hologram can then be reproduced by shining light through a recording plate that modifies the properties of the light to produce a three-dimensional representation of the original object, as if the original object were in its place, allowing the viewer to move around the object and view it from different angles. Multiple objects can be combined in a single hologram and motion parallax is preserved.


Volumetric displays involve recreating an object in three dimensions in space by projecting light onto a receiving medium. Instead of using two-dimensional pixels a volumetric display utilises voxels (pixels with an added third dimension), each of which correlates to a position on the receiving medium, which can be made up of spinning transparent screens or stationary receiving media arranged in a three-dimensional grid. Moving receiving media utilise persistence of vision within our perception to create a whole image from a series of parts quickly projected onto a screen as it moves through space. Whilst holograms are patterns of light perceived to be suspended in space, volumetric displays require a medium for their display, which can interfere with our physical movement around them; movement around both is also limited by the interference of the body with the projection systems.

Whilst holograms and volumetric projection create a more realistic three-dimensional representation of an object in space, the same illusion of three-dimensionality can be achieved through the use of stereoscopic displays that require glasses. Glasses may be seen as an inconvenience to the viewer, but they are lightweight and this inconvenience is seen to be outweighed by the relative ease with which stereoscopic displays can be created compared to the more specialised equipment and computing power required by autostereoscopic displays. When combined with motion tracking, the glasses can create a very effective illusion of visual immersion.

ACTUALITY SYSTEMS, Perspecta Volumetric 3D Display, WWW.INITION.CO.UK
A circular disc inside the display spins at 720 rpm receiving projected images that we perceive as a three-dimensional representation through the persistence of vision.

Examples of Stereoscopic Images
Top: Visualisations created from two positions relating to the two positions of the human eyes
Middle: The red channel is removed from the left image and the blue and green channels are removed from the right image
Bottom: The images are combined together as one; anaglyph glasses filter out portions of the image that relate to the views from each eye, creating a perceived depth from the 2D image
Images by Author



Two widely used pieces of equipment for creating visually immersive environments in VR are the Cave Automatic Virtual Environment (CAVE) and the head mounted display (HMD).64 A CAVE is a cube-shaped room or cubicle constructed from rear-projection screens. Stereoscopic images are projected onto each of the screens65 and the viewer wears a pair of LCD shutter glasses to allow them to perceive the images as three-dimensional, while motion tracking systems record their body movements and head position within the CAVE; this information feeds back into the image processors to update the view accordingly. Whilst the CAVE allows the viewer to move around, most setups are 2.5m square or smaller, so the degree of movement and exploration is limited. The advantages of the CAVE system are the relatively small amount of equipment the user is required to wear to create the illusion of immersion and the wide field of view afforded by being completely enclosed in the visual environment. Other participants can enter the CAVE and view the three-dimensional environment, but their view will be slightly distorted and their perception of depth within the VE will be compressed or expanded, as the perspective will only be correct for the viewer being motion tracked.66

64 The acronym CAVE relates to Plato's cave and the theory that perceived reality is that of the projected views we experience.
65 Sometimes only 3 of the planes are used as screens, but for a fully immersive environment all 6 planes are used to completely enclose the viewer in the environment.
66 Pollock, B., The Right View from the Wrong Location: Depth Perception in Stereoscopic Multi-User Virtual Environments, IEEE Transactions on Visualization and Computer Graphics, Vol 18(4), April 2012

LOS ALAMOS NATIONAL LABORATORY, Astronomical Simulation In The CAVE, WWW.LANL.GOV
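The reason the perspective is only correct for the tracked viewer can be sketched as follows: for each wall of the CAVE the projection frustum must be recomputed from the tracked head position, giving an asymmetric (off-axis) frustum. The sketch below assumes a wall-aligned coordinate system with the screen centred on the origin in the z = 0 plane and the tracked eye at (ex, ey, ez); the wall size and eye position in the example are illustrative.

    def off_axis_frustum(eye, wall_width, wall_height, near=0.1):
        """Asymmetric frustum extents (left, right, bottom, top at the near
        plane) for a screen centred on the origin of the z = 0 plane, viewed
        from the tracked eye position. An untracked viewer standing elsewhere
        sees an image generated for someone else's eye, which is why their
        perceived depth is compressed or expanded."""
        ex, ey, ez = eye
        scale = near / ez        # similar triangles between near plane and screen plane
        left = (-wall_width / 2.0 - ex) * scale
        right = (wall_width / 2.0 - ex) * scale
        bottom = (-wall_height / 2.0 - ey) * scale
        top = (wall_height / 2.0 - ey) * scale
        return left, right, bottom, top

    # Example: a 2.5 m square wall viewed from 0.4 m left of centre and 1.2 m away
    print(off_axis_frustum((-0.4, 0.0, 1.2), 2.5, 2.5))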


The HMD is a piece of equipment worn by the viewer that presents stereoscopic images to the eyes simultaneously to create the effect of binocular vision. The HMD blocks out any view of the physical world and replaces it with images of the VE that link with motion tracking data. HMDs have varying fields of view, with the most effective covering a high proportion of the eyes' field of view.67 HMDs are much bulkier than the glasses required to be worn in a CAVE environment, but they do allow a greater degree of freedom for the viewer, as they are attached to the viewer themselves and move with them as they explore the VE. Their freedom of movement in the VE is therefore only limited by the size of the physical space that their body inhabits, although later in this paper we will look at ways that these limits of physical space can be extended virtually.

67 The Oculus Rift is a HMD currently under development that is considered to have the widest field of view of any HMD, at 110° vertically and 90° horizontally for each eye. Whilst this falls slightly short of the field of view of our eyes under normal conditions, it is complemented by motion tracking equipment that adjusts the view as you move your head to look in different directions.

OCULUS VR, Concept Rendering (left), Software Development Kit (right), WWW.OCULUSVR.COM
The Oculus Rift combines a lightweight design with a 110° field of view, low latency tracking and stereoscopic rendering to create a sense of visual immersion


More subtle technologies are currently being developed in the form of a bionic contact lens that contains translucent circuitry and LEDs 300 µm in diameter, powered wirelessly by radio frequencies.68,69 Currently the technology only allows for an 8 by 8 array of LEDs, so it is not yet developed enough to produce immersive images of VEs, but this could become possible in the future. As well as LEDs, the researchers are looking into incorporating micro-lasers into the lenses that would allow them to scan an image directly onto the retina, and they are also looking at passive systems that filter and modulate the ambient light to create a perceived image by adjusting the intensity and colour of light that enters the eye naturally. Currently this technology is being developed to create overlays of augmented reality on our everyday experience of the physical world, and as the resolution of the available images increases it could become possible to create realistic VEs, seamlessly integrated into our vision, that replace or overlay the physical world.

Virtual retinal displays developed by the Human Interface Technology Lab at the University of Washington use lasers to scan an image directly onto the retina of the viewer. This process negates the need for an intermediate image to appear in front of the eye, creating a stimulus straight onto the retina that is perceived by the brain as an image received from the eye.70 As the system requires no intermediate screen, the resolution and field of view of the virtual image are less restricted and can approach those of our natural vision.71 Bypassing the eye completely, it is also possible to stimulate areas of the visual cortex with electronic signals that produce what is perceived as visual stimulation by the person experiencing it. This technology is currently under development to replace vision in blind patients, but if developed further it could be utilised in VR to create a direct manipulation of the brain that could be indistinguishable from the information received from the eye in natural vision. However, the brain is a complex organ that is still not fully understood, so understanding the correct areas of the visual cortex to stimulate to create meaningful images is a long way off in the future.72

Whichever type of equipment or technique is used to simulate visual immersion in a VE, there are a number of factors that contribute to the quality of the immersion, such as the graphics frame rate, the extent of tracking and freedom allowed for the viewer, the tracking latency, the field of view, the visual quality of the rendered scene and the subtlety of the equipment.73

68 Parviz, B., Augmented Reality in a Contact Lens, IEEE Spectrum, http://spectrum.ieee.org, 2009, accessed March 2013
69 The minimum focal distance of the human eye is 22mm, so we would not normally be able to focus on LEDs placed directly on the surface of the eye; in bionic contact lenses, however, tiny microlenses create a visual distance between the light from the LED and the surface of the eye, creating an illusionary distance that allows the eye to focus light that in reality sits directly on its surface.
70 As the laser directly scans an image onto the retina without an intermediate image in front of the eye, the only visual information that exists about the virtual display exists within the body of the person experiencing it, making it a very private experience and strengthening the feeling of personal presence.
71 Tidwell, M. et al, The Virtual Retinal Display - A Retinal Scanning Imaging System, Proceedings of Virtual Reality World '95, 325-333, 1995
72 Normann, R., Cortical Implants for the Blind, IEEE Spectrum, Vol 33(5), 54-59, May 1996


UNIVERSITY OF WASHINGTON, Bionic Contact Lens, SPECTRUM.IEEE.ORG
The possibility of incorporating augmented reality into a contact lens opens up new opportunities for the seamless integration of virtual environments into the physical world


The visual cliff scenes from Slater, M., Visual Realism Enhances Realistic Response in an Immersive Virtual Environment, IEEE Computer Graphics, Vol 29(3), 76-84, 2009
The supermarket scenes from Meijer, F., Navigating Through Virtual Environments: Visual Realism Improves Spatial Cognition, Cyberpsychology and Behaviour, Vol 12(5), 517-521, October 2009
A and B show the scene rendered without shadows and reflections used in the first experiment; C and D show the scenes used in the second experiment with reflections and shadows of the body. It is unclear whether the increased feeling of presence relates to the enhanced rendering of the scene or to the representation of the body within the scene.
E shows the supermarket rendered without materials and F shows the environment used in the second test with materials applied; the increase in visual information led to an improvement in navigation and spatial cognition.


Many of these factors relate to the speed at which information can be captured from the body, processed and fed back into the visual stimulus of the VE. As discussed in a previous chapter, when our visual stimulus is offset from our motor actions it becomes very hard to undertake simple tasks, with little improvement through practice. If the visuals in a VE are noticeably out of sync with our movements it will have a negative impact on the feeling of immersion and presence within the environment.
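A back-of-the-envelope latency budget illustrates why these factors are usually discussed together; the component figures below are assumptions chosen for the arithmetic, not measurements of any particular system.

    # Motion-to-photon latency: time from a head movement to the updated image
    tracking_ms = 5      # sensor capture and transmission (assumed)
    processing_ms = 8    # application logic and scene update (assumed)
    rendering_ms = 16    # one frame at roughly 60 fps (assumed)
    display_ms = 10      # display refresh and pixel response (assumed)

    total_ms = tracking_ms + processing_ms + rendering_ms + display_ms
    print("motion-to-photon latency: %d ms" % total_ms)
    # 39 ms with these figures: far below the 0.5 second delay at which coordination
    # broke down in the experiments above, but enough to become noticeable in a
    # head-tracked display if any of the components grows much larger.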

Studies have been undertaken into how the visual realism of the images presented can affect the response of the person experiencing the VE. In experiments at UCL using the visual cliff scenario74 it was found that participants' perception of being within the environment, their presence, was increased with a more realistic representation of shadows and reflections, and their anxiety at the cliff scenario was also increased. It could, however, be argued that this is not necessarily to do with the realism of the rendering but also with the representation of the body within the space given by shadows and reflections, creating personal presence. Other tests at the University of Twente75 have shown that spatial learning is improved with a photorealistic style of rendering compared with a more abstract, non-realistic representation.76 This may be linked simply to the increase in the amount of visual data available to create landmarks and therefore build up survey knowledge of the environment, following Siegel and White's Landmark Route Survey Method theory.77 The photorealistic environment was full of objects and tiled materials had been applied to the floors, whereas in the non-realistic version the space was sparser and entirely rendered grey. The results of these experiments seem to suggest that as the visual stimuli from a VE come closer to our experience of physical space they become easier to understand and process. When a more abstract representation is experienced we use more of our brain power to apply meaning to the environment than when we can immediately appreciate and understand the environment in direct relation to the physical world. With both tests, however, it is currently unclear whether these conclusions are purely down to the realism of the render or to other factors affecting the experiments.

73 Slater, M., Place Illusion and Plausibility can Lead to Realistic Behaviour in Immersive Virtual Environments, Philosophical Transactions of The Royal Society B, Vol 364, 3549-3557, 2009
74 Slater, M. et al, Visual Realism Enhances Realistic Response in an Immersive Virtual Environment, IEEE Computer Graphics, Vol 29(3), 76-84, 2009. Participants wore a HMD in an experiment where they were instructed to walk to a chair positioned on the opposite side of a visual drop that could only be reached by navigating a narrow ledge in the VE. The VE was rendered in two different ways for two groups of participants. In the first experiment the environment was rendered with static lighting and textures, and in the second experiment it was rendered with dynamic shadows of the participant as they moved through the environment and reflections of their body in objects present in the environment.
75 The tests involved navigating a predefined route in a virtual supermarket that was rendered in two ways, the first as a grey model with a general ambient light and the second with photorealistic textures and lighting. The participants had to memorise the route and repeat it unaided whilst also undertaking other cognitive tasks.
76 Meijer, F. et al, Navigating Through Virtual Environments: Visual Realism Improves Spatial Cognition, Cyberpsychology and Behaviour, Vol 12(5), 517-521, October 2009
77 Siegel, A. et al, The Development of Spatial Representations of Large-Scale Environments, in Reese, H., Advances in Child Development and Behaviour, Academic Press, New York, p.9-55, 1975


04.03.02 Kinaesthetic Immersion

‘The process of thinking does not only involve the brain but also the body. The body supplies spatial experience that is subsequently translated into reality.’78

- Wolfgang Strauss

‘The computer has transformed the image and now suggests it is possible to enter it....In VR, a panoramic view is joined by sensorimotor exploration of an

image space that gives the impression of a 'living' environment. Interactive media have changed our idea of the image into one of multisensory interactive space with a time frame.'79 - Oliver Grau

No matter how closely a visual image can mimic the images we perceive in natural vision, if it does not respond to the body as a sensorimotor experience its effect as an immersive environment will always be limited. The depth of a painted or photographed space can only be experienced in the imagination, whereas successful VR appeals to multiple senses and allows a sensorimotor experience. In the previous chapter on visual immersion it was briefly discussed how stereoscopic images and motion tracking can be combined to allow the viewer to experience multiple views of an object, as with objects in the physical world. This chapter will look at this and other techniques that combine vision with kinaesthetic awareness to create a strong perception of immersion in a VE.

Head tracking is important in VR to update the view of the user in relation to their movement; any movement can be described using six degrees of freedom (DOF) in three-dimensional x, y, z space. The six degrees of freedom are linear movement along the x, y and z axes (forwards/backwards, left/right and up/down) and rotational movement about each of those axes (pitch, yaw and roll).

78 Strauss, W., Virtual Architecture and Imagination in Film and Arc 1, Artimage Graz, 1993, p.4
79 Grau, O., Virtual Art: From Illusion to Immersion, MIT Press, Cambridge, 2003, p.3


Tracking can be achieved with multiple methods, including optical systems utilising visible or infra-red light, electromagnetic systems, acoustic systems and mechanical systems that require a physical link between the object being tracked and a reference point. Each system has its benefits and limitations, as shown in the table on the following page.80 As we have previously discussed, the effectiveness of a tracking system can be considered in relation to the speed at which the tracking information can be recorded and relayed to the visual display, allowing it to be updated without any noticeable lag between the motor actions of the viewer and the view they are experiencing.
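To make the six degrees of freedom concrete, the sketch below (a generic formulation rather than the tracking code used for the tests in this paper) packs a tracked position and orientation into a single rigid-body transform that a renderer could invert to obtain the view.

    import numpy as np

    def pose_matrix(x, y, z, pitch, yaw, roll):
        """Build a 4x4 rigid-body transform from the six degrees of freedom.
        Angles are in radians; rotations are applied in roll, pitch, yaw order."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)

        roll_m = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])    # about the forward (z) axis
        pitch_m = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # about the lateral (x) axis
        yaw_m = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])     # about the vertical (y) axis

        pose = np.eye(4)
        pose[:3, :3] = yaw_m @ pitch_m @ roll_m
        pose[:3, 3] = [x, y, z]
        return pose

    # Each tracker sample yields a fresh pose; the renderer uses its inverse as the view matrix
    head = pose_matrix(0.1, 1.7, 0.0, pitch=0.05, yaw=np.radians(30), roll=0.0)
    view = np.linalg.inv(head)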

Updating the view experienced by a user in relation to the direction and orientation of their gaze is a step towards creating a sensorimotor experience in VR, but to strengthen the feeling of presence other parts of the body need to be perceived as entering the environment.

80 Motion tracking is becoming more widely available with the widespread access to webcams, Xbox Kinects and Nintendo Wiis that can all be adapted with free software packages and used in a VR setup. As part of this paper a motion tracking cap was developed using an adapted webcam and a hat with 3 infra-red LEDs mounted on it. See Appendix v.i, Test 01 for a full description of the hat.

FORWARD/BACKWARD, LEFT/RIGHT, UP/DOWN, PITCH, YAW, ROLL
All three-dimensional movements can be described using the six degrees of freedom shown in this diagram; for motion tracking to create a natural interface it must be able to track movements of the body in each of these directions.

A motion tracking hat produced for this paper that uses infra-red LEDs and a webcam to track head movements. The hat allows 6DOF to be tracked and translated into input controls for interactive videos.

Images by Author
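By way of illustration only, a minimal version of this kind of optical tracking can be assembled from a webcam feed: threshold the image so that only the bright infra-red LEDs remain, then take the centroid of each blob. The sketch below, written with the OpenCV library, is a hypothetical reconstruction along those lines rather than the software produced for the hat described in the appendix; the camera index and threshold value are assumptions.

    import cv2

    camera = cv2.VideoCapture(0)   # adapted webcam (IR-blocking filter removed)

    while True:
        ok, frame = camera.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # The infra-red LEDs appear as the brightest points in the image
        _, bright = cv2.threshold(grey, 220, 255, cv2.THRESH_BINARY)
        # OpenCV 4 returns (contours, hierarchy)
        contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        led_centres = []
        for contour in contours:
            m = cv2.moments(contour)
            if m["m00"] > 0:
                led_centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

        # With three LED centres in a known arrangement on the hat, the relative
        # positions and spacing of the blobs can be converted into head position
        # and orientation and fed to the display as the 6DOF input described above.
        if cv2.waitKey(1) == 27:   # Esc to stop
            break

    camera.release()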


Motion Tracking Methods And Their Advantages And Disadvantages
Each motion tracking system has its advantages and disadvantages; sometimes multiple systems can be combined to overcome these issues.

VISIBLE LIGHT
Advantages: Widely available equipment; low latency; low cost
Disadvantages: Receiver easily obscured by the body; environment light can cause interference

INFRA-RED
Advantages: Less interference than visible light; low latency; low cost
Disadvantages: Receiver easily obscured by the body; sunlight emits IR light that can cause interference outdoors

ELECTROMAGNETIC
Advantages: No occlusion from the body; relatively low cost
Disadvantages: High latency; small capture area; tracking becomes imprecise at edges

ACOUSTIC
Advantages: No clear advantages over other systems
Disadvantages: Latency, as sound travels slowly; sound travels at varying speeds in different temperatures and humidities

MECHANICAL
Advantages: Realtime, low latency; accurate to within 1°
Disadvantages: Limited number of movements; weight of wearing device on user; specialist expensive equipment

INERTIAL
Advantages: Unlimited capture volume; no external receivers required
Disadvantages: Only gives relative movements, not absolute position; relatively expensive


VIRTUSPHERE, Transparent Virtusphere for Events, WWW.VIRTUSPHERE.COM
CYBERWALK, Cyberwalk Omni-Directional Treadmill, WWW.CYBERWALK-PROJECT.ORG

The Virtusphere sits on rollers that track the movement of the sphere and update the virtual environment accordingly. The sphere allows the user full 360° movement within the virtual environment while remaining in the same place in the physical world.

The movement of the user is tracked and the data is applied to the omni-directional treadmill, causing it to move in the opposite direction to the user's movements and allowing for the exploration of a large virtual space using physical walking while still remaining in the same position.


There is a range of input devices that can be used in a VE, which can be split into two main categories: direct and indirect. Indirect inputs describe the use of an intermediate device that acts as an interface between your body and the VE, such as a mouse or joystick; these involve an active process of translating one movement into another, where the input movement does not directly imitate the effective output movement. Direct inputs are more intuitive and translate a physical movement in space into a similar movement in the virtual world, such as motion tracking the position of the body during walking to create the illusion of walking in the virtual world. There are also inputs that straddle the border between indirect and direct, such as walking on the spot to produce an illusion of walking forwards in the virtual world, or walking on a treadmill or Virtusphere.

Personal presence in VR is the measure of how much one's own actions and movements relate to the actions and movements of the virtual self, although the input and output actions do not have to relate directly to one another, as long as the same input action consistently creates the same output action. Here we will look at this theory in more detail using experimental research data. Tests carried out at UCL have compared three types of locomotion in VEs: push button flying, walking in place and real walking.81 82 It was found that participants had the greatest subjective perception of presence in the VE with real walking and the lowest with push button flying. Another finding was that participants in the push button and walking in place experiments experienced simulator sickness, whereas none of the real walkers did. Whilst the perception of presence from walking in place was greater than that of push button flying, real walking was the simplest to understand and felt the most natural.83 Further experiments have also found that participants performed better at remembering a series of spatial and verbal items when navigating VEs with natural locomotion than with unnatural methods; it has been suggested that this is due to the relative ease with which the brain can cognitively map an environment explored naturally, compared to a situation where the brain is also processing unnatural movements at the same time as mapping the spatial relationships.84

81 Usoh, M. et al, Walking, Walking in Place, Flying, In Virtual Environments, SIGGRAPH '99 Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 359-364, 1999
82 Push button flying involves using an indirect input, pushing a button, to glide to another position in the environment. Real walking is a direct input that involves motion tracking the body to produce a visual representation of walking within the virtual world, and walking in place sits between the two: it is direct in that an act of walking is being translated into an act of walking in the virtual world, but indirect in that the person walking in the physical world is not moving within the space while the virtual person is moving forwards through space. The act of walking, both in place and real, was tracked via the user's HMD, and a computer was able to identify and separate movements of the head created by the participant's walking gait from movements associated with looking around by actively moving the head. Real walking also used an optical tracker attached to the participant's hand to track their movements in the x and y planes within the space. Both types of walking were translated into the visuals of the VE as the movements we associate with walking, whilst the movements in the push button environment were smooth.
83 Usoh, M. et al, Walking, Walking in Place, Flying, In Virtual Environments, SIGGRAPH '99 Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 359-364, 1999
84 Marsh, W. et al, The Cognitive Implications of Semi-Natural Virtual Locomotion, IEEE 2012 Virtual Reality Short Papers and Posters, 47-50, 2012


Natural walking combines visual stimulation with the appropriate proprioceptive stimulation for the motor movements, so to produce the most compelling experience it would be the most natural and effective way of exploring a VE; however, it requires a much greater physical space for the person experiencing the VE than other methods. If a person is to be able to explore a 10m by 10m room in the virtual space with natural locomotion it could be assumed that they would require a 10m by 10m physical space in which to move around, compared to a space no bigger than the person themselves when using indirect locomotion techniques. In the following chapter we will look at how natural locomotion can be utilised to develop a feeling of presence in the VE whilst reducing the physical space required, creating a virtual TARDIS.

Besides relating movement of the body to the visual stimulus, there is also the issue of creating physicality in the virtual world. Effectively the virtual world is a world of immersive images that adapt and change with our motor functions, so in most VEs there can be a distinct difference between what we would perceive as a valid sensorimotor action in the physical world and that which takes place in the virtual world. For example, if we reach out to grasp what we perceive to be a three-dimensional object in the virtual world we realise that the object does not have any mass; it is purely an image, and no matter how convincing a visual stimulus is, the illusion is broken when we fail to interact with it. Whilst our learnt knowledge of the three-dimensional world will give us visual clues that some objects should be solid, there will also be other interactional virtual objects that we will need to respond to, and these will require some form of feedback system. Currently there are no VEs that can match the physical world for its inherent physicality, but there are systems that have been developed to create the illusion of interaction and force feedback.

It is considerably easier to create haptic feedback systems in VEs that use unnatural forms of locomotion and interaction, such as joysticks or other mechanical devices that provide a reactive force in response to the participant's input when a collision occurs with a virtual object.85 When attempting to utilise natural locomotion and interaction within a VE, however, the difficulty is greatly increased. Our bodies are covered in touch, thermoception and nociception receptors, and to create a completely realistic immersion of the body a feedback system would need to be able to address all, or any combination, of these groups of receptors at any time. As previously mentioned this is not currently possible, but there are haptic systems that can address specific parts of the body to create the illusion of immersion. Whilst these senses are not the focus of this essay, they are important factors in VEs, so a brief overview of some technologies is given here.

85 These have been used for many years in gaming devices such as the rumble pack for the N64 or the Dualshock controller for the Playstation.


Most current feedback devices involve mechanical exoskeletons worn by the user, such as the CyberGrasp exoskeleton, the Master II glove, the Freflex or the Master arm.86 These are all fairly bulky systems that provide mechanical feedback to specific joints within the limbs, and all require wired links to other equipment that limit the user's range of movement. The lack of subtlety in this type of equipment is damaging to the depth of presence felt in the VE. A more lightweight solution is the MEMICA exoskeleton being used by NASA;87 it utilises electrorheological fluids (ERF), a smart material that becomes more viscous when a current is applied to it.88 All of these systems, however, involve tactile contact with the skin even when tactile feedback is not required: the downfall of a wearable feedback system.

Another haptic system, a non-contact method, is being developed at the University of Tokyo. The system uses ultrasound acoustic radiation pressure to create tactile feedback with a 'high spatial and temporal resolution.'89 Whilst the system does not yet approach the reality of holding an object, it can produce vibrations indicating initial contact and texture sensation without the need for the user to wear, or be mechanically attached to, a bulky device. It is currently limited to a small feedback area of 30 cubic centimetres, but it has the potential to be developed further in the future.90

Haptic feedback is the most limiting factor in VEs, but studies outlined in this paper have shown that other factors, such as visual stimuli inducing a perceived feeling of touch, can create a strong feeling of immersion and presence within a VE without haptic systems.91 An effective feedback system could increase the feeling of presence, but the tools currently available are limited in their functions.

86 Bar-Cohen, Y. et al, Biologically Inspired Intelligent Robots, Society for Photo-Optical Instrumentation Engineers, Washington, 2003

87 Ibid

88 MEMICA uses ERF in hydraulic tubes attached to sets of muscles with Velcro to create resistance or additional positive forces to certain movements that can be adjusted with a varying low level current provided by a small battery pack. The computing devices and sensory equipment can all be carried in a backpack by the user allowing for freedom of movement.

89 Iwamoto, T. et al, Non-contact Method for Producing Tactile Sensation Using Airborne Ultrasound, Proceedings EuroHaptics 2008, LNCS 5024, 504-513, June 2008

90 The current system in development is being used to simulate the interaction of the hands with virtual objects. The hands are placed within the 30 cubic cm ultrasound field and the acoustic pulses can be directed to specific areas of the hands.

91 Heeter, C., Being there: the subjective experience of presence, Presence: Teleoperators and Virtual Environments, Vol 1(2), 267-271, 1992


NINTENDO, N64 Controller with Rumble Pack, WWW.WIKIMEDIA.ORG

MASTER II GLOVE, from Bouzit, M., The Rutgers Master II - New Design Force Feedback Glove, IEEE/ASME Transactions on Mechatronics, Vol 7(2), 256-263, June 2002

CYBERGRASP, Cybergrasp Glove, WWW.CYBERGLOVESYSTEMS.COM

MEMICA EXOSKELETON IMAGES, from Bar-Cohen, Y., Biologically Inspired Intelligent Robots, Society for Photo-Optical Instrumentation Engineers, Washington, 2003

SONY, Playstation Dualshock Controller, WWW.WIKIMEDIA.ORG


04.03.03 Simulator Sickness and Aesthetic Distance

‘The serious contradiction between corporeal reality and nature’s laws, may result in problems of perception that should not be underestimated.’92 - Oliver Grau

Simulator sickness, or motion sickness, is caused by a perceived contradiction between our visual stimulus and our kinaesthetic awareness; it can manifest itself in a number of ways, such as impairment of motor control and vision, nausea, disorientation and migraines.93 This contradiction leads to confusion in perception, which in turn produces the feeling of motion or simulator sickness. It can be a problem in VEs if our brain is able to perceive differences between our motor actions and their effect on our visual stimulus. Even if we do not consciously realise that it is a contradiction between these two systems that is causing the feeling of sickness, it can have a detrimental effect on the feeling of immersion, and it is more likely to occur when utilising unnatural forms of locomotion in VEs.94 This is an important factor to consider for the safety of users of VEs.

92 Grau, O., Virtual Art: From Illusion to Immersion, MIT Press, Cambridge, 2003, p.3

93 We are all aware of how we can become travel sick when reading a book whilst in a moving car; from our visual sense we perceive the world to be still, as the book is not moving in relation to us or the internal space of the car, however our vestibular and kinaesthetic systems are providing information that our body is in motion, motion that is not being caused by movement of our limbs, as identified by our proprioceptive systems.

94 Usoh, M. et al, Walking, Walking in Place, Flying, in Virtual Environments, SIGGRAPH '99 Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 359-364, 1999
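A crude way to reason about this contradiction is to compare the self-motion implied by the visuals with the motion actually reported by the head tracker. The sketch below is a speculative illustration only, not a validated sickness model and not drawn from the cited studies: it simply accumulates the discrepancy between visual and physical speed as a rough mismatch score that a VE might monitor.

```python
def vection_mismatch(visual_speeds, physical_speeds, dt=1/60):
    """Accumulate the absolute difference between the speed implied by the
    rendered optic flow and the speed measured by the head tracker.

    visual_speeds, physical_speeds: per-frame speeds in m/s.
    Larger scores mean a stronger, longer-lasting visual/kinaesthetic
    contradiction of the kind associated with simulator sickness above.
    """
    return sum(abs(v - p) * dt for v, p in zip(visual_speeds, physical_speeds))

# Example: a push-button 'flying' glide (visual motion, stationary body)
# produces a far larger mismatch than real walking over the same 10 seconds.
flying = vection_mismatch(visual_speeds=[2.0] * 600, physical_speeds=[0.0] * 600)
walking = vection_mismatch(visual_speeds=[1.4] * 600, physical_speeds=[1.4] * 600)
print(flying, walking)  # 20.0 vs 0.0 at 60 frames per second
```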


‘At first the audience is overwhelmed by the new and unaccustomed visual experiences, and for a short period, their inner psychological ability to distance themselves is suspended...When a new medium of illusion is introduced, it opens a gap between the power of the images effect/reflected distance of the observer. This gap narrows again with increasing exposure and there is a reversion to conscious appraisal...Habituation chips away at the illusion and soon it no longer has the power to captivate.’95- Oliver Grau

Another aspect that could have a detrimental impact on the feeling of immersion in VEs is aesthetic distance. At the first showing of the Lumière brothers' film 'Arrival of a Train at La Ciotat' it is rumoured that people ran to the back of the cinema in fright at the fast-approaching train; whether or not this is true, there was certainly a feeling of astonishment among early cinema audiences not used to seeing such realistic moving pictures.96 In modern times we are accustomed to such images and no longer treat cinema with such novelty. I would argue, contrary to Grau, that cinema still has the ability to captivate us through its narrative and to produce an emotional response, though not in the same way. Immersive VEs, as defined in this paper, are still a novelty to most people, so it is rather too soon to judge whether, with increased exposure, we will be able to distance ourselves from the effects and illusions of immersion. The technology is still developing rapidly, with more subtle technologies such as bionic contact lenses that will push the immersive experience further, and therefore it may be a long time before we perceive the VE to stagnate and develop an aesthetic distance from it.

95 Grau, O., Virtual Art: From Illusion to Immersion, MIT Press, Cambridge, 2003, p.152

96 Karasek, H., Locomotive of Emotions, Der Spiegel, Vol 52, 152-159, 1994


5.0

CHAPTER 05

Virtual[ly Impossible] Spaces

'These Virtual Environments are somewhat related to film, theatre and literature. The linearity of drama however is abandoned in favour of space and time warps, interactivity and telepresence. The work of the architect thus becomes an experiment, an exciting game...The architecture of the virtual space is variable, flexible, flowing and hybrid.'97 - Wolfgang Strauss

97 Strauss, W., Virtual Architecture and Imagination in Film and Arc 1, Artimage Graz, 1993, p.4


05.01 DEFINING AN IMPOSSIBLE SPACE

There are a number of interpretations of the term impossible space, but in this paper it is taken to mean any space that violates the laws of physics and could not physically exist or be constructed in the physical world. The physical world is taken to be the natural or man-made world in which we live: what we consider to be the 'real world'. Whilst some of the techniques mentioned later in this paper could be constructed using mechanical systems to adapt spaces, such as Kent Larson's work on 'Changing Places', this paper is primarily focused on the manipulation of space through our sensory experience and perception of it. The description of an impossible space in this text relies very much on our perception that our built environment is solid and stationary; that rooms and buildings do not morph or change size spontaneously.



PAUL HOLLINGWORTH, We Love to Build, WWW.PH-GRAPHIC.CO.UK


05.02 REDIRECTION TECHNIQUES

Utilising natural locomotion techniques in an immersive VE has been shown to produce the greatest presence and the most natural experience, and is therefore the most appropriate form of locomotion in a VE. However, it has disadvantages when compared to indirect forms of locomotion, which can provide exploration of an infinite virtual space from a physical area no larger than that required for a single person to stand in. Natural locomotion requires a physical space of a similar size to the virtual space, so that the user can move freely without reaching the limits of, and colliding with objects in, the physical world. Freedom to explore is an important factor in creating a feeling of presence in VEs: if a person can explore freely in an infinite space, it is more likely to evoke the feeling that the space is real and not a virtual construction.98

It has been found that combining motor movements with visual stimuli helps us to orientate and navigate in our environments. However, tests carried out for the purpose of this paper revealed that, when devoid of any visual stimulus, participants find it very hard to navigate and to maintain any spatial orientation of their body in space.99 Combining this finding with the previously discussed research: our kinaesthetic perception helps the body apply appropriate sensorimotor responses that aid the perception of our visual stimuli, but when we have purely a kinaesthetic awareness, without visual stimuli, the body struggles to map spatial relationships. This could be due to our general reliance on the visual sense and our relative inexperience in consciously interpreting the kinaesthetic, proprioceptive and vestibular systems. This limitation of our ability to perceive relative movements opens up opportunities for spatial manipulation in the virtual world.

Redirection techniques are tools that can be used in VEs to expand or compress the virtual space in relation to the physical space the user occupies. These techniques allow a virtual space that is larger than the physical space to be explored using natural locomotion without the user reaching the limits of the physical space. The illusion exploits the limitations of our perceptual systems, which cannot accurately map space cognitively as a Euclidean cartographic map. These techniques include translation, rotation and curvature gains, resetting, change blindness, teleportation and overlapping architecture.

98 Heeter, C., Being there: the subjective experience of presence, Presence: Teleoperators and Virtual Environments, Vol 1(2), 267-271, 1992

99 See Appendix v.ii, Test 02



TRANSLATION GAINS (diagrams A-C): A - Normal walking experience: the environment is stationary and the user travels the same distance in the virtual environment as in the physical location. B - Expanding space: the environment moves in the opposite direction to the walking motion, so the user travels a greater distance in the virtual environment than in the physical location. C - Compressing space: the environment moves in the same direction as the walking motion, so the user travels a shorter distance in the virtual environment than in the physical location.

ROTATION GAINS (diagrams D-F): D - Normal rotation experience: the environment is stationary and the user rotates the same amount in the virtual environment as in the physical location. E - Compressing space: the environment moves in the same direction as the rotation, so the user rotates a smaller amount in the virtual environment than in the physical location. F - Expanding space: the environment moves in the opposite direction to the rotation, so the user rotates a larger amount in the virtual environment than in the physical location.

Each diagram compares walked distance with perceived distance, or body rotation with perceived rotation. Drawings by Author


Translation gains are used to modify the distance covered by natural locomotion in the virtual space compared to that covered in the physical space; this is achieved by speeding up or slowing down the rate at which the visual stimulus of the VE moves in relation to the actual speed of the person's movements. As with all redirection techniques, these gains can be utilised overtly or subtly. The more subtle the redirection technique, the less likely the user is to pick up on it, and therefore the more natural the interaction appears and the more immersive the environment becomes. Tests have determined that distances can be downscaled by 14% or upscaled by 26% before they become noticeable as unnatural to the user.100 Overt techniques can also be effective if they are combined with visual metaphors such as escalators, lifts or vehicular motion.101

100 Steinicke, F. et al, Estimation of Detection Thresholds for Redirected Walking Techniques, IEEE Transactions on Visualization and Computer Graphics, 16(1), 17-27, 2010

101 Suma, E. et al, A Taxonomy for Deploying Redirection Techniques in Immersive Virtual Environments, IEEE 2012 Virtual Reality Short Papers and Posters, 43-46, 2012
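As a sketch of how a translation gain might be applied per frame (illustrative Python only, not code from the cited systems), the physical step measured by the tracker is simply scaled before being applied to the virtual camera, with the gain clamped to the detection thresholds reported by Steinicke et al. (distances downscaled by no more than 14% or upscaled by no more than 26%) if the manipulation is to remain subtle:

```python
# Detection thresholds from Steinicke et al. (2010), expressed as gains:
# walking can be scaled down to 86% or up to 126% of the physical distance
# before most users notice the manipulation.
MIN_SUBTLE_GAIN = 1.0 - 0.14   # 0.86
MAX_SUBTLE_GAIN = 1.0 + 0.26   # 1.26

def apply_translation_gain(physical_step, gain, subtle=True):
    """Scale a physical displacement (dx, dy), in metres, into a virtual one.

    gain > 1 expands the virtual space (the user travels further virtually
    than physically); gain < 1 compresses it.
    """
    if subtle:
        gain = max(MIN_SUBTLE_GAIN, min(MAX_SUBTLE_GAIN, gain))
    dx, dy = physical_step
    return (dx * gain, dy * gain)

# A 10 m physical walk rendered with the maximum subtle gain covers
# 12.6 m of virtual ground - a modest but unnoticed expansion of the space.
print(apply_translation_gain((10.0, 0.0), gain=1.26))  # (12.6, 0.0)
print(apply_translation_gain((10.0, 0.0), gain=2.0))   # clamped back to (12.6, 0.0)
```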

INFINITE CITY, Deploying rotation gains to create an infinite loop to explore the city. Drawing by Author


Rotation gains are similar to translation gains but address angles rather than distances. As a person rotates in physical space, their view of the virtual space rotates slower or faster than their actual rotation, meaning they perceive an increased or decreased angle of rotation. These gains can be used to make a person perceive a route as a snaking 'S'-shaped path in the virtual space when in reality they are walking in a figure of eight in physical space; this allows the creator of the VE to redirect the person experiencing it away from the limits of the physical space. It is possible to scale up rotation by up to 49%, or scale it down by 20%, before it is identified by the person experiencing it.102 Curvature gains are a continuous deployment of rotation gains: the user perceives themself to be walking in a straight line whilst the virtual space is constantly rotating by small amounts, and the body subconsciously adjusts its relative movement, forcing them to walk an arced route in physical space without realising. The radius of the curve that they walk must be greater than 22m for the route to be perceived as straight.103 Rotation gains can also be deployed overtly, as a failsafe, if the user is about to reach the physical limits of the space: the visual display of the VE is frozen whilst the user rotates and then reactivated when they have finished their rotation.104 This obviously interrupts the user's immersion, creating a discontinuity in the VE, and is classified as 'resetting'.105

102 Steinicke, F. et al, Estimation of Detection Thresholds for Redirected Walking Techniques, IEEE Transactions on Visualization and Computer Graphics, 16(1), 17-27, 2010

103 Ibid

104 Williams, B. et al, Exploring Large Virtual Environments with an HMD when Physical Space is Limited, Symposium on Applied Perception in Graphics and Visualization, 41-48, 2007

105 Suma, E. et al, A Taxonomy for Deploying Redirection Techniques in Immersive Virtual Environments, IEEE 2012 Virtual Reality Short Papers and Posters, 43-46, 2012
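The same per-frame scaling applies to rotation and curvature gains. The sketch below (again illustrative Python, not taken from the cited studies) clamps a rotation gain to the reported thresholds (up to 49% upscaling, 20% downscaling) and expresses the curvature constraint as the minimum 22m physical radius for a virtually straight path:

```python
import math

# Detection thresholds from Steinicke et al. (2010).
MIN_ROTATION_GAIN = 1.0 - 0.20   # 0.80: virtual rotation may be 20% smaller
MAX_ROTATION_GAIN = 1.0 + 0.49   # 1.49: or 49% larger than the physical turn
MIN_CURVATURE_RADIUS = 22.0      # metres: tightest physical arc still read as straight

def apply_rotation_gain(physical_turn_deg, gain, subtle=True):
    """Return the virtual rotation shown for a given physical head/body turn."""
    if subtle:
        gain = max(MIN_ROTATION_GAIN, min(MAX_ROTATION_GAIN, gain))
    return physical_turn_deg * gain

def curvature_gain_per_metre():
    """Rotation (degrees) injected per metre walked that keeps the user on the
    tightest undetectable circle, i.e. one of radius MIN_CURVATURE_RADIUS."""
    return math.degrees(1.0 / MIN_CURVATURE_RADIUS)

# A user physically turning 121 degrees can be shown a 180 degree virtual turn
# (gain of about 1.49), and walking a 'straight' virtual line bends their
# physical path by roughly 2.6 degrees for every metre walked.
print(round(apply_rotation_gain(121, gain=1.49), 1))   # 180.3
print(round(curvature_gain_per_metre(), 2))            # 2.6
```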

COMPRESSED CITY, Deploying translation gains to compress the city into a smaller area. Drawing by Author


In tests carried out for the purpose of this paper, to affirm the findings of the research mentioned above, videos were used with a HMD to replace the visual stimulus whilst participants underwent physical rotations. When asked to undertake physical rotations of 180°, the participants were presented with visual stimuli of rotations that were faster or slower than the movement of their body. These tests found that when presented with visual rotations faster than their bodily rotations the participants would rotate their bodies less than 180°, whilst perceiving that they had rotated the full amount, and when presented with visual rotations slower than their bodily rotations they would over-rotate. This was due to a sensorimotor recalibration that creates a perceptive compromise between the contradicting signals provided by the visual and kinaesthetic senses.106

106 See Appendix v.iii, Test 03
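The under- and over-rotation observed in the test follows directly from the gain: if the participant stops when the visual rotation reaches 180°, the physical rotation they actually perform is 180° divided by the ratio between the video and body rotation speeds. A short worked sketch of this assumed, simplified relationship (not the analysis script used for the tests):

```python
def expected_physical_rotation(target_visual_deg=180.0, video_speed_ratio=1.0):
    """Physical rotation performed if the participant stops once the video
    has visually rotated by target_visual_deg.

    video_speed_ratio > 1: video rotates faster than the body, so the
    participant under-rotates; < 1: video is slower, so they over-rotate.
    """
    return target_visual_deg / video_speed_ratio

for ratio in (1.5, 1.0, 0.75):
    print(ratio, round(expected_physical_rotation(video_speed_ratio=ratio), 1))
# 1.5 -> 120.0 (under-rotation), 1.0 -> 180.0, 0.75 -> 240.0 (over-rotation)
```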

These graphs show the mean amounts of physical rotation plotted against the speed of video rotation in the tests carried out to investigate rotation gains for this paper.

They show that as the video rotation speed decreased, the physical rotation of the participants increased to compensate for the contradicting signals from the visual and kinaesthetic senses.


Stills from the stereoscopic video used in the tests. The linear nature of a train platform was used so that a visual rotation of 180° was easily identifiable to the participants. Images by Author

Utilising a simple HMD to test theories of rotation gains in this paper. See Appendix v.iii, Test 03 for a full explanation of the tests. Image by Author


The previous examples have all dealt with manipulating the user's perception of self-motion; the following examples show an alternative approach, which is to manipulate the VE itself. Change blindness, as identified previously, is our inability to perceive considerable changes in our visual stimulus during eye movements. This perceptual gap allows the creators of VEs to adjust features within the VE; the technique can be used to move or add features, such as doorways, in different areas of the VE. Studies have found that when it is utilised in this way only 1 in 77 participants notices the changes to the scene.107
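In implementation terms, a change-blindness redirection simply waits until the element to be moved is outside the user's current view before editing the scene. A minimal sketch, assuming a simple horizontal field-of-view test (illustrative only, not the method of the cited study):

```python
import math

def is_visible(user_pos, user_heading_deg, point, fov_deg=110.0):
    """True if 'point' lies inside the user's horizontal field of view."""
    dx, dy = point[0] - user_pos[0], point[1] - user_pos[1]
    angle_to_point = math.degrees(math.atan2(dy, dx))
    diff = (angle_to_point - user_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def maybe_relocate_doorway(user_pos, user_heading_deg, doorway_pos, new_pos):
    """Move the doorway only while the user cannot see it, exploiting
    change blindness; otherwise leave the scene untouched this frame."""
    if not is_visible(user_pos, user_heading_deg, doorway_pos):
        return new_pos      # the swap happens behind the user's back
    return doorway_pos      # visible: defer the change to a later frame

# Facing along +x, a doorway directly behind the user (-x) can be moved now.
print(maybe_relocate_doorway((0, 0), 0.0, doorway_pos=(-5, 0), new_pos=(0, 5)))
# (0, 5)
```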

Teleportation is an overt technique to transport a user from one environment directly to another without natural motor movements. It can prove very disorientating to the user but can be made sense of by the use of portals in the environment.108 It has also been found that users experience a greater level of presence when entering the virtual world through a portal in a virtual replica of their physical world rather than abruptly entering the virtual world.109

Overlapping architecture creates a VE in which different virtual areas sit on top of each other in the physical world. This technique requires a transitional space between the two or more overlapping sections of the environment, such as a foyer between two rooms. As the user exits the first room and enters the foyer, the virtual space is reconfigured, and when they enter the second room part of its footprint overlaps with where the first room was originally positioned. This means that under natural locomotion the user will walk over the same part of the physical space but will experience a new virtual space while in that position. For this spatial manipulation it was found that two small rooms that only partially fill the overall physical space can overlap by 55% before participants detect them as an impossible space, whilst two larger rooms that fill the entire physical space can overlap by up to 31%.110 One person in the test experienced strong simulator sickness, but this was attributed to their strong prescription glasses affecting the use of the head mounted display.111
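The geometric check behind self-overlapping architecture can be sketched as follows (illustrative Python; the thresholds are those reported by Suma et al., while the helper itself is an assumption): the overlap between two rectangular room footprints is expressed as a fraction of a room's area and compared against the detection limits of 55% for small rooms and 31% for rooms that fill the whole tracked space.

```python
def rect_overlap_fraction(room_a, room_b):
    """room_* = (x_min, y_min, x_max, y_max) in metres.
    Returns the overlap area as a fraction of room_b's area."""
    ax0, ay0, ax1, ay1 = room_a
    bx0, by0, bx1, by1 = room_b
    w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    h = max(0.0, min(ay1, by1) - max(ay0, by0))
    area_b = (bx1 - bx0) * (by1 - by0)
    return (w * h) / area_b

def overlap_is_detectable(fraction, rooms_fill_tracked_space):
    """Detection thresholds from Suma et al. (2012): 31% for large rooms
    that fill the physical space, 55% for smaller rooms."""
    threshold = 0.31 if rooms_fill_tracked_space else 0.55
    return fraction > threshold

# Two 4 m x 4 m rooms sharing a 2 m x 4 m strip overlap by 50%:
f = rect_overlap_fraction((0, 0, 4, 4), (2, 0, 6, 4))
print(round(f, 2), overlap_is_detectable(f, rooms_fill_tracked_space=False))
# 0.5 False  -> below the 55% limit, so likely to pass unnoticed for small rooms
```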

All of these techniques contribute to the ability to create a virtual TARDIS that can be explored using natural locomotion. The researched limitations of these techniques provide a set of parameters defining the maximum expansion possible before it is perceived by the user of the environment. This opportunity, combined with developing technologies that create a more subtle illusion of immersion, presents possibilities for augmenting the physical world with impossible VEs, allowing spatial designers to create spaces that are larger on the inside.

107 Suma, E. et al, Leveraging Change Blindness for Redirection in Virtual Environments, IEEE Virtual Reality, 159-166, 2011

108 Suma, E. et al, A Taxonomy for Deploying Redirection Techniques in Immersive Virtual Environments, IEEE 2012 Virtual Reality Short Papers and Posters, 43-46, 2012

109 Steinicke, F. et al, Gradual Transitions and their Effects on Presence and Distance Estimation, Computers & Graphics, 34(1), 26-33, 2010

110 Suma, E. et al, Impossible Spaces: Maximising Natural Walking in Virtual Environments with Self-Overlapping Architecture, IEEE Transactions on Visualization and Computer Graphics, Vol 18(4), 555-564, April 2012

111 Ibid


VIRTUAL TILES, A library of interchangeable tiles could be used to create dynamic overlapping architectures in the virtual worldDrawing by Author


05.03 UTILISING THE VIRTUAL TARDIS

'We were asked to design a way to show all aspects of the cultural impact existing in the city through a mobile exhibition...In other words the task was to minimize and pack the whole city in a bag and send it on its journey.'112 - Ivan Redi

Currently, immersive VR environments are used across a wide range of fields, including gaming, art, science, medicine, psychology and philosophy. Research into impossible VEs utilising natural locomotion techniques is fairly new, and their use is therefore not yet widespread, having been focused in the fields of neuroscience and psychology as research tools.113 This part of the paper looks at some existing virtual and non-virtual precedents and attempts to speculate on future uses of the impossible VE. Whilst the implications of impossible VEs using natural locomotion are extensive across many fields, I will focus on the role they can play within future architecture.

'We're looking at design algorithms where you match a personal profile to a solution profile, you assemble a completely configured apartment and then you give people the tools to go into that space and refine it using these kind of advanced computational tools...we think we can make a very small apartment that functions as if it is twice as big.'114 - Kent Larson

112 Redi, I. et al, The Relationship Between Architecture and Virtual Media in Disappearing Architecture: From Real to Virtual to Quantum, Birkhauser, Basel, 2005

113 Suma, E. et al, Impossible Spaces: Maximising Natural Walking in Virtual Environments with Self-Overlapping Architecture, IEEE Transactions on Visualization and Computer Graphics, Vol 18(4), 555-564, April 2012

114 Larson, K., Kent Larson: Brilliant Designs to Fit More People in Every City, June 2012, retrieved from http://www.ted.com/talks/kent_larson_brilliant_designs_to_fit_more_people_in_every_city.html, March 2013



Kent Larson's work on the CityHome at the MIT Media Lab looks at creating flexible, high-tech, mechanised residential interiors. The interiors are intended to address the decreasing size of urban residences and robotically shift and adjust to the needs of the user, allowing for multiple spaces that would collectively require a much larger footprint. Instead of being individually designed or mass produced, the team is developing a series of user profiles that relate to a set of spatial solutions for the apartments, allowing customisation of the flexible parts. This technique is very similar to the concept of impossible spaces, but rather than utilising virtual means it uses mechanics in the physical world. It has its advantages over virtual worlds in its physicality: the ability for the user to touch and interact with the objects in that world. However, a virtual version of this system would allow constant and infinite flexibility compared to the limited set of movements and layouts offered by a physical mechanical system. In the CityHome there is a predefined set of spaces that can be created, such as a single open-plan apartment, guest bedroom, dining room and study, existing in a finite set of combinations, whereas in the virtual world the opportunities to adapt and remodel the spaces would be infinite, limited only by the imagination.
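A toy sketch of the profile-to-solution matching Larson describes might look like the following (entirely hypothetical data and scoring; the CityHome project's actual algorithms are not published in the sources used here): each predefined layout is scored against a resident's stated needs, and the best fit becomes the starting configuration they then refine.

```python
# Hypothetical profiles: each layout advertises the activities it supports well.
LAYOUTS = {
    "open_plan":     {"entertaining": 3, "work": 1, "guests": 0, "dining": 2},
    "guest_bedroom": {"entertaining": 0, "work": 1, "guests": 3, "dining": 0},
    "study":         {"entertaining": 0, "work": 3, "guests": 0, "dining": 0},
    "dining_room":   {"entertaining": 2, "work": 0, "guests": 1, "dining": 3},
}

def match_layout(personal_profile):
    """Return the layout whose profile best matches the resident's weights.
    personal_profile: {activity: importance 0-3}."""
    def score(layout):
        return sum(personal_profile.get(k, 0) * v for k, v in LAYOUTS[layout].items())
    return max(LAYOUTS, key=score)

# A resident who mostly works from home and rarely hosts guests:
print(match_layout({"work": 3, "dining": 1, "entertaining": 1, "guests": 0}))  # study
```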

MIT MEDIA LAB CHANGING PLACES, Home Genome Project: Design Solutions as a Set of Inter-changeable Components, CP.MEDIA.MIT.EDU


MIT MEDIA LAB CHANGING PLACES, CityHome Variations, CP.MEDIA.MIT.EDU


Google Glass is a wearable computer with functionality very similar to that of a smartphone, housed in the frame of a pair of glasses. In the frame is a small heads-up display that can produce visual augmented reality components within the wearer's field of view. It is being developed with functions such as voice activation, Wi-Fi, Bluetooth, GPS, a built-in camera and links to social media and the internet. The GPS allows the wearer's location to be recorded and appropriate augmented reality views to appear before them. The product is due to be commercially available in 2013 and has already undergone a period of consumer and developer testing. Whilst questions have been raised about the distraction and invasion of privacy that could result from the glasses, they open up considerable opportunities for augmented and virtual reality. If someone is always wearing this type of device, and it can be geographically located, overlays of VEs could be almost seamlessly incorporated into the physical world. As these kinds of technologies, including bionic contact lenses and virtual retinal displays, develop further, become more subtle and incorporate greater computing power, the boundary between the virtual and the physical can be blurred and manipulated, challenging our perception of what constitutes 'real' experience and 'actual' space.115
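As a minimal sketch of the geolocated-overlay idea described above (hypothetical data and function names, not the Glass API), a device that knows its GPS position could simply select whichever registered virtual overlay is anchored nearest to it, within some activation radius:

```python
import math

# Hypothetical registry of virtual overlays anchored to real-world coordinates.
OVERLAYS = [
    {"name": "virtual_showroom", "lat": 51.4826, "lon": 0.0077, "radius_m": 50},
    {"name": "tardis_pavilion",  "lat": 51.5007, "lon": -0.1246, "radius_m": 100},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance using an equirectangular projection,
    adequate for the short activation ranges used here."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371000.0

def active_overlay(lat, lon):
    """Return the nearest overlay whose activation radius contains the wearer."""
    candidates = [(distance_m(lat, lon, o["lat"], o["lon"]), o) for o in OVERLAYS]
    d, nearest = min(candidates, key=lambda c: c[0])
    return nearest["name"] if d <= nearest["radius_m"] else None

print(active_overlay(51.4827, 0.0078))   # 'virtual_showroom' (inside its radius)
print(active_overlay(51.4000, 0.0000))   # None (nowhere near an anchor)
```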

Combining immersive technologies and impossible virtual spaces to allow for a sensorimotor experience that can lead us to perceive the virtual as a physical possibility presents a new situation to architects. Their role of creating spatial relationships is no longer one of just physical space but also virtual space. The opportunities this present are infinite, subtly deployed VEs that merge with the physical world can be explored using undetectable redirection techniques creating an illusion of infinite space within tight physical environments. The architect’s new role is one of expanding or compressing space within space, of creating TARDIS space. There may even be situations where impossible spaces can be deployed overtly, in situations where it doesn’t matter if the experiencer perceives a spatial glitch, these could be used in virtual showrooms or to transport you to another place, creating an architecture that spans the globe from within your home. Creating interactive immersive environments further increases the possibilities, if we can adjust and manipulate our environment in real time our environments will become flexible and constantly evolving, creating a participatory, evolving architecture. A framework of algorithms and limitations can be developed that subtly deploy redirection techniques automatically when we approach the limits of the physical space whilst immersed in the virtual; an intelligent system of tracking would be constantly overlooking and manipulating our perception of the world. It is a viable possibility that in the future we may be living in a world that straddles relative virtual and physical dimensions in time and space.

115 Rashid, H., Entering an Age of Fluidity in Disappearing Architecture, from Real to Virtual to Quantum, Birkhauser, Basel, 2005
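A framework of that kind can be sketched very simply (illustrative Python, an assumption rather than any published controller): each frame, the system checks how close the tracked user is to the physical boundary and, once within a safety margin, injects a subtle rotation gain that steers their physical path back towards the centre of the room while their virtual heading appears unchanged.

```python
import math

ROOM_HALF_SIZE = 5.0          # a 10 m x 10 m tracked space, centred on the origin
SAFETY_MARGIN = 1.5           # start redirecting within 1.5 m of a wall
MAX_INJECTED_DEG_PER_M = 2.6  # roughly the undetectable curvature limit (22 m radius)

def redirection_step(pos, heading_deg, step_m):
    """One frame of a boundary-aware redirection controller.

    Returns the extra physical rotation (degrees) to inject this frame.
    The virtual camera is counter-rotated by the same amount, so the user
    perceives a straight virtual path while physically curving back towards
    the centre of the room.
    """
    x, y = pos
    distance_to_wall = ROOM_HALF_SIZE - max(abs(x), abs(y))
    if distance_to_wall > SAFETY_MARGIN:
        return 0.0                                # plenty of space: no manipulation
    # Steer towards the room centre: rotate towards the bearing of (0, 0).
    bearing_to_centre = math.degrees(math.atan2(-y, -x))
    error = (bearing_to_centre - heading_deg + 180.0) % 360.0 - 180.0
    injected = math.copysign(min(abs(error), MAX_INJECTED_DEG_PER_M * step_m), error)
    return injected

# Walking towards the +x wall at x = 4.0 m, the controller begins to bend the
# physical path by about 2.6 degrees per metre, imperceptibly turning the user away.
print(round(redirection_step((4.0, 0.0), heading_deg=0.0, step_m=1.0), 2))  # -2.6
```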


GOOGLE, Google Glass, WWW.GOOGLE.COM/GLASS


RETINA CITY, In the future it will be possible to project images directly onto the eye with virtual retinal displays and bionic contact lenses; with these technologies a virtual city can be created that exists only in the eye of the viewer. Drawing by Author


6.0

CHAPTER 06

Conclusion


It has become evident through the course of this paper that there is a considerable gap between what we perceive as reality and the actual stimulation we receive from our senses; this unconscious act of interpretation is open to illusion and manipulation once the cognitive processes are understood. Often seen as a limitation of perception, this gap is not necessarily a negative thing: it presents opportunities in the creation of perceived spatial relationships, allowing an expansion or contraction of space that offers a tool to spatial designers.

‘More and more we spend our time in VEs or places determined by image production rather than space making’116- Aaron Betsky

There is much discussion of the proliferation of the image in modern culture, especially in architectural design. It is often argued that the widespread use of virtual design tools is causing the focus of architecture to shift from space making to the creation of stylised utopian images. Whilst a large proportion of immersive VR environments rely heavily on the visual sense to develop immersion, it is the combination of this with the body, allowing natural locomotion through the VE, that moves the technique away from the purely visual. Haptic feedback technologies are the limiting factor in these VEs, and it could therefore be argued that they remain image spaces, but image spaces that can be inhabited and that respond to the viewer. As haptic technologies develop and allow for more feedback and physicality, the virtual world will move further away from the idea of an image world and become another interactive dimension of the physical world.

'As soon as the internet is able image spaces will be available online that at present can be seen only in the form of elaborate and costly installations at festivals or in media museums.'117 - Oliver Grau

Moore's law states that computing power approximately doubles every two years. Computing is already incredibly powerful, but as this power develops, mobile technologies and miniaturisation will allow designers to create a more subtle integration of virtual space into the physical world, as is already being seen in the development of Google Glass and bionic contact lenses. This subtlety, and the ability to manipulate our perception and spatial understanding without our knowledge, opens up many ethical questions of privacy, control and freedom. It may be the case that, like many augmented reality technologies currently available, the user will have to opt in to the experience through an interface rather than becoming seamlessly immersed in the VE; a factor that would address some of these ethical issues but would affect the perception of presence and the integration of the two worlds.

116 Betsky, A., From Box to Intersection - Architecture at the Crossroads in Flachbart, G. et al (eds), Disappearing Architecture, from Real to Virtual to Quantum, Birkhauser, Basel, 2005

117 Grau, O., Virtual Art: From Illusion to Immersion, MIT Press, Cambridge, 2003, p.15


‘Conscious reality changes as the software of everyday life changes, and remains changed thereafter. . . . Chronic exposure to simulated ideas, moods, and images conditions your sensibilities . . . for how the real world should look, how fast it should go, and how you should feel when living in it.’118- Richard DeGrandpre

Due to the relatively recent development of natural locomotion used with redirection techniques in VEs, there is no long-term research into how it will affect our sensibilities during everyday life. Prolonged exposure to the virtual world, and especially to a virtual world where your perception of space is manipulated by misaligning your visual stimulus with your kinaesthetic senses, albeit a misalignment that is not consciously perceived by the body, could have serious implications for our overall perception. Effects like the feeling you get when you step off a moving walkway, when it takes your body a moment to adjust to being on stationary ground, could emerge from prolonged use of a world in which your physical movements are translated into exaggerated virtual movements. It has been shown that when presented with a visual stimulus that is offset from our proprioceptive perception, we undergo a proprioceptive recalibration to bring our senses back in tune with one another,119 so what will be the effect of constantly switching between two worlds with different relationships between the proprioceptive and visual senses? Simulator sickness has shown that when our bodies perceive the contradiction between our kinaesthetic and visual perception we can become dizzy and nauseous and develop headaches. The studies into the limits of redirection techniques have taken these effects into consideration, but all of these tests have involved short periods of exposure, and individual responses to travel and motion sickness vary, so further research is required. There are also ethical issues relating to desensitisation: in the virtual world, cause and effect are different to those of the physical world. Behaviour that affects the virtual world can be undone, and the virtual reproduced or edited, but if the same behaviour is carried out in the physical world it could have more serious consequences.

In some of the tests carried out into redirection techniques there have been anomalies in the detection of impossible spaces. In the tests involving overlapping architecture, two of the 12 participants vocalised their ability to detect the incorrect spatial relationships between the two overlapping spaces earlier than the other participants, one of them stating, 'It doesn't make sense. This room is too big considering that one was over there.'120 Both of these participants were later identified as having extensive experience of video games. This suggests that it may be possible to develop an aesthetic distance from the virtual world when accustomed to experiencing virtual spatial representations. Again, this area is under-researched, and there are many factors, such as the immersive technology used, that could affect this judgement.

118 DeGrandpre, R., Digitopia: The Look of the New Digital You, Random House, New York, 2001

119 Cressman, E. et al, Sensory Recalibration of Hand Position Following Visuomotor Adaptation, Journal of Neurophysiology, 102, 3505-3518, Oct 2009

120 Suma, E. et al, Impossible Spaces: Maximising Natural Walking in Virtual Environments with Self-Overlapping Architecture, IEEE Transactions on Visualization and Computer Graphics, Vol 18(4), 555-564, April 2012


‘Today architecture must organize itself into different configurations, simultaneously hybrid spatialities nourished by technology and media. Architecture is entering an age of fluidity without the ontological anchor that geometrically defined space previously supplied; it must express and create new modalities, open up possible worlds. What we are in general experiencing is a continually mutating spatiality.’121 - Hani Rashid

Before these technologies can be implemented in our everyday lives there needs to be further research into the effect they have on our physical and mental wellbeing, as well as developments that increase the feeling of immersion, in particular further haptic feedback systems for interaction with the virtual world. Despite this need for further research, the unperceived gap between our sensory stimuli and our spatial perceptions, and the ways in which manipulations of it can manifest themselves in the physical world, present interesting opportunities to spatial designers. It is an exciting prospect that the virtual world could become so real that we no longer detect the difference between the virtual and the physical, and are able to manipulate space and time in a way that has never before been possible in the physical world.

‘I think of immersive virtual space as a spatio-temporal arena wherein mental models or abstract constructs can be given virtual embodiment in three dimensions and then kinaesthetically, synaesthetically explored through full body immersion and interaction. No other space allows this, no other medium of human expression.’122- Charlotte Davies

121 Rashid, H., Entering an Age of Fluidity in Disappearing Architecture, from Real to Virtual to Quantum, Birkhauser, Basel, 2005

122 Davies, C., Osmose: Notes on Being in Immersive Virtual Space, Digital Creativity, Vol 9(2), 65-74, 1998, p.67

INFINITE INTERCHANGEABLE CITY, Combining rotation gains with overlapping architecture that uses interchangeable city tiles, an infinite, constantly morphing city could be explored naturally in the virtual environment. Drawing by Author


DENSE CITY, By utilising all the redirection techniques, a dense city that becomes larger as you freely explore it could fit into the tight boundaries of the physical world. Drawing by Author


iv

CHAPTER iv

Bibliography


Applin, A. et al, A Cultural Perspective on Mixed, Dual and Blended Reality, IUI Workshop on Location Awareness for Mixed and Dual Reality LAMDaʼ11, Palo Alto, 2011

Bar-Cohen, Y. et al, Biologically Inspired Intelligent Robots, Society for Photo-Optical Instrumentation Engineers, Washington, 2003

Barfield, W. et al, The Sense of Presence Within Virtual Environments: A Conceptual Framework, in Human-Computer Interaction: Software and Hardware Interfaces, Vol B, Elsevier, 699-704, 1993

Bennett, A., Do animals have cognitive maps? Journal of Experimental Biology, Vol 199, 219-224, 1996

Betsky, A., From Box to Intersection – Architecture at the Crossroads in Flachbart, G. et al (eds), Disappearing Architecture, from Real to Virtual to Quantum, Birkhauser, Basel, 2005

Biederman, I., Recognition by Components: A Theory of Human Image Understanding, Psychological Review, Vol 94(2), 115-147, 1987

Billinghurst, M., Collaborative Mixed Reality, in Proceedings of International Symposium on Mixed Realities (ISMR ‘99), Mixed Reality – Merging Real and Virtual Worlds, 261-284, 1991

Bowan, M., Integrating Vision with Other Senses, OEP Foundation post-graduate curriculum, Vol 40(2), 1-10, December 1999

Bouzit, M., The Rutgers Master II - New Design Force Feedback Glove, IEEE/ASME Transactions on Mechatronics, Vol 7(2), 256-263, June 2002

Bruder, G. et al, Redirecting Walking and Driving for Natural Navigation in Immersive Virtual Environments, IEEE Visualization and Computer Graphics, Vol 18(4), 538-545, April 2012

Burgess, N., et al, Memory for events and their spatial context: models and experiments. Philosophical Transactions of the Royal Society of London - Series B: Biological Sciences, Vol 356, 1493-503, 2001

Burgess, N., et al, The human hippocampus and spatial and episodic memory. Neuron, Vol 35, 625-41, 2002


Christou, C., et al, Perception, Representation and Recognition, A Holistic View of Recognition, Spatial Vision, Vol 13(2), 265-275, 2000

Chuah, J, et al, Increasing Agent Physicality to Raise Social Presence and Elicit Realistic Behaviour, IEEE 2012 Virtual Reality Short Papers and Posters, 11-14, March 2012

Cirio, G. et al, Walking in a Cube: Novel Metaphors for Safely Navigating Large Virtual Environments in Restricted Real Workspaces, IEEE Visualization and Computer Graphics, Vol 18(4), 546-554, April 2012

Collins English Dictionary 11th Ed., HarperCollins, London, 2011

Crary, J., Suspensions of Perception: Attention, Spectacle, and Modern Culture, MIT Press, Cambridge, 2001

Cressant, A., et al, Remapping of place cell firing patterns after maze rotations. Experimental Brain Research, Vol 143, 470-479, 2002

Cressman, E. et al, Motor adaptation and proprioceptive recalibration, Progress in Brain Research, Vol 191, 9-99, 2011

Cressman, E. et al, Reach Adaption and Proprioceptive Recalibration Following Exposure to Misaligned Sensory Input, Journal of Neurophysiology, Vol 103, 1888-1895, February 2010

Cressman, E. et al, Sensory Recalibration of Hand Position Following Visuomotor Adaptation , Journal of Neurophysiology, Vol 102, 3505–3518, Oct 2009

Davies, C., Osmose: Notes on Being in Immersive Virtual Space, Digital Creativity, Vol 9(2), 65-74, 1998

DeGrandpre, R., Digitopia: The Look of the New Digital You, Random House, New York, 2001

Edwards, E. et al, Visual Sense: A Cultural Reader, Berg, Oxford, 2008

Eichenbaum, H. et al, The hippocampus, memory, and place cells: is it spatial memory or a memory space?, Neuron, Vol 23, 209-226, 1999

Ekstrom, A. D., et al, Cellular networks underlying human spatial navigation. Nature, Vol 425, 184-188, 2003


Epstein, R. and Kanwisher, N., A cortical representation of the local visual environment. Nature, Vol 392, 598-601, 1998

Fangtu, T. et al, Figure and Ground in the Visual Cortex: V2 Combines Stereoscopic Cues with Gestalt Rules, Neuron, Vol 47(1) 155–166, 2005

Ferwerda, J. et al, Perceiving Spatial Relationships in Computer Generated Images, IEEE Computer Graphics and Applications, Vol 12(3), 44-58, 1992

Ferwerda, J., Envisioning the Material World, Vision, Vol 22(1), 49-59, 2010

Ferwerda, J., Fundamentals of Spatial Vision, Applications of Visual Perception in Computer Graphics Conference, 1998

Flanders, M., What is the Biological Basis of Sensorimotor Integration?. Biological Cybernetics, Vol 104(1/2), 1-8, 2011

Freud, S., The ‘Uncanny’. The Standard Edition of the Complete Psychological Works of Sigmund Freud, Volume XVII (1917-1919): An Infantile Neurosis and Other Works, 217-256, 1919

Gardner, E., Principles of Neuroscience Fourth Edition, McGraw-Hill, New York, 2000

Gärling, T., et al, Memory for the spatial layout of the everyday physical environment: factors affecting rate of acquisition, J. Environ. Psychol., Vol 1, 263–277, 1981

Gaussier, P. et al, From view cells and place cells to cognitive map learning: processing stages of the hippocampal system. Biological Cybernetics, Vol 86, 15-28, 2002

Geuss, M. et al, Can I Pass?: Using Affordances to Measure Perceived Size in Virtual Environments, Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization, New York, 61-64, 2010

Gibson, E. et al, The Visual Cliff, Scientific American, Vol 202, 64-67, 1960

Goldstein, E. et al, Encyclopedia of Perception, Sage, London, 2010

Golledge, R. et al, Spatial Behaviour: A Geographic Perspective, Guildford Press, New York, 1997


Grau, O., Virtual Art: From Illusion to Immersion, MIT Press, Cambridge, 2003

Greenberg, D. et al, A Framework for Realistic Image Synthesis, Communications of the ACM, Vol 42(8), 44-53, August 1999

Gregory, R., Sensation and Perception, Longman, London 1994

Gregory, R., Seeing Through Illusions, Oxford University Press, Oxford, 2009

Gregory. R., Eye and Brain; The Psychology of Seeing, Fifth Edition, Oxford University Press, Oxford 1998

Grill-Spector, K. et al, Object Recognition: Insights From Advances in fMRI Methods, Current Directions in Psychological Science, Vol 17(2), 73-79, 2008

Grimes, J., On the Failure to Detect Changes in Scenes across Saccades, in Perception, Oxford University Press, Oxford, 1996

Hay, J. et al, Visual and Proprioceptive Adaptation to Optical Displacement of the Visual Stimulus, Experimental Psychology, Vol 71(1), 150-158, 1966

Heeter, C., Being there: the subjective experience of presence, Presence: Teleoperators and Virtual Environments, Vol 1(2), 267-271, 1992

Heidegger, M., Being and Time, SCM Press, London, 1962.

Heidegger, M., The Basic Problems of Phenomenology, Indiana University Press, Bloomington, 1975

Henshaw, J., A Tour of the Senses: How Your Brain Interprets the World, Johns Hopkins Univeristy Press, Baltimore, 2012

Kant, I., Critique of Pure Reason, trans. and ed. Paul Guyer, Cambridge University Press, Cambridge, 1998

Iwamoto, T. et al, Non-contact Method for Producing Tactile Sensation Using Airborne Ultrasound, Proceedings EuroHaptics 2008, LNCS 5024, 504-513, June 2008

Jameson, F., Postmodernism or the Cultural Logic of Late Capitalism, Duke University Press, Durham, 1991

Jay, M., Downcast Eyes: The Denigration of Vision in Twentieth-Century French Thought, University of California Press, London, 1994

Jola, C. et al, Moved by Stills: Kinesthetic Sensory Experiences in Viewing Dance Photographs, Seeing and Perceiving, Vol 25, 80-81, 2012

Karasek, H., Locomotive of Emotions, Der Spiegel, Vol 52, 152-159, 1994

Kelso, J. et al, The Role of Proprioception in the Perception and Control of Human Movement: Toward a Theoretical Reassessment, Perception and Psychophysics, Vol 28(1), 45-52, 1980

Knierim, J. et al, Place cells, head direction cells, and the learning of landmark stability. Journal of Neuroscience, Vol 15, 1648-1659, 1995

Kosslyn, S. et al, When Is Early Visual Cortex Activated During Visual Mental Imagery?, Psychological Bulletin, Vol 129(5), 723-746, September 2003

Laeng, B. et al, Eye scanpaths during visual imagery reenact those of perception of the same visual scene, Cognitive Science, Vol 26, 207–231, 2002

Larson, K., Kent Larson: Brilliant Designs to Fit More People in Every City, June 2012, retrieved March 2013 <http://www.ted.com/talks/kent_larson_brilliant_designs_to_fit_more_people_in_every_city.html>

Lowe, D., Three-Dimensional Object Recognition from Single Two-Dimensional Images, Artificial Intelligence, Vol 31(3), 355-395, March 1987

Luo, J., Psychophysical study of Image Orientation Perception, Spatial Vision, Vol 16(5), 429-457, 2002

Macpherson, F., The Senses: Classical and Contemporary Philosophical Perspectives, Oxford University Press, Oxford, 2011

Madary, M. et al, Perception, action, and consciousness: Sensorimotor Dynamics and Two Visual Systems, Oxford University Press, Oxford, 2010


Mallgrave, H., The Architect’s Brain: Neuroscience, Creativity and Architecture, Wiley-Blackwell, Chichester, 2011

Marr, D., Vision; A Computational Investigation into the Human Representation and Processing of Visual Information, MIT Press, London, 2010

Marsh, W. et al, The Cognitive Implications of Semi Natural Virtual Locomotion, IEEE 2012 Virtual Reality Short Papers and Posters, 47-50, 2012

McMahan, R. et al, Evaluating Display Fidelity and Interaction Fidelity in a Virtual Reality Game, IEEE Visualization and Computer Graphics, Vol 18(4), 626-633, April 2012

Meijer, F. et al, Navigating Through Virtual Environments: Visual Realism Improves Spatial Cognition, Cyberpsychology and Behaviour, Vol 12(5), 517 – 521, October 2009

Merleau-Ponty, M., The Primacy of Perception, Northwestern University Press, Evanston, 1964.

Merleau-Ponty, M., Phenomenology of Perception, Routledge, London, 2002

Moser, E. et al, Place Cells, Grid Cells, and the Brain’s Spatial Representation System, Annual Review of Neuroscience, Vol 31, 69-89, 2008

Normann, R., Cortical Implants for the Blind, Spectrum, IEEE, Vol 33(5), 54 – 59, May 1996

O’Regan, J. K. et al, A sensorimotor account of vision and visual consciousness, Behavioural Brain Sci., Vol 24, 939–973, 2001

O’Keefe, J. et al, The Hippocampus as a Cognitive Map, Oxford University Press, Oxford, 1978

O’Keefe, J., et al, Place cells, navigational accuracy, and the human hippocampus. Philosophical Transactions of the Royal Society of London - Series B: Biological Sciences, Vol 353, 1333-40, 1998

Pallasmaa, J., The Embodied Image, Wiley, Chichester, 2011

Pallasmaa, J., The Eyes of the Skin: Architecture and the Senses, Wiley, Chichester, 2005

Pallasmaa, J., The Thinking Hand, Wiley, New York, 2009.

Parvis, B., Augmented Reality in a Contact Lens, 2009, retrieved March 2013 <IEEE Spectrum, http://spectrum.ieee.org>

Pollock, B., The Right View from the Wrong Location: Depth Perception in Stereoscopic Multi-User Virtual Environments, IEEE Transactions on Visualization and Computer Graphics, Vol 18(4), April 2012

Ragan, E., The Effects of Navigational Control and Environmental Detail on Learning in 3D Virtual Environments, IEEE 2012 Virtual Reality Short Papers and Posters, 19-22, March 2012

Rashid, H., Entering an Age of Fluidity in Disappearing Architecture, from Real to Virtual to Quantum, Birkhauser, Basel, 2005

Redi, I. et al, The Relationship Between Architecture and Virtual Media in Flachbart, G. et al (eds), Disappearing Architecture: From Real to Virtual to Quantum, Birkhauser, Basel, 2005

Rensink, R. et al, On the Failure to Detect Changes in Scenes Across Brief Interruptions, Visual Cognition, Vol 7(1/2/3);127-145, 2000

Riecke, B., Self-Motion Illusions (Vection) in VR – Are They Good For Anything?, IEEE 2012 Virtual Reality Short Papers and Posters, 35-38, March 2012

Robertson, R. G. et al, Spatial view cells in the primate hippocampus: effects of removal of view details. Journal of Neurophysiology, Vol 79, 1145-56, 1998

Rolls, E. T., Spatial view cells and the representation of place in the primate hippocampus. Hippocampus, Vol 9, 467-80, 1999

Rose, F., The Art of Immersion, W. W. Norton, New York, 2011

Sagardia, M. et al, Evaluation of Visual and Force Feedback in Virtual Assembly Verifications, IEEE 2012 Virtual Reality Short Papers and Posters, 23-26, March 2012

Save, E. et al, Contribution of multiple sensory information to place field stability in hippocampal place cells, Hippocampus, Vol 10, 64-76, 2000

Schloerb, D, et al. A Quantitative Measure of Telepresence, Presence: Teleoperators and Virtual environments, Vol 4(1), 64-80, 1995

Siegel, A. et al, The Development of Spatial Representations of Large-Scale Environments, in Reese, H. (ed), Advances in Child Development and Behaviour, Academic Press, New York, 9-55, 1975

Sigurdarson, S. et al, Can Physical Motions Prevent Disorientation in Naturalistic VR?, IEEE 2012 Virtual Reality Short Papers and Posters, 31-34, March 2012

Simons, D., Current Approaches to Change Blindness, Visual Cognition, Vol 7(1-3), 1- 15, 2000

Slater, M. et al, Computer Graphics and Virtual Environments: From Realism to Real-Time, Addison-Wesley, London, 2001

Slater, M. et al, Controlling virtual environments by thoughts, Clinical Neurophysiology, Vol 118(4), 36, 2007

Slater, M. et al, The Sense of Embodiment in Virtual Reality, Presence: Teleoperators and Virtual environments, Vol 21(4), 373-387, 2012

Slater, M. et al, Visual realism enhances realistic response in an immersive virtual environment, IEEE Computer Graphics, Vol 29(3) 76-84, 2009

Slater, M., Place Illusion and Plausibility can Lead to Realistic Behaviour in Immersive Virtual Environments, Philosophical Transactions of The Royal Society B, Vol 364, 3549-3557, 2009

Speer, N. et al, Reading Stories Activates Neural Representations of Visual and Motor Experiences, Psychological Science, Vol 20(8), 989-999, 2010

Stanney, K., Human Factors: Issues in Virtual Environments, Presence: Teleoperators and Virtual environments, Vol 7(4), 327-351, 1998

Steinicke, F. et al, Gradual Transitions and their Effects on Presence and Distance Estimation, Computers & Graphics, Vol 34(1), 26-33, 2010

Steuer, J., Defining Virtual Reality: Dimensions Determining Telepresence, Journal of Communication, Vol 42(4), 73-93, 1992

Strauss, W., Virtual Architecture and Imagination in Film and Arc 1, Artimage Graz, 1993

Suma, E. et al, Leveraging Change Blindness for Redirection in Virtual Environments, IEEE 2011 Virtual Reality Short Papers and Posters, 159-166, 2011

Suma, E. et al, A Taxonomy for Deploying Redirection Techniques in Immersive Virtual Environments, IEEE 2012 Virtual Reality Short Papers and Posters, 43-46, 2012

Suma, E. et al, Impossible Spaces: Maximising Natural Walking in Virtual Environments with Self-Overlapping Architecture, IEEE Transactions on Visualization and Computer Graphics, Vol 18(4), 555-564, April 2012

Tan, D. et al, Kinaesthetic Cues Aid Spatial Memory, CHI ‘02 Extended Abstracts on Human Factors in Computing Systems, New York, 806-807, 2002

Tidwell, M. et al, The Virtual Retinal Display – A Retinal Scanning Imaging System, Proceedings of Virtual Reality World ‘95, 325-333, 1995

Ullman, S., High-Level Vision: Object Recognition and Visual Cognition, MIT Press, London, 1996

Usoh, M. et al, Walking, Walking in Place, Flying, In Virtual Environments, SIGGRAPH ‘99 Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 359-364, 1999

Vidler, A., The Architectural Uncanny; Essays in the Modern Unhomely, MIT Press, London, 1992

Wagner, L. et al, Perceiving Spatial Relationships in Computer-Generated Images, Computer Graphics and Applications, Vol 12(3), 44-58, May 1992

Wallis, G. et al, Learning to Recognise Objects, Trends in Cognitive Sciences, Vol 3(1), 22-31, Jan 1999

Welch, R., Adaptation to Prism-Displaced Vision, Perception and Psychophysics, Vol 5(5), 305-309, 1969

Westerdahl, B., Users’ Evaluation of a Virtual Reality Architectural Model Compared with the Experience of the Completed Building, Automation in Construction, Vol 15, 150-165, 2006

Whitworth, B., The Physical World as Virtual Reality, CDMTCS Research Report Series, Vol 316, 2007

Williams, B. et al, Exploring Large Virtual Environments with an HMD when Physical Space is Limited. In Symposium on Applied Perception in Graphics and Visualization, 41–48, 2007.

Witmer, B., Measuring Presence in Virtual Environments: A Presence Questionnaire, Presence: Teleoperators and Virtual environments, Vol 7(3), 225-240, 1998

Youngblood, G., Expanded Cinema, P. Dutton and Co, New York, 1970

Zetzsche, C. et al, From Visual Perception to Place, Cognitive Processing, Vol 10(2), 351-354, 2009

Zetzsche, C. et al, Representation of Space: Image-Like or Sensorimotor, Spatial Vision, Vol 22(5), 409-424, 2009

Zetzsche, C. et al, Sensorimotor representation and knowledge-based reasoning for spatial exploration and localisation, Cognitive Processing, Vol 9, 283-297, 2008

Zumthor, P., Atmospheres, Birkhausser, Boston, 2006

CHAPTER v

Appendix

v.i TEST 01
Creating a Motion Tracked Hat as an Input Device for the Control of 360° Videos

Abstract

The purpose of this test was to create a hat that could be used to track the head movements of a person viewing a 360° panoramic video. The tracked head movements would act as an input for controlling the directional view of the video, allowing the user to pan and zoom using natural head movements that correlate with the way we direct our view in physical environments. The test involved creating a 360° pannable and zoomable video, building a 6DOF motion tracked hat that could be tracked using a webcam, and using software to translate the tracking data into input controls for the video.

Methods

Creating a 360° Pannable Video

The 360° video differs from a normal video in that every frame is a seamless 360° panorama of the film environment, effectively recording every possible view from each position of the camera. To play the video the panorama is mapped onto the inside face of a virtual sphere that encloses the viewer in the environment. The viewer can control the direction of the camera by panning it around within the sphere, allowing them to rotate their view and experience the film environment as they choose.
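
A rough sketch of the mapping just described: given a viewing direction, find the point on the flat panoramic frame that should appear at the centre of the screen. It is a minimal illustration written for this appendix and assumes the panorama is stored as an equirectangular image, a common convention that is not specified in the text or by the ImmersiveMedia viewer.

```python
def panorama_uv(yaw_deg, pitch_deg):
    """Map a view direction (yaw, pitch in degrees) to normalised (u, v)
    coordinates on an equirectangular 360-degree frame.
    u wraps 0-1 around the horizontal circle; v runs 0-1 from zenith to nadir."""
    u = (yaw_deg % 360.0) / 360.0
    v = (90.0 - max(-90.0, min(90.0, pitch_deg))) / 180.0
    return u, v

# Example: looking 45 degrees to the right and 10 degrees up
print(panorama_uv(45.0, 10.0))   # approximately (0.125, 0.444)
```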

It is possible to use banks of cameras to record live footage for the creation of panoramic videos but these are expensive and complicated to create. For the purpose of this test a virtual model was created in 3D Studio Max and rendered using a 360° virtual camera. The realism of the environment was not important in this test so the virtual environment was rendered as a white model.

The render produced a flat video frame that could be seamlessly mapped onto a virtual sphere for viewing. A 360° video viewer by ImmersiveMedia was used to perform the spherical mapping and to provide the panning and zooming controls. The viewer software was already configured so the video could be controlled with mouse inputs: clicking and dragging to pan, and the scroll wheel to zoom in and out.

VIEWS OF THE 360° VIDEO, views taken from the ImmersiveMedia 360° Desktop Viewer. Images by Author

360° PANORAMIC VIDEO, view showing the full panoramic video before it was mapped onto the inside of a sphere. Image by Author

Creating a Motion Tracked Hat

Equipment

Baseball cap
9V battery
3 x Infrared LEDs
Electrical cable
Electrical tape
3 x 120 ohm Resistors
Electrical crimp connectors
Webcam
Developed photographic film

Process

For the video to respond to natural movements of the head, as our view does in natural vision, the motion tracking must match the 6 degrees of freedom afforded by the movement of the head: forwards/backwards, up/down, left/right, pitch, yaw and roll. To achieve this it is necessary to have 3 motion tracked points on the hat that the computer can interpret in relation to one another. If the video only required up/down and left/right control, a single tracked reference point would suffice.

As described in the main text of this paper there are multiple options for collecting motion tracking data: optical, mechanical, acoustic and electromagnetic. For this test optical techniques were chosen for the relative ease with which they can be reproduced with simple, accessible equipment. Optical systems can be affected by environmental light that interferes with the motion tracking. To overcome this a system was used that relies on infrared light rather than light in the visible spectrum; although infrared radiation still occurs naturally in our environments, it causes far less interference.

To create the cap, infrared LEDs were attached to it: two on the peak in the same plane and one further back, placed centrally on top of the hat. The LED circuit was powered with a 9V battery and, using the diode forward voltage of the LEDs (1.5V each, 4.5V for 3 LEDs), it was calculated that a resistance of 360 ohms was required so as not to damage the LEDs. Three 120 ohm resistors were therefore connected in series in the circuit. The battery was attached to the back of the hat so the movement of the head was not restricted. To allow the hat to be correctly calibrated later in the process, the relative distances between the LEDs and parts of the head were measured and recorded.
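
As a quick check of the resistor calculation above, the sketch below recomputes the series resistance from the supply voltage, the combined LED forward voltage and a target current. The 12.5mA figure is an assumption made here purely for illustration, since the text does not state the current the circuit was designed around.

```python
def series_resistor(supply_v, led_forward_v, n_leds, target_current_a):
    """Resistance needed so that n_leds wired in series draw roughly target_current_a."""
    voltage_across_resistor = supply_v - n_leds * led_forward_v
    return voltage_across_resistor / target_current_a

# 9V battery, three infrared LEDs at 1.5V forward voltage each;
# the 12.5mA target current is assumed for illustration only.
print(series_resistor(9.0, 1.5, 3, 0.0125))   # 360.0 ohms, i.e. three 120 ohm resistors in series
```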

The LEDs needed to be tracked in reference to a piece of recording equipment, which in this case was an adapted webcam.

The 6 degrees of freedom required for tracking of the head

Circuit diagram of the LED circuit for the hat. Image by Author

Measurements required to calibrate the hat. Images by Author

FreeTrack Software Interface showing the 3 tracking LEDs and the relative movements

Consumer webcams come with inbuilt infrared filters to stop this spectrum of light interfering with the camera in normal use. However, in this instance we wanted to track the infrared light and cut out the visible spectrum, so the cover of the webcam was removed and the infrared filter, a thin sheet of film, was taken off the lens. This was replaced with a small piece of developed photographic film that acted as a filter for visible light but allowed infrared light to pass through. Once this was completed the webcam could be connected to a PC, and when the webcam was turned on it picked up the three LEDs as red dots on a black background.

The infrared LEDs used had a narrow angle of emittance, meaning that when the hat was rotated and the angle between the LEDs and the recording webcam increased, the webcam could no longer pick up the light emitted from the diode. To overcome this the rounded ends of the LEDs were filed down flat to just above the metallic element of the diode and then sanded smooth to a matte finish. This diffused the infrared light emitted from the diode over a greater angle.

With the hat created and the webcam adapted to pick up infrared light, a piece of software was required to translate the data recorded by the webcam into head movement data. An open source program called FreeTrack was used to do this. To calibrate the hat, the measurements taken earlier in the process were entered into the appropriate fields in the program, and the brightness and resolution of the webcam could be adjusted to achieve the best distinction of the 3 LEDs from the background.

The program offered a number of options for using the tracked head movements as an input for the computer. For the purpose of this experiment an option to replace the mouse input with tracking data was used. At the first attempt the mouse movement was limited and slow, covering only about a quarter of the screen with head movements between the two extremes of the webcam’s tracking field. This was adjusted with interactive graphs for speed and distance in each of the 6DOFs within the program. The graphs were also adjusted to make the system less sensitive to very small movements of the head that were causing jerky results in the tracking.

The program now correlated natural movements of the head with movements of the mouse on the computer screen. This meant that when using ImmersiveMedia’s 360° Desktop Viewer the mouse controls for the panoramic video had been replaced with head tracking controls, so that when the viewer moved and rotated their head the view of the video was updated appropriately; for example, tilting the head upwards would rotate the video upwards and leaning in towards the screen would zoom in on the video.
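
The kind of adjustment made on FreeTrack’s interactive graphs can be pictured as a response curve from head angle to cursor movement. The sketch below is a generic illustration of such a curve, with a small dead zone to suppress jitter and a non-linear gain so larger head movements sweep further across the screen; the numbers and function names are invented for this example and are not FreeTrack’s own.

```python
def response_curve(angle_deg, dead_zone_deg=1.0, gain=25.0, exponent=1.5):
    """Map a tracked head angle to a cursor offset in pixels.
    Angles inside the dead zone are ignored so small jitters do not move the view;
    beyond it the offset grows faster than linearly with the head angle."""
    magnitude = abs(angle_deg)
    if magnitude < dead_zone_deg:
        return 0.0
    sign = 1.0 if angle_deg > 0 else -1.0
    return sign * gain * (magnitude - dead_zone_deg) ** exponent

def head_to_cursor(yaw_deg, pitch_deg):
    """Convert head yaw and pitch into a cursor offset (dx, dy)."""
    return response_curve(yaw_deg), -response_curve(pitch_deg)

# A 10-degree turn of the head moves the cursor much further than a 4-degree nod.
print(head_to_cursor(10.0, -4.0))
```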

Construction of the hat. Image by Author

The finished hat. Image by Author

Results

The hat worked fairly well in converting general movements of the head into similar movements on the screen; however, the imprecision of using the dragging function on the interactive graphs to adjust the sensitivity of the hat made it difficult to calibrate the movements of the head exactly to the movement on the screen. This meant that the tracking would occasionally fall out of sync with the visuals on the screen, having a negative effect on the experience. With further time to test the hat it would be beneficial to spend some of it tweaking the graphs and trying different combinations of sensitivity for the movements.

Although the head tracking hat worked relatively effectively in converting movements of the head into movement of a dynamic video on a computer screen, its effectiveness as an immersive technology was limited by the display medium of the screen. To pan the video by large amounts the viewer had to rotate their head so that it was pointing beyond the edge of the screen whilst fixing their eyes on the screen, creating an unnatural viewing interaction. To try and overcome this, the computer screen was replaced with a larger projector screen. This went some way to creating a more natural interaction, as the larger screen meant that the viewer’s head was not directed beyond its edge, but there was still some dissociation between the direction of the head and the direction of the eyes. This could be overcome by using a head mounted display fixed in front of the viewer’s eyes, so that when they moved their head the screen moved with them and their head was never directed beyond the edge of the video. The effect could also be enhanced with the use of a stereoscopic display.

Another limitation was the fragility of the hat and LED circuit, which led to the diodes being displaced on the hat and to calibration problems with the tracking. This could easily be overcome with a more robust design. There were also problems with tracking the hat in areas with high levels of sunlight, as this contains a large proportion of infrared light. The hat did however work effectively in internal environments where sunlight could be controlled. Other limiting factors were the resolution and data transfer rate of the webcam. To achieve a high data transfer rate between the webcam and the FreeTrack software the resolution of the webcam had to be lowered to 640 x 480 so that the head movements were translated quickly enough for the delay not to be perceived by the user. The webcam used was an old model and it may now be possible, with newer models and higher speed USB interfaces, to use a higher resolution.

Overall the test was a success in showing that a motion tracked hat could be produced from readily available equipment and software, and that, combined with interactive video techniques, it can form a basis for the production and presentation of immersive videos when paired with large scale projection or a head mounted display.

The sensitivity of the tracking was adjusted by dragging two points on the graphs above; this interface made it difficult to calibrate the movements properly

The hat in use, showing the hat itself, the webcam and the 360° video on the screen. Image by Author

v.ii TEST 02
The Effect of Removing Visual and Auditory Stimulus on Spatial Navigation and Mapping

Abstract

This test looks at the effect of removing the visual and auditory senses on spatial navigation and cognitive mapping. In the test participants followed four different predetermined routes by walking behind a guide; the four variations were natural sensory experience, visual sense removed, auditory sense removed, and both visual and auditory senses removed. The participants were then tested on their mapping of the route by attempting to return to the start position and then sketching the perceived route they had taken. A further two tests involved isolating the visual sense, with the participants watching two videos of someone else walking and attempting to draw the route that had been taken. The aim of these tests was to determine the relative importance placed on the different senses during navigation and spatial cognition.

HYPOTHESIS

This experiment was developed to test how our cognitive mapping of spatial relationships during natural locomotion is affected when our visual and auditory stimuli are removed. The predicted outcome was that our ability to map our movements would decrease only slightly, if at all, when the auditory stimulus was removed, that there would be a considerable decrease in this ability when the visual stimulus was removed, and an even larger effect when both visual and auditory stimuli were removed. This prediction is based on the theory that we have a strong reliance on visual data to orientate ourselves in relation to our environment. Whilst we can draw on the senses of proprioception and equilibrioception to track our bodily movements, we are not experienced in using these senses in isolation, without a visual reference, to create spatial relationships.

METHODS

The experiment involved 6 individual tests that each participant undertook sequentially.

Test 01 – Spatial Mapping with all Senses Available

The first test was intended as a control, which was used to assess the participants’ ability to map their motor movements in space. This was used later in the analysis of the tests to take into account participants’ tendencies to overestimate or underestimate distances and rotations.

This test formed the basis for the other tests, which were all variations of it. A predetermined route was walked by a guide who was part of the experiment; the route had been memorised by this person, who used a 10 x 10m grid marked out using the existing paving slabs in the experiment’s location. The participant was instructed to follow the guide and, whilst doing so, to try and memorise the route so that immediately afterwards they could draw a sketch of it with estimated distances for each part. When the guide reached the end of the route, and before they sketched out their perception of it, the participant was instructed to walk to where they thought the route had started.

Test 02 – Auditory Stimulus Removed

This test followed the same parameters as Test 01 but used a different route within the same 10 x 10m area. Whilst walking the route and returning to the start point the participant wore headphones playing loud music, so the auditory stimulus from the environment of the route was replaced with generic music.

Test 03 – Visual Stimulus Removed

Similar to the previous test, again using a different route, the participant had no headphones in but was blindfolded so they had no visual stimulus from the route they were walking.

Test 04 – Visual and Auditory Stimuli Removed

The test was repeated with a new route but this time the participant wore headphones and a blindfold so their auditory and visual stimuli from their location were removed.

Test 05 – Only Visual Stimulus Provided

This test was slightly different in that the participant did not walk the route themselves but watched a video taken from the point of view of a person walking a route within the same 10 x 10m grid in the same location as the previous tests. After watching the video once the participant was again asked to sketch out the route with estimated distances that they had seen in the video. This test removed all sensory stimulus except that of vision.

Test 06 – Only Visual Stimulus Provided in an Unknown Location

This test was very similar to Test 05, but the video showed the point of view of someone walking in a different location from the one in which the previous experiments had been undertaken. This was used to assess whether the participant had become accustomed to their environment and whether they would have greater difficulty creating spatial relationships in an unknown environment.

Route Design

The routes were designed to be similar, so that the results would not be affected by extreme differences in distance or complexity, but different enough that they could not be memorised by the participants. Each route was enclosed within a 10 x 10m grid and contained between 7 and 8 rotations that varied between acute, obtuse and right angles. The start and end positions of each route were different for each experiment.

Participants

Each test was undertaken by 7 participants (3 male, 4 female) who were all postgraduate architecture diploma students, aged between 22 and 29. From the nature of their profession they may have a greater spatial understanding and should be more proficient in creating spatial drawings to describe their perceived routes. The narrow range of the population that this test covers is taken into consideration in the limitations section of this test.

A participant being led during one of the experiments. Image by Author

The routes used for each of the tests (Routes 01 to 06)

RESULTS

The results were analysed in relation to four different attributes: percentage error in overall distance travelled, distance walked to the perceived start position, angle walked to the perceived start position, and overall rotation. The analysis will be split into these four categories and then assessed as a whole at the end of this section.

Distance Travelled

In the analysis of estimated distance travelled the first test (01) was used as a control. This was then used as a reference point for the analysis of the following tests. This was so that any tendencies to over or underestimate distances by individual participants were removed by relating their estimates to the first test where they had all available senses to draw information from.
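
A minimal sketch of how such a normalisation can be expressed: subtract each participant’s bias, measured in the control test, from their error in a later test. The exact analysis method is not specified in the text beyond relating estimates to the control, and the figures in the example are hypothetical rather than taken from the results.

```python
def control_adjusted_error(perceived, actual, control_perceived, control_actual):
    """Percentage error of a perceived route length, with the participant's
    tendency to over- or under-estimate (measured in the control test) removed."""
    raw_error = (perceived - actual) / actual * 100.0
    control_bias = (control_perceived - control_actual) / control_actual * 100.0
    return raw_error - control_bias

# Hypothetical participant: perceived 40m on a 36m route in Test 02,
# having perceived 38m on a 35m route in the control Test 01.
print(round(control_adjusted_error(40, 36, 38, 35), 1))   # ~2.5% over-estimate beyond their usual bias
```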

From the hypothesis it was expected that the percentage error in distance, relative to Test 01, would increase with each experiment: a greater error with the auditory sense restricted, increasing with the visual sense restricted, and greatest with both the auditory and visual senses restricted. However, the percentage errors measured did not follow this pattern, and no single test showed a distinctly greater impact on distance estimation: three participants’ greatest error in relation to the control Test 01 was in Test 02 (restricted auditory sense), two participants’ in Test 04 (restricted auditory and visual), one participant’s in Test 03 (restricted visual) and one in Test 06 (visual stimulus in an unknown location). This result could suggest that our distance estimation is poor whether we can utilise all of our senses or only some of them. However, the limitations of the test itself lead us to conclude that insufficient data has been collected to produce any solid outcomes on distance estimation.

The design of the test, in which the participants repeatedly walked routes of similar lengths enclosed in the same 10 x 10m grid, could have meant that they became familiar with the space, and by relating their experience in the later tests to the previous ones they were able to estimate the distance they walked each time more accurately. In future tests this could be factored out by using different groups of participants for each test so that they are not pre-exposed to the test area. This would mean that their results could not be related back to the first control test, so a larger sample of participants would be required to reduce the chance of anomalies.

The following set of graphs shows each participant’s percentage error in the perceived distance of each route, with a negative percentage showing underestimation. The dashed line shows the percentage error in the first test, with all senses available, which was used as a control for comparison with the tests that followed. The graphs show no distinct pattern across the participants.

Distance Walked to Perceived Start Position

In Tests 01-04, when asked to walk from the end of the route back to their perceived start position, there was a correlation between the hypothesis and the empirical results. In Test 01 all of the participants walked a distance within +1.65m of the correct distance, in Test 02 they were all within +2.12m, in Test 03 within +4.54m and in Test 04 within +6.83m. The amount of error in each of these tests increases as expected, from the lowest error in Test 01 with all senses available to the highest error in Test 04 with both the visual and auditory senses restricted. This suggests that restricting the auditory sense has a negative impact on our ability to map spatial relationships, that restricting the visual sense has a greater impact, and that restricting both senses has a greater impact still. The negative effect of wearing headphones during Tests 02 and 04 may be due to the participants not being able to gather directional audio cues from their surroundings, but it could also be due to the impact the music had on their ability to concentrate on the route they were walking. An improvement would be to use sound deadening ear protection that removes environmental sounds, leaving the participant in silence rather than listening to loud music used to drown them out.

Angle Walked to Perceived Start Position

The results from this analysis appear to correlate with those relating to the distance walked to the perceived start position, although there is a less distinct difference between Tests 03 and 04 in the angle walked. The tests were analysed in relation to the greatest angle that a single participant deviated from the correct angle back to the start position, and the total angle that enclosed all of the participants’ directions of walking either side of the correct angle. In Test 01 the greatest angle that one participant walked away from the correct angle was 34°, and all participants were within a total angle of 63°. In Test 02 the greatest angle was 76° and the total angle 103°; Test 03 had a greatest angle of 92° and a total angle of 183°; and Test 04 had a greatest angle of 130° and a total angle of 141°. All of these results correlate with the results for distance to the start position outlined in the section above, except that the total angle decreases between Test 03 and Test 04. In Tests 01-03 the angles the participants walked were spread evenly either side of the correct angle, so the total angle was increased by the combination of errors on both sides. In Test 04, however, all but one of the participants walked to one side of the correct angle, meaning the total angle was not inflated by errors in both directions; because of this the deviation of the data from the pattern can be assessed as an anomaly, and the greatest angle by an individual participant represents a more reliable piece of data for analysis.
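
The two measures used in this analysis can be computed directly from the signed deviations of each participant’s walked bearing from the correct one, as in the sketch below. The deviations listed are hypothetical values chosen only so that the output reproduces the Test 01 figures of 34° and 63° quoted above.

```python
def angle_spread(deviations_deg):
    """Return the greatest single deviation from the correct bearing and the
    total angle enclosing every walked direction either side of it.
    Deviations are signed: positive = clockwise of the correct bearing."""
    greatest = max(abs(d) for d in deviations_deg)
    total = max(deviations_deg) - min(deviations_deg)
    return greatest, total

# Hypothetical signed deviations for seven participants
print(angle_spread([-20, 5, 34, -29, 10, -3, 15]))   # (34, 63)
```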

These diagrams represent the distance and direction that each participant walked when asked to return to their perceived starting position in tests 01-04. The coloured lines represent each participant, the grey arrow shows the direction they approach the end of the route from and the grey line shows the actual distance and angle back to the start position. Each concentric ring in the diagrams represents a distance of 1m. The graphs show a gradual decrease in the ability to return to the start position with each of the tests

Overall Rotation

The overall rotation perceived by each participant was calculated from the sketches of their routes. This was then related to the actual rotation as a positive or negative percentage corresponding to clockwise or anticlockwise rotation. An overall percentage error for all of the participants was also calculated for each test. In Test 01 four of the seven participants correctly identified the amount of rotation, showing a clear ability to map spatial relationships when all the senses are available. Of the other three participants, one correctly identified the amount of rotation but perceived it in the opposite direction, and the other two were within 34°. The average error for this test was 58°, but this was raised by a distinct anomaly: the participant who made an error of 360° by perceiving the rotation in the opposite direction. Without this anomaly the average error would be closer to 6°.

In Test 02 the average error increased to 302°, with three of the seven participants within 135° and the largest error 585°. The average error increased again in Test 03 to 482°, with two participants estimating the correct amount of rotation but in the wrong direction and the other five all underestimating the rotation. Test 04 presented an unexpected result, with the average error decreasing from the previous two tests to 256°. In this test all of the participants correctly identified that the overall rotation was in a clockwise direction, but as the average error does not include the directional error this would not have affected the results in the same way as in the previous analysis of the angle walked to the perceived start position. This result is hard to explain; it may be due to the practice the participants had gained in the previous tests, or it could simply be an anomaly, and to find out a larger number of participants would have to be tested. If the result is not an anomaly, it would suggest that when we have no visual or auditory cues to draw upon we concentrate more intensely on our proprioceptive and equilibrioceptive senses and are able to use them to measure bodily rotations, but that when only one of the visual or auditory cues is restricted we concentrate harder on the other and do not utilise our proprioceptive and equilibrioceptive senses as strongly, resulting in a reduced ability to measure rotations.

Tests 05 and 06, where the participants estimated rotation purely from the visual sense via a video, had a greatly increased average error: 1022° in Test 05 and 1039° in Test 06. All the previous results have pointed to a reliance on the visual sense for orientating ourselves in an environment, but these results suggest that when the visual sense is isolated from all the other senses we find it difficult to create any reference for spatial relationships. It would seem that, despite the importance of vision, we still need to reference our visual stimulus against other sensory stimuli for it to be effective. This theory supports a sensorimotor account of spatial experience.

These diagrams show the perceived finishing direction of each participant as a result of the total rotations shown in their sketched routes. The coloured lines represent each participant and the dark grey line shows the actual finishing direction of the route. In the first test it can be seen clearly that the participants were able to identify their finishing direction to within 35°; however, in all of the other tests there is a very broad range of results for each of the participants.

CONCLUSIONS AND LIMITATIONS

A major limitation of this test was the small number of participants tested and the limited range of the population they covered. To draw any general conclusions it would be necessary to survey a wider range of the population with a larger demographic. Any conclusions made in these tests are limited to the 7 architecture students tested, aged between 22 and 29. The tests were also carried out in a specific environment at the Royal Naval College, Greenwich. The test was undertaken in a courtyard that was enclosed on three sides by a repetitive façade and on the other by an open colonnade. Factors in this environment, such as the repetitive façade and the central obelisk in the space, could have provided visual cues that positively or negatively affected the results. If there were more time to repeat the experiment it would be beneficial to conduct it in an area with a different spatial configuration to assess what effect this has on the results. Another limitation is the fact that each participant was exposed to each of the tests; whilst this was used to create a control for the analysis of the data, it could also have meant that as the tests progressed the participants developed better processes for estimating and memorising distances and rotations, skewing the results.

Despite these limitations, and those described in the analysis section, it can be deduced from the empirical tests undertaken that when navigating back to the starting position of a walked route we are most able to do so when we can draw on all of our senses, and least able when our visual stimulus is removed on its own or when both the visual and auditory stimuli are removed. This suggests a reliance on visual cues for orientating ourselves spatially, and that our other senses are less effective when no visual references are present. The same applies when estimating the overall rotations of a walked route, but the rotation tests have also shown that when the visual sense is isolated it becomes much less effective. This suggests that although we consciously place an importance on vision, it is only when we combine it with other senses that map bodily movements, such as proprioception and equilibrioception, that it becomes a useful tool for mapping our motor actions in spatial environments. The results gathered from these tests in relation to distance estimation provided no clear patterns, so additional tests would be required in this area.

The following pages show each of the participants’ perceived routes mapped over the actual route within the 10 x 10m grid

Route diagrams for Participants A to G: Routes 01 to 06

v.iii TEST 03
Using a Head Mounted Display to Assess the Effect of Replacing the Visual Sense with an Alternative View During Rotations Using Natural Locomotion

Abstract

Tests by other researchers have found that in virtual environments it is possible to speed up the rotation of the virtual world presented to the user in relation to their physical rotation without them noticing the contradicting signals between their visual and proprioceptive/equilibrioceptive senses. This experiment aimed to test this theory and thereby confirm that rotational gains can be deployed in virtual environments to create a virtual TARDIS. Without access to specialist virtual reality tracking systems and displays, a form of test was conceived involving an HMD created from a baseball cap and a smartphone, with videos used in place of interactive virtual environments.

Hypothesis

Previous research, such as that conducted by Steinicke, professor of computer science at the University of Wurzburg, has shown that the visual rotation of a virtual environment can be increased by up to 49% or decreased by 20% relative to the user’s physical bodily rotation before they identify the contradiction between the senses. In this test it is predicted that when participants are asked to rotate by 180° and are presented with a video that rotates faster than their physical rotation, they will under-rotate their bodies in the physical space, and that when the view rotates more slowly than their physical rotation they will over-rotate. This would be because the data from their visual sense indicates that the rotation angle has been reached earlier during faster rotations and later during slower rotations. The previous tests conducted for this paper showed that when devoid of the visual sense we have difficulty orientating ourselves, so this test develops those findings to investigate the effect of replacing the visual sense.
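
The manipulation being tested can be written as a simple rotation gain applied to the tracked physical rotation, using the thresholds quoted above (visual rotation up to 49% faster or 20% slower than the physical rotation before the mismatch is noticed). The sketch below is illustrative only; the experiment itself used pre-rendered videos rather than live tracking.

```python
MIN_GAIN, MAX_GAIN = 0.80, 1.49   # visual rotation 20% slower to 49% faster than physical

def virtual_rotation(physical_rotation_deg, gain):
    """Scale a physical rotation into the rotation shown in the virtual scene."""
    if not MIN_GAIN <= gain <= MAX_GAIN:
        raise ValueError("gain outside the reportedly undetectable range")
    return physical_rotation_deg * gain

# With the maximum gain a turn of roughly 121 degrees in the room already
# appears as a full 180-degree turn in the virtual environment.
print(virtual_rotation(180 / MAX_GAIN, MAX_GAIN))   # ~180.0
```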

Methods

The test involved the creation of a head mounted display (HMD) and multiple stereoscopic videos, and then the testing of the effect the videos had in relation to physical rotations.

Creating the HMD

To allow the users to move freely whilst fixing their gaze on the videos used for the test, it was necessary to create a display device that would be positioned in front of their eyes and move with them during locomotion. For this purpose an HMD was devised that used a mobile phone and a baseball cap. A cardboard attachment was created that allowed the phone to be suspended in front of the viewer’s eyes. The phone was positioned at a distance from the eyes at which it was easy to focus on. (This distance and the small display screen led to problems relating to immersion that are discussed later in this paper.)

To allow the phone to be controlled whilst suspended in the HMD, a piece of software called TeamViewer was used to remotely access the phone interface from a PC. This allowed the videos to be selected and played to the participant during the experiments without the need to keep removing the display.

Photographs of the HMD mounted on a baseball cap. Images by Author

Creating a Stereoscopic Video

The original intention of the tests was to create an interactive virtual environment combined with motion tracking, so that the participants’ movements would directly affect the view of the virtual world, and then to manipulate the relationship between physical and virtual rotations by increasing or decreasing the speed at which the virtual world rotates relative to the tracking data received. It was intended that the motion tracking cap from the first test in this paper could be utilised, but the inaccuracies of the motion tracking and the limited range over which the cap could be tracked (a maximum of 1.5m between the hat and the webcam, and a tracking field of view of roughly 150°; any rotations greater than this led to occlusion of the LEDs) meant that there would not be a sufficient amount of freedom for the participants’ movements or a suitable measure of control within the test. If further time were available it would be beneficial to improve the hat design and tracking software to create a greater level of accuracy and control, or to utilise other tracking systems.

Other options investigated were the use of the Xbox Kinect and the Nintendo Wii controller as tracking devices, but these also allow only a limited range of rotations, as both require the body (or the remote) to be directed towards the tracking device; rotations beyond this lead to a loss of tracking data. Another possibility investigated was the use of the Kinect in plan view from above. This would have allowed the participants’ full 360° rotations to be tracked in the ground plane and overcome the problem of the limited range of rotations. Although this would have been a suitable solution, all the available software for utilising the Kinect as a tracking device has been developed to recognise the body in a front-on view; to use the Kinect from above, the software would need to be reworked to recognise a new tracking shape and then interpret the movements of that shape into data that could be used as an interface for a virtual environment. Due to the time constraints of this paper it was not possible to edit an existing piece of software, or create a new one, capable of achieving this.

Due to the limitations in utilising tracking data to control movements in a virtual environment, a compromise was decided on that used pre-produced videos containing visual instructions telling the user when to move. These instructions allowed the users’ movements to coincide as closely as possible with the appropriate parts of the video, so that their bodily rotation was synced with the visual rotation of the environment. The use of the videos is assessed in the results section of this test. The visual instructions consisted of a traffic light system: a red light appeared to instruct the participant to stop, and an amber light followed by a green one instructed them to start their movements. The video was rendered with realistic materials and lighting and was produced as a stereoscopic video to create as much realism as possible and increase the feeling of presence.

Stills from the video used in the tests. The linear nature of the platform made it easy for the participants to identify the 180° rotation. Images by Author

The environment used for the video needed to be one with distinct directional cues that would allow the user to easily identify when the video had rotated by 180°, and therefore allow the experiment to test whether the perceived rotation of the visual scene has an impact on the perceived physical rotation of the body. For this reason a scene of a train platform was used, with a train on one side of the viewer and the station on the other; this allowed the participants to quickly distinguish between directions in the environment by relating them to surrounding objects.

Each video began with a still view of the station platform; then a red, amber and green light flashed on the screen to instruct the participant to walk forward (participants had already been briefed as to which moves to undertake and when), and after the lights flashed the video moved forward. This part of the video was intended to establish that the video was responding as the equivalent of their movements. Another red light flashed to instruct them to stop, and this was followed again by an amber and green light as an instruction to rotate 180°; whilst they did so the video view rotated at the same time. In total 8 different videos were produced, each with the view rotating at a different speed, taking between 1 and 8 seconds to rotate 180°. Each video continued its rotation beyond 180° to maintain visual consistency for the viewer, even though it continued rotating beyond the users’ movements, as it was not linked to tracking data.

Experiment

The test was carried out on 7 participants, who each undertook the test 8 times, once with each video. They were given instructions to walk forward after the first green light, stop at the red light and then turn 180° immediately after the second green light. Each participant wore the HMD combined with a black sheet over the top of their head so they could not see any of their physical environment. The tests were also carried out in a darkened room so that areas of light could not be used as navigational cues when seen through the sheet. At the end of each rotation the angle of the participant’s physical rotation was measured using a large scale protractor. The test was then repeated until each participant had undertaken every test. Between tests the participant did not remove the HMD, so they were unable to see their actual rotation; this was done to reduce the possibility of their rotational awareness improving with practice. The videos were also presented in a mixed order rather than in ascending order of rotation time, to remove any anomalies caused by the users becoming preconditioned to the video changing by a specific amount and adjusting their movement accordingly.

At the end of the test each participant was asked to subjectively rate their feeling of presence on a scale of 1-5, where 1 was a feeling that they were in the physical test room and 5 was that they were on the train platform shown in the video.

Stills showing the traffic light system used to instruct the participants to move in time with the movement of the video. Images by Author

Results and Limitations

As with the previous test, the results are limited by the number of participants and the representative demographic. There were 3 male and 4 female participants, all aged between 24 and 27. Their occupations were architecture student, recruitment consultant, psychology PhD student, junior editor, marketing assistant, web designer and teacher.

From the graphs we can see that the results matched the general trend predicted by the hypothesis proposed at the start of this test. As the rate at which the video rotated decreased, the amount of physical rotation by the participants increased to adjust for the information being supplied to their visual sense. When the video was rotating quickly the participants visually reached the desired rotation of 180° sooner, and this triggered them to stop their physical rotation before they had actually rotated the full amount. This suggests that we rely on our visual sense during rotations to orientate ourselves in relation to our environments. The results, however, were not directly proportional to one another: for example, when the speed of rotation in the video doubled, the physical rotation did not halve, which would have been shown by a steeper relationship on the graph. The results show that doubling the speed of the video reduces the physical rotation by much less than half; in fact it only reduced the physical rotation by about a tenth.

This shows that we do not rely purely on the visual sense for orientation during rotations. Although we rely heavily on our visual sense, other factors affected the participants’ ability to perceive rotations, namely the proprioceptive and equilibrioceptive senses. During movements we use a combination of all of our senses to orientate our bodies in space, so when the visual sense is adapted we can utilise our other senses to aid in orientating ourselves; this explains why the visual and physical rotations were not directly proportional. When the visual sense is adjusted there is a recalibration of all our senses, producing an outcome that is a compromise between the contradicting information our bodies are supplied with.

The relationship between visual and physical rotation was more pronounced for the female participants than for the males, as shown on the graph. This could be linked to the higher average rating of presence reported by the females: a higher rating suggests they were more inclined to consider themselves to be on the platform, and therefore more likely to treat the visual rotation of the scene as closely tied to their physical rotation. Further questioning revealed that all of the males had substantial experience of playing computer games, whereas only one of the females did, which may have made the males better able to distance themselves from the feeling of being within the video. It was also found that two of the males regularly took part in action sports (skateboarding and BMX riding), which could mean their kinaesthetic senses were more developed than those of the other participants, making them better able to draw on these senses during physical rotations.


These graphs show the mean physical rotation plotted against the speed of rotation in the video. They show that as the video rotation speed decreased, the physical rotation of the participants increased to compensate. The bottom graph shows the difference in results between male and female participants, indicating that the females were more affected by the visual rotation of the video. Images by Author



The overall low rating of presence within the tests, with only one participant rating their presence above 2/5, could be related to a number of issues. Firstly, the video did not actually respond to the participants' movements, so they had no direct control over their view of the environment; secondly, the HMD design did not create a very immersive visual experience. The phone screen used to display the visuals measured 100mm x 60mm and was positioned 180mm in front of the eyes, so it filled only a portion of the visual field, approximately 19° vertically and 31° horizontally, compared with the roughly 135° vertical and 180° horizontal field of view of natural vision. If the test were repeated it would be beneficial to use a more immersive HMD to see whether this created a greater feeling of presence and produced more pronounced rotational results. The screen was placed at 180mm to allow comfortable viewing, but it could be brought closer to the eyes with the use of lenses, such as those used in the Oculus Rift headset, so that the video filled a wider portion of the participants' field of view.
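
The angular field of view quoted above follows from simple trigonometry: for a flat screen viewed without lenses, the angle subtended is twice the arctangent of half the screen extent divided by the viewing distance. The short sketch below reproduces the 19° and 31° figures for the 100mm x 60mm screen at 180mm; the closer 50mm distance in the final line is a hypothetical value included only to illustrate why bringing the image optically nearer, as lenses do, widens the apparent field of view.

    import math

    def angular_fov(extent_mm, distance_mm):
        """Angle (in degrees) subtended by a flat screen of the given extent
        viewed from the given distance, without lenses."""
        return math.degrees(2 * math.atan((extent_mm / 2) / distance_mm))

    print(angular_fov(60, 180))    # vertical:   ~18.9 degrees
    print(angular_fov(100, 180))   # horizontal: ~31.0 degrees
    print(angular_fov(100, 50))    # hypothetical 50mm effective distance: ~90 degrees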

Overall these tests appeared to confirm the original hypothesis; however, with more time it would still be beneficial to create a virtual environment that responds directly to the user's movement rather than using a video with visual instructions. This would create a greater feeling of immersion within the environment, and it is predicted that this would lead to the rotation of the visual environment having a greater impact on the physical rotation of the participants. Nevertheless, these tests suggest that rotation gains could be deployed effectively to lead users of virtual environments to over- or under-estimate their bodily rotations. If the effect can be achieved with a non-immersive video, it is assumed that it would be even stronger in an immersive environment.
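
As a minimal sketch of how such a rotation gain might be applied in a head-tracked virtual environment (the class, gain value and per-frame update interface here are assumptions for illustration, not part of the test set-up), the virtual camera's yaw is updated by the tracked change in head yaw multiplied by a gain; a gain below 1 makes the user physically turn further than the rotation they see, a gain above 1 the opposite:

    class RotationGainCamera:
        """Applies a rotation gain between tracked head yaw and virtual camera yaw."""

        def __init__(self, gain=0.8):
            self.gain = gain                 # <1: user must physically over-rotate
            self.virtual_yaw = 0.0           # degrees
            self._last_physical_yaw = None

        def update(self, physical_yaw):
            """Call once per frame with the tracked head yaw in degrees."""
            if self._last_physical_yaw is None:
                self._last_physical_yaw = physical_yaw
            delta = physical_yaw - self._last_physical_yaw
            self._last_physical_yaw = physical_yaw
            self.virtual_yaw += self.gain * delta
            return self.virtual_yaw

With a gain of 0.8, for example, a user would have to rotate 225° physically before seeing a 180° rotation in the virtual scene, which is the kind of mismatch between felt and seen rotation that redirection techniques exploit.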

The following graphs show the results for each individual participant. Graphs in red represent the female participants and graphs in blue represent the males.
