Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. SAP 2014, August 08 – 09, 2014, Vancouver, British Columbia, Canada. Copyright © ACM 978-1-4503-3009-1/14/08 $15.00

Effects of Visual and Proprioceptive Information in Visuo-Motor Calibration During a Closed-Loop Physical Reach Task in Immersive Virtual Environments

Elham Ebrahimi∗, Bliss Altenhoff†, Leah Hartman†, J. Adam Jones∗, Sabarish V. Babu∗, Christopher C. Pagano†, Timothy A. Davis∗

Abstract

Research in visuo-motor coupling has shown that the matching of visual and proprioceptive information is important for calibrating movement. Many state-of-the-art virtual reality (VR) systems, commonly known as immersive virtual environments (IVEs), are created for training users in tasks that require accurate manual dexterity. Unfortunately, these systems can suffer from technical limitations that may force de-coupling of visual and proprioceptive information due to interference, latency, and tracking error. We present an empirical evaluation of how visually distorted movements affect users' reach to near field targets using a closed-loop physical reach task in an IVE. We specifically examined the recalibration of movements when the visually reached distance is scaled differently than the physically reached distance. Subjects were randomly assigned to one of three visual feedback conditions during which they reached to targets while holding a tracked stylus: i) Condition 1 (-20% gain condition) in which the visual stylus appeared at 80% of the distance of the physical stylus, ii) Condition 2 (0% or no gain condition) in which the visual stylus was co-located with the physical stylus, and iii) Condition 3 (+20% gain condition) in which the visual stylus appeared at 120% of the distance of the physical stylus. In all conditions, there is evidence of visuo-motor calibration in that users' accuracy in physically reaching to the target locations improved over trials. During closed-loop physical reach responses, participants generally tended to physically reach farther in Condition 1 and closer in Condition 3 to the perceived location of the targets, as compared to Condition 2, in which participants' physical reach was more accurate to the perceived location of the target.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual reality; I.4.7 [Image Processing and Computer Vision]: Scene Analysis—Depth cues; H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems—Artificial, augmented, and virtual realities; H.1.2 [Information Systems]: User/Machine Systems—Human factors

Keywords: Depth perception, visuo-motor re-calibration, virtual reality, immersive virtual environments

1 Introduction

Depth perception is one of the key factors that affects how users perform dexterous manual activities in virtual reality, such as manipulation and selection. There are many factors related to the perception of depth in virtual reality, but only some of these factors have been well studied. There are still many that need further investigation. Perception and neurology literature suggest that accurate depth perception is a fundamental process that is necessary for higher level perceptual and cognitive spatial processing, such as shape, speed, and size perception [Landy et al. 1995]. In spite of substantial efforts to create virtual environments that carefully replicate real world situations, many studies still demonstrate that depth perception in VR is distorted [Witmer and Sadowski 1998; Messing and Durgin 2005; Richardson and Waller 2005; Willemsen et al. 2009]. These kinds of distortions become problematic, especially when VR is used to train skills that are aimed to transfer to the real world. Examples of such applications include surgical simulation to improve operating room performance [Seymour 2008] or robot teleoperation using head mounted displays (HMDs) [Hine et al. 1994].

∗School of Computing, Clemson University, e-mail: [email protected], [email protected], [email protected]; †Department of Psychology, Clemson University, e-mail: [email protected], [email protected], [email protected]

Many of these applications require users to perform manual, dexterous activities that require visual feedback. However, feedback indicative of the users' actions in virtual reality may consist of missing or misaligned information in different visuo-motor sensory channels [Casper and Murphy 2003]. The human visual system has evolved to accommodate sensory information from many different inputs [Milner et al. 2006]. This often happens in a closed-loop manner allowing feedback from multiple sensory inputs to influence physical action. In other words, visual and proprioceptive sensory channels are highly tied together and constantly calibrate and recalibrate based on new information from the real world [Bingham and Pagano 1998].

In many current VR simulations, the visual and proprioceptive information provided to users is distorted and dissimilar when compared to real world experiences. The dissonance between visual and proprioceptive feedback may occur due to simulation artifacts such as latency, tracker drift, or registration errors. We know that in the real world visuo-motor calibration rapidly alters one's actions to accommodate new circumstances [Rieser et al. 1995]. However, it is not well understood to what extent users are able to recalibrate their actions when given dissonant visual and proprioceptive information in IVEs while performing manual, dexterous visuo-motor tasks. Therefore, we investigated the effects of conflicting visual and proprioceptive information on near field distance judgments during closed-loop physical reaching tasks in an immersive virtual reality simulation.

2 Related Work

Previous research in medium field distances (space beyond users' arm reach to a distance of about 30 m) has shown that users greatly underestimate distances in VR [Richardson and Waller 2005; Thompson et al. 2004]. Willemsen et al. [2009] showed that the mechanical properties of an HMD, such as weight and field of view (FOV), can potentially contribute to distance underestimation as measured using blind walking (but not using timed imagined walking). However, Grechkin et al. [2010] pointed out that mechanical properties of the HMD cannot be the only reason for the distance underestimation in VEs. Grechkin et al. [2010] compared real world (RW) viewing, both with and without an HMD, to four VR presentations: i) a virtual world in an HMD, ii) augmented reality (AR) in an HMD, iii) a virtual world in a large screen immersive display (LSID), and iv) a photorealistic virtual world in an LSID. They also found that underestimation occurred in all VE conditions, although the magnitude of the errors varied substantially. In another study, Witmer and Kline [1998] demonstrated that users underestimated distances in both the real world and a VE, with underestimation in the VE being more pronounced. They also pointed out that traversing distances in a virtual environment (VE) reduced the overall underestimation.

Some studies have investigated the differences between verbal and action responses. These have found that verbal judgments were more variable, less accurate, and subject to systematic distortions that were not evident in action responses [Pagano and Isenhower 2008; Pagano and Bingham 1998]. It has been suggested that verbal reports and action responses may involve two distinct perceptual processes [Napieralski et al. 2011; Pagano et al. 2001; Foley et al. 1977]. For instance, Pagano and Isenhower [2008] compared verbal report and reaching responses for egocentric distance judgments. They characterized the verbal reports to be more indicative of relative distance perception, whereas reaching responses were more indicative of absolute distance perception. In this document, our empirical evaluation investigating distance perception during closed-loop visuo-motor calibration employs an immediate, action-based response that uses physical reach.

There is a large amount of previous work that focuses on visuo-motor recalibration through closed-loop interactions in the real world [Rieser et al. 1995; Bingham and Pagano 1998] as well as in VEs [Mohler et al. 2006; Kunz et al. 2013]. To overcome the problem of seeing the world as compressed in VR, some suggested that users' interactions with the environment could potentially change distance estimation in a relatively short amount of time [Richardson and Waller 2005; Altenhoff et al. 2012]. In another study, Kelly et al. [2014] showed that only five closed-loop interactions with an IVE significantly improved participants' distance estimates. The result of their study also indicated that the improvements plateaued after a small number of interactions over a fixed range of distances.

Much of the work investigating visuo-motor calibration has used open-loop distance judgments with no vision of the target, such as blind walking and blind reaching. However, in the real world, we are constantly correlating vision and movements; as such, visuo-motor calibration is a constant and ongoing process [Pagano and Bingham 1998]. It can be reasonably stated that open-loop tasks that deny feedback may not directly mimic real world interactions.

Previous work by Altenhoff et al. [2012] studied the effects of visuo-motor calibration in an IVE to determine whether visual and tactile feedback could potentially improve misperceptions of depth. In that work, participants' distance estimates were measured before and after their interaction with near field targets using both visual and haptic feedback. It was seen that users' performance significantly improved after the visuo-motor calibration interactions. Mohler et al. [2006] also showed that users performed similarly in the RW and an IVE while vision was coupled with action when walking on a treadmill. Likewise, Bingham and Romack [1999] studied real world calibration in users' physical reaches with displacement prisms over a three-day period. They showed that calibration improved daily, with participants interacting quite naturally by the third day.

There are relatively few studies on near field distance estimation in IVEs. Previous work by Napieralski et al. [2011] compared near field egocentric distance estimation in an IVE and the RW. In that work, it was seen that distance underestimation took place in both the IVE and RW. The results also showed that distance underestimation increased with distance in both the IVE and the RW. In another study, Singh et al. [2010] compared closed-loop and open-loop near-field distance estimation in AR. Their results indicated that open-loop blind reaching was significantly underestimated as compared to a closed-loop matching task.

Despite the importance of near field distance estimation, there are very few studies in this area in VR. Rolland et al. [1995] showed overestimation in near field distance judgments in AR using a forced-choice task. Taken together, these studies have shown that near field AR and VR introduce substantial distortions in distance judgments as compared to the real world [Singh et al. 2010; Altenhoff et al. 2012; Napieralski et al. 2011].

As discussed in the previous paragraphs, the effect of closed-loop interactions with an environment, in terms of distance estimation, is well known in the medium field for both VR and RW. In these cases, distance judgments significantly improve after users interact with the environment [Richardson and Waller 2005; Altenhoff et al. 2012; Kelly et al. 2014]. As previously discussed, there is very little work comparing open- and closed-loop distance judgments in either AR or VR environments. Also, the impact of visual and proprioceptive information during closed-loop visuo-motor calibration in the IVE has not been well examined.

In the following experiment, we examined the calibration of near field distance estimation via a between-subjects experiment involving perturbations of visual feedback during closed-loop visuo-motor calibration. These perturbations are explained in detail in the experiment methodology section. In our experiment, distance judgments were measured using physical reach responses to targets in an IVE. We specifically examined the end of the ballistic reach phase in order to ascertain the perceived depth judgments.

2.1 Research Questions

In this study, we investigated the following research questions:

I How do users improve their near field distance judgments with (closed-loop) visual feedback in the IVE?

II To what extent are users' distance judgments affected by mismatch in visual and proprioceptive information during closed-loop interaction in the IVE?

III Does closed-loop interaction in an IVE cause continuous improvement in distance estimation over time?

3 Experiment Methodology

3.1 Participants

The experiment included 36 Clemson University students who received course credit for participating in the experiment. 26 were female and 10 were male, and all were right handed. All participants were tested for visual acuity better than 20/40 using the Titmus Fly Stereotest when viewing an image with a disparity of 400 sec of arc. All participants provided informed consent.

3.2 Apparatus and Material

3.2.1 General Setup

Figure 1 shows the experimental apparatus used for this experiment. Participants were asked to sit on a wooden chair to which their shoulders were loosely tied. This was done to serve as a gentle reminder for them to keep their shoulders in the chair during the experiment. Otherwise, they had full control of their head and arms. Participants reached with a tracked wooden stylus that was 26.5 cm long, 0.9 cm in diameter, and weighing 65 g. All users were asked to hold the stylus in their right hand in such a way that it extended approximately 3 cm above and 12 cm below their closed fist. Each trial began with the back end of the stylus inserted in a 0.5 cm groove on top of the launch platform, which was located next to the participant's right hip.

The target consisted of a groove that was 0.5 cm deep, 8.0 cm tall, and 1.2 cm wide. The groove extended from the center of the base of an 8.0 cm wide and 16 cm tall white rectangle. The target was enclosed within a 0.5 cm border made from thick, black tape. This was added to help participants distinguish the target from the white background wall. Participants were required to match the stylus tip to the groove of the target during the experiment.

The target was placed at participants' eye level and midway between the participants' eyes and right shoulder in order to keep the distance from the eye to the target as close as possible to the distance from the shoulder to the target. The position of the target was adjusted by the experimenter using a 200 cm wooden optical rail. The rail extended in depth along the floor and was parallel to the participants' viewing direction. The target was attached to the optical rail via an adjustable, hinged stand. To prevent any interference with the electromagnetic tracking system, the target, stand, stylus, and optical rail were made of wood.


Figure 1: The near-field distance estimation apparatus. The target, participant's head, and stylus are tracked in order to record actual and perceived distances of physical reach in the IVE.

3.2.2 Visual aspects

An NVIS nVisor SX111 HMD weighing about 1.8 kg was used for the experiment. The HMD contains two LCOS displays, each with a resolution of 1280 x 1024 pixels, for viewing a stereoscopic virtual environment. The field of view of the HMD was determined to be 102 degrees horizontal and 64 degrees vertical. The field of view was determined by rendering a carefully registered virtual model of a physical object [Napieralski et al. 2011]. The simulation used here consisted of the virtual model of the training room, experimental room, and apparatus, which are described in detail in our previous work (see Napieralski et al. [2011] and Altenhoff et al. [2012]). We extended this experimental simulation to provide the following visual feedback.

Unlike our previous studies, we did not provide tactile feedback; instead, we designed our simulation so that the stylus' tip would turn red when it was within a 1 cm radius of the target's groove. Figure 2 shows three screen shots of the virtual target and stylus. Based on the visual information provided to participants, they visually detected when the stylus intersected a groove in the target's face in the IVE.
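The proximity cue described above amounts to a simple distance threshold check. The following is a minimal sketch, not the authors' implementation; the function and variable names are hypothetical:

```python
import math

# Assumption: the paper only states that the stylus tip turned red
# within a 1 cm radius of the target's groove; positions are in cm.
HIGHLIGHT_RADIUS_CM = 1.0

def stylus_tip_color(tip_pos, groove_pos):
    """Return the rendered stylus tip color based on proximity to the groove."""
    dist = math.dist(tip_pos, groove_pos)  # Euclidean distance (Python 3.8+)
    return "red" if dist <= HIGHLIGHT_RADIUS_CM else "neutral"
```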

Figure 2: The image on the left shows a screen shot of the virtual target as perceived by participants in the IVE, with the stylus in front of the target. The image in the middle shows that the tip of the stylus turned red when it was placed in the groove of the target, and the image on the right shows the stylus having passed the target. Participants received visual and proprioceptive feedback only when interacting with the target during closed-loop trials.

3.3 Procedure

Upon arrival, all participants completed a standard informed consent form and demographic survey. Participants' visual and stereo acuity were tested, and their interpupillary distance (IPD) was measured using the mirror-based method described by Willemsen et al. [2008]. The measured IPD was used as a parameter for the experiment simulation to set the graphical inter-ocular distance, and the HMD was adjusted accordingly for each participant. The participant was then loosely strapped in a chair as described before, and the target height was set to the participant's eye height. The participant's maximum arm reach was then measured by adjusting the target so that the participant could place the stylus in the groove of the target with their arm fully extended. However, this was performed without using the extension of their shoulder, as used in Altenhoff et al. [2012]. This maximum arm length was then used to generate target distances to be set during the experiment. Participants were instructed on how to make their physical reach judgments before putting on the HMD. They were asked to start each trial with the stylus in the dock next to their hip, reach to the target with a fast, ballistic motion, and then adjust their initial reach by moving back and forth.

All participants started the experiment by viewing a training environment IVE that was designed to help them acclimate to the viewing experience. Next, the participants were presented with a photorealistic virtual representation of the real room within which the experiment took place. The virtual room also included an accurate replica of the experimental apparatus. During testing, the participants performed 2 practice trials followed by 30 trials of blind reaching in the baseline or pretest session. Trials consisted of 5 random permutations of 6 target distances corresponding to 50, 58, 67, 75, 82, and 90 percent of the participant's maximum arm length. For each trial, with the HMD display turned off, the target distance was adjusted using the physical target to which the sensor was attached. Then, vision was restored and the virtual target was displayed. Once participants notified the experimenter that they were ready, the vision in the HMD was turned off via a key press. The physical target was then immediately retracted to prevent any collision between the participants' stylus and the target during open-loop blind reaching.
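The pretest trial structure (5 random permutations of 6 distances, for 30 trials) could be generated as follows. This is a sketch; `make_pretest_trials` is a hypothetical helper, not from the paper:

```python
import random

def make_pretest_trials(max_reach_cm, seed=None):
    """Build the 30-trial pretest sequence: 5 random permutations of
    6 target distances at 50-90% of the participant's maximum arm reach."""
    percents = [50, 58, 67, 75, 82, 90]
    rng = random.Random(seed)
    trials = []
    for _ in range(5):  # 5 blocks, each a fresh permutation of the 6 distances
        block = [max_reach_cm * p / 100 for p in percents]
        rng.shuffle(block)
        trials.extend(block)
    return trials
```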

After the open-loop pretest or baseline session, participants performed the closed-loop calibration phase of the experiment at least 2 days after the baseline phase in order to minimize any aftereffects. During the closed-loop phase, participants were randomly assigned to one of the three viewing conditions:

• Condition 1: -20% gain, where the visual stylus appeared at 80% of the distance of the physical stylus.

• Condition 2: 0% gain, or no gain, where the visual stylus was co-located with the physical stylus.

• Condition 3: +20% gain, where the visual stylus appeared at 120% of the distance of the physical stylus.
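
The three gain conditions amount to a linear scaling of the rendered stylus depth. A minimal sketch, with a function name of our choosing:

```python
def visual_depth(physical_depth_cm, gain):
    """Map the tracked (physical) stylus depth to the rendered (visual)
    depth under a given gain; gain is -0.20, 0.0, or +0.20 in this study."""
    return physical_depth_cm * (1.0 + gain)
```

For example, a physical reach of 50 cm is rendered at 40 cm under the -20% gain and at 60 cm under the +20% gain.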

Figure 3 depicts the physical and virtual stylus in the different conditions during the closed-loop session.


Figure 3: The top image shows Condition 1 (the -20% gain condition), in which the virtual stylus appears 20% closer than its physical position. The middle image shows Condition 2 (the no gain condition), in which the physical and virtual stylus are co-located. The bottom image shows Condition 3 (the +20% gain condition), in which the virtual stylus appears 20% farther than its physical position.

Based on participants' viewing condition and their maximum arm length, they were provided with five random permutations of four target distances. For viewing Condition 1, four target distances corresponding to 50, 58, 67, and 75 percent of the participant's maximum reach were displayed; for viewing Condition 2, four target distances corresponding to 58, 67, 75, and 82 percent of the participant's maximum reach were displayed; and for viewing Condition 3, four target distances corresponding to 67, 75, 82, and 90 percent of the participant's maximum reach were displayed, for a total of 20 trial distances. Note that 67 and 75 percent of the participant's maximum arm reach were common target distances in all three conditions. Some participants were asked to repeat particular trials if they appeared to make a slow, calculated reach instead of a ballistic reach to the targets.

4 Results

4.1 Data Preprocessing

Rapid reaches to targets are characterized by a fast ballistic phase and then a much smaller and slower corrective phase. Past work has shown that the most accurate way to measure distance perception via rapid reaches is to use the end point of the fast ballistic phase [Bingham and Pagano 1998; Bingham and Romack 1999; Pagano and Bingham 1998; Pagano et al. 2001; Pagano and Isenhower 2008]. To extract the end of the ballistic reaches, we used the following methods:

1. The target face, stylus tip, head, and eye plane locations were tracked and logged by the experiment simulation; this data was pulled from the electromagnetic tracking system during the course of the experiment. Using an after action review visualizer, the participants' actions were replayed from the log file data, and the experimenter coded the approximate location of the ballistic reach in the visualizer. In this manner the visualizer was used to code the end of the ballistic reaches for each trial from each participant's data log. Figure 4 shows a screen shot of the visualizer.

Figure 4: A screen shot of the visualizer that was used to tag the approximate location of the end of the ballistic reach. In this image, the coordinate systems attached to the stylus, target, and user's eye-centered point can also be seen.

2. We also extracted the end of the ballistic reach by analyzing the XY trajectories and speed profiles associated with the physical reach motions. To do so, the end of the forward trajectory (motion toward the target) was tagged as a baseline for the end of the ballistic reach. Then, all the tagged data points from the XY trajectories were embedded in the speed profile to be used to pick the end of the ballistic reaches. Figure 5 is an example of an XY trajectory. The blue line represents the forward motion and the red line represents the backward motions of the stylus as the participant reached to make a perceptual judgment. The black square is the tagged data point denoting the end of the ballistic reach. Note that participants made fine, continuous adjustments to the stylus position after completing the ballistic reach phase. After this phase, the participant moved the stylus back to the starting position.

All the points from the XY trajectories for each trial were gathered, and the speed of the stylus motion for each trial was plotted in a separate window (Figure 6). The speed profile was rendered as a blue line. Figure 6 shows a full view of the speed profile for a single trial. The time instance at the end of the ballistic reach, extracted from the previous step, was also denoted in the speed profile as a magenta line. This line provided an estimate, based on the XY trajectory graph, of the location of the end of the ballistic reach, which was then visually confirmed by examining the speed profile generated in this data processing step. The end of the ballistic reach was chosen by the experimenter, examining the speed profiles, as the first time instant when the speed reaches a local minimum below a threshold of 20 cm/s, immediately after attaining the peak speed caused by the forward motion of the stylus (when reaching to the target).
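The selection rule described above (first local minimum of the speed profile below 20 cm/s after peak speed) can be sketched as follows. The function name and list-based speed representation are illustrative assumptions; the paper's experimenter applied this rule visually.

```python
def ballistic_end(speeds, threshold=20.0):
    """Return the index of the end of the ballistic reach: the first local
    minimum of the speed profile (cm/s) that falls below `threshold`,
    after the peak speed of the forward motion. Returns None if no such
    minimum exists (e.g., the reach never slows below the threshold)."""
    peak = max(range(len(speeds)), key=speeds.__getitem__)
    for i in range(peak + 1, len(speeds) - 1):
        is_local_min = speeds[i] <= speeds[i - 1] and speeds[i] <= speeds[i + 1]
        if is_local_min and speeds[i] < threshold:
            return i
    return None
```

For a profile that accelerates to a peak, dips to 15 cm/s, rises briefly during a corrective adjustment, and then settles, the rule tags the first sub-threshold dip rather than the final stop.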



Figure 5: An example of an XY trajectory for a single trial. The black square is the tagged data point indicating the end of the ballistic reach.

Figure 6: An example of a speed profile (solid blue line). The time instance at the end of the ballistic reach, initially extracted from the XY trajectory, is also denoted in the speed profile as a magenta line. The black dot in the figure denotes the final x and y position of the stylus at the end of the ballistic reach.

4.2 Constant and Absolute Error

4.2.1 Computing Error

Accuracy measures were calculated to examine the differences between participants' estimated and actual target positions. These were then combined for individual participants in each condition (1, 2, or 3). Constant and Absolute Error were calculated based on techniques described by Schmidt [1988]; see formulas 1 and 2, where T is the target distance of a given trial, x_i is the distance estimated by a participant in a particular trial, and n is the number of trials a participant performed in a session.

Constant error measures the direction of the errors of a participant's responses and the average error magnitude. In essence, this measure indicates the direction and accuracy of each participant's responses. Constant Error was calculated using the following formula to examine average error:

\[
\frac{\sum_{i=1}^{n} (x_i - T)}{n} \tag{1}
\]

Notice that the sign of the difference is preserved, so the Constant Error calculation reflects both the magnitude and direction of the average error. For example, a score of 10 for a given participant indicates that the participant's reach fell, on average, 10 centimeters farther than the actual target position. However, it is possible for a participant's constant error to be less than the error exhibited in any one response if his or her responses varied a great deal around the target, both overestimating and underestimating the target distance. For this reason, Absolute Error is used to calculate average error without considering the direction of error.

Although similar to constant error, Absolute Error does not take into account the direction of a participant's error. Rather, it is the average absolute deviation between a participant's estimate and the location of the target. Absolute Error is considered a measure of overall accuracy because it represents how successful the participant was in accurately estimating the location of the target.

Absolute Error was calculated using the following formula to examine overall accuracy in performance:

\[
\frac{\sum_{i=1}^{n} |x_i - T|}{n} \tag{2}
\]
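The two error measures can be sketched directly from formulas (1) and (2); the function names are illustrative:

```python
def constant_error(estimates, target):
    """Formula (1): signed mean of (x_i - T). Preserves direction,
    so over- and underestimates can cancel each other out."""
    return sum(x - target for x in estimates) / len(estimates)

def absolute_error(estimates, target):
    """Formula (2): mean of |x_i - T|. Overall accuracy; direction
    of error is ignored."""
    return sum(abs(x - target) for x in estimates) / len(estimates)
```

This also illustrates why both measures are reported: two reaches of 40 cm and 60 cm to a 50 cm target give a Constant Error of 0 cm but an Absolute Error of 10 cm.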

Data from two participants were not included in the analysis due to technical difficulties.

4.2.2 Open-Loop vs. Closed-Loop Calibration Condition 2

As presented in Table 1, Constant Error of reach estimates showed that, on average, participants in Condition 2 (no gain condition) reached 3.12 cm past the actual target location in the pretest (SD = 2.64), and only 0.03 cm in front of the actual target location in the calibration phase (SD = 4.01), indicating that participant reaches were 3.09 cm closer to the target after the calibration phase, with the stylus appearing at its actual physical location. A paired-samples t-test indicated that this was a significant difference, t(10) = 2.238, p = 0.05.

Absolute Error of reach estimates showed that, on average, participants in Condition 2 were off by 5.86 cm in the pretest (SD = 1.68) and 4.79 cm in the calibration phase (SD = 1.89), also indicating that participants were more accurate after calibration, although this difference was not significant, t(10) = 1.588, p > 0.05. On average, participants no longer overestimated target locations in the calibration phase, with the stylus appearing at its actual physical location, as they had in the pretest.
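The paired-samples comparison above can be reproduced from the per-participant Constant Error values in Table 1. The sketch below computes the t statistic in pure Python (equivalent to `scipy.stats.ttest_rel`); it recovers the reported means (3.12, -0.03) and t(10) = 2.238.

```python
from math import sqrt

# Constant Error (cm) per participant in Condition 2 (Table 1):
# pretest (P) vs. calibration phase (Calb).
pretest = [1.04, 4.91, 1.97, 5.02, 4.97, 3.10, 7.88, -0.35, -1.10, 4.16, 2.72]
calib   = [-0.02, 8.70, -4.23, 1.54, -6.85, 0.18, -0.37, 3.35, -1.76, 0.82, -1.69]

def paired_t(a, b):
    """Paired-samples t statistic: mean difference divided by the
    standard error of the differences (df = n - 1)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / sqrt(var / n)

t = paired_t(pretest, calib)  # reproduces the reported t(10) = 2.238
```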

C2 PID   CE (P)   CE (Calb)   AE (P)   AE (Calb)
8         1.04     -0.02       3.88     3.85
12        4.91      8.70       6.15     9.31
18        1.97     -4.23       3.85     4.45
22        5.02      1.54       7.48     4.33
23        4.97     -6.85       6.37     7.28
27        3.10      0.18       8.68     4.62
24        7.88     -0.37       7.92     4.61
25       -0.35      3.35       4.07     4.57
28       -1.10     -1.76       5.95     3.29
33        4.16      0.82       4.47     2.67
34        2.72     -1.69       5.69     3.72

Avg.      3.12     -0.03       5.86     4.79

Table 1: Constant Error (CE) and Absolute Error (AE) of reach estimates (cm) in the pretest (P) and calibration phase (Calb) in Condition 2 (no gain condition) for each participant (C2 PID).

4.2.3 Condition 1 vs. Condition 3

As presented in Table 2, Constant Error of reach estimates showed that, on average, participants reached 3.72 cm past the actual target location in the calibration phase of Condition 1 (SD = 3.67) and 7.15 cm short of the actual target in the calibration phase of Condition 3 (SD = 4.22), indicating that participant reaches were 10.87 cm farther in the calibration phase of Condition 1 than in Condition 3, which was a significant difference, t(20) = 6.437, p < 0.001.



Absolute Error of reach estimates showed that, on average, participants were off by 5.61 cm in the calibration phase of Condition 1 (SD = 1.65) and 7.89 cm in the calibration phase of Condition 3 (SD = 3.56), also indicating that participants were more accurate in the calibration phase of Condition 1 than Condition 3, although this difference was not significant, t(20) = -1.927, p > 0.05. Participant reaches in the calibration phase of Condition 1 were more accurate and significantly farther than those in Condition 3.

4.3 Rate of Visuo-Motor Calibration on Depth Judgments

In this section, we utilized a mixed-model analysis of variance (ANOVA) to examine changes in reached distance over the course of the experiment. Since the calibration phase of the experiment consisted of 20 total trials, we subdivided the experiment into 4 groups of 5 trials each. We refer to these groups simply as 5-Trials. The analysis was conducted on reached distance expressed as a percentage of the target distance, calculated as percent distance = (reached distance / target distance) * 100. Viewing condition (1, 2, 3) varied between subjects while 5-Trials varied within subjects. This resulted in a 3 x 4 mixed-model ANOVA.
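The dependent-variable preparation described above can be sketched as follows; function names are illustrative:

```python
def percent_distance(reached, target):
    """Reached distance expressed as a percentage of the target distance:
    (reached distance / target distance) * 100."""
    return reached / target * 100.0

def five_trial_blocks(values, block=5):
    """Group the 20 calibration trials into 4 blocks of 5 ("5-Trials"),
    returning the mean of each block."""
    return [sum(values[i:i + block]) / block
            for i in range(0, len(values), block)]
```

For example, a 40 cm reach to a 50 cm target scores 80%, and a sequence of 20 per-trial percentages reduces to four block means for the within-subjects factor.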

4.3.1 Overall Stylus Location

In this section, the data were analyzed based on two sources of sensory information: 1) visual sensory information with respect to the virtual location of the stylus, and 2) kinesthetic sensory information with respect to the physical location of the stylus. Note that the physical and visual stylus locations are essentially two sides of the same coin (they differ only by the imposed gain factor). Therefore, the temporal analysis can be conducted based on either the physical or visual stylus location; we conducted it using the physical stylus location (significance in one entails significance in the other). However, the statistical analyses of the differences between the means for the different conditions (1, 2, 3) were conducted for both the physical and visual stylus locations.

As can be seen in Figure 7, in Condition 2 (0% gain, or gain = 1.0), physically reached distance was typically very close to the target distance, with very little change over the course of the experiment. The overall accuracy and stability of judgments within this condition is not particularly surprising, since visual movements very closely matched physical movements. There appeared to be a general tendency toward shortened reaches over time, but not significantly so (F(3, 33) = 1.513, p = 0.229).

However, upon examining the scaled movement conditions (Conditions 1 and 3), we find significant changes in physically reached distance. In particular, in Condition 3 (+20% gain, or gain = 1.2), one would expect participants' physical reach to be noticeably shorter than when no gain was applied, because the stylus appears to be farther. This expectation was confirmed in the data, with participants reaching significantly shorter (-15.8%) than in Condition 2 (0% gain) (F(1, 22) = 16.532, p = 0.001). Over the course of the experiment, participants significantly shortened their reached distance (F(3, 33) = 2.881, p = 0.051). This pattern is qualitatively similar to that seen in Condition 2.

When examining Condition 1 (-20% gain, or gain = 0.8), we would expect to see physical reaches that are longer than those expressed when no gain was applied, because the stylus appears closer. When comparing Conditions 1 and 2, we see that this is, in fact, the case. Participants in Condition 1 reached significantly farther (11.5%) than their Condition 2 counterparts (F(1, 22) = 7.864, p = 0.010). There was no significant change in physically reached distance over the course of the experiment (F(3, 33) = 0.666, p = 0.579). However, the magnitude of the scaled reaches in this condition was slightly less than that seen in Condition 3. Neither of the physical reach conditions, however, exactly matched the gain factor applied to the visual reach.

Figure 7: Physical and visual stylus location for all closed-loop conditions (C1 = 0.8, C2 = 1.0, C3 = 1.2). The figure plots percent distance (+/- 1 SEM) in physical and virtual space for each condition across the four 5-Trials blocks.

If we examine, instead, the visual distance of the reach as it appeared in the VE, we would expect performance in the scaled conditions (Conditions 1 and 3) to very closely match that of the unscaled condition (Condition 2). Figure 7 summarizes these results. When comparing Conditions 2 and 3, we find that they did not significantly differ (0.1%) in visually reached distance (F(1, 22) = 0.000, p = 0.999). However, when comparing Condition 1 to Condition 2, we find that participants in the scaled condition very consistently under-reached (-9.2%) relative to their no-gain counterparts (F(1, 22) = 6.709, p = 0.017).

5 Discussion

5.1 Comparing Open-Loop vs. Closed-Loop Distance Judgments

We compared constant and absolute error of the perceived distances to targets between the open-loop blind reaching and the closed-loop physical reaching to targets with visuo-motor calibration (section 4.2.2). The closed-loop phase provided participants with visual feedback that was co-located with the physical location of the tracked stylus (Condition 2); thus, visual and proprioceptive information matched and reinforced the stylus location to the participant during visually guided reaching. Our results indicate that the primary mechanism by which recalibration occurred was visual feedback, as the visual position of the stylus strongly influenced the end position of the participants' ballistic reach. Our findings suggest that participants generally overestimated distances to the targets by 3.12 cm when reaching to the perceived location of the target without visual guidance. The tendency toward overestimation of reached distance observed in this study is consistent with a similar pattern observed by Rolland et al. [1995] in AR. However, others have reported underestimation when performing similar tasks [Altenhoff et al. 2012; Singh et al. 2010; Napieralski et al. 2011]. The explanation for these divergent results is still unclear and necessitates future research.

During the closed-loop visuo-motor calibration trials in Condition 2, participants received accurate visual and proprioceptive feedback regarding the targets through the precise rendering of visual information about the actual stylus position and the change in stylus tip color when the tip of the stylus was placed within a 1 cm diameter groove on the target face. Mean absolute error in perceptual judgments to the targets also decreased from 5.86 cm in the open-loop session to 4.79 cm in the closed-loop session (Condition 2), showing an improvement in absolute error of 1.07 cm on average. The mean constant error of physical reach responses of participants in the closed-loop session (Condition 2), where participants reached with visual guidance, decreased to -0.03 cm as compared to 3.12 cm in the open-loop session, revealing an improvement of 3.09 cm on average. This is similar to Altenhoff et al. [2012], in which we found that closed-loop visuo-motor calibration with visual and haptic (tactile) feedback improved near-field distance judgments by 4.27 cm as compared to a pre-calibration open-loop baseline. However, our findings suggest that accurate visual feedback alone regarding the location of the effector (hand/stylus), during closed-loop interactions where users received constant visuo-motor calibration via visual and proprioceptive information, appears as effective as the addition of kinesthetic and tactile information [Altenhoff et al. 2012] in calibrating physical reach responses to targets in near-field IVE simulations.

C1 PID   CE C1    PoAL (%) C1   AE C1      C3 PID   CE C3     PoAL (%) C3   AE C3
5         5.40      9.65         5.56       1        -9.44     -18.50        9.44
7        -5.03     -9.00         5.65       6        -4.42     -7.49         7.64
9         7.00     13.53         7.49       10       -6.11     -14.12        6.23
11        5.18      9.00         5.33       13      -11.30     -20.73       11.30
14        6.48     13.94         6.81       17       -5.43     -11.31        5.45
15        6.51     14.00         7.20       20       -4.91     -9.26         5.19
16        6.03     11.06         6.61       26       -1.42     -2.42         3.05
19        3.78      7.70         5.19       29       -1.62     -3.15         4.08
21       -0.03     -0.05         1.99       30      -14.28     -25.05       14.28
31        1.04      1.97         3.46       35      -12.15     -21.14       12.15
32        4.51      8.12         6.38       36       -7.54     -15.24        7.95

Avg.      3.72      7.27         5.61       Avg.     -7.15     -13.49        7.89

Table 2: Constant Error (CE) and Absolute Error (AE) of reach estimates (cm) in the calibration phase of Condition 1 (C1) and Condition 3 (C3), and the Proportion of Max Arm Length (PoAL (%)) for each participant.

5.2 Rate of Visuo-Motor Calibration on Distance Judgments in Closed-Loop Perturbations

In section 4.3, we performed a statistical analysis to compare the change in percent actual distance reached by the physical/virtual stylus (section 4.3.1) over four sets of trials (each set consisting of 5 trials) during the closed-loop session, in which participants received visuo-motor calibration via visual and proprioceptive information (Conditions 1, 2, and 3). In Condition 2 (0% gain), we found that there were no significant changes in participants' physical reach responses over the course of the experiment. However, participants did show a slight overestimation in the initial trials, and the physical reach responses tended to calibrate toward 100% of the actual distance. In Condition 1 (-20% gain), participants' physical reach responses showed an overestimation favoring the proprioceptive information in the first five trials, but participants tended to scale their responses down toward the visual information. In this case, they showed an overall overestimation of physical reach of 11.5% of the actual distance (or 7.25% of the mean maximum arm reach), with a 3.72 cm mean constant error and 5.61 cm mean absolute error (section 4.2.3). In Condition 3 (+20% gain), participants' physical reach responses showed less of an immediate underestimation in the first five trials (perhaps favoring the visual information, contrary to Condition 1), but the underestimation tended to increase over the course of the session, biasing the physical reach responses toward the physical location of the hand/stylus (favoring the proprioceptive information). Participants in Condition 3 showed an overall underestimation of -15.8% of the actual distance (or -13.5% of their mean maximum arm reach), with a -7.15 cm mean constant error and 7.89 cm mean absolute error (section 4.2.3).

6 Conclusion and Future Work

In an empirical evaluation, we showed that participants' depth judgments are scaled to be more accurate in the presence of visual and proprioceptive information during closed-loop near-field activities in the IVE, as compared to absolute depth judgments in an open-loop session, when measured via physical reaching. These findings are important, as most VR simulations lack tactile haptic feedback systems for training in dexterous manual tasks such as surgical simulation, welding, and painting applications. The use of visual information to reinforce the location of physical effectors such as the hand or stylus appears sufficient to improve depth judgments. However, we have also shown that depth perception can be altered drastically when visual and proprioceptive information, even in closed-loop conditions, are no longer congruent in the IVE. Such incongruities may cause significant distortions in spatial perception and potentially degrade training outcomes, experience, and performance in VR simulations.

In future work, we plan to empirically evaluate the effects of visual and proprioceptive information mismatch on post-calibration open-loop perceptual judgments, in order to investigate any lasting carry-over effects from the perturbations in the IVE. We also plan to examine whether the effects of visual and proprioceptive scaling during visuo-motor calibration transfer from the virtual world to the real world. This research direction has profound implications with respect to the successful transfer of psychomotor skills learned in visuo-motor activities in VR simulations to real-world tasks.

Acknowledgements

The authors gratefully acknowledge that this research was partially supported by a University Research Grant Committee (URGC) award from Clemson University.

References

ALTENHOFF, B. M., NAPIERALSKI, P. E., LONG, L. O., BERTRAND, J. W., PAGANO, C. C., BABU, S. V., AND DAVIS, T. A. 2012. Effects of calibration to visual and haptic feedback on near-field depth perception in an immersive virtual environment. In Proceedings of the ACM Symposium on Applied Perception, ACM, 71–78.

BINGHAM, G. E., AND PAGANO, C. C. 1998. The necessity of a perception-action approach to definite distance perception: Monocular distance perception to guide reaching. Journal of Experimental Psychology: Human Perception and Performance 24, 1, 145–168.

BINGHAM, G., AND ROMACK, J. L. 1999. The rate of adaptation to displacement prisms remains constant despite acquisition of rapid calibration. Journal of Experimental Psychology: Human Perception and Performance 25, 5, 1331.

CASPER, J., AND MURPHY, R. R. 2003. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 33, 3, 367–385.

FOLEY, J. M., ET AL. 1977. Effect of distance information and range on two indices of visually perceived distance. Perception 6, 4, 449–460.

GRECHKIN, T. Y., NGUYEN, T. D., PLUMERT, J. M., CREMER, J. F., AND KEARNEY, J. K. 2010. How does presentation method and measurement protocol affect distance estimation in real and virtual environments? ACM Transactions on Applied Perception (TAP) 7, 4, 26.

HINE, B., STOKER, C., SIMS, M., RASMUSSEN, D., HONTALAS, P., FONG, T., STEELE, J., BARCH, D., ANDERSEN, D., MILES, E., ET AL. 1994. The application of telepresence and virtual reality to subsea exploration. In Proceedings of the 2nd Workshop on Mobile Robots for Subsea Environments, International Advanced Robotics Program (IARP), MJ Lee and RB McGee (eds), Monterey, CA, 117–126.

KELLY, J. W., HAMMEL, W. W., SIEGEL, Z. D., AND SJOLUND, L. A. 2014. Recalibration of perceived distance in virtual environments occurs rapidly and transfers asymmetrically across scale. IEEE Transactions on Visualization and Computer Graphics 20, 4, 588–595.

KUNZ, B. R., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2013. Does perceptual-motor calibration generalize across two different forms of locomotion? Investigations of walking and wheelchairs. PLoS ONE 8, 2.

LANDY, M. S., MALONEY, L. T., JOHNSTON, E. B., AND YOUNG, M. 1995. Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research 35, 3, 389–412.

MESSING, R., AND DURGIN, F. H. 2005. Distance perception and the visual horizon in head-mounted displays. ACM Transactions on Applied Perception (TAP) 2, 3, 234–250.

MILNER, A. D., GOODALE, M. A., AND VINGRYS, A. J. 2006. The Visual Brain in Action, vol. 2. Oxford University Press, Oxford.

MOHLER, B. J., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2006. The influence of feedback on egocentric distance judgments in real and virtual environments. In Proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization, ACM, 9–14.

NAPIERALSKI, P. E., ALTENHOFF, B. M., BERTRAND, J. W., LONG, L. O., BABU, S. V., PAGANO, C. C., KERN, J., AND DAVIS, T. A. 2011. Near-field distance perception in real and virtual environments using both verbal and action responses. ACM Transactions on Applied Perception (TAP) 8, 3, 18.

PAGANO, C. C., AND BINGHAM, G. P. 1998. Comparing measures of monocular distance perception: Verbal and reaching errors are not correlated. Journal of Experimental Psychology: Human Perception and Performance 24, 4, 1037.

PAGANO, C. C., AND ISENHOWER, R. W. 2008. Expectation affects verbal judgments but not reaches to visually perceived egocentric distances. Psychonomic Bulletin & Review 15, 2, 437–442.

PAGANO, C. C., GRUTZMACHER, R. P., AND JENKINS, J. C. 2001. Comparing verbal and reaching responses to visually perceived egocentric distances. Ecological Psychology 13, 3, 197–226.

RICHARDSON, A. R., AND WALLER, D. 2005. The effect of feedback training on distance estimation in virtual environments. Applied Cognitive Psychology 19, 8, 1089–1108.

RIESER, J. J., PICK, H. L., ASHMEAD, D. H., AND GARING, A. E. 1995. Calibration of human locomotion and models of perceptual-motor organization. Journal of Experimental Psychology: Human Perception and Performance 21, 3, 480–497.

ROLLAND, J. P., BURBECK, C. A., GIBSON, W., AND ARIELY, D. 1995. Towards quantifying depth and size perception in 3D virtual environments. Presence: Teleoperators and Virtual Environments 4, 1, 24–48.

SCHMIDT, R. A., AND LEE, T. 1988. Motor Control and Learning, 5E. Human Kinetics.

SEYMOUR, N. E. 2008. VR to OR: A review of the evidence that virtual reality simulation improves operating room performance. World Journal of Surgery 32, 2, 182–188.

SINGH, G., SWAN II, J. E., JONES, J. A., AND ELLIS, S. R. 2010. Depth judgment measures and occluding surfaces in near-field augmented reality. In Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization, ACM, 149–156.

THOMPSON, W. B., WILLEMSEN, P., GOOCH, A. A., CREEM-REGEHR, S. H., LOOMIS, J. M., AND BEALL, A. C. 2004. Does the quality of the computer graphics matter when judging distances in visually immersive environments? Presence: Teleoperators and Virtual Environments 13, 5, 560–571.

WILLEMSEN, P., GOOCH, A. A., THOMPSON, W. B., AND CREEM-REGEHR, S. H. 2008. Effects of stereo viewing conditions on distance perception in virtual environments. Presence: Teleoperators and Virtual Environments 17, 1, 91–101.

WILLEMSEN, P., COLTON, M. B., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2009. The effects of head-mounted display mechanical properties and field of view on distance judgments in virtual environments. ACM Transactions on Applied Perception (TAP) 6, 2, 1–14.

WITMER, B. G., AND KLINE, P. B. 1998. Judging perceived and traversed distance in virtual environments. Presence: Teleoperators and Virtual Environments 7, 2, 144–167.

WITMER, B. G., AND SADOWSKI, W. J. 1998. Nonvisually guided locomotion to a previously viewed target in real and virtual environments. Human Factors: The Journal of the Human Factors and Ergonomics Society 40, 3, 478–488.
