
Noname manuscript No. (will be inserted by the editor)

Operator Performance in Exploration Robotics

A Comparison Between Stationary and Mobile Operators

Alberto Valero1, Gabriele Randelli2, Fabiano Botta3, Diego Rodríguez-Losada4, Miguel Hernando5

1 Universidad Carlos III de Madrid, Spain, e-mail: [email protected]
2 Department of Computer and Systems Sciences, SAPIENZA - Università di Roma, e-mail: [email protected]
3 Universidad de Granada, Spain, e-mail: [email protected]
4 Universidad Politécnica de Madrid, Spain, e-mail: [email protected]
5 Universidad Politécnica de Madrid, e-mail: [email protected]

The date of receipt and acceptance will be inserted by the editor

Abstract Mobile robots can accomplish high-risk tasks without exposing humans to danger: robots go where humans fear to tread. Until the time in which completely autonomous robots are fully deployed, remote operators will be required in order to fulfill desired missions. Remotely controlling a robot requires that the operator receive information about the robot's surroundings, as well as its location in the scenario. Based on a set of experiments conducted with users, we evaluate the performance of operators when they are provided with a hand-held-based interface or a desktop-based interface. Results show how performance depends on the task asked of the operator and the scenario in which the robot is moving. The conclusions prove that the operator's intra-scenario mobility when carrying a hand-held device can counterbalance the limitations of the device. By contrast, the experiments show that if the operator cannot move inside the scenario, his performance is significantly better when using a desktop-based interface. These results set the basis for a transfer-of-control policy in missions involving a team of operators, some equipped with hand-held devices and others working remotely with desktop-based computers.

1 Introduction

Mobile robots are increasingly becoming an aid to humans in accomplishing dangerous tasks. Examples of such tasks include search and rescue missions, military missions, surveillance, scheduled operations, and so forth. The advantage of using robots in such situations is that they accomplish high-risk tasks without exposing humans to danger: robots go where humans fear to tread.

Let's consider the explosion and subsequent fire at the Chernobyl Plant in the Soviet Union in 1986. A major challenge facing first response teams in accidents of this kind is to assess the extent of the damage and the associated risks. For their part, robots can be deployed to help such first responders make a proper situation assessment. When an initial situation assessment has been made, and areas safe for humans have been identified, responders can go into the affected zones, aided by the robots and, as the case may be, interacting with them through hand-held devices. Being at the scene of the damage, they in principle have partial visibility both of the robots and of the scenario in general.

Terrorist attacks are another example of situations in which robots could be used, whenever the structure under attack is not seriously damaged. An example is the Sarin attack on the Tokyo subway in 1995. In five coordinated attacks, the perpetrators released sarin gas on several lines of the Tokyo Metro, killing a dozen people, severely injuring fifty, and causing temporary vision problems for nearly a thousand others. The toxic shock was a threat for responders, who could not go inside the affected areas. Similarly to the Chernobyl case, the infrastructure was not gravely damaged, a fact that would have allowed existing robots to be deployed and move within the affected area.

Another example is that of scheduled operations, like the inspection of chemical or nuclear plants, which make it necessary to release a sensor into a place or places difficult for operators to reach, on account, for example, of high temperatures. Remote operators could drive a mobile robot to the desired point, where the equipment for sensing, or even for implementation of the scheduled operations, needs to be placed [22].

Such missions are characterized by remote stationary personnel and on-site responders. Robots are a useful aid for dealing with situations involving hazard or inaccessibility. Thus, it is important to identify which sort of operator is best able to control them. Intra-scenario operator mobility is often said to be advantageous for acquiring Situational Awareness (SA) in the context of robot tele-operation, but fixed devices can provide a greater volume of processed information. This should not be discounted when seeking to construct more effective Human-Robot Interaction in Exploration missions. In this article, on the basis of extensive experimentation comparing a desktop-based interface with a PDA-based interface for remote control of mobile robots, we attempt to do two things. First, we undertake to define which kinds of operators have the best SA under different conditions.

Second, we seek to lay the groundwork for a control transfer policy for determining when one operator should hand over control to another, depending on the device, the task, and the context.

The article is divided into four main sections. We begin by presenting some theoretical aspects concerning the cognitive differences between a PDA and a desktop interface for controlling a remote robot. Afterwards, we present our desktop-based and PDA GUIs. We then go on to explore the differences between the desktop interface and the PDA interface under the same operational conditions. Next, we assess how variation of the operating conditions influences operator performance for each interface prototype. We present afterwards the results and the contribution of the study on which this article is based. The two-fold research question is the following: When should the remote operator take control of the robot, and when should it be transferred to the on-site operator? And under what sort of circumstances and/or in the context of what sorts of tasks?

2 Background

2.1 Situational Awareness and Spatial Cognition

We can recall here the five human-robot situational awareness categories presented in [26]: location awareness, activity awareness, surroundings awareness, status awareness, and overall mission awareness. Among these we will in particular analyse location awareness and surroundings awareness, since these are fundamental for the remote tele-operation of a robot. In order better to understand how SA enhances an operator's performance when he is guiding a robot, it is useful to introduce two important concepts from human spatial cognition: route knowledge and survey knowledge. The distinction between route and survey knowledge helps us to understand what cognitive skills are needed by a human operator remotely controlling a robot.

Route perspective is closely linked to perceptual experience: it occurs from the egocentric perspective in a "retinomorphous reference system," such that the subject is able to perceive himself in space [12], with a special emphasis on spatial relations between objects composing the scene in which the subject is situated. An example is the case of an operator controlling a robot by means of a three-dimensional display on a screen that simulates the visual information that he would obtain by directly navigating in the environment. Route-based information, gathered from a ground perspective, is stored in memory; this makes it possible to keep track of turning points, distances, and landmarks or relevant points of reference in the observed context. By contrast, survey perspective is characterized by an external and allocentric perspective, such as an aerial or map-like display, and it thus facilitates direct access to the global spatial layout [2]. In this sense, it reproduces the situation that would obtain if the operator had a device that enabled a global, aerial view of the environment containing the robot. Previous studies have shown that a navigator (in our case: the operator) having access to both perspectives exhibits more accurate performance [12].

We see that there is a relation between location awareness and survey knowledge, while surroundings awareness is correlated with route knowledge. Path-planning, for the sake of obstacle-avoidance, depends on the operator's surroundings awareness; it deploys an egocentric system of reference for deciding the robot's direction of movement. The problem, however, is that information about the overall environment remains rigid and relatively poor. By contrast, survey knowledge, for way-finding, depends on the operator's location awareness, which is generally considered an integrated form of representation permitting fast, route-independent access to selected locations structured in an allocentric coordinate system [23].

Our case study consists of a human operator remotely controlling a robot using a human-robot interface. When the operator is not physically in the navigation scenario, the interface must enhance his spatial cognitive abilities by offering multilevel information about the environment (route and survey knowledge). Complex interfaces can provide different perspectives on the environment (offering either a bird's-eye view or a first-person view). Such richly varied information enables an operator looking at a graphical user interface to have access to more than one perspective at the same time. Conversely, if the operator is in the scenario, part of the needed information can be acquired by direct observation, depending on the visibility the operator has. In such situations less information needs to be displayed on the GUI.

The above-mentioned spatial-cognitive aspects should be taken into consideration when designing a human-robot interface for remote tele-operation. In the last several years, there has been a great surge of interest in human-robot interface design. Adams' article "Critical Considerations for Human-Robot Interface Development," written in 2002 [1], set researchers working on how to fill the gap between human factors engineering and robotic research. Drury, Yanco, Scholtz, and Adams herself, as well as their collaborators, have applied the knowledge thus gained to the specific field of Search and Rescue Robots in a number of publications [20][6][25][4][13][15]. These works have resulted in a set of guidelines for Human-Robot Interface design, which can be summarily presented in the following list:

1. Enhance awareness.
– Location Awareness: Provide a map of where the robot has been and locate the robot within the map.
– Surroundings Awareness: Provide more spatial information about the robot in the environment and so make operators more aware of their robots' immediate surroundings.
2. Lower cognitive load: Provide fused sensor information to avoid making the user fuse the data mentally.
– Provide user interfaces that support multiple robots in a single window, if possible.
– In general, minimize the use of multiple windows.
3. Provide a granulation of the autonomy spectrum according to operator task and abilities.
4. Provide help in choosing robot autonomy level.
5. Prevent/manage operator errors and anticipate/interpret his intentions.

There are many GUIs for tele-operating a mobile robot present in the literature. Before describing our GUIs, we review some interfaces and look at their evaluation from the point of view of how well they provide the operator with the situational awareness required for controlling a robot.

2.2 Existing Desktop Interfaces

When an operator is controlling a robot by means of a desktop computer, we can assume that in the majority of cases he will not be able to see the robot or the navigation scenario. In this situation, the human's knowledge of the robot's surroundings, location, activities, and status is gathered solely through the interface. An insufficient or mistaken situational awareness of the robot's surroundings may, for example, provoke a collision, and inadequate location awareness may mean that the explored area does not fit the requirements of the mission or that the task is not accomplished efficiently. When this happens, the use of robots can be more of a detriment to the task than a benefit.

2.2.1 Map-centric Interfaces Map-centric interfaces are those that stress the representation of the robot inside the environment map and thereby seek to enhance the operator's location awareness. They provide a bird's-eye view of the scenario. The operator may follow the robot, or else he may concentrate on the map with the robot located inside it.

Map-centric interfaces are better suited for operating remote robot teams than video-centric interfaces, given the inherent location awareness that a map-centric interface can easily provide. The relation of each robot in the team to other robots, as well as its position in the search area, can be seen in the map. However, it is less clear that map-centric interfaces are better for use with a single robot. If the robot does not have adequate sensing capabilities, creating the maps that these interfaces rely on may not be possible. If the map is not generated correctly on account of the interference of moving objects in the environment, the presence of open spaces without objects within the laser range, faulty sensors, software errors, or other factors, the user could become confused as to the true state of the environment. Moreover, the emphasis on location awareness may inhibit the effective mediation of good surroundings awareness. Examples of map-centric GUIs can be seen in [17][18][3].

2.2.2 Video-centric Interfaces Studies have shown that operators rely heavily on the video feed from the robot. Video-centric interfaces are thought to provide the most important information through the video, even if other information, including a map, is present. Video-centric interfaces are by far the most common type of interface used with remote robots, and they range from interfaces that consist only of the video image to more complex interfaces that incorporate other information and controls. The problem with video-centric interfaces, however, is that whenever they include other information apart from the video, this information tends to be ignored, as demonstrated in [24][17].

Most existing interfaces are video-centric; the most referenced in the HRI literature is the UMass-Lowell GUI [24][6][27][5]. The main difference from the map-centric interfaces presented above lies in the fact that the UMass-Lowell interface keeps the video centred, and it is the robot and the map or obstacles that are rotated. A joint work between the Idaho National Laboratory (INL, whose GUI is map-centric) and UMass-Lowell compares their interfaces [5]. The authors demonstrate that this difference does not influence the performance of the operators.

2.3 Existing PDA Interfaces

As we have seen in the previous considerations regarding spatial cognition, intra-scenario operator mobility is a great advantage in the context of acquiring situational awareness in robot tele-operation, as the operator has visual access to the environment and in some situations may have visual contact with the robot. Even if remote operators, using powerful workstations, can visualize and process a larger amount of data, responders carrying a PDA interface can boost the pervasiveness of robotic systems in mobile applications where operators cannot be pinned down in a particular place. Even if mobile devices are less powerful than desktop computers, they offer the operator the capacity to move, thus allowing him partially to view the actual scenario with the robot that he is controlling. The disadvantages related to device limitations could be balanced by the advantage of mobility. Mobility could facilitate better situational awareness and so enhance the control of the robot. First responders could control a robot team with a PDA interface while having a partial view of the environment, and thus could obtain on-field information not retrievable by the robot sensors.

With these sorts of goals in mind, some research groups have designed graphical user interfaces for PDAs. In the last few years, many interfaces have been developed for use on hand-held devices: for military applications [8][9][7]; for exploration and navigation [15][16][11]; and for service robots [19][14][21].

3 Description of the Interfaces

In this section we will describe the interfaces used for the experiments we will present. The interfaces' code, as well as their robotic framework, are available online and free to use under the GPL license1.

3.1 Desktop Interface

Our desktop-based interface is designed for controlling robots in structured and partially unstructured environments. Its goal is the ability to control a robot, which mainly involves exploration, navigation, and mapping issues. Our desktop interface seeks to break the video-centric/map-centric division. Its strength is the integration of map and video in a single display. The interface design is principally concerned with providing surroundings and location situational awareness ([20][24]) to an operator who must remotely control a robot. Our concern was that the global information should be visible at all times on the screen, in order to enable monitoring of all the data sensed by the robot while controlling it. We will now describe the display of the data.

3.2 Providing Situational Awareness

The interface is shown in Figure 1. The display includes allocentric and egocentric views of the scenario. It consists of three views: a Local View of the Map; a Global View of the Map, giving a bird's-eye view of the explored area; and a pseudo-3D View, giving a first-person perspective on it.

Local View The local view of the map is designed to provide precise surroundings awareness of the robot through an allocentric point of view defined by the robot's position (this position is always fixed in the interface, and only the robot's orientation changes). The operator can see the robot inside the constructed map and the laser and sonar readings coming from the robot. The operator can zoom the view in and out, choosing the level of detail he desires. The robot is represented by a solid rectangle, and its direction by a solid triangle. When the operator is controlling a team of robots, each robot is marked with a different color to help the operator identify which robot is operating. The map may be north-oriented or robot-oriented, depending on the operator's need.
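The difference between the north-oriented and robot-oriented modes amounts to a change of reference frame applied before drawing. As a minimal sketch, assuming numpy and 2D world-frame points (the function and its names are illustrative, not the interface's actual code):

```python
import numpy as np

def to_local_view(points_world, robot_xy, robot_theta, robot_oriented=True):
    """Express world-frame points (e.g. laser endpoints) in the local view.

    In robot-oriented mode the points are rotated into the robot's frame,
    so the robot's heading stays fixed on screen; in north-oriented mode
    only the translation to the robot's position is removed."""
    p = np.asarray(points_world, dtype=float) - np.asarray(robot_xy, dtype=float)
    if robot_oriented:
        c, s = np.cos(robot_theta), np.sin(robot_theta)
        # Rotation by -theta: world frame -> robot frame.
        rot = np.array([[c, s], [-s, c]])
        p = p @ rot.T
    return p
```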

Map View The global map view provides a bird's-eye perspective. In the global view of the map, all the individual maps are fused into one. All the robots are indicated inside the map. The path each robot has followed is also traced in the color of the robot. This view provides a precise location awareness of the whole team. The map is resized as the area it covers expands, which ensures that the map always fits in the display. The operator can select rectangular areas to zoom.

1 A web page will be available soon. The link will be included here.

Fig. 1: Interface Display

Both the local and map views mark the target point each robot is trying to reach (in autonomy or shared control) and the path it will try to follow in order to reach this target point.

Pseudo-3D View The pseudo-3D view of the environment is designed to provide surroundings awareness of the robot through a point of view defined in terms of the robot's position. A revolving arrow on top of the robot indicates the direction of the robot. The operator can shift the perspective, either to behind and above the robot or "in the place" of the robot, thus obtaining a first-person point of view of the situation. In contrast to the INL display, the operator can choose to view either the constructed map or the laser and sonar sensor readings (or both at the same time). This is especially useful in two situations: 1) when the map is mistaken, the operator can choose the laser view, which shows the correct position of obstacles in front of the robot; 2) in very narrow spaces, the map may not be precise enough to provide adequate surroundings awareness, while the laser is far more exact.

This display design covers the two types of situational awareness required by an operator for controlling the robot. Surroundings awareness is provided in a precise way by the local view and the 3D view, which show both laser and map readings, thereby avoiding the problem of wrongly constructed maps. Location awareness is provided by the global map view or the 3D view; either way, the field of vision is set above the environment. It has been shown in [17] that an operator having several displays (in the case examined, video and map) would pay attention only to one of them. We agree with this finding, but, if he has more than one display to choose from, the operator can switch from one to another according to his needs. It seems clear that none of the views described is most appropriate for all situations. Furthermore, this design, as should be clear, supports the control and supervision of a team of robots, as it includes robot-attached views as well as allocentric views, thus avoiding the problem raised by the INL and UMass-Lowell designs in [27].

3.3 PDA Interface

The PDA interface can boost the pervasiveness of robotic systems in mobile applications where operators cannot be pinned down in a particular place. Even if mobile devices are less powerful than desktop computers, they offer the operator the capacity to move, allowing a partial view of the actual scenario with the robot that he is controlling. The disadvantages tied to device limitations could be balanced by the advantage of mobility, which might afford better situational awareness and so enhance the control of the robot. In a case such as the Chernobyl disaster, described in the introduction, first responders could control a robot team with a PDA interface while having a partial view of the environment, and thus obtain on-field information not retrievable by the robot sensors. The PDA interface is obviously suited to exploiting just this advantage of mobility.

3.3.1 Operator Displays Due to the reduced size of a PDA and its computational limitations, the display cannot present on-screen all the data provided by the HRI system. In order to preserve the same functions offered by the desktop-based interface, we implemented them using various simplified layouts. This underlines how critically important it is to present the operator only with the crucial data, as each layout change implies a longer interaction time with the device. Another critical point was to consider the slower input capacities of the operator with a PDA: these consist of a touch screen and a four-way navigation joystick. Thus, it is important to minimize the number of interactive steps required to change a setting or to command the robot.

The PDA has two kinds of 2D views, each selectable with its own tab. The first, centred on the robot, is the Laser View (Figure 2(a)). The second is the Map View (Figure 2(b)), equivalent to the desktop interface's Global Map View.

A third tab (Figure 2(c)) is dedicated to the Robot Control functionalities.

Laser View. This is an egocentric view attached to the robot; it remains stationary at the bottom part of the display. It offers a precise real-time local representation of the obstacles the robot is facing. The graphic for this view is very simple, and no other information relative to the robot's status (orientation, speed, etc.) is provided. This view can be zoomed in and out.

Map View. This is an allocentric view relative to the environment explored. It allows the operator to retrieve the map of the explored area. It can be zoomed in or out. By clicking on a point within the map, the operator commands the robot to go to a target point (shared control). As handling the whole map is computationally expensive, we decided to eliminate periodic self-updating; the map refreshes itself only on the user's demand.
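Two design decisions stand out here: commands are expressed as target points rather than velocities (shared control), and the costly map transfer happens only on demand. A hypothetical sketch of such a controller follows; all names (robot_link, fetch_map, etc.) are illustrative assumptions, not the actual PDA code:

```python
class MapViewController:
    """Sketch of the Map View interactions described above (names are ours)."""

    def __init__(self, robot_link, view):
        self.robot_link = robot_link   # transport to the robot/HRI server
        self.view = view               # PDA map widget

    def on_tap(self, px, py):
        # Convert the tapped pixel to map coordinates and issue a
        # go-to-target command (shared control): the robot plans the path.
        x, y = self.view.pixel_to_map(px, py)
        self.robot_link.send_goto(x, y)

    def on_refresh_request(self):
        # No periodic polling: the full map is transferred only on
        # demand, keeping bandwidth and CPU use low on the PDA.
        grid = self.robot_link.fetch_map()
        self.view.redraw(grid)
```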

Autonomy Levels Panel. This allows the user to set his desired robot control mode. In Shared Control or Autonomy Mode, he can also set the kind of heuristic that will be used by the robot with respect to its motion speed.

4 Initial Experiment

The goal of this exploratory study was to compare the usefulness of a PDA with a desktop GUI, in order to determine the optimal way to distribute the control of a robot between an operator roving with a hand-held device and a remote stationary operator using a desktop computer, so as eventually to work out a control transfer policy for determining when robot guidance should be passed from one operator to another. A particular goal of our research was to investigate which of the two interfaces is more effective when used in navigation and/or exploration tasks, depending on different conditions of visibility, operator mobility, and environmental spatial structure.

Although both desktop and PDA interfaces can provide both of the kinds of spatial knowledge mentioned above, the PDA may allow less access to survey knowledge (location awareness), due to the need to switch screens and to the amount of time required to retrieve, process, and draw the map. In the desktop interface, survey and route knowledge are always provided simultaneously on the same screen.

4.1 Experiment Design and Procedure

Our initial intuition was that the desktop-based interface would provide better SA to an operator using it for remote robot control. The importance of this question is obvious: the better an operator's SA, the better his performance will be. There are two major tasks that an operator must carry out when remotely guiding a robot: navigation and exploration. Navigation involves reaching a target point while avoiding obstacles. Exploration, on the other hand, requires choosing among multiple alternative ways to reach a particular goal, as is the case when searching for victims in a disaster area, assessing the extent of damages after an explosion, and so forth. We set up one controlled experiment for each of these two tasks. We hypothesized that in both cases the operator using the desktop-based interface would perform better than the operator using the PDA-based interface.

Fig. 2: PDA Interface v.1: (a) Laser View; (b) Map View; (c) Autonomy Levels.

Fig. 3: The P2AT robot in the outdoor area during one of the experimental runs

The operators should perform two tasks: exploration and navigation. For the exploration task, we set up the experiment using the Player/Stage [10] robotics simulator. The subjects were asked to explore an unknown virtual environment of 20 m x 20 m using a mobile robot equipped with a laser range scanner. Users were given twenty minutes each to explore the maximum area without colliding with any obstacles. Each candidate was randomly assigned one interface type and had a single trial with it. In order to give the subjects a plausible reason for performing the task, we asked them to look for "radioactive sources" distributed in the area. The "radioactive" spots were detected by a simulated sensor installed in the robot. For the navigation task, subjects were asked to navigate with the real Rotolotto Pioneer P2AT robot, equipped with a SICK Laser Range Finder (Fig. 3), along a path approximately 15 meters in length, made up of narrow spaces, clustered areas, and corridors. Users were not required to find a route, but simply to complete the designated one in the minimum time and without colliding. The subjects were not familiar with the scenario and were not able to see it at any point. This experiment aimed to reproduce the situation in which operators must remotely guide a robot to a target, for example, in order to bring a sensor to a certain pinpointed area.

The experiments involved twenty-four subjects, nineteen undergraduates and five PhD candidates, ranging in age between 20 and 30: four females and twenty males. The scenario of each of the two experiments was different from that of the other, and no participant had previous experience of either of the two interface prototypes. All the subjects went through the experiments in the same order, which ensured that none had more experience than the others. Every subject went through a twenty-minute training program to acquire a basic knowledge of the functionalities provided by the interfaces. After the training, they ran through the experiments in order. Each subject had a single trial.

Fig. 4: Area covered in square meters by the operator using the PDA (lower curve) and the operator using the desktop interface (upper curve)

4.2 Data Analysis

For the exploration task analysis, we sub-divided an exploration time of 10 minutes into twenty discrete values (from 0.5 to 10); then a 2x20 ANOVA on the explored area (in m2) was carried out, with the between-participants factor of Interface (Desktop and PDA) and the within-participants factor of Time (in minutes, from 0.5 to 10). The area covered was taken as a measure of exploration performance.
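As a minimal sketch, assuming the explored area is scored on a laser-built occupancy grid (the exact computation is not spelled out above, so both the representation and the resolution are assumptions), the measure can be obtained by counting observed cells:

```python
import numpy as np

def explored_area_m2(observed_mask, cell_size=0.1):
    """Explored area = number of observed grid cells x area per cell.

    `observed_mask` is a boolean occupancy-grid mask marking cells that
    any laser scan has touched; `cell_size` is the cell side in meters.
    Both the mask representation and the 0.1 m resolution are assumptions."""
    return np.count_nonzero(observed_mask) * cell_size ** 2
```

Sampling this value at fixed time steps during a run yields coverage curves of the kind compared in Figure 4.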

Results are shown in Figure 4. The analysis showed a significant interaction between Interface and Time [F(19, 361) = 13.65, p < .00001]. A planned comparison for each level of time was calculated. The calculation showed that at 1.5 minutes of exploration the difference between Desktop and PDA, in terms of explored area, crosses the significance threshold [p < .05]. After this point, the difference remains significant, and its significance increases at each higher level of time.

For the second task (navigation), a one-way ANOVA on navigation times (measured in seconds) was calculated to compare the interfaces, in order to see whether the differences between the PDA condition and the desktop condition led to significant differences when the operator had to navigate without being required to do any exploration (way-finding).

The collected data are shown in the following table.

                   Desktop Interface   PDA Interface
Average                341                 391.89
Std. Dev.              116.7               117.38
Max                    560                 543
Min                    144                 245
Conf. Int. (95%)        68.96               76.68

Table 1: Completion Time in Seconds
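Assuming the reported confidence intervals are the usual t-based half-widths (the computation is not stated explicitly above), each value in the last row corresponds to

\[ \mathrm{CI}_{95\%} = t_{0.975,\,n-1}\,\frac{s}{\sqrt{n}} \]

where s is the group's standard deviation and n the number of subjects in that group.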

The ANOVA results are:

              Sum of Sq.    df    Mean Sq.    F-value   p-value
Between Gr.    12819.471     1    12819.471    1.259     0.277
Within Gr.    183227.479    18    10179.304
Total         196046.950    19

Table 2: Navigation Time - ANOVA
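As a minimal sketch of this analysis (the times below are illustrative placeholders, not the raw experimental data, which is not published here), the one-way ANOVA can be reproduced with scipy:

```python
from scipy.stats import f_oneway

# Placeholder completion times in seconds, one list per interface group.
desktop_times = [144, 290, 315, 341, 360, 372, 401, 428, 450, 560]
pda_times = [245, 310, 348, 365, 392, 405, 433, 470, 508, 543]

# With two groups, the one-way ANOVA is equivalent to a two-sample
# t-test (F = t^2). Table 2 reports F(1, 18) = 1.259, p = 0.277.
f_value, p_value = f_oneway(desktop_times, pda_times)
print(f"F = {f_value:.3f}, p = {p_value:.3f}")
```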

The results can be seen graphically in Figure 5.

4.3 Discussion

Unexpectedly, the ANOVA on navigation times turned out non-significant (p > 0.05), revealing no difference in performance times between the two types of interfaces. We therefore conclude that operators perform practically identically when navigating remotely, independently of the interface type, while the interfaces lead to a highly significant difference in the context of exploration. We hypothesize that this difference between the navigation and exploration results might derive from differences in the information requirements that each type of task entails. Arguably, not all of the information given by the desktop (local, global, and three-dimensional perspectives) is necessary for navigation, whereas all of it is indispensable for exploration, inasmuch as exploration requires Location Awareness.

Fig. 5: Completion times - Confidence Intervals (95%)

5 Second Experiment

Once we had experimentally checked the differences between the two interfaces under the same conditions (no visibility), we designed an experiment modifying those conditions, in order to determine when an operator using the PDA interface can perform better than an operator using our desktop interface. We modified the visibility conditions for the operator equipped with the PDA, since the portability of the PDA affords him intra-scenario mobility, while the operator on the desktop remained remote. Ascertaining differences in performance according to task, context, and device is, we felt, an important step towards working out a control transfer policy for passing control of the robot to the best-performing operator.

5.1 Experiment Design and Procedure

The whole experimental context (including the initial experiments) was scheduled over a period of five days. The same twenty-four subjects from the previous experiment were divided into two groups, one using the PDA and the other the desktop GUI. Subjects used the same interface as in the initial experiments.

Subjects were asked to navigate in a real scenario with the P2AT robot. The environment consisted of an outdoor area in a courtyard, connected by means of a ramp to a corridor inside a building. The scenario simulated a disaster area and was made up of three different zones:

– A Maze, having one entrance and one exit;
– Narrow Spaces, which the robot must pass through without choice of direction;
– Cluttered Areas, containing several isolated, irregularly placed obstacles, such that the robot can navigate the area in more than one direction.

Fig. 6: Operator guiding the robot with the PDA interface. The operator is trying to see the robot through a window of the building

Subjects using the PDA were able to move in the outdoor scenario, but could not enter the building. Conversely, the indoor area was only partially visible through some windows located at one end of the corridor. This ARENA configuration resulted in a variety of situations: in some, both scenario and robot were completely visible; in others, they were only partly visible.

5.2 Preliminary Hypothesis

We expected a better general performance for PDA users under full visibility, since in such cases the operator is able to see the robot either represented on the PDA display or in the real environment. We speculated that full visibility might decrease disparities in information accessibility between the two interfaces, inasmuch as it could provide more salient route information access from direct experience of the environment. Given the initial results, it was difficult to construct hypotheses concerning the differences under partial visibility, since the degree of difference in such conditions depends on the task: a significant difference for exploration, no difference for navigation.

5.3 Data Analysis

The collected data are shown in the following tables. There are two scenarios. In the first one, which was outdoor, the PDA operator had full visibility of the robot and the scenario, while the Desktop operator did not have a view of them. In the second scenario, indoor, the PDA operator had partial visibility of the robot and the scenario, while the Desktop operator again did not have a view of them.

Fig. 7: Navigating times (seconds) per space type (Maze, Narrow Space, Clustered Area) for the Desktop and PDA operators. Mean times and confidence intervals (95%) are represented. (a) Operator using the PDA with full visibility of the scenario wrt. the operator using the desktop. (b) Operator using the PDA with partial visibility of the scenario wrt. the operator using the desktop.

Desktop Interface - No Visibility
                    Maze    Narrow Space   Clusters
Mean               182.73      504           237.82
Std. dev.          120.62      199            84.52
Max                480         804           375
Min                 88         204           118
Conf. int. (95%)    71.28      117.6          49.95

PDA Interface - Full Visibility
                    Maze    Narrow Space   Clusters
Mean               107.89      147.56        182.44
Std. dev.           20.14       40.84         37.92
Max                143         232           264
Min                 79         108           150
Conf. int. (95%)    13.16       26.68         24.77

Table 3: Completion Times - Outdoor Scenario

Desktop Interface - No Visibility
                    Maze    Narrow Space   Clusters
Mean               174        175.55         85.64
Std. dev.           28.8       74.71         34.14
Max                225        289           140
Min                142         93            33
Conf. int. (95%)    17.02      43.79         20.18

PDA Interface - Partial Visibility
                    Maze    Narrow Space   Clusters
Mean               242.67     166.33         94.67
Std. dev.           76.62      89.45         37.86
Max                415        317           172
Min                150         70            42
Conf. int. (95%)    50.06      58.44         24.73

Table 4: Completion Times - Indoor Scenario

Two separate ANOVAs were carried out to study the effect of PDA visibility on navigation times, one for each condition of PDA visibility: Total Visibility (TV) and Partial Visibility (PV) (the visibility variable did not vary for the Desktop interface). First of all, we analysed the three space conditions together, by adding the times of every space. The ANOVA analysis is:

              Sum of Sq.      df    Mean Sq.       F-value   p-value
Between Gr.   1352208.704      1    1352208.704    34.032     0.000
Within Gr.     715200.016     18      39733.334
Total         2067408.720     19

Table 5: Navigation Time - Outdoor - ANOVA

Afterwards, we repeated the analysis for each space condition. The ANOVA data can be seen in the following tables:

              Sum of Sq.    df    Mean Sq.     F-value   p-value
Between Gr.    27725.077     1    27725.077     3.355     0.084
Within Gr.    148736.801    18     8263.156
Total         176461.878    19

Table 6: Navigation Time - Maze - Outdoor - ANOVA

              Sum of Sq.     df    Mean Sq.      F-value   p-value
Between Gr.    628894.894     1    628894.894    27.654     0.000
Within Gr.     409353.245    18     22741.847
Total         1038248.139    19

Table 7: Navigation Time - Narrow Space - Outdoor - ANOVA

              Sum of Sq.    df    Mean Sq.    F-value   p-value
Between Gr.    15181.375     1    15181.375    3.295     0.086
Within Gr.     82939.715    18     4607.762
Total          98121.090    19

Table 8: Navigation Time - Clusters - Outdoor - ANOVA

The same procedure was followed for the indoor scenario. The results of the first ANOVA (all three spaces together) are:

              Sum of Sq.    df    Mean Sq.    F-value   p-value
Between Gr.    23219.856     1    23219.856    1.174     0.293
Within Gr.    356083.527    18    19782.418
Total         379303.384    19

Table 9: Navigation Time - Indoor - ANOVA

The analysis for each space condition yields the following:

              Sum of Sq.    df    Mean Sq.     F-value   p-value
Between Gr.    27725.077     1    27725.077     3.355     0.084
Within Gr.    148736.801    18     8263.156
Total         176461.878    19

Table 10: Navigation Time - Maze - Indoor - ANOVA

              Sum of Sq.    df    Mean Sq.    F-value   p-value
Between Gr.      420.792     1      420.792    0.064     0.804
Within Gr.    118918.520    18     6606.584
Total         119339.312    19

Table 11: Navigation Time - Narrow Space - Indoor - ANOVA

              Sum of Sq.    df    Mean Sq.    F-value   p-value
Between Gr.      403.627     1      403.627    0.314     0.582
Within Gr.     23122.433    18     1284.580
Total          23526.060    19

Table 12: Navigation Time - Clusters - Indoor - ANOVA

Results are shown in Figures 7(a) and 7(b). A simple glance at the figures shows that the operator using the PDA with full visibility drove the robot faster. Considering the three spaces together, this difference is significant (p < 0.05; Table 5). Considering each space condition separately, the differences are not significant for the maze and the clustered areas (Tables 6 and 8), as p > 0.05, but they show a tendency toward significance (p < 0.1). For the narrow spaces, the difference is significant (Table 7).

Data analysis for the condition of partial visibility is shown in Tables 9, 10, 11, and 12. These results can be seen in Figure 7(b). There is no statistical difference for the three spaces added together, nor for each space considered separately. In any case, for the maze space the operator using the desktop interface showed a better performance, with a statistical tendency toward a significant difference (p < 0.1).

5.4 Discussion

The data analysis of the experiment shows that operators using the PDA interface in conditions of total visibility performed better in terms of navigation times than the operators controlling remotely with the desktop GUI. That is, the information the operator receives through the PDA, completed by information he obtains directly from the operating scenario, provides him or her with better SA for guiding the robot. This suggests that a PDA permits successful task accomplishment when the robot is monitored with the help of both on-screen information and real environment cues. This capacity for information integration, together with the simplicity of the interface, makes it possible to compensate for the limitations of the device. When each space type was analysed separately, the difference was significant only for the narrow spaces, but there is a tendency toward a statistical difference also for the other two space types. This suggests that the location awareness the operator acquired directly allowed him to guide the robot through narrow passages.

In conditions of partial visibility, results indicate that an operator guiding the robot with our desktop interface got no better results than the operator using the PDA. In the maze-like space there is a tendency for the operator using the desktop to achieve faster navigation times than the operator using our PDA interface. We hypothesize that this effect is due to the respective amounts of information provided by the two interfaces: the desktop interface makes local and global (Survey Perspective - Location Awareness) and three-dimensional (Route - Surroundings Awareness) perspectives available simultaneously, whereas the PDA can display only one of these perspectives at any given moment, so that the operator must spend more time switching between tabs to change from one perspective to the other. This problem occurs mostly in mazes, presumably because in these kinds of environments a global configuration of the spatial structure (Survey Perspective) is needed in order to find a way out.

The results of the experiments clearly indicate a difference between interfaces depending on the type of task, as shown in the initial experiments. Table 13 illustrates the different cases that were considered and indicates which operator performs better depending on interface, task, scenario, and condition of visibility. This table could help pinpoint when one operator should transfer control to another operator depending on task and context. In analysing the data, we kept in mind that finding a way through the maze is an exploration task, driving through narrow spaces is a navigation task, and driving in a clustered area is a combination of both.

                      Exploration    Expl./Nav.     Navigation
Total Visibility      PDA            PDA            PDA
Partial Visibility    Desktop        Desktop/PDA    Desktop/PDA
No Visibility         Desktop                       Desktop/PDA

Table 13: Best performing operator depending on interface, task, and visibility
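As an illustrative encoding of Table 13 (ours; "both" marks cases where no significant difference was found, and untested cells are omitted), a control-transfer policy could start from a simple lookup:

```python
# Table 13 as a lookup seeding a transfer-of-control policy.
BEST_OPERATOR = {
    ("total",   "exploration"): "pda",
    ("total",   "expl_nav"):    "pda",
    ("total",   "navigation"):  "pda",
    ("partial", "exploration"): "desktop",
    ("partial", "expl_nav"):    "both",
    ("partial", "navigation"):  "both",
    ("none",    "exploration"): "desktop",
    ("none",    "navigation"):  "both",
}

def who_should_drive(visibility, task):
    """Suggest which operator should hold control; None for untested cases."""
    return BEST_OPERATOR.get((visibility, task))
```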

6 Conclusions

We have studied the influence of task and operator mobility on controlling a robot using a PDA interface wrt. controlling a robot using a desktop interface, with a view to providing a control transfer policy among stationary remote operators and roving operators. Even if the results analysed here apply only to our interfaces, we believe that they can be generalized according to device. Our thesis is therefore that similar results would be obtained even if the same experiments were run with differently designed interfaces.

As a main conclusion, we can state that the SA of the operator is reduced when he uses the PDA (Location Awareness suffers most), due to the fact that the small size and low computational capacity of a PDA device prevent the operator from accessing the same amount of information he has available when using a desktop. Nonetheless, the ability to move inside the operating scenario afforded by a hand-held device may counterbalance this disadvantage, as our results suggest. Once it is determined which operator has the better SA, according to task and context, this would presumably provide a basis for a control transfer policy governing when control is to be handed over to the most suitable operator.

The material presented here has a triple potential. First, it helps determine which operators have the best SA in different situations (according to mobility, device, and visibility). Second, it provides a first step towards identifying when a particular operator should transfer control to another operator. Third, it improves the achievable operator-to-robot ratio by offering a transfer-of-control policy for distributing the control of robots among the available operators.

In the future, we propose to work on finding ways to enhance Survey Knowledge (Location Awareness) with the PDA interface, in order to diminish the differences in performance that we have mentioned. For the time being, our results demonstrate that when the operator is not required to explore, but only to navigate, the two interfaces enable an equal level of performance in controlling the robot.

References

1. J. A. Adams. Critical considerations for human-robot interface development. Technical report, 2002 AAAI Fall Symposium: Human-Robot Interaction, 2002.

2. G. Cohen. Memory in the real world. Hove: Erlbaum, 1989.

3. F. Driewer, M. Sauer, and K. Schilling. Design and evaluation of an user interface for the coordination of a group of mobile robots. In 17th International Symposium on Robot and Human Interactive Communication (RO-MAN 2008), pages 237-242, August 2008.

4. J. L. Drury, D. Hestand, H. A. Yanco, and J. Scholtz. Design guidelines for improved human-robot interaction. In Extended Abstracts of the 2004 Conference on Human Factors in Computing Systems, page 1540, 2004.

5. J. L. Drury, B. Keyes, and H. A. Yanco. Lassoing HRI: analyzing situation awareness in map-centric and video-centric interfaces. In Proceedings of the Second ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pages 279-286, 2007.

6. J. L. Drury, H. A. Yanco, and J. C. Scholtz. Beyond usability evaluation: Analysis of human-robot interaction at a major robotics competition. Human-Computer Interaction Journal, January 2004.

7. T. Fong, C. Thorpe, and B. Glass. PdaDriver: A handheld system for remote driving. In IEEE International Conference on Advanced Robotics, 2003.

8. T. Fong, C. E. Thorpe, and C. Baur. Advanced interfaces for vehicle teleoperation: Collaborative control, sensor fusion displays, and remote driving tools. Autonomous Robots, 11(1):77-85, 2001.

9. T. Fong, C. E. Thorpe, and C. Baur. Robot, asker of questions. Robotics and Autonomous Systems, 42(3-4):235-243, 2003.

10. B. P. Gerkey, R. T. Vaughan, and A. Howard. The Player/Stage project: Tools for multi-robot and distributed sensor systems. In 11th International Conference on Advanced Robotics (ICAR 2003), Portugal, pages 317-323, June 2003.

11. A. Hedstrom, H. I. Christensen, and C. Lundberg. A wearable GUI for field robots. In Field and Service Robotics, pages 367-376, 2005.

12. T. Herrmann. Blickpunkte und Blickpunktsequenzen. Sprache & Kognition, 15:217-233, 1996.

13. C. M. Humphrey, C. Henk, G. Sewell, B. W. Williams, and J. A. Adams. Assessing the scalability of a multiple robot interface. In Proceedings of the Second ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI 2007, Arlington, Virginia, USA, March 10-12, 2007, pages 239-246, 2007.

14. H. Hüttenrauch and M. Norman. PocketCERO - mobile interfaces for service robots. In Proceedings of the International Workshop on Human Computer Interaction with Mobile Devices, 2001.

15. H. Kaymaz-Keskinpala and J. A. Adams. Objective data analysis for a PDA-based human robotic interface. In Proceedings of the IEEE International Conference on Systems, Man & Cybernetics, pages 2809-2814, 2004.

16. H. Kaymaz-Keskinpala, K. Kawamura, and J. A. Adams. PDA-based human-robotic interface. In Proceedings of the IEEE International Conference on Systems, Man & Cybernetics, volume 4, pages 3931-3936, 2003.

17. C. W. Nielsen and M. A. Goodrich. Comparing the usefulness of video and map information in navigation tasks. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI 2006, pages 95-101, 2006.

18. C. W. Nielsen, M. A. Goodrich, and R. W. Ricks. Ecological interfaces for improving mobile robot teleoperation. IEEE Transactions on Robotics, 23(5):927-941, 2007.

19. D. Perzanowski, A. C. Schultz, W. Adams, E. Marsh, and M. D. Bugajska. Building a multimodal human-robot interface. IEEE Intelligent Systems, 16(1):16-21, 2001.

20. J. Scholtz, J. Young, J. L. Drury, and H. A. Yanco. Evaluation of human-robot interaction awareness in search and rescue. In Robotics and Automation, 2004. Proceedings ICRA '04, volume 3, pages 2327-2332. IEEE, May 2004.

21. M. Skubic, C. Bailey, and G. Chronis. A Sketch Interface for Mobile Robots. In Proc. IEEE 2003 Conf. on SMC, pages 918-924, 2003.

22. A. Valero. An adaptative human-robot interaction system for mobile robots. http://www.dis.uniroma1.it/dottoratoii/db/relazioni/relaz_valero_1.pdf.

23. S. Werner, B. Krieg-Brückner, H. A. Mallot, K. Schweizer, and C. Freksa. Spatial cognition: The role of landmark, route, and survey knowledge in human and robot navigation. In GI Jahrestagung, pages 41-50, 1997.

24. H. A. Yanco and J. L. Drury. "Where am I?" Acquiring situation awareness using a remote robot platform. In Proceedings of the IEEE International Conference on Systems, Man & Cybernetics: The Hague, Netherlands, 10-13 October 2004, pages 2835-2840, 2004.

25. H. A. Yanco and J. L. Drury. Rescuing interfaces: A multi-year study of human-robot interaction at the AAAI Robot Rescue competition. Auton. Robots, 22(4):333-352, 2007.

26. H. A. Yanco, J. L. Drury, and J. Scholtz. Awareness in human-robot interactions. In Proceedings of the IEEE Conference on Systems, Man and Cybernetics, Washington, DC, October 2003, 2003.

27. H. A. Yanco, B. Keyes, J. L. Drury, C. W. Nielsen, D. A. Few, and D. J. Bruemmer. Evolving interface design for robot search tasks. Journal of Field Robotics, 24(8-9):779-799, 2007.