

Adaptive Teams of Autonomous Aerial and Ground Robots for Situational Awareness

M. Ani Hsieh, Anthony Cowley, James F. Keller, Luiz Chaimowicz, Ben Grocholsky, Vijay Kumar, and Camillo J. Taylor
GRASP Laboratory
University of Pennsylvania
Philadelphia, Pennsylvania 19104
e-mail: acowley, mya, jfkeller, kumar, [email protected]

Yoichiro Endo and Ronald C. Arkin
Georgia Tech Mobile Robot Lab
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332
e-mail: endo, [email protected]

Boyoon Jung, Denis F. Wolf, and Gaurav S. Sukhatme
Robotic Embedded Systems Laboratory
Center for Robotics and Embedded Systems
University of Southern California
Los Angeles, California 90089
e-mail: boyoon, denis, [email protected]

Douglas C. MacKenzie
Mobile Intelligence Corporation
Livonia, Michigan 48150
e-mail: [email protected]

Received 20 November 2006; accepted 20 August 2007

FIELD REPORT


Journal of Field Robotics 24(11), 991–1014 (2007) © 2007 Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/rob.20222

In this paper, we report on the integration challenges of the various component technologies developed toward the establishment of a framework for deploying an adaptive system of heterogeneous robots for urban surveillance. In our integrated experiment and demonstration, aerial robots generate maps that are used to design navigation controllers and plan missions for the team. A team of ground robots constructs a radio-signal strength map that is used as an aid for planning missions. Multiple robots establish a mobile ad hoc communication network that is aware of the radio-signal strength between nodes and can adapt to changing conditions to maintain connectivity. Finally, the team of aerial and ground robots is able to monitor a small village, and search for and localize human targets by the color of the uniform, while ensuring that the information from the team is available to a remotely located human operator. The key component technologies and contributions include: (a) mission specification and planning software; (b) exploration and mapping of radio-signal strengths in an urban environment; (c) programming abstractions and composition of controllers for multirobot deployment; (d) cooperative control strategies for search, identification, and localization of targets; and (e) three-dimensional mapping in an urban setting. © 2007 Wiley Periodicals, Inc.

1. INTRODUCTION

Urban and unstructured environments provide unique challenges for the deployment of multirobot teams. In these environments, buildings and large obstacles pose three-dimensional (3D) constraints on visibility, communication network performance is difficult to predict, and global positioning system (GPS) measurements can be unreliable or even unavailable. The deployment of a network of aerial and ground vehicles working in cooperation can often achieve better performance, since these 3D sensing networks may be better poised to obtain higher-quality and more complete information and to be robust to the challenges posed by these environments. Under these circumstances, it is necessary to keep the network tightly integrated at all times to enable the vehicles to better cooperate and collaborate and achieve greater synergy. Furthermore, one must provide enabling technologies to permit the deployment of these heterogeneous teams of autonomous mobile robots by a few human operators to execute the desired mission. This paper presents our attempts to realize our vision of an autonomous adaptive robot network capable of executing a wide range of tasks within an urban environment. The work, funded by the Defense Advanced Research Projects Agency's (DARPA) MARS2020 program, was a collaborative effort between the General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory at the University of Pennsylvania, the Georgia Tech Mobile Robot Laboratory, and the University of Southern California's (USC) Robotic Embedded Systems Laboratory.

Our vision for the project was the development of a framework that would enable a single human operator to deploy a heterogeneous team of autonomous air and ground robots to cooperatively execute tasks, such as surveillance, reconnaissance, and target search and localization, within an urban environment while providing high-level situational awareness for a remote human operator. Additionally, the framework would enable autonomous robots to synthesize the desirable features and capabilities of both deliberative and reactive control while incorporating a capability for learning. This would also include a software composition methodology that incorporates both precomposed coding and learning-derived or automated coding software to increase the ability of autonomous robots to function in unpredictable environments. Moreover, the framework would be context driven, and use multisensor processing to disambiguate sensor-derived environmental state information. A team of heterogeneous robots with these capabilities has the potential to empower the individual robotic platforms to efficiently and accurately characterize the environment, and hence potentially exceed the performance of human agents. In short, our goals for the project were to develop and demonstrate an architecture, the algorithms, and software tools that:

• Are independent of team composition;
• Are independent of team size, i.e., number of robots;
• Are able to execute a wide range of tasks;
• Allow a single operator to command and control the team; and
• Allow for interactive interrogation and/or reassignment of any robot by the operator at the task or team level.

We report on the first outdoor deployment of a team of heterogeneous aerial and ground vehicles, which brought together three institutions with over 15 different robotic assets to demonstrate communication-sensitive behaviors for situational awareness in an urban village at the McKenna Military Operations on Urban Terrain (MOUT) site in Fort Benning, Georgia. The integrated demonstration was the culmination of the MARS2020 project, bringing together the various component technologies developed as part of the project. The demonstration featured four distinct types of ground robots, each using different types of command and control software and operating systems at the platform level. These were coordinated at the team level by a common mission plan and operator control and display interconnected through an ad hoc wireless network. The result was an integrated team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), in which the team and the network had the ability to adapt to the needs and commands of a remotely located human operator to provide situational awareness.

This paper is organized as follows: We present some related work in networked robotic systems in Section 2. Section 3 provides a brief description of the experimental testbed used to evaluate the component technologies summarized in Section 4. Section 5 describes the integrated demonstration that brought together the numerous key technologies summarized in this paper, along with the integration challenges. Section 6 provides a discussion of the successes and lessons learned, with some concluding remarks.

2. RELATED WORK

There have been many successes in the manufacturing industry where existing sensors, actuators, material handling equipment, and robots have been reconfigured and networked with new robots and sensors via wireless networks to enhance productivity, quality, and safety. However, in most of these cases, the networked robots operate in a structured environment with very little variation in configuration and/or operating conditions, and tasks are often well defined and self-contained.

The growing interest in the convergence of the areas of multiagent robotics and sensor networks has led to the development of networks of sensors and robots that not only perceive their environment but also achieve tasks, such as locomotion (Majumder, Scheding & Durrant-Whyte, 2001), manipulation (Kang, Xi & Spark, 2000), surveillance (Hsieh, Cowley, Kumar & Taylor, 2006), and search and rescue, to name a few. Besides being able to perform tasks that individual robots cannot perform, networked robots also result in improved efficiency. Tasks such as searching or mapping (Thibodeau, Fagg & Levine, 2004) can be achieved by deploying multiple robots performing operations in parallel in a coordinated fashion. Furthermore, networked systems enable fault tolerance in design by having the ability to react to information sensed by other mobile agents or remote sensors. This results in the potential to provide great synergy by bringing together components with complementary benefits and making the whole greater than the sum of the parts.

Some applications for networked robots include environmental monitoring, where one can exploit the mobility and communication abilities of the robotic infrastructure for observation and data collection at unprecedented scales in various aspects of ecological monitoring. Some examples include Sukhatme et al. (2006) for aquatic monitoring, Kaiser et al. (2005) for terrestrial monitoring, and Amarss (2006) for subsoil monitoring. Other applications for networked robotic systems include surveillance of indoor environments (Rybski et al., 2000) and support for first responders in a search and rescue operation (Kotay, Peterson & Rus, 2005). In Corke, Peterson & Rus (2003), the communication capabilities of a network of stationary sensor nodes are exploited to aid in the localization and navigation of an autonomous aerial vehicle, while Durrant-Whyte, Stevens & Nettleton (2001) exploited the parallel processing power of sensor networks for data fusion. A theoretical framework for controlling team formation for optimal target tracking is provided in Spletzer & Taylor (2002), while Stroupe & Balch (2003) used a behavior-based approach to solve a similar problem. In Sukkarieh, Nettleton, Grocholsky & Durrant-Whyte (2003), cooperative target tracking is achieved by optimizing over all joint team actions.

While there are many successful embodiments of networked robots with numerous applications, there are significant challenges that have to be overcome. The problem of coordinating multiple autonomous units and making them cooperate creates problems at the intersection of communication, control, and perception. Cooperation entails more than one entity working toward a common goal, while coordination implies a coupling between entities that is designed to achieve the common goal. Some works that consider coordination and task allocation strategies in uncertain environments include Mataric, Sukhatme & Ostergaard (2003), Lerman, Jones, Galstyan & Mataric (2006), and McMillen & Veloso (2006). A behavior-based software architecture for heterogeneous multirobot cooperation is proposed in Parker (1998), while a methodology for automatic synthesis of coordination strategies for multirobot teams to execute given tasks is described in Tang & Parker (2005). A market-based task allocation algorithm for multirobot teams tasked to extinguish a series of fires arising from some disaster is considered in Jones, Dias & Stentz (2006b). Dynamic coalition formation for a team of heterogeneous robots executing tightly coupled tasks is considered in Jones et al. (2006a).

Our goal is to develop networks of sensors and robots that can perceive their environment and respond to it, anticipating the information needs of the network users, repositioning and self-organizing to best acquire and deliver that information, thus achieving seamless situational awareness within various types of environments. Furthermore, we are also interested in providing proper interfaces to enable a single human user to deploy networks of unmanned aerial, ground, surface, and underwater vehicles. There have been several recent demonstrations of multirobot systems exploring urban environments (Chaimowicz et al., 2005; Grocholsky, Swaminathan, Keller, Kumar & Pappas, 2005) and interiors of buildings (Howard, Parker & Sukhatme, 2006; Fox et al., 2006) to detect and track intruders, and transmit all of the above information to a remote operator. Although these examples show that it is possible to deploy networked robots using an off-the-shelf 802.11b wireless network and have the team be remotely tasked and monitored by a single operator, they do not quite match the level of team heterogeneity and complexity described in this paper.

3. EXPERIMENTAL TESTBED

Our multirobot team consists of two unmanned aerial vehicles (UAVs) and eight unmanned ground vehicles (UGVs). In this section, we provide a short description of the various components of the experimental testbed used to evaluate the key technologies employed in the integrated experiment.

3.1. UAVs

The two UAVs are quarter-scale Piper Cub J3 model airplanes with a wing span of 104 in. (~2.7 m) (see Figure 1(a)). The glow fuel engine has a power rating of 3.5 HP, resulting in a maximum cruise speed of 60 knots (~30 m/s) at altitudes up to 5,000 feet (~1,500 m), and a flight duration of 15–20 min. Each UAV is equipped with a sensor pod containing a high-resolution firewire camera, inertial sensors, and a 10 Hz GPS receiver (see Figure 1(b)), and is controlled by a highly integrated, user-customizable Piccolo avionics board manufactured by CloudCap Technologies (Vaglienti & Hoag, 2003). The autopilot provides inner-loop attitude and velocity stabilization control, allowing research to focus on guidance at the mission level.

Additionally, each UAV continuously communicates with the ground station at 1 Hz, and the communication range can reach up to 6 miles. Direct communication between UAVs can be emulated through the ground station or by using the local communication channel on the UAVs (an 802.11b wireless network card). The ground station has an operator interface program (shown in Figure 2), which allows the operator to monitor flight progress, obtain telemetry data, or dynamically change the flight plans using georeferenced maps. The ground station can concurrently monitor up to ten UAVs, and performs differential GPS corrections and updates to the flight plan, which is a sequence of 3D waypoints connected by straight lines.

Figure 1. (a) Two Piper J3 Cub model airplanes. (b) UAV external payload pod.

3.2. UGVs

Our team of UGVs consists of three ClodBusters, two Pioneer2 ATs, one Segway RMP, two ATRV-Jrs, and an AM General Hummer vehicle modified and augmented with multiple command and control computers and deployed as a base station. The ClodBuster UGVs are commercial four-wheel drive model trucks modified and augmented with a Pentium III laptop computer; a specially designed universal serial bus device which controls the drive motors, odometry, steering servos, and a camera pan mount with input from the personal computer; a GPS receiver; an inertial measurement unit (IMU); and a firewire stereo camera. The Pioneer2 AT is a typical four-wheeled, statically stable robot designed for outdoor use. This skid-steer platform can rotate in place and achieve a maximum speed of 0.7 m/s. The Segway RMP is a two-wheeled, dynamically stable robot with self-balancing capability. Both the Pioneer2 AT and the Segway are equipped with a GPS receiver, an IMU, built-in odometry, a horizontal scanning laser sensor, and a pan/tilt/zoom-capable camera. The Segway is also equipped with an additional vertical scanning laser to enable 3D mapping.

The ATRV-Jr is a four-wheeled robot that can navigate outdoor terrains, reaching approximately 2 m/s at full speed. It is equipped with onboard dual-processor Pentium III computers, a differential GPS, a compass, an IMU, and shaft encoders. In addition, two sets of laser range finders are mounted on top of the robot in order to provide full 360° coverage for obstacle detection. The Hummer vehicle is outfitted with seating for three human operators and with command and control computers used to deploy the UGVs, launch missions, and monitor the progress of ongoing missions. Figure 3 shows our team of UGVs.

3.3. Software

Three software platforms were used to task and control our team of UAVs and UGVs: MISSIONLAB, ROCI, and PLAYER/STAGE.

MISSIONLAB (MissionLab, 2006) is a suite of software tools for developing and testing behaviors for a single robot or a team of robots. The user interacts through a design interface tool that permits the visualization of a specification as it is created. Individual icons correspond to behavioral task specifications, which can be created as needed or, preferably, reused from an existing repertoire available in the behavioral library. Multiple levels of abstraction are available, which can be targeted to the abilities of the designer, ranging from whole robot teams down to the configuration description language for a particular behavior within a single robot, with the higher levels being those easiest to use by the average user. After the behavioral configuration is specified, the architecture and robot types are selected and compilation occurs, generating the robot executables. These can be run within the simulation environment provided by MISSIONLAB itself for verification of user intent (e.g., Endo, MacKenzie & Arkin, 2004; MacKenzie & Arkin, 1998) or, through a software switch, downloaded to the actual robots for execution.

Figure 2. Ground station operator interface showing flight plan and actual UAV position (August 2003, Fort Benning, Georgia).

Figure 3. Our team of UGVs.

ROCI (Chaimowicz, Cowley, Sabella & Taylor, 2003; Cowley, Hsu & Taylor, 2004a) is a software platform for programming, tasking, and monitoring distributed teams of robots. ROCI applications are composed from self-describing components that support message-passing parallelism, enabling the creation of robust distributed software. ROCI is especially suited for programming and monitoring distributed ensembles of robots and sensors, since modules can be transparently launched and connected across a network using mechanisms that provide automated data formatting, verification, logging, discovery, and optimized transfer.

PLAYER is a device server that provides a flexible interface to a variety of sensors and actuators (e.g., robots). PLAYER is language and platform independent, allowing robot control programs to execute on any computer with network connectivity to the robot. In addition, PLAYER supports multiple concurrent client connections to devices, creating new possibilities for distributed and collaborative sensing and control. STAGE is a scalable multiple-robot simulator; it simulates a population of mobile robots moving in and sensing a two-dimensional environment, controlled through PLAYER.

3.4. Communication

Every agent on the network is equipped with a small embedded computer with 802.11b wireless Ethernet called the junction box (JBox). Communication throughout the team and across the different software platforms was achieved via the wireless network. The JBox, developed jointly by the Space and Naval Warfare Systems Center, BBN Technologies, and the GRASP Lab, handles multihop routing in an ad hoc wireless network and provides full link state information, enabling network connectivity awareness for every agent on the network.

4. COMPONENT TECHNOLOGIES

We present the component technologies developed toward the goal of providing an integrated framework for the command and control of an adaptive system of heterogeneous robots. These technologies were developed as a set of tools that allow a human user to deploy a robot network to search for and locate information in the physical world, analogous to the use of computer networks via a search engine to look for and locate archived multimedia files. Of course, the analogy only goes so far. Unlike the World Wide Web, looking for a human target does not reduce to searching multimedia files that might contain semantic information about human targets. Robots must search the urban environment while keeping connectivity with a base station. They must be able to detect and identify the human target. And, they must be able to alert the human operator by presenting information ordered in terms of salience, through a wireless network, allowing the human operator to request detailed information as necessary. Ideally, while all this is happening, the processes of reconfiguring (routing information through a multihop network) and moving to maintain connectivity must be evident to the human user. In this section, we provide a brief summary of the enabling technologies developed to bring us closer to this vision. We refer the interested reader to the relevant literature for more detailed discussions.

4.1. Mission Specification and Execution

A pressing problem for robotics in general is how to provide an easy-to-use method for programming teams of robots, making these systems more accessible to the average user. The MISSIONLAB mission specification system (MissionLab, 2006) has been developed to address this issue.

An agent-oriented philosophy (MacKenzie, Arkin & Cameron, 1997) is used as the underlying methodology, permitting the recursive formulation of entire societies of robots. A society is viewed as an agent consisting of a collection of either homogeneous or heterogeneous robots. Each individual robotic agent consists of assemblages of behaviors, coordinated in various ways. Temporal sequencing affords transitions between various behavioral states that are naturally represented as a finite state acceptor. Coordination of parallel behaviors can be accomplished via fusion, action selection, priority, or other means as necessary. These individual behavioral assemblages consist of groups of primitive perceptual and motor behaviors, which ultimately are grounded in the physical sensors and actuators of a robot. An important feature of MISSIONLAB is the ability to delay binding to a particular behavioral architecture (e.g., schema-based (Arkin, 1998)) until after the desired mission behavior has been specified. Binding to a particular physical robot also occurs after specification, permitting the design to be both architecture and robot independent. This characteristic allowed the incorporation of the ROCI and PLAYER/STAGE systems.

To achieve the level of coordination required in an integrated mission involving a team of heterogeneous robots controlled by three different mobile software platforms (MISSIONLAB, ROCI, and PLAYER/STAGE), the Command Description Language interpreter (CMDLi) was developed. The CMDLi is a common software library that is compiled into each of the target software platforms. It parses and executes a common text file (a CMDL script) that contains the integrated mission plan developed in MISSIONLAB by the operator (see Figure 4(a)). Hence, the script has to be distributed among all the participating robots prior to execution. The CMDL script is organized into two parts: (1) background information and (2) a list of behavioral tasks to be executed sequentially. For example, the CMDL script used during the integrated experiment is shown in Figure 4(b). The background information includes the names of the robot executables and information regarding memberships of predefined groups. At runtime, the CMDLi interpreter resident on each platform sequentially executes the list of specified behaviors and sends corresponding commands to the underlying controller program (i.e., PLAYER in PLAYER/STAGE, etc.).

Tasks/behaviors supported by CMDLi include MoveTo, Loiter, TrackTarget, and Synchronize. In MoveTo, the robot drives and steers itself toward the target position, whereas in Loiter, the robot stops and stands by at the target position. When executing the TrackTarget behavior, the robot identifies and follows a target object. In Synchronize, the robot waits for other specified robots to reach the same synchronization state. To realize this synchronization, each robot broadcasts its behavioral status to the others via the JBox. When synchronization is attained, the robot resumes execution of the remaining mission.

The available tasks for CMDLi can be easily expanded to a more general list. Each of the three software platforms already supports various behavior repertoires. (For example, a robot controlled by MISSIONLAB can execute more than 80 types of behaviors (MissionLab, 2006).) To add a new task to the CMDLi framework, a new binding between the new task name and the platform's behavior simply needs to be defined. It is important to note that increasing the size of the task list does not significantly affect computational complexity or performance, as the sequencing of tasks is defined in advance by a human operator rather than by an automatic planning algorithm. Of course, if a new task involves a computationally expensive algorithm (e.g., solving a traveling salesman problem), the performance will be affected solely by the execution of the task itself (i.e., the size of the list does not matter).
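
To make the division between the shared script and the per-platform bindings concrete, the following is a minimal sketch. The script grammar, robot names, and task arguments below are simplified stand-ins, not the real CMDL syntax (which is shown in Figure 4(b)); the dispatch table illustrates how a binding layer can map shared task names onto local controller calls.

```python
# Minimal sketch of a CMDLi-style interpreter: parse a mission script into
# background info plus a sequential task list, then dispatch each task to a
# platform-specific binding. Syntax and names are illustrative, not real CMDL.

SCRIPT = """
group north_cache = gtechRobot1 upenn_1
task MoveTo 32.0 -84.9
task Synchronize north_cache
task Loiter 60
"""

def parse(script):
    groups, tasks = {}, []
    for line in script.strip().splitlines():
        words = line.split()
        if words[0] == "group":           # background info: group memberships
            groups[words[1]] = words[3:]  # skip the '=' token
        elif words[0] == "task":          # behavioral task, executed in order
            tasks.append((words[1], words[2:]))
    return groups, tasks

# Platform binding: adding a new task only requires adding one entry here.
BINDINGS = {
    "MoveTo":      lambda args: print(f"controller: drive to {args}"),
    "Loiter":      lambda args: print(f"controller: hold position for {args[0]} s"),
    "Synchronize": lambda args: print(f"network: wait for group {args[0]}"),
}

groups, tasks = parse(SCRIPT)
for name, args in tasks:                  # execute the list sequentially
    BINDINGS[name](args)
```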

The status of each robot can also be monitored from the MISSIONLAB console, along with the overall progress of the mission. More specifically, the display consists of a mission area map showing the real-time GPS coordinates of the robots, as well as a CMDLi interface that dynamically displays the progress of an integrated mission. A screen capture of the MISSIONLAB console showing progress during the integrated experiment is depicted in Figure 5. In this particular example, at the North cache, ClodBuster 1 (controlled by ROCI and denoted by upenn_1) waits for ATRV-Jr 1 (controlled by MISSIONLAB and denoted as gtechRobot1) to complete the MoveTo (GIT-A1) behavior, so that synchronization can be achieved. At the South cache, two ClodBusters (denoted by upenn_2 and upenn_3), a Pioneer2 AT, and a Segway (the latter two controlled by PLAYER and denoted by usc_pioneer1 and usc_segway, respectively) all wait for the second ATRV-Jr (denoted by gtechRobot2) to arrive at their cache. Lastly, at any given point, the operator is given the option to interrupt or even abort the current mission via the CMDLi interface at the MISSIONLAB console.

4.2. Communication Network and Control for Communication

Successful deployment of multirobot tasks for surveillance and search and rescue relies in large part on a reliable communication network. In general, radio propagation characteristics are difficult to predict a priori since they depend upon a variety of factors (Neskovic, Neskovic & Paunovic, 2000), which makes it difficult to design multiagent systems such that the individual agents operate within a reliable communication range at all times. In this section, we consider the problem of acquiring information for radio connectivity maps in urban terrains that can be used to plan multirobot tasks and also serve as useful perceptual information.

Figure 4. (a) Coordination of heterogeneous mobile robot software platforms through the CMDLi. (b) CMDL script used during the MARS 2020 integrated experiment.

Figure 5. Screen capture of the MISSIONLAB console showing the progress of the integrated mission. The locations of the robots with respect to the MOUT site map are displayed on the left-hand side. The progress of the mission with respect to the CMDLi script is shown on the right-hand side.

A radio connectivity map is a function that returns the signal strength between any two positions in the environment. In general, it is extremely difficult to obtain a connectivity map for all pairs of positions in the desired workspace; thus, one aims to construct a map for pairs of locations selected a priori. For small teams of robots, the construction of the radio connectivity map can be formulated as a graph exploration problem. Starting with an overhead surveillance picture, it is possible to automatically generate roadmaps for motion planning and encode these roadmaps as roadmap graphs.¹ From these roadmap graphs, a radiomap graph is obtained by determining the set of desired signal strength measurements (between pairs of positions) one would like to obtain. The discretization of the workspace allows us to strategically place each robot in a k-robot team at k separate locations on the roadmap graph to obtain the desired measurements encoded in the radiomap graph. A sample roadmap graph and its corresponding radiomap graph are shown in Figure 6. The solid edges in Figure 6(a) denote feasible paths between pairs of positions denoted by the circles. The dashed edges in Figure 6(b) denote signal strength measurements between pairs of positions that must be obtained. Figure 6(c) shows three possible placements of a team of three robots such that the team can obtain at least one of the measurements given by the radiomap graph. An exploration strategy then consists of a set of waypoints that each robot must traverse to obtain all the desired signal strength measurements encoded in the radiomap graph.

¹In the event that an overhead surveillance picture is not available, one can generate roadmaps for motion planning with a map of the region of interest.
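
The underlying planning idea can be sketched as follows. This is a deliberately naive greedy placement rule over a hypothetical four-node radiomap graph; the actual exploration strategies were computed with the optimization algorithm of Hsieh, Kumar & Taylor (2004), and robot motion between rounds and the synchronization protocol are not modeled here.

```python
# Sketch of radiomap-graph coverage with a naive greedy placement rule (a
# stand-in for the optimal algorithm of Hsieh, Kumar & Taylor, 2004).
# Nodes are roadmap locations; radiomap edges are pairs needing measurement.

radiomap_edges = {("A", "B"), ("B", "C"), ("A", "D"), ("C", "D")}
k = 3  # robots available

def plan_rounds(edges, k):
    remaining, rounds = set(edges), []
    while remaining:
        placed = set()
        # Greedily occupy endpoints of unmeasured edges until robots run out.
        for u, v in sorted(remaining):
            if len(placed | {u, v}) <= k:
                placed |= {u, v}
        # Every unmeasured edge with both endpoints occupied gets measured.
        covered = {(u, v) for (u, v) in remaining if u in placed and v in placed}
        rounds.append((placed, covered))
        remaining -= covered
    return rounds

for i, (placed, covered) in enumerate(plan_rounds(radiomap_edges, k)):
    print(f"round {i}: place robots at {sorted(placed)}, measure {sorted(covered)}")
```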

Experiments were performed using three of our ground vehicles to obtain radio connectivity data at the Fort Benning MOUT site. In these experiments, an optimal exploration strategy was determined using the algorithm described by Hsieh, Kumar & Taylor (2004), and each robot was individually tasked with its corresponding list of waypoints. Team members navigate to their designated waypoints and synchronize; then every member of the team measures its signal strength to the rest of the team. Once the robots have completed the radio-signal strength measurements, they synchronize again before moving on to their next targeted location. This is repeated until every member has traversed all the waypoints on its list. The waypoints are selected in the planning phase to minimize the probability of losing connectivity, based on line-of-sight propagation characteristics that can be determined a priori, so as to ensure the success of the synchronization. Figure 7 shows the radio connectivity map that was obtained for the MOUT site. The weights on the edges denote the average signal strength measured between the two locations. In these experiments, the signal strength was measured using the JBox described in Section 3.

Figure 6. (a) A roadmap graph. The solid edges denote feasible paths between neighboring cells associated with each node. (b) A radiomap graph for (a). The dashed edges denote links for which signal strength information must be obtained. (c) Three sample configurations of three robots on the roadmap graph that can measure at least one of the edges in the radiomap graph. The solid vertices denote the locations of the robots.

Figure 7. (a) Overhead image of the MOUT site. (b) Experimental radio connectivity map for the MOUT site obtained using our multirobot testbed.


Radio connectivity maps can therefore be used to plan multirobot tasks so as to increase the probability of a reliable communication network during the execution phase. Ideally, the measurements obtained during the exploration phase can be used to construct a limited model for radio propagation in the given environment such that, when coupled with additional reactive behaviors (Hsieh et al., 2006), a reliable communication network can be maintained during deployment. This two-pronged approach ensures that communication constraints are always satisfied, and it allows the operator to redeploy the team and/or deploy additional assets in the presence of dynamic changes in the environment.

4.3. Programming Abstractions and Composition for Multirobot Deployment

The software development process in robotics has been changing in recent years. Instead of developing monolithic programs for specific robots, engineers are using smaller software components to construct new complex applications. Component-based development offers several advantages, such as reuse of code and increased robustness, modularity, and maintainability. To this end, we have been developing ROCI, a software platform for programming, tasking, and monitoring distributed teams of robots (Cowley et al., 2004a). In ROCI, applications are built in a bottom-up fashion from basic components called ROCI modules. A module encapsulates a process which acts on data available on its inputs and presents its results as outputs. Modules are self-contained and reusable; thus, complex tasks can be built by connecting the inputs and outputs of specific modules. We say that these modules create the language of the ROCI network, allowing task designers to abstract away low-level details in order to focus on high-level application semantics (Cowley, Hsu & Taylor, 2004b).

One key characteristic of a component-based system is the development of robust interfaces to connect individual modules. In component-based development, external interfaces should be clearly defined to allow an incremental and error-resistant construction of complex applications from simpler self-contained parts. By making interfaces explicit and relying on strongly typed, self-describing data structures, ROCI allows the development of robust applications. Moreover, ROCI's modularity supports the creation of parallel data flows, which favors the development of efficient distributed applications.
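
As an illustration of typed, self-describing ports, consider the following sketch. ROCI itself is not written in Python, and the class, pin, and module names here are hypothetical; the point is only that explicit typed interfaces let connection mismatches be caught when an application is wired together rather than at runtime.

```python
# Sketch of typed module composition in the spirit of ROCI: each module
# declares typed input/output pins, and connections are type-checked when the
# application is wired together. (Illustrative only; not ROCI's actual API.)

class Pin:
    def __init__(self, name, dtype):
        self.name, self.dtype, self.sinks = name, dtype, []

class Module:
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = {n: Pin(n, t) for n, t in inputs.items()}
        self.outputs = {n: Pin(n, t) for n, t in outputs.items()}

def connect(src_pin, dst_pin):
    # Self-describing interfaces make mismatches detectable at wiring time.
    if src_pin.dtype is not dst_pin.dtype:
        raise TypeError(f"{src_pin.name}:{src_pin.dtype.__name__} -> "
                        f"{dst_pin.name}:{dst_pin.dtype.__name__}")
    src_pin.sinks.append(dst_pin)

camera  = Module("camera",  {}, {"image": bytes})
tracker = Module("tracker", {"image": bytes}, {"target_px": tuple})
logger  = Module("logger",  {"sample": tuple}, {})

connect(camera.outputs["image"], tracker.inputs["image"])       # ok
connect(tracker.outputs["target_px"], logger.inputs["sample"])  # ok
# connect(camera.outputs["image"], logger.inputs["sample"])     # would raise
```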

The composition of complex behaviors in a component-based system may be achieved through a declarative application specification that defines the application components, the parameters of those components, and the connections between components, as opposed to the more traditional imperative programming style in which the components themselves may be developed. This delineates a separation between the specification of what an application does and how it does it. This division is enabled by the syntactic and semantic interface specifications associated with individual components, which may be generated automatically using type introspection or manually by the developers. The system is designed throughout to require minimal extra effort from the developer to support this distributed, compositional execution model.

The emphasis on interfaces further steers component development toward a natural implementation of message-passing parallelism, once again with minimal impact on the component developer. Indeed, the many pitfalls common to parallel processing should not be of primary concern to the developers of the many types of modules whose behavior ought to be conceptually atomic. Instead, the application architect, working with the vocabulary defined by the component developers, may construct parallel data flows implicitly through the creation of a module network, the nature of which is of no intrinsic interest to the component developer.

4.4. Distributed Databases for Situational Awareness

A data logging system has been built on top of the foundation described in the previous section, as realized by the ROCI software platform. Because component interfaces are defined in terms of the data types they transact, operations on component outputs may be automatically dispatched to an appropriate handler via traditional single dispatch. In this case, we developed a generic logging system that maintains a store of data indexed by time. While the types of data relevant to a mobile robot deployment are varied, time is a universally meaningful index due to the sequential manner in which data are collected. This basic indexing can be augmented by additional mechanisms that handle more specific data types; for example, indexing position measurements by location. These loggers operate independently of the components that generate the data, thus freeing the component developer from concerns regarding serialization, indexing, or query resolution. This functional separation is a hallmark of componentized development and is responsible for the extensibility of the system as a whole.
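
A minimal sketch of such a time-indexed store follows; the class and method names are hypothetical, not ROCI's API, and only nearest-time lookup is shown.

```python
# Sketch of a generic time-indexed data log: append timestamped samples and
# answer nearest-time queries via binary search. (Names are hypothetical.)
import bisect

class TimeLog:
    def __init__(self):
        self.times, self.values = [], []

    def append(self, t, value):          # data arrive in time order
        self.times.append(t)
        self.values.append(value)

    def nearest(self, t):                # time is the universal index
        i = bisect.bisect_left(self.times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.times)]
        j = min(candidates, key=lambda j: abs(self.times[j] - t))
        return self.times[j], self.values[j]

poses = TimeLog()
poses.append(10.0, (3.1, 7.2))
poses.append(10.5, (3.3, 7.4))
print(poses.nearest(10.4))   # -> (10.5, (3.3, 7.4))
```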

With these flexible data logging components in hand, an application over the robot network may be decorated with logs on any intercomponent connection. These logs are then discoverable not just as generic data logs, but as data logs specific to the type of data to which they are attached. This is made possible by the self-describing nature of intercomponent connections based on the underlying type system. Having such logs attached to arbitrary data sources frees the development team from having to foresee every useful combination of sensor data. Instead, aggregate data types are created on demand by cross-indexing separate data stores, perhaps across multiple machines. In this way, smart compound data types are created from data streams that are annotated only with the metadata necessary to describe their own type; there is no unnecessary coupling imposed on the software architecture at any level.

The logging system itself was inspired by the observation that the sensor and processor bandwidth onboard many mobile robots far outstrips the available network bandwidth. Due to this imbalance, it is often beneficial to optimize query resolution over the distribution of data sources by distributing query logic to the data before performing a join over the results of that initial filtering step. In the ROCI system, it is easy to programmatically launch a component, or a collection of components, on another machine and attach its inputs and outputs to dynamically discovered data sources. The code of the component will be automatically downloaded by the node hosting the relevant data via a peer-to-peer search and download mechanism that is transparent both to the node launching the component and to the node that is to execute it. This allows for the creation of active queries that ship their logic to the data and return only the resulting data sets to the originator of the query. In most usages, the result data set is significantly smaller than the data set taken as a whole.

An example of this functionality is determining from where a particular target was sighted. The query is a join of a position table with an image table over the shared time index, restricted to images that contain a particular target. In our experimental setup, accurate position information was often logged by a camera system mounted on rooftops overlooking the arena of operations, while the mobile robot logged many hundreds of megabytes of image data. The query, in this case, was executed by shipping a target identification component, parameterized to look for a specified target, to the node that maintained the image log. The time indexes of images containing the target were used to index into the position log maintained by the node tracking the mobile units. Finally, the positions from which mobile units identified the target were sent to the query originator. Note that transferring the image data set over the network would be impractical; even transferring the position data set, which was generated by high-frequency sampling, would have been prohibitively expensive. Instead, resources were used commensurate with their availability.
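
Under the same assumptions as the sketch above, the shipped query reduces to running the (expensive) target filter where the image log lives, returning only matching timestamps, and joining those against the position log. Reusing the hypothetical TimeLog class from the previous sketch:

```python
# Sketch of the "ship logic to the data" query: the target filter executes at
# the node holding the image log, so only matching timestamps cross the
# network and are joined against the position log.

def remote_image_query(image_log, contains_target):
    # Runs on the image-hosting node; returns timestamps, not image data.
    return [t for t, img in zip(image_log.times, image_log.values)
            if contains_target(img)]

def sightings(position_log, image_log, contains_target):
    hits = remote_image_query(image_log, contains_target)
    return [position_log.nearest(t)[1] for t in hits]   # join on time

images = TimeLog()
images.append(10.1, "frame_with_orange_blob")
images.append(10.6, "empty_frame")
print(sightings(poses, images, lambda img: "orange" in img))  # -> [(3.1, 7.2)]
```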

4.5. Cooperative Search, Identification, and Localization

In this section, we describe the framework used to exploit the synergy between UAVs and UGVs to enable cooperative search, identification, and localization of targets. In general, UAVs are adept at covering large areas while searching for targets. However, sensors on UAVs are typically limited in their accuracy of localization of targets on the ground. On the other hand, UGVs are suitable for accurately locating ground targets, but they do not have the ability to move rapidly and see through such obstacles as buildings or fences. Using the Active Sensor Network architecture proposed in Makarenko, Brooks, Williams, Durrant-Whyte & Grocholsky (2004), we build upon the key idea that the value of a sensing action is marked by its associated reduction in uncertainty, and that mutual information (Cover & Thomas, 1991) formally captures the utility of sensing actions in these terms. This allows us to incorporate the dependence of the utility on the robot and sensor state and actions, and to formulate the tasks of coverage, search, and localization as optimal control problems. Our algorithms for search and localization are easily scalable to large numbers of UAVs and UGVs and are transparent to the specificity of the individual platforms.

In this framework, the detection and estimation problems are formulated in terms of summation and propagation of formal information measures. We use certainty grids (Makarenko, Williams & Durrant-Whyte, 2003) as the representation for the search and coverage problems. The certainty grid is a discrete-state binary random field in which each element encodes the probability of the corresponding grid cell being in a particular state. For the feature detection problem, the state x of the ith cell C_i can have one of two values: target and no target. This coverage algorithm allows us to identify cells that have an acceptably high probability of containing features or targets of interest.
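
A minimal sketch of one cell's update, assuming a detector with known detection and false-alarm probabilities (the numerical values here are hypothetical), follows; each cell stores P(target) and is updated by Bayes' rule as observations arrive:

```python
# Sketch of a binary certainty-grid update: each cell stores P(target) and is
# updated by Bayes' rule from a detector with assumed (hypothetical) detection
# probability P(hit | target) and false-alarm probability P(hit | no target).

P_HIT_GIVEN_TARGET = 0.8
P_HIT_GIVEN_EMPTY = 0.1

def update_cell(p_target, hit):
    if hit:
        num = P_HIT_GIVEN_TARGET * p_target
        den = num + P_HIT_GIVEN_EMPTY * (1.0 - p_target)
    else:
        num = (1.0 - P_HIT_GIVEN_TARGET) * p_target
        den = num + (1.0 - P_HIT_GIVEN_EMPTY) * (1.0 - p_target)
    return num / den

p = 0.5                            # uninformative prior for one grid cell
for observation in (True, True, False):
    p = update_cell(p, observation)
    print(f"P(target) = {p:.3f}")  # rises with hits, falls with misses
```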

The localization of features or targets is first posed as a linearized Gaussian estimation problem, where the information form of the Kalman filter is used (Grocholsky, Makarenko, Kaupp & Durrant-Whyte, 2003). In this manner, one can show the influence of sensing processes on estimate uncertainty (Grocholsky et al., 2005), where the control objective is to reduce estimate uncertainty. Because this uncertainty directly depends on the system state and action, each vehicle chooses an action that results in a maximum increase in utility, or the best reduction in uncertainty. New actions lead to an accumulation of information and a change in the overall utility. Thus, local controllers are implemented on each robotic sensor platform that direct the vehicle and sensors according to the mutual information gradient with respect to the system state. This gradient controller allows individual vehicles to drive in directions that locally maximize their information gain. The additive structure of the update equations for the information filter lends itself to decentralization. Thus, measurements from different robots (UAVs and UGVs) are propagated through the network and updated through propagation of internodal information differences, and decisions based on this updated information are made independently by each robot (Grocholsky, Keller, Kumar & Pappas, 2006). A communications manager known as a channel filter implements this process at each internodal connection (Grocholsky, 2002).
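
The additive fusion that makes this decentralization natural can be illustrated with a one-dimensional linear-Gaussian toy example (the numbers are hypothetical, and the channel-filter bookkeeping that prevents double counting of common information is omitted):

```python
# Sketch of the additive information-filter update: each robot contributes
# information terms (H^T R^-1 H, H^T R^-1 z), and fusion is simple summation.
# 1D linear-Gaussian toy with hypothetical numbers; channel-filter bookkeeping
# that prevents double counting of common information is omitted.
import numpy as np

Y = np.array([[0.1]])   # prior information matrix (inverse covariance)
y = np.array([0.0])     # prior information vector

observations = [        # (measurement z, noise variance R) from each robot
    (4.8, 1.0),         # UAV: coarse fix on the target
    (5.1, 0.25),        # UGV: more accurate ground observation
]

for z, R in observations:
    H = np.array([[1.0]])            # direct observation of target position
    Y = Y + H.T @ H / R              # information contributions simply add
    y = y + H.T @ np.array([z]) / R

x_hat = np.linalg.solve(Y, y)        # recover the estimate: x = Y^-1 y
print(f"fused estimate: {x_hat[0]:.2f}, variance: {1.0 / Y[0, 0]:.2f}")
```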

The network of aerial and ground sensor platforms can then be deployed to search for targets and to localize them. Both the search and localization algorithms are driven by information-based utility measures and, as such, are independent of the source of the information, the specificity of the sensor obtaining the information, and the number of nodes engaged in these actions. Most importantly, these nodes automatically reconfigure themselves for this task. They are proactive in their ability to plan trajectories that yield maximum information, instead of simply reacting to observations. Thus, we are able to realize a proactive sensing network with decentralized controllers, allowing each node to be seamlessly aware of the information accumulated by the entire team. Local controllers deploy resources accounting for, and in turn influencing, this collective information, which results in coordinated sensing trajectories that evidently benefit from complementary subsystem characteristics. Information aggregation and source abstraction result in nodal storage, processing, and communication requirements that are independent of the number of network nodes. The approach scales to large sensor platform teams.

4.6. 3D Mapping

Many different methods can be used to represent outdoor environments. A point cloud (Wolf, Howard & Sukhatme, 2005) is one of the most frequently used techniques. It can describe features in fine detail when a sufficient number of points is used. These maps can be generated fairly easily when good pose estimation and range information are available.

In order to smooth pose estimation, we developed a particle filter-based GPS approximation algorithm (Wolf et al., 2005). Each particle represents a hypothesis of the robot being at a particular position, and the particles are propagated as the robot moves. The motion model for the particles is based on odometry and IMU sensor data. A small amount of Gaussian noise is also added to compensate for possible drift in the robot's motion. The observation model is based on the GPS information: the particles are weighted based on how distant they are from the GPS points. The closer a particle is to the GPS point, the higher its weight. After being weighted, the particles are resampled. The chance of a particle being selected for resampling is proportional to its weight; highly weighted particles are replicated and lowly weighted particles are eliminated. The complete path of each particle is kept in memory, and at the end only particles that reasonably followed the GPS points remain alive. Consequently, the path of any of these particles can be used as a reasonable trajectory estimate for the robot. In order to obtain accurate local pose estimation, a scan matching algorithm is applied afterward. Scan matching consists of computing the relative motion of the robot by maximizing the overlap of consecutive range scans. Features such as trees, long grass, and moving entities make scan matching a hard task in an outdoor environment. Figure 8 shows GPS data, odometry, and the particle filter-based GPS approximation for the robot's trajectory.
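
A condensed sketch of this filter follows, with hypothetical noise parameters and data; the actual implementation is described in Wolf et al. (2005).

```python
# Condensed sketch of the particle filter-based GPS approximation: propagate
# particles with the odometry delta plus Gaussian noise, weight by proximity
# to the GPS fix, and resample in proportion to weight. Values hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N = 200
particles = np.zeros((N, 2))               # (x, y) hypotheses for robot pose
paths = [particles.copy()]                 # keep the full path of every particle

def step(particles, odom_delta, gps_fix, motion_sigma=0.3, gps_sigma=2.0):
    # Motion model: odometry/IMU delta plus Gaussian noise for drift.
    particles = particles + odom_delta + rng.normal(0, motion_sigma, particles.shape)
    # Observation model: weight particles by distance to the GPS point.
    d = np.linalg.norm(particles - gps_fix, axis=1)
    w = np.exp(-0.5 * (d / gps_sigma) ** 2)
    w /= w.sum()
    # Resampling: high-weight particles replicated, low-weight ones eliminated.
    idx = rng.choice(N, size=N, p=w)
    return particles[idx], idx

odometry = [(1.0, 0.0), (1.0, 0.1), (1.0, 0.0)]
gps = [(1.2, 0.0), (2.1, 0.2), (3.0, 0.1)]
for delta, fix in zip(odometry, gps):
    particles, idx = step(particles, np.array(delta), np.array(fix))
    paths = [p[idx] for p in paths] + [particles.copy()]  # surviving histories

print("trajectory estimate:", [tuple(np.round(p.mean(axis=0), 2)) for p in paths])
```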

Once the current robot pose is obtained (from the localization module) and a desired target location/trajectory is specified, a Vector Field Histogram+ (VFH+) algorithm (Ulrich & Borenstein, 1998) is used for point-to-point navigation. The VFH+ algorithm provides a natural way to combine a local occupancy grid map and the potential field method, and the dynamics and kinematics of a mobile robot can be integrated to generate an executable path.

In addition, the robot's motion properties (e.g., goal oriented, energy efficient, or smooth path) can be controlled by changing the parameters of a cost function. Once the robot arrives at the desired waypoint, the point-to-point navigation module notifies CMDLi of the achievement, and CMDLi proceeds to the next waypoint. Figure 9 shows two trajectories that the robot generated while performing point-to-point navigation using two different waypoint sets.
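
The direction-selection step at the core of such a scheme can be sketched as follows. The weights, sector size, and blocked sectors below are hypothetical stand-ins; the full VFH+ algorithm (Ulrich & Borenstein, 1998) additionally masks directions by vehicle kinematics and smooths the polar histogram, which is omitted here.

```python
# Sketch of VFH+-style candidate direction selection: blocked polar-histogram
# sectors are masked out, and among the open sectors the robot picks the one
# minimizing a weighted cost. Tuning the weights trades goal-directedness
# against smooth, energy-efficient motion. Values here are hypothetical.

def angle_diff(a, b):
    return min(abs(a - b), 360 - abs(a - b))

def pick_direction(open_sectors, goal, heading, prev_cmd, a=5.0, b=2.0, c=2.0):
    # Cost favors the goal direction but penalizes sharp turns (deviation
    # from current heading) and steering chatter (deviation from the
    # previous command).
    cost = lambda s: (a * angle_diff(s, goal)
                      + b * angle_diff(s, heading)
                      + c * angle_diff(s, prev_cmd))
    return min(open_sectors, key=cost)

# 10-degree sectors; suppose an obstacle blocks 40-120 degrees.
open_sectors = [s for s in range(0, 360, 10) if not 40 <= s <= 120]
print(pick_direction(open_sectors, goal=90, heading=80, prev_cmd=85))  # -> 130
```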

Thus, when constructing 3D maps based on the robot's position, the environment representation is built directly by plotting range measurements into 3D Cartesian space. Figure 10 shows the results of mapping experiments performed at the Fort Benning MOUT site. The maps were plotted using a standard Virtual Reality Modeling Language (VRML) tool, which allows us to navigate the map virtually. It is possible to virtually move along the streets and get very close to features, such as cars and traffic signs, and it is also possible to view the entire map from the top.
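
A minimal sketch of this plotting step, assuming a planar robot pose and a vertically scanning laser with simplified, hypothetical geometry:

```python
# Sketch of direct point-cloud construction: each range reading from the
# vertical scanning laser is transformed by the current robot pose into a 3D
# Cartesian point. Geometry simplified (planar pose, hypothetical sensor data).
import math

def scan_to_points(pose, scan, sensor_height=0.5):
    x, y, yaw = pose                       # pose from the localization module
    points = []
    for elev, r in scan:                   # (elevation angle rad, range m)
        fx = r * math.cos(elev)            # forward component in sensor frame
        fz = r * math.sin(elev)            # upward component in sensor frame
        points.append((x + fx * math.cos(yaw),   # rotate into the world frame
                       y + fx * math.sin(yaw),
                       sensor_height + fz))
    return points

pose = (12.0, 7.5, math.radians(30))
scan = [(math.radians(e), 4.0) for e in range(0, 90, 30)]
for p in scan_to_points(pose, scan):
    print(tuple(round(v, 2) for v in p))
```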

5. INTEGRATED DEMONSTRATION

In this section, we describe the final experiment, which demonstrated the integration of all the component technologies, along with a discussion of the integration challenges that had to be overcome. In order to test and demonstrate this integration, we conceived an urban surveillance mission which involved detecting a human target wearing a uniform of a specified color within a designated area, and then tracking the target once its identity had been confirmed by a remotely located human operator. In the next section, we briefly describe the mission that was used to stage the demonstration, before discussing the results.

5.1. Demonstration Setup

To meet our project goals, an integrated demonstration based on an urban surveillance mission by a heterogeneous team of robots was conceived. The goal of the demonstration was for the team to ascertain whether a human target with a particular uniform was within the surveillance region. The demonstration was conducted on December 1, 2004, at the Fort Benning MOUT site, which spans approximately 90 m North to South and 120 m East to West. We deployed one UAV, three ClodBusters, two Pioneer2 ATs, two ATRV-Jrs, and one Segway. Three human operators, responsible for monitoring the progress of the demonstration and for target verification, were seated in the Hummer vehicle, which was used as the base station (Base).

Figure 8. Localization on the MOUT site.

Figure 9. Point-to-point navigation using two waypoint sets.

The experiment consisted of an aerial phase, in which a UAV was tasked to conduct an initial coarse search of the surveillance region and determine potential target locations. This was followed by a second phase, in which UGVs, based on the UAV's initial assessment, were tasked to conduct a more localized search and identification of the targets. Since the goal was surveillance rather than target recognition, targets in the aerial phase of the experiment consisted of bright orange color blobs, and the target in the ground phase was a human in an orange colored vest. The orange color was simply used to ensure positive autonomous recognition without having to resort to complex and/or expensive means of target acquisition and designation. The human operator was brought into the loop at certain synchronization points. While deployment decisions (i.e., passing synchronization points) dedicated to maintaining network connectivity were made automatically, the human operator was engaged by the robots to confirm the identity of the target, ensuring that the robots had indeed acquired the correct target before proceeding.

A single UAV was initially deployed to actively search for and localize specified targets within the designated region. Targets were located at various locations on the site. Once a target(s) was detected, an alert was sent from the UAV to the base station to trigger the deployment of the UGVs. Figure 11 shows some targets detected by the UAV during one of these fly-by experiments.

Figure 10. Top and side view of the 3D map of the Fort Benning site.

Figure 11. Targets localized by the UAV on the MOUT site, each encircled by a white square.


In our demonstration, we assumed a scenario where a UAV observed a human target entering the surveillance area from the north of the MOUT site, which triggered an alert message at the base station. Once the Base had been notified, two groups of robots were dispatched from the Base to caching areas at the limits of radio network connectivity to await further instructions, marked as Cache N and Cache S in Figure 12. A ClodBuster was positioned at Cache N, while two ClodBusters, two Pioneer2 ATs, and a Segway were positioned at Cache S. The two ATRV-Jrs remained at the Base. For the ground phase of the experiment, the initial target sighting was selected a priori based on previous UAV experiments, and thus the trigger was delivered manually.

The human target then entered the building shown in Figure 12, unseen by the team. At this point, a surveillance mission was composed at the Base to search the town for the target of interest. The mission plan was initially given to the two ATRV-Jrs, which were then deployed, one to each cache area. Upon arrival, the mission plan was transferred via the wireless network to the individual platforms already positioned at the two caches. The two ATRV-Jrs then acted as radio network repeaters to allow the others to venture beyond the limit of one-hop network communication. Following a universal commence signal from the Hummer base station, the robots autonomously deployed themselves to search for the target of interest. Once the ClodBuster robots had arrived at their target positions, they entered a scanning mode and passed images of the candidate target to the operator. These positions were chosen during the mission planning phase based on the radio connectivity map of the MOUT site obtained during the initial mapping and exploration phase, shown in Figure 7. A schematic of the deployment scheme and all the robot trajectories is shown in Figure 13(a). Network connectivity was maintained to ensure that once the target was located, an alert would be sent to the base station, permitting the operator to make a positive identification by viewing images obtained by the robotic agents. Figure 14(a) shows the actual alert that was seen by the human operator when the target was located by one of the ClodBusters. Figure 14(b) shows the image that was used by the human operator to make the positive identification. The individual robots autonomously selected the best image, i.e., an image in which the target was centrally located, from their databases to forward to the Base when requested.

Once detected, the target was then tracked viathe cameras from some of the robots, while the Seg-way moved in to physically track it as it left the area.This commitment to a particular target was finalizedby the human, while the target tracking wasachieved using a particle filter-based algorithm de-veloped to enable tracking in real time �Jung &Sukhatme, 2004�. Figure 15 shows some snapshots of

Figure 12. Robot initial positions and position of the Base.

1006 • Journal of Field Robotics—2007

Journal of Field Robotics DOI 10.1002/rob

Figure 14. �a� Screen capture of base station console showing the alert message notifying the human operator that atarget has been located. �b� Image obtained by one of the ClodBusters and provided to the human operator foridentification.

Figure 15. Snapshots of previous target tracking result.

Figure 13. �a� Schematic of robot trajectories. �b� Schematic of target trajectory and Segway trajectory as it tracks andfollows the target.



The information collected by the Segway was then transmitted to the base station over the multihop network. Figure 16 is a snapshot of the monitoring station. The current positions of the robots were displayed on the site map on the left in real time. The two windows on the right showed live video streams from the Segway (top) and one of the Pioneer2 ATs (bottom) for surveillance activity. Detected targets were displayed on top of the video streams.

The experiment concluded as the target of interest departed the bounds of the MOUT site, with the Segway tracking its movements. The experiment was carried out live, the deployment was fully autonomous, and the run lasted approximately 30 min. A short movie of the integrated experiment has been included with this publication.

5.2. Challenges Toward Integration

5.2.1. Mission Specification and Execution

Specification and execution of a mission through MISSIONLAB was found to be fairly robust. As the simulator in MISSIONLAB allows the mission scenario to be tested without actual deployment of the robots, a solid CMDL script for the Fort Benning MOUT site (100% success rate in simulation) was composed before being integrated with the other components. Even when the mission was executed by all of the actual robots and integrated with all other components, the CMDLi proved considerably reliable. Every robot was able to carry out all of the assigned tasks, and synchronization was properly attained as specified in the script. No major problem was found during execution of the mission.

Improvements can be made to the mission specification process to enhance robustness to errors during execution. For example, during the demonstration, the Segway collided with one of the ClodBusters because of errors in localization (caused by poor GPS information) and because the Segway's obstacle avoidance sensors were not mounted low enough to detect the smaller ClodBusters. This could have been prevented by explicitly modeling the heterogeneity of the robots and adding additional constraints on the waypoints of the individual robots.
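One inexpensive way to encode such constraints is a pre-deployment check that compares planned waypoint separations against the robots' combined footprints and localization uncertainty. The following Python sketch is purely hypothetical (the footprint and GPS-error figures are invented) and is not part of MISSIONLAB:

    import math

    # Hypothetical pre-deployment check: flag waypoint pairs whose
    # separation is smaller than the two robots' combined footprint
    # plus their worst-case localization error. Values are illustrative.

    ROBOTS = {
        "segway":     {"radius": 0.6,  "gps_err": 5.0},
        "clodbuster": {"radius": 0.25, "gps_err": 5.0},
    }

    def min_separation(a, b):
        # Required clearance: both footprints plus worst-case GPS error.
        return (ROBOTS[a]["radius"] + ROBOTS[b]["radius"]
                + max(ROBOTS[a]["gps_err"], ROBOTS[b]["gps_err"]))

    def check_waypoints(plan):
        """plan: list of (robot_name, x, y). Returns conflicting pairs."""
        conflicts = []
        for i, (ra, xa, ya) in enumerate(plan):
            for rb, xb, yb in plan[i + 1:]:
                if math.hypot(xa - xb, ya - yb) < min_separation(ra, rb):
                    conflicts.append((ra, rb))
        return conflicts

    # A Segway/ClodBuster waypoint pair this close would be flagged:
    print(check_waypoints([("segway", 10.0, 5.0), ("clodbuster", 12.0, 5.0)]))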

5.2.2. Communication Network and Control for Communication

A team of three ClodBuster robots was used to collect the radio-signal strength map shown in Figure 7. The map was obtained prior to the final demonstration. The robots were simultaneously tasked to continuously log signal strength and position information at specified locations during the entire experiment. The continuous logs proved to be extremely useful in the construction of the map shown in Figure 7, since GPS errors of more than 5 m were fairly common, especially toward the bottom right region of the site, where robots consistently had problems obtaining accurate position information.
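For reference, turning such logs into a map can be as simple as rejecting fixes with large reported GPS error and averaging the surviving samples on a coarse grid. The sketch below illustrates that recipe; the record layout, cell size, and thresholds are assumptions for illustration, not the format actually logged:

    from collections import defaultdict

    # Hypothetical post-processing of (x, y, rssi, gps_err) log records
    # into a coarse radio-signal-strength grid map. Samples whose GPS
    # error exceeds a threshold are discarded, mirroring the need to
    # filter the >5 m errors seen on parts of the site.

    CELL = 5.0          # grid resolution in meters
    MAX_GPS_ERR = 5.0   # reject fixes worse than this

    def build_map(samples):
        """samples: iterable of (x, y, rssi_dbm, gps_err_m) tuples."""
        acc = defaultdict(lambda: [0.0, 0])     # cell -> [sum, count]
        for x, y, rssi, err in samples:
            if err > MAX_GPS_ERR:
                continue
            cell = (int(x // CELL), int(y // CELL))
            acc[cell][0] += rssi
            acc[cell][1] += 1
        return {cell: s / n for cell, (s, n) in acc.items()}

    log = [(12.1, 40.3, -62.0, 2.1), (13.4, 41.0, -65.0, 1.8),
           (80.2, 10.5, -88.0, 7.5)]            # last fix is rejected
    print(build_map(log))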

This experience suggests that it may be possible to obtain a finer resolution map if one can incorporate some learning into the exploration strategy. Additionally, while the map proved to be very useful for determining the deployment positions of the ClodBuster robots in the final demonstration, it failed to provide much insight for the deployment positions of the other robots due to the differences in robot sizes and capabilities. Thus, future work includes the development of exploration strategies for teams of heterogeneous robots to enable better utilization of the various available resources. Lastly, since the robots were not required to operate at the limit of their communication links for the integrated demonstration, the radio connectivity map proved to be a useful resource. In situations where robots must operate at these limits, one must incorporate reactive strategies that enable the robots to adapt to changes in their signal strengths, as shown in Hsieh et al. (2006).

Figure 16. Screenshot of USC monitoring station.



5.2.3. Programming Abstractions and Composition for Multirobot Deployment

A benefit of componentized development is that it leads to a natural structuring of test activities. All components are extensively tested in isolation to ensure that they yield the proper results given some set of inputs. Unfortunately, the outputs of many components are not amenable to a binary classification of success or failure, and the input domains of many components are too large to provide total test coverage. A good example of the difficulty in component testing is a component designed to recognize a target object in images coming from a camera. Lighting conditions, distance to the target, and environmental features all have dramatic effects on the ability of the software to perform correctly. For this reason, such components were tested until they satisfied loosely defined performance specifications. In the MARS experiments, image components provided tuning parameters that let engineers adjust performance to match experimental operating conditions. Such tuning, used to account for lighting conditions in color-based segmentation and for ground clutter in obstacle avoidance, presents a critical point of failure for the entire experiment. This weakness may be mitigated by self-tuning software that monitors its own performance when possible, and by tools designed to allow human operators to quickly calibrate the software when automated testing metrics are difficult to specify.

Isolation testing also presented a difficulty early in the development schedule, when it was discovered that some components were designed with built-in performance assumptions. Such assumptions are only revealed when the component is tested in various execution environments, many of which may be difficult to anticipate without experimental experience. The simplest class of problems were those related to processing latency. Certain controllers were built with hard-coded real-time processing assumptions that could be satisfied by the host (nonreal-time) operating system under minimal central processing unit load, but were violated when run alongside the many components that must coexist for the robot to be able to express all desired behaviors. Some of these controllers may be redesigned to dynamically tune parameters based on observed performance characteristics, while others may be moved to dedicated microcontrollers closer to the hardware.
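As a sketch of the first option, a controller can measure its own loop period and shed load when the deadline slips, rather than silently violating a hard-coded timing assumption on a nonreal-time host. The numbers and the frame-subsampling policy below are invented for illustration:

    import time

    # Hypothetical self-monitoring control loop: if the measured loop
    # period exceeds the design deadline, the controller sheds load by
    # processing only every Nth sensor frame, restoring fidelity when
    # headroom returns.

    DEADLINE = 0.05     # controller designed for a 20 Hz loop
    subsample = 1       # process every Nth frame (1 = all frames)

    def process(step):
        # Stand-in for real work: cost shrinks as frames are skipped.
        time.sleep(0.08 / step)

    for _ in range(50):
        start = time.monotonic()
        process(subsample)
        period = time.monotonic() - start
        if period > DEADLINE and subsample < 8:
            subsample *= 2          # falling behind: drop more frames
        elif period < DEADLINE / 2 and subsample > 1:
            subsample //= 2         # headroom available: restore detail
    print("settled at subsample =", subsample)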

An approach that makes it easy to have asynchronous interoperation between multiple processors, some of which may offer real-time performance guarantees, gives the engineering team the ability to avoid a potentially painful compromise, while also improving the scalability of the system as a whole.

5.2.4. Distributed Databases for Situational Awareness

The distributed database for interactive situational awareness was successful on its own, but highly dependent on general robot performance. While the software could correctly task robots and integrate disparate data stores into a cohesive view, there was little oversight of robot resources. This meant that a database query, such as a request for an image taken from a particular location, could fail because the robot was unable to achieve its task owing to a loss of GPS reception, a failure of the obstacle avoidance system, a mechanical failure, or a networking failure. All of these failure modes were observed, but there was no automated contingency management to handle such failures from a high level. A significant difficulty in designing such contingency plans is that each of these failure modes itself represents a possible outcome of myriad actual circumstances. An obvious strategy would be to attach a timeout to each query, and to send another robot if the first failed.

Unfortunately, this strategy tended to result in either a robot pileup at some environmental feature that the robots were not capable of handling, or great inefficiency when the first robot was able to successfully recover and complete its mission. Ideally, such inefficiencies should be acceptable operating losses of a multirobot system, and multiple robot losses could be an acceptable price for learning valuable information about the environment (i.e., do not send robots to this location), but our experimental setup was constrained by too small a population of robots to accept such eventualities.
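A compromise we did not field, but which illustrates the design space, is to retry a failed query on a second robot at most once while remembering failures by location, so that a spot that has already claimed robots stops being fed new ones. The policy, names, and thresholds below are a hypothetical sketch:

    from collections import Counter

    # Hypothetical contingency policy for database queries: retry a
    # failed query on another robot at most once, and blacklist a
    # location after repeated failures so robots stop piling up there.

    MAX_RETRIES = 1
    BLACKLIST_AFTER = 2

    failures = Counter()      # location -> observed failure count
    blacklist = set()

    def run_query(location, robots, attempt_fn, timeout_s=60.0):
        """attempt_fn(robot, location, timeout_s) -> result or None."""
        if location in blacklist:
            return None                        # known robot-eating spot
        for robot in robots[: MAX_RETRIES + 1]:
            result = attempt_fn(robot, location, timeout_s)
            if result is not None:
                return result
            failures[location] += 1
            if failures[location] >= BLACKLIST_AFTER:
                blacklist.add(location)        # stop sending robots here
                break
        return None

    # Usage with a stub attempt function that always times out:
    img = run_query("bldg_3_door", ["ugv1", "ugv2", "ugv3"],
                    lambda r, loc, t: None)
    print(img, dict(failures), blacklist)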

5.2.5. Cooperative Search, Identification, and Localization

In the demonstration, the synergy between aerial and ground vehicles was exploited to detect, identify, and localize targets on the ground. Aerial vehicles are capable of searching quickly over a large area, but they are unable to obtain accurate estimates of the locations of potential targets because of errors in localization (position and orientation). Ground vehicles, on the other hand, are slower but capable of close-range observations that can confirm the identity of the target and provide better position information. Overflight of the UAV (only one was flown for the demonstration) narrowed down the search area for the mission, while the ground robots were able to pursue potential targets for better information.

The integration of different communication networks and the distances involved proved to be challenging. The communication between the UAV and the base station involved a low-bandwidth radio modem connection, precluding the possibility of processing the data on the ground; onboard image processing algorithms running on an off-the-shelf laptop were therefore necessary. The communication between the base station and the robots would have required multiple repeaters because of the distances between the base station and the robots. Instead of pursuing this solution, we manually connected the UAV network and the UGV network, allowing effective experimentation.

A second challenge with cooperative behaviors involving multiple sensors is the need for an efficient data structure encoding the information in the network that can be shared globally. While in principle this approach allows scaling up to large numbers of anonymous vehicles, in practice the communications manager needs to be aware of the identities of each vehicle to ensure that there are no loops in the sensor fusion network. This is a research problem that is currently under investigation by several groups (see, for example, Dellaert, Kipp & Krauthausen (2005)).
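One common way to keep such loops from double counting information, sketched below as a toy, is to tag every estimate with the set of raw observation identifiers already folded into it and fuse only what is new. This is a simple pedigree scheme for illustration only, not the multifrontal QR approach cited above:

    # Toy pedigree tracking for a fusion network: each estimate carries
    # the set of raw observation IDs folded into it, so fusing the same
    # information twice becomes a no-op. For simplicity this assumes
    # pedigrees are either disjoint or fully contained; handling partial
    # overlap properly is exactly what the cited methods address.

    class Estimate:
        def __init__(self, value, weight, pedigree):
            self.value = value             # scalar state estimate
            self.weight = weight           # information (inverse variance)
            self.pedigree = set(pedigree)  # raw observation IDs included

        def fuse(self, other):
            if not (other.pedigree - self.pedigree):
                return self                # nothing new: avoid a rumor loop
            # Information-weighted combination of the two estimates.
            w = self.weight + other.weight
            v = (self.value * self.weight + other.value * other.weight) / w
            return Estimate(v, w, self.pedigree | other.pedigree)

    a = Estimate(10.2, 1.0, {"uav_obs_1"})
    b = Estimate(10.8, 2.0, {"ugv_obs_7"})
    ab = a.fuse(b)
    print(ab.value, ab.fuse(b).value)   # fusing b a second time changes nothing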

5.2.6. 3D Mapping

During our experiments at Fort Benning, the robot mapped an area of approximately 50 m × 90 m (a 350 m tour at an average speed of 1.2 m/s). A GPS unit (with an accuracy of approximately 2 m) was used as the reference for the robot's pose estimation. The pose estimation error can be noticed in the walls of some buildings, which appear bent in the point cloud map. Unfortunately, ground truth was not available for these experiments, but visual comparisons between the aerial image and the planar model suggest errors of around 2 m, which is compatible with the GPS errors. A better reference for pose estimation would certainly lead our algorithm to generate more accurate models of the environment.

The performance can be further improved by extracting planar information from the incomplete point clouds. In our initial results, we represented flat surfaces found in the point cloud map by planes (Wolf, Howard & Sukhatme, 2005). These planes do not possess the same level of detail as the point clouds, but they are much more efficient in terms of memory. In situations where the application does not require a fine level of detail in the urban maps, planar information may be a convenient alternative.
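Plane extraction of this kind is commonly done with a RANSAC-style search. The generic sketch below, which is not the method of Wolf, Howard & Sukhatme (2005), fits the single dominant plane in a point cloud and shows why the representation is compact: thousands of inlier points collapse to one (normal, offset) pair:

    import numpy as np

    # Generic RANSAC plane fit: repeatedly fit a plane to 3 random
    # points and keep the plane supported by the most inliers.

    def ransac_plane(points, n_iters=200, tol=0.05, rng=None):
        rng = rng or np.random.default_rng(0)
        best_inliers, best_plane = 0, None
        for _ in range(n_iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue                        # degenerate (collinear) sample
            normal /= norm
            dist = np.abs((points - p0) @ normal)
            inliers = int((dist < tol).sum())
            if inliers > best_inliers:
                best_inliers = inliers
                best_plane = (normal, float(normal @ p0))
        return best_plane, best_inliers

    # Synthetic wall (the z = 0 plane) plus scattered clutter:
    rng = np.random.default_rng(1)
    wall = np.column_stack([rng.uniform(0, 10, 500),
                            rng.uniform(0, 3, 500),
                            rng.normal(0, 0.01, 500)])
    clutter = rng.uniform(0, 10, (100, 3))
    plane, count = ransac_plane(np.vstack([wall, clutter]))
    print(plane, count)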

In the future, we plan to investigate different methods for mapping urban environments and for representing these maps efficiently, even for large environments and high levels of detail. We are considering strategies for combining range information with images. Lastly, we are also considering combining the range information provided by ground robots and helicopters.

5.3. Analysis of Integrated Demonstration

Reliability and repeatability are most easily appreciated when viewed from a high level. One emphasis of the design of this demonstration was the importance of generality in the handling of robot resources. That is, the team was heterogeneous in makeup, and human operators should only be concerned with relevant data. The question of which robot provided which service should never come up, thus freeing human operators to focus on mission-specific goals and offering welcome flexibility to engineering teams in terms of which robots are fielded for a given mission. This abstraction of hardware resources allows for a great degree of scalability and fault tolerance.

During hardware warmup at the start of the public iteration of the integrated demonstration, the GPS unit on one of the ClodBusters failed. The problem was quickly detected during the routine hardware startup check (the importance of this activity having long been established over months of testing), but the device could not be made to respond. Given that there was no time to debug the issue, a quick solution was needed. In this case, the fastest fix was to pull the computer from the faulty robot and plug it into a spare robotic platform that carried similar hardware. The swap worked smoothly, and no changes needed to be made to any other robot. Calibration settings for the camera and motors were adjusted to reflect the new hardware platform, but all high-level scripting remained unchanged. The abstraction of hardware resources presented to the scripting systems used to drive the demonstration thus provided great flexibility in terms of specific hardware provisioning.

The execution of the demonstration itself provided an example of the dangers of team heterogeneity, and of the benefits of a loosely coupled robot team with built-in redundancy. The mission was specified such that three ClodBusters would be in position to view the person of interest as she left the building. Two robots had relatively close views of the two street-facing sides of the building, while a third robot had a slightly longer view that also encompassed a third side of the building. Once these robots were in position, the Segway was to begin patrolling an area adjacent to the building believed to house the target. This all proceeded according to plan but, unnoticed by any observer, the third ClodBuster stopped slightly short of where it was intended to be positioned. Its location was within normal localization thresholds, but just barely. Meanwhile, the Segway patrol also strayed toward the boundary of its expected area of coverage, which led to a run-in between the two robots, as previously mentioned. It was expected that the ClodBuster (GRASP) robots might have to contend with potential collisions among themselves, which they narrowly avoided during the demo, but such a collision would have been prevented by the robots detecting each other as unnavigable obstacles. The Segway, however, being a much larger robot, had its obstacle avoidance thresholds set such that an obstacle the size of a ClodBuster was considered surmountable. In this case, the Segway did surmount the GRASP robot soon after it sent its first long-distance view of the target back to the Base. The GRASP robot was fairly seriously damaged and began a spontaneous retreat back to its initial waypoint close to the Base. Fortunately, this third ClodBuster was not critical to mission success, and the remaining two GRASP vehicles were able to continue capturing images of the target to be selectively pushed to the human operator.

The multiple views were sufficient for the human operator to positively identify the target and allow the mission to proceed with the Segway tracking the target as she left the mission area. Had the mission hinged on a single ground vehicle detecting the target leaving a building in an urban setting, it would have been far more likely that positioning error or unwanted robot interaction (i.e., a collision) could have led to mission failure without an outright failure of any single component of the system.

6. CONCLUSION

Our vision for the demonstration was to advance the state of the art in the integration of heterogeneous robots into a single team with minimal human intervention. This required the presentation of a single integrated command and control interface that enabled the human operator to task the team and monitor performance of the mission. This proved to be very challenging, since the team consisted of diverse robotic assets from different universities, each running different operating systems and robot control architectures, and all quite different in physical size and capabilities.

Our final multirobot coordination framework had to be both flexible and responsive for our team to be able to execute tasks efficiently and robustly. In our integrated demonstration at the McKenna MOUT site, the task was to patrol a small village and report and track a human target. The approach taken was to augment each robotic asset's controller with an instance of a distributed tasking software agent. For each robot, this agent negotiated work assignments with the other assets' agents and with the operator console to support assigning tasks across the assets. Each tasking agent instance maintained a work queue for its robot and passed commands and waypoints to the underlying controller for execution. It also aggregated status reported by the underlying controller and sent status reports back to the operator console and to the other robots. This architecture allowed the operator to create a single mission for the team; distribute the mission to the robotic team members over the wireless network; and monitor, modify, or replace the mission during execution. In this fashion, the commander was able to deploy the mission across the team using the operator console and monitor the progress of the mission and the location of the vehicles on a map display during the demonstration. When a threat was identified, the operator was presented with video of the potential target for confirmation.
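In outline, each tasking agent is a thin shim between the negotiation layer and the native controller. The skeleton below is a hypothetical reconstruction of that structure (all class and method names are invented), not the project's actual agent code:

    import queue

    # Hypothetical per-robot tasking agent: holds the negotiated work
    # queue, feeds commands and waypoints to the underlying native
    # controller, and aggregates status for reporting back.

    class TaskingAgent:
        def __init__(self, robot_id, controller):
            self.robot_id = robot_id
            self.controller = controller      # adapter to native controller
            self.work = queue.Queue()         # negotiated task queue
            self.status = "idle"

        def assign(self, task):
            """Called when negotiation awards this robot a task."""
            self.work.put(task)

        def spin_once(self):
            """Run one scheduling step; returns a status report."""
            if self.status == "idle" and not self.work.empty():
                task = self.work.get()
                self.controller.execute(task["command"], task["waypoints"])
                self.status = "busy"
            elif self.status == "busy" and self.controller.done():
                self.status = "idle"
            return {"robot": self.robot_id, "status": self.status}

    class FakeController:                      # stand-in native controller
        def execute(self, cmd, wpts): self._steps = 2
        def done(self):
            self._steps -= 1
            return self._steps <= 0

    agent = TaskingAgent("clodbuster_1", FakeController())
    agent.assign({"command": "goto_and_scan", "waypoints": [(10, 5)]})
    for _ in range(4):
        print(agent.spin_once())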

Although the initial task assignment was centralized, control of the individual robotic assets was accomplished in a decentralized fashion, so as to avoid the difficult task of seamlessly integrating all three command and control software systems. This strategy allowed team members to respond to dynamic changes in the environment, as well as to achieve full fault tolerance. Two of our robotic assets suffered catastrophic failures during mission deployment;2 however, due to our decentralized architecture at the individual robot level, the team was still able to locate and track the target and complete the mission.

2 One of the Pioneers failed to initialize, while the Segway ran over one of the ClodBusters during the demonstration.

This experiment successfully demonstrated that diverse robots and robot control architectures could be reliably aggregated into a team with a single uniform operator control station. It showed that disparate robots could perform tightly coordinated tasks, such as distributed surveillance and coordinated movements. Further, all of these capabilities were added as a software agent sitting on top of each robot's existing controller, without invasive modifications to the existing architecture or software.

Field testing is expensive, tiring, and frustrating, but irreplaceable in moving the competency of the system forward. In the field, sensors and perceptual algorithms are pushed to their limits where vegetation, lighting, and terrain are uncontrollable, and communication radios struggle in cluttered areas with many nodes competing for bandwidth. Just ensuring that each robot's batteries were charged at the same time to allow running an integrated experiment was difficult with this large collection of robots. Much of the success of the integrated demonstration was due to the extensive testing of the individual subcomponents at each university and on the MOUT site.

Additionally, even with extensive field testing, it is often difficult to guarantee system performance at execution time. Despite months of testing, GPS coverage was spotty at best during the final days leading up to the integrated experiment. To mitigate the localization problems, we placed stationary overhead camera nodes on key buildings on the MOUT site. These can be seen as the deployment of additional UAVs, capable of estimating their own positions as well as accurately tracking ground vehicles, to provide localization support. Without this additional support, the integrated experiment would have failed due to localization problems. Success was dependent on our ability to anticipate and prepare for such failures. As always, integration requires substantial planning and effort to be successful. This project, involving three universities and two corporations, benefited from strong leadership and collaboration to ensure that integration received the required emphasis and commitment from all involved.

Finally, the most important lesson was that bringing the different groups together into a single team was extremely beneficial, and the whole was truly greater than the sum of the parts. Each team had unique capabilities that the other teams could leverage to make rapid progress. Further, each style of robot had unique physical capabilities and sensors that were utilized to fill gaps and provide a solid collection of capabilities for the integrated team.

ACKNOWLEDGMENTS

The authors would like to thank Dan Gomez-Ibanez, Joshua Karch, and Rahul Swaminathan, formerly of the University of Pennsylvania; Dr. Jason Redi and Keith Manning of BBN Technologies; and Dr. Tucker Balch, Yang Cheng, Zsolt Kira, Lilia Moshkina, Matt Powers, Alexander Stoytchev, Patrick Ulam, and Alan Wagner of the Georgia Tech Mobile Robot Laboratory. A special thanks to Mike Barnes, Irv (Rod) Rodriguez, and their colleagues at the Soldier Battle Laboratory for their support at Fort Benning. We gratefully acknowledge the support of DARPA MARS NBCH1020012.

REFERENCES

Amarss (2006). AMARSS: Networked minirhizotron planning and initial deployment. http://research.cens.ucla.edu/.

Arkin, R. C. (1998). Behavior-based robotics. Cambridge, MA: MIT Press.

Chaimowicz, L., Cowley, A., Sabella, V., & Taylor, C. J. (2003). ROCI: A distributed framework for multi-robot perception and control. Paper presented at the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV.

Chaimowicz, L., Cowley, A., Gomez-Ibanez, D., Grocholsky, B., Hsieh, M. A., Hsu, H., Keller, J. F., Kumar, V., Swaminathan, R., & Taylor, C. J. (2005). Deploying air-ground multi-robot teams in urban environments. In L. E. Parker, F. E. Schneider, & A. Schultz (Eds.), Proceedings of the 2005 International Workshop on Multi-Robot Systems, Vol. 3, pp. 223–234. Dordrecht, The Netherlands.

Corke, P., Peterson, R., & Rus, D. (2003). Networked robots: Flying robot navigation using a sensor net. Paper presented at the Eleventh International Symposium of Robotics Research (ISRR), Springer Tracts in Advanced Robotics (STAR). Berlin: Springer-Verlag.

Cover, T., & Thomas, J. (1991). Elements of information theory. New York: Wiley.

Cowley, A., Hsu, H., & Taylor, C. J. (2004a). Distributed sensor databases for multi-robot teams. Paper presented at the 2004 IEEE International Conference on Robotics and Automation, New Orleans, LA.

Cowley, A., Hsu, H., & Taylor, C. J. (2004b). Modular programming techniques for distributed computing tasks. Paper presented at the Performance Metrics for Intelligent Systems (PerMIS) Workshop, Gaithersburg, MD.

Dellaert, F., Kipp, A., & Krauthausen, P. (2005). A multifrontal QR factorization approach to distributed inference applied to multi-robot localization and mapping. Paper presented at the National Conference on Artificial Intelligence (AAAI '05), Vol. 5, pp. 1261–1266, Pittsburgh, PA.

Durrant-Whyte, H., Stevens, M., & Nettleton, E. (2001). Data fusion in decentralised sensing networks. Paper presented at Fusion 2001, Montreal, Canada, pp. 302–307.

Endo, Y., MacKenzie, D. C., & Arkin, R. C. (2004). Usability evaluation of high-level user assistance for robot mission specification. IEEE Transactions on Systems, Man, and Cybernetics, 34(2), 168–180.

Fox, D., Ko, J., Konolige, K., Limketkai, B., Schulz, D., & Stewart, B. (2006). Distributed multi-robot exploration and mapping. Proceedings of the IEEE, 94(7), 1325–1339.

Grocholsky, B. (2002). Information-theoretic control of multiple sensor platforms. Ph.D. thesis, The University of Sydney. Available from http://www.acfr.usyd.edu.au.

Grocholsky, B., Keller, J., Kumar, V., & Pappas, G. J. (2006). Cooperative air and ground surveillance. IEEE Robotics and Automation Magazine, 13(3), 16–25.

Grocholsky, B., Makarenko, A., Kaupp, T., & Durrant-Whyte, H. (2003). Scalable control of decentralised sensor platforms. Paper presented at Information Processing in Sensor Networks: 2nd International Workshop (IPSN '03), pp. 96–112.

Grocholsky, B., Swaminathan, R., Keller, J. F., Kumar, V., & Pappas, G. (2005). Information driven coordinated air-ground proactive sensing. Paper presented at the IEEE International Conference on Robotics and Automation, Barcelona, Spain.

Howard, A., Parker, L. E., & Sukhatme, G. S. (2006). Experiments with a large heterogeneous mobile robot team: Exploration, mapping, deployment and detection. International Journal of Robotics Research, 25(5–6), 431–447.

Hsieh, M. A., Cowley, A., Kumar, V., & Taylor, C. J. (2006). Towards the deployment of a mobile robot network with end-to-end performance guarantees. Paper presented at the International Conference on Robotics and Automation (ICRA) 2006, Orlando, FL.

Hsieh, M. A., Kumar, V., & Taylor, C. J. (2004). Constructing radio signal strength maps with multiple robots. Paper presented at the IEEE International Conference on Robotics and Automation, New Orleans, LA.

Jones, E., Browning, B., Dias, M. B., Argall, B., Veloso, M., & Stentz, A. T. (2006a). Dynamically formed heterogeneous robot teams performing tightly-coordinated tasks. Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA), May 2006, Orlando, FL, pp. 570–575.

Jones, E., Dias, M. B., & Stentz, A. T. (2006b). Learning-enhanced market-based task allocation for disaster response. Technical Report CMU-RI-TR-06-48, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.

Jung, B., & Sukhatme, G. S. (2004). Detecting moving objects using a single camera on a mobile robot in an outdoor environment. Paper presented at the International Conference on Intelligent Autonomous Systems (IAS), Amsterdam, The Netherlands, pp. 980–987.

Kaiser, W., Pottie, G., Srivastava, M., Sukhatme, G. S., Villasenor, J., & Estrin, D. (2005). Networked infomechanical systems (NIMS) for ambient intelligence. In Ambient Intelligence, pp. 83–114. Berlin: Springer.

Kang, W., Xi, N., & Sparks, A. (2000). Formation control of autonomous agents in 3D workspace. Paper presented at the IEEE International Conference on Robotics and Automation (ICRA '00), San Francisco, CA, pp. 1755–1760.

Kotay, K., Peterson, R., & Rus, D. (2005). Experiments with robots and sensor networks for mapping and navigation. International Conference on Field and Service Robotics, Port Douglas, Australia, July 2005, pp. 243–255.

Lerman, K., Jones, C., Galstyan, A., & Mataric, M. J. (2006). Analysis of dynamic task allocation in multi-robot systems. International Journal of Robotics Research, 25(3), 225–241.

MacKenzie, D. C., & Arkin, R. C. (1998). Evaluating the usability of robot programming toolsets. International Journal of Robotics Research, 17(4), 381–401.

MacKenzie, D. C., Arkin, R. C., & Cameron, J. (1997). Multiagent mission specification and execution. Autonomous Robots, 4.

Majumder, S., Scheding, S., & Durrant-Whyte, H. (2001). Multi sensor data fusion for underwater navigation. Robotics and Autonomous Systems, 35(1), 97–108.

Makarenko, A., Brooks, A., Williams, S., Durrant-Whyte, H., & Grocholsky, B. (2004). An architecture for decentralized active sensor networks. Paper presented at the IEEE International Conference on Robotics and Automation (ICRA '04), New Orleans, LA.

Makarenko, A., Williams, S., & Durrant-Whyte, H. (2003). Decentralized certainty grid maps. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 4, pp. 3258–3263.

Mataric, M. J., Sukhatme, G. S., & Ostergaard, E. (2003). Multi-robot task allocation in uncertain environments. Autonomous Robots, 14(2–3), 255–263.

McMillen, C., & Veloso, M. (2006). Distributed, play-based role assignment for robot teams in dynamic environments. Proceedings of DARS-2006, Minneapolis, MN.

MissionLab (2006). MissionLab: User manual for MissionLab 7.0. College of Computing, Georgia Institute of Technology.

Neskovic, A., Neskovic, N., & Paunovic, G. (2000). Modern approaches in modeling of mobile radio systems propagation environment. IEEE Communications Surveys, Third Quarter.

Parker, L. E. (1998). ALLIANCE: An architecture for fault tolerant multi-robot cooperation. IEEE Transactions on Robotics and Automation, 14(2), 220–240.

Rybski, P. E., Stoeter, S. A., Erickson, M. D., Gini, M., Hougen, D. F., & Papanikolopoulos, N. (2000). A team of robotic agents for surveillance. Paper presented at Agents 2000, Barcelona, Spain.

Spletzer, J., & Taylor, C. J. (2002). Sensor planning and control in a dynamic environment. Paper presented at the IEEE International Conference on Robotics and Automation, Washington, DC.

Stroupe, A., & Balch, T. (2003). Value-based observation with robot teams (VBORT) using probabilistic techniques. Proceedings of the International Conference on Advanced Robotics (ICAR), September 2003, Portugal.

Sukhatme, G. S., Dhariwal, A., Zhang, B., Oberg, C., Stauffer, B., & Caron, D. A. (2006). The design and development of a wireless robotic networked aquatic microbial observing system. Environmental Engineering Science.

Sukkarieh, S., Nettleton, E., Grocholsky, B., & Durrant-Whyte, H. (2003). Information fusion and control for multiple UAVs. Dordrecht: Kluwer.

Tang, F., & Parker, L. E. (2005). ASyMTRe: Automated synthesis of multi-robot task solutions through software reconfiguration. Paper presented at the IEEE International Conference on Robotics and Automation, Barcelona, Spain.

Thibodeau, B. J., Fagg, A. H., & Levine, B. N. (2004). Signal strength coordination for cooperative mapping. Technical Report, Department of Computer Science, University of Massachusetts, Amherst.

Ulrich, I., & Borenstein, J. (1998). VFH+: Reliable obstacle avoidance for fast mobile robots. Paper presented at the IEEE International Conference on Robotics and Automation, Leuven, Belgium, pp. 1572–1577.

Vaglienti, B., & Hoag, R. (2003). Piccolo system user guide. Available from http://www.cloudcaptech.com/downloads.htm.

Wolf, D. F., Howard, A., & Sukhatme, G. S. (2005). Towards geometric 3D mapping of outdoor environments using mobile robots. Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Edmonton, Alberta, Canada, pp. 1258–1263.

