
FIRST RESPONSE

Robot and Sensor Networks for First Responders

Vijay Kumar, University of Pennsylvania
Daniela Rus, Massachusetts Institute of Technology
Sanjiv Singh, Carnegie Mellon University

A network of distributed mobile sensor systems can help first responders during emergencies. Experiments conducted at a burning building with firefighters show the sensors' potential to localize themselves and to acquire and combine information, providing an integrated view of a dynamically changing environment.

The Oklahoma City bombing and attacks on the World Trade Center are unforgettable events in which humans were ill-equipped to respond. The need to collect, integrate, and communicate information effectively in emergency response scenarios exceeds the state of the art in information technology. Commanders can't easily locate their personnel or diagram the inside of affected structures. Sharing information between firefighters is limited to verbal communication over radio, because visibility inside a burning building can be reduced to inches. Furthermore, it's impossible to send humans to some places, either because the access area is too small or the danger too great. This emergency response problem provides an interesting and important test bed for studying networks of distributed mobile robots and sensors.

Here, we describe the component technologies (see the "Related Work" sidebar) required to deploy a networked-robot system that can augment human firefighters and first responders, significantly enhancing their firefighting capabilities. In a burning building at a firefighting training facility, we deployed a network of stationary Mote sensors, mobile robots with cameras, and stationary radio tags to test their ability to guide firefighters to targets and warn them of potential dangers. Our long-term vision is a physical network that can sense, move, compute, and reason, letting network users (firefighters and first responders) Google for physical information: that is, information about the location and properties of physical objects in the real world.

Our vision

The physical network we're building as part of a collaborative National Science Foundation Information Technology Research project will include several types of nodes (or agents). Some will be small mobile robots (either autonomous or teleoperated), some might be stationary sensors placed in the environment during an operation, and others might be computers embedded in the suits of emergency response personnel.

During an operation, we envision that, along with emergency personnel, tens of agents will enter a building about which potentially little is known (see Figure 1a). If floor plans are available a priori, agents will use them to expedite the search process, acquiring information and providing an integrated view for situational awareness. The agents' small size will let them penetrate nooks and niches, possibly teleoperated by a human operator.


Once inside, the agents will autonomously organize themselves to communicate effectively, integrate information efficiently, and obtain relative position information quickly. They will record temperature gradients, measure concentrations of toxins and relevant gases, track sources of danger, and look for human victims. They will then cordon off areas of threat (for example, areas where the temperature is greater than 300°F) and convey to remote human operators information about the environment and about emergency response personnel inside the building (see Figure 1b). Information broadcast from each group will be integrated into an immersive environment that rescue workers and firefighters can visualize on remote workstations or helmet-mounted displays (see the "Panoramic Display" sidebar). Additionally, a microphone on an agent will let a trapped human call for help, and the agent's ability to localize (identify its spatial location) with assistance from its neighbors will help rescuers find the trapped victim (see Figure 1c).

Losing inexpensive robots because of fire or falling debris is much more acceptable than human injuries or deaths. For example, in the event of contamination from hazardous materials arising from a spill or transportation accident, we could deploy the network to determine the extent of the contamination and the speed with which it's propagating. We could integrate simultaneous measurements of chemical concentration over a widespread, distributed area, letting the network follow the source's movement: for example, following a chemical plume as it spreads in the air, a moving object on the ground, or the source of a fire.

Figure 1. A motivating scenario: (a) A team of six robots enters a burning building, dispersing radio tags and Mote temperature sensors. (b) The team adapts to failures, using robots with failed actuators as static sensors and relays, and guiding robots with failed sensors through smoke and past rubble using information obtained via the network. (c) The network guides human firefighters to potential targets for rescue and to the fire's source while warning them of dangerous areas along the way.

As a first step toward realizing this vision, we conducted experiments in a burning building at the Allegheny County firefighting training facility in Pittsburgh. Based on these experiments, we highlight three key challenges to realizing an intelligent robot network: localizing robots, maintaining the flow of relevant information, and controlling the robot network.

Localization

Localization in the dynamic environments that search and rescue operations pose is difficult because we can't presume an infrastructure or even make simple assumptions; for example, we can't assume responders will be able to see known features. The system thus must be able to localize each node in the network, including those attached to firefighters.

Our method

We propose combining small (12 × 9 cm), low-cost (approximately US$40) radio tags (or beacons) with conventional line-of-sight optical sensors, because rescue personnel and robots can easily disperse the radio tags, and optical sensors can operate in the visible or infrared (IR) spectrum. A robot can query these radio tags, and any tags within range will reply. The robot can then estimate the distance to each responding tag by determining the time elapsed between sending the query and receiving the response.

Because this method doesn't require a line of sight between the tags and the mobile robot, it's useful in many environmental conditions where optical methods fail. Because each tag transmits a unique ID number, robots can automatically associate distance readings with the appropriate tags, easily solving the data association problem, a difficult issue in environments that can be visually obscured. Robots that first responders can't locate using the RF system might be able to sense their position with respect to other RF-located robots, using the robots' optical sensors.


Related Work

Current research in pervasive computing,1,2 sensor networks,3 and active perception4 is relevant to our work. In addition, several other projects are investigating search and rescue applications. Perhaps the two most relevant projects are the Scout project at the University of Minnesota, where small robots are used for mapping and exploration, and the University of South Florida effort to effectively deploy remotely operated, radio-controlled robots in real disaster areas, including a successful attempt at the World Trade Center.5 There is even a competition for robot search and rescue.6

In contrast to these efforts, our focus is on deriving useful functionality from the network and autonomously deploying the network. So, the results we describe in this article are first steps toward developing autonomous networked solutions to control and perception problems that arise in emergency response scenarios.

From an application viewpoint, our work differs from the work at the University of South Florida in that our project concerns automating support for first responders as opposed to search and rescue. First responders need real-time deployment of communication and computation infrastructure. They need adaptive, active, and proactive processing for information flow, and their world is dynamic and dominated by the difficulties of the rapidly changing threat (such as fire) and the uncertainties associated with the information. In search and rescue, the automation is post hoc, and the real-time adaptive and proactive aspect of the computation isn't crucial. The two problem domains (support for first responders and search and rescue) are synergistic, and solutions to both are needed to develop a comprehensive program that supports national infrastructure security.

REFERENCES

1. M. Satyanarayanan, "A Catalyst for Mobile and Ubiquitous Computing," IEEE Pervasive Computing, vol. 1, no. 1, 2002, pp. 2–5.

2. D. Estrin et al., "Connecting the Physical World with Pervasive Networks," IEEE Pervasive Computing, vol. 1, no. 1, 2002, pp. 59–69.

3. J. Hill et al., "System Architecture Directions for Networked Sensors," Proc. 9th Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS IX), ACM Press, 2000, pp. 93–104.

4. M. Pollefeys, R. Koch, and L.J. Van Gool, "Self-Calibration and Metric Reconstruction in Spite of Varying and Unknown Internal Camera Parameters," Proc. IEEE Int'l Conf. Computer Vision (ICCV 98), IEEE CS Press, 1998, pp. 90–95.

5. J. Casper and R. Murphy, "Workflow Study on Human-Robot Interaction in USAR," Proc. IEEE Int'l Conf. Robotics and Automation (ICRA 02), IEEE Press, 2002, pp. 1997–2003.

6. R. Murphy, J. Blitch, and J. Casper, "AAAI/RoboCup-2001 Urban Search and Rescue Events: Reality and Competition," AI Magazine, vol. 23, no. 1, 2002, pp. 37–42.

it’s generally assumed that a receiver canmeasure both range and bearing to “fea-tures,” we can determine only range—and even that measurement might be verynoisy. To localize robots using range-onlymeasurements, we adapted the well-known estimation techniques of Kalmanfiltering, Markov methods, and MonteCarlo localization.2 All three methodsestimate robot position as a distributionof probabilities over the space of possi-ble robot positions. We also have an algo-rithm that can solve SLAM in caseswhere approximate a priori estimates ofrobot and landmark locations exist.2

The primary difficulty stems from theannular distribution of potential relativelocations, resulting from a range-onlymeasurement. Because the distributionis highly non-Gaussian, SLAM solutionsderived from Kalman filters are ineffec-tive. In theory, Markov methods (prob-

ability grids) and Monte Carlo methods(particle filtering) have the flexibility tohandle annular distributions. In fact,they have more flexibility than weneed—they can represent arbitrary dis-tributions, but we need only deal withwell-structured annular distributions.Unfortunately, the scaling properties ofthese methods severely limit the numberof landmarks robots can map.

We’ve implemented a compact way torepresent annular distributions togetherwith a computationally efficient way ofcombining annular distributions witheach other and with Gaussian distribu-tions. In most cases, we expect the resultsof these combinations to be well approx-imated by mixtures of Gaussians so thatwe can apply standard techniques suchas Kalman filtering or multiple hypoth-esis tracking to solve the remaining esti-mation problem.

We’ve also extended these results todeal with the case in which the tag loca-tions are unknown using a simple boot-strapping process. The idea is to use thebest estimate of robot localization alongwith measured ranges to an unknown tagto coarsely estimate the tag’s location.Once we have an estimate, it’s addedto the Kalman filter, which improvesthe estimate along with estimates ofother (previously seen) tags. Because thismethod takes advantage of the problem’sspecial structure (in that it’s an easiermapping problem than a simultaneouslocation and mapping problem), it’s lesscomputationally cumbersome in that itreasonably estimates the tag’s location,avoiding local minimas.

We can also combine range-only measurements with range-and-bearing or bearing-only measurements available from robot cameras.


Panoramic Display

When robots or people interact with a sensor network, it becomes an extension of their capabilities. We've thus developed software that allows an intuitive, immersive display of environments. Using panoramic imaging sensors that small robots can carry into the heart of a damaged structure, the display can be coupled to head-mounted sensors. These sensors then let a remote operator look around in the environment without the delay associated with mechanical pan-and-tilt mechanisms (see Figure A). Distributed protocols collect data from the geographically dispersed sensor network and integrate this data into a global map (such as a temperature gradient) that can also be displayed to the user on a wearable computer.

Even if practical limitations on size and packaging make it difficult to display this information to firefighters, such information would be invaluable to commanders outside the building, who typically have limited information about where firefighters are located and about the local information accessible to each individual. According to the National Institute for Occupational Safety and Health, such information can help commanders better coordinate and guide firefighters.

Figure A. The 360° panorama of a room in the burning building, taken by a catadioptric system.

Our robots are equipped with omnidirectional cameras that currently work in the visible and near-IR range. IR omnidirectional cameras are feasible but not yet available off the shelf.

Experiment and results

To benchmark the radio tags' performance, we conducted localization experiments in a 30 m × 40 m flat, open, outdoor environment with an autonomous robot, where we could use highly accurate (2 cm) GPS receivers, a fiber-optic gyro, and wheel encoders for ground truth. We equipped the robot with an RF ranging system (PinPoint, from RF Technologies) that has four antennae pointing in four directions, and a computer to control the tag queries and process responses.

For each tag response, the system produces a time-stamped distance estimate to the responding tag, along with the unique ID number for that tag. The distance estimate is simply an integer estimate of the distance between the robot and the tag. A calibration step conducted ahead of time produces a probability density function (PDF) for each measurement: the calibration takes many measurements from candidate environments and uses them to determine an offset of the mean and a standard deviation for each reading. That is, for each range reading we could measure, we calculated the offset of the experimental mean from the reading and the standard deviation of the reading. We expect that this calibration needs to be done only once to obtain noise characteristics for various environments.
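In code, the calibration bookkeeping might look like the following sketch (the data layout and function names are ours; the article describes only the procedure):

```python
# Sketch of the calibration described above: for each raw integer range
# the device can report, record the offset of the true (surveyed)
# distance from the reading, and the reading's spread.
from collections import defaultdict
import statistics

samples = defaultdict(list)   # raw reading (m) -> list of true distances (m)

def record(raw_reading_m, true_distance_m):
    samples[raw_reading_m].append(true_distance_m)

def calibration_table():
    """Map each raw reading to (mean offset, standard deviation)."""
    table = {}
    for raw, truths in samples.items():
        offsets = [t - raw for t in truths]
        spread = statistics.stdev(offsets) if len(offsets) > 1 else 0.0
        table[raw] = (statistics.mean(offsets), spread)
    return table
```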

We distributed 13 RF tags throughout the area and then programmed the robot to drive in a repeating path among the tags. With this setup, we collected three kinds of data: the robot's ground-truth path (from GPS and inertial sensors), the robot's dead-reckoning estimated path (from inertial sensors only), and the range measurements to the RF tags. Figures 2 and 3 show results from these experiments (greater detail on the data set and results appears elsewhere3). Although the error for individual range readings has a standard deviation of as much as 1.3 m, the tags localized the robot to within 0.3 m of its true location.

Although our results are preliminary because they were obtained outdoors, they're still promising. It's important to recognize that multipath can severely affect time-of-flight measurements in populated indoor environments. However, our preliminary experiments in office buildings show that we can reject multipath by using the estimated position uncertainty to accept or reject range readings, with a mechanism such as a chi-squared filter. That is, the more uncertain the position estimate, the larger the difference between measured and expected range that the algorithm will accept.
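The gating test itself is essentially one line: normalize the squared innovation (measured minus expected range) by its predicted variance and compare it against a chi-squared threshold. A sketch, assuming a one-dimensional Gaussian innovation and a 3-sigma gate:

```python
# Sketch of chi-squared gating for multipath rejection. sigma_m is the
# predicted standard deviation of the innovation, combining range noise
# with the current position uncertainty; gate=9.0 is the 3-sigma bound
# for one degree of freedom. Our illustration, not the authors' code.
def accept_range(measured_m, expected_m, sigma_m, gate=9.0):
    innovation = measured_m - expected_m
    return (innovation / sigma_m) ** 2 <= gate
```

A larger position uncertainty inflates sigma_m, so the filter tolerates larger discrepancies, which is exactly the behavior described above.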

Additionally, because we don't assume we'll get range readings from all or any particular set of tags, the fact that only some might be visible at any given time isn't a problem, because we use dead reckoning between range readings. If the range readings are obtained in a fashion such that little overlap exists between the tags that are heard (as in a long linear traverse), then odometric errors come into play and the map is topologically correct rather than metrically correct. (The reference frame for the map and localization is arbitrary; it can be specified either by an axis formed by two of the tags or by the robot's initial starting position, such as at a building's entrance.)

Figure 2. (a) 13 RF tags in known locations, in conjunction with a wheel encoder and gyro, localize a robot moving in an open area. The lines between the path and tags show two range measurements (one noisy). (b) A cumulative distribution function of errors in localization from an extended run, showing the percentage of errors less than each value (in meters). During this run, the vehicle traveled 1,401 m. Over the course of the run, 14,633 range readings were received, of which 163 were rejected. The mean Cartesian error was 0.296 m, with a standard deviation of 0.1627 m.

Figure 3. When we don't initially know a tag's location, we can approximate it using a batch scheme first and then add it to a Kalman filter to continuously update the tag's position along with the robot's position.

In addition to localizing robots, which have the potential to interact with radio tags and other robots, it's also necessary to localize the other sensors. A robot deploying the sensors can localize them using techniques such as the robot-assisted localization method.4

Information flow

Sensors can locally store information about the area they cover or forward it to a base station for further analysis and use. They can also use communication to integrate their sensed values with the rest of the sensor landscape. With localized information, we can build global maps of sensory information (such as a temperature map) and use them to navigate humans or robots to a target while avoiding potentially dangerous areas. Network users (robots or people) can use this information as they traverse the network.

In the Allegheny experiments, we used Mote sensors to measure environmental conditions such as temperature, humidity, and chemical concentrations of toxins. Each Mote sensor (www.xbow.com/Products/Wireless_Sensor_Networks.htm) consisted of an Atmel ATmega128 microcontroller (a 4-MHz, 8-bit CPU with 128 Kbytes of flash program memory, 4 Kbytes of RAM, and 4 Kbytes of EEPROM), a 916-MHz RF transceiver (50 Kbps, 100-ft. range), a Universal Asynchronous Receiver-Transmitter, and a 4-Mbit serial flash. A Mote runs for approximately one month on two AA batteries. It includes light, sound, and temperature sensors, but you can add other types of sensors. Each Mote runs the TinyOS operating system. Because of their limited sensing and computational power, Mote sensors must be deployed, either manually or by robots, at locations that are registered on the map.

Our work in this area has led to two novel developments. First, we've developed distributed protocols for navigation tasks in which a distributed sensor field guides a user across the field.4 Second, we've developed the Sensory Flashlight, a handheld device that lets a network user (human or robot) interact with the network as a whole or talk to individual nodes in the network. An alternative approach would be to tag the Motes with PinPoint devices.

The navigation guidance application is an example of how simple nodes distributed over a large geographical area can assist with global tasks. This application relies on the user's ability to interact with the network as a whole and with specific nodes. This interaction is directed at retrieving data from the network (such as collecting local information from individual nodes and collecting global maps from the network) and at injecting data into the network (such as configuring the network with a new task or reprogramming its nodes).

In our experiments,5 we started a fire in a room and obtained local temperature data from each sensor. A communication protocol running over the entire system used the local data to compute a global temperature map of the space, ultimately producing a temperature gradient, with both the map and the gradients computed during the fire. For this experiment, we manually deployed and localized the Mote sensors at the locations marked in Figure 4a. The Mote sensors established multihop communication paths to a base station placed at the door. The sensors sent data to the base station over a period of 30 minutes, from which we were able to create a temperature gradient (see Figure 4b).
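To illustrate the map-building step, here is a sketch that interpolates localized point readings into a gridded temperature field and differentiates it. The inverse-distance weighting, grid size, and example values are our assumptions, not the protocol the Motes actually ran:

```python
# Sketch: build a temperature grid from localized Mote readings by
# inverse-distance weighting, then compute the spatial gradient.
import numpy as np

def temperature_map(sensor_xy, temps, xs, ys, power=2.0):
    grid = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            d2 = (sensor_xy[:, 0] - x)**2 + (sensor_xy[:, 1] - y)**2
            w = 1.0 / np.maximum(d2, 1e-6)**(power / 2.0)
            grid[i, j] = np.sum(w * temps) / np.sum(w)
    return grid

sensor_xy = np.array([[1.0, 1.0], [4.0, 1.5], [2.5, 4.0]])  # meters
temps = np.array([120.0, 300.0, 180.0])                     # degrees F
xs, ys = np.linspace(0, 5, 50), np.linspace(0, 5, 50)
grid = temperature_map(sensor_xy, temps, xs, ys)
dT_dy, dT_dx = np.gradient(grid, ys, xs)  # gradient field for navigation
```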

Figure 5a shows our prototype Sensory Flashlight (the name derives from the optical flashlight metaphor), which lets a mobile node interact with a sensor field. The prototype comprises an analog compass, an alert LED, a pager vibrator, a three-position mode switch, a power switch, a range potentiometer, some power-conditioning circuitry, and a microcontroller-based CPU/RF transceiver. The processing and RF communication components and the sensor network are Berkeley Motes (Figures 5b and 5c show the Mote board and the Mote sensor board).

The Flashlight’s “beam” comprises sen-sor-to-sensor, multihop-routed RF mes-sages that send or return information. Aswitch selects the sensor type (such aslight, sound, or temperature). Whenpointed in a specific direction, the sensor

reports information received from anysensors in that direction. The human-operated version includes a silent vibrat-ing alarm, with the vibration amplitudeencoding the distance (either actual dis-tance estimated or the number of hops)to the triggered sensor. A potentiometersets the detection range while an elec-tronic compass supplies heading dataindicating the direction in which thedevice is pointed. Thus the flashlight col-lects information from all the sensorslocated in that direction and provides its

user with sensory feedback. The devicecan also issue commands to the sensors inthat direction.
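The beam's selection logic can be sketched geometrically: keep the nodes whose bearing from the user falls within a cone around the compass heading and whose distance is within the potentiometer-set range. Node positions (known from localization) and the cone width are our assumptions:

```python
# Sketch of the Flashlight's "beam": select sensors within an angular
# cone around the compass heading and inside the configured range.
import math

def in_beam(user_xy, heading_rad, max_range, node_xy,
            half_angle=math.radians(15)):
    dx, dy = node_xy[0] - user_xy[0], node_xy[1] - user_xy[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx)
    # Wrap the angular offset into [-pi, pi) before comparing.
    off = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(off) <= half_angle

def query_beam(user_xy, heading_rad, max_range, nodes):
    """nodes: dict node_id -> (x, y). Returns IDs inside the beam."""
    return [nid for nid, xy in nodes.items()
            if in_beam(user_xy, heading_rad, max_range, xy)]
```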

We envision using the Flashlight as a handheld or robot-held device that can help guide robots or human responders along relevant paths that might change dynamically over time, because the Flashlight can reconfigure the wireless sensor network in a patterned way. Furthermore, it can facilitate interaction with the network, both consuming and dispersing the network's stored information and changing and reacting to its topology. Finally, it lets the user "mark" geographic regions with information, enabling effective management of sensors and efficient message routing.

Figure 4. (a) An ad hoc network of robots and Mote sensors deployed in a burning building at the Allegheny Fire Academy, 23 August 2002 (from an experimental exercise involving Carnegie Mellon University, Dartmouth, and the University of Pennsylvania). (b) The temperature gradient graph collected using the ad hoc network of Mote sensors.

Figure 5. (a) The Flashlight prototype, (b) a Mote board, which holds the computer and communication system, and (c) a Mote sensor board, which holds the sensors.

We deployed 12 Mote sensors along corridors in our building and used the Flashlight and the communication infrastructure presented here to guide a human user out of the building (see Figure 6). A human in the building used the Flashlight to interact with the sensors to compute the next direction of movement toward the exit. For each interaction, the user did a rotation scan until the Flashlight pointed in the direction computed from the sensor data. The user then walked in that direction to the next sensor. Each time, we recorded the correct direction and the direction the Flashlight detected. Its directional error was eight percent (or 30 degrees) on average. However, because the corridors and office doorways are wide, and the sensors were sufficiently dense, it successfully identified the exit and never directed the user toward a blocked or wrong configuration.

Certainly, a device such as the Flashlight raises many practical and theoretical questions: How dense should the sensor network be for such applications, given the feedback accuracy? How can the Flashlight's functionality be integrated into mobile robots? It would also be useful to integrate multiple sensors and multiple directions into the Flashlight. These are some directions of ongoing and future work.

Network control

Robots use mobility to augment sensor networks' surveillance capabilities. We envision large, networked groups of robots and sensors operating in dynamic, resource-constrained, adversarial environments. The resource constraints come from the fact that each node might have to operate for relatively long periods without recharging and without human attention, while the adversarial conditions are typical of emergency response. Because there could be numerous agents, and because the system must be robust enough to add and delete sensors or vehicles, each agent must be anonymous. Furthermore, each node must operate asynchronously and be agnostic to who its neighbors are (in other words, the neighbors are interchangeable). There will be variations in dynamics from agent to agent. However, while individual agents might not exhibit performance guarantees, groups offer guarantees at the population level that algorithm designers must model and understand. Groups of this type will typically operate with little or no direct human supervision, and it will be difficult, if not impossible, to efficiently manage or control such groups through programming or teleoperation. Thus, managing such large groups will be extremely challenging and will require new, yet-to-be-developed methods of communication, control, computation, and sensing, specifically tailored to the control of autonomously functioning robot networks.

The robots used in our experiments are car-like robots equipped with omnidirectional cameras as their primary sensors.5 Communication among the robots relies on IEEE 802.11b networking. Using information from its camera system, each robot can estimate only its distance and bearing from its teammates. However, if two robots exchange information about their bearings to each other, they can also estimate their relative orientations. We use this idea to combine the information of a group of two or more robots to improve the group's knowledge about its relative positions. We can combine this information with radio tag information to get local or global position and orientation information.
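This mutual-bearing idea has a simple closed form. If robot A sees B at bearing α (in A's frame) and B sees A at bearing β (in B's frame), the ray from B to A appears at α + π in A's frame, so B's relative orientation is θ = α + π − β. A sketch in our notation:

```python
# Sketch: relative pose of robot B in robot A's frame from a range
# measurement plus the two robots' mutual bearing measurements.
import math

def relative_pose(range_ab, bearing_a_to_b, bearing_b_to_a):
    """Returns (x, y, theta) of B expressed in A's frame."""
    x = range_ab * math.cos(bearing_a_to_b)
    y = range_ab * math.sin(bearing_a_to_b)
    # The ray from B to A points at bearing_a_to_b + pi in A's frame;
    # B measures the same ray at bearing_b_to_a in its own frame.
    theta = bearing_a_to_b + math.pi - bearing_b_to_a
    theta = (theta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta
```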

We decompose the team’s motion intoa gross position (and orientation) descrip-tor g and its shape or distribution ρ. Wecan model g as an element of a Lie group(a differentiable manifold obeying thegroup properties and satisfying the addi-tional condition that the group opera-tions are differentiable), while ρ dependson the representation used for the shapespace. Previously, we’ve established themathematical framework for decentral-ized controllers that let a team of robotsachieve a desired position and shape tra-jectory.6 This work also showed how tocompose such deliberative motion con-trollers with reactive controllers that are

OCTOBER–DECEMBER 2004 PERVASIVEcomputing 31

XX

X

X

X X

X

X

X

XX

X

1

2(44, -23) 2(44, -23)

12(280, 518)

3(112, 14) 3(112, 14)

4(112, 84) 4(112, 84)

5(112, 126)6(168, 135) 5(112, 126)

6(168, 135)

7(203, 215)

8(203, 309)

9(203, 418) 10(280, 413)

11(203, 512)

(0, 0)X

X

X

X

X X

X

X

X

XX

X

1

12(280, 518)

7(203, 215)

8(203, 309)

9(203, 418) 10(280, 413)

11(203, 512)

(0, 0)

Figure 6. The floor map for the directional guidance experiment. Blue arrows indicatethe correct direction to be detected by the Flashlight; red arrows show Flashlight-computed feedback directions.

necessary to circumvent unexpectedobstacles. Other work discusses abstrac-tions and control laws that allow par-tial state information to be used effec-tively and in a scalable manner fornumerous units.7
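In symbols, the decomposition can be sketched as follows (our notation; the exact shape parameterization used in the cited work may differ):

```latex
% Sketch: team pose as a Lie-group element composed with a shape map.
\[
  g \;=\; \begin{pmatrix} R(\phi) & p \\ 0 & 1 \end{pmatrix} \in SE(2),
  \qquad
  q_i \;=\; g \, r_i(\rho), \quad i = 1, \dots, N,
\]
% q_i: position of robot i; (p, R(phi)): the team's gross translation
% and rotation; r_i(rho): robot i's place in the shape described by rho.
```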

In this example, we assume that a nominal map of the building is available and that a network of Mote temperature sensors has been deployed in the building. The nominal map lets the robots develop potential-field-based planners to navigate while achieving a desired shape. The ability to communicate with each other and with the Mote temperature sensors lets the team incrementally build a global temperature map.

In the absence of any significant temperature gradient detected by the sensors, the robots use a potential-field controller to navigate the building, traversing specified waypoints while maintaining formation (see Figure 7a). Because each robot communicates with the Mote sensors in its neighborhood, the robots each sense a different temperature and can thus establish gradient information. When this gradient is significant, the robots switch controllers and follow the temperature gradient toward the fire's source (see Figure 7b). Omnidirectional visible and IR cameras provide information back to one or more remotely located human responders. At any point, when commanded by a human, the robots can switch back to a designated goal, ignoring the temperature gradient (see Figure 7c).
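The switching behavior can be sketched as a simple arbiter (the thresholds, speeds, and override interface are our assumptions; the underlying controllers are the potential-field and formation controllers cited above):

```python
# Sketch of the behavior switch described above: follow waypoints via a
# potential-field goal term unless the sensed temperature gradient is
# significant, in which case climb the gradient toward the fire; a
# human command overrides both.
import numpy as np

GRADIENT_THRESHOLD = 5.0   # assumed significance threshold (deg/m)

def choose_velocity(pos, waypoint, temp_gradient, human_goal=None, speed=0.3):
    if human_goal is not None:                      # operator override
        direction = np.asarray(human_goal) - pos
    elif np.linalg.norm(temp_gradient) > GRADIENT_THRESHOLD:
        direction = np.asarray(temp_gradient)       # climb toward the fire
    else:
        direction = np.asarray(waypoint) - pos      # potential-field goal term
    norm = np.linalg.norm(direction)
    return speed * direction / norm if norm > 1e-9 else np.zeros(2)
```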

We can easily tailor these algorithms to adapt to the constraints that sensors (such as a limited field of view) and radios (such as a weak signal) impose. This lets robots stay within reasonable distances of each other so that they can function as a team. The algorithms also scale to large numbers of robots1 and lend themselves to a "swarm" paradigm for navigation and autonomous exploration.

Figure 7. Three robots switching motion plans in real time to get information from the building's hottest spot: (a) In the absence of any significant temperature gradient detected by the sensors, the robots use a potential-field controller to navigate the building, traversing specified waypoints while maintaining formation. (b) A network of Mote sensors distributed on the ground obtains a temperature gradient. (c) At any point, when commanded by a human, the robots can switch to a new goal.

Although this set of experiments didn't provide a complete demonstration, it did reveal component solutions that can enhance first responders' capabilities. The firefighters were quick to see the potential but also sensitized us to the extreme environments they must function in.

For example, we can't expect firefighters to carry anything substantial or to deploy devices carefully. Similarly, firefighters often must crawl to avoid the smoke and heat, which has implications for designing the displays and sensors they wear. Furthermore, even though floor plans of buildings in many US cities have been computerized and are available over the Internet, firefighters have no way of integrating this information for emergency response. Finally, firefighters and emergency response personnel who have used robotic devices report that robots are most useful as enablers for sensing. However, the need to remotely control each robot means that significant time and manpower are spent during an operation that demands speed and efficiency. This points to the need for autonomy in robot and sensor networks, and for the infrastructure to support network deployment.

Addressing these needs will require surmounting many challenges in mechanical design, control, sensing and estimation, and human-robot interaction, which is why first response is such a compelling and unifying application for pervasive computing. In our current work, we're developing scalable algorithms for large numbers of networked heterogeneous agents (some simple and specialized, some static, some mobile, some controllable, some human) that will have to coordinate in an environment that might be poorly known, dynamic, and obscured. Because each sensor by itself might not provide useful information, and might not even survive, the functionality stems from the network and its ability to adapt on the fly.

As part of our collaborative NSF ITR project, we're also examining the synergy between communication, control, and perception in support of such applications, and developing new algorithms for communication-assisted control, communication-assisted sensing, and control- and perception-assisted communication. We'll explore in greater depth how to use a communication network to control the mobility of the agents as a group and how each agent can use the network to infer its relative location and develop an integrated model of its environment. We're also exploring how the agents' ability to control their position and orientation can help sustain the communication network.

ACKNOWLEDGMENTS

We are grateful to George Kantor, Ron Peterson, Aveek Das, Guilherme Pereira, and John Spletzer for their contributions to developing and implementing this system. We thank the staff at the Allegheny Fire Academy for their help and most valuable advice. Support for this work has been provided in part by NSF grants IIS-0426838, CCR02-05336, and 0225446, ISTS ODP 2000-DT-CX-K001, Intel, and Boeing. Finally, we thank the reviewers for their careful reading and thoughtful suggestions.

REFERENCES

1. S. Thrun, W. Burgard, and D. Fox, "A Probabilistic Approach to Concurrent Mapping and Localization for Mobile Robots," Machine Learning, Apr. 1998, pp. 29–53.

2. G. Kantor, S. Singh, and D. Strelow, "Recent Results in Extensions to Simultaneous Localization and Mapping," Proc. 8th Int'l Symp. Experimental Robotics, Springer Tracts in Advanced Robotics, vol. 5, Springer-Verlag, 2003, pp. 210–221.

3. D. Kurth, G. Kantor, and S. Singh, "Experimental Results in Range-Only Localization with Radio," Proc. IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS 03), IEEE CS Press, 2003, pp. 974–979.

4. P. Corke, R. Peterson, and D. Rus, "Networked Robotics: Flying Robot Navigation with a Sensor Network," Proc. 2003 Int'l Symp. Robotics Research, Springer Tracts in Advanced Robotics, Springer-Verlag, 2003.

5. A. Das et al., "Distributed Search and Rescue with Robot and Sensor Teams," Proc. Field and Service Robotics (FSR 03), Springer-Verlag, to be published.

6. A. Das et al., "Vision-Based Formation Control of Multiple Robots," IEEE Trans. Robotics and Automation, vol. 18, no. 5, 2002, pp. 813–825.

7. C. Belta and V. Kumar, "Abstractions and Control Policies for a Swarm of Robots," IEEE Trans. Robotics and Automation, vol. 20, no. 5, 2004, pp. 865–875.



The Authors

Vijay Kumar is a professor and the deputy dean for research in the School of Engineering and Applied Science at the University of Pennsylvania. He also directs the GRASP Laboratory, a multidisciplinary robotics and perception laboratory. His research interests include robotics, dynamical systems, and multiagent coordination. He received his PhD in mechanical engineering from Ohio State University. He is a fellow of the American Society of Mechanical Engineers and a senior member of the IEEE and of Robotics International, Society of Manufacturing Engineers. Contact him at the Dept. of Mechanical Engineering & Applied Mechanics, Univ. of Pennsylvania, Room 111, Towne Bldg., 220 S. 33rd St., Philadelphia, PA 19104-6315; [email protected].

Daniela Rus is an associate professor in the Electrical Engineering and Computer Science department at MIT. Her research interests include distributed robotics, mobile computing, and self-organization. She received her PhD in computer science from Cornell University. Contact her at the MIT Computer Science and Artificial Intelligence Laboratory, The Stata Center, Bldg. 32-374, 32 Vassar St., Cambridge, MA 02139; [email protected].

Sanjiv Singh is an associate research professor at the Robotics Institute, Carnegie Mellon University. His recent work includes perception in natural and dynamic environments (developing autonomous ground and air vehicles that navigate with little or no a priori information, using onboard sensors) and multiagent coordination (developing teams of agents made of humans, embedded sensors, and robots to perform tasks more efficiently and reliably). He received his PhD in robotics from Carnegie Mellon. Contact him at the Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA 15213; [email protected]; www.frc.ri.cmu.edu/~ssingh.