
Mission Specification and Control for Unmanned Aerial and Ground

Vehicles for Indoor Target Discovery and Tracking

Patrick D. Ulam (a), Zsolt Kira* (a), Ronald C. Arkin (a), Thomas R. Collins (b)

(a) Mobile Robot Laboratory, Georgia Tech, TSRB S27, 85 Fifth St., Atlanta, GA, USA 30308
(b) GTRI / School of Electrical and Computer Engineering, Georgia Tech, Atlanta, GA, USA 30332

ABSTRACT

This paper describes ongoing research by Georgia Tech into the challenges of tasking and controlling heterogeneous teams of unmanned vehicles in mixed indoor/outdoor reconnaissance scenarios. We outline the tools and techniques necessary for an operator to specify, execute, and monitor such missions. The mission specification framework and the intelligence-gathering behaviors executed during the mission are first demonstrated in simulations involving a team of a single autonomous rotorcraft and three ground-based robotic platforms. Preliminary results including robotic hardware in the loop are also provided.

Keywords: Mission Specification, Microautonomous Systems, Indoor Target Tracking

1. INTRODUCTION

One of the most promising applications for microautonomous robotic systems (e.g., Figure 1) lies in the domain of distributed reconnaissance. These systems' potentially decentralized processing, innocuous size, distributed sensing capabilities, and low cost will afford tomorrow's soldier a powerful tool for situational awareness. This paper describes one such indoor/outdoor reconnaissance scenario designed and implemented as part of the Army Research Lab's (ARL) Micro Autonomous Systems and Technology (MAST) initiative [1].

While the scenario described is geared towards microautonomous systems, initial research has focused on larger surrogate platforms until the first microautonomous vehicles become available. In the scenarios that serve as the focus of this paper, an unmanned aerial vehicle is used first to scout the exterior of a target building, discover an entrance point, and then utilize that ingress to locate a target of interest. Once the target has been identified, the aerial vehicle then guides the team of ground vehicles into the building and into the proximity of the target of interest using a controlled formation. Finally, when contact has been made, the ground vehicles form a mobile, distributed sensor network suitable for intelligence gathering, including visual Simultaneous Localization and Mapping (SLAM) for reconnaissance.

The remainder of the paper is structured as follows. First, a review of related work on microautonomous robotic systems is presented. Then, an overview of the MissionLab robot mission specification system is provided, followed by the manner in which an operator may specify the particular reconnaissance scenario that serves as the focus of this paper within MissionLab. A simulation-based verification of this exemplar mission is covered in Section 5, while the description of the scenario running on hardware platforms appears in Section 6. Finally, we conclude by reviewing the major contributions of this paper.

2. RELATED WORK

A significant body of work has begun to accrue which explores the viability of microautonomous vehicles in various contexts. Much of this existing research examines microautonomous vehicles from a feasibility standpoint; that is to say, these avenues of research focus on the mechanical, electrical, and control requirements necessary for microautonomous vehicles. Examples include research into the physical characteristics necessary for microautonomous underwater vehicles [2], the power requirements of these vehicles [3], and the various approaches towards sensing [4][5] and control [6] of microautonomous robotic systems.

*[email protected]; phone 1 (404) 894-9311; http://www.cc.gatech.edu/ai/robot-lab/

Figure 1. Artist's rendition of a microautonomous robot inspired by a spider (Photo from BAE Systems).

As the technical challenges of deploying microautonomous vehicles on land, at sea, and in the air are identified and overcome, an increasing number of researchers have begun examining the space of potential applications and missions to which these vehicles may be suited. This paper serves as one such example. Others include Tester et al.'s research using teams of microautonomous underwater vehicles to monitor hydrographic features within shallow waters [7]. Davis et al., on the other hand, provide an early example of possible surveillance applications for microautonomous unmanned aerial vehicles [8]. While surveillance scenarios such as this one are a common mission type for microautonomous and traditional robotic systems, heterogeneous robot teams are less frequently involved in these scenarios. When heterogeneous teams of unmanned systems are deployed, one of the most common configurations is a combination of unmanned aerial vehicles and unmanned ground vehicles (e.g., [9]). In these scenarios, the unmanned aerial vehicles take advantage of their mobility and elevation to provide intelligence to the deployed ground vehicles and guide them to areas of interest. The scenario described in this paper takes a similar approach, in which the unmanned aerial vehicle identifies and tracks the target in service of a group of unmanned ground vehicles within the mission area.

3. SCENARIO

The remainder of this paper describes one potential indoor/outdoor reconnaissance mission in detail, including the behavior of the individual robots that participate in the mission, the behavior of the target of interest in this scenario, and details concerning the sensory processing that will be performed by the vehicles in the mission. Finally, both simulation results and hardware runs of the scenario are described.

As mentioned earlier, in the scenario described in this paper, an unmanned aerial vehicle is used to survey the exterior of an unknown building, discover a point of ingress, and then utilize that ingress to locate a target of interest. Potential ingress points for the ground vehicles are also located by the aerial vehicle. If and when the target has been identified, the aerial vehicle then informs the ground vehicles in proximity of the building of the target's location. A subset of the ground vehicles then proceeds to advance towards that position until contact with the target is made.

Figure 2. Environment for the experiment, with the three ground robots in place and the aerial vehicle perched.

Figure 3. Indoor environment that the scenario takes place in, including the partition separating the room into two parts.

The other ground vehicles also enter the building but explore the environment. Once visual contact has been made with the target, the ground vehicles spread out across the environment so as to generate a map of the building, utilizing distributed SLAM techniques being developed by our collaborators for similar computationally limited platforms [10].

The robots used in this scenario consist of a team of three platforms, each equipped with a color camera, Zigbee-based wireless communication, and a single forward-facing IR sensor. As such, these surrogate platforms are sensor-limited, as the real microautonomous robots will also be. In particular, the odometry available from the ground robots is as limited as that of a small crawler might be. In addition, a single autonomous rotorcraft equipped with a color camera and wireless communication serves as the target designator and tracker for the team. The scenario takes place inside and just adjacent to a building approximately 9 x 6 meters in size. A small open window accessible from the outside of the building serves as the entrance for the UAV so that it may observe the interior. In addition to this window, a freight-loading door leading into the building is partially open, allowing the unmanned ground vehicles to enter should the UAV identify anything of interest in the building. Note that in this preliminary implementation, the ground robots begin just inside the building and the aerial vehicle is perched just below a vent (Figure 2). Finally, the interior of the building is partially divided by a partition splitting the room into two sections (Figure 3).

4. SCENARIO IMPLEMENTATION

We now describe a preliminary implementation of the above scenario, tested in simulation as well as on real robots. Note that some of the elements, such as ingress detection by the aerial vehicle, remain to be implemented as the project progresses. In order to generate missions for each individual member of the team, MissionLab, a software toolset that allows a user to compose multi-robot missions through a graphical interface, was used [11]. MissionLab allows a commander to generate finite state acceptor (FSA) based mission plans that can be compiled and executed on each individual robot. In MissionLab's FSA-based mission representation, actions (behaviors) are represented as circles, while perceptual triggers (conditions for executing the next action) are denoted as arrows between behaviors. A mission consists of a sequence of these behaviors and the perceptual triggers that result in transitions between them. In this particular scenario, each of the robots is tasked with a slightly different mission. The remainder of this section describes the composition of the individual FSAs used in the scenario.
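
The behavior/trigger structure described above can be sketched in a few lines of Python. This is a toy illustration only, not MissionLab's compiled executable format or API; the class and trigger names are our own.

```python
# Minimal sketch of an FSA-based mission: behaviors are states,
# perceptual triggers decide transitions between them.

class FSA:
    def __init__(self, start):
        self.state = start
        self.transitions = {}  # state -> list of (trigger_fn, next_state)

    def add(self, state, trigger, next_state):
        self.transitions.setdefault(state, []).append((trigger, next_state))

    def step(self, percepts):
        """Fire the first trigger that matches the current percepts."""
        for trigger, nxt in self.transitions.get(self.state, []):
            if trigger(percepts):
                self.state = nxt
                break
        return self.state

# A toy version of the rotorcraft mission of Figure 4 (state names invented):
fsa = FSA("GoToWindow")
fsa.add("GoToWindow", lambda p: p.get("at_window"), "SearchForTarget")
fsa.add("SearchForTarget", lambda p: p.get("target_visible"), "TrackTarget")
fsa.add("TrackTarget", lambda p: not p.get("target_visible"), "SearchForTarget")

print(fsa.step({"at_window": True}))       # -> SearchForTarget
print(fsa.step({"target_visible": True}))  # -> TrackTarget
```

Note how losing the target from TrackTarget falls back to SearchForTarget, mirroring the re-acquisition loop in the rotorcraft mission.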

Figure 4 depicts the FSA used in the scenario to describe the unmanned rotorcraft's mission. The first stage instructs the vehicle to approach the window of the building and, once it arrives there, to alert the operator that it will begin to search for the target. At the window, the robot begins searching for the target by panning its camera (by rotating about its yaw axis). If the target is found, the robot reports acquisition to both the operator and the ground vehicles. Further, the position of the target is sent to the UGVs on the team so that they may be able to find the target upon entering the building. Finally, the mission specifies that if the rotorcraft ever loses track of the target, it should resume its search until the target is found again.

The general FSA specifying the mission for the unmanned ground vehicles in the scenario can be seen in Figure 5. While the mission specification for each UGV differs slightly, all three follow the same general plan, which consists of three independent phases. In the first phase, the unmanned ground vehicle waits near the entry point of the target building for the signal from the rotorcraft indicating that the target of interest has been identified within. Once the location of the target of interest has been transmitted to the UGVs, the second phase of the mission begins. The vehicles are then instructed to enter the building in a loose formation and begin traveling to the target while avoiding any obstacles in their path. This loose formation is the result of a repulsive sphere around each robot, to ensure that the robots remain a certain distance away from one another, coupled with an attractive sphere, to ensure that the robots do not stray too far from their teammates. This behavior creates a formation that is not dependent on the number of robots, and hence additional robots can be added as needed without modifying the mission.
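
The repulsive/attractive sphere pairing can be sketched as a simple potential-field sum. The radii and gain below are illustrative stand-ins, not the values used on the actual platforms:

```python
import math

def formation_force(me, teammates, r_min=1.0, r_max=3.0, gain=0.5):
    """Sum a repulsive push away from teammates closer than r_min and an
    attractive pull toward teammates farther than r_max. Teammates in the
    'dead band' between the two radii contribute nothing, which is what
    makes the formation loose and independent of team size."""
    fx = fy = 0.0
    for (tx, ty) in teammates:
        dx, dy = tx - me[0], ty - me[1]
        d = math.hypot(dx, dy)
        if d == 0:
            continue  # coincident robots: no defined direction
        ux, uy = dx / d, dy / d
        if d < r_min:            # too close: push away
            fx -= gain * (r_min - d) * ux
            fy -= gain * (r_min - d) * uy
        elif d > r_max:          # too far: pull back toward teammate
            fx += gain * (d - r_max) * ux
            fy += gain * (d - r_max) * uy
    return fx, fy

# A teammate 0.5 m ahead (inside r_min) produces a push backwards:
print(formation_force((0.0, 0.0), [(0.5, 0.0)]))  # -> (-0.25, 0.0)
```

Because the force is a sum over whoever happens to be nearby, adding a fourth or fifth robot changes nothing in the mission specification, as the paragraph above notes.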

The unmanned rotorcraft reports updates on the position of the target of interest to the ground vehicles to assist in maintaining the target's location in the event that the target is mobile. Once the robots have encountered the target, a report is sent to the commander and the third phase of the mission begins. This third phase consists of the UGVs spatially distributing themselves around the building, stopping at varying intervals and visually scanning the building in order to generate a map that may be of use to accompanying human ground forces. Note that, as mentioned in the previous section, a subset of the robots do not enter the second stage (movement in a formation towards the target); instead, they enter the building, begin the third stage directly, and wander around the environment in order to aid map-building. Note that all movement behaviors contain sub-behaviors responsible for obstacle avoidance.


Figure 4. Mission FSA used for the rotorcraft in the demonstration scenario. Initially the UAV travels to the opening in the building and then proceeds to identify and track the target of interest visually.

Figure 5. Typical mission for unmanned ground vehicles used in the demonstration scenario. The vehicles are first ordered to travel to the opening of the building. Upon receiving the position of the target of interest from the UAV, a subset of the ground vehicles travel in formation until they encounter the target. Once they have encountered the target, the team joins the other ground robots and explores the environment to conduct a visual survey in support of mapping.


5. SCENARIO SIMULATION

In order to evaluate the scenario outlined above, a simulation-based verification was conducted. This verification took place via the integration of two software packages. MissionLab was used to specify the mission as shown above and to run the robot executables (a compiled software program which realizes the mission specified in MissionLab for each robot). The Gazebo simulation platform [12] was used to simulate the motor and sensory capabilities of the platforms in a 3D physics-based simulation environment. The remainder of this section discusses a typical execution of the target scenario. A movie of the scenario described here can be found at [13].

Figures 6 to 8 depict a typical mission run in the Gazebo simulation environment. Figure 6 depicts the initial configuration of the robots in the scenario. The team of three UGVs initially waits at a partially open garage door leading to the building interior. The UAV is positioned north of a potential ingress to the building. As the mission begins, the UAV proceeds to the window in order to observe any potential targets inside. Once the UAV has detected the target, it reports its position to the UGV team, which then proceeds to enter the building and head towards the target along each of the two potential routes around the partition (Figure 7). The scenario enters its final phase when one of the UGVs makes contact with the target and reports that contact has been made. Upon receiving this report, the UGVs begin to distribute themselves around the environment in order to both track future movement of the target and to perform SLAM using their onboard cameras.


Figure 6. Initial scenario configuration within the MissionLab/Gazebo simulation environment. The team of UGVs stand ready at the entrance of the building while the UAV begins moving towards the window for an interior view.


Figure 7. The UAV has positioned itself at the window and has identified the target. The team of UGVs enter the building in a loose formation. One of the ground vehicles begins to move to the target, while the rest spread out and explore the environment. As the target moves, the UAV provides updates to the UGVs within the building. (The UAV is partially obscured by the wall due to the camera angle of the simulation.)

Figure 8. After encountering the target, all of the robots explore the environment in order to ensure the target may be tracked as well as to generate additional intelligence in the form of a map.


Figure 9. Left: The Skybotix CoaX autonomous helicopter used in the hardware based verification of the scenario (Photo from Skybotix). Right: Wowwee's Rovio telepresence platform (Photo from Wowwee).

Figure 10. A Cricket indoor localization sensor, developed by MIT and manufactured by Crossbow. Each listener mote determines its position based upon the positions of several beacon motes.

6. HARDWARE VERIFICATION

Once the simulation-based verification of the scenario had been conducted, we transitioned the mission onto hardware platforms. As discussed earlier, this scenario was developed in the context of the goals of the Micro Autonomous Systems and Technology initiative. As the first microautonomous platforms being developed were not yet available, we used larger surrogate platforms as the means of testing. The unmanned rotorcraft that serves in the target identification role was realized via the Skybotix Technologies CoaX platform (Figure 9). The CoaX platform is capable of fully autonomous or teleoperated flight and comes equipped with a color camera, inertial sensors, and sonar.

The platforms that serve as the surrogate ground-based microautonomous vehicles were Wowwee's Rovio telepresence robots (Figure 9). While the Rovio platform is equipped with a maneuverable color camera, the difficulty of detecting and navigating around obstacles using this non-stereo camera necessitated a precomputed map of all obstacles and walls, which is used only during this developmental stage (i.e., prior to integration of SLAM capability). This map was provided to each ground robot so as to compute 'virtual' obstacle locations. These virtual obstacles were then treated as sensed obstacles as appropriate given the robot's currently estimated position. In addition, the single forward-facing IR sensor was used for obstacle avoidance when the robots came too close to an obstacle.
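
The virtual-obstacle idea amounts to querying the precomputed map around the robot's estimated pose and re-expressing nearby obstacle points in the robot frame, as if a range sensor had reported them. A minimal sketch (the function name, map format, and sensor range are our own assumptions, not the actual implementation):

```python
import math

def virtual_obstacles(robot_pose, obstacle_map, sensor_range=2.0):
    """Return map obstacles within sensor_range of the robot's estimated
    pose, expressed in the robot frame, so they can be fed to the
    obstacle-avoidance behaviors as if they had been sensed.
    robot_pose = (x, y, yaw); obstacle_map = list of (x, y) points."""
    x, y, yaw = robot_pose
    seen = []
    for (ox, oy) in obstacle_map:
        dx, dy = ox - x, oy - y
        if math.hypot(dx, dy) <= sensor_range:
            # rotate the world-frame offset into the robot frame
            rx = math.cos(-yaw) * dx - math.sin(-yaw) * dy
            ry = math.sin(-yaw) * dx + math.cos(-yaw) * dy
            seen.append((rx, ry))
    return seen

# A robot at the origin facing +x 'sees' only the obstacle 1 m ahead:
print(virtual_obstacles((0.0, 0.0, 0.0), [(1.0, 0.0), (10.0, 0.0)]))
```

Note that the quality of these virtual readings is only as good as the pose estimate, which is why the forward-facing IR sensor remains in the loop as a last-resort real sensor.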

To assist in pose estimation and to provide a means of computing ground-truth localization for evaluation of visual SLAM mapping performance, each Rovio was also equipped with the Cricket indoor localization sensor, manufactured by Crossbow (Figure 10). Listener Cricket motes compute their current position based upon active radio and sonar pings generated by groups of beacon motes mounted around the building. These listeners estimate their position based upon the known distances to three or more of these beacons. In addition, when only two beacons were available, prior altitude information was used to constrain the solution and obtain new pose information. This position estimate is then fused with the Rovio's odometry using an extended Kalman filter and logged to provide ground-truth positioning for map quality estimation.
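
The three-beacon case reduces to classic trilateration: subtracting the first beacon's sphere equation from the others yields a linear system in the unknown position. The following planar sketch illustrates the geometry a listener relies on; it is not Crossbow's firmware, and the real system works in three dimensions with the altitude constraint noted above:

```python
def trilaterate(b1, b2, b3, r1, r2, r3):
    """Solve for (x, y) from ranges r1..r3 to three known beacon positions
    b1..b3 by subtracting sphere equations to get a 2x2 linear system,
    solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("beacons are collinear; position not recoverable")
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

# A listener at (1, 1) measuring exact ranges to three corner beacons:
print(trilaterate((0, 0), (4, 0), (0, 4), 2**0.5, 10**0.5, 10**0.5))
```

With more than three beacons the same construction becomes an overdetermined linear system, solved in the least-squares sense, which is why additional beacons improve rather than complicate the fix.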

Note that the Cricket-based localization system provides three-dimensional position information but does not provide yaw, pitch, or roll. In order to estimate yaw, consecutive Cricket positions were averaged over two time windows and the angle between the average positions was calculated. If the resulting two average positions were far enough apart (according to an empirically derived threshold), then the yaw information was added to the Cricket readings and incorporated into the Kalman filter. The target of interest in this scenario was realized via a separate human-controlled Rovio platform with a brightly colored object placed on it. Finally, target identification on both the Rovio and CoaX platforms was conducted via color blob tracking implemented using OpenCV, an open-source computer vision library [14].
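
The yaw-estimation step described above can be sketched directly. The window length and separation threshold below are illustrative placeholders for the paper's empirically derived values:

```python
import math

def estimate_yaw(positions, window=5, min_separation=0.3):
    """Average Cricket fixes over two consecutive windows and take the
    heading between the two means; reject the estimate when the means are
    too close together to give a reliable direction of travel.
    positions = chronological list of (x, y) fixes."""
    if len(positions) < 2 * window:
        return None  # not enough history yet
    old = positions[-2 * window:-window]
    new = positions[-window:]
    ox = sum(p[0] for p in old) / window
    oy = sum(p[1] for p in old) / window
    nx = sum(p[0] for p in new) / window
    ny = sum(p[1] for p in new) / window
    if math.hypot(nx - ox, ny - oy) < min_separation:
        return None  # robot barely moved: heading would be mostly noise
    return math.atan2(ny - oy, nx - ox)

# A robot driving steadily along +x yields a yaw of 0 radians:
print(estimate_yaw([(0.2 * i, 0.0) for i in range(10)]))
```

Averaging before differencing is what makes this usable: individual Cricket fixes are noisy, but the angle between two window means is far more stable, and the separation gate discards estimates taken while the robot is stationary.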

Figures 11 to 13 show one example of the hardware-based runs of the reconnaissance scenario. A video of the hardware-based run can be found at [15]. Each of the figures shows the scenario at points in time analogous to the simulation shown in Figures 6 to 8. In the hardware-based runs of the scenario, both the team of Rovios and the CoaX begin outside of the building (Figure 11). Similar to the simulation of the scenario, the mission begins with the CoaX successfully identifying the target via the building's window; the CoaX then provides a constant stream of target status updates as the target moves about the building (Figure 12). The Rovio platforms use these position updates to navigate into the building, and a subset of the robots travel to the target. As in the simulation, once the target has been reached, all of the UGVs explore the environment to collect camera imagery suitable for map building (Figure 13).


Figure 11. Initial scenario configuration of the real robot run. The team of UGVs stand ready just inside the building while the UAV (off-camera) attempts to track the target.


Figure 12. The UAV has identified the target, and the team of UGVs begins their mission. One of the ground vehicles begins to move to the target, while the rest explore the environment. As the target moves, the UAV provides updates to the UGVs within the building. (The UAV is outside of the frame toward the upper left.)

Figure 13. The robot arrives at the target. After encountering the target, all of the robots further explore the environment in order to ensure the target may be continuously tracked in addition to generating additional intelligence in the form of a map.


7. CONCLUSIONS

The use of microautonomous vehicles in surveillance scenarios is one of the most promising application domains for these platforms. Significant research must be done, however, in order to measure the effectiveness of these platforms in such coordinated teaming scenarios. We have presented one such indoor/outdoor surveillance scenario utilizing both autonomous unmanned ground and unmanned aerial vehicles. We described how such a mission can be specified in the MissionLab multi-robot specification environment. This scenario was first demonstrated in simulation to verify operator intent. Once the target scenario was operating appropriately in simulation, a hardware-based verification was successfully demonstrated. This particular mission is only an example of the large potential space of missions that can be created and tested using the software tools described herein (more or fewer robots, different building layouts, etc.).

Future work will involve the complete implementation of the scenario, including ingress detection, utilization of additional ground robots that will flock towards the target, and integration with distributed visual SLAM. Beyond the current scenario, future work will also look at additional domains in which the promise of microautonomous vehicles will be most beneficial.

8. ACKNOWLEDGMENTS

The work in this paper was funded under the U.S. Army Research Laboratory Micro Autonomous Systems and Technology Collaborative Technology Alliance project (BAE #329420).

REFERENCES

[1] Beekman, D.W., Mait, J.N., Doligalski, T.L., "Micro Autonomous Systems and Technology at the Army Research Laboratory," in Proceedings of the 2008 Aerospace and Electronics Conference (NAECON), pp. 159-162, July 2008.
[2] Walker, D., "Micro Autonomous Underwater Vehicle Concept for Distributed Data Collection," in Proceedings of OCEANS 2006, pp. 1-4, September 2006.
[3] Naravan, S.R., Kisor, A., Valdez, T.I., Manohara, H., "Power sources for micro-autonomous vehicles: challenges and prospects," in Proceedings of SPIE Micro- and Nanotechnology Sensors, Systems, and Applications, May 2009.
[4] Fontana, R.J., Richley, E.A., Marzullo, A.J., Beard, L.C., Mulloy, R.W.T., Knight, E.J., "An ultra wideband radar for micro air vehicle applications," in Proceedings of the 2002 IEEE Conference on Ultra Wideband Systems and Technologies, pp. 187-191, 2002.
[5] Kanade, T., Amidi, O., Ke, Q., "Real-time and 3D vision for autonomous small and micro air vehicles," in Proceedings of the 43rd IEEE Conference on Decision and Control, pp. 1655-1662, December 2004.
[6] Bouabdallah, S., Noth, A., Siegwart, R., "PID vs LQ control techniques applied to an indoor micro quadrotor," in Proceedings of the 2004 IEEE International Conference on Intelligent Robots and Systems, pp. 2451-2456, September 2004.
[7] Tester, P.A., Kibler, S.R., Hobson, B., Litaker, R.W., "A test of an autonomous underwater vehicle as a monitoring tool in shallow water," African Journal of Marine Science, Vol. 28, No. 2, pp. 251-255, September 2006.
[8] Davis, W.R., Jr., Kosicki, B.B., Boroson, D.M., Kostishack, D.F., "Micro Air Vehicles for Optical Surveillance," The Lincoln Laboratory Journal, Vol. 9, No. 2, pp. 197-213, 1996.
[9] Sauter, J.A., Matthews, R.S., Robinson, J.S., Moody, J., Riddle, S., "Swarming Unmanned Air and Ground Systems for Surveillance and Base Protection," in Proceedings of the AIAA Infotech@Aerospace Conference, April 2009.
[10] Dellaert, F., Kipp, A., Krauthausen, P., "A Multifrontal QR Factorization Approach to Distributed Inference Applied to Multirobot Localization and Mapping," in Proceedings of the 22nd AAAI National Conference on AI, pp. 1261-1266, 2005.
[11] MacKenzie, D., Arkin, R., and Cameron, J., "Multiagent mission specification and execution," Autonomous Robots, Vol. 4, No. 1, pp. 29-52, 1997.
[12] Kumar, V., Fink, J., Collins, T., Mostofi, Y., Sadler, B., "A simulation environment for modeling and development of algorithms for ensembles of mobile microsystems," SPIE DSS09 (Micro-Nanotechnology Sensors, Systems, and Applications), 2009.
[13] http://www.cc.gatech.edu/ai/robot-lab/mast/SimulationDemo_3Robot.avi
[14] Bradski, G., "The OpenCV Library," Dr. Dobb's Journal, Vol. 25, No. 11, pp. 120-126, 2000.
[15] http://www.cc.gatech.edu/ai/robot-lab/mast/HardwareDemo_3Robot.avi