
Toward the Automatic Detection of Access Holes in Disaster Rubble

Christopher Kong, Alex Ferworn, Jimmy Tran, Scott Herman, Elliott Coleshill and Konstantinos G. Derpanis
Department of Computer Science

Ryerson University
Toronto, Ontario, Canada

{c5kong, aferworn, q2tran, scott.herman, elliott.coleshill, kosta}@scs.ryerson.ca

Abstract—The collapse of buildings and other structures in heavily populated areas often results in multiple human victims becoming trapped within the resulting rubble. This rubble is often unstable, difficult to traverse and dangerous for first responders who are tasked with finding and extricating victims through access holes in the rubble. Recent work in scene mapping and reconstruction using RGB-D data collected by unmanned aerial vehicles (UAVs) suggests the possibility of automatically identifying potential access holes into the interior of rubble. This capability would allow critical, limited search capacity to be concentrated in areas where potential access holes can be verified as useful entry points. In this paper, we present a system to automatically identify access holes in rubble. Our investigation begins by defining a hole in terms of its functionality as a potential means for accessing the interior of rubble. From this definition, we propose a set of discriminative geometric and photometric features to detect “access holes”. We conducted experiments using RGB-D data collected over several disaster training facilities using a UAV. Our empirical evaluation indicates the potential of the proposed approach for successfully identifying access holes in disaster rubble scenes.

I. INTRODUCTION

Disasters involving collapsed buildings in urban areas occur for a variety of reasons. Due to the increased population density in these areas, the likelihood of humans becoming trapped in the resultant building rubble is high. In response to these events, organized teams of Urban Search and Rescue (USAR) personnel are deployed to locate and extract victims, build support for unstable structures (i.e., shoring), and provide medical care [1].

When rescue personnel perform triage on a collapsed structure, they first determine areas that are likely to contain trapped victims and then determine how to access the structure's interior. If access holes already exist, these will be evaluated before rubble removal is considered, to save time, reduce the chances of creating secondary collapses and ultimately save lives [1]. Figure 1 shows an access hole that potentially leads into the rubble interior.

Fig. 1. An image of a rubble field from our introduced dataset. The “access hole” is highlighted by the red bounding box.

The terms “hole” and “access hole” are not clearly defined within the USAR nomenclature. The challenge lies in the amorphous nature of holes (e.g., the lack of standard dimensions and depth); thus, a definition is left open to the interpretation of each search team. To compound the difficulty of this problem, disaster rubble typically contains many irregularities within the rubble pile. For instance, inconsistency in the size, shape and types of the material involved all affect what is and is not considered to be a candidate entry “hole”. While we are looking for “holes” within the collapsed structures, functionally we are interested in “access holes”. We can define an access hole through its intended use. In the context of our work, we wish to find holes that are sufficiently large to permit human entry. In this way, the function of an access hole is defined in the context of its utility for this purpose. This is analogous to the functional object recognition paradigm pursued in computer vision [2], which defines objects, such as chairs, in terms of their function, i.e., the ability to support a human, rather than their appearance.

The instability of rubble can prevent USAR teams from safely traversing it to find access holes. This situation has led researchers to investigate ways to minimize risk to rescue workers through the use of unmanned vehicles [3] that can be used to find such holes. Our previous work [4] demonstrated the ability to equip a UAV with a low-cost, off-the-shelf RGB-Depth sensor (e.g., the Microsoft Kinect [5]) and capture critical disaster-scene information from a safe distance and an alternative perspective. This information can then be used to generate a scene-level model that can quickly provide first responders with important details about the structure of the rubble [6].

In addition to the challenges posed by handling large amounts of data, traumatic events such as a disaster or building collapse can overwhelm an individual. This can lead to critical incident stress that impairs personnel's ability to function and perform tasks involving detailed observation [1]. We argue that a system that automates the identification of access holes reduces the cognitive fatigue faced by response personnel.

978-1-4799-0880-6/13/$31.00 © 2012 IEEE

Fig. 2. Raw RGB and depth images are obtained by a UAV outfitted with a Kinect sensor.

Current approaches for identifying access holes rely on manual visual inspection by first responders. A large disaster field would quickly tax the attention of a trained human observer. Figure 2 shows representative imagery captured by a UAV, used as input in our approach: photometric RGB and depth. A dataset collected from a scan area of 25 m² typically requires at least 6000 images each of RGB and depth (data recorded at 30 fps over 2-3 mins). While a human operator is capable of manually inspecting each image for access holes, a large disaster field would make this task extremely difficult and time consuming.

The long-term goal of our system is to perform real-time, on-board analysis while raw image data is being gathered by a UAV. When an access hole is detected, its geographic coordinates will be provided by the UAV and transmitted wirelessly to ground teams to flag the location for further investigation. This information will then be combined with a physics model [6] that allows USAR teams to virtually inspect the collapsed structure in that area and to manipulate the model before committing resources to the rubble.

In USAR research, characterizing rubble is a difficult problem. In particular, recreating physical rubble models with characteristics similar to those in real environments has proven to be quite ambitious [7] and, in our opinion, unlikely to lead to better operational techniques for finding trapped people faster. Motivated by this finding, our work on identifying access holes addresses a sub-problem: we concentrate not on the identification of rubble but rather on identifying its absence.

In this paper, we examine the case of holes leading into a subsurface void. We deliberately simplify the task to demonstrate a proof of concept. We assume an access hole possesses clearly marked edges and a strong depth change from the surrounding area. In addition, a potential access hole must possess a minimum width and aspect ratio to accommodate the entry of an adult human searcher. While the focus of the current paper is on human searchers, other types of search entities, such as canines or robots, are trivially accommodated.

A. Contributions

This paper makes three main contributions. First, we define an access hole based on a set of features derived from the photometric appearance and functional characteristics of a hole required to enter a collapsed structure. Second, we develop an approach to identify access holes in collapsed structures for use by USAR personnel: we perform analysis on the aerial imagery obtained by a UAV outfitted with an RGB-Depth sensor to identify candidate access holes. Third, we introduce a publicly available dataset obtained from real-world USAR training areas, where the access holes are manually identified. Quantitative empirical evaluation on the introduced dataset indicates the potential of the proposed approach for successfully identifying access holes in disaster rubble scenes.

II. RELATED WORK

While existing research has addressed autonomous navigation within collapsed structures and voids, to the best of our knowledge there is no previous work that autonomously detects and localizes access holes for searcher insertion.

Rescue robots benefit USAR operations by providing robust platforms to carry sensors, collect data and deliver supplies to trapped victims [8]. Autonomous navigation within void interiors presents opportunities to locate trapped victims. While robots have been equipped with sensor arrays for mapping [9], this approach does not attempt to locate access holes for insertion into rubble.

Work has been carried out using autonomous vehicles for detecting subsurface voids in mining operations [10]; however, this approach requires a mobile platform to traverse the area of inspection and has not been applied to USAR scenarios.

Ground robots are limited in the areas they are able to successfully traverse, since terrain features of rubble can adversely impact locomotion [11]. Similar work has addressed the problem of detecting “gaps” in terrain for the safe navigation of ground robots in the USAR domain [12]. In contrast, the current work addresses suitability for inserting human searchers, canines and robots.

Using a UAV for USAR allows rescue personnel to survey areas that would not ordinarily be accessible and to view the terrain from perspectives unattainable by terrestrial robots [13]. UAVs have been equipped with an RGB-D sensor to collect data for both terrain mapping and 3D scene reconstruction purposes [4, 14].

The Gareki Analyzer tool [14] produces a 3D model of a building to simulate collapse and create an “empty-space” network within rubble. This network is used to find optimal paths for rescue robots; however, the estimated paths within the simulation are unlikely to be similar to an actual collapse interior because each collapse process is treated as probabilistic. This technique requires a pre-existing 3D model, cannot reproduce path results, and gives no indication of how access holes are found for insertion into these paths.

III. TECHNICAL APPROACH

A. Definition of an Access Hole

Before we can develop a system that detects access holes, we first require an operational definition of the concept. In this paper, an “access hole” (or “hole”, for short) is defined by the function of its usefulness for access into a collapsed structure. A hole must be deeper in the interior of a collapse than the surrounding terrain. To be useful, a hole must be large enough to be entered by a searcher, such as a human, canine or rescue robot. In the remainder of this paper, a searcher is assumed to be an adult human. We have identified three attributes that characterize a hole and allow us to perform detection: (i) depth disparity, (ii) hole size and (iii) photometric brightness.

Fig. 3. Data flow for our algorithm. Starting with a depth image and a corresponding registered RGB image, the depth image is oversegmented and the average depth for each resulting region is computed. We treat the segments as an undirected graph and discover the connections for each region. For each region, we determine a set of feature scores that are used to calculate a final detection score.

B. Hole Detection

The input to our approach is a pair of images captured by a Microsoft Kinect RGB-D sensor. The Kinect provides two images, colour (RGB) and depth, both at a resolution of 640 × 480 pixels. The two images are registered such that they have a one-to-one pixel mapping. To perform detection, the proposed approach segments the inputs into regions of interest, i.e., superpixels [15]. Details of the segmentation are described below. Feature scores are calculated and candidate regions are assigned a final detection score. Figure 3 summarizes the data processing flow for our proposed approach to access hole detection.

Depth disparity: Images of rubble scenes are typically extremely cluttered and unstructured. As a result, direct figure-ground segmentation is not possible. Fortunately, the Kinect provides an estimate of metric depth. We exploit this depth information to segment the image into regions along boundaries that exhibit a strong depth gradient. We employ the Entropy Rate Superpixel (ERS) algorithm [15] to partition an image into a set of compact regions where the local pixel information is roughly uniform. An inappropriate number of partitions results in a contiguous object or surface being either under-segmented or over-segmented. While an access hole may be composed of multiple partitions (i.e., over-segmented), we assume that the boundaries of an access hole coincide with the boundaries of a subset of superpixels. After segmentation, we determine the average metric distance of each superpixel from the sensor, denoted avgDepth.
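The per-region averaging step above can be sketched as follows. This is a minimal Python/NumPy sketch, not the paper's implementation: the function name and the convention that a depth of 0 marks a missing Kinect reading are our assumptions.

```python
import numpy as np

def average_region_depths(depth_m, labels):
    """Mean metric depth (avgDepth) per superpixel region.

    depth_m: H x W array of metric depths; 0 marks missing readings (assumed).
    labels:  H x W integer label map from the oversegmentation.
    Returns a dict mapping region id -> average depth (NaN if no valid pixels).
    """
    averages = {}
    for region_id in np.unique(labels):
        region = depth_m[labels == region_id]
        valid = region[region > 0]  # drop pixels with no depth estimate
        averages[int(region_id)] = float(valid.mean()) if valid.size else float("nan")
    return averages
```

In practice, `labels` would come from the ERS oversegmentation; here any integer label map works.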

The absolute depth value of a region does not determine whether the region is a hole; a hole by definition must be deeper than its surrounding terrain. As such, we are actually interested in the depth discontinuity between adjacent regions. For each superpixel, we build an adjacency graph to obtain a list of its neighbouring regions. The results of superpixel extraction and neighbourhood discovery are shown in Fig. 4. Once the connections have been discovered, we compare each superpixel's average depth against those of its connected neighbours. Superpixels that are local maxima in average depth relative to their neighbours serve as access hole candidates for scoring.

Rubble scenes are inherently composed of uneven terrain. This means that there may be many regions that correspond to a local depth maximum but are not access holes. To avoid false detections, we further evaluate hole candidates by their relative depth: the deeper the hole, the more likely it is an access hole. For every superpixel, a relative depth score, dS, is calculated by comparing its average depth to that of its nearest neighbour. A score of 1 is assigned for any relative depth greater than 1.1 m and 0 for any relative depth lower than 0.2 m.
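The adjacency graph and relative depth score might look as follows. Two details are our assumptions, not stated in the text: the linear ramp between the 0.2 m and 1.1 m endpoints, and taking the shallowest connected neighbour as the reference depth.

```python
import numpy as np

def region_adjacency(labels):
    """4-connected adjacency between superpixel labels (undirected graph)."""
    adjacency = {int(r): set() for r in np.unique(labels)}
    # Label pairs on horizontally and vertically adjacent pixels.
    pairs = np.concatenate([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1),
    ])
    for a, b in pairs:
        if a != b:
            adjacency[int(a)].add(int(b))
            adjacency[int(b)].add(int(a))
    return adjacency

def relative_depth_score(avg_depth, neighbour_depths, lo=0.2, hi=1.1):
    """dS: 0 below 0.2 m of relative depth, 1 above 1.1 m, linear ramp between
    (the ramp and the shallowest-neighbour reference are assumptions)."""
    relative = avg_depth - min(neighbour_depths)
    return float(np.clip((relative - lo) / (hi - lo), 0.0, 1.0))
```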

Hole size: An access hole must have an appropriate size for the insertion (i.e., its function) of rescue personnel or similarly sized entities (canines or rescue robots).

We calculate the size of a region using two properties: (i) width and (ii) aspect ratio. The width of the region is determined by taking the shorter of the major and minor axes of a bounding box around the superpixel region. We have determined in our investigation that a valid access hole occupies a minimum area of approximately 7% of the 640 × 480 image. To minimize missed detections of holes that may be partially occluded or obfuscated, we assign a linear score, denoted wS. A score of 1 is assigned if the minimum width is greater than 150 pixels and 0 to any width below 35 pixels.

In addition, to limit the candidacy of holes that may be thin and curvilinear, we apply a score for aspect ratio. The aspect ratio score is assigned linearly by calculating the ratio of the superpixel's area to that of its bounding box. The higher the percentage of the bounding box occupied, the better the candidacy of the detected region. A linear aspect ratio score, aRS, between 0 and 1 is assigned to regions that occupy a minimum of 35% of their bounding box.
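The two size scores can be sketched as below. The 35/150-pixel width endpoints and the 35% fill threshold come from the text; the exact shape of the ramps between the endpoints, and the use of an axis-aligned bounding box, are our assumptions.

```python
import numpy as np

def linear_score(value, lo, hi):
    """0 at/below lo, 1 at/above hi, linear ramp in between."""
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

def size_scores(mask):
    """Width score wS and aspect-ratio score aRS for a boolean region mask.

    Width is the shorter side of the region's (axis-aligned) bounding box;
    fill is region area over bounding-box area.
    """
    ys, xs = np.nonzero(mask)
    box_h = int(ys.max() - ys.min() + 1)
    box_w = int(xs.max() - xs.min() + 1)
    width = min(box_h, box_w)
    fill = mask.sum() / float(box_h * box_w)
    return linear_score(width, 35, 150), linear_score(fill, 0.35, 1.0)
```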

Fig. 4. Outline of the process flow for segmenting a depth image and determining the neighbours of a particular superpixel. (left) Raw depth image, (middle) superpixel segmentation and (right) a superpixel highlighted in white with its surrounding neighbours.

Photometric appearance: Examining the depth image alone does not provide sufficient discriminatory information about a hole. To account for this uncertainty, we incorporate photometric information (i.e., brightness intensity) from the RGB image. In particular, an access hole will likely be poorly illuminated and thus appear darker in the RGB image. To capture this attribute, we apply the mask of the superpixel detected in the depth frame to the grayscale version of the RGB frame. We calculate the average intensity, avgRGBInt, within the masked region. The pixel values range from 0 to 255, where a higher value indicates a brighter region. We apply a linear score, cS, ranging between 0 and 1; a higher score is assigned to regions with lower pixel intensity.
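A minimal sketch of the brightness score. The simple inversion of the normalized mean intensity is our assumed mapping; the text specifies only that the score is linear and that darker regions score higher.

```python
import numpy as np

def brightness_score(gray, mask):
    """cS: darker superpixels score higher.

    gray: 8-bit grayscale image (values 0-255); mask: boolean superpixel mask.
    The mapping 1 - avgRGBInt/255 is an assumption consistent with the paper.
    """
    avg_intensity = float(gray[mask].mean())  # avgRGBInt over the region
    return 1.0 - avg_intensity / 255.0
```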

Detection Score Calculation: Each detected region is assigned a final detection score, DS, between 0 and 1, computed by weighting the feature scores and summing the results:

DS = (w0 · dS) + (w1 · wS) + (w2 · aRS) + (w3 · cS), (1)

where w0 = 0.5, w1 = 0.2, w2 = 0.15 and w3 = 0.15 are determined empirically. The detection algorithm produces a list of spatial bounding boxes for each frame. The region encapsulated within a bounding box represents a detected candidate access hole. The final output of our approach consists of the coordinates of the bounding boxes and their corresponding detection scores. All weights and thresholds used to realize the various feature scores were empirically determined and fixed for all experiments.
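Eq. (1) with the stated weights reduces to a one-line combination of the four feature scores:

```python
def detection_score(dS, wS, aRS, cS, weights=(0.5, 0.2, 0.15, 0.15)):
    """Final detection score DS (Eq. 1): weighted sum of the four feature
    scores, using the empirically determined weights from the paper."""
    w0, w1, w2, w3 = weights
    return w0 * dS + w1 * wS + w2 * aRS + w3 * cS
```

Since the weights sum to 1 and each feature score lies in [0, 1], DS is guaranteed to lie in [0, 1].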

IV. EVALUATION

A. Dataset

The performance of our proposed access hole detection approach has been evaluated on a novel, challenging dataset containing images of rubble scenes. A detection is represented as a (rectilinear) bounding box that spatially outlines the candidate access hole.

The dataset comprises 265 image pairs (a depth image and the corresponding registered RGB image) at a resolution of 640 × 480, collected from various USAR training sites: the Ontario Provincial Police UCRT reference rubble pile [17] located in Bolton, Ontario, Canada, and “Disaster City” [18], located on the grounds of Texas A&M University in College Station, Texas, USA. Out of this set, 63 image pairs contain 20 unique holes that meet our definition of an access hole. Positive images consist of various access holes leading to subsurface voids used during the training of USAR teams within a rubble pile. Figure 5 shows a sample subset of the data used for our evaluation. The dataset is publicly available at: http://ncart.scs.ryerson.ca/research/access-hole-detection/.

B. Full Dataset Evaluation

Fig. 5. (A-C) A set of positive images showing clearly defined access holes. (D and E) A scene captured in direct sunlight and at dusk, respectively. The excessive missing depth data (denoted by black) is due to direct sunlight.

We ran our detection approach against the full dataset. To segment each image, 10 superpixels were used. Figure 6 shows sample detection outputs. To quantitatively evaluate our approach, we use Precision-Recall (P-R), a standard evaluation tool in information retrieval [19]. The P-R curve captures the trade-off between accuracy and noise as the detection threshold is varied. Precision denotes the number of correctly detected holes over the total number of detections,

Precision = TP/(TP + FP), (2)

where TP denotes the number of true positives (i.e., correctly detected holes) and FP denotes the number of false positives, i.e., detections where no hole is present. Recall is the fraction of true positives that are detected rather than missed:

Recall = TP/nP, (3)

where nP is the total number of positives present in the dataset. A detection is considered a true positive if it has a spatial overlap greater than 50% with the hand-labelled ground truth.
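Eqs. (2) and (3) and the matching criterion can be sketched as follows. Interpreting "spatial overlap" as intersection-over-union is our assumption; the paper does not specify the exact overlap measure.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes; using IoU for the
    paper's 'spatial overlap greater than 50%' criterion is an assumption."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def precision_recall(tp, fp, n_positives):
    """Eq. (2) Precision = TP/(TP+FP) and Eq. (3) Recall = TP/nP."""
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return precision, tp / n_positives
```

For example, the top row of Table I (TP = 28, FP = 26, nP = 32) yields a precision of about 0.519 and a recall of 0.875.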

Fig. 6. Sample output of our access hole detection approach. (left) Input RGB image, (middle) superpixel segmentation with the ground-truth label given in red and (right) detected regions given in green. The first two rows (A, B) represent successful detections and the last row (C) a successful detection with three false detections.

Figure 7 shows the P-R curve obtained by evaluating the detection algorithm over the entire dataset. Using the full dataset, we achieved a maximum recall rate of 51%. Further investigation of the detections found that many missed detections were the result of missing depth values in regions of the image, a limitation directly attributable to our visual sensor. As previously discussed [4], direct sunlight corrupts depth estimates. We attempted to account for this by collecting data at dusk; however, a portion of the images were captured with some degree of direct sunlight. Figure 5 (D) shows an image pair containing excessive missing depth data.

To restrict our evaluation to images with adequate depth data, we manually inspected the data and removed images containing significant invalid depth information. The resulting subset contains 235 images, of which 32 contain holes.

C. Subset Dataset Evaluation

The second experiment follows the same evaluation protocol as the first. The results are shown in Fig. 8 and Table I. Notably, there is a significant increase in recall (87.5%). These are favourable results and show a large improvement over the experiment utilizing the full dataset. In general, the results achieve significantly better performance, with precision never dropping below 50%.

V. DISCUSSION AND SUMMARY

To the best of our knowledge, we are the first to present an automated system for the detection of access holes in rubble. We have shown promising results for detecting access holes using both photometric and functional attributes relevant to the insertion of search personnel into holes within rubble.

A current limitation of our approach is the need for depth image data containing only moderate to small amounts of missing or erroneous measurements. In this study, we manually filtered such images to conduct our evaluation. In the future, we plan to investigate ways of improving the depth data, such as integrating the depth estimates over time.

Fig. 7. Precision-Recall curve over the entire dataset.

Fig. 8. Precision-Recall curve for the dataset without images containing significantly corrupted depth data.

TABLE I. ACCESS HOLE DETECTION EVALUATION

DS     Recall   Precision   True Positives   False Positives   False Negatives
0.0    0.875    0.519       28               26                4
0.1    0.875    0.519       28               26                4
0.2    0.875    0.538       28               24                4
0.3    0.813    0.650       26               14                6
0.4    0.188    0.600       6                4                 26
0.5    0.125    0.571       4                3                 28
0.6    0.094    0.600       3                2                 29
0.7    0.094    0.600       3                2                 29
0.8    0.094    0.600       3                2                 29
0.9    0.000    1.000       0                0                 32
1.0    0.000    1.000       0                0                 32
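The Recall and Precision columns of Table I follow directly from the count columns, with nP = 32 positives in the subset. A quick consistency check over a row (the three-decimal rounding tolerance is ours):

```python
def row_consistent(row, n_positives=32, tol=5e-4):
    """Verify one Table I row: recall = TP/nP and precision = TP/(TP+FP),
    with both values rounded to three decimals in the table."""
    recall, precision, tp, fp, fn = row
    assert tp + fn == n_positives  # counts must partition the positives
    computed_p = tp / (tp + fp) if (tp + fp) else 1.0
    return (abs(tp / n_positives - recall) <= tol
            and abs(computed_p - precision) <= tol)
```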

We are interested in further developing our approach by improving the features we have identified and augmenting the set with additional ones. We expect to expand the criteria we use to identify holes, widen the nomenclature around the term “hole” and investigate other methods for discovering and validating them. Improving the approach to run in real time as data is being captured on board a UAV would provide numerous benefits to search teams. GPS coordinates obtained from the UAV can be transmitted wirelessly to ground crews, allowing USAR teams to mark areas that require further investigation quickly and accurately as they are detected.

In summary, this paper presents a novel approach for the automated detection of access holes in rubble scenes. We began by defining the characteristics of an access hole through both photometric and functional attributes inherent to a valid entry point. We perform analysis on RGB and depth data to identify candidate holes. We introduced a new dataset and provided an evaluation protocol to validate our approach. Empirical evaluation showed promising results for detecting access holes.

REFERENCES

[1] FEMA US&R Structures Sub-Group Resource, “Urban Search & Rescue Structures Specialist Field Operations Guide,” 2009. [Online]. Available: http://disasterengineer.org/LinkClick.aspx?fileticket=8OpGWDUY7w8%3D&..

[2] S. J. Dickinson, “Challenge of image abstraction,” in Object Categorization: Computer and Human Vision Perspectives, S. J. Dickinson, A. Leonardis, B. Schiele, and M. J. Tarr, Eds. Cambridge University Press, 2009.

[3] R. R. Murphy, “Trial by fire [rescue robots],” Robotics & AutomationMagazine, vol. 11, no. 3, pp. 50–61, 2004.

[4] A. Ferworn, J. Tran, A. Ufkes, and A. D’Souza, “Initial experiments on 3D modeling of complex disaster environments using unmanned aerial vehicles,” in SSRR, 2011, pp. 167–171.

[5] Microsoft Kinect, “Kinect for windows.” [Online]. Available: http://www.microsoft.com/en-us/kinectforwindows/

[6] A. Ferworn, S. Herman, J. Tran, A. Ufkes, and R. McDonald, “Disaster scene reconstruction: Modeling and simulating urban building collapse rubble within a game engine,” SCSC, vol. 45, no. 11, 2013.

[7] V. Molino, R. Madhavan, E. Messina, A. Downs, S. Balakirsky, andA. Jacoff, “Traversability metrics for rough terrain applied to repeatabletest methods,” in IROS, 2007, pp. 1787–1794.

[8] R. Murphy, “Marsupial and shape-shifting robots for urban search and rescue,” IEEE Intelligent Systems and Their Applications, vol. 15, no. 2, pp. 14–19, 2000.

[9] B. Mobedi and G. Nejat, “3-D active sensing in time-critical urban search and rescue missions,” IEEE/ASME Transactions on Mechatronics, vol. 17, no. 6, pp. 1111–1119, 2012.

[10] S. S. Wilson, L. Gurung, E. A. Paaso, and J. Wallace, “Creation of robot for subsurface void detection,” in HST, 2009, pp. 669–676.

[11] A. Ollero, “Control and perception techniques for aerial robotics,”Annual Reviews in Control, no. 2, pp. 167–178, 2004.

[12] A. Sinha and P. Papadakis, “Mind the gap: Detection and traversability analysis of terrain gaps using LIDAR for safe robot navigation,” Robotica, pp. 1–17, 2013.

[13] M. Onosato, F. Takemura, K. Nonami, K. Kawabata, K. Miura, and H. Nakanishi, “Aerial robots for quick information gathering in USAR,” in SICE-ICASE, 2006, pp. 3435–3438.

[14] M. Onosato, S. Yamamoto, M. Kawajiri, and F. Tanaka, “Digital gareki archives: An approach to know more about collapsed houses for supporting search and rescue activities,” in SSRR, 2012.

[15] M.-Y. Liu, O. Tuzel, S. Ramalingam, and R. Chellappa, “Entropy rate superpixel segmentation,” in CVPR, 2011, pp. 2097–2104.

[16] OpenKinect, “Depth camera.” [Online]. Available: http://openkinect.org/wiki/Imaging_Information#Depth_Camera

[17] U.C.R.T., “USAR (Urban Search and Rescue) CBRNE (Chemical, Biological, Radiological and Nuclear) Response Team (U.C.R.T.).” [Online]. Available: http://www.opp.ca/ecms/index.php?id=69

[18] Texas A&M Engineering, “Disaster City.” [Online]. Available: http://www.teex.com/teex.cfm?pageid=ESTIprog&area=ESTI&templateid=1944

[19] C. J. van Rijsbergen, Information Retrieval, 2nd ed. Butterworth-Heinemann, 1979.