J Intell Robot Syst (2012) 65:587–601
DOI 10.1007/s10846-011-9572-6

Multi-Purpose Environment Awareness Approach for Single Line Laser Scanner in a Small Rotorcraft UA

Stefan Krause

Received: 28 January 2011 / Accepted: 18 April 2011 / Published online: 19 August 2011
© Springer Science+Business Media B.V. 2011

Abstract The work presents an environment awareness approach for a small rotorcraft unmanned aircraft (UA) which operates at low height using a single line laser scanner which enables a height estimation with a concurrent detection of ground fixed obstacles. The approach is suitable for small UA which are not able to carry complex and heavy 3D laser scanner mountings having additional drives or mirrors. It works without using external reference systems like DGPS. The approach was especially developed for a mission of the "International Micro Air Vehicle Conference" outdoor contest, where the aim is to fly through a 6 × 6 m artificial gate. The sensor data processing enables the height estimation above ground as well as the detection of obstacles in order to meet the mission's goal. The height estimation enables a near-ground flight to prevent a collision with the top boundary of the gate, and terrain following. The obstacle detection senses the pillars of the gate and finds a safe way through the narrow gate passage. The development and optimisation of the mounting and the sensor processing, as well as the validation, were realized under operational conditions with manual remote control (RC) helicopter flights and virtual flights in a simulation environment. The results of the experiments show that with this approach the mission can be fulfilled, as a reliable ground estimation and object detection is ensured.

S. Krause (B)
DLR, German Aerospace Center, Institute of Flight Systems, 38108 Braunschweig, Germany
e-mail: [email protected]

Keywords Small UAV · Laser scanning · Environment detection

1 Introduction

Environment awareness, including the detection of obstacles, is one of the main research topics in unmanned aviation. This ability is especially important for small unmanned aircraft (UA) which are used outdoors to fly at low heights in natural or urban canyons, where, in addition to aerial obstacles, ground fixed obstacles like buildings and vegetation have to be detected too. Environment awareness can be realized with passive and active sensors like cameras [8], laser scanners [4] or radar systems [2, 11]. The sensor data is used for most mapping [14, 15], obstacle avoidance [6] or state estimation [5] approaches to fulfil general and special missions.

The application described in this paper was especially developed to fulfil a given contest mission, where the aim is to cross a narrow artificial gate with a small UA. A safe crossing is nearly impossible with only a GPS-based vehicle localization, because it cannot guarantee the demanded accuracy. To increase the knowledge about the surrounding obstacles, it is necessary to directly sense the environment.

In order to make the optimum sensor data available, different sensor approaches exist. Most of the approaches with highly accurate sensors cannot be used in a small UA due to the limited payload of small UA. To address this issue, a new method was developed which combines the functionality of two sensors into one sensor. This combination means that the payload required for the original sensors can now be allocated to this single sensor, allowing the selection of higher quality data. With consideration of the mission and UA requirements and abilities, an approach using a single line laser scanner was developed, which provides surrounding obstacle and height data with a sufficient accuracy.

The processing of the data generated by the laser scanner runs onboard the UA, such that no communication with the ground control station is necessary. The data processing is specific to the mission and includes the gate detection and height estimation. It is an approach without complex world representation like a grid or mesh, to enable its application on a minimal onboard hardware configuration.

Sections 1.1 and 1.2 introduce the mission and the UA hardware used, Section 2 describes the sensor setup and Section 3 the data processing algorithm. Section 4 explains the verification setup of the flight test and the simulation. In Section 5 the results of flight tests and simulations are described. Section 6 contains the discussion, conclusion and outlook.

1.1 Mission

The mission is a scenario which was part of the International Micro Air Vehicle Conference (IMAV) outdoor competition 2010 [9]. The aim of this scenario was the automatic flight through a gate, which was represented by two obstacles. A more detailed description of the gate is as follows: on a flat area two pillars stand upright. These pillars are six metres tall and have an edge length of 0.1 × 0.1 m. The distance between the pillars is 6 m. The pillars are connected by a rope at their tops.

Fig. 1 Model of the gate

Each pillar has three flags, which are meant to facilitate optical detection. These flags are coplanar to the gate and outside of the gate opening. Based on this setup, the gate has an opening of 6 × 6 m (Fig. 1).

1.2 UA Setup

The UA used in this work is the maxiARTIS rotorcraft UA of the Institute of Flight Systems, Department of Unmanned Aircraft of the DLR in Braunschweig [1], see Fig. 2. maxiARTIS is a proprietary design which uses a turbine engine. Its rotor diameter is 3 m and its maximum take-off weight is 25 kg. The permissible maximum scientific payload is 2 kg, e.g. for additional sensor and processing units for environment detection. Taking into account the current data processing unit, 1 kg of payload remains available for sensors. The maximum duration of one flight is 30 min.

The state of the helicopter is provided by a navigation solution using a differential GPS, a magnetometer and an inertial measurement unit. The navigation solution is available to the scientific payload, which allows for the laser scanner measurements to be related to the helicopter attitude. The accuracy of the state estimation is better than 0.5 m in position and 4° in attitude [3].

Fig. 2 maxiARTIS UA of the Unmanned Aircraft Department of the DLR in Braunschweig

2 Sensor Setup

2.1 Related Work

Approaches suitable for smaller UA often successfully use cameras, because they are relatively lightweight, mechanically simple and need less power than lidar or radar systems. Different camera configurations like monocular or stereo systems were demonstrated in the past [3, 6, 19]. Monocular systems, which do not use triangulation by changing viewpoints, cannot measure distance; therefore only relative measurements can be generated. Stereo systems have this ability, but the depth resolution depends on the baseline between the two cameras, which limits their long-range accuracy [10]. With a constant baseline and an increasing measurement distance the accuracy decreases cubically. A further drawback of stereo systems is the necessary presence of texture in the scanned environment. In homogeneous regions like uniform walls or streets, where insufficient features are found, no matching between the two camera pictures is possible and consequently no range information is produced.

Approaches which are used for larger UA (e.g. the Yamaha RMax) and ground vehicles with the ability to carry sufficient payload and power supply often use a high end 3D laser scanner. A popular approach is the use of a single line scanner in combination with additional drives [16, 22] and/or moving mirrors [18] to produce a 3D point cloud. With these mountings a large field of view can be achieved (up to 360° × 360°). The disadvantage of these approaches is that the motion mechanism of the additional drive is too slow and prohibitively bulky for most small UA [17, 20, 21].

Nevertheless, approaches exist which present the usage of laser scanners in small UA as well [12, 13]. These approaches often use line scanners without additional drives. The probable reasons are the following: The long-range accuracy of the single laser scanner is better than the accuracy of a stereo camera of comparable size and weight. Without an additional drive, problems with timing, synchronization and weight can be avoided. However, without the addition of a drive the vertical field of view of this scanner is narrow.

In contrast to the vertical field, the horizontal field of view is often much larger, between 180° and 360°. As a consequence, the scanner may take unnecessary measurements towards the rear of the aircraft. Grzonka et al. [5] show an approach using the back view to monitor the height of the carrier. Here, the laser scanner was used for two tasks, obstacle collision prevention and height estimation. Advantages of this approach could be more accurate measurements compared to barometric or sonar height estimation, and having more payload available for other tasks by combining two functions in one sensor.

2.2 Proposed Approach

The proposed approach uses a single line laser scanner with a narrow vertical field of view and a 270° horizontal field of view in a small helicopter UA. As in the work by Grzonka et al. [5], two tasks are monitored to optimally exploit the generated sensor data. The first task is the detection of ground based obstacles to facilitate their avoidance. The second task is ground detection and thus estimation of the height above ground.

A deviation from the work of Grzonka et al. is that in the proposed approach no additional mirror is used to split the beam plane into a forward and a downward view. The reason is that Grzonka's approach was implemented on a quadrocopter, which generates significantly less vibration compared to a conventionally configured helicopter. It is assumed that the vibrations would be transferred to the laser-mirror construction and inhibit the recording of useful measurements when using maxiARTIS. An analysis quantifying the impact of the vibration on the mirror and the difference between quadrocopter and helicopter was not done.

The proposed implementation is independent of additional structures like mirrors. To obtain a simultaneous detection of the ground and ground fixed obstacles, the single line laser scanner is mounted to the helicopter as can be seen in Fig. 3. The scan plane coincides with the xl–zl plane of the laser frame of reference. The front view direction of the laser is parallel to the zl axis. The xl axis is parallel to the body fixed yb-axis of the helicopter. The optical axis of the laser scanner zl lies in the xb–zb plane of the body fixed helicopter system. There is an angle κ between the zl and xb axes, as can be seen in Fig. 4.

Fig. 3 Reference frame definitions (laser frame xl, yl, zl; body frame xb, yb, zb; geodetic frame xg, yg, zg)

For the definition of the laser scanner mounting angle κ, it is necessary that the standard flight state of the carrying aircraft is considered. The aircraft pitch angle Θ must be taken into account in the computation of κ. In the following step it will be assumed that Θ = 0.

The definition of κ depends on three parameters: the maximum height at which the aircraft operates, the maximum sensor measurement range and the width of the channel which is necessary to make the ground scan possible. The maximum sensor range and the width of the ground scan channel are used to compute the length of the ray which is radiated from the centre of the scanner field of view. This ray is parallel to the zl-axis. Further, the maximum height, which is measured in the geodetic coordinate system along the zg-axis, and the length of the ray along zl are the basis for the computation of the correct κ. The angle κ should be chosen as obtuse as possible, but such that the ground detection for the desired scenario remains possible.

Fig. 4 Computation of the mounting angle κ

Fig. 5 Hokuyo UTM-30LX
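As an illustration of the computation described above, the following minimal sketch derives κ from the three parameters. It rests on an interpretation, not on the paper's exact formula: κ is taken as the smallest tilt between the zl and xb axes for which the central ray still reaches the ground at the maximum operating height, and the function name, interface and pitch handling are hypothetical.

import math

def mounting_angle_kappa(h_max, r_max, channel_width, pitch=0.0):
    """Smallest tilt kappa [rad] between the z_l and x_b axes for which the
    central ray still reaches the ground at the maximum operating height.

    h_max         -- maximum operating height above ground [m]
    r_max         -- maximum sensor measurement range [m]
    channel_width -- required width of the ground-scan channel [m]
    pitch         -- assumed steady-state pitch angle Theta [rad], 0 in the paper
    """
    # Length of the central ray (parallel to z_l): the rays at the channel
    # edges have length r_max and must still span the desired channel width.
    half_width = 0.5 * channel_width
    if half_width >= r_max:
        raise ValueError("channel width exceeds the sensor range")
    ray_length = math.sqrt(r_max ** 2 - half_width ** 2)

    # The central ray must drop at least h_max vertically to hit the ground.
    if h_max > ray_length:
        raise ValueError("ground not reachable from h_max with this sensor")
    return math.asin(h_max / ray_length) - pitch

# Example with the IMAV-like values: 30 m range, 6 m channel, 6 m maximum height
print(math.degrees(mounting_angle_kappa(6.0, 30.0, 6.0)))  # roughly 12 degrees

With the 30 m sensor range from Section 2.3 and the 6 × 6 m gate this evaluates to about 11.6°, which is consistent with the κ ≈ 12° used in the flight tests, although the actual derivation in the paper may differ.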

2.3 Sensor Hardware

For the reasons outlined above, a single line scanner was chosen. The device is a Hokuyo UTM-30LX with a 270° field of view and 0.25° angular resolution, Fig. 5. The maximum measurement range is 30 m with an accuracy of ±50 mm. The frame rate is 40 Hz. Compared to other lidar solutions, this device is a low cost sensor and its weight of 350 g is very low [7].

3 Sensor Processing

Continuous scanning over a time period, in combination with the movement of the UA, generates a three dimensional point cloud representation of the ground and ground based obstacles. This representation enables a processing of each single scan and/or a computation over a consecutive scan sequence with consideration of the movement. In this work both approaches are used. The processing of the sensor measurements is partitioned into four steps. The first step is the height estimation. The processed height is the basis for the distinction between ground plane and obstacle, which is made for each point measured by the laser. Based on this assignment, the point cloud elements which are flagged as obstacles are combined into bounding boxes. Afterwards, it is checked whether a combination of two obstacles could constitute a gate.

3.1 Height Above Ground Estimation

The height estimation assumes a flat ground plane with gates, which is applicable to the IMAV mission. Under these conditions a laser scanner which is slanted towards the ground, as described above in Section 2.2, can measure the height above ground if the predominant part of the scan is ground or low ground vegetation. A further assumption is that the ground is more or less planar (natural grassland) and that the flight control computer provides a sufficiently accurate attitude estimation. A position estimation based on DGPS or other sensors is not necessary.

The height estimation includes the following steps. Laser measurements are transferred with the homogeneous matrix transformation (^C H_L)^{-1} from the laser coordinate system to the geodetic frame of reference, as can be seen in Fig. 6.

Fig. 6 Height estimation based on the laser scanner measurement (laser ray lg at angle τ to the zg-axis yields the height hi)

Afterwards, the angle τ between the laser ray lg and the zg-axis of the geodetic system is computed. Based on this result and the length of lg, the length of a vector hi is obtained, which is parallel to zg. The length of the vector hi represents the height above ground including vegetation. The resulting heights are combined with a median computation. The computation considers height measurements of the current scan and several prior estimated heights based on prior scans. The median computation was chosen because it provides sufficient filtering with low computational expense.
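A compact sketch of this processing chain is given below. The window length of the median filter, the NED sign convention for the geodetic z-axis and the interface to the attitude data are assumptions made for illustration, not the paper's implementation.

import numpy as np
from collections import deque

class LaserAltimeter:
    """Sketch of the height-above-ground estimation (Section 3.1).
    Assumptions: the geodetic z_g axis points downwards (NED), the scan plane
    is spanned by x_l and z_l, and the median window covers five scans."""

    def __init__(self, history=5):
        self.history = deque(maxlen=history)    # heights estimated from prior scans

    def update(self, ranges, beam_angles, R_gl):
        """ranges, beam_angles: polar beam measurements in the scan plane.
        R_gl: 3x3 rotation from the laser frame to the geodetic frame, built
        from the navigation attitude and the mounting angle kappa."""
        # Beams expressed in the laser frame (x_l-z_l plane, optical axis z_l).
        pts_l = np.stack([np.sin(beam_angles) * ranges,
                          np.zeros_like(ranges),
                          np.cos(beam_angles) * ranges], axis=1)
        rays_g = pts_l @ R_gl.T                              # rays l_g in geodetic axes
        lengths = np.linalg.norm(rays_g, axis=1)
        cos_tau = rays_g[:, 2] / np.maximum(lengths, 1e-9)   # angle tau to the z_g axis
        heights = lengths * cos_tau                          # h_i = |l_g| * cos(tau)
        valid = heights[heights > 0.0]
        if valid.size == 0:
            return None                                      # ground out of range (see Fig. 9)
        self.history.append(np.median(valid))                # height from the current scan
        # Median over the current scan and several prior estimated heights.
        return float(np.median(self.history))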

3.2 Distinction Between Obstacles and the Ground Plane

Under the conditions assumed in this work (aircraft height, sensor mounting angle and nearly flat ground), a point cloud containing elements of obstacles and the ground is generated. For the application of subsequent algorithms, for example for obstacle avoidance or landing place evaluation, a break down of the point cloud is advantageous. This break down divides the set of scan points into the two subsets ground and obstacles. The border is represented by a regression plane, which is based on a set of valid scans. The regression plane is built from two regression lines. The first regression line grs uses the values, in the geodetic frame of reference, of the last complete scan, pruned of "infinite" distances. To obtain these values, a coordinate transformation from laser to geodetic coordinates must be performed. The regression line grs is based on a principal component analysis and identifies the main alignment of the point set and a support point. The second regression line grt is based on a point set which contains the grs support points of the last valid scans. Again a principal component analysis is used. The number of scans which are used depends on the motion of the scanner and consequently on the motion of the aircraft. This is necessary because, if the motion is too slight and consequently the scans are too concentrated around one position, the distribution of the support points is not significant enough to compute an alignment for the line estimation. If this is not considered, the computation generates poor results around the yg-axis. To guarantee that the support points of the last several scans form a significant set, two conditions have been implemented. The first one is that at least 20 scans have to be used. The second one holds that, if the motion is less than 2 m, all scans which were generated within the last 2 m have to be used. The results of the computations are two alignments and the support point of grs. They are used to generate the regression plane. The combination of the two regression lines builds a plane which coincides with the ground plane. In order to use it as a threshold, the plane is lifted slightly, in our case by 0.1 m. The result is a separator which flags all elements underneath it as ground and those above it as obstacles.
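The separation can be sketched as follows. The PCA helper, the data layout and the use of the sensor position to orient the plane normal are illustrative choices and not the exact implementation described above.

import numpy as np

def pca_line(points):
    """Regression line via principal component analysis: returns the support
    point (mean) and the dominant direction of the point set."""
    support = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - support, full_matrices=False)
    return support, vt[0]

def flag_obstacles(scan_g, support_history, sensor_pos_g, lift=0.1):
    """Sketch of the ground/obstacle separation (Section 3.2).

    scan_g          -- Nx3 points of the last complete scan in geodetic
                       coordinates, already pruned of 'infinite' ranges
    support_history -- Mx3 support points of recent valid scans (at least 20
                       scans, or all scans within the last 2 m of motion)
    sensor_pos_g    -- scanner position, used only to orient the plane normal
    lift            -- offset that raises the regression plane (0.1 m here)
    Returns a boolean mask, True where a point is flagged as an obstacle.
    """
    support, dir_scan = pca_line(scan_g)           # regression line g_rs
    _, dir_track = pca_line(support_history)       # regression line g_rt
    normal = np.cross(dir_scan, dir_track)         # plane spanned by both lines
    normal /= np.linalg.norm(normal)
    if np.dot(sensor_pos_g - support, normal) < 0:
        normal = -normal                           # normal points towards the sensor
    # Signed distance of every scan point to the lifted regression plane.
    distance = (scan_g - support) @ normal
    return distance > lift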

3.3 Obstacle Detection

In order to facilitate obstacle avoidance, it is necessary to apply a further separation of the obstacle point subset into single obstacles. The separation uses a bounding box which contains the corresponding scan points of a probable obstacle. The procedure for the allocation of a point to an obstacle is as follows: The main routine checks if a point is part of one of the existing obstacles. If this is the case, the point is added. Otherwise a new obstacle is generated. The result of this routine depends on the distance between the point and the obstacle surface. This maximum range is represented by a virtual shell which encloses the entire obstacle. If a point is in the interior of the shell, it is added to the obstacle. If a point was added to the obstacle which was inside the virtual shell but outside of the obstacle shape, the obstacle grows.

This algorithm could lead to a situation in which different measurements generate different obstacles, which then grow together or overlap each other, depending on the motion of the aircraft. To prevent the detection of more than one obstacle at the same place, the position and the shape of each obstacle are compared with the other identified obstacles after each measurement cycle. If different obstacles are at the same place, they are combined into a single one. The bounding box enables an environment representation upon which further processing steps build.
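A possible realisation of this bounding box clustering is sketched below. Axis-aligned boxes and a single shell parameter used for both growing and combining are simplifying assumptions; the class and method names are hypothetical.

import numpy as np

class BoundingBoxClusterer:
    """Sketch of the obstacle segmentation (Section 3.3): every point flagged
    as an obstacle is assigned to an existing axis-aligned bounding box if it
    lies within a virtual shell around it, otherwise it seeds a new box; boxes
    that grew together or overlap are merged after each measurement cycle."""

    def __init__(self, shell=0.1):
        self.shell = shell
        self.boxes = []            # each box: (min_corner, max_corner) arrays

    def _inside_shell(self, box, p):
        lo, hi = box
        return np.all(p >= lo - self.shell) and np.all(p <= hi + self.shell)

    def add_point(self, p):
        p = np.asarray(p, dtype=float)
        for i, box in enumerate(self.boxes):
            if self._inside_shell(box, p):
                lo, hi = box
                # Growing: extend the box so that it also contains the new point.
                self.boxes[i] = (np.minimum(lo, p), np.maximum(hi, p))
                return
        self.boxes.append((p.copy(), p.copy()))    # start a new obstacle

    def merge_overlaps(self):
        """Combine boxes that overlap within the shell (run once per cycle)."""
        merged = True
        while merged:
            merged = False
            for i in range(len(self.boxes)):
                for j in range(i + 1, len(self.boxes)):
                    lo_i, hi_i = self.boxes[i]
                    lo_j, hi_j = self.boxes[j]
                    if np.all(lo_i - self.shell <= hi_j) and np.all(lo_j - self.shell <= hi_i):
                        self.boxes[i] = (np.minimum(lo_i, lo_j), np.maximum(hi_i, hi_j))
                        del self.boxes[j]
                        merged = True
                        break
                if merged:
                    break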

3.4 Gate Detection

For the implementation of the IMAV scenario, it is necessary to detect a gate as described in Section 1.1. The position of the gate is only approximately known. To prevent a collision between the rotorcraft and the gate, a detection of the gate as early as possible is necessary. The early detection ensures a larger safety margin between the gate and the rotorcraft to enable a safety stop in case the detection has failed. To enable a safe gate crossing with the maxiARTIS, it was decided that the detection of the gate must be done 8 m in front of the gate.

The gate is only defined through its dimensions. Because of the mounting of the pillars with bracing wires and the navigation and sensor measurement inaccuracy, the known dimensions and shape of the pillars cannot be used for an identification of the gate. Furthermore, the mounting of the scanner prevents the measurement of the pillar height; it can be measured only after flying through the gate. The same applies to the flags, which can only be measured too late. Only the distance between the pillars can be measured with reasonable accuracy. The distance is measured between the centroids of two obstacles. Their coordinates are based on the distribution of the single measurements in a bounding box. This leads to a relatively good representation of the pillar. The expected distance is d ≈ 6 m. With the knowledge of the expected distance d, an algorithm was implemented which computes the distances between all obstacles and compares these with d. If the distance between two obstacles is close enough to d, both obstacles are combined into a gate. This procedure is relatively simple and works reliably in the given IMAV scenario.

In our implementation the algorithm always finds the right gate, but due to the bracing wires additional gates are often found as well. The search for the right gate cannot be solved with the distance d alone; an additional parameter must be taken into account. To reduce the number of false positives, a confidence threshold is implemented. It uses the probability that the component objects of a gate candidate are real and not failure measurements. The confidence threshold is derived from the number of laser hits for the single objects. If the number of hits is higher than a predefined threshold number, it is probable that the found object is a real obstacle and not a scan phantom. After this selection, only those obstacles which have a number of hits higher than the threshold are used for the gate search. The probability threshold limits the number of gates, however more than one gate can still be found. A safe flight is most probable if the most probable gate is chosen for calculating the flight path. The probability for a gate candidate is the sum of the probabilities of all obstacles which are part of it. The candidate with the highest probability describes the most probable gate.
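The complete gate search can then be summarised by the following sketch. The dictionary-based obstacle representation is an assumption, and the parameter defaults merely mirror the values reported in Section 5.3 (6 m ± 10%, 20 hits).

import itertools
import numpy as np

def detect_gate(obstacles, expected_distance=6.0, tolerance=0.10, min_hits=20):
    """Sketch of the gate search (Section 3.4): obstacles below the hit-count
    confidence threshold are discarded, every remaining pair whose centroid
    distance matches the expected gate width (within the tolerance) becomes a
    gate candidate, and the candidate whose obstacles collected the most laser
    hits in total is returned as the most probable gate.

    obstacles -- list of dicts with 'centroid' (x, y) and 'hits' (int);
                 this data layout is illustrative, not the paper's interface.
    """
    confident = [o for o in obstacles if o['hits'] >= min_hits]
    candidates = []
    for a, b in itertools.combinations(confident, 2):
        d = np.linalg.norm(np.asarray(a['centroid']) - np.asarray(b['centroid']))
        if abs(d - expected_distance) <= tolerance * expected_distance:
            candidates.append((a['hits'] + b['hits'], a, b))   # confidence = summed hits
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c[0])
    return best[1], best[2]    # the two pillar candidates of the most probable gate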

4 Verification Setup

To develop and optimize the algorithms, several manual helicopter flight tests and simulation tests were used. The flight tests were used to prove the correctness and accuracy of the mounting and the data processing under real operational conditions. To generate an easy to manipulate but reproducible test environment, a simulation was implemented and subsequently used to verify the chosen threshold parameters. The outdoor tests as well as the simulation were implemented to replicate the IMAV scenario description. The UA is flown at different heights between 1 m and 6 m. The data were recorded with κ ≈ 12°. The results of the real flight scenarios are based on recorded measurement data and subsequent offline computations. The results of the simulation data were generated in a closed loop simulation.

The hardware which was used in the flight tests is described in Sections 1.2 and 2.3. How the simulation was implemented is shown in the following section.

In the simulation different three dimensional scenarios can be created. Components of the environment are represented through geometric primitives like prisms or planes. In its current form two pillars and the ground plane are used with the settings given in Section 1.1.

The sensor simulation emulates the functioning of the real scanning laser. The sensor emits a simulated beam into the virtual environment and checks if the ray intersects with one of the environmental elements. The method is a ray tracing approach, which is repeated for each beam. The coordinates of the intersection are transferred from the simulated geodetic frame of reference into the laser frame of reference. For a more realistic simulation, the measurements are superimposed with a measurement inaccuracy at the same level as the original sensor inaccuracy, as described in Section 2.3.
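A reduced sketch of such a beam-wise ray-casting sensor model is shown below. Only unbounded planes are handled and Gaussian noise stands in for the sensor inaccuracy, so this illustrates the principle rather than the actual simulation code; all names are hypothetical.

import numpy as np

def simulate_scan(origin_g, ray_dirs_g, surfaces, max_range=30.0, noise_sigma=0.05):
    """One ray per beam is cast into a scene of planar primitives, the closest
    intersection gives the range, and Gaussian noise at the level of the real
    sensor accuracy (Section 2.3) is added.

    origin_g   -- scanner position in the simulated geodetic frame
    ray_dirs_g -- Nx3 unit beam directions in the geodetic frame
    surfaces   -- list of (point, normal) planes; prisms would be handled by
                  several bounded faces (omitted here for brevity)
    """
    ranges = np.full(len(ray_dirs_g), np.inf)
    for p0, n in surfaces:
        denom = ray_dirs_g @ n
        with np.errstate(divide='ignore', invalid='ignore'):
            t = ((p0 - origin_g) @ n) / denom        # ray parameter at the plane
        valid = (denom != 0) & (t > 0)
        ranges = np.where(valid, np.minimum(ranges, t), ranges)
    ranges = np.where(ranges <= max_range, ranges, np.inf)   # beyond range: no echo
    noisy = ranges + np.random.normal(0.0, noise_sigma, size=ranges.shape)
    return np.where(np.isfinite(ranges), noisy, np.inf)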

Afterwards, the three dimensional data are transformed into the protocol which is provided by the real laser scanner. This last transformation enables a software-in-the-loop simulation approach. This again enables an analysis of the functioning of the generated processing algorithm without changing the algorithm structure between simulation and the real hardware. On the other side it increases the significance of the simulation, because the output data can be validated against the real device without additional software.

Fig. 7 Comparison between the height estimation based on the navigation solution and the laser data for a flight test (navigation altitude based on DGPS, filtered laser height and absolute laser variance over time)

Fig. 8 Comparison between the default input height of the simulation and the laser simulation (default navigation altitude without error, laser height and absolute laser variance over time)

Apart from the single line laser scanner, a stereo camera has been implemented. The camera has been used to monitor the laser data. Furthermore, the simulation enables the implementation of a multiline scanner or other active contactless measurement systems.

A sensor carrier, in this case a helicopter, has also been integrated into the simulation. The implementation of the helicopter only contains the abilities which are necessary for this experiment, especially waypoint navigation and position estimation, which, as in reality, has been distorted by measurement inaccuracy. The level of the inaccuracy is inherited from the state estimation of maxiARTIS as described in Section 1.2.

5 Experimental Results

5.1 Height Above Ground Estimation

The results of the height estimation which was computed in the flight tests have been compared to the reference height values of the navigation solution using differential GPS, as described in Section 1.2. The data which has been computed using the simulation output has been compared to the input flight parameters without noise. Figures 7 (real) and 8 (simulated) show example flights for the laser altimeter solution, the reference solution and the absolute discrepancy between both. For the simulated data the standard variance is ≈0.15 m and the maximum discrepancy is ≈0.5 m. For the real flight data the standard variance is ≈0.07 m and the maximum discrepancy is ≈0.14 m.

Fig. 9 Height estimation for a flight test, including parts of the flight above the maximum measurable height range (navigation altitude based on DGPS, filtered laser height and absolute difference between both over time)

Fig. 10 2D top view and 3D visualisation of real scan data for the given scenario; (c) shows the application of the bounding boxes on the data set

Figure 9 shows results of a longer real flight, which includes several passes through the same gate. Between these passes the rotorcraft flew at a height above ground which was larger than the maximum measurement range of the laser altimeter, as explained in Section 2.2. For these recordings κ was chosen as ≈6°, with a theoretically resulting maximum measurable laser height of 3 m. In the parts of the flight higher than 3 m no height measurements were possible and so the height estimation decreases to zero. When the rotorcraft flies lower than 2 m, the height estimation is possible. The increasing reference height and the decreasing laser height towards zero lead to an increasing absolute difference, which reaches its maximum at the reference height.

5.2 Distinction Between Obstacles and the Ground Plane & Bounding Boxes

The results of the algorithm for the obstacle-ground separation and the bounding boxes are presented in one section because the results are tightly associated. The review encompasses first a qualitative visual validation and second a quantitative validation based on the positions and shapes of the bounding boxes.

All results presented here have been calculated with the following parameters: The break down into ground plane and obstacles, as described in Section 3.2, uses a threshold to lift the regression plane above the average ground detection in order to prevent wrong assignments of measurement points on the ground as obstacles caused by small bumps or sensor noise. The lift is 0.1 m. The obstacle detection, described in Section 3.3, also uses two thresholds for the growing and combining of the obstacles; both were set to 0.1 m. The choice is based on the laser range accuracy of ±50 mm as explained in Section 2.3.

Table 1 Comparison between the real parameters of a pillar and the results of the break down into ground plane and obstacles and the bounding box approach for a simulation

                               Real pillar         Measured bounding box
                               dimensions [m]      dimensions [m]
Point of gravity (x, y, z)     (0, −3, −3)         (0.02, −3, −2.4)
Shape                          0.1 × 0.1 × 6.0     0.07 × 0.5 × 3.6
Bottom point (z-value)         0.0                 0.15

Table 2 Comparison between the real shape of a pillar and the results of the break down into ground plane and obstacles and the bounding box approach in a flight test

                                            Shape               Bottom point
Real pillar dimensions [m]                  0.1 × 0.1 × 6.0     0.0
Measured bounding box dimensions [m], I     1.3 × 0.3 × 3.7     0.4
Measured bounding box dimensions [m], II    1.0 × 2.3 × 3.5     0.8

To enable visual validation, the laser data was visualized in a 2D top-down view and a 3D view, as can be seen in Fig. 10a and b. In the images the ground was coloured green and the obstacles red. The images show the data of a real flight with the pillars including flags and bracing wires. These parts of the gate are well distinguishable from the ground and can be clearly identified. Some parts of the ground were not identified correctly. It is assumed that these parts are bumps of the ground which rise above the neighbouring areas by about 0.1 m. For the quantitative validation the bounding boxes are used, where only the scan points which were not flagged as ground are permitted to build up the bounding boxes. If the geometrical parameters of the boxes correspond with those of the real pillars, the separation and box building is sufficient for a reliable environment detection, see Fig. 10c.

To enable an easy comparison, a rudimentary pillar without flags and wires was used in the simulation. The ground plane is parallel to the x-y plane of the geodetic frame of reference with z = 0. Table 1 shows an example comparison of the default parameters of a pillar and the calculated parameters of the bounding boxes which were identified as a gate. The bottom point of the box containing the pillar is 0.15 m above the ground, because it is the first point above the separation plane. The results of a real flight are shown in Table 2. The lowest point which was assigned to the bounding box of a pillar was approximately 0.4–0.8 m above the ground. In the case of the flight tests, the pillar positions (centroids) were not compared between the default and the measured solution, because the position of the real pillars could only be measured with DGPS, whose accuracy is not higher than that of the calculated values. An example comparison of the shape parameters is shown in Table 2.

Table 3 Distances between the components of potential gates

                 Pillar/pillar    Pillar/ground bump    Pillar/leash
Simulation       6.1 m            –                     –
Real example     6.4 m            5.6 m                 6.4 m

The components of the gates are pillars, ground bumps and bracing wires.

Fig. 11 Relation between the number of potential gates and the number of laser hits which are necessary for an obstacle to be considered for the gate construction; the results are based on simulated flights

5.3 Gate Detection

The gate detection uses three parameters to filter the most probable gate out of the set of obstacles. The main parameter is the distance between two objects, more precisely between their centroids. If the computed distance corresponds with the default distance of 6 m ± 10%, the obstacles are flagged as a possible gate. This approach finds the real gate, but also any other pair of obstacles which happen to have the required distance as well (see Table 3).

To restrict the number of obstacles which form the basis for the gate construction, a threshold was used. The threshold is based on the following results: With a confidence threshold between 1 and 10 laser hits per obstacle, the right gate is found in every flight test, but several other gates are found beforehand. Figure 11 shows some results; the upper graph shows the total number of potential gates for the complete flight and the lower graph shows the number of gates which are detected before the right gate was found.

Above a threshold of 12 laser hits per obstacle, the real gate was found as the first gate. This threshold of 12 hits per obstacle as the lower border was determined for the IMAV scenario by systematically increasing the threshold. It depends on the environment where the scenario is implemented.

Fig. 12 Relation between the number of laser hits which are necessary to select an obstacle for the gate construction and the distance between the UA and the gate

Fig. 13 Potential gates which were found in a real flight test with their components and the corresponding number of laser hits; the five candidates are (1) ground bump 1 / bracing wire 1, (2) pillar 1 / bracing wire 1, (3) pillar 1 / ground bump 2, (4) pillar 2 / ground bump 3 and (5) pillar 2 / pillar 1

By applying this threshold the number of possible gates is reduced from 259 gates (no threshold) to 5 gates (12 hits). Among these 5 gates a further selection has to take place. One possible solution would be to further increase the threshold. But this generates the problem that, by increasing the number of necessary hits, the laser has to scan the environment much longer to generate a valid gate candidate. This relation is depicted in Fig. 12, which shows an increasing delay in gate detection with an increasing threshold. To show this relation, the UA followed a path with a constant velocity of 5 m/s, which led directly through the gate. This prevents an unlimited increase of the threshold. The systematic tests showed that a threshold of 20 hits per obstacle is most practicable. It filters out enough noise, with a safety buffer to the lower border, and enables a rotorcraft hovering at a height of 3 m to detect the gate 11 m in front of it.

The threshold reduces the number of possible gates, but cannot guarantee that only one gate will be found. By applying a threshold of 20 hits per obstacle to the flight data in our tests, between 5 and 10 possible gates were found. As described in Section 3.4, selecting the real gate from this subset is done by choosing the gate with the highest confidence. The chart in Fig. 13 shows the identified gates with their components and the corresponding number of hits for the flight test (visualised in Fig. 10c). The five identified gates were constructed out of pillars, wires and ground bumps. Both pillars have the highest number of hits (841, 442) and this causes the gate as a whole to have the highest number of hits as well. In all test cases which have been conducted, the real gate was chosen by the algorithm.

6 Discussion, Conclusion & Outlook

In this paper an approach for small rotorcraft UA with a line laser scanner was shown, enabling both the fly-through of artificial gates and the measurement of height above ground. Both the flight tests and the simulation demonstrate gate detection and height estimation.

6.1 Height Above Ground Estimation

The results of the height above ground estimation for the real and simulated flight data seem sufficient for a UA mission like the IMAV scenario, compared to the reference DGPS values. The quality of the DGPS reference estimation is not known. A reliable statement can only be made with the usage of a third sensor, possibly sonar.

The comparison between real and simulated flights in this example shows that the standard variance and the maximum discrepancy of the simulation are worse than those of the real flights, as can be seen in Section 5.1. A possible explanation could be that the navigation solution of the simulation uses a noise level assumption which was ascertained over several real flights. So it is quite possible that the conditions (DGPS reception) in the real flights were better than the assumption in the simulation. Therefore the results of the real flights are also better.

As shown in Fig. 9, the theoretically assumed height estimation up to 3 m cannot be achieved in all cases. A possible cause is that the laser light does not reach the ground. The helicopter state is considered in the height estimation, but if the pitch motion is too large and the maximum range of the laser scanner does not suffice to intersect the ground, no height estimation is possible. Pitch motions can emerge through acceleration changes or as a reaction to wind changes.

A problem which must be accepted with this kind of sensor mounting is that the height measurement is erroneous if the helicopter flies at a constant altitude in the world coordinate system over ground which falls or rises in the flight direction. If the ground is falling, the laser height over ground is measured too high, and if the ground is rising, the estimated values are too low. This problem can be partially avoided when flying over previously scanned areas. For these areas the ground shape is already known and, in combination with the navigation solution, it is possible to compute the height directly underneath the UA. A further solution to this problem is the use of an additional mirror which deflects some rays directly under the UA [5]. Both approaches enable a height estimation without an error caused by the ground shape.

A further drawback is the loss of the height detection when flying higher than the assumed maximum height which was used for the computation of the mounting angle in Section 2.2. But this is not considered problematic, because at greater heights a precise height measurement above ground is not absolutely necessary. Additionally, if the helicopter returns to a close-to-terrain flight level, the height measurement is possible again without restriction, to prevent crashes or to evaluate a landing place.

6.2 Distinction Between Obstacles and the Ground Plane

The qualitative evaluation of the data shows that the allocation of the measurements into the subsets ground and obstacles evidently works sufficiently correctly. Only three small parts at the fringe of the ground were also flagged as obstacles, as described in Section 5.2. The flagging of the ground bumps as obstacles is not automatically wrong, because if the bumps are too tall, a safe landing could be prevented.

For a quantitative evaluation, the bottom point coordinates of the generated bounding boxes were used, as can be seen in Tables 1 and 2. The bottom point of the example box of a simulated pillar is 0.15 m above the ground. The height of this point consists of the offset of the separation plane of about 0.1 m and the noise of the scanner and the navigation solution. The flight tests could not match this performance.

A reason for this difference between simulation and flight tests could be the shape of the ground. In the simulation a perfectly flat ground plane was used, whereas the flight tests were conducted over natural grassland. Through the application of previously detected ground segments to the current measurements, it is possible that the separation plane rises or falls, thus flagging obstacles as ground and vice versa.

Another reason for erroneous flagging could be the accuracy of the navigation solution. The selected κ ≈ 12°, in combination with a pitch accuracy of ±2° (see Section 1.2), can lead to a maximum height estimation error of up to 0.8 m. Such an extreme deviation did not occur in the tests, but it shows a possible error source for the ground plane assignment.

Like the qualitative evaluation, the quantitative evaluation has shown that the separation between obstacles and ground works well and robustly. Further, some differences between simulated and real flight tests were shown, but also that the approach is usable for the given IMAV scenario.

6.3 Obstacle Detection and Classification

The presented bounding box approach uses a growing and a combining threshold to define a separated obstacle. With this procedure it is impossible to distinguish between very close obstacles such as the pillars and their flags and wires, see Table 2 and Fig. 10c. Table 1 shows that in the simulation, where the flags and leashes are not present, the detection works very well. In the real flight scenarios the boxes covering the pillars also contained parts of the flags and wires attached to the pillars. This leads to an obstacle shape in the x-y dimension which is much wider than the pillar itself.

A further veiling of the pillar shape is caused by the deviating computation of the pillar height. The reasons are the mounting of the scanner and the flight height of the UA while passing the gate. The mounting and the height of 3 m during the passing only allow the detection of objects which are between the ground and 3 m, the height of the rotorcraft. Only after the rotorcraft crosses the gate can the parts of the objects above the UA height be measured.

The veiling of the real pillar shape prevents a categorical classification against other environment elements like electricity pylons, but such objects were not present in the IMAV scenario. Only ground bumps and the pillars with their bracing leashes were classified as obstacles. The leashes are unproblematic because they can be defined as a part of the pillar. If the combining threshold of the bounding box approach is chosen higher, the leash and the pillar will be combined into one obstacle. The ground bumps have a smaller extent than the pillars. Applying this as a further restriction, a categorical classification for the IMAV scenario is possible. The shown classification is not usable for general approaches, but it was shown that the presented approach is sufficient to generate an obstacle representation in which the gate can be found.

6.4 Gate Detection

Because an obstacle classification is impossible, only the distance between the pillars can be used to find a gate. This approach finds the gate, but it also finds all other object pairs having the same distance between them and classifies them as a gate. The usage of the number of laser hits as a confidence statement is based on the assumption that the pillars, and therefore the gate, are the objects with the largest obstacle surface in the test scenario. If this assumption does not hold, for example in more complex scenarios like urban canyons or near other buildings, it is necessary to find other constraints. The results achieved in Section 5.3 are for a special case and probably not applicable to more complex scenarios. The application to a more general scenario may lead to false matches. The necessary filtering may delay the identification of the correct gate up to a point where the aircraft is too close to it for a controlled fly-through manoeuvre. Further, it is possible that other, incorrect gates are detected before the right gate is found. Both behaviours were shown by the algorithm before the current threshold parameters were found, and they can lead to failures.

Nevertheless, it was shown that a threshold based on the number of hits, in combination with the distance between the pillars, works for this special scenario; however, the approach has only limited scope for generalization.

6.5 Outlook

The experimental results presented in the paper show that laser scanner data are well suited to sense the environment during a UA flight. But the approach has drawbacks which have to be solved. The narrow vertical field of view of the single line laser scanner, as well as the use of bounding boxes as obstacle representation, prevent an identification and classification of the detected obstacles in a timely and accurate manner. Although the drawback of the narrow field of view of a single line scanner cannot be resolved, it may be bypassed. One solution is the usage of upgrades with drives and mirrors, with the corresponding problems as described in Section 2.1.

Another promising approach, which is going to be implemented, is a sensor fusion of laser scanner and camera data. A typical array camera has a larger vertical field of view than the single line scanner. This eases the identification of objects in the environment. The fusion with a mono camera builds up a 3D environment awareness which combines the fields of view and allows a transfer of laser measurements to explicit patterns or textures. The transfer allows a partial expansion of the 3D view in the vertical direction. A fusion between scanner and stereo camera identifies false positives, which can emerge from stereo mismatching, provides initial values for the stereo matching algorithm, or generates more accurate absolute measurements than either single sensor.


References

1. Adolf, F.-M., Andert, F., Lorenz, S., Goormann, L., Dittrich, J.: An unmanned helicopter for autonomous flights in urban terrain. In: German Workshop on Robotics (2009)

2. Altenkirch, D.: WASLA-HALE III: Nachweis von Techniken und Verfahren für die Teilnahme von UAVs am allgemeinen Luftverkehr. Workshop Unbemannte Luftfahrzeuge, Manching, 29 May 2006

3. Andert, F., Goormann, L.: Combined grid and feature-based occupancy map building in large outdoor environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2007, pp. 2065–2070 (2007). ISBN 978-1-4244-0912-9

4. Freed, M., Fitzgerald, W., Harris, R.: Intelligent autonomous surveillance of many targets with few UAVs. http://www.entish.org/published/freed-fitzgerald-harris-05.pdf (2005)

5. Grzonka, S., Grisetti, G., Burgard, W.: Towards a navigation system for autonomous indoor flying. In: IEEE International Conference on Robotics and Automation, ICRA '09, pp. 2878–2883, 12–17 May 2009. doi:10.1109/ROBOT.2009.5152446

6. He, Z., Iyer, R.V., Chandler, P.R.: Vision-based UAV flight control and obstacle avoidance. In: American Control Conference, 14–16 June 2006. doi:10.1109/ACC.2006.1656540

7. Hokuyo Automatic (ed.): Scanning Laser Range Finder UTM-30LX/LN Specification. http://www.hokuyo-aut.jp/02sensor/07scanner/download/data/UTM-30LX_spec.pdf (2009)

8. Hrabar, S.: 3D path planning and stereo-based obstacle avoidance for rotorcraft UAVs. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2008, pp. 807–814, 22–26 Sept 2008. doi:10.1109/IROS.2008.4650775

9. International Micro Air Vehicle Conference (ed.): IMAV 2010 flight competition: mission description and rules. http://imav2010.org/Mission_Discription_and_Rules_IMAV_2010.pdf (2010)

10. Jähne, B.: Digital Image Processing, 5th rev. and extended edn. Springer, Berlin (2002). http://www.gbv.de/dms/hebis-darmstadt/toc/104523808.pdf. ISBN 3540677542

11. Kemkemian, S., Nouvel-Fiani, M., Cornic, P., Le Bihan, P., Garrec, P.: Radar systems for sense and avoid on UAV. In: Radar Conference—Surveillance for a Safer World, pp. 1–6 (2009)

12. Lin, Y., Hyyppä, J., Jaakkola, A.: Mini-UAV-borne LIDAR for fine-scale mapping. IEEE Geoscience and Remote Sensing Letters PP(99), 426–430 (2010). ISSN 1545-598X. doi:10.1109/LGRS.2010.2079913

13. Meister, O., Frietsch, N., Ascher, C., Trommer, G.F.: Adaptive path planning for VTOL-UAVs. In: Position, Location and Navigation Symposium, 2008 IEEE/ION, pp. 1252–1259, 5–8 May 2008. doi:10.1109/MAES.2009.5208559

14. Miura, J., Negishi, Y., Shirai, Y.: Mobile robot map generation by integrating omnidirectional stereo and laser range finder. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 250–255. IEEE Operations Center, Piscataway, NJ (2002). ISBN 0-7803-7398-7

15. Nagai, M., Tianen, C., Shibasaki, R., Kumagai, H., Ahmed, A.: UAV-borne 3-D mapping system by multisensor integration. IEEE Transactions on Geoscience and Remote Sensing 47(3), 701–708 (2009). ISSN 0196-2892. doi:10.1109/TGRS.2008.2010314

16. Reimer, M., Wagner, B.: 3D scanning on a small mobile robot. In: Autonomous Minirobots for Research and Edutainment, pp. 65–72, 2–5 Oct 2007

17. Scherer, S., Singh, S., Chamberlain, L., Elgersma, M.: Flying fast and low among obstacles: methodology and experiments. Int. J. Rob. Res. 27, 549–574 (2008). ISSN 0278-3649. doi:10.1177/0278364908090949

18. Ng, T.C., Guzman, J.I., Tan, J.C.: Development of a 3D LADAR system for autonomous vehicle guidance. In: IEEE Conference on Robotics, Automation and Mechatronics 2004, no. 1, pp. 13–18 (2004). http://www.simtech.a-star.edu.sg/Research/TechnicalReports/tech-report2005/STR_05-01_03_NTC.pdf

19. Wang, Y., Wang, G.: 3D feature points reconstruction based on stereo vision of UAV. In: 2nd International Conference on Information Engineering and Computer Science (ICIECS), pp. 1–4, 25–26 Dec 2010. doi:10.1109/ICIECS.2010.5677735

20. Whalley, M., Schulein, G., Theodore, C., Takahashi, M.: Design and flight test results for a hemispherical ladar developed to support unmanned rotorcraft urban operations research. In: AHS Annual Forum 64 (2008). http://www.vtol.org/f64_bestPapers/uav.pdf

21. Wulf, O., Wagner, B.: Fast 3D scanning methods for laser measurement systems. http://www.rts.uni-hannover.de/images/5/5f/Wulf03-CSCS.pdf (2003)

22. Zhiyu, X., Eryong, W.: Design and calibration of a fast 3D scanning LADAR. In: Proceedings of the 2006 IEEE International Conference on Mechatronics and Automation, pp. 211–215. IEEE Operations Center, Piscataway, NJ (2006). ISBN 1-4244-0465-7