Autonomous Ground Vehicles – Concepts and a Path to the Future –

Thorsten Luettel, Michael Himmelsbach, and Hans-Joachim Wuensche

Abstract—Autonomous vehicles promise numerous improvements to vehicular traffic: an increase in both highway capacity and traffic flow because of faster response times, less fuel consumption and pollution thanks to more foresighted driving, and hopefully fewer accidents thanks to collision avoidance systems. In addition, their drivers can save time for more useful activities.

In order for these vehicles to safely operate in everyday traffic or in harsh off-road environments, a multitude of problems in perception, navigation and control have to be solved.

This article gives an overview of the most current trends in autonomous vehicles, highlighting the concepts common to most successful systems as well as their differences. It concludes with an outlook into the promising future of autonomous vehicles.

Index Terms—Autonomous Driving, UGV, Machine Vision, LIDAR, Environment Perception, Navigation, Control

I. INTRODUCTION

Autonomous driving started in the 1980s, when Carnegie Mellon University presented its Navlab vehicles that operated in structured environments [1], and when UniBw Munich showed early results in high-speed motorway driving [2]. At the final demonstration of the EUREKA-PROMETHEUS project in 1994, UniBw Munich and Daimler-Benz presented autonomous driving in three-lane French Autoroute traffic at speeds up to 130 km/h, which included tracking both lane markings and other vehicles. The system decided by itself when to change lanes, although the approval of a human driver was required for safety reasons [3].

Today, some aspects of these early achievements have reached series production in the form of driver assistance systems for cars. Lane detection is used to give lane departure warnings (LDW) to the driver and to augment the driver's heading control in lane keeping assist systems (LKAS). The detection and tracking of vehicles driving ahead is used in adaptive cruise control systems (ACC) to keep a safe and comfortable distance. More recently, precrash systems emerged that trigger full braking power to lessen damage if a driver reacts too slowly.

Meanwhile, research in autonomous vehicles has shifted its focus from the well-structured environments encountered on highways, as studied in the beginning, to more unstructured environments like urban traffic or off-road scenarios. This trend has been boosted by the 2001 National Defense Authorization Act, in which the US Congress mandated that “by 2010, one third of the aircraft (...) fleet and (...) by 2015, one third of the operational ground combat vehicles are unmanned”.

All authors are with the Institute for Autonomous Systems Technology (TAS), University of the Bundeswehr Munich, 85577 Neubiberg, Germany. Contact author email: [email protected]

Manuscript received September 16, 2011; revised February 5, 2012.

Fig. 1. In the DARPA Urban Challenge 2007 six vehicles finished the race. This image shows “BOSS” (Tartan Racing Team, Carnegie Mellon University, 1st place), “Junior” (Stanford Racing Team, Stanford University, 2nd place) and “Odin” (Team Victor Tango, Virginia Tech, 3rd place). The sensors on the roofs of these cars are nearly the same: LIDARs, cameras and GPS antennas.

Especially for unmanned ground vehicles (UGVs), DARPA (the Defense Advanced Research Projects Agency) continues to drive development at universities and in industry toward this goal.

In this article, we give an overview of current research activities in the field of autonomous vehicles since the DARPA challenges. We present the key concepts that evolved during recent years all over the world and describe the performance of autonomous vehicles today. The article ends with an outlook to the future of autonomous vehicles.

II. PLATFORMS AND SENSORS

Three elements are common to all autonomous ground vehicles: sensors to perceive the environment and the vehicle's own motion, on-board computers, and actuators for vehicle control. Figure 1 shows the LIDARs, cameras and GPS antennas used on three vehicles of the DARPA Urban Challenge teams.

For environment perception, both image-based sensors like monocular and stereo cameras (monochrome and color) and range sensing devices like RADAR and LIDAR are used. The high-definition Velodyne LIDAR with a 360° 3D view and rich point cloud was designed especially for autonomous vehicles and is used in many systems. RADAR sensors are additionally able to determine an object's relative velocity directly. Distance-providing image-based sensors are mostly based on a time-of-flight principle. Another principle, called Light Coding, is utilized in the low-priced Microsoft Kinect, now popular for indoor robotics. However, neither principle is yet suited for outdoor use in autonomous vehicles because of its limited range.

To estimate the vehicle's motion, measurements from odometry and inertial sensors are incorporated, supported mainly by global position measurements from GPS.

Typical sensor setups need to be calibrated: Intrinsic parameters like the camera lens' focal length or the orientation of laser diodes must be determined. Sensor fusion additionally requires an extrinsic calibration of all fused sensors, describing the position of each sensor in a common reference frame. Especially for non-rigid frames and platforms, continuously keeping the intrinsic and extrinsic calibration up-to-date by adjusting it online while the autonomous vehicle operates becomes very important [4]. Sensor fusion is performed in most current systems, especially when complementary sensors like color cameras (good angular resolution, no distance information) and range measuring devices (no color, bad angular resolution, precise distance information) are available. Sensor fusion may then proceed at different levels of abstraction: It is possible to fuse the raw data of the sensors, e. g. to produce a colored point cloud. The next level of fusion is done at the feature level, e. g. tracking a road boundary based on 3D measurements from LIDAR and image features from vision [5]. The last level of fusion works on the object level, e. g. fusing objects detected by RADAR and LIDAR into a single track estimate [6].
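
As an illustration of the lowest, raw-data fusion level, the following Python sketch colors a LIDAR point cloud by projecting it into a calibrated camera image. It is a minimal example assuming an ideal, undistorted pinhole camera; the intrinsic matrix K and the extrinsic transform T_cam_lidar are placeholders for the results of the calibration described above, not values from any of the cited systems.

```python
import numpy as np

def color_point_cloud(points_lidar, image, K, T_cam_lidar):
    """Raw-data-level fusion sketch: attach image colors to LIDAR points.

    points_lidar: (N, 3) points in the LIDAR frame.
    image:        (H, W, 3) camera image, assumed undistorted.
    K:            (3, 3) pinhole intrinsics from the intrinsic calibration.
    T_cam_lidar:  (4, 4) extrinsic calibration, LIDAR frame -> camera frame.
    """
    # Transform points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points well in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.5]

    # Project with the pinhole model and round to pixel coordinates.
    uvw = (K @ pts_cam.T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)

    # Keep only projections that fall inside the image.
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    colors = image[uv[ok, 1], uv[ok, 0]]      # one RGB value per point
    return np.hstack([pts_cam[ok], colors])   # columns: x, y, z, r, g, b
```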

All distributed or centralized processing on-board the vehicle has to be real-time capable. This is an important prerequisite for vehicle control algorithms and system safety checks.

Actuators are necessary for closing the control loop, e. g. for steering wheel, brake or throttle control.

For safety reasons, providing redundancy in both the sensor setup and the data processing is common practice, especially when shifting functionality from research prototypes to series production. Still, legal reasons require a human “safety driver” for today's autonomous vehicles to operate in public traffic.

III. PERCEPTION

The ability to perceive the vehicle's local environment is one of the main challenges in the field of autonomous ground vehicles. Environmental conditions like lighting or colors are constantly changing, and there are many static as well as dynamic objects in the scene to be taken into account. The best perception results are typically achieved by capitalizing on the strengths of the different sensors mentioned in Sec. II. In the following we give examples of important perception tasks in the field of autonomous vehicles and their realizations.

A. Vehicle State Estimation and Ego-Motion Compensation

As a prerequisite for the perception and control modules, a good estimate of the vehicle's motion is necessary. Especially when the vehicle is moving fast or on non-flat terrain, relevant rotation along the longitudinal and lateral axes occurs. When working with measurements that are not all taken at the same instant, it becomes important to compensate for the vehicle's ego-motion in all measurements [7]. A reliable position estimate is also essential for trajectory control [8].

For a good estimate of the motion, all measurements regarding the vehicle's own movement are incorporated. Typical measurements are velocity and steering angle from odometry and accelerations, angular rates and attitude from inertial sensors. Additionally, motion information from visual odometry [9] can be incorporated. Considering the actual length of the suspension struts allows deriving the sensor height and the angle to the ground plane. If global position measurements like GPS are available, they can be used to calculate a geo-referenced position estimate, which is necessary when following global plans. Making use of a motion model for the vehicle, all these measurements are fused using Kalman Filter techniques to provide a smooth state estimate. Usually, two different types of position are estimated: A jump-free position estimate, which is subject to drift, is based on integrating the vehicle's motion only. Another way to estimate the position incorporates global measurements like GPS. This compensates for drift, but frame jumps can occur, so it is less suited to compensating for the vehicle's motion in the sensor data.
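
The fusion idea can be illustrated with a deliberately simplified linear Kalman filter that combines odometry velocities with GPS positions; the state layout, motion model and noise values below are illustrative assumptions and far simpler than the full attitude-aware filters used on real vehicles.

```python
import numpy as np

class SimpleStateEstimator:
    """Minimal linear Kalman filter sketch: planar position and velocity.

    The state is [x, y, vx, vy]; real systems also estimate attitude and use
    richer motion models. All noise parameters here are illustrative.
    """

    def __init__(self):
        self.x = np.zeros(4)                        # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0                   # state covariance
        self.Q = np.diag([0.05, 0.05, 0.5, 0.5])    # process noise

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                      # constant-velocity model
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q * dt

    def _update(self, z, H, R):
        y = z - H @ self.x                          # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_odometry(self, vx, vy, sigma=0.2):
        H = np.array([[0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self._update(np.array([vx, vy]), H, np.eye(2) * sigma**2)

    def update_gps(self, x, y, sigma=2.0):
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self._update(np.array([x, y]), H, np.eye(2) * sigma**2)
```

Running such a filter without the GPS update yields the jump-free but drifting estimate mentioned above, while enabling it gives the geo-referenced estimate with possible jumps.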

B. Static Obstacles

For handling static obstacles in the environment, occupancy grid mapping is commonly used. Data from successive LIDAR point clouds or stereo camera range images is condensed into a regular metric grid. For mapping tasks like this, precise compensation of the vehicle's own movement is important.

Each grid cell stores a probability for the existence of an obstacle at the particular location. Additionally, minimum and maximum height measurements are often recorded to be able to estimate, e. g., the slope of the terrain. Determining a cell's terrain type [10] and color [11] has also been addressed. The size of such maps is restricted to a local area around the autonomous vehicle, and this size limits the vehicle's maximum velocity. These grids usually do not exceed a size of 200 m×200 m, with square cells whose edge length varies from 0.1 m to 0.5 m. This size and resolution is enough to support most safe driving decisions. Still, if generating a map of the environment is an explicit goal, larger maps can be built the same way. Mapping then has to cope with the large amount of storage space necessary.
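
A minimal sketch of such a grid cell, assuming a log-odds occupancy representation and the common min/max height bookkeeping, could look as follows; the update increments and clamping limits are illustrative choices, not values from the cited systems.

```python
import numpy as np

class GridCell:
    """One cell of a local occupancy grid (sketch).

    Occupancy is kept as log-odds so that repeated measurements can be
    accumulated by simple addition.
    """
    __slots__ = ("log_odds", "z_min", "z_max")

    def __init__(self):
        self.log_odds = 0.0          # 0.0 corresponds to p = 0.5 (unknown)
        self.z_min = np.inf          # lowest measured height in this cell
        self.z_max = -np.inf         # highest measured height in this cell

    def integrate_hit(self, z, l_occ=0.85):
        # A range measurement ending in this cell raises the obstacle belief.
        self.log_odds = min(self.log_odds + l_occ, 10.0)
        self.z_min = min(self.z_min, z)
        self.z_max = max(self.z_max, z)

    def integrate_miss(self, l_free=0.4):
        # A ray passing through the cell lowers the belief, e.g. to clear
        # trails left by moving objects.
        self.log_odds = max(self.log_odds - l_free, -10.0)

    @property
    def p_occupied(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))

    @property
    def height_span(self):
        # A large span hints at a vertical obstacle, a small one at terrain.
        return self.z_max - self.z_min if self.z_max >= self.z_min else 0.0
```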

Special care must be taken with moving objects, which leave a trail of obstacle cells in the grid [6]. If these objects are known from tracking, their estimated position and motion can be fed back into the grid to clear the obstacle trails. Alternatively, a visibility analysis can be used to detect those cells as being free again [12].

When moving in a complex environment it is important not only to classify certain static objects, but also to deduce their meaning, e. g. by analyzing visual features of traffic lights and signs.

C. Traffic Participants and Other Moving Obstacles

The knowledge of the 3D position and motion of objects in the scene is essential for safe motion of the autonomous vehicle in the environment. The motion of different object classes is best described by different motion models. For example, Ackermann-steered vehicles like cars move on clothoid segments, while pedestrians can move in all directions. Their motion is also characterized by different maximum velocities and accelerations, and the behavior of an object may change over time.

Fig. 2. Results of UniBw's LIDAR-based object tracking: Black dots represent LIDAR measurements in the ground plane, brighter dots lie above. Tracked vehicles are drawn as green boxes with arrows pointing into the direction of movement. An approximation of the crossing's geometry (yellow) was added for clarification.


Fig. 3. Visual object tracking: (a) Detection using optical flow vectors (magenta) [13]. (b) Monocular model-based 3D vehicle tracking in an unstructured environment dealing with partial occlusions [5]. The blue and red image overlays show the projection of the current 3D position estimate of the geometric vehicle model onto the image plane.

It is thus desirable to estimate an object's motion by a mixture of different motion models, where the one that currently best explains the observations contributes most to the object's state estimate. State estimation is typically done with some instance of recursive Bayesian Filters (BF), the most prominent one being the Extended Kalman Filter (EKF).

The Velodyne LIDAR is ideally suited for detecting and tracking objects participating in urban traffic. Figure 2 shows results of UniBw's tracking system. The basic tracking proceeds by first removing all ground points and performing a 3D clustering of the remaining points. Object hypotheses derived from the 3D point clusters are then associated with existing tracks to update the state estimates. Depending on the success of data association, new tracks are added or spurious ones are removed. Mature tracks are finally classified into passenger cars, trucks, bikes and pedestrians by examining their appearance and their history of motion behavior.
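
The per-frame structure of such a pipeline is sketched below. Ground removal by a fixed height threshold, DBSCAN clustering and greedy nearest-neighbor association are deliberate simplifications of what fielded systems do, and all thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def process_lidar_frame(points, tracks, dt, ground_z=0.3, gate=2.0):
    """One cycle of a simplified LIDAR tracking pipeline (sketch only).

    points: (N, 3) ego-motion-compensated point cloud in the vehicle frame.
    tracks: list of dicts with keys 'pos' (2D), 'vel' (2D) and 'misses'.
    """
    # 1. Ground removal: discard points close to the assumed ground plane.
    obstacles = points[points[:, 2] > ground_z]

    # 2. Cluster the remaining points into object hypotheses (2D centroids).
    detections = []
    if len(obstacles) > 0:
        labels = DBSCAN(eps=0.7, min_samples=8).fit_predict(obstacles[:, :2])
        detections = [obstacles[labels == k, :2].mean(axis=0)
                      for k in set(labels) if k != -1]

    # 3. Associate detections to predicted track positions (nearest neighbor).
    unused = list(range(len(detections)))
    for t in tracks:
        pred = t['pos'] + t['vel'] * dt          # constant-velocity prediction
        if not unused:
            t['misses'] += 1
            continue
        dists = [np.linalg.norm(detections[i] - pred) for i in unused]
        j = int(np.argmin(dists))
        if dists[j] < gate:
            i = unused.pop(j)
            t['vel'] = (detections[i] - t['pos']) / dt
            t['pos'] = detections[i]
            t['misses'] = 0
        else:
            t['misses'] += 1

    # 4. Track management: start new tracks, drop stale ones.
    tracks += [{'pos': detections[i], 'vel': np.zeros(2), 'misses': 0}
               for i in unused]
    return [t for t in tracks if t['misses'] < 5]
```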

Several methods exist to detect moving objects in the image plane. One possibility is to segment the image according to the optical flow, grouping pixels that show similar movements [13] (see Fig. 3a). Another one is to use a detector pre-trained to find a certain class of object in images. This also allows the detection of static objects. However, one main issue with tracking objects based on monocular camera images still is to initialize and constantly estimate the 3D position of the object from its detection in the 2D image plane. Thus, the range information provided by additional range measuring devices, such as stereo cameras, RADAR or LIDAR, is often used for this purpose.


Fig. 4. Road Shape Estimation: (a) Lane tracking on a marked highway utilizing a B-spline lane model [14]. (b) Tracking of unpaved roads and crossings in a rural environment [11]. The vehicle is about to turn into the right branch, indicated by the yellow line.

For some highly structured scenarios and with certain simplifying assumptions, the distance to other objects can also be inferred from a single camera image. For example, a vehicle driving ahead typically casts a characteristic shadow onto the ground. Assuming a flat world, this shadow can be used for estimating the distance and size of the vehicle [3]. These assumptions can be dropped when using monocular motion stereo. Exploiting the observability criterion as defined in modern control theory allows many state variables to be estimated recursively from one moving camera which would not be observable from static monocular cameras. An example of very precise tracking working in more challenging, off-road environments is shown in Fig. 3b, where a detailed 3D model of a certain vehicle with edges and color blobs is utilized in a recursive particle filter tracking framework [5].
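
For the flat-world case the geometry is particularly simple: with the optical axis assumed parallel to the ground, the distance to the point where the shadow (or the wheels) touches the ground follows directly from the pinhole model. The helper below is a minimal illustration with assumed parameters, not the estimator of [3].

```python
import math

def flat_world_distance(v_contact_px, v_horizon_px, f_px, cam_height_m):
    """Distance to a ground contact point under the flat-world assumption (sketch).

    v_contact_px: image row of the vehicle's shadow / wheel contact point.
    v_horizon_px: image row of the horizon for the current camera pitch.
    f_px:         focal length in pixels.
    cam_height_m: camera height above the (assumed flat) ground.
    Assumes an undistorted camera with its optical axis parallel to the ground.
    """
    rows_below_horizon = v_contact_px - v_horizon_px
    if rows_below_horizon <= 0:
        return math.inf   # contact at or above the horizon: no ground intersection
    return cam_height_m * f_px / rows_below_horizon
```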

D. Road Shape Estimation

Detecting static and dynamic obstacles is not sufficient for autonomous driving, although it helps with safety. For driving in a human-made environment, the ability to perceive the road shape is essential.

There has been much work on tracking lanes on marked roads in 2D images since the 1980s [3]. Current systems mostly use clothoid [6] or B-spline models [14] (see Fig. 4a) for the 3D lane representation. Both the model parameters and the relative position are estimated using Bayesian Filter derivatives. By projecting the 3D estimate of the model into the image, a very efficient measurement of directed edges of lane markings can be performed, introduced as the “4D approach” [3]. In addition to painted lines, curbs can also be integrated using LIDAR-based measurements [6].
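
To make the parametric representation concrete, a clothoid lane model expresses the lateral offset of the lane as a polynomial in the look-ahead distance. The sketch below evaluates the common third-order approximation; the state variables and example values are purely illustrative.

```python
def clothoid_lateral_offset(x, y0, psi, c0, c1):
    """Lateral offset of a clothoid lane model at look-ahead distance x (sketch).

    y0:  lateral offset of the vehicle from the lane center at x = 0
    psi: heading angle between vehicle and lane tangent (small-angle regime)
    c0:  curvature at x = 0
    c1:  curvature rate (change of curvature per meter)
    """
    return y0 + psi * x + 0.5 * c0 * x**2 + (1.0 / 6.0) * c1 * x**3

# Example: predict the lane position at a few look-ahead distances,
# e.g. before searching for lane-marking edges in the image.
predicted = [clothoid_lateral_offset(x, y0=0.2, psi=0.01, c0=1e-3, c1=1e-5)
             for x in (10.0, 20.0, 40.0)]
```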

In scenarios with less contrast, like on field and forest tracks, the edge-based method alone fails. Using additional image-based measurements like edge direction, color or texture, together with obstacles from an occupancy grid, enables navigation in these environments (see Fig. 4b) [11].

When navigating on road networks, crossings also have to be detected. The crossing's geometry can be estimated using the same perceptual features as in lane tracking [11], [15]. It is also possible to infer the crossing's geometry from other traffic participants' movements [16].

Fig. 5. Map-Aided Localization using probabilistic high-resolution infrared remittance ground maps [17]. Incoming laser scans are shown superimposed on a pre-recorded map. (a) When using GPS localization only, features from the map (gold) and from the current laser scan (grayscale) show offsets, as can be seen at the road markings. (b) These offsets are compensated by map-aided localization.

E. Map-Aided Localization

An imprecise localization of an autonomous vehicle might result in catastrophic behavior. The vehicle could drive on the wrong side of the road, it could believe a goal position to be inside a large obstacle, or it might even think pedestrians crossing the road to be on the sidewalk. It is well known that GPS signals get weak or corrupted in dense forests [7] or “urban canyons”. Modern navigation systems then fall back on integrating odometry and inertial sensors, but such a dead-reckoning solution diverges after a few minutes.

If some sort of map of the environment is known in advance, it can be used to improve localization by associating map features with features found by local sensing and determining the offset. Such approaches are summarized as map-aided localization, and they mostly differ in the kind of map and environment features. All of them again use some sort of Bayesian Filter to recursively estimate the offset from a history of feature associations. In [17], the map is given in terms of a 2D infrared reflectivity map built from LIDAR data, which is matched against the reflectivities of the current LIDAR point cloud (see Fig. 5). Two approaches use a map describing the road network topology and match it with different kinds of visual features of the road in the current field of view [6], [18]. An interesting approach is that of [19], which uses readily available aerial images, thus avoiding the need to construct a map beforehand.
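
The core matching step can be illustrated by a brute-force correlation of a rasterized local measurement against the pre-recorded map over a small set of candidate offsets. The exhaustive search and the normalized cross-correlation score are simplifications assumed for this sketch, whereas the cited systems embed the matching in a Bayesian Filter.

```python
import numpy as np

def estimate_map_offset(map_img, scan_img, search_px=10):
    """Find the 2D offset that best aligns a rasterized local measurement
    (e.g. LIDAR reflectivity) with a pre-recorded map on the same grid (sketch).

    map_img, scan_img: 2D float arrays; NaN marks cells without measurements.
    """
    best_offset, best_score = (0, 0), -np.inf
    for dy in range(-search_px, search_px + 1):
        for dx in range(-search_px, search_px + 1):
            shifted = np.roll(np.roll(scan_img, dy, axis=0), dx, axis=1)
            valid = ~np.isnan(shifted) & ~np.isnan(map_img)
            if valid.sum() < 100:
                continue                      # too little overlap to score
            a, b = shifted[valid], map_img[valid]
            # Normalized cross-correlation as an alignment score.
            a = (a - a.mean()) / (a.std() + 1e-6)
            b = (b - b.mean()) / (b.std() + 1e-6)
            score = float((a * b).mean())
            if score > best_score:
                best_score, best_offset = score, (dx, dy)
    return best_offset, best_score
```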

IV. BEHAVIOR, NAVIGATION AND CONTROL

Urban and off-road scenarios present a variety of navigational situations, each requiring a treatment tailored to the given situation. Navigating along an unrehearsed track through a forest under poor GPS conditions might require a different treatment than navigating through open terrain. The same is true for navigating in a parking zone vs. driving on highways.

For coordinating behavior in different situations, hierarchical finite state machines are a common tool [6], [12], [20] for the vehicles to decide what to do, but major differences exist in the navigation approaches.

A. Global Navigation

In contrast to this demand for flexibility, the concept of “Global Navigation” still dominates other approaches to autonomous navigation. The vehicle is guided by trajectories [23] derived from planning algorithms operating on global metric maps of the environment [21], as shown in Fig. 6a.

In global navigation the map separates the rest of the system from perception: Situation assessment, behavior selection and path planning do not need to know which sensors or algorithms were used to create the map. They just rely on the information contained in it. One of the key tasks is to create the map and maintain its consistency over time. A difficulty is to adjust a once-planned trajectory when the map is updated. Usually, global re-planning takes much longer than updating the map with new sensor information. A solution is to locally restrict the search space for re-planning by using a global base trajectory as reference [8].
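
One way to picture such a restriction is a corridor mask around the base trajectory, so that the planner only expands grid cells close to the previously planned route. The function below is a sketch under that assumption, with an illustrative corridor width.

```python
import numpy as np

def corridor_mask(grid_shape, base_trajectory, cell_size, half_width_m=6.0):
    """Restrict re-planning to a corridor around a global base trajectory (sketch).

    grid_shape:      (rows, cols) of the local planning grid.
    base_trajectory: list of (x, y) waypoints in grid metric coordinates.
    Returns a boolean mask; a graph search would only expand cells where the
    mask is True, which keeps re-planning times short.
    """
    rows, cols = grid_shape
    ys, xs = np.meshgrid(np.arange(rows) * cell_size,
                         np.arange(cols) * cell_size, indexing="ij")
    mask = np.zeros(grid_shape, dtype=bool)
    for wx, wy in base_trajectory:
        mask |= (xs - wx) ** 2 + (ys - wy) ** 2 <= half_width_m ** 2
    return mask
```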

B. Reactive Navigation

Reactive navigation mechanisms can lead to a complete exploration of an environment when run long enough. To illustrate this idea, consider a robot that moves within an arbitrary polygon and behaves like a billiard ball. Even when the polygon is very complex, the robot will eventually explore the complete polygon.

In an autonomous driving scenario, the polygon edges correspond to curbs, other waysides and obstacles, and the reflection behavior can be replaced, e. g., by the “tentacles” approach [24]. Figure 6c shows the basic principles of this navigation approach, using a local occupancy grid for obstacle avoidance.
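
A minimal version of this idea evaluates a set of precomputed arc-shaped driving primitives against the local occupancy grid and picks a drivable one close to the desired direction; the geometry, thresholds and preference rule below are illustrative simplifications of the tentacles approach [24], not its actual implementation.

```python
import numpy as np

def sample_tentacle(curvature, length=20.0, step=0.5):
    """Sample points along one circular-arc driving primitive ('tentacle')."""
    s = np.arange(0.0, length, step)
    if abs(curvature) < 1e-6:
        return np.column_stack([s, np.zeros_like(s)])
    return np.column_stack([np.sin(curvature * s) / curvature,
                            (1.0 - np.cos(curvature * s)) / curvature])

def select_tentacle(occupancy, origin, cell_size, curvatures, preferred_idx):
    """Pick a drivable tentacle from a precomputed set (sketch).

    occupancy:     2D grid of obstacle probabilities in [0, 1].
    origin:        (row, col) grid index of the vehicle position.
    curvatures:    candidate curvatures, ordered from left to right.
    preferred_idx: index of the tentacle best aligned with the target
                   direction (e.g. towards the next GPS waypoint).
    """
    best, best_cost = None, np.inf
    for i, c in enumerate(curvatures):
        pts = sample_tentacle(c)
        rows = origin[0] - np.round(pts[:, 1] / cell_size).astype(int)
        cols = origin[1] + np.round(pts[:, 0] / cell_size).astype(int)
        inside = (rows >= 0) & (rows < occupancy.shape[0]) & \
                 (cols >= 0) & (cols < occupancy.shape[1])
        occ = occupancy[rows[inside], cols[inside]]
        if occ.size and occ.max() > 0.5:
            continue                      # this tentacle is blocked
        cost = abs(i - preferred_idx)     # prefer tentacles near the target direction
        if cost < best_cost:
            best, best_cost = i, cost
    return best                           # None if no tentacle is drivable
```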

Global navigation does not require any object recognition capabilities, but it does require planning and mapping algorithms. Reactive navigation is just the opposite: A reactive mechanism directly couples navigation to perception, and object recognition is required to decide when the goal has been reached.

C. Guided Navigation

Since global navigation and reactive navigation are antithetic paradigms in terms of planning and perception, it would not be surprising if a paradigm existed in between, and indeed one does: We refer to it as guided navigation because it uses structures in the environment whose perception allows the direct estimation of feedback values for control. This approach lies in between global and reactive navigation, because it does not use metric planning, yet the resulting motion patterns are not random at all.

A good example of guided navigation is the task of following a road or an off-road track. A track can be followed by perceiving and estimating its curvature and its state relative to the vehicle, without determining any position or planning any trajectory. Instead, the trajectory is perceived in terms of the track [3]. Apart from track following there are numerous other examples where guided navigation is possible: It can be used for platooning (see Fig. 6b) [5], for navigation at intersections [11], for approaching a landmark or for driving through a gate.
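
As an illustration of how perceived track quantities can directly feed vehicle control, the sketch below turns lateral offset, heading error and track curvature into a steering angle using a simple feedforward plus feedback law; the gains, wheelbase and the control law itself are illustrative assumptions, not the controller of any cited system.

```python
import math

def guided_steering(lateral_offset, heading_error, track_curvature,
                    wheelbase=2.8, k_y=0.3, k_psi=1.0):
    """Compute a steering angle directly from perceived track quantities (sketch).

    lateral_offset:  distance of the vehicle from the track center [m]
    heading_error:   angle between vehicle heading and track tangent [rad]
    track_curvature: curvature of the track at the vehicle position [1/m]
    The feedforward term follows the track curvature; the feedback terms
    reduce lateral offset and heading error.
    """
    feedforward = math.atan(wheelbase * track_curvature)
    feedback = -k_y * lateral_offset - k_psi * heading_error
    return feedforward + feedback
```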

Fig. 6. Navigation. (a) Global navigation: Long distance navigation results from Urban Challenge finalist Team AnnieWay [21]. Paths were planned from position A to B, B to C and C to D subsequently. (b) Guided navigation in platooning: a clothoid (orange), fitted to the last estimated leader vehicle positions (green), provides the feedback values for vehicle control. (c) Combined reactive and global off-road navigation with “tentacles”. Drivable tentacles, a subset of precalculated feasible driving primitives, are colored according to the results of their evaluation in an occupancy grid (transparent overlay, brightness indicates obstacle probability), where green tentacles are preferred over red ones. The tentacle selected for motion execution is shown in yellow. The large offset between the autonomous vehicle and a GPS target trajectory (cyan) is typical for difficult off-road scenarios [22].

Planning is also possible with guided navigation, not in terms of trajectories but in terms of the structure used for guidance. This way, behaviors like “turn right at the third crossing” can be planned, provided the related landmarks can be perceived.

D. Combinations

Depending on the situation, one or the other navigation paradigm might be preferable. When trying to park a car in a narrow spot at the far end of a parking lot with many obstacles in the way, planning a global trajectory might be the choice. When driving along a path, guided navigation is the natural choice. When no guidance structures exist, perception conditions are bad, or immediate reactions have to be taken, reactive navigation is desirable. Reactive navigation can easily be combined with both other navigation approaches. The tentacles approach to reactive navigation was combined with GPS (see Fig. 6c) at competitions like the DARPA Urban Challenge [25] and the European Land Robot Trials (ELROB) from 2007–2010 [7], [22]. Similarly, [8] used a reactive approach in combination with a global trajectory.

Global and guided navigation are fundamentally different. However, they can still be combined by temporal chaining or at different levels of navigation. Global navigation is applied at the strategic level, considering larger distances and omitting details. At the finer operational scale, where high precision is demanded and unexpected events might occur, guided navigation can be used.

V. PERFORMANCE

The DARPA Grand Challenge (GC) was a competition for driverless vehicles, sponsored by the US Department of Defense. The courses of GC 2004 and 2005 led through the Mojave Desert and were described by a large number of GPS waypoints. Because no vehicle finished in 2004, more effort was made to make the 2005 competition practicable. The environment could be assumed to be static, hence the focus was more on perception than on planning. Stanford's car “Stanley” won this race [8]; three other robots finished within the time limit.

A remarkable attempt to show the possibilities of complex robot behavior was the DARPA Urban Challenge (UC) in 2007, where several unmanned cars managed to autonomously drive together with manned cars in urban traffic and to observe the local traffic rules (see Fig. 1). The road network was specified in detail by GPS waypoints. Precise localization on this network and object detection had to be performed by perception; most teams used global navigation. But the real challenge was to make intelligent decisions when interacting with other cars. Six cars finished successfully, with CMU's “Boss” [6] winning ahead of Stanford's “Junior” [12].

The success achieved during the DARPA challenges is now continued with Google's driverless car, driven forward by leading people from Stanford's and CMU's DARPA teams. In 2011, they reported having driven more than 300,000 km in public traffic [26], though still with occasional manual interventions. They use previously collected data together with current sensor information from LIDAR and camera to perform a precise localization in a global map. The complete navigation task is done in this global system, of course taking other traffic into account.

The Technical University of Braunschweig, Germany, also a UC finalist with team CarOLO, presented its ongoing project “Stadtpilot” in 2010. The autonomous car “Leonie” drove in heavy traffic on Braunschweig's inner ring road [20].

In the VisLab Intercontinental Autonomous Challenge (VIAC), a group of cars took three months to autonomously drive from Italy to the Shanghai Expo 2010 [27]. The identical cars were equipped with different modes of autonomy, like “lane keeping”, “waypoint following” or “leader-follower mode”, using LIDAR and, preferably, visual information. The environment consisted of paved and unpaved roads, all with regular traffic, hilly passages, and deserts. Depending on the situation and environment, the leading vehicle used its lane keeping capabilities or was driven manually. The following vehicle typically just followed the first one, using vehicle tracking and waypoint following. No global planning was used during this experiment due to a lack of maps for large parts of the course from Italy to China.

ELROB is a yearly trial for autonomous vehicles, targeting university and industry teams. Compared to the GC and UC, the environment is by far less structured, with unpaved muddy roads through dense woods, sometimes hilly terrain, and only a few moving obstacles. One of the many scenarios is autonomous navigation, with only 10–30 waypoints (instead of about three thousand in the GC) describing the coarse route. Since 2007, we have participated with our vehicle MuCAR-3 (Munich Cognitive Autonomous Robot Car, 3rd generation) in different scenarios. Our performance has increased from year to year [7], [22], [24], coping with poor GPS conditions in the dense woods, where GPS offsets of up to 50 m caused trouble for many participants.

VI. PATH TOWARDS THE FUTURE

In general, we expect ongoing technical progress in sensor performance and cost-performance ratio for the future. This covers, among other things, improvements of global localization systems, e. g. by using Galileo, more precise inertial sensors, cameras that deal better with bad lighting conditions, miniaturized LIDAR sensors adapted for vehicle integration, active 3D cameras becoming suited to outdoor environments, and of course faster computers to handle this growing amount of data. But sensors are just a prerequisite for better perception, leading to a higher grade of autonomy, and not the key to solving the real problems.

A. Available Today and in the Near Future

Many driver assistance systems which emerged from autonomous vehicle research are available for passenger cars today, like the lane keeping assist system (LKAS). In the coming years, stop-and-go systems will appear that take over full control of the vehicle. The velocity and scope at which these systems operate is likely to increase beyond the traditional “traffic jam on highways” scenarios [28]. More complex scenarios like road construction sites with inconsistent lane markings are another topic of current research [29]. But, for legal reasons, these systems will still require the driver to hold the steering wheel for the next couple of years.

Also in the agricultural sector, much effort is invested to achieve a higher level of automation, known as “Precision Farming”. The main objective is to reduce cost as well as pollution: higher machine utilization, lower manpower requirements and a reduced amount of fertilizer. Tractors with “hands-off” driving have been in use for about a decade. They are based either on a guided navigation approach, following structures in the field, or on a global navigation approach, precisely following GPS waypoints [30]. To this day the tractors are manned for supervision because there is no obstacle avoidance. There is development towards unmanned tractors, too, e. g. a team of two vehicles operated by just one operator [31], though not yet ready for production. In the agricultural sector the vehicles operate on private terrain, possibly fenced, hence unmanned driving seems quite possible from a legal point of view.

In contrast, having completely autonomous unmanned vehicles in real public traffic is still legally vague. Who is accountable for accidents: the (not-driving) driver, or the car manufacturer? After lobbying from Google's driverless car project, Nevada passed the first laws regarding autonomous vehicles in 2011, at least to facilitate research in this field.

In the coming years, autonomous driving in public traffic will still be limited to very structured environments, because even small changes in the environment can destroy the performance of the complete perception system [32].

Redundancy in sensing and processing is important, e. g. by using different principles of sensing (cameras and RADAR/LIDAR), to better cope with environmental effects like direct sunlight. This will help to increase the reliability of autonomous vehicles.

B. Mid-Term Future

Most current research systems like the Google driverless cars require detailed pre-recorded maps to perform their map-aided localization and global navigation task.

In a couple of years, these maps could become more common, like today's vector-based street maps from geo information systems (GIS). An aim could be to have all autonomous vehicles continuously updating these maps, maybe also sharing them via car-to-car (C2C) and car-to-infrastructure (C2I) communication. Issues like how to model the world for building maps, e. g. using landmarks in a topological manner as an alternative to grid-based metric maps, have to be dealt with, as well as dynamic aspects in maps and limited communication bandwidth.

For autonomous vehicles in real off-road environments, the problem of correctly distinguishing flexible, and thus drivable, obstacles like vegetation from real ones like stones has to be solved, as does a better detection of water obstacles. This includes the intelligence to distinguish a very wet but drivable road from a lake, or a flat water-filled pothole from a deep one.

A robust system is based not only on the perception itself but also on a kind of self-reflection, being aware of the reliability of its sensor and perception modules [33], [34]. Additionally, strategies have to be developed so that planning can safely cope with untrustworthy perception.

It is very interesting to observe that even car manufacturers that opposed, or at least did not push, autonomous vehicles in the past now investigate such systems, up to autonomous driving tests on public roads [35]. So the field is gaining momentum, and it now seems more likely than just a couple of years ago that we will experience autonomously driving vehicles in everyday traffic within the next 10 to 15 years.

C. Possible Long-Term Future

Venturing an outlook into the long-term future, much research work is necessary to build more cognitive systems and reach a higher grade of autonomy. Autonomous vehicles will behave in a more sophisticated, abstract, human-like way, i. e. relative to other objects: “Follow this road to the red church, turn right and stop at the bakery with the big tree in front.”

An important step towards this goal is learning, e. g. what other vehicles look like and how they move, and using this knowledge for perception, navigation and interaction purposes.

This knowledge has to be continuously updated, and some parts can also be forgotten, e. g. as uncertainty rises. For adapting the vehicle's behavior to that of its owner, e. g. more sporty or more comfortable, the human's driving behavior could be observed, but the resulting behavior must always comply with traffic rules. One example is imitation learning, i. e. deducing the vehicle's behavior from the observed behavior of other traffic participants or from manually operated runs [36]. However, there will always be unexpected situations never seen and “learned” before, which the vehicle nevertheless has to cope with.

The difficult interaction between autonomous vehicles and other traffic participants, both robots and humans, will also need much work. For this purpose, the behavior and intentions of other traffic participants have to be inferred.

The terrain called off-road today is mostly just rural, e. g. small paths through woods or fields. For performing in real off-road terrain, much work has to be done on classification and path planning in full 3D, e. g. dealing with overhanging obstacles and with very different soil conditions.

All in all, much more work is required until autonomous vehicles can safely and robustly participate in real-world urban traffic as well as in complex off-road scenarios.

REFERENCES

[1] C. Thorpe, M. Hebert, T. Kanade, and S. Shafer, “Toward Autonomous Driving: The CMU Navlab. Part II: System and Architecture,” IEEE Expert, vol. 6, no. 1, pp. 44–52, Aug. 1991.
[2] E. D. Dickmanns and A. Zapp, “Autonomous High Speed Road Vehicle Guidance by Computer Vision,” in Proc. 10th IFAC World Congress, vol. 4 (preprint), 1987, pp. 232–237.
[3] E. D. Dickmanns, Dynamic Vision for Perception and Control of Motion. London: Springer Verlag, 2007.
[4] J. Levinson, J. Askeland, et al., “Towards fully autonomous driving: Systems and algorithms,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 163–168.
[5] M. Manz, T. Luettel, F. von Hundelshausen, and H.-J. Wuensche, “Monocular Model-Based 3D Vehicle Tracking for Autonomous Vehicles in Unstructured Environment,” in Proc. IEEE Int'l Conf. on Robotics and Automation (ICRA), 2011.
[6] C. Urmson, J. Anhalt, et al., “Autonomous Driving In Urban Environments: Boss and the Urban Challenge,” Journal of Field Robotics, vol. 25, no. 1, pp. 425–466, June 2008.
[7] T. Luettel, M. Himmelsbach, et al., “Autonomous Offroad Navigation Under Poor GPS Conditions,” in 3rd Workshop On Planning, Perception and Navigation for Intelligent Vehicles (PPNIV), Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems (IROS), 2009.
[8] S. Thrun, M. Montemerlo, et al., “Stanley: The robot that won the DARPA Grand Challenge: Research Articles,” Journal of Robotic Systems, vol. 23, no. 9, pp. 661–692, 2006.
[9] M. Schweitzer, A. Unterholzner, and H.-J. Wuensche, “Real-Time Visual Odometry for Ground Moving Robots using GPUs,” in Proc. Int'l Conf. on Computer Vision Theory and Applications (VISAPP), 2010, pp. 20–27.
[10] K. M. Wurm, R. Kummerle, C. Stachniss, and W. Burgard, “Improving Robot Navigation in Structured Outdoor Environments by Identifying Vegetation from Laser Data,” in Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems (IROS), 2009.
[11] M. Manz, M. Himmelsbach, T. Luettel, and H.-J. Wuensche, “Detection and Tracking of Road Networks in Rural Terrain By Fusing Vision and LIDAR,” in Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems (IROS), 2011.
[12] M. Montemerlo, J. Becker, et al., “Junior: The Stanford Entry in the Urban Challenge,” Journal of Field Robotics, vol. 25, no. 9, pp. 569–597, 2008.
[13] J. Klappstein, F. Stein, and U. Franke, “Monocular Motion Detection Using Spatial Constraints in a Unified Manner,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2006, pp. 261–267.
[14] H. Loose and U. Franke, “B-Spline-Based Road Model for 3D Lane Recognition,” in Proc. IEEE Intelligent Transportation Systems Conf. (ITSC), 2010, pp. 91–98.
[15] C. Duchow, “Videobasierte Wahrnehmung markierter Kreuzungen mit lokalem Markierungstest und Bayes'scher Modellierung,” Dissertation, Karlsruher Institut für Technologie, 2011.
[16] A. Geiger and B. Kitt, “ObjectFlow: A Descriptor for Classifying Traffic Motion,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2010.
[17] J. Levinson and S. Thrun, “Robust Vehicle Localization in Urban Environments Using Probabilistic Maps,” in Proc. IEEE Int'l Conf. on Robotics and Automation (ICRA), 2010, pp. 4372–4378.
[18] I. Miller, M. Campbell, and D. Huttenlocher, “Map-aided localization in sparse global positioning system environments using vision and particle filtering,” Journal of Field Robotics, vol. 28, no. 5, pp. 619–643, 2011.
[19] O. Pink, F. Moosmann, and A. Bachmann, “Visual Features for Vehicle Localization and Ego-Motion Estimation,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2009, pp. 254–260.
[20] F. Saust, J. Wille, B. Lichte, and M. Maurer, “Autonomous Vehicle Guidance on Braunschweig's inner ring road within the Stadtpilot Project,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 169–174.
[21] J. Ziegler, M. Werling, and J. Schroder, “Navigating car-like Robots in unstructured Environments using an Obstacle sensitive Cost Function,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2008, pp. 787–791.
[22] M. Himmelsbach, T. Luettel, et al., “Autonomous Off-Road Navigation for MuCAR-3 – Improving the Tentacles Approach: Integral Structures for Sensing and Motion,” KI – Künstliche Intelligenz, vol. 25, no. 2, pp. 145–149, 2011.
[23] M. Werling, L. Groll, and G. Bretthauer, “Invariant Trajectory Tracking With a Full-Size Autonomous Road Vehicle,” IEEE Trans. Robot., vol. 26, no. 4, pp. 758–765, Aug. 2010.
[24] F. von Hundelshausen, M. Himmelsbach, et al., “Driving with tentacles: Integral structures for sensing and motion,” Journal of Field Robotics, vol. 25, no. 9, pp. 640–673, 2008.
[25] S. Kammel, J. Ziegler, et al., “Team AnnieWAY's autonomous system for the DARPA Urban Challenge 2007,” Journal of Field Robotics, vol. 25, no. 9, pp. 615–639, 2008.
[26] S. Thrun and C. Urmson, “Self-Driving Cars,” San Francisco, CA, USA, Sept. 2011, presented in a keynote talk at IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems (IROS).
[27] M. Bertozzi, L. Bombini, et al., “VIAC: an Out of Ordinary Experiment,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 175–180.
[28] Volkswagen Group. (2011, June) Driving without a Driver – Volkswagen presents the “Temporary Auto Pilot”. Press Release. Wolfsburg, Germany / Boras, Sweden. [Online]. Available: http://www.haveit-eu.org/LH2Uploads/ItemsContent/117/HAVEitVolkswagen 110621 PR TAP EN.pdf
[29] T. Gumpp, D. Nienhuser, R. Liebig, and J. M. Zollner, “Recognition and Tracking of Temporary Lanes in Motorway Construction Sites,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2009, pp. 305–310.
[30] CLAAS Steering systems – Tracking Control Optimization. (2011) Visited 2011-12-23. [Online]. Available: http://www.claas.com/cc/servlet/contentblob/cl-pw/zzzz-Celum-DLC/pool/133959,bpSite=51524,property=data.pdf
[31] X. Zhang, M. Geimer, P. O. Noack, and L. Grandl, “Development of an intelligent master-slave system between agricultural vehicles,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2010, pp. 250–255.
[32] M. M. Moore and B. Lu, “Autonomous Vehicles for Personal Transport: A Technology Assessment,” SSRN eLibrary, 2011. [Online]. Available: http://ssrn.com/paper=1865047
[33] M. Campbell, M. Egerstedt, J. P. How, and R. M. Murray, “Autonomous driving in urban environments: approaches, lessons and challenges,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 368, no. 1928, pp. 4649–4672, 2010.
[34] M. Aeberhard, S. Paul, N. Kaempchen, and T. Bertram, “Object Existence Probability Fusion using Dempster-Shafer Theory in a High-Level Sensor Data Fusion Architecture,” in Proc. IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 770–775.
[35] BMW Group. (2011, Aug.) Ready for takeover! – Taking BMW automated vehicle technology to the motorway. Press Release. Munich, Germany. [Online]. Available: https://www.press.bmwgroup.com/pressclub/p/pcgl/download.html?textId=144140&textAttachmentId=174417
[36] J. A. Bagnell, D. Bradley, et al., “Learning for Autonomous Navigation: Advances in Machine Learning for Rough Terrain Mobility,” IEEE Robot. Autom. Mag., vol. 17, no. 2, pp. 74–84, June 2010.

Thorsten Luettel (Graduate Student Member, IEEE) received his Dipl.-Ing. degree in Electrical Engineering / Mechatronics from the University of Hannover, Germany, in 2006. Currently he is a research assistant at the University of the Bundeswehr Munich and is pursuing a doctorate in the field of autonomous systems. His research interests include data fusion, safe automated vehicle guidance, mapping and navigation.

Michael Himmelsbach received his Dipl.-Inf. degree from Humboldt University Berlin, Germany, in 2007. Currently he is a research assistant at the University of the Bundeswehr Munich, pursuing a doctorate in the field of autonomous systems. His research interests include LIDAR environment and object perception with a focus on machine learning and pattern recognition techniques.

Hans-Joachim Wuensche received his PhD from the University of the Bundeswehr Munich in 1987 under Ernst D. Dickmanns, where he co-developed the 4D approach to computer vision. After 17 years in management he returned to the same university to lead the Institute for Autonomous Systems Technology. His research interests include autonomous robots, especially on- and off-road vehicles exploring and navigating unknown terrain.
