
Page 1: Environment Mapping for Autonomous Driving into Parking Lots

J. Scot Collins

Environment Mapping for Autonomous Driving into Parking Lots

R. Luca, F. Troester, R. Gall; Heilbronn University, Germany

C. Simion; Lucian Blaga University of Sibiu, Romania

Valeo; car components manufacturer, Germany

Page 2: Environment Mapping for Autonomous Driving into Parking Lots

For defining the odometry specific to parking procedures, we assume that the single-wheel model is reliable for our purpose. The middle point of the rear axle of the vehicle is tracked and also serves as the reference point for the sensor infrastructure design.

The driving path is calculated from the incremental sensors installed on each rear wheel and from the steering-wheel angle sensor.
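To make the odometry concrete, here is a minimal dead-reckoning sketch of the single-wheel (single-track) model described above, tracking the rear-axle midpoint from the rear-wheel increments and the steering angle; the function name, the wheelbase value, and the direct use of a front-wheel angle are assumptions, not the paper's implementation:

import math

def update_pose(x, y, theta, d_left, d_right, steer, wheelbase=2.7):
    # x, y, theta: pose of the rear-axle midpoint (the reference point)
    # d_left, d_right: increments from the rear-wheel incremental sensors [m]
    # steer: equivalent front-wheel angle from the steering-wheel sensor [rad]
    # wheelbase: assumed front-to-rear axle distance [m]
    d = 0.5 * (d_left + d_right)                 # distance moved by the reference point
    theta += (d / wheelbase) * math.tan(steer)   # heading change (single-track geometry)
    x += d * math.cos(theta)
    y += d * math.sin(theta)
    return x, y, theta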

The first task of the autonomous vehicle consists of mapping, the basic component of the Simultaneous Localization and Mapping (SLAM) concept. This is performed using the SICK LD 1000 laser sensor for environment scanning and incremental map building.

Real-time processing of the data has to be considered during the implementation on the PC-104 system. The communication between the laser scanner and the computation unit is made through the CAN I/O port. After reading the data, the computation unit, programmed with xPC (the embedded component of Matlab/Simulink), executes algorithmic data reduction for the subsequent mapping process.

Figure 1. The SICK LD 1000 laser scanner and PC-104 computing unit on the vehicle.

For short-range object identification we use extended ultrasonic cells. Each cell comprises two sensors capable of measuring the closest surrounding area, specifically for completing parking procedures. The blind zone of the far-measuring sensor is covered by a second sensor with a shorter measuring range. The obtained data is also processed by the PC-104 computation unit and used for the parking procedures.
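As a sketch of how one extended ultrasonic cell might combine its two sensors, with the far sensor's blind zone covered by the short-range sensor, consider the following; all range limits are assumed values, not Valeo specifications:

def cell_range(near_reading, far_reading,
               near_max=2.0, far_blind=0.5, far_max=6.0):
    # Fuse the two sonar readings of one extended cell into a single range [m].
    # The far sensor is blind below far_blind, so close obstacles are taken
    # from the short-range sensor; None means no echo was received.
    if near_reading is not None and near_reading <= near_max:
        return near_reading               # obstacle inside the close zone
    if far_reading is not None and far_blind <= far_reading <= far_max:
        return far_reading                # obstacle within the far sensor's span
    return None                           # nothing detected by this cell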

Figure 2. Autonomous vehicle equipped with the SICK LD laser scanner, ultrasonic sensor cells and the PC-104 system.

The visualization of the data is transferred to a host PC (laptop) via a wireless protocol. We define as target the PC-104 system running the real-time computation. Only relevant data is kept and processed on the target PC while applying the on-line incremental map-building algorithms.

IV. MAP BUILDING PROCEDURE

We provide a method for map building that takes the acquired scans as the input to the system. Each scan consists of 360 points and covers the area in front of the vehicle by indexing a laser beam every 0.5° until reaching 180 degrees.

Figure 3. Laser beam sensing area

After a number of m scans has been acquired (in this method, 10 scans = 1 cycle), the system creates an array of intersection points sorted by angle. The array is structured as follows:

DS1 = [n, x1, y1, …, xn, yn]   (1)

Here n represents the number of points generated by the sensor while scanning the environment. The sorted array becomes the input for the clustering module, which groups the points into clusters. Additional sorting of the array of points is necessary for the dynamic map building.
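A minimal sketch of building the DS1 array from one scan, assuming 360 range readings spaced 0.5° apart; the helper name and the None convention for missing returns are hypothetical:

import math

def build_ds1(ranges, angle_step_deg=0.5):
    # ranges: 360 readings, one per 0.5-degree beam over 180 degrees.
    # Returns [n, x1, y1, ..., xn, yn] with points kept in angle order,
    # which is the ordering the clustering module expects.
    points = []
    for i, r in enumerate(ranges):
        if r is None:
            continue                      # skip beams with no return
        a = math.radians(i * angle_step_deg)
        points.append((r * math.cos(a), r * math.sin(a)))
    ds1 = [len(points)]
    for x, y in points:                   # already sorted by beam angle
        ds1 += [x, y]
    return ds1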


Presentation Outline
1.  Introduction
2.  Related Work
3.  Hardware/software utilized
4.  Algorithms
5.  Conclusion

Page 3: Environment Mapping for Autonomous Driving into Parking Lots

Introduction

Problem: How do we collect information and use it in a Simultaneous Localization and Mapping (SLAM) algorithm?

•  End use: navigate an unknown environment with the aim of safe self-parking in a standard parking lot, through the use of external sensors.

•  Main focus: reduce data, by algorithms, while maintaining enough map complexity to accomplish the task.


Page 4: Environment Mapping for Autonomous Driving into Parking Lots

Related Work

•  Convexity Rule for Shape Decomposition Based on Discrete Contour Evolution.

•  Map building for a mobile robot equipped with a 2d laser rangefinder.

Integration of numerous local maps into one global map to represent the whole environment observed by the robot during navigation.

•  Precise positioning using model-based maps.

The basis of this paper is matching previous scans of the environment and their locations to interpret the position of the mobile robot with respect to fixed landmarks.

•  A Comparison of Line Extraction Algorithms using 2D Laser Rangefinder for indoor mobile robots.

Compares six popular algorithms for speed, complexity, correctness, and precision on robots.

•  A Method for Building Small-Size Segment-Based Maps.

Uses a map manager to organize smaller maps into a global map.



FIG. 1. A few stages of the proposed discrete curve evolution. Contour shown in (a) is a distorted version of the contour on the WWW page http://www.ee.surrey.ac.uk/Research/VSSP/imagedb/demo.html.

in Fig. 1. Since in every evolution step, the number of digital line segments in the curve decomposition decreases by one, the evolution converges to a convex polygon, which defines the highest level in the shape hierarchy, e.g., Fig. 1f. The presented evolution method is translation, rotation, reflection, and scaling invariant.

The obtained shape hierarchy is a base for object decomposition into relevant visual parts. The parts obtained on the highest levels of the hierarchy determine the most significant parts of the object. To decompose a digital curve into relevant parts on a given level of the shape hierarchy, we group the digital line segments of its boundary curve into maximal convex arcs.

The fact that visual parts are somehow related to convexity has been noticed in the literature; e.g., Basri et al. [1] state

Parts generally are defined to be convex or nearly convex shapes separated from the rest of the object at concavity extrema, as in Hoffman and Richards [12], or at inflections, as in Koenderink and Doorn [15].

In Vaina and Zlateva [31] largest convex patches were used for part decomposition of 3D objects. Although the observation that visual parts are "nearly convex shapes" is very natural, the main problem is to determine the meaning of "nearly" in this context. We will present the details of our solution to this problem in Section 2. Now we state a few simple definitions.

We call maximal convex arcs supported arcs following Latecki and Rosenfeld [17]. We call finite unions of supported arcs tame arcs. Maximal supported arcs determine alternating and overlapping convex and concave parts of the object boundary. For example, Fig. 2 shows a decomposition of a polygonal curve into maximal supported arcs: the dashed arcs (inside the polygon) indicate the concave arcs with respect to the polygon while the dotted arcs (outside of the polygon) indicate the convex arcs with respect to the polygon. In the following, the term maximal convex arc denotes a supported arc that is convex with respect to the object.

FIG. 2. Decomposition of a polygonal arc into maximal supported arcs.

Recall that we work with digital arcs, which can be treated as polygonal arcs without loss of information. Thus, we use polygonal definitions of the global curvature, which is equal to the total turn of a polygonal arc. The definition of the global curvature (or total turn) of a polygonal arc is illustrated in Fig. 3. This definition has been extended to digital arcs in Latecki and Rosenfeld [17].

2. SHAPE DECOMPOSITION

It is a simple and natural observation that maximal convex parts of objects determine visual parts. However, the problem is that many significant visual parts are not convex, since a visual part may have concavities. We solve this problem by identifying convex parts at different stages of the proposed contour evolution, in which significant visual parts will become convex object parts at higher stages of the evolution: A significant visual part may have concavities, whose boundary parts are supported concave arcs. Since these concave arcs contribute in a less relevant way to the shape of the object than the boundary arc enclosing the significant visual part, the concavities will disappear in an earlier stage of the boundary evolution. Thus, there exists an evolution stage at which a significant visual part is a convex part. We base our approach to shape decomposition into visual parts on the following rule:

• Hierarchical convexity rule. The maximal convex arcs (w.r.t. the object) at various levels of the contour evolution determine parts of object boundary that enclose visual parts of the object.

The visual parts are determined on the original shape, i.e., it is not a maximal convex arc that is itself a visual part, but the


Page 5: Environment Mapping for Autonomous Driving into Parking Lots

Hardware/Software Utilized

SICK LD1000 Laser Measurement System
•  Rotating scanner head
   –  5-10 Hz
   –  Angular resolution 0.125°
•  Supports CAN standard 2.0A
   –  Data transmission between 10 kbit/s and 1 Mbit/s
   –  Can configure ID for priority on the bus
   –  Used to transmit distances only
•  Digital outputs
•  Emits laser pulses at a max frequency of 14.4 kHz (14,400 per second)
•  Measurement range up to 250 m (~820 ft.); the pulse rate and rotation speed tie back to the angular resolution, as checked in the sketch after this list.
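A quick consistency check on these figures, assuming the pulses are spread evenly over one rotation of the scanner head:

PULSE_HZ = 14_400              # max laser pulse frequency from the datasheet
for rot_hz in (5, 10):         # rotating-head frequency range
    deg_per_pulse = 360.0 * rot_hz / PULSE_HZ
    print(f"{rot_hz} Hz -> {deg_per_pulse:.3f} deg between pulses")
# 5 Hz  -> 0.125 deg, matching the quoted angular resolution
# 10 Hz -> 0.250 deg, i.e. coarser resolution at the faster rotation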


Operating Instructions, LD-LRS (SICK AG), Chapter 2: For your safety

Laser output aperture

The laser output aperture is the round optic in the rotating scanner head on the LD-LRS1000 or the cover on the LD-LRSx100.

Fig. 1: Laser output aperture in the rotating scanner head on the LD-LRS1000

Fig. 2: Laser output aperture on the LD-LRSx100

Laser power

The laser operates at a wavelength λ = 905 nm (invisible infrared light). The radiation emitted in normal operation is not harmful to the eyes and human skin.

Page 6: Environment Mapping for Autonomous Driving into Parking Lots

Hardware/Software Utilized

PC/104
•  An embedded computer standard
   –  Defined form factor and bus.
   –  Intended for specialized embedded computing environments dependent on reliable data acquisition.
•  Modules stack together like building blocks
   –  Typically includes a motherboard, analog-to-digital converter and digital I/O for data acquisition.
•  Constraints
   –  3.55 x 3.775 inches
   –  Height is typically constrained to the boundaries of the connectors.


Page 7: Environment Mapping for Autonomous Driving into Parking Lots

Hardware/Software Utilized

xPC Target
•  Enables the execution of Matlab/Simulink models on the PC/104 system for real-time testing.
•  Provides a library of drivers, a real-time kernel and a host-target interface for real-time monitoring.
•  Downloads code generated by Simulink to the PC/104 target via the communications link.


xPC Target: perform hardware-in-the-loop simulation and real-time rapid control prototyping.

xPC Target enables you to execute Simulink and Stateflow models on a target computer for hardware-in-the-loop (HIL) simulation, rapid control prototyping, and other real-time testing applications. It provides a library of drivers, a real-time kernel, and a host-target interface for real-time monitoring, parameter tuning, and data logging.

For a complete, fully assembled, real-time testing solution, xPC Target Turnkey combines xPC Target with a variety of high-performance real-time target computers.

Key Features
• Real-time execution of Simulink and Stateflow models on a target computer via an optimized real-time kernel
• Support for target computer hardware, including PCI, PCIe, cPCI, and PC/104 form factors
• Blocks supporting numerous I/O modules, including analog I/O, digital I/O, pulse train generation and capture, and shared memory
• Blocks supporting communication protocols and data buses, including serial, UDP, TCP/IP, CAN, ARINC 429, and MIL-STD-1553
• Ability to program FPGA boards for xPC Target Turnkey systems (with HDL Coder)
• Tools for real-time monitoring, parameter tuning, and data logging
• Standalone operation with xPC Target Embedded Option
• APIs for developing user interfaces (Visual Basic, C/C++, Java, and .NET)

xPC Target Turnkey: a real-time testing solution. Once you connect to your hardware under test, you can run your Simulink and Stateflow models in real time and verify your designs.

Page 8: Environment Mapping for Autonomous Driving into Parking Lots

Algorithms

Map Building Procedure
1.  180° scans of the environment as input to the system.
2.  Creates an array of points which is sorted by the laser angle.
3.  Array structure:


DS1 = [n, x1, y1, …, xn, yn]

As shown in Figure 6, new points are generated between the existing (intermediate) points as the vehicle moves from position 1 to position 2. The array needs to be updated with the sorted angular information after each scan cycle.

Figure 4. Obtaining the sorted DS1

The obtained sorted data points are further processed into sorted clusters.

The nearest-neighbor method is used to test the limits of the clusters. A new cluster begins whenever the distance is greater than the one established by the parameter, and it is placed in a new row of a matrix. The sorted clusters are further processed by the line-extraction algorithm. Figure 4 represents the flow of data from the sensor up to the clustering. Obstacle edges are identified within the intersection points.
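A minimal sketch of this nearest-neighbor clustering step; the gap threshold is a hypothetical tuning value, not one given in the paper:

import math

def cluster_points(points, max_gap=0.3):
    # points: angle-sorted (x, y) tuples from DS1.
    # A new cluster (a new matrix row) starts whenever the distance to the
    # previous point exceeds max_gap.
    clusters = []
    for p in points:
        if clusters and math.dist(clusters[-1][-1], p) <= max_gap:
            clusters[-1].append(p)        # same obstacle edge: extend cluster
        else:
            clusters.append([p])          # gap too large: begin a new cluster
    return clusters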

Figure 5. DS1 data representation

Mapping becomes effective only when the number of segments is kept as small as possible.

Therefore the next module after clustering merges points into lines and poly-lines, each representing a separate cluster. Points separated by a distance higher than the one defined in the algorithm represent separate clusters and are not unified into lines in this phase. The right order of joining points into lines and discarding irrelevant mapping data is very important, first for reduced processing time and second for a correct shape construction.

Considering shape information obtained by a range sensor, scanning the same object from different positions generates doubled data, which requires merging and sorting before the line creation.

The reduction of data becomes substantial when clustering points into linear segments. Only the beginning and ending points of the segments are kept for further processing of the map. This makes our approach applicable to scenarios in which the map is generated in real time. The map at this moment has the following data structure, with each indexed row of the matrix defining a line or poly-line consisting of a variable number of points:

DS2 = [n1, x1, y1, …, xn1, yn1;
       n2, x1, y1, …, xn2, yn2; …]   (2)

Before the final map structure is reached, data reduction is done in two steps. The first is clustering the points and merging them together into lines and poly-lines.

Figure 6. Theoretical cluster formation on obstacle edges

Because of the variable distances between clusters, points merge into short line segments, an issue that is addressed further in the implementation of the DCE (Discrete Contour Evolution) algorithm. We assume two conditions for the DCE implementation. First, a linear regression is applied; the end points of each fitted line are kept as relevant. Second, when connecting the relevant points, the following condition governs line-segment generation, where L1 and L2 are the lengths of two different line segments and L12 is the reducing line.


Page 9: Environment Mapping for Autonomous Driving into Parking Lots

Algorithms

Map Building Procedure
4.  Clustering Module
   –  Grouping of neighbors.
   –  New clusters begin when the distance is greater than a predefined parameter.
   –  Sorts the clusters.


Page 10: Environment Mapping for Autonomous Driving into Parking Lots

Algorithms

Map Building Procedure
5.  Lines Module
   –  Discrete Contour Evolution (DCE) algorithm.
   –  First removes doubled data.
   –  Linear regression is applied.
   –  Endpoints maintained, intermediate points forgotten.
6.  DS2 matrix created


L12 < L1 + L2   (3)

A merging of the resulting lines is then calculated. We assume that the merged lines have similar orientations. By merging line segments together we obtain a further data reduction, similar to the first step.
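One loose reading of condition (3), sketched below, drops an intermediate vertex whenever the reducing line L12 is nearly as long as L1 + L2, i.e., the detour through the vertex adds little shape information; endpoints are always kept, and the tolerance is an assumed parameter, not the paper's:

import math

def reduce_polyline(pts, tol=0.95):
    # pts: list of (x, y) vertices of one poly-line (one DS2 row's points).
    # Keep a vertex only if the chord to the next point is clearly shorter
    # than the two segments it would replace (L12 < tol * (L1 + L2)).
    out = [pts[0]]
    for i in range(1, len(pts) - 1):
        l1 = math.dist(out[-1], pts[i])
        l2 = math.dist(pts[i], pts[i + 1])
        l12 = math.dist(out[-1], pts[i + 1])
        if l12 >= tol * (l1 + l2):        # nearly collinear: vertex adds little
            continue                      # drop the intermediate point
        out.append(pts[i])
    out.append(pts[-1])
    return out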

Figure 7. Reducing DS1 to relevant data DS2

The final shape of the objects in the environment is generated after the reduction steps and the line-merging algorithms. Therefore a map manager is required to establish the shape of the objects and the representation of the environment. Figure 8 indicates the data-reducing flow. After data reduction, DS2 becomes the input for the map while the poly-lines are created.

Figure 8. Reducing DS1 to relevant data DS2

The mapping completes the reduction algorithms and yields reliable information about the environment. At this point, a simplified graphical representation of the data is obtained.

Figure 9. DS2 data representation reduced by lines and poly-lines.

For further processing of the localization, a correction of the vehicle's odometry is required. Over time and motion, localization and mapping errors build up cumulatively, grossly distorting the map and therefore the vehicle's ability to determine its actual location and heading with sufficient accuracy. To compensate for the errors, the system must recognize features it has encountered previously and re-skew recent parts of the map to make sure all instances of a feature become one. A scan-matching method assumes that generated poly-lines can be compared and matched together within the map manager, similarly to the data reduction.

Figure 9 indicates generated and merged lines as a result of data reduction in a simulation environment. We consider the final map a representation of the environment formed by lines and poly-lines, which, as a map matrix, is the input for the motion planning of the vehicle.

Hereby we consider implementing a motion planner inside the system based on potential field theory.

A very common effect in dynamic mapping is occlusion together with segmented or doubled data. Occlusion appears during dynamic mapping when other objects come between the scanned object and the scanner. Edges are identified as small line segments belonging to the same line. The distances between points and the angular positions of the points merged into lines become very important: only data with similar characteristics can be used in the merging and simplification process.


DS2 = [n1, x1, y1, …, xn1, yn1; n2, x1, y1, …, xn2, yn2; …]

Page 11: Environment Mapping for Autonomous Driving into Parking Lots

Algorithms

Map Building Procedure
7.  Map Manager
   –  Vehicle odometry correction uses previous lines to re-skew.
   –  Segmented maps for occlusion and doubled (segmented) data.


Figure 10. Occlusion effect

By moving from point A to point B, the observed line segments differ. In position A, lines S2 and S4 are observable; position B shows lines S1 and S3 as the mapped lines. The mapped segments actually represent the same edge. We therefore provide a segmented-map comparison to find similarities within different map segments. The system is triggered in this way, so that the current map segment is compared to the previous map segment. The global map consists of joining these segments.

Schematically, the problem of map building consists of the following steps:

• Sensing the environment of the vehicle at time t using onboard sensors;
• Processing of sensor data (feature-based line extraction);
• Integration of the recently perceived observations at time t with the previously learned structure of the environment estimated at time t − 1.
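Reusing the cluster_points sketch from earlier, one cycle of this loop might look like the following simplification, where the integration step is a naive append rather than the authors' scan matching, and both helper names are hypothetical:

def extract_lines(clusters):
    # Keep only each cluster's first and last point as one line segment,
    # mirroring the endpoint-only reduction described earlier (simplified).
    return [(c[0], c[-1]) for c in clusters if len(c) >= 2]

def map_building_step(global_map, scan_points):
    # One cycle: sense (scan_points acquired at time t), process
    # (feature-based line extraction), integrate (join with the structure
    # estimated at time t-1). global_map is a plain list of segments here.
    lines = extract_lines(cluster_points(scan_points))
    global_map.extend(lines)
    return global_map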

V. MAPPING PARKING LOTS

Our main goal is to detect free spaces to use as parking spots. By identifying the obstacles' edges, we assume a classification of the objects into classes.

The main premise is the detection of other vehicles in the first place and of the space between them. "L"-shaped edges are most common for vehicles.
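Since "L"-shaped edges flag likely vehicles, a simple perpendicularity test on neighboring extracted segments is one way to sketch the idea; the angular tolerance is an assumed tuning value:

import math

def is_l_shape(seg_a, seg_b, tol_deg=15.0):
    # seg_a, seg_b: ((x1, y1), (x2, y2)) endpoints of two neighboring segments.
    # True when the segment directions are roughly perpendicular, the
    # signature of a vehicle's corner in the extracted map.
    def heading(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1)
    diff = abs(heading(seg_a) - heading(seg_b)) % math.pi
    return abs(math.degrees(diff) - 90.0) <= tol_deg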

The resulting shapes extracted from the mapping process represent reliable data for further localization of the vehicle through landmarks, which is our future work after mapping the environment.

Figure 11. Masked vehicles extracted from mapping data.

From the information received, we obtain a virtual map of the parking area surrounding the autonomous vehicle, which is continuously added to the global map. The available map matrix is considered as input to the subsequent movement-calculation process.

VI. CONCLUSIONS

The described sensor infrastructure, used for autonomous navigation and map building in an arbitrary environment while searching for parking spots, represents a common solution for mobile robot navigation.

In further research we aim to describe the environment with the laser scanner switched off, obtaining the same reliable data by using only the ultrasonic sensor cells. In this case these sensors will also be capable of mapping and will be used not only as a guidance system during the parking maneuvers.

The designed infrastructure permits a flexible approach to mapping the environment and using the data for an automated parking procedure.

The algorithmic computation on the PC-104 target is based on real-time computation using embedded software. The implementation of the wireless protocol gives us flexibility in viewing the data while the computation progresses.

The map-building procedure is based on object identification using data-reduction algorithms.

By merging and sorting points we obtain sorted clusters used for further data reduction and shape simplification. Within this method, data is reduced as described in the figure below.

Page 12: Environment Mapping for Autonomous Driving into Parking Lots

Further Use/Research

•  Implement an algorithm responsible for object avoidance and for the calculation of the shortest path during parking procedures.
•  Remove the laser scanner and replace it with a network of ultrasonic sensor cells.
•  Be able to safely and autonomously drive the vehicle to a parking spot, based on potential field theory.


Page 13: Environment Mapping for Autonomous Driving into Parking Lots

Conclusion

•  Data reduction represents a common solution for mobile robot navigation.
•  Data reduction is accomplished through real-time computation using embedded software.
•  Object identification and data-reduction algorithms are used, and intermediate points/lines are discarded, reducing the memory required.


Page 14: Environment Mapping for Autonomous Driving into Parking Lots

References

[1] L. J. Latecki and R. Lakämper, "Convexity Rule for Shape Decomposition Based on Discrete Contour Evolution," Computer Vision and Image Understanding, Academic Press, vol. 73, 1999.

[2] J. Gonzalez, A. Ollero and A. Reina, "Map building for a mobile robot equipped with a 2D laser rangefinder," IEEE International Conference on Robotics and Automation (ICRA) Proceedings, 1994.

[3] P. MacKenzie and G. Dudek, "Precise positioning using model-based maps," IEEE International Conference on Robotics and Automation (ICRA) Proceedings, 1994.

[4] H. Gonzalez-Banos and J.-C. Latombe, "Robot navigation for automatic model construction using safe regions," IEEE International Conference on Robotics and Automation, Kluwer Academic Publishers, 2000.

[5] R. Siegwart and I. R. Nourbakhsh, Introduction to Autonomous Mobile Robots, The MIT Press, Massachusetts, 2004.

[6] V. Nguyen, A. Martinelli, N. Tomatis and R. Siegwart, "A Comparison of Line Extraction Algorithms using 2D Laser Rangefinder for indoor mobile Robotics," Conference on Intelligent Robots and Systems, Kluwer Academic Publishers, 2005.

[7] F. Amigoni, G. Fontana and F. Garigiola, "A Method for Building Small-Size Segment-Based Maps," Conference on Distributed Autonomous Robotic Systems, SpringerLink, 2006.

[8] "LD-LRS1000 to 5100 Laser Measurement System: Operating Instructions," SICK Sensor Intelligence, 2008. Web. 17 Mar. 2013. https://www.mysick.com/saqqara/pdf.aspx?id=im0027494

[9] Wikipedia contributors, "PC/104," Wikipedia, The Free Encyclopedia, 26 Feb. 2013. Web. 17 Mar. 2013. <http://en.wikipedia.org/wiki/PC104>
