Research Article
Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud
Seoungjae Cho,1 Jonghyun Kim,1 Warda Ikram,1 Kyungeun Cho,1 Young-Sik Jeong,1 Kyhyun Um,1 and Sungdae Sim2
1 Department of Multimedia Engineering, Dongguk University-Seoul, Seoul 100-715, Republic of Korea
2 Agency for Defense Development, Daejeon 305-152, Republic of Korea
Correspondence should be addressed to Kyungeun Cho; cke@dongguk.edu
Received 5 April 2014; Accepted 5 June 2014; Published 24 June 2014
Academic Editor: Jong-Hyuk Park
Copyright © 2014 Seoungjae Cho et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
1. Introduction
The objective of this study is to develop a system that accurately plans travel routes for an unmanned autonomous vehicle on the basis of the vehicle's driving data and geographical location. To this end, datasets are acquired through a variety of sensors, and the data required for route planning is extracted from these datasets. The route is designed to avoid obstacles by accurately identifying different kinds of obstacles on the road, in order to assist an unmanned vehicle to successfully arrive at its destination. At present, ground segmentation technology is a necessary preprocessing step to identify roads available for driving, as well as obstacles on the roads, based on the vehicle's geographical information.
Vehicle-to-everything (V2X) technologies have been actively investigated in order to develop intelligent vehicles. These technologies include the autonomous driving technology explained above. V2X is designed to improve traffic safety and provide a variety of services to drivers by combining wireless communication technologies such as vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-nomadic (V2N) technologies [1]. A systematic travel route can be planned using these V2X technologies by sharing the vehicle's geographical information. Data traffic is reduced to facilitate communication using wireless networks [2, 3].
Autonomous drive in various geographical environments requires the technology to accurately segment the ground, even on slopes. Vehicles, such as those involved in military operations in mountainous areas, frequently have to negotiate a series of steep hills. Since the position of a vehicle rapidly changes in such an environment, it is not easy to accurately segment the ground. For reliable travel route planning on slopes, our proposed system conducts ground segmentation in real time.
Sensors are required to detect the surrounding geography for accurate ground segmentation. Thus, a light detection and ranging (LiDAR) sensor is used to accurately detect geographical shapes. LiDAR sensors calculate the distance
Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 582753, 9 pages, http://dx.doi.org/10.1155/2014/582753
between the vehicle and the surrounding surface by emitting a number of lasers. The launch angle of the laser and the distance data are converted into three-dimensional (3D) points. The number of points acquired ranges from thousands to hundreds of thousands. For LiDAR sensors that are mass-produced at low cost, the density of the acquired 3D point cloud is low. Thus, it is difficult to accurately segment the ground. Accordingly, when LiDAR is used in our system, 3D point clouds will be continuously accumulated and overlapping points will be eliminated. The 3D point clouds are accumulated through a simultaneous localization and mapping (SLAM) algorithm or by using a differential global positioning system with inertial measurement unit (D-GPS/IMU) sensor. To accurately eliminate overlapping data, the 3D point clouds need to be operated in a single coordinate system.
In order to satisfy the above requirements, we propose a ground segmentation framework based on a wireless network environment to successfully plan travel routes in real time in a variety of geographical environments. Our framework builds a dynamic flagmap data structure to reduce the amount of data by eliminating overlapping data, and it comprises several algorithms that execute ground segmentation based on this data structure. This study can be applied to the navigation of autonomous agriculture vehicles and indoor robots [4, 5].
In Section 2, we summarize related research in the area. We present our ground segmentation framework in Section 3. Section 4 describes the dynamic flagmap implementation for effective ground segmentation, whereas Section 5 explains the ground segmentation algorithm executed on the basis of the dynamic flagmap. In Section 6, we describe an experiment to verify the efficiency of our framework. We offer concluding thoughts in Section 7.
2. Related Work
Ground segmentation technologies have been investigated in a variety of fields, including autonomous drive, 3D geography modeling, and object tracking.
Moosmann et al. proposed an approach to segment the ground as well as objects on the basis of a local convexity criterion [6]. This approach cannot be applied to LiDAR that detects low-density 3D point clouds, because it relies on LiDAR detecting high-density 3D point clouds. Furthermore, Moosmann et al.'s system cannot process data while scanned data is being acquired, because of the high computational cost of the algorithm.
Other research in the area did not apply general approaches based on relationships with neighboring data and features of points [7]. This research proposed a fast segmentation technology by applying a two-dimensional (2D) line extraction algorithm to 3D points. The approach can be applied to mild slopes, but its efficiency in environments with tortuous and continually undulating routes has not been verified. Moreover, since the 3D point clouds acquired through that LiDAR have high density, the approach cannot be applied to low-density 3D point clouds.
Douillard et al. proposed different object segmentation approaches for high-density and low-density 3D point clouds [8]. They implemented a framework that executes ground segmentation and then clusters the data. For low-density 3D point cloud data, they used a Gaussian process incremental sample consensus algorithm to estimate the ground. However, this approach is not appropriate for autonomous drive because it requires that users select points that are certainly on the ground as seeds.
Other research papers focused on converting 3D point clouds from LiDAR to 2D range images and on segmenting objects from them [9]. The approach cannot be applied to slopes because it uses a simple threshold method, under the assumption that the ground is flat, when eliminating ground data during the preprocessing step for object segmentation.
Song et al. carried out ground segmentation using height histograms and a Gibbs-Markov random field model to reconstruct the geography into a 3D model [10]. However, the efficiency of their algorithm for slopes has not been verified.
Chen et al. proposed a real-time ground segmentation approach for travel route planning of an autonomous land vehicle in open space [11]. The approach applies one-dimensional (1D) Gaussian process regression on the basis of a circular polar grid map. The approach cannot execute ground segmentation on steep slopes.
The problems identified by investigating existing approaches to ground segmentation are summarized as follows.
(i) Ground segmentation cannot be executed in low-density 3D point clouds.
(ii) Ground segmentation cannot be executed for geographical features that have uphill and downhill roads.
(iii) Ground segmentation cannot be executed in real time.
Accordingly, we propose in this paper a ground segmentation framework that solves the above problems. The framework comprises ground segmentation technologies to plan a travel route in real time using a wireless network environment in a geographical environment containing several hills.
3. Overview of Ground Segmentation Framework
LiDAR and D-GPS/IMU are installed on an autonomous vehicle to obtain 3D surface information and vehicle movement information, respectively. For the sparse 3D point cloud acquired from LiDAR, it is difficult from one point to acquire information about neighboring points because of the long distance between points.
For accurate ground segmentation, the sparse 3D point cloud needs to be accumulated continuously during the vehicle's movement. Accordingly, the data size becomes larger with time. The size of the data needs to be reduced in order for it to be transferred through the wireless network. Consequently, the efficiency of the algorithm improves as the data size is reduced. We propose a ground segmentation
Figure 1: Overview of the ground segmentation framework. LiDAR supplies the point cloud and the IMU-GPS supplies move information; the framework builds a dynamic flagmap and a lowermost heightmap, then performs voxel labeling to output ground voxel groups and nonground voxel groups.
framework comprising various algorithms, including ones for the elimination of overlapping data and for ground segmentation. Figure 1 illustrates the ground segmentation system proposed in this paper.
The framework performs ground segmentation through the following three steps:
(i) dynamic flagmap implementation to eliminate overlapped data;
(ii) lowermost heightmap implementation to improve the efficiency of the ground segmentation algorithm;
(iii) voxel labeling algorithm implementation for grouping neighboring voxels.
To reduce the number of points in 3D point clouds with a lot of data, overlapping points should be deleted as the vehicle moves. To this end, this paper proposes a dynamic flagmap data structure expressing the volume pixel (voxel) space quantized in 10 cm units. The implementation of the dynamic flagmap data structure helps eliminate points assigned to the same voxel.
The next step is effectively identifying the driving route of the autonomous vehicle. Ground segmentation is performed based on the dynamic flagmap, which comprises nonoverlapping voxels. Several other algorithms are also applied in concert.
The implementation of the lowermost heightmap helps reduce the data size required for the ground segmentation algorithm and removes a number of nonground voxels. A heightmap is a general approach to effectively represent a terrain and is based on a two-dimensional coordinate system comprising only the x-axis and the z-axis. Each 2D coordinate has a constant height value. The lowermost heightmap is used for preprocessing in order to select only voxels with high ground-segmentation probability. The reduction in the number of voxels used in subsequent algorithms improves their efficiency.
Once the lowermost heightmap has been built, neighboring voxels are gathered by a voxel labeling algorithm. The algorithm assigns the same label to voxels with a small difference in height with respect to the lowermost heightmap. When labeling for all voxels is complete, the position of each voxel is determined, that is, whether or not each voxel is on the ground. An algorithm to reduce access to neighboring voxels is applied to reduce the execution time of the labeling.
4. Building Dynamic Flagmap
The 3D point cloud acquired from LiDAR is represented using local coordinates based on the sensor. As the autonomous vehicle with LiDAR mounted on it moves, the overlapping points among the local 3D point clouds acquired per frame cannot be properly identified. Thus, we need to convert and accumulate the local 3D point clouds acquired per frame into a global coordinate system. For this, the vehicle's movement information is acquired by the D-GPS/IMU mounted on it. Once the 3D point clouds are integrated into a single coordinate system, the neighboring points can be removed as they are determined to be overlapping points.
However, the estimation of adjacency between 3D points that have real number values requires more calculation than for 2D points. In this section, we propose a data structure called the dynamic flagmap that finds overlapping points by easily identifying the adjacency between two points. To represent the voxel space "flagmap" simply, the voxel location is expressed using a 1D array and not 3D coordinates. Each element
Figure 2: Structure to express a local 3D point cloud in a global coordinate system in a fixed memory (from top view). The figure shows the vehicle inside the global 3D point cloud, the dynamic voxel-space width W′ and depth D′, and the bounds m and M along the x- and z-axes.
in the array is a one-bit Boolean data item to indicate the voxel's existence. Such an approach facilitates access between neighboring points.
To reduce the dimensions of the 3D voxels and to express them in a 1D array, we need to reduce the voxel space. This is because the array size, which can be expressed by the array index, is limited, and large array data requires more memory. The following equation is used to convert a local 3D point into an array index in a limited voxel space [12]:
v = 2WH · floor(z/μ + D/2) + H · floor(x/μ + W/2) + floor(y/μ + H/2),   (1)
where W, H, and D represent the width, height, and depth of the limited voxel space, respectively; x, y, and z are the values on the x-axis, the y-axis, and the z-axis of each local 3D point, respectively; μ represents the length of a side of each voxel; and v is the array index. The equation above helps easily reduce memory usage and estimate adjacency among voxels by quantizing a local 3D point and converting it into an array index.
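As a concrete illustration, (1) can be written as a small Python helper. This is a sketch; the variable names mirror the equation, and the function name is ours:

```python
import math

def local_point_to_index(x, y, z, mu, W, H, D):
    """Quantize a local 3D point into a 1D flagmap array index, following (1).

    mu is the voxel edge length; W, H, D are the width, height, and depth
    of the limited voxel space. The W/2, H/2, D/2 offsets recenter the
    sensor-origin coordinates so the quantized values are non-negative.
    """
    return (2 * W * H * math.floor(z / mu + D / 2)
            + H * math.floor(x / mu + W / 2)
            + math.floor(y / mu + H / 2))
```

Points falling into the same voxel map to the same index, which is how overlapping points are detected and discarded.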
However, since (1) determines whether or not a point is overlapping based on a local 3D point, it cannot be used if an autonomous vehicle moves at high speed. To determine data overlap in such a case, we implement the dynamic flagmap on the basis of global 3D points. Figure 2 shows the structure required to express the local 3D point clouds acquired per frame on a global coordinate system in the fixed memory.
In Figure 2, W′, H′, and D′ represent the width, height, and depth of the dynamic voxel space, respectively. They are indicated in 2D from the top view, so H′ is omitted. m and M are the bounds of the dynamic voxel space, calculated with the minimum and the maximum values of the x′-, y′-, and z′-axes among the global 3D point clouds acquired per frame. Both values change dynamically depending on the direction of the autonomous vehicle. Furthermore, W′, H′, and D′ vary with m and M. The maximum values of W′, H′, and D′ are determined depending on the maximum distance in the 3D point cloud acquired from LiDAR. Accordingly, the array size of the dynamic flagmap is determined according to the maximum values of W′, H′, and D′, as shown in the equation below:
size = max W′ × max H′ × max D′.   (2)
The voxel space around the vehicle, given in the array with a fixed size, can be expressed in global coordinates by updating m and M per frame. In other words, the coordinates of the 0th index in the array of the dynamic flagmap are the same as m, and the coordinates of the (W′ × H′ × D′ − 1)th index are identical to M. The following equation converts a 3D point into an array index using the dynamic flagmap:
v′ = 2W′H′ · floor((z_m − z′)/μ + D′/2) + H′ · floor((x_m − x′)/μ + W′/2) + floor((y_m − y′)/μ + H′/2),   (3)

where (x_m, y_m, z_m) are the coordinates of m and (x′, y′, z′) is the global 3D point.
The global coordinates can be expressed in the array with the fixed size by expressing the location of a voxel based on the dynamic flagmap, as described above.
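A minimal Python sketch of (3) follows; the sign convention copies the equation as printed, and all names are illustrative:

```python
import math

def global_point_to_index(p, m, mu, Wp, Hp, Dp):
    """Convert a global 3D point p into a dynamic-flagmap index per (3).

    m is the moving bound of the dynamic voxel space, updated every frame;
    Wp, Hp, Dp stand for W', H', D'. Because the array has a fixed maximum
    size (equation (2)), the same code serves every frame.
    """
    xm, ym, zm = m
    x, y, z = p
    return (2 * Wp * Hp * math.floor((zm - z) / mu + Dp / 2)
            + Hp * math.floor((xm - x) / mu + Wp / 2)
            + math.floor((ym - y) / mu + Hp / 2))
```

Two global points quantized into the same voxel receive the same index, so the second one can be rejected as overlapping.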
The following supplementary process is required to determine whether voxels converted into global coordinates overlap. Let us express m and M at time t_i as m_i and M_i, respectively. Then m_i ≠ m_j and M_i ≠ M_j if i ≠ j. This is because of an error in vehicle location even when an
(1) function BuildLowermostHeightmap(F)
(2)   foreach v in F do
(3)     if v_y < H[v] then H[v] ← v_y
(4)   end
(5)   return H
(6) end

Pseudocode 1: Building lowermost heightmap.
autonomous vehicle stops. Accordingly, all global 3D points are kept in a separate linked list after they have been converted into an array index and added to the dynamic flagmap. Each global 3D point in the linked list is deleted when it exceeds the bounds of the dynamic voxel space per frame. As the global 3D points are kept in the linked list, voxel overlap can be determined based on the global coordinate system.
In this section, we explained the dynamic flagmap data structure that can effectively eliminate overlapping data. The data structure helps reduce the data size required for ground segmentation and thus improves the performance of the algorithm.
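As an illustration of this per-frame bookkeeping, here is a simplified Python sketch. All names are ours; a set of quantized coordinates stands in for the 1D Boolean array, and a deque stands in for the linked list of retained global points:

```python
from collections import deque

class DynamicFlagmap:
    """Sketch of the dynamic flagmap: occupied-voxel flags over a moving
    window around the vehicle, plus a queue of retained global points so
    voxels leaving the window can be cleared each frame."""

    def __init__(self, mu, bounds):
        self.mu = mu            # voxel edge length, e.g. 0.1 m
        self.bounds = bounds    # retained range around the vehicle, in m
        self.flags = set()      # occupied voxel keys (stands in for the array)
        self.points = deque()   # retained global points, oldest first

    def voxel_key(self, p):
        return tuple(int(c // self.mu) for c in p)

    def update(self, vehicle_pos, frame_points):
        # Drop retained points that left the dynamic window around the vehicle.
        def inside(p):
            return all(abs(c - v) <= self.bounds for c, v in zip(p, vehicle_pos))
        self.points = deque(p for p in self.points if inside(p))
        self.flags = {self.voxel_key(p) for p in self.points}
        # Insert new points, skipping any that fall into an occupied voxel.
        for p in frame_points:
            k = self.voxel_key(p)
            if k not in self.flags:
                self.flags.add(k)
                self.points.append(p)
```

This preserves the behavior described above, overlap elimination plus windowed retention, without committing to the paper's exact array layout.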
5. Ground Segmentation
Ground segmentation is a preprocessing step in planning the route of an autonomous vehicle. Ground segmentation requires a dynamic flagmap, which consists of a 1D Boolean array and the bounds of the dynamic voxel space. Using the dynamic flagmap, a lowermost heightmap is constructed. The voxel labeling algorithm is executed using the lowermost heightmap. Ground voxel groups and nonground voxel groups are generated as a result of ground segmentation. This section describes an effective and efficient ground segmentation approach for an autonomous vehicle.
5.1. Lowermost Heightmap. A lowermost heightmap is the basic data structure for the execution of ground segmentation. We show how to improve the estimation efficiency of the ground segmentation mechanism and reduce the number of nonground voxels by building a lowermost heightmap. A heightmap generally consists of a plane coordinate system with the x-axis and the z-axis, such that each coordinate has a height value. We reduce the dimensions of 3D voxels to 2D by using the above structure. The 3D point cloud acquired from LiDAR is the distance data between a nearby object surface and the autonomous vehicle. The 3D point cloud cannot detect an object surface that is lower than the ground. A lowermost heightmap accounts for such a scenario and is thus comprised of only the voxels with the smallest height values. Pseudocode 1 illustrates the algorithm that builds the lowermost heightmap using a dynamic flagmap.
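The construction in Pseudocode 1 amounts to a per-column minimum. A minimal Python sketch, assuming the flagmap is given as a collection of occupied (x, y, z) voxel coordinates (names are illustrative):

```python
def build_lowermost_heightmap(flagmap_voxels):
    """For each (x, z) column, keep only the smallest y among occupied
    voxels -- the 'lowermost' height (mirrors Pseudocode 1)."""
    H = {}
    for (x, y, z) in flagmap_voxels:
        if (x, z) not in H or y < H[(x, z)]:
            H[(x, z)] = y
    return H
```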
In the algorithm, F is the dynamic flagmap, v is the index in the array, v_y is the height value of v on the y-axis, and H is the lowermost heightmap. The lowermost heightmap has only one height value on the vertical line passing each (x, z) coordinate on the xz plane. Accordingly, the height values of the neighboring (x, z) coordinates of a specific (x, z) coordinate can be immediately identified.

(1) function VoxelLabeling(H)
(2)   VerticalLabeling(H, 0)
(3)   for j from 1 to ColumnCount(H) − 1 do
(4)     VerticalLabeling(H, j)
(5)     HorizontalLabeling(H, j − 1, j)
(6)   end
(7) end

Pseudocode 2: Voxel labeling algorithm.
5.2. Voxel Labeling. The ground surface on which an autonomous vehicle moves has features similar to a sloped continuous surface. This feature of the ground surface is roughly modeled by building a lowermost heightmap. However, a lowermost heightmap also includes height values that do not exist on the ground surface. Voxel labeling is an algorithm that creates voxel groups by classing together geometrically continuous voxels among neighboring voxels. In this section, we propose an approach to minimize access to neighboring voxels for voxel labeling in real time.
Voxel labeling is based on the lowermost heightmap. The lowermost heightmap can access the voxels neighboring each voxel very quickly because each voxel position is an array index.
When the height difference between two neighboring voxels is below a certain threshold, the same label is assigned to both voxels, and hence one voxel group is created. When the height difference between neighboring voxels is in the ±1 range, both voxels have a high probability of being part of the ground. Thus, they are placed in the same group. The bounds of each voxel group expand as voxel labeling proceeds. If even one voxel in a group is close in value to any voxel in another group, the two groups are integrated. Voxel labeling applies an optimized algorithm for effective performance in real time. Pseudocode 2 illustrates the voxel labeling algorithm.
The algorithm considers the lowermost heightmap as a kind of matrix. The following equation defines the rows and columns in a lowermost heightmap:
row_i = {(x, z) | ∀x, z = i},
column_j = {(x, z) | ∀z, x = j}.   (4)
Figure 3 describes the voxel-labeling process. The figure illustrates the conceptual top view of the lowermost heightmap. First, vertical labeling is applied to the voxels in the jth and (j+1)th columns in the lowermost heightmap. Vertical labeling is the process of labeling one column. When the height difference between the voxels at (i, j) and (i+1, j) is in the ±τ range, both voxels are labeled as part of the same group. The process is repeated for all rows in the jth column. Vertical labeling is also executed on the (j+1)th column. Finally, voxel groups are generated for each column. Figure 4(a) illustrates the neighboring voxels that each voxel needs to access in order to
Figure 3: Voxel labeling in the lowermost heightmap; each square represents a voxel, which has height (h) and label (l) attributes. (a) Initial state of the lowermost heightmap before executing voxel labeling. (b) Execute VerticalLabeling for the 0th and 1st columns independently. (c) Execute HorizontalLabeling for the two columns. (d) After executing VerticalLabeling for the 2nd column, execute (c) for the 1st and 2nd columns.
Figure 4: Neighboring voxels to which each voxel needs access for voxel labeling. (a) Neighboring voxels in vertical labeling. (b) Neighboring voxels in horizontal labeling.
compare height differences in vertical labeling. Following this, the voxel groups in both columns are integrated by horizontal labeling. When the height difference between a voxel in the jth column and its neighbor in the (j+1)th column is below ±τ, horizontal labeling integrates both voxel groups. Figure 4(b) shows the neighboring voxels.
When both columns have been integrated, vertical labeling is executed on the voxels in the (j+2)th column, and horizontal labeling on the (j+1)th and (j+2)th columns. The process is repeated until voxel labeling has been executed for all columns in the lowermost heightmap. The voxel group with the highest number of voxels is determined to be the final ground voxel group.
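The column-wise labeling described above can be sketched in Python, with a union-find structure standing in for the explicit group-integration step. This is a simplification, not the paper's exact implementation; the dict-based heightmap, the threshold, and all names are illustrative:

```python
def label_ground_voxels(heightmap, rows, cols, tau=1):
    """Group geometrically continuous cells of a lowermost heightmap.

    heightmap maps (row, col) -> height; missing cells are empty. Adjacent
    cells whose height difference is within tau get the same label, and the
    largest resulting group is returned as the ground. Union-find replaces
    the paper's explicit merging of voxel groups across columns."""
    parent = {c: c for c in heightmap}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for j in range(cols):
        for i in range(rows):
            if (i, j) not in heightmap:
                continue
            # Vertical labeling: neighbor in the same column.
            if (i + 1, j) in heightmap and abs(heightmap[(i, j)] - heightmap[(i + 1, j)]) <= tau:
                union((i, j), (i + 1, j))
            # Horizontal labeling: neighbor in the previous column.
            if j > 0 and (i, j - 1) in heightmap and abs(heightmap[(i, j)] - heightmap[(i, j - 1)]) <= tau:
                union((i, j), (i, j - 1))

    groups = {}
    for c in heightmap:
        groups.setdefault(find(c), []).append(c)
    return max(groups.values(), key=len)  # largest group = ground
```

Processing column j against column j−1 only, as in Pseudocode 2, is what keeps the number of neighbor comparisons small.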
6. Experiments
We performed an experiment to verify the efficiency of our ground segmentation framework. For the experiment, 3D point clouds were acquired from actual mountainous roads approximately 3.5 km in length, which consisted of flatlands, slopes, trees, and buildings. The experiment platform is a vehicle on which LiDAR was mounted. Using the sensor, the vehicle acquired the dataset at an average velocity of about 20 km/h. The LiDAR model used for the experiment is a Velodyne HDL-32E. To increase the density of the 3D point clouds, we only used data within about 30 m of the LiDAR. The PC used for ground segmentation had an Intel i7-870 (CPU) and DDR3 10600 8 GB (RAM). The algorithm was tested on the acquired dataset.
Local 3D point clouds from LiDAR were accumulated based on global coordinates using the vehicle's movement information. Figure 5 illustrates the slope of the geographical area used in the experiment.
Figure 8 visualizes the ground segmentation results. The results were visualized with textured mesh and colored points to represent ground and nonground, respectively. The color of each point was assigned according to the height of the point.
To validate the performance of the proposed framework, we compared it with a threshold-based segmentation method. The threshold-based method uses a height value to classify voxels into ground and nonground voxels. Figures 8(a), 8(c), 8(e), 8(g), and 8(i) are the results of the threshold-based method, and the results of the ground segmentation framework proposed in
Figure 5: Slope of the geography used in the experiment (height value in cm versus packet count).
Figure 6: Elapsed time performing segmentation (ms versus frame count).
this paper are illustrated in Figures 8(b), 8(d), 8(f), 8(h), and 8(j). There is no big difference between Figures 8(a) and 8(b), because those parts of the scene were captured on a flat road. In Figure 8(c), a slope on the right side was misclassified as nonground, but in Figure 8(d), the slope was classified as ground correctly. Figure 8(e) illustrates that some parts of trees are rendered as textured mesh because they were misclassified as ground, whereas in Figure 8(f), those parts of trees were rendered as colored points after being classified as nonground. Voxels on an uphill road were misclassified as nonground in Figures 8(g) and 8(i), but they were classified properly as ground in Figures 8(h) and 8(j).
The LiDAR used for the experiment typically scans the surrounding environment at a rate of about 10 Hz. To verify ground segmentation in real time, we executed ground segmentation at about 20 Hz, twice as fast as the rotation speed of the LiDAR. Figure 6 shows the time spent on ground segmentation per segmentation frame in a graph, and Table 1 shows the statistics of the time required per frame. They verify that the proposed framework is sufficient to segment the ground in real time, as the average elapsed time for segmentation is less than 50 ms, the maximum allowed to perform segmentation at 20 Hz.
Figure 7 represents the ground data size after ground segmentation by frame in a graph, and Table 2 shows the ground
Figure 7: Ground data size after segmentation (bytes versus frame count).
Table 1: Stats of elapsed time by frame.

Average elapsed time: 19.31 ms
Maximum elapsed time: 32.07 ms
Standard deviation: 3.91 ms
Table 2: Stats of ground data size by frame.

Average data size: 7467.57 bytes
Maximum data size: 19716.00 bytes
Standard deviation: 3591.44 bytes
data data statistics. Because the segmentation was performed at 20 Hz in this experiment, the average ground data size (bytes) per frame can be converted into Mbps using this equation:
Mbps = (bytes × 8 × 20) / 1024².   (5)
With this equation, the average data rate is calculated as 1.14 Mbps. Because the minimum data rate per stream of the 802.11a network standard is 6 Mbps, the data size is acceptable for a typical wireless network.
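As a quick numerical check of (5), using the average ground data size per frame from Table 2:

```python
avg_bytes = 7467.57      # average ground data size per frame (Table 2)
rate_hz = 20             # segmentation frequency used in the experiment
mbps = avg_bytes * 8 * rate_hz / 1024 ** 2
print(round(mbps, 2))    # -> 1.14, well below the 6 Mbps 802.11a minimum
```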
7. Conclusion
In this paper, we proposed a ground segmentation framework for real-time route planning through a wireless network for an autonomous vehicle in a ubiquitous road environment. The framework involves the elimination of overlapping data, the reduction of data dimensions, and ground segmentation. To this end, the implementations of the dynamic flagmap, the lowermost heightmap, and technologies including voxel labeling were described in detail. A voxel labeling algorithm was developed to minimize access to neighboring voxels for real-time ground segmentation. Furthermore, we experimentally verified the efficiency of our real-time ground segmentation system, even in a geographical environment with numerous hills. The segmented ground data can be shared through a wireless network in real time using binary compression.
Figure 8: Ground segmentation results (the different parts are indicated by arrows). (a) Threshold-based method result at a flat road. (b) Proposed method result at a flat road. (c) Threshold-based method result at a flat road with a slope. (d) Proposed method result at a flat road with a slope. (e) Threshold-based method result at a downhill road. (f) Proposed method result at a downhill road. (g) Threshold-based method result at an uphill road. (h) Proposed method result at an uphill road. (i) Threshold-based method result at another uphill road. (j) Proposed method result at another uphill road.
Our future research will deal with technology for eliminating small objects along a road, such as bushes, from the ground data. To this end, we will develop an algorithm that considers the density of the voxel group in real time.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment

This work was supported by the Agency for Defense Development, Republic of Korea.
References

[1] Z. C. Taysi and A. G. Yavuz, "ETSI compliant geo-networking protocol layer implementation for IVC simulations," Human-Centric Computing and Information Sciences, vol. 3, no. 4, pp. 1-12, 2013.
[2] M. Yoon, Y. Kim, and J. Chang, "An energy-efficient routing protocol using message success rate in wireless sensor networks," Journal of Convergence, vol. 4, no. 1, pp. 15-22, 2013.
[3] W.-H. Chung, S. Kumar, S. Paluri, S. Nagaraj, A. Annamalai Jr., and J. D. Matyjas, "A cross-layer unequal error protection scheme for prioritized H.264 video using RCPC codes and hierarchical QAM," Journal of Information Processing Systems, vol. 9, no. 1, pp. 53-68, 2013.
[4] L. Emmi, M. Gonzalez-de-Soto, G. Pajares, and P. Gonzalez-de-Santos, "New trends in robotics for agriculture: integration and assessment of a real fleet of robots," The Scientific World Journal, vol. 2014, Article ID 404059, 21 pages, 2014.
[5] Y. Xu, X. Chen, and Q. Li, "Adaptive iterated extended Kalman filter and its application to autonomous integrated navigation for indoor robot," The Scientific World Journal, vol. 2014, Article ID 138548, 7 pages, 2014.
[6] F. Moosmann, O. Pink, and C. Stiller, "Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 215-220, June 2009.
[7] M. Himmelsbach, F. V. Hundelshausen, and H.-J. Wuensche, "Fast segmentation of 3D point clouds for ground vehicles," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '10), pp. 560-565, June 2010.
[8] B. Douillard, J. Underwood, N. Kuntz, et al., "On the segmentation of 3D lidar point clouds," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2798-2805, May 2011.
[9] J. Hernandez and B. Marcotegui, "Point cloud segmentation towards urban ground modeling," in Proceedings of the Joint Urban Remote Sensing Event, pp. 1-5, May 2009.
[10] W. Song, K. Cho, K. Um, C. S. Won, and S. Sim, "Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation," Sensors, vol. 12, no. 12, 2012.
[11] T. Chen, B. Dai, R. Wang, and L. Daxue, "Gaussian-process-based real-time ground segmentation for autonomous land vehicles," Journal of Intelligent & Robotic Systems, 2013.
[12] W. Song, S. Cho, K. Cho, K. Um, C. S. Won, and S. Sim, "Traversable ground surface segmentation and modeling for real-time mobile mapping," International Journal of Distributed Sensor Networks, vol. 2014, Article ID 795851, 8 pages, 2014.
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
2 The Scientific World Journal
between the vehicle and the surrounding surface by emitting a number of lasers. The launch angle of each laser and the distance data are converted into three-dimensional (3D) points. The number of points acquired ranges from thousands to hundreds of thousands. Because LiDAR sensors are mass-produced at low cost, the density of the acquired 3D point cloud is low, which makes it difficult to segment the ground accurately. Accordingly, when LiDAR is used in our system, 3D point clouds are continuously accumulated and overlapping points are eliminated. The 3D point clouds are accumulated through a simultaneous localization and mapping (SLAM) algorithm or by using a differential global positioning system with inertial measurement unit (D-GPS/IMU) sensor. To accurately eliminate overlapping data, the 3D point clouds need to be expressed in a single coordinate system.
In order to satisfy the above requirements, we propose a ground segmentation framework based on a wireless network environment to plan travel routes in real time across a variety of geographical environments. Our framework builds a dynamic flagmap data structure that reduces the amount of data by eliminating overlapping data, and it comprises several algorithms that execute ground segmentation based on this data structure. This study can be applied to the navigation of autonomous agricultural vehicles and indoor robots [4, 5].
In Section 2 we summarize related research in the area. We present our ground segmentation framework in Section 3. Section 4 describes the dynamic flagmap implementation for effective ground segmentation, whereas Section 5 explains the ground segmentation algorithm executed on the basis of the dynamic flagmap. In Section 6 we describe an experiment to verify the efficiency of our framework. We offer concluding thoughts in Section 7.
2 Related Work
Ground segmentation technologies have been investigated in a variety of fields, including autonomous driving, 3D geography modeling, and object tracking.
Moosmann et al. proposed an approach that segments the ground as well as objects on the basis of a local convexity criterion [6]. This approach cannot be applied to LiDAR sensors that produce low-density 3D point clouds, because it relies on a LiDAR producing high-density point clouds. Furthermore, their system cannot process data while scanned data is being acquired, owing to the algorithm's high computational cost.
Other research in the area did not apply general approaches based on relationships with neighboring data and point features [7]. It proposed a fast segmentation technique that applies a two-dimensional (2D) line extraction algorithm to 3D points. The approach can handle mild slopes, but its efficiency in environments with tortuous and continually undulating routes has not been verified. Moreover, since the 3D point clouds acquired by the LiDAR used there have high density, the approach cannot be applied to low-density 3D point clouds.
Douillard et al. proposed different object segmentation approaches for high-density and low-density 3D point clouds [8]. They implemented a framework that executes ground segmentation and then clusters the data. For low-density 3D point cloud data, they used a Gaussian process incremental sample consensus algorithm to estimate the ground. However, this approach is not appropriate for autonomous driving because it requires users to select points that are certainly on the ground as seeds.
Other research papers focused on converting 3D point clouds from LiDAR into 2D range images and on segmenting objects from them [9]. The approach cannot be applied to slopes because, when eliminating ground data during the preprocessing step for object segmentation, it uses a simple threshold method that assumes the ground is flat.
Song et al. carried out ground segmentation using height histograms and a Gibbs-Markov random field model to reconstruct the geography into a 3D model [10]. However, the efficiency of this algorithm on slopes has not been verified.
Chen et al. proposed a real-time ground segmentation approach for travel route planning of an autonomous land vehicle in open space [11]. The approach applies one-dimensional (1D) Gaussian process regression on the basis of a circular polar grid map. It cannot execute ground segmentation on steep slopes.
The problems identified by investigating existing approaches to ground segmentation are summarized as follows:
(i) ground segmentation cannot be executed on low-density 3D point clouds;
(ii) ground segmentation cannot be executed for geographical features that include uphill and downhill roads;
(iii) ground segmentation cannot be executed in real time.
Accordingly, we propose in this paper a ground segmentation framework that solves the above problems. The framework comprises ground segmentation technologies to plan a travel route in real time, using a wireless network environment, in a geographical environment containing several hills.
3 Overview of Ground Segmentation Framework
LiDAR and D-GPS/IMU sensors are installed on an autonomous vehicle to obtain 3D surface information and vehicle movement information, respectively. In the sparse 3D point cloud acquired from LiDAR, it is difficult to acquire information about the neighbors of a given point because of the long distances between points.
For accurate ground segmentation, the sparse 3D point cloud needs to be accumulated continuously as the vehicle moves. Accordingly, the data size grows with time. The size of the data needs to be reduced in order for it to be transferred through the wireless network; moreover, the efficiency of the algorithm improves as the data size is reduced. We propose a ground segmentation framework comprising various algorithms, including ones for the elimination of overlapping data and for ground segmentation. Figure 1 illustrates the ground segmentation system proposed in this paper.

Figure 1: Overview of the ground segmentation framework. The LiDAR point cloud and IMU-GPS movement information feed the dynamic flagmap and lowermost heightmap, on which voxel labeling produces ground and nonground voxel groups.
The framework performs ground segmentation through the following three steps:
(i) dynamic flagmap implementation to eliminate overlapping data;
(ii) lowermost heightmap implementation to improve the efficiency of the ground segmentation algorithm;
(iii) voxel labeling algorithm implementation for grouping neighboring voxels.
To reduce the number of points in 3D point clouds with substantial overlap, overlapping points should be deleted as the vehicle moves. To this end, this paper proposes a dynamic flagmap data structure expressing the voxel space quantized into volume pixels (voxels) of 10 cm units. The dynamic flagmap eliminates points assigned to the same voxel.
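The quantization step above can be sketched as follows. This is a minimal illustration, not the paper's bit-array implementation: a Python set of integer voxel coordinates stands in for the flag array, and the 10 cm voxel edge is taken from the text.

```python
# Sketch: quantize points into 10 cm voxels and keep only the first
# point per voxel, eliminating overlapping points.
VOXEL_EDGE = 0.10  # metres, per the 10 cm unit stated in the text

def quantize(point, edge=VOXEL_EDGE):
    """Map a 3D point to the integer coordinates of its voxel."""
    x, y, z = point
    return (int(x // edge), int(y // edge), int(z // edge))

def accumulate(points, occupied=None):
    """Keep only the first point that falls into each voxel."""
    if occupied is None:
        occupied = set()          # stands in for the 1-bit flag array
    kept = []
    for p in points:
        v = quantize(p)
        if v not in occupied:     # voxel not yet flagged
            occupied.add(v)
            kept.append(p)
    return kept, occupied
```

Passing `occupied` back in on the next frame accumulates nonoverlapping points across frames, which is the role the flagmap plays here.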
The next step is to effectively identify the driving route of the autonomous vehicle. Ground segmentation is performed based on the dynamic flagmap, which comprises nonoverlapping voxels; several other algorithms are applied in concert.
The implementation of the lowermost heightmap reduces the data size required by the ground segmentation algorithm and removes many nonground voxels. A heightmap is a general approach to represent terrain effectively; it is based on a 2D coordinate system comprising only the x-axis and the z-axis, in which each 2D coordinate has a single height value. The lowermost heightmap is used for preprocessing in order to select only voxels with a high probability of belonging to the ground. The reduction in the number of voxels used in subsequent algorithms improves their efficiency.
Once the lowermost heightmap has been built, neighboring voxels are gathered by a voxel labeling algorithm. The algorithm assigns the same label to voxels with a small height difference with respect to the lowermost heightmap. When labeling of all voxels is complete, each voxel is determined to be on the ground or not. To reduce the execution time, the algorithm minimizes access to neighboring voxels.
4 Building Dynamic Flagmap
The 3D point cloud acquired from LiDAR is represented in local coordinates centered on the sensor. As the autonomous vehicle with the mounted LiDAR moves, overlapping points among the local 3D point clouds acquired per frame cannot be properly identified. Thus, we need to convert the local 3D point clouds acquired per frame into a global coordinate system and accumulate them there. For this, the vehicle's movement information is acquired by the D-GPS/IMU mounted on it. Once the 3D point clouds are integrated into a single coordinate system, neighboring points can be removed when they are determined to be overlapping points.
However, estimating adjacency between 3D points with real-number coordinates requires more calculation than for 2D points. In this section, we propose a data structure called the dynamic flagmap that finds overlapping points by easily identifying the adjacency between two points. To represent the voxel space simply, the "flagmap" expresses each voxel location using a 1D array rather than 3D coordinates. Each element in the array is a one-bit Boolean data item indicating the voxel's existence. This approach facilitates access between neighboring points.

Figure 2: Structure to express a local 3D point cloud in a global coordinate system in fixed memory (top view). W′ and D′ are the width and depth of the dynamic voxel space, and m and M are its bounds, with the vehicle near the center of the global 3D point cloud.
To reduce the dimensions of the 3D voxels and to express them in a 1D array, we need to limit the voxel space. This is because the range of indices an array can express is limited, and a large array requires more memory. The following equation converts a local 3D point into an array index in a limited voxel space [12]:
v = 2WH · floor(z/μ + D/2) + H · floor(x/μ + W/2) + floor(y/μ + H/2), (1)
where W, H, and D represent the width, height, and depth of the limited voxel space, respectively; x, y, and z are the coordinates of each local 3D point; μ is the length of a side of each voxel; and v is the array index. The equation reduces memory usage and makes it easy to estimate adjacency among voxels by quantizing a local 3D point and converting it into an array index.
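As a concrete illustration, (1) can be transcribed directly. The voxel-space dimensions W, H, D and edge length μ below are hypothetical placeholder values, and the 2WH coefficient follows (1) as printed:

```python
import math

def voxel_index(x, y, z, W=512, H=64, D=512, mu=0.10):
    """Convert a local 3D point into a 1D array index as in (1).

    W, H, D: width, height, depth of the limited voxel space (in voxels);
    mu: edge length of one voxel. The D/2, W/2, H/2 offsets shift
    negative coordinates into the non-negative index range.
    The 2WH coefficient on the z term follows (1) as printed."""
    return (2 * W * H * math.floor(z / mu + D / 2)
            + H * math.floor(x / mu + W / 2)
            + math.floor(y / mu + H / 2))
```

Two points inside the same 10 cm voxel map to the same index, which is how overlapping points are detected in constant time.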
However, since (1) determines whether a point is overlapping based on a local 3D point, it cannot be used if the autonomous vehicle moves at high speed. To determine data overlap in such a case, we implement the dynamic flagmap on the basis of global 3D points. Figure 2 shows the structure required to express the local 3D point clouds acquired per frame in a global coordinate system within fixed memory.
In Figure 2, W′, H′, and D′ represent the width, height, and depth of the dynamic voxel space, respectively; the figure is drawn in 2D from the top view, so H′ is omitted. m and M are the lower and upper bounds of the dynamic voxel space, calculated from the minimum and maximum values along the x′-, y′-, and z′-axes among the 3D global point clouds acquired per frame. Both values change dynamically depending on the direction of the autonomous vehicle, and W′, H′, and D′ vary with m and M. The maximum values of W′, H′, and D′ are determined by the maximum distance in the 3D point cloud acquired from LiDAR. Accordingly, the array size of the dynamic flagmap is determined by the maximum values of W′, H′, and D′, as shown in the equation below:
size = max W′ × max H′ × max D′. (2)
The voxel space around the vehicle, held in the fixed-size array, can be expressed in global coordinates by updating m and M every frame. In other words, the coordinates of the 0th index in the array of the dynamic flagmap are the same as m, and the coordinates of the (W′ × H′ × D′ − 1)th index are identical to M. The following equation converts a 3D point into an array index using the dynamic flagmap:
v′ = 2W′H′ · floor((z_m − z′)/μ + D′/2) + H′ · floor((x_m − x′)/μ + W′/2) + floor((y_m − y′)/μ + H′/2), (3)

where (x_m, y_m, z_m) are the coordinates of m and (x′, y′, z′) are those of the global 3D point.
As described above, by expressing voxel locations through the dynamic flagmap, global coordinates can be stored in an array of fixed size.
The following supplementary process is required to determine whether voxels converted into global coordinates overlap. Let m and M at time t_i be denoted m_i and M_i, respectively. Then m_i ≠ m_j and M_i ≠ M_j when i ≠ j, because of the error in the vehicle's location even when the autonomous vehicle has stopped. Accordingly, all 3D global points are kept in a separate linked list after they have been converted into array indices and added to the dynamic flagmap. Each 3D global point in the linked list is deleted when it exceeds the bounds of the dynamic voxel space for the frame. Because the 3D global points are kept in the linked list, voxel overlap can be determined based on the global coordinate system.

(1) function BuildLowermostHeightmap(F)
(2)   foreach v in F do
(3)     if v_y < H[v] then H[v] ← v_y
(4)   end
(5)   return H
(6) end

Pseudocode 1: Building the lowermost heightmap.
In this section, we explained the dynamic flagmap data structure, which effectively eliminates overlapping data. The data structure reduces the data size required for ground segmentation and thus improves the performance of the algorithm.
5 Ground Segmentation
Ground segmentation is a preprocessing step in planning the route of an autonomous vehicle. It requires the dynamic flagmap, which consists of a 1D Boolean array and the bounds of the dynamic voxel space. Using the dynamic flagmap, a lowermost heightmap is constructed, and the voxel labeling algorithm is executed on it. Ground voxel groups and nonground voxel groups are generated as the result of ground segmentation. This section describes an effective and efficient ground segmentation approach for an autonomous vehicle.
5.1. Lowermost Heightmap. A lowermost heightmap is the basic data structure for the execution of ground segmentation. We show how building a lowermost heightmap improves the estimation efficiency of the ground segmentation mechanism and reduces the number of nonground voxels. A heightmap generally consists of a plane coordinate system with the x-axis and the z-axis, such that each coordinate has a height value; we use this structure to reduce the dimensions of the 3D voxels to 2D. The 3D point cloud acquired from LiDAR is distance data between nearby object surfaces and the autonomous vehicle, and it cannot capture object surfaces lower than the ground. A lowermost heightmap accounts for this by comprising only the voxels with the smallest height values. Pseudocode 1 illustrates the algorithm that builds the lowermost heightmap using a dynamic flagmap.
In the algorithm, F is the dynamic flagmap, v is an index in the array, v_y is the height value of v on the y-axis, and H is the lowermost heightmap. The lowermost heightmap has only one height value on the vertical line passing through each (x, z) coordinate on the xz plane. Accordingly, the height value of the neighboring (x, z) coordinates of a specific (x, z) coordinate can be identified immediately.

(1) function VoxelLabeling(H)
(2)   VerticalLabeling(H, 0)
(3)   for j from 1 to ColumnCount(H) − 1 do
(4)     VerticalLabeling(H, j)
(5)     HorizontalLabeling(H, j − 1, j)
(6)   end
(7) end

Pseudocode 2: Voxel labeling algorithm.
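Pseudocode 1 amounts to a minimum-reduction per (x, z) column. A Python sketch follows, assuming for illustration that the occupied voxels are given as integer (x, y, z) coordinates rather than raw array indices:

```python
def build_lowermost_heightmap(flagmap):
    """For each (x, z) column, keep only the smallest y among the
    occupied voxels -- the 'lowermost' height (cf. Pseudocode 1).

    flagmap: iterable of (x, y, z) integer voxel coordinates."""
    H = {}
    for (x, y, z) in flagmap:
        key = (x, z)
        if key not in H or y < H[key]:
            H[key] = y            # keep the minimum height per column
    return H
```

Collapsing each vertical column to one value is what reduces the 3D voxel set to the 2D structure used by the later stages.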
5.2. Voxel Labeling. The ground surface on which an autonomous vehicle moves resembles a sloped continuous surface. This feature of the ground is roughly modeled by building a lowermost heightmap. However, a lowermost heightmap also includes height values that do not lie on the ground surface. Voxel labeling is an algorithm that creates voxel groups by grouping geometrically continuous voxels among neighboring voxels. In this section, we propose an approach that minimizes access to neighboring voxels so that voxel labeling runs in real time.
Voxel labeling is based on the lowermost heightmap, which can access the neighbors of each voxel very quickly because each voxel position is an array index.
When the height difference between two neighboring voxels is below a certain threshold, the same label is assigned to both voxels, and hence one voxel group is created. When the height difference between neighboring voxels is in the ±1 range, both voxels have a high probability of being part of the ground and are thus placed in the same group. The bounds of each voxel group expand as voxel labeling proceeds. If even one voxel in a group is close in height to any voxel in another group, the two groups are merged. Voxel labeling applies an optimized algorithm for effective real-time performance. Pseudocode 2 illustrates the voxel labeling algorithm.
The algorithm treats the lowermost heightmap as a kind of matrix. The following equation defines the rows and columns of a lowermost heightmap:
row_i = {(x, z) | z = i, for all x},
column_j = {(x, z) | x = j, for all z}. (4)
Figure 3 describes the voxel-labeling process; it illustrates a conceptual top view of the lowermost heightmap. First, vertical labeling is applied to the voxels in the jth and (j+1)th columns of the lowermost heightmap. Vertical labeling is the process of labeling one column: when the height difference between the voxels at (i, j) and (i+1, j) is in the ±τ range, both voxels are labeled as part of the same group, and the process is repeated for all rows in the jth column. Vertical labeling is also executed on the (j+1)th column, so voxel groups are generated for each column. Figure 4(a) illustrates the neighboring voxels that each voxel needs to access in order to compare height differences in vertical labeling. Following this, the voxel groups in both columns are integrated by horizontal labeling: when the height difference between a voxel in the jth column and its neighbor in the (j+1)th column is below ±τ, horizontal labeling integrates the two voxel groups. Figure 4(b) shows the neighboring voxels accessed in horizontal labeling.

Figure 3: Voxel labeling in the lowermost heightmap. Each square represents a voxel, which has height (h) and label (l) attributes. (a) Initial state of the lowermost heightmap before executing voxel labeling. (b) Execute VerticalLabeling for the 0th and 1st columns independently. (c) Execute HorizontalLabeling for the two columns. (d) After executing VerticalLabeling for the 2nd column, execute (c) for the 1st and 2nd columns.

Figure 4: Neighboring voxels to which each voxel needs access for voxel labeling. (a) Neighboring voxels in vertical labeling. (b) Neighboring voxels in horizontal labeling.
When both columns have been integrated, vertical labeling is executed on the voxels in the (j+2)th column and horizontal labeling on the (j+1)th and (j+2)th columns. The process is repeated until voxel labeling has been executed for all columns in the lowermost heightmap. The voxel group with the highest number of voxels is determined to be the final ground voxel group.
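The grouping rule above (merge neighbors whose height difference is within ±τ, then keep the largest group as ground) can be sketched with a union-find structure. This is an illustrative re-implementation of the rule, not the authors' optimized column-sweep:

```python
def label_ground(heightmap, tau=1):
    """heightmap: dict mapping (x, z) -> height.

    Voxels whose neighbor height difference is <= tau are merged into
    one group (vertical step: (x+1, z); horizontal step: (x, z+1));
    the largest group is returned as the ground set."""
    parent = {k: k for k in heightmap}

    def find(a):                      # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for (x, z), h in heightmap.items():
        for nb in ((x + 1, z), (x, z + 1)):
            if nb in heightmap and abs(heightmap[nb] - h) <= tau:
                union((x, z), nb)

    groups = {}
    for k in heightmap:
        groups.setdefault(find(k), []).append(k)
    return max(groups.values(), key=len)  # largest group = ground
```

On a flat 3x3 patch with one tall obstacle cell, the eight flat cells form the largest group and the obstacle is left out, mirroring the ground/nonground split described above.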
6 Experiments
We performed an experiment to verify the efficiency of our ground segmentation framework. For the experiment, 3D point clouds were acquired on actual mountainous roads approximately 35 km in length, which featured flatlands, slopes, trees, and buildings. The experimental platform was a vehicle on which the LiDAR was mounted; using the sensor, the vehicle acquired the dataset at an average velocity of about 20 km/h. The LiDAR model used for the experiment was a Velodyne HDL-32E. To increase the density of the 3D point clouds, we only used data within about 30 m of the LiDAR. The PC used for ground segmentation had an Intel i7-870 CPU and 8 GB of DDR3-10600 RAM. The algorithm was tested on the acquired dataset.
Local 3D point clouds from the LiDAR were accumulated in global coordinates using the vehicle's movement information. Figure 5 illustrates the slope of the geographical area used in the experiment.
Figure 8 visualizes the ground segmentation results. The results are rendered with textured mesh for the ground and colored points for nonground data; the color of each point was assigned according to its height.
To validate the performance of the proposed framework, we compared it with a threshold-based segmentation method, which uses a height threshold to classify voxels into ground and nonground. Figures 8(a), 8(c), 8(e), 8(g), and 8(i) show the results of the threshold-based method; the results of the ground segmentation framework proposed in this paper are illustrated in Figures 8(b), 8(d), 8(f), 8(h), and 8(j). There is little difference between Figures 8(a) and 8(b), because these parts of the scene were captured on a flat road. In Figure 8(c), a slope on the right side was misclassified as nonground, whereas in Figure 8(d) the slope was correctly classified as ground. Figure 8(e) shows some parts of trees rendered as textured mesh because they were misclassified as ground, whereas in Figure 8(f) those parts were rendered as colored points, having been classified as nonground. Voxels on uphill roads were misclassified as nonground in Figures 8(g) and 8(i) but were classified properly as ground in Figures 8(h) and 8(j).

Figure 5: Slope of the geography used in the experiment (height value, in cm, versus packet count).

Figure 6: Elapsed time performing segmentation (elapsed time, in ms, versus frame count).
The LiDAR used for the experiment typically scans the surrounding environment at a rate of about 10 Hz. To verify ground segmentation in real time, we executed ground segmentation at about 20 Hz, twice the rotation speed of the LiDAR. Figure 6 plots the time spent on ground segmentation per frame, and Table 1 gives the per-frame statistics. They verify that the proposed framework is fast enough to segment the ground in real time, as the average elapsed time is well below 50 ms, the maximum allowed when performing segmentation at 20 Hz.
Figure 7 plots the ground data size after ground segmentation by frame, and Table 2 gives the statistics. Because the segmentation was performed at 20 Hz in this experiment, the average ground data size per frame (in bytes) can be converted to Mbps using this equation:

Mbps = (bytes × 8 × 20) / 1024². (5)

With this equation, the average rate is calculated as 1.14 Mbps. Because the minimum data rate per stream of the 802.11a network standard is 6 Mbps, this data size is acceptable for a typical wireless network.

Figure 7: Ground data size after segmentation (bytes versus frame count).

Table 1: Stats of elapsed time by frame.
Average elapsed time: 19.31 ms
Maximum elapsed time: 32.07 ms
Standard deviation: 3.91 ms

Table 2: Stats of ground data size by frame.
Average data size: 7467.57 bytes
Maximum data size: 19716.00 bytes
Standard deviation: 3591.44 bytes
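Equation (5) is a plain unit conversion: bytes per frame × 8 bits × 20 frames/s, divided by 1024² bits per Mbit. As a sanity check, the code below uses 7467.57 bytes as the average frame size from Table 2 (the printed value with its decimal point restored, an assumption consistent with the 1.14 Mbps figure in the text):

```python
def frame_bytes_to_mbps(avg_bytes, rate_hz=20):
    """Convert an average per-frame data size to Mbps, as in (5)."""
    return avg_bytes * 8 * rate_hz / 1024**2

# Average frame size from Table 2 (decimal point assumed restored):
print(round(frame_bytes_to_mbps(7467.57), 2))  # -> 1.14
```

The result stays well under the 6 Mbps minimum per-stream rate of 802.11a, matching the conclusion drawn in the text.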
7 Conclusion
In this paper, we proposed a ground segmentation framework for the real-time route planning of an autonomous vehicle, over a wireless network, in a ubiquitous road environment. The framework involves the elimination of overlapping data, the reduction of data dimensions, and ground segmentation. To this end, the implementation of the dynamic flagmap, the lowermost heightmap, and technologies including voxel labeling were described in detail. A voxel labeling algorithm was developed that minimizes access to neighboring voxels for real-time ground segmentation. Furthermore, we experimentally verified the efficiency of our real-time ground segmentation system, even in a geographical environment with numerous hills. The segmented ground data is small enough to be shared through a wireless network in real time with binary compression.
Figure 8: Ground segmentation results (the differing parts are indicated by arrows). (a) Threshold-based method result on a flat road. (b) Proposed method result on a flat road. (c) Threshold-based method result on a flat road with a slope. (d) Proposed method result on a flat road with a slope. (e) Threshold-based method result on a downhill road. (f) Proposed method result on a downhill road. (g) Threshold-based method result on an uphill road. (h) Proposed method result on an uphill road. (i) Threshold-based method result on another uphill road. (j) Proposed method result on another uphill road.
Our future research will deal with technology for eliminating small objects along a road, such as bushes, from the ground data. To this end, we will develop an algorithm that considers the density of each voxel group in real time.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the Agency for Defense Development, Republic of Korea.
References
[1] Z. C. Taysi and A. G. Yavuz, "ETSI compliant geo-networking protocol layer implementation for IVC simulations," Human-Centric Computing and Information Sciences, vol. 3, no. 4, pp. 1–12, 2013.
[2] M. Yoon, Y. Kim, and J. Chang, "An energy-efficient routing protocol using message success rate in wireless sensor networks," Journal of Convergence, vol. 4, no. 1, pp. 15–22, 2013.
[3] W.-H. Chung, S. Kumar, S. Paluri, S. Nagaraj, A. Annamalai Jr., and J. D. Matyjas, "A cross-layer unequal error protection scheme for prioritized H.264 video using RCPC codes and hierarchical QAM," Journal of Information Processing Systems, vol. 9, no. 1, pp. 53–68, 2013.
[4] L. Emmi, M. Gonzalez-de-Soto, G. Pajares, and P. Gonzalez-de-Santos, "New trends in robotics for agriculture: integration and assessment of a real fleet of robots," The Scientific World Journal, vol. 2014, Article ID 404059, 21 pages, 2014.
[5] Y. Xu, X. Chen, and Q. Li, "Adaptive iterated extended Kalman filter and its application to autonomous integrated navigation for indoor robot," The Scientific World Journal, vol. 2014, Article ID 138548, 7 pages, 2014.
[6] F. Moosmann, O. Pink, and C. Stiller, "Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 215–220, June 2009.
[7] M. Himmelsbach, F. V. Hundelshausen, and H.-J. Wuensche, "Fast segmentation of 3D point clouds for ground vehicles," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '10), pp. 560–565, June 2010.
[8] B. Douillard, J. Underwood, N. Kuntz, et al., "On the segmentation of 3D lidar point clouds," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2798–2805, May 2011.
[9] J. Hernandez and B. Marcotegui, "Point cloud segmentation towards urban ground modeling," in Proceedings of the Joint Urban Remote Sensing Event, pp. 1–5, May 2009.
[10] W. Song, K. Cho, K. Um, C. S. Won, and S. Sim, "Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation," Sensors, vol. 12, no. 12, 2012.
[11] T. Chen, B. Dai, R. Wang, and L. Daxue, "Gaussian-process-based real-time ground segmentation for autonomous land vehicles," Journal of Intelligent & Robotic Systems, 2013.
[12] W. Song, S. Cho, K. Cho, K. Um, C. S. Won, and S. Sim, "Traversable ground surface segmentation and modeling for real-time mobile mapping," International Journal of Distributed Sensor Networks, vol. 2014, Article ID 795851, 8 pages, 2014.
The Scientific World Journal

Figure 1: Overview of the ground segmentation framework. A LiDAR point cloud is stored in a dynamic flagmap and reduced to a lowermost heightmap; voxel labeling then produces ground voxel groups and nonground voxel groups, using movement information from the IMU-GPS.
framework comprising various algorithms, including ones for the elimination of overlapping data and for ground segmentation. Figure 1 illustrates the ground segmentation system proposed in this paper.
The framework performs ground segmentation through the following three steps:

(i) dynamic flagmap implementation to eliminate overlapping data;

(ii) lowermost heightmap implementation to improve the efficiency of the ground segmentation algorithm;

(iii) voxel labeling algorithm implementation for grouping neighboring voxels.
To reduce the number of points in 3D point clouds with substantial overlap, redundant points should be deleted as the vehicle moves. To this end, this paper proposes a dynamic flagmap, a data structure expressing the volume pixel (voxel) space quantized in 10 cm units. The dynamic flagmap eliminates points assigned to the same voxel.
The next step is to effectively identify the driving route of the autonomous vehicle. Ground segmentation is performed on the dynamic flagmap, which comprises nonoverlapping voxels, and several other algorithms are applied in concert.
The implementation of the lowermost heightmap reduces the data size required by the ground segmentation algorithm and removes a number of nonground voxels. A heightmap is a general approach to represent terrain efficiently; it is based on a two-dimensional coordinate system comprising only the x-axis and the z-axis, where each 2D coordinate has a constant height value. The lowermost heightmap is used for preprocessing in order to select only voxels with a high probability of belonging to the ground. The reduction in the number of voxels used in subsequent algorithms improves their efficiency.
Once the lowermost heightmap has been built, neighboring voxels are gathered by a voxel labeling algorithm. The algorithm assigns the same label to voxels with a small difference in height with respect to the lowermost heightmap. When labeling for all voxels is complete, it is determined whether or not each voxel lies on the ground. An optimization that reduces access to neighboring voxels is applied to shorten the execution time of the algorithm.
4. Building Dynamic Flagmap
The 3D point cloud acquired from LiDAR is represented using local coordinates centered on the sensor. As the autonomous vehicle carrying the LiDAR moves, overlapping points among the local 3D point clouds acquired per frame cannot be properly identified. Thus, we need to convert the local 3D point clouds acquired per frame into a global coordinate system and accumulate them. For this, the vehicle's movement information is acquired by the D-GPS/IMU mounted on it. Once the 3D point clouds are integrated into a single coordinate system, neighboring points can be removed once they are determined to be overlapping points.
However, estimating adjacency between 3D points with real-number coordinates requires more calculation than for 2D points. In this section, we propose a data structure called a dynamic flagmap that finds overlapping points by easily identifying the adjacency between two points. To represent the voxel space ("flagmap") simply, each voxel location is expressed using a 1D array rather than 3D coordinates. Each element in the array is a one-bit Boolean item indicating the voxel's existence. Such an approach facilitates access between neighboring points.

Figure 2: Structure to express a local 3D point cloud in a global coordinate system in fixed memory (top view, showing the bounds m and M, the dimensions W′ and D′, and the vehicle near the origin).

To reduce the dimensions of the 3D voxels and to express them in a 1D array, we need to bound the voxel space. This is because the range of indices an array can express is limited, and a large array requires more memory. The following equation is used to convert a local 3D point into an array index in a limited voxel space [12]:
v = 2WH · floor(z/μ + D/2) + H · floor(x/μ + W/2) + floor(y/μ + H/2),    (1)
where W, H, and D represent the width, height, and depth of the limited voxel space, respectively; x, y, and z are the coordinates of each local 3D point; μ is the length of a side of each voxel; and v is the resulting array index. This equation reduces memory usage and makes it easy to estimate adjacency among voxels by quantizing a local 3D point into an array index.
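For illustration, (1) can be sketched as follows; the dimensions W, H, and D here are arbitrary example values (not from the paper), and the 2WH stride follows the equation as printed:

```python
import math

def local_point_to_index(x, y, z, W=200, H=50, D=200, mu=0.1):
    """Quantize a local 3D point (in meters) into a 1D flagmap index per (1).

    W, H, D are illustrative voxel-space dimensions; mu = 0.1 m is the
    10 cm voxel side used in the paper.
    """
    iz = math.floor(z / mu + D / 2)  # quantized depth slice
    ix = math.floor(x / mu + W / 2)  # quantized width slice
    iy = math.floor(y / mu + H / 2)  # quantized height slice
    return 2 * W * H * iz + H * ix + iy

# Points falling inside the same 10 cm voxel collide on the same index,
# which is how overlapping points are detected and discarded.
center = local_point_to_index(0.0, 0.0, 0.0)
```

Because nearby points collide on the same index, a plain Boolean array of flags suffices to drop duplicates without any distance computation.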
However, since (1) determines whether or not a point is overlapping on the basis of a local 3D point, it cannot be used if the autonomous vehicle moves at high speed. To determine data overlap in such a case, we implement the dynamic flagmap on the basis of global 3D points. Figure 2 shows the structure required to express the local 3D point clouds acquired per frame in a global coordinate system in fixed memory.
In Figure 2, W′, H′, and D′ represent the width, height, and depth of the dynamic voxel space, respectively; the figure is drawn in 2D from the top view, so H′ is omitted. m and M are the lower and upper bounds of the dynamic voxel space, calculated from the minimum and maximum values of the x′-, y′-, and z′-coordinates among the global 3D point clouds acquired per frame. Both bounds change dynamically depending on the direction of the autonomous vehicle, and W′, H′, and D′ vary with m and M. The maximum values of W′, H′, and D′ are determined by the maximum distance in the 3D point cloud acquired from the LiDAR. Accordingly, the array size of the dynamic flagmap is determined by the maximum values of W′, H′, and D′, as shown in the equation below:
size = max W′ × max H′ × max D′.    (2)
The voxel space around the vehicle, held in an array of fixed size, can be expressed in global coordinates by updating m and M every frame. In other words, the coordinates of the 0th index of the dynamic flagmap array are the same as m, and the coordinates of the (W′ × H′ × D′ − 1)th index are identical to M. The following equation converts a global 3D point into an array index using the dynamic flagmap:
v′ = 2W′H′ · floor((z_m − z′)/μ + D′/2) + H′ · floor((x_m − x′)/μ + W′/2) + floor((y_m − y′)/μ + H′/2).    (3)
The global coordinates can thus be expressed in a fixed-size array by locating each voxel through the dynamic flagmap, as described above.
(1) function BuildLowermostHeightmap(F)
(2)   foreach v in F do
(3)     if v_y < H[v] then H[v] ← v_y
(4)   end
(5)   return H
(6) end

Pseudocode 1: Building lowermost heightmap.

The following supplementary process is required to determine whether voxels converted into global coordinates overlap. Let us express m and M at time t_i as m_i and M_i, respectively. Then m_i ≠ m_j and M_i ≠ M_j when i ≠ j; this is due to errors in the vehicle's location even when the autonomous vehicle is stationary. Accordingly, all global 3D points are kept in a separate linked list after they have been converted into an array index and added to the dynamic flagmap. Each global 3D point in the linked list is deleted when it exceeds the bounds of the dynamic voxel space in a frame. Because the global 3D points are kept in the linked list, voxel overlap can be determined in the global coordinate system.
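The overlap-elimination bookkeeping described above might be sketched as follows; the class name, the default dimensions, and the use of a deque in place of a linked list are our own illustrative choices, and the per-frame update of m with eviction of out-of-bounds points is omitted for brevity:

```python
from collections import deque
import math

class DynamicFlagmap:
    """Sketch of the dynamic flagmap: a fixed-size Boolean array indexed by
    quantized global coordinates per (3), plus a FIFO of retained global
    points standing in for the paper's linked list."""

    def __init__(self, W=64, H=16, D=64, mu=0.1):
        self.W, self.H, self.D, self.mu = W, H, D, mu
        self.flags = [False] * (2 * W * H * D)  # 2W'H' stride as in (3)
        self.points = deque()                   # retained global points
        self.m = (0.0, 0.0, 0.0)                # lower bound, updated per frame

    def index(self, p):
        """Convert a global point to an array index, following (3)."""
        xm, ym, zm = self.m
        x, y, z = p
        iz = math.floor((zm - z) / self.mu + self.D / 2)
        ix = math.floor((xm - x) / self.mu + self.W / 2)
        iy = math.floor((ym - y) / self.mu + self.H / 2)
        return 2 * self.W * self.H * iz + self.H * ix + iy

    def add(self, p):
        """Insert a point; return True if its voxel was newly occupied."""
        v = self.index(p)
        if self.flags[v]:
            return False        # overlapping point: discard
        self.flags[v] = True
        self.points.append(p)   # retained so the voxel can be cleared later
        return True
```

In a full implementation, points popped from the deque when they leave the dynamic bounds would clear their flags, keeping the fixed-size array consistent with the moving voxel space.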
In this section, we described the dynamic flagmap data structure, which effectively eliminates overlapping data. The data structure reduces the data size required for ground segmentation and thus improves the performance of the algorithm.
5. Ground Segmentation
Ground segmentation is a preprocessing step in planning the route of an autonomous vehicle. It requires a dynamic flagmap, which consists of a 1D Boolean array and the bounds of the dynamic voxel space. Using the dynamic flagmap, a lowermost heightmap is constructed, and the voxel labeling algorithm is executed on it. Ground voxel groups and nonground voxel groups are generated as a result. This section describes an effective and efficient ground segmentation approach for an autonomous vehicle.
5.1. Lowermost Heightmap. A lowermost heightmap is the basic data structure for the execution of ground segmentation. Building it improves the efficiency of the ground segmentation mechanism and reduces the number of nonground voxels. A heightmap generally consists of a plane coordinate system with the x-axis and the z-axis, such that each coordinate has a height value; we use this structure to reduce the 3D voxels to 2D. The 3D point cloud acquired from LiDAR measures the distance between nearby object surfaces and the autonomous vehicle, so it cannot capture object surfaces lower than the ground. The lowermost heightmap accounts for this and is thus composed of only the voxels with the smallest height values. Pseudocode 1 illustrates the algorithm that builds the lowermost heightmap from a dynamic flagmap.
In the algorithm, F is the dynamic flagmap, v is an index in the array, v_y is the height value of v on the y-axis, and H is the lowermost heightmap. The lowermost heightmap holds only one height value on the vertical line passing through each (x, z) coordinate of the xz plane. Accordingly, the height values of the neighboring (x, z) coordinates of a specific (x, z) coordinate can be immediately identified.

(1) function VoxelLabeling(H)
(2)   VerticalLabeling(H, 0)
(3)   for j from 1 to ColumnCount(H) − 1 do
(4)     VerticalLabeling(H, j)
(5)     HorizontalLabeling(H, j − 1, j)
(6)   end
(7) end

Pseudocode 2: Voxel labeling algorithm.
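A minimal Python sketch of Pseudocode 1 follows; the dict-based heightmap and the (x, z, y) voxel format are our own simplification rather than the paper's actual data layout:

```python
def build_lowermost_heightmap(occupied_voxels):
    """Keep only the smallest height y per (x, z) column, as in Pseudocode 1.

    `occupied_voxels` is an iterable of (x, z, y) integer voxel coordinates;
    the returned dict maps (x, z) -> lowermost height.
    """
    H = {}
    for x, z, y in occupied_voxels:
        # H[(x, z)] absent plays the role of an "infinite" initial height
        if (x, z) not in H or y < H[(x, z)]:
            H[(x, z)] = y
    return H
```

Each (x, z) column thus carries exactly one height, which is what lets the later labeling step read a neighbor's height in constant time.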
5.2. Voxel Labeling. The ground surface on which an autonomous vehicle moves resembles a sloped, continuous surface. This feature is roughly modeled by building a lowermost heightmap; however, the lowermost heightmap also includes height values that do not lie on the ground surface. Voxel labeling is an algorithm that creates voxel groups by grouping geometrically continuous neighboring voxels. In this section, we propose an approach that minimizes access to neighboring voxels so that voxel labeling runs in real time.
Voxel labeling is based on the lowermost heightmap, which provides very fast access to the voxels neighboring each voxel because each voxel position is an array index.
When the height difference between two neighboring voxels is below a certain threshold, the same label is assigned to both voxels, and hence one voxel group is created. When the height difference between neighboring voxels is within the ±τ range, both voxels have a high probability of being part of the ground and are therefore placed in the same group. The bounds of each voxel group expand as voxel labeling proceeds, and if even one voxel in a group is close in height to any voxel of another group, the two groups are merged. Voxel labeling applies an optimized algorithm for effective real-time performance. Pseudocode 2 illustrates the voxel labeling algorithm.
The algorithm treats the lowermost heightmap as a kind of matrix. The following equation defines the rows and columns of a lowermost heightmap:

row_i = {(x, z) | z = i, for all x},
column_j = {(x, z) | x = j, for all z}.    (4)
Figure 3 describes the voxel labeling process; it illustrates a conceptual top view of the lowermost heightmap. First, vertical labeling is applied to the voxels of the jth and (j + 1)th columns of the lowermost heightmap. Vertical labeling labels a single column: when the height difference between the voxels at (i, j) and (i + 1, j) is within the ±τ range, both voxels are labeled as part of the same group. The process is repeated for all rows of the jth column, vertical labeling is likewise executed on the (j + 1)th column, and voxel groups are thus generated per column. Figure 4(a) illustrates the neighboring voxels that each voxel needs to access in order to compare height differences in vertical labeling. Following this, the voxel groups of both columns are integrated by horizontal labeling: when the height difference between a voxel of the jth column and its neighbor in the (j + 1)th column is within ±τ, horizontal labeling merges their voxel groups. Figure 4(b) shows the neighboring voxels accessed in horizontal labeling.

Figure 3: Voxel labeling in the lowermost heightmap. Each square represents a voxel with height (h) and label (l) attributes. (a) Initial state of the lowermost heightmap before executing voxel labeling. (b) VerticalLabeling executed on the 0th and 1st columns independently. (c) HorizontalLabeling executed on the two columns. (d) After VerticalLabeling on the 2nd column, step (c) is repeated for the 1st and 2nd columns.

Figure 4: Neighboring voxels to which each voxel needs access for voxel labeling. (a) Neighboring voxels in vertical labeling. (b) Neighboring voxels in horizontal labeling.
When both columns have been integrated, vertical labeling is executed on the voxels of the (j + 2)th column, followed by horizontal labeling on the (j + 1)th and (j + 2)th columns. The process is repeated until voxel labeling has been executed for all columns of the lowermost heightmap. The voxel group containing the largest number of voxels is determined to be the final ground voxel group.
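The column sweep above can be sketched as follows; the union-find group merging, the default τ, and the dict-based heightmap are our own illustrative choices, not prescribed by the paper:

```python
def label_heightmap(H, ncols, nrows, tau=1):
    """Sweep columns, labeling vertically then merging horizontally.

    H maps (row, col) -> height for occupied cells. Returns a dict
    (row, col) -> group representative; the largest group is then
    taken as ground.
    """
    parent = {cell: cell for cell in H}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for j in range(ncols):
        # vertical labeling within column j
        for i in range(1, nrows):
            if (i, j) in H and (i - 1, j) in H \
                    and abs(H[(i, j)] - H[(i - 1, j)]) <= tau:
                union((i, j), (i - 1, j))
        # horizontal labeling between columns j-1 and j
        if j > 0:
            for i in range(nrows):
                if (i, j) in H and (i, j - 1) in H \
                        and abs(H[(i, j)] - H[(i, j - 1)]) <= tau:
                    union((i, j), (i, j - 1))

    return {cell: find(cell) for cell in H}
```

The final ground group is then the most frequent label, for example via `collections.Counter(labels.values()).most_common(1)`.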
6. Experiments
We performed an experiment to verify the efficiency of our ground segmentation framework. For the experiment, 3D point clouds were acquired from actual mountainous roads approximately 3.5 km in length, consisting of flatlands, slopes, trees, and buildings. The experiment platform was a vehicle on which a LiDAR was mounted; using this sensor, the vehicle acquired the dataset at an average velocity of about 20 km/h. The LiDAR model used was a Velodyne HDL-32E. To increase the density of the 3D point clouds, we only used data within about 30 m of the LiDAR. The PC used for ground segmentation had an Intel i7-870 CPU and 8 GB of DDR3-10600 RAM. The algorithm was tested on the acquired dataset.
Local 3D point clouds from the LiDAR were accumulated in global coordinates using the vehicle's movement information. Figure 5 illustrates the slope of the geographical area used in the experiment.
Figure 8 visualizes the ground segmentation results. The results are rendered with textured mesh and colored points representing ground and nonground, respectively; the color of each point was assigned according to its height.
To validate the performance of the proposed framework, we compared it with a threshold-based segmentation method, which uses a single height value to classify voxels into ground and nonground. Figures 8(a), 8(c), 8(e), 8(g), and 8(i) show the results of the threshold-based method, and the results of the ground segmentation framework proposed in this paper are illustrated in Figures 8(b), 8(d), 8(f), 8(h), and 8(j). There is no big difference between Figures 8(a) and 8(b), because these parts of the scene were captured on a flat road. In Figure 8(c), a slope on the right side was misclassified as nonground, whereas in Figure 8(d) the slope was classified correctly as ground. Figure 8(e) shows some parts of trees rendered as textured mesh because they were misclassified as ground, whereas in Figure 8(f) those parts were rendered as colored points, having been classified as nonground. Voxels on an uphill road were misclassified as nonground in Figures 8(g) and 8(i) but were classified properly as ground in Figures 8(h) and 8(j).

Figure 5: Slope of the geography used in the experiment (height value in cm plotted against packet count).

Figure 6: Elapsed time performing segmentation (milliseconds plotted against frame count).
The LiDAR used in the experiment scans the surrounding environment at a rate of about 10 Hz. To verify real-time operation, we executed ground segmentation at about 20 Hz, twice the rotation speed of the LiDAR. Figure 6 plots the time spent on ground segmentation per frame, and Table 1 summarizes these times. The results verify that the proposed framework segments the ground in real time, as the average elapsed time per frame is well under 50 ms, the maximum allowed to sustain segmentation at 20 Hz.
Figure 7: Ground data size after segmentation (bytes plotted against frame count).

Table 1: Stats of elapsed time by frame.
Average elapsed time: 19.31 ms
Maximum elapsed time: 32.07 ms
Standard deviation: 3.91 ms

Table 2: Stats of ground data size by frame.
Average data size: 7467.57 bytes
Maximum data size: 19716.00 bytes
Standard deviation: 3591.44 bytes

Figure 7 plots the ground data size per frame after ground segmentation, and Table 2 shows the ground
data statistics. Because the segmentation was performed at 20 Hz in this experiment, the average ground data size per frame (in bytes) can be converted to Mbps using the following equation:
Mbps = (bytes × 8 × 20) / 1024².    (5)
With this equation, the average rate is calculated as 1.14 Mbps. Because the minimum data rate per stream of the 802.11a network standard is 6 Mbps, this data size is acceptable for a typical wireless network.
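As a quick check of (5), a helper like the following (the function name is our own) reproduces the reported average rate from the per-frame byte count in Table 2:

```python
def frame_bytes_to_mbps(avg_bytes_per_frame, frames_per_second=20):
    """Convert an average per-frame payload in bytes to Mbps, as in (5)."""
    return avg_bytes_per_frame * 8 * frames_per_second / 1024 ** 2

# Average ground data size per frame from Table 2.
rate = frame_bytes_to_mbps(7467.57)
```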
7. Conclusion
In this paper, we proposed a ground segmentation framework for real-time route planning through a wireless network for an autonomous vehicle in a ubiquitous road environment. The framework involves the elimination of overlapping data, the reduction of data dimensions, and ground segmentation. To this end, the implementation of the dynamic flagmap, the lowermost heightmap, and techniques including voxel labeling were described in detail. A voxel labeling algorithm was developed that minimizes access to neighboring voxels for real-time ground segmentation. Furthermore, we experimentally verified the efficiency of our real-time ground segmentation system even in a geographical environment with numerous hills. The segmented ground data is small enough to be shared through a wireless network in real time after binary compression.
Figure 8: Ground segmentation results (the different parts are indicated by arrows). (a) Threshold-based method result on a flat road. (b) Proposed method result on a flat road. (c) Threshold-based method result on a flat road with a slope. (d) Proposed method result on a flat road with a slope. (e) Threshold-based method result on a downhill road. (f) Proposed method result on a downhill road. (g) Threshold-based method result on an uphill road. (h) Proposed method result on an uphill road. (i) Threshold-based method result on another uphill road. (j) Proposed method result on another uphill road.
Our future research will deal with technology for eliminating small objects along a road, such as bushes, from the ground data. To this end, we will develop an algorithm that considers the density of each voxel group in real time.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the Agency for Defense Development, Republic of Korea.
References
[1] Z. C. Taysi and A. G. Yavuz, "ETSI compliant geo-networking protocol layer implementation for IVC simulations," Human-Centric Computing and Information Sciences, vol. 3, no. 4, pp. 1–12, 2013.
[2] M. Yoon, Y. Kim, and J. Chang, "An energy-efficient routing protocol using message success rate in wireless sensor networks," Journal of Convergence, vol. 4, no. 1, pp. 15–22, 2013.
[3] W.-H. Chung, S. Kumar, S. Paluri, S. Nagaraj, A. Annamalai Jr., and J. D. Matyjas, "A cross-layer unequal error protection scheme for prioritized H.264 video using RCPC codes and hierarchical QAM," Journal of Information Processing Systems, vol. 9, no. 1, pp. 53–68, 2013.
[4] L. Emmi, M. Gonzalez-de-Soto, G. Pajares, and P. Gonzalez-de-Santos, "New trends in robotics for agriculture: integration and assessment of a real fleet of robots," The Scientific World Journal, vol. 2014, Article ID 404059, 21 pages, 2014.
[5] Y. Xu, X. Chen, and Q. Li, "Adaptive iterated extended Kalman filter and its application to autonomous integrated navigation for indoor robot," The Scientific World Journal, vol. 2014, Article ID 138548, 7 pages, 2014.
[6] F. Moosmann, O. Pink, and C. Stiller, "Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 215–220, June 2009.
[7] M. Himmelsbach, F. V. Hundelshausen, and H.-J. Wuensche, "Fast segmentation of 3D point clouds for ground vehicles," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '10), pp. 560–565, June 2010.
[8] B. Douillard, J. Underwood, N. Kuntz, et al., "On the segmentation of 3D lidar point clouds," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2798–2805, May 2011.
[9] J. Hernandez and B. Marcotegui, "Point cloud segmentation towards urban ground modeling," in Proceedings of the Joint Urban Remote Sensing Event, pp. 1–5, May 2009.
[10] W. Song, K. Cho, K. Um, C. S. Won, and S. Sim, "Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation," Sensors, vol. 12, no. 12, 2012.
[11] T. Chen, B. Dai, R. Wang, and L. Daxue, "Gaussian-process-based real-time ground segmentation for autonomous land vehicles," Journal of Intelligent & Robotic Systems, 2013.
[12] W. Song, S. Cho, K. Cho, K. Um, C. S. Won, and S. Sim, "Traversable ground surface segmentation and modeling for real-time mobile mapping," International Journal of Distributed Sensor Networks, vol. 2014, Article ID 795851, 8 pages, 2014.
4 The Scientific World Journal
D998400
W998400
M
m
x
z
Vehicle
0
Global 3Dpoint cloud
Figure 2 Structure to express a local 3D point cloud in a global coordinate system in a fixed memory (from top view)
in the array is one-bit Boolean data item to indicate thevoxelrsquos existence Such approach facilitates access betweenneighboring points
To reduce the dimensions of the 3D voxels and to expressthem in a 1D array we need to reduce voxel space This isbecause the array size which can be expressed by the arrayindex is limited and large array data requires more memoryThe following equation is used to convert a local 3Dpoint intoan array index item in a limited voxel space [12]
V = 2119882119867 sdot floor(119911120583
+
119863
2
) + 119867 sdot floor(119909120583
+
119882
2
)
+ floor(119910
120583
+
119867
2
)
(1)
where119882119867 and119863 represent the width height and depth ina limited voxel space respectively 119909 119910 and 119911 are the valuesof the 119909-axis the 119910-axis and the 119911-axis of each local 3D pointrespectively and 120583 represents the length of a side of eachvoxel and V is an array indexThe equation above helps easilyreduce memory usage and estimate adjacency among voxelsby quantizing a local 3D point and converting it into an arrayindex
However since (1) determines whether or not a point isoverlapping based on a local 3D point it cannot be used if anautonomous vehicle moves at high speed To determine dataoverlap in such a case we implement the dynamic flagmapon the basis of 3D global points Figure 2 shows the structurerequired to express local 3D point clouds acquired per frameon a global coordinate system in the fixed memory
In Figure 21198821015840 1198671015840 and 1198631015840 represent the width heightand depth of the dynamic voxel space respectively They areindicated in 2D from the top view so that 1198671015840 is omitted 119898and119872 are the bounds respectively of dynamic voxel spacecalculated with the minimum and the maximum value of the11990910158401199101015840 and 1199111015840-axes among the 3D global point clouds acquired
per frame Both values are dynamically changed dependingon the direction of the autonomous vehicle Furthermore11988210158401198671015840 and 1198631015840 vary withm andM The maximum values of11988210158401198671015840 and 1198631015840 are determined depending on the maximum
distance in the 3Dpoint cloud acquired fromLiDARAccord-ingly the array size of the dynamic flagmap is determinedaccording to the maximum value of11988210158401198671015840 and1198631015840 as shownin the equation below
size = max1198821015840 timesmax1198671015840 timesmax1198631015840 (2)
Voxel space around the vehicle given in the array with a fixedsize can be expressed in global coordinates by updating119898 and119872 per frame In other words the coordinates of the 0th indexin the array of the dynamic flagmap are the same as m Thecoordinates of the (1198821015840 times1198671015840 times1198631015840 minus 1)th index are identical toM The following equation converts a 3D point into an arrayindex using the dynamic flagmap
V1015840 = 211988210158401198671015840 sdot floor((119911119898minus 1199111015840)
120583
+
1198631015840
2
)
+ 1198671015840sdot floor(
(119909119898minus 1199091015840)
120583
+
1198821015840
2
)
+ floor((119910119898minus 1199101015840)
120583
+
1198671015840
2
)
(3)
The global coordinates can be expressed in the array with thefixed size by expressing the location of voxel based on thedynamic flagmap as described above
The following supplementary process is required todetermine whether voxels converted into global coordinatesoverlap Let us express m and M at time 119905
119894as 119898119894and
119872119894 respectively Then 119898
119894= 119898119895and 119872
119894=119872119895 if 119894 = 119895 This
is because of an error in vehicle location even when an
The Scientific World Journal 5
(1) function BuildLowermostHeightmap(119865)(2) foreach V in F do(3) if V
119910lt 119867[V] then119867[V] larr V
119910
(4) end(5) return119867(6) end
Pseudocode 1: Building the lowermost heightmap.
autonomous vehicle stops. Accordingly, all 3D global points are kept in a separate linked list after they have been converted into an array index and added to the dynamic flagmap. Each 3D global point in the linked list is deleted when it exceeds the bounds of the dynamic voxel space per frame. As the 3D global points are kept in the linked list, voxel overlap can be determined based on the global coordinate system.
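The bookkeeping described above can be sketched as follows. The class and method names are illustrative rather than the authors' API, and a Python deque stands in for the linked list: new points set a flag and are queued, and points that leave the dynamic voxel bounds are dropped and their flags cleared.

```python
from collections import deque

class DynamicFlagmap:
    """Sketch of the overlap-elimination bookkeeping (illustrative names).

    Points are kept in a FIFO queue alongside the boolean flag array;
    when the bounds (m, M) move, points that fall outside are dropped
    and their flags cleared, so overlap tests stay globally consistent.
    """
    def __init__(self, size):
        self.flags = [False] * size
        self.points = deque()          # (point, index) pairs, oldest first

    def add(self, point, index):
        if not self.flags[index]:      # skip overlapping (duplicate) voxels
            self.flags[index] = True
            self.points.append((point, index))

    def prune(self, in_bounds):
        """Drop every stored point for which in_bounds(point) is False."""
        kept = deque()
        while self.points:
            point, index = self.points.popleft()
            if in_bounds(point):
                kept.append((point, index))
            else:
                self.flags[index] = False
        self.points = kept
```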
In this section, we explained the dynamic flagmap data structure, which can effectively eliminate overlapping data. The data structure helps reduce the data size required for ground segmentation and thus improves the performance of the algorithm.
5 Ground Segmentation
Ground segmentation is a preprocessing step in planning the route of an autonomous vehicle. Ground segmentation requires a dynamic flagmap, which consists of a 1D Boolean array and the bounds of the dynamic voxel space. Using the dynamic flagmap, a lowermost heightmap is constructed. The voxel labeling algorithm is then executed using the lowermost heightmap. Ground voxel groups and nonground voxel groups are generated as a result of ground segmentation. This section describes an effective and efficient ground segmentation approach for an autonomous vehicle.
5.1 Lowermost Heightmap. A lowermost heightmap is the basic data structure for the execution of ground segmentation. We show how to improve the estimation efficiency of the ground segmentation mechanism and reduce the number of nonground voxels by building a lowermost heightmap. A heightmap generally consists of a plane coordinate system with the x-axis and the z-axis, such that each coordinate has a height value. We reduce the dimensions of 3D voxels to 2D by using the above structure. The 3D point cloud acquired from LiDAR is the distance data between a nearby object surface and the autonomous vehicle. The 3D point cloud cannot detect an object surface which is lower than the ground. A lowermost heightmap accounts for such a scenario and thus comprises only the voxels with the smallest height values. Pseudocode 1 illustrates the algorithm that builds the lowermost heightmap using a dynamic flagmap.
In the algorithm, F is the dynamic flagmap, v is an index in the array, v_y is the height value of v on the y-axis, and H is the lowermost heightmap. The lowermost heightmap has only one height value on the vertical line passing through each (x, z) coordinate on the xz plane. Accordingly, the height value of
(1) function VoxelLabeling(H)
(2)   VerticalLabeling(H, 0)
(3)   for j from 1 to ColumnCount(H) − 1 do
(4)     VerticalLabeling(H, j)
(5)     HorizontalLabeling(H, j − 1, j)
(6)   end
(7) end
Pseudocode 2: Voxel labeling algorithm.
the neighboring (x, z) coordinates of a specific (x, z) coordinate can be immediately identified.
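A minimal Python sketch of Pseudocode 1, assuming (this is our assumption, not the paper's interface) that the flagmap is exposed as an iterable of occupied voxel coordinates (x, y, z) and that the heightmap is a dictionary keyed by (x, z):

```python
def build_lowermost_heightmap(voxels):
    """Pseudocode 1 as a sketch: for each (x, z) column, keep only the
    smallest height value y among the occupied voxels.

    `voxels` is assumed to be an iterable of (x, y, z) voxel coordinates
    taken from the dynamic flagmap.
    """
    H = {}
    for x, y, z in voxels:
        # equivalent of "if v_y < H[v] then H[v] <- v_y", with an
        # implicit +infinity for columns not seen yet
        if (x, z) not in H or y < H[(x, z)]:
            H[(x, z)] = y
    return H
```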
5.2 Voxel Labeling. The ground surface on which an autonomous vehicle moves has features similar to a sloped continuous surface. This feature of the ground surface is roughly modeled by building a lowermost heightmap. However, a lowermost heightmap also includes height values which do not exist on the ground surface. Voxel labeling is an algorithm that creates voxel groups by grouping geometrically continuous voxels among neighboring voxels. In this section, we propose an approach that minimizes access to neighboring voxels so that voxel labeling runs in real time.
Voxel labeling is based on the lowermost heightmap. The lowermost heightmap can access the voxels neighboring each voxel very quickly because each voxel position is an array index.
When the height difference between two neighboring voxels is below a certain threshold, the same label is assigned to both voxels, and hence one voxel group is created. When the height difference between neighboring voxels is within the ±τ range, both voxels have a high probability of being part of the ground; thus, they are placed in the same group. The bounds of each voxel group expand as voxel labeling proceeds. If even one voxel in a group is close in height to any voxel in another group, the two groups are integrated. Voxel labeling applies an optimized algorithm for effective performance in real time. Pseudocode 2 illustrates the voxel labeling algorithm.
The algorithm treats the lowermost heightmap as a kind of matrix. The following equation defines a row and a column of the lowermost heightmap:

row_i = {(x, z) | z = i, for all x},
column_j = {(x, z) | x = j, for all z}. (4)
Figure 3 describes the voxel-labeling process. The figure illustrates a conceptual top view of the lowermost heightmap. First, vertical labeling is applied to the voxels in the jth and (j+1)th columns of the lowermost heightmap. Vertical labeling is the process of labeling one column. When the height difference between the voxels at (i, j) and (i+1, j) is within the ±τ range, both voxels are labeled as part of the same group. The process is repeated for all rows in the jth column. Vertical labeling is then executed on the (j+1)th column as well. Finally, voxel groups are generated for each column. Figure 4(a) illustrates the neighboring voxels that each voxel needs to access in order to
Figure 3: Voxel labeling in the lowermost heightmap. Each square represents a voxel, which has height (h) and label (l) attributes. (a) Initial state of the lowermost heightmap before executing voxel labeling. (b) Execute VerticalLabeling for the 0th and 1st columns independently. (c) Execute HorizontalLabeling for the two columns. (d) After executing VerticalLabeling for the 2nd column, execute (c) for the 1st and 2nd columns.
Figure 4: Neighboring voxels to which each voxel needs access for voxel labeling. (a) Neighboring voxels in vertical labeling. (b) Neighboring voxels in horizontal labeling.
compare height differences in vertical labeling. Following this, the voxel groups in both columns are integrated by horizontal labeling. When the height difference between a voxel in the jth column and its neighbor in the (j+1)th column is within ±τ, horizontal labeling integrates both voxel groups. Figure 4(b) shows the neighboring voxels accessed in this step.
When both columns have been integrated, vertical labeling is executed on the voxels in the (j+2)th column, and horizontal labeling on the (j+1)th and (j+2)th columns. The process is repeated until voxel labeling has been executed for all columns in the lowermost heightmap. The voxel group with the highest number of voxels is determined to be the final ground voxel group.
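The complete pass described above — vertical labeling within each column, horizontal merging with the previous column, and selection of the largest group as ground — can be sketched with a union-find over labels. The function name, the (row, col)-keyed dictionary representation of the heightmap, and the merging details are our own assumptions, not the authors' implementation.

```python
def voxel_labeling(H, tau, rows, cols):
    """Sketch of Pseudocode 2 with group merging via union-find.

    H maps (row, col) to a height (the lowermost heightmap); tau is the
    height-difference threshold. The largest label set is returned as
    the ground group. All names are illustrative.
    """
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    label, next_label = {}, 0
    for j in range(cols):
        # vertical labeling: link vertically adjacent cells within column j
        for i in range(rows):
            if (i, j) not in H:
                continue
            label[(i, j)] = next_label
            parent[next_label] = next_label
            next_label += 1
            if (i - 1, j) in label and abs(H[(i, j)] - H[(i - 1, j)]) <= tau:
                union(label[(i - 1, j)], label[(i, j)])
        # horizontal labeling: merge with groups in column j - 1
        for i in range(rows):
            if (i, j) in label and (i, j - 1) in label and \
                    abs(H[(i, j)] - H[(i, j - 1)]) <= tau:
                union(label[(i, j - 1)], label[(i, j)])
    # the group with the most voxels is taken as ground
    groups = {}
    for cell, lab in label.items():
        groups.setdefault(find(lab), []).append(cell)
    return max(groups.values(), key=len)
```

Note that each voxel is compared only with its already-labeled upper and left neighbors, which mirrors the paper's goal of minimizing access to neighboring voxels.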
6 Experiments
We performed an experiment to verify the efficiency of our ground segmentation framework. For the experiment, 3D point clouds were acquired from actual mountainous roads, approximately 3.5 km in length, which consisted of flatlands, slopes, trees, and buildings. The experimental platform was a vehicle on which a LiDAR sensor was mounted. Using the sensor, the vehicle acquired the dataset at an average velocity of about 20 km/h. The LiDAR model used for the experiment was a Velodyne HDL-32E. To increase the density of the 3D point clouds, we used only data within about 30 m of the LiDAR. The PC used for ground segmentation had an Intel i7-870 CPU and 8 GB of DDR3-10600 RAM. The algorithm was tested on the acquired dataset.
Local 3D point clouds from the LiDAR were accumulated in global coordinates using the vehicle's movement information. Figure 5 illustrates the slope of the geographical area used in the experiment.
Figure 8 visualizes the ground segmentation results. The results were visualized with textured mesh and colored points to represent ground and nonground, respectively. The color of each point was assigned according to the height of the point.
To validate the performance of the proposed framework, we compared it with a threshold-based segmentation method. The threshold-based method uses a height value to classify voxels into ground and nonground voxels. Figures 8(a), 8(c), 8(e), 8(g), and 8(i) are the results of the threshold-based method, and the results of the ground segmentation framework proposed in
Figure 5: Slope of the geography used in the experiment (height value in cm versus packet count).
Figure 6: Elapsed time performing segmentation (elapsed time in ms versus frame count).
this paper are illustrated in Figures 8(b), 8(d), 8(f), 8(h), and 8(j). There is no big difference between Figures 8(a) and 8(b) because those parts of the scene were captured on a flat road. In Figure 8(c), a slope on the right side was misclassified as nonground, but in Figure 8(d) the slope was correctly classified as ground. Figure 8(e) illustrates that some parts of trees are rendered as textured mesh because they were misclassified as ground, whereas in Figure 8(f) those parts of the trees were rendered as colored points, having been classified as nonground. Voxels on an uphill road were misclassified as nonground in Figures 8(g) and 8(i) but were classified properly as ground in Figures 8(h) and 8(j).
The LiDAR used for the experiment typically scans the surrounding environment at a rate of about 10 Hz. To verify ground segmentation in real time, we executed ground segmentation at about 20 Hz, twice the rotation speed of the LiDAR. Figure 6 plots the time spent on ground segmentation per frame. Table 1 shows statistics of the time required for ground segmentation per frame. It verifies that the proposed framework is sufficient to segment the ground in real time, as the average elapsed time for segmentation is less than 50 ms, the maximum allowed in order to perform segmentation at 20 Hz.
Figure 7 plots the ground data size after ground segmentation by frame. Table 2 shows the ground
Figure 7: Ground data size after segmentation (bytes versus frame count).
Table 1: Stats of elapsed time by frame.

Average elapsed time: 19.31 ms
Maximum elapsed time: 32.07 ms
Standard deviation: 3.91 ms
Table 2: Stats of ground data size by frame.

Average data size: 7467.57 bytes
Maximum data size: 19716.00 bytes
Standard deviation: 3591.44 bytes
data data-size statistics. Because the segmentation was performed at 20 Hz in this experiment, the average ground data size per frame (in bytes) can be converted to Mbps using the following equation:
Mbps = (bytes × 8 × 20) / 1024². (5)
With this equation, the average data rate is calculated as 1.14 Mbps. Because the minimum data rate per stream of the 802.11a network standard is 6 Mbps, this data size is acceptable for a typical wireless network.
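Equation (5) can be checked with a short calculation, using the average per-frame size reported in Table 2 (interpreted as 7467.57 bytes, which is consistent with the reported 1.14 Mbps):

```python
def frame_bytes_to_mbps(avg_bytes, frames_per_second=20):
    """Equation (5): convert an average per-frame payload in bytes to Mbps,
    assuming the 20 Hz segmentation rate used in the experiment."""
    return avg_bytes * 8 * frames_per_second / 1024 ** 2

# Average per-frame ground data size from Table 2:
rate = frame_bytes_to_mbps(7467.57)   # about 1.14 Mbps, well under 6 Mbps
```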
7 Conclusion
In this paper, we proposed a ground segmentation framework for real-time route planning through a wireless network for an autonomous vehicle in a ubiquitous road environment. The framework involves the elimination of overlapping data, the reduction of data dimensions, and ground segmentation. To this end, the implementations of the dynamic flagmap, the lowermost heightmap, and technologies including voxel labeling were described in detail. A voxel labeling algorithm was developed to minimize access to neighboring voxels for real-time ground segmentation. Furthermore, we experimentally verified the efficiency of our real-time ground segmentation system even in a geographical environment with numerous hills. The segmented ground data can be shared through a wireless network in real time using binary compression.
Figure 8: Ground segmentation results (the different parts are indicated by arrows). (a) Threshold-based method result on a flat road. (b) Proposed method result on a flat road. (c) Threshold-based method result on a flat road with a slope. (d) Proposed method result on a flat road with a slope. (e) Threshold-based method result on a downhill road. (f) Proposed method result on a downhill road. (g) Threshold-based method result on an uphill road. (h) Proposed method result on an uphill road. (i) Threshold-based method result on another uphill road. (j) Proposed method result on another uphill road.
Our future research will deal with technology for eliminating small objects along a road, such as bushes, from the ground data. To this end, we will develop an algorithm that considers the density of each voxel group in real time.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the Agency for Defense Development, Republic of Korea.
[12] W Song S Cho K Cho K Um C S Won and S SimldquoTraversable ground surface segmentation and modeling forreal-time mobile mappingrdquo International Journal of DistributedSensor Networks vol 2014 Article ID 795851 8 pages 2014
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
[Figure 3: four panels (a)–(d), each showing a grid of voxels in columns 0, 1, and 2; every occupied voxel is annotated with its height h and label l values, and empty cells are left blank. Legend: h = height, l = label.]
Figure 3: Voxel labeling in the lowermost heightmap. Each square represents a voxel, which has height and label attributes. (a) Initial state of the lowermost heightmap before executing voxel labeling. (b) Execute VerticalLabeling for the 0th and 1st columns independently. (c) Execute HorizontalLabeling for the two columns. (d) After executing VerticalLabeling for the 2nd column, execute (c) for the 1st and 2nd columns.
Figure 4: Neighboring voxels to which each voxel needs access for voxel labeling. (a) Neighboring voxels in vertical labeling. (b) Neighboring voxels in horizontal labeling.
compare height differences in vertical labeling. Following this, voxel groups in both columns are integrated by horizontal labeling. When the height difference between each voxel in the jth column and a neighboring voxel in the (j + 1)th column is below ±τ, horizontal labeling integrates both voxel groups. Figure 4(b) shows the neighboring voxels.
When both columns have been integrated, vertical labeling is executed on the voxels in the (j + 2)th column, and horizontal labeling on the (j + 1)th and (j + 2)th columns. This process is repeated until voxel labeling has been executed for all columns in the lowermost heightmap. The voxel group containing the highest number of voxels is determined to be the final ground voxel group.
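The column-wise labeling described above can be sketched as follows. This is a minimal illustration, assuming the lowermost heightmap is stored as a list of columns of cell heights (None for empty voxels) and that merged labels are tracked with a union-find structure; the function name, data layout, and threshold value are our assumptions, not the paper's implementation, and the paper's HorizontalLabeling may access additional neighbors per Figure 4(b), whereas this sketch only compares same-row cells in adjacent columns.

```python
from collections import defaultdict

def label_heightmap(columns, tau=0.3):
    """Single-pass voxel labeling over a lowermost heightmap.

    `columns` is a list of columns; each column is a list of cell heights
    (None for empty cells). Returns (labels, ground): a parallel structure
    of root labels (None for empty cells), merged so that neighboring cells
    whose height difference is within +/-tau share one label, and the label
    of the largest voxel group, taken as the ground.
    """
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    labels = [[None] * len(col) for col in columns]
    next_label = 0

    for j, col in enumerate(columns):
        # VerticalLabeling: link neighboring cells inside column j.
        for i, h in enumerate(col):
            if h is None:
                continue
            lab = next_label
            next_label += 1
            parent[lab] = lab
            labels[j][i] = lab
            if i > 0 and col[i - 1] is not None and abs(h - col[i - 1]) <= tau:
                union(lab, labels[j][i - 1])
        # HorizontalLabeling: link column j with the already-labeled column j-1.
        if j > 0:
            prev = columns[j - 1]
            for i, h in enumerate(col):
                if h is None or i >= len(prev) or prev[i] is None:
                    continue
                if abs(h - prev[i]) <= tau:
                    union(labels[j][i], labels[j - 1][i])

    # Resolve every cell to its root label and pick the largest group.
    counts = defaultdict(int)
    for j, col in enumerate(labels):
        for i, lab in enumerate(col):
            if lab is not None:
                labels[j][i] = find(lab)
                counts[labels[j][i]] += 1
    ground = max(counts, key=counts.get)
    return labels, ground
```

Because each column only looks back at the previous column, a new column arriving from the sensor can be labeled immediately, which matches the paper's goal of minimizing accesses to neighboring voxels.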
6. Experiments
We performed an experiment to verify the efficiency of our ground segmentation framework. For the experiment, 3D point clouds were acquired from actual mountainous roads approximately 35 km in length, which consisted of flatlands, slopes, trees, and buildings. The experiment platform was a vehicle on which a LiDAR was mounted. Using this sensor, the vehicle acquired the dataset at an average velocity of about 20 km/h. The LiDAR model used for the experiment was a Velodyne HDL-32E. To increase the density of the 3D point clouds, we only used data within about 30 m of the LiDAR. The PC used for ground segmentation had an Intel i7-870 CPU and 8 GB of DDR3-10600 RAM. The algorithm was tested on the acquired dataset.
Local 3D point clouds from the LiDAR were accumulated in global coordinates using the vehicle's movement information. Figure 5 illustrates the slope of the geographical area used in the experiment.
Figure 8 visualizes the ground segmentation results. The results are visualized with textured mesh and colored points to represent ground and nonground, respectively. The color of each point was assigned according to the height of the point.
To validate the performance of the proposed framework, we compared it with a threshold-based segmentation method, which uses a single height value to classify voxels into ground and nonground voxels. Figures 8(a), 8(c), 8(e), 8(g), and 8(i) are the results of the threshold-based method, and the results of the ground segmentation framework proposed in this paper are illustrated in Figures 8(b), 8(d), 8(f), 8(h), and 8(j). There is no big difference between Figures 8(a) and 8(b), because these parts of the scene were captured on a flat road. In Figure 8(c), a slope on the right side was misclassified as nonground, but in Figure 8(d) the slope was correctly classified as ground. Figure 8(e) illustrates that some parts of trees are rendered as textured mesh because they were misclassified as ground, whereas in Figure 8(f) those parts of the trees were rendered as colored points after being classified as nonground. Voxels on an uphill road were misclassified as nonground in Figures 8(g) and 8(i) but were classified properly as ground in Figures 8(h) and 8(j).

[Figure 5: line graph of height value (cm), ranging from −450 to 50, against packet count up to about 29,547.]
Figure 5: Slope of the geography used in the experiment.

[Figure 6: line graph of elapsed time (ms), ranging from 0 to 35, against frame count up to about 4,072.]
Figure 6: Elapsed time performing segmentation.
The LiDAR used for the experiment typically scans the surrounding environment at a rate of about 10 Hz. To verify ground segmentation in real time, we executed ground segmentation at about 20 Hz, twice the rotation speed of the LiDAR. Figure 6 plots the time spent on ground segmentation per frame, and Table 1 summarizes the elapsed time per frame. This verifies that the proposed framework is fast enough to segment the ground in real time, as the average elapsed time is well below 50 ms, the maximum allowed for performing segmentation at 20 Hz.
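The real-time constraint above reduces to a simple per-frame budget check: a 20 Hz processing rate allows 1000/20 = 50 ms per frame. A small sketch (the helper name is ours, and the measured values are read from Table 1 with decimal points restored):

```python
def fits_realtime(elapsed_ms, rate_hz=20.0):
    """True if a per-frame elapsed time meets the budget implied by the
    target processing rate: 1000 / rate_hz milliseconds per frame."""
    return elapsed_ms <= 1000.0 / rate_hz

# Both the average (19.31 ms) and the maximum (32.07 ms) elapsed times
# fit inside the 50 ms budget at 20 Hz.
assert fits_realtime(19.31) and fits_realtime(32.07)
```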
Figure 7 plots the ground data size after ground segmentation by frame, and Table 2 shows the ground data statistics.

[Figure 7: line graph of ground data size (bytes), ranging from 0 to about 25,000, against frame count up to about 4,054.]
Figure 7: Ground data size after segmentation.

Table 1: Stats of elapsed time by frame.
Average elapsed time: 19.31 ms
Maximum elapsed time: 32.07 ms
Standard deviation: 3.91 ms

Table 2: Stats of ground data size by frame.
Average data size: 7,467.57 bytes
Maximum data size: 19,716.00 bytes
Standard deviation: 3,591.44 bytes

Because the segmentation was performed at 20 Hz in this experiment, the average ground data size in bytes per frame can be converted to Mbps using this equation:
Mbps = (bytes × 8 × 20) / 1024²   (5)
With this equation, the average rate is calculated as 1.14 Mbps. Because the minimum data rate per stream of the 802.11a network standard is 6 Mbps, this data size is acceptable for a typical wireless network.
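Equation (5) can be applied directly. A short sketch (the function name is ours) reproducing the reported 1.14 Mbps from the average per-frame ground data size of 7,467.57 bytes:

```python
def frame_bytes_to_mbps(avg_bytes, frames_per_second=20):
    """Convert an average per-frame data size (bytes) to Mbps,
    following equation (5): Mbps = (bytes * 8 * fps) / 1024**2."""
    return avg_bytes * 8 * frames_per_second / (1024 ** 2)

print(round(frame_bytes_to_mbps(7467.57), 2))  # → 1.14
```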
7. Conclusion
In this paper, we proposed a ground segmentation framework for real-time route planning through a wireless network for an autonomous vehicle in a ubiquitous road environment. The framework involves the elimination of overlapping data, the reduction of data dimensions, and ground segmentation. To this end, the implementation of the dynamic flagmap, the lowermost heightmap, and technologies including voxel labeling were described in detail. A voxel labeling algorithm was developed to minimize access to neighboring voxels for real-time ground segmentation. Furthermore, we experimentally verified the efficiency of our real-time ground segmentation system, even in a geographical environment with numerous hills. The segmented ground data can be shared through a wireless network in real time using binary compression.
Figure 8: Ground segmentation results (the different parts are indicated by arrows). (a) Threshold-based method result on a flat road. (b) Proposed method result on a flat road. (c) Threshold-based method result on a flat road with a slope. (d) Proposed method result on a flat road with a slope. (e) Threshold-based method result on a downhill road. (f) Proposed method result on a downhill road. (g) Threshold-based method result on an uphill road. (h) Proposed method result on an uphill road. (i) Threshold-based method result on another uphill road. (j) Proposed method result on another uphill road.
Our future research will deal with technology for eliminating small objects along a road, such as bushes, from the ground data. To this end, we will develop an algorithm that considers the density of each voxel group in real time.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the Agency for Defense Development, Republic of Korea.
References
[1] Z. C. Taysi and A. G. Yavuz, "ETSI compliant geo-networking protocol layer implementation for IVC simulations," Human-Centric Computing and Information Sciences, vol. 3, no. 4, pp. 1–12, 2013.
[2] M. Yoon, Y. Kim, and J. Chang, "An energy-efficient routing protocol using message success rate in wireless sensor networks," Journal of Convergence, vol. 4, no. 1, pp. 15–22, 2013.
[3] W.-H. Chung, S. Kumar, S. Paluri, S. Nagaraj, A. Annamalai Jr., and J. D. Matyjas, "A cross-layer unequal error protection scheme for prioritized H.264 video using RCPC codes and hierarchical QAM," Journal of Information Processing Systems, vol. 9, no. 1, pp. 53–68, 2013.
[4] L. Emmi, M. Gonzalez-de-Soto, G. Pajares, and P. Gonzalez-de-Santos, "New trends in robotics for agriculture: integration and assessment of a real fleet of robots," The Scientific World Journal, vol. 2014, Article ID 404059, 21 pages, 2014.
[5] Y. Xu, X. Chen, and Q. Li, "Adaptive iterated extended Kalman filter and its application to autonomous integrated navigation for indoor robot," The Scientific World Journal, vol. 2014, Article ID 138548, 7 pages, 2014.
[6] F. Moosmann, O. Pink, and C. Stiller, "Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 215–220, June 2009.
[7] M. Himmelsbach, F. V. Hundelshausen, and H.-J. Wuensche, "Fast segmentation of 3D point clouds for ground vehicles," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV '10), pp. 560–565, June 2010.
[8] B. Douillard, J. Underwood, N. Kuntz, et al., "On the segmentation of 3D lidar point clouds," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 2798–2805, May 2011.
[9] J. Hernandez and B. Marcotegui, "Point cloud segmentation towards urban ground modeling," in Proceedings of the Joint Urban Remote Sensing Event, pp. 1–5, May 2009.
[10] W. Song, K. Cho, K. Um, C. S. Won, and S. Sim, "Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation," Sensors, vol. 12, no. 12, 2012.
[11] T. Chen, B. Dai, R. Wang, and L. Daxue, "Gaussian-process-based real-time ground segmentation for autonomous land vehicles," Journal of Intelligent & Robotic Systems, 2013.
[12] W. Song, S. Cho, K. Cho, K. Um, C. S. Won, and S. Sim, "Traversable ground surface segmentation and modeling for real-time mobile mapping," International Journal of Distributed Sensor Networks, vol. 2014, Article ID 795851, 8 pages, 2014.
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World Journal 7
minus450
minus400
minus350
minus300
minus250
minus200
minus150
minus100
minus50
0
50
1
1344
2687
4030
5373
6716
8059
9402
10745
12088
13431
14774
16117
17460
18803
20146
21489
22832
24175
25518
26861
28204
29547
Hei
ght v
alue
(cm
)
Packet count
Figure 5 Slope of the geography used in the experiment
0
5
10
15
20
25
30
35
1
178
355
532
709
886
1063
1240
1417
1594
1771
1948
2125
2302
2479
2656
2833
3010
3187
3364
3541
3718
3895
4072
Elap
sed
time (
ms)
Frame count
Figure 6 Elapsed time performing segmentation
this paper are illustrated in Figures 8(b) 8(d) 8(f) 8(h)and 8(j) There is no big difference between Figures 8(a) and8(b) because the parts of the scene were captured at a flatroad In Figure 8(c) a slope at right side was misclassifiedas nonground But in Figure 8(d) the slope was classifiedas ground correctly Figure 8(e) illustrates that some partsof trees are rendered as textured mesh because they weremisclassified as ground Whereas in Figure 8(f) the parts oftrees were rendered as colored points by classifying them asnonground Voxels in an uphill road were misclassified asnonground in Figures 8(g) and 8(i) But they were classifiedproperly as ground in Figures 8(h) and 8(j)
The LiDAR used for the experiment typically scans thesurrounding environment at a rate of about 10Hz To verifyground segmentation in real time we executed ground seg-mentation at about 20Hz twice as quick as the rotation speedof the LiDAR Figure 6 indicates the time spent on groundsegmentation per ground segmentation frame in the graphTable 1 shows the time required for ground segmentationper frame It verifies that the proposed framework is enoughto segment ground at real time as the average elapsed timeperforming segmentation is less than 50ms 50ms is themaximum limitation to perform segmentation at 20Hz
Figure 7 represents the ground data size following groundsegmentation by frame in the graph Table 2 shows the ground
0
5000
10000
15000
20000
25000
1
194
387
580
773
966
1159
1352
1545
1738
1931
2124
2317
2510
2703
2896
3089
3282
3475
3668
3861
4054
Gro
und
data
size
(byt
es)
Frame count
Figure 7 Ground data size after segmentation
Table 1 Stats of elapsed time by frame
Average elapsed time 1931msMaximum elapsed time 3207msStandard deviation 391ms
Table 2 Stats of ground data size by frame
Average data size 746757 bytesMaximum data size 1971600 bytesStandard deviation 359144 bytes
data statistics Because the segmentation was performed at20Hz in this experiment average ground data size (bytes) byframe can be converted to Mbps unit using this equation
Mbps =(bytes times 8 times 20)10242 (5)
With this equation average Mbps is calculated as 114MbpsBecause minimum data rate per stream of 80211a networkstandard is 6Mbps the data size is acceptable to typicalwireless network
7 Conclusion
In this paper we proposed a ground segmentation frameworkfor real-time route planning through a wireless network foran autonomous vehicle in a ubiquitous road environmentThe framework involves elimination of overlapping data thereduction of data dimensions and ground segmentationTo this end the implementation of the dynamic flagmapthe lowermost heightmap and technologies including voxellabeling were described in detail A voxel labeling algorithmwas developed to minimize access to neighboring voxelsfor real-time ground segmentation Furthermore we exper-imentally verified the efficiency of our real-time groundsegmentation system even in a geographical environmentwith numerous hills The segmented ground data size canbe shared through a wireless network in real time by binarycompression
8 The Scientific World Journal
(a) (b)
(c) (d)
(e) (f)
(g) (h)
(i) (j)
Figure 8 Ground segmentation results (the different parts are indicated by arrows) (a) Threshold based method result at a flat road (b)Proposedmethod result at a flat road (c)Threshold basedmethod result at flat road with a slope (d) Proposedmethod result at flat road witha slope (e) Threshold based method result at a downhill road (f) Proposed method result at a downhill road (g) Threshold based methodresult at an uphill road (h) Proposed method result at an uphill road (i) Threshold based method result at another uphill road (j) Proposedmethod result at another uphill road
The Scientific World Journal 9
Our future research will deal with technology for elimi-nating small objects along a road such as bushes from theground data To this end we will develop an algorithm thatconsiders the density of the voxel group in real time
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
This work was supported by the Agency for Defense Devel-opment Republic of Korea
References
[1] Z C Taysi and A G Yavuz ldquoETSI compliant geo-networkingprotocol layer implementation for IVC simulationsrdquo Human-Centric Computing and Information Sciences vol 3 no 4 pp1ndash12 2013
[2] MYoon YKim and J Chang ldquoAn energy-efficient routing pro-tocol using message success rate in wireless sensor networksrdquoJournal of Convergence vol 4 no 1 pp 15ndash22 2013
[3] W-H Chung S Kumar S Paluri S Nagaraj A AnnamalaiJr and J D Matyjas ldquoA cross-layer unequal error protectionscheme for prioritized H264 video using RCPC codes andhierarchical QAMrdquo Journal of Information Processing Systemsvol 9 no 1 pp 53ndash68 2013
[4] L EmmiM Gonzalez-de-Soto G Pajares and P Gonzalez-de-Santos ldquoNew trends in robotics for agriculture integration andassessment of a real fleet of robotsrdquoThe ScientificWorld Journalvol 2014 Article ID 404059 21 pages 2014
[5] Y Xu X Chen and Q Li ldquoAdaptive iterated extended kalmanfilter and its application to autonomous integrated navigationfor indoor robotrdquoThe ScientificWorld Journal vol 2014 ArticleID 138548 7 pages 2014
[6] F Moosmann O Pink and C Stiller ldquoSegmentation of 3D lidardata in non-flat urban environments using a local convexitycriterionrdquo in Proceedings of the IEEE Intelligent Vehicles Sympo-sium pp 215ndash220 June 2009
[7] M Himmelsbach F V Hundelshausen and H-J WuenscheldquoFast segmentation of 3D point clouds for ground vehiclesrdquo inProceedings of the IEEE Intelligent Vehicles Symposium (IV rsquo10)pp 560ndash565 June 2010
[8] B Douillard J Underwood N Kuntz et al ldquoOn the segmen-tation of 3D lidar point cloudsrdquo in Proceedings of the IEEEInternational Conference on Robotics and Automation (ICRArsquo11) pp 2798ndash2805 May 2011
[9] J Hernandez and B Marcotegui ldquoPoint cloud segmentationtowards urban ground modelingrdquo in Proceedings of the JointUrban Remote Sensing Event pp 1ndash5 May 2009
[10] W Song K Cho K Um C S Won and S Sim ldquoIntuitiveterrain reconstruction using height observation-based groundsegmentation and 3D object boundary estimationrdquo Sensors vol12 no 12 2012
[11] T Chen B Dai R Wang and L Daxue ldquoGaussian-process-based real-time ground segmentation for autonomous landvehiclesrdquo Journal of Intelligent amp Robotic Systems 2013
[12] W Song S Cho K Cho K Um C S Won and S SimldquoTraversable ground surface segmentation and modeling forreal-time mobile mappingrdquo International Journal of DistributedSensor Networks vol 2014 Article ID 795851 8 pages 2014
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
8 The Scientific World Journal
(a) (b)
(c) (d)
(e) (f)
(g) (h)
(i) (j)
Figure 8 Ground segmentation results (the different parts are indicated by arrows) (a) Threshold based method result at a flat road (b)Proposedmethod result at a flat road (c)Threshold basedmethod result at flat road with a slope (d) Proposedmethod result at flat road witha slope (e) Threshold based method result at a downhill road (f) Proposed method result at a downhill road (g) Threshold based methodresult at an uphill road (h) Proposed method result at an uphill road (i) Threshold based method result at another uphill road (j) Proposedmethod result at another uphill road
The Scientific World Journal 9
Our future research will deal with technology for elimi-nating small objects along a road such as bushes from theground data To this end we will develop an algorithm thatconsiders the density of the voxel group in real time
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
This work was supported by the Agency for Defense Devel-opment Republic of Korea
References
[1] Z C Taysi and A G Yavuz ldquoETSI compliant geo-networkingprotocol layer implementation for IVC simulationsrdquo Human-Centric Computing and Information Sciences vol 3 no 4 pp1ndash12 2013
[2] MYoon YKim and J Chang ldquoAn energy-efficient routing pro-tocol using message success rate in wireless sensor networksrdquoJournal of Convergence vol 4 no 1 pp 15ndash22 2013
[3] W-H Chung S Kumar S Paluri S Nagaraj A AnnamalaiJr and J D Matyjas ldquoA cross-layer unequal error protectionscheme for prioritized H264 video using RCPC codes andhierarchical QAMrdquo Journal of Information Processing Systemsvol 9 no 1 pp 53ndash68 2013
[4] L EmmiM Gonzalez-de-Soto G Pajares and P Gonzalez-de-Santos ldquoNew trends in robotics for agriculture integration andassessment of a real fleet of robotsrdquoThe ScientificWorld Journalvol 2014 Article ID 404059 21 pages 2014
[5] Y Xu X Chen and Q Li ldquoAdaptive iterated extended kalmanfilter and its application to autonomous integrated navigationfor indoor robotrdquoThe ScientificWorld Journal vol 2014 ArticleID 138548 7 pages 2014
[6] F Moosmann O Pink and C Stiller ldquoSegmentation of 3D lidardata in non-flat urban environments using a local convexitycriterionrdquo in Proceedings of the IEEE Intelligent Vehicles Sympo-sium pp 215ndash220 June 2009
[7] M Himmelsbach F V Hundelshausen and H-J WuenscheldquoFast segmentation of 3D point clouds for ground vehiclesrdquo inProceedings of the IEEE Intelligent Vehicles Symposium (IV rsquo10)pp 560ndash565 June 2010
[8] B Douillard J Underwood N Kuntz et al ldquoOn the segmen-tation of 3D lidar point cloudsrdquo in Proceedings of the IEEEInternational Conference on Robotics and Automation (ICRArsquo11) pp 2798ndash2805 May 2011
[9] J Hernandez and B Marcotegui ldquoPoint cloud segmentationtowards urban ground modelingrdquo in Proceedings of the JointUrban Remote Sensing Event pp 1ndash5 May 2009
[10] W Song K Cho K Um C S Won and S Sim ldquoIntuitiveterrain reconstruction using height observation-based groundsegmentation and 3D object boundary estimationrdquo Sensors vol12 no 12 2012
[11] T Chen B Dai R Wang and L Daxue ldquoGaussian-process-based real-time ground segmentation for autonomous landvehiclesrdquo Journal of Intelligent amp Robotic Systems 2013
[12] W Song S Cho K Cho K Um C S Won and S SimldquoTraversable ground surface segmentation and modeling forreal-time mobile mappingrdquo International Journal of DistributedSensor Networks vol 2014 Article ID 795851 8 pages 2014
Submit your manuscripts athttpwwwhindawicom
Computer Games Technology
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Distributed Sensor Networks
International Journal of
Advances in
FuzzySystems
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014
International Journal of
ReconfigurableComputing
Hindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Applied Computational Intelligence and Soft Computing
thinspAdvancesthinspinthinsp
Artificial Intelligence
HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014
Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Journal of
Computer Networks and Communications
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporation
httpwwwhindawicom Volume 2014
Advances in
Multimedia
International Journal of
Biomedical Imaging
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
ArtificialNeural Systems
Advances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
RoboticsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational Intelligence and Neuroscience
Industrial EngineeringJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Human-ComputerInteraction
Advances in
Computer EngineeringAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World Journal 9
Our future research will deal with technology for elimi-nating small objects along a road such as bushes from theground data To this end we will develop an algorithm thatconsiders the density of the voxel group in real time
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment

This work was supported by the Agency for Defense Development, Republic of Korea.