

Project Report

Arvind [email protected]

Steven [email protected]

Dustin [email protected]

May 13, 2003

Abstract

We present a demonstration of an autonomous mobile robot creating a map of an indoor environment. Odometry readings are fed into a Kalman filter to propagate the robot's pose. Range finder readings are used to detect the surrounding walls and extract features.

1 Introduction

Knowing where you are located in your environment is important for many of the tasks people wish to perform. Thus, the task of concurrent mapping and localization (CML) has been considered one of the fundamental problems facing autonomous mobile robots.

Robots equipped with odometry can estimate their motion; however, these estimates are only approximate, since error can be introduced from many sources, such as wheel slip. If left unchecked, the errors from odometry will grow unbounded and leave the robot thinking that it is somewhere it is not. This problem of measurement noise affects all of the sensors we use. Different sensors have different sources of noise, and thus it is possible to take advantage of multiple sensors to obtain more accurate information about our surroundings.

The next question, then, is how to combine this information into one model. Because the actual value of this error is unknown, probabilistic representations of the robot's pose and of the world are popular. That is, because the error coming from the sensors is integrated over time, the robot could be at many different locations; while we do not actually know which pose is the correct one, some are more likely than others. Several solutions to the CML problem have been presented in this probabilistic manner. The method we chose for this project is an extended Kalman filter (EKF). Kalman filtering offers a technique for integrating information from many sensors, taking into consideration the accuracy of each reading. Inherent in this approach is keeping track of the uncertainties in the robot's beliefs. Besides offering crucial information to the filter, uncertainties can provide


us with the information to easily visualize how well the robot knows where it is located.

The remainder of this paper is organized as follows. Section 2 describes the hardware used to implement the project. Section 3 discusses the path-planning algorithm used to move the robot around its environment. Section 4 describes the method used to extract features from the environment. Section 5 discusses our results. Finally, Section 6 gives some concluding remarks and discusses potential future work.

2 Hardware

The robot we use is a Pioneer I. The Pioneer includes wheel encoders operating at 100 ticks per revolution, which provide odometric information. The Pioneer also has sonar range finders; however, we added a more accurate Sick laser range finder to provide range data. The Sick provides ranges over a 180° semicircle in front of the robot, with an angular resolution of 0.5°. The Sick's noise can be approximated by a zero-mean Gaussian with a standard deviation of 5.3 mm [1]. Processing was done on an attached laptop with a 1600 MHz processor.

3 Path Planning

Path planning is the task of determining where the robot should move next. Our path planning is fairly simple. We make the simplifying assumption that a correct path can be followed by always taking a right turn when the environment permits. Our path planning can be broken down into two parts: open-area seeking and obstacle watching.

In open-area seeking, the robot scans the area to its right and straight in front of it with the laser. It then chooses to travel in the direction that returned the longest range. Range values are capped at 8 meters. When two ranges of equal value are returned, the robot prefers the rightmost direction. In this manner the robot tends to head down the hallway, take right turns, and not head directly toward obstacles. Figure 1 shows the robot following this action. Here the robot has a couple of good candidate orientations that will lead it to an open area, so it picks the rightmost orientation, as indicated by the green line.
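As a rough sketch (with our own function and parameter names, not the project's actual code), the open-area-seeking rule reduces to a capped argmax with a rightmost tie-break:

```python
import numpy as np

def pick_heading(ranges_mm, angles_deg, cap_mm=8000.0):
    """Choose the travel direction with the longest capped laser range.

    `ranges_mm` holds the returns for the sector from the robot's right to
    straight ahead, ordered rightmost-first; `angles_deg` holds the bearing
    of each ray. Ordering and names are our own illustration.
    """
    capped = np.minimum(np.asarray(ranges_mm, dtype=float), cap_mm)
    # np.argmax returns the FIRST maximum, so with rightmost-first ordering
    # a tie between equal ranges resolves to the rightmost direction.
    return angles_deg[int(np.argmax(capped))]
```

Capping at 8 m before the comparison means that any two corridors longer than the cap tie, and the tie-break then sends the robot to the right, which is exactly the always-turn-right assumption.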

The obstacle-watcher action is necessary to avoid hitting the edges of objects. This is necessary because often the longest reading found will be tangent to the end of an object; in trying to pursue such a path, only half of the robot will clear the obstacle. The second action watches for nearby obstacles and instructs the robot to turn away from them. Figure 2 shows the robot as it might approach a wall using the open-area-seeking action. The open-area seeker would prefer to follow the yellow orientation, but the obstacle watcher notices the corner in front of the robot and turns the robot toward the green orientation so it can clear the corner.
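A minimal sketch of the obstacle-watcher override might look as follows; the clearance and turn-step values here are illustrative assumptions, not thresholds taken from the report:

```python
def obstacle_watch(ranges_mm, angles_deg, desired_deg,
                   clear_mm=600.0, turn_step_deg=15.0):
    """Override the open-area heading when an obstacle edge is too close.

    `angles_deg` are bearings relative to the current heading, negative to
    the right. Both thresholds are illustrative, not from the report.
    """
    closest_range, closest_angle = min(zip(ranges_mm, angles_deg))
    if closest_range >= clear_mm:
        return desired_deg          # nothing nearby: keep the chosen heading
    # Obstacle to the right of the heading: steer left, and vice versa,
    # so the whole robot body clears the corner.
    return desired_deg + (turn_step_deg if closest_angle < 0 else -turn_step_deg)
```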

Once the robot has travelled 5 meters from the origin, it begins to test its position to determine if it is close to the origin, using a simple Euclidean distance measurement.


Once it returns close to the origin, the robot knows it has completed a map of its environment and halts. The distance thresholds should increase in proportion to the size of the environment the robot operates in; one meter was used as the distance-from-origin tolerance telling the robot to stop.
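In code, the halt test is a two-state check: arm once the robot has moved far enough away, then halt when the Euclidean distance to the origin falls under the tolerance. A minimal sketch, with our own names:

```python
import math

class OriginDetector:
    """Arm once the robot is 5 m from the origin; halt when it comes
    back within the 1 m tolerance described above."""

    def __init__(self, leave_mm=5000.0, tol_mm=1000.0):
        self.leave_mm, self.tol_mm = leave_mm, tol_mm
        self.armed = False

    def should_halt(self, x_mm, y_mm):
        d = math.hypot(x_mm, y_mm)   # Euclidean distance to the origin
        if d > self.leave_mm:
            self.armed = True        # the robot has left the starting area
        return self.armed and d < self.tol_mm
```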

Figure 1: The effect of the open-area-seeking action: the robot scans and takes the longest-range, rightmost path, indicated in green.

4 Feature Detection

Obtaining useful information out of the Sick is an important, non-trivial task. Some techniques use all of the raw data from the range finder [3]. Another popular technique is to look for features, or landmarks, that distinguish a particular location from other locations. In our project, we use corners as our features.

The laser range scan data at each time step is converted into the robot's local coordinates and analyzed to find features. New scan data is analyzed every time the robot moves 0.5 m or turns 10°. Features that are seen in consecutive scans are added to a global list of features.
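Converting a scan into local coordinates is a straightforward polar-to-Cartesian step. A sketch, assuming rays at 0.5° steps across the 180° semicircle with 0° at the robot's right (the axis convention is our assumption, not stated in the report):

```python
import numpy as np

def scan_to_local_xy(ranges_mm):
    """Convert one Sick scan into robot-local (x, y) points, in mm."""
    angles = np.deg2rad(np.arange(len(ranges_mm)) * 0.5)  # 0.5 deg resolution
    r = np.asarray(ranges_mm, dtype=float)
    return np.column_stack((r * np.cos(angles), r * np.sin(angles)))
```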

The points on the local map are traversed from right to left. An initial set of points is taken, and the angle of the line joining the points with respect to the Y-axis is determined and taken as a reference angle.

Once an initial reference angle is determined, the program scans through the remaining points, taking four points at a time. The angle between the line joining the four points and the reference line is calculated and checked to see if it is within a threshold of 45° (fig. 3a). Another check performed is the distance between the last of the four points and the last of the previous four points.

Lines that fall outside the 45° threshold (fig. 3b), or that are at least 300 mm apart from the previous line, are considered to be new lines, and a line break is added.


Figure 2: The effect of the obstacle-watcher action: the open-area seeker wants to follow the yellow path, but the red ranges indicate that the robot is too close, so it goes in the green direction.

The four points are then analyzed one at a time to determine the exact point of crossover from the previous line. The points of crossover are considered to be "end-points". When an endpoint is found, a new reference angle is calculated with the next set of points, if they fall within the distance threshold.

Endpoints that lie on lines that exceed the angle threshold are added to the list of potential features. When considering endpoints on lines that cross the distance threshold, there are two endpoints. If the two lines lie on the same axis and have the same reference angle, both points are added to the list of features (fig. 3c); this is the case when the scan is taken near a doorway or a small opening. If the endpoints lie on lines that are not on the same axis, the point that is closer to the robot is considered a feature and the other point remains just an endpoint (fig. 3d); this is the case when the robot approaches an inside corner with another wall near it.

The first and last points of the scan are also added to the list of endpoints. At the end of a scan analysis, a list of endpoints and a list of potential features have been collected. The map achieved at the end of the scan consists of lines connecting the endpoints. A feature is not finalized until it is found again in the next scan; if it is not found in the next scan, it is removed from the list of potential features.
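The core of the segmentation, stripped of the endpoint refinement and the doorway/inside-corner cases, is a right-to-left walk that emits a line break whenever the angle or gap check fails. The sketch below uses the 45° and 300 mm thresholds from the text; everything else (names, grouping details) is our own simplification:

```python
import math

ANGLE_THRESH = math.radians(45)   # angle threshold from the text
DIST_THRESH = 300.0               # mm, line-gap threshold from the text

def angle_from_y(p, q):
    """Angle of the line through p and q, measured from the Y-axis."""
    return math.atan2(q[0] - p[0], q[1] - p[1])

def line_breaks(points, group=4):
    """Walk the scan right-to-left in groups of `group` points and return
    the indices where a new line starts."""
    breaks = []
    ref = angle_from_y(points[0], points[group - 1])   # initial reference
    prev_last = points[group - 1]
    for i in range(group, len(points) - group + 1, group):
        chunk = points[i:i + group]
        ang = angle_from_y(chunk[0], chunk[-1])
        gap = math.dist(chunk[-1], prev_last)  # vs. previous group's last point
        if abs(ang - ref) > ANGLE_THRESH or gap > DIST_THRESH:
            breaks.append(i)      # a new line starts at this group
            ref = ang             # the new line supplies the next reference angle
        prev_last = chunk[-1]
    return breaks
```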

5 Results

Partial results were obtained in simulation and in the real world. In these tests, a standard deviation of 2% of the wheel velocity was used to represent the zero-mean white Gaussian noise of the odometry. Figure 4 shows the results of path planning, feature detection, and odometric propagation in a simulated world.


Figure 3: (a) Points that fall within the threshold. (b) The second set is outside the angle threshold. (c) Both lines lie within the angle threshold and on the same axis, but fall outside the distance threshold; hence two landmarks (open doorways). (d) Not within the angle and distance thresholds, so the closest point is taken as the landmark.


Figure 5 shows the same statistics inside an office building. The green trail of crosses represents the estimated trajectory of the robot. As can be seen, the robot tended to drift back and forth in the hall. This angular velocity caused more error in the orientation of the robot than if it had gone straight down the hall. The red circles represent features detected in the environment. Feature detection worked fairly well; however, one corner was missed and one corner was detected twice. The blue regions are where the laser range finder detected a wall or obstacle. The ellipses represent a region of 99% certainty of the robot's position. Notice how the ellipses start out small; as the robot moves, the odometry-dependent pose estimate gets more and more uncertain, causing the ellipses to grow. By the time the robot has returned to the origin, the area of the ellipse has grown to 50 m² for the simulation and 50 m² for the real-world result. The simulated world was a 10 m² area, and the real world was about a 225 m² area.

Our origin-detection scheme worked well in the simulated environment; the robot was able to stop within a couple of meters of its starting location. However, in the real-world test, the robot did not stop. The problem lies with the thresholds hardcoded into the control software: because the sizes of the real-world test and the simulation test were so different, using the same thresholds was not appropriate.

The entire sensing, feature detection, and propagation step took approximately XXX seconds.

6 Conclusions and Future Work

Our robot created maps good enough for human use. The largest problem appears to be with orientation estimation, as the walls fail to line up as they should toward the end. While pure odometric results may be sufficient for some tasks in the tested environment, as the size of the environment grows, mapping based strictly on odometry will fail. Alternatives to localization based solely on propagation of odometry include using the detected features to update the position estimate of the robot. We feel we came close to getting updating to work; however, we were not able to unravel some numerical issues involved in the process.

In the current implementation, we use the Euclidean distance between the estimated position of the robot and the origin to determine when we have returned to the starting location. We were in the process of implementing a Mahalanobis distance to match features. Mahalanobis distances could be used to detect when the robot returns to the starting landmark, perhaps with more success. Another possibility would be to make the distance threshold used in the Euclidean test a function of the total distance the robot has traveled, rather than a hardcoded number.


Figure 4: Results from simulation. (Plot legend: laser points, landmarks, robot path, uncertainty ellipse.)


Figure 5: Results from actual tests.


References

[1] C. Ye and J. Borenstein, "Characterization of a 2-D laser scanner for mobile robot obstacle negotiation," in Proc. IEEE Int. Conf. on Robotics and Automation, Washington, D.C., May 11-15, 2002, pp. 2512-2518.

[2] R. Smith, M. Self, and P. Cheeseman, "Estimating uncertain spatial relationships in robotics," in Autonomous Robot Vehicles, I. J. Cox and G. T. Wilfong, Eds., pp. 167-193, Springer-Verlag, 1990.

[3] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, "Robust Monte Carlo localization for mobile robots," Artificial Intelligence, 2001.

A Kalman Filter

The Kalman filter is a general filtering tool that can be used in a large number of problems. Much attention has been paid to using the Kalman filter on CML problems [2], and there have been many variants of the Kalman filter in this area. For linear systems a standard Kalman filter is sufficient; in nonlinear systems, such as our case, the extended Kalman filter is used. The extended Kalman filter approximates the actual system using a Taylor series approximation. We use a first-order approximation, although it is possible, but more computationally expensive, to use higher-order approximations. In addition, there are different choices for what to use to model the system. For instance, if the kinematics and dynamics of the system are known, they can be used. We use the odometry as our model, a technique called an indirect Kalman filter.

A.1 Extended Kalman Filter

Let the actual state of the system, $\vec{X}$, contain the 2-D position of the robot, $x_r$ and $y_r$, the orientation of the robot, $\theta_r$, and the 2-D positions of the $i$-th landmarks, $x_{l_i}$ and $y_{l_i}$:

\[
\vec{X} = \begin{pmatrix} x_r & y_r & \theta_r & x_{l_1} & y_{l_1} & \dots & x_{l_i} & y_{l_i} & \dots & x_{l_n} & y_{l_n} \end{pmatrix}^T.
\]

Our estimated state of the system, $\hat{X}$, is then given by

\[
\hat{X} = \begin{pmatrix} \hat{x}_r & \hat{y}_r & \hat{\theta}_r & \hat{x}_{l_1} & \hat{y}_{l_1} & \dots & \hat{x}_{l_i} & \hat{y}_{l_i} & \dots & \hat{x}_{l_n} & \hat{y}_{l_n} \end{pmatrix}^T.
\]

This estimate has an uncertainty, $P$, defined by the covariance of $\hat{X}$:

\[
P = \begin{pmatrix}
P_{rr} & P_{rl_1} & \dots & P_{rl_n} \\
P_{l_1 r} & P_{l_1 l_1} & \dots & P_{l_1 l_n} \\
\vdots & \vdots & \ddots & \vdots \\
P_{l_n r} & P_{l_n l_1} & \dots & P_{l_n l_n}
\end{pmatrix}. \tag{1}
\]

Here, $P_{rr}$ is a matrix of scalars, correlating the robot with itself:

\[
P_{rr} = \begin{pmatrix}
p_{x_r x_r} & p_{x_r y_r} & p_{x_r \theta_r} \\
p_{y_r x_r} & p_{y_r y_r} & p_{y_r \theta_r} \\
p_{\theta_r x_r} & p_{\theta_r y_r} & p_{\theta_r \theta_r}
\end{pmatrix}.
\]


The cross-correlation terms, $P_{l_i l_j}$, describe how related the $i$-th landmark is to the $j$-th:

\[
P_{l_i l_j} = \begin{pmatrix}
p_{x_{l_i} x_{l_j}} & p_{x_{l_i} y_{l_j}} \\
p_{y_{l_i} x_{l_j}} & p_{y_{l_i} y_{l_j}}
\end{pmatrix}.
\]

The system is subject to noise, $W$. This noise is assumed to be zero-mean white Gaussian noise with covariance $Q$ (defined in equation (16) below). Now we are interested in how $\hat{X}$ and $P$ are updated over time. The relationship between $\hat{X}$ at time $k$ and what we expect $\hat{X}$ to be at time $k+1$, given only the information up to time $k$, is given by:

\[
\hat{X}_{k+1|k} = f(\hat{X}_{k|k}, u_k, 0) \tag{2}
\]

Here, $f$ is a function that describes how we expect the world to change given state $X$, action $u$, and zero noise. The gradient of $f$ with respect to the state, call it $\Phi$, is defined as:

\[
\Phi_k = \nabla_X f(\hat{X}_{k|k}, u_k, 0) \tag{3}
\]

The gradient of $f$ with respect to the noise, call it $G$, is defined as:

\[
G_k = \nabla_W f(\hat{X}_{k|k}, u_k, 0) \tag{4}
\]

Remember that in the extended Kalman filter we estimate these derivatives with a Taylor series approximation. Next, the uncertainty, $P$, is updated by:

\[
P_{k+1|k} = \Phi_k P_{k|k} \Phi_k^T + G_k Q_k G_k^T \tag{5}
\]

where $Q_k$ is the covariance of the noise at time $k$. Given $\hat{X}_{k+1|k}$, we can produce an expected observation at time $k+1$:

\[
\hat{Z}_{k+1|k} = h(\hat{X}_{k+1|k}) \tag{6}
\]

where $h$ is a function that converts a state into what our sensors would tell us if that were the state.

We have now propagated what we expect to happen at the next time step. When we reach the next time step, we can look at what our sensors tell us about our new situation and use this knowledge to update our state and uncertainty estimates.

The difference between what we observe and what we had expected to observe is often called the residual:

\[
\tilde{Z}_{k+1} = Z_{k+1} - \hat{Z}_{k+1|k}. \tag{7}
\]

The gradient of $h$ with respect to the state describes how our observations will change when the state changes:

\[
H_{k+1} = \nabla_X h(\hat{X}_{k+1|k}). \tag{8}
\]

The covariance of the residual is given by:

\[
S_{k+1} = H_{k+1} P_{k+1|k} H_{k+1}^T + R_{k+1} \tag{9}
\]

where $R$ describes the noise of the sensor or, as we will discuss later, the noise of the landmark-detection process using the sensor.


The Kalman gain is a measure of how much this update should affect our estimates:

\[
K_{k+1} = P_{k+1|k} H_{k+1}^T S_{k+1}^{-1}. \tag{10}
\]

Notice that if we are using a poor sensor, that is, one with a lot of noise, $S^{-1}$ will be small, making the gain from this reading small. Furthermore, if we do not have a good idea of where we are, that is, if $P$ is large, then any information we can get will probably help us, so $K$ will be larger. We use the Kalman gain and the residual to update our state estimate:

\[
\hat{X}_{k+1|k+1} = \hat{X}_{k+1|k} + K_{k+1}\tilde{Z}_{k+1}. \tag{11}
\]

Finally, we update $P$ by:

\[
P_{k+1|k+1} = P_{k+1|k} - P_{k+1|k} H_{k+1}^T S_{k+1}^{-1} H_{k+1} P_{k+1|k} = P_{k+1|k} - K_{k+1} H_{k+1} P_{k+1|k}. \tag{12}
\]
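The whole cycle of equations (2)-(12) condenses to a few lines of matrix algebra. The numpy sketch below is our own illustration, with the models $f$, $h$ and their Jacobians passed in as arguments:

```python
import numpy as np

def ekf_propagate(x, P, f, Phi, G, Q):
    """Equations (2) and (5): propagate the state and its covariance."""
    x_pred = f(x)                            # eq. (2), evaluated at zero noise
    P_pred = Phi @ P @ Phi.T + G @ Q @ G.T   # eq. (5)
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H, R):
    """Equations (7)-(12): fold one measurement z into the estimate."""
    residual = z - h(x_pred)                 # eq. (7)
    S = H @ P_pred @ H.T + R                 # eq. (9)
    K = P_pred @ H.T @ np.linalg.inv(S)      # eq. (10)
    x_new = x_pred + K @ residual            # eq. (11)
    P_new = P_pred - K @ H @ P_pred          # eq. (12)
    return x_new, P_new
```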

A.2 Indirect Extended Kalman Filter

By applying the equations of the extended Kalman filter to the Pioneer I robot, we can derive the particular equations used in our project.

Our robot defines the origin of its map to be the location where it first observes a landmark. By defining the origin in this manner, the origin has zero uncertainty, and furthermore there is a landmark we can reference to bring the uncertainty of future observations of this landmark near zero. The initial positions of the other landmarks do not matter, as their uncertainty will be infinite:

\[
\hat{X}_0 = \begin{pmatrix} 0 & 0 & 0 & x_{l_1} & y_{l_1} & c & \dots & c \end{pmatrix}^T.
\]

Here $x_{l_1}$ and $y_{l_1}$ are the initial coordinates as indicated by the Sick. Initially we have no uncertainty in the robot's pose, and there is no correlation between the yet-to-be-discovered landmarks. The locations of the landmarks themselves are totally unknown:

\[
P_0 = \begin{pmatrix}
0 & 0 & \dots & 0 \\
0 & I \cdot \infty & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & I \cdot \infty
\end{pmatrix}.
\]

Here each entry is a matrix as defined in equation (1).
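Numerically, the infinite variances have to be approximated by a large finite value; a minimal initialization sketch under that assumption (the names and the $10^{12}$ stand-in are ours):

```python
import numpy as np

BIG = 1e12  # numerical stand-in for the "infinite" landmark variance

def init_filter(first_landmark_xy, n_landmarks, c=0.0):
    """Build X0 and P0 as above: zero robot pose with zero uncertainty, and
    landmark blocks at 'infinite' variance. The first landmark's coordinates
    are seeded from the first Sick scan; its variance collapses on the
    first update against that landmark."""
    x0 = np.full(3 + 2 * n_landmarks, c)
    x0[:3] = 0.0                      # robot pose defines the origin
    x0[3:5] = first_landmark_xy       # first landmark, from the Sick
    return x0, np.diag([0.0] * 3 + [BIG] * (2 * n_landmarks))
```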

Equation (2) is realized by using the velocity returned from the encoders over the time interval $dt$ and the estimated orientation:

\[
\hat{X}_{k+1|k} = \begin{pmatrix}
\hat{x}_r^k + V_m \, dt \cos\hat{\theta}_r^k \\
\hat{y}_r^k + V_m \, dt \sin\hat{\theta}_r^k \\
\hat{\theta}_r^k + \omega_m \, dt \\
\hat{x}_{l_1}^k \\
\hat{y}_{l_1}^k \\
\vdots \\
\hat{x}_{l_n}^k \\
\hat{y}_{l_n}^k
\end{pmatrix} \tag{13}
\]

where $V_m$ and $\omega_m$ are the measured linear and rotational velocities.


By taking the derivatives of equation (13), we obtain the $\Phi$ matrix:

\[
\Phi_k = \begin{pmatrix}
1 & 0 & -V_m \, dt \sin\hat{\theta}_r^k & 0 & \dots & 0 \\
0 & 1 & V_m \, dt \cos\hat{\theta}_r^k & 0 & \dots & 0 \\
0 & 0 & 1 & 0 & \dots & 0 \\
0 & 0 & 0 & I & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \dots & I
\end{pmatrix} \tag{14}
\]

and the $G$ matrix:

\[
G_k = \begin{pmatrix}
-dt \cos\hat{\theta}_r^k & 0 \\
-dt \sin\hat{\theta}_r^k & 0 \\
0 & -dt \\
0 & 0 \\
\vdots & \vdots \\
0 & 0
\end{pmatrix} \tag{15}
\]

where $I$ is the $2 \times 2$ identity matrix. The exact noise entering the system through odometry, $W$, cannot be determined, but it is possible to derive the covariance of this noise:

\[
Q_k = \begin{pmatrix}
\dfrac{\sigma_1^2 + \sigma_2^2}{4} & \dfrac{\sigma_1^2 - \sigma_2^2}{2a} \\[2ex]
\dfrac{\sigma_1^2 - \sigma_2^2}{2a} & \dfrac{\sigma_1^2 + \sigma_2^2}{a^2}
\end{pmatrix}. \tag{16}
\]

Here, $\sigma_1$ and $\sigma_2$ are the standard deviations of the noise introduced by each of the wheel encoders. These values can be modelled as proportional to the velocity of the wheels; we chose 2% of the respective wheel velocity. $a$ is the distance between the two wheels of the robot, 317.5 mm in the case of the Pioneer I.
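Putting equations (13)-(16) together, one propagation step of the indirect filter might be sketched as follows (our own function signature; the wheel speeds `v_left`/`v_right` are used only to form the $\sigma$'s):

```python
import numpy as np

A = 317.5  # mm, distance between the Pioneer I's wheels

def propagate_odometry(x, P, v_m, w_m, v_left, v_right, dt):
    """One indirect-EKF propagation step per equations (13)-(16).
    v_m / w_m are the measured linear and rotational velocities."""
    n = x.size
    th = x[2]

    # Equation (13): move the robot pose, leave the landmarks untouched.
    x_pred = x.copy()
    x_pred[0] += v_m * dt * np.cos(th)
    x_pred[1] += v_m * dt * np.sin(th)
    x_pred[2] += w_m * dt

    # Equation (14): identity except for the pose/heading coupling terms.
    Phi = np.eye(n)
    Phi[0, 2] = -v_m * dt * np.sin(th)
    Phi[1, 2] = v_m * dt * np.cos(th)

    # Equation (15): odometry noise enters only through the robot rows.
    G = np.zeros((n, 2))
    G[0, 0] = -dt * np.cos(th)
    G[1, 0] = -dt * np.sin(th)
    G[2, 1] = -dt

    # Equation (16): encoder noise, modelled as 2% of each wheel's speed.
    s1, s2 = 0.02 * v_left, 0.02 * v_right
    Q = np.array([[(s1**2 + s2**2) / 4,       (s1**2 - s2**2) / (2 * A)],
                  [(s1**2 - s2**2) / (2 * A), (s1**2 + s2**2) / A**2]])

    P_pred = Phi @ P @ Phi.T + G @ Q @ G.T   # equation (5)
    return x_pred, P_pred
```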
