
Page 1

ME 597/747 - Lecture 6: Autonomous Mobile Robots

Instructor: Chris Clark
Term: Fall 2004

Figures courtesy of Siegwart & Nourbakhsh

Page 2

Navigation Control Loop

[Figure: navigation control loop with blocks Perception, Localization, Cognition, and Motion Control; Prior Knowledge and Operator Commands enter as external inputs.]

Page 3

Localization: Outline

1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief Representation
   3. Map Representation
   4. Probability Theory
2. Probabilistic Map-Based Localization
   1. Markov Localization
   2. Particle Filters

Page 4

Localization

– Behaviour-based navigation
– Model-based navigation

Page 5

Localization

Model-based navigation
– Use dead reckoning to predict position x'
– Use x' to probabilistically match external sensor data to the map
– Update the estimate of position x

Page 6

Localization Control Loop

[Figure: localization control loop. Odometry feeds the Position Prediction; the predicted position, the Observations, and the Map feed the Matching step; the match result drives the Position Update.]

Page 7

Localization: Outline

1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief Representation
   3. Map Representation
   4. Probability Theory
2. Probabilistic Map-Based Localization
   1. Markov Localization
   2. Particle Filters

Page 8

Odometry & Dead Reckoning

Odometry
– Use wheel sensors to update position

Dead Reckoning
– Use wheel sensors and a heading sensor to update position

Both are straightforward to implement, but the errors are integrated and unbounded.

Page 9

Odometry & Dead Reckoning

Odometry Error Sources
– Limited resolution during integration (time increments, measurement resolution)
– Unequal wheel diameters (deterministic)
– Variation in the contact point of the wheel (deterministic)
– Unequal floor contact (slipping, non-planar surface, …)

Page 10

Odometry & Dead Reckoning

Odometry Errors
– Deterministic errors can be eliminated through proper calibration.
– Non-deterministic errors must be described by error models and will always lead to an uncertain position estimate.

Page 11

Odometry & Dead Reckoning

Integration errors:
– Range error: sum of the wheel movements
– Turn error: difference in the wheel movements
– Drift error: a difference in the errors of the two wheels leads to an error in orientation

Page 12

Odometry & Dead Reckoning

Differential Drive Robot

Page 13

Odometry & Dead Reckoning

Differential Drive Robot

Page 14

Odometry & Dead Reckoning

Differential Drive Robot
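The kinematics figures for slides 12-14 are not reproduced in this transcript. As a stand-in, here is a minimal MATLAB sketch of the standard differential-drive odometry update (the midpoint form; variable names are assumed):

```matlab
% Standard differential-drive odometry update (midpoint model).
% ds_r, ds_l : wheel travel increments from the encoders [m]
% b          : distance between the two wheel contact points [m]
% pose       : current pose estimate [x; y; theta]
function pose = odometry_update(pose, ds_r, ds_l, b)
    ds     = (ds_r + ds_l) / 2;    % translation of the robot center
    dtheta = (ds_r - ds_l) / b;    % change in heading
    % Evaluate the heading at the midpoint of the motion step
    pose(1) = pose(1) + ds * cos(pose(3) + dtheta / 2);
    pose(2) = pose(2) + ds * sin(pose(3) + dtheta / 2);
    pose(3) = pose(3) + dtheta;
end
```

Because each step integrates noisy encoder increments, the resulting errors are integrated and unbounded, as noted on slide 8.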

Page 15

Odometry & Dead Reckoning

Errors perpendicular to the direction of motion grow much larger.

Page 16

Odometry & Dead Reckoning

The error ellipse does not remain perpendicular to the direction of motion.

Page 17

Odometry & Dead Reckoning

Square Path Experiment

Page 18

Localization: Outline

1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief Representation
   3. Map Representation
   4. Probability Theory
2. Probabilistic Map-Based Localization
   1. Markov Localization
   2. Particle Filters

Page 19

Belief Representation

Continuous

Page 20

Belief Representation

Grid / Topological

Page 21

Belief Representation

Continuous (single hypothesis)

Continuous (multiple hypotheses)

Page 22

Belief Representation

Discretized (prob. distribution)

Discretized topological (prob. distribution)

Page 23

Belief Representation

Continuous
– Precision bound by sensor data
– Typically a single-hypothesis pose estimate
– Lost when diverging (for a single hypothesis)
– Compact representation
– Reasonable processing-power requirements

Page 24

Belief Representation

Discrete
– Precision bound by the resolution of the discretization
– Typically a multiple-hypothesis pose estimate
– Never lost (when the estimate diverges, it converges to another cell)
– Needs significant memory and processing power (unless a topological map is used)
– Aids planner implementation

Page 25

Belief Representation

Multi-Hypothesis Example

Page 26

Localization: Outline

1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief Representation
   3. Map Representation
   4. Probability Theory
2. Localization Algorithms
   1. Probabilistic Map-Based Localization
   2. Markov Localization
   3. Particle Filters

Page 27

Map Representation

– Map precision vs. application
– Feature precision vs. map precision
– Precision vs. computational complexity

Two main types:
– Continuous
– Discretized

Page 28

Map Representation

Continuous line-based

Page 29

Map Representation

Exact cell decomposition

Page 30

Map Representation

Fixed cell decomposition

Page 31

Map Representation

Fixed cell decomposition

Page 32

Map Representation

Adaptive cell decomposition

Page 33

Map Representation

Topological decomposition

Page 34

Map Representation

Topological decomposition

Page 35

Map Representation

Topological decomposition

Page 36

Localization: Outline

1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief Representation
   3. Map Representation
   4. Probability Theory
2. Probabilistic Map-Based Localization
   1. Markov Localization
   2. Particle Filters

Page 37

Basic Probability Theory

Probability that A is true: P(A)
– We compute the probability of each robot state given actions and measurements.

Conditional probability that A is true given that B is true: P(A | B)
– For example, the probability that the robot is at position xt given the sensor input zt is P( xt | zt ).

Page 38

Basic Probability Theory

Product Rule:

p( A ∧ B ) = p( A | B ) p( B )
p( A ∧ B ) = p( B | A ) p( A )

– Equating the two expressions above yields Bayes' rule.

Bayes' Rule:

p( A | B ) = p( B | A ) p( A ) / p( B )
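As a toy numerical illustration of Bayes' rule applied to localization (all numbers invented for the example):

```matlab
% Bayes-rule update over three discrete robot positions (numbers assumed).
prior      = [1/3, 1/3, 1/3];             % p(A): uniform belief over 3 cells
likelihood = [0.8, 0.1, 0.1];             % p(B|A): sensor reading favors cell 1
posterior  = likelihood .* prior;         % numerator of Bayes' rule
posterior  = posterior / sum(posterior);  % divide by p(B) to normalize
disp(posterior)                           % -> 0.8000  0.1000  0.1000
```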

Page 39

Localization: Outline

1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief Representation
   3. Map Representation
   4. Probability Theory
2. Probabilistic Map-Based Localization
   1. Markov Localization
   2. Particle Filters

Page 40

Localization: Probabilistic Methods

Problem Statement:
– Consider a mobile robot moving in a known environment.
– It might start from a known location and keep track of its position using odometry.
– However, the more it moves, the greater the uncertainty in its position.
– Therefore, it updates its position estimate using observations of its environment.

Page 41

Localization: Probabilistic Methods

Motion generally improves the position estimate.

Page 42

Localization: Probabilistic Methods

Method:
– Fuse the odometric position estimate with the observation estimate to get the best possible update of the actual position.

This can be implemented with two main functions:
1. Act
2. See

Page 43

Localization: Probabilistic Methods

Action Update (Prediction)
– Define a function to predict the position estimate from the previous state xt-1 and the encoder measurements ot or control inputs ut:

x't = Act( ot , xt-1 )

– Increases uncertainty

Page 44

Localization: Probabilistic Methods

Perception Update (Correction)
– Define a function to correct the position estimate x't using exteroceptive sensor inputs zt:

xt = See( zt , x't )

– Decreases uncertainty
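Putting the two updates together gives the localization control loop from slide 6. A minimal MATLAB sketch follows; x0, T, read_encoders, and read_sensors are assumed placeholders, and Act/See stand for the functions defined above.

```matlab
% Localization loop: alternate prediction (Act) and correction (See).
x = x0;                      % initial pose estimate (assumed given)
for t = 1:T
    o = read_encoders();     % hypothetical: odometry reading o_t
    x_pred = Act(o, x);      % action update: uncertainty grows
    z = read_sensors();      % hypothetical: exteroceptive reading z_t
    x = See(z, x_pred);      % perception update: uncertainty shrinks
end
```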

Page 45

Map-Based Localization: Single Iteration

Given:
– position estimate, x( t | t )
– its covariance for time t, Σp( t | t )
– current control input, u( t )
– current set of observations, Z( t + 1 )
– map, M( t )

Compute:
– new position estimate, x( t + 1 | t + 1 )
– its covariance, Σp( t + 1 | t + 1 )

Page 46

Map-Based Localization: Five Steps

1. Position prediction based on previous estimate and odometry
2. Observation with onboard sensors
3. Measurement prediction based on position prediction and map
4. Matching of observation and map
5. Estimation (position update)

Page 47

Map-Based Localization: Kalman Filtering vs. Markov

Markov Localization
– Can localize from any unknown position in the map
– Recovers from ambiguous situations
– However, updating the probability of all positions within the whole state space requires a discrete representation of space, which can demand large amounts of memory and processing power.

Kalman Filter Localization
– Tracks the robot and is inherently precise and efficient
– However, if the uncertainty grows too large, the KF will fail and the robot will get lost.

Page 48

Localization: Outline

1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief Representation
   3. Map Representation
   4. Probability Theory
2. Probabilistic Map-Based Localization
   1. Markov Localization
   2. Particle Filters

Page 49

Markov Localization

Markov localization uses an explicit, discrete representation of the probability of all positions in the state space.

The environment is usually represented by a finite number of positions (states):
– Grid
– Topological map

At each iteration, the probability of every state in the entire space is updated.

Page 50

Markov Localization: Applying Probability Theory

Updating the belief state:

p( xt | ot ) = ∫ p( xt | x't-1 , ot ) p( x't-1 ) dx't-1

– Maps a belief state and an action to a new belief state (Act)
– Sums over all possible ways (i.e., from all states x') in which the robot may have reached xt
– This assumes the update depends only on the previous state and the most recent action/perception (the Markov assumption)

Page 51

Markov Localization: Applying Probability Theory

Use Bayes' rule to refine the belief state:

p( xt | zt ) = p( zt | xt ) p( xt ) / p( zt )

– p( xt ): the belief state before the perceptual update, i.e. p( xt | ot )
– p( zt | xt ): the probability of getting measurement zt from state xt
– p( zt ): the probability of sensor measurement zt; used to normalize so that the sum over all states xt in X equals 1

Page 52

Markov Localization: Grid-Based Example

Use a fixed decomposition grid over (x, y, θ) with a resolution of 15 cm × 15 cm × 1°.

Page 53

Markov Localization: Grid-Based Example

Action Update:
– Sum over the previous possible positions and the motion model:

p( xt | ot ) = Σx' p( xt | x't-1 , ot ) p( x't-1 )

Perception Update:
– Given perception zt, what is the probability of being at state xt?

p( xt | zt ) = p( zt | xt ) p( xt ) / p( zt )
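A minimal 1D sketch of these two updates on a grid; the motion model, the sensor likelihoods, and the pillar cells are all invented for illustration.

```matlab
% 1D grid Markov localization: one action update + one perception update.
N   = 10;
bel = ones(1, N) / N;              % uniform initial belief over N cells
% Action update: sum over previous cells x' weighted by the motion model.
% Assumed model for a commanded one-cell move: the robot actually moves
% 0, 1, or 2 cells with probabilities 0.1, 0.8, 0.1.
p_move   = [0.1, 0.8, 0.1];
bel_pred = zeros(1, N);
for x = 1:N
    for k = 0:2
        if x - k >= 1
            bel_pred(x) = bel_pred(x) + p_move(k + 1) * bel(x - k);
        end
    end
end
% Perception update: multiply by p(zt | xt), then normalize by p(zt).
p_z_given_x = 0.2 * ones(1, N);    % assumed sensor likelihoods
p_z_given_x([3 6 9]) = 0.9;        % e.g. cells from which a pillar is seen
bel = p_z_given_x .* bel_pred;
bel = bel / sum(bel);
```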

Page 54

Markov Localization: Grid-Based Example

The critical challenge is the calculation of p( z | x ):
– The number of possible sensor readings and geometric contexts is extremely large.
– p( z | x ) is computed using a model of the robot's sensor behavior, its position x, and the local metric map around x.
– Assumptions:
  – Measurement error can be described by a distribution with a mean
  – There is a non-zero chance for any measurement

Page 55

Markov Localization: Grid-Based Example

Sensor Behavior:

Page 56

Markov Localization: Grid-Based Example

The 1D case:

1. Start
No knowledge at start, so we have a uniform probability distribution.

2. Robot perceives the first pillar
Seeing only one pillar, the probabilities of being at pillar 1, 2, or 3 are equal.

3. Robot moves
The action model lets us estimate the new probability distribution from the previous one and the motion.

4. Robot perceives the second pillar
Based on all prior knowledge, the probability of being at pillar 2 becomes dominant.

Page 57

Markov Localization: Grid-Based Example

Laser Scan 1 of Museum

Figures courtesy of W. Burgard

Page 58

Markov Localization: Grid-Based Example

Laser Scan 2 of Museum

Figures courtesy of W. Burgard

Page 59

Markov Localization: Grid-Based Example

Laser Scan 3 of Museum

Figures courtesy of W. Burgard

Page 60

Markov Localization: Grid-Based Example

Laser Scan 13 of Museum

Figures courtesy of W. Burgard

Page 61

Markov Localization: Grid-Based Example

Laser Scan 21 of Museum

Figures courtesy of W. Burgard

Page 62

Markov Localization: Particle Filter

A fine fixed decomposition results in a huge state space:
– Requires both large processing power and large memory

Reducing complexity:
– The main goal is to reduce the number of states updated at each step.

Randomized sampling: the particle filter
– Use an approximate belief state containing a subset of all possible states (obtained by random sampling).

Page 63

Localization: Outline

1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief Representation
   3. Map Representation
   4. Probability Theory
2. Probabilistic Map-Based Localization
   1. Markov Localization
   2. Particle Filters

Page 64

Markov Localization: Particle Filter

– Used to reduce the number of states
– Based on estimating the posterior probability distribution over the state

Page 65

Markov Localization: Particle Filter

Algorithm (initialize at t = 0):
– Randomly draw N states in the workspace and add them to the set X0.
– Iterate on these N states over time (see the next slide).
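A one-step sketch of this initialization for a planar robot, assuming a rectangular workspace with bounds xmin..xmax and ymin..ymax:

```matlab
% Draw N initial particles uniformly over an assumed rectangular workspace.
N  = 1000;
X0 = [xmin + (xmax - xmin) * rand(1, N);   % x coordinates
      ymin + (ymax - ymin) * rand(1, N)];  % y coordinates
```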

Page 66

Markov Localization: Particle Filter

Algorithm (loop over time step t):

1. For i = 1 … N
2.   Pick xt-1[i] from Xt-1
3.   Draw xt[i] with probability p( xt[i] | xt-1[i] , ut )
4.   Calculate wt[i] = p( zt | xt[i] )
5.   Add xt[i] to XtTemp
6. For j = 1 … N
7.   Draw xt[j] from XtTemp with probability wt[j]
8.   Add xt[j] to Xt

Page 67

Markov Localization: Particle Filter Example

Here is an example where a robot (depicted below) starts at some unknown location x0 in a bounded workspace.

[Figure: robot at unknown position x0 in the workspace]

Page 68

Markov Localization: Particle Filter Example

At time step t0, we randomly pick N = 3 states, represented as

X0 = { x0[1], x0[2], x0[3] }

For simplicity, assume a known heading.

[Figure: workspace showing the true position x0 and the samples x0[1], x0[2], x0[3]]

Page 69

Markov Localization: Particle Filter Example

The next few slides walk through one iteration of the algorithm, given X0. This iteration is for time step t1; the inputs are the measurement z1 and the control input u1.

[Figure: workspace with x0 and the samples x0[1], x0[2], x0[3]]

Page 70

Markov Localization: Particle Filter Example

For time step t1: randomly generate new states by propagating the previous states X0 with u1:

X1Temp = { x1[1], x1[2], x1[3] }

[Figure: workspace with the propagated samples x1[1], x1[2], x1[3] and the true position x1]

Page 71

Markov Localization: Particle Filter Example

For time step t1: to get each new state, use a 2D Gaussian probability distribution to randomly generate the new state x1[i]. In the example below, the new state is generated relatively close to its expected value.

[Figure: new sample x1[i] drawn near the propagated position of x0[i]]

Page 72

Markov Localization: Particle Filter Example

For time step t1: the probability distribution can come directly from the equations on slide 14. If the control inputs call for a straight-line movement, we can simplify using experimentally determined σx and σy:

x1[i] = x0[i] + u1,x + σx · randn()
y1[i] = y0[i] + u1,y + σy · randn()

[Figure: new sample x1[i] drawn near the propagated position of x0[i]]

Page 73

Markov Localization: Particle Filter Example

For time step t1: using the measurement z1, calculate the weights w1[i] = p( z1 | x1[i] ) for each state:

W1 = { w1[1], w1[2], w1[3] }

[Figure: each particle x1[i] shown with its weight w1[i]; the expected measurements µ1[1], µ1[2], µ1[3] are compared against the actual measurement z1]

Page 74

Markov Localization: Particle Filter Example

For time step t1: to calculate p( z1 | x1[i] ), we use the sensor probability distribution, a single Gaussian with mean µ1[i]. The Gaussian's variance can be taken from sensor data.

[Figure: Gaussian centered at µ1[i]; p( z1 | x1[i] ) is the density evaluated at the actual measurement z1]
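For a scalar range measurement this weight is a one-liner; a sketch assuming a sensor standard deviation sigma_z, with z1 and the expected measurement mu1_i given:

```matlab
% Gaussian sensor likelihood p(z1 | x1[i]) with mean mu1_i (sigma_z assumed)
w_i = exp(-(z1 - mu1_i)^2 / (2 * sigma_z^2)) / (sqrt(2*pi) * sigma_z);
```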

Page 75

Markov Localization: Particle Filter Example

For time step t1: resample the temporary state distribution based on the weights. With w1[2] > w1[1] > w1[3]:

X1 = { x1[2], x1[2], x1[1] }

[Figure: resampled particle set; x1[2] is drawn twice and x1[3] is dropped]

Page 76

Markov Localization: Particle Filter Example

For time step t2: iterate on the previous steps to update the state belief at time step t2, given ( X1, u2, z2 ).

Page 77

Markov Localization

Courtesy of S. Thrun

Page 78

Markov Localization

Courtesy of S. Thrun

Page 79

Markov Localization: Particle Filter

Next Lab:
– Bring objects to act as "walls"
– Establish a map and a preset path
– Use an open-loop controller to move the robot along the preset path
– Record range and heading measurements along the path
– In MATLAB, establish the robot's position in the map using a particle filter