1
ME 597/747 - Lecture 6: Autonomous Mobile Robots
Instructor: Chris Clark
Term: Fall 2004
Figures courtesy of Siegwart & Nourbakhsh
2
Navigation Control Loop
[Block diagram: Perception → Localization → Cognition → Motion Control, with Prior Knowledge and Operator Commands as external inputs to the loop]
3
Localization: Outline
1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief representation
   3. Map representation
   4. Probability Theory
2. Probabilistic map-based Localization
   1. Markov Localization
   2. Particle Filters
4
Localization
Behaviour-based navigation
Model-based navigation
5
Localization
Model-based navigation:
– Use dead reckoning to predict position x’
– Use x’ to probabilistically match external sensor data to the map
– Update the estimate of position x
6
Localization Control Loop
[Block diagram: Odometry feeds Position Prediction; Observations and the Map feed Matching; Matching drives the Position Update, which closes the loop]
7
Localization: Outline
1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief representation
   3. Map representation
   4. Probability Theory
2. Probabilistic map-based Localization
   1. Markov Localization
   2. Particle Filters
8
Odometry & Dead Reckoning
Odometry:
– Use wheel sensors to update position
Dead Reckoning:
– Use wheel sensors and a heading sensor to update position
Both are straightforward to implement, but errors are integrated and unbounded.
9
Odometry & Dead Reckoning
Odometry Error Sources:
– Limited resolution during integration (time increments, measurement resolution)
– Unequal wheel diameters (deterministic)
– Variation in the contact point of the wheel (deterministic)
– Unequal floor contact (slipping, non-planar surface, …)
10
Odometry & Dead Reckoning
Odometry Errors:
– Deterministic errors can be eliminated through proper calibration
– Non-deterministic errors must be described by error models and will always lead to an uncertain position estimate
11
Odometry & Dead Reckoning
Integration errors:
– Range error: sum of the wheel movements
– Turn error: difference between the wheel movements
– Drift error: a difference between the errors of the two wheels leads to an error in orientation
12
Odometry & Dead Reckoning
Differential Drive Robot
13
Odometry & Dead Reckoning
Differential Drive Robot
14
Odometry & Dead Reckoning
Differential Drive Robot
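The "Differential Drive Robot" slides above are figure-only in this text. As a sketch of what those figures present, here is the standard differential-drive odometry update; the variable names (ds_r, ds_l for wheel displacements, b for the wheelbase) are assumptions, not taken from the slides.

```python
import math

# Standard differential-drive odometry update (a sketch; names assumed).
# ds_r, ds_l: distances travelled by the right/left wheel since the last update
# b: wheelbase (distance between the two wheels)
def odometry_update(x, y, theta, ds_r, ds_l, b):
    ds = (ds_r + ds_l) / 2.0      # distance travelled by the robot center
    dtheta = (ds_r - ds_l) / b    # change in heading
    # Integrate along the average heading over the step.
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta
```

Because dtheta divides the wheel difference by the wheelbase, even a small mismatch between ds_r and ds_l produces the drift error in orientation described above.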
15
Odometry & Dead Reckoning
Errors perpendicular to the direction of motion grow much larger.
16
Odometry & Dead Reckoning
The error ellipse does not remain perpendicular to the direction of motion.
17
Odometry & Dead Reckoning
Square Path Experiment
18
Localization: Outline
1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief representation
   3. Map representation
   4. Probability Theory
2. Probabilistic map-based Localization
   1. Markov Localization
   2. Particle Filters
19
Belief Representation
Continuous
20
Belief Representation
Grid / Topological
21
Belief Representation
Continuous (single hypothesis)
Continuous (multiple hypotheses)
22
Belief Representation
Discretized (prob. distribution)
Discretized Topological (prob. dist.)
23
Belief Representation
Continuous:
– Precision bound by sensor data
– Typically a single-hypothesis pose estimate
– Lost when diverging (for a single hypothesis)
– Compact representation
– Reasonable processing power
24
Belief Representation
Discrete:
– Precision bound by the resolution of the discretization
– Typically a multiple-hypothesis pose estimate
– Never lost (when the estimate diverges, it converges to another cell)
– Significant memory and processing power needed (unless a topological map is used)
– Aids planner implementation
25
Belief Representation
Multi-Hypothesis Example
26
Localization: Outline
1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief representation
   3. Map representation
   4. Probability Theory
2. Localization Algorithms
   1. Probabilistic map-based Localization
   2. Markov Localization
   3. Particle Filters
27
Map Representation
Map precision vs. application
Feature precision vs. map precision
Precision vs. computational complexity
Two main types:
– Continuous
– Discretized
28
Map Representation
Continuous line-based
29
Map Representation
Exact cell decomposition
30
Map Representation
Fixed cell decomposition
31
Map Representation
Fixed cell decomposition
32
Map Representation
Adaptive cell decomposition
33
Map Representation
Topological decomposition
34
Map Representation
Topological decomposition
35
Map Representation
Topological decomposition
36
Localization: Outline
1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief representation
   3. Map representation
   4. Probability Theory
2. Probabilistic map-based Localization
   1. Markov Localization
   2. Particle Filters
37
Basic Probability Theory
Probability that A is true: P(A)
– We compute the probability of each robot state given actions and measurements.
Conditional probability that A is true given that B is true: P(A | B)
– For example, the probability that the robot is at position xt given the sensor input zt is P(xt | zt).
38
Basic Probability Theory
Product Rule:
p(A ∧ B) = p(A | B) p(B)
p(A ∧ B) = p(B | A) p(A)
– Equating the two expressions yields Bayes rule.
Bayes Rule:
p(A | B) = p(B | A) p(A) / p(B)
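A quick numeric check of Bayes rule in Python; the sensor probabilities below are made-up illustrative values, not from the lecture.

```python
# Robot senses a door. Illustrative (assumed) numbers:
p_door = 0.5        # prior p(A): robot is at a door
p_z_door = 0.6      # p(B | A): sensor reports "door" when at a door
p_z_no_door = 0.3   # p(B | not A): sensor reports "door" elsewhere

# p(B) via total probability, then Bayes rule.
p_z = p_z_door * p_door + p_z_no_door * (1 - p_door)
p_door_given_z = p_z_door * p_door / p_z
print(p_door_given_z)  # 2/3: the measurement raised the belief from 0.5
```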
39
Localization: Outline
1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief representation
   3. Map representation
   4. Probability Theory
2. Probabilistic map-based Localization
   1. Markov Localization
   2. Particle Filters
40
Localization: Probabilistic Methods
Problem Statement:
– Consider a mobile robot moving in a known environment.
– It might start from a known location and keep track of its position using odometry.
– However, the more it moves, the greater the uncertainty in its position.
– Therefore, it will update its position estimate using observations of its environment.
41
Localization: Probabilistic Methods
Motion combined with perception updates generally improves the position estimate.
42
Localization: Probabilistic Methods
Method:
– Fuse the odometric position estimate with the observation estimate to get the best possible update of the actual position
This can be implemented with two main functions:
1. Act
2. See
43
Localization: Probabilistic Methods
Action Update (Prediction):
– Define a function to predict the position estimate based on the previous state xt-1 and encoder measurements ot or control inputs ut:
x’t = Act(ot, xt-1)
– Increases uncertainty
44
Localization: Probabilistic Methods
Perception Update (Correction):
– Define a function to correct the position estimate x’t using exteroceptive sensor inputs zt:
xt = See(zt, x’t)
– Decreases uncertainty
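The Act/See pair can be illustrated with a toy 1-D sketch, assuming Gaussian beliefs stored as (mean, variance); the correction step here is a Kalman-style update, one concrete way to realize See, and the noise variances are assumed values.

```python
# Belief is a (mean, variance) pair; noise variances are assumed for illustration.
def act(o, belief, motion_var=0.1):
    mean, var = belief
    return (mean + o, var + motion_var)            # predict: uncertainty grows

def see(z, belief, sensor_var=0.05):
    mean, var = belief
    k = var / (var + sensor_var)                   # how much to trust the sensor
    return (mean + k * (z - mean), (1 - k) * var)  # correct: uncertainty shrinks
```

One iteration of the loop is then `x1 = see(z1, act(o1, x0))`.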
45
Map-Based Localization: Single Iteration
Given:
– position estimate x(t | t)
– its covariance for time t, Σp(t | t)
– current control input u(t)
– current set of observations Z(t + 1)
– map M(t)
Compute:
– new position estimate x(t + 1 | t + 1)
– its covariance Σp(t + 1 | t + 1)
46
Map-Based Localization: Five Steps
1. Position prediction based on previous estimate and odometry
2. Observation with onboard sensors
3. Measurement prediction based on position prediction and map
4. Matching of observation and map
5. Estimation – position update
47
Map-Based Localization: Kalman Filtering vs. Markov
Markov Localization:
– Can localize from any unknown position in the map
– Recovers from ambiguous situations
– However, updating the probability of all positions within the whole state space requires a discrete representation of space, which can demand large amounts of memory and processing power.
Kalman Filter Localization:
– Tracks the robot and is inherently precise and efficient
– However, if uncertainty grows too large, the KF will fail and the robot will get lost.
48
Localization: Outline
1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief representation
   3. Map representation
   4. Probability Theory
2. Probabilistic map-based Localization
   1. Markov Localization
   2. Particle Filters
49
Markov Localization
Markov localization uses an explicit, discrete representation of the probability of all positions in the state space.
The environment is usually represented by a finite number of states (positions):
– Grid
– Topological map
At each iteration, the probability of each state in the entire space is updated.
50
Markov Localization: Applying Probability Theory
Updating the belief state:
p(xt | ot) = ∫ p(xt | x’t-1, ot) p(x’t-1) dx’t-1
– Maps a belief state and action to a new belief state (Act)
– Sums over all possible ways (i.e. from all states x’) in which the robot may have reached xt
– This assumes the update depends only on the previous state and the most recent action/perception
51
Markov Localization: Applying Probability Theory
Use Bayes rule to refine the belief state:
p(xt | zt) = p(zt | xt) p(xt) / p(zt)
– p(xt): the belief state before the perceptual update, i.e. p(xt | ot)
– p(zt | xt): the probability of getting measurement zt from state xt
– p(zt): the probability of sensor measurement zt, used to normalize so that the sum over all states xt in X equals 1
52
Markov Localization: Grid-Based Example
Use a fixed decomposition grid (x, y, θ) with resolution 15 cm × 15 cm × 1°
53
Markov Localization: Grid-Based Example
Action Update:
– Sum over previous possible positions and the motion model:
p(xt | ot) = Σx’ p(xt | x’t-1, ot) p(x’t-1)
Perception Update:
– Given perception zt, what is the probability of being at state xt?
p(xt | zt) = p(zt | xt) p(xt) / p(zt)
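The two updates can be sketched on a minimal 1-D grid in Python; the motion-model probability and the measurement likelihoods below are assumed illustrative numbers, not from the lecture.

```python
# Belief is a list of probabilities over a ring of grid cells.
def action_update(belief, p_move=0.8):
    # Commanded: one cell to the right; succeeds with prob. p_move, else stays.
    n = len(belief)
    post = [0.0] * n
    for i, p in enumerate(belief):
        post[(i + 1) % n] += p_move * p   # sum over previous cells x'
        post[i] += (1 - p_move) * p
    return post

def perception_update(belief, likelihood):
    # Bayes rule: multiply by p(z | x) cell-wise, then normalize by p(z).
    post = [p * l for p, l in zip(belief, likelihood)]
    p_z = sum(post)
    return [p / p_z for p in post]
```

For example, a uniform four-cell belief updated with likelihoods [0.9, 0.1, 0.1, 0.1] concentrates 75% of the probability mass in the first cell.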
54
Markov Localization: Grid-Based Example
The critical challenge is the calculation of p(z | x):
– The number of possible sensor readings and geometric contexts is extremely large
– p(z | x) is computed using a model of the robot’s sensor behavior, its position x, and the local metric map around x
– Assumptions:
  - Measurement error can be described by a distribution with a mean
  - There is a non-zero chance of obtaining any measurement
55
Markov Localization: Grid-Based Example
Sensor Behavior:
56
Markov Localization: Grid-Based Example
The 1-D case:
1. Start: With no knowledge at the start, we have a uniform probability distribution.
2. Robot perceives the first pillar: Seeing only one pillar, the probabilities of being at pillar 1, 2, or 3 are equal.
3. Robot moves: The action model estimates the new probability distribution from the previous one and the motion.
4. Robot perceives the second pillar: Based on all prior knowledge, the probability of being at pillar 2 becomes dominant.
57
Markov Localization: Grid-Based Example
Laser Scan 1 of Museum
Figures courtesy of W. Burgard
58
Markov Localization: Grid-Based Example
Laser Scan 2 of Museum
59
Markov Localization: Grid-Based Example
Laser Scan 3 of Museum
60
Markov Localization: Grid-Based Example
Laser Scan 13 of Museum
61
Markov Localization: Grid-Based Example
Laser Scan 21 of Museum
62
Markov Localization: Particle Filter
A fine fixed decomposition results in a huge state space:
– Needs both large processing power and memory
Reducing complexity:
– The main goal is to reduce the number of states updated at each step
Randomized sampling: the Particle Filter
– Use an approximated belief state built from a subset of all possible states (obtained by random sampling)
63
Localization: Outline
1. Localization Tools
   1. Odometry and Dead Reckoning
   2. Belief representation
   3. Map representation
   4. Probability Theory
2. Probabilistic map-based Localization
   1. Markov Localization
   2. Particle Filters
64
Markov Localization: Particle Filter
Used to reduce the number of states.
Based on estimating the posterior probability distribution over the state.
65
Markov Localization: Particle Filter
Algorithm (initialize at t = 0):
– Randomly draw N states in the workspace and add them to the set X0.
– Iterate on these N states over time (see next slide).
66
Markov Localization: Particle Filter
Algorithm (loop over time step t):
1. For i = 1 … N:
2.   Pick xt-1[i] from Xt-1
3.   Draw xt[i] with probability p(xt[i] | xt-1[i], ut)
4.   Calculate wt[i] = p(zt | xt[i])
5.   Add xt[i] to XtTemp
6. For j = 1 … N:
7.   Draw xt[j] from XtTemp with probability wt[j]
8.   Add xt[j] to Xt
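The eight steps above can be sketched for a 1-D robot in Python; the motion and sensor noise values and the `sense` measurement model are assumptions for illustration, not the lab's implementation.

```python
import math
import random

def gaussian(x, mu, sigma):
    # Gaussian density, used as the measurement likelihood p(z | x).
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def particle_filter_step(particles, u, z, sense, motion_sigma=0.1, sensor_sigma=0.5):
    temp, weights = [], []
    for x_prev in particles:                        # steps 1-5
        x = random.gauss(x_prev + u, motion_sigma)  # draw xt ~ p(xt | xt-1, ut)
        temp.append(x)
        weights.append(gaussian(z, sense(x), sensor_sigma))  # wt = p(zt | xt)
    # Steps 6-8: resample N particles in proportion to their weights.
    return random.choices(temp, weights=weights, k=len(particles))
```

For example, `particle_filter_step(particles, u=1.0, z=5.0, sense=lambda x: x)` concentrates an initially spread-out particle set near states consistent with the measurement.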
67
Markov Localization: Particle Filter Example
An example where a robot (depicted below) starts at some unknown location x0 in a bounded workspace.
[Figure: robot at unknown position x0]
68
Markov Localization: Particle Filter Example
At time step t0:
We randomly pick N = 3 states, represented as X0 = {x0[1], x0[2], x0[3]}.
For simplicity, assume a known heading.
[Figure: particles x0[1], x0[2], x0[3] in the workspace, with the true position x0]
69
Markov Localization: Particle Filter Example
The next few slides walk through one iteration of the algorithm, given X0.
This iteration is for time step t1; the inputs are the measurement z1 and the control input u1.
[Figure: particles x0[1], x0[2], x0[3] and the true position x0]
70
Markov Localization: Particle Filter Example
For time step t1:
Randomly generate new states by propagating the previous states X0 with u1:
X1Temp = {x1[1], x1[2], x1[3]}
[Figure: propagated particles x1[1], x1[2], x1[3] and the true position x1]
71
Markov Localization: Particle Filter Example
For time step t1:
To get each new state x1[i], draw from a 2-D Gaussian probability distribution.
In the figure below, the new state is generated relatively close to the expected value.
[Figure: x0[i] propagated to a sampled x1[i]]
72
Markov Localization: Particle Filter Example
For time step t1:
The probability distribution can come directly from the equations on slide 14.
If the control inputs called for a straight-line movement, we could simplify with experimentally determined σx and σy:
x1[i] = rand('norm', x0[i] + u1,x, σx)
y1[i] = rand('norm', y0[i] + u1,y, σy)
[Figure: x0[i] and the sampled x1[i]]
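The two rand('norm', …) calls above are Matlab-style; a Python sketch of the same sampling step, with the σ values assumed experimentally determined:

```python
import random

# Sample the new particle position for a straight-line command (u_x, u_y).
# Sigma values are assumed; in practice they come from calibration experiments.
def propagate(x0, y0, u_x, u_y, sigma_x=0.05, sigma_y=0.05):
    x1 = random.gauss(x0 + u_x, sigma_x)  # x1[i] ~ N(x0[i] + u1,x, sigma_x)
    y1 = random.gauss(y0 + u_y, sigma_y)  # y1[i] ~ N(y0[i] + u1,y, sigma_y)
    return x1, y1
```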
73
Markov Localization: Particle Filter Example
For time step t1:
Using the measurement z1, calculate the weights w1[i] = p(z1 | x1[i]) for each state:
W1 = {w1[1], w1[2], w1[3]}
[Figure: particles x1[i] with weights w1[i], their expected measurements µ1[1], µ1[2], µ1[3], the actual measurement z1, and the true position x1]
74
Markov Localization: Particle Filter Example
For time step t1:
To calculate p(z1 | x1[i]), we use the sensor probability distribution: a single Gaussian with mean µ1[i].
The Gaussian variance can be taken from sensor data.
[Figure: Gaussian centered at µ1[i]; p(z1 | x1[i]) is its value at z1]
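Evaluating that Gaussian at the actual measurement gives the particle's weight; a sketch with an assumed sensor standard deviation:

```python
import math

# p(z1 | x1[i]): Gaussian centered at the expected measurement mu_i,
# evaluated at the actual measurement z1 (sigma assumed; in practice from sensor data).
def measurement_weight(z1, mu_i, sigma=0.2):
    return math.exp(-0.5 * ((z1 - mu_i) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
```

A particle whose expected measurement µ1[i] lies close to z1 receives a high weight, so it is more likely to survive the resampling step.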
75
Markov Localization: Particle Filter Example
For time step t1:
Resample the temporary state distribution based on the weights, here w1[2] > w1[1] > w1[3]:
X1 = {x1[2], x1[2], x1[1]}
[Figure: resampled particles; x1[2] appears twice and x1[3] has been dropped]
76
Markov Localization: Particle Filter Example
For time step t2:
Iterate on the previous steps to update the state belief at time step t2, given (X1, u2, z2).
77
Markov Localization
Courtesy of S. Thrun
78
Markov Localization
79
Markov Localization: Particle Filter
Next Lab:
– Bring objects to act as “walls”
– Establish a map and a preset path
– Use an open-loop controller to move the robot on the preset path
– Record range and heading measurements along the path
– In Matlab, establish the robot position in the map using a particle filter