Mobile Robot Localization (ch. 7)
• Mobile robot localization is the problem of determining the pose of a robot relative to a given map of the environment.
• Unfortunately, the pose of a robot cannot be sensed directly, at least for now. The pose has to be inferred from data.
• Is a single sensor measurement enough? Usually not: a single measurement rarely determines the pose uniquely.
• Localization is one of the most fundamental problems in robotics.
• Mobile robot localization can be seen, from one point of view, as a problem of coordinate transformation: expressing quantities given in the robot's local frame in the global frame of the map.
Mobile Robot Localization
• Localization techniques have been developed for a broad set of map representations: feature-based maps, location-based maps, occupancy grid maps, etc. (What exactly are they? See Figure 7.2.)
• (You can probably guess what the mapping problem is.)
• Remember: in the localization problem, the map is given, known, available.
• Is it hard? Not really, because…
Mobile Robot Localization
• Most localization algorithms are variants of the Bayes filter algorithm.
• However, different representations of maps, sensor models, motion models, etc. lead to different variants.
• Here is the agenda.
Mobile Robot Localization
• We want to know the different kinds of maps.
• We want to know the different kinds of localization problems.
• We want to know how to solve localization problems; along the way, we also need to learn how to build the sensor model, the motion model, etc.
Mobile Robot Localization (We want to know different kinds of maps.)
Different kinds of maps, at a glance: feature-based, location-based, metric, topological, occupancy grid maps, etc.
See Figure 7.2 and http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/node23.html
Mobile Robot Localization – A Taxonomy (We want to know different kinds of localization problems.)
• Different kinds of localization problems.
• A taxonomy in 4 dimensions:
  • Local versus global (initial knowledge)
  • Static versus dynamic (environment)
  • Passive versus active (control of the robot)
  • Single-robot versus multi-robot
Mobile Robot Localization
• Haven't we solved this already with the Bayes filter algorithm? How?
• The straightforward application of Bayes filters to the localization problem is called Markov localization.
• Here is the algorithm (still abstract).
Mobile Robot Localization
• Algorithm Bayes_filter(bel(x_{t-1}), u_t, z_t):
•   for all x_t do
•     \overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1}
•     bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t)
•   endfor
•   return bel(x_t)
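For a discrete state space, the integral in the prediction step becomes a sum and the algorithm is a few lines of code. The following is a minimal sketch for a hypothetical 1-D grid world (the cell count, motion noise `p_move`, and sensor accuracy `p_hit` are illustrative assumptions, not from the slides): the robot moves `u` cells with some noise, and a binary sensor reports whether the current cell is a "door".

```python
# Discrete Bayes filter sketch for a hypothetical 1-D cyclic grid world.
def bayes_filter(bel, u, z, doors, p_move=0.8, p_hit=0.9):
    n = len(bel)
    # Prediction: bel_bar(x_t) = sum over x' of p(x_t | u_t, x') * bel(x')
    bel_bar = [0.0] * n
    for x_prev, b in enumerate(bel):
        bel_bar[(x_prev + u) % n] += p_move * b   # intended move succeeded
        bel_bar[x_prev] += (1.0 - p_move) * b     # robot stayed put
    # Correction: bel(x_t) = eta * p(z_t | x_t) * bel_bar(x_t)
    bel_new = [(p_hit if (x in doors) == z else 1.0 - p_hit) * b
               for x, b in enumerate(bel_bar)]
    eta = sum(bel_new)
    return [b / eta for b in bel_new]

bel = [1.0 / 10] * 10                             # uniform prior over 10 cells
bel = bayes_filter(bel, u=1, z=True, doors={0, 3, 7})
```

After one "door" measurement from a uniform prior, probability mass concentrates on the door cells, exactly as the correction step prescribes.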
Mobile Robot Localization
• Algorithm Markov_localization(bel(x_{t-1}), u_t, z_t, m):
•   for all x_t do
•     \overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}, m) \, bel(x_{t-1}) \, dx_{t-1}
•     bel(x_t) = \eta \, p(z_t \mid x_t, m) \, \overline{bel}(x_t)
•   endfor
•   return bel(x_t)
The Markov localization algorithm addresses the global localization problem, the position tracking problem, and the kidnapped robot problem in static environments.
Mobile Robot Localization
• Revisit Figure 7.5 to see the Markov localization algorithm at work.
• The Markov localization algorithm is still very abstract. To put it to work (e.g., in your project), we need a lot more background knowledge to realize the motion model, the sensor model, etc.
• We start with Gaussian filters, the best known of which is the Kalman filter.
Bayes Filter Implementations (1)
Kalman Filter(Gaussian filters)
(back to Ch.3)
Bayes Filter Reminder
• Prediction: \overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1}
• Correction: bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t)
Gaussians

Univariate: p(x) \sim N(\mu, \sigma^2):
p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

Multivariate: p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma):
p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})}
Properties of Gaussians

X \sim N(\mu, \sigma^2),\ Y = aX + b \;\Rightarrow\; Y \sim N(a\mu + b, \, a^2\sigma^2)

X_1 \sim N(\mu_1, \sigma_1^2),\ X_2 \sim N(\mu_2, \sigma_2^2) \;\Rightarrow\;
p(X_1) \cdot p(X_2) \sim N\left( \frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\mu_1 + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\mu_2, \; \frac{1}{\sigma_1^{-2} + \sigma_2^{-2}} \right)
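The product-of-Gaussians property above can be checked numerically: the pointwise product of the two densities, divided by the predicted Gaussian, must be the same constant everywhere. A small sketch (the specific means and variances are arbitrary illustrative choices):

```python
# Sanity-check that N(mu1, s1^2) * N(mu2, s2^2) is proportional to a
# Gaussian with the combined mean and variance given on the slide.
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu1, var1 = 0.0, 4.0
mu2, var2 = 2.0, 1.0
mu = (var2 * mu1 + var1 * mu2) / (var1 + var2)   # combined mean
var = 1.0 / (1.0 / var1 + 1.0 / var2)            # combined variance

# If the property holds, this ratio is the same constant for every x.
xs = [-1.0, 0.5, 2.0, 3.5]
ratios = [gauss(x, mu1, var1) * gauss(x, mu2, var2) / gauss(x, mu, var)
          for x in xs]
```

The constant ratio is the normalization factor absorbed into \eta in the Bayes filter correction step.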
Multivariate Gaussians

X \sim N(\boldsymbol{\mu}, \Sigma),\ Y = AX + B \;\Rightarrow\; Y \sim N(A\boldsymbol{\mu} + B, \, A \Sigma A^T)

X_1 \sim N(\boldsymbol{\mu}_1, \Sigma_1),\ X_2 \sim N(\boldsymbol{\mu}_2, \Sigma_2) \;\Rightarrow\;
p(X_1) \cdot p(X_2) \sim N\left( \Sigma_2(\Sigma_1+\Sigma_2)^{-1}\boldsymbol{\mu}_1 + \Sigma_1(\Sigma_1+\Sigma_2)^{-1}\boldsymbol{\mu}_2, \; \Sigma_1(\Sigma_1+\Sigma_2)^{-1}\Sigma_2 \right)

• We stay in the "Gaussian world" as long as we start with Gaussians and perform only linear transformations.
• Review your probability textbook.
http://en.wikipedia.org/wiki/Multivariate_normal_distribution
Kalman Filter

Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation
x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t
with a measurement
z_t = C_t x_t + \delta_t
Components of a Kalman Filter

A_t: matrix (n×n) that describes how the state evolves from t−1 to t without controls or noise.
B_t: matrix (n×l) that describes how the control u_t changes the state from t−1 to t.
C_t: matrix (k×n) that describes how to map the state x_t to an observation z_t.
\varepsilon_t, \delta_t: random variables representing the process and measurement noise; they are assumed to be independent and normally distributed with covariances R_t and Q_t, respectively.
Kalman Filter Algorithm

1. Algorithm Kalman_filter(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t):
2. Prediction:
3.   \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t
4.   \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t
5. Correction:
6.   K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}
7.   \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)
8.   \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t
9. Return \mu_t, \Sigma_t
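The nine lines above translate almost one-to-one into NumPy. Below is a minimal sketch; the constant-velocity 1-D model (state = [position, velocity], position-only measurements) and all the matrix values are hypothetical illustrations, not taken from the slides:

```python
# Kalman filter step following the algorithm above (prediction + correction).
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
B = np.zeros((2, 1))                    # no control input in this example
C = np.array([[1.0, 0.0]])              # we observe position only
R = 0.01 * np.eye(2)                    # process noise covariance
Q = np.array([[0.25]])                  # measurement noise covariance

mu, Sigma = np.zeros(2), 10.0 * np.eye(2)
for z in [1.0, 2.1, 2.9, 4.0]:          # roughly unit-velocity motion
    mu, Sigma = kalman_filter(mu, Sigma, np.zeros(1), np.array([z]),
                              A, B, C, R, Q)
```

Note that the filter infers the velocity (the second state component) even though only positions are measured, because the dynamics matrix A couples the two.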
Kalman Filter Updates in 1D

Correction:
bel(x_t): \quad \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t), \quad \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t, \quad \text{with } K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}
In 1D: \quad \mu_t = \bar{\mu}_t + K_t (z_t - \bar{\mu}_t), \quad \sigma_t^2 = (1 - K_t) \bar{\sigma}_t^2, \quad \text{with } K_t = \frac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + \sigma_{obs,t}^2}
Kalman Filter Updates in 1D

Prediction:
\overline{bel}(x_t): \quad \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t, \quad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t
In 1D: \quad \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t, \quad \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2
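The 1-D forms are worth implementing once by hand to see the mechanics. A tiny sketch (a = b = 1 models "move by u"; the noise variances and the numbers fed in are hypothetical):

```python
# Scalar Kalman filter using the 1-D update rules above.
def predict(mu, var, u, a=1.0, b=1.0, var_act=1.0):
    # bar_mu = a*mu + b*u ; bar_var = a^2*var + var_act
    return a * mu + b * u, a * a * var + var_act

def correct(mu_bar, var_bar, z, var_obs=1.0):
    K = var_bar / (var_bar + var_obs)          # 1-D Kalman gain
    return mu_bar + K * (z - mu_bar), (1.0 - K) * var_bar

mu, var = 0.0, 100.0                 # very uncertain prior
mu, var = predict(mu, var, u=1.0)    # move forward 1 unit
mu, var = correct(mu, var, z=1.2)    # measure position 1.2
```

With such an uncertain prior, the gain K is close to 1 and the posterior mean ends up near the measurement, while the variance collapses to roughly the observation noise.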
Kalman Filter Updates

Linear Gaussian Systems: Initialization
• Initial belief is normally distributed:
  bel(x_0) = N(x_0; \mu_0, \Sigma_0)
Linear Gaussian Systems: Dynamics
• Dynamics are a linear function of state and control plus additive noise:
  x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t
  p(x_t \mid u_t, x_{t-1}) = N(x_t; \, A_t x_{t-1} + B_t u_t, \, R_t)
  \overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1}
  with p(x_t \mid u_t, x_{t-1}) \sim N(x_t; \, A_t x_{t-1} + B_t u_t, \, R_t) and bel(x_{t-1}) \sim N(x_{t-1}; \, \mu_{t-1}, \Sigma_{t-1})
Linear Gaussian Systems: Dynamics

\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1}
with p(x_t \mid u_t, x_{t-1}) \sim N(x_t; \, A_t x_{t-1} + B_t u_t, \, R_t) and bel(x_{t-1}) \sim N(x_{t-1}; \, \mu_{t-1}, \Sigma_{t-1})

\Downarrow

\overline{bel}(x_t) = \eta \int \exp\left\{ -\tfrac{1}{2}(x_t - A_t x_{t-1} - B_t u_t)^T R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t) \right\} \exp\left\{ -\tfrac{1}{2}(x_{t-1} - \mu_{t-1})^T \Sigma_{t-1}^{-1} (x_{t-1} - \mu_{t-1}) \right\} dx_{t-1}

\Downarrow

\overline{bel}(x_t): \quad \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t, \quad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t
Linear Gaussian Systems: Observations
• Observations are a linear function of the state plus additive noise:
  z_t = C_t x_t + \delta_t
  p(z_t \mid x_t) = N(z_t; \, C_t x_t, \, Q_t)
  bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t)
  with p(z_t \mid x_t) \sim N(z_t; \, C_t x_t, \, Q_t) and \overline{bel}(x_t) \sim N(x_t; \, \bar{\mu}_t, \bar{\Sigma}_t)
Linear Gaussian Systems: Observations

bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t)
with p(z_t \mid x_t) \sim N(z_t; \, C_t x_t, \, Q_t) and \overline{bel}(x_t) \sim N(x_t; \, \bar{\mu}_t, \bar{\Sigma}_t)

\Downarrow

bel(x_t) = \eta \exp\left\{ -\tfrac{1}{2}(z_t - C_t x_t)^T Q_t^{-1} (z_t - C_t x_t) \right\} \exp\left\{ -\tfrac{1}{2}(x_t - \bar{\mu}_t)^T \bar{\Sigma}_t^{-1} (x_t - \bar{\mu}_t) \right\}

\Downarrow

bel(x_t): \quad \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t), \quad \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t, \quad \text{with } K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}

See pages 45-54 for the mathematical derivation.
The Prediction-Correction-Cycle

Prediction:
\overline{bel}(x_t): \quad \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t, \quad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t
(1D: \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t, \quad \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2)

Correction:
bel(x_t): \quad \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t), \quad \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t, \quad \text{with } K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}
(1D: \mu_t = \bar{\mu}_t + K_t (z_t - \bar{\mu}_t), \quad \sigma_t^2 = (1 - K_t) \bar{\sigma}_t^2, \quad \text{with } K_t = \bar{\sigma}_t^2 / (\bar{\sigma}_t^2 + \sigma_{obs,t}^2))
Kalman Filter Summary
• Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^{2.376} + n^2).
• Optimal for linear Gaussian systems!
• Unfortunately, most robotic systems are nonlinear.
Nonlinear Dynamic Systems
• Most realistic robotic problems involve nonlinear functions:
  x_t = g(u_t, x_{t-1})
  z_t = h(x_t)

Linearity Assumption Revisited

Non-linear Function

EKF Linearization (1)

EKF Linearization (2)

EKF Linearization (3)
EKF Linearization: First Order Taylor Series Expansion

• Prediction:
  g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} (x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t (x_{t-1} - \mu_{t-1})

• Correction:
  h(x_t) \approx h(\bar{\mu}_t) + \frac{\partial h(\bar{\mu}_t)}{\partial x_t} (x_t - \bar{\mu}_t) = h(\bar{\mu}_t) + H_t (x_t - \bar{\mu}_t)
EKF Algorithm

1. Algorithm Extended_Kalman_filter(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t):
2. Prediction:
3.   \bar{\mu}_t = g(u_t, \mu_{t-1})
4.   \bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R_t
5. Correction:
6.   K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q_t)^{-1}
7.   \mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t))
8.   \Sigma_t = (I - K_t H_t) \bar{\Sigma}_t
9. Return \mu_t, \Sigma_t

with the Jacobians G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}, \quad H_t = \frac{\partial h(\bar{\mu}_t)}{\partial x_t}

Compare with the Kalman filter: \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t, \; \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t, \; K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}, \; \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t), \; \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t
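The only new ingredients over the KF are the Jacobians G_t and H_t. A minimal scalar sketch, for a hypothetical 1-D setup not taken from the slides: the robot moves along a line (g(u, x) = x + u, so G = 1) and measures its distance to a landmark at an assumed position `L_POS` with h(x) = |L_POS − x| (so dh/dx = −1 when x < L_POS):

```python
# Scalar EKF step: prediction with G = dg/dx, correction with H = dh/dx
# linearized at the predicted mean.
L_POS = 10.0          # assumed landmark position (hypothetical)

def g(u, x):          # motion model
    return x + u

def h(x):             # nonlinear measurement model: range to landmark
    return abs(L_POS - x)

def ekf_step(mu, var, u, z, R=0.1, Q=0.5):
    # Prediction (G = dg/dx = 1 for this model)
    mu_bar = g(u, mu)
    var_bar = 1.0 * var * 1.0 + R
    # Correction: linearize h at mu_bar
    H = -1.0 if mu_bar < L_POS else 1.0
    K = var_bar * H / (H * var_bar * H + Q)
    mu_new = mu_bar + K * (z - h(mu_bar))
    var_new = (1.0 - K * H) * var_bar
    return mu_new, var_new

mu, var = 0.0, 4.0
mu, var = ekf_step(mu, var, u=1.0, z=8.8)   # true pose ~1.2 -> range ~8.8
```

Because the linearization happens at the predicted mean, the EKF is only as good as that approximation; this is exactly the error the linearization figures above illustrate.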
Bayes Filter Implementations (2)
Particle filters
Sample-based Localization (sonar)
Particle Filters
• Represent the belief by random samples
• Estimation of non-Gaussian, nonlinear processes
• Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter, particle filter
• Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa, 96]
• Computer vision: [Isard and Blake, 96, 98]
• Dynamic Bayesian networks: [Kanazawa et al., 95]
Particle Filters

Sensor Information: Importance Sampling
Bel(x) \leftarrow \alpha \, p(z \mid x) \, Bel^{-}(x)
w \leftarrow \frac{\alpha \, p(z \mid x) \, Bel^{-}(x)}{Bel^{-}(x)} = \alpha \, p(z \mid x)

Robot Motion
Bel^{-}(x) \leftarrow \int p(x \mid u, x') \, Bel(x') \, dx'
Particle Filter Algorithm

1. Algorithm particle_filter(S_{t-1}, u_{t-1}, z_t):
2.   S_t = \emptyset, \quad \eta = 0
3.   For i = 1 \ldots n                                  (generate new samples)
4.     Sample index j(i) from the discrete distribution given by w_{t-1}
5.     Sample x_t^i from p(x_t \mid x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}
6.     w_t^i = p(z_t \mid x_t^i)                         (compute importance weight)
7.     \eta = \eta + w_t^i                               (update normalization factor)
8.     S_t = S_t \cup \{\langle x_t^i, w_t^i \rangle\}   (insert)
9.   For i = 1 \ldots n
10.    w_t^i = w_t^i / \eta                              (normalize weights)
Particle Filter Algorithm

• Draw x_{t-1}^i from Bel(x_{t-1})
• Draw x_t^i from p(x_t \mid x_{t-1}^i, u_{t-1})
• Importance factor for x_t^i:

w_t^i = \frac{\text{target distribution}}{\text{proposal distribution}} = \frac{\eta \, p(z_t \mid x_t) \, p(x_t \mid x_{t-1}, u_{t-1}) \, Bel(x_{t-1})}{p(x_t \mid x_{t-1}, u_{t-1}) \, Bel(x_{t-1})} \propto p(z_t \mid x_t)

Bel(x_t) = \eta \, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1}
Importance Sampling

Weight samples: w = f / g
http://en.wikipedia.org/wiki/Importance_sampling
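The w = f / g idea is easy to demonstrate outside of filtering. A small sketch with hypothetical distributions: estimate the mean of a target f = N(2, 1) using samples drawn from a wider proposal g = N(0, 3²), weighting each sample by f(x)/g(x):

```python
# Self-normalized importance sampling: E_f[x] estimated from samples of g.
import math
import random

random.seed(1)

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

samples = [random.gauss(0.0, 3.0) for _ in range(20000)]            # draw from g
weights = [normal_pdf(x, 2.0, 1.0) / normal_pdf(x, 0.0, 3.0)        # w = f / g
           for x in samples]
estimate = sum(w * x for w, x in zip(weights, samples)) / sum(weights)
```

In the particle filter, the same trick is used with the motion model as g and the posterior as f, which is why the weights reduce to the measurement likelihood.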
Importance Sampling with Resampling

Target distribution f:
p(x \mid z_1, z_2, \ldots, z_n) = \frac{\prod_k p(z_k \mid x) \, p(x)}{p(z_1, z_2, \ldots, z_n)}

Sampling distribution g:
p(x \mid z_l) = \frac{p(z_l \mid x) \, p(x)}{p(z_l)}

Importance weights w:
\frac{f}{g} = \frac{p(x \mid z_1, z_2, \ldots, z_n)}{p(x \mid z_l)} = \frac{p(z_l) \prod_{k \neq l} p(z_k \mid x)}{p(z_1, z_2, \ldots, z_n)}
Importance Sampling with Resampling
(figure: weighted samples, then the sample set after resampling)
Resampling
• Given: set S of weighted samples.
• Wanted: a random sample, where the probability of drawing x_i is given by w_i.
• Typically done n times with replacement to generate the new sample set S'.
Resampling
(figure: roulette wheel with slots w_1, \ldots, w_n)
• Roulette wheel: binary search, O(n \log n)
• Stochastic universal sampling (systematic resampling):
  • Linear time complexity
  • Easy to implement, low variance
Resampling Algorithm

1. Algorithm systematic_resampling(S, n):
2.   S' = \emptyset, \quad c_1 = w^1
3.   For i = 2 \ldots n                        (generate CDF)
4.     c_i = c_{i-1} + w^i
5.   u_1 \sim U(0, 1/n], \quad i = 1           (initialize threshold)
6.   For j = 1 \ldots n                        (draw samples)
7.     While (u_j > c_i)                       (skip until next threshold reached)
8.       i = i + 1
9.     S' = S' \cup \{\langle x^i, 1/n \rangle\}   (insert)
10.    u_{j+1} = u_j + 1/n                     (increment threshold)
11. Return S'

Also called stochastic universal sampling.
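A direct translation of the algorithm: one random offset in (0, 1/n], then n equally spaced thresholds walked through the weight CDF. The sample list and weights below are a hypothetical illustration:

```python
# Systematic resampling (stochastic universal sampling), linear time.
import random

def systematic_resampling(samples, weights):
    n = len(samples)
    # Generate the CDF of the weights
    cdf = []
    c = 0.0
    for w in weights:
        c += w
        cdf.append(c)
    u = random.uniform(0.0, 1.0 / n)   # initial threshold u_1 ~ U(0, 1/n]
    i = 0
    resampled = []
    for _ in range(n):
        while u > cdf[i]:              # skip until next threshold reached
            i += 1
        resampled.append(samples[i])   # insert with equal weight 1/n
        u += 1.0 / n                   # increment threshold
    return resampled

random.seed(0)
out = systematic_resampling(['a', 'b', 'c', 'd'], [0.1, 0.1, 0.7, 0.1])
```

Because the thresholds are equally spaced, a sample with weight 0.7 is guaranteed at least ⌊0.7·n⌋ copies; this is the low-variance property the slide refers to.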
Sample-based Localization (sonar)
Initial Distribution
After Incorporating Ten Ultrasound Scans
After Incorporating 65 Ultrasound Scans
Estimated Path
Using Ceiling Maps for Localization
Vision-based Localization
Sensor model P(z \mid x) with measurement prediction z = h(x)
Under a Light
Measurement z: P(z|x):
Next to a Light
Measurement z: P(z|x):
Elsewhere
Measurement z: P(z|x):
Global Localization Using Vision
Robots in Action: Albert
Limitations
• The approach described so far is able to
  • track the pose of a mobile robot and to
  • globally localize the robot.
•How can we deal with localization errors (i.e., the kidnapped robot problem)?
Approaches
• Randomly insert samples (the robot can be teleported at any point in time).
• Insert random samples in proportion to the average likelihood of the particles (a drop in the likelihood of the observations indicates that the robot has more likely been teleported).