Unit 4: Localization, Part 2
Introduction to Markov Localization

Computer Science 6912 (COMP 6912)
Department of Computer Science, Memorial University of Newfoundland

July 8, 2016




1 Introduction to Markov Localization

2 Notation

3 Bayes Filter

4 Example: A Pizza-Turning Robot

5 Derivation of Bayes Filter



Introduction to Markov Localization

In this section we introduce the Bayes filter, which is the basis of most robot localization algorithms.

We will refer to this style of localization as Markov localization, because the Bayes filter relies on the Markov assumption (to be described).

Three different Markov localization algorithms will be presented:

Grid localization
Monte Carlo localization
Kalman filter localization




Different Localization Problems

Local localization (a.k.a. position tracking): the initial pose of the robot is known.

Global localization: the initial pose of the robot is unknown; more difficult than local localization; requires a multiple-hypothesis belief representation.

The kidnapped robot problem: a particular form of global localization where the robot is transported from its current position to a new position, without telling the robot that this transportation has taken place. A robot which solves the kidnapped robot problem will be able to determine its new position. A robot which cannot solve the problem will still believe it is at the old position.

A robot that can solve the kidnapped robot problem is able to recover from errors much more readily than one that cannot.




Notation

Note: The notation will now change to that of [Thrun et al., 2005]

x_t  The robot's state at time t. This is a vector giving everything we need to know about the robot's state at time t. It will often just give the robot's pose, x_t = [x, y, θ]^T, but sometimes additional information about the world, such as the positions of landmarks, will be added to x_t.

u_t  Control input at time t. This can either be a measured velocity, or a proprioceptive (e.g. wheel encoder) measurement of the robot's movement from time t − 1 to time t.

z_t  Sensor input at time t.
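To make the roles of x_t and u_t concrete, here is a minimal sketch in Python. The unicycle-style velocity update and all the numbers are illustrative assumptions, not part of the slides; the point is only that x_t is a pose vector and u_t (here a measured linear and angular velocity) carries the robot from x_{t−1} to x_t.

```python
import numpy as np

dt = 0.1                                 # hypothetical timestep (s)
x_prev = np.array([0.0, 0.0, 0.0])       # x_{t-1} = [x, y, theta]^T
u_t = np.array([1.0, np.pi / 2])         # (v, omega) measured from t-1 to t

# Apply the control to the previous state (standard unicycle model).
x, y, theta = x_prev
v, omega = u_t
x_t = np.array([x + v * np.cos(theta) * dt,
                y + v * np.sin(theta) * dt,
                theta + omega * dt])
print(x_t)                               # pose after one control step
```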




We assume the robot is in some state x_{t−1} (which happens to be unknown). It then executes some action u_t, which moves it into state x_t. The exact state is uncertain, so the robot reads its sensors to obtain z_t. The task of the localization algorithm is to estimate the probability distribution of x_t, given u_t, z_t, and the distribution of x_{t−1}. The figure below illustrates this development.

[Figure 2.2: The dynamic Bayes network that characterizes the evolution of controls, states, and measurements. Nodes: states x_{t−1}, x_t, x_{t+1}; controls u_{t−1}, u_t, u_{t+1}; measurements z_{t−1}, z_t, z_{t+1}.]
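This generative process can be sketched in a few lines. The scenario below (a robot on a line, a range sensor reading the distance to a wall, the particular noise levels) is a hypothetical example of ours, chosen only to show the causal chain of Figure 2.2: each u_t moves the hidden state x_{t−1} to x_t, and each z_t is then generated from x_t. A localizer would observe only the u and z values, never x.

```python
import numpy as np

rng = np.random.default_rng(42)

WALL = 10.0                   # position of a wall (illustrative)
x = 0.0                       # x_0: hidden true state, known only to the simulator
controls = [1.0, 1.0, 1.0]    # u_1, u_2, u_3: "move 1 m forward"

for t, u in enumerate(controls, start=1):
    # State transition: x_t drawn from p(x_t | u_t, x_{t-1}) (motion noise).
    x = x + u + rng.normal(0.0, 0.1)
    # Measurement: z_t drawn from p(z_t | x_t) (sensor noise).
    z = (WALL - x) + rng.normal(0.0, 0.2)
    print(f"t={t}: u_t={u}, z_t={z:.2f}")
```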




The robot's belief that it is in some state x_t will be denoted bel(x_t),

bel(x_t) = p(x_t | z_{1:t}, u_{1:t})

where the subscript 1:t indicates all current and past measurements or odometry readings.

If we know the robot's start position, then bel(x_0) would be initialized to 1 at the true initial state and 0 everywhere else (minimum uncertainty). If the start position is totally unknown, then bel(x_0) would be set to a uniform distribution (maximum uncertainty).

Assuming we begin with the robot's previous belief bel(x_{t−1}), the new belief bel(x_t) will be obtained through the Bayes filter in a two-stage process. In the first stage the new odometry reading u_t is incorporated. This gives us the following predicted belief, written with an overbar:

\overline{bel}(x_t) = p(x_t | z_{1:t−1}, u_{1:t})
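One full cycle of this two-stage process can be sketched on a discrete state space. The scenario below is a minimal hypothetical example of ours (a 1-D corridor of 10 cells with doors at cells 2 and 7; the motion and sensor probabilities are invented for illustration): the prediction stage spreads the belief according to the motion model p(x_t | u_t, x_{t−1}), and the correction stage reweights the predicted belief by the measurement likelihood p(z_t | x_t) and normalizes.

```python
import numpy as np

n = 10
bel = np.full(n, 1.0 / n)               # uniform prior: start pose unknown

# Stage 1, prediction: u_t = "move right one cell", but the robot may
# stay put or overshoot (hypothetical motion model).
def predict(bel):
    bel_bar = np.zeros(n)
    for x_prev, p_prev in enumerate(bel):
        for dx, p_move in [(0, 0.1), (1, 0.8), (2, 0.1)]:
            bel_bar[(x_prev + dx) % n] += p_move * p_prev
    return bel_bar

# Stage 2, correction: a door sensor with hypothetical hit/miss rates.
doors = {2, 7}
def correct(bel_bar, z_door):
    likelihood = np.array([0.8 if (x in doors) == z_door else 0.2
                           for x in range(n)])
    post = likelihood * bel_bar
    return post / post.sum()            # normalization

bel_bar = predict(bel)                  # incorporate u_t
bel = correct(bel_bar, z_door=True)     # incorporate z_t ("I see a door")
print(bel.round(3))                     # mass concentrates on the door cells
```

After one cycle the belief is no longer uniform: the two door cells each hold probability 0.25 and the remaining cells 0.0625, reflecting the door measurement.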



Page 40: av/courses/6912-s16/manual_uploads/localize2_in… · Introduction to Markov Localization In this section we introduce Bayes lter which is the basis of most robot localization algorithms

The Markov Assumption: Knowing the previous state vector x_{t-1} and the system’s current inputs u_t and z_t gives everything we need to know to compute x_t.

That is, we don’t need the past history of states x_{t-2}, x_{t-3}, ..., or the past history of movements or sensor inputs in order to probabilistically determine x_t. Another way of saying this is that the state vector x is considered complete.

For this assumption to hold, the state vector must include a complete description of all objects within the environment (including a complete map; a description of all people/robots/animals in the environment and their complete state vectors; etc.).

Thus, this assumption is generally untrue in practice, but we utilize it nonetheless as it renders the localization problem tractable.

COMP 6912 (MUN) Localization July 8, 2016 8 / 27


All of the main probabilistic localization algorithms can be derived from the Bayes filter:

Bayes Filter

Inputs: bel(x_{t-1}), u_t, z_t
For all x_t,

\overline{bel}(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}    (1)

bel(x_t) = η p(z_t | x_t) \overline{bel}(x_t)    (2)

Output: bel(x_t)

Equation (1) updates the belief to account for the robot’s motion. This generates the prediction \overline{bel}(x_t). Equation (2) achieves the measurement update. It incorporates the sensor values and combines this information with the prediction to update the belief.
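As a concrete sketch, one step of the Bayes filter for a finite state space, where the integral in equation (1) becomes a sum and η is computed by normalization. The two-state transition matrix and measurement likelihoods are made-up illustrative numbers:

```python
def bayes_filter_step(bel_prev, p_trans, p_meas):
    """One Bayes filter step over n discrete states.
    p_trans[j][i] = p(x_t = i | u_t, x_{t-1} = j)  (motion model)
    p_meas[i]     = p(z_t | x_t = i)               (measurement model)"""
    n = len(bel_prev)
    # Equation (1): prediction -- sum over all previous states
    bel_bar = [sum(p_trans[j][i] * bel_prev[j] for j in range(n))
               for i in range(n)]
    # Equation (2): weight by the measurement likelihood, then
    # normalize so the belief sums to 1 (this is the factor eta)
    unnorm = [p_meas[i] * bel_bar[i] for i in range(n)]
    eta = 1.0 / sum(unnorm)
    return [eta * u for u in unnorm]

# Two-state example with illustrative numbers
bel = bayes_filter_step([0.5, 0.5],
                        [[0.9, 0.1], [0.2, 0.8]],
                        [0.6, 0.3])
```

Grid localization is essentially this loop run over a discretized map; Monte Carlo and Kalman filter localization replace the array representation of bel with particles and a Gaussian, respectively.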


The prediction: equation (1)

\overline{bel}(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}

bel(x_{t-1}) gives the probability of each possible previous state x_{t-1}. The prediction is obtained by summing, over each possible previous state, the probability of that state times the probability that we have just made the trip from that previous state to the current state:

p(x_t | u_t, x_{t-1})

This is known as the motion model.

COMP 6912 (MUN) Localization July 8, 2016 10 / 27


Example: A Pizza-Turning Robot

Mrs. Vanelli’s has purchased a pizza-turning robot which employs a rather inaccurate motor. The robot is designed such that at any discrete time step t, only one of the pizza’s eight slices will be underneath the heat lamp.

[Figure: an eight-slice pizza, with one slice under the heat lamp]

COMP 6912 (MUN) Localization July 8, 2016 11 / 27


We wish to determine the probability that slice i of the 8-slice pizza is under the lamp.

Our belief representation will be discrete; it is an array of length 8 storing the probabilities of each slice being under the lamp.

At every time step the action, u_t, is the same: the pizza rotates by one noisy step. This fact, plus the discrete representation, allows equation (1) of the Bayes filter to be simplified. For all x_t,

\overline{bel}(x_t) = ∑ p(x_t | x_{t-1}) bel(x_{t-1})

This robot has no sensing capability. Thus, bel(x_t) = \overline{bel}(x_t). There is therefore no need to apply equation (2).

COMP 6912 (MUN) Localization July 8, 2016 12 / 27


\overline{bel}(x_t) = ∑ p(x_t | x_{t-1}) bel(x_{t-1})

What is the robot’s belief that slice i is under the heat lamp?

\overline{bel}(x_t = i) = ∑_{j=1}^{8} p(x_t = i | x_{t-1} = j) bel(x_{t-1} = j)

We need a motion model for p(x_t = i | x_{t-1} = j). Let’s say that the turning mechanism has a 0.5 probability of turning the pizza by one slice, a 0.25 probability of turning by two slices, and a 0.25 probability of not turning it at all. We assume all turns are in the positive direction.

p(x_t = i | x_{t-1} = j) =
    0.25  for i = j
    0.5   for i = j ⊕ 1
    0.25  for i = j ⊕ 2
    0     otherwise

where ⊕ indicates addition with wrap-around (e.g. 7 ⊕ 2 = 1).
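This motion model plugs directly into the simplified update. A minimal sketch, assuming the pizza starts with slice 1 under the lamp (a point-mass initial belief); note the code uses 0-based slice indices, so the slides' wrap-around ⊕ becomes an ordinary mod 8:

```python
def motion_model(i, j, n=8):
    """p(x_t = i | x_{t-1} = j) for the pizza turner (0-based slices)."""
    if i == j:
        return 0.25            # did not turn
    if i == (j + 1) % n:
        return 0.5             # turned by one slice
    if i == (j + 2) % n:
        return 0.25            # turned by two slices
    return 0.0

def predict(bel, n=8):
    """Equation (1) as a discrete sum: bel_bar(i) = sum_j p(i|j) bel(j)."""
    return [sum(motion_model(i, j, n) * bel[j] for j in range(n))
            for i in range(n)]

bel = [1.0] + [0.0] * 7        # slice 1 (index 0) under the lamp at t = 0
bel = predict(bel)             # t = 1: [0.25, 0.5, 0.25, 0, 0, 0, 0, 0]
```

Repeated calls to predict spread the probability mass around the pizza: with no sensing to sharpen it, the belief drifts toward uniform over the eight slices.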


Assume that bel(x_0) is 1 for x_0 = 1 and 0 otherwise. The application of equation (1) yields:

              i
    t       1        2       3       4       5        6       7       8
    0       1        0       0       0       0        0       0       0
    1       0.25     0.5     0.25    0       0        0       0       0
    2       0.0625   0.25    0.375   0.25    0.0625   0       0       0
    ...
    60      0.125    0.125   0.125   0.125   0.125    0.125   0.125   0.125

A peak in bel(x_t) at the most likely position persists for some time, but our uncertainty about which slice is under the lamp only grows with time, eventually leading to global uncertainty (i.e. a uniform distribution).

COMP 6912 (MUN) Localization July 8, 2016 14 / 27
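The prediction step above is easy to check numerically. The following is a minimal sketch (not course code): the motion model and initial belief are the ones given on the slides, with slices indexed 0–7 instead of 1–8.

```python
# Prediction step (equation (1)) for the pizza-turning robot.
# Slices are indexed 0..7 here; slice 1 on the slides is index 0.

def predict(bel):
    """Return the predicted belief: bel'(i) = sum_j p(i | j) * bel(j)."""
    n = len(bel)
    new_bel = [0.0] * n
    for j, b in enumerate(bel):
        new_bel[j % n]       += 0.25 * b   # pizza not turned
        new_bel[(j + 1) % n] += 0.50 * b   # turned by one slice
        new_bel[(j + 2) % n] += 0.25 * b   # turned by two slices
    return new_bel

bel = [1.0] + [0.0] * 7   # bel(x0): lamp known to be over slice 1
for _ in range(60):
    bel = predict(bel)
# bel is now very close to the uniform distribution [0.125] * 8
```

One step from the initial belief reproduces the t = 1 row of the table, and after 60 steps the belief is effectively uniform, illustrating the unbounded growth of uncertainty.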

Back to the Bayes filter...

The measurement update: equation (2)

    bel(x_t) = η p(z_t | x_t) \overline{bel}(x_t)        (2)

Recall that \overline{bel}(x_t) is our new belief in x_t after incorporating u_t. We remain confident in x_t only if our sensory observation z_t is consistent with x_t. This confidence is expressed by the measurement model,

    p(z_t | x_t)

η is a normalizing factor applied after all p(z_t | x_t) \overline{bel}(x_t) have been computed:

    η = 1 / ∫ p(z_t | x_t) \overline{bel}(x_t) dx_t

Typically, the measurement model requires a map of the environment so that we can determine how likely it was to observe z_t at position x_t.

COMP 6912 (MUN) Localization July 8, 2016 15 / 27
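In the discrete case the integral in η becomes a sum, and the whole measurement update reduces to an elementwise multiply followed by normalization. A minimal sketch (the function name and the example numbers are mine, not from the slides; `likelihood[i]` stands for p(z_t | x_t = i)):

```python
# Measurement update (equation (2)) in discrete form: multiply the predicted
# belief by the measurement likelihood, then normalize so the result sums to 1.

def measurement_update(likelihood, bel_bar):
    unnormalized = [l * b for l, b in zip(likelihood, bel_bar)]
    eta = 1.0 / sum(unnormalized)      # discrete analogue of 1 / integral
    return [eta * u for u in unnormalized]

# Example with arbitrary numbers (not from the slides):
posterior = measurement_update([0.9, 0.1], [0.5, 0.5])   # -> [0.9, 0.1]
```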

Back to the pizza...

To correct the problem of ever-increasing uncertainty, a mushroom detector is installed. In order for this sensor to be useful in determining which pizza slice is under the lamp, a map of the pizza is required.

The mushroom map:

    slice i                  1     2     3     4     5     6     7     8
    Mushrooms?              Yes   Yes   Yes   Yes   No    No    No    No
    p(z_t = Yes | x_t = i)  0.9   0.9   0.9   0.9   0.1   0.1   0.1   0.1
    p(z_t = No  | x_t = i)  0.1   0.1   0.1   0.1   0.9   0.9   0.9   0.9

The bottom two rows give the measurement model: the mushroom detector is correct 90% of the time.

COMP 6912 (MUN) Localization July 8, 2016 16 / 27
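The map and measurement model above can be encoded as simple lookup tables. One possible encoding (the names are mine; the probabilities are those given on the slide):

```python
# The mushroom map for slices 1..8: which slices carry mushrooms.
mushrooms = {1: True, 2: True, 3: True, 4: True,
             5: False, 6: False, 7: False, 8: False}

def p_z_given_x(z, i):
    """p(z_t = z | x_t = i): the detector agrees with the map 90% of the time."""
    return 0.9 if (z == "Yes") == mushrooms[i] else 0.1
```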

Assume again that we know the heat lamp is initially on slice 1. The application of the prediction step yields:

    t    \overline{bel}(x_t)
    1    0.25    0.5    0.25    0    0    0    0    0

To apply the measurement update, we need to know z_t. Assume z_t = Yes. Since p(z_t = Yes | x_t = 1) = 0.9, we get

    t    almost bel(x_t)
    1    0.225    0.45    0.225    0    0    0    0    0

We then have to normalize this 'almost correct' belief:

    t    bel(x_t)
    1    0.25    0.5    0.25    0    0    0    0    0

Why did this measurement update fail to reduce uncertainty?

COMP 6912 (MUN) Localization July 8, 2016 17 / 27

We continue to update our belief; assume z2 = z3 = Yes,

t    bel̄(xt) = prediction, bel(xt) = belief after measurement update
1    bel(x1):   0.25    0.5     0.25    0       0       0       0       0
2    bel̄(x2):   0.0625  0.25    0.375   0.25    0.0625  0       0       0
     bel(x2):   0.0662  0.2647  0.3971  0.2647  0.0074  0       0       0
3    bel̄(x3):   0.0165  0.0993  0.2482  0.3309  0.2335  0.0699  0.0018  0
     bel(x3):   0.0227  0.1362  0.3405  0.4540  0.0356  0.0107  0.0003  0

Notice how the measurements now reduce uncertainty.

COMP 6912 (MUN) Localization July 8, 2016 18 / 27
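The tables above can be reproduced with a short script. This is a sketch, not from the slides themselves: the motion model (the lamp advances 0, 1, or 2 slices per turn with probabilities 0.25, 0.5, 0.25) and the sensor model (z = Yes with probability 0.9 over slices 1–4 and 0.1 over slices 5–8) are inferred from the numbers in the tables, not stated on this page.

```python
# Discrete Bayes filter for the pizza-turning example.
# Assumed models (consistent with the tables above):
#   motion: advance 0, 1, or 2 slices w.p. 0.25, 0.5, 0.25 (wrapping mod 8)
#   sensor: p(Yes | slice 1-4) = 0.9, p(Yes | slice 5-8) = 0.1

def predict(bel, motion=(0.25, 0.5, 0.25)):
    """Prediction step: convolve the belief with the motion model."""
    n = len(bel)
    return [sum(motion[d] * bel[(i - d) % n] for d in range(len(motion)))
            for i in range(n)]

def update(bel_bar, z_yes):
    """Measurement update: weight by the likelihood, then normalize."""
    lik = [0.9 if i < 4 else 0.1 for i in range(len(bel_bar))]
    if not z_yes:
        lik = [1.0 - p for p in lik]
    unnorm = [p * b for p, b in zip(lik, bel_bar)]
    eta = 1.0 / sum(unnorm)          # normalizing constant
    return [eta * u for u in unnorm]

bel = [1.0] + [0.0] * 7              # lamp known to start on slice 1
for t in (1, 2, 3):                  # z1 = z2 = z3 = Yes
    bel = update(predict(bel), z_yes=True)
    print(t, [round(b, 4) for b in bel])
```

Running the loop reproduces the rows of the table above (e.g. bel(x3) peaks at slice 4 with roughly 0.454).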

Derivation of Bayes Filter

We will derive Bayes filter below. First, we illustrate the probability of an event conditioned on multiple other events. Consider this Venn diagram,

[Venn diagram not reproduced]

The Law of Total Probability requires mutually exclusive and exhaustive events yi. Hence:

p(x) ≠ ∑_i p(x | y_i) p(y_i)

p(x | z) = ∑_i p(x | y_i, z) p(y_i | z)

Bayes Rule is always applicable,

p(x | y) = p(y | x) p(x) / p(y)

If an event is conditioned on multiple events, we can still apply Bayes rule as long as we take care which of these other events is expanded,

p(x | y, z) = p(y | x, z) p(x | z) / p(y | z)

COMP 6912 (MUN) Localization July 8, 2016 20 / 27
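The conditional form of Bayes rule can be checked numerically on any small joint distribution. The sketch below builds an arbitrary joint p(x, y, z) over three binary variables (the probability values are made up for the check) and verifies p(x|y, z) = p(y|x, z) p(x|z) / p(y|z) for every assignment.

```python
# Numeric sanity check of Bayes rule conditioned on a background event z.
import itertools

# An arbitrary joint p(x, y, z) over binary x, y, z; the values sum to 1.
vals = [0.10, 0.05, 0.15, 0.20, 0.08, 0.12, 0.18, 0.12]
joint = {xyz: p for xyz, p in zip(itertools.product((0, 1), repeat=3), vals)}

def marg(**fixed):
    """Marginal probability of the fixed variables, summing out the rest."""
    return sum(p for (x, y, z), p in joint.items()
               if all({'x': x, 'y': y, 'z': z}[k] == v for k, v in fixed.items()))

for x, y, z in itertools.product((0, 1), repeat=3):
    lhs = joint[x, y, z] / marg(y=y, z=z)        # p(x | y, z)
    p_y_xz = joint[x, y, z] / marg(x=x, z=z)     # p(y | x, z)
    p_x_z = marg(x=x, z=z) / marg(z=z)           # p(x | z)
    p_y_z = marg(y=y, z=z) / marg(z=z)           # p(y | z)
    assert abs(lhs - p_y_xz * p_x_z / p_y_z) < 1e-12
```

The identity holds for every joint distribution with nonzero marginals, so the loop passes regardless of the particular values chosen.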

Bayes filter takes as input the belief from the last time step,

bel(xt−1) = p(xt−1 | z1:t−1, u1:t−1)

It requires the motion and measurement models to be supplied,

Motion model: p(xt | ut, xt−1)

Measurement model: p(zt | xt)

As its output, it produces the current belief,

bel(xt) = p(xt | z1:t, u1:t)

COMP 6912 (MUN) Localization July 8, 2016 21 / 27
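These inputs and outputs suggest the shape of a single filter step over a finite state space. The sketch below is a hypothetical implementation, with the motion and measurement models passed in as functions; the names and signatures are illustrative, not from the slides.

```python
# One discrete Bayes filter step: prior belief in, posterior belief out.
# motion_model(x, u, x_prev) returns p(x | u, x_prev);
# measurement_model(z, x) returns p(z | x).

def bayes_filter_step(bel_prev, u_t, z_t, motion_model, measurement_model, states):
    # Prediction: bel_bar(x) = sum over x' of p(x | u_t, x') bel(x')
    bel_bar = {x: sum(motion_model(x, u_t, xp) * bel_prev[xp] for xp in states)
               for x in states}
    # Measurement update: bel(x) = eta * p(z_t | x) * bel_bar(x)
    unnorm = {x: measurement_model(z_t, x) * bel_bar[x] for x in states}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}
```

For a two-state toy model (identity motion, a sensor that reports the true state with probability 0.8), a uniform prior and observation z = 0 give a posterior of approximately {0: 0.8, 1: 0.2}.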

We begin with the current belief and work backwards to determine how to compute it,

bel(xt) = p(xt | z1:t, u1:t)

Apply Bayes rule,

p(xt | z1:t, u1:t) = p(zt | xt, z1:t−1, u1:t) p(xt | z1:t−1, u1:t) / p(zt | z1:t−1, u1:t)

Bayes rule is applied for all xt. The denominator above does not depend upon xt. Therefore, we do not need to evaluate it. We have the constraint that the probability over all xt must sum to 1. Therefore, we treat the denominator as a normalizing constant which can be determined after all of the numerators have been found. We call this constant η,

p(xt | z1:t, u1:t) = η p(zt | xt, z1:t−1, u1:t) p(xt | z1:t−1, u1:t)

COMP 6912 (MUN) Localization July 8, 2016 22 / 27
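The role of η can be seen concretely with the t = 1 numbers from the pizza example earlier in this unit: the unnormalized numerators are computed first, and η is whatever constant makes them sum to 1.

```python
# eta in action: unnormalized numerators p(zt | xt) * bel_bar(xt)
# from the t = 1 step of the pizza example (one value per slice).
numerators = [0.225, 0.45, 0.225, 0, 0, 0, 0, 0]

# eta is fixed by requiring the posterior to sum to 1, so it can be
# computed after all of the numerators are known.
eta = 1.0 / sum(numerators)
posterior = [eta * n for n in numerators]

print(eta, [round(p, 4) for p in posterior[:3]])
```

Here η = 1/0.9 ≈ 1.11, recovering the normalized belief (0.25, 0.5, 0.25) over the first three slices.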

p(xt | z1:t, u1:t) = η p(zt | xt, z1:t−1, u1:t) p(xt | z1:t−1, u1:t)

Consider the first factor,

p(zt | xt, z1:t−1, u1:t)

If xt is given, then we don't need to know anything about the past measurements or controls in order to compute the probability of zt (Markov assumption). Thus,

p(zt | xt, z1:t−1, u1:t) = p(zt | xt)

where p(zt | xt) is the measurement model.

Now consider the second factor. This is equal to our definition of the prediction,

bel̄(xt) = p(xt | z1:t−1, u1:t)

Therefore we have established equation (2) of Bayes filter,

bel(xt) = η p(zt | xt) bel̄(xt)

COMP 6912 (MUN) Localization July 8, 2016 23 / 27

Page 170: av/courses/6912-s16/manual_uploads/localize2_in… · Introduction to Markov Localization In this section we introduce Bayes lter which is the basis of most robot localization algorithms

p(xt |z1:t , u1:t) = ηp(zt |xt , z1:t−1, u1:t)p(xt |z1:t−1, u1:t)

Consider the first factor,

p(zt |xt , z1:t−1, u1:t)

If xt is given then we don’t need to know anything about the pastmeasurements or controls in order to compute the probability of zt(Markov assumption). Thus,

p(zt |xt , z1:t−1, u1:t) = p(zt |xt)

Where p(zt |xt) is the measurement model.

Now consider the second factor. This is equal to our definition of theprediction,

bel(xt) = p(xt |z1:t−1, u1:t)

Therefore we have established equation (2) of Bayes filter,

bel(xt) = ηp(zt |xt)bel(xt)

COMP 6912 (MUN) Localization July 8, 2016 23 / 27


But how do we obtain bel‾(x_t)? We repeat its definition:

bel‾(x_t) = p(x_t | z_{1:t−1}, u_{1:t})

The previous possible states x_{t−1} are assumed to be mutually exclusive and to represent all possibilities. Hence, we can apply the law of total probability:

p(x_t | z_{1:t−1}, u_{1:t}) = ∫ p(x_t | x_{t−1}, z_{1:t−1}, u_{1:t}) p(x_{t−1} | z_{1:t−1}, u_{1:t}) dx_{t−1}

Consider the first factor in the integral,

p(x_t | x_{t−1}, z_{1:t−1}, u_{1:t})

If x_{t−1} is given, knowledge of the previous measurements z_{1:t−1} and past controls u_{1:t−1} tells us nothing more about x_t (Markov). Hence,

p(x_t | x_{t−1}, z_{1:t−1}, u_{1:t}) = p(x_t | x_{t−1}, u_t)

This is the motion model.

COMP 6912 (MUN) Localization July 8, 2016 24 / 27
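In the discrete case, the integral above becomes a sum over all previous states and the motion model becomes a transition matrix. A minimal Python sketch, where the particular transition probabilities are illustrative assumptions:

```python
def prediction(bel_prev, motion_model):
    """The prediction step of Bayes filter on a discrete state grid.

    bel_prev[j]        -- bel(x_{t-1} = j)
    motion_model[i][j] -- p(x_t = i | x_{t-1} = j, u_t) for the current u_t
    Returns bel_bar[i] = sum_j p(x_t = i | x_{t-1} = j, u_t) * bel_prev[j],
    the discrete analogue of the integral over x_{t-1}.
    """
    n = len(motion_model)
    return [sum(motion_model[i][j] * bel_prev[j]
                for j in range(len(bel_prev)))
            for i in range(n)]

# A "move right" control that succeeds with probability 0.8 in a 3-cell world.
# Each column sums to 1: every previous state must end up somewhere.
P = [[0.2, 0.0, 0.0],
     [0.8, 0.2, 0.0],
     [0.0, 0.8, 1.0]]
bel_bar = prediction([1.0, 0.0, 0.0], P)   # robot known to start in cell 0
# bel_bar is approximately [0.2, 0.8, 0.0]
```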


p(x_t | z_{1:t−1}, u_{1:t}) = ∫ p(x_t | x_{t−1}, u_t) p(x_{t−1} | z_{1:t−1}, u_{1:t}) dx_{t−1}

Consider the second factor,

p(x_{t−1} | z_{1:t−1}, u_{1:t})

Knowledge of the most recent action u_t tells us nothing about x_{t−1} (unless we know that in certain previous states we would have taken certain actions; assume that we don't know this, meaning that actions are chosen randomly). Therefore, we can rewrite the factor as follows:

p(x_{t−1} | z_{1:t−1}, u_{1:t}) = p(x_{t−1} | z_{1:t−1}, u_{1:t−1})

We recognize this as bel(x_{t−1}), which is the input to Bayes filter. Therefore the integral above can be rewritten as follows:

p(x_t | z_{1:t−1}, u_{1:t}) = ∫ p(x_t | x_{t−1}, u_t) bel(x_{t−1}) dx_{t−1}

This is equation (1) of Bayes filter.

COMP 6912 (MUN) Localization July 8, 2016 25 / 27
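Equations (1) and (2) can now be chained into one complete Bayes filter iteration. The sketch below runs both steps on a small 1-D grid; the specific motion and sensor models are illustrative assumptions:

```python
def bayes_filter_step(bel_prev, motion_model, likelihood):
    """One full iteration of Bayes filter on a discrete state grid."""
    n = len(motion_model)
    # (1) prediction: bel_bar[i] = sum_j p(x_t=i | x_{t-1}=j, u_t) * bel_prev[j]
    bel_bar = [sum(motion_model[i][j] * bel_prev[j]
                   for j in range(len(bel_prev)))
               for i in range(n)]
    # (2) measurement update: bel[i] = eta * p(z_t | x_t=i) * bel_bar[i]
    unnorm = [likelihood[i] * bel_bar[i] for i in range(n)]
    eta = 1.0 / sum(unnorm)
    return [eta * u for u in unnorm]

# 3-cell corridor: the robot tries to move right, then senses a feature
# that is most likely to be observed from cell 2.
P = [[0.2, 0.0, 0.0],
     [0.8, 0.2, 0.0],
     [0.0, 0.8, 1.0]]
bel = [1/3, 1/3, 1/3]                       # uniform initial belief bel(x_0)
bel = bayes_filter_step(bel, P, [0.1, 0.1, 0.8])
# The belief now concentrates on cell 2.
```

Grid localization, covered next, is essentially this loop applied to a discretized map of the environment.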


The above analysis shows that Bayes filter correctly computes bel(x_t) if its input bel(x_{t−1}) is correct. If we assume that the initial belief bel(x_0) at time t = 0 was correct, then the correctness of Bayes filter follows by induction.

COMP 6912 (MUN) Localization July 8, 2016 26 / 27


References

Thrun, S., Burgard, W., and Fox, D. (2005). Probabilistic Robotics. MIT Press.

COMP 6912 (MUN) Localization July 8, 2016 27 / 27