
Page 1: 6DOF Vision-Aided Inertial Navigation

6DOF Vision-Aided Inertial Localization

Florian Shkurti

[email protected]

Mobile Robotics Lab

Page 2: 6DOF Vision-Aided Inertial Navigation

Overview

• Desired localization accuracy

• Available sensor inputs (inertial measurement unit, camera)

• Types of environments

• State propagation

• State augmentation

• Feature extraction and matching

• State update

• Some results

• Concluding remarks

Page 3: 6DOF Vision-Aided Inertial Navigation

Our goals

• Estimate the 3D position and 3D orientation (3D pose) of a moving robot in real time.

• Ideally, we’d like position error of at most 10% of the distance traveled.

• Unfortunately, this is hard to guarantee.

Page 4: 6DOF Vision-Aided Inertial Navigation

Visual input in the lab

Good

Not so good

Page 5: 6DOF Vision-Aided Inertial Navigation

Visual input in the pool

Good

Not so good

Page 6: 6DOF Vision-Aided Inertial Navigation

Visual input in the ocean

Good

Not so good

Page 7: 6DOF Vision-Aided Inertial Navigation

Typical SLAM approaches

• Try to estimate the robot’s 3D pose and the 3D positions of the observed landmarks

• Correlations between landmarks are also estimated.

• The size of the state becomes significantly big, very quickly:

$x = [\,x_R\ \ y_R\ \ z_R\ \ x_{L_1}\ \ y_{L_1}\ \ z_{L_1}\ \ \dots\ \ x_{L_N}\ \ y_{L_N}\ \ z_{L_N}\,]$

Each update step of EKF-SLAM takes $O(N^2)$, where N is the number of landmarks.

• Need to remove landmarks from the state.

Page 8: 6DOF Vision-Aided Inertial Navigation

Proposed approach

• Heavily based on work by A. Mourikis and S. Roumeliotis [1]

• Uses input from a single camera and an inertial measurement unit (IMU).

• Combines these inputs through an Extended Kalman Filter.

• Each update step takes $O(N)$, where N is the number of landmarks.

• [1] A. I. Mourikis and S. I. Roumeliotis, “A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation,” ICRA 2007.

Page 9: 6DOF Vision-Aided Inertial Navigation

Inertial Measurement Units

• Think of an IMU as a gyroscope and an accelerometer.

• If $[\,x\ \ y\ \ z\ \ \phi\ \ \theta\ \ \psi\,]$ is the robot’s pose, then the IMU gives us noisy measurements of the angular velocity $\omega = [\,\dot\phi\ \ \dot\theta\ \ \dot\psi\,]$ and the linear acceleration $a = [\,\ddot x\ \ \ddot y\ \ \ddot z\,]$.

• Let $a$ be the true linear acceleration and $\omega$ the true angular velocity. IMU measurements are typically modeled as:

$\omega_m = \omega + b_g + n_g, \qquad n_g \sim N(0, \sigma_g) \ \text{ and } \ b_g \sim N(0, \sigma_{b_g})$

$a_m = c\,a + b_a + n_a, \qquad n_a \sim N(0, \sigma_a) \ \text{ and } \ b_a \sim N(0, \sigma_{b_a})$

Page 10: 6DOF Vision-Aided Inertial Navigation

IMU noise modeling

• The IMU measurement noise is typically assumed to be fixed: $n_a \sim N(0, \sigma_a)$ and $n_g \sim N(0, \sigma_g)$ are estimated offline while keeping the robot still.

• The accelerometer bias $b_a \sim N(0, \sigma_{b_a})$ is initialized as the average offset from $[\,0\ \ 0\ \ {-g}\,]$ when the robot is left still with zero pitch and roll. The noise parameters of $b_a$ are estimated offline; $b_a$ itself is estimated online.

• Similarly for the gyroscope bias.

• Note: in the proposed algorithm we do not estimate the accelerometer scaling factor. We fix it to $c = 1$.
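
As a concrete illustration, here is a minimal Python sketch of this measurement model. The noise magnitudes and the function name are illustrative assumptions, not values from the talk:

```python
import numpy as np

SIGMA_G, SIGMA_A = 0.01, 0.05     # white-noise std devs (made-up values)
SIGMA_BG, SIGMA_BA = 1e-4, 1e-3   # bias random-walk std devs (made-up)

def simulate_imu(omega_true, a_true, b_g, b_a, dt, c=1.0):
    """Return one noisy gyro/accel reading and the drifted biases."""
    omega_m = omega_true + b_g + np.random.normal(0.0, SIGMA_G, 3)
    a_m = c * a_true + b_a + np.random.normal(0.0, SIGMA_A, 3)
    # The biases drift slowly between readings (random walk).
    b_g = b_g + np.random.normal(0.0, SIGMA_BG, 3) * np.sqrt(dt)
    b_a = b_a + np.random.normal(0.0, SIGMA_BA, 3) * np.sqrt(dt)
    return omega_m, a_m, b_g, b_a
```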

Page 11: 6DOF Vision-Aided Inertial Navigation

IMU integration drift

• “Why don’t we just integrate the IMU measurements to get a pose estimate?”

• Because errors will also be integrated.
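
To make the drift concrete, here is a small simulation sketch (the sampling rate and noise level are assumptions) that double-integrates pure accelerometer noise for a stationary robot; the position error grows to the order of a meter within a minute:

```python
import numpy as np

np.random.seed(0)
dt, steps = 0.01, 6000        # 60 s of IMU data at an assumed 100 Hz
sigma_a = 0.05                # accelerometer noise std dev (assumed)
v = np.zeros(3)
p = np.zeros(3)
for _ in range(steps):
    a_m = np.random.normal(0.0, sigma_a, 3)  # true acceleration is zero
    v += a_m * dt                            # first integration: velocity
    p += v * dt                              # second integration: position
print("position error after 60 s: %.2f m" % np.linalg.norm(p))
```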

Page 12: 6DOF Vision-Aided Inertial Navigation

Back to the algorithm

• Main idea: assuming feature-rich scenes, we can use visual motion estimation to correct the IMU drift.

Match features

Propagate 3D pose by integration

Update the entire state vector and the covariance

Augment the state vector and the covariance

Page 13: 6DOF Vision-Aided Inertial Navigation

The state vector

$x = [\,{}^I_G q^T\ \ b_g^T\ \ {}^G v_I^T\ \ b_a^T\ \ {}^G p_I^T\ \ \,{}^{C_1}_G q^T\ \ {}^G p_{C_1}^T\ \ \cdots\ \ {}^{C_N}_G q^T\ \ {}^G p_{C_N}^T\,]^T$

(IMU state, followed by the 1st through Nth stored camera frames)

Page 14: 6DOF Vision-Aided Inertial Navigation

The state vector

• ${}^I_G q$ is a quaternion that represents the rotation from the global frame G to the current IMU frame, with $q = [\,\sin(\theta/2)\,u^T\ \ \cos(\theta/2)\,]^T$ for a rotation by angle $\theta$ about the unit axis $u$.

• ${}^G p_I$ is the origin of the IMU frame in global coordinates.

• ${}^G v_I$ is the velocity of the IMU frame in global coordinates.

• ${}^{C_i}_G q$ is the rotation from the global frame to the ith camera frame.

• ${}^G p_{C_i}$ is the origin of the ith camera frame in global coordinates.
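
For illustration, here is a sketch of one possible in-memory layout of this state vector; the sizes follow the slides, while the helper function and variable names are hypothetical:

```python
import numpy as np

IMU_SIZE = 16   # quaternion (4) + b_g (3) + G_v_I (3) + b_a (3) + G_p_I (3)
CAM_SIZE = 7    # quaternion (4) + G_p_Ci (3) per stored camera frame

def camera_slice(i):
    """Index range of the i-th stored camera frame (0-based) inside x."""
    start = IMU_SIZE + i * CAM_SIZE
    return slice(start, start + CAM_SIZE)

N = 3                                  # e.g. three stored camera frames
x = np.zeros(IMU_SIZE + N * CAM_SIZE)
x[3] = 1.0                             # identity attitude (scalar-last
                                       # quaternion, as defined above)
cam0 = x[camera_slice(0)]              # view of [C1_G q, G_p_C1]
```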

Page 15: 6DOF Vision-Aided Inertial Navigation

The covariance matrix

$P = \mathrm{cov}(\tilde x) = \mathrm{cov}(x - \hat x) = \begin{bmatrix} P_{II} & P_{IC} \\ P_{IC}^T & P_{CC} \end{bmatrix}$

• At $t = 0$: $P = P_{II}$ and $\hat x = \hat x_{IMU}$, because no images have been recorded yet.

Page 16: 6DOF Vision-Aided Inertial Navigation

EKF Propagation

• Is done every time we receive an IMU measurement.

Propagate 3D pose by integration

Page 17: 6DOF Vision-Aided Inertial Navigation

Propagation (state)

• Using the current bias estimates, $\hat\omega = \omega_m - \hat b_g(t_0)$ and $\hat a = a_m - \hat b_a(t_0)$. Over an IMU interval $[t_0, t_1]$ with $dt = t_1 - t_0$:

${}^I_G q(t_1) = \begin{bmatrix} \sin\!\left(\frac{|\hat\omega|\,dt}{2}\right)\frac{\hat\omega}{|\hat\omega|} \\ \cos\!\left(\frac{|\hat\omega|\,dt}{2}\right) \end{bmatrix} \otimes {}^I_G q(t_0)$

${}^G p_I(t_1) = {}^G p_I(t_0) + {}^G v_I(t_0)\,dt$

${}^G v_I(t_1) = {}^G v_I(t_0) + \left({}^G_I R(t_0)\,\hat a + {}^G g\right) dt, \qquad {}^G_I R = R({}^I_G \hat q)^T$

$b_g(t_1) = b_g(t_0), \qquad b_a(t_1) = b_a(t_0)$

• Note: the camera frames of the state are not propagated.
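
A rough Python transcription of these propagation equations (a sketch, not the talk's implementation; for readability it tracks the IMU-to-global rotation as a scipy Rotation object rather than the quaternion ${}^I_G q$, and the function name is mine):

```python
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY = np.array([0.0, 0.0, -9.81])  # global gravity, assumed z-up

def propagate_imu(R_GI, p, v, b_g, b_a, omega_m, a_m, dt):
    """One Euler step of the IMU state propagation.

    R_GI is the IMU-to-global rotation (the inverse of I_G q above).
    Biases are left unchanged, as on the slide.
    """
    w_hat = omega_m - b_g                    # bias-corrected gyro reading
    a_hat = a_m - b_a                        # bias-corrected accelerometer
    R_new = R_GI * Rotation.from_rotvec(w_hat * dt)  # body-frame rotation
    p_new = p + v * dt
    v_new = v + (R_GI.apply(a_hat) + GRAVITY) * dt
    return R_new, p_new, v_new
```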

Page 18: 6DOF Vision-Aided Inertial Navigation

Propagation (covariance)

• How do we propagate $P = \begin{bmatrix} P_{II} & P_{IC} \\ P_{IC}^T & P_{CC} \end{bmatrix}$?

• Since we didn’t modify the camera frames, $P_{CC}$ stays the same.

• We can show that the IMU propagation errors evolve according to:

$\frac{d}{dt}\tilde x_{IMU}(t) = F\,\tilde x_{IMU}(t) + G\,[\,n_g^T\ \ n_{b_g}^T\ \ n_a^T\ \ n_{b_a}^T\,]^T$

where $F$ depends on $\hat\omega$, $\hat a$, ${}^I_G\hat q$, and $G$ depends on ${}^I_G\hat q$.

• We can get the propagated covariance by integration (see the optional slide at the end).

Page 19: 6DOF Vision-Aided Inertial Navigation

State augmentation

• Is done every time an image is recorded.

Augment the state vector and the covariance

Page 20: 6DOF Vision-Aided Inertial Navigation

State augmentation

• The new camera frame’s pose (the transformation between camera and global coordinates) is computed from the current IMU pose and appended to the state:

$x = [\,{}^I_G q^T\ \ b_g^T\ \ {}^G v_I^T\ \ b_a^T\ \ {}^G p_I^T\,]^T \ \longrightarrow\ x = [\,{}^I_G q^T\ \ b_g^T\ \ {}^G v_I^T\ \ b_a^T\ \ {}^G p_I^T\ \ {}^{C_1}_G q^T\ \ {}^G p_{C_1}^T\,]^T$

• The covariance is also augmented, as in the sketch below.
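
The slides leave the covariance augmentation implicit; one common way to do it, following the MSCKF paper [1], is sketched here. J is the Jacobian of the new camera pose's error with respect to the existing error state; its exact form is given in [1], and the function signature is an assumption:

```python
import numpy as np

def augment(x, P, cam_pose, J):
    """Append a camera pose [Ci_G q, G_p_Ci] (7 numbers) to the state and
    grow the covariance consistently. Note the error state has only 6 dof
    per pose (quaternion errors are 3-dof), so J has shape (6, dim(P)).
    """
    x_aug = np.concatenate([x, cam_pose])
    P_aug = np.block([[P,     P @ J.T],
                      [J @ P, J @ P @ J.T]])
    return x_aug, P_aug
```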

Page 21: 6DOF Vision-Aided Inertial Navigation

Feature matching

• Is done between every pair of consecutive images.

• SURF-64 features are matched using the Fast Library for Approximate Nearest Neighbors (FLANN), by Muja & Lowe at UBC.

• Approximately 1000 features per frame.

• Average feature tracking length is 4 frames.

Match features
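
A sketch of what this matching pipeline might look like with OpenCV (assuming an opencv-contrib build that includes SURF; the parameter values are illustrative, not the ones used in the talk):

```python
import cv2

# extended=False gives the 64-dimensional SURF descriptor (SURF-64).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=False)

def match_frames(img_prev, img_curr, ratio=0.7):
    kp1, des1 = surf.detectAndCompute(img_prev, None)
    kp2, des2 = surf.detectAndCompute(img_curr, None)
    # FLANN with KD-trees, the index type FLANN uses for float descriptors.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4),
                                  dict(checks=32))
    matches = flann.knnMatch(des1, des2, k=2)
    # Lowe's ratio test as a first outlier filter.
    return [m for m, n in matches if m.distance < ratio * n.distance]
```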

Page 22: 6DOF Vision-Aided Inertial Navigation

Feature matching (in the lab)

Page 23: 6DOF Vision-Aided Inertial Navigation

Feature matching (in the pool)

Page 24: 6DOF Vision-Aided Inertial Navigation

Feature matching (in the ocean)

Page 25: 6DOF Vision-Aided Inertial Navigation

Feature matching

• Not very reliable, so outlier detection is necessary.

• If done carefully, it allows us to estimate the 3D position of a feature:

Match features

$z_i = \frac{1}{{}^{C_i}Z_f}\begin{bmatrix} {}^{C_i}X_f \\ {}^{C_i}Y_f \end{bmatrix} + n_i, \qquad i = 1, \dots, 5$

Page 26: 6DOF Vision-Aided Inertial Navigation

Structure Estimation

• We are searching for the 3D position of the fish in global coordinates, ${}^G p_f = [\,{}^G X_f\ \ {}^G Y_f\ \ {}^G Z_f\,]^T$.

• If we work in global coordinates we have 3 unknowns and 5 measurements:

$z_i = \frac{1}{{}^{C_i}Z_f}\begin{bmatrix} {}^{C_i}X_f \\ {}^{C_i}Y_f \end{bmatrix} + n_i, \qquad i = 1, \dots, 5$

We can solve this nonlinear problem using iterative minimization methods (e.g. Levenberg-Marquardt), as in the sketch below.
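
A sketch of this minimization using SciPy's Levenberg-Marquardt solver; the function name, argument layout, and initial-guess strategy are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def triangulate(z_list, R_list, p_list, p0):
    """Estimate the feature's global position G_p_f by Levenberg-Marquardt.

    z_list: the five 2-vector measurements z_i
    R_list, p_list: estimated global-to-camera rotations R(Ci_G q) and
                    camera origins G_p_Ci for the frames that saw the feature
    p0: an initial guess (e.g. from two-view triangulation)
    """
    def residuals(p_f):
        r = []
        for z, R, p_c in zip(z_list, R_list, p_list):
            X, Y, Z = R @ (p_f - p_c)        # feature in camera coordinates
            r.extend([z[0] - X / Z, z[1] - Y / Z])
        return np.array(r)
    return least_squares(residuals, p0, method="lm").x
```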

Page 27: 6DOF Vision-Aided Inertial Navigation

EKF Update

• Is done every time a feature stops being tracked.

Update the entire state vector and the covariance

Page 28: 6DOF Vision-Aided Inertial Navigation

EKF Update (the residual)

• The only source of correction we have is the camera.

• What we measured:

$z_i = \frac{1}{{}^{C_i}Z_f}\begin{bmatrix} {}^{C_i}X_f \\ {}^{C_i}Y_f \end{bmatrix} + n_i, \qquad i = 1, \dots, 5$

• What we expected to measure, after we estimated the 3D position of the fish:

$\hat z_i = \frac{1}{{}^{C_i}\hat Z_f}\begin{bmatrix} {}^{C_i}\hat X_f \\ {}^{C_i}\hat Y_f \end{bmatrix}$

Page 29: 6DOF Vision-Aided Inertial Navigation

EKF Update (the residual)

• Each residual $r_i = z_i - \hat z_i$ is a nonlinear function of $({}^{C_i}_G q,\ {}^G p_{C_i},\ {}^G p_f)$, where

$\hat z_i = \frac{1}{{}^{C_i}\hat Z_f}\begin{bmatrix} {}^{C_i}\hat X_f \\ {}^{C_i}\hat Y_f \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} {}^{C_i}\hat X_f \\ {}^{C_i}\hat Y_f \\ {}^{C_i}\hat Z_f \end{bmatrix} = R({}^{C_i}_G \hat q)\,\left({}^G \hat p_f - {}^G \hat p_{C_i}\right)$
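
For concreteness, a sketch of computing one expected measurement and residual from the formula above (names are mine):

```python
import numpy as np

def residual(z_i, R_CiG, p_Ci, p_f_hat):
    """r_i = z_i - z_hat_i for one stored camera frame.

    R_CiG:   estimated rotation R(Ci_G q_hat), global to camera i
    p_Ci:    estimated camera origin G_p_Ci in global coordinates
    p_f_hat: estimated feature position G_p_f
    """
    X, Y, Z = R_CiG @ (p_f_hat - p_Ci)   # feature in camera i's coordinates
    z_hat = np.array([X / Z, Y / Z])     # expected normalized image point
    return z_i - z_hat
```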

Page 30: 6DOF Vision-Aided Inertial Navigation

EKF Update

• We linearize the residual for each frame: $r_i \simeq H_i \tilde x + n_i$

• We stack the residuals for all camera frames and for all features into one vector: $r \simeq H \tilde x + n$

• We apply the usual update equations of the Extended Kalman Filter:

$K = P_{t+1|t}\,H^T \left(H\,P_{t+1|t}\,H^T + \mathrm{cov}(n)\right)^{-1}$

$\Delta x = K\,r$

$x_{t+1|t+1} = x_{t+1|t} + \Delta x$

$P_{t+1|t+1} = (I - K H)\,P_{t+1|t}$
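
These equations transcribe directly into code; here is a dense sketch (a real implementation exploits the structure of H to stay O(N), and applies the quaternion corrections multiplicatively):

```python
import numpy as np

def ekf_update(x, P, r, H, R_n):
    """Standard EKF update with stacked residual r, Jacobian H, and
    measurement covariance R_n = cov(n)."""
    S = H @ P @ H.T + R_n                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ r                         # corrected state
    P_new = (np.eye(P.shape[0]) - K @ H) @ P  # corrected covariance
    return x_new, P_new
```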

Page 31: 6DOF Vision-Aided Inertial Navigation

Lab experiment

Page 32: 6DOF Vision-Aided Inertial Navigation

Estimated trajectory

Page 33: 6DOF Vision-Aided Inertial Navigation

Initial pose

Page 34: 6DOF Vision-Aided Inertial Navigation

Estimated final pose

Page 35: 6DOF Vision-Aided Inertial Navigation

Orientation estimates

[Plots of the roll, pitch, and yaw estimates. Approximate final errors: $\tilde r \approx 0°$, $\tilde p \approx 2°$, $\tilde y \approx 5°$ (final yaw $\approx 70°$).]

Page 36: 6DOF Vision-Aided Inertial Navigation

Velocity and position estimates

[Plots of the velocity and position estimates. Approximate final values and errors: $x \approx 2.2\,\mathrm{m}$ with $\tilde x \approx 0.3\,\mathrm{m}$; $y \approx 2\,\mathrm{m}$ with $\tilde y \approx 1\,\mathrm{m}$; $z \approx 0\,\mathrm{m}$ with $\tilde z \approx 0.3\,\mathrm{m}$.]

Page 37: 6DOF Vision-Aided Inertial Navigation

Concluding remarks

• The algorithm is very sensitive to the visual motion estimation.

• Current work:
  • Figure out how to better estimate the 3D locations of features, to improve the position and velocity estimates.
  • Implement the algorithm so that it runs in real time.

• Future work:
  • Integrate it with the robot controller, to enable the robot to perform requested trajectories.
  • Examine whether bundle adjustment methods will make the estimates more robust.

Page 38: 6DOF Vision-Aided Inertial Navigation

Thank you

• Questions, feedback, and criticisms would be appreciated!

Page 39: 6DOF Vision-Aided Inertial Navigation

Optional: Propagation (covariance)

• From $\frac{d}{dt}\tilde x_{IMU}(t) = F\,\tilde x_{IMU}(t) + G\,[\,n_g^T\ \ n_{b_g}^T\ \ n_a^T\ \ n_{b_a}^T\,]^T$ we get

$\frac{d}{dt} P_{II}(t) = F\,P_{II}(t) + P_{II}(t)\,F^T + G\,\mathrm{cov}\!\left([\,n_g^T\ \ n_{b_g}^T\ \ n_a^T\ \ n_{b_a}^T\,]^T\right) G^T$

• We can numerically integrate $\frac{d}{dt}P_{II}(t)$ over the propagation interval to obtain $P_{II}$.

• And then compute the IMU-camera covariance:

$P_{IC}(t+1|t) = \mathrm{cov}\!\left(\tilde x_{IMU}(t+1),\ \tilde x_{CAM}(t)\right) = e^{\int F\,dt}\,P_{IC}(t|t)$
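
A sketch of these two steps with simple Euler integration and a first-order approximation of $e^{\int F\,dt}$; the substep count and the first-order transition matrix are assumptions, not the talk's choices:

```python
import numpy as np

def propagate_P_II(P_II, F, G, Q, dt, substeps=10):
    """Euler-integrate dP_II/dt = F P_II + P_II F^T + G Q G^T over dt,
    where Q = cov([n_g, n_bg, n_a, n_ba])."""
    h = dt / substeps
    for _ in range(substeps):
        P_II = P_II + (F @ P_II + P_II @ F.T + G @ Q @ G.T) * h
    return P_II

def propagate_P_IC(P_IC, F, dt):
    """Map the IMU-camera cross-covariance by the state transition
    matrix, here approximated to first order: exp(F dt) ~ I + F dt."""
    Phi = np.eye(F.shape[0]) + F * dt
    return Phi @ P_IC
```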