Null-Space-based Marginalization: Analysis and Algorithm

Yulin Yang - [email protected]
James Maley - [email protected]
Guoquan Huang - [email protected]

Department of Mechanical Engineering, University of Delaware, Delaware, USA
Robot Perception and Navigation Group (RPNG)
Derivation - RPNG-2017-001
Last Updated - Oct. 15, 2017



Contents

1 Introduction
2 Marginalization
   2.1 Null-Space Marginalization
   2.2 Schur Complement
3 Analysis of Null-Space Marginalization
   3.1 Information Matrix After Null-Space Marginalization
   3.2 Information Matrix After Schur Complement
   3.3 Equivalence of Two Marginalizations
   3.4 Computational Complexity
4 Pose-and-Feature Marginalization
5 Analytical Null-Space
6 Experimental Results
   6.1 MSCKF Case
   6.2 Factor Graph Case
   6.3 Null-Space Comparison
7 Conclusions and Future Work
A Proof of Lemma 3.1
   A.0.1 Case I
   A.0.2 Case II
B Marginalization of Multi-Poses and Features
C Running Time Comparison
   C.1 Graph SLAM Optimization
   C.2 Marginalization Algorithm
      C.2.1 Schur Complement Marginalization
      C.2.2 Schur Complement Marginalization With Sparse Structure
      C.2.3 Null Space Marginalization With Householder QR
      C.2.4 Null Space Marginalization With Projection
      C.2.5 Null Space Marginalization With Givens Rotations
   C.3 Marginalization Runtime Results


Abstract

In SLAM, the size of the state vector tends to grow when exploring unknown environments, causing ever-increasing computational complexity. To reduce the computational cost, one needs to continuously marginalize part of the previous states (features and/or poses). This can be achieved by using either the null-space operation or Schur complement based marginalization. The former was originally developed particularly for visual-inertial odometry (VIO), while the latter is the standard approach to reduce the state vector in bundle adjustment (BA). In this paper, for the first time ever, we prove that under mild assumptions (i.i.d. white Gaussian noise model and same linearization points) these two techniques retain the same amount of information about the state, which is validated with real-world experiments. Moreover, based on this key insight, we derive analytically the left null-space expressions for multi-state constraint Kalman filter (MSCKF)-based VIO, which is verified through Monte-Carlo simulations.

1 Introduction

It is essential for robots to perform SLAM when navigating in unknown environments. However, a SLAM solution typically suffers from ever-increasing computational cost. To address this issue, the marginalization technique is often used. One common approach is to use the Schur complement to marginalize out part of the states while preserving all information about the remaining states [1, 2, 3, 4, 5, 6]. For example, in visual-inertial navigation systems (VINS) [1, 2, 3], visual point features are continuously marginalized by the Schur complement to prevent the state vector from growing too large while retaining relative camera motion constraints, which are then fused with (preintegrated) IMU measurements. In graph-based SLAM, the Schur complement is also employed to reduce the graph by marginalizing some of the graph nodes (features and/or poses) [4, 5, 6]. Alternatively, null-space based marginalization can be used, which was originally developed in the multi-state constraint Kalman filter (MSCKF) [7] particularly for visual-inertial odometry (VIO) and has recently been applied to different VINS [7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. The basic idea is to use the left null space of the measurement Jacobian to marginalize feature components from the state vector. Note that a similar idea has also been employed in graph-based VINS [17] and our recent work on acoustic underwater navigation [16].

While the aforementioned work has shown that null-space marginalization is efficient, in this work we move one step further and, for the first time, establish the link between the null-space operation and the Schur complement. Besides using null-space marginalization for features, we also adapt it to marginalize both poses and features. In particular, the main theoretical contributions of this work include:

• We analytically show that the null-space operation and the Schur complement operation for feature marginalization retain the same amount of information, under the mild assumptions of independent and identically distributed (i.i.d.) white Gaussian noise and the same linearization points. This motivates applying the null-space operation for feature marginalization in graph-based SLAM.

• We show that null-space-based marginalization is applicable to both poses and features under the white Gaussian noise assumption.

• We analytically derive well-structured null-space expressions for commonly used sensors such as stereo and RGBD cameras, which are shown to perform better than the numerically computed counterparts.

The rest of the paper is structured as follows: after briefly overviewing the null-space and Schur complement operations in the next section, we present our main result in Section 3, namely that the two marginalization techniques are equivalent under certain mild conditions. In Section 4 we apply the null-space operation to both pose and feature marginalization, and in Section 5 we derive the analytical null-space expressions for the MSCKF. Our analysis and algorithms are validated with simulations and experiments in Section 6. Section 7 concludes the paper and outlines future work.

2 Marginalization

In this section, we briefly explain the null-space marginalization and the Schur complement operation in the context of SLAM, which will form the basis for our ensuing analysis. In particular, the SLAM state of a robot exploring an unknown environment propagates according to the motion model:

\[
x_{k+1} = f_k(x_k, u_k) + w_k \qquad (1)
\]

where x_k represents the robot pose at time step k with k ∈ {1, ..., K}, u_k represents measurements from odometry, inertial or other motion sensors, and w_k is additive white Gaussian noise with covariance Q_k. The robot also observes features based on the measurement model:

\[
z_{kj} = h_k(x_k, x_{f_j}) + n_{kj} \qquad (2)
\]

where z_{kj} denotes the measurement of feature x_{f_j} (j ∈ {1, ..., N}) at time step k, and n_{kj} represents white Gaussian noise with covariance R_{kj}.

Based on (1) and (2), a SLAM solution estimates both the robot poses x_k and all observed features x_{f_j} (j = 1, ..., N). Clearly, if N is large (which is often the case for visual navigation), the estimation process of SLAM may become cost prohibitive. Thus, we will marginalize all N features.

2.1 Null-Space Marginalization

We first review the null-space marginalization that was introduced in the MSCKF [7]. Specifically, the state vector of the MSCKF is given by:

\[
x = \begin{bmatrix} x_I^\top & x_{C_1}^\top & x_{C_2}^\top & \cdots & x_{C_m}^\top \end{bmatrix}^\top \qquad (3)
\]

where x_I represents the current IMU state, and x_{C_i}, i ∈ {1, ..., m}, represents the cloned robot (sensor) poses at the m previous time steps when the sensor obtained measurements according to (2).

During the MSCKF update, we assume a feature x_{f_j} has been observed m_j (m_j ≤ m) times by the robot within the cloned state window. With all these measurements, we can compute the feature estimate x̂_{f_j}. By linearizing the nonlinear measurement model (2) about the current state estimate x̂ and feature estimate x̂_{f_j}, we can stack all the linearized measurements and multiply by the left null space U of the feature Jacobian H_{f_j} as follows:

\[
U^\top \underbrace{\begin{bmatrix} \tilde{z}_{1j} \\ \vdots \\ \tilde{z}_{ij} \\ \vdots \\ \tilde{z}_{m_j j} \end{bmatrix}}_{\tilde{z}^{(j)}}
\simeq
U^\top \underbrace{\begin{bmatrix} H_{1j} \\ \vdots \\ H_{ij} \\ \vdots \\ H_{m_j j} \end{bmatrix}}_{H_x^{(j)}} \tilde{x}
+ U^\top \underbrace{\begin{bmatrix} H_{f_{1j}} \\ \vdots \\ H_{f_{ij}} \\ \vdots \\ H_{f_{m_j j}} \end{bmatrix}}_{H_{f_j}} \tilde{x}_{f_j}
+ U^\top \underbrace{\begin{bmatrix} n_{1j} \\ \vdots \\ n_{ij} \\ \vdots \\ n_{m_j j} \end{bmatrix}}_{n^{(j)}}
\qquad (4)
\]

where i = 1, ..., m_j; z̃_{ij} = z_{ij} − h_i(x̂_{C_i}, x̂_{f_j}) denotes the measurement residual for feature x_{f_j} at the i-th camera pose within the window; and H_{ij} and H_{f_{ij}} denote the state and feature Jacobians of measurement z_{ij}. In [7], n_{ij} is white Gaussian noise with covariance R_{ij} = σ² I_{R_{ij}}, where I_{R_{ij}} denotes the identity matrix with the same dimension as R_{ij}. Since U^T H_{f_j} = 0, the feature x_{f_j} is marginalized from (4) by the null-space operation. Finally, stacking the measurements of all n features observed within the sliding window, we obtain:

\[
\tilde{z}_o = H_o \tilde{x} + n_o \qquad (5)
\]

where z̃_o, H_o and n_o are the stacked measurement residuals, state Jacobians and measurement noise from z̃^(j), H_x^(j) and n^(j) (j = 1, ..., n), respectively [see (4)]. Up to this point, we have essentially marginalized all the features by the null-space operation (i.e., null-space marginalization) and obtained the inferred measurement equation (5) involving only robot states, which is ready for the EKF update.
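As an illustration of the null-space operation above, the following sketch (using synthetic random Jacobians, so all names and dimensions are illustrative) projects stacked measurements onto the left null space of the feature Jacobian via a full QR factorization, eliminating the feature from the linearized system:

```python
import numpy as np

def nullspace_marginalize(Hx, Hf, r):
    """Project the linearized system r = Hx*x + Hf*xf + n onto the
    left null space of Hf, removing the feature xf (cf. Eq. (4))."""
    m, q = Hf.shape
    # Full QR: columns q..m-1 of Q span the left null space of Hf.
    Q, _ = np.linalg.qr(Hf, mode="complete")
    Un = Q[:, q:]                      # left null space, U_n^T Hf = 0
    return Un.T @ Hx, Un.T @ r         # reduced Jacobian and residual

# Toy example: 3 stereo observations (6 rows) of one 3D feature.
rng = np.random.default_rng(0)
Hx = rng.standard_normal((6, 12))      # Jacobian w.r.t. cloned poses
Hf = rng.standard_normal((6, 3))       # feature Jacobian (full column rank)
r = rng.standard_normal(6)
Ho, ro = nullspace_marginalize(Hx, Hf, r)
print(Ho.shape)                        # (3, 12): 6 - 3 rows remain
```

Note that the reduced system has 3 fewer rows, matching the dimension of the marginalized feature.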


2.2 Schur Complement

We optimally formulate the SLAM problem as maximum likelihood estimation (MLE). Assuming that all the measurements are independent, under the Gaussian noise assumption this MLE problem is equivalent to a weighted nonlinear least-squares problem [2, 18]:

\[
\min_{x, x_f} \; \sum_{k=1}^{K} \left\| x_{k+1} - f_k(x_k) \right\|^2_{Q_k^{-1}} + \sum_{k,j} \left\| z_{kj} - h_k(x_k, x_{f_j}) \right\|^2_{R_{kj}^{-1}} \qquad (6)
\]

where x = [x_1^T ... x_K^T]^T denotes all the robot poses and x_f = [x_{f_1}^T ... x_{f_N}^T]^T denotes all the landmarks. Iterative methods, such as Gauss-Newton or Levenberg-Marquardt, are often used to solve (6) for x and x_f. At this linearization point (x̂, x̂_f), the system information matrix Σ can be computed as:

\[
\Sigma = \sum_{k=1}^{K} F_k^\top Q_k^{-1} F_k + \sum_{k,j} H_{kj}^\top R_{kj}^{-1} H_{kj} =: \begin{bmatrix} \Sigma_{xx} & \Sigma_{xf} \\ \Sigma_{fx} & \Sigma_{ff} \end{bmatrix} \qquad (7)
\]

where F_k and H_{kj} are the Jacobians of the motion model (1) and measurement model (2) with respect to x_k and x_{f_j}, respectively, and Σ is partitioned according to the dimensions of x and x_f. Since we are primarily interested in the robot states, the information matrix Σ_x of the robot states can be obtained via the Schur complement by marginalizing the features x_f:

\[
\Sigma_x = \Sigma_{xx} - \Sigma_{xf} \Sigma_{ff}^{-1} \Sigma_{fx} \qquad (8)
\]

which implies that, in spite of the feature marginalization, the remaining information matrix Σ_x retains all the information from (7) about x.
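A minimal numerical sketch of the Schur complement marginalization in (8), assuming a small synthetic positive-definite information matrix (all names and sizes are illustrative):

```python
import numpy as np

def schur_marginalize(Sigma, nx):
    """Marginalize the trailing block of an information matrix,
    keeping the leading nx states (Eq. (8))."""
    Sxx = Sigma[:nx, :nx]
    Sxf = Sigma[:nx, nx:]
    Sfx = Sigma[nx:, :nx]
    Sff = Sigma[nx:, nx:]
    return Sxx - Sxf @ np.linalg.inv(Sff) @ Sfx

# Build a random positive-definite information matrix (9 states, keep 6).
rng = np.random.default_rng(1)
A = rng.standard_normal((9, 9))
Sigma = A @ A.T + 9 * np.eye(9)
Sx = schur_marginalize(Sigma, 6)
# The result equals the inverse of the corresponding covariance block,
# i.e. no information about the kept states is lost.
assert np.allclose(Sx, np.linalg.inv(np.linalg.inv(Sigma)[:6, :6]))
```

The final assertion is the standard identity behind the claim above: the Schur complement of the information matrix is exactly the inverse of the marginal covariance of the kept states.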

3 Analysis of Null-Space Marginalization

In this section, from the information perspective, we analytically study the connection between null-space marginalization and Schur complement based marginalization. To keep the presentation concise, we assume a simple SLAM scenario in which a robot takes measurements of a feature x_f at poses x_1 and x_2, respectively; the result generalizes to arbitrary SLAM configurations:

\[
x_2 = f_1(x_1) + w_1 \qquad (9)
\]
\[
z_k = h_k(x_k, x_f) + n_k \qquad (10)
\]

where k = 1, 2. Given the prior knowledge x_1 ∼ N(x̂_1, P_1), we can solve the SLAM problem and obtain the robot pose estimates by marginalizing the feature. At the linearization point (x̂_1, x̂_2, x̂_f), the linearized system can be written as:

\[
\begin{cases}
\tilde{x}_2 \simeq F_1 \tilde{x}_1 + w_1 \\
\tilde{z}_k \simeq H_k \tilde{x}_k + H_{f_k} \tilde{x}_f + n_k
\end{cases} \qquad (11)
\]

We will marginalize the feature with the null-space (NS) and Schur complement (SC) operations and compare the resulting information matrices for the robot poses.

3.1 Information Matrix After Null-Space Marginalization

We solve this problem within the MSCKF framework. The system propagates from x_1 to x_2 according to (1) and clones the state x_1. The system then stacks all the measurements and updates at x_2. The covariance matrix of the robot poses after propagation is:

\[
P_{(x_1, x_2)} = \begin{bmatrix} P_1 & P_1 F_1^\top \\ F_1 P_1 & F_1 P_1 F_1^\top + Q_1 \end{bmatrix} \qquad (12)
\]

After linearizing the measurement model, we perform the null-space operation:

\[
\begin{bmatrix} \tilde{z}_1 \\ \tilde{z}_2 \end{bmatrix}
= \begin{bmatrix} H_1 & 0 \\ 0 & H_2 \end{bmatrix}
\begin{bmatrix} \tilde{x}_1 \\ \tilde{x}_2 \end{bmatrix}
+ \begin{bmatrix} H_{f_1} \\ H_{f_2} \end{bmatrix} \tilde{x}_f
+ \begin{bmatrix} n_1 \\ n_2 \end{bmatrix} \qquad (13)
\]

\[
\Rightarrow \quad U_n^\top \begin{bmatrix} \tilde{z}_1 \\ \tilde{z}_2 \end{bmatrix}
= U_n^\top H_x \begin{bmatrix} \tilde{x}_1 \\ \tilde{x}_2 \end{bmatrix}
+ U_n^\top \begin{bmatrix} n_1 \\ n_2 \end{bmatrix} \qquad (14)
\]


where H_x = [H_1, 0; 0, H_2] and H_f = [H_{f_1}; H_{f_2}]. U_n represents the left null space of H_f and can be computed by QR factorization:

\[
H_f = \begin{bmatrix} U_e & U_n \end{bmatrix} \begin{bmatrix} R_\Delta \\ 0 \end{bmatrix} = U_e R_\Delta \qquad (15)
\]

where [U_e U_n] is a unitary matrix and R_Δ is an upper triangular matrix. After the EKF update, the information matrix of the robot poses (x_1, x_2) is:

\[
\Sigma_{NS} = \underbrace{\begin{bmatrix} P_1 & P_1 F_1^\top \\ F_1 P_1 & F_1 P_1 F_1^\top + Q_1 \end{bmatrix}^{-1}}_{\Sigma^{(NS)}_1}
+ \underbrace{H_x^\top U_n (U_n^\top R U_n)^{-1} U_n^\top H_x}_{\Sigma^{(NS)}_2} \qquad (16)
\]

where Σ^(NS)_1 and Σ^(NS)_2 denote the information from the motion model (9) and the measurement model (10), respectively, and R = [R_1, 0; 0, R_2] denotes the stacked noise covariance.

3.2 Information Matrix After Schur Complement

We now formulate the SLAM problem as an MLE whose cost function is given by:

\[
J = \left\| x_1 - \hat{x}_1 \right\|^2_{P_1^{-1}} + \left\| x_2 - f_1(x_1) \right\|^2_{Q_1^{-1}} + \sum_{k=1}^{2} \left\| z_k - h_k(x_k, x_f) \right\|^2_{R_k^{-1}} \qquad (17)
\]

The information matrix Σ_J of the system can be computed according to (7); for simplicity, we partition Σ_J as:

\[
\Sigma_J = \begin{bmatrix} \Sigma_{J1} & \Sigma_{J2} \\ \Sigma_{J3} & \Sigma_{J4} \end{bmatrix} \qquad (18)
\]

where:

\[
\Sigma_{J1} = \begin{bmatrix} P_1 & P_1 F_1^\top \\ F_1 P_1 & F_1 P_1 F_1^\top + Q_1 \end{bmatrix}^{-1} + H_x^\top R^{-1} H_x \qquad (19)
\]
\[
\Sigma_{J2} = \begin{bmatrix} H_1^\top R_1^{-1} H_{f_1} \\ H_2^\top R_2^{-1} H_{f_2} \end{bmatrix} = H_x^\top R^{-1} H_f \qquad (20)
\]
\[
\Sigma_{J3} = \begin{bmatrix} H_{f_1}^\top R_1^{-1} H_1 & H_{f_2}^\top R_2^{-1} H_2 \end{bmatrix} = H_f^\top R^{-1} H_x \qquad (21)
\]
\[
\Sigma_{J4} = H_{f_1}^\top R_1^{-1} H_{f_1} + H_{f_2}^\top R_2^{-1} H_{f_2} = H_f^\top R^{-1} H_f \qquad (22)
\]

We marginalize the feature x_f with the Schur complement and obtain the information matrix of the robot poses (x_1, x_2) as:

\[
\Sigma_{SC} = \Sigma_{J1} - \Sigma_{J2} \Sigma_{J4}^{-1} \Sigma_{J3} \qquad (23)
\]
\[
= \underbrace{\begin{bmatrix} P_1^{-1} + F_1^\top Q_1^{-1} F_1 & -F_1^\top Q_1^{-1} \\ -Q_1^{-1} F_1 & Q_1^{-1} \end{bmatrix}}_{\Sigma^{(SC)}_1}
+ \underbrace{H_x^\top R^{-1} H_x - H_x^\top R^{-1} H_f \left( H_f^\top R^{-1} H_f \right)^{-1} H_f^\top R^{-1} H_x}_{\Sigma^{(SC)}_2}
\]

where Σ^(SC)_1 and Σ^(SC)_2 denote the information from the motion model (9) and the measurement model (10), respectively.

3.3 Equivalence of Two Marginalizations

Based on the above, we present our main result:


Lemma 3.1. Under the i.i.d. white Gaussian noise assumption and given the same linearization points, the information matrix (23) after the Schur complement operation is equivalent to the information matrix (16) after null-space marginalization.

Proof. See Appendix A. □

Note that since we have separated the information from the motion model (9) and the measurement model (10), this analysis extends readily to multiple poses and features (see Appendix B). We also point out the following:

(i) The assumption of i.i.d. white Gaussian noise is widely used in most SLAM systems (e.g., [7, 9, 10]). If there is a difference between null-space marginalization and the Schur complement, it is due to the different linearization points used.

(ii) The null-space marginalization of features can be applied to pre-integration based VINS [3, 17]. We can provide a simplified expression for the information matrix Σ_NS of the robot poses after marginalization:

\[
\Sigma_{NS} = \underbrace{\begin{bmatrix} P_1 & P_1 F_1^\top \\ F_1 P_1 & F_1 P_1 F_1^\top + Q_1 \end{bmatrix}^{-1}}_{\Sigma^{(NS)}_1}
+ \underbrace{\frac{1}{\sigma^2} \left( U_n^\top H_x \right)^\top U_n^\top H_x}_{\Sigma^{(NS)}_2} \qquad (24)
\]

where Σ^(NS)_1 represents the information from the inertial measurements via pre-integration, while Σ^(NS)_2 represents the information from the visual measurements after null-space feature marginalization. Note that for Σ^(NS)_2 we do not need to compute the null space U_n explicitly; U_n^T H_x can be computed directly by Givens rotations.

(iii) With (42) in the proof, we find that I − U_e U_e^T also annihilates the feature Jacobian H_f from the left. Inspired by this finding, we propose an algorithm for constructing an analytical null-space expression in Section 5.
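The equivalence of Σ^(SC)_2 and Σ^(NS)_2 under R = σ²I claimed in Lemma 3.1 can be checked numerically; the following sketch uses synthetic Jacobians (all names and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 0.5                              # noise variance, R = sigma2 * I
Hx = rng.standard_normal((8, 6))          # stacked pose Jacobian
Hf = rng.standard_normal((8, 3))          # stacked feature Jacobian

# Null-space route: project onto the left null space of Hf (Eq. (16)).
Q, _ = np.linalg.qr(Hf, mode="complete")
Un = Q[:, 3:]
Sigma_NS = (Un.T @ Hx).T @ (Un.T @ Hx) / sigma2

# Schur complement route (Eq. (23)), with R^{-1} = I / sigma2.
HtH = Hx.T @ Hx / sigma2
HtHf = Hx.T @ Hf / sigma2
HftHf = Hf.T @ Hf / sigma2
Sigma_SC = HtH - HtHf @ np.linalg.inv(HftHf) @ HtHf.T

print(np.allclose(Sigma_NS, Sigma_SC))    # True (up to machine precision)
```

This mirrors the proof in Appendix A: with R = σ²I, both expressions reduce to (1/σ²) H_x^T U_n U_n^T H_x.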

3.4 Computational Complexity

We further compare the computational costs of the Schur complement and null-space marginalization. From the proof we know that computing Σ^(SC)_1 and Σ^(NS)_1 involves the same operations, so we only need to compare the computation of Σ^(SC)_2 and Σ^(NS)_2. We assume there are n features that are all tracked by the m robot poses. Given the i.i.d. white Gaussian noise assumption, the computational complexities are outlined in Tables 1 and 2.

It is not difficult to see that the null-space operation is computationally cheaper. The dominant step of the Schur complement operation is Step SC3 at O(mn³), while the dominant step of the null-space operation is Step NS2 at O(m³n). Since the number of features n is typically much larger than the number of robot poses m, Step SC3 is more costly than Step NS2. Therefore, the null-space operation is more efficient. In Step NS1, O(mn) is the computational complexity using Givens rotations. Note that the complexities in Table 1 represent the most general case; if the sparse structure of the information matrix is exploited using techniques such as column reordering [18], the computational cost of the Schur complement can be greatly reduced.

4 Pose-and-Feature Marginalization

The null-space marginalization can also be applied to joint pose-and-feature marginalization. To see this, consider the same SLAM system (1) and (2) as before, and suppose we want to marginalize the pose x_2 and the feature x_f. The stacked linearized system can be written as:

\[
\underbrace{\begin{bmatrix} 0 \\ \tilde{z}_1 \\ \tilde{z}_2 \end{bmatrix}}_{\tilde{z}}
\simeq
\underbrace{\begin{bmatrix} F_1 \\ H_1 \\ 0 \end{bmatrix}}_{H_k} \tilde{x}_1
+ \underbrace{\begin{bmatrix} -I & 0 \\ 0 & H_{f_1} \\ H_2 & H_{f_2} \end{bmatrix}}_{H_m}
\begin{bmatrix} \tilde{x}_2 \\ \tilde{x}_f \end{bmatrix}
+ \underbrace{\begin{bmatrix} w_1 \\ n_1 \\ n_2 \end{bmatrix}}_{n} \qquad (25)
\]


Table 1: Computational Complexity of the Schur Complement Operation

  Step      Operation                                              Complexity
  Step SC1  H_x^T H_x                                              O(m^3 n)
  Step SC2  H_x^T H_f                                              O(m^2 n^2)
  Step SC3  H_f^T H_f                                              O(m n^3)
  Step SC4  (H_f^T H_f)^{-1}                                       O(n^3)
  Step SC5  H_x^T H_f (H_f^T H_f)^{-1}                             O(m n^2)
  Step SC6  H_x^T H_f (H_f^T H_f)^{-1} H_f^T H_x                   O(m^2 n)
  Step SC7  H_x^T H_x − H_x^T H_f (H_f^T H_f)^{-1} H_f^T H_x       O(m)

Table 2: Computational Complexity of the Null-Space Operation

  Step      Operation                  Complexity
  Step NS1  U_n^T H_x                  O(m n)
  Step NS2  (U_n^T H_x)^T U_n^T H_x    O(m^3 n)

To simplify the derivation, we denote by x_k the states we are going to keep and by x_m the states to be marginalized; H_k and H_m are their respective Jacobians. n is the stacked noise with covariance R = diag(Q_1, R_1, R_2). If H_m has full column rank, then we can again use its left null space U_m, with U_m^T H_m = 0, to marginalize the state x_m. Thus we have:

\[
U_m^\top \tilde{z} = U_m^\top H_k \tilde{x}_k + U_m^\top n \qquad (26)
\]

where U_m can be computed by QR factorization of H_m as:

\[
H_m = \begin{bmatrix} U_k & U_m \end{bmatrix} \begin{bmatrix} R_{\Delta k} \\ 0 \end{bmatrix} \qquad (27)
\]

The information matrix for (26) is:

\[
\Sigma^{(NS)} = P_1^{-1} + H_k^\top U_m \left( U_m^\top R U_m \right)^{-1} U_m^\top H_k \qquad (28)
\]

By the Schur complement, the information matrix for x_k is:

\[
\Sigma^{(SC)} = \Sigma_{kk} - \Sigma_{km} \Sigma_{mm}^{-1} \Sigma_{mk} \qquad (29)
\]

where the information matrix Σ is partitioned according to x_k and x_m as:

\[
\Sigma = \begin{bmatrix} \Sigma_{kk} & \Sigma_{km} \\ \Sigma_{mk} & \Sigma_{mm} \end{bmatrix} \qquad (30)
\]

Then we arrive at:

\[
\Sigma^{(SC)} = P_1^{-1} + H_k^\top R^{-1} H_k - H_k^\top R^{-1} H_m \left( H_m^\top R^{-1} H_m \right)^{-1} H_m^\top R^{-1} H_k \quad \Rightarrow
\]


\[
\Sigma^{(SC)} = P_1^{-1} + H_k^\top \left[ R^{-1} - R^{-1} U_k \left( U_k^\top R^{-1} U_k \right)^{-1} U_k^\top R^{-1} \right] H_k \qquad (31)
\]

From (41) and (42), the same argument as in the proof of Lemma 3.1 shows that (28) and (31) are equivalent under the i.i.d. noise assumption. Therefore, under the same assumption, the null-space operation extends to joint pose-and-feature marginalization provided that H_m has full column rank, which is often needed to reduce the cost of graph SLAM (e.g., [6]).
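A sketch of joint pose-and-feature marginalization under a simplified i.i.d. assumption (R = σ²I rather than the block-diagonal R above; all names and dimensions are illustrative), checking the null-space result (28) against the Schur complement result (29)-(31):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 0.25
Hk = rng.standard_normal((9, 3))     # Jacobian of the kept state x1
Hm = rng.standard_normal((9, 6))     # Jacobian of marginalized [x2; xf]
P1inv = np.eye(3)                    # prior information on x1

# Null-space route: Um spans the left null space of Hm (Eq. (28)).
Q, _ = np.linalg.qr(Hm, mode="complete")
Um = Q[:, 6:]
Sigma_ns = P1inv + (Um.T @ Hk).T @ (Um.T @ Hk) / sigma2

# Schur complement route (Eqs. (29)-(31)) with R = sigma2 * I.
Rinv = np.eye(9) / sigma2
Skk = P1inv + Hk.T @ Rinv @ Hk
Skm = Hk.T @ Rinv @ Hm
Smm = Hm.T @ Rinv @ Hm
Sigma_sc = Skk - Skm @ np.linalg.inv(Smm) @ Skm.T

assert np.allclose(Sigma_ns, Sigma_sc)
```

As in Lemma 3.1, both routes collapse to P_1^{-1} + (1/σ²) H_k^T U_m U_m^T H_k when the noise is i.i.d.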

5 Analytical Null-Space

In this section, we derive an analytical expression for the null space in the MSCKF. In particular, the linearized measurement model for feature x_{f_j} at time step k of the MSCKF can be structured as follows:

\[
\tilde{z}_k = \underbrace{\frac{\partial h_k}{\partial {}^{C_k}x_{f_j}}}_{H_{C_k}} \underbrace{\frac{\partial {}^{C_k}x_{f_j}}{\partial x}}_{H_{x_k}} \tilde{x}
+ \underbrace{\frac{\partial h_k}{\partial {}^{C_k}x_{f_j}}}_{H_{C_k}} \underbrace{\frac{\partial {}^{C_k}x_{f_j}}{\partial x_{f_j}}}_{H_{k f_j}} \tilde{x}_{f_j} + n_k \qquad (32)
\]

where ^{C_k}x_{f_j} = ^{C_k}_G R (x_{f_j} − ^G p_{C_k}) and H_{k f_j} = ∂ ^{C_k}x_{f_j} / ∂x_{f_j} = ^{C_k}_G R, with ^G p_{C_k} and ^{C_k}_G R the position and orientation of the cloned state x_{C_k}, and H_{C_k} the Jacobian of the sensor measurements (i.e., image key points) with respect to the local feature position ^{C_k}x_{f_j}. Hence, if the sensor(s) can obtain the relative position estimate ^{C_k}x̂_{f_j} of feature x_{f_j} from a single measurement, H_{C_k} has full column rank (e.g., stereo camera, RGBD camera). Letting H_{C_k}^{-1} denote the inverse (or pseudo-inverse) of H_{C_k}, multiplying (32) by it yields:

\[
H_{C_k}^{-1} \tilde{z}_k = H_{x_k} \tilde{x} + H_{k f_j} \tilde{x}_{f_j} + H_{C_k}^{-1} n_k \qquad (33)
\]

Assuming we have m measurements of feature x_{f_j}, the system can be linearized with respect to the cloned states and the feature as:

\[
\underbrace{\begin{bmatrix} H_{C_1}^{-1}\tilde{z}_1 \\ H_{C_2}^{-1}\tilde{z}_2 \\ \vdots \\ H_{C_m}^{-1}\tilde{z}_m \end{bmatrix}}_{\tilde{z}}
= \underbrace{\begin{bmatrix} H_{x_1} \\ H_{x_2} \\ \vdots \\ H_{x_m} \end{bmatrix}}_{H_x} \tilde{x}
+ \underbrace{\begin{bmatrix} {}^{C_1}_G R \\ {}^{C_2}_G R \\ \vdots \\ {}^{C_m}_G R \end{bmatrix}}_{H_{f_j}} \tilde{x}_{f_j}
+ \underbrace{\begin{bmatrix} H_{C_1}^{-1} n_1 \\ H_{C_2}^{-1} n_2 \\ \vdots \\ H_{C_m}^{-1} n_m \end{bmatrix}}_{n} \qquad (34)
\]

We can now find two analytical solutions for the left null space U of H_{f_j} such that U^T H_{f_j} = 0.

(i) One analytical left null-space solution U ∈ R^{3m×3m} is:

\[
U = I_{3m} - \frac{H_{f_j} H_{f_j}^\top}{m} \qquad (35)
\]

Note that m I_3 = H_{f_j}^\top H_{f_j} = \sum_{k=1}^{m} {}^{C_k}_G R^\top \, {}^{C_k}_G R.

(ii) Another analytical left null space, with U^T ∈ R^{3(m−1)×3m}, is:

\[
U^\top = \begin{bmatrix}
-{}^{C_1}_G R^\top & {}^{C_2}_G R^\top & 0 & \cdots & 0 \\
-{}^{C_1}_G R^\top & 0 & {}^{C_3}_G R^\top & \cdots & 0 \\
\vdots & \vdots & & \ddots & \vdots \\
-{}^{C_1}_G R^\top & 0 & 0 & \cdots & {}^{C_m}_G R^\top
\end{bmatrix} \qquad (36)
\]

Note that many other null spaces can be constructed. Compared with (35), (36) has fewer elements and a lower dimension, and thus a lower computational cost. It is therefore used in our experiments.
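The defining property U^T H_{f_j} = 0 of the analytical null space (36) can be verified numerically; the sketch below builds random rotation matrices via QR (an illustrative construction, not part of the original derivation):

```python
import numpy as np

def random_rotation(rng):
    """Random 3x3 rotation matrix via QR of a Gaussian matrix."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q * np.linalg.det(Q)        # force det = +1

rng = np.random.default_rng(4)
m = 5
Rs = [random_rotation(rng) for _ in range(m)]   # the {C_k}_G rotations
Hf = np.vstack(Rs)                              # stacked feature Jacobian, Eq. (34)

# Analytical left null space of Eq. (36): row block i pairs pose 1 with
# pose i+1, so U^T has 3(m-1) rows and 3m columns.
Ut = np.zeros((3 * (m - 1), 3 * m))
for i in range(m - 1):
    Ut[3*i:3*i+3, 0:3] = -Rs[0].T
    Ut[3*i:3*i+3, 3*(i+1):3*(i+2)] = Rs[i + 1].T

print(np.allclose(Ut @ Hf, 0))         # True: U^T annihilates Hf
```

Each row block evaluates to −R_1^T R_1 + R_{i+1}^T R_{i+1} = −I + I = 0, which is exactly why (36) works: only the orthogonality of the rotation blocks is needed.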


6 Experimental Results

In this section, we present three sets of experiments to validate our analysis, in particular Lemma 3.1 and the analytical null space. Specifically, we first evaluate the MSCKF on real data to show that null-space marginalization and the Schur complement produce the same estimation accuracy. We then examine a factor graph-based visual odometry (VO) implementation to show that the information retained after both operations is identical. Lastly, we compare the analytical and numerical null-space operations implemented in the MSCKF using Monte-Carlo simulations.

6.1 MSCKF Case

We use the EuRoC datasets [19] to experimentally validate the equivalence of the null-space and Schur complement operations. The datasets were created by flying an AscTec Firefly MAV equipped with two global-shutter greyscale cameras and an ADIS16448 IMU through two separate indoor environments. Position and attitude ground truth is provided with the datasets from a post-processing solution aided by a VICON system or a Leica laser tracker, depending on the environment. The IMU data is provided at 200 Hz and the image data at 20 Hz. Camera and IMU calibrations are provided.

We implemented the conventional covariance form of the MSCKF as described in [7], which utilizes the null-space operation. We also implemented the MSCKF in information form so that we could use the Schur complement operation. To clarify, in the latter we do not eliminate the feature sensitivity from the measurement equation as in (4) and (5). Instead, after the propagation step we obtain the total information matrix:

\[
\Sigma_k = \begin{bmatrix} P_{k|k-1}^{-1} & 0 \\ 0 & 0 \end{bmatrix}
+ \begin{bmatrix} H_x^\top \\ H_f^\top \end{bmatrix} R^{-1} \begin{bmatrix} H_x & H_f \end{bmatrix}
= \begin{bmatrix} A_{xx} & A_{xf} \\ A_{fx} & A_{ff} \end{bmatrix} \qquad (37)
\]

We then obtain the updated covariance matrix and state correction from:

\[
P_{k|k} = \left( A_{xx} - A_{xf} A_{ff}^{-1} A_{fx} \right)^{-1} \qquad (38)
\]
\[
x_{k|k} = \begin{bmatrix} P_{k|k} & -P_{k|k} A_{xf} A_{ff}^{-1} \end{bmatrix}
\begin{bmatrix} H_x^\top \\ H_f^\top \end{bmatrix} R^{-1} \tilde{z}_k \qquad (39)
\]

Both filters were tuned identically and used the same sliding-window size of 7. The filters were run side by side so that the information-form filter could be forced to use the same features in the update step (although each filter performed its own triangulation). The outputs of the two filters were identical up to machine precision. Figure 1 shows the similarity in position root mean squared error (RMSE) and normalized estimation error squared (NEES) [20]. The former measures the estimation accuracy while the latter evaluates the estimation consistency.

6.2 Factor Graph Case

We also implemented a simple example of a typical vision factor (Fig. 2) to validate Lemma 3.1. As shown in the photo from the KITTI odometry dataset [21], the car moves from time step 1 to time step 2. In order to fuse the visual information with other sensors (e.g., LiDAR, GPS), we need to estimate the odometry ^1_2 x and its covariance (or its information matrix Σ). For this single vision factor there is no motion-model information; therefore, we only need to compare the visual information Σ^(SC)_2 from (23) and Σ^(NS)_2 from (16). We chose 2 pairs of stereo images from this dataset and extracted and matched 121 pairs of stereo features. We then formulated the measurement model with respect to the relative transformation ^1_2 x and all the features, and compared the information matrices after the Schur complement operation and the null-space operation. Note that σ² represents the normalized image noise variance. The differences Σ_Diff (= Σ^(SC)_2 − Σ^(NS)_2) have orders of magnitude less than 10⁻¹², which is at machine precision. The


Figure 1: Comparison between the null-space and Schur complement methods for feature marginalization on two EuRoC dataset sequences (MH 05 difficult KLT and V1 03 difficult KLT), showing position and attitude NEES and RMSE over time.

differences are shown below:

\[
\Sigma_{Diff} = \frac{1}{\sigma^2}
\begin{bmatrix}
-5.04\mathrm{e}{-14} & -1.25\mathrm{e}{-13} & -1.05\mathrm{e}{-13} & -4.44\mathrm{e}{-15} & 3.64\mathrm{e}{-14} & 3.05\mathrm{e}{-15} \\
-5.15\mathrm{e}{-14} & 3.06\mathrm{e}{-12} & -1.60\mathrm{e}{-14} & 9.41\mathrm{e}{-14} & 2.73\mathrm{e}{-15} & 1.17\mathrm{e}{-13} \\
-9.26\mathrm{e}{-14} & -6.39\mathrm{e}{-14} & -3.09\mathrm{e}{-13} & -3.77\mathrm{e}{-15} & -8.33\mathrm{e}{-17} & 2.18\mathrm{e}{-15} \\
6.63\mathrm{e}{-15} & 1.51\mathrm{e}{-13} & 2.22\mathrm{e}{-15} & 5.11\mathrm{e}{-15} & -3.94\mathrm{e}{-16} & 2.69\mathrm{e}{-15} \\
3.91\mathrm{e}{-14} & -1.66\mathrm{e}{-16} & 3.33\mathrm{e}{-16} & 8.71\mathrm{e}{-17} & -3.22\mathrm{e}{-15} & -2.84\mathrm{e}{-16} \\
1.17\mathrm{e}{-14} & 1.15\mathrm{e}{-13} & -1.90\mathrm{e}{-15} & 5.22\mathrm{e}{-15} & -4.99\mathrm{e}{-16} & 2.96\mathrm{e}{-15}
\end{bmatrix}
\]

6.3 Null-Space Comparison

To validate the proposed analytical null space (36), we run 50 Monte-Carlo simulations of the MSCKF with both the numerical and the analytical null space and compare the NEES and RMSE. In particular, Figure 3 shows the average NEES over the 50 Monte-Carlo runs for the robot's position, attitude and velocity. Both the numerical and analytical null-space solutions work well for the attitude and velocity estimates, and the results (average NEES) are almost the same. However, the MSCKF with the analytical null space generates more consistent position estimates. Similarly, in Figure 4, both null-space solutions have similar accuracy for the attitude estimate, while the analytical solution generates a more accurate position estimate. The better performance of the proposed analytical null space is probably because the matrix has a sparse structure and exploits the orthogonality of the state rotation matrices; applying this analytical null space is thus similar to adding an implicit motion constraint to the estimator.

7 Conclusions and Future Work

We have analytically shown that null-space marginalization and the Schur complement preserve the same information about the remaining states under certain mild assumptions (i.i.d. Gaussian noise model and same linearization points). This validates the application of null-space marginalization to graph SLAM for


Figure 2: A typical vision factor for visual odometry. ^1_2 x (^2_1 R, ^1 p_2) represents the transformation from stereo camera pose 1 to pose 2, where ^2_1 R and ^1 p_2 denote the rotation and translation between pose 1 and pose 2, respectively.

Figure 3: Average NEES over 50 Monte-Carlo simulations for the robot's position, attitude and velocity. The blue and red lines represent the results for the MSCKF with the numerical and analytical null space, respectively.

potential efficiency gains. Moreover, we have derived the analytical null-space expressions for commonly used sensors such as stereo and RGBD cameras, which have been shown to perform better than the numerically computed ones. As future work, we plan to apply null-space marginalization to a broader family of estimation problems that require marginalization, to gain better efficiency.

A Proof of Lemma 3.1

To show Σ^(NS)_1 = Σ^(SC)_1, the matrix inversion lemma yields:

\[
\Sigma^{(NS)}_1 = \begin{bmatrix} P_1 & P_1 F_1^\top \\ F_1 P_1 & F_1 P_1 F_1^\top + Q_1 \end{bmatrix}^{-1}
= \begin{bmatrix} P_1^{-1} + F_1^\top Q_1^{-1} F_1 & -F_1^\top Q_1^{-1} \\ -Q_1^{-1} F_1 & Q_1^{-1} \end{bmatrix}
= \Sigma^{(SC)}_1 \qquad (40)
\]

To show Σ^(NS)_2 = Σ^(SC)_2, we consider the following cases:

A.0.1 Case I

We start with R_k = σ² I_{R_k}, where σ² is the noise variance and I_{R_k} is an identity matrix of the same size as R_k. Under the assumption of i.i.d. Gaussian noise, the stacked noise covariance is simply R = σ²I.


[Figure: two panels, (a) Position RMSE (m) and (b) Attitude RMSE (deg), plotted against time (s), each comparing the "numerical" and "analytic" curves.]

Figure 4: Average RMSE of 50 Monte-Carlo simulations for the robot's position and attitude. The blue and red lines represent the results for the MSCKF with the numerical null space and the analytical null space, respectively.

Then, from (15), (16) and (23), we have:

$$\boldsymbol{\Sigma}^{(SC)}_2 = \mathbf{H}_x^\top\left[\mathbf{R}^{-1} - \mathbf{R}^{-1}\mathbf{U}_e\left(\mathbf{U}_e^\top\mathbf{R}^{-1}\mathbf{U}_e\right)^{-1}\mathbf{U}_e^\top\mathbf{R}^{-1}\right]\mathbf{H}_x = \mathbf{H}_x^\top\left[\tfrac{1}{\sigma^2}\left(\mathbf{I} - \mathbf{U}_e\mathbf{U}_e^\top\right)\right]\mathbf{H}_x \quad (41)$$

$$\boldsymbol{\Sigma}^{(NS)}_2 = \mathbf{H}_x^\top\left[\mathbf{U}_n\left(\mathbf{U}_n^\top\mathbf{R}\,\mathbf{U}_n\right)^{-1}\mathbf{U}_n^\top\right]\mathbf{H}_x = \mathbf{H}_x^\top\left[\tfrac{1}{\sigma^2}\mathbf{U}_n\mathbf{U}_n^\top\right]\mathbf{H}_x \quad (42)$$

From the QR factorization, we know $\mathbf{U}_e\mathbf{U}_e^\top + \mathbf{U}_n\mathbf{U}_n^\top = \mathbf{I}$. Thus, at the same linearization point, we have $\boldsymbol{\Sigma}^{(SC)}_2 = \boldsymbol{\Sigma}^{(NS)}_2$. Therefore, we conclude that (23) and (16) are equivalent, that is, the information obtained with the null-space (NS) operation equals that obtained with the Schur complement (SC) operation.
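This equivalence can be checked numerically. The sketch below (illustrative dimensions and random Jacobians) obtains $\mathbf{U}_e$ and $\mathbf{U}_n$ from a full QR factorization of $\mathbf{H}_f$ and compares (41) with (42):

```python
import numpy as np

rng = np.random.default_rng(1)
r, c = 8, 3          # stacked measurement rows, feature dimension (illustrative)
Hf = rng.standard_normal((r, c))
Hx = rng.standard_normal((r, 5))
sigma2 = 0.01

# Full QR of Hf: the first c columns span range(Hf), the rest its left null space
Q, _ = np.linalg.qr(Hf, mode='complete')
Ue, Un = Q[:, :c], Q[:, c:]

# (41): Schur-complement information with R = sigma^2 * I
Sigma_SC2 = Hx.T @ ((np.eye(r) - Ue @ Ue.T) / sigma2) @ Hx
# (42): null-space information
Sigma_NS2 = Hx.T @ (Un @ Un.T / sigma2) @ Hx

# Ue Ue^T + Un Un^T = I, so the two information matrices agree
assert np.allclose(Sigma_SC2, Sigma_NS2)
```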

A.0.2 Case II

If the noise covariance matrix $\mathbf{R}_k$ is a full matrix, we can pre-whiten the measurement equations before the null-space operation. Since $\mathbf{R}_k$ is symmetric positive definite, we can factorize it as:

$$\mathbf{R}_k = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^\top = \left(\mathbf{V}\boldsymbol{\Lambda}^{\frac{1}{2}}\right)\left(\mathbf{V}\boldsymbol{\Lambda}^{\frac{1}{2}}\right)^\top = \bar{\mathbf{V}}\bar{\mathbf{V}}^\top \quad (43)$$

$$\Rightarrow \bar{\mathbf{V}}^{-1}\mathbf{R}_k\left(\bar{\mathbf{V}}^\top\right)^{-1} = \mathbf{I}_{\Lambda} \quad (44)$$

where $\boldsymbol{\Lambda}$ is a diagonal matrix and $\bar{\mathbf{V}} = \mathbf{V}\boldsymbol{\Lambda}^{\frac{1}{2}}$. Left multiplication of the linearized measurement equation by $\bar{\mathbf{V}}^{-1}$ yields:

$$\underbrace{\bar{\mathbf{V}}^{-1}\mathbf{z}_k}_{\bar{\mathbf{z}}_k} = \underbrace{\bar{\mathbf{V}}^{-1}\mathbf{H}_k}_{\bar{\mathbf{H}}_k}\,\mathbf{x}_k + \underbrace{\bar{\mathbf{V}}^{-1}\mathbf{H}_{f_k}}_{\bar{\mathbf{H}}_{f_k}}\,\mathbf{x}_f + \underbrace{\bar{\mathbf{V}}^{-1}\mathbf{n}_k}_{\bar{\mathbf{n}}_k} \quad (45)$$

After pre-whitening, the new measurement noise becomes $\bar{\mathbf{n}}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_{\Lambda})$, where $\mathbf{I}_{\Lambda}$ is an identity matrix. Then, we can follow the same steps as in Case I with the new Jacobians in (45).
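The pre-whitening step (43)-(44) can be sketched directly with an eigendecomposition (variable names and the noise dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
r = 6
A = rng.standard_normal((r, r))
Rk = A @ A.T + r * np.eye(r)      # a full (non-diagonal) SPD noise covariance

# (43): Rk = V Lambda V^T, with Vbar = V Lambda^{1/2}
w, V = np.linalg.eigh(Rk)
Vbar = V * np.sqrt(w)             # multiplies column i of V by sqrt(w_i)
assert np.allclose(Vbar @ Vbar.T, Rk)

# (44): after pre-whitening, the noise covariance becomes the identity
Vinv = np.linalg.inv(Vbar)
assert np.allclose(Vinv @ Rk @ Vinv.T, np.eye(r))
```

A Cholesky factor of $\mathbf{R}_k$ would serve equally well as $\bar{\mathbf{V}}$; only the property $\bar{\mathbf{V}}\bar{\mathbf{V}}^\top = \mathbf{R}_k$ is used.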

B Marginalization of Multiple Poses and Features

We first solve for the information matrix obtained with the null-space operation, $\boldsymbol{\Sigma}^{NS}$, according to (16):

$$\boldsymbol{\Sigma}^{NS} = \boldsymbol{\Sigma}^{(NS)}_1 + \boldsymbol{\Sigma}^{(NS)}_2 \quad (46)$$



where $\boldsymbol{\Sigma}^{(NS)}_1$ and $\boldsymbol{\Sigma}^{(NS)}_2$ denote the information from the multiple poses and from the features, respectively. We know that the inverse of the state covariance after $K$ steps of propagation satisfies $\mathbf{P}_x^{-1} = \boldsymbol{\Sigma}^{(NS)}_1$, where $\mathbf{P}_x$ is:

$$\mathbf{P}_x = \begin{bmatrix} \mathbf{P}_1 & \mathbf{P}_1\mathbf{F}_1^\top & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{F}_1\mathbf{P}_1 & \mathbf{P}_{2|1} & \mathbf{P}_{2|1}\mathbf{F}_2^\top & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{F}_2\mathbf{P}_{2|1} & \mathbf{P}_{3|2} & \ddots & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \ddots & \ddots & \mathbf{P}_{K-1|K-2}\mathbf{F}_{K-1}^\top \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{F}_{K-1}\mathbf{P}_{K-1|K-2} & \mathbf{P}_{K|K-1} \end{bmatrix} \quad (47)$$

where $\mathbf{P}_{2|1} = \mathbf{F}_1\mathbf{P}_1\mathbf{F}_1^\top + \mathbf{Q}_1$ and $\mathbf{P}_{k+1|k} = \mathbf{F}_k\mathbf{P}_{k|k-1}\mathbf{F}_k^\top + \mathbf{Q}_k$ for $k = 2 \ldots K-1$. The information $\boldsymbol{\Sigma}^{(NS)}_2$ from the features $j = 1 \ldots N$ can be expressed as follows:

$$\boldsymbol{\Sigma}^{(NS)}_2 = \sum_{j=1}^{N}\boldsymbol{\Sigma}^{(NS_j)}_2 = \sum_{j=1}^{N}\mathbf{H}_x^{(j)\top}\mathbf{U}_n^{(j)}\left(\mathbf{U}_n^{(j)\top}\mathbf{R}^{(j)}\mathbf{U}_n^{(j)}\right)^{-1}\mathbf{U}_n^{(j)\top}\mathbf{H}_x^{(j)} \quad (48)$$

where $(\cdot)^{(j)}$ represents the corresponding quantities for feature $\mathbf{x}_{f_j}$. Similarly, we can solve for the information matrix in the MLE formulation with the Schur complement operation, $\boldsymbol{\Sigma}^{SC}$, as follows:

$$\boldsymbol{\Sigma}^{SC} = \boldsymbol{\Sigma}^{(SC)}_1 + \boldsymbol{\Sigma}^{(SC)}_2 \quad (49)$$

where $\boldsymbol{\Sigma}^{(SC)}_1$ and $\boldsymbol{\Sigma}^{(SC)}_2$ denote the information from the poses and from the features after the Schur complement operation, respectively.

$$\boldsymbol{\Sigma}^{(SC)}_1 = \begin{bmatrix} \mathbf{P}_1^{-1} + \mathbf{F}_1^\top\mathbf{Q}_1^{-1}\mathbf{F}_1 & -\mathbf{F}_1^\top\mathbf{Q}_1^{-1} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ -\mathbf{Q}_1^{-1}\mathbf{F}_1 & \mathbf{Q}_1^{-1} + \mathbf{F}_2^\top\mathbf{Q}_2^{-1}\mathbf{F}_2 & -\mathbf{F}_2^\top\mathbf{Q}_2^{-1} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\mathbf{Q}_2^{-1}\mathbf{F}_2 & \mathbf{Q}_2^{-1} + \mathbf{F}_3^\top\mathbf{Q}_3^{-1}\mathbf{F}_3 & \ddots & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \ddots & \ddots & -\mathbf{F}_{K-1}^\top\mathbf{Q}_{K-1}^{-1} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathbf{Q}_{K-1}^{-1}\mathbf{F}_{K-1} & \mathbf{Q}_{K-1}^{-1} \end{bmatrix} \quad (50)$$

If we consider the feature Jacobians for the whole system:

$$\mathbf{H}_f = \begin{bmatrix} \mathbf{H}_f^{(1)} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{H}_f^{(2)} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \ddots & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{H}_f^{(N)} \end{bmatrix} = \begin{bmatrix} \mathbf{U}_e^{(1)} & \mathbf{U}_e^{(2)} & \cdots & \mathbf{U}_e^{(N)} \end{bmatrix} \begin{bmatrix} \mathbf{R}_{\Delta}^{(1)} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{R}_{\Delta}^{(2)} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \ddots & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{R}_{\Delta}^{(N)} \end{bmatrix} \quad (51)$$

then the matrix $\boldsymbol{\Sigma}^{(SC)}_2$ can be written as:

$$\boldsymbol{\Sigma}^{(SC)}_2 = \sum_{j=1}^{N}\boldsymbol{\Sigma}^{(SC_j)}_2 = \sum_{j=1}^{N}\mathbf{H}_x^{(j)\top}\left[\left(\mathbf{R}^{(j)}\right)^{-1} - \left(\mathbf{R}^{(j)}\right)^{-1}\mathbf{U}_e^{(j)}\left[\mathbf{U}_e^{(j)\top}\left(\mathbf{R}^{(j)}\right)^{-1}\mathbf{U}_e^{(j)}\right]^{-1}\mathbf{U}_e^{(j)\top}\left(\mathbf{R}^{(j)}\right)^{-1}\right]\mathbf{H}_x^{(j)} \quad (52)$$

From the proof for a single feature $\mathbf{x}_{f_j}$, we have $\boldsymbol{\Sigma}^{(NS_j)}_2 = \boldsymbol{\Sigma}^{(SC_j)}_2$. Since (48) and (52) are just the summations of each feature's information, they are also equivalent.

Finally, $\boldsymbol{\Sigma}^{(NS)}_1 = \mathbf{P}_x^{-1} = \boldsymbol{\Sigma}^{(SC)}_1$ can be proved by induction. Let $\mathbf{P}_{x_k}$ denote the propagated covariance at time step $k$; then we have:

$$\mathbf{P}_{x_{k+1}} = \begin{bmatrix} \mathbf{P}_{x_k} & \mathbf{P}_{x_k}\mathbf{H}_{F_k}^\top \\ \mathbf{H}_{F_k}\mathbf{P}_{x_k} & \mathbf{P}_{k+1|k} \end{bmatrix} \quad (53)$$

where $\mathbf{H}_{F_k} = \begin{bmatrix} \mathbf{0} & \cdots & \mathbf{0} & \mathbf{F}_k \end{bmatrix}$. We also define $\boldsymbol{\Sigma}^{(NS)}_{1_k}$ and $\boldsymbol{\Sigma}^{(SC)}_{1_k}$, which denote the propagated information and the MLE information, respectively. For the base case $k = 1$, we have:

$$\boldsymbol{\Sigma}^{(NS)}_{1_1} = \mathbf{P}_{x_1}^{-1} = \mathbf{P}_1^{-1} = \boldsymbol{\Sigma}^{(SC)}_{1_1} \quad (54)$$



Now, assuming $\boldsymbol{\Sigma}^{(NS)}_{1_k} = \boldsymbol{\Sigma}^{(SC)}_{1_k}$, we need to show that $\boldsymbol{\Sigma}^{(NS)}_{1_{k+1}} = \boldsymbol{\Sigma}^{(SC)}_{1_{k+1}}$. From the matrix inversion lemma, we have:

$$\boldsymbol{\Sigma}^{(NS)}_{1_{k+1}} = \mathbf{P}_{x_{k+1}}^{-1} = \begin{bmatrix} \mathbf{P}_{x_k} & \mathbf{P}_{x_k}\mathbf{H}_{F_k}^\top \\ \mathbf{H}_{F_k}\mathbf{P}_{x_k} & \mathbf{H}_{F_k}\mathbf{P}_{x_k}\mathbf{H}_{F_k}^\top + \mathbf{Q}_k \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{P}_{x_k}^{-1} + \mathbf{H}_{F_k}^\top\mathbf{Q}_k^{-1}\mathbf{H}_{F_k} & -\mathbf{H}_{F_k}^\top\mathbf{Q}_k^{-1} \\ -\mathbf{Q}_k^{-1}\mathbf{H}_{F_k} & \mathbf{Q}_k^{-1} \end{bmatrix} = \boldsymbol{\Sigma}^{(SC)}_{1_{k+1}} \quad (55)$$

C Running Time Comparison

This section is a detailed explanation of the vision factor experiment. From the description in the paper, we have the following measurement equations, with $\mathbf{n}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_k)$:

$$\mathbf{z}_1 = \mathbf{h}({}^1\mathbf{x}_f) + \mathbf{n}_1 \quad (56)$$
$$\mathbf{z}_2 = \mathbf{h}({}^1_2\mathbf{x},\, {}^1\mathbf{x}_f) + \mathbf{n}_2 \quad (57)$$

where

$$\mathbf{z}_k = \begin{bmatrix} {}^k u_l \\ {}^k v_l \\ {}^k u_r \end{bmatrix} = \begin{bmatrix} \dfrac{{}^k x_f}{{}^k z_f} \\[2mm] \dfrac{{}^k y_f}{{}^k z_f} \\[2mm] \dfrac{{}^k x_f - b}{{}^k z_f} \end{bmatrix} \quad (58)$$

$$\mathbf{R}_k = \sigma^2\mathbf{I}_3 \quad (59)$$

After linearization, we have:

$$\begin{bmatrix} \mathbf{z}_1 \\ \mathbf{z}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{0}_{3\times 6} & \mathbf{H}_{proj_1} \\ \mathbf{H}_{proj_2}\mathbf{H}_{12} & \mathbf{H}_{proj_2}\mathbf{H}_{2f} \end{bmatrix} \begin{bmatrix} {}^1_2\mathbf{x} \\ {}^1\mathbf{x}_f \end{bmatrix} + \begin{bmatrix} \mathbf{n}_1 \\ \mathbf{n}_2 \end{bmatrix} \quad (60)$$

where

$$\frac{\partial \mathbf{z}_1}{\partial\, {}^1_2\mathbf{x}} = \mathbf{0} \quad (61)$$

$$\frac{\partial \mathbf{z}_1}{\partial\, {}^1\mathbf{x}_f} = \mathbf{H}_{proj_1} \quad (62)$$

$$\frac{\partial \mathbf{z}_2}{\partial\, {}^1_2\mathbf{x}} = \mathbf{H}_{proj_2}\mathbf{H}_{12} = \mathbf{H}_{proj_2}\begin{bmatrix} \lfloor {}^2_1\mathbf{R}\,({}^1\mathbf{P}_f - {}^1\mathbf{P}_2) \times\rfloor & -{}^2_1\mathbf{R} \end{bmatrix} \quad (63)$$

$$\frac{\partial \mathbf{z}_2}{\partial\, {}^1\mathbf{x}_f} = \mathbf{H}_{proj_2}\mathbf{H}_{2f} = \mathbf{H}_{proj_2}\,{}^2_1\mathbf{R} \quad (64)$$

$$\mathbf{H}_{proj_k} = \begin{bmatrix} \dfrac{1}{{}^k z_f} & 0 & -\dfrac{{}^k x_f}{{}^k z_f^2} \\[2mm] 0 & \dfrac{1}{{}^k z_f} & -\dfrac{{}^k y_f}{{}^k z_f^2} \\[2mm] \dfrac{1}{{}^k z_f} & 0 & -\dfrac{{}^k x_f - b}{{}^k z_f^2} \end{bmatrix} \quad (65)$$

Then, we can rewrite the linearized equations as:

$$\underbrace{\begin{bmatrix} \mathbf{z}_1 \\ \mathbf{z}_2 \end{bmatrix}}_{\mathbf{z}} = \underbrace{\begin{bmatrix} \mathbf{0}_{3\times 6} \\ \mathbf{H}_{proj_2}\mathbf{H}_{12} \end{bmatrix}}_{\mathbf{H}_x}{}^1_2\mathbf{x} + \underbrace{\begin{bmatrix} \mathbf{H}_{proj_1} \\ \mathbf{H}_{proj_2}\mathbf{H}_{2f} \end{bmatrix}}_{\mathbf{H}_f}{}^1\mathbf{x}_f + \underbrace{\begin{bmatrix} \mathbf{n}_1 \\ \mathbf{n}_2 \end{bmatrix}}_{\mathbf{n}} \quad (66)$$



Note that for each feature $j$, $j = 1 \ldots m$, we have:

$$\mathbf{z}^{(1)} = \mathbf{H}_x^{(1)}\,{}^1_2\mathbf{x} + \mathbf{H}_f^{(1)}\,{}^1\mathbf{x}_f^{(1)} + \mathbf{n}^{(1)} \quad (67)$$
$$\vdots \quad (68)$$
$$\mathbf{z}^{(j)} = \mathbf{H}_x^{(j)}\,{}^1_2\mathbf{x} + \mathbf{H}_f^{(j)}\,{}^1\mathbf{x}_f^{(j)} + \mathbf{n}^{(j)} \quad (69)$$
$$\vdots \quad (70)$$
$$\mathbf{z}^{(m)} = \mathbf{H}_x^{(m)}\,{}^1_2\mathbf{x} + \mathbf{H}_f^{(m)}\,{}^1\mathbf{x}_f^{(m)} + \mathbf{n}^{(m)} \quad (71)$$

We now compare the computational running time of the different feature marginalization methods.

C.1 Graph SLAM Optimization

For the marginalization, the first step is to compute the linearization points (whether for the MSCKF or for graph SLAM), that is, the state estimates ${}^1_2\mathbf{x}$ and $\mathbf{x}_f$. The formulation of this graph SLAM problem is:

$$\sum_{j=1}^{m}\left\|\begin{bmatrix} \mathbf{z}_1 \\ \mathbf{z}_2 \end{bmatrix}^{(j)} - \begin{bmatrix} \mathbf{h}({}^1\mathbf{x}_f) \\ \mathbf{h}({}^1_2\mathbf{x},\, {}^1\mathbf{x}_f) \end{bmatrix}^{(j)}\right\|^2_{\mathbf{R}_k^{-1}} \quad (72)$$

The Jacobians are shown in (66). After this graph optimization, we obtain the linearization points and marginalize the features to keep only the pose information $\boldsymbol{\Sigma}_x$.

Algorithm 1 SLAM with Feature Marginalization

1: Input: $\mathbf{z}_1^{(j)}$, $\mathbf{z}_2^{(j)}$ and stereo baseline $b$, with $j = 1 \ldots m$
2: for $j \leftarrow 1, m$ do
3:   Compute $\mathbf{H}_x^{(j)}$ and $\mathbf{H}_f^{(j)}$
4: end for
5: Solve for ${}^1_2\mathbf{x}$ and ${}^1\mathbf{x}_f^{(j)}$ ($j = 1 \ldots m$) from (72) with the Levenberg-Marquardt method
6: Marginalize ${}^1\mathbf{x}_f^{(j)}$ ($j = 1 \ldots m$) with one algorithm from Section C.2
7: Return the remaining pose information $\boldsymbol{\Sigma}_x$

C.2 Marginalization Algorithms

C.2.1 Schur Complement Marginalization

We first need to construct the measurement Jacobians:

$$\underbrace{\begin{bmatrix} \mathbf{z}^{(1)} \\ \vdots \\ \mathbf{z}^{(j)} \\ \vdots \\ \mathbf{z}^{(m)} \end{bmatrix}}_{\mathbf{z}^{(all)}} = \underbrace{\begin{bmatrix} \mathbf{H}_x^{(1)} \\ \vdots \\ \mathbf{H}_x^{(j)} \\ \vdots \\ \mathbf{H}_x^{(m)} \end{bmatrix}}_{\mathbf{H}_x^{(all)}}{}^1_2\mathbf{x} + \underbrace{\begin{bmatrix} \mathbf{H}_f^{(1)} & & & & \\ & \ddots & & & \\ & & \mathbf{H}_f^{(j)} & & \\ & & & \ddots & \\ & & & & \mathbf{H}_f^{(m)} \end{bmatrix}}_{\mathbf{H}_f^{(all)}}\begin{bmatrix} {}^1\mathbf{x}_f^{(1)} \\ \vdots \\ {}^1\mathbf{x}_f^{(j)} \\ \vdots \\ {}^1\mathbf{x}_f^{(m)} \end{bmatrix} + \underbrace{\begin{bmatrix} \mathbf{n}^{(1)} \\ \vdots \\ \mathbf{n}^{(j)} \\ \vdots \\ \mathbf{n}^{(m)} \end{bmatrix}}_{\mathbf{n}^{(all)}} \quad (73)$$

Then the information for ${}^1_2\mathbf{x}$ after marginalization is:

$$\boldsymbol{\Sigma}^{SCF}_x = \mathbf{H}_x^{(all)\top}\left(\mathbf{R}^{(all)}\right)^{-1}\mathbf{H}_x^{(all)} - \mathbf{H}_x^{(all)\top}\left(\mathbf{R}^{(all)}\right)^{-1}\mathbf{H}_f^{(all)}\left[\mathbf{H}_f^{(all)\top}\left(\mathbf{R}^{(all)}\right)^{-1}\mathbf{H}_f^{(all)}\right]^{-1}\mathbf{H}_f^{(all)\top}\left(\mathbf{R}^{(all)}\right)^{-1}\mathbf{H}_x^{(all)} \quad (74)$$



Since $\mathbf{R}^{(all)} = \sigma^2\mathbf{I}$, we can simplify the above equation as:

$$\boldsymbol{\Sigma}^{SCF}_x = \frac{1}{\sigma^2}\left(\mathbf{H}_x^{(all)\top}\mathbf{H}_x^{(all)} - \mathbf{H}_x^{(all)\top}\mathbf{H}_f^{(all)}\left[\mathbf{H}_f^{(all)\top}\mathbf{H}_f^{(all)}\right]^{-1}\mathbf{H}_f^{(all)\top}\mathbf{H}_x^{(all)}\right) \quad (75)$$

See Algorithm 2.

Algorithm 2 Schur Complement Marginalization

1: Input: ${}^1_2\mathbf{x}$, ${}^1\mathbf{x}_f^{(j)}$, $\mathbf{n}^{(j)}$, $j = 1 \ldots m$
2: Construct $\mathbf{H}_x^{(all)}$ and $\mathbf{H}_f^{(all)}$
3: Compute $\mathbf{H}_x^{(all)\top}\mathbf{H}_x^{(all)}$
4: Compute $\mathbf{H}_x^{(all)\top}\mathbf{H}_f^{(all)}$
5: Compute $\left[\mathbf{H}_f^{(all)\top}\mathbf{H}_f^{(all)}\right]^{-1}$
6: Compute $\boldsymbol{\Sigma}^{SCF}_x$ by Eq. (75)
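A direct NumPy sketch of Algorithm 2 follows; the function name and dimensions are illustrative, not from the report's code, and $\mathbf{H}_f^{(all)}$ is assumed to have full column rank:

```python
import numpy as np

def schur_marginalize_full(Hx_all, Hf_all, sigma2):
    """Algorithm 2: full Schur complement over all stacked features, Eq. (75).

    Hx_all: stacked pose Jacobian; Hf_all: block-diagonal feature Jacobian.
    """
    HxHx = Hx_all.T @ Hx_all                       # step 3
    HxHf = Hx_all.T @ Hf_all                       # step 4
    HfHf_inv = np.linalg.inv(Hf_all.T @ Hf_all)    # step 5
    return (HxHx - HxHf @ HfHf_inv @ HxHf.T) / sigma2  # Eq. (75)
```

Note that this variant inverts one large (block-diagonal) matrix; the sparse variant below avoids that.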

C.2.2 Schur Complement Marginalization With Sparse Structure

A careful inspection of (73) shows that the feature Jacobians have a perfectly sparse (block-diagonal) structure; we can take advantage of this to design a more efficient algorithm. For (75), we have:

$$\left[\mathbf{H}_f^{(all)\top}\mathbf{H}_f^{(all)}\right]^{-1} = \begin{bmatrix} \left(\mathbf{H}_f^{(1)\top}\mathbf{H}_f^{(1)}\right)^{-1} & & & & \\ & \ddots & & & \\ & & \left(\mathbf{H}_f^{(j)\top}\mathbf{H}_f^{(j)}\right)^{-1} & & \\ & & & \ddots & \\ & & & & \left(\mathbf{H}_f^{(m)\top}\mathbf{H}_f^{(m)}\right)^{-1} \end{bmatrix} \quad (76)$$

$$\mathbf{H}_x^{(all)\top}\mathbf{H}_f^{(all)} = \begin{bmatrix} \mathbf{H}_x^{(1)\top}\mathbf{H}_f^{(1)} & \cdots & \mathbf{H}_x^{(j)\top}\mathbf{H}_f^{(j)} & \cdots & \mathbf{H}_x^{(m)\top}\mathbf{H}_f^{(m)} \end{bmatrix} \quad (77)$$

Therefore, we can rewrite (75) as:

$$\boldsymbol{\Sigma}^{SCS}_x = \boldsymbol{\Sigma}^{SCF}_x \quad (78)$$
$$= \frac{1}{\sigma^2}\sum_{j=1}^{m}\left(\mathbf{H}_x^{(j)\top}\mathbf{H}_x^{(j)} - \mathbf{H}_x^{(j)\top}\mathbf{H}_f^{(j)}\left[\mathbf{H}_f^{(j)\top}\mathbf{H}_f^{(j)}\right]^{-1}\mathbf{H}_f^{(j)\top}\mathbf{H}_x^{(j)}\right) \quad (79)$$
$$= \frac{1}{\sigma^2}\sum_{j=1}^{m}\boldsymbol{\Sigma}^{SC}_{x^{(j)}} \quad (80)$$

From the above derivation, we can see that the full Schur complement operation is equivalent to the summation of the individual Schur complements of each single feature. The resulting sparse-structure Schur complement method is given in Algorithm 3.
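The per-feature summation (80) can be sketched as follows (a minimal NumPy illustration; the function name and the block lists are hypothetical):

```python
import numpy as np

def schur_marginalize_sparse(Hx_blocks, Hf_blocks, sigma2):
    """Eq. (80): sum the per-feature Schur complements instead of
    inverting one large block-diagonal matrix."""
    d = Hx_blocks[0].shape[1]
    Sigma = np.zeros((d, d))
    for Hx, Hf in zip(Hx_blocks, Hf_blocks):
        HfHf_inv = np.linalg.inv(Hf.T @ Hf)       # small per-feature inverse
        Sigma += Hx.T @ Hx - (Hx.T @ Hf) @ HfHf_inv @ (Hf.T @ Hx)
    return Sigma / sigma2
```

Each iteration only inverts a small (feature-sized) matrix, which is the source of the speedup over the full Schur complement.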

C.2.3 Null-Space Marginalization With Householder QR

For null-space marginalization, the key is to compute the left null space of the feature Jacobian $\mathbf{H}_f$. One method is to directly compute the QR factorization of $\mathbf{H}_f$:

$$\mathbf{H}_f = \begin{bmatrix} \mathbf{U}_e & \mathbf{U}_n \end{bmatrix}\begin{bmatrix} \mathbf{R}_{\Delta} \\ \mathbf{0} \end{bmatrix} \quad (81)$$



Algorithm 3 Schur Complement Marginalization With Sparse Structure

1: Input: ${}^1_2\mathbf{x}$, ${}^1\mathbf{x}_f^{(j)}$, $\mathbf{n}^{(j)}$, $j = 1 \ldots m$
2: for $j \leftarrow 1, m$ do
3:   Compute $\mathbf{H}_x^{(j)}$ and $\mathbf{H}_f^{(j)}$
4:   Compute $\boldsymbol{\Sigma}^{SC}_{x^{(j)}}$
5: end for
6: Compute $\boldsymbol{\Sigma}^{SCS}_x$ by Eq. (80)

where $\mathbf{U}_n$ spans the left null space. Then, the resulting information can be written as:

$$\boldsymbol{\Sigma}^{(NS_{qr})}_x = \sum_{j=1}^{m}\left(\mathbf{U}_n^{(j)\top}\mathbf{H}_x^{(j)}\right)^\top\left(\mathbf{U}_n^{(j)\top}\mathbf{H}_x^{(j)}\right) = \sum_{j=1}^{m}\boldsymbol{\Sigma}^{(NS_{qr})}_{x^{(j)}} \quad (82)$$

The procedure is summarized in Algorithm 4.

Algorithm 4 Null-Space Marginalization With Householder QR

1: Input: ${}^1_2\mathbf{x}$, ${}^1\mathbf{x}_f^{(j)}$, $\mathbf{n}^{(j)}$, $j = 1 \ldots m$
2: for $j \leftarrow 1, m$ do
3:   Compute $\mathbf{H}_x^{(j)}$ and $\mathbf{H}_f^{(j)}$
4:   Compute the QR factorization with Householder reflections (Eigen) and get $\mathbf{U}_n^{(j)}$
5:   Compute $\mathbf{U}_n^{(j)\top}\mathbf{H}_x^{(j)}$
6:   Compute $\boldsymbol{\Sigma}^{(NS_{qr})}_{x^{(j)}}$
7: end for
8: Compute $\boldsymbol{\Sigma}^{(NS_{qr})}_x$ by Eq. (82)
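One summand of (82) can be sketched in NumPy as follows (the function name is illustrative; $\mathbf{H}_f$ is assumed to have full column rank):

```python
import numpy as np

def nullspace_info_qr(Hx, Hf):
    """One term of Eq. (82): obtain the left null space Un of Hf from a
    full (Householder-based) QR, then project the pose Jacobian."""
    r, c = Hf.shape
    Q, _ = np.linalg.qr(Hf, mode='complete')  # Q = [Ue  Un]
    Un = Q[:, c:]                             # Un.T @ Hf == 0
    A = Un.T @ Hx
    return A.T @ A
```

`mode='complete'` is essential here: the default reduced QR only returns $\mathbf{U}_e$, whereas the last $r - c$ columns of the full orthogonal factor give $\mathbf{U}_n$.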

C.2.4 Null-Space Marginalization With Projection

An explicit null-space projection can be written in the form:

$$\mathbf{U}_p = \mathbf{I} - \mathbf{H}_f\left(\mathbf{H}_f^\top\mathbf{H}_f\right)^{-1}\mathbf{H}_f^\top \quad (83)$$

Following (82), we can write the information with $\mathbf{U}_p$ as:

$$\boldsymbol{\Sigma}^{(NS_{pr})}_x = \sum_{j=1}^{m}\left(\mathbf{U}_p^{(j)\top}\mathbf{H}_x^{(j)}\right)^\top\left(\mathbf{U}_p^{(j)\top}\mathbf{H}_x^{(j)}\right) = \sum_{j=1}^{m}\boldsymbol{\Sigma}^{(NS_{pr})}_{x^{(j)}} \quad (84)$$

However, we still need to prove that (84) and (82) are equivalent. Note that the columns of both $\mathbf{U}_p$ and $\mathbf{U}_n$ lie in the left null space of $\mathbf{H}_f$, and the columns of $\mathbf{U}_n$ form a basis for the column space of $\mathbf{U}_p$. Therefore, we can write:

$$\mathbf{U}_p = \mathbf{U}_n\mathbf{S} \quad (85)$$

where $\mathbf{S}$ is the coefficient matrix describing the coordinates of each column of $\mathbf{U}_p$ in the basis $\mathbf{U}_n$. Since $\mathbf{U}_p$ is a (symmetric, idempotent) projector, we have:

$$\mathbf{U}_p^\top\mathbf{U}_p = \mathbf{S}^\top\mathbf{S} \quad (86)$$
$$\mathbf{U}_p\mathbf{U}_p^\top = \mathbf{U}_p = \mathbf{U}_n\mathbf{S} \quad (87)$$



Equating (86) and (87) therefore gives:

$$\left(\mathbf{U}_n - \mathbf{S}^\top\right)\mathbf{S} = \mathbf{0} \quad (88)$$

We can see that $\mathbf{S} = \mathbf{U}_n^\top$ is a solution, that is:

$$\mathbf{U}_p = \mathbf{U}_n\mathbf{U}_n^\top \quad (89)$$

Plugging this solution into (84) shows that it is equivalent to (82). The procedure is summarized in Algorithm 5.

Algorithm 5 Null-Space Marginalization With Projection

1: Input: ${}^1_2\mathbf{x}$, ${}^1\mathbf{x}_f^{(j)}$, $\mathbf{n}^{(j)}$, $j = 1 \ldots m$
2: for $j \leftarrow 1, m$ do
3:   Compute $\mathbf{H}_x^{(j)}$ and $\mathbf{H}_f^{(j)}$
4:   Compute $\mathbf{U}_p^{(j)}$
5:   Compute $\boldsymbol{\Sigma}^{(NS_{pr})}_{x^{(j)}}$
6: end for
7: Compute $\boldsymbol{\Sigma}^{(NS_{pr})}_x$ by Eq. (84)
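The key identity (89) behind this algorithm is easy to confirm numerically (illustrative dimensions, random $\mathbf{H}_f$):

```python
import numpy as np

rng = np.random.default_rng(4)
Hf = rng.standard_normal((8, 3))   # illustrative feature Jacobian

# Projection form (83)
Up = np.eye(8) - Hf @ np.linalg.inv(Hf.T @ Hf) @ Hf.T
# Orthonormal left-null-space basis from a complete QR, as in (81)
Q, _ = np.linalg.qr(Hf, mode='complete')
Un = Q[:, 3:]

# (89): the projector equals Un Un^T, so (84) matches (82)
assert np.allclose(Up, Un @ Un.T)
assert np.allclose(Up @ Up, Up)    # Up is idempotent
```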

C.2.5 Null-Space Marginalization With Givens Rotations

The original method for calculating the null space uses Givens rotations, so we also provide a version based on this null-space calculation:

$$\boldsymbol{\Sigma}^{(NS_{gn})}_x = \sum_{j=1}^{m}\left(\mathbf{U}_{gn}^{(j)\top}\mathbf{H}_x^{(j)}\right)^\top\left(\mathbf{U}_{gn}^{(j)\top}\mathbf{H}_x^{(j)}\right) = \sum_{j=1}^{m}\boldsymbol{\Sigma}^{(NS_{gn})}_{x^{(j)}} \quad (90)$$

The procedure is summarized in Algorithm 6.

Algorithm 6 Null-Space Marginalization With Givens Rotations

1: Input: ${}^1_2\mathbf{x}$, ${}^1\mathbf{x}_f^{(j)}$, $\mathbf{n}^{(j)}$, $j = 1 \ldots m$
2: for $j \leftarrow 1, m$ do
3:   Compute $\mathbf{H}_x^{(j)}$ and $\mathbf{H}_f^{(j)}$
4:   Compute $\mathbf{U}_{gn}^{(j)\top}\mathbf{H}_x^{(j)}$ with Givens rotations
5:   Compute $\boldsymbol{\Sigma}^{(NS_{gn})}_{x^{(j)}}$
6: end for
7: Compute $\boldsymbol{\Sigma}^{(NS_{gn})}_x$ by Eq. (90)

C.3 Marginalization Runtime Results

We use the same setup as in the previous experiments. There are 121 features to be marginalized, and we compare the running time of the whole optimization-plus-marginalization pipeline over 500 runs; the results are shown in Table 3.

For each marginalization method, we also run 500 trials and take the average to compare the marginalization performance (Table 4).


Page 20: Null-Space-based Marginalization: Analysis and Algorithmudel.edu/~yuyang/downloads/tr_nullspace.pdf · 10/15/2017  · null-space marginalization for both poses and features. In particular,

Algorithm 7 Givens Rotations

1: Input: $\mathbf{H}_f$, $\mathbf{H}_x$; $(r, c)$: the numbers of rows and columns of $\mathbf{H}_f$
2: for $i \leftarrow 1, c$ do
3:   for $j \leftarrow i+1, r$ do
4:     Compute $a_{ii} = \mathbf{H}_f(i,i)$ and $a_{ji} = \mathbf{H}_f(j,i)$
5:     if $a_{ii} == 0$ then
6:       $c_\theta = 0$ and $s_\theta = 1$
7:     else
8:       $c_\theta = \dfrac{a_{ii}}{\sqrt{a_{ii}^2 + a_{ji}^2}}$ and $s_\theta = \dfrac{-a_{ji}}{\sqrt{a_{ii}^2 + a_{ji}^2}}$
9:     end if
10:    Construct $\mathbf{R} = \begin{bmatrix} c_\theta & -s_\theta \\ s_\theta & c_\theta \end{bmatrix}$
11:    $\mathbf{H}_f([i,j],\, 1\!:\!3) = \mathbf{R}\,\mathbf{H}_f([i,j],\, 1\!:\!3)$
12:    $\mathbf{H}_x([i,j],\, :) = \mathbf{R}\,\mathbf{H}_x([i,j],\, :)$
13:  end for
14: end for
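A minimal NumPy sketch of Algorithm 7 (function name and test dimensions are illustrative). Each $2\times 2$ rotation zeroes one subdiagonal entry of $\mathbf{H}_f$ and is applied to the same rows of $\mathbf{H}_x$; after the sweep, the bottom $r - c$ rows of the rotated $\mathbf{H}_x$ equal $\mathbf{U}_{gn}^\top\mathbf{H}_x$ up to an orthogonal change of basis, so the information matrix is unchanged:

```python
import numpy as np

def givens_project(Hf, Hx):
    """Algorithm 7 sketch: triangularize Hf with Givens rotations, applying
    the same rotations to Hx. Rows c..r-1 of the returned Hx give the
    null-space-projected pose Jacobian."""
    Hf, Hx = Hf.copy(), Hx.copy()
    r, c = Hf.shape
    for i in range(c):
        for j in range(i + 1, r):
            a, b = Hf[i, i], Hf[j, i]
            if b == 0.0:
                continue                      # entry already zero
            rad = np.hypot(a, b)
            cth, sth = a / rad, -b / rad      # c_theta, s_theta as in step 8
            G = np.array([[cth, -sth], [sth, cth]])
            Hf[[i, j], :] = G @ Hf[[i, j], :]
            Hx[[i, j], :] = G @ Hx[[i, j], :]
    return Hf, Hx
```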

Table 3: Overall Running Time

    Marginalization Algorithm                               Average Running Time (s)
    Optimization with Schur Complement Marginalization      0.0440574
    Optimization with Givens Rotations                      0.00953969

Table 4: Marginalization Method Comparison

    Marginalization Algorithm               Average Running Time (s)    Speed Rank
    Full Schur Complement                   0.0321067                   5
    Null Space with Givens Rotations        0.0000867469                2
    Null Space with Householder QR          0.000116933                 3
    Null Space with Direct Householder      0.000118446                 4
    Null Space with Projection Form         0.0000757341                1


Page 21: Null-Space-based Marginalization: Analysis and Algorithmudel.edu/~yuyang/downloads/tr_nullspace.pdf · 10/15/2017  · null-space marginalization for both poses and features. In particular,
