
Multiple View Geometry

Marc Pollefeys, University of North Carolina at Chapel Hill

Modified by Philippos Mordohai


Outline

• Parameter estimation
• Robust estimation (RANSAC)

• Chapter 4 of “Multiple View Geometry in Computer Vision” by Hartley and Zisserman


Parameter estimation

• 2D homography: given a set of (xi, xi'), compute H (xi' = H xi)

• 3D to 2D camera projection: given a set of (Xi, xi), compute P (xi = P Xi)

• Fundamental matrix: given a set of (xi, xi'), compute F (xi'^T F xi = 0)

• Trifocal tensor: given a set of (xi, xi', xi''), compute T


Number of measurements required

• At least as many independent equations as degrees of freedom required

• Example:

x' = H x, i.e.

$$\lambda \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

2 independent equations per point, 8 degrees of freedom: 4 × 2 ≥ 8, so 4 point correspondences suffice.

Approximate solutions

• Minimal solution: 4 points yield an exact solution for H

• More points:
  – No exact solution, because measurements are inexact ("noise")
  – Search for the "best" H according to some cost function
  – Algebraic or geometric/statistical cost

Gold Standard algorithm

• Cost function that is optimal for some assumptions

• Computational algorithm that minimizes it is called “Gold Standard” algorithm

• Other algorithms can then be compared to it


Direct Linear Transformation (DLT)

x_i' = H x_i can be rewritten as the cross-product condition x_i' × H x_i = 0.

Writing the rows of H as h^{1T}, h^{2T}, h^{3T} and x_i' = (x_i', y_i', w_i')^T:

$$H\mathbf{x}_i = \begin{pmatrix} \mathbf{h}^{1T}\mathbf{x}_i \\ \mathbf{h}^{2T}\mathbf{x}_i \\ \mathbf{h}^{3T}\mathbf{x}_i \end{pmatrix}, \qquad
\mathbf{x}_i' \times H\mathbf{x}_i = \begin{pmatrix} y_i'\,\mathbf{h}^{3T}\mathbf{x}_i - w_i'\,\mathbf{h}^{2T}\mathbf{x}_i \\ w_i'\,\mathbf{h}^{1T}\mathbf{x}_i - x_i'\,\mathbf{h}^{3T}\mathbf{x}_i \\ x_i'\,\mathbf{h}^{2T}\mathbf{x}_i - y_i'\,\mathbf{h}^{1T}\mathbf{x}_i \end{pmatrix}$$

Rearranging gives a set of equations that is linear in the entries of H:

$$\begin{pmatrix} \mathbf{0}^T & -w_i'\,\mathbf{x}_i^T & y_i'\,\mathbf{x}_i^T \\ w_i'\,\mathbf{x}_i^T & \mathbf{0}^T & -x_i'\,\mathbf{x}_i^T \\ -y_i'\,\mathbf{x}_i^T & x_i'\,\mathbf{x}_i^T & \mathbf{0}^T \end{pmatrix}
\begin{pmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{pmatrix} = \mathbf{0},
\qquad \text{i.e. } A_i\,\mathbf{h} = \mathbf{0}$$

Direct Linear Transformation (DLT)

• The equations are linear in h:

$$A_i\,\mathbf{h} = \begin{pmatrix} \mathbf{0}^T & -w_i'\,\mathbf{x}_i^T & y_i'\,\mathbf{x}_i^T \\ w_i'\,\mathbf{x}_i^T & \mathbf{0}^T & -x_i'\,\mathbf{x}_i^T \\ -y_i'\,\mathbf{x}_i^T & x_i'\,\mathbf{x}_i^T & \mathbf{0}^T \end{pmatrix}
\begin{pmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{pmatrix} = \mathbf{0}$$

• Only 2 out of the 3 rows are linearly independent (indeed, 2 equations per point), since x_i' A_i^{(1)} + y_i' A_i^{(2)} + w_i' A_i^{(3)} = 0. Keeping the first two rows:

$$\begin{pmatrix} \mathbf{0}^T & -w_i'\,\mathbf{x}_i^T & y_i'\,\mathbf{x}_i^T \\ w_i'\,\mathbf{x}_i^T & \mathbf{0}^T & -x_i'\,\mathbf{x}_i^T \end{pmatrix}
\begin{pmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{pmatrix} = \mathbf{0}$$

(only drop the third row if w_i' ≠ 0)

• Holds for any homogeneous representation of x_i', e.g. (x_i', y_i', 1)
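A minimal sketch of how the two retained rows of A_i can be assembled for one correspondence, assuming numpy and the row layout above (the function and argument names are illustrative, not from the slides):

```python
import numpy as np

def build_Ai(x, x_prime):
    """Return the 2x9 matrix A_i for one correspondence.

    x, x_prime: homogeneous 3-vectors (x, y, w) and (x', y', w').
    Rows follow the layout above: [0^T, -w'x^T, y'x^T] and [w'x^T, 0^T, -x'x^T].
    """
    x = np.asarray(x, dtype=float)
    xp, yp, wp = np.asarray(x_prime, dtype=float)
    zero = np.zeros(3)
    return np.vstack([
        np.hstack([zero,   -wp * x,  yp * x]),   # first row
        np.hstack([wp * x,  zero,   -xp * x]),   # second row
    ])
```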


Direct Linear Transformation (DLT)

• Solving for H: stack the equations from 4 correspondences,

$$A\,\mathbf{h} = \begin{pmatrix} A_1 \\ A_2 \\ A_3 \\ A_4 \end{pmatrix} \mathbf{h} = \mathbf{0}$$

The size of A is 8×9 (two rows per point) or 12×9 (three rows per point), but its rank is 8.

The trivial solution h = 0_9^T is not interesting. The 1-D null-space yields the solution of interest; pick, for example, the representative with ||h|| = 1.

Direct Linear Transformation (DLT)

• Over-determined solution: with n > 4 points, stack

$$A\,\mathbf{h} = \begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_n \end{pmatrix} \mathbf{h} = \mathbf{0}$$

There is no exact solution because of inexact measurements, i.e. "noise".

Find an approximate solution instead:
– an additional constraint is needed to avoid h = 0, e.g. ||h|| = 1
– Ah = 0 is not achievable exactly, so minimize ||Ah||

Singular Value Decomposition

Homogeneous least-squares: minimize ||A X|| subject to ||X|| = 1.

With the SVD A = U Σ V^T,

$$\|A\mathbf{X}\| = \|U\Sigma V^T\mathbf{X}\| = \|\Sigma V^T\mathbf{X}\|$$

and since ||V^T X|| = ||X|| = 1, the minimum is attained for V^T X = (0, …, 0, 1)^T, i.e. the solution is X = v_n, the last column of V.
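In numpy this homogeneous least-squares step is a single SVD call; a small sketch (names are illustrative):

```python
import numpy as np

def solve_homogeneous(A):
    """Minimize ||A X|| subject to ||X|| = 1.

    With A = U S V^T, the minimizer is the right singular vector belonging to
    the smallest singular value, i.e. the last column of V.
    """
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]   # last row of V^T = last column of V
```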


DLT algorithm

Objective: given n ≥ 4 2D-to-2D point correspondences {xi ↔ xi'}, determine the 2D homography matrix H such that xi' = H xi

Algorithm

(i) For each correspondence xi ↔ xi' compute Ai. Usually only the first two rows are needed.

(ii) Assemble n 2x9 matrices Ai into a single 2nx9 matrix A

(iii) Obtain SVD of A. Solution for h is last column of V

(iv) Determine H from h
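A minimal numpy sketch of this basic (unnormalized) DLT, assuming w_i = w_i' = 1; names are illustrative, and in practice the point coordinates should first be normalized, as the later remark on normalization suggests:

```python
import numpy as np

def dlt_homography(pts, pts_prime):
    """Estimate H (3x3, defined up to scale) from n >= 4 correspondences.

    pts, pts_prime: (n, 2) arrays of matching image points x_i <-> x_i'.
    """
    pts, pts_prime = np.asarray(pts, float), np.asarray(pts_prime, float)
    assert len(pts) >= 4 and len(pts) == len(pts_prime)
    rows = []
    for (x, y), (xp, yp) in zip(pts, pts_prime):
        xi = np.array([x, y, 1.0])                            # homogeneous x_i
        rows.append(np.hstack([np.zeros(3), -xi, yp * xi]))   # [0^T, -w'x^T, y'x^T]
        rows.append(np.hstack([xi, np.zeros(3), -xp * xi]))   # [w'x^T, 0^T, -x'x^T]
    A = np.asarray(rows)                                      # (i)-(ii): 2n x 9 matrix
    _, _, Vt = np.linalg.svd(A)                               # (iii): SVD of A
    return Vt[-1].reshape(3, 3)                               # (iv): H from h
```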


Inhomogeneous solution

Since h can only be computed up to scale, pick one entry hj = 1, e.g. h9 = 1, and solve for the remaining 8-vector h̃:

$$\begin{pmatrix} 0 & 0 & 0 & -x_i w_i' & -y_i w_i' & -w_i w_i' & x_i y_i' & y_i y_i' \\ x_i w_i' & y_i w_i' & w_i w_i' & 0 & 0 & 0 & -x_i x_i' & -y_i x_i' \end{pmatrix} \tilde{\mathbf{h}} = \begin{pmatrix} -w_i y_i' \\ w_i x_i' \end{pmatrix}$$

Solve using Gaussian elimination (4 points) or using linear least-squares (more than 4 points)

However, if h9 = 0 this approach fails, and it also gives poor results if h9 is close to zero. Therefore, it is not recommended.

Solutions from lines, etc.

Lines map according to l_i = H^T l_i', which again yields linear equations A h = 0.

2D homographies from 2D lines: minimum of 4 lines

3D homographies (15 dof): minimum of 5 points or 5 planes

2D affinities (6 dof): minimum of 3 points or lines

A conic provides 5 constraints

Mixed configurations?

Cost functions

• Algebraic distance
• Geometric distance
• Reprojection error
• Comparison
• Geometric interpretation
• Sampson error

Algebraic distance

DLT minimizes ||A h||.

e = A h : the residual vector
e_i : the partial vector for each correspondence (x_i ↔ x_i'), the algebraic error vector

Algebraic distance:

$$d_{alg}(\mathbf{x}_i', H\mathbf{x}_i)^2 = \|\mathbf{e}_i\|^2 = \left\| \begin{pmatrix} \mathbf{0}^T & -w_i'\,\mathbf{x}_i^T & y_i'\,\mathbf{x}_i^T \\ w_i'\,\mathbf{x}_i^T & \mathbf{0}^T & -x_i'\,\mathbf{x}_i^T \end{pmatrix} \mathbf{h} \right\|^2$$

More generally, for any two points x_1, x_2:

$$d_{alg}(\mathbf{x}_1, \mathbf{x}_2)^2 = a_1^2 + a_2^2, \qquad \text{where } \mathbf{a} = (a_1, a_2, a_3)^T = \mathbf{x}_1 \times \mathbf{x}_2$$

Summed over all correspondences:

$$\sum_i d_{alg}(\mathbf{x}_i', H\mathbf{x}_i)^2 = \sum_i \|\mathbf{e}_i\|^2 = \|A\mathbf{h}\|^2 = \|\mathbf{e}\|^2$$

Not geometrically/statistically meaningful, but given good normalization it works fine and is very fast (use for initialization).
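A small sketch of the per-correspondence algebraic error under the conventions above (illustrative names; w_i = w_i' = 1 assumed):

```python
import numpy as np

def algebraic_error(H, x, x_prime):
    """d_alg(x', Hx)^2 for one correspondence of inhomogeneous 2D points.

    The algebraic error vector e_i consists of the first two components of
    x' x (H x), i.e. the two retained rows of A_i applied to h.
    """
    xh = np.array([x[0], x[1], 1.0])
    xph = np.array([x_prime[0], x_prime[1], 1.0])
    e = np.cross(xph, H @ xh)[:2]   # e_i
    return float(e @ e)             # ||e_i||^2
```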


Geometric distance

Notation: x are measured coordinates, x̂ estimated coordinates, x̄ true coordinates; d(·,·) is the Euclidean distance in the image.

Error in one image (e.g. the points x̄_i come from a calibration pattern and are exact):

$$\hat{H} = \arg\min_H \sum_i d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2$$

Symmetric transfer error:

$$\hat{H} = \arg\min_H \sum_i d(\mathbf{x}_i, H^{-1}\mathbf{x}_i')^2 + d(\mathbf{x}_i', H\mathbf{x}_i)^2$$

Reprojection error:

$$(\hat{H}, \hat{\mathbf{x}}_i, \hat{\mathbf{x}}_i') = \arg\min_{H, \hat{\mathbf{x}}_i, \hat{\mathbf{x}}_i'} \sum_i d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2 \quad \text{subject to } \hat{\mathbf{x}}_i' = \hat{H}\hat{\mathbf{x}}_i$$

Reprojection error

[Figure: geometric interpretation comparing the symmetric transfer error, d(x, Ĥ⁻¹x')² + d(x', Ĥx)², with the reprojection error, d(x, x̂)² + d(x', x̂')².]
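The transfer-based costs above can be sketched as follows (illustrative names; points are inhomogeneous 2D arrays, H a 3x3 numpy array):

```python
import numpy as np

def transfer(H, p):
    """Map an inhomogeneous 2D point through H and dehomogenize."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def symmetric_transfer_error(H, x, x_prime):
    """d(x, H^-1 x')^2 + d(x', H x)^2 for a single correspondence."""
    x, x_prime = np.asarray(x, float), np.asarray(x_prime, float)
    d1 = x - transfer(np.linalg.inv(H), x_prime)   # error in the first image
    d2 = x_prime - transfer(H, x)                  # error in the second image
    return float(d1 @ d1 + d2 @ d2)
```

The reprojection error, by contrast, also optimizes over the estimated points x̂_i, x̂_i', so it requires a nonlinear minimization rather than a closed-form per-point expression.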


Outline

• Parameter estimation
• Robust estimation (RANSAC)

• Chapter 4 of "Multiple View Geometry in Computer Vision" by Hartley and Zisserman


Robust estimation

• What if set of matches contains gross outliers?


RANSAC

Objective: robust fit of a model to a data set S which contains outliers

Algorithm

(i) Randomly select a sample of s data points from S and instantiate the model from this subset.

(ii) Determine the set of data points Si which are within a distance threshold t of the model. The set Si is the consensus set of samples and defines the inliers of S.

(iii) If the size of Si (the number of inliers) is greater than some threshold T, re-estimate the model using all the points in Si and terminate

(iv) If the size of Si is less than T, select a new subset and repeat the above.

(v) After N trials the largest consensus set Si is selected, and the model is re-estimated using all the points in the subset Si
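A compact sketch of these steps for the homography case, reusing dlt_homography and symmetric_transfer_error from the earlier sketches (sample size s = 4; a fixed N and no early termination on T, so steps (iii)-(iv) are simplified; names are illustrative):

```python
import numpy as np

def ransac_homography(pts, pts_prime, t=1.25, N=1000, rng=None):
    """Robust fit of H to correspondences that may contain gross outliers.

    pts, pts_prime: (n, 2) arrays of putative matches; t: distance threshold.
    """
    pts, pts_prime = np.asarray(pts, float), np.asarray(pts_prime, float)
    n = len(pts)
    best_inliers = np.zeros(n, dtype=bool)
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(N):
        sample = rng.choice(n, size=4, replace=False)          # (i) minimal sample
        H = dlt_homography(pts[sample], pts_prime[sample])
        d2 = np.array([symmetric_transfer_error(H, x, xp)      # (ii) consensus set
                       for x, xp in zip(pts, pts_prime)])
        inliers = d2 < t ** 2                                  # threshold on squared error
        if inliers.sum() > best_inliers.sum():                 # keep largest consensus set
            best_inliers = inliers
    # (v) re-estimate the model from all points in the largest consensus set
    return dlt_homography(pts[best_inliers], pts_prime[best_inliers]), best_inliers
```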


How many samples?

• Choose t so that the probability of an inlier is α (e.g. 0.95)
  – Or choose t empirically

• Choose N so that, with probability p, at least one random sample is free from outliers, e.g. p = 0.99:

$$(1 - (1 - e)^s)^N = 1 - p \quad\Rightarrow\quad N = \log(1 - p) / \log(1 - (1 - e)^s)$$

Number of samples N for sample size s and proportion of outliers e:

  s \ e    5%   10%   20%   25%   30%   40%   50%
    2       2    3     5     6     7    11    17
    3       3    4     7     9    11    19    35
    4       3    5     9    13    17    34    72
    5       4    6    12    17    26    57   146
    6       4    7    16    24    37    97   293
    7       4    8    20    33    54   163   588
    8       5    9    26    44    78   272  1177
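The table entries can be reproduced directly from the formula; a small sketch:

```python
import math

def num_samples(e, s, p=0.99):
    """N = log(1 - p) / log(1 - (1 - e)^s), rounded up."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

# e.g. num_samples(e=0.5, s=4) == 72, matching the table entry for s = 4, 50% outliers
```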


Acceptable consensus set?

• Typically, terminate when the size of the consensus set reaches the expected number of inliers:

T = (1 − e) n

Adaptively determining the number of samples

e is often unknown a priori, so pick the worst case, e.g. 50%, and adapt if more inliers are found (e.g. 80% inliers would yield e = 0.2):

– N = ∞, sample_count = 0
– While N > sample_count repeat:
  • Choose a sample and count the number of inliers
  • Set e = 1 − (number of inliers)/(total number of points)
  • Recompute N from e, using N = log(1 − p) / log(1 − (1 − e)^s) as before
  • Increment sample_count by 1
– Terminate
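A sketch of this adaptive loop; a single RANSAC trial is left as a callback, and max_trials is an added safety cap not mentioned in the slides (names are illustrative):

```python
import math

def adaptive_sample_count(run_trial, total, s=4, p=0.99, max_trials=10000):
    """Adapt N while sampling: start from the worst case, shrink N as inliers are found.

    run_trial() performs one random sample and returns its number of inliers;
    total is the total number of data points.
    """
    N, sample_count, best = float('inf'), 0, 0
    while sample_count < N and sample_count < max_trials:
        best = max(best, run_trial())         # choose a sample, count inliers
        e = 1.0 - best / total                # current outlier-ratio estimate
        frac = 1.0 - (1.0 - e) ** s
        if frac <= 0.0:
            N = 0.0                           # all points are inliers: stop
        elif frac >= 1.0:
            N = float('inf')                  # no inliers yet: keep sampling
        else:
            N = math.log(1.0 - p) / math.log(frac)
        sample_count += 1
    return best, sample_count
```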


Robust Maximum Likelihood Estimation

Previous MLE algorithm considers fixed set of inliers

Better: a robust cost function that reclassifies points as inliers or outliers:

$$D = \sum_i \rho(e_i^2) \qquad \text{with} \qquad \rho(e^2) = \begin{cases} e^2 & e^2 < t^2 \;\text{(inlier)} \\ t^2 & e^2 \ge t^2 \;\text{(outlier)} \end{cases}$$
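A sketch of this truncated quadratic cost (illustrative names):

```python
import numpy as np

def robust_cost(sq_errors, t):
    """Sum of rho(e^2): e^2 for inliers (e^2 < t^2), a constant t^2 for outliers."""
    sq_errors = np.asarray(sq_errors, dtype=float)
    return float(np.sum(np.where(sq_errors < t ** 2, sq_errors, t ** 2)))
```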


Other robust algorithms

• RANSAC maximizes the number of inliers
• LMedS minimizes the median error
  – No threshold needed
  – Needs an estimate of the percentage of outliers

• Not recommended: case deletion, iterative least-squares, etc.


Automatic computation of H

Objective: compute the homography between two images

Algorithm

(i) Interest points: Compute interest points in each image

(ii) Putative correspondences: Compute a set of interest point matches based on some similarity measure

(iii) RANSAC robust estimation: Repeat for N samples

(a) Select 4 correspondences and compute H

(b) Calculate the distance d for each putative match

(c) Compute the number of inliers consistent with H (d<t)

Choose H with most inliers

(iv) Optimal estimation: re-estimate H from all inliers by minimizing ML cost function with Levenberg-Marquardt

(v) Guided matching: Determine more matches using prediction by computed H

Optionally iterate last two steps until convergence
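Stitched together, the pipeline might be sketched as below. The interest-point detector, matcher, and Levenberg-Marquardt refinement are passed in as callbacks because the slides do not fix particular implementations; ransac_homography is the earlier sketch:

```python
def compute_homography_auto(img1, img2, detect, match, refine=None, t=1.25):
    """Automatic H computation following steps (i)-(v) above.

    detect(img) -> (keypoints, descriptors); match(...) -> (pts, pts_prime) as arrays;
    refine(H, pts, pts_prime) -> H implements the ML (Levenberg-Marquardt) step.
    """
    kps1, desc1 = detect(img1)                          # (i) interest points
    kps2, desc2 = detect(img2)
    pts, pts_prime = match(kps1, desc1, kps2, desc2)    # (ii) putative correspondences
    H, inliers = ransac_homography(pts, pts_prime, t)   # (iii) robust estimation
    if refine is not None:                              # (iv) optimal (ML) estimation
        H = refine(H, pts[inliers], pts_prime[inliers])
    # (v) guided matching with H, and iteration of (iv)-(v), would go here
    return H, inliers
```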


Determine putative correspondences

• Compare interest points using a similarity measure:
  – SAD, SSD, ZNCC on a small neighborhood

• If motion is limited, only consider interest points with similar coordinates

• More advanced approaches exist, based on invariance…
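A sketch of SSD-based putative matching over fixed-size patches; it assumes grayscale numpy images and integer keypoint coordinates far enough from the border, and the default thresholds simply echo the numbers used in the example that follows:

```python
import numpy as np

def ssd(p1, p2):
    """Sum of squared differences between two equally sized patches."""
    d = p1.astype(float) - p2.astype(float)
    return float(np.sum(d * d))

def putative_matches(img1, kps1, img2, kps2, half=5, max_ssd=20.0, max_disp=320):
    """Match each keypoint of img1 to its best SSD partner in img2.

    Only candidates within max_disp pixels are considered (limited motion),
    and matches with SSD above max_ssd are rejected.
    """
    matches = []
    for i, (x1, y1) in enumerate(kps1):
        p1 = img1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
        best_j, best_score = -1, max_ssd
        for j, (x2, y2) in enumerate(kps2):
            if abs(x1 - x2) > max_disp or abs(y1 - y2) > max_disp:
                continue
            p2 = img2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1]
            score = ssd(p1, p2)
            if score < best_score:
                best_j, best_score = j, score
        if best_j >= 0:
            matches.append((i, best_j))
    return matches
```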


Example: robust computation

Interest points: 500 per image (640×480)

Putative correspondences: 268 (best match, SSD < 20, ±320)

Outliers: 117 (t = 1.25 pixel; 43 iterations)

Inliers: 151

Final inliers: 262 (2 MLE-inlier cycles; d = 0.23 → d = 0.19; 10 Levenberg-Marquardt iterations)

Adaptive number of samples during RANSAC:

  #in   1−e   adapt. N
    6    2%       20M
   10    3%      2.5M
   44   16%     6,922
   58   21%     2,291
   73   26%       911
  151   56%        43
