3D Model Matching with Viewpoint-Invariant Patches (VIP)
Reporter: 鄒嘉恆
Date: 10/06/2009
Introduction
This paper introduces the viewpoint-invariant patch (VIP), a feature for robust registration and large-scale scene reconstruction.
Outline
Viewpoint-Invariant Patch (VIP)
Hierarchical estimation of 3D similarity transformation
Experimental results and evaluation
Conclusion
VIP - Viewpoint normalization
Warp the image texture
Project the texture
Extract the VIP descriptor
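As a rough sketch of the viewpoint-normalization step (not the authors' implementation; the function name, camera model, and nearest-neighbour sampling are assumptions for illustration), the image texture can be resampled on a regular grid lying on the local scene plane, yielding a fronto-parallel patch from which the descriptor is extracted:

```python
import numpy as np

def orthotexture(image, K, plane_point, n, d_axis, size, res):
    """Sample `image` on a regular grid on the scene plane (camera frame),
    producing a viewpoint-normalized patch texture. Hypothetical sketch:
    K is the 3x3 intrinsic matrix, `size` the patch side length in scene
    units, `res` the output resolution."""
    n = n / np.linalg.norm(n)
    u = d_axis - np.dot(d_axis, n) * n      # in-plane x axis
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)                      # in-plane y axis
    patch = np.zeros((res, res))
    for i in range(res):
        for j in range(res):
            # 3D point on the plane corresponding to this texel
            p = plane_point + ((j - res / 2) * u + (i - res / 2) * v) * (size / res)
            q = K @ p                       # project into the image
            x, y = q[0] / q[2], q[1] / q[2]
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < image.shape[0] and 0 <= xi < image.shape[1]:
                patch[i, j] = image[yi, xi]  # nearest-neighbour sampling
    return patch
```

Because the grid lives on the plane itself, the resulting texture is the same regardless of the viewpoint the image was taken from, which is what makes the descriptor viewpoint-invariant.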
VIP - VIP generation
A VIP is defined as (x, σ, n, d, s):
x: 3D position
σ: patch size
n: surface normal
d: dominant orientation
s: SIFT descriptor
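The five-tuple above maps naturally onto a small record type. As a minimal sketch (the class name and array types are assumptions, not the paper's code):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VIP:
    """One viewpoint-invariant patch, as the (x, σ, n, d, s) tuple."""
    x: np.ndarray       # 3D position of the patch center
    sigma: float        # patch size (scale)
    n: np.ndarray       # surface normal of the local plane
    d: np.ndarray       # dominant orientation within the patch
    s: np.ndarray       # 128-D SIFT descriptor of the normalized texture
```

Note that, unlike a plain SIFT keypoint, the geometric part (x, σ, n, d) alone already determines a full 3D similarity hypothesis when matched against another VIP.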
Hierarchical estimation of 3D similarity transformation
A 3D similarity transformation can be estimated from a single VIP correspondence (x1, σ1, n1, d1, s1) ↔ (x2, σ2, n2, d2, s2):
scaling: s = σ2 / σ1
rotation: R maps the patch frame (n1, d1) onto (n2, d2)
translation: t = x2 − s·R·x1
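The three quantities can be read off one correspondence directly: the scale from the patch-size ratio, the rotation from aligning the two patch frames, and the translation from whatever is left over. A sketch under those assumptions (function names and the frame construction are mine, not the paper's):

```python
import numpy as np

def rotation_from_frames(n1, d1, n2, d2):
    """Rotation mapping the orthonormal frame built from (n1, d1)
    onto the frame built from (n2, d2)."""
    def frame(n, d):
        n = n / np.linalg.norm(n)
        d = d - np.dot(d, n) * n            # project orientation into the patch plane
        d = d / np.linalg.norm(d)
        return np.column_stack([n, d, np.cross(n, d)])
    F1, F2 = frame(n1, d1), frame(n2, d2)
    return F2 @ F1.T

def similarity_from_vip_pair(vip1, vip2):
    """One VIP correspondence -> one (scale, R, t) similarity hypothesis.
    Each vip is a (x, sigma, n, d) geometric tuple."""
    x1, s1, n1, d1 = vip1
    x2, s2, n2, d2 = vip2
    scale = s2 / s1                         # scaling from the patch-size ratio
    R = rotation_from_frames(n1, d1, n2, d2)
    t = x2 - scale * (R @ x1)               # translation closes the transform
    return scale, R, t
```

This is the key property the hierarchical estimation exploits: every single match votes with a complete similarity hypothesis.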
Hierarchical estimation of 3D similarity transformation
Hierarchical Efficient Hypothesis Test (HEHT) method, in 3 stages:
Scaling
Rotation
Translation
Using RANSAC with VIP correspondences.
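The three stages can be sketched as successive voting rounds, each pruning the hypotheses that survived the previous one. The structure below is a simplified illustration of that idea, not the paper's algorithm: the tolerances, the median-based scale vote, and the pairwise consensus counting are all assumptions.

```python
import numpy as np

def rotation_angle(Ra, Rb):
    """Angle (radians) of the relative rotation Ra * Rb^T."""
    c = (np.trace(Ra @ Rb.T) - 1.0) / 2.0
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def heht(hyps, scale_tol=0.15, rot_tol=0.1, trans_tol=0.5):
    """hyps: list of (scale, R, t) hypotheses, one per VIP correspondence.
    Returns indices of the hypotheses surviving all three voting stages."""
    idx = list(range(len(hyps)))
    # Stage 1: scale -- vote in log-space, keep hypotheses near the median.
    log_s = np.log([hyps[i][0] for i in idx])
    med = np.median(log_s)
    idx = [i for i, ls in zip(idx, log_s) if abs(ls - med) < scale_tol]
    # Stage 2: rotation -- keep hypotheses close to the best-supported rotation.
    best = max(idx, key=lambda i: sum(
        rotation_angle(hyps[i][1], hyps[j][1]) < rot_tol for j in idx))
    idx = [i for i in idx if rotation_angle(hyps[i][1], hyps[best][1]) < rot_tol]
    # Stage 3: translation -- same consensus vote on translation distance.
    best = max(idx, key=lambda i: sum(
        np.linalg.norm(hyps[i][2] - hyps[j][2]) < trans_tol for j in idx))
    idx = [i for i in idx if np.linalg.norm(hyps[i][2] - hyps[best][2]) < trans_tol]
    return idx
```

Voting on scale first is cheap (a 1D problem) and discards most outliers before the more expensive rotation and translation checks run.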
Experimental results and evaluation
Two metrics:
The number of inlier correspondences
The re-detection rate
Experimental results and evaluation
Structure from Motion (SfM) is used to compute the depth maps and camera positions for each sequence.
Camera positions were defined relative to the pose of the first camera in each sequence.
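Re-expressing all cameras relative to the first one is a small change of coordinates. As a sketch (assuming world-to-camera convention X_cam = R·X + t; the function name is mine):

```python
import numpy as np

def poses_relative_to_first(poses):
    """poses: list of (R, t) world-to-camera transforms.
    Re-express every camera in the frame of the first one,
    so the first camera becomes (I, 0)."""
    R0, t0 = poses[0]
    rel = []
    for R, t in poses:
        R_rel = R @ R0.T            # relative rotation w.r.t. camera 0
        t_rel = t - R_rel @ t0      # relative translation w.r.t. camera 0
        rel.append((R_rel, t_rel))
    return rel
```

Anchoring each sequence to its own first camera makes the two reconstructions directly comparable up to the single 3D similarity the matching has to recover.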
Experimental results and evaluation
(Charts: number of inliers; re-detection rate)
Experimental results and evaluation
(Figures: Scene 1, Scene 2, Scene 3)
Conclusion
Their evaluation demonstrates that VIP features improve on current methods for robust and accurate 3D model alignment.