
Page 1:

A Robust Super Resolution Method for Images of 3D Scenes

Pablo L. Sala

Department of Computer Science

University of Toronto

Page 2:

Super Resolution

“The process of obtaining, from a set of overlapping images, a new image that, in the regions of overlap, has a higher resolution than that of each individual image.”

This work:

• Introduces a method for robust super resolution

• Applies it to obtain higher-resolution images of a 3D scene from a set of calibrated low-resolution images, under the assumption that the scene can be approximated by planes lying in 3D space.

Page 3:

The Registration Problem

The algorithm presented here starts from a discrete set of planes in 3D space; for each plane, it back-projects all the input images onto that plane and performs super-resolution.

Only the image pixels corresponding to 3D scene points that actually lie on the plane will be back-projected to the same locations on the plane.
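A minimal numpy sketch of this back-projection step, assuming each plane is given by an origin point p0 and two in-plane basis vectors e1, e2, and each camera by a 3×4 matrix P; the function names are illustrative, not from the original work:

```python
import numpy as np

def plane_to_image_homography(P, p0, e1, e2):
    """Homography mapping plane coordinates (u, v) to image pixels for camera P.

    A 3D point on the plane is X(u, v) = p0 + u*e1 + v*e2, so its projection
    P @ [X; 1] equals M @ [u, v, 1]^T for the 3x3 matrix M built below.
    """
    A = P[:, :3]                       # left 3x3 block of P = [A | a]
    a = P[:, 3]
    return np.column_stack([A @ e1, A @ e2, A @ p0 + a])

def back_project(image, P, p0, e1, e2, grid_u, grid_v):
    """Sample the input image at the pixels where each plane point (u, v) projects."""
    H = plane_to_image_homography(P, p0, e1, e2)
    uv1 = np.stack([grid_u.ravel(), grid_v.ravel(), np.ones(grid_u.size)])
    xyw = H @ uv1
    x = (xyw[0] / xyw[2]).round().astype(int)   # nearest-neighbour sampling
    y = (xyw[1] / xyw[2]).round().astype(int)
    h, w = image.shape
    valid = (x >= 0) & (x < w) & (y >= 0) & (y < h)
    out = np.zeros(grid_u.size)
    out[valid] = image[y[valid], x[valid]]
    return out.reshape(grid_u.shape)
```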

Page 4:

Problem Formulation

Assumptions:

• The scene can be thought of as mostly lying on planes in 3D space.

• All viewpoints are located in a region that can be separated from the scene by a plane, so that a notion of scene depth is well defined.

The Idea:

• A set of N calibrated images of the scene.

• A list of planes ordered by their depth.

• The image set is partitioned into disjoint subsets of roughly equal size.

• Super-resolution is attempted on each plane using the back-projections of the images of each subset onto that plane.

• The resulting higher-resolution images are compared; the regions where they coincide are assumed to correspond to parts of the scene lying on the plane (see the sketch after this list).

• The portions of each image that correspond to these regions are not taken into account when applying super-resolution on the following planes.
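A minimal sketch of the coincidence check mentioned in the list above, assuming the per-subset reconstructions are same-shaped arrays and that a simple per-pixel threshold tol (an illustrative parameter, not specified in the slides) decides agreement:

```python
import numpy as np

def coincidence_region(reconstructions, tol=10.0):
    """Pixels where all per-subset super-resolved images agree to within `tol`.

    `reconstructions` is a list of same-shaped arrays (one per image subset);
    the returned boolean mask marks the region assumed to lie on the plane.
    """
    stack = np.stack(reconstructions)                 # (R, ...) stack of results
    spread = stack.max(axis=0) - stack.min(axis=0)    # per-pixel disagreement
    return spread <= tol
```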

Page 5:

Standard Super-Resolution

Super-resolution can be formulated as an optimization problem:

$$E(X) = \sum_{j=1}^{N} \left\| F_j X - x_j \right\|^2$$

where X is the unknown higher-resolution image, the xj are the low-resolution images, and Fj = Dj Hj Wj are the image formation matrices, with Dj a decimation matrix, Hj a blurring matrix, and Wj a geometric warp matrix.

Differentiating E with respect to X and setting it equal to 0 gives:

$$\frac{\partial E}{\partial X} = \sum_{j=1}^{N} F_j^T F_j X - \sum_{j=1}^{N} F_j^T x_j = 0$$
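A minimal sketch of solving these normal equations, assuming the Fj are available as scipy sparse matrices and using conjugate gradients; the function name is illustrative:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def standard_super_resolution(Fs, xs, n_hr):
    """Solve sum_j ||F_j X - x_j||^2 via its normal equations.

    Fs:   list of sparse image formation matrices F_j = D_j H_j W_j
    xs:   list of low-resolution images, flattened to 1-D vectors
    n_hr: number of pixels in the high-resolution image X
    """
    def normal_matvec(v):
        # (sum_j F_j^T F_j) v
        return sum(F.T @ (F @ v) for F in Fs)

    A = LinearOperator((n_hr, n_hr), matvec=normal_matvec)
    b = sum(F.T @ x for F, x in zip(Fs, xs))          # sum_j F_j^T x_j
    X, _ = cg(A, b)
    return X
```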

Page 6:

Robust Super-Resolution

Objective function:

$$E(X) = \sum_{j=1}^{N} \sum_{i=1}^{M} \rho(e_{ij})$$

Error:

$$e_{ij} = \left( F_j X - x_j \right)_i$$

Cauchy’s robust estimator:

$$\rho(e) = \frac{e^2}{\sigma^2 + e^2}$$
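A minimal sketch of this robust objective, using the estimator as reconstructed above with scale parameter σ; the function names are illustrative:

```python
import numpy as np

def rho(e, sigma):
    """Robust estimator from the slide: rho(e) = e^2 / (sigma^2 + e^2)."""
    return e**2 / (sigma**2 + e**2)

def robust_energy(Fs, xs, X, sigma):
    """E(X) = sum_j sum_i rho(e_ij), with residuals e_j = F_j X - x_j."""
    return sum(rho(F @ X - x, sigma).sum() for F, x in zip(Fs, xs))
```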

Page 7:

Robust Super-Resolution

Differentiating E with respect to X and setting it equal to 0:

$$\frac{\partial E}{\partial X} = \sum_{j=1}^{N} F_j^T W_j F_j X - \sum_{j=1}^{N} F_j^T W_j x_j = 0$$

where Wj is a diagonal matrix such that:

$$(W_j)_{ii} = \frac{\rho'(e_{ij})}{e_{ij}} = \frac{2\sigma^2}{\left( \sigma^2 + e_{ij}^2 \right)^2}$$

Iteratively re-weighted least squares:

$$A(X^k)\, X^{k+1} = b(X^k)$$

Initial guess X0: the higher-resolution image computed using the standard version of super-resolution.
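A minimal IRLS sketch following the update above, reusing the sparse Fj from the standard case and the weight (Wj)_ii = ρ'(e_ij)/e_ij derived from the estimator on the previous slide; all names are illustrative:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

def weight(e, sigma):
    """IRLS weight rho'(e)/e for rho(e) = e^2 / (sigma^2 + e^2)."""
    return 2 * sigma**2 / (sigma**2 + e**2) ** 2

def robust_super_resolution(Fs, xs, X0, sigma, n_iters=10):
    """Iteratively re-weighted least squares: solve A(X^k) X^{k+1} = b(X^k)."""
    X = X0.copy()
    n_hr = X.size
    for _ in range(n_iters):
        # Recompute the diagonal weight matrices W_j from the current residuals.
        Ws = [diags(weight(F @ X - x, sigma)) for F, x in zip(Fs, xs)]

        def matvec(v):
            # (sum_j F_j^T W_j F_j) v
            return sum(F.T @ (W @ (F @ v)) for F, W in zip(Fs, Ws))

        A = LinearOperator((n_hr, n_hr), matvec=matvec)
        b = sum(F.T @ (W @ x) for F, W, x in zip(Fs, Ws, xs))
        X, _ = cg(A, b, x0=X)            # X^{k+1}
    return X
```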

Page 8:

The Algorithm

Input: L, a list of planes in 3D space; Ij the images; Pj the camera matrices

• Set O = ∅ (the set of occluding 3D points)

• Randomly partition the image set into two or more disjoint subsets S1, ..., SR.

• For each plane π in L do:

  • For each subset Si do:

    • Use π, Pj and O to compute the image formation matrices Fj

    • X0 = standard super-resolution of the images in Si

    • Yi = robust super-resolution of the images in Si, using X0 as the initial guess

  • R = the region of π where all the Yi coincide

  • Assume R is the portion of the scene that lies on π

  • O = O ∪ R
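A minimal Python sketch of this loop, reusing the routines sketched on the earlier slides; build_formation_matrices and points_on_plane are hypothetical helpers standing in for the plane back-projection and the occlusion bookkeeping, and the parameter defaults are illustrative:

```python
import numpy as np

def robust_sr_for_3d_scene(planes, images, cameras, n_hr,
                           n_subsets=2, sigma=5.0, tol=10.0):
    """Sketch of the main loop: partition, super-resolve per plane, compare, record occlusions.

    planes:  plane parameterizations, ordered by depth
    images:  calibrated low-resolution images I_j (flattened to 1-D arrays)
    cameras: camera matrices P_j
    n_hr:    number of pixels in each high-resolution reconstruction
    """
    O = set()                                 # occluding 3D points found so far
    # Randomly partition the image indices into disjoint subsets S_1, ..., S_R.
    subsets = np.array_split(np.random.permutation(len(images)), n_subsets)

    outputs = []
    for plane in planes:
        Ys = []
        for S in subsets:
            # build_formation_matrices is a hypothetical helper: it back-projects
            # the selected images onto `plane` (using the cameras) and drops
            # pixels whose 3D points are already in O.
            Fs, xs = build_formation_matrices(plane, [cameras[j] for j in S],
                                              [images[j] for j in S], O)
            X0 = standard_super_resolution(Fs, xs, n_hr)           # earlier sketch
            Ys.append(robust_super_resolution(Fs, xs, X0, sigma))  # earlier sketch
        mask = coincidence_region(Ys, tol)    # region R where all the Y_i agree
        # points_on_plane is a hypothetical helper mapping the agreeing region R
        # back to 3D points on the plane; this implements O = O ∪ R.
        O |= points_on_plane(plane, mask)
        outputs.append((Ys[0], mask))
    return outputs
```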

Page 9:

Observations

For the algorithm to be effective:

• The 3D planes should be uniformly distributed, close to one another, and should cover the full depth range of the scene, so that every portion of the scene is detected and occlusions are handled correctly.

• The scene should be highly textured for the similarity check to work well.

Page 10:

Experimental Results

Synthetic scene:

Page 11:

Experimental Results

• 12 images were used as input

• 2 disjoint subsets

Page 12:

Experimental Results

Standard vs Robust Super-Resolution:

Page 13:

Experimental Results

Output images:

Page 14:

Observations and Future Work

• Results degrade as the algorithm advances through the 3D planes, due to errors in determining precisely which portion of the scene lies on each plane.

• Using more images and partitioning the image set into more than two subsets would make the similarity check more accurate.

• Using color images instead of just B&W: the super-resolution steps of the algorithm would be applied to each color channel separately, and similarity would be assumed when the reconstruction difference is close to zero for all three channels (a minimal sketch follows).
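A minimal sketch of the proposed color extension, assuming the single-channel robust_super_resolution routine sketched earlier; the function names and the per-channel agreement test are illustrative:

```python
import numpy as np

def color_super_resolution(Fs, xs_rgb, X0_rgb, sigma):
    """Apply single-channel robust super-resolution to each RGB channel separately.

    xs_rgb: list of low-resolution images as (M, 3) arrays (one column per channel)
    X0_rgb: initial high-resolution guess as an (N, 3) array
    """
    channels = []
    for c in range(3):
        xs_c = [x[:, c] for x in xs_rgb]
        channels.append(robust_super_resolution(Fs, xs_c, X0_rgb[:, c], sigma))
    return np.column_stack(channels)

def channels_agree(recon_a, recon_b, tol=10.0):
    """Similarity is assumed only where the reconstructions are close in ALL three channels."""
    return np.all(np.abs(recon_a - recon_b) <= tol, axis=1)
```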