CS 376b Introduction to Computer Vision
04/29/2008
Instructor: Michael Eckmann
Michael Eckmann - Skidmore College - CS 376b - Spring 2008
Today's Topics
• Comments/Questions
• Look back on the course and what there is still to learn
• Chapter 11 – 2D matching
  – matching in 2D (models to images)
    • local feature focus method
    • pose clustering
    • geometric hashing
Course Information (slide from 1st day)
• First week and a half to two weeks maximum
  – dive in and learn the major differences between C++ and Java
    • so you can code the assignments in C++ using the OpenCV library
    • C++ programming knowledge is a great skill to have for any computer science major
  – quick overview of the OpenCV library
  – I will provide sample programs using the OpenCV library.
Course Information (slide from 1st day)
• Computer Vision topics to be covered
  – parts of chapters 1-7 and 9-11 in our textbook
  – additional material when our text doesn't go deep enough into a topic
    • e.g. image processing techniques
• Expect to have 4 programming assignments
  – 1st one will make sure you understand certain important C++ concepts as well as how to use the OpenCV library in your code
  – 2nd one will deal with image processing techniques
  – 3rd and 4th will probably have to do with edge/feature detection and segmentation
What we accomplished
• You learned all the major differences between C++ and Java and should be comfortable programming in C++.
• We covered image processing techniques and the main operations that are used in many computer vision techniques (e.g. morphological operations, histograms, Fourier transforms).
• Filtering images and convolution / cross-correlation techniques (for smoothing, noise reduction/suppression, edge detection, etc.)
• We examined a few larger topics utilizing the lower-level techniques: segmentation, texture measures, contour/line finding, Hough transforms, matching in 2D, etc.
• We also covered a good chunk of chapter 12 (Perceiving 3D from 2D).
Looking ahead
• A better name for the course might have been
  – C++ for Java programmers, some digital image processing, and low-level computer vision techniques
• We didn't cover 3D sensing in any depth, nor camera calibration.
• We didn't cover any complete application areas (e.g. robot navigation, iris recognition, surveillance, ...)
  – but you should know that all these applications use some of the various techniques we learned, as well as ones we haven't, combined in various ways to accomplish their tasks
• You now have a good background for further study in computer vision/image processing. It is a field with plenty of unsolved problems and continues to have an active community of researchers.
Michael Eckmann - Skidmore College - CS 376 - Spring 2007
2D object recognition via affine mapping
• Our text describes 3 techniques to determine an affine transformation from a model to an image:
  • Local Feature Focus method
  • Pose Clustering
  • Geometric Hashing
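For concreteness, all three techniques estimate the same kind of object: a 2D affine map that carries model points into image coordinates. A minimal sketch (mine, not from the text; the parameter layout is an illustrative choice):

```python
# A 2D affine transform: (x, y) -> (a*x + b*y + tx, c*x + d*y + ty).
def apply_affine(params, point):
    a, b, c, d, tx, ty = params
    x, y = point
    return (a * x + b * y + tx, c * x + d * y + ty)

def map_model(params, points):
    """Map every model point into image coordinates."""
    return [apply_affine(params, p) for p in points]

# Example: a 90-degree rotation plus a translation of (5, 0).
T = (0.0, -1.0, 1.0, 0.0, 5.0, 0.0)
print(map_model(T, [(1, 0), (0, 1)]))  # -> [(5.0, 1.0), (4.0, 0.0)]
```

Each method below is a strategy for recovering such a transform from feature correspondences.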
Local Feature Focus method
• This is a process to determine if an object model appears in an image and, if so, what the general affine transform between the model and the image is.
• The model has a set of focus features, which are major features that should be easy to find in an image of this object (as long as they are not occluded).
• The model also has a set of nearby features for each focus feature, to allow verification of a correct focus feature match and to help determine position and orientation.
• Let's see the algorithm and figures from the text.
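The text's full algorithm isn't reproduced here, but the verification idea (check whether the model's nearby features appear where they should, relative to a candidate focus feature) can be sketched roughly as follows. This is my simplification, translation-only with no rotation, and every name in it is illustrative:

```python
import math

def verify_focus_match(focus, nearby_offsets, image_features, tol=1.0):
    """Count how many of the model's expected nearby features (given as
    (dx, dy, type) offsets from the focus feature) have a detected image
    feature of the same type within tol of the expected position."""
    fx, fy, _ = focus
    found = 0
    for dx, dy, ftype in nearby_offsets:
        ex, ey = fx + dx, fy + dy          # expected image position
        if any(t == ftype and math.hypot(x - ex, y - ey) <= tol
               for (x, y, t) in image_features):
            found += 1
    return found

# A candidate focus feature at (10, 10) with two expected neighbors;
# both are present among the detected image features.
focus = (10, 10, 0)
offsets = [(2, 0, 1), (0, 3, 2)]
detected = [(12, 10, 1), (10, 13, 2), (50, 50, 1)]
print(verify_focus_match(focus, offsets, detected))  # -> 2
```

A high count supports the focus-feature match; the matched neighbors then provide the point correspondences needed to solve for position and orientation.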
Pose Clustering
• This is another process to determine if an object model appears in an image and, if so, what the general affine transform between the model and the image is.
• The model has a set of features and the image has a set of features. These need to be matched. The general idea of pose clustering is to take every possible pair of matching points, compute an RST (rotation, scaling, translation) transform, and then check for clusters of RST transforms.
• To get less redundancy and more accuracy, instead of using every possible pair we can
  – filter our features by type, where a certain type of feature will only match a feature of the same type (ex. fig. 11.13)
Pose Clustering
• To get less redundancy and more accuracy, instead of using every possible pair we can
  – filter our features by type, where a certain type of feature will only match a feature of the same type
  – then only use pairs of matching points that satisfy the above
• Compute the RST transforms as before, but now for a smaller set of matching pairs.
• For each RST transform with specific computed parameters, count the number of other RST transforms that are within some distance of the transform parameters.
• There are n-1 distance computations for each of n parameter sets of RST transforms.
• Or we can use binning (like Hough): this will be faster, but bins need to be chosen well to capture similar parameter sets in the same bin.
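The two steps above, computing an RST transform from one pair of matched points and then the naive O(n²) distance-based clustering, can be sketched like this (a toy illustration; the function names and the single shared tolerance are my choices):

```python
import math

def rst_from_pairs(m1, m2, i1, i2):
    """RST parameters (theta, scale, tx, ty) mapping the model point pair
    (m1, m2) onto the image point pair (i1, i2)."""
    dmx, dmy = m2[0] - m1[0], m2[1] - m1[1]
    dix, diy = i2[0] - i1[0], i2[1] - i1[1]
    s = math.hypot(dix, diy) / math.hypot(dmx, dmy)
    theta = math.atan2(diy, dix) - math.atan2(dmy, dmx)
    c, sn = s * math.cos(theta), s * math.sin(theta)
    tx = i1[0] - (c * m1[0] - sn * m1[1])   # t = i1 - s*R*m1
    ty = i1[1] - (sn * m1[0] + c * m1[1])
    return (theta, s, tx, ty)

def largest_cluster(transforms, tol=0.1):
    """For each transform, count transforms within tol in every parameter;
    return the size of the biggest such cluster."""
    return max(sum(all(abs(a - b) < tol for a, b in zip(t, u))
                   for u in transforms)
               for t in transforms)

# Two consistent pairs (scale 2, translate (3, 3)) and one spurious pair:
# the consistent pose wins with a cluster of size 2.
ts = [rst_from_pairs((0, 0), (1, 0), (3, 3), (5, 3)),
      rst_from_pairs((0, 0), (0, 1), (3, 3), (3, 5)),
      rst_from_pairs((0, 0), (1, 0), (9, 9), (9, 8))]
print(largest_cluster(ts))  # -> 2
```

With binning, the inner count would be replaced by incrementing an accumulator cell indexed by quantized (theta, s, tx, ty), exactly as in the Hough transform.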
Geometric Hashing
• The last two procedures allowed us to determine if a particular model object was found in an image. What if we had many models that we wanted to check against our images?
• If we use pose clustering or the local feature focus method, then each model would have to be checked separately to determine if it's in the image.
• Geometric hashing allows us to check among a large database of models to determine if any of them are in the image.
Geometric Hashing
• It requires a large amount of offline preprocessing of the models, as well as a fair amount of space. But this allows for fairly fast online recognition in the average case.
• Given: a large database of models described by feature points in some 2D coordinate system, and an image with features extracted from it.
• Assuming affine transformations only, we want to know which model(s) are in the image, and in what position and orientation they appear in the image.
Geometric Hashing
• Each model M is stored as an ordered set of feature points.
• Any 3 non-collinear points E = (e00, e01, e10) define an affine coordinate frame. (Think of a plane. How many points define a plane? Any 3 non-collinear points define a plane.)
• D = xi*(e01 - e00) + eta*(e10 - e00) + e00
• Can think of e00 as the origin and (e01 - e00) and (e10 - e00) as the coordinate axes of the affine coordinate system.
Geometric Hashing
• Any point D in M can be represented as a pair (xi, eta) w.r.t. the points E. This (xi, eta) pair gives the affine coordinates of the point D.
• If we apply an affine transformation to all the points in M, (xi, eta) will be the same for each point in M, given the same points E defining the affine coordinate frame.
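Solving the equation D = xi*(e01 - e00) + eta*(e10 - e00) + e00 for (xi, eta) is a 2x2 linear system, and the invariance claim is easy to check numerically. A sketch (the function name and the sample transform are mine):

```python
def affine_coords(e00, e01, e10, D):
    """Solve D = xi*(e01 - e00) + eta*(e10 - e00) + e00 for (xi, eta)
    via Cramer's rule; the basis must be non-collinear (det != 0)."""
    ux, uy = e01[0] - e00[0], e01[1] - e00[1]   # first axis
    vx, vy = e10[0] - e00[0], e10[1] - e00[1]   # second axis
    dx, dy = D[0] - e00[0], D[1] - e00[1]
    det = ux * vy - vx * uy
    return ((dx * vy - vx * dy) / det, (ux * dy - dx * uy) / det)

# W.r.t. the standard basis, (xi, eta) is just the point itself.
print(affine_coords((0, 0), (1, 0), (0, 1), (0.25, 0.5)))  # -> (0.25, 0.5)

# Apply one affine transform (rotate 90 deg, scale 2, translate) to all
# four points: the affine coordinates are unchanged.
T = lambda p: (-2 * p[1] + 3, 2 * p[0] + 1)
print(affine_coords(T((0, 0)), T((1, 0)), T((0, 1)), T((0.25, 0.5))))  # -> (0.25, 0.5)
```

That invariance is the whole point: (xi, eta) is a property of the model geometry, not of where or how the object sits in the image.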
Geometric Hashing
• Offline processing
  • For each model M
    • choose an ordered set of three model points E = (e00, e01, e10) to form the equation: D = xi*(e01 - e00) + eta*(e10 - e00) + e00
    • for each other point in the model, D, compute (xi, eta) and put (M, E) in the hash table indexed on (xi, eta)
    • do the above two bullets for all possible sets of three model points
  • The above gives us a hash table indexed on (xi, eta), which are affine-invariant coordinates. From these affine-invariant coordinates, we can get the model M and basis points E where some D in M has (xi, eta) affine coordinates w.r.t. E.
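The offline pass might be sketched as follows, with a dict keyed on binned (xi, eta) standing in for the hash table. This is a simplification with illustrative names: `affine_coords` solves the slide's equation for (xi, eta), the bin size is arbitrary, and degenerate (collinear) bases are simply skipped.

```python
from itertools import permutations

def affine_coords(e00, e01, e10, D):
    """Solve D = xi*(e01 - e00) + eta*(e10 - e00) + e00 for (xi, eta)."""
    ux, uy = e01[0] - e00[0], e01[1] - e00[1]
    vx, vy = e10[0] - e00[0], e10[1] - e00[1]
    dx, dy = D[0] - e00[0], D[1] - e00[1]
    det = ux * vy - vx * uy
    if abs(det) < 1e-12:
        return None                         # collinear basis: skip it
    return ((dx * vy - vx * dy) / det, (ux * dy - dx * uy) / det)

def quantize(xi, eta, bin_size=0.1):
    """Bin real-valued affine coordinates into hash-table keys."""
    return (round(xi / bin_size), round(eta / bin_size))

def hash_model(table, model_id, points, bin_size=0.1):
    """Enter every other point's (xi, eta) under every ordered basis triple."""
    for basis in permutations(range(len(points)), 3):
        e00, e01, e10 = (points[i] for i in basis)
        for d, D in enumerate(points):
            if d in basis:
                continue
            xe = affine_coords(e00, e01, e10, D)
            if xe is None:
                continue
            table.setdefault(quantize(*xe, bin_size), []).append((model_id, basis))

table = {}
hash_model(table, 0, [(0, 0), (1, 0), (0, 1), (1, 1)])
# Basis (0, 1, 2) stores point (1, 1) at affine coordinates (1, 1).
print((0, (0, 1, 2)) in table[quantize(1.0, 1.0)])  # -> True
```

The triple-nested enumeration is why the preprocessing is expensive: a model with n points contributes on the order of n^3 bases, each hashing n - 3 points.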
Geometric Hashing
• Now, any (xi, eta) pair can be looked up in the hash table to get all the models/basis points for which some model point has the affine-invariant coordinates (xi, eta).
• If (xi1, eta1) are affine coordinates for an image point, written in terms of some image basis (set of 3 points), then (xi1, eta1) are in the hash table iff there is a legal affine transformation of the 4 model points that maps them onto the 4 image points.
Geometric Hashing
• Online processing
  – Choose a set of 3 feature points in an image to form a basis; make one the origin, and the other two minus the origin form the coordinate axes.
  – for each other feature point in the image, compute the affine coordinates (xi1, eta1)
  – look up (xi1, eta1) in the hash table. If it is there, then all the models/basis points stored there are possible candidate matches (the 4 points are possibly from the model stored there)
    • increment a counter in a histogram for each of the models/basis points
  – peaks in the histogram are possible matches
Geometric Hashing
• Online processing (continued)
  – peaks in the histogram are possible matches
  – as long as the peak is sufficiently high, it is a possible match. The entire model can be transformed into image coordinates and we can verify that enough model points actually are in the image --- if so, the model appears, affinely transformed, in the image.
  – Do all the above steps for all triples of feature points.
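The online voting loop can be sketched as follows, reusing the same affine-coordinate and quantization helpers as the offline pass. This is a toy example with one model and a single stored basis; all names are mine, and the verification step is omitted:

```python
def affine_coords(e00, e01, e10, D):
    """Solve D = xi*(e01 - e00) + eta*(e10 - e00) + e00 for (xi, eta)."""
    ux, uy = e01[0] - e00[0], e01[1] - e00[1]
    vx, vy = e10[0] - e00[0], e10[1] - e00[1]
    dx, dy = D[0] - e00[0], D[1] - e00[1]
    det = ux * vy - vx * uy
    return ((dx * vy - vx * dy) / det, (ux * dy - dx * uy) / det)

def quantize(xi, eta, bin_size=0.1):
    return (round(xi / bin_size), round(eta / bin_size))

def vote(table, img_points, basis, bin_size=0.1):
    """Accumulate one vote per stored (model, basis) entry for every
    image point whose binned (xi, eta) hits the hash table."""
    e00, e01, e10 = (img_points[i] for i in basis)
    votes = {}
    for d, D in enumerate(img_points):
        if d in basis:
            continue
        xi, eta = affine_coords(e00, e01, e10, D)
        for entry in table.get(quantize(xi, eta, bin_size), []):
            votes[entry] = votes.get(entry, 0) + 1
    return votes

# Offline (abbreviated): hash one model under the single basis (0, 1, 2).
model = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
table = {}
for D in model[3:]:
    xe = affine_coords(model[0], model[1], model[2], D)
    table.setdefault(quantize(*xe), []).append((0, (0, 1, 2)))

# Online: the image is the model scaled by 2 and translated by (10, 5);
# both non-basis points hit the table, so the entry collects 2 votes.
img = [(2 * x + 10, 2 * y + 5) for (x, y) in model]
print(vote(table, img, (0, 1, 2)))  # -> {(0, (0, 1, 2)): 2}
```

A real system would try many image basis triples, keep entries whose vote counts peak, and then run the verification step from the slide above before declaring a match.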