
Camera Calibration Utility Description

Robert Bryll, Xinfeng Ma, Francis Quek
Vision Interfaces and Systems Laboratory

The University of Illinois at Chicago

April 6, 1999

1 Introduction

To calibrate our cameras for stereo viewing we use Roger Y. Tsai’s versatile camera calibration algorithm described in [1].

The problem with calibrating two cameras for stereo viewing is that it is necessary to record a calibration target with points of known coordinates, find the image coordinates of the calibration points in images from both cameras, put them in files in the appropriate order, and pass them to the calibration routines. Tsai’s algorithm requires at least 11 calibration points, but the number normally used is between 20 and 60. Picking those points by hand to create input files for the calibration routines is tedious and time-consuming.

We make the process of finding calibration points in frames captured from both cameras semi-automatic.

2 Solution

The calibration target (a plywood box with two 25” × 25” walls put at an angle of 120 degrees to one another) is painted white, and all the calibration points (of known coordinates) are marked with black 3/4” circles. The size of the calibration target approximates the average volume of the human gesture space. The calibration target is presented in Figure 1. Figure 2 shows how the world coordinate system is positioned with respect to the calibration target.

During the calibration the box should fill approximately one third of the frame area.

Black calibration points are detected semi-automatically by the following steps (a code sketch of this pipeline follows the list):

1. Threshold the input images to detect dark areas (thresholds are adjusted by hand).

2. Find connected components in the image and label the detected regions.

3. Run a size filter over the detected regions. Since the calibration points are relatively small (around 15–50 pixels according to our tests), this step filters out many unwanted regions.

4. Remove false positives by hand.

5. Compute centers of gravity of the final set of regions to obtain more precise locations of the calibration points.
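For concreteness, the pipeline can be sketched in a few lines of Python using NumPy/SciPy. This is an illustrative reconstruction, not the original implementation: the threshold value and the detect_calibration_points name are ours, and the 5 × 5 Gaussian prefilter mentioned in Section 2.1.3 is approximated here by a sigma = 1 Gaussian.

    # Illustrative sketch of the detection pipeline (not the original code).
    from scipy import ndimage

    def detect_calibration_points(image, threshold=80, min_size=15, max_size=50):
        """Return subpixel centroids of dark calibration dots in a grayscale image."""
        # Smooth first (the utility applies a 5x5 Gaussian; sigma=1 is a stand-in).
        smoothed = ndimage.gaussian_filter(image.astype(float), sigma=1.0)
        # Step 1: threshold so that dark areas become foreground (True).
        mask = smoothed < threshold
        # Step 2: find and label connected components.
        labels, num_regions = ndimage.label(mask)
        # Step 3: size filter -- keep regions whose pixel count is plausible
        # for a calibration dot (15-50 pixels in our tests).
        sizes = ndimage.sum(mask, labels, index=range(1, num_regions + 1))
        kept = [i + 1 for i, s in enumerate(sizes) if min_size <= s <= max_size]
        # Step 4 (removing false positives) is done by hand in the GUI.
        # Step 5: centers of gravity give subpixel (row, col) point locations.
        return ndimage.center_of_mass(mask, labels, kept)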


Figure 1: Stereo camera calibration target used in our experiments.

2.1 Camera Calibration Utility (CCU)

Figure 3 shows the Camera Calibration Utility interface that allows the user to perform the steps described above semi-automatically.

The following sections describe the steps necessary to perform the calibration point detection and to save the resulting files.

2.1.1 Input Files

There are two input files for the calibration (one image from each camera). They should both be in the raw grayscale .pgm format and have the same dimensions (close to 640 × 480 pixels). The names of the files should follow a simple convention: the filename of the lower image is equal to the name of the upper image with the extension .2. So, for example, if the upper image file is calib1.pgm, the lower image file should be calib1.pgm.2. Both files are opened by pressing the Load Images button and selecting the name of the upper image only.
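A minimal loader following this pairing convention might look as follows (the load_image_pair name is ours; PIL detects the PGM format from file content, so the trailing .2 in the filename is harmless):

    import numpy as np
    from PIL import Image

    def load_image_pair(upper_path):
        # e.g. upper_path = "calib1.pgm"; the lower image is "calib1.pgm.2".
        upper = np.asarray(Image.open(upper_path))
        lower = np.asarray(Image.open(upper_path + ".2"))
        if upper.shape != lower.shape:
            raise ValueError("both images must have the same dimensions")
        return upper, lower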

2.1.2 Thresholding

After the image files are opened, they are displayed in the window as in Figure 3. The user then has to adjust the thresholds for both images so that the calibration points are clearly defined on the faces of the calibration target in both images. The thresholds are adjusted by moving the sliders in the Thresholds part of the interface window. When the sliders are moved, the corresponding image changes, showing the result of the thresholding. The faces of the calibration target should be black, and the calibration points should be represented by white dots. The default values of the thresholds work in some cases, but usually they have to be adjusted (or at least the results of the thresholding have to be checked, by clicking on the sliders). It is possible to display the original images at any time by pressing the Show Originals button.

Figure 2: Position of the World Coordinate System with respect to the calibration target.

It is also possible to adjust the size filters for the calibration points, but in our experiments it was never necessary. It may become necessary if the calibration target is very close to or very far away from the cameras.

2.1.3 Detecting Calibration Points

After the thresholds have been set, the user should press the Detect Points button to perform the actual detection. The program filters the image with a 5 × 5 Gaussian filter, then thresholds it according to the adjusted threshold values, finds connected components, and runs the size filter over them. Finally, it computes the center of gravity of each accepted region and displays the centers’ positions on the original images. The accepted regions are represented by red crosses with corresponding numbers. After this stage the regions are unsorted and there is no correspondence between the regions in the two images.

2.1.4 Removing False Positives

After the detection stage there are usually some false positives that have to be removed. A point can be removed by left-clicking on the red cross representing it (another click will turn the point back on). All false positives must be removed, and moreover, the sets of points on both calibration images have to be identical (the same number of points, at the same positions on the target!). For example, in Figure 3 we can see that in both images the leftmost column of calibration points is unused. If the upper image contains calibration points in this column but the lower one does not (because of false negatives), those calibration points have to be removed from the upper image to make both sets identical.


Figure 3: Camera Calibration Utility Interface.


2.1.5 Setting the Points on the Calibration Points Panel and Sorting the Calibration Points

In order for the correspondence between the two sets of points to be resolved correctly, the sets of calibration points have to be reflected on the Calibration Points Panel. The Calibration Points Panel contains two sets of 25 buttons; each set represents one face of the calibration target. Initially all buttons are checked (contain the letter x), which means that all 50 points are selected. However, if there are false negatives on the calibration images, the user has to “uncheck” the corresponding calibration points to reflect the actual detected state. Figure 3 shows the situation in which the leftmost column of the calibration points is “unchecked”, and there are four unused (undetected) calibration points in the center of the calibration target.

Setting the Calibration Points Panel correctly is very important for sorting the calibration points in both images so that there is an exact correspondence between the two images. The sorting is performed as follows:

1. The calibration points array for each image is first sorted in place with respect to the image X coordinate, so that the point numbers increase in columns starting from the left. The assumption here is that the target is relatively vertical, that is, there is no overlap between the calibration points’ columns in the X direction. If the target in the image is very far from vertical, the algorithm will fail (because it will not be able to distinguish the columns if points belonging to different columns overlap in the X direction). However, such a situation did not occur in our experiments, so we consider the assumption safe.

2. The calibration points array for each image is then piecewise sorted in place using the information from the Calibration Points Panel. The pieces sorted correspond to the columns of the calibration target, and the sorting is done with respect to the image Y coordinate (so now the points are sorted within columns only). This is why correct information from the Calibration Points Panel is crucial for the algorithm (the numbers of points in the columns do not have to be equal).

As a result, the points are ordered starting from the topmost point in the leftmost column, with numbers increasing downward within columns and then to the right across columns. A sketch of this two-pass sort is given below.
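The following is a minimal reconstruction of the two-pass sort from the description above; sort_calibration_points and column_counts are our names, where column_counts holds the number of checked points in each column, left to right, as read from the Calibration Points Panel.

    def sort_calibration_points(points, column_counts):
        """points: list of (x, y) image coordinates; returns column-major order."""
        # Pass 1: sort by image X so that points group into columns from the left.
        # Assumes a near-vertical target, i.e. columns do not overlap in X.
        points = sorted(points, key=lambda p: p[0])
        # Pass 2: piecewise sort each column by image Y (top to bottom).
        ordered, start = [], 0
        for count in column_counts:
            ordered.extend(sorted(points[start:start + count], key=lambda p: p[1]))
            start += count
        return ordered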

The sorting is done after setting the Calibration Points Panel and pressing the SORT POINTS button. The program shows the resulting order (calibration points with their numbers). The situation after sorting is presented in Figure 3. After sorting there is an exact correspondence between the calibration points on both images, and their coordinates can be saved for further processing (using Xinfeng’s calibration interface as a wrapper over Tsai’s routines).

2.1.6 Saving the Calibration Files

After sorting the calibration points, they can be saved into calibration files (simple text files) using two buttons in the lower left corner of the interface window. The Save World & Image Coords button saves 5 floating-point numbers for each calibration point: first the three world coordinates (x, y, z) of each detected point and then its two image coordinates (computed with subpixel accuracy as the regions’ centers of gravity). The Save Image Coords Only button saves only the two image coordinates for each calibration point on each calibration image.
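Assuming a plain whitespace-separated layout (the exact formatting of the real files is not documented here), a writer for the “world & image” files could look like this:

    def save_world_and_image(path, world_points, image_points):
        # world_points: list of (x, y, z); image_points: list of (x, y),
        # both in the sorted order established by SORT POINTS.
        with open(path, "w") as f:
            for (wx, wy, wz), (ix, iy) in zip(world_points, image_points):
                f.write(f"{wx} {wy} {wz} {ix} {iy}\n")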

3 Naming Conventions for Files

The names of saved calibration files follow simple naming conventions. We will show them by example.

If the upper calibration image is called calib1.pgm, the lower image has to be called calib1.pgm.2. Then the names of the calibration files for both images will be as follows:


• calib1.pgm.world.dat - the file with world and image coordinates for the upper image

• calib1.pgm.2.world.dat - the file with world and image coordinates for the lower image

• calib1.pgm.image.dat - the file with only image coordinates for the upper image

• calib1.pgm.2.image.dat - the file with only image coordinates for the lower image

The files can then be used (together with the camera data files) by Xinfeng’s camera calibration interface built over Tsai’s calibration routines.
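The convention is easy to generate programmatically; a hypothetical helper:

    def calibration_filenames(upper_name):
        # upper_name is e.g. "calib1.pgm"; the lower image name appends ".2".
        lower_name = upper_name + ".2"
        return {
            "upper_world": upper_name + ".world.dat",
            "lower_world": lower_name + ".world.dat",
            "upper_image": upper_name + ".image.dat",
            "lower_image": lower_name + ".image.dat",
        }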

4 Accuracy Tests

We wrote a simple program that analyzes the output from the “world” data files after 3D triangulation and compares it to the original world coordinates. We performed the accuracy tests by first calibrating the cameras using a set of calibration pictures and then using the same pictures (image coordinates only) to calculate the real-world coordinates. Comparing the calculated world coordinates with the original calibration world coordinates yielded the calibration errors in 3 dimensions. Here are the results of our tests (all errors in mm):

1. Baseline length approximately 1953 mm; 38 calibration points:

   • Average error in X: 0.82108 mm
   • Maximal error in X: 2.15344 mm
   • Average error in Y: 0.620187 mm
   • Maximal error in Y: 2.00497 mm
   • Average error in Z: 0.717979 mm
   • Maximal error in Z: 2.3317 mm

2. Baseline length approximately 1198 mm; 43 calibration points:

   • Average error in X: 0.903424 mm
   • Maximal error in X: 3.73572 mm
   • Average error in Y: 0.942091 mm
   • Maximal error in Y: 3.9338 mm
   • Average error in Z: 1.68031 mm
   • Maximal error in Z: 4.29699 mm

3. Baseline length approximately 1186 mm; 39 calibration points:

   • Average error in X: 0.7097 mm
   • Maximal error in X: 2.16577 mm
   • Average error in Y: 0.562133 mm
   • Maximal error in Y: 1.39504 mm
   • Average error in Z: 1.22877 mm
   • Maximal error in Z: 3.56952 mm

It can easily be observed that the longer baseline (item 1 above) gives higher triangulation accuracy. However, even for the shorter baselines the maximal triangulation error never exceeds 5 mm, which is adequate for our 3D gesture experiments.
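Assuming the reported statistics are per-axis absolute errors, the comparison reduces to a few lines (calibration_errors is our name, not the original program’s):

    import numpy as np

    def calibration_errors(known_world, triangulated_world):
        # Both arguments: N x 3 arrays of (X, Y, Z) coordinates in mm.
        diff = np.abs(np.asarray(known_world) - np.asarray(triangulated_world))
        return diff.mean(axis=0), diff.max(axis=0)  # per-axis average, maximum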


References

[1] R.Y. Tsai, “A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses”, IEEE Journal of Robotics and Automation, vol. RA-3, pp. 323–344, 1987.
