
Handbook of practical camera calibration methods and models Optical Metrology Centre

CHAPTER 2

CAMERA CALIBRATION MODEL SELECTION

Executive summary

The interface between object space and image space in a camera is the lens. A lens can be modeled by a pin-hole or by a parametric function. This chapter discusses how distortions introduced by the lens can be estimated and, if necessary, compensated for. Given good calibration, solid state sensors or film can deliver object space accuracies of up to 1 part in 100,000; put another way, sensors can make full use of feature location precision in image space of 0.1 to 1 micron. Not all tasks require the ultimate performance, but correction for gross distortions is still required. At the end of the chapter a table indicates the appropriate camera model for a given task. Later chapters address how the camera calibration parameters are estimated.

2.1 Introduction

With the possible exception of astronomy, experts in the area of photogrammetry have made the greatest geometric use of lenses over the past hundred years. They have required lenses to perform aerial and satellite mapping of vast areas of the world, as well as measurements with a precision of up to 1 part in a million of the object space. As a result, the models for camera calibration can be relied upon to represent the best that is likely to be achieved with a lens for all but the most exotic applications. This handbook seeks to make this knowledge accessible to a wider audience, with the objective of allowing the reader to pick an appropriate level of sophistication for an application requirement.

This chapter provides a description of the geometric parameters of lenses and their mathematical models. The calibration procedures detailed in this report also make reference to the position and orientation of the camera, as one method of calibration makes use of a 3-D method. Issues concerned with the sensor and its part in the image collection process are dealt with in chapter 3.

It is useful to be aware of some of the terms used by photogrammetrists when dealing with camera-lens systems. Interior orientation and exterior orientation are the terms employed to describe the internal and external geometric configurations of a camera and lens system; others use the terms intrinsic and extrinsic parameters. Camera calibration is the process of estimating the parameters that best describe what happens to a bundle of rays coming from the object when they pass through the lens and onto the image plane.


The geometric configuration of the passage of a bundle of light rays through the lens to the image plane can be described mathematically by a set of parameters. This chapter explains the parameters relating to radial and decentering distortion, the location of the principal point, the focal length (more correctly known as the principal distance) and the interrelationship of these parameters with translation and rotation of the camera itself. As this handbook is primarily designed for operators of digital cameras, a couple of extra parameters that refer to the shape and orientation of the sensor (array of pixels) are also detailed.

It should be noted that camera calibration refers here to the determination of the parameters that allow a camera to be used as an angle-measuring device. The computer vision community also refers to the process of estimating the external orientation of the camera as camera calibration. This chapter does deal with the external parameters because they are necessary for some methods of determining the interior parameters. It is recommended that the term 'camera calibration' is used to refer to the process of estimating parameters belonging to a camera, and perhaps the term 'system calibration' is appropriate for a collection of cameras that make up a measurement system.

The camera model developed through this chapter will eventually not only compensate for gross effects such as radial lens distortion but will also take into account differences that occur at differing object distances. At the end of the chapter the usefulness of this model is put into perspective with a practical guide linking the desired accuracy with the appropriate components of the model.

2.2 Principal point

The location of the image formed on the image plane by the direct axial ray passing through the centre of the lens system is known as the principal point. The focal plane should be perpendicular to the optical axis, but a parameter to correct for any misalignment is usually necessary. This is particularly so for an electronic sensor camera, where the requirements for geometric alignment of the lens with the sensor array are minimal. The principal point can also be thought of as the foot of the perpendicular from the nominal centre of the lens system (more exactly, the rear node of the lens) to the plane of focus. It represents the point that would be the ideal origin for a co-ordinate system on the image plane. When images are being used for 3-D measurement purposes it is normal to convert from pixel-related co-ordinates to image co-ordinates in millimetres. To achieve this the following equations are used:

    xi = (xsensor - xcentre) sx
    yi = -(ysensor - ycentre) sy                                   (2.1)

where

xi, yi are the new image co-ordinates in mm; xsensor, ysensor are the co-ordinates of the image point as given by the camera/frame-grabber in pixels; xcentre, ycentre are the co-ordinates of the notional centre of the image in pixels; and sx, sy are the pixel sizes in mm.
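As a simple illustration of equation 2.1, the conversion can be written as a short routine. The pixel sizes, image centre and observed pixel position below are purely illustrative values, not taken from any particular camera.

```python
# A minimal sketch of equation 2.1: converting frame-grabber pixel co-ordinates
# to image co-ordinates in millimetres.  All numerical values are assumed,
# illustrative figures only.

def pixel_to_image(x_sensor, y_sensor, x_centre, y_centre, sx, sy):
    """Return image co-ordinates (mm) from pixel co-ordinates (equation 2.1)."""
    xi = (x_sensor - x_centre) * sx
    yi = -(y_sensor - y_centre) * sy   # sign flip: pixel rows increase downwards
    return xi, yi

# Example: a notional 1280 x 1024 sensor with 6.7 micron square pixels
xi, yi = pixel_to_image(900.0, 200.0, x_centre=640.0, y_centre=512.0,
                        sx=0.0067, sy=0.0067)
print(xi, yi)   # approximately 1.742 mm, 2.090 mm
```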


The principal point should ideally be at the centre of the image. The centre of the image format may be found by dividing the number of x and y axis pixels in the image by two, or by intersecting the diagonal fiducial marks if they are present (in a film camera). Any differences in the x and y directions between the centre of the image plane and the foot of this perpendicular are known as the offsets of the principal point, conventionally expressed as xpp, ypp or sometimes as xo, yo. The principal distance, c, and the offsets of the principal point (Figure 2.1) are key elements in the set of parameters defining camera calibration. The principal point location was not included in equation 2.1 because it is normally included as an unknown parameter in a calibration procedure. For applications where the principal point is required a priori, xcentre and ycentre can be substituted by xpp and ypp.

Figure 2.1. The definition of the principal distance, principal point, and image centre

To correctly "centre" the image co-ordinates, the offsets (xpp, ypp) from the principal point to the notional image centre or fiducial origin must be added to the image co-ordinates. These offsets may not always be known initially, so they may be estimated, often as zero, and solved for in the calibration procedure with all the other unknowns. The magnitude of the offsets (xpp, ypp) is usually under one millimetre for a film-based camera, and less than twenty pixels for a digital or video camera.

The relationship between the principal point and the image centre will usually remain stable for long periods of time for medium precision purposes. In some cases cameras used for 3-D measurement are only checked every six months or so. However, the relationship cannot be guaranteed when the lens is adjusted or moved in any way. This is especially true of zoom lenses, which are particularly prone to principal point shifts as the focal length is changed. In these situations the unknowns for these values should be included in the photogrammetric solution on each occasion the focus has been altered. Recently, cases of unstable principal point location in Kodak DCS digital cameras have been encountered. This is because these cameras have been made for studio photographic purposes where the principal point location is not an issue.


However, when multiple images are taken with the camera in various orientations the sensor has been shown to move with respect to the camera body.

2.3 Principal distance

The perpendicular distance from the perspective centre of the lens system to the image plane is termed the principal distance and is usually denoted by 'c' (Figure 2.1). In aerial mapping, where the camera lens is fixed at infinity focus, the terms focal length and principal distance are often used synonymously. In other applications, where the lenses are usually of variable focus, the principal distance will not equal the focal length and will often need to be determined. In some industrial applications the lens will be re-focused for each image collection phase, so the principal distance will vary from one image to the next. It is good practice not to confuse the focal length of the lens, 'f', with 'c'.

The values for the principal distance and the offsets of the principal point can be determined in a laboratory if required. However, other methods (discussed later) provide a direct means of determining their values, so an exact value of the principal distance does not have to be known a priori. The method of least squares can be applied to solve for the unknown parameters that model the relationship between the image and object co-ordinates, given reasonably close initial approximations; similarly for the values of the offsets of the principal point noted in section 2.2. The calculated, or a posteriori, values of the principal distance or offsets of the principal point are often of little interest to the user, who primarily requires co-ordinates of features on the object, but such calculations can provide evidence of errors, depending on the closeness of the values to previous calibrations.

2.4 Camera position and orientation

The position and orientation of a camera may require defining for some applications, and a definition is required for the calibration methods described later. Figure 2.2 illustrates the common photogrammetric definition, where the camera position is described by Xc, Yc, Zc and the orientation by ω, φ, κ, both defined with reference to a world co-ordinate system. The rotation parameters ω, φ, κ are applied with respect to the world co-ordinate system in the order they are written (this is important, as other orders will produce different results). The image co-ordinates xi, yi are aligned parallel to the camera X and Y co-ordinate axes respectively.


Figure 2.2. Definition of the exterior orientation parameters for a camera

These six parameters are the only ones required to define the exterior orientation. The parameters of interior orientation discussed so far are supplemented by further parameters described in the later sections of this chapter. If traditional laboratory methods of camera calibration (described briefly in chapter 4) are followed, it is usually the case that parameters are determined in isolation from one another, with problems occurring because each method will usually require physical changes to the camera settings.

The methods used for the calibration of cameras to measure objects at distances less than a few hundred metres have evolved over the last few decades. Initially the methods used for aerial cameras, where the application was essentially parallel-axis stereoscopic photography, were mimicked and the same equipment was used. Nowadays techniques use the favourable geometric conditions of convergent camera configurations to extract all the required calibration parameters simultaneously. In general, close-range photogrammetric applications require a quick and cost-effective solution to an immediate problem, unlike the situation with aerial work where a camera system may be scheduled to work on mapping projects for years and the relative "leisure" of an annual calibration in an expensive laboratory can be afforded. In both cases there has been an evolution in the mathematical modeling used in camera calibration that has coincided with the ability to solve large, redundant sets of equations and the gradual improvement in the ability of film and solid state sensors to provide results of high geometric accuracy.

The collinearity equations are a set of equations that describe the geometric passage of a straight ray of light from an object point (subscripted by 'p') through the perspective centre of the lens (usually referred to as the location of the camera and subscripted by 'c') to the image plane (subscripted by 'i'). These equations use the exterior orientation parameters to describe the direction of the principal ray through the lens with respect to the world co-ordinate system Xw, Yw, Zw.


In addition, the interior parameters are used to define the image location corresponding to the object point with which it is (in an ideal situation) collinear.

Figure 2.3. The central configuration described by the collinearity equations

The collinearity equations may be expressed as:

    xi = xpp - cx . [m11(Xp - Xc) + m12(Yp - Yc) + m13(Zp - Zc)] / [m31(Xp - Xc) + m32(Yp - Yc) + m33(Zp - Zc)]

    yi = ypp - cy . [m21(Xp - Xc) + m22(Yp - Yc) + m23(Zp - Zc)] / [m31(Xp - Xc) + m32(Yp - Yc) + m33(Zp - Zc)]        (2.2)

where xi, yi are the image co-ordinates of an object point Xp, Yp, Zp and Xc, Yc, Zc is the location of the camera in object space. The mij (i = 1..3; j = 1..3) terms are the elements of a rotation matrix, M, and contain trigonometric functions of the angles ω, φ and κ. The principal distance, c, may be represented by two components, one in each of the x and y axis directions. This is a realistic model for an anamorphic lens, but it is usual to replace cx and cy with a common value 'c' for most photogrammetric applications.

Given sufficient image observations and diversity of viewpoints it is possible to use the collinearity equations to solve for the unknown 3-D co-ordinates of the object points. A useful by-product is that not only can the parameters relating to the object co-ordinates be estimated but also all of the other parameters, such as the camera exterior and interior parameters. In other words the cameras can be fully calibrated. This method is described later in the handbook. The collinearity equations are non-linear and so it is necessary to linearise them by use of a Taylor's Theorem expansion for solution by iterative least squares techniques. Reasonable approximations to all unknowns are needed before a convergent solution can be guaranteed.
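To make the use of equation 2.2 concrete, the sketch below projects an object point into the image plane. The composition of the rotation matrix from ω, φ, κ follows one common photogrammetric convention (M = Rκ · Rφ · Rω); the exact convention, and all numerical values, are assumptions for illustration only.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix M built from omega, phi, kappa (radians), applied in that order."""
    Rw = np.array([[1.0, 0.0, 0.0],
                   [0.0,  np.cos(omega), np.sin(omega)],
                   [0.0, -np.sin(omega), np.cos(omega)]])
    Rp = np.array([[np.cos(phi), 0.0, -np.sin(phi)],
                   [0.0,         1.0,  0.0],
                   [np.sin(phi), 0.0,  np.cos(phi)]])
    Rk = np.array([[ np.cos(kappa), np.sin(kappa), 0.0],
                   [-np.sin(kappa), np.cos(kappa), 0.0],
                   [0.0,            0.0,           1.0]])
    return Rk @ Rp @ Rw

def collinearity(Xp, Xc, omega, phi, kappa, c, xpp=0.0, ypp=0.0):
    """Project object point Xp into image co-ordinates xi, yi (equation 2.2)."""
    M = rotation_matrix(omega, phi, kappa)
    u = M @ (np.asarray(Xp, float) - np.asarray(Xc, float))
    xi = xpp - c * u[0] / u[2]
    yi = ypp - c * u[1] / u[2]
    return xi, yi

# Example with assumed values: a 25 mm principal distance, camera at the origin
print(collinearity(Xp=[1.0, 0.5, 10.0], Xc=[0.0, 0.0, 0.0],
                   omega=0.0, phi=0.0, kappa=0.0, c=25.0))
```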


It is important for the reader to have a basic understanding of the collinearity equations: to realise that they form the basis for most modern photogrammetry and that they provide a well tried and trusted mathematical model to describe a ray of light passing through a camera's lens. In a later section of this chapter the collinearity equations will be modified to account for the deviations from linearity which occur due to lens distortions and other effects.

2.5 Radial distortion

2.5.1. Introduction

With a perfect lens system, light rays would pass without deviation from object space to image space and form a sharp image on the plane of focus. Unfortunately, such an ideal system does not exist. Aberrations, or deviations from theoretically exact models, are the reality facing the user of a camera system. Aberrations can be grouped into two categories: those that reduce image quality and those that alter the location of the image. Image quality aberrations may have an influence on geometric location and are briefly discussed. Radial and decentering distortions comprise the primary aberrations that affect image geometry, and measurement and modeling of these distortions are crucial to good results. The equations for radial and decentering distortions are derived from the Seidel aberrations, named after the 19th century German mathematician who developed the relevant equations. These aberrations may be expressed in terms of a polynomial curve. Only the lower order terms in the series are relevant for most lenses. For extremely wide-angle or "fish-eye" lenses, another one or two terms in the series may be of value.

2.5.2. Radial lens distortion

If the image of an off-axis target is displaced radially either closer to or farther from the principal point then it has been radially distorted. The terms barrel and pin-cushion are used, respectively, to describe the image shape of a rectangle which has been radially distorted closer to or farther from the principal point.


Figure 2.4 Radial lens distortion vectors for pin-cushion distortion – the grid represents the corrected image and the ends of the vectors the observed positions. In a map for barrel distortion the vectors would be pointing from the grid towards the principal point.

Gaussian radial distortion describes the magnitude of radial distortion when the nominal principal distance is used as a basis for calculations. Figure 2.5 illustrates that the magnitude of these distortions varies with radial distance and may change with focus. Lens distortion graphs typically show the distortion in micrometres against the radial distance in millimetres, although it is reasonable to replace the horizontal scale in millimetres by a distance in pixels for some tasks.


Figure 2.5 Radial distortion calibration curves for three object distances

Balanced radial distortion is the term used where the Gaussian curve has been mathematically transformed by shifting the principal distance by an amount ∆c, usually chosen such that the mean value of the transformed distortion curve out to a certain radial distance is zero. Gaussian radial distortion can be expressed as a series of odd-powered terms,

    δr = K1 r³ + K2 r⁵ + K3 r⁷ + ...                               (2.3)

where K1, K2, K3 are the coefficients of radial distortion corresponding to infinity focus, δr is in micrometres, and

    r² = (x - xpp)² + (y - ypp)²                                   (2.4)

where r is the radial distance to the point which has co-ordinates (x, y) and xpp, ypp are the offsets of the principal point from an indicated (or assumed) centre of the image. All values are usually expressed in millimetres. Balanced radial distortion, δrb, can be expressed as

    δrb = δr + K0 r = K0 r + K1 r³ + K2 r⁵ + K3 r⁷ + ...           (2.5)
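A small sketch of equations 2.3, 2.4 and 2.10 is given below. The coefficient values are arbitrary illustrative numbers, and whether the computed components are added to or subtracted from an observed point depends on whether the coefficients model the distortion itself or its correction.

```python
import math

def radial_distortion(x, y, xpp, ypp, K1, K2=0.0, K3=0.0):
    """Return radial distortion components (dx, dy) at image point (x, y) in mm."""
    r = math.hypot(x - xpp, y - ypp)                 # equation 2.4
    if r == 0.0:
        return 0.0, 0.0
    dr = K1 * r**3 + K2 * r**5 + K3 * r**7           # equation 2.3 (Gaussian form)
    return (x - xpp) * dr / r, (y - ypp) * dr / r    # decomposition as in equation 2.10

# Correcting an observed point (all values assumed, in mm):
x_obs, y_obs = 3.2, -2.1
dx, dy = radial_distortion(x_obs, y_obs, xpp=0.05, ypp=-0.03, K1=5.0e-4)
x_corr, y_corr = x_obs - dx, y_obs - dy
```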


Figure 2.6 Balanced radial lens distortion graph – radial distortion (mm) plotted against radius (mm)

The difference between the two graphs (figures 2.5 and 2.6) can be thought of as a simple tilting of the horizontal axis to the angle required to achieve the desired mathematical condition. The angle of tilt is incorporated into the formula for radial distortion (equation 2.3) through the constant term K0, as shown in equation 2.5. Although the curve in figure 2.6 would appear to have a smaller maximum radial distortion than its equivalent unbalanced curve, there is no computational advantage in such a mathematical transformation; it can be shown that the compensatory changes in principal distance and radial distortion will not affect the results for the co-ordinates of the object points. The model was originally introduced for aerial camera lenses, where the lens designer and manufacturer sought to minimise the radial distortion about a straight line. Lenses could be designed to achieve this to within a few microns. As a result the cameras were given a "calibrated focal length" that enabled the radial distortion to be effectively ignored even though it was still of significant magnitude.

For most lenses of the type found on non-metric 35mm and 70mm cameras, and the simple C-mount lenses on most video cameras, only the terms K0 and K1 will be significant in describing radial distortion to the micrometre level. For wide angle lenses, or those on metric cameras, the K2 or even the K3 term may be required to accurately describe the radial distortion all the way out to the edge of the image format.


2.5.3. Variation of radial distortion with focussing and depth of field

Radial distortion varies with the focusing of the lens and also within the photographic depth of field. The latter phenomenon is relevant to just a few applications (say at camera to object distances of under 30 focal lengths), and even in those cases it is only significant if there is considerable variation in the depth of some target points on the object. The variation with focussing is the major effect. If the radial distortion coefficients can be determined at two distinct focus settings, preferably one close to the camera and the other at infinity, then the formulae shown below as equations 2.6 and 2.7 allow the calculation of the radial distortion coefficients at any other focus setting.

Let s1 and s2 be the two distances from the camera to the object planes at which the parameters of radial distortion K1s1, K2s1, K3s1, ... and K1s2, K2s2, K3s2 have been determined. If S is the distance from the camera at which the lens is now focused, then the parameters of radial distortion for objects at that distance S from the camera will be

    K1S = [ αS (1 - c/s1)³ K1s1 + (1 - αS) (1 - c/s2)³ K1s2 ] / (1 - c/S)³

    K2S = [ αS (1 - c/s1)⁵ K2s1 + (1 - αS) (1 - c/s2)⁵ K2s2 ] / (1 - c/S)⁵     (2.6)

and so on, where c is the principal distance and

    αS = [ (s2 - S) / (s2 - s1) ] . [ (s1 - c) / (S - c) ]                     (2.7)

The variation of radial distortion within the depth of field requires the evaluation of a further coefficient γSS1, where S refers to the distance of the plane of focus from the camera and S1 is the distance to the object point under consideration.

    γSS1 = [ (S - c) / (S1 - c) ] . ( S1 / S )                                 (2.8)

The final form of the radial distortion δrSS1 for an object point at a distance S1 from the camera, when the camera is focused at a distance S, is


    δrSS1 = γSS1² K1S1 r³ + γSS1⁴ K2S1 r⁵ + γSS1⁶ K3S1 r⁷ + ...                (2.9)

where K1S1, K2S1, K3S1 are the coefficients given by equation 2.6 evaluated at a focus distance of S1.

The total amount of radial distortion can be decomposed into its δx and δy components as

    δx = (x/r) δrSS1 ,    δy = (y/r) δrSS1                                     (2.10)
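The sketch below strings equations 2.6 to 2.10 together in the notation used above (s1, s2 the calibrated focus distances, S the current focus distance, S1 the object distance). All distances and coefficient values are illustrative assumptions and, as the next section notes, in practice these refinements are rarely needed.

```python
def alpha(S, s1, s2, c):
    """Interpolation factor of equation 2.7."""
    return (s2 - S) / (s2 - s1) * (s1 - c) / (S - c)

def coefficients_at_focus(S, s1, s2, c, K_s1, K_s2):
    """Radial distortion coefficients K1, K2, ... at focus distance S (equation 2.6)."""
    a = alpha(S, s1, s2, c)
    K_S = []
    for i, (k1, k2) in enumerate(zip(K_s1, K_s2)):
        p = 2 * i + 3                                # powers 3, 5, 7, ...
        K_S.append(((1 - c / s1) ** p * a * k1 +
                    (1 - c / s2) ** p * (1 - a) * k2) / (1 - c / S) ** p)
    return K_S

def gamma(S, S1, c):
    """Depth-of-field factor of equation 2.8."""
    return (S - c) / (S1 - c) * S1 / S

def radial_distortion_in_depth(r, S, S1, s1, s2, c, K_s1, K_s2):
    """Radial distortion (equation 2.9) for a point at S1 when the lens is focused at S."""
    K_S1 = coefficients_at_focus(S1, s1, s2, c, K_s1, K_s2)   # coefficients 'focused' at S1
    g = gamma(S, S1, c)
    return sum(g ** (2 * (i + 1)) * k * r ** (2 * i + 3) for i, k in enumerate(K_S1))

# Example with assumed values (units: mm); s2 is set very large to represent infinity focus
print(radial_distortion_in_depth(r=3.0, S=1500.0, S1=1200.0, s1=1000.0, s2=1.0e9,
                                 c=25.0, K_s1=[5.0e-4, -2.0e-7], K_s2=[4.0e-4, -1.5e-7]))
```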

2.5.4. Summary

While the above formulae (2.6) through (2.10) are comprehensive, in most situations only the basic formula for radial distortion is ever used, that is equations 2.3 - 2.5. Only equation 2.3 or 2.5 is required in a model for determining the parameters of radial distortion during the calibration process; the other details have been supplied for completeness. When using the equations for the variation of radial distortion with focussing, if the distance for determining one of the sets of radial distortion parameters is infinity (s2 = ∞), then there is a considerable simplification.

One minor complication in the application of equations 2.6 - 2.10 is that an exact value for the radial distortion δrSS1 cannot be calculated until the distance S1 to the object point is known, so any solution must be iterative. Usually one iteration will suffice, unless exceptional circumstances of very close imaging and large depth of field are present. Consideration of the above equations in conjunction with a typical radial distortion curve for a close range camera would indicate that for camera to object distances greater than approximately 100 times the focal length, the difference between an 'exact' radial distortion calculation and the value from an infinity calibration will be very small.

In general the minor effect of variations within the depth of field and with focussing can usually be ignored, but the effect of radial distortion itself can only be disregarded in cases where it is small or where geometric accuracy is not important. Typically radial distortion can be as large as 10 pixels (say 100 microns) towards the edge of the image format of a digital camera and is usually an order of magnitude larger than decentering distortion.

2.6 Decentering distortion

All elements in a lens system should ideally be aligned, at the time of manufacture, to be collinear with the optical axis of the entire lens system. Any displacement or rotation of a lens element from perfect alignment with the optical axis will cause the geometric displacement of images known as decentering distortion. Literally, the title refers to the "off-centering" of the lens elements (figure 2.7).


Figure 2.7. Decentering of a lens element

Decentering distortion was initially represented as the same effect that could be achieved by placing a thin prism in front of the lens system. The typical decentering angle expected for ordinary lenses is approximately 1/diameter of the lens in minutes. The lens itself may not be perfectly made and can have a difference in thickness between opposite sides creating a residual wedge effect (Smith, 1990). It is usual for the lens decentering parameters to provide an improvement of between 1/7 and 1/10 of the magnitude of radial lens distortion. The decentering distortion effect contains both a radial and tangential component (figure 2.8).

Figure 2.8 Tangential distortion error vectors

The commonly accepted mathematical model for decentering distortion includes a term that allows for variation within the depth of field and for different focus settings. In practice these refinements to the basic quadratic formula (equations 2.11 and 2.12) are seldom used, apart from extremely close range situations, as the decentering distortion is often an order of magnitude less than radial distortion and the secondary effects of variation within the field are also very small. A graphical representation of decentering distortion can be made in a manner analogous to radial distortion (see figure 2.9).

Figure 2.9 Tangential distortion plot P(r) – tangential distortion (µm) plotted against radial distance (mm)

The function that is graphed is called the "profile function" and is represented by P(r),

    P(r) = (P1² + P2²)^(1/2) r²                                                (2.11)

where the parameters P1 and P2 refer to values at infinity focus. The effect of decentering distortion can be represented to sufficient accuracy in a truncated polynomial form as

    ∆xs = (1 - c/S) [ P1 (r² + 2(x - xpp)²) + 2 P2 (x - xpp)(y - ypp) ]

    ∆ys = (1 - c/S) [ P2 (r² + 2(y - ypp)²) + 2 P1 (x - xpp)(y - ypp) ]        (2.12)

where ∆xs , ∆ys are the components of the decentering distortion at an image point x, y; r is the radial distance as described in equation (2.4) and c is the principal distance for a lens focused on an object plane at a distance S from the lens.
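The decentering components of equation 2.12 can be sketched as follows; P1, P2 and the distances used are assumed, illustrative values only.

```python
def decentering_distortion(x, y, xpp, ypp, P1, P2, c, S):
    """Return decentering distortion components (dx, dy) at image point (x, y) (equation 2.12)."""
    xb, yb = x - xpp, y - ypp
    r2 = xb * xb + yb * yb                     # r squared, as in equation 2.4
    scale = 1.0 - c / S                        # focus-dependent factor
    dx = scale * (P1 * (r2 + 2.0 * xb * xb) + 2.0 * P2 * xb * yb)
    dy = scale * (P2 * (r2 + 2.0 * yb * yb) + 2.0 * P1 * xb * yb)
    return dx, dy

# Example with assumed values (mm): a lens focused at 2 m with a 25 mm principal distance
print(decentering_distortion(3.0, 2.0, xpp=0.0, ypp=0.0,
                             P1=1.0e-5, P2=-5.0e-6, c=25.0, S=2000.0))
```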


For the case of an object which lies at a distance S1 from the lens, equations (2.12) must be multiplied by the factor γSS1 (see equation 2.8). Decentering distortion is usually an order of magnitude smaller than radial distortion and rarely exceeds 30 µm at the edge of the image format for large format film cameras, and a few microns for small format CCD cameras. Since decentering distortion is basically a quadratic function, its magnitude will only be one-quarter of this size in the middle regions of the image. Consequently, unless the image scale is large, say greater than 1:30, and the depth of field is extensive, equations (2.12) can be adopted for direct use without introducing significant errors.

One feature of decentering distortion that is sometimes quoted in calibration reports is the angle φ0. This represents the angle in the image plane from the x-axis to the axis of maximum tangential distortion on the image. It can be represented as

    φ0 = tan⁻¹ ( P1 / P2 )                                                     (2.13)

The practical applications of φ0 are limited because it cannot be determined to a high precision, due to inherent uncertainties in the values for P1 and P2 (and their interdependence, in turn, with xpp and ypp and some rotational aspects of the camera's orientation). It is nevertheless prudent to examine φ0 from each calibration of a particular camera/lens combination to check for stability. Decentering distortion has been known to vary over time as the result of pressure changes, vibration or shock.

2.7. A model for camera calibration

A mathematical model for camera calibration can be formed by the addition of the equations for radial and decentering lens distortion (equations (2.10) and (2.12)) to the fundamental collinearity equations (equations (2.2)). That is, the equations describing the ideal straight line path from the object through the lens to the image are corrupted by the terms describing how light rays are deviated from linearity. If some additional parameters are added to account for any 'shearing' effect in the sensor array (shown as a1x and b1y below) and any non-perpendicularity of the image plane to the optical axis (shown as b2xy below), then a model exists for describing all aspects of lens and camera calibration, namely:

    xi = xpp - cx . [m11(Xo - Xp) + m12(Yo - Yp) + m13(Zo - Zp)] / [m31(Xo - Xp) + m32(Yo - Yp) + m33(Zo - Zp)] + δx + ∆x + a1x

    yi = ypp - cy . [m21(Xo - Xp) + m22(Yo - Yp) + m23(Zo - Zp)] / [m31(Xo - Xp) + m32(Yo - Yp) + m33(Zo - Zp)] + δy + ∆y + b1y + b2xy        (2.14)

The inclusion of the lens and camera calibration parameters (such as radial and decentering distortion, etc.) into the collinearity equations used for the general solution for the co-ordinates of the targeted points and camera locations led to the term additional parameters (APs) being coined.
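A sketch of the full model of equation 2.14 is given below. It simply combines the earlier sketch functions (rotation_matrix, radial_distortion and decentering_distortion from the previous code fragments), so it is only runnable alongside them, and the (Xo - Xp) differences follow the form of equation 2.14 as printed.

```python
import numpy as np

def camera_model(Xp, Xo, angles, c, xpp, ypp, K, P, a1, b1, b2, S):
    """Predicted image co-ordinates of object point Xp for a calibrated camera (equation 2.14)."""
    M = rotation_matrix(*angles)                          # from the collinearity sketch above
    u = M @ (np.asarray(Xo, float) - np.asarray(Xp, float))
    x = xpp - c * u[0] / u[2]
    y = ypp - c * u[1] / u[2]
    dx_r, dy_r = radial_distortion(x, y, xpp, ypp, *K)                       # equations 2.3 / 2.10
    dx_d, dy_d = decentering_distortion(x, y, xpp, ypp, P[0], P[1], c, S)    # equation 2.12
    return (x + dx_r + dx_d + a1 * x,
            y + dy_r + dy_d + b1 * y + b2 * x * y)
```

In a self-calibrating bundle adjustment, a function of this form supplies the predicted observations whose residuals against the measured image co-ordinates are minimised by least squares.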


The addition of various sets of APs to bundle adjustments became very popular in the late 1970s and early 1980s as a means of reducing the magnitude of the observation residuals on the image frame. The assumption at the time was that if the image residuals were decreased then suspected systematic errors had been eliminated. It has since been demonstrated that the indiscriminate or excessive use of APs may actually lead to a deterioration in the final accuracy of the object co-ordinates while yielding better precision estimates. A more reasoned and sensible approach to the incorporation of APs is that only those parameters that can be shown to have definite physical justification should be used. This is the approach recommended in this handbook.

To determine the six exterior orientation parameters defining the camera's spatial location and orientation and the eleven camera calibration parameters, consisting of c, xpp, ypp, K1, K2, K3, P1 and P2 and the additional parameters a1, b1 and b2, it would theoretically be possible to use only nine targets. This would provide two (x, y) observations on each target, giving a total of 18 observed values, and the subsequent solution for the 17 unknowns would contain a slight redundancy. Experience has shown that a more desirable arrangement is a three-dimensional array of targets imaged across the entire format area of the camera. Thirty targets may suffice, but 50 to 100 are more commonly used to provide ample redundancy and a reliable solution. Instead of using one known (fixed) camera location from which the targets are imaged, it is usual to take images from 4 to 9 camera locations (sometimes called 'stations'). The geometrical arrangement of the cameras, the intersection angles of rays from object points to cameras, the number of targeted points seen from a diversity of camera locations and the spread of targeted points across the image format are all important factors influencing both the precision of the co-ordinates of the targets on the object and the parameters of camera calibration.
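The counting argument above can be checked with a few lines of arithmetic. The sketch assumes the targets are known control points, as in the nine-target example; datum definition and other practical constraints are ignored.

```python
def redundancy(n_targets, n_images, n_interior=11, targets_known=True):
    """Observations minus unknowns for a simple calibration design (datum issues ignored)."""
    observations = 2 * n_targets * n_images          # one (x, y) pair per target per image
    unknowns = 6 * n_images + n_interior             # exterior orientation + camera parameters
    if not targets_known:
        unknowns += 3 * n_targets                    # object point co-ordinates also unknown
    return observations - unknowns

print(redundancy(9, 1))                              # 18 - 17 = 1, the 'slight redundancy' above
print(redundancy(50, 6, targets_known=False))        # a more typical self-calibration design
```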


Certain "tricks-of-the-trade" have been devised to ensure that the correlation which is known to exist amongst the required parameters is kept to a minimum, if not eliminated. For example, to improve the estimation of the values of xpp and ypp and to reduce correlations it is useful to roll the camera through approximately 90°, either between camera stations or at each camera station, such that two exposures are captured at each view-point. Convergent images are crucial for the successful recovery of the principal distance if the object under consideration is planar, and convergence is also recommended for non-planar objects as it enhances the strength of the geometric intersection of rays.

A feature known by photogrammetrists as 'projective equivalence' must be acknowledged by those undertaking camera calibration. Correlation clearly exists between the camera station co-ordinates (Xc, Yc, Zc) and the interior orientation elements (xpp, ypp, c) for certain geometric configurations. Stereoscopic photography for architectural facade recording provides a good example: because facades are essentially planar, it does not matter for the determination of co-ordinates on the facade if the principal distance is slightly in error, as a compensating change in the distance to the wall will accommodate this error. The rotations about the X and Y axes (ω and φ) and the offsets of the principal point are clearly correlated (hence the camera rotations described above to cancel out the effect). There is also a similarity in effect between small amounts of decentering distortion and the offsets of the principal point, which shows up as correlations. Clearly, when all parameters are to be derived simultaneously, over-parameterisation can cause inconsistencies in the results for some parameters. The design of the layout of camera stations and targets for a successful calibration therefore requires some insight. The scheme proposed in this manual has given due consideration to these correlations.

2.8. A note about zoom lenses

One type of lens rarely used with photogrammetric film cameras is the zoom lens. It is inherently unstable, as its operation relies on the movement of one or more lens elements along a rack (or screw thread) relative to the other lens components. Such movements or rotations can easily lead to changes in the parameters for decentering distortion and, of course, the principal distance will vary every time the lens is focused. Changes in the principal distance also cause severe variations in radial distortion. However, many video or digital cameras are fitted with zoom lenses as standard, and given the small dimensions of the imaging sensor in most electronic cameras a zoom lens enhances their flexibility.

Various researchers have studied the changes in interior orientation for electronic cameras resulting from variation in the principal distance of the zoom lens. They have observed some significant changes, with translations of up to 100 pixels in the position of the principal point, attributable to a tilting of the optical axis with respect to the sensor. Such a lens may still perform acceptably for the purposes for which it was made and the user may be unaware of the defect. The care and consideration in manufacturing associated with a £3,000 to £6,000 semi-metric film camera, or even a good £1,500 non-metric camera, will be missing due to the mass production techniques employed. Quite clearly, electronic cameras are often being used for tasks that were not envisaged by the camera or lens manufacturer, so special care must be taken and basic assumptions, such as that the optical axis is perpendicular to the sensor plane, cannot be made. Another problem found with zoom cameras is non-linear variation in radial distortion at shorter principal distances and significant variations in the decentering distortion parameters (up to 5 pixels on a 525 x 350 pixel array!). This result is over an order of magnitude larger than for film cameras. Zoom lenses can, however, be calibrated and their performance improved; calibration results have been shown to be stable over a period of weeks. In some cases it is necessary with a motorised zoom to approach the desired focal length from the same direction each time, due to hysteresis in the lens assembly. All of these factors mean that zoom lenses are not likely to give such good results as single focal length lenses.

2.9 Radiometric aspects of cameras

Radiometric aspects of lenses can affect the geometric accuracy of results if not properly understood. This section gives some results from practical experiments carried out to investigate these effects but does not purport to address this area in any depth.


2.9.1 Intensity variations between cameras

Figure 2.10 illustrates image intensity variations between different camera and lens permutations. For this experiment, an area of white card was evenly illuminated by a pair of lamps positioned at 45° with respect to the card. Small RMS image intensity differences of ±2 grey levels occurred between the different lenses mounted on the same camera body; between camera bodies, differences of up to 14 grey levels were observed. Whilst such variations could be removed by adjusting the camera gain, the settings should be determined with respect to signal saturation levels. During this experiment, possible image illumination fall-off at the edge of the format using wide apertures was found to be indistinguishable from the ±2 grey value variations present in all Pulnix images. No significant fall-off in intensity was found, since the Fujinon 25 mm f/1.4 lenses are of standard construction, unlike short focal length lenses that are often of retrofocus construction to allow for the 17.526 mm flange spacing of the C-mount standard.

Figure 2.10. RMS grey level for three camera/lens combinations (RMS grey value plotted against lens number)

Figure 2.11 demonstrates some results of varying the lens aperture whilst imaging a uniform white card with two different lenses. The only significant difference occurred at f/5.6.

Figure 2.11. RMS grey-level and lens aperture for two lenses (RMS image grey value plotted against log relative exposure for apertures f/4.5 to f/16; error bars show the RMS grey level σ)


2.9.2. Use of test charts

To evaluate the resolution of a camera system a lens test chart can be imaged, as in the example shown in figure 2.12. The resolution will differ across the field of view, so if the test chart does not have multiple test sections that can be viewed at one time it is necessary to image the same chart in several locations.

Figure 2.12 An image of a lens test resolution chart

Visual evaluation of the patterns produced at high magnification demonstrated that there was no significant difference between the centre and the edges of the format for all permutations. Figure 2.13 illustrates a set of intensity profiles through three sets of line pairs. It can be seen that the spatial resolution of the system is somewhere between 38 and 60 l mm⁻¹. Such a value would agree with the 58 l mm⁻¹ theoretical maximum resolution given by the Nyquist theorem for the sensor. The variations in resolving power between the three optics tested were found to be insignificant.
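The 58 l mm⁻¹ figure quoted above corresponds to the Nyquist limit of one line pair per two pixels; a pixel pitch of about 8.6 µm, assumed here purely for illustration, reproduces it.

```python
pixel_pitch_mm = 0.0086                      # assumed pixel pitch of about 8.6 microns
nyquist_lp_per_mm = 1.0 / (2.0 * pixel_pitch_mm)
print(round(nyquist_lp_per_mm, 1))           # roughly 58 line pairs per mm
```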

Figure 2.13. Intensity profiles through three sets of line pairs (24, 38 and 60 l mm⁻¹), plotted as intensity against pixel number

2.10. Physical characteristics of photographic film

Despite the rapid and continuing advances and advantages of digital image technology there are still uses for film-based imaging - high speed cameras and ultra high accuracy photogrammetry are two examples. The following notes may assist those constrained to use such technology.


2.10.1. Spectral sensitivity

Films can be classified, in part, by the wavelengths of light to which they are sensitive. The more common classifications include: those sensitive only to short wavelengths (370-480 nm); ortho-chromatic (range 370-540 nm, most sensitive at 420 nm); panchromatic (almost the whole visible spectrum, 360-690 nm); and infrared (660-930 nm, red-infrared, and simultaneously 370-460 nm, blue). Note that with infrared film a special lens should be used to accommodate the longer than usual wavelengths, and a heavy blue filter should be used to eliminate the blue rays to which the film may also be sensitive.

2.10.2. Resolution

The grain size of the crystals of silver bromide, which are sensitive to light and change chemically, may be thought of as the primary element of resolution. The traditional way of thinking of the 'resolving power' of film was to determine the number of 'line pairs per mm' which could be discerned on an image, each line pair consisting of alternate black and white stripes of similar width. Typically the resolving power expressed this way varies from 10 to 100 line pairs per mm, with an average value for good quality colour transparency film of around 40. High contrast black and white films with resolutions up to 400 line pairs per mm are commercially available (Kodak Tech Pan, for example), while military and other defence users report up to an order of magnitude improvement on that quality again. A more complete definition of resolution is in terms of a modulation transfer function (MTF). MTF curves may also be derived for lenses, and so the MTF of the complete imaging system may be determined. Suffice to state here that, as a general rule, as the speed of a film increases (for example from ASA100 to ASA400) the grain size increases, the period of exposure decreases and the resolution becomes worse (the image gets 'grainier' and cannot withstand magnification).

2.10.3. Speed

The speed of a film is defined in a quaint old-fashioned way by its ASA rating (ASA = American Standards Association): ASA100 means that "on a fine sunny day the exposure should be 1/100th of a second at an aperture stop of f-16". Films with speeds as low as 2.5 may be commercially purchased (extremely small grain size, useful for high definition work at slow shutter speeds). The higher the film speed, the lower the exposure time required, but usually at the price of poorer resolution.

2.10.4. Developing

One of the real drawbacks of film used for photogrammetric purposes is that results can never be 'real-time'. It usually takes about an hour to develop a roll of film once it reaches the processing laboratory. The steps in film development are basically simple. The film first goes into the Developer, which reduces the exposed silver salts to grains of metallic silver; this is a time-critical process, as the developer is aggressive and will ruin the film if left in contact for too long. The film is then put in a Stop Bath to prevent further development before being placed in a Fixing solution. The fixer dissolves out unreduced silver salts, and the film must then be thoroughly washed to remove all traces of the chemicals.


2.10.5. Film stretching

The image is present in the film's emulsion, which is, in effect, glued to the carrier material. This carrier material can vary tremendously in physical properties with respect to stability under temperature and humidity, and when wet during the developing process. Older style film bases such as cellulose-acetate are easily susceptible to dimensional change when subjected to atmospheric and mechanical stress. The commercial processing of film, where it is drawn through the processing tanks and then force dried with hot air, can leave the best photogrammetric procedure in tatters. The author has actually experienced some 35mm colour transparencies of original size 36 by 24 mm returning from the 'one-hour' fast commercial processor measuring 38 by 24 mm; an accuracy of under 1 part in 20 was not part of the photogrammetric design. Modern plastic films use carriers with names like 'cronaflex' and are made from ester plastics. They are truly stable, but are not used on most small format commercial films, only on specialist or aerial survey camera films. One warning if using these modern films: you must have scissors in the dark-room, as they are impossible to tear off a roll by biting, as is the conventional practice.

Film deformation is of course the very reason why fiducial marks, reseau or grid plates were invented and added to, or in front of, the image plane of cameras used for serious photogrammetry. A further note of caution must be added here when applying a two-dimensional transformation to allow for film stretch or deformation based on measurements at fiducial or grid crosses. A conformal (4-parameter), rather than affine (6-parameter), transformation should be selected: although the residuals will always be less with the latter, some undesirable 'warping' may be introduced as a result of applying the affine transformation. A projective (8-parameter) transformation should never be applied.

2.10.6. Holding the film flat during exposure

There are basically three options for film on an image plane:

(1) Do nothing. This is the existing situation in most 35 and 70mm cameras. The film tends to bow away from the pressure backing plate by approximately 0.5 to 1.0 mm for these types of cameras respectively. This makes no discernible difference to your holiday snap-shots but can completely ruin any high accuracy task requiring accuracies greater than about 1:500. The principal distance has different values across the image and the geometric fidelity of the photogrammetric solution is severely compromised.

(2) Glass plate. A glass plate added just in front of the image plane does two things: it constrains the amount of flexing that the film can do, and it is an excellent medium to accept fiducial marks or grid crosses. Two problems remain: the film is still not perfectly flat (despite even a pressure plate in the film wind-on mechanism pushing the film against the glass, the film buckles in a so-called 'orange-peel' fashion), and the presence of the glass plate acts like an extra lens element, causing the normal focussing to change and more radial distortion to be present. This latter effect may be overcome by calibration procedures.

(3) Vacuum. This is the most expensive and complicated option. It is also the only one that satisfies all high accuracy constraints. One word of caution: the backing plate must be flattened to precise engineering tolerances or else the concept of holding the film tightly against it becomes a nonsense.

Film unflatness is considered to be the main factor limiting the attainment of higher accuracies in non-metric small format photogrammetry. The deviation of the film away from its backing plate has been determined (for example, Donnelly, 1990) to be up to 0.5 mm in 35mm cameras (image format 24 x 36 mm) and estimated to be as high as 1.0 mm in 70mm cameras (image format 55 x 55 mm). The bulges cannot be considered central or symmetric, although for a particular film type and camera they tend to be highly consistent over a sequence of exposures. One problem associated with film unflatness is that during the later measurement of the film, the bulge existing at the time of exposure cannot be re-created. In fact, the film will almost certainly be flattened out with a glass plate in the set-up phase prior to measuring, causing the film's dimensions to be 50 to 100 µm longer than when in the back of the camera. Whilst this would intuitively suggest a radial movement (distortion) of image locations, which would be included in any radial distortion parameters derived from the film measurement process, it has in fact been shown that this film bulge does not influence the radial distortion parameters of the lens. The bulge effect is very largely removed from consideration if a conformal or affine transformation is used in the interior orientation procedure before the lens distortion calculations commence.

Localised expansion or shrinkage will not be effectively dealt with by corner or side fiducial marks, and an extensive grid of reseau crosses is required to counteract such effects. Much investigation has taken place as to the most appropriate formula to use for interpolation inside such a reseau. A transformation based on the nearest surrounding four crosses seems most appropriate, and Kotowski and Weber (1984) proposed a bilinear interpolation

    x1 = a1 + a2 x + a3 y + a4 xy
    y1 = a5 + a6 x + a7 y + a8 xy                                              (4.12)

that provides a unique solution and avoids discontinuity problems across the edges of the reseau cells.

Robson (1990) conducted a study into the dimensional stability of films and concluded that humidity was a major factor. Changes in moisture content cause the emulsion to expand or contract, and this change must be resisted by the base material. Robson's work demonstrates many of the uncertainties inherent in analogue camera systems and provides reasons why accuracies greater than 1:5000 are extremely difficult to obtain with simple non-metric cameras which do not possess vacuum, reseau or other film flattening devices.
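The bilinear correction quoted above can be sketched as follows: the eight coefficients are solved from the measured and calibrated positions of the four reseau crosses surrounding the point to be corrected. In any real case the cross positions would of course come from the reseau calibration.

```python
import numpy as np

def bilinear_coefficients(measured, calibrated):
    """Solve a1..a8 from four corresponding reseau crosses, given as lists of (x, y) pairs."""
    A = np.array([[1.0, x, y, x * y] for x, y in measured])
    ax = np.linalg.solve(A, np.array([x for x, _ in calibrated]))   # a1..a4
    ay = np.linalg.solve(A, np.array([y for _, y in calibrated]))   # a5..a8
    return ax, ay

def correct_point(x, y, ax, ay):
    """Apply x1 = a1 + a2 x + a3 y + a4 xy (and similarly for y1)."""
    basis = np.array([1.0, x, y, x * y])
    return float(basis @ ax), float(basis @ ay)
```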


2.11. Selection of the appropriate model for a given measurement task

The selection of an appropriate model for lens distortion will depend on the application. If a wide-angle lens (of short focal length) is used then distortion is likely to be greater than for a normal lens. If the quantity of distortion is unknown, a straight line can be imaged near the edge of the image format and the deviation of the line from straight measured using any simple graphics program. Remember that this deviation only indicates the difference in distortion between the middle of the line and its ends, and not the level of gross distortion. Table 2.1 gives a rough idea of which lens distortion modeling parameters might be applied to achieve a given level of accuracy. It is only approximate, as the requirements change considerably depending on the focal length and design of the lens, but at least the progression of the general scheme is indicated.

Level of accuracy required across whole image   Model parameters required               Comment
5 pixels                                        K1                                      Gross lens distortion removed
1 pixel                                         xpp, ypp, K1                            Improvement due to principal point location
0.5 pixels                                      xpp, ypp, K1, P1, P2                    Decentering distortion added
0.1 pixels                                      xpp, ypp, c, K1, P1, P2                 More accurate calibration needed
0.05 pixels                                     xpp, ypp, c, K1, K2, K3, P1, P2         Higher order lens distortion terms
0.01 pixels                                     xpp, ypp, c, K1, K2, K3, P1, P2, a, b   Sensor parameters required

Table 2.1. Approximate indication of which lens parameters to use

This table only gives a rough guide. Using more parameters is not detrimental if the calibration method is capable of estimating those parameters properly, but over-parameterisation can lead to unrealistic precision estimates compared with the accuracy of calibration actually achieved. Hence, for many applications a reasonably full model can be used without problems, but more computations will be required and the calibration may be more difficult.
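Table 2.1 can be restated as a small helper that picks the least demanding parameter set meeting a required accuracy; the thresholds simply repeat the table and remain the rough guide the text describes.

```python
MODEL_BY_ACCURACY = [   # (accuracy in pixels, parameters), restating Table 2.1
    (5.0,  ["K1"]),
    (1.0,  ["xpp", "ypp", "K1"]),
    (0.5,  ["xpp", "ypp", "K1", "P1", "P2"]),
    (0.1,  ["xpp", "ypp", "c", "K1", "P1", "P2"]),
    (0.05, ["xpp", "ypp", "c", "K1", "K2", "K3", "P1", "P2"]),
    (0.01, ["xpp", "ypp", "c", "K1", "K2", "K3", "P1", "P2", "a", "b"]),
]

def parameters_for(required_accuracy_pixels):
    """Return the parameter set suggested by Table 2.1 for a required accuracy."""
    for achievable, params in MODEL_BY_ACCURACY:
        if required_accuracy_pixels >= achievable:
            return params
    return MODEL_BY_ACCURACY[-1][1]          # tighter than the table: use the full model

print(parameters_for(0.3))   # ['xpp', 'ypp', 'c', 'K1', 'P1', 'P2']
```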


2.12 Summary

Knowledge of the geometry of the bundle of light rays that pass through a lens is essential for the accurate application of high accuracy techniques. The formulae described in this chapter are capable of correcting the geometric location of an image to within a few tenths of a micrometre. It is often important that the user does not become too concerned or involved with the mechanics of calibration techniques, since it is really the accurate determination of object co-ordinates in 2-D or 3-D that is usually the primary aim. In routine tasks, the results for the calibration parameters should only be used as a check to ensure that no unforeseen systematic error is present. The additional parameters, including those of camera and lens calibration, should be similar to those from previous calibrations, providing a quality check on the entire adjustment procedure. Advice on the selection of an appropriate calibration method to match the model chosen is given in Chapter 4.

2.13. References

Smith, W.J., 1990. Modern Optical Engineering: The Design of Optical Systems. 2nd Edition. McGraw-Hill.
