

High-speed close-range photogrammetry for dynamic shape measurement

I Wallace*a, N J Lawsonb, A R Harveya, J D C Jonesa and A J Moorea

a Heriot-Watt University, School of Engineering and Physical Sciences, Edinburgh, EH14 4AS, United Kingdom
b Cranfield University, Department of Aerospace Sciences, Cranfield, MK43 0AL, United Kingdom

ABSTRACT
We describe and characterize an experimental arrangement to perform shape measurements on a deformable object through dynamic close-range photogrammetry; specifically, an insect in flight. The accuracy of shape measurements in photogrammetry is improved by increasing the number of camera views. In static close-range photogrammetry, one may increase the number of camera views by moving the camera and taking a number of images, or equivalently, by moving the object. In dynamic close-range photogrammetry of rigid objects, one may combine all the camera views from a video sequence. However, in dynamic close-range photogrammetry of a deformable object, the number of camera views is restricted to the number of physical cameras available. The technique described here is to arrange a number of cameras around a measurement volume, illuminated by a laser synchronized to the cameras. The cameras are first calibrated, and then a bundle adjustment is used to determine point positions on the object. In this paper, we first determine the capabilities of the system in static close-range photogrammetry. We then perform a static shape measurement on our dynamic target and compare this with the results of dynamic close-range photogrammetry. The results indicate that high-speed dynamic measurements of the deformation of insect wings during flight should provide adequate resolution to develop an aeroelastic model of a flapping wing.

Keywords: close-range photogrammetry, high-speed, optical metrology

1. INTRODUCTION
The subject of photogrammetry, where objects are measured from photographic or electronic images, is one with a long tradition. Long-range photogrammetry to survey buildings from high vantage points was used in the 1850s, and this was developed into aerial photogrammetry in the early years of the twentieth century. Close-range photogrammetry, where the camera is not focussed at infinity, brings its own particular problems and has been used extensively in machine vision, in surveying buildings and ancient artefacts, in plastic surgery, in analysis of traffic accidents and in many other fields1.

In this paper we describe a system that takes advantage of the availability of affordable high-speed digital cameras to implement dynamic deformation measurements using close-range photogrammetry. The system is designed to measure the deformation of free-flying insects during hovering flight, and is based on pulsed laser illumination to accommodate wing-tip surface velocities of the order of 10 m/s. We investigate the performance of the system, particularly with respect to the restricted number of views imposed by the dynamic nature of the experiment.

In dynamic photogrammetry, two situations can be distinguished. In the case of rigid bodies, a series of images may be used to build up several views of the subject. This is similar to static photogrammetry, where the subject, or equivalently the camera, may be moved to provide more views of the subject than there are cameras. This reduces the errors and allows photogrammetry to be done with just one camera. In the case of non-rigid bodies, however, where the subject may be changing shape between successive images, the number of views is restricted to the number of cameras. Examples of close-range photogrammetry of rigid bodies include metrology of manufactured goods passing a camera on a belt, while examples for non-rigid bodies include vehicle collision studies and metrology of insects in flight.

Section 2 describes our approach to close-range photogrammetry. Section 3 details experimental results for static targets using white-light illumination, which establishes the baseline performance of our system. Then the laser system is introduced, and the results of dynamic and static close-range photogrammetry on a target are compared.

*[email protected]; phone 44 131 451-4362; fax 44 131 451-3129

26th International Congress on High-Speed Photography and Photonics, edited by D. L. Paisley, S. Kleinfelder, D. R. Snyder, B. J. Thompson, Proc. of SPIE Vol. 5580 (SPIE, Bellingham, WA, 2005) · 0277-786X/05/$15 · doi: 10.1117/12.567331


2. METHODOLOGY

2.1 Static close-range photogrammetry
Our photogrammetry process consists of two stages: calibration, to determine the lens distortions and internal parameters of the cameras, followed by bundle adjustment, in which the system of equations describing the camera locations and orientations, the 3-dimensional positions of points on the unknown structure and the internal camera parameters is solved simultaneously to reduce errors through optimization. The bundle adjustment is considerably sped up by starting from the best initial solution available; estimates for the camera locations and orientations and for the object structure are provided by resection and intersection steps respectively.

2.1.1 Bundle Adjustment

Bundle adjustment2 is a very general technique, which minimises the error between measured values and some model of reality; any predictive parametric model can be handled. The model most often used in photogrammetry is projection in a camera, representing how 3D structure is mapped onto the 2D image in each camera. A projective transformation is the most general mapping under which straight lines are preserved, so bundle adjustment alone can only recover the 3D structure (in the noise-free case) to within some projective transformation of the real structure. In these experiments, however, calibration with objects of known 3D structure removes this ambiguity. A projective bundle adjustment optimises the camera model and the derived 3D structure to best fit the point measurements in all camera images. The projection of j = 1..n 3D structure points X_j = (P_j; 1) = (X_j; Y_j; Z_j; 1) in homogeneous co-ordinates to 2D image points x_{i,j} = (x_{i,j}, y_{i,j}, 1) in pixel units, also in homogeneous co-ordinates, viewed in i = 1..m cameras can be expressed3 as:

x_{i,j} = M_i X_j,   i = 1..m, j = 1..n   (1)

where the camera matrix M_i is a 3×4 matrix with m_{uv} the element in the u-th row and v-th column (dropping the i subscript for clarity). The camera matrix may be decomposed as M_i = K_i (R_i | -R_i t_i), where t_i represents the location and R_i the rotation matrix of camera i. Quaternions are used to represent the rotation, as these are inherently more stable than Euler angles and not prone to gimbal lock. We use the standard representation for unit quaternions, q = (a; b; c; w). A quaternion rotation can be converted to a rotation matrix:
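As an illustration of equation (1) and the decomposition of M_i, the following minimal sketch (Python with numpy; the function names and all numerical values are our illustrative choices, not taken from the experiment) composes a camera matrix and projects homogeneous points:

import numpy as np

def camera_matrix(K, R, t):
    """Compose the 3x4 camera matrix M = K (R | -R t)."""
    return K @ np.hstack([R, -R @ t.reshape(3, 1)])

def project(M, X):
    """Project homogeneous 3D points (4 x n) to image coordinates (2 x n)."""
    x = M @ X           # homogeneous image points (3 x n), equation (1)
    return x[:2] / x[2] # dehomogenize, as in equations (3) and (4)

# Example: identity-rotation reference camera, M = K (I | 0)
K = np.diag([1200.0, 1200.0, 1.0])          # fc = 1200 pixels, cf. equation (13)
M = camera_matrix(K, np.eye(3), np.zeros(3))
X = np.array([[0.01, 0.02, 0.4, 1.0]]).T    # one point, 400 mm from the camera
print(project(M, X))                        # -> pixel offsets (30, 60)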

R_i = \begin{pmatrix}
1 - 2(b^2 + c^2) & 2(ab - cw) & 2(ac + bw) \\
2(ab + cw) & 1 - 2(a^2 + c^2) & 2(bc - aw) \\
2(ac - bw) & 2(bc + aw) & 1 - 2(a^2 + b^2)
\end{pmatrix}   (2)
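A direct transcription of equation (2) into code (a sketch, with an illustrative 90-degree rotation about the z-axis as a usage example):

import numpy as np

def quat_to_rotation(q):
    """Convert a unit quaternion q = (a, b, c, w) to a rotation matrix, equation (2)."""
    a, b, c, w = q / np.linalg.norm(q)   # renormalize for numerical safety
    return np.array([
        [1 - 2*(b*b + c*c), 2*(a*b - c*w),     2*(a*c + b*w)],
        [2*(a*b + c*w),     1 - 2*(a*a + c*c), 2*(b*c - a*w)],
        [2*(a*c - b*w),     2*(b*c + a*w),     1 - 2*(a*a + b*b)],
    ])

R = quat_to_rotation(np.array([0.0, 0.0, np.sin(np.pi/4), np.cos(np.pi/4)]))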

We choose one camera to define the reference axes, with all other camera axes defined by a rotation and translation from that one. For the reference camera, then, M = K(I | 0). Each camera matrix M_i can then be defined by a rotation quaternion q and a translation t. Using the collinearity equations3 for each camera image i:

x_{i,j} = \frac{m_{11} X_j + m_{12} Y_j + m_{13} Z_j + m_{14}}{m_{31} X_j + m_{32} Y_j + m_{33} Z_j + m_{34}}   (3)

y_{i,j} = \frac{m_{21} X_j + m_{22} Y_j + m_{23} Z_j + m_{24}}{m_{31} X_j + m_{32} Y_j + m_{33} Z_j + m_{34}}   (4)


we derive the sub-Jacobian Ji,j for the case where only the motion parameters are to be optimised:

J_{i,j} = \frac{d\,x_{i,j}}{dM} = \begin{pmatrix}
X_j/D & Y_j/D & Z_j/D & 1/D & 0 & 0 & 0 & 0 & X'X_j & X'Y_j & X'Z_j & X' \\
0 & 0 & 0 & 0 & X_j/D & Y_j/D & Z_j/D & 1/D & Y'X_j & Y'Y_j & Y'Z_j & Y'
\end{pmatrix}   (5)

where D = m_{31} X_j + m_{32} Y_j + m_{33} Z_j + m_{34} is the common denominator of equations (3) and (4), and

X' = -\frac{m_{11} X_j + m_{12} Y_j + m_{13} Z_j + m_{14}}{(m_{31} X_j + m_{32} Y_j + m_{33} Z_j + m_{34})^2}   (6)

Y' = -\frac{m_{21} X_j + m_{22} Y_j + m_{23} Z_j + m_{24}}{(m_{31} X_j + m_{32} Y_j + m_{33} Z_j + m_{34})^2}   (7)

In a similar way, the Jacobian can be extended to include optimisation of the structure and calibration parameters. In the bundle adjustment code we iteratively search for a solution for the camera matrices M_i and 3D structure points X_j by re-projecting X_j through M_i and minimising the error from the measured points \hat{x}_{i,j}. Depending on the implementation, we may allow only M_i (motion), M_i and X_j (motion and structure), or M_i, X_j and C_i (motion, structure and calibration factors C_i) to be optimised. In addition, any other model (such as known 'ground truth' measurements or relations) may be included, and the model can be extended with further parameters to represent rigid or deformable motion3. In our implementation, a Levenberg-Marquardt scheme is used to perform the optimisation4. Here, the bundle adjustment includes M_i, X_j and C_i, where the calibration factors C_i are restricted to allowing a different principal distance for each image; the bundle adjustment is therefore partially self-calibrating.
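As a concrete illustration of this optimisation loop, the sketch below (Python with scipy; our illustrative code, not the authors' implementation) performs a motion-only refinement with a Levenberg-Marquardt solver, under the simplifying assumptions that the structure X_j is held fixed and that gauge fixing and parameter normalization, which a production implementation would need, are ignored:

import numpy as np
from scipy.optimize import least_squares

def residuals(params, X, x_meas, n_cams):
    """Stack reprojection errors over all cameras; X is 4 x n homogeneous."""
    res = []
    for i in range(n_cams):
        M = params[12*i:12*(i+1)].reshape(3, 4)
        x = M @ X
        res.append((x[:2] / x[2] - x_meas[i]).ravel())
    return np.concatenate(res)

def bundle_adjust_motion(M_init, X, x_meas):
    """Refine camera matrices by Levenberg-Marquardt minimisation."""
    p0 = np.concatenate([M.ravel() for M in M_init])
    sol = least_squares(residuals, p0, args=(X, x_meas, len(M_init)),
                        method='lm')   # Levenberg-Marquardt
    return [sol.x[12*i:12*(i+1)].reshape(3, 4) for i in range(len(M_init))]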

2.1.2 Calibration and initial estimate of internal camera parameters

The central perspective projection model represents the camera as an ideal pinhole, which is a simplification of the geometry of a real camera. Camera calibration is a method of determining and incorporating the differences between the real camera and the ideal case. One major difference is distortion due to the camera lens, which can be separated into radial and tangential components5,6,7. Lens distortion is introduced into the central perspective projection model in the following way. Let P be a point in space with coordinate vector (Xc; Yc; Zc) in the camera reference frame. This point is projected onto the image plane to give the normalized (pinhole) image projection x = Xc/Zc, y = Yc/Zc. After including lens distortion, the new normalized point coordinate (xd, yd) is defined as follows:

x_d = (1 + k_1 r^2) x + 2 k_2 x y + k_3 (r^2 + 2x^2)   (8)

y_d = (1 + k_1 r^2) y + k_2 (r^2 + 2y^2) + 2 k_3 x y   (9)

where r^2 = x^2 + y^2. The radial distortion, which causes variations in angular magnification with angle of incidence, is represented by the single parameter k_1, and the tangential distortion, due to "decentering", or misalignment of lens components in a compound lens, is represented by k_2 and k_3. Once distortion is applied, the final pixel coordinates (x_p; y_p) of the projection of P on the image plane are:

x_p = f_x (x_d + s y_d) + p_x   (10)

y_p = f_y y_d + p_y   (11)
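A minimal sketch of equations (8)-(11) in Python (all parameter values below are illustrative only):

import numpy as np

def distort_and_project(x, y, k1, k2, k3, fx, fy, s, px, py):
    """Apply radial (k1) and tangential (k2, k3) distortion, then map to pixels."""
    r2 = x*x + y*y
    xd = (1 + k1*r2)*x + 2*k2*x*y + k3*(r2 + 2*x*x)   # equation (8)
    yd = (1 + k1*r2)*y + k2*(r2 + 2*y*y) + 2*k3*x*y   # equation (9)
    xp = fx*(xd + s*yd) + px                          # equation (10)
    yp = fy*yd + py                                   # equation (11)
    return xp, yp

# Square pixels, zero skew, principal point at the sensor center (cf. equation 13):
xp, yp = distort_and_project(0.05, -0.02, k1=-0.1, k2=1e-4, k3=1e-4,
                             fx=1200, fy=1200, s=0.0, px=0.0, py=0.0)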


where px and py are the coordinates of the principal point, the intersection of the optic axis with the image plane (in the co-ordinate frame where the center of the sensor is the origin); s is a skew parameter corresponding to a skewing of the coordinate axes; and fx and fy are the magnifications (or principal distances in pixel units) in the x- and y-coordinate directions respectively. We can relate the pixel coordinate vector (xp; yp) and the normalized (distorted) coordinate vector (xd; yd) through the linear equation (xp; yp; 1) = K (xd; yd; 1), with the matrix K representing the internal camera parameters as follows:

K = \begin{pmatrix}
f_x & s f_x & p_x \\
0 & f_y & p_y \\
0 & 0 & 1
\end{pmatrix}   (12)

For these experiments, we assume square-pixel cameras with perpendicular x and y sensor axes, and therefore set s = 0, f_x = f_y = f_c and p_x = p_y = 0, which gives:

K = \begin{pmatrix}
f_c & 0 & 0 \\
0 & f_c & 0 \\
0 & 0 & 1
\end{pmatrix}   (13)

To calibrate the cameras, several images are taken of a known object at various positions and angles to the camera to give a range of views. In this work, that object is the chrome-on-glass distortion grid described below. The solution to the over-determined system of equations is found with a gradient descent method to provide internal camera parameters, describing the lens distortion and camera geometry, and external camera parameters, describing the camera orientations and locations; these are used as initial solutions for the resection code.

2.1.3 Initial estimate of camera locations and orientations: resection
In resection1, images of known objects are used to determine the location and orientation of the cameras with the collinearity equations (3, 4). Here, the same images as those used in the calibration were used. The calibration process provided initial estimates of camera orientations and locations, images corrected for lens distortions, dot centers and point correspondences. The resection algorithm employed a least squares estimator to iteratively change the camera locations and orientations until the maximum error, defined as the correction to the previous estimate, fell below 10^-11 pixels.

2.1.4 Initial estimate of object structure: intersection
In intersection1, images of unknown objects and the known interior and exterior camera parameters are used to determine the 3-dimensional structure of the unknown objects from the collinearity equations (3, 4). For a point seen in two cameras there are now 4 equations and 3 unknowns, and this over-constrained problem can again be solved with a least squares estimator, although an iterative technique is not required for such a small system of equations (a sketch is given below). Since there is only one redundant degree of freedom, reliability is low, but this is addressed by the bundle adjustment technique described above, which solves large systems of equations simultaneously (the resection and intersection are no longer independent).
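For illustration, a minimal intersection (triangulation) sketch: rearranging equations (3) and (4) for each camera gives linear equations in (X, Y, Z), solved here by least squares with numpy. The function name and interface are our illustrative choices:

import numpy as np

def intersect(Ms, xs):
    """Ms: list of 3x4 camera matrices; xs: list of measured (x, y) image points.
    Each camera contributes two rows: (x m3 - m1) . P = m14 - x m34, etc."""
    A, b = [], []
    for M, (x, y) in zip(Ms, xs):
        A.append(x * M[2, :3] - M[0, :3]); b.append(M[0, 3] - x * M[2, 3])
        A.append(y * M[2, :3] - M[1, :3]); b.append(M[1, 3] - y * M[2, 3])
    P, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return P   # least squares estimate of (X, Y, Z)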

3. RESULTS

3.1 Static close-range photogrammetry
In this section, we describe our typical static close-range photogrammetry experimental set-up. Two NAC Hi-Dcam II high-speed digital cameras were used with Nikon 50 mm standard lenses. The camera-subject-camera angle was ninety degrees, which ensures the lowest possible ratio of out-of-plane to in-plane errors8,9, close to unity. The camera-subject distance was approximately 400 mm. For this error analysis of the static close-range photogrammetry technique, the unknown object was a 1 mm-spaced grid of 6-by-26 chrome dots, each 0.5 mm in diameter, on a glass distortion grid from Edmund Scientific, where the dot diameters and spacings are known to a specified accuracy of ± 2.5 µm. The dots form large images (typically 20 pixels in diameter) and a correlation method was used to find the centers of the dots. The grid was held at arbitrary positions, orientations and angles, and 48 stereo images were taken. The grid is a regular object, but no regularity was known by the intersection algorithm, so this instance is equivalent in difficulty to arbitrarily placed spots on an arbitrary object.

The out-of-plane (Z) and in-plane (X, Y) mean object errors and standard deviations are plotted versus the number of views in Figure 1(a). The mean is taken over the object errors of each of the 156 points. They show the expected monotonic decrease in object error with number of views. For the case of two views, where the error was very dependent on the views selected, several pairs of views were taken and the mean calculated. The out-of-plane error (Z) dominates and is above 10 µm for fewer than 5 views. The errors can be compared with an independent probe measurement of the Edmund Scientific chrome-on-glass distortion grid, where mean spot-to-spot distance errors (and standard deviations) in µm of 1.6 (1.9), 1.5 (0.9), 7.9 (53.1) and 8.2 (53.2) were measured for the X-, Y- and Z-directions and for the total error respectively. The spot diameter was accurate to (0.9 ± 0.7) µm. The errors become comparable to these values for large numbers of views, where the mean object errors (and standard deviations) in µm become 1.3 (1.6), 1.4 (1.7), 3.7 (4.0) and 4.2 (4.6) for the X-, Y- and Z-directions and for the total error respectively.

It is instructive to compare the object errors achieved with the expectation given the mean pixel error (MPE). The longest side of the glass slide is 50 mm. Typically, it fills approximately half the vertical image dimension of 1024 pixels (the image is 1280 by 1024). So for a typical MPE of 0.11 pixels, the object error in one direction is expected to be 0.11 × 50 mm × 2 / 1024 = 11 µm, and the total object error is expected to be √[3 × (11 µm)²] = 19.1 µm. The object error may be expected to behave like the standard deviation of a sample and be proportional to 1/√(N-1), where N is the number of views. The curve 19.1 µm/√(N-1) is plotted in Figure 1(a) and fits the data fairly well.

The MPE also depends on the number of views. The fewer views there are, the easier it is to fit a camera model to the points on the image; hence, the pixel error should improve as the number of views is reduced. The observed variation of MPE and standard deviation of pixel error with number of views is plotted in Figure 1(b) along with, for interest, the curve MPE × (1 - 1/N) for an MPE of 0.11, which fits the data quite well, although no claim for its general applicability is made here. The MPE is observed to plateau at 0.11 with a standard deviation of 0.06.

3.2 Dynamic close-range photogrammetry of rigid bodies

In this section, we describe the experimental set-up for, and the errors experimentally achieved by, dynamic close-range photogrammetry of rigid bodies. In this case the unknown object was a 3 mm-spaced 8-by-8 grid of 1.3 mm-diameter dots printed at 600 dpi and glued to the blade of an optical chopper. This grid is a regular object and, again, no regularity was known by the intersection algorithm. In order to measure the dot positions by static photogrammetry, the chopper blade was rotated to various angles and 20 stereo images taken. For the dynamic photogrammetry measurements, the chopper was rotated at 563 rpm. The velocity of the grid was independently measured by a laser vibrometer (Polytec OFV-5000). The laser spot from the vibrometer was visible in the stereo images to allow comparison with the velocities measured by photogrammetry.

The velocity of a spot near the centre of the grid, which passed under the laser vibrometer spot, was measured with dynamic photogrammetry and compared with the independent measurement from the vibrometer. The spot speed from photogrammetry was found to be (5.03 ± 0.12) m/s and from the vibrometer (5.06 ± 0.07) m/s. In addition, a direct calculation from the rotation frequency and spot radius (85 mm) yielded (5.02 ± 0.12) m/s. The speeds derived from all measurements were thus found to agree within uncertainties. The range of spot speeds across the grid from direct calculation is 4.3 to 5.7 m/s.
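For reference, the direct calculation quoted above is simply the circumferential speed of the spot at the measured rotation rate:

v = 2\pi f r = 2\pi \times \frac{563}{60}\,\mathrm{s^{-1}} \times 0.085\,\mathrm{m} \approx 5.0\,\mathrm{m/s}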


[Figure 1 – (a) Out-of-plane (Z), in-plane (X, Y) and total mean object errors (microns) versus number of views, with the fitted curve; the error bars represent ± one standard deviation on the total mean object errors. (b) Variation of mean pixel error (MPE) with number of views, with the fitted curve; the error bars represent ± one standard deviation.]


In order to determine the errors introduced by the dynamic photogrammetry, it was first necessary to measure the spot positions on the printed grid used in the experiment. This was done by static photogrammetry of the target illuminated by a laser. The errors in the positions of spot centres were defined relative to the mean of 28 stereo image pairs. The X- and Y-axes and origins were aligned in each object derived from the static photogrammetry to make the comparison. The standard deviations of the errors were 36 µm, 51 µm and 52 µm for the X-, Y- and Z-directions respectively, where Z is out of the plane of the printed grid. These are measures of the errors typically found for each point from one stereo image and are of a similar size to those described above for static photogrammetry with two views. Here we are making a measurement of the printed grid and may combine the photogrammetry measurements from the 28 stereo image pairs to reduce the errors. The errors in the mean positions of the points on the grid are therefore reduced by 1/√55 to 5 µm, 7 µm and 7 µm for the X-, Y- and Z-directions respectively. The mean pixel error for these images was 0.22 and the standard deviation of the pixel errors 0.25.

Dynamic photogrammetry was then used to measure the spot positions on the printed grid. The target was again illuminated by synchronized 10 µs laser pulses. The NAC Hi-DCAM II cameras were fitted with 50 mm Nikon standard lenses at f/4 and 13 mm extension tubes, and were running at 500 fps full-frame with an exposure time of 200 µs.

For dynamic close-range photogrammetry it was useful to implement an automatic point-tracking routine. In this routine, an image-plane velocity was determined for each point independently using the current and previous images in the sequence. This velocity was used to predict the likely position of the point in the next image, which was used as the centre of the correlation-routine search window to find the centre of the circle. In addition, accelerations were determined from the previous two images to further refine the predictions. The automatic point-tracking algorithm was found to considerably reduce the time spent and the number of errors generated during this process. The algorithm was further enhanced to cope with occlusion of points behind an obstacle or when points pass outside the field-of-view. In this case, basis vectors were initially derived from an image where all the points were visible, and all the point locations were then expressed as multiples of these basis vectors. For the images where some points were occluded, the equivalent basis vectors were derived from the visible points and the locations of the occluded points were estimated from these. This technique was found to be successful in dealing with occlusions behind a post placed in front of the object and with non-rigid bodies.

The errors in the positions of spot centres were defined relative to the mean static photogrammetry measurements and are plotted in Figures 2(a), (b) and (c) for the X-, Y- and Z-directions respectively. The standard deviations of the errors were 111 µm, 88 µm and 45 µm for the X-, Y- and Z-directions respectively. Again we are making a measurement of the printed grid and may combine the photogrammetry measurements from the 9 stereo image pairs to reduce the errors. The errors in the mean positions of the points on the grid are therefore expected to reduce by 1/√17 to 27 µm, 21 µm and 11 µm for the X-, Y- and Z-directions respectively. The actual standard deviations of the errors in the mean positions were 10 µm, 32 µm and 18 µm for the X-, Y- and Z-directions respectively. The mean pixel error for these images was 0.22 and the standard deviation of the pixel errors 0.36. It is instructive to compare the object errors achieved with the expectation given the mean pixel error (MPE). The side of the printed grid is 24 mm. Typically, it fills approximately half the vertical image dimension of 1024 pixels (the image is 1280 by 1024). So for a typical MPE of 0.22 pixels, the object error in one direction is expected to be 0.22 × 24 mm × 2 / 1024 = 10.3 µm, and the total object error is expected to be √[3 × (10.3 µm)²] ≈ 17.9 µm. The actual total object error observed was 38.1 µm.
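Returning to the point-tracking routine described above, a minimal sketch of its prediction step (Python; the quadratic extrapolation shown is our illustrative realization of the velocity-plus-acceleration prediction, not the authors' code):

import numpy as np

def predict_next(p_prev2, p_prev, p_curr):
    """Predict the search-window centre for the next frame from three
    consecutive 2D image positions of one tracked point."""
    v = p_curr - p_prev                          # velocity, pixels/frame
    a = p_curr - 2*p_prev + p_prev2              # acceleration, pixels/frame^2
    return p_curr + v + a                        # = 3 p_curr - 3 p_prev + p_prev2

window_center = predict_next(np.array([100.0, 50.0]),
                             np.array([104.0, 51.0]),
                             np.array([109.0, 52.5]))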


[Figure 2 – Errors of spot centers on the printed grid from laser-illuminated dynamic photogrammetry of 9 stereo images in (a) the X-direction, (b) the Y-direction and (c) the Z-direction.]

4. DISCUSSION
To determine the capability of the photogrammetry setup, a precisely known calibration target was measured. The precision increases with the number of camera views, until in-plane (out-of-plane) mean object errors of approximately 1 (4) microns were achieved when ten camera views were used. In dynamic photogrammetry of rigid bodies, it is also possible to increase the number of camera views by combining successive images in a sequence taken with a small number of cameras. For the dynamic photogrammetry work, we used a printed grid of dots mounted on the spinning disc of an optical chopper. The dots were first measured with static photogrammetry under laser illumination, then by dynamic photogrammetry with synchronized laser illumination. The results were generally similar to those for static photogrammetry, indicating that no extra errors were introduced in moving from static to dynamic photogrammetry.

Finally, we discuss the applicability of the results to dynamic close-range photogrammetry of non-rigid bodies. The use of deformable bodies would have meant shape determinations for each image set, which would have been very time-consuming for the accuracy required. Equivalently, we may restrict the dynamic photogrammetry technique to determining shape from only one image in a sequence. The technique may not use the information that the same object is being viewed in several images, since we are here simulating a deformable body. The errors obtained are then the same as those obtained previously for rigid bodies, where the number of views is now restricted to be no more than the number of cameras.

5. CONCLUSIONS
A survey of static photogrammetry capability was carried out using the Edmund Optics chrome-on-glass distortion slide, and the experimental results are reported here. The results show that at least five camera views were required to achieve a mean object error of less than 10 µm. The object error was shown to decrease, and the mean pixel error to increase, with increasing number of views. Dynamic results for rigid bodies give similar object errors to static results, since multiple object estimates in a sequence can be combined to reduce the error in a similar way as if there were multiple views of the object. In the case of dynamic results for non-rigid bodies, the number of views is restricted to the number of physical cameras, and the errors are the same as those for the static case for that number of views.

ACKNOWLEDGEMENTS

This work is supported by a grant from the Engineering and Physical Sciences Research Council, United Kingdom and DSTL, United Kingdom.

REFERENCES
1. K. B. Atkinson (ed.), Close Range Photogrammetry and Machine Vision, Whittles Publishing, Scotland, 1996.
2. B. Triggs, P. McLauchlan, R. Hartley and A. Fitzgibbon, "Bundle Adjustment - A Modern Synthesis," in Vision Algorithms: Theory and Practice, B. Triggs, A. Zisserman and R. Szeliski, Eds., pp. 298-372, LNCS Vol. 1883, Springer-Verlag, Berlin, 2000.
3. R. I. Hartley, "Euclidean Reconstruction from Uncalibrated Views," in Proceedings of the DARPA-ESPRIT Workshop on Applications of Invariants in Computer Vision, pp. 187-202, Azores, Portugal, 1993.
4. B. Girod, G. Greiner and H. Niemann, Eds., Principles of 3D Image Analysis and Synthesis, Kluwer Academic Publishers, Boston-Dordrecht-London, 2000.
5. D. C. Brown, "Lens Distortion for Close-Range Photogrammetry," Photogrammetric Engineering 37(8), pp. 855-866, 1971.
6. D. C. Brown, "Decentering Distortion of Lenses," Photogrammetric Engineering 32(3), pp. 444-462, 1966.
7. A. Conrady, "Decentering lens systems," Monthly Notices of the Royal Astronomical Society 79, pp. 384-390, 1919.
8. N. J. Lawson and J. Wu, "Three-dimensional particle image velocimetry: error analysis of stereoscopic techniques," Meas. Sci. Technol. 8, pp. 894-900, 1997.
9. N. J. Lawson and J. Wu, "Three-dimensional particle image velocimetry: experimental error analysis of a digital angular stereoscopic system," Meas. Sci. Technol. 8, pp. 1455-1464, 1997.
