Photogrammetry: 3-D from imagery

Bo Wu
The Hong Kong Polytechnic University, China

Photogrammetry is defined by the American Society for Photogrammetry and Remote Sensing as “the art, science, and technology of obtaining reliable information about physical objects and the environment through the processes of recording, measuring, and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena.” Simply, photogrammetry allows 3-D measurements (e.g., position, orientation, shape, and size) of objects from photographs.

Photogrammetry is as old as modern photography and can be dated to the mid-nineteenth century (Konecny 1985). Over the past 80 years, the principal application of photogrammetry has been in the compilation of maps from aerial photographs. In recent decades, the development of high-resolution satellite imaging and close-range techniques has facilitated the application of photogrammetry to many other fields, such as Earth observation, environmental monitoring, smart cities, architecture, industrial inspection, robotics, and so on.

This entry provides a brief review of the historical development of photogrammetry and presents the fundamental techniques for deriving 3-D information from imagery via photogrammetry. Some observations and considerations about the future development of photogrammetry are also presented.

The International Encyclopedia of Geography. Edited by Douglas Richardson, Noel Castree, Michael F. Goodchild, Audrey Kobayashi, Weidong Liu, and Richard A. Marston. © 2017 John Wiley & Sons, Ltd. Published 2017 by John Wiley & Sons, Ltd. DOI: 10.1002/9781118786352.wbieg0942

Historical development of photogrammetry

Photogrammetry began soon after the invention of photography in 1839. In 1849, Aimé Laussedat was the first person to use terrestrial photographs for the compilation of a topographic map and is now referred to as the “father of photogrammetry” (Birdseye 1940). In 1893, Albrecht Meydenbauer was the first person to use the term photogrammetry. He also designed the first wide-angle lens for topographical mapping and architectural surveying (Meyer 1987). The 1900s were pioneering years in the development of photogrammetry, with achievements including the generation of topographic maps based on techniques such as a “photographic plane table” using terrestrial photography or aerial photography supported by kites or balloons (Konecny 1985).

With the Wright brothers’ invention of the airplane in 1903, the development of photogrammetry entered a prosperous era due to the better camera platform. Since then, photogrammetry has developed through three stages: analog, analytical, and digital photogrammetry.

Analog photogrammetry

The theory of stereoscopic vision widely used in the 1900s provided the foundations for analog photogrammetry. When looking at an object at a particular distance, our eyes simultaneously focus on and converge upon the object. The angle of convergence is called the parallactic angle. Objects at different distances from the viewer are perceived through different parallactic angles. Due to the parallactic angle, the image of an object falls on different locations on the retinas of the left and right eyes. Points at different distances from the eyes appear at different relative locations. The difference in the position of the points on images is called parallax; thus, changes in the parallactic angle result in parallax. The measurement of parallax offers an accurate method of measuring height or depth from stereo images. Figure 1 illustrates the principle of height determination from stereoscopic vision.

In Figure 1, assuming that two cameras (El and Er) take images of an object at point P, the object’s image will appear as two image points (pl and pr) on the left and right images. Their locations in the images along the x direction are xl and xr. The parallax is determined as p = xl − xr. The height can then be computed from the parallax measurements:

Hp = H − Bf/p    (1)

where B is the baseline length between the two cameras and f is the focal length of the camera. It can be seen that the parallax of any point is inversely proportional to its distance from the camera and that parallax due to height occurs only on the x axis (the direction of flight for aerial images).
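As a numerical illustration of equation 1, the following Python sketch computes a point’s height from a measured parallax. The function name and the sample values are illustrative only, not taken from the text:

```python
def height_from_parallax(xl, xr, B, f, H):
    """Height of an object point from stereo parallax (equation 1):
    Hp = H - B*f/p, with parallax p = xl - xr.

    xl, xr -- x image coordinates of the conjugate points (same unit as f)
    B      -- baseline length between the two camera stations
    f      -- camera focal length
    H      -- height of the cameras above the datum
    """
    p = xl - xr  # parallax occurs along the x (flight) direction
    if p == 0:
        raise ValueError("zero parallax: point at infinite distance")
    return H - B * f / p

# Illustrative numbers only: B = 600 m, f = 0.15 m, H = 1500 m.
# A parallax of 0.09 m puts the point B*f/p = 1000 m below the cameras,
# i.e. at a height of 500 m.
print(height_from_parallax(0.15, 0.06, 600.0, 0.15, 1500.0))
```

The inverse relation between parallax and distance is visible here: halving the parallax doubles the computed distance below the cameras.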

If looking at a stereo pair of images, with one eye looking only at the first image and the other looking only at the second image, the scene would appear as a 3-D image. Viewing a stereopair is actually quite difficult without the aid of a mechanical device, the simplest of which is called a stereoscope. A stereoscope set up for viewing a stereopair is shown in Figure 2.

Figure 1 Illustration of height determination from stereoscopic vision.

From the beginning of the twentieth century, analog photogrammetry was developed based on the above theory of stereoscopic vision. In 1908, Eduard von Orel invented the first stereoautograph. The development of this plotter was significant because its construction principles made terrestrial photogrammetry practical in mountainous areas by allowing the operator to trace elevation contours directly (Collier 2002). After 50 years of development, this type of instrument reached maturity in the 1960s. As these instruments used an optical or a mechanical projection device, or a combination of them, to simulate the imaging process and to intersect the 3-D positions of the objects in the images, they were called analog photogrammetric instruments. During the evolution of analog photogrammetry, the main focus of development was on expensive stereoscope instruments.


Figure 2 A stereoscope for analog photogrammetric applications.

Analytical photogrammetry

Analytical photogrammetry began with the invention of the computer in 1950 (Konecny 1985), which allowed digital projections to replace the physical projections (e.g., the optical or mechanical projections) used in analog photogrammetry. Digital projection uses a computer to calculate the 3-D positions of objects in images in real time, based on the colinearity equation. Fundamental to the colinearity equation is a perspective projection, where a point in the real world, its image point, and the perspective center of the image lie on one straight line. The colinearity equation is described as:

x − x0 = −f [m11(X − XS) + m12(Y − YS) + m13(Z − ZS)] / [m31(X − XS) + m32(Y − YS) + m33(Z − ZS)]

y − y0 = −f [m21(X − XS) + m22(Y − YS) + m23(Z − ZS)] / [m31(X − XS) + m32(Y − YS) + m33(Z − ZS)]    (2)


This equation represents a direct link between an image point (x, y) and its 3-D position (X, Y, Z) in the object space. (x0, y0) is the principal point (the foot of the perpendicular on the image of the perspective center) and f is the focal length of the camera. (XS, YS, ZS) are the coordinates of the camera center in the object space and mij are the elements of a rotation matrix that is determined by three rotation angles (φ, ω, κ) of the camera frame with respect to the object space. However, as each equation represents a straight line, the conjugate image points from a stereo pair of images need to be measured to compute the object point’s 3-D position. This process is referred to as space intersection. The colinearity equation provided the theoretical foundation for analytical photogrammetry.
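A minimal Python sketch of how equation 2 is evaluated is given below. The sequence chosen for building the rotation matrix mij is one common photogrammetric convention (conventions vary between software packages), and all names and numbers are illustrative assumptions:

```python
import math

def rotation_matrix(phi, omega, kappa):
    """Rotation matrix m_ij of equation 2 from the three angles. The
    rotation order used here (Y, then X, then Z axis) is one common
    convention; other software may use a different sequence."""
    cp, sp = math.cos(phi), math.sin(phi)
    co, so = math.cos(omega), math.sin(omega)
    ck, sk = math.cos(kappa), math.sin(kappa)
    r_phi = [[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]]
    r_omega = [[1.0, 0.0, 0.0], [0.0, co, -so], [0.0, so, co]]
    r_kappa = [[ck, -sk, 0.0], [sk, ck, 0.0], [0.0, 0.0, 1.0]]
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]
    return matmul(matmul(r_phi, r_omega), r_kappa)

def project(X, Y, Z, XS, YS, ZS, m, f, x0=0.0, y0=0.0):
    """Image coordinates (x, y) of ground point (X, Y, Z) via equation 2."""
    dX, dY, dZ = X - XS, Y - YS, Z - ZS
    denom = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    x = x0 - f * (m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ) / denom
    y = y0 - f * (m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ) / denom
    return x, y

# A vertical photograph (all angles zero, m = identity): a camera 1000 m
# above a point offset 100 m in X images it at x = -f*100/(-1000) = f/10.
m = rotation_matrix(0.0, 0.0, 0.0)
print(project(100.0, 0.0, 0.0, 0.0, 0.0, 1000.0, m, 0.15))
```

Space intersection then runs this mapping in reverse: given two such projections of the same point from known camera positions, the two rays are intersected to recover (X, Y, Z).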

In the early 1950s, Everett Merritt published works on analytical photogrammetry. He developed a series of analytical solutions for camera calibration, space resection, interior and exterior orientation (EO), relative and absolute orientation of stereo pairs, and analytical control extension (Doyle 1964). In 1955, Duane Brown developed new approaches for camera calibration and the mathematical formulation of the bundle adjustment. This was a significant development because it involved the simultaneous solution of the EO parameters of the camera and the coordinates of the survey points, along with the interior orientation (IO) and systematic radial lens distortion. In 1957, Uuno Helava developed the first analytical stereoplotter (Konecny 1985). A computer was used to drive the instrument around the stereomodel and to digitally transform the coordinates between the image and the map. Various types of analytical stereoplotters were developed, reaching a climax in the 1980s due to the technological improvements in large-scale integrated chips, personal computers, and interface technology. Figure 3 shows an example of an analytical photogrammetric system.

Figure 3 An example of an analytical photogrammetric system.

Significant technological developments in analytical photogrammetry were also made at this time. For example, the bundle adjustment for large photogrammetric blocks with self-calibration, developed by Duane Brown in the 1960s, improved the accuracy and reliability of photogrammetric adjustments. The direct linear transformation method developed by Sam Karara in 1971 enabled photogrammetric applications with non-metric cameras (Wolf 2001). These developments in technologies and instruments led to immense growth in the application of analytical photogrammetry in various fields during the 1980s.

Digital photogrammetry

The previously described analytical methods required an operator to view the photographs and place floating marks in the correct positions to derive the 3-D information. Digital photogrammetry originated from the search for ways of automating the manual work involved in analytical photogrammetry. For example, the task of placing the floating marks on photographs was replaced by image matching, which involves identifying the conjugate points (points representing the same image texture) on a pair of digital images, from which the 3-D coordinates of the point can be computed. Such processes automate and speed up the extraction of 3-D information from a stereo pair of images. Digital photogrammetry also automated other processes, such as image orientations.

Although digital photogrammetry originated in the 1950s, major research activities did not begin until the 1980s, sparked by significant advances in electronics and computing, such as digital cameras, parallel processing, and increased storage capacity (Schenk 1999). In 1957, Gilbert Louis Hobrough first demonstrated the concept of image correlation on a Kelsh plotter. Due to the technology at the time, the correlation process was analog and hardware was used to compare the gray levels of the images (Schenk 1999). In 1967, Hobrough developed the gestalt photo mapper, an automated orthophotographic system that used the correlations between stereo images. The system consisted of a scanner, correlator, computer system, operator console, and input/output device. Uuno Helava also played a central role in the development of digital photogrammetry, helping to develop digital photogrammetric workstations for the Defense Mapping Agency in 1986. Zhizuo Wang of China presented his ideas and solutions for a fully automatic digital photogrammetric system in 1978 and developed the WUDAMS, a fully automatic digital photogrammetric system (Wang 1998). In the 1990s, the WUDAMS was upgraded to a digital photogrammetric workstation, VirtuoZo, and a new generation, DPGrid, was recently developed based on network computing and cluster parallel processing (Zhang et al. 2011). In the early 2000s, Airbus Defence and Space in France released a new-generation, fully automatic photogrammetric system named PIXEL FACTORY. Using mass parallel computing technology and native open architecture, PIXEL FACTORY is capable of automatically processing vast numbers of images to produce a wide range of 3-D cartographic end products such as digital surface models (DSMs), digital terrain models (DTMs), and TrueOrtho images.

3-D from imagery via photogrammetry

Deriving 3-D information from imagery via photogrammetry actually involves the reversal of the photographic process. If the position and orientation of the photographic rays at the moment the images were taken can be recovered, the reverse process can be achieved so that the 3-D information can be derived from the image. The process of recovering the photographic rays’ position and orientation information is called image orientation.


Image orientation

Image orientation includes interior orientation (IO) and exterior orientation (EO). The former derives the relationship between the image measurement and the image-space coordinate systems, while the latter derives the relationship between the image-space and object-space coordinates. The parameters derived from the image orientation enable the 3-D positions of objects to be calculated from their corresponding image points through the aforementioned colinearity equation.

The interior orientation relates the coordinates measured on the image to those of the object to be measured. To do so, it is necessary to establish the location of the principal point (x0, y0) in the image-space coordinate system, as illustrated in Figure 4. The principal point (x0, y0) and the camera focal length f are referred to as the IO parameters in the colinearity equation, as they are the camera’s intrinsic parameters and, thus, do not change when the location and orientation of the camera change.

Taking aerial images as an example, normally there are at least four fiducial marks distributed in the four corners of the image. These marks have known coordinates in the image-coordinate system; these are measured and used as observations. From these observations, a transformation between the observation and image coordinate systems is computed, which can then be used to determine the principal point (x0, y0) in the image-space coordinate system and transform other coordinates measured on the image to the image-space coordinate system.
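The text does not name the transformation model; a 2-D affine transform is a common choice. The sketch below (an assumption for illustration) solves the six affine parameters exactly from three fiducial marks; with four or more marks the same model would be estimated by least squares:

```python
def affine_from_fiducials(measured, calibrated):
    """Exact 2-D affine transform x' = a*x + b*y + c, y' = d*x + e*y + g
    from three point pairs (e.g. measured vs. calibrated fiducial
    coordinates). With four or more fiducials, the same six parameters
    would be estimated by a least-squares adjustment instead."""
    def solve3(pts, rhs):
        # Cramer's rule for the 3x3 system [x y 1][a b c]^T = rhs
        (x1, y1), (x2, y2), (x3, y3) = pts
        q1, q2, q3 = rhs
        det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
        a = (q1 * (y2 - y3) - y1 * (q2 - q3) + (q2 * y3 - q3 * y2)) / det
        b = (x1 * (q2 - q3) - q1 * (x2 - x3) + (x2 * q3 - x3 * q2)) / det
        c = (x1 * (y2 * q3 - y3 * q2) - y1 * (x2 * q3 - x3 * q2)
             + q1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c
    a, b, c = solve3(measured, [p[0] for p in calibrated])
    d, e, g = solve3(measured, [p[1] for p in calibrated])
    return a, b, c, d, e, g

# Illustrative: a scale of 2 in x and 3 in y, plus a (10, 20) shift.
print(affine_from_fiducials([(0, 0), (1, 0), (0, 1)],
                            [(10, 20), (12, 20), (10, 23)]))
```

Once estimated, the same six parameters transform every other measured coordinate into the image-space system.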

The images taken by cameras may have distortions, as illustrated in Figure 4, that need to be estimated and calibrated for accurate measurement. In the case of film-based images, distortions may occur because of lens distortion and other factors, such as stretching or shrinkage of the film due to handling, processing, or storage. Distortions in digitally recorded images may occur because of lens distortion, dissimilar pixel spacing, or differences in the pixel dimensions of the imaging device. Ultimately, the shape of an object measured on the image must be the same as that of the image when it was recorded. This can be achieved by adding distortion correction parameters to the left side of the colinearity equation in equation 2. The camera distortion parameters also belong to the IO parameters.
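Radial lens distortion is commonly modeled with a polynomial in the radial distance from the principal point. A minimal sketch, in which the model, the sign convention, and the coefficient values are all illustrative (real coefficients come from camera calibration):

```python
def correct_radial_distortion(x, y, k1, k2, x0=0.0, y0=0.0):
    """Remove radial lens distortion from a measured image point using
    a common polynomial model, with relative correction k1*r^2 + k2*r^4.
    k1 and k2 are calibration coefficients; the values used below are
    made up for illustration."""
    xb, yb = x - x0, y - y0          # offsets from the principal point
    r2 = xb * xb + yb * yb
    scale = k1 * r2 + k2 * r2 * r2   # relative radial correction
    return x - xb * scale, y - yb * scale

# A point at radius 1.0 with k1 = 0.1 is pulled 10% toward the center.
print(correct_radial_distortion(1.0, 0.0, 0.1, 0.0))
```

Applying such a correction before evaluating the colinearity equation is one way of realizing the distortion terms mentioned above.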

Figure 4 (a) Image IO parameters f and (x0, y0); (b) image distortions.

The image IO parameters are normally calibrated separately through a special control field with precisely measured targets as ground truth, or through self-calibration approaches that incorporate the IO parameters into a photogrammetric bundle adjustment process so that they can all be solved together and simultaneously with other unknowns.

There are six EO parameters describing the relationship between the image’s and the object’s coordinate systems: the aforementioned three rotation angles (φ, ω, κ), which describe the angular relationships, and (XS, YS, ZS), which describe the location of the point of exposure of the image in the object coordinate system. These exterior orientation parameters can be derived in one of three ways:

1 direct space resection;
2 relative orientation followed by absolute orientation;
3 simultaneous orientation by bundle adjustment.

1. Direct space resection. Based on the colinearity equation (equation 2), if one point in the object space and its corresponding point in the image space are known (called the control point), they contribute two observations. If three control points are available, the six EO parameters can be solved. In practice, four or more control points are normally used to calculate the EO parameters for improved accuracy, using a least-squares adjustment. This direct space resection method is normally used to determine the EO parameters of single images. For a stereopair or an image block, the exterior orientation parameters are derived by one of the other two methods because they require fewer control points.

2. Relative orientation and absolute orientation. Relative orientation is used to establish the relationship between two images without knowing about the object. Relative orientation is based on the coplanarity condition that two image points on a stereopair, the perspective centers of the two images, and the object point lie on the same plane (Figure 5). It assumes that the orientation and position of the left image are fixed, and the relative relationship between the left and right images can be determined by the five relative orientation parameters by, bz, ω, ϕ, and κ (assuming bx = 1). ω, ϕ, and κ are the rotation angles that are needed to make the right image coordinate system parallel to the left image coordinate system. by and bz are the translations required to correctly position the right image’s perspective center.

With the coplanarity condition, each pair of conjugate points identified on the stereo images generates one observation. At least five conjugate points are required for a unique solution of the five relative orientation parameters. Normally, six points are used for this process, including the two principal points and four other points distributed on either side of the principal point along the y axis of the images. These are often referred to as von Gruber points. Once the five relative orientation parameters have been obtained, a 3-D model of the imaged scene is established. At this time, however, the 3-D model is not accurately scaled and its coordinates are based on an arbitrary coordinate system.
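The coplanarity condition behind these observations can be written as a scalar triple product that vanishes when the two image rays and the camera base lie in one plane. A minimal sketch, with an assumed ray convention (image rays written as (x, y, -f) in each camera frame; all names are illustrative):

```python
def coplanarity_residual(b, R, pl, pr):
    """Scalar triple product pl . (b x R*pr): zero exactly when the two
    image rays and the camera base are coplanar. b is the base vector
    (bx, by, bz), R the 3x3 rotation of the right image, and pl, pr the
    image rays in each camera's frame."""
    Rpr = [sum(R[i][j] * pr[j] for j in range(3)) for i in range(3)]
    cx = b[1] * Rpr[2] - b[2] * Rpr[1]
    cy = b[2] * Rpr[0] - b[0] * Rpr[2]
    cz = b[0] * Rpr[1] - b[1] * Rpr[0]
    return pl[0] * cx + pl[1] * cy + pl[2] * cz

# With an x-only base and no rotation, two rays that differ only in x
# are coplanar (residual 0); a right ray offset in y is not.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(coplanarity_residual((1, 0, 0), I, (0, 0, -1), (1, 0, -1)))
print(coplanarity_residual((1, 0, 0), I, (0, 0, -1), (0, 1, -1)))
```

In relative orientation, residuals of this form for the five or more conjugate point pairs are driven to zero by adjusting by, bz, ω, ϕ, and κ.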

Figure 5 Illustration of the coplanarity condition of a stereopair.

Before the 3-D model derived from the relative orientation can be used for measurement, it must be scaled and oriented to the object coordinate system. This procedure is called absolute orientation. Absolute orientation is a 3-D conformal transformation that converts the model coordinates obtained during the relative orientation into correctly oriented mapping coordinates. There are seven parameters in this 3-D conformal transformation, including three rotation angles Rω, Rϕ, and Rκ, three translation components TX, TY, and TZ, and a scale factor s. At least three control points with known horizontal and vertical positions are necessary to achieve a result. The accuracy of absolute orientation depends on the quality of the relative orientation and the accuracy of the control points.
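The forward form of the 7-parameter conformal transformation can be sketched as follows. This fragment only applies known parameters; in absolute orientation the seven parameters themselves are estimated from the control points, and the rotation conventions here are illustrative:

```python
import math

def conformal_3d(p, s, r_omega, r_phi, r_kappa, t):
    """Apply a 7-parameter 3-D conformal (similarity) transformation:
    rotations about the X, Y, and Z axes, a uniform scale s, and a
    translation t = (TX, TY, TZ)."""
    x, y, z = p
    # rotate about X by r_omega
    y, z = (y * math.cos(r_omega) - z * math.sin(r_omega),
            y * math.sin(r_omega) + z * math.cos(r_omega))
    # rotate about Y by r_phi
    x, z = (x * math.cos(r_phi) + z * math.sin(r_phi),
            -x * math.sin(r_phi) + z * math.cos(r_phi))
    # rotate about Z by r_kappa
    x, y = (x * math.cos(r_kappa) - y * math.sin(r_kappa),
            x * math.sin(r_kappa) + y * math.cos(r_kappa))
    return (s * x + t[0], s * y + t[1], s * z + t[2])

# Model point (1, 0, 0) rotated 90 degrees about Z, scaled by 2,
# and shifted by (10, 0, 0) lands at (10, 2, 0).
print(conformal_3d((1.0, 0.0, 0.0), 2.0, 0.0, 0.0, math.pi / 2,
                   (10.0, 0.0, 0.0)))
```

Each control point supplies three equations in the seven unknowns, which is why three control points suffice for a least-squares solution.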

The relative and absolute orientations can be performed on an individual stereopair or on complete image blocks covering a large area. In the latter case, the term aerial triangulation is often used to describe the procedure.

3. Simultaneous orientation by bundle adjustment. An alternative to the relative/absolute orientation methods is bundle adjustment. This method is based on the principle that, given a tie point in an image, it is possible to produce an observation based on the aforementioned colinearity equation. This observation represents an optical ray originating from the measured image point that goes through the center of the camera to the ground point. From the conjugate tie points identified on a stereo pair of images or multiple images in a block, a bundle of optical rays defined by the tie points can connect to the images themselves to form an image network and, thus, connect the images and object space. Ideally, the optical rays from the same conjugate tie points on different images should intersect at exactly the same ground point in the object space; however, in reality this may not be the case (Figure 6a) due to various uncertainties and errors. Therefore, bundle adjustment is used to adjust and solve the accurate image orientation parameters so that the corresponding rays intersect correctly (Figure 6b). This process is basically a least-squares adjustment based on the colinearity equation.

Figure 6 (a) The optical rays from the conjugate tie points do not intersect at the same ground point due to various uncertainties and errors, and (b) the improved results after bundle adjustment.

Given a few 3-D control points and tie points identified on the images, the image orientation parameters and the object coordinates of tie points can be computed simultaneously through bundle adjustment. This involves a simultaneous resection and intersection process. The process may be applied either on a stereopair-by-stereopair basis, or on the entire image block. In the latter case, the term block adjustment is often used to describe the procedure. In the stereopair-by-stereopair case, there need to be at least three 3-D control points in the overlap area of each stereopair. Observation of the image-space coordinates of each control point in the overlap area allows the image orientation parameters to be computed for both images. For an image block, a few 3-D control points and a number of conjugate tie points identified from the images are necessary, and all of the orientation parameters for the images are computed at once. The entire image block is processed in a single homogeneous coordinate system.

In the bundle adjustment process, different weights can be assigned to different observations depending on their a priori precisions. The residuals of all parameters can also be computed and used to evaluate the performance of the bundle adjustment, which is an important advantage of this method.
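The weighting scheme can be sketched as the weighted sum of squared residuals that the least-squares adjustment minimizes; the sigma values below are illustrative a priori standard deviations:

```python
def weighted_sse(residuals, sigmas):
    """The quantity a weighted least-squares bundle adjustment
    minimizes: each residual is divided by the a priori standard
    deviation of its observation, so precise measurements (small sigma)
    carry more weight in the solution than uncertain ones."""
    return sum((r / s) ** 2 for r, s in zip(residuals, sigmas))

# Two residuals of equal size: the one from the more precise
# observation (sigma 0.5) dominates the adjusted cost.
print(weighted_sse([0.2, 0.2], [0.5, 2.0]))
```

The same normalized residuals are what is inspected afterwards to evaluate the quality of the adjustment.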

Automatic solutions in digital photogrammetry

Along with the development of digital photogrammetry since the 1980s, significant advances have been made in digital image processing; these have been applied to almost every aspect of photogrammetry, from image acquisition to image processing. These developments have also facilitated the automation or semi-automation of photogrammetric processing in modern digital photogrammetric workstations. The automation of digital photogrammetry strongly depends on the automation of image matching and the aforementioned IO and EO processes.

Automatic image matching

Image matching is the process of finding conjugate image correspondences (points or patterns) in the overlapping regions of two or more digital images. The process is based on either examining and matching the grey levels of small portions (image patches) of both images in a stereopair, or matching an image patch with an image template. The matching may be done on a pixel-by-pixel basis (area-based matching) or by examining and matching the individual features of the image patches (feature-based matching). The most important applications of image matching are the automatic IO and EO of images, and the automatic creation of DTMs from multiple images. For the former, only a small number of reliable image correspondences with a favorable distribution are required, whereas for the latter, dense and reliable image correspondences are necessary.

Tremendous work has been done on image matching in the fields of photogrammetry and computer vision. The most straightforward method is normalized cross correlation (NCC), which directly assesses the degree of agreement between two local image windows by cross-correlation of their grey levels (Lhuillier and Quan 2002). An important development in image matching is the scale invariant feature transform (SIFT) method (Lowe 2004). SIFT detects points of interest based on local 3-D extrema in the scale-space pyramid that are invariant over a wide set of transformations, and matches the points according to descriptors defined by their gradient distributions in the detected local regions. SIFT provides automatic robust matching results even in the presence of scale changes and distortions. However, SIFT only provides sparse matching results. A representative method of area-based matching is semi-global matching (SGM) (Hirschmuller 2008), which combines the concepts of global and local stereo methods for pixel-wise matching. SGM approximates the cost of the global aggregation of matching from a number of 1-D cost paths, which provides accurate dense matching results. A representative method of feature-based matching is self-adaptive triangulation-constrained matching (SATM) (Wu, Zhang, and Zhu 2011), which uses triangulations to constrain the matching of feature points and edges. An important characteristic of SATM is that the triangulations are dynamically updated along with the matching process by inserting the newly matched points and edges into the triangulations. The most distinctive features are always successfully matched first, so that the densification of triangulations automatically self-adapts to the changes in image texture, and provides robust constraints to generate dense and reliable matching results. Figure 7 shows an example of SATM’s results for the automatic matching of Mars ground images acquired by NASA’s Mars Exploration Rover, Opportunity, and the generated DTM. SATM was also used to generate high-precision lunar DTMs for selecting the landing site for the Chinese Chang’E-3 lunar exploration mission (Wu, Hu, and Guo 2014).
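The NCC measure mentioned above is simple enough to sketch directly. For brevity this fragment flattens each grey-level window to a 1-D list, whereas a real matcher slides a 2-D window across the search image and keeps the position of the highest score:

```python
import math

def ncc(window_a, window_b):
    """Normalized cross correlation of two equally sized grey-level
    windows. Returns +1 for windows that are identical up to a linear
    brightness/contrast change, and -1 for perfect anti-correlation."""
    n = len(window_a)
    ma = sum(window_a) / n
    mb = sum(window_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(window_a, window_b))
    var_a = sum((a - ma) ** 2 for a in window_a)
    var_b = sum((b - mb) ** 2 for b in window_b)
    return cov / math.sqrt(var_a * var_b)

# A window compared with a contrast-stretched copy of itself matches
# perfectly; compared with its reversal it anti-correlates.
print(ncc([10, 20, 30, 40], [25, 45, 65, 85]))   # 1.0
print(ncc([10, 20, 30, 40], [40, 30, 20, 10]))   # -1.0
```

The invariance to linear brightness and contrast changes is what makes NCC a robust, if simple, similarity measure for area-based matching.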

Automatic image orientation

IO and EO are the fundamental orientation procedures in analog and analytical photogrammetry. In digital photogrammetry, for example in an interactive setting using digital photogrammetric workstations, the orientation tasks are essentially performed in the same way as on an analytical plotter. This section focuses on automatic orientation procedures, highlights their important differences, and discusses some solutions for digital photogrammetry.

Figure 7 (a) A Mars satellite image showing the mapping area; (b) a stereopair of ground images of the area taken by the Mars Rover, with the matching results marked in red; (c) another stereopair showing large perspective changes and the matching results; and (d) the generated DTM from the matching results.

Automatic IO is the starting point of the automation chain. For aerial images from scanned film, automatic measurement of the fiducial marks is the key step in automatic IO. Images from a metric camera contain at least four fiducial marks in the corners. Template matching can be used for this task. The process is driven by a structure description of the template constructed for a fiducial mark, which is automatically matched against the fiducial marks in the image. The autonomous process requires a general and robust solution to accommodate the different types of fiducial marks. Subpixel localization of the fiducial marks should also be emphasized, as the pixel size is likely to be larger than the expected precision of the fiducial centers.
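A minimal sketch of fiducial-mark localization by template matching follows. It assumes a brute-force normalized cross-correlation search (kept as plain loops for clarity) and a parabolic fit of the correlation peak as one common way to reach subpixel precision; production systems use more elaborate, structure-driven schemes.

```python
import numpy as np

def locate_fiducial(image, template):
    """Brute-force NCC template search with parabolic subpixel refinement.
    Returns the (row, col) of the matched template's center.  Sketch only."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    rows, cols = image.shape[0] - th + 1, image.shape[1] - tw + 1
    scores = np.full((rows, cols), -1.0)
    for r in range(rows):
        for c in range(cols):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            if denom > 0:
                scores[r, c] = (wz * t).sum() / denom  # NCC in [-1, 1]
    r, c = np.unravel_index(np.argmax(scores), scores.shape)

    def parabolic_offset(a, b, cc):
        # vertex of the parabola through three equally spaced samples
        d = a - 2.0 * b + cc
        return 0.0 if d == 0 else 0.5 * (a - cc) / d

    dr = parabolic_offset(scores[r - 1, c], scores[r, c], scores[r + 1, c]) \
        if 0 < r < rows - 1 else 0.0
    dc = parabolic_offset(scores[r, c - 1], scores[r, c], scores[r, c + 1]) \
        if 0 < c < cols - 1 else 0.0
    return r + dr + (th - 1) / 2.0, c + dc + (tw - 1) / 2.0
```

The refinement step matters precisely because, as noted above, the pixel size is typically larger than the precision expected of the fiducial centers.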

The development of automatic relative orientation has primarily focused on image matching to identify conjugate points. The image matching methods described earlier can be used to find conjugate points automatically, and the existing point-based algorithms developed to determine the orientation parameters in analytical photogrammetry can then be applied. At least five conjugate points are necessary to determine the parameters. In practice, dozens or hundreds of matched conjugate points with a favorable distribution are normally used to identify the relative orientation parameters using least-squares adjustment. To determine the EO parameters of single images through automatic space resection, and of stereopairs through automatic absolute orientation, it is critical to automatically establish the relationship between the image and object spaces. Schenk (1999) discussed the theory of using linear features (e.g., road boundaries) or surface features (available from existing DTMs or from laser altimetry) as entities in the adjustment procedure, to compute the EO parameters automatically. More recently, Tommaselli and Berveglieri (2014) presented a more practical solution for automatic EO. They used a special camera attached to a GPS receiver to collect panoramic images in nadir view, while simultaneously collecting the 3-D coordinates of control points using the GPS. The panoramic images were then automatically matched with the aerial or satellite images to identify the locations of the control points in the latter and compute the EO parameters.
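The least-squares recovery of two-view geometry from matched conjugate points can be illustrated with the normalized eight-point algorithm for the fundamental matrix, the standard computer-vision formulation of the same relative-orientation problem. This is a hedged sketch, not the photogrammetric coplanarity adjustment itself; it assumes noise-free pixel coordinates and at least eight matches.

```python
import numpy as np

def eight_point_fundamental(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix from
    n >= 8 conjugate image points (illustrative sketch)."""
    def normalize(x):
        # Hartley normalization: zero mean, average distance sqrt(2)
        m = x.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(x - m, axis=1))
        T = np.array([[s, 0.0, -s * m[0]],
                      [0.0, s, -s * m[1]],
                      [0.0, 0.0, 1.0]])
        xh = np.column_stack([x, np.ones(len(x))])
        return (T @ xh.T).T, T

    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))
    # each conjugate pair contributes one row of the design matrix A f = 0
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    # least-squares solution: right singular vector of smallest singular value
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # enforce the rank-2 constraint on F
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1  # undo the normalization
    return F / np.linalg.norm(F)
```

With redundant, well-distributed matches the over-determined system is solved by SVD, mirroring the least-squares adjustment over dozens or hundreds of conjugate points described above.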
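Absolute orientation, the fitting of the similarity transform (scale, rotation, translation) that maps model coordinates onto ground control points, also has a closed-form least-squares solution via the SVD of the cross-covariance matrix (the Horn/Umeyama construction). A compact sketch, assuming matched model and control point lists:

```python
import numpy as np

def absolute_orientation(model, control):
    """Closed-form least-squares similarity transform (scale s, rotation R,
    translation t) mapping model points onto control points, via SVD."""
    A = np.asarray(model, float)
    B = np.asarray(control, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - ca, B - cb                      # center both point sets
    U, S, Vt = np.linalg.svd(A0.T @ B0)          # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = (S * np.diag(D)).sum() / (A0 * A0).sum() # optimal scale
    t = cb - s * R @ ca
    return s, R, t
```

Given the recovered parameters, any model point p maps to ground coordinates as s * R @ p + t; in practice the control points come from GPS or other ground survey, as in the Tommaselli and Berveglieri approach.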

Future trends in photogrammetry

There is a tremendous worldwide demand for 3-D data, yet the methods for generating 3-D data from imagery are still not fully automated and remain relatively slow. The development of a more automated photogrammetric process for deriving 3-D data from various types of images is a challenge. High redundancy (e.g., with every ground point visible in about 10 images) may play a major role in the design of fully automatic photogrammetric methods; for example, the UltraCam digital aerial mapping system has eight lenses, providing high imaging redundancy. Multi-image matching with high redundancy ensures reliable matching and enables high-quality 3-D reconstructions to be produced entirely automatically.

Laser scanning has been popular since the 1990s, as it enables much faster delivery of 3-D information than traditional photogrammetric techniques. It has even been argued that laser scanning may replace traditional photogrammetry in the future. However, in the past few years, reliable and automatic image matching has become one of the most active research areas in both the photogrammetric and the computer vision communities. The generation of accurate and dense 3-D information from multi-angled images from multiple sensors with a high level of automation is now a reality. A typical example is the "Building Rome in a Day" project by Microsoft Research. The development of oblique photogrammetry in recent years also offers encouraging solutions for 3-D city data generation and modeling. Photogrammetry and laser scanning have distinct characteristics that render them preferable for certain applications. The respective advantages and disadvantages of the two techniques suggest that their integration would provide better performance than can be achieved by either method alone. Wu, Hu, and Guo (2014) provided an example of integrating lunar imagery and laser altimeter data for consistent and precise lunar topographic mapping.

The timeliness of photogrammetry is currently attracting considerable attention. In applications such as industrial measurement and real-time monitoring, timeliness is the key to success, and the development of real-time (or quasi-real-time) photogrammetry has become an urgent task. Compared with traditional photogrammetric techniques, the research and development of real-time photogrammetry must overcome certain challenges, such as the full automation of photogrammetric image interpretation; the development of embedded algorithms, chips, and hardware systems for real-time image processing; and the development of new real-time photogrammetric sensors and multisensor collaborative technologies.

In addition, there is a trend of integrating methods developed for computer vision, such as shape-from-shading, shape-from-shadow, and structure-from-motion, into photogrammetry to allow better derivation of 3-D information from images. Regardless of the challenges ahead, photogrammetry remains the most complete, economical, portable, flexible, and widely used approach for deriving 3-D information. With the further developments mentioned here, photogrammetry has a bright future.

SEE ALSO: Geodesy; Geographic information system; Optical remote sensing

References

Birdseye, C.H. 1940. "Stereoscopic Phototopographic Mapping." Annals of the Association of American Geographers, 30(1): 1–24.

Collier, P. 2002. "The Impact on Topographic Mapping of Developments in Land and Air Survey: 1900–1939." Cartography and Geographic Information Science, 29(3): 155–174.

Doyle, F. 1964. "The Historical Development of Analytical Photogrammetry." Photogrammetric Engineering, XXX(2): 259–265.

Hirschmuller, H. 2008. "Stereo Processing by Semiglobal Matching and Mutual Information." IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2): 328–341.

Konecny, G. 1985. "The International Society for Photogrammetry and Remote Sensing: 75 Years Old, or 75 Years Young?" Photogrammetric Engineering and Remote Sensing, 51(7): 919–933.

Lhuillier, M., and L. Quan. 2002. "Match Propagation for Image-Based Modeling and Rendering." IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(8): 1140–1146.

Lowe, D.G. 2004. "Distinctive Image Features from Scale-Invariant Keypoints." International Journal of Computer Vision, 60(2): 91–110.

Meyer, R. 1987. "100 Years of Architectural Photogrammetry." Kompendium Photogrammetrie, vol. XIX, 183–200. Leipzig: Akademische Verlagsgesellschaft.

Schenk, T. 1999. Digital Photogrammetry. Laurelville, OH: TerraScience.

Tommaselli, A., and A. Berveglieri. 2014. "Automatic Orientation of Multi-Scale Terrestrial Images for 3-D Reconstruction." Remote Sensing, 6: 3020–3040.

Wang, Z. 1998. "The Research Program of the Full Digitized and Automatic Mapping System." Journal of Wuhan Technical University of Surveying and Mapping, 23(4): 287–293.

Wolf, P. 2001. "Houssam Mahmoud Karara Memorial Address." Photogrammetric Engineering and Remote Sensing, 67(7): 811–815.

Wu, B., H. Hu, and J. Guo. 2014. "Integration of Chang'E-2 Imagery and LRO Laser Altimeter Data with a Combined Block Adjustment for Precision Lunar Topographic Modeling." Earth and Planetary Science Letters, 391: 1–15.

Wu, B., Y. Zhang, and Q. Zhu. 2011. "A Triangulation-Based Hierarchical Image Matching Method for Wide-Baseline Images." Photogrammetric Engineering and Remote Sensing, 77(7): 695–708.

Zhang, J., Z. Zhang, T. Ke, et al. 2011. "Digital Photogrammetry Grid (DPGrid) and Its Application." In Proceedings of the ASPRS 2011 Annual Conference, Milwaukee, WI, May 1–5.
