TERRENCE J. KEATING, Ph.D.
PAUL R. WOLF, Ph.D.
FRANK L. SCARPACE, Ph.D.
University of Wisconsin
Madison, WI 53706

An Improved Method of Digital Image Correlation*

Scanned densitometric data and epipolar geometric principles are used to correlate corresponding imagery from stereoscopic film negatives.

ABSTRACT: An improved digital image-correlation system has been designed with practicality and efficiency as the major considerations. The imagery is represented within the computer as discrete density values spatially located using a scanning microdensitometer. The system is designed to better use the inherent geometric relationships between the image planes and object space in searching for conjugate imagery on overlapping photographs. Initially, two-dimensional density difference algorithms and enlarged search areas are used to correlate passpoint imagery needed to compute the relative orientation parameters of the photographic system. The epipolar geometric relationships for the stereopair are then calculated and used to better estimate the respective locations of conjugate imagery. Searching for corresponding imagery along epipolar lines, which contain only x-parallax, reduces search time significantly and improves the chances for successful correlation. Elevations are simply interpolated from match-point locations which determine the amount of image point x-parallax along these lines. The horizontal positions of final object space coordinates then are determined independent of any residual y-parallax contaminating the system.

INTRODUCTION

ONE OF THE major concerns of photogrammetrists manipulating digital data is a practical and efficient method for reducing the tremendous overburden of digits into a useful format. This study is directed specifically towards developing an efficient digital image-correlation procedure for identifying and locating conjugate imagery of overlapping aerial photographs. All external hardware normally required for photographic orientation, image correlation, and ultimate representation is replaced by a general-purpose digital computer.

A description is given of improved correlation strategies that exploit the perspective geometric relationships found in aerial photography. Advantages of perspective epipolar geometry are included within the analytical reduction software to better balance computational and accuracy tradeoffs. Certainly the maximum efficiency of a totally automated image-matching system that generates three-dimensional terrain coordinates depends upon much more than geometric considerations. Other important factors include use of interactive programming, selection of appropriate density difference functions, and statistical reliability of a chosen match-point. These and other factors, however, are not discussed in this paper.

* Presented at the Annual Convention of the American Society of Photogrammetry, Washington, DC, March 1975.
PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING, 1975
OVERVIEW
Before explaining various specific stages in the image-matching system, it is important to give a synopsis of the overall matching philosophy and offer enough perspective to eventually link the individual parts to the whole. Figures 4 and 7, relating to two-dimensional and one-dimensional searching criteria respectively, are helpful in visualizing this summary. In this paper, targets are considered as arrays of density values surrounding a specific point in the left photograph of the stereopair, and search areas (always larger than target areas) are located in the right photograph. Search areas are two-dimensional when they are dimensioned larger than the target in directions of both x-parallax and y-parallax, and are one-dimensional when they are dimensioned larger only in the direction of x-parallax. Density difference algorithms such as the statistical cross-correlation coefficient are formulas devised to numerically compare a target array in the left photograph to an equal-sized array somewhere within a search area on the right photograph, in an attempt to decide whether or not both arrays are describing the same point.

Two basic steps are employed in image matching. In the first step, sufficient numbers of points within selected passpoint locations are matched to attain a high degree of relative orientation between photographs. This matching requires the use of two-dimensional density difference algorithms and somewhat enlarged two-dimensional search areas because little prior information regarding conjugate image locations is known. After relative orientation, the second step involves calculating the epipolar geometric relationships inherent within the stereo system. This effectively eliminates the guesswork concerning y-parallax, and image-matching may efficiently revert to searching routines along lines containing only x-parallax. Conjugate epipolar lines are determined and straight-line interpolations replace bulky equations to determine terrain coordinates. Furthermore, the horizontal locations of target points are computed independent of residual y-parallax to improve match point accuracies.

COLLINEARITY EQUATIONS

Because of the extensive usage of the collinearity equations throughout the image-matching processes described in this paper, they will be written in general form here.

Left Photo:

x_L = -f [m11(Xp - XC_L) + m12(Yp - YC_L) + m13(Zp - ZC_L)] / [m31(Xp - XC_L) + m32(Yp - YC_L) + m33(Zp - ZC_L)]   (1)

y_L = -f [m21(Xp - XC_L) + m22(Yp - YC_L) + m23(Zp - ZC_L)] / [m31(Xp - XC_L) + m32(Yp - YC_L) + m33(Zp - ZC_L)]   (2)

Right Photo:

x_R = -f [m'11(Xp - XC_R) + m'12(Yp - YC_R) + m'13(Zp - ZC_R)] / [m'31(Xp - XC_R) + m'32(Yp - YC_R) + m'33(Zp - ZC_R)]   (3)

y_R = -f [m'21(Xp - XC_R) + m'22(Yp - YC_R) + m'23(Zp - ZC_R)] / [m'31(Xp - XC_R) + m'32(Yp - YC_R) + m'33(Zp - ZC_R)]   (4)

In the collinearity equations, f is the focal length, the m's are rotation parameters of the left photograph, and primed m's are rotation parameters for the right photograph. In addition, x and y refer to photographic fiducial axis coordinates, and XC, YC, and ZC to the exposure station coordinates of either the left or right photograph. Figure 3 illustrates the condition of collinearity as it applies in this paper. Equations 1, 2, 3, and 4 are arbitrarily written to express the object space axis system in the datum plane. The X model axis is in the vertical plane containing the left photographic fiducial x-axis and the Y model axis is perpendicular to X at the datum nadir point of the left photograph. It should be noted, however, that these equations are easily modified to enable the model coordinate system to assume any convenient orientation and position. In Figure 3, the two lines C_L p P and C_R p' P pass through their respective exposure stations and image points and meet at
P, a common object point. Model coordinates Xp, Yp, and Zp are common to the four equations and serve to link the left image to that of the right. The collinearity equations will be continually referenced throughout the balance of this paper.
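As a concrete illustration, equations 1 through 4 can be evaluated numerically. The sketch below is not from the paper; the function names and the sequential omega-phi-kappa rotation convention are illustrative assumptions.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation matrix M = Mk @ Mp @ Mo (angles in radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Mk @ Mp @ Mo

def collinearity(f, m, C, P):
    """Project object point P through exposure station C (equations 1-4)."""
    d = m @ (np.asarray(P, float) - np.asarray(C, float))
    x = -f * d[0] / d[2]
    y = -f * d[1] / d[2]
    return x, y
```

For a truly vertical photograph (identity rotation matrix) this reduces to the familiar scale relation x = f(Xp - XC)/(ZC - Zp).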
INPUT CONSIDERATIONS
The first input consideration is to adequately describe a pair of overlapping aerial photographs to the computer. The procedure initially requires a conversion of the analog photographic input imagery to a numerical or digital equivalent format. This conversion is made with a scanning microdensitometer, a device which scans the black-and-white (or color) film negative with a select spot of light whose grid position on the film is known and recorded. The ratio of the light intensity transmitted through the film compared with the clear air signal is analogically detected, logarithmically amplified, and then converted to a corresponding digital integer value. The range of numerical values and spacing of grid sampling points are chosen to offer desired image fidelity and positional accuracy commensurate with the expected usage of the end product.
SCAN DIRECTION
For the image-matching system under consideration, scanning in a direction parallel to the airbase on each negative is very advantageous, although not necessary. The effect of this scan configuration, shown in Figure 1, is to minimize kappa rotation in later orientation, thus allowing scan lines to lie nearly parallel to epipolar lines. Optimally, scans along epipolar lines (an epipolar line is the line of intersection between the plane of the photograph and a plane containing the airbase and an object point) are most efficient, but this requires knowledge of the relative orientation parameters and specialized scanning equipment. Figure 2 shows the scanned portion of two photographs scanned approximately parallel to the airbase. The advantage of this scan configuration will be made clear later.
CONJUGATE PASSPOINT ESTIMATOR
The principal and conjugate principal points are useful for approximating the relative orientation of the stereopair, thus assisting in pinpointing search area locations for selected passpoint targets. In Figure 1, parameters B1, B2, D1, and D2 are measured with an engineer's scale. These are then used to estimate the location of the unknown orientation parameters of the right photographic exposure station with respect to the left. Values for D1 or D2 are negative if either lies below the fiducial x-axis. The equations to determine kappa, XC, YC, and ZC of the right exposure station are derived from the parallax equations assuming vertical photography and are given below without proof.

Kappa = tan^-1 [2(D1 + D2) / (B1 + B2)]   (5)
XC_R = XC_L + [(B1 + B2) / 2] (H/f)   (6)
YC_R = YC_L + (D2)(H/f)   (7)
ZC_R = ZC_L (B1/B2)   (8)

In these equations, f is the focal length and H is an assumed flying height above datum to be used in relative orientation.
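Equations 5 through 8 are simple enough to sketch directly. The function below is a hypothetical helper, not the authors' software; it assumes the left exposure station coordinates are known and returns kappa in radians.

```python
import math

def estimate_right_station(B1, B2, D1, D2, H, f, XCL, YCL, ZCL):
    """Approximate right exposure station parameters from equations 5-8
    (vertical photography assumed)."""
    kappa = math.atan(2 * (D1 + D2) / (B1 + B2))   # equation 5
    XCR = XCL + ((B1 + B2) / 2) * (H / f)          # equation 6
    YCR = YCL + D2 * (H / f)                       # equation 7
    ZCR = ZCL * (B1 / B2)                          # equation 8
    return kappa, XCR, YCR, ZCR
```

With equal bases (B1 = B2) and no y-offsets (D1 = D2 = 0), the estimate reduces to a pure translation along the flight line of B(H/f), as expected for ideal vertical photography.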
FIG. 1. Desirable scan line orientations as they relate to the photographic principal and conjugate principal points.
FIG. 2. Two-dimensional density array configuration of a pair of overlapping aerial photographs. Scanner initial points are at the lower left corner with coordinates (1,1) for each photograph.
An estimated ground elevation for each target passpoint, the approximate exposure station parameters between temporarily assumed vertical photographs, and the collinearity equations are all that are necessary to compute approximate locations of conjugate passpoints. Knowing x_L, y_L, and Zp for a specific desired passpoint location, an inverse solution involving two equations and two unknowns is made for Xp and Yp in equations 1 and 2. Then, using equations 3 and 4 with the approximated right exposure parameters as knowns, image coordinates x_R and y_R are determined. This technique serves the image-matching system well as a search location estimator. The same idea, it will be shown, is used after the refined relative orientation parameters are known to determine epipolar lines. Attention first is turned to performing passpoint correlation and solving for these refined relative orientation parameters.
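The estimator just described is a two-step computation: invert equations 1 and 2 for Xp and Yp at the assumed elevation, then evaluate equations 3 and 4 directly. A minimal sketch, with illustrative helper names and the paper's sign convention, follows; for a fixed Z, the inverse problem is linear in X and Y.

```python
import numpy as np

def inverse_collinearity(f, m, C, x, y, Z):
    """Solve equations 1-2 for model (X, Y) given image (x, y) and elevation Z.
    Rearranging x = -f(m1.d)/(m3.d) gives x(m3.d) + f(m1.d) = 0, which is
    linear in dX, dY once dZ = Z - ZC is fixed."""
    dz = Z - C[2]
    A = np.array([[x * m[2, 0] + f * m[0, 0], x * m[2, 1] + f * m[0, 1]],
                  [y * m[2, 0] + f * m[1, 0], y * m[2, 1] + f * m[1, 1]]])
    b = -dz * np.array([x * m[2, 2] + f * m[0, 2],
                        y * m[2, 2] + f * m[1, 2]])
    dX, dY = np.linalg.solve(A, b)
    return C[0] + dX, C[1] + dY

def estimate_conjugate(f, mL, CL, mR, CR, xL, yL, Z):
    """Predict right-photo coordinates of the passpoint imaged at (xL, yL)."""
    X, Y = inverse_collinearity(f, mL, CL, xL, yL, Z)
    d = mR @ (np.array([X, Y, Z]) - np.asarray(CR, float))
    return -f * d[0] / d[2], -f * d[1] / d[2]
```

For two vertical photographs separated only by an airbase in X, the predicted conjugate point shifts purely in x, which is the expected pure x-parallax behavior.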
TWO-DIMENSIONAL IMAGE CORRELATION

It is important to provide for a strong relative orientation in order to accurately determine the parameters for searching along epipolar lines. For this reason considerable time is spent during the search for conjugate passpoints to assure desired accuracy. The system presently accepts up to 60 passpoints, and provides several built-in rejection criteria to safeguard against poorly matched passpoints. The relative orientation program is fully partitioned, allowing easy passpoint rejection, and allows for the solution of many passpoints at little additional computational burden. Redundant information is utilized to offset the inherent limitation of describing discrete points as arrays representing, at best, a spot area on the ground.
Figure 3 shows the optimum passpoint locations chosen internally within the program. Each circle represents a group of passpoints, and five of the six standard passpoint locations have two groups of passpoints indicated. Within each group, any number of passpoint targets can be selected and matched to serve in relative orientation. If, for example, there are 11 passpoint group areas selected, and four passpoint targets are chosen within each area, the system will have 44 passpoints available for the solution of exposure station orientation parameters.

FIG. 3. Passpoint selection and relative orientation.
SEARCHING IN TWO-DIMENSIONAL SPACE
Figure 4 depicts the two-dimensional search routine operation, using as an example a five-row-by-five-column passpoint target selected from the passpoint group located near the center of the left photograph. This target represents an image point 1242 scans and 469 elements from the scanner initial point (shown in Figure 2) on the left photograph. The estimated search area location, assumed to contain the conjugate target image, is indexed in the lower left corner at 1140 scans and 144 elements of the right photograph. The search area is 15 rows by 15 columns in this example, but normally can be on the order of 50 rows by 50 columns with a maximum of 100 rows by 100 columns. The search area need not be square.
The target contains integers representing the actual gray-shade densities surrounding the circled center point. The search area, although represented by blank squares, is also filled with integer densities representing image densities in the right photograph. For each column and row lag position, imagine the target being superimposed upon the search area. Lag position is the number of columns and the number of rows that the lower left target element is offset from the lower left search area element. The shaded portion of the search area represents the target at column lag position eight and row lag position eight. At each position, corresponding density integer values are compared by using a density difference algorithm. That lag position indicating the strongest similarity in density and arrangement is chosen as the conjugate match point. (The match acceptance or rejection criteria, although extremely important, will not be discussed.)
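The lag search can be sketched in a few lines, here using the normalized cross-correlation coefficient as the density difference algorithm. This is an illustrative simplification, not the paper's software, and it omits the acceptance and rejection criteria mentioned above.

```python
import numpy as np

def best_lag(target, search):
    """Slide the target over the search array; return the (row_lag, col_lag)
    giving the highest normalized cross-correlation coefficient."""
    tr, tc = target.shape
    sr, sc = search.shape
    t = target - target.mean()
    best_r, best_pos = -2.0, None
    for row in range(sr - tr + 1):
        for col in range(sc - tc + 1):
            w = search[row:row + tr, col:col + tc]
            wc = w - w.mean()
            denom = np.sqrt((t * t).sum() * (wc * wc).sum())
            if denom == 0:
                continue               # flat window: correlation undefined
            r = (t * wc).sum() / denom
            if r > best_r:
                best_r, best_pos = r, (row, col)
    return best_pos, best_r
```

Because the coefficient is normalized, the match is insensitive to overall brightness differences between the two photographs, which is one reason the paper singles out this class of algorithm.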
The use of passpoints in groups allows easier detection of passpoint mismatches. Often when one passpoint is mismatched, the entire group is suspect. The relative orientation is able to reject individual passpoints or entire groups of passpoints until orientation to the desired accuracy is achieved. After the parameters for the exposure stations have been determined, the system is ready to revert to one-dimensional searching in order to match the remaining imagery. The idea of epipolar geometry is discussed next to introduce concepts necessary for providing this convenience.
FIG. 4. Search routine in two-dimensional space. A 5-by-5 target array lagged within a 15-by-15 search array is depicted.
EPIPOLAR GEOMETRY
Image comparison is greatly simplified if one keeps in mind the geometric configurations present between image planes and object space. These relationships are easily applied by using specialized perspective geometric techniques that some photogrammetrists call coplanarity and others call epipolar geometry. The relationships important to this discussion are shown in Figure 5. In this simplified sketch, the epipolar axis is another term for the airbase, the line joining exposure stations. Recall that the condition of collinearity requires the exposure station, image point, and object point to lie along the same straight line. In Figure 5, these collinear lines form the edges of the epipolar plane. The epipolar plane, which depicts the condition of coplanarity, contains by definition the epipolar axis, the image points p and p', and the corresponding object point P. The lines of intersection between the epipolar plane and the two photographic image planes are called conjugate epipolar lines. With these terms in mind, the utility of these relationships can be explained.
The most important concept of epipolar geometry to be realized in image-matching is that conjugate imagery is always found along conjugate epipolar lines, regardless of photographic orientation. Exactly where a conjugate image point is found along the conjugate epipolar line is dependent only upon the elevation of the object point. Points situated on epipolar lines are void of y-parallax. What remains along any epipolar line is x-parallax, which of course determines elevation.

FIG. 5. An epipolar look at coplanarity.
An explanation of the method used to locate conjugate epipolar lines using the collinearity equations will help explain the advantages of epipolar geometry, and serves to introduce the techniques involved in one-dimensional searching methods.
ONE-DIMENSIONAL IMAGE CORRELATION
If a conjugate point is known to lie along a predetermined line, the search routine becomes one-dimensional. The reduced search area saves huge amounts of computational repetitiveness and often provides for a more reliable match-point.
EPIPOLAR LINE DETERMINATION
The determination of conjugate epipolar lines is explained with the aid of Figure 6. In the procedure used, only a simple manipulation of the collinearity equations is needed to generate conjugate epipolar lines. This approach is preferred over other methods because a single targeted image point can locate conjugate epipolar lines.
The steps for locating conjugate epipolar lines, given a single target point and matrix methods of solution (refer to Figure 6), are:

(1) Given the relative orientation parameters of the left photograph and the x and y image coordinates of target point a, first solve the collinearity equations 1 and 2 inversely for the X and Y coordinates of point A1, using an elevation Z1. The solution involves two equations and two unknowns. Next choose a new model elevation Z2 and again solve for the model coordinates of point a, at model position A2. Note that a change in elevation affects the horizontal model position of image point a.

(2) Given the relative orientation parameters of the right photograph and the now-known model coordinates of A1, solve collinearity equations 3 and 4 directly for the right photographic image coordinates of a'1. Point a'1 by definition lies along an epipolar line conjugate to a yet undetermined line in the left photograph which contains a. In similar fashion, determine the right image coordinates of a'2 using model point A2 and its model coordinates.

(3) The coordinates of right photographic image points a'1 and a'2 describe the right epipolar line containing the conjugate point of a as chosen in the left photograph. This line is at the intersection between the epipolar plane containing A1 and A2 and the right photograph.

(4) What remains now is to describe an additional point in the left photograph to give direction to the left epipolar line passing through point a. This is done by working the process in reverse. Starting at point a'2 in the right photograph, we could easily relocate point a by determining model coordinates of A2 using elevation Z2 and equations 3 and 4 inversely. Then we could solve for image coordinates of a using equations 1 and 2 directly. However, reassigning for image point a'2 a model elevation of Z1 and solving equations 3 and 4 inversely locates point B across from A1. Using these model coordinates and equations 1 and 2 directly locates a second image point b along the epipolar line passing through the original point a.

FIG. 6. Method for determining conjugate epipolar lines.

Elevations Z1 and Z2 control the location of the endpoints used to describe the conjugate epipolar lines. The elevation initially assigned to Z1 corresponds to that of a nearby passpoint elevation and later depends upon previously determined image model elevations to properly locate the epipolar endpoints. Elevation Z1 in conjunction with Z2 serves to describe only that length of epipolar lines which is needed to span localized portions of the respective images. All targets along the left epipolar line can be matched to their respective conjugate images along the right conjugate epipolar line before new lines need be computed. In general, the number of sets of conjugate epipolar lines that must be generated roughly corresponds to the number of scans needed to cover the photograph.
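Steps 1 through 3 above can be sketched end to end. The helper names below are illustrative, not the authors' software; the inverse solution exploits the fact that, for a fixed elevation, equations 1 and 2 are linear in the model X and Y.

```python
import numpy as np

def invert(f, m, C, x, y, Z):
    """Inverse collinearity: model (X, Y) for image (x, y) at elevation Z."""
    dz = Z - C[2]
    A = np.array([[x * m[2, 0] + f * m[0, 0], x * m[2, 1] + f * m[0, 1]],
                  [y * m[2, 0] + f * m[1, 0], y * m[2, 1] + f * m[1, 1]]])
    b = -dz * np.array([x * m[2, 2] + f * m[0, 2],
                        y * m[2, 2] + f * m[1, 2]])
    dX, dY = np.linalg.solve(A, b)
    return C[0] + dX, C[1] + dY

def project(f, m, C, P):
    """Direct collinearity: image (x, y) of model point P."""
    d = m @ (np.asarray(P, float) - np.asarray(C, float))
    return -f * d[0] / d[2], -f * d[1] / d[2]

def right_epipolar_line(f, mL, CL, mR, CR, a, Z1, Z2):
    """Steps 1-3: two right-photo points spanning the epipolar line
    conjugate to left-photo target point a."""
    A1 = invert(f, mL, CL, a[0], a[1], Z1)   # step 1, elevation Z1
    A2 = invert(f, mL, CL, a[0], a[1], Z2)   # step 1, elevation Z2
    a1p = project(f, mR, CR, (A1[0], A1[1], Z1))   # step 2
    a2p = project(f, mR, CR, (A2[0], A2[1], Z2))   # step 2
    return a1p, a2p                                 # step 3: the line a1p-a2p
```

For two vertical photographs with the airbase along X, the two computed endpoints share the same y coordinate, i.e. the epipolar line is parallel to the scan direction, which is exactly the configuration the scan-direction section aims for.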
SEARCHING IN ONE-DIMENSIONAL SPACE
Although epipolar lines are used to determine target and search centerpoint locations, the arrays used to describe these locations are formed by elements located along scan lines. Figure 7 is a schematic of targeting and searching in one-dimensional space. The example target contains one row of five density values and is centered along the target epipolar line at 1242 scans and 469 elements from the scanner initial point. The search area is incremented along the conjugate epipolar search line and is made up of densities along that scan line closest to the search center. Since the scans were oriented nearly parallel to the airbase, there is good agreement between epipolar lines and scan lines. Excessive jumping across scan lines during searches becomes a burden when indexing future search areas for subsequent points.

FIG. 7. Search routine in one-dimensional space. The array centers are determined by the epipolar lines but arrays are along original scan lines.
After the chosen target's conjugate point is matched using density algorithms, the target is incremented one column position, the search area is re-centered one column position over from the previous successful match point, and a new match point is determined.
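The one-dimensional pass along a scan line can be sketched as below, again using the cross-correlation coefficient on 1-D arrays. The names and the simple windowing logic are illustrative assumptions; the real system re-centers each search on the previous successful match as described above.

```python
import numpy as np

def match_1d(target, scan_line, center, half_width):
    """Slide a 1-D target along scan_line within +/- half_width elements of
    the estimated center; return the best-matching center index."""
    n = len(target)
    half = n // 2
    t = target - target.mean()
    best_r, best_idx = -2.0, None
    lo = max(center - half_width, half)
    hi = min(center + half_width, len(scan_line) - (n - half))
    for c in range(lo, hi + 1):
        w = scan_line[c - half : c - half + n]
        wc = w - w.mean()
        denom = np.sqrt((t * t).sum() * (wc * wc).sum())
        if denom == 0:
            continue
        r = (t * wc).sum() / denom
        if r > best_r:
            best_r, best_idx = r, c
    return best_idx
```

Compared with the two-dimensional routine, the work per point drops from (search rows x search columns) correlation evaluations to a single row of them, which is the computational saving the paper attributes to epipolar searching.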
The success of this pure one-dimensional search routine depends upon the exactness of the relative orientation parameters. If an excess of y-parallax exists from relative orientation, the true search area may be other than along the conjugate epipolar line. To check for y-parallax, several targets are selected near each endpoint of the target epipolar line. These points are then searched in two-dimensional mode within search areas which include the conjugate epipolar line. If the chosen image match-points near each of the endpoints of the computed search epipolar line are not along the line, y-parallax exists. This y-parallax is eliminated either by slightly shifting the conjugate epipolar line or by requesting an update on the relative orientation parameters. For those situations involving minor y-parallax, the target is expanded to contain perhaps three scans, but remains centered on the left epipolar line. The search area is expanded to accommodate a two-dimensional target but is still incremented only in the direction of x-parallax.
SEPARATION OF MODEL X AND Y FROM Z IN THE SYSTEM

If excessive y-parallax is present in the model due to poor relative orientation, it would be convenient to prevent this parallax from contaminating the resultant terrain coordinates. In relative orientation, for example, as exposure station parameters are determined, remaining unknown model points are also being solved using equations 1, 2, 3, and 4, which determine model coordinates Xp, Yp, and Zp. Thus any residual y-parallax will contaminate the solution of unknown points. This can be avoided if the elevation determination is separated from X and Y.

FIG. 8. Elevation interpolation from epipolar lines.

FIG. 9. Effects of spot size, x-parallax, and y-parallax on model coordinates. Spot size and elevation: dZ = ns(Bf) / [(x - x')(x - x' - ns)]. X-parallax: dX = dZ(x/f), dY = dZ(y/f). Y-parallax: dX = 0, dY = 0, dZ = 0.
In the one-dimensional image-matching system only the left photograph determines the Xp and Yp model coordinates. The elevation is determined based upon the location of the conjugate point along the conjugate epipolar line in the right photograph. With the elevation Zp determined, the collinearity equations 1 and 2 can be solved inversely to determine where the image ray intersects the model space at Zp.
Figure 8 shows how a difference in elevation is determined solely from position along epipolar lines. Fortunately the ultimate elevation of a selected match point can be interpolated knowing the elevation Zp1 used to determine one endpoint of the conjugate epipolar line. Using the basic law of projectivity, that being the theory of the cross-ratio, it can be seen that elevation differences along a collinear line extending from the left photograph are related to distances along the search epipolar line where the match point is ultimately determined. Once a match point is determined, its corresponding elevation is determined by simple ratio. Using this elevation in equations 1 and 2, which describe the line from the left exposure station through the target point to where it intercepts the model at Zp, the horizontal model coordinates are obtained.
Although y-parallax is effectively prevented from further contaminating the spatial location of spots imaged on the left photograph, the same is not true when mismatches occur in the direction of x-parallax. Figure 9 shows the effects of both x-parallax and y-parallax, expressed in terms of the number of spot sizes of displacement (ns), upon the model X, Y, and Z coordinates. In the figure, B is the distance between exposure stations and f is the focal length. The image target coordinates x and y, as well as the conjugate match-point x' value, are needed to locate the point within the model space. The error expressions given are approximate in that they assume vertical photography.
Elevation differences caused by x-parallax as seen in Figure 9 (center) affect X and Y of the model depending upon the location of the image in the left photograph. The effect of elevation on horizontal positioning is most severe at the edges of the model area. Note, for example, that the ground spatial coordinates of the left photographic principal point image are unaffected by errors in elevation. Figure 9 (right) indicates that although y-parallax is present, its effect is eliminated in the determination of the model coordinates. Since the X and Y coordinates are determined from the left exposure station parameters only, and y-parallax has no effect on elevation differences, this statement is true. However, as is often the case, differences in y-parallax in one-dimensional search methods could lead to mismatches of imagery along the direction of x-parallax. Expanding target sizes while retaining one direction of search is a reasonable compromise to this problem.
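The approximate error expressions of Figure 9 are easily evaluated. The function below is a hypothetical helper that assumes vertical photography, converting a mismatch of ns spot sizes in the x-parallax direction into model coordinate errors.

```python
def parallax_errors(ns, B, f, x, y, xp):
    """Approximate model errors for an x-direction mismatch of ns spot sizes
    (Figure 9): dZ = ns*B*f / ((x - x')(x - x' - ns)), dX = dZ*(x/f),
    dY = dZ*(y/f). A pure y-direction mismatch contributes no model error
    in this scheme."""
    p = x - xp                         # x-parallax of the match point
    dZ = ns * B * f / (p * (p - ns))   # elevation error
    return dZ * (x / f), dZ * (y / f), dZ
```

Note that dX and dY scale with x/f and y/f, confirming the observation above that a point imaged at the left principal point (x = y = 0) suffers no horizontal error from an elevation error, while points at the model edges suffer the most.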
CONCLUSION
Many other comments and suggestions regarding strategies in digital image-matching can be made concerning computational shortcuts, mass storage requirements, density difference algorithms, other perspective geometrical advantages, utility of the system, application, cost, etc. This paper, however, is limited only to an explanation of some basic geometrical considerations for achieving a practical solution to image matching on densitized aerial photographs. Other solutions to the same problem have been used and are working. They possess various advantages and disadvantages in comparison to the method presented herein.

The final evaluation of any image-matching system can only be made after implementation into a useful applications system. Terrain coordinates derived from this or any other system must meet the cost and accuracy requirements dictated by the ultimate user.
ACKNOWLEDGMENT
This particular image-matching system has been developed for application in automated shoreline erosion studies. The research was sponsored by the University of Wisconsin Sea Grant Program, part of the National Sea Grant Program maintained by the National Oceanic and Atmospheric Administration of the United States Department of Commerce.