Backward discrete wave field propagation modeling as an inverse problem: toward perfect reconstruction of wave field distributions

Vladimir Katkovnik,* Artem Migukin, and Jaakko Astola
Signal Processing Institute, University of Technology of Tampere, P.O. Box 553, Tampere, Finland
*Corresponding author: [email protected]

Received 15 December 2008; revised 14 April 2009; accepted 17 April 2009; posted 22 April 2009 (Doc. ID 105309); published 11 June 2009

We consider reconstruction of a wave field distribution in an input/object plane from data in an output/diffraction (sensor) plane. We provide digital modeling both for the forward and backward wave field propagation. A novel algebraic matrix form of the discrete diffraction transform (DDT) originated in Katkovnik et al. [Appl. Opt. 47, 3481 (2008)] is proposed for the forward modeling that is aliasing free and precise for pixelwise invariant object and sensor plane distributions. This "matrix DDT" is a base for formalization of the object wave field reconstruction (backward propagation) as an inverse problem. The transfer matrices of the matrix DDT are used for calculations as well as for the analysis of conditions when the perfect reconstruction of the object wave field distribution is possible. We show by simulation that the developed inverse propagation algorithm demonstrates an improved accuracy as compared with the standard convolutional and discrete Fresnel transform algorithms. © 2009 Optical Society of America

OCIS codes: 070.2025, 100.3010, 100.3190.
1. Introduction
A wave field reconstruction from intensity and phase measurements is one of the basic problems in digital wave field imaging and holography [1]. In a typical holography scenario, one distinguishes input/object and output/sensor planes (Fig. 1). The object plane is a source of light radiation and reflection propagating along the optical axis. The sensor plane is parallel to the object plane, with a distance z between the planes.

A contribution of this paper concerns digital modeling of the forward and backward wave field propagation. The discrete diffraction transform (DDT) proposed in [2] links discrete pixelated values of the object and sensor distributions. It is a so-called discrete-to-discrete modeling that is aliasing free and accurate for a pixelwise invariant object distribution and a pixelated sensor. In this paper we generalize and develop this model in two important
aspects. First, the pixels in the object and sensor planes can be of different sizes. Second, instead of the frequency domain DDT (F-DDT) exploited in [2], we introduce an algebraic approach based on a matrix transform of the wave field distributions. This novel matrix DDT is also aliasing free and accurate for the pixelated object and diffraction plane distributions. Furthermore, this matrix DDT does not require double-size image calculations, as is the case in F-DDT. The reconstruction of the object distribution from a distribution given in the sensor plane is formulated as an inverse problem. Depending on the pixel size and the distance between the object and sensor planes, the DDT matrices can become very ill conditioned, which makes the reconstruction of the object distribution difficult or even impossible. The capability of the perfect or good quality reconstruction is well characterized by the rank and conditioning number of the transform matrices of the introduced matrix DDT. The forward matrix transform modeling is a natural and very productive tool to study the limitations of the wave field reconstruction and to develop novel effective algorithms.

0003-6935/09/183407-17$15.00/0
© 2009 Optical Society of America

20 June 2009 / Vol. 48, No. 18 / APPLIED OPTICS 3407
2. Discretization of Backward Wave Field Propagation
In this section we consider a wave field reconstruction based on discretization of the backward propagation integral operator. In this way we arrive at the well known standard techniques as well as introduce some novel ones.
A. Integral Models
Let u_d(y, x) be a complex-valued two-dimensional (2D) wave field defined in the sensor plane z = d of the three-dimensional (3D) space (x, y, z) as a function of the lateral coordinates x and y (see Fig. 1). According to the scalar diffraction theory there is a linear operator that links this sensor wave field distribution with the object wave field u_0(y, x) at z = 0 as u_d(y, x) = D_d{u_0}, where D_d is the diffraction operator D_z with the distance parameter z = d. A thorough theory of this operator representation can be found in [3,4].

The diffraction operator can be given as a convolution,

$$u_z(y,x) = D_z\{u_0\} \triangleq \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g_z(y-\eta,\, x-\xi)\, u_0(\eta,\xi)\, d\eta\, d\xi, \qquad (y,x)\in\mathbb{R}^2, \qquad (1)$$
where the shift-invariant kernel g_z is defined by the first Rayleigh–Sommerfeld solution of the Maxwell–Helmholtz equation [4],

$$g_z(y,x) = z\,\frac{\exp(j 2\pi r/\lambda)}{j\lambda\, r^2}, \qquad r = \sqrt{x^2 + y^2 + z^2}, \qquad z \gg \lambda, \qquad (2)$$
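As an illustration, the kernel (2) is straightforward to sample numerically. The following sketch (the function name `rs_kernel` is ours, not from the paper) evaluates g_z on scalars or NumPy grids:

```python
import numpy as np

def rs_kernel(y, x, z, lam):
    """First Rayleigh-Sommerfeld kernel g_z(y, x) of Eq. (2).

    Valid for z >> lam; y and x may be scalars or NumPy arrays.
    """
    r = np.sqrt(x**2 + y**2 + z**2)
    return z * np.exp(1j * 2 * np.pi * r / lam) / (1j * lam * r**2)

# On the optical axis r = z, so the magnitude reduces to 1/(lam*z).
lam, z = 632.8e-9, 0.02
g00 = rs_kernel(0.0, 0.0, z, lam)
```

On the axis the modulus check |g_z(0, 0)| = 1/(λz) gives a quick sanity test of the implementation.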
where λ is the wavelength.

It is shown in [3] that the operator D_z is invertible, and the inverse operator can also be presented as a convolution with a shift-invariant kernel. If the diffraction wave field u_z(y, x) is given, the wave field in the object plane z = 0 can be reconstructed using the inverse operator D_z^{-1} as

$$u_0(\eta,\xi) = D_z^{-1}\{u_z\} \triangleq \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g_{-z}(\eta-y,\, \xi-x)\, u_z(y,x)\, dx\, dy. \qquad (3)$$

The term diffraction transform is used in [3] for the forward D_z and backward D_z^{-1} operators. Thus the reconstruction problem is reduced to calculation of the inverse transform and the corresponding wave field.
B. Convolutional Discrete Models
Discretization of the integral diffraction transforms defined by the formulas (1) and (3) is a straightforward idea for deriving digital models of the forward and backward wave field propagation. This discrete modeling is the subject of many publications, and discrete Fourier transform (DFT) based techniques are very popular. While a review of this area is beyond the scope of this paper, we wish to mention that discrete space domain modeling for holography is discussed in detail in [1], and the accuracy of the frequency domain approach can be found in [5]. The DFT based algorithm with a detailed accuracy analysis is presented in [6]. A number of recent developments concern continuous and discrete Fresnel transforms and their multiresolution versions (e.g., [7–11]).

Two spatial domain approaches are the most popular for the derivation of discrete models of wave field propagation: the DFT convolution and the Fresnel transform.
The discrete convolutional models, as sampled versions of the formulas (1) and (3), are of the form

$$u_z(\Delta_{y,z} s, \Delta_{x,z} t) = \sum_{k,l} u_0(\Delta_{y,0} k, \Delta_{x,0} l)\, g_z(\Delta_{y,z} s - \Delta_{y,0} k,\, \Delta_{x,z} t - \Delta_{x,0} l)\cdot \Delta_{y,0}\Delta_{x,0}, \qquad (4)$$

$$\hat{u}_0(\Delta_{y,0} k, \Delta_{x,0} l) = \sum_{s,t} u_z(\Delta_{y,z} s, \Delta_{x,z} t)\, g_{-z}(\Delta_{y,0} k - \Delta_{y,z} s,\, \Delta_{x,0} l - \Delta_{x,z} t)\cdot \Delta_{y,z}\Delta_{x,z}, \qquad (5)$$

where the hat in $\hat{u}_0$ denotes an estimate of u_0, and Δ_{y,0} × Δ_{x,0} and Δ_{y,z} × Δ_{x,z} are the pixel sizes (sampling intervals) in the object and sensor planes, respectively.
Assuming that the pixel sizes are equal for both the object and sensor planes, Δ_{y,z} = Δ_{y,0} = Δ_y and Δ_{x,z} = Δ_{x,0} = Δ_x, we can treat these equations as discrete shift-invariant convolutions and apply the DFT to both sides (e.g., [1,12]):

$$U_z(f_y, f_x) = G_z(f_y, f_x)\, U_0(f_y, f_x), \qquad (6)$$

$$\hat{U}_0(f_y, f_x) = G_{-z}(f_y, f_x)\, U_z(f_y, f_x), \qquad (7)$$
Fig. 1. Principal setup of wave field propagation and reconstruction.
where U_z = DFT{u_z}, U_0 = DFT{u_0}, $\hat{U}_0$ = DFT{$\hat{u}_0$}, and G_z = DFT{g_z} are calculated over the N_y × N_x pixels.

It is shown, in particular in [6], that the models (6) and (7) can be essentially improved if the kernel g_z in Eq. (4) is zero padded to the double size, and all calculations are produced for, respectively, double-size U_z, U_0, $\hat{U}_0$, and G_z. This zero padding of the kernel g_z is applied in [2], Subsection 3.B, as a tool to obtain accurate forward propagation modeling implemented in the frequency domain.
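A minimal sketch of this double-size zero-padding scheme, assuming equal square pixels and the Rayleigh–Sommerfeld kernel (2); the function name and layout are our own illustration, not the authors' implementation:

```python
import numpy as np

def propagate_conv(u0, z, lam, delta):
    """Forward propagation by the discrete convolution (4), with the kernel
    zero padded to double size so that the circular FFT convolution acts as
    a linear one (as advocated in the text for models (6)-(7))."""
    n = u0.shape[0]
    m = 2 * n
    coord = (np.arange(m) - n) * delta               # padded lateral coordinates
    xx, yy = np.meshgrid(coord, coord)
    r = np.sqrt(xx**2 + yy**2 + z**2)
    g = z * np.exp(1j * 2 * np.pi * r / lam) / (1j * lam * r**2)  # Eq. (2)
    u0p = np.zeros((m, m), complex)
    u0p[:n, :n] = u0                                 # double-size zero padding
    uz = np.fft.ifft2(np.fft.fft2(u0p) * np.fft.fft2(np.fft.ifftshift(g)))
    return uz[:n, :n] * delta**2
```

The `ifftshift` centers the kernel so that zero displacement maps to index zero of the padded grid; the top-left n × n block of the result corresponds to the sensor aligned with the object.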
C. Fresnel Discrete Transform
If z ≫ $\sqrt{x^2+y^2}$, the following approximation of the kernel g_z is valid:

$$g_z(y,x) \simeq \frac{\exp(j 2\pi z/\lambda)}{j\lambda z} \exp\!\left[\frac{j\pi}{\lambda z}(x^2 + y^2)\right]. \qquad (8)$$

The approximation (8) is used for calculation of the integrals (1) and (3), for further discrete modeling by discretization of these integrals, and for derivation of the Fresnel transform [1].
These discretization results are summarized in the following proposition.

Proposition 1. I. Provided Eq. (8), the discrete wave field modeling of the forward propagation can be presented in the form

$$u_z(\Delta_{y,z} s, \Delta_{x,z} t) \triangleq \frac{-j \exp\!\left[j\!\left(2\pi z/\lambda + \pi\left((\Delta_{y,z} s)^2 + (\Delta_{x,z} t)^2\right)/(\lambda z)\right)\right]}{\lambda z} \times \sum_{k,l} \left\{\exp\!\left[\frac{j\pi}{\lambda z}\left((\Delta_{y,0} k)^2 + (\Delta_{x,0} l)^2\right)\right] u_0(\Delta_{y,0} k, \Delta_{x,0} l)\right\} \exp\!\left[-j\,\frac{2\pi(\Delta_{y,0}\Delta_{y,z}\, s k + \Delta_{x,0}\Delta_{x,z}\, t l)}{\lambda z}\right] \cdot \Delta_{y,0}\Delta_{x,0} \qquad (9)$$

and the modeling of the backward propagation as

$$\hat{u}_0(\Delta_{y,0} k, \Delta_{x,0} l) \triangleq \frac{j \exp\!\left[-j\!\left(2\pi z/\lambda + \pi\left((\Delta_{y,0} k)^2 + (\Delta_{x,0} l)^2\right)/(\lambda z)\right)\right]}{\lambda z} \times \sum_{s,t} \exp\!\left[-\frac{j\pi}{\lambda z}\left((\Delta_{y,z} s)^2 + (\Delta_{x,z} t)^2\right)\right] u_z(\Delta_{y,z} s, \Delta_{x,z} t) \exp\!\left[j\,\frac{2\pi(\Delta_{y,0}\Delta_{y,z}\, s k + \Delta_{x,0}\Delta_{x,z}\, t l)}{\lambda z}\right] \cdot \Delta_{x,z}\Delta_{y,z}. \qquad (10)$$

II. Let the object and sensor planes be of the same size N_y × N_x and the following conditions be fulfilled:

$$N_y = \frac{\lambda z}{\Delta_{y,0}\Delta_{y,z}}, \qquad N_x = \frac{\lambda z}{\Delta_{x,0}\Delta_{x,z}}. \qquad (11)$$

Then the forward and backward propagations (9) and (10) can be calculated using the DFT:

$$u_z(\Delta_{y,z} s, \Delta_{x,z} t) \triangleq \frac{-j}{\lambda z} \exp\!\left[j\!\left(2\pi z/\lambda + \pi\left((\Delta_{y,z} s)^2 + (\Delta_{x,z} t)^2\right)/(\lambda z)\right)\right] \times \mathrm{DFT}_{k,l}\!\left\{\exp\!\left[\frac{j\pi}{\lambda z}\left((\Delta_{y,0} k)^2 + (\Delta_{x,0} l)^2\right)\right] u_0(\Delta_{y,0} k, \Delta_{x,0} l)\right\}[s,t] \cdot \Delta_{y,0}\Delta_{x,0}, \qquad (12)$$

$$\hat{u}_0(\Delta_{y,0} k, \Delta_{x,0} l) \triangleq \frac{j}{\lambda z} \exp\!\left[-j\!\left(2\pi z/\lambda + \pi\left((\Delta_{y,0} k)^2 + (\Delta_{x,0} l)^2\right)/(\lambda z)\right)\right] \times \mathrm{DFT}^{-1}_{s,t}\!\left\{\exp\!\left[-\frac{j\pi}{\lambda z}\left((\Delta_{y,z} s)^2 + (\Delta_{x,z} t)^2\right)\right] u_z(\Delta_{y,z} s, \Delta_{x,z} t)\right\}[k,l] \times N_y N_x \Delta_{x,z}\Delta_{y,z}, \qquad (13)$$

where

$$\hat{u}_0 = u_0. \qquad (14)$$
The subscripts in DFT_{k,l} and DFT^{-1}_{s,t} show the variables with respect to which these transforms are calculated.

While the formulas (9)–(13) are well known, they are often given in shortened forms where some of the factors (in particular, phase factors) are omitted. For convenience of reference and comparison with the novel algorithms, we present these transforms here in their complete forms.

Conventionally, the term discrete Fresnel transform is applied to the transforms defined by the formulas (12) and (13), valid provided the sampling conditions (11). This transform enables the perfect reconstruction of u_0, $\hat{u}_0 = u_0$.

The more general formulas (9) and (10) are valid as discrete approximations of the integrals (1)–(3) for arbitrary parameter values, i.e., for any size of the object and diffraction planes and of the pixels in these planes. However, in general, if the conditions (11) are not satisfied, $\hat{u}_0$ defined by (10) is not equal to u_0, $\hat{u}_0 \neq u_0$.

The sums in Eqs. (9) and (10) have the form

$$\sum_{s,t} \exp\!\left[\frac{j 2\pi(\Delta_{y,0}\Delta_{y,z}\, s k + \Delta_{x,0}\Delta_{x,z}\, t l)}{\lambda z}\right] w(s,t),$$

where w(s, t) is a function of the integers s and t. These sums are called discrete-time/space Fourier transforms and are treated as the integral Fourier transform of discrete sequences (e.g., [13]). They are not DFTs because the factors of sk and tl in the argument of the exponent are not integers. However, provided the conditions (11), these sums become DFTs, and the fast Fourier transform can be used for calculations.

Let us illustrate the fact that Eq. (10) does not give the perfect reconstruction if the conditions (11) are not fulfilled. Assume that N_y = N_x = 512, Δ_{y,0} = Δ_{y,z} = Δ_{x,0} = Δ_{x,z} = 5 μm, and z = d_f, where the "in-focus" distance d_f is defined according to Eq. (11) as d_f ≜ N_y Δ²_{y,0}/λ = 0.0202 m. In our numerical experiments throughout the paper we assume amplitude modulation of the object distribution, i.e., u_0 is nonnegative real valued, 0 ≤ u_0 ≤ 1, and the phase of u_0 is equal to zero. This amplitude distribution of u_0 is defined by the Baboon test image. Figure 2(a) demonstrates the "perfect" reconstruction of the amplitude distribution for the in-focus distance z = d_f. Now let us assume that the distance is different from d_f, e.g., equal to z = 3d_f. The corresponding reconstruction shown in Fig. 2(b) is far from accurate. It confirms that the backward Fresnel transform is not the inverse of the forward Fresnel transform. Figure 2(c) shows the reconstruction of u_0 for z = 3d_f using the regularized inverse technique M-DDT introduced below. We can see a significant improvement in this reconstruction in comparison with what is achieved in Fig. 2(b).
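Under the sampling condition (11), the pair (12)-(13) is an exact inverse pair, Eq. (14). A compact numerical check of this fact, assuming equal square pixels and the `numpy.fft` conventions (the helper name `fresnel_pair` is ours):

```python
import numpy as np

def fresnel_pair(n=64, delta=5e-6, lam=632.8e-9):
    """Forward/backward discrete Fresnel transforms (12)-(13) under the
    sampling condition (11): z = n*delta**2/lam, equal square pixels.
    A sketch, not the authors' code."""
    z = n * delta**2 / lam                           # so that lam*z = n*delta**2
    idx = np.arange(n) - n // 2                      # centered indices -n/2..n/2-1
    chirp = np.exp(1j * np.pi * (idx * delta)**2 / (lam * z))
    c2 = np.outer(chirp, chirp)                      # separable 2D chirp factor
    amp = np.exp(1j * 2 * np.pi * z / lam) / (1j * lam * z)
    def fwd(u0):
        # Eq. (12): uz = amp * chirp_z * DFT{chirp_0 * u0} * delta^2
        return amp * c2 * np.fft.fft2(c2 * u0) * delta**2
    def bwd(uz):
        # Eq. (13): u0 = conj(amp) * conj(chirp_0) * IDFT{conj(chirp_z) * uz} * n^2 * delta^2
        return np.conj(amp) * np.conj(c2) * np.fft.ifft2(np.conj(c2) * uz) * n**2 * delta**2
    return fwd, bwd
```

The chirp and amplitude factors cancel exactly because λz = NΔ² under (11), so `bwd(fwd(u0))` reproduces u_0 to machine precision, which is exactly the statement (14).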
Proposition 2. The discrete Fresnel transform formulas (9) and (10) can be represented in the following matrix forms:

$$u_z = \mu_z \cdot C_{y,z} \cdot u_0 \cdot C^T_{x,z}, \qquad (15)$$

$$\hat{u}_0 = \mu_0 \cdot C^T_{y,-z} \cdot u_z \cdot C_{x,-z}, \qquad (16)$$

where

$$C_{y,z} = (C_{y,z}[s,s'])_{N_{y,z}\times N_{y,0}}, \qquad C_{x,z} = (C_{x,z}[s,s'])_{N_{x,z}\times N_{x,0}}$$

are the matrices with the elements calculated according to the formulas

$$C_{y,z}[s,s'] = \exp\!\left[\frac{j\pi}{\lambda z}(s\Delta_{y,z} - s'\Delta_{y,0})^2\right], \qquad C_{x,z}[s,s'] = \exp\!\left[\frac{j\pi}{\lambda z}(s\Delta_{x,z} - s'\Delta_{x,0})^2\right],$$

$$\mu_z = \mu\cdot\Delta_{x,0}\Delta_{y,0}, \qquad \mu_0 = \mu^*\cdot\Delta_{x,z}\Delta_{y,z}, \qquad \mu = \frac{\exp(j 2\pi z/\lambda)}{j\lambda z}. \qquad (17)$$
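The transfer matrices (17) and the forward M-DFrT (15) can be sketched directly; function names are ours, and square pixels of possibly different sizes in the two planes are assumed:

```python
import numpy as np

def fresnel_matrix(n_out, n_in, d_out, d_in, z, lam):
    """Transfer matrix of Eq. (17): C[s, s'] = exp(j*pi*(s*d_out - s'*d_in)^2/(lam*z)),
    with centered indices s = -n_out/2..n_out/2-1, s' = -n_in/2..n_in/2-1."""
    s = (np.arange(n_out) - n_out // 2) * d_out
    sp = (np.arange(n_in) - n_in // 2) * d_in
    return np.exp(1j * np.pi / (lam * z) * (s[:, None] - sp[None, :]) ** 2)

def mdfrt_forward(u0, z, lam, d0, dz):
    """Forward M-DFrT, Eq. (15): uz = mu_z * C_y @ u0 @ C_x.T
    (sensor image size taken equal to the object size here)."""
    ny, nx = u0.shape
    mu_z = np.exp(1j * 2 * np.pi * z / lam) / (1j * lam * z) * d0 * d0
    cy = fresnel_matrix(ny, ny, dz, d0, z, lam)
    cx = fresnel_matrix(nx, nx, dz, d0, z, lam)
    return mu_z * cy @ u0 @ cx.T
```

The left matrix acts on rows and the transposed right matrix on columns, exactly the separable structure of (15).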
Fig. 2. Image reconstruction: (a) by the backward Fresnel transform for z = d_f (perfect quality), (b) by the backward Fresnel transform for z = 3d_f (average quality), and (c) by the regularized inverse M-DDT algorithm (quality is improved with respect to (b)).
Accordingly,

$$u_z = (u_z[s,t])_{N_{y,z}\times N_{x,z}}, \qquad u_0 = (u_0[k,l])_{N_{y,0}\times N_{x,0}}$$

are matrices of sizes N_{y,z} × N_{x,z} and N_{y,0} × N_{x,0}, respectively.

The proof of this proposition is elementary, by inserting Eqs. (17) into Eqs. (15) and (16). Note that the derived formulas are valid for nonsquare matrices of arbitrary sizes.

There is a serious difference between the standard Fresnel transforms (12) and (13), valid provided the sampling conditions (11), and the formulas (9) and (10), valid in the much more general case when these sampling conditions are not satisfied. Despite this difference, the term discrete Fresnel transform is used to address both kinds of transforms. However, we use the term matrix (forward and inverse) discrete Fresnel transform (M-DFrT and M-IDFrT) for the transforms defined by the matrix formulas (15) and (16), respectively. In our simulation experiments we use this matrix version of the discrete Fresnel transform.

It follows from Eqs. (15) and (16) that the reconstruction of u_0 from u_z can also be obtained as a solution of Eq. (15). In this way we arrive at a backward propagation modeling quite different from the standard Eq. (16). For instance, for square nonsingular C_{y,z} and C_{x,z} this backward propagation is of the form

$$\hat{u}_0 = \frac{1}{\mu_z}\, C^{-1}_{y,z} \cdot u_z \cdot C^{-T}_{x,z}, \qquad (18)$$

and it is different from Eq. (16) because in general $C^{-1}_{y,z} \neq C^T_{y,-z}$ and $C^{-1}_{x,z} \neq C^T_{x,-z}$. Concerning the invertibility of M-DFrT discussed above, we can see that it holds if and only if $C_{y,z}C^T_{y,-z}$ and $C_{x,z}C^T_{x,-z}$ are identity matrices. This condition is guaranteed under the assumptions (11) and does not hold in the general case.
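The inverse (18) can be demonstrated numerically; the sketch below (helper name ours) imposes the sampling condition (11) so that the matrices are well conditioned, and solves the two matrix systems rather than forming explicit inverses:

```python
import numpy as np

def mdfrt_roundtrip(n=16, delta=5e-6, lam=632.8e-9):
    """Forward M-DFrT (15) followed by the direct inverse (18),
    u0 = (1/mu_z) * Cy^{-1} * uz * Cx^{-T}, for square nonsingular matrices."""
    z = n * delta**2 / lam                          # sampling condition (11)
    s = (np.arange(n) - n // 2) * delta
    c = np.exp(1j * np.pi / (lam * z) * (s[:, None] - s[None, :])**2)  # Cy = Cx
    mu_z = np.exp(1j * 2 * np.pi * z / lam) / (1j * lam * z) * delta**2
    rng = np.random.default_rng(0)
    u0 = rng.random((n, n))
    uz = mu_z * c @ u0 @ c.T                        # Eq. (15)
    x = np.linalg.solve(c, uz / mu_z)               # left-multiply by Cy^{-1}
    u0_hat = np.linalg.solve(c, x.T).T              # right-multiply by Cx^{-T}
    return u0, u0_hat
```

Under (11) the matrix C is unitary up to the factor √N, so the solves are stable and the roundtrip reproduces u_0 to machine precision.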
D. Principal Limitations of the Backward Propagation Integral

In order that the backward propagation integral (3) enable the perfect restoration of the object u_0, the distribution u_z should be given for all x and y, i.e., the sensor in the diffraction plane should be of infinite size. For a finite-size sensor, the perfect restoration of any u_0 is not possible. Thus there is a sort of nonsymmetry between the forward and backward propagation integral operators.

In this discussion and in what follows, the term perfect restoration is a property of an algorithm to give the accurate (precise) reconstruction for any applicable object distribution. Thus the backward diffraction transform is always nonperfect for a finite-size sensor, as it is not able to give the perfect reconstruction for any object distribution, while this can be achieved for some particular distributions. The perfect restoration property should guarantee a precise result for any distribution.

The property of the inverse integral diffraction transform to be nonperfect for a finite sensor is inherited by all discrete restoration algorithms based on approximations of the kernel g_{-z} in Eq. (3).
3. Matrix Discrete Diffraction Transform
The standard techniques discussed in the previous section consider discrete models as approximations of the forward and backward wave field propagation integrals. In the approach originated in [2] and further developed in this paper we follow different ideas. First, we develop an accurate forward discrete modeling that is precise for a class of pixelwise invariant distributions. Second, we reconstruct the object distribution by inverting this precise forward model. We show that under some assumptions the forward model can be inverted, and then we can achieve the perfect reconstruction of any object distribution, of course, from the class of pixelwise invariant distributions. The essential point is that for reconstruction we do not exploit the backward propagation integral (3), which is valid only for the infinite sensor. Instead we use the accurate discrete forward propagation model.

Let us assume that the input of our model is discrete, defined by a pixelwise constant object distribution, and the output is also discrete, as defined by the outputs of the sensor pixels. For this pixelwise invariant distribution, DDT gives an accurate discrete-to-discrete modeling because this model is obtained by the accurate integration of Eq. (1). In this development the standard assumptions concerning sampling and band-limitedness can be omitted as not relevant and replaced by the hypothesis of piecewise constant distributions.

In [2] the DDT is proposed in two different complementary forms: spatial algebraic and frequency domain. The frequency form is exploited for calculations, while the algebraic form is used mainly for interpretation and illustration.

In this paper we present a novel algebraic form of DDT based on two (line- and column-transform) matrices enabling the following goals: forward propagation modeling, calculation of the inverse wave field propagation, and analysis of the conditioning of the DDT forward propagation operator. We name this novel transform the matrix discrete diffraction transform (M-DDT). In order to distinguish this novel DDT from the original one, we rename the DDT introduced in [2] as the frequency discrete diffraction transform. The latter model has been presented assuming the same size square pixels in the input and output planes and square matrices of the input and output data.
A. Forward Propagation Modeling
Let the pixels in the object and sensor planes be rectangular, of sizes (Δ_{y,0} × Δ_{x,0}) and (Δ_{y,z} × Δ_{x,z}), respectively. The sizes of the images in the object and sensor planes measured in pixels can also be different, N_{y,0} × N_{x,0} and N_{y,z} × N_{x,z}. Then the formulas (10)–(13) from [2] are generalized to the following form:

$$u_z(y,x) = \sum_{k=-N_{y,0}/2}^{N_{y,0}/2-1}\ \sum_{l=-N_{x,0}/2}^{N_{x,0}/2-1} u_0[k,l] \int_{-\Delta_{x,0}/2}^{\Delta_{x,0}/2}\! d\xi \int_{-\Delta_{y,0}/2}^{\Delta_{y,0}/2}\! d\eta\ g_z(y - k\Delta_{y,0} + \eta,\ x - l\Delta_{x,0} + \xi),$$

$$u_0[k,l] = u_0(k\Delta_{y,0} + \eta,\ l\Delta_{x,0} + \xi), \qquad |\eta| \le \Delta_{y,0}/2, \quad |\xi| \le \Delta_{x,0}/2, \qquad (19)$$
where the sum is calculated over the N_{y,0} × N_{x,0} pixels of the input rectangular array.

Let the output signal of a sensor's pixel be the average value of the distribution impinging on this pixel:

$$u_z[s,t] = \frac{1}{\Delta_{y,z}\Delta_{x,z}} \int_{-\Delta_{y,z}/2}^{\Delta_{y,z}/2} \int_{-\Delta_{x,z}/2}^{\Delta_{x,z}/2} u_z(s\Delta_{y,z} + \eta',\ t\Delta_{x,z} + \xi')\, d\xi'\, d\eta'. \qquad (20)$$
Inserting Eq. (19) into Eq. (20), we arrive at the space domain DDT introduced as a discrete convolution with a shift-varying kernel a_z:

$$u_z[s,t] = \sum_{k=-N_{y,0}/2}^{N_{y,0}/2-1}\ \sum_{l=-N_{x,0}/2}^{N_{x,0}/2-1} a_z[s,k,t,l]\cdot u_0[k,l], \qquad s = -N_{y,z}/2, \ldots, N_{y,z}/2-1, \quad t = -N_{x,z}/2, \ldots, N_{x,z}/2-1, \qquad (21)$$

where

$$a_z[s,k,t,l] = \frac{1}{\Delta_{y,z}\Delta_{x,z}} \int_{-\Delta_{x,z}/2}^{\Delta_{x,z}/2} \int_{-\Delta_{x,0}/2}^{\Delta_{x,0}/2} d\xi\, d\xi' \int_{-\Delta_{y,z}/2}^{\Delta_{y,z}/2} \int_{-\Delta_{y,0}/2}^{\Delta_{y,0}/2} d\eta\, d\eta'\ g_z(s\Delta_{y,z} - k\Delta_{y,0} + \eta' + \eta,\ t\Delta_{x,z} - l\Delta_{x,0} + \xi' + \xi),$$

$$s = -N_{y,z}/2, \ldots, N_{y,z}/2-1, \quad k = -N_{y,0}/2, \ldots, N_{y,0}/2-1, \quad t = -N_{x,z}/2, \ldots, N_{x,z}/2-1, \quad l = -N_{x,0}/2, \ldots, N_{x,0}/2-1. \qquad (22)$$
The kernel a_z in Eq. (22) is an averaged (pixelwise double-averaged) version of the original kernel g_z in Eq. (1).

For the Fresnel approximation of g_z in Eq. (8), the kernel a_z in Eq. (22) allows the factorization

$$a_z[s,k,t,l] = \mu\, A_y[s,k]\, A_x[t,l], \qquad \mu = \frac{\exp(j 2\pi z/\lambda)}{j\lambda z}, \qquad (23)$$

where

$$A_y[s,k] = \frac{1}{\Delta_{y,z}} \int_{-\Delta_{y,z}/2}^{\Delta_{y,z}/2} \int_{-\Delta_{y,0}/2}^{\Delta_{y,0}/2} \exp\!\left[\frac{j\pi}{\lambda z}(s\Delta_{y,z} - k\Delta_{y,0} + \eta' + \eta)^2\right] d\eta\, d\eta',$$

$$A_x[t,l] = \frac{1}{\Delta_{x,z}} \int_{-\Delta_{x,z}/2}^{\Delta_{x,z}/2} \int_{-\Delta_{x,0}/2}^{\Delta_{x,0}/2} \exp\!\left[\frac{j\pi}{\lambda z}(t\Delta_{x,z} - l\Delta_{x,0} + \xi' + \xi)^2\right] d\xi\, d\xi'. \qquad (24)$$

Inserting Eq. (23) into Eq. (21), we arrive at the formula

$$u_z[s,t] = \mu \sum_{k=-N_{y,0}/2}^{N_{y,0}/2-1}\ \sum_{l=-N_{x,0}/2}^{N_{x,0}/2-1} A_y[s,k]\, u_0[k,l]\, A_x[t,l], \qquad (25)$$

which defines the matrix form of the discrete input-output forward propagation model

$$u_z = \mu \cdot A_y \cdot u_0 \cdot A_x^T. \qquad (26)$$
Note that the calculation of the double integrals in Eq. (24) can be reduced to integration over a single variable using the formulas from Appendix A.

The kernel a_z[s,k,t,l] in Eq. (21) becomes shift invariant, depending on the differences of the arguments s − k and t − l, as soon as the pixels in the object and sensor planes take equal sizes, Δ_{x,z} = Δ_{x,0} = Δ_x and Δ_{y,z} = Δ_{y,0} = Δ_y. In this case the DDT becomes essentially simpler because the matrices in Eq. (26) are symmetric, with elements depending on the differences of the indices only: A_y[s,k] = A_y[s−k] = A_y[k−s], A_x[t,l] = A_x[t−l] = A_x[l−t]. If Δ_y = Δ_x and N_y = N_x, then A_x = A_y.

Formula (26) defines what we call the matrix discrete diffraction transform (M-DDT) with the averaged matrices. The matrices A_y and A_x^T manipulate, respectively, the rows and columns of the matrix u_0. The link between the object and diffraction plane distributions can be written in the operator form u_z = M-DDT{u_0}.

It follows from the definition formulas (19) and (20), as well as from the results presented in [2], that M-DDT is aliasing free and gives the accurate distribution in the sensor plane provided that the object distribution is pixelwise invariant.
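The averaged matrices (24) and the forward model (26) can be sketched as follows. The paper's Appendix A reduces the double integral to a single one; here, as our own simplification, a brute-force m × m midpoint rule over the two pixel apertures is used instead, and the function names are ours:

```python
import numpy as np

def averaged_matrix(n_out, n_in, dz, d0, z, lam, m=8):
    """Averaged M-DDT transfer matrix A_y of Eq. (24), evaluated with a
    simple m x m midpoint rule over the sensor and object pixel apertures."""
    s = (np.arange(n_out) - n_out // 2) * dz        # sensor pixel centers
    k = (np.arange(n_in) - n_in // 2) * d0          # object pixel centers
    t = (np.arange(m) + 0.5) / m - 0.5              # midpoints on (-1/2, 1/2)
    a = np.zeros((n_out, n_in), complex)
    for i in range(n_out):
        for j in range(n_in):
            arg = s[i] - k[j] + dz * t[:, None] + d0 * t[None, :]
            # (1/dz) * double integral = mean over the apertures * d0
            a[i, j] = np.exp(1j * np.pi / (lam * z) * arg**2).mean() * d0
    return a

def mddt_forward(u0, ay, ax, z, lam):
    """Forward M-DDT, Eq. (26): uz = mu * Ay @ u0 @ Ax^T."""
    mu = np.exp(1j * 2 * np.pi * z / lam) / (1j * lam * z)
    return mu * ay @ u0 @ ax.T
```

Since each matrix entry is an average of unit-modulus values scaled by Δ_0, its magnitude never exceeds Δ_0, and refining the quadrature changes the entries only slightly for slowly varying chirps.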
B. Backward (Inverse) Modeling and Perfect Reconstruction

An inverse of M-DDT can be defined as an operator mapping u_z into u_0. This inverse of M-DDT gives the perfect reconstruction of the pixelwise object distribution if the forward operator M-DDT is nonsingular. In this subsection we clarify the assumptions under which M-DDT is nonsingular, analyze the conditioning of this transform, and introduce the regularized inverse algorithm for the ill-conditioned M-DDT.
Using the matrix vectorization technique (e.g., [14]), Eq. (26) can be rewritten in the standard matrix form, explicitly resolved with respect to the variables in the equation, as

$$u_z^{col} = \mu \cdot G \cdot u_0^{col}, \qquad G = A_x \otimes A_y, \qquad (27)$$

where the index (col) means vector variables built from the columns of the corresponding matrix and the symbol ⊗ stands for the Kronecker product of matrices. The dimension of the matrix G is equal to N_{y,z}N_{x,z} × N_{y,0}N_{x,0}, and the vectors $u_z^{col}$ and $u_0^{col}$ have lengths N_{y,z}N_{x,z} and N_{y,0}N_{x,0}, respectively.

Let the size N_{y,z}N_{x,z} of the sensor vector be equal to or larger than the size N_{y,0}N_{x,0} of the object vector; then the least-square estimate $\hat{u}_0^{col}$ of $u_0^{col}$ is a solution of the normal equation

$$G^H u_z^{col} = \mu \cdot G^H G \cdot \hat{u}_0^{col}. \qquad (28)$$
For G = A_x ⊗ A_y we obtain [14]

$$G^H G = (A_x \otimes A_y)^H (A_x \otimes A_y) = (A_x^H \otimes A_y^H)(A_x \otimes A_y) = A_x^H A_x \otimes A_y^H A_y.$$

The matrices $A_x^H A_x$ and $A_y^H A_y$ are Hermitian and have real-valued nonnegative eigenvalues. Let ρ_i, i = 1, …, N_{x,0}, and λ_j, j = 1, …, N_{y,0}, be these eigenvalues, respectively, for $A_x^H A_x$ and $A_y^H A_y$. Then the N_{y,0}N_{x,0} eigenvalues of $G^H G$ are given by the products ρ_i λ_j, i = 1, …, N_{x,0}, j = 1, …, N_{y,0} [14]. It follows that the conditioning number as well as the rank of the matrix $G^H G$ are the products of the conditioning numbers and ranks calculated separately for the matrices $A_x^H A_x$ and $A_y^H A_y$:

$$\mathrm{cond}\, G = \mathrm{cond}\, A_x^H A_x \cdot \mathrm{cond}\, A_y^H A_y, \qquad \mathrm{rank}\, G = \mathrm{rank}\, A_x^H A_x \cdot \mathrm{rank}\, A_y^H A_y. \qquad (29)$$

The conditioning numbers and ranks of $G^H G$ conventionally serve as a measure of the complexity of the numerical solution of the set of normal equations (28). Although the representation in the form of Eqs. (27) and (28) is straightforward for using the standard linear algebra techniques, we prefer to base our algorithms on the matrix representation (26), which allows us to build numerical algorithms using the matrices $A_y^H A_y$ and $A_x^H A_x$ of much smaller dimensions than the dimension of the matrix G. In a similar way, we characterize the complexity of this problem using the ranks of the matrices $A_y^H A_y$ and $A_x^H A_x$ separately.

If the matrices A_y and A_x in Eq. (26) are square and nonsingular, then the perfect reconstruction of u_0 from u_z is achieved with the solution in the obvious form

$$\hat{u}_0 = \frac{1}{\mu}\, A_y^{-1}\, u_z\, A_x^{-T}, \qquad (30)$$

where $\hat{u}_0 = u_0$.
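The factorization of the conditioning number and rank in (29) is easy to verify numerically. In this sketch, random rectangular matrices stand in for A_x and A_y (an illustration with our own data, not the paper's):

```python
import numpy as np

# Check of Eq. (29): for G^H G = (Ax^H Ax) kron (Ay^H Ay), the conditioning
# number and the rank factor into the per-axis quantities.
rng = np.random.default_rng(2)
ax = rng.standard_normal((5, 4)) + 1j * rng.standard_normal((5, 4))
ay = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
hx = ax.conj().T @ ax                     # Ax^H Ax, Hermitian nonnegative
hy = ay.conj().T @ ay                     # Ay^H Ay
ghg = np.kron(hx, hy)                     # = G^H G
cond_prod = np.linalg.cond(hx) * np.linalg.cond(hy)
rank_prod = np.linalg.matrix_rank(hx) * np.linalg.matrix_rank(hy)
```

Because the eigenvalues of the Kronecker product are all pairwise products ρ_i λ_j, the extreme eigenvalues (and hence the conditioning number) and the number of nonzero eigenvalues (the rank) both factor.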
For rectangular object and sensor planes the perfect reconstruction is achieved provided the following assumptions hold:

1. The support of the sensor plane distribution is equal to or larger than the support of the object plane distribution,

$$N_{x,z} \ge N_{x,0}, \qquad N_{y,z} \ge N_{y,0}; \qquad (31)$$

2. A_y and A_x are full rank matrices, i.e.,

$$\mathrm{rank}(A_y) = N_{y,0}, \qquad \mathrm{rank}(A_x) = N_{x,0}. \qquad (32)$$

If the conditions 1 and 2 hold, then the perfect reconstruction is given by the formula

$$\hat{u}_0 = \frac{1}{\mu}\,(A_y^H A_y)^{-1} A_y^H\, u_z\, A_x^* (A_x^T A_x^*)^{-1}, \qquad \hat{u}_0 = u_0, \qquad (33)$$

where $(\cdot)^H$ stands for the Hermitian conjugate, $A_y^H = (A_y^*)^T$.
The numerical study shows that, depending on the parameters, in particular on the distance z, the pixel sizes, and the object and sensor sizes, the matrices A_y and A_x can be extremely ill conditioned. This sort of ill conditioning means that these matrices are numerically singular, and the formulas (30) and (33), while formally correct, are practically useless. The corresponding calculations cannot be carried out and give unstable results that are highly sensitive to parameter variations, round-off errors, and observation noise.

To address this, we replace the concept of the rank in Eq. (32) by the numerical rank calculated from the singular value spectrum of the matrices, and the perfect reconstruction by the numerically perfect (practically high accuracy) reconstruction.

The algorithm (33) also should be modified. One of the efficient ways to deal with ill conditioning is to involve prior information on the distribution in question. Regularization is one of the conventional techniques for this goal.
Let us consider the standard Tikhonov regularization [15,16]. Instead of the solution of Eq. (26), we look for the regularized estimate of u_0 defined by minimization of the quadratic criterion

$$\hat{u}_0 = \arg\min_{u_0} L, \qquad (34)$$

$$L = \|u_z - \mu A_y u_0 A_x^T\|_F^2 + \alpha^2 \|u_0\|_F^2, \qquad (35)$$

where the quadratic Frobenius matrix norm is defined by the formula $\|u_0\|_F^2 = \sum_{k,l} |u_0(k,l)|^2$.

The regularization penalty $\|u_0\|_F^2$ in L serves to enable a bounded and smooth estimate $\hat{u}_0$ of the object distribution u_0. The regularization parameter α² controls the level of smoothness of the estimate $\hat{u}_0$ [15].

Proposition 3. The regularized inverse $\hat{u}_0$ defined as a minimizer of Eq. (34) is a solution of the linear equation

$$(|\mu| A_y^H A_y + \alpha I)\, \hat{u}_0\, (|\mu| A_x^T A_x^* + \alpha I) - \alpha|\mu|\, \hat{u}_0\, A_x^T A_x^* - \alpha|\mu|\, A_y^H A_y\, \hat{u}_0 = \mu^* A_y^H u_z A_x^*, \qquad \alpha \ge 0. \qquad (36)$$
This proposition is proved in Appendix B.

It is checked numerically that in many cases the second and the third terms on the left-hand side of this equation are not essential and can be dropped, with an approximate solution given in the explicit nonrecursive form

$$\hat{u}_0 = \frac{1}{\mu}\left(A_y^H A_y + \frac{\alpha}{|\mu|} I\right)^{-1} A_y^H\, u_z\, A_x^* \left(A_x^T A_x^* + \frac{\alpha}{|\mu|} I\right)^{-1}. \qquad (37)$$

Comparing the latter equation with Eq. (33), note that the inverses of $A_y^H A_y$ and $A_x^T A_x^*$ are replaced by the inverses of their regularized versions $A_y^H A_y + \frac{\alpha}{|\mu|} I$ and $A_x^T A_x^* + \frac{\alpha}{|\mu|} I$.
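A sketch of the regularized inverse (37), with our own naming, followed by a demo that builds the forward model (26) from nonaveraged matrices B = Δ_0·C at the in-focus distance and inverts it with a small α:

```python
import numpy as np

def mddt_reg_inverse(uz, ay, ax, mu, alpha):
    """Regularized M-DDT inverse, Eq. (37):
    u0 = (1/mu) (Ay^H Ay + (a/|mu|) I)^-1 Ay^H uz Ax^* (Ax^T Ax^* + (a/|mu|) I)^-1."""
    ry = ay.conj().T @ ay + (alpha / abs(mu)) * np.eye(ay.shape[1])
    rx = ax.T @ ax.conj() + (alpha / abs(mu)) * np.eye(ax.shape[1])
    core = np.linalg.solve(ry, ay.conj().T @ uz @ ax.conj())
    return np.linalg.solve(rx.T, core.T).T / mu

# Demo: forward model (26) with well-conditioned matrices, then inversion.
n, delta, lam = 16, 5e-6, 632.8e-9
z = n * delta**2 / lam                    # in-focus distance, condition (11)
s = (np.arange(n) - n // 2) * delta
b = delta * np.exp(1j * np.pi / (lam * z) * (s[:, None] - s[None, :])**2)
mu = np.exp(1j * 2 * np.pi * z / lam) / (1j * lam * z)
rng = np.random.default_rng(0)
u0 = rng.random((n, n))
uz = mu * b @ u0 @ b.T                    # Eq. (26) with Ay = Ax = b
u0_hat = mddt_reg_inverse(uz, b, b, mu, alpha=1e-9)
```

With well-conditioned matrices the small regularization term introduces only a negligible bias, so the estimate coincides with u_0 to high accuracy; for ill-conditioned cases a larger α stabilizes the two matrix solves at the price of more bias.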
The square complex-valued matrices $A_y^H A_y$ (N_{y,0} × N_{y,0}) and $A_x^T A_x^*$ (N_{x,0} × N_{x,0}) are Hermitian, with real-valued nonnegative spectra (eigenvalues) {ρ_1 ≥ ρ_2 ≥ … ≥ ρ_J}, where J is equal to N_{y,0} for $A_y^H A_y$ and to N_{x,0} for $A_x^T A_x^*$. The spectra can be different for these matrices.

The numerical ranks of the matrices $A_y^H A_y$ and $A_x^T A_x^*$ are calculated as the number of eigenvalues of the corresponding matrix that are larger than a given threshold. In our experiments we use the numerical rank definition with the threshold equal to $10^{12}$:

$$\mathrm{rank} = \max_j\,(j : \rho_1/\rho_j < 10^{12}). \qquad (38)$$
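The numerical rank (38) is a one-liner on top of a Hermitian eigendecomposition; the function name is ours:

```python
import numpy as np

def numerical_rank(h, threshold=1e12):
    """Numerical rank in the sense of Eq. (38): the number of eigenvalues
    rho_j of the Hermitian nonnegative matrix h with rho_1 / rho_j below
    the threshold."""
    rho = np.sort(np.abs(np.linalg.eigvalsh(h)))[::-1]   # descending spectrum
    rho = np.maximum(rho, np.finfo(float).tiny)          # guard exact zeros
    return int(np.sum(rho[0] / rho < threshold))
```

For example, an identity matrix has full numerical rank, while a matrix with one eigenvalue 13 orders of magnitude below the largest is numerically rank deficient under this definition.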
For α > 0 the regularized inverse (37) gives a biased estimate of the true distribution. A smaller α means a smaller bias. However, too small α values are not applicable, as the estimate $\hat{u}_0$ can be completely destroyed.

It follows from our numerical experiments that if the numerical rank is defined according to Eq. (38) and the value of this rank is equal to N_{y,0} for $A_y^H A_y$ and to N_{x,0} for $A_x^T A_x^*$, then there is a small α enabling the high accuracy (numerically perfect) reconstruction of the object distribution.

Let the pixel sizes be so small that the averaging in the forward propagation integrals (22) is not essential. We denote the corresponding A_y and A_x matrices without averaging as B_y and B_x, respectively; then

$$B_y[s,k] \triangleq \lim_{\Delta_{y,z},\Delta_{y,0}\to 0} A_y[s,k] = \Delta_{y,0}\cdot\exp\!\left[\frac{j\pi}{\lambda z}(s\Delta_{y,z} - k\Delta_{y,0})^2\right] = \Delta_{y,0}\cdot C_{y,z}[s,k],$$

$$B_x[t,l] \triangleq \lim_{\Delta_{x,z},\Delta_{x,0}\to 0} A_x[t,l] = \Delta_{x,0}\cdot\exp\!\left[\frac{j\pi}{\lambda z}(t\Delta_{x,z} - l\Delta_{x,0})^2\right] = \Delta_{x,0}\cdot C_{x,z}[t,l],$$

$$s = -N_{y,z}/2, \ldots, N_{y,z}/2-1, \quad k = -N_{y,0}/2, \ldots, N_{y,0}/2-1, \quad t = -N_{x,z}/2, \ldots, N_{x,z}/2-1, \quad l = -N_{x,0}/2, \ldots, N_{x,0}/2-1, \qquad (39)$$

where C_{y,z} and C_{x,z} are defined in Eq. (17). Thus

$$B_y = \Delta_{y,0}\cdot C_{y,z}, \qquad B_x = \Delta_{x,0}\cdot C_{x,z}, \qquad (40)$$

i.e., within the scalar factors Δ_{y,0}, Δ_{x,0}, the matrices B_y and B_x are identical to the matrices of the matrix Fresnel transform. We call these B_y and B_x matrices nonaveraged, in order to emphasize that the averaging effect is omitted.
Proposition 4. Let the object and sensor planes be of the same size N_y × N_x with equal pixel sizes, and let the conditions (11) be fulfilled. Then for the nonaveraged matrices

$$\frac{1}{\Delta_{y,0}^2 N_y}\, B_y^H B_y = I_{N_y\times N_y}, \qquad \frac{1}{\Delta_{x,0}^2 N_x}\, B_x^T B_x^* = I_{N_x\times N_x}, \qquad (41)$$

or

$$\frac{1}{N_y}\, C_{y,z}^H C_{y,z} = I_{N_y\times N_y}, \qquad \frac{1}{N_x}\, C_{x,z}^T C_{x,z}^* = I_{N_x\times N_x}. \qquad (42)$$

Equations (41) and (42) are easily derived from Eqs. (39) and (40).
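The relations (42) are easy to confirm numerically; the check below assumes equal square pixels and imposes the sampling condition (11):

```python
import numpy as np

# Check of Eq. (42): under the sampling condition (11), the nonaveraged
# Fresnel matrix C_{y,z} is unitary up to the factor sqrt(N),
# i.e. (1/N) C^H C = I.
n, delta, lam = 32, 5e-6, 632.8e-9
z = n * delta**2 / lam                    # condition (11)
s = (np.arange(n) - n // 2) * delta
c = np.exp(1j * np.pi / (lam * z) * (s[:, None] - s[None, :])**2)
gram = c.conj().T @ c / n                 # should be the identity matrix
```

Under (11) the phase differences in the off-diagonal Gram sums run through full periods of the complex exponential, so they cancel exactly and only the diagonal survives.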
Proposition 4 proves that, provided the conditions (11), the matrices $A_y^H A_y$ and $A_x^T A_x^*$ to be inverted in Eq. (37) become well conditioned, do not require any regularization (α = 0), and can be inverted analytically, with

$$\hat{u}_0 = \frac{1}{\mu}\,(B_y^H B_y)^{-1} B_y^H\, u_z\, B_x^* (B_x^T B_x^*)^{-1} = \frac{1}{\mu\,\Delta_{y,0}^2\Delta_{x,0}^2 N_x N_y}\, B_y^H u_z B_x^* = \frac{1}{\mu\,\Delta_{y,0}\Delta_{x,0} N_x N_y}\, C_{y,z}^H u_z C_{x,z}^*. \qquad (43)$$
It can be checked using Eq. (11) that Eq. (43) is equivalent to the discrete Fresnel backward propagation (16). Thus, under these conditions, M-DDT with the nonaveraged matrices becomes equivalent to the inverse discrete Fresnel transform.

Using the result (41) we may rewrite the regularized inverse solution (37) as

$$\hat{u}_0 = \frac{1}{\mu\,\gamma_y\gamma_x}\left(\widetilde{B_y^H B_y} + \tilde{\alpha}_y I\right)^{-1} B_y^H\, u_z\, B_x^* \left(\widetilde{B_x^T B_x^*} + \tilde{\alpha}_x I\right)^{-1}, \qquad (44)$$

$$\widetilde{B_y^H B_y} = B_y^H B_y/\gamma_y, \qquad \widetilde{B_x^T B_x^*} = B_x^T B_x^*/\gamma_x, \qquad \tilde{\alpha}_y = \alpha/(|\mu|\gamma_y), \qquad \tilde{\alpha}_x = \alpha/(|\mu|\gamma_x), \qquad \gamma_y = \Delta_{y,0}^2 N_y, \qquad \gamma_x = \Delta_{x,0}^2 N_x,$$

where $\tilde{\alpha}_y$ and $\tilde{\alpha}_x$ are normalized regularization parameters. Under the assumptions of Proposition 4, $\widetilde{B_y^H B_y}$ and $\widetilde{B_x^T B_x^*}$ are identity matrices.

With the averaged matrices the solution for $\hat{u}_0$ has the form of Eq. (44), where the matrices B_y and B_x are replaced by A_y and A_x, respectively. In this case the normalized matrices are different from the identity matrices. However, as a rule they are quasi-diagonal, with diagonal entries close to 1 and very small absolute values of the off-diagonal entries. Then the standard requirements on the regularization parameters can be stated in the form $\tilde{\alpha}_y \ll 1$, $\tilde{\alpha}_x \ll 1$, i.e.,

$$0 < \alpha_0 \le \alpha \ll |\mu|\,\min(\gamma_y, \gamma_x). \qquad (45)$$
This upper bound guarantees that the regulariza-tion does not change the main diagonal terms of thematrices, and the lower bound α0 guarantees the reg-ularization effects.Let us evaluate the order of this upper bound for α.
Note that jμj ¼ 1=λd; then for pixels of equal size Δand square image N ×N, we obtain
jμjminðγy; γxÞ ¼Δ2Nλd ¼ Δ2Ndf
λddf¼ df
d;
which means that roughly α ≪ d_f/d. Thus for distances close to the in-focus ones, α ≪ 1. Evaluation of the lower bound α₀ is a difficult problem. It has been found experimentally that on many occasions values of α from the interval [10⁻⁷, 10⁻¹] give a good-accuracy reconstruction of the object distribution.

A generalized Tikhonov regularization uses various quadratic norms of linear functionals of u_0, such as gradients or the Laplacian [15,16]. Further developments of the regularization techniques result in nonlinear penalty functions such as the total variation [17] and the l₀ and l₁ norms. The latter lead to the recent compressed sensing estimators, which are able to build lower-complexity estimates with strong denoising properties [18,19]. All these sorts of approaches are applicable to the considered inverse wavefront restoration problems.
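As a sanity check of the regularized inverse (37), the sketch below forward-propagates a toy object distribution through the separable model u_z = μ A_y u_0 A_x^T and reconstructs it with a Tikhonov-regularized inverse (a minimal NumPy sketch; the authors' implementation is in MATLAB, and the random well-conditioned stand-ins for A_y and A_x, the value of μ, and α are illustrative assumptions, not the paper's transfer matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
mu = 1 / (1j * 633e-9 * 0.02)   # illustrative scalar factor with |mu| = 1/(lambda*d)
alpha = 1e-8                    # small Tikhonov regularization parameter

# Well-conditioned random stand-ins for the separable transfer matrices.
Ay = np.eye(N) + (rng.standard_normal((N, N))
                  + 1j * rng.standard_normal((N, N))) / (4 * np.sqrt(N))
Ax = np.eye(N) + (rng.standard_normal((N, N))
                  + 1j * rng.standard_normal((N, N))) / (4 * np.sqrt(N))

u0 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))  # object field
uz = mu * Ay @ u0 @ Ax.T                                             # forward model

# Regularized inverse (37): u0_hat = (1/mu) * Qy @ uz @ Qx
I = np.eye(N)
Qy = np.linalg.solve(Ay.conj().T @ Ay + (alpha / abs(mu)) * I, Ay.conj().T)
Qx = Ax.conj() @ np.linalg.inv(Ax.T @ Ax.conj() + (alpha / abs(mu)) * I)
u0_hat = (1 / mu) * Qy @ uz @ Qx

rmse = np.sqrt(np.mean(np.abs(u0_hat - u0) ** 2))
print(f"RMSE = {rmse:.2e}")   # near machine precision for this well-conditioned case
```

For a well-conditioned transfer matrix and a small α, the reconstruction bias introduced by the regularization is negligible; the interesting regime, discussed in the experiments below, is the ill-conditioned one.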
4. Experiments on Wave Field Reconstruction
A. Simulations
Simulation is used as a universal tool for accurate study of the reconstruction accuracy and qualitative effects.
1. Sensor Redundancy and Reconstruction Accuracy
It is assumed that the images in the object and sensor planes are square but can be of different sizes, N_{y,0} = N_{x,0} = N_0 and N_{y,z} = N_{x,z} = N_z = qN_0, q ≥ 1; the pixels are square and of equal size for the sensor and object planes, Δ_{y,0} = Δ_{x,0} = Δ_0, Δ_{y,z} = Δ_{x,z} = Δ_z, and Δ_0 = Δ_z = Δ. The size of the image in the sensor plane is always larger than or equal to the size of the image in the object plane.
Let us denote the ratio of these sizes as q = N_z/N_0. The parameter q defines a redundancy of the sensor with respect to the object distribution. The larger the sensor image is, the better the accuracy of reconstruction for the object plane. While this statement sounds obvious, one of the problems addressed here is the analysis of the influence of the parameter q on the rank of the M-DDT matrices and on the accuracy of reconstruction. Note that the M-DDT algorithm always uses the averaged matrices A_y and A_x.
The size of A_y and A_x is N_z × N_0, while the size of A_y^H A_y and A_x^H A_x is always N_0 × N_0 for any N_z = qN_0, q ≥ 1. The rank and conditioning of A_y^H A_y and A_x^H A_x are seriously dependent on the sensor size N_z and the parameter q.
The distance d = d_f corresponding to the conditions (11) is calculated as

  d_f = N_z Δ²/λ = N_0 q Δ²/λ.  (46)
We call this the "in-focus" distance, with notation d_f, because for the nonaveraged matrices B_y^H B_y and B_x^H B_x it enables the perfect reconstruction (see Proposition 4 for accurate statements). It has been observed in multiple simulation experiments that for the averaged matrices the distance d = d_f and distances d close to d_f are also in-focus distances with the best reconstruction accuracy. This interesting fact cannot be proved analytically, as there are no analytical expressions for the averaged matrices A_y and A_x.
Figure 3 shows the numerical ranks of the matrix A_y^H A_y (512 × 512) versus the distance d for different values of q = {1, 2, 4}. In order to emphasize the dependence of the in-focus distance on the parameter q, we use the notation d_f|_q, where the value of q is shown explicitly. Let us start from the curves corresponding to q = 1. The peaks of these curves for the averaged A_y and nonaveraged B_y [as specified in Eq. (40)] cases are achieved exactly at the in-focus distance d_f|_{q=1} and are equal to the maximal rank 512. These ranks are
20 June 2009 / Vol. 48, No. 18 / APPLIED OPTICS 3415
quickly going down as d > d_f, with the ranks for the averaged case mainly lower than those for the nonaveraged case. As a result, the reconstruction of the object distribution becomes more and more questionable for distances larger than d_f.

For instance, for d = 3d_f|_{q=1} = 0.06 m the rank for the averaged case is about 260; i.e., only 260 lines and columns out of 512 can be treated as independent ones. We can expect that only this number of columns and lines can be reconstructed, and there is no chance to reach good-quality imaging. For smaller distances d < d_f, the ranks are erratic, first decreasing rapidly, then increasing, and again going down. This results in correspondingly varying accuracy of the reconstructions.

For the redundant sensors of larger size with q = 2 and q = 4 the ranks are also decreasing functions for d > d_f|_{q=2} and d > d_f|_{q=4}, respectively, but the rank values are much higher than for q = 1. Note also that for nearly all cases the ranks for the averaged matrices are lower than those for the nonaveraged ones.

A very interesting behavior of the ranks of the averaged matrices is observed for d ≤ d_f|_{q=2} and for d ≤ d_f|_{q=4}, because these ranks take the maximum value 512 down to very small values of d. Thus for these distances M-DDT with the averaged matrices is able to give reconstructions of very high accuracy.

For the nonaveraged matrices the behavior of the rank curves is different. The ranks take the maximum values for d_f|_{q=1} ≤ d ≤ d_f|_{q=2} if q = 2 and d_f|_{q=1} ≤ d ≤ d_f|_{q=4} if q = 4 and become erratic for d < d_f|_{q=1}.

Figure 4 shows the root mean square error (RMSE) for
the M-DDT algorithm (with averaged matrices) versus the distance d for q = 1, q = 2, and q = 4. We can see that if q = 2, we obtain a practically perfect reconstruction for all d ≤ d_f|_{q=2}. If q = 4, this interval of perfect reconstruction becomes wider, and we have perfect reconstruction for all d ≤ d_f|_{q=4}. In all these calculations we use regularization parameters from the interval [10⁻⁷, 10⁻¹]. For q = 1 the accuracy is much worse than it is for q = {2, 4}.

Figure 4 is complementary to Fig. 3. It demonstrates that for all intervals of d where the ranks take the maximum values, perfect (or high-accuracy) reconstruction is achieved.
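The rank behavior for the nonaveraged case can be reproduced with a toy kernel matrix. In the NumPy sketch below the chirp B[k, s] = exp(jπ(kΔ − sΔ)²/(λd)) (constant factors dropped) plays the role of B_y; the sizes, wavelength, and pixel size are illustrative stand-ins for the paper's 512 × 512 setting. The numerical rank of B^H B is full at d = d_f = NΔ²/λ and drops beyond it:

```python
import numpy as np

N = 64                      # toy image size (the paper uses 512)
wavelength = 633e-9         # assumed HeNe wavelength
delta = 5e-6                # pixel size, 5 um
d_f = N * delta**2 / wavelength   # in-focus distance, Eq. (46) with q = 1

def rank_at(d):
    # Nonaveraged pixelwise Fresnel kernel matrix (constant factors omitted):
    k = np.arange(N)
    B = np.exp(1j * np.pi * (k[:, None] * delta - k[None, :] * delta) ** 2
               / (wavelength * d))
    return np.linalg.matrix_rank(B.conj().T @ B)

print(rank_at(d_f), rank_at(3 * d_f))  # full rank at d_f, rank deficient beyond it
```

At d = d_f the kernel becomes a chirp-modulated DFT matrix, so B^H B is proportional to the identity; beyond d_f the effective time-bandwidth product of the kernel shrinks and the numerical rank falls, mirroring the curves in Fig. 3.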
2. Comparative Accuracy Analysis
The comparative quantitative accuracy analysis of the different algorithms for various distances d is shown in Fig. 5. The best performance for nearly all d is demonstrated by the regularized inverse M-DDT algorithm (37). Close to these results are the results shown by the recursive regularized inverse frequency-domain F-DDT presented in [2]. We use the recursive version of this algorithm, and the results are given for 10 iterations. It is emphasized that M-DDT enables the perfect reconstruction for d = d_f. If the square pixels are of equal size in the object and sensor planes and the corresponding images are also of equal size, the standard double-size zero-padding convolutional algorithm (conv-2) and M-IDFrT give identical results within the accuracy of calculations. The accuracy of these algorithms is much better than can be obtained by the ordinary
Fig. 4. RMSE versus the distance z = d for q = 1, q = 2, and q = 4, averaged matrices. A nearly perfect reconstruction is obtained for all d ≤ d_f|_{q=2} if q = 2 and for all d ≤ d_f|_{q=4} if q = 4.
Fig. 5. Accuracy of the image restoration (RMSE) versus the distance d for different algorithms: M-DDT; the convolutional inverse using the transfer functions of the image size (conv-1) and of the double-size zero-padded image (conv-2); M-IDFrT; and the recursive regularized inverse F-DDT. d_f = 0.02 m, q = 1.
Fig. 3. Rank of the matrix A_y^H A_y versus the distance z = d for the averaged (av) matrices A_y and nonaveraged (non-av) matrices B_y for different sizes of the square sensor defined by the parameter q.
convolutional algorithm (conv-1). The curves corresponding to these algorithms in Fig. 5 are overlapping. Overall, the M-DDT algorithm demonstrates essentially better accuracy than the standard ones. This algorithm differs from M-IDFrT only by the use of the averaged matrices A_y, A_x, while the nonaveraged matrices B_y, B_x are used in M-IDFrT.

Figures 6 and 7 illustrate the visual performance of the M-DDT algorithm versus the M-IDFrT algorithm. In this simulation we use M-DDT with the averaged matrices for the forward propagation, which gives a precise prediction of the wave field distribution at the sensor plane. Thus it is a good imitation of the optical experiments. The backward M-DDT propagation (inverse distribution reconstruction) is performed according to formula (37), and that of M-IDFrT according to Eq. (16). The advantage of the regularized inverse algorithm is clear in all cases.

In Fig. 7 we use the redundant sensor of double size with q = 2. The accuracy is essentially improved as compared with the case q = 1 shown in Fig. 6. Accuracy results similar to those discussed above have been confirmed for various test images.
Figure 8 demonstrates the capability of the M-DDT algorithm with the averaging to give aliasing-free results. The left image is the M-IDFrT reconstruction for the distance d = 0.5d_f. This reconstruction is completely destroyed by the aliasing effects. At the same time, the M-DDT reconstruction (right image) is nearly perfect.
When d ≠ d_f, the matrices A_y^H A_y and A_x^H A_x can be extremely ill conditioned, with condition numbers of the order 10¹⁸ to 10²⁰. It is interesting that very small regularization parameters α are able to change the accuracy of the reconstructions in a dramatic way. In particular, the image in Fig. 6(a) is obtained with a very small α = 10⁻¹². Increasing this regularization parameter up to the value 10⁻⁴ practically does not change the reconstruction accuracy. Thus the algorithm is quite robust with respect to the selection of α. Nevertheless, nonzero positive regularization parameters are of importance, as is illustrated in particular in Fig. 9. These images are obtained for d = 1.01 d_f; i.e., the distance d differs from the in-focus one by only one percent. The image in Fig. 9(a), obtained for α = 0, is completely destroyed, while the image in Fig. 9(b), obtained for α = 10⁻⁷, shows
Fig. 6. Comparative imaging by the M-DDT and M-IDFrT algorithms for various distances between the object and sensor planes: (a) d = d_f, (b) d = 3d_f, and (c) d = 6d_f, where d_f is the in-focus distance for q = 1, d_f = d_f|_{q=1}. The images in the object and sensor planes are of equal size.
high-accuracy imaging. The regularization results in a lower sensitivity of the reconstruction to variations of the distance values. Note that zero regularization parameters can be used for d = d_f.
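The rescue effect of a tiny positive α on an ill-conditioned system can be reproduced with the same kind of toy chirp kernel at an off-focus distance (NumPy sketch; the matrix size, distance, and α are illustrative assumptions, not the paper's 512 × 512 experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
wavelength, delta = 633e-9, 5e-6
d_f = N * delta**2 / wavelength
d = 1.25 * d_f                        # off-focus distance

k = np.arange(N)
B = np.exp(1j * np.pi * ((k[:, None] - k[None, :]) * delta) ** 2
           / (wavelength * d))
G = B.conj().T @ B                    # normal matrix, ill conditioned off focus

u0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)
b = G @ u0                            # noise-free data in normal-equation form

alpha = 1e-7 * np.linalg.norm(G, 2)   # tiny regularization, scaled to the matrix
u_plain = np.linalg.solve(G, b)                     # alpha = 0
u_reg = np.linalg.solve(G + alpha * np.eye(N), b)   # alpha > 0

err = lambda u: np.linalg.norm(u - u0) / np.linalg.norm(u0)
print(f"cond(G) = {np.linalg.cond(G):.1e}, "
      f"err(alpha=0) = {err(u_plain):.1e}, err(regularized) = {err(u_reg):.1e}")
```

Adding αI caps the condition number at roughly σ_max/α, so the regularized solve stays numerically stable, while the unregularized solve amplifies rounding errors through the near-zero singular values.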
3. Nonsquare Pixels and Nonequal Size Images
In the above simulation, produced for square pixels of equal size for the image and sensor planes, the in-focus distance is selected using formula (46) obtained from Eq. (11). For the general case, when the pixels are rectangular and of different sizes in the image and sensor planes and the images in these planes can also be of different sizes, the sampling conditions (11) derived from Eq. (10) take the well-known form [1],
  N_{y,0} = λ·d/(Δ_{y,0} Δ_{y,z}),  N_{y,z} = λ·d/(Δ_{y,0} Δ_{y,z}),
  N_{x,0} = λ·d/(Δ_{x,0} Δ_{x,z}),  N_{x,z} = λ·d/(Δ_{x,0} Δ_{x,z}).  (47)
Fig. 7. Comparative imaging by the M-DDT and M-IDFrT algorithms for various distances between the object and sensor planes: (a) d = d_f, (b) d = 3d_f, and (c) d = 6d_f, where d_f is the in-focus distance for q = 1, d_f = d_f|_{q=1}. The image in the sensor plane is of double the size of the image in the object plane, q = 2.
Fig. 8. Object wave field reconstruction (amplitude distribution) for the distance z = 0.5d_f: (left) the standard M-IDFrT fails, with a pattern of clear aliasing effects; (right) M-DDT with the averaged matrices gives a good-quality, aliasing-free reconstruction.
Fig. 9. M-DDT reconstructions for the distance d = 1.01 d_f: image (a), obtained with α = 0, is completely destroyed; image (b), obtained with α = 10⁻⁷, gives a very good accuracy of RMSE = 0.0049.
Of course, these conditions cannot be fulfilled simultaneously with the same value of the distance parameter d.

Nevertheless, the in-focus distance for M-DDT can be found using the ranks of the matrices A_y and A_x analyzed as functions of d. The matrices A_y and A_x in Eq. (26) define transformations of the rows and columns of the object distribution u_0 in order to map this distribution into the observed sensor-plane distribution u_z. The conditions (47) can be considered separately for these matrices as two groups of equations, respectively, for the matrix A_y,
  N_{y,0} = λ·d/(Δ_{y,0} Δ_{y,z}),  N_{y,z} = λ·d/(Δ_{y,0} Δ_{y,z}),  (48)
and for the matrix A_x,
  N_{x,0} = λ·d/(Δ_{x,0} Δ_{x,z}),  N_{x,z} = λ·d/(Δ_{x,0} Δ_{x,z}).  (49)
Then the in-focus distances for the matrices A_y (focusing in rows), d_{f,y}, and A_x (focusing in lines), d_{f,x}, are calculated as
  d_{f,y} = Δ_{y,0} Δ_{y,z} N_{y,z}/λ,  d_{f,x} = Δ_{x,0} Δ_{x,z} N_{x,z}/λ.  (50)
In these formulas we use that N_{y,z} ≥ N_{y,0} and N_{x,z} ≥ N_{x,0}. When N_{y,z} = N_{y,0} and N_{x,z} = N_{x,0}, the two conditions in each of Eqs. (48) and (49) become identical and are satisfied, respectively, for d = d_{f,y} and d = d_{f,x}. When N_{y,z} > N_{y,0} and N_{x,z} > N_{x,0}, the sensor is redundant (q > 1), and the dependence of the rank on d has the form shown in Fig. 3. Thus the rank takes the maximum values for distances equal to or smaller than those defined in Eq. (50).

Now we select a single in-focus distance d_f from the two distances d_{f,y} and d_{f,x} for the M-DDT algorithm (with averaged matrices) as
  d_f = min(d_{f,y}, d_{f,x}).  (51)
The ranks of the averaged A_y and A_x take the maximal, or close to maximal, values for all d ≤ d_{f,y} and d ≤ d_{f,x}. Thus the selection of d_f guarantees that the ranks of both matrices A_y and A_x simultaneously take values close to their maximum values.

These results are valid only for the averaged matrices A_y and A_x and, in general, are not valid for the nonaveraged B_y and B_x. Note also that Eq. (51) is valid provided that the difference between d_{f,y} and d_{f,x} is not too large. Thus the recommendation (51) is not universal and should be used cautiously.

Illustrations of using the focusing distance calculated according to the rule (51) are shown in Figs. 10 and 11. It is assumed that the pixels are rectangular, of size (5 × 8) μm, and the same for the object and sensor planes, and that both images are of size 512 × 512. In Fig. 10 we can see the ranks of the matrices A_y^H A_y and A_x^H A_x. These ranks take their maximum value 512 for the corresponding in-focus distances d_{f,y} = 0.0202 m and d_{f,x} = 0.0518 m. It is seen that for d_f = min(d_{f,y}, d_{f,x}) = 0.0202 m, the rank of the matrix A_x^H A_x is close to 512. This confirms the motivation behind the selection of d_f according to formula (51). Figure 11 demonstrates the object wave field reconstruction according to M-DDT and the best applicable standard technique, M-IDFrT. It is seen that the M-DDT results [Fig. 11(a)] for d_{f,y} are better than those [Fig. 11(c)] for the distance d_{f,x}, with a lower RMSE value of 0.0115 versus 0.0559. The comparison with M-IDFrT [Figs. 11(b) and 11(d)] is in favor of M-DDT both visually and numerically, with the best M-IDFrT RMSE value of 0.0809 achieved for the distance d_{f,x}.
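The quoted in-focus distances follow directly from Eq. (50) and the rule (51); a quick arithmetic check (assuming a HeNe wavelength λ = 633 nm, which is consistent with the quoted values but is not stated in this section):

```python
# In-focus distances for rectangular pixels, Eq. (50), and the selection rule (51).
wavelength = 633e-9            # assumed HeNe laser wavelength
N = 512                        # image size, 512 x 512
dy, dx = 5e-6, 8e-6            # rectangular pixel, (5 x 8) um, same in both planes

df_y = dy * dy * N / wavelength    # focusing in rows
df_x = dx * dx * N / wavelength    # focusing in lines
df = min(df_y, df_x)               # rule (51)

print(f"df_y = {df_y:.4f} m, df_x = {df_x:.4f} m, df = {df:.4f} m")
# close to the 0.0202 m and 0.0518 m quoted for Fig. 10
```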
B. Phase-Shifting Wave Field Registration
Phase-shifting holography is one of the popular techniques used for measurement of complex-valued wave field distributions. In the basic setup for in-line phase-shifting holography, the object beam and a special reference beam, reflected at a computer-controlled piezoelectric transducer mirror, are combined on the digital sensor and form an interference pattern [1,12]. Following [20], we show that the M-DDT techniques are applicable for the object wave field reconstruction. The wave field impinging on the digital sensor can be presented in the form
  I_φ(y, x) = |u_z(y, x) + u_ref(y, x) e^{jφ(y,x)}|²
            = |u_z(y, x)|² + |u_ref(y, x)|² + u_z(y, x) u_ref^*(y, x) e^{−jφ(y,x)} + u_z^*(y, x) u_ref(y, x) e^{jφ(y,x)}.
We assume that the amplitude u_ref and the phase shift φ(y, x) = φ of the reference beam are invariant; then the averaged outputs of the sensor pixels are calculated for I_φ(y, x) according to Eq. (20) as follows:
  I_φ[k, l] = I_z[k, l] + |u_ref|² + u_z[k, l] u_ref^* e^{−jφ} + u_z^*[k, l] u_ref e^{jφ},  (52)
Fig. 10. Ranks of the matrices A_y^H A_y and A_x^H A_x for the rectangular pixels (5 × 8) μm and RMSE for the Baboon image reconstruction by the M-DDT and M-IDFrT algorithms versus the distance d.
where

  I_φ[k, l] = (1/(Δ_{y,z} Δ_{x,z})) ∫_{−Δ_{y,z}/2}^{Δ_{y,z}/2} ∫_{−Δ_{x,z}/2}^{Δ_{x,z}/2} I_φ(kΔ_{y,z} + η′, lΔ_{x,z} + ξ′) dξ′ dη′,

  I_z[k, l] = (1/(Δ_{y,z} Δ_{x,z})) ∫_{−Δ_{y,z}/2}^{Δ_{y,z}/2} ∫_{−Δ_{x,z}/2}^{Δ_{x,z}/2} |u_z(kΔ_{y,z} + η′, lΔ_{x,z} + ξ′)|² dξ′ dη′,
and u_z[k, l] is given by Eq. (20).

Let us use the three phase shifts φ = 0, π/2, and π; then it follows from Eq. (52) that the complex-valued wave field u_z[k, l] can be calculated as
  u_z[k, l] = (1/(4 u_ref^*)) ( I_0[k, l] − I_π[k, l] + j(2 I_{π/2}[k, l] − I_0[k, l] − I_π[k, l]) ),  (53)
and the wave field reconstruction is reduced to the reconstruction of the discrete object distribution u_0 from the wave field distribution u_z given in the sensor plane.

In our preliminary physical tests we use this phase-shifting setup with an object distribution generated by a spatial light modulator. Visually, M-DDT gives quite a good reconstruction. An accuracy evaluation is planned as a further special experimental study, in particular including a precise calibration of the equipment and corrections of the wave field propagation modeling.
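The three-step recovery can be checked numerically against the interference model of Eq. (52) (NumPy sketch; the synthetic field values are arbitrary test data, and the sign of the imaginary part used below is the one that follows from the model as written):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
u_z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))  # sensor-plane field
u_ref = 0.8 * np.exp(1j * 0.3)                                        # constant reference beam

def intensity(phi):
    # Interference pattern of Eq. (52) for reference phase shift phi.
    return np.abs(u_z + u_ref * np.exp(1j * phi)) ** 2

I0, I90, I180 = intensity(0.0), intensity(np.pi / 2), intensity(np.pi)

# Three-step recovery of the complex-valued field.
u_rec = (I0 - I180 + 1j * (2 * I90 - I0 - I180)) / (4 * np.conj(u_ref))

print(np.allclose(u_rec, u_z))  # True
```

The real part of u_z u_ref^* comes from I_0 − I_π and the imaginary part from 2I_{π/2} − I_0 − I_π, after which division by 4u_ref^* isolates u_z.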
C. Implementation
Computation of the integrals defining the matrix A_y is the most demanding element of the computational complexity of M-DDT. This matrix depends on the sizes of the images and pixels in the object and sensor planes as well as on the distance between these planes. Thus A_y can be calculated for a set of these parameters in advance, saved, and used for processing of various wave field distributions with the corresponding size and distance parameters. Respectively, in evaluating the computational complexity, we separate the time required for calculation of the matrices A_y (and B_y) from the time required for calculation of the wave field reconstructions themselves. This separation allows us to make a fair comparison of the computational complexity against the standard methods, where there are no calculations of the integrals defining A_y.
We represent the reconstruction estimate (37) in the form of the matrix product

  u_0 = (1/μ) Q_y u_z Q_x,  (54)

where Q_y = (A_y^H A_y + α/|μ|·I)^{−1} A_y^H and Q_x = A_x^* (A_x^T A_x^* + α/|μ|·I)^{−1}. In Tables 1 and 2 we show the computational time for calculation of the matrices A_y and Q_y (the times for calculation of A_x and Q_x are equal to the computational times for A_y and Q_y, respectively).
All further results are given for square matrices of size N × N with pixel sizes Δ_{y,0} × Δ_{x,0} in the object and sensor planes. The results in Table 1 for the matrix A_y depend essentially on the ratio of the pixel sizes in the object and sensor planes. If Δ_{y,z} = Δ_{y,0}, then A_y is a Toeplitz matrix defined by the elements of a single column (row). This enables a fast calculation of the matrix with a minimal required time. If the pixels are not square or not equal in the object and sensor planes, e.g., Δ_{y,z} ≠ Δ_{y,0}, the computational time is much higher. In general, it requires calculation of all N² elements of the matrix, with the maximal time shown in the last row of Table 1. Note that in this table Δ_{y,z} = 5 μm.
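The Toeplitz shortcut for Δ_{y,z} = Δ_{y,0} can be sketched as follows (NumPy; the scalar kernel function stands in for the A_y integrals, which in the equal-pixel case would be evaluated only 2N − 1 times instead of N² times):

```python
import numpy as np

N = 256
wavelength, d, delta = 633e-9, 0.02, 5e-6   # illustrative parameters

def kernel(m):
    # Stand-in for the A_y[k, s] integral; for equal pixels it depends on k - s only.
    return np.exp(1j * np.pi * (m * delta) ** 2 / (wavelength * d))

# Direct construction: N*N kernel evaluations.
k = np.arange(N)
A_direct = kernel(k[:, None] - k[None, :])

# Toeplitz construction: 2N - 1 evaluations of one column/row, then indexing.
vals = kernel(np.arange(-(N - 1), N))            # kernel(-(N-1)) ... kernel(N-1)
A_toeplitz = vals[(k[:, None] - k[None, :]) + N - 1]

print(np.allclose(A_direct, A_toeplitz))  # True
```

For the real A_y each kernel evaluation is a numerical integral, so reducing N² evaluations to 2N − 1 is exactly the saving reflected in the second row of Table 1.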
However, if Δ_{y,0} = 2 × Δ_{y,z}, as in the third row of Table 1, or Δ_{y,0} is a multiple of Δ_{y,z} with a factor different from 2, a more economical algorithm is used that looks for equal elements of the matrix A_y and calculates these equal elements only once. If there are equal elements in A_y and they are properly
Fig. 11. Comparative imaging by the M-DDT and M-IDFrT algorithms for the distances d_{f,y} = 0.0202 m (first row of images) and d_{f,x} = 0.0518 m (second row of images): (a) M-DDT reconstruction, RMSE = 0.0115; (b) M-IDFrT reconstruction, RMSE = 0.874; (c) M-DDT reconstruction, RMSE = 0.0559; (d) M-IDFrT reconstruction, RMSE = 0.0809.
Table 1. Computational Time (in Seconds) for Matrices A_y and B_y

Matrix / N²                       128²     256²     512²     1024²
B_y                               0.0046   0.02     0.11     0.43
A_y, Δ_{y,0} = Δ_{y,z}            1.9      3.9      7.1      16
A_y, Δ_{y,0} = 2 × Δ_{y,z}        5.3      14.2     33       110
A_y, Δ_{y,0} = 1.1 × Δ_{y,z}      28.5     73.4     186      768
identified, then the required time is much shorter than that for calculation of all the items of A_y. Of course, this acceleration depends on the ratio Δ_{y,z}/Δ_{y,0}.

In Table 2 we show the calculation time of the matrix Q_y, which depends on the size of the matrix only. These results are obtained assuming that the matrix A_y is already known. In Table 3 we show the time used for calculation of the estimates (54) and compare it with the time required by the standard convolutional methods conv-1 and conv-2. Naturally, conv-1 is the fastest; the time required for the conv-2 estimates is much higher and comparable with the time required by M-DDT. Note that in these calculations for M-DDT we assume that the matrices Q_y and Q_x are given. Under this assumption, the computational complexity of M-DDT is of the order of that of the standard conv-2 algorithm using zero padding.
are obtain byMonte Carlo simulation with averagingover 100 samples. The computer used for experi-ments uses an Intel Pentium 4 chip at 3:2GHz,has random access memory of 4 GB, and usesWindows Xp SP2 and Matlab 7.7.0 (R2008b).Memory requirements depend mainly on the size
of the matrices in Eq. (54) and are calculated as N2 ×8 × 2 × 3 bytes, where N2 is the number of elementsin the matrices, 8 bytes are for double-precision float-ing point calculations, the factor 2 appears because ofall variables are complex valued, and 3 is the numberof matrices in the product (54).A demonstration version of the M-DDTand F-DDT
algorithms implemented in MATLAB is available atthe website http://www.cs.tut.fi/~lasip.
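The memory estimate N² × 8 × 2 × 3 bytes can be checked directly against the storage of complex double-precision arrays (a trivial NumPy sketch):

```python
import numpy as np

N = 1024
# Three complex double-precision N x N matrices appear in the product (54):
# Q_y, u_z, and Q_x.
matrices = [np.zeros((N, N), dtype=np.complex128) for _ in range(3)]
actual = sum(m.nbytes for m in matrices)
estimate = N * N * 8 * 2 * 3     # elements * 8 bytes * (real + imag) * 3 matrices

print(actual == estimate, actual / 2**20, "MiB")  # True 48.0 MiB
```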
5. Conclusion
This paper concerns two aspects of discrete holography: discrete modeling of the forward wave field propagation, and the inverse one. The important property of the proposed M-DDT is that it gives an accurate prediction of the forward propagation for a pixelwise invariant object distribution. In particular, this prediction is free of the aliasing effects typical for all standard discrete models. The forward M-DDT can be treated as a generalization of the standard Fresnel transform to an averaged version of this transform, where the averaging is applied both in the sensor and object domains. The matrix form of the proposed M-DDT is exploited in order to develop the inverse reconstruction techniques and to formulate the conditions under which numerically perfect (high-accuracy) reconstruction of the object-plane wave field distribution is possible.
Numerically and visually, the M-DDT based algorithms demonstrate a better performance as compared with the standard Fresnel transform and convolutional techniques. The inverse M-DDT algorithms also give better results as compared with the recursive and nonrecursive F-DDT reconstruction algorithms.
Appendix A: Replacement of Double Integration by Integration on a Single Variable
Consider the integral

  J = ∫_{−1/2}^{1/2} ∫_{−1/2}^{1/2} f(x + Δ₁ξ₁ + Δ₂ξ₂) dξ₁ dξ₂,
where f is a function of a single variable. Further, change the variables of integration according to the following formula:

  x = cos φ · Δ₁ξ₁ + sin φ · Δ₂ξ₂,  y = −sin φ · Δ₁ξ₁ + cos φ · Δ₂ξ₂.  (A1)
With φ = π/4 this orthogonal transformation means a rotation of the integration area by the angle π/4, and the multiplication by Δ₁ and Δ₂ results in the corresponding scaling of the obtained area. For φ = π/4 the new variables are defined as
  u = (Δ₁ξ₁ + Δ₂ξ₂)/√2,  v = (−Δ₁ξ₁ + Δ₂ξ₂)/√2.  (A2)
Let us assume that Δ₁ ≥ Δ₂; then the integration area is a rectangle rotated by 45°, as shown in Fig. 12. According to Eq. (A2), the coordinates of the corners of
Table 2. Computational Time (in Seconds) for Matrices Q_y

Matrix / N²    128²    256²    512²    1024²
Q_y            0.02    0.15    1.1     10.1
Table 3. Computational Time (in Seconds) for Calculation of Estimates

Matrix / N²    128²     256²    512²    1024²
M-DDT          0.011    0.046   0.74    3.75
conv-1         0.0045   0.023   0.13    0.55
conv-2         0.023    0.13    0.55    2.3
Fig. 12. Integration areas for calculation of the integral (A4).
this rectangle are

  u₁ = (Δ₁ + Δ₂)/(2√2),  v₁ = (−Δ₁ + Δ₂)/(2√2),
  u₂ = (Δ₁ − Δ₂)/(2√2),  v₂ = (−Δ₁ − Δ₂)/(2√2).  (A3)
The integral J over this rectangular area is represented as

  J = ∫_{−1/2}^{1/2} ∫_{−1/2}^{1/2} f(x + Δ₁ξ₁ + Δ₂ξ₂) dξ₁ dξ₂ = (1/(Δ₁Δ₂)) ∫∫_{(u,v)∈S} f(x + √2 u) du dv,  (A4)
where S is the area of the rectangle. The integrand in Eq. (A4) depends on the variable u and does not depend on v. This allows us to represent the integral as a sum of three integrals with integration over the areas I, II, and III shown in Fig. 12. Using simple geometric arguments, we can calculate these integrals as integrals over the single variable u as follows:
  J = J₁ + J₂ + J₃,

  J₁ = (1/(Δ₁Δ₂)) ∫_{−u₂}^{u₂} f(x + √2 u) (Δ₂√2) du,

  J₂ = (1/(Δ₁Δ₂)) ∫_{u₂}^{u₁} f(x + √2 u) · 2(u₁ − u) du,

  J₃ = (1/(Δ₁Δ₂)) ∫_{−u₁}^{−u₂} f(x + √2 u) · 2(u₁ + u) du.
Inserting into these formulas the values from Eq. (A3), we arrive at

  J₁ = (1/Δ₁) ∫_{−(Δ₁−Δ₂)/2}^{(Δ₁−Δ₂)/2} f(x + u) du,

  J₂ = (1/(Δ₁Δ₂)) ∫_{(Δ₁−Δ₂)/2}^{(Δ₁+Δ₂)/2} f(x + u) ((Δ₁ + Δ₂)/2 − u) du,

  J₃ = (1/(Δ₁Δ₂)) ∫_{(Δ₁−Δ₂)/2}^{(Δ₁+Δ₂)/2} f(x − u) ((Δ₁ + Δ₂)/2 − u) du.  (A5)
If Δ₂ > Δ₁, the formulas (A5) are used with Δ₁ replaced by Δ₂ and Δ₂ by Δ₁. For the general case with arbitrary Δ₁ and Δ₂ these formulas can be rewritten as

  J₁ = (1/max(Δ₁, Δ₂)) ∫_{−D₂}^{D₂} f(x + u) du,

  J₂ = (1/(Δ₁Δ₂)) ∫_{D₂}^{D₁} f(x + u) (D₁ − u) du,

  J₃ = (1/(Δ₁Δ₂)) ∫_{D₂}^{D₁} f(x − u) (D₁ − u) du,  (A6)
where D₁ = (Δ₁ + Δ₂)/2 and D₂ = |Δ₁ − Δ₂|/2.

Let us derive the formulas valid for the integrals in Eq. (24). For A_y we have
  A_y[k, s] = (1/Δ_{y,z}) ∫_{−Δ_{y,z}/2}^{Δ_{y,z}/2} ∫_{−Δ_{y,0}/2}^{Δ_{y,0}/2} exp( (jπ/(λd)) (kΔ_{y,z} − sΔ_{y,0} + ξ′ + ξ)² ) dξ dξ′

           = Δ_{y,0} ∫_{−1/2}^{1/2} ∫_{−1/2}^{1/2} exp( (jπ/(λd)) (kΔ_{y,z} − sΔ_{y,0} + ξ₁Δ_{y,0} + ξ₂Δ_{y,z})² ) dξ₁ dξ₂.
Then using Eq. (A6) we obtain

  A_y[k, s] = Δ_{y,0} · (J₁(k, s) + J̃₂(k, s)),  (A7)

  J₁(k, s) = (1/max(Δ_{y,0}, Δ_{y,z})) ∫_{−D₂}^{D₂} exp( (jπ/(λd)) (kΔ_{y,z} − sΔ_{y,0} + u)² ) du,

  J̃₂(k, s) = (1/(Δ_{y,0}Δ_{y,z})) ∫_{D₂}^{D₁} [ exp( (jπ/(λd)) (kΔ_{y,z} − sΔ_{y,0} + u)² ) + exp( (jπ/(λd)) (kΔ_{y,z} − sΔ_{y,0} − u)² ) ] (D₁ − u) du.  (A8)
In a similar way we obtain for A_x

  A_x[l, t] = Δ_{x,0} · (J₁(l, t) + J̃₂(l, t)),  (A9)

  J₁(l, t) = (1/max(Δ_{x,0}, Δ_{x,z})) ∫_{−D₂}^{D₂} exp( (jπ/(λd)) (lΔ_{x,z} − tΔ_{x,0} + u)² ) du,

  J̃₂(l, t) = (1/(Δ_{x,0}Δ_{x,z})) ∫_{D₂}^{D₁} [ exp( (jπ/(λd)) (lΔ_{x,z} − tΔ_{x,0} + u)² ) + exp( (jπ/(λd)) (lΔ_{x,z} − tΔ_{x,0} − u)² ) ] (D₁ − u) du.  (A10)
For Δ₁ = Δ₂ = Δ the integral J₁ = 0 and J = J̃₂, which can be transformed to

  J = ∫_{−1}^{1} f(x + Δu) (1 − |u|) du.  (A11)

In the derivation of this formula in [2], an erroneous extra factor of 2 appeared, which should be omitted.
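The reduction from the double integral to the single-variable integrals (A6) can be verified numerically; a sketch using Gauss-Legendre quadrature (NumPy; the test function f and the sizes Δ₁, Δ₂ are arbitrary):

```python
import numpy as np

def gauss(a, b, n=60):
    # Gauss-Legendre nodes and weights mapped to the interval [a, b].
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

f = np.cos                      # arbitrary smooth test function
x0, d1, d2 = 0.3, 1.0, 0.4      # evaluation point and sizes Delta_1, Delta_2

# Left-hand side: double integral over [-1/2, 1/2]^2.
xi, wi = gauss(-0.5, 0.5)
J_double = np.einsum("i,j,ij->", wi, wi,
                     f(x0 + d1 * xi[:, None] + d2 * xi[None, :]))

# Right-hand side: single-variable integrals J1 + J2 + J3 of Eq. (A6).
D1, D2 = (d1 + d2) / 2, abs(d1 - d2) / 2
u, w = gauss(-D2, D2)
J1 = (w @ f(x0 + u)) / max(d1, d2)
u, w = gauss(D2, D1)
J2 = (w @ (f(x0 + u) * (D1 - u))) / (d1 * d2)
J3 = (w @ (f(x0 - u) * (D1 - u))) / (d1 * d2)

print(np.isclose(J_double, J1 + J2 + J3))   # True
```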
Appendix B: Proof of Proposition 3
Using the matrix trace, the criterion (35) can be expressed as

  L = tr( (u_z − μ A_y u₀ A_x^T)^H (u_z − μ A_y u₀ A_x^T) + α² u₀^H u₀ )
    = tr( u_z^H u_z − μ u_z^H A_y u₀ A_x^T − μ^* A_x^* u₀^H A_y^H u_z + |μ|² A_x^* u₀^H A_y^H A_y u₀ A_x^T + α² u₀^H u₀ ).  (B1)

Minimization in Eq. (34) concerns both the real and imaginary parts of u₀. The minimum conditions can be given in the following form (see, e.g., Appendices 14.6 and 14.7 in [21]):
  ∂L/∂u₀^H = 0.  (B2)
Using the formulas for trace matrix differentiation (e.g., [22], pp. 275-276),

  ∂/∂R tr(QR) = Q^T,  ∂/∂R tr(QR^T) = Q,

we arrive at the equation

  −μ^* A_x^H u_z^T A_y^* + |μ|² A_x^H A_x u₀^T A_y^T A_y^* + α² u₀^T = 0,

which can be rewritten in the equivalent form

  μ^* A_y^H u_z A_x^* = (|μ| A_y^H A_y + αI) u₀ (|μ| A_x^T A_x^* + αI) − α|μ| u₀ A_x^T A_x^* − α|μ| A_y^H A_y u₀,  (B3)
where I is the identity matrix of the corresponding size. For simplicity we do not show the size of the identity matrix. This proves Proposition 3.

The following iterative algorithm has been used to
solve the matrix equation (B3):

  u₀^{(k+1)} = (|μ| A_y^H A_y + αI)^{−1} μ^* A_y^H u_z A_x^* (|μ| A_x^T A_x^* + αI)^{−1}
             + |μ|α (|μ| A_y^H A_y + αI)^{−1} u₀^{(k)} A_x^T A_x^* (|μ| A_x^T A_x^* + αI)^{−1}
             + |μ|α (|μ| A_y^H A_y + αI)^{−1} A_y^H A_y u₀^{(k)} (|μ| A_x^T A_x^* + αI)^{−1},
  k = 0, 1, …,  u₀^{(0)} = 0.  (B4)
We highly appreciate the very fruitful comments of the three anonymous referees. This work was supported by the Academy of Finland, project 213462 (Finnish Centre of Excellence program 2006-2011).
References
1. Th. Kreis, Handbook of Holographic Interferometry (Optical and Digital Methods) (Wiley-VCH, 2005).
2. V. Katkovnik, J. Astola, and K. Egiazarian, "Discrete diffraction transform for propagation, reconstruction, and design of wave field distributions," Appl. Opt. 47, 3481-3493 (2008).
3. G. S. Sherman, "Integral-transform formulation of diffraction theory," J. Opt. Soc. Am. 57, 1490-1498 (1967).
4. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).
5. L. Onural, "Exact analysis of the effects of sampling of the scalar diffraction field," J. Opt. Soc. Am. A 24, 359-367 (2007).
6. F. Shen and A. Wang, "Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula," Appl. Opt. 45, 1102-1110 (2006).
7. L. Onural, "Some mathematical properties of the uniformly sampled quadratic phase function and associated issues in digital Fresnel diffraction simulations," Opt. Eng. 43, 2557-2563 (2004).
8. I. Aizenberg and J. Astola, "Discrete generalized Fresnel functions and transforms in an arbitrary discrete basis," IEEE Trans. Signal Process. 54, 4261-4270 (2006).
9. M. Liebling, Th. Blu, and M. Unser, "Fresnelets: new multiresolution wavelet bases for digital holography," IEEE Trans. Image Process. 12, 29-43 (2003).
10. B. M. Hennelly and J. T. Sheridan, "Generalizing, optimizing, and inventing numerical algorithms for the fractional Fourier, Fresnel, and linear canonical transforms," J. Opt. Soc. Am. A 22, 917-927 (2005).
11. L. Yaroslavsky, "Discrete transforms, fast algorithms and point spread functions of numerical reconstruction of digitally recorded holograms," in Advances in Signal Transforms: Theory and Applications, J. Astola and L. Yaroslavsky, eds., Vol. 7 of EURASIP Book Series on Signal Processing and Communications (Hindawi Publishing, 2007), pp. 93-141.
12. I. Yamaguchi, J. Kato, S. Ohta, and J. Mizuno, "Image formation in phase-shifting digital holography and applications to microscopy," Appl. Opt. 40, 6177-6186 (2001).
13. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, 2nd ed., Signal Processing Series (Prentice-Hall, 1999).
14. H. Theil, "Linear algebra and matrix methods in econometrics," in Handbook of Econometrics, Z. Griliches and M. D. Intriligator, eds. (North-Holland, 1983), Vol. 1.
15. A. N. Tikhonov and V. Y. Arsenin, Solution of Ill-Posed Problems (Wiley, 1977).
16. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (IOP Publishing, 1998).
17. L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D (Amsterdam) 60, 259-268 (1992).
18. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52, 1289-1306 (2006).
19. E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory 52, 489-509 (2006).
20. V. Katkovnik, J. Astola, and K. Egiazarian, "Numerical wavefield reconstruction in phase-shifting holography as inverse discrete problem," in Proceedings of the 2008 European Signal Processing Conference (EUSIPCO 2008) (Wiley, 2008).
21. V. Katkovnik, K. Egiazarian, and J. Astola, Local Approximation Techniques in Signal and Image Processing (SPIE, 2006).
22. L. L. Scharf, Statistical Signal Processing (Prentice-Hall, 1991).