Computational Photography and Compressive Holography
Daniel L. Marks
Duke Imaging and Spectroscopy Program, Fitzpatrick Center for Photonics, Duke University
101 Science Drive, Durham NC 27708
Joonku Hahn, Ryoichi Horisaki, David J. Brady
Abstract
As lasers, photosensors, and computational imaging
techniques improve, holography becomes an increasingly
attractive approach for imaging applications largely re-
served for photography. For the same illumination energy,
we show that holography and photography have nearly
identical noise performance. Because the coherent field is
two dimensional outside of a source, there is ambiguity in
inferring the three-dimensional structure of a source from
the coherent field. Compressive holography overcomes this
limitation by imposing sparsity constraints on the three-
dimensional scatterer, which greatly reduces the number
of possibilities allowing reliable inference of structure. We
demonstrate the use of compressive holography to infer the
three-dimensional structure of a scene comprising two toys.
1. Introduction
The invention of the laser 50 years ago enabled holog-
raphy to be a practical means of image formation. De-
spite this, photography continues to dominate almost all
imaging applications. With continued advances in lasers,
photosensors, and computational imaging techniques, it is
worth revisiting holography and comparing its capabilities
and performance to photography. In particular, because
holograms contain a snapshot of the optical field, numeri-
cal field propagation techniques can be used on holograms
to simulate diffraction and other optical processes. This is
in general not possible with photographic images in which
phase information is lost. Holograms can be digitally re-
focused using numerical diffraction whereas out-of-focus
photographic images are blurred. Despite this advantage,
in general it is impossible to unambiguously identify the
configuration of a three-dimensional scatterer from a two-
dimensional hologram. Recently it was proposed [5] to use
compressive inference techniques to resolve this ambigu-
ity by noting that many objects of interest are sparse. In
this work, we compare the fundamental noise limitations
of photography and holography and show how compressive
inference can be applied to single holograms to infer three-
dimensional structure.
Image formation in digital holography is a computational
process rather than a physical process as it is in photog-
raphy. Photographic images detect the total energy of the
rays incident on sensor pixels. The direction of the rays
is lost in the photographic image, and therefore the rays
can not be propagated back to the source using computa-
tional techniques. Photographic lenses effectively perform
the computational operation which places the rays scattered
from a particular object on the same pixel. In order to
change this operation, the lens must be physically changed,
e.g. refocused or apertured. A hologram records the am-
plitude and phase of a coherent field. The phase informa-
tion includes the ray direction information lost in conven-
tional photographs. In digital holography systems, numer-
ical computations perform the image formation operations
that would be performed by a lens in a photographic im-
ager. A holographic computed image may be formed by
numerically simulating the diffraction integrals that model
the propagation of light through space. However, the flexi-
bility of a digital computer enables computed image forma-
tion methods on holographically recorded fields that may
not be physically realizable in optical hardware. This is po-
tentially a great advantage of holographic image formation
over photography.
In particular, digital holographic recording enables com-
pressive sensing techniques to be applied to the recorded
field. Compressive holography [5] is a new imaging tech-
nique that utilizes the inference methods of compressed
sensing [10, 8, 7] to enable accurate recovery of sparse objects using fewer measurements than would be required for
more generally specified objects. Sparsity is a constraint
on the reconstructed object that requires most of the coefficients to be exactly zero when the object properties are specified in a given linear basis. Because there are vastly fewer sparse objects than general objects, far less data is required to distinguish between the possible sparse objects. In particular, recovery of the object to a specified accuracy [15, 17] can be demonstrated when certain conditions are satisfied. Compressive holography exploits these results by enabling sparse objects, including three-dimensional sparse objects, to be reconstructed from holographically detected optical fields.
978-1-4244-7023-5/10/$26.00 ©2010 IEEE
In particular, compressive holography can surmount limitations of conventional holography that arise
from the two-dimensional nature of the coherent field. If
one considers a coherent field radiated or scattered from an
object, this field can be described by the amplitude of this
field on a surface containing the object. Due to Green’s
theorem, knowledge of the field on any surface containing
a source is sufficient to find the field on any other surface
containing the source. Therefore, nothing more is learned about the coherent field by measuring it throughout 3-D space. For general inference of 3-D
objects, there is a dimensionality mismatch between coher-
ent fields and their sources. However, by imposing sparsity
on the source, the number of possible sources can be greatly
reduced so that a 2-D coherent field can uniquely specify a
particular source. Compressive holography greatly extends
the utility of coherent imaging by enabling compressive in-
ference methods to be applied to the digital propagation of
coherent fields while overcoming limitations due to the two-
dimensional nature of the coherent field.
Recent studies have begun to re-evaluate the relative bal-
ance of incoherent and coherent imaging systems. Compu-
tational incoherent imaging systems have already exploited
the abundance of signal processing concepts and techniques
to improve on conventional focal-plane imaging cameras.
Techniques such as blind deconvolution [30, 6, 1, 29, 11]
are able to extract additional detail from focal plane images,
compensating for motion artifacts, focus error, and aliasing.
The point spread function can be manipulated [4, 19, 12]
to aid these computational techniques for improved per-
formance. Coherence imaging techniques [20, 22, 21, 25,
24, 14] utilize the statistical properties of partially coher-
ent fields to enable spectroscopy, infinite depth of field,
and computational refractive distortion correction. Digital
holography [28, 27, 23, 16] allows greatly increased flex-
ibility over photochemical holography because the optical
field can be captured, digitally propagated, and numerical
inference techniques can be used. Diffraction tomogra-
phy [31, 9] is a means of reconstructing transmissive ob-
jects using multiple holograms. Compressive holography
techniques extend these methods by enabling accurate re-
covery of sparse objects with fewer samples.
The present paper considers the limits of compressive
holography as compared to incoherent imaging techniques.
Specifically, we show that the signal to noise ratio of holo-
graphic photography is comparable to conventional focal
photography. When combined with adaptive illumination
and processing strategies, this result suggests that holo-
graphic systems may outcompete conventional photography
against certain metrics. In particular, compressive hologra-
phy offers a means to minimize the number of measure-
ments required to achieve a particular image quality. We
interpret the signal to noise ratio limits of holography in the
context of compressive sensing to understand how photon
noise influences the quality of reconstructed sparse images.
Finally, we demonstrate the acquisition of a hologram and
reconstruction of a sparse object by compressive hologra-
phy.
2. Noise comparison of noncompressive sampling
To be able to compare the performance of photographic
and holographic imaging, we present a model in which the
field scattered off of an object is detected coherently and in-
coherently. For simplicity, the object is modeled by a two-
dimensional refractive index distribution, or equivalently
a scattering amplitude n(r), with r being spatial position. A
plane wave is scattered from the planar object, and the scat-
tered light propagates through free space to a remote de-
tection plane. In the incoherent (photographic) case, the
object is imaged through a finite aperture imaging system
and forms a spatially bandlimited image on the focal plane.
The intensity of this bandlimited image is detected and used
to estimate a bandlimited version of the square of the scat-
tering amplitude n(r). In the coherent (holographic) case,
the field propagates to a detection plane through free space.
Because of the finite aperture on the detection plane, the
estimate of the scattering amplitude is also spatially ban-
dlimited.
To compare these approaches, the variance in the esti-
mate of refractive index is computed in both cases. The
fundamental limitation to the accuracy of intensity measure-
ments is photon noise, and so the two methods are compared
with the same average amount of detected energy scattered
from the sample.
To formulate the imaging model and determine the vari-
ance in the refractive index, we consider a plane-wave electric field with amplitude E0 and wave number k incident on a scatterer with complex scattering amplitude n(r). The
scattered field is given by
E′(r′) = ∫ d²r E0 n(r) H(r, r′, k)    (1)
where the operator H(r, r′, k) is a propagation operator
modeling the optical system the object is observed through.
For example in the case of free-space diffraction, the
diffraction kernel is
H(r, r′, k) = ∫ d²q T(q, k) exp(iz√(k² − q²)) exp(iq·(r − r′))    (2)
where T (q, k) is an operator representing the bandlimit
of free space or an intervening imaging system, q is the
two-dimensional transverse spatial frequency of the scat-
tered field, and z is the distance over which the field propa-
gates [3, 18, 13]. Typically T (q, k) = 1 for q2 < k2(NA)2
where NA is the numerical aperture of the imaging system
and T (q, k) = 0 otherwise. For a unit magnification tele-
centric imaging system (4-F system),
H(r, r′, k) = ∫ d²q T(q, k) exp(iq·(r − r′)),    (3)
which is identical to Eq. 2 with z = 0. The general conditions that
must be satisfied by the kernel H(r, r′, k) are given in the
Appendix.
In the first case, we consider directly imaging the scat-
terer onto a focal plane array. Using Eq. 3, the field at the
focal plane is given by
E′(r′) = E0 n(r′) (4)
The intensity of this electric field, I(r′) = (η/2)|E′(r′)|², is sensed at the focal plane, where η is the impedance of free space. The mean ⟨p̂(r′)⟩ and variance Var p̂(r′) of the number of detected photons in an area A over a time interval Δt are then given by
⟨p̂(r′)⟩ = Var p̂(r′) = (η E0² A Δt)/(2ħck) ⟨|n(r′)|²⟩    (5)
because photon detection is a Poisson process. To continue, we linearize the model as n(r) = n0 + ∆n(r), with n0 real and ∆n(r) zero mean, substitute, and neglect the term proportional to ∆n²:
⟨p̂(r′)⟩ = (η E0² A Δt)/(2ħck) (n0² + 2n0 Re{⟨∆n(r′)⟩}) = (η E0² n0² A Δt)/(2ħck)    (6)
The variances ⟨|∆n|²⟩ and Var p̂(r′) are related by
Var p̂(r′) = (η E0² n0² A Δt)/(2ħck) = (η E0² n0² A Δt/(ħck))² ⟨|∆n|²⟩    (7)
so that, in terms of the intensity I0 = (η/2)E0², the variance in the estimate is
⟨|∆n|²⟩ = ħck/(2η E0² n0² A Δt) = ħck/(4 I0 n0² A Δt)    (8)
On the other hand, we consider a holographic measure-
ment of the field E′(r′). Assuming the reference amplitude
is given by ER(r), the detected intensity is given by
I(r′) = (η/2)|ER + E′(r′)|²    (9)
To continue, we assume that ER ≫ |E′(r′)| and linearize Eq. 9:
I(r′) ≈ (η/2)[ER² + 2 Re{ER E′(r′)}]    (10)
If we assume that the electric field E′(r′) is zero mean, the mean and variance of the number of photons detected in an area A over a time interval Δt are given by
⟨p̂(r′)⟩ = Var p̂(r′) = IR A Δt/(ħck)    (11)
with IR = (η/2)ER². Therefore Var I(r′) = IR ħck/(A Δt) and Var Re{η ER E′(r′)} = Var I(r′), so that
Var E′(r′) = ħck/(2η A Δt)    (12)
Interestingly, this is the variance caused by a single photon
over the time interval. To relate n(r) and E′(r′), we con-
sider Eq. 1:
E′(r′) = ∫ d²r E0 n(r) H(r, r′, k)    (13)
Both sides are integrated against the conjugate kernel H(r′′, r′, k)*:
∫ d²r′ E′(r′) H(r′′, r′, k)* = ∫ d²r′ d²r E0 n(r) H(r, r′, k) H(r′′, r′, k)*    (14)
Assuming the kernel is unitary, this simplifies to
∫ d²r′ E′(r′) H(r′′, r′, k)* = E0 n(r′′)    (15)
Taking the expectation value of the squared magnitude yields
⟨|∫ d²r′ E′(r′) H(r′′, r′, k)*|²⟩ = E0² ⟨|n(r′′)|²⟩    (16)
This can be written as
⟨∫ d²r′ E′(r′) H(r′′, r′, k)* ∫ d²r E′(r)* H(r′′, r, k)⟩ = E0² ⟨|n(r′′)|²⟩    (17)
Because the noise source is photon noise, we can assume the noise in different samples of E′(r′) is independent, so that ⟨E′(r′) E′(r)*⟩ = ħck/(2η A Δt) δ⁽²⁾(r′ − r). Inserting this definition:
ħck/(2η A Δt) ∫ d²r′ ∫ d²r δ⁽²⁾(r′ − r) H(r′′, r′, k)* H(r′′, r, k) = E0² ⟨|n(r′′)|²⟩    (18)
Integration over r yields
ħck/(2η A Δt) ∫ d²r′ H(r′′, r′, k)* H(r′′, r′, k) = E0² ⟨|n(r′′)|²⟩    (19)
Using the finite property of the kernel,
∫ d²r′ H(r′′, r′, k) H(r′′, r′, k)* = 1    (20)
the expectation value ⟨|n(r′′)|²⟩ is
⟨|n(r′′)|²⟩ = ħck/(4 I0 A Δt)    (21)
Comparing Eq. 21 to Eq. 8, we see that the only difference is the n0⁻² factor in Eq. 8. The variance of the scattering
potential estimate for standard incoherent or photographic
imaging is related to and consistent with other estimates of
the photon noise estimates of photographic images. This
result is expressed as a variance of the scattering potential
so that it may be compared directly to the same quantity
derived for holographic imaging. Typically the scattering
potential of the imaged objects is not directly inferred in
photography as the detected photocount is considered the
desired end result of photography rather than an estimate of
the object properties. Holographic systems produce an intensity signal proportional to the scattering potential, while the incoherent signal is proportional to the potential squared. Therefore, for a small potential, a holographic system affords some
advantages in sensitivity. For a large potential, the reference
beam in the holographic case is a vanishingly small part of
the signal and the two systems both effectively are incoher-
ent. For this fairly general imaging task, both photography
and holography perform equally well. The signal process-
ing flexibility afforded by holographic field measurements
does not come at the expense of poorer noise performance.
3. Error bounds on compressive sampling
To analyze the performance of compressive sensing al-
gorithms applied to the holographic reconstruction of a re-
fractive index distribution, the holographically sensed field
must be transformed from continuous measurements to dis-
cretely sampled data. This is fundamentally because spar-
sity is defined as the number of nonzero elements in a
vector, so that the vector must be represented by a count-
able number of elements. Because holograms are usu-
ally recorded as discrete digital samples in a computational
holography system, it is natural for conditions such as spar-
sity to be imposed on the reconstructions from such sys-
tems.
To model the optical system as a discrete linear trans-
formation (a matrix), the continuous optical system trans-
formation H(r, r′, k) must be transformed into a discretely
sampled system. We consider the hologram created by Fres-
nel propagation from the scatterer such that
H(r, r′, k) = (−ik/2z) exp[−(ik/2z)|r′ − r|²]    (22)
To create a discrete version of the holographic transformation, we consider a scattering potential vector nw, sampled at positions rw. At the same time, we measure the remote field Em at positions r′m. The discrete transformation between these two vectors is given by
Em = Σw=1..N E0 nw (−ik/2z) exp[−(ik/2z)|r′m − rw|²]    (23)
If the rw and r′m are sampled at equally spaced intervals, this sum may be represented by a discrete convolution and rewritten as follows:
(Em/E0) exp[(ik/2z)|r′m|²] = Σw=1..N (−ik/2z) [nw exp[−(ik/2z)|rw|²]] exp[(ik/z) r′m·rw]    (24)
The discrete Fresnel transformation can be expressed as the
product of the potential with a unimodular quadratic phase,
a Fourier transform, and another unimodular quadratic
phase. The products with a unimodular phase do not af-
fect the sparsity of the vector. Therefore the product of the
potential n and the quadratic phase may be regarded as a
new vector with the same sparsity as the original. Like-
wise, the quadratic phase applied to Em is easily accounted
for. Therefore, the Fresnel transformation behaves effec-
tively the same as the Fourier transform to sparse vectors.
If the rw and r′m are sampled at equally spaced intervals and the vector n is padded with a sufficient number of zeros, the Fourier transform in Eq. 24 can be performed using a discrete Fourier transform (DFT). Note that zero padding also does not change the number of nonzero elements in n.
Because the DFT satisfies the restricted isometry hypothe-
sis (RIH), and we have established that the Fresnel trans-
form does not alter the sparsity of n, the Fresnel transform
also satisfies RIH. Therefore compressive sensing results
derived for the DFT likewise apply to the Fresnel transform.
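The chirp–FFT–chirp factorization described above can be sketched numerically. The grid pitch and propagation distance below are illustrative assumptions (the exact sampling convention relating FFT bins to the detector positions r′m is glossed over), but the sketch shows the key point: the unimodular chirps do not change which entries of the object vector are nonzero.

```python
import numpy as np

# Sketch of the factorization behind Eq. 24: a discrete Fresnel transform
# written as (quadratic phase) -> (FFT) -> (quadratic phase). The grid
# pitch and propagation distance are illustrative assumptions.
N = 256
lam = 633e-9                     # HeNe wavelength [m]
k = 2 * np.pi / lam
z = 0.05                         # propagation distance [m] (assumed)
dx = 10e-6                       # sample pitch [m] (assumed)

x = np.arange(N) * dx
chirp_obj = np.exp(-1j * k * x**2 / (2 * z))    # unimodular phase on n_w
chirp_det = np.exp(-1j * k * x**2 / (2 * z))    # unimodular phase on E_m

def fresnel_1d(n_vec):
    """Discrete 1-D Fresnel transform via a single FFT (up to the sampling
    convention relating FFT frequencies to detector positions)."""
    return (-1j * k / (2 * z)) * chirp_det * np.fft.fft(n_vec * chirp_obj)

# The unimodular chirp leaves the support of a sparse vector unchanged,
# which is why the Fresnel transform inherits the DFT's recovery behavior.
n_vec = np.zeros(N, dtype=complex)
n_vec[[40, 128, 200]] = 1.0                     # a 3-sparse test object
E = fresnel_1d(n_vec)
print(E.shape)                                  # (256,)
```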
To derive the amount of expected noise in the reconstruc-
tion, we apply a result of Candes et al. [7]. Instead of ex-
haustively sampling all of the points of the vector E, only
nE of the elements of E are sampled. Note that the chirp
that multiplies the elements of E can be accounted for by
multiplying the conjugate chirp into the data, and therefore
the chirp does not change the sparse sampling of the data.
The result used here is that if the following compressive
sensing problem is solved
min ‖n‖ℓ1 subject to ‖E − E0 F n‖ℓ2 ≤ ε    (25)
with F being the discrete Fresnel transform operator as
specified in Eq. 24 and ε being a least-squares error bound,
then the following bound on the error in n applies
‖n − n0‖ℓ2 ≤ C ε/E0    (26)
with ‖n − n0‖ℓ2 being the ℓ2 error between the noisy reconstruction n and the true object n0 uncorrupted by noise. The constant C is a scaling constant independent of ε. The scaling factors are based on the variances and serve to render the quantities unitless.
The error in E is not Gaussian, but follows a Poisson distribution. For large numbers of detected photons, which is typically the case for a reference beam in holography, the Poisson distribution can be approximated by a Gaussian distribution. Unfortunately, a Gaussian distribution has no hard error bound. To apply the results of Eq. 25 and Eq. 26, we therefore specify a probabilistic bound; for example, the inequality holds 95% of the time if the error is within two standard deviations of the mean. D standard deviations corresponds to ε = D √(ħck/(2η A Δt)). Inserted into Eq. 26,
‖n − n0‖ℓ2 ≤ C D √(ħck/(4 I0 A Δt))    (27)
with probability erf[D/√2], assuming a Gaussian distribution.
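The quoted confidence levels are the standard two-sided Gaussian probabilities, easy to confirm numerically:

```python
from math import erf, sqrt

# Probability that a Gaussian error lies within D standard deviations of
# the mean, erf(D / sqrt(2)), which sets the confidence of Eq. 27.
for D in (1, 2, 3):
    print(D, erf(D / sqrt(2)))
# D = 2 gives ~0.954, the "95% of the time" figure quoted above.
```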
Having established the probability the error is less than
a certain amount, we now establish the sampling conditions
under which this condition holds. Another result of Candes
et al. [8] stipulates a condition under which accurate recov-
ery of a sparse vector occurs when Fourier samples of this
vector are observed:
S ≤ C K/(log N)⁶    (28)
S is the maximum number of nonzero elements of the vec-
tor n such that accurate recovery occurs. K is the minimum
number of randomly selected observations of field points
Em required for accurate recovery. N is the total number
of elements in the vector n. We note that the elements of n which are nonzero are not known a priori; therefore, both the identity and the values of the nonzero elements are inferred by minimization of the ℓ1 term in Eq. 25.
Figure 1. Schematic of the digital holography interferometer used to acquire compressive holograms.
The constant C has been shown to be of the order of unity and is inferred from the restricted isometry hypothesis. This
hypothesis restricts the linear operator F such that all matri-
ces FT which are created by selecting any T ≤ S columns
of F must satisfy the following inequality
(1 − δS) ‖c‖²ℓ2 ≤ ‖FT c‖²ℓ2 ≤ (1 + δS) ‖c‖²ℓ2    (29)
Matrices F (such as the discrete Fourier transform matrix)
that satisfy this condition are such that any T randomly se-
lected columns from F are approximately orthonormal. The
constant δS indicates the degree to which this orthonormal-
ity is satisfied when there are S or fewer nonzero elements
in the vector n, with larger δS indicating a larger constant
C and therefore more data samples required for accurate
recovery.
Examples of values of this constant are C ≈ 8.87 for δ4S = 1/5 and C ≈ 10.47 for δ4S = 1/4. The constant δ4S is given by the condition under which inequality 29 holds when there are four times as many nonzero elements of n. In practice, this value of C is pessimistic, and recovery can occur for somewhat smaller values of C.
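The approximate orthonormality demanded by inequality 29 is easy to observe numerically for a randomly subsampled DFT; the sizes below are illustrative.

```python
import numpy as np

# Illustration of inequality 29: take K random rows of the N-point unitary
# DFT (the compressive measurements), select a small set of T columns,
# rescale by sqrt(N/K), and check the Gram matrix is near the identity.
rng = np.random.default_rng(1)
N, K, T = 1024, 256, 8                     # illustrative sizes

F = np.fft.fft(np.eye(N)) / np.sqrt(N)     # unitary DFT matrix
rows = rng.choice(N, size=K, replace=False)
cols = rng.choice(N, size=T, replace=False)
FT = F[np.ix_(rows, cols)] * np.sqrt(N / K)

gram = FT.conj().T @ FT                    # approximately the identity
delta = np.abs(gram - np.eye(T)).max()     # empirical isometry defect
print(delta)                               # well below 1
```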
4. Experimental demonstration of compressive holography
To explore the potential of compressive holography, we
acquire a hologram of a three-dimensional object and recon-
struct the object using sparsity constraints. We show that
sparsity can significantly improve the quality of the recon-
struction and reduce noise and defocus artifacts. Because
the hologram is two-dimensional but the object is three-
dimensional, very little Fourier data is available. Despite
this limitation, compressive holography is able to produce a
quality reconstruction because sparsity is enforced.
To understand how coherent imaging sparsely samples
the Fourier space, we compare the band volume of incoher-
ent imaging with the band surface of coherent imaging. The
band volume [20] is the region of the three-dimensional Fourier space of an incoherent radiator that is sampled in a finite aperture, and is denoted by the blue area of Fig. 2.
Figure 2. Band volume of incoherent imaging as compared to coherent imaging. The blue area is the band volume of incoherent imaging. The red curve is the band surface of coherent imaging. The diagonal lines are the bandwidth limits due to the finite numerical aperture. In three dimensions, the volume is rotationally symmetric around the kz axis.
A
three-dimensional subset of spatial frequencies of an inco-
herent source can be sampled. On the other hand, only a
two-dimensional subset of spatial frequencies can be sam-
pled from a coherent source which is denoted by the solid
red circular arc. From this two-dimensional subset of spa-
tial frequencies the three-dimensional scattering potential of
the object is inferred. Sparsity ameliorates the dimensional
mismatch so that 3-D structure can be obtained from these
2-D measurements. In three dimensions, the band volume
and surfaces are revolved around the z axis.
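The dimensional mismatch can be made concrete with a short numerical sketch; the numerical aperture below is an illustrative assumption.

```python
import numpy as np

# The coherent band surface of Fig. 2: a single-wavelength hologram samples
# spatial frequencies on the spherical cap kz = sqrt(k^2 - kx^2) out to the
# NA limit, while incoherent imaging fills a 3-D band volume. The NA is an
# illustrative assumption.
lam, NA = 633e-9, 0.2
k = 2 * np.pi / lam

kx = np.linspace(-NA * k, NA * k, 101)     # transverse frequencies within NA
kz = np.sqrt(k**2 - kx**2)                 # the 2-D cap in 3-D Fourier space

# The cap spans only a thin range of axial frequencies (~ NA^2/2 of k),
# so depth structure is not measured directly and must be inferred.
frac = (kz.max() - kz.min()) / k
print(frac)                                # about 0.02 for NA = 0.2
```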
The setup used to acquire the hologram is detailed in
Fig. 1. A Helium Neon laser emits coherent light at 633
nm, which is divided into reference and signal beams by a
beam splitter. The reference beam is magnified by a tele-
scope and spatially filtered at a pinhole. The signal beam
is expanded by a lens and scatters from an object consist-
ing of two toys, a polar bear and a dinosaur, separated in
depth. The light scatters from the object and the reference
and signal beams are recombined at a second beam split-
ter. A hologram is formed from the interference between
the reference and signal beams and is sampled by a digital
focal plane array.
To process the data, the TwIST algorithm [2] was used to minimize the following functional:
L = τ ‖∇n‖ℓ1 + ‖E − E0 F n‖ℓ2    (30)
The regularization term ‖∇n‖ℓ1 is a total variation term [26] that minimizes the number of discontinuities in
the reconstruction. Because the objects being reconstructed
have areas of constant reflectivity, the total variation reg-
ularization tends to group energy into constant reflectivity
areas by minimizing discontinuities.
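To illustrate why a total variation term favors piecewise-constant reconstructions, here is a toy 1-D example using plain gradient descent on a smoothed TV penalty. It is a stand-in for TwIST, not the paper's solver, and all parameters are illustrative assumptions.

```python
import numpy as np

# Toy 1-D illustration of the TV term in Eq. 30: gradient descent on
# tau*sum(sqrt(dn^2 + eps^2)) + 0.5*||y - n||^2 (a smoothed total
# variation) pulls a noisy piecewise-constant signal back toward flat
# segments. A stand-in for TwIST; parameters are illustrative.
rng = np.random.default_rng(3)
true = np.concatenate([np.zeros(40), np.ones(40), np.zeros(40)])
y = true + 0.2 * rng.standard_normal(true.size)

n = y.copy()
tau, eps, step = 0.5, 0.05, 0.02
for _ in range(3000):
    d = np.diff(n)
    w = d / np.sqrt(d**2 + eps**2)          # smoothed sign of each jump
    tv_grad = np.concatenate([[0.0], w]) - np.concatenate([w, [0.0]])
    n -= step * (tau * tv_grad + (n - y))

# The TV estimate is closer to the true piecewise-constant signal than
# the raw noisy data.
print(np.abs(n - true).mean(), np.abs(y - true).mean())
```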
The reconstructed scattering potential n(r) using the
total variation sparsity constraint algorithm is presented
Figure 3. Five reconstructed planes of the three-dimensional ob-
ject (a-e), and a section of the measured hologram (f) to show the
interference fringes.
in Fig. 3 as reconstructed two-dimensional images. Five
planes with different z are computed using the diffraction
integral of Eq. 24. The two planes (b) and (d) correspond to
the reconstructions of the polar bear and dinosaur using to-
tal variation regularization. Plane (a) is the diffracted holo-
gram with z closer to the hologram plane than either the
polar bear or the dinosaur. Plane (c) is the diffracted holo-
gram with z between the ranges of the polar bear and di-
nosaur. Finally plane (e) corresponds to z further away from
the hologram plane than either the polar beam and dinosaur.
Part (f) is an example of the interference fringes of the holo-
gram to show that the imaging is computational rather than
incoherent. Total variation produces the sharpest and high-
est amplitude reconstructions in the true planes where the
two objects reside. It is this localization in three-dimensions
that is one of the advantages of compressive holography.
Compressive holography employs sparsity constraints to
infer three-dimensional structure from a two-dimensional
subset of the Fourier space. It takes advantage of holography because the diffraction integral can be computed numerically, which is not possible with purely incoherent detection. For this reason, holography combined with compressive sensing can be competitive with incoherent imaging in some imaging configurations.
5. Appendix
For the sake of completeness, the conditions are spec-
ified for the kernels for which the noise analysis applies.
These kernels H(r, r′′, k) satisfy the limited unitary condi-
tion
∫ d²r d²r′ d²r′′ H(r, r′′, k) H(r′, r′′, k)* A(r) B(r′)* = ∫ d²r A(r) B(r)*    (31)
for all A(r) and B(r), each limited to a respective subspace of functions. In practice, this subspace is
given by the spatial frequencies up to the bandlimit of the
optical system represented by the kernel H(r, r′′, k). For
finite, energy preserving kernels we also stipulate
∫ d²r′′ |H(r0, r′′, k)|² = 1    (32)
If the delta function B(r) = δ(2)(r − r0) is inserted into
Eq. 31, the following is implied
∫ d²r d²r′′ H(r, r′′, k) H(r0, r′′, k)* A(r) = A(r0)    (33)
with A(r) limited to its subspace of functions.
References
[1] G. R. Ayers and J. C. Dainty. Iterative blind de-
convolution method and its applications. Opt. Lett.,
13(7):547–549, 1988.
[2] J. M. Bioucas-Dias and M. A. T. Figueiredo. A new
twist: two-step iterative shrinkage/thresholding algo-
rithms for image restoration. IEEE Trans. Image Proc., 16(12):2992–3004, Dec. 2007.
[3] M. Born and E. Wolf. Principles of Optics. Cambridge
University Press, Cambridge, UK, 1980.
[4] S. Bradburn, W. T. Cathey, and E. R. Dowski, Jr. Real-
izations of focus invariance in optical-digital systems
with wave-front coding. Appl. Opt., 36(35):9157–
9166, 1997.
[5] D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and
S. Lim. Compressive holography. Opt. Express,
17:13040–13049, 2009.
[6] M. M. Bronstein, A. M. Bronstein, M. Zibulevsky, and
Y. Y. Zeevi. Blind deconvolution of images using opti-
mal sparse representations. IEEE Trans. Image Proc.,
14(6):726–736, 2005.
[7] E. J. Candes, J. K. Romberg, and T. Tao. Stable
signal recovery from incomplete and inaccurate mea-
surements. Comm. Pure Appl. Math., 59:1207–1223,
2006.
[8] E. J. Candes and T. Tao. Near-optimal signal recovery
from random projections: Universal encoding strate-
gies? IEEE Trans. Inf. Theory, 52(12):5406–5426,
2006.
[9] A. J. Devaney. A filtered backpropagation algorithm
for diffraction tomography. Ultrason. Imaging, 4:336–
350, 1982.
[10] D. L. Donoho. Compressed sensing. IEEE Trans. Inf.
Theory, 52(4):1289–1306, 2006.
[11] S. Farsiu, M. D. Robinson, M. Elad, and P. Milan-
far. Fast and robust multiframe super resolution. IEEE
Trans. Image Proc., 13(10):1327–1344, 2004.
[12] N. George and W. Chi. Extended depth of field using
a logarithmic asphere. J. Opt. A, 5:5157–5163, 2003.
[13] J. Goodman. Introduction to Fourier Optics. The
McGraw-Hill Companies, New York, 1968.
[14] K. Itoh and Y. Ohtsuka. Fourier-transform spectral
imaging: retrieval of source information from three-
dimensional spatial coherence. J. Opt. Soc. Am. A,
3:94–100, 1986.
[15] A. Juditsky and A. S. Nemirovski. On verifiable sufficient conditions for sparse signal recovery via L1 optimization. Available at http://www.citebase.org/abstract?id=oai:arXiv.org:0809.2650, 2008.
[16] J. Kuhn, T. Colomb, F. Montfort, F. Charriere,
Y. Emery, E. Cuche, P. Marquet, and C. Depeursinge.
Real-time dual-wavelength digital holographic mi-
croscopy with a single hologram acquisition. Opt.
Expr., 15(12):7231–7242, 2007.
[17] K. Lee and Y. Bresler. Computing performance guarantees for compressed sensing. In International Conference on Acoustics, Speech and Signal Processing, 2008, pages 5129–5132, Apr. 2008.
[18] L. Mandel and E. Wolf. Optical Coherence and Quan-
tum Optics. Cambridge University Press, Cambridge,
1995.
[19] D. L. Marks, R. Stack, D. J. Brady, and J. van der
Gracht. 3d tomography using a cubic phase plate ex-
tended depth of field system. Opt. Lett., 24:253–255,
1999.
[20] D. L. Marks, R. A. Stack, and D. J. Brady. 3d co-
herence imaging in the fresnel domain. Appl. Opt.,
38:1332–1343, 1999.
[21] D. L. Marks, R. A. Stack, and D. J. Brady. Astig-
matic coherence sensor for digital imaging. Opt. Lett.,
25(23):1726–1728, 2000.
[22] D. L. Marks, R. A. Stack, D. J. Brady, D. Munson,
and R. B. Brady. Visible cone-beam tomography with
a lensless interferometric camera. Science, 284:2164–
2166, 1999.
[23] M. Paturzo, F. Merola, S. Grilli, S. De Nicola,
A. Finizio, and P. Ferraro. Super-resolution in digital
holography by two-dimensional dynamic phase grat-
ing. Opt. Expr., 16(21):17107–17118, 2008.
[24] F. Roddier. Interferometric imaging in optical astron-
omy. Physics Reports, 17:97–166, 1988.
[25] F. Roddier, C. Roddier, and J. Demarcq. A rotation
shearing interferometer with phase-compensated roof
prisms. J. Opt. (Paris), 11:149–152, 1978.
[26] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total
variation based noise removal algorithms. Physica D,
60(1–4):259–268, 1992.
[27] U. Schnars and W. Jueptner. Digital recording and re-
construction of holograms in hologram interferometry
and shearography. Appl. Opt., 33:4373–4377, 1994.
[28] U. Schnars and W. Jueptner. Digital holography.
Springer Verlag, New York, 2005.
[29] D. G. Sheppard, B. R. Hunt, and M. W. Marcellin.
Iterative multiframe superresolution algorithms for
atmospheric-turbulence-degraded imagery. J. Opt.
Soc. Am. A, 15(4):978–992, 1998.
[30] F. Sroubek, G. Cristobal, and J. Flusser. A unified ap-
proach to superresolution and multichannel blind de-
convolution. IEEE Trans. Image Proc., 16(9):2322–
2332, 2007.
[31] E. Wolf. Three-dimensional structure determination of
semi-transparent objects from holographic data. Opt.
Commun., 1:153–156, 1969.