
High-resolution ab initio three-dimensional x-ray diffraction microscopy

Henry N. Chapman, Anton Barty, Stefano Marchesini, Aleksandr Noy, and Stefan P. Hau-Riege

University of California, Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, California 94550

Congwu Cui, Malcolm R. Howells, and Rachel Rosen

Advanced Light Source, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720

Haifeng He, John C. H. Spence, and Uwe Weierstall

Department of Physics and Astronomy, Arizona State University, Tempe, Arizona 85287-1504

Tobias Beetz, Chris Jacobsen, and David Shapiro

Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794-3800

Received August 26, 2005; revised October 14, 2005; accepted October 18, 2005; posted October 25, 2005 (Doc. ID 64418)

Coherent x-ray diffraction microscopy is a method of imaging nonperiodic isolated objects at resolutions limited, in principle, by only the wavelength and largest scattering angles recorded. We demonstrate x-ray diffraction imaging with high resolution in all three dimensions, as determined by a quantitative analysis of the reconstructed volume images. These images are retrieved from the three-dimensional diffraction data using no a priori knowledge about the shape or composition of the object, which has never before been demonstrated on a nonperiodic object. We also construct two-dimensional images of thick objects with greatly increased depth of focus (without loss of transverse spatial resolution). These methods can be used to image biological and materials science samples at high resolution with x-ray undulator radiation and establish the techniques to be used in atomic-resolution ultrafast imaging at x-ray free-electron laser sources. © 2006 Optical Society of America

OCIS codes: 340.7460, 110.1650, 110.6880, 100.5070, 100.6890, 070.2590, 180.6900.

1. INTRODUCTION

In many fields of science the ability to visualize the three-dimensional (3D) organization of component parts is proving crucial to our understanding of the mechanisms involved in atomic and molecular processes. This is occurring in fields as diverse as whole-cell imaging in biology, the study of the minimum energy pathway for crack propagation in brittle solids, and the internal structure of the new labyrinthine mesoporous structures developed by inorganic chemists for a wide range of applications.

The field of coherent x-ray diffraction imaging (also known as diffraction microscopy) is expected to make a significant contribution to this effort. In this method, first put forward and developed by Sayre et al.,1,2 an image is reconstructed from measurements of the far-field scattered intensity of an isolated and nonperiodic object. The resolution of this form of microscopy is limited by only the wavelength and the largest scattering angle recorded. Hence this method is being pursued as a method for high-resolution x-ray microscopy without the technological limitations of manufacturing high-resolution optical elements.3–8 The penetrating nature of x rays allows imaging of objects much thicker than those that can be examined in a transmission electron microscope (e.g., 10 µm), at resolutions much better than those of visible microscopes. Preliminary studies of radiation damage suggest that 3D resolutions of about 10 nm should be achievable on frozen hydrated biological material.9 The method is also being pursued in order to push x-ray imaging to its resolution limits, namely ultrafast near-atomic-resolution imaging of macromolecules at x-ray free-electron laser sources10,11 and of laser-aligned molecules,12 which will enable structure determination without the need for crystallizing material.

High-resolution imaging of thick objects can be attained only in the context of 3D measurement and reconstruction. In most cases, other than surface studies or imaging of man-made objects, the analysis of the structure can be properly interpreted only in three dimensions. Unless the object itself is a slice of material that is thinner than the depth of focus of a two-dimensional (2D) image, artifact-free structural analysis can be carried out only with knowledge of the surrounding material, or by applying imaging modalities whereby depth information is not strongly transferred to the image (such as confocal imaging). At resolution lengths very much larger than the wavelength, thickness effects do not play a significant role, since, at the correspondingly low numerical aperture (NA), the depth of focus may be much larger than the size of the object. This is certainly the case as one satisfies the projection approximation of high-energy x-ray tomography, where the depth of focus approaches infinity.


Tomographic imaging in this mode is limited by detector pixel size or, if a diverging beam is used, by Fresnel diffraction effects. However, as one moves to higher resolution, the depth of focus decreases dramatically, with the ratio of transverse resolution length to longitudinal depth of focus given by the NA. For the classes of high-resolution microscopy and structure determination applications in which we are interested, it follows that imaging can properly be carried out only in three dimensions.

Coherent 3D x-ray diffraction imaging is especially suited to high-resolution x-ray microscopy. With a collimated beam incident on an object, the far-field diffraction pattern (recorded on a flat CCD) represents diffracted intensities, which, in reciprocal space, sample points on the momentum- and energy-conserving Ewald sphere. By rotating the sample about an axis normal to the beam, this sphere, which passes through the origin, sweeps through almost all of the reciprocal-space volume of continuous diffuse scattering from our nonperiodic object. In this way we collect the 3D distribution of scattered intensity in reciprocal space, which is phased by using the 3D implementations of iterative methods, as discussed below. Once the phases of the diffraction intensities in the diffraction volume have been determined, the 3D Fourier transform of the object is known and the 3D image can be obtained simply by an inverse Fourier transform. As will be demonstrated in this paper, such data sets can be used for artifact-free analysis of structures. This is also the case for crystallography but is not generally the case for imaging with a lens. Partially coherent tomographic imaging techniques, such as tomography in the scanning transmission x-ray microscope13 or transmission x-ray microscope,14,15 lead to a complicated transfer of object spatial frequencies into the measured image, and there is no longer a simple one-to-one mapping of a measurement on a detector pixel, for example, to a spatial frequency of the object. For some classes of object, such as pure phase or amplitude objects, it may be possible to deconvolve the 3D transfer function, but this is not generally assured.16

As with coherent diffraction imaging and crystallography, coherent imaging with a lens also leads to a direct mapping of spatial frequencies in the object to spatial frequencies of the image. Again, a tomographic reconstruction from coherent 2D images can be easily achieved for pure amplitude or phase objects but would otherwise require knowing the phase and amplitude of the image in order to transform into 3D reciprocal space. Coherent diffraction imaging essentially attempts to emulate coherent lens-based imaging, using a computer algorithm in place of a lens. The advantage, for tomography of complex objects, is that the diffraction amplitudes are measured and the phases retrieved from the oversampling of those amplitudes, so that a direct 3D Fourier synthesis of the object can be achieved.

In this paper we perform an important demonstration of the feasibility of high-resolution diffraction microscopy required for biological and materials characterization, as well as single-molecule imaging. Significantly this is done without the use of detailed a priori information about the sample structure or low-resolution data obtained by other means. We also demonstrate that a full 3D reconstruction can be produced on a 1024³ or larger data cube in a reasonable amount of time using currently available computational hardware.

Three significant recent developments have enabled us to perform full 3D image reconstructions with high resolution in all three dimensions. The commissioning of a new diffraction tomography apparatus17 by Stony Brook University at an undulator beamline18 of the Advanced Light Source (ALS) allows us to acquire diffraction patterns at over 100 orientations of an object, with short exposure times, over angular ranges of more than ±70°. The Shrinkwrap phase-retrieval algorithm that we developed6 has proven to be extremely robust and effective in performing phase retrieval on diffraction data sets with missing data (e.g., due to a beam stop) or limited angles. The algorithm retrieves images from the measured diffraction intensities. It does not require additional information about the object, such as a low-resolution image, and can retrieve phases of general complex-valued objects. In direct methods of crystallography the term ab initio is used to mean “from the experimental data, without any supplemental prior information,”19 and we employ this term to describe our method. The third advance is the ability to perform 3D fast Fourier transforms (FFTs) on the large 1024³-element arrays of diffraction data that are assembled from our measurements. Although the familiar increase of computer processing power has brought gigaelement FFTs in reach of today’s computers, it has been the development of computer clusters and specific software for distributed computation of FFTs that has made feasible the 3D implementation of the Shrinkwrap algorithm.

In particular, we utilize the dist_fft software20 on a 16-node cluster of dual-processor Apple G5 Xserves, giving us a performance of 8.6 s per 1024³-element single-precision complex FFT. We note that this computational advance should also benefit the field of diffraction tomography (Ref. 21, Sect. 13.2), in which both the phase and amplitude of the scattered field are measured, as is possible with scattered ultrasonic waves.
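The following minimal NumPy sketch (not the authors' dist_fft cluster code) gives a feel for the single-node FFT workload; the 256³ cube size is an assumption chosen to fit in ordinary memory, since a 1024³ single-precision complex array alone occupies 8 GB.

```python
import time
import numpy as np

# Single-node illustration of the 3D FFT workload; the paper's reconstructions
# distribute 1024^3 FFTs over a 16-node cluster, which is not reproduced here.
n = 256  # assumed cube size for illustration
g = (np.random.rand(n, n, n) + 1j * np.random.rand(n, n, n)).astype(np.complex64)

t0 = time.perf_counter()
G = np.fft.fftn(g)
print(f"{n}^3 complex FFT took {time.perf_counter() - t0:.2f} s")
```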

We present here experimental results of high-resolution 3D x-ray diffraction imaging of a well-characterized test object to demonstrate the practical application of these advances and quantitatively assess the technique. We show the first full 3D x-ray diffraction images that have been reconstructed without prior knowledge of the sample. We believe that these are the highest-resolution 3D x-ray images of noncrystalline objects ever achieved, with a demonstrable resolution volume of 10 nm × 10 nm × 40 nm. We demonstrate that artifact-free 2D images can be created from the 3D diffraction data set of objects very much thicker than the depth of focus. In Section 2 we review diffraction imaging, the experimental requirements for 3D image reconstructions, and our computer implementation to perform the 3D phase retrieval and Fourier synthesis of the image. Our sample preparation and characterization techniques are discussed in Subsection 3.A, and our particular experimental setup and methods are described in Subsections 3.B and 3.C. Image reconstruction results are presented in Section 4. The 3D images are visualized as isosurface renderings, extremely large depth-of-focus projection images, maximum value projections, and tomographic slices through the object.


We also compare artifact-free 2D projections of 3D data with reconstructions of individual 2D views and illustrate the artifacts present in single-view 2D images of thick objects. In Section 5 we quantitatively assess our 3D image resolution.

2. THREE-DIMENSIONAL COHERENT DIFFRACTION IMAGING

The incident x-ray wave field interacts with a 3D periodic or nonperiodic object through the scattering potential of the object, o(x) = r_e ρ(x), where ρ(x) is the complex electron density and r_e is the classical electron radius. This object scattering function may be decomposed into a Fourier representation of 3D spatial frequencies u, with complex amplitudes

O(u) = F[o(x)] ≡ ∫ o(x) exp(2πi u·x) dx,   (1)

in which each spatial frequency can be thought of as a volume grating. In the case of coherent diffraction imaging a plane wave with wave vector k_in is incident on the object, and the intensity of the scattered field in the direction of the wave vector k_out is measured on a 2D pixellated detector (e.g., a bare CCD) in the diffraction far field. This detector is typically centered on the forward direction but in principle could be oriented at any angle to the incident beam (see Fig. 1). For elastic scattering, only the volume gratings that satisfy Bragg’s law will scatter, and the wave-vector transfer q = k_out − k_in will be equal to the grating spatial frequency; that is, q = u. Since the magnitudes |k_out| and |k_in| are constant and equal to 1/λ, these spatial frequencies u lie on the Ewald sphere of radius 1/λ,22,23 where λ is the x-ray wavelength. This construction is equivalent to the condition that to scatter light by an angle 2θ from the forward direction (the z axis), the volume grating must be tilted by an angle θ from perpendicular to the forward direction (Bragg’s law). With the convention used here we have |q| = q = (2/λ) sin θ. The diffraction amplitudes in the direction k_out are proportional to O(q), and in diffraction imaging we measure the intensities, proportional to |O(q)|². In particular, in the Born approximation (which can be thought of in this context as single scattering), the number of photons per second measured in a CCD pixel, with solid angle Ω, is given by

I(q; Ω) = I_0 Ω P |O(q)|²,   (2)

where I_0 is the flux (photons per second per unit area) of the incident plane wave on the sample and P is the polarization factor; P = (1 + cos² ψ)/2 for unpolarized light, with ψ = 2θ.22
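As an illustration of Eq. (2), the following Python sketch evaluates the far-field intensity of a toy object sampled on a Cartesian grid; the object, incident flux, pixel solid angle, and scattering angle are all assumed values, and the amplitudes are left in arbitrary units (no r_e ρ scaling or voxel-volume factors).

```python
import numpy as np

# Sketch of Eq. (2), I(q; Omega) = I_0 * Omega * P * |O(q)|^2, in the Born
# approximation, for an illustrative object (a uniform ball).
wavelength = 1.65e-9          # m (750 eV, the energy used later in the paper)
I0 = 1.0e12                   # incident flux, photons / s / m^2 (assumed)
omega = (20e-6 / 0.142)**2    # solid angle of one 20-um pixel at 142 mm (sr)

n = 64
x = np.arange(n) - n // 2
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
o = (X**2 + Y**2 + Z**2 < 10**2).astype(complex)      # toy scattering potential

O = np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(o)))  # O(u) = F[o(x)], arbitrary units
psi = 0.1                                              # scattering angle 2*theta (rad), single value for simplicity
P = (1 + np.cos(psi)**2) / 2                           # unpolarized polarization factor
I = I0 * omega * P * np.abs(O)**2                      # photons/s per pixel, up to the omitted scaling
```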

The complex scattering potential o(x) that we aim to recover from measurements of I(q) is related to the complex refractive index n(x) of the object by (Ref. 21, Sect. 13.1)24

o(x) = r_e ρ(x) = (π/λ²)[1 − n²(x)].   (3)

In the soft x-ray region the complex refractive index is usually written in terms of the optical constants as n(x) = 1 − δ(x) − iβ(x). For optical constants much less than unity, which is generally the case for soft x rays, Eq. (3) can then be well approximated by

o(x) ≈ (2π/λ²)[δ(x) + iβ(x)] = (2π/λ²) Δn(x).   (4)

The limit of validity of Eq. (2) under the Born approximation requires that D|Δn(x)| ⩽ 2πλC, where D is the thickness of the object and C ≈ 0.2.25
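A minimal sketch of Eq. (4) and of the Born-validity estimate as reconstructed above, assuming illustrative soft-x-ray optical constants and object thickness:

```python
import numpy as np

# Scattering potential from assumed optical constants, and the Born-validity check.
wavelength = 1.65e-9        # m
delta, beta = 1e-3, 2e-4    # assumed optical constants, n = 1 - delta - i*beta
D = 1.0e-6                  # assumed object thickness, m

o = (2 * np.pi / wavelength**2) * (delta + 1j * beta)   # Eq. (4)
dn = abs(delta + 1j * beta)                             # |Delta n|
C = 0.2
print("Born approximation valid:", D * dn <= 2 * np.pi * wavelength * C)
```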

A. Experimental Requirements

The recovery of the 3D image o(x) from O(u) requires the phases of O(u) to be recovered and combined with the square root of the measured intensities. Both phase retrieval and image recovery place requirements on the 3D sampling of the diffraction intensities. Image recovery requires that the object be adequately sampled in real space to resolve the finest desired feature size over a given field of view. The requirements of the phase retrieval step are more demanding, in particular because our phase retrieval algorithm has the additional task of overcoming gaps and missing regions in the 3D sampled data by performing the interpolation tasks of recovering intensities that were blocked by a beam stop or that were missed due to a limited angular range of measurements. The 3D image recovery requires reconstructing the complex amplitudes O(u) throughout a volume of reciprocal space. Since a single diffraction pattern is limited to frequencies u = q on the Ewald sphere, diffraction data must be collected for various orientations of the sample.

In this work we perform phase retrieval and image recovery by full 3D Fourier synthesis, which requires interpolating the measured intensities from the Ewald sphere onto a uniform 3D Cartesian grid. In reciprocal space the grid has a width of N samples, spaced by Δu, and is centered at the zero spatial frequency. For N even, the spatial frequencies along each grid axis run from −(N/2 − 1)Δu to (N/2)Δu. In real space we characterize the grid by a spacing Δx and a field width w = NΔx. Since Δx Δu = 1/N, we have the relationship Δu = 1/w; thus the largest spatial frequency component along a grid axis is given by u_x,max = NΔu/2 = 1/(2Δx).
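The grid bookkeeping can be summarized in a few lines; the values of N and Δx below are assumptions for illustration only.

```python
# Reciprocal- and real-space grid relations of Subsection 2.A.
N = 1024          # grid width in samples (assumed)
dx = 9.8e-9       # real-space sampling interval, m (assumed)
w = N * dx        # real-space field width
du = 1.0 / w      # reciprocal-space sample spacing, since dx * du = 1/N
u_max = N * du / 2                                 # largest spatial frequency on an axis
assert abs(u_max - 1 / (2 * dx)) < 1e-6 * u_max    # u_max = 1/(2*dx)
```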

From Eqs. (1) and (2) the inverse Fourier transform of the intensity diffraction pattern is proportional to the autocorrelation function of the image that would be recovered when the phases are known:

i(x) = F⁻¹[I(q)] ∝ o(x) ⊗ o*(−x),   (5)

where ⊗ denotes the convolution operator. Consider an object of finite extent with a maximum width D along any axis of the real-space grid. The autocorrelation image i(x) in that direction has a maximum width of 2D, and hence the diffraction intensities are band limited. That is, the smallest grid spacing required to record all information present in the diffraction intensities is Δu = 1/(2D), the Shannon critical sampling frequency, and the function can be recovered everywhere by a sinc-series expansion of the measured samples.26 We define the sampling ratio s per dimension with w = sD (Shannon sampling of intensities occurs for s = 2). The oversampling of data relative to the sampling of Δu = 1/D is what enables the phase retrieval techniques to be employed. In practice we may collect data on a finer grid than that strictly required to completely specify the diffraction intensities, as a way to increase detector dynamic range (sampling with s > 2 does not increase the information content of the measurement). However, it is possible to successfully retrieve phases from measurements with less than this factor of 2 in each dimension.27 The minimum required sampling ratio has been shown by simulation to be √2 for 2D images and ∛2 for 3D images.28

Fig. 1. Scattering geometry for coherent x-ray diffraction imaging. The sample is rotated about the y axis by an angle φ.

The CCD must be placed so that it intersects a large enough range of scattering angles up to the desired spatial resolution. Usually the CCD is far enough from the sample to be in the diffraction far field, in which the angularly resolved diffraction pattern does not vary with propagation distance. For an object of width D the far field exists beyond distances of z_F = 2D²/λ from the object.29 For a detector with pixels of width p placed a distance z_D from the object, we have, for small scattering angles, Δq = p/(z_D λ). That is, to sample a field width of w = sD, the detector must be placed a distance of z_D = spD/λ. This will be in the far field if z_D ⩾ z_F, which can be satisfied if D ⩽ sp/2, or the condition that the sample must be smaller than the CCD pixel spacing when s = 2. If the CCD is closer to the sample than z_F, then the sample and diffraction planes are related by a Fresnel, rather than a Fourier, transform, and the reconstruction algorithms must be appropriately modified.

Experimental requirements are placed on the transverse and longitudinal coherence of the incident beam. The transverse spatial coherence length of the incident beam must be at least as large as the entire field width w = sD.30 The effect of partial coherence may be modeled as an incoherent source located some distance from the sample, whereby the diffraction intensity pattern is convolved with a demagnified intensity image of the source. In real space this convolution modulates the autocorrelation of the object with an envelope function that is proportional to the modulus of the transform of the source function. By the Van Cittert–Zernike theorem21 this envelope function is the mutual coherence of the source. The measured diffraction intensity is also convolved with the pixel response function of the detector, which modulates the autocorrelation image with an envelope proportional to the modulation transfer function of the detector. The spectral bandwidth Δλ/λ of the incident light should be narrower than 2/N, since we require Δq/q = Δλ/(2λ) so that the range of angles diffracted from a single spatial frequency by a range of wavelengths spreads by no more than half a detector pixel. This is equivalent to a minimum required longitudinal coherence length of w q_max λ = 2w sin θ, which will be the maximum path length for light scattering by 2θ to the edge of the detector from points spaced transversely by w, or by the same angle from points spaced longitudinally by w/(tan 2θ).
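The following sketch checks these geometry, sampling, and coherence requirements for assumed values roughly matching the experiment described later in Section 3.

```python
# Checks of the Subsection 2.A requirements (all numbers are illustrative).
wavelength = 1.65e-9      # m
D = 2.5e-6                # object width, m
p = 20e-6                 # detector pixel width, m
s = 2.0                   # desired sampling ratio
N = 1024                  # array width

z_F = 2 * D**2 / wavelength          # far-field distance
z_D = s * p * D / wavelength         # detector distance giving field width w = s*D
dq = p / (z_D * wavelength)          # reciprocal-space pixel size at that distance
print(f"z_F = {z_F*100:.2f} cm, z_D = {z_D*100:.2f} cm, far field: {z_D >= z_F}")
print(f"equivalent condition D <= s*p/2: {D <= s * p / 2}")

w = s * D                            # field width the coherence must cover
max_bandwidth = 2.0 / N              # required spectral bandwidth dlambda/lambda < 2/N
print(f"transverse coherence length >= {w*1e6:.1f} um, "
      f"bandwidth < {max_bandwidth:.1e}")
```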

In our experiments we rotate the sample about an axis perpendicular to the incident beam direction to build up the 3D data set. At the highest spatial frequencies recorded, an angular increment of Δφ leads to a spacing between the Ewald sphere surfaces of Δq = q_max Δφ. That is, the Crowther resolution31 matches the critical sampling of the diffraction intensities (s = 2) when

Δφ = Δq/q_max = Δx/D,   (6)

with the real-space sampling interval Δx. Note that this angular increment leads to a higher than necessary sampling at the lower spatial frequencies. For the examples in this paper we collected diffraction data with angular increments that are 2–4 times larger than those given by Eq. (6). In the process of phase retrieval we additionally recover both the amplitudes and phases of the missing data between the Ewald surfaces, including those in a large gap resulting from a limited range (usually ±70°) of rotation angles, data blocked by a beam stop, and the missing “cone” of data resulting from rotating the sample about a single axis. This amplitude and phase retrieval of the missing data is essentially a superresolution technique and is achieved with the same real-space image constraints that we use for phase retrieval.32 Recovery of unmeasured data behind a beam stop has been demonstrated previously in 2D coherent x-ray diffraction imaging using this technique,6,33 and data missing due to a limited number of views have been recovered in the context of computed tomography using iterative algorithms34 similar to those that we use (described in Subsection 2.C). Depending on how much amplitude information is missing, there will be a null space of images that are not constrained by the real- or reciprocal-space constraints32,35 and that may need to be regularized in the reconstruction.36
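A one-line evaluation of Eq. (6) for assumed values of Δx and D:

```python
import numpy as np

# Angular increment at which the Crowther criterion, Eq. (6), matches critical
# sampling (s = 2) of the intensities: dphi = dq/q_max = dx/D.
dx = 9.8e-9     # real-space sampling interval, m (assumed)
D = 2.5e-6      # object width, m (assumed)
dphi = dx / D   # radians
print(f"Crowther increment: {np.degrees(dphi):.2f} deg")  # ~0.22 deg for these assumed values
```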

B. Interpolation of the Diffraction Intensities

We interpolate the diffraction intensities measured on the CCD detector onto a regular Cartesian grid. The location of the CCD pixel indexed by integers (i, j) is given by the vector p_{i,j} = p(i î + j ĵ), relative to the pixel intersecting the direct beam, as shown in Fig. 1. We have then, with k_in = (1/λ) k̂,

q_{i,j} = k_out − k_in = (1/λ) [ (p_{i,j} + z_D k̂) / √(p_{i,j}² + z_D²) − k̂ ],   (7)

where z_D is the distance from the sample to the detector. Hence, for example,

q_{i,j}^x = q_{i,j} · î = (1/λ) p i / √[p²(i² + j²) + z_D²],   (8)


q_{i,j}^z = q_{i,j} · k̂ = (1/λ) { z_D / √[p²(i² + j²) + z_D²] − 1 }.   (9)

In practice each diffraction pattern in our data set has a different rotation in the 3D Fourier space of the object, and the coordinate of each pixel in this space is given by

u_{i,j,φ} = R_φ q_{i,j},   (10)

where R_φ is the 3D rotation matrix derived from the known object orientation. The coordinates u_{i,j,φ} are then mapped onto a uniform Cartesian grid by nearest-neighbor sampling. Where more than one pixel from the set of intensity measurements contributes to a given voxel, the pixel values are averaged to determine the appropriate intensity value at that point. We find that for the data sets that we have collected, where the angular increment Δφ is at least four times larger than the condition specified by Eq. (6), nearest-neighbor interpolation gives slightly better reconstructions than does bilinear interpolation. At the high resolutions, where the reciprocal-space distance between data from adjacent object orientations is several pixels, the latter method leads to many interpolated values that are simply some fraction of their neighboring value. We find it better in this sparse case to have the intensity value concentrated at a single u position (even if that position is slightly wrong) than to distribute the intensity value over a cluster of voxels.
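The following NumPy sketch illustrates the mapping of Eqs. (7)–(10) and the nearest-neighbor gridding with averaging; the detector geometry, rotation range, grid size, and placeholder intensities are assumptions for the example, not the actual data pipeline used for the experiment.

```python
import numpy as np

# Map CCD pixels onto the Ewald sphere, rotate into the object frame, and
# accumulate onto a Cartesian grid by nearest-neighbor averaging.
wavelength = 1.65e-9    # m
p, z_D = 20e-6, 0.142   # pixel size and sample-detector distance, m (assumed)
n_det, N = 256, 256     # detector subarray width and 3D grid width (small for the sketch)

i = np.arange(n_det) - n_det // 2
ii, jj = np.meshgrid(i, i, indexing="ij")
r = np.sqrt(p**2 * (ii**2 + jj**2) + z_D**2)
q = np.stack([p * ii / r,                # q_x, Eq. (8)
              p * jj / r,                # q_y
              z_D / r - 1.0],            # q_z, Eq. (9)
             axis=-1) / wavelength

sum_grid = np.zeros((N, N, N))
hits = np.zeros((N, N, N), dtype=np.int64)
dq_grid = 2 * np.abs(q).max() / N        # grid spacing chosen to span the data

for phi in np.radians(np.arange(-57, 67, 1.0)):   # assumed rotations about y
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    u = q @ R.T                                   # Eq. (10): u = R_phi q
    idx = np.round(u / dq_grid).astype(int) + N // 2
    ok = np.all((idx >= 0) & (idx < N), axis=-1)
    I_view = np.ones(ok.sum())                    # placeholder for this view's measured intensities
    np.add.at(sum_grid, tuple(idx[ok].T), I_view)
    np.add.at(hits, tuple(idx[ok].T), 1)

I_3d = np.where(hits > 0, sum_grid / np.maximum(hits, 1), 0.0)   # averaged voxel values
```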

We note that there are alternatives to the nearest-neighbor interpolation onto a regular grid that we use in this work. The interpolation could be avoided by solving the inverse transform by inverse methods, such as is performed in the computer program BACK (Ref. 37, Sect. A5.3), which utilizes a constrained conjugate gradient solver and which is used in the computer program SPEDEN38 (SPEDEN is a program to perform 3D phase retrieval, designed specifically to optimally include prior data and avoid instabilities caused by interpolation). Alternatively, it should be possible to employ fast algorithms to compute discrete Fourier transforms of nonequispaced data.39 In the nondiffracting limit (i.e., computed tomography, CT), the reconstruction method of filtered backprojection can be shown to be a computationally efficient method that is equivalent to Fourier synthesis via a polar-to-Cartesian interpolation.40,41 A corresponding algorithm, called filtered backpropagation,42 has been developed for the diffracting case where the diffraction amplitude and phase are measured, which again can be shown to be equivalent to Fourier synthesis after interpolation.43

C. Phase Retrieval

Our phase retrieval method follows from previous work on 2D diffraction imaging.6,7,8,44 In particular we have extended the Shrinkwrap algorithm6 to operate on 3D arrays. This algorithm is based on an iterative transform algorithm (ITA), which cycles between real and reciprocal space, respectively enforcing the constraints of known object support and known diffraction modulus. Usually an ITA requires knowledge about the shape of the object to set the support constraint. This support is usually larger than the actual boundary of the object; what is termed a loose support. For general complex-valued objects (e.g., consisting of more than one element or compound), where a positivity constraint cannot be applied, the ITA gives higher-quality reconstructions when the support constraint more closely and tightly matches the object’s boundary.45 The reason for this is explained in Subsection 4.B. The Shrinkwrap algorithm periodically updates the estimate of the support based on the current object estimate. The updated support is chosen by low-pass filtering the current estimate and setting the support to be the region for which the intensity is above a certain threshold (usually a prescribed fraction of the maximum image intensity). The method can be started from an estimate of a very loose support, from a threshold of the object’s autocorrelation function, or even from the entire array. A method that exists for finding an estimate of the object support from the autocorrelation function’s support could also be used.46,47 While the Shrinkwrap method can be used with any ITA, such as the hybrid input–output (HIO)48 or difference map49 algorithms, we used the HIO and relaxed averaged alternating reflection (RAAR)50 algorithms for this work.

Many of the phase retrieval ITAs can be written as fixed point iterative equations, which can be written generally in the form g_{n+1} = T g_n for a generic operator T. The RAAR algorithm can be represented by the fixed point iterative equation of the complex-valued real-space image iterate g (Ref. 50):

g_{n+1} = [ (1/2) β (R_S R_M + I) + (1 − β) P_M ] g_n
        = [ 2β P_S P_M + (1 − 2β) P_M + β (I − P_S) ] g_n,   (11)

where the operator R = 2P − I is the reflector corresponding to the projector P, I is the identity operator, and β is a feedback parameter, which we usually set to β = 0.9. The two operators P_M and P_S are the projections onto the modulus constraint and support constraint, respectively. We apply the modulus according to

P_M g = F⁻¹ {
    [G(u)/(|G(u)| + ε)] [√I(u) + σ_√I(u)]   if |G(u)| ⩾ √I(u) + σ_√I(u),
    [G(u)/(|G(u)| + ε)] [√I(u) − σ_√I(u)]   if |G(u)| ⩽ √I(u) − σ_√I(u),
    G(u)                                    otherwise, or u ∉ M
},   (12)

where σ_√I is the estimated standard deviation of the measured diffraction amplitudes, G(u) = F[g(x)], and ε is a small number. M is the set of u where I(u) has been measured. For example, u ∉ M in the missing sector of data present when diffraction is recorded over a limited range of angles.


The operator P_M of Eq. (12) retains the phase of the complex Fourier amplitude G(u) and projects its modulus |G(u)| to the nearest measured diffraction amplitude, in the interval [√I(u) − σ_√I(u), √I(u) + σ_√I(u)] [or does nothing if the modulus already lies within that range or if I(u) has not been measured]. Given the support S from Shrinkwrap, we apply the support constraint when retrieving the phase of a complex image, using

P_S g = { g(x)   if x ∈ S,
          0      otherwise }.   (13)

We also perform phase retrieval where we impose real and positive constraints on the image amplitudes, replacing P_S with

P_S+ g = { Re[g(x)]   if x ∈ S and Re[g(x)] ⩾ 0,
           0          otherwise }.   (14)

The HIO algorithm can be written in terms of a fixed point iterative equation only when applying the support constraint P_S, not when applying positivity constraints.50 In general the HIO algorithm is given by

g_{n+1} = { P_M g_n           if x ∈ S′,
            (I − β P_M) g_n   otherwise },   (15)

where S′ is the set of elements where P_M g_n satisfies the support and (if desired) the reality and positivity constraints. As with the RAAR algorithm we use a value of the feedback parameter of β = 0.9.
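A minimal NumPy sketch of the projectors and of the RAAR and HIO updates, Eqs. (11)–(15), for the simplified case σ_√I = 0 and S′ = S (no positivity), on a small toy array; it is not the authors' production code.

```python
import numpy as np

def P_M(g, sqrtI, measured, eps=1e-10):
    """Modulus projection, Eq. (12) with sigma = 0: keep phases, set |G(u)| = sqrt(I(u))."""
    G = np.fft.fftn(g)
    G_new = np.where(measured, sqrtI * G / (np.abs(G) + eps), G)
    return np.fft.ifftn(G_new)

def P_S(g, support):
    """Support projection, Eq. (13): zero the image outside S."""
    return np.where(support, g, 0)

def raar_step(g, sqrtI, measured, support, beta=0.9):
    """g_{n+1} = [2*beta*P_S*P_M + (1-2*beta)*P_M + beta*(I - P_S)] g_n, Eq. (11)."""
    pm = P_M(g, sqrtI, measured)
    return 2 * beta * P_S(pm, support) + (1 - 2 * beta) * pm + beta * (g - P_S(g, support))

def hio_step(g, sqrtI, measured, support, beta=0.9):
    """g_{n+1} = P_M g_n inside S' (= S here), (I - beta*P_M) g_n elsewhere, Eq. (15)."""
    pm = P_M(g, sqrtI, measured)
    return np.where(support, pm, g - beta * pm)

# Toy usage on a 32^3 array with random starting phases.
n = 32
support = np.zeros((n, n, n), dtype=bool)
support[12:20, 12:20, 12:20] = True
obj = support * (1.0 + 0.2j)
sqrtI = np.abs(np.fft.fftn(obj))
measured = np.ones_like(sqrtI, dtype=bool)       # no missing data in this toy case
g = np.fft.ifftn(sqrtI * np.exp(2j * np.pi * np.random.rand(n, n, n)))
for _ in range(100):
    g = raar_step(g, sqrtI, measured, support)
```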

Regardless of algorithm, we monitor the reconstruction with the real-space image error

E_S² ≡ Σ |g_n − P_S g_n|² / Σ |P_S g_n|² = Σ_{x∉S} |g_n(x)|² / Σ_{x∈S} |g_n(x)|².   (16)

This metric is a measure of the total power in the image that remains outside the support and is zero for the case of perfectly satisfying the real-space constraints. We define, in a completely analogous way to Eq. (16), the error E_M corresponding to the modulus constraint defined by P_M:

E_M² ≡ Σ |g_n − P_M g_n|² / Σ |P_M g_n|² = Σ (|G_n| − √I)² / Σ I,   (17)

where the equality follows from Parseval’s theorem and is true only for σ_√I = 0. The error metrics E_S and E_M are the normalized distances between the current iterate g_n and the support and modulus constraint sets, respectively. The reconstructed image from a reconstruction run (from a particular set of starting phases) is given by

g̃_M = P_M g_n   (18)

for the final iterate g_n of both the RAAR and HIO algorithms.
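The error metrics of Eqs. (16) and (17) translate directly into code (again for the σ_√I = 0 case); a short sketch:

```python
import numpy as np

def support_error(g, support):
    """E_S of Eq. (16): fraction of image power outside the support, square-rooted."""
    num = np.sum(np.abs(g[~support])**2)
    den = np.sum(np.abs(g[support])**2)
    return np.sqrt(num / den)

def modulus_error(g, sqrtI):
    """E_M of Eq. (17) with sigma = 0, using Parseval's theorem."""
    G = np.abs(np.fft.fftn(g))
    return np.sqrt(np.sum((G - sqrtI)**2) / np.sum(sqrtI**2))
```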

The Shrinkwrap algorithm has been used previously to reconstruct 2D images of thin objects at a resolution of about 20 nm.6 We have found in subsequent studies that the step of updating the support would sometimes shrink the support to a shape smaller than the actual boundary of the object. To counter this effect, we have improved the Shrinkwrap algorithm to prevent it from overshrinking the support. Depending on the parameters of the low-pass filter and the threshold level, the support may start to cut off the extremities of the object. At this point the support constraint error E_S² increases rapidly, and the reconstructions rapidly degrade with further iteration. This error is thus a good indicator of when to halt the support refinement. We simply monitor the error metric, and when it increases above a set point, we choose the support saved from ten iterations prior. This then becomes our best estimate of the support and is used as a fixed support in combination with the RAAR algorithm for many more (typically 100–1000) iterations. We further decrease the uncertainty of the retrieved phases by averaging the retrieved complex images from independent and random starting diffraction phases using the Shrinkwrap-derived support constraint36 as described in Eq. (22) of Subsection 5.A. If the phase at a particular spatial frequency is randomly recovered from trial to trial, the average modulus will average to zero and hence be filtered out of the recovered image.

The 2D reconstructions shown in this paper were performed by using the RAAR algorithm [Eq. (11)], and the 3D reconstructions were performed by using a combination of the HIO [Eq. (15)] and RAAR algorithms. A typical reconstruction process proceeds as follows. First we define the initial object support mask by applying a 2% intensity threshold to the object autocorrelation, obtained by Fourier transforming the measured diffraction pattern. The support constraint, defined by the current object mask, is applied to the solution in real space once per iteration. We typically use a feedback parameter of β = 0.9 in the RAAR or HIO algorithm. The object support S is recomputed every 30 iterations by convolving the absolute value of the current reconstruction g̃_M with a Gaussian of FWHM of initially 3 pixels in all dimensions and applying a threshold to the resultant image at 15% of the maximum value. As the iterations progress, we reduce the width of the Gaussian blurring function from 3 pixels to 1 pixel, following the prescription w_S = 1 + 2 exp(−n²/n_w²), with n_w regulating the speed at which w_S decreases with iteration number n. The reduction in the blurring width enables the support to better conform to the solution as the quality of the reconstruction increases. We perform this Shrinkwrap support determination without applying any real-space positivity or reality constraint on the image amplitudes (that is, we use the constraint P_S in the RAAR algorithm or S′ = S in the HIO algorithm). The final support is usually obtained after 300–600 iterations, with a stopping criterion that the support constraint error E_S² does not exceed 0.2. Once the support is determined, we carry out many iterations of the RAAR algorithm, starting from random phases, using a feedback parameter of β = 0.9. In some cases additional real-space constraints, such as positivity or reality of the image amplitudes, are also applied.
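A sketch of the support update described above: blur the magnitude of the current reconstruction with a Gaussian whose FWHM shrinks from 3 to 1 pixels and threshold at 15% of the maximum. The value of n_w and the use of scipy.ndimage.gaussian_filter are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def update_support(g, n, n_w=200, threshold=0.15):
    """Shrinkwrap support update at iteration n (n_w is an assumed schedule constant)."""
    w_S = 1.0 + 2.0 * np.exp(-n**2 / n_w**2)   # blur FWHM in pixels, 3 -> 1 as n grows
    sigma = w_S / 2.355                        # convert FWHM to Gaussian sigma
    blurred = gaussian_filter(np.abs(g), sigma)
    return blurred > threshold * blurred.max()
```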

As shown in Eq. (12), in diffraction space the amplitudes of the object guess are matched in magnitude to the measured diffraction pattern amplitude over those parts of 3D diffraction space where the measured intensity is defined. Those parts of 3D diffraction space where there are no measured data are allowed to float and are not constrained.


This includes the regions between the measured Ewald spheres, the missing wedge of data from the finite range of rotation angles, the central beam-stop region, and those parts of the diffraction pattern where the measured intensity is sufficiently low to be regarded as noise. An additional, optional Fourier space constraint is to set those pixels beyond the radius of the spatial frequencies measured by the CCD chip to zero. This asserts lack of knowledge of spatial frequencies higher than those measured by the CCD camera and effectively provides a pupil function for the imaging system in 3D space.

Providing an initial guess for the 3D object support is not typically necessary but speeds the reconstruction process and helps break inversion symmetry present in the object autocorrelation. An initial 3D support estimate can be obtained from the diffraction data by first performing Shrinkwrap phase retrieval on a 2D central section, as described in Subsection 4.C. We then extrude the support mask into a 3D shape that is invariant in the projection direction and that matches the 2D mask that was generated in all planes normal to this direction, to provide an initial 3D support estimate. If several 2D reconstructions are available from a range of views, the intersection of these support functions in three dimensions can be used to provide a more detailed initial support estimate. Experience has shown that even a low-resolution or comparatively poor support estimate is sufficient to almost immediately break any inversion symmetry in the reconstruction and hasten convergence of the 3D solution. If three or more extruded support functions are used to form an intersection, then the support functions may need to be initially aligned to form an intersection, although the Shrinkwrap algorithm will usually improve the support even if misalignment yields a too-small intersecting volume. A better method for providing an initial support estimate appears to be obtainable from intersection operations of the support of the autocorrelation function,46,47 which is readily computed as described in Subsection 3.C. Although we have not yet attempted it, we believe that estimating the initial support from the support of the autocorrelation, followed by the dynamic support constraint of Shrinkwrap, should provide a very robust reconstruction methodology.
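A minimal sketch of the extrusion and intersection of 2D support masks described above; rotation of the extruded prisms into a common frame (e.g., with scipy.ndimage.rotate) and their alignment are assumed to have been done beforehand.

```python
import numpy as np

def extrude(mask_2d, depth):
    """Repeat a 2D support mask along the beam (z) axis to form a 3D prism."""
    return np.repeat(mask_2d[:, :, np.newaxis], depth, axis=2)

def intersect_supports(prisms):
    """Logical AND of several extruded supports (assumed co-aligned in one frame)."""
    out = prisms[0].copy()
    for prism in prisms[1:]:
        out &= prism
    return out
```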

3. METHODS

A. Sample Preparation

A goal of this study was to be able to unambiguously compare reconstructed x-ray images of a 3D object with images obtained by another high-resolution method, such as a scanning electron microscope (SEM). To accomplish this, we fabricated a test object that consists of a silicon nitride membrane with a 3D pyramid shape that is decorated with 50-nm-diameter colloidal gold spheres, similar to that previously described.7 The object is 3D and has a comparable width, height, and depth, measuring 2.5 µm × 2.5 µm × 1.8 µm.

The pyramid-shaped membrane was fabricated by lithography with methods similar to those used to make silicon nitride windows and silicon nitride atomic-force microscope tips. The starting material was a double-side polished 200-µm-thick wafer of silicon crystal with the crystal ⟨100⟩ axis oriented normal to the surface. Pits with an inverted pyramid shape were etched into one side of the wafer by anisotropic etching through a pattern of 2.5-µm-wide square holes, lithographically printed and developed in photoresist. The anisotropic etch leaves the {111} crystal planes exposed, so that the surface normal of any of the four faces of the pyramid makes an angle of 54.7° to the window normal and the ratio of the depth of the pit to its base width is 1/√2. After removal of the photoresist a low-stress silicon nitride film of 100-nm thickness was grown on the surface by chemical vapor deposition. Window openings were then etched from the other side of the wafer after first resist coating and patterning that side, making sure to align to marks etched in the front surface. The etch from the back removes silicon but leaves a free-standing membrane of silicon nitride, which in this case had one pyramid-shaped indentation per window. The windows were made with a slotted shape about 2 mm wide by 50 µm high. With the 200-µm thickness of the silicon frame and the pyramid positioned in the center of the window, this allows a line of sight through the window at a maximum rotation angle (about an axis in the plane of the window, parallel to the short window dimension) of 78°.

The gold-sphere test object was made by dragging a small drop of solution of gold balls in water, suspended from a micropipette, across the silicon nitride window so that it intersected with the pyramid indentation. Best success was achieved with a slightly hydrophilic silicon nitride surface, which could be obtained by cleaning the surface in an oxygen plasma. As the drop was moved over and away from the indentation, a smaller drop broke away from the main drop and was captured in the pyramid. This drop quickly evaporated and left the gold balls in a characteristic pattern where the gold tended to fill in the edges of the pyramid. The main drop was completely dragged away from the window, so the only gold balls on the window were those in the pyramid. A plan-view SEM image (membrane and wafer perpendicular to the electron beam) of the object is shown in Fig. 2. The SEM is, however, sensitive only to the surface of the object: the electrons do not penetrate the gold spheres or the membrane. The depth of focus of the SEM was larger than the thickness of the object, and from the plan view we can determine the lateral coordinates of the topmost balls and infer the third coordinate from the known geometry of the pyramid.

The silicon nitride window was glued to a pin so that the pyramid was close (within about 20 µm) to the rotation axis of the pin. The pin was mounted in a collar that was attached to a JEOL electron microscope sample holder. This assembly was mounted into the modified goniometer holder of the diffraction apparatus.17

B. Data Acquisition

Experiments were carried out at an undulator source at beamline 9.0.1 (Ref. 18) of the Advanced Light Source (ALS) using the Stony Brook University diffraction apparatus.17 Pertinent to this experiment, 750-eV (1.65-nm-wavelength) x rays were selected from the undulator radiation by a zone-plate monochromator with a spectral resolution of λ/Δλ = 1000.


The 5-µm-diameter monochromator exit pinhole also selects a transversely coherent patch of the beam. The sample was located 20 mm from this pinhole. A direct-detection bare CCD detector, with 20-µm pixel spacing and 1340 × 1300 pixels, was located 142 mm behind the sample. We selected subarrays of 1200 × 1200 elements, centered on the location of the zero spatial frequency (direct beam). At these CCD and wavelength settings we have a real-space sampling interval in x and y of Δx = 9.8 nm (in the small-angle approximation) and a field width of w = NΔx = 11.7 µm. With these settings the 2.5-µm-wide pyramid object satisfies the far-field and sampling conditions discussed in Subsection 2.A. The diffraction from the pyramid object is more than 4× oversampled in each dimension (s = 4.6).
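For reference, the quoted sampling interval and field width follow from the stated geometry in the small-angle approximation; the short sketch below recovers them to rounding.

```python
# Real-space sampling for the stated geometry (small-angle approximation).
wavelength = 1.65e-9   # m
z_D = 0.142            # sample-to-CCD distance, m
p = 20e-6              # CCD pixel spacing, m
N = 1200               # subarray width in pixels
D = 2.5e-6             # pyramid base width, m

dx = wavelength * z_D / (N * p)   # ~9.8 nm
w = N * dx                        # ~11.7 um
s = w / D                         # oversampling ratio per dimension
print(f"dx = {dx*1e9:.1f} nm, w = {w*1e6:.1f} um, s = {s:.1f}")
```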

The frame of the slotted window in which the pyramid is formed blocks most of the high-angle scatter from the pinhole that would otherwise illuminate the CCD. This scatter reveals a projection shadow image of the slotted window, useful for aligning the pyramid to the beam. The diffraction pattern of the pyramid measured by the CCD is shielded from this remaining pinhole scatter with a small aperture placed 6 mm upstream of the sample (a distance at which the sample can be rotated without interference). A beam stop blocks the direct undiffracted beam from impinging on the CCD. More details are given by Beetz et al.17

Diffraction patterns were collected with the sample oriented at rotation angles of −57° to +72° at 1° intervals [compared with 0.27° angular increments required for full sampling according to Eq. (6)]. The shadow of the sample support frame limited useful data to −57° to +66°. We additionally collected data at 0.5° increments for a range of 19° centered at an object orientation of φ = −26° from the head-on (φ = 0°) orientation. To keep the sample centered in the 5-µm beam, we calibrated the position of the sample by performing a 2D raster scan of the rotation and y goniometer motors. The total scattered counts (not including those blocked by the beam stop) were collected for each motor position and the optimum y position (a translation motion perpendicular to the rotation axis) was then computed for each rotation angle, and these were fitted to a smooth curve as a function of rotation angle. To collect the 3D data set, at each rotation angle we took several exposures to accommodate the large dynamic range of 10⁵ of the diffraction pattern and to reduce the area occluded by the beam stop (by setting the beam stop to different positions). After subtraction of dark noise, pixel data that were not saturated and not masked by the beam stop were summed over these exposures and then normalized by the accumulated incident flux corresponding to that sum. A typical diffraction pattern is shown in Fig. 3(a), which was composed of ten or more individual exposures of 0.1-, 1-, 10-, and 60-s duration, for a cumulative exposure of 73 s. The diffraction pattern intensities are displayed on a logarithmic gray scale in Fig. 3. At the highest angles of the pattern (highest resolution, at 0.07 nm⁻¹, along the diagonal), the mean photon count is 1.9 photons/pixel for this sample orientation.

Fig. 2. (Color online) (a) SEM image of the pyramid test object, consisting of 50-nm-diameter gold spheres lining the inside of a pyramid-shaped indentation in a 100-nm-thick silicon nitride membrane. The membrane extends over a window of size 50 µm × 1.7 mm, the pyramid base width is 2.5 µm, and the height is 1.8 µm. (b) Isosurface rendering of the reconstructed 3D image. (c) Extremely large depth-of-field x-ray projection image from a central section of the 3D diffraction data set, reconstructed with the Shrinkwrap algorithm. (d) Maximum value projection of the 3D reconstructed image (left) with a horizontal white line indicating the location of a tomographic slice (right). The scale-bar length is 1 µm and applies to all images.


The maximum normalized photon count, which occurs in a pixel near the center of the pattern, is 109,000 photons. The estimated incident flux was 8 × 10⁹ photons s⁻¹ µm⁻² (per 400 mA of storage-ring current), and the normalized incident fluence for the accumulated sum of Fig. 3(a) was 3 × 10¹¹ photons/µm². The total normalized scattered counts at the CCD over the accumulated exposure time for the pattern in Fig. 3(a) were 1.6 × 10⁸ photons (equal to the total counts that would be recorded if the detector had infinite dynamic range and did not saturate).
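A sketch of the exposure-merging step described above; the frame format, the per-frame incident flux values, and the 16-bit saturation level are assumptions for illustration.

```python
import numpy as np

def merge_exposures(frames, darks, fluxes, beamstop_masks, saturation=65535):
    """Dark-subtract each frame, drop saturated and beam-stop pixels, sum the rest,
    and normalize by the incident flux accumulated over the exposures used per pixel."""
    total = np.zeros(frames[0].shape)
    accumulated_flux = np.zeros(frames[0].shape)
    for frame, dark, flux, blocked in zip(frames, darks, fluxes, beamstop_masks):
        good = (frame < saturation) & ~blocked
        total += np.where(good, frame - dark, 0.0)
        accumulated_flux += np.where(good, flux, 0.0)
    return np.where(accumulated_flux > 0, total / np.maximum(accumulated_flux, 1e-30), 0.0)
```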

Views of the diffraction data cube are shown in Figs. 4(a)–4(c) below and discussed in Subsection 4.A. This cube was assembled from the 123 diffraction patterns at 1° sample orientation increments, and 32 patterns at half-degree intervals, by interpolating onto u_{i,j,φ}. The total integrated exposure time for the complete data set was 3.2 h, with a total incident fluence of 5 × 10¹³ photons/µm².

C. Data Diagnostics

As seen in relation (5), the autocorrelation of the object can be determined from a single inverse Fourier transform of the measured data without having to recover the diffraction phases. We find that computing the autocorrelation image from the measured diffraction data is a useful diagnostic to determine if the measurement fulfills the sampling requirements, to help identify the object, and to assess the measurement quality. The inverse Fourier transform of the diffraction pattern shown in Fig. 3(a) is given in Fig. 3(b), after first applying the high-pass filter described below in Eq. (19) to remove effects due to the abrupt intensity change caused by the beam stop. The displayed autocorrelation image has been cropped by half in width and height from the inverse Fourier transform of the diffraction pattern, since, with the linear gray scale displayed, the rest of the field was black. This autocorrelation image has a well-defined support that is confined within the image field, showing that we are indeed oversampling the diffraction intensities.

The Fourier relationship of expression (5) offers a further method to diagnose the diffraction pattern I(q) as a function of the spatial frequency q across the pattern.

Fig. 3. (a) Diffraction pattern for the φ = 0° orientation of the pyramid. (b) Autocorrelation image i(x), formed by Fourier transforming the diffraction intensity pattern of (a) after filtering, displayed with a linear gray scale (white corresponds to highest intensity). The scale bar is 1 µm. (c) Autocorrelation spectrogram of the same single-view diffraction pattern of the pyramid, displayed with a logarithmic gray scale. The central and rightmost images of the bottom row of (c) are redisplayed in (d) and (e), respectively.


A property of the Fourier transform of a function, often discussed in the context of holography, is that a subregion of the transform (or hologram) can be inverse Fourier transformed to give a spatially filtered image of the original function. The field of that image is the full field of the original function. The filtered image may differ from subregion to subregion, since each subregion contains information pertaining to particular spatial frequencies of the original object function. Nevertheless, for nonperiodic object functions, these images should be consistent and not vary too dramatically between neighboring regions. Large inconsistencies in images formed in neighboring regions point to inconsistencies in the measured diffraction data. This gives a convenient way to qualitatively check the diffraction intensities (and also reconstructed phases) across the diffraction pattern by forming an array of subimages, each corresponding to a particular subregion. We term this array a “spectrogram,” since it displays the image information as a function of spatial frequency, much like the time–frequency spectrograms used in audio analysis. We apply the spectrogram analysis both to the autocorrelation image and to reconstructed images to determine the consistency of the data or reconstructed phases.

An example of an autocorrelation spectrogram is shown in Fig. 3(c), where each image is formed by applying a 2D Gaussian window function to the diffraction pattern, centered at a location indicated by the image position in the array. One effect that is immediately noticeable to the eye is that, at the higher-resolution positions, the images vary according to their azimuthal position in the spectrogram. In particular, features that are tangential to circles of constant resolution have greater contrast than features that vary in a radial direction. The effect gives the appearance that the spectrogram is rendered onto the surface of a sphere, but it is in fact a signature of limited longitudinal coherence of the incident beam. For a given q, represented in the spectrogram by the position of the image, pairs of points of the object that are perpendicularly bisected by the vector q will interfere with zero path difference. These points will interfere with the same phase for all wavelengths (assuming no dispersion of the scattering factors). The path difference at q of other pairs of points along this line perpendicular to q depends only in second order on their mean displacement from the direction of q. The path differences of rays scattering from pairs of points separated in the radial direction, however, vary to first order with their radial separation. Therefore a limited longitudinal coherence, which limits the path difference over which interference occurs, will reduce interference for points separated by a direction parallel to the q vector by a much greater extent than that for the perpendicular direction. The spectrogram gives a good visual determination of the degree of longitudinal coherence, and we see from Fig. 3(c) that the longitudinal coherence is adequate for diffraction out to the center edge of the pattern [Fig. 3(d)] but not quite adequate for the corner of the pattern [Fig. 3(e)]. By comparison with spectrograms of simulated diffraction patterns, we estimate that the relative bandwidth of the incident beam in this case is approximately Δλ/λ = 1/600.
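A sketch of the spectrogram construction: a Gaussian window is applied to the (centered) diffraction pattern at a grid of positions, and each windowed copy is inverse transformed to give one spatially filtered subimage. The number of tiles and window width are assumptions.

```python
import numpy as np

def spectrogram(pattern, n_tiles=5, window_sigma=60):
    """Return an n_tiles x n_tiles grid of filtered-image magnitudes, one per
    Gaussian-windowed subregion of the diffraction pattern."""
    ny, nx = pattern.shape
    y, x = np.mgrid[0:ny, 0:nx]
    centers_y = np.linspace(ny * 0.1, ny * 0.9, n_tiles)
    centers_x = np.linspace(nx * 0.1, nx * 0.9, n_tiles)
    tiles = []
    for cy in centers_y:
        row = []
        for cx in centers_x:
            win = np.exp(-((y - cy)**2 + (x - cx)**2) / (2 * window_sigma**2))
            row.append(np.abs(np.fft.ifft2(np.fft.ifftshift(pattern * win))))
        tiles.append(row)
    return tiles
```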

It is also clear from Fig. 3(c) that the data are inconsistent at the low spatial frequencies, since at those q positions there is noticeable intensity outside the support of

the pyramid object autocorrelation. This is due to the fact that low-frequency data are missing because of the beam stop, and also, to a lesser degree, due to scattering from the sample substrate or the coherence-defining pinhole. The data are visibly noisier in windowed regions located in the first three rows of the fourth column of Fig. 3(c), due to the stalk that holds the beam stop and that was moved over several positions in this region for the cumulative exposure. The noise and inconsistency can cause the Shrinkwrap algorithm to fail (in which case it keeps shrinking beyond the object's boundary), especially when applied without an image-space constraint such as positivity. We find that the Shrinkwrap algorithm consistently converges to a stable object support when we apply a high-pass filter to the diffraction intensities before interpolation. This filter has the form

f(q) = \begin{cases} (q/2a)^4 \exp(2 - q^2/2a^2), & q < 2a \\ 1, & \text{otherwise} \end{cases} \qquad (19)

where q = |q| and the filter radius a = 100 pixels, or less than 10% of the array width. The image thus formed will be a high-pass filtered image, equivalent to the coherent image formed by a lens with a central obscuration. The filter was applied to the data of Fig. 3(a), before Fourier transformation, to diminish the effects of the beam stop. This filter also regularizes the inverse transform, which is ill posed in part due to the missing data behind the beam stop,32,35 by simply setting the unknown intensities to zero. The effect of this high-pass filter is to convolve the coherent image with the Fourier transform of the filter. This causes a ringing of the image, which gives rise to negative amplitudes in the image, and a slightly larger image support. We also zero the diffraction intensities of the bright cross streaks seen in the x–z central section, to suppress artifacts that they may cause.
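A minimal sketch of the filter of Eq. (19), assuming a data cube with the zero spatial frequency at the array center and q measured in voxels (function and array names are illustrative, not from the original code):

```python
import numpy as np

def highpass_filter(shape, a=100.0):
    """Radial high-pass filter of Eq. (19): (q/2a)^4 exp(2 - q^2/2a^2) for
    q < 2a and 1 otherwise, where q is the radius in voxels measured from
    the zero-frequency voxel at the array center."""
    grids = np.meshgrid(*[np.arange(n, dtype=float) - n // 2 for n in shape],
                        indexing="ij")
    q = np.sqrt(sum(g**2 for g in grids))
    return np.where(q < 2 * a,
                    (q / (2 * a))**4 * np.exp(2.0 - q**2 / (2 * a**2)),
                    1.0)

# e.g. suppress roughly the lowest 10% of frequencies of a centered data cube:
# intensity_filtered = intensity * highpass_filter(intensity.shape, a=100.0)
```

The filter rises smoothly from zero at the center to unity at q = 2a, which is what makes it both a high-pass filter and a regularizer for the missing low-frequency data.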

D. Computational Implementation

The two key computational challenges in implementing high-resolution 3D phase retrieval at the time of writing are performing the numerous 3D FFTs required in a reasonable period of time and managing the memory requirements of the large 3D data arrays.

Memory requirements are dictated by the size of the data sets acquired and by the phase retrieval algorithms used. For the iterative transform phase retrieval methods described in Subsection 2.C, we require four or more 3D arrays with the same number of elements as that in the interpolated input diffraction volume. Specifically, the arrays required are the input diffraction modulus data (√I(u), floating point), the current and previous iterates [g_n(x) and g_{n−1}(x), complex floating-point data], and the support constraint (S, byte data). The estimated modulus standard deviation σ_√I requires another floating-point array, but in the 3D reconstructions we set σ_√I = 0 to reduce the memory requirement and speed up the reconstructions. In principle FFTs can be performed on arbitrarily sized arrays; however, it is advantageous to perform reconstructions on a regular grid with 2^n grid points on a side. Our interpolated data array is a gigavoxel data cube containing 1024³ = 2³⁰ elements, which requires a total of 8 GB of computer memory per array for single-precision


complex data. The minimum memory footprint for single-precision iterative object reconstruction using either the HIO or RAAR algorithm on a 1024³ data cube is therefore 2 × 8 GB complex arrays, plus 1 × 4 GB of diffraction cube data and 1 × 1 GB for the support array, giving a total memory footprint of 21 GB of data, where we use the definition 1 GB = 2³⁰ bytes. The minimum memory footprint for performing basic HIO and RAAR reconstruction on 3D arrays of different sizes is given in Table 1. Note that this is the minimum memory footprint needed to perform a HIO reconstruction and that more memory may be required depending on the specific implementation. For example, FFT speed can be increased through the use of temporary "workspace" arrays that require additional memory, and maintaining a running sum of successive images for ρ̄_M requires an additional complex-valued array to be retained in memory. The memory calculations above include only the data arrays and do not take account of operating system requirements and the executable code itself.
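The bookkeeping behind these figures, and the entries of Table 1, can be reproduced directly; a short sketch (this assumes, as the table entries imply, that the modulus data are kept in single precision even when the iterates are double precision):

```python
def min_footprint_bytes(n, double=False):
    """Minimum memory for iterative 3D phase retrieval on an n^3 grid: one
    float32 modulus array, two complex iterate arrays (current and previous),
    and one byte-valued support mask."""
    voxels = n**3
    complex_bytes = 16 if double else 8        # complex128 vs complex64
    return voxels * (4 + 2 * complex_bytes + 1)

for n in (256, 512, 1024, 2048):
    s = min_footprint_bytes(n) / 2**30
    d = min_footprint_bytes(n, double=True) / 2**30
    print(f"{n}^3: {s:.2f} GB single precision, {d:.2f} GB double precision")
```

For n = 1024 this gives the 21 GB (single) and 37 GB (double) quoted above and in Table 1.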

The second computational challenge is the efficient evaluation of the numerous 3D Fourier transforms required for 3D phase retrieval. The Fourier transform of a single large data set is not trivially parallelizable, in that the problem cannot simply be broken into separate parallel tasks and distributed over many computer processors, as is the case, for example, with ray tracing and partially coherent imaging, where each CPU can work on a subset of the entire problem without the need for intensive internode communication during execution. The nature of the Fourier transform means that any element of the input array affects all elements of the output, requiring internode exchange of array data at each Fourier transform step to ensure that all CPUs work together to solve the one large FFT.

We overcome the problem of efficiently calculating distributed Fourier transforms by using the dist_fft distributed gigaelement FFT library from Apple Computer, specifically written for this project by the Apple Advanced Computation Group.20 This FFT library distributes the Fourier transform calculation load efficiently over many processors and has been hand-optimized to take advantage of the G5 architecture used in the Apple Macintosh line of computers and the "Altivec" single-instruction multiple-data floating-point vector processing unit. Distributed FFT libraries are also available elsewhere, for example in version 2 of the FFTW libraries,51 but at this time these do not support single-instruction multiple-data vector processing extensions and proved to be slower on our platform. dist_fft decomposes the input 3D data set into nproc discrete data slabs, each consisting of an n × n × (n/nproc) voxel subportion of the original data array. Only a distinct portion of the array resides on each CPU at any given time, enabling data sets much larger than the memory of each individual node to be computed, and the distributed-memory nature of the FFT is exploited through parallelization of all steps in the reconstruction code. Standard message passing interface52 (MPI) commands are used to communicate data between processes.
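The slab decomposition can be illustrated without MPI: a 3D FFT factors into local 2D FFTs on each slab, followed by an exchange of data between processes (the global transpose that an MPI code performs with all-to-all communication) and 1D FFTs along the remaining axis. The single-process sketch below verifies this factorization against a direct 3D FFT; it is only an illustration of the principle, not the dist_fft implementation:

```python
import numpy as np

def slab_fft3(x, nproc=4):
    """3D FFT organized as in a slab-decomposed distributed code: each 'process'
    owns n/nproc planes and performs 2D FFTs locally; the FFT along the
    distributed axis is done after the data are exchanged (here a trivial
    concatenation, an all-to-all transpose in a real MPI code)."""
    n = x.shape[0]
    assert n % nproc == 0
    slab = n // nproc
    # Stage 1: local 2D FFTs on each slab.
    slabs = [np.fft.fft2(x[r * slab:(r + 1) * slab], axes=(1, 2))
             for r in range(nproc)]
    y = np.concatenate(slabs, axis=0)
    # Stage 2: 1D FFTs along the formerly distributed axis.
    return np.fft.fft(y, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 8)) + 1j * rng.standard_normal((8, 8, 8))
print(np.allclose(slab_fft3(x), np.fft.fftn(x)))   # True
```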

We ran the fully parallelized reconstruction code on a 16-node 2.0-GHz dual-processor (32 processors total) Macintosh Xserve G5 cluster with 4 GB RAM per node. To maximize interprocess communication speed, we used high-speed, low-latency Mellanox Infiniband interconnects to carry message passing interface traffic between compute nodes. On this cluster, the processing time for a 512³ array is 2.2 s/iteration with the HIO phase retrieval algorithm, and an acceptable 3D reconstruction can be produced in under 2500 iterations for a total computation time of 2.5 h on a 512³ grid. The individual FFT timing and total reconstruction time for typical array sizes on this cluster are given in Table 2.

4. IMAGE RECONSTRUCTION

A. Three-Dimensional Images

A full 3D image is obtained by performing phase retrieval on the entire 3D diffraction data set. The resulting volume image reveals the structure of the object in all three dimensions and can be visualized in many ways, including forming projections through the data or slices (tomographs) of the data. Specific segmentation analyses can be carried out on the volume image to determine properties such as strength of materials.53 3D reconstructions were performed by interpolating the diffraction intensities at u_{i,j,φ} onto a 1024³ grid. Representations of the interpolated diffraction intensities are given in Fig. 4. Note that the 1° angular increments of the object rotation are just less than four times larger than the 0.27° requirement of Eq. (6) for this object and that we have a 40° sector of missing data due to our limited range of object orientations, as well as data lost to the beam stop. The effect of the 1° rotation increment is apparent in Fig. 4(b), where the gaps between the measured Ewald spheres are seen in the u_x–u_y plane (referred to as a central section) extracted from the data cube. The limited range of views is readily apparent in Fig. 4(c), which shows the u_x–u_z central section.

Table 1. Minimum Memory Footprint Required for Iterative 3D Phase Retrieval for Various Array Sizes^a

Array Size    Single Precision    Double Precision
256³          336 MB              592 MB
512³          2.6 GB              4.6 GB
1024³         21 GB               37 GB
2048³         168 GB              296 GB

^a The arrays required are the input diffraction data (floating point), the current and previous iterates (complex single- or double-precision floating-point data), and the support constraint (byte data).

Table 2. Computing Times Using a Cluster-Based Fourier Transform and Reconstruction Code on 16 G5 Dual-Processor Xserve Compute Nodes^a

Array Size    Time per 3D Fourier Transform    Time per 3D Reconstruction
256³          73 ms                            10 min
512³          850 ms                           1.5 h
1024³         7.9 s                            14 h

^a Fourier transform timings are wall time per individual FFT. Reconstruction timings are for a complete 3D reconstruction consisting of 2000 iterations of HIO phase retrieval, with two FFTs per iteration plus other operations required to calculate the reconstruction.


The 3D phase retrieval code described above in Subsection 3.D was applied to the assembled 3D data to produce a full 3D reconstruction from the diffraction cube. We applied the Shrinkwrap algorithm, as described in Subsection 2.C, to determine the 3D support mask and the diffraction phases. We performed phase retrieval by using either the real-positive real-space constraint P_{S+} or the support constraint P_S. For the complex image reconstruction, as with the case of reconstruction from central sections discussed below in Subsection 4.C, the solution was regularized by first applying the high-pass filter of Eq. (19) to the diffraction intensities. For the real-positive reconstruction the missing amplitudes were unconstrained and were allowed to be recovered by the algorithm. The reconstruction success with the sparsity of data that we have in this case is undoubtedly due to the sparseness of the object itself. In essence the object is a membrane, and the 3D speckles are elongated by up to 50 pixels in directions perpendicular to the pyramid faces, as can clearly be discerned in Fig. 4(c).

A 3D isosurface rendering of the real-positive constrained reconstructed pyramid is shown in Fig. 2(c), where we also display a slice from the volume image in Fig. 2(d). Three images from the 3D pyramid image are shown in Fig. 5, showing the maximum value projection, along the three orthogonal axes, of the real part of the 3D image that was reconstructed by using the support constraint with real positivity, P_{S+}. Each pixel of this image is given by the maximum value encountered along the path that projects onto that pixel, illustrating a visualization method available only with the full 3D image array. In initial reconstruction trials using only a support constraint P_S, we observed a linear ramp in the imaginary component. This was essentially a phase ramp, due to a shift of the diffraction data in Fourier space caused by an inaccurate measurement of the location of the zero spatial frequency (the direct beam) on the CCD. We shifted the diffraction data by an amount that minimized the real-space phase shift, which required shifting the data by half-pixel amounts. This recentering of the data was necessary before we could apply the real-positive constraint on the image. Further analysis of the images is carried out in Section 5. Movies of the reconstructed image and 3D diffraction intensities are available on the Web.54
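Sub-pixel recentering of this kind can be done with the Fourier shift theorem; a minimal sketch (the array name and the half-pixel amounts are illustrative, and the sketch is not the original code):

```python
import numpy as np

def subpixel_shift(arr, shift):
    """Shift an N-dimensional array by (possibly fractional) pixel amounts using
    the Fourier shift theorem: transform, multiply by a linear phase ramp, and
    transform back. A positive entry in `shift` moves features to higher indices."""
    arr = np.asarray(arr, dtype=complex)
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in arr.shape], indexing="ij")
    ramp = np.exp(-2j * np.pi * sum(f * s for f, s in zip(freqs, shift)))
    return np.fft.ifftn(np.fft.fftn(arr) * ramp)

# e.g. recenter a diffraction-amplitude cube by half a pixel in two axes
# (hypothetical array name; the small imaginary residue is discarded):
# amplitudes = subpixel_shift(amplitudes, (0.5, 0.5, 0.0)).real
```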

B. Two-Dimensional Images

2D images are useful for visualizing and quantifying objects, and most diffraction imaging experiments performed to date have been 2D. However, if the object is thick, then the interpretation of an image reconstructed from a single Ewald sphere is not trivial. Most notably, as compared with our full reconstructed 3D image, the 2D image will have defocus artifacts that do not diminish in power with displacement of the object along the beam axis. However, in some cases, obtaining a full 3D reconstruction may not be achievable, for example when imaging nonreproducible objects with single pulses of an x-ray free-electron laser. It is thus instructive to compare 2D images reconstructed from single-view diffraction patterns with the 3D image.

We first consider how thin an object must be to be considered 2D. In a 2D reconstruction from a single diffraction pattern, the spatial frequencies that are passed to the image are constrained to lie on the Ewald sphere according to

q_z = 1/\lambda - \sqrt{1/\lambda^2 - q_x^2 - q_y^2} \approx (\lambda/2)(q_x^2 + q_y^2), \qquad (20)

where the approximation is for small scattering angles, or q_x ≪ 1/λ. We can define the numerical aperture of the diffraction pattern as NA = q_{x,max} λ, in analogy with imaging

Fig. 4. (Color online) (a) Rendering of the entire 3D diffraction data set. (b) Central slice of the data set in the plane rotated by −24° about the y axis from the u_x–u_y plane. (c) Central slice of the data set in the u_x–u_z plane. All patterns are displayed on a logarithmic gray scale (white corresponds to highest intensity). The half-width of each pattern is u_{x,max} = 0.048 nm⁻¹.


with a lens (of square aperture for the case of a square detector, with the NA defined here along the half-width of the square rather than the diagonal), which gives the expression for the maximum longitudinal distance of the Ewald surface, q_{z,max} ≈ NA²/(2λ). For a 2D object of thickness D → 0, the 3D transform will be independent of the longitudinal frequency q_z (rods in the z direction), and so the measurement on the Ewald sphere is equivalent to measurement of the 2D diffraction intensities |O(q_x, q_y, 0)|². In such a case there will be no artifact in the image caused by not actually measuring intensities on q_z = 0. An object of finite thickness D will have a coherent diffraction pattern with speckles of width 1/D in the longitudinal direction. If, at the highest transverse frequencies, the Ewald sphere substantially cuts through a speckle that is centered at q_z = 0, then the measurement will again be equivalent to the 2D diffraction intensities on the q_z = 0 plane. That is, we can consider an object to be thin or 2D if the Ewald departure is no more than 1/(4D), or half the speckle half-width, which corresponds to

D < \frac{\lambda}{2\,\mathrm{NA}^2}, \qquad (21)

or, equivalently, the thickness D must be less than a depth of focus. For the experiments with the pyramid object at λ = 1.65 nm and NA = 0.084, this thickness limit is D = 120 nm, which is considerably smaller than the 1.8 µm thickness of the pyramid.
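As a check of the numbers quoted above, relation (21) with the stated wavelength and numerical aperture gives:

```python
wavelength = 1.65          # nm
NA = 0.084                 # along the detector half-width
D_max = wavelength / (2 * NA**2)
print(f"thickness limit D = {D_max:.0f} nm")   # ~117 nm, quoted as 120 nm above,
                                               # versus the 1.8 um pyramid thickness
```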

Relation (21) does not imply, however, that diffraction imaging performs optical sectioning where only the parts of the object located within the depth of focus are imaged. The thickness limit simply implies that the 2D single-pattern image of an object thicker than D will contain artifacts due to the information that is cut off by the transfer function. Consider an object containing two parts (e.g., screens) that are separated by more than a depth of focus. As with coherent imaging with the equivalent aberration-free thin lens, partial information from both screens of that object will be transferred in the imaging process. In fact, in diffraction imaging, there is not necessarily any preferred image plane, since, by the Fourier shift theorem, a shift Δz of an object along the beam axis z will cause only a phase shift given by −2πΔz q_z, and hence there will be no change to the diffracted intensities (for small enough Δz that the change in distance to the detector does not change the effective NA and scale the pattern on the detector). Note that from relation (20) the phase shifts of the 2D spatial frequencies of the image, due to the defocus Δz, will be πΔzλ(q_x² + q_y²), as expected from the

Fresnel propagator.55 The position of the focal plane can be chosen in the phase retrieval step, a fact that was demonstrated computationally and experimentally by Spence et al.56 In that work the focus of the retrieved image of an object of two screens separated by some depth could be chosen by setting a tight support for the features in one screen or the other. As shown by Spence et al., once the phases of the diffraction intensities have been retrieved,

Fig. 5. Maximum value projections along three orthogonal directions of the reconstructed 3D image. Projections were performed along the (a) z, (b) x, and (c) y directions. (d) Enlarged region of (a) for comparison with Fig. 8 below. The 3D image was reconstructed with reality and positivity constraints. The scale bars are 500 nm.


images can be generated at any position through focus by Fresnel propagating the image wave field (equivalent to applying the appropriate quadratic phase term to the diffraction phases).
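A sketch of this numerical refocusing, using the quadratic phase factor quoted above (the sign of dz simply selects the propagation direction; the function name, pixel size, and usage line are illustrative assumptions):

```python
import numpy as np

def refocus(image, dz, wavelength, pixel_size):
    """Numerically propagate a reconstructed complex 2D image by dz along the
    beam axis by applying the quadratic phase pi*dz*wavelength*(qx^2 + qy^2)
    to its Fourier transform. All lengths share the same units (e.g. nm)."""
    ny, nx = image.shape
    qx = np.fft.fftfreq(nx, d=pixel_size)
    qy = np.fft.fftfreq(ny, d=pixel_size)
    QX, QY = np.meshgrid(qx, qy)                  # shapes (ny, nx)
    phase = np.exp(1j * np.pi * dz * wavelength * (QX**2 + QY**2))
    return np.fft.ifft2(np.fft.fft2(image) * phase)

# A through-focus series in 50 nm steps over +/- 2.5 um, as in Fig. 6
# (10 nm pixels assumed for illustration):
# series = [refocus(img, dz, 1.65, 10.0) for dz in np.arange(-2500.0, 2501.0, 50.0)]
```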

The defocus effects of a single view are illustrated in Fig. 6, where we show 2D images of the wave field at the pyramid object, reconstructed from a single-view diffraction pattern. In this example we use the diffractogram for the object rotated by φ = 24° from the head-on (plan-view) orientation. The image ρ̄_M reconstructed by Shrinkwrap, from the single-view diffraction pattern, is shown in Fig. 6(a). No real-space reality or positivity constraint was applied, and the reconstructed image is complex. For this object and view the edges of the object (its support) are at a range of heights along the beam axis. In this case the end-point support that the Shrinkwrap algorithm arrived at was tightest around the balls halfway along the arms of the pyramid, and consequently this is the plane of best focus. This focal plane gives the greatest overall image contrast, which explains why Shrinkwrap converges to it. The complex image can be numerically propagated, by convolution with the Fresnel propagator, by any arbitrary amount Δz. We generated a series of numerically refocused images, where Δz varies between −2.5 and +2.5 µm in 50 nm steps. As the refocus distance is increased, the best focus moves along the pyramid arms to their extremities. The difference in focus of balls near the vertex and arm extremities can be seen in Figs. 6(d) and 6(e), which show x–Δz lineouts of the real part of the complex image. The difference between the best focus for these two cases is 1.2 µm, which agrees with the 3D image (Subsection 4.A) and the known geometry of the pyramid. It should be

noted that this computational focusing does not constitute 3D imaging but is simply the propagation of a 2D coherent field. The coherent transfer function (CTF) for the 2D image formed from a single-view diffraction pattern is confined to the Ewald surface, and in this situation with coherent illumination the integrated intensity of this 2D image does not change with defocus (a consequence of Parseval's theorem and the invariance of the diffraction intensities with defocus). That is, it is unlikely that numerical defocusing of a complicated object gives results that could be as easily interpreted as those for the pyramid-membrane test object used here. This situation is unlike partially coherent imaging in a microscope, where out-of-focus objects of compact support contribute less power to the image and some optical sectioning can be carried out.16

Another consequence of the "defocus artifact" of 2D images is that the 2D image of a thick real object is complex, which means that a real-space positivity constraint cannot be applied during the phase retrieval process. A positivity constraint, when valid, is known to be very effective in deriving the diffraction phases; it is important in direct methods in crystallography and is a strong constraint in diffraction imaging. Here a real object is one in which the object's 3D complex transmission function o(x) is real, to within a multiplicative complex constant. Propagation of the out-of-focus parts of the object to the selected image plane will give rise to a large variation in the complex values of the image, as demonstrated in Fig. 7. Here we show the complex amplitudes of images recovered from calculated diffraction patterns of simulated objects. The simulated objects consisted of gold balls of equal size, distributed in a

Fig. 6. Real part of the image reconstructed from (a) a single-view diffraction pattern, and real part of the image formed by numerically propagating (a) by (b) −0.5 µm and (c) +0.7 µm. Lineouts from the image near (d) the pyramid center and (e) the arm extremity for a range of propagation from −2.5 to +2.5 µm. The locations of these lineouts are indicated by arrows in (a). The difference in the plane of best focus for these two image locations is apparent. The scale bars are 500 nm.


similar way to that for the pyramid test object. In the first case [Fig. 7(a)] the z coordinate of all ball centers was set to zero to construct a quasi-2D object. Ignoring the arbitrary phase shift, the reconstructed image is real although not strictly positive (the negativity of the image is due to the truncation of the diffraction pattern). The calculated image values are complex for the 3D object [Fig. 7(b)], and there is a rough correlation between the absolute value and phase of the values. This nonreality can also be explained by the curvature of the Ewald sphere. The 3D diffraction magnitudes of a real object are centrosymmetric, whereas the Ewald sphere does not cut through both O(u) and O(−u).57 In general a positivity constraint will be applicable only for the full 3D image, 2D projections (discussed in Subsection 4.C), 2D images of thin objects, and 2D images of objects with a mirror-plane symmetry.

If the object can be considered 2D and positive, a positivity constraint will have the effect of focusing the image. Usually the support constraint is loose, and even if the shape of the object is well known or determined from the Shrinkwrap algorithm, for example, there may still be room for a defocused image to be contained within the support. The degree of defocus allowed by the support depends on how tight it is. The defocused image of a real 2D object with sharp edges or high frequencies will be real but include negative values. The focused image will be that which is most positive, and hence a positivity constraint will select that image. This is true not only for defocus but for any other low-order aberration (astigmatism, coma, etc.). That is, without the positivity constraint, there are slowly varying phase modes that cannot be determined, and the number of these modes depends on how tight the support is. The same argument applies for 3D reconstruction of positive 3D objects. In this case the phase errors will be low-order 3D modes, which cannot be thought of as focus or other aberrations of an optical system but are simply unconstrained phase errors in Fourier space.

C. Two-Dimensional Projection Images

Defocus in a 2D image formed from a single diffraction pattern is a consequence of the Ewald sphere CTF, as described in Subsection 4.B. The focal plane of the image may be varied by multiplying the Fourier transform of the 2D image by a quadratic phase term. In a full 3D reconstruction, there is no concept of defocus. A shift of the object by Δz along the beam axis causes the phase ramp −2πΔz u_z across the 3D transform. This causes a shift of the image, no different from shifts Δx or Δy in the other dimensions. There is no optical axis in the reconstructed 3D image, so there is no defocus. Similarly there is no defocus in a 2D projection image formed by integrating the 3D image along a specific direction. A 2D projection may be recovered from the diffraction intensities without having to first undergo a full 3D reconstruction, and we find that this is a useful step in quickly examining our 3D data sets. By the Fourier projection theorem the projection image is formed from a central section in reciprocal space; e.g., the plane u_z = 0 gives the projection along the z axis. We have performed phase retrieval on central sections of the pyramid diffraction data by first extracting the appropriate diffraction intensities from all views. One example of a central section is shown in Fig. 4(b), which was generated by linear interpolation of measured intensities at

Fig. 7. Distributions of the real-space complex amplitudes ρ̄_M, in the Argand plane, of simulated single-view coherent images for (a) a 2D and (b) a 3D object consisting of 50-nm-diameter gold balls for an x-ray wavelength of 1.6 nm. Distributions of complex amplitudes of images reconstructed from experimental data are given for (c) the 2D projection image shown in Fig. 8, (d) the single-view 2D image of Fig. 6, and (e) the full 3D image. Cases (c) and (d) were reconstructed by using P_S, and (e) by using P_{S+}.


u_{i,j,φ} onto the u_z = 0 plane. The projection images that we reconstruct from experimental data are superior to the reconstruction on a single Ewald sphere. One example is shown in Figs. 8(a) and 8(b), which can be compared with Fig. 6. In the projection images, balls at the apex of the pyramid are similar to the balls at the base, whereas in the single-view image the balls at the apex appear out of focus. The image of Figs. 8(a) and 8(b) was obtained with the Shrinkwrap algorithm (parameters given in Subsection 2.C) after first regularizing by filtering the diffraction intensities according to Eq. (19). The missing data in the arc-shaped regions seen in Fig. 4(b) were allowed to float in the reconstruction of the complex image, according to Eq. (12).
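The Fourier projection theorem invoked here is easily checked numerically; a toy sketch with a random volume (in the experiment the central section is instead interpolated from the measured views, as described above):

```python
import numpy as np

rng = np.random.default_rng(1)
obj = rng.random((64, 64, 64))                    # stand-in for a 3D volume image

# Projection along z by direct summation ...
proj = obj.sum(axis=2)

# ... and via the projection theorem: the uz = 0 central section of the 3D
# transform is the 2D transform of the z projection.
proj_from_section = np.fft.ifft2(np.fft.fftn(obj)[:, :, 0]).real

print(np.allclose(proj, proj_from_section))       # True
```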

Figures 8(a) and 8(b) depict the real part of the complex image, and the distribution of complex values of the reconstructed image is given in Fig. 7(c). As compared with the distribution of complex values of a reconstructed image from a single-view diffraction pattern, given in Fig. 7(d), the values of the projection image are clustered closer to the real axis. This is as expected, since there are no defocus artifacts, and the object consists mostly of a single material (gold), as was simulated in Figs. 7(a) and 7(b). Thus we should be able to apply the reality and positivity constraints of Eq. (14) to the projection image, to further improve it and allow this extra information to help reconstruct the spatial frequency amplitudes and phases that are missing behind the beam stop. This was the case for Figs. 2(b) and 8(c), which is the real-positive constrained reconstruction from the same central section as that for Fig. 8(a). In this case the diffraction data were not filtered. This reconstruction was carried out with the same support mask derived by Shrinkwrap and used for the reconstruction of the complex image. Since they were so constrained, the complex amplitudes of the image were distributed along the real axis, with some deviation from real for smaller amplitudes that could be attributed to noise and scattering material other than gold (e.g., the silicon nitride pyramid).

5. IMAGE ANALYSIS

Both the reconstructed x-ray 2D projection image described in Subsection 4.C and the 3D image described in Subsection 4.A clearly show good agreement with the SEM image of Fig. 2(a). When we overlay a semitransparent version of the projection image of Fig. 2(b) on the SEM image of Fig. 2(a), we see that the locations of all balls visible in the SEM match the balls visible in the x-ray image to within a pixel. In the x-ray volume image, however, we can locate more balls than are visible in the SEM image. The slice image of Fig. 2(d) reveals that the gold balls of the object are not entirely a single layer; rather, the arms of the structure are several balls deep in places. The balls were deposited on the inside surface of the silicon nitride pyramid, and it is clearly seen that these balls are indeed flush with the intersecting edges of the pyramid. The regions where the balls are layered give rise to a higher projected image intensity, which shows up as brighter regions in the projection image of Fig. 2(b). We confirm that the 3D pyramid geometry determined from the reconstructed volume image is consistent with the manufacture of the pyramid. We measure an included angle between opposite faces of the pyramid of 70° ± 1°, compared with the included angle of 70.5° between the {111} crystal planes of silicon. The 3D image does not appear to contain the pyramid-shaped silicon nitride membrane itself, which in principle should be reconstructed as a predominantly phase object (with a phase of 1.3° per voxel). However, due to the missing data behind the beam stop, we observe only the high-frequency edges of the pyramid, which are faintly visible in projection images.

The volume images display good spatial resolution in the x, y, and z dimensions. Quantifying resolution is not straightforward, since we do not have an exactly known 3D standard (the SEM shows only the surface of the object, for example, and cannot reveal the 3D structure). We estimate the resolution of our images by examining both their Fourier and real-space representations. In Fourier space we base measures of resolution on the signal-to-noise ratio (SNR) of measured diffraction intensities and the consistency of recovered phases, whereas in real space we fit models to 3D images of isolated balls.

A. Reconstruction Consistency and Resolution

The performance of our imaging technique could be quantified in Fourier space, in principle, by measuring the CTF. For the numerical reconstruction technique used here, this CTF would encapsulate resolution limits due to SNR, data alignment, and regions of missing data, as well as algorithm stability and uniqueness. The direct computation of the CTF would require computing the ratio of the image Fourier amplitudes to the Fourier amplitudes of the actual object, which again requires an accurate quantitative model of the actual object structure at high

Fig. 8. Extremely large depth-of-focus projection images for the object orientation φ = 0°. (a) Reconstruction from a 2D central section interpolated from the 3D diffraction data set. The reconstruction was performed without a positivity constraint (E_S² = 0.167). (b) Enlargement of the lower right arm of (a). (c) [and also Fig. 2(c)] Reconstruction from the 2D central section using a positivity constraint (E_S² = 0.072). (d) Projected image formed by integrating the full 3D reconstructed image (E_S² = 0.113). The scale bars are 500 nm.


resolution. Without such a model we can base an estimate of the upper limit of the CTF cutoff on the SNR of the measured diffraction data plotted in Fig. 9. The largest spatial frequency used in the interpolated 3D diffraction data set (recorded near the corner of the CCD) is at u_max = √2 N Δq = 0.068 nm⁻¹. At this resolution shell we recorded an average of about 1 photon/pixel, and obtained a SNR of 1 at u = 0.062 nm⁻¹. (Since the noise level of our camera is considerably less than 1 photon, we assume that the noise in our diffraction patterns is determined by photon shot noise.) If we assume hypothetically that the diffraction phases are known, then the image can be fully represented, without loss of information, with a pixel sampling of Δq = 1/D, where D is the width of the object, corresponding to s = 1, and so we could rebin our oversampled data into larger pixels with a correspondingly higher photon count. Summing in this way over pixels (referred to as pixel binning) is not the same as resampling, however, and such an operation would multiply the autocorrelation image by the Fourier transform of the summed pixel shape, which is a function that falls from unity at the image center to 2/π at the edge of the autocorrelation image. The effect could be deconvolved from the pattern, but we avoid that by binning to a pixel sampling of Δq = 1/(sD), with s = 2, which is the critical sampling interval of the object's autocorrelation function. The measured data were collected at s = 4.6, so resampling to s = 2 gives an average of 1 photon/pixel (SNR = 1) at u = 0.066 nm⁻¹. If we take as a measure of resolution the frequency at which the SNR of the rebinned data is unity, then we find that the average 3D cutoff is 0.066 nm⁻¹, or a smallest resolvable half-period of 7.5 nm. This is very close to the smallest half-period of 7.3 nm limited by the detector NA.

The phase retrieval process recovers the diffraction phases with a limited accuracy, due to factors including the SNR of the diffraction amplitudes, missing data, the inconsistency of constraints, and systematic errors in the data (such as errors in interpolation). These errors in phase reduce the resolution of the synthesized image.

With a complex image a loose support constraint will lead to unconstrained low-order aberrations, for example, as was discussed in Subsection 4.B. In our case of reconstructing complex 2D images, with low frequencies missing due to the beam stop, we have observed that phase retrieval from independent random starts may differ by a phase vortex (right or left handed) centered at the zero spatial frequency. This too has the effect of reducing the image resolution. One way to quantify the effect of these phase variations is to determine the correlation between phases retrieved from independent random starts of the phase retrieval algorithm. For example, we could compute the differential phase residual of two such solutions in the same way that independent images are compared in cryoelectron microscopy (Ref. 58, Chap. 3, Sect. B). Since we have the ability to compute an unlimited number of reconstructions from independent random starts, a more appropriate choice is to determine the variation in retrieved phases as a function of resolution, as suggested by Shapiro et al.36 More specifically, the average of the independent complex reconstructions is computed, and the square of the Fourier amplitudes of this average is compared with the measured diffraction intensities. Where the phases are consistently retrieved to the same value, the squared modulus of the average will be equal to the constrained modulus and the ratio will be unity. Where the phases are random and completely uncorrelated, the average will approach zero. Thus the ratio is effectively a transfer function for the phase retrieval process, and the average image is the best estimate of the image: spatial frequencies are weighted by the confidence with which their phases are known.36 It should be possible to obtain even better estimates of the complex amplitudes by a more detailed analysis of the distributions of retrieved phases over independent trials, for example by applying patching and voting methods as described by Fienup and Wackerman.59

All 2D and 3D images displayed in this paper are averages of more than 300 independent phase retrieval trials. That is, the best estimate of the image is given by

\bar{\rho}_M = \langle \rho_M \exp(i\varphi_0) \rangle, \qquad (22)

where ⟨ ⟩ denotes an average over independent reconstructions. Analogous to the modulus of the CTF of a coherent imaging system, we define the phase retrieval transfer function (PRTF) as

\mathrm{PRTF}(u) = \frac{|\mathcal{F}_u[\bar{\rho}_M]|}{\sqrt{I(u)}} = \frac{|\langle \tilde{\rho}_M(u)\exp(i\varphi_0)\rangle|}{\sqrt{I(u)}}, \qquad (23)

where ρ̃_M is the diffraction amplitude with retrieved phases, the Fourier transform of Eq. (18). Plots of the PRTF, averaged over shells of constant u where I(u) is nonzero, are shown in Fig. 10(a) for the 3D image of Fig. 5 and for the 2D projection image of Fig. 8(a).
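A sketch of how the PRTF of Eq. (23) can be evaluated numerically (array and function names are illustrative; the reconstructions are assumed to have already been adjusted to a common global phase, as described below):

```python
import numpy as np

def prtf(reconstructions, measured_modulus, bins=50):
    """Phase retrieval transfer function of Eq. (23): the Fourier modulus of the
    average of globally phase-aligned reconstructions, divided by the measured
    modulus and then averaged over shells of constant spatial frequency.
    Both inputs are assumed to be in numpy's unshifted FFT ordering."""
    avg = np.mean(reconstructions, axis=0)              # average real-space image
    ratio = np.abs(np.fft.fftn(avg)) / np.maximum(measured_modulus, 1e-12)
    grids = np.meshgrid(*[np.fft.fftfreq(n) for n in avg.shape], indexing="ij")
    u = np.sqrt(sum(g**2 for g in grids))
    shell = np.minimum((u / u.max() * bins).astype(int), bins - 1)
    valid = measured_modulus > 0                         # skip missing data
    num = np.bincount(shell[valid], weights=ratio[valid], minlength=bins)
    cnt = np.bincount(shell[valid], minlength=bins)
    return num / np.maximum(cnt, 1)
```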

When we compute the average image ρ̄_M, the arbitrary multiplicative phase constant φ_0 of each image must be adjusted to a common value so that the random variation of this constant does not reduce the average, which would result in a low value of the transfer function.

Fig. 9. 3D diffraction intensities I(u), averaged over shells of constant u, in units of average photon count per CCD pixel. The average over constant u of the 3D SNR of the measured intensities is shown as a dotted–dashed curve.


We do this for the first reconstructed image ρ_M^(0) by finding the constant phase that maximizes the real part of that image, which can be achieved by finding the value φ_0 that maximizes60

\Gamma = \sum_k \left\{ \mathrm{Re}\!\left[\rho_M^{(0)}(k)\exp(i\varphi_0)\right] \right\}^2 = \sum_k \left[ 2\,|\rho_M^{(0)}(k)|^2 + \left(\rho_M^{(0)}(k)\right)^2 \exp(2i\varphi_0) + \left(\rho_M^{(0)}(k)\right)^{*2} \exp(-2i\varphi_0) \right]/4 \qquad (24)

for an image with pixels (or voxels) indexed by integers k and complex values ρ_M(k). We maximize the square of the real part to allow for positive and negative real values. The value Γ can be maximized by maximizing either the second or the third term of Eq. (24), and we do so by finding the phase φ of the complex value Σ_k [ρ_M^(0)(k)]² and setting φ_0 = −φ/2. The subsequent images ρ_M are adjusted by finding the constant phase φ_1 that minimizes Σ_k |ρ_M^(0)(k) − ρ_M(k) exp(iφ_1)|². This phase is that which maximizes Σ_k Re[(ρ_M^(0)(k))* ρ_M(k) exp(iφ_1)], and is obtained from the phase of the complex value Σ_k (ρ_M^(0)(k))* ρ_M(k).
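A minimal sketch of this global phase alignment, with sign conventions chosen so that the stated quantities are indeed maximized (array names are illustrative, and the sketch is not the original code):

```python
import numpy as np

def align_global_phase(images):
    """Remove the arbitrary global phase of each reconstruction before averaging.
    The first image is rotated so that the sum of the squared real part, Eq. (24),
    is maximized (phi_0 = -arg(sum rho^2)/2); every other image is then rotated
    to best match the first, using the phase of sum(conj(rho_0) * rho)."""
    ref = images[0] * np.exp(-0.5j * np.angle(np.sum(images[0]**2)))
    aligned = [ref]
    for img in images[1:]:
        phi = np.angle(np.sum(np.conj(ref) * img))
        # multiplying by exp(-i*phi) maximizes Re[sum(conj(ref) * img * exp(-i*phi))]
        aligned.append(img * np.exp(-1j * phi))
    return aligned
```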

In the case of 2D images we also improve the average by separating out the vortex modes mentioned above. This was achieved by using the voting method,59 simply by correlating each phase retrieval solution with the previous solutions and separating the solutions into three classes (which were found to differ by left- and right-handed phase vortices) based on the value of the correlation. We found that the class with the most frequent members (60% of trials) gave rise to the best image, whereas the other two classes were equally frequent (20% each) and gave rise to images for which the balls were larger, had bright edges, and showed reduced intensity at their centers. Based on the appearance of the balls, we assumed that the most frequent class was the one that did not have a vortex mode. The effect of removing the vortex modes from the average image is shown in Fig. 10(a). As is seen in that figure, the PRTF is uniformly increased across all frequencies. This is due to the fact that the left- and right-handed vortex modes sum together to give a variation of the modulus that varies as the cosine of the azimuthal angle in the diffraction pattern and that averages to zero around the complete circle for each u.

The resolution cutoff of the phase retrieval process can be given by the spatial frequency at which the PRTF extrapolates to zero. For all cases here, this frequency is greater than the measurement cutoff of u_max = 0.068 nm⁻¹, or a half-period resolution length of 7.4 nm. A more conservative estimate of the resolution is given by the frequency at which the PRTF reaches a value of 0.5. For the vortex-corrected 2D reconstruction this occurs just at u_max, but for the 3D image this corresponds to 0.048 nm⁻¹, or a half-period resolution length of 10.4 nm. In this case the average resolution cutoff is worse than in the 2D case because the 3D PRTF is diminished along the u_z direction where the diffraction data are missing (which reduces the average over the u shell). This is illustrated in Fig. 10(b), where we display the 3D PRTF as the surface on which it has a value of 0.5. The PRTF is not defined in the regions of missing diffraction data, which are seen as

the missing wedges in the surface. It is seen that the resolution is approximately the same in all directions of u where intensities were measured.

When applied to the average image ρ̄_M, the modulus constraint error E_M² of Eq. (17) is equal to the intensity-weighted integral over u of |1 − PRTF(u)|². That is, it gives a single measure of how well the diffraction intensities of the average image agree with the measurement. This is generally higher than the metric E_M² applied to the iterate g_n, which gives an estimate of how well the algorithm fits the intensity data. The value of E_M² applied to the average 3D image is 0.368, and 0.059 for the average 2D projection image that was corrected for vortex phase errors (0.312 without vortex correction).

Fig. 10. (Color online) (a) Phase retrieval transfer function (PRTF), averaged over shells of constant u, for the real-positive 3D image (solid curve) and averaged over circles of constant u for the complex 2D projection image (dotted–dashed curves). The dotted–dashed curve with lower values is for the 2D projection image without correction of vortex phase modes. (b) Isosurface rendering of the 3D PRTF at a threshold level of 0.5. The tick marks on the u_y and u_z axes indicate 0.05 nm⁻¹.


We expect that a similar correction of low-order phase modes in the 3D image would lead to a similar improvement in the error metric, and the relatively high value of E_M² for the average 3D image is a result of the overall filtering due to the variation of these low-order phase modes.

We can also compute the agreement of the average image ρ̄_M with the real-space support constraint error E_S² of Eq. (16). We find a value of 0.228 when the computation is applied to the average 3D image and 0.167 for the average 2D complex-valued projection image reconstructed from the central section. Note, however, that in the 3D image the support S accounts for 0.10% of the image voxels whereas S covers 4.1% of the pixels in the projection 2D images, and so the average error per pixel outside the support is much less for the 3D than for the 2D reconstruction. We find that, with the addition of the real-space positivity constraint, E_S² of the average 2D projection image decreases from 0.167 to 0.072. However, in this case the modulus constraint error E_M² increases from 0.059 to 0.172.

B. Real-Space Resolution

The measures of resolution from the SNR and PRTF reveal the effects of noise, the consistency of the diffraction data, and how well the image obeys the imposed constraints. These measures are contributors to the overall image resolution. A direct measure of a lower limit of resolution can be obtained by examining the images of isolated and closely spaced gold balls. Lineouts of the isolated ball located on the lower left arm of Fig. 5(a) are shown in Fig. 11 for all three orthogonal directions. The ball image has full widths at half-maximum (FWHM) of 30, 35, and 70 nm in the x, y, and z directions, respectively. Images of other isolated balls in the object are very similar to those shown in Fig. 11. Assuming that the balls are 50 nm in diameter, we obtain a good fit to the images by modeling a coherent imaging system with a CTF that is unity within a cube of half-width 0.05 nm⁻¹ (centered at the zero frequency) and that is zero within a sector of 60° as rotated about the y axis and centered about the z axis. Lineouts of the modeled coherent images, computed by convolving an isolated 50-nm sphere with the Fourier transform of the CTF (that is, the point spread function, or PSF), are shown as dotted curves in Fig. 11. The FWHMs of the modeled coherent image are 36, 40, and 64 nm in the x, y, and z directions, respectively, in good agreement with the reconstructed image. We do not expect the model to be an exact fit to the data, since the actual PSF is more complicated and depends on the details of the phase retrieval, which is better characterized by the PRTF in Fig. 10. However, the fits are reasonable, and the widths of the modeled PSF are in good agreement with the measures of resolution obtained from analysis of the diffraction intensities and recovered phases. The modeled PSF, given by the Fourier transform of the CTF, has a half-width of 10 nm × 10 nm × 40 nm. Here the half-width is defined as the distance from the central maximum of the PSF to the first zero. Since the imaging process is coherent, the image width depends on the phase of the PSF, which has different distributions for the x and y directions. This explains the variations of image width in the x and y directions and why the image FWHMs in these directions are in fact smaller than the ball diameter. As expected, the resolution is much worse in the z direction than in the x and y directions, due to the missing sector of data that arises from recording diffraction over a limited range of angles.
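A sketch of this model image calculation (the grid spacing and array size are illustrative choices, not the values used for the reconstruction; only the cube half-width, the ball diameter, and the 60° missing sector follow the description above):

```python
import numpy as np

# Model coherent image of a 50 nm ball: the CTF is unity inside a cube of
# half-width 0.05 nm^-1 centered on zero frequency, except within a 60 deg
# sector (rotated about the y axis, centered on the z axis) where it is zero.
n, dx = 128, 5.0                                    # voxels per side, nm per voxel
x = (np.arange(n) - n // 2) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
ball = (X**2 + Y**2 + Z**2 <= 25.0**2).astype(float)  # 50 nm diameter sphere

u = np.fft.fftfreq(n, d=dx)                         # spatial frequencies, nm^-1
UX, UY, UZ = np.meshgrid(u, u, u, indexing="ij")
cube = (np.abs(UX) < 0.05) & (np.abs(UY) < 0.05) & (np.abs(UZ) < 0.05)
in_sector = (UX**2 + UZ**2 > 0) & (
    np.degrees(np.arctan2(np.abs(UX), np.abs(UZ))) < 30.0)
ctf = cube & ~in_sector

# Coherent image = ball convolved with the PSF (the Fourier transform of the CTF).
img = np.fft.fftshift(np.fft.ifftn(np.fft.fftn(np.fft.ifftshift(ball)) * ctf)).real
lineout_x = img[:, n // 2, n // 2]                  # compare widths with Fig. 11
```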

6. SUMMARY AND CONCLUSIONS

We have presented 3D images reconstructed ab initio from coherent x-ray diffraction, which exhibit high resolution in all three dimensions. These images are the highest-resolution 3D images of nonperiodic objects where the resolution is comparable in all three dimensions. The work presented here marks an important advance in that we have fully demonstrated the experimental methods to collect 3D coherent x-ray diffraction and the computational tools to manage the data and to reconstruct 3D images of more than 10⁹ voxels.

The coherent x-ray diffraction recorded from our 3D test object comprised 140 views, at 1° intervals, and extended to a maximum spatial frequency of 0.068 nm⁻¹, or a smallest reconstructible half-period of 7.4 nm. Although we cannot exactly quantify the resolution of the image, which would require knowing the object's 3D structure, we have determined the consistency of the retrieved phases, which gives us an estimate of an upper bound of the CTF of the imaging process. Our analysis shows that we can consistently retrieve phases out to the maximum spatial frequency recorded. This consistency measure

Fig. 11. Lineouts of the real part of the reconstructed complex-amplitude 3D image, for three orthogonal directions (a) x, (b) y, and (c) z, through the isolated single ball at the pyramid apex. Coordinates are relative to the center of the 3D image array. Dotted curves show lineouts from a simulated 3D coherent image with a cube CTF with a 60° missing sector.


does not tell us anything about systematic errors, such as interpolation of the data, errors in assigning spatial frequency u to the intensities (imperfect knowledge of the beam center), and missing data due to the beam stop or limited range of object orientations. However, we easily resolve 50-nm spheres that are touching each other, and from such image lineouts, and comparisons of reconstructed x-ray images with the SEM image, we have confidence that our achieved image resolution is close to our upper estimate.

We have found that our Shrinkwrap algorithm,6 which determines the object support ab initio, is robust and works well even with missing Fourier-space data due to limited object orientations or the beam stop. The phase retrieval process can be essentially characterized by a 3D transfer function (the PRTF), which is influenced by the noise of the measured diffraction intensities. While the algorithm also allows the amplitudes at the locations of missing data to be recovered, these values are not consistently reconstructed and are averaged to zero, leaving worse resolution in the depth (z) direction. We expect that with a data set collected over the full range of sample orientation angles we would achieve equal resolution in all three dimensions. As it is, we obtained an estimate of 10 nm in x and y and 50 nm in z.

We have shown that high-NA x-ray coherent imaging of thick objects can properly be carried out only in the context of three dimensions. Here we define high-NA imaging of thick objects to be imaging under conditions that lead to a depth of focus less than the depth of the object in any of its orientations. Since the imaging is coherent, a 2D image of a thick object in any view will exhibit defocus artifacts that do not diminish in overall power with the degree of defocus and that lead to difficulties in the interpretation of the image. In addition, these artifacts cause the image of a real-positive object, for example, to be complex, hence hampering quantitative evaluation of the image. 2D images free from defocus artifacts can be quickly generated from central sections extracted from the diffraction data. 3D images are synthesized from the entire 3D diffraction data set. The tools are now in place to perform full 3D reconstructions of thick samples. Currently we have reconstructed arrays with almost 2 × 10⁹ elements. If the minimum oversampling of ∛2 is used in each dimension, then this would correspond to objects of width 9.5 µm at a pixel spacing of 10 nm, or a half-period resolution length of 7 nm along the diagonal. When single-particle x-ray free-electron laser imaging at atomic resolution becomes feasible, then these demonstrated computational capabilities could be used to reconstruct objects of 480-nm width at 0.7-nm resolution, for example. This would correspond to a large virus, or a large protein complex such as the ribosome.

ACKNOWLEDGMENTS

We thank Ray Mariella (LLNL) for the idea of using a silicon nitride pyramid as a test object and Jackie Crawford and Dino Ciarlo (LLNL) for determining its manufacturing process. We thank Janos Kirz (LBNL and Stony Brook) for technical advice and extensive discussions about our experiments. We acknowledge stimulating discussions with Abraham Szöke (LLNL), Gösta Huldt (Uppsala Universitet), and Eugene Ingerman (CBST). We gratefully acknowledge Richard Crandall and the Advanced Computation Group (Apple Computer, Inc.) for the development of the dist_fft software. We acknowledge the support of the Advanced Light Source (ALS) staff. This work was performed in part under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory, under contract W-7405-Eng-48. The work of the Lawrence Berkeley National Laboratory participants and the operation of the ALS facility was supported by the Director, Office of Energy Research, Office of Basic Energy Sciences, Materials Sciences Division, U.S. Department of Energy, under contract DE-AC03-76SF00098. This work has been supported by funding from the National Science Foundation. The Center for Biophotonics, an NSF Science and Technology Center, is managed by the University of California, Davis, under cooperative agreement PHY 0120999. This work was also partly supported by the Department of Energy under grant DEFG0202ER45996.

Corresponding author e-mail address: [email protected].

REFERENCES1. D. Sayre and H. N. Chapman, “X-ray microscopy,” Acta

Crystallogr., Sect. A: Found. Crystallogr. 51, 237–252(1995).

2. D. Sayre, H. N. Chapman, and J. Miao, “On theextendibility of x-ray crystallography to noncrystals,” ActaCrystallogr., Sect. A: Found. Crystallogr. 54, 232–239(1998).

3. J. Miao, P. Charalambous, J. Kirz, and D. Sayre,“Extending the methodology of x-ray crystallography toallow imaging of micrometre-sized non-crystallinespecimens,” Nature (London) 400, 342–344 (1999).

4. I. K. Robinson, I. A. Vartanyants, G. J. Williams, M. A.Pfeifer, and J. A. Pitney, “Reconstruction of the shapes ofgold nanocrystals using coherent x-ray diffraction,” Phys.Rev. Lett. 87, 195505 (2001).

5. G. J. Williams, M. A. Pfeifer, I. A. Vartanyants, and I. K.Robinson, “Three-dimensional imaging of microstructure inAu nanocrystals,” Phys. Rev. Lett. 90, 175501 (2003).

6. S. Marchesini, H. He, H. N. Chapman, S. P. Hau-Riege, A.Noy, M. R. Howells, U. Weierstall, and J. C. H. Spence,“X-ray image reconstruction from a diffraction patternalone,” Phys. Rev. B 68, 140101 (2003).

7. S. Marchesini, H. N. Chapman, S. P. Hau-Riege, R. A.London, A. Szoke, H. He, M. R. Howells, H. Padmore, R.Rosen, J. C. H. Spence, and U. Weierstall, “Coherent x-raydiffractive imaging: applications and limitations,” Opt.Express 11, 2344–2353 (2003).

8. H. He, S. Marchesini, M. Howells, U. Weierstall, H.Chapman, S. Hau-Riege, A. Noy, and J. C. H. Spence,“Inversion of x-ray diffuse scattering to images usingprepared objects,” Phys. Rev. B 67, 174114 (2003).

9. M. R. Howells, T. Beetz, H. N. Chapman, C. Cui, J. M.Holton, C. J. Jacobsen, J. Kirz, E. Lima, S. Marchesini, H.Miao, D. Sayre, D. A. Shapiro, and J. C. H. Spence, “Anassessment of the resolution limitation due to radiation-damage in x-ray diffraction microscopy,” arxiv.org e-printarchive, physics/0502059, February 11, 2005, http://arxiv.org/pdf/physics/0502059.

10. R. Neutze, R. Wouts, D. van der Spoel, E. Weckert, and J.Hajdu, “Potential for biomolecular imaging withfemtosecond x-ray pulses,” Nature (London) 406, 753–757(2000).

11. J. Miao, K. O. Hodgson, and D. Sayre, “An approach tothree-dimensional structures of biomolecules by using

1198 J. Opt. Soc. Am. A/Vol. 23, No. 5 /May 2006 Chapman et al.

Page 21: High-resolution ab initio three-dimensional x-ray ...xrm.phys.northwestern.edu/research/pdf_papers/2006/chapman_josaa_2006.pdfHigh-resolution ab initio three-dimensional x-ray diffraction

single-molecule diffraction images,” Proc. Natl. Acad. Sci.U.S.A. 98, 6641–6645 (2001).

12. J. C. H. Spence and R. B. Doak, “Single moleculediffraction,” Phys. Rev. Lett. 92, 198102 (2004).

13. W. S. Haddad, I. McNulty, J. Trebes, E. Anderson, R.Levesque, and L. Yang, “Ultrahigh-resolution x-raytomography,” Science 266, 1213–1215 (1994).

14. D. Weiss, G. Schneider, B. Niemann, P. Guttmann, D.Rudolph, and G. Schmahl, “Computed tomography ofcryogenic biological specimens based on x-ray microscopicimages,” Ultramicroscopy 84, 185–197 (2000).

15. C. A. Larabell and M. A. Le Gros, “X-ray tomographygenerates 3-D reconstructions of the yeast, Saccharomycescerevisiae, at 60-nm resolution,” Mol. Biol. Cell 15,957–962 (2004).

16. N. Streibl, “Three-dimensional imaging by a microscope,” J.Opt. Soc. Am. A 2, 121–127 (1985).

17. T. Beetz, M. Howells, C. Jacobsen, C. Kao, J. Kirz, E. Lima,T. Mentes, H. Miao, C. Sanchez-Hanke, D. Sayre, and D.Shapiro, “Apparatus for x-ray diffraction microscopy andtomography of cryo specimens,” Nucl. Instrum. MethodsPhys. Res. A 545, 459–468 (2005).

18. M. R. Howells, P. Charalambous, H. He, S. Marchesini, andJ. C. H. Spence, “An off-axis zone-plate monochromator forhigh-power undulator radiation,” in Design andMicrofabrication of Novel X-Ray Optics, D. C. Mancini, ed.,Proc. SPIE 4783, 65–73 (2002).

19. C. Giacovazzo, Direct Phasing in Crystallography (OxfordU. Press, 1998), p. 468.

20. R. Crandall, E. Jones, J. Klivington, and D. Kramer,“Gigaelement FFTs on Apple G5 clusters,” AdvancedComputation Group, Apple Computer (2004), http://images.apple.com/acg/pdf/20040827_GigaFFT.pdf.

21. M. Born and E. Wolf, Principles of Optics, 7th ed.(Cambridge U. Press, 2002).

22. R. W. James, The Optical Principles of the Diffraction ofX-Rays (Bell, 1962).

23. E. Wolf, “Three-dimensional structure determination ofsemi-transparent objects from holographic data,” Opt.Commun. 1, 153–156 (1969).

24. J. Kirz, C. Jacobsen, and M. Howells, “Soft x-raymicroscopes and their biological applications,” Q. Rev.Biophys. 28, 33–130 (1995).

25. F. Natterer, “An error bound for the Born approximation,”Inverse Probl. 20, 447–452 (2004).

26. R. N. Bracewell, The Fourier Transform and Its Applications, 2nd ed. (McGraw-Hill, 1986).

27. J. Miao, T. Ishikawa, E. H. Anderson, and K. O. Hodgson, “Phase retrieval of diffraction patterns from noncrystalline samples using the oversampling method,” Phys. Rev. B 67, 174104 (2003).

28. J. Miao, D. Sayre, and H. N. Chapman, “Phase retrieval from the magnitude of the Fourier transforms of nonperiodic objects,” J. Opt. Soc. Am. A 15, 1662–1669 (1998).

29. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

30. J. C. H. Spence, U. Weierstall, and M. Howells, “Coherence and sampling requirements for diffractive imaging,” Ultramicroscopy 101, 149–152 (2004).

31. R. Crowther, D. DeRosier, and A. Klug, “The reconstruction of a three-dimensional structure from its projections and its applications to electron microscopy,” Proc. R. Soc. London, Ser. A 317, 319–340 (1970).

32. E. Salerno, “Superresolution capabilities of the Gerchberg method in the band-pass case: an eigenvalue analysis,” Int. J. Imaging Syst. Technol. 9, 181–188 (1998).

33. Y. Nishino, J. Miao, and T. Ishikawa, “Image reconstruction of nanostructured nonperiodic objects only from oversampled hard x-ray diffraction intensities,” Phys. Rev. B 68, 220101 (2003).

34. T. Sato, S. J. Norton, M. Linzer, O. Ikeda, and M. Hirama, “Tomographic image reconstruction from limited projections using iterative revisions in image and transform spaces,” Appl. Opt. 20, 395–399 (1981).

35. M. Bertero and E. R. Pike, “Resolution in diffraction-limited imaging, a singular value analysis. I. The case of coherent illumination,” Opt. Acta 29, 727–746 (1982).

36. D. Shapiro, P. Thibault, T. Beetz, V. Elser, M. Howells, C. Jacobsen, J. Kirz, E. Lima, H. Miao, A. M. Neiman, and D. Sayre, “Biological imaging by soft x-ray diffraction microscopy,” Proc. Natl. Acad. Sci. U.S.A. 102, 15343–15346 (2005).

37. A. Szoke, H. Szoke, and J. R. Somoza, “Holographic methods in x-ray crystallography. V. Multiple isomorphous replacement, multiple anomalous dispersion and noncrystallographic symmetry,” Acta Crystallogr., Sect. A: Found. Crystallogr. 53, 291–313 (1997).

38. S. P. Hau-Riege, H. Szoke, H. N. Chapman, A. Szoke, S. Marchesini, A. Noy, H. He, M. Howells, U. Weierstall, and J. C. H. Spence, “SPEDEN: reconstructing single particles from their diffraction patterns,” Acta Crystallogr., Sect. A: Found. Crystallogr. 60, 294–305 (2004).

39. D. Potts, G. Steidl, and M. Tasche, “Fast Fourier transforms for nonequispaced data: a tutorial,” in Modern Sampling Theory: Mathematics and Applications, J. J. Benedetto and P. Ferreira, eds. (Springer, 2001), Chap. 12, pp. 249–274.

40. H. Choi and D. C. Munson, Jr., “Direct-Fourier reconstruction in tomography and synthetic aperture radar,” Int. J. Imaging Syst. Technol. 9, 1–13 (1998).

41. F. Natterer, The Mathematics of Computerized Tomography (SIAM, 2001).

42. A. J. Devaney, “A filtered backpropagation algorithm for diffraction tomography,” Ultrason. Imaging 4, 336–350 (1982).

43. S. Pan and A. Kak, “A computational study of reconstruction algorithms for diffraction tomography: interpolation versus filtered-backpropagation,” IEEE Trans. Acoust., Speech, Signal Process. ASSP-31, 1262–1275 (1983).

44. H. He, S. Marchesini, M. Howells, U. Weierstall, G. Hembree, and J. C. H. Spence, “Experimental lensless soft-x-ray imaging using iterative algorithms: phasing diffuse scattering,” Acta Crystallogr., Sect. A: Found. Crystallogr. 59, 143–152 (2003).

45. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4, 118–123 (1987).

46. J. R. Fienup, “Phase retrieval and support estimation in x-ray diffraction,” in Coherence 2005: International Workshop on Phase Retrieval and Coherent Scattering (ESRF, 2005), http://www.esrf.fr/NewsAndEvents/Conferences/Coherence2005/Proceedings/files/Talks/Fienup.pdf.

47. T. R. Crimmins, J. Fienup, and B. J. Thelen, “Improved bounds on object support from autocorrelation support and application to phase retrieval,” J. Opt. Soc. Am. A 7, 3–13 (1990).

48. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

49. V. Elser, “Phase retrieval by iterated projections,” J. Opt. Soc. Am. A 20, 40–55 (2003).

50. D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Probl. 21, 37–50 (2005).

51. M. Frigo and S. G. Johnson, “The design and implementation of FFTW3,” Proc. IEEE 93, 216–231 (2005) (special issue on “Program Generation, Optimization, and Platform Adaptation”).

52. “The message passing interface (MPI) standard,” http://www-unix.mcs.anl.gov/mpi/.

53. A. J. Ladd, J. H. Kinney, D. L. Haupt, and S. A. Goldstein, “Finite-element modeling of trabecular bone: comparison with mechanical testing and determination of tissue modulus,” J. Orthop. Res. 16, 622–628 (1998).

54. “Movies of three-dimensional diffraction data and reconstruction of the pyramid test object,” http://als.lbl.gov/esg_beamlines/cxdi.

55. J. M. Cowley, Diffraction Physics (North-Holland, 1981).

56. J. C. H. Spence, U. Weierstall, and M. Howells, “Phase recovery and lensless imaging by iterative methods in optical, x-ray and electron diffraction,” Philos. Trans. R. Soc. London, Ser. A 360, 875–895 (2002).

57. G. Huldt, Biomedical Centre, Uppsala Universitet (personal communication, 2005).

58. J. Frank, Three-Dimensional Electron Microscopy of Macromolecular Assemblies (Academic, 1996).

59. J. R. Fienup and C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A 3, 1897–1907 (1986).

60. J. R. Fienup, “Invariant error metrics for image reconstruction,” Appl. Opt. 36, 8352–8357 (1997).
