Abstract—Cone-beam CT (CBCT) offers the capability for
novel, point-of-care imaging platforms dedicated and
optimized to specific diagnostic tasks, many with stringent
demands on quantitative accuracy, image uniformity, and
detectability of low-contrast soft-tissue structures that exceed
conventional image quality limits. For example, a CBCT
system is now under development for imaging of traumatic
brain injury at the point-of-care, employing a compact
scanning geometry and requiring a new level of image quality /
radiation dose performance beyond that of previous CBCT
applications. Among the major challenges to image quality,
uniformity, and dose is x-ray scatter, motivating the
development of a high-speed, high-fidelity scatter estimation
and correction methodology to yield artifact-free images with
contrast resolution sufficient for the task of detecting small,
fresh bleeds (~1 mm diameter, 50 Hounsfield Unit contrast).
We report a fast and accurate approach for Monte Carlo
(MC) based scatter correction that advances computational
speed to practical levels and accuracy in scatter fluence
estimation well beyond that of simple parametric approaches.
A novel methodology combining GPU acceleration, variance
reduction techniques, simulations with a low number of photons
and a reduced number of projection angles (sparse MC),
augmented by kernel denoising, yields a computation time of
~2 min. Uniformity in reconstructions of a realistic head
phantom is improved by ~60% compared to an uncorrected
image and by ~20% compared to an "oracle" correction based
on a constant scatter fraction. The sparse MC framework is also
suitable for integration with novel reconstruction methods (e.g.,
model-based penalized weighted least squares) under
development to advance CBCT image quality beyond
conventional limits to the level required by challenging
applications in brain imaging.
Index Terms—X-ray Scatter, Monte Carlo Simulation,
Artifact Correction, CT Reconstruction, Head CT.
I. INTRODUCTION
Increased awareness of the healthcare burden of traumatic brain injury (TBI), estimated to result in >$76B in direct and indirect costs, has generated growing interest in imaging technologies for assessment of brain injury directly at the point of care or even within the environments where TBI frequently occurs (e.g., athletic venues or the theater of war). Such technology could also find application in other settings where visualization of acute brain injury is essential for proper diagnosis and treatment, e.g., in assessment of concussion or intracranial hemorrhage in the Emergency Department or Intensive Care Unit. Flat-panel detector (FPD)-based CBCT systems provide an excellent platform for development of point-of-care imaging. Research underway at our institution is developing such technology for high-quality imaging of the brain in platforms well suited to such challenging application areas and imaging tasks [Fig. 1(A)]. One of the significant challenges in such applications is the required level of contrast and image uniformity. For the task of detecting fresh intraparenchymal blood associated with acute injury, the contrast is ~50 Hounsfield Units (HU), and a bleed can be as small as ~1 mm in diagnosis of mild TBI. As shown in Fig. 1(B), current-generation CBCT systems can detect such contrast levels for >2 mm detail size, but lack image uniformity compared to conventional CT [Fig. 1(C)]. This loss of uniformity is largely caused by shading artifacts due to the increased scatter inherent to CBCT. The proposed imaging system involves a compact geometry to facilitate portability, further increasing scatter magnitude. Moreover, the need to minimize radiation dose likely prohibits the use of an antiscatter grid. Consequently, a scatter correction algorithm capable of high-accuracy scatter fluence estimation is essential to achieving the desired level of image quality in head CBCT. Monte Carlo (MC) simulations provide accurate scatter estimates, but have been considered too computationally expensive for application in scatter correction. Recently, MC simulation engines have been successfully ported to GPUs [1, 2], providing a convenient parallel platform for fast MC on desktop computers.
Wojciech Zbijewski, Alejandro Sisniega, J. Webster Stayman, John Yorkston, Nafi Aygun, Vassili Koliatsos, and Jeffrey H. Siewerdsen
A Sparse Monte Carlo Method for High-Speed, High-Accuracy Scatter Correction for Soft-Tissue Imaging in
Cone-Beam CT
Fig. 1. High-quality CBCT head imaging. (A) Mock illustration of a dedicated CBCT scanner for application in the ICU and other point-of-care settings for high-quality imaging of the head and neck, intracranial hemorrhage, and traumatic brain injury. (B) Reconstruction of a head phantom with simulated bleeds obtained on a current generation CBCT employing scatter grid and basic scatter and beam hardening corrections. (C) Reconstruction of the same phantom on clinical CT.
This work was supported in part by an academic-industry partnership with Carestream Health (Rochester, NY). W. Zbijewski, A. Sisniega, J. W. Stayman, and J. H. Siewerdsen are with the Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21212 USA (phone: 410-955-1305; fax: 410-955-1115; e-mail: [email protected]). J. Yorkston is with Carestream Health, Rochester, NY. N. Aygun is with the Dept. of Radiology and V. Koliatsos is with the Dept. of Neurology, Johns Hopkins University, Baltimore, MD.
The third international conference on image formation in X-ray computed tomography Page 401
We have combined GPU-based MC simulation of x-ray scatter with GPU-optimized variance reduction (VR) techniques, introduced in the general theory of MC, which provide acceleration by improving the signal-to-noise ratio (SNR) of the scatter estimates obtained within a given simulation time. Further acceleration of MC is possible by decreasing the number of tracked photons. The resulting speed-up is offset by decreased SNR in the estimates, but successful de-noising of the scatter distributions with 3-dimensional iterative Richardson-Lucy fitting has been demonstrated [3]. This approach exploits the smoothness of scatter in the detector plane and in projection angle. Here we employ a non-iterative (and thus potentially faster) de-noising algorithm (kernel smoothing, KS). We combine this approach with GPU-enabled MC simulation with VR and investigate additional speed-up through a reduction in the number of simulated projections (sparse angular sampling). The proposed approach is tested on experimental data pertinent to head CBCT, demonstrating accurate correction within 2 min of simulation time.
II. METHODS
A. GPU-Accelerated Monte Carlo Simulator
The GPU implementation of the MC x-ray transport model is based on MC-GPU v1.1 (code.google.com/p/mcgpu/). In-house additions to the simulator [2] include a probabilistic model of tungsten anode x-ray spectra with arbitrary filtration and an analytical model of energy-dependent detection in a CsI:Tl scintillator. The GPU-accelerated MC package employs variance reduction through Interaction Splitting, whereby every interacting photon is split into several virtual photons, followed by Forced Detection, whereby the virtual photons are deterministically ray-traced toward randomly selected detector pixels. The implementation of VR was optimized for parallel execution on a GPU [2]. For a head CBCT geometry, VR achieved ~6× improvement in SNR over MC-GPU with no VR at equal runtime [2].
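The scoring step of Interaction Splitting with Forced Detection can be sketched as follows. This is a minimal illustration of the weight bookkeeping only, not the paper's MC-GPU implementation: the function name, the fixed attenuation factor `p_reach` (which stands in for the ray-traced attenuation and angular scattering probability along each forced path), and the uniform pixel selection are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def forced_detection(weight, n_split, n_pixels, p_reach):
    """Illustrative Interaction Splitting + Forced Detection scoring.

    An interacting photon of statistical weight `weight` is split into
    n_split virtual photons; each virtual photon is deterministically
    "ray-traced" to a randomly selected detector pixel and deposits its
    attenuated share of the weight there. In a real simulator, p_reach
    would be computed per ray from the scattering differential cross
    section and the attenuation along the path to the pixel.
    """
    scores = np.zeros(n_pixels)
    for _ in range(n_split):
        px = rng.integers(n_pixels)                  # random target pixel
        scores[px] += (weight / n_split) * p_reach   # attenuated weight share
    return scores
```

Because each virtual photon always contributes to the detector (rather than being detected only by chance), every history yields signal, which is the source of the SNR gain at equal runtime.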
B. A Fast MC Scatter Correction Pipeline
The proposed scatter correction pipeline is illustrated in Fig. 2. In the initialization phase, the initial CBCT reconstruction is segmented by simple thresholding, and a fixed nominal density ("piecewise" segmentation) is assigned to each tissue (bone, soft-tissue, and air). GPU-accelerated MC simulation of the segmented reconstruction is performed using an extremely low number of photons (simulation times of <1 min). The resulting scatter estimates are too noisy to yield accurate correction even with de-noising, but are sufficient to estimate the mean scatter per projection, allowing for a baseline correction.
The corrected reconstruction is then segmented using a second-pass "continuous" tissue model, in which the density of each tissue is allowed to vary linearly with the HU value. The initial, baseline correction facilitates this approach by reducing gross HU inaccuracies. The use of the continuous model is essential to, e.g., avoid under-correction due to over-estimation of scatter absorption in the skull (which occurs when the skull is simulated as a uniform layer of cortical bone).
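The two segmentation passes can be sketched as below. The HU thresholds, nominal densities, and the linear HU-to-density ramp for bone are illustrative assumptions (the paper does not quote its calibration values); only the structure of the piecewise vs. continuous models follows the text.

```python
import numpy as np

def segment_piecewise(vol_hu):
    """First-pass 'piecewise' model: threshold the reconstruction into
    air / soft tissue / bone and assign each class a fixed nominal
    density (g/cm^3). Thresholds and densities are illustrative."""
    density = np.empty_like(vol_hu, dtype=float)
    density[vol_hu < -500] = 0.0012                      # air
    density[(vol_hu >= -500) & (vol_hu < 300)] = 1.0     # soft tissue
    density[vol_hu >= 300] = 1.92                        # cortical bone
    return density

def segment_continuous(vol_hu):
    """Second-pass 'continuous' model: density varies linearly with HU
    within the bone class, so the skull is not treated as a uniform
    layer of cortical bone (which would over-estimate scatter absorption)."""
    density = segment_piecewise(vol_hu)
    bone = vol_hu >= 300
    # Hypothetical linear ramp: density rises with HU above the bone threshold.
    density[bone] = 1.0 + vol_hu[bone] * (0.92 / 1500.0)
    return density
```

The baseline correction matters here because the linear HU-to-density mapping is only meaningful once gross HU shifts from scatter have been removed.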
GPU-based MC is then applied to the segmented volume. The number of photons is again sparse – low enough to yield practical correction times – but greater than in the initialization phase, so that the resulting scatter distribution S_MC can be de-noised to yield an accurate scatter estimate for the correction (S_MC-KS):
\[
S_{MC\text{-}KS}(u, v, \theta) = \frac{\sum_i K_{KS}(u, v, \theta, u_i, v_i, \theta_i)\, S_{MC}(u_i, v_i, \theta_i)}{\sum_i K_{KS}(u, v, \theta, u_i, v_i, \theta_i)} \qquad (1)
\]
where u, v are the detector coordinates, θ is the projection angle, and the index i runs through the scatter sinogram. K_KS is a 3-dimensional Gaussian kernel applied to the distance between two points in (u, v, θ) space; the FWHM of the kernel is denoted σ_KS. In the studies presented here, the numerical values of σ_KS in the spatial and angular dimensions are equal when expressed in pixels for (u, v) and degrees for θ. Note that Eq. (1) allows estimation of the signal at locations (u, v, θ) outside the lattice (u_i, v_i, θ_i) at which the noisy input data (S_MC) are provided. This allows estimation of the complete scatter sinogram from simulations performed at only a sparse subset of projection angles, with no need for interpolation.
Fig. 2. Workflow of the proposed rapid MC-based scatter correction algorithm, consisting of an initial baseline correction facilitating accurate segmentation in the subsequent correction, which employs rapid MC-GPU with a low number of photon tracks and de-noising of the scatter estimates.
C. Experimental Setup
An FPD-based imaging bench is configured in the geometry envisioned for head CBCT, as shown in Fig. 1(A). The SDD is ~700 mm, and the SAD is ~500 mm. The system employs a PaxScan 4343 FPD (Varian Imaging Products, Palo Alto, CA) with a 250 mg/cm2 CsI:Tl screen and 0.139 × 0.139 mm2 native pixel size; the data are binned to 0.556 × 0.556 mm2 pixels. The studies involved a head phantom consisting of a natural skull embedded in uniform water-equivalent (Rando) body material (The Phantom Laboratory, Salem, NY) with spherical contrast-detail patterns in the cranium (~1–15 mm diameter range, approximately −100 to +200 HU contrast range). The phantom is imaged at 100 kVp (+2 mm Al, +0.2 mm Cu) and 0.25 mAs/projection; 360 projections are collected at 1° increments.
Reconstructions employed the Feldkamp algorithm with 0.5 × 0.5 × 0.5 mm3 voxels, Hann apodization, and a cutoff at 0.5 of the Nyquist frequency. In each case, bone-induced beam hardening was corrected (after scatter correction) using a variation of the two-pass Joseph-Spital approach [4]. GPU-based MC scatter estimation was executed on an Nvidia GTX 780 Ti GPU with 2880 CUDA cores and 3 GB on-board memory (Nvidia, Santa Clara, CA).
The accuracy of the correction was assessed by evaluating the uniformity of the reconstructions. A region-of-interest (ROI) consisting of the complete intracranial volume was chosen inside a 10 mm thick slab in the superior orbital region of the head, where the cranium is filled only with the brain-equivalent Rando material. The ROI was thus expected to be uniform; deviations from uniformity were measured by computing the standard deviation of voxel values in the ROI (denoted nonuniformity, NU). While this metric includes the effects of both noise and artifact, it is assumed that for the same projection dataset, changes in NU primarily reflect changes in artifact level, with lower values of NU indicating more uniform images.
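The NU metric is straightforward to compute; a minimal sketch is shown below. The helper for building the slab-restricted ROI is hypothetical (the paper does not describe how the intracranial mask was obtained), but the metric itself is exactly the ROI standard deviation described above.

```python
import numpy as np

def nonuniformity(recon, roi_mask):
    """NU: standard deviation of voxel values within the ROI. For a fixed
    projection dataset, lower NU is read as less shading artifact."""
    return float(np.std(recon[roi_mask]))

def slab_roi(shape, z0, z1, intracranial_mask):
    """Hypothetical helper: restrict an intracranial mask to the axial
    slab of slices [z0, z1), e.g. a 10 mm slab in the superior orbital
    region where only brain-equivalent material is present."""
    slab = np.zeros(shape, dtype=bool)
    slab[z0:z1] = True
    return slab & intracranial_mask
```

Because NU mixes noise and artifact, comparisons are only meaningful between corrections applied to the same projection dataset, as noted in the text.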
III. RESULTS
Fig. 4(A) shows a gold-standard MC result computed from the head phantom reconstruction with 10^11 photons/projection (no VR was applied). A noisy distribution obtained with 10^8 photons/projection (and 1000× shorter runtime) [Fig. 4(B)] is successfully restored with KS to a level of noise and detail similar to the gold standard, as shown in Fig. 4(C). Fig. 5 demonstrates the relationship between the uniformity of the scatter-corrected reconstructions, the number of simulated photon histories, and the size of the smoothing kernel (again without VR). The size of σ_KS yielding minimal NU increased with decreasing number of photons, reflecting the increased noise in the MC scatter estimates. Very small kernels (σ_KS = 10 pixels/degrees, likely smaller than the typical scatter PSF) are insufficient even for 10^8 photons/projection, whereas simulation with a very low number of photons yielded artifacts in the corrected images even for σ_KS close to the optimum [Fig. 5(B)]. For large kernels, the performance was similar across the entire range of photon histories; in this case, MC-KS converged to a correction with uniform scatter intensity per projection and exhibited overcorrection.
Fig. 5. (A) The nonuniformity NU in reconstructions obtained using MC-KS scatter correction as a function of the number of simulated photons and the size of the 3D smoothing kernel σ_KS. (B) MC-KS scatter-corrected reconstructions obtained using MC with 10^6 photons/projection (top row) and 10^8 photons/projection (bottom row) and various sizes of the smoothing kernel.
Fig. 4. (A) A gold-standard MC scatter simulation of the head phantom using 10^11 photons/projection is compared with an MC simulation with 10^8 photons/projection (B), along with the corresponding result of 3D KS (C).
The scatter-corrected reconstruction obtained with MC-KS at 10^8 photons/projection in Fig. 5 achieved a high level of uniformity, but the simulation time (~50 min) is prohibitive for many clinical applications. While further reduction in the number of simulated photons without sacrificing uniformity is certainly possible [as demonstrated, e.g., by the 10^7 photons/projection MC-KS curve in Fig. 5(A)], an alternative approach was investigated that combines MC with a low number of photons and sparse sampling of the projection angles. Example results for the head phantom are shown in Fig. 6. Reconstruction of a thinly collimated scan with reduced scatter is compared with a reconstruction with no scatter correction (the initial volume in the scatter correction pipeline of Fig. 2), a reconstruction with an "oracle" constant-scatter-fraction correction (scatter in each projection estimated as a constant fraction of the signal in the center of the object shadow, with the fraction estimated from an MC simulation), and three examples of MC-KS. For MC-KS, the kernel sizes were chosen to minimize NU. MC-KS with no VR, 10^8 photons/projection, and no projection subsampling shows considerable improvement over the uncorrected image (~60% reduction in NU), noticeable improvement over the "oracle" correction (~20% reduction in NU and reduced artifacts), and uniformity similar to (or slightly better than) the collimated scan (for which no NU estimate is available due to the limited field of view). A very similar level of uniformity was achieved by MC-KS correction with 5×10^7 photons/projection (no VR) and simulation of every 5th projection, requiring less than 5 min of simulation time per scan. An even shorter simulation time of only 2 min/scan achieved a comparable level of uniformity and artifact when MC-KS was combined with variance reduction, using fewer photons in the VR simulation but achieving a noise level similar to that of plain MC with ~30× more photon histories.
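The "oracle" baseline used for comparison above can be sketched as follows. The function name, the center-pixel lookup, and the small clipping floor are illustrative assumptions; only the idea (scatter per projection = fixed fraction of the signal at the center of the object shadow, then subtracted) comes from the text.

```python
import numpy as np

def oracle_sf_correct(projections, scatter_fraction):
    """'Oracle' constant-scatter-fraction correction.

    For each projection, scatter is estimated as scatter_fraction times
    the signal at the center of the object shadow (the fraction itself
    would be taken from an MC simulation) and subtracted everywhere.
    """
    corrected = np.empty_like(projections)
    for k, proj in enumerate(projections):
        cy, cx = np.array(proj.shape) // 2       # center of the detector
        s_est = scatter_fraction * proj[cy, cx]  # flat scatter estimate
        # Clip to a small positive floor so the log transform in
        # reconstruction remains defined.
        corrected[k] = np.clip(proj - s_est, a_min=1e-6, a_max=None)
    return corrected
```

Because this estimate is flat across the detector and fixed per projection, it cannot capture the spatial structure of the scatter distribution, which is why MC-KS yields a further ~20% reduction in NU over it.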
IV. DISCUSSION
Monte Carlo-based scatter correction in head CBCT imaging was achieved within ~2 min of simulation time by combining sparse sampling in the number of photons and projection angles with variance reduction, GPU acceleration, and denoising via kernel smoothing. The ability to achieve accurate scatter estimation with sparse sampling of projections can be advantageous for MC acceleration because of the potentially better load balancing associated with tracking a large number of photons concentrated within fewer frames (compared to simulating fewer photons at all projection angles). Such tradeoffs are a subject of ongoing work. The proposed approach relies on accurate segmentation of the reconstructed volume; current results show ~20% degradation in uniformity when the continuous object model is replaced with the simpler piecewise model. Segmentation algorithms, tissue models, and associated calibration methods are under investigation. Integration with model-based image reconstruction for further enhancement of soft-tissue detectability will be pursued, initially as a pre-correction step in a Penalized Weighted Least Squares approach operating on line-integral data. The algorithm will be validated in a series of benchtop studies and deployed on the dedicated CBCT scanner currently under development for high-quality imaging of the head and neck.
REFERENCES
[1] A. Badal et al., "Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel GPU," Med. Phys., 2009.
[2] A. Sisniega et al., "Monte Carlo study of the effects of system geometry and antiscatter grids on cone-beam CT scatter distributions," Med. Phys., 2013.
[3] W. Zbijewski et al., "Efficient Monte Carlo based scatter artifact reduction in cone-beam micro-CT," IEEE TMI, 2006.
[4] P. M. Joseph et al., "A method for correcting bone induced artifacts in computed tomography scanners," J. Comput. Assist. Tomogr., 1978.
Fig. 6. Comparison of CBCT head phantom reconstructions without scatter correction, with a basic scatter correction, and with MC-KS scatter correction. Highly uniform reconstructions with significantly reduced artifacts are achievable within ~2 min/scan using accelerated MC simulations.