
RECPAD - 14th Portuguese Conference on Pattern Recognition, Aveiro, 23 October 2009


• The data exhibit a severe type of signal-dependent noise, assumed to obey a Poisson distribution
• Blur is neglected
• Independence of the observations is assumed
• Bayesian framework: MAP estimation
• X modeled as a Markov Random Field (MRF), with a Gibbs distribution for X
• Anisotropic prior terms
• log-Euclidean TV edge-preserving priors in space
• log-Euclidean priors in time
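The signal-dependent character of this noise model can be illustrated numerically. A minimal sketch (the intensity levels are hypothetical, not taken from the paper) showing that for Poisson data the variance tracks the mean, so brighter pixels are noisier in absolute terms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dim and bright intensity levels (pixel units)
x_dim, x_bright = 5.0, 50.0

# Simulate many Poisson observations of each level
y_dim = rng.poisson(x_dim, size=100000)
y_bright = rng.poisson(x_bright, size=100000)

# For a Poisson variable, variance equals the mean, so the noise
# level depends on the signal itself (signal-dependent noise).
print(y_dim.mean(), y_dim.var())        # both close to 5
print(y_bright.mean(), y_bright.var())  # both close to 50
```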

Denoising of Fluorescence Confocal Image Sequences: a Comparison Study

Abstract

Fluorescence laser scanning confocal microscopy (FLSCM) imaging is now a common biomedical tool that researchers use to study dynamic processes occurring inside living cells. Although fluorescence confocal microscopes are reliable instruments, the acquired images are usually corrupted by a severe type of Poisson noise due to the small amount of acquired radiation (low photon-count images) and to the huge optoelectronic amplification. These effects are even more pernicious when very low intensity incident radiation is used to avoid phototoxicity. In this work, a convex Bayesian denoising algorithm, using a log-Euclidean total variation regularization prior in space and a log-Euclidean regularization prior in time, is described to remove the Poisson noise corrupting the FLSCM images. Since model validation is a very important step, a comparison with five state-of-the-art algorithms is presented. Synthetic data were generated and denoised with the described algorithm and with each of the other five. Results using the Csiszár I-divergence and SNR figures-of-merit are presented.

Experimental Results

Problem Formulation

The data: an FLSCM image sequence of independent Poisson observations, $Y = \{y_{i,j,t} \geq 0\}$ with $(i,j,t) \in \{1,\dots,N\} \times \{1,\dots,M\} \times \{1,\dots,L\}$, where

$$p(y_{i,j,t} \mid x_{i,j,t}) = \frac{x_{i,j,t}^{\,y_{i,j,t}}\, e^{-x_{i,j,t}}}{y_{i,j,t}!}$$

The data fidelity term (negative log-likelihood, up to a constant $C$):

$$E_Y(X,Y) = -\log p(Y \mid X) = \sum_{i,j,t} \left( x_{i,j,t} - y_{i,j,t} \log x_{i,j,t} \right) + C$$

The energy function adds anisotropic log-Euclidean priors: an edge-preserving TV term in space and a quadratic term in time,

$$E(X,Y) = E_Y(X,Y) + E_X(X)$$

$$E_X(X) = \alpha \sum_{i,j,t} \sqrt{ \log^2\frac{x_{i,j,t}}{x_{i-1,j,t}} + \log^2\frac{x_{i,j,t}}{x_{i,j-1,t}} } + \beta \sum_{i,j,t} \log^2\frac{x_{i,j,t}}{x_{i,j,t-1}}$$

The optimization problem (MAP estimate):

$$\hat{X} = \arg\min_X E(X,Y)$$
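The energy E(X,Y) is straightforward to evaluate numerically. A minimal numpy sketch (array sizes and the regularization weights alpha and beta are illustrative choices, not the paper's values; boundary terms are dropped by using one-sided differences):

```python
import numpy as np

def energy(x, y, alpha=0.1, beta=0.1):
    """E(X,Y) = Poisson data fidelity + log-Euclidean TV (space)
    + log-Euclidean quadratic term (time).

    x, y: (N, M, L) arrays of intensities and Poisson counts, x > 0.
    """
    lx = np.log(x)
    # Poisson data-fidelity term (the constant log(y!) is dropped)
    fid = np.sum(x - y * lx)
    # spatial log-Euclidean TV: one-sided log differences along i and j,
    # cropped so the two difference arrays align
    di = np.diff(lx, axis=0)[:, :-1, :]
    dj = np.diff(lx, axis=1)[:-1, :, :]
    tv = np.sum(np.sqrt(di**2 + dj**2))
    # temporal log-Euclidean quadratic term
    temp = np.sum(np.diff(lx, axis=2)**2)
    return fid + alpha * tv + beta * temp

rng = np.random.default_rng(1)
x = np.full((8, 8, 4), 10.0)          # constant volume: both priors vanish
y = rng.poisson(x).astype(float)
print(energy(x, y))
```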


Isabel Rodrigues [1,2] (irodrigues@isr.ist.utl.pt) and João Sanches [1,3] ([email protected])

[1] Institute for Systems and Robotics, [2] Instituto Superior de Engenharia de Lisboa, [3] Instituto Superior Técnico, Lisbon, Portugal

Prox-it-Ans: a deconvolution algorithm for data blurred and degraded by Poisson noise. The Anscombe transform is used explicitly in the problem formulation, resulting in a nonlinear, convex AWGN deconvolution problem in the Bayesian framework, with a non-smooth sparsity-promoting penalty over the representation coefficients of the image to be restored in a dictionary of transforms (curvelets, wavelets). The solution is obtained with a fast proximal forward-backward splitting iterative algorithm.

Prox-it-Gauss: a naive version of Prox-it-Ans in which the Anscombe transform is applied first, as a preprocessing step.
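Both Prox-it variants rely on the Anscombe transform, which approximately converts Poisson noise into unit-variance Gaussian noise. A minimal sketch of the transform and its simple algebraic inverse (the lambda value is an arbitrary illustrative choice):

```python
import numpy as np

def anscombe(y):
    """Variance-stabilizing transform:
    Poisson(lam) -> approx N(2*sqrt(lam), 1) for moderate lam."""
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

def inverse_anscombe(a):
    """Simple algebraic inverse (biased at low counts; unbiased
    inverses exist but are more involved)."""
    return (a / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(2)
y = rng.poisson(20.0, size=100000)
a = anscombe(y)
print(a.var())   # close to 1 for moderate intensities
```

After denoising in the stabilized domain (where AWGN methods apply), the estimate is mapped back with the inverse transform.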

NLM (Non-Local Means): a non-local averaging technique that estimates each pixel as a weighted average over all pixels in the image with similar characteristics.

BiShrink: a locally adaptive 3-D image denoising algorithm using the dual-tree complex wavelet transform with a bivariate shrinkage thresholding function.

BLF (bilateral filter): a 2-D algorithm that smooths images while preserving edges by means of a nonlinear combination of nearby image values.
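The bilateral filter's nonlinear combination can be sketched directly: each output pixel is a normalized average of its neighbors, weighted by both spatial closeness and intensity similarity. A naive implementation (the parameter values are illustrative, not those used in the comparison):

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.2, radius=3):
    """Naive bilateral filter. Weights drop sharply across large
    intensity jumps, so edges are preserved while flat regions
    are smoothed."""
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed kernel
    padded = np.pad(img, radius, mode='edge').astype(float)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight: penalize intensity differences from the center
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# Step edge: the two flat halves stay flat and the edge stays sharp
img = np.zeros((16, 16))
img[:, 8:] = 1.0
out = bilateral_filter(img)
```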

The energy $E(X,Y)$ is non-convex in $X$. With the variable change $z_{i,j,t} = \log x_{i,j,t}$, the optimization problem becomes convex:

$$E(Z,Y) = \sum_{i,j,t} \left( e^{z_{i,j,t}} - y_{i,j,t}\, z_{i,j,t} \right) + \alpha \sum_{i,j,t} \sqrt{ (z_{i,j,t} - z_{i-1,j,t})^2 + (z_{i,j,t} - z_{i,j-1,t})^2 } + \beta \sum_{i,j,t} (z_{i,j,t} - z_{i,j,t-1})^2$$
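To make the convexity concrete, here is a minimal sketch of plain gradient descent on the energy in z = log(x). The spatial TV term is omitted for brevity (it requires a smoothed gradient or a proximal step), and the step size, beta, and test data are arbitrary illustrative choices, not the paper's method or settings:

```python
import numpy as np

def denoise_temporal(y, beta=5.0, lr=0.005, iters=2000):
    """Gradient descent on the convex energy in z = log(x):
    E(Z,Y) = sum(exp(z) - y*z) + beta * sum_t (z_t - z_{t-1})^2.
    (Spatial TV term omitted here for brevity.)"""
    z = np.log(np.maximum(y, 1.0))       # start from the log-counts
    for _ in range(iters):
        g = np.exp(z) - y                # gradient of the data term
        dt = np.diff(z, axis=-1)         # temporal differences
        g[..., 1:] += 2.0 * beta * dt    # d/dz_t of (z_t - z_{t-1})^2
        g[..., :-1] -= 2.0 * beta * dt   # d/dz_{t-1}
        z -= lr * g
    return np.exp(z)                     # back to intensities

rng = np.random.default_rng(3)
true = np.full((4, 4, 32), 30.0)         # hypothetical constant sequence
y = rng.poisson(true).astype(float)
x_hat = denoise_temporal(y)
```

The temporal quadratic term pulls each frame toward its neighbors in the log domain, so the estimate fluctuates less than the raw counts while preserving the overall intensity level.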

• A 64 × 64 pixel base image with a cell nucleus shape was generated.
• To each pixel of the base image, an exponential decay along time (t = 1, ..., 64) was applied to simulate the intensity decrease caused by the photobleaching effect in a FLIP experiment, with decay rates of 0.07 for every pixel within 10 pixels of the center of the hole (dark circle) and 0.02 for the rest of the image.
• The true sequence was corrupted with Poisson noise.
• SNR range: 3 dB to 9 dB.
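A generation procedure along these lines can be sketched as follows. The base-image geometry below (a bright disc with the hole at the image center) is a guessed stand-in, since the exact nucleus shape is not specified here; the decay rates and sequence dimensions follow the description:

```python
import numpy as np

N, T = 64, 64
rng = np.random.default_rng(4)

# Hypothetical base image: bright nucleus-shaped disc on a dark background
yy, xx = np.mgrid[0:N, 0:N]
r = np.hypot(yy - 32, xx - 32)           # distance from assumed hole center
base = np.where(r < 24, 100.0, 10.0)

# FLIP-style photobleaching: faster decay (rate 0.07) within 10 px of the
# hole center, slower (0.02) elsewhere
rate = np.where(r < 10, 0.07, 0.02)
t = np.arange(1, T + 1)
true_seq = base[:, :, None] * np.exp(-rate[:, :, None] * t[None, None, :])

# Corrupt the true sequence with Poisson noise
noisy_seq = rng.poisson(true_seq).astype(float)
```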

Synthetic Data

Comparison Algorithms

Signal-to-noise ratio (SNR) results. Csiszár I-divergence results.

CPU time to process the synthetic 64 × 64 × 64 pixel sequence
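For reference, the two figures-of-merit can be computed as below; a minimal sketch (the exact normalizations used by the authors may differ):

```python
import numpy as np

def snr_db(x_true, x_est):
    """SNR in dB: signal energy over error energy."""
    err = x_true - x_est
    return 10.0 * np.log10(np.sum(x_true**2) / np.sum(err**2))

def i_divergence(x_true, x_est, eps=1e-12):
    """Csiszar I-divergence (generalized Kullback-Leibler), a natural
    discrepancy measure for Poisson data:
    I(a, b) = sum(a*log(a/b) - a + b) >= 0, with equality iff a == b."""
    a = np.maximum(x_true, eps)
    b = np.maximum(x_est, eps)
    return np.sum(a * np.log(a / b) - a + b)

x = np.full((8, 8), 20.0)
print(snr_db(x, 0.9 * x))       # 20.0 dB: error is 10% of the signal
print(i_divergence(x, x))       # 0.0: identical images
```

Higher SNR and lower I-divergence both indicate a better estimate of the true sequence.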

A comparison of the performance of the proposed denoising algorithm with five state-of-the-art algorithms is presented. Results with synthetic data show that the proposed algorithm outperforms all the others when SNR and I-divergence are used as figures-of-merit. In CPU time, it outperforms all but one of the algorithms.

Example with real data (Hela cell)

Data provided by the Instituto de Medicina Molecular de Lisboa