Iterative Methods for Inverse & Ill-posed Problems
Gerd Teschke
Konrad Zuse Institute
Research Group: Inverse Problems in Science and Technology
http://www.zib.de/AG InverseProblems
ESI, Wien, December 2006
Outline
1 Scope of the problem
2 Linear Problems & Sparsity
3 Nonlinear Inverse Problems & Sparsity (Setting, Iteration, Minimization, Convergence, Regularization, Examples)
4 Adaptivity for linear problems
Scope of the problem
Computation of an approximation to a solution of

T(x) = y,

where T : X → Y and X, Y are Hilbert spaces.

In many relevant cases only noisy data y^δ are available, with

‖y^δ − y‖ ≤ δ
There is a very wide range of possible applications:

Image deblurring + decomposition (Daubechies, T. 05)
Audio coding (T. 06)
Sparseness (acceleration) of support vector machines (Ratsch, T. 05, 06)
SPECT (Ramlau, T. 06)
Astrophysical data processing (Anthoine 05 + DeMol 04)
Geophysics: seismic wave decomposition (Holschneider 06)
Meteorological radar data processing (Lehmann, T. 06)
...
Mathematical description:

div(σ∇Φ) = div j in Ω
⟨σ∇Φ, n⟩ = 0 on Γ = ∂Ω

Inverse problem: R : (σ, j) ↦ Φ|_∂Ω

Variational formulation:

J(σ, j) = ‖R(σ, j) − Φ^δ|_∂Ω‖² + αΨ(σ, j)
Linear case: ill-posed integral equation of the first kind

Rf(s, ω) = ∫_ℝ f(sω + tω^⊥) dt = −log( I_L(s, ω) / I_0(s, ω) )

Nonlinear case (SPECT):

R[f, µ](s, ω) = ∫_ℝ f(sω + tω^⊥) e^{−∫_t^∞ µ(sω + τω^⊥) dτ} dt

Consider

J(f, µ) = ‖y^δ − R[f, µ]‖² + αΨ(f, µ)
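As a crude illustration of the line-integral structure of Rf (a toy sketch, not the discretization used in the talk): on a pixel grid, the projections at angles 0 and π/2 are simply column and row sums.

```python
import numpy as np

# Crude discrete Radon transform at two angles: the line integrals
# of f(s*w + t*w_perp) over t reduce to column sums (angle 0) and
# row sums (angle pi/2) on a pixel grid.
f = np.zeros((4, 4))
f[1, 2] = 1.0                # a single point source

proj_0 = f.sum(axis=0)       # integrate along vertical lines
proj_90 = f.sum(axis=1)      # integrate along horizontal lines

print(proj_0)   # the point shows up in column 2
print(proj_90)  # the point shows up in row 1
```

Each projection preserves the total mass of f, which is one sanity check for any discretization.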
Sparse approximation of a set of vectors x_i

Reduced SVM ⇔ sparse SVM

Sparsifying both simultaneously:

‖Ψ₁(α, x) − Ψ₁(β, z)‖² + Sparsity(β) + Sparsity(z),

where Ψ₁(α, x) = ∑_{i=1}^{N_x} α_i Φ(x_i)
Linear Problems & Sparsity
Signal representation
v may be represented by a preassigned basis
But sometimes too restrictive; way out: a frame
But sometimes still too restrictive; way out: a dictionary of frames, ...
Sparsity Constraints
Certain physical constraints, e.g. the well-known energy norm:

‖y − AF*g‖² + α‖g‖²

Promotion of sparsity (0 < p < 2):

‖y − AF*g‖² + α‖g‖^p_{ℓp}

More generally:

‖y − Av‖² + α sup_{h∈C} ⟨v, h⟩
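The qualitative difference between the quadratic and the ℓp penalties is already visible in a hypothetical scalar example with A = F = Id (a sketch, not from the talk): the quadratic penalty only shrinks a coefficient, while the ℓ1 penalty sets small coefficients exactly to zero.

```python
import numpy as np

# Scalar case: minimize (y - g)^2 + a*g^2  vs  (y - g)^2 + a*|g|.
# Both have closed-form minimizers; the quadratic penalty shrinks,
# the l1 penalty thresholds.
y, a = 0.05, 0.2

g_l2 = y / (1 + a)                            # Tikhonov minimizer
g_l1 = np.sign(y) * max(abs(y) - a / 2, 0.0)  # soft-threshold minimizer

print(g_l2)  # small but nonzero
print(g_l1)  # exactly zero: sparsity
```

This is why 0 < p ≤ 1 penalties are used to promote sparsity: they produce exact zeros.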
Sparsity Constraints and Iterative Process
Consider for instance:

‖y − AF*g‖² + α‖g‖_{ℓ1}

Problem: the term ‖AF*g‖² induces a nonlinear coupling between the coefficients.

Way out:

‖y − AF*g‖² + α‖g‖_{ℓ1} + C‖g − a‖² − ‖AF*(g − a)‖²
= −2⟨g, FA*y⟩ + α‖g‖_{ℓ1} + C‖g − a‖² + 2⟨g, FA*AF*a⟩ + ‖y‖² − ‖AF*a‖²
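The algebraic expansion above can be checked numerically (a sketch, with a random matrix B standing in for AF*):

```python
import numpy as np

# Numerical check of the surrogate expansion, with B playing AF*.
rng = np.random.default_rng(0)
B = 0.3 * rng.normal(size=(3, 3))
y = rng.normal(size=3)
g = rng.normal(size=3)
a = rng.normal(size=3)
alpha, C = 0.1, 1.0

l1 = alpha * np.sum(np.abs(g))
lhs = (np.sum((y - B @ g) ** 2) + l1
       + C * np.sum((g - a) ** 2) - np.sum((B @ (g - a)) ** 2))
rhs = (-2 * g @ (B.T @ y) + l1 + C * np.sum((g - a) ** 2)
       + 2 * g @ (B.T @ B @ a) + np.sum(y ** 2) - np.sum((B @ a) ** 2))

print(np.isclose(lhs, rhs))  # True: the ||B g||^2 terms cancel
```

The point of the added terms is exactly this cancellation: the quadratic coupling ‖AF*g‖² drops out, leaving a functional that decouples componentwise in g.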
Define

J(g, a) := ‖y − AF*g‖² + α‖g‖_{ℓ1} + C‖g − a‖² − ‖AF*(g − a)‖²

Create an iteration process by setting a = g⁰ and

g^{m+1} = arg min_g J(g, g^m)
Sparsity Constraints and Minimization
Reduces to variational equations of the form:

a = b − α sign(a)

Solved by soft thresholding:

a = S_α(b) = { b − α, b ≥ α;  b + α, b ≤ −α;  0, −α < b < α }

In its full glory:

g^{m+1} = S_α(FA*y + g^m − FA*AF*g^m)
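This iteration is now commonly called iterative soft thresholding. A minimal sketch, assuming the composed operator AF* is given as a matrix B with ‖B‖ ≤ 1 (rescale otherwise); the toy data below are illustrative only:

```python
import numpy as np

def soft_threshold(b, alpha):
    """Componentwise soft thresholding S_alpha."""
    return np.sign(b) * np.maximum(np.abs(b) - alpha, 0.0)

def ist(B, y, alpha, n_iter=500):
    """g_{m+1} = S_alpha(B^T y + g_m - B^T B g_m); assumes ||B|| <= 1."""
    g = np.zeros(B.shape[1])
    for _ in range(n_iter):
        g = soft_threshold(g + B.T @ (y - B @ g), alpha)
    return g

# toy example: B = 0.5*I, sparse ground truth, noiseless data
g_true = np.array([2.0, 0.0, -3.0, 0.0])
B = 0.5 * np.eye(4)
y = B @ g_true
g = ist(B, y, alpha=0.1)
print(g)  # close to [1.6, 0, -2.6, 0]: support recovered, with l1 shrinkage bias
```

Note the systematic shrinkage of the nonzero coefficients: the limit solves the penalized problem, not the noiseless equation.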
Provided analysis
Daubechies + Defrise + DeMol 2003: minimization by Gaussian surrogate functionals → iterative Landweber approach; proof of norm convergence and regularization properties.

General case:

‖y − Av‖² + 2α sup_{h∈C} ⟨v, h⟩

Minimization, norm convergence, (regularization theory) → Daubechies + T. + Vese 2006
Well-posed case
Theorem (Daubechies/T./Vese 06)
Suppose some technical conditions on C, and that A*A has bounded inverse on its range. If we define T := (A*A)^{−1/2} and, for an arbitrary closed convex set K, S_K := Id − P_K, where P_K is the (nonlinear) projection onto K, then the minimizing v is given by

v = T S_{αTC} T A*y.
Ill-posed case, convergence
The Gaussian surrogate approach yields:

v^{n+1} := (Id − P_{αC})(v^n + A*y − A*Av^n)

By the same techniques as in Daubechies/Defrise/DeMol 2003:

v^n ⇀ v (weak convergence)

Norm convergence requires special knowledge of C!
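For the ℓ1 penalty, C is the ℓ∞ unit ball, and Id − P_{αC} is exactly soft thresholding, so the projected Landweber iteration can be sketched as follows (toy data, assuming ‖A‖ ≤ 1):

```python
import numpy as np

def proj_alphaC(v, alpha):
    """P_{alpha C} for C the l_inf unit ball: clip to [-alpha, alpha]."""
    return np.clip(v, -alpha, alpha)

def projected_landweber(A, y, alpha, n_iter=500):
    """v_{n+1} = (Id - P_{alpha C})(v_n + A^T(y - A v_n)); assumes ||A|| <= 1."""
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = v + A.T @ (y - A @ v)
        v = u - proj_alphaC(u, alpha)   # (Id - P_{alpha C}) u = soft thresholding
    return v

A = 0.5 * np.eye(3)
y = np.array([1.0, 0.02, -1.5])
v = projected_landweber(A, y, alpha=0.1)
print(v)  # the small data component is thresholded to exactly zero
```

Other convex sets C give other shrinkage rules through the same Id − P_{αC} mechanism; only the projection changes.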
Theorem (Daubechies/T./Vese 06)
Suppose v^n − v ⇀ 0 and ‖P_{αC}(g) − P_{αC}(g + v^n − v)‖ → 0. Moreover, assume that v^n is orthogonal to g, P_C(g). If for some sequence γ_n (with γ_n → ∞) the convex set C satisfies

γ_n(v^n − v) ∈ C,

then ‖v^n − v‖ → 0.
Left: Shepp-Logan phantom (64x64), right: FBP (0:10:180)
(a movie was shown here)
Nonlinear Problems & Sparsity
The setting
Nonlinear problem: T : X → Y, T(x) = y

Variational form (vector valued):

J_α(g₁, ..., g_n) = ‖y^δ − T(g₁, ..., g_n)‖² + 2αΨ(g₁, ..., g_n)
Requirements on T (essentially):

T strongly continuous
T′ Lipschitz continuous with constant L

Further requirements:

‖g‖_{(ℓ₂)ⁿ} ≤ cΨ(g)
... technical conditions
Linear mixing:

T(Kg) = T( ( ∑_{l=1}^{r} A_{l,i} F*g_l )_{i=1,...,n} )

Simple cases:

Nonlinear scalar valued:

T(Kg) = T(K(g₁, ..., g_n)) = T( ∑_{i=1}^{n} F*g_i )

Purely linear: T some linear and bounded operator.
Non-coupled sparsity (T. 05):

Ψ(g) = (Ψ₁(g₁), ..., Ψₙ(gₙ))

Joint sparsity (linear case: Fornasier/Rauhut 06):

Ψ(u) = ∑_{λ∈Λ} ω_λ ‖u_λ‖_q

Complementary sparsity, ...
Iteration
Basic Idea
For g ∈ (ℓ₂)ⁿ and some auxiliary a ∈ (ℓ₂)ⁿ, consider

J^s_α(g, a) := J_α(g) + C‖g − a‖²_{(ℓ₂)ⁿ} − ‖T(g) − T(a)‖²_Y

Create an iteration process:

1 Pick g⁰ ∈ (ℓ₂)ⁿ and some proper constant C > 0

2 Derive a sequence {g^k}_{k=0,1,...} by the iteration:

g^{k+1} = arg min_{g∈(ℓ₂)ⁿ} J^s_α(g, g^k),  k = 0, 1, 2, ...
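A hypothetical toy instance of this scheme (not from the talk): a small nonlinear operator T(g) = Ag + 0.1g³ with Ψ(g) = ‖g‖₁, where each surrogate minimization is carried out by an inner fixed point iteration that, for this penalty, is a soft thresholding. All sizes and constants below are illustrative assumptions.

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.4]])

def T(g):
    """Toy nonlinear operator: A g + 0.1 * g^3 (componentwise cube)."""
    return A @ g + 0.1 * g ** 3

def Tp_adj(g, v):
    """T'(g)^* v for the toy operator."""
    return A.T @ v + 0.3 * (g ** 2) * v

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

y_delta = np.array([0.3, -0.2])
alpha, C = 0.01, 2.0          # C chosen large enough for this tiny operator

def J(g):
    return np.sum((y_delta - T(g)) ** 2) + 2 * alpha * np.sum(np.abs(g))

g = np.zeros(2)
values = [J(g)]
for _ in range(30):           # outer surrogate steps, a = g^k
    a = g.copy()
    for _ in range(50):       # inner fixed point iteration for the surrogate
        g = soft(Tp_adj(g, y_delta - T(a)) / C + a, alpha / C)
    values.append(J(g))

print(values[0] > values[-1])  # True: J_alpha decreased along the iteration
```

The surrogate construction is what guarantees this monotone decrease, provided C dominates the operator quantities on the relevant ball.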
Proper Surrogate Functionals
Given a multi-parameter α ∈ ℝ₊ and g⁰ ∈ (ℓ₂)ⁿ, define a ball

K_r := {g ∈ (ℓ₂)ⁿ : Ψ(g) ≤ r}

with radius r = J_α(g⁰)/(2α).

Define

C := 2 max{ ( sup_{g∈K_r} ‖T′(g)‖ )², L √(J(g⁰)) }
Properties:

C‖g − g⁰‖²_{(ℓ₂)ⁿ} − ‖T(g) − T(g⁰)‖²_Y ≥ 0

All J^s_α(g, g^k) are bounded from below; g^k ∈ K_r

All J_α(g^k) and J^s_α(g^{k+1}, g^k) are non-increasing
Minimization
Necessary Condition
The necessary condition for a minimum of J^s_α(g, a) is given by

0 ∈ −T′(g)*(y^δ − T(a)) + Cg − Ca + α∂Ψ(g)
Recasting the Necessary Condition
Let M(g, a) := T′(g)*(y^δ − T(a))/C + a.

Then the necessary condition can be recast as the fixed point problem

g = (α/C)(I − P_C)( (C/α) M(g, a) ),

where P_C is the orthogonal projection onto the convex set C.
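For the ℓ1 case (C the ℓ∞ unit ball), the right-hand side of the fixed point equation is again soft thresholding, which can be checked numerically (a sketch with arbitrary test values):

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

alpha, C = 0.3, 2.0
u = np.array([-1.0, -0.1, 0.05, 0.4, 2.0])   # plays the role of M(g, a)

P_C = np.clip((C / alpha) * u, -1.0, 1.0)    # projection onto the l_inf unit ball
rhs = (alpha / C) * ((C / alpha) * u - P_C)  # (alpha/C)(I - P_C)((C/alpha) u)

print(np.allclose(rhs, soft(u, alpha / C)))  # True
```

So the scaling by C/α and α/C simply converts the unit-ball projection into a threshold at level α/C.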
Fixed Point Iteration with Projection
Lemma

The fixed point iteration converges towards the minimizer of J^s_α(g, g^k).

Lemma

If T ∈ C², then J^s_α(g, g^k) is strictly convex.
Joint Sparsity
Measure:

Ψ(u) = ∑_{λ∈Λ} ω_λ ‖u_λ‖_q

Fixed point iteration:

g^{l+1} = (α/C)(I − P_C)( (C/α) M(g^l, a) )

Equivalently, g^{l+1} minimizes

‖g^{l+1} − M(g^l, a)‖²_{(ℓ₂)ⁿ} + (2α/C) ∑_{λ∈Λ} ‖(g_λ)^{l+1}‖_q
Proposition

Let 1 ≤ q ≤ ∞ and 1 = 1/q + 1/q′. The coefficients of the iterates of the fixed point equation are given by

(g_λ)^{l+1} = (g¹_λ, ..., gⁿ_λ)^{l+1} = (I − P_{B_{q′}(C^{−1}αω_λ)})( (M(g^l, a))_λ ).
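For q = 2 (so q′ = 2), the operation I − P_{B_2(t)} is the well-known block soft thresholding: a coefficient block with ‖u‖₂ ≤ t is set to zero, larger ones are shrunk radially. A sketch:

```python
import numpy as np

def block_soft(u, t):
    """(I - P_{B_2(t)}) u: subtract the projection onto the l2-ball of radius t."""
    norm = np.linalg.norm(u)
    if norm <= t:
        return np.zeros_like(u)    # whole block thresholded to zero
    return (1.0 - t / norm) * u    # radial shrinkage

u = np.array([3.0, 4.0])           # ||u||_2 = 5
print(block_soft(u, 2.5))          # [1.5, 2.0]: shrunk by half
print(block_soft(u, 6.0))          # [0, 0]: block lies inside the ball
```

Because entire blocks vanish together, this penalty couples the channels g¹, ..., gⁿ at each index λ, which is exactly the joint-sparsity effect.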
Convergence
Theorem
Assume that there exists at least one isolated limit g*_α of a subsequence {g^{k_l}} of {g^k}. Then g^k → g*_α as k → ∞. The accumulation point g*_α is a minimizer of the functional J^s_α(g, g*_α) and satisfies the necessary condition for a minimum of J_α.
Regularization
Theorem
Let

α(δ) → 0 and δ²/α(δ) → 0 as δ → 0.

Then every sequence g*_{α(δ)} of minimizers of the functional J_α(g), where δ → 0 and α = α(δ), has a convergent subsequence. The limit of every convergent subsequence is a solution of T(g) = y with minimal value of Ψ(g).
Examples
[Figure: panels showing the signal X, data y, noisy data y^δ, the components F*₁g₁ and F*₂g₂, and the mixture Kg; iteration histories of discrepancy (red) and penalty (green), sparsity, error, J_α, Add, and J_α (red) vs. J_α + Add (blue).]
[Figure: iteration histories of discrepancy (red) and penalty (green), sparsity, error, J_α, Add, and J_α (red) vs. J_α + Add (blue); panels showing X, Y+δ, and G.]
R[f, µ](s, ω) = ∫_ℝ f(sω + tω^⊥) e^{−∫_t^∞ µ(sω + τω^⊥) dτ} dt

Left: density f, right: attenuation µ
R[f, µ](s, ω) = ∫_ℝ f(sω + tω^⊥) e^{−∫_t^∞ µ(sω + τω^⊥) dτ} dt

Simulated data R[f, µ]
Reconstruction of density f (3 percent error)
Reduced Support Vector Machines
Cascade Classification
Input image; images showing the amount of rejected pixels at the 1st, 3rd and 50th stages of the cascade
Percentage of rejected non-face patches as a function of the number of operations required
method   | time per patch
SVM      | 787.34 µs
RVM      | 22.51 µs
W-RVM    | 1.48 µs

Comparison of the speed improvement of the W-RVM over the RVM and SVM
Drawback - High Computational Complexity
Adaptivity!
Adaptivity for linear problems
Frame based concept (Stevenson, Dahlke et al.) for positive operators

Consider the regularized problem

Construction of RHS, APPLY, COARSE routines
Operator s∗-admissibility

New definition of s∗-compressibility (NEW: density function)

New routine APPLY

⇒ if our operator fulfills the new s∗-compressibility, then with the new APPLY routine our operator is s∗-admissible
Verification for the linear Radon transform:

|⟨(R∗R + α)φ_λ, φ_{λ′}⟩| ≤ c · 2^{−| |λ|−|λ′| |} / ( 1 + 2^{min(|λ|,|λ′|)} δ(λ, λ′) )

Remember the Lemarié class:

2^{−σ| |λ|−|λ′| |} ( 1 + 2^{min(|λ|,|λ′|)} δ(λ, λ′) )^{−β}

with β > n, σ > n/2.