Non-negative CP decomposition of hyperspectral data

Miguel A. Veganzones, Jeremy Cohen, Rodrigo Cabral-Farias and Pierre Comon
gipsa-lab, CNRS, UMR 5216, F-38420 Grenoble, [email protected]
ERC AdG-2013-320594 DECODA
Outline

1 Motivation
2 Non-negative CP decomposition
3 Experimental results with time series
4 What about hyperspectral data cubes?

Miguel A. Veganzones, Compression-based NN-CP
Goals

1. Analysis of hyperspectral data by means of Canonical Polyadic (CP) tensor decomposition.
   1.1 Time series.
   1.2 Multi-angle acquisitions.
   1.3 Conventional images.
2. Physical interpretation of the rank-one factors in terms of spectral unmixing.
3. Applications:
   3.1 Snow cover maps of the Alps: collaboration with the LTHE laboratory and MeteoFrance.
   3.2 Analysis of the Martian surface: collaboration with IPAG (Mars-ReCo project).
Hyperspectral tensors

> Hyperspectral matrix: X (I × J).
  • I: number of pixels (spatial way).
  • J: number of bands (spectral way).
> Hyperspectral tensor: X (I × J × K).
  • K: number of time acquisitions (temporal way: SNOW project).
  • K: number of angles (angular way: Mars-ReCo project).
  • What about data cubes (rows × columns × bands)?
Non-negative CP decomposition

> Formulation:

  X_ijk ≈ Σ_{r=1..R} A_ir B_jr C_kr λ_r.   (1)

  • R ∈ N+: tensor non-negative rank.
  • A (I × R): spatial factors.
  • B (J × R): spectral factors.
  • C (K × R): temporal/angular factors.
  • Λ (R × R): scaling diagonal matrix.
  • Everything is non-negative!

> Compact representation: X ≈ (A, B, C)·Λ.
> Optimization problem:

  minimize ‖X − (A, B, C)·Λ‖²_F  w.r.t. A, B, C, Λ
  subject to A ≥ 0, B ≥ 0, C ≥ 0.   (2)
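The model in Eq. (1) can be materialized in a few lines of numpy. The dimensions below are illustrative placeholders, not those of the talk's data:

```python
import numpy as np

# Illustrative dimensions (assumptions): I pixels, J bands, K acquisitions, rank R.
I, J, K, R = 30, 7, 10, 4
rng = np.random.default_rng(0)

# Non-negative factors and diagonal scaling, as in Eq. (1).
A = rng.random((I, R))   # spatial factors
B = rng.random((J, R))   # spectral factors
C = rng.random((K, R))   # temporal/angular factors
lam = rng.random(R)      # diagonal entries of Lambda

# X_ijk = sum_r A_ir * B_jr * C_kr * lambda_r
X = np.einsum('ir,jr,kr,r->ijk', A, B, C, lam)

assert X.shape == (I, J, K)
assert (X >= 0).all()    # non-negative factors yield a non-negative tensor
```

The last assertion makes the point of the slide concrete: with all factors non-negative, every entry of the reconstructed tensor is non-negative.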
Properties

> Best lower non-negative rank approximations always exist → the problem is well-posed.
> There exist upper bounds on the tensor rank, Ro, that ensure uniqueness → but only for exact decompositions!
> (Recently proved) If the condition R ≤ Ro holds, then the best lower non-negative rank approximation is almost always unique.
Algorithms

> The CP optimization problem is highly non-convex.
> Yet many algorithms provide rather precise computations.
> These algorithms can be divided into two main classes:
  • All-at-once gradient-based descent: all CP parameters are updated at the same time using a gradient scheme, and non-negativity constraints are implemented through barriers or soft penalizations.
  • Alternating minimization: the cost function is minimized in an alternating way for each factor (A, B or C) while the others are fixed.
> Drawback: these algorithms are computationally expensive → useless for large tensors.
  • Proposed solution: compression-based NN-CP decomposition.
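As a concrete illustration of the alternating-minimization class, here is a minimal alternating non-negative least squares sketch in numpy/scipy. This is my own sketch, not the ANLS/CCG/ProCoALS code used in the experiments; the scaling Λ is absorbed into the factors:

```python
import numpy as np
from scipy.linalg import khatri_rao
from scipy.optimize import nnls

def nnls_rows(M, Y):
    """Row-wise solve of min ||Y - Z @ M.T||_F subject to Z >= 0."""
    return np.stack([nnls(M, y)[0] for y in Y])

def ncp_als(X, R, n_iter=50, seed=0):
    """Alternating non-negative least squares for a 3-way CP model (sketch)."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.random((n, R)) for n in (I, J, K))
    for _ in range(n_iter):
        # Each mode-n unfolding is linear in one factor, with the Khatri-Rao
        # product of the other two factors as the known mixing matrix.
        A = nnls_rows(khatri_rao(B, C), X.reshape(I, J * K))
        B = nnls_rows(khatri_rao(A, C), np.moveaxis(X, 1, 0).reshape(J, I * K))
        C = nnls_rows(khatri_rao(A, B), np.moveaxis(X, 2, 0).reshape(K, I * J))
    return A, B, C
```

On an exactly low-rank non-negative tensor this typically drives the reconstruction error close to zero, although, the problem being non-convex, convergence to the global optimum is not guaranteed. The per-row NNLS solves also hint at the slide's point: the cost grows quickly with the tensor dimensions.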
Compression-based NN-CP decomposition

General approach [1]:
1. Compress the original tensor X (I × J × K) into a small tensor G (R1 × R2 × R3).
   • Assumption: the compressed tensor G contains almost the same information as the original tensor X.
2. Compute the CP decomposition of G.
   • Consider the non-negativity constraints of X in the optimization problem.
   • Estimate the compressed factors Ac (R1 × R), Bc (R2 × R) and Cc (R3 × R).
3. Uncompress the estimated rank-one factors Ac, Bc and Cc to obtain the original factors A (I × R), B (J × R) and C (K × R).

[1] Cohen, J.; Cabral-Farias, R.; Comon, P.; "Fast Decomposition of Large Nonnegative Tensors," IEEE Signal Processing Letters, 22(7), pp. 862-866, 2015.
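Step 1 is typically a Tucker/HOSVD-type compression: each mode is projected onto the leading left singular vectors of the corresponding unfolding. A minimal numpy sketch (function names are mine, and the actual scheme in [1] may differ in details, in particular in how non-negativity is handled in the compressed domain):

```python
import numpy as np

def leading_basis(M, r):
    """Leading r left singular vectors of a matrix unfolding."""
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :r]

def compress(X, ranks):
    """HOSVD-style compression of an I x J x K tensor to an R1 x R2 x R3 core (sketch)."""
    I, J, K = X.shape
    R1, R2, R3 = ranks
    U1 = leading_basis(X.reshape(I, J * K), R1)
    U2 = leading_basis(np.moveaxis(X, 1, 0).reshape(J, I * K), R2)
    U3 = leading_basis(np.moveaxis(X, 2, 0).reshape(K, I * J), R3)
    G = np.einsum('ijk,ia,jb,kc->abc', X, U1, U2, U3)   # small core tensor
    return G, (U1, U2, U3)

def uncompress_factors(Ac, Bc, Cc, bases):
    """Step 3: map compressed factors back to the original spaces."""
    U1, U2, U3 = bases
    return U1 @ Ac, U2 @ Bc, U3 @ Cc
```

Note that in this naive sketch the uncompressed factors U1 @ Ac are not guaranteed to be non-negative; that is precisely why step 2 must account for the non-negativity constraints of X rather than of G alone.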
Dataset

> 44 daily MODIS acquisitions with low cloud coverage.
> Pre-processed to increase spatial resolution (250 m).
> Dimensions: 4800 × 7 × 44.

[2] Veganzones, M.A.; Cohen, J.; Cabral-Farias, R.; Chanussot, J.; Comon, P.; "Nonnegative tensor CP decomposition of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, 54(5), pp. 2577-2588, 2016.
Experimental methodology (I)

> Competing algorithms:
  • Compression-based CP algorithms: CCG and ProCoALS.
  • Conventional CP algorithm: ANLS.
  • Conventional day-by-day fully constrained spectral unmixing (FCLSU).
> Ground truth: 8 spectra (on-field acquisitions).

[Figure: ground-truth radiance spectra over the 7 spectral bands for the eight classes: medium snow, glacier+snow, old coarse, ice, debris, rocks, rain forest, pasture.]
Experimental methodology (II)

> We perform 50 Monte Carlo runs for each algorithm over a set of rank values in the range R ∈ [5, 15].
> For the compression-based CP algorithms, the compressed tensor Xc has dimensions 175 × 7 × 25.
> Quality measures:
  • Reconstruction error: average RMSE.
  • Angular error w.r.t. the ground-truth spectral signatures.
  • Linear Pearson correlation w.r.t. the spatial abundances obtained by FCLSU.
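The three quality measures can be sketched in plain numpy (function names are mine; abundance maps are assumed to be passed as flat arrays):

```python
import numpy as np

def avg_rmse(X, X_hat):
    """Average root-mean-square reconstruction error."""
    return np.sqrt(np.mean((X - X_hat) ** 2))

def angular_error(s, t):
    """Angle in degrees between an estimated spectrum s and a ground-truth spectrum t."""
    c = np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def pearson_corr(a, b):
    """Linear Pearson correlation between two spatial abundance maps (flattened)."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]
```

The angular error is invariant to the scaling ambiguity of the CP factors, which is why it is preferred over a Euclidean distance when comparing spectral factors to ground-truth signatures.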
Reconstruction errors

[Figure: average normalized RMSE (×10⁻⁴) versus rank R ∈ [5, 15] for (a) ANLS, (b) CCG, (c) ProCoALS.]
Spectral factors interpretation

[Figure: average angular distance versus rank R ∈ [5, 15], per ground-truth class (medium snow, glacier+snow, old coarse, ice, debris, rocks, rain forest, pasture), for (a) ANLS, (b) CCG, (c) ProCoALS.]
Spatial factors interpretation

[Figure: average maximum correlation versus rank R ∈ [5, 15], per ground-truth class (medium snow, glacier+snow, old coarse, ice, debris, rocks, rain forest, pasture), for (a) ANLS, (b) CCG, (c) ProCoALS.]
Spatial factors interpretation

> Best run with rank R = 8.

[Figure: spatial factor maps for the best run: permanent snow/ice; seasonal snow/ice (types 1 and 2); seasonal vegetation (types 1 and 2).]
Are hyperspectral data cubes tensors?

> Hyperspectral images are often provided not as non-negative matrices X (I × J), but as data cubes X (Ir × Ic × J), where Ir and Ic denote the numbers of rows and columns respectively, with I = Ir·Ic.
> It is possible to think of the data cube as a tensor representation of the image, allowing tensor analysis techniques to be applied.
> This is fine! → Employed for de-noising, compression, ...
> ... but, in general, it is not possible to assume a low rank, that is, a small R.
> Proposed solution: patch-tensors [3].

[3] Veganzones, M.A.; Cohen, J.; Cabral-Farias, R.; Usevich, K.; Drumetz, L.; Chanussot, J.; Comon, P.; "Canonical Polyadic decomposition of hyperspectral patch-tensors," 2016 EUSIPCO Conference (submitted).
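One way to build such a patch-tensor, as a sketch under my own conventions (the exact patch layout used in [3] may differ): split the Ir × Ic × J cube into non-overlapping p × p spatial patches, giving a patches × pixels-per-patch × bands tensor.

```python
import numpy as np

def patch_tensor(cube, p):
    """Rearrange an (Ir, Ic, J) data cube into a (patches, p*p, J) tensor (sketch).

    Non-overlapping p x p spatial patches; Ir and Ic must be divisible by p.
    """
    Ir, Ic, J = cube.shape
    assert Ir % p == 0 and Ic % p == 0
    T = cube.reshape(Ir // p, p, Ic // p, p, J)
    T = T.transpose(0, 2, 1, 3, 4)   # group the two patch indices together
    return T.reshape((Ir // p) * (Ic // p), p * p, J)
```

The resulting third-order tensor again has a meaningful mode structure (patch, within-patch pixel, band), which is what makes a low-rank CP model plausible where it is not for the raw rows × columns × bands cube.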