
Accurate Segmentation of Hyperspectral Images Using Deep Neural Networks—Are We There Yet?

Jakub Nalepa, Michal Marcinkiewicz, Pablo Ribalta Lorenzo, Krzysztof Czyz, Michal Kawulok & HYPERNET Team

KP Labs, Gliwice, Poland
Silesian University of Technology, Gliwice, Poland

jnalepa@ieee.org

The ESA Earth Observation Φ-Week, Frascati, Italy, November 14, 2018

Outline

- Introduction: about us; hyperspectral imaging; deep networks for hyperspectral image segmentation
- Open issues in hyperspectral image segmentation
- Conclusions

Introduction


About us

HYPERNET: HYPERspectral image segmentation using deep neural NETworks

May 2018—May 2019, Polish Industry Incentive Scheme


Segmentation of HSI—conventional machine learning


Segmentation of HSI—deep learning


Deep networks in the wild...

Hyperspectral image segmentation, medical imaging, object detection, text classification, temporal and time-series analysis, self-driving cars, music composition, real-time analysis of behaviors, translation, speech recognition, language modeling, document summarization, ...


Deploying a deep neural network in the wild


The roadmap:
- Design a topology
- Select hyper-parameters (a minimal illustrative sketch follows the references below)
- Train the network

- Topology: P. Ribalta and J. Nalepa, Proc. GECCO, ACM, pp. 505–512, 2018.
- Hyper-parameters: P. Ribalta, J. Nalepa, et al., Proc. GECCO, ACM, pp. 481–488, 2017.
- Hyper-parameters: P. Ribalta, J. Nalepa, et al., Proc. GECCO, ACM, pp. 1864–1871, 2017.
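The referenced GECCO papers tackle topology and hyper-parameter selection with evolutionary techniques; the sketch below is only a much simpler stand-in that illustrates the "select hyper-parameters" step with plain random search. The search space, names, and the train_and_score callback are illustrative assumptions, not the authors' method.

```python
import random

# Hypothetical search space for a small spectral CNN (illustrative values only).
SEARCH_SPACE = {
    "n_filters": [32, 64, 128, 200],
    "kernel_size": [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [32, 64, 128],
}

def sample_config(space):
    """Draw one hyper-parameter configuration uniformly at random."""
    return {name: random.choice(values) for name, values in space.items()}

def random_search(train_and_score, n_trials=20, seed=0):
    """Keep the best of `n_trials` sampled configurations.

    `train_and_score(config) -> float` is assumed to build, train, and
    evaluate a network on the validation set (e.g., returning overall accuracy).
    """
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample_config(SEARCH_SPACE)
        score = train_and_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```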


Deep networks for hyperspectral image segmentation

Spectral vs. spatial-spectral architectures.

(Selected) open issues:
- Validation of hyperspectral image segmentation algorithms
- Resource-frugality of deep neural networks
- Robustness of deep neural networks


Validation of hyperspectral image segmentation algorithms


Selection of training, validation, and test sets

How to select training, validation, and test sets? What are the benchmark (ground-truth) datasets?


Method | Datasets* | Settings
Multiscale superpixels (Dundar & Ince, 2018) | IP, PU | Random
Watershed + SVM (Tarabalka et al., 2010) | PU | Arbitrary
Clustering (SVM) (Bilgin et al., 2011) | Washington DC, PU | Full image
Multiresolution segm. (Amini et al., 2018) | Three in-house datasets | Random
Region expansion (Li et al., 2018) | PU, Sa, KSC | Full image
DBN (spatial-spectral) (Li et al., 2014) | HU | Random
DBN (spatial-spectral) (Chen et al., 2015) | IP, PU | Random
Deep autoencoder (Chen et al., 2014) | KSC, PU | Monte Carlo
CNN (Zhao & Du, 2016) | PC, PU | Random
CNN (Chen et al., 2016) | IP, PU, KSC | Monte Carlo
Active learning + DBN (Liu et al., 2017) | PC, PU, Bo | Random
DBN (spectral) (Zhong et al., 2017) | IP, PU | Random
RNN (spectral) (Mou et al., 2017) | PU, HU, IP | Random
CNN (Santara et al., 2017) | IP, Sa, PU | Random
CNN (Lee & Kwon, 2017) | IP, Sa, PU | Monte Carlo
CNN (Gao et al., 2018) | IP, Sa, PU | Monte Carlo
CNN (Ribalta et al., 2018) | Sa, PU | Random

* IP—Indian Pines; PU—Pavia University; Sa—Salinas; KSC—Kennedy Space Center; HU—Houston University; PC—Pavia Centre; Bo—Botswana


Benchmark hyperspectral images—examples

[Figure: benchmark scenes with their ground-truth classes.]

Salinas Valley (512×217 px, AVIRIS, 3.7 m, 224 bands), classes: Broccoli 1, Broccoli 2, Fallow 1, Fallow 2, Fallow 3, Stubble, Celery, Grapes, Soil, Corn, Lettuce 1, Lettuce 2, Lettuce 3, Lettuce 4, Vineyard 1, Vineyard 2.

Pavia University (610×340 px, ROSIS, 1.3 m, 103 bands), classes: Asphalt, Meadows, Gravel, Trees, Metal, Bare Soil, Bitumen, Bricks, Shadow.


Training-test information leak

[Figure: training pixels (t1, t2, t3) and test pixels with their spatial neighborhoods, illustrating the training-test information leak.]
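To make the leak concrete, here is a small illustrative check (function and variable names are ours, not from the talk): for a random pixel-level split it estimates how many test pixels have at least one training pixel inside their spatial neighborhood, i.e., how many test patches partially overlap the training data.

```python
import numpy as np

def leaked_fraction(train_coords, test_coords, window=5):
    """Fraction of test pixels whose (window x window) neighborhood
    contains at least one training pixel."""
    half = window // 2
    train_set = {tuple(c) for c in train_coords}
    leaked = 0
    for r, c in test_coords:
        neighbourhood = {
            (r + dr, c + dc)
            for dr in range(-half, half + 1)
            for dc in range(-half, half + 1)
        }
        if neighbourhood & train_set:
            leaked += 1
    return leaked / len(test_coords)

# Toy example: a random pixel-level split of a 100 x 100 labelled scene.
rng = np.random.default_rng(0)
coords = np.array([(r, c) for r in range(100) for c in range(100)])
rng.shuffle(coords)
train, test = coords[:3000], coords[3000:]
print(f"Test pixels with a training pixel in their patch: {leaked_fraction(train, test):.1%}")
# With 30% of the pixels drawn for training, this is close to 100%.
```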


Patch-based training-test splits

[Figure: a) true-color composite, b) ground truth, c)-g) five folds (black/white patches mark the training areas).]
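A minimal sketch of the idea behind such patch-based folds; the patch size, the number of folds, and the names below are assumptions for illustration, not the exact fold-generation procedure from the study.

```python
import numpy as np

def patch_folds(height, width, patch_size=64, n_folds=5, seed=0):
    """Assign whole (patch_size x patch_size) spatial blocks to folds and
    return a (height x width) map of fold indices. Using one fold as the test
    area confines any training-test neighborhood overlap to patch borders
    (which can additionally be discarded)."""
    rng = np.random.default_rng(seed)
    rows = -(-height // patch_size)  # ceiling division
    cols = -(-width // patch_size)
    patch_fold = rng.integers(0, n_folds, size=(rows, cols))
    fold_map = np.repeat(np.repeat(patch_fold, patch_size, axis=0),
                         patch_size, axis=1)
    return fold_map[:height, :width]

# Example for a Pavia University-sized scene (610 x 340 pixels):
fold_map = patch_folds(610, 340)
test_mask = fold_map == 0   # pixels of the first fold (test area)
train_mask = ~test_mask     # pixels of all remaining patches (training area)
```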


Do train-test splits matter (Monte-Carlo vs. Patch-based)?

Deep neural network architectures:
- Spatial-spectral (3D) CNN (Gao et al., Remote Sens., 2018)
- Spectral (1D) CNN

[Figure: network architecture: convolution (f = 200, kernel 1×1×5), batch norm, max pooling (2×2), fully connected layers (l1 = 512, l2 = 128), softmax; input: a single 1×1×b pixel.]

Training-test subset cardinalities: as in Gao et al., 2018 (balanced and imbalanced).
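For illustration, a minimal Keras version of the spectral (1D) network sketched above; the kernel size, pooling, activations, and training settings are either read off the slide or assumed, so this is a sketch rather than the reference implementation from the study.

```python
from tensorflow.keras import layers, models

def build_spectral_cnn(n_bands, n_classes):
    """Per-pixel 1D CNN over the spectral axis (b bands, no spatial context)."""
    return models.Sequential([
        # 200 filters over a spectral window of 5 bands (ReLU is assumed).
        layers.Conv1D(200, kernel_size=5, activation="relu",
                      input_shape=(n_bands, 1)),
        layers.BatchNormalization(),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

# Example: Pavia University has 103 bands and 9 classes.
model = build_spectral_cnn(n_bands=103, n_classes=9)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```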


Sneak-peek from the results (Salinas Valley)...

OA—overall accuracy, AA—average accuracy

Conclusion: Monte-Carlo cross-validation yields over-optimistic estimates of classification performance.

For more results, see J. Nalepa et al., https://arxiv.org/abs/1811.03707, 2018 (submitted to IEEE Geoscience and Remote Sensing Letters).


Resource-frugality of deep neural networks


Hardware-constrained environment

Reduction* of: memory footprint, inference time.

*Without adversely affecting classification performance...


Quantization of deep neural networks

Two approaches: during DNN training, and post-training.


- Prune redundant network nodes
- Compress constants
- Map weights/parameters to 8-bit precision (a minimal sketch follows the figure below)

[Figure: the float weight range (min to max, with 0.0 preserved) is mapped onto 8-bit integers from -127 to 127.]
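The mapping in the figure can be illustrated in a few lines of NumPy. This is a simplified symmetric, per-tensor variant for illustration only; the toolchain used in the study additionally prunes nodes and compresses constants, and its exact quantization scheme may differ.

```python
import numpy as np

def quantize_int8(weights):
    """Map a float weight tensor linearly onto 8-bit integers in [-127, 127]
    (symmetric variant: the scale is set by the largest absolute weight)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

# Example: a random "weight matrix" shrinks from 4 bytes to 1 byte per value.
w = np.random.randn(512, 128).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes, "->", q.nbytes)                   # 262144 -> 65536 bytes
print(np.max(np.abs(w - dequantize(q, scale))))   # small quantization error
```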


Sneak-peek from the results...

[Figure: the evaluated CNN: convolution (f = 96, kernel 5×1), batch norm, max pooling, dropout, fully connected, convolution (f = 32, kernel 1×1), batch norm, softmax; additional layer sizes shown: 128 and 2×2.]


[Charts: model sizes in KB, non-quantized vs. quantized, for the Salinas Valley and Pavia University datasets.]

OA (Salinas Valley): 84.62% → 83.45% (a drop of 1.17 p.p.); OA (Pavia University): 78.30% → 77.02% (a drop of 1.28 p.p.).

For more results, see: P. Ribalta, M. Marcinkiewicz, J. Nalepa, Proc. IEEE DSD, pp. 260–267, 2018.


Robustness of deep neural networks


Are deep networks robust (against noise)?
- Simulation: Gaussian noise with wavelength-dependent SNR
- Deep network: spatial-spectral CNN

[Figure: example segmentation maps; OA: 90.59%, 31.90%, 48.39%, 65.78%.]

(Thanks to Adam Popowicz for the noise model.)
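The wavelength-dependent noise model itself is not reproduced here; the sketch below only illustrates the general idea of corrupting every band of a hyperspectral cube with Gaussian noise at a band-specific SNR. All names and the example SNR profile are illustrative assumptions.

```python
import numpy as np

def add_band_dependent_noise(cube, snr_db_per_band, seed=0):
    """Add zero-mean Gaussian noise to an (H x W x B) hyperspectral cube,
    using a separate signal-to-noise ratio (in dB) for every band."""
    rng = np.random.default_rng(seed)
    noisy = cube.astype(np.float32).copy()
    for b, snr_db in enumerate(snr_db_per_band):
        band = noisy[..., b]                      # a view; modified in place
        signal_power = np.mean(band ** 2)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        band += rng.normal(0.0, np.sqrt(noise_power), size=band.shape)
    return noisy

# Example: a Pavia University-sized cube with the SNR dropping from 40 dB
# to 15 dB across the bands (purely illustrative numbers).
cube = np.random.rand(610, 340, 103).astype(np.float32)
snr_profile = np.linspace(40.0, 15.0, num=103)
noisy_cube = add_band_dependent_noise(cube, snr_profile)
```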


Conclusions


We are not there yet

Open issues in hyperspectral image analysis:
- Lack of ground-truth data
- Resource-frugality of deep neural nets for HSI segmentation
- Robustness of deep neural nets for HSI segmentation
- Better generalization for unseen data (→ better regularizers)
- Efficient pre-/post-processing of HSI data (e.g., band selection)


HYPERNET—code and beyond

https://github.com/ESA-PhiLab/hypernet

HYPERNET Team: Jakub Nalepa, Michal Kawulok, Michal Myller, Lukasz Tulczyjew, Marek Antoniak, Rafal Zogala

- State-of-the-art spectral and spatial-spectral deep nets for HSI (and beyond)
- Band selection using attention-based convolutional neural nets (P. Ribalta, L. Tulczyjew, M. Marcinkiewicz, J. Nalepa, https://arxiv.org/abs/1811.02667, after the first round of reviews in IEEE Transactions on Geoscience and Remote Sensing)
- Data augmentation (generative adversarial nets, noise-based, ...)
- Visualization of HSI
- Patch-based benchmark generation, and more...

All in Python (Keras/PyTorch), with Jupyter notebook examples.

Want more? Drop me a line at jnalepa@ieee.org
