IAPR Summer School on Machine and Visual Intelligence, Vico Equense, Naples, Italy, 28th August 2018
Interpreting and Explaining Deep Models in Computer Vision
Wojciech Samek, Fraunhofer HHI, Machine Learning Group

Source: iphome.hhi.de/samek/pdf/VISMACSummerSchool2018.pdf


Page 1: Title slide (Fraunhofer Heinrich Hertz Institute, Image Processing)

Wojciech Samek: Interpreting Deep Neural Networks by Explaining their Predictions

Page 2: "Superhuman" AI Systems

Page 3: Computing power + huge volumes of data → deep neural network (information stored implicitly) → solve the task. Can we trust these black boxes?

Pages 4–5: Can we trust these black boxes? (figure slides)

Page 6: Can we trust these black boxes?

We need interpretability in order to:

  • verify the system
  • understand its weaknesses
  • address legal aspects
  • learn new things from the data

Pages 7–13: Dimensions of Interpretability (figure slides)

Page 14: Explain Predictions of Deep Neural Networks

Pages 15–18: Naive Approach: Sensitivity Analysis
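Sensitivity analysis explains a prediction by the gradient of the class score with respect to the input: the input dimensions the score reacts to most strongly get the largest heatmap values. A minimal numpy sketch of this idea (the toy two-layer ReLU network, its random weights, and the input are illustrative assumptions, not a model from the talk):

```python
import numpy as np

# Toy two-layer ReLU network: f_c(x) = w2 . relu(W1 @ x).
# All weights and the input are random placeholders for illustration.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # hidden x input
w2 = rng.standard_normal(4)        # read-out weights of the explained class c
x = rng.standard_normal(3)

z = W1 @ x                          # hidden pre-activations
score = w2 @ np.maximum(z, 0.0)     # class score f_c(x)

# Sensitivity analysis: one backward pass gives the gradient;
# its magnitude per input dimension is the "heatmap".
grad = W1.T @ (w2 * (z > 0))        # d f_c / d x
sensitivity = np.abs(grad)

# The network is piecewise linear, so a finite difference reproduces
# the gradient essentially exactly (away from ReLU kinks).
eps = 1e-6
fd = np.array([(w2 @ np.maximum(W1 @ (x + eps * np.eye(3)[i]), 0.0) - score) / eps
               for i in range(3)])
print(np.allclose(grad, fd, atol=1e-4))
```

Note that the heatmap answers "what change would alter the score", not "what in this input actually produced the score", which is why the talk calls the approach naive.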

Page 19: Better Approach: Layer-wise Relevance Propagation (LRP) (Bach et al., PLOS ONE, 2015). The classifier is treated as a black box.

Page 20: Better Approach: LRP

Classification: the network assigns scores to the classes cat, rooster, and dog.

Page 21: Better Approach: LRP

What makes this image a "rooster image"? Idea: redistribute the evidence for the class rooster back to image space.
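This redistribution can be made concrete. Below is a minimal numpy sketch of one common LRP variant, the ε-rule, applied to a toy two-layer network; the layer sizes, random weights, and ε value are illustrative assumptions, and Bach et al. (2015) define further propagation rules:

```python
import numpy as np

def lrp_epsilon(W, a_in, R_out, eps=1e-6):
    """One LRP-epsilon step for a dense layer z_k = sum_j a_j * W[j, k]:
    redistribute the layer's output relevance R_out to its inputs,
    (approximately) conserving the total relevance."""
    z = a_in @ W                                      # pre-activations z_k
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)    # stabilize small denominators
    s = R_out / z_stab
    return a_in * (W @ s)                             # R_j = a_j * sum_k W[j,k] * s_k

rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((4, 1))
x = rng.uniform(0, 1, 3)

a1 = np.maximum(x @ W1, 0)      # forward pass (ReLU hidden layer)
score = (a1 @ W2).item()        # class score to be explained

# Backward relevance passes: score -> hidden layer -> input "heatmap"
R_hidden = lrp_epsilon(W2, a1, np.array([score]))
R_input = lrp_epsilon(W1, x, R_hidden)
print(score, R_hidden.sum(), R_input.sum())  # near-equal, up to the stabilizer
```

The conservation property (the input relevances sum back to the class score, up to the ε-stabilizer) is what distinguishes this redistribution from a plain gradient.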

Page 22: Better Approach: LRP

Theoretical interpretation: Deep Taylor Decomposition (Montavon et al., 2017). Note: LRP is not based on the gradient!

Page 23: Better Approach: LRP

Explanation: heatmaps for the classes cat, rooster, and dog.

Page 24: Better Approach: LRP

Heatmap of prediction "3" vs. heatmap of prediction "9".

Page 25: Better Approach: LRP

More information: (Montavon et al., 2017 & 2018).

Page 26: Decomposing the Correct Quantity (figure slide)

Page 27: Why doesn't simple Taylor decomposition work?

Page 28: Deep Taylor Decomposition (Montavon et al., 2017; Montavon et al., 2018)

Idea: since a neural network is composed of simple functions, we propose a deep Taylor decomposition. In each explanation step it is easy to find a good root point, and there is no gradient shattering.
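The idea above can be written compactly, in the spirit of the notation of Montavon et al. (2017): each neuron's relevance is Taylor-expanded at a root point chosen in that neuron's own input domain, so the expansion is local and easy to control (ε_k collects the higher-order terms, which a good root point keeps small):

```latex
R_k \;=\; \underbrace{R_k(\tilde{a})}_{=\,0 \text{ at a root point}}
\;+\; \sum_j \left.\frac{\partial R_k}{\partial a_j}\right|_{\tilde{a}} \, (a_j - \tilde{a}_j)
\;+\; \epsilon_k ,
\qquad
R_{j \leftarrow k} \;=\; \left.\frac{\partial R_k}{\partial a_j}\right|_{\tilde{a}} \, (a_j - \tilde{a}_j),
\qquad
R_j \;=\; \sum_k R_{j \leftarrow k} .
```

Applying this per neuron, layer by layer, yields propagation rules of the LRP family rather than a single global gradient, which is why no root point of the full network function is ever needed.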

Page 29: Other Explanation Methods

Page 30: Axiomatic Approach to Interpretability

Page 31: First Attempt: Distance to Ground Truth

Page 32: From Ground-Truth Explanations to Axioms

Pages 33–36: Axiomatic Approach to Interpretability (figure slides)

Pages 37–47: Deep Taylor Decomposition (derivation slides)

Page 48: Applications

Page 49: LRP applied to different data

  • Digits (Bach'15)
  • General images (Bach'15, Lapuschkin'16)
  • Faces (Lapuschkin'17)
  • Text analysis (Arras'16 & '17)
  • Translation (Ding'17)
  • Speech (Becker'18)
  • Video (Anders'18)
  • VQA (Arras'18)
  • Games (Lapuschkin'18, in prep.)
  • EEG (Sturm'16)
  • fMRI (Thomas'18)
  • Morphing (Seibold'18)
  • Histopathology (Binder'18)
  • Gait patterns (Horst'18, in prep.)

Page 50: LRP applied to different models

  • Convolutional NNs (Bach'15, Arras'17, …)
  • LSTMs (Arras'17, Thomas'18)
  • Bag-of-words / Fisher vector models (Bach'15, Arras'16, Lapuschkin'17, Binder'18)
  • One-class SVM (Kauffmann'18)
  • Local renormalization layers (Binder'16)

Page 51: Application: Compare Classifiers (Arras et al., 2016 & 2017)

  • word2vec/CNN: performance 80.19%. Strategy: identify semantically meaningful words related to the topic.
  • BoW/SVM: performance 80.10%. Strategy: identify statistical patterns, i.e., exploit word statistics.

Page 52: Application: Compare Classifiers (Lapuschkin et al., 2016)

Same performance → same strategy?

Page 53: Application: Compare Classifiers

'horse' images in PASCAL VOC 2007.

Page 54: Application: Compare Classifiers (Binder et al., 2016)

GoogleNet focuses on the faces of the animals and thereby suppresses background noise; the BVLC CaffeNet heatmaps are much noisier. Is heatmap structure related to performance?

Page 55: Application: Measure Context Use

How important is context for the classifier? Measure it as:

importance of context = relevance outside bbox / relevance inside bbox
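The ratio above can be computed directly from a relevance heatmap and an object bounding box. A small numpy sketch (the restriction to positive relevance and the half-open box convention are our assumptions; the slide only gives the ratio itself):

```python
import numpy as np

def context_importance(relevance, bbox):
    """Ratio of (positive) relevance outside the object bounding box to the
    relevance inside it; bbox = (row0, row1, col0, col1), half-open."""
    r0, r1, c0, c1 = bbox
    R = np.maximum(relevance, 0.0)        # consider positive evidence only
    inside_mask = np.zeros_like(R, dtype=bool)
    inside_mask[r0:r1, c0:c1] = True
    inside = R[inside_mask].sum()
    outside = R[~inside_mask].sum()
    return outside / inside

# Toy heatmap: 4 units of relevance inside the box, 2 units on the context.
heat = np.zeros((6, 6))
heat[2:4, 2:4] = 1.0
heat[0, 0] = 2.0
print(context_importance(heat, (2, 4, 2, 4)))  # -> 0.5
```

A value near zero means the classifier relies almost entirely on the object itself; large values indicate heavy context use.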

Page 56: Application: Measure Context Use (Lapuschkin et al., 2016)

Page 57: Application: Measure Context Use (Lapuschkin et al., 2016)

Context use is anti-correlated with performance (comparison of BVLC CaffeNet, GoogleNet, and VGG CNN S).

Page 58: Application: Face Analysis (Lapuschkin et al., 2017)

Gender classification, with and without pretraining. Strategy: focus on chin/beard, eyes, and hair; without pretraining, however, the model overfits.

Page 59: Application: Face Analysis (Lapuschkin et al., 2017)

Age classification (predictions: 25–32 years old vs. 60+ years old), comparing pretraining on ImageNet with pretraining on IMDB-WIKI. Strategy: focus on the laughing; laughing speaks against 60+, i.e., the model has learned that old people do not laugh.

Page 60: Application: Sentiment Analysis (Arras et al., 2017)

How should multiplicative interactions be handled? Gate neurons affect the relevance distribution only indirectly, through the forward pass. On negative-sentiment examples, the explanations show that the model understands negation!
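The treatment of multiplicative interactions in LSTMs described by Arras et al. (2017) can be sketched for the cell-state update c_t = f ⊙ c_{t-1} + i ⊙ g: the sum is split proportionally to its two contributions, and within each product the gate receives no relevance. The variable names and ε-stabilizer below are illustrative assumptions:

```python
import numpy as np

def lrp_lstm_cell(R_c, f, c_prev, i, g, eps=1e-6):
    """Redistribute the relevance R_c of the cell state c_t = f*c_prev + i*g.
    Sum: split R_c proportionally to the two additive contributions.
    Product: the signals (c_prev, g) inherit the relevance; the gates (f, i)
    get none, since they shape the result only through the forward pass."""
    z1, z2 = f * c_prev, i * g
    denom = z1 + z2 + eps * np.where(z1 + z2 >= 0, 1.0, -1.0)
    R_c_prev = R_c * z1 / denom   # this share goes to c_prev, not to gate f
    R_g = R_c * z2 / denom        # this share goes to g, not to gate i
    return R_c_prev, R_g

# Both contributions are equal here, so the relevance splits in half.
f, c_prev = np.array([0.5]), np.array([2.0])
i, g = np.array([1.0]), np.array([1.0])
R_c_prev, R_g = lrp_lstm_cell(np.array([2.0]), f, c_prev, i, g)
print(R_c_prev + R_g)   # conserves (approximately) the input relevance
```

Because the gates already decided how much each signal contributed in the forward pass, giving them zero relevance avoids double-counting their influence.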

Page 61: Application: EEG Analysis (Sturm et al., 2016)

Brain-computer interfacing: a DNN/CNN is trained and then explained with LRP. How the brain solves the task is subject-dependent, so individual explanations are needed.

Page 62: Application: EEG Analysis (Sturm et al., 2016)

With LRP we can analyze why a trial was misclassified.

Page 63: Application: fMRI Analysis (Thomas et al., 2018)

Page 64: Application: Understand the Model (Anders et al., 2018)

Page 65: Application: Understand the Model

Observation: the explanations focus on the borders of the video, as if the model wants to watch more of it.

Page 66: Application: Understand the Model

Idea: play the video in fast forward (without retraining); the classification accuracy then improves.

Page 67: Application: Understand the Model (Becker et al., 2018)

Female vs. male speakers: the model classifies gender based on the fundamental frequency and its immediate harmonics (see also Traunmüller & Eriksson, 1995).

Page 68: Application: Understand the Model (Arras et al., 2018)

VQA: the model understands the question and correctly identifies the object of interest.

Page 69: Application: Understand the Model (Lapuschkin et al., in prep.)

Sensitivity analysis vs. LRP on Atari: sensitivity analysis does not focus on where the ball is, but on where the ball could be in the next frame; LRP shows that the model tracks the ball.

Page 70: Application: Understand the Model (Lapuschkin et al., in prep.)

Page 71: Application: Understand the Model (Lapuschkin et al., in prep.)

The model learns to 1. track the ball, 2. focus on the paddle, and 3. focus on the tunnel.

Page 72: Take Home Messages

Pages 73–80: Take Home Messages (figure slides)

High flexibility: different LRP variants, free parameters.

Page 81: References

Tutorial / Overview Papers
  • G Montavon, W Samek, KR Müller. Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing, 73:1-15, 2018.
  • W Samek, T Wiegand, KR Müller. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. ITU Journal: ICT Discoveries, Special Issue 1, The Impact of Artificial Intelligence (AI) on Communication Networks and Services, 1(1):39-48, 2018.

Methods Papers
  • S Bach, A Binder, G Montavon, F Klauschen, KR Müller, W Samek. On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation. PLOS ONE, 10(7):e0130140, 2015.
  • G Montavon, S Bach, A Binder, W Samek, KR Müller. Explaining NonLinear Classification Decisions with Deep Taylor Decomposition. Pattern Recognition, 65:211-222, 2017.
  • L Arras, G Montavon, KR Müller, W Samek. Explaining Recurrent Neural Network Predictions in Sentiment Analysis. EMNLP'17 Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA), 159-168, 2017.
  • A Binder, G Montavon, S Lapuschkin, KR Müller, W Samek. Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers. Artificial Neural Networks and Machine Learning – ICANN 2016, Part II, Lecture Notes in Computer Science, Springer-Verlag, 9887:63-71, 2016.
  • J Kauffmann, KR Müller, G Montavon. Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models. arXiv:1805.06230, 2018.

Evaluation of Explanations
  • W Samek, A Binder, G Montavon, S Lapuschkin, KR Müller. Evaluating the Visualization of What a Deep Neural Network Has Learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11):2660-2673, 2017.

Page 82: References

Application to Text
  • L Arras, F Horn, G Montavon, KR Müller, W Samek. Explaining Predictions of Non-Linear Classifiers in NLP. Workshop on Representation Learning for NLP, Association for Computational Linguistics, 1-7, 2016.
  • L Arras, F Horn, G Montavon, KR Müller, W Samek. "What is Relevant in a Text Document?": An Interpretable Machine Learning Approach. PLOS ONE, 12(8):e0181142, 2017.
  • L Arras, G Montavon, KR Müller, W Samek. Explaining Recurrent Neural Network Predictions in Sentiment Analysis. EMNLP'17 Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA), 159-168, 2017.
  • L Arras, A Osman, G Montavon, KR Müller, W Samek. Evaluating and Comparing Recurrent Neural Network Explanation Methods in NLP. arXiv, 2018.

Application to Images & Faces
  • S Lapuschkin, A Binder, G Montavon, KR Müller, W Samek. Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2912-2920, 2016.
  • S Bach, A Binder, KR Müller, W Samek. Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth. IEEE International Conference on Image Processing (ICIP), 2271-2275, 2016.
  • F Arbabzadeh, G Montavon, KR Müller, W Samek. Identifying Individual Facial Expressions by Deconstructing a Neural Network. Pattern Recognition, 38th German Conference, GCPR 2016, Lecture Notes in Computer Science, 9796:344-354, Springer International Publishing, 2016.
  • S Lapuschkin, A Binder, KR Müller, W Samek. Understanding and Comparing Deep Neural Networks for Age and Gender Classification. IEEE International Conference on Computer Vision Workshops (ICCVW), 1629-1638, 2017.
  • C Seibold, W Samek, A Hilsmann, P Eisert. Accurate and Robust Neural Networks for Security Related Applications Exampled by Face Morphing Attacks. arXiv:1806.04265, 2018.

Page 83: References

Application to Video
  • C Anders, G Montavon, W Samek, KR Müller. Understanding Patch-Based Learning by Explaining Predictions. arXiv:1806.06926, 2018.
  • V Srinivasan, S Lapuschkin, C Hellge, KR Müller, W Samek. Interpretable Human Action Recognition in Compressed Domain. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1692-1696, 2017.

Application to Speech
  • S Becker, M Ackermann, S Lapuschkin, KR Müller, W Samek. Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals. arXiv:1807.03418, 2018.

Application to the Sciences
  • I Sturm, S Lapuschkin, W Samek, KR Müller. Interpretable Deep Neural Networks for Single-Trial EEG Classification. Journal of Neuroscience Methods, 274:141-145, 2016.
  • A Thomas, H Heekeren, KR Müller, W Samek. Interpretable LSTMs for Whole-Brain Neuroimaging Analyses. arXiv, 2018.
  • KT Schütt, F Arbabzadah, S Chmiela, KR Müller, A Tkatchenko. Quantum-Chemical Insights from Deep Tensor Neural Networks. Nature Communications, 8:13890, 2017.
  • A Binder, M Bockmayr, M Hägele, et al. Towards Computational Fluorescence Microscopy: Machine Learning-Based Integrated Prediction of Morphological and Molecular Tumor Profiles. arXiv:1805.11178, 2018.

Page 84: Acknowledgements

Klaus-Robert Müller (TUB), Grégoire Montavon (TUB), Sebastian Lapuschkin (HHI), Leila Arras (HHI), Alexander Binder (SUTD), …

Thank you for your attention!