
Machine Learning
Bayesian Decision Theory

Dmitrij Schlesinger, WS 2014/2015, 27.10.2014


Recognition

The model: let two random variables be given:
– The first one is typically discrete (k ∈ K) and is called the “class”
– The second one is often continuous (x ∈ X) and is called the “observation”

Let the joint probability distribution p(x, k) be “given”. As k is discrete, it is often specified as p(x, k) = p(k) · p(x|k).

The recognition task: given x, estimate k. Usual problems (questions):
– How to estimate k from x? (today)
– The joint probability is not always explicitly specified
– The set K is sometimes huge


Idea – a game

Somebody samples a pair (x, k) according to a p.d. p(x, k)

He keeps k hidden and presents x to you

You decide for some k∗ according to a chosen decision strategy

Somebody penalizes your decision according to a loss-function, i.e. he compares your decision to the true hidden k

You know both p(x, k) and the loss-function (i.e. how he compares)

Your goal is to design the decision strategy so as to pay as little as possible on average.


Bayesian Risk

Notations:

The decision set D. Note: it need not coincide with K! Examples: decisions like “I don’t know”, “not this class”, ...

A decision strategy is a (deterministic) mapping e : X → D

The loss-function is C : D × K → R, i.e. for a decision d and a “true” class k the penalty is C(d, k)

The Bayesian Risk of a strategy e is the expected loss:

\[ R(e) = \sum_x \sum_k p(x,k) \cdot C\big(e(x), k\big) \;\to\; \min_e \]

It should be minimized with respect to the decision strategy
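For a small discrete problem the minimization can be sketched by brute force: enumerate all deterministic strategies e : X → D and evaluate R(e) directly. The joint and the loss below are made up, with D = K:

```python
import itertools
import numpy as np

# Illustrative joint p(x, k) over 3 observations and 2 classes (rows: k).
p_xk = np.array([[0.18, 0.09, 0.03],
                 [0.07, 0.28, 0.35]])
# 0/1 loss C(d, k): rows are decisions d, columns are classes k.
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
n_classes, n_obs = p_xk.shape

def risk(e):
    """R(e) = sum_{x,k} p(x,k) * C(e(x), k) for a strategy e[x] = d."""
    return sum(p_xk[k, x] * C[e[x], k]
               for x in range(n_obs) for k in range(n_classes))

best = min(itertools.product(range(n_classes), repeat=n_obs), key=risk)
print("best strategy:", best, "risk:", round(risk(best), 3))
```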


Some variants

General:

\[ R(e) = \sum_x \sum_k p(x,k) \cdot C\big(e(x), k\big) \;\to\; \min_e \]

Almost always: decisions can be made for different x independently (the set of decision strategies is not restricted). Then:

\[ R\big(e(x)\big) = R(d) = \sum_k p(x,k) \cdot C(d,k) \;\to\; \min_d \]

Very often: the decision set coincides with the set of classes, i.e. D = K:

\[ k^* = \arg\min_k \sum_{k'} p(x,k') \cdot C(k,k') = \arg\min_k \sum_{k'} p(k'|x) \cdot C(k,k') \]
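The same decisions fall out pointwise, without enumerating strategies. A sketch with the same made-up joint as above:

```python
import numpy as np

# Pointwise rule: k* = argmin_k sum_{k'} p(k'|x) * C(k, k') for each x.
p_xk = np.array([[0.18, 0.09, 0.03],
                 [0.07, 0.28, 0.35]])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])

p_k_given_x = p_xk / p_xk.sum(axis=0)   # posterior p(k|x), shape (|K|, |X|)
exp_loss = C @ p_k_given_x              # expected loss of each decision per x
k_star = exp_loss.argmin(axis=0)        # one independent decision per x
print(k_star)                           # matches the brute-force strategy
```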


Maximum A-posteriori Decision (MAP)

The loss is the simplest one:

\[ C(k,k') = \begin{cases} 1 & \text{if } k \ne k' \\ 0 & \text{otherwise} \end{cases} \;=\; \delta(k \ne k') \]

i.e. we pay 1 if the answer is not the true class, no matter what error we make. From this it follows:

\[ R(k) = \sum_{k'} p(k'|x) \cdot \delta(k \ne k') = \sum_{k'} p(k'|x) - p(k|x) = 1 - p(k|x) \;\to\; \min_k \]

Since the posterior sums to 1, minimizing 1 − p(k|x) is equivalent to

\[ p(k|x) \;\to\; \max_k \]
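Numerically the equivalence is immediate; a tiny sketch with a made-up posterior:

```python
import numpy as np

# Under the 0/1 loss, answering k costs R(k) = 1 - p(k|x), so minimizing
# the risk is the same as maximizing the posterior.
posterior = np.array([0.2, 0.5, 0.3])   # made-up p(k|x) for one fixed x
exp_loss = 1.0 - posterior
print(int(np.argmin(exp_loss)), int(np.argmax(posterior)))   # both print 1
```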


A MAP example

Let K = {1, 2}, x ∈ R², and p(k) be given. The conditional probability distributions of the observations given the classes are Gaussians:

\[ p(x|k) = \frac{1}{2\pi\sigma_k^2} \exp\left[ -\frac{\|x - \mu_k\|^2}{2\sigma_k^2} \right] \]

The loss-function is δ(k ≠ k′), i.e. we want MAP.

The decision strategy e : X → K partitions the input space into two regions: one corresponding to the first class and one corresponding to the second.

What does this partition look like?


A MAP example

For a particular x we decide for 1 if

\[ p(1) \cdot \frac{1}{2\pi\sigma_1^2} \exp\left[ -\frac{\|x-\mu_1\|^2}{2\sigma_1^2} \right] \;>\; p(2) \cdot \frac{1}{2\pi\sigma_2^2} \exp\left[ -\frac{\|x-\mu_2\|^2}{2\sigma_2^2} \right] \]

Special case (for simplicity) σ1 = σ2 → the decision strategy is (derivation on the board)

\[ \langle x, \mu_2 - \mu_1 \rangle > \mathrm{const} \]

→ a linear classifier: the decision boundary is a hyperplane orthogonal to µ2 − µ1 (a numeric sketch follows the list below).

More classes, equal σ and equal p(k) → Voronoi diagram
More classes, equal σ, different p(k) → Fisher classifier
Two classes, different σ → a general quadratic decision curve
etc.
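A minimal numeric sketch of the equal-σ case, with illustrative parameter values: comparing the two weighted Gaussians collapses to thresholding 〈x, µ2 − µ1〉.

```python
import numpy as np

# Illustrative parameters for the two-class Gaussian model with equal sigma.
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
p1, p2, sigma = 0.5, 0.5, 1.0

# Equating the two log-posteriors (the normalizations cancel) gives
#   <x, mu2 - mu1> = (||mu2||^2 - ||mu1||^2) / 2 - sigma^2 * log(p2 / p1)
const = (mu2 @ mu2 - mu1 @ mu1) / 2.0 - sigma**2 * np.log(p2 / p1)

def decide(x):
    """Linear MAP classifier: class 2 iff <x, mu2 - mu1> > const."""
    return 2 if x @ (mu2 - mu1) > const else 1

print(decide(np.array([0.1, 0.0])), decide(np.array([1.9, 1.1])))  # 1 2
```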


Decision with rejection

The decision set is D = K ∪ {r}, i.e. extended by a special decision “I don’t know”. The loss-function is

\[ C(d,k) = \begin{cases} \delta(d \ne k) & \text{if } d \in K \\ \varepsilon & \text{if } d = r \end{cases} \]

i.e. we pay a (reasonable) penalty if we are too lazy to decide. Case-by-case analysis:
1. We decide for a class d ∈ K; then the decision is MAP, d = k∗ = arg maxₖ p(k|x), and the loss is 1 − p(k∗|x)
2. We decide to reject, d = r, and pay ε

The decision strategy: decide for k∗ if p(k∗|x) > 1 − ε, otherwise reject.
Note: not only the minimizer arg maxₖ is important but also the value of the loss minₖ (for the comparison).
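A sketch of this strategy (the posterior values are made up):

```python
import numpy as np

def decide_with_reject(posterior, eps):
    """Answer k* = argmax_k p(k|x) only if 1 - p(k*|x) < eps, else reject."""
    k_star = int(np.argmax(posterior))
    return k_star if posterior[k_star] > 1.0 - eps else "reject"

print(decide_with_reject(np.array([0.55, 0.45]), eps=0.3))  # -> reject
print(decide_with_reject(np.array([0.95, 0.05]), eps=0.3))  # -> 0
```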


Other simple loss-functions

Let the set of classes be structured (in some sense). Example: we have a probability density p(x, y) with an observation x and a continuous hidden value y. Suppose we know p(y|x) for a given x, for which we would like to infer y.

The Bayesian Risk reads:

\[ R\big(e(x)\big) = \int_{-\infty}^{\infty} p(y|x) \cdot C\big(e(x), y\big)\, dy \]


Other simple loss-functions

The simple δ-loss-function → MAP (not interesting anymore)

The loss may instead account for the difference between the decision and the “true” hidden value, for instance C(d, y) = (d − y)², i.e. we pay depending on the distance.

Then (derivation on the board again):

\[ e(x) = \arg\min_d \int_{-\infty}^{\infty} p(y|x) \cdot (d-y)^2\, dy = \int_{-\infty}^{\infty} y \cdot p(y|x)\, dy = \mathbb{E}_{p(y|x)}[y] \]

Other choices: C(d, y) = |d − y|, C(d, y) = δ(|d − y| > ε), combinations with “rejection”, etc.
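For the squared loss, a sketch that approximates the posterior mean on a grid; the density p(y|x) below is a made-up two-component mixture:

```python
import numpy as np

# MMSE sketch: for C(d, y) = (d - y)^2 the optimal decision is E[y|x].
y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]
p = 0.7 * np.exp(-(y - 1.0)**2 / 2) + 0.3 * np.exp(-(y + 2.0)**2 / 2)
p /= (p * dy).sum()                  # normalize to a density on the grid

d_mmse = (y * p * dy).sum()          # posterior mean
print(round(d_mmse, 3))              # approx 0.7 * 1.0 + 0.3 * (-2.0) = 0.1
```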


Additive loss-functions – an example

        Q1   Q2   ...   Qn
  P1     1    0   ...    1
  P2     0    1   ...    0
  ...   ...  ...  ...   ...
  Pm     0    1   ...    0
  “Σ”    ?    ?   ...    ?

Consider a “questionnaire”: m persons answer n questions. Furthermore, assume that the persons are rated: a “reliability” measure is assigned to each one. The goal is to find the “right” answers to all questions.

Strategy 1: choose the best person and take all of his/her answers.

Strategy 2: consider each question separately; look at what all the people say about it and do (weighted) voting.


Additive loss-functions – example interpretation

Persons are the classes k; the reliability measure is the posterior p(k|x)

The specialty: classes consist of “parts” (questions), i.e. classes are structured

The set of classes is Kᵐ: each k = (k1, k2, ..., km) can be seen as a vector of m components, each one being a simple answer (0 or 1 in the above example)

Strategy 1 is MAP

How can the other decision strategy be derived (and understood) from the viewpoint of Bayesian Decision Theory?


Additive loss-functions

Consider the simple loss C(k, k′) = δ(k ≠ k′) for the case of structured classes: it does not reflect how strongly the class and the decision disagree

A better (?) choice – an additive loss-function:

\[ C(k, k') = \sum_i c_i(k_i, k'_i) \]

i.e. disagreements of all components are summed up

Substitute it into the formula for the Bayesian Risk, derive, and see what happens ...


Additive loss-functions – derivation

\[
\begin{aligned}
R(k) &= \sum_{k'} \Big[ p(k'|x) \cdot \sum_i c_i(k_i, k'_i) \Big] && \text{swap the summations} \\
     &= \sum_i \sum_{k'} c_i(k_i, k'_i) \cdot p(k'|x) && \text{split the summation over } k' \\
     &= \sum_i \sum_{l \in K} \sum_{k':\, k'_i = l} c_i(k_i, l) \cdot p(k'|x) && \text{factor out } c_i(k_i, l) \\
     &= \sum_i \sum_{l \in K} \Big[ c_i(k_i, l) \cdot \sum_{k':\, k'_i = l} p(k'|x) \Big] && \text{the inner sums are the marginals} \\
     &= \sum_i \sum_{l \in K} c_i(k_i, l) \cdot p(k'_i{=}l \mid x) \;\to\; \min_k && \text{independent problems}
\end{aligned}
\]

\[ \Rightarrow\quad \sum_{l \in K} c_i(k_i, l) \cdot p(k'_i{=}l \mid x) \;\to\; \min_{k_i} \quad \forall i \]


Additive loss-functions – the strategy

1. Compute the marginal probability distributions over the values,

\[ p(k'_i{=}l \mid x) = \sum_{k':\, k'_i = l} p(k'|x), \]

for each variable i and each value l

2. Decide for each variable “independently” according to its marginal p.d. and the local loss cᵢ:

\[ \sum_{l \in K} c_i(k_i, l) \cdot p(k'_i{=}l \mid x) \;\to\; \min_{k_i} \]

This is again a Bayesian decision problem – minimize the average loss.
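A sketch of the whole strategy for m binary variables; the joint posterior is random purely for illustration, and the local loss is the 0/1 loss:

```python
import itertools
import numpy as np

m, K = 3, (0, 1)
configs = list(itertools.product(K, repeat=m))       # all of K^m
rng = np.random.default_rng(0)
posterior = rng.random(len(configs))
posterior /= posterior.sum()                         # made-up p(k'|x)

def marginal(i, l):
    """p(k'_i = l | x): sum over all configurations k' with k'_i = l."""
    return sum(p for cfg, p in zip(configs, posterior) if cfg[i] == l)

def local_cost(ki, l):
    return float(ki != l)                            # per-variable 0/1 loss

# Step 2: minimize the local expected loss independently for each variable.
decision = [min(K, key=lambda ki, i=i: sum(local_cost(ki, l) * marginal(i, l)
                                           for l in K))
            for i in range(m)]
print(decision)
```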


Additive loss-functions – a special case

For each variable we pay 1 if we are wrong:

\[ c_i(k_i, k'_i) = \delta(k_i \ne k'_i) \]

The overall loss is the number of misclassified variables (wrongly answered questions),

\[ C(k, k') = \sum_i \delta(k_i \ne k'_i), \]

and is called the Hamming distance.

The decision strategy is the Maximum Marginal Decision:

\[ k_i^* = \arg\max_l\; p(k'_i{=}l \mid x) \quad \forall i \]
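A small sketch showing that the maximum-marginal decision need not coincide with the MAP configuration; the joint posterior over {0, 1}³ is made up:

```python
import itertools
import numpy as np

configs = list(itertools.product((0, 1), repeat=3))
posterior = np.array([0.22, 0.00, 0.00, 0.15, 0.15, 0.15, 0.15, 0.18])

map_cfg = configs[int(np.argmax(posterior))]          # MAP configuration
marg1 = [sum(p for c, p in zip(configs, posterior) if c[i] == 1)
         for i in range(3)]                           # p(k'_i = 1 | x)
mm_cfg = tuple(int(q > 0.5) for q in marg1)           # maximum marginals
print("MAP:", map_cfg, " max-marginal:", mm_cfg)      # (0,0,0) vs (1,0,0)
```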


Minimum Marginal Squared Error (MMSE)

Assume the values l for kᵢ are numbers (or vectors). Examples:
– in tracking and pose estimation: the set of all possible positions of the object to be tracked
– in stereo: the set of all disparity/depth values
– in denoising: a gray value, etc.
→ a more reasonable (additive) loss should account for the metric difference between the decision and the true value, e.g.

\[ C(k, k') = \sum_i c_i(k_i, k'_i) = \sum_i \|k_i - k'_i\|^2 \]

The task to be solved at each position i is

\[ \sum_{l \in K} \|k_i - l\|^2 \cdot p(k'_i{=}l \mid x) \;\to\; \min_{k_i} \]


Minimum Marginal Squared Error (MMSE)

\[ \sum_{l \in K} \|k_i - l\|^2 \cdot p(k'_i{=}l \mid x) \;\to\; \min_{k_i} \]

Setting the derivative with respect to kᵢ to zero:

\[ \sum_{l \in K} 2\,(k_i - l) \cdot p(k'_i{=}l \mid x) = 0 \]
\[ \sum_{l \in K} k_i \cdot p(k'_i{=}l \mid x) = \sum_{l \in K} l \cdot p(k'_i{=}l \mid x) \]
\[ k_i = \sum_{l \in K} l \cdot p(k'_i{=}l \mid x) \]

The optimal decision for the i-th variable is the expectation (average) under the corresponding marginal probability distribution. Note: the decision is not necessarily an element of K, e.g. it may be real-valued → the sets D and K differ.
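A sketch of the per-variable posterior mean; the label set and the marginals are illustrative (think of a handful of disparity values):

```python
import numpy as np

labels = np.arange(5)                              # K = {0, ..., 4}
marginals = np.array([[0.1, 0.2, 0.4, 0.2, 0.1],   # p(k'_1 = l | x)
                      [0.0, 0.1, 0.1, 0.3, 0.5]])  # p(k'_2 = l | x)

k_mmse = marginals @ labels                        # expectation per variable
print(k_mmse)    # [2.0, 3.2] -- real-valued, not necessarily an element of K
```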


Summary

Today:
– The idea: a game; the approach: minimize the average (expected) loss
– The simple δ-loss → Maximum A-posteriori Decision
– Rejection: an example of “extended” decision sets
– “Metric” classes: more elaborate losses are possible
– Structured classes: even more elaborate losses ...

The message: the design of an appropriate loss-function is as important as the design of an appropriate probabilistic model.

The next class: probabilistic learning
