Bayesian networks practice


Page 1:

Bayesian networks practice

Page 2:

Semantics

e.g.,

P(j ∧ m ∧ a ∧ ¬b ∧ ¬e)

= P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e)

= …

In general:

$$
\begin{aligned}
P(x_n,\ldots,x_1) &= P(x_n \mid x_{n-1},\ldots,x_1)\, P(x_{n-1},\ldots,x_1)\\
&= P(x_n \mid x_{n-1},\ldots,x_1)\, P(x_{n-1} \mid x_{n-2},\ldots,x_1)\, P(x_{n-2},\ldots,x_1)\\
&= \cdots\\
&= \prod_{i=1}^{n} P(x_i \mid x_{i-1},\ldots,x_1) = \prod_{i=1}^{n} P\big(x_i \mid \mathrm{parents}(X_i)\big)
\end{aligned}
$$

Suppose we have the variables X1,…,Xn.

The probability that they take the values x1,…,xn respectively is P(xn,…,x1).

P(xn,…,x1) is short for P(Xn=xn,…, X1=x1).

We order the variables according to a topological order of the given Bayes net.
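To make the semantics concrete, here is a minimal Python sketch that evaluates the joint entry above for the burglary network. The CPT numbers (P(b)=0.001, P(e)=0.002, etc.) are the standard textbook values, assumed here since the slide does not show the tables:

```python
# One full-joint entry via the chain rule / network semantics.
# CPT numbers are the usual textbook values for the burglary network
# (an assumption; the slide does not show the tables).
P_b, P_e = 0.001, 0.002                              # P(b), P(e)
P_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(a | b, e)
P_j = {True: 0.90, False: 0.05}                      # P(j | a)
P_m = {True: 0.70, False: 0.01}                      # P(m | a)

# P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j|a) P(m|a) P(a|¬b,¬e) P(¬b) P(¬e)
p = P_j[True] * P_m[True] * P_a[(False, False)] * (1 - P_b) * (1 - P_e)
print(p)   # ≈ 0.000628
```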

Page 3:

Inference in Bayesian Networks

• The basic task is to compute the posterior probability for a query variable, given some observed event – that is, some assignment of values to a set of evidence variables.

• Notation:
– X denotes the query variable.
– E denotes the set of evidence variables E1,…,Em, and e is a particular event, i.e. an assignment to the variables in E.
– Y denotes the set of the remaining variables (hidden variables).

• A typical query asks for the posterior probability P(x|e1,…,em)

• E.g. we could ask: What’s the probability of a burglary if both Mary and John call, P(burglary | johncalls, marycalls)?

Page 4:

Classification

• We compute and compare the following:

$$
P(x \mid e_1,\ldots,e_m) = \frac{P(x, e_1,\ldots,e_m)}{P(e_1,\ldots,e_m)}
\qquad
P(\neg x \mid e_1,\ldots,e_m) = \frac{P(\neg x, e_1,\ldots,e_m)}{P(e_1,\ldots,e_m)}
$$

• However, how do we compute:

$$
P(x, e_1,\ldots,e_m) \quad\text{and}\quad P(\neg x, e_1,\ldots,e_m)\,?
$$

What about the hidden variables Y1,…,Yk?

Page 5:

Inference by enumeration

$$
P(x, e_1,\ldots,e_m) = \sum_{y_1}\cdots\sum_{y_k} P(x, e_1,\ldots,e_m, y_1,\ldots,y_k)
$$

and

$$
P(\neg x, e_1,\ldots,e_m) = \sum_{y_1}\cdots\sum_{y_k} P(\neg x, e_1,\ldots,e_m, y_1,\ldots,y_k)
$$

Example: P(burglary | johncalls, marycalls)? (Abbrev. P(b|j,m))

$$
\begin{aligned}
P(b \mid j, m) &= \frac{P(b, j, m)}{P(j, m)} = \alpha\, P(b, j, m) = \alpha \sum_{a}\sum_{e} P(b, j, m, a, e)\\
&= \alpha\,\big[P(b, j, m, a, e) + P(b, j, m, a, \neg e) + P(b, j, m, \neg a, e) + P(b, j, m, \neg a, \neg e)\big]
\end{aligned}
$$
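A minimal sketch of enumeration in Python, again assuming the standard textbook CPTs for the burglary network (the slide does not list them):

```python
import itertools

# Inference by enumeration for P(b | j, m) in the burglary network.
# CPT numbers are the usual textbook values (an assumption).
P_b, P_e = 0.001, 0.002
P_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(a | b, e)
P_j = {True: 0.90, False: 0.05}                      # P(j | a)
P_m = {True: 0.70, False: 0.01}                      # P(m | a)

def joint(b, e, a, j, m):
    """Full-joint entry P(b, e, a, j, m) via the chain rule."""
    p  = P_b if b else 1 - P_b
    p *= P_e if e else 1 - P_e
    p *= P_a[(b, e)] if a else 1 - P_a[(b, e)]
    p *= P_j[a] if j else 1 - P_j[a]
    p *= P_m[a] if m else 1 - P_m[a]
    return p

# Sum out the hidden variables A and E for each value of the query B.
hidden = list(itertools.product([True, False], repeat=2))
p_b    = sum(joint(True,  e, a, True, True) for a, e in hidden)
p_notb = sum(joint(False, e, a, True, True) for a, e in hidden)

alpha = 1 / (p_b + p_notb)            # normalize over the query values
print(alpha * p_b, alpha * p_notb)    # ≈ 0.284 and ≈ 0.716
```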

Page 6:

Another example

Once the right topology has been found, the probability table associated with each node is determined. Estimating such probabilities is fairly straightforward and is similar to the approach used by naïve Bayes classifiers.

Page 7:

High Blood Pressure

• Suppose we get to know that the new patient has high blood pressure.
• What’s the probability he has heart disease under this condition?

(Notation: hd = heart disease, bp = high blood pressure, e = exercise, d = healthy diet; h (heartburn) and cp (chest pain) are the remaining hidden variables.)

$$
\begin{aligned}
P(hd \mid bp) &= \alpha \sum_{e}\sum_{d}\sum_{h}\sum_{cp} P(hd, e, d, h, cp, bp)\\
&= \alpha \sum_{e}\sum_{d}\sum_{h}\sum_{cp} P(hd \mid e, d)\,P(h \mid d)\,P(cp \mid hd, h)\,P(bp \mid hd)\,P(e)\,P(d)\\
&= \alpha \sum_{e}\sum_{d} P(bp \mid hd)\,P(e)\,P(hd \mid e, d)\,P(d)\sum_{h} P(h \mid d)\sum_{cp} P(cp \mid hd, h)\\
&= \alpha \sum_{e}\sum_{d} P(bp \mid hd)\,P(e)\,P(hd \mid e, d)\,P(d)\\
&= \alpha\,\big[\,P(bp \mid hd)\,P(e)\,\big(P(hd \mid e, d)\,P(d) + P(hd \mid e, \neg d)\,P(\neg d)\big)\\
&\qquad\; + P(bp \mid hd)\,P(\neg e)\,\big(P(hd \mid \neg e, d)\,P(d) + P(hd \mid \neg e, \neg d)\,P(\neg d)\big)\,\big]\\
&= \alpha\,\big[\,0.85 \cdot 0.7 \cdot (0.25 \cdot 0.25 + 0.45 \cdot 0.75) + 0.85 \cdot 0.3 \cdot (0.55 \cdot 0.25 + 0.75 \cdot 0.75)\,\big]\\
&= \alpha \cdot 0.4165
\end{aligned}
$$

(The inner sums over h and cp each total 1, so they drop out.)

Page 8:

High Blood Pressure (Cont’d)

$$
\begin{aligned}
P(\neg hd \mid bp) &= \alpha \sum_{e}\sum_{d}\sum_{h}\sum_{cp} P(\neg hd, e, d, h, cp, bp)\\
&= \alpha \sum_{e}\sum_{d}\sum_{h}\sum_{cp} P(\neg hd \mid e, d)\,P(h \mid d)\,P(cp \mid \neg hd, h)\,P(bp \mid \neg hd)\,P(e)\,P(d)\\
&= \alpha \sum_{e}\sum_{d} P(bp \mid \neg hd)\,P(e)\,P(\neg hd \mid e, d)\,P(d)\\
&= \alpha\,\big[\,0.2 \cdot 0.7 \cdot (0.75 \cdot 0.25 + 0.55 \cdot 0.75) + 0.2 \cdot 0.3 \cdot (0.45 \cdot 0.25 + 0.25 \cdot 0.75)\,\big]\\
&= \alpha \cdot 0.102
\end{aligned}
$$

Page 9:

High Blood Pressure (Cont’d)

$$
\alpha = \frac{1}{0.4165 + 0.1020} = \frac{1}{0.5185}
$$

$$
P(hd \mid bp) = \alpha \cdot 0.4165 = 0.8033
\qquad\qquad
P(\neg hd \mid bp) = \alpha \cdot 0.1020 = 0.1967
$$
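These numbers can be reproduced with a short Python sketch. It hard-codes only the CPT entries the slides actually use, and relies on the fact shown above that the sums over h and cp drop out:

```python
# Verify P(hd | bp=high) using only the CPT entries the slides use.
P_e = 0.7                    # P(exercise = yes)
P_d = 0.25                   # P(diet = healthy)
P_hd = {(True, True): 0.25, (True, False): 0.45,     # P(hd | e, d)
        (False, True): 0.55, (False, False): 0.75}
P_bp = {True: 0.85, False: 0.20}                     # P(bp=high | hd)

def score(hd):
    """Unnormalized numerator: sum over e and d (h and cp sum to 1)."""
    total = 0.0
    for e in (True, False):
        for d in (True, False):
            p_hd_ed = P_hd[(e, d)] if hd else 1 - P_hd[(e, d)]
            total += (P_bp[hd] * (P_e if e else 1 - P_e)
                      * p_hd_ed * (P_d if d else 1 - P_d))
    return total

s_yes, s_no = score(True), score(False)   # 0.4165 and 0.1020
alpha = 1 / (s_yes + s_no)
print(alpha * s_yes, alpha * s_no)        # ≈ 0.8033 and ≈ 0.1967
```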

Page 10:

High Blood Pressure, Healthy Diet, and Regular Exercise

$$
\begin{aligned}
P(hd \mid bp, d, e) &= \alpha \sum_{h}\sum_{cp} P(hd, e, d, h, cp, bp)\\
&= \alpha \sum_{h}\sum_{cp} P(hd \mid e, d)\,P(h \mid d)\,P(cp \mid hd, h)\,P(bp \mid hd)\,P(e)\,P(d)\\
&= \alpha\, P(bp \mid hd)\,P(e)\,P(hd \mid e, d)\,P(d)\\
&= \alpha \cdot 0.85 \cdot 0.7 \cdot 0.25 \cdot 0.25 = \alpha \cdot 0.03719
\end{aligned}
$$

Page 11:

High Blood Pressure, Healthy Diet, and Regular Exercise (Cont’d)

$$
\begin{aligned}
P(\neg hd \mid bp, d, e) &= \alpha \sum_{h}\sum_{cp} P(\neg hd, e, d, h, cp, bp)\\
&= \alpha \sum_{h}\sum_{cp} P(\neg hd \mid e, d)\,P(h \mid d)\,P(cp \mid \neg hd, h)\,P(bp \mid \neg hd)\,P(e)\,P(d)\\
&= \alpha\, P(bp \mid \neg hd)\,P(e)\,P(\neg hd \mid e, d)\,P(d)\\
&= \alpha \cdot 0.2 \cdot 0.7 \cdot 0.75 \cdot 0.25 = \alpha \cdot 0.02625
\end{aligned}
$$

Page 12:

High Blood Pressure, Healthy Diet, and Regular Exercise (Cont’d)

$$
\alpha = \frac{1}{0.03719 + 0.02625} = \frac{1}{0.06344}
$$

$$
P(hd \mid bp, d, e) = \alpha \cdot 0.03719 = 0.5862
\qquad\qquad
P(\neg hd \mid bp, d, e) = \alpha \cdot 0.02625 = 0.4138
$$

The model therefore suggests that eating healthily and exercising regularly may reduce a person's risk of getting heart disease.
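A quick check of this conditioned query, under the same assumed CPT entries as the previous sketch:

```python
# With diet = healthy and exercise = yes observed, only hd is summed over.
s_yes = 0.85 * 0.7 * 0.25 * 0.25    # P(bp|hd) P(e) P(hd|e,d) P(d)   = 0.03719
s_no  = 0.20 * 0.7 * 0.75 * 0.25    # P(bp|¬hd) P(e) P(¬hd|e,d) P(d) = 0.02625
alpha = 1 / (s_yes + s_no)
print(alpha * s_yes, alpha * s_no)  # ≈ 0.5862 and ≈ 0.4138
```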

Page 13:

Weather data

What is the Bayesian network corresponding to Naïve Bayes?

Page 14:

“Effects” and “Causes” vs. “Evidence” and “Class”

• Why does Naïve Bayes have this graph?

• Because when we compute in Naïve Bayes:

P(play=yes | E) =

P(Outlook=Sunny | play=yes) *

P(Temp=Cool | play=yes) *

P(Humidity=High | play=yes) *

P(Windy=True | play=yes) *

P(play=yes) / P(E)

we are interested in computing P(…|play=yes), which are probabilities of our evidence “observations” given the class.

• Of course, “play” isn’t a cause for “outlook”, “temperature”, “humidity”, and “windy”.

• However, “play” is the class, and knowing that it has a certain value will influence the observational evidence probability values.

• For example, if play=yes, and we know that the playing happens indoors, then it is more probable (than without this class information) that the outlook will be observed to be “rainy.”

Page 15:

Right or Wrong Topology?

• In general, there is no right or wrong graph topology.
– Of course the calculated probabilities (from the data) will be different for different graphs.
– Some graphs will induce better classifiers than others.
– If you reverse the arrows in the previous figure, then you get a pure causal graph, whose induced classifier might have an estimated error (through cross-validation) better or worse than the Naïve Bayes one (depending on the data).

• If the topology is constructed manually, we (humans) tend to prefer the causal direction.
– In domains such as medicine, the graphs are usually less complex in the causal direction.

Page 16:

Weka suggestion

How does Weka find the shape of the graph?

It fixes an order of the attributes (variables) and then adds and removes arcs until it gets the smallest estimated error (through cross-validation).

By default it starts with a Naïve Bayes network.

Also, it maintains a score of graph complexity, trying to keep the complexity low.
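This greedy add/remove search can be sketched roughly as follows. This is only an illustration of the idea the slide describes, not Weka's actual code; `estimate_error` is a hypothetical placeholder for a cross-validated error estimate:

```python
# Schematic greedy structure search over arcs, in the spirit described
# above. `estimate_error` is a hypothetical placeholder for a
# cross-validated error estimate; this is NOT Weka's implementation.
def search_structure(variables, estimate_error, complexity_penalty=0.01):
    # Start from the Naïve Bayes graph: class -> every attribute.
    cls, attrs = variables[0], variables[1:]
    arcs = {(cls, a) for a in attrs}

    def score(graph):
        # Error plus a complexity term, to keep the graph small.
        return estimate_error(graph) + complexity_penalty * len(graph)

    improved = True
    while improved:
        improved = False
        # Candidate moves: add a missing arc between attributes
        # (respecting the fixed order, which keeps the graph acyclic),
        # or remove an existing arc.
        candidates = [arcs | {(a, b)}
                      for i, a in enumerate(attrs)
                      for b in attrs[i + 1:] if (a, b) not in arcs]
        candidates += [arcs - {arc} for arc in arcs]
        best = min(candidates, key=score, default=arcs)
        if score(best) < score(arcs):
            arcs, improved = best, True
    return arcs
```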

Page 17:

Page 18:

(Annotations on the Weka option screenshots:)

You can change this to 2, for example. If you do, then the max number of parents for a node will be 2.

It is going to start with a Naïve Bayes graph and then try to add/remove arcs.

Laplace correction: better change it to 1, to be compatible with the counter initialization in Naïve Bayes.

Page 19:

Play probability table

Based on the data…

P(play=yes) = 9/14
P(play=no) = 5/14

Let’s correct with Laplace…

P(play=yes) = (9+1)/(14+2) = .625
P(play=no) = (5+1)/(14+2) = .375
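The correction is easy to capture in a one-line helper (a sketch; k is the number of values the attribute or class can take):

```python
def laplace(count, total, k):
    """Laplace-corrected estimate: (count + 1) / (total + k),
    where k is the number of possible values of the attribute."""
    return (count + 1) / (total + k)

print(laplace(9, 14, 2))   # P(play=yes)                    = 0.625
print(laplace(2, 9, 3))    # P(outlook=sunny | play=yes)    = 0.25
print(laplace(1, 2, 2))    # P(windy=true | yes, sunny)     = 0.5
```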

Page 20:

Outlook probability table

Based on the data…

P(outlook=sunny|play=yes) = (2+1)/(9+3) = .25
P(outlook=overcast|play=yes) = (4+1)/(9+3) = .417
P(outlook=rainy|play=yes) = (3+1)/(9+3) = .333

P(outlook=sunny|play=no) = (3+1)/(5+3) = .5
P(outlook=overcast|play=no) = (0+1)/(5+3) = .125
P(outlook=rainy|play=no) = (2+1)/(5+3) = .375

Page 21:

Windy probability table

Based on the data… let’s find the conditional probabilities for “windy”:

P(windy=true|play=yes,outlook=sunny) = (1+1)/(2+2) = .5

Page 22:

Windy probability table

Based on the data…

P(windy=true|play=yes,outlook=sunny) = (1+1)/(2+2) = .5
P(windy=true|play=yes,outlook=overcast) = 0.5
P(windy=true|play=yes,outlook=rainy) = 0.2
P(windy=true|play=no,outlook=sunny) = 0.4
P(windy=true|play=no,outlook=overcast) = 0.5
P(windy=true|play=no,outlook=rainy) = 0.75

Page 23:

Final figure

Classify it

Page 24:

Classification I

P(play=yes|outlook=sunny, temp=cool, humidity=high, windy=true)

= α*P(play=yes)*P(outlook=sunny|play=yes)*P(temp=cool|play=yes, outlook=sunny)*P(humidity=high|play=yes, temp=cool)*P(windy=true|play=yes, outlook=sunny)

= α*0.625*0.25*0.4*0.2*0.5 = α*0.00625

Page 25:

Classification II

P(play=no|outlook=sunny, temp=cool, humidity=high, windy=true)

= α*P(play=no)*P(outlook=sunny|play=no)*P(temp=cool|play=no, outlook=sunny)*P(humidity=high|play=no, temp=cool)*P(windy=true|play=no, outlook=sunny)

= α*0.375*0.5*0.167*0.333*0.4 = α*0.00417

Page 26:

Classification III

P(play=yes|outlook=sunny, temp=cool, humidity=high, windy=true) = α*0.00625

P(play=no|outlook=sunny, temp=cool, humidity=high, windy=true) = α*0.00417

α = 1/(0.00625 + 0.00417) = 95.969

P(play=yes|outlook=sunny, temp=cool, humidity=high, windy=true) = 95.969*0.00625 = 0.60
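A quick arithmetic check of Classification I–III (the five factors are read off the CPTs above):

```python
# Posterior for the instance (sunny, cool, high, true) in the final graph.
score_yes = 0.625 * 0.25 * 0.4 * 0.2 * 0.5      # = 0.00625
score_no  = 0.375 * 0.5 * 0.167 * 0.333 * 0.4   # ≈ 0.00417
alpha = 1 / (score_yes + score_no)              # ≈ 95.969
print(alpha * score_yes, alpha * score_no)      # ≈ 0.60 and ≈ 0.40
```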

Page 27:

Classification IV (missing values or hidden variables)

P(play=yes|temp=cool, humidity=high, windy=true)

= α Σ_outlook P(play=yes)*P(outlook|play=yes)*P(temp=cool|play=yes,outlook)*P(humidity=high|play=yes, temp=cool)*P(windy=true|play=yes,outlook)

=…(next slide)

Page 28:

Classification V (missing values or hidden variables)

P(play=yes|temp=cool, humidity=high, windy=true)

= α Σ_outlook P(play=yes)*P(outlook|play=yes)*P(temp=cool|play=yes,outlook)*P(humidity=high|play=yes,temp=cool)*P(windy=true|play=yes,outlook)

= α*[ P(play=yes)*P(outlook=sunny|play=yes)*P(temp=cool|play=yes,outlook=sunny)*P(humidity=high|play=yes,temp=cool)*P(windy=true|play=yes,outlook=sunny)

+ P(play=yes)*P(outlook=overcast|play=yes)*P(temp=cool|play=yes,outlook=overcast)*P(humidity=high|play=yes,temp=cool)*P(windy=true|play=yes,outlook=overcast)

+ P(play=yes)*P(outlook=rainy|play=yes)*P(temp=cool|play=yes,outlook=rainy)*P(humidity=high|play=yes,temp=cool)*P(windy=true|play=yes,outlook=rainy) ]

= α*[ 0.625*0.25*0.4*0.2*0.5 + 0.625*0.417*0.286*0.2*0.5 + 0.625*0.33*0.333*0.2*0.2 ] = α*0.01645

Page 29:

Classification VI (missing values or hidden variables)

P(play=no|temp=cool, humidity=high, windy=true)

= α Σ_outlook P(play=no)*P(outlook|play=no)*P(temp=cool|play=no,outlook)*P(humidity=high|play=no,temp=cool)*P(windy=true|play=no,outlook)

= α*[ P(play=no)*P(outlook=sunny|play=no)*P(temp=cool|play=no,outlook=sunny)*P(humidity=high|play=no,temp=cool)*P(windy=true|play=no,outlook=sunny)

+ P(play=no)*P(outlook=overcast|play=no)*P(temp=cool|play=no,outlook=overcast)*P(humidity=high|play=no,temp=cool)*P(windy=true|play=no,outlook=overcast)

+ P(play=no)*P(outlook=rainy|play=no)*P(temp=cool|play=no,outlook=rainy)*P(humidity=high|play=no,temp=cool)*P(windy=true|play=no,outlook=rainy) ]

= α*[ 0.375*0.5*0.167*0.333*0.4 + 0.375*0.125*0.333*0.333*0.5 + 0.375*0.375*0.4*0.333*0.75 ] = α*0.0208

Page 30:

Classification VII (missing values or hidden variables)

P(play=yes|temp=cool, humidity=high, windy=true) = α*0.01645
P(play=no|temp=cool, humidity=high, windy=true) = α*0.0208

α = 1/(0.01645 + 0.0208) = 26.846

P(play=yes|temp=cool, humidity=high, windy=true) = 26.846 * 0.01645 = 0.44
P(play=no|temp=cool, humidity=high, windy=true) = 26.846 * 0.0208 = 0.56

I.e. P(play=yes|temp=cool, humidity=high, windy=true) is 44% and P(play=no|temp=cool, humidity=high, windy=true) is 56%

So, we predict ‘play=no.’
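Finally, a sketch that reproduces the sum over the hidden “outlook” variable, with the factor values read off the tables above:

```python
# P(play | temp=cool, humidity=high, windy=true), summing out outlook.
# Rows (sunny, overcast, rainy):
#   (P(outlook|play), P(temp=cool|play,outlook), P(windy=true|play,outlook))
yes_rows = [(0.25, 0.4, 0.5), (0.417, 0.286, 0.5), (0.33, 0.333, 0.2)]
no_rows  = [(0.5, 0.167, 0.4), (0.125, 0.333, 0.5), (0.375, 0.4, 0.75)]

score_yes = sum(0.625 * o * t * 0.2 * w for o, t, w in yes_rows)    # ≈ 0.01645
score_no  = sum(0.375 * o * t * 0.333 * w for o, t, w in no_rows)   # ≈ 0.0208

alpha = 1 / (score_yes + score_no)          # ≈ 26.846
print(alpha * score_yes, alpha * score_no)  # ≈ 0.44 and ≈ 0.56 -> predict play=no
```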