COMP3456 – adapted from textbook slides www.bioalgorithms.info Hidden Markov Models


Page 1: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Hidden Markov Models

Page 2: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Outline
• CG-islands
• The “Fair Bet Casino”
• Hidden Markov Models
• Decoding Algorithm
• Forward-Backward Algorithm
• Profile HMMs
• HMM Parameter Estimation
  • Viterbi training
  • Baum-Welch algorithm

Page 3: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

CG-Islands

• Given 4 nucleotides: the probability of occurrence of each is ≈ 1/4. Thus, the probability of occurrence of a given dinucleotide is ≈ 1/16.

• However, the frequencies of dinucleotides in DNA sequences vary widely.

• In particular, CG is typically under-represented (the frequency of CG is typically < 1/16).

Page 4: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides

part 1: Hidden Markov Models

Page 5: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Why CG-Islands?

• CG is the least frequent dinucleotide because the C in CG is easily methylated and then has the tendency to mutate into T.

• However, methylation is suppressed around genes in a genome, so CG appears at relatively high frequency within these CG-islands.

• Finding the CG-islands in a genome is therefore an important problem, as it gives us a clue to where the genes are.

Page 6: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

CG Islands and the “Fair Bet Casino”

• The CG-islands problem can be modelled after a problem named “The Fair Bet Casino”.

• The game is to flip coins, which results in only two possible outcomes: Head or Tail.

• The Fair coin gives Heads and Tails with the same probability ½.

• The Biased coin gives Heads with probability ¾.

Page 7: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

The “Fair Bet Casino” (cont’d)

• Thus, we define the probabilities:
  • P(H|F) = P(T|F) = ½
  • P(H|B) = ¾, P(T|B) = ¼
• The crooked dealer changes between the Fair and Biased coins with probability 10%.

Page 8: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

The Fair Bet Casino Problem

• Input: A sequence x = x1x2x3…xn of coin tosses made with two possible coins (F or B).

• Output: A sequence π = π1π2π3…πn, with each πi being either F or B, indicating that xi is the result of tossing the Fair or Biased coin respectively.

Page 9: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

but there's a problem…

Fair Bet Casino Problem: any observed outcome of coin tosses could have been generated by any sequence of states!

We need to incorporate a way to grade different sequences differently.

The Decoding Problem

Page 10: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

P(x|fair coin) vs. P(x|biased coin)

• Suppose first that the dealer never changes coins. Some definitions:

• P(x|fair coin): probability of the dealer using the F coin and generating the outcome x.

• P(x|biased coin): probability of the dealer using the B coin and generating the outcome x.

Page 11: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

P(x|fair coin) vs. P(x|biased coin)

P(x|fair coin) = Π_{i=1..n} p(xi|fair coin) = (1/2)^n

P(x|biased coin) = Π_{i=1..n} p(xi|biased coin) = (3/4)^k (1/4)^(n-k) = 3^k/4^n

• k - the number of Heads in x.

Page 12: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

P(x|fair coin) vs. P(x|biased coin)

So what can we find out?
• P(x|fair coin) = P(x|biased coin)
• 1/2^n = 3^k/4^n
• 2^n = 3^k
• n = k log2 3
• when k = n / log2 3 (k ≈ 0.63n)
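As a quick sanity check on this break-even point, a couple of lines of Python (the value n = 100 is just an arbitrary illustration):

```python
import math

n = 100
k_breakeven = n / math.log2(3)   # number of heads at which both models are equally likely
print(round(k_breakeven, 2))     # ~63.09, i.e. k is roughly 0.63n
```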

Page 13: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Log-odds Ratio

• We define the log-odds ratio as follows:

  log2(P(x|fair coin) / P(x|biased coin)) = Σ_{i=1..n} log2(p+(xi) / p−(xi)) = n – k log2 3

This gives us a threshold at which support favours one model over the other.

Page 14: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Computing Log-odds Ratio in Sliding Windows

x1x2x3x4x5x6x7x8…xn

Consider a sliding window over the outcome sequence and compute the log-odds ratio for that short window.

[Figure: the log-odds value plotted per window position; values above 0 mean the fair coin was most likely used, values below 0 mean the biased coin was most likely used.]

Disadvantages:
- the length of a CG-island is not known in advance
- different windows may classify the same position differently
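The sliding-window idea can be sketched in a few lines of Python. This is only an illustration: the window length w and the example toss sequence are assumptions, not values fixed by the slides.

```python
import math

# Emission probabilities from the Fair Bet Casino: P(Heads) for each coin.
P_HEADS_FAIR, P_HEADS_BIASED = 0.5, 0.75

def log_odds(window):
    """log2( P(window | fair) / P(window | biased) ) for a 0/1 window."""
    score = 0.0
    for x in window:
        p_fair = P_HEADS_FAIR if x == 1 else 1 - P_HEADS_FAIR
        p_biased = P_HEADS_BIASED if x == 1 else 1 - P_HEADS_BIASED
        score += math.log2(p_fair / p_biased)
    return score

def classify_by_windows(x, w=8):
    """Log-odds score of each length-w window; > 0 suggests fair, < 0 suggests biased."""
    return [log_odds(x[i:i + w]) for i in range(len(x) - w + 1)]

# Example: a run of mostly heads in the middle scores as 'biased'.
tosses = [0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0]
print([round(s, 2) for s in classify_by_windows(tosses)])
```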

Page 15: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Hidden Markov Model (HMM)

• Can be viewed as an abstract machine with k hidden states that emits symbols from an alphabet Σ.

• Each hidden state has its own probability distribution, and the machine switches between states according to this probability distribution.

• While in a certain state, the machine makes two decisions:
  1. What state should I move to next?
  2. What symbol - from the alphabet Σ - should I emit?

Page 16: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Why “Hidden”?

• Observers can see the emitted symbols of an HMM but have no ability to know which state the HMM is currently in.

• Thus, the goal is to infer the most likely hidden states of an HMM based on the given sequence of emitted symbols.

Page 17: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

HMM Parameters

• Σ: set of emission characters.
  Ex.: Σ = {H, T} for coin tossing
       Σ = {1, 2, 3, 4, 5, 6} for dice tossing

• Q: set of hidden states, each emitting symbols from Σ.
  Q = {F, B} for coin tossing

Page 18: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

HMM Parameters (cont’d)

• A = (akl): a |Q| x |Q| matrix of the probability of changing from state k to state l.

  aFF = 0.9   aFB = 0.1
  aBF = 0.1   aBB = 0.9

• E = (ek(b)): a |Q| x |Σ| matrix of the probability of emitting symbol b while being in state k.

  eF(0) = ½   eF(1) = ½
  eB(0) = ¼   eB(1) = ¾

Page 19: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

HMM for Fair Bet Casino

• The Fair Bet Casino in HMM terms:
• Σ = {0, 1}: 0 for Tails and 1 for Heads
• Q = {F, B}: F for the Fair and B for the Biased coin.

Transition Probabilities A
          Fair        Biased
Fair      aFF = 0.9   aFB = 0.1
Biased    aBF = 0.1   aBB = 0.9

Emission Probabilities E
          Tails (0)   Heads (1)
Fair      eF(0) = ½   eF(1) = ½
Biased    eB(0) = ¼   eB(1) = ¾
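For concreteness, the same parameters can be written down directly in Python; the variable names below are purely illustrative:

```python
# Fair Bet Casino HMM written out explicitly (names are illustrative only).
states = ["F", "B"]            # Q: Fair, Biased
alphabet = [0, 1]              # Sigma: 0 = Tails, 1 = Heads

# Transition probabilities A = (a_kl)
A = {
    "F": {"F": 0.9, "B": 0.1},
    "B": {"F": 0.1, "B": 0.9},
}

# Emission probabilities E = (e_k(b))
E = {
    "F": {0: 0.5,  1: 0.5},
    "B": {0: 0.25, 1: 0.75},
}

# Sanity check: every row of A and E sums to 1.
assert all(abs(sum(A[k].values()) - 1) < 1e-9 for k in states)
assert all(abs(sum(E[k].values()) - 1) < 1e-9 for k in states)
```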

Page 20: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

HMM for Fair Bet Casino (cont’d)

HMM model for the Fair Bet Casino Problem

Page 21: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Hidden Paths

• A path π = π1…πn in the HMM is defined as a sequence of states.
• Consider the path π = FFFBBBBBFFF and the sequence x = 01011101001

x            0    1    0    1    1    1    0    1    0    0    1
π            F    F    F    B    B    B    B    B    F    F    F
P(xi|πi)     ½    ½    ½    ¾    ¾    ¾    ¼    ¾    ½    ½    ½
P(πi-1→πi)   ½   9/10 9/10 1/10 9/10 9/10 9/10 9/10 1/10 9/10 9/10

P(πi-1→πi): transition probability from state πi-1 to state πi
P(xi|πi): probability that xi was emitted from state πi

Page 22: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

P(x|π) Calculation

• P(x|π): probability that sequence x was generated by the path π:

Page 23: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

P(x|π) Calculation

• P(x|π): probability that sequence x was generated by the path π:

  P(x|π) = Π_{i=1..n} P(xi|πi) · P(πi-1 → πi) = Π_{i=1..n} eπi(xi) · aπi-1,πi

  where aπ0,π1 denotes the probability of the initial state π1.
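A minimal sketch that evaluates P(x|π) for the example path from the Hidden Paths slide, assuming a uniform initial state distribution (½ for each coin):

```python
A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}   # transitions
E = {"F": {0: 0.5, 1: 0.5}, "B": {0: 0.25, 1: 0.75}}         # emissions

def path_probability(x, path, initial={"F": 0.5, "B": 0.5}):
    """P(x|pi): product over i of a_{pi_{i-1},pi_i} * e_{pi_i}(x_i);
    the first transition factor comes from an assumed uniform initial distribution."""
    p = 1.0
    for i, (sym, state) in enumerate(zip(x, path)):
        trans = initial[state] if i == 0 else A[path[i - 1]][state]
        p *= trans * E[state][sym]
    return p

x = [0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
pi = list("FFFBBBBBFFF")
print(path_probability(x, pi))   # a very small number, as expected
```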

Page 24: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem

• Goal: Find an optimal hidden path of states given the observations.

• Input: A sequence of observations x = x1…xn generated by an HMM M(Σ, Q, A, E).

• Output: A path that maximizes P(x|π) over all possible paths π.

Page 25: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Building Manhattan for the Decoding Problem

• Andrew Viterbi used the Manhattan grid model to solve the Decoding Problem.
• Every choice of π = π1…πn corresponds to a path in the graph.
• The only valid direction in the graph is eastward.
• This graph has |Q|²(n-1) edges (remember Q is the set of states; n is the length of the sequence).

Page 26: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Edit Graph for Decoding Problem

Page 27: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem vs. Alignment Problem

Valid directions in the alignment problem.

Valid directions in the decoding problem.

Page 28: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem as Finding a Longest Path in a DAG

• The Decoding Problem is reduced to finding a longest path in the directed acyclic graph (DAG) above.

• Note: the length of a path is defined as the product of its edges’ weights, not the sum.

Page 29: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem (cont’d)

• Every path in the graph has probability P(x|π).

• The Viterbi algorithm finds the path that maximizes P(x|π) among all possible paths.

• The Viterbi algorithm runs in O(n|Q|²) time.

Page 30: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem: weights of edges

w

The weight w is given by: ???

(k, i) (l, i+1)

Page 31: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem: weights of edges

w

The weight w is given by: ??

(k, i) (l, i+1)

Page 32: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem: weights of edges

w

The weight w is given by: ?

(k, i) (l, i+1)

i-th term =

Page 33: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem: weights of edges

w

The weight is w = el(xi+1) · akl

(k, i) (l, i+1)

i-th term = eπi+1(xi+1) · aπi,πi+1 = el(xi+1) · akl for πi = k, πi+1 = l

Page 34: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem and Dynamic Programming

sl,i+1 = maxk Є Q { sk,i · (weight of edge between (k,i) and (l,i+1)) }

       = maxk Є Q { sk,i · akl · el(xi+1) }

       = el(xi+1) · maxk Є Q { sk,i · akl }

Page 35: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Decoding Problem (cont’d)

• Initialization:
  sbegin,0 = 1
  sk,0 = 0 for k ≠ begin.

• Let π* be the optimal path. Then

  P(x|π*) = maxk Є Q { sk,n · ak,end }

Page 36: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Viterbi Algorithm

• The value of the product can become extremely small, which leads to underflow.

• To avoid underflow, use log values instead:

  sl,i+1 = log(el(xi+1)) + maxk Є Q { sk,i + log(akl) }
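A compact, hedged sketch of the Viterbi algorithm in log space for the Fair Bet Casino parameters; the function and variable names are illustrative, and the initial distribution is assumed uniform:

```python
import math

def viterbi(x, states, A, E, init):
    """Most probable state path (log-space Viterbi).
    x: observation sequence; A, E, init: transition, emission, initial probabilities."""
    n = len(x)
    log = math.log
    s = [{k: log(init[k]) + log(E[k][x[0]]) for k in states}]   # scores for position 0
    back = [{}]
    for i in range(1, n):
        s.append({})
        back.append({})
        for l in states:
            best_k = max(states, key=lambda k: s[i - 1][k] + log(A[k][l]))
            back[i][l] = best_k
            s[i][l] = log(E[l][x[i]]) + s[i - 1][best_k] + log(A[best_k][l])
    # Traceback from the best final state.
    last = max(states, key=lambda k: s[n - 1][k])
    path = [last]
    for i in range(n - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))

A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {0: 0.5, 1: 0.5}, "B": {0: 0.25, 1: 0.75}}
print(viterbi([0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1],
              ["F", "B"], A, E, {"F": 0.5, "B": 0.5}))
```

The double loop over positions and state pairs makes the O(n|Q|²) running time from the previous slide explicit.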

Page 37: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Forward-Backward Problem

• Given: a sequence of coin tosses generated by an HMM.

• Goal: find the probability that the dealer was using a biased coin at a particular time.

Page 38: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Forward Algorithm

• Define fk,i (the forward probability) as the probability of emitting the prefix x1…xi and reaching the state πi = k.

• The recurrence for the forward algorithm:

  fk,i = ek(xi) · Σl Є Q fl,i-1 · alk

Page 39: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Backward Algorithm

• However, the forward probability is not the only factor affecting P(πi = k|x).

• The sequence of transitions and emissions that the HMM undergoes between πi+1 and πn also affects P(πi = k|x).

[Diagram: the forward probability covers x1…xi and the backward probability covers xi+1…xn, meeting at position i.]

Page 40: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Backward Algorithm (cont’d)

• Define the backward probability bk,i as the probability of being in state πi = k and emitting the suffix xi+1…xn.

• The recurrence for the backward algorithm:

  bk,i = Σl Є Q el(xi+1) · bl,i+1 · akl

Page 41: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Backward-Forward Algorithm

P(x) is the sum of P(x, πi = k) over all k.

• The probability that the dealer used a biased coin at any moment i:

  P(πi = B | x) = P(x, πi = B) / P(x) = fB,i · bB,i / P(x)
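The forward, backward, and posterior computations sketched in Python (probabilities are kept unscaled for clarity, so this is only suitable for short sequences; in practice one would scale or work in log space):

```python
def forward(x, states, A, E, init):
    """f[i][k] = P(x_1..x_{i+1}, pi = k) with 0-based positions."""
    f = [{k: init[k] * E[k][x[0]] for k in states}]
    for i in range(1, len(x)):
        f.append({k: E[k][x[i]] * sum(f[i - 1][l] * A[l][k] for l in states)
                  for k in states})
    return f

def backward(x, states, A, E):
    """b[i][k] = P(x_{i+2}..x_n | pi = k) with 0-based positions."""
    n = len(x)
    b = [dict() for _ in range(n)]
    b[n - 1] = {k: 1.0 for k in states}
    for i in range(n - 2, -1, -1):
        b[i] = {k: sum(A[k][l] * E[l][x[i + 1]] * b[i + 1][l] for l in states)
                for k in states}
    return b

def posterior(x, states, A, E, init):
    """P(pi_i = k | x) = f_{k,i} * b_{k,i} / P(x)."""
    f, b = forward(x, states, A, E, init), backward(x, states, A, E)
    px = sum(f[-1][k] for k in states)
    return [{k: f[i][k] * b[i][k] / px for k in states} for i in range(len(x))]

A = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
E = {"F": {0: 0.5, 1: 0.5}, "B": {0: 0.25, 1: 0.75}}
post = posterior([0, 1, 1, 1, 1, 0, 0], ["F", "B"], A, E, {"F": 0.5, "B": 0.5})
print([round(p["B"], 3) for p in post])   # probability the biased coin was used at each position
```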

Page 42: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Finding Distant Members of a Protein Family

• A distant cousin of functionally related sequences in a protein family may have weak pairwise similarities with each member of the family and thus fail a significance test.

• However, it may have weak similarities with many members of the family.

• The goal is to align a sequence to all members of the family at once.

• A family of related proteins can be represented by their multiple alignment and the corresponding profile.

Page 43: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Profile Representation of Protein Families

• Aligned DNA sequences can be represented by a 4·n profile matrix reflecting the frequencies of nucleotides in every aligned position.

• A protein family can be represented by a 20·n profile representing the frequencies of amino acids.

Page 44: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Profiles and HMMs

• HMMs can also be used for aligning a sequence against a profile representing a protein family.

• A 20·n profile P corresponds to n sequentially linked match states M1,…,Mn in the profile HMM of P.

Page 45: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Multiple Alignments and Protein Family Classification

• Multiple alignment of a protein family usually shows variations in conservation along the length of a protein

• Example: after aligning many globin proteins, biologists recognized that the helix regions in globins are more conserved than the others.

Page 46: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

What are Profile HMMs ?

• A Profile HMM is a probabilistic representation of a multiple alignment.

• A given multiple alignment (of a protein family) is used to build a profile HMM.

• This model then may be used to find and score less obvious potential matches of new protein sequences.

Page 47: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Profile HMM

A profile HMM

Page 48: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Building a profile HMM

• Multiple alignment is used to construct the HMM model.

• Assign each column to a Match state in the HMM. Add Insertion and Deletion states.

• Estimate the emission probabilities according to the amino acid counts in each column. Different positions in the protein will have different emission probabilities.

• Estimate the transition probabilities between the Match, Deletion and Insertion states.

• The HMM model gets trained to derive the optimal parameters.
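As an illustration of the emission-probability step, a small sketch that estimates match-state emission frequencies from one alignment column, with a pseudocount so unseen residues keep a nonzero probability (the pseudocount value of 1.0 and the toy alignment are assumptions):

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def column_emissions(alignment, col, pseudocount=1.0):
    """Estimate e_Mj(a) for one alignment column from amino-acid counts,
    adding a small pseudocount so unseen residues keep nonzero probability."""
    counts = Counter(seq[col] for seq in alignment if seq[col] != "-")
    total = sum(counts.values()) + pseudocount * len(AMINO_ACIDS)
    return {a: (counts[a] + pseudocount) / total for a in AMINO_ACIDS}

alignment = ["ACD-E",
             "ACE-E",
             "GCD-E"]
print(column_emissions(alignment, 0))   # mostly A, some G, the rest small
```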

Page 49: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

States of Profile HMM

• Match states M1…Mn (plus begin/end states)

• Insertion states I0 I1…In

• Deletion states D1…Dn

Page 50: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Transition Probabilities in Profile HMM

• log(aMI) + log(aIM) = gap initiation penalty

• log(aII) = gap extension penalty

Page 51: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Emission Probabilities in Profile HMM

• Probability of emitting a symbol a at an insertion state Ij:

  eIj(a) = p(a)

where p(a) is the frequency of the occurrence of the symbol a in all the sequences (as we have nothing else to go on).

Page 52: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Profile HMM Alignment

• Define vMj(i) as the logarithmic likelihood score of the best path for matching x1…xi to the profile HMM, ending with xi emitted by the state Mj.

• vIj(i) and vDj(i) are defined similarly (ending with an insertion or deletion in the sequence x).

Page 53: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Profile HMM Alignment: Dynamic Programming

• vMj(i) = log(eMj(xi)/p(xi)) + max { vMj-1(i-1) + log(aMj-1,Mj),
                                      vIj-1(i-1) + log(aIj-1,Mj),
                                      vDj-1(i-1) + log(aDj-1,Mj) }

• vIj(i) = log(eIj(xi)/p(xi)) + max { vMj(i-1) + log(aMj,Ij),
                                      vIj(i-1) + log(aIj,Ij),
                                      vDj(i-1) + log(aDj,Ij) }

Page 54: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Paths in Edit Graph and Profile HMM

• A path through an edit graph and the corresponding path through a profile HMM

Page 55: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Making a Collection of HMM for Protein Families

• Use BLAST to separate a protein database into families of related proteins.

• Construct a multiple alignment for each protein family.

• Construct a profile HMM model and optimize the parameters of the model (transition and emission probabilities).

• Align the target sequence against each HMM to find the best fit between a target sequence and an HMM.

Page 56: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Application of Profile HMM to Modelling Globin Proteins

• Globins represent a large collection of protein sequences.

• 400 globin sequences were randomly selected from all globins and used to construct a multiple alignment.

• The multiple alignment was used to assign an initial HMM.

• This model then gets trained repeatedly, with model lengths chosen randomly between 145 and 170, to get an HMM with optimized probabilities.

Page 57: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

How Good is the Globin HMM?

• The 625 remaining globin sequences in the database were aligned to the constructed HMM, resulting in a multiple alignment. This multiple alignment agrees extremely well with the structurally derived alignment.

• 25,044 proteins were randomly chosen from the database and compared against the globin HMM.

• This experiment resulted in an excellent separation between globin and non-globin families.

Page 58: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

PFAM (http://pfam.sanger.ac.uk/)

• Pfam describes protein domains.
• Each protein domain family in Pfam has:
  - a Seed alignment: a manually verified multiple alignment of a representative set of sequences;
  - an HMM built from the seed alignment for further database searches;
  - a Full alignment generated automatically from the HMM.
• The distinction between seed and full alignments facilitates Pfam updates.
  - Seed alignments are stable resources.
  - HMM profiles and full alignments can be updated with newly found amino acid sequences.

Page 59: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

PFAM Uses

• Pfam HMMs span entire domains that include both well-conserved motifs and less-conserved regions with insertions and deletions.

• This results in modelling complete domains, which facilitates better sequence annotation and leads to more sensitive detection.

Page 60: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

HMM Parameter Estimation

• So far, we have assumed that the transition and emission probabilities are known.

• However, in most HMM applications the probabilities are not known, and they are very hard to estimate.

Page 61: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides

part 2: estimation of HMM parameters

Page 62: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

HMM Parameter Estimation Problem

• Given
  • an HMM with states and alphabet (emission characters)
  • independent training sequences x1, …, xm

• Find
  • HMM parameters Θ (that is, akl, ek(b)) that maximize P(x1, …, xm | Θ), the joint probability of the training sequences.

Page 63: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Maximize the likelihood

• P(x1, …, xm | Θ), as a function of Θ, is called the likelihood of the model.
• The training sequences are assumed independent, therefore
  P(x1, …, xm | Θ) = Πi P(xi | Θ)
• The parameter estimation problem seeks the Θ that maximizes this likelihood.
• In practice the log likelihood is computed to avoid underflow errors.

Page 64: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Two situations

• Known paths for training sequences, e.g.,
  • CpG islands marked on the training sequences
  • One evening the casino dealer allows us to see when he changes dice

• Unknown paths, e.g.,
  • CpG islands are not marked
  • We do not see when the casino dealer changes dice

Page 65: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Known paths

• Let Akl = # of times each k → l transition is taken in the training sequences.
• Ek(b) = # of times b is emitted from state k in the training sequences.
• Compute akl and ek(b) as maximum likelihood estimators:

  akl = Akl / Σl' Akl'        ek(b) = Ek(b) / Σb' Ek(b')
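A small sketch of these maximum likelihood estimators when the state paths are known; the function name and data layout are assumptions for illustration:

```python
from collections import defaultdict

def estimate_parameters(sequences, paths):
    """Count transitions A_kl and emissions E_k(b) from labelled training data,
    then normalise each row to get the maximum likelihood estimates."""
    A_counts = defaultdict(lambda: defaultdict(float))
    E_counts = defaultdict(lambda: defaultdict(float))
    for x, pi in zip(sequences, paths):
        for i, (sym, state) in enumerate(zip(x, pi)):
            E_counts[state][sym] += 1
            if i + 1 < len(pi):
                A_counts[state][pi[i + 1]] += 1
    a = {k: {l: c / sum(row.values()) for l, c in row.items()}
         for k, row in A_counts.items()}
    e = {k: {b: c / sum(row.values()) for b, c in row.items()}
         for k, row in E_counts.items()}
    return a, e

a, e = estimate_parameters([[0, 1, 1, 1, 0]], [list("FFBBF")])
print(a, e)
```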

Page 66: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Pseudocounts

• Some state k may not appear in any of the training sequences. This means Akl = 0 for every state l, and akl cannot be computed with the given equation.

• To avoid this overfitting we use predetermined pseudocounts rkl and rk(b):
  • Akl = (# of k → l transitions) + rkl
  • Ek(b) = (# of emissions of b from k) + rk(b)

• The pseudocounts reflect our prior biases about the probability values.

Page 67: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Unknown paths: Viterbi training

• Idea: use Viterbi decoding to compute the most probable path for each training sequence x.
• Start with some guess for the initial parameters and compute π*, the most probable path for x, using the initial parameters.
• Iterate until there is no change in π*:
  • Determine Akl and Ek(b) as before.
  • Compute new parameters akl and ek(b) using the same formulae as before.
  • Compute a new π* for x and the current parameters.
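The Viterbi training loop itself is short. The sketch below is only a skeleton: the decode and reestimate arguments stand in for a Viterbi decoder and a known-path estimator (such as the ones sketched earlier), and are passed in rather than being fixed names.

```python
def viterbi_training(sequences, states, A, E, init, decode, reestimate, max_iter=50):
    """Iterate: decode the most probable paths with the current parameters,
    then re-estimate the parameters from those paths, until the paths stop changing.
    `decode` and `reestimate` are caller-supplied routines (assumptions of this sketch)."""
    prev_paths = None
    for _ in range(max_iter):
        paths = [decode(x, states, A, E, init) for x in sequences]
        if paths == prev_paths:          # no change in pi* -> converged
            break
        A, E = reestimate(sequences, paths)
        prev_paths = paths
    return A, E
```

In use, one would pass decode=viterbi and reestimate=estimate_parameters from the earlier sketches.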

Page 68: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Viterbi training analysis

• The algorithm converges precisely:
  • There are finitely many possible paths.
  • New parameters are uniquely determined by the current π*.
  • There may be several paths for x with the same probability, hence we must compare the new π* with all previous paths having the highest probability.

• It does not maximize the likelihood Πx P(x | Θ) but the contribution to the likelihood of the most probable path, Πx P(x | Θ, π*).

• In general it performs less well than Baum-Welch.

Page 69: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Unknown paths: Baum-Welch

• The general idea:
  1. Guess initial values for the parameters.
     • “art” and experience, not science
  2. Estimate new (better) values for the parameters.
     • how?
  3. Repeat until stopping criteria are met.
     • what criteria?

Page 70: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Better values for parameters

• We would need the Akl and Ek(b) values but we cannot count transitions (the path is unknown) and do not want to use a most probable path.

• For all states k, l, each symbol b, and each training sequence x, we compute Akl and Ek(b) as expected values, given the current parameters.

Page 71: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Notation

• For any sequence of characters x emitted along some unknown path π, denote by “πi = k” the assumption that the state at position i (in which xi is emitted) is k.

Page 72: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Probabilistic setting for Ak,l

• Given x1, …, xm, consider a discrete probability space with elementary events
  εk,l = “k → l is taken in x1, …, xm”
• For each x in {x1,…,xm} and each position i in x, let Yx,i be a random variable defined by
  Yx,i = 1 if πi = k and πi+1 = l, and 0 otherwise.
• Define Y = Σx Σi Yx,i, the random variable that counts the # of times the event εk,l happens in x1,…,xm.

Page 73: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

The meaning of Akl

• Let Akl be the expectation of Y:

  E(Y) = Σx Σi E(Yx,i) = Σx Σi P(Yx,i = 1) = Σx Σi P({εk,l | πi = k and πi+1 = l}) = Σx Σi P(πi = k, πi+1 = l | x)

• We need to compute P(πi = k, πi+1 = l | x).

Page 74: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Probabilistic setting for Ek(b)

• Given x1, …, xm, consider a discrete probability space with elementary events
  εk,b = “b is emitted in state k in x1, …, xm”
• For each x in {x1,…,xm} and each position i in x, let Yx,i be a random variable defined by
  Yx,i = 1 if xi = b and πi = k, and 0 otherwise.
• Define Y = Σx Σi Yx,i, the random variable that counts the # of times the event εk,b happens in x1,…,xm.

Page 75: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

The meaning of Ek(b)

• Let Ek(b) be the expectation of Y:

  E(Y) = Σx Σi E(Yx,i) = Σx Σi P(Yx,i = 1) = Σx Σi P({εk,b | xi = b and πi = k})

• We need to compute P(πi = k | x).

Page 76: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Computing new parameters

• Consider a training sequence x = x1…xn.
• Concentrate on positions i and i+1.
• Use the forward-backward values:
  fk,i = P(x1 … xi, πi = k)
  bk,i = P(xi+1 … xn | πi = k)

Page 77: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Compute Akl (1)

• Probability that k → l is taken at position i of x:
  P(πi = k, πi+1 = l | x1…xn) = P(x, πi = k, πi+1 = l) / P(x)

• Compute P(x) using either the forward or the backward values.
• We’ll show that P(x, πi = k, πi+1 = l) = bl,i+1 · el(xi+1) · akl · fk,i

• Expected # of times k → l is used in the training sequences:
  Akl = Σx Σi (bl,i+1 · el(xi+1) · akl · fk,i) / P(x)

Page 78: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Compute Akl (2)

P(x, πi = k, πi+1 = l)
  = P(x1…xi, πi = k, πi+1 = l, xi+1…xn)
  = P(πi+1 = l, xi+1…xn | x1…xi, πi = k) · P(x1…xi, πi = k)
  = P(πi+1 = l, xi+1…xn | πi = k) · fk,i
  = P(xi+1…xn | πi = k, πi+1 = l) · P(πi+1 = l | πi = k) · fk,i
  = P(xi+1…xn | πi+1 = l) · akl · fk,i
  = P(xi+2…xn | xi+1, πi+1 = l) · P(xi+1 | πi+1 = l) · akl · fk,i
  = P(xi+2…xn | πi+1 = l) · el(xi+1) · akl · fk,i
  = bl,i+1 · el(xi+1) · akl · fk,i

Page 79: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Compute Ek(b)

• Probability that xi of x is emitted in state k:
  P(πi = k | x1…xn) = P(πi = k, x1…xn) / P(x)

  P(πi = k, x1…xn) = P(x1…xi, πi = k, xi+1…xn)
    = P(xi+1…xn | x1…xi, πi = k) · P(x1…xi, πi = k)
    = P(xi+1…xn | πi = k) · fk,i = bk,i · fk,i

• Expected # of times b is emitted in state k:
  Ek(b) = Σx Σ{i: xi = b} (fk,i · bk,i) / P(x)
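Putting the two formulas together, a sketch of one sequence's contribution to the expected counts (the E-step of Baum-Welch); it assumes forward/backward tables f, b and P(x) computed as in the earlier forward-backward sketch:

```python
def expected_counts(x, states, A, E, f, b, px):
    """One training sequence's contribution to A_kl and E_k(b), using
    A_kl += b_{l,i+1} * e_l(x_{i+1}) * a_kl * f_{k,i} / P(x)  and
    E_k(b) += f_{k,i} * b_{k,i} / P(x).
    f, b are forward/backward tables and px = P(x) (e.g. from the earlier sketch)."""
    A_exp = {k: {l: 0.0 for l in states} for k in states}
    E_exp = {k: {} for k in states}
    for i in range(len(x)):
        for k in states:
            post = f[i][k] * b[i][k] / px                    # P(pi_i = k | x)
            E_exp[k][x[i]] = E_exp[k].get(x[i], 0.0) + post
            if i + 1 < len(x):
                for l in states:
                    A_exp[k][l] += f[i][k] * A[k][l] * E[l][x[i + 1]] * b[i + 1][l] / px
    return A_exp, E_exp
```

Normalising each row of A_exp and E_exp (the next slide) then gives the new parameters.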

Page 80: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Finally, new parameters

  akl = Akl / Σl' Akl'        ek(b) = Ek(b) / Σb' Ek(b')

• Can add pseudocounts as before.

Page 81: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Stopping criteria

• We cannot actually reach the maximum (optimization of continuous functions).

• Therefore we need stopping criteria:
  • Compute the log likelihood of the model for the current Θ.
  • Compare with the previous log likelihood.
  • Stop if the difference is small.
  • Stop after a certain number of iterations.

Page 82: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

The Baum-Welch algorithm

• Initialization:
  Pick the best guess for the model parameters (or arbitrary values).

• Iteration:
  1. Run the forward algorithm for each x.
  2. Run the backward algorithm for each x.
  3. Calculate Akl and Ek(b).
  4. Calculate new akl and ek(b).
  5. Calculate the new log-likelihood.
  Repeat until the log-likelihood does not change much.

Page 83: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Baum-Welch analysis

• The log-likelihood is increased by the iterations.
• Baum-Welch is a particular case of the EM (expectation maximization) algorithm.
• Convergence is to a local maximum. The choice of initial parameters determines which local maximum the algorithm converges to.

Page 84: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Speech Recognition

• Create an HMM of the words in a language.
  • Each word is a hidden state in Q.
  • Each of the basic sounds in the language is a symbol in Σ.

• Input: use speech as the input sequence.
• Goal: find the most probable sequence of states.

Page 85: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Speech Recognition: Building the Model

• Analyze some large source of English sentences, such as a database of newspaper articles, to form probability matrices.

• A0i: the chance that word i begins a sentence.

• Aij: the chance that word j follows word i.

Page 86: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Building the Model (cont’d)

• Analyze English speakers to determine what sounds are emitted with what words.

• Ek(b): the chance that sound b is spoken in word k. This allows for alternate pronunciations of words.

Page 87: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Speech Recognition: Using the Model

• Use the same dynamic programming algorithm as before.
• Weave the spoken sounds through the model the same way we wove the rolls of the die through the casino model.
• π represents the most likely set of words.

Page 88: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Using the Model (cont’d)

• How well does it work?
• Common words, such as ‘the’, ‘a’, ‘of’, make prediction less accurate, since there are so many words that normally follow them.

Page 89: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

Improving Speech Recognition

• Initially, we were using a ‘bigram’, a graph connecting every two words.
• Expand that to a ‘trigram’:
  • Each state represents two words spoken in succession.
  • Each edge joins those two words (A B) to another state representing (B C).
  • Requires on the order of n² vertices and n³ edges, where n is the number of words in the language.
• Much better, but still limited context.

Page 90: COMP3456 – adapted from textbook slides  Hidden Markov Models

COMP3456 – adapted from textbook slides www.bioalgorithms.info

References

• Slides for CS 262 course at Stanford given by Serafim Batzoglou