Markov Chains & Randomized Algorithms
BY HAITHAM FALLATAH
FOR COMP4804
Outline
History of Markov chains
The Markov property
Markov chain example
MC models and applications
Analysis of a randomized algorithm using MC
History of the Markov chain
Andrey Markov was a Russian mathematician who lived in the 19th and early 20th centuries
He was a poor student in every subject except mathematics
He was a student of Chebyshev
He was motivated to work on the model later known as the Markov chain for two reasons:
1. To show that Chebyshev's approach to extending the Law of Large Numbers to sums of dependent random variables could be improved
2. He had a rivalry with another Russian mathematician, Nekrasov, who claimed that only independent events can converge on predictable distributions
The Markov property
Markov disproved Nekrasov’s claim by coming up with a model that has states and transitions.
Pick a ball at random: if it is Blue, draw the next ball from Bin 1; otherwise, draw from Bin 2
The probabilities converge on a predictable distribution
Bin 1 (1:1 Red:Blue): draw Blue with probability 1/2 (next draw from Bin 1), Red with probability 1/2 (next draw from Bin 2)
Bin 2 (2:1 Red:Blue): draw Blue with probability 1/3 (next draw from Bin 1), Red with probability 2/3 (next draw from Bin 2)
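The convergence Markov demonstrated can be checked numerically. A minimal sketch in pure Python, iterating the two-bin chain's distribution; the (2/5, 3/5) limiting split is computed here, not stated on the slide:

```python
# Iterate the two-bin chain's distribution. The transition matrix is read
# off the example above: from Bin 1 draw Blue (1/2) or Red (1/2); from
# Bin 2 draw Blue (1/3) or Red (2/3); Blue sends the next draw to Bin 1.
P = [[1/2, 1/2],    # from Bin 1: to Bin 1, to Bin 2
     [1/3, 2/3]]    # from Bin 2: to Bin 1, to Bin 2

def advance(dist, P):
    """One step of the chain: multiply the distribution row vector by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]       # start from Bin 1 with certainty
for _ in range(50):     # iterate until the distribution settles
    dist = advance(dist, P)

print(dist)             # settles at the stationary distribution (2/5, 3/5)
```

The starting point does not matter: beginning from Bin 2 instead yields the same limit, which is the point of Markov's argument against Nekrasov.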
Markov Chain Example
Transition matrix P:
          Sunny  Cloudy  Rainy  Snowy
Sunny      0.7    0.3    0      0
Cloudy     0.2    0.2    0.4    0.2
Rainy      0      0.3    0.5    0.2
Snowy      0      0.2    0.1    0.7

P0, the initial state (a Sunny day):
Sunny 1, Cloudy 0, Rainy 0, Snowy 0

Distribution after 100 steps (generation 100), P0 · P^100:
Sunny 0.161, Cloudy 0.242, Rainy 0.26, Snowy 0.335
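The generation-100 row can be reproduced in a few lines; a sketch using repeated vector-matrix multiplication (pure Python, no libraries assumed):

```python
# Reproduce the P^100 computation for the weather chain above.
P = [[0.7, 0.3, 0.0, 0.0],   # Sunny
     [0.2, 0.2, 0.4, 0.2],   # Cloudy
     [0.0, 0.3, 0.5, 0.2],   # Rainy
     [0.0, 0.2, 0.1, 0.7]]   # Snowy

def advance(dist, P):
    """Advance a distribution row vector one step: dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0, 0.0]   # P0: start from a Sunny day
for _ in range(100):
    dist = advance(dist, P)

print([round(p, 3) for p in dist])   # ≈ [0.161, 0.242, 0.261, 0.335]
```

By 100 steps the chain has effectively reached its stationary distribution, so any starting day produces the same row.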
Markov Models & Applications
Discrete-time Markov chain (DTMC)
A chain of steps between states in which the move from one state to the next depends only on the current state:
Pr(Xn+1 = x | X1 = x1, X2 = x2, ..., Xn = xn) = Pr(Xn+1 = x | Xn = xn)
This process is used in simulations and in analyzing randomized algorithms.
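The defining equation is visible directly in code: a simulator samples the next state using only the current state's row, ignoring all earlier history. A minimal sketch, reusing the weather chain from the earlier example:

```python
import random

# A minimal DTMC sampler. Note that the next state is drawn using only the
# current state's row of the transition matrix -- history beyond X_n is
# never consulted, which is exactly the Markov property.
STATES = ["Sunny", "Cloudy", "Rainy", "Snowy"]
P = {"Sunny":  [0.7, 0.3, 0.0, 0.0],
     "Cloudy": [0.2, 0.2, 0.4, 0.2],
     "Rainy":  [0.0, 0.3, 0.5, 0.2],
     "Snowy":  [0.0, 0.2, 0.1, 0.7]}

def simulate(start, steps, rng):
    """Sample a trajectory X0, X1, ..., X_steps."""
    path = [start]
    for _ in range(steps):
        weights = P[path[-1]]   # depends only on the current state
        path.append(rng.choices(STATES, weights=weights)[0])
    return path

path = simulate("Sunny", 10, rng=random.Random(42))
print(path)
```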
Markov Models & Applications
Continuous-time Markov chain (CTMC)
Similar to the DTMC model, except that discrete steps have no meaning in continuous time, so the Markov property must hold for all future times instead of just one step. This model is best suited to modeling biological and physical systems because of their continuous-time nature.
Used in the construction of phylogenies (evolutionary trees).
Markov Models & Applications
Hidden Markov model (HMM)
A model in which the "real" Markov chain is hidden; instead, another set of states is observed, each depending on the state of the hidden chain. This is one of the most powerful Markov models because it can fill in the blanks in problems where not all the information is given.
This model is used in pattern, speech, and handwriting recognition.
[Diagram: hidden chain Z1 → Z2 → Z3 → ... → Zn, with each observation Xi (X1, X2, X3, ..., Xn) depending only on the hidden state Zi]
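How an HMM "fills in the blanks" is easiest to see in the forward algorithm, which computes the probability of an observation sequence by summing over every possible hidden path. A sketch on a toy two-state HMM; all parameters below are made up for illustration, and a brute-force enumeration is included as a check:

```python
from itertools import product

# Toy HMM: two hidden states, two observation symbols. All numbers here
# are illustrative assumptions, not taken from the slides.
start = [0.6, 0.4]       # P(Z1)
trans = [[0.7, 0.3],     # P(Z_{t+1} | Z_t)
         [0.4, 0.6]]
emit  = [[0.9, 0.1],     # P(X_t | Z_t)
         [0.2, 0.8]]

def forward(obs):
    """P(X_1..X_T = obs), summed over hidden paths in O(T * states^2)."""
    alpha = [start[z] * emit[z][obs[0]] for z in range(2)]
    for x in obs[1:]:
        alpha = [sum(alpha[z] * trans[z][z2] for z in range(2)) * emit[z2][x]
                 for z2 in range(2)]
    return sum(alpha)

def brute_force(obs):
    """Same probability by enumerating all 2^T hidden paths (exponential)."""
    total = 0.0
    for path in product(range(2), repeat=len(obs)):
        p = start[path[0]] * emit[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= trans[path[t - 1]][path[t]] * emit[path[t]][obs[t]]
        total += p
    return total

obs = [0, 1, 1, 0]
print(forward(obs))   # agrees with brute_force(obs)
```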
Analysis of a Randomized Algorithm
One example of a randomized algorithm that can be analyzed with the Markov chain model is a randomized algorithm for the 2-SAT problem.
The 2-SAT problem consists of n Boolean variables and m clauses in 2-conjunctive normal form. The problem is solved when every clause evaluates to true, making the overall formula true. 2-SAT is in P (polynomial), as many algorithms solve it in polynomial time.
The Problem
We are given an algorithm claimed to find a satisfying assignment in O(n²) steps. Our job is to verify this claim independently using a Markov chain.
The algorithm runs as follows (pseudocode):
Repeat the following 2mn² times:
  Select a clause that is not satisfied (i.e., false) at random
  Select one of the two variables in that clause at random and flip its value
  If the formula is satisfied, return true
Otherwise, report that the formula is unsatisfiable.
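The pseudocode above can be sketched concretely. A literal is represented as (variable index, is_positive) and a clause as a pair of literals; the 3-clause instance below is illustrative, not from the slides:

```python
import random

# Sketch of the random-flip 2-SAT algorithm. Example formula:
# (x1 or x2) and (not x1 or x3) and (not x2 or not x3).
def satisfied(clause, assign):
    return any(assign[v] == positive for v, positive in clause)

def random_2sat(clauses, n, rng):
    """Run the random-flip walk for 2*m*n^2 steps (m = number of clauses)."""
    m = len(clauses)
    assign = [False] * n                        # arbitrary starting assignment
    for _ in range(2 * m * n * n):
        unsat = [c for c in clauses if not satisfied(c, assign)]
        if not unsat:
            return assign                       # every clause is already true
        var, _ = rng.choice(rng.choice(unsat))  # random unsatisfied clause, random literal
        assign[var] = not assign[var]           # flip that variable's value
    if all(satisfied(c, assign) for c in clauses):
        return assign
    return None                                 # report "probably unsatisfiable"

clauses = [[(0, True), (1, True)],        # x1 or x2
           [(0, False), (2, True)],       # not x1 or x3
           [(1, False), (2, False)]]      # not x2 or not x3
result = random_2sat(clauses, n=3, rng=random.Random(1))
print(result)
```

On a satisfiable instance the walk can return None only if it is unlucky for all 2mn² steps; the analysis below bounds that probability.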
Example of Algorithm Execution
[Animated slide: n = 4 variables, 5 clauses. Starting from all variables set to false (X1 = X2 = X3 = X4 = False), each step picks an unsatisfied clause at random and flips the value of one of its two variables: clause 3 is picked and X1 is flipped, then clause 4 and X3, then clause 2 and X3 again, with the clause truth values updated after each flip.]
Analysis
Let S = a satisfying assignment of Boolean values for the n variables
Let Ai = the assignment of Boolean values after the i-th step of the algorithm
Let Xi = the number of variables in the current assignment Ai whose Boolean values agree with the values in S
If Xi = 0, any flip increases agreement with S, so Pr(making a correct move) = 1
If Xi = n, then Ai = S and we are done
Analysis
If 0 < Xi ≤ n - 1:
Then the chosen unsatisfied clause is satisfied by S, so Ai and S disagree on at least one of its two variables. Hence
Pr(Xi+1 = j + 1 | Xi = j) ≥ 1/2
Pr(Xi+1 = j - 1 | Xi = j) ≤ 1/2
Because these are only inequalities (the exact probabilities depend on the clause chosen), the process X0, X1, ... is not itself a Markov chain.
However, consider the pessimistic chain Y0, Y1, ... on states 0, 1, ..., n with:
Y0 = X0
Pr(Yi+1 = 1 | Yi = 0) = 1
Pr(Yi+1 = j + 1 | Yi = j) = 1/2
Pr(Yi+1 = j - 1 | Yi = j) = 1/2
[Diagram: a line of states 0, 1, ..., j-1, j, j+1, ..., n, moving left or right with probability 1/2 from each interior state; n is absorbing]
The expected time for Y to reach n is an upper bound on the expected time for X to reach n.
Analysis
Let hj = the expected number of steps to reach state n from state j
h0 = h1 + 1
hn = 0
If 0 < j < n:
With probability 1/2 we end up in state j + 1, taking 1 + hj+1 steps
With probability 1/2 we end up in state j - 1, taking 1 + hj-1 steps
Analysis
Thus:
hj = 1 + (hj+1 + hj-1) / 2, for 0 < j < n
Analysis
The system of equations we have is:
hn = 0
hj = 1 + (hj+1 + hj-1) / 2, for 0 < j < n
h0 = h1 + 1
Using induction, we can get:
hj = hj+1 + 2j + 1
Analysis
Finally, we can substitute and get:
h0 = (2·0 + 1) + (2·1 + 1) + ... + (2(n-1) + 1) = n²
So the expected number of steps to reach a satisfying assignment is at most n². By Markov's inequality, a run of 2n² steps succeeds with probability at least 1/2, so 2mn² steps fail with probability at most 2^-m, confirming the O(n²) claim.
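The closed form can be sanity-checked numerically: a short sketch that builds hj from the induction step and verifies it against the original system of equations.

```python
# Build h from the induction step hj = hj+1 + 2j + 1 (with hn = 0) and
# check that it solves the boundary and interior equations above.
def hitting_times(n):
    h = [0] * (n + 1)                  # h[n] = 0
    for j in range(n - 1, -1, -1):
        h[j] = h[j + 1] + 2 * j + 1
    return h

n = 10
h = hitting_times(n)
assert h[0] == h[1] + 1                              # boundary: h0 = h1 + 1
assert all(h[j] == 1 + (h[j - 1] + h[j + 1]) / 2     # interior recurrence
           for j in range(1, n))
print(h[0])   # equals n**2
```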
References
Probability and Computing, by Michael Mitzenmacher and Eli Upfal (main reference)
https://www.coursera.org/
http://en.wikipedia.org/wiki/Markov_chain
Markov Models and Hidden Markov Models: A Brief Tutorial
Discrete Mathematics & Mathematical Reasoning, by Kousha Etessami
http://www.nimbios.org/tutorials/TT_stochastic_modeling_talks/NIMBioSTut_LJSA_2.pdf
https://www.youtube.com/watch?v=TPRoLreU9lA
https://www.youtube.com/watch?v=7KGdE2AK_MQ
Khan Academy