Markov Chains
Brian Carrico
The Mathematical Markovs
Vladimir Andreyevich Markov (1871–1897)
Andrey Markov's younger brother
With Andrey, developed the Markov brothers' inequality
Andrey Andreyevich Markov Jr (1903–1979)
Andrey Markov's son
One of the key founders of the Russian school of constructive mathematics and logic
Also made contributions to differential equations, topology, mathematical logic, and the foundations of mathematics
Which brings us to:
Andrey Andreyevich Markov (Андрей Андреевич Марков)
June 14, 1856 – July 20, 1922
Born in Ryazan (roughly 170 miles southeast of Moscow)
Began grammar school in 1866
Started at St Petersburg University in 1874
Defended his master's thesis in 1880
Doctoral thesis in 1885
Excommunicated from the Russian Orthodox Church
Precursors to Markov Chains
Bernoulli Series
Brownian Motion
Random Walks
Bernoulli Series
Jakob Bernoulli (1654–1705)
A sequence of independent random variables X1, X2, X3, ... such that:
For every i, Xi is either 0 or 1
For every i, P(Xi = 1) is the same
Markov's first discussion of chains, in a 1906 paper, considers only chains with two states
Closely related to random walks
Brownian Motion
Described as early as 60 BC by the Roman poet Lucretius
Observed and described systematically by botanist Robert Brown in 1827
The seemingly random movement of particles suspended in a fluid
Random Walks
Formalized in 1905 by Karl Pearson
The formalization of a trajectory that consists of taking successive random steps
The results of random walk analysis have been applied to computer science, physics, ecology, economics, and a number of other fields as a fundamental model for random processes in time
Turns out to be a specific Markov chain
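A simple symmetric walk on the integers shows why: the next position depends only on the current position, never on how the walker got there. A minimal illustrative sketch (step count and seed are arbitrary):

```python
import random

def random_walk(n_steps, rng):
    """Symmetric random walk on the integers: from position x,
    move to x + 1 or x - 1 with equal probability. The next
    position depends only on the current one, which is exactly
    the Markov property."""
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice([-1, 1])
        path.append(position)
    return path

rng = random.Random(7)
path = random_walk(10, rng)
print(path)
```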
So what is a Markov Chain?
A random process where all information about the future is contained in the present state
Or less formally: a process where future states depend only on the present state, and are independent of past states
Mathematically:
P(Xn+1 = x | Xn = xn, Xn-1 = xn-1, ..., X1 = x1) = P(Xn+1 = x | Xn = xn)
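This property can be demonstrated with a minimal sketch of a two-state chain, like the chains Markov first considered in 1906. The transition probabilities here are illustrative, not from any particular source:

```python
import random

# Transition probabilities for a two-state chain (states 0 and 1).
# P[s] is the probability of moving to state 1 from state s;
# these particular numbers are made up for illustration.
P = {0: 0.3, 1: 0.6}

def step(state, rng):
    """Take one step: the next state depends only on the current state."""
    return 1 if rng.random() < P[state] else 0

rng = random.Random(42)
state = 0
visits = [0, 0]
for _ in range(100_000):
    state = step(state, rng)
    visits[state] += 1

# The long-run fraction of time in state 1 approaches the stationary
# value p = P[0] / (1 + P[0] - P[1]) = 0.3 / 0.7 ≈ 0.43.
print(visits[1] / sum(visits))
```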
Applications of Markov Chains
Science
Statistics
Economics and Finance
Gambling and games of chance
Baseball
Monte Carlo
Science
Physics
Thermodynamic systems generally have time-invariant dynamics
All relevant information is in the state description
Chemistry
An algorithm based on a Markov chain was used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products
Economics and Finance
Markov chains are used to model a variety of phenomena, including asset prices and market crashes
Regime-switching model of James D. Hamilton
Markov Switching Multifractal asset pricing model
Dynamic macroeconomics
Gambling and Games of Chance
In most card games each hand is independent
Board games like Snakes and Ladders
Baseball
Use of Markov chain models in baseball analysis began in 1960
Each at bat can be modeled as a step in a Markov chain
Monte Carlo
Markov chain Monte Carlo (MCMC) methods run a Markov chain for a large number of steps to generate samples from a target distribution, forming the basis of many Monte Carlo simulations
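A minimal sketch of the idea, using the Metropolis algorithm (one standard MCMC method; the target distribution, step size, and seed here are illustrative choices, not from the slides):

```python
import math
import random

def metropolis(n_samples, rng, step_size=1.0):
    """Metropolis sampler for a standard normal target.

    Each proposed move is accepted with probability
    min(1, target(new) / target(old)); the resulting sequence
    is a Markov chain whose long-run distribution is the target.
    """
    target = lambda x: math.exp(-0.5 * x * x)  # unnormalized N(0, 1)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step_size, step_size)
        if rng.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

rng = random.Random(0)
samples = metropolis(100_000, rng)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # close to 0 and 1 for a standard normal target
```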
Statistics
Many important statistics measure sequences of independent trials; independent trials are a simple special case of a Markov chain
An Example from Statistics
A thief is in a dungeon with three identical doors. Once the thief chooses a door and passes through it, the door locks behind him. The three doors lead to:
A 6 hour tunnel leading to freedom
A 3 hour tunnel that returns to the dungeon
A 9 hour tunnel that returns to the dungeon
Each door is chosen with equal probability. When he is dropped back into the dungeon by the second or third door, his next choice of door is memoryless. He isn't able to mark the doors in any way. What is his expected time of escape?
Note:
E(X) = x1*p(x1) + x2*p(x2) + x3*p(x3), where xi is the expected escape time given that door i is chosen first
Example (cont)
We plug the values in for xi and p(xi) to get:
E(X) = 6*(1/3) + x2*(1/3) + x3*(1/3)
But what are x2 and x3?
Because the choice is memoryless, the expected time remaining after returning from tunnel 2 or 3 is the same as the original expected time E(X). Each of those routes also costs the tunnel time itself, so x2 = 3 + E(X) and x3 = 9 + E(X).
So,
E(X) = 6*(1/3) + (3 + E(X))*(1/3) + (9 + E(X))*(1/3)
Now we're back in Algebra 1:
E(X) = 6 + (2/3)*E(X), so E(X) = 18 hours
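The answer is easy to check by simulation. A short sketch (trial count and seed are arbitrary):

```python
import random

def escape_time(rng):
    """Simulate one thief: pick doors uniformly at random until
    the 6-hour tunnel to freedom is chosen; each failed pick adds
    its tunnel time before a fresh, memoryless choice."""
    total = 0.0
    while True:
        door = rng.randrange(3)
        if door == 0:
            return total + 6   # tunnel leading to freedom
        elif door == 1:
            total += 3         # 3-hour tunnel back to the dungeon
        else:
            total += 9         # 9-hour tunnel back to the dungeon

rng = random.Random(1)
trials = 100_000
avg = sum(escape_time(rng) for _ in range(trials)) / trials
print(avg)  # close to the analytic answer of 18 hours
```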
Sources
Wikipedia
Basharin, Gely P., et al. The Life and Work of A.A. Markov. http://decision.csl.illinois.edu/~meyn/pages/Markov-Work-and-life.pdf
Leemis (2009), Probability