Stochastic Petri Net
Serge Haddad
LSV
ENS Paris-Saclay & CNRS & Inria
Petri Nets 2019, June 24th 2019
1 Stochastic Petri Net
2 Markov Chain
3 Markovian Stochastic Petri Net
4 Generalized Markovian Stochastic Petri Net (GSPN)
5 Product-form Petri Nets
Outline
1 Stochastic Petri Net
Markov Chain
Markovian Stochastic Petri Net
Generalized Markovian Stochastic Petri Net (GSPN)
Product-form Petri Nets
Stochastic Petri Net versus Time Petri Net
In TPN, the delays are non-deterministically chosen.
In a Stochastic Petri Net (SPN), the delays are randomly chosen by sampling distributions associated with transitions.
... but these distributions are not sufficient to eliminate non-determinism.
Policies for a net. One needs to define:
The choice policy. What is the next transition to fire?
The service policy. What is the influence of the enabling degree of a transition on the process?
The memory policy. What becomes of the samplings of distributions that have not been used?
Choice Policy
In the net, associate a distribution Di and a weight wi with every transition ti.
Preselection w.r.t. a marking m and enabled transitions Tm
Normalize the weights wi of the enabled transitions: w'i ≡ wi / Σ_{tj∈Tm} wj
Sample the distribution defined by the w'i's.
Let ti be the selected transition; sample Di, giving the value di.
versus
Race policy with postselection w.r.t. a marking m
For every ti ∈ Tm, sample Di, giving the value di.
Let T' be the subset of Tm with the smallest delays.
Normalize the weights wi of the transitions of T': w'i ≡ wi / Σ_{tj∈T'} wj
Sample the distribution defined by the w'i's, yielding some ti.
Priorities between transitions could be added to refine the selection.
Choice Policy: Illustration
Three transitions: t1(D1,w1), t2(D2,w2), t3(D3,w3) with w1 = 1, w2 = 2, w3 = 2.

Preselection
Sample (1/5, 2/5, 2/5); outcome t1.
Sample D1; outcome 4.2.

Race Policy
Sample (D1, D2, D3); outcome (3.2, 6.5, 3.2).
Sample (1/3, -, 2/3); outcome t3.
Server Policy
A transition t can be viewed as a server for firings:
A single server t allows a single instance of firing in m if m[t〉.
An infinite server t allows d instances of firings in m,
where d = min(⌊m(p)/Pre(p,t)⌋ | p ∈ •t) is the enabling degree.
A multiple server t with bound b allows min(b, d) instances of firings in m.

Example: a net with transitions t1, t2, t3, where t3 has enabling degree 3.
t3 single server: sample (D1, D2, D3)
t3 infinite server: sample (D1, D2, D3, D3, D3)
t3 2-server: sample (D1, D2, D3, D3)
This can be generalised by marking-dependent services.
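The enabling degree and the server policies above can be sketched in Python; the dictionary encodings of the marking and of Pre are illustrative choices, not from the slides:

```python
def enabling_degree(marking, pre_t):
    """d = min(floor(m(p) / Pre(p, t)) for p in the preset of t).
    pre_t maps each input place of t to Pre(p, t)."""
    return min(marking[p] // w for p, w in pre_t.items())

def allowed_instances(marking, pre_t, policy="infinite", bound=None):
    """Number of firing instances the server policy allows in marking m."""
    d = enabling_degree(marking, pre_t)
    if policy == "single":
        return min(1, d)
    if policy == "multiple":
        return min(bound, d)
    return d  # infinite server
```

For instance, with m(p) = 7 and Pre(p, t) = 2, the enabling degree is 3, so a 2-server allows 2 firing instances.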
Memory Policy (1)
Net with transitions t1, t2, t3; t1 fires first since d1 < min(d2, d3).
What happens to d2 and d3?

Resampling Memory
Every sampling not used is forgotten.
This could correspond to a "crash" transition.
Memory Policy (2)
Net with transitions t1, t2, t3; t1 fires first since d1 < min(d2, d3).
What happens to d2 and d3?

Enabling Memory
The samplings associated with still enabled transitions are kept and decremented (d'3 = d3 − d1).
The samplings associated with disabled transitions are forgotten (like d2).
Disabling a transition could correspond to aborting a service.
Memory Policy (3)
Net with transitions t1, t2, t3; t1 fires first since d1 < min(d2, d3).
What happens to d2 and d3?

Age Memory
All the samplings are kept and decremented (d'3 = d3 − d1 and d'2 = d2 − d1).
The sampling associated with a disabled transition is frozen until the transition becomes enabled again (like d'2).
Disabling a transition could correspond to suspending a service.
Memory Policy (4)
Specification of memory policy
To be fully expressive, it should be defined w.r.t. any pair of transitions.
Net with transitions t1 (with enabling degree 3) and t2; t2 fires first since d2 < min(d1, d1', d1'').
What happens to d1, d1' and d1''?
Interaction between memory policy and service policy
Assume enabling memory for t1 when firing t2, and infinite server policy for t1. Which sample should be forgotten?
The last sample performed,
The first sample performed,
The greatest sample, etc.
Warning: This choice may have a critical impact on the complexity of analysis.
Outline
Stochastic Petri Net
2 Markov Chain
Markovian Stochastic Petri Net
Generalized Markovian Stochastic Petri Net (GSPN)
Product-form Petri Nets
Discrete Time Markov Chain (DTMC)
A DTMC is a stochastic process which fulfills:
For all n, Tn is the constant 1.
The process is memoryless:
Pr(Sn+1 = sj | S0 = si0, ..., Sn−1 = sin−1, Sn = si) = Pr(Sn+1 = sj | Sn = si) ≡ P[i,j]
A DTMC is defined by S0 and P.

Example: a three-state chain with transition matrix
      0.3 0.7 0.0
P =   0.0 0.0 1.0
      0.2 0.8 0.0
Analysis: the State Status

The transient analysis is easy and effective in the finite case:
πn = π0 · Pn with πn the distribution of Sn
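This transient computation can be sketched in pure Python, reusing the three-state example chain:

```python
def transient(pi0, P, n):
    """pi_n = pi_0 · P^n, computed by n vector-matrix products."""
    pi = list(pi0)
    for _ in range(n):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

# Three-state chain from the DTMC example
P = [[0.3, 0.7, 0.0],
     [0.0, 0.0, 1.0],
     [0.2, 0.8, 0.0]]
pi3 = transient([1.0, 0.0, 0.0], P, 3)  # distribution of S_3
```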
The steady-state analysis (∃? limn→∞ πn) requires theoretical developments.
Classification of states w.r.t. the asymptotic behaviour of the DTMC
A state is transient if the probability of a return after a visit is less than one.
Hence the probability of its occurrence will go to zero. (p < 1/2)
A state is recurrent null if the probability of a return after a visit is one
but the mean time of this return is infinite.
Hence the probability of its occurrence will go to zero. (p = 1/2)
A state is recurrent non-null if the probability of a return after a visit is one
and the mean time of this return is finite. (p > 1/2)

Example: the birth-death chain over states 0, 1, 2, 3, ..., moving right with probability p and left with probability 1−p.
State Status in Finite DTMC

In a finite DTMC:
The status of a state only depends on the graph associated with the chain.
A state is transient iff it belongs to a non-terminal strongly connected component (scc) of the graph.
A state is recurrent non-null iff it belongs to a terminal scc.
Example: an eight-state chain whose graph has transient states T = {1,2,3} and terminal sccs C1 = {4,5} and C2 = {6,7,8}.
Analysis: Irreducibility and Periodicity

A chain is irreducible if its graph is strongly connected.
The periodicity of an irreducible chain is the greatest integer p such that:
the set of states can be partitioned into p subsets S0, ..., Sp−1
where every transition goes from some Si to S(i+1) mod p.
Computation of the periodicity
Assign a height to every state by a breadth-first traversal of the graph (heights 0 to 4 in the example); every arc then induces the value height(source) + 1 − height(target), and the periodicity is the gcd of these values.
Example outcome: periodicity = gcd(0, 2, 4) = 2.
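The height-and-gcd computation can be sketched as follows (pure Python; the adjacency-list encoding of the graph is an assumption):

```python
from collections import deque
from math import gcd

def periodicity(graph):
    """Periodicity of a strongly connected graph: assign BFS heights,
    then take the gcd over all arcs (u, v) of height(u) + 1 - height(v)."""
    root = next(iter(graph))
    height = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in height:
                height[v] = height[u] + 1
                queue.append(v)
    p = 0
    for u in graph:
        for v in graph[u]:
            p = gcd(p, abs(height[u] + 1 - height[v]))
    return p
```

A two-cycle has periodicity 2; adding a self-loop anywhere forces periodicity 1.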
Analysis of a DTMC: a Particular Case
A particular case
The chain is irreducible and aperiodic (i.e. its periodicity is 1)
π∞ ≡ lim_{n→∞} πn exists and its value is independent of π0.
π∞ is the unique solution of X = X · P ∧ X · 1 = 1,
where one can omit an arbitrary equation of the first system.
Example (three-state chain): π1 = 0.3π1 + 0.2π2, π2 = 0.7π1 + 0.8π3, π3 = π2,
whose solution is π∞ = (1/8, 7/16, 7/16).
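Since the chain is irreducible and aperiodic, the fixed point can also be reached numerically by power iteration; a minimal sketch:

```python
def steady_state(P, steps=200):
    """Iterate pi <- pi · P; converges for an irreducible aperiodic chain."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.3, 0.7, 0.0],
     [0.0, 0.0, 1.0],
     [0.2, 0.8, 0.0]]
pi = steady_state(P)  # approaches (1/8, 7/16, 7/16)
```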
Analysis of a DTMC: the “General” Case
Almost general case: every terminal scc is aperiodic
π∞ exists.
π∞ = Σ_{s∈S} π0(s) · Σ_{i∈I} preach_i[s] · πi∞ where:
1 S is the set of states,
2 {Ci}_{i∈I} is the set of terminal sccs,
3 πi∞ is the steady-state distribution of Ci,
4 and preach_i[s] is the probability to reach Ci starting from s.

Computation of the reachability probability for transient states
Let T be the set of transient states (i.e. not belonging to a terminal scc).
Let PT,T be the submatrix of P restricted to transient states.
Let PT,i be the submatrix of P of transitions from T to Ci.
Then preach_i = (Σ_{n∈N} (PT,T)^n) · PT,i · 1 = (Id − PT,T)^{−1} · PT,i · 1
Illustration: SCC and Matrices
The same eight-state chain with T = {1,2,3}, C1 = {4,5} and C2 = {6,7,8} gives:

         0.0 0.7 0.0             0.3              0.0
PT,T =   0.1 0.0 0.8   PT,1·1 =  0.0    PT,2·1 =  0.1
         0.0 0.2 0.0             0.4              0.4
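With these matrices, preach_i = (Id − PT,T)^{-1} · PT,i · 1 can be computed by fixpoint iteration, a sketch; the iteration x ← b + PT,T·x converges because the states of T are transient:

```python
def absorption_probs(P_TT, b, steps=500):
    """Solve x = b + P_TT · x, i.e. x = (Id - P_TT)^{-1} · b."""
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        x = [b[i] + sum(P_TT[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

P_TT = [[0.0, 0.7, 0.0],
        [0.1, 0.0, 0.8],
        [0.0, 0.2, 0.0]]
preach1 = absorption_probs(P_TT, [0.3, 0.0, 0.4])  # b = P_{T,1} · 1
preach2 = absorption_probs(P_TT, [0.0, 0.1, 0.4])  # b = P_{T,2} · 1
# from every transient state, preach1 + preach2 = 1
```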
Continuous Time Markov Chain (CTMC)

A CTMC is a stochastic process which fulfills:
Memoryless state change
Pr(Sn+1 = sj | S0 = si0, ..., Sn−1 = sin−1, T0 < τ0, ..., Tn < τn, Sn = si)
= Pr(Sn+1 = sj | Sn = si) ≡ P[i,j]
Memoryless transition delay
Pr(Tn < τ | S0 = si0, ..., Sn−1 = sin−1, T0 < τ0, ..., Tn−1 < τn−1, Sn = si)
= Pr(Tn < τ | Sn = si) = 1 − e^{−λiτ}

Notations and properties
P defines an embedded DTMC (the chain of state changes).
Let π(τ) be the distribution of X(τ); for δ going to 0 it holds that:
π(τ + δ)(si) ≈ π(τ)(si)(1 − λiδ) + Σ_j π(τ)(sj)λjδP[j,i]
Hence, let Q be the infinitesimal generator defined by:
Q[i,j] ≡ λiP[i,j] for j ≠ i and Q[i,i] ≡ −Σ_{j≠i} Q[i,j]
Then: dπ/dτ = π · Q
The exponential distribution

Let F be defined by: F(τ) = 1 − e^{−λτ}.
Then F is the exponential distribution with rate λ > 0.

The exponential distribution is memoryless.
Let X be a random variable with a λ-exponential distribution.
Pr(X > τ' | X > τ) = Pr(X > τ')/Pr(X > τ) = e^{−λτ'}/e^{−λτ} = e^{−λ(τ'−τ)} = Pr(X > τ' − τ)

The minimum of exponential distributions is an exponential distribution.
Let Y be independent from X with a µ-exponential distribution.
Pr(min(X,Y) > τ) = e^{−λτ}e^{−µτ} = e^{−(λ+µ)τ}

The minimal variable is selected proportionally to its rate.
Pr(X < Y) = ∫_0^∞ Pr(Y > τ) FX{dτ} = ∫_0^∞ e^{−µτ} λe^{−λτ} dτ = λ/(λ+µ)
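Both race properties are easy to check by simulation; a Monte Carlo sketch with assumed rates λ = 1 and µ = 2:

```python
import random

random.seed(42)
lam, mu = 1.0, 2.0
N = 200_000
# count the races won by X against an independent Y
wins = sum(random.expovariate(lam) < random.expovariate(mu) for _ in range(N))
estimate = wins / N  # should be close to lam / (lam + mu) = 1/3
```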
Convoluting the exponential distribution

The nth convolution of a distribution F is defined by:
F^{n⋆} ≝ F ⋆ · · · ⋆ F (n times)
Let fn (resp. Fn) be the density (resp. distribution) of the nth convolution of the λ-exponential distribution. Then:
fn(x) = λe^{−λx}(λx)^{n−1}/(n−1)!   and   Fn(x) = 1 − e^{−λx} Σ_{0≤m<n} (λx)^m/m!
Sketch of proof
Recall that f1(x) = λe^{−λx}.
fn+1(x) = ∫_0^x fn(x−u) f1(u) du = ∫_0^x λe^{−λ(x−u)} (λ(x−u))^{n−1}/(n−1)! · λe^{−λu} du
        = λe^{−λx} ∫_0^x λ(λ(x−u))^{n−1}/(n−1)! du = λe^{−λx} (λx)^n/n!
Deduce Fn+1 by:
d/dx (1 − e^{−λx} Σ_{0≤m≤n} (λx)^m/m!)
= e^{−λx} (λ Σ_{0≤m≤n} (λx)^m/m! − Σ_{0≤m≤n−1} λ(λx)^m/m!) = fn+1(x)
CTMC: Illustration and Uniformization
Example: a CTMC with embedded chain P (rows (0.3, 0.7, 0.0), (0.0, 0.0, 1.0), (0.2, 0.8, 0.0)) and rates λ = (5, 2, 1), hence a generator Q.

A uniform version of the CTMC (equivalent w.r.t. the states): choose a uniform rate λ' ≥ max_i λi (here λ' = 10) and set P' = Id + Q/λ', which adds self-loops (P'[1,1] = 0.65, P'[2,2] = 0.8, P'[3,3] = 0.9).
Analysis of a CTMC
Transient Analysis
Construction of a uniform version (λ, P) of the CTMC such that P[i,i] > 0 for all i.
Computation by case decomposition w.r.t. the number of transitions:
π(τ) = π(0) Σ_{n∈N} e^{−λτ} (λτ)^n/n! · P^n
Steady-state analysis
The steady-state distribution of visits is given by the steady-state distribution of (λ, P) (by construction, the terminal sccs are aperiodic) ...
equal to the steady-state distribution since the sojourn times follow the same distribution.

A particular case: P irreducible
The steady-state distribution π is the unique solution of X · Q = 0 ∧ X · 1 = 1,
where one can omit an arbitrary equation of the first system.
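The uniformization series for π(τ) can be sketched in pure Python; the three-state uniform chain with rate λ' = 10 used here is assumed from the earlier illustration:

```python
import math

def transient_ctmc(pi0, P, lam, tau, eps=1e-10):
    """pi(tau) = sum_n e^{-lam*tau} (lam*tau)^n / n! * (pi0 · P^n),
    truncated when the remaining Poisson mass drops below eps."""
    m = len(P)
    pi = list(pi0)                 # current value of pi0 · P^n
    weight = math.exp(-lam * tau)  # Poisson(lam*tau) mass at n = 0
    acc = [weight * x for x in pi]
    total, n = weight, 0
    while 1.0 - total > eps:
        n += 1
        pi = [sum(pi[i] * P[i][j] for i in range(m)) for j in range(m)]
        weight *= lam * tau / n
        total += weight
        acc = [acc[j] + weight * pi[j] for j in range(m)]
    return acc

# Uniform version assumed from the earlier illustration (lam' = 10)
P_u = [[0.65, 0.35, 0.0],
       [0.0, 0.8, 0.2],
       [0.02, 0.08, 0.9]]
pi_tau = transient_ctmc([1.0, 0.0, 0.0], P_u, 10.0, 0.5)
```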
Outline
Stochastic Petri Net
Markov Chain
3 Markovian Stochastic Petri Net
Generalized Markovian Stochastic Petri Net (GSPN)
Product-form Petri Nets
Markovian Stochastic Petri Net
Hypotheses
The distribution of every transition ti is exponential with rate λi (density λi e^{−λiτ}).
For simplicity reasons, the server policy is single server.

First observations
The weights for the choice policy are no longer required, since the equality of two samples has null probability (due to the continuity of the distributions).
The residual delay dj − di of transition tj, knowing that ti has fired (i.e. di is the shortest delay), has the same distribution as the initial delay. Thus the memory policy is irrelevant.
Markovian Net and Markov Chain

Key observation: given a marking m with Tm = {t1, ..., tk}
The sojourn time in m is an exponential distribution with rate Σi λi.
The probability that ti is the next transition to fire is λi / (Σj λj).
Thus the stochastic process is a CTMC whose states are markings and whose transitions are the transitions of the reachability graph.
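This key observation gives a one-step simulator for Markovian SPNs; a sketch with illustrative rate values:

```python
import random

def next_step(rates, rng):
    """Sojourn ~ Exp(sum of rates); the firing transition is drawn
    with probability rate / total (exponential race property)."""
    total = sum(rates.values())
    sojourn = rng.expovariate(total)
    u, acc = rng.random() * total, 0.0
    for t, lam in rates.items():
        acc += lam
        if u <= acc:
            return t, sojourn
    return t, sojourn  # guard against floating-point rounding

rng = random.Random(7)
counts = {"t1": 0, "t2": 0}
for _ in range(100_000):
    t, _ = next_step({"t1": 3.0, "t2": 1.0}, rng)
    counts[t] += 1
# t1 should win about 3/4 of the races
```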
Example: a net with transitions t1, ..., t6; its reachability graph, with arcs labelled by the rates λ1, ..., λ6, directly yields the CTMC.
Outline
Stochastic Petri Net
Markov Chain
Markovian Stochastic Petri Net
4 Generalized Markovian Stochastic Petri Net (GSPN)
Product-form Petri Nets
Generalizing Distributions for Nets
Modelling delays with exponential distributions is reasonable when:
Only mean value information is known about the distributions.
Exponential distributions (or combinations of them) are enough to approximate the "real" distributions.

Modelling delays with exponential distributions is not reasonable when:
The distribution of an event is known and is poorly approximable with exponential distributions:
a time-out of 10 time units.
The delays of the events have different orders of magnitude:
executing an instruction versus performing a database request.

In the last case, the 0-Dirac distribution is required.
Generalized Markovian Stochastic Petri Net (GSPN)

Generalized Markovian Stochastic Petri Nets (GSPN) are nets whose:
timed transitions have exponential distributions,
and immediate transitions have 0-Dirac distributions.

Their analysis is based on the Markovian Renewal Process, a generalization of Markov chains.
Markovian Renewal Process
A Markovian Renewal Process (MRP) fulfills a relative memoryless property:
Pr(Sn+1 = sj, Tn < τ | S0 = si0, ..., Sn−1 = sin−1, T0 < τ0, ..., Sn = si)
= Pr(Sn+1 = sj, Tn < τ | Sn = si) ≡ Q[i,j,τ]
The embedded chain is defined by: P[i,j] = lim_{τ→∞} Q[i,j,τ]
The sojourn time Soj has a distribution defined by:
Pr(Soj[i] < τ) = Σj Q[i,j,τ]
Analysis of a MRP
The steady-state distribution π (if it exists) is deduced from the steady-state distribution π' of the embedded chain by:
π(si) = π'(si)E(Soj[i]) / Σj π'(sj)E(Soj[j])
Transient analysis is much harder ... but the reachability probabilities only depend on the embedded chain.
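The conversion from the embedded chain's distribution π' to π is a one-liner; a sketch with assumed values:

```python
def mrp_steady_state(pi_embedded, mean_sojourn):
    """pi(s_i) = pi'(s_i) E(Soj[i]) / sum_j pi'(s_j) E(Soj[j])."""
    weighted = [p * s for p, s in zip(pi_embedded, mean_sojourn)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Two states visited equally often, but state 0 is occupied twice as long
pi = mrp_steady_state([0.5, 0.5], [2.0, 1.0])  # -> [2/3, 1/3]
```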
A GSPN is a Markovian Renewal Process

Observations
Weights are required for immediate transitions.
The restricted reachability graph corresponds to the embedded DTMC.

Example: a net with timed transitions t1, t4, t5, t6 and immediate transitions t2, t3; its restricted reachability graph distinguishes tangible markings (where time elapses) from vanishing markings (left immediately).
Steady-State Analysis of a GSPN (1)
Standard method for MRP
Build the restricted reachability graph equivalent to the embedded DTMC.
Deduce the probability matrix P.
Compute π*, the steady-state distribution of the visits of markings: π* = π*P.
Compute π, the steady-state distribution of the sojourn in tangible markings:
π(m) = π*(m)Soj(m) / Σ_{m' tangible} π*(m')Soj(m')

How to eliminate the vanishing markings sooner in the computation?
Steady-State Analysis of a GSPN (2)

An alternative method
As before, compute the transition probability matrix P.
Compute the transition probability matrix P' between tangible markings.
Compute π'*, the (relative) steady-state distribution of the visits of tangible markings: π'* = π'*P'.
Compute π, the steady-state distribution of the sojourn in tangible markings:
π(m) = π'*(m)Soj(m) / Σ_{m' tangible} π'*(m')Soj(m')
Computation of P'
Let PX,Y be the probability transition matrix from subset X to subset Y.
Let V (resp. T) be the set of vanishing (resp. tangible) markings.
P' = PT,T + PT,V (Σ_{n∈N} P^n_{V,V}) PV,T = PT,T + PT,V (Id − PV,V)^{−1} PV,T
The iterative (resp. direct) computation uses the first (resp. second) expression.
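The iterative expression can be sketched in pure Python; the small two-tangible / one-vanishing example below is invented for illustration:

```python
def eliminate_vanishing(P_TT, P_TV, P_VV, P_VT, steps=200):
    """P' = P_TT + P_TV (Id - P_VV)^{-1} P_VT, with the inverse
    approximated by iterating N <- Id + P_VV · N."""
    nv = len(P_VV)
    N = [[float(i == j) for j in range(nv)] for i in range(nv)]
    for _ in range(steps):
        N = [[float(i == j) + sum(P_VV[i][k] * N[k][j] for k in range(nv))
              for j in range(nv)] for i in range(nv)]
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]
    C = mul(mul(P_TV, N), P_VT)
    return [[P_TT[i][j] + C[i][j] for j in range(len(P_TT))]
            for i in range(len(P_TT))]

# Two tangible markings, one vanishing marking (illustrative values)
P_prime = eliminate_vanishing([[0.0, 0.5], [0.3, 0.0]],
                              [[0.5], [0.7]],
                              [[0.0]],
                              [[0.6, 0.4]])
# P_prime is again a stochastic matrix over the tangible markings
```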
Steady-State Analysis: Illustration
Example: the vanishing markings are eliminated using the branching probabilities
p2 = w2/(w2+w3), p3 = w3/(w2+w3), p4 = λ4/(λ4+λ5), p5 = λ5/(λ4+λ5).
This yields transition probabilities between tangible markings such as (p2)², (p3)², 2p2p3, (p2)² + 2p5p2p3 and (p3)² + 2p4p2p3; the steady-state values are then obtained by weighting with the mean sojourn times (dividing by the exit rates λ1, λ4, λ5, λ6).
"c" and "d" are normalizing constants.
Outline
Stochastic Petri Net
Markov Chain
Markovian Stochastic Petri Net
Generalized Markovian Stochastic Petri Net (GSPN)
5 Product-form Petri Nets
Steady-State Analysis of a Queue
Client arrivals with rate λ and service time with rate μ: a birth-death chain over the states 0, 1, 2, 3, ...
A (Markovian) queue is a CTMC
Interarrival time: exponential distribution with parameter λ
Service time: exponential distribution with parameter µ
Let ρ = λ/µ be the utilization.
The steady-state distribution π∞ exists iff ρ < 1.
The probability of n clients in the queue is π∞(n) = ρ^n(1 − ρ).
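A numerical check of the geometric steady-state distribution (ρ = 0.8 is an assumed value):

```python
rho = 0.8  # utilization lambda/mu, assumed < 1

def mm1_prob(n):
    """Steady-state probability of n clients: rho^n * (1 - rho)."""
    return rho ** n * (1 - rho)

mass = sum(mm1_prob(n) for n in range(200))        # total probability, ~1
mean = sum(n * mm1_prob(n) for n in range(2000))   # mean population rho/(1-rho)
```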
Analysis of Two Queues in Tandem
Two queues in tandem: clients arrive at queue 1 with rate λ and are served with rate μ, then enter queue 2 and are served with rate δ. The state (n1, n2) records the number of clients in each queue.
Observation. The associated Markov chain is more complex than the one corresponding to two isolated queues. However ...
Assume ρ1 = λ/µ < 1 and ρ2 = λ/δ < 1.
The steady-state distribution π∞ exists.
The probability of n1 clients in queue 1 and n2 clients in queue 2 is
π∞(n1, n2) = ρ1^{n1}(1 − ρ1) ρ2^{n2}(1 − ρ2)
It is the product of the steady-state distributions corresponding to two isolated queues.
Analysis of an Open Queuing Network
An open network: arrivals with rate λ enter queue 1 (rate µ); after service a client goes to queue 2 (rate δ) with probability p or leaves with probability 1−p; after service in queue 2 it returns to queue 1 with probability q or leaves with probability 1−q.
In a steady state
Define the (input and output) flow through queue 1 (resp. 2) as γ1 (resp. γ2).
Then γ1 = λ + qγ2 and γ2 = pγ1. Thus γ1 = λ/(1 − pq) and γ2 = pλ/(1 − pq).
Assume ρ1 = γ1/µ < 1 and ρ2 = γ2/δ < 1.
The steady-state distribution π∞ exists.
The probability of n1 clients in queue 1 and n2 clients in queue 2 is
π∞(n1, n2) = ρ1^{n1}(1 − ρ1) ρ2^{n2}(1 − ρ2)
It is the product of the steady-state distributions corresponding to two isolated queues.
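The traffic equations can be solved in closed form or by fixpoint iteration; a sketch with assumed values λ = 1, p = q = 1/2:

```python
def flows(lam, p, q, iters=1000):
    """Fixpoint of g1 = lam + q*g2 and g2 = p*g1 (converges since p*q < 1)."""
    g1 = g2 = 0.0
    for _ in range(iters):
        g1, g2 = lam + q * g2, p * g1
    return g1, g2

g1, g2 = flows(1.0, 0.5, 0.5)
# closed form: g1 = lam/(1 - p*q) = 4/3, g2 = p*lam/(1 - p*q) = 2/3
```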
Analysis of a Closed Queuing Network
A closed network with n clients circulating among three queues with rates µ, δ and λ: after queue 1 (rate µ) a client goes to queue 2 (rate δ) with probability p, otherwise to queue 3 (rate λ); after queue 2 it returns to queue 1 with probability q, otherwise goes to queue 3; after queue 3 it returns to queue 1.
Visit ratios (up to a constant)
Define the visit ratio of queue i as vi.
Then v1 = v3 + qv2, v2 = pv1 and v3 = (1 − p)v1 + (1 − q)v2.
Thus v1 = 1, v2 = p and v3 = 1 − pq.
Define ρ1 = v1/µ, ρ2 = v2/δ and ρ3 = v3/λ.
The steady-state probability of ni clients in queue i is
π∞(n1, n2, n3) = (1/G) ρ1^{n1} ρ2^{n2} ρ3^{n3} (with n1 + n2 + n3 = n),
where G, the normalizing constant, can be efficiently computed by dynamic programming.
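The normalizing constant G can be computed by the classical convolution recurrence (Buzen's algorithm); a sketch with illustrative ρ values:

```python
def closed_network_G(rhos, n):
    """G = sum over n1+...+nk = n of prod_i rho_i^{n_i}, computed by the
    recurrence G(m, k) = G(m, k-1) + rho_k * G(m-1, k)."""
    g = [1.0] + [0.0] * n  # g[m] starts as G(m, 0)
    for rho in rhos:
        for m in range(1, n + 1):
            g[m] += rho * g[m - 1]  # g[m-1] already equals G(m-1, k)
    return g[n]

G = closed_network_G([1.0, 0.5, 0.75], 3)  # illustrative ratio/rate values
```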
Queuing Networks and Petri Nets
Observations
A (single client class) queuing network can easily be represented by a Petri net.
Such a Petri net is a state machine: every transition has at most a single input and a single output place.

Can we define a more general subclass of Petri nets with a product form for the steady-state distribution?
Product Form Stochastic Petri Nets (PFSPN)

Example: a net with transitions t1, ..., t7 and places p1, ..., p6.

Principles
Transitions can be partitioned into subsets corresponding to several classes of clients with their specific activities.
Places model resources shared between the clients.
Client states are implicitly represented.
Bags and Transitions in PFSPN

The resource graph
The vertices are the input and the output bags of the transitions.
Every transition t of the net yields a graph transition •t →t t•.
Client classes correspond to the connected components of the graph.

First requirement: The connected components of the graph must be strongly connected.
Witnesses in PFSPN

Example: the vector −p2−p3 is a witness for the bag p1+p4:
(−p2−p3)·W(t3) = 1
(−p2−p3)·W(t1) = −1
(−p2−p3)·W(t) = 0 for every other t
where W is the incidence matrix.

Witness for a bag b
Let In(b) (resp. Out(b)) be the transitions with input (resp. output) bag b.
Let v be a place vector; v is a witness for b if:
∀t ∈ In(b): v · W(t) = −1 (where W(t) is the incidence of t)
∀t ∈ Out(b): v · W(t) = 1
∀t ∉ In(b) ∪ Out(b): v · W(t) = 0

Second requirement: Every bag must have a witness.
Steady-State Distributions of PFSPN

Example: in the net, the reachability space is characterized by:
m(p1) + m(p2) + m(p3) = 2
m(p4) + m(p5) + m(p6) = m(p1) + 1

Steady-state distribution
Assume the requirements are fulfilled, with w(b) the witness for bag b.
Compute the visit ratios of bags v(b) on the resource graph.
The output rate of a bag b is µ(b) = Σ_{t | •t = b} µ(t), with µ(t) the rate of t.
Then: π∞(m) = (1/G) Π_b (v(b)/µ(b))^{w(b)·m}
Observation. The normalizing constant can be efficiently computed if the reachability space is characterized by linear place invariants.
Some References

M. Ajmone Marsan, G. Balbo, G. Conte, S. Donatelli, G. Franceschinis.
Modelling with Generalized Stochastic Petri Nets.
Wiley Series in Parallel Computing, John Wiley & Sons, 1995.
Freely available on the Web site of GreatSPN.

S. Haddad, P. Moreaux.
Chapter 7: Stochastic Petri Nets.
Petri Nets: Fundamental Models and Applications, Wiley, pp. 269-302, 2009.

S. Haddad, P. Moreaux, M. Sereno, M. Silva.
Product-form and stochastic Petri nets: a structural approach.
Performance Evaluation, 59: 313-336, 2005.

S. Haddad, J. Mairesse, H.-T. Nguyen.
Synthesis and Analysis of Product-form Petri Nets.
Fundamenta Informaticae, 122(1-2), pages 147-172, 2013.