Stochastic Process Algebras and Ordinary Differential Equations
Jane Hillston
Laboratory for Foundations of Computer Science and Centre for Systems Biology at Edinburgh
University of Edinburgh
5th September 2011
Outline
1 Introduction: Stochastic Process Algebra
2 Continuous Approximation: State variables
3 Fluid-Flow Semantics: Fluid Structured Operational Semantics
4 Conclusions
Process Algebra
• Models consist of agents which engage in actions.

    α.P
    (α: action type or name; P: agent/component)

• The structured operational (interleaving) semantics of the language is used to generate a labelled transition system.

    Process algebra model --SOS rules--> Labelled transition system
Stochastic process algebras
Process algebras where models are decorated with quantitative information used to generate a stochastic process are stochastic process algebras (SPA).
Stochastic Process Algebra
• Models are constructed from components which engage in activities.

    (α, r).P
    (α: action type or name; r: activity rate, the parameter of an exponential distribution; P: component/derivative)

• The language is used to generate a Continuous Time Markov Chain (CTMC) for performance modelling.

    SPA model --SOS rules--> labelled multi-transition system --state transition diagram--> CTMC (Q)
Why use a process algebra?
• A high level description of the system eases the task of model construction.
• A formal language allows unambiguous interpretation and automatic translation into the underlying mathematical structure.
• Moreover, properties of that mathematical structure may be deduced from the construction at the process algebra level.
• Furthermore, formal reasoning techniques such as equivalence relations and model checking can be used to manipulate or interrogate models.
• Compositionality can be exploited both for model construction and (in some cases) for model analysis.
Performance Evaluation Process Algebra (PEPA)

    (α, f).P       Prefix
    P1 + P2        Choice
    P1 ⋈_L P2      Co-operation
    P/L            Hiding
    X              Variable
P1 ‖ P2 is a derived form for P1 ⋈_∅ P2.
When working with large numbers of entities, we write P[n] to denote an array of n copies of P executing in parallel.

    P[5] ≡ (P ‖ P ‖ P ‖ P ‖ P)
Rates of interaction: bounded capacity
Stochastic process algebras differ in how they define the rate of synchronised actions. In PEPA, cooperation between components gives rise to shared actions, the rate of which is governed by the assumption of bounded capacity.

Bounded capacity: No component can be made to carry out an action in cooperation faster than its own defined rate for the action.

Thus shared actions proceed at the minimum of the rates in the participating components.
In contrast, independent actions do not constrain each other, and if there are multiple copies of an action enabled in independent concurrent components their rates are summed.
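The two rate rules above can be sketched numerically. This is only an illustration of the bounded-capacity assumption; the helper names are not part of PEPA or any tool:

```python
# Illustrative sketch of PEPA's two rate rules (helper names are assumptions).

def shared_rate(rates):
    """Bounded capacity: a shared action proceeds at the minimum of the
    rates offered by the cooperating components."""
    return min(rates)

def independent_rate(rates):
    """Independent copies of an action do not constrain each other:
    their rates are summed."""
    return sum(rates)

# A component offering an action at rate 2.0 cooperating with one
# offering it at rate 0.5 yields a shared rate of 0.5.
print(shared_rate([2.0, 0.5]))            # 0.5
# Three independent components, each enabling the action at rate 0.5,
# give an aggregate rate of 1.5.
print(independent_rate([0.5, 0.5, 0.5]))  # 1.5
```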
A simple example: processors and resources

Proc0 def= (task1, r1).Proc1
Proc1 def= (task2, r2).Proc0
Res0 def= (task1, r3).Res1
Res1 def= (reset, r4).Res0

Proc0 ⋈_{task1} Res0

The labelled transition system has four states, with R = min(r1, r3):

    Proc0 ⋈ Res0  --(task1, R)-->   Proc1 ⋈ Res1
    Proc1 ⋈ Res1  --(task2, r2)-->  Proc0 ⋈ Res1
    Proc1 ⋈ Res1  --(reset, r4)-->  Proc1 ⋈ Res0
    Proc1 ⋈ Res0  --(task2, r2)-->  Proc0 ⋈ Res0
    Proc0 ⋈ Res1  --(reset, r4)-->  Proc0 ⋈ Res0

With the states ordered (Proc0 ⋈ Res0, Proc1 ⋈ Res1, Proc1 ⋈ Res0, Proc0 ⋈ Res1):

Q =
    -R      R         0     0
     0   -(r2+r4)    r4    r2
     r2     0       -r2     0
     r4     0         0   -r4
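With numeric rates, the generator matrix above can be assembled directly. A minimal sketch, with illustrative rate values (the slides leave the rates symbolic):

```python
# Generator matrix of the four-state CTMC for the example, with states
# ordered (Proc0|Res0, Proc1|Res1, Proc1|Res0, Proc0|Res1).
# The rate values here are illustrative, not from the slides.
r1, r2, r3, r4 = 2.0, 1.0, 1.5, 0.5
R = min(r1, r3)  # shared task1 rate, by bounded capacity

Q = [
    [-R,           R,    0.0,  0.0],
    [0.0, -(r2 + r4),    r4,   r2 ],
    [r2,         0.0,   -r2,   0.0],
    [r4,         0.0,   0.0,  -r4 ],
]

# Sanity checks on a CTMC generator: each row sums to zero and
# off-diagonal entries are non-negative.
for i, row in enumerate(Q):
    assert abs(sum(row)) < 1e-12
    assert all(q >= 0 for j, q in enumerate(row) if j != i)
```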
Solving discrete state models
As we have seen, the SOS semantics of an SPA model is mapped to a CTMC with global states determined by the local states of the participating components.
When the size of the state space is not too large, such models are amenable to numerical solution (linear algebra) to determine a steady state or transient probability distribution.

Q =
    q1,1  q1,2  ...  q1,N
    q2,1  q2,2  ...  q2,N
    ...   ...   ...  ...
    qN,1  qN,2  ...  qN,N

π(t) = (π1(t), π2(t), ..., πN(t))
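The steady-state distribution solves π Q = 0 subject to Σ πi = 1. A pure-Python sketch of this linear-algebra step, applied to the processors-and-resources example with illustrative rates:

```python
# Steady-state distribution of a small CTMC: solve pi Q = 0 with
# sum(pi) = 1 by Gaussian elimination (pure-Python sketch).
def steady_state(Q):
    n = len(Q)
    # Transpose Q so the balance equations read (Q^T) pi^T = 0, then
    # replace the last equation by the normalisation sum(pi) = 1.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[n - 1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

# Generator of the processors-and-resources example (illustrative rates).
r1, r2, r3, r4 = 2.0, 1.0, 1.5, 0.5
R = min(r1, r3)
Q = [[-R, R, 0.0, 0.0],
     [0.0, -(r2 + r4), r4, r2],
     [r2, 0.0, -r2, 0.0],
     [r4, 0.0, 0.0, -r4]]
pi = steady_state(Q)
assert abs(sum(pi) - 1.0) < 1e-9
# Global balance: pi Q = 0 in every column.
assert all(abs(sum(pi[i] * Q[i][j] for i in range(4))) < 1e-9 for j in range(4))
```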
Solving discrete state models
Alternatively, they may be studied using stochastic simulation. Each run generates a single trajectory through the state space. Many runs are needed in order to obtain average behaviours.
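One run of such a simulation can be sketched with Gillespie's direct method on the example CTMC. The rates, seed and run length here are illustrative choices, not from the slides:

```python
import random

# One trajectory of the example CTMC via Gillespie's direct method (sketch).
# States: 0 = Proc0|Res0, 1 = Proc1|Res1, 2 = Proc1|Res0, 3 = Proc0|Res1.
r1, r2, r3, r4 = 2.0, 1.0, 1.5, 0.5
R = min(r1, r3)
# transitions[s] = list of (target state, rate) pairs enabled in s.
transitions = {0: [(1, R)],
               1: [(2, r4), (3, r2)],
               2: [(0, r2)],
               3: [(0, r4)]}

def simulate(t_end, seed=42):
    rng = random.Random(seed)
    t, state = 0.0, 0
    trace = [(0.0, 0)]
    while True:
        outs = transitions[state]
        total = sum(rate for _, rate in outs)
        t += rng.expovariate(total)      # exponential holding time
        if t > t_end:
            return trace
        u = rng.random() * total         # pick a transition by its rate
        target = outs[-1][0]
        for tgt, rate in outs:
            u -= rate
            if u <= 0:
                target = tgt
                break
        state = target
        trace.append((t, state))

trace = simulate(100.0)
assert all(s in transitions for _, s in trace)
assert all(t0 <= t1 for (t0, _), (t1, _) in zip(trace, trace[1:]))
```

Averaging over many such traces (different seeds) approximates the probability distribution that the linear-algebra approach computes exactly.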
Benefits of process algebra
Beyond the clear benefits for model construction derived from using a high-level language with compositionality, the formal nature of the process algebra specification has been exploited in a number of ways.
For example,
• The correspondence between the congruence (Markovian bisimulation) in the process algebra and the lumpability condition in the CTMC allows exact model reduction to be carried out compositionally.
• Characterisation of product form structure at the process algebra level allows decomposed model solution based on the process algebra structure of the model.
• Stochastic model checking based on the CSL family of temporal logics allows evaluation of quantified properties of the behaviour of the system.
Disadvantages of process algebra
The primary disadvantage of stochastic process algebras, shared by all discrete event modelling paradigms, is the problem of state space explosion, also known as the curse of dimensionality.
Simple example revisited

Proc0 def= (task1, r1).Proc1
Proc1 def= (task2, r2).Proc0
Res0 def= (task1, r3).Res1
Res1 def= (reset, r4).Res0

Proc0[NP] ⋈_{task1} Res0[NR]
CTMC interpretation

Processors (NP)   Resources (NR)   States (2^(NP+NR))
      1                 1                     4
      2                 1                     8
      2                 2                    16
      3                 2                    32
      3                 3                    64
      4                 3                   128
      4                 4                   256
      5                 4                   512
      5                 5                  1024
      6                 5                  2048
      6                 6                  4096
      7                 6                  8192
      7                 7                 16384
      8                 7                 32768
      8                 8                 65536
      9                 8                131072
      9                 9                262144
     10                 9                524288
     10                10               1048576
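The growth in the table follows directly from the model structure: each processor and each resource has two local states, so the global state space has 2^(NP+NR) states. A one-line check:

```python
# Each of the NP processors and NR resources has 2 local states,
# so the CTMC of the example has 2**(NP + NR) global states.
def states(NP, NR):
    return 2 ** (NP + NR)

assert states(1, 1) == 4
assert states(5, 5) == 1024
assert states(10, 10) == 1048576   # the last row of the table
```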
This is particularly a problem for population models, i.e. systems where we are interested in interacting populations of entities:
Large scale software systems: Issues of scalability are important for user satisfaction and resource efficiency, but such issues are difficult to investigate using discrete state models.
Biochemical signalling pathways: Understanding these pathways has the potential to improve the quality of life through enhanced drug treatment and better drug design.
Epidemiological systems: Improved modelling of these systems could lead to improved disease prevention and treatment in nature, and better security in computer systems.
Crowd dynamics: Technology enhancement is creating new possibilities for directing crowd movements in buildings and urban spaces, for example for emergency egress, which are not yet well understood.
A conundrum
Process algebras are well-suited to constructing such models:
• Developed to represent concurrent behaviour compositionally;
• Represent the interactions between individuals explicitly;
• Stochastic extensions allow the dynamics of system behaviour to be captured;
• Incorporate formal apparatus for reasoning about the behaviour of systems.
But solution techniques which rely on explicitly building the state space, such as numerical solution, are hampered by space complexity...
...whilst those that use the implicit state space, such as simulation, run into problems of time complexity.
The CODA project
In the CODA project we have been developing stochastic process algebras and associated theory, tailored to the construction and evaluation of the collective dynamics of large systems of interacting entities.
One approach is to keep the discrete state representation in the model and to evaluate it algorithmically rather than analytically, i.e. carry out a discrete event simulation of the model to explore its possible behaviours.
However, our main approach has been to use a counting abstraction in order to make a shift to population statistics and to develop a fluid approximation.
Population statistics: emergent behaviour
A shift in perspective allows us to model the interactions between individual components but then consider the system as a whole as an interaction of populations.
This allows us to model much larger systems than previously possible, but in making the shift we are no longer able to collect any information about individuals in the system.
To characterise the behaviour of a population, we count the number of individuals within the population that are exhibiting certain behaviours, rather than tracking individuals directly.
Then we make a continuous approximation of how the counts vary over time.
Novelty
The novelty in this approach is twofold:
Linking process algebra and continuous mathematical models for dynamic evaluation represents a paradigm shift in how such formal models are analysed.
The prospect of formally-based quantified evaluation of dynamic behaviour could have significant impact in application domains which have traditionally worked directly at the level of fitting differential equation models.
Outline
1 Introduction: Stochastic Process Algebra
2 Continuous Approximation: State variables
3 Fluid-Flow Semantics: Fluid Structured Operational Semantics
4 Conclusions
Alternative Representations
[Diagram: a large PEPA model is mapped in two ways: via the individual view to an explicit state CTMC, analysed by stochastic simulation; and via the population view to a continuous approximation of the CTMC.]
A new approach to SPA semantics
1 Use a counting abstraction rather than the complete CTMC state space.
2 Assume that these state variables are subject to continuous rather than discrete change.
3 No longer aim to calculate the probability distribution over the entire state space of the model.
4 Instead, the trajectory of the ODEs estimates the expected behaviour of the CTMC.
Models suitable for counting abstraction
• In the PEPA language multiple instances of components are represented explicitly: we write P[n] to denote an array of n copies of P executing in parallel.

    P[5] ≡ (P ‖ P ‖ P ‖ P ‖ P)

• The impact of an action on a counting variable is
  • decrease by 1 if the component participates in the action
  • increase by 1 if the component is the result of the action
  • zero if the component is not involved in the action.
Simple example revisited

Proc0 def= (task1, r1).Proc1
Proc1 def= (task2, r2).Proc0
Res0 def= (task1, r3).Res1
Res1 def= (reset, r4).Res0

Proc0[NP] ⋈_{task1} Res0[NR]
• task1 decreases Proc0 and Res0
• task1 increases Proc1 and Res1
• task2 decreases Proc1
• task2 increases Proc0
• reset decreases Res1
• reset increases Res0
ODE interpretation

dx1/dt = −min(r1 x1, r3 x3) + r2 x2    (x1 = no. of Proc0)
dx2/dt = min(r1 x1, r3 x3) − r2 x2    (x2 = no. of Proc1)
dx3/dt = −min(r1 x1, r3 x3) + r4 x4    (x3 = no. of Res0)
dx4/dt = min(r1 x1, r3 x3) − r4 x4    (x4 = no. of Res1)
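These four ODEs are easy to integrate numerically. The sketch below uses plain forward Euler with illustrative rate and population values (not taken from the slides); note that the counting abstraction conserves the totals x1 + x2 = NP and x3 + x4 = NR.

```python
# Forward-Euler integration of the fluid ODEs for the Proc/Res example.
# Rates and initial populations are illustrative values.
r1, r2, r3, r4 = 2.0, 1.0, 1.5, 3.0
NP, NR = 100.0, 60.0

def vector_field(x):
    x1, x2, x3, x4 = x
    shared = min(r1 * x1, r3 * x3)   # bounded-capacity rate of shared task1
    return (-shared + r2 * x2,       # dx1/dt: Proc0
            shared - r2 * x2,        # dx2/dt: Proc1
            -shared + r4 * x4,       # dx3/dt: Res0
            shared - r4 * x4)        # dx4/dt: Res1

x = (NP, 0.0, NR, 0.0)
dt = 0.001
for _ in range(10000):               # integrate up to t = 10
    dx = vector_field(x)
    x = tuple(xi + dt * dxi for xi, dxi in zip(x, dx))

print(x)  # totals x1+x2 and x3+x4 stay at NP and NR throughout
```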
Alternative Representations

[Diagram: three routes from a large PEPA model. Individual view: the explicit-state CTMC, built from the full (implicit) generator matrix and analysed by stochastic simulation of the CTMC. Aggregated view: a reduced generator matrix. Population view: the counting abstraction yields generator functions giving an abstract CTMC, whose fluid approximation is a continuous approximation of the CTMC as a set of ODEs.]
Outline

1 Introduction: Stochastic Process Algebra
2 Continuous Approximation: State variables
3 Fluid-Flow Semantics: Fluid Structured Operational Semantics
4 Conclusions
Deriving a Fluid Approximation of a PEPA model
The aim is to represent the CTMC implicitly (avoiding state space explosion), and to generate the set of ODEs which are the fluid limit of that CTMC.

The existing SOS semantics is not suitable because it constructs the state space of the CTMC explicitly.

Instead we define a structured operational semantics which defines the possible transitions of an arbitrary abstract state, giving the CTMC implicitly as a set of generator functions, which directly give rise to the ODEs.
SPA MODEL --(SOS rules)--> SYMBOLIC LABELLED TRANSITION SYSTEM --(generator functions)--> ABSTRACT CTMC --(fluid approx.)--> ODEs F_M(x)
Fluid Structured Operational Semantics

In order to get to the implicit representation of the CTMC we need to:

1 Make the counting abstraction (Context Reduction)
2 Collect the transitions of the reduced context (Jump Multiset)
3 Calculate the rate of the transitions in terms of an arbitrary state of the CTMC.

Once this is done we can extract the vector field F_M(x) from the jump multiset.
Context Reduction

Proc0 ≝ (task1, r1).Proc1
Proc1 ≝ (task2, r2).Proc0
Res0 ≝ (task1, r3).Res1
Res1 ≝ (reset, r4).Res0

System ≝ Proc0[NP] ⋈{task1} Res0[NR]

⇓

R(System) = {Proc0, Proc1} ⋈{task1} {Res0, Res1}

Population Vector: ξ = (ξ1, ξ2, ξ3, ξ4)
Location Dependency

System ≝ Proc0[N′C] ⋈{task1} Res0[NS] ‖ Proc0[N″C]

⇓

{Proc0, Proc1} ⋈{task1} {Res0, Res1} ‖ {Proc0, Proc1}

Population Vector: ξ = (ξ1, ξ2, ξ3, ξ4, ξ5, ξ6)
Fluid Structured Operational Semantics by Example

Proc0 ≝ (task1, r1).Proc1
Proc1 ≝ (task2, r2).Proc0
Res0 ≝ (task1, r3).Res1
Res1 ≝ (reset, r4).Res0

System ≝ Proc0[NP] ⋈{task1} Res0[NR]

ξ = (ξ1, ξ2, ξ3, ξ4)
Proc0 --(task1, r1)--> Proc1
Proc0 --(task1, r1 ξ1)-->* Proc1
Res0 --(task1, r3)--> Res1
Res0 --(task1, r3 ξ3)-->* Res1
Proc0 ⋈{task1} Res0 --(task1, r(ξ))-->* Proc1 ⋈{task1} Res1
Apparent Rate Calculation

Proc0 --(task1, r1)--> Proc1
Proc0 --(task1, r1 ξ1)-->* Proc1

Res0 --(task1, r3)--> Res1
Res0 --(task1, r3 ξ3)-->* Res1

Proc0 ⋈{task1} Res0 --(task1, r(ξ))-->* Proc1 ⋈{task1} Res1

r(ξ) = min(r1 ξ1, r3 ξ3)
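The bounded-capacity apparent rate can be sketched directly: each side offers task1 at its own aggregate rate, and the cooperation proceeds at the slower of the two. Names and rate values below are illustrative.

```python
# Sketch of PEPA's bounded-capacity rate for a shared action:
# the cooperation goes at the rate of the slower participant.
r1, r3 = 2.0, 1.5          # individual task1 rates (illustrative values)

def apparent_rate(xi1, xi3):
    """Rate of the shared task1 when there are xi1 copies of Proc0
    and xi3 copies of Res0."""
    return min(r1 * xi1, r3 * xi3)

print(apparent_rate(3, 2))  # min(6.0, 3.0) = 3.0
```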
f(ξ, l, α) as the Generator Matrix of the Lumped CTMC

[Diagram: from the state (P0 ‖ P0 ‖ P0) ⋈{task1} (R0 ‖ R0), a task1 transition may take any of the three P0 copies to P1 and either R0 copy to R1, giving six explicit target states such as (P1 ‖ P0 ‖ P0) ⋈{task1} (R1 ‖ R0). Under the counting abstraction all six collapse to a single transition.]
(3, 0, 2, 0) --min(3r1, 2r3)--> (2, 1, 1, 1)
Jump Multiset

Proc0 ⋈{task1} Res0 --(task1, r(ξ))-->* Proc1 ⋈{task1} Res1,  r(ξ) = min(r1 ξ1, r3 ξ3)

Proc1 ⋈{task1} Res0 --(task2, ξ2 r2)-->* Proc0 ⋈{task1} Res0

Proc0 ⋈{task1} Res1 --(reset, ξ4 r4)-->* Proc0 ⋈{task1} Res0
Construction of f(ξ, l, α)

Proc0 ⋈{task1} Res1 --(reset, ξ4 r4)-->* Proc0 ⋈{task1} Res0

• Take l = (0, 0, 0, 0)
• Add −1 to all elements of l corresponding to the indices of the components in the lhs of the transition:
  l = (−1, 0, 0, −1)
• Add +1 to all elements of l corresponding to the indices of the components in the rhs of the transition:
  l = (−1+1, 0, +1, −1) = (0, 0, +1, −1)

f(ξ, (0, 0, +1, −1), reset) = ξ4 r4
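That bookkeeping is mechanical, and a short sketch makes it concrete: given the positions of the lhs and rhs components in the population vector, build the jump vector l. Indices here are 0-based, unlike the 1-based ξ indices on the slide.

```python
# Sketch of constructing the jump vector l for one transition.
def jump_vector(size, lhs, rhs):
    """-1 for each lhs component index, +1 for each rhs component index."""
    l = [0] * size
    for i in lhs:
        l[i] -= 1
    for i in rhs:
        l[i] += 1
    return tuple(l)

# reset: lhs = {Proc0, Res1} at positions 0 and 3,
#        rhs = {Proc0, Res0} at positions 0 and 2.
print(jump_vector(4, lhs=[0, 3], rhs=[0, 2]))  # (0, 0, 1, -1)
```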
Construction of f(ξ, l, α)

Proc0 ⋈{task1} Res0 --(task1, r(ξ))-->* Proc1 ⋈{task1} Res1
Proc1 ⋈{task1} Res0 --(task2, ξ2 r2)-->* Proc0 ⋈{task1} Res0
Proc0 ⋈{task1} Res1 --(reset, ξ4 r4)-->* Proc0 ⋈{task1} Res0

f(ξ, (−1, +1, −1, +1), task1) = min(r1 ξ1, r3 ξ3)
f(ξ, (+1, −1, 0, 0), task2) = ξ2 r2
f(ξ, (0, 0, +1, −1), reset) = ξ4 r4
Capturing behaviour in the Generator Function

Proc0 ≝ (task1, r1).Proc1
Proc1 ≝ (task2, r2).Proc0
Res0 ≝ (task1, r3).Res1
Res1 ≝ (reset, r4).Res0

System ≝ Proc0[NP] ⋈{task1} Res0[NR]

Numerical Vector Form
ξ = (ξ1, ξ2, ξ3, ξ4) ∈ N⁴, ξ1 + ξ2 = NP and ξ3 + ξ4 = NR

Generator Function f(ξ, l, α):
f(ξ, (−1, 1, −1, 1), task1) = min(r1 ξ1, r3 ξ3)
f(ξ, (1, −1, 0, 0), task2) = r2 ξ2
f(ξ, (0, 0, 1, −1), reset) = r4 ξ4
Extraction of the ODE from f

Generator Function f(ξ, l, α):
f(ξ, (−1, 1, −1, 1), task1) = min(r1 ξ1, r3 ξ3)
f(ξ, (1, −1, 0, 0), task2) = r2 ξ2
f(ξ, (0, 0, 1, −1), reset) = r4 ξ4

Differential Equation

dx/dt = F_M(x) = Σ_{l ∈ Z^d} l Σ_{α ∈ A} f(x, l, α)
      = (−1, 1, −1, 1) min(r1 x1, r3 x3) + (1, −1, 0, 0) r2 x2 + (0, 0, 1, −1) r4 x4

Component-wise:

dx1/dt = −min(r1 x1, r3 x3) + r2 x2
dx2/dt = min(r1 x1, r3 x3) − r2 x2
dx3/dt = −min(r1 x1, r3 x3) + r4 x4
dx4/dt = min(r1 x1, r3 x3) − r4 x4
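The extraction F_M(x) = Σ_l l Σ_α f(x, l, α) is easy to mechanise. In this sketch the generator function is stored as a map from (jump vector, action) to a rate function; the rate values are illustrative, not from the slides.

```python
# Sketch: extract the vector field F_M(x) from a generator function,
# following F_M(x) = sum over l of l * (sum over alpha of f(x, l, alpha)).
r1, r2, r3, r4 = 2.0, 1.0, 1.5, 3.0   # illustrative rates

# Generator function: (jump vector l, action) -> rate as a function of x.
f = {
    ((-1, 1, -1, 1), "task1"): lambda x: min(r1 * x[0], r3 * x[2]),
    ((1, -1, 0, 0), "task2"): lambda x: r2 * x[1],
    ((0, 0, 1, -1), "reset"): lambda x: r4 * x[3],
}

def vector_field(x):
    F = [0.0] * len(x)
    for (l, _action), rate in f.items():
        r = rate(x)                    # rate of this transition at state x
        for i, li in enumerate(l):
            F[i] += li * r             # weight the jump vector by the rate
    return tuple(F)

# At x = (3, 0, 2, 0): task1 fires at min(6.0, 3.0) = 3.0, nothing else.
print(vector_field((3.0, 0.0, 2.0, 0.0)))  # (-3.0, 3.0, -3.0, 3.0)
```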
Consistency results

• The vector field F(x) is Lipschitz continuous, i.e. all the rate functions governing transitions in the process algebra satisfy local continuity conditions.

• The generated ODEs are the fluid limit of the family of CTMCs generated by f(ξ, l, α): this family forms a sequence as the initial populations are scaled by a variable n.

• We can prove this using Kurtz's theorem: Solutions of Ordinary Differential Equations as Limits of Pure Jump Markov Processes, T.G. Kurtz, J. Appl. Prob. (1970).

• Moreover, Lipschitz continuity of the vector field guarantees existence and uniqueness of the solution to the initial value problem.
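For reference, the flavour of the convergence result can be stated as follows. This formulation is a paraphrase of the density-dependent setting of Kurtz (1970), not taken from the slides: let X_n be the CTMC with initial populations scaled by n, let x solve dx/dt = F(x) with F Lipschitz, and suppose X_n(0)/n → x(0). Then for every t ≥ 0 and ε > 0,

```latex
\lim_{n \to \infty}
  \Pr\!\left\{ \sup_{s \le t}
    \left\| \tfrac{1}{n} X_n(s) - x(s) \right\| > \varepsilon \right\} = 0 .
```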
Outline

1 Introduction: Stochastic Process Algebra
2 Continuous Approximation: State variables
3 Fluid-Flow Semantics: Fluid Structured Operational Semantics
4 Conclusions
Why use a process algebra?

• High level description of the system eases the task of model construction.

• Formal language allows for unambiguous interpretation and automatic translation into the underlying mathematical structure.

• Moreover, properties of that mathematical structure may be deduced by the construction at the process algebra level.

• Furthermore, formal reasoning techniques such as equivalence relations and model checking can be used to manipulate or interrogate models.

• Compositionality can be exploited both for model construction and (in some cases) for model analysis.
Conclusions

• Many interesting and important systems can be studied as a result of the link between stochastic process algebras and ODEs.

• In particular, we can now consider examples of collective dynamics and emergent behaviour.

• Stochastic process algebras are well-suited to modelling the behaviour of such systems in terms of the individuals and their interactions.

• Continuous approximation allows a rigorous mathematical analysis of the average behaviour of such systems.
On-going work

• Time series plots counting the populations of components over time tell us a great deal about the dynamics of the system but are not necessarily the information we require.

• Recent work has established the validity of performance measures such as throughput and average response time derived from the ODE solutions [TDGH 2012, IEEE TSE].

• On-going work is investigating the use of probes to query the model, adding components to the model whose sole purpose is to gather statistics.

• Future work will also consider exploiting more of the formal structure of the process algebras to assist in the manipulation and analysis of the ODEs.
Thanks!

Acknowledgements: collaborators
Thanks to many co-authors and collaborators: Andrea Bracciali, Jeremy Bradley, Luca Bortolussi, Federica Ciocchetta, Allan Clark, Jie Ding, Adam Duguid, Vashti Galpin, Stephen Gilmore, Diego Latella, Mieke Massink, Mirco Tribastone, and others.

Acknowledgements: funding
Thanks to EPSRC for the Process Algebra for Collective Dynamics grant and the CEC IST-FET programme for the SENSORIA project, which have supported this work.

More information: http://www.dcs.ed.ac.uk/pepa
Alternative Representations

[Diagram: three views of a large PEPA model. Individual view: a CTMC analysed by stochastic simulation. Hybrid view: a TDSHA (hybrid automaton). Population view: a set of ODEs.]