
Budapest University of Technology and Economics Department of Measurement and Information Systems

Budapest University of Technology and Economics Fault Tolerant Systems Research Group

Reliability modeling with stochastic model checking

Kristóf Marussy

1

References

1. Katoen, Joost-Pieter. The Probabilistic Model Checking Landscape. In: LICS '16, pp. 31-45. DOI: 10.1145/2933575.2934574

2. Kwiatkowska, Marta and David Parker. Advances in Probabilistic Model Checking. In: Marktoberdorf Summer School '11. URL: http://qav.comlab.ox.ac.uk/papers/marktoberdorf11.pdf

3. Budde, Carlos E., Christian Dehnert, Ernst Moritz Hahn, Arnd Hartmanns, Sebastian Junges and Andrea Turrini. JANI: Quantitative Model and Tool Interaction. In: TACAS '17, pp. 151-168. DOI: 10.1007/978-3-662-54580-5_9

2

Dependability and performability modeling

Check whether the system

o satisfies its service-level agreement, or

o has a tolerable hazard rate

Choose between architecture alternatives by optimization

Synthesize correct configurations

3

System architecture

Extra-functional requirements: reliability, availability, performance, …

Lower-level formalisms

4

System architecture

Extra-functional requirements

Stochastic model

• Markov Chains → Lecture 11

• Markov Decision Processes

• Interactive Markov Chains

• Markov Automata

• Stochastic Timed Automata

• Stochastic Hybrid Automata

• Stochastic Games

Stochastic property descriptions

5

System architecture

Extra-functional requirements

Stochastic model

Stochastic properties

• PCTL properties

• CSL and CSRL → Lecture 11

• Rewards to be optimized

• Constraints for synthesis

Analysis goals

6

System architecture

Extra-functional requirements

Stochastic model

Stochastic properties

Stochastic model checker

Values of metrics

Values of parameters

“Automated procedure for establishing if a desired property holds in a probabilistic system model” [2]

THE PROBABILISTIC MODELS LANDSCAPE

[1, Section 2]

7

Markov Chains

Markovian models: behavior depends only on the current state

o Discrete time: Transition probabilities

o Continuous time: Transition rates

PCTL properties, e.g.,

$P_{\le 0.5}(F^{[0,3]}\, s_3)$

8

[Figure: DTMC over states s0, s1, s2, s3 with transition probabilities 0.5, 0.5, 0.7, 0.3 and a self-loop of probability 1]

0.5 0.5

The probability of reaching s3 within at most 3 units of time is not larger than 0.5
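The step-bounded probability behind such a property can be computed by iterating the transition matrix with the target made absorbing. Below is a minimal Python sketch; the exact edge layout of the chain in the figure is not fully recoverable here, so the matrix is an assumed stand-in for illustration (for a discrete-time chain, the time bound is a step bound).

import numpy as np

# Assumed 4-state DTMC over s0..s3; illustrative probabilities, not the exact figure.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],   # s0: 0.5 to s1, 0.5 to s2
    [0.3, 0.0, 0.0, 0.7],   # s1: 0.3 back to s0, 0.7 to s3
    [0.0, 0.0, 1.0, 0.0],   # s2: self-loop
    [0.0, 0.0, 0.0, 1.0],   # s3: absorbing target
])

target = 3
x = np.zeros(4)
x[target] = 1.0              # Pr(F[0,0] s3)

for _ in range(3):           # three backward steps give Pr(F[0,3] s3)
    x = P @ x
    x[target] = 1.0          # keep the target absorbing even if it were not in P

print(x[0])                  # compare against the bound in P<=0.5(F[0,3] s3)

In this assumed chain the result is 0.35, so the bound of 0.5 would be satisfied.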

Markov Decision Processes

Introducing non-determinism to Markovian models

Action labels Act

o Act(s) ⊆ Act: the actions available in state s

Trajectory in discrete time: a sequence of states $\tau = s_{i_1} s_{i_2} \cdots s_{i_k} \in Path$

Scheduler $\sigma: Path \to Act$ with $\sigma(\tau) = \sigma(s_{i_1} s_{i_2} \cdots s_{i_k}) \in Act(s_{i_k})$
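Concretely, Act(s) and a scheduler can be written down as plain data. A general scheduler looks at the whole path; the memoryless scheduler fixed in the minimal Python sketch below picks one enabled action per state, which resolves all non-determinism and turns the MDP back into a Markov chain. The transition structure is an assumed stand-in, not the exact MDP from the figure.

# Assumed MDP: Act(s) and P(s, a, s') as nested dictionaries; illustrative numbers.
mdp = {
    "s0": {"A": {"s1": 0.5, "s2": 0.5}, "B": {"s3": 1.0}},
    "s1": {"A": {"s3": 0.7, "s0": 0.3}},
    "s2": {"C": {"s2": 1.0}},
    "s3": {"A": {"s3": 1.0}},
}

def act(s):
    """Act(s): the set of actions enabled in state s."""
    return set(mdp[s])

# A memoryless scheduler: sigma(path) depends only on the last state of the path.
scheduler = {"s0": "A", "s1": "A", "s2": "C", "s3": "A"}

def induced_chain(mdp, scheduler):
    """Fixing a scheduler resolves the non-determinism; the result is a DTMC."""
    return {s: mdp[s][scheduler[s]] for s in mdp}

print(act("s0"))                      # {'A', 'B'}
print(induced_chain(mdp, scheduler))  # DTMC obtained under the scheduler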

9

[Figure: MDP over states s0, s1, s2, s3 with actions A, B, C and transition probabilities 0.5, 0.5, 0.7, 0.3]


In the figure, s0 offers a non-deterministic choice between actions A and B; individual actions may be deterministic or probabilistic.

Markov Decision Processes (MDPs)

Introducing non-determinism to Markovian models

Model either environmental effects or control

o Environment hampers satisfaction of extra-functional requirements

o Control aids in satisfying requirements

Maximize or minimize over schedulers

$P_{\max=?}(F^{[0,3]}\, s_3)$ and $P_{\min=?}(F^{[0,3]}\, s_3)$

11

[Figure: the same MDP over s0, s1, s2, s3 with actions A, B, C and probabilities 0.5, 0.5, 0.7, 0.3]

Stochastic Games

Model interactions with the environment and control at the same time

Two players

o Player 1 – control

o Player 2 – environment

Schedulers $\sigma_1$ and $\sigma_2$ for Players 1 and 2, respectively

$P_{=?}(F^{[0,3]}\, s_3)$

o Minimize over 𝜎1, maximize over 𝜎2
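In such a turn-based game, the value of a step-bounded reachability query can be computed by a Bellman-style iteration that minimizes in the control player's states and maximizes in the environment player's states. A minimal Python sketch; the owners, actions and probabilities are assumed stand-ins, not the exact game from the figure.

# Assumed turn-based stochastic game (illustrative structure).
owner = {"s0": 1, "s1": 2, "s2": 2, "s3": 1}       # which player moves in each state
game = {
    "s0": {"A": {"s1": 1.0}, "B": {"s2": 1.0}},
    "s1": {"A": {"s3": 0.7, "s0": 0.3}},
    "s2": {"C": {"s0": 1.0}, "D": {"s3": 1.0}},
    "s3": {"A": {"s3": 1.0}},
}
target, steps = "s3", 3

x = {s: 1.0 if s == target else 0.0 for s in game}
for _ in range(steps):
    new_x = {}
    for s, actions in game.items():
        if s == target:
            new_x[s] = 1.0
            continue
        values = [sum(p * x[t] for t, p in dist.items())
                  for dist in actions.values()]
        # Player 1 (control) minimizes, Player 2 (environment) maximizes.
        new_x[s] = min(values) if owner[s] == 1 else max(values)
    x = new_x

print(x["s0"])   # value of P=?(F[0,3] s3) with min over sigma1 and max over sigma2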

13

[Figure: turn-based stochastic game over s0, s1, s2, s3 with actions A, B, C, D and probabilities 0.7, 0.3; each state is marked with the player whose turn it is]

The solution landscape

Continuous-Time Markov Chains

o Steady-state solution by linear equation solvers, transient solution by uniformization

Markov Decision Processes

o Bellman equations characterize extremal policies

o Value iteration, policy iteration, linear programming
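For instance, the Bellman equations for maximal reachability, x(s) = max over a in Act(s) of the expected value of x after a, with x = 1 on the target, can be solved by value iteration. A minimal Python sketch over the same dictionary-based MDP representation as before (an assumed model, iterated to a fixed precision rather than to exact convergence):

def value_iteration_max(mdp, target, eps=1e-8):
    """Iterate the Bellman operator for Pmax(F target) until the change drops below eps."""
    x = {s: 1.0 if s == target else 0.0 for s in mdp}
    while True:
        new_x = {}
        for s, actions in mdp.items():
            if s == target:
                new_x[s] = 1.0
                continue
            new_x[s] = max(sum(p * x[t] for t, p in dist.items())
                           for dist in actions.values())
        delta = max(abs(new_x[s] - x[s]) for s in mdp)
        x = new_x
        if delta < eps:
            return x

# Assumed MDP, same shape as the earlier sketch (illustrative numbers).
mdp = {
    "s0": {"A": {"s1": 0.5, "s2": 0.5}, "B": {"s0": 1.0}},
    "s1": {"A": {"s3": 0.7, "s0": 0.3}},
    "s2": {"C": {"s2": 1.0}},
    "s3": {"A": {"s3": 1.0}},
}
print(value_iteration_max(mdp, "s3")["s0"])   # Pmax of eventually reaching s3 from s0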

Challenge: growth of time and memory costs due to state space explosion

14

THE ABSTRACTION LANDSCAPE

[1, Section 4]

15

Stochastic bisimulation minimization

Partition the states of the Markov chain into equivalence classes $B_1, B_2, \ldots, B_m$

o States $s_i$ and $s_j$ belong to the same equivalence class if

$\sum_{s' \in B_k} P(s_i, s') = \sum_{s' \in B_k} P(s_j, s')$ for all $B_k$

o Defined analogously for MDPs

o Equivalent states can be merged to reduce state space
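The coarsest such partition can be computed by signature-based refinement: start from an initial partition (e.g. separating the goal states) and repeatedly split blocks whose states move into the current blocks with different probabilities. A minimal Python sketch on an assumed DTMC in which two states happen to behave identically:

def bisimulation_partition(dtmc, initial_partition):
    """Refine the partition until every block is probabilistically homogeneous."""
    partition = list(initial_partition)
    while True:
        block_of = {s: i for i, block in enumerate(partition) for s in block}

        def signature(s):
            # Probability mass moved into each current block, i.e. P(s, B_k).
            sig = {}
            for t, p in dtmc[s].items():
                sig[block_of[t]] = round(sig.get(block_of[t], 0.0) + p, 12)
            return frozenset(sig.items())

        refined = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), []).append(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):
            return refined
        partition = refined

# Assumed DTMC where s1 and s2 behave identically (illustrative numbers).
dtmc = {
    "s0": {"s1": 0.5, "s2": 0.5},
    "s1": {"s3": 0.7, "s0": 0.3},
    "s2": {"s3": 0.7, "s0": 0.3},
    "s3": {"s3": 1.0},
}
print(bisimulation_partition(dtmc, [["s0", "s1", "s2"], ["s3"]]))
# [['s0'], ['s1', 's2'], ['s3']]  -- s1 and s2 can be merged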

16

More aggressive merging

Merge (possibly unrelated) states in an MC by turning it into an MDP

18

[Figure: the original DTMC over s0, s1, s2, s3 with probabilities 1, 0.7, 0.3, 0.5, 0.5]

[Figure: the merged MDP over s0, s12, s3 with actions A, B, C and probabilities 0.5, 0.5]

States within a coarse partition block are replaced with non-determinism: action B behaves as s1, action C as s2
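A minimal Python sketch of such a merge, on an assumed concrete chain (the exact edges in the figure are not fully recoverable here): each concrete state folded into the merged state s12 contributes one action of the abstract MDP, so the abstract Pmin and Pmax bracket the concrete reachability probability. In this particular example the merge is so coarse that the bounds degenerate to 0 and 1.

# Assumed concrete DTMC; s1 and s2 are merged into the abstract state s12.
dtmc = {
    "s0": {"s1": 0.5, "s2": 0.5},
    "s1": {"s3": 0.7, "s0": 0.3},
    "s2": {"s2": 1.0},
    "s3": {"s3": 1.0},
}
abstract_of = {"s0": "s0", "s1": "s12", "s2": "s12", "s3": "s3"}

# Every merged concrete state becomes one action ("as in s1", "as in s2", ...).
mdp = {}
for s, dist in dtmc.items():
    lifted = {}
    for t, p in dist.items():
        lifted[abstract_of[t]] = lifted.get(abstract_of[t], 0.0) + p
    mdp.setdefault(abstract_of[s], {})["as in " + s] = lifted

def reach(model, target, pick, steps=200):
    """Naive fixed-point iteration for reachability; pick chooses max or min over actions."""
    x = {s: 1.0 if s == target else 0.0 for s in model}
    for _ in range(steps):
        x = {s: 1.0 if s == target else
             pick(sum(p * x[t] for t, p in d.items()) for d in acts.values())
             for s, acts in model.items()}
    return x

chain = {s: {"only": dist} for s, dist in dtmc.items()}   # the chain as a one-action MDP
print(reach(mdp, "s3", min)["s0"],       # abstract lower bound (0 here)
      reach(chain, "s3", max)["s0"],     # concrete probability (~0.41)
      reach(mdp, "s3", max)["s0"])       # abstract upper bound (~1)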


More aggressive merging

Merge (possibly unrelated) states in an MC by turning it into an MDP

Application to parameter synthesis

o Replace the unknown parameter value in the transition probabilities or rates by non-determinism

21

Figure adapted from [1]

Check extra-functional properties for whole parameter ranges at once
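A simplified Python sketch of this idea: an unknown branching probability p ∈ [p_lo, p_hi] is replaced by a non-deterministic choice between the interval endpoints, and the resulting MDP's Pmin/Pmax give conservative bounds for the whole range (the choice may differ at every visit, which is the relaxation used by parameter-lifting approaches; the model and interval below are assumed for illustration).

def lift_parameter(p_lo, p_hi):
    """Abstract a DTMC with one unknown branching probability p into an MDP:
    in the parametric state, non-determinism picks either interval endpoint."""
    def branch(p):
        return {"s3": p, "s0": 1.0 - p}   # s1 -> s3 with probability p, otherwise back to s0
    return {
        "s0": {"A": {"s1": 0.5, "s2": 0.5}},
        "s1": {"p_lo": branch(p_lo), "p_hi": branch(p_hi)},
        "s2": {"A": {"s2": 1.0}},
        "s3": {"A": {"s3": 1.0}},
    }

def reach_extremum(mdp, target, pick, eps=1e-8):
    """Value iteration for Pmax (pick=max) or Pmin (pick=min) of reaching the target."""
    x = {s: 1.0 if s == target else 0.0 for s in mdp}
    while True:
        new_x = {s: 1.0 if s == target else
                 pick(sum(p * x[t] for t, p in dist.items()) for dist in acts.values())
                 for s, acts in mdp.items()}
        if max(abs(new_x[s] - x[s]) for s in mdp) < eps:
            return new_x
        x = new_x

mdp = lift_parameter(0.6, 0.8)
print(reach_extremum(mdp, "s3", min)["s0"],   # lower bound over p in [0.6, 0.8]  (0.375)
      reach_extremum(mdp, "s3", max)["s0"])   # upper bound over p in [0.6, 0.8]  (~0.444)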

More aggressive merging

Merge (possibly unrelated) states in an MDP by turning it into a Stochastic Game

22

[Figure: the merged model over s0, s12, s3 with actions A, C and probabilities 0.5, 0.5]

[Figure: the original MDP over s0, s1, s2, s3 with actions A, B, C and probabilities 0.5, 0.5, 0.7, 0.3]

The “enemy” player can pick whether we transitioned into s1 or s2

OPEN SOURCE TOOLS

23

PRISM Model Checker

http://www.prismmodelchecker.org/

PRISM language: de facto standard

24

dtmc

module die
  // local state
  s : [0..7] init 0;
  // value of the die
  d : [0..6] init 0;

  [] s=0 -> 0.5 : (s'=1) + 0.5 : (s'=2);
  [] s=1 -> 0.5 : (s'=3) + 0.5 : (s'=4);
  [] s=2 -> 0.5 : (s'=5) + 0.5 : (s'=6);
  [] s=3 -> 0.5 : (s'=1) + 0.5 : (s'=7) & (d'=1);
  [] s=4 -> 0.5 : (s'=7) & (d'=2) + 0.5 : (s'=7) & (d'=3);
  [] s=5 -> 0.5 : (s'=7) & (d'=4) + 0.5 : (s'=7) & (d'=5);
  [] s=6 -> 0.5 : (s'=2) + 0.5 : (s'=7) & (d'=6);
  [] s=7 -> (s'=7);
endmodule

PRISM Model Checker

http://www.prismmodelchecker.org/

PRISM language: de facto standard

o discrete-time Markov chains (DTMCs)

o continuous-time Markov chains (CTMCs)

o Markov decision processes (MDPs)

o probabilistic automata (PAs)

o probabilistic timed automata (PTAs)

o PRISM-games extensions for Stochastic Games

Hybrid (symbolic + explicit) analysis backends

25

Storm: modern probabilistic model checker

Developed at RWTH Aachen University since 2012

o Open source since 2017

Various input formats

o PRISM, JANI, GSPN, DFT, cpGCL, explicit

Command line, C++, Python interfaces

Standard model format: JANI [3]

o JSON-based format for probabilistic models

o Simplifies integration of probabilistic model checking and abstraction into toolchains

o Trigger analysis tasks over WebSocket
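For example, the die model from the PRISM slide can be checked from Python. A minimal sketch, assuming the stormpy bindings (Storm's Python interface) are installed and the module is saved as die.pm; the calls follow stormpy's getting-started example and may differ between versions.

import stormpy

# Parse the PRISM module from the earlier slide (assumed to be saved as die.pm).
program = stormpy.parse_prism_program("die.pm")

# PCTL query: probability of eventually rolling a 1 (should be 1/6 for the fair die).
properties = stormpy.parse_properties("P=? [ F s=7 & d=1 ]", program)

model = stormpy.build_model(program, properties)
result = stormpy.model_checking(model, properties[0])
print(result.at(model.initial_states[0]))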

26

Summary

Analysis of stochastic and probabilistic models

o Model checking extra-functional requirements

o Calculation of metrics

o Synthesizing optimal parameters

Markov chains and their non-deterministic extensions

Abstraction techniques

o Bisimulation minimization

o Merge states by introducing non-determinism

Use in toolchains by invoking open-source tools

See [1, 2, 3] for more info

27