Interactive verification of Markov chains: Two distributed protocol case studies
Johannes Hölzl and Tobias Nipkow
TU München
QFM 2012, 28 August 2012
1 / 24
Introduction
▸ In the interactive theorem prover Isabelle
▸ Verified two case studies
  ▸ ZeroConf protocol (IPv4 address allocation)
  ▸ Crowds protocol (anonymizing service)
▸ Built on Isabelle's probability theory and Markov chains
  Hölzl & Heller (ITP 2011), Hölzl & Nipkow (TACAS 2012)
2 / 24
Interactive theorem proving
3 / 24
Interactive theorem proving
▸ Mathematics, but checked by a computer
▸ Powerful logics (e.g. ZF, CoC, HOL):
  ▸ Can deal with infinite-state systems
  ▸ User-extensible
▸ Too powerful to be fully automatic: the user needs to write proofs
▸ Proof language and proof methods
4 / 24
Isabelle/HOL
▸ Logic is HOL: functional programming + quantifiers
▸ Declarative proof language Isar
▸ Small kernel: each proof is reduced to primitive proof steps
▸ Powerful proof methods (rewrite engine, Sledgehammer, ...)
▸ Important theories: datatypes, real analysis, measure theory, probability theory, Markov chains, ...
5 / 24
Case study: ZeroConf protocol
6 / 24
ZeroConf protocol
▸ Protocol to allocate an address in a link-local network, without a central authority (RFC 3927)
▸ We formalized the analysis of Bohnenkamp et al. (2003)
  ▸ Address allocation when only one computer is added
  ▸ Probability that two hosts end up with the same address
  ▸ Expected time until an address is allocated
▸ Model checking analysis by Kwiatkowska et al. (2006) and Andova et al. (2003)
7 / 24
[Figure, slide 8: Markov chain of the ZeroConf protocol. States S, P 0, P 1, ..., P N, Ok, Err; edge labels give (probability; cost):
S → Ok: 1−q; r·(N+1) · S → P 0: q; r · P n → P (n+1): p; r · P n → S: 1−p; 0 · P N → Err: p; E · Ok → Ok: 1; 0 · Err → Err: 1; 0]
8 / 24
▸ Fix parameters:
fixes N :: ℕ and p q r E :: ℝ
assumes 0 < p and p < 1 and 0 < q and q < 1
assumes 0 ≤ E and 0 ≤ r
▸ Define state space:
datatype zc-state = S | P ℕ | Ok | Err
Ω = {S, Ok, Err} ∪ {P n | n ≤ N}
▸ Define the transition function τ:
τ S Ok = 1 − q
τ S (P 0) = q
τ (P n) (P (n+1)) = if n < N then p else 0
...
9 / 24
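The locale above can be mirrored as ordinary code. A minimal Python sketch of the transition function τ, instantiated with illustrative parameter values (N = 2, p = 0.01, q = 16/65024, as on slide 12), checking that every row of τ sums to 1, i.e. that τ really is a stochastic matrix:

```python
N, p, q = 2, 0.01, 16 / 65024             # illustrative instantiation

states = ['S', 'Ok', 'Err'] + [('P', n) for n in range(N + 1)]

def tau(s, t):
    """Transition probability from state s to state t."""
    if s == 'S':
        return (1 - q) if t == 'Ok' else (q if t == ('P', 0) else 0)
    if s in ('Ok', 'Err'):                # absorbing states
        return 1 if t == s else 0
    n = s[1]                              # s = ('P', n)
    if n < N and t == ('P', n + 1):
        return p                          # next probe succeeds silently
    if n == N and t == 'Err':
        return p                          # all probes lost: error
    return 1 - p if t == 'S' else 0       # answer received: restart

# Every row of tau must sum to 1 (the markov-chain property below):
rows_stochastic = all(
    abs(sum(tau(s, t) for t in states) - 1) < 1e-12 for s in states
)
```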
▸ Defines a Markov chain:
lemma markov-chain Ω τ
▸ Probability theory gives us:
Pr_s(ω. P ω) – the probability that a trace ω fulfills P ω
▸ Define the probability that an error is reached:
P_err s = Pr_s(ω. ∃n. ω n = Err)
▸ Analyse: P_err S = ?
10 / 24
lemma n ≤ N =⇒ P_err (P (N − n)) = p^(n+1) + (1 − p^(n+1)) · P_err S
proof (induct n)
  case (n+1)
  have P_err (P (N − (n+1)))
      = p · (p^(n+1) + (1 − p^(n+1)) · P_err S) + (1 − p) · P_err S
    by (simp · · ·)
  also have . . . = p^((n+1)+1) + (1 − p^((n+1)+1)) · P_err S
    by (simp · · ·)
  finally show P_err (P (N − (n+1))) = p^((n+1)+1) + (1 − p^((n+1)+1)) · P_err S .
next
  case 0
  show P_err (P (N − 0)) = p^(0+1) + (1 − p^(0+1)) · P_err S
    by simp
qed
11 / 24
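The induction lemma can also be checked numerically. The sketch below computes the exact absorption probabilities P_err (P k) of the chain from slide 8 by backward recursion, writing each as a[k] + b[k]·x with x = P_err S, then verifies the lemma's identity for every n ≤ N (the parameter values are made up for the check, not taken from the talk):

```python
p, q, N = 0.3, 0.2, 5                     # made-up values for the check

# P_err (P k) = a[k] + b[k] * x with x = P_err S, by backward recursion:
a = [0.0] * (N + 1)
b = [0.0] * (N + 1)
a[N], b[N] = p, 1 - p                     # P N -> Err (p), P N -> S (1-p)
for k in range(N - 1, -1, -1):
    a[k] = p * a[k + 1]                   # P k -> P (k+1) with prob p
    b[k] = p * b[k + 1] + (1 - p)         # P k -> S with prob 1-p

x = q * a[0] / (1 - q * b[0])             # solves x = q * (a[0] + b[0]*x)

# The lemma: P_err (P (N - n)) = p^(n+1) + (1 - p^(n+1)) * x for n <= N
lemma_ok = all(
    abs((a[N - n] + b[N - n] * x) - (p**(n + 1) + (1 - p**(n + 1)) * x)) < 1e-12
    for n in range(N + 1)
)
```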
▸ General result:
theorem P_err S = (q · p^(N+1)) / (1 − q · (1 − p^(N+1)))
▸ 16 hosts (q = 16/65024), 3 probe runs (N = 2), p = 0.01:
corollary P_err S ≤ 10^(−13)
12 / 24
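The closed form is the fixed point of x = q · (p^(N+1) + (1 − p^(N+1)) · x), which follows from the lemma at n = N together with P_err S = q · P_err (P 0). A quick numeric sketch confirming that the stated formula satisfies this equation, for the slide's parameters and two arbitrary test triples:

```python
def p_err_closed(p, q, N):
    """The theorem's right-hand side."""
    return q * p ** (N + 1) / (1 - q * (1 - p ** (N + 1)))

# Fixed-point residuals; the last two triples are arbitrary test values.
checks = []
for p, q, N in [(0.01, 16 / 65024, 2), (0.3, 0.2, 5), (0.5, 0.9, 1)]:
    x = p_err_closed(p, q, N)
    checks.append(abs(x - q * (p ** (N + 1) + (1 - p ** (N + 1)) * x)) < 1e-12)
```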
▸ How do we model the expected running time?
▸ Similar to τ, define the cost function ρ:
ρ S Ok = r · (N+1)
ρ S (P 0) = r
ρ (P n) (P (n+1)) = if n < N then r else 0
...
▸ Define the expected cost until Err or Ok is reached:
C_fin s = ∫_ω cost-until {Err, Ok} (s · ω) dPr_s
▸ 16 hosts, 3 probe runs, p = 0.01, r = 2 ms, E = 3600 s:
theorem C_fin S ≤ 0.007
13 / 24
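C_fin S can likewise be obtained from a small linear system: each C_fin (P k) is affine in x = C_fin S, so a backward pass plus one division suffices. A Python sketch with the slide's concrete numbers (r = 0.002 s, E = 3600 s), under my reading of the model that cost E is paid on the P N → Err step and cost 0 on the 1 − p return edges:

```python
N, p, q = 2, 0.01, 16 / 65024             # slide 12's parameters
r, E = 0.002, 3600.0                      # r = 2 ms, E = 3600 s

# C_fin (P k) = a[k] + b[k] * x with x = C_fin S, computed backward:
a = [0.0] * (N + 1)
b = [0.0] * (N + 1)
a[N], b[N] = p * E, 1 - p                 # P N: pay E on the Err branch
for k in range(N - 1, -1, -1):
    a[k] = p * (r + a[k + 1])             # P k: pay r and probe again ...
    b[k] = p * b[k + 1] + (1 - p)         # ... or return to S at cost 0

# From S: cost r*(N+1) to Ok, or cost r into P 0; solve for x = C_fin S.
c_fin_S = ((1 - q) * r * (N + 1) + q * (r + a[0])) / (1 - q * b[0])
```

With these numbers the result is about 0.006 s, consistent with the theorem's bound of 0.007.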
Case study: Crowds protocol
14 / 24
Crowds protocol
▸ Anonymizing protocol, introduced and analysed by Reiter & Rubin (1998)
▸ A group of nodes establishes a connection by randomly choosing another node or the final server
▸ Analysis:
  ▸ The probability that the original sender contacts a collaborating node is small
  ▸ The information a contacted collaborating node gains is small
15 / 24
[Figure, slide 16: example Crowds network with honest nodes N1–N9, collaborating nodes C1 and C2, and server S; a route is built through randomly chosen nodes before reaching S.]
16 / 24
[Figure, slide 17: trace structure S, I, M 0, M 1, M 2, M 3, M 4, E; the initiator I is drawn from N1–N7, each mix step M i from N1–N7, C1, C2.]
17 / 24
[Figure, slide 18: Crowds Markov chain for three nodes. States S, I N1, I N2, I N3, M N1, M N2, M N3, E. Probabilities: S → I Nj: pi Nj · I Nj → M Nk: 1/3 · M Nj → M Nk: pf/3 · M Nj → E: 1−pf · E → E: 1]
18 / 24
▸ Fix parameters
fixes N C :: node set and pf :: ℝ and pi :: node → ℝ
assumes 0 < pf and pf < 1
assumes N ≠ ∅ and finite N
assumes ∀n ∈ N. 0 ≤ pi n and ∑_{n∈N} pi n = 1
assumes C ≠ ∅ and C ⊂ N and ∀c ∈ C. pi c = 0
▸ Define state space
datatype α c-state = S | I α | M α | E
Ω = {S} ∪ {I n | n ∈ N \ C} ∪ {M n | n ∈ N} ∪ {E}
19 / 24
▸ Define the transition function
τ S (I n) = pi n
τ (I n) (M n′) = 1/|N|
τ (M n) (M n′) = pf/|N|
τ (M n) E = 1 − pf
τ E E = 1
τ _ _ = 0
▸ Prove the Markov chain property
theorem markov-chain Ω τ
20 / 24
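As with ZeroConf, this transition function can be mirrored in code and sanity-checked for stochasticity. A sketch with an illustrative node set of six nodes, two collaborators, and a uniform pi on the honest nodes (all stand-in values, not from the talk):

```python
nodes = set(range(6))                     # illustrative node set N
collab = {4, 5}                           # collaborating nodes C ⊂ N
honest = nodes - collab
pf = 0.8
pi = {n: 1 / len(honest) if n in honest else 0.0 for n in nodes}

states = ['S', 'E'] + [('I', n) for n in honest] + [('M', n) for n in nodes]

def tau(s, t):
    """Transition probability, mirroring the slide's equations."""
    if s == 'S':                          # S -> I n with probability pi n
        return pi[t[1]] if isinstance(t, tuple) and t[0] == 'I' else 0
    if s == 'E':                          # E is absorbing
        return 1 if t == 'E' else 0
    kind, _ = s                           # s = ('I', n) or ('M', n)
    if isinstance(t, tuple) and t[0] == 'M':
        return 1 / len(nodes) if kind == 'I' else pf / len(nodes)
    if kind == 'M' and t == 'E':
        return 1 - pf                     # deliver to the server
    return 0

rows_stochastic = all(
    abs(sum(tau(s, t) for t in states) - 1) < 1e-12 for s in states
)
```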
▸ We introduce some random variables:
  init – the initiating node
  last-ncoll – the first node contacting a collaborating node
  hit – true if a collaborating node is contacted
▸ Probability that the initiating node contacts a collaborating node:
theorem Pr_S(ω. init ω = last-ncoll ω | hit ω) = 1 − ((|N\C| − 1)/|N|) · pf
▸ Information the collaborating nodes gain when contacted:
theorem I_hit(init; last-ncoll) ≤ (1 − ((|N\C| − 1)/|N|) · pf) · log₂ |N\C|
21 / 24
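The first theorem lends itself to a direct simulation check. The Monte Carlo sketch below forwards a message as in the protocol (uniform first hop, then keep forwarding with probability pf) and compares the conditional frequency of init = last-ncoll given hit against the closed form; n = 10 nodes, 2 collaborators, pf = 0.8, and a uniform initiator distribution pi are all illustrative choices, not the talk's:

```python
import random

random.seed(0)

n, c, pf = 10, 2, 0.8
nodes = list(range(n))
collab = set(range(c))                    # the c collaborating nodes
honest = [x for x in nodes if x not in collab]

def run_once():
    """One protocol run: returns (hit, last-ncoll == init)."""
    init = random.choice(honest)          # uniform pi over honest nodes
    holder = init
    while True:
        nxt = random.choice(nodes)        # forward to a uniform node
        if nxt in collab:                 # collaborator contacted by holder
            return True, holder == init
        holder = nxt
        if random.random() >= pf:         # message delivered to the server
            return False, False

hits = same = 0
for _ in range(200_000):
    h, s = run_once()
    hits += h
    same += s

estimate = same / hits
exact = 1 - (len(honest) - 1) / n * pf    # the theorem's right-hand side
```

With 200,000 runs the estimate should agree with the closed form to well within a couple of percentage points.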
Related Work: probability theory in ITPs
▸ Probability space of Boolean sequences ℕ → {0, 1}:
  Hurd (2002), Hasan et al. (2009), Liu et al. (2011)
▸ Expectation and information theory (discrete, finite spaces):
  Coble (2009)
▸ Formalization of pGCL (probabilistic & non-deterministic language):
  Hurd et al. (2005), Audebaud & Paulin-Mohring (2009)
23 / 24
Summary & Future Work
▸ Markov chains with probability, expectation, and information
▸ ZeroConf protocol: a few days; ≈ 300 lines of theory
▸ Crowds anonymity: a few weeks; ≈ 1,100 lines of theory
▸ Compare: ≈ 20,600 lines of theory for probability theory
Future Work:
▸ More Markov models (MDPs, CTMCs, CTMDPs, PTAs)
▸ Certification of probabilistic model checker results
▸ Specification language
Slides available at: http://www.in.tum.de/~hoelzl
24 / 24