arXiv:2010.15596v3 [cs.LO] 25 Oct 2021
Verification of Patterns

— Yong Wang —
Contents

1 Introduction
2 Truly Concurrent Process Algebra
2.1 Basic Algebra for True Concurrency
2.2 Algebra for Parallelism in True Concurrency
2.3 Recursion
2.4 Abstraction
2.5 Placeholder
2.6 State and Race Condition
2.7 Asynchronous Communication
2.8 Applications
3 Verification of Architectural Patterns
3.1 From Mud to Structure
3.1.1 Verification of the Layers Pattern
3.1.2 Verification of the Pipes and Filters Pattern
3.1.3 Verification of the Blackboard Pattern
3.2 Distributed Systems
3.2.1 Verification of the Broker Pattern
3.3 Interactive Systems
3.3.1 Verification of the MVC Pattern
3.3.2 Verification of the PAC Pattern
3.4 Adaptable Systems
3.4.1 Verification of the Microkernel Pattern
3.4.2 Verification of the Reflection Pattern
4 Verification of Design Patterns
4.1 Structural Decomposition
4.1.1 Verification of the Whole-Part Pattern
4.2 Organization of Work
4.2.1 Verification of the Master-Slave Pattern
4.3 Access Control
4.3.1 Verification of the Proxy Pattern
4.4 Management
4.4.1 Verification of the Command Processor Pattern
4.4.2 Verification of the View Handler Pattern
4.5 Communication
4.5.1 Verification of the Forwarder-Receiver Pattern
4.5.2 Verification of the Client-Dispatcher-Server Pattern
4.5.3 Verification of the Publisher-Subscriber Pattern
5 Verification of Idioms
5.1 Verification of the Singleton Pattern
5.2 Verification of the Counted Pointer Pattern
6 Verification of Patterns for Concurrent and Networked Objects
6.1 Service Access and Configuration Patterns
6.1.1 Verification of the Wrapper Facade Pattern
6.1.2 Verification of the Component Configurator Pattern
6.1.3 Verification of the Interceptor Pattern
6.1.4 Verification of the Extension Interface Pattern
6.2 Event Handling Patterns
6.2.1 Verification of the Reactor Pattern
6.2.2 Verification of the Proactor Pattern
6.2.3 Verification of the Asynchronous Completion Token Pattern
6.2.4 Verification of the Acceptor-Connector Pattern
6.3 Synchronization Patterns
6.3.1 Verification of the Scoped Locking Pattern
6.3.2 Verification of the Strategized Locking Pattern
6.3.3 Verification of the Double-Checked Locking Optimization Pattern
6.4 Concurrency Patterns
6.4.1 Verification of the Active Object Pattern
6.4.2 Verification of the Monitor Object Pattern
6.4.3 Verification of the Half-Sync/Half-Async Pattern
6.4.4 Verification of the Leader/Followers Pattern
6.4.5 Verification of the Thread-Specific Storage Pattern
7 Verification of Patterns for Resource Management
7.1 Resource Acquisition
7.1.1 Verification of the Lookup Pattern
7.1.2 Verification of the Lazy Acquisition Pattern
7.1.3 Verification of the Eager Acquisition Pattern
7.1.4 Verification of the Partial Acquisition Pattern
7.2 Resource Lifecycle
7.2.1 Verification of the Caching Pattern
7.2.2 Verification of the Pooling Pattern
7.2.3 Verification of the Coordinator Pattern
7.2.4 Verification of the Resource Lifecycle Manager Pattern
7.3 Resource Release
7.3.1 Verification of the Leasing Pattern
7.3.2 Verification of the Evictor Pattern
8 Composition of Patterns
8.1 Composition of the Layers Patterns
8.2 Composition of the PAC Patterns
8.3 Composition of Resource Management Patterns
1 Introduction
Software patterns provide building blocks for the design and implementation of software systems, and aim to move software engineering from craft experience toward science. Software patterns became famous with the introduction of design patterns [1]. Since then, patterns have been researched and developed widely and rapidly.
The series of books on pattern-oriented software architecture [2] [3] [4] [5] [6] marks a milestone in the development of software patterns. In these books, patterns are treated in the following aspects.
1. Patterns are categorized from coarse granularity to fine granularity. The coarsest-grained patterns are called architectural patterns, the medium-grained ones design patterns, and the finest-grained ones idioms. Within each granularity, patterns are further detailed and classified according to their functionalities.
2. Every pattern is described in a regular format so that it can be understood and applied easily, covering an introductory example, context, problem, solution, structure, dynamics, implementation, resolved example, and variants.
3. Besides general patterns, patterns for vertical domains are also covered, including the domains of networked objects and resource management.
4. To put the development and utilization of patterns on a scientific footing, pattern languages are discussed.
As noted in these books, a formalization of patterns and an intermediate pattern language are needed and should be developed in the future of patterns. So, in this book, we formalize software patterns according to the categories of the pattern-oriented software architecture series, and verify the correctness of patterns based on truly concurrent process algebra [7] [8] [9]. On the one hand, patterns are formalized and verified; on the other hand, truly concurrent process algebra, with its rigorous theory, can play the role of an intermediate pattern language.
This book is organized as follows.
In chapter 2, to make this book self-contained, we introduce the preliminaries of truly concurrent process algebra, including the whole theory, the modelling of race conditions and asynchronous communication, and applications.
In chapter 3, we formalize and verify the architectural patterns.
In chapter 4, we formalize and verify the design patterns.
In chapter 5, we formalize and verify the idioms.
In chapter 6, we formalize and verify the patterns for concurrent and networked objects.
In chapter 7, we formalize and verify the patterns for resource management.
In chapter 8, we show the formalization and verification of composition of patterns.
2 Truly Concurrent Process Algebra
In this chapter, we introduce the preliminaries on truly concurrent process algebra [7] [8] [9],
which is based on truly concurrent operational semantics.
APTC abstracts away the differences among concrete structures (transition systems, event structures, etc.) and studies their behavioral equivalences. It distinguishes two kinds of causality relations: the chronological order, modeled by sequential composition, and the causal order between different parallel branches, modeled by the communication merge. It also distinguishes two kinds of conflict relations: the structural conflict, modeled by alternative composition, and the conflicts between different parallel branches, which should be eliminated. Based on conservative extension, APTC has four modules: BATC (Basic Algebra for True Concurrency), APTC (Algebra for Parallelism in True Concurrency), recursion, and abstraction.
2.1 Basic Algebra for True Concurrency
BATC has sequential composition ⋅ and alternative composition + to capture chronologically ordered causality and structural conflict. The constants range over A, the set of atomic actions. The algebraic laws on ⋅ and + are sound and complete modulo the truly concurrent bisimulation equivalences (pomset bisimulation, step bisimulation, hp-bisimulation and hhp-bisimulation).
Definition 2.1 (Prime event structure with silent event). Let Λ be a fixed set of labels, ranged over by a, b, c, ⋯ and τ. A (Λ-labelled) prime event structure with silent event τ is a tuple E = ⟨E, ≤, ♯, λ⟩, where E is a denumerable set of events, including the silent event τ. Let Ê = E∖{τ}, i.e. exactly E without τ; it is obvious that τ∗ = ǫ, where ǫ is the empty event. Let λ ∶ E → Λ be a labelling function with λ(τ) = τ. And ≤, ♯ are binary relations on E, called causality and conflict respectively, such that:

1. ≤ is a partial order and ⌈e⌉ = {e′ ∈ E ∣ e′ ≤ e} is finite for all e ∈ E. It is easy to see that e ≤ τ∗ ≤ e′ means e ≤ τ ≤ ⋯ ≤ τ ≤ e′, and then e ≤ e′.

2. ♯ is irreflexive, symmetric and hereditary with respect to ≤; that is, for all e, e′, e′′ ∈ E, if e ♯ e′ ≤ e′′, then e ♯ e′′.
Then, the concepts of consistency and concurrency can be drawn from the above definition:
1. e, e′ ∈ E are consistent, denoted as e ⌢ e′, if ¬(e ♯ e′). A subset X ⊆ E is called consistent,
if e ⌢ e′ for all e, e′ ∈ X.
2. e, e′ ∈ E are concurrent, denoted as e ∥ e′, if ¬(e ≤ e′), ¬(e′ ≤ e), and ¬(e ♯ e′).
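For a finite event structure, the predicates above are directly computable. The following Python sketch (our own illustration; the event names and helper methods are not part of the theory) checks consistency, concurrency, and the conditions on configurations defined next (consistent and causally closed):

```python
from itertools import combinations

class PES:
    """A finite prime event structure: events with a causality partial
    order <= and an irreflexive, symmetric conflict relation #."""

    def __init__(self, events, causality, conflict):
        self.events = set(events)
        # reflexive closure of the given causality pairs
        self.le = set(causality) | {(e, e) for e in events}
        # symmetric closure of the given conflict pairs
        self.conflict = set(conflict) | {(b, a) for (a, b) in conflict}

    def consistent(self, e1, e2):
        # e1 ⌢ e2 iff not (e1 # e2)
        return (e1, e2) not in self.conflict

    def concurrent(self, e1, e2):
        # e1 ∥ e2 iff neither causally ordered nor in conflict
        return ((e1, e2) not in self.le and (e2, e1) not in self.le
                and self.consistent(e1, e2))

    def is_configuration(self, c):
        # a configuration is a consistent, causally closed set of events
        pairwise = all(self.consistent(a, b) for a, b in combinations(c, 2))
        closed = all(e2 in c for e1 in c for e2 in self.events
                     if (e2, e1) in self.le)
        return pairwise and closed

# a causes b; b conflicts with c; a and c are concurrent
p = PES({'a', 'b', 'c'}, causality={('a', 'b')}, conflict={('b', 'c')})
```

Here `p.is_configuration({'a', 'b'})` holds, while `{'b'}` fails (not causally closed) and `{'a', 'b', 'c'}` fails (b ♯ c).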
Definition 2.2 (Configuration). Let E be a PES. A (finite) configuration in E is a (finite) consistent subset of events C ⊆ E, closed with respect to causality (i.e. ⌈C⌉ = C). The set of finite configurations of E is denoted by C(E). We let Ĉ = C∖{τ}.

A consistent subset X ⊆ E of events can be seen as a pomset. Given X, Y ⊆ E, X ∼ Y if X and Y are isomorphic as pomsets. In the remainder of the paper, when we write C1 ∼ C2, we mean Ĉ1 ∼ Ĉ2.
No. Axiom
A1 x + y = y + x
A2 (x + y) + z = x + (y + z)
A3 x + x = x
A4 (x + y) ⋅ z = x ⋅ z + y ⋅ z
A5 (x ⋅ y) ⋅ z = x ⋅ (y ⋅ z)
Table 1: Axioms of BATC
Definition 2.3 (Pomset transitions and step). Let E be a PES, let C ∈ C(E), and let ∅ ≠ X ⊆ E. If C ∩ X = ∅ and C′ = C ∪ X ∈ C(E), then C —X→ C′ is called a pomset transition from C to C′. When the events in X are pairwise concurrent, we say that C —X→ C′ is a step.
Definition 2.4 (Pomset, step bisimulation). Let E1, E2 be PESs. A pomset bisimulation is a relation R ⊆ C(E1) × C(E2) such that if (C1, C2) ∈ R and C1 —X1→ C1′, then C2 —X2→ C2′, with X1 ⊆ E1, X2 ⊆ E2, X1 ∼ X2 and (C1′, C2′) ∈ R, and vice versa. We say that E1, E2 are pomset bisimilar, written E1 ∼p E2, if there exists a pomset bisimulation R such that (∅, ∅) ∈ R. By replacing pomset transitions with steps, we get the definition of step bisimulation. When PESs E1 and E2 are step bisimilar, we write E1 ∼s E2.
Definition 2.5 (Posetal product). Given two PESs E1, E2, the posetal product of their configurations, denoted C(E1) × C(E2), is defined as

{(C1, f, C2) ∣ C1 ∈ C(E1), C2 ∈ C(E2), f ∶ C1 → C2 an isomorphism}.

A subset R ⊆ C(E1) × C(E2) is called a posetal relation. We say that R is downward closed when, for any (C1, f, C2), (C1′, f′, C2′) ∈ C(E1) × C(E2), if (C1, f, C2) ⊆ (C1′, f′, C2′) pointwise and (C1′, f′, C2′) ∈ R, then (C1, f, C2) ∈ R.

For f ∶ X1 → X2, we define f[x1 ↦ x2] ∶ X1 ∪ {x1} → X2 ∪ {x2} by (1) f[x1 ↦ x2](z) = x2 if z = x1, and (2) f[x1 ↦ x2](z) = f(z) otherwise, where X1 ⊆ E1, X2 ⊆ E2, x1 ∈ E1, x2 ∈ E2.
Definition 2.6 ((Hereditary) history-preserving bisimulation). A history-preserving (hp-) bisimulation is a posetal relation R ⊆ C(E1) × C(E2) such that if (C1, f, C2) ∈ R and C1 —e1→ C1′, then C2 —e2→ C2′ with (C1′, f[e1 ↦ e2], C2′) ∈ R, and vice versa. E1, E2 are history-preserving (hp-)bisimilar, written E1 ∼hp E2, if there exists an hp-bisimulation R such that (∅, ∅, ∅) ∈ R.

A hereditary history-preserving (hhp-)bisimulation is a downward closed hp-bisimulation. E1, E2 are hereditary history-preserving (hhp-)bisimilar, written E1 ∼hhp E2.
In the following, let e1, e2, e1′, e2′ ∈ E, let the variables x, y, z range over the set of terms for true concurrency, and let p, q, s range over the set of closed terms. The set of axioms of BATC consists of the laws given in Table 1.

We give the operational transition rules of the operators ⋅ and + in Table 2. The predicate —e→ √ represents successful termination after execution of the event e.
e —e→ √

if x —e→ √ then x + y —e→ √
if x —e→ x′ then x + y —e→ x′
if y —e→ √ then x + y —e→ √
if y —e→ y′ then x + y —e→ y′
if x —e→ √ then x ⋅ y —e→ y
if x —e→ x′ then x ⋅ y —e→ x′ ⋅ y

Table 2: Transition rules of BATC
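The transition rules of Table 2 can be read as a recursive definition of the transition relation. The following Python sketch (our own encoding, not notation from the theory) derives the transitions of a closed BATC term; terms are nested tuples, and TICK stands for successful termination √:

```python
# Terms: ('act', e) for an atomic action e, ('plus', x, y) for x + y,
# ('seq', x, y) for x · y. TICK marks successful termination √.
TICK = 'TICK'

def transitions(t):
    """Return the set of (event, successor) pairs derivable for term t."""
    kind = t[0]
    if kind == 'act':                      # rule: e --e--> √
        return {(t[1], TICK)}
    if kind == 'plus':                     # rules for alternative composition +
        return transitions(t[1]) | transitions(t[2])
    if kind == 'seq':                      # rules for sequential composition ·
        x, y = t[1], t[2]
        result = set()
        for e, x1 in transitions(x):
            # x --e--> √ gives x·y --e--> y; x --e--> x' gives x·y --e--> x'·y
            result.add((e, y) if x1 == TICK else (e, ('seq', x1, y)))
        return result
    raise ValueError(kind)

# (a + b) · c : first a or b, then c
term = ('seq', ('plus', ('act', 'a'), ('act', 'b')), ('act', 'c'))
```

For `term`, `transitions(term)` yields the two transitions labelled a and b, each with residual c, exactly as the rules for + and ⋅ prescribe.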
Theorem 2.7 (Soundness of BATC modulo truly concurrent bisimulation equivalences). The axiomatization of BATC is sound modulo the truly concurrent bisimulation equivalences ∼p, ∼s, ∼hp and ∼hhp. That is,
1. let x and y be BATC terms. If BATC ⊢ x = y, then x ∼p y;
2. let x and y be BATC terms. If BATC ⊢ x = y, then x ∼s y;
3. let x and y be BATC terms. If BATC ⊢ x = y, then x ∼hp y;
4. let x and y be BATC terms. If BATC ⊢ x = y, then x ∼hhp y.
Theorem 2.8 (Completeness of BATC modulo truly concurrent bisimulation equivalences).
The axiomatization of BATC is complete modulo truly concurrent bisimulation equivalences ∼p,
∼s, ∼hp and ∼hhp. That is,
1. let p and q be closed BATC terms, if p ∼p q then p = q;
2. let p and q be closed BATC terms, if p ∼s q then p = q;
3. let p and q be closed BATC terms, if p ∼hp q then p = q;
4. let p and q be closed BATC terms, if p ∼hhp q then p = q.
2.2 Algebra for Parallelism in True Concurrency
APTC uses the whole parallel operator ≬ and the auxiliary binary parallel operator ∥ to model parallelism, the communication merge ∣ to model communications among different parallel branches, and the unary conflict elimination operator Θ and the binary unless operator ◁ to eliminate conflicts among different parallel branches. Since a communication may be blocked, a new constant called deadlock δ is added to A, and a new unary encapsulation operator ∂H is introduced to eliminate δ, which may exist in processes. The algebraic laws on these operators are also sound and complete modulo truly concurrent bisimulation equivalences (pomset bisimulation, step bisimulation and hp-bisimulation, but not hhp-bisimulation). Note that the parallel operator ∥ in a process cannot be eliminated by deductions on the process using the axioms of APTC; the other operators can eventually be reduced to ⋅, + and ∥. This is also why the truly concurrent bisimulations are said to give a truly concurrent semantics.
We present the axioms of APTC in Table 3, including the algebraic laws of the parallel operator ∥, the communication operator ∣, the conflict elimination operator Θ, the unless operator ◁, the encapsulation operator ∂H, the deadlock constant δ, and the whole parallel operator ≬.
We give the transition rules of APTC in Table 4; they are suitable for all the truly concurrent behavioral equivalences, including pomset bisimulation, step bisimulation, hp-bisimulation and hhp-bisimulation.
Theorem 2.9 (Soundness of APTC modulo truly concurrent bisimulation equivalences). The
axiomatization of APTC is sound modulo truly concurrent bisimulation equivalences ∼p, ∼s, and
∼hp. That is,
1. let x and y be APTC terms. If APTC ⊢ x = y, then x ∼p y;
2. let x and y be APTC terms. If APTC ⊢ x = y, then x ∼s y;
3. let x and y be APTC terms. If APTC ⊢ x = y, then x ∼hp y.
Theorem 2.10 (Completeness of APTC modulo truly concurrent bisimulation equivalences).
The axiomatization of APTC is complete modulo truly concurrent bisimulation equivalences ∼p,
∼s, and ∼hp. That is,
1. let p and q be closed APTC terms, if p ∼p q then p = q;
2. let p and q be closed APTC terms, if p ∼s q then p = q;
3. let p and q be closed APTC terms, if p ∼hp q then p = q.
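The first four rules for ∥ in Table 4 let both components of a parallel composition move together in a single step. As a minimal sketch (our own simplification: only atomic actions and ⋅ on each side, with the residual x′ ≬ y′ kept in parallel form, and + and ∣ omitted), the step semantics can be written as:

```python
TICK = 'TICK'

def trans(t):
    """Step transitions for a fragment of APTC terms (a sketch)."""
    if t[0] == 'act':                       # e --e--> √
        return {(t[1], TICK)}
    if t[0] == 'seq':                       # sequential composition ·
        return {(e, t[2]) if s == TICK else (e, ('seq', s, t[2]))
                for e, s in trans(t[1])}
    if t[0] == 'par':                       # the four ∥ rules of Table 4
        out = set()
        for e1, x1 in trans(t[1]):
            for e2, y1 in trans(t[2]):
                # both sides fire in one step labelled {e1, e2}
                if x1 == TICK and y1 == TICK:
                    succ = TICK
                elif x1 == TICK:
                    succ = y1
                elif y1 == TICK:
                    succ = x1
                else:
                    succ = ('par', x1, y1)  # the residual x' ≬ y'
                out.add((frozenset({e1, e2}), succ))
        return out
    raise ValueError(t[0])

# (a · b) ∥ c : the step {a, c} leaves the residual b
t = ('par', ('seq', ('act', 'a'), ('act', 'b')), ('act', 'c'))
```

Running `trans(t)` shows the single step {a, c} with residual b, i.e. both branches advance simultaneously rather than by interleaving.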
2.3 Recursion
To model infinite computation, recursion is introduced into APTC. In order to obtain a sound and complete theory, guarded recursion and linear recursion are needed. The corresponding axioms are RDP (Recursive Definition Principle) and RSP (Recursive Specification Principle). RDP says that the solutions of a recursive specification represent the behaviors of the specification, while RSP says that a guarded recursive specification has only one solution. They are sound with respect to APTC with guarded recursion modulo the truly concurrent bisimulation equivalences (pomset bisimulation, step bisimulation and hp-bisimulation), and complete with respect to APTC with linear recursion modulo the same equivalences. In the following, E, F, G are recursive specifications and X, Y, Z are recursion variables.

For a guarded recursive specification E of the form
No. Axiom
A6 x + δ = x
A7 δ ⋅ x = δ
P1 x ≬ y = x ∥ y + x ∣ y
P2 x ∥ y = y ∥ x
P3 (x ∥ y) ∥ z = x ∥ (y ∥ z)
P4 e1 ∥ (e2 ⋅ y) = (e1 ∥ e2) ⋅ y
P5 (e1 ⋅ x) ∥ e2 = (e1 ∥ e2) ⋅ x
P6 (e1 ⋅ x) ∥ (e2 ⋅ y) = (e1 ∥ e2) ⋅ (x ≬ y)
P7 (x + y) ∥ z = (x ∥ z) + (y ∥ z)
P8 x ∥ (y + z) = (x ∥ y) + (x ∥ z)
P9 δ ∥ x = δ
P10 x ∥ δ = δ
C11 e1 ∣ e2 = γ(e1, e2)
C12 e1 ∣ (e2 ⋅ y) = γ(e1, e2) ⋅ y
C13 (e1 ⋅ x) ∣ e2 = γ(e1, e2) ⋅ x
C14 (e1 ⋅ x) ∣ (e2 ⋅ y) = γ(e1, e2) ⋅ (x ≬ y)
C15 (x + y) ∣ z = (x ∣ z) + (y ∣ z)
C16 x ∣ (y + z) = (x ∣ y) + (x ∣ z)
C17 δ ∣ x = δ
C18 x ∣ δ = δ
CE19 Θ(e) = e
CE20 Θ(δ) = δ
CE21 Θ(x + y) = Θ(x) ◁ y + Θ(y) ◁ x
CE22 Θ(x ⋅ y) = Θ(x) ⋅ Θ(y)
CE23 Θ(x ∥ y) = ((Θ(x) ◁ y) ∥ y) + ((Θ(y) ◁ x) ∥ x)
CE24 Θ(x ∣ y) = ((Θ(x) ◁ y) ∣ y) + ((Θ(y) ◁ x) ∣ x)
U25 (♯(e1, e2)) e1 ◁ e2 = τ
U26 (♯(e1, e2), e2 ≤ e3) e1 ◁ e3 = e1
U27 (♯(e1, e2), e2 ≤ e3) e3 ◁ e1 = τ
U28 e ◁ δ = e
U29 δ ◁ e = δ
U30 (x + y) ◁ z = (x ◁ z) + (y ◁ z)
U31 (x ⋅ y) ◁ z = (x ◁ z) ⋅ (y ◁ z)
U32 (x ∥ y) ◁ z = (x ◁ z) ∥ (y ◁ z)
U33 (x ∣ y) ◁ z = (x ◁ z) ∣ (y ◁ z)
U34 x ◁ (y + z) = (x ◁ y) ◁ z
U35 x ◁ (y ⋅ z) = (x ◁ y) ◁ z
U36 x ◁ (y ∥ z) = (x ◁ y) ◁ z
U37 x ◁ (y ∣ z) = (x ◁ y) ◁ z
D1 (e ∉ H) ∂H(e) = e
D2 (e ∈ H) ∂H(e) = δ
D3 ∂H(δ) = δ
D4 ∂H(x + y) = ∂H(x) + ∂H(y)
D5 ∂H(x ⋅ y) = ∂H(x) ⋅ ∂H(y)
D6 ∂H(x ∥ y) = ∂H(x) ∥ ∂H(y)

Table 3: Axioms of APTC
if x —e1→ √ and y —e2→ √ then x ∥ y —{e1,e2}→ √
if x —e1→ x′ and y —e2→ √ then x ∥ y —{e1,e2}→ x′
if x —e1→ √ and y —e2→ y′ then x ∥ y —{e1,e2}→ y′
if x —e1→ x′ and y —e2→ y′ then x ∥ y —{e1,e2}→ x′ ≬ y′

if x —e1→ √ and y —e2→ √ then x ∣ y —γ(e1,e2)→ √
if x —e1→ x′ and y —e2→ √ then x ∣ y —γ(e1,e2)→ x′
if x —e1→ √ and y —e2→ y′ then x ∣ y —γ(e1,e2)→ y′
if x —e1→ x′ and y —e2→ y′ then x ∣ y —γ(e1,e2)→ x′ ≬ y′

if x —e1→ √ and ♯(e1, e2), then Θ(x) —e1→ √
if x —e2→ √ and ♯(e1, e2), then Θ(x) —e2→ √
if x —e1→ x′ and ♯(e1, e2), then Θ(x) —e1→ Θ(x′)
if x —e2→ x′ and ♯(e1, e2), then Θ(x) —e2→ Θ(x′)

if x —e1→ √, y ↛e2 and ♯(e1, e2), then x ◁ y —τ→ √
if x —e1→ x′, y ↛e2 and ♯(e1, e2), then x ◁ y —τ→ x′
if x —e1→ √, y ↛e3, ♯(e1, e2) and e2 ≤ e3, then x ◁ y —e1→ √
if x —e1→ x′, y ↛e3, ♯(e1, e2) and e2 ≤ e3, then x ◁ y —e1→ x′
if x —e3→ √, y ↛e2, ♯(e1, e2) and e1 ≤ e3, then x ◁ y —τ→ √
if x —e3→ x′, y ↛e2, ♯(e1, e2) and e1 ≤ e3, then x ◁ y —τ→ x′

if x —e→ √ and e ∉ H, then ∂H(x) —e→ √
if x —e→ x′ and e ∉ H, then ∂H(x) —e→ ∂H(x′)

Table 4: Transition rules of APTC
if ti(⟨X1∣E⟩, ⋯, ⟨Xn∣E⟩) —{e1,⋯,ek}→ √ then ⟨Xi∣E⟩ —{e1,⋯,ek}→ √
if ti(⟨X1∣E⟩, ⋯, ⟨Xn∣E⟩) —{e1,⋯,ek}→ y then ⟨Xi∣E⟩ —{e1,⋯,ek}→ y

Table 5: Transition rules of guarded recursion
No. Axiom
RDP ⟨Xi∣E⟩ = ti(⟨X1∣E⟩, ⋯, ⟨Xn∣E⟩) (i ∈ {1, ⋯, n})
RSP if yi = ti(y1, ⋯, yn) for i ∈ {1, ⋯, n}, then yi = ⟨Xi∣E⟩ (i ∈ {1, ⋯, n})

Table 6: Recursive definition and specification principle
X1 = t1(X1,⋯,Xn)
⋯
Xn = tn(X1,⋯,Xn)
the behavior of the solution ⟨Xi∣E⟩ for the recursion variable Xi in E, where i ∈ {1, ⋯, n}, is exactly the behavior of its right-hand side ti(X1, ⋯, Xn), which is captured by the two transition rules in Table 5.
The RDP (Recursive Definition Principle) and the RSP (Recursive Specification Principle) are
shown in Table 6.
Theorem 2.11 (Soundness of APTC with guarded recursion). Let x and y be APTC with
guarded recursion terms. If APTC with guarded recursion ⊢ x = y, then
1. x ∼s y;
2. x ∼p y;
3. x ∼hp y.
Theorem 2.12 (Completeness of APTC with linear recursion). Let p and q be closed APTC
with linear recursion terms, then,
1. if p ∼s q then p = q;
2. if p ∼p q then p = q;
3. if p ∼hp q then p = q.
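Operationally, RDP licenses unfolding a recursion variable into its right-hand side on demand. The following sketch (our own illustration; the specification and names are invented for the example) treats a guarded linear specification as a transition system, where each summand (a, Y) of X contributes a step X —a→ Y, and collects its finite traces by unfolding:

```python
def traces(spec, start, depth):
    """All action sequences of length `depth` from `start`,
    obtained by unfolding the specification as RDP permits."""
    if depth == 0:
        return [[]]
    out = []
    for action, nxt in spec[start]:
        out += [[action] + rest for rest in traces(spec, nxt, depth - 1)]
    return out

# X = a·Y + b·X,  Y = c·X  — guarded (every summand starts with an
# action) and linear (each summand is an action followed by a variable)
spec = {'X': [('a', 'Y'), ('b', 'X')], 'Y': [('c', 'X')]}
```

Since the specification is guarded, each unfolding step contributes at least one action, so the enumeration at any finite depth terminates.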
2.4 Abstraction
To abstract away internal implementations from external behaviors, a new constant τ called the silent step is added to A, and a new unary abstraction operator τI is used to rename the actions in I into τ (the resulting theory, APTC with silent step and abstraction operator, is called APTCτ). The recursive specifications are restricted to guarded linear recursion to prevent infinite τ-loops. The axioms of τ and τI are sound modulo the rooted branching truly concurrent bisimulation equivalences (several kinds of weakly truly concurrent bisimulation equivalences: rooted branching pomset bisimulation, rooted branching step bisimulation and rooted branching hp-bisimulation). To eliminate the infinite τ-loops caused by τI and obtain completeness, CFAR (Cluster Fair Abstraction Rule) is used to exclude infinite τ-loops in a constructible way.
Definition 2.13 (Weak pomset transitions and weak step). Let E be a PES, let C ∈ C(E), and let ∅ ≠ X ⊆ Ê. If C ∩ X = ∅ and Ĉ′ = Ĉ ∪ X ∈ C(E), then C ⟹X C′ is called a weak pomset transition from C to C′, where we define ⟹e ≜ —τ∗→ —e→ —τ∗→, and ⟹X ≜ —τ∗→ —e→ —τ∗→ for every e ∈ X. When the events in X are pairwise concurrent, we say that C ⟹X C′ is a weak step.
Definition 2.14 (Branching pomset, step bisimulation). Assume a special termination predicate ↓, and let √ represent a state with √ ↓. Let E1, E2 be PESs. A branching pomset bisimulation is a relation R ⊆ C(E1) × C(E2) such that:

1. if (C1, C2) ∈ R and C1 —X→ C1′, then
• either X ≡ τ∗ and (C1′, C2) ∈ R;
• or there is a sequence of (zero or more) τ-transitions C2 —τ∗→ C2⁰ such that (C1, C2⁰) ∈ R and C2⁰ ⟹X C2′ with (C1′, C2′) ∈ R;

2. if (C1, C2) ∈ R and C2 —X→ C2′, then
• either X ≡ τ∗ and (C1, C2′) ∈ R;
• or there is a sequence of (zero or more) τ-transitions C1 —τ∗→ C1⁰ such that (C1⁰, C2) ∈ R and C1⁰ ⟹X C1′ with (C1′, C2′) ∈ R;

3. if (C1, C2) ∈ R and C1 ↓, then there is a sequence of (zero or more) τ-transitions C2 —τ∗→ C2⁰ such that (C1, C2⁰) ∈ R and C2⁰ ↓;

4. if (C1, C2) ∈ R and C2 ↓, then there is a sequence of (zero or more) τ-transitions C1 —τ∗→ C1⁰ such that (C1⁰, C2) ∈ R and C1⁰ ↓.

We say that E1, E2 are branching pomset bisimilar, written E1 ≈bp E2, if there exists a branching pomset bisimulation R such that (∅, ∅) ∈ R.

By replacing pomset transitions with steps, we get the definition of branching step bisimulation. When PESs E1 and E2 are branching step bisimilar, we write E1 ≈bs E2.
Definition 2.15 (Rooted branching pomset, step bisimulation). Assume a special termination predicate ↓, and let √ represent a state with √ ↓. Let E1, E2 be PESs. A rooted branching pomset bisimulation is a relation R ⊆ C(E1) × C(E2) such that:

1. if (C1, C2) ∈ R and C1 —X→ C1′, then C2 —X→ C2′ with C1′ ≈bp C2′;
2. if (C1, C2) ∈ R and C2 —X→ C2′, then C1 —X→ C1′ with C1′ ≈bp C2′;
3. if (C1, C2) ∈ R and C1 ↓, then C2 ↓;
4. if (C1, C2) ∈ R and C2 ↓, then C1 ↓.

We say that E1, E2 are rooted branching pomset bisimilar, written E1 ≈rbp E2, if there exists a rooted branching pomset bisimulation R such that (∅, ∅) ∈ R.

By replacing pomset transitions with steps, we get the definition of rooted branching step bisimulation. When PESs E1 and E2 are rooted branching step bisimilar, we write E1 ≈rbs E2.
Definition 2.16 (Branching (hereditary) history-preserving bisimulation). Assume a special termination predicate ↓, and let √ represent a state with √ ↓. A branching history-preserving (hp-) bisimulation is a weakly posetal relation R ⊆ C(E1) × C(E2) such that:

1. if (C1, f, C2) ∈ R and C1 —e1→ C1′, then
• either e1 ≡ τ and (C1′, f[e1 ↦ τ], C2) ∈ R;
• or there is a sequence of (zero or more) τ-transitions C2 —τ∗→ C2⁰ such that (C1, f, C2⁰) ∈ R and C2⁰ —e2→ C2′ with (C1′, f[e1 ↦ e2], C2′) ∈ R;

2. if (C1, f, C2) ∈ R and C2 —e2→ C2′, then
• either e2 ≡ τ and (C1, f[e2 ↦ τ], C2′) ∈ R;
• or there is a sequence of (zero or more) τ-transitions C1 —τ∗→ C1⁰ such that (C1⁰, f, C2) ∈ R and C1⁰ —e1→ C1′ with (C1′, f[e2 ↦ e1], C2′) ∈ R;

3. if (C1, f, C2) ∈ R and C1 ↓, then there is a sequence of (zero or more) τ-transitions C2 —τ∗→ C2⁰ such that (C1, f, C2⁰) ∈ R and C2⁰ ↓;

4. if (C1, f, C2) ∈ R and C2 ↓, then there is a sequence of (zero or more) τ-transitions C1 —τ∗→ C1⁰ such that (C1⁰, f, C2) ∈ R and C1⁰ ↓.

E1, E2 are branching history-preserving (hp-)bisimilar, written E1 ≈bhp E2, if there exists a branching hp-bisimulation R such that (∅, ∅, ∅) ∈ R.

A branching hereditary history-preserving (hhp-)bisimulation is a downward closed branching hp-bisimulation. E1, E2 are branching hereditary history-preserving (hhp-)bisimilar, written E1 ≈bhhp E2.
Definition 2.17 (Rooted branching (hereditary) history-preserving bisimulation). Assume a special termination predicate ↓, and let √ represent a state with √ ↓. A rooted branching history-preserving (hp-) bisimulation is a weakly posetal relation R ⊆ C(E1) × C(E2) such that:
No. Axiom
B1 e ⋅ τ = e
B2 e ⋅ (τ ⋅ (x + y) + x) = e ⋅ (x + y)
B3 x ∥ τ = x
TI1 (e ∉ I) τI(e) = e
TI2 (e ∈ I) τI(e) = τ
TI3 τI(δ) = δ
TI4 τI(x + y) = τI(x) + τI(y)
TI5 τI(x ⋅ y) = τI(x) ⋅ τI(y)
TI6 τI(x ∥ y) = τI(x) ∥ τI(y)
CFAR If X is in a cluster for I with exits {(a11 ∥ ⋯ ∥ a1i)Y1, ⋯, (am1 ∥ ⋯ ∥ ami)Ym, b11 ∥ ⋯ ∥ b1j, ⋯, bn1 ∥ ⋯ ∥ bnj}, then τ ⋅ τI(⟨X∣E⟩) = τ ⋅ τI((a11 ∥ ⋯ ∥ a1i)⟨Y1∣E⟩ + ⋯ + (am1 ∥ ⋯ ∥ ami)⟨Ym∣E⟩ + b11 ∥ ⋯ ∥ b1j + ⋯ + bn1 ∥ ⋯ ∥ bnj)

Table 7: Axioms of APTCτ
τ —τ→ √

if x —e→ √ and e ∉ I, then τI(x) —e→ √
if x —e→ x′ and e ∉ I, then τI(x) —e→ τI(x′)
if x —e→ √ and e ∈ I, then τI(x) —τ→ √
if x —e→ x′ and e ∈ I, then τI(x) —τ→ τI(x′)

Table 8: Transition rules of APTCτ
1. if (C1, f, C2) ∈ R and C1 —e1→ C1′, then C2 —e2→ C2′ with C1′ ≈bhp C2′;
2. if (C1, f, C2) ∈ R and C2 —e2→ C2′, then C1 —e1→ C1′ with C1′ ≈bhp C2′;
3. if (C1, f, C2) ∈ R and C1 ↓, then C2 ↓;
4. if (C1, f, C2) ∈ R and C2 ↓, then C1 ↓.

E1, E2 are rooted branching history-preserving (hp-)bisimilar, written E1 ≈rbhp E2, if there exists a rooted branching hp-bisimulation R such that (∅, ∅, ∅) ∈ R.

A rooted branching hereditary history-preserving (hhp-)bisimulation is a downward closed rooted branching hp-bisimulation. E1, E2 are rooted branching hereditary history-preserving (hhp-)bisimilar, written E1 ≈rbhhp E2.
The axioms and transition rules of APTCτ are shown in Table 7 and Table 8.
S○ → √

Table 9: Transition rule of the shadow constant
Theorem 2.18 (Soundness of APTCτ with guarded linear recursion). Let x and y be APTCτ
with guarded linear recursion terms. If APTCτ with guarded linear recursion ⊢ x = y, then
1. x ≈rbs y;
2. x ≈rbp y;
3. x ≈rbhp y.
Theorem 2.19 (Soundness of CFAR). CFAR is sound modulo rooted branching truly concur-
rent bisimulation equivalences ≈rbs, ≈rbp and ≈rbhp.
Theorem 2.20 (Completeness of APTCτ with guarded linear recursion and CFAR). Let p
and q be closed APTCτ with guarded linear recursion and CFAR terms, then,
1. if p ≈rbs q then p = q;
2. if p ≈rbp q then p = q;
3. if p ≈rbhp q then p = q.
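The effect of TI1, TI2 and B1 on a purely sequential run can be sketched at the level of traces: actions in I are renamed to τ, and the resulting silent steps are unobservable. This trace-level view is our own simplification; it ignores the branching conditions that axiom B2 guards, so it is only faithful for deterministic sequences of actions:

```python
TAU = 'tau'

def tau_I(trace, I):
    """Apply TI1/TI2 pointwise: actions in I become the silent step τ."""
    return [TAU if e in I else e for e in trace]

def drop_silent(trace):
    """A trace-level analogue of axiom B1: τ steps are not observable."""
    return [e for e in trace if e != TAU]

# hiding the internal action 'i' in a · i · b leaves the external behavior a · b
hidden = tau_I(['a', 'i', 'b'], I={'i'})
assert drop_silent(hidden) == ['a', 'b']
```

This is exactly the intended reading of abstraction: the internal action i is first renamed to τ by τI and then disappears from the observable behavior.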
2.5 Placeholder
We introduce a constant called the shadow constant S○ to act as a placeholder, as we previously used it to deal with entanglement in quantum process algebra. The transition rule of the shadow constant S○ is shown in Table 9. The rule says that S○ can terminate successfully without executing any action.
We need to adjust the definition of guarded linear recursive specification to the following one.
Definition 2.21 (Guarded linear recursive specification). A linear recursive specification E is guarded if there does not exist an infinite sequence of τ-transitions ⟨X∣E⟩ —τ→ ⟨X′∣E⟩ —τ→ ⟨X′′∣E⟩ —τ→ ⋯, and there does not exist an infinite sequence of S○-transitions ⟨X∣E⟩ → ⟨X′∣E⟩ → ⟨X′′∣E⟩ → ⋯.
Theorem 2.22 (Conservativity of APTC with respect to the shadow constant). APTCτ with
guarded linear recursion and shadow constant is a conservative extension of APTCτ with guarded
linear recursion.
We design the axioms for the shadow constant S○ in Table 10. For S○ei, the superscript e denotes that the shadow belongs to e, and the subscript i denotes that it is the i-th shadow of e. We extend the set E to E ∪ {τ} ∪ {δ} ∪ {S○ei}.
No. Axiom
SC1 S○ ⋅ x = x
SC2 x ⋅ S○ = x
SC3 S○e ∥ e = e
SC4 e ∥ (S○e ⋅ y) = e ⋅ y
SC5 S○e ∥ (e ⋅ y) = e ⋅ y
SC6 (e ⋅ x) ∥ S○e = e ⋅ x
SC7 (S○e ⋅ x) ∥ e = e ⋅ x
SC8 (e ⋅ x) ∥ (S○e ⋅ y) = e ⋅ (x ≬ y)
SC9 (S○e ⋅ x) ∥ (e ⋅ y) = e ⋅ (x ≬ y)

Table 10: Axioms of shadow constant
A mismatch between an action and its shadows in parallelism causes deadlock, that is, e ∥ S○e′ = δ with e ≠ e′. We must make all shadows S○ei distinct, to ensure that f in hp-bisimulation is an isomorphism.
Theorem 2.23 (Soundness of the shadow constant). Let x and y be APTCτ with guarded
linear recursion and the shadow constant terms. If APTCτ with guarded linear recursion and
the shadow constant ⊢ x = y, then
1. x ≈rbs y;
2. x ≈rbp y;
3. x ≈rbhp y.
Theorem 2.24 (Completeness of the shadow constant). Let p and q be closed APTCτ with
guarded linear recursion and CFAR and the shadow constant terms, then,
1. if p ≈rbs q then p = q;
2. if p ≈rbp q then p = q;
3. if p ≈rbhp q then p = q.
With the shadow constant, we have

∂H((a ⋅ rb) ≬ wb) = ∂H((a ⋅ rb) ≬ (S○a1 ⋅ wb)) = a ⋅ cb

with H = {rb, wb} and γ(rb, wb) ≜ cb.

And we see the following example:
if x —e→ √ then λs(x) —action(s,e)→ √
if x —e→ x′ then λs(x) —action(s,e)→ λeffect(s,e)(x′)

if x —e1→ √, y ↛e2 and e1 % e2, then λs(x ∥ y) —action(s,e1)→ λeffect(s,e1)(y)
if x —e1→ x′, y ↛e2 and e1 % e2, then λs(x ∥ y) —action(s,e1)→ λeffect(s,e1)(x′ ≬ y)
if x ↛e1, y —e2→ √ and e1 % e2, then λs(x ∥ y) —action(s,e2)→ λeffect(s,e2)(x)
if x ↛e1, y —e2→ y′ and e1 % e2, then λs(x ∥ y) —action(s,e2)→ λeffect(s,e2)(x ≬ y′)

if x —e1→ √ and y —e2→ √ then λs(x ∥ y) —{action(s,e1),action(s,e2)}→ √
if x —e1→ x′ and y —e2→ √ then λs(x ∥ y) —{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(x′)
if x —e1→ √ and y —e2→ y′ then λs(x ∥ y) —{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(y′)
if x —e1→ x′ and y —e2→ y′ then λs(x ∥ y) —{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(x′ ≬ y′)

Table 11: Transition rules of the state operator
a ≬ b = a ∥ b + a ∣ b
      = a ∥ b + a ∥ b + a ∥ b + a ∣ b
      = a ∥ (S○a1 ⋅ b) + (S○b1 ⋅ a) ∥ b + a ∥ b + a ∣ b
      = (a ∥ S○a1) ⋅ b + (S○b1 ∥ b) ⋅ a + a ∥ b + a ∣ b
      = a ⋅ b + b ⋅ a + a ∥ b + a ∣ b
As we can see, the parallelism contains both interleaving (a ⋅ b + b ⋅ a) and true concurrency (a ∥ b + a ∣ b). This may be why true concurrency is called true concurrency.
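The expansion above can be illustrated by enumerating the possible step traces of two concurrent atomic actions: the two interleavings a ⋅ b and b ⋅ a, plus the simultaneous step a ∥ b. A small Python sketch (our own encoding, purely illustrative; `step_traces` is a hypothetical name):

```python
# Enumerate the "step traces" of two concurrent atomic actions: either
# interleaved in some order, or executed as one simultaneous step. This
# mirrors the expansion a ≬ b = a.b + b.a + a∥b + a|b (illustration only).
from itertools import permutations

def step_traces(events):
    traces = {(frozenset(events),)}            # one simultaneous step
    for order in permutations(events):         # all interleavings
        traces.add(tuple(frozenset({e}) for e in order))
    return traces

ts = step_traces(["a", "b"])
# three behaviours: a then b, b then a, and {a, b} together
assert len(ts) == 3
```

A purely interleaving semantics would only produce the first two traces; the third, truly concurrent step is what the shadow-free summands a ∥ b + a ∣ b contribute.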
2.6 State and Race Condition
The state operator permits states to be described explicitly: S denotes a finite set of states, action(s, e) denotes the visible behavior of e in state s with action ∶ S × E → E, and effect(s, e) represents the state that results if e is executed in s with effect ∶ S × E → S. The state operator λs(t), which denotes the process term t in state s, is expressed by the transition rules in Table 11. Note that action and effect are extended to E ∪ {τ} by defining action(s, τ) ≜ τ and effect(s, τ) ≜ s. We use e1%e2 to denote that e1 and e2 are in a race condition.
No.  Axiom
SO1  λs(e) = action(s, e)
SO2  λs(δ) = δ
SO3  λs(x + y) = λs(x) + λs(y)
SO4  λs(e ⋅ y) = action(s, e) ⋅ λeffect(s,e)(y)
SO5  λs(x ∥ y) = λs(x) ∥ λs(y)

Table 12: Axioms of state operator
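The axioms SO1-SO5 can be read as a recursive evaluator that pushes λs through a term, renaming each event with action and threading the state with effect. The following is a small Python sketch (our own encoding, not part of APTC; the term representation and names are assumptions for illustration):

```python
# A sketch of the state operator lambda_s applying axioms SO1-SO5 to a
# small term AST. Terms: an action string, ("seq", e, t), ("alt", t1, t2),
# ("par", t1, t2); DELTA is deadlock. (Hypothetical encoding.)
DELTA = "delta"

def state_op(s, term, action, effect):
    """action(s, e) -> renamed event; effect(s, e) -> resulting state."""
    if term == DELTA:                      # SO2
        return DELTA
    if isinstance(term, str):              # SO1
        return action(s, term)
    op, l, r = term
    if op == "seq" and isinstance(l, str): # SO4: thread the new state
        return ("seq", action(s, l), state_op(effect(s, l), r, action, effect))
    if op == "alt":                        # SO3
        return ("alt", state_op(s, l, action, effect), state_op(s, r, action, effect))
    if op == "par":                        # SO5
        return ("par", state_op(s, l, action, effect), state_op(s, r, action, effect))
    raise ValueError(term)

# example: in state 0 event "e" appears as "e@0" and moves to state 1
action = lambda s, e: f"{e}@{s}"
effect = lambda s, e: s + 1
result = state_op(0, ("seq", "a", ("seq", "b", "c")), action, effect)
assert result == ("seq", "a@0", ("seq", "b@1", "c@2"))
```

Only SO4 changes the state argument, which is exactly the point of the axiom: sequential composition advances the state, while choice and parallelism distribute the same state to both branches.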
Theorem 2.25 (Conservativity of APTC with respect to the state operator). APTCτ with
guarded linear recursion and state operator is a conservative extension of APTCτ with guarded
linear recursion.
Proof. It follows from the following two facts.
1. The transition rules of APTCτ with guarded linear recursion are all source-dependent;
2. The sources of the transition rules for the state operator contain an occurrence of λs.
Theorem 2.26 (Congruence theorem of the state operator). Rooted branching truly concurrent
bisimulation equivalences ≈rbp, ≈rbs and ≈rbhp are all congruences with respect to APTCτ with
guarded linear recursion and the state operator.
Proof. (1) Case rooted branching pomset bisimulation equivalence ≈rbp.

Let x and y be APTCτ with guarded linear recursion and the state operator processes with x ≈rbp y; it is sufficient to prove that λs(x) ≈rbp λs(y). By the transition rules for the operator λs in Table 11, we can get

λs(x) −action(s,X)→ √    λs(y) −action(s,Y)→ √

with X ⊆ x, Y ⊆ y, and X ∼ Y.

Or, we can get

λs(x) −action(s,X)→ λeffect(s,X)(x′)    λs(y) −action(s,Y)→ λeffect(s,Y)(y′)

with X ⊆ x, Y ⊆ y, X ∼ Y, and the hypothesis λeffect(s,X)(x′) ≈rbp λeffect(s,Y)(y′). So, we get λs(x) ≈rbp λs(y), as desired.

(2) The cases of rooted branching step bisimulation ≈rbs and rooted branching hp-bisimulation ≈rbhp can be proven similarly; we omit them.
We design the axioms for the state operator λs in Table 12.
Theorem 2.27 (Soundness of the state operator). Let x and y be APTCτ with guarded linear
recursion and the state operator terms. If APTCτ with guarded linear recursion and the state
operator ⊢ x = y, then
1. x ≈rbs y;
2. x ≈rbp y;
3. x ≈rbhp y.
Proof. (1) Soundness of APTCτ with guarded linear recursion and the state operator with respect to rooted branching step bisimulation ≈rbs.

Since rooted branching step bisimulation ≈rbs is both an equivalence and a congruence with respect to APTCτ with guarded linear recursion and the state operator, we only need to check that each axiom in Table 12 is sound modulo rooted branching step bisimulation equivalence.

Though the transition rules in Table 11 are defined in the flavor of a single event, they can be modified into a step (a set of events within which each event is pairwise concurrent); we omit this. If we treat a single event as a step containing just one event, the proof of this soundness theorem poses no problem, so we take this view and still use the transition rules in Table 11.

We only prove soundness of the non-trivial axioms SO3-SO5, and omit the defining axioms SO1-SO2.
• Axiom SO3. Let p, q be APTCτ with guarded linear recursion and the state operator processes, and λs(p + q) = λs(p) + λs(q); it is sufficient to prove that λs(p + q) ≈rbs λs(p) + λs(q). By the transition rules for the operators + and λs in Table 11, we get

p −e1→ √
──────────────────────────
λs(p + q) −action(s,e1)→ √

p −e1→ √
──────────────────────────────
λs(p) + λs(q) −action(s,e1)→ √

q −e2→ √
──────────────────────────
λs(p + q) −action(s,e2)→ √

q −e2→ √
──────────────────────────────
λs(p) + λs(q) −action(s,e2)→ √

p −e1→ p′
──────────────────────────────────────────
λs(p + q) −action(s,e1)→ λeffect(s,e1)(p′)

p −e1→ p′
──────────────────────────────────────────────
λs(p) + λs(q) −action(s,e1)→ λeffect(s,e1)(p′)

q −e2→ q′
──────────────────────────────────────────
λs(p + q) −action(s,e2)→ λeffect(s,e2)(q′)

q −e2→ q′
──────────────────────────────────────────────
λs(p) + λs(q) −action(s,e2)→ λeffect(s,e2)(q′)

So, λs(p + q) ≈rbs λs(p) + λs(q), as desired.
• Axiom SO4. Let q be an APTCτ with guarded linear recursion and the state operator process, and λs(e ⋅ q) = action(s, e) ⋅ λeffect(s,e)(q); it is sufficient to prove that λs(e ⋅ q) ≈rbs action(s, e) ⋅ λeffect(s,e)(q). By the transition rules for the operators ⋅ and λs in Table 11, we get

e −e→ √
─────────────────────────────────────────
λs(e ⋅ q) −action(s,e)→ λeffect(s,e)(q)

action(s, e) −action(s,e)→ √
────────────────────────────────────────────────────────────
action(s, e) ⋅ λeffect(s,e)(q) −action(s,e)→ λeffect(s,e)(q)

So, λs(e ⋅ q) ≈rbs action(s, e) ⋅ λeffect(s,e)(q), as desired.
• Axiom SO5. Let p, q be APTCτ with guarded linear recursion and the state operator processes, and λs(p ∥ q) = λs(p) ∥ λs(q); it is sufficient to prove that λs(p ∥ q) ≈rbs λs(p) ∥ λs(q). By the transition rules for the operators ∥ and λs in Table 11, we get for the case ¬(e1%e2)

p −e1→ √   q −e2→ √
──────────────────────────────────────────
λs(p ∥ q) −{action(s,e1),action(s,e2)}→ √

p −e1→ √   q −e2→ √
──────────────────────────────────────────────
λs(p) ∥ λs(q) −{action(s,e1),action(s,e2)}→ √

p −e1→ p′   q −e2→ √
──────────────────────────────────────────────────────────────────────
λs(p ∥ q) −{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(p′)

p −e1→ p′   q −e2→ √
──────────────────────────────────────────────────────────────────────────
λs(p) ∥ λs(q) −{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(p′)

p −e1→ √   q −e2→ q′
──────────────────────────────────────────────────────────────────────
λs(p ∥ q) −{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(q′)

p −e1→ √   q −e2→ q′
──────────────────────────────────────────────────────────────────────────
λs(p) ∥ λs(q) −{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(q′)

p −e1→ p′   q −e2→ q′
───────────────────────────────────────────────────────────────────────────
λs(p ∥ q) −{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(p′ ≬ q′)

p −e1→ p′   q −e2→ q′
──────────────────────────────────────────────────────────────────────────────────────────────────────
λs(p) ∥ λs(q) −{action(s,e1),action(s,e2)}→ λeffect(s,e1)∪effect(s,e2)(p′) ≬ λeffect(s,e1)∪effect(s,e2)(q′)

So, with the assumption λeffect(s,e1)∪effect(s,e2)(p′ ≬ q′) = λeffect(s,e1)∪effect(s,e2)(p′) ≬ λeffect(s,e1)∪effect(s,e2)(q′), λs(p ∥ q) ≈rbs λs(p) ∥ λs(q), as desired. For the case e1%e2, we get

p −e1→ √   q −e2↛
─────────────────────────────────────────
λs(p ∥ q) −action(s,e1)→ λeffect(s,e1)(q)

p −e1→ √   q −e2↛
─────────────────────────────────────────────
λs(p) ∥ λs(q) −action(s,e1)→ λeffect(s,e1)(q)

p −e1→ p′   q −e2↛
──────────────────────────────────────────────
λs(p ∥ q) −action(s,e1)→ λeffect(s,e1)(p′ ≬ q)

p −e1→ p′   q −e2↛
────────────────────────────────────────────────────────────────
λs(p) ∥ λs(q) −action(s,e1)→ λeffect(s,e1)(p′) ≬ λeffect(s,e1)(q)

p −e1↛   q −e2→ √
─────────────────────────────────────────
λs(p ∥ q) −action(s,e2)→ λeffect(s,e2)(p)

p −e1↛   q −e2→ √
─────────────────────────────────────────────
λs(p) ∥ λs(q) −action(s,e2)→ λeffect(s,e2)(p)

p −e1↛   q −e2→ q′
──────────────────────────────────────────────
λs(p ∥ q) −action(s,e2)→ λeffect(s,e2)(p ≬ q′)

p −e1↛   q −e2→ q′
────────────────────────────────────────────────────────────────
λs(p) ∥ λs(q) −action(s,e2)→ λeffect(s,e2)(p) ≬ λeffect(s,e2)(q′)

So, with the assumption λeffect(s,e1)(p′ ≬ q) = λeffect(s,e1)(p′) ≬ λeffect(s,e1)(q) and λeffect(s,e2)(p ≬ q′) = λeffect(s,e2)(p) ≬ λeffect(s,e2)(q′), λs(p ∥ q) ≈rbs λs(p) ∥ λs(q), as desired.
(2) Soundness of APTCτ with guarded linear recursion and the state operator with respect to rooted branching pomset bisimulation ≈rbp.

Since rooted branching pomset bisimulation ≈rbp is both an equivalence and a congruence with respect to APTCτ with guarded linear recursion and the state operator, we only need to check that each axiom in Table 12 is sound modulo rooted branching pomset bisimulation ≈rbp.

From the definition of rooted branching pomset bisimulation ≈rbp (see Definition 2.15), we know that rooted branching pomset bisimulation ≈rbp is defined by weak pomset transitions, which are labeled by pomsets with τ. In a weak pomset transition, the events in the pomset are either within causality relations (defined by ⋅) or in concurrency (implicitly defined by ⋅ and +, and explicitly defined by ≬); of course, they are pairwise consistent (without conflicts). In (1), we have already proven the case that all events are pairwise concurrent, so we only need to prove the case of events in causality. Without loss of generality, we take a pomset P = {e1, e2 ∶ e1 ⋅ e2}. Then the weak pomset transition labeled by the above P is just composed of one single event transition labeled by e1 succeeded by another single event transition labeled by e2, that is, ⇒P = ⇒e1 ⇒e2.
Similarly to the proof of soundness of APTCτ with guarded linear recursion and the state operator modulo rooted branching step bisimulation ≈rbs in (1), we can prove that each axiom in Table 12 is sound modulo rooted branching pomset bisimulation ≈rbp; we omit the details.
(3) Soundness of APTCτ with guarded linear recursion and the state operator with respect to rooted branching hp-bisimulation ≈rbhp.

Since rooted branching hp-bisimulation ≈rbhp is both an equivalence and a congruence with respect to APTCτ with guarded linear recursion and the state operator, we only need to check that each axiom in Table 12 is sound modulo rooted branching hp-bisimulation ≈rbhp.

From the definition of rooted branching hp-bisimulation ≈rbhp (see Definition 2.17), we know that rooted branching hp-bisimulation ≈rbhp is defined on the weakly posetal product (C1, f, C2), where f ∶ C1 → C2 is an isomorphism relating the process term s (with configuration C1) to the process term t (with configuration C2). Initially, (C1, f, C2) = (∅, ∅, ∅), and (∅, ∅, ∅) ∈ ≈rbhp. When s −e→ s′ (C1 −e→ C1′), there will be t ⇒e t′ (C2 ⇒e C2′), and we define f′ = f[e ↦ e]. Then, if (C1, f, C2) ∈ ≈rbhp, then (C1′, f′, C2′) ∈ ≈rbhp.

Similarly to the proof of soundness of APTCτ with guarded linear recursion and the state operator modulo rooted branching pomset bisimulation equivalence in (2), we can prove that each axiom in Table 12 is sound modulo rooted branching hp-bisimulation equivalence; we just need additionally to check the above conditions on rooted branching hp-bisimulation, so we omit the details.
Theorem 2.28 (Completeness of the state operator). Let p and q be closed APTCτ with guarded
linear recursion and CFAR and the state operator terms, then,
1. if p ≈rbs q then p = q;
2. if p ≈rbp q then p = q;
3. if p ≈rbhp q then p = q.
Proof. (1) For the case of rooted branching step bisimulation, the proof is as follows.

Firstly, we know that each process term p in APTCτ with guarded linear recursion is equal to a process term ⟨X1∣E⟩ with E a guarded linear recursive specification. And we prove that if ⟨X1∣E1⟩ ≈rbs ⟨Y1∣E2⟩, then ⟨X1∣E1⟩ = ⟨Y1∣E2⟩.

Structural induction with respect to process term p can be applied. The only new case (where SO1-SO5 are needed) is p ≡ λs0(q). First assuming q = ⟨X1∣E⟩ with a guarded linear recursive specification E, we prove the case of p = λs0(⟨X1∣E⟩). Let E consist of guarded linear recursive equations

Xi = (a1i1 ∥ ⋯ ∥ aki1i1)Xi1 + ... + (a1imi ∥ ⋯ ∥ akimi imi)Ximi + b1i1 ∥ ⋯ ∥ bli1i1 + ... + b1imi ∥ ⋯ ∥ blimi imi

for i ∈ {1, ..., n}. Let F consist of guarded linear recursive equations

Yi(s) = (action(s, a1i1) ∥ ⋯ ∥ action(s, aki1i1))Yi1(effect(s, a1i1) ∪ ⋯ ∪ effect(s, aki1i1))
      + ... + (action(s, a1imi) ∥ ⋯ ∥ action(s, akimi imi))Yimi(effect(s, a1imi) ∪ ⋯ ∪ effect(s, akimi imi))
      + action(s, b1i1) ∥ ⋯ ∥ action(s, bli1i1) + ... + action(s, b1imi) ∥ ⋯ ∥ action(s, blimi imi)

for i ∈ {1, ..., n}. Then

λs(⟨Xi∣E⟩) =RDP λs((a1i1 ∥ ⋯ ∥ aki1i1)Xi1 + ... + (a1imi ∥ ⋯ ∥ akimi imi)Ximi + b1i1 ∥ ⋯ ∥ bli1i1 + ... + b1imi ∥ ⋯ ∥ blimi imi)
           =SO1-SO5 (action(s, a1i1) ∥ ⋯ ∥ action(s, aki1i1))λeffect(s,a1i1)∪⋯∪effect(s,aki1i1)(Xi1)
           + ... + (action(s, a1imi) ∥ ⋯ ∥ action(s, akimi imi))λeffect(s,a1imi)∪⋯∪effect(s,akimi imi)(Ximi)
           + action(s, b1i1) ∥ ⋯ ∥ action(s, bli1i1) + ... + action(s, b1imi) ∥ ⋯ ∥ action(s, blimi imi)

So replacing Yi(s) by λs(⟨Xi∣E⟩) for i ∈ {1, ..., n} is a solution for F. So by RSP, λs0(⟨X1∣E⟩) = ⟨Y1(s0)∣F⟩, as desired.

(2) For the case of rooted branching pomset bisimulation, it can be proven similarly to (1); we omit it.

(3) For the case of rooted branching hp-bisimulation, it can be proven similarly to (1); we omit it.
2.7 Asynchronous Communication
The communication in APTC is synchronous: for two atomic actions a, b ∈ A, if there exists a communication between a and b, then they merge into a new communication action γ(a, b); otherwise, let γ(a, b) = δ.

In asynchronous communication between actions a, b ∈ A, there is no merge γ(a, b); the communication is only defined explicitly by the causality relation a ≤ b, which ensures that the send action a is executed before the receive action b.

APTC naturally supports asynchronous communication with the following adaptations:

1. remove the communication merge operator ∣, just because there does not exist a communication merge γ(a, b) between two asynchronously communicating actions a, b ∈ A;

2. remove the asynchronously communicating actions a, b ∈ A from the set H of the encapsulation operator ∂H;

3. ensure that the send action a is executed before the receive action b, either by inserting appropriate numbers of placeholders at modeling time, or by adding a causality constraint a ≤ b between the communicating actions; all process terms violating this constraint will cause deadlock.
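The causality constraint in item 3 can be pictured with a FIFO channel: a send is always enabled, while a receive with no prior send violates a ≤ b and behaves like deadlock. A minimal Python sketch (our own illustration, not APTC; the `Channel` class is a hypothetical name):

```python
# Asynchronous communication along the lines above (illustrative):
# there is no merged action gamma(a, b); instead a FIFO channel enforces
# the causality send <= receive, and a receive with no prior send is
# treated as deadlock (delta).
from collections import deque

class Channel:
    def __init__(self):
        self.buf = deque()

    def send(self, d):      # action a: always enabled
        self.buf.append(d)

    def receive(self):      # action b: only enabled after a send
        if not self.buf:
            raise RuntimeError("delta: receive before send violates a <= b")
        return self.buf.popleft()

ch = Channel()
ch.send("datum")
assert ch.receive() == "datum"   # a executed before b: fine
try:
    ch.receive()                 # b with no preceding a: deadlock
except RuntimeError:
    pass
```

The FIFO buffer plays the role of the inserted placeholders: it decouples the send from the receive while still ordering them causally.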
2.8 Applications
APTC provides a formal framework based on truly concurrent behavioral semantics, which can be used to verify the correctness of system behaviors. In this subsection, we choose the alternating bit protocol (ABP) [10] as an example.
Figure 1: Alternating bit protocol (a Sender and a Receiver connected by channels A1, A2, B, C1, C2 and D)
The ABP protocol is used to ensure successful transmission of data through a corrupted channel. This success is based on the assumption that data can be resent an unlimited number of times, as illustrated in Figure 1; we alter it to the true concurrency situation.

1. Data elements d1, d2, d3, ⋯ from a finite set ∆ are communicated between a Sender and a Receiver.

2. If the Sender reads a datum from channel A1, then this datum is sent to the Receiver in parallel through channel A2.

3. The Sender processes the data in ∆, forms new data, and sends them to the Receiver through channel B.

4. And the Receiver sends the datum into channel C2.

5. If channel B is corrupted, the message communicated through B can turn into an error message �.

6. Every time the Receiver receives a message via channel B, it sends an acknowledgement to the Sender via channel D, which is also corrupted.

7. Finally, the Sender and the Receiver send out their outputs in parallel through channels C1 and C2.
In the truly concurrent ABP, the Sender sends its data to the Receiver, and the Receiver can also send its data to the Sender; for simplicity and without loss of generality, we assume that only the Sender sends its data and the Receiver only receives the data from the Sender. The Sender attaches a bit 0 to data elements d2k−1 and a bit 1 to data elements d2k when they are sent into channel B. When the Receiver reads a datum, it sends back the attached bit via channel D. If the Receiver receives a corrupted message, then it sends back the previous acknowledgement to the Sender.
Then the state transition of the Sender can be described by APTC as follows.

Sb = ∑_{d∈∆} rA1(d) ⋅ Tdb
Tdb = (∑_{d′∈∆} (sB(d′, b) ⋅ sC1(d′)) + sB(�)) ⋅ Udb
Udb = rD(b) ⋅ S1−b + (rD(1 − b) + rD(�)) ⋅ Tdb

where sB denotes sending data through channel B, rD denotes receiving data through channel D; similarly, rA1 means receiving data via channel A1, sC1 denotes sending data via channel C1, and b ∈ {0, 1}.

And the state transition of the Receiver can be described by APTC as follows.

Rb = ∑_{d∈∆} rA2(d) ⋅ R′b
R′b = ∑_{d′∈∆} {rB(d′, b) ⋅ sC2(d′) ⋅ Qb + rB(d′, 1 − b) ⋅ Q1−b} + rB(�) ⋅ Q1−b
Qb = (sD(b) + sD(�)) ⋅ R1−b

where rA2 denotes receiving data via channel A2, rB denotes receiving data via channel B, sC2 denotes sending data via channel C2, sD denotes sending data via channel D, and b ∈ {0, 1}.

The send action and receive action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication functions.

γ(sB(d′, b), rB(d′, b)) ≜ cB(d′, b)
γ(sB(�), rB(�)) ≜ cB(�)
γ(sD(b), rD(b)) ≜ cD(b)
γ(sD(�), rD(�)) ≜ cD(�)
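The protocol equations above can also be exercised concretely. The following is a minimal Python simulation sketch of the ABP behaviour (our own illustrative encoding, not the APTC terms; names such as `abp_run` and `lossy` are hypothetical): both channels B and D may corrupt messages, the Sender retransmits until it receives a correct acknowledgement, and every datum read is delivered exactly once, in order.

```python
# Alternating bit protocol over lossy channels (illustrative sketch):
# the sender resends (d, bit) until an uncorrupted ack equal to bit
# arrives; the receiver delivers a datum only when the attached bit
# matches its expected bit, otherwise it repeats the previous ack.
import random

ERR = "error"

def lossy(msg, p, rng):
    """Channel that corrupts a message into ERR with probability p."""
    return ERR if rng.random() < p else msg

def abp_run(data, p=0.3, seed=1):
    rng = random.Random(seed)
    delivered = []
    bit = 0        # sender's alternating bit
    expected = 0   # receiver's expected bit
    for d in data:
        while True:
            msg = lossy((d, bit), p, rng)           # channel B
            if msg is not ERR and msg[1] == expected:
                delivered.append(msg[0])            # deliver via channel C
                expected = 1 - expected
                ack = msg[1]                        # acknowledge this bit
            else:
                ack = 1 - expected                  # repeat previous ack
            ack = lossy(ack, p, rng)                # channel D
            if ack is not ERR and ack == bit:       # correct ack received
                bit = 1 - bit
                break                               # move to next datum
    return delivered

assert abp_run(["d1", "d2", "d3"]) == ["d1", "d2", "d3"]
```

Despite corruption on both channels, the externally visible behaviour is exactly "read d, then deliver d" for every datum, which is the desired external behaviour established algebraically below.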
Let R0 and S0 be in parallel; then the system R0S0 can be represented by the following process term:

τI(∂H(Θ(R0 ≬ S0))) = τI(∂H(R0 ≬ S0))

where H = {sB(d′, b), rB(d′, b), sD(b), rD(b) ∣ d′ ∈ ∆, b ∈ {0, 1}} ∪ {sB(�), rB(�), sD(�), rD(�)} and I = {cB(d′, b), cD(b) ∣ d′ ∈ ∆, b ∈ {0, 1}} ∪ {cB(�), cD(�)}.

Then we get the following conclusion.
Theorem 2.29 (Correctness of the ABP protocol). The ABP protocol τI(∂H(R0 ≬ S0)) exhibits desired external behaviors.
Proof. By use of the algebraic laws of APTC, we have the following expansions.

R0 ≬ S0 =P1 R0 ∥ S0 + R0 ∣ S0
        =RDP (∑_{d∈∆} rA2(d) ⋅ R′0) ∥ (∑_{d∈∆} rA1(d)Td0) + (∑_{d∈∆} rA2(d) ⋅ R′0) ∣ (∑_{d∈∆} rA1(d)Td0)
        =P6,C14 ∑_{d∈∆} (rA2(d) ∥ rA1(d))R′0 ≬ Td0 + δ ⋅ R′0 ≬ Td0
        =A6,A7 ∑_{d∈∆} (rA2(d) ∥ rA1(d))R′0 ≬ Td0

∂H(R0 ≬ S0) = ∂H(∑_{d∈∆} (rA2(d) ∥ rA1(d))R′0 ≬ Td0) = ∑_{d∈∆} (rA2(d) ∥ rA1(d))∂H(R′0 ≬ Td0)

Similarly, we can get the following equations.

∂H(R0 ≬ S0) = ∑_{d∈∆} (rA2(d) ∥ rA1(d)) ⋅ ∂H(Td0 ≬ R′0)
∂H(Td0 ≬ R′0) = cB(d′, 0) ⋅ (sC1(d′) ∥ sC2(d′)) ⋅ ∂H(Ud0 ≬ Q0) + cB(�) ⋅ ∂H(Ud0 ≬ Q1)
∂H(Ud0 ≬ Q1) = (cD(1) + cD(�)) ⋅ ∂H(Td0 ≬ R′0)
∂H(Q0 ≬ Ud0) = cD(0) ⋅ ∂H(R1 ≬ S1) + cD(�) ⋅ ∂H(R′1 ≬ Td0)
∂H(R′1 ≬ Td0) = (cB(d′, 0) + cB(�)) ⋅ ∂H(Q0 ≬ Ud0)
∂H(R1 ≬ S1) = ∑_{d∈∆} (rA2(d) ∥ rA1(d)) ⋅ ∂H(Td1 ≬ R′1)
∂H(Td1 ≬ R′1) = cB(d′, 1) ⋅ (sC1(d′) ∥ sC2(d′)) ⋅ ∂H(Ud1 ≬ Q1) + cB(�) ⋅ ∂H(Ud1 ≬ Q′0)
∂H(Ud1 ≬ Q′0) = (cD(0) + cD(�)) ⋅ ∂H(Td1 ≬ R′1)
∂H(Q1 ≬ Ud1) = cD(1) ⋅ ∂H(R0 ≬ S0) + cD(�) ⋅ ∂H(R′0 ≬ Td1)
∂H(R′0 ≬ Td1) = (cB(d′, 1) + cB(�)) ⋅ ∂H(Q1 ≬ Ud1)
Let ∂H(R0 ≬ S0) = ⟨X1∣E⟩, where E is the following guarded linear recursive specification:

{X1 = ∑_{d∈∆} (rA2(d) ∥ rA1(d)) ⋅ X2d,    Y1 = ∑_{d∈∆} (rA2(d) ∥ rA1(d)) ⋅ Y2d,
 X2d = cB(d′, 0) ⋅ X4d + cB(�) ⋅ X3d,    Y2d = cB(d′, 1) ⋅ Y4d + cB(�) ⋅ Y3d,
 X3d = (cD(1) + cD(�)) ⋅ X2d,    Y3d = (cD(0) + cD(�)) ⋅ Y2d,
 X4d = (sC1(d′) ∥ sC2(d′)) ⋅ X5d,    Y4d = (sC1(d′) ∥ sC2(d′)) ⋅ Y5d,
 X5d = cD(0) ⋅ Y1 + cD(�) ⋅ X6d,    Y5d = cD(1) ⋅ X1 + cD(�) ⋅ Y6d,
 X6d = (cB(d, 0) + cB(�)) ⋅ X5d,    Y6d = (cB(d, 1) + cB(�)) ⋅ Y5d
 ∣ d, d′ ∈ ∆}
Then we apply the abstraction operator τI into ⟨X1∣E⟩.

τI(⟨X1∣E⟩) = ∑_{d∈∆} (rA1(d) ∥ rA2(d)) ⋅ τI(⟨X2d∣E⟩)
           = ∑_{d∈∆} (rA1(d) ∥ rA2(d)) ⋅ τI(⟨X4d∣E⟩)
           = ∑_{d,d′∈∆} (rA1(d) ∥ rA2(d)) ⋅ (sC1(d′) ∥ sC2(d′)) ⋅ τI(⟨X5d∣E⟩)
           = ∑_{d,d′∈∆} (rA1(d) ∥ rA2(d)) ⋅ (sC1(d′) ∥ sC2(d′)) ⋅ τI(⟨Y1∣E⟩)

Similarly, we can get τI(⟨Y1∣E⟩) = ∑_{d,d′∈∆} (rA1(d) ∥ rA2(d)) ⋅ (sC1(d′) ∥ sC2(d′)) ⋅ τI(⟨X1∣E⟩).

We get τI(∂H(R0 ≬ S0)) = ∑_{d,d′∈∆} (rA1(d) ∥ rA2(d)) ⋅ (sC1(d′) ∥ sC2(d′)) ⋅ τI(∂H(R0 ≬ S0)). So, the ABP protocol τI(∂H(R0 ≬ S0)) exhibits desired external behaviors.
With the help of the shadow constant, we can now verify the traditional alternating bit protocol (ABP) [10].
The ABP protocol is used to ensure successful transmission of data through a corrupted channel. This success is based on the assumption that data can be resent an unlimited number of times, as illustrated in Figure 2; we alter it to the true concurrency situation.

1. Data elements d1, d2, d3, ⋯ from a finite set ∆ are communicated between a Sender and a Receiver.

2. The Sender reads a datum from channel A.

3. The Sender processes the data in ∆, forms new data, and sends them to the Receiver through channel B.

4. And the Receiver sends the datum into channel C.

5. If channel B is corrupted, the message communicated through B can turn into an error message �.

6. Every time the Receiver receives a message via channel B, it sends an acknowledgement to the Sender via channel D, which is also corrupted.
Figure 2: Alternating bit protocol (a Sender and a Receiver connected by channels A, B, C and D)
The Sender attaches a bit 0 to data elements d2k−1 and a bit 1 to data elements d2k, when
they are sent into channel B. When the Receiver reads a datum, it sends back the attached bit
via channel D. If the Receiver receives a corrupted message, then it sends back the previous
acknowledgement to the Sender.
Then the state transition of the Sender can be described by APTC as follows.

Sb = ∑_{d∈∆} rA(d) ⋅ Tdb
Tdb = (∑_{d′∈∆} (sB(d′, b) ⋅ S○sC(d′)) + sB(�)) ⋅ Udb
Udb = rD(b) ⋅ S1−b + (rD(1 − b) + rD(�)) ⋅ Tdb

where sB denotes sending data through channel B, rD denotes receiving data through channel D; similarly, rA means receiving data via channel A, S○sC(d′) denotes the shadow of sC(d′), and b ∈ {0, 1}.

And the state transition of the Receiver can be described by APTC as follows.

Rb = ∑_{d∈∆} S○rA(d) ⋅ R′b
R′b = ∑_{d′∈∆} {rB(d′, b) ⋅ sC(d′) ⋅ Qb + rB(d′, 1 − b) ⋅ Q1−b} + rB(�) ⋅ Q1−b
Qb = (sD(b) + sD(�)) ⋅ R1−b

where S○rA(d) denotes the shadow of rA(d), rB denotes receiving data via channel B, sC denotes sending data via channel C, sD denotes sending data via channel D, and b ∈ {0, 1}.
The send action and receive action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication functions.

γ(sB(d′, b), rB(d′, b)) ≜ cB(d′, b)
γ(sB(�), rB(�)) ≜ cB(�)
γ(sD(b), rD(b)) ≜ cD(b)
γ(sD(�), rD(�)) ≜ cD(�)

Let R0 and S0 be in parallel; then the system R0S0 can be represented by the following process term:

τI(∂H(Θ(R0 ≬ S0))) = τI(∂H(R0 ≬ S0))

where H = {sB(d′, b), rB(d′, b), sD(b), rD(b) ∣ d′ ∈ ∆, b ∈ {0, 1}} ∪ {sB(�), rB(�), sD(�), rD(�)} and I = {cB(d′, b), cD(b) ∣ d′ ∈ ∆, b ∈ {0, 1}} ∪ {cB(�), cD(�)}.

Then we get the following conclusion.
Theorem 2.30 (Correctness of the ABP protocol). The ABP protocol τI(∂H(R0 ≬ S0)) can exhibit desired external behaviors.

Proof. Similarly, we can get τI(⟨X1∣E⟩) = ∑_{d,d′∈∆} rA(d) ⋅ sC(d′) ⋅ τI(⟨Y1∣E⟩) and τI(⟨Y1∣E⟩) = ∑_{d,d′∈∆} rA(d) ⋅ sC(d′) ⋅ τI(⟨X1∣E⟩).

So, the ABP protocol τI(∂H(R0 ≬ S0)) can exhibit desired external behaviors.
Figure 3: Layer i
3 Verification of Architectural Patterns
Architectural patterns are the highest-level patterns; they present structural organizations for software systems and contain a set of subsystems and the relationships among them.

In this chapter, we verify four categories of architectural patterns. In subsection 3.1, we verify structural patterns including the Layers pattern, the Pipes and Filters pattern and the Blackboard pattern. In subsection 3.2, we verify patterns considering distribution aspects. We verify patterns that feature human-computer interaction in subsection 3.3. In subsection 3.4, we verify patterns supporting extension of applications.
3.1 From Mud to Structure
In this subsection, we verify structural patterns including the Layers pattern, the Pipes and
Filters pattern and the Blackboard pattern.
3.1.1 Verification of the Layers Pattern
The Layers pattern contains several layers, with each layer being a particular level of abstraction of subtasks. In the Layers pattern, there are only communications between adjacent layers. That is, layer i receives data (denoted dUi) from layer i + 1, processes the data (the processing function is denoted UFi) and sends the processed data (denoted UFi(dUi)) to layer i − 1; in the other direction, it receives data (denoted dLi) from layer i − 1, processes the data (the processing function is denoted LFi) and sends the processed data (denoted LFi(dLi)) to layer i + 1, as Figure 3 illustrates. The four channels are denoted UIi (the Upper Input of layer i), LOi (the Lower Output of layer i), LIi (the Lower Input of layer i) and UOi (the Upper Output of layer i), respectively.

The whole Layers pattern containing n layers is illustrated in Figure 4. Note that the numbering of layers is in a reverse order, that is, the highest layer is called layer n and the lowest layer is called layer 1.
There exist two typical processes in the Layers pattern corresponding to the two directions of data processing, as Figure 5 illustrates. One process is as follows.

1. The highest layer n receives data denoted dUn from the application through channel UIn (the corresponding reading action is denoted rUIn(dUn)), then processes the data, and sends the processed data denoted UFn(dUn) to layer n − 1 through channel LOn (the corresponding sending action is denoted sLOn(UFn(dUn)));

2. The layer i receives data denoted dUi from the layer i + 1 through channel UIi (the corresponding reading action is denoted rUIi(dUi)), then processes the data, and sends the processed data denoted UFi(dUi) to layer i − 1 through channel LOi (the corresponding sending action is denoted sLOi(UFi(dUi)));

3. The lowest layer 1 receives data denoted dU1 from the layer 2 through channel UI1 (the corresponding reading action is denoted rUI1(dU1)), then processes the data, and sends the processed data denoted UF1(dU1) to another Layers peer through channel LO1 (the corresponding sending action is denoted sLO1(UF1(dU1))).
The other process is as follows.

1. The lowest layer 1 receives data denoted dL1 from another Layers peer through channel LI1 (the corresponding reading action is denoted rLI1(dL1)), then processes the data, and sends the processed data denoted LF1(dL1) to layer 2 through channel UO1 (the corresponding sending action is denoted sUO1(LF1(dL1)));

2. The layer i receives data denoted dLi from the layer i − 1 through channel LIi (the corresponding reading action is denoted rLIi(dLi)), then processes the data, and sends the processed data denoted LFi(dLi) to layer i + 1 through channel UOi (the corresponding sending action is denoted sUOi(LFi(dLi)));

3. The highest layer n receives data denoted dLn from layer n − 1 through channel LIn (the corresponding reading action is denoted rLIn(dLn)), then processes the data, and sends the processed data denoted LFn(dLn) to the application through channel UOn (the corresponding sending action is denoted sUOn(LFn(dLn))).
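The two data flows above can be sketched concretely as function composition. The following is a minimal Python illustration (our own encoding; `downward`/`upward` and the tagging functions are hypothetical names, not part of the pattern): a datum travels down through UFn, ..., UF1 and up through LF1, ..., LFn.

```python
# The two directions of the Layers pattern as function pipelines
# (illustrative sketch): downward applies UF_n first and UF_1 last,
# upward applies LF_1 first and LF_n last.

def downward(data, ufs):
    """ufs = [UF_n, ..., UF_1]: highest layer first, lowest layer last."""
    for uf in ufs:
        data = uf(data)
    return data

def upward(data, lfs):
    """lfs = [LF_1, ..., LF_n]: lowest layer first, highest layer last."""
    for lf in lfs:
        data = lf(data)
    return data

# three layers whose processing functions just tag the datum
ufs = [lambda d, i=i: f"UF{i}({d})" for i in (3, 2, 1)]
lfs = [lambda d, i=i: f"LF{i}({d})" for i in (1, 2, 3)]
assert downward("dU", ufs) == "UF1(UF2(UF3(dU)))"
assert upward("dL", lfs) == "LF3(LF2(LF1(dL)))"
```

The nesting of the tags matches the channel structure: each layer only ever talks to its immediate neighbours, so the processing functions compose in strict layer order.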
We begin to verify the Layers pattern. We assume all data elements dUi and dLi (for 1 ≤ i ≤ n) are from a finite set ∆. The state transitions of layer i (for 1 ≤ i ≤ n) described by APTC are as follows.

Li = ∑_{dUi,dLi∈∆} (rUIi(dUi) ⋅ Li2 ≬ rLIi(dLi) ⋅ Li3)
Li2 = UFi ⋅ Li4
Li3 = LFi ⋅ Li5
Li4 = ∑_{dUi∈∆} (sLOi(UFi(dUi)) ⋅ Li)
Li5 = ∑_{dLi∈∆} (sUOi(LFi(dLi)) ⋅ Li)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication functions for 1 ≤ i ≤ n. Note that the channel LOi+1 of layer i + 1 and the channel UIi of layer i are the same channel, and the channel LIi+1 of layer i + 1 and the channel UOi of layer i are the same channel. Also, the data dLi+1 of layer i + 1 and the data LFi(dLi) of layer i are the same data, and the data UFi+1(dUi+1) of layer i + 1 and the data dUi of layer i are the same data.

γ(rUIi(dUi), sLOi+1(UFi+1(dUi+1))) ≜ cUIi(dUi)
γ(rLIi(dLi), sUOi−1(LFi−1(dLi−1))) ≜ cLIi(dLi)
γ(rUIi−1(dUi−1), sLOi(UFi(dUi))) ≜ cUIi−1(dUi−1)
γ(rLIi+1(dLi+1), sUOi(LFi(dLi))) ≜ cLIi+1(dLi+1)

Note that, for the layer n, there are only two communication functions as follows.

γ(rLIn(dLn), sUOn−1(LFn−1(dLn−1))) ≜ cLIn(dLn)
γ(rUIn−1(dUn−1), sLOn(UFn(dUn))) ≜ cUIn−1(dUn−1)

And for the layer 1, there are also only two communication functions as follows.

γ(rUI1(dU1), sLO2(UF2(dU2))) ≜ cUI1(dU1)
γ(rLI2(dL2), sUO1(LF1(dL1))) ≜ cLI2(dL2)
Let all layers from layer n to layer 1 be in parallel; then the Layers pattern Ln⋯Li⋯L1 can be presented by the following process term:

τI(∂H(Θ(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1))) = τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1))

where H = {rUI1(dU1), sUO1(LF1(dL1)), ⋯, rUIi(dUi), rLIi(dLi), sLOi(UFi(dUi)), sUOi(LFi(dLi)), ⋯, rLIn(dLn), sLOn(UFn(dUn)) ∣ dU1, dL1, ⋯, dUi, dLi, ⋯, dUn, dLn ∈ ∆},
I = {cUI1(dU1), cLI2(dL2), ⋯, cUIi(dUi), cLIi(dLi), cUIi−1(dUi−1), cLIi+1(dLi+1), ⋯, cLIn(dLn), cUIn−1(dUn−1), LF1, UF1, ⋯, LFi, UFi, ⋯, LFn, UFn ∣ dU1, dL1, ⋯, dUi, dLi, ⋯, dUn, dLn ∈ ∆}.

Then we get the following conclusion on the Layers pattern.
Theorem 3.1 (Correctness of the Layers pattern). The Layers pattern τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1)) can exhibit desired external behaviors.

Proof. Based on the above state transitions of layer i (for 1 ≤ i ≤ n), by use of the algebraic laws of APTC, we can prove that

τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1)) = ∑_{dU1,dL1,dUn,dLn∈∆} ((rUIn(dUn) ∥ rLI1(dL1)) ⋅ (sUOn(LFn(dLn)) ∥ sLO1(UF1(dU1)))) ⋅ τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1)),

that is, the Layers pattern τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1)) can exhibit desired external behaviors.

For the details of the proof, please refer to subsection 2.8; we omit them here.
Two Layers pattern peers can be composed together, just by linking the lower output of layer 1 of one peer with the lower input of layer 1 of the other peer, and vice versa, as Figure 6 illustrates.

There are also two typical data processing processes in the composition of two Layers peers, as Figure 7 shows. One process is data transferred from peer P to another peer P′ as follows.
1. The highest layer n of peer P receives data denoted dUn from the application of peer P through channel UIn (the corresponding reading action is denoted rUIn(dUn)), then processes the data, and sends the processed data denoted UFn(dUn) to layer n − 1 of peer P through channel LOn (the corresponding sending action is denoted sLOn(UFn(dUn)));

2. The layer i of peer P receives data denoted dUi from the layer i + 1 of peer P through channel UIi (the corresponding reading action is denoted rUIi(dUi)), then processes the data, and sends the processed data denoted UFi(dUi) to layer i − 1 through channel LOi (the corresponding sending action is denoted sLOi(UFi(dUi)));

3. The lowest layer 1 of peer P receives data denoted dU1 from the layer 2 of peer P through channel UI1 (the corresponding reading action is denoted rUI1(dU1)), then processes the data, and sends the processed data denoted UF1(dU1) to the other Layers peer P′ through channel LO1 (the corresponding sending action is denoted sLO1(UF1(dU1)));

4. The lowest layer 1′ of peer P′ receives data denoted dL1′ from the other Layers peer P through channel LI1′ (the corresponding reading action is denoted rLI1′(dL1′)), then processes the data, and sends the processed data denoted LF1′(dL1′) to layer 2 of P′ through channel UO1′ (the corresponding sending action is denoted sUO1′(LF1′(dL1′)));

5. The layer i′ of peer P′ receives data denoted dLi′ from the layer i′ − 1 of peer P′ through channel LIi′ (the corresponding reading action is denoted rLIi′(dLi′)), then processes the data, and sends the processed data denoted LFi′(dLi′) to layer i′ + 1 of peer P′ through channel UOi′ (the corresponding sending action is denoted sUOi′(LFi′(dLi′)));

6. The highest layer n′ of peer P′ receives data denoted dLn′ from layer n′ − 1 of peer P′ through channel LIn′ (the corresponding reading action is denoted rLIn′(dLn′)), then processes the data, and sends the processed data denoted LFn′(dLn′) to the application of peer P′ through channel UOn′ (the corresponding sending action is denoted sUOn′(LFn′(dLn′))).
The other process, data transferred from peer P′ to peer P, is similar; we omit it.
The verification of two layers peers is as follows.
We also assume all data elements dUi, dLi, dUi′ and dLi′ (for 1 ≤ i, i′ ≤ n) are from a finite set ∆. The state transitions of layer i (for 1 ≤ i ≤ n) described by APTC are as follows.

Li = ∑_{dUi,dLi∈∆} (rUIi(dUi) ⋅ Li2 ≬ rLIi(dLi) ⋅ Li3)
Li2 = UFi ⋅ Li4
Li3 = LFi ⋅ Li5
Li4 = ∑_{dUi∈∆} (sLOi(UFi(dUi)) ⋅ Li)
Li5 = ∑_{dLi∈∆} (sUOi(LFi(dLi)) ⋅ Li)
The state transitions of layer i′ (for 1 ≤ i′ ≤ n) described by APTC are as follows.

Li′ = ∑_{dUi′,dLi′∈∆} (rUIi′(dUi′) ⋅ Li′2 ≬ rLIi′(dLi′) ⋅ Li′3)
Li′2 = UFi′ ⋅ Li′4
Li′3 = LFi′ ⋅ Li′5
Li′4 = ∑_{dUi′∈∆} (sLOi′(UFi′(dUi′)) ⋅ Li′)
Li′5 = ∑_{dLi′∈∆} (sUOi′(LFi′(dLi′)) ⋅ Li′)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions for 1 ≤ i ≤ n and 1 ≤ i′ ≤ n. Note that the channel LOi+1 of layer i + 1 and the channel UIi of layer i are the same channel, and the channel LIi+1 of layer i + 1 and the channel UOi of layer i are the same channel; similarly, the channel LOi′+1 of layer i′ + 1 and the channel UIi′ of layer i′ are the same channel, and the channel LIi′+1 of layer i′ + 1 and the channel UOi′ of layer i′ are the same channel. Also, the data dLi+1 of layer i + 1 and the data LFi(dLi) of layer i are the same data, and the data UFi+1(dUi+1) of layer i + 1 and the data dUi of layer i are the same data; the data dLi′+1 of layer i′ + 1 and the data LFi′(dLi′) of layer i′ are the same data, and the data UFi′+1(dUi′+1) of layer i′ + 1 and the data dUi′ of layer i′ are the same data.
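The communication functions that follow all apply one pairing rule: a read and a send on the same channel carrying the same data merge into a communication action, and any mismatch yields the deadlock δ. A minimal Python sketch of this rule (the tuple encoding of actions is assumed here for illustration only):

```python
# Hypothetical encoding of actions as tuples ("r"/"s"/"c", channel, data);
# gamma merges a matching read/send pair into a communication action and
# returns the deadlock constant delta otherwise.

DELTA = ("delta",)

def gamma(read, send):
    r_kind, r_ch, r_data = read
    s_kind, s_ch, s_data = send
    if r_kind == "r" and s_kind == "s" and r_ch == s_ch and r_data == s_data:
        return ("c", r_ch, r_data)    # e.g. cUI_i(dU_i)
    return DELTA                      # mismatched channel or data: deadlock

ok = gamma(("r", "UI1", "dU1"), ("s", "UI1", "dU1"))
bad = gamma(("r", "UI1", "dU1"), ("s", "UI2", "dU1"))
```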
γ(rUIi(dUi), sLOi+1(UFi+1(dUi+1))) ≜ cUIi(dUi)

γ(rLIi(dLi), sUOi−1(LFi−1(dLi−1))) ≜ cLIi(dLi)

γ(rUIi−1(dUi−1), sLOi(UFi(dUi))) ≜ cUIi−1(dUi−1)

γ(rLIi+1(dLi+1), sUOi(LFi(dLi))) ≜ cLIi+1(dLi+1)

γ(rUIi′(dUi′), sLOi′+1(UFi′+1(dUi′+1))) ≜ cUIi′(dUi′)

γ(rLIi′(dLi′), sUOi′−1(LFi′−1(dLi′−1))) ≜ cLIi′(dLi′)

γ(rUIi′−1(dUi′−1), sLOi′(UFi′(dUi′))) ≜ cUIi′−1(dUi′−1)

γ(rLIi′+1(dLi′+1), sUOi′(LFi′(dLi′))) ≜ cLIi′+1(dLi′+1)
Note that, for the layer n, there are only two communication functions as follows.
γ(rLIn(dLn), sUOn−1(LFn−1(dLn−1))) ≜ cLIn(dLn)

γ(rUIn−1(dUn−1), sLOn(UFn(dUn))) ≜ cUIn−1(dUn−1)
For the layer n′, there are only two communication functions as follows.
γ(rLIn′(dLn′), sUOn′−1(LFn′−1(dLn′−1))) ≜ cLIn′(dLn′)

γ(rUIn′−1(dUn′−1), sLOn′(UFn′(dUn′))) ≜ cUIn′−1(dUn′−1)
For the layer 1, there are four communication functions as follows.
γ(rUI1(dU1), sLO2(UF2(dU2))) ≜ cUI1(dU1)

γ(rLI2(dL2), sUO1(LF1(dL1))) ≜ cLI2(dL2)

γ(rLI1(dL1), sLO1′(UF1′(dU1′))) ≜ cLI1(dL1)

γ(rLI1′(dL1′), sLO1(UF1(dU1))) ≜ cLI1′(dL1′)
And for the layer 1′, there are four communication functions as follows.
γ(rUI1′(dU1′), sLO2′(UF2′(dU2′))) ≜ cUI1′(dU1′)

γ(rLI2′(dL2′), sUO1′(LF1′(dL1′))) ≜ cLI2′(dL2′)

γ(rLI1(dL1), sLO1′(UF1′(dU1′))) ≜ cLI1(dL1)

γ(rLI1′(dL1′), sLO1(UF1(dU1))) ≜ cLI1′(dL1′)
Let all layers from layer n to layer 1 and from layer 1′ to layer n′ be in parallel; then the Layers pattern Ln⋯Li⋯L1 L1′⋯Li′⋯Ln′ can be presented by the following process term.
τI(∂H(Θ(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1 ≬ L1′ ≬ ⋯ ≬ Li′ ≬ ⋯ ≬ Ln′))) = τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1 ≬ L1′ ≬ ⋯ ≬ Li′ ≬ ⋯ ≬ Ln′))

where H = {rLI1(dL1), sLO1(UF1(dU1)), rUI1(dU1), sUO1(LF1(dL1)), ⋯, rUIi(dUi), rLIi(dLi), sLOi(UFi(dUi)), sUOi(LFi(dLi)), ⋯, rLIn(dLn), sLOn(UFn(dUn)), rLI1′(dL1′), sLO1′(UF1′(dU1′)), rUI1′(dU1′), sUO1′(LF1′(dL1′)), ⋯, rUIi′(dUi′), rLIi′(dLi′), sLOi′(UFi′(dUi′)), sUOi′(LFi′(dLi′)), ⋯, rLIn′(dLn′), sLOn′(UFn′(dUn′)) ∣ dU1, dL1, ⋯, dUi, dLi, ⋯, dUn, dLn, dU1′, dL1′, ⋯, dUi′, dLi′, ⋯, dUn′, dLn′ ∈ ∆},

I = {cUI1(dU1), cLI1(dL1), cLI2(dL2), ⋯, cUIi(dUi), cLIi(dLi), cUIi−1(dUi−1), cLIi+1(dLi+1), ⋯, cLIn(dLn), cUIn−1(dUn−1), LF1, UF1, ⋯, LFi, UFi, ⋯, LFn, UFn, cUI1′(dU1′), cLI1′(dL1′), cLI2′(dL2′), ⋯, cUIi′(dUi′), cLIi′(dLi′), cUIi′−1(dUi′−1), cLIi′+1(dLi′+1), ⋯, cLIn′(dLn′), cUIn′−1(dUn′−1), LF1′, UF1′, ⋯, LFi′, UFi′, ⋯, LFn′, UFn′ ∣ dU1, dL1, ⋯, dUi, dLi, ⋯, dUn, dLn, dU1′, dL1′, ⋯, dUi′, dLi′, ⋯, dUn′, dLn′ ∈ ∆}.
Then we get the following conclusion on the Layers pattern.
Theorem 3.2 (Correctness of two layers peers). The two layers peers τI(∂H(Ln ≬ ⋯ ≬ Li ≬
⋯≬ L1 ≬ L1′ ≬⋯ ≬ Li′ ≬ ⋯≬ Ln′)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of layer i and i′ (for 1 ≤ i, i′ ≤ n), by use of the
algebraic laws of APTC, we can prove that
τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1 ≬ L1′ ≬ ⋯ ≬ Li′ ≬ ⋯ ≬ Ln′)) = ∑dUn,dLn,dUn′,dLn′∈∆((rUIn(dUn) ∥ rUIn′(dUn′)) ⋅ (sUOn(LFn(dLn)) ∥ sUOn′(LFn′(dLn′)))) ⋅ τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1 ≬ L1′ ≬ ⋯ ≬ Li′ ≬ ⋯ ≬ Ln′)),

that is, the two layers peers τI(∂H(Ln ≬ ⋯ ≬ Li ≬ ⋯ ≬ L1 ≬ L1′ ≬ ⋯ ≬ Li′ ≬ ⋯ ≬ Ln′)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
There exists another composition of two layers peers, in which there are communications between the two peers' peer layers, called virtual communications. Virtual communications are specified by communication protocols, and we assume data are transferred between the peer layers through virtual communications. The two typical processes are illustrated in Figure 8. The process from peer P to peer P′ is as follows.
1. The highest layer n of peer P receives data from the application of peer P, which is denoted dUn, through channel UIn (the corresponding reading action is denoted rUIn(dUn)), then processes the data, and sends the processed data, denoted UFn(dUn), to layer n − 1 of peer P through channel LOn (the corresponding sending action is denoted sLOn(UFn(dUn)));
2. The layer i of peer P receives data from the layer i + 1 of peer P, which is denoted dUi, through channel UIi (the corresponding reading action is denoted rUIi(dUi)), then processes the data, and sends the processed data, denoted UFi(dUi), to the peer layer of peer P′ through channel LOi (the corresponding sending action is denoted sLOi(UFi(dUi)));
3. The layer i′ of peer P′ receives data from the layer i of peer P, which is denoted dLi′, through channel LIi′ (the corresponding reading action is denoted rLIi′(dLi′)), then processes the data, and sends the processed data, denoted LFi′(dLi′), to layer i′ + 1 of peer P′ through channel UOi′ (the corresponding sending action is denoted sUOi′(LFi′(dLi′)));
4. The highest layer n′ of peer P′ receives data from layer n′ − 1 of peer P′, which is denoted dLn′, through channel LIn′ (the corresponding reading action is denoted rLIn′(dLn′)), then processes the data, and sends the processed data, denoted LFn′(dLn′), to the application of peer P′ through channel UOn′ (the corresponding sending action is denoted sUOn′(LFn′(dLn′))).
The other process, data transferred from P′ to P, is similar, and we omit it here.

The verification of the two layers peers communicating through virtual communication is as follows.
We also assume all data elements dUi, dLi, dUi′ and dLi′ (for 1 ≤ i, i′ ≤ n) are from a finite set ∆. The state transitions of layer i (for 1 ≤ i ≤ n) described by APTC are as follows.

Li = ∑dUi,dLi∈∆(rUIi(dUi) ⋅ Li2 ≬ rLIi(dLi) ⋅ Li3)

Li2 = UFi ⋅ Li4

Li3 = LFi ⋅ Li5

Li4 = ∑dUi∈∆(sLOi(UFi(dUi)) ⋅ Li)

Li5 = ∑dLi∈∆(sUOi(LFi(dLi)) ⋅ Li)
The state transitions of layer i′ (for 1 ≤ i′ ≤ n) described by APTC are as follows.

Li′ = ∑dUi′,dLi′∈∆(rUIi′(dUi′) ⋅ Li′2 ≬ rLIi′(dLi′) ⋅ Li′3)

Li′2 = UFi′ ⋅ Li′4

Li′3 = LFi′ ⋅ Li′5

Li′4 = ∑dUi′∈∆(sLOi′(UFi′(dUi′)) ⋅ Li′)

Li′5 = ∑dLi′∈∆(sUOi′(LFi′(dLi′)) ⋅ Li′)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions for 1 ≤ i ≤ n and 1 ≤ i′ ≤ n. Note that the channel LOi+1 of layer i + 1 and the channel UIi of layer i are the same channel, and the channel LIi+1 of layer i + 1 and the channel UOi of layer i are the same channel; similarly, the channel LOi′+1 of layer i′ + 1 and the channel UIi′ of layer i′ are the same channel, and the channel LIi′+1 of layer i′ + 1 and the channel UOi′ of layer i′ are the same channel. Also, the data dLi+1 of layer i + 1 and the data LFi(dLi) of layer i are the same data, and the data UFi+1(dUi+1) of layer i + 1 and the data dUi of layer i are the same data; the data dLi′+1 of layer i′ + 1 and the data LFi′(dLi′) of layer i′ are the same data, and the data UFi′+1(dUi′+1) of layer i′ + 1 and the data dUi′ of layer i′ are the same data.
For the layer i, there are four communication functions as follows.
γ(rUIi(dUi), sLOi+1(UFi+1(dUi+1))) ≜ cUIi(dUi)

γ(rLIi+1(dLi+1), sUOi(LFi(dLi))) ≜ cLIi+1(dLi+1)

γ(rLIi(dLi), sLOi′(UFi′(dUi′))) ≜ cLIi(dLi)

γ(rLIi′(dLi′), sLOi(UFi(dUi))) ≜ cLIi′(dLi′)
For the layer i′, there are four communication functions as follows.
γ(rUIi′(dUi′), sLOi′+1(UFi′+1(dUi′+1))) ≜ cUIi′(dUi′)

γ(rLIi′+1(dLi′+1), sUOi′(LFi′(dLi′))) ≜ cLIi′+1(dLi′+1)

γ(rLIi(dLi), sLOi′(UFi′(dUi′))) ≜ cLIi(dLi)

γ(rLIi′(dLi′), sLOi(UFi(dUi))) ≜ cLIi′(dLi′)
Note that, for the layer n, there are only two communication functions as follows.
γ(rLIn(dLn), sUOn−1(LFn−1(dLn−1))) ≜ cLIn(dLn)
γ(rUIn−1(dUn−1), sLOn(UFn(dUn))) ≜ cUIn−1(dUn−1)
And for the layer n′, there are only two communication functions as follows.
γ(rLIn′(dLn′), sUOn′−1(LFn′−1(dLn′−1))) ≜ cLIn′(dLn′)

γ(rUIn′−1(dUn′−1), sLOn′(UFn′(dUn′))) ≜ cUIn′−1(dUn′−1)
Let all layers from layer n to layer i and from layer i′ to layer n′ be in parallel; then the Layers pattern Ln⋯Li Li′⋯Ln′ can be presented by the following process term.

τI(∂H(Θ(Ln ≬ ⋯ ≬ Li ≬ Li′ ≬ ⋯ ≬ Ln′))) = τI(∂H(Ln ≬ ⋯ ≬ Li ≬ Li′ ≬ ⋯ ≬ Ln′))

where H = {rUIi(dUi), rLIi(dLi), sLOi(UFi(dUi)), sUOi(LFi(dLi)), ⋯, rLIn(dLn), sLOn(UFn(dUn)), rUIi′(dUi′), rLIi′(dLi′), sLOi′(UFi′(dUi′)), sUOi′(LFi′(dLi′)), ⋯, rLIn′(dLn′), sLOn′(UFn′(dUn′)) ∣ dUi, dLi, ⋯, dUn, dLn, dUi′, dLi′, ⋯, dUn′, dLn′ ∈ ∆},
Figure 9: Filter i

I = {cUIi(dUi), cLIi(dLi), cLIi+1(dLi+1), ⋯, cLIn(dLn), cUIn−1(dUn−1), LFi, UFi, ⋯, LFn, UFn, cUIi′(dUi′), cLIi′(dLi′), cLIi′+1(dLi′+1), ⋯, cLIn′(dLn′), cUIn′−1(dUn′−1), LFi′, UFi′, ⋯, LFn′, UFn′ ∣ dUi, dLi, ⋯, dUn, dLn, dUi′, dLi′, ⋯, dUn′, dLn′ ∈ ∆}.

Then we get the following conclusion on the Layers pattern.
Theorem 3.3 (Correctness of two layers peers via virtual communication). The two layers peers
via virtual communication τI(∂H(Ln ≬ ⋯ ≬ Li ≬ Li′ ≬ ⋯ ≬ Ln′)) can exhibit desired external
behaviors.
Proof. Based on the above state transitions of layer i and i′ (for 1 ≤ i, i′ ≤ n), by use of the
algebraic laws of APTC, we can prove that
τI(∂H(Ln ≬ ⋯ ≬ Li ≬ Li′ ≬ ⋯ ≬ Ln′)) = ∑dUn,dLn,dUn′,dLn′∈∆((rUIn(dUn) ∥ rUIn′(dUn′)) ⋅ (sUOn(LFn(dLn)) ∥ sUOn′(LFn′(dLn′)))) ⋅ τI(∂H(Ln ≬ ⋯ ≬ Li ≬ Li′ ≬ ⋯ ≬ Ln′)),

that is, the two layers peers via virtual communication τI(∂H(Ln ≬ ⋯ ≬ Li ≬ Li′ ≬ ⋯ ≬ Ln′)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
3.1.2 Verification of the Pipes and Filters Pattern
The Pipes and Filters pattern is used to process a stream of data, with each processing step encapsulated in a filter component. The data stream flows out of the data source and into the first filter; the first filter processes the data and sends the processed data to the next filter; eventually, the data stream flows out of the pipes of filters and into the data sink, as Figure 10 illustrates; there are n filters in the pipes. In particular, filter i (1 ≤ i ≤ n), as illustrated in Figure 9, has an input channel Ii to read the data di; it then processes the data via a processing function FFi and finally sends the processed data to the next filter through an output channel Oi.

There is one typical process in the Pipes and Filters pattern, as illustrated in Figure 11 and as follows.
1. The filter 1 receives the data from the data source which is denoted d1 through the chan-
nel I1 (the corresponding reading action is denoted rI1(d1)), then processes the data
through a processing function FF1, and sends the processed data to the filter 2 which
is denoted FF1(d1) through the channel O1 (the corresponding sending action is denoted
sO1(FF1(d1)));
2. The filter i receives the data from filter i − 1 which is denoted di through the channel Ii (the corresponding reading action is denoted rIi(di)), then processes the data through a processing function FFi, and sends the processed data to the filter i + 1 which is denoted FFi(di) through the channel Oi (the corresponding sending action is denoted sOi(FFi(di)));
3. The filter n receives the data from filter n − 1 which is denoted dn through the channel In (the corresponding reading action is denoted rIn(dn)), then processes the data through a processing function FFn, and sends the processed data to the data sink which is denoted FFn(dn) through the channel On (the corresponding sending action is denoted sOn(FFn(dn))).
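The three steps above chain the filters by function composition. The snippet below is an illustrative Python sketch only (the three filter functions are hypothetical placeholders for FF1, FF2, FF3): each filter's output channel Oi feeds the next filter's input channel Ii+1.

```python
from functools import reduce

# Illustrative Pipes and Filters sketch: each element of `pipeline` stands
# in for a processing function FF_i; the pipe feeds each filter's output
# (channel O_i) into the next filter's input (channel I_{i+1}).

def run_pipeline(filters, d):
    return reduce(lambda acc, ff: ff(acc), filters, d)

pipeline = [
    lambda d: d.strip(),   # hypothetical FF1: normalize whitespace
    lambda d: d.upper(),   # hypothetical FF2: capitalize
    lambda d: d + "!",     # hypothetical FF3: decorate
]
sink = run_pipeline(pipeline, "  hello ")   # data source -> ... -> data sink
```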
In the following, we verify the Pipes and Filters pattern. We assume all data elements di (for
1 ≤ i ≤ n) are from a finite set ∆. The state transitions of filter i (for 1 ≤ i ≤ n) described by
APTC are as follows.
Fi = ∑di∈∆(rIi(di) ⋅ Fi2)

Fi2 = FFi ⋅ Fi3

Fi3 = ∑di∈∆(sOi(FFi(di)) ⋅ Fi)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions for 1 ≤ i ≤ n. Note that the channel Ii+1 of filter i + 1 and the channel Oi of filter i are the same channel. Also, the data di+1 of filter i + 1 and the data FFi(di) of filter i are the same data.
γ(rIi(di), sOi−1(FFi−1(di−1))) ≜ cIi(di)
γ(rIi+1(di+1), sOi(FFi(di))) ≜ cIi+1(di+1)
Note that, for the filter n, there is only one communication function as follows.
γ(rIn(dn), sOn−1(FFn−1(dn−1))) ≜ cIn(dn)
And for the filter 1, there is also only one communication function as follows.
γ(rI2(d2), sO1(FF1(d1))) ≜ cI2(d2)
Let all filters from filter 1 to filter n be in parallel, then the Pipes and Filters pattern F1⋯Fi⋯Fn
can be presented by the following process term.
τI(∂H(Θ(F1 ≬⋯ ≬ Fi ≬ ⋯≬ Fn))) = τI(∂H(F1 ≬⋯ ≬ Fi ≬⋯ ≬ Fn))
Figure 12: Blackboard pattern
where H = {sO1(FF1(d1)), ⋯, rIi(di), sOi(FFi(di)), ⋯, rIn(dn) ∣ d1, ⋯, di, ⋯, dn ∈ ∆},

I = {cI2(d2), ⋯, cIi(di), ⋯, cIn(dn), FF1, ⋯, FFi, ⋯, FFn ∣ d1, ⋯, di, ⋯, dn ∈ ∆}.

Then we get the following conclusion on the Pipes and Filters pattern.
Theorem 3.4 (Correctness of the Pipes and Filters pattern). The Pipes and Filters pattern
τI(∂H(F1 ≬⋯ ≬ Fi ≬⋯ ≬ Fn)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of filter i (for 1 ≤ i ≤ n), by use of the algebraic laws
of APTC, we can prove that
τI(∂H(F1 ≬ ⋯ ≬ Fi ≬ ⋯ ≬ Fn)) = ∑d1,dn∈∆(rI1(d1) ⋅ sOn(FFn(dn))) ⋅ τI(∂H(F1 ≬ ⋯ ≬ Fi ≬ ⋯ ≬ Fn)),

that is, the Pipes and Filters pattern τI(∂H(F1 ≬ ⋯ ≬ Fi ≬ ⋯ ≬ Fn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
3.1.3 Verification of the Blackboard Pattern
The Blackboard pattern is used to solve problems with no deterministic solutions. In the Blackboard pattern, there are one Control module, one Blackboard module and several Knowledge Source modules. When the Control module receives a request, it queries the Blackboard module for the involved Knowledge Sources; then the Control module invokes the related Knowledge Sources; finally, the related Knowledge Sources update the Blackboard with the invoked results, as illustrated in Figure 12.
The typical process of the Blackboard pattern is illustrated in Figure 13 and as follows.
Figure 13: Typical process of Blackboard pattern
1. The Control module receives the request from outside applications, which is denoted dI, through the input channel I (the corresponding reading action is denoted rI(dI)), then processes the request through a processing function CF1, and sends the processed data, which is denoted CF1(dI), to the Blackboard module through the channel CB (the corresponding sending action is denoted sCB(CF1(dI)));
2. The Blackboard module receives the request (information of involved Knowledge Sources)
from the Control module through the channel CB (the corresponding reading action is
denoted rCB(CF1(dI))), then processes the request through a processing function BF1,
and generates and sends the response which is denoted dB to the Control module through
the channel CB (the corresponding sending action is denoted sCB(dB));
3. The Control module receives the data from the Blackboard module through the channel CB (the corresponding reading action is denoted rCB(dB)), then processes the data through another processing function CF2, and generates and sends the requests to the related Knowledge Sources, which are denoted dCi, through the channels CKi (the corresponding sending action is denoted sCKi(dCi)) with 1 ≤ i ≤ n;
4. The Knowledge Source i receives the request from the Control module through the channel CKi (the corresponding reading action is denoted rCKi(dCi)), then processes the request through a processing function KFi, and generates and sends the processed data dKi to the Blackboard module through the channel BKi (the corresponding sending action is denoted sBKi(dKi));
5. The Blackboard module receives the invoked results from Knowledge Source i through the
channel BKi (the corresponding reading action is denoted rBKi(dKi)) (1 ≤ i ≤ n), then
processes the results through another processing function BF2, generates and sends the
output dO through the channel O (the corresponding sending action is denoted sO(dO)).
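The control flow of steps 1-5 can be sketched in Python as follows; this is an illustrative fragment only, in which the processing functions CF1/CF2, BF1/BF2 and KFi are replaced by hypothetical placeholders and all channels are modeled as direct calls.

```python
# Illustrative Blackboard sketch following steps 1-5: the Control queries
# the Blackboard for the involved Knowledge Sources, invokes them, and the
# Blackboard aggregates their results into the output d_O.

class Blackboard:
    def query(self, request):
        # stands in for BF1: decide which knowledge sources are involved
        return list(request["sources"])

    def aggregate(self, results):
        # stands in for BF2: combine the invoked results into the output
        return sorted(results)

def control(blackboard, knowledge_sources, request):
    involved = blackboard.query(request)                          # cCB round trip
    results = [knowledge_sources[k](request) for k in involved]   # cCK_i / cBK_i
    return blackboard.aggregate(results)                          # sO(dO)

ks = {
    "k1": lambda req: "k1:" + req["data"],   # hypothetical KF1
    "k2": lambda req: "k2:" + req["data"],   # hypothetical KF2
}
out = control(Blackboard(), ks, {"sources": ["k2", "k1"], "data": "dI"})
```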
In the following, we verify the Blackboard pattern. We assume all data elements dI, dB, dCi, dKi, dO (for 1 ≤ i ≤ n) are from a finite set ∆. The state transitions of the Control module described by APTC are as follows.
C = ∑dI∈∆(rI(dI) ⋅ C2)

C2 = CF1 ⋅ C3

C3 = ∑dI∈∆(sCB(CF1(dI)) ⋅ C4)

C4 = ∑dB∈∆(rCB(dB) ⋅ C5)

C5 = CF2 ⋅ C6

C6 = ∑dC1,⋯,dCi,⋯,dCn∈∆(sCK1(dC1) ≬ ⋯ ≬ sCKi(dCi) ≬ ⋯ ≬ sCKn(dCn) ⋅ C)
The state transitions of the Blackboard module described by APTC are as follows.

B = ∑dI∈∆(rCB(CF1(dI)) ⋅ B2)

B2 = BF1 ⋅ B3

B3 = ∑dB∈∆(sCB(dB) ⋅ B4)

B4 = ∑dK1,⋯,dKi,⋯,dKn∈∆(rBK1(dK1) ≬ ⋯ ≬ rBKi(dKi) ≬ ⋯ ≬ rBKn(dKn) ⋅ B5)

B5 = BF2 ⋅ B6

B6 = ∑dO∈∆(sO(dO) ⋅ B)

The state transitions of the Knowledge Source i described by APTC are as follows.
Ki = ∑dCi∈∆(rCKi(dCi) ⋅ Ki2)

Ki2 = KFi ⋅ Ki3

Ki3 = ∑dKi∈∆(sBKi(dKi) ⋅ Ki)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions for 1 ≤ i ≤ n.
γ(rCB(CF1(dI)), sCB(CF1(dI))) ≜ cCB(CF1(dI))

γ(rCB(dB), sCB(dB)) ≜ cCB(dB)

γ(rCKi(dCi), sCKi(dCi)) ≜ cCKi(dCi)

γ(rBKi(dKi), sBKi(dKi)) ≜ cBKi(dKi)
Let all modules be in parallel, then the Blackboard pattern C B K1⋯Ki⋯Kn can be presented
by the following process term.
τI(∂H(Θ(C ≬ B ≬K1 ≬⋯ ≬Ki ≬⋯≬Kn))) = τI(∂H(C ≬ B ≬K1 ≬⋯≬Ki ≬⋯≬Kn))
where H = {rCB(CF1(dI)), sCB(CF1(dI)), rCB(dB), sCB(dB), rCK1(dC1), sCK1(dC1), ⋯, rCKi(dCi), sCKi(dCi), ⋯, rCKn(dCn), sCKn(dCn), rBK1(dK1), sBK1(dK1), ⋯, rBKi(dKi), sBKi(dKi), ⋯, rBKn(dKn), sBKn(dKn) ∣ dI, dB, dC1, ⋯, dCi, ⋯, dCn, dK1, ⋯, dKi, ⋯, dKn ∈ ∆},
I = {cCB(CF1(dI)), cCB(dB), cCK1(dC1), ⋯, cCKi(dCi), ⋯, cCKn(dCn), cBK1(dK1), ⋯, cBKi(dKi), ⋯, cBKn(dKn), CF1, CF2, BF1, BF2, KF1, ⋯, KFi, ⋯, KFn ∣ dI, dB, dC1, ⋯, dCi, ⋯, dCn, dK1, ⋯, dKi, ⋯, dKn ∈ ∆}.

Then we get the following conclusion on the Blackboard pattern.
Theorem 3.5 (Correctness of the Blackboard pattern). The Blackboard pattern τI(∂H(C ≬
B ≬K1 ≬⋯ ≬Ki ≬⋯ ≬Kn)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(C ≬ B ≬ K1 ≬ ⋯ ≬ Ki ≬ ⋯ ≬ Kn)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(C ≬ B ≬ K1 ≬ ⋯ ≬ Ki ≬ ⋯ ≬ Kn)),

that is, the Blackboard pattern τI(∂H(C ≬ B ≬ K1 ≬ ⋯ ≬ Ki ≬ ⋯ ≬ Kn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
3.2 Distributed Systems
In this subsection, we verify the distributed-systems-oriented patterns, including the Broker pattern; the Pipes and Filters pattern is verified in Subsection 3.1 and the Microkernel pattern in Subsection 3.4.
3.2.1 Verification of the Broker Pattern
The Broker pattern decouples the invocation process between the Client and the Server. There are five types of modules in the Broker pattern: the Client, the Client-side Proxy, the Brokers, the Server-side Proxy and the Server. The Client receives the request from the user and passes it to the Client-side Proxy, then to the first broker and the next one; the last broker passes it to the Server-side Proxy, which finally leads to the invocation of the Server. The Server processes the request and generates the response; then the response is returned to the user in the reverse way, as illustrated in Figure 14.
The typical process of the Broker pattern is illustrated in Figure 15 and as follows.
1. The Client receives the request dI through the channel I (the corresponding reading action
is denoted rI(dI)), then processes the request through a processing function which is
denoted CF1, then sends the processed request CF1(dI) to the Client-side Proxy through
the channel ICP (the corresponding sending action is denoted sICP(CF1(dI)));
2. The Client-side Proxy receives the request dICP from the Client through the channel ICP (the corresponding reading action is denoted rICP(dICP)), then processes the request through a processing function CPF1, and then sends the processed request CPF1(dICP) to the first broker 1 through the channel ICB (the corresponding sending action is denoted sICB(CPF1(dICP)));
3. The broker i (for 1 ≤ i ≤ n) receives the request dIBi from the broker i − 1 through the channel IBBi (the corresponding reading action is denoted rIBBi(dIBi)), then processes the request through a processing function BFi1, and then sends the processed request BFi1(dIBi) to the broker i + 1 through the channel IBBi+1 (the corresponding sending action is denoted sIBBi+1(BFi1(dIBi)));
4. The Server-side Proxy receives the request dISP from the last broker n through the channel IBS (the corresponding reading action is denoted rIBS(dISP)), then processes the request through a processing function SPF1, and then sends the processed request SPF1(dISP) to the Server through the channel IPS (the corresponding sending action is denoted sIPS(SPF1(dISP)));
5. The Server receives the request dIS from the Server-side Proxy through the channel IPS (the corresponding reading action is denoted rIPS(dIS)), then processes the request and generates the response dOS through a processing function SF, and then sends the response to the Server-side Proxy through the channel OPS (the corresponding sending action is denoted sOPS(dOS));
6. The Server-side Proxy receives the response dOSP from the Server through the channel OPS (the corresponding reading action is denoted rOPS(dOSP)), then processes the response through a processing function SPF2, and sends the processed response SPF2(dOSP) to the last broker n through the channel OBS (the corresponding sending action is denoted sOBS(SPF2(dOSP)));
7. The broker i receives the response dOBi from the broker i + 1 through the channel OBBi+1 (the corresponding reading action is denoted rOBBi+1(dOBi)), then processes the response through a processing function BFi2, and then sends the processed response BFi2(dOBi) to the broker i − 1 through the channel OBBi (the corresponding sending action is denoted sOBBi(BFi2(dOBi)));
8. The Client-side Proxy receives the response dOCP from the first broker 1 through the channel OCB (the corresponding reading action is denoted rOCB(dOCP)), then processes the response through a processing function CPF2, and sends the processed response CPF2(dOCP) to the Client through the channel OCP (the corresponding sending action is denoted sOCP(CPF2(dOCP)));
9. The Client receives the response dOC from the Client-side Proxy through the channel OCP (the corresponding reading action is denoted rOCP(dOC)), then processes the response through a processing function CF2 and generates the response dO, and then sends the response out through the channel O (the corresponding sending action is denoted sO(dO)).
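The request/response round trip of steps 1-9 can be sketched as a chain of calls. The fragment below is an illustrative Python sketch only; all processing functions (CF, CPF, BF, SPF, SF) are hypothetical placeholders that merely tag the data, so the route through the chain is visible in the result.

```python
# Illustrative Broker sketch: the request passes Client -> Client-side
# Proxy -> brokers 1..n -> Server-side Proxy -> Server, and the response
# returns along the same chain in reverse (steps 1-9 above).

def broker_round_trip(d_i, cf1, cpf1, brokers, spf1, sf, spf2, cpf2, cf2):
    req = cpf1(cf1(d_i))                 # Client, Client-side Proxy
    for bf1, _ in brokers:               # brokers 1..n forward the request
        req = bf1(req)
    resp = sf(spf1(req))                 # Server-side Proxy, then Server (SF)
    resp = spf2(resp)                    # Server-side Proxy on the way back
    for _, bf2 in reversed(brokers):     # brokers n..1 return the response
        resp = bf2(resp)
    return cf2(cpf2(resp))               # Client-side Proxy, Client -> d_O

tag = lambda name: (lambda d: f"{name}({d})")
brokers = [(tag(f"BF{i}1"), tag(f"BF{i}2")) for i in (1, 2)]
d_o = broker_round_trip("dI", tag("CF1"), tag("CPF1"), brokers,
                        tag("SPF1"), tag("SF"), tag("SPF2"),
                        tag("CPF2"), tag("CF2"))
```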
In the following, we verify the Broker pattern. We assume all data elements dI, dICP, dIBi, dISP, dIS, dOS, dOBi, dOCP, dOC, dO (for 1 ≤ i ≤ n) are from a finite set ∆. Note that the channels IBB1 and ICB are the same channel; the channels OBB1 and OCB are the same channel; the channels IBBn+1 and IBS are the same channel; and the channels OBBn+1 and OBS are the same channel. Also, the data CF1(dI) and dICP are the same data; the data CPF1(dICP) and dIB1 are the same data; the data BFi1(dIBi) and dIBi+1 are the same data; the data BFn1(dIBn) and the data dISP are the same data; the data SPF1(dISP) and dIS are the same data; the data SPF2(dOSP) and dOBn are the same data; the data BFi2(dOBi) and the data dOBi−1 are the same data; the data BF12(dOB1) and dOCP are the same data; the data CPF2(dOCP) and dOC are the same data; and the data CF2(dOC) and the data dO are the same data.
The state transitions of the Client module described by APTC are as follows.

C = ∑dI∈∆(rI(dI) ⋅ C2)

C2 = CF1 ⋅ C3

C3 = ∑dI∈∆(sICP(CF1(dI)) ⋅ C4)

C4 = ∑dOC∈∆(rOCP(dOC) ⋅ C5)

C5 = CF2 ⋅ C6

C6 = ∑dO∈∆(sO(dO) ⋅ C)

The state transitions of the Client-side Proxy module described by APTC are as follows.
CP = ∑dICP∈∆(rICP(dICP) ⋅ CP2)

CP2 = CPF1 ⋅ CP3

CP3 = ∑dICP∈∆(sICB(CPF1(dICP)) ⋅ CP4)

CP4 = ∑dOCP∈∆(rOCB(dOCP) ⋅ CP5)

CP5 = CPF2 ⋅ CP6

CP6 = ∑dOCP∈∆(sOCP(CPF2(dOCP)) ⋅ CP)
The state transitions of the Broker i described by APTC are as follows.
Bi = ∑dIBi∈∆(rIBBi(dIBi) ⋅ Bi2)

Bi2 = BFi1 ⋅ Bi3

Bi3 = ∑dIBi∈∆(sIBBi+1(BFi1(dIBi)) ⋅ Bi4)

Bi4 = ∑dOBi∈∆(rOBBi+1(dOBi) ⋅ Bi5)

Bi5 = BFi2 ⋅ Bi6

Bi6 = ∑dOBi∈∆(sOBBi(BFi2(dOBi)) ⋅ Bi)
The state transitions of the Server-side Proxy described by APTC are as follows.
SP = ∑dISP∈∆(rIBS(dISP) ⋅ SP2)

SP2 = SPF1 ⋅ SP3

SP3 = ∑dISP∈∆(sIPS(SPF1(dISP)) ⋅ SP4)

SP4 = ∑dOSP∈∆(rOPS(dOSP) ⋅ SP5)

SP5 = SPF2 ⋅ SP6

SP6 = ∑dOSP∈∆(sOBS(SPF2(dOSP)) ⋅ SP)
The state transitions of the Server described by APTC are as follows.
S = ∑dIS∈∆(rIPS(dIS) ⋅ S2)

S2 = SF ⋅ S3

S3 = ∑dOS∈∆(sOPS(dOS) ⋅ S)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the broker i for 1 ≤ i ≤ n.
γ(rIBBi(dIBi), sIBBi(BFi−11(dIBi−1))) ≜ cIBBi(dIBi)

γ(rOBBi+1(dOBi), sOBBi+1(BFi+12(dOBi+1))) ≜ cOBBi+1(dOBi)
There are two communication functions between the Client and the Client-side Proxy as follows.
γ(rICP(dICP), sICP(CF1(dI))) ≜ cICP(dICP)

γ(rOCP(dOC), sOCP(CPF2(dOCP))) ≜ cOCP(dOC)
There are two communication functions between the broker 1 and the Client-side Proxy as follows.

γ(rIBB1(dIB1), sICB(CPF1(dICP))) ≜ cIBB1(dIB1)

γ(rOCB(dOCP), sOBB1(BF12(dOB1))) ≜ cOCB(dOCP)
There are two communication functions between the broker n and the Server-side Proxy as follows.

γ(rIBS(dISP), sIBBn+1(BFn1(dIBn))) ≜ cIBS(dISP)

γ(rOBBn+1(dOBn), sOBS(SPF2(dOSP))) ≜ cOBBn+1(dOBn)
There are two communication functions between the Server and the Server-side Proxy as follows.
γ(rIPS(dIS), sIPS(SPF1(dISP))) ≜ cIPS(dIS)

γ(rOPS(dOSP), sOPS(dOS)) ≜ cOPS(dOSP)
Let all modules be in parallel, then the Broker pattern C CP SP S B1⋯Bi⋯Bn can be
presented by the following process term.
τI(∂H(Θ(C ≬ CP ≬ SP ≬ S ≬ B1 ≬ ⋯ ≬ Bi ≬ ⋯ ≬ Bn))) = τI(∂H(C ≬ CP ≬ SP ≬ S ≬ B1 ≬ ⋯ ≬ Bi ≬ ⋯ ≬ Bn))

where H = {rIBBi(dIBi), sIBBi(BFi−11(dIBi−1)), rOBBi+1(dOBi), sOBBi+1(BFi+12(dOBi+1)), rICP(dICP), sICP(CF1(dI)), rOCP(dOC), sOCP(CPF2(dOCP)), rIBB1(dIB1), sICB(CPF1(dICP)), rOCB(dOCP), sOBB1(BF12(dOB1)), rIBS(dISP), sIBBn+1(BFn1(dIBn)), rOBBn+1(dOBn), sOBS(SPF2(dOSP)), rIPS(dIS), sIPS(SPF1(dISP)), rOPS(dOSP), sOPS(dOS) ∣ dI, dICP, dOCP, dISP, dOSP, dIS, dOS, dOC, dO, dIB1, ⋯, dIBi, ⋯, dIBn, dOB1, ⋯, dOBi, ⋯, dOBn ∈ ∆},

I = {cIBB1(dIB1), cOBB2(dOB1), ⋯, cIBBi(dIBi), cOBBi+1(dOBi), ⋯, cIBBn(dIBn), cOBBn+1(dOBn), cICP(dICP), cOCP(dOC), cOCB(dOCP), cIBS(dISP), cIPS(dIS), cOPS(dOSP), CF1, CF2, CPF1, CPF2, BF11, BF12, ⋯, BFi1, BFi2, ⋯, BFn1, BFn2, SPF1, SPF2, SF ∣ dI, dICP, dOCP, dISP, dOSP, dIS, dOS, dOC, dO, dIB1, ⋯, dIBi, ⋯, dIBn, dOB1, ⋯, dOBi, ⋯, dOBn ∈ ∆}.

Then we get the following conclusion on the Broker pattern.
Theorem 3.6 (Correctness of the Broker pattern). The Broker pattern τI(∂H(C ≬ CP ≬ SP ≬
S ≬ B1 ≬⋯ ≬ Bi ≬⋯ ≬ Bn)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(C ≬ CP ≬ SP ≬ S ≬ B1 ≬ ⋯ ≬ Bi ≬ ⋯ ≬ Bn)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(C ≬ CP ≬ SP ≬ S ≬ B1 ≬ ⋯ ≬ Bi ≬ ⋯ ≬ Bn)),

that is, the Broker pattern τI(∂H(C ≬ CP ≬ SP ≬ S ≬ B1 ≬ ⋯ ≬ Bi ≬ ⋯ ≬ Bn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
3.3 Interactive Systems
In this subsection, we verify interactive systems oriented patterns, including the Model-View-
Controller (MVC) pattern and the Presentation-Abstraction-Control (PAC) pattern.
3.3.1 Verification of the MVC Pattern
The MVC pattern is used to model interactive systems, and it has three components: the Model, the Views and the Controller. The Model is used to contain the data and encapsulate the core functionalities; the Views are used to show the computational results to the user; and the Controller interacts between the system and the user, accepts the instructions and controls the Model and the Views. The Controller receives the instructions from the user through the channel I, then it sends the instructions to the Model through the channel CM and to the View i through the channel CVi for 1 ≤ i ≤ n; the Model receives the instructions from the Controller, updates the data and computes the results, and sends the results to the View i through the channel MVi for 1 ≤ i ≤ n; when the View i receives the results from the Model, it generates or updates the view to the user, as illustrated in Figure 16.
The typical process of the MVC pattern is shown in Figure 17 and as follows.
1. The Controller receives the instructions dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), processes the instructions through a processing function CF, and generates the instructions dIM to the Model and the instructions dIVi to the View i for 1 ≤ i ≤ n; it sends dIM to the Model through the channel CM (the corresponding sending action is denoted sCM(dIM)) and sends dIVi to the View i through the channel CVi (the corresponding sending action is denoted sCVi(dIVi));
2. The Model receives the instructions from the Controller through the channel CM (the corresponding reading action is denoted rCM(dIM)), processes the instructions through a processing function MF, and generates the computational results to the View i (for 1 ≤ i ≤ n), which are denoted dOMi; it then sends the results to the View i through the channel MVi (the corresponding sending action is denoted sMVi(dOMi));
3. The View i (for 1 ≤ i ≤ n) receives the instructions from the Controller through the channel CVi (the corresponding reading action is denoted rCVi(dIVi)), processes the instructions through a processing function VFi1 to make ready to receive the computational results from the Model; then it receives the computational results from the Model through the channel MVi (the corresponding reading action is denoted rMVi(dOMi)), processes the results through a processing function VFi2, generates the output dOi, and then sends the output through the channel Oi (the corresponding sending action is denoted sOi(dOi)).

Figure 16: MVC pattern
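Steps 1-3 can be sketched in Python as follows; this is an illustrative fragment only, in which CF, MF and VFi1/VFi2 are hypothetical placeholders and the channels CM, CVi, MVi and Oi are modeled as plain function calls.

```python
# Illustrative MVC sketch following steps 1-3: the Controller fans the
# instructions out to the Model and the Views; the Model computes one
# result per View; each View combines both into its output d_Oi.

def controller(d_i, n):
    # stands in for CF: derive the Model instruction and one per view
    return f"dIM({d_i})", [f"dIV{i}({d_i})" for i in range(1, n + 1)]

def model(d_im, n):
    # stands in for MF: compute one result per view
    return [f"dOM{i}({d_im})" for i in range(1, n + 1)]

def view(i, d_iv, d_om):
    # VF_i1 prepares with the instruction, VF_i2 renders the result
    return f"O{i}[{d_iv}|{d_om}]"

n = 2
d_im, d_ivs = controller("dI", n)              # channels CM and CV_i
d_oms = model(d_im, n)                         # channels MV_i
outputs = [view(i + 1, d_ivs[i], d_oms[i]) for i in range(n)]  # channels O_i
```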
In the following, we verify the MVC pattern. We assume all data elements dI, dIM, dIVi, dOMi, dOi (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Controller module described by APTC are as follows.
C = ∑dI∈∆(rI(dI) ⋅ C2)

C2 = CF ⋅ C3

C3 = ∑dIM∈∆(sCM(dIM) ⋅ C4)

C4 = ∑dIV1,⋯,dIVn∈∆(sCV1(dIV1) ≬ ⋯ ≬ sCVn(dIVn) ⋅ C)
The state transitions of the Model described by APTC are as follows.
M = ∑dIM∈∆(rCM(dIM) ⋅ M2)

M2 = MF ⋅ M3

M3 = ∑dOM1,⋯,dOMn∈∆(sMV1(dOM1) ≬ ⋯ ≬ sMVn(dOMn) ⋅ M)
The state transitions of the View i described by APTC are as follows.
Vi = ∑dIVi∈∆(rCVi(dIVi) ⋅ Vi2)

Vi2 = VFi1 ⋅ Vi3

Vi3 = ∑dOMi∈∆(rMVi(dOMi) ⋅ Vi4)

Vi4 = VFi2 ⋅ Vi5

Vi5 = ∑dOi∈∆(sOi(dOi) ⋅ Vi)

Figure 17: Typical process of MVC pattern
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the View i for 1 ≤ i ≤ n.
γ(rCVi(dIVi), sCVi(dIVi)) ≜ cCVi(dIVi)
γ(rMVi(dOMi), sMVi(dOMi)) ≜ cMVi(dOMi)
There is one communication function between the Controller and the Model as follows.
γ(rCM(dIM), sCM(dIM)) ≜ cCM(dIM)
Let all modules be in parallel, then the MVC pattern C M V1⋯Vi⋯Vn can be presented by
the following process term.
τI(∂H(Θ(C ≬ M ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn))) = τI(∂H(C ≬ M ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn))
where H = {rCVi(dIVi), sCVi(dIVi), rMVi(dOMi), sMVi(dOMi), rCM(dIM), sCM(dIM) ∣ dI, dIM, dIVi, dOMi, dOi ∈ ∆} for 1 ≤ i ≤ n,
I = {cCVi(dIVi), cMVi(dOMi), cCM(dIM), CF, MF, VF11, VF12, ⋯, VFn1, VFn2 ∣ dI, dIM, dIVi, dOMi, dOi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the MVC pattern.
Theorem 3.7 (Correctness of the MVC pattern). The MVC pattern τI(∂H(C ≬ M ≬ V1 ≬
⋯≬ Vi ≬⋯ ≬ Vn)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
Figure 18: PAC pattern
τI(∂H(C ≬ M ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn)) = ∑dI,dO1,⋯,dOn∈∆(rI(dI) ⋅ sO1(dO1) ∥ ⋯ ∥ sOi(dOi) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(C ≬ M ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn)),
that is, the MVC pattern τI(∂H(C ≬ M ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
3.3.2 Verification of the PAC Pattern
The PAC pattern is also used to model interactive systems, and it has three kinds of components: the Abstraction, the Presentations and the Control. The Abstraction is used to contain the data and encapsulate the core functionalities; the Presentations are used to show the computational results to the user; and the Control interacts between the system and the user: it accepts the instructions and controls the Abstraction and the Presentations, and also other PACs. The Control receives the instructions from the user through the channel I, then it sends the instructions to the Abstraction through the channel CA and receives the results through the same channel. Then the Control sends the results to the Presentation i through the channel CPi for 1 ≤ i ≤ n, and also sends the unprocessed instructions to other PACs through the channel O. When the Presentation i receives the results from the Control, it generates or updates the presentation to the user. As illustrated in Figure 18.
The typical process of the PAC pattern is shown in Figure 19 and as follows.
1. The Control receives the instructions dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), processes the instructions through a processing
function CF1, and generates the instructions to the Abstraction dIA and the remaining in-
structions dO; it sends dIA to the Abstraction through the channel CA (the corresponding
sending action is denoted sCA(dIA)), sends dO to the other PAC through the channel O
(the corresponding sending action is denoted sO(dO));
2. The Abstraction receives the instructions from the Control through the channel CA (the
corresponding reading action is denoted rCA(dIA)), processes the instructions through a
processing function AF , generates the computational results to Control which is denoted
dOA, and sends the results to the Control through the channel CA (the corresponding
sending action is denoted sCA(dOA));
3. The Control receives the computational results from the Abstraction through channel CA
(the corresponding reading action is denoted rCA(dOA)), processes the results through a
processing function CF2 to generate the results to the Presentation i (for 1 ≤ i ≤ n) which
is denoted dOCi; then sends the results to the Presentation i through the channel CPi (the corresponding sending action is denoted sCPi(dOCi));
4. The Presentation i (for 1 ≤ i ≤ n) receives the computational results from the Control through the channel CPi (the corresponding reading action is denoted rCPi(dOCi)), processes the results through a processing function PFi, generates the output dOi, then sends the output through the channel Oi (the corresponding sending action is denoted sOi(dOi)).
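As a quick operational reading of these four steps, the following minimal Python sketch (an illustrative assumption, not the APTC model) treats each channel as a direct call; cf1, af, cf2 and pf stand in for CF1, AF, CF2 and PFi.

```python
# Sketch of the PAC flow: Control -> Abstraction -> Control -> Presentations,
# with the remaining instructions dO forwarded to other PACs.

def pac(d_i, n, cf1, af, cf2, pf):
    d_ia, d_o = cf1(d_i)                  # Control: CF1 splits dI into dIA and dO
    d_oa = af(d_ia)                       # Abstraction: AF yields dOA
    d_ocs = cf2(d_oa, n)                  # Control: CF2 yields dOC1..dOCn
    outs = [pf(i, d_ocs[i]) for i in range(n)]  # Presentation i: PFi yields dOi
    return d_o, outs

forwarded, outs = pac(5, 2,
                      cf1=lambda d: (d, -d),
                      af=lambda d: d + 1,
                      cf2=lambda d, n: [d] * n,
                      pf=lambda i, d: d * 10)
```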
In the following, we verify the PAC pattern. We assume all data elements dI, dIA, dOA, dO, dOCi, dOi (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Control module described by APTC are as follows.
C = ∑dI∈∆(rI(dI) ⋅ C2)
C2 = CF1 ⋅ C3
C3 = ∑dIA∈∆(sCA(dIA) ⋅ C4)
C4 = ∑dO∈∆(sO(dO) ⋅ C5)
C5 = ∑dOA∈∆(rCA(dOA) ⋅ C6)
C6 = CF2 ⋅ C7
C7 = ∑dOC1,⋯,dOCn∈∆(sCP1(dOC1) ≬ ⋯ ≬ sCPn(dOCn) ⋅ C)
The state transitions of the Abstraction described by APTC are as follows.
A = ∑dIA∈∆(rCA(dIA) ⋅ A2)
A2 = AF ⋅ A3
A3 = ∑dOA∈∆(sCA(dOA) ⋅ A)
The state transitions of the Presentation i described by APTC are as follows.
Pi = ∑dOCi∈∆(rCPi(dOCi) ⋅ Pi2)
Pi2 = PFi ⋅ Pi3
Pi3 = ∑dOi∈∆(sOi(dOi) ⋅ Pi)
Figure 19: Typical process of PAC pattern
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Presentation i for 1 ≤ i ≤ n.
γ(rCPi(dOCi), sCPi(dOCi)) ≜ cCPi(dOCi)
There are two communication functions between the Control and the Abstraction as follows.
γ(rCA(dIA), sCA(dIA)) ≜ cCA(dIA)
γ(rCA(dOA), sCA(dOA)) ≜ cCA(dOA)
Let all modules be in parallel, then the PAC pattern C A P1⋯Pi⋯Pn can be presented by
the following process term.
τI(∂H(Θ(C ≬ A ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn))) = τI(∂H(C ≬ A ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn))
where H = {rCPi(dOCi), sCPi(dOCi), rCA(dIA), sCA(dIA), rCA(dOA), sCA(dOA) ∣ dI, dIA, dOA, dO, dOCi, dOi ∈ ∆} for 1 ≤ i ≤ n,
I = {cCA(dIA), cCA(dOA), cCP1(dOC1), ⋯, cCPn(dOCn), CF1, CF2, AF, PF1, ⋯, PFn ∣ dI, dIA, dOA, dO, dOCi, dOi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the PAC pattern.
Figure 20: Microkernel pattern
Theorem 3.8 (Correctness of the PAC pattern). The PAC pattern τI(∂H(C ≬ A ≬ P1 ≬ ⋯≬
Pi ≬⋯≬ Pn)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(C ≬ A ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn)) = ∑dI,dO,dO1,⋯,dOn∈∆(rI(dI) ⋅ sO(dO) ⋅ sO1(dO1) ∥ ⋯ ∥ sOi(dOi) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(C ≬ A ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn)),
that is, the PAC pattern τI(∂H(C ≬ A ≬ P1 ≬⋯ ≬ Pi ≬⋯ ≬ Pn)) can exhibit desired external
behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
3.4 Adaptable Systems
In this subsection, we verify adaptive systems oriented patterns, including the Microkernel pat-
tern and the Reflection pattern.
3.4.1 Verification of the Microkernel Pattern
The Microkernel pattern adapts to changing requirements by implementing the unchangeable requirements as a minimal functional kernel and the changeable requirements as external functionalities. There are five modules in the Microkernel pattern: the Microkernel, the Internal Server, the External Server, the Adapter and the Client. The Client interacts with the user through the channels I and O; the Adapter interacts with the Microkernel through the channels IAM and OAM, and with the External Server through the channels IAE and OAE; the Microkernel interacts with the Internal Server through the channels IMI and OMI, and with the External Server through the channels IEM and OEM. As illustrated in Figure 20.
The typical process of the Microkernel pattern is shown in Figure 21 and as follows.
1. The Client receives the request dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), then processes the request dI through a processing function CF1, and sends the processed request dIC to the Adapter through the channel ICA (the corresponding sending action is denoted sICA(dIC));
2. The Adapter receives dIC from the Client through the channel ICA (the corresponding
reading action is denoted rICA(dIC)), then processes the request through a processing
function AF1, generates and sends the processed request dIA to the Microkernel through
the channel IAM (the corresponding sending action is denoted sIAM(dIA));
3. The Microkernel receives the request dIA from the Adapter through the channel IAM (the
corresponding reading action is denoted rIAM(dIA)), then processes the request through a
processing function MF1, generates and sends the processed request dIM to the Internal
Server through the channel IMI (the corresponding sending action is denoted sIMI(dIM )),
and to the External Server through the channel IEM (the corresponding sending action is
denoted sIEM(dIM ));
4. The Internal Server receives the request dIM from the Microkernel through the channel IMI (the corresponding reading action is denoted rIMI(dIM)), then processes the request and generates the response dOI through a processing function IF, and sends the response to the Microkernel through the channel OMI (the corresponding sending action is denoted sOMI(dOI));
5. The External Server receives the request dIM from the Microkernel through the channel IEM (the corresponding reading action is denoted rIEM(dIM)), then processes the request and generates the response dOE through a processing function EF1, and sends the response to the Microkernel through the channel OEM (the corresponding sending action is denoted sOEM(dOE));
6. The Microkernel receives the response dOI from the Internal Server through the channel OMI (the corresponding reading action is denoted rOMI(dOI)) and the response dOE from the External Server through the channel OEM (the corresponding reading action is denoted rOEM(dOE)), then processes the responses and generates the response dOM through a processing function MF2, and sends dOM to the Adapter through the channel OAM (the corresponding sending action is denoted sOAM(dOM));
7. The Adapter receives the response dOM from the Microkernel through the channel OAM (the corresponding reading action is denoted rOAM(dOM)); it may send dIA′ to the External Server through the channel IAE (the corresponding sending action is denoted sIAE(dIA′));
8. The External Server receives the request dIA′ from the Adapter through the channel IAE (the corresponding reading action is denoted rIAE(dIA′)), then processes the request and generates the response dOE′ through a processing function EF2, and sends dOE′ to the Adapter through the channel OAE (the corresponding sending action is denoted sOAE(dOE′));
9. The Adapter receives the response from the External Server through the channel OAE (the corresponding reading action is denoted rOAE(dOE′)), then processes dOM and dOE′ through a processing function AF2 and generates the response dOA, and sends dOA to the Client through the channel OCA (the corresponding sending action is denoted sOCA(dOA));
Figure 21: Typical process of Microkernel pattern
10. The Client receives the response dOA from the Adapter through the channel OCA (the corresponding reading action is denoted rOCA(dOA)), then processes dOA through a processing function CF2 and generates the response dO, and sends dO to the user through the channel O (the corresponding sending action is denoted sO(dO)).
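The ten steps form one round trip through the five modules. The following minimal Python sketch is an illustrative assumption (all processing functions are stand-in callables), with the Adapter's optional extra round trip of steps 7-9 included.

```python
# Sketch of the Microkernel flow:
# Client -> Adapter -> Microkernel -> (Internal, External) -> Microkernel
# -> Adapter (optional second call to the External Server) -> Client.

def microkernel(d_i, cf1, af1, mf1, if_, ef1, mf2, ef2, af2, cf2):
    d_ic = cf1(d_i)                       # Client: CF1 -> dIC
    d_ia = af1(d_ic)                      # Adapter: AF1 -> dIA
    d_im = mf1(d_ia)                      # Microkernel: MF1 -> dIM (broadcast)
    d_oi, d_oe = if_(d_im), ef1(d_im)     # Internal IF and External EF1 respond
    d_om = mf2(d_oi, d_oe)                # Microkernel: MF2 -> dOM
    d_oe2 = ef2(d_om)                     # External Server: EF2 -> dOE'
    d_oa = af2(d_om, d_oe2)               # Adapter: AF2 -> dOA
    return cf2(d_oa)                      # Client: CF2 -> dO

ident = lambda d: d
out = microkernel(7, ident, ident, ident,
                  if_=lambda d: d + 1, ef1=lambda d: d + 2,
                  mf2=lambda a, b: a + b, ef2=lambda d: 0,
                  af2=lambda a, b: (a, b), cf2=ident)
```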
In the following, we verify the Microkernel pattern. We assume all data elements dI, dIC, dIA, dIA′, dIM, dOI, dOE, dOE′, dOM, dO, dOA are from a finite set ∆.
The state transitions of the Client module described by APTC are as follows.
C = ∑dI∈∆(rI(dI) ⋅ C2)
C2 = CF1 ⋅ C3
C3 = ∑dIC∈∆(sICA(dIC) ⋅ C4)
C4 = ∑dOA∈∆(rOCA(dOA) ⋅ C5)
C5 = CF2 ⋅ C6
C6 = ∑dO∈∆(sO(dO) ⋅ C)
The state transitions of the Adapter module described by APTC are as follows.
A = ∑dIC∈∆(rICA(dIC) ⋅ A2)
A2 = AF1 ⋅ A3
A3 = ∑dIA∈∆(sIAM(dIA) ⋅ A4)
A4 = ∑dOM∈∆(rOAM(dOM) ⋅ A5)
A5 = ∑dIA′∈∆(sIAE(dIA′) ⋅ A6)
A6 = ∑dOE′∈∆(rOAE(dOE′) ⋅ A7)
A7 = AF2 ⋅ A8
A8 = ∑dOA∈∆(sOCA(dOA) ⋅ A)
The state transitions of the Microkernel module described by APTC are as follows.
M = ∑dIA∈∆(rIAM(dIA) ⋅ M2)
M2 = MF1 ⋅ M3
M3 = ∑dIM∈∆(sIMI(dIM) ≬ sIEM(dIM) ⋅ M4)
M4 = ∑dOI,dOE∈∆(rOMI(dOI) ≬ rOEM(dOE) ⋅ M5)
M5 = MF2 ⋅ M6
M6 = ∑dOM∈∆(sOAM(dOM) ⋅ M)
The state transitions of the Internal Server described by APTC are as follows.
I = ∑dIM∈∆(rIMI(dIM) ⋅ I2)
I2 = IF ⋅ I3
I3 = ∑dOI∈∆(sOMI(dOI) ⋅ I)
The state transitions of the External Server described by APTC are as follows.
E = ∑dIM∈∆(rIEM(dIM) ⋅ E2)
E2 = EF1 ⋅ E3
E3 = ∑dOE∈∆(sOEM(dOE) ⋅ E4)
E4 = ∑dIA′∈∆(rIAE(dIA′) ⋅ E5)
E5 = EF2 ⋅ E6
E6 = ∑dOE′∈∆(sOAE(dOE′) ⋅ E)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions between the Client and the Adapter.
γ(rICA(dIC), sICA(dIC)) ≜ cICA(dIC)
γ(rOCA(dOA), sOCA(dOA)) ≜ cOCA(dOA)
There are two communication functions between the Adapter and the Microkernel as follows.
γ(rIAM(dIA), sIAM(dIA)) ≜ cIAM(dIA)
γ(rOAM(dOM), sOAM(dOM)) ≜ cOAM(dOM)
There are two communication functions between the Adapter and the External Server as follows.
γ(rIAE(dIA′), sIAE(dIA′)) ≜ cIAE(dIA′)
γ(rOAE(dOE′), sOAE(dOE′)) ≜ cOAE(dOE′)
There are two communication functions between the Internal Server and the Microkernel as
follows.
γ(rIMI(dIM), sIMI(dIM)) ≜ cIMI(dIM)
γ(rOMI(dOI), sOMI(dOI)) ≜ cOMI(dOI)
There are two communication functions between the External Server and the Microkernel as
follows.
γ(rIEM(dIM), sIEM(dIM)) ≜ cIEM(dIM)
γ(rOEM(dOE), sOEM(dOE)) ≜ cOEM(dOE)
Let all modules be in parallel, then the Microkernel pattern C A M I E can be presented
by the following process term.
τI(∂H(Θ(C ≬ A ≬ M ≬ I ≬ E))) = τI(∂H(C ≬ A ≬ M ≬ I ≬ E))
where H = {rICA(dIC), sICA(dIC), rOCA(dOA), sOCA(dOA), rIAM(dIA), sIAM(dIA), rOAM(dOM), sOAM(dOM), rIAE(dIA′), sIAE(dIA′), rOAE(dOE′), sOAE(dOE′), rIMI(dIM), sIMI(dIM), rOMI(dOI), sOMI(dOI), rIEM(dIM), sIEM(dIM), rOEM(dOE), sOEM(dOE) ∣ dI, dIC, dIA, dIA′, dIM, dOI, dOE, dOE′, dOM, dO, dOA ∈ ∆},
I = {cICA(dIC), cOCA(dOA), cIAM(dIA), cOAM(dOM), cIAE(dIA′), cOAE(dOE′), cIMI(dIM), cOMI(dOI), cIEM(dIM), cOEM(dOE), CF1, CF2, AF1, AF2, MF1, MF2, IF, EF1, EF2 ∣ dI, dIC, dIA, dIA′, dIM, dOI, dOE, dOE′, dOM, dO, dOA ∈ ∆}.
Then we get the following conclusion on the Microkernel pattern.
Theorem 3.9 (Correctness of the Microkernel pattern). The Microkernel pattern τI(∂H(C ≬
A≬M ≬ I ≬ E)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(C ≬ A ≬ M ≬ I ≬ E)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(C ≬ A ≬ M ≬ I ≬ E)),
that is, the Microkernel pattern τI(∂H(C ≬ A ≬ M ≬ I ≬ E)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
3.4.2 Verification of the Reflection Pattern
The Reflection pattern makes the system able to change its structure and behaviors dynamically. There are two levels in the Reflection pattern: one is the meta level, which encapsulates the information of system properties and makes the system self-aware; the other is the base level, which implements the concrete application logic. The meta level modules include the Metaobject Protocol and n Metaobjects. The Metaobject Protocol is used to configure the Metaobjects; it interacts with Metaobject i through the channels IMPi and OMPi, and it exchanges configuration information with the outside through the channels IM and OM. The Metaobject
Figure 22: Reflection pattern
encapsulates the system properties; it interacts with the Metaobject Protocol, and with the Component through the channels IMCi and OMCi. The base level modules include concrete Components, which interact with the Metaobjects, and with the outside through the channels IC and OC. As illustrated in Figure 22.
The typical process of the Reflection pattern is shown in Figure 23 and as follows.
1. The Metaobject Protocol receives the configuration information from the user through the
channel IM (the corresponding reading action is denoted rIM (dIM )), then processes the
information through a processing function PF1 and generates the configuration dIP , and
sends dIP to the Metaobject i (for 1 ≤ i ≤ n) through the channel IMPi (the corresponding sending action is denoted sIMPi(dIP));
2. The Metaobject i receives the configuration dIP from the Metaobject Protocol through the channel IMPi (the corresponding reading action is denoted rIMPi(dIP)), then configures the properties through a configuration function MFi1, and sends the configuration results dOMi1 to the Metaobject Protocol through the channel OMPi (the corresponding sending action is denoted sOMPi(dOMi1));
Figure 23: Typical process of Reflection pattern

3. The Metaobject Protocol receives the configuration results from the Metaobject i through the channel OMPi (the corresponding reading action is denoted rOMPi(dOMi1)), then processes the results through a processing function PF2 and generates the result dOM, and sends dOM to the outside through the channel OM (the corresponding sending action is denoted sOM(dOM));
4. The Component receives the invocation from the user through the channel IC (the corresponding reading action is denoted rIC(dIC)), then processes the invocation through a processing function CF1 and generates the invocation dIC, and sends dIC to the Metaobject i (for 1 ≤ i ≤ n) through the channel IMCi (the corresponding sending action is denoted sIMCi(dIC));
5. The Metaobject i receives the invocation dIC from the Component through the channel IMCi (the corresponding reading action is denoted rIMCi(dIC)), then computes through a computational function MFi2, and sends the computational results dOMi2 to the Component through the channel OMCi (the corresponding sending action is denoted sOMCi(dOMi2));
6. The Component receives the computational results from the Metaobject i through the channel OMCi (the corresponding reading action is denoted rOMCi(dOMi2)), then processes the results through a processing function CF2 and generates the result dOC, and sends dOC to the outside through the channel OC (the corresponding sending action is denoted sOC(dOC)).
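The six steps split into two independent flows: configuration (steps 1-3, through the Metaobject Protocol) and invocation (steps 4-6, through the Component). A minimal Python sketch with stand-in processing functions (an illustrative assumption, not the APTC model):

```python
# Sketch of the two Reflection flows across n Metaobjects.

def configure(d_im, n, pf1, mf1, pf2):
    d_ip = pf1(d_im)                            # Protocol: PF1 -> dIP (broadcast)
    results = [mf1(i, d_ip) for i in range(n)]  # Metaobject i: MFi1 -> dOMi1
    return pf2(results)                         # Protocol: PF2 -> dOM

def invoke(d_ic, n, cf1, mf2, cf2):
    d = cf1(d_ic)                               # Component: CF1 (broadcast)
    results = [mf2(i, d) for i in range(n)]     # Metaobject i: MFi2 -> dOMi2
    return cf2(results)                         # Component: CF2 -> dOC

d_om = configure("cfg", 3, lambda d: d, lambda i, d: (i, d), len)
d_oc = invoke(4, 2, lambda d: d, lambda i, d: d * (i + 1), sum)
```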
In the following, we verify the Reflection pattern. We assume all data elements dIM, dIC, dIP, dOMi1, dOMi2, dOM, dOC (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Metaobject Protocol module described by APTC are as follows.
P = ∑dIM∈∆(rIM(dIM) ⋅ P2)
P2 = PF1 ⋅ P3
P3 = ∑dIP∈∆(sIMP1(dIP) ≬ ⋯ ≬ sIMPn(dIP) ⋅ P4)
P4 = ∑dOM11,⋯,dOMn1∈∆(rOMP1(dOM11) ≬ ⋯ ≬ rOMPn(dOMn1) ⋅ P5)
P5 = PF2 ⋅ P6
P6 = ∑dOM∈∆(sOM(dOM) ⋅ P)
The state transitions of the Component described by APTC are as follows.
C = ∑dIC∈∆(rIC(dIC) ⋅ C2)
C2 = CF1 ⋅ C3
C3 = ∑dIC∈∆(sIMC1(dIC) ≬ ⋯ ≬ sIMCn(dIC) ⋅ C4)
C4 = ∑dOM12,⋯,dOMn2∈∆(rOMC1(dOM12) ≬ ⋯ ≬ rOMCn(dOMn2) ⋅ C5)
C5 = CF2 ⋅ C6
C6 = ∑dOC∈∆(sOC(dOC) ⋅ C)
The state transitions of the Metaobject i described by APTC are as follows.
Mi = ∑dIP∈∆(rIMPi(dIP) ⋅ Mi2)
Mi2 = MFi1 ⋅ Mi3
Mi3 = ∑dOMi1∈∆(sOMPi(dOMi1) ⋅ Mi4)
Mi4 = ∑dIC∈∆(rIMCi(dIC) ⋅ Mi5)
Mi5 = MFi2 ⋅ Mi6
Mi6 = ∑dOMi2∈∆(sOMCi(dOMi2) ⋅ Mi)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Metaobject i for 1 ≤ i ≤ n.
γ(rIMPi(dIP), sIMPi(dIP)) ≜ cIMPi(dIP)
γ(rOMPi(dOMi1), sOMPi(dOMi1)) ≜ cOMPi(dOMi1)
γ(rIMCi(dIC), sIMCi(dIC)) ≜ cIMCi(dIC)
γ(rOMCi(dOMi2), sOMCi(dOMi2)) ≜ cOMCi(dOMi2)
Let all modules be in parallel, then the Reflection pattern C P M1⋯Mi⋯Mn can be presented
by the following process term.
τI(∂H(Θ(C ≬ P ≬M1 ≬⋯≬Mi ≬ ⋯≬Mn))) = τI(∂H(C ≬ P ≬M1 ≬⋯ ≬Mi ≬⋯ ≬Mn))
where H = {rIMPi(dIP), sIMPi(dIP), rOMPi(dOMi1), sOMPi(dOMi1), rIMCi(dIC), sIMCi(dIC), rOMCi(dOMi2), sOMCi(dOMi2) ∣ dIM, dIC, dIP, dOMi1, dOMi2, dOM, dOC ∈ ∆} for 1 ≤ i ≤ n,
I = {cIMPi(dIP), cOMPi(dOMi1), cIMCi(dIC), cOMCi(dOMi2), PF1, PF2, CF1, CF2, MFi1, MFi2 ∣ dIM, dIC, dIP, dOMi1, dOMi2, dOM, dOC ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Reflection pattern.
Theorem 3.10 (Correctness of the Reflection pattern). The Reflection pattern τI(∂H(C ≬ P ≬
M1 ≬⋯ ≬Mi ≬⋯≬Mn)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(C ≬ P ≬ M1 ≬ ⋯ ≬ Mi ≬ ⋯ ≬ Mn)) = ∑dIM,dIC,dOM,dOC∈∆(rIM(dIM) ∥ rIC(dIC) ⋅ sOM(dOM) ∥ sOC(dOC)) ⋅ τI(∂H(C ≬ P ≬ M1 ≬ ⋯ ≬ Mi ≬ ⋯ ≬ Mn)),
that is, the Reflection pattern τI(∂H(C ≬ P ≬ M1 ≬ ⋯ ≬ Mi ≬ ⋯ ≬ Mn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
Figure 24: Whole-Part pattern
4 Verification of Design Patterns
Design patterns are middle-level patterns: they are lower than the architectural patterns and higher than the programming language-specific idioms. Design patterns describe the architecture of the subsystems.
In this chapter, we verify the five categories of design patterns. In section 4.1, we verify the
patterns related to structural decomposition. In section 4.2, we verify the patterns related to
organization of work. We verify the patterns related to access control in section 4.3 and verify
management oriented patterns in section 4.4. Finally, we verify the communication oriented
patterns in section 4.5.
4.1 Structural Decomposition
In this subsection, we verify structural decomposition related patterns, including the Whole-Part
pattern.
4.1.1 Verification of the Whole-Part Pattern
The Whole-Part pattern is used to divide application logic into Parts and aggregate the Parts into a Whole. In this pattern, there are a Whole module and n Part modules. The Whole module interacts with outside through the channels I and O, and with Part i (for 1 ≤ i ≤ n) through the channels IWPi and OWPi, as illustrated in Figure 24.
The typical process of the Whole-Part pattern is shown in Figure 25 and as follows.
1. The Whole receives the request dI from outside through the channel I (the corresponding
reading action is denoted rI(dI)), then processes the request through a processing function
WF1 and generates the request dIW, and sends dIW to the Part i through the channel IWPi (the corresponding sending action is denoted sIWPi(dIW));
Figure 25: Typical process of Whole-Part pattern
2. The Part i receives the request dIW from the Whole through the channel IWPi (the corresponding reading action is denoted rIWPi(dIW)), then processes the request through a processing function PFi and generates the response dOPi, and sends the response to the Whole through the channel OWPi (the corresponding sending action is denoted sOWPi(dOPi));
3. The Whole receives the response dOPi from the Part i through the channel OWPi (the corresponding reading action is denoted rOWPi(dOPi)), then processes the response through a processing function WF2 and generates the response dO, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
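These three steps are a fan-out/fan-in. As a minimal Python sketch (an illustrative assumption; wf1, pf and wf2 stand in for WF1, PFi and WF2):

```python
# Sketch of the Whole-Part flow: the Whole fans the request out to the
# Parts and aggregates their responses.

def whole_part(d_i, n, wf1, pf, wf2):
    d_iw = wf1(d_i)                           # Whole: WF1 -> dIW (broadcast)
    d_ops = [pf(i, d_iw) for i in range(n)]   # Part i: PFi -> dOPi
    return wf2(d_ops)                         # Whole: WF2 -> dO

# Usage: split a job of size 8 into 4 equal parts and recombine.
out = whole_part(8, 4, lambda d: d // 4, lambda i, d: d, sum)
```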
In the following, we verify the Whole-Part pattern. We assume all data elements dI , dIW , dOPi,
dO (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Whole module described by APTC are as follows.
W = ∑dI∈∆(rI(dI) ⋅ W2)
W2 = WF1 ⋅ W3
W3 = ∑dIW∈∆(sIWP1(dIW) ≬ ⋯ ≬ sIWPn(dIW) ⋅ W4)
W4 = ∑dOP1,⋯,dOPn∈∆(rOWP1(dOP1) ≬ ⋯ ≬ rOWPn(dOPn) ⋅ W5)
W5 = WF2 ⋅ W6
W6 = ∑dO∈∆(sO(dO) ⋅ W)
The state transitions of the Part i described by APTC are as follows.
Pi = ∑dIW∈∆(rIWPi(dIW) ⋅ Pi2)
Pi2 = PFi ⋅ Pi3
Pi3 = ∑dOPi∈∆(sOWPi(dOPi) ⋅ Pi)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Part i for 1 ≤ i ≤ n.
γ(rIWPi(dIW), sIWPi(dIW)) ≜ cIWPi(dIW)
γ(rOWPi(dOPi), sOWPi(dOPi)) ≜ cOWPi(dOPi)
Let all modules be in parallel, then the Whole-Part pattern W P1⋯Pi⋯Pn can be presented by the following process term.
τI(∂H(Θ(W ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn))) = τI(∂H(W ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn))
where H = {rIWPi(dIW), sIWPi(dIW), rOWPi(dOPi), sOWPi(dOPi) ∣ dI, dIW, dOPi, dO ∈ ∆} for 1 ≤ i ≤ n,
I = {cIWPi(dIW), cOWPi(dOPi), WF1, WF2, PFi ∣ dI, dIW, dOPi, dO ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Whole-Part pattern.
Theorem 4.1 (Correctness of the Whole-Part pattern). The Whole-Part pattern τI(∂H(W ≬
P1 ≬⋯ ≬ Pi ≬⋯ ≬ Pn)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(W ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(W ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn)),
that is, the Whole-Part pattern τI(∂H(W ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
4.2 Organization of Work
4.2.1 Verification of the Master-Slave Pattern
The Master-Slave pattern is used to implement large-scale computation. In this pattern, there are a Master module and n Slave modules. The Slaves are used to implement the concrete computation, and the Master is used to distribute computational tasks and collect the computational results. The Master module interacts with outside through the channels I and O, and with Slave i (for 1 ≤ i ≤ n) through the channels IMSi and OMSi, as illustrated in Figure 26.
The typical process of the Master-Slave pattern is shown in Figure 27 and as follows.
1. The Master receives the request dI from outside through the channel I (the corresponding
reading action is denoted rI(dI)), then processes the request through a processing function
MF1 and generates the request dIM, and sends dIM to the Slave i through the channel IMSi (the corresponding sending action is denoted sIMSi(dIM));
2. The Slave i receives the request dIM from the Master through the channel IMSi (the corresponding reading action is denoted rIMSi(dIM)), then processes the request through a processing function SFi and generates the response dOSi, and sends the response to the Master through the channel OMSi (the corresponding sending action is denoted sOMSi(dOSi));
Figure 26: Master-Slave pattern
Figure 27: Typical process of Master-Slave pattern
3. The Master receives the response dOSi from the Slave i through the channel OMSi (the corresponding reading action is denoted rOMSi(dOSi)), then processes the response through a processing function MF2 and generates the response dO, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
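The parallel dispatch sIMS1(dIM) ≬ ⋯ ≬ sIMSn(dIM) can be mirrored by actually running the Slaves concurrently. A minimal Python sketch (an illustrative assumption; mf1, sf and mf2 stand in for MF1, SFi and MF2):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the Master-Slave flow: the Master distributes dIM to all
# Slaves in parallel and aggregates their responses dOSi.

def master_slave(d_i, n, mf1, sf, mf2):
    d_im = mf1(d_i)                              # Master: MF1 -> dIM
    with ThreadPoolExecutor(max_workers=n) as pool:
        d_oss = list(pool.map(lambda i: sf(i, d_im), range(n)))  # Slave i: SFi
    return mf2(d_oss)                            # Master: MF2 -> dO

# Usage: three slaves scale the task, the master sums the partial results.
out = master_slave(2, 3, lambda d: d, lambda i, d: (i + 1) * d, sum)
```

Note that pool.map preserves the slave order, so the aggregation sees the responses as dOS1..dOSn regardless of completion order.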
In the following, we verify the Master-Slave pattern. We assume all data elements dI , dIM , dOSi,
dO (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Master module described by APTC are as follows.
M = ∑dI∈∆(rI(dI) ⋅ M2)
M2 = MF1 ⋅ M3
M3 = ∑dIM∈∆(sIMS1(dIM) ≬ ⋯ ≬ sIMSn(dIM) ⋅ M4)
M4 = ∑dOS1,⋯,dOSn∈∆(rOMS1(dOS1) ≬ ⋯ ≬ rOMSn(dOSn) ⋅ M5)
M5 = MF2 ⋅ M6
M6 = ∑dO∈∆(sO(dO) ⋅ M)
The state transitions of the Slave i described by APTC are as follows.
Si = ∑dIM∈∆(rIMSi(dIM) ⋅ Si2)
Si2 = SFi ⋅ Si3
Si3 = ∑dOSi∈∆(sOMSi(dOSi) ⋅ Si)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Slave i for 1 ≤ i ≤ n.
γ(rIMSi(dIM), sIMSi(dIM)) ≜ cIMSi(dIM)
γ(rOMSi(dOSi), sOMSi(dOSi)) ≜ cOMSi(dOSi)
Let all modules be in parallel, then the Master-Slave pattern M S1⋯Si⋯Sn can be presented
by the following process term.
τI(∂H(Θ(M ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn))) = τI(∂H(M ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn))
where H = {rIMSi(dIM), sIMSi(dIM), rOMSi(dOSi), sOMSi(dOSi) ∣ dI, dIM, dOSi, dO ∈ ∆} for 1 ≤ i ≤ n,
I = {cIMSi(dIM), cOMSi(dOSi), MF1, MF2, SFi ∣ dI, dIM, dOSi, dO ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Master-Slave pattern.
Theorem 4.2 (Correctness of the Master-Slave pattern). The Master-Slave pattern τI(∂H(M ≬
S1 ≬⋯ ≬ Si ≬⋯≬ Sn)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(M ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(M ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)),
that is, the Master-Slave pattern τI(∂H(M ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
4.3 Access Control
4.3.1 Verification of the Proxy Pattern
The Proxy pattern is used to decouple access to original components through a proxy. In this pattern, there are a Proxy module and n Original modules. The Originals are used to implement the concrete computation, and the Proxy is used to decouple access to the Originals. The Proxy
Figure 28: Proxy pattern
module interacts with outside through the channels I and O, and with Original i (for 1 ≤ i ≤ n) through the channels IPOi and OPOi, as illustrated in Figure 28.
The typical process of the Proxy pattern is shown in Figure 29 and as follows.
1. The Proxy receives the request dI from outside through the channel I (the corresponding
reading action is denoted rI(dI)), then processes the request through a processing function
PF1 and generates the request dIP, and sends dIP to the Original i through the channel IPOi (the corresponding sending action is denoted sIPOi(dIP));
2. The Original i receives the request dIP from the Proxy through the channel IPOi (the corresponding reading action is denoted rIPOi(dIP)), then processes the request through a processing function OFi and generates the response dOOi, and sends the response to the Proxy through the channel OPOi (the corresponding sending action is denoted sOPOi(dOOi));
3. The Proxy receives the response dOOi from the Original i through the channel OPOi (the corresponding reading action is denoted rOPOi(dOOi)), then processes the response through a processing function PF2 and generates the response dO, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
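The Proxy adds a level of indirection in front of the Originals; one common use of that indirection is caching. A minimal Python sketch (the cache is an illustrative assumption, not part of the verified pattern; pf1, of_ and pf2 stand in for PF1, OFi and PF2):

```python
# Sketch of the Proxy flow with a memoizing Proxy in front of n Originals.

def make_proxy(n, pf1, of_, pf2):
    cache = {}
    def proxy(d_i):
        if d_i not in cache:                        # only touch the Originals once
            d_ip = pf1(d_i)                         # Proxy: PF1 -> dIP
            d_oos = [of_(i, d_ip) for i in range(n)]  # Original i: OFi -> dOOi
            cache[d_i] = pf2(d_oos)                 # Proxy: PF2 -> dO
        return cache[d_i]
    return proxy

p = make_proxy(2, lambda d: d, lambda i, d: d + i, tuple)
```

A repeated request with the same dI is then answered by the Proxy alone, without contacting the Originals again.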
In the following, we verify the Proxy pattern. We assume all data elements dI , dIP , dOOi, dO
(for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Proxy module described by APTC are as follows.
P = ∑dI∈∆(rI(dI) ⋅ P2)
P2 = PF1 ⋅ P3
P3 = ∑dIP∈∆(sIPO1(dIP) ≬ ⋯ ≬ sIPOn(dIP) ⋅ P4)
P4 = ∑dOO1,⋯,dOOn∈∆(rOPO1(dOO1) ≬ ⋯ ≬ rOPOn(dOOn) ⋅ P5)
P5 = PF2 ⋅ P6
P6 = ∑dO∈∆(sO(dO) ⋅ P)
Figure 29: Typical process of Proxy pattern
The state transitions of the Original i described by APTC are as follows.
Oi = ∑dIP∈∆(rIPOi(dIP) ⋅ Oi2)
Oi2 = OFi ⋅ Oi3
Oi3 = ∑dOOi∈∆(sOPOi(dOOi) ⋅ Oi)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Original i for 1 ≤ i ≤ n.
γ(rIPOi(dIP), sIPOi(dIP)) ≜ cIPOi(dIP)
γ(rOPOi(dOOi), sOPOi(dOOi)) ≜ cOPOi(dOOi)
Let all modules be in parallel, then the Proxy pattern P O1⋯Oi⋯On can be presented by the
following process term.
τI(∂H(Θ(P ≬ O1 ≬ ⋯ ≬ Oi ≬ ⋯ ≬ On))) = τI(∂H(P ≬ O1 ≬ ⋯ ≬ Oi ≬ ⋯ ≬ On))
where H = {rIPOi(dIP), sIPOi(dIP), rOPOi(dOOi), sOPOi(dOOi) ∣ dI, dIP, dOOi, dO ∈ ∆} for 1 ≤ i ≤ n,
I = {cIPOi(dIP), cOPOi(dOOi), PF1, PF2, OFi ∣ dI, dIP, dOOi, dO ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Proxy pattern.
Theorem 4.3 (Correctness of the Proxy pattern). The Proxy pattern τI(∂H(P ≬ O1 ≬ ⋯ ≬
Oi ≬⋯≬ On)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
Figure 30: Command Processor pattern
τI(∂H(P ≬ O1 ≬ ⋯ ≬ Oi ≬ ⋯ ≬ On)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(P ≬ O1 ≬ ⋯ ≬ Oi ≬ ⋯ ≬ On)),
that is, the Proxy pattern τI(∂H(P ≬ O1 ≬ ⋯ ≬ Oi ≬ ⋯ ≬ On)) can exhibit desired external behaviors.
For the details of proof, please refer to section 2.8, and we omit it.
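As an informal illustration only (not part of the APTC formalization), the Proxy's externally visible behavior — read a request on I, fan it out to the n Originals, aggregate their responses, and emit dO on O — can be sketched in Python; the function names and string tags standing in for PF1, PF2 and the OFi are hypothetical.

```python
# Illustrative sketch only: channels are modeled as function calls and the
# processing functions PF1, PF2, OF_i as string transformations.
def OF(i, d_IP):
    # Original i: process the forwarded request d_IP and return d_OOi
    return f"OO{i}({d_IP})"

def proxy(d_I, n):
    d_IP = f"IP({d_I})"                             # PF1: preprocess the request
    d_OOs = [OF(i, d_IP) for i in range(1, n + 1)]  # fan out to the n Originals
    return f"O({','.join(d_OOs)})"                  # PF2: aggregate into d_O
```

For n = 2 and input "req", proxy returns a single aggregated response, matching the external behavior ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) derived in the proof.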
4.4 Management
4.4.1 Verification of the Command Processor Pattern
The Command Processor pattern is used to decouple the request and execution of a service. In
this pattern, there are a Controller module, a Command Processor module, and n Command
modules and n Supplier modules. The Supplier is used to implement concrete computation, the
Command is used to encapsulate a Supplier into a command, and the Command Processor is
used to manage Commands. The Controller module interacts with the outside through the channels
I and O, and with the Command Processor through the channels ICP and OCP. The Command
Processor interacts with the Command i (for 1 ≤ i ≤ n) through the channels IPCi and OPCi, and
the Command i interacts with the Supplier i through the channels ICSi and OCSi, as illustrated
in Figure 30.
The typical process of the Command Processor pattern is shown in Figure 31 and as follows.
1. The Controller receives the request dI from the outside through the channel I (the corresponding reading action is denoted rI(dI)), then processes the request through a processing function CF1 and generates the request dIP, and sends dIP to the Command Processor through the channel ICP (the corresponding sending action is denoted sICP(dIP));

2. The Command Processor receives the request dIP from the Controller through the channel ICP (the corresponding reading action is denoted rICP(dIP)), then processes the request through a processing function PF1 and generates the request dIComi, and sends dIComi to the Command i through the channel IPCi (the corresponding sending action is denoted sIPCi(dIComi));

3. The Command i receives the request dIComi from the Command Processor through the channel IPCi (the corresponding reading action is denoted rIPCi(dIComi)), then processes the request through a processing function ComFi1 and generates the request dISi, and sends the request to the Supplier i through the channel ICSi (the corresponding sending action is denoted sICSi(dISi));

4. The Supplier i receives the request dISi from the Command i through the channel ICSi (the corresponding reading action is denoted rICSi(dISi)), then processes the request through a processing function SFi and generates the response dOSi, and sends the response to the Command i through the channel OCSi (the corresponding sending action is denoted sOCSi(dOSi));

5. The Command i receives the response dOSi from the Supplier i through the channel OCSi (the corresponding reading action is denoted rOCSi(dOSi)), then processes the response through a processing function ComFi2 and generates the response dOComi, and sends the response to the Command Processor through the channel OPCi (the corresponding sending action is denoted sOPCi(dOComi));

6. The Command Processor receives the response dOComi from the Command i through the channel OPCi (the corresponding reading action is denoted rOPCi(dOComi)), then processes the response through a processing function PF2 and generates the response dOP, and sends dOP to the Controller through the channel OCP (the corresponding sending action is denoted sOCP(dOP));

7. The Controller receives the response dOP from the Command Processor through the channel OCP (the corresponding reading action is denoted rOCP(dOP)), then processes the response through a processing function CF2 and generates the response dO, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
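The seven steps above can be sketched as nested calls in Python (an illustration only, not the paper's formalization; all names are hypothetical stand-ins for the processing functions CF1/CF2, PF1/PF2, ComFi1/ComFi2 and SFi).

```python
# Illustrative sketch only: the round trip Controller -> Command Processor
# -> Command_i -> Supplier_i and back, as nested function calls.
def supplier(i, d_ISi):
    return f"OS{i}({d_ISi})"                 # SF_i: the concrete computation

def command(i, d_IComi):
    d_ISi = f"IS{i}({d_IComi})"              # ComF_i1: unwrap into a Supplier request
    return f"OCom{i}({supplier(i, d_ISi)})"  # ComF_i2: wrap the Supplier's response

def command_processor(d_IP, n):
    results = [command(i, f"ICom{i}({d_IP})") for i in range(1, n + 1)]  # PF1 fans out
    return f"OP({','.join(results)})"        # PF2: aggregate the command results

def controller(d_I, n=2):
    d_OP = command_processor(f"IP({d_I})", n)  # CF1, then the processor round trip
    return f"O({d_OP})"                        # CF2: the final response d_O
```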
In the following, we verify the Command Processor pattern. We assume all data elements dI,
dIP, dIComi, dISi, dOSi, dOComi, dOP, dO (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Controller module described by APTC are as follows.
C = ∑dI∈∆(rI(dI) ⋅ C2)
C2 = CF1 ⋅ C3
C3 = ∑dIP∈∆(sICP(dIP) ⋅ C4)
Figure 31: Typical process of Command Processor pattern
C4 = ∑dOP∈∆(rOCP(dOP) ⋅ C5)
C5 = CF2 ⋅ C6
C6 = ∑dO∈∆(sO(dO) ⋅ C)
The state transitions of the Command Processor module described by APTC are as follows.
P = ∑dIP∈∆(rICP(dIP) ⋅ P2)
P2 = PF1 ⋅ P3
P3 = ∑dICom1,⋯,dIComn∈∆(sIPC1(dICom1) ≬ ⋯ ≬ sIPCn(dIComn) ⋅ P4)
P4 = ∑dOCom1,⋯,dOComn∈∆(rOPC1(dOCom1) ≬ ⋯ ≬ rOPCn(dOComn) ⋅ P5)
P5 = PF2 ⋅ P6
P6 = ∑dOP∈∆(sOCP(dOP) ⋅ P)
The state transitions of the Command i described by APTC are as follows.
Comi = ∑dIComi∈∆(rIPCi(dIComi) ⋅ Comi2)
Comi2 = ComFi1 ⋅ Comi3
Comi3 = ∑dISi∈∆(sICSi(dISi) ⋅ Comi4)
Comi4 = ∑dOSi∈∆(rOCSi(dOSi) ⋅ Comi5)
Comi5 = ComFi2 ⋅ Comi6
Comi6 = ∑dOComi∈∆(sOPCi(dOComi) ⋅ Comi)
The state transitions of the Supplier i described by APTC are as follows.
Si = ∑dISi∈∆(rICSi(dISi) ⋅ Si2)
Si2 = SFi ⋅ Si3
Si3 = ∑dOSi∈∆(sOCSi(dOSi) ⋅ Si)
The sending action and the reading action of the same data through the same channel can
communicate with each other; otherwise, a deadlock δ will be caused. We define the following
communication functions between the Controller and the Command Processor.
γ(rICP(dIP), sICP(dIP)) ≜ cICP(dIP)
γ(rOCP(dOP), sOCP(dOP)) ≜ cOCP(dOP)
There are two communication functions between the Command Processor and the Command i
for 1 ≤ i ≤ n.
γ(rIPCi(dIComi), sIPCi(dIComi)) ≜ cIPCi(dIComi)
γ(rOPCi(dOComi), sOPCi(dOComi)) ≜ cOPCi(dOComi)
There are two communication functions between the Supplier i and the Command i for 1 ≤ i ≤ n.
γ(rICSi(dISi), sICSi(dISi)) ≜ cICSi(dISi)
γ(rOCSi(dOSi), sOCSi(dOSi)) ≜ cOCSi(dOSi)
Let all modules be in parallel; then the Command Processor pattern
C P Com1 ⋯ Comi ⋯ Comn S1 ⋯ Si ⋯ Sn
can be presented by the following process term.
τI(∂H(Θ(C ≬ P ≬ Com1 ≬ ⋯ ≬ Comi ≬ ⋯ ≬ Comn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn))) = τI(∂H(C ≬ P ≬ Com1 ≬ ⋯ ≬ Comi ≬ ⋯ ≬ Comn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn))
where H = {rICP(dIP), sICP(dIP), rOCP(dOP), sOCP(dOP), rIPCi(dIComi), sIPCi(dIComi), rOPCi(dOComi), sOPCi(dOComi), rICSi(dISi), sICSi(dISi), rOCSi(dOSi), sOCSi(dOSi) ∣ dI, dIP, dIComi, dISi, dOSi, dOComi, dOP, dO ∈ ∆} for 1 ≤ i ≤ n,
I = {cICP(dIP), cOCP(dOP), cIPCi(dIComi), cOPCi(dOComi), cICSi(dISi), cOCSi(dOSi), CF1, CF2, PF1, PF2, ComFi1, ComFi2, SFi ∣ dI, dIP, dIComi, dISi, dOSi, dOComi, dOP, dO ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Command Processor pattern.
Figure 32: View Handler pattern
Theorem 4.4 (Correctness of the Command Processor pattern). The Command Processor
pattern τI(∂H(C ≬ P ≬ Com1 ≬ ⋯ ≬ Comi ≬ ⋯ ≬ Comn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) can
exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC,
we can prove that
τI(∂H(C ≬ P ≬ Com1 ≬ ⋯ ≬ Comi ≬ ⋯ ≬ Comn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(C ≬ P ≬ Com1 ≬ ⋯ ≬ Comi ≬ ⋯ ≬ Comn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)),
that is, the Command Processor pattern τI(∂H(C ≬ P ≬ Com1 ≬ ⋯ ≬ Comi ≬ ⋯ ≬ Comn ≬
S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
4.4.2 Verification of the View Handler Pattern
The View Handler pattern is used to manage all views of the system; it has three components:
the Supplier, the Views, and the ViewHandler. The Supplier is used to contain the data and
encapsulate the core functionalities; the Views are used to show the computational results to the
user; and the ViewHandler interacts between the system and the user, accepts the instructions,
and controls the Supplier and the Views. The ViewHandler receives the instructions from the user
through the channel I, then it sends the instructions to the Supplier through the channel V S
and to the View i through the channel V Vi for 1 ≤ i ≤ n; the Supplier receives the instructions
from the ViewHandler, updates the data, computes the results, and sends the results to the
View i through the channel SVi for 1 ≤ i ≤ n; when the View i receives the results from the
Supplier, it generates or updates the view to the user, as illustrated in Figure 32.
The typical process of the View Handler pattern is shown in Figure 33 and as follows.
Figure 33: Typical process of View Handler pattern
1. The ViewHandler receives the instructions dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), processes the instructions through a processing function V HF, and generates the instructions to the Supplier dIS and those to the View i dIVi for 1 ≤ i ≤ n; it sends dIS to the Supplier through the channel V S (the corresponding sending action is denoted sV S(dIS)) and sends dIVi to the View i through the channel V Vi (the corresponding sending action is denoted sV Vi(dIVi));

2. The Supplier receives the instructions from the ViewHandler through the channel V S (the corresponding reading action is denoted rV S(dIS)), processes the instructions through a processing function SF, and generates the computational results to the View i (for 1 ≤ i ≤ n), which are denoted dOSi; then it sends the results to the View i through the channel SVi (the corresponding sending action is denoted sSVi(dOSi));

3. The View i (for 1 ≤ i ≤ n) receives the instructions from the ViewHandler through the channel V Vi (the corresponding reading action is denoted rV Vi(dIVi)), and processes the instructions through a processing function V Fi1 to make ready to receive the computational results from the Supplier; then it receives the computational results from the Supplier through the channel SVi (the corresponding reading action is denoted rSVi(dOSi)), processes the results through a processing function V Fi2, generates the output dOi, and sends the output through the channel Oi (the corresponding sending action is denoted sOi(dOi)).
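As an informal illustration (not part of the APTC model), one round of this flow can be sketched in Python; all names are hypothetical stand-ins for VHF, SF and the VFi1/VFi2 functions.

```python
# Illustrative sketch only: the ViewHandler fans instructions out to the
# Supplier and the n Views, and each View combines its instruction with the
# Supplier's result into an output d_Oi.
def view_handler(d_I, n=2):
    d_IS = f"IS({d_I})"                                  # VHF: instruction for the Supplier
    d_IVs = [f"IV{i}({d_I})" for i in range(1, n + 1)]   # VHF: instructions for the Views
    d_OSs = [f"OS{i}({d_IS})" for i in range(1, n + 1)]  # SF: one result per View
    # VF_i1 consumes the instruction, VF_i2 turns the result into the output
    return [f"O{i}({d_IVs[i - 1]},{d_OSs[i - 1]})" for i in range(1, n + 1)]
```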
In the following, we verify the View Handler pattern. We assume all data elements dI, dIS, dIVi,
dOSi, dOi (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the ViewHandler module described by APTC are as follows.
V H = ∑dI∈∆(rI(dI) ⋅ V H2)
V H2 = V HF ⋅ V H3
V H3 = ∑dIS∈∆(sV S(dIS) ⋅ V H4)
V H4 = ∑dIV1,⋯,dIVn∈∆(sV V1(dIV1) ≬ ⋯ ≬ sV Vn(dIVn) ⋅ V H)
The state transitions of the Supplier described by APTC are as follows.
S = ∑dIS∈∆(rV S(dIS) ⋅ S2)
S2 = SF ⋅ S3
S3 = ∑dOS1,⋯,dOSn∈∆(sSV1(dOS1) ≬ ⋯ ≬ sSVn(dOSn) ⋅ S)
The state transitions of the View i described by APTC are as follows.
Vi = ∑dIVi∈∆(rV Vi(dIVi) ⋅ Vi2)
Vi2 = V Fi1 ⋅ Vi3
Vi3 = ∑dOSi∈∆(rSVi(dOSi) ⋅ Vi4)
Vi4 = V Fi2 ⋅ Vi5
Vi5 = ∑dOi∈∆(sOi(dOi) ⋅ Vi)
The sending action and the reading action of the same data through the same channel can
communicate with each other; otherwise, a deadlock δ will be caused. We define the following
communication functions of the View i for 1 ≤ i ≤ n.
γ(rV Vi(dIVi), sV Vi(dIVi)) ≜ cV Vi(dIVi)
γ(rSVi(dOSi), sSVi(dOSi)) ≜ cSVi(dOSi)
There is one communication function between the ViewHandler and the Supplier, as follows.
γ(rV S(dIS), sV S(dIS)) ≜ cV S(dIS)
Let all modules be in parallel; then the View Handler pattern V H S V1 ⋯ Vi ⋯ Vn can be
presented by the following process term.
τI(∂H(Θ(V H ≬ S ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn))) = τI(∂H(V H ≬ S ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn))
where H = {rV Vi(dIVi), sV Vi(dIVi), rSVi(dOSi), sSVi(dOSi), rV S(dIS), sV S(dIS) ∣ dI, dIS, dIVi, dOSi, dOi ∈ ∆} for 1 ≤ i ≤ n,
I = {cV Vi(dIVi), cSVi(dOSi), cV S(dIS), V HF, SF, V F11, V F12, ⋯, V Fn1, V Fn2 ∣ dI, dIS, dIVi, dOSi, dOi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the View Handler pattern.
Theorem 4.5 (Correctness of the View Handler pattern). The View Handler pattern τI(∂H(V H ≬
S ≬ V1 ≬⋯ ≬ Vi ≬⋯ ≬ Vn)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC,
we can prove that
τI(∂H(V H ≬ S ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn)) = ∑dI,dO1,⋯,dOn∈∆(rI(dI) ⋅ sO1(dO1) ∥ ⋯ ∥ sOi(dOi) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(V H ≬ S ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn)),
that is, the View Handler pattern τI(∂H(V H ≬ S ≬ V1 ≬ ⋯ ≬ Vi ≬ ⋯ ≬ Vn)) can exhibit
desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
Figure 34: Forwarder-Receiver pattern
4.5 Communication
4.5.1 Verification of the Forwarder-Receiver Pattern
The Forwarder-Receiver pattern decouples the communication of two communicating peers.
There are six modules in the Forwarder-Receiver pattern: the two Peers, the two Forwarders,
and the two Receivers. The Peers interact with the user through the channels I1, I2 and O1, O2,
and with the Forwarders through the channels PF1 and PF2. The Receivers interact with the
Forwarders through the channels FR1 and FR2, and with the Peers through the channels RP1
and RP2, as illustrated in Figure 34.
The typical process of the Forwarder-Receiver pattern is shown in Figure 35 and as follows.
1. The Peer 1 receives the request dI1 from the user through the channel I1 (the corresponding reading action is denoted rI1(dI1)), then processes the request dI1 through a processing function P1F1, and sends the processed request dIF1 to the Forwarder 1 through the channel PF1 (the corresponding sending action is denoted sPF1(dIF1));

2. The Forwarder 1 receives dIF1 from the Peer 1 through the channel PF1 (the corresponding reading action is denoted rPF1(dIF1)), then processes the request through a processing function F1F, and generates and sends the processed request dIR2 to the Receiver 2 through the channel FR1 (the corresponding sending action is denoted sFR1(dIR2));

3. The Receiver 2 receives the request dIR2 from the Forwarder 1 through the channel FR1 (the corresponding reading action is denoted rFR1(dIR2)), then processes the request through a processing function R2F, and generates and sends the processed request dIP2 to the Peer 2 through the channel RP1 (the corresponding sending action is denoted sRP1(dIP2));

4. The Peer 2 receives the request dIP2 from the Receiver 2 through the channel RP1 (the corresponding reading action is denoted rRP1(dIP2)), then processes the request and generates the response dO2 through a processing function P2F2, and sends the response to the outside through the channel O2 (the corresponding sending action is denoted sO2(dO2)).
Figure 35: Typical process of Forwarder-Receiver pattern
There is another symmetric process from Peer 2 to Peer 1; we omit it.
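The Peer 1 → Peer 2 direction can be sketched as a chain of Python functions (an illustration only, not the paper's model; the names stand in for the processing functions P1F1, F1F, R2F and P2F2).

```python
# Illustrative sketch only: the four processing functions of the
# Peer 1 -> Forwarder 1 -> Receiver 2 -> Peer 2 chain as function calls.
def peer1_send(d_I1):
    return f"IF1({d_I1})"    # P1F1: prepare the outgoing request

def forwarder1(d_IF1):
    return f"IR2({d_IF1})"   # F1F: marshal for the remote side

def receiver2(d_IR2):
    return f"IP2({d_IR2})"   # R2F: unmarshal for Peer 2

def peer2_respond(d_IP2):
    return f"O2({d_IP2})"    # P2F2: produce the response d_O2

d_O2 = peer2_respond(receiver2(forwarder1(peer1_send("req"))))
```

The symmetric Peer 2 → Peer 1 chain would use the F2, R1 counterparts in the same way.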
In the following, we verify the Forwarder-Receiver pattern. We assume all data elements dI1, dI2,
dIF1, dIF2, dIR1, dIR2, dIP1, dIP2, dO1, dO2 are from a finite set ∆. We only give the transitions
of the first process.
The state transitions of the Peer 1 module described by APTC are as follows.
P1 = ∑dI1∈∆(rI1(dI1) ⋅ P12)
P12 = P1F1 ⋅ P13
P13 = ∑dIF1∈∆(sPF1(dIF1) ⋅ P1)
The state transitions of the Forwarder 1 module described by APTC are as follows.
F1 = ∑dIF1∈∆(rPF1(dIF1) ⋅ F12)
F12 = F1F ⋅ F13
F13 = ∑dIR2∈∆(sFR1(dIR2) ⋅ F1)
The state transitions of the Receiver 2 module described by APTC are as follows.
R2 = ∑dIR2∈∆(rFR1(dIR2) ⋅ R22)
R22 = R2F ⋅ R23
R23 = ∑dIP2∈∆(sRP1(dIP2) ⋅ R2)
The state transitions of the Peer 2 module described by APTC are as follows.
P2 = ∑dIP2∈∆(rRP1(dIP2) ⋅ P22)
P22 = P2F2 ⋅ P23
P23 = ∑dO2∈∆(sO2(dO2) ⋅ P2)
The sending action and the reading action of the same data through the same channel can
communicate with each other; otherwise, a deadlock δ will be caused. We define the following
communication function between the Peer 1 and the Forwarder 1.
γ(rPF1(dIF1), sPF1(dIF1)) ≜ cPF1(dIF1)
There is one communication function between the Forwarder 1 and the Receiver 2, as follows.
γ(rFR1(dIR2), sFR1(dIR2)) ≜ cFR1(dIR2)
There is one communication function between the Receiver 2 and the Peer 2, as follows.
γ(rRP1(dIP2), sRP1(dIP2)) ≜ cRP1(dIP2)
We define the following communication function between the Peer 2 and the Forwarder 2.
γ(rPF2(dIF2), sPF2(dIF2)) ≜ cPF2(dIF2)
There is one communication function between the Forwarder 2 and the Receiver 1, as follows.
γ(rFR2(dIR1), sFR2(dIR1)) ≜ cFR2(dIR1)
There is one communication function between the Receiver 1 and the Peer 1, as follows.
γ(rRP2(dIP1), sRP2(dIP1)) ≜ cRP2(dIP1)
Let all modules be in parallel; then the Forwarder-Receiver pattern P1 F1 R1 R2 F2 P2
can be presented by the following process term.
τI(∂H(Θ(P1 ≬ F1 ≬ R1 ≬ R2 ≬ F2 ≬ P2))) = τI(∂H(P1 ≬ F1 ≬ R1 ≬ R2 ≬ F2 ≬ P2))
where H = {rPF1(dIF1), sPF1(dIF1), rFR1(dIR2), sFR1(dIR2), rRP1(dIP2), sRP1(dIP2), rPF2(dIF2), sPF2(dIF2), rFR2(dIR1), sFR2(dIR1), rRP2(dIP1), sRP2(dIP1) ∣ dI1, dI2, dIF1, dIF2, dIR1, dIR2, dIP1, dIP2, dO1, dO2 ∈ ∆},
I = {cPF1(dIF1), cFR1(dIR2), cRP1(dIP2), cPF2(dIF2), cFR2(dIR1), cRP2(dIP1), P1F1, P1F2, P2F1, P2F2, F1F, F2F, R1F, R2F ∣ dI1, dI2, dIF1, dIF2, dIR1, dIR2, dIP1, dIP2, dO1, dO2 ∈ ∆}.
Then we get the following conclusion on the Forwarder-Receiver pattern.
Theorem 4.6 (Correctness of the Forwarder-Receiver pattern). The Forwarder-Receiver pattern
τI(∂H(P1 ≬ F1 ≬ R1 ≬ R2 ≬ F2 ≬ P2)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC,
we can prove that
τI(∂H(P1 ≬ F1 ≬ R1 ≬ R2 ≬ F2 ≬ P2)) = ∑dI1,dI2,dO1,dO2∈∆((rI1(dI1) ⋅ sO2(dO2)) ∥ (rI2(dI2) ⋅ sO1(dO1))) ⋅ τI(∂H(P1 ≬ F1 ≬ R1 ≬ R2 ≬ F2 ≬ P2)),
that is, the Forwarder-Receiver pattern τI(∂H(P1 ≬ F1 ≬ R1 ≬ R2 ≬ F2 ≬ P2)) can exhibit
desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
Figure 36: Client-Dispatcher-Server pattern
4.5.2 Verification of the Client-Dispatcher-Server Pattern
The Client-Dispatcher-Server pattern decouples the invocation of the client and the server by
introducing an intermediate dispatcher. There are three modules in the Client-Dispatcher-Server
pattern: the Client, the Dispatcher, and the Server. The Client interacts with the user through
the channels I and O, with the Dispatcher through the channels ICD and OCD, and with the
Server through the channels ICS and OCS, as illustrated in Figure 36.
The typical process of the Client-Dispatcher-Server pattern is shown in Figure 37 and as follows.
1. The Client receives the request dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), then processes the request dI through a processing function CF1, and sends the processed request dID to the Dispatcher through the channel ICD (the corresponding sending action is denoted sICD(dID));

2. The Dispatcher receives dID from the Client through the channel ICD (the corresponding reading action is denoted rICD(dID)), then processes the request through a processing function DF, and generates and sends the processed response dOD to the Client through the channel OCD (the corresponding sending action is denoted sOCD(dOD));

3. The Client receives the response dOD from the Dispatcher through the channel OCD (the corresponding reading action is denoted rOCD(dOD)), then processes the response through a processing function CF2, and generates and sends the processed request dIS to the Server through the channel ICS (the corresponding sending action is denoted sICS(dIS));

4. The Server receives the request dIS from the Client through the channel ICS (the corresponding reading action is denoted rICS(dIS)), then processes the request and generates the response dOS through a processing function SF, and sends the response to the Client through the channel OCS (the corresponding sending action is denoted sOCS(dOS));

5. The Client receives the response dOS from the Server through the channel OCS (the corresponding reading action is denoted rOCS(dOS)), then processes the response through a processing function CF3, and generates and sends the processed response dO to the user through the channel O (the corresponding sending action is denoted sO(dO)).
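The five steps can be sketched in Python (an illustration only, not the paper's model; the names stand in for CF1-CF3, DF and SF).

```python
# Illustrative sketch only: the Client consults the Dispatcher before
# invoking the Server; DF, SF, CF1-CF3 are placeholder transformations.
def dispatcher(d_ID):
    return f"OD({d_ID})"             # DF: e.g. locate the requested server

def server(d_IS):
    return f"OS({d_IS})"             # SF: perform the actual service

def client(d_I):
    d_OD = dispatcher(f"ID({d_I})")  # CF1, then the dispatch round trip
    d_OS = server(f"IS({d_OD})")     # CF2, then the service round trip
    return f"O({d_OS})"              # CF3: produce the final response d_O
```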
Figure 37: Typical process of Client-Dispatcher-Server pattern
In the following, we verify the Client-Dispatcher-Server pattern. We assume all data elements
dI, dID, dIS, dOD, dOS, dO are from a finite set ∆.
The state transitions of the Client module described by APTC are as follows.
C = ∑dI∈∆(rI(dI) ⋅ C2)
C2 = CF1 ⋅ C3
C3 = ∑dID∈∆(sICD(dID) ⋅ C4)
C4 = ∑dOD∈∆(rOCD(dOD) ⋅ C5)
C5 = CF2 ⋅ C6
C6 = ∑dIS∈∆(sICS(dIS) ⋅ C7)
C7 = ∑dOS∈∆(rOCS(dOS) ⋅ C8)
C8 = CF3 ⋅ C9
C9 = ∑dO∈∆(sO(dO) ⋅ C)
The state transitions of the Dispatcher module described by APTC are as follows.
D = ∑dID∈∆(rICD(dID) ⋅ D2)
D2 = DF ⋅ D3
D3 = ∑dOD∈∆(sOCD(dOD) ⋅ D)
The state transitions of the Server module described by APTC are as follows.
S = ∑dIS∈∆(rICS(dIS) ⋅ S2)
S2 = SF ⋅ S3
S3 = ∑dOS∈∆(sOCS(dOS) ⋅ S)
The sending action and the reading action of the same data through the same channel can
communicate with each other; otherwise, a deadlock δ will be caused. We define the following
communication functions between the Client and the Dispatcher.
γ(rICD(dID), sICD(dID)) ≜ cICD(dID)
γ(rOCD(dOD), sOCD(dOD)) ≜ cOCD(dOD)
There are two communication functions between the Client and the Server, as follows.
γ(rICS(dIS), sICS(dIS)) ≜ cICS(dIS)
γ(rOCS(dOS), sOCS(dOS)) ≜ cOCS(dOS)
Let all modules be in parallel; then the Client-Dispatcher-Server pattern C D S can be
presented by the following process term.
τI(∂H(Θ(C ≬ D ≬ S))) = τI(∂H(C ≬ D ≬ S))
where H = {rICD(dID), sICD(dID), rOCD(dOD), sOCD(dOD), rICS(dIS), sICS(dIS), rOCS(dOS), sOCS(dOS) ∣ dI, dID, dIS, dOD, dOS, dO ∈ ∆},
I = {cICD(dID), cOCD(dOD), cICS(dIS), cOCS(dOS), CF1, CF2, CF3, DF, SF ∣ dI, dID, dIS, dOD, dOS, dO ∈ ∆}.
Then we get the following conclusion on the Client-Dispatcher-Server pattern.
Theorem 4.7 (Correctness of the Client-Dispatcher-Server pattern). The Client-Dispatcher-
Server pattern τI(∂H(C ≬ D ≬ S)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC,
we can prove that
τI(∂H(C ≬ D ≬ S)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(C ≬ D ≬ S)),
that is, the Client-Dispatcher-Server pattern τI(∂H(C ≬ D ≬ S)) can exhibit desired external
behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
4.5.3 Verification of the Publisher-Subscriber Pattern
The Publisher-Subscriber pattern decouples the communication of the publisher and the subscriber.
There are four modules in the Publisher-Subscriber pattern: the Publisher, the Publisher Proxy,
the Subscriber Proxy, and the Subscriber. The Publisher interacts with the outside through the
channel I and with the Publisher Proxy through the channel PP. The Publisher Proxy interacts
with the Subscriber Proxy through the channel PS. The Subscriber interacts with the Subscriber
Proxy through the channel SS and with the outside through the channel O, as illustrated in
Figure 38.
The typical process of the Publisher-Subscriber pattern is shown in Figure 39 and as follows.
1. The Publisher receives the input dI from the outside through the channel I (the corresponding reading action is denoted rI(dI)), then processes the input dI through a processing function PF, and sends the processed input dIPP to the Publisher Proxy through the channel PP (the corresponding sending action is denoted sPP(dIPP));
Figure 38: Publisher-Subscriber pattern
Figure 39: Typical process of Publisher-Subscriber pattern
2. The Publisher Proxy receives dIPP from the Publisher through the channel PP (the corresponding reading action is denoted rPP(dIPP)), then processes the input through a processing function PPF, and generates and sends the processed input dISP to the Subscriber Proxy through the channel PS (the corresponding sending action is denoted sPS(dISP));

3. The Subscriber Proxy receives the input dISP from the Publisher Proxy through the channel PS (the corresponding reading action is denoted rPS(dISP)), then processes the input through a processing function SPF, and generates and sends the processed input dIS to the Subscriber through the channel SS (the corresponding sending action is denoted sSS(dIS));

4. The Subscriber receives the input dIS from the Subscriber Proxy through the channel SS (the corresponding reading action is denoted rSS(dIS)), then processes the input and generates the response dO through a processing function SF, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
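Since every module simply applies its processing function and forwards the result, the chain can be sketched in a few lines of Python (an illustration only, not the paper's model; the stage tags are hypothetical stand-ins for PF, PPF, SPF and SF).

```python
# Illustrative sketch only: each stage tag marks one module's processing
# function (PF, PPF, SPF, SF) applied in order along the chain.
STAGES = ["IPP", "ISP", "IS", "O"]

def publish(d_I):
    d = d_I
    for tag in STAGES:  # Publisher -> Publisher Proxy -> Subscriber Proxy -> Subscriber
        d = f"{tag}({d})"
    return d
```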
In the following, we verify the Publisher-Subscriber pattern. We assume all data elements dI,
dIPP, dISP, dIS, dO are from a finite set ∆.
The state transitions of the Publisher module described by APTC are as follows.
P = ∑dI∈∆(rI(dI) ⋅ P2)
P2 = PF ⋅ P3
P3 = ∑dIPP∈∆(sPP(dIPP) ⋅ P)
The state transitions of the Publisher Proxy module described by APTC are as follows.
PP = ∑dIPP∈∆(rPP(dIPP) ⋅ PP2)
PP2 = PPF ⋅ PP3
PP3 = ∑dISP∈∆(sPS(dISP) ⋅ PP)
The state transitions of the Subscriber Proxy module described by APTC are as follows.
SP = ∑dISP∈∆(rPS(dISP) ⋅ SP2)
SP2 = SPF ⋅ SP3
SP3 = ∑dIS∈∆(sSS(dIS) ⋅ SP)
The state transitions of the Subscriber module described by APTC are as follows.
S = ∑dIS∈∆(rSS(dIS) ⋅ S2)
S2 = SF ⋅ S3
S3 = ∑dO∈∆(sO(dO) ⋅ S)
The sending action and the reading action of the same data through the same channel can
communicate with each other; otherwise, a deadlock δ will be caused. We define the following
communication function between the Publisher and the Publisher Proxy.
γ(rPP(dIPP), sPP(dIPP)) ≜ cPP(dIPP)
There is one communication function between the Publisher Proxy and the Subscriber Proxy,
as follows.
γ(rPS(dISP), sPS(dISP)) ≜ cPS(dISP)
There is one communication function between the Subscriber Proxy and the Subscriber, as
follows.
γ(rSS(dIS), sSS(dIS)) ≜ cSS(dIS)
Let all modules be in parallel; then the Publisher-Subscriber pattern P PP SP S can be
presented by the following process term.
τI(∂H(Θ(P ≬ PP ≬ SP ≬ S))) = τI(∂H(P ≬ PP ≬ SP ≬ S))
where H = {rPP(dIPP), sPP(dIPP), rPS(dISP), sPS(dISP), rSS(dIS), sSS(dIS) ∣ dI, dIPP, dISP, dIS, dO ∈ ∆},
I = {cPP(dIPP), cPS(dISP), cSS(dIS), PF, PPF, SPF, SF ∣ dI, dIPP, dISP, dIS, dO ∈ ∆}.
Then we get the following conclusion on the Publisher-Subscriber pattern.
Theorem 4.8 (Correctness of the Publisher-Subscriber pattern). The Publisher-Subscriber pat-
tern τI(∂H(P ≬ PP ≬ SP ≬ S)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC,
we can prove that
τI(∂H(P ≬ PP ≬ SP ≬ S)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(P ≬ PP ≬ SP ≬ S)),
that is, the Publisher-Subscriber pattern τI(∂H(P ≬ PP ≬ SP ≬ S)) can exhibit desired
external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
Figure 40: Singleton pattern
5 Verification of Idioms
Idioms are the lowest-level patterns; they are programming-language-specific and address specific
concrete problems.
There are numerous language-specific idioms; in this chapter, we only verify two: the Singleton
pattern and the Counted Pointer pattern.
5.1 Verification of the Singleton Pattern
The Singleton pattern ensures that there is only one instance of an object at runtime. In the
Singleton pattern, there is only one module: the Singleton. The Singleton interacts with the
outside through the input channels Ii and the output channels Oi for 1 ≤ i ≤ n, as illustrated in
Figure 40.
The typical process is shown in Figure 41 and as follows.
1. The Singleton receives the input dIi from the outside through the channel Ii (the corre-
sponding reading action is denoted rIi(dIi));
2. Then it processes the input and generates the output dOi through a processing function
SFi;
3. Then it sends the output to the outside through the channel Oi (the corresponding sending
action is denoted sOi(dOi)).
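In Python, the idiom itself can be sketched by overriding __new__ so that at most one instance is ever created (an illustrative sketch, not the paper's formalization; handle is a hypothetical stand-in for the SFi).

```python
# Illustrative sketch only: __new__ always returns the single shared instance.
class Singleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:                # create at most one instance
            cls._instance = super().__new__(cls)
        return cls._instance

    def handle(self, i, d_Ii):
        # SF_i: process the input d_Ii from channel I_i into the output d_Oi
        return f"O{i}({d_Ii})"
```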
In the following, we verify the Singleton pattern. We assume all data elements dIi, dOi for
1 ≤ i ≤ n are from a finite set ∆.
The state transitions of the Singleton module described by APTC are as follows.
S = ∑dI1 ,⋯,dIn∈∆(rI1(dI1)≬⋯ ≬ rIn(dIn) ⋅ S2)
S2 = SF1 ≬⋯≬ SFn ⋅ S3
Figure 41: Typical process of Singleton pattern
S3 = ∑dO1,⋯,dOn∈∆(sO1(dO1) ≬ ⋯ ≬ sOn(dOn) ⋅ S)
There are no communications in the Singleton pattern.
Let all modules be in parallel; then the Singleton pattern S can be presented by the following
process term.
τI(∂H(Θ(S))) = τI(∂H(S))
where H = ∅, I = {SFi} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Singleton pattern.
Theorem 5.1 (Correctness of the Singleton pattern). The Singleton pattern τI(∂H(S)) can
exhibit desired external behaviors.
Proof. Based on the state transitions of the above module, by use of the algebraic laws of APTC,
we can prove that
τI(∂H(S)) = ∑dI1,dO1,⋯,dIn,dOn∈∆(rI1(dI1) ∥ ⋯ ∥ rIn(dIn) ⋅ sO1(dO1) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(S)),
that is, the Singleton pattern τI(∂H(S)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
5.2 Verification of the Counted Pointer Pattern
The Counted Pointer pattern makes memory management of shared objects (implemented as
Bodies, managed through Handles) easier in C++. There are three modules in the Counted Pointer
pattern: the Client, the Handle, and the Body. The Client interacts with the outside through
the channels I and O, and with the Handle through the channels ICH and OCH. The Handle
interacts with the Body through the channels IHB and OHB, as illustrated in Figure 42.
The typical process of the Counted Pointer pattern is shown in Figure 43 and as follows.
1. The Client receives the input dI from the outside through the channel I (the corresponding
reading action is denoted rI(dI)), then processes the input dI through a processing function
Figure 42: Counted Pointer pattern
Figure 43: Typical process of Counted Pointer pattern
CF1, and sends the processed input dIH to the Handle through the channel ICH (the corresponding sending action is denoted sICH(dIH));

2. The Handle receives dIH from the Client through the channel ICH (the corresponding reading action is denoted rICH(dIH)), then processes the input through a processing function HF1, and generates and sends the processed input dIB to the Body through the channel IHB (the corresponding sending action is denoted sIHB(dIB));

3. The Body receives the input dIB from the Handle through the channel IHB (the corresponding reading action is denoted rIHB(dIB)), then processes the input through a processing function BF, and generates and sends the response dOB to the Handle through the channel OHB (the corresponding sending action is denoted sOHB(dOB));

4. The Handle receives the response dOB from the Body through the channel OHB (the corresponding reading action is denoted rOHB(dOB)), then processes the response through a processing function HF2, and generates and sends the response dOH to the Client through the channel OCH (the corresponding sending action is denoted sOCH(dOH));

5. The Client receives the response dOH from the Handle through the channel OCH (the corresponding reading action is denoted rOCH(dOH)), then processes the response and generates the response dO through a processing function CF2, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
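Although the pattern targets C++, its essence — several Handles sharing one Body and maintaining its reference count — can be sketched in Python (an illustration only, not the paper's model; in C++ the release step would live in the Handle's destructor).

```python
# Illustrative sketch only: each Handle shares one Body and keeps its
# reference count up to date; bf/call stand in for BF and HF1/HF2.
class Body:
    def __init__(self):
        self.refcount = 0

    def bf(self, d_IB):
        # BF: the shared object's actual operation
        return f"OB({d_IB})"

class Handle:
    def __init__(self, body):
        self.body = body
        body.refcount += 1          # acquiring a Handle bumps the count

    def call(self, d_IH):
        d_IB = f"IB({d_IH})"        # HF1: forward the request to the Body
        d_OB = self.body.bf(d_IB)
        return f"OH({d_OB})"        # HF2: post-process the Body's response

    def release(self):
        self.body.refcount -= 1     # in C++ this happens in ~Handle()

b = Body()
h1, h2 = Handle(b), Handle(b)       # two Handles share the same Body
```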
In the following, we verify the Counted Pointer pattern. We assume all data elements dI, dIH,
dIB, dOB, dOH, dO are from a finite set ∆.
The state transitions of the Client module described by APTC are as follows.
C = ∑dI∈∆(rI(dI) ⋅ C2)
C2 = CF1 ⋅ C3
C3 = ∑dIH∈∆(sICH(dIH) ⋅ C4)
C4 = ∑dOH∈∆(rOCH(dOH) ⋅ C5)
C5 = CF2 ⋅ C6
C6 = ∑dO∈∆(sO(dO) ⋅ C)
The state transitions of the Handle module described by APTC are as follows.
H = ∑dIH∈∆(rICH(dIH) ⋅ H2)
H2 = HF1 ⋅ H3
H3 = ∑dIB∈∆(sIHB(dIB) ⋅ H4)
H4 = ∑dOB∈∆(rOHB(dOB) ⋅ H5)
H5 = HF2 ⋅ H6
H6 = ∑dOH∈∆(sOCH(dOH) ⋅ H)
The state transitions of the Body module described by APTC are as follows.
B = ∑dIB∈∆(rIHB(dIB) ⋅ B2)
B2 = BF ⋅ B3
B3 = ∑dOB∈∆(sOHB(dOB) ⋅ B)
The sending action and the reading action of the same data through the same channel can
communicate with each other; otherwise, a deadlock δ will be caused. We define the following
communication functions between the Client and the Handle.
γ(rICH(dIH), sICH(dIH)) ≜ cICH(dIH)
γ(rOCH(dOH), sOCH(dOH)) ≜ cOCH(dOH)
There are two communication functions between the Handle and the Body, as follows.
γ(rIHB(dIB), sIHB(dIB)) ≜ cIHB(dIB)
γ(rOHB(dOB), sOHB(dOB)) ≜ cOHB(dOB)
Let all modules be in parallel, then the Counted Pointer pattern C H B can be presented by
the following process term.
τI(∂H(Θ(C ≬ H ≬ B))) = τI(∂H(C ≬ H ≬ B))

where H = {rICH(dIH), sICH(dIH), rOCH(dOH), sOCH(dOH), rIHB(dIB), sIHB(dIB), rOHB(dOB), sOHB(dOB) ∣ dI, dIH, dIB, dOB, dOH, dO ∈ ∆},
I = {cICH(dIH), cOCH(dOH), cIHB(dIB), cOHB(dOB), CF1, CF2, HF1, HF2, BF ∣ dI, dIH, dIB, dOB, dOH, dO ∈ ∆}.

Then we get the following conclusion on the Counted Pointer pattern.
Theorem 5.2 (Correctness of the Counted Pointer pattern). The Counted Pointer pattern
τI(∂H(C ≬H ≬ B)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(C ≬ H ≬ B)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(C ≬ H ≬ B)),

that is, the Counted Pointer pattern τI(∂H(C ≬ H ≬ B)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
Figure 44: Wrapper Facade pattern
6 Verification of Patterns for Concurrent and Networked Objects
Patterns for concurrent and networked objects can be used both in higher-level and lower-level
systems and applications.
In this chapter, we verify patterns for concurrent and networked objects. In section 6.1, we
verify service access and configuration patterns. In section 6.2, we verify patterns related to
event handling. We verify synchronization patterns in section 6.3 and concurrency patterns in
section 6.4.
6.1 Service Access and Configuration Patterns
In this subsection, we verify patterns for service access and configuration, including the Wrapper
Facade pattern, the Component Configurator pattern, the Interceptor pattern, and the Extension Interface pattern.
6.1.1 Verification of the Wrapper Facade Pattern
The Wrapper Facade pattern encapsulates non-object-oriented APIs within object-oriented ones. There are two classes of modules in the Wrapper Facade pattern: the Wrapper Facade and n API Functions. The Wrapper Facade interacts with API Function i through the channels IWAi and OWAi, and it exchanges information with the outside through the input channel I and the output channel O, as illustrated in Figure 44.
The typical process of the Wrapper Facade pattern is shown in Figure 45 and as follows.
Figure 45: Typical process of Wrapper Facade pattern

1. The Wrapper Facade receives the input from the user through the channel I (the corresponding reading action is denoted rI(dI)), then processes the input through a processing function WF1 and generates the input dIAi, and sends dIAi to the API Function i (for 1 ≤ i ≤ n) through the channel IWAi (the corresponding sending action is denoted sIWAi(dIAi));
2. The API Function i receives the input dIAi from the Wrapper Facade through the channel IWAi (the corresponding reading action is denoted rIWAi(dIAi)), then processes the input through a processing function AFi, and sends the results dOAi to the Wrapper Facade through the channel OWAi (the corresponding sending action is denoted sOWAi(dOAi));
3. The Wrapper Facade receives the computational results from the API Function i through the channel OWAi (the corresponding reading action is denoted rOWAi(dOAi)), then processes the results through a processing function WF2 and generates the result dO, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
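The three steps above can be sketched as a single fan-out/fan-in call, with WF1, WF2, and the AFi replaced by hypothetical string-tagging stand-ins (a sketch, not the pattern's definitive implementation). In APTC the n API exchanges run in parallel (the ≬ operator); here they are iterated sequentially for simplicity.

```python
# A minimal sketch of the Wrapper Facade: one object-oriented call fans out to
# n non-object-oriented API functions and merges their results. The names
# (WF1, WF2, api_fns) are hypothetical stand-ins for the unspecified functions.

class WrapperFacade:
    def __init__(self, api_fns):
        self.api_fns = api_fns          # the n API Functions AF_i

    def call(self, d_I):
        # WF1: derive one input d_IA_i per API function
        d_IA = [f"IA{i}({d_I})" for i in range(len(self.api_fns))]
        # channels IWA_i / OWA_i collapsed into direct calls
        d_OA = [fn(x) for fn, x in zip(self.api_fns, d_IA)]
        return "+".join(d_OA)           # WF2: combine into the output d_O

w = WrapperFacade([lambda x: f"A0({x})", lambda x: f"A1({x})"])
print(w.call("req"))  # → A0(IA0(req))+A1(IA1(req))
```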
In the following, we verify the Wrapper Facade pattern. We assume all data elements dI, dIAi, dOAi, dO (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Wrapper Facade module described by APTC are as follows.

W = ∑dI∈∆(rI(dI) ⋅ W2)
W2 = WF1 ⋅ W3
W3 = ∑dIA1,⋯,dIAn∈∆(sIWA1(dIA1) ≬ ⋯ ≬ sIWAn(dIAn) ⋅ W4)
W4 = ∑dOA1,⋯,dOAn∈∆(rOWA1(dOA1) ≬ ⋯ ≬ rOWAn(dOAn) ⋅ W5)
W5 = WF2 ⋅ W6
W6 = ∑dO∈∆(sO(dO) ⋅ W)

The state transitions of the API Function i described by APTC are as follows.
Ai = ∑dIAi∈∆(rIWAi(dIAi) ⋅ Ai2)
Ai2 = AFi ⋅ Ai3
Ai3 = ∑dOAi∈∆(sOWAi(dOAi) ⋅ Ai)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the API Function i for 1 ≤ i ≤ n.

γ(rIWAi(dIAi), sIWAi(dIAi)) ≜ cIWAi(dIAi)
γ(rOWAi(dOAi), sOWAi(dOAi)) ≜ cOWAi(dOAi)
Let all modules be in parallel; then the Wrapper Facade pattern W A1⋯Ai⋯An can be presented by the following process term.

τI(∂H(Θ(W ≬ A1 ≬ ⋯ ≬ Ai ≬ ⋯ ≬ An))) = τI(∂H(W ≬ A1 ≬ ⋯ ≬ Ai ≬ ⋯ ≬ An))

where H = {rIWAi(dIAi), sIWAi(dIAi), rOWAi(dOAi), sOWAi(dOAi) ∣ dI, dIAi, dOAi, dO ∈ ∆} for 1 ≤ i ≤ n,
I = {cIWAi(dIAi), cOWAi(dOAi), WF1, WF2, AFi ∣ dI, dIAi, dOAi, dO ∈ ∆} for 1 ≤ i ≤ n.

Then we get the following conclusion on the Wrapper Facade pattern.
Theorem 6.1 (Correctness of the Wrapper Facade pattern). The Wrapper Facade pattern
τI(∂H(W ≬ A1 ≬⋯ ≬ Ai ≬⋯≬ An)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(W ≬ A1 ≬ ⋯ ≬ Ai ≬ ⋯ ≬ An)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(W ≬ A1 ≬ ⋯ ≬ Ai ≬ ⋯ ≬ An)),

that is, the Wrapper Facade pattern τI(∂H(W ≬ A1 ≬ ⋯ ≬ Ai ≬ ⋯ ≬ An)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
6.1.2 Verification of the Component Configurator Pattern
The Component Configurator pattern allows components to be configured dynamically. There are three classes of modules in the Component Configurator pattern: the Component Configurator, n Components, and the Component Repository. The Component Configurator interacts with Component i through the channels ICCi and OCCi; it exchanges information with the outside through the input channel I and the output channel O, and with the Component Repository through the channels ICR and OCR, as illustrated in Figure 46.
The typical process of the Component Configurator pattern is shown in Figure 47 and as follows.
1. The Component Configurator receives the input from the user through the channel I (the corresponding reading action is denoted rI(dI)), then processes the input through a processing function CCF1 and generates the input dICi, and sends dICi to the Component i (for 1 ≤ i ≤ n) through the channel ICCi (the corresponding sending action is denoted sICCi(dICi));
Figure 46: Component Configurator pattern
2. The Component i receives the input dICi from the Component Configurator through the channel ICCi (the corresponding reading action is denoted rICCi(dICi)), then processes the input through a processing function CFi, and sends the results dOCi to the Component Configurator through the channel OCCi (the corresponding sending action is denoted sOCCi(dOCi));
3. The Component Configurator receives the configurational results from the Component i through the channel OCCi (the corresponding reading action is denoted rOCCi(dOCi)), then processes the results through a processing function CCF2 and generates the configurational information dIR, and sends dIR to the Component Repository through the channel ICR (the corresponding sending action is denoted sICR(dIR));
4. The Component Repository receives the configurational information dIR through the channel ICR (the corresponding reading action is denoted rICR(dIR)), then processes the information and generates the results dOR through a processing function RF, and sends the results dOR to the Component Configurator through the channel OCR (the corresponding sending action is denoted sOCR(dOR));
5. The Component Configurator receives the results dOR from the Component Repository through the channel OCR (the corresponding reading action is denoted rOCR(dOR)), then processes the results and generates the results dO through a processing function CCF3, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
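The five steps above can be sketched as one configuration pass, with CCF1, CCF2, CCF3, the CFi, and RF replaced by hypothetical string-tagging stand-ins and the channels collapsed into direct calls (a sketch under these assumptions, not the pattern's definitive implementation):

```python
# A minimal sketch of the Component Configurator flow: query each Component,
# record the configuration in the Repository, and report the result.
# CCF1/CCF2/CCF3, CF_i, and RF are hypothetical stand-ins.

def configure(components, repository, d_I):
    # CCF1: derive one input d_IC_i per component
    d_IC = [f"IC{i}({d_I})" for i in range(len(components))]
    # CF_i per component, over channels ICC_i / OCC_i
    d_OC = [c(x) for c, x in zip(components, d_IC)]
    d_IR = "|".join(d_OC)            # CCF2: the configurational information
    d_OR = repository(d_IR)          # RF, over channels ICR / OCR
    return f"O({d_OR})"              # CCF3: the output d_O

comps = [lambda x: f"C0({x})", lambda x: f"C1({x})"]
repo = lambda info: f"stored:{info}"
print(configure(comps, repo, "req"))
```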
In the following, we verify the Component Configurator pattern. We assume all data elements dI, dIR, dICi, dOCi, dOR, dO (for 1 ≤ i ≤ n) are from a finite set ∆.
Figure 47: Typical process of Component Configurator pattern

The state transitions of the Component Configurator module described by APTC are as follows.

CC = ∑dI∈∆(rI(dI) ⋅ CC2)
CC2 = CCF1 ⋅ CC3
CC3 = ∑dIC1,⋯,dICn∈∆(sICC1(dIC1) ≬ ⋯ ≬ sICCn(dICn) ⋅ CC4)
CC4 = ∑dOC1,⋯,dOCn∈∆(rOCC1(dOC1) ≬ ⋯ ≬ rOCCn(dOCn) ⋅ CC5)
CC5 = CCF2 ⋅ CC6
CC6 = ∑dIR∈∆(sICR(dIR) ⋅ CC7)
CC7 = ∑dOR∈∆(rOCR(dOR) ⋅ CC8)
CC8 = CCF3 ⋅ CC9
CC9 = ∑dO∈∆(sO(dO) ⋅ CC)

The state transitions of the Component i described by APTC are as follows.
Ci = ∑dICi∈∆(rICCi(dICi) ⋅ Ci2)
Ci2 = CFi ⋅ Ci3
Ci3 = ∑dOCi∈∆(sOCCi(dOCi) ⋅ Ci)
The state transitions of the Component Repository described by APTC are as follows.

R = ∑dIR∈∆(rICR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOCR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Component Configurator for 1 ≤ i ≤ n.

γ(rICCi(dICi), sICCi(dICi)) ≜ cICCi(dICi)
γ(rOCCi(dOCi), sOCCi(dOCi)) ≜ cOCCi(dOCi)
γ(rICR(dIR), sICR(dIR)) ≜ cICR(dIR)
γ(rOCR(dOR), sOCR(dOR)) ≜ cOCR(dOR)
Let all modules be in parallel; then the Component Configurator pattern CC R C1⋯Ci⋯Cn can be presented by the following process term.

τI(∂H(Θ(CC ≬ R ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn))) = τI(∂H(CC ≬ R ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn))

where H = {rICCi(dICi), sICCi(dICi), rOCCi(dOCi), sOCCi(dOCi), rICR(dIR), sICR(dIR), rOCR(dOR), sOCR(dOR) ∣ dI, dIR, dICi, dOCi, dOR, dO ∈ ∆} for 1 ≤ i ≤ n,
I = {cICCi(dICi), cOCCi(dOCi), cICR(dIR), cOCR(dOR), CCF1, CCF2, CCF3, CFi, RF ∣ dI, dIR, dICi, dOCi, dOR, dO ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Component Configurator pattern.
Theorem 6.2 (Correctness of the Component Configurator pattern). The Component Con-
figurator pattern τI(∂H(CC ≬ R ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)) can exhibit desired external
behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(CC ≬ R ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(CC ≬ R ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)),

that is, the Component Configurator pattern τI(∂H(CC ≬ R ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
6.1.3 Verification of the Interceptor Pattern
The Interceptor pattern adds functionality to a concrete framework by introducing an intermediate Dispatcher and an Interceptor. There are three modules in the Interceptor pattern: the Concrete Framework, the Dispatcher, and the Interceptor. The Concrete Framework interacts with the user through the channels I and O; with the Dispatcher through the channel CD; and with the Interceptor through the channels IIC and OIC. The Dispatcher interacts with the Interceptor through the channel DI, as illustrated in Figure 48.
The typical process of the Interceptor pattern is shown in Figure 49 and as follows.
1. The Concrete Framework receives the request dI from the user through the channel I (the
corresponding reading action is denoted rI(dI)), then processes the request dI through a
processing function CF1, and sends the processed request dID to the Dispatcher through
the channel CD (the corresponding sending action is denoted sCD(dID));
Figure 48: Interceptor pattern
2. The Dispatcher receives dID from the Concrete Framework through the channel CD (the corresponding reading action is denoted rCD(dID)), then processes the request through a processing function DF, and generates and sends the processed request dOD to the Interceptor through the channel DI (the corresponding sending action is denoted sDI(dOD));
3. The Interceptor receives the request dOD from the Dispatcher through the channel DI (the corresponding reading action is denoted rDI(dOD)), then processes the request and generates the request dIC through a processing function IF1, and sends the request to the Concrete Framework through the channel IIC (the corresponding sending action is denoted sIIC(dIC));
4. The Concrete Framework receives the request dIC from the Interceptor through the channel IIC (the corresponding reading action is denoted rIIC(dIC)), then processes the request through a processing function CF2, and generates and sends the response dOC to the Interceptor through the channel OIC (the corresponding sending action is denoted sOIC(dOC));
5. The Interceptor receives the response dOC from the Concrete Framework through the channel OIC (the corresponding reading action is denoted rOIC(dOC)), then processes the response through a processing function IF2, and generates and sends the processed response dO to the user through the channel O (the corresponding sending action is denoted sO(dO)).
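The control flow above, including the callback from the Interceptor into the framework, can be sketched as three cooperating classes; CF1, CF2, DF, IF1, IF2 are hypothetical string-tagging stand-ins for the unspecified processing functions (a sketch, not the pattern's definitive implementation):

```python
# A minimal sketch of the Interceptor control flow: the framework hands the
# request to the Dispatcher, which invokes the registered Interceptor; the
# Interceptor calls back into the framework before producing the output.
# CF1/CF2, DF, IF1/IF2 are hypothetical stand-ins.

class ConcreteFramework:
    def handle(self, d_I, dispatcher):
        d_ID = f"ID({d_I})"                      # CF1
        return dispatcher.dispatch(d_ID, self)   # channel CD

    def callback(self, d_IC):
        return f"OC({d_IC})"                     # CF2: response d_OC over OIC

class Dispatcher:
    def __init__(self, interceptor):
        self.interceptor = interceptor

    def dispatch(self, d_ID, framework):
        # DF, then hand d_OD to the Interceptor over channel DI
        return self.interceptor.intercept(f"OD({d_ID})", framework)

class Interceptor:
    def intercept(self, d_OD, framework):
        d_IC = f"IC({d_OD})"                     # IF1, channel IIC
        d_OC = framework.callback(d_IC)
        return f"O({d_OC})"                      # IF2: the output d_O

fw = ConcreteFramework()
print(fw.handle("req", Dispatcher(Interceptor())))  # → O(OC(IC(OD(ID(req)))))
```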
In the following, we verify the Interceptor pattern. We assume all data elements dI, dID, dIC, dOD, dOC, dO are from a finite set ∆.
Figure 49: Typical process of Interceptor pattern

The state transitions of the Concrete Framework module described by APTC are as follows.

C = ∑dI∈∆(rI(dI) ⋅ C2)
C2 = CF1 ⋅ C3
C3 = ∑dID∈∆(sCD(dID) ⋅ C4)
C4 = ∑dIC∈∆(rIIC(dIC) ⋅ C5)
C5 = CF2 ⋅ C6
C6 = ∑dOC∈∆(sOIC(dOC) ⋅ C)
The state transitions of the Dispatcher module described by APTC are as follows.

D = ∑dID∈∆(rCD(dID) ⋅ D2)
D2 = DF ⋅ D3
D3 = ∑dOD∈∆(sDI(dOD) ⋅ D)

The state transitions of the Interceptor module described by APTC are as follows.

I = ∑dOD∈∆(rDI(dOD) ⋅ I2)
I2 = IF1 ⋅ I3
I3 = ∑dIC∈∆(sIIC(dIC) ⋅ I4)
I4 = ∑dOC∈∆(rOIC(dOC) ⋅ I5)
I5 = IF2 ⋅ I6
I6 = ∑dO∈∆(sO(dO) ⋅ I)

The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication function between the Concrete Framework and the Dispatcher.
γ(rCD(dID), sCD(dID)) ≜ cCD(dID)
There are two communication functions between the Concrete Framework and the Interceptor
as follows.
γ(rIIC(dIC), sIIC(dIC)) ≜ cIIC(dIC)
γ(rOIC(dOC), sOIC(dOC)) ≜ cOIC(dOC)
There is one communication function between the Dispatcher and the Interceptor as follows.
γ(rDI(dOD), sDI(dOD)) ≜ cDI(dOD)
Let all modules be in parallel, then the Interceptor pattern C D I can be presented by the
following process term.
τI(∂H(Θ(C ≬ D ≬ I))) = τI(∂H(C ≬ D ≬ I))

where H = {rCD(dID), sCD(dID), rIIC(dIC), sIIC(dIC), rOIC(dOC), sOIC(dOC), rDI(dOD), sDI(dOD) ∣ dI, dID, dIC, dOD, dOC, dO ∈ ∆},
I = {cCD(dID), cIIC(dIC), cOIC(dOC), cDI(dOD), CF1, CF2, DF, IF1, IF2 ∣ dI, dID, dIC, dOD, dOC, dO ∈ ∆}.

Then we get the following conclusion on the Interceptor pattern.
Theorem 6.3 (Correctness of the Interceptor pattern). The Interceptor pattern τI(∂H(C ≬
D ≬ I)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(C ≬ D ≬ I)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(C ≬ D ≬ I)),

that is, the Interceptor pattern τI(∂H(C ≬ D ≬ I)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
6.1.4 Verification of the Extension Interface Pattern
The Extension Interface pattern allows multiple interfaces of a component to be exported, to extend or modify the functionalities of the component. There are three classes of modules in the Extension Interface pattern: the Component Factory, n Extension Interfaces, and the Component. The Component Factory interacts with Extension Interface i through the channels IFEi and OFEi; it exchanges information with the outside through the input channel I and the output channel O, and with the Component through the channels IFC and OFC, as illustrated in Figure 50.
The typical process of the Extension Interface pattern is shown in Figure 51 and as follows.
1. The Component Factory receives the input from the user through the channel I (the corre-
sponding reading action is denoted rI(dI)), then processes the input through a processing
function FF1 and generates the input dIC , and sends dIC to the Component through the
channel IFC (the corresponding sending action is denoted sIFC(dIC));
Figure 50: Extension Interface pattern
Figure 51: Typical process of Extension Interface pattern
2. The Component receives the input dIC through the channel IFC (the corresponding reading action is denoted rIFC(dIC)), then processes the information and generates the results dOC through a processing function CF, and sends the results dOC to the Component Factory through the channel OFC (the corresponding sending action is denoted sOFC(dOC));
3. The Component Factory receives the results dOC from the Component through the channel OFC (the corresponding reading action is denoted rOFC(dOC)), then processes the results through a processing function FF2 and generates the input dIEi, and sends dIEi to the Extension Interface i (for 1 ≤ i ≤ n) through the channel IFEi (the corresponding sending action is denoted sIFEi(dIEi));
4. The Extension Interface i receives the input dIEi from the Component Factory through the channel IFEi (the corresponding reading action is denoted rIFEi(dIEi)), then processes the input through a processing function EFi, and sends the results dOEi to the Component Factory through the channel OFEi (the corresponding sending action is denoted sOFEi(dOEi));
5. The Component Factory receives the results from the Extension Interface i through the channel OFEi (the corresponding reading action is denoted rOFEi(dOEi)), then processes the results and generates the results dO through a processing function FF3, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
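The five steps above can be sketched as one pass through the factory, with FF1, FF2, FF3, CF, and the EFi replaced by hypothetical string-tagging stand-ins and the channels collapsed into direct calls (a sketch under these assumptions, not the pattern's definitive implementation):

```python
# A minimal sketch of the Extension Interface flow: the Component Factory asks
# the Component for a result, then exercises each Extension Interface on it.
# FF1/FF2/FF3, CF, and EF_i are hypothetical stand-ins.

def component_factory(component, extensions, d_I):
    d_IC = f"IC({d_I})"                          # FF1
    d_OC = component(d_IC)                       # CF, over channels IFC / OFC
    # FF2: derive one input d_IE_i per Extension Interface
    d_IE = [f"IE{i}({d_OC})" for i in range(len(extensions))]
    # EF_i, over channels IFE_i / OFE_i
    d_OE = [e(x) for e, x in zip(extensions, d_IE)]
    return "+".join(d_OE)                        # FF3: the output d_O

comp = lambda x: f"C({x})"
exts = [lambda x: f"E0({x})", lambda x: f"E1({x})"]
print(component_factory(comp, exts, "req"))
```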
In the following, we verify the Extension Interface pattern. We assume all data elements dI, dIC, dIEi, dOEi, dOC, dO (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Component Factory module described by APTC are as follows.

F = ∑dI∈∆(rI(dI) ⋅ F2)
F2 = FF1 ⋅ F3
F3 = ∑dIC∈∆(sIFC(dIC) ⋅ F4)
F4 = ∑dOC∈∆(rOFC(dOC) ⋅ F5)
F5 = FF2 ⋅ F6
F6 = ∑dIE1,⋯,dIEn∈∆(sIFE1(dIE1) ≬ ⋯ ≬ sIFEn(dIEn) ⋅ F7)
F7 = ∑dOE1,⋯,dOEn∈∆(rOFE1(dOE1) ≬ ⋯ ≬ rOFEn(dOEn) ⋅ F8)
F8 = FF3 ⋅ F9
F9 = ∑dO∈∆(sO(dO) ⋅ F)

The state transitions of the Extension Interface i described by APTC are as follows.
Ei = ∑dIEi∈∆(rIFEi(dIEi) ⋅ Ei2)
Ei2 = EFi ⋅ Ei3
Ei3 = ∑dOEi∈∆(sOFEi(dOEi) ⋅ Ei)
The state transitions of the Component described by APTC are as follows.

C = ∑dIC∈∆(rIFC(dIC) ⋅ C2)
C2 = CF ⋅ C3
C3 = ∑dOC∈∆(sOFC(dOC) ⋅ C)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Component Factory for 1 ≤ i ≤ n.

γ(rIFEi(dIEi), sIFEi(dIEi)) ≜ cIFEi(dIEi)
γ(rOFEi(dOEi), sOFEi(dOEi)) ≜ cOFEi(dOEi)
γ(rIFC(dIC), sIFC(dIC)) ≜ cIFC(dIC)
γ(rOFC(dOC), sOFC(dOC)) ≜ cOFC(dOC)
Let all modules be in parallel; then the Extension Interface pattern F C E1⋯Ei⋯En can be presented by the following process term.

τI(∂H(Θ(F ≬ C ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En))) = τI(∂H(F ≬ C ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En))

where H = {rIFEi(dIEi), sIFEi(dIEi), rOFEi(dOEi), sOFEi(dOEi), rIFC(dIC), sIFC(dIC), rOFC(dOC), sOFC(dOC) ∣ dI, dIC, dIEi, dOEi, dOC, dO ∈ ∆} for 1 ≤ i ≤ n,
Figure 52: Reactor pattern
I = {cIFEi(dIEi), cOFEi(dOEi), cIFC(dIC), cOFC(dOC), FF1, FF2, FF3, EFi, CF ∣ dI, dIC, dIEi, dOEi, dOC, dO ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Extension Interface pattern.
Theorem 6.4 (Correctness of the Extension Interface pattern). The Extension Interface pattern
τI(∂H(F ≬ C ≬ E1 ≬ ⋯≬ Ei ≬⋯ ≬ En)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(F ≬ C ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(F ≬ C ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En)),

that is, the Extension Interface pattern τI(∂H(F ≬ C ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
6.2 Event Handling Patterns
In this subsection, we verify patterns related to event handling, including the Reactor pattern,
the Proactor pattern, the Asynchronous Completion Token pattern, and the Acceptor-Connector
pattern.
6.2.1 Verification of the Reactor Pattern
The Reactor pattern demultiplexes and dispatches request events to event-driven applications. There are three classes of modules in the Reactor pattern: the Handle Set, n Event Handlers, and the Reactor. The Handle Set interacts with Event Handler i through the channel EHi; it exchanges information with the outside through the input channel I and the output channel O, and with the Reactor through the channel HR. The Reactor interacts with the Event Handler i through the channel REi, as illustrated in Figure 52.
The typical process of the Reactor pattern is shown in Figure 53 and as follows.
Figure 53: Typical process of Reactor pattern
1. The Handle Set receives the input from the user through the channel I (the corresponding
reading action is denoted rI(dI)), then processes the input through a processing function
HF1 and generates the input dIR , and sends dIR to the Reactor through the channel HR
(the corresponding sending action is denoted sHR(dIR));
2. The Reactor receives the input dIR through the channel HR (the corresponding reading action is denoted rHR(dIR)), then processes the information and generates the results dIEi through a processing function RF, and sends the results dIEi to the Event Handler i through the channels REi (the corresponding sending action is denoted sREi(dIEi));
3. The Event Handler i receives the input dIEi from the Reactor through the channel REi (the corresponding reading action is denoted rREi(dIEi)), then processes the input through a processing function EFi, and sends the results dOEi to the Handle Set through the channel EHi (the corresponding sending action is denoted sEHi(dOEi));
4. The Handle Set receives the results from the Event Handler i through the channel EHi (the corresponding reading action is denoted rEHi(dOEi)), then processes the results and generates the results dO through a processing function HF2, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
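The four steps above can be sketched as a minimal dispatch loop, with HF1, RF, the EFi, and HF2 replaced by hypothetical string-tagging stand-ins and the channels collapsed into direct calls (a sketch under these assumptions, not the pattern's definitive implementation):

```python
# A minimal sketch of the Reactor dispatch: the Handle Set passes an event to
# the Reactor, which demultiplexes it to the registered Event Handlers and
# collects their results. HF1/HF2, RF, and EF_i are hypothetical stand-ins.

class Reactor:
    def __init__(self):
        self.handlers = []

    def register(self, handler):
        self.handlers.append(handler)

    def dispatch(self, d_IR):
        # RF: derive one input d_IE_i per handler, sent over channel RE_i;
        # the EF_i results come back over the channels EH_i
        return [h(f"IE{i}({d_IR})") for i, h in enumerate(self.handlers)]

def handle_set(reactor, d_I):
    d_IR = f"IR({d_I})"               # HF1, channel HR
    d_OE = reactor.dispatch(d_IR)
    return "+".join(d_OE)             # HF2: the output d_O

r = Reactor()
r.register(lambda x: f"E0({x})")
r.register(lambda x: f"E1({x})")
print(handle_set(r, "ev"))  # → E0(IE0(IR(ev)))+E1(IE1(IR(ev)))
```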
In the following, we verify the Reactor pattern. We assume all data elements dI, dIR, dIEi, dOEi, dO (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Handle Set module described by APTC are as follows.

H = ∑dI∈∆(rI(dI) ⋅ H2)
H2 = HF1 ⋅ H3
H3 = ∑dIR∈∆(sHR(dIR) ⋅ H4)
H4 = ∑dOEi∈∆(rEHi(dOEi) ⋅ H5)
H5 = HF2 ⋅ H6
H6 = ∑dO∈∆(sO(dO) ⋅ H)
The state transitions of the Event Handler i described by APTC are as follows.

Ei = ∑dIEi∈∆(rREi(dIEi) ⋅ Ei2)
Ei2 = EFi ⋅ Ei3
Ei3 = ∑dOEi∈∆(sEHi(dOEi) ⋅ Ei)
The state transitions of the Reactor described by APTC are as follows.

R = ∑dIR∈∆(rHR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dIE1,⋯,dIEn∈∆(sRE1(dIE1) ≬ ⋯ ≬ sREn(dIEn) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Handle Set for 1 ≤ i ≤ n.

γ(rHR(dIR), sHR(dIR)) ≜ cHR(dIR)
γ(rEHi(dOEi), sEHi(dOEi)) ≜ cEHi(dOEi)

There is one communication function between the Reactor and the Event Handler i.

γ(rREi(dIEi), sREi(dIEi)) ≜ cREi(dIEi)
Let all modules be in parallel; then the Reactor pattern H R E1⋯Ei⋯En can be presented by the following process term.

τI(∂H(Θ(H ≬ R ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En))) = τI(∂H(H ≬ R ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En))

where H = {rHR(dIR), sHR(dIR), rEHi(dOEi), sEHi(dOEi), rREi(dIEi), sREi(dIEi) ∣ dI, dIR, dIEi, dOEi, dO ∈ ∆} for 1 ≤ i ≤ n,
I = {cHR(dIR), cEHi(dOEi), cREi(dIEi), HF1, HF2, EFi, RF ∣ dI, dIR, dIEi, dOEi, dO ∈ ∆} for 1 ≤ i ≤ n.

Then we get the following conclusion on the Reactor pattern.
Theorem 6.5 (Correctness of the Reactor pattern). The Reactor pattern τI(∂H(H ≬ R ≬ E1 ≬
⋯≬ Ei ≬⋯ ≬ En)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(H ≬ R ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(H ≬ R ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En)),

that is, the Reactor pattern τI(∂H(H ≬ R ≬ E1 ≬ ⋯ ≬ Ei ≬ ⋯ ≬ En)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
Figure 54: Proactor pattern
6.2.2 Verification of the Proactor Pattern
The Proactor pattern also decouples the delivery of events between event-driven applications and clients, but the events are triggered by the completion of asynchronous operations. It has four classes of components: the Asynchronous Operation Processor, the Asynchronous Operation, the Proactor, and n Completion Handlers. The Asynchronous Operation Processor interacts with the outside through the channel I; with the Asynchronous Operation through the channels IPO and OPO; and with the Proactor through the channel PP. The Proactor interacts with the Completion Handler i through the channel PCi. The Completion Handler i interacts with the outside through the channel Oi, as illustrated in Figure 54.
The typical process of the Proactor pattern is shown in Figure 55 and as follows.
1. The Asynchronous Operation Processor receives the input dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), processes the input through a processing function AOPF1, and generates the input dIAO to the Asynchronous Operation; it sends dIAO to the Asynchronous Operation through the channel IPO (the corresponding sending action is denoted sIPO(dIAO));
Figure 55: Typical process of Proactor pattern

2. The Asynchronous Operation receives the input from the Asynchronous Operation Processor through the channel IPO (the corresponding reading action is denoted rIPO(dIAO)), processes the input through a processing function AOF, and generates the computational results dOAO; it then sends the results to the Asynchronous Operation Processor through the channel OPO (the corresponding sending action is denoted sOPO(dOAO));
3. The Asynchronous Operation Processor receives the results from the Asynchronous Operation through the channel OPO (the corresponding reading action is denoted rOPO(dOAO)), then processes the results and generates the events dIP through a processing function AOPF2, and sends them to the Proactor through the channel PP (the corresponding sending action is denoted sPP(dIP));
4. The Proactor receives the events dIP from the Asynchronous Operation Processor through the channel PP (the corresponding reading action is denoted rPP(dIP)), then processes the events through a processing function PF, and sends the processed events dICi to the Completion Handler i (for 1 ≤ i ≤ n) through the channel PCi (the corresponding sending action is denoted sPCi(dICi));
5. The Completion Handler i (for 1 ≤ i ≤ n) receives the events from the Proactor through the channel PCi (the corresponding reading action is denoted rPCi(dICi)), processes the events through a processing function CFi, generates the output dOi, and then sends the output through the channel Oi (the corresponding sending action is denoted sOi(dOi)).
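The five steps above can be sketched with a thread pool standing in for the asynchronous operation; AOPF1, AOPF2, AOF, PF, and the CFi are hypothetical string-tagging stand-ins, and the channels are collapsed into direct calls (a sketch under these assumptions, not the pattern's definitive implementation):

```python
# A minimal sketch of the Proactor flow: the Asynchronous Operation Processor
# starts an asynchronous operation; on its completion the Proactor dispatches
# the completion event to each Completion Handler. AOPF1/AOPF2, AOF, PF, and
# CF_i are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def proactor(handlers, d_IP):
    # PF: one completion event d_IC_i per handler, over channel PC_i
    return [h(f"IC{i}({d_IP})") for i, h in enumerate(handlers)]

def run(handlers, d_I):
    d_IAO = f"IAO({d_I})"                             # AOPF1, channel IPO
    with ThreadPoolExecutor() as pool:
        # AOF runs asynchronously; completion arrives over channel OPO
        fut = pool.submit(lambda x: f"OAO({x})", d_IAO)
        d_OAO = fut.result()
    d_IP = f"IP({d_OAO})"                             # AOPF2, channel PP
    return proactor(handlers, d_IP)                   # each handler emits d_O_i

hs = [lambda x: f"O0({x})", lambda x: f"O1({x})"]
print(run(hs, "req"))
```

The `Future.result()` call is where the completion-triggered dispatch begins, mirroring the causality constraints sPCi(dICi) ≤ rPCi(dICi) below.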
In the following, we verify the Proactor pattern. We assume all data elements dI, dIAO, dIP, dICi, dOAO, dOi (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Asynchronous Operation Processor module described by APTC are as follows.

AOP = ∑dI∈∆(rI(dI) ⋅ AOP2)
AOP2 = AOPF1 ⋅ AOP3
AOP3 = ∑dIAO∈∆(sIPO(dIAO) ⋅ AOP4)
AOP4 = ∑dOAO∈∆(rOPO(dOAO) ⋅ AOP5)
AOP5 = AOPF2 ⋅ AOP6
AOP6 = ∑dIP∈∆(sPP(dIP) ⋅ AOP)
The state transitions of the Asynchronous Operation described by APTC are as follows.

AO = ∑dIAO∈∆(rIPO(dIAO) ⋅ AO2)
AO2 = AOF ⋅ AO3
AO3 = ∑dOAO∈∆(sOPO(dOAO) ⋅ AO)
The state transitions of the Proactor described by APTC are as follows.

P = ∑dIP∈∆(rPP(dIP) ⋅ P2)
P2 = PF ⋅ P3
P3 = ∑dIC1,⋯,dICn∈∆(sPC1(dIC1) ≬ ⋯ ≬ sPCn(dICn) ⋅ P)
The state transitions of the Completion Handler i described by APTC are as follows.

Ci = ∑dICi∈∆(rPCi(dICi) ⋅ Ci2)
Ci2 = CFi ⋅ Ci3
Ci3 = ∑dOi∈∆(sOi(dOi) ⋅ Ci)
The sending action must occur before the reading action of the same data through the same channel; then they can asynchronously communicate with each other, otherwise they will cause a deadlock δ. We define the following communication constraint of the Completion Handler i for 1 ≤ i ≤ n.

sPCi(dICi) ≤ rPCi(dICi)
Here, ≤ is a causality relation.
There are two communication constraints between the Asynchronous Operation Processor and the Asynchronous Operation as follows.

sIPO(dIAO) ≤ rIPO(dIAO)
sOPO(dOAO) ≤ rOPO(dOAO)
There is one communication constraint between the Asynchronous Operation Processor and the
Proactor as follows.
sPP (dIP ) ≤ rPP (dIP )
Let all modules be in parallel; then the Proactor pattern AOP AO P C1⋯Ci⋯Cn can be presented by the following process term.

τI(∂H(Θ(AOP ≬ AO ≬ P ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn))) = τI(∂H(AOP ≬ AO ≬ P ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn))

where H = {sPCi(dICi), rPCi(dICi), sIPO(dIAO), rIPO(dIAO), sOPO(dOAO), rOPO(dOAO), sPP(dIP), rPP(dIP) ∣ sPCi(dICi) ≰ rPCi(dICi), sIPO(dIAO) ≰ rIPO(dIAO), sOPO(dOAO) ≰ rOPO(dOAO), sPP(dIP) ≰ rPP(dIP), dI, dIAO, dIP, dICi, dOAO, dOi ∈ ∆} for 1 ≤ i ≤ n,
I = {sPCi(dICi), rPCi(dICi), sIPO(dIAO), rIPO(dIAO), sOPO(dOAO), rOPO(dOAO), sPP(dIP), rPP(dIP), AOPF1, AOPF2, AOF, PF, CFi ∣ sPCi(dICi) ≤ rPCi(dICi), sIPO(dIAO) ≤ rIPO(dIAO), sOPO(dOAO) ≤ rOPO(dOAO), sPP(dIP) ≤ rPP(dIP), dI, dIAO, dIP, dICi, dOAO, dOi ∈ ∆} for 1 ≤ i ≤ n.

Then we get the following conclusion on the Proactor pattern.
Theorem 6.6 (Correctness of the Proactor pattern). The Proactor pattern τI(∂H(AOP ≬
AO ≬ P ≬ C1 ≬⋯ ≬ Ci ≬ ⋯≬ Cn)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(AOP ≬ AO ≬ P ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)) = ∑dI,dO1,⋯,dOn∈∆(rI(dI) ⋅ sO1(dO1) ∥ ⋯ ∥ sOi(dOi) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(AOP ≬ AO ≬ P ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)),

that is, the Proactor pattern τI(∂H(AOP ≬ AO ≬ P ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
6.2.3 Verification of the Asynchronous Completion Token Pattern
The Asynchronous Completion Token pattern also decouples the delivery of events between event-driven applications and clients, with the events triggered by the completion of asynchronous operations. It has four classes of components: the Initiator, the Asynchronous Operation, the Service, and n Completion Handlers. The Initiator interacts with the outside through the channel I; with the Service through the channels IIS and OIS; and with the Completion Handler i through the channel ICi. The Service interacts with the Asynchronous Operation through the channels ISA and OSA. The Completion Handler i interacts with the outside through the channel Oi, as illustrated in Figure 56.
The typical process of the Asynchronous Completion Token pattern is shown in Figure 57 and as follows.
1. The Initiator receives the input dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), processes the input through a processing function IF1, and generates the input dIS to the Service; it sends dIS to the Service through the channel IIS (the corresponding sending action is denoted sIIS(dIS));
Figure 56: Asynchronous Completion Token pattern

2. The Service receives the input from the Initiator through the channel IIS (the corresponding reading action is denoted rIIS(dIS)), processes the input through a processing function SF1, and generates the input dIA to the Asynchronous Operation; it then sends the input to the Asynchronous Operation through the channel ISA (the corresponding sending action is denoted sISA(dIA));
3. The Asynchronous Operation receives the input from the Service through the channel ISA (the corresponding reading action is denoted rISA(dIA)), then processes the input and generates the results dOA through a processing function AF, and sends the results to the Service through the channel OSA (the corresponding sending action is denoted sOSA(dOA));

4. The Service receives the results dOA from the Asynchronous Operation through the channel OSA (the corresponding reading action is denoted rOSA(dOA)), then processes the results and generates the results dOS through a processing function SF2, and sends the results to the Initiator through the channel OIS (the corresponding sending action is denoted sOIS(dOS));

5. The Initiator receives the results dOS from the Service through the channel OIS (the corresponding reading action is denoted rOIS(dOS)), then processes the results and generates the events dICi through a processing function IF2, and sends the processed events dICi to the Completion Handler i (for 1 ≤ i ≤ n) through the channel ICi (the corresponding sending action is denoted sICi(dICi));

6. The Completion Handler i (for 1 ≤ i ≤ n) receives the events from the Initiator through the channel ICi (the corresponding reading action is denoted rICi(dICi)), processes the events through a processing function CFi, generates the output dOi, and then sends the output through the channel Oi (the corresponding sending action is denoted sOi(dOi)).
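The steps above can be sketched in ordinary code. The following is a minimal, hypothetical Python illustration (the class and method names are ours, not the APTC terms): the Initiator attaches a token to each request, so that when the asynchronous operation completes, the result can be demultiplexed to the right Completion Handler without any lookup.

```python
# Hypothetical sketch of the Asynchronous Completion Token idea:
# the token carried with each request identifies its completion handler.
class CompletionHandler:
    def __init__(self, name):
        self.name = name
        self.events = []

    def handle(self, result):              # plays the role of CFi
        self.events.append(f"{self.name}:{result}")

class Service:
    def __init__(self):
        self.pending = []

    def submit(self, data, token):         # start the asynchronous operation
        self.pending.append((data, token))

    def run_completions(self, initiator):  # operation finishes, notify back
        for data, token in self.pending:
            initiator.on_completion(data.upper(), token)
        self.pending.clear()

class Initiator:
    def __init__(self, service):
        self.service = service

    def start(self, data, handler):        # token = the handler itself
        self.service.submit(data, token=handler)

    def on_completion(self, result, token):  # demultiplex via the token
        token.handle(result)

service = Service()
initiator = Initiator(service)
h1, h2 = CompletionHandler("h1"), CompletionHandler("h2")
initiator.start("a", h1)
initiator.start("b", h2)
service.run_completions(initiator)
print(h1.events, h2.events)   # ['h1:A'] ['h2:B']
```

The token spares the Initiator any bookkeeping: the completion event carries everything needed to route it.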
In the following, we verify the Asynchronous Completion Token pattern. We assume all data elements dI, dIS, dIA, dICi, dOA, dOS, dOi (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Initiator module described by APTC are as follows.
Figure 57: Typical process of Asynchronous Completion Token pattern
I = ∑dI∈∆(rI(dI) ⋅ I2)
I2 = IF1 ⋅ I3
I3 = ∑dIS∈∆(sIIS(dIS) ⋅ I4)
I4 = ∑dOS∈∆(rOIS(dOS) ⋅ I5)
I5 = IF2 ⋅ I6
I6 = ∑dIC1,⋯,dICn∈∆(sIC1(dIC1) ≬ ⋯ ≬ sICn(dICn) ⋅ I)
The state transitions of the Service described by APTC are as follows.
S = ∑dIS∈∆(rIIS(dIS) ⋅ S2)
S2 = SF1 ⋅ S3
S3 = ∑dIA∈∆(sISA(dIA) ⋅ S4)
S4 = ∑dOA∈∆(rOSA(dOA) ⋅ S5)
S5 = SF2 ⋅ S6
S6 = ∑dOS∈∆(sOIS(dOS) ⋅ S)
The state transitions of the Asynchronous Operation described by APTC are as follows.
A = ∑dIA∈∆(rISA(dIA) ⋅ A2)
A2 = AF ⋅ A3
A3 = ∑dOA∈∆(sOSA(dOA) ⋅ A)
The state transitions of the Completion Handler i described by APTC are as follows.
Ci = ∑dICi∈∆(rICi(dICi) ⋅ Ci2)
Ci2 = CFi ⋅ Ci3
Ci3 = ∑dOi∈∆(sOi(dOi) ⋅ Ci)
The sending action must occur before the reading action of the same data through the same channel; then they can communicate asynchronously with each other, otherwise a deadlock δ will be caused. We define the following communication constraint of the Completion Handler i for 1 ≤ i ≤ n.
sICi(dICi) ≤ rICi(dICi)
Here, ≤ is a causality relation.
There are two communication constraints between the Initiator and the Service as follows.
sIIS(dIS) ≤ rIIS(dIS)
sOIS(dOS) ≤ rOIS(dOS)
There are two communication constraints between the Service and the Asynchronous Operation as follows.
sISA(dIA) ≤ rISA(dIA)
sOSA(dOA) ≤ rOSA(dOA)
Let all modules be in parallel, then the Asynchronous Completion Token pattern I S A C1⋯Ci⋯Cn
can be presented by the following process term.
τI(∂H(Θ(I ≬ S ≬ A ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn))) = τI(∂H(I ≬ S ≬ A ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn))
where H = {sICi(dICi), rICi(dICi), sIIS(dIS), rIIS(dIS), sOIS(dOS), rOIS(dOS), sISA(dIA), rISA(dIA), sOSA(dOA), rOSA(dOA)
∣sICi(dICi) ≰ rICi(dICi), sIIS(dIS) ≰ rIIS(dIS), sOIS(dOS) ≰ rOIS(dOS), sISA(dIA) ≰ rISA(dIA), sOSA(dOA) ≰ rOSA(dOA), dI, dIS, dIA, dICi, dOA, dOS, dOi ∈ ∆} for 1 ≤ i ≤ n,
I = {sICi(dICi), rICi(dICi), sIIS(dIS), rIIS(dIS), sOIS(dOS), rOIS(dOS), sISA(dIA), rISA(dIA), sOSA(dOA), rOSA(dOA), IF1, IF2, SF1, SF2, AF, CFi
∣sICi(dICi) ≤ rICi(dICi), sIIS(dIS) ≤ rIIS(dIS), sOIS(dOS) ≤ rOIS(dOS), sISA(dIA) ≤ rISA(dIA), sOSA(dOA) ≤ rOSA(dOA), dI, dIS, dIA, dICi, dOA, dOS, dOi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Asynchronous Completion Token pattern.
Theorem 6.7 (Correctness of the Asynchronous Completion Token pattern). The Asynchronous
Completion Token pattern τI(∂H(I ≬ S ≬ A ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)) can exhibit desired
external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(I ≬ S ≬ A ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)) = ∑dI,dO1,⋯,dOn∈∆(rI(dI) ⋅ sO1(dO1) ∥ ⋯ ∥ sOi(dOi) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(I ≬ S ≬ A ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬ Cn)),
Figure 58: Acceptor-Connector pattern
that is, the Asynchronous Completion Token pattern τI(∂H(I ≬ S ≬ A ≬ C1 ≬ ⋯ ≬ Ci ≬ ⋯ ≬
Cn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.2.4 Verification of the Acceptor-Connector Pattern
The Acceptor-Connector pattern decouples the connection and initialization of two cooperating peers. There are six modules in the Acceptor-Connector pattern: the two Service Handlers, the two Dispatchers, and the two initiators: the Connector and the Acceptor. The Service Handlers interact with the user through the channels I1, I2 and O1, O2; with the Dispatchers through the channels DS1 and DS2; with each other through the channels ISS1 and ISS2. The Connector interacts with the Dispatcher 1 through the channel CD; with the Acceptor through the channel CA; with the outside through the channel IC. The Acceptor interacts with the Dispatcher 2 through the channel AD; with the outside through the channel IA. The Dispatchers interact with the Service Handlers through the channels DS1 and DS2, as illustrated in Figure 58.
The typical process of the Acceptor-Connector pattern is shown in Figure 59 and as follows.
1. The Connector receives the request dIC from the outside through the channel IC (the corresponding reading action is denoted rIC(dIC)), then processes the request and generates the requests dID1 and dIA through a processing function CF, and sends the request to the Dispatcher 1 through the channel CD (the corresponding sending action is denoted sCD(dID1)) and sends the request to the Acceptor through the channel CA (the corresponding sending action is denoted sCA(dIA));
2. The Dispatcher 1 receives the request dID1 from the Connector through the channel CD (the corresponding reading action is denoted rCD(dID1)), then processes the request and
generates the request dIS1 through a processing function D1F, and sends the request to the Service Handler 1 through the channel DS1 (the corresponding sending action is denoted sDS1(dIS1));

3. The Service Handler 1 receives the request dIS1 from the Dispatcher 1 through the channel DS1 (the corresponding reading action is denoted rDS1(dIS1)), then processes the request through a processing function S1F1 and makes ready to accept the request from the outside;

4. The Acceptor receives the request dIA from the Connector through the channel CA (the corresponding reading action is denoted rCA(dIA)), then processes the request and generates the request dID2 through a processing function AF, and sends the request to the Dispatcher 2 through the channel AD (the corresponding sending action is denoted sAD(dID2));

5. The Dispatcher 2 receives the request dID2 from the Acceptor through the channel AD (the corresponding reading action is denoted rAD(dID2)), then processes the request and generates the request dIS2 through a processing function D2F, and sends the request to the Service Handler 2 through the channel DS2 (the corresponding sending action is denoted sDS2(dIS2));

6. The Service Handler 2 receives the request dIS2 from the Dispatcher 2 through the channel DS2 (the corresponding reading action is denoted rDS2(dIS2)), then processes the request through a processing function S2F1 and makes ready to accept the request from the outside;
7. The Service Handler 1 receives the request dI1 from the user through the channel I1 (the corresponding reading action is denoted rI1(dI1)), then processes the request dI1 through a processing function S1F2, and sends the processed request dISS2 to the Service Handler 2 through the channel ISS1 (the corresponding sending action is denoted sISS1(dISS2));

8. The Service Handler 2 receives the request dISS2 from the Service Handler 1 through the channel ISS1 (the corresponding reading action is denoted rISS1(dISS2)), then processes the request and generates the response dO2 through a processing function S2F3, and sends the response to the outside through the channel O2 (the corresponding sending action is denoted sO2(dO2));

9. The Service Handler 2 receives the request dI2 from the user through the channel I2 (the corresponding reading action is denoted rI2(dI2)), then processes the request dI2 through a processing function S2F2, and sends the processed request dISS1 to the Service Handler 1 through the channel ISS2 (the corresponding sending action is denoted sISS2(dISS1));

10. The Service Handler 1 receives the request dISS1 from the Service Handler 2 through the channel ISS2 (the corresponding reading action is denoted rISS2(dISS1)), then processes the request and generates the response dO1 through a processing function S1F3, and sends the response to the outside through the channel O1 (the corresponding sending action is denoted sO1(dO1)).
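The essence of the steps above is that connection establishment and service processing are separated. The following is a toy Python sketch under our own hypothetical names (the connection machinery of the Connector, Acceptor and Dispatchers is collapsed into one handoff function): once the link exists, the two Service Handlers exchange data symmetrically, regardless of which side initiated.

```python
# Hypothetical sketch of the Acceptor-Connector idea: establishment
# (connect) is decoupled from service processing (send/receive).
class ServiceHandler:
    def __init__(self, name):
        self.name, self.peer, self.log = name, None, []

    def open(self, peer):      # called on handoff after establishment
        self.peer = peer

    def send(self, msg):       # forward a request to the peer handler
        self.peer.receive(f"{self.name}->{msg}")

    def receive(self, msg):    # record the peer's request as a response
        self.log.append(msg)

def connect(active, passive):
    # The Connector actively initiates, the Acceptor passively accepts;
    # both then hand the established link to the handlers and step aside.
    active.open(passive)
    passive.open(active)

a, b = ServiceHandler("a"), ServiceHandler("b")
connect(a, b)
a.send("hello")
b.send("world")
print(a.log, b.log)   # ['b->world'] ['a->hello']
```

After `connect` returns, the initiation roles are irrelevant: steps 7-10 of the pattern run identically in both directions.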
In the following, we verify the Acceptor-Connector pattern. We assume all data elements dI1, dI2, dIC, dIA, dID1, dID2, dIS1, dIS2, dISS1, dISS2, dO1, dO2 are from a finite set ∆. We only give the transitions of the first process.
Figure 59: Typical process of Acceptor-Connector pattern
The state transitions of the Connector module described by APTC are as follows.
C = ∑dIC∈∆(rIC(dIC) ⋅ C2)
C2 = CF ⋅ C3
C3 = ∑dIA,dID1∈∆(sCA(dIA) ≬ sCD(dID1) ⋅ C)
The state transitions of the Dispatcher 1 module described by APTC are as follows.
D1 = ∑dID1∈∆(rCD(dID1) ⋅ D12)
D12 = D1F ⋅ D13
D13 = ∑dIS1∈∆(sDS1(dIS1) ⋅ D1)
The state transitions of the Service Handler 1 module described by APTC are as follows.
S1 = ∑dI1,dIS1,dISS1∈∆(rI1(dI1) ≬ rDS1(dIS1) ≬ rISS2(dISS1) ⋅ S12)
S12 = S1F1 ≬ S1F2 ≬ S1F3 ⋅ S13
S13 = ∑dISS2,dO1∈∆(sISS1(dISS2) ≬ sO1(dO1) ⋅ S1)
The state transitions of the Acceptor module described by APTC are as follows.
A = ∑dIA∈∆(rCA(dIA) ⋅ A2)
A2 = AF ⋅ A3
A3 = ∑dID2∈∆(sAD(dID2) ⋅ A)
The state transitions of the Dispatcher 2 module described by APTC are as follows.
D2 = ∑dID2∈∆(rAD(dID2) ⋅ D22)
D22 = D2F ⋅ D23
D23 = ∑dIS2∈∆(sDS2(dIS2) ⋅ D2)
The state transitions of the Service Handler 2 module described by APTC are as follows.
S2 = ∑dI2,dIS2,dISS2∈∆(rI2(dI2) ≬ rDS2(dIS2) ≬ rISS1(dISS2) ⋅ S22)
S22 = S2F1 ≬ S2F2 ≬ S2F3 ⋅ S23
S23 = ∑dISS1,dO2∈∆(sISS2(dISS1) ≬ sO2(dO2) ⋅ S2)
The sending action and the reading action of the same data through the same channel can communicate with each other, otherwise a deadlock δ will be caused. We define the following communication function between the Connector and the Acceptor.
γ(rCA(dIA), sCA(dIA)) ≜ cCA(dIA)
There is one communication function between the Connector and the Dispatcher 1 as follows.
γ(rCD(dID1), sCD(dID1)) ≜ cCD(dID1)
There is one communication function between the Dispatcher 1 and the Service Handler 1 as follows.
γ(rDS1(dIS1), sDS1(dIS1)) ≜ cDS1(dIS1)
We define the following communication function between the Acceptor and the Dispatcher 2.
γ(rAD(dID2), sAD(dID2)) ≜ cAD(dID2)
There is one communication function between the Dispatcher 2 and the Service Handler 2 as follows.
γ(rDS2(dIS2), sDS2(dIS2)) ≜ cDS2(dIS2)
There are two communication functions between the Service Handler 1 and the Service Handler 2 as follows.
γ(rISS1(dISS2), sISS1(dISS2)) ≜ cISS1(dISS2)
γ(rISS2(dISS1), sISS2(dISS1)) ≜ cISS2(dISS1)
Let all modules be in parallel, then the Acceptor-Connector pattern C D1 S1 A D2 S2
can be presented by the following process term.
τI(∂H(Θ(C ≬ D1 ≬ S1 ≬ A ≬ D2 ≬ S2))) = τI(∂H(C ≬ D1 ≬ S1 ≬ A ≬ D2 ≬ S2))
where H = {rCA(dIA), sCA(dIA), rCD(dID1), sCD(dID1), rDS1(dIS1), sDS1(dIS1), rAD(dID2), sAD(dID2), rDS2(dIS2), sDS2(dIS2), rISS1(dISS2), sISS1(dISS2), rISS2(dISS1), sISS2(dISS1)
∣dI1, dI2, dIC, dIA, dID1, dID2, dIS1, dIS2, dISS1, dISS2, dO1, dO2 ∈ ∆},
I = {cCA(dIA), cCD(dID1), cDS1(dIS1), cAD(dID2), cDS2(dIS2), cISS1(dISS2), cISS2(dISS1), CF, AF, D1F, D2F, S1F1, S1F2, S1F3, S2F1, S2F2, S2F3
∣dI1, dI2, dIC, dIA, dID1, dID2, dIS1, dIS2, dISS1, dISS2, dO1, dO2 ∈ ∆}.
Then we get the following conclusion on the Acceptor-Connector pattern.
Theorem 6.8 (Correctness of the Acceptor-Connector pattern). The Acceptor-Connector pat-
tern τI(∂H(C ≬D1≬ S1≬ A≬D2≬ S2)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(C ≬ D1 ≬ S1 ≬ A ≬ D2 ≬ S2)) = ∑dIC,dI1,dI2,dO1,dO2∈∆(rIC(dIC) ∥ (rI1(dI1) ⋅ sO2(dO2)) ∥ (rI2(dI2) ⋅ sO1(dO1))) ⋅ τI(∂H(C ≬ D1 ≬ S1 ≬ A ≬ D2 ≬ S2)),
that is, the Acceptor-Connector pattern τI(∂H(C ≬ D1 ≬ S1 ≬ A ≬ D2 ≬ S2)) can exhibit
desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.3 Synchronization Patterns
In this subsection, we verify the synchronization patterns, including the Scoped Locking pattern,
the Strategized Locking pattern, the Thread-Safe Interface pattern, and the Double-Checked
Locking Optimization pattern.
6.3.1 Verification of the Scoped Locking Pattern
The Scoped Locking pattern ensures that a lock is acquired automatically when control enters
a scope and released when control leaves the scope. In Scoped Locking pattern, there are two
classes of modules: The n Controls and the Guard. The Control i interacts with the outside
through the input channel Ii and the output channel Oi; with the Guard through the channel
CGi for 1 ≤ i ≤ n, as illustrated in Figure 60.
The typical process is shown in Figure 61 and as follows.
1. The Control i receives the input dIi from the outside through the channel Ii (the corresponding reading action is denoted rIi(dIi)), then it processes the input and generates the input dIGi through a processing function CFi1, and it sends the input to the Guard through the channel CGi (the corresponding sending action is denoted sCGi(dIGi));

2. The Guard receives the input dIGi from the Control i through the channel CGi (the corresponding reading action is denoted rCGi(dIGi)) for 1 ≤ i ≤ n, then processes the request and generates the output dOGi through a processing function GFi (note that, after the processing, a lock is acquired), and sends the output to the Control i through the channel CGi (the corresponding sending action is denoted sCGi(dOGi));

3. The Control i receives the output from the Guard through the channel CGi (the corresponding reading action is denoted rCGi(dOGi)), then processes the output and generates the output dOi through a processing function CFi2 (accessing the resource), and sends the
Figure 60: Scoped Locking pattern
Figure 61: Typical process of Scoped Locking pattern
output to the outside through the channel Oi (the corresponding sending action is denoted
sOi(dOi)).
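The guard in the steps above is the scope-bound lock familiar from C++ RAII. A minimal Python sketch (hypothetical names) uses a context manager so that the lock is released on every exit path, including exceptions:

```python
import threading

# Hypothetical sketch of Scoped Locking: the guard acquires the lock on
# scope entry and releases it on scope exit, mirroring the Guard module.
class ScopedGuard:
    def __init__(self, lock):
        self.lock = lock

    def __enter__(self):        # acquire when control enters the scope
        self.lock.acquire()
        return self

    def __exit__(self, *exc):   # release when control leaves, even on error
        self.lock.release()
        return False

lock = threading.Lock()
counter = 0

def control(n):                 # access the shared resource inside the scope
    global counter
    for _ in range(n):
        with ScopedGuard(lock):
            counter += 1

threads = [threading.Thread(target=control, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 4000: every increment ran under the lock
```

The point of the pattern is that no call site can forget the release: it is tied to the scope itself.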
In the following, we verify the Scoped Locking pattern. We assume all data elements dIi, dOi, dIGi, dOGi for 1 ≤ i ≤ n are from a finite set ∆.
The state transitions of the Control i module described by APTC are as follows.
Ci = ∑dIi∈∆(rIi(dIi) ⋅ Ci2)
Ci2 = CFi1 ⋅ Ci3
Ci3 = ∑dIGi∈∆(sCGi(dIGi) ⋅ Ci4)
Ci4 = ∑dOGi∈∆(rCGi(dOGi) ⋅ Ci5)
Ci5 = CFi2 ⋅ Ci6 (CF12 % ⋯ % CFn2)
Ci6 = ∑dOi∈∆(sOi(dOi) ⋅ Ci)
The state transitions of the Guard module described by APTC are as follows.
G = ∑dIG1,⋯,dIGn∈∆(rCG1(dIG1) ≬ ⋯ ≬ rCGn(dIGn) ⋅ G2)
G2 = GF1 ≬ ⋯ ≬ GFn ⋅ G3 (GF1 % ⋯ % GFn)
G3 = ∑dOG1,⋯,dOGn∈∆(sCG1(dOG1) ≬ ⋯ ≬ sCGn(dOGn) ⋅ G)
The sending action and the reading action of the same data through the same channel can communicate with each other, otherwise a deadlock δ will be caused. We define the following communication functions between the Control i and the Guard.
γ(rCGi(dIGi), sCGi(dIGi)) ≜ cCGi(dIGi)
γ(rCGi(dOGi), sCGi(dOGi)) ≜ cCGi(dOGi)
Let all modules be in parallel, then the Scoped Locking pattern C1⋯Cn G can be presented
by the following process term.
τI(∂H(Θ(C1 ≬ ⋯ ≬ Cn ≬ G))) = τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ G))
where H = {rCGi(dIGi), sCGi(dIGi), rCGi(dOGi), sCGi(dOGi)∣dIi, dOi, dIGi, dOGi ∈ ∆},
I = {cCGi(dIGi), cCGi(dOGi), CFi1, CFi2, GFi∣dIi, dOi, dIGi, dOGi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Scoped Locking pattern.
Theorem 6.9 (Correctness of the Scoped Locking pattern). The Scoped Locking pattern τI(∂H(C1 ≬
⋯≬ Cn ≬ G)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ G)) = ∑dI1,dO1,⋯,dIn,dOn∈∆(rI1(dI1) ∥ ⋯ ∥ rIn(dIn) ⋅ sO1(dO1) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ G)),
that is, the Scoped Locking pattern τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ G)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.3.2 Verification of the Strategized Locking Pattern
The Strategized Locking pattern uses a component (the LockStrategy) to parameterize the synchronization for protecting the concurrent access to the critical section. In the Strategized Locking pattern, there are two classes of modules: the n Components and the n LockStrategies. The Component i interacts with the outside through the input channel Ii and the output channel Oi; with the LockStrategy i through the channel CLi for 1 ≤ i ≤ n, as illustrated in Figure 62.
The typical process is shown in Figure 63 and as follows.
Figure 62: Strategized Locking pattern
1. The Component i receives the input dIi from the outside through the channel Ii (the corresponding reading action is denoted rIi(dIi)), then it processes the input and generates the input dILi through a processing function CFi1, and it sends the input to the LockStrategy i through the channel CLi (the corresponding sending action is denoted sCLi(dILi));

2. The LockStrategy i receives the input dILi from the Component i through the channel CLi (the corresponding reading action is denoted rCLi(dILi)) for 1 ≤ i ≤ n, then processes the request and generates the output dOLi through a processing function LFi (note that, after the processing, a lock is acquired), and sends the output to the Component i through the channel CLi (the corresponding sending action is denoted sCLi(dOLi));

3. The Component i receives the output from the LockStrategy i through the channel CLi (the corresponding reading action is denoted rCLi(dOLi)), then processes the output and generates the output dOi through a processing function CFi2, and sends the output to the outside through the channel Oi (the corresponding sending action is denoted sOi(dOi)).
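The pattern's point is that the locking policy in the steps above is a pluggable parameter. A small hypothetical Python sketch (our names): the same component code runs with a real lock or with a do-nothing null lock, chosen at construction time:

```python
import threading

# Hypothetical sketch of Strategized Locking: the Component is
# parameterized by its LockStrategy, so single-threaded and concurrent
# configurations share one code path.
class NullLock:
    def __enter__(self): return self       # the strategy degenerates to a no-op
    def __exit__(self, *exc): return False

class Component:
    def __init__(self, lock_strategy):
        self.lock = lock_strategy          # the LockStrategy module
        self.value = 0

    def update(self, n):                   # the critical section
        for _ in range(n):
            with self.lock:
                self.value += 1

fast = Component(NullLock())               # single-threaded configuration
safe = Component(threading.Lock())         # concurrent configuration
fast.update(10)
safe.update(10)
print(fast.value, safe.value)   # 10 10
```

Swapping strategies never touches the component, which is exactly the decoupling the pattern verifies.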
In the following, we verify the Strategized Locking pattern. We assume all data elements dIi, dOi, dILi, dOLi for 1 ≤ i ≤ n are from a finite set ∆.
The state transitions of the Component i module described by APTC are as follows.
Ci = ∑dIi∈∆(rIi(dIi) ⋅ Ci2)
Ci2 = CFi1 ⋅ Ci3
Ci3 = ∑dILi∈∆(sCLi(dILi) ⋅ Ci4)
Ci4 = ∑dOLi∈∆(rCLi(dOLi) ⋅ Ci5)
Ci5 = CFi2 ⋅ Ci6 (CF12 % ⋯ % CFn2)
Ci6 = ∑dOi∈∆(sOi(dOi) ⋅ Ci)
Figure 63: Typical process of Strategized Locking pattern
The state transitions of the LockStrategy i module described by APTC are as follows.
Li = ∑dILi∈∆(rCLi
(dILi) ⋅Li2)
Li2 = LFi ⋅Li3 (LF1%⋯%LFn)Li3 = ∑dOLi
∈∆(sCLi(dOLi
) ⋅Li)
The sending action and the reading action of the same data through the same channel can communicate with each other, otherwise a deadlock δ will be caused. We define the following communication functions between the Component i and the LockStrategy i.
γ(rCLi(dILi), sCLi(dILi)) ≜ cCLi(dILi)
γ(rCLi(dOLi), sCLi(dOLi)) ≜ cCLi(dOLi)
Let all modules be in parallel, then the Strategized Locking pattern C1⋯Cn L1⋯Ln can be
presented by the following process term.
τI(∂H(Θ(C1 ≬ ⋯ ≬ Cn ≬ L1 ≬ ⋯ ≬ Ln))) = τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ L1 ≬ ⋯ ≬ Ln))
where H = {rCLi(dILi), sCLi(dILi), rCLi(dOLi), sCLi(dOLi)∣dIi, dOi, dILi, dOLi ∈ ∆},
I = {cCLi(dILi), cCLi(dOLi), CFi1, CFi2, LFi∣dIi, dOi, dILi, dOLi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Strategized Locking pattern.
Theorem 6.10 (Correctness of the Strategized Locking pattern). The Strategized Locking pat-
tern τI(∂H(C1 ≬ ⋯≬ Cn ≬ L1 ≬⋯ ≬ Ln)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
Figure 64: Double-Checked Locking Optimization pattern
τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ L1 ≬ ⋯ ≬ Ln)) = ∑dI1,dO1,⋯,dIn,dOn∈∆(rI1(dI1) ∥ ⋯ ∥ rIn(dIn) ⋅ sO1(dO1) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ L1 ≬ ⋯ ≬ Ln)),
that is, the Strategized Locking pattern τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ L1 ≬ ⋯ ≬ Ln)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.3.3 Verification of the Double-Checked Locking Optimization Pattern
The Double-Checked Locking Optimization pattern ensures that a lock is acquired in a thread-safe manner. In the Double-Checked Locking Optimization pattern, there are two classes of modules: the n Threads and the Singleton Lock. The Thread i interacts with the outside through the input channel Ii and the output channel Oi; with the Singleton Lock through the channel TSi for 1 ≤ i ≤ n, as illustrated in Figure 64.
The typical process is shown in Figure 65 and as follows.
1. The Thread i receives the input dIi from the outside through the channel Ii (the corresponding reading action is denoted rIi(dIi)), then it processes the input and generates the input dISi through a processing function TFi1, and it sends the input to the Singleton Lock through the channel TSi (the corresponding sending action is denoted sTSi(dISi));

2. The Singleton Lock receives the input dISi from the Thread i through the channel TSi (the corresponding reading action is denoted rTSi(dISi)) for 1 ≤ i ≤ n, then processes the request and generates the output dOSi through a processing function SFi (note that, after the processing, a lock is acquired), and sends the output to the Thread i through the channel TSi (the corresponding sending action is denoted sTSi(dOSi));

3. The Thread i receives the output from the Singleton Lock through the channel TSi (the corresponding reading action is denoted rTSi(dOSi)), then processes the output and generates the output dOi through a processing function TFi2 (accessing the resource), and sends
Figure 65: Typical process of Double-Checked Locking Optimization pattern
the output to the outside through the channel Oi (the corresponding sending action is
denoted sOi(dOi)).
In the following, we verify the Double-Checked Locking Optimization pattern. We assume all data elements dIi, dOi, dISi, dOSi for 1 ≤ i ≤ n are from a finite set ∆.
The state transitions of the Thread i module described by APTC are as follows.
Ti = ∑dIi∈∆(rIi(dIi) ⋅ Ti2)
Ti2 = TFi1 ⋅ Ti3
Ti3 = ∑dISi∈∆(sTSi(dISi) ⋅ Ti4)
Ti4 = ∑dOSi∈∆(rTSi(dOSi) ⋅ Ti5)
Ti5 = TFi2 ⋅ Ti6 (TF12 % ⋯ % TFn2)
Ti6 = ∑dOi∈∆(sOi(dOi) ⋅ Ti)
The state transitions of the Singleton Lock module described by APTC are as follows.
S = ∑dIS1,⋯,dISn∈∆(rTS1(dIS1) ≬ ⋯ ≬ rTSn(dISn) ⋅ S2)
S2 = SF1 ≬ ⋯ ≬ SFn ⋅ S3 (SF1 % ⋯ % SFn)
S3 = ∑dOS1,⋯,dOSn∈∆(sTS1(dOS1) ≬ ⋯ ≬ sTSn(dOSn) ⋅ S)
The sending action and the reading action of the same data through the same channel can communicate with each other, otherwise a deadlock δ will be caused. We define the following communication functions between the Thread i and the Singleton Lock.
γ(rTSi(dISi), sTSi(dISi)) ≜ cTSi(dISi)
γ(rTSi(dOSi), sTSi(dOSi)) ≜ cTSi(dOSi)
Let all modules be in parallel, then the Double-Checked Locking Optimization pattern T1⋯Tn S
can be presented by the following process term.
τI(∂H(Θ(T1 ≬ ⋯ ≬ Tn ≬ S))) = τI(∂H(T1 ≬ ⋯ ≬ Tn ≬ S))
where H = {rTSi(dISi), sTSi(dISi), rTSi(dOSi), sTSi(dOSi)∣dIi, dOi, dISi, dOSi ∈ ∆},
I = {cTSi(dISi), cTSi(dOSi), TFi1, TFi2, SFi∣dIi, dOi, dISi, dOSi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Double-Checked Locking Optimization pattern.
Theorem 6.11 (Correctness of the Double-Checked Locking Optimization pattern). The Double-
Checked Locking Optimization pattern τI(∂H(T1 ≬ ⋯ ≬ Tn ≬ S)) can exhibit desired external
behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws
of APTC, we can prove that
τI(∂H(T1 ≬ ⋯ ≬ Tn ≬ S)) = ∑dI1,dO1,⋯,dIn,dOn∈∆(rI1(dI1) ∥ ⋯ ∥ rIn(dIn) ⋅ sO1(dO1) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(T1 ≬ ⋯ ≬ Tn ≬ S)),
that is, the Double-Checked Locking Optimization pattern τI(∂H(T1 ≬ ⋯ ≬ Tn ≬ S)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.4 Concurrency Patterns
In this subsection, we verify concurrency-related patterns, including the Active Object pattern, the Monitor Object pattern, the Half-Sync/Half-Async pattern, the Leader/Followers pattern, and the Thread-Specific Storage pattern.
6.4.1 Verification of the Active Object Pattern
The Active Object pattern is used to decouple the method request and method execution of an object. In this pattern, there are a Proxy module, a Scheduler module, and n Method Request modules and n Servant modules. The Servant is used to implement concrete computation, the Method Request is used to encapsulate a Servant, and the Scheduler is used to manage Method Requests. The Proxy module interacts with the outside through the channels I and O, and with the Scheduler through the channels IPS and OPS. The Scheduler interacts with the Method Request i (for 1 ≤ i ≤ n) through the channels ISMi and OSMi, and the Method Request i interacts with the Servant i through the channels IMSi and OMSi, as illustrated in Figure 66.
The typical process of the Active Object pattern is shown in Figure 67 and as follows.
1. The Proxy receives the request dI from the outside through the channel I (the corresponding reading action is denoted rI(dI)), then processes the request through a processing function PF1 and generates the request dISh, and sends dISh to the Scheduler through the channel IPS (the corresponding sending action is denoted sIPS(dISh));
Figure 66: Active Object pattern
2. The Scheduler receives the request dISh from the Proxy through the channel IPS (the corresponding reading action is denoted rIPS(dISh)), then processes the request through a processing function ShF1 and generates the request dIMi, and sends dIMi to the Method Request i through the channel ISMi (the corresponding sending action is denoted sISMi(dIMi));

3. The Method Request i receives the request dIMi from the Scheduler through the channel ISMi (the corresponding reading action is denoted rISMi(dIMi)), then processes the request through a processing function MFi1 and generates the request dISi, and sends the request to the Servant i through the channel IMSi (the corresponding sending action is denoted sIMSi(dISi));

4. The Servant i receives the request dISi from the Method Request i through the channel IMSi (the corresponding reading action is denoted rIMSi(dISi)), then processes the request through a processing function SFi and generates the response dOSi, and sends the response to the Method Request i through the channel OMSi (the corresponding sending action is denoted sOMSi(dOSi));

5. The Method Request i receives the response dOSi from the Servant i through the channel OMSi (the corresponding reading action is denoted rOMSi(dOSi)), then processes the response through a processing function MFi2 and generates the response dOMi, and sends the response to the Scheduler through the channel OSMi (the corresponding sending action is denoted sOSMi(dOMi));

6. The Scheduler receives the response dOMi from the Method Request i through the channel OSMi (the corresponding reading action is denoted rOSMi(dOMi)), then processes the response and generates the response dOSh through a processing function ShF2, and sends dOSh to the Proxy through the channel OPS (the corresponding sending action is denoted sOPS(dOSh));
Figure 67: Typical process of Active Object pattern
7. The Proxy receives the response dOSh from the Scheduler through the channel OPS (the corresponding reading action is denoted rOPS(dOSh)), then processes the response through a processing function PF2 and generates the response dO, and sends dO to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
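The request/execution decoupling in the steps above can be sketched compactly in Python (hypothetical names, not the APTC terms): the Proxy turns a call into a Method Request placed on the Scheduler's queue; a separate scheduler thread executes it on the Servant and fills in a future-like result slot.

```python
import queue
import threading

# Hypothetical sketch of the Active Object pattern: callers enqueue method
# requests; a dedicated scheduler thread executes them on the servant.
class ActiveObject:
    def __init__(self, servant):
        self.servant = servant
        self.requests = queue.Queue()          # the Scheduler's request queue
        threading.Thread(target=self._scheduler, daemon=True).start()

    def _scheduler(self):                      # dispatch loop (Scheduler)
        while True:
            method_request, done, box = self.requests.get()
            box.append(method_request(self.servant))  # run on the Servant
            done.set()                         # fulfil the future

    def call(self, method_request):            # Proxy: enqueue, return future
        done, box = threading.Event(), []
        self.requests.put((method_request, done, box))
        return done, box

class Servant:
    def compute(self, x):                      # the concrete computation
        return x * x

ao = ActiveObject(Servant())
done, box = ao.call(lambda servant: servant.compute(7))
done.wait()                                    # rendezvous with the result
print(box[0])   # 49
```

The caller never touches the Servant directly: every invocation flows through the queue, which is what serializes execution.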
In the following, we verify the Active Object pattern. We assume all data elements dI, dISh, dIMi, dISi, dOSi, dOMi, dOSh, dO (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Proxy module described by APTC are as follows.
P = ∑dI∈∆(rI(dI) ⋅ P2)
P2 = PF1 ⋅ P3
P3 = ∑dISh∈∆(sIPS(dISh) ⋅ P4)
P4 = ∑dOSh∈∆(rOPS(dOSh) ⋅ P5)
P5 = PF2 ⋅ P6
P6 = ∑dO∈∆(sO(dO) ⋅ P)
The state transitions of the Scheduler module described by APTC are as follows.
Sh = ∑dISh∈∆(rIPS(dISh) ⋅ Sh2)
Sh2 = ShF1 ⋅ Sh3
Sh3 = ∑dIM1,⋯,dIMn∈∆(sISM1(dIM1) ≬ ⋯ ≬ sISMn(dIMn) ⋅ Sh4)
Sh4 = ∑dOM1,⋯,dOMn∈∆(rOSM1(dOM1) ≬ ⋯ ≬ rOSMn(dOMn) ⋅ Sh5)
Sh5 = ShF2 ⋅ Sh6
Sh6 = ∑dOSh∈∆(sOPS(dOSh) ⋅ Sh)
The state transitions of the Method Request i described by APTC are as follows.
Mi = ∑dIMi∈∆(rISMi(dIMi) ⋅ Mi2)
Mi2 = MFi1 ⋅ Mi3
Mi3 = ∑dISi∈∆(sIMSi(dISi) ⋅ Mi4)
Mi4 = ∑dOSi∈∆(rOMSi(dOSi) ⋅ Mi5)
Mi5 = MFi2 ⋅ Mi6
Mi6 = ∑dOMi∈∆(sOSMi(dOMi) ⋅ Mi)
The state transitions of the Servant i described by APTC are as follows.
Si = ∑dISi∈∆(rIMSi(dISi) ⋅ Si2)
Si2 = SFi ⋅ Si3
Si3 = ∑dOSi∈∆(sOMSi(dOSi) ⋅ Si)
The sending action and the reading action of the same data through the same channel can communicate with each other, otherwise a deadlock δ will be caused. We define the following communication functions between the Proxy and the Scheduler.
γ(rIPS(dISh), sIPS(dISh)) ≜ cIPS(dISh)
γ(rOPS(dOSh), sOPS(dOSh)) ≜ cOPS(dOSh)
There are two communication functions between the Scheduler and the Method Request i for 1 ≤ i ≤ n.
γ(rISMi(dIMi), sISMi(dIMi)) ≜ cISMi(dIMi)
γ(rOSMi(dOMi), sOSMi(dOMi)) ≜ cOSMi(dOMi)
There are two communication functions between the Servant i and the Method Request i for 1 ≤ i ≤ n.
γ(rIMSi(dISi), sIMSi(dISi)) ≜ cIMSi(dISi)
γ(rOMSi(dOSi), sOMSi(dOSi)) ≜ cOMSi(dOSi)
Let all modules be in parallel, then the Active Object pattern
P Sh M1⋯ Mi ⋯Mn S1⋯Si⋯Sn
can be presented by the following process term.
τI(∂H(Θ(P ≬ Sh ≬M1 ≬ ⋯ ≬Mi ≬ ⋯ ≬Mn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn))) = τI(∂H(P ≬ Sh ≬
M1 ≬⋯ ≬Mi ≬⋯≬Mn ≬ S1 ≬⋯ ≬ Si ≬ ⋯≬ Sn))
Figure 68: Monitor Object pattern
where H = {rIPS(dISh), sIPS(dISh), rOPS(dOSh), sOPS(dOSh), rISMi(dIMi), sISMi(dIMi), rOSMi(dOMi), sOSMi(dOMi), rIMSi(dISi), sIMSi(dISi), rOMSi(dOSi), sOMSi(dOSi)
∣dI, dISh, dIMi, dISi, dOSi, dOMi, dOSh, dO ∈ ∆} for 1 ≤ i ≤ n,
I = {cIPS(dISh), cOPS(dOSh), cISMi(dIMi), cOSMi(dOMi), cIMSi(dISi), cOMSi(dOSi), PF1, PF2, ShF1, ShF2, MFi1, MFi2, SFi
∣dI, dISh, dIMi, dISi, dOSi, dOMi, dOSh, dO ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Active Object pattern.
Theorem 6.12 (Correctness of the Active Object pattern). The Active Object pattern τI(∂H(P ≬
Sh ≬M1 ≬ ⋯ ≬Mi ≬ ⋯ ≬Mn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) can exhibit desired external behav-
iors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that
τI(∂H(P ≬ Sh ≬ M1 ≬ ⋯ ≬ Mi ≬ ⋯ ≬ Mn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(P ≬ Sh ≬ M1 ≬ ⋯ ≬ Mi ≬ ⋯ ≬ Mn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)),
that is, the Active Object pattern τI(∂H(P ≬ Sh ≬ M1 ≬ ⋯ ≬ Mi ≬ ⋯ ≬ Mn ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.4.2 Verification of the Monitor Object Pattern
The Monitor Object pattern synchronizes concurrent method execution to ensure that only one method runs at a time. In the Monitor Object pattern, there are two classes of modules: the n Client Threads and the Monitor Object. The Client Thread i interacts with the outside through the input channel Ii and the output channel Oi; with the Monitor Object through the channel CMi for 1 ≤ i ≤ n, as illustrated in Figure 68.
The typical process is shown in Figure 69 and as follows.
Figure 69: Typical process of Monitor Object pattern
1. The Client Thread i receives the input dIi from the outside through the channel Ii (the corresponding reading action is denoted rIi(dIi)), then it processes the input and generates the input dIMi through a processing function CFi1, and it sends the input to the Monitor Object through the channel CMi (the corresponding sending action is denoted sCMi(dIMi));
2. The Monitor Object receives the input dIMi from the Client Thread i through the channel CMi (the corresponding reading action is denoted rCMi(dIMi)) for 1 ≤ i ≤ n, then processes the request and generates the output dOMi through a processing function MFi, and sends the output to the Client Thread i through the channel CMi (the corresponding sending action is denoted sCMi(dOMi));
3. The Client Thread i receives the output from the Monitor Object through the channel CMi (the corresponding reading action is denoted rCMi(dOMi)), then processes the output and generates the output dOi through a processing function CFi2 (accessing the resource), and sends the output to the outside through the channel Oi (the corresponding sending action is denoted sOi(dOi)).
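The synchronization described above — at most one client thread executing a method of the monitor at any time — is what a monitor lock provides in practice. The following is a minimal Python sketch; the class MonitorObject and its method name are hypothetical, not part of the APTC model.

```python
# Minimal Monitor Object sketch: all synchronized methods share one lock,
# so at most one client thread executes inside the monitor at a time.
import threading

class MonitorObject:
    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def increment(self):
        with self._lock:          # entry protocol: acquire the monitor lock
            self._count += 1      # critical section: at most one thread here
            return self._count    # exit protocol: lock released by `with`

monitor = MonitorObject()
threads = [threading.Thread(target=monitor.increment) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert monitor._count == 50  # no lost updates under concurrency
```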
In the following, we verify the Monitor Object pattern. We assume all data elements dIi, dOi, dIMi, dOMi for 1 ≤ i ≤ n are from a finite set ∆.
The state transitions of the Client Thread i module described by APTC are as follows.
Ci = ∑dIi∈∆(rIi(dIi) ⋅ Ci2)
Ci2 = CFi1 ⋅ Ci3
Ci3 = ∑dIMi∈∆(sCMi(dIMi) ⋅ Ci4)
Ci4 = ∑dOMi∈∆(rCMi(dOMi) ⋅ Ci5)
Ci5 = CFi2 ⋅ Ci6 (CF12%⋯%CFn2)
Ci6 = ∑dOi∈∆(sOi(dOi) ⋅ Ci)
The state transitions of the Monitor Object module described by APTC are as follows.
M = ∑dIM1,⋯,dIMn∈∆(rCM1(dIM1) ≬ ⋯ ≬ rCMn(dIMn) ⋅ M2)
M2 = MF1 ≬ ⋯ ≬ MFn ⋅ M3 (MF1%⋯%MFn)
M3 = ∑dOM1,⋯,dOMn∈∆(sCM1(dOM1) ≬ ⋯ ≬ sCMn(dOMn) ⋅ M)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication functions between the Client Thread i and the Monitor Object.
γ(rCMi(dIMi), sCMi(dIMi)) ≜ cCMi(dIMi)
γ(rCMi(dOMi), sCMi(dOMi)) ≜ cCMi(dOMi)
Let all modules be in parallel, then the Monitor Object pattern C1⋯Cn M can be presented
by the following process term.
τI(∂H(Θ(C1 ≬ ⋯ ≬ Cn ≬ M))) = τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ M))
where H = {rCMi(dIMi), sCMi(dIMi), rCMi(dOMi), sCMi(dOMi)∣dIi, dOi, dIMi, dOMi ∈ ∆},
I = {cCMi(dIMi), cCMi(dOMi), CFi1, CFi2, MFi∣dIi, dOi, dIMi, dOMi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Monitor Object pattern.
Theorem 6.13 (Correctness of the Monitor Object pattern). The Monitor Object pattern τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ M)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that
τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ M)) = ∑dI1,dO1,⋯,dIn,dOn∈∆(rI1(dI1) ∥ ⋯ ∥ rIn(dIn) ⋅ sO1(dO1) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ M)),
that is, the Monitor Object pattern τI(∂H(C1 ≬ ⋯ ≬ Cn ≬ M)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.4.3 Verification of the Half-Sync/Half-Async Pattern
The Half-Sync/Half-Async pattern decouples asynchronous and synchronous processing. It has two classes of components: the n Synchronous Services and the Asynchronous Service. The Asynchronous Service receives the inputs asynchronously from the user through the channel I, then sends the results to the Synchronous Service i through the channel ASi synchronously for 1 ≤ i ≤ n. When the Synchronous Service i receives the input from the Asynchronous Service, it generates and sends the results out to the user through the channel Oi, as illustrated in Figure 70.
The typical process of the Half-Sync/Half-Async pattern is shown in Figure 71 and as follows.
Figure 70: Half-Sync/Half-Async pattern
1. The Asynchronous Service receives the input dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), processes the input through a processing function AF, and generates the input to the Synchronous Service i (for 1 ≤ i ≤ n), which is denoted dISi; then it sends the input to the Synchronous Service i through the channel ASi (the corresponding sending action is denoted sASi(dISi));
2. The Synchronous Service i (for 1 ≤ i ≤ n) receives the input from the Asynchronous Service through the channel ASi (the corresponding reading action is denoted rASi(dISi)), processes the input through a processing function SFi, generates the output dOi, and then sends the output through the channel Oi (the corresponding sending action is denoted sOi(dOi)).
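The two steps above — an asynchronous front end handing inputs to synchronous services that block until work arrives — can be sketched in Python with a queue as the boundary between the two layers; all names here are hypothetical.

```python
# Minimal Half-Sync/Half-Async sketch: an asynchronous front end pushes
# inputs onto a queue (the queueing layer); n synchronous service threads
# block on the queue, process items, and emit outputs.
import queue
import threading

request_queue = queue.Queue()
results = queue.Queue()

def synchronous_service(worker_id):
    # SFi: block synchronously until work arrives, then process it
    while True:
        item = request_queue.get()
        if item is None:          # sentinel: shut the worker down
            break
        results.put((worker_id, item * item))

def asynchronous_service(inputs, n_workers):
    # AF: accept inputs "asynchronously" and hand them to the sync layer
    for x in inputs:
        request_queue.put(x)
    for _ in range(n_workers):
        request_queue.put(None)

workers = [threading.Thread(target=synchronous_service, args=(i,))
           for i in range(3)]
for w in workers:
    w.start()
asynchronous_service([1, 2, 3, 4, 5], n_workers=3)
for w in workers:
    w.join()
outputs = sorted(results.get()[1] for _ in range(5))
assert outputs == [1, 4, 9, 16, 25]
```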
In the following, we verify the Half-Sync/Half-Async pattern. We assume all data elements dI, dISi, dOi (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Asynchronous Service module described by APTC are as follows.
A = ∑dI∈∆(rI(dI) ⋅ A2)
A2 = AF ⋅ A3
A3 = ∑dIS1,⋯,dISn∈∆(sAS1(dIS1) ≬ ⋯ ≬ sASn(dISn) ⋅ A)
The state transitions of the Synchronous Service i described by APTC are as follows.
Si = ∑dISi∈∆(rASi(dISi) ⋅ Si2)
Si2 = SFi ⋅ Si3
Si3 = ∑dOi∈∆(sOi(dOi) ⋅ Si)
Figure 71: Typical process of Half-Sync/Half-Async pattern
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication function of the Synchronous Service i for 1 ≤ i ≤ n.
γ(rASi(dISi), sASi(dISi)) ≜ cASi(dISi)
Let all modules be in parallel, then the Half-Sync/Half-Async pattern A S1⋯Si⋯Sn can be
presented by the following process term.
τI(∂H(Θ(A ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn))) = τI(∂H(A ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn))
where H = {rASi(dISi), sASi(dISi)∣dI, dISi, dOi ∈ ∆} for 1 ≤ i ≤ n,
I = {cASi(dISi), AF, SFi∣dI, dISi, dOi ∈ ∆} for 1 ≤ i ≤ n.
Then we get the following conclusion on the Half-Sync/Half-Async pattern.
Theorem 6.14 (Correctness of the Half-Sync/Half-Async pattern). The Half-Sync/Half-Async pattern τI(∂H(A ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that
τI(∂H(A ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) = ∑dI,dO1,⋯,dOn∈∆(rI(dI) ⋅ sO1(dO1) ∥ ⋯ ∥ sOi(dOi) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(A ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)),
that is, the Half-Sync/Half-Async pattern τI(∂H(A ≬ S1 ≬ ⋯ ≬ Si ≬ ⋯ ≬ Sn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.4.4 Verification of the Leader/Followers Pattern
The Leader/Followers pattern decouples the event delivery between the event source and the event handler. There are four modules in the Leader/Followers pattern: the Handle Set, the Leader, the Follower, and the Event Handler. The Handle Set interacts with the outside through the channel I and with the Leader through the channel HL. The Leader interacts with the Follower through the channel LF. The Event Handler interacts with the Follower through the channel FE and with the outside through the channel O, as illustrated in Figure 72.
Figure 72: Leader/Followers pattern
The typical process of the Leader/Followers pattern is shown in Figure 73 and as follows.
1. The Handle Set receives the input dI from the outside through the channel I (the corresponding reading action is denoted rI(dI)), then processes the input dI through a processing function HF, and sends the processed input dIHL to the Leader through the channel HL (the corresponding sending action is denoted sHL(dIHL));
2. The Leader receives dIHL from the Handle Set through the channel HL (the corresponding reading action is denoted rHL(dIHL)), then processes the request through a processing function LF, and generates and sends the processed input dILF to the Follower through the channel LF (the corresponding sending action is denoted sLF(dILF));
3. The Follower receives the input dILF from the Leader through the channel LF (the corresponding reading action is denoted rLF(dILF)), then processes the request through a processing function FF, and generates and sends the processed input dIFE to the Event Handler through the channel FE (the corresponding sending action is denoted sFE(dIFE));
4. The Event Handler receives the input dIFE from the Follower through the channel FE (the corresponding reading action is denoted rFE(dIFE)), then processes the request and generates the response dO through a processing function EF, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
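Operationally, the Leader/Followers pattern is usually realized as a pool of threads that take turns being the leader: only the leader waits for an event, and it hands leadership to a follower before processing. A minimal Python sketch follows; the names are hypothetical, and a queue stands in for the handle set.

```python
# Minimal Leader/Followers sketch: a pool of threads takes turns being
# the leader; only the leader waits on the handle set, then releases
# leadership (the lock) so a follower can wait while it processes.
import queue
import threading

handle_set = queue.Queue()
leader_lock = threading.Lock()   # whoever holds it is the leader
processed = []
processed_lock = threading.Lock()

def pool_thread():
    while True:
        with leader_lock:              # become the leader
            event = handle_set.get()   # only the leader waits for events
        # leadership released: a follower is now the leader
        if event is None:
            handle_set.put(None)       # let the next leader see the sentinel
            return
        with processed_lock:           # this thread processes the event
            processed.append(event * 10)

threads = [threading.Thread(target=pool_thread) for _ in range(4)]
for t in threads:
    t.start()
for e in [1, 2, 3]:
    handle_set.put(e)
handle_set.put(None)
for t in threads:
    t.join()
assert sorted(processed) == [10, 20, 30]
```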
Figure 73: Typical process of Leader/Followers pattern
In the following, we verify the Leader/Followers pattern. We assume all data elements dI, dIHL, dILF, dIFE, dO are from a finite set ∆.
The state transitions of the Handle Set module described by APTC are as follows.
H = ∑dI∈∆(rI(dI) ⋅ H2)
H2 = HF ⋅ H3
H3 = ∑dIHL∈∆(sHL(dIHL) ⋅ H)
The state transitions of the Leader module described by APTC are as follows.
L = ∑dIHL∈∆(rHL(dIHL) ⋅ L2)
L2 = LF ⋅ L3
L3 = ∑dILF∈∆(sLF(dILF) ⋅ L)
The state transitions of the Follower module described by APTC are as follows.
F = ∑dILF∈∆(rLF(dILF) ⋅ F2)
F2 = FF ⋅ F3
F3 = ∑dIFE∈∆(sFE(dIFE) ⋅ F)
The state transitions of the Event Handler module described by APTC are as follows.
E = ∑dIFE∈∆(rFE(dIFE) ⋅ E2)
E2 = EF ⋅ E3
E3 = ∑dO∈∆(sO(dO) ⋅ E)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication function between the Handle Set and the Leader.
γ(rHL(dIHL), sHL(dIHL)) ≜ cHL(dIHL)
There is one communication function between the Leader and the Follower as follows.
γ(rLF(dILF), sLF(dILF)) ≜ cLF(dILF)
There is one communication function between the Follower and the Event Handler as follows.
γ(rFE(dIFE), sFE(dIFE)) ≜ cFE(dIFE)
Let all modules be in parallel, then the Leader/Followers pattern H L F E can be presented by the following process term.
τI(∂H(Θ(H ≬ L ≬ F ≬ E))) = τI(∂H(H ≬ L ≬ F ≬ E))
where H = {rHL(dIHL), sHL(dIHL), rLF(dILF), sLF(dILF), rFE(dIFE), sFE(dIFE)∣dI, dIHL, dILF, dIFE, dO ∈ ∆},
I = {cHL(dIHL), cLF(dILF), cFE(dIFE), HF, LF, FF, EF∣dI, dIHL, dILF, dIFE, dO ∈ ∆}.
Then we get the following conclusion on the Leader/Followers pattern.
Theorem 6.15 (Correctness of the Leader/Followers pattern). The Leader/Followers pattern τI(∂H(H ≬ L ≬ F ≬ E)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that
τI(∂H(H ≬ L ≬ F ≬ E)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(H ≬ L ≬ F ≬ E)),
that is, the Leader/Followers pattern τI(∂H(H ≬ L ≬ F ≬ E)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
6.4.5 Verification of the Thread-Specific Storage Pattern
The Thread-Specific Storage pattern allows the application threads to get a global access point to a local object. There are four modules in the Thread-Specific Storage pattern: the Application Thread, the Thread Specific Object Proxy, the Key Factory, and the Thread Specific Object. The Application Thread interacts with the outside through the channels I and O, and with the Thread Specific Object Proxy through the channels IAP and OAP. The Thread Specific Object Proxy interacts with the Thread Specific Object through the channels IPO and OPO, and with the Key Factory through the channels IPF and OPF, as illustrated in Figure 74.
The typical process of the Thread-Specific Storage pattern is shown in Figure 75 and as follows.
Figure 74: Thread-Specific Storage pattern
1. The Application Thread receives the input dI from the outside through the channel I (the corresponding reading action is denoted rI(dI)), then processes the input dI through a processing function AF1, and sends the processed input dIP to the Thread Specific Object Proxy through the channel IAP (the corresponding sending action is denoted sIAP(dIP));
2. The Thread Specific Object Proxy receives dIP from the Application Thread through the channel IAP (the corresponding reading action is denoted rIAP(dIP)), then processes the request through a processing function PF1, and generates and sends the processed input dIF to the Key Factory through the channel IPF (the corresponding sending action is denoted sIPF(dIF));
3. The Key Factory receives the input dIF from the Thread Specific Object Proxy through the channel IPF (the corresponding reading action is denoted rIPF(dIF)), then processes the request and generates the result dOF through a processing function FF, and sends the result to the Thread Specific Object Proxy through the channel OPF (the corresponding sending action is denoted sOPF(dOF));
4. The Thread Specific Object Proxy receives the result from the Key Factory through the channel OPF (the corresponding reading action is denoted rOPF(dOF)), then processes the result and generates the request dIO to the Thread Specific Object through a processing function PF2, and sends the request to the Thread Specific Object through the channel IPO (the corresponding sending action is denoted sIPO(dIO));
5. The Thread Specific Object receives the input dIO from the Thread Specific Object Proxy through the channel IPO (the corresponding reading action is denoted rIPO(dIO)), then processes the input through a processing function OF, and generates and sends the response dOO to the Thread Specific Object Proxy through the channel OPO (the corresponding sending action is denoted sOPO(dOO));
Figure 75: Typical process of Thread-Specific Storage pattern
6. The Thread Specific Object Proxy receives the response dOO from the Thread Specific Object through the channel OPO (the corresponding reading action is denoted rOPO(dOO)), then processes the response through a processing function PF3, and generates and sends the response dOP (the corresponding sending action is denoted sOAP(dOP));
7. The Application Thread receives the response dOP from the Thread Specific Object Proxy through the channel OAP (the corresponding reading action is denoted rOAP(dOP)), then processes the request and generates the response dO through a processing function AF2, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
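The "global access point to a local object" described above maps directly onto thread-local storage facilities. The following is a minimal Python sketch using threading.local; the variable names are hypothetical.

```python
# Minimal Thread-Specific Storage sketch: every thread goes through the
# same globally visible access point (tss), yet reads and writes its own
# private, thread-local copy of the data.
import threading

tss = threading.local()          # the globally visible access point
observed = {}
observed_lock = threading.Lock()

def application_thread(name, value):
    tss.data = value             # stored in this thread's slot only
    # later calls in the same thread see the same thread-local value
    with observed_lock:
        observed[name] = tss.data

threads = [threading.Thread(target=application_thread, args=(f"t{i}", i))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert observed == {"t0": 0, "t1": 1, "t2": 2, "t3": 3, "t4": 4}
```

Despite all threads assigning to the same name `tss.data`, no thread ever observes another thread's value.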
In the following, we verify the Thread-Specific Storage pattern. We assume all data elements dI, dIP, dIF, dIO, dOP, dOF, dOO, dO are from a finite set ∆.
The state transitions of the Application Thread module described by APTC are as follows.
A = ∑dI∈∆(rI(dI) ⋅ A2)
A2 = AF1 ⋅ A3
A3 = ∑dIP∈∆(sIAP(dIP) ⋅ A4)
A4 = ∑dOP∈∆(rOAP(dOP) ⋅ A5)
A5 = AF2 ⋅ A6
A6 = ∑dO∈∆(sO(dO) ⋅ A)
The state transitions of the Thread Specific Object Proxy module described by APTC are as follows.
P = ∑dIP∈∆(rIAP(dIP) ⋅ P2)
P2 = PF1 ⋅ P3
P3 = ∑dIF∈∆(sIPF(dIF) ⋅ P4)
P4 = ∑dOF∈∆(rOPF(dOF) ⋅ P5)
P5 = PF2 ⋅ P6
P6 = ∑dIO∈∆(sIPO(dIO) ⋅ P7)
P7 = ∑dOO∈∆(rOPO(dOO) ⋅ P8)
P8 = PF3 ⋅ P9
P9 = ∑dOP∈∆(sOAP(dOP) ⋅ P)
The state transitions of the Key Factory module described by APTC are as follows.
F = ∑dIF∈∆(rIPF(dIF) ⋅ F2)
F2 = FF ⋅ F3
F3 = ∑dOF∈∆(sOPF(dOF) ⋅ F)
The state transitions of the Thread Specific Object module described by APTC are as follows.
O = ∑dIO∈∆(rIPO(dIO) ⋅ O2)
O2 = OF ⋅ O3
O3 = ∑dOO∈∆(sOPO(dOO) ⋅ O)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication functions between the Application Thread and the Thread Specific Object Proxy.
γ(rIAP(dIP), sIAP(dIP)) ≜ cIAP(dIP)
γ(rOAP(dOP), sOAP(dOP)) ≜ cOAP(dOP)
There are two communication functions between the Thread Specific Object Proxy and the Key Factory as follows.
γ(rIPF(dIF), sIPF(dIF)) ≜ cIPF(dIF)
γ(rOPF(dOF), sOPF(dOF)) ≜ cOPF(dOF)
There are two communication functions between the Thread Specific Object Proxy and the Thread Specific Object as follows.
γ(rIPO(dIO), sIPO(dIO)) ≜ cIPO(dIO)
γ(rOPO(dOO), sOPO(dOO)) ≜ cOPO(dOO)
Let all modules be in parallel, then the Thread-Specific Storage pattern A P F O can be
presented by the following process term.
τI(∂H(Θ(A ≬ P ≬ F ≬ O))) = τI(∂H(A ≬ P ≬ F ≬ O))
where H = {rIAP(dIP), sIAP(dIP), rOAP(dOP), sOAP(dOP), rIPF(dIF), sIPF(dIF), rOPF(dOF), sOPF(dOF), rIPO(dIO), sIPO(dIO), rOPO(dOO), sOPO(dOO)∣dI, dIP, dIF, dIO, dOP, dOF, dOO, dO ∈ ∆},
I = {cIAP(dIP), cOAP(dOP), cIPF(dIF), cOPF(dOF), cIPO(dIO), cOPO(dOO), AF1, AF2, PF1, PF2, PF3, FF, OF∣dI, dIP, dIF, dIO, dOP, dOF, dOO, dO ∈ ∆}.
Then we get the following conclusion on the Thread-Specific Storage pattern.
Theorem 6.16 (Correctness of the Thread-Specific Storage pattern). The Thread-Specific Storage pattern τI(∂H(A ≬ P ≬ F ≬ O)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that
τI(∂H(A ≬ P ≬ F ≬ O)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(A ≬ P ≬ F ≬ O)),
that is, the Thread-Specific Storage pattern τI(∂H(A ≬ P ≬ F ≬ O)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
Figure 76: Lookup pattern
7 Verification of Patterns for Resource Management
Patterns for resource management deal with the acquisition, lifecycle, and release of resources, and can be used in both higher-level and lower-level systems and applications.
In this chapter, we verify patterns for resource management. In section 7.1, we verify patterns
related to resource acquisition. We verify patterns for resource lifecycle in section 7.2 and
patterns for resource release in section 7.3.
7.1 Resource Acquisition
In this subsection, we verify patterns for resource acquisition, including the Lookup pattern, the
Lazy Acquisition pattern, the Eager Acquisition pattern, and the Partial Acquisition pattern.
7.1.1 Verification of the Lookup Pattern
The Lookup pattern uses a mediating lookup service to find and access resources. There are four modules in the Lookup pattern: the Resource User, the Resource Provider, the Lookup Service, and the Resource. The Resource User interacts with the outside through the channels I and O; with the Resource Provider through the channels IUP and OUP; with the Resource through the channels IUR and OUR; and with the Lookup Service through the channels IUS and OUS, as illustrated in Figure 76.
The typical process of the Lookup pattern is shown in Figure 77 and as follows.
Figure 77: Typical process of Lookup pattern
1. The Resource User receives the input dI from the outside through the channel I (the corresponding reading action is denoted rI(dI)), then processes the input dI through a processing function UF1, and sends the processed input dIS to the Lookup Service through the channel IUS (the corresponding sending action is denoted sIUS(dIS));
2. The Lookup Service receives dIS from the Resource User through the channel IUS (the corresponding reading action is denoted rIUS(dIS)), then processes the request through a processing function SF, and generates and sends the processed output dOS to the Resource User through the channel OUS (the corresponding sending action is denoted sOUS(dOS));
3. The Resource User receives the output dOS from the Lookup Service through the channel OUS (the corresponding reading action is denoted rOUS(dOS)), then processes the output and generates the input dIP through a processing function UF2, and sends the input to the Resource Provider through the channel IUP (the corresponding sending action is denoted sIUP(dIP));
4. The Resource Provider receives the input from the Resource User through the channel IUP (the corresponding reading action is denoted rIUP(dIP)), then processes the input and generates the output dOP through a processing function PF, and sends the output to the Resource User through the channel OUP (the corresponding sending action is denoted sOUP(dOP));
5. The Resource User receives the output dOP from the Resource Provider through the channel OUP (the corresponding reading action is denoted rOUP(dOP)), then processes the input through a processing function UF3, and generates and sends the input dIR to the Resource through the channel IUR (the corresponding sending action is denoted sIUR(dIR));
6. The Resource receives the input dIR from the Resource User through the channel IUR (the corresponding reading action is denoted rIUR(dIR)), then processes the input through a processing function RF, and generates and sends the response dOR (the corresponding sending action is denoted sOUR(dOR));
7. The Resource User receives the response dOR from the Resource through the channel OUR (the corresponding reading action is denoted rOUR(dOR)), then processes the response and generates the response dO through a processing function UF4, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
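The mediation described in steps 1–4 — resolve a name through the lookup service, then acquire the resource from the provider it names — can be sketched in Python; all class names here are hypothetical.

```python
# Minimal Lookup sketch: a lookup service maps resource names to
# providers; the resource user first consults the lookup service, then
# obtains the resource from the provider it was referred to.
class Resource:
    def __init__(self, payload):
        self.payload = payload

class ResourceProvider:
    def acquire(self, name):
        return Resource(payload=f"data-for-{name}")

class LookupService:
    def __init__(self):
        self._registry = {}

    def register(self, name, provider):
        self._registry[name] = provider

    def lookup(self, name):
        return self._registry[name]        # SF: resolve name -> provider

class ResourceUser:
    def __init__(self, lookup_service):
        self._lookup = lookup_service

    def use(self, name):
        provider = self._lookup.lookup(name)   # steps 1-3: find the provider
        resource = provider.acquire(name)      # steps 4-6: obtain the resource
        return resource.payload                # step 7: produce the output

service = LookupService()
service.register("printer", ResourceProvider())
user = ResourceUser(service)
assert user.use("printer") == "data-for-printer"
```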
In the following, we verify the Lookup pattern. We assume all data elements dI, dIS, dIP, dIR, dOS, dOP, dOR, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.
U = ∑dI∈∆(rI(dI) ⋅ U2)
U2 = UF1 ⋅ U3
U3 = ∑dIS∈∆(sIUS(dIS) ⋅ U4)
U4 = ∑dOS∈∆(rOUS(dOS) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dIP∈∆(sIUP(dIP) ⋅ U7)
U7 = ∑dOP∈∆(rOUP(dOP) ⋅ U8)
U8 = UF3 ⋅ U9
U9 = ∑dIR∈∆(sIUR(dIR) ⋅ U10)
U10 = ∑dOR∈∆(rOUR(dOR) ⋅ U11)
U11 = UF4 ⋅ U12
U12 = ∑dO∈∆(sO(dO) ⋅ U)
The state transitions of the Resource Provider module described by APTC are as follows.
P = ∑dIP∈∆(rIUP(dIP) ⋅ P2)
P2 = PF ⋅ P3
P3 = ∑dOP∈∆(sOUP(dOP) ⋅ P)
The state transitions of the Lookup Service module described by APTC are as follows.
S = ∑dIS∈∆(rIUS(dIS) ⋅ S2)
S2 = SF ⋅ S3
S3 = ∑dOS∈∆(sOUS(dOS) ⋅ S)
The state transitions of the Resource module described by APTC are as follows.
R = ∑dIR∈∆(rIUR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOUR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication functions between the Resource User and the Resource Provider.
γ(rIUP(dIP), sIUP(dIP)) ≜ cIUP(dIP)
γ(rOUP(dOP), sOUP(dOP)) ≜ cOUP(dOP)
There are two communication functions between the Resource User and the Lookup Service as follows.
γ(rIUS(dIS), sIUS(dIS)) ≜ cIUS(dIS)
γ(rOUS(dOS), sOUS(dOS)) ≜ cOUS(dOS)
There are two communication functions between the Resource User and the Resource as follows.
γ(rIUR(dIR), sIUR(dIR)) ≜ cIUR(dIR)
γ(rOUR(dOR), sOUR(dOR)) ≜ cOUR(dOR)
Let all modules be in parallel, then the Lookup pattern U S P R can be presented by the
following process term.
τI(∂H(Θ(U ≬ S ≬ P ≬ R))) = τI(∂H(U ≬ S ≬ P ≬ R))
where H = {rIUP(dIP), sIUP(dIP), rOUP(dOP), sOUP(dOP), rIUS(dIS), sIUS(dIS), rOUS(dOS), sOUS(dOS), rIUR(dIR), sIUR(dIR), rOUR(dOR), sOUR(dOR)∣dI, dIP, dIS, dIR, dOP, dOS, dOR, dO ∈ ∆},
I = {cIUP(dIP), cOUP(dOP), cIUS(dIS), cOUS(dOS), cIUR(dIR), cOUR(dOR), UF1, UF2, UF3, UF4, PF, SF, RF∣dI, dIP, dIS, dIR, dOP, dOS, dOR, dO ∈ ∆}.
Then we get the following conclusion on the Lookup pattern.
Theorem 7.1 (Correctness of the Lookup pattern). The Lookup pattern τI(∂H(U ≬ S ≬ P ≬ R)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that
τI(∂H(U ≬ S ≬ P ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ S ≬ P ≬ R)),
that is, the Lookup pattern τI(∂H(U ≬ S ≬ P ≬ R)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
7.1.2 Verification of the Lazy Acquisition Pattern
The Lazy Acquisition pattern defers the acquisition of resources to the latest possible time. There are four modules in the Lazy Acquisition pattern: the Resource User, the Resource Provider, the Resource Proxy, and the Resource. The Resource User interacts with the outside through the channels I and O, and with the Resource Proxy through the channels IUP and OUP. The Resource Proxy interacts with the Resource through the channels IPR and OPR, and with the Resource Provider through the channels IPP and OPP, as illustrated in Figure 78.
The typical process of the Lazy Acquisition pattern is shown in Figure 79 and as follows.
1. The Resource User receives the input dI from the outside through the channel I (the
corresponding reading action is denoted rI(dI)), then processes the input dI through a
processing function UF1, and sends the processed input dIP to the Resource Proxy through
the channel IUP (the corresponding sending action is denoted sIUP(dIP ));
Figure 78: Lazy Acquisition pattern
2. The Resource Proxy receives dIP from the Resource User through the channel IUP (the corresponding reading action is denoted rIUP(dIP)), then processes the request through a processing function PF1, and generates and sends the processed input dIRP to the Resource Provider through the channel IPP (the corresponding sending action is denoted sIPP(dIRP));
3. The Resource Provider receives the input dIRP from the Resource Proxy through the channel IPP (the corresponding reading action is denoted rIPP(dIRP)), then processes the input and generates the output dORP through a processing function RPF, and sends the output to the Resource Proxy through the channel OPP (the corresponding sending action is denoted sOPP(dORP));
4. The Resource Proxy receives the output from the Resource Provider through the channel OPP (the corresponding reading action is denoted rOPP(dORP)), then processes the result and generates the input dIR to the Resource through a processing function PF2, and sends the input to the Resource through the channel IPR (the corresponding sending action is denoted sIPR(dIR));
5. The Resource receives the input dIR from the Resource Proxy through the channel IPR (the corresponding reading action is denoted rIPR(dIR)), then processes the input through a processing function RF, and generates and sends the output dOR to the Resource Proxy through the channel OPR (the corresponding sending action is denoted sOPR(dOR));
6. The Resource Proxy receives the output dOR from the Resource through the channel OPR (the corresponding reading action is denoted rOPR(dOR)), then processes the response through a processing function PF3, and generates and sends the response dOP (the corresponding sending action is denoted sOUP(dOP));
7. The Resource User receives the response dOP from the Resource Proxy through the channel OUP (the corresponding reading action is denoted rOUP(dOP)), then processes the response and generates the response dO through a processing function UF2, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
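The deferral described above can be sketched with a proxy that acquires the real resource only on first use. This is a minimal Python sketch; the class and variable names are hypothetical.

```python
# Minimal Lazy Acquisition sketch: the proxy stands in for the resource
# and defers the expensive acquisition until the first actual use.
acquisitions = []   # records when the real resource is acquired

class Resource:
    def __init__(self):
        acquisitions.append("acquired")   # the expensive step

    def read(self):
        return "resource-data"

class LazyResourceProxy:
    def __init__(self):
        self._resource = None             # nothing acquired yet

    def read(self):
        if self._resource is None:        # acquire at the latest time
            self._resource = Resource()
        return self._resource.read()

proxy = LazyResourceProxy()
assert acquisitions == []                 # creating the proxy acquires nothing
assert proxy.read() == "resource-data"
assert acquisitions == ["acquired"]       # acquired exactly once, on first use
proxy.read()
assert acquisitions == ["acquired"]       # reuse, no second acquisition
```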
Figure 79: Typical process of Lazy Acquisition pattern
In the following, we verify the Lazy Acquisition pattern. We assume all data elements dI, dIP, dIRP, dIR, dOP, dORP, dOR, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.
U = ∑dI∈∆(rI(dI) ⋅ U2)
U2 = UF1 ⋅ U3
U3 = ∑dIP∈∆(sIUP(dIP) ⋅ U4)
U4 = ∑dOP∈∆(rOUP(dOP) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dO∈∆(sO(dO) ⋅ U)
The state transitions of the Resource Proxy module described by APTC are as follows.
P = ∑dIP∈∆(rIUP(dIP) ⋅ P2)
P2 = PF1 ⋅ P3
P3 = ∑dIRP∈∆(sIPP(dIRP) ⋅ P4)
P4 = ∑dORP∈∆(rOPP(dORP) ⋅ P5)
P5 = PF2 ⋅ P6
P6 = ∑dIR∈∆(sIPR(dIR) ⋅ P7)
P7 = ∑dOR∈∆(rOPR(dOR) ⋅ P8)
P8 = PF3 ⋅ P9
P9 = ∑dOP∈∆(sOUP(dOP) ⋅ P)
The state transitions of the Resource Provider module described by APTC are as follows.
RP = ∑dIRP∈∆(rIPP(dIRP) ⋅ RP2)
RP2 = RPF ⋅ RP3
RP3 = ∑dORP∈∆(sOPP(dORP) ⋅ RP)
The state transitions of the Resource module described by APTC are as follows.
R = ∑dIR∈∆(rIPR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOPR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, a deadlock δ will be caused. We define the following communication functions between the Resource User and the Resource Proxy.
γ(rIUP(dIP), sIUP(dIP)) ≜ cIUP(dIP)
γ(rOUP(dOP), sOUP(dOP)) ≜ cOUP(dOP)
There are two communication functions between the Resource Provider and the Resource Proxy as follows.
γ(rIPP(dIRP), sIPP(dIRP)) ≜ cIPP(dIRP)
γ(rOPP(dORP), sOPP(dORP)) ≜ cOPP(dORP)
There are two communication functions between the Resource Proxy and the Resource as follows.
γ(rIPR(dIR), sIPR(dIR)) ≜ cIPR(dIR)
γ(rOPR(dOR), sOPR(dOR)) ≜ cOPR(dOR)
Let all modules be in parallel, then the Lazy Acquisition pattern U P RP R can be presented by the following process term.
τI(∂H(Θ(U ≬ P ≬ RP ≬ R))) = τI(∂H(U ≬ P ≬ RP ≬ R))
where H = {rIUP(dIP), sIUP(dIP), rOUP(dOP), sOUP(dOP), rIPP(dIRP), sIPP(dIRP), rOPP(dORP), sOPP(dORP), rIPR(dIR), sIPR(dIR), rOPR(dOR), sOPR(dOR)∣dI, dIP, dIRP, dIR, dOP, dORP, dOR, dO ∈ ∆},
I = {cIUP(dIP), cOUP(dOP), cIPP(dIRP), cOPP(dORP), cIPR(dIR), cOPR(dOR), UF1, UF2, PF1, PF2, PF3, RPF, RF∣dI, dIP, dIRP, dIR, dOP, dORP, dOR, dO ∈ ∆}.
Then we get the following conclusion on the Lazy Acquisition pattern.
Theorem 7.2 (Correctness of the Lazy Acquisition pattern). The Lazy Acquisition pattern τI(∂H(U ≬ P ≬ RP ≬ R)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that
τI(∂H(U ≬ P ≬ RP ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ P ≬ RP ≬ R)),
that is, the Lazy Acquisition pattern τI(∂H(U ≬ P ≬ RP ≬ R)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
Figure 80: Eager Acquisition pattern
7.1.3 Verification of the Eager Acquisition Pattern
The Eager Acquisition pattern acquires the resources eagerly. There are four modules in the
Eager Acquisition pattern: the Resource User, the Resource Provider, the Resource Proxy, and
the Resource. The Resource User interacts with the outside through the channels I and O; with
the Resource Proxy through the channel IUP and OUP . The Resource Proxy interacts with the
Resource through the channels IPR and OPR; with the Resource Provider through the channels
IPP and OPP . As illustrates in Figure 80.
The typical process of the Eager Acquisition pattern is shown in Figure 81 and as follows.
1. The Resource User receives the input dI from the outside through the channel I (the
corresponding reading action is denoted rI(dI)), then processes the input dI through a
processing function UF1, and sends the processed input dIP to the Resource Proxy through
the channel IUP (the corresponding sending action is denoted sIUP(dIP ));
2. The Resource Proxy receives dIP from the Resource User through the channel IUP (the
corresponding reading action is denoted rIUP(dIP )), then processes the request through
a processing function PF1, generates and sends the processed input dIRPto the Re-
source Provider through the channel IPP (the corresponding sending action is denoted
sIPP(dIRP
));
3. The Resource Provider receives the input dIRPfrom the Resource Proxy through the
channel IPP (the corresponding reading action is denoted rIPP(dIRP
)), then processes the
input and generates the output dORPthrough a processing function RPF , and sends the
output to the Resource Proxy through the channel OPP (the corresponding sending action
is denoted sOPP(dORP
));
4. The Resource Proxy receives the output from the Resource Provider through the channel OPP (the corresponding reading action is denoted rOPP(dORP)), then processes the result and generates the input dIR to the Resource through a processing function PF2, and sends the input to the Resource through the channel IPR (the corresponding sending action is denoted sIPR(dIR));
Figure 81: Typical process of Eager Acquisition pattern
5. The Resource receives the input dIR from the Resource Proxy through the channel IPR (the corresponding reading action is denoted rIPR(dIR)), then processes the input through a processing function RF, generates and sends the output dOR to the Resource Proxy through the channel OPR (the corresponding sending action is denoted sOPR(dOR));
6. The Resource Proxy receives the output dOR from the Resource through the channel OPR (the corresponding reading action is denoted rOPR(dOR)), then processes the response through a processing function PF3, generates and sends the response dOP to the Resource User through the channel OUP (the corresponding sending action is denoted sOUP(dOP));
7. The Resource User receives the response dOP from the Resource Proxy through the channel OUP (the corresponding reading action is denoted rOUP(dOP)), then processes the response and generates the response dO through a processing function UF2, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
In the following, we verify the Eager Acquisition pattern. We assume all data elements dI, dIP, dIRP, dIR, dOP, dORP, dOR, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.
U = ∑dI∈∆(rI(dI) ⋅ U2)
U2 = UF1 ⋅ U3
U3 = ∑dIP∈∆(sIUP(dIP) ⋅ U4)
U4 = ∑dOP∈∆(rOUP(dOP) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dO∈∆(sO(dO) ⋅ U)

The state transitions of the Resource Proxy module described by APTC are as follows.
P = ∑dIP∈∆(rIUP(dIP) ⋅ P2)
P2 = PF1 ⋅ P3
P3 = ∑dIRP∈∆(sIPP(dIRP) ⋅ P4)
P4 = ∑dORP∈∆(rOPP(dORP) ⋅ P5)
P5 = PF2 ⋅ P6
P6 = ∑dIR∈∆(sIPR(dIR) ⋅ P7)
P7 = ∑dOR∈∆(rOPR(dOR) ⋅ P8)
P8 = PF3 ⋅ P9
P9 = ∑dOP∈∆(sOUP(dOP) ⋅ P)
The state transitions of the Resource Provider module described by APTC are as follows.
RP = ∑dIRP∈∆(rIPP(dIRP) ⋅ RP2)
RP2 = RPF ⋅ RP3
RP3 = ∑dORP∈∆(sOPP(dORP) ⋅ RP)
The state transitions of the Resource module described by APTC are as follows.
R = ∑dIR∈∆(rIPR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOPR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions between the Resource User and the Resource Proxy.
γ(rIUP(dIP), sIUP(dIP)) ≜ cIUP(dIP)
γ(rOUP(dOP), sOUP(dOP)) ≜ cOUP(dOP)
There are two communication functions between the Resource Provider and the Resource Proxy
as follows.
γ(rIPP(dIRP), sIPP(dIRP)) ≜ cIPP(dIRP)
γ(rOPP(dORP), sOPP(dORP)) ≜ cOPP(dORP)
There are two communication functions between the Resource Proxy and the Resource as follows.
γ(rIPR(dIR), sIPR(dIR)) ≜ cIPR(dIR)
γ(rOPR(dOR), sOPR(dOR)) ≜ cOPR(dOR)
Let all modules be in parallel, then the Eager Acquisition pattern U ≬ P ≬ RP ≬ R can be presented by the following process term.
τI(∂H(Θ(U ≬ P ≬ RP ≬ R))) = τI(∂H(U ≬ P ≬ RP ≬ R))
Figure 82: Partial Acquisition pattern
where H = {rIUP(dIP), sIUP(dIP), rOUP(dOP), sOUP(dOP), rIPP(dIRP), sIPP(dIRP), rOPP(dORP), sOPP(dORP), rIPR(dIR), sIPR(dIR), rOPR(dOR), sOPR(dOR) ∣ dI, dIP, dIRP, dIR, dOP, dORP, dOR, dO ∈ ∆},
I = {cIUP(dIP), cOUP(dOP), cIPP(dIRP), cOPP(dORP), cIPR(dIR), cOPR(dOR), UF1, UF2, PF1, PF2, PF3, RPF, RF ∣ dI, dIP, dIRP, dIR, dOP, dORP, dOR, dO ∈ ∆}.

Then we get the following conclusion on the Eager Acquisition pattern.
Theorem 7.3 (Correctness of the Eager Acquisition pattern). The Eager Acquisition pattern
τI(∂H(U ≬ P ≬ RP ≬ R)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(U ≬ P ≬ RP ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ P ≬ RP ≬ R)),

that is, the Eager Acquisition pattern τI(∂H(U ≬ P ≬ RP ≬ R)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
7.1.4 Verification of the Partial Acquisition Pattern
The Partial Acquisition pattern acquires resources partially, in multiple stages, to optimize
resource management. There are three modules in the Partial Acquisition pattern: the Resource
User, the Resource Provider, and the Resource. The Resource User interacts with the outside
through the channels I and O; with the Resource Provider through the channels IUP and OUP;
and with the Resource through the channels IUR and OUR, as illustrated in Figure 82.
The typical process of the Partial Acquisition pattern is shown in Figure 83 and as follows.
1. The Resource User receives the input dI from the outside through the channel I (the
corresponding reading action is denoted rI(dI)), then processes the input dI through a
processing function UF1, and generates the input dIP , and sends the input to the Resource
Provider through the channel IUP (the corresponding sending action is denoted sIUP(dIP ));
Figure 83: Typical process of Partial Acquisition pattern
2. The Resource Provider receives the input from the Resource User through the channel IUP (the corresponding reading action is denoted rIUP(dIP)), then processes the input and generates the output dOP through a processing function PF, and sends the output to the Resource User through the channel OUP (the corresponding sending action is denoted sOUP(dOP));
3. The Resource User receives the output dOP from the Resource Provider through the channel OUP (the corresponding reading action is denoted rOUP(dOP)), then processes the output through a processing function UF2, generates and sends the input dIR to the Resource through the channel IUR (the corresponding sending action is denoted sIUR(dIR));
4. The Resource receives the input dIR from the Resource User through the channel IUR (the corresponding reading action is denoted rIUR(dIR)), then processes the input through a processing function RF, generates and sends the response dOR to the Resource User through the channel OUR (the corresponding sending action is denoted sOUR(dOR));
5. The Resource User receives the response dOR from the Resource through the channel OUR (the corresponding reading action is denoted rOUR(dOR)), then processes the response and generates the response dO through a processing function UF3, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
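The staging that gives the pattern its name can be sketched in Python. This is an illustrative sketch only (all names are ours): the first stage contacts the Resource Provider immediately, while the Resource itself is touched only when the returned thunk is forced. Note that the verified model above runs both stages in one sequence; the deferral is the optimization the pattern aims at.

```python
def partial_acquisition(d_i: str):
    """First stage acquires from the Provider; the second stage is deferred."""
    d_ip = f"UF1({d_i})"  # step 1: Resource User prepares the request
    d_op = f"PF({d_ip})"  # step 2: Resource Provider answers (stage one)

    def second_stage() -> str:
        # steps 3-5 run only when the resource is actually needed
        d_ir = f"UF2({d_op})"  # step 3: derive the Resource request
        d_or = f"RF({d_ir})"   # step 4: Resource processes the request
        return f"UF3({d_or})"  # step 5: produce the final output

    return second_stage
```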
In the following, we verify the Partial Acquisition pattern. We assume all data elements dI, dIP, dIR, dOP, dOR, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.
U = ∑dI∈∆(rI(dI) ⋅ U2)
U2 = UF1 ⋅ U3
U3 = ∑dIP∈∆(sIUP(dIP) ⋅ U4)
U4 = ∑dOP∈∆(rOUP(dOP) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dIR∈∆(sIUR(dIR) ⋅ U7)
U7 = ∑dOR∈∆(rOUR(dOR) ⋅ U8)
U8 = UF3 ⋅ U9
U9 = ∑dO∈∆(sO(dO) ⋅ U)

The state transitions of the Resource Provider module described by APTC are as follows.
P = ∑dIP∈∆(rIUP(dIP) ⋅ P2)
P2 = PF ⋅ P3
P3 = ∑dOP∈∆(sOUP(dOP) ⋅ P)
The state transitions of the Resource module described by APTC are as follows.
R = ∑dIR∈∆(rIUR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOUR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions between the Resource User and the Resource Provider.
γ(rIUP(dIP), sIUP(dIP)) ≜ cIUP(dIP)
γ(rOUP(dOP), sOUP(dOP)) ≜ cOUP(dOP)
There are two communication functions between the Resource User and the Resource as follows.
γ(rIUR(dIR), sIUR(dIR)) ≜ cIUR(dIR)
γ(rOUR(dOR), sOUR(dOR)) ≜ cOUR(dOR)
Let all modules be in parallel, then the Partial Acquisition pattern U ≬ P ≬ R can be presented by the following process term.
τI(∂H(Θ(U ≬ P ≬ R))) = τI(∂H(U ≬ P ≬ R))

where H = {rIUP(dIP), sIUP(dIP), rOUP(dOP), sOUP(dOP), rIUR(dIR), sIUR(dIR), rOUR(dOR), sOUR(dOR) ∣ dI, dIP, dIR, dOP, dOR, dO ∈ ∆},
I = {cIUP(dIP), cOUP(dOP), cIUR(dIR), cOUR(dOR), UF1, UF2, UF3, PF, RF ∣ dI, dIP, dIR, dOP, dOR, dO ∈ ∆}.

Then we get the following conclusion on the Partial Acquisition pattern.
Theorem 7.4 (Correctness of the Partial Acquisition pattern). The Partial Acquisition pattern
τI(∂H(U ≬ P ≬ R)) can exhibit desired external behaviors.
Figure 84: Caching pattern
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(U ≬ P ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ P ≬ R)),

that is, the Partial Acquisition pattern τI(∂H(U ≬ P ≬ R)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
7.2 Resource Lifecycle
In this subsection, we verify patterns related to the resource lifecycle, including the Caching pattern, the Pooling pattern, the Coordinator pattern, and the Resource Lifecycle Manager pattern.
7.2.1 Verification of the Caching Pattern
The Caching pattern caches resources to avoid re-acquiring them.
There are four modules in the Caching pattern: the Resource User, the Resource Provider, the
Resource Cache, and the Resource. The Resource User interacts with the outside through the
channels I and O; with the Resource Provider through the channels IUP and OUP; with the
Resource through the channels IUR and OUR; and with the Resource Cache through the channels
IUC and OUC, as illustrated in Figure 84.
The typical process of the Caching pattern is shown in Figure 85 and as follows.
1. The Resource User receives the input dI from the outside through the channel I (the
corresponding reading action is denoted rI(dI)), then processes the input and generates
the input dIP through a processing function UF1, and sends the input to the Resource
Provider through the channel IUP (the corresponding sending action is denoted sIUP(dIP ));
Figure 85: Typical process of Caching pattern

2. The Resource Provider receives the input from the Resource User through the channel IUP (the corresponding reading action is denoted rIUP(dIP)), then processes the input and generates the output dOP through a processing function PF, and sends the output to the Resource User through the channel OUP (the corresponding sending action is denoted sOUP(dOP));
3. The Resource User receives the output dOP from the Resource Provider through the channel OUP (the corresponding reading action is denoted rOUP(dOP)), then processes the output through a processing function UF2, generates and sends the input dIR to the Resource through the channel IUR (the corresponding sending action is denoted sIUR(dIR));
4. The Resource receives the input dIR from the Resource User through the channel IUR (the corresponding reading action is denoted rIUR(dIR)), then processes the input through a processing function RF, generates and sends the response dOR to the Resource User through the channel OUR (the corresponding sending action is denoted sOUR(dOR));
5. The Resource User receives the response dOR from the Resource through the channel OUR (the corresponding reading action is denoted rOUR(dOR)), then processes the response dOR through a processing function UF3, and sends the processed input dIC to the Resource Cache through the channel IUC (the corresponding sending action is denoted sIUC(dIC));
6. The Resource Cache receives dIC from the Resource User through the channel IUC (the corresponding reading action is denoted rIUC(dIC)), then processes the request through a processing function CF, generates and sends the processed output dOC to the Resource User through the channel OUC (the corresponding sending action is denoted sOUC(dOC));
7. The Resource User receives the output dOC from the Resource Cache through the channel OUC (the corresponding reading action is denoted rOUC(dOC)), then processes the response and generates the response dO through a processing function UF4, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
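The seven steps can be sketched in Python. The sketch is illustrative only (all names are ours) and adds the cache-hit branch that motivates the pattern; the verified model above always runs the full acquisition round trip.

```python
def make_caching_user():
    """Return a request function backed by a Resource Cache (a dict)."""
    cache = {}  # the Resource Cache keeps results to avoid re-acquisition

    def request(d_i: str) -> str:
        if d_i not in cache:              # only acquire on a cache miss
            d_ip = f"UF1({d_i})"          # step 1: prepare the Provider request
            d_op = f"PF({d_ip})"          # step 2: Resource Provider answers
            d_ir = f"UF2({d_op})"         # step 3: derive the Resource request
            d_or = f"RF({d_ir})"          # step 4: Resource processes it
            d_ic = f"UF3({d_or})"         # step 5: hand the result to the cache
            cache[d_i] = f"CF({d_ic})"    # step 6: the cache stores d_OC
        d_oc = cache[d_i]
        return f"UF4({d_oc})"             # step 7: produce the output
    return request
```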
In the following, we verify the Caching pattern. We assume all data elements dI, dIC, dIP, dIR, dOC, dOP, dOR, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.
U = ∑dI∈∆(rI(dI) ⋅U2)
U2 = UF1 ⋅ U3
U3 = ∑dIP∈∆(sIUP(dIP) ⋅ U4)
U4 = ∑dOP∈∆(rOUP(dOP) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dIR∈∆(sIUR(dIR) ⋅ U7)
U7 = ∑dOR∈∆(rOUR(dOR) ⋅ U8)
U8 = UF3 ⋅ U9
U9 = ∑dIC∈∆(sIUC(dIC) ⋅ U10)
U10 = ∑dOC∈∆(rOUC(dOC) ⋅ U11)
U11 = UF4 ⋅ U12
U12 = ∑dO∈∆(sO(dO) ⋅ U)

The state transitions of the Resource Provider module described by APTC are as follows.
P = ∑dIP∈∆(rIUP(dIP) ⋅ P2)
P2 = PF ⋅ P3
P3 = ∑dOP∈∆(sOUP(dOP) ⋅ P)
The state transitions of the Resource Cache module described by APTC are as follows.
C = ∑dIC∈∆(rIUC(dIC) ⋅ C2)
C2 = CF ⋅ C3
C3 = ∑dOC∈∆(sOUC(dOC) ⋅ C)
The state transitions of the Resource module described by APTC are as follows.
R = ∑dIR∈∆(rIUR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOUR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions between the Resource User and the Resource Provider.
γ(rIUP(dIP), sIUP(dIP)) ≜ cIUP(dIP)
γ(rOUP(dOP), sOUP(dOP)) ≜ cOUP(dOP)
There are two communication functions between the Resource User and the Resource Cache as
follows.
γ(rIUC(dIC), sIUC(dIC)) ≜ cIUC(dIC)
γ(rOUC(dOC), sOUC(dOC)) ≜ cOUC(dOC)
There are two communication functions between the Resource User and the Resource as follows.
γ(rIUR(dIR), sIUR(dIR)) ≜ cIUR(dIR)
γ(rOUR(dOR), sOUR(dOR)) ≜ cOUR(dOR)
Let all modules be in parallel, then the Caching pattern U ≬ C ≬ P ≬ R can be presented by the following process term.
τI(∂H(Θ(U ≬ C ≬ P ≬ R))) = τI(∂H(U ≬ C ≬ P ≬ R))

where H = {rIUP(dIP), sIUP(dIP), rOUP(dOP), sOUP(dOP), rIUC(dIC), sIUC(dIC), rOUC(dOC), sOUC(dOC), rIUR(dIR), sIUR(dIR), rOUR(dOR), sOUR(dOR) ∣ dI, dIP, dIC, dIR, dOP, dOC, dOR, dO ∈ ∆},
I = {cIUP(dIP), cOUP(dOP), cIUC(dIC), cOUC(dOC), cIUR(dIR), cOUR(dOR), UF1, UF2, UF3, UF4, PF, CF, RF ∣ dI, dIP, dIC, dIR, dOP, dOC, dOR, dO ∈ ∆}.

Then we get the following conclusion on the Caching pattern.
Theorem 7.5 (Correctness of the Caching pattern). The Caching pattern τI(∂H(U ≬ C ≬ P ≬
R)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(U ≬ C ≬ P ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ C ≬ P ≬ R)),

that is, the Caching pattern τI(∂H(U ≬ C ≬ P ≬ R)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
7.2.2 Verification of the Pooling Pattern
The Pooling pattern recycles resources to avoid re-acquiring them.
There are four modules in the Pooling pattern: the Resource User, the Resource Provider, the
Resource Pool, and the Resource. The Resource User interacts with the outside through the
channels I and O; with the Resource Pool through the channels IUP and OUP; and with the
Resource through the channels IUR and OUR. The Resource Pool interacts with the Resource
Provider through the channels IPP and OPP, as illustrated in Figure 86.
The typical process of the Pooling pattern is shown in Figure 87 and as follows.
1. The Resource User receives the input dI from the outside through the channel I (the
corresponding reading action is denoted rI(dI)), then processes the input and generates
the input dIP through a processing function UF1, and sends the input to the Resource
Pool through the channel IUP (the corresponding sending action is denoted sIUP(dIP ));
2. The Resource Pool receives the input from the Resource User through the channel IUP (the corresponding reading action is denoted rIUP(dIP)), then processes the input and generates the input dIRP to the Resource Provider through a processing function PF1, and sends the input to the Resource Provider through the channel IPP (the corresponding sending action is denoted sIPP(dIRP));
Figure 86: Pooling pattern
3. The Resource Provider receives the input dIRP from the Resource Pool through the channel IPP (the corresponding reading action is denoted rIPP(dIRP)), then processes the input through a processing function RPF, generates and sends the output dORP to the Resource Pool through the channel OPP (the corresponding sending action is denoted sOPP(dORP));
4. The Resource Pool receives the output dORP from the Resource Provider through the channel OPP (the corresponding reading action is denoted rOPP(dORP)), then processes the output through a processing function PF2, generates and sends the response dOP to the Resource User through the channel OUP (the corresponding sending action is denoted sOUP(dOP));
5. The Resource User receives the response dOP from the Resource Pool through the channel OUP (the corresponding reading action is denoted rOUP(dOP)), then processes the output dOP through a processing function UF2, and sends the processed input dIR to the Resource through the channel IUR (the corresponding sending action is denoted sIUR(dIR));
6. The Resource receives dIR from the Resource User through the channel IUR (the corresponding reading action is denoted rIUR(dIR)), then processes the request through a processing function RF, generates and sends the processed output dOR to the Resource User through the channel OUR (the corresponding sending action is denoted sOUR(dOR));
7. The Resource User receives the output dOR from the Resource through the channel OUR (the corresponding reading action is denoted rOUR(dOR)), then processes the response and generates the response dO through a processing function UF3, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
In the following, we verify the Pooling pattern. We assume all data elements dI, dIP, dIRP, dIR, dORP, dOP, dOR, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.
U = ∑dI∈∆(rI(dI) ⋅ U2)
U2 = UF1 ⋅ U3
U3 = ∑dIP∈∆(sIUP(dIP) ⋅ U4)
Figure 87: Typical process of Pooling pattern
U4 = ∑dOP∈∆(rOUP(dOP) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dIR∈∆(sIUR(dIR) ⋅ U7)
U7 = ∑dOR∈∆(rOUR(dOR) ⋅ U8)
U8 = UF3 ⋅ U9
U9 = ∑dO∈∆(sO(dO) ⋅ U)

The state transitions of the Resource Provider module described by APTC are as follows.
RP = ∑dIRP∈∆(rIPP(dIRP) ⋅ RP2)
RP2 = RPF ⋅ RP3
RP3 = ∑dORP∈∆(sOPP(dORP) ⋅ RP)
The state transitions of the Resource Pool module described by APTC are as follows.
P = ∑dIP∈∆(rIUP(dIP) ⋅ P2)
P2 = PF1 ⋅ P3
P3 = ∑dIRP∈∆(sIPP(dIRP) ⋅ P4)
P4 = ∑dORP∈∆(rOPP(dORP) ⋅ P5)
P5 = PF2 ⋅ P6
P6 = ∑dOP∈∆(sOUP(dOP) ⋅ P)
The state transitions of the Resource module described by APTC are as follows.
R = ∑dIR∈∆(rIUR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOUR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions between the Resource User and the Resource Pool.
γ(rIUP(dIP), sIUP(dIP)) ≜ cIUP(dIP)
γ(rOUP(dOP), sOUP(dOP)) ≜ cOUP(dOP)
There are two communication functions between the Resource Provider and the Resource Pool
as follows.
γ(rIPP(dIRP), sIPP(dIRP)) ≜ cIPP(dIRP)
γ(rOPP(dORP), sOPP(dORP)) ≜ cOPP(dORP)
There are two communication functions between the Resource User and the Resource as follows.
γ(rIUR(dIR), sIUR(dIR)) ≜ cIUR(dIR)
γ(rOUR(dOR), sOUR(dOR)) ≜ cOUR(dOR)
Let all modules be in parallel, then the Pooling pattern U ≬ RP ≬ P ≬ R can be presented by the following process term.
τI(∂H(Θ(U ≬ RP ≬ P ≬ R))) = τI(∂H(U ≬ RP ≬ P ≬ R))

where H = {rIUP(dIP), sIUP(dIP), rOUP(dOP), sOUP(dOP), rIPP(dIRP), sIPP(dIRP), rOPP(dORP), sOPP(dORP), rIUR(dIR), sIUR(dIR), rOUR(dOR), sOUR(dOR) ∣ dI, dIP, dIRP, dIR, dOP, dORP, dOR, dO ∈ ∆},
I = {cIUP(dIP), cOUP(dOP), cIPP(dIRP), cOPP(dORP), cIUR(dIR), cOUR(dOR), UF1, UF2, UF3, PF1, PF2, RPF, RF ∣ dI, dIP, dIRP, dIR, dOP, dORP, dOR, dO ∈ ∆}.
Then we get the following conclusion on the Pooling pattern.
Theorem 7.6 (Correctness of the Pooling pattern). The Pooling pattern τI(∂H(U ≬ RP ≬
P ≬ R)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(U ≬ RP ≬ P ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ RP ≬ P ≬ R)),

that is, the Pooling pattern τI(∂H(U ≬ RP ≬ P ≬ R)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
Figure 88: Coordinator pattern
7.2.3 Verification of the Coordinator Pattern
The Coordinator pattern gives a solution to maintain the consistency by coordinating the com-
pletion of tasks involving multi participants, which has two classes of components: n Synchronous
Services and the Coordinator. The Coordinator receives the inputs from the user through the
channel I, then the Coordinator sends the results to the Participant i through the channel CPi
for 1 ≤ i ≤ n; When the Participant i receives the input from the Coordinator, it generates and
sends the results out to the user through the channel Oi. As illustrates in Figure 88.
The typical process of the Coordinator pattern is shown in Figure 89 and following.
1. The Coordinator receives the input dI from the user through the channel I (the corresponding reading action is denoted rI(dI)), processes the input through a processing function CF, and generates the input to the Participant i (for 1 ≤ i ≤ n), which is denoted dIPi; then it sends the input to the Participant i through the channel CPi (the corresponding sending action is denoted sCPi(dIPi));
2. The Participant i (for 1 ≤ i ≤ n) receives the input from the Coordinator through the channel CPi (the corresponding reading action is denoted rCPi(dIPi)), processes the results through a processing function PFi, generates the output dOi, and then sends the output through the channel Oi (the corresponding sending action is denoted sOi(dOi)).
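The fan-out to the n Participants can be sketched in Python (illustrative only; all names are ours). The participants run concurrently, mirroring the parallel composition sCP1(dIP1) ≬ ⋯ ≬ sCPn(dIPn) in the Coordinator's behavior.

```python
from concurrent.futures import ThreadPoolExecutor

def coordinate(d_i: str, n: int):
    """CF derives one input per Participant; each PFi then runs concurrently."""
    d_ips = [(i, f"CF({d_i})[{i}]") for i in range(1, n + 1)]  # the d_IPi
    with ThreadPoolExecutor(max_workers=n) as pool:
        # each Participant i applies PFi and emits d_Oi on its channel Oi;
        # map preserves the order of the inputs
        return list(pool.map(lambda iv: f"PF{iv[0]}({iv[1]})", d_ips))
```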
In the following, we verify the Coordinator pattern. We assume all data elements dI, dIPi, dOi (for 1 ≤ i ≤ n) are from a finite set ∆.
The state transitions of the Coordinator module described by APTC are as follows.
C = ∑dI∈∆(rI(dI) ⋅C2)
Figure 89: Typical process of Coordinator pattern
C2 = CF ⋅ C3
C3 = ∑dIP1,⋯,dIPn∈∆(sCP1(dIP1) ≬ ⋯ ≬ sCPn(dIPn) ⋅ C)
The state transitions of the Participant i described by APTC are as follows.
Pi = ∑dIPi∈∆(rCPi(dIPi) ⋅ Pi2)
Pi2 = PFi ⋅ Pi3
Pi3 = ∑dOi∈∆(sOi(dOi) ⋅ Pi)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions of the Participant i for 1 ≤ i ≤ n.
γ(rCPi(dIPi), sCPi(dIPi)) ≜ cCPi(dIPi)
Let all modules be in parallel, then the Coordinator pattern C ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn can be presented by the following process term.
τI(∂H(Θ(C ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn))) = τI(∂H(C ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn))

where H = {rCPi(dIPi), sCPi(dIPi) ∣ dI, dIPi, dOi ∈ ∆} for 1 ≤ i ≤ n,
I = {cCPi(dIPi), CF, PFi ∣ dI, dIPi, dOi ∈ ∆} for 1 ≤ i ≤ n.

Then we get the following conclusion on the Coordinator pattern.
Theorem 7.7 (Correctness of the Coordinator pattern). The Coordinator pattern τI(∂H(C ≬
P1 ≬⋯ ≬ Pi ≬⋯ ≬ Pn)) can exhibit desired external behaviors.
Figure 90: Resource Lifecycle Manager pattern
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(C ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn)) = ∑dI,dO1,⋯,dOn∈∆(rI(dI) ⋅ sO1(dO1) ∥ ⋯ ∥ sOi(dOi) ∥ ⋯ ∥ sOn(dOn)) ⋅ τI(∂H(C ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn)),

that is, the Coordinator pattern τI(∂H(C ≬ P1 ≬ ⋯ ≬ Pi ≬ ⋯ ≬ Pn)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
7.2.4 Verification of the Resource Lifecycle Manager Pattern
The Resource Lifecycle Manager pattern decouples resource lifecycle management from resource use by introducing a
Resource Lifecycle Manager. There are four modules in the Resource Lifecycle Manager pattern:
the Resource User, the Resource Provider, the Resource Lifecycle Manager, and the Resource.
The Resource User interacts with the outside through the channels I and O; with the Resource
Lifecycle Manager through the channels IUM and OUM; and with the Resource through the channels IUR
and OUR. The Resource Lifecycle Manager interacts with the Resource Provider through the
channels IMP and OMP, as illustrated in Figure 90.
The typical process of the Resource Lifecycle Manager pattern is shown in Figure 91 and as
follows.
1. The Resource User receives the input dI from the outside through the channel I (the
corresponding reading action is denoted rI(dI)), then processes the input and generates
the input dIM through a processing function UF1, and sends the input to the Resource
Lifecycle Manager through the channel IUM (the corresponding sending action is denoted
sIUM(dIM ));
Figure 91: Typical process of Resource Lifecycle Manager pattern

2. The Resource Lifecycle Manager receives the input from the Resource User through the channel IUM (the corresponding reading action is denoted rIUM(dIM)), then processes the input and generates the input dIP to the Resource Provider through a processing function MF1, and sends the input to the Resource Provider through the channel IMP (the corresponding sending action is denoted sIMP(dIP));
3. The Resource Provider receives the input dIP from the Resource Lifecycle Manager through the channel IMP (the corresponding reading action is denoted rIMP(dIP)), then processes the input through a processing function PF, generates and sends the output dOP to the Resource Lifecycle Manager through the channel OMP (the corresponding sending action is denoted sOMP(dOP));
4. The Resource Lifecycle Manager receives the output dOP from the Resource Provider through the channel OMP (the corresponding reading action is denoted rOMP(dOP)), then processes the output through a processing function MF2, generates and sends the response dOM to the Resource User through the channel OUM (the corresponding sending action is denoted sOUM(dOM));
5. The Resource User receives the response dOM from the Resource Lifecycle Manager through the channel OUM (the corresponding reading action is denoted rOUM(dOM)), then processes the output dOM through a processing function UF2, and sends the processed input dIR to the Resource through the channel IUR (the corresponding sending action is denoted sIUR(dIR));
6. The Resource receives dIR from the Resource User through the channel IUR (the corresponding reading action is denoted rIUR(dIR)), then processes the request through a processing function RF, generates and sends the processed output dOR to the Resource User through the channel OUR (the corresponding sending action is denoted sOUR(dOR));
7. The Resource User receives the output dOR from the Resource through the channel OUR (the corresponding reading action is denoted rOUR(dOR)), then processes the response and generates the response dO through a processing function UF3, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
In the following, we verify the Resource Lifecycle Manager pattern. We assume all data elements dI, dIP, dIM, dIR, dOM, dOP, dOR, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.
U = ∑dI∈∆(rI(dI) ⋅ U2)
U2 = UF1 ⋅ U3
U3 = ∑dIM∈∆(sIUM(dIM) ⋅ U4)
U4 = ∑dOM∈∆(rOUM(dOM) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dIR∈∆(sIUR(dIR) ⋅ U7)
U7 = ∑dOR∈∆(rOUR(dOR) ⋅ U8)
U8 = UF3 ⋅ U9
U9 = ∑dO∈∆(sO(dO) ⋅ U)

The state transitions of the Resource Provider module described by APTC are as follows.
P = ∑dIP∈∆(rIMP(dIP) ⋅ P2)
P2 = PF ⋅ P3
P3 = ∑dOP∈∆(sOMP(dOP) ⋅ P)
The state transitions of the Resource Lifecycle Manager module described by APTC are as
follows.
M = ∑dIM∈∆(rIUM(dIM) ⋅ M2)
M2 = MF1 ⋅ M3
M3 = ∑dIP∈∆(sIMP(dIP) ⋅ M4)
M4 = ∑dOP∈∆(rOMP(dOP) ⋅ M5)
M5 = MF2 ⋅ M6
M6 = ∑dOM∈∆(sOUM(dOM) ⋅ M)
The state transitions of the Resource module described by APTC are as follows.
R = ∑dIR∈∆(rIUR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOUR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise, they will cause a deadlock δ. We define the following communication functions between the Resource User and the Resource Lifecycle Manager.
γ(rIUM(dIM), sIUM(dIM)) ≜ cIUM(dIM)
γ(rOUM(dOM), sOUM(dOM)) ≜ cOUM(dOM)
There are two communication functions between the Resource Provider and the Resource Life-
cycle Manager as follows.
γ(rIMP(dIP), sIMP(dIP)) ≜ cIMP(dIP)
γ(rOMP(dOP), sOMP(dOP)) ≜ cOMP(dOP)
There are two communication functions between the Resource User and the Resource as follows.
γ(rIUR(dIR), sIUR(dIR)) ≜ cIUR(dIR)
γ(rOUR(dOR), sOUR(dOR)) ≜ cOUR(dOR)
Let all modules be in parallel, then the Resource Lifecycle Manager pattern U ≬ M ≬ P ≬ R can be presented by the following process term.
τI(∂H(Θ(U ≬ M ≬ P ≬ R))) = τI(∂H(U ≬ M ≬ P ≬ R))

where H = {rIUM(dIM), sIUM(dIM), rOUM(dOM), sOUM(dOM), rIMP(dIP), sIMP(dIP), rOMP(dOP), sOMP(dOP), rIUR(dIR), sIUR(dIR), rOUR(dOR), sOUR(dOR) ∣ dI, dIP, dIM, dIR, dOP, dOM, dOR, dO ∈ ∆},
I = {cIUM(dIM), cOUM(dOM), cIMP(dIP), cOMP(dOP), cIUR(dIR), cOUR(dOR), UF1, UF2, UF3, MF1, MF2, PF, RF ∣ dI, dIP, dIM, dIR, dOP, dOM, dOR, dO ∈ ∆}.

Then we get the following conclusion on the Resource Lifecycle Manager pattern.
Theorem 7.8 (Correctness of the Resource Lifecycle Manager pattern). The Resource Lifecycle
Manager pattern τI(∂H(U ≬M ≬ P ≬ R)) can exhibit desired external behaviors.
Proof. Based on the state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(U ≬ M ≬ P ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ M ≬ P ≬ R)),

that is, the Resource Lifecycle Manager pattern τI(∂H(U ≬ M ≬ P ≬ R)) can exhibit desired external behaviors.
For the details of the proof, please refer to Section 2.8; we omit them here.
7.3 Resource Release
In this subsection, we verify patterns for resource release, including the Leasing pattern, and
the Evictor pattern.
7.3.1 Verification of the Leasing Pattern
The Leasing pattern uses a mediating lookup service to find and access resources. There are four
modules in the Leasing pattern: the Resource User, the Resource Provider, the Lease, and the
Resource. The Resource User interacts with the outside through the channels I and O; with the
Resource Provider through the channel IUP and OUP ; with the Resource through the channels
IUR and OUR; with the Lease through the channels IUL and OUL. As illustrates in Figure 92.
The typical process of the Leasing pattern is shown in Figure 93 and as follows.
Figure 92: Leasing pattern
1. The Resource User receives the input dI from the outside through the channel I (the
corresponding reading action is denoted rI(dI)), then processes the input dI through a
processing function UF1, and sends the input dIP to the Resource Provider through the
channel IUP (the corresponding sending action is denoted sIUP(dIP ));
2. The Resource Provider receives the input from the Resource User through the channel IUP (the corresponding reading action is denoted rIUP(dIP)), then processes the input and generates the output dOP through a processing function PF, and sends the output to the Resource User through the channel OUP (the corresponding sending action is denoted sOUP(dOP));
3. The Resource User receives the output dOP from the Resource Provider through the channel OUP (the corresponding reading action is denoted rOUP(dOP)), then processes the output through a processing function UF2, generates and sends the input dIR to the Resource through the channel IUR (the corresponding sending action is denoted sIUR(dIR));
4. The Resource receives the input dIR from the Resource User through the channel IUR (the corresponding reading action is denoted rIUR(dIR)), then processes the input through a processing function RF, generates and sends the response dOR to the Resource User through the channel OUR (the corresponding sending action is denoted sOUR(dOR));
5. The Resource User receives the response dOR from the Resource through the channel OUR (the corresponding reading action is denoted rOUR(dOR)), then processes the response through a processing function UF3, generates the request dIL, and sends it to the Lease through the channel IUL (the corresponding sending action is denoted sIUL(dIL));
6. The Lease receives dIL from the Resource User through the channel IUL (the corresponding reading action is denoted rIUL(dIL)), then processes the request through a processing function LF, generates and sends the processed output dOL to the Resource User through the channel OUL (the corresponding sending action is denoted sOUL(dOL));
Figure 93: Typical process of Leasing pattern
7. The Resource User receives the output dOL from the Lease through the channel OUL (the corresponding reading action is denoted rOUL(dOL)), then processes the output and generates dO through a processing function UF4, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
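The seven steps above can be sketched as a plain sequential simulation. The processing functions UF1–UF4, PF, RF, LF come from the text, but their bodies below are hypothetical placeholders that merely tag the data, so the sketch only illustrates the order of the interactions, not APTC semantics.

```python
# Minimal sketch of the Leasing pattern's typical process (steps 1-7).
# UF1..UF4, PF, RF, LF follow the text; their bodies are placeholders.

def UF1(d_I): return f"IP({d_I})"    # Resource User prepares the request
def PF(d_IP): return f"OP({d_IP})"   # Resource Provider answers
def UF2(d_OP): return f"IR({d_OP})"  # User prepares the resource request
def RF(d_IR): return f"OR({d_IR})"   # Resource responds
def UF3(d_OR): return f"IL({d_OR})"  # User prepares the lease request
def LF(d_IL): return f"OL({d_IL})"   # Lease grants/renews the lease
def UF4(d_OL): return f"O({d_OL})"   # User produces the final output

def leasing(d_I):
    d_IP = UF1(d_I)   # step 1: channel IUP
    d_OP = PF(d_IP)   # step 2: channel OUP
    d_IR = UF2(d_OP)  # step 3: channel IUR
    d_OR = RF(d_IR)   # step 4: channel OUR
    d_IL = UF3(d_OR)  # step 5: channel IUL
    d_OL = LF(d_IL)   # step 6: channel OUL
    return UF4(d_OL)  # step 7: channel O

print(leasing("d"))  # -> O(OL(IL(OR(IR(OP(IP(d)))))))
```

The nesting of the result makes the order of the seven exchanges visible at a glance.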
In the following, we verify the Leasing pattern. We assume all data elements dI, dIL, dIP, dIR, dOL, dOP, dOR, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.

U = ∑dI∈∆(rI(dI) ⋅ U2)
U2 = UF1 ⋅ U3
U3 = ∑dIP∈∆(sIUP(dIP) ⋅ U4)
U4 = ∑dOP∈∆(rOUP(dOP) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dIR∈∆(sIUR(dIR) ⋅ U7)
U7 = ∑dOR∈∆(rOUR(dOR) ⋅ U8)
U8 = UF3 ⋅ U9
U9 = ∑dIL∈∆(sIUL(dIL) ⋅ U10)
U10 = ∑dOL∈∆(rOUL(dOL) ⋅ U11)
U11 = UF4 ⋅ U12
U12 = ∑dO∈∆(sO(dO) ⋅ U)

The state transitions of the Resource Provider module described by APTC are as follows.

P = ∑dIP∈∆(rIUP(dIP) ⋅ P2)
P2 = PF ⋅ P3
P3 = ∑dOP∈∆(sOUP(dOP) ⋅ P)
The state transitions of the Lease module described by APTC are as follows.

L = ∑dIL∈∆(rIUL(dIL) ⋅ L2)
L2 = LF ⋅ L3
L3 = ∑dOL∈∆(sOUL(dOL) ⋅ L)

The state transitions of the Resource module described by APTC are as follows.

R = ∑dIR∈∆(rIUR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOUR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise a deadlock δ will be caused. We define the following communication functions between the Resource User and the Resource Provider.

γ(rIUP(dIP), sIUP(dIP)) ≜ cIUP(dIP)
γ(rOUP(dOP), sOUP(dOP)) ≜ cOUP(dOP)
There are two communication functions between the Resource User and the Lease as follows.

γ(rIUL(dIL), sIUL(dIL)) ≜ cIUL(dIL)
γ(rOUL(dOL), sOUL(dOL)) ≜ cOUL(dOL)
There are two communication functions between the Resource User and the Resource as follows.

γ(rIUR(dIR), sIUR(dIR)) ≜ cIUR(dIR)
γ(rOUR(dOR), sOUR(dOR)) ≜ cOUR(dOR)
Let all modules be in parallel, then the Leasing pattern composed of U, L, P, and R can be presented by the following process term.
τI(∂H(Θ(U ≬ L ≬ P ≬ R))) = τI(∂H(U ≬ L ≬ P ≬ R))

where H = {rIUP(dIP), sIUP(dIP), rOUP(dOP), sOUP(dOP), rIUL(dIL), sIUL(dIL), rOUL(dOL), sOUL(dOL), rIUR(dIR), sIUR(dIR), rOUR(dOR), sOUR(dOR) ∣ dI, dIP, dIL, dIR, dOP, dOL, dOR, dO ∈ ∆},
I = {cIUP(dIP), cOUP(dOP), cIUL(dIL), cOUL(dOL), cIUR(dIR), cOUR(dOR), UF1, UF2, UF3, UF4, PF, LF, RF ∣ dI, dIP, dIL, dIR, dOP, dOL, dOR, dO ∈ ∆}.

Then we get the following conclusion on the Leasing pattern.
Theorem 7.9 (Correctness of the Leasing pattern). The Leasing pattern τI(∂H(U ≬ L ≬ P ≬ R)) can exhibit desired external behaviors.
Figure 94: Evictor pattern
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(U ≬ L ≬ P ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ L ≬ P ≬ R)),

that is, the Leasing pattern τI(∂H(U ≬ L ≬ P ≬ R)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
7.3.2 Verification of the Evictor Pattern
The Evictor pattern allows different strategies for releasing resources. There are three modules in the Evictor pattern: the Resource User, the Evictor, and the Resource. The Resource User interacts with the outside through the channels I and O; with the Evictor through the channels IUE and OUE; with the Resource through the channels IUR and OUR. The Evictor interacts with the Resource through the channels IER and OER, as illustrated in Figure 94.
The typical process of the Evictor pattern is shown in Figure 95 and described as follows.
1. The Resource User receives the input dI from the outside through the channel I (the corresponding reading action is denoted rI(dI)), then processes the input dI through a processing function UF1 to generate the request dIR, and sends dIR to the Resource through the channel IUR (the corresponding sending action is denoted sIUR(dIR));
2. The Resource receives the input dIR from the Resource User through the channel IUR (the corresponding reading action is denoted rIUR(dIR)), then processes the input and generates the output dOR through a processing function RF1, and sends the output to the Resource User through the channel OUR (the corresponding sending action is denoted sOUR(dOR));
3. The Resource User receives the output dOR from the Resource through the channel OUR (the corresponding reading action is denoted rOUR(dOR)), then processes the output through a processing function UF2, generates and sends the input dIE to the Evictor through the channel IUE (the corresponding sending action is denoted sIUE(dIE));
Figure 95: Typical process of Evictor pattern
4. The Evictor receives the input dIE from the Resource User through the channel IUE (the corresponding reading action is denoted rIUE(dIE)), then processes the input through a processing function EF1, generates and sends the input dIR′ to the Resource through the channel IER (the corresponding sending action is denoted sIER(dIR′));
5. The Resource receives the input dIR′ from the Evictor through the channel IER (the corresponding reading action is denoted rIER(dIR′)), then processes the input and generates the output dOR′ through a processing function RF2, and sends the output to the Evictor through the channel OER (the corresponding sending action is denoted sOER(dOR′));
6. The Evictor receives dOR′ from the Resource through the channel OER (the corresponding reading action is denoted rOER(dOR′)), then processes the input through a processing function EF2, generates and sends the output dOE to the Resource User through the channel OUE (the corresponding sending action is denoted sOUE(dOE));
7. The Resource User receives the response dOE from the Evictor through the channel OUE (the corresponding reading action is denoted rOUE(dOE)), then processes the response and generates the response dO through a processing function UF3, and sends the response to the outside through the channel O (the corresponding sending action is denoted sO(dO)).
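The seven steps can also be replayed as a trace of channel events. In this sketch the channel names (I, IUR, OUR, IUE, IER, OER, OUE, O) follow the text, while the processing functions UF1–UF3, EF1, EF2, RF1, RF2 are stand-ins that just tag the data.

```python
# Sketch of the Evictor pattern's typical process as a channel-event trace.
# Channel names follow the text; processing functions are placeholders.

def run_evictor(d_I, trace):
    trace.append(("I", d_I))        # step 1: User reads the input
    d_IR = ("UF1", d_I)
    trace.append(("IUR", d_IR))     # User -> Resource
    d_OR = ("RF1", d_IR)            # step 2: Resource answers
    trace.append(("OUR", d_OR))
    d_IE = ("UF2", d_OR)            # step 3: User -> Evictor
    trace.append(("IUE", d_IE))
    d_IRp = ("EF1", d_IE)           # step 4: Evictor -> Resource
    trace.append(("IER", d_IRp))
    d_ORp = ("RF2", d_IRp)          # step 5: Resource -> Evictor
    trace.append(("OER", d_ORp))
    d_OE = ("EF2", d_ORp)           # step 6: Evictor -> User
    trace.append(("OUE", d_OE))
    d_O = ("UF3", d_OE)             # step 7: User -> outside
    trace.append(("O", d_O))
    return d_O

trace = []
run_evictor("d", trace)
print([ch for ch, _ in trace])
# -> ['I', 'IUR', 'OUR', 'IUE', 'IER', 'OER', 'OUE', 'O']
```

The trace makes explicit that the Resource is visited twice: once by the Resource User (RF1) and once by the Evictor (RF2).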
In the following, we verify the Evictor pattern. We assume all data elements dI, dIE, dIR, dIR′, dOE, dOR, dOR′, dO are from a finite set ∆.
The state transitions of the Resource User module described by APTC are as follows.

U = ∑dI∈∆(rI(dI) ⋅ U2)
U2 = UF1 ⋅ U3
U3 = ∑dIR∈∆(sIUR(dIR) ⋅ U4)
U4 = ∑dOR∈∆(rOUR(dOR) ⋅ U5)
U5 = UF2 ⋅ U6
U6 = ∑dIE∈∆(sIUE(dIE) ⋅ U7)
U7 = ∑dOE∈∆(rOUE(dOE) ⋅ U8)
U8 = UF3 ⋅ U9
U9 = ∑dO∈∆(sO(dO) ⋅ U)

The state transitions of the Evictor module described by APTC are as follows.

E = ∑dIE∈∆(rIUE(dIE) ⋅ E2)
E2 = EF1 ⋅ E3
E3 = ∑dIR′∈∆(sIER(dIR′) ⋅ E4)
E4 = ∑dOR′∈∆(rOER(dOR′) ⋅ E5)
E5 = EF2 ⋅ E6
E6 = ∑dOE∈∆(sOUE(dOE) ⋅ E)
The state transitions of the Resource module described by APTC are as follows (the Resource serves both the Resource User, through RF1, and the Evictor, through RF2, as in the typical process above).

R = ∑dIR∈∆(rIUR(dIR) ⋅ R2)
R2 = RF1 ⋅ R3
R3 = ∑dOR∈∆(sOUR(dOR) ⋅ R4)
R4 = ∑dIR′∈∆(rIER(dIR′) ⋅ R5)
R5 = RF2 ⋅ R6
R6 = ∑dOR′∈∆(sOER(dOR′) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise a deadlock δ will be caused. We define the following communication functions between the Resource User and the Evictor.

γ(rIUE(dIE), sIUE(dIE)) ≜ cIUE(dIE)
γ(rOUE(dOE), sOUE(dOE)) ≜ cOUE(dOE)
There are two communication functions between the Resource User and the Resource as follows.

γ(rIUR(dIR), sIUR(dIR)) ≜ cIUR(dIR)
γ(rOUR(dOR), sOUR(dOR)) ≜ cOUR(dOR)
There are two communication functions between the Evictor and the Resource as follows.

γ(rIER(dIR′), sIER(dIR′)) ≜ cIER(dIR′)
γ(rOER(dOR′), sOER(dOR′)) ≜ cOER(dOR′)
Let all modules be in parallel, then the Evictor pattern composed of U, E, and R can be presented by the following process term.

τI(∂H(Θ(U ≬ E ≬ R))) = τI(∂H(U ≬ E ≬ R))
where H = {rIUE(dIE), sIUE(dIE), rOUE(dOE), sOUE(dOE), rIUR(dIR), sIUR(dIR), rOUR(dOR), sOUR(dOR), rIER(dIR′), sIER(dIR′), rOER(dOR′), sOER(dOR′) ∣ dI, dIE, dIR, dIR′, dOE, dOR, dOR′, dO ∈ ∆},
I = {cIUE(dIE), cOUE(dOE), cIUR(dIR), cOUR(dOR), cIER(dIR′), cOER(dOR′), UF1, UF2, UF3, EF1, EF2, RF1, RF2 ∣ dI, dIE, dIR, dIR′, dOE, dOR, dOR′, dO ∈ ∆}.
Then we get the following conclusion on the Evictor pattern.
Theorem 7.10 (Correctness of the Evictor pattern). The Evictor pattern τI(∂H(U ≬ E ≬ R)) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(U ≬ E ≬ R)) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(U ≬ E ≬ R)),

that is, the Evictor pattern τI(∂H(U ≬ E ≬ R)) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
8 Composition of Patterns
Patterns can be composed freely to satisfy actual requirements, provided that the syntax and semantics of the output of one pattern can be plugged into the syntax and semantics of the input of another pattern.
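This plugging condition can be pictured by abstracting each verified pattern to its external behavior, a function from its input channel to its output channel; a minimal sketch, with hypothetical placeholder pattern bodies:

```python
# Sketch: pattern composition as plugging one pattern's output into
# another pattern's input. Each "pattern" is abstracted to its external
# behavior, a function from input data to output data (placeholders).

def plug(p1, p2):
    """Compose two patterns when p1's output feeds p2's input."""
    return lambda d: p2(p1(d))

pattern_a = lambda d: f"A({d})"  # hypothetical first pattern
pattern_b = lambda d: f"B({d})"  # hypothetical second pattern

composed = plug(pattern_a, pattern_b)
print(composed("d"))  # -> B(A(d))
```

The compositions verified in the rest of this chapter follow this shape: the composed module's external behavior is the chaining of the component patterns' external behaviors.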
In this chapter, we show the composition of patterns. In section 8.1, we verify the composition
of the Layers patterns. In section 8.2, we show the composition of Presentation-Abstraction-
Control (PAC) patterns. We compose patterns for resource management in section 8.3.
8.1 Composition of the Layers Patterns
In this subsection, we show the composition of the Layers patterns and verify the correctness of the composition. We have already verified the correctness of the Layers pattern and its composition in section 3.1.1; here we verify the correctness of the composition of the Layers patterns based on the correctness result of the Layers pattern.
The composition of two layers peers is illustrated in Figure 96. Each layers peer is abstracted
as a module, and the composition of two layers peers is also abstracted as a new module, as the
dotted rectangles illustrate in Figure 96.
There are two typical processes in the composition of two layers peers: one is the direction from peer P to peer P′, the other is the direction from P′ to P. We omit them here; please refer to section 3.1.1 for details.
In the following, we verify the correctness of the plugging of two layers peers. We assume all data elements dL1, dL1′, dUn, dUn′ are from a finite set ∆. Note that the channel LO1 and the channel LI1′ are the same channel, and the channel LO1′ and the channel LI1 are the same channel. And the data dL1′ and the data PUF(dUn) are the same data, and the data dL1 and the data P′UF(dUn′) are the same data.
The state transitions of the P described by APTC are as follows.

P = ∑dUn,dL1∈∆(rUIn(dUn) ⋅ P2 ≬ rLI1(dL1) ⋅ P3)
P2 = PUF ⋅ P4
P3 = PLF ⋅ P5
P4 = ∑dUn∈∆(sLO1(PUF(dUn)) ⋅ P)
P5 = ∑dL1∈∆(sUOn(PLF(dL1)) ⋅ P)

The state transitions of the P′ described by APTC are as follows.

P′ = ∑dUn′,dL1′∈∆(rUIn′(dUn′) ⋅ P′2 ≬ rLI1′(dL1′) ⋅ P′3)
P′2 = P′UF ⋅ P′4
P′3 = P′LF ⋅ P′5
P′4 = ∑dUn′∈∆(sLO1′(P′UF(dUn′)) ⋅ P′)
P′5 = ∑dL1′∈∆(sUOn′(P′LF(dL1′)) ⋅ P′)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise a deadlock δ will be caused. We define the following communication functions.

γ(rLI1(dL1), sLO1′(P′UF(dUn′))) ≜ cLI1(dL1)
γ(rLI1′(dL1′), sLO1(PUF(dUn))) ≜ cLI1′(dL1′)
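Because channel LO1 is LI1′ and LO1′ is LI1, each peer's lower output feeds the other peer's lower input. A minimal sketch of the two resulting round trips, with the layer functions PUF, PLF, P′UF, P′LF as hypothetical placeholders (P_UF and P_LF stand for the primed functions):

```python
# Sketch of two plugged layers peers P and P'. The lower output of each
# peer is the lower input of the other; layer functions are placeholders.

def PUF(d):  return f"PUF({d})"    # P: upper input -> lower output
def PLF(d):  return f"PLF({d})"    # P: lower input -> upper output
def P_UF(d): return f"P'UF({d})"   # P': upper input -> lower output
def P_LF(d): return f"P'LF({d})"   # P': lower input -> upper output

def peer_to_peer(d_Un):
    """Direction P -> P': from P's upper input to P''s upper output."""
    return P_LF(PUF(d_Un))

def peer_from_peer(d_Un_prime):
    """Direction P' -> P: from P''s upper input to P's upper output."""
    return PLF(P_UF(d_Un_prime))

print(peer_to_peer("d"))    # -> P'LF(PUF(d))
print(peer_from_peer("d"))  # -> PLF(P'UF(d))
```

The two nested results match the two send actions in the external behavior derived in Theorem 8.1.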
Let all modules be in parallel, then the two layers peers P and P′ can be presented by the following process term.

τI(∂H(Θ(τI1(∂H1(P)) ≬ τI2(∂H2(P′))))) = τI(∂H(τI1(∂H1(P)) ≬ τI2(∂H2(P′))))

where H = {rLI1(dL1), sLO1′(P′UF(dUn′)), rLI1′(dL1′), sLO1(PUF(dUn)) ∣ dL1, dL1′, dUn, dUn′ ∈ ∆},
I = {cLI1(dL1), cLI1′(dL1′), PUF, PLF, P′UF, P′LF ∣ dL1, dL1′, dUn, dUn′ ∈ ∆}.

About the definitions of H1 and I1, H2 and I2, please see section 3.1.1.
Then we get the following conclusion on the plugging of two layers peers.
Theorem 8.1 (Correctness of the plugging of two layers peers). The plugging of two layers peers τI(∂H(τI1(∂H1(P)) ≬ τI2(∂H2(P′)))) can exhibit desired external behaviors.

Proof. Based on the above state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(τI1(∂H1(P)) ≬ τI2(∂H2(P′)))) = ∑dUn,dUn′∈∆((rUIn(dUn) ∥ rUIn′(dUn′)) ⋅ (sUOn(PLF(P′UF(dUn′))) ∥ sUOn′(P′LF(PUF(dUn))))) ⋅ τI(∂H(τI1(∂H1(P)) ≬ τI2(∂H2(P′)))),

that is, the plugging of two layers peers τI(∂H(τI1(∂H1(P)) ≬ τI2(∂H2(P′)))) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
8.2 Composition of the PAC Patterns
In this subsection, we show the composition of Presentation-Abstraction-Control (PAC) patterns
(we already verified its correctness in section 3.3.2) and verify the correctness of the composition.
The PAC patterns can be composed into levels of PACs, as illustrated in Figure 97.
If the syntax and semantics of the output of one PAC match the syntax and semantics of the input of another PAC, then they can be composed. For simplicity and without loss of generality, we show the plugging of only two PACs, as illustrated in Figure 98. Each PAC is abstracted as a module, and the composition of two PACs is also abstracted as a new module, as the dotted rectangles illustrate in Figure 98.
The typical process of the plugging of two PACs is composed of the process of one PAC (the typical process is described in section 3.3.2) followed by the process of the other PAC; we omit it here.
In the following, we verify the correctness of the plugging of two PACs. We assume all data elements dI1, dO1, dI2, dO2, dI, dO, dO1i (for 1 ≤ i ≤ n), dO2j (for 1 ≤ j ≤ m) are from a finite set ∆. Note that the channel I and the channel I1 are the same channel; the channel O1 and the channel I2 are the same channel; and the channel O2 and the channel O are the same channel. And the data dI1 and the data dI are the same data; the data dO1 and the data dI2 are the same data; and the data dO2 and the data dO are the same data.
The state transitions of the PAC1 described by APTC are as follows.

PAC1 = ∑dI1∈∆(rI1(dI1) ⋅ PAC12)
PAC12 = PAC1F ⋅ PAC13
PAC13 = ∑dO1∈∆(sO1(dO1) ⋅ PAC14)
PAC14 = ∑dO11,⋯,dO1n∈∆(sO11(dO11) ≬ ⋯ ≬ sO1n(dO1n) ⋅ PAC1)

The state transitions of the PAC2 described by APTC are as follows.

PAC2 = ∑dI2∈∆(rI2(dI2) ⋅ PAC22)
PAC22 = PAC2F ⋅ PAC23
PAC23 = ∑dO2∈∆(sO2(dO2) ⋅ PAC24)
PAC24 = ∑dO21,⋯,dO2m∈∆(sO21(dO21) ≬ ⋯ ≬ sO2m(dO2m) ⋅ PAC2)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise a deadlock δ will be caused. We define the following communication function.

γ(rI2(dI2), sO1(dO1)) ≜ cI2(dI2)
Let all modules be in parallel, then the two PACs PAC1 and PAC2 can be presented by the following process term.

τI(∂H(Θ(τI1(∂H1(PAC1)) ≬ τI2(∂H2(PAC2))))) = τI(∂H(τI1(∂H1(PAC1)) ≬ τI2(∂H2(PAC2))))

where H = {rI2(dI2), sO1(dO1) ∣ dI1, dO1, dI2, dO2, dI, dO, dO1i, dO2j ∈ ∆} for 1 ≤ i ≤ n and 1 ≤ j ≤ m,
I = {cI2(dI2), PAC1F, PAC2F ∣ dI1, dO1, dI2, dO2, dI, dO, dO1i, dO2j ∈ ∆} for 1 ≤ i ≤ n and 1 ≤ j ≤ m.
About the definitions of H1 and I1, H2 and I2, please see section 3.3.2.
Then we get the following conclusion on the plugging of two PACs.
Theorem 8.2 (Correctness of the plugging of two PACs). The plugging of two PACs τI(∂H(τI1(∂H1(PAC1)) ≬ τI2(∂H2(PAC2)))) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(τI1(∂H1(PAC1)) ≬ τI2(∂H2(PAC2)))) = ∑dI,dO,dO11,⋯,dO1n,dO21,⋯,dO2m∈∆(rI(dI) ⋅ sO(dO) ⋅ sO11(dO11) ∥ ⋯ ∥ sO1i(dO1i) ∥ ⋯ ∥ sO1n(dO1n) ⋅ sO21(dO21) ∥ ⋯ ∥ sO2j(dO2j) ∥ ⋯ ∥ sO2m(dO2m)) ⋅ τI(∂H(τI1(∂H1(PAC1)) ≬ τI2(∂H2(PAC2)))),

that is, the plugging of two PACs τI(∂H(τI1(∂H1(PAC1)) ≬ τI2(∂H2(PAC2)))) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.
Figure 99: The whole resource management process
8.3 Composition of Resource Management Patterns
In this subsection, we show the composition of resource management patterns (we have already verified the correctness of patterns for resource management in section 7), and verify the correctness of the composition.
The whole process of resource management involves resource acquisition first, resource utilization and lifecycle management second, and resource release last, as Figure 99 illustrates. For resource acquisition, we take the Lookup pattern as an example, the Lifecycle Manager pattern for resource lifecycle management, and the Leasing pattern for resource release. The whole process of resource management is composed of the typical processes of the Lookup pattern, the Lifecycle Manager pattern, and the Leasing pattern; we do not repeat them here, please refer to the details of these three patterns in section 7. And we can verify the correctness of the whole resource management system shown in Figure 99, just as we have done many times for concrete patterns in the above sections.
However, we do not verify the correctness of the whole resource management system as in the previous work. The whole resource management system in Figure 99 contains the full functions of the Lookup pattern, the Lifecycle Manager pattern, and the Leasing pattern, and actually can be implemented by the composition of these three patterns, as Figure 100 illustrates. For the whole process of resource management, first the Lookup pattern works, then the Lifecycle Manager pattern, and last the Leasing pattern. That is, the output of the Lookup pattern is plugged into the input of the Lifecycle Manager pattern, and the output of the Lifecycle Manager pattern is plugged into the input of the Leasing pattern. Each pattern is abstracted as a module, and the composition of these three patterns is also abstracted as a new module, as the dotted rectangles illustrate in Figure 100.
In the following, we verify the correctness of the plugging of resource management patterns. We assume all data elements dI, dO, dIA, dOA, dIL, dOL, dIR, dOR are from a finite set ∆. Note that the channel I and the channel IA are the same channel; the channel OA and the channel IL are the same channel; the channel OL and the channel IR are the same channel; the channel OR and the channel O are the same channel. And the data dIA and the data dI are the same data; the data dOA and the data dIL are the same data; the data dOL and the data dIR are the same data; the data dOR and the data dO are the same data.
The state transitions of the Lookup pattern A described by APTC are as follows.

A = ∑dIA∈∆(rIA(dIA) ⋅ A2)
A2 = AF ⋅ A3
A3 = ∑dOA∈∆(sOA(dOA) ⋅ A)

The state transitions of the Lifecycle Manager pattern L described by APTC are as follows.

L = ∑dIL∈∆(rIL(dIL) ⋅ L2)
L2 = LF ⋅ L3
L3 = ∑dOL∈∆(sOL(dOL) ⋅ L)

The state transitions of the Leasing pattern R described by APTC are as follows.

R = ∑dIR∈∆(rIR(dIR) ⋅ R2)
R2 = RF ⋅ R3
R3 = ∑dOR∈∆(sOR(dOR) ⋅ R)
The sending action and the reading action of the same data through the same channel can communicate with each other; otherwise a deadlock δ will be caused. We define the following communication function between the Lookup pattern and the Lifecycle Manager pattern.

γ(rIL(dIL), sOA(dOA)) ≜ cIL(dIL)

We define the following communication function between the Lifecycle Manager pattern and the Leasing pattern.

γ(rIR(dIR), sOL(dOL)) ≜ cIR(dIR)
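Under the channel identifications above (I = IA, OA = IL, OL = IR, OR = O), the three patterns chain into a pipeline; a minimal sketch, with AF, LF, RF as hypothetical placeholders:

```python
# Sketch of the composed resource management pipeline: Lookup (A), then
# Lifecycle Manager (L), then Leasing (R). The channel identifications
# become plain function chaining; the processing functions are placeholders.

def AF(d): return f"AF({d})"   # Lookup: acquire the resource
def LF(d): return f"LF({d})"   # Lifecycle Manager: manage the resource
def RF(d): return f"RF({d})"   # Leasing: release the resource

def resource_management(d_I):
    d_OA = AF(d_I)    # A: rIA(dI) ... sOA(dOA)
    d_OL = LF(d_OA)   # L: rIL(dIL) ... sOL(dOL), with dIL = dOA
    return RF(d_OL)   # R: rIR(dIR) ... sOR(dOR), with dIR = dOL

print(resource_management("d"))  # -> RF(LF(AF(d)))
```

Externally the pipeline behaves as a single input-output module, which is exactly the shape of the behavior derived in Theorem 8.3.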
Let all modules be in parallel, then the resource management patterns A, L, and R can be presented by the following process term.

τI(∂H(Θ(τI1(∂H1(A)) ≬ τI2(∂H2(L)) ≬ τI3(∂H3(R))))) = τI(∂H(τI1(∂H1(A)) ≬ τI2(∂H2(L)) ≬ τI3(∂H3(R))))

where H = {rIL(dIL), sOA(dOA), rIR(dIR), sOL(dOL) ∣ dI, dO, dIA, dOA, dIL, dOL, dIR, dOR ∈ ∆},
I = {cIL(dIL), cIR(dIR), AF, LF, RF ∣ dI, dO, dIA, dOA, dIL, dOL, dIR, dOR ∈ ∆}.

About the definitions of H1 and I1, H2 and I2, H3 and I3, please see section 7.
Then we get the following conclusion on the plugging of resource management patterns.
Theorem 8.3 (Correctness of the plugging of resource management patterns). The plugging of resource management patterns τI(∂H(τI1(∂H1(A)) ≬ τI2(∂H2(L)) ≬ τI3(∂H3(R)))) can exhibit desired external behaviors.
Proof. Based on the above state transitions of the above modules, by use of the algebraic laws of APTC, we can prove that

τI(∂H(τI1(∂H1(A)) ≬ τI2(∂H2(L)) ≬ τI3(∂H3(R)))) = ∑dI,dO∈∆(rI(dI) ⋅ sO(dO)) ⋅ τI(∂H(τI1(∂H1(A)) ≬ τI2(∂H2(L)) ≬ τI3(∂H3(R)))),

that is, the plugging of resource management patterns τI(∂H(τI1(∂H1(A)) ≬ τI2(∂H2(L)) ≬ τI3(∂H3(R)))) can exhibit desired external behaviors. For the details of the proof, please refer to section 2.8; we omit them here.