
  • Probabilistic Distributed Temporal Logic

    Luís Gonçalves Silva Nina Morão

    Dissertation for obtaining the degree of Master in Mathematics and Applications (Matemática e Aplicações)

    Jury

    President: Professora Doutora Cristina Sales Viana Serôdio Sernadas
    Supervisor: Professor Doutor Jaime Arsénio de Brito Ramos
    Examiner: Professor Doutor Paulo Alexandre Carreira Mateus


  • Abstract

    We present a logic for reasoning about temporal properties of distributed systems and finitely additive probability. The language of this logic allows us to make statements such as "The probability of 'sometime in the future φ' is at least s". We start by presenting a linear temporal logic LTL with a sound and complete axiomatic system. We next present a distributed temporal logic DTL and show that this logic is reducible to LTL. To introduce probabilities in the linear temporal logic we define pLTL. We then make a first attempt at probabilities in a distributed temporal logic with pDTLL, by having probabilities in the temporal states. Finally we adopt an exogenous approach, pDTLG, with probabilities over the models of a distributed temporal logic.


  • Contents

    1 Introduction 1

    2 Temporal logic 7
        2.1 Linear Temporal Logic 7
            2.1.1 Syntax and semantics 7
            2.1.2 Axiomatization 10
            2.1.3 Completeness 16
            2.1.4 Decidability 28
        2.2 Distributed Temporal Logic 28
            2.2.1 Syntax and Semantics 28
            2.2.2 Axiomatization 30

    3 Probabilistic temporal logic 35
        3.1 Probabilistic linear temporal logic 35
            3.1.1 Syntax and semantics 36
            3.1.2 Axiomatization 38
        3.2 Probabilistic distributed temporal logic (local) pDTLL 44
            3.2.1 Syntax and semantics 44
                3.2.1.1 Extending probability to actions 46
        3.3 Probabilistic distributed temporal logic (global) pDTLG 51
            3.3.1 Syntax and semantics 52
                3.3.1.1 Actions and probability in pDTLG 55
                3.3.1.2 Communication in pDTLG 59

    4 Conclusion 61


  • Chapter 1

    Introduction

    The term temporal logic is used to describe any system of rules and symbolism for representing, and reasoning about, propositions qualified in terms of time. It was first introduced as tense logic, by Arthur Prior in the 1960s, as a particular modal logic-based system [33].

    Temporal logic is a branch of logic focusing on propositions whose truth values depend on time. It has been studied and developed by practitioners mainly in the areas of computer science and logic. One main concern is with the study of distributed systems, which are collections of connected autonomous components. The first advances in the use of temporal logics to study distributed systems were made by Pnueli, followed by Sernadas [35, 31]. This logic was also used to study reactive systems, which are systems that maintain ongoing processes, and to study any form of concurrency, which is the property of systems running several processes at the same time.

    Temporal logic provides a formalism for describing the occurrence of events in time that is suitable for reasoning about concurrent programs [23]. When a program runs two processes in parallel, we can't infer the input-output relation based solely on the input-output relations computed by each of the individual processes. The reason for this is that, running in parallel, the processes may interfere with one another, resulting in a different behavior than if they ran alone. Temporal logic was then used to describe the desired behavior or operation of a system, while avoiding references to the method or details of its implementation. In order to analyze systems, Manna and Pnueli used a simplified model in which the executions were restricted to be interleaved [23]. An interleaved execution is one in which at any instant only one process is executing an instruction. Once the instruction was completed, another process would initiate an instruction. Under this model, the execution of the program proceeds as a sequence of discrete steps. The selection of the next process to be executed was made by a scheduler. This, however, raised a problem of fairness. An infinite process could be constantly selected by the scheduler, which could be neglecting another process that was ready to execute an instruction. A fair scheduler would ensure that no process is neglected forever.

    Hence the behavior of a concurrent program is characterized by the set of its fair execution sequences. Temporal logic is then utilized to state properties of the execution sequences of a given program, thus describing properties of the dynamic behavior of the program.

    Another task of great importance is the verification of a program, proving or disproving the correctness of its algorithms. The old approach was to construct a proof by hand, using axioms and inference rules in a deductive system such as temporal logic [23]. These proofs took some effort and brilliance to organize and, due to the complexity of testing validity, mechanical theorem provers were of little help. Clarke and Emerson presented a model-theoretic approach for the case of finite-state concurrent systems that would mechanically determine if the system met a specification expressed in propositional temporal logic [9]. This technique is known as model checking.

    They used a specification language they called computation tree logic (CTL), a propositional, branching-time temporal logic. The global state graph of the concurrent system was viewed as a finite Kripke structure, and an efficient algorithm could be given to determine whether the structure was a model of a particular formula (therefore determining if the program met its specification).

    However, they encountered the problem that with CTL it was not possible to assert that correctness only holds on fair execution sequences [3, 10]. They overcame this problem by moving the fairness requirements into the semantics of CTL.

    When defining temporal logic, there are two possible views regarding the underlying nature of time. One is that time is linear: at each moment there is only one possible future. The other is that time has a branching, treelike nature: at each moment, time may split into alternate courses representing different possible futures [3].

    The modalities of a temporal logic system usually reflect the semantics regarding the nature of time. Thus, in a logic of linear time, temporal modalities are provided for describing events along a single time path. In contrast, in a logic of branching time, the modalities reflect the branching nature of time by allowing quantification over possible futures.

    For verification purposes we are typically interested in properties that hold for all computation paths. It is thus sufficient to pick an arbitrary path and reason about it. Asserting the existence of alternative computation paths requires a branching-time logic like CTL, and so such logics serve a purpose in model checking. We are, however, more concerned with results that will help to verify the correctness of concurrent programs, for which a linear time logic like LTL is generally used. This is the reason why LTL is taken as our base logic.

    As a side note, there is also the language CTL∗, which merges linear time with branching time, allowing linear time assertions prefixed by existential path quantifiers [3].

    The use of branching temporal logics like CTL and CTL∗, and even of linear temporal logics, has been deeply studied for reasoning about concurrent systems [22, 31]. However, there was a need for a precise notion of causality as reflected by the time structures adopted for these logics. This led to the study of partial order temporal logics [32].

    The idea of a partial order temporal logic is that it adopts a local causal perspective, instead of taking a look at the entire system. This is the case of the logics of partially ordered computations introduced by Pinter and Wolper, of interleaving set temporal logics, and of temporal logics for reasoning about distributed transition systems, with product state spaces, occurrence nets and event structures [32].

    In particular, n-agent temporal logics are in the latter category. Event structures are enriched with information about their sequential agents as models of concurrent systems.

    In this approach each individual agent has only a partial view of the whole distributed system at a particular time instant, due to the autonomy of each agent, and therefore communication has an important role in linking up the system. n-agent logics can explicitly distinguish sequential agents in the system, refer to the local viewpoint of each agent, and express communication between agents.

    There are several versions of n-agent logics, with different perspectives on how non-local information can be accessed by each agent. Caleiro and Ehrich proposed a logic which assumes that an assertion about another agent is only possible at a communication point [11]. For example, some logics assume that at each instant the actual information about another agent is the one corresponding to the last communication with it.

    Other n-agent logics consider the existence of a “present knowledge” allowing reference to non-local properties of agents.

    We follow the approach of Caleiro and Ehrich when dealing with concurrent systems [11, 5].

    Recently there has been an increasing interest in probabilistic logic, due to the growing importance of probability in security and in quantum logic. Adding probability features to a base logic has been a research topic for some time [2, 15, 37]. One of the first attempts to combine logic and probability was made by Carnap [7].


    Several approaches to enriching the base languages introduce a probability operator which allows the construction of new formulas and terms from the base formulas, e.g. [1, 34]. From a semantic point of view there are two basic approaches. Either the models of the base logic are modified so that they accommodate the probability component (endogenous approach), or the models are kept as they are and the probabilities are assigned outside the models (exogenous approach).

    The exogenous approach to enriching logics was introduced by Mateus and Sernadas while developing a new approach to quantum logic [26]. However, it has been shown that it can be used to enrich any logic [25].

    The idea is to add probability features to a logic without changing its base structure. The probability language is defined by taking the base formulas as terms and having a probability operator as a term constructor. This approach will be most convenient when we have to deal with languages that have some delicate formulas, as is the case of formulas for communication between agents.

    When using probabilities, there seem to be two kinds of statements that we may want to express. One is what Hacking calls the chance setup, that is, a statement about what one might expect as the result of performing some experiment in a given situation, e.g. “there is probability s of randomly choosing an agent for which formula φ is true” [14].

    The other statement captures what has been called a degree of belief [4]. These statements quantify an unknown property of a situation, e.g. “there is probability s that formula φ of agent A is true”. This refers to a particular agent, and expresses our belief in whether agent A has the said attribute.

    The difference between the two statements is that the first assumes a fixed universe and makes an assertion about that universe. The second implicitly assumes the existence of a number of possibilities, with some probability over these possibilities [16]. The statements that concern us are of the second type, as we want to represent a system’s behavior, allowing for all the possible realities.

    The first statement is connected to the endogenous approach, where the probability is assigned to formulas inside the logic [29, 12]. The second statement is closer to an exogenous concept, which requires that we define a higher-level logic which will be the probability logic. A model for this logic consists of a probability measure on the space of possible valuations of the base logic.

    In this work we take several steps to define an exogenous probabilistic logic for reasoning about concurrent systems. We start by laying the foundations and defining a language for a linear temporal logic LTL with a sound and weakly complete axiomatic system, following the works [19, 35, 22, 31]. Next we follow with the definition of a distributed temporal logic DTL, based on the works [23, 11, 8, 6, 5]. This will be an extension of the previously defined logic LTL to include additional time lines for the components that constitute a system. An axiomatic system is also presented, but we do not show results for completeness, although results of this nature have been shown in [5] for a different deductive system for DTL. We will also show that there is a reduction from DTL to LTL, that is, if we are dealing with a system of only one agent, it can be represented by LTL.

    In the second half of this work we deal with enriching the base logics LTL and DTL for probabilistic reasoning. For this, we start by defining a linear temporal logic with a probability operator, pLTL, that only has probability on classical propositional formulas. Temporal formulas, and actions, which are multi-valued symbols, will not have a probability assigned to them. We present an axiomatic system, following mainly the work [30]. We also prove some results for formula manipulation. Lastly, we show that if all classical formulas have probability 1, then pLTL can be reduced to LTL.

    To enrich DTL with probabilities, we take two steps. The first step is similar to what we did with pLTL, defining a probability operator only for classical propositional formulas. Thus we define a logic pDTLL that defines probabilities at a local level, an exogenous approach that considers a probability measure for each time event.

    The second step for a probabilistic DTL takes a different approach from what was done with pLTL and pDTLL. Instead of a local operator for classical formulas, we follow the works [24, 8, 27, 25] with an exogenous approach that will include all formulas. Probabilities are defined at a global level over the models of DTL, resulting in the logic pDTLG. As a consequence of the exogenous approach, the base structure of DTL is kept intact, allowing probabilities on all formulas, which include classical and temporal formulas, as well as communication and actions. The results we present in this section are mainly about probabilistic reasoning and specific manipulation of formulas, with some focus on actions and communication.


  • Chapter 2

    Temporal logic

    Temporal Logic is used to describe a situation through the passage of time. While classical logic allows for a description of a static situation, temporal logic depicts that situation as time progresses.

    The execution of a program is a series of chained instructions, and therefore may be represented by Temporal Logic. This logic is also a useful tool to describe concurrent systems. It establishes a way to represent sequences of events for each agent without limiting how actions occur, and without the need for global sequences of actions, which would cause problems, as some actions would have to be put on hold instead of being allowed to happen simultaneously. As it is, dependencies of one agent on another agent’s actions are easily established, without the need to queue them to be dealt with. So temporal logic seems an acceptable language for expressing, and reasoning about, parallel programs and, in general, reactive systems.

    2.1 Linear Temporal Logic

    Linear temporal logic adopts the paradigm of linearly ordered, discrete time, where temporal modalities are provided for describing events along a single time path.

    The linear temporal logic defined here is a propositional temporal logic of linear time for only one agent, and serves as a stepping stone to the logics presented in the subsequent chapters.

    2.1.1 Syntax and semantics

    The alphabet of LTL is a pair ⟨Π, Act⟩, where Π is a set of propositional symbols and Act is a set of actions, such that nil ∈ Act.


  • The language LLTL is given by

    LLTL ::= Π | Act | ¬LLTL | LLTL ⇒ LLTL | LLTL U LLTL | ∗

    The ¬ and ⇒ are the usual negation and implication symbols. We denote propositional symbols by p and q, and LTL formulas by φ and ψ. The operator U is the usual (strong) until: a formula φUψ (read φ until ψ) predicts the eventual occurrence of ψ and states that φ holds continuously at least until the (first) occurrence of ψ. The symbol ∗ is a marker for the initial moment and will only be true at the first state of a temporal structure.

    Other logical connectives (⊤, ⊥, ∨, ∧, etc.) and temporal operators can be defined as abbreviations in the usual way. For instance:

    ⊥ ≡ ¬(φ ⇒ φ)          false
    Xφ ≡ ⊥Uφ              tomorrow (next)
    Fφ ≡ ⊤Uφ              sometime in the future
    F◦φ ≡ φ ∨ Fφ          now or sometime in the future
    Gφ ≡ ¬F¬φ             always in the future
    G◦φ ≡ φ ∧ Gφ          now and always in the future
    φWψ ≡ (Gφ) ∨ (φUψ)    weak until, unless
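As a concrete illustration (ours, not part of the thesis), the grammar and the abbreviations above can be encoded as an abstract syntax tree; all constructor and tag names below are our own choices:

```python
# Formulas of L_LTL as nested tuples; constructor names are illustrative.
def Prop(name): return ("p", name)     # propositional symbol from Pi
def Act(name): return ("act", name)    # action symbol from Act
def Not(f): return ("not", f)
def Imp(f, g): return ("imp", f, g)
def U(f, g): return ("until", f, g)    # strong until
Init = ("init",)                       # the initial-moment marker *

# Derived connectives, via the classical definitions:
def Or(f, g): return Imp(Not(f), g)
def And(f, g): return Not(Imp(f, Not(g)))

# Derived temporal operators, exactly as the abbreviations in the text:
p = Prop("p")
Top = Imp(p, p)                        # a tautology
Bot = Not(Top)                         # false: not (phi => phi)
def X(f): return U(Bot, f)             # tomorrow (next)
def F(f): return U(Top, f)             # sometime in the future
def Fo(f): return Or(f, F(f))          # now or sometime in the future
def G(f): return Not(F(Not(f)))        # always in the future
def Go(f): return And(f, G(f))         # now and always in the future
def W(f, g): return Or(G(f), U(f, g))  # weak until, unless

q = Prop("q")
assert X(q) == ("until", Bot, q)       # X phi abbreviates (false U phi)
```

Representing the derived operators as functions over the base constructors makes the fact that they are mere abbreviations explicit: expanding `X(q)` yields a formula built only from ¬, ⇒ and U.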

    Temporal formulas will be interpreted over infinite sequences of states. A state is just a classic boolean valuation for the propositional symbols. In addition, each state will also include an action (the action that is going to take place next).

    To have a sequence of states we index them by the natural numbers, using a function λ : N → 2Π that, given a natural number, returns a valuation of Π. As an abbreviation we write λk instead of λ(k) to represent the valuation of Π in state k.

    Also, each state has an assigned action, a single action per state, given by a function α : N → Act. As an abbreviation we write αk instead of α(k).

    An LTL interpretation structure is a pair η = ⟨λ, α⟩. We define the global satisfaction relation of a formula φ by an interpretation structure η as follows:

    • η ⊨ φ if and only if η, k ⊨ φ for every k;

    and the local satisfaction relation at local states is defined by:

    • η, k ⊨ p if λk(p) = 1;

    • η, k ⊨ act if αk = act;

    • η, k ⊨ ¬φ if η, k ⊭ φ;

    • η, k ⊨ φ1 ⇒ φ2 if η, k ⊭ φ1 or η, k ⊨ φ2;

    • η, k ⊨ φUψ if there is k′′ ∈ N with k < k′′ such that η, k′′ ⊨ ψ and η, k′ ⊨ φ for every k′ with k < k′ < k′′;

    • η, k ⊨ ∗ if k = 0.

    We say that η is a model of Γ ⊆ LLTL if η globally satisfies each of the formulas in Γ. Also, a formula φ is a semantic consequence of Γ, written Γ ⊨ φ, if every model η verifies η ⊨ φ whenever η ⊨ γ for every γ ∈ Γ. Lastly, a formula φ is valid, written ⊨ φ, if we have η ⊨ φ for every model η.
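To make the satisfaction clauses concrete, here is a small illustrative interpreter (ours, not part of the thesis) for the local satisfaction relation, restricted to ultimately periodic interpretation structures given as a finite prefix followed by a loop repeated forever; each state pairs a valuation (the set of true propositional symbols) with an action, and all names are our own:

```python
def sat(prefix, loop, k, phi):
    """eta, k |= phi for the trace prefix + loop^omega.

    phi is a nested tuple: ("p", name), ("act", name), ("not", f),
    ("imp", f, g), ("until", f, g) or ("init",).  U is strict (k < k''),
    as in the text."""
    def state(i):  # the pair (valuation, action) at instant i
        return prefix[i] if i < len(prefix) else loop[(i - len(prefix)) % len(loop)]

    tag = phi[0]
    if tag == "p":
        return phi[1] in state(k)[0]
    if tag == "act":
        return state(k)[1] == phi[1]
    if tag == "not":
        return not sat(prefix, loop, k, phi[1])
    if tag == "imp":
        return not sat(prefix, loop, k, phi[1]) or sat(prefix, loop, k, phi[2])
    if tag == "init":
        return k == 0
    # tag == "until": past the prefix the suffix of the trace repeats, so a
    # witness k'', if one exists, can be found by max(k+1, len(prefix)) + len(loop)
    bound = max(k + 1, len(prefix)) + len(loop)
    return any(sat(prefix, loop, k2, phi[2]) and
               all(sat(prefix, loop, k1, phi[1]) for k1 in range(k + 1, k2))
               for k2 in range(k + 1, bound + 1))

# example: p holds until q, starting from the initial state
prefix = [({"p"}, "nil"), ({"p"}, "send")]
loop = [({"q"}, "nil")]
assert sat(prefix, loop, 0, ("init",)) and not sat(prefix, loop, 1, ("init",))
assert sat(prefix, loop, 1, ("act", "send"))
assert sat(prefix, loop, 0, ("until", ("p", "p"), ("p", "q")))
```

The bound on the witness k′′ is sound because, beyond the prefix, the suffix of the trace at position i coincides with the suffix at i + len(loop), so satisfaction of any formula there is periodic.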

    Now we present a simple example to illustrate the use of the previous definitions.

    Example 2.1.1. 1. Let Γ = {φ}; then ¬(⊤U¬φ) is a semantic consequence of Γ.

    2. The formula ¬(⊤U¬p) is not valid.

    For the purpose of reaching a contradiction, suppose that there is a model η of Γ such that η ⊭ ¬(⊤U¬φ). Then, by negation of the definition of the global satisfaction relation, there must be some k such that η, k ⊭ ¬(⊤U¬φ), that is, by the definition of negation, η, k ⊨ ⊤U¬φ. By the definition of U we get that there is k′′ ∈ N with k < k′′ such that η, k′′ ⊨ ¬φ, and again by the definition of negation, η, k′′ ⊭ φ.

    Since η is a model of Γ, we have η ⊨ φ, which results in η, k ⊨ φ for every k ∈ N. This contradicts the previous conclusion that there must be k′′ such that η, k′′ ⊭ φ. Hence η ⊨ ¬(⊤U¬φ) for every model η of Γ.

    To see that ¬(⊤U¬p) is not a valid formula, consider a model η = ⟨λ, α⟩ such that λ2(p) = 0. We show that η does not satisfy the formula, that is, η ⊭ ¬(⊤U¬p). Supposing by contradiction that η ⊨ ¬(⊤U¬p), then η, k ⊨ ¬(⊤U¬p) for every k ∈ N, that is, η, k ⊭ ⊤U¬p. So, by definition, either there is no k′′ > k such that η, k′′ ⊨ ¬p, or there is some k′ with k < k′ < k′′ such that η, k′ ⊭ ⊤. Since the latter is impossible, there is no k′′ > k such that η, k′′ ⊨ ¬p. We therefore reach a contradiction, since λ2(p) = 0, that is, for k = 0 and k′′ = 2 we have η, k′′ ⊨ ¬p. The formula ¬(⊤U¬p) is not valid.

    We also derive a result that will be used in future sections.

    Lemma 2.1.2. ⊢ Fφ ∧ G(φ ⇒ ψ) ⇒ Fψ.

    Proof. Aiming for a contradiction, suppose there is some model η such that η ⊭ Fφ ∧ G(φ ⇒ ψ) ⇒ Fψ, i.e., for some k we have η, k ⊨ Fφ ∧ G(φ ⇒ ψ) and η, k ⊭ Fψ. Since η, k ⊨ Fφ, we know that for some k′ > k we have η, k′ ⊨ φ, and from η, k ⊨ G(φ ⇒ ψ) we know that η, k′′ ⊨ φ ⇒ ψ for every k′′ > k, in particular for k′′ = k′. Hence we get η, k′ ⊨ ψ, which contradicts η, k ⊭ Fψ.

    The semantics of the other temporal operators can now be derived. We present the derived satisfaction clauses for some of the abbreviations, as they will be useful ahead.

    Lemma 2.1.3. Let η be an LTL model, and k ∈ N. Then:

    • η, k ⊨ Xφ if and only if η, k + 1 ⊨ φ;

    • η, k ⊨ Fφ if and only if η, i ⊨ φ for some i > k;

    • η, k ⊨ F◦φ if and only if η, i ⊨ φ for some i ≥ k;

    • η, k ⊨ Gφ if and only if η, i ⊨ φ for every i > k;

    • η, k ⊨ G◦φ if and only if η, i ⊨ φ for every i ≥ k;

    • η, k ⊨ φWψ if and only if either η, i ⊨ φ for every i > k, or there is j > k such that η, j ⊨ ψ and η, i ⊨ φ for every i with k < i < j.
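The clauses of the lemma can be spot-checked against the abbreviations they abbreviate. The following self-contained sketch (ours, not from the thesis) does this on purely periodic traces loop^ω, where each state is the set of symbols true at that instant; all names are our own:

```python
def holds(loop, k, phi):
    """eta, k |= phi, with phi built from "p", "not", "imp", "until"."""
    L = len(loop)
    tag = phi[0]
    if tag == "p":
        return phi[1] in loop[k % L]
    if tag == "not":
        return not holds(loop, k, phi[1])
    if tag == "imp":
        return not holds(loop, k, phi[1]) or holds(loop, k, phi[2])
    # strict until: some k'' > k with psi, and phi strictly in between;
    # on a purely periodic trace a witness, if any, exists by k + L
    return any(holds(loop, k2, phi[2]) and
               all(holds(loop, k1, phi[1]) for k1 in range(k + 1, k2))
               for k2 in range(k + 1, k + L + 1))

def Not(f): return ("not", f)
def Imp(f, g): return ("imp", f, g)
def U(f, g): return ("until", f, g)

p, q = ("p", "p"), ("p", "q")
Top = Imp(p, p)
Bot = Not(Top)
def X(f): return U(Bot, f)        # tomorrow
def F(f): return U(Top, f)        # sometime in the future
def G(f): return Not(F(Not(f)))   # always in the future

loop = [{"p"}, {"p", "q"}, set()]
for k in range(6):
    # Lemma 2.1.3's direct clauses, checked against the abbreviations;
    # on this 3-periodic trace, quantifying over i in (k, k+3] suffices
    assert holds(loop, k, X(p)) == holds(loop, k + 1, p)
    assert holds(loop, k, F(q)) == any(holds(loop, i, q) for i in range(k + 1, k + 4))
    assert holds(loop, k, G(p)) == all(holds(loop, i, p) for i in range(k + 1, k + 4))
```

On a trace of period 3, the instants k+1, k+2, k+3 cover every residue class, which is why bounded quantification agrees with the unbounded clauses of the lemma.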

    Remark 2.1.4. These notions of global satisfaction consider that the temporal specification applies to all states. This is the floating viewpoint, which considers that φ holds for η if it holds at all states. The alternative is the anchored viewpoint, where φ holds for a model η if it holds at the initial state. This is most natural for programming. Although we adopt the floating viewpoint, the two are interchangeable: an anchored property φ can be expressed in the floating viewpoint by requiring the formula only when the initial-state symbol ∗ holds, that is, as ∗ ⇒ φ; conversely, a floating property φ corresponds to the anchored property G◦φ.
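Writing η ⊨_anch φ for satisfaction at the initial state only (notation ours, not the thesis'), the two translations of the remark can be summarized as:

```latex
\eta \models_{\mathrm{float}} \varphi
  \iff \eta \models_{\mathrm{anch}} \mathsf{G}^{\circ}\varphi ,
\qquad\qquad
\eta \models_{\mathrm{anch}} \varphi
  \iff \eta \models_{\mathrm{float}} ({*} \Rightarrow \varphi).
```

The first equivalence holds because G◦φ at state 0 says exactly that φ holds at every state; the second because ∗ is true only at state 0, making ∗ ⇒ φ vacuous everywhere else.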

    2.1.2 Axiomatization

    The first axiomatic system for linear temporal logic was presented in [20]. It included axioms concerning past operators and “symmetric” axioms for the future clauses. That system was also more extensive, as operators such as G and F were taken as basic for the construction of the language. We, however, are only concerned with the future operators, and some of the axioms in that work are no longer necessary, since our basic constructor is U. We follow [3] and present the formal system ΣLTL for the derivation of formulas:

    Axioms


    taut. all tautologically valid formulas
    ltl1. ¬Xφ ⇔ X¬φ
    ltl2. X(φ ⇒ ψ) ⇒ (Xφ ⇒ Xψ)
    ltl3. G◦φ ⇒ φ ∧ XG◦φ
    until1. φUψ ⇔ Xψ ∨ X(φ ∧ φUψ)
    until2. φUψ ⇒ XF◦ψ
    init1. X¬∗
    just1. ∨̇act∈Act act

    Rules of inference

    mp. φ, φ ⇒ ψ ⊢ ψ
    nex. φ ⊢ Xφ
    ind. φ ⇒ ψ, φ ⇒ Xφ ⊢ φ ⇒ G◦ψ
    init2. ∗ ⇒ G◦φ ⊢ φ

    The “axiom” (taut) covers the axioms for classical propositional logic (we are, however, not interested here in how tautologically valid formulas can be derived). For the axioms (ltl2), (ltl3) and (until2) we point out that these are only implications and not equivalences, even though the corresponding equivalences can be proved valid. The axiom (init1) states that the symbol for the initial state never holds after the first state. The rule (init2) relates a formula to the initial state. The axiom (just1) states that exactly one action is true at every instant.

    The derivability of a formula φ from a set Γ of formulas is denoted by Γ ⊢Σ φ, or Γ ⊢ φ when the formal system Σ is understood from the context. We define derivability, in this case for ΣLTL, inductively by:

    • Γ ⊢ΣLTL φ for every axiom φ;

    • Γ ⊢ΣLTL φ for every φ ∈ Γ;

    • given a rule φ1, ..., φn ⊢ ψ, if Γ ⊢ΣLTL φi for every premise φi (i = 1, ..., n), then Γ ⊢ΣLTL ψ.

    A formula φ is called derivable, denoted ⊢Σ φ or ⊢ φ, if ∅ ⊢ φ.
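The inductive definition above says that a derivation is a sequence in which every formula is an axiom, a hypothesis from Γ, or the conclusion of a rule whose premises occurred earlier. As a sketch (ours, with formulas and rule instances left abstract), a checker for this notion can be written directly:

```python
# An illustrative checker for the inductive definition of derivability:
# lines is a candidate derivation; rule_instances is a set of pairs
# (premises_tuple, conclusion) of concrete rule applications.
def check_derivation(lines, gamma, axioms, rule_instances):
    seen = []
    for phi in lines:
        ok = (phi in axioms or phi in gamma or
              any(conc == phi and all(prem in seen for prem in prems)
                  for prems, conc in rule_instances))
        if not ok:
            return False  # phi is justified by none of the three clauses
        seen.append(phi)
    return True

# toy instance (formulas as plain strings): derive q from the hypothesis p
# and the "axiom" p=>q, using one instance of modus ponens
axioms = {"p=>q"}
gamma = {"p"}
mp = {(("p", "p=>q"), "q")}
assert check_derivation(["p", "p=>q", "q"], gamma, axioms, mp)
assert not check_derivation(["q"], gamma, axioms, mp)
```

The checker mirrors the three clauses of the definition one-for-one; requiring premises to appear strictly earlier in the sequence is what makes the definition inductive.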

    Let us now show the soundness of ΣLTL with respect to the semantics of LTL.

    Theorem 2.1.5. Let φ be a formula and Γ a set of formulas. If Γ ⊢ φ then Γ ⊨ φ.


    Proof. The proof runs by induction on the derivation of φ from Γ. Let η be an arbitrary model.

    ltl1. ¬Xφ ⇔ X¬φ. Let us first see that ¬Xφ ⇒ X¬φ, and for this purpose assume by contradiction that η ⊭ ¬Xφ ⇒ X¬φ. So for some k ∈ N we have η, k ⊭ ¬Xφ ⇒ X¬φ, which means η, k ⊨ ¬Xφ and η, k ⊭ X¬φ. The former is by definition η, k + 1 ⊭ φ, while the latter is η, k + 1 ⊨ φ, which is a contradiction. Conversely, suppose η ⊭ ¬Xφ ⇐ X¬φ. So for some k ∈ N we have η, k ⊭ ¬Xφ and η, k ⊨ X¬φ. Again we reach a contradiction, as by definition we get η, k + 1 ⊨ φ and η, k + 1 ⊭ φ.

    ltl2. X(φ ⇒ ψ) ⇒ (Xφ ⇒ Xψ). Assume by absurd that η ⊭ X(φ ⇒ ψ) ⇒ (Xφ ⇒ Xψ), and so for some k ∈ N we have η, k ⊨ X(φ ⇒ ψ) and η, k ⊭ Xφ ⇒ Xψ. The former leads by definition to η, k + 1 ⊨ φ ⇒ ψ, and the latter to η, k ⊨ Xφ and η, k ⊭ Xψ, and consequently to η, k + 1 ⊨ φ and η, k + 1 ⊭ ψ. Note that this contradicts the implication η, k + 1 ⊨ φ ⇒ ψ. So it must be that η, k ⊨ X(φ ⇒ ψ) ⇒ (Xφ ⇒ Xψ) for all k.

    ltl3. G◦φ ⇒ φ ∧ XG◦φ. Again the proof runs by contradiction. Assume that η ⊭ G◦φ ⇒ φ ∧ XG◦φ, and so for some k ∈ N we have η, k ⊨ G◦φ and η, k ⊭ φ ∧ XG◦φ. This last expression expands to η, k ⊭ φ or η, k ⊭ XG◦φ. Notice that neither of these is compatible with η, k ⊨ G◦φ: by definition η, k ⊨ G◦φ means η, k′ ⊨ φ for every k′ ≥ k, so in particular η, k ⊨ φ; also, if η, k′ ⊨ φ for every k′ ≥ k, then certainly η, k′′ ⊨ φ for every k′′ ≥ k + 1, which is by definition η, k + 1 ⊨ G◦φ, and again by definition η, k ⊨ XG◦φ. A contradiction is reached either way. Ergo, η, k ⊨ G◦φ ⇒ φ ∧ XG◦φ for every k.

    until1. φUψ ⇔ Xψ ∨ X(φ ∧ φUψ). Assume for the purpose of a contradiction that η ⊭ φUψ ⇒ Xψ ∨ X(φ ∧ φUψ), that is, η, k ⊭ φUψ ⇒ Xψ ∨ X(φ ∧ φUψ) for some k ∈ N. The implication unfolds to η, k ⊨ φUψ and η, k ⊭ Xψ ∨ X(φ ∧ φUψ). The former means by definition that there is k′′ with k < k′′ such that η, k′′ ⊨ ψ and η, k′ ⊨ φ for every k′ with k < k′ < k′′. Consider two cases: either k′′ = k + 1 or k′′ > k + 1. In the first case, we conclude that η, k + 1 ⊨ ψ, so by definition η, k ⊨ Xψ. In the second case we have, in particular for k′ = k + 1, that η, k + 1 ⊨ φ, and still that there is a state k′′ > k + 1 where η, k′′ ⊨ ψ and η, k′′′ ⊨ φ for every k′′′ with k + 1 < k′′′ < k′′, which is by definition η, k + 1 ⊨ φUψ. In conclusion, we have η, k + 1 ⊨ φ ∧ φUψ, which by definition is the same as η, k ⊨ X(φ ∧ φUψ). Combining the two cases yields η, k ⊨ Xψ ∨ X(φ ∧ φUψ), which contradicts the initial assumption. The converse implication leads to a similar argument.

    until2. φUψ ⇒ XF◦ψ. Assume in view of a contradiction that η ⊭ φUψ ⇒ XF◦ψ, and so for some k ∈ N we get η, k ⊨ φUψ and η, k ⊭ XF◦ψ. The former means by definition that η, k′′ ⊨ ψ for some k′′ > k, while the latter means η, k′′′ ⊭ ψ for every k′′′ > k, which is a contradiction. Therefore η, k ⊨ φUψ ⇒ XF◦ψ for any k.

    init1. X¬∗. Note that η, k′ ⊨ ∗ if and only if k′ = 0. This is the same as saying that η, k′′ ⊭ ∗ for k′′ ≥ 1. If we write k′′ = k + 1, we get η, k + 1 ⊭ ∗ for every k ≥ 0, which is by definition η, k ⊨ X¬∗.

    mp. φ, φ ⇒ ψ ⊢ ψ. Assume that η ⊨ φ and η ⊨ φ ⇒ ψ. By global satisfaction we have η, k ⊨ φ and η, k ⊨ φ ⇒ ψ for every k ∈ N. From the definition of the implication, either η, k ⊨ ψ or η, k ⊭ φ. Since we have η, k ⊨ φ, which excludes η, k ⊭ φ, it must be that η, k ⊨ ψ for every k, and hence η ⊨ ψ.

    nex. φ ⊢ Xφ. Assume that η ⊨ φ. By definition η, k ⊨ φ for every k ∈ N, in particular for k = 1, 2, .... The previous expression can be written using k = k′ + 1, resulting in η, k′ + 1 ⊨ φ for every k′ ∈ N. Hence by definition we have η, k′ ⊨ Xφ for every k′ ∈ N, which yields η ⊨ Xφ.

    ind. φ ⇒ ψ, φ ⇒ Xφ ⊢ φ ⇒ G◦ψ. Assume that η ⊨ φ ⇒ ψ and η ⊨ φ ⇒ Xφ, and, with the objective of reaching a contradiction, suppose that η ⊭ φ ⇒ G◦ψ. Then for some k ∈ N we have η, k ⊭ φ ⇒ G◦ψ, which is by definition η, k ⊨ φ and η, k′ ⊭ ψ for some k′ ≥ k. From η, k ⊨ φ and the global satisfaction of φ ⇒ Xφ we get, by induction on i, that η, i ⊨ φ for every i ≥ k: indeed, if η, i ⊨ φ then η, i ⊨ Xφ, that is, η, i + 1 ⊨ φ. In particular η, k′ ⊨ φ, and from the global satisfaction of φ ⇒ ψ we conclude η, k′ ⊨ ψ, which contradicts η, k′ ⊭ ψ. Hence it must be that η, k ⊨ φ ⇒ G◦ψ for every k ∈ N.

    init2. ∗ ⇒ G◦φ ⊢ φ. Assume that η ⊨ ∗ ⇒ G◦φ; we show that η ⊨ φ. In particular η, 0 ⊨ ∗ ⇒ G◦φ, and since η, 0 ⊨ ∗ we get η, 0 ⊨ G◦φ, which is by definition η, k′ ⊨ φ for every k′ ≥ 0. Hence η, k ⊨ φ for every k ∈ N, that is, η ⊨ φ.

    We argued above that in derivations within ΣLTL we do not want to bother with how to derive tautologically valid formulas; we simply use them as axioms. Nevertheless, there will still occur purely classical derivation parts where only (taut) and (mp) are used. We abbreviate such parts by using the following rule:

    prop. φ1, ..., φn ⊢ ψ if ψ is a tautological consequence of φ1, ..., φn.

    As an example we note the chain rule

    φ ⇒ ψ, ψ ⇒ ϕ ⊢ φ ⇒ ϕ

    which we will apply from now on in derivations, together with many others, as a rule of the kind (prop).

    Example 2.1.6. We make use of the presented axiomatization to derive an auxiliary result:

    xout. ⊢ Xφ ∧ Xψ ⇒ X(φ ∧ ψ)

    The expanded form of (xout) is ¬(Xφ ⇒ ¬Xψ) ⇒ X¬(φ ⇒ ¬ψ), so this is our goal:
    (1) X(φ ⇒ ¬ψ) ⇒ (Xφ ⇒ X¬ψ)    (ltl2)
    (2) X(φ ⇒ ¬ψ) ⇒ (Xφ ⇒ ¬Xψ)    (prop),(ltl1),(1)
    (3) ¬(Xφ ⇒ ¬Xψ) ⇒ ¬X(φ ⇒ ¬ψ)    (prop),(2)
    (4) ¬(Xφ ⇒ ¬Xψ) ⇒ X¬(φ ⇒ ¬ψ)    (prop),(ltl1),(3)

    Theorem 2.1.7 (Deduction theorem). Let φ, ψ be formulas and Γ a set of formulas. If Γ ∪ {φ} ⊢ ψ, then Γ ⊢ G◦φ ⇒ ψ.


  • Proof. The proof runs by induction on the derivation of ψ from Γ ∪ {φ}.

    1. ψ is an axiom of ΣLTL or ψ ∈ Γ: then Γ ⊢ ψ, and Γ ⊢ G◦φ ⇒ ψ follows with (prop).

    2. ψ ≡ φ: then Γ ⊢ G◦φ ⇒ φ ∧ XG◦φ by (ltl3), and Γ ⊢ G◦φ ⇒ φ follows with (prop).

    3. ψ is a conclusion of (mp) with premises ϕ and ϕ ⇒ ψ: we then have both Γ ∪ {φ} ⊢ ϕ and Γ ∪ {φ} ⊢ ϕ ⇒ ψ. Applying the induction hypothesis, we get Γ ⊢ G◦φ ⇒ ϕ and Γ ⊢ G◦φ ⇒ (ϕ ⇒ ψ), from which Γ ⊢ G◦φ ⇒ ψ follows with (prop).

4. ψ ≡ Xϕ is a conclusion of (nex) with premise ϕ: Then Γ ∪ {φ} ⊢ ϕ, and therefore Γ ⊢ G◦φ ⇒ ϕ by the induction hypothesis. We continue the derivation of G◦φ ⇒ ϕ to a derivation of G◦φ ⇒ Xϕ:

(1) G◦φ ⇒ ϕ    induction hypothesis
(2) X(G◦φ ⇒ ϕ)    (nex),(1)
(3) X(G◦φ ⇒ ϕ) ⇒ (XG◦φ ⇒ Xϕ)    (ltl2)
(4) XG◦φ ⇒ Xϕ    (mp),(2),(3)
(5) G◦φ ⇒ φ ∧ XG◦φ    (ltl3)
(6) G◦φ ⇒ XG◦φ    (prop),(5)
(7) G◦φ ⇒ Xϕ    (prop),(4),(6)

5. ψ ≡ ϕ ⇒ G◦η is a conclusion of (ind) with premises ϕ ⇒ η and ϕ ⇒ Xϕ: As above, we get with the induction hypothesis that G◦φ ⇒ (ϕ ⇒ η) and G◦φ ⇒ (ϕ ⇒ Xϕ) are derivable from Γ, and their derivations can be continued to derive G◦φ ⇒ (ϕ ⇒ G◦η) as follows:

(1) G◦φ ⇒ (ϕ ⇒ η)    induction hypothesis
(2) G◦φ ⇒ (ϕ ⇒ Xϕ)    induction hypothesis
(3) G◦φ ∧ ϕ ⇒ η    (prop),(1)
(4) G◦φ ∧ ϕ ⇒ Xϕ    (prop),(2)
(5) G◦φ ⇒ XG◦φ    (prop),(ltl3)
(6) G◦φ ∧ ϕ ⇒ XG◦φ ∧ Xϕ    (prop),(4),(5)
(7) XG◦φ ∧ Xϕ ⇒ X(G◦φ ∧ ϕ)    (xout)
(8) G◦φ ∧ ϕ ⇒ X(G◦φ ∧ ϕ)    (prop),(6),(7)
(9) G◦φ ∧ ϕ ⇒ G◦η    (ind),(3),(8)
(10) G◦φ ⇒ (ϕ ⇒ G◦η)    (prop),(9)

    6. ψ ≡ φ is a conclusion of (init2) with premise ∗ ⇒ G◦ϕ: Then Γ ∪ {φ} ⊢ ∗ ⇒ G◦ϕ,and therefore Γ ⊢ G◦φ ⇒ (∗ ⇒ G◦ϕ) by ind. hyp. We continue the derivation ofG◦φ⇒ (∗ ⇒ G◦ϕ) to a derivation of G◦φ⇒ ϕ:


(1) G◦φ ⇒ (∗ ⇒ G◦ϕ)    induction hypothesis
(2) G◦φ ∧ ∗ ⇒ G◦ϕ    (prop),(1)
(3) ∗ ⇒ (G◦φ ⇒ G◦ϕ)    (prop),(2)
(4) G◦φ ⇒ G◦ϕ    (init2),(3)
(5) G◦ϕ ⇒ ϕ ∧ XG◦ϕ    (ltl3)
(6) G◦φ ⇒ ϕ    (prop),(4),(5)

Theorem 2.1.8. Let φ, ψ be formulas, and let Γ be a set of formulas. If Γ ⊢ G◦φ ⇒ ψ then Γ ∪ {φ} ⊢ ψ.

Proof. If Γ ⊢ G◦φ ⇒ ψ then also Γ ∪ {φ} ⊢ G◦φ ⇒ ψ. With Γ ∪ {φ} ⊢ φ we get Γ ∪ {φ} ⊢ G◦φ with (nex), (ind) and (prop), and finally Γ ∪ {φ} ⊢ ψ by applying (mp).

    2.1.3 Completeness

The axiom system was based on the one presented in [3], where a completeness proof is also given. The difference is that there G is used as a basic constructor instead of U. Our proof of completeness follows their method closely, but adapts it to be built around U. Parts of their proof become somewhat redundant here, since we prove the results for the “until” operator and have very little concern for the “always” operator.

The proof follows the so-called Henkin-Hasenjaeger method, modified in many details for our situation.

Now, to answer questions related to the completeness of ΣLTL, let us first consider an elucidating situation. Consider the infinite set

    Γ = {φ⇒ ψ,φ⇒ Xψ,φ⇒ XXψ,φ⇒ XXXψ, ...}

of formulas. It is easy to check that

Γ ⊨ φ ⇒ G◦ψ.

If we were dealing with a sound and complete formal system, then φ ⇒ G◦ψ would be derivable from Γ. Any derivation in such a system can only use finitely many assumptions from Γ. So, assuming a derivation of φ ⇒ G◦ψ from Γ, the soundness of the system would imply that φ ⇒ G◦ψ is a consequence of a finite subset of Γ. This is not the case, however. The situation presented shows that

Γ ⊨ φ does not imply Γ ⊢ φ


for the formal system ΣLTL (for arbitrary Γ and φ). In fact, no sound system can achieve this kind of completeness. The argument for incompleteness was based on Γ being infinite. This leads us to consider a weaker notion of completeness.

    Definition 2.1.9. We call a formal system weakly complete if

Γ ⊨ φ implies Γ ⊢ φ, for finite Γ.

We will show that ΣLTL is weakly complete. Since we are restricted to finite sets Γ of assumptions, it suffices (taking Theorem 2.1.7 into consideration) to consider the case Γ = ∅, i.e., to prove that ⊨ φ implies ⊢ φ or, contrapositively, that ⊬ φ implies ⊭ φ. We will show that φ is satisfiable, by constructing a suitable temporal structure, whenever ¬φ cannot be derived in ΣLTL.

Let us begin by introducing some notation. A positive-negative pair (shortly PNP) is a pair P = (Γ+, Γ−) of two finite sets Γ+ and Γ− of formulas. We denote the set Γ+ ∪ Γ− by ΓP. Furthermore, we will sometimes denote Γ+ by pos(P) and Γ− by neg(P). Finally, the formula P̂ will be the abbreviation

P̂ ≡ ⋀_{φ∈Γ+} φ ∧ ⋀_{ψ∈Γ−} ¬ψ

where empty conjunctions are identified with the formula ⊤. PNPs will be used to represent information about the temporal structure under construction; the intuition is that the formulas in Γ+ should be true and those in Γ− should be false at the current state.

A PNP P is called inconsistent if ⊢ ¬P̂. Otherwise, P is called consistent.

    Lemma 2.1.10. Let P = (Γ+,Γ−) be a consistent PNP and φ a formula.

    1. Γ+ and Γ− are disjoint

2. (Γ+ ∪ {φ}, Γ−) or (Γ+, Γ− ∪ {φ}) is a consistent PNP.

Proof. 1. Assume that Γ+ and Γ− are not disjoint and pick φ ∈ Γ+ ∩ Γ−. Then P̂ is of the form ... ∧ φ ∧ ... ∧ ¬φ ∧ ..., which implies that ¬P̂ is tautologically derivable. So ⊢ ¬P̂, which means that P is inconsistent, and a contradiction is reached. Hence, Γ+ and Γ− must be disjoint.

2. If φ ∈ Γ+ or φ ∈ Γ− then we have (Γ+ ∪ {φ}, Γ−) = (Γ+, Γ−) or (Γ+, Γ− ∪ {φ}) = (Γ+, Γ−), respectively, and the assertion follows from the premise that (Γ+, Γ−) is consistent. Otherwise, assume that both pairs under consideration are inconsistent; this implies ⊢ ¬(P̂ ∧ φ) and ⊢ ¬(P̂ ∧ ¬φ). It is easy to obtain ⊢ ¬P̂ with (prop), which again contradicts the consistency of P. Hence (at least) one of the pairs must be consistent.

    Lemma 2.1.11. Let P = (Γ+,Γ−) be a consistent PNP and φ and ψ formulas.

1. ⊥ ∉ Γ+.

2. If φ, ψ, φ ⇒ ψ ∈ ΓP then: φ ⇒ ψ ∈ Γ+ if and only if φ ∈ Γ− or ψ ∈ Γ+.

3. If ⊢ φ ⇒ ψ, with φ ∈ Γ+ and ψ ∈ ΓP, then ψ ∈ Γ+.

Proof. 1. Assume ⊥ ∈ Γ+. Then ⊢ P̂ ⇒ ⊥ by (taut), which is just ⊢ ¬P̂ and contradicts the consistency of P. This proves ⊥ ∉ Γ+.

2. Assume that φ ⇒ ψ ∈ Γ+ but φ ∉ Γ− and ψ ∉ Γ+. Since φ, ψ ∈ ΓP, we get φ ∈ Γ+ and ψ ∈ Γ−. Then ⊢ P̂ ⇒ (φ ⇒ ψ) ∧ φ ∧ ¬ψ, and this yields ⊢ ¬P̂, which is a contradiction. Hence, φ ∈ Γ− or ψ ∈ Γ+. On the other hand, assume that φ ∈ Γ− or ψ ∈ Γ+. If φ ⇒ ψ ∉ Γ+ we must have φ ⇒ ψ ∈ Γ−, and we get ⊢ P̂ ⇒ ¬(φ ⇒ ψ) ∧ ¬φ or ⊢ P̂ ⇒ ¬(φ ⇒ ψ) ∧ ψ. In both cases we again obtain the contradiction ⊢ ¬P̂; hence φ ⇒ ψ ∈ Γ+.

3. Assume that ψ ∉ Γ+. Then ψ ∈ Γ− because ψ ∈ ΓP, and with φ ∈ Γ+ we get ⊢ P̂ ⇒ φ ∧ ¬ψ. Since ⊢ φ ⇒ ψ, we also have ⊢ P̂ ⇒ (φ ⇒ ψ); combining these with (prop) we get ⊢ P̂ ⇒ φ ∧ ¬ψ ∧ (φ ⇒ ψ), and furthermore ⊢ ¬P̂. This contradicts the consistency of P; hence ψ ∈ Γ+.

Let φ be a formula. With φ we associate a set τ(φ) of formulas, inductively defined as follows:

    1. τ (p) = {p} for p ∈ Π.

    2. τ (act) = {act} for act ∈ Act.

    3. τ (⊥) = {⊥}.

    4. τ (φ⇒ ψ) = {φ⇒ ψ} ∪ τ (φ) ∪ τ (ψ).

    5. τ (Xφ) = {Xφ}

    6. τ (G◦φ) = {G◦φ} ∪ τ (φ)

    7. τ (φUψ) = {φUψ} ∪ {Xφ} ∪ {Xψ}


As a result, τ(φ) is the set of “sub-formulas” of φ. Note, however, that formulas of the form Xφ are treated as “indivisible”.

    For a set of formulas Γ we let

    τ (Γ) = {ψ : ψ ∈ τ (φ) , φ ∈ Γ} .

Obviously, τ(τ(Γ)) = τ(Γ), and τ(ΓP) is finite for every PNP P (since ΓP is finite). We call a PNP P complete if τ(ΓP) = ΓP.
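The operator τ is directly computable. A minimal sketch, assuming a hypothetical tuple encoding of formulas: ('prop', name), ('act', name), ('bot',), ('imp', a, b), ('X', a), ('G', a) standing for G◦a, and ('U', a, b).

```python
def tau(phi):
    """The set tau(phi) of "sub-formulas" of phi; X-formulas are indivisible."""
    op = phi[0]
    if op in ("prop", "act", "bot", "X"):   # clauses 1, 2, 3 and 5
        return {phi}
    if op == "imp":                          # clause 4
        return {phi} | tau(phi[1]) | tau(phi[2])
    if op == "G":                            # clause 6
        return {phi} | tau(phi[1])
    if op == "U":                            # clause 7: only the X-guards are added
        return {phi, ("X", phi[1]), ("X", phi[2])}
    raise ValueError(f"unknown constructor {op!r}")

def tau_set(gamma):
    """tau(Gamma) for a set of formulas Gamma."""
    return set().union(*map(tau, gamma)) if gamma else set()
```

Since tuples are hashable, sets of formulas work directly, and the closure property τ(τ(Γ)) = τ(Γ) can be checked by applying tau_set twice.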

    Lemma 2.1.12. Let P be a consistent PNP. There is a consistent and complete PNP P∗

    with pos (P) ⊆ pos (P∗) and neg (P) ⊆ neg (P∗).

Proof. Starting from P, the completion P∗ is constructed by successively adding each φ ∈ τ(ΓP) either to the positive or to the negative set, depending on which of these choices yields a consistent pair. By Lemma 2.1.10.2 this is always possible, and it evidently yields some consistent and complete PNP P∗.

Given a consistent PNP P, we call any PNP P∗ that satisfies the conditions of Lemma 2.1.12 a completion of P. In general, different completions of a given P are possible, but only finitely many.

Lemma 2.1.13. Let P∗1, ..., P∗n be all different completions of a consistent PNP P. Then ⊢ P̂ ⇒ P̂∗1 ∨ ... ∨ P̂∗n.

Proof. We first prove an auxiliary assertion: let Γ be some finite set of formulas and let Q1, ..., Qm be all different PNPs Q with ΓQ = τ(Γ) and such that pos(Q) and neg(Q) are disjoint. Because τ(ΓQ) = τ(τ(Γ)) = τ(Γ) = ΓQ holds for any such Q, all Q1, ..., Qm are complete, and we show by induction on the number of formulas in τ(Γ) that

(∗) ⊢ Q̂1 ∨ ... ∨ Q̂m.

If τ(Γ) = ∅ then m = 1, Q1 = (∅, ∅), and Q̂1 = ⊤, so (∗) holds by (taut). Assume now that τ(Γ) = {φ1, ..., φk} for some k ≥ 1. Clearly there must be some j (where 1 ≤ j ≤ k) such that φj ∉ τ({φ1, ..., φj−1, φj+1, ..., φk}), i.e., φj is a “most complex” formula in τ(Γ); let Γ′ = τ(Γ) \ {φj}. In particular, it follows that τ(Γ′) = Γ′. Let Q′1, ..., Q′l be all PNPs constructed for Γ′ as described. Then m = 2l, and the PNPs Q1, ..., Qm are obtained from Q′1, ..., Q′l as follows:

Q1 = (pos(Q′1) ∪ {φj}, neg(Q′1)),
...
Ql = (pos(Q′l) ∪ {φj}, neg(Q′l)),
Ql+1 = (pos(Q′1), neg(Q′1) ∪ {φj}),
...
Qm = (pos(Q′l), neg(Q′l) ∪ {φj}).

By the induction hypothesis we have ⊢ Q̂′1 ∨ ... ∨ Q̂′l, which yields

⊢ (Q̂′1 ∧ φj) ∨ ... ∨ (Q̂′l ∧ φj) ∨ (Q̂′1 ∧ ¬φj) ∨ ... ∨ (Q̂′l ∧ ¬φj),

i.e., (∗) by (prop).

Let now P be a consistent PNP, and let P′1, ..., P′m be all different PNPs P′ with ΓP′ = τ(ΓP) and such that pos(P′) and neg(P′) are disjoint. The completions P∗1, ..., P∗n are just those P′i which are consistent and for which pos(P) ⊆ pos(P′i) and neg(P) ⊆ neg(P′i). Without loss of generality, we may suppose that these are P′1, ..., P′n, which means that for i > n,

(i) P′i is inconsistent, or
(ii) pos(P) ⊈ pos(P′i) or neg(P) ⊈ neg(P′i).

We obtain ⊢ ¬P̂′i in case (i). In case (ii) we have pos(P) ∩ neg(P′i) ≠ ∅ or neg(P) ∩ pos(P′i) ≠ ∅, and therefore ⊢ ¬(P̂ ∧ P̂′i) by (taut). In either case, we may conclude ⊢ P̂ ⇒ ¬P̂′i with (prop), and this holds for every i > n. With (∗) we obtain ⊢ P̂′1 ∨ ... ∨ P̂′m, and with (prop) we then get ⊢ P̂ ⇒ P̂′1 ∨ ... ∨ P̂′n, which is just the desired assertion.

When we build a completion P∗ of a consistent PNP P, the sub-formulas of the formulas appearing in ΓP that should be true or false in some state are collected in pos(P∗) and neg(P∗), respectively, so that all formulas of pos(P) become true and all formulas of neg(P) become false in that state. Let us illustrate this idea with a little example.

    Example 2.1.14. Suppose φ ≡ ((p1 ⇒ p2) ⇒ G◦p3), ψ ≡ (p3 ⇒ Xp2) with p1, p2, p3 ∈ Πand P = ({φ} , {ψ}). One possible completion of P is

    P∗ = ({φ, p1 ⇒ p2, (G◦p3) , p2, p3} , {ψ, p1, (Xp2)}) .

If all sub-formulas of φ and ψ in pos(P∗) evaluate to true and those in neg(P∗) evaluate to false, then φ becomes true and ψ becomes false; moreover, such a valuation is in fact possible because of the consistency of P∗. However, some of this information, focused on one state, may also have implications for other states. In our example, G◦p3 becomes true in a state only if p3 is true in that state (which is already the case, as p3 belongs to pos(P∗)) and p3 is also true in every future state or, equivalently, G◦p3 is true in the next state. To make Xp2 false we have to make p2 false in the next state.

To “transfer” this information from one state to the next is the purpose of the next construction.

For a PNP P = (Γ+, Γ−) we define the following six sets of formulas:

σ1(P) = {φ : Xφ ∈ Γ+},
σ2(P) = {G◦φ : G◦φ ∈ Γ+},
σ3(P) = {φUψ : φUψ ∈ Γ+ and Xφ ∈ Γ+ and Xψ ∈ Γ−},
σ4(P) = {φ : Xφ ∈ Γ−},
σ5(P) = {G◦φ : G◦φ ∈ Γ− and φ ∈ Γ+},
σ6(P) = {φUψ : φUψ ∈ Γ− and Xφ ∈ Γ+ and Xψ ∈ Γ−},

and the PNP

σ(P) = (σ1(P) ∪ σ2(P) ∪ σ3(P), σ4(P) ∪ σ5(P) ∪ σ6(P)).

Note 2.1.15. The function σ will only be applied to consistent and complete PNPs P∗; this is the reason why σ3 is defined as it is. Although it might be true that φUψ ∈ Γ+, we do not have to consider the case Xψ ∈ Γ+, because it would trivially be processed by σ1, since P∗ is complete.

There are eight different ways a PNP P = (Γ+, Γ−) can distribute the formulas φUψ, Xφ and Xψ; only five of them are consistent, and of those five only two are non-trivial and of concern to us, and thus we treat them with σ3 and σ6.

Example 2.1.16. Returning to Example 2.1.14, we would have σ(P∗) = ({G◦p3}, {p2}), which records what has to be true or false at the next state in accordance with P∗ and the future states.
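The operator σ is a simple computation on the two sets of a PNP. A sketch under the same hypothetical tuple encoding used earlier (('prop', name), ('imp', a, b), ('X', a), ('G', a) for G◦, ('U', a, b)):

```python
def sigma(pos, neg):
    """Transfer operator: returns sigma(P) = (pos', neg') for the PNP
    P = (pos, neg), following the six sets sigma_1, ..., sigma_6 above."""
    pos2, neg2 = set(), set()
    for f in pos:
        if f[0] == "X":                                            # sigma_1
            pos2.add(f[1])
        elif f[0] == "G":                                          # sigma_2
            pos2.add(f)
        elif f[0] == "U" and ("X", f[1]) in pos and ("X", f[2]) in neg:
            pos2.add(f)                                            # sigma_3
    for f in neg:
        if f[0] == "X":                                            # sigma_4
            neg2.add(f[1])
        elif f[0] == "G" and f[1] in pos:                          # sigma_5
            neg2.add(f)
        elif f[0] == "U" and ("X", f[1]) in pos and ("X", f[2]) in neg:
            neg2.add(f)                                            # sigma_6
    return pos2, neg2
```

Running it on the completion P∗ of Example 2.1.14 reproduces σ(P∗) = ({G◦p3}, {p2}) from Example 2.1.16.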

    Lemma 2.1.17. Let P be a PNP.

1. ⊢ P̂ ⇒ Xσ̂(P).

2. If P is consistent then σ(P) is consistent.

Proof. (1) We show that ⊢ P̂ ⇒ Xφ if φ ∈ σ1(P) ∪ σ2(P) ∪ σ3(P), and that ⊢ P̂ ⇒ X¬φ if φ ∈ σ4(P) ∪ σ5(P) ∪ σ6(P). Assertion (1) then follows immediately with (prop) and the law Xψ1 ∧ Xψ2 ⇒ X(ψ1 ∧ ψ2). We distinguish the six cases φ ∈ σi(P), i = 1, ..., 6.

    1. If φ ∈ σ1 then Xφ ∈ pos (P) and therefore ⊢ P̂ ⇒ Xφ by (prop).

2. If φ ≡ G◦ψ ∈ σ2(P) then G◦ψ ∈ pos(P), and therefore ⊢ P̂ ⇒ G◦ψ by (prop), from which we get ⊢ P̂ ⇒ XG◦ψ with (ltl3) and (prop).


3. If φ ≡ ϕUψ ∈ σ3(P) then ϕUψ ∈ pos(P), Xϕ ∈ pos(P) and Xψ ∈ neg(P), and therefore ⊢ P̂ ⇒ ϕUψ ∧ Xϕ ∧ ¬Xψ by (prop), from which we get ⊢ P̂ ⇒ X(ϕUψ) with (until1) and (prop).

4. If φ ∈ σ4(P) then Xφ ∈ neg(P), and therefore ⊢ P̂ ⇒ ¬Xφ by (prop), from which we get ⊢ P̂ ⇒ X¬φ with (ltl1) and (prop).

5. If φ ≡ G◦ψ ∈ σ5(P) then G◦ψ ∈ neg(P) and ψ ∈ pos(P), and therefore ⊢ P̂ ⇒ ψ ∧ ¬G◦ψ by (prop), from which we get ⊢ P̂ ⇒ ¬XG◦ψ with (ltl3) and (prop), and finally ⊢ P̂ ⇒ X¬G◦ψ with (ltl1) and (prop).

6. If φ ≡ ϕUψ ∈ σ6(P) then ϕUψ ∈ neg(P), Xϕ ∈ pos(P) and Xψ ∈ neg(P), and therefore ⊢ P̂ ⇒ ¬(ϕUψ) ∧ Xϕ ∧ ¬Xψ by (prop), from which we get ⊢ P̂ ⇒ ¬X(ϕUψ) with (until1) and (prop), and finally ⊢ P̂ ⇒ X¬(ϕUψ) with (ltl1) and (prop).

(2) Assume that σ(P) is inconsistent, i.e., ⊢ ¬σ̂(P). Using (nex) it follows that ⊢ X¬σ̂(P); hence also ⊢ ¬Xσ̂(P) with (ltl1) and (prop). Together with (1) we infer ⊢ ¬P̂ by (prop), implying that P would be inconsistent.

According to the idea of the proof above, in order to satisfy the formulas of pos(P) and to falsify those of neg(P) of a given consistent PNP P in a state, the infinite sequence

P∗, σ(P∗)∗, σ(σ(P∗)∗)∗, ...

should now carry the complete information about how the sub-formulas should evaluate in their state and in the ones after it. There are, however, two remaining problems: for some element Pi of this sequence there could be a G◦φ ∈ neg(Pi), which means that φ should become false in the corresponding state or in a future state. But, either forced by the consistency constraint or just by having chosen a “bad” completion, φ ∈ pos(Pj) could hold for all elements Pj, j ≥ i, of the sequence. A similar obstacle arises with ϕUψ ∈ pos(Pi), where ψ should become true in some future state, but it could be the case that ψ ∈ neg(Pj) holds for all elements Pj, j ≥ i. In order to overcome these last difficulties we consider all possible completions in every step from one state to the next.

    Formally, let P be a consistent and complete PNP. We define an infinite tree KP :

    • The root of KP is P.

• If Q is a node of KP then the successor nodes of Q are all different completions of σ(Q).

According to our remarks and results above, every node of KP is a consistent and complete PNP. If Q is a node then the sub-tree of KP with root Q is just KQ.


Lemma 2.1.18. Let P be a consistent and complete PNP.

    1. KP has only finitely many different nodes Q1, ..., Qn.

2. ⊢ Q̂1 ∨ ... ∨ Q̂n ⇒ X(Q̂1 ∨ ... ∨ Q̂n).

Proof. (1) From the definitions of the operations σ and τ it follows immediately that all formulas that occur in some node of KP are sub-formulas of the formulas contained in ΓP, of which there are only finitely many. This implies that there can be only finitely many different nodes in KP.

(2) Lemma 2.1.17.1 shows that we have ⊢ Q̂i ⇒ Xσ̂(Qi) for every i = 1, ..., n. Let Q′1, ..., Q′m be all different completions of σ(Qi); then Lemma 2.1.13 proves ⊢ σ̂(Qi) ⇒ Q̂′1 ∨ ... ∨ Q̂′m. The definition of KP implies Q′j ∈ {Q1, ..., Qn}; hence ⊢ Q̂′j ⇒ Q̂1 ∨ ... ∨ Q̂n for every j = 1, ..., m. So we get ⊢ σ̂(Qi) ⇒ Q̂1 ∨ ... ∨ Q̂n with (prop), and hence ⊢ Q̂i ⇒ X(Q̂1 ∨ ... ∨ Q̂n) with (nex), (ltl2) and (prop), for every i = 1, ..., n. From this, assertion (2) follows with (prop).

A finite path (from P1 to Pk) in KP is a sequence P1, ..., Pk of nodes such that Pi+1 is a successor node of Pi for every i = 1, ..., k − 1. An infinite path is defined analogously.

Lemma 2.1.19. Let P be a consistent and complete PNP, P0, P1, P2, ... an infinite path in KP, i ∈ ℕ, and φ a formula.

1. If Xφ ∈ ΓPi then: Xφ ∈ pos(Pi) if and only if φ ∈ pos(Pi+1).

    2. G◦φ ∈ pos (Pi) implies that φ ∈ pos (Pj) for every j ≥ i

Proof. (1) Assume that Xφ ∈ ΓPi. If Xφ ∈ pos(Pi) then φ ∈ pos(σ(Pi)); hence φ ∈ pos(Pi+1). If Xφ ∉ pos(Pi) then Xφ ∈ neg(Pi); hence φ ∈ neg(σ(Pi)), and therefore φ ∈ neg(Pi+1) and φ ∉ pos(Pi+1) with Lemma 2.1.10.1.

(2) Assume that G◦φ ∈ pos(Pi). Then φ ∈ ΓPi, because of φ ∈ τ(G◦φ) and the completeness of Pi. We get φ ∈ pos(Pi) with Lemma 2.1.11.3 and ⊢ G◦φ ⇒ φ, which follows from (ltl3). Moreover, G◦φ ∈ pos(σ(Pi)); hence G◦φ ∈ pos(Pi+1). By induction we may conclude that φ ∈ pos(Pj) for every j ≥ i.

An infinite path in KP is just a sequence of PNPs as in our informal explanation above. However, as explained there, we have to find such a path where every occurrence of some formula G◦φ in some neg(Pi) is eventually followed by a negative occurrence of φ, and likewise every occurrence of some formula ϕUψ in some pos(Pi) is eventually followed


by a positive occurrence of ψ. Formally, let us call an infinite path P0, P1, P2, ... in KP complete if P0 = P and the following conditions hold for every i ∈ ℕ:

If G◦φ ∈ neg(Pi) then φ ∈ neg(Pj) for some j ≥ i.
If ϕUψ ∈ pos(Pi) then Xψ ∈ pos(Pj) for some j ≥ i.

Lemma 2.1.19 and this definition will be seen to ensure the existence of a temporal structure satisfying P̂. It remains to guarantee that a complete path really exists whenever P is consistent and complete.

Lemma 2.1.20. Let P be a consistent and complete PNP. There is a complete path in KP.

    Proof. We first show:

(1*) If Q is some node of KP and φ is some formula such that G◦φ ∈ neg(Q), then KQ contains a node Q′ such that φ ∈ neg(Q′).

(2*) If Q is some node of KP and ϕ, ψ are formulas such that ϕUψ ∈ pos(Q), then KQ contains a node Q′ such that Xψ ∈ pos(Q′).

Assume that φ ∉ neg(Q′) for every node Q′ of KQ. Because of φ ∈ τ(G◦φ) we then have φ ∈ pos(Q), and therefore G◦φ ∈ neg(Q′) for all successor nodes Q′ of Q, according to the construction of σ. Continuing inductively, we find that G◦φ ∈ neg(Q′), φ ∈ pos(Q′), and hence ⊢ Q̂′ ⇒ φ for every node Q′ of KQ. Let Q′1, ..., Q′n be all nodes of KQ. Then ⊢ Q̂′1 ∨ ... ∨ Q̂′n ⇒ φ. Furthermore, by Lemma 2.1.18.2 we have ⊢ Q̂′1 ∨ ... ∨ Q̂′n ⇒ X(Q̂′1 ∨ ... ∨ Q̂′n); so with (ind) we obtain ⊢ Q̂′1 ∨ ... ∨ Q̂′n ⇒ G◦φ. Because of Q ∈ {Q′1, ..., Q′n} we also have ⊢ Q̂ ⇒ Q̂′1 ∨ ... ∨ Q̂′n, and so we get ⊢ Q̂ ⇒ G◦φ by (prop). Because of G◦φ ∈ neg(Q), i.e., ⊢ Q̂ ⇒ ¬G◦φ, this implies ⊢ ¬Q̂ by (prop), which means that Q is inconsistent. This is a contradiction; thus (1*) is proved.

Assume now that Xψ ∉ pos(Q′) for every node Q′ of KQ. Because of Xψ ∈ τ(ϕUψ) we then have Xψ ∈ neg(Q), and therefore ϕUψ ∈ pos(Q′) for all successor nodes Q′ of Q, according to the construction of σ. Continuing inductively, we find that ϕUψ ∈ pos(Q′), Xψ ∈ neg(Q′), and hence ⊢ Q̂′ ⇒ ¬Xψ for every node Q′ of KQ. Let Q′1, ..., Q′n be all nodes of KQ. Then ⊢ Q̂′1 ∨ ... ∨ Q̂′n ⇒ ¬Xψ. Furthermore, by Lemma 2.1.18.2 we have ⊢ Q̂′1 ∨ ... ∨ Q̂′n ⇒ X(Q̂′1 ∨ ... ∨ Q̂′n); so with (ind) we obtain ⊢ Q̂′1 ∨ ... ∨ Q̂′n ⇒ G◦¬Xψ. Because of Q ∈ {Q′1, ..., Q′n} we also have ⊢ Q̂ ⇒ Q̂′1 ∨ ... ∨ Q̂′n, and so we get ⊢ Q̂ ⇒ G◦¬Xψ by (prop); since G◦¬Xψ ⇔ ¬Fψ, we get ⊢ Q̂ ⇒ ¬Fψ with (prop). Furthermore, we obtain ⊢ Q̂ ⇒ ¬(ϕUψ) by (until2) and (prop). Because of ϕUψ ∈ pos(Q), i.e., ⊢ Q̂ ⇒ ϕUψ, this implies ⊢ ¬Q̂ by (prop), which means that Q is inconsistent. This is a contradiction; thus (2*) is proved.


From Lemma 2.1.18.1 we know that KP contains only finitely many different nodes. Since neg(Q) and pos(Q) are finite sets of formulas for every node Q, there can only be finitely many formulas such that G◦φ ∈ neg(Q), and finitely many such that ϕUψ ∈ pos(Q), for some node Q of KP. Choose some fixed enumeration φ0, ..., φm−1 of all such formulas (including both the G◦φ and the ϕUψ formulas). In order to construct a complete path in KP we now define a succession π0, π1, ... of finite and non-empty paths in KP such that πi is a proper prefix of πi+1:

    • Let π0 = P consist only of the root P of KP .

    • Inductively, assume that πi = Q0,Q1, ...,Qk has already been defined.

– If φ_{i mod m} is of the form G◦φ we distinguish two cases: if G◦φ ∉ neg(Qk) or φ ∈ neg(Qk), then πi+1 is obtained from πi by appending some successor node Q′ of Qk in KP. (Lemmas 2.1.12 and 2.1.17 imply that Qk has at least one successor node.) If G◦φ ∈ neg(Qk) and φ ∉ neg(Qk) then, by (1*), KQk contains some node Q′ such that φ ∈ neg(Q′). Choose such a Q′ (which must obviously be different from Qk), and let πi+1 be obtained by appending the path from Qk to Q′ to the path πi.

– If φ_{i mod m} is of the form ϕUψ we distinguish two cases: if ϕUψ ∉ pos(Qk) or Xψ ∈ pos(Qk), then πi+1 is obtained from πi by appending some successor node Q′ of Qk in KP. (Lemmas 2.1.12 and 2.1.17 imply that Qk has at least one successor node.) If ϕUψ ∈ pos(Qk) and Xψ ∈ neg(Qk) then, by (2*), KQk contains some node Q′ such that Xψ ∈ pos(Q′). Choose such a Q′ (which must obviously be different from Qk), and let πi+1 be obtained by appending the path from Qk to Q′ to the path πi.

The succession π0, π1, ... uniquely determines an infinite path π = Q0, Q1, ... with Q0 = P in KP. To see that π is complete, assume that G◦φ ∈ neg(Qi) for some i but that φ ∉ neg(Qi′) for all i′ ≥ i. As in the proof of (1*), it follows that G◦φ ∈ neg(Qi′) for every i′ ≥ i. The formula φ occurs in the enumeration of all formulas of this kind fixed above, say, as φl. Now choose j ∈ ℕ such that π_{j·m+l} = Q0, Q1, ..., Qk where k ≥ i; in particular, it follows that G◦φl ∈ neg(Qk). But the construction of π_{j·m+l+1}, which is a finite prefix of π, ensures that it ends with some node Q′ such that φ ≡ φl ∈ neg(Q′), and a contradiction is reached. We get an analogous contradiction for formulas ϕUψ. We have thus found a complete path π = Q0, Q1, ... in KP.


Now we have in fact all the means for proving a theorem which is a rather trivial transcription of the desired completeness result.

    Theorem 2.1.21. (Satisfiability Theorem for ΣLTL). For every consistent PNP P , theformula P̂ is satisfiable.

Proof. Let P be a consistent PNP, P∗ a completion of P, and P0, P1, P2, ... a complete path in KP∗ according to Lemma 2.1.20. We define an LTL model η = ⟨α, λ⟩ where the labellings are given by λi(p) = 1 if and only if p ∈ pos(Pi), for every p ∈ Π, and αi = act if and only if act ∈ pos(Pi), for i ∈ ℕ. We will prove below that for every formula φ and every i ∈ ℕ:

(*) If φ ∈ ΓPi then: λ, i ⊨ φ if and only if φ ∈ pos(Pi).

Before we prove this, let us show that (*) implies the satisfiability of P̂: because of pos(P) ⊆ pos(P0), neg(P) ⊆ neg(P0) and pos(P0) ∩ neg(P0) = ∅, we get from (*)

λ, 0 ⊨ P̂, or, expanded, λ, 0 ⊨ ⋀_{φ∈pos(P)} φ ∧ ⋀_{ψ∈neg(P)} ¬ψ.

In particular, P̂ is satisfiable. The proof of (*) runs by structural induction on the formula φ.

• φ ≡ p ∈ Π: λ, i ⊨ p if and only if p ∈ pos(Pi), by definition.

• φ ≡ act ∈ Act: λ, i ⊨ act if and only if act ∈ pos(Pi), by definition.


• φ ≡ ϕ ⇒ ψ: If ϕ ⇒ ψ ∈ ΓPi, then also ϕ ∈ ΓPi and ψ ∈ ΓPi, because Pi is a complete PNP, and therefore: λ, i ⊨ ϕ ⇒ ψ if and only if λ, i ⊭ ϕ or λ, i ⊨ ψ, which by the induction hypothesis holds if and only if ϕ ∉ pos(Pi) or ψ ∈ pos(Pi), which by Lemma 2.1.10.1 holds if and only if ϕ ∈ neg(Pi) or ψ ∈ pos(Pi), and by Lemma 2.1.11.2 this holds if and only if ϕ ⇒ ψ ∈ pos(Pi).

• φ ≡ Xψ: From Xψ ∈ ΓPi we obtain ψ ∈ ΓPi+1, and therefore: λ, i ⊨ Xψ if and only if, by definition, λ, i+1 ⊨ ψ, which by the induction hypothesis holds if and only if ψ ∈ pos(Pi+1), which by Lemma 2.1.19.1 holds if and only if Xψ ∈ pos(Pi).

• φ ≡ ϕUψ: If ϕUψ ∈ pos(Pi), the definition of a complete path and Lemma 2.1.10.1 ensure Xψ ∈ pos(Pj) for some j ≥ i, which we take minimal with this property. Though it may not be so obvious, we also have Xϕ ∈ pos(Pk) for all i ≤ k < j, from (prop), (until1) and the consistency of the Pk. By the induction hypothesis we get λ, j+1 ⊨ ψ and λ, k+1 ⊨ ϕ for every i ≤ k < j, where j+1 > i, and by definition λ, i ⊨ ϕUψ.

Before we finally deduce our main result from this theorem, we will still mention that a close look at its proof provides another interesting corollary, called the finite model property (of LTL):

Corollary 2.1.22. Every satisfiable formula is satisfiable by a temporal structure which has only finitely many different states.

To see this fact, assume that a formula φ is satisfiable. From the definition it follows that ¬¬φ is satisfiable; hence ¬φ is not valid and not derivable in ΣLTL by Theorem 2.1.2. So, by definition, the PNP ({φ}, ∅) is consistent, and therefore φ is satisfiable by a temporal structure λ according to (the proof of) Theorem 2.1.21. By construction and Lemma 2.1.18.1, λ has only finitely many different states.

Theorem 2.1.23. (Weak Completeness Theorem for ΣLTL). ΣLTL is weakly complete, i.e., for every finite set Γ of formulas and every formula φ, if Γ ⊨ φ then Γ ⊢ φ. In particular, if ⊨ φ then ⊢ φ.

Proof. We prove the claim first for Γ = ∅: if ⊨ φ then ¬φ is not satisfiable, and hence the PNP (∅, {φ}) is inconsistent by Theorem 2.1.21. This means ⊢ ¬¬φ by definition, and implies ⊢ φ using (prop).

Let now Γ = {φ1, ..., φn} ≠ ∅. We then have Γ ⊨ φ, and this implies

φ1, ..., φn−1 ⊨ G◦φn ⇒ φ    with Theorem 2.1.7
...
⊨ G◦φ1 ⇒ (G◦φ2 ⇒ ... ⇒ (G◦φn ⇒ φ)...)    with Theorem 2.1.7
⊢ G◦φ1 ⇒ (G◦φ2 ⇒ ... ⇒ (G◦φn ⇒ φ)...)    with the result proved above
φ1 ⊢ G◦φ2 ⇒ (G◦φ3 ⇒ ... ⇒ (G◦φn ⇒ φ)...)    with Theorem 2.1.8
...
Γ ⊢ φ    with Theorem 2.1.8

Summarizing, we know from soundness and the Weak Completeness Theorem that

Γ ⊨ φ if and only if Γ ⊢ φ, for finite Γ,

and in particular that

⊨ φ if and only if ⊢ φ.


2.1.4 Decidability

We will present a short description of whether validity in LTL is decidable. In fact, we will point to a method to decide the satisfiability problem, used in [3].

The decision procedure for satisfiability would be based on the same idea as the proof of the Weak Completeness Theorem 2.1.23. Its essence was to construct a satisfying temporal structure for any finite and consistent set of formulas. The tree of PNPs and the operations involved in constructing that tree were carefully chosen to ensure there were only finitely many different nodes. The information contained in the infinite tree can therefore be represented in a finite graph, a tableau. We would just need to identify the identical nodes and represent cycles between them. Eventually there will be nodes that are closed, i.e., that result in some sort of contradiction. This leads to the notion of a tableau being successful.

Ultimately it must be shown that a tableau for a PNP P is successful if and only if P̂ is satisfiable. We will not show this here.

In any case, the mentioned result would provide an algorithmic decision procedure for satisfiability and allow us to prove that the satisfiability and validity problems for LLTL are decidable.

    2.2 Distributed Temporal Logic

The distributed temporal logic DTL is a logic for reasoning about temporal properties of distributed systems. The agents that constitute the system have a sequential execution, and interact with each other by means of synchronous communication. The definition of a DTL model and its semantics are heavily based on [5].

For the previously presented language of LTL we were only concerned with the development of a single situation as time progressed. DTL now introduces several agents in a system; the situation expands to include the intricacies of all the agents, and how each of them evolves and affects the others as time progresses.

    2.2.1 Syntax and Semantics

The syntax of distributed temporal logic is defined over a distributed signature Σ = ⟨Id, {Πi}i∈Id, {Acti}i∈Id⟩ of a system, where Id is a finite set of agents and, for each i ∈ Id, Πi is a set of local state propositions and Acti is a set of local actions. The language LDTL of distributed temporal logic DTL is defined by:

    L ::= @i [Li] | L ⇒ L | ¬L

    28

for i ∈ Id, where the local languages Li are defined as

    Li ::= Πi | Acti | ¬Li | Li ⇒ Li | LiULi | c⃝j [Lj ] | ∗

    with j ∈ Id.

The global formula @i[φ] means that the formula φ holds at the current local state of agent i. A formula of the form c⃝j[φ] ∈ Li is called a communication formula, and means that agent i has just communicated with agent j, for whom φ holds.

To extend our concept of temporal structure, let us consider life-cycles.

A local life-cycle of an agent i is a discrete countable total order λi = ⟨Ei, ≤i⟩, where Ei is the set of local events and ≤i the local order of causality. We define the corresponding local successor relation →i ⊆ Ei × Ei to be the relation such that e →i e′ if e <i e′ and there is no e′′ distinct from e and e′ such that e ≤i e′′ ≤i e′. As a consequence, we have that ≤i = →i∗, i.e., ≤i is the reflexive and transitive closure of →i.

A distributed life-cycle is a family λ = {λi}i∈Id of local life-cycles such that ≤ = (∪i∈Id ≤i)∗ defines a partial order of global causality on the set of all events E = ∪i∈Id Ei. Note that communication is modeled by event sharing, and thus for some event e we may have e ∈ Ei ∩ Ej for i ≠ j.

A local state of an agent i is a finite set ξ ⊆ Ei such that if e ≤i e′ and e′ ∈ ξ, then e ∈ ξ. The set Ξi of all local states of agent i is totally ordered by inclusion and has ∅ as its minimal element. Each non-empty local state ξ of agent i is reached, by the occurrence of the event we call lasti(ξ), from the local state ξ \ {lasti(ξ)}. The local states of each agent are totally ordered as a consequence of the total order on local events. Since they are discrete and well-founded, we enumerate them as follows: ∅ is the 0th state; {e}, where e is the minimum of ⟨Ei, ≤i⟩, is the first state; and in general, if ξ is the kth state of agent i and lasti(ξ) →i e′, then ξ ∪ {e′} is the (k+1)th state of agent i. We denote by ξ^k_i the kth state of agent i. Note that ξ^0_i = ∅ is the initial state and ξ^k_i is the state reached from the initial state after the occurrence of the first k events, so two states are related by inclusion, ξ^m_i ⊆ ξ^n_i, if and only if m ≤ n. Also, ξ^k_i is the only state of agent i that contains k elements, i.e., with |ξ^k_i| = k. Given e ∈ Ei, observe that (e ↓ i) = {e′ ∈ Ei | e′ ≤i e} is always a local state.

An interpretation structure η = ⟨λ, σ, α⟩ consists of a distributed life-cycle λ and families σ = {σi}i∈Id and α = {αi}i∈Id of labeling functions. For each i ∈ Id, σi : Ξi → ℘(Πi) associates a set of local state propositions with each local state, and αi : Ei → Acti associates a local action with each event. We denote ⟨λi, σi, αi⟩ by ηi and define the global satisfaction relation by


• η ⊨DTL @i[φ] if and only if ηi ⊨i φ, i.e., if and only if ηi, ξi ⊨i φ for every ξi ∈ Ξi;

• η ⊨DTL ¬φ if and only if η ⊭DTL φ;

• η ⊨DTL φ ⇒ ψ if and only if η ⊭DTL φ or η ⊨DTL ψ.

The local satisfaction relations at local states are defined by

• ηi, ξi ⊨i p if p ∈ σi(ξi);

• ηi, ξi ⊨i act if ξi ≠ ∅ and αi(lasti(ξi)) = act;

• ηi, ξi ⊨i ¬φ if ηi, ξi ⊭i φ;

• ηi, ξi ⊨i φ ⇒ ψ if ηi, ξi ⊭i φ or ηi, ξi ⊨i ψ;

• ηi, ξi ⊨i φUψ if there exists ξ′i ∈ Ξi such that |ξi| < |ξ′i| and ηi, ξ′i ⊨i ψ and ηi, ξ′′i ⊨i φ for every ξ′′i with |ξi| < |ξ′′i| < |ξ′i|;

• ηi, ξi ⊨i c⃝j[φ] if |ξi| > 0 and lasti(ξi) ∈ Ej and ηj, (lasti(ξi) ↓ j) ⊨j φ;

• ηi, ξi ⊨i ∗ if ξi = ∅.
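Leaving communication formulas aside, the local clauses above can be evaluated directly on a single finite life-cycle. A sketch, assuming a hypothetical tuple encoding of local formulas and representing the kth local state by its index k, with trace[k] a pair (propositions true at the state, action of its last event):

```python
def holds(phi, trace, k):
    """Local satisfaction at state k of a finite life-cycle.
    trace[0] = (set of propositions, None) is the initial state."""
    op = phi[0]
    if op == "prop":                     # state proposition
        return phi[1] in trace[k][0]
    if op == "act":                      # action of the last event
        return k > 0 and trace[k][1] == phi[1]
    if op == "init":                     # the formula *
        return k == 0
    if op == "neg":
        return not holds(phi[1], trace, k)
    if op == "imp":
        return (not holds(phi[1], trace, k)) or holds(phi[2], trace, k)
    if op == "U":                        # strictly-future until, as above
        for k2 in range(k + 1, len(trace)):
            if holds(phi[2], trace, k2) and all(
                holds(phi[1], trace, j) for j in range(k + 1, k2)
            ):
                return True
        return False
    raise ValueError(f"unsupported constructor {op!r}")
```

On the life-cycle trace = [(set(), None), ({'p'}, 'a'), ({'q'}, 'b')], for instance, pUq holds at the initial state, while the action formula a holds exactly at state 1.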

We define Γ ⊨DTL φ to hold if every model η that satisfies η ⊨DTL γ for every γ ∈ Γ also satisfies η ⊨DTL φ. Similarly, for an agent i ∈ Id, local semantic consequence is defined by: Γ ⊨i φ if every model η with ηi ⊨i γ for every γ ∈ Γ also satisfies ηi ⊨i φ. A formula φ is locally valid, written ⊨i φ or, equivalently, ⊨ @i[φ], if ηi ⊨i φ for every model η.

When the agent is understood from the context, we will write ⊨ instead of ⊨i.

    2.2.2 Axiomatization

We present an axiomatization for DTL, obtained mainly by adjusting the axioms we already have for LTL to the new notation. The LTL axioms are valid only locally, hence the need to add the symbol @i to assign an agent and lift them to the global level. We do not prove soundness or completeness results here; for results on those subjects, see [5], where a sound and complete labeled tableaux system is provided for this logic.


Axioms

For every i ∈ Id we have the following set of axioms:

dtl1. @i[¬Xφ ⇔ X¬φ]
dtl2. @i[X(φ ⇒ ψ) ⇒ (Xφ ⇒ Xψ)]
dtl3. @i[G◦φ ⇒ φ ∧ XG◦φ]
untild1. @i[φUψ ⇔ Xψ ∨ X(φ ∧ φUψ)]
untild2. @i[φUψ ⇒ XF◦ψ]
initd1. @i[X¬∗]

Rules of inference

mpd. @i[φ], @i[φ ⇒ ψ] ⊢ @i[ψ]
nexd. @i[φ] ⊢ @i[Xφ]
indd. @i[φ ⇒ ψ], @i[φ ⇒ Xφ] ⊢ @i[φ ⇒ G◦ψ]
initd2. @i[∗ ⇒ G◦φ] ⊢ @i[φ]

Future sections will be based on the DTL language and will contain examples where it is more practical to deal with an anchored viewpoint. We therefore present some results that will be useful in those sections.

Lemma 2.2.1. Given any DTL model η and any agent i, it is true that η ⊩ @i[∗ ⇒ φ] if and only if η, ξ0i ⊩ φ.

Proof. Suppose first that η, ξ0i ⊩ φ and note that also η, ξ0i ⊩ ∗, since by definition ξ0i = ∅. With some propositional reasoning we conclude that η, ξ0i ⊩ ∗ ⇒ φ.

Now note that for any ξi ≠ ∅ it is true that η, ξi ⊩ ∗ ⇒ φ, since the antecedent is always false, that is, η, ξi ⊮ ∗. In conclusion, for every ξi we have η, ξi ⊩ ∗ ⇒ φ and consequently η ⊩ @i[∗ ⇒ φ].

Conversely, suppose η ⊩ @i[∗ ⇒ φ]. This means, by definition, that η, ξi ⊩ ∗ ⇒ φ for every ξi, in particular for ξi = ∅. Since η, ξ0i ⊩ ∗ and η, ξ0i ⊩ ∗ ⇒ φ, we get η, ξ0i ⊩ φ.

Lemma 2.2.2. Given any DTL model η and any agent i ∈ Id, if η, ξki ⊩ φ for some ξki with k > 0, then η ⊩ @i[∗ ⇒ Fφ].

Proof. By the previous Lemma, η ⊩ @i[∗ ⇒ Fφ] if and only if η, ξ0i ⊩ Fφ. By definition, η, ξ0i ⊩ Fφ if there is ξi ⊃ ξ0i such that η, ξi ⊩ φ, which is exactly our case, hence the result.


As we stated before, a DTL system is an extension of LTL: where we had one agent, we now have several that communicate with each other. So if we restrict a DTL model to a single agent, its behavior is that of an LTL model.

To show this we have to deal with the communication formulas, which are not present in LTL. The simple way is to include them in the set of propositional formulas as if they were just more propositional symbols. So consider, for some agent i, the set Π c⃝i = Πi ∪ {c⃝j[φ] : φ ∈ Lj}.

To translate one model from a language into the other we will use a function δ : {ηDTL : ηDTL is a DTL model} → {ηLTL : ηLTL is an LTL model} that maps δ : ⟨λ, σ, α⟩ ↦ ⟨λ′, α⟩.

The local life-cycle λ in the DTL model has a corresponding set of states N in the LTL model. The first DTL state, ξ0, will be the first LTL state 0; the state ξ′ such that ξ0 →i ξ′ will be state 1; the state ξ′′ such that ξ′ →i ξ′′ will be state 2; and so on.

The crucial detail here is the labeling functions. We define λ′ : N → 2^(Π c⃝i) such that λ′(k) = {p : p ∈ σi(ξk)} ∪ {c⃝j[φ] : η, ξk ⊩ c⃝j[φ]}. This way the communication formulas are included in LLTL as if they were just more propositional symbols. The actions suffer no alteration from one model to the other.
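The construction of λ′ can be sketched as follows (the function name and the encoding of communication formulas as plain strings are our illustrative assumptions):

```python
# Sketch: lambda'(k) = sigma_i(xi^k) union the communication formulas
# satisfied at xi^k, treated as extra propositional symbols.

def delta_labeling(sigma, comm_sat):
    """sigma: list of proposition sets per local state; comm_sat: list of
    sets of communication formulas (as strings) holding per local state."""
    return [set(props) | set(comms) for props, comms in zip(sigma, comm_sat)]

sigma = [{"p"}, {"p", "q"}]      # sigma_i(xi^0), sigma_i(xi^1)
comm = [set(), {"(c)_j[r]"}]     # communication formulas holding per state
lam = delta_labeling(sigma, comm)
assert lam[1] == {"p", "q", "(c)_j[r]"}
```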

Proposition 2.2.3. Given φ ∈ LLTL c⃝ and a DTL model η, then η ⊩DTL φ if and only if δ(η) ⊩LTL c⃝ φ.

Proof. The proof follows by induction on the structure of a local formula φ. Let i be an agent and ξi any local state for that agent.

For the basis of induction let φ be a propositional symbol p ∈ Π c⃝i. Starting from η, ξki ⊩DTL β(p), which is the same as η, ξki ⊩DTL p, we know that p ∈ σi(ξk) and, by definition, p ∈ λ′(k), so we have δ(η), k ⊩LTL c⃝ p. Also, if p ∈ λ′(k), then p ∈ σi(ξk), and so if δ(η), k ⊩LTL c⃝ p then η, ξki ⊩DTL p.

Suppose now that φ is p ∈ Π c⃝i but of the form c⃝j[φ]. Then η, ξki ⊩DTL c⃝j[φ] and by definition c⃝j[φ] ∈ λ′(k), so δ(η), k ⊩LTL c⃝ c⃝j[φ]. Also, if c⃝j[φ] ∈ λ′(k), then η, ξk ⊩ c⃝j[φ] by definition, and so if δ(η), k ⊩LTL c⃝ c⃝j[φ] then η, ξki ⊩DTL c⃝j[φ].

Also, if φ is act ∈ Acti, then η, ξki ⊩DTL act if and only if δ(η), k ⊩LTL c⃝ act, since δ(α) = α.

For the induction step consider that:

• φ is ¬ψ. Then η, ξki ⊩DTL ¬ψ if and only if, by definition, η, ξki ⊮DTL ψ, which by the induction hypothesis holds if and only if δ(η), k ⊮LTL c⃝ ψ, that is, δ(η), k ⊩LTL c⃝ ¬ψ.

• φ is φ ⇒ ψ. Suppose that η, ξki ⊮DTL φ ⇒ ψ. It follows that η, ξki ⊩DTL φ and η, ξki ⊮DTL ψ, so by the induction hypothesis and the previous step, δ(η), k ⊩LTL c⃝ φ and δ(η), k ⊮LTL c⃝ ψ, yielding δ(η), k ⊮LTL c⃝ φ ⇒ ψ. The converse is analogous.


• φ is φUψ. The definition of η, ξki ⊩i φUψ is that there is ξni ∈ Ξi such that ξki ⊂ ξni and η, ξni ⊩i ψ, and η, ξmi ⊩i φ for every ξmi such that ξki ⊂ ξmi ⊂ ξni. Note that by definition ξx is the state reached after the occurrence of x events, and so k < m < n. By the induction hypothesis, δ(η), n ⊩ ψ and δ(η), m ⊩ φ, and since k < m < n, by definition δ(η), k ⊩ φUψ.



Chapter 3

    Probabilistic temporal logic

Probabilistic thinking has proved a rather useful tool in the fields of Computer Science and Artificial Intelligence: to generate verbal explanations when the reasoning is numeric, for decision making in unknown environments in AI, or for modeling non-deterministic phenomena such as the running time of algorithms or queuing requests in a database.

Probability comes as a way to express uncertain outcomes. We may argue that we already have, for example, the sometime operator F, which conveys some uncertainty, but its use is limited, as it does not allow us to express a great deal of ideas. We would like to express information of the type "the probability of sometime in the future φ being true is s".

    In this chapter we propose to extend the temporal logics presented previously with aprobabilistic dimension. In doing so we adopt an exogenous approach following the ideaspresented in [25, 27].

    3.1 Probabilistic linear temporal logic

    We start by enriching the language of LTL to include a probability operator. The approachis to only consider the case of probability in classical formulas. This will allow us to expresssimple statements of the form “tomorrow the probability that φ is true is at least s”. Whatwe would like to express, but cannot with this approach, is the statement “the probabilitythat tomorrow φ is true is at least s”. The two statements are very similar, so the limitationseems acceptable and at first glance we don’t seem to be losing too much expressive power.

The probability is given at a local level, with the advantage of having one probability function for each time state, so that the probabilities of a state are independent of the probabilities of previous and future states. Information can be transferred from one state to another via temporal formulas. Note that temporal formulas represent classical formulas that are to be evaluated at some other time state. We don't give probability to


temporal formulas, and can use them to carry probabilities to other states.

    3.1.1 Syntax and semantics

We start by giving the language of a classical propositional logic C over the set Π of propositional symbols, defined by the grammar

    C ::= Π | ¬C | C ⇒ C.

The language LpLTL of Probabilistic Linear Temporal Logic pLTL is defined by:

LpLTL ::= act | ∫≥s C | LpLTL ⇒ LpLTL | ¬LpLTL | LpLTL U LpLTL

where {∫≥s}s∈[0,1]Q is a family of unary probabilistic operators and [0,1]Q is the set of rational numbers in the interval [0, 1]. All previous abbreviations are still in use, with the addition of the following:

D1. ∫>s p ≡ ¬∫≥1−s ¬p
D4. ∫=s p ≡ ∫≥s p ∧ ∫≤s p

We use the operator ∫ for probability. The formula ∫≥s p reads "the probability of p is at least s".

Let F be a set of probability spaces ⟨2^Π, 2^(2^Π), µk⟩, where the sample space 2^Π is the set of valuations for the propositional symbols and the σ-algebra 2^(2^Π) consists of all sets of valuations. The probability functions are thus defined by µk : 2^(2^Π) → [0, 1]. We now define the labeling function λ : N → F that assigns to each state a probability space.

By associating a probability space to each state we obtain the intended transposition from LTL to pLTL. Recall that the corresponding labeling function λ in LTL would match a valuation to each state. In pLTL it matches a probability on valuations. The result is that, instead of a valuation in a state, we will have the probability of each possible valuation for that state.

Example 3.1.1. Suppose that Π = {p} and Act = {nil}. If we are only concerned with a state k ∈ N, there are only two types of LTL models η: either η, k ⊩ p or η, k ⊮ p, that is, there are only two valuations, λk(p) = 1 or λk(p) = 0.


A pLTL model η′ will give a probability to each of these valuations, for example µk(λk(p) = 1) = s and µk(λk(p) = 0) = 1 − s.

We will no longer be concerned with whether p is true or false in state k; what we want to know is the probability of p being true or false. The expressions η, k ⊩ p are replaced by η′, k ⊩ ∫≥s p. A model will satisfy "the probability of p being true is s" instead of satisfying "p".

In the example we limited the set of propositional symbols to a single symbol. In the general case, where Π has more than one element, there will be more than one valuation v that satisfies a propositional symbol p. What this means is that we will be measuring sets of valuations that satisfy p. Such sets will be represented by {v : v ⊩C p}.

Example 3.1.2. Suppose Π = {p, q} and Act = {nil}. For a state k ∈ N there are four possible valuations vi for the pair (p, q): v1 = (0, 0), v2 = (0, 1), v3 = (1, 0), v4 = (1, 1).

For instance, the set {v : v ⊩ p} is represented by {v3, v4}. Suppose we have a probability for {v : v ⊩ p}, e.g. µk({v : v ⊩ p}) = s. By writing this we mean that the set of valuations in state k for which p is true has probability s.
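The measuring of valuation sets in this example can be sketched in Python (the representation of valuations as tuples and of the distribution µk as a dictionary are our assumptions):

```python
# Sketch: a valuation over Pi = {p, q} is a tuple of truth values; mu is a
# distribution over the four valuations, and the probability of p is the
# measure of the set {v : v |= p}.
from itertools import product

Pi = ("p", "q")
valuations = list(product((0, 1), repeat=len(Pi)))   # v1..v4 as in the text

def measure(mu, sat):
    """mu({v : sat(v)}) for a distribution mu over valuations."""
    return sum(mu[v] for v in valuations if sat(dict(zip(Pi, v))))

mu = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
# {v : v |= p} = {v3, v4}, so its measure is 0.3 + 0.4
assert abs(measure(mu, lambda v: v["p"]) - 0.7) < 1e-9
```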

A pLTL model η = ⟨λ, α⟩ will consist of two labeling functions, λ and α, where the first was defined above and α : N → Act assigns an action to each state, as in plain LTL, without probabilities.

We now define the satisfaction relation for pLTL. The global satisfaction relation is defined as η ⊩ φ if and only if η, k ⊩ φ for all k ∈ N.

Local satisfaction is defined by:

• η, k ⊩ ∫≥s p if µk({v : v ⊩C p}) ≥ s;

• η, k ⊩ act if α(k) = act;

• η, k ⊩ ¬φ if η, k ⊮ φ;

• η, k ⊩ φ ⇒ ψ if η, k ⊮ φ or η, k ⊩ ψ;

• η, k ⊩ φUψ if there is k′′ ∈ N with k < k′′ such that η, k′′ ⊩ ψ and η, k′ ⊩ φ for every k′ with k < k′ < k′′.

Entailment is defined as Γ ⊨ φ if every pLTL model η verifies η ⊩ φ whenever η ⊩ γ for every γ ∈ Γ.

Lemma 3.1.3. The probability abbreviations hold:

D1. ∫>s p ≡ ¬∫≥1−s ¬p
D4. ∫=s p ≡ ∫≥s p ∧ ∫≤s p


Analogously we could obtain D1′: ∫>s p ≡ ¬∫≤s p, and also D2′ and D3′.

Proof. Let η be a pLTL model.

D1. Suppose that η ⊩ ∫>s p. Using D1′ we get η, k ⊩ ¬∫≤s p, which is η, k ⊮ ∫≤s p, so it is false that µ({v : v ⊩ p}) ≤ s. The complement of this set is {v : v ⊩ ¬p}, so it must also be false that µ({v : v ⊩ ¬p}) ≥ 1 − s, leading to η, k ⊮ ∫≥1−s ¬p and resulting in η, k ⊩ ¬∫≥1−s ¬p for all k.

D4. Suppose η ⊩ ∫≥s p ∧ ∫≤s p. Then for all k ∈ N, η, k ⊩ ∫≥s p and η, k ⊩ ∫≤s p, which means that µ({v : v ⊩ p}) ≥ s and µ({v : v ⊩ p}) ≤ s, which leads to µ({v : v ⊩ p}) = s and η, k ⊩ ∫=s p for all k.
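A numeric sanity check of D1 (ours, assuming finite additivity so that µ(¬p) = 1 − µ(p)): µ(p) > s holds exactly when µ(¬p) ≥ 1 − s fails.

```python
# Sketch: checking the equivalence behind abbreviation D1 on concrete
# probability values (s and mu(p) restricted to exact binary fractions).

def holds_gt(mu_p, s):        # eta, k |= int_{>s} p
    return mu_p > s

def holds_geq_neg(mu_p, s):   # eta, k |= int_{>=1-s} (not p)
    return (1 - mu_p) >= 1 - s

for mu_p in (0.0, 0.25, 0.5, 1.0):
    for s in (0.0, 0.25, 0.5, 0.75):
        assert holds_gt(mu_p, s) == (not holds_geq_neg(mu_p, s))
```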

    3.1.2 Axiomatization

The axiom system defined below is based on [30].

Axioms

C1. All tautologically valid formulas
C2. ¬Xφ ⇔ X¬φ
C3. X(φ ⇒ ψ) ⇒ (Xφ ⇒ Xψ)
C4. φUψ ⇔ Xψ ∨ X(φ ∧ φUψ)
C5. φUψ ⇒ Fψ
C6. X¬∗
C7. ∫≥0 φ, φ ∈ Π
C8. ∫≤s φ ⇒ ∫<t φ, t > s, φ ∈ Π
C9. ∫<s φ ⇒ ∫≤s φ, φ ∈ Π
C10. (∫≥s φ ∧ ∫≥r ψ ∧ ∫≥1 (¬φ ∨ ¬ψ)) ⇒ ∫≥min(1,s+r) (φ ∨ ψ), φ, ψ ∈ Π
C11. (∫≤s φ ∧ ∫<r ψ) ⇒ ∫<s+r (φ ∨ ψ), s + r ≤ 1, φ, ψ ∈ Π

Rules of inference

R1. φ, φ ⇒ ψ ⊢ ψ
R2. φ ⊢ Xφ
R3. φ ⇒ ψ, φ ⇒ Xφ ⊢ φ ⇒ G◦ψ
R4. ∗ ⇒ G◦φ ⊢ φ

    Note that the axiom system can be divided into three parts dealing with propositional,temporal and probabilistic reasoning, respectively.

Axioms C1–C6 were already covered in LTL, dealing with classical propositional logic and the basic next and until operators of temporal logic.

Axioms C7–C11 capture the probabilistic aspect of pLTL. Axiom C7 states that every formula is satisfied by a set of valuations of measure at least 0.

Axioms C8 and C9 represent some inequality properties, and are equivalent to (C8′) ∫≥t φ ⇒ ∫>s φ, t > s, φ ∈ Π, and (C9′) ∫>s φ ⇒ ∫≥s φ, φ ∈ Π.

Axioms C10 and C11 correspond to the finite additivity of probabilities. The rules R1 and R2 are Modus Ponens and Necessitation, respectively.

Theorem 3.1.4. Let ϕ be a formula and Γ a set of formulas. If Γ ⊢ ϕ then Γ ⊨ ϕ.

Proof. The proof runs by induction on the derivation of ϕ from Γ. Let η be an arbitrary model.

C7. ϕ is ∫≥0 ψ, ψ ∈ Π. By definition, η ⊩ ∫≥0 ψ if for all k ∈ N we have η, k ⊩ ∫≥0 ψ, which by the local satisfaction definition is µ({v : v ⊩C ψ}) ≥ 0. This is always true, since the probability measure takes values in [0, 1].

C8. ϕ is ∫≤s φ ⇒ ∫<t φ, t > s, φ ∈ Π. Suppose by contradiction that η ⊮ ∫≤s φ ⇒ ∫<t φ. Then there is some k ∈ N such that η, k ⊩ ∫≤s φ and η, k ⊮ ∫<t φ, that is, µ({v : v ⊩C φ}) ≤ s and µ({v : v ⊩C φ}) ≥ t. Since t > s, this is impossible, hence the result. The case of C9 is analogous.

C10. ϕ is (∫≥s φ ∧ ∫≥r ψ ∧ ∫≥1 (¬φ ∨ ¬ψ)) ⇒ ∫≥min(1,s+r) (φ ∨ ψ). Suppose by contradiction that η does not satisfy it. This means that there is some k ∈ N such that η, k ⊮ (∫≥s φ ∧ ∫≥r ψ ∧ ∫≥1 (¬φ ∨ ¬ψ)) ⇒ ∫≥min(1,s+r) (φ ∨ ψ), and this is, by definition, η, k ⊩ ∫≥s φ ∧ ∫≥r ψ ∧ ∫≥1 (¬φ ∨ ¬ψ) and η, k ⊮ ∫≥min(1,s+r) (φ ∨ ψ). This results in µ({v : v ⊩C φ}) ≥ s, µ({v : v ⊩C ψ}) ≥ r and µ({v : v ⊩C ¬φ ∨ ¬ψ}) ≥ 1. We need to take notice of some details here. µ({v : v ⊩C ¬φ ∨ ¬ψ}) ≥ 1 means that the negation of ¬φ ∨ ¬ψ, which is φ ∧ ψ, has probability 0. Since {v : v ⊩C φ ∨ ψ} is the union of the disjoint sets {v : v ⊩C φ ∧ ¬ψ}, {v : v ⊩C ¬φ ∧ ψ} and {v : v ⊩C φ ∧ ψ}, only the first two carry any weight in the probability, since the last one has probability 0. We need only notice now that {v : v ⊩C ψ} is the union of the disjoint sets {v : v ⊩C ψ ∧ ¬φ} and {v : v ⊩C ψ ∧ φ}, so µ({v : v ⊩C ψ ∧ ¬φ}) = µ({v : v ⊩C ψ}) ≥ r since, again, {v : v ⊩C ψ ∧ φ} has probability 0; analogously, µ({v : v ⊩C φ ∧ ¬ψ}) = µ({v : v ⊩C φ}) ≥ s. The final conclusion is that µ({v : v ⊩C φ ∨ ψ}) ≥ s + r. The measure has a maximum of 1, so µ({v : v ⊩C φ ∨ ψ}) ≥ min(1, s + r). This contradicts η, k ⊮ ∫≥min(1,s+r) (φ ∨ ψ), hence the initial supposition is wrong.

C11. ϕ is (∫≤s φ ∧ ∫<r ψ) ⇒ ∫<s+r (φ ∨ ψ), with s + r ≤ 1 and φ, ψ ∈ Π. The argument is analogous to the previous case: since {v : v ⊩C φ ∨ ψ} is contained in the union of {v : v ⊩C φ} and {v : v ⊩C ψ}, we get µ({v : v ⊩C φ ∨ ψ}) ≤ µ({v : v ⊩C φ}) + µ({v : v ⊩C ψ}) < s + r. This gives us the upper bound, as each formula is forced to be satisfied by a valuation set of measure at most 1, in line with the definition of the probability measure taking values in [0, 1].

Let us now look at some simple examples that make use of the axiomatic system.

    Example 3.1.5. We want to show that if we have Γ ={´

We show that the following two relations hold:

1. ⊨ ∫=0 (p ∧ q) ⇒ (∫=s (p ∨ q) ⇒ (∫=r p ⇒ ∫=s−r q));

2. ⊨ ∫=1 (p ⇒ q) ⇒ (∫=r p ⇒ ∫≥r q).

Proof. Let η be a pLTL model. Both relations follow by contradiction.

1. To prove this by contradiction, suppose that η ⊮ ∫=0 (p ∧ q) ⇒ (∫=s (p ∨ q) ⇒ (∫=r p ⇒ ∫=s−r q)). So there is a k such that η, k ⊩ ∫=0 (p ∧ q), η, k ⊩ ∫=s (p ∨ q), η, k ⊩ ∫=r p and η, k ⊮ ∫=s−r q. For the set {p, q} there are four possible valuations, v1 = (0, 0), v2 = (0, 1), v3 = (1, 0) and v4 = (1, 1). Since η, k ⊩ ∫=0 (p ∧ q), we know µk({v : v ⊩ p ∧ q}) = µk({v4}) = 0. And since η, k ⊩ ∫=r p, then µk({v : v ⊩ p}) = µk({v3, v4}) = µk(v3) + µk(v4) = r; since µk(v4) = 0, then µk(v3) = r. Now from η, k ⊩ ∫=s (p ∨ q) we know that µk({v : v ⊩ p ∨ q}) = µk(v2) + µk(v3) + µk(v4) = s, which implies that µk(v2) = s − (µk(v3) + µk(v4)) = s − r. Considering now that µk({v : v ⊩ q}) = µk({v2, v4}) = µk(v2) + µk(v4) = s − r, by definition we have η, k ⊩ ∫=s−r q, which is a contradiction. So it must be true that for every k, η, k ⊩ ∫=0 (p ∧ q) ⇒ (∫=s (p ∨ q) ⇒ (∫=r p ⇒ ∫=s−r q)).
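A concrete instance of item 1 (the particular distribution µk is our choice): with µ(p ∧ q) = 0, µ(p ∨ q) = s and µ(p) = r, the measure of q is forced to be s − r.

```python
# Sketch: one distribution over the valuations (p, q) witnessing item 1.
mu = {(0, 0): 0.4, (0, 1): 0.35, (1, 0): 0.25, (1, 1): 0.0}

mu_pq = mu[(1, 1)]                               # mu(p and q) = 0
mu_or = mu[(0, 1)] + mu[(1, 0)] + mu[(1, 1)]     # s = mu(p or q) = 0.6
mu_p = mu[(1, 0)] + mu[(1, 1)]                   # r = mu(p) = 0.25
mu_q = mu[(0, 1)] + mu[(1, 1)]                   # mu(q)
assert mu_pq == 0.0
assert abs(mu_q - (mu_or - mu_p)) < 1e-9         # mu(q) = s - r = 0.35
```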

2. Aiming for a contradiction, suppose that η ⊮ ∫=1 (p ⇒ q) ⇒ (∫=r p ⇒ ∫≥r q). So there is a k such that η, k ⊩ ∫=1 (p ⇒ q), η, k ⊩ ∫=r p and η, k ⊮ ∫≥r q. Considering the valuations v1 = (0, 0), v2 = (0, 1), v3 = (1, 0) and v4 = (1, 1), from η, k ⊩ ∫=1 (p ⇒ q) we know that µk({v : v ⊩ p ⇒ q}) = µk({v1, v2, v4}) = µk(v1) + µk(v2) + µk(v4) = 1, and since µk is a probability measure, µk(v3) = 0. From η, k ⊩ ∫=r p, we know that µk(v3) + µk(v4) = r and so µk(v4) = r. Now η, k ⊮ ∫≥r q is the same as η, k ⊩ ¬∫≥r q, or abbreviated η, k ⊩ ∫<r q. But µk({v : v ⊩ q}) = µk(v2) + µk(v4) = µk(v2) + r ≥ r, which contradicts µk({v : v ⊩ q}) < r. So the relation holds for every k.

Example 3.1.8. Consider the following situation. An agent wants to encrypt a one-bit binary message m. At the start she tosses a perfect coin, with probability one half for each face. After this, she encrypts the message m by the xor operation with the value e obtained from the toss. The result is stored in the ciphered message c. We shall prove that the ciphered message has probability one half of being the original message. For this, consider Π = {m, e, c}, Act = {toss, enc, nil} and the set Γ of formulas

1. ∗ ⇒ Ftoss

2. ∗ ⇒ G◦∫=1 m

3. toss ⇒ X∫=1/2 e

4. toss ⇒ G¬toss

5. toss ⇒ Xenc

6. enc ⇒ G¬enc

7. enc ⇒ ∫=1 (c ⇔ m ⊕ e).

We can conclude that Γ ⊨ (∗ ⇒ F∫=1/2 (c ⇔ m)).

Consider the eight possible valuations for {m, e, c}: v1 = (0, 0, 0), v2 = (0, 1, 0), v3 = (1, 0, 0), v4 = (1, 1, 0), v5 = (0, 0, 1), v6 = (0, 1, 1), v7 = (1, 0, 1) and v8 = (1, 1, 1).
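The intended conclusion can be checked concretely. A small sketch (ours, not part of the thesis), computing the probability that the ciphered bit agrees with the message when m is true with probability 1 and e is a fair coin:

```python
# Sketch: with m fixed and e a fair coin, c = m xor e agrees with m
# exactly when e = 0, so P(c <=> m) = 1/2.
from fractions import Fraction

half = Fraction(1, 2)
prob_c_equals_m = Fraction(0)
for m in (1,):                              # formula (2): mu(m) = 1
    for e, p_e in ((0, half), (1, half)):   # formula (3): a fair toss for e
        c = m ^ e                           # formula (7): c <=> m xor e
        if c == m:
            prob_c_equals_m += p_e
assert prob_c_equals_m == half
```

Exact rationals are used so the equality with 1/2 is not subject to floating-point rounding.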

Consider a model η that satisfies Γ. From (2), and since η, 0 ⊩ ∗, we get that η, 0 ⊩ G◦∫=1 m, so for any k ≥ 0, η, k ⊩ ∫=1 m; note that µk({v : v ⊩ m}) = µk(v3) + µk(v4) + µk(v7) + µk(v8) = 1, so for any k also µk(v1) + µk(v2) + µk(v5) + µk(v6) = 0, since µk is a probability measure. Taking (1) and η, 0 ⊩ ∗, we know that η, 0 ⊩ Ftoss, so for some n it will be true that η, n ⊩ toss, and since the model satisfies (3), then η, n ⊩ X∫=1/2 e, that is, η, n+1 ⊩ ∫=1/2 e. Note that µn+1({v : v ⊩ e}) = µn+1(v2) + µn+1(v4) + µn+1(v6) + µn+1(v8) = 1/2 and µn+1(v2) + µn+1(v6) = 0 (we have seen that this is true for any state k), so it must be that µn+1(v4) + µn+1(v8) = 1/2 and consequently µn+1(v3) + µn+1(v7) = 1/2. Now note that µn+1({v2, v3, v6, v7}) = 1/2 and that {v2, v3, v6, v7} is the set {v : v ⊩ ¬(m ⇔ e)}, which means that µn+1({v : v ⊩ ¬(m ⇔ e)}) = 1/2 and therefore η, n+1 ⊩ ∫=1/2 ¬(m ⇔ e), which is abbreviated to η, n+1 ⊩ ∫=1/2 (m ⊕ e). Now from (5) we get that η, n ⊩ Xenc, by definition η, n+1 ⊩ enc, and considering (7) we conclude η, n+1 ⊩ ∫=1