

Concepts in Theory of Computation and Applications for Automata-Based Modeling

Gilson Antonio Giraldi

LECTURE NOTES - COURSE GA025
NATIONAL LABORATORY FOR SCIENTIFIC COMPUTING

PETRÓPOLIS, RJ - BRASIL
DECEMBER 2005


To my Family

Maria Thereza

Gabriel and Andre

December 16, 2005


Contents

1 Preface
2 Introduction
3 Set Theory
  3.1 Definitions
  3.2 Set Operations
  3.3 Relations and Functions
    3.3.1 Composition of functions
  3.4 Sequences of Sets
  3.5 Cardinality
  3.6 Topological Spaces
  3.7 Exercises
4 Recursive Function Theory and Computability
  4.1 Introduction
  4.2 Peano's Axioms
  4.3 Axioms and Computer Science
  4.4 Recursive Function Theory
  4.5 Recursive Function as a Model of Computation
  4.6 Partial Recursive Functions
  4.7 Exercises
5 Boolean Expressions, Normal Forms and BDD
  5.1 Introduction
  5.2 Truth Tables and Boolean Expressions
  5.3 Algebra of Propositions
  5.4 Normal Forms
  5.5 Satisfiability of Boolean expressions
  5.6 Binary Decision Diagrams (BDD)
  5.7 Applications
  5.8 Exercises
6 Circuit Theory and Computational Models
  6.1 Introduction
  6.2 Circuit Theory
    6.2.1 Threshold Circuits
    6.2.2 Equality Threshold Circuits
  6.3 Exercises
7 Formal Languages and Turing Machines
  7.1 Introduction
  7.2 Basic Concepts in Formal Languages
  7.3 Finite Representation of Languages
  7.4 Deterministic Turing Machines
  7.5 Nondeterministic Turing Machines
  7.6 Deterministic Turing Machines and Languages
  7.7 Extensions of Turing Machines
  7.8 Grammars
  7.9 Exercises
8 Complexity Theory
  8.1 Introduction
  8.2 Complexity Theory
  8.3 Rates of Growth
  8.4 Languages and Problems
    8.4.1 Graph Problems and Languages
    8.4.2 Optimization Problems and Languages
    8.4.3 Integer Partition and Factoring
    8.4.4 Boolean Expressions
  8.5 P versus NP
  8.6 Complexity of Algorithms
  8.7 Elements of Graph Theory
9 Cellular Automata Theory
  9.1 Introduction
  9.2 Cellular Automata
  9.3 Cellular Automata and Computational Theory
  9.4 Algebraic Properties of Cellular Automata
    9.4.1 Finite Fields and Vector Spaces
    9.4.2 Matrix Representation for Cellular Automata
    9.4.3 Polynomial Representation
  9.5 Nondeterminism in Cellular Automata
  9.6 Fractal Patterns and Cellular Automata
    9.6.1 Elements of Continuous Fractal Theory
  9.7 Self-Organization
  9.8 Exercises
10 Lattice Gas Cellular Automata and Fluid Simulation
  10.1 Introduction
  10.2 FHP and Navier-Stokes
    10.2.1 Multiscale Analysis
  10.3 Navier-Stokes Equations
A DNA Computing
  A.1 Structure of DNA
  A.2 Operations on DNA Molecules
    A.2.1 Denaturation and Renaturation
    A.2.2 Lengthening DNA
    A.2.3 Shortening DNA
    A.2.4 Cutting DNA
    A.2.5 Linking DNA
    A.2.6 Multiplying DNA
    A.2.7 Reading the Sequence
  A.3 Beginning of DNA Computing
  A.4 Exercises
B Neural Computation and Circuit Theory
  B.1 Introduction
  B.2 Classical Neural Networks
C Quantum Computation and Circuit Theory
  C.1 Introduction
  C.2 Quantum Computation


List of Figures

5.1 Binary Decision Tree.
5.2 Binary Decision Diagram.
6.1 Basic Gates.
6.2 Circuit example.
7.1 Turing machine.
7.2 Control graph for Turing machine 7.13.
7.3 Control graph for a Nondeterministic Turing machine.
7.4 K-track Turing machine.
7.5 Two-Way Turing machine.
7.6 K-Tape Turing machine: each tape is scanned by its own head.
8.1 Example of directed graph.
9.1 Evolution of a CA given by expression 9.4. In this case, the initial configuration is a finite one-dimensional lattice which has only one site with the value 1 (pictured in black).
9.2 Some examples of Wolfram's classification for one-dimensional r = 1 cellular automata.
9.3 State transition graph corresponding to rule 60.
9.4 Deterministic finite automaton corresponding to rule 60 and its graph representation.
9.5 Minimal deterministic finite automaton corresponding to rule 60.
9.6 Some examples of fractals.
9.7 Evolution of the CA given by rule number 90, generating fractal-like structures.
9.8 Taylor instability and self-organization in continuous systems: (a) Experimental setup. (b) Schematic diagram of the formation of rolls. (c) Trajectory of a fluid particle within a roll.
10.1 The two-particle collision in the FHP.
A.1 The double helix of the DNA structure and the RNA.
A.2 The structure of the nucleotides.
A.3 A 5′-phosphate group is joined with a 3′-hydroxyl group.
A.4 Watson-Crick complementarity.
A.5 Lengthening DNA.
A.6 Adding single tails to both ends of a DNA molecule.
A.7 Degradation due to 3′-nuclease.
A.8 Destroying internal phosphodiester bonds of the DNA.
A.9 Cutting double stranded molecules.
A.10 Simple directed graph.
A.11 Possible paths that can be formed in Adleman's experiment.
B.1 McCulloch-Pitts neuron model.
C.1 Example of quantum circuit with generic quantum gates denoted by U and D.


Chapter 1

Preface

Computational Modeling is a new research area. The tasks in this field basically involve three elements: a Problem, a Mathematical Model and an Algorithm. The Problem may be derived from physical phenomena or from some technological application. The Model should be grounded in mathematical elements, like partial differential equations (PDEs), graph theory, etc. Finally, the Algorithm allows one to convert the Model into a computer program, written in some high-level computer language (C, C++, Fortran, etc.), to run on a suitable computer host.

The theory of computation can help the tasks of algorithm analysis and software development. However, and more important for the following notes, the theory of computation can offer elements for the modeling itself. That is the case when using cellular automata for fluid modeling, for example. Besides, concepts in computational complexity can be used to quantify the resources required to solve a given computational problem. The polynomial versus exponential classification, although coarse, is important to discriminate between efficient and inefficient algorithms. Following this avenue, we finally arrive at the complexity classes P and NP, important subjects for everyone who tries to do computational modeling through discrete mathematics.

The following lecture notes provide an introduction to the theory of computer science for graduate students in computational modeling. This is the main goal of the following pages. I hope these notes achieve this objective.


Chapter 2

Introduction

These lecture notes cover introductory material on the theory of computer science for graduate students involved in modeling problems through discrete mathematics. Therefore, the following chapters cover subjects in set theory, Boolean functions, computability and complexity, as well as automata theory.

A majority of methods for modeling and simulation in natural science use 2D/3D mesh-based approaches that are mathematically motivated by the Eulerian methods of Finite Elements (FE) and Finite Differences (FD), in conjunction with partial differential equations. These works are based on a top-down viewpoint of nature: the system is considered as a continuous one, subject to Newton's laws and conservation laws, as well as state equations connecting the macroscopic variables (pressure P, density ρ, temperature T, etc.).

However, it is possible to change the modeling paradigm to bottom-up models. These are discrete models based on agents that move and interact according to local and simple rules in order to mimic the full dynamics. With this viewpoint, the mathematical background becomes automata theory. As a consequence, concepts in finite fields, graphs, Boolean expressions, and other areas of discrete mathematics arise. In these fields, we may be faced with hard problems for which efficient algorithms are not yet known, despite the efforts of many researchers over the last century.

In order to allow students to travel safely along this avenue of the art of modeling and simulation, it is necessary to offer some background in computability and complexity. That is the goal of the first part of these notes, composed of


Chapters 3, 4, 5, 6, 7 and 8.

Then, we can start with automata-based modeling techniques. Therefore, we cover some topics in cellular automata in Chapter 9. We add to this chapter some considerations about fractal theory, in order to make these notes as self-contained as possible. Finally, lattice gas automata models are considered and their applications to fluid modeling are described (Chapter 10). Appendices A, B and C describe some computational paradigms that can be considered beautiful examples of applications of the concepts studied. We end these notes with a reference list for further investigation by the interested reader.


Chapter 3

Set Theory

In this chapter we review some basic ideas in set theory. This theory offers an efficient background for the formalization and demonstration of important results in the theory of computation. Besides, the notion of cardinality, to be developed in what follows, is fundamental for computability and complexity theories. Thus, we start with the notion of sets and the usual set operations. Then, in Section 3.3, we develop the concepts of relations and functions. Cardinality is discussed in Section 3.5. We end the theory with some considerations about topological spaces. Finally, a list of exercises is proposed.

3.1 Definitions

A set is composed of objects, which are its elements. If an object x belongs to a set A, we formally write [1, 2]:

x ∈ A, (3.1)

otherwise, we say that:

x /∈ A, (3.2)

which means that the element x does not belong to the set A. We can define a set A by explicitly giving all its elements or through a rule

which defines its elements. For instance, in these examples:


A = {1, 2, 3, 4, 5}, (3.3)

B = {x; x is a vowel}, (3.4)

we give explicitly the elements of the set A, but the set B is defined just by a property of its elements.

There are two very special sets. The empty set, denoted by ∅, which has no elements, and the universe set, which can be defined as:

Universe = {x; x is an object}. (3.5)

Observe that in the definition of the Universe set, the term object is used in a general sense; that is, anything we can imagine is an element of the Universe set.

An important relation between sets is the inclusion one:

A ⊂ B ⇔ (a ∈ A ⇒ a ∈ B). (3.6)

Definition: Given two sets A, B we say that A = B if and only if A ⊂ B and B ⊂ A.

The following properties can be easily demonstrated:

1) Reflexive: A ⊂ A, ∀A.
2) Anti-Symmetric: if A ⊂ B and B ⊂ A, then A = B.
3) Transitive: if A ⊂ B and B ⊂ C, then A ⊂ C.

Proof: Exercise.

Given a set X, we denote by P(X) the set of the parts of X (the power set), which is given by:

P(X) = {A; A ⊂ X}. (3.7)

As an example, let us consider the set P(A) where A = {a, b, c}. According to the definition above, it is given by:

P(A) = {∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, A}. (3.8)

Observe that ∅ is a subset of A and, consequently, ∅ ∈ P(A). In fact, it can be proved that:


Property: ∅ ⊂ A, ∀A. (Exercise)
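As an illustration, the power set of a finite set can be computed mechanically. The sketch below (in Python, with helper names of our own choosing) rebuilds P(A) for the example above:

```python
from itertools import combinations

def power_set(xs):
    """Return P(xs): the set of all subsets of xs, each as a frozenset."""
    xs = list(xs)
    return {frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)}

P = power_set({"a", "b", "c"})
assert frozenset() in P        # the empty set belongs to P(A)
assert len(P) == 2 ** 3        # card(P(A)) = 2^card(A), cf. Exercise 13
```

Frozensets are used because Python sets can only contain hashable elements.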

3.2 Set Operations

Given two sets A, B we can define the following operations:

1) A ∪ B = {x; x ∈ A or x ∈ B}.
2) A ∩ B = {x; x ∈ A and x ∈ B}.
3) A − B = {x; x ∈ A and x /∈ B}.
4) Cartesian Product: A × B = {(a, b); a ∈ A and b ∈ B}.

The set A − B is also called the complement of B with respect to A: A − B = C_A B.

The following properties can be demonstrated (Exercise):

a) A ∪ B = B ∪ A.
b) A ∪ (B ∪ C) = (A ∪ B) ∪ C.
c) A ∪ B = A ⇔ B ⊂ A.
d) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
e) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).
f) If B ⊂ E then C_E(C_E B) = B.
g) A ⊂ B ⇔ C_E B ⊂ C_E A.
h) C_E(A ∪ B) = C_E A ∩ C_E B.

Many other properties can be found in [2, 1]. The references [1, 3] also offer interesting discussions about the "paradoxes" of traditional set theory. In addition, reference [4] is another interesting text, with a less formal language, that can be used as an introduction to the philosophical aspects of mathematics in general and set theory in particular.
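These identities are easy to check mechanically on small finite sets. A minimal sketch in Python (the universe E and the subsets A, B, C are arbitrary choices of ours):

```python
# Finite universe E and subsets A, B, C, chosen only for illustration.
E = set(range(10))
A, B, C = {1, 2, 3}, {3, 4, 5}, {5, 6}

def complement(X):
    """C_E X: the complement of X with respect to the universe E."""
    return E - X

assert A | (B & C) == (A | B) & (A | C)                    # property d)
assert A & (B | C) == (A & B) | (A & C)                    # property e)
assert complement(complement(B)) == B                      # property f)
assert complement(A | B) == complement(A) & complement(B)  # property h)
```

Of course, a check on one example is not a proof; the exercises ask for the general demonstrations.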

3.3 Relations and Functions

A function f : A → B has four elements:

a) Domain: the set A.
b) Codomain: the set B.
c) A rule that, for each x ∈ A, gives an element f(x) ∈ B.


d) Range or Image Set: Im = {b ∈ B; ∃x ∈ A such that b = f(x)}. The image set is also denoted by f(A).

Given two sets A, B, we call a relation any subset ∆ ⊂ A × B. A special relation is the graph of a function f, defined as:

G(f) = {(x, y) ∈ A × B; y = f(x)}. (3.9)

The more elementary properties of a function f : A → B are:

a) Surjective (onto): f(A) = B.
b) Injective (one-to-one): f(a) = f(b) ⇒ a = b.
c) Bijective: f is injective and surjective.

Another important concept is the inverse function. If f is bijective then we can define the function f−1, called the inverse function, as follows:

f−1 : B → A; f−1(y) = x ⇔ f(x) = y. (3.10)

Sometimes it is interesting to extend the concept of inverse function to inverse relations. Given a function f : A → B, not necessarily bijective, we can always define the inverse image through the relation,

f−1(B) = {x ∈ A; f(x) ∈ B}. (3.11)

The following properties are given as exercises:

1) f(X ∪ Y) = f(X) ∪ f(Y).
2) f(X ∩ Y) ⊂ f(X) ∩ f(Y).
3) f−1(X ∪ Y) = f−1(X) ∪ f−1(Y).
4) f−1(∅) = ∅.

See [2, 1] for a more complete list of properties.
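Property 2) deserves a concrete example: when f is not injective, the inclusion can be strict. A small Python sketch (the function and the sets are our own choices):

```python
def image(f, S):
    """f(S) = {f(x); x in S}: the image of the set S under f."""
    return {f(x) for x in S}

f = lambda x: x * x        # not injective: f(-1) = f(1)
X, Y = {-1, 0}, {0, 1}

lhs = image(f, X & Y)               # f(X ∩ Y) = {0}
rhs = image(f, X) & image(f, Y)     # f(X) ∩ f(Y) = {0, 1}
assert lhs < rhs                    # the inclusion of property 2) is strict here
assert image(f, X | Y) == image(f, X) | image(f, Y)   # property 1) still holds
```

The point -1 ∈ X and 1 ∈ Y map to the same value 1, which lands in f(X) ∩ f(Y) even though X ∩ Y = {0}.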

3.3.1 Composition of functions

Given two functions f : A → B and g : B → C, such that the domain of g is the codomain of f, we can define the composite function g ∘ f : A → C as follows:

g ∘ f (x) = g(f(x)), ∀x ∈ A. (3.12)

The composition is associative. In fact, given three functions f, g and h, we can show from the above definition that:


g ∘ (f ∘ h) = (g ∘ f) ∘ h. (3.13)

From these definitions, it is possible to show the following property: the set Φ = {f : A → A; f is bijective} is a group with respect to the composition operation (Exercise).
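The associativity in equation (3.13) can be observed directly in code; a sketch with arbitrary sample functions of ours:

```python
def compose(g, f):
    """Return the composite g ∘ f, i.e. the map x -> g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 1
g = lambda x: 2 * x
h = lambda x: x - 3

for x in range(10):
    # g ∘ (f ∘ h) and (g ∘ f) ∘ h agree pointwise, as in (3.13)
    assert compose(g, compose(f, h))(x) == compose(compose(g, f), h)(x)
```

This only verifies (3.13) pointwise on samples; the general proof follows directly from definition (3.12).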

3.4 Sequences of Sets

Let L be a set whose elements we will call indexes. Given a set X, we call a sequence of elements of X, with indexes in L, a function f : L → X. If we have a sequence of sets A_λ then we usually write the sequence as (A_λ)_{λ∈L}.

Now, we can extend the operations of union and intersection to sequences:

∪_{λ∈L} A_λ = {x; ∃λ such that x ∈ A_λ}, (3.14)

∩_{λ∈L} A_λ = {x; x ∈ A_λ, ∀λ}. (3.15)

3.5 Cardinality

For finite sets, the cardinality of a set is its number of elements. For infinite sets, we can say that cardinality is a measure that compares the sizes of sets. We can show that two finite sets A and B have the same number of elements by constructing a bijective function f : A → B. This approach, comparing the sizes of sets through bijections (mappings), can be used for both finite and infinite sets. Therefore, we can state that:

Definition:

(i) Two sets A and B have the same cardinality if there is a bijective function f : A → B.

(ii) The cardinality of a set X is less than or equal to the cardinality of a set Y if there is an injective function f : X → Y.

We denote the cardinality of a set A by card(A). So, the relationships (i) and (ii) are denoted by card(A) = card(B) and card(X) ≤ card(Y), respectively. Besides, we say that card(X) < card(Y) if card(X) ≤ card(Y) and card(X) ≠ card(Y).

A set that has the same cardinality as the set of natural numbers is said to be countably infinite or denumerable. The term countable refers to sets that are either finite or denumerable.

The cardinality of a set is also called a cardinal number. When working with cardinal numbers, some amazing facts happen. For example, we can show that the cardinality of the Cartesian product N × N is the same as that of N, and that the cardinality of N is the same as that of the set Z of all integers (Exercise).
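The bijection behind card(N × N) = card(N) can be made explicit. One classical choice is the Cantor pairing function, which enumerates pairs diagonal by diagonal; the sketch below (helper names are ours) verifies invertibility on a finite range:

```python
def cantor_pair(m, n):
    """Cantor pairing: a bijection N x N -> N, enumerating pairs by diagonals."""
    return (m + n) * (m + n + 1) // 2 + n

def cantor_unpair(z):
    """Inverse of cantor_pair: locate the diagonal of z, then the offset."""
    w = 0
    while (w + 1) * (w + 2) // 2 <= z:   # largest w with w(w+1)/2 <= z
        w += 1
    n = z - w * (w + 1) // 2
    return w - n, n

# Distinct pairs get distinct codes, and decoding inverts encoding.
codes = {cantor_pair(m, n) for m in range(20) for n in range(20)}
assert len(codes) == 400
assert all(cantor_unpair(cantor_pair(m, n)) == (m, n)
           for m in range(20) for n in range(20))
```

A finite check is not a proof of bijectivity, of course; Exercise 10 asks for the explicit argument.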

In this section we followed, more or less, the presentation found in [5]. The interested reader will find interesting comments about the history and theory behind this field of mathematics in [6].

3.6 Topological Spaces

A topology over a set E is a set Θ ⊂ P(E) such that [7]:

e1) Let I be an index set. If O_i ∈ Θ for all i ∈ I, then the union O = ∪_{i∈I} O_i is such that O ∈ Θ;

e2) O_1, O_2 ∈ Θ ⇒ O = O_1 ∩ O_2 is such that O ∈ Θ;

e3) E ∈ Θ.

The pair (E, Θ) is called a topological space. The elements of E are called points and the elements of Θ are called open sets. A subset A ⊂ E is called closed if its complement is open.

We must remember that:

∪_{i∈∅} O_i = ∅,

and so, the empty set ∅ belongs to Θ.


Given a point p ∈ E, we call a neighborhood of p any set V_p that contains an open set O such that p ∈ O. A topological space (E, Θ) is separated, or Hausdorff, if every two distinct points have disjoint neighborhoods; that is, if given any distinct points p ≠ q, there are neighborhoods V_p and V_q such that V_p ∩ V_q = ∅.

It is possible to define the concept of continuity in topological spaces. Given two topological spaces E_1, E_2 and a function f : E_1 → E_2, we say that f is continuous at a point p ∈ E_1 if for every neighborhood V_{f(p)} there is a neighborhood V_p such that f(V_p) ⊂ V_{f(p)}.
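For a finite set E, the axioms e1)-e3) can be checked by brute force. A sketch (the candidate topologies below are hypothetical examples of ours; for a finite family, closure under pairwise unions and intersections suffices, by induction on the number of sets):

```python
from itertools import combinations

def is_topology(E, Theta):
    """Check axioms e1)-e3) for a finite candidate topology Theta on the set E."""
    E = frozenset(E)
    Theta = {frozenset(O) for O in Theta}
    # e3) E ∈ Θ; the empty union also forces ∅ ∈ Θ (see the remark above).
    if E not in Theta or frozenset() not in Theta:
        return False
    for O1, O2 in combinations(Theta, 2):
        if O1 | O2 not in Theta:   # e1), pairwise unions (enough for finite families)
            return False
        if O1 & O2 not in Theta:   # e2)
            return False
    return True

E = {1, 2, 3}
assert is_topology(E, [set(), {1}, {1, 2}, {1, 2, 3}])
assert not is_topology(E, [set(), {1}, {2}, E])   # {1} ∪ {2} = {1, 2} is missing
```

The failing example shows axiom e1) doing real work: adding {1, 2} to the family would repair it.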

3.7 Exercises

1. Prove properties 1), 2) and 3) of the inclusion relation.

2. Prove that ∅ ⊂ A, ∀A.

3. Demonstrate the following properties:

a) A ∪ B = B ∪ A.
b) A ∪ (B ∪ C) = (A ∪ B) ∪ C.
c) A ∪ B = A ⇔ B ⊂ A.
d) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
e) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).
f) If B ⊂ E then C_E(C_E B) = B.
g) A ⊂ B ⇔ C_E B ⊂ C_E A.
h) C_E(A ∪ B) = C_E A ∩ C_E B.

4. Prove the following properties:

a) f(X ∪ Y) = f(X) ∪ f(Y).
b) f(X ∩ Y) ⊂ f(X) ∩ f(Y).
c) f−1(X ∪ Y) = f−1(X) ∪ f−1(Y).
d) f−1(∅) = ∅.


5. Show that the set Φ = {f : A → A; f is bijective} is a group with respect to the composition operation.

6. Prove that a topological space is separated (Hausdorff) if, and only if, the intersection of all closed neighborhoods of any point x is the set {x}.

7. What happens if we state the axiom e2) in the form: O = ∩_{i∈I} O_i is such that O ∈ Θ?

8. What is the usual topology of ℝ^n?

9. The set Φ = {f : A → A; f is bijective} is a group with respect to the composition operation (see [8] for the definition of a group).

10. Show that card(N × N) = card(N) and that card(N) = card(Z), where Z is the set of integers. Find the explicit form of the corresponding bijection.

11. Show that the cardinality of the real numbers is different from the cardinality of N: card(ℝ) ≠ card(N).

12. Prove that the union of two countable sets is countable. Generalize this result to a union of a finite number of countable sets.

13. Show that, for a finite set A, card(P(A)) = 2^card(A).

14. Prove that the set of finite subsets of a countable set is countable.

15. Discuss the statement: If S ⊂ N then S is countable. Can you prove it?

16. Is it true that:

∪_{j=1}^{∞} ( ∩_{i=1}^{∞} A_ij ) = ∩_{i=1}^{∞} ( ∪_{j=1}^{∞} A_ij ),

for any family (A_ij)_{(i,j)∈N×N} of sets?


17. Let us consider the family of intervals A_n ⊂ ℝ defined by:

A_n = [0, 1/n), n ∈ N.

Using the usual topology in ℝ, show that

∩_{n=1}^{∞} A_n = {0}.

18. A function f : E_1 → E_2 is continuous in the topological space E_1 if it is continuous at every point p ∈ E_1. Show that this is equivalent to saying that f is continuous if, given any open set B ⊂ E_2, the set A = f−1(B) is an open set of E_1.


Chapter 4

Recursive Function Theory and Computability

4.1 Introduction

In this material we will focus on two alternative approaches to the classification of computable functions: the machine-based one and the functional one. The former is based on the Turing Machine framework, while the latter sets out to capture the concept of computability by considering the class of functions over N that result from a set of basic functions and functional operations. In section 4.4 we review the basic elements underlying this approach, developed by Gödel and Kleene and called Recursive Function Theory. Before that presentation, some concepts about recursion procedures shall be discussed. At the end of this chapter there is a list of proposed exercises.

4.2 Peano's Axioms

Consider the set N of undefined objects called natural numbers, and a function s : N → N, where s(n) is called the successor of n. The function s satisfies the following axioms [2]:

P1) The function s is injective.


P2) N − s(N) has only one element.

P3) Induction Principle: If X ⊂ N is a subset such that 0 ∈ X and, for all n ∈ X, we have s(n) ∈ X, then X = N.

The Induction Principle can also be stated in another way which is more suitable in practice. Before re-writing it, let us consider any property P concerning natural numbers as a function:

P : N → {0, 1},

such that P(n) = 1 if P is true for a given n and P(n) = 0 otherwise. So, we can re-write the axiom P3) in the following way:

Induction Principle: Let us consider P as a property related to natural numbers. If P(0) = 1 and, starting from the hypothesis that P(k) = 1, it is possible to show that P(k + 1) = 1 as well, then we can conclude that P(n) = 1, ∀n ∈ N.

Demonstration by Mathematical Induction: any demonstration in which the Induction Principle is applied. In this case, the demonstration follows these steps:

(a) We show that P(0) = 1.

(b) As the inductive hypothesis, we assume that P(k) = 1. Then, using this hypothesis, we try to prove that P(k + 1) = 1 as well.

(c) From (a), (b), and by the Induction Principle, we conclude that P(n) = 1, ∀n ∈ N.

Example: Prove by mathematical induction that:

(a − 1)(1 + a + a^2 + ... + a^n) = a^(n+1) − 1.
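Before proving the identity by induction, one can sanity-check it numerically; a quick sketch:

```python
def geometric_sum(a, n):
    """Compute 1 + a + a^2 + ... + a^n."""
    return sum(a ** k for k in range(n + 1))

# The identity to be proved: (a - 1)(1 + a + ... + a^n) = a^(n+1) - 1,
# checked here on a small grid of integer values.
assert all((a - 1) * geometric_sum(a, n) == a ** (n + 1) - 1
           for a in range(2, 10) for n in range(10))
```

Such a check does not replace the induction argument, but it catches mis-stated identities early.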

Theorem: The set N is well-ordered; that is, every nonempty subset of N has a least element.

4.3 Axioms and Computer Science

Peano's Axioms, together with the set N, compose a formal (axiomatic) system. We can say that in an axiomatic system there are objects, whose existence is assumed, and a finite number of statements about the objects, the axioms [6].

We always hope that an axiomatic system has the following properties:


(A1) Complete: for every statement S, at least one of S, ∼S (its negation) is a theorem.

(A2) Consistent: for every statement S, at most one of S, ∼S is a theorem.

Until the beginning of the XX century it was believed that we could solve any well-defined problem. So, if anyone did not succeed in demonstrating some property, it was expected that the chosen hypotheses were not suitable or that some mistake had been made. This belief in the axiomatic method was the starting point for Hilbert's work related to computer science. Hilbert asked whether or not there existed some algorithm which could be used, in principle, to solve all the problems of mathematics. He expected that the answer to this question would be yes. However, the answer to Hilbert's problem turned out to be no: there is no algorithm to solve all mathematical problems. This demonstration was due to Church and Turing and is a remarkable work in the theory of algorithms and, consequently, in the modern theory of computer science (see [9], Chapter 3).

Besides, the development of a precise formulation of the concept of mathematical proof culminated with Kurt Gödel's results on completeness and consistency in formal theories. These establish the existence, in any sufficiently powerful theory, of true mathematical statements that cannot be proved within the theory. Gödel's work was also motivated by Hilbert's problem. In 1931 Gödel published his famous Incompleteness Theorem, which essentially shows that no set of axioms for such a theory can have both properties (A1) and (A2) above; and thus, Hilbert's algorithm could not be written.

The interested reader should see references [9], Chapter 3, and [10] to complete this discussion.

4.4 Recursive Function Theory

In computer science, as in other areas, many of the sets involved are infinite. Thus, we must be able to define an infinite set in a manner that allows its members to be efficiently constructed and manipulated. This can be done recursively.

A recursive definition of a set X specifies a method for constructing the elements of the set by using two components: the basis elements and a finite set of rules or operations. The basis consists of a set of elements that are explicitly given as members of the set X. The rules are used to construct new elements of the


set from the previously defined members. The reader will certainly remember the recursive programming technique, in which a computational procedure calls itself. The link between such a programming technique and the recursive definition is that we can implement the latter by using the former.

For instance, we can recursively define the set N, following Peano's Axioms, through the following scheme:

i) Basis: 0 ∈ N,

ii) Recursive Step: If n ∈ N, then s (n) ∈ N,

iii) Closure: n ∈ N only if it can be obtained from 0 by a finite number of applications of the operation s.

More interesting recursive definitions can be found in the context of formal languages. So, let X be an alphabet, for example, X = {a, b, c, d, ..., x, y, z}. A string over X is a finite sequence of elements from X. Strings are the fundamental objects used in the definition of languages. In order to establish the properties of strings, the set of strings over an alphabet is defined recursively. The basis consists of the string that contains no elements, the null string, denoted by λ. The definition is as follows:

Definition: Let Σ be an alphabet. The set Σ∗, of strings over Σ, is defined by:

i) Basis: λ ∈ Σ∗,

ii) Recursive step: If w ∈ Σ∗ and a ∈ Σ then wa ∈ Σ∗,

iii) Closure: w ∈ Σ∗ only if it can be obtained from λ by a finite number of applications of the operation in ii).
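This recursive definition translates directly into a procedure that enumerates Σ∗ by string length. A small illustration in Python (the function name is ours):

```python
def strings_up_to(sigma, max_len):
    """Enumerate the strings of Sigma* of length <= max_len, following
    the recursive definition: the basis is the null string (lambda),
    and the recursive step appends one symbol of the alphabet."""
    result = [""]          # basis: the null string
    frontier = [""]
    for _ in range(max_len):
        # recursive step: w in Sigma*, a in Sigma  =>  wa in Sigma*
        frontier = [w + a for w in frontier for a in sigma]
        result.extend(frontier)
    return result

words = strings_up_to(["a", "b"], 2)
# 1 + 2 + 4 = 7 strings: the null string, a, b, aa, ab, ba, bb
```

The closure clause is reflected in the fact that only strings reachable from the null string in finitely many steps are ever produced.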

A language over an alphabet Σ is a subset of Σ∗. Set operations can be used to specify languages [5]. Automata theory can be used to define a class of abstract machines whose computations determine whether a string belongs to a language or not [5, 10]. For instance, such theory has found a fundamental and beautiful application in the field of DNA computing [11]. Computational modeling of insect societies is another field with interesting applications of frameworks, like Stochastic Process Algebras, derived from the relationships between automata and languages [12].


4.5 Recursive Functions as a Model of Computation

We shall observe that everything recursively defined can be computed; in particular, it can be implemented in a computer. However, what is a computable function? From a practical viewpoint, we may say that a computable function is one that can be computed, in finite time, on some computer hardware.

Such a point of view links a theoretical universe, composed of procedures and algorithms, and the physical realization of an abstract machine that can be the host of the computation. For instance, up to now, we cannot really compute with real numbers. In fact, we can say that the available hardware can only compute with countable quantities. The relationship between computational theory and natural numbers received a more fundamental role when Church and Turing (1936) stated the conjecture that the limitation to compute with real numbers is not just a matter of the state of the art in hardware but is the consequence of a universal concept (Church-Turing Thesis) [9]. Such a thesis can be stated as follows, as presented by Deutsch [13] in his famous work on the universal quantum computer:

"Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means".

From the viewpoint of Quantum Mechanics, the basis for such a statement may be the influence of noise and the discrete nature of the universe inherent in the Quantum Mechanics postulates [14].

Now, we must return to the initial question: what is a computable function? Would any function f : N^k → N be a computable one?

To answer this question, we will follow an approach inspired by the recursive definitions presented above. The key idea is based on the following elements:

a) The set N, whose elements can be represented in a computer;

b) A set of basic (computable) functions;

c) Operations over the basic functions, also computable.

Every function defined through (b) and (c) will be computable. If the set of functions so defined is general enough to encompass the intuitively computable functions, we are close to solving the initial question. In fact, that is the case. The so-called Partial Recursive Functions (PRF), developed by Gödel and Kleene, set out to capture the concept of computability by considering the classes of functions over N that result from a set of basic functions and functional composition operators. This theory is developed next.


4.6 Partial Recursive Functions

a) Basic Functions. The following functions are partial recursive:

zero : N → N; zero(n) = 0, ∀n ∈ N, (4.1)

succ : N → N; succ(n) = n+ 1, (4.2)

proj_i : N^n → N; proj_i (x1, ..., xn) = xi. (4.3)

b) Composition. If the following functions are partial recursive:

g : N^k → N,

fi : N^m → N, i ∈ {1, 2, ..., k},

then the function defined by the composition of these functions,

h : N^m → N, (4.4)

h (x) = g (f1 (x), f2 (x), ..., fk (x)),

is also a partial recursive function.

c) Recursion. If the following functions are partial recursive:

f : N^n → N,

g : N^{n+2} → N,

then the function h : N^{n+1} → N defined by:

h (x1, ..., xn, 0) = f (x1, ..., xn), (4.5)

h (x1, ..., xn, y + 1) = g (x1, ..., xn, y, h (x1, ..., xn, y)), (4.6)

is said to be defined by recursion from f and g, and is also partial recursive.

d) Minimization. If the function f : N^{n+1} → N is partial recursive, then the function h : N^n → N defined by:

h (x1, ..., xn) = min {y ; f (x1, ..., xn, y) = 0 and f (x1, ..., xn, z) exists for all z ≤ y}, (4.7)

is said to be defined by minimization from f and is also partial recursive. The reader can find more details in [10].

We shall make some considerations about minimization. We are going to show that we need this concept in order to include the factorial function in the PRF set. So, let us consider the partial recursive function:

f : N → N,

f (n) = 1 − n, if n ≤ 1, (4.8)
f (n) = 0, otherwise.

Then, the function

h = min {y ; f (y) = 0 and f (z) exists for all z ≤ y}, (4.9)

is well-defined in the context of PRFs, despite the fact that it does not have a domain! In fact, this function can be identified with the number "1", and written as:

h : · → N, (4.10)
h = 1.
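Before continuing with the factorial construction, we note that axioms (a)-(d) can be sketched as higher-order functions. The Python encoding below is illustrative only (all names are ours); minimization is an unbounded search, which is exactly what makes the functions "partial":

```python
def zero(n):
    return 0

def succ(n):
    return n + 1

def proj(i):
    """The i-th projection (1-indexed), as in (4.3)."""
    return lambda *xs: xs[i - 1]

def compose(g, *fs):
    """Composition operator, axiom (b)."""
    return lambda *xs: g(*(f(*xs) for f in fs))

def recursion(f, g):
    """Recursion operator, axiom (c): h(x.., 0) = f(x..),
    h(x.., y + 1) = g(x.., y, h(x.., y))."""
    def h(*args):
        *xs, y = args
        acc = f(*xs)
        for k in range(y):
            acc = g(*xs, k, acc)
        return acc
    return h

def minimization(f):
    """Minimization operator, axiom (d): least y with f(x.., y) = 0.
    The while-loop may never stop, so h may be undefined for some inputs."""
    def h(*xs):
        y = 0
        while f(*xs, y) != 0:
            y += 1
        return y
    return h

# the function of (4.8) and the constant h = 1 of (4.9)-(4.10)
f48 = lambda n: 1 - n if n <= 1 else 0
one = minimization(f48)   # one() == 1

# addition by recursion: add(x, 0) = x, add(x, y + 1) = succ(add(x, y))
add = recursion(lambda x: x, lambda x, y, acc: succ(acc))
```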

From this fact, using also the axiom that the successor is a PRF and the fact that the product function is a PRF (Exercise), it follows that the function:

g : N^2 → N,

g (x, y) = succ(x) · y, (4.11)


is also a PRF (Exercise). Therefore, once the functions h, g are PRFs, we can use the recursion operation, defined in axiom (c) above, to show that the factorial function is a PRF. The corresponding scheme defines a function f : N^{0+1} → N, such that:

f (0) = h, (4.12)

f (k + 1) = g (k, f (k)) = (k + 1) · f (k) , (4.13)

which is a PRF function that implements the factorial (Exercise).
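The scheme (4.12)-(4.13) unfolds into a simple loop. A minimal Python sketch (names ours), taking h = 1 as the base value obtained by minimization:

```python
def g(k, acc):
    """g(k, f(k)) = (k + 1) * f(k), as in (4.13)."""
    return (k + 1) * acc

def factorial(n):
    """Unfold f(0) = h = 1 and f(k + 1) = g(k, f(k))."""
    acc = 1                 # f(0) = h
    for k in range(n):
        acc = g(k, acc)
    return acc

# factorial(4) computes g(3, g(2, g(1, g(0, 1)))) = 4 * 3 * 2 * 1 * 1
```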

4.7 Exercises

1. By using induction, prove that:

a) for all n ∈ N,

1 + 2 + 3 + ... + n = n (n + 1) / 2.

b) for all positive integers n, 5^n − 1 is divisible by 4.

c) for all n ∈ N,

1 + x + x^2 + ... + x^n = (x^{n+1} − 1) / (x − 1).

d)

1/(1 · 2 · 3) + 1/(2 · 3 · 4) + 1/(3 · 4 · 5) + ... + 1/(n · (n + 1) · (n + 2)) = n (n + 3) / (4 (n + 1) (n + 2)).

2. Based on the axioms of the PRFs, demonstrate that the following functions are PRF:

a) f : N → N; f (n) = 1 (constant ”1” function)

b) Addition of two natural numbers;

c) Product of two natural numbers;

d) Power (n, m) = n^m;


3. Prove that the function given by expression (4.11) is a PRF.

4. Show, by induction, that the scheme given by expressions (4.12)-(4.13) defines the factorial f (k) = k!, k ∈ N and k ≥ 0.


Chapter 5

Boolean Expressions, Normal Forms and BDD

5.1 Introduction

In this Chapter we consider basic elements of Boolean expressions. We start with truth tables and the algebra of propositions (sections 5.2 and 5.3). Then, we define normal forms and discuss the Satisfiability problem (sections 5.4 and 5.5). In section 5.6 we present binary decision diagrams, an efficient method to represent Boolean expressions. Section 5.7 presents an application in the context of VLSI design. Finally, a list of exercises is proposed.

5.2 Truth Tables and Boolean Expressions

A variable x such that x ∈ {0, 1} (true 1 and false 0) is called a Boolean variable. Given Boolean variables x, y, ..., they can be put together to form Boolean expressions through the operators of conjunction (∧), disjunction (∨), negation (∼), implication (⇒), and bi-implication (⇔). These operators are defined by the following standard truth tables [15]:

Besides, the complete syntax includes parentheses to solve ambiguities. Moreover, as a common convention, it is assumed that the operators bind according to


∼
0 | 1
1 | 0

(a)

∧ | 0 1
0 | 0 0
1 | 0 1

(b)

∨ | 0 1
0 | 0 1
1 | 1 1

(c)

Table 5.1: (a) Truth table for negation. (b) Truth table for conjunction (AND operator). (c) Truth table for disjunction (OR operator).

⇒ | 0 1
0 | 1 1
1 | 0 1

(a)

⇔ | 0 1
0 | 1 0
1 | 0 1

(b)

Table 5.2: (a) Truth table for implication. (b) Truth table for bi-implication.

their relative priority. The priorities are, with the highest first: ∼, ∧, ∨, ⇔, ⇒.

So, for example,

∼ x1 ∧ x2 ∨ x3 ⇒ x4 = (((∼ x1) ∧ x2) ∨ x3) ⇒ x4. (5.1)

A Boolean expression with variables x1, x2, ..., xn produces, for each assignment of truth values (0, 1 values) to the variables, itself a truth value according to the standard truth tables 5.1-5.2. If we denote the set of truth values by B = {0, 1}, then we can think of a Boolean expression with variables x1, x2, ..., xn as a function:

F : Bn → B,


where B^n = B × B × ... × B (n times). As an example, let us rewrite expression (5.1) as:

F : B4 → B,

F (x1, x2, x3, x4) =∼ x1 ∧ x2 ∨ x3 ⇒ x4. (5.2)

Two Boolean expressions F1, F2 : B^n → B are said to be equal if:

F1 (x1, x2, ..., xn) = F2 (x1, x2, ..., xn) , ∀ (x1, x2, ..., xn) ∈ Bn. (5.3)

Tautology: A Boolean expression is a tautology if it yields true for all truthassignments.

Satisfiable: A Boolean expression is satisfiable if it yields true for at least onetruth assignment.
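Both properties can be decided by enumerating all 2^n truth assignments, which is feasible only for small n. A brute-force sketch in Python (names are ours):

```python
from itertools import product

def is_tautology(f, n):
    """True iff the n-variable Boolean function f yields 1 on all assignments."""
    return all(f(*bits) for bits in product([0, 1], repeat=n))

def is_satisfiable(f, n):
    """True iff f yields 1 on at least one assignment."""
    return any(f(*bits) for bits in product([0, 1], repeat=n))

# expression (5.2): (((~x1) and x2) or x3) => x4
F = lambda x1, x2, x3, x4: int(not (((not x1) and x2) or x3) or x4)

# F is satisfiable (e.g. the all-zero assignment) but not a tautology
# (it fails for x3 = 1, x4 = 0)
```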

5.3 Algebra of Propositions

There are some algebraic laws that Boolean expressions obey, some of which are analogous to laws satisfied by the real numbers. These relationships can be proved by using the truth tables 5.1-5.2. The corresponding algebra is very useful for simplifying Boolean expressions. Specifically, we have the following laws [6, 16]:

1) Idempotent laws

p ∧ p = p

p ∨ p = p

2) Commutative

p ∧ q = q ∧ p

p ∨ q = q ∨ p


p⇔ q = q ⇔ p

3) Associative

(p ∧ q) ∧ r = p ∧ (q ∧ r)

(p ∨ q) ∨ r = p ∨ (q ∨ r)

4) Absorption laws

p ∧ (p ∨ q) = p

p ∨ (p ∧ q) = p

5) Distributive

p ∧ (q ∨ r) = (p ∧ q) ∨ (p ∧ r)

p ∨ (q ∧ r) = (p ∨ q) ∧ (p ∨ r)

6) Involution law

∼ (∼ p) = p

7) De Morgan's Laws

∼ (p ∨ q) = (∼ p) ∧ (∼ q)

∼ (p ∧ q) = (∼ p) ∨ (∼ q)

8) Complement Laws

p∨ ∼ p = 1,

p∧ ∼ p = 0,


5.4 Normal Forms

A Boolean expression is in Disjunctive Normal Form (DNF) if it consists of a disjunction of conjunctions of variables and negations of variables; that is [15, 16]:

F (x1, x2, ..., xn) = (t11 ∧ t12 ∧ ... ∧ t1k1) ∨ ... ∨ (tm1 ∧ tm2 ∧ ... ∧ tmkm), (5.4)

where tij is either a variable xl or a negation of a variable ∼xl. Expression (5.4) can be rewritten as:

∨_{j=1}^{m} ∧_{i=1}^{kj} tji. (5.5)

As an example, consider the expression:

(x∧ ∼ y) ∨ (∼ x ∧ y) , (5.6)

which is equal to the XOR gate (Exercise).

Similarly, a Conjunctive Normal Form (CNF) is an expression that can be written as:

∧_{j=1}^{m} ∨_{i=1}^{kj} tji, (5.7)

where, like in the previous definition, tij is either a variable xl or a negation of a variable ∼xl.

Proposition 1: Any Boolean expression is equal to an expression in CNF and an expression in DNF. (Exercise. See also [16].)
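The DNF half of Proposition 1 has a constructive proof: take one conjunction per satisfying row of the truth table. The following Python sketch (representation and names are ours) builds such a DNF for any Boolean function:

```python
from itertools import product

def dnf_from_function(f, names):
    """Return a DNF string equal to f: one conjunction of literals
    per assignment on which f is true."""
    terms = []
    for bits in product([0, 1], repeat=len(names)):
        if f(*bits):
            literals = [x if b else "~" + x for x, b in zip(names, bits)]
            terms.append("(" + " & ".join(literals) + ")")
    return " | ".join(terms) if terms else "0"

xor = lambda x, y: x ^ y
# for XOR this recovers expression (5.6): "(~x & y) | (x & ~y)"
```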

5.5 Satisfiability of Boolean Expressions

From the viewpoint of applications, an important point is the satisfiability problem. In general, it is hard to determine whether a Boolean expression is satisfiable; that is, given a Boolean expression F : B^n → B, to find the set:

S = {(x1, x2, ..., xn) ∈ B^n ; F (x1, x2, ..., xn) = 1}.


There is a famous theorem, due to Cook, which makes the hardness of the satisfiability problem precise:

Theorem 1 (Cook): Satisfiability of Boolean expressions is NP-complete.

The NP-complete complexity class is a special subset of the NP problems (see Chapter 8). From a pragmatic viewpoint, we can say that an NP-complete problem is one for which the only known deterministic algorithms run in exponential time. No polynomial-time deterministic algorithm is known for any of the NP-complete problems yet.

It is important to observe that, if an expression is in DNF, then satisfiability is decidable in polynomial time, but for DNFs the tautology check is hard. Besides, the conversion between CNFs and DNFs is exponential. To exemplify this fact, let us consider the following CNF example over the variables x^1_0, x^2_0, ..., x^n_0, x^1_1, x^2_1, ..., x^n_1:

(x^1_0 ∨ x^1_1) ∧ (x^2_0 ∨ x^2_1) ∧ ... ∧ (x^n_0 ∨ x^n_1). (5.8)

Following the rules of section 5.3, it is easy to show that this expression can be put in the following DNF form (Exercise):

(x^1_0 ∧ x^2_0 ∧ ... ∧ x^{n−1}_0 ∧ x^n_0) ∨ (5.9)
(x^1_0 ∧ x^2_0 ∧ ... ∧ x^{n−1}_0 ∧ x^n_1) ∨
...
(x^1_1 ∧ x^2_1 ∧ ... ∧ x^{n−1}_1 ∧ x^n_0) ∨
(x^1_1 ∧ x^2_1 ∧ ... ∧ x^{n−1}_1 ∧ x^n_1) (5.10)

We shall observe that, whereas expression (5.8) has size proportional to n, the corresponding DNF expression has size proportional to n 2^n. Due to the practical and theoretical relevance of the satisfiability problem, other normal forms were proposed in order to address its hardness. The following introduces one of them.
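The blowup can be made concrete by distributing the conjunction of (5.8) over its clauses: every way of choosing one literal per clause yields one DNF term. A small Python illustration (the literal encoding is ours):

```python
from itertools import product

def cnf_clauses(n):
    """The CNF (5.8): clause i is (x^i_0 or x^i_1); literals are encoded
    as (subscript, superscript) pairs."""
    return [[(0, i), (1, i)] for i in range(1, n + 1)]

def dnf_terms(n):
    """Distribute: one DNF term per choice of one literal per clause."""
    return list(product(*cnf_clauses(n)))

n = 10
assert len(cnf_clauses(n)) == n       # CNF size grows like n
assert len(dnf_terms(n)) == 2 ** n    # DNF size grows like 2^n
```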

5.6 Binary Decision Diagrams (BDD)

Now, we introduce a new operator called if-then-else, defined as [15]:


x→ y, z = (x ∧ y) ∨ (∼ x ∧ z) . (5.11)

It is easy to check that this expression is true if x = 1 and y = 1, or x = 0 and z = 1. An important property of the if-then-else operator is that we can express all operators defined in the previous sections using only the if-then-else operator (Exercise). For example, we can prove that:

∼ x = x→ 0, 1, (5.12)

Also, the Boolean expression x is represented as x → 1, 0. Therefore, we can define a normal form based on this new operator.

INF: An If-then-else Normal Form (INF) is a Boolean expression built entirely from the if-then-else operator and the constants 0 and 1, such that all tests are performed only on variables.

If t [0/x] denotes the Boolean expression obtained by replacing x with 0 in the expression t, then it is not hard to see that the following equivalence holds:

t = x→ t [1/x] , t [0/x] . (5.13)

This is called the Shannon expansion of t with respect to x. This simple equation has many useful applications. The first is to generate an INF from any expression t. The key idea is to work recursively, first using expression (5.13) to form two Boolean expressions that contain one less variable than t. Then we can find INF expressions for both t [1/x] and t [0/x], and so on. In this way, we can prove the following proposition:

Proposition 2: Any Boolean expression is equivalent to an expression in INF.

As an example, let us consider the Boolean expression t = (x1 ⇔ y1) ∧ (x2 ⇔ y2). If we denote t [0/x] = t0 and t [1/x] = t1, and if we select in order the variables x1, y1, x2, y2, we get, by using Shannon expansions, the following equivalent expressions:

t = x1 → t1, t0 (5.14)

t0 = y1 → 0, t00 (5.15)


t1 = y1 → t11, 0 (5.16)

t00 = x2 → t001, t000 (5.17)

t11 = x2 → t111, t110 (5.18)

t000 = y2 → 0, 1 (5.19)

t001 = y2 → 1, 0 (5.20)

t110 = y2 → 0, 1 (5.21)

t111 = y2 → 1, 0 (5.22)

These expressions can be organized as the tree shown in Figure 5.1.

Several expressions are easily seen to be identical, so it is tempting to identify them. For example, instead of t110 we can use t000, and instead of t111 we can use t001. If we substitute t000 for t110 in the right-hand side of t11, and also t001 for t111, we find that t00 and t11 are identical, and in t1 we can replace t11 with t00.

If we in fact identify all equal sub-expressions, we end up with what is known as a binary decision diagram (BDD). It is no longer a tree of Boolean expressions but a directed acyclic graph (DAG). The obtained expressions can be written as:

t = x1 → t1, t0 (5.23)

t0 = y1 → 0, t00 (5.24)

t1 = y1 → t00, 0 (5.25)

t00 = x2 → t001, t000 (5.26)

t000 = y2 → 0, 1 (5.27)


Figure 5.1: Binary Decision Tree.


t001 = y2 → 1, 0 (5.28)

Figure 5.2 pictures the corresponding BDD. In this Figure, each node represents a sub-expression in the graph. Such a node may be either terminal, in the case of the constants 0 and 1, or non-terminal. A non-terminal node has a low-edge corresponding to the else-part and a high-edge corresponding to the then-part.

We shall notice that the number of nodes has decreased from 9 in the decision tree to 6 in the BDD. It is not hard to imagine that if each of the terminal nodes were other big decision trees, the savings would be dramatic.

Figure 5.2: Binary Decision Diagram.
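The identification of equal sub-expressions can be automated with a "unique table" of nodes, as in real BDD packages. The Python sketch below (our naming, not a production implementation) builds the shared diagram by recursive Shannon expansion; it also drops nodes whose two branches coincide, so constant sub-functions collapse to the terminals:

```python
def build_bdd(f, names):
    """Shannon-expand f on the variables in the given order, sharing
    identical (var, low, high) triples so the result is a DAG."""
    table = {}   # unique table: (var, low, high) -> node id
    nodes = {}   # node id -> (var, low, high); ids 0 and 1 are terminals

    def mk(var, low, high):
        if low == high:              # redundant test: skip the node
            return low
        key = (var, low, high)
        if key not in table:
            table[key] = len(nodes) + 2
            nodes[table[key]] = key
        return table[key]

    def expand(g, i):
        if i == len(names):
            return int(g())          # all variables fixed: a terminal
        # t = x -> t[1/x], t[0/x]   (Shannon expansion, eq. (5.13))
        low = expand(lambda *rest: g(0, *rest), i + 1)
        high = expand(lambda *rest: g(1, *rest), i + 1)
        return mk(names[i], low, high)

    return expand(f, 0), nodes

t = lambda x1, y1, x2, y2: int((x1 == y1) and (x2 == y2))
root, nodes = build_bdd(t, ["x1", "y1", "x2", "y2"])
# for this example the DAG has 6 non-terminal nodes, matching the text
```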


5.7 Applications

In fields like cellular automata theory and applications, sometimes we are faced with the following problem: given a linear Boolean operator:

T : {0, 1}^n → {0, 1}^n,

consider the question of whether a particular sequence of n site values can occur after k applications of T, starting from any initial state. More important, let us suppose that the space ({0, 1}^n, ⊕), where ⊕ is the sum modulo 2, encodes two classes of patterns P1 ⊂ {0, 1}^n and P2 ⊂ {0, 1}^n. Consider now the question: find a linear operator T that works as a classifier, that is, which has the property:

T (x) ≠ T (y), ∀x ∈ P1 and y ∈ P2. (5.29)

This expression is equivalent to the following one:

T (x ⊕ y) ≠ 0, ∀x ∈ P1 and y ∈ P2. (5.30)

Such a problem has been explored in the context of VLSI design [17]. The interested reader may wonder about the possibility of applying the solution of problem (5.30) to other architectures. The solution of this problem can be addressed by BDDs. Firstly, the set of equations derived from expression (5.30) must be obtained. Then, we should convert these equations into INF in order to obtain the solution more efficiently.

5.8 Exercises

1. Show how all operators of Tables 5.1-5.2 can be encoded using only ∼, ∧.

2. Prove the De Morgan’s and distributive laws.

3. For each of the following formulas tell whether it is (i) satisfiable, (ii) a tautology, (iii) unsatisfiable.

(a) (p ∨ q) ⇒ p


(b) p∧ ∼ q

(c) ∼(p ⇒ q) ⇒ (p ∧ ∼q)

4. Is it possible to say whether F is satisfiable from the fact that ∼F is a tautology?

5. Prove that, given two Boolean expressions F1 and F2, then F1 = F2 if and only if the expression F1 ⇔ F2 is a tautology.

6. Use the laws of section 5.3 to simplify the expression (5.6).

7. Demonstrate Proposition 1 of section 5.4.

8. By using the propositional laws of section 5.3, show that expression (5.8) can be rewritten in the form (5.9).

9. Prove expression (5.12) and find if-then-else expressions for the other operators in Tables 5.1-5.2.

10. Find CNF and DNF expressions equal to each of the following:

(a) (p ∧ (q ∨ r)) ∨ (q ∧ (p ∨ r))

(b) ∼p ∨ (p ∧ ∼q) ∧ (r ∨ (∼p ∧ q))

(c) p ⇒ (q ⇔ r)


Chapter 6

Circuit Theory and Computational Models

6.1 Introduction

In this chapter we develop basic elements of another model of computation, the circuit model, which is equivalent to the partial recursive functions (PRF) model in terms of computational power, but is more convenient for many applications. Both models can be used to address the problem of characterizing the set of computable functions. However, for some applications, the viewpoint of breaking the computation into a graph of gates is much more natural and convenient than functional operations, like composition and recursion. That is the case for the neural computation [18] and quantum computing [9, 19] paradigms, which are briefly discussed in Appendices B and C, respectively.

In this Chapter, we summarize the basic elements of circuit theory in section 6.2. The equivalence between the PRF and circuit frameworks is proposed as an exercise (section 6.3). We end this Chapter with a list of exercises.


6.2 Circuit Theory

In this section we start with a practical development found in [9], page 129. So, we may say that a circuit is made up of wires and gates. The wires carry information around and the gates (functions) perform simple computational tasks. The simplest gates are the logic operators presented in Chapter 5, and the corresponding functions are defined by their truth tables (section 5.2). Figure 6.1 shows the elementary gates and their graphical representation. Also, we can add to the set of elementary gates the identity (just a wire) and the FANOUT, which replaces a bit with two copies of itself.

Figure 6.1: Basic Gates.

Figure 6.2 pictures an example of a circuit that outputs the result of the Boolean expression x ⊕ y together with a carry bit set to 1 if x and y are both 1, or 0 otherwise (check it!).

Formally, let G be a set of gates that map several bits to one bit. For each n ≥ 0, a circuit Cn over the set G is a directed, acyclic graph with a list of input nodes (with no incoming edges), a list of output nodes (with no outgoing edges), and a gate in G labeling each non-input/non-output node. Given a binary input string (x1, x2, ..., xn) ∈ {0, 1}^n, we label each input node xi or ∼xi (NOT(xi)).


Figure 6.2: Circuit example.

For every other node v with m predecessors (y1, y2, ..., ym), we recursively assign a value g (y1, y2, ..., ym), where g is the gate that labels node v. The circuit Cn outputs the value given by the list of output nodes.

The size of a circuit Cn is its number of nodes, and the depth of Cn is the length of the longest path from any input node to an output node.

A circuit family is a list of circuits C = (C1, C2, ..., Cn, ...) where Ci has i binary inputs. The family C computes a family of Boolean functions (f1, f2, ..., fn, ...), where fi is the Boolean function computed by circuit Ci.

We say that the family C has size complexity s (n) and depth complexity d (n) if for all n ≥ 0 the circuit Cn has size at most s (n) and depth at most d (n). Size and depth are important complexity measures of circuits that respectively characterize the computational resources and the number of steps needed to compute a family of Boolean functions. Important classes of circuits are the following ones.
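The circuit of Figure 6.2 can be checked by simulating it gate by gate. A small Python sketch (the function name is ours):

```python
def half_adder(x, y):
    """One gate per line: the sum output is x XOR y built from
    AND/OR/NOT gates, and the carry output is x AND y."""
    s = (x and not y) or (not x and y)   # x XOR y
    c = x and y                          # carry bit
    return int(s), int(bool(c))

# exhaustive check over the four input combinations
table = {(x, y): half_adder(x, y) for x in (0, 1) for y in (0, 1)}
# table[(1, 1)] == (0, 1): sum 0 with carry 1
```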

6.2.1 Threshold Circuits

A weighted threshold function of weight bound w and threshold ∆ ∈ Z (the set of integers) is a Boolean function denoted by Th^{n,∆}_{w1,w2,...,wn} and defined as follows [20]:

Th^{n,∆}_{w1,w2,...,wn} : {0, 1}^n → {0, 1},

Th^{n,∆}_{w1,w2,...,wn} (x1, x2, ..., xn) = 1, if ∑_{i=1}^{n} wi xi ≥ ∆,

Th^{n,∆}_{w1,w2,...,wn} (x1, x2, ..., xn) = 0, otherwise,

where (w1, w2, ..., wn) ∈ Z^n and |wi| ≤ w for all i.

If we restrict to the case wi = 1, i = 1, 2, ..., n, we simply write Th^{n,∆} : {0, 1}^n → {0, 1} and call Th^{n,∆} a threshold function with threshold ∆. A threshold circuit is a circuit over the set of threshold gates (functions). Let TC (s (n), d (n)) denote the collection of threshold circuits of size s(n) and depth d(n). The class of functions computable by these circuits will be denoted by TC. We have analogous definitions for the collection of weighted threshold circuits and the class of functions computable by them, denoted by WTC (s (n), d (n)) and WTC, respectively.
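The definition translates directly into code. The Python sketch below (names are ours) also shows that AND, OR, and majority are threshold functions:

```python
def threshold_gate(weights, delta):
    """Th: outputs 1 iff the weighted sum of the inputs reaches delta."""
    def gate(*xs):
        return int(sum(w * x for w, x in zip(weights, xs)) >= delta)
    return gate

# majority of 5 bits: an unweighted threshold function Th^{5,3}
majority5 = threshold_gate([1] * 5, 3)

# AND and OR as threshold gates
and2 = threshold_gate([1, 1], 2)
or2 = threshold_gate([1, 1], 1)
```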

An important theorem about threshold circuits is the following one:

Theorem: Suppose an analytic function f (x) has a convergent Taylor series expansion f (x) = ∑_{n=0}^{∞} cn (x − x0)^n over an interval |x − x0| < ε, where 0 < ε < 1, and the coefficients are rational numbers cn = an/bn, where an, bn are integers with magnitude at most 2^{n^{O(1)}}. Then f (x) can be efficiently computed over this interval, within accuracy 2^{−n^c}, for any constant c ≥ 1, by threshold circuits of polynomial size and simultaneous constant depth [20].

6.2.2 Equality Threshold Circuits

A weighted equality threshold function of weight bound w ≥ 0 is a Boolean function defined by:

Et^n_{w1,...,wn} : {0, 1}^n → {0, 1},

Et^n_{w1,...,wn} (x1, x2, ..., xn) = 0, if ∑_{i=1}^{n} wi xi = 0,

Et^n_{w1,...,wn} (x1, x2, ..., xn) = 1, otherwise,

for (w1, w2, ..., wn) ∈ Z^n, |wi| ≤ w for all i.


An equality threshold circuit of weight bound w ≥ 0 is a circuit over the set of equality threshold gates such that all the weights are absolutely bounded by w. Let EC(s(n), d(n)) denote the collection of equality circuits of size s(n) and depth d(n). EC will also be used to denote the class of functions computable by these circuits.

It is possible to show that the weights of an EC circuit can be encoded appropriately in unitary matrices, and quantum entanglement and interference can be used to compute ∑_{i=1}^{n} wi xi in one step. Thus, EC provides a good model to help in the combination of ideas from quantum computing and threshold circuits [21].

6.3 Exercises

1. Show that any function f : {0, 1}^n → {0, 1} can be computed by a circuit.

2. Show that the circuit model and the Partial Recursive Functions (PRF) model are equivalent; that is, show that any function f : {0, 1}^n → {0, 1} can be computed by both the PRF and the circuit model.

3. Consider the weighted threshold circuits of section 6.2.1. What kind of function f : {0, 1}^n → {0, 1} can be computed by this model? Now, discuss the question: what kind of function f : {0, 1}^n → {0, 1} can be computed by the McCulloch-Pitts (neuron) model of section B.2?

4. Develop a circuit that computes the operation x − y (subtraction), for x, y ∈ {0, 1}. The circuit must have x, y as inputs and two outputs, one of them used to send the sign of the operation, say, 0 for "+" and 1 for "−".

5. Develop circuits to implement the following operations:

a) x · y ⊕ z · w, for x, y, z, w ∈ {0, 1}.

b) Sum of two natural numbers m and n.

c) Given a pair (x, y) ∈ {0, 1}^2, output the pair (y, x).

6. Simplify the following expression and give the corresponding circuit: ∼((a ∧ ∼b) ∨ c) ∧ ((a ∨ c) ∧ ∼c).

7. The computation units in quantum computing are unitary operators. The whole computation (forgetting measurements for now) can be seen as a sequence A1 · A2 · ... · An (x), where the Ai are unitary matrices, the operation "·" is the usual matrix product, and x ∈ C^n is a vector representing the initial state (C is the set of complex numbers). Therefore, answer the questions:

(a) Are there functions f : {0, 1}^n → {0, 1} that cannot be computed by unitary gates?

(b) Which kind of functions can be computed by unitary gates?

(c) Based on items (a)-(b), discuss the power of the quantum computing paradigm compared with the usual one based on logic (electronic) circuits.

8. Develop threshold circuits to implement the following operations:

a) x ⊕ y, for x, y ∈ {0, 1}.

b) Product: x · y, where x, y ∈ {0, 1}.


Chapter 7

Formal Languages and Turing Machines

7.1 Introduction

Loosely speaking, a formal language is a language whose structure can be specified with mathematical precision. The study of formal languages is not only interesting as a mathematical discipline in its own right, but also because of its relevance to the foundations of mathematics, its applications, and surprising connections with other branches of mathematics.

For discrete mathematics, a fundamental application of languages is for encoding problems. From the computer science viewpoint, the power of a computational model can be analyzed through the set of languages it decides. In this Chapter, we present basic concepts in language theory and language representation. Then, Turing machines are described, as well as their relationships with language theory. Besides, some concepts in grammar theory are presented, and a list of exercises is proposed at the end of the Chapter.


7.2 Basic Concepts in Formal Languages

An alphabet Σ is a finite set of symbols. For instance, the particular case in which Σ = {0, 1} will be of special interest in what follows. Other interesting examples are the current English alphabet {a, b, c, d, ..., w, x, y, z} and the DNA bases {A, T, C, G}.

Given an alphabet Σ, we define the sets Σ∗ and Σ+ as follows:

Σ+ = ⋃_{n=1}^{∞} Σ^n, (7.1)

Σ∗ = ⋃_{n=0}^{∞} Σ^n = Σ+ ∪ {λ}, (7.2)

where λ is the element called the empty string or empty word. A word or (finite) string is an element of Σ∗. It is important to observe that abbc is a string and cbba is also a string, but they are different. Observe that the set Σ∗ contains all finite strings and that Σ+ = Σ∗ − {λ}. We can also define Σ∗ by recursion, as follows:

Definition: Let Σ be an alphabet. The set Σ∗ of strings over Σ is defined by:

i) Basis: λ ∈ Σ∗,

ii) Recursive step: If w ∈ Σ∗ and a ∈ Σ then wa ∈ Σ∗,

iii) Closure: w ∈ Σ∗ only if it can be obtained from λ by a finite number of applications of the operation in ii).

We can prove that the latter definition and the one in expression (7.2) are equivalent (Exercise).
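The recursive definition above can be transcribed directly into a short program. The following Python sketch (the function name `sigma_star` is ours) starts from λ, represented as the empty Python string, and repeatedly applies the recursive step w ↦ wa; since Σ∗ is infinite, the enumeration is truncated at a chosen maximum length:

```python
# Build all strings of Sigma* up to length max_len, following the
# recursive definition: basis lambda (''), step w -> wa for a in Sigma.

def sigma_star(alphabet, max_len):
    strings = ['']            # basis: lambda is in Sigma*
    frontier = ['']
    for _ in range(max_len):
        # recursive step: extend every string of the previous length by one symbol
        frontier = [w + a for w in frontier for a in alphabet]
        strings.extend(frontier)
    return strings

print(sigma_star(['0', '1'], 2))  # ['', '0', '1', '00', '01', '10', '11']
```

The output lists the strings of {0, 1}∗ of length at most 2, grouped by length, which also illustrates the counting used later to show that Σ∗ is countably infinite.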

A language over Σ is a subset of Σ∗; that is, it is a set of words made from the symbols in the alphabet Σ. Hence, if we take the alphabet Σ = {0, 1}, the following sets are examples of languages over Σ:

L1 = {0, 001, 0001, 1111}, (7.3)

L2 = {0xyz0 ; x, y, z ∈ {0, 1}}, (7.4)

L3 = Σ∗, (7.5)

L4 = ∅. (7.6)


In the context of languages, ∅ is called the empty language. We define the length of x ∈ Σ∗, denoted by |x|, as the number of symbols occurring in x. Therefore, if Σ = {A, T, C, G} we have: |A| = 1, |ATTT| = 4, |λ| = 0.

We can define the following operations over words in Σ∗:

Concatenation: If x, y ∈ Σ∗ then x concatenated with y is the word formed by the symbols of x followed by the symbols of y. This is denoted by xy. Observe that |xy| = |x| + |y|. The concatenation of zero strings is λ, and the concatenation of only one string is the string itself.

Substring: A string v is a substring of a string w if and only if there are strings x and y such that w = xvy.

Suffix: If w = xv for some string x, then v is a suffix of w.

Prefix: If w = vy for some string y, then v is a prefix of w.

Reversal: Given a string w, its reversal, denoted by wR, is the string spelled backwards. For example, (abcd)R = dcba. A formal definition can be given as follows (Exercise):

(1) If w = λ, then wR = λ.

(2) If w is a string of length n + 1 > 0, then w = ua for some a ∈ Σ, and wR = auR.
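The two clauses of the recursive definition of reversal translate directly into code. A minimal Python sketch:

```python
# Reversal by the recursive definition: lambda reverses to itself,
# and (ua)^R = a u^R, where a is the last symbol of w.

def rev(w):
    if w == '':                 # (1) w = lambda
        return ''
    u, a = w[:-1], w[-1]        # (2) write w = ua
    return a + rev(u)

print(rev('abcd'))  # -> 'dcba'
```

Each recursive call peels one symbol off the end, so the recursion depth equals |w|, mirroring the inductive structure of the definition.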

An important fact is that if Σ is finite then Σ∗ is countably infinite. To show this, we must find a bijection f : N → Σ∗. In order to perform this task, firstly fix some ordering of the alphabet, say Σ = {a1, ..., an}, where a1, ..., an are distinct. The members of Σ∗ can then be ordered following a procedure similar to the one used in expression (7.18), section 7.6 below. Such a natural ordering is the key for the demonstration (Exercise).

Since languages are sets, they can be combined by the set operations of union, intersection, and difference. Concatenation of languages is also possible. If L1 and L2 are languages over Σ, their concatenation is L = L1L2, defined by:

L = {w ∈ Σ∗ ; w = xy for some x ∈ L1 and y ∈ L2}. (7.7)

Another language operation is the Kleene star of a language L, denoted by L∗ and defined as follows:


L∗ = {w ∈ Σ∗ ; w = w1w2...wk for some k ≥ 0 and w1, w2, ..., wk ∈ L}. (7.8)

Besides, we write L+ for the language LL∗. Formally:

L+ = {w ∈ Σ∗ ; w = w1w2...wk for some k ≥ 1 and w1, w2, ..., wk ∈ L}. (7.9)

We shall notice that L+ can be considered as the closure of L under the operation of concatenation; that is, L+ is the smallest language that includes L and all strings that are concatenations of strings in L. (Exercise)
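Definitions (7.7)-(7.9) can be experimented with on finite languages. The Python sketch below (names `concat` and `kleene_star` are ours) computes L1L2 exactly, and approximates L∗ by keeping only strings up to a chosen length, since L∗ is infinite whenever L contains a nonempty string:

```python
# Language concatenation (7.7) and a length-bounded Kleene star (7.8).

def concat(L1, L2):
    return {x + y for x in L1 for y in L2}

def kleene_star(L, max_len):
    result = {''}                       # k = 0 contributes the empty string
    frontier = {''}
    while True:
        # concatenate one more factor from L, discarding overlong strings
        frontier = {w for w in concat(frontier, L) if len(w) <= max_len}
        if frontier <= result:          # nothing new: bounded star is complete
            break
        result |= frontier
    return result

L = {'ab', 'c'}
print(sorted(concat(L, L)))                               # ['abab', 'abc', 'cab', 'cc']
print(sorted(kleene_star(L, 3), key=lambda w: (len(w), w)))
# ['', 'c', 'ab', 'cc', 'abc', 'cab', 'ccc']
```

Note that λ ∈ L∗ always holds (the k = 0 case), while λ ∈ L+ only if λ ∈ L, which is exactly the difference between (7.8) and (7.9).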

Definition: A decision problem is a function whose codomain is the set {0, 1}.

According to this definition, every function f : Σ∗ → {0, 1} is a decision problem over the alphabet Σ.

7.3 Finite Representation of Languages

A central issue in the theory of computation is the representation of languages (finite or infinite) by finite specifications. By a representation of a language L over an alphabet Σ, we mean a string, a finite sequence D of symbols over some alphabet Ω, such that a string w ∈ L if and only if it can be defined through D.

To introduce this concept, let us take as an example the language L ⊂ {0, 1}∗ such that w ∈ L if and only if w has two or three occurrences of 1, the first and second of which are not consecutive. This language can be specified using the following string:

0∗ 1 0∗ 0 1 0∗ ((1 0∗) ∪ ∅∗). (7.10)

In fact, we may dispense with the extra spacing and parentheses and simply write:

0∗10∗010∗(10∗ ∪ ∅∗). (7.11)
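For illustration, expression (7.11) can be tested with the regular-expression facilities of an ordinary programming language. In the Python sketch below, the translation is ours: ∅∗ denotes the language {λ}, so the final factor (10∗ ∪ ∅∗) becomes an optional group `(10*)?`:

```python
# Membership test for the language of (7.11): two or three 1s,
# the first two of which are not consecutive.
import re

pattern = re.compile(r'0*10*010*(10*)?')

for w in ['10010', '00100100', '101', '11']:
    print(w, bool(pattern.fullmatch(w)))
# 10010 True, 00100100 True, 101 True, 11 False
```

The mandatory `0` between the first two `1`s enforces the "not consecutive" condition, and the optional group accounts for the third, optional occurrence of 1.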

Roughly speaking, expressions such as (7.10) and (7.11) are called regular expressions. Hence, a regular expression describes a language exclusively by means of symbols of the alphabet Σ and the symbol ∅, combined perhaps with the symbols ∪ and ∗, possibly with the aid of parentheses. More formally, we can replace the symbols ∅, ∪, ∗, which have specific meanings, by special symbols Ø, t, ?, free of meaningful overtones. Thus, we can state the following definition.

Definition: A string over the alphabet Σ ∪ {(, ), t, ?, Ø} is a regular expression over the alphabet Σ if and only if it can be obtained as follows:

(1) Ø and each member of Σ is a regular expression,
(2) If α and β are regular expressions, then so is (αβ),
(3) If α and β are regular expressions, then so is (α t β),
(4) If α is a regular expression, then so is α?.

If we interpret the symbols t, ?, Ø as union, Kleene star and empty set, and consider the juxtaposition of expressions as concatenation, then every regular expression represents a language. The relation between regular expressions and languages can be established more formally. Thus, let Ω = Σ ∪ {(, ), t, ?, Ø} and let P (Σ∗) be the set of the parts of Σ∗; then consider a function:

£ : Ω∗ → P (Σ∗) , (7.12)

such that £ (α) is the language represented by α. This function can be defined as follows:

(1) £ (Ø) = ∅ and £ (a) = {a} for each member a ∈ Σ,
(2) If α and β are regular expressions, then £ ((αβ)) = £ (α) £ (β),
(3) If α and β are regular expressions, then £ ((α t β)) = £ (α) ∪ £ (β),
(4) If α is a regular expression, then £ (α?) = £ (α)?.

This definition implies that different languages must have different representations (Exercise). However, we must be careful about possible limitations in the choice of representations. The problem is that the set of strings Ω∗ is countably infinite, but the set P (Σ∗) is not countably infinite (Exercise). Therefore, there will be languages for which we are unable to find finite representations! This is a very important result in the theory of computation.

The class of regular languages over an alphabet Σ is the set of all languages Lsuch that L = £ (α) for some regular expression α over Ω.
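The function £ of expression (7.12) can be prototyped for finite approximations of languages. In the Python sketch below, the encoding of regular expressions as nested tuples is our own device (it is not notation from the text), and, as with the Kleene star earlier, the resulting language is truncated to strings of length at most `max_len`:

```python
# A bounded interpreter for the map L of (7.12), clauses (1)-(4):
# ('empty',) is the empty set, ('sym', a) a single symbol, ('cat', r, s)
# juxtaposition, ('union', r, s) the t operator, ('star', r) the ? operator.

def lang(rx, max_len):
    op = rx[0]
    if op == 'empty':
        return set()
    if op == 'sym':
        return {rx[1]}
    if op == 'union':
        return lang(rx[1], max_len) | lang(rx[2], max_len)
    if op == 'cat':
        return {x + y for x in lang(rx[1], max_len)
                      for y in lang(rx[2], max_len) if len(x + y) <= max_len}
    if op == 'star':
        base, result = lang(rx[1], max_len), {''}
        while True:
            new = {x + y for x in result for y in base
                   if len(x + y) <= max_len} - result
            if not new:
                return result
            result |= new

# (0 t 1)? : every binary string
rx = ('star', ('union', ('sym', '0'), ('sym', '1')))
print(sorted(lang(rx, 2), key=lambda w: (len(w), w)))
# ['', '0', '1', '00', '01', '10', '11']
```

Clauses (1)-(4) of the definition appear one-to-one as the branches of the interpreter, which makes the recursive structure of £ explicit.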

Two important aspects about languages are language recognition and language generation. The former answers the question: Is a string w a member of a language L? The latter is used to specify all the elements of L. For finite languages, a finite representation can be used to list all the members of the language. In this case, the representation is a language generator in an explicit sense. For infinite languages, finite representations are not language generators in a computational, or algorithmic, sense. However, we can use the finite representation to design language recognition devices.

7.4 Deterministic Turing Machines

A Deterministic Turing Machine M is defined through an 8-tuple (Q, Σ, Γ, δ, q0, b, qA, qR), where:

Q is a finite set of states,
Γ is an alphabet of input symbols,
b ∈ Γ is the blank symbol,
Σ ⊂ Γ − {b} is the alphabet of output symbols,
δ : Q × Γ → Q × Σ × {L, R} is the state transition function,
q0 ∈ Q is the initial state,
qA, qR ∈ Q are the final states.

Besides, it is supposed that M has a one-way infinite tape divided into cells numbered through N − {0}, that is, (1, 2, 3, ...). The tape is scanned by a head which can read and print on a single cell at a time. The operation of M is supervised by a simple process called a finite control (see Figure 7.1).

An execution of M encompasses the following steps:

1) Initialization: The state of M is q0 and the tape contains one input word x1x2x3...xn ∈ (Γ − {b})^n in the cells 1, 2, ..., n. All other cells contain the blank symbol. The head is positioned at cell 1.

2) In a single step M performs the following actions:
2.1) The symbol on the tape cell scanned by the head is read;
2.2) The state transition function gives the new state of the machine, the symbol to be printed on the cell scanned, and whether the head moves one cell left (L) or one cell right (R). The head is not permitted to move to the left of cell number 1.

3) The execution ends when M enters one of the final states. If the final state is qA, the input is said to be accepted by M. Otherwise, if the final state is qR, the input is said to be rejected by M.


Figure 7.1: Turing machine.

It is important to note that the state transition function completely describes the program which controls the execution of a specific machine. Therefore, the terms Turing machine and Turing machine program are sometimes used with the same meaning [10]. In some references the Turing machine definition may be slightly different from the one above, but without any change in the computational power of the device [16, 22]. See also the interesting discussion about finite and infinite machines in part two of the book [23].

From the Turing machine definition, it is clear that once we describe the state transition function, the machine behavior is also specified. We can use a tabular representation to specify a particular state transition function. Let us see the following example:

Example: Consider the following Turing machine:

Q = {q0, q1, qA, qR},
Γ = {0, 1, b},
Σ = {0, 1},
q0 is the initial state,
qA, qR are the final states,
δ : Q × Γ → Q × Σ × {L, R}, defined as:


δ (q0, 0) = (q1, 1, R) ;  δ (q0, 1) = (qA, 1, R)

δ (q0, b) = (qR, 1, R) ;  δ (q1, 0) = (q0, 1, R)

δ (q1, 1) = (qR, 1, R) ;  δ (q1, b) = (qR, 1, R)    (7.13)
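The machine of (7.13) is small enough to run in software. The Python sketch below (the simulator itself, `run_tm`, is our own illustration, not part of the text) represents the tape as a list extended with blanks on demand and follows δ until a final state is reached:

```python
# A minimal deterministic Turing machine simulator, run on the
# example machine of (7.13); 'b' stands for the blank symbol.

def run_tm(delta, q0, finals, tape, blank='b', max_steps=1000):
    """Return the final state reached by the machine on the given input."""
    tape = list(tape)
    q, head = q0, 0                      # cell 1 of the text is index 0 here
    for _ in range(max_steps):
        if q in finals:
            return q
        if head == len(tape):            # extend the tape with blanks on demand
            tape.append(blank)
        q, sym, move = delta[(q, tape[head])]
        tape[head] = sym
        head += 1 if move == 'R' else -1
        head = max(head, 0)              # the head cannot move left of cell 1
    raise RuntimeError('no final state within the step bound')

# Transition function of the machine in (7.13)
delta = {
    ('q0', '0'): ('q1', '1', 'R'), ('q0', '1'): ('qA', '1', 'R'),
    ('q0', 'b'): ('qR', '1', 'R'), ('q1', '0'): ('q0', '1', 'R'),
    ('q1', '1'): ('qR', '1', 'R'), ('q1', 'b'): ('qR', '1', 'R'),
}

print(run_tm(delta, 'q0', {'qA', 'qR'}, '001'))  # -> qA (accepted)
print(run_tm(delta, 'q0', {'qA', 'qR'}, '01'))   # -> qR (rejected)
```

Tracing a few inputs suggests what this machine does: it accepts exactly the strings that begin with an even number of 0s followed by a 1.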

The tabular, or functional, representation of this example does not give a global view of the machine behavior. The graph representation is preferable, not only as a representation in itself but also for theoretical purposes.

Definition: Let M = (Q, Σ, Γ, δ, q0, b, qA, qR) be a Turing machine. The control graph of M, denoted by CGM, is a directed labelled graph in which:

(a) The number of nodes equals the number of states in Q,
(b) The nodes are labelled by the states of Q,
(c) Each move δ (qi, σ) = (qj, τ, D) defines one edge, labelled [σ, τ, D], from the node qi to the node qj.

Therefore, following this definition, the control graph for the Turing machine defined in expression (7.13) has the form pictured in Figure 7.2.

Figure 7.2: Control graph for Turing machine 7.13.

Some definitions and notations are useful in this area.


Definition: A configuration of a Turing machine M = (Q, Σ, Γ, δ, q0, b, qA, qR) is a member (q, uav) ∈ Q × (Γ∗ × Γ × Γ∗), such that:

(a) The symbol a indicates the actual head position, with a ∈ Γ.

(b) u, v ∈ Γ∗ are finite strings and v ends with a symbol in Σ.

Notation: Given two configurations C1 = (q1, u1a1v1) and C2 = (q2, u2a2v2) of a Turing machine M = (Q, Σ, Γ, δ, q0, b, qA, qR), we write:

C1 ⊢M C2, (7.14)

to say that the configuration C1 yields the configuration C2 in one execution step of the Turing machine.

With this notation, we can say that a computation by M is a sequence of configurations C0, C1, C2, ..., Cn, for some n ≥ 0, such that:

C0 ⊢M C1 ⊢M C2 ⊢M · · · ⊢M Cn. (7.15)

In this case, we say that the computation is of length n, or that it has n steps, and rewrite expression (7.15) as:

C0 ⊢^n_M Cn. (7.16)

In a more general form, we say that:

C1 ⊢^∗_M C2, (7.17)

if configuration C1 yields configuration C2 after a finite number of steps (a computation of finite length) of the Turing machine M.

Definition: Let M = (Q, Σ, Γ, δ, q0, b, qA, qR) be a Turing machine and w ∈ Σ∗. Suppose that (q0, w) ⊢^∗_M (h, y), for some h ∈ {qA, qR} and y ∈ Σ∗. Then, y is called the output of M on input w, and is denoted by M (w).

Definition: Now let us consider a function f : Σ∗ → Σ∗. We say that a Turing machine M computes f if, for all w ∈ Σ∗, we have M (w) = f (w).


7.5 Nondeterministic Turing Machines

We can design Turing machines that act nondeterministically: such machines might have, on certain combinations of state and scanned symbol, more than one possible choice to proceed. Formally, we can state the following definition:

A nondeterministic Turing Machine M is defined through an 8-tuple (Q, Σ, Γ, ∆, q0, b, qA, qR), where:

Q is a finite set of states,
Γ is an alphabet of input symbols,
b ∈ Γ is the blank symbol,
Σ ⊂ Γ − {b} is the alphabet of output symbols,
∆ is a subset of (Q × Γ) × (Q × Σ × {L, R}),
q0 ∈ Q is the initial state,
qA, qR ∈ Q are the final states.

We must observe that, differently from the deterministic case, the subset ∆ may not define a function. We can define the control graph for a nondeterministic Turing machine as in the deterministic case; Figure 7.3 shows such an example. For instance, we observe that when the device is in state q0 and the input symbol is 0, there are two possibilities for the computation to proceed: the head writes symbol 0, moves one cell to the right (R) and the state is changed to q1; or the head prints symbol 1, moves one cell to the right (R) and keeps the state q0.

Figure 7.3: Control graph for a Nondeterministic Turing machine.


Before proceeding, we must first define what it means for a nondeterministicTuring machine to compute, or to decide, something.

Definition: Let M = (Q, Σ, Γ, ∆, q0, b, qA, qR) be a nondeterministic Turing machine. We say that M decides a language L ⊂ Σ∗ if the following two conditions hold:

(a) For each w ∈ Σ∗, there is an n ∈ N, depending on M and w, such that there is no configuration C satisfying (q0, w) ⊢^n_M C,

(b) w ∈ L if and only if (q0, w) ⊢^∗_M (qA, uav), for some u, v ∈ Σ∗ and a ∈ Σ.

7.6 Deterministic Turing Machines and Languages

Given a deterministic Turing machine M = (Q, Σ, Γ, δ, q0, b, qA, qR), we define:

Definition: The language recognized by M is the set of words belonging to Σ∗ that are accepted by M. It is denoted by Accepted by (M). Formally, we have:

Accepted by (M) = {w ∈ Σ∗ ; M stops in qA}.

Definition: The language rejected by M is the set of words belonging to Σ∗ that are rejected by M. It is denoted by Rejected by (M). Formally, we have:

Rejected by (M) = {w ∈ Σ∗ ; M stops in qR}.

Definition: The loop language is the set of words belonging to Σ∗ for which M enters an infinite loop. It is denoted by Loop (M). Formally, we have:

Loop (M) = {w ∈ Σ∗ ; M enters an infinite loop}.

Given these definitions, the following identities are easily verified:

Accepted by (M) ∪ Rejected by (M) ∪ Loop (M) = Σ∗,

Accepted by (M) ∩ Rejected by (M) = ∅,

Accepted by (M) ∩ Loop (M) = ∅,


Rejected by (M) ∩ Loop (M) = ∅.

Definition: A language accepted by a deterministic Turing machine is said to be recursively enumerable (RE).

Observe that, given a recursively enumerable language and a Turing machine M that accepts it, the set Loop (M) may not be empty!

Definition: A language L is said to be recursive if there is a Turing machine M such that:

Accepted by (M) = L,

Rejected by (M) = Σ∗ − L,

Loop (M) = ∅.

In the literature, some synonyms for the term recursive can be found. The most common are computable, decidable and solvable [22]. In what follows, we adopt the terms decidable (resp. undecidable) when discussing languages. Similarly, we say computable (resp. non-computable) when concerned with functions over the natural numbers.

A fundamental result in computer science is the fact that there are languages that are not RE. Before demonstrating this fact, we must observe that the set of Turing machines is countably infinite (Exercise 16).

Now, let us consider languages over the set {0, 1}. Firstly, we need to define some ordering in the set {0, 1}∗. This can be easily obtained by the following definition:

Definition: The canonical ordering of the set of words {0, 1}∗ is the ordering arising from the relation ≤can defined by the following rules. For x, y ∈ {0, 1}∗, we say that x ≤can y if and only if:


x = y, (7.18)

or

|x| < |y|,

or

|x| = |y|, and x = 0u, y = 1v,

or

|x| = |y|, and x = σu, y = σv and u ≤can v,

where σ ∈ {0, 1}. This ordering is: 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, .... We can show that ≤can is a total order; that is, if x, y, z ∈ {0, 1}∗ we can show that (Exercise 9):

(1) x ≤can x, ∀x ∈ {0, 1}∗ (reflexive property),
(2) If x ≤can y and y ≤can x, then x = y (antisymmetric property),
(3) If x ≤can y and y ≤can z, then x ≤can z (transitive property).
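The canonical ordering is easy to generate mechanically: list strings by increasing length, and strings of equal length in lexicographic order with 0 before 1, exactly as the rules of (7.18) prescribe. A Python sketch (starting at length 1, to match the listing shown above):

```python
# Enumerate {0,1}* in the canonical ordering of (7.18):
# shorter strings first, equal-length strings in lexicographic order.
from itertools import count, islice, product

def canonical():
    for n in count(1):
        for bits in product('01', repeat=n):
            yield ''.join(bits)

print(list(islice(canonical(), 10)))
# ['0', '1', '00', '01', '10', '11', '000', '001', '010', '011']
```

Such a generator is precisely the bijection f : N → Σ∗ needed to show that {0, 1}∗ is countably infinite, and it gives concrete meaning to "the jth word wj" used in the diagonalization argument below.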

Now, we can prove the following theorem.

Theorem: There exist languages over {0, 1} that are not RE.

Proof: Following the ordering given by expression (7.18), we observe that any language L ⊂ {0, 1}∗ can be described by an infinite binary string

m1 m2 ... m_{i−1} m_i m_{i+1} ...,

where

wj ∈ L ⇔ mj = 1,

wj ∉ L ⇔ mj = 0,

and wj is the jth word in the canonical ordering defined in expression (7.18). With this representation for languages and the fact that the set of Turing machines is countably infinite, we can use the technique called diagonalization to prove the theorem by contradiction. So, let us suppose that all languages over {0, 1}∗ are RE. From this assumption we can build Table 7.1 below.

TM1 | m^1_1  m^1_2  m^1_3  · · ·  m^1_k  · · ·
TM2 | m^2_1  m^2_2  m^2_3  · · ·  m^2_k  · · ·
TM3 | m^3_1  m^3_2  m^3_3  · · ·  m^3_k  · · ·
· · ·
TMk | m^k_1  m^k_2  m^k_3  · · ·  m^k_k  · · ·
· · ·

Table 7.1: Enumeration of Turing machines against languages over {0, 1}∗.

Here m^j_k ∈ {0, 1} denotes whether the word wk is accepted by the jth Turing machine TMj. Now, take the language, called DIAG, defined by the following expression:

ρj = 0, if m^j_j = 1, (7.19)

ρj = 1, if m^j_j = 0.

From the assumption that all languages over {0, 1}∗ are RE, it should be true that DIAG is also RE, and so there is some Turing machine accepting exactly the words in DIAG. Suppose that this is the kth Turing machine in Table 7.1. Now consider the word wk. By the definition of DIAG in expression (7.19), wk ∈ DIAG if and only if ρk = 1, that is, if and only if m^k_k = 0, which means that TMk does not accept wk. This contradicts the assumption that TMk accepts exactly the words of DIAG, and the contradiction proves the theorem. □
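The diagonal construction can be visualized on a finite fragment of Table 7.1. In the Python sketch below the membership bits are hypothetical (any 0/1 values would do); the point is that the language built from the flipped diagonal provably differs from every row:

```python
# A finite sketch of diagonalization: row j holds the membership bits
# m^j_k of the language accepted by TM_j, for the first few words w_k.

table = [
    [1, 0, 1, 0],   # hypothetical bits m^1_k
    [0, 0, 1, 1],   # hypothetical bits m^2_k
    [1, 1, 0, 0],   # hypothetical bits m^3_k
    [0, 1, 1, 1],   # hypothetical bits m^4_k
]

# rho_j = 1 - m^j_j, exactly as in (7.19)
diag = [1 - table[j][j] for j in range(len(table))]
print(diag)  # [0, 1, 1, 0]

# DIAG differs from row j at position j, so it matches no row of the table
assert all(diag[j] != table[j][j] for j in range(len(table)))
```

In the proof, the same disagreement at position k is what makes it impossible for any TMk in the enumeration to accept DIAG.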

7.7 Extensions of Turing Machines

There are some variations of the Turing machine definition. For instance, we can consider Turing machines with multiple tracks; that is, in which the tape is divided into k tracks, each track square recording one symbol (Figure 7.4).

Also, we can consider Turing machines with a two-way infinite tape, with cells numbered ..., −2, −1, 0, 1, 2, ... (Figure 7.5). Besides, we can increase the number of tapes to make k-tape Turing machines, denoted by Mk. In this case, each tape is scanned by its own read/write head, as pictured in Figure 7.6.

In a single move the symbol scanned by each head is read (in parallel) andthese k symbols together with the current state are used to determine the next


Figure 7.4: K-track Turing machine.

Figure 7.5: Two-Way Turing machine.


Figure 7.6: K-Tape Turing machine: each tape is scanned by its own head.

state, which symbol is printed by each head, and whether each head moves left one square (L), right one square (R) or remains stationary (S). In this case, the function δ is defined by:

δ : Q × Γ^k → Q × Γ^k × {L, R, S}^k.

One important question is whether these variations improve the power of the Turing machine. One way to address this question is in the context of languages. So, for any alphabet Σ, let 1−WAY (Σ) denote the set of languages over Σ that can be accepted by Turing machines having a one-way infinite tape, 2−WAY (Σ) the set of languages over Σ that can be accepted by Turing machines having a two-way infinite tape, and k−Tape (Σ) the set of languages over Σ that can be accepted by k-tape Turing machines. The following theorems can be proved [10, 22]:

Theorem: 1−WAY (Σ) = 2−WAY (Σ), ∀Σ.

Theorem: 1−Tape (Σ) = k−Tape (Σ), ∀k ∈ N and ∀Σ.

Considering these results, what could be a definition of equivalence for Turing machines?

7.8 Grammars

Two important concepts in computer science are language recognizers and language generators. The former is any device that accepts valid strings, while the latter is a device that generates, in a well specified sense, the strings of a language. All these devices are finite ones, because the alphabets of interest are finite sets of symbols. The regular expressions defined in section 7.3 define a class of language generators. In this section we are going to introduce another kind of language generator, called grammars. Then, we state the fundamental fact that the class of languages generated by such grammars is precisely the class of recursively enumerable ones.

Definition: A grammar is a quadruple G = (V, Σ, R, S) where:

V is an alphabet,
Σ ⊂ V is the set of terminal symbols, and V − Σ is the set of nonterminal symbols,
S ∈ V − Σ is the start symbol,
R, the set of rules, is a finite subset of (V∗ (V − Σ) V∗) × V∗.

We write u → v if (u, v) ∈ R, and u ⇒G v if and only if, for some w1, w2 ∈ V∗ and some rule a → b ∈ R, u = w1aw2 and v = w1bw2.

A string w ∈ Σ∗ is generated by G if and only if

S ⇒G w1 ⇒G w2 ... ⇒G wn−1 ⇒G w, (7.20)

where w1, w2, ..., wn−1 ∈ V∗. Generally, we write

w0 ⇒∗G w

to represent a derivation in G of w from w0 by a finite sequence of the form of expression (7.20). Finally, the language generated by G, called L (G), is the set:

L (G) = {w ∈ Σ∗ ; S ⇒∗G w}. (7.21)

Example: Consider the grammar G = (V,Σ, R, S) where:


V = {S, a, b} is the alphabet,
Σ = {a, b} is the set of terminal symbols,
S ∈ V − Σ is the start symbol,
R consists of the rules S → aSb, S → λ (null string).

A possible derivation is: S ⇒ aSb ⇒ aaSbb ⇒ aabb. We can show that L (G) = {a^n b^n ; n ≥ 0} (Exercise 17).

The following theorem is fundamental.

Theorem: A language is generated by a grammar if and only if it is recursively enumerable [22].
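Derivations in this grammar can be enumerated mechanically. The Python sketch below (the function `derive` is ours, specialized to grammars whose only nonterminal is S) expands the leftmost S by each rule in turn, breadth-first, and collects the terminal strings produced, confirming L (G) = {a^n b^n ; n ≥ 0} for small n:

```python
# Breadth-first enumeration of derivations for the grammar with
# rules S -> aSb and S -> lambda (the empty string '').

def derive(rules, start, max_len):
    terminal, frontier = set(), {start}
    while frontier:
        nxt = set()
        for w in frontier:
            if 'S' not in w:
                terminal.add(w)          # fully derived: only terminals remain
                continue
            i = w.index('S')             # rewrite the leftmost S
            for rhs in rules:
                v = w[:i] + rhs + w[i + 1:]
                if len(v) <= max_len + 1:   # +1 allows a still-present S
                    nxt.add(v)
        frontier = nxt
    return terminal

L = derive(['aSb', ''], 'S', 6)
print(sorted(L, key=len))  # ['', 'ab', 'aabb', 'aaabbb']
```

Each pass corresponds to one ⇒G step of expression (7.20), so the set returned is exactly the strings generated by derivations that fit within the length bound.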

7.9 Exercises

1) Show that if L ⊂ {0, 1}∗ is the language such that w ∈ L if and only if w has an unequal number of 0's and 1's, then L∗ = {0, 1}∗.

2) Compare definitions (7.8)-(7.9). What is the difference between them (see [22], page 45)?

3) Show that L+ is the smallest language that includes L and all strings that are concatenations of one or more strings in L.

4) Is it possible for two different regular expressions to define the same language? Give examples.

5) Would it be possible to compute a language that does not have a finite representation?

6) Given the regular expression defined by the string (7.11), write a program that receives as input a string w ∈ {0, 1}∗ and sends as output yes if w ∈ L or no if w ∉ L.

7) Write a program that simulates a deterministic Turing machine.

8) Write a program that simulates a nondeterministic Turing machine.

9) Show that the ordering defined in (7.18) is a total order.

10) Show that the definition given by expression (7.2) and the recursive one are equivalent.

11) Show that the following recursive definition is equivalent to the operation of Reversal: Given a string w:

(a) If w = λ, then wR = λ.


(b) If w is a string of length n + 1 > 0, then w = ua for some a ∈ Σ, and wR = auR.

12) Prove that if Σ is finite then Σ∗ is countably infinite.

13) Demonstrate that, given a finite alphabet Σ, the set P (Σ∗) is not countably infinite.

14) Prove that the definition given by expression (7.12) implies that different languages must have different representations.

15) Write the following Turing machines:
a) A machine that recognizes the language L of exercise 6.
b) Turing machines that compute the operators AND, OR, ∼.
c) Write the control graph for these machines.

16) Show that the set of Turing machines is countably infinite.

17) Consider the grammar G = (V, Σ, R, S) where:
V = {S, a, b} is the alphabet,
Σ = {a, b} is the set of terminal symbols,
S ∈ V − Σ is the start symbol,
R consists of the rules S → aSb, S → λ (null string).
Show that L (G) = {a^n b^n ; n ≥ 0}.

18) Find a grammar that generates the language L = {a^n b^n c^n ; n ≥ 0}.

19) Consider a definition of deterministic Turing machine that allows three possibilities for the head movement, say R, S, L, where S means "stay in the actual position" until the next step. Demonstrate that such a definition does not increase the power of the Turing machine.

20) Prove that the recursive definition for the set Σ∗ and the definition in expression (7.2) are equivalent (section 7.2).


Chapter 8

Complexity Theory

8.1 Introduction

One fundamental contribution of the concept of Turing machines is the mathematical notion of a procedure, or an algorithm. Since its development, researchers have realized that Turing machines can carry out surprisingly complex tasks. Besides, we have also seen in section 7.7 that certain additional features that we might consider adding to the basic Turing machine model, including k tracks, multiple tapes, etc., do not increase the set of tasks that can be accomplished. Finally, it is possible to show that any other computational model (Partial Recursive Functions, circuits, etc.) can also be simulated by Turing machines. Such observations are the source for the Church-Turing thesis, which can be stated as follows:

Church-Turing Thesis: We therefore propose to adopt the Turing machinethat halts on all inputs as the precise formal notion of the intuitive notion of analgorithm. In other words, we assume that an algorithm is a Turing machine thatalways halts. [22]

It is important to keep in mind that this is not a theorem! It is just a consequence of the fact that, despite many attempts to disprove it, up to now there are no examples that contradict the thesis. To disprove this thesis, it would be necessary for someone to propose an alternative model of computation that is publicly acceptable as a plausible and reasonable model and yet is provably more powerful than Turing machines. Such a task, if possible, remains an open point in the state of the art of our scientific knowledge.

There is also the so-called strong Church-Turing thesis, which states that:

Strong Church-Turing Thesis: Any model of computation can be simulated

on a nondeterministic Turing machine with at most a polynomial increase in thenumber of elementary operations required.[9]

In this Chapter, we provide some theoretical elements behind this thesis. The way to be followed goes towards complexity theory. Therefore, we start with languages and some fundamental problems in this field. Then, the complexity classes P and NP are presented. Besides, we describe the complexity of algorithms and present some concepts in graph theory that are used during the discussion. At the end of this Chapter, a list of exercises can be found.

8.2 Complexity TheoryIn the preceding Chapters we dealt with the question: What can be effectivelycomputed? This is the basic question in Computability Theory [10].

Now, we turn to the question: What can be efficiently computed? This is the fundamental question in Complexity Theory [22, 23, 16]. Before addressing this question, we must state precisely the meaning of "efficiently computed". Next, we offer the mathematical elements to perform this task. Then, we turn to the idea of encoding problems through languages and describe some fundamental problems (languages) that will drive us towards the definitions of complexity classes.

8.3 Rates of Growth

In this section we offer the mathematical background needed to quantify the resources required to solve problems on computers. It is important to keep in mind that our viewpoint is always independent of specific architectures or physical devices. Therefore, we must consider abstract elements that allow us to obtain fundamental relations with practical consequences. The following definitions are very important to achieve this goal.

We will focus on functions f : N → N. In fact, we could also allow functions that take negative values at a finite number of points. Therefore, let f, g : N → N.


Then:

Definition: We say that f(n) = O(g(n)) if there are numbers c and n0 such that f(n) ≤ c·g(n), for all n ≥ n0.

Definition: We say that f(n) = Ω(g(n)) if there are numbers c and n0 such that f(n) ≥ c·g(n), for all n ≥ n0.

Definition: We say that f(n) = Θ(g(n)) if f(n) = O(g(n)) and f(n) = Ω(g(n)).
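The first definition can be illustrated numerically. The sketch below (function name and witnesses c, n0 are our own illustration) checks f(n) ≤ c·g(n) over a finite range; of course, a finite check only illustrates an asymptotic claim, it never proves one.

```python
# Illustrative check of the definition of f(n) = O(g(n)):
# verify f(n) <= c * g(n) for candidate witnesses c and n0 over a finite range.

def witnesses_big_o(f, g, c, n0, n_max=1000):
    """Return True if f(n) <= c*g(n) for all n0 <= n <= n_max."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# Example: f(n) = 3n + 10 is O(n), with witnesses c = 4 and n0 = 10.
f = lambda n: 3 * n + 10
g = lambda n: n
print(witnesses_big_o(f, g, c=4, n0=10))  # True
print(witnesses_big_o(f, g, c=3, n0=10))  # False: 3n + 10 > 3n for every n
```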

8.4 Languages and Problems

Languages can be used to encode decision problems. On the other hand, a language L ⊂ Σ* can be thought of as the following decision problem:

Decision Problem for a Language L:

f : Σ* → {0, 1};

f(x) = 1 if x ∈ L,

f(x) = 0 otherwise.

Sometimes it is worthwhile to think of a problem and the associated language interchangeably. That is the case in computational complexity, because languages are naturally appropriate in connection with Turing machines.

Before going into the computational complexity field, we shall present some important problems and the corresponding languages. In what follows, some concepts in graph theory will be fundamental. They are discussed in section 8.7.

8.4.1 Graph Problems and Languages

Reachability: Given a directed graph G ⊂ V × V, where V = {v1, v2, ..., vn} is a finite set, and two nodes vi, vj ∈ V, is there a path from vi to vj?

This problem can be encoded through the following language:


LR = { k(G) b(i) b(j) ; there is a path from vi to vj in G }, (8.1)

where b(i) is a binary representation of the integer i, and k is a reasonable way to encode graphs as strings (see section 8.7).
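The Reachability decision problem can be answered efficiently by graph search. The sketch below (helper names are ours; the graph is given directly as a set of arcs rather than through the encoding k) decides reachability by breadth-first search.

```python
from collections import deque

# A minimal sketch of the Reachability decision problem: given a directed
# graph as a set of arcs, decide whether node j is reachable from node i.

def reachable(arcs, i, j):
    """Decide membership of the instance (G, i, j) in the language L_R."""
    adj = {}
    for (u, v) in arcs:
        adj.setdefault(u, []).append(v)
    seen, queue = {i}, deque([i])
    while queue:
        u = queue.popleft()
        if u == j:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

G = {(1, 2), (2, 3), (3, 1), (4, 1)}
print(reachable(G, 1, 3), reachable(G, 1, 4))  # True False
```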

Euler Cycle: Given a graph G, is there a closed path in G that uses each edgeexactly once?

A graph that contains such a path is called Eulerian, in honour of Leonhard Euler, the great mathematician of the eighteenth century who solved this problem. By using the binary encoding k of graphs given in section 8.7, we can associate the Euler Cycle Problem with the following language:

LE = { k(G) ; G is Eulerian }. (8.2)

Hamilton Cycle: Given a graph G, is there a cycle that passes through eachnode of G exactly once?

A graph that contains such a cycle is called Hamiltonian. Following the same ideas used for Eulerian graphs, we can encode the Hamilton Cycle Problem with the following language:

LH = { k(G) ; G is Hamiltonian }. (8.3)
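The Euler Cycle language can be decided in polynomial time thanks to Euler's classical characterization: an undirected graph is Eulerian if and only if it is connected (ignoring isolated nodes) and every node has even degree. A sketch, with helper names of our own choosing:

```python
# Decide the Euler Cycle language via Euler's characterization:
# connected (ignoring isolated nodes) and all degrees even.

def is_eulerian(nodes, edges):
    degree = {v: 0 for v in nodes}
    adj = {v: [] for v in nodes}
    for (u, v) in edges:
        degree[u] += 1
        degree[v] += 1
        adj[u].append(v)
        adj[v].append(u)
    if any(d % 2 for d in degree.values()):
        return False                       # an odd-degree node: not Eulerian
    start = next((v for v in nodes if degree[v] > 0), None)
    if start is None:
        return True                        # no edges at all: trivially Eulerian
    stack, seen = [start], {start}
    while stack:                           # depth-first connectivity check
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return all(degree[v] == 0 or v in seen for v in nodes)

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_eulerian([0, 1, 2, 3], square))  # True
```

No comparably simple criterion is known for the Hamilton Cycle language, which is one way to appreciate the complexity gap between the two problems discussed later in this Chapter.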

8.4.2 Optimization Problems and Languages

In general, optimization problems cannot be encoded by languages in a straightforward way, like in the examples of section 8.4.1. This difficulty arises from the fact that optimization problems are not the kind of problem that requires a "yes" or "no" answer. Instead, they require us to find the best solution (according to some cost function) among many possible ones.

However, we can bypass such difficulty using a general method for turning these problems into languages: supply each input with a bound on the cost function.

As an example, let us consider the famous Travelling Salesman Problem. In this problem, we are given a set {c1, c2, ..., cn} of cities, and an n × n matrix of nonnegative integers dij, meaning the distance between city ci and city cj. It is supposed that dii = 0 and dij = dji, ∀i, j. We are asked to find the shortest (closed) tour of the cities. Therefore, we need to find a bijection π : {1, 2, ..., n} → {1, 2, ..., n}, a permutation in fact, such that the quantity:

c (π) = dπ(1)π(2) + dπ(2)π(3) + · · · + dπ(n−1)π(n) + dπ(n)π(1), (8.4)

is as small as possible.

By using a bound B, we can study the Travelling Salesman Problem as follows:

Travelling Salesman Problem: Given an integer n ≥ 2, an n × n distance matrix dij, and an integer B ≥ 0, find a permutation π of {1, 2, ..., n} such that c(π) ≤ B.
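The bounded formulation can be sketched by brute force (function name and the example matrix are our own illustration). Checking all tours takes exponential time, which is exactly why this formulation is used to study the problem's complexity rather than to solve it in practice.

```python
from itertools import permutations

# Brute-force sketch of the bounded Travelling Salesman decision problem:
# is there a closed tour of cost c(pi) <= B?

def tsp_decision(d, B):
    """d is an n x n symmetric distance matrix with d[i][i] = 0."""
    n = len(d)
    for perm in permutations(range(1, n)):   # fix city 0 to break symmetry
        tour = (0,) + perm
        cost = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
        if cost <= B:
            return True
    return False

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_decision(d, 18), tsp_decision(d, 17))  # True False (optimum is 18)
```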

8.4.3 Integer Partition and Factoring

Partition: Given a set of n numbers {a1, a2, ..., an} ⊂ N*, represented in binary, is there a set P ⊂ {1, 2, ..., n} such that Σ_{i∈P} ai = Σ_{i∉P} ai?
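The Partition question above can be answered by exhaustive search over subsets of indices, a sketch of which follows (helper name and example data are ours). An early exit handles odd totals, since no equal split can then exist.

```python
from itertools import combinations

# Brute-force sketch of the Partition problem: try every index set P and
# test whether it splits the numbers into two halves of equal sum.

def has_partition(a):
    total = sum(a)
    if total % 2:
        return False                      # odd total: no equal split possible
    half = total // 2
    idx = range(len(a))
    return any(sum(a[i] for i in P) == half
               for r in range(len(a) + 1)
               for P in combinations(idx, r))

print(has_partition([3, 1, 1, 2, 2, 1]))  # True  (e.g. {3, 2} vs {1, 1, 2, 1})
print(has_partition([2, 3, 4]))           # False
```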

Factoring: Given a composite integer m and l < m, does m have a non-trivial factor less than l?

8.4.4 Boolean Expressions

A Boolean expression is satisfiable if it yields true for at least one truth assignment.

Satisfiability Problem: Given a Boolean expression F : {0, 1}^n → {0, 1}, find the set:

S = { (x1, x2, ..., xn) ∈ {0, 1}^n ; F(x1, x2, ..., xn) = 1 }.
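A brute-force sketch of the Satisfiability problem enumerates all 2^n truth assignments; the expression F below is an arbitrary example of ours, not one from the notes.

```python
from itertools import product

# Enumerate the set S of satisfying assignments of F : {0,1}^n -> {0,1}.

def satisfying_set(F, n):
    """Return the assignments on which F evaluates to 1."""
    return [x for x in product((0, 1), repeat=n) if F(*x) == 1]

# Example expression: F(x1, x2, x3) = (x1 OR x2) AND (NOT x3).
F = lambda x1, x2, x3: int((x1 or x2) and not x3)
S = satisfying_set(F, 3)
print(S)  # [(0, 1, 0), (1, 0, 0), (1, 1, 0)]
```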

8.5 P versus NP

Definition: A deterministic Turing machine M is said to be polynomially bounded if there is a polynomial p(n) such that, for any input x ∈ Σ*, we have (q0, x) ⊢_M^{O(p(|x|))} (h, y), with h ∈ {qA, qR} and y ∈ Σ*. In other words, the machine always halts after at most O(p(|x|)) steps.

Definition: A language L (or problem) is called polynomial-time decidable if there is a polynomially bounded Turing machine that decides it. The class of all polynomial-time decidable languages is called P.

Definition: A nondeterministic Turing machine M is said to be polynomially bounded if there is a polynomial p(n) such that, for any input x ∈ Σ*, we have (q0, x) ⊢_M^{O(p(|x|))} (h, y), with h ∈ {qA, qR} and y ∈ Σ*. In other words, the machine always halts after at most O(p(|x|)) steps.

Definition: We define NP (nondeterministic polynomial) to be the class of all languages that are decided by a polynomially bounded nondeterministic Turing machine.

An immediate consequence of these definitions is that P ⊂ NP. What about the question P = NP? This is one of the most fundamental questions in computer science. Despite the efforts of many researchers, we do not have the answer yet.

8.6 Complexity of Algorithms

Following the thesis that an algorithm is a Turing machine that always halts, we can extend the concept of polynomially bounded Turing machines to algorithms. However, some practical considerations must be made. Firstly, we shall consider problems that may not be represented as decision problems. Besides, for Turing machines the notation "⊢_M" carries the idea of one execution step of the machine. For practical algorithms a similar notion can be defined if we consider the main step of the algorithm. For instance, such a step may be each iteration of a "for" loop in an iterative program.

For numerical problems (solution of linear systems, simulation of natural phenomena, etc.), an important measure of complexity for an algorithm is the number of floating point operations performed. So, let P be a problem, A a computational algorithm to solve it, and E = {E1, E2, ..., Em} the set of all possible inputs. We will denote by ti = ti(Ei) the number of floating point operations performed by A when the input is Ei. Then, we define [24]:


Complexity of Worst Case = max { ti(Ei) ; Ei ∈ E },

Complexity of Best Case = min { ti(Ei) ; Ei ∈ E },

Average-Case Complexity = Σ_{i=1}^{m} pi ti,

where pi is the probability of the input Ei.
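The three measures above can be computed directly from the operation counts; the sketch below uses made-up counts t_i and a uniform input distribution purely for illustration.

```python
# Worst-case, best-case and average-case complexity from the operation
# counts t_i of each input E_i and the input probabilities p_i.

def complexity_measures(t, p):
    worst = max(t)
    best = min(t)
    average = sum(pi * ti for pi, ti in zip(p, t))
    return worst, best, average

t = [10, 40, 25, 25]           # floating point operations per input (made up)
p = [0.25, 0.25, 0.25, 0.25]   # uniform input distribution
print(complexity_measures(t, p))  # (40, 10, 25.0)
```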

8.7 Elements of Graph Theory

Given a set A, we call a subset R ⊂ A × A a relation on A. It is interesting to observe that any relation R can be represented by a directed graph. For a review of graph theory see [22], Chapter 1, or [6]. In a directed graph, the elements of A are represented by small circles (the nodes of the graph) and an arrow is drawn from a to b if and only if (a, b) ∈ R (Figure 8.1).

Some important properties of relations are the following:

1) A relation R ⊂ A × A is reflexive if (a, a) ∈ R, ∀a ∈ A;

2) R ⊂ A × A is symmetric if (a, b) ∈ R ⇔ (b, a) ∈ R;

3) R ⊂ A × A is transitive if (a, b) ∈ R and (b, c) ∈ R imply (a, c) ∈ R;

4) R ⊂ A × A is antisymmetric if whenever (a, b) ∈ R and a ≠ b, then (b, a) ∉ R.

A relation that has properties (1)-(3) is called an equivalence relation.

A symmetric relation without pairs of the form (a, a) is represented as an undirected graph, or simply a graph, which is drawn without arrowheads.

A path from a1 to an, in a binary relation R, is a sequence (a1, a2, ..., an), for some n ≥ 1, such that (ai, ai+1) ∈ R for i = 1, ..., n − 1. The length of the path (a1, a2, ..., an) is n. The path (a1, a2, ..., an) is a cycle if the ai are all distinct and also (an, a1) ∈ R.

An important concept in graph theory is the adjacency matrix. So, let R ⊂ A × A be a directed graph with n nodes and AR an n × n matrix with 0-1 entries such that AR(i, j) = 1 if and only if (ai, aj) ∈ R. The matrix AR is called


Figure 8.1: Example of directed graph.

the adjacency matrix of R, or adjacency matrix of the corresponding graph. It is important to observe that this matrix allows us to represent a graph as a binary string xR ∈ {0, 1}*. It is just a matter of stacking each row (or column) of AR to the right of the previous one.
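The row-stacking encoding just described can be sketched in a few lines (function name is ours):

```python
# Encode a relation R on nodes a_1, ..., a_n as the binary string x_R
# obtained by stacking the rows of its adjacency matrix.

def adjacency_string(n, R):
    """R is a set of pairs (i, j) with 1 <= i, j <= n."""
    rows = []
    for i in range(1, n + 1):
        rows.append(''.join('1' if (i, j) in R else '0'
                            for j in range(1, n + 1)))
    return ''.join(rows)

R = {(1, 2), (2, 3), (3, 1)}
print(adjacency_string(3, R))  # '010001100'
```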

Definition: Let R ⊂ A × A be a directed graph defined on a set A. The reflexive transitive closure of R is the relation:

Rc = { (a, b) ; a, b ∈ A and there is a path from a to b in R }.
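The closure Rc can be computed on the adjacency matrix with a Floyd-Warshall-style update, a sketch of which follows (note that length-1 paths make the closure reflexive, matching the path definition above; helper name is ours).

```python
# Reflexive transitive closure on an adjacency matrix: C[i][j] becomes 1
# whenever there is a path from a_i to a_j in R.

def reflexive_transitive_closure(M):
    n = len(M)
    C = [[M[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        C[i][i] = 1                      # every node reaches itself
    for k in range(n):                   # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                C[i][j] = C[i][j] or (C[i][k] and C[k][j])
    return C

M = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(reflexive_transitive_closure(M))
# [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
```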


Chapter 9

Cellular Automata Theory

9.1 Introduction

Cellular automata are discrete dynamical systems [25, 26] originally proposed by Von Neumann [27] (see also [28] for a brief history). They consist of a lattice of discrete identical sites, each site taking on a finite set of values [29, 30]. The values of the sites evolve in discrete time steps according to simple rules that update the value of each site in terms of the values of neighboring sites [30, 31]. Cellular automata form a rich field of investigation that includes computational aspects like universality, languages/grammars and state transition diagrams [32, 29], statistical mechanics and probability (self-organization, Markov theory, fractals, etc.) [31, 33, 34, 35, 36], and algebraic methods (matrix algebra, polynomials over finite fields, etc.) [37, 31, 38, 39, 40, 41, 42], among others [30, 43]. They have been applied to pattern classification and recognition [44, 45, 33], pattern generation [46, 43, 41], hardware architectures for massively parallel computation [47, 48], models for biological processes [49, 50, 51] and physical systems simulation [52, 53, 54, 55, 56, 57, 58].

This Chapter starts with basic concepts of cellular automata (section 9.2). Then, in section 9.3, we present some aspects of computational theory for cellular automata. Section 9.4 reviews algebraic representations for cellular automata. Some aspects of nondeterminism in cellular automata are discussed in section 9.5. Considerations about fractal structures and cellular automata are presented in section 9.6. Besides, in section 9.6.1 we give some elements of fractal theory in order to make the material self-contained. Some concepts about self-organization are presented in section 9.7. We end this Chapter with a list of exercises.

9.2 Cellular Automata

A cellular automaton (CA) is a quadruple (L; S; N; f), where L is a set of indices or sites, S is the finite set of site values or states, N : L → L^k is a mapping defining the neighborhood of every site i as a collection of k sites, and f : S^k → S is the evolution function of the CA [31, 59]. The neighborhood of site i is defined as the set N(i) = { j ; |j − i| ≤ [(k − 1)/2] }, where [x] stands for the integer part of x. Since the set of states is finite, {fj} will denote the set of possible rules of the CA, taken among the p = (#S)^((#S)^k) rules (Exercise 4).

For a one-dimensional cellular automaton, the lattice L is an array of sites, and the transition rule f updates a site value according to the values of a neighborhood of k = 2r + 1 sites around it, that is:

f : S^{2r+1} → S, (9.1)

a_i^{t+1} = f(a_{i−r}^t, ..., a_{i−1}^t, a_i^t, a_{i+1}^t, ..., a_{i+r}^t), (9.2)

a_j^t ∈ S, j = i − r, ..., i + r, (9.3)

where t denotes the evolution time, also taking discrete values, and a_i^t denotes the value of the site i at time t [31, 30] (see also [60] for on-line examples). Therefore, given a configuration of site values at time t, it will be updated through the application of the transition rule to generate the new configuration at time t + 1, and so on. In the case of r = 1 in expression (9.2) and S = {0, 1}, we have a special class of cellular automata which was extensively studied in the CA literature [28, 41, 42, 31]. Figure 9.1 shows a well-known example of such a CA. The rule in this case is:

a_i^{t+1} = (a_{i−1}^t + a_{i+1}^t) mod (2), (9.4)


that is, the remainder of the division by two. Figure 9.1 pictures the evolution of an initial configuration in which there is only one site with the value 1. If the lattice L is a finite array of N cells, we need some boundary conditions to update the first and last cells (usually indexed as 0 and N − 1, respectively). One possibility is to consider the lattice as a periodic structure (periodic boundary conditions).

Figure 9.1: Evolution of a CA given by expression 9.4. In this case, the initial configuration is a finite one-dimensional lattice which has only one site with the value 1 (pictured in black).
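The evolution pictured in Figure 9.1 is easy to reproduce. The sketch below implements the update (9.4) on a finite lattice with periodic boundary conditions, starting from a single site with value 1 (function name is ours).

```python
# One step of the CA in expression (9.4) (rule 90) on a finite periodic lattice.

def step_rule90(config):
    n = len(config)
    return [(config[(i - 1) % n] + config[(i + 1) % n]) % 2
            for i in range(n)]

config = [0, 0, 0, 1, 0, 0, 0]   # only one site with the value 1
for t in range(4):
    print(config)                # each line is one row of the evolution
    config = step_rule90(config)
```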

Since r = 1 in expression (9.2), it is easy to check that this rule is defined by the function:

111 → 0,  110 → 1,  101 → 0,  100 → 1,
011 → 1,  010 → 0,  001 → 1,  000 → 0,   (9.5)

0·2^7 + 1·2^6 + 0·2^5 + 1·2^4 + 1·2^3 + 0·2^2 + 1·2^1 + 0·2^0 = 90. (9.6)

By observing this example, we see that there are 2^8 = 256 such rules, and to each one a rule number can be assigned following the indexing illustrated in expression (9.6). In [32], Wolfram proposes four basic classes of behavior for these rules (see also [59]):

Class 1: Evolution leads to a homogeneous state in which all the sites have the same value (Figure 9.2.a);

Class 2: Evolution leads to a set of stable and periodic structures that are separated and simple (Figure 9.2.b);

Class 3: Evolution leads to a chaotic pattern (Figure 9.2.c);

Class 4: Evolution leads to complex structures (Figure 9.2.d).

Figure 9.2: Some examples of Wolfram's classification for one-dimensional r = 1 cellular automata.

Other classifications, based on Markovian processes and group properties, can also be found in the literature [36, 40].

We shall also observe that the local rule (9.2) leads to a global mapping:

Φ : S^N → S^N. (9.7)

If we call Ω(0) = S^N, we can write:

Ω(t) = Φ^t(S^N), (9.8)


and consequently:

Ω(t+1) = Φ(Ω(t)). (9.9)

We can show that Φ(Ω(t)) ⊂ Ω(t) (Exercise 1). Despite its local simplicity, knowledge discovery in CA is an NP problem. In fact, let us take a one-dimensional CA with a finite lattice L of size d. One may consider the question of whether a particular sequence of d site values can occur after T time steps in the evolution of the cellular automaton, starting from any initial state. Then, one may ask whether there exists any algorithm that can determine the answer in a time given by some polynomial in d and T. The question can certainly be answered by testing all sequences of possible initial site values, that is, (#S)^d of them. But this procedure requires a time that grows exponentially with d. Nevertheless, if an initial sequence could be guessed, then it could be tested in a time polynomial in d and T. As a consequence, the problem is in the class NP. Moreover, inverse problems in CA are NP due to the fact that they can be cast as a Satisfiability problem (Exercise 2). Such observations have motivated the application of data mining techniques for knowledge discovery in CA [61].

9.3 Cellular Automata and Computational Theory

One possibility to represent the evolution of a cellular automaton is through its global state transition graph. In this representation, each one of the 2^N possible initial configurations is represented by a node, or point, in the graph, and a directed line connects each node to the node generated by a single application of the cellular automaton rule. Each computation produces a sequence of directed lines that composes a path in an N-dimensional hypercube. This representation is interesting for discussing the concept of attractors in the field of cellular automata. A point xf ∈ S^N is an attractor if Φ(xf) = xf, where Φ is the global mapping defined in expression (9.7). Besides, we may have a loop, that is, a sequence of distinct configurations Φ(x1), Φ^2(x1), ..., Φ^m(x1) such that Φ(Φ^m(x1)) = Φ(x1).

The global state transition graph representation may not be practical, mainly when N gets large. If we represent the CA as a graph in the same manner we did for Turing machines, we get a representation that is more convenient for theoretical and practical purposes. For instance, Figure 9.3 shows the state transition graph


for the CA with rule number 60. This graph must be read as follows: nodes are labeled by pairs xy ∈ {0, 1}^2, and each arc is labeled by a sequence xyz → w, with x, y, z ∈ {0, 1}, which means that, if a_{i−1}^t = x, a_i^t = y and a_{i+1}^t = z, then the rule generates a_i^{t+1} = w and the CA goes to the state yz.

Figure 9.3: State transition graph corresponding to the rule 60.

Such representation is much more compact than the global one and is moreconvenient to study cellular automata in the context of languages and grammars.

The graph in Figure 9.3 may be considered as the state transition graph for afinite automaton which generates the formal (regular) language Ω(1). Each nodecorresponds to a state of the finite automaton, and each arc to a transition in thefinite automaton, or equivalently to a rule in the grammar represented by the finiteautomaton. Labelling the states in the graph as u0, u1, u2, u3 we find the followingrules for the grammar:

u0 → 0u0, u0 → 0u1, u1 → 1u2, u1 → 1u3, (9.10)u2 → 0u0, u2 → 0u1, u3 → 0u3, u3 → 1u2.

We shall observe that this grammar generates a language defined by the regularexpression (see section 7.3):

Ω(1) = (0∗1 (0 ∪ 10)) . (9.11)

One advantage of turning the CA evolution into a language problem is that we can think about simpler automata that generate the same language Ω(1). Besides, we shall observe that, from the viewpoint of language generators, it does not matter if we omit the labels of the nodes.

The automaton pictured in Figure 9.3 works like a nondeterministic one in the sense that multiple arcs generating the same symbol emanate from some nodes, so that several distinct paths may generate the same word in the formal language. It is convenient to find a deterministic finite automaton G equivalent to the previous one. In order to do this, we use a technique called subset construction.

So, let Ψ be the set of all possible subsets of the set of nodes {u0, u1, u2, u3}. The construction of G starts with the start node ΨS = {u0, u1, u2, u3}. This node is joined by a 0 arc to the node corresponding to the set of states in the graph reached by a 0 arc, according to expressions (9.10), from any of the nodes in ΨS. Following this procedure for the remaining arcs and nodes, the resulting graph is the one shown in Figure 9.4, and may be represented by the rules:

ΨS = {u0, u1, u2, u3} → 0 {u0, u1, u3},   {u0, u1, u2, u3} → 1 {u2, u3},
{u0, u1, u3} → 0 {u0, u1, u3},   {u0, u1, u3} → 1 {u2, u3},
{u2, u3} → 0 {u0, u1, u3},   {u2, u3} → 1 {u2},
{u2} → 0 {u0, u1},   {u2} → 1 ∅,
{u0, u1} → 0 {u0, u1},   {u0, u1} → 1 {u2, u3}.

Figure 9.4: Deterministic finite automaton corresponding to rule 60 and its graph representation.
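The subset construction above can be sketched programmatically; `delta` below encodes the nondeterministic transitions derived from (9.10), and the helper names are ours.

```python
# Subset construction: delta maps (state, symbol) to the set of successor
# states; determinize explores subset-states starting from Psi_S.

delta = {('u0', '0'): {'u0', 'u1'}, ('u1', '1'): {'u2', 'u3'},
         ('u2', '0'): {'u0', 'u1'}, ('u3', '0'): {'u3'},
         ('u3', '1'): {'u2'}}

def determinize(start_set, symbols=('0', '1')):
    start = frozenset(start_set)
    table, todo = {}, [start]
    while todo:
        S = todo.pop()
        if S in table:
            continue
        table[S] = {}
        for a in symbols:
            # Union of the successors of every state in S under symbol a.
            T = frozenset().union(*(delta.get((q, a), set()) for q in S))
            table[S][a] = T
            if T not in table:
                todo.append(T)
    return table

dfa = determinize({'u0', 'u1', 'u2', 'u3'})
print(len(dfa))  # 6 reachable subset-states, including the empty sink
```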


Just as above, some of the states in the graph of Figure 9.4 are equivalent, and may be combined. Two states are equivalent if and only if transitions from them with all possible symbols (here 0 or 1) lead to equivalent states. An equivalent automaton is shown in Figure 9.5, which may be obtained by representing each equivalence class of states in G by a single state. It may be shown that this automaton is the minimal one that recognizes the corresponding language [29]. It is unique (up to state relabelling), and has fewer states than any other equivalent (deterministic) automaton.

Figure 9.5: Minimal deterministic finite automaton corresponding to rule 60.

The graph representation for cellular automata used above allows us to import techniques from graph theory for CA analysis.

For example, consider the minimal deterministic finite automaton represented by the graph given in Figure 9.5. It may be represented by the following adjacency matrix:

M =

1 1 0
1 0 1
1 0 0 ,   (9.12)

where the definition of the adjacency matrix is given in section 8.7.

Let N(m) be the number of possible length-m paths in the graph. For large m, we have:

N(m) ≈ Tr(M^m) = Σ_i λ_i^m ~ λ_max^m,

where λ_max is the maximum of the eigenvalues λ_i of M (see [29] and references therein).


9.4 Algebraic Properties of Cellular Automata

In this section we review algebraic properties of cellular automata based on polynomial and matrix representations. The section starts with some elements of finite fields, vector spaces and polynomials over a finite field. The main goal of the corresponding subsections is the representation of cellular automata through matrices as well as polynomials over the finite field GF(2).

9.4.1 Finite Fields and Vector Spaces

In this section we will focus on cellular automata defined over S^N, where S = {0, 1}. Our aim is to represent cellular automata by matrices. Additional material on these topics can be found in [29, 6]. So, let us define the operations:

⊕ : {0, 1}^2 → {0, 1}, ⊕(x, y) = (x + y) mod (2). (9.13)

· : {0, 1}^2 → {0, 1}, ·(x, y) = x · y (usual multiplication). (9.14)

We can prove the following properties (Exercise 6). For any x, y, z ∈ S :

1) (x ⊕ y) ⊕ z = x ⊕ (y ⊕ z);

2) There is an element e ∈ S, the additive identity, such that x ⊕ e = e ⊕ x = x;

3) ∀x ∈ S, there is an element −x, called the additive inverse, such that x ⊕ (−x) = (−x) ⊕ x = e;

4) x ⊕ y = y ⊕ x;

5) (x · y) · z = x · (y · z);

6) x · (y ⊕ z) = (x · y) ⊕ (x · z);

7) There is an element I ∈ S, called the multiplicative identity, such that I · x = x · I = x, ∀x ∈ S;

8) x · y = y · x;

9) For any x ∈ S, x ≠ e, there is an element, denoted by x^{−1}, such that x · x^{−1} = x^{−1} · x = I.


In this case, the triple (S, ⊕, ·) is called a (finite) field, also denoted by GF(2). It is easy to check that e = 0 and I = 1 in properties (2) and (7). We already know that the operation "⊕" is the Boolean operator XOR and the operation "·" is the AND operator. However, sometimes it may be easier to think about algebraic structures instead of Boolean expressions, because some properties become clearer.

Following this way, we can show that the set S^N is a vector space. In fact, let u ∈ S^N and v ∈ S^N, written as column vectors:

u = (u0, u1, ..., uN−1)^T, v = (v0, v1, ..., vN−1)^T.

Then, we can define the following operations:

⊕ : S^N × S^N → S^N, ⊕(u, v) = (u0 ⊕ v0, u1 ⊕ v1, ..., uN−1 ⊕ vN−1)^T. (9.15)

· : S × S^N → S^N, ·(α, v) = (α · v0, α · v1, ..., α · vN−1)^T. (9.16)

We can show that the triple (S^N, ⊕, ·), where ⊕ and · are defined in (9.15) and (9.16), respectively, is a vector space over the finite field S; that is, the following properties hold:

1) ∀u, v, w ∈ S^N, we have (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w);

2) There is an element 0 ∈ S^N, called the additive identity, such that u ⊕ 0 = 0 ⊕ u = u, ∀u ∈ S^N;

3) ∀u ∈ S^N, there is an element −u such that u ⊕ (−u) = (−u) ⊕ u = 0;

4) u ⊕ v = v ⊕ u;

5) k · (u ⊕ v) = (k · u) ⊕ (k · v), ∀k ∈ S and ∀u, v ∈ S^N;

6) (a ⊕ b) · u = (a · u) ⊕ (b · u), ∀a, b ∈ S and ∀u ∈ S^N;

7) (a · b) · u = a · (b · u), ∀a, b ∈ S and ∀u ∈ S^N;

8) 1 · u = u, ∀u ∈ S^N.

We can also define bases for the vector space (S^N, ⊕, ·). For example, we can show that the set B = {u1, u2, ..., uN}, where:

u1 = (1, 0, 0, ..., 0)^T, u2 = (0, 1, 0, ..., 0)^T, ..., uN = (0, 0, 0, ..., 1)^T, (9.17)

is a basis for the vector space (S^N, ⊕, ·).

Besides, we can multiply a matrix A = (aij)N×N , with aij ∈ S, by a vectoru ∈ SN as usual, but using the operations ”⊕” and ”·” defined in (9.13), (9.14),respectively (Exercise 7). In this case, we use the notationAu, orA·u, to representthe multiplication of the matrix A by the vector u.
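Such a matrix-vector product over GF(2) is ordinary multiplication with the sums replaced by XOR, as in operations (9.13)-(9.14). A sketch with an example matrix of our own choosing:

```python
# Matrix-vector multiplication over GF(2): '&' plays the role of the
# product (9.14) and '^' the role of the sum (9.13).

def gf2_matvec(A, u):
    n = len(A)
    v = []
    for i in range(n):
        s = 0
        for j in range(n):
            s ^= A[i][j] & u[j]
        v.append(s)
    return v

A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
u = [1, 1, 0]
print(gf2_matvec(A, u))  # [1, 1, 1]
```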

9.4.2 Matrix Representation for Cellular Automata

A CA over S^N can be seen as a transformation:

Φ : S^N → S^N.

Definition 1: A CA Φ is called additive if, for all u, v ∈ S^N:

Φ(u ⊕ v) = Φ(u) ⊕ Φ(v). (9.18)

Definition 2: A CA that does not satisfy expression (9.18) is called nonadditive.


Theorem 1: A CA is additive if and only if it possesses a matrix representa-tion; that is, if and only if there is a constant square matrix A = (aij)N×N , withaij ∈ S, such that:

Φ (u) = Au, ∀u ∈ SN . (9.19)

Proof: Exercise 13.

The matrix A is called the characteristic matrix of the CA. More generally, we can prove the following theorem:

Theorem 2: Given a cellular automaton Φ : SN → SN , there is a constantvector T ∈ SN and a constant square matrix A = (aij)N×N , with aij ∈ S, suchthat:

Φ (u) = T ⊕ Au, (9.20)

if and only if :

Φ (u⊕ v) = T ⊕ [Φ (u) ⊕ Φ (v)] , ∀u, v ∈ SN (9.21)

Moreover, a CA is additive if and only if T = 0.

Proof: Define Φ'(u) = Φ(u) ⊕ T, ∀u ∈ S^N. Then, using (9.21) and the fact that T ⊕ T = 0,

Φ'(u ⊕ v) = Φ(u ⊕ v) ⊕ T = Φ(u) ⊕ Φ(v) = (Φ'(u) ⊕ T) ⊕ (Φ'(v) ⊕ T) = Φ'(u) ⊕ Φ'(v).

Then, use Theorem 1 (Exercise 14).

Now, we can classify additive cellular automata according to specific proper-ties of their characteristic matrices.

Definition 3: An additive CA, with characteristic matrix A, is called group-type, or simply a group CA, if there is a positive integer m ∈ N such that A^m = I, where I refers to the identity matrix.

This definition means that the matrix A generates a cyclic group of matrices over the field GF(2). That is, consider the set of powers (in GF(2)) of the matrix A, given by P = {A^n ; n ∈ N}. Firstly, we shall observe that this set is closed under matrix multiplication (A^s · A^t = A^{s+t} ∈ P). The set P with the operation of multiplication of matrices in GF(2) is a group. It is also commutative, because A^s · A^t = A^t · A^s. This fact, as well as the definition of group CA, can be used to prove the following theorem.

Theorem 3: A CA is a group CA if and only if det (A) = 1, where ”det” isthe determinant of the characteristic matrix A.

Proof: Exercise 15 (see [62], page 4, for reference).

If an additive CA, with characteristic matrix A, is not a group CA (det(A) = 0), we do not have such structures. These cellular automata have been used for pattern classification [17]. So, let us suppose that the space S^N encodes two classes of patterns P1 ⊂ {0, 1}^N and P2 ⊂ {0, 1}^N. Consider now the question: find a linear operator Φ that works as a classifier, that is, which has the property:

Φ (x) 6= Φ (y) , ∀x ∈ P1 and y ∈ P2. (9.22)

This expression is equivalent to the following one (Exercise 17):

Φ (x⊕ y) 6= 0, ∀x ∈ P1 and y ∈ P2. (9.23)

Such a problem has been explored in the context of VLSI design [17]. The interested reader may wonder about the possibility of applying the solution of problem (9.23) to other architectures. The solution of this problem can be addressed by binary decision diagrams (BDDs). Firstly, the set of equations derived from expression (9.23) must be obtained. Then, we should convert these equations into INF form in order to obtain the solution more efficiently.

9.4.3 Polynomial Representation

We say that a CA is pure if the local rule f is the same for all sites of the lattice. In this case, we can represent the CA evolution through products of polynomials over GF(2). To see this, we must present some definitions.

Definition 4: A dipolynomial p over GF(2) is any expression given by:

p(x) = Σ_{i=−M}^{N} ai x^i, (9.24)

where the coefficients ai ∈ {0, 1}, and M, N are natural numbers. We say that the degree of p is N. If M = 0, we call p(x) an ordinary polynomial.


The usual definitions of modular arithmetic for polynomials can be extended to dipolynomials.

Definition 5: Given a dipolynomial p given by expression (9.24) and an ordinary polynomial d(x) = Σ_{j=0}^{n} dj x^j, with degree n, we define:

p(x) ≡ r(x) mod (d(x)) ⇔ p(x) = q(x) · d(x) + r(x), (9.25)

where q(x) is a dipolynomial and r(x) is an ordinary polynomial. If d0 ≠ 0 and the degree of r(x) is less than n, then r(x) is unique (Exercise 18).

Besides, we can associate a polynomial with a configuration (a_0^t, a_1^t, ..., a_{N−1}^t) of a CA, at time t, in a straightforward way:

(a_0^t, a_1^t, ..., a_{N−1}^t) → p^t(x) = Σ_{i=0}^{N−1} a_i^t x^i. (9.26)

In this case, p^t(x) is called the characteristic polynomial of the CA. Now, let us take the dipolynomial:

s(x) = x + x^(−1). (9.27)

We can show that the configuration obtained by applying Rule 90 to the configuration (a_0^t, a_1^t, ..., a_{N−1}^t), with periodic boundary conditions, is the one represented by the ordinary polynomial p^{t+1}(x) that satisfies:

p^{t+1}(x) ≡ (p^t(x) · s(x)) mod(x^N − 1). (9.28)

Therefore, expression (9.28) gives the polynomial representation of the time evolution of the CA (see Exercise 9). Such an expression can be used for any CA rule that can be represented by a polynomial.
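A minimal sketch of one evolution step via (9.28): multiplying by s(x) = x + x^(−1) modulo x^N − 1 shifts the coefficient indices cyclically by +1 and −1, XOR-ed over GF(2). The lattice size and seed below are illustrative:

```python
# Sketch of the polynomial representation (9.28) for Rule 90 with
# periodic boundaries: one time step multiplies the characteristic
# polynomial by s(x) = x + x^(-1) modulo x^N - 1 over GF(2).

def rule90_poly_step(coeffs):
    """One Rule 90 step on coefficients (a_0, ..., a_{N-1}) over GF(2).
    Multiplying by x + x^(-1) mod x^N - 1 shifts indices by +/-1
    cyclically, so a_i(t+1) = a_{i-1}(t) XOR a_{i+1}(t)."""
    N = len(coeffs)
    return [coeffs[(i - 1) % N] ^ coeffs[(i + 1) % N] for i in range(N)]

state = [0, 0, 1, 0, 0]          # single seed, N = 5
state = rule90_poly_step(state)
print(state)  # [0, 1, 0, 1, 0]: the seed spreads to both neighbors
```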

9.5 Nondeterminism in Cellular Automata

It is interesting to consider the four basic classes of cellular automata, but in the presence of noise which randomly reverses the CA output values with probability p. What kind of new features may appear?


This question resembles the problem of studying the influence of control parameters in continuous dynamical systems [63]. With such parameters we can control the influence of factors like temperature, viscosity, irradiation, etc. These systems can be analyzed through stability theory [64, 63], bifurcation and catastrophe theory [64, 65] and perturbation theory [63]. In this context, there may be critical values for the parameters, in the sense that sudden changes happen near them. As an example, let us consider a simple dynamical system:

dx/dt = λx + y, (9.29)
dy/dt = x + λy,

where λ is a real parameter. According to the theory of ordinary differential equations [63], the qualitative analysis of this system may be done through the eigenvalues/eigenvectors of the matrix of the above system:

A =
[ λ 1 ]
[ 1 λ ] . (9.30)

The eigenvalues are given by:

α1 = λ+ 1, α2 = λ− 1. (9.31)

We observe that the value λ = 1 is a critical one because, for λ > 1, the origin (0, 0) is an unstable node (both eigenvalues positive), while for 0 < λ < 1 it is a saddle point (eigenvalues of opposite signs). Thus, we have a jump, that is, a sudden change in the system behavior, at λ = 1.
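The eigenvalues (9.31) can be reproduced with the quadratic formula for the characteristic equation of a 2×2 matrix; a small sketch, not part of the original text:

```python
# Sketch: eigenvalues of the symmetric 2x2 matrix (9.30) via the
# quadratic formula, showing the sign change of alpha2 at lambda = 1.
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic equation
    t^2 - (a + d) t + (ad - bc) = 0."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # real for this symmetric matrix
    return (tr + disc) / 2, (tr - disc) / 2

for lam in (0.5, 1.0, 1.5):
    a1, a2 = eigenvalues_2x2(lam, 1, 1, lam)
    print(lam, a1, a2)   # a1 = lam + 1, a2 = lam - 1
```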

Cellular automata are discrete dynamical systems for which the probability p can be seen as a parameter ranging in [0, 1]. Will there be critical values for the probability (parameter) p? If the answer is "yes", which property suddenly changes?

These questions, and the mathematical theory necessary to perform such analysis, are points that we shall consider in new investigations.


9.6 Fractal Patterns and Cellular Automata

Fractal geometry has been considered as a new scientific way to think about the edges of clouds, textures, landscapes, etc. Since the work of Mandelbrot [66], fractal geometry has been studied as a rigorous mathematical field with branches in physics, computer graphics and the arts [67].

Put simply, we can say that the geometric characterization of fractals is self-similarity: the shape is made of smaller copies of itself that are similar to the whole (see section 9.6.1 for a more precise characterization of fractals). Figure 9.6 shows beautiful examples of fractal structures.

It is amazing that the CA from Class 4 may generate interesting patterns that resemble fractal structures. A simple and effective example is given by Rule 90, defined in expression (9.5). Figure 9.7 shows the pattern generated after application of equation (9.4) to a quarter of a million site values during 500 time steps. It is possible to show that the fractal dimension of this pattern is log2 3 ≈ 1.59.

Certainly, this is an approximation, because fractals are naturally grounded in continuous mathematics. The precision we want depends on the number of cells as well as the number of time steps (iterations) for which we apply the rule. Besides, some initial configurations may not produce fractal-like patterns, while others may generate such behavior. We can also ask whether nondeterministic cellular automata may generate nondeterministic fractals. Such questions and others (see [29] for 20 open problems in cellular automata) are challenges in this field.
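The pattern of Figure 9.7 can be reproduced on a small scale with a few lines of code; this sketch uses null boundary conditions and ASCII output, both illustrative choices:

```python
# Sketch: evolving Rule 90 from a single seed. The printed rows
# reproduce the Sierpinski-like triangle of Figure 9.7 on a small scale
# (null boundary conditions here, chosen for display purposes).

def rule90_step(cells):
    """a_i(t+1) = a_{i-1}(t) XOR a_{i+1}(t), null boundary conditions."""
    n = len(cells)
    return [(cells[i - 1] if i > 0 else 0) ^
            (cells[i + 1] if i < n - 1 else 0)
            for i in range(n)]

width, steps = 31, 15
row = [0] * width
row[width // 2] = 1               # single seed in the middle
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule90_step(row)
```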

9.6.1 Elements of Continuous Fractal Theory

This section follows the presentation of [67]. All the definitions and proofs can be found in that reference.

Metric Spaces

Definition: A metric space (X, d) is a set X together with a real-valued function d : X × X → ℝ+, which measures the distance between pairs of points (x, y) ∈ X × X and obeys the following axioms:

i) d (x, y) = d (y, x) ,

ii) d (x, y) = 0 if and only if x = y,


Figure 9.6: Some examples of fractals.


Figure 9.7: Evolution of CA given by rule number 90 generating fractal-like structures.

iii) d (x, y) ≤ d (x, z) + d (z, y), ∀x, y, z ∈ X.

Definition: Two metrics d1 and d2 on a space X are equivalent if there exist constants 0 < c1 < c2 < ∞ such that:

c1d1 (x, y) ≤ d2 (x, y) ≤ c2d1 (x, y) , ∀ (x, y) ∈ X ×X.

Definition: Two metric spaces (X1, d1) and (X2, d2) are equivalent if there is a bijection h : X1 → X2 such that the metric d3 on X1, defined by:

d3 (x, y) = d2 (h (x) , h (y)) , ∀ (x, y) ∈ X1 ×X1,

is equivalent to d1.

Definition: A function f : X1 → X2 from a metric space (X1, d1) into a metric space (X2, d2) is continuous if, for each ε > 0, there is a δ > 0 such that:

d1 (x, y) < δ =⇒ d2 (f (x) , f (y)) < ε.

Definition: A sequence {x_n}_{n=1}^∞ of points in a metric space (X, d) is said to converge to a point x ∈ X if, for any given ε > 0, there is an integer N > 0 so that:


d (xn, x) < ε for all n > N.

Definition: A sequence {x_n}_{n=1}^∞ of points in a metric space (X, d) is called a Cauchy sequence if, for any given number ε > 0, there is an integer N > 0 so that:

d(x_n, x_m) < ε for all n, m > N.

Definition: A metric space (X, d) is said to be complete if every Cauchy sequence {x_n}_{n=1}^∞ converges to a point x ∈ X.

Definition: Let S ⊂ X be a subset of a metric space (X, d). Then, a point x ∈ S is called a limit point of S if there is a sequence {x_n}_{n=1}^∞ of points x_n ∈ S − {x} such that

Lim_{n→∞} x_n = x.

The set of limit points of S will be denoted by S.

Definition: Let S ⊂ X be a subset of a metric space (X, d). S is closed if it contains all of its limit points.

Definition: Let S ⊂ X be a subset of a metric space (X, d). Then, S is compact if every sequence {x_n}_{n=1}^∞ in S contains a subsequence that converges to a point x ∈ S.

Definition: Let S ⊂ X be a subset of a metric space (X, d). S is totally bounded if, for each ε > 0, there is a finite set of points {y_1, y_2, ..., y_n} ⊂ S such that whenever x ∈ S, d(x, y_i) < ε for some y_i ∈ {y_1, y_2, ..., y_n}.

Theorem: Let S ⊂ X be a subset of a metric space (X, d). S is compact if and only if it is closed and totally bounded.

Definition: Let S ⊂ X be a subset of a metric space (X, d). S is open if for each x ∈ S there is an ε > 0 such that B(x, ε) = {y ∈ X; d(x, y) ≤ ε} ⊂ S.

Fractal Space

Let (X, d) be a complete metric space. Then H(X) denotes the space whose points are the compact subsets of X, other than the empty set.

Definition: Let (X, d) be a complete metric space, x ∈ X and B ∈ H (X) .

Then we define the distance from the point x to the set B as:


d(x, B) = Min {d(x, y); y ∈ B}.

Definition: Let (X, d) be a complete metric space and let A, B ∈ H(X). We can define a distance from set A to set B as:

d(A, B) = Max {d(x, B); x ∈ A}.

Besides, the Hausdorff distance between points (sets) A, B ∈ H(X) is defined by:

h(A, B) = Max {d(A, B), d(B, A)}.

We can prove that the Hausdorff distance h is indeed a metric, that is, that properties (i)-(iii) hold [67]. The metric space (H(X), h) is called the space of fractals.
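For finite point sets (which are compact), the Min/Max definitions above translate directly into code. This sketch uses the real line with the absolute-value metric as an illustrative example:

```python
# Sketch: the Hausdorff distance h(A, B) between two finite point sets,
# following the Min/Max definitions above.

def d_point_set(x, B, d):
    return min(d(x, y) for y in B)                 # d(x, B)

def d_set_set(A, B, d):
    return max(d_point_set(x, B, d) for x in A)    # d(A, B), not symmetric

def hausdorff(A, B, d):
    return max(d_set_set(A, B, d), d_set_set(B, A, d))

dist = lambda p, q: abs(p - q)                     # metric on the real line
A, B = [0.0, 1.0], [0.0, 3.0]
print(d_set_set(A, B, dist), d_set_set(B, A, dist))  # 1.0 2.0: asymmetric
print(hausdorff(A, B, dist))                          # 2.0
```

Note how d(A, B) ≠ d(B, A) here, which is why the symmetrization by Max is needed to obtain a metric.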

The following is a fundamental property.

Theorem: Let (X, d) be a complete metric space. Then (H(X), h) is also a complete metric space. Moreover, if {A_n ∈ H(X)}_{n=1}^∞ is a Cauchy sequence, then

A = Lim_{n→∞} A_n ∈ H(X)

can be characterized as follows:

A = {x ∈ X; there is a Cauchy sequence {x_n ∈ A_n} that converges to x}. (9.32)

The Contraction Mapping Theorem

Definition: A transformation, or a function, f : X → X on a metric space (X, d) is called contractive, or a contraction mapping, if there is a constant 0 ≤ s < 1 such that

d (f (x) , f (y)) ≤ s · d (x, y) , ∀x, y ∈ X.

Any such number s is called the contractivity factor for f. A point x_f ∈ X such that f(x_f) = x_f is called a fixed point of the transformation.


Theorem: Let f : X → X be a contraction mapping on a complete metric space (X, d). Then f possesses exactly one fixed point x_f ∈ X and, moreover, for any point x ∈ X, the sequence

{f^(n)(x); n = 0, 1, 2, ...}

converges to x_f (f^(n) = f ∘ f ∘ ... ∘ f, n times). That is:

Lim_{n→∞} f^(n)(x) = x_f, ∀x ∈ X.
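A minimal sketch of the theorem on the complete metric space (ℝ, |·|): the map f(x) = x/2 + 1 is a contraction with factor 1/2 and unique fixed point 2, reached from any starting point (the example map is an illustrative choice):

```python
# Sketch: the contraction mapping theorem in action on the real line.
# f(x) = x/2 + 1 has contractivity factor s = 1/2; solving f(x) = x
# gives the unique fixed point x_f = 2.

def iterate(f, x, n):
    """Compute f^(n)(x) by repeated application."""
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: x / 2 + 1
print(iterate(f, 100.0, 50))   # converges to ~2.0 from any start value
print(iterate(f, -7.0, 50))    # same limit, different starting point
```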

We can say that a Deterministic Fractal is a fixed point of a contraction mapping on (H(X), h). The following theorem characterizes the kind of contraction transformations we will talk about in fractal theory.

Theorem: Let w : X → X be a contraction mapping on the metric space (X, d) with contractivity factor s. Then w : H(X) → H(X) defined by:

w(B) = {w(x); x ∈ B},

for any B ∈ H(X), is a contraction mapping on (H(X), h) with contractivity factor s.

Theorem: Let (X, d) be a metric space and let {w_n : H(X) → H(X), n = 1, 2, ..., N} be a set of contraction mappings on the metric space (H(X), h). Let the contractivity factor for w_n be denoted by s_n for each n. Define W : H(X) → H(X) by:

W(B) = ∪_{n=1}^{N} w_n(B),

for each B ∈ H(X). Then, W is a contraction mapping on (H(X), h) with contractivity factor s = Max {s_n; n = 1, 2, ..., N}.

Definition: A (hyperbolic) iterated function system (IFS) consists of a complete metric space (X, d) together with a finite set of contraction mappings w_n : X → X, with respective contractivity factors s_n, for n = 1, ..., N.

Theorem: Given a hyperbolic iterated function system {w_n : X → X; n = 1, 2, ..., N} with contractivity factor s = Max {s_n; n = 1, 2, ..., N}, the transformation W : H(X) → H(X) defined by:

W(B) = ∪_{n=1}^{N} w_n(B),

for all B ∈ H(X), is a contraction mapping on the complete metric space (H(X), h) with contractivity factor s. Therefore:


h (W (B) ,W (C)) ≤ s · h (B,C) ,

for all B,C ∈ H (X). Its unique fixed point, A ∈ H (X) obeys:

A = W (A) = ∪Nn=1wn (A) ,

and is given by

A = Limn→∞W(n) (B) ,

for any B ∈ H(X). This fixed point is called the attractor of the IFS.

A deterministic fractal is an attractor of an IFS. However, we must be careful because there are attractors that are not fractals.
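The attractor of a simple IFS can be sampled with the random-iteration ("chaos game") algorithm. The three maps below, which halve distances toward the corners of a triangle, have the Sierpinski gasket as attractor; the corner coordinates and step counts are illustrative choices:

```python
# Sketch: sampling the attractor of the IFS whose three maps
# w_n(p) = (p + v_n) / 2 contract the plane toward the triangle
# corners v_n. Its attractor is the Sierpinski gasket.
import random

corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(steps, seed=0):
    random.seed(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(steps):
        cx, cy = random.choice(corners)     # pick one of the w_n at random
        x, y = (x + cx) / 2, (y + cy) / 2   # apply the chosen contraction
        points.append((x, y))
    return points

pts = chaos_game(10000)
# every sampled point stays inside the unit box containing the attractor
print(all(0 <= x <= 1 and 0 <= y <= 1 for x, y in pts))  # True
```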

Fractal Dimension

Let (X, d) denote a complete metric space and let A ∈ H(X). Let ε > 0 and let B(x, ε) be the closed ball of radius ε centered at x ∈ X. We wish to define an integer N(A, ε) given by:

N(A, ε) = the smallest integer M such that A ⊂ ∪_{n=1}^{M} B(x_n, ε),

for some set of points {x_n; n = 1, 2, ..., M} ⊂ X. The existence of such a number is guaranteed by the fact that A is compact [7]. Before stating the idea of fractal dimension, let us give one more definition:

Definition: Let f(ε) and g(ε) be real-valued functions of the positive real variable ε. Then f(ε) ≈ g(ε) means that:

Lim_{ε→0} ( ln(f(ε)) / ln(g(ε)) ) = 1,

where the symbol ln means logarithm to the base e.

The intuitive idea behind fractal dimension is that a set A has fractal dimension D if:

N(A, ε) ≈ C ε^(−D), (9.33)


for some positive constant C. Observe that if we solve this equation for D we find that:

D ≈ ( ln(N(A, ε)) − ln(C) ) / ln(1/ε).

If we notice that

Lim_{ε→0} ln(C) / ln(1/ε) = 0,

we get the following definition:

Definition: Let A ∈ H(X) where (X, d) is a metric space. For each ε > 0, let N(A, ε) be the smallest number of closed balls of radius ε needed to cover A. If:

D = Lim_{ε→0} ln(N(A, ε)) / ln(1/ε)

exists, then D is called the fractal dimension of A. We will say that D = D(A).

The following theorems can simplify the process of computing the fractal dimension.

Theorem: Let A ∈ H(X) where (X, d) is a metric space. Let ε_n = C r^n for real numbers 0 < r < 1 and C > 0, and integers n = 1, 2, 3, .... If

D = Lim_{n→∞} ln(N(A, ε_n)) / ln(1/ε_n),

then A has fractal dimension D.

Theorem (The Box Counting Theorem): Let A ∈ H(ℝ^m) where the Euclidean metric is used. Cover ℝ^m by closed just-touching square boxes of side length (1/2^n). Let N_n(A) denote the number of boxes of side length (1/2^n) which intersect A. If:

D = Lim_{n→∞} ln(N_n(A)) / ln(2^n)

exists, then A has fractal dimension D.
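A sketch of the Box Counting Theorem applied to the Sierpinski gasket, assuming the standard Pascal-triangle-mod-2 characterization: the box with integer coordinates (i, j) at resolution 2^(−n) meets the gasket exactly when i AND j = 0. This gives 3^n occupied boxes, and the estimate reproduces log2 3 ≈ 1.585:

```python
# Sketch: box-counting dimension estimate for the Sierpinski gasket,
# following the Box Counting Theorem above. Occupied boxes are
# enumerated via the bitwise characterization i & j == 0 (an assumed,
# standard description of the gasket's occupied boxes).
import math

def occupied_boxes(n):
    """Number of side-2^(-n) boxes meeting the gasket on [0, 1]^2."""
    return sum(1 for i in range(2 ** n) for j in range(2 ** n)
               if i & j == 0)          # 3^n boxes survive at level n

for n in (2, 4, 6):
    Nn = occupied_boxes(n)
    D = math.log(Nn) / math.log(2 ** n)
    print(n, Nn, D)   # D stays at log2(3) = 1.585 for every n
```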


9.7 Self-Organization

Self-organization can be defined as the spontaneous creation of a globally coherent pattern out of local interactions. This implies an increase of organization, which can be measured more objectively as a decrease of statistical entropy. The theory of self-organization has grown out of a variety of disciplines, including thermodynamics, fluid dynamics, chemistry, biology and computer modelling, among others [63, 68]. The traditional mathematical background for the science of self-organization branches into the areas of dynamical systems, catastrophe theory and stochastic differential equations [63, 65].

To present a brief quantitative discussion, let us take the Shannon entropy given by:

H = − Σ_{i=1}^{n} p_i log p_i, (9.34)

where p_i, i = 1, ..., n, give the probability distribution associated with the possible states of the system.
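The entropy (9.34) of an empirical state distribution, such as the distribution of cell values in a CA configuration, can be computed as follows (a small illustrative sketch):

```python
# Sketch: Shannon entropy (9.34) of the empirical distribution of a
# list of observed states, e.g. cell values of a CA configuration.
import math
from collections import Counter

def shannon_entropy(states, base=2):
    """H = -sum p_i log p_i over the empirical distribution of states."""
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * math.log(c / total, base)
                for c in counts.values())

print(shannon_entropy([0, 1, 0, 1]))   # 1.0 bit: maximally disordered
print(shannon_entropy([0, 0, 0, 0]))   # 0.0 bits: fully ordered
```

Tracking this quantity over the time evolution of a CA is one simple way to quantify whether the dynamics is organizing or disorganizing the configuration.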

There is a physical law stating that, for an isolated system, this quantity does not decrease during the system evolution (see [69] for a discussion within Kinetic Theory). For instance, let us take an isolated system composed of two gases separated by a membrane. If the membrane is suddenly removed, we know that the gases will mix until their spatial distribution becomes homogeneous inside the box. So, the system went from an organized configuration to a disordered one. That is what we always observe for isolated systems.

However, if we can control some parameters of the system, like temperature, the situation may become different. The example given below, called the Taylor instability, is a beautiful case of self-organization. In this case, we have an experiment in which a fluid moves between two coaxial cylinders (Figure 9.8.a). The outer one is kept fixed while the inner one can rotate with an angular velocity w, which is a parameter that can be controlled from outside the system. It is observed that at slow rotation speeds the fluid forms coaxial streamlines. When the rotation speed is increased beyond a critical value, the motion of the fluid becomes organized in the form of rolls, in which the fluid periodically moves outwards and inwards in horizontal layers (Figures 9.8.b and 9.8.c).

In continuous mathematics, the study of self-organization starts with the


Figure 9.8: Taylor instability and self-organization in continuous systems: (a) Experimental setup. (b) Schematic diagram of the formation of rolls. (c) Trajectory of a fluid particle within a roll.


problem of the analysis of parameter sensitivity in dynamical systems. So, let us take the dynamical system:

dx/dt = V(c1, c2, ..., cm; x),

where x ∈ ℝ^n and V is a vector field that depends on the space variable x and on the parameters c1, c2, ..., cm, also called control parameters. From the theory of differential equations, we know that the singular points, that is, the points x ∈ ℝ^n at which

V(c1, c2, ..., cm; x) = 0, (9.35)

are fundamental to characterize the qualitative behavior of the dynamical system. However, the solution of equation (9.35) also depends on the control parameters. Therefore, sudden structural changes may occur when we change these parameters. The dynamical system given by expression (9.29) is a simple example of such behavior. In this case, the origin (0, 0) is the solution of expression (9.35). According to the theory of ordinary differential equations [63], the qualitative analysis of this system near its singular point may be done through the analysis of the eigenvalues/eigenvectors of the matrix (9.30), already performed through expression (9.31). Such analysis showed that we have a jump, a sudden change in the system behavior, at λ = 1.

Now, let us suppose that there is a potential P that generates the vector field V, that is,

V(c1, c2, ..., cm; x) = ∇_x P(c1, c2, ..., cm; x), (9.36)

where ∇_x is the gradient with respect to the space variable x. In this case, we can import concepts from catastrophe theory [65] to study the problem of parameter sensitivity. The potential P can be seen as a family of functions P : ℝ^m × ℝ^n → ℝ, and Morse theory can be used to classify the singular points (solutions of equation (9.35)) and the qualitative behavior of the vector field near these points.

Now, let us return to the discrete universe. Cellular automata can generate self-organizing systems because the entropy (9.34) may decrease for these systems. However, how could we study parameter sensitivity for cellular automata?


Besides, and more fundamentally, what kind of parameters could we talk about when studying cellular automata?

Unfortunately, we have more questions than answers in this field.

For instance, we can simulate fluids using a class of cellular automata called Lattice Gas. Would it be possible to simulate the Taylor instabilities by using this kind of automata? We shall consider these questions in new investigations in the field of cellular automata.

9.8 Exercises

1. Prove that Φ(Ω(t)) ⊂ Ω(t).

2. Show that inverse problems in CA are NP problems.

3. Implement the CA rules 10, 20, 60, 56, 90 and try to find their classes according to the Wolfram classification (section 9.2).

4. Show that the set of all possible rules for a CA has cardinality p = (#S)^((#S)^k).

5. Show that a deterministic CA will find an attractor or enter a loop after a finite number of iterations.

6. Prove the properties (1)-(9), section 9.4.1, for the operations defined in expressions (9.13)-(9.14).

7. Given the matrix and vectors:

A =
[ 1 1 0 ]
[ 1 0 1 ]
[ 0 1 0 ] ,  u = (1, 1, 1)^T,  v = (1, 0, 1)^T,

compute A · u and A · (u ⊕ v).

8. Show that, for a CA represented by expression (9.20), we have:

Φ^n(u) = (I ⊕ A ⊕ A^2 ⊕ ... ⊕ A^(n−1)) · T ⊕ A^n u, ∀u ∈ S^N,


where Φ^n(u) denotes the application of the CA Φ n times (e.g., Φ^2(u) = Φ(Φ(u)), etc.).

9. Show that the evolution of CA rule 90, with periodic boundary conditions, can be represented by expression (9.28).

10. Given the CA rule 90 in S^5, with periodic boundary conditions, and its matrix representation A, is there a configuration x such that:

A x = (0, 1, 0, 1, 0)^T ?

11. We call a CA Φ a depth-one cellular automaton if Φ^2 = Φ. Show that:

Φ^2 = Φ ⇐⇒ Φ · (Φ ⊕ I) = 0.

12. Let us consider CA over S^3 with periodic boundary conditions. In this case we have 2^3 = 8 possible configurations. We will take the following classes of patterns P1, P2 ⊂ {0, 1}^3, given by: P1 = {1, 4, 5, 6, 7} and P2 = {0, 2, 3}. Find a CA Φ that satisfies:

Φ(x ⊕ y) ≠ 0, ∀x ∈ P1 and y ∈ P2,

Φ · (Φ ⊕ I) = 0.

13. Prove Theorem 1 of section 9.4.2.

14. Prove Theorem 2 of section 9.4.2.

15. Prove Theorem 3, section 9.4.2 (see [62], page 4, for reference).

16. Show that the CA Rules 60 and 195, for null boundary conditions, are group cellular automata. What happens for periodic boundary conditions?


17. Show that expressions (9.22) and (9.23) are equivalent.

18. Prove that there is only one ordinary polynomial r(x) with degree less than n that satisfies expression (9.25) if d_0 ≠ 0.

19. Given a CA Φ : {0, 1}^N → {0, 1}^N, show that:

Φ(u ⊕ v) = Φ(u) ⊕ Φ(v) ⇔ Φ((au) ⊕ (bv)) = (aΦ(u)) ⊕ (bΦ(v)),

if a, b ∈ GF(2) and u, v ∈ {0, 1}^N.

20. Show that the multiplication of matrices by vectors in GF(2) is distributive with respect to the operation "⊕", that is, show that A(u ⊕ v) = (Au) ⊕ (Av), for any N × N matrix A and vectors u, v ∈ {0, 1}^N.


Chapter 10

Lattice Gas Cellular Automata and Fluid Simulation

10.1 Introduction

The modeling and simulation of natural elements like fluids (gases or liquids) and elastic, plastic and melting objects, among others, have attracted the attention of the scientific community for a long time, due to the potential applications of these methods and the complexity and beauty of the natural phenomena involved [70]. In particular, techniques in the field of Computational Fluid Dynamics (CFD) have been applied for fluid simulation in a huge range of applications such as aircraft design, turbines, turbulence analysis, virtual simulators and visual effects [71], among many others [72].

A majority of fluid simulation methods use 2D/3D mesh-based approaches that are mathematically motivated by the Eulerian methods of Finite Elements (FE) and Finite Differences (FD), in conjunction with the Navier-Stokes equations of fluids [72]. These works are based on a top-down viewpoint of nature: the fluid is considered a continuous system subject to Newton's laws and conservation laws, as well as state equations connecting macroscopic variables such as pressure P, density ρ and temperature T.

In this Chapter, we change the viewpoint to the bottom-up model of Lattice Gas Cellular Automata (LGCA) [73]. These are discrete models based on point particles that move on a lattice, according to suitable and simple rules, in order to mimic fully molecular dynamics. Particles can only move along the edges of the lattice and their interactions are based on simple collision rules. There is an exclusion principle that limits to one the number of particles that enter a given site (lattice node) in a given direction of motion. Such a framework needs low computational resources for both memory allocation and the computation itself. Such models have been applied in scientific applications for two-phase flow description (gas-liquid systems, for example) and the numerical simulation of bubble flows [74], among others. Besides, Wolfram [75] has studied the computational and thermodynamic aspects of these models for fluid modeling.

In this Chapter we focus on fluid modeling and simulation through Lattice Gas Cellular Automata (LGCA). The following material was adapted from reference [71]. Specifically, we take a special LGCA, introduced by Frisch, Hasslacher and Pomeau and known as the FHP model, and show its capabilities for fluid simulation. By the Chapman-Enskog expansion, a well-known multiscale technique in this area, it can be demonstrated that the Navier-Stokes model can be reproduced by the FHP technique. However, there is no need to solve Partial Differential Equations (PDEs) to obtain a high level of description. Therefore, it is possible to combine the advantage of the low computational cost of LGCA with its ability to mimic realistic fluid dynamics to develop a new simulation framework for engineering and scientific applications.

The Chapter is organized as follows. Section 10.2 describes the FHP model and its multiscale analysis. To make the material self-contained, section 10.3 offers a review of fluid dynamics.

10.2 FHP and Navier-Stokes

From the viewpoint of fluid models, the models based on the Navier-Stokes equations (section 10.3) are top-down approaches in the sense that the relationships of interest are between variables that capture the global properties of the system, that is, pressure, density and temperature. These relationships are expressed in ordinary or partial differential equations like (10.37) in section 10.3.

On the other hand, bottom-up models start from a description of local interactions. These models usually involve algorithmic descriptions of individuals, particles in the case of fluids. Analysis and computer simulation of bottom-up models should produce, as emergent properties, the global relationships seen in the real world, without these being built into the model. Thus, there is no need to use PDEs and numerical methods to obtain a high level of description.

The state-of-the-art techniques in this area are the Lattice Boltzmann models [76]. In this section we will study a simpler model, the FHP one, for fluid simulation. Also, it will be demonstrated how Navier-Stokes models can be reproduced by this method.

The FHP model was introduced by Frisch, Hasslacher and Pomeau [77] in 1986 and is a model of a two-dimensional fluid. It can be seen as an abstraction, at a microscopic scale, of a fluid. The FHP model describes the motion of particles traveling in a discrete space and colliding with each other. The space is discretized into a hexagonal lattice, as in Figure 10.1.

The microdynamics of FHP is given in terms of Boolean variables describing the occupation numbers at each site of the lattice and at each time step (i.e. the presence or the absence of a fluid particle). The FHP particles move in discrete time steps, with a velocity of constant modulus, pointing along one of the six directions of the lattice. The dynamics is such that no more than one particle enters the same site at the same time with the same velocity. This restriction is the exclusion principle; it ensures that six Boolean variables at each lattice site are always enough to represent the microdynamics.

In the absence of collisions, the particles would move in straight lines, along the direction specified by their velocity vector. The velocity modulus is such that, in a time step, each particle travels one lattice spacing and reaches a nearest-neighbor site.

In order to conserve the number of particles and the momentum during each interaction, only a few configurations lead to a non-trivial collision (i.e. a collision in which the directions of motion have changed). When exactly two particles enter the same site with opposite velocities, both of them are deflected by 60 degrees, so that the output of the collision is still a zero-momentum configuration with two particles (Figure 10.1). When exactly three particles collide with an angle of 120 degrees between each other, they bounce back to where they came from (so that the momentum after the collision is zero, as it was before the collision). Both two- and three-body collisions are necessary to avoid extra conservation laws. Several variants of the FHP model exist in the literature [78], including some with rest particles, like the models FHP-II and FHP-III. For all other configurations no collision occurs and the particles go through as if they were transparent to each other.

The full microdynamics of the FHP model can be expressed by evolution equations for the occupation numbers, defined as the number ni(~r, t) of particles entering site ~r at time t with a velocity pointing along direction ~ci, where i = 1, 2, ..., 6 labels the six lattice directions. The numbers ni can be 0 or 1.

We also define the time step as ∆t and the lattice spacing as ∆r. Thus, the six possible velocities ~vi of the particles are related to their directions of motion by

~vi = (∆r/∆t) ~ci. (10.1)

Without interactions between particles, the evolution equations for the ni would be given by

ni (~r + ∆r~ci, t + ∆t) = ni (~r, t), (10.2)

which expresses that a particle entering site ~r with velocity along ~ci will continue in a straight line so that, at the next time step, it will enter site ~r + ∆r~ci with the same direction of motion. However, due to collisions, a particle can be removed from its original direction or another one can be deflected into direction ~ci.

For instance, if only ni and ni+3 are 1 at site ~r, a collision occurs and the particle traveling with velocity ~vi will then move with either velocity ~vi−1 or ~vi+1, where i = 1, 2, ..., 6 and all direction indices are taken modulo 6. The quantity

Di = nini+3 (1 − ni+1) (1 − ni+2) (1 − ni+4) (1 − ni+5) . (10.3)

indicates, when Di = 1, that such a collision will take place. Therefore ni − Di is the number of particles left in direction ~ci after a two-particle collision along this direction.

Now, when ni = 0, a new particle can appear in direction ~ci as the result of a collision between ni+1 and ni+4 or a collision between ni−1 and ni+2. It is convenient to introduce a random Boolean variable q(~r, t), which decides whether the particles are deflected to the right (q = 1) or to the left (q = 0) when a two-body collision takes place. Therefore, the number of particles created in direction ~ci is

qDi−1 + (1 − q)Di+1. (10.4)


Figure 10.1: The two-particle collision in the FHP.

Particles can also be deflected into (or removed from) direction $\vec{c}_i$ because of a three-body collision. The quantity which expresses the occurrence of a three-body collision with particles $n_i$, $n_{i+2}$ and $n_{i+4}$ is
$$T_i = n_i n_{i+2} n_{i+4}(1 - n_{i+1})(1 - n_{i+3})(1 - n_{i+5}). \qquad (10.5)$$
As before, the effect of a three-body collision is to modify the number of particles in direction $\vec{c}_i$ as
$$n_i - T_i + T_{i+3}. \qquad (10.6)$$

Thus, according to our collision rules, the microdynamics of a LGCA is written as
$$n_i(\vec{r} + \Delta r\,\vec{c}_i,\, t + \Delta t) = n_i(\vec{r}, t) + \Omega_i(n(\vec{r}, t)) \qquad (10.7)$$
where $\Omega_i$ is called the collision term. For the FHP model, $\Omega_i$ is defined so as to reproduce the collisions described above, that is
$$\Omega_i = -D_i + q\, D_{i-1} + (1-q)\, D_{i+1} - T_i + T_{i+3}. \qquad (10.8)$$

Using the full expressions for $D_i$ and $T_i$, given by Equations (10.3) and (10.5), we obtain
$$\begin{aligned}
\Omega_i = {} & - n_i n_{i+2} n_{i+4}(1-n_{i+1})(1-n_{i+3})(1-n_{i+5}) \\
& + n_{i+1} n_{i+3} n_{i+5}(1-n_i)(1-n_{i+2})(1-n_{i+4}) \\
& - n_i n_{i+3}(1-n_{i+1})(1-n_{i+2})(1-n_{i+4})(1-n_{i+5}) \\
& + (1-q)\, n_{i+1} n_{i+4}(1-n_i)(1-n_{i+2})(1-n_{i+3})(1-n_{i+5}) \\
& + q\, n_{i+2} n_{i+5}(1-n_i)(1-n_{i+1})(1-n_{i+3})(1-n_{i+4}). \qquad (10.9)
\end{aligned}$$
These equations are easy to code in a computer and yield a fast and exact implementation of the model.
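Indeed, the collision rules translate almost directly into code. The sketch below is an illustrative Python rendering: the function name and the representation of a site as a list of six occupation numbers are our own assumptions, not part of the original model description.

```python
import random

def fhp_collision(n, q=None):
    """One FHP collision at a single site.

    n : list of six 0/1 occupation numbers, n[i] for direction c_i
        (all direction indices are taken modulo 6).
    q : Boolean choosing the deflection side of a head-on collision;
        drawn at random if not supplied.
    Returns n[i] + Omega_i, with Omega_i as in Eq. (10.8).
    """
    if q is None:
        q = random.randint(0, 1)

    def D(i):  # two-particle collision indicator, Eq. (10.3)
        return (n[i % 6] * n[(i + 3) % 6]
                * (1 - n[(i + 1) % 6]) * (1 - n[(i + 2) % 6])
                * (1 - n[(i + 4) % 6]) * (1 - n[(i + 5) % 6]))

    def T(i):  # three-particle collision indicator, Eq. (10.5)
        return (n[i % 6] * n[(i + 2) % 6] * n[(i + 4) % 6]
                * (1 - n[(i + 1) % 6]) * (1 - n[(i + 3) % 6])
                * (1 - n[(i + 5) % 6]))

    # collision term Omega_i of Eq. (10.8)
    omega = [-D(i) + q * D(i - 1) + (1 - q) * D(i + 1) - T(i) + T(i + 3)
             for i in range(6)]
    return [n[i] + omega[i] for i in range(6)]
```

For the head-on pair `n = [1, 0, 0, 1, 0, 0]` the particles are rotated to directions (1, 4) or (2, 5) depending on `q`, conserving both particle number and momentum.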

Until now, we have dealt with microscopic quantities. However, the physical quantities of interest are not so much the Boolean variables $n_i$ as macroscopic quantities, or average values, such as the average density of particles and the average velocity field at each point of the system. These quantities are defined from the ensemble average $N_i(\vec{r},t) = \langle n_i(\vec{r},t) \rangle$ of the microscopic occupation variables. Note that $N_i(\vec{r},t)$ is also the probability of having a particle entering the site $\vec{r}$, at time $t$, with velocity $\vec{v}_i = \frac{\Delta r}{\Delta t}\vec{c}_i$.

In general, a LGCA is characterized by the number $z$ of lattice directions and the spatial dimensionality $d$. In our case $d = 2$ and $z = 6$. Following the usual definition of statistical mechanics, the local density of particles is the sum of the average number of particles traveling along each direction $\vec{c}_i$:
$$\rho(\vec{r},t) = \sum_{i=1}^{z} N_i(\vec{r},t). \qquad (10.10)$$
Similarly, the particle current, which is the density $\rho$ times the velocity field $\vec{u}$, is expressed by
$$\rho(\vec{r},t)\,\vec{u}(\vec{r},t) = \sum_{i=1}^{z} \vec{v}_i\, N_i(\vec{r},t). \qquad (10.11)$$


Another quantity which will play an important role in the upcoming derivation is the momentum tensor $\Pi$, defined as
$$\Pi_{\alpha\beta} = \sum_{i=1}^{z} v_{i\alpha}\, v_{i\beta}\, N_i(\vec{r},t) \qquad (10.12)$$
where the Greek indices $\alpha$ and $\beta$ label the $d$ spatial components of the vectors. The quantity $\Pi_{\alpha\beta}$ represents the flux of the $\alpha$-component of momentum transported along the $\beta$-axis. This term will contain the pressure contribution and the effects of viscosity.
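Equations (10.10)-(10.12) amount to simple weighted sums over the six directions. A minimal sketch, assuming unit lattice vectors and $\Delta r/\Delta t = 1$ (the names are illustrative):

```python
import math

# unit lattice directions c_i of the FHP model, separated by 60 degrees
C = [(math.cos(math.pi * k / 3), math.sin(math.pi * k / 3)) for k in range(6)]

def macroscopic(N):
    """Density, momentum density and momentum tensor from the
    average occupations N[i] = <n_i>, per Eqs. (10.10)-(10.12)."""
    rho = sum(N)                                            # Eq. (10.10)
    j = tuple(sum(C[i][a] * N[i] for i in range(6))         # Eq. (10.11)
              for a in range(2))
    Pi = [[sum(C[i][a] * C[i][b] * N[i] for i in range(6))  # Eq. (10.12)
           for b in range(2)] for a in range(2)]
    return rho, j, Pi
```

For a uniform state $N_i = 1/2$ this gives $\rho = 3$, zero current, and the isotropic tensor $\Pi_{\alpha\beta} = \frac{3}{2}\delta_{\alpha\beta}$, consistent with $\sum_i v_{i\alpha} v_{i\beta} = \frac{z v^2}{d}\delta_{\alpha\beta}$ on the hexagonal lattice.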

10.2.1 Multiscale Analysis

The starting point to obtain the macroscopic behavior of the CA fluid is to derive an equation for the $N_i$'s. Averaging the microdynamics (10.7) yields
$$N_i(\vec{r} + \Delta r\,\vec{c}_i,\, t + \Delta t) - N_i(\vec{r},t) = \langle \Omega_i(n(\vec{r},t)) \rangle \qquad (10.13)$$

where $\Omega_i$ is the collision term of the LGCA under study. It is important to notice that $\Omega_i(n)$ has some generic properties, namely
$$\sum_{i=1}^{z} \Omega_i = 0 \quad \text{and} \quad \sum_{i=1}^{z} \vec{v}_i\, \Omega_i = 0 \qquad (10.14)$$
expressing the fact that particle number and momentum are conserved during the collision process (the incoming sum of mass or momentum equals the outgoing sum).

The $N_i$'s vary between 0 and 1 and, at scales $L \gg \Delta r$ and $T \gg \Delta t$, one can expect them to be smooth functions of the space and time coordinates. Therefore, equation (10.13) can be Taylor expanded up to second order, which gives
$$\Delta r\,(\vec{c}_i \cdot \nabla) N_i(\vec{r},t) + \Delta t\, \partial_t N_i(\vec{r},t) + \frac{1}{2}(\Delta r)^2 (\vec{c}_i \cdot \nabla)^2 N_i(\vec{r},t) + \Delta r\, \Delta t\, (\vec{c}_i \cdot \nabla)\, \partial_t N_i(\vec{r},t) + \frac{1}{2}(\Delta t)^2 (\partial_t)^2 N_i(\vec{r},t) = \langle \Omega_i(n(\vec{r},t)) \rangle \qquad (10.15)$$
where $(\partial_t)^2$ is the second derivative with respect to the time parameter $t$.


At a macroscopic scale $L \gg \Delta r$, following the procedure of the so-called multiscale expansion [79], we introduce a new space variable $\vec{r}_1$ such that
$$\vec{r}_1 = \varepsilon\, \vec{r} \quad \text{and} \quad \partial_{\vec{r}} = \varepsilon\, \partial_{\vec{r}_1} \qquad (10.16)$$
with $\varepsilon \ll 1$. We also introduce the extra time variables $t_1 = \varepsilon t$ and $t_2 = \varepsilon^2 t$, as well as new functions $N_i^\varepsilon$ depending on $\vec{r}_1$, $t_1$ and $t_2$, $N_i^\varepsilon = N_i^\varepsilon(t_1, t_2, \vec{r}_1)$, and substitute into equation (10.15)
$$N_i \to N_i^\varepsilon, \qquad \partial_t \to \varepsilon\, \partial_{t_1} + \varepsilon^2\, \partial_{t_2}, \qquad \partial_{\vec{r}} \to \varepsilon\, \partial_{\vec{r}_1} \qquad (10.17)$$

together with the corresponding expressions for the second-order derivatives, thereby obtaining new equations for the new functions $N_i^\varepsilon$. Thus, we may write [79]
$$N_i^\varepsilon = N_i^{(0)} + \varepsilon\, N_i^{(1)} + \varepsilon^2\, N_i^{(2)} + \cdots \qquad (10.18)$$

The Chapman-Enskog method is the standard procedure used in statistical mechanics to solve an equation like (10.15) with a perturbation parameter $\varepsilon$. Assuming that $\langle \Omega_i(n) \rangle$ can be factorized into $\Omega_i(N)$, we write the contributions of each order in $\varepsilon$. According to the multiscale expansion (10.18), the right-hand side of (10.15) reads
$$\Omega_i(N) = \Omega_i\!\left(N^{(0)}\right) + \varepsilon \sum_{j=1}^{z} \left(\frac{\partial \Omega_i\!\left(N^{(0)}\right)}{\partial N_j}\right) N_j^{(1)} + O\!\left(\varepsilon^2\right) \qquad (10.19)$$

Using expressions (10.16)-(10.18) in the left-hand side of (10.15) and comparing terms of the same order in $\varepsilon$ in equation (10.19) yields
$$O\!\left(\varepsilon^0\right): \quad \Omega_i\!\left(N^{(0)}\right) = 0 \qquad (10.20)$$
and
$$O\!\left(\varepsilon^1\right): \quad \partial_{t_1} N_i^{(0)} + \partial_{1\alpha}\, v_{i\alpha}\, N_i^{(0)} = \frac{1}{\Delta t} \sum_{j=1}^{z} \left(\frac{\partial \Omega_i\!\left(N^{(0)}\right)}{\partial N_j}\right) N_j^{(1)} \qquad (10.21)$$
where the subscript 1 in spatial derivatives (e.g. $\partial_{1\alpha}$) indicates a differential operator expressed in the variable $\vec{r}_1$, and $\frac{\Delta r}{\Delta t}(\vec{c}_i \cdot \nabla_{r_1}) = \partial_{1\alpha} v_{i\alpha}$, from equation (10.1).


We also impose the extra conditions that the macroscopic quantities $\rho$ and $\rho\vec{u}$ are entirely given by the zero order of expansion (10.18),
$$\rho = \sum_{i=1}^{z} N_i^{(0)} \quad \text{and} \quad \rho\vec{u} = \sum_{i=1}^{z} \vec{v}_i\, N_i^{(0)} \qquad (10.22)$$
and therefore
$$\sum_{i=1}^{z} N_i^{(l)} = 0 \quad \text{and} \quad \sum_{i=1}^{z} \vec{v}_i\, N_i^{(l)} = 0, \quad \text{for } l \geq 1. \qquad (10.23)$$

Thus, following the Chapman-Enskog method, we can obtain [73], from equation (10.15), the following result at order $\varepsilon$:
$$\partial_{t_1} \rho + \operatorname{div}_1(\rho \vec{u}) = 0 \qquad (10.24)$$
and
$$\partial_{t_1}(\rho u_\alpha) + \partial_{1\beta}\, \Pi^{(0)}_{\alpha\beta} = 0. \qquad (10.25)$$

On the other hand, if we consider the terms of order $\varepsilon^2$, using relations (10.24) and (10.25) to simplify, we have
$$\partial_{t_2}(\rho u_\alpha) + \partial_{1\beta}\!\left[\Pi^{(1)}_{\alpha\beta} + \frac{\Delta t}{2}\left(\partial_{t_1}\Pi^{(0)}_{\alpha\beta} + \partial_{1\gamma}\, S^{(0)}_{\alpha\beta\gamma}\right)\right] = 0 \qquad (10.26)$$

The last equation contains the dissipative contributions to the Euler equation (10.25). The first contribution is $\Pi^{(1)}_{\alpha\beta}$, which is the dissipative part of the momentum tensor. The second part, namely $\frac{\Delta t}{2}\left(\partial_{t_1}\Pi^{(0)}_{\alpha\beta} + \partial_{1\gamma} S^{(0)}_{\alpha\beta\gamma}\right)$, comes from the second-order terms of the Taylor expansion of the discrete Boltzmann equation. These terms account for the discreteness of the lattice and have no counterpart in standard hydrodynamics. As we shall see, they will lead to the so-called lattice viscosity. The orders $\varepsilon$ and $\varepsilon^2$ can be grouped together to give the general equations governing our system. Summing equations (10.24) and (10.26), with the appropriate power of $\varepsilon$ as factor, we obtain the continuity equation (see expression (10.35)):
$$\partial_t \rho + \operatorname{div}(\rho \vec{u}) = 0 \qquad (10.27)$$

Similarly, equations (10.25) and (10.26) yield [73]
$$\partial_t(\rho u_\alpha) + \frac{\partial}{\partial r_\beta}\!\left[\Pi_{\alpha\beta} + \frac{\Delta t}{2}\left(\varepsilon\, \partial_{t_1}\Pi^{(0)}_{\alpha\beta} + \frac{\partial}{\partial r_\gamma}\, S^{(0)}_{\alpha\beta\gamma}\right)\right] = 0 \qquad (10.28)$$


We now turn to the problem of solving equation (10.20), together with conditions (10.22), in order to find the $N_i^{(0)}$ as functions of $\rho$ and $\rho\vec{u}$. The solutions $N_i^{(0)}$ which make the collision term $\Omega$ vanish are known as the local equilibrium solutions. Physically, they correspond to a situation where the rate of each type of collision equilibrates. Since the collision time $\Delta t$ is much smaller than the macroscopic observation time, it is reasonable to expect, in a first approximation, that an equilibrium is reached locally.

Provided that the collision term behaves reasonably, it is found [73] that the generic solution is
$$N_i^{(0)} = \frac{1}{1 + \exp\!\left(-A - \vec{B} \cdot \vec{v}_i\right)}. \qquad (10.29)$$

This expression has the form of a Fermi-Dirac distribution. This is a consequence of the exclusion principle we have imposed in the cellular automata rule (no more than one particle per site and direction). This form is explicitly obtained for the FHP model by assuming that the rates of direct and inverse collisions are equal. The quantities $A$ and $\vec{B}$ are functions of the density $\rho$ and the velocity field $\vec{u}$, and are to be determined according to equations (10.22). In order to carry out this calculation, $N_i^{(0)}$ is Taylor expanded up to second order in the velocity field $\vec{u}$. One obtains [80]
$$N_i^{(0)} = a\rho + \frac{b\rho}{v^2}\, \vec{v}_i \cdot \vec{u} + \frac{\rho\, G(\rho)}{v^4}\, Q_{i\alpha\beta}\, u_\alpha u_\beta \qquad (10.30)$$
where $\alpha, \beta, \gamma$ are summed over the spatial coordinates, e.g. $\alpha, \beta, \gamma \in \{1, \ldots, d\}$, $v = \frac{\Delta r}{\Delta t}$, $a = \frac{1}{z}$, $b = \frac{d}{z}$, and
$$Q_{i\alpha\beta} = v_{i\alpha}\, v_{i\beta} - \frac{v^2}{d}\, \delta_{\alpha\beta}. \qquad (10.31)$$

The function $G$ is obtained from the fact that $N_i^{(0)}$ is the Taylor expansion of a Fermi-Dirac distribution. For FHP, it is found [80, 73] that
$$G(\rho) = \frac{2}{3}\, \frac{(3 - \rho)}{(6 - \rho)}.$$
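As a numerical check, the expansion (10.30) can be evaluated and tested against the constraints (10.22): summing $N_i^{(0)}$ over the six directions should recover $\rho$, and the first moment should recover $\rho\vec{u}$. A sketch under the assumption $v = \Delta r/\Delta t = 1$ (the names are illustrative):

```python
import math

Z, D, V = 6, 2, 1.0                      # directions, dimension, v = dr/dt
C = [(math.cos(math.pi * k / 3), math.sin(math.pi * k / 3)) for k in range(6)]
A, B = 1.0 / Z, float(D) / Z             # a = 1/z, b = d/z

def G(rho):
    # G(rho) for the FHP model
    return (2.0 / 3.0) * (3.0 - rho) / (6.0 - rho)

def N_eq(rho, u):
    """Local equilibrium occupations N_i^(0) from the expansion (10.30)."""
    out = []
    u2 = u[0] ** 2 + u[1] ** 2
    for (cx, cy) in C:
        vu = V * (cx * u[0] + cy * u[1])        # v_i . u
        Qu = vu * vu - (V * V / D) * u2         # Q_{i,ab} u_a u_b, Eq. (10.31)
        out.append(A * rho + (B * rho / V ** 2) * vu
                   + (rho * G(rho) / V ** 4) * Qu)
    return out
```

The linear term drops out of the density sum because $\sum_i \vec{v}_i = 0$, and the quadratic term drops out of both moments on the hexagonal lattice, which is exactly why conditions (10.22) hold.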

We may now compute the local equilibrium part of the momentum tensor, $\Pi^{(0)}_{\alpha\beta}$, and then obtain the pressure term
$$p = a\, C_2\, v^2 \rho - \left[\frac{C_2}{d} - C_4\right] \rho\, G(\rho)\, u^2 \qquad (10.32)$$


where $C_2 = \frac{z}{d}$.

We can see [73] that the lattice viscosity is given by
$$\nu_{\text{lattice}} = -C_4\, b\, \frac{\Delta t\, v^2}{2} = -\frac{z}{d(d+2)}\, \frac{d}{z}\, \frac{\Delta t}{2}\, v^2 = -\frac{\Delta t}{2(d+2)}\, v^2.$$

The usual contribution to the viscosity, due to the collisions between the fluid particles, is given by [73]
$$\nu_{\text{coll}} = \Delta t\, v^2\, \frac{b\, C_4}{\Lambda}$$
where $\Lambda$ is given by $-\Lambda = 2s(1-s)^3$, with $s = \frac{\rho}{6}$.

Therefore, the Navier-Stokes equation reads
$$\partial_t \vec{u} + 2\, C_4\, G(\rho)\, (\vec{u} \cdot \nabla)\vec{u} = -\frac{1}{\rho} \nabla p + \nu \nabla^2 \vec{u} \qquad (10.33)$$
where
$$\nu = \Delta t\, v^2\, b\, C_4 \left(\frac{1}{\Lambda} - \frac{1}{2}\right) = \frac{\Delta t\, v^2}{d+2}\left(\frac{1}{\Lambda} - \frac{1}{2}\right) \qquad (10.34)$$
is the kinematic viscosity of our discrete fluid.

10.3 Navier-Stokes Equations

The majority of fluid models follow the Eulerian formulation of fluid mechanics; that is, the fluid is considered as a continuous system subjected to Newton's laws and the conservation laws, as well as state equations connecting the macroscopic variables that define the thermodynamic state of the fluid: pressure $P$, density $\rho$ and temperature $T$.

So, the mass conservation, also called the continuity equation, is given by [72]:
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{u}) = 0 \qquad (10.35)$$

The linear momentum conservation equation, also called the Navier-Stokes equation, can be obtained by applying Newton's second law to a volume element $dV$ of fluid. It can be written as [72]:
$$\rho\left(\frac{\partial \vec{u}}{\partial t} + \vec{u} \cdot \nabla \vec{u}\right) = -\nabla P + \vec{F} + \mu\left(\nabla^2 \vec{u} + \frac{1}{3}\nabla(\nabla \cdot \vec{u})\right) \qquad (10.36)$$


where $\vec{F}$ is an external force field and $\mu$ is the viscosity of the fluid. Besides, the equation $\nabla \cdot \vec{u} = 0$ must be added to model incompressible fluids. Thus, if we combine this equation with expression (10.36), we obtain the Navier-Stokes equations for incompressible fluids (water, for example):
$$\rho\left(\frac{\partial \vec{u}}{\partial t} + \vec{u} \cdot \nabla \vec{u}\right) = -\nabla P + \vec{F} + \mu \nabla^2 \vec{u}, \qquad (10.37)$$
$$\nabla \cdot \vec{u} = 0. \qquad (10.38)$$

Also, we need an additional equation for the pressure field. This is a state equation, which ties together all of the conservation equations of continuum fluid dynamics and must be chosen to model the appropriate fluid (i.e. compressible or incompressible). In the case of liquids, the pressure $P$ is insensitive to temperature and can be approximated by $P = P(\rho)$. Morris [81] proposed an expression that has also been used for fluid animation [82]:
$$P = c^2 \rho \qquad (10.39)$$
where $c$ is the speed of sound in the fluid [83].

Equations (10.37)-(10.39) need initial conditions $(\rho(t=0,x,y,z),\ \vec{u}(t=0,x,y,z))$. Besides, in practice, the fluid domain is a closed subset of the Euclidean space, and thus the behavior of the fluid on the domain boundary (the boundary conditions) must be explicitly given. For a fixed rigid surface $S$, one usual model is the no-slip boundary condition, which can be written as:
$$\vec{u}|_S = 0. \qquad (10.40)$$

Also, numerical methods must be used to perform the computational simulation of the fluid, because the fluid equations in general do not have analytical solutions. Finite Element (FE) and Finite Difference (FD) methods are well-known approaches in this field. Recently, the Lagrangian Method of Characteristics [84, 85] and the meshfree methods of Smoothed Particle Hydrodynamics (SPH) [82] and Moving-Particle Semi-Implicit (MPS) [86] have also been applied.
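To make such discretizations concrete, here is a minimal explicit finite-difference sketch of a one-dimensional analogue of equations (10.35) and (10.37), closed with the state equation (10.39), assuming periodic boundaries and central differences. The function and parameter names are illustrative; this is a sketch, not a production solver.

```python
def step(rho, u, dx, dt, c=1.0, mu=0.1):
    """One explicit Euler step of the 1-D compressible system:
    continuity (10.35) and momentum (10.37) with P = c^2 * rho (10.39),
    on a periodic grid with central differences."""
    n = len(rho)

    def dX(f, i):   # central first derivative
        return (f[(i + 1) % n] - f[(i - 1) % n]) / (2.0 * dx)

    def d2X(f, i):  # central second derivative
        return (f[(i + 1) % n] - 2.0 * f[i] + f[(i - 1) % n]) / dx ** 2

    flux = [rho[i] * u[i] for i in range(n)]
    rho_new = [rho[i] - dt * dX(flux, i) for i in range(n)]
    u_new = [u[i] + dt * (-u[i] * dX(u, i)
                          - c * c * dX(rho, i) / rho[i]
                          + mu * d2X(u, i) / rho[i])
             for i in range(n)]
    return rho_new, u_new
```

A uniform state is an exact fixed point of this update, as it should be; real solvers must additionally respect a stability (CFL) restriction on `dt`.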


Appendix A

DNA Computing

From the hardware viewpoint, the basic idea of DNA computing is to replace the microchips of usual computers by DNA counterparts. Capabilities of organic molecules for information processing have been discovered and explored for the design of device primitives [87, 88]. In the particular case of the DNA molecule, the massive parallelism of DNA strands and the Watson-Crick complementarity give high hopes for the future of DNA computing [11].

However, from the viewpoint of computer science theory, we can prove that by using DNA functionalities it is possible to build computing models which are equivalent in power to Turing machines. Therefore, computability theory can be reconstructed in this framework. Whether or not this has practical significance remains an open question. The theoretical background for DNA computing is based on automata and language theories. It is a beautiful field in which we can explore elements of computer science in a non-traditional way.

A.1 Structure of DNA

DNA is the molecule that plays the central role in DNA computing, as well as in living cells. It is a polymer, a large molecule, which is strung together from monomers called deoxyribonucleotides [11].

In 1951, the 23-year-old biologist James Watson traveled from the United States to work with Francis Crick, an English physicist at the University of Cambridge. Crick was already using the process of X-ray crystallography to study the structure of protein molecules. Together, Watson and Crick used X-ray crystallography data, produced by Rosalind Franklin and Maurice Wilkins at King's College in London, to decipher the structure of deoxyribonucleic acid, the DNA molecule.

This is what they already knew, from the work of other scientists, about the DNA molecule:

1) DNA is made up of subunits which scientists called nucleotides.
2) Each nucleotide is made up of a sugar, a phosphate and a base.
3) There are 4 different bases in a DNA molecule:
a) adenine (a purine) - base A
b) cytosine (a pyrimidine) - base C
c) guanine (a purine) - base G
d) thymine (a pyrimidine) - base T
4) The number of purine bases equals the number of pyrimidine bases.
5) The number of adenine bases equals the number of thymine bases.
6) The number of guanine bases equals the number of cytosine bases.
7) The basic structure of the DNA molecule is helical, with the bases being stacked on top of each other.

Working with nucleotide models made of wire, Watson and Crick attempted to put together the puzzle of DNA structure in such a way that their model would account for the variety of facts that they knew about the molecule. They published their hypothesis in 1953, in a remarkable work [89], whose main consequence was the discovery of the DNA structure, the double helix, represented on Figure A.1.

Another important result of Watson and Crick's work [89] is the so-called Watson-Crick complementarity: the pairings A-T and C-G. That is why scientists observed the facts 5 and 6 above.

The structure of the nucleotides follows the scheme presented on Figure A.2. In this figure, "B" represents the base (A, T, C, or G), P is the phosphate group, and the stick is the sugar base, with its carbons enumerated from 1′ to 5′.

A 5′-phosphate group of one nucleotide is joined with the 3′-hydroxyl group of the other one, forming a phosphodiester bond, which is a strong bond, as illustrated in Figure A.3. This is the process that forms a single strand.

The double strand is formed using Watson-Crick complementarity, which is


Figure A.1: The double helix of the DNA structure and the RNA.


Figure A.2: The structure of the nucleotides.

Figure A.3: A 5′-phosphate group is joined with a 3′-hydroxyl group.


represented on Figure A.4. In this figure the general rule for joining two single strands is pictured: hydrogen bonds (the Watson-Crick complementarity, in fact) take place to compose the full DNA molecule. In the double strand molecule the two single strands have opposite directions: the nucleotide at the 5′ end of one strand is bonded to the nucleotide at the 3′ end of the other strand. It is a standard convention, when representing double strand molecules (also called duplexes), that the upper strand runs from left to right in the 5′-3′ direction and, consequently, the lower strand runs from left to right in the 3′-5′ direction.

A.2 Operations on DNA Molecules

A.2.1 Denaturation and Renaturation

The hydrogen bonding between complementary bases is (much) weaker than the link between consecutive nucleotides within one strand. Therefore, it is possible to separate the two strands of a DNA molecule without breaking the single strands. This process is called denaturation, and it has an inverse, renaturation, also called reannealing. The process of denaturation is induced by a temperature increase; reversely, a temperature decrease induces the reannealing.

A.2.2 Lengthening DNA

There are various manipulations of DNA that are mediated by enzymes. These are proteins that catalyze chemical reactions taking place in living cells. For instance, a class of enzymes called (DNA) polymerases is able to add nucleotides to an existing DNA molecule. As a matter of fact, a polymerase can extend a strand only in the 5′-3′ direction.

In order for the polymerases to add nucleotides, two things are required: (1) an existing single-stranded template to follow, and (2) an already existing sequence (the primer), bound to a part of the template, with its 3′ end available for extension.

The whole process is pictured on Figure A.5. The polymerase will repeatedly extend the 3′ end of the shorter strand, complementing the sequence, provided that the required nucleotides are available in the solution where the reaction takes place.


Figure A.4: Watson-Crick complementarity.


Figure A.5: Lengthening DNA.


Besides, there are some polymerases that will extend a DNA molecule without a prescribed template. Terminal transferase is such a polymerase. It is useful to add single-stranded "tails" to both ends of a double strand molecule, as we see on Figure A.6.

Figure A.6: Add single tails to both ends of DNA molecule.

A.2.3 Shortening DNA

DNA can be degraded by DNA nucleases, which are divided into DNA exonucleases and DNA endonucleases. Exonucleases shorten the DNA by removing nucleotides one at a time from the ends of the DNA molecule. Figure A.7 shows a degradation process due to a 3′-nuclease, that is, a nuclease that degrades the DNA in the 3′-5′ direction. There are also nucleases that degrade the DNA in both the 3′-5′ and 5′-3′ directions.

A.2.4 Cutting DNA

Endonucleases destroy internal phosphodiester bonds of the DNA. They can be quite specialized as to what they cut, where they cut, and how they cut. For instance, the S1 endonuclease cuts only single strands, as represented in Figure A.8, or within single strand pieces of a mixed DNA molecule containing single-stranded and double-stranded pieces, as represented in Figure A.9.


Figure A.7: Degradation due to a 3′-nuclease.

Figure A.8: Destroying internal phosphodiester bonds of the DNA.


Restriction endonucleases are much more specific, because they cut only double-stranded molecules, and moreover they can be specified to cut only a specific set of sites. We shall observe that, if a stretch of DNA contains several recognition sites, then the restriction enzyme will cut all of them (Figure A.9).

Figure A.9: Cutting double stranded molecules.

A.2.5 Linking DNA

DNA molecules may be linked together by a process called ligation, which is mediated by enzymes called ligases. So, it is possible to recompose an original DNA molecule that was cut by endonucleases, as well as to obtain hybrid molecules by taking two different DNA molecules, say D1 and D2, that were cut by the same restriction enzyme, and then binding the pieces by recombination, that is, by recomposing the double strand.


A.2.6 Multiplying DNA

By using the so-called polymerase chain reaction one can produce, within a short period of time, millions of copies of a desired DNA molecule, even if one begins with only one strand of the molecule. This process earned its creator, Kary Mullis, the Nobel Prize in 1993 (nobelprize.org/chemistry/laureates/1993/mullis-autobio.html). The whole process involves a sequence of steps, and it has been applied in a variety of areas, such as genetic engineering, genome analysis, archeology, etc., and DNA computing, as we will see next.

A.2.7 Reading the Sequence

The ultimate goal in many applications of genetic engineering techniques is to learn the exact sequence of nucleotides that comprises a DNA molecule. This process is called sequencing. Fortunately, it is possible to perform such a task easily in usual labs. This is a fundamental point for DNA computing.

A.3 Beginning of DNA Computing

In this section we show how the DNA functionalities specified in section A.2 can be used to build computing devices. First of all, let us see some basic definitions in graph theory.

A directed graph G with designated vertices vin and vout is said to have a Hamiltonian path if and only if there exists a sequence of compatible one-way edges e1, e2, ..., ez (that is, a path) which begins at vin, ends at vout, and enters every other vertex exactly once. Figure A.10 shows a graph which, for vin = 0 and vout = 6, has a Hamiltonian path given by the edges 0 → 1, 1 → 2, 2 → 3, 3 → 4, 4 → 5, 5 → 6.

Up to now, there is no efficient deterministic algorithm for deciding whether an arbitrary directed graph with designated vertices has a Hamiltonian path or not. The only polynomial time algorithms to solve such a problem are non-deterministic, like the following one:

1) Generate random paths through the graph.
2) Keep only those paths which begin with vin and end at vout.
3) Reject all paths that do not involve exactly n vertices.
4) For each of the n vertices v, reject all paths that do not involve v.
5) If any paths remain, say YES; otherwise say NO.

Adleman [87] solved the Hamiltonian Path problem for the graph of Figure A.10 through an implementation of the above algorithm at the molecular level, using the DNA properties and functionalities stated in sections A.1 and A.2.
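The five steps above can be sketched directly in software (a hedged Python rendering: the graph is represented as an adjacency map, and the finite trial budget is our own addition, since the abstract algorithm is non-deterministic).

```python
import random

def random_path_hamiltonian(edges, v_in, v_out, n, trials=10000):
    """Randomized search mirroring steps 1)-5) above.

    edges : adjacency map, vertex -> list of directed successors.
    Returns a Hamiltonian path from v_in to v_out if one is found
    within the trial budget, else None (i.e. answers NO).
    """
    for _ in range(trials):
        path = [v_in]                       # step 1: a random walk
        while path[-1] != v_out and len(path) < n:
            nxt = edges.get(path[-1], [])
            if not nxt:
                break
            path.append(random.choice(nxt))
        # steps 2)-4): correct endpoints, n vertices, no repeats
        if (path[-1] == v_out and len(path) == n
                and len(set(path)) == n):
            return path                     # step 5: YES
    return None                             # step 5: NO

# A toy edge set admitting the Hamiltonian path 0 -> 1 -> ... -> 6 of
# Figure A.10 (the figure's full edge list is not reproduced here):
G = {i: [i + 1] for i in range(6)}
```

In Adleman's experiment, step 1 is carried out massively in parallel by the ligation reaction, and steps 2-4 by biochemical filtering; the code above is only the computational skeleton.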

Figure A.10: Simple directed graph.

The tools of molecular biology are used to solve an instance of the directed Hamiltonian path problem. The graph is encoded in molecules of DNA, and the operations of the computation are performed with standard protocols and enzymes. This experiment demonstrates the feasibility of carrying out computations at the molecular level.

The basic idea of Adleman's work is to encode the nodes and edges of the graph using 20-mer strands. We are free to define the former, but the latter are constrained by complementarity. Therefore, each vertex i of the graph is associated with a 20-mer strand of DNA, denoted by si, 0 ≤ i ≤ 6. For instance, the nodes 2 and 3 are encoded in Adleman's work with the following oligonucleotides:

s2 = TATCGCATCGCTATATCCGA,


s3 = GCTATTCGAGCTTAAAGCTA.

Edges are also encoded by 20-mer strands of DNA, obtained by first decomposing each si into two strands of length 10, say s′i and s′′i. So, an edge from vertex i to vertex j, provided one exists in the graph G, is encoded as the Watson-Crick complement of s′′i s′j. Hence, if we define a function h such that

h(A) = T, h(T) = A, h(C) = G, h(G) = C,

and with the convention that, for DNA strands, h is applied letter by letter (for example, h(CATT) = GTAA), we can say that the edge ei→j is encoded as h(s′′i s′j).

Now we are ready to describe the main phase of Adleman's experiment. For each vertex i in the graph, and for each edge ei→j, large quantities of the corresponding oligonucleotides were mixed together in a single ligation reaction. Here the oligonucleotides si served as splints to bring oligonucleotides associated with compatible edges together for ligation. The Watson-Crick complementarity ensures that the ligation reaction caused the formation of DNA molecules that can be viewed as encodings of random paths through the graph. Figure A.11 pictures examples of possible paths that can be formed in such an experiment. The remaining steps of the above algorithm can be performed by standard filtering or screening procedures that require biochemical techniques lying outside the aims of this discussion.

The formalization of the operations behind the construction of double-stranded sequences, and the automata counterpart of the corresponding system, are grounded in the context of sticker systems and Watson-Crick automata. With these elements it is possible to address the problems of what can be computed, and what can be efficiently computed, by DNA machines. The interested reader should see reference [11], Chapters 4 and 5.

A.4 Exercises

1. Implement the non-deterministic algorithm of section A.3 to solve the Hamiltonian path problem.

2. What is the computational complexity of this algorithm?


Figure A.11: Possible paths that can be formed in Adleman’s experiment.


Appendix B

Neural Computation and Circuit Theory

B.1 Introduction

One beautiful application of the theory of circuits summarized in Chapter 6 is in neural computing. Artificial Neural Networks (ANNs) have been studied and applied in fields like cognitive science, pattern recognition and classification [18].

Neural networks use a different approach to problem solving than conventional computers. In conventional computers there is a program, a set of instructions to be followed in order to solve a problem. On the other hand, neural networks process information in a way similar to that of the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel. There is no main program. Instead, there is a learning process in which the network learns to solve some problem. Neural networks learn by examples, which must be selected carefully; otherwise useful time is wasted, or, even worse, the network might function incorrectly.

Recently, several works on artificial neural networks (ANNs) based on quantum theory have appeared, motivated by both cognitive science and computer science aspects. The so-called Quantum Neural Networks (QNNs) are an interesting area of research in the field of quantum computation (see Appendix C) and quantum information [90]. However, an open question about QNNs is what such an architecture will look like as an implementation on quantum hardware. In section B.2 we present basic concepts in neural computing.

B.2 Classical Neural Networks

The first logical neuron was developed by W. S. McCulloch and W. A. Pitts in 1943 [18]. It describes the fundamental functions and structures of a neural cell, reporting that a neuron will fire an impulse only if a threshold value is exceeded.

Figure B.1: McCulloch-Pitts neuron model.

Figure B.1 shows the basic elements of the McCulloch-Pitts model: $x$ is the input vector, $w$ are the weights associated with the inputs, $y$ is the output, $R$ is the number of input elements, and $f$ is the activation function that determines the output value. A simple choice for $f$ is the sign function $\operatorname{sgn}(\cdot)$. In this case, if the sum across all the inputs, weighted by the respective weights, exceeds the threshold $b$, the output is 1; otherwise the value of $y$ is $-1$. That is:
$$y = \operatorname{sgn}\left(\sum_{i=1}^{R} w_i x_i - b\right). \qquad (B.1)$$

But the McCulloch-Pitts neuron did not have a mechanism for learning. Based on biological evidence, D. O. Hebb suggested a rule to adapt the input weights, which is interpreted as a learning rule for the system [18]. This biologically inspired procedure can be expressed in the following manner:
$$w_i^{new} = w_i^{old} + \Delta w_i; \qquad \Delta w_i = \eta\,(y^{desired} - y)\,x_i, \qquad (B.2)$$


where wnew and wold are adapted weights and initials weights respectively, η isa real parameter to control the rate of learning and ydesired is the desired (know)output. This learning rule plus the elements of Figure B.1 is called the perceptronmodel for a neuron.

Learning typically occurs through training, that is, exposure to a known set of input/output data. The training algorithm iteratively adjusts the connection weights w_i, which are analogous to the synapses in biological nervous systems. These connection weights store the knowledge necessary to solve specific problems.
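As an illustration, the learning rule of Eq. (B.2) can be sketched in a few lines of Python. The AND-function training set, the learning rate, and the number of epochs below are illustrative choices, not part of the original model:

```python
import numpy as np

def sgn(x):
    """Sign activation of Eq. (B.1): +1 if the weighted sum clears zero, else -1."""
    return 1 if x >= 0 else -1

def train_perceptron(X, y_desired, eta=0.1, epochs=50):
    """Iterate the rule of Eq. (B.2): w_i <- w_i + eta*(y_desired - y)*x_i.
    The threshold b is folded in as an extra weight on a constant input of 1."""
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # append the bias input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, yd in zip(X, y_desired):
            y = sgn(np.dot(w, x))
            w += eta * (yd - y) * x                # nonzero only on a mistake
    return w

# Learn the logical AND function, with outputs coded in {-1, +1}
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
yd = [-1, -1, -1, 1]
w = train_perceptron(X, yd)
preds = [sgn(np.dot(w, np.append(x, 1.0))) for x in X]
print(preds)  # the data is linearly separable, so the rule converges
```

Since AND is linearly separable, the perceptron convergence theorem guarantees that the iteration stops making mistakes after finitely many updates.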


Appendix C

Quantum Computation and Circuit Theory

C.1 Introduction

In the last two decades we have observed a growing interest in Quantum Computation and Quantum Information, due to the promise of efficiently solving problems that are hard for conventional computing paradigms. Quantum Computation and Quantum Information is the study of the information processing tasks that can be accomplished using quantum mechanical systems. Thus, it deals with methods for the characterization, transmission, storage, and computation of information in quantum states.

Quantum Computation and Quantum Information began to emerge as a coherent discipline in the 1980s. As in classical computation, this field encompasses two aspects: hardware and algorithms. The fundamental ideas of this new field come from Quantum Mechanics, Computer Science and Information Theory. The theory of circuits, developed in Chapter 6, offers a suitable computational model for the representation of a quantum algorithm. Therefore, in this chapter, we present some details of quantum computing and explain the application of circuit theory in this field.


C.2 Quantum Computation

In practice, the most useful model for quantum computation is the Quantum Computational Network, also called Deutsch's model [21, 19]. The basic information unit in this model is the qubit [9], which can be considered a superposition of two independent states |0⟩ and |1⟩, denoted by |ψ⟩ = α_0 |0⟩ + α_1 |1⟩, where α_0, α_1 are complex numbers such that |α_0|^2 + |α_1|^2 = 1.

A composed system with n qubits is described using N = 2^n independent states, obtained through the tensor product of the Hilbert spaces associated with each qubit. Thus, the resulting space has a natural basis that can be denoted by:

|i_0 i_1 ... i_{n-1}⟩;  i_j ∈ {0, 1}.    (C.1)

This set can be indexed by |i⟩, i = 0, 1, ..., N - 1. Following the postulates of Quantum Mechanics, the state of the system |ψ⟩, at any time t, can be expanded as a superposition of the basis states:

|ψ⟩ = \sum_{i=0}^{N-1} α_i |i⟩;  \sum_{i=0}^{N-1} |α_i|^2 = 1.    (C.2)
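This composition rule is easy to check numerically. The following Python sketch (using NumPy's `kron` for the tensor product; the random-state construction is purely illustrative) builds an n = 3 qubit state and verifies the dimension N = 2^n and the normalization condition of Eq. (C.2):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_qubit():
    """A random state alpha_0|0> + alpha_1|1> with |alpha_0|^2 + |alpha_1|^2 = 1."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# Compose n = 3 qubits via the tensor (Kronecker) product; the joint state
# lives in the space spanned by the N = 2**3 = 8 basis states |i0 i1 i2>.
psi = random_qubit()
for _ in range(2):
    psi = np.kron(psi, random_qubit())

print(psi.size)                                    # 8
print(np.isclose(np.sum(np.abs(psi) ** 2), 1.0))   # True: Eq. (C.2) holds
```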

Entanglement is another important concept in quantum computation, one with no classical counterpart. To understand it, a simple example is worthwhile.

Let us suppose that we have a composed system with two qubits. According to the above explanation, the resulting Hilbert space has N = 2^2 independent states.

Let H_1 denote the Hilbert space associated with the first qubit (indexed by 1) and H_2 the Hilbert space associated with the second qubit (indexed by 2). The computational bases for these spaces are given by {|0⟩_1, |1⟩_1} and {|0⟩_2, |1⟩_2}, respectively. If qubit 1 is in the state |ψ⟩_1 = a_{10} |0⟩_1 + a_{11} |1⟩_1 and qubit 2 is in the state |ψ⟩_2 = a_{20} |0⟩_2 + a_{21} |1⟩_2, then the composed system is in the state |ψ⟩ = |ψ⟩_1 ⊗ |ψ⟩_2, explicitly given by:

|ψ⟩ = \sum_{i,j ∈ {0,1}} a_{1i} a_{2j} |i⟩_1 ⊗ |j⟩_2.    (C.3)

Every state that can be represented by a tensor product |ψ⟩_1 ⊗ |ψ⟩_2 belongs to the tensor product space H_1 ⊗ H_2. However, there are some states in H_1 ⊗ H_2 that cannot be represented in the form |ψ⟩_1 ⊗ |ψ⟩_2. They are called entangled states. The Bell state (or EPR pair) presented next is a well-known example:


|ψ⟩ = (1/√2) (|0⟩_1 ⊗ |0⟩_2 + |1⟩_1 ⊗ |1⟩_2).    (C.4)

Trying to represent this state as a tensor product |ψ⟩_1 ⊗ |ψ⟩_2, with |ψ⟩_1 ∈ H_1 and |ψ⟩_2 ∈ H_2, produces an inconsistent linear system with no solution. Entangled states are fundamental for teleportation [9]. In recent years, there have been tremendous efforts to better understand the properties of entanglement, not only as a fundamental resource of Nature, but also for quantum computation and information.
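This non-factorability can also be tested numerically: a two-qubit state with amplitudes a_{ij} is a product |ψ⟩_1 ⊗ |ψ⟩_2 exactly when the 2×2 amplitude matrix A[i, j] = a_{ij} has rank 1, i.e. zero determinant. The helper name `is_product_state` in the Python sketch below is our own illustrative choice, not a standard API:

```python
import numpy as np

def is_product_state(psi, tol=1e-12):
    """A two-qubit state sum_ij a_ij |i>|j> factors as |psi>_1 (x) |psi>_2
    iff the amplitude matrix A[i, j] = a_ij has rank 1, i.e. det A = 0."""
    A = np.asarray(psi, dtype=complex).reshape(2, 2)
    return abs(np.linalg.det(A)) < tol

# A genuine product state: (|0> + |1>)/sqrt(2) on each qubit
product = np.kron([1.0, 1.0], [1.0, 1.0]) / 2.0
# The Bell state of Eq. (C.4): (|00> + |11>)/sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(is_product_state(product))  # True
print(is_product_state(bell))     # False: entangled
```

For the Bell state the determinant equals (1/√2)(1/√2) - 0 = 1/2 ≠ 0, which is the numerical counterpart of the inconsistent linear system mentioned above.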

The computation unit in Deutsch's model consists of quantum gates, which are unitary operators that evolve an initial state, performing the computation necessary to obtain the desired result. A quantum computing algorithm can be summarized in three steps: (1) prepare the initial state; (2) apply a sequence of (universal) quantum gates to evolve the system; (3) perform quantum measurements.

According to quantum mechanics, the last stage performs a collapse, and all we know in advance is the probability distribution associated with the measurement operation. So, it is possible that the result obtained by measuring the system must be post-processed to achieve the target (quantum factoring, in Chapter 6 of [19], is a nice example).

The most used computational model in quantum computing is the circuit model, because it fits very well with Deutsch's model. Figure C.1 shows a representation of a quantum circuit.

Figure C.1: Example of a quantum circuit with generic quantum gates denoted by U and D.
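The three steps above can be sketched with plain matrix algebra. The small circuit below (a Hadamard on qubit 1 followed by a CNOT, a standard textbook example rather than the generic gates U and D of Figure C.1) prepares the Bell state of Eq. (C.4):

```python
import numpy as np

# Quantum gates as unitary matrices on the computational basis |00>..|11>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard, single qubit
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips qubit 2 when qubit 1 is |1>

# Step (1): prepare the initial state |00>
psi = np.array([1, 0, 0, 0], dtype=complex)

# Step (2): evolve by the sequence of gates (H on qubit 1, then CNOT)
psi = CNOT @ np.kron(H, I) @ psi               # Bell state (|00> + |11>)/sqrt(2)

# Step (3): measurement probabilities |alpha_i|^2 over the basis states
probs = np.abs(psi) ** 2
print(np.round(probs, 3))
```

Measuring therefore yields |00⟩ or |11⟩, each with probability 1/2, which is all the probability distribution of step (3) tells us in advance.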


Bibliography

[1] R. L. Vaught. Set Theory: An Introduction. Birkhauser, Boston, 1985.

[2] E. L. Lima. Curso de Análise, volume 2. Livros Técnicos e Científicos Editora S.A., 1985.

[3] B. Russell. Bertrand Russell and the Origins of the Set-Theoretic Paradoxes.Birkhauser Verlag Basel, 1992.

[4] S. F. Barker. Filosofia da Matematica. Prentice-Hall Inc., 1976.

[5] T.A. Sudkamp. Languages and Machines: An Introduction to the Theory ofComputer Science. Addison-Wesley Publishing Company, INC., 1988.

[6] R. Garnier and J. Taylor. Discrete Mathematics for New Technology. AdamHilger, 1992.

[7] C. S. Honig. Aplicações da Topologia à Análise. Editora Edgar Blucher, Ltda, SP-Brasil, 1976.

[8] G. A. Giraldi. Lie Groups. PhD thesis, Federal University of Rio de Janeiro - COPPE, virtual01.lncc.br/~giraldi/Tese/ApendiceB.ps.gz, 2000.

[9] M. Nielsen and I. Chuang. Quantum Computation and Quantum Informa-tion. Cambridge University Press., December 2000.

[10] P.E. Dunne. Computability Theory: Concepts and Applications. Ellis Hor-wood, 1991.

[11] G. Paun, G. Rozenberg, and A. Salomaa. DNA Computing: New Computing Paradigms. Springer, Berlin, New York, 1998.


[12] Gilson A. Giraldi, Adilson V. Xavier, Antonio L. Apolinario Jr., and Paulo S.Rodrigues. Lattice gas cellular automata for computational fluid animation.Technical report, www.arxiv.org/abs/cs.GR/0507012, 2005.

[13] David Deutsch. Quantum theory, the Church-Turing principle and the uni-versal quantum computer. Proceedings of the Royal Society of London Ser. A,A400:97–117, 1985.

[14] S. Bains and J. Johnson. Noise, physics, and non-Turing computation. www.sunnybains.com/jcis2000.pdf, 2000.

[15] Henrik Reif Andersen. An introduction to binary decision diagrams. Tech-nical report, www.itu.dk/people/hra/bdd97.ps, 1997.

[16] M.D. Davis and E.J. Weyuker. Computability, complexity, and languages:fundamentals of theoretical computer science. New York Academic Press,1983.

[17] Santanu Chattopadhyay, Shelly Adhikari, Sabyasachi Sengupta, and MahuaPal. Highly regular, modular, and cascadable design of cellular automata-based pattern classifier. IEEE Trans. Very Large Scale Integr. Syst., 8(6):724–735, 2000.

[18] R. Beale and T. Jackson. Neural Computing. MIT Press, 1994.

[19] J. Preskill. Quantum computation - caltech course notes. Technical report,http://www.theory.caltech.edu/people/preskill/ph229/, 2001.

[20] John H. Reif and Stephen R. Tate. On threshold circuits and polynomialcomputation. SIAM J. Comput., 21(5):896–908, 1992.

[21] S. Gupta and R. Zia. Quantum neural networks. Technical report,http://www.arxiv.org/PS cache/quant-ph/pdf/0201/0201144.pdf, 2002.

[22] H.R. Lewis and C.H. Papadimitriou. Elements of the Theory of Computation.Prentice-Hall, Second Edition., 1998.

[23] M.L. Minsky. Computation: Finite and Infinite Machines. Prentice-Hall,INC., 1967.


[24] J.L. Szwarcfiter and L. Markenzon. Estruturas de Dados e seus Algoritmos.Editora LTC., 1994.

[25] J. Demongeot, E. Goles, and M. Tchuente, editors. Dynamical Systems andCellular Automata, Proceedings of the Conference on Dynamical Behaviourof Cellular Automata: Theory and Applications, Luminy, France, September13–17, 1983. Academic Press, London.

[26] J. Demongeot, E. Goles, and M. Tchuente. Dynamical systems and cellularautomata. Academic Press, New York, 1985.

[27] J. von Neumann. The Theory of Self-Reproducing Automata. Univ. of Illinois Press, Urbana, IL, 1966.

[28] Palash Sarkar. A brief history of cellular automata. ACM Comput. Surv.,32(1):80–107, 2000.

[29] S. Wolfram. Computational theory of cellular automata. Communications inMathematical Physics, 96:15–57, 1984.

[30] S. Wolfram. A New Kind of Science. Wolfram Media Inc., Champaign, Illinois, United States, 2002.

[31] S. Wolfram. Cellular Automata and Complexity. Addison-Wesley, ReadingMA, 1994.

[32] Stephen Wolfram. Universality and complexity in cellular automata. Physica D, 10:1–35, 1984.

[33] Howard Gutowitz. Statistical properties of cellular automata in the context oflearning and recognition. part II: Inverting local structure theory equationsto find cellular automata with specified properties. In K. H. Zhao, editor,Learning and Recognition–A Modern Approach, pages 256–280. World Sci-entific Publishing, Singapore, 1989.

[34] L. S. Schulman and P. E. Seiden. Statistical mechanics of a dynamical sys-tem based on conway’s game of life. J. Stat Phys., 19:293, 1978.

[35] Stephen Wolfram. Statistical mechanics of cellular automata. Rev. Mod. Phys., 55:601–644, 1983.


[36] H. Gutowitz. A hierarchical classification of CA. Physica D, 45:136, 1990.

[37] O. Martin, A. Odlyzko, and S. Wolfram. Algebraic properties of cellularautomata. Commun. Math. Phys., 93:219, 1984.

[38] John Pedersen. Cellular automata as algebraic systems. Complex Systems,6(3):237–250, June 1992.

[39] Hans B. Sieburg and Oliver K. Clay. Cellular automata as algebraic systems.Complex Systems, 5(6):575–602, December 1991.

[40] Aloke K. Das, A. Sanyal, and P. Palchaudhuri. On characterization of cellu-lar automata with matrix algebra. Inf. Sci., 61(3):251–277, 1992.

[41] A. K. Das and P. P. Chaudhuri. Vector space theoretic analysis of additivecellular automata and its application for pseudoexhaustive test pattern gen-eration. IEEE Trans. Comput., 42(3):340–352, 1993.

[42] P.P. Chaudhuri, D.R. Chowdhury, S. Nandi, and S. Chatterjee. Additive Cel-lular Automata, Theory and Applications, volume 1. IEEE Computer Soci-ety Press, Los Alamitos, California, 1997.

[43] N. Boccara, J. Nasser, and M. Roger. Particle-like structures and interactionsin spatio-temporal patterns generated by one-dimensional deterministic cel-lular automaton rules. Phys. Rev. A, 44, July 1991.

[44] F. Jimenez Morales, J.P. Crutchfield, and M. Mitchell. Evolving two-dimensional cellular automata to perform density classification: a report onwork in progress. In S. Bandini et al., editor, Cellular automata: researchtowards industry, pages 3–14. Springer-Verlag, 1998.

[45] Howard Gutowitz. Statistical properties of cellular automata in the contextof learning and recognition. part i: Introduction. In K.H. Zhao, editor, Learn-ing and Recognition–A Modern Approach, pages 233–255. World ScientificPublishing, Singapore, 1989.

[46] Carter Bays. Patterns for simple cellular automata in a universe of densepacked spheres. Complex Systems, 1(6):853–875, December 1987.


[47] Information Mechanics Group. Cam8: a parallel, uniform, scalable archi-tecture for ca experimentation. http://www.im.lcs.mit.edu/cam8.

[48] Howard Gutowitz. A massively parallel cryptosystem based on cellular au-tomata. In B. Schneier, editor, Applied Cryptography. K. Reidel, 1993.

[49] G. Bard Ermentrout and L. Edelstein-Keshet. Cellular automata approachesto biological modeling. J. theor. Biol., 160:97–133, 1993.

[50] G. Bard Ermentrout and Leah Edelstein-Keshet. Cellular automata ap-proaches to biological modeling. Journal of Theoretical Biology, 160:97–133, Jan 1993.

[51] H. Hartman and G. Vichniac. Inhomogenous cellular automata. In E. Bi-enenstock and et al., editors, Disordered Systems and Biological Organiza-tion. unknown, 1900.

[52] Haiping Fang, Zuowei Wang, Zhifang Lin, and Muren Liu. Lattice boltz-mann method for simulating the viscous flow in large distensible blood ves-sels. Physical Review E, 65:051925, 2002.

[53] L.P. Kadanoff, G.R. McNamara, and G. Zanetti. From automata to fluid flow: Comparisons of simulation and theory. Physical Review A, 40(8):4527–4541, 1989.

[54] A. Pires, D.P. Landau, and H. Herrmann, editors. Computational Physicsand Cellular Automata. World Scientific, 1990.

[55] S. Wolfram. Cellular automaton fluid: basic theory. J. Stat. Phys., 45:471,1986.

[56] S. Succi. The Lattice Boltzmann Equation for Fluid Dynamics and Beyond(Numerical Mathematics and Scientific Computation). Oxford UniversityPress, 2nd edition, 2002.

[57] Bastien Chopard and Michel Droz. Cellular Automata Modeling of PhysicalSystems. Cambridge University Press, 1998.


[58] Bastien Chopard, Alexandre Dupuis, Alexandre Masselot, and Pascal Luthi.Cellular automata and lattice boltzmann techniques: An approach to modeland simulate complex systems. Advances in complex systems, 5(2):1–144,2002. special issue on: Applications of Cellular Automata in Complex Sys-tems.

[59] Christoph Adami. Introduction to Artificial Life. Springer, New York, 1998.

[60] S. Wolfram. Web site: www.stephenwolfram.com/publications/articles/ca/.

[61] L. Deniau and J. Blanc-Talon. PCA and cellular automata: a statistical approach for deterministic machines. 1994.

[62] A.K. Das, A. Ganguly, A. Dasgupta, S. Bhawmik, and P.P. Chandhuri. Ef-ficient characterization of cellular automata. In IEE Proceedings, volume137, Reading, MA, January 1990.

[63] H. Haken. Advanced Synergetics: Instability Hierarchies of Self-OrganizingSystems and Devices. Springer-Verlag, New York, 1987.

[64] G. Iooss and D.D. Joseph. Elementary Stability and Bifurcation Theory.Springer-Verlag, New York, 1997.

[65] T. Postol and I.N. Stewart. Catastrophe Theory and its Applications. PitmanPublishing Limited, London, 1978.

[66] B. Mandelbrot. The Fractal Geometry of Nature. W.H. Freeman and Co.,San Francisco., 1982.

[67] M. Barnsley. Fractals Everywhere. Academic Press INC., 1988.

[68] Francis Heylighen. Technical report, citeseer.ist.psu.edu/heylighen99science.html, 1999.

[69] R.L. Liboff. Kinetic Theory: Classical, Quantum, and Relativistic Descrip-tions. Prentice-Hall International Editions, 1990.

[70] M. Desbrun and M. P. Cani. Smoothed particles: A new paradigm for ani-mating highly deformable bodies. In Proceedings of EG Workshop on Ani-mation and Simulation, pages 61–76. Springer-Verlag, 1996.


[71] G.A. Giraldi, A.V. Xavier, A.L. Apolinario Jr, and P.S. Rodrigues. Latticegas cellular automata for computational fluid animation. Technical report,http://www.arxiv.org/format/cs.GR/0507012, 2005.

[72] C. Hirsch. Numerical Computation of Internal and External Flows: Funda-mentals of Numerical Discretization. John Wiley & Sons, 1988.

[73] U. Frisch, D. D'Humieres, B. Hasslacher, P. Lallemand, Y. Pomeau, and J.-P. Rivet. Lattice gas hydrodynamics in two and three dimensions. Complex Systems, pages 649–707, 1987.

[74] T. Inamuro, T. Ogata, and F. Ogino. Numerical simulation of bubble flows by the lattice Boltzmann method. Future Generation Computer Systems, 20(6):959–964, 2004.

[75] S. Wolfram. Cellular automata and complexity. Addison-Wesley, http://www.stephenwolfram.com/publications/articles/ca/86-fluids/index.html, 1996.

[76] J. Harting, J. Chin, M. Venturoli, and P. V. Coveney. Large-scale lat-tice boltzmann simulations of complex fluids: advances through the adventof computational grids. http://www.ica1.uni-stuttgart.de/ jens/pub/05/05-PhilTransReview.pdf, 2005.

[77] U. Frisch, B. Hasslacher, and Y. Pomeau. Lattice-gas automata for the Navier-Stokes equation. Phys. Rev. Lett., 56:1505, 1986.

[78] G. Doolen. Lattice Gas Method for Partial Differential Equations. Addison-Wesley, 1990.

[79] J. Piasecki. Echelles de temps multiples en theories cinetique. Cahiers dephysique. Press polytechniques et universitaire romandes, 1997.

[80] B. Chopard and M. Droz. Cellular Automata Modeling of Physical Systems.Cambridge University Press, 1998.

[81] J.P. Morris, P.J. Fox, and Yi Zhu. Modeling low Reynolds number incompressible flows using SPH. Journal of Computational Physics, 136:214–226, 1997.


[82] M. Muller, D. Charypar, and M. Gross. Particle-based fluid simulation forinteractive applications. In Proceedings of ACM SIGGRAPH symposium onComputer animation, 2003.

[83] B. Schlatter. A pedagogical tool using smoothed particle hydrodynamics tomodel fluid flow past a system of cylinders. Master’s thesis, 1989.

[84] J. Stam. Stable fluids. In Proceedings of the 26th annual conferenceon Computer graphics and interactive techniques, pages 121–128. ACMPress/Addison-Wesley Publishing Co., 1999.

[85] J. Stam. Real-time fluid dynamics for games. In Proceedings of the GameDeveloper Conference, 2003.

[86] S. Premoze, T. Tasdizen, J. Bigler, A. Lefohn, and R. Whitaker. Particle-based simulation of fluids. In EUROGRAPHICS, volume 22, 2003.

[87] Leonard M. Adleman. Molecular computation of solutions to combinatorialproblems. Science, 266(11):1021–1024, 1994.

[88] P. N. Hengen, I. G. Lyakhov, L. E. Stewart, and T. D. Schneider. Molecu-lar flip-flops formed by overlapping Fis sites. Nucleic Acids Res., 31, No.22:6663–6673, 2003.

[89] J.D. Watson and F.H.C. Crick. A structure for deoxyribose nucleic acid.Nature, 171:737–738, 1953.

[90] Gilson A. Giraldi and Jean Faber. Quantum models for artificial neu-ral networks. Technical report, virtual01.lncc.br/giraldi/TechReport/QNN-Review.pdf.gz, 2002.
