Some Approaches towards Lattice Computing inMathematical Morphology and Computational
Intelligence
Peter Sussner
Mathematical Imaging and Computational Intelligence GroupDepartment of Applied Mathematics, IMECC
University of Campinas
The 11th International FLINS Conference on Decision Making andSoft Computing
Peter Sussner (Unicamp) LC in MM and Comp. Intelligence FLINS 2014 1 / 68
Introduction
Lattice Theory (LT)
Some Remarks on LT
Origins:
Boolean algebra (mid 19th century);
Dedekind's investigations on number theory (late 19th and early 20th century);
Major branch of abstract algebra since 1940 (Birkhoff's book);
Linked to fuzzy set theory by Goguen in 1967;
Provides a framework for MM, as shown by Serra in 1988.
LT has found applications in many areas such as:
mathematical morphology (MM);
fuzzy set theory;
computational intelligence;
automated decision making;
formal concept analysis.
Peter Sussner (Unicamp) LC in MM and Comp. Intelligence FLINS 2014 2 / 68
Introduction
Lattice Computing
Graña’s Original Definition:
The collection of computational intelligence tools and techniques that
make use of the lattice operators inf and sup for the construction of computational algorithms,
or exploit lattice theory for language representation and reasoning.
Kaburlasos’ Extended Definition:
An evolving collection of tools and mathematical modeling methodologies with
the capacity to process lattice-ordered data per se, including logic values, numbers, sets, symbols, graphs, etc.
Peter Sussner (Unicamp) LC in MM and Comp. Intelligence FLINS 2014 3 / 68
Introduction
Organization of this talk
1 Introduction
2 Basic Concepts of Lattice Theory and MM on Complete Lattices
3 L-Fuzzy MM
4 Lattice Fuzzy Transforms
5 General Concepts of MNNs
6 Some Examples of MNNs and Related LC Models
7 Conclusions and Perspectives for the Future
Basic Concepts of LT and MM on Complete Lattices
Complete Lattices
Recall that a partially ordered set, or poset, is a set L ≠ ∅ together with a partial order relation ≤. Thus, a poset is a pair (L, ≤). If ≤ arises clearly from the context, one writes L instead of (L, ≤).

A poset L is called a lattice if the infimum ∧Y and the supremum ∨Y exist in L for every finite, nonempty Y ⊆ L.
If 0_L = ∧L and 1_L = ∨L exist in L, then L is bounded.
If ∧Y and ∨Y exist in L for every Y ⊆ L, then L is complete.

Examples of complete lattices include R±∞ = R ∪ {−∞, +∞}, Z±∞ = Z ∪ {−∞, +∞}, {0, 1}, [0, 1], I = {[a, b] ⊆ [0, 1]} with the partial ordering [a, b] ≤ [c, d] ⇔ a ≤ c and b ≤ d, and P(X), the class of all subsets of X, with the partial ordering given by ⊆.

If L is a complete lattice, then L^X = {f : X → L}, for an arbitrary set X, and L^n also yield complete lattices.
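As a small illustration (ours, not from the talk), infima and suprema in the interval lattice I are computed componentwise; intervals are modeled as (a, b) tuples:

```python
# Sketch: the interval lattice I = {[a, b] ⊆ [0, 1]} carries the
# componentwise order [a, b] ≤ [c, d] ⇔ a ≤ c and b ≤ d, so infima
# and suprema of arbitrary families are taken componentwise.

def inf_intervals(intervals):
    """Infimum of a family of intervals: componentwise minimum."""
    return (min(a for a, b in intervals), min(b for a, b in intervals))

def sup_intervals(intervals):
    """Supremum of a family of intervals: componentwise maximum."""
    return (max(a for a, b in intervals), max(b for a, b in intervals))

Y = [(0.2, 0.5), (0.1, 0.7), (0.3, 0.6)]
print(inf_intervals(Y))  # (0.1, 0.5)
print(sup_intervals(Y))  # (0.3, 0.7)
```

Note that the infimum need not be one of the given intervals; completeness only requires it to exist in I.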
L-Fuzzy Sets
Definition and Notations:
An L-fuzzy set A consists of a universe X together with a membership function µA : X → L. If L is a complete lattice, then the class of L-fuzzy sets over the universe X, denoted by FL(X), also represents a complete lattice.
If A ∈ FL(X), then we simply write A(x) instead of µA(x).
L-Fuzzy Sets for special choices of a complete lattice L:
F[0,1](X) = F(X), the class of fuzzy sets over X;
FI(X) equals the class of interval-valued fuzzy sets over X;
L = P([0,1]) yields the class of interval type-2 fuzzy sets;
L = F([0,1]) yields the class of type-2 fuzzy sets (T2-FSs).
Homomorphisms and Isomorphisms
Given complete lattices L and M, a mapping ϕ : L → M is called a complete lattice homomorphism if:

ϕ(∧X) = ∧ϕ(X) and ϕ(∨X) = ∨ϕ(X) ∀ X ⊆ L.

If ϕ is also bijective, then ϕ is called a (complete) lattice isomorphism. In this case, L and M are said to be isomorphic, and we write L ≃ M.

For any complete lattice L, we have FL(X) ≃ L^X, and if |X| = n, then FL(X) ≃ L^n.
Two Basic Operators of MM on Complete Lattices
Complete lattices yield a mathematical framework for MM. From now on, the symbols L and M denote complete lattices.
Consider operators ε, δ : L → M.

ε is called an (algebraic) erosion if ε(∧Y) = ∧_{y∈Y} ε(y), ∀ Y ⊆ L.

δ is called an (algebraic) dilation if δ(∨Y) = ∨_{y∈Y} δ(y), ∀ Y ⊆ L.
Adjunctions and (Algebraic) Dilations and Erosions
Definition:
Consider δ : L → M and ε : M → L. The pair (ε, δ) is called an adjunction from L to M; in other words, ε and δ are adjoint if
δ(x) ≤ y ⇔ x ≤ ε(y) ∀ x ∈ L , y ∈ M .
Useful Facts:
Let δ : L → M and ε : M → L.
If (ε, δ) is an adjunction, then δ is a dilation and ε is an erosion.
For every dilation δ there is a unique erosion ε such that (ε, δ) isan adjunction.
For every erosion ε there is a unique dilation δ such that (ε, δ) isan adjunction.
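As a concrete check (ours), the Lukasiewicz pair δ(x) = max(0, s + x − 1) and ε(y) = min(1, y − s + 1) on L = M = [0, 1] satisfies the adjunction condition; exact rationals avoid floating-point artifacts in the comparison:

```python
from fractions import Fraction

def delta(s, x):  # Lukasiewicz conjunction C(s, .) -- a dilation on [0, 1]
    return max(Fraction(0), s + x - 1)

def eps(s, y):    # Lukasiewicz implication I(s, .) -- its adjoint erosion
    return min(Fraction(1), y - s + 1)

# verify delta(x) <= y  <=>  x <= eps(y) on a grid of rationals
grid = [Fraction(i, 10) for i in range(11)]
s = Fraction(7, 10)
ok = all((delta(s, x) <= y) == (x <= eps(s, y)) for x in grid for y in grid)
print(ok)  # True
```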
L-Fuzzy MM
Origins of Mathematical Morphology
Mathematical Morphology (MM) originated as a theory for theprocessing and analysis of images using structuring elements (SEs).
Applications of MM include

1 noise removal;
2 skeletonizing;
3 edge detection;
4 automatic target recognition;
5 image segmentation;
6 image restoration.
Two Perspectives on MM
MM from two different (but not mutually exclusive) points of view:
MM in the geometrical or topological sense: employs SEs as wellas inclusion and intersection measures;
MM in the algebraic sense: usually defined in a complete latticesetting, recently extended to complete semilattices.
Basic Operators of MM in the geometrical or topological sense
(Morphological) erosion: yields a degree of inclusion of thetranslated SE in the image at every pixel;
(Morphological) dilation: yields a degree of intersection of the image with the (reflected and) translated SE at every pixel.
Basic Concepts of L-Fuzzy MM
Consider an image a and an SE s ∈ FL(X), where X = R^d or X = Z^d.

s_x(y) = s(y − x) ∀ y ∈ X, (translation of s by x)

s̄(y) = s(−y) ∀ y ∈ X. (reflection of s around the origin)

Let IncL and SecL be, respectively, inclusion and intersection measures.

An L-fuzzy (morphological) erosion of a ∈ FL(X) by s ∈ FL(X):

EL(a, s)(x) = IncL(s_x, a) ∈ L, ∀ x ∈ X;

An L-fuzzy (morphological) dilation of a ∈ FL(X) by s ∈ FL(X):

∆L(a, s)(x) = SecL(s_x, a) ∈ L, ∀ x ∈ X.
Two Special Cases of L-Fuzzy MM
Binary MM:
Binary MM deals with images A and SEs S ∈ P(X) ≃ F{0,1}(X).
The operators of binary erosion and dilation can be defined usingcrisp inclusion and intersection measures.
Fuzzy MM:
Fuzzy MM deals with images a and SEs s such that a,s ∈ F(X).
Fuzzy erosions and dilations can be defined using fuzzy inclusionand intersection measures.
Illustration of Binary Erosion
Figure: Binary image A, SE S, and the binary erosion of A by S.
Illustration of Fuzzy Erosion
Figure: Fuzzy image a, SE s, and the fuzzy erosion of a by s.
Some Comments on Gray-Scale MM
Images and SEs are viewed as elements of G^X, where G = R±∞ or G = Z±∞.

The usual approach towards gray-scale MM is the umbra approach, which:

can be derived from binary MM;
yields algebraic erosions and dilations G^X → G^X for arbitrary, but fixed, SEs;
is closely related to fuzzy MM based on Lukasiewicz operators;
and can be extended to images and SEs in G^X, where G is an arbitrary complete l-group extension.
Lattice-Ordered Groups
A lattice that also represents a group such that every group translation x ↦ a + x + b is isotone is called an l-group.

An l-group F such that F is a conditionally complete lattice is called a conditionally complete l-group.

A complete lattice G such that F = G \ {∨G, ∧G} forms an l-group is called a complete l-group extension.

Examples

1 R^n and Z^n are conditionally complete l-groups ∀ n ∈ N.
2 R±∞ = R ∪ {−∞, +∞} and Z±∞ = Z ∪ {−∞, +∞} represent complete l-group extensions.
Construction of L-Fuzzy Erosions and Dilations
Let I and C be, respectively, an L-fuzzy implication and an L-fuzzy conjunction. The operators EL, ∆L : FL(X) × FL(X) → FL(X) are defined as follows:

EL(a, s)(x) = IncL(s_x, a) = ∧_{y∈X} I(s_x(y), a(y)) ∀ x ∈ X,

∆L(a, s)(x) = SecL(s_x, a) = ∨_{y∈X} C(s_x(y), a(y)) ∀ x ∈ X.

We have:

EL(·, s) are (algebraic) erosions for all s ∈ FL(X) if and only if I(s, ·) are (algebraic) erosions for all s ∈ L.

∆L(·, s) are (algebraic) dilations for all s ∈ FL(X) if and only if C(s, ·) are (algebraic) dilations for all s ∈ L.
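As an illustrative sketch (ours, not from the talk), the two formulas can be evaluated for a small 1-D image on L = [0, 1] with the Lukasiewicz implication I(s, a) = min(1, 1 − s + a) and conjunction C(s, a) = max(0, s + a − 1):

```python
def erosion(a, s):
    """E(a, s)(x) = inf over y of I(s_x(y), a(y)), with s_x(y) = s(y - x)."""
    n, m = len(a), len(s)
    return [min(min(1.0, 1.0 - s[j] + a[x + j])
                for j in range(m) if 0 <= x + j < n)
            for x in range(n)]

def dilation(a, s):
    """Delta(a, s)(x) = sup over y of C(s_x(y), a(y))."""
    n, m = len(a), len(s)
    return [max(max(0.0, s[j] + a[x + j] - 1.0)
                for j in range(m) if 0 <= x + j < n)
            for x in range(n)]

a = [0.2, 0.9, 1.0, 0.6, 0.1]  # fuzzy "image"
s = [1.0, 1.0]                 # flat structuring element of width 2
print(erosion(a, s))   # pointwise minima under the SE window
print(dilation(a, s))  # pointwise maxima under the SE window
```

For a flat SE (s ≡ 1), the Lukasiewicz erosion and dilation reduce to moving minima and maxima, matching the classical gray-scale operators.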
L-Fuzzy Operators
An operator C : L × L → L is called a conjunction on L, or an L-fuzzy conjunction, if

C is increasing,
C(0L, 0L) = C(0L, 1L) = C(1L, 0L) = 0L and C(1L, 1L) = 1L.

If C is in addition commutative and associative with C(x, 1L) = x ∀ x ∈ L, then C is called a triangular norm, or simply t-norm, on L (L-fuzzy disjunctions and s-norms are defined similarly).

An operator I : L × L → L is called an implication on L, or an L-fuzzy implication, if

I is decreasing in the first argument,
I is increasing in the second argument,
I(0L, 0L) = I(0L, 1L) = I(1L, 1L) = 1L, and I(1L, 0L) = 0L.

An involutive bijection L → L that reverses the partial ordering is called a negation on L.
Interval-Valued Fuzzy (IV-Fuzzy) Operators
Some examples of IV-fuzzy operators, i.e., I-fuzzy operators:

The pessimistic conjunction C^p_C with representative C:

C^p_C(x, y) = [C(x1, y1), C(x1, y2) ∨ C(x2, y1)].

The symbol T^p_W denotes C^p_C if C = T_W (Lukasiewicz t-norm), and ∆^p_W denotes the I-fuzzy dilation based on T^p_W.

The optimistic implication I^o_I with representative I:

I^o_I(x, y) = [I(x1, y1) ∧ I(x2, y2), I(x1, y2)].

The symbol I^o_W denotes I^o_I if I = I_W (Lukasiewicz implication), and E^o_W denotes the I-fuzzy erosion based on I^o_W.

The standard negation N^I_S on I is given by N^I_S(x) = [1 − x2, 1 − x1] ∀ x = [x1, x2] ∈ I.
Properties and Applications of E^o_W and ∆^p_W

Properties:

For every s ∈ FI(X), we have:

E^o_W(·, s) and ∆^p_W(·, s) are adjoint.

E^o_W(·, s) and ∆^p_W(·, s) are dual w.r.t. the standard negation N_S on FI(X), where N_S(a)(x) = N^I_S(a(x)) ∀ x ∈ X, that is:

∆^p_W(a, s) = N_S(E^o_W(N_S(a), s)) ∀ a ∈ FI(X).

Morphological Gradient:

Given an SE s ∈ FI(X), an IV morphological gradient image of a ∈ FI(X) is given by:

E^o_W(a, s)(x) − ∆^p_W(a, s)(x) ∀ x ∈ X,

where x − y = [x1 − y2, (x1 − y1) ∨ (x2 − y2)] ∀ x, y ∈ I.
An Application in Image Segmentation
The Watershed Transform:

A region-based approach to image segmentation;
we chose F. Meyer's flooding algorithm for our experiments;
pre- or postprocessing techniques are usually employed to avoid oversegmentation;
to this end, we used the 8-connected disk with radius 1 as a structuring element in gradient and filtering techniques.
Figure: Illustration of the Watershed Algorithm.
Tomographic Image Reconstruction
Experiments Using the Shepp-Logan Phantom:
Consider a discretized version of the famous Shepp-Logan phantom (on a 256 × 256 grid) as well as the reconstructions produced by the following algorithms in a noiseless setting using 600 uniform views and 400 equally spaced rays within each view:
filtered backprojection (FBP) with Ram-Lak filter,

filter of the backprojections (FOB) with Ram-Lak filter,
Tretiak & Metz reconstruction with attenuation parameter 0.1.
Visualization of Image Reconstruction Algorithms
Original Shepp-Logan Phantom (Top Left) and Reconstructions:
Figure: Original Shepp-Logan phantom and the reconstructions produced by the three algorithms above.
Conventional Morphological Gradient Images
Morphological Gradients of the Shepp-Logan Phantom and Reconstructed Images after Applying the h-Minima Transform (with h = 0.07):
Application of Watershed Transform
Watershed Transforms of Original and Reconstructed Images:
IV Representation of the Reconstructed Images
Lower and Upper Bounds Given by the Pixelwise Minimum and Maximum of the Reconstructed Images:
IV Morphological Gradient and Resulting Watershed
IV Morphological Gradient (Top Row), Mean of Upper and Lower Bounds, and Watershed Transform (after h-Minima Transform):
Lattice FTs
Fuzzy Transforms (FTs)
Introductory Remarks:
FTs come in pairs consisting of a direct and an inverse transform:

The direct transform maps functions residing in an original space to functions in a transformed space.
The inverse transform maps functions in the transformed space back into the original space.

There are three versions of FTs: one linear and two lattice-based versions.

Lattice FTs were defined as discrete transforms F(P) → F(K) (direct transform) or F(K) → F(P) (inverse transform) for finite universes P and K.

Generalized versions consist of direct transforms FL(P) → FL(K) and inverse transforms FL(K) → FL(P), where P and K are arbitrary and (L, ∨, ∧, ⋆, →) is a complete residuated lattice.
Residuated Lattices
A residuated lattice (RL) is an algebra (L, ∨, ∧, ⋆, →) such that
1 (L, ∨, ∧) is a bounded lattice.
2 (L, ⋆) is a commutative monoid whose identity element is 1L.
3 The operation → is a residuation operation with respect to ⋆, i.e.:

z ⋆ x ≤ y ⇔ x ≤ z → y, ∀ x, y, z ∈ L.

In other words, (z ⋆ ·, z → ·) is an adjunction for every z ∈ L.
If L is complete, then we speak of a complete residuated lattice.
In any residuated lattice, the operator ⋆ is an L-fuzzy t-norm.
Note that L-fuzzy MM can be conducted in a complete RL.
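To make the residuation property concrete, here is a small numerical check (ours) for the Goedel structure on [0, 1], where z ⋆ x = min(z, x) and z → y equals 1 if z ≤ y and y otherwise:

```python
from fractions import Fraction

def conj(z, x):   # Goedel t-norm
    return min(z, x)

def res(z, y):    # its residuum (Goedel implication)
    return Fraction(1) if z <= y else y

# check z * x <= y  <=>  x <= z -> y on a grid of exact rationals
grid = [Fraction(i, 5) for i in range(6)]
ok = all((conj(z, x) <= y) == (x <= res(z, y))
         for z in grid for x in grid for y in grid)
print(ok)  # True
```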
Definitions of Lattice FTs
Generalized Versions of Lattice FTs:
If (L, ∨, ∧, ⋆, →) is a complete RL, P and K are arbitrary universes, and Ak ∈ FL(P) are basic functions such that ∀ p ∈ P ∃ k ∈ K with Ak(p) ≠ 0L, then F↑, F↓ : FL(P) → FL(K) (direct transforms) and f↑, f↓ : FL(K) → FL(P) (inverse transforms) are defined as follows:

F↑(f)(k) = ∨_{p∈P} Ak(p) ⋆ f(p), ∀ f ∈ FL(P), ∀ k ∈ K,

f↑(F)(p) = ∧_{k∈K} Ak(p) → F(k), ∀ F ∈ FL(K), ∀ p ∈ P,

F↓(f)(k) = ∧_{p∈P} Ak(p) → f(p), ∀ f ∈ FL(P), ∀ k ∈ K,

f↓(F)(p) = ∨_{k∈K} Ak(p) ⋆ F(k), ∀ F ∈ FL(K), ∀ p ∈ P.
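A minimal sketch (ours) of the direct transform F↑ and the inverse transform f↑ over finite universes, using the Lukasiewicz residuated lattice on [0, 1]; the basic functions A_k below are an arbitrary choice for illustration:

```python
def star(a, b):    # Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

def arrow(a, b):   # its residuum
    return min(1.0, 1.0 - a + b)

# basic functions A_k(p): rows indexed by k in K, columns by p in P
A = [[1.0, 0.5, 0.0, 0.0],
     [0.0, 0.5, 1.0, 0.5],
     [0.0, 0.0, 0.0, 0.5]]

def direct_up(f):
    """F↑(f)(k) = sup_p A_k(p) * f(p)."""
    return [max(star(Ak[p], f[p]) for p in range(len(f))) for Ak in A]

def inverse_up(F):
    """f↑(F)(p) = inf_k A_k(p) -> F(k)."""
    return [min(arrow(A[k][p], F[k]) for k in range(len(A)))
            for p in range(len(A[0]))]

f = [0.8, 0.4, 0.6, 0.2]
F = direct_up(f)
print(F)
print(inverse_up(F))  # dominates f pointwise: f↑ ∘ F↑ is a closing
```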
Lattice FTs and MM on Complete Lattices
Theorem:
Let Ak ∈ FL(P), where k ∈ K. We have:
1 F↑ : FL(P) → FL(K) represents an (algebraic) dilation.
2 f↑ : FL(K) → FL(P) represents an (algebraic) erosion.
3 F↓ : FL(P) → FL(K) represents an (algebraic) erosion.
4 f↓ : FL(K) → FL(P) represents an (algebraic) dilation.
Moreover, the pairs (F↑, f↑) and (F↓, f↓) represent adjunctions.
Relationship of Lattice FTs to L-Fuzzy MM
Theorem:
Given a complete RL (L, ∨, ∧, ⋆, →), let F↑, F↓, f↑, f↓ : FL(P) → FL(P) with Ak ∈ FL(P) for k in an arbitrary group P with identity element 0. Let s denote the SE in FL(P) that satisfies s(p) = A0(p) for all p ∈ P. If A_{k+q}(p + q) = Ak(p) ∀ k, p, q ∈ P, then F↑, F↓, f↑, and f↓ can be written as follows in terms of L-fuzzy erosions or L-fuzzy dilations:

F↑(f) = ∆L(f, s),

f↑(F) = EL(F, s),

F↓(f) = EL(f, s),

f↓(F) = ∆L(F, s).
General Concepts of MNNs
Morphological Neural Networks (MNNs)
MNNs can be seen as approaches towards lattice computing orcomputational intelligence based on lattice theory.
MNNs perform morphological operations in the lattice-algebraic orgeometrical/topological sense.
The aggregation functions of morphological neurons are given byoperations of MM.
Most MNN models have strong theoretical foundations in MM oncomplete lattices (or in MM on complete inf-semilattices).
MNNs have been used for a variety of applications such aspattern recognition, image and signal processing, computervision, approximate reasoning, and prediction.
Max Product, Min Product, and Conjugate
Let G be a complete l-group extension and F = G \ {∨G, ∧G}. Let A ∈ F^{m×n} and B ∈ G^{n×p}.

M = A ∨□ B, the max product of A and B: m_ij = ∨_{k=1}^{n} (a_ik + b_kj).

W = A ∧□ B, the min product of A and B: w_ij = ∧_{k=1}^{n} (a_ik + b_kj).

A*, the conjugate of A: A* = −A^T.

An (algebraic) erosion is given by ε_A : G^n → G^m, x ↦ A ∧□ x.

An (algebraic) dilation is given by δ_A : G^n → G^m, x ↦ A ∨□ x.
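The max and min products and the conjugate admit a direct implementation; a small sketch (ours) over integer matrices, i.e., G = Z±∞ restricted to finite entries:

```python
def max_product(A, B):
    """(A ∨□ B)_ij = max_k (a_ik + b_kj)  (max-plus product)."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def min_product(A, B):
    """(A ∧□ B)_ij = min_k (a_ik + b_kj)  (min-plus product)."""
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def conjugate(A):
    """A* = -A^T."""
    return [[-A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 0], [2, -1]]
x = [[3], [5]]                # column vector
print(max_product(A, x))      # [[5], [5]] -- dilation delta_A(x)
print(min_product(A, x))      # [[4], [4]] -- erosion eps_A(x)
```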
Max-C Product and Min-D Product
Let A ∈ L^{m×n} and B ∈ L^{n×p}.

Consider a conjunction C and a disjunction D on L. Recall that C, D are increasing operators L² → L such that:
C(0L, 0L) = C(0L, 1L) = C(1L, 0L) = D(0L, 0L) = 0L;
C(1L, 1L) = D(0L, 1L) = D(1L, 0L) = D(1L, 1L) = 1L.

M = A ◦ B, the max-C product of A and B: m_ij = ∨_{k=1}^{n} C(a_ik, b_kj).

W = A • B, the min-D product of A and B: w_ij = ∧_{k=1}^{n} D(a_ik, b_kj).

If C(a, ·) is a dilation for every a ∈ L, then δ^F_A : L^n → L^m, x ↦ A ◦ x, is an (algebraic) dilation.

If D(a, ·) is an erosion for every a ∈ L, then ε^F_A : L^n → L^m, x ↦ A • x, is an (algebraic) erosion.
Some Types of Morphological Neurons
Additive Max and Min Neurons:

Let G be a complete l-group extension and F = G \ {∨G, ∧G}. For an input vector x ∈ G^n and a vector of synaptic weights w ∈ F^n, the output y ∈ G is computed as follows:

Additive max neuron: y = ∨_{j=1}^{n} (w_j + x_j) = w^T ∨□ x;

Additive min neuron: y = ∧_{j=1}^{n} (w_j + x_j) = w^T ∧□ x.

Max-C and Min-D Neurons:

Let C and D be, respectively, a fuzzy conjunction and a fuzzy disjunction. For an input vector x ∈ [0, 1]^n and a vector of synaptic weights w ∈ [0, 1]^n, the output y ∈ [0, 1] is computed as follows:

Max-C neuron: y = ∨_{j=1}^{n} C(w_j, x_j) = w^T ◦ x;

Min-D neuron: y = ∧_{j=1}^{n} D(w_j, x_j) = w^T • x.
Morphological Neurons in Complete Lattices
Observations
Additive max and min neurons, as well as max-C and min-D neurons for continuous C and D, yield elementary morphological operators between complete lattices.
The aggregation functions of max-C and min-D neurons can alsobe viewed as L-fuzzy dilations and erosions of x by SEs(determined by w).
Examples of MNNs and Related LC Models
Morphological Associative Memories (MAMs)
Design Problem of an Associative Memory (AM):
Given a finite set of pairs or associations {(x^ξ, y^ξ) : ξ = 1, ..., k}, determine a mapping A such that, ideally, we have:

1 A(x^ξ) = y^ξ for all ξ = 1, ..., k;
2 A(x̃^ξ) = y^ξ for distorted versions x̃^ξ ≈ x^ξ.

Comments on Original MAM Models

1 The MAMs WXY and MXY are designed by using "∧□" and "∨□" to associate x^ξ with y^ξ.
2 WXY, MXY yield functions R^n_{±∞} → R^m_{±∞}. Replacing R±∞ by any complete l-group extension G, let F = G \ {∨G, ∧G}.
The MAMs WXY and MXY
Definitions of WXY and MXY :
For X = [x^1, ..., x^k] ∈ F^{n×k} and Y = [y^1, ..., y^k] ∈ F^{m×k}, let

WXY = Y ∧□ X*, MXY = Y ∨□ X* ∈ F^{m×n}. (1)

Given x ∈ G^n, the outputs of WXY and MXY are calculated, respectively, in terms of a dilation and an erosion G^n → G^m:

y = WXY ∨□ x, z = MXY ∧□ x.
If X = Y , we speak of an auto-associative MAM (AMM).
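A toy example (ours, with made-up integer patterns) of recording WXY = Y ∧□ X* and recalling with y = WXY ∨□ x; here both stored associations are recalled perfectly:

```python
def max_product(A, B):
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def min_product(A, B):
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def conjugate(A):
    return [[-A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

# columns of X and Y hold the associations x^xi -> y^xi
X = [[1, 4],
     [2, 6]]
Y = [[0, 3],
     [5, 9]]

W = min_product(Y, conjugate(X))   # W_XY = Y ∧□ X*
for xi in range(2):
    x = [[X[0][xi]], [X[1][xi]]]
    print(max_product(W, x))       # recalls the column y^xi of Y
```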
Properties of AMMs
1 Unlimited absolute storage capacity;
2 One-step convergence when employed with feedback;
3 The fixed points are known to be the lattice polynomials in the x^ξ;
4 The basins of attraction are known;
5 x is attracted to the supremum of x in the set of fixed points using WXX;
6 x is attracted to the infimum of x in the set of fixed points using MXX.

Alternatives
1 Use modified MAMs WXX + ν or MXX + µ;
2 Substitute the complete lattice (G^n, ≤) with a complete inf-semilattice (cisl) and define a new AM in this setting.
Some Applications of MAMs
The MAM models WXY and MXY have been applied in diverse areas such as:
hyperspectral image analysis;
color image segmentation;
image compression;
robot vision;
face localization;
a variety of other pattern recognition problems.
Other AMs were defined in complete lattices or quantales thatcorrespond to color spaces.
L-Fuzzy Morphological Associative Memories
Max-C L-FMAMs:

Let L be a complete lattice.

Given X = [x^1, ..., x^p] ∈ L^{n×p}, Y = [y^1, ..., y^p] ∈ L^{m×p}, and an L-fuzzy conjunction C, a max-C L-FMAM model 𝒲 is given by

y = 𝒲(x) = W ◦ x,

where the weight matrix W ∈ L^{m×n}, x ∈ L^n, and y ∈ L^m.
If L = [0,1], then we simply refer to an L-FMAM as an FMAM.
Examples of Max-C FMAMs:
Kosko’s max-min and max-product FAMs;
the generalized FAM of Chung & Lee;
implicative FAMs (IFAMs).
Adjunction-Based Learning for Max-C L-FMAMs
Synthesis of a Weight Matrix WC for a Max-C L-FMAM WC:
Let C(., x) be a dilation ∀x ∈ L.
Consider the dilation DX : Lm×n −→ Lm×p given by
DX (A) = A ◦ X ∀A ∈ Lm×n;
Let EDX : Lm×p −→ L
m×n be the unique erosion that forms anadjunction with DX ;
Define WC = EDX (Y );
The symbol WC denotes the resulting Max-C L-FMAM.
Peter Sussner (Unicamp) LC in MM and Comp. Intelligence FLINS 2014 44 / 68
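For the special case C = minimum on L = [0, 1] (the Goedel case), the adjoint erosion E_{D_X} is known to reduce to the implicative recording recipe W_ij = ∧_ξ I_M(x^ξ_j, y^ξ_i), where I_M is the Goedel implication; a sketch under that assumption, with made-up data:

```python
def goedel_imp(a, b):
    """Goedel implication: I_M(a, b) = 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def learn(X, Y):
    """Adjunction-based recording: W_ij = min over xi of I_M(X[j][xi], Y[i][xi]).
    X is n x p (columns x^xi), Y is m x p (columns y^xi)."""
    n, p, m = len(X), len(X[0]), len(Y)
    return [[min(goedel_imp(X[j][xi], Y[i][xi]) for xi in range(p))
             for j in range(n)] for i in range(m)]

def recall(W, x):
    """Max-min product: y_i = max_j min(w_ij, x_j)."""
    return [max(min(wij, xj) for wij, xj in zip(row, x)) for row in W]

X = [[0.3, 1.0],
     [0.8, 0.5]]
Y = [[0.5, 0.9],
     [0.2, 0.7]]
W = learn(X, Y)
print(recall(W, [0.3, 0.8]))  # [0.5, 0.2] -- the first stored output
```

By construction, W is the largest matrix with W ◦ X ≤ Y, so recall is exact whenever a matrix solving W ◦ X = Y exists; for the second pattern above, recall is only approximate.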
Examples of Max-C L-FMAMs
The Max-C_F FMAM 𝒲_F and the Max-C^r_F IV-FMAM 𝒲^r_F:

The cross-ratio uninorm C_F defined below represents a commutative conjunction, and C_F(·, x) is a dilation ∀ x ∈ [0, 1]:

C_F(x, y) = 1 if (x, y) = (0, 1) or (1, 0), and C_F(x, y) = xy / ((1 − x)(1 − y) + xy) otherwise.

The symbols W_F and 𝒲_F denote, respectively, W_C and 𝒲_C if C = C_F.

Given a fuzzy conjunction C, an IV-fuzzy conjunction C^r_C, called the C-representable conjunction, is defined as follows:

C^r_C(x, y) = [C(x1, y1), C(x2, y2)] ∀ x, y ∈ I.

C^r_F stands for C^r_C where C = C_F.

The symbols W^r_F and 𝒲^r_F denote, respectively, W_C and 𝒲_C if C = C^r_F.
An Application to Time-Series Prediction in Industry
Problem of forecasting the average monthly streamflow of Furnas, a large hydroelectric plant in southern Brazil;

Seasonality of the streamflow suggests the use of 12 different predictor models;

The objective is to predict the monthly streamflow s_γ from a subset of standardized past values;
Using (interval-valued) fuzzy c-means, the training data (p^γ, s_γ), where p^γ = [s_{γ−3}, s_{γ−2}, s_{γ−1}], yield the centers and standard deviations of Gaussian membership functions x^ξ and y^ξ.

{(x^ξ, y^ξ) : ξ = 1, ..., k} can be stored implicitly in W, where W ∈ [0, 1]^{n×m} or W ∈ I^{n×m};

The estimated value is obtained by (type-reducing and) defuzzifying y^γ = 𝒲(x^γ) (using the centroid method).
Some Details on the IV Fuzzy Clustering
IV fuzzy c-means clustering on the training data (p^γ, s_γ), where p^γ = [s_{γ−3}, s_{γ−2}, s_{γ−1}] ∈ [−5, 5]³ and s_γ ∈ [−5, 5], yields the centers c^ξ_x, c^ξ_y and standard deviations σ^ξ_x, σ^ξ_y of the IV membership functions x^ξ ∈ I^n and y^ξ ∈ I^m.

We used a constant number of clusters c = 5 and "fuzzifier" parameters m = [m1, m2] = [2 − α, 2 + α] for α = 0, 0.1, ..., 0.4.
Performance of IV-FMAMs on Validation Data
m1    m2    MRE    RMSE     MAE
2.0   2.0   22.95  181.01   223.96
1.9   2.1   22.80  181.17   223.35
1.8   2.2   22.65  183.05   223.46
1.7   2.3   22.31  186.23   221.97
1.6   2.4   22.52  187.47   223.28

Table: Prediction errors produced by W^r_F using the data 1931-1995 for different values of m.
Performance of Several Predictors on Test Data
Model            MRE (%)   RMSE (m³/s)   MAE (m³/s)
PAR              18.08     266.13        154.44
ANFIS            20.12     262.21        166.31
C-FSM            20.19     260.82        163.48
A-FSM            19.08     278.42        167.77
FMAM W_F         18.8      278.08        167.33
IV-FMAM W^r_F    18.58     276.96        165.78

Table: Comparison of the prediction errors produced by W_F, W^r_F, an online adaptive (first-order Takagi-Sugeno) fuzzy system model (A-FSM), an offline constructive (first-order Takagi-Sugeno) fuzzy system model (C-FSM), the adaptive network-based fuzzy inference system (ANFIS) of Jang, and a periodic autoregressive (PAR) model using the data from 1996-2005.
Predictions Produced by the Max-C^r_F FMAM W^r_F
Figure: Streamflow prediction for the Furnas reservoir from 1996 to 2005 (actual vs. predicted monthly streamflow in m³/s).
Another Type of a Morphological Neuron
Let S be a fuzzy subsethood measure. Given a weight vector w ∈ [0, 1]^n and input x ∈ [0, 1]^n, compute y ∈ [0, 1] as follows:

y = S(w, x).

We have a fuzzy erosion of x by the SE w, but not an algebraic erosion. Recall that S : F(X) × F(X) → [0, 1] satisfies:

1 If A ⊆ B, then S(A, B) = 1;
2 S(X, ∅) = 0;
3 If A ⊆ B ⊆ C, then S(C, A) ≤ S(B, A) and S(C, A) ≤ S(C, B).

Let T be a t-norm. A similarity measure SM is given by:

SM(A, B) = T(S(A, B), S(B, A)) ∀ A, B ∈ F(X).

F(X) can be replaced by any bounded lattice L in the definitions of S and SM.
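For finite X, Kosko's subsethood measure S_K(A, B) = Σ_i min(A_i, B_i) / Σ_i A_i and the derived similarity measure with T = min can be sketched as follows (the convention S_K(∅, B) = 1 is our choice):

```python
def s_kosko(A, B):
    """Kosko's subsethood: sigma-count of A ∩ B divided by that of A."""
    card = sum(A)
    return 1.0 if card == 0 else sum(min(a, b) for a, b in zip(A, B)) / card

def sm(A, B):
    """Similarity measure SM(A, B) = T(S(A, B), S(B, A)) with T = min."""
    return min(s_kosko(A, B), s_kosko(B, A))

A = [0.2, 0.5, 0.8]
B = [0.4, 0.6, 1.0]
print(s_kosko(A, B))  # 1.0, since A ⊆ B pointwise
print(sm(A, B))       # 0.75
```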
Construction of Subsethood Measures
Let I be a fuzzy implication that satisfies I(a, b) = 1 for all a ≤ b, and let υ : F(X) → [0, 1] be an increasing function such that υ(∅) = 0 and υ(X) = 1. The following operators S∩ and S∪ yield subsethood measures:

(a) S∩(A, B) = I(υ(A), υ(A ∩ B)), (2)

(b) S∪(A, B) = I(υ(A ∪ B), υ(B)). (3)

Let X = {x1, ..., xk} and let I = I_P (Goguen implication).

For υ_M given by υ_M(A) = (1/k) ∑_{i=1}^{k} µ_A(x_i), S∩ and S∪ correspond, respectively, to S_K (Kosko's subsethood) and S_W (Willmott's subsethood).

S∩_p and S∪_p arise by using υ_p : F(X) → [0, 1] for p ∈ (0, +∞):

υ_p(A) = ∑_{i=1}^{k} (1 − cos(π [µ_A(x_i)]^p)) / (2k), ∀ A ∈ F(X).
Θ-Fuzzy Associative Memories (Θ-FAMs)
Let Θ^ξ : F(X) → [0, 1] and w ∈ V ⊆ R^k, where V is closed and convex.

Figure: Topology of the Θ-FAM, based on Θ^ξ and w, that associates A^ξ ∈ F(X) with B^ξ ∈ F(Y) for finite universes X and Y.
Weighted S-, dual S-, and SM-FAMs
Consider the Θ-FAM based on Θ^ξ and w.

1 If Θ^ξ is given by Θ^ξ(·) = S(A^ξ, ·), where ξ = 1, ..., k, for some subsethood measure S, then the corresponding Θ-FAM is called a weighted S-FAM;

2 If Θ^ξ is given by Θ^ξ(·) = S(·, A^ξ), where ξ = 1, ..., k, for some subsethood measure S, then the corresponding Θ-FAM is called a weighted dual S-FAM;

3 If Θ^ξ is given by Θ^ξ(·) = SM(A^ξ, ·), where ξ = 1, ..., k, for some similarity measure SM, then the corresponding Θ-FAM is called a weighted SM-FAM.
Training of Θ-FAMs
Remarks on Θ-FAM Training Algorithm:
A supervised training algorithm was specifically designed for Θ-FAMs;

proven to converge in a finite number of steps (< f(w^0), where f is the proposed objective function and w^0 is the initial weight vector);

proven to reach a local minimum of f in a finite number of steps;
Tunable Equivalence Measure FAMs (TE-FAMs):
Θ-FAMs can be extended to deal with pattern cues in L, where L is an arbitrary bounded lattice.

A Θ-FAM is called a TE-FAM if Θ^ξ(·) = E^ξ(·, x^ξ), where the E^ξ represent parametrized equivalence measures; TE-FAMs can be trained by:
1 extracting a fundamental memory set from the training set,2 optimizing the weights and parameters.
Applications of Θ-FAMs
In our simulations, we considered weighted S-FAMs, dual S-FAMs, and SM-FAMs with S = S_K, S_W, S∩_p, and S∪_p for p ∈ P = {0.25, 0.5, 0.75, 1, 1.5, 2, 2.5, 3, 4}, as well as SM given by SM(A,B) = S(A,B) ∧ S(B,A);

We employed pre-processing techniques to convert vectors whose entries represent numerical or categorical attributes into fuzzy sets;

We chose initial weights w_j^(0) ∈ [0,1] as solutions to certain simple optimization problems.
Some Classification Problems (KEEL Repository)
Dataset         Instances  Categorical Features  Numerical Features  Classes
Appendicitis    106        0                     7                   2
Cleveland       297        0                     13                  5
Crx             653        9                     6                   2
Ecoli           336        0                     7                   8
Glass           214        0                     9                   7
Heart           270        0                     13                  2
Iris            150        0                     4                   3
Monks           432        0                     6                   2
Movementlibras  360        0                     90                  15
Pima            768        0                     8                   2
Sonar           208        0                     60                  2
Spectfheart     267        0                     44                  2
Vowel           990        0                     13                  11
Wdbc            569        0                     30                  2
Wine            178        0                     13                  3
Some Details on the Experiments
Same approach as in a recent article (IEEE TFS, Vol. 19, No. 5, Oct. 2011) on the fuzzy association rule-based classification method for high-dimensional problems (FARC-HD);
Partitioning of the data into 10 folds;
Extraction of the set of Θ-FAMs that produced the lowest mean classification error with the lowest variance during training.
Average Classification Rates in the Testing Phase
Dataset         2SLAVE  FH-GBML  SGERD  CBA    CBA2   CMAR   CPAR   C4.5   FARC-HD  Θ-FAM
Appendicitis    82.91   86       84.48  89.6   89.6   89.7   87.8   83.3   84.2     81.18
Cleveland       48.82   53.51    51.59  56.9   54.9   53.9   54.9   54.5   55.2     51.17
Crx             74.06   86.6     85.03  83.6   85     85     87.3   85.3   86       82.02
Ecoli           84.53   69.38    74.05  78     77.1   77.7   76.2   79.5   82.2     76.78
Glass           58.05   57.99    58.49  70.8   71.3   70.3   68.9   67.4   70.2     70.49
Heart           71.36   75.93    73.21  83     81.5   82.2   80.7   78.5   84.4     78.15
Iris            94.44   94       94.89  93.3   93.3   94     96     96     96       96
Monks           97.26   98.18    80.65  100    100    100    100    100    99.8     98.63
Movementlibras  67.04   68.89    68.09  36.1   7.2    39.2   63.6   69.4   76.7     84.72
Pima            73.71   75.26    73.37  72.7   72.5   75.1   74.5   74     75.7     67.44
Sonar           71.42   68.24    71.9   75.4   77.9   78.8   75     70.5   80.2     80.69
Spectfheart     79.17   72.36    78.16  79.8   79.8   79.4   78.3   76.5   79.8     81.3
Vowel           71.11   67.07    65.83  63.6   74.9   60.4   63.00  81.5   71.8     97.07
Wdbc            92.33   92.26    90.68  94.7   95.1   94.9   95.1   95.2   95.3     96.14
Wine            89.47   92.61    91.88  93.8   93.8   96.7   95.6   93.3   94.3     97.24
Mean            77.05   77.22    76.15  78.09  76.93  78.49  79.79  80.33  82.12    82.60
Comparison with structural learning algorithm on vague environment (2SLAVE), fuzzy hybrid genetic based machine learning algorithm (FH-GBML), steady-state genetic algorithm for extracting fuzzy classification rules from data (SGERD), classification based on associations (CBA), an improved version of the CBA method (CBA2), classification based on multiple association rules (CMAR), C4.5 decision tree, classification based on predictive association rules (CPAR), and fuzzy association rule-based classification method for high-dimensional problems (FARC-HD).
A Vision-Based Self-Localization Problem
Figure: Landmark images that were used for building the map.
Θ-FAMs for Vision-Based Self-Localization
Comparison of the results produced by the selected Θ-FAMs with the best results previously obtained for each walk using off-line mapping by LICA, MF-ICA, and MS-ICA, and endmember selection for SLAM. Walks 1 and 2 were used for training and validation, respectively.

                  Walk 3  Walk 4  Walk 5  Walk 6  Average
Dual S∩_1.5-FAM   0.78    0.72    0.78    0.77    0.76
Dual S∪_1.5-FAM   0.79    0.72    0.79    0.77    0.77
LICA              0.75    0.66    0.73    0.75    0.72
MF-ICA            0.62    0.54    0.65    0.53    0.58
MS-ICA            0.69    0.62    0.74    0.69    0.68
SLAM              0.76    0.6     0.69    0.64    0.67
Fuzzy Lattice Reasoning (FLR) Models
Fuzzy Lattice
A fuzzy lattice is a pair (L, µ) consisting of a lattice L and a function µ : L × L → [0,1] such that µ(x, y) = 1 if and only if x ≤ y.
Inclusion Measure
For a complete lattice L, an "inclusion measure" σ is a function σ : L × L → [0,1] such that ∀x, y, z ∈ L:

1 σ(x, x) = 1;
2 y ≤ z ⇒ σ(x, y) ≤ σ(x, z);
3 x ≰ y ⇒ σ(x, y) < 1.

In this case, (L, σ) is a fuzzy lattice.
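To make the three conditions concrete (a sketch under stated assumptions: the lattice is [0,1]^n with the componentwise order, and v(x) = ∑_i x_i is an assumed positive valuation; neither choice is prescribed by the slide), σ(x, y) = v(y)/v(x ∨ y) is one well-known way to obtain an inclusion measure:

```python
def valuation(x):
    """An assumed positive valuation on the lattice [0,1]^n: v(x) = sum_i x_i."""
    return sum(x)

def sigma(x, y):
    """Inclusion measure sigma(x, y) = v(y) / v(x v y), where the join
    x v y is the componentwise maximum."""
    join = [max(a, b) for a, b in zip(x, y)]
    return valuation(y) / valuation(join)

x, y = [0.2, 0.8], [0.5, 0.9]
print(sigma(x, x))  # condition 1: sigma(x, x) = 1
print(sigma(x, y))  # x <= y componentwise, so x v y = y and sigma(x, y) = 1
print(sigma(y, x))  # y is not <= x, so v(y v x) > v(x) and sigma(y, x) < 1
```

For membership vectors with zero total valuation the quotient is undefined, so this sketch assumes v(x ∨ y) > 0.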
Some Details on FLR Classifiers
Training Phase
Given a set of training data {(E_1, c_1), . . . , (E_L, c_L)}, where the E_i are information granules such as fuzzy interval numbers or hyperboxes in R^n and the c_i are class labels, a clustering algorithm is performed, resulting in a set {(E_1, c_1), . . . , (E_M, c_M)} with elements of the same type.
Recall Phase
Upon presentation of a granule F, compute σ(F, E_j) ∀ j = 1, . . . , M. Assign F to the class label c_J such that J = arg max_j σ(F, E_j).
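The recall phase is thus a winner-take-all rule over σ; a self-contained sketch (the valuation-based σ on [0,1]^n and the stored granules are illustrative assumptions, not the FLR literature's only choices):

```python
def sigma(x, y):
    """An assumed inclusion measure on [0,1]^n: sigma(x, y) = v(y) / v(x v y)
    with the positive valuation v(x) = sum_i x_i."""
    return sum(y) / sum(max(a, b) for a, b in zip(x, y))

def flr_classify(f, granules):
    """FLR recall: assign the input granule f to the class label c_J of the
    stored granule E_J maximizing sigma(f, E_j)."""
    return max(granules, key=lambda ec: sigma(f, ec[0]))[1]

# Hypothetical stored granules (E_j, c_j) produced by the training phase:
granules = [([0.9, 0.1, 0.1], "class 1"),
            ([0.1, 0.8, 0.9], "class 2")]
print(flr_classify([0.8, 0.2, 0.1], granules))  # -> class 1
```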
Remarks
σ(F, E) is a component of an erosion of E by the SE F;

Unlike MP/CL, FLR training depends on the order of the training data;

FLR models have been proposed for some general types of data.
Fuzzy Inference Systems Based on FLR
Consider the antecedent part of a fuzzy rule:
Sparse Rule Activation Using FLR
By choosing appropriate "inclusion" or subsethood measures,inputs outside the support of the antecedents can activate rules.
Parametrized versions of σ or S can be tuned.
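To illustrate the sparse-activation effect (with made-up membership values; Kosko's subsethood and an assumed valuation-based σ on [0,1]^n serve as the two contrasting measures), an input whose support is disjoint from a rule antecedent yields zero subsethood, while the inclusion measure still produces a positive activation degree:

```python
def kosko_subsethood(a, b):
    """Kosko's S(A, B) = |A /\\ B| / |A|; zero when supp(A) and supp(B) are disjoint."""
    return sum(min(x, y) for x, y in zip(a, b)) / sum(a)

def sigma(a, b):
    """An assumed valuation-based inclusion measure sigma(A, B) = v(B) / v(A v B)."""
    return sum(b) / sum(max(x, y) for x, y in zip(a, b))

antecedent = [0.0, 0.0, 1.0, 0.5]   # antecedent fuzzy set of a rule
inp        = [0.7, 0.3, 0.0, 0.0]   # input lying outside supp(antecedent)
print(kosko_subsethood(inp, antecedent))  # 0.0 -> the rule never fires
print(sigma(inp, antecedent))             # positive -> the rule still activates
```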
Conclusions and Perspectives for the Future
Concluding Remarks
Lattice Computing, or LC for short, has been proposed forprocessing diverse types of data using lattice theory.
We reviewed some basic concepts of lattice theory and some of the following approaches towards LC:
Mathematical morphology (MM) on complete lattices;
L-fuzzy MM;
Lattice fuzzy transforms;
Morphological neural networks (MNNs) and related LC models.
We presented some applications of LC in:
Image segmentation (based on image reconstruction methods);
Some benchmark classification problems;
A vision-based self-localization problem in robotics;
Time-series prediction in industry;
Fuzzy inference systems in general.
Perspectives for the Future of Lattice Computing
LT is concerned with general lattice structures. Thus, LCapproaches can be applied to disparate types of data.
LT and LC allow for a top-down view on various methodologies that already exist or are under development.
Since many classes of information granules are lattice ordered,LC approaches can be applied to granular computing.
In particular, the advent of L-fuzzy MM provides access to MNNs for extended fuzzy sets.
Extensions of fuzzy sets have proven to be very useful inrule-based systems for applications in engineering and computingwith words as well as in approximate reasoning.
Thanks for your interest!