



Variable Structure Neural Networks for Real Time Approximation of Continuous Time Dynamical Systems Using Evolutionary Artificial

Potential Fields

HASSEN MEKKI1,2, MOHAMED CHTOUROU1

1Intelligent Control, Design and Optimization of Complex Systems (ICOS), University of Sfax, Tunisia

2National School of Engineering of Sousse, University of Sousse, Tunisia

[email protected]; [email protected]

Abstract - A novel neural network architecture is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems. In the variable structure neural network, the number of basis functions can be either increased or decreased according to specified design strategies, so that the network will not overfit or underfit the data set. Based on the Gaussian radial basis function (GRBF) variable neural network, an online identification of continuous-time dynamical systems is presented. The locations of the centers of the GRBFs are determined using a new method inspired by the evolutionary artificial potential fields method combined with a pruning algorithm. A minimal number of neurons is guaranteed by using this method. It is noted that both the recruitment and the pruning are made one neuron at a time. Consequently, the recruitment phase does not perturb the network and the pruning does not provoke an oscillation of the output response. The weights of the neural network are adapted so that the system dynamics satisfy the imposed performance requirements, in particular the stability of the system.

Key-words - Variable structure neural network; Radial basis functions; Evolutionary artificial potential fields; Identification of dynamical systems.

1 Introduction

Recently, neural network research has gained increasing attention. In fact, artificial neural networks are capable of learning and reconstructing complex nonlinear mappings, and they have been widely studied by control researchers for identification analysis and the design of control systems [6], [10], [14], [15], [16], [18], [19]. The network size, often measured by the number of hidden units in a single-hidden-layer network, reflects the capacity of the neural network to approximate an arbitrary function. The size required to solve a specific problem is a fundamental question in the design of neural networks. If the training starts with a small network, it is possible that the learning process cannot be achieved. On the other hand, if a large network is used, the learning process can be very slow and over-fitting may occur.

The approaches which assume the number of RBFs a priori usually lead to the problem of poor generalization. In addition, these approaches usually work offline, so they are not suitable for practical real-time applications where online learning is required for the neural-network-based controller design. To remedy the aforementioned shortcomings, several growing RBF networks have been proposed in [9], [11], [12], [20], [24], [27]. When using RBF networks, the basis functions are placed on regular points of a square mesh, for example, covering a relevant region of space where the state is known to be contained [2], [3], [13]. This region is therefore the network approximation region, which is in general known for a given system. The distance between the points affects the number of basis functions required to cover the region and hence determines the size of the neural network.
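The relation between mesh spacing and network size can be sketched as follows (a minimal illustration, assuming a rectangular region and uniform spacing; the bounds and spacing values are not from the paper):

```python
import math

# Hypothetical sketch: counting the RBFs needed when centers are placed on
# a regular square mesh covering a known approximation region.

def grid_rbf_count(bounds, spacing):
    """bounds: list of (low, high) per input dimension; spacing: mesh step."""
    count = 1
    for low, high in bounds:
        # number of mesh points along this coordinate
        count *= int(math.floor((high - low) / spacing)) + 1
    return count

# A 2-D region [-2, 2] x [-2, 2] with spacing 0.5 needs 9 x 9 = 81 RBFs;
# halving the spacing to 0.25 raises it to 17 x 17 = 289.
print(grid_rbf_count([(-2, 2), (-2, 2)], 0.5))   # 81
print(grid_rbf_count([(-2, 2), (-2, 2)], 0.25))  # 289
```

The count grows exponentially with the input dimension, which is why a fixed grid becomes impractical as the input vector grows.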

WSEAS TRANSACTIONS on SYSTEMS Hassen Mekki, Mohamed Chtourou

E-ISSN: 2224-2678 75 Issue 2, Volume 11, February 2012


Clearly, if the size of the neural network input vector increases, the size of the neural network increases excessively, which provokes oscillations in the output responses. To remedy these problems, a novel neural network architecture is proposed, where the locations of the centers of the GRBFs are determined using a new method inspired by the evolutionary artificial potential fields method. The weights of the neural network are adapted so that the system dynamics satisfy the imposed performance requirements, in particular the stability of the system. This paper is organized as follows. Section 2 provides the design of the continuous-time dynamical system approximator. This is done by using multilayered neural networks for the identification of uncertain nonlinear functions. In Section 3, we describe a novel self-organizing RBF network that can dynamically vary its structure in real time. The proposed self-organizing RBF network is capable of adding or removing RBFs to ensure the desired approximation accuracy and at the same time to keep the network complexity appropriate. The locations of the centers of the GRBFs are determined using a new method inspired by the evolutionary artificial potential fields method. To show the performance of the proposed algorithm, Section 4 presents two simulation examples. The first one deals with the variable-structure-network-based online identification of a nonlinear function. The second example treats the online identification of a nonlinear dynamical system.

2 Approximation network synthesis and analysis

The nonlinear continuous-time dynamical systems that we consider in this paper are modeled by the following vector differential equation:

$$\dot{x} = f(x, u), \qquad x(t_0) = x_0 \qquad (1)$$

where $x \in \mathbb{R}^{n}$ is the state, $u \in \mathbb{R}^{m}$ is the input, and $f(\cdot,\cdot): \Omega \to \mathbb{R}^{n}$ is a continuous vector-valued function of many variables defined on a compact set $\Omega \subset \mathbb{R}^{n+m}$. We assume that the function $f$ is unknown to us. Let $A \in \mathbb{R}^{n \times n}$ be a Hurwitz matrix of the form:

$$A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \\ -a_1 & -a_2 & \cdots & \cdots & -a_n \end{pmatrix}$$

The real numbers $a_1, a_2, \ldots, a_n$ are chosen using the pole assignment technique.
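As an illustration of the pole assignment step (a sketch; the pole values are assumed for the example), the coefficients and the companion matrix $A$ can be computed from a set of desired stable poles:

```python
import numpy as np

# Sketch (not from the paper): building the Hurwitz companion matrix A
# from desired closed-loop poles. np.poly returns the coefficients of the
# monic polynomial with the given roots, highest degree first.

def companion_from_poles(poles):
    coeffs = np.poly(poles)          # [1, a_n, ..., a_1]
    n = len(poles)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)       # super-diagonal of ones
    A[-1, :] = -coeffs[:0:-1]        # last row: -a_1, -a_2, ..., -a_n
    return A

# Poles at -1 and -2 give s^2 + 3s + 2, i.e. a_1 = 2, a_2 = 3:
A = companion_from_poles([-1.0, -2.0])
print(A)                      # [[ 0.  1.] [-2. -3.]]
print(np.linalg.eigvals(A))   # approximately -1 and -2
```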

We express (1), similarly as in [8], in the following form:

$$\dot{x} = Ax + g(x, u), \qquad x(t_0) = x_0 \qquad (2)$$

where

$$g(x, u) = f(x, u) - Ax \qquad (3)$$

We then analyze an RBF network with $n + m$ inputs and $n$ outputs that we will use to approximate the unknown function $g(x, u)$ in real time, or equivalently, the unknown dynamical system (1), as shown in fig. 1.

Fig. 1: RBF neural network architecture

Let:

$$\hat{g}(x, u) = W \xi(x, u) \qquad (4)$$

where $W \in \mathbb{R}^{n \times N}$ is a weight matrix and $\xi(x, u) \in \mathbb{R}^{N}$ is the vector of GRBF outputs. The RBF network approximator has the following form:

$$\dot{\hat{x}} = A\hat{x} + \hat{g}(x, u), \qquad \hat{x}(t_0) = x(t_0) \qquad (5)$$
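A minimal sketch of the approximator output (4), assuming Gaussian basis functions of common width $\sigma$ and illustrative centers and weights:

```python
import numpy as np

# Sketch of ghat(x, u) = W xi(x, u) with Gaussian radial basis functions.
# The centers, width and weights below are illustrative assumptions.

def gaussian_rbf_vector(z, centers, sigma):
    """z: stacked state and input; centers: (N, dim) array of GRBF centers."""
    d2 = np.sum((centers - z) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))       # xi(z) in R^N

def ghat(z, W, centers, sigma):
    return W @ gaussian_rbf_vector(z, centers, sigma)  # in R^n

# Example: n = 2 outputs, N = 3 basis functions, 3-dimensional input (x, u).
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
W = 0.1 * np.ones((2, 3))
print(ghat(np.array([0.5, 0.2, 0.0]), W, centers, sigma=0.5))
```

Each GRBF responds strongly only near its own center, which is what makes the center placement of Section 3 matter.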



Note that the previous approximator has access to the true state of the system. We define the approximation error as:

$$e = x - \hat{x} \qquad (6)$$

Combining (5) and (6), we obtain the approximation error dynamics:

$$\dot{e} = Ae + g(x, u) - \hat{g}(x, u), \qquad e(t_0) = 0 \qquad (7)$$

We assume that:

$$\left\| g(x, u) - \hat{g}(x, u) \right\| \le \varepsilon \qquad (8)$$

where $\varepsilon > 0$ and $\|\cdot\|$ is the Euclidean norm.

2.1 Approximation method

The online identification method is illustrated in fig. 2:

Fig. 2. Diagram of the real-time RBF network approximator.

The weights of the neural network are adapted so that the system dynamics satisfy the imposed performance requirements, in particular the stability of the system. In fact, by construction, the matrix $A$ is Hurwitz. Then, for any given real symmetric positive-definite matrix $Q$, the solution $P$ of the continuous Lyapunov matrix equation:

$$A^{T}P + PA = -2Q \qquad (9)$$

is symmetric positive definite. Consider then the Lyapunov function candidate $V(e) = \frac{1}{2} e^{T} P e$.

It is easy to show that the time derivative of $V$ is negative definite [8].
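As a numerical sketch of (9) (the paper does not prescribe a solver; the matrices used here are illustrative), $P$ can be computed by vectorizing the Lyapunov equation:

```python
import numpy as np

# Sketch: solving A^T P + P A = -2Q by vectorization. With column-major
# vec, vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P).

def lyapunov_solve(A, Q):
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    p = np.linalg.solve(M, (-2.0 * Q).ravel(order="F"))
    return p.reshape((n, n), order="F")

A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # Hurwitz companion matrix
Q = np.eye(2)                              # symmetric positive definite
P = lyapunov_solve(A, Q)
print(P)                                   # [[3. 1.] [1. 1.]]
print(np.linalg.eigvals(P) > 0)            # P is positive definite
```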

As shown in [8], the adaptation law is given by:

$$\dot{W} = \mathrm{Proj}\!\left( \gamma P e \xi^{T}(x, u) \right) \qquad (10)$$

It can be written componentwise in the following form:

$$\dot{w}_{ij} = \begin{cases} 0 & \text{if } w_{ij} = -\bar{w} \text{ and } \left(Pe\xi^{T}(x, u)\right)_{ij} < 0 \\ 0 & \text{if } w_{ij} = \bar{w} \text{ and } \left(Pe\xi^{T}(x, u)\right)_{ij} > 0 \\ \gamma \left(Pe\xi^{T}(x, u)\right)_{ij} & \text{otherwise} \end{cases} \qquad (11)$$

where $i = 1, \ldots, n$ and $j = 1, \ldots, N$; $w_{ij}$ is the weight from the $j$-th RBF to the $i$-th output neuron; $\left(Pe\xi^{T}(x, u)\right)_{ij}$ is the element in the $i$-th row and the $j$-th column of the matrix $Pe\xi^{T}(x, u)$; and $\bar{w}$ and $\gamma$ are design parameters.
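One Euler step of the projection-type update (10)-(11) can be sketched as follows (all numerical values are illustrative assumptions):

```python
import numpy as np

# Sketch of one Euler step of the componentwise projection update: each
# w_ij follows gamma * (P e xi^T)_ij, but is frozen when it sits on the
# bound +/- w_bar and the update would push it further out.

def weight_update_step(W, P, e, xi, gamma, w_bar, dt):
    G = gamma * np.outer(P @ e, xi)                    # gamma * P e xi^T
    W_dot = np.where((W >= w_bar) & (G > 0), 0.0, G)   # freeze at +w_bar
    W_dot = np.where((W <= -w_bar) & (G < 0), 0.0, W_dot)  # and at -w_bar
    return np.clip(W + dt * W_dot, -w_bar, w_bar)

P = np.array([[3.0, 1.0], [1.0, 1.0]])
e = np.array([0.2, -0.1])
xi = np.array([0.9, 0.4, 0.1])
W = np.zeros((2, 3))
W = weight_update_step(W, P, e, xi, gamma=2.0, w_bar=0.1, dt=0.01)
print(W)
```

The projection keeps every weight inside $[-\bar{w}, \bar{w}]$, which is what bounds the approximator and supports the Lyapunov argument above.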

3 Variable structure neural network

There are five parameters characterizing the RBF network approximator to be determined:

- the number of RBFs $N$;
- the type of the RBFs $\xi_j(x, u)$;
- the location of the centers $c_j$;
- the radius in each coordinate $\sigma_j$;
- the weight vector for each output neuron $w_i$.

In this section, we present a novel RBF network structure that is capable of determining the number of RBFs $N$, the locations of the centers $c_j$ and the weight vector of each output neuron $w_i$ by itself. The determination of $w_i$ was already addressed in Section 2. We first show how to determine the locations of the centers $c_j$. The strategy for determining the number of RBFs needed in the proposed online identification problem is discussed in Section 3.3. The proposed self-organizing RBF network is capable of adding or removing RBFs to ensure the desired approximation accuracy and at the same time to keep the network complexity appropriate.

3.1 Determination of the RBF center locations


When using RBF networks, the basis functions are placed on regular points of a square mesh, for example, covering a relevant region of the space where the state is known to be contained [2], [3]. This region is therefore the network approximation region, which is in general known for a given system. The distance between the points affects the number of basis functions required to cover the region and hence determines the size of the neural network. Clearly, if the size of the neural network input vector increases, the size of the neural network increases excessively, which provokes oscillations in the output responses.

To remedy these problems, a novel neural network architecture is proposed, where the locations of the centers of the GRBFs are determined using a new method inspired by the evolutionary artificial potential fields method.

3.2 The Artificial Potential Field

This method is especially used in real-time robot path planning. In the artificial potential field methods, a robot is considered as a particle under the influence of an artificial potential field $U$ whose local variations reflect, for instance, the positions of obstacles and of the goal that the robot is supposed to reach [17], [21], [23], [26]. The potential field function is defined as the sum of an attractive field that pulls the robot towards the goal and a repulsive field that repels it from the obstacles. The movement is executed in an iterative way, in which an artificial force is induced by:

$$F(q) = -\nabla_{q} U(q) \qquad (12)$$

which forces the robot to move in the direction in which the potential field decreases, where $\nabla_{q}$ is the gradient with respect to $q$, and $q = (x, y)$ represents the coordinates of the robot position. The complete potential field is a superposition of contributions from obstacles, waypoints (if applicable), and the goal:

$$U(q) = \sum_{i=1}^{n_o} U_{i}^{o}(q) + \sum_{i=1}^{n_w} U_{i}^{w}(q) + U_{g}(q) \qquad (13)$$

where $n_o$ and $n_w$ denote the number of obstacles and waypoints, respectively, and $U_{i}^{o}$ and $U_{i}^{w}$ are their potentials. $U_{g}$ is the potential generated by the goal (navigation target).

In our approach, a neuron plays the role of the robot, the other neurons play the roles of the obstacles, and the current true state of the system is the goal point.

Different potential functions have been proposed in the literature. The most commonly used attractive potential takes the form [22], [25]:

$$U_{att}(q) = \frac{1}{2} k \, d^{m}(q, q_{goal}) \qquad (14)$$

where $k$ is a positive scaling factor, $d(q, q_{goal}) = \| q_{goal} - q \|$ is the distance between the neuron $q$ and the goal $q_{goal}$, and $m = 1$ or $2$. For $m = 1$, the attractive potential is conic in shape and the resulting attractive force has constant amplitude except at the goal, where the gradient of $U_{att}$ is undefined. For $m = 2$, the attractive potential is parabolic in shape. For the conic case, the attractive force, given by the negative gradient of the attractive potential, is:

$$F_{att}(q) = -\nabla U_{att}(q) = k \, \frac{q_{goal} - q}{\| q_{goal} - q \|} \qquad (15)$$

which is a force of constant magnitude over the space: it does not tend to infinity with increasing distance from $q_{goal}$. However, it is not zero at $q_{goal}$. One commonly used repulsive potential function takes the following form [1]:

$$U_{rep}(q) = \begin{cases} \dfrac{1}{2} \eta \left( \dfrac{1}{d(q, q_{ob})} - \dfrac{1}{d_0} \right)^{2} & \text{if } d(q, q_{ob}) \le d_0 \\ 0 & \text{if } d(q, q_{ob}) > d_0 \end{cases} \qquad (16)$$

where $\eta$ is a positive scaling factor, $d(q, q_{ob})$ denotes the minimal distance from the center of neuron $q$ to the centers of the other neurons, $q_{ob}$ denotes the center of the nearest neuron, and $d_0$ is a positive constant denoting the distance of influence of a neuron. The corresponding repulsive force is given by:


$$F_{rep}(q) = -\nabla U_{rep}(q) = \begin{cases} \eta \left( \dfrac{1}{d(q, q_{ob})} - \dfrac{1}{d_0} \right) \dfrac{1}{d^{2}(q, q_{ob})} \, \nabla d(q, q_{ob}) & \text{if } d(q, q_{ob}) \le d_0 \\ 0 & \text{if } d(q, q_{ob}) > d_0 \end{cases} \qquad (17)$$

The total force applied to the neuron is the sum of the attractive force and of the repulsive forces:

$$F_{total} = F_{att} + \sum F_{rep} \qquad (18)$$

This determines the motion of the neuron.
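The force computation (14)-(18) for one neuron center can be sketched as follows (a minimal illustration; the constant-magnitude attractive force of (15) is used, and all parameter values are assumptions):

```python
import numpy as np

# Sketch of the total force (18) on one neuron center: attraction toward
# the current state (the "goal") plus repulsion (17) from every other
# neuron center within the influence distance d0.

def total_force(q, q_goal, others, k=1.0, eta=1.5, d0=1.0):
    d_goal = np.linalg.norm(q_goal - q)
    f = k * (q_goal - q) / d_goal if d_goal > 0 else np.zeros_like(q)
    for q_ob in others:
        d = np.linalg.norm(q - q_ob)
        if 0 < d <= d0:
            # -grad of (16): pushes q away from q_ob, growing as d -> 0
            f += eta * (1.0/d - 1.0/d0) * (1.0/d**2) * (q - q_ob) / d
    return f

q = np.array([0.0, 0.0])
f = total_force(q, q_goal=np.array([2.0, 0.0]),
                others=[np.array([0.5, 0.0])])
print(f)   # [-5.  0.] : at this spacing, repulsion dominates attraction
```

Iterating `q += dt * total_force(...)` moves the centers toward the state while keeping them spread apart, which is what produces the layered orbits described next.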

The attractive and the repulsive phenomena are given in fig. 3.

Fig. 3: The artificial potential field (G: goal, N: neuron)

The parameters $k$, $\eta$ and $d_0$ are chosen so that we obtain a scenario similar to fig. 4: concentrations of neurons in a ball of dimension $(n + m)$ centered at the desired point. By construction we obtain several layers (or orbits) of radius $r_K \approx K \sigma$, where $K$ is the rank of the orbit and $\sigma$ is the width of the GRBF.

Fig. 4: Location of the GRBFs in the space ($n + m = 2$)

3.3 Determination of the number N

The proposed self-organizing RBF network is capable of adding or removing RBFs to ensure the desired approximation accuracy and at the same time to keep the network complexity appropriate.

3.3.1 Adding RBFs

As the system trajectory evolves in time, the approximation error $e$ is measured. We first check whether the Euclidean norm of the approximation error $\|e\|$ exceeds a predetermined threshold $\bar{e}$ and whether the period elapsed since the last adding operation is greater than the minimum response time $T_r$. Here $\bar{e}$ and $T_r$ are design parameters. If these two conditions are satisfied, we recruit a new neuron, which is placed in a neighborhood of the last orbit. It is noted that the recruitment is made one neuron at a time. Consequently, the recruitment phase does not perturb the network.

3.3.2 Removing RBFs

The RBF removing operation is also implemented sequentially. We first measure the approximation error $e$. If the Euclidean norm of the approximation error $\|e\|$ is smaller than $\delta \bar{e}$, where $\delta \in (0, 1)$ is a design parameter, we remove a neuron from the last orbit. It is noted that the pruning is also made one neuron at a time, which avoids provoking an oscillation of the output response.

3.4 Self-Organizing RBF Network Algorithm

Choose the design parameters $A$, $Q$,



$\bar{e}$, $\delta$, $T_r$, $\sigma$, $\gamma$, $\bar{w}$. Initialize some GRBFs in a neighborhood of the initial condition and the weight matrix $W$ of the initial RBF network. In each sampling period, repeat the following steps.

1) Compare the current true state $x$ of (2) and the current approximated state $\hat{x}$ of (5) to obtain the approximation error $e = x - \hat{x}$.

2) If $\|e\| > \bar{e}$ and the period elapsed since the last adding operation is greater than $T_r$, go to 3); otherwise, go to 4).

3) Add a new neuron, placed in a neighborhood of the last orbit. The radius of this orbit is given by $r = (K_{last} + 1)\,\sigma$, where $K_{last}$ is the rank of the last layer.

4) If $\|e\| \le \delta \bar{e}$, go to 5); otherwise, go to 6).

5) Remove a neuron from the last orbit.

6) Update the weight matrix $W$ using (11).

7) Compute $\hat{g}$ using (4), and then evaluate the approximated state $\hat{x}$ using (5).

8) Determine the motion of the GRBF centers in the space using (18).
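The add/remove decision of steps 2)-5) can be sketched as follows (a skeleton only; the error sequence and parameter values are illustrative, and `rbf_count` stands in for the actual network):

```python
# Sketch of the structure-update decision made once per sampling period.
# e_bar, delta and t_r play the roles of the design parameters; steps are
# counted instead of continuous time for simplicity.

def structure_update(err_norm, rbf_count, last_add_step, step,
                     e_bar=0.1, delta=0.5, t_r=10):
    """Return (rbf_count, last_add_step) after the add/remove decision."""
    if err_norm > e_bar and (step - last_add_step) > t_r:
        rbf_count += 1                          # step 3: recruit ONE neuron
        last_add_step = step
    elif err_norm <= delta * e_bar:
        rbf_count = max(rbf_count - 1, 1)       # step 5: prune ONE neuron
    return rbf_count, last_add_step

# Large errors early on trigger one recruitment; once the error falls
# below delta * e_bar, neurons are pruned one at a time.
n, last = 4, -100
for step, err in enumerate([0.3, 0.25, 0.2, 0.04, 0.04]):
    n, last = structure_update(err, n, last, step)
print(n)   # 3
```

Because at most one neuron changes per period, the network structure evolves smoothly, matching the claim that recruitment and pruning do not perturb the output.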

4 Simulation results

In this section, we test our proposed real-time self-organizing RBF network approximator on two examples: the first one deals with the online identification of a nonlinear function using the gradient method. The second example treats the real-time approximation of a nonlinear dynamical system using the adaptation law given in equation (11).

4.1 On-line identification of a nonlinear function

For this problem, the considered nonlinear function is described by the following equation:

$$y(x_1, x_2) = x_2 \cos(x_1) + 2 \sin(x_1) \qquad (19)$$

The aim is to seek an optimal number of RBFs using the proposed algorithm, so as to approximate the function adaptively with a small error. Based on simulation studies, the parameters of the proposed algorithm are chosen as follows:

$$\bar{e} = 0.1, \quad \delta = 0.5, \quad \sigma = 0.5, \quad d_0 = 2\sigma, \quad \eta = 1.5$$

The initial number of hidden units is chosen as $N = 4$.

Fig. 5 shows the simulation results of the process using the proposed algorithm. Fig. 5(a) and fig. 5(b) show, respectively, the evolution of $x_1$ and $x_2$. The evolutions of the desired function and its estimate are given in fig. 5(c) and fig. 5(d), respectively. It is clear that the approximation error is acceptable. Fig. 5(e) represents the evolution of the number of hidden units. Note that a minimal number of neurons is used in this simulation. Fig. 5(f) represents the locations of the centers of the used RBFs. The centers are clearly gathered around the state $(x_1, x_2)$ to be estimated.



4.2 Online identification of a nonlinear dynamical system

This example illustrates a one-link rigid robotic manipulator. The dynamic equation of the one-link rigid robotic manipulator is given by [5]:

$$m l^{2} \ddot{q} + B \dot{q} + m g l \cos(q) = u \qquad (20)$$

where the link is of length $l$ and mass $m$, and $q$ is the angular position, with initial values $q(0) = 0.1$ and $\dot{q}(0) = 0$. The parameters are $m = 1$, $l = 1$, $B = 1$, and $g = 1$. The above dynamical equation can be written as the following state equation:

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_2 - \cos(x_1) + u \qquad (21)$$

Based on simulation studies, the parameters of the proposed algorithm are chosen as follows:

$$A = \begin{pmatrix} 0 & 1 \\ -1 & -2 \end{pmatrix}, \quad Q = \begin{pmatrix} 1.9 & 0 \\ 0 & 1.9 \end{pmatrix}, \quad \gamma = 2, \quad \bar{w} = 0.1, \quad \bar{e} = 0.1, \quad \delta = 0.5, \quad \sigma = 0.5, \quad \eta = 1.5$$

The initial number of hidden units is chosen as $N = 9$.

Fig. 6 shows the simulation results of the process using the proposed algorithm.

Fig. 5. (a) and (b) Evolution of $x_1$ and $x_2$, respectively. (c) and (d) Evolution of the desired function and its approximation, respectively. (e) Evolution of the number of RBFs. (f) Locations of the RBF centers (o) and the state (-).

Fig. 6(a) and fig. 6(b) show the evolution of the state $x_1$ and its approximation, respectively. The evolutions of the real state $x_2$ and its estimate are given in fig. 6(c) and fig. 6(d), respectively. It is clear that the approximation error is acceptable. The chattering present in the curves of the evolution of $x_1$ and $x_2$ is due to the identification algorithm used. Fig. 6(e) represents the evolution of the number of hidden units. It is clear that a minimal number of neurons is used in this simulation.

Compared with other works [3], [5], [7], [8], it is clear that a minimal number of neurons is used while respecting the imposed performance requirements. We also notice that the recruitment is made one neuron at a time. As a consequence, the recruitment phase does not perturb the network.

5 Conclusion

An online identification of continuous-time dynamical systems has been presented. A novel neural network architecture is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems. In the variable structure neural network, the number of basis functions can be either increased or decreased over time according to specified design strategies, so that the network will not overfit or underfit the data set. Based on the Gaussian radial basis function (GRBF) variable neural network, the locations of the centers of the GRBFs are determined using a new method inspired by the evolutionary artificial potential fields method combined with a pruning algorithm.



References:

[1] C. Qixin, H. Yanwen, Z. Jingliang, "An evolutionary artificial potential field algorithm for dynamic path planning of mobile robot", IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 3331-3336, Beijing, Oct. 2006.
[2] D. Shi, D. S. Yeung, J. Gao, "Sensitivity analysis applied to the construction of radial basis function networks", Neural Networks, Vol. 18, pp. 951-957, 2005.
[3] G. P. Liu, V. Kadirkamanathan, S. A. Billings, "Variable neural networks for adaptive control of nonlinear systems", IEEE Trans. Syst. Man Cybern. B Cybern., Vol. 29, No. 1, pp. 34-43, Feb. 1999.
[4] H. Al-Duwaish, S. Z. Rizvi, "Design of neuro-controller for multivariable nonlinear time-varying systems", WSEAS Transactions on Systems and Control, Vol. 5, pp. 711-720, Sep. 2010.
[5] H. Mekki, M. Chtourou, N. Derbel, "Variable structure neural networks for adaptive control of nonlinear systems using the stochastic approximation", Simulation Modelling Practice and Theory, Vol. 14, pp. 1000-1009, 2006.
[6] J. Balicki, "An adaptive quantum-based evolutionary algorithm for multiobjective optimization", WSEAS Transactions on Systems and Control, Vol. 4, pp. 603-612, Dec. 2009.
[7] J. Lian, Y. Lee, S. D. Sudhoff, S. H. Zak, "Variable structure neural network based direct adaptive robust control of uncertain systems", American Control Conference, Seattle, Washington, USA, June 2008.
[8] J. Lian, Y. Lee, S. D. Sudhoff, S. H. Zak, "Self-organizing radial basis function network for real-time approximation of continuous-time dynamical systems", IEEE Transactions on Neural Networks, Vol. 19, No. 3, March 2008.
[9] J. R. Bertini Jr. and M. do C. Nicoletti, "A multiclass version of a constructive neural network algorithm based on linear separability and convex hull", ICANN, Part II, LNCS, pp. 723-733, 2008.
[10] K. Salahshoor, M. R. Jafari, "On-line identification of non-linear systems using adaptive RBF-based neural networks", International Journal of Information Science and Technology, Vol. 5, No. 2, pp. 99-121, July/December 2007.
[11] L. A. D. C. Meleiro, F. J. V. Zuben, R. M. Filho, "Constructive learning neural network applied to identification and control of fuel-ethanol fermentation process", Engineering Applications of Artificial Intelligence, Vol. 22, pp. 201-215, 2009.
[12] L. Ma, K. Khorasani, "New training strategies for constructive neural networks with application to regression problems", Neural Networks, Vol. 17, pp. 589-609, 2004.
[13] M. Belkheiri, F. Boudjema, "Function approximation based augmented backstepping control for an induction machine", WSEAS Transactions on Systems and Control, Vol. 2, Sep. 2007.
[14] M. do C. Nicoletti and J. R. Bertini, "An empirical evaluation of constructive neural network algorithms in classification tasks", Int. J. Innovative Computing and Applications, Vol. 1, No. 1, pp. 1-13, 2007.
[15] M. do C. Nicoletti et al., "Constructive neural network algorithms for feedforward architectures suitable for classification tasks", Springer-Verlag Berlin Heidelberg, SCI 258, pp. 1-23, 2009.

Fig. 6. (a) and (b) Evolution of $x_1$ and its approximation, respectively. (c) and (d) Evolution of $x_2$ and its approximation, respectively. (e) Evolution of the number of RBFs.



[16] M. Grochowski and W. Duch, "Constructive neural network algorithms that solve highly non-separable problems", Springer-Verlag Berlin Heidelberg, SCI 258, pp. 49-70, 2009.
[17] M. G. Park, J. H. Jeon, M. C. Lee, "Obstacle avoidance for mobile robots using artificial potential field approach with simulated annealing", IEEE International Symposium on Industrial Electronics, Vol. 3, pp. 1530-1535, 2001.
[18] M. Kubalcik, V. Bobal, "Self-tuning of continuous-time systems", WSEAS Transactions on Systems and Control, Vol. 5, pp. 802-813, Oct. 2010.
[19] M. R. Jafari et al., "On-line identification of non-linear systems using an adaptive RBF-based neural network", Proceedings of WCECS, San Francisco, USA, October 2007.
[20] N. B. Karayiannis and G. W. Mi, "Growing radial basis neural networks: merging supervised and unsupervised learning with network growth techniques", IEEE Trans. Neural Netw., Vol. 8, No. 6, pp. 1492-1506, Nov. 1997.
[21] F. Neri, "Learning and predicting financial series by combining evolutionary computation and agent simulation", Transactions on Computational Collective Intelligence, Springer, Heidelberg, Vol. 7, pp. 202-221, 2012.
[22] P. Y. Zhang, T. S. Lü, L. B. Song, "Soccer robot path planning based on the artificial potential field approach with simulated annealing", Robotica, Vol. 22, pp. 563-566, 2004.
[23] P. Vadakkepat, T. H. Lee, L. Xin, "Application of evolutionary artificial potential field in robot soccer system", IFSA World Congress and 20th NAFIPS Int. Conf., Vol. 5, pp. 2781-2785, Vancouver, BC, Canada, July 2001.
[24] S. Lee and R. M. Kil, "A Gaussian potential function network with hierarchically self-organizing learning", Neural Netw., Vol. 4, No. 2, pp. 207-224, 1996.
[25] S. S. Ge and Y. J. Cui, "New potential functions for mobile robot path planning", IEEE Transactions on Robotics and Automation, Vol. 16, No. 5, pp. 615-620, October 2000.
[26] S. S. Ge and Y. J. Cui, "New potential functions for mobile robot path planning", IEEE Trans. on Robot. and Autom., Vol. 16, No. 5, Oct. 2000.
[27] S. S. Baboo, P. Subashini, M. Krishnaveni, "Combining self-organizing maps and radial basis function networks for Tamil handwritten character recognition", ICGST International Journal on Graphics, Vision and Image Processing, Vol. 9, pp. 1-7, 2009.
