Fuzzy c-regression models based on the BELS method for nonlinear system identification
Borhen Aissaoui(1), Moêz Soltani(1), Dorsaf Elleuch(2), Abdelkader Chaari(1)
(1) Research Unit on Control, Monitoring and Safety of Systems (C3S), High School of Sciences and Engineering of Tunis (ESSTT), Tunisia
(2) Laboratory of Sciences and Techniques of Automatic Control & Computer Engineering (Lab-STA), National Engineering School of Sfax (ENIS), Sfax, Tunisia
{borhen.issaoui, [email protected]}, [email protected], [email protected]
Abstract—A fuzzy c-regression model clustering algorithm based on the Bias-Eliminated Least Squares (BELS) method is presented. This method is designed to develop an identification procedure for noisy nonlinear systems. The BELS method is used to identify the consequent parameters and to eliminate the bias. The proposed approach has been applied to a benchmark modeling problem, on which it showed good performance.
Keywords—Takagi-Sugeno fuzzy model; fuzzy c-regression models; Bias-Eliminated Least-Squares.
I. INTRODUCTION
Fuzzy modeling algorithms are studied in many research areas and are known as one of the better ways of describing nonlinear systems. The Takagi-Sugeno (T-S) system is the most studied fuzzy model in the literature [19]. In [6], the authors present a modified version of the Fuzzy C-Means (FCM) clustering algorithm called Fuzzy C-Regression Models (FCRM). The FCRM algorithm develops hyper-plane-shaped clusters, while the FCM algorithm develops hyper-spherical-shaped clusters. However, in [11], the authors show that the weighted least-squares algorithm used in FCRM makes it sensitive to initialization; hence, different initializations may easily lead to different results. To overcome this problem, the gradient descent algorithm was used to adjust the parameters of fuzzy models [9]. Recently, orthogonal least squares (OLS) has been used to identify the consequent parameters [11-12].
The affine T-S fuzzy model structure is frequently applied to the analysis and synthesis of nonlinear systems ([18], [8]). The T-S model has also been widely studied without the constant parameter ([20], [22]). In [25], the authors show that, if the scalar term is eliminated, the model can be treated as a quasi-linear model and stability analysis can be ensured.
In this paper, nonlinear system modeling with T-S fuzzy models is considered without taking the constant term of the consequent parameters into account. The BELS method is used to identify the consequent parameters of the local linear models. The BELS method has a robust capability against noise and is quite successful in improving the robustness of a variety of modeling problems. In other words, the BELS method is applicable to arbitrary colored disturbances acting on the plant ([14-15], [26]).
This paper is organized as follows. In Section 2, the fuzzy c-regression models clustering algorithm is briefly introduced. In Section 3, the proposed method is given. In Section 4, an example is given to illustrate the proposed approach. Finally, the conclusion is presented in Section 5.
II. FUZZY C-REGRESSION MODELS CLUSTERING ALGORITHM
The affine T-S fuzzy model based on FCRM belongs to the family of clustering algorithms with linear prototypes.
Let S = {(x_1, y_1), ..., (x_N, y_N)} = {(x_k, y_k), k = 1, ..., N} be a set of input-output sample data pairs. Assuming that the data pairs in S are drawn from c different fuzzy regression models, the hyper-plane-shaped cluster representatives adopted have the following structure:
$$ y_i = f_i(\hat{x}_k, \theta_i) = a_{i1} x_{k1} + a_{i2} x_{k2} + \cdots + a_{iM} x_{kM} + b_i = \hat{x}_k^T \theta_i, \qquad i = 1, 2, \ldots, c \tag{1} $$
where x_k = [x_{k1}, ..., x_{kM}] ∈ ℝ^M is the input vector, \hat{x}_k = [x_k, 1], c is the number of rules and θ_i = [a_{i1}, ..., a_{iM}, b_i]^T ∈ ℝ^{M+1} is the parameter vector of the corresponding local linear model (in this work the constant term b_i is taken as zero).
The distances E_ik(θ_i) are weighted with the membership values μ_ik in the objective function that is minimized by the clustering algorithm, formulated as:
$$ J(U, \theta) = \sum_{i=1}^{c} \sum_{k=1}^{N} (\mu_{ik})^m \left( E_{ik}(\theta_i) \right)^2 \tag{2} $$
with
$$ E_{ik}(\theta_i) = \left| y_k - \hat{x}_k^T \theta_i \right| \tag{3} $$
where m is the weighting exponent and μ_ik is the membership degree of x_k in the i-th cluster. The membership values μ_ik have to satisfy the following conditions:

$$ \mu_{ik} \in [0, 1], \quad i = 1, 2, \ldots, c; \; k = 1, 2, \ldots, N $$
$$ 0 < \sum_{k=1}^{N} \mu_{ik} < N, \quad i = 1, 2, \ldots, c \tag{4} $$
$$ \sum_{i=1}^{c} \mu_{ik} = 1, \quad k = 1, 2, \ldots, N $$

U.S. Government work not protected by U.S. copyright
The identification procedure of the FCRM algorithm is summarized as follows [6]. Given data S, set m > 1, specify the regression models of Equation (1) and choose the measure of error of Equation (3). Pick a termination threshold ε > 0 and initialize U(0) (e.g. randomly). Repeat for l = 1, 2, ...
Step 1. Calculate the c model parameter vectors θ_i(l) in Equation (1) that globally minimize the restricted objective function of Equation (2).
Step 2. Update U(l) with E_ik(θ_i(l)) so as to satisfy:
$$ \mu_{ik}^{(l)} = \begin{cases} \left[ \sum_{j=1}^{c} \left( \dfrac{E_{ik}}{E_{jk}} \right)^{2/(m-1)} \right]^{-1} & \text{if } E_{ik} > 0 \text{ for } 1 \le i \le c \\ 0 & \text{otherwise} \end{cases} \tag{5} $$
Until ||U(l) − U(l−1)|| ≤ ε, then stop; otherwise set l = l + 1 and return to Step 1.
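As an illustration, the membership update of Equation (5) can be vectorized in a few lines of NumPy. This is only a sketch, valid under the assumption that all errors E_ik are strictly positive; the function name and array layout are ours, not the paper's:

```python
import numpy as np

def fcrm_memberships(E, m=2.0):
    """Membership update of Eq. (5).

    E : (c, N) array of regression errors E_ik, assumed strictly positive.
    m : fuzzifier, m > 1.
    """
    p = 2.0 / (m - 1.0)
    inv = E ** (-p)                  # (1/E_ik)^(2/(m-1))
    return inv / inv.sum(axis=0)     # normalize over the c clusters

# toy check: two clusters, three points; the smaller the error, the
# larger the membership, and each column sums to one
E = np.array([[0.1, 0.5, 0.9],
              [0.9, 0.5, 0.1]])
U = fcrm_memberships(E)
print(U.sum(axis=0))
```

Each column of U sums to one, in agreement with the third condition of Equation (4).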
III. THE NEW FCRM CLUSTERING ALGORITHM
Many authors have shown that the clustering performance is severely distorted when noisy data are present ([24], [13], [3], [10]). To overcome this problem, the noise clustering (NC) algorithm is widely used ([11], [12]). In this approach, noise is considered as a separate class, represented by a fictitious prototype that has a constant distance δ from all the data points. The membership μ*_k of point x_k in the noise cluster is given by:
$$ \mu_k^* = 1 - \sum_{i=1}^{c} \mu_{ik} \tag{6} $$
Thus, the membership constraint for the good clusters is effectively relaxed to:
$$ \sum_{i=1}^{c} \mu_{ik} < 1 $$
Dave's objective function is given by:
$$ J_{NC}(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} (\mu_{ik})^m D_{ik}^2 + \sum_{k=1}^{N} \delta^2 (\mu_k^*)^m \tag{7} $$
where D_ik = ||x_k − v_i|| for any input x_k in the subspace i with center v_i.
Combining the noise clustering algorithm with the FCRM algorithm leads to the New FCRM (NFCRM) objective function [17]:
$$ J_{new}(S; U, \theta) = \sum_{k=1}^{N} \sum_{i=1}^{c} (\mu_{ik})^m \left( E_{ik}(\theta_i) \right)^2 + \delta^2 \sum_{k=1}^{N} (\mu_k^*)^m \tag{8} $$
where δ is a scale parameter that may be chosen, following the idea presented in [4], as:
$$ \delta^2 = \gamma \left( \frac{1}{cN} \sum_{k=1}^{N} \sum_{i=1}^{c} E_{ik}^2(\theta_i) \right) \tag{9} $$
where γ is a user-defined parameter that depends on the type of example.
To solve the constrained problem J_new with respect to μ_ik, we introduce N Lagrange multipliers λ_k, k = 1, ..., N. The minimization of J_new then reads:
$$ F(\mu_{ik}, \lambda_k) = J_{new} + \sum_{k=1}^{N} \lambda_k \left( 1 - \sum_{i=1}^{c} \mu_{ik} - \mu_k^* \right) \tag{10} $$
By differentiating the Lagrangian with respect to μ_ik, μ*_k and λ_k and setting the derivatives to zero, we obtain:
$$ \partial F / \partial \mu_{ik} = m (\mu_{ik})^{m-1} E_{ik}^2 - \lambda_k = 0 \tag{11} $$
$$ \partial F / \partial \mu_k^* = m (\mu_k^*)^{m-1} \delta^2 - \lambda_k = 0 \tag{12} $$
$$ \partial F / \partial \lambda_k = 1 - \sum_{i=1}^{c} \mu_{ik} - \mu_k^* = 0 \tag{13} $$
From Equations (11-12), we get:
$$ \mu_{ik} = \left[ \frac{\lambda_k}{m} \right]^{1/(m-1)} \left[ \frac{1}{E_{ik}^2} \right]^{1/(m-1)} \tag{14} $$
and
$$ \mu_k^* = \left[ \frac{\lambda_k}{m} \right]^{1/(m-1)} \left[ \frac{1}{\delta^2} \right]^{1/(m-1)} \tag{15} $$
Using Equations (13-15), we get:
$$ \left[ \frac{\lambda_k}{m} \right]^{1/(m-1)} = \frac{1}{ \sum_{j=1}^{c} \left( 1 / E_{jk}^2 \right)^{1/(m-1)} + \left( 1 / \delta^2 \right)^{1/(m-1)} } \tag{16} $$
Substituting Equation (16) into Equation (14), the following equation can be obtained:
$$ \mu_{ik} = \frac{1}{ \sum_{j=1}^{c} \left( E_{ik}^2 / E_{jk}^2 \right)^{1/(m-1)} + \left( E_{ik}^2 / \delta^2 \right)^{1/(m-1)} } \tag{17} $$
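The effect of the extra δ term in Equation (17) can be seen in a small NumPy sketch (helper names are ours; δ comes from Equation (9)): a point far from every regression hyper-plane keeps a total membership well below one, the remainder (Equation (6)) going to the noise cluster:

```python
import numpy as np

def noise_delta(E, gamma=0.1):
    """Scale parameter of Eq. (9): delta^2 = gamma * mean(E_ik^2)."""
    return np.sqrt(gamma * np.mean(E ** 2))

def nfcrm_memberships(E, delta, m=2.0):
    """Membership update of Eq. (17): Eq. (5) plus a competing
    'noise cluster' sitting at constant distance delta."""
    p = 2.0 / (m - 1.0)
    inv = E ** (-p)
    return inv / (inv.sum(axis=0) + delta ** (-p))

# first point is well explained by cluster 1, second point is an outlier
E = np.array([[0.1, 5.0],
              [0.2, 4.0]])
U = nfcrm_memberships(E, noise_delta(E))
print(U.sum(axis=0))   # close to 1 for the inlier, small for the outlier
```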
From Equations (1-8), the objective function of NFCRM is defined as:
$$ J_{new} = \sum_{k=1}^{N} \sum_{i=1}^{c} (\mu_{ik})^m \left( y_k - \sum_{j=1}^{M+1} \theta_{ij} \hat{x}_{kj} \right)^2 + \delta^2 \sum_{k=1}^{N} \left( 1 - \sum_{i=1}^{c} \mu_{ik} \right)^m \tag{18} $$
where \hat{x}_k = [x_k, 1]. The partial derivative of the objective function of Equation (18) is:
$$ \frac{\partial J_{new}}{\partial \theta_{ij}} = -2 \sum_{k=1}^{N} (\mu_{ik})^m \left( y_k - \sum_{t=1}^{M+1} \theta_{it} \hat{x}_{kt} \right) \hat{x}_{kj} = 0 \tag{19} $$
and then,
$$ \theta_{ij} = \frac{ \sum_{k=1}^{N} (\mu_{ik})^m \left( y_k - \sum_{t \ne j} \theta_{it} \hat{x}_{kt} \right) \hat{x}_{kj} }{ \sum_{k=1}^{N} (\mu_{ik})^m \hat{x}_{kj}^2 } \tag{20} $$

for i = 1, 2, ..., c and j = 1, 2, ..., M+1.
As mentioned in [23], the non-Euclidean distance is more
robust than the Euclidean distance. By transforming Equation (3), the measure of error is defined as:
$$ E_{ik}(\theta_i) = 1 - \exp\left( -\rho \left| y_k - \hat{x}_k^T \theta_i \right| \right) \tag{21} $$
where ρ is a positive constant. Then the NFCRM objective function Equation (8) is rewritten as follows:
$$ J_{new}(S; U, \theta) = \sum_{k=1}^{N} \sum_{i=1}^{c} (\mu_{ik})^m \left( E_{ik}(\theta_i) \right)^2 + \delta^{*2} \sum_{k=1}^{N} \left( 1 - \sum_{i=1}^{c} \mu_{ik} \right)^m \tag{22} $$
Then Equations (17) and (9) can be respectively rewritten as:
$$ \mu_{ik} = \frac{1}{ \sum_{j=1}^{c} \left( E_{ik}^2 / E_{jk}^2 \right)^{1/(m-1)} + \left( E_{ik}^2 / \delta^{*2} \right)^{1/(m-1)} } \tag{23} $$
and
$$ \delta^{*2} = \gamma \left( \frac{1}{cN} \sum_{k=1}^{N} \sum_{i=1}^{c} \left( E_{ik}(\theta_i) \right)^2 \right) \tag{24} $$
NFCRM Algorithm [17]:
Given data S, set m > 1. Fix γ > 0, ρ > 0 and initialize the parameter vectors θ_i at random. Pick a termination threshold ε > 0 and initialize U(0). Repeat for l = 1, 2, ...
Step 1. Compute the measure of error E_ik(θ_i^(l)) via Equation (21).
Step 2. Calculate δ*² via Equation (24).
Step 3. Compute μ_ik^(l) and θ_ij^(l) via Equations (23) and (20), respectively.
Step 4. Compute err = ||U(l) − U(l−1)||.
Until err ≤ ε, then stop; otherwise set l = l + 1 and return to Step 1.
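One pass of the loop above can be sketched in NumPy as follows. The sketch follows Steps 1-3, except that instead of the coordinate-wise update of Equation (20) it solves the stationarity condition of Equation (19) as a per-rule weighted least-squares problem in closed form; all names are ours, and the small clipping constant is an implementation safeguard, not part of the paper:

```python
import numpy as np

def nfcrm_iteration(theta, Xhat, y, m=2.0, gamma=0.1, rho=0.1):
    """One NFCRM pass: error (21), delta (24), memberships (23),
    then a weighted least-squares consequent update per rule.

    theta : (c, M+1) consequent parameters
    Xhat  : (N, M+1) regressors [x_k, 1]; y : (N,) outputs
    """
    # Step 1: robust error measure, Eq. (21)
    E = 1.0 - np.exp(-rho * np.abs(y[None, :] - theta @ Xhat.T))
    # Step 2: noise distance, Eq. (24)
    delta2 = gamma * np.mean(E ** 2)
    # Step 3a: memberships with the noise cluster, Eq. (23)
    p = 2.0 / (m - 1.0)
    inv = np.maximum(E, 1e-12) ** (-p)        # guard against E = 0
    U = inv / (inv.sum(axis=0) + delta2 ** (-p / 2.0))
    # Step 3b: consequent update (closed-form stand-in for Eq. (20))
    for i in range(theta.shape[0]):
        w = U[i] ** m
        A = Xhat.T @ (w[:, None] * Xhat)
        theta[i] = np.linalg.solve(A, Xhat.T @ (w * y))
    return U, theta

# toy run on piecewise-linear data with c = 2 rules
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
Xhat = np.hstack([X, np.ones((200, 1))])
y = np.where(X[:, 0] > 0, 2 * X[:, 0], -X[:, 0])
theta = rng.normal(size=(2, 2))
for _ in range(30):
    U, theta = nfcrm_iteration(theta, Xhat, y)
```

Iterating until ||U(l) − U(l−1)|| ≤ ε (Step 4) completes the algorithm; with a favorable initialization the two rows of theta approach the two local models that generated the data.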
A. Estimation of consequent parameters by the BELS method
The novel fuzzy c-regression model is used to decompose the input-output space into multiple linear structures. Gaussian membership functions are usually chosen to represent the fuzzy sets in the premise part of each fuzzy rule. The premise parameters can be easily obtained using μ_ik. The fuzzy set centers ν_ij and standard deviations σ_ij are calculated as follows:
$$ \nu_{ij} = \frac{ \sum_{k=1}^{N} \mu_{ik} x_{kj} }{ \sum_{k=1}^{N} \mu_{ik} }, \qquad i = 1, 2, \ldots, c; \; j = 1, 2, \ldots, M \tag{25} $$
and
$$ \sigma_{ij}^2 = \frac{ \sum_{k=1}^{N} \mu_{ik} \left( x_{kj} - \nu_{ij} \right)^2 }{ \sum_{k=1}^{N} \mu_{ik} } \tag{26} $$
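Equations (25)-(26) can be computed directly from the final membership matrix; a sketch (function name and array layout are ours):

```python
import numpy as np

def premise_gaussians(U, X):
    """Fuzzy-set centers (Eq. 25) and standard deviations (Eq. 26).

    U : (c, N) memberships; X : (N, M) input data.
    """
    s = U.sum(axis=1, keepdims=True)              # sum_k mu_ik, shape (c, 1)
    v = (U @ X) / s                               # Eq. (25)
    d2 = (X[None, :, :] - v[:, None, :]) ** 2     # (c, N, M) squared deviations
    sigma = np.sqrt(np.einsum('cn,cnm->cm', U, d2) / s)   # Eq. (26)
    return v, sigma

# crisp memberships recover the per-cluster sample mean and (biased) std
X = np.array([[0.0], [0.2], [10.0], [10.4]])
U = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
v, sigma = premise_gaussians(U, X)
print(v.ravel(), sigma.ravel())   # centers ~[0.1, 10.2], stds ~[0.1, 0.2]
```

Each Gaussian membership function is then exp(−(x_j − ν_ij)² / (2σ_ij²)).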
Once the antecedent parameters have been fixed, the BELS method can be applied to estimate the consequent parameters of each rule. Using the BELS algorithm, the consequent parameters are estimated by transforming the model of Equation (1) into an equivalent auxiliary model:
$$ y(k) = \phi^T(k) \Theta + \omega_0(k) \tag{27} $$
with Θ = [θ_1, ..., θ_M]^T.
We introduce a (2m + n) × 1 auxiliary parameter vector Θ and a (2m + n) × 1 auxiliary regression vector φ(k) as follows:
$$ \Theta = \begin{bmatrix} \theta \\ \alpha \end{bmatrix}, \qquad \phi(k) = \begin{bmatrix} \varphi(k) \\ \rho(k) \end{bmatrix} \tag{28} $$
where ρ(k) = [y(k − m − 1), ..., y(k − 2m)]^T and α = [0 ... 0]^T ∈ ℝ^m. Then, it is easy to verify the following equality:
$$ \phi^T(k) \Theta = \varphi^T(k) \theta \tag{29} $$
Thus, the model can be represented by an auxiliary model equivalent to Equation (27).
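The padding trick behind Equations (28)-(29) is easy to check numerically; the sizes n and m below are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2                       # regressor size and padding size (assumed)
theta = rng.normal(size=n)        # local-model parameters
varphi = rng.normal(size=n)       # original regressor varphi(k)
rho = rng.normal(size=m)          # extra delayed outputs rho(k)

# augmented quantities of Eq. (28): alpha is a block of zeros
Theta = np.concatenate([theta, np.zeros(m)])
phi = np.concatenate([varphi, rho])

# equality of Eq. (29): the zero block cancels the rho part
print(np.isclose(phi @ Theta, varphi @ theta))   # True
```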
The auxiliary parameter vector can be expressed by:
$$ \hat{\Theta}_{LS}(N) = R_{\phi\phi}^{-1} R_{\phi y} \tag{30} $$
where the auto-correlation matrix R_φφ and the cross-correlation vector R_φy are:
$$ R_{\phi\phi} = E\left[ \phi(k) \phi^T(k) \right] \tag{31} $$
$$ R_{\phi y} = E\left[ \phi(k) y(k) \right] \tag{32} $$
Using Equation (30), Θ̂_LS is rewritten as follows:
$$ \hat{\Theta}_{LS}(N) = \Theta + R_{\phi\phi}^{-1} R_{\phi\omega_0} \tag{33} $$
with Bias = R_{φφ}^{-1} R_{φω_0}.
The noise covariance vector R_{φω_0} ∈ ℝ^{(2m+n)×1} can be expressed by:
$$ R_{\phi\omega_0} = D R_{y\omega_0} \tag{34} $$
where R_{yω_0} is the vector of cross-correlations between the output and the noise, and D = [0 I_m]^T with [0 I_m] ∈ ℝ^{m×(2m+n)}.
By using the BELS method, the solution of Equation (27) is given by the following expression:
$$ \hat{\theta}_{BELS}(N) = \hat{\theta}_{LS}(N) - R_{\varphi\varphi}^{-1}(N) D R_{y\omega_0}(N) \tag{35} $$
with R_{yω_0} = E[y(k)ω_0(k)] the noise covariance vector.
This holds under the condition that an estimate of the noise covariance vector R_{yω_0} is attainable in some manner. The BELS algorithm is described as follows:
Step 1. Initialize the parameter vector θ_BELS = 0.
Step 2. From the sampled input-output data {u(k), y(k)}, k = 1, ..., N, calculate the estimates of the covariance matrices and vectors:
$$ \hat{R}_{\varphi\varphi}(N) = \frac{1}{N} \sum_{k=1}^{N} \varphi(k) \varphi^T(k) $$
$$ \hat{R}_{\varphi\rho}(N) = \frac{1}{N} \sum_{k=1}^{N} \varphi(k) \rho^T(k) \tag{36} $$
$$ \hat{R}_{\varphi y}(N) = \frac{1}{N} \sum_{k=1}^{N} \varphi(k) y(k) $$
$$ \hat{R}_{\rho y}(N) = \frac{1}{N} \sum_{k=1}^{N} \rho(k) y(k) $$
Step 3. Estimate the parameter vector θ by the LS method using Equation (30).
Step 4. Compute the estimate of the noise covariance R̂_{yω_0}:
$$ \hat{R}_{y\omega_0}(N) = \left( \hat{R}_{\varphi\rho}^T(N) \hat{R}_{\varphi\varphi}^{-1}(N) D \right)^{-1} \left( \hat{R}_{\varphi\rho}^T(N) \hat{\theta}_{LS}(N) - \hat{R}_{\rho y}(N) \right) \tag{37} $$
Step 5. Calculate the BELS estimate of the system parameter vector θ:
$$ \hat{\theta}_{BELS}(N) = \hat{\theta}_{LS}(N) - \hat{R}_{\varphi\varphi}^{-1}(N) D \hat{R}_{y\omega_0}(N) \tag{38} $$
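The bias-elimination idea behind Steps 4-5 can be illustrated on a first-order example. Note that this is not the paper's full BELS procedure: instead of estimating the noise statistics through the auxiliary model and Equation (37), the sketch assumes the noise variance is known and applies a correction of the form of Equations (35)/(38) by a short fixed-point iteration. All names and numerical values are illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, a, b, sig = 20000, 0.8, 1.0, 0.5

# noise-free dynamics x(k) = a x(k-1) + b u(k-1), observed as y = x + noise
u = rng.normal(size=N)
x = np.zeros(N)
for k in range(1, N):
    x[k] = a * x[k - 1] + b * u[k - 1]
y = x + sig * rng.normal(size=N)

# ordinary LS on the noisy regressor [y(k-1), u(k-1)]: biased estimate of a
Phi = np.column_stack([y[:-1], u[:-1]])
R = Phi.T @ Phi / (N - 1)            # sample auto-correlation, cf. Eq. (36)
r = Phi.T @ y[1:] / (N - 1)          # sample cross-correlation
theta_ls = np.linalg.solve(R, r)

# bias-eliminating correction in the spirit of Eqs. (35)/(38):
# theta = theta_ls + sig^2 R^{-1} D a, with D selecting the noisy
# output entry of the regressor; solved by a short fixed-point iteration
D = np.array([1.0, 0.0])
RinvD = np.linalg.solve(R, D)
theta = theta_ls.copy()
for _ in range(50):
    theta = theta_ls + sig ** 2 * RinvD * theta[0]

print(np.round(theta_ls, 3), np.round(theta, 3))
```

Plain least squares underestimates a because the lagged output in the regressor is correlated with the measurement noise; subtracting the R^{-1}D-shaped bias term recovers it, which is exactly the role of Equation (38).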
IV. EXPERIMENTAL RESULTS
In this section, the performance of the proposed algorithm is tested. The mean square error (MSE) is used as the performance index, defined as:
$$ MSE = \frac{1}{N} \sum_{k=1}^{N} \left( y_k - \hat{y}_k \right)^2 \tag{39} $$
A. Benchmark problem
Consider the nonlinear system given by Equation (40) [1]:
$$ y_{k+1} = \frac{ y_k \, y_{k-1} \, (y_k + 2)(y_k + 2.5) }{ 8.5 + y_k^2 + y_{k-1}^2 } + u_k + v_k \tag{40} $$
where y_k is the system output, u_k is the input, uniformly distributed in the interval [−1, 1], and v_k is white noise with zero mean and variance σ², added to the system output at different Signal-to-Noise Ratio (SNR) levels.
The SNR levels are chosen as follows: SNR = 1 dB, 5 dB, 10 dB, 15 dB and 20 dB.
The simulation results are given for two experimental cases. The training data set contains 500 input-output pairs, while for testing 1000 data pairs are generated by the following input signal:
$$ u_k = \begin{cases} \sin\left( \dfrac{2\pi k}{250} \right) & \text{if } k < 500 \\ 0.3 \sin\left( \dfrac{2\pi k}{250} \right) + 0.1 \sin\left( \dfrac{2\pi k}{25} \right) & \text{otherwise} \end{cases} \tag{41} $$
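The experimental protocol around Equations (39) and (41) can be sketched as follows. The plant output is replaced here by a stand-in sinusoid (simulating Equation (40) would supply the real output), and `add_noise_at_snr` is our own helper showing how the noise variance σ² follows from a target SNR:

```python
import numpy as np

rng = np.random.default_rng(0)

# training input: 500 samples, uniformly distributed in [-1, 1]
u_train = rng.uniform(-1.0, 1.0, 500)

# testing input of Eq. (41), 1000 samples
k = np.arange(1000)
u_test = np.where(k < 500,
                  np.sin(2 * np.pi * k / 250),
                  0.3 * np.sin(2 * np.pi * k / 250)
                  + 0.1 * np.sin(2 * np.pi * k / 25))

def add_noise_at_snr(y, snr_db, rng):
    """Zero-mean white noise scaled so that 10 log10(P_y / sigma^2) = snr_db."""
    sigma = np.sqrt(np.mean(y ** 2) / 10 ** (snr_db / 10))
    return y + sigma * rng.normal(size=len(y))

def mse(y, y_hat):
    """Performance index of Eq. (39)."""
    return np.mean((y - y_hat) ** 2)

y = np.sin(2 * np.pi * k / 100)    # stand-in for the plant output of Eq. (40)
for snr_db in (1, 5, 10, 15, 20):
    print(snr_db, round(mse(y, add_noise_at_snr(y, snr_db, rng)), 4))
```

The MSE between the clean and noisy signals shrinks as the SNR grows, matching the trend visible across Tables II-VI.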
Tables I-VI present the results obtained with different algorithms: Gustafson-Kessel (GK) [5], the NFCRM algorithm (NFCRMA) [11], FCM [7], the fuzzy model identification (FMI) clustering algorithm [2], NFCRM1 [16] and NFCRM2 [17].
Choosing {y(k-1), y(k-2), u(k), u(k-1)} as input-output variables, the number of fuzzy rules is four. The parameter settings are γ = 0.1 and ρ = 0.1 for NFCRM_BELS. MSETr and MSETs denote the MSE on the training and testing data, respectively.
In Case 1, the results are compared with those cited above with regard to noisy data. Table I shows that the best MSE is obtained by our method in both the training and testing phases. In addition, the consequent parameters are obtained without constant terms, unlike the other approaches mentioned in these tables. This shows that our model outperforms the other techniques reported in the literature, while using fewer parameters per rule.
TABLE I. COMPARISON RESULTS (CASE 1)
Algorithms MSETr MSETs
FCM 0.0090 0.2220
GK 0.0046 0.1347
FMI 0.0013 0.0181
NFCRMA 5.20e-4 0.0096
NFCRM1 4.76e-4 0.0052
NFCRM2 3.94e-4 0.0045
NFCRM_BELS 2.65e-4 0.0028
In Case 2, the noise influence is analyzed at different SNR levels (SNR = 1, 5, 10, 15 and 20 dB). The parameter settings are γ = 0.1 and ρ = 1 for the NFCRM_BELS algorithm. As shown in Tables II-VI, whatever the noise level, only the proposed algorithm retains good performance. Consequently, our algorithm is more resistant to noise than the other algorithms existing in the literature.
TABLE II. COMPARISON RESULTS WITH SNR=20DB
Algorithms MSETr MSETs
FCM 0.0285 0.2417
GK 0.0258 0.1867
FMI 0.0134 0.0802
NFCRMA 0.0133 0.0621
NFCRM1 0.0126 0.0473
NFCRM2 0.0116 0.0442
NFCRM_BELS 0.0093 0.0276
TABLE III. COMPARISON RESULTS WITH SNR=15DB
Algorithms MSETr MSETs
FCM 0.0533 0.3164
GK 0.0471 0.2212
FMI 0.0373 0.1110
NFCRMA 0.0363 0.0806
NFCRM1 0.0355 0.0786
NFCRM2 0.0342 0.0770
NFCRM_BELS 0.0256 0.0574
TABLE IV. COMPARISON RESULTS WITH SNR=10DB
Algorithms MSETr MSETs
FCM 0.1155 0.5505
GK 0.1100 0.3042
FMI 0.1068 0.2136
NFCRMA 0.1039 0.1926
NFCRM1 0.0963 0.1836
NFCRM2 0.0947 0.1762
NFCRM_BELS 0.0688 0.1130
TABLE V. COMPARISON RESULTS WITH SNR=5DB
Algorithms MSETr MSETs
FCM 0.4577 0.8167
GK 0.3364 0.8094
FMI 0.3356 0.4859
NFCRMA 0.3309 0.4465
NFCRM1 0.3158 0.4276
NFCRM2 0.3094 0.4261
NFCRM_BELS 0.2605 0.3273
TABLE VI. COMPARISON RESULTS WITH SNR=1DB
Algorithms MSETr MSETs
FCM 2.1292 2.1640
GK 1.0079 1.3765
FMI 0.9171 1.1649
NFCRMA 0.9046 1.1395
NFCRM1 0.8505 0.9491
NFCRM2 0.8092 0.9194
NFCRM_BELS 0.8041 0.8934
V. CONCLUSION
In this paper, a new fuzzy c-regression model clustering algorithm based on the BELS method (NFCRM_BELS) has been presented. The application of the BELS method improves the robustness of the FCRM method on noisy data. The proposed fuzzy modeling approach was applied to a benchmark modeling problem. The experimental results show that the NFCRM_BELS algorithm performs better than the other methods in terms of accuracy and compactness.
REFERENCES
[1] Bidyadhar, S. and Debashisha, J. (2011). A differential evolution based neural network approach to nonlinear identification, Applied Soft Computing vol. 11, no 1, pp. 861-871.
[2] Chen, J. Q., Xi, Y. G. and Zhang, Z. J. (1998). A clustering algorithm for fuzzy model identification, International Journal of Control (3): 319-329.
[3] Chen, J.L. and Wang, J.H. (1999). A new robust clustering algorithm-density-weighted fuzzy c-means, Proceedings of the IEEE Conference on Systems, Man, and Cybernetics, SMC 1999, Tokyo, Japan, pp. 12-15.
[4] Dave, R. N. (1991). Characterization and detection of noise in clustering, Pattern Recognition Letters vol. 12, no 11 , pp. 657-664.
[5] Gustafson, D.E. and Kessel, W.C. (1979). Fuzzy clustering with a fuzzy covariance matrix, Proceedings of the IEEE Conference on Decision Control, CDC 1978, San Diego, CA, USA, pp. 761-766.
[6] Hathaway, R. J. and Bezdek, J. C. (1993). Switching regression models and fuzzy clustering. IEEE Transactions on Fuzzy Systems, vol. 1, no 3, pp. 195-204.
[7] Hoppner, F., Klawonn, F., Kruse, R. and Runkler, T. (1999). Fuzzy Cluster Analysis, Methods for Classification, Data Analysis and Image Recognition, 1st Edn., John Wiley and Sons, Chichester.
[8] Johansen, T. A. and Foss, A. B. (1995). Identification of non-linear system structure and parameters using regime decomposition. Automatica, vol. 31, no 2, pp. 321-326.
[9] Kilic, K., Uncu, O. and Turksen, I.B. (2007). Comparison of different strategies of utilizing fuzzy clustering in structure identification, Information Sciences, vol. 177, no 23, pp. 5153-5162.
[10] Kim, K., Kim, Y.K., Kim, E. and Park, M. (2004). A new TSK fuzzy modeling approach, Proceedings of the IEEE International Conference on Fuzzy Systems, Fuzz-IEEE, Budapest, Hungary, pp. 773-776.
[11] Li C., Zhou J., Xiang X., Li Q. and An X. (2009). T-S fuzzy model identification based on a novel fuzzy c-regression model clustering algorithm, Engineering Applications of Artificial Intelligence, vol. 22, no 4-5, pp. 646-653.
[12] Li C., Zhou J., Xiang X., Li Q., An X. and Xiang X. (2010). A new T–S fuzzy-modeling approach to identify a boiler–turbine system, Expert Systems with Applications, vol. 37, no 3, pp. 2214-2221.
[13] Ohashi, Y. (1984). Fuzzy clustering and robust estimation, 9th Meeting, SAS Users Group International, Hollywood Beach, FL, USA, pp. 1-6.
[14] Stoica P. and Söderström T. (1982), Bias correction in least-squares identification. Int. J. Control, vol. 35, pp. 449-457.
[15] Stoica P., Söderström T. and Simonyte V. (1995). Study of a bias-free least-squares parameter estimator. IEE Proc-control theory and applications, vol. 142, no. 1, pp. 1-6.
[16] Soltani M., Aissaoui B., Chaari, A., Ben Hmida, F. and Gossa, M. (2011), A modified fuzzy c-regression model clustering algorithm for T-S fuzzy model identification, Proceedings of the 8th IEEE International Multi-Conference on Systems, Signals and Devices, SSD, Sousse, Tunisia, pp. 1-6.
[17] Soltani, M., Chaari, A. and Ben Hmida, F. (2012). A novel fuzzy c-regression model algorithm using a new error measure and particle swarm optimization, International Journal of Applied Mathematics and Computer Science (AMCS), vol. 22, no. 3, pp. 617-628.
[18] Sugeno, M. and Kang, G. T. (1988). Structure identification of fuzzy model. Fuzzy Sets and Systems, vol. 28, no 1, pp. 15-33.
[19] Takagi, T. and Sugeno, M. (1985), Fuzzy identification of systems and its application to modeling and control, IEEE Transactions on Systems, Man, and Cybernetics, vol. 15, no 1, pp. 116-132
[20] Tanaka, K. and Sugeno, M. (1992). Stability analysis and design of fuzzy control systems, Fuzzy Sets and Systems, vol. 45, no. 2, pp. 135-156.
[21] Söderström, T., Zheng, W. X. and Stoica, P. (1999). Comments on "On a Least-Squares-Based Algorithm for Identification of Stochastic Linear Systems", IEEE Transactions on Signal Processing, vol. 47, no. 5, pp. 1395-1396.
[22] Wang, H. O., Tanaka, K. and Griffin, M. (1995), Parallel Distributed Compensation of Nonlinear Systems by Takagi-Sugeno Fuzzy Model, Proc. IEEE International Conference on Fuzzy Systems - Fuzz-IEEE, Yokohama, Japan, pp.531-538.
[23] Wu, K.L. and Yang, M.S. (2002). Alternative c-means clustering algorithms, Pattern Recognition , vol. 35, no. 10, pp. 2267-2278.
[24] Yang, X., Song, Q. and Liu, S. (2005). Robust deterministic annealing algorithm for data clustering, Proceedings of the International Joint Conference on Neural Networks, IJCNN, Montreal, Canada, pp. 1878-1882.
[25] Zhao, J., Gorez, R. and Wertz, V. (1997), Synthesis of Fuzzy Control Systems Based on Linear Takagi-Sugeno Fuzzy Models. In: Murray-Smith, R., Johansen, T.A., (Eds.), Multiple Model Approaches to Modelling and Control, Taylor & Francis, London, pp.307-336.
[26] Zheng, W. X. (2002). On least-squares identification of ARMAX models, 15th IFAC Triennial World Congress, Barcelona, Spain.