NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS
Numer. Linear Algebra Appl. 2004; 11:609–618 (DOI: 10.1002/nla.362)

Structured weighted low rank approximation

M. Schuermans∗,†, P. Lemmerling and S. Van Huffel

K.U.Leuven, ESAT-SCD, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium

SUMMARY

This paper extends the weighted low rank approximation (WLRA) approach towards linearly structured matrices. In the case of Hankel matrices an equivalent unconstrained optimization problem is derived and an algorithm for solving it is proposed. The correctness of the latter algorithm is verified on a benchmark problem. Finally, the statistical accuracy and numerical efficiency of the proposed algorithm are compared with those of STLNB, a previously proposed algorithm for solving Hankel WLRA problems. Copyright © 2004 John Wiley & Sons, Ltd.

KEY WORDS: rank reduction; structured matrices; weighted norm

1. INTRODUCTION

Multivariate linear problems play an important role in many applications, including multiple-input multiple-output (MIMO) system identification and deconvolution problems with multiple outputs. These linear models can be described as AX ≈ B or, equivalently, as [A B][X^T −I]^T ≈ 0 with

A ∈ R^{n×(m−d)}, B ∈ R^{n×d} the observed variables and X ∈ R^{(m−d)×d} the parameter matrix to be estimated. Since often all variables (i.e. those in A and B) are perturbed by noise, the total least squares (TLS) approach is used instead of the least squares (LS) approach. The TLS approach solves the following problem:

\[
\min_{\Delta A,\,\Delta B}\ \|[\Delta A\ \ \Delta B]\|_F^2 \quad \text{such that} \quad (A+\Delta A)X = B+\Delta B \tag{1}
\]

∗Correspondence to: M. Schuermans, K.U.Leuven, ESAT-SCD, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium.

†E-mail: [email protected]

Contract/grant sponsor: Research Council KUL; contract/grant numbers: GOA-Mefisto 666, IDO/99/003 and IDO/02/009. Contract/grant sponsor: Flemish Government (FWO); contract/grant numbers: G.0200.00, G.0078.01, G.0407.02, G.0269.02 and G.0270.02. Contract/grant sponsor: AWI. Contract/grant sponsor: IWT. Contract/grant sponsor: Belgian Federal Government; contract/grant numbers: IUAP IV-02 and IUAP V-22. Contract/grant sponsor: EU.

Received 10 January 2003; Revised 10 October 2003; Published online 5 May 2004.
Copyright © 2004 John Wiley & Sons, Ltd.


where ‖·‖_F stands for the Frobenius norm, S ≡ [A B] is the observed data matrix and ΔS ≡ [ΔA ΔB] is the correction applied to S. The standard way of solving the TLS problem is by means of the singular value decomposition (SVD) [1, 2]. The latter approach yields maximum likelihood (ML) estimates of X when the 'noise part'‡ of S has rows that are independently identically distributed (i.i.d.) with common zero mean vector and common covariance matrix that is an (unknown) multiple of the identity matrix I_m.
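For concreteness, the classical SVD-based TLS solution can be sketched in a few lines of numpy (a minimal sketch; the function name and the synthetic test data are ours, not from the paper):

```python
import numpy as np

def tls(A, B):
    """Total least squares solution of AX ~ B via the SVD of [A B],
    following the classical construction of References [1, 2]."""
    p = A.shape[1]                      # p = m - d columns in A
    V = np.linalg.svd(np.hstack([A, B]))[2].T
    # Right singular vectors of the d smallest singular values;
    # the d x d block V22 must be non-singular for X to exist.
    V12, V22 = V[:p, p:], V[p:, p:]
    return -V12 @ np.linalg.inv(V22)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
X_true = rng.standard_normal((3, 2))
B = A @ X_true + 1e-3 * rng.standard_normal((20, 2))
print(np.linalg.norm(tls(A, B) - X_true))   # small estimation error
```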

However, in many applications the observed data matrix S is structured and, since the SVD does not preserve structure, [A+ΔA  B+ΔB] will typically be unstructured. In Reference [3] it was shown that this loss of structure implies, under noise conditions that occur naturally in many problems, a loss of statistical accuracy of the estimated parameter matrix. These specific noise conditions can best be explained by representing the structured matrix S by its minimal vector representation s (e.g. a Hankel matrix S ∈ R^{n×m} is converted into a minimal representation s ∈ R^{n+m−1}). The noise part of s has to obey a Gaussian i.i.d. distribution in order to generate the noise condition mentioned earlier. To obtain an ML parameter matrix in these frequently occurring structured cases, one has

to solve a so-called structured weighted low rank approximation (SWLRA) problem:

\[
\min_{\substack{R\in\Omega\\ \mathrm{rank}(R)\le r<k}} \|S-R\|_V^2 \tag{2}
\]

with R, S ∈ R^{n×m}, Ω the set of all matrices having the same structure as S, and ‖M‖²_V ≡ vec(M)^T V vec(M), where vec(M) stands for the vectorized form of M, i.e. a vector constructed by stacking the consecutive columns of M in one vector. So, for a given structured data matrix S of a certain rank k, we are looking for the nearest (in the weighted norm ‖·‖²_V sense) lower rank matrix R with the same structure as S. We define Δr ≡ m − r. For specific choices of V, Δr and Ω, (2) corresponds to well-known problems:

• If V is the identity matrix, denoted as V = I, and Ω = R^{n×m}, problem (2) reduces to the well-studied TLS problem (1) with R = [A+ΔA  B+ΔB] and d = Δr, i.e. we enforce Δr independent linear relations among the columns of R in order to reduce R to rank ≤ r.

• When Ω is a set of matrices sharing a particular linear structure (for example, Hankel matrices) and V is a diagonal matrix, (2) corresponds to the so-called structured TLS (STLS) problem. STLS problems with Δr = 1 have been studied extensively [3–6]. For Δr > 1 only a few algorithms are described [7, 8], most of which yield statistically suboptimal results, such as the algorithm that finds lower rank structured matrices by alternating iterations between lower rank matrices and structured matrices [7].

• The case in which Ω = R^{n×m} corresponds to the previously introduced unstructured weighted low rank approximation (UWLRA) problem [9].

In this paper we consider (2) with Ω the set of a particular type of linearly structured matrices (for example, the set of Hankel matrices).

‡The noise part corresponds to S_n ∈ R^{n×m} in S = S_o + S_n, where S_o ∈ R^{n×m} contains the unobserved noiseless data and S ∈ R^{n×m} contains the observed noisy data.


From a statistical point of view it only makes sense to treat identical elements in an identical way, e.g. in a Hankel matrix all the elements on one antidiagonal should be treated the same way. Therefore (2) is reformulated as

\[
\min_{\substack{\mathrm{vec2}(R)\\ \mathrm{rank}(R)\le r<k}} \|S-R\|_W^2 \tag{3}
\]

with ‖M‖²_W ≡ vec2(M)^T W vec2(M), where vec2(M) is a minimal vector representation of the linearly structured matrix M. E.g., when Ω represents the set of Hankel matrices, vec2(M) is a vector containing the different elements of the different antidiagonals. Note that, due to the one-to-one relation between M and vec2(M), the condition R ∈ Ω no longer appears in (3). Problem (3) is the so-called SWLRA problem.
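To make vec2(·) and the weighted norm concrete for the Hankel case, here is a small sketch (the helper names are ours; a diagonal W with one weight per antidiagonal is assumed, as in the examples of Section 4):

```python
import numpy as np
from scipy.linalg import hankel

def vec2_hankel(M):
    """Minimal vector representation of an n x m Hankel matrix:
    the n+m-1 distinct antidiagonal values (first column, then
    the rest of the last row)."""
    return np.concatenate([M[:, 0], M[-1, 1:]])

def weighted_norm_sq(M, W):
    """||M||_W^2 = vec2(M)^T W vec2(M)."""
    v = vec2_hankel(M)
    return v @ W @ v

S = hankel([1, 2, 3], [3, 4, 5])     # 3 x 3 Hankel matrix
W = np.diag([1., 2., 3., 2., 1.])    # one weight per antidiagonal
print(vec2_hankel(S))                # [1. 2. 3. 4. 5.]
print(weighted_norm_sq(S, W))        # 93.0
```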

The major contribution of this paper is the extension of the concept and the algorithm presented in Reference [9] to linearly structured matrices (i.e. the extension of UWLRA to SWLRA problems). Problem (3) can also be interpreted as an extension of the STLS problems described in References [3–6] by allowing Δr to be larger than 1.

The paper is structured as follows. In Section 2, an unconstrained optimization problem equivalent to problem (3) is derived by means of the method of Lagrange multipliers. Section 3 introduces an algorithm for solving this unconstrained optimization problem. Finally, Section 4 concludes with some numerical experiments illustrating the statistical optimality of the SWLRA algorithm, and the SWLRA algorithm is compared with the STLNB algorithm, based on the structured total least norm approach, described in Reference [8].

2. DERIVATION

The first step in the derivation is to reformulate (3) into an equivalent double-minimization problem:

\[
\min_{\substack{N\in\mathbb{R}^{m\times(m-r)}\\ N^TN=I}}\ \min_{\substack{R\in\mathbb{R}^{n\times m}\\ RN=0}}\ \|S-R\|_W^2 \tag{4}
\]

The second step consists of finding a closed form expression f(N) for the solution of the inner minimization min_{R ∈ R^{n×m}, RN=0} ‖S − R‖²_W. The latter is obtained as follows. Applying the technique of Lagrange multipliers to the inner minimization of (4) yields the Lagrangian

\[
\mathcal{L}(L,R)=\mathrm{vec2}(S-R)^T W\,\mathrm{vec2}(S-R) - \mathrm{tr}\big(L^T(RN)\big) \tag{5}
\]

where tr(A) stands for the trace of the matrix A and L is the matrix of Lagrange multipliers. Using the equalities

\[
\mathrm{vec}(A)^T\mathrm{vec}(B)=\mathrm{tr}(A^TB), \qquad
\mathrm{vec}(ABC)=(C^T\otimes A)\,\mathrm{vec}(B)
\]

(5) becomes

\[
\mathcal{L}(L,R)=\mathrm{vec2}(S-R)^T W\,\mathrm{vec2}(S-R) - \mathrm{vec}(L)^T(N^T\otimes I)\,\mathrm{vec}(R) \tag{6}
\]


Note that when R is a linearly structured matrix, it is straightforward to write down a relation between vec(R) and its minimal vector representation vec2(R):

\[
\mathrm{vec}(R)=H\,\mathrm{vec2}(R) \tag{7}
\]

where H ∈ R^{nm×q} and q is the number of different elements in R. E.g., in the case of a Hankel matrix, q = n + m − 1 and vec2(R) can be constructed from the first column and last row of R.
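For the Hankel case, H is just a 0/1 selection matrix. A small sketch (the helper name is ours) that also checks relation (7):

```python
import numpy as np
from scipy.linalg import hankel

def hankel_duplication_matrix(n, m):
    """H with vec(R) = H vec2(R) for n x m Hankel matrices: entry (i, j)
    of R equals element i + j of vec2(R), and vec stacks columns."""
    q = n + m - 1
    H = np.zeros((n * m, q))
    for j in range(m):
        for i in range(n):
            H[j * n + i, i + j] = 1.0
    return H

n, m = 4, 3
s = np.arange(1.0, n + m)                 # vec2(R) = [1, 2, ..., 6]
R = hankel(s[:n], s[n - 1:])              # corresponding Hankel matrix
H = hankel_duplication_matrix(n, m)
print(np.allclose(R.flatten(order='F'), H @ s))   # checks relation (7)
```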

Substituting (7) in (6), the following expression is obtained for the Lagrangian:
\[
\mathcal{L}(L,R)=\mathrm{vec2}(S-R)^T W\,\mathrm{vec2}(S-R) - \mathrm{vec}(L)^T(N^T\otimes I_n)H\,\mathrm{vec2}(R) \tag{8}
\]

Setting the derivatives of \mathcal{L} w.r.t. vec2(R) and L equal to 0 yields the following set of equations:
\[
\begin{bmatrix} 2W & -H^T(N\otimes I_n)\\ (N^T\otimes I_n)H & 0 \end{bmatrix}
\begin{bmatrix} \mathrm{vec2}(R)\\ \mathrm{vec}(L) \end{bmatrix}
=
\begin{bmatrix} 2W\,\mathrm{vec2}(S)\\ 0 \end{bmatrix} \tag{9}
\]

Using the fact that
\[
\begin{bmatrix} A & -B\\ B^T & 0 \end{bmatrix}^{-1}
=
\begin{bmatrix} A^{-1}-A^{-1}B(B^TA^{-1}B)^{-1}B^TA^{-1} & *\\ -(B^TA^{-1}B)^{-1}B^TA^{-1} & * \end{bmatrix}
\]

and setting H_2 ≡ (N ⊗ I_n)^T H, it follows from (9) that
\[
\begin{aligned}
\mathrm{vec2}(R) &= \big(I_q - W^{-1}H_2^T(H_2W^{-1}H_2^T)^{-1}H_2\big)\,\mathrm{vec2}(S)\\
\Rightarrow\quad \mathrm{vec2}(S-R) &= W^{-1}H_2^T(H_2W^{-1}H_2^T)^{-1}H_2\,\mathrm{vec2}(S)
\end{aligned} \tag{10}
\]

As a result, the double-minimization problem (4) can be written as the following optimization problem:

\[
\min_{\substack{N\in\mathbb{R}^{m\times(m-r)}\\ N^TN=I}} \mathrm{vec2}(S)^T H_2^T (H_2 W^{-1} H_2^T)^{-1} H_2\,\mathrm{vec2}(S) \tag{11}
\]
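For illustration, the cost (11) transcribes directly into numpy (a sketch; the function name is ours, with vec2 and H as in the Hankel sketches above). Note that for Δr > 1 the matrix H_2 W^{−1} H_2^T becomes singular in the Hankel case, which is exactly the breakdown discussed in Section 3:

```python
import numpy as np

def swlra_cost_N(S, W, N, H):
    """Cost (11) for a given N with orthonormal columns:
    vec2(S)^T H2^T (H2 W^{-1} H2^T)^{-1} H2 vec2(S),
    where H2 = (N kron I_n)^T H."""
    n = S.shape[0]
    s2 = np.concatenate([S[:, 0], S[-1, 1:]])   # vec2(S), Hankel case
    H2 = np.kron(N, np.eye(n)).T @ H
    M = H2 @ np.linalg.solve(W, H2.T)           # H2 W^{-1} H2^T
    t = H2 @ s2
    return t @ np.linalg.solve(M, t)
```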

3. ALGORITHM

From this point on we will focus on a specific Ω since, as will become clear further on, it is not possible to derive a general algorithm that can deal with any type of linearly structured matrix. The Hankel structure will be investigated further, since this is one of the most frequently occurring structures in signal processing applications. As a result, Toeplitz matrices are also dealt with, since they can be converted into Hankel matrices by a simple permutation of the rows, whereas the solution of the corresponding SWLRA problem does not change. The straightforward approach for solving (4) would be to apply a non-linear LS (NLLS) solver to (11). For Δr = 1 this works fine, but when Δr > 1 this approach breaks down by yielding the trivial solution R = 0. This can easily be understood by considering (4) with W equal to the identity matrix.


As can be seen from (10), the inner minimization of (4) corresponds to an orthogonal projection of vec2(S) on the orthogonal complement of the column space of H_2^T ∈ R^{(n+m−1)×nΔr}. Therefore it is clear that for Δr = 1 the orthogonal complement of the column space of H_2^T will never be empty (the dimension of the column space of H_2^T equals rank(H_2^T) = n), but for Δr > 1 the latter orthogonal complement will typically be empty since n > m. As a result the projection will yield R = 0. The problem lies in an overparameterization of the matrix H_2^T = H^T(N ⊗ I_n).

To find a solution to the latter problem we first state the inner minimization in (4) in words:

'Given a matrix N whose column space is the nullspace of a non-trivial Hankel matrix, find the Hankel matrix R closest to S such that the nullspace of R is spanned by the column space of N.' A parameterization of the nullspace of a Hankel matrix can be found by means of the so-called Vandermonde decomposition of a Hankel matrix. Given a Hankel matrix R ∈ R^{n×m} of rank r, it is straightforward to see that it can be parameterized as follows:

\[
R=\begin{bmatrix}
1 & 1 & \dots & 1\\
z_1 & z_2 & \dots & z_r\\
\vdots & \vdots & & \vdots\\
z_1^{n-1} & z_2^{n-1} & \dots & z_r^{n-1}
\end{bmatrix}
\begin{bmatrix}
c_1 & & &\\
& c_2 & &\\
& & \ddots &\\
& & & c_r
\end{bmatrix}
\begin{bmatrix}
1 & z_1 & \dots & z_1^{m-1}\\
1 & z_2 & \dots & z_2^{m-1}\\
\vdots & \vdots & & \vdots\\
1 & z_r & \dots & z_r^{m-1}
\end{bmatrix} \tag{12}
\]

with z_i, i = 1,…,r the so-called complex signal poles and c_i, i = 1,…,r the so-called complex amplitudes. By defining a vector p = [p_1 p_2 … p_{r+1}]^T such that the following polynomial in λ

\[
p_1 + p_2\lambda + \cdots + p_{r+1}\lambda^{r}
\]

has roots z_i, i = 1,…,r, it is easy to see that the nullspace of R can be parameterized by p_i, i = 1,…,r+1, in the following matrix whose rows span the nullspace of R:

\[
\begin{bmatrix}
p_1 & p_2 & \dots & p_{r+1} & 0 & 0 & \dots & 0\\
0 & p_1 & p_2 & \dots & p_{r+1} & 0 & \dots & 0\\
\vdots & & \ddots & & & \ddots & & \vdots\\
0 & 0 & \dots & 0 & p_1 & p_2 & \dots & p_{r+1}
\end{bmatrix}\in\mathbb{R}^{(m-r)\times m}
\]
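This claim is easy to verify numerically. The sketch below (helper names are ours) builds a rank-2 Hankel matrix from the Vandermonde decomposition (12) and checks that the rows of the shifted-coefficient matrix annihilate it:

```python
import numpy as np

def banded_poly_matrix(p, ncols):
    """Rows contain shifted copies of p = [p1 ... p_{r+1}]; they span
    the nullspace of any Hankel matrix with Vandermonde poles at the
    roots of p1 + p2*z + ... + p_{r+1}*z^r."""
    r = len(p) - 1
    P = np.zeros((ncols - r, ncols))
    for i in range(ncols - r):
        P[i, i:i + r + 1] = p
    return P

n, m = 5, 4
z = np.array([0.9, -0.5])             # signal poles
c = np.array([2.0, 1.0])              # amplitudes
# Rank-2 Hankel matrix built from the Vandermonde decomposition (12)
R = (np.vander(z, n, increasing=True).T * c) @ np.vander(z, m, increasing=True)
# Polynomial coefficients with roots z, lowest degree first
p = np.polynomial.polynomial.polyfromroots(z)
print(np.allclose(R @ banded_poly_matrix(p, m).T, 0.0))   # True
```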

We could now use the previous matrix to parameterize N in (11), but a different approach is taken here. First note that from (12) it follows that the vector vec2(R) (with R being a rank r Hankel matrix) can be written as the following linear combination:

\[
\mathrm{vec2}(R) = c_1 b_1 + c_2 b_2 + \cdots + c_r b_r
\]

with vectors b_i ≡ [1 z_i … z_i^{n+m−2}]^T and c_i scalar coefficients for i = 1,…,r.

As a result, it is clear that the rows of the following matrix:

\[
H_3\equiv\begin{bmatrix}
p_1 & p_2 & \dots & p_{r+1} & 0 & 0 & \dots & 0\\
0 & p_1 & p_2 & \dots & p_{r+1} & 0 & \dots & 0\\
\vdots & & \ddots & & & \ddots & & \vdots\\
0 & 0 & \dots & 0 & p_1 & p_2 & \dots & p_{r+1}
\end{bmatrix}\in\mathbb{R}^{(n+m-1-r)\times(n+m-1)}
\]

span the space perpendicular to b_i, i = 1,…,r. Therefore, bearing in mind the goal of the inner minimization of (4), the rank deficient Hankel matrix R closest to S can be found by means of the following orthogonal projection:



\[
\mathrm{vec2}(R) = \big(I - W^{-1}H_3^T(H_3W^{-1}H_3^T)^{-1}H_3\big)\,\mathrm{vec2}(S) \tag{13}
\]

and (11) thus becomes

\[
\min_{p}\ \mathrm{vec2}(S)^T H_3^T (H_3 W^{-1} H_3^T)^{-1} H_3\,\mathrm{vec2}(S) \tag{14}
\]

with p ≡ [p_1 p_2 … p_{r+1}]^T. Note that the new expressions are similar to those before, although we now work with a vector p instead of a matrix N. The algorithm for solving the SWLRA problem for Hankel matrices can therefore be summarized as follows:

Algorithm SWLRA
Input: Hankel matrix S ∈ R^{n×m} with first column [s_1 s_2 … s_n]^T and last row [s_n … s_{n+m−1}], rank r and weighting matrix W.
Output: Hankel matrix R of rank ≤ r such that R is as close as possible to S in the ‖·‖_W sense.
Begin
  Step 1: Construct the matrix S′ by rearranging the elements of S such that S′ ∈ R^{(n+m−r−1)×(r+1)} is a Hankel matrix with first column [s_1 … s_{n+m−r−1}]^T and last row [s_{n+m−r−1} … s_{n+m−1}].
  Step 2: Compute the SVD of S′: UΣV^T.
  Step 3: Take the starting value p_0 equal to the (r+1)-th right singular vector of S′.
  Step 4: Minimize the cost function in (14).
  Step 5: Compute R using (13).
End

In Step 4 a standard NLLS solver (Matlab's lsqnonlin) is used. In order to use an NLLS routine, the cost function has to be cast in the form f^T f, and to do so the Cholesky decomposition of H_3^T(H_3W^{−1}H_3^T)^{−1}H_3 has to be computed. This can be done by a QR factorization of W^{−1/2}H_3^T. The computationally most intensive step is Step 4.
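A possible transcription of Algorithm SWLRA in Python is sketched below. It assumes a diagonal weighting matrix W (given as a vector w of weights on vec2, as in the examples of Section 4) and uses scipy's least_squares as a stand-in for Matlab's lsqnonlin; all helper names are ours:

```python
import numpy as np
from scipy.linalg import hankel
from scipy.optimize import least_squares

def banded(p, q):
    """(q - r) x q matrix with shifted copies of p = [p1 ... p_{r+1}]
    in its rows (the matrix H3 of the text)."""
    r = len(p) - 1
    H3 = np.zeros((q - r, q))
    for i in range(q - r):
        H3[i, i:i + r + 1] = p
    return H3

def swlra_hankel(S, r, w):
    """Hankel SWLRA, Steps 1-5: nearest (in the weighted norm given by
    the vector w on vec2) Hankel matrix of rank <= r."""
    n, m = S.shape
    q = n + m - 1
    s = np.concatenate([S[:, 0], S[-1, 1:]])          # vec2(S)

    def residual(p):
        # Cost (14) cast as f^T f via a QR factorization of W^{-1/2} H3^T:
        # with R1 from the QR, R1^T R1 = H3 W^{-1} H3^T and f = R1^{-T} H3 s.
        H3 = banded(p, q)
        _, R1 = np.linalg.qr(H3.T / np.sqrt(w)[:, None])
        return np.linalg.solve(R1.T, H3 @ s)

    # Steps 1-3: starting value p0 from the SVD of the rearranged S'
    Sp = hankel(s[:q - r], s[q - r - 1:])             # (q-r) x (r+1) Hankel
    p0 = np.linalg.svd(Sp)[2][-1]                     # (r+1)-th right singular vector
    # Step 4: non-linear least squares (stand-in for Matlab's lsqnonlin)
    p = least_squares(residual, p0).x
    # Step 5: projection (13) gives vec2(R); rebuild the Hankel matrix
    H3 = banded(p, q)
    M = H3 @ (H3.T / w[:, None])                      # H3 W^{-1} H3^T
    r2 = s - (H3.T @ np.linalg.solve(M, H3 @ s)) / w
    return hankel(r2[:n], r2[n - 1:])
```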

4. NUMERICAL EXPERIMENTS

In this section we first consider a small SWLRA problem of which the solution is known, in order to illustrate the correctness of the proposed algorithm. The last subsection compares the efficiency and statistical accuracy of the SWLRA algorithm and of the STLNB algorithm proposed in Reference [8].

4.1. Benchmarks

To illustrate the numerical correctness of the algorithm SWLRA, we use two small SWLRA problems proposed in Reference [8], of which the exact solution can be calculated analytically. In Reference [8], modelling problems have been discussed which approximate a given data sequence z_k ∈ C^p by the impulse response z̃_k ∈ C^p of a finite-dimensional linear time-invariant (LTI) system of a given order, yielding the following SWLRA problem:


\[
\min_{\tilde z_k}\ \sum_{k=1}^{p}\big[(z_k - \tilde z_k)\,w_k\big]^2 \quad \text{s.t.}\quad \tilde C\begin{bmatrix} X\\ -I \end{bmatrix} = 0 \tag{15}
\]

where C̃ ∈ C^{n×m} is a Toeplitz matrix constructed from z̃_k and the w_k are appropriate weights.

Example 1
Consider the following LTI system of order 4:
\[
\begin{aligned}
w_{k+1} &= \mathrm{diag}(0.4,\,0.3,\,0.2,\,0.1)\,w_k\\
z_k &= [1\ 1\ 1\ 1]\,w_k
\end{aligned}
\]

with w_0 = [1 1 1 1]^T. By arranging the first eight samples z_k of the impulse response in a Toeplitz matrix C

\[
C=\begin{bmatrix}
0.1 & 0.3 & 1 & 4\\
0.0354 & 0.1 & 0.3 & 1\\
0.013 & 0.0354 & 0.1 & 0.3\\
0.00489 & 0.013 & 0.0354 & 0.1\\
0.00187 & 0.00489 & 0.013 & 0.0354
\end{bmatrix}
\]

and computing the best rank 1 structure-preserving approximation C̃ to C using the weights (w_1², w_2², …, w_8²) = (1, 2, 3, 4, 4, 3, 2, 1), we obtain with algorithm SWLRA the same ratio z̃_k/z̃_{k−1} = 0.26025661, k = 2,…,8, as the analytically determined solution in Reference [8]. The result is only accurate up to eight digits, since accuracy is lost due to the squaring effect of the data matrix S in the cost function of (14).
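For reference, the data of Example 1 can be regenerated and fed to the swlra_hankel sketch above; the Toeplitz matrix is flipped into a Hankel matrix by the row permutation mentioned in Section 3, and, assuming the optimizer converges to the benchmark solution, the printed ratios approach 0.26025661:

```python
import numpy as np
from scipy.linalg import toeplitz

# Impulse response z_k = [1 1 1 1] w_k of the order-4 system above
A = np.diag([0.4, 0.3, 0.2, 0.1])
w_state = np.ones(4)
z = []
for _ in range(8):
    z.append(w_state.sum())
    w_state = A @ w_state
z = np.array(z)                       # [4, 1, 0.3, 0.1, 0.0354, ...]
C = toeplitz(z[3:], z[3::-1])         # the 5 x 4 Toeplitz matrix of the paper
weights = np.array([1., 2., 3., 4., 4., 3., 2., 1.])
# Flipping the rows turns the Toeplitz matrix into a Hankel matrix;
# the palindromic weights are unaffected by the reversed vec2 ordering.
R = swlra_hankel(C[::-1], 1, weights)
v = np.concatenate([R[:, 0], R[-1, 1:]])[::-1]    # recovered sequence
print(v[1:] / v[:-1])                 # each ratio ~ 0.26025661
```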

Example 2
Consider the following LTI system of order 2:
\[
\begin{aligned}
w_{k+1} &= \begin{bmatrix} 1 & -1\\ 0 & 1 \end{bmatrix} w_k\\
z_k &= [1\ 0]\,w_k
\end{aligned}
\]
with w_0 = [6\ 1]^T.

We arrange the data z_k in the 5×2 Toeplitz matrix

\[
C=\begin{bmatrix}
5 & 6\\
4 & 5\\
3 & 4\\
2 & 3\\
1 & 2
\end{bmatrix}
\]

and determine the best rank 1 structure-preserving approximation C̃ to C by solving the corresponding SWLRA problem with weights (w_1², w_2², …, w_6²) = (1, 2, 2, 2, 2, 1). We obtain the optimal solution with z̃_k/z̃_{k−1} = 0.76292301, k = 2,…,6, as determined in Reference [8]. Here too the accuracy is limited in the same way: in contrast to STLNB, which is accurate up to 10^−16, we lose accuracy because of the squaring effect of our cost function f^T f that has to be minimized by the algorithm SWLRA.


Table I. Numerical results for the 30×20 matrix.

                SWLRA                          STLNB
Δr     ‖S−R‖_F       cpu time (s)     ‖S−R‖_F       cpu time (s)
 2     8.04×10^−7    6.07×10^−1       7.87×10^−7    4.83×10^−2
 4     8.41×10^−7    5.52×10^−1       7.19×10^−7    8.89×10^−2
 6     6.85×10^−7    5.00×10^−1       6.48×10^−7    1.55×10^−1
 8     9.36×10^−7    4.39×10^−1       5.96×10^−7    2.90×10^−1
10     5.70×10^−7    3.78×10^−1       4.40×10^−7    4.18×10^−1
12     4.11×10^−7    3.14×10^−1       3.82×10^−7    4.20×10^−1
14     2.93×10^−7    2.53×10^−1       2.74×10^−7    2.41×10^−1
16     1.45×10^−7    2.00×10^−1       1.01×10^−7    4.48×10^−1
18     7.40×10^−8    1.90×10^−1       1.12×10^−4    2.05

Table II. Numerical results for the 50×30 matrix.

                SWLRA                          STLNB
Δr     ‖S−R‖_F       cpu time (s)     ‖S−R‖_F       cpu time (s)
 2     7.54×10^−7    2.43             6.76×10^−7    1.18×10^−1
 4     1.22×10^−6    2.27             6.67×10^−7    4.72×10^−1
 6     3.40×10^−6    2.11             6.38×10^−7    1.58
 8     1.07×10^−6    1.96             6.00×10^−7    2.07
10     2.30×10^−6    1.78             5.32×10^−7    4.52
12     2.07×10^−6    1.61             4.94×10^−7    4.21
14     1.81×10^−6    1.45             4.39×10^−7    5.4
16     9.54×10^−7    1.27             4.07×10^−7    6.11
18     7.32×10^−7    1.11             3.03×10^−7    6.66
20     1.37×10^−6    9.38×10^−1       2.69×10^−7    7.09
22     2.58×10^−7    8.43×10^−1       2.31×10^−7    5.15
24     1.37×10^−7    6.91×10^−1       1.17×10^−7    2.87
26     1.39×10^−7    5.36×10^−1       9.34×10^−8    3.88
28     7.40×10^−9    3.89×10^−1       7.40×10^−9    2.83


4.2. Statistical accuracy and numerical efficiency

To compare the statistical accuracy and the computational efficiency of the SWLRA algorithm and the STLNB algorithm on larger examples, we perform Monte-Carlo simulations. In Table I, the results for a 30×20 Hankel matrix are presented, whereas Table II contains the results for a 50×30 Hankel matrix. The Monte-Carlo simulations are performed as follows.


A Hankel matrix of the required rank, i.e. r = m − Δr, is constructed. This matrix is perturbed by a Hankel matrix that is constructed using a vector containing i.i.d. Gaussian noise of standard deviation 10^−5. Finally, the SWLRA and the STLNB algorithms are applied to this perturbed Hankel matrix. Each Monte-Carlo simulation consists of 100 runs, i.e. it considers 100 noisy realizations. The results presented are averaged over these runs and were obtained by coding both algorithms in Matlab (version 6.1) and running them on a PC i686 with 800 MHz and 256 MB memory.

From both tables we can conclude the following. The statistical accuracy of the SWLRA algorithm is worse than that of the STLNB algorithm for small Δr, but for large Δr the SWLRA algorithm performs best. From the computational point of view, the SWLRA algorithm outperforms the STLNB algorithm.

The behaviour of the computational time of both algorithms can be explained as follows. The number of flops for minimizing the cost function (Step 4) in algorithm SWLRA is dominated by a QR decomposition (NLLS solver) whose computational cost is of the order of 2m²(n − m/3) for an n×m matrix, so in our case of the order of 2(r+1)²(n + m − 1 − r − (r+1)/3). As a result, the number of flops decreases for decreasing r, or equivalently for increasing Δr. On the contrary, the cost of the STLNB algorithm is dominated by a QR factorization of an (n+m−1+nΔr) × (n+m−1+mΔr) matrix requiring 2(n+m−1+mΔr)²(n+m−1+nΔr − (n+m−1+mΔr)/3) flops, which results in an increasing number of flops for increasing Δr. However, in both algorithms the computational cost can be decreased significantly by exploiting the matrix structure in the computation of the QR decomposition, e.g. by using displacement rank theory [10, 11]. This is the subject of future work.
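The opposite trends are easy to see by tabulating the two flop estimates above, e.g. for the 50×30 case:

```python
# Flop estimates quoted above, evaluated for the 50 x 30 case
n, m = 50, 30
q = n + m - 1
for dr in (2, 10, 20, 28):
    r = m - dr
    swlra = 2 * (r + 1)**2 * (q - r - (r + 1) / 3)
    rows, cols = q + n * dr, q + m * dr
    stlnb = 2 * cols**2 * (rows - cols / 3)
    print(f"dr={dr:2d}  SWLRA ~{swlra:.2e} flops  STLNB ~{stlnb:.2e} flops")
```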

5. CONCLUSIONS

In this paper we have developed an extension of the weighted low rank approximation introduced by Manton et al. [9] to linearly structured matrices. For a particular type of structure, namely the Hankel structure, an algorithm was developed. The numerical accuracy of the latter algorithm was tested on a benchmark problem. Furthermore, the statistical accuracy and the computational efficiency were compared for the STLNB [8] and the SWLRA algorithms. For both properties, the SWLRA algorithm performs better than the STLNB algorithm for large Δr.

ACKNOWLEDGEMENTS

Prof. Sabine Van Huffel is a full professor, Dr. Philippe Lemmerling is a postdoctoral researcher of the FWO (Fund for Scientific Research – Flanders) and Mieke Schuermans is a research assistant at the Katholieke Universiteit Leuven, Belgium. Our research is supported by:

Research Council KUL: GOA-Mefisto 666, IDO/99/003 and IDO/02/009 (Predictive computer models for medical classification problems using patient data and expert knowledge), several PhD/postdoc & fellow grants;
Flemish Government: FWO: PhD/postdoc grants, projects G.0200.00 (damage detection in composites by optical fibres), G.0078.01 (structured matrices), G.0407.02 (support vector machines), G.0269.02 (magnetic resonance spectroscopic imaging), G.0270.02 (non-linear Lp approximation), research communities (ICCoS, ANMMM);
AWI: Bil. Int. Collaboration Hungary/Poland;
IWT: PhD Grants;


Belgian Federal Government: DWTC (IUAP IV-02 (1996–2001) and IUAP V-22 (2002–2006): Dynamical Systems and Control: Computation, Identification & Modelling);
EU: NICONET, INTERPRET, PDT-COIL, MRS/MRI signal processing (TMR);
Contract Research/agreements: Data4s, IPCOS.

REFERENCES

1. Van Huffel S, Vandewalle J. The total least squares problem: computational aspects and analysis. Frontiers in Applied Mathematics Series, vol. 9. SIAM: Philadelphia, 1991.

2. Golub GH, Van Loan CF. An analysis of the total least squares problem. SIAM Journal on Numerical Analysis 1980; 17:883–893.

3. Abatzoglou TJ, Mendel JM, Harada GA. The constrained total least squares technique and its applications to harmonic superresolution. IEEE Transactions on Signal Processing 1991; 39:1070–1086.

4. Abatzoglou TJ, Mendel JM. Constrained total least squares. IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, 1987; 1485–1488.

5. De Moor B. Total least squares for affinely structured matrices and the noisy realization problem. IEEE Transactions on Signal Processing 1994; 42:3104–3113.

6. Rosen JB, Park H, Glick J. Total least norm formulation and solution for structured problems. SIAM Journal on Matrix Analysis and Applications 1996; 17(1):110–126.

7. Chu MT, Funderlic RE, Plemmons RJ. Structured low rank approximation. Linear Algebra and its Applications 2003; 366:157–172.

8. Van Huffel S, Park H, Rosen JB. Formulation and solution of structured total least norm problems for parameter estimation. IEEE Transactions on Signal Processing 1996; 44(10):2464–2474.

9. Manton JH, Mahony R, Hua Y. The geometry of weighted low rank approximations. IEEE Transactions on Signal Processing 2003; 51(2):500–514.

10. Lemmerling P, Mastronardi N, Van Huffel S. Fast algorithm for solving the Hankel/Toeplitz structured total least squares problem. Numerical Algorithms 2000; 23:371–392.

11. Kailath T, Sayed AH (eds). Fast Reliable Algorithms for Matrices with Structure. SIAM: Philadelphia, 1999.
