

On Factorizations of Totally Positive Matrices

M. Gasca and J. M. Peña

Abstract. Different approaches to the decomposition of a nonsingular totally positive matrix as a product of bidiagonal matrices are studied. Special attention is paid to the interpretation of the factorization in terms of the Neville elimination process of the matrix and in terms of corner cutting algorithms of Computer Aided Geometric Design. Conditions of uniqueness for the decomposition are also given.

§1. Introduction

Totally positive matrices (TP matrices in the sequel) are real, nonnegative matrices all of whose minors are nonnegative. They have a long history and many applications (see the paper by Allan Pinkus in this volume for the early history and motivations) and have been studied mainly by researchers in those applications. In spite of their interesting algebraic properties, they have not yet received much attention from linear algebraists, including those working specifically on nonnegative matrices. One of the aims of the masterful survey [1] by T. Ando, which presents a very complete list of results on TP matrices up to 1986, was to attract this attention.

In some papers by M. Gasca and G. Mühlbach ([13] for example) on the connection between interpolation formulas and elimination techniques it became clear that what they called Neville elimination had special interest for TP matrices. It is a procedure to make zeros in a column of a matrix by adding to each row an appropriate multiple of the preceding one, and it had already been used in some of the first papers on TP matrices [33]. However, in recent years we have developed a better knowledge of the properties of Neville elimination, which has allowed us to improve many previous results on those matrices [14–22]. In this paper we shall use this elimination technique to get the factorization of a nonsingular TP matrix as a product of bidiagonal matrices. This provides a useful representation of such matrices which allows us to identify some important subclasses, as for example that of strictly totally positive matrices (that is, TP matrices whose minors are all positive). Under some conditions on the zero pattern of the bidiagonal matrices that representation is unique.

Total Positivity and its Applications
M. Gasca and C. A. Micchelli (eds.), pp. 1–3.
Copyright © 1995 by xxx. ISBN xxx.
All rights of reproduction in any form reserved.


A direct consequence of the well-known Cauchy-Binet identity for determinants is that the product of TP matrices is again a TP matrix. Consequently, one of the topics in the literature on TP matrices has been their decomposition as products of simpler TP matrices. In particular, in view of applications, the most interesting factorization seems to be in terms of bidiagonal nonnegative matrices, which obviously are always TP matrices. Let us give a brief overview of some of the different approaches to this question.

Square TP matrices of order n form a multiplicative semigroup S_n, and the nonsingular matrices of S_n form a semigroup s_n of the group of all real nonsingular square matrices of order n. In [31], Loewner used some notions from the theory of Lie groups, which we briefly recall, for the study of S_n and s_n. If U(t) is a differentiable matrix function of the real parameter t in an interval [0, t_0], representing for each t an element of s_n, and U(0) is the identity matrix I (which belongs to s_n), then the matrix (dU(t)/dt)_{t=0} is called an infinitesimal element of s_n. The first task in [31] was to prove that the set σ_n of all infinitesimal elements of s_n consists of the Jacobi (i.e., tridiagonal) matrices with nonnegative off-diagonal elements.

As in Lie-group theory, it can be shown that if Ω(t) (0 ≤ t ≤ t_0) is any one-parameter family of elements of σ_n which is piecewise continuous in t, the differential equation

dU(t)/dt = Ω(t) U(t)    (1.1)

has a unique continuous solution U(t) in s_n satisfying U(0) = I. In this case we say that U(t_0) is generated by the infinitesimal elements Ω(t) (0 ≤ t ≤ t_0). In general, a semigroup cannot be completely generated by its infinitesimal elements. However, Loewner proved in [31] that this is not the case for s_n. He used the following reformulation of a result due to Whitney [33].

Let E_{ij} (1 ≤ i, j ≤ n) be the n × n matrix with all elements zero with the exception of a one at the place (i, j), and denote F_{ij}(ω) = I + ωE_{ij}. Then every nonsingular TP matrix U can be written as a product

U = U_1 U_2 ··· U_{n−1} D V_1 V_2 ··· V_{n−1},    (1.2)

where, for i = 1, 2, . . . , n − 1,

U_i = F_{n,n−1}(ω^i_{n,n−1}) F_{n−1,n−2}(ω^i_{n−1,n−2}) ··· F_{i+1,i}(ω^i_{i+1,i}),    (1.3)

V_i = F_{n−i,n−i+1}(ω^i_{n−i,n−i+1}) F_{n−i+1,n−i+2}(ω^i_{n−i+1,n−i+2}) ··· F_{n−1,n}(ω^i_{n−1,n}),    (1.4)

with all the ω's nonnegative, and D a diagonal matrix with positive diagonal elements.

Observe that the matrices U_i and V_i are products of bidiagonal elementary totally positive matrices, but neither U_i nor V_i is bidiagonal: (1.2) uses n(n − 1) bidiagonal factors.


The conclusion of [31] is that, by using infinitesimal generators, any nonsingular TP matrix of order n can be generated from the identity by the solutions of the differential equation (1.1).

In 1979 Frydman and Singer ([12], Theorem 1) showed that the class of transition matrices for the finite state time-inhomogeneous birth and death processes coincides with the class of nonsingular TP stochastic matrices. This result was based upon a factorization of nonsingular TP stochastic matrices P in terms of bidiagonal matrices ([12], Theorem 1'), similar to (1.2) but without the diagonal matrix D:

P = U_1 U_2 ··· U_{n−1} V_1 V_2 ··· V_{n−1},    (1.5)

and with the elementary matrices F scaled to be stochastic. As in (1.2), the matrices U_i, V_i are not bidiagonal and (1.5) contains n(n − 1) bidiagonal factors. The fact that those transition matrices for birth and death processes are totally positive had been pointed out in 1959 by Karlin and McGregor [27, 28] with probabilistic arguments. All these results were surveyed in 1986 by G. Goodman [23], who extended them to compound matrices, that is, matrices whose elements are the values of the minors of a certain order m of a given matrix A.

In [9], Remark 4.1, Cryer pointed out that a matrix A is TP iff it can be written in the form

A = \prod_{r=1}^{N} L_r \prod_{s=1}^{M} U_s,

where each L_r (resp. U_s) is a TP bidiagonal lower (upper) triangular matrix. In that remark the author did not give any relation between N, M, and the order n of A.

On the other hand, factorizations of TP matrices as products of bidiagonal matrices are important in Computer Aided Geometric Design and, in particular, in corner cutting algorithms. In [24], Goodman and Micchelli showed that the existence of a corner cutting algorithm transforming a control polygon of a curve into another one with the same number of vertices was equivalent to the fact that both polygons were related by a nonsingular stochastic matrix. The key tool to obtain this result was again (see [24], Theorem 1) the characterization of a nonsingular TP stochastic matrix of order n as the product of n − 1 bidiagonal lower triangular stochastic matrices by another n − 1 matrices which are bidiagonal upper triangular, with a zero pattern which will be made precise in Sections 3 and 4. Observe that in this case the factorization is formed by 2n − 2 bidiagonal matrices; compare with (1.5). What has happened is that the set of all elementary matrices which appeared in (1.5), after replacing the factors U_i, V_i by their corresponding decompositions (1.3), (1.4), has been reordered, as we shall see in Section 3, to give rise to a small number of bidiagonal (in general nonelementary) matrices. In [32], Theorem 3.1, Micchelli and Pinkus obtained a factorization theorem for rectangular TP matrices as a product of bidiagonal matrices in order to extend the previous interpretation to general corner cutting algorithms. For more details related to this matter, and for concrete factorizations associated with corner cutting algorithms applied to the Bézier polygon, see [25], [26], [5] and [6]. The use of Neville elimination was crucial to prove the optimality results obtained in these last two papers. See also the paper by Carnicer and Peña on optimal bases in this volume.

As we shall see in the following sections, in [19] we proved the uniqueness of factorizations of the type just mentioned for nonsingular TP matrices, under certain conditions on the zero pattern of the bidiagonal factors. In [20], we interpreted these last results in terms of corner cutting algorithms.

Finally, factorizations of biinfinite TP matrices in terms of bidiagonal matrices also have important applications. Motivated mainly by some problems of Approximation Theory, Cavaretta et al. proved in [7], Theorem 1, the existence of such a factorization for biinfinite strictly m-banded TP matrices. In [4], Theorem B, de Boor and Pinkus proved that every finite nonsingular TP m-banded matrix is the product of m TP bidiagonal matrices and, in Theorem C of the same paper, they extended this result to biinfinite m-banded TP matrices with linearly independent rows and columns. Recently, Dahmen and Micchelli [10] have obtained factorization results in terms of bidiagonal matrices for biinfinite TP matrices with a special structure of zeros (which they called 2-slanted matrices), and they applied these results to multiresolution analysis and wavelet theory. This study has been extended in [21] to p-slanted matrices (p ≥ 2), and some applications of these results to spline interpolation and locally finite decompositions of spline spaces have also been provided.

This paper is organized in the following way. In Section 2 we describe carefully Neville elimination for nonsingular matrices and study the properties of the bidiagonal elementary matrices associated to it. We pay special attention to the nonsingular matrices which can be transformed into diagonal form by Neville elimination without permutations of rows or columns. This property is referred to as the WRC condition. Section 3 is devoted to the decomposition of matrices which satisfy the WRC condition as products of bidiagonal matrices with a prescribed zero pattern in the places (i, i − 1) for the bidiagonal lower triangular matrices and in the places (i, i + 1) for the bidiagonal upper triangular ones. Since any nonsingular TP matrix A satisfies the WRC condition, in Section 4 we apply to these matrices the results of the preceding section and, depending on the choice of the zero pattern of the bidiagonal matrices, we arrive at 16 different factorizations (some of them coincident for some special classes of TP matrices). Each of these decompositions is unique under the prescribed conditions and has a different interpretation with respect to the Neville elimination process of the matrix A. To our knowledge, the uniqueness of the different factorizations, which is a consequence of the uniqueness of the elimination process, is a novelty in this type of results. In the last section we relate the previously studied factorizations of A with other characterizations of nonsingular TP matrices. In particular we discuss determinantal characterizations, that is, necessary and sufficient conditions on the signs of a small number of minors of a matrix in order to classify it as a TP matrix or as a matrix belonging to some special classes of TP matrices. We also mention some of the characterizations of a nonsingular TP matrix by its LU decomposition or, as we recently showed in [16], by its QR decomposition.

§2. Neville elimination

The following notation will be convenient. For k, n ∈ ℕ, 1 ≤ k ≤ n, Q_{k,n} will denote the set of all increasing sequences of k natural numbers not greater than n. For α = (α_1, α_2, . . . , α_k), β = (β_1, β_2, . . . , β_k) ∈ Q_{k,n} and A an n × n real matrix, we denote by A[α|β] the k × k submatrix of A containing rows α_1, . . . , α_k and columns β_1, . . . , β_k of A. Q^0_{k,n} will denote the set of increasing sequences of k consecutive natural numbers not greater than n.

Neville elimination was precisely described for general finite matrices in [14]. Here, following [22], we restrict ourselves to the case of nonsingular matrices. For a nonsingular matrix A of order n this elimination procedure consists of n − 1 successive steps, resulting in a sequence of matrices as follows:

A = Ã^{(1)} → A^{(1)} → Ã^{(2)} → A^{(2)} → ··· → Ã^{(n)} = A^{(n)} = U,

where U is an upper triangular matrix. For each t, 1 ≤ t ≤ n, the matrix A^{(t)} = (a^{(t)}_{ij})_{1≤i,j≤n} has zeros below its main diagonal in the first t − 1 columns, and also one has

a^{(t)}_{it} = 0, i ≥ t ⇒ a^{(t)}_{ht} = 0 ∀ h ≥ i.    (2.1)

A^{(t)} is obtained from the matrix Ã^{(t)} by moving to the bottom the rows with a zero entry in the column t, if necessary, in order to get (2.1). The rows are moved and placed with the same relative order as they appear in Ã^{(t)}. To get Ã^{(t+1)} from A^{(t)} we produce zeros in the column t below the main diagonal by subtracting a multiple of the ith row from the (i+1)th for i = n − 1, n − 2, . . . , t, according to the formula

ã^{(t+1)}_{ij} = a^{(t)}_{ij}   if i ≤ t,
ã^{(t+1)}_{ij} = a^{(t)}_{ij} − (a^{(t)}_{it}/a^{(t)}_{i−1,t}) a^{(t)}_{i−1,j}   if i ≥ t + 1 and a^{(t)}_{i−1,t} ≠ 0,
ã^{(t+1)}_{ij} = a^{(t)}_{ij}   if i ≥ t + 1 and a^{(t)}_{i−1,t} = 0.    (2.2)

Observe that in the third case a^{(t)}_{i−1,t} = 0 implies that a^{(t)}_{it} = 0. In this process one has Ã^{(n)} = A^{(n)} = U, and when no row exchanges are needed, then Ã^{(t)} = A^{(t)} for all t. This happens, for example, when

det A[α|1, 2, . . . , t] ≠ 0, 1 ≤ t ≤ n, ∀ α ∈ Q^0_{t,n}    (2.3)

(see [14], Lemma 2.6) or when A is a nonsingular totally positive matrix ([14], Corollary 5.5).

The element

p_{ij} = a^{(j)}_{ij}, 1 ≤ j ≤ i ≤ n,    (2.4)

is called the (i, j) pivot of the Neville elimination of A, and the number

m_{ij} = a^{(j)}_{ij}/a^{(j)}_{i−1,j} (= p_{ij}/p_{i−1,j})   if a^{(j)}_{i−1,j} ≠ 0,
m_{ij} = 0   if a^{(j)}_{i−1,j} = 0 (in which case a^{(j)}_{ij} = 0),    (2.5)

the (i, j) multiplier. Observe that m_{ij} = 0 if and only if p_{ij} = 0 and that, by (2.1),

m_{ij} = 0 ⇒ m_{hj} = 0 ∀ h > i.    (2.6)

The complete Neville elimination of a matrix A consists of performing the Neville elimination of A to get U as above and then proceeding with the Neville elimination of U^T, the transpose of U (or, equivalently, the Neville elimination of U by columns). The (i, j) pivot (resp., multiplier) of the complete Neville elimination of A is that of the Neville elimination of A if i ≥ j, and the (j, i) pivot (resp. multiplier) of the Neville elimination of U^T if j ≥ i.

Consider in more detail the case of a nonsingular matrix A whose Neville elimination can be performed without row exchanges. As in [19], such matrices will be referred to as matrices satisfying the WR condition. In this case, the elimination process can be described matricially by elementary matrices, without using permutation matrices. To this end, we denote by E_i(α) (2 ≤ i ≤ n) the lower triangular, bidiagonal matrix whose element (r, s), 1 ≤ r, s ≤ n, is given by 1 if r = s, α if (r, s) = (i, i − 1), and 0 elsewhere, that is,

E_i(α) := \begin{pmatrix}
1 & & & & & \\
 & \ddots & & & & \\
 & & 1 & & & \\
 & & α & 1 & & \\
 & & & & \ddots & \\
 & & & & & 1
\end{pmatrix}.    (2.7)

Observe that

E_i(α)^{-1} = E_i(−α),
E_i(α) E_i(β) = E_i(α + β),
E_i(α) E_j(β) = E_j(β) E_i(α) except for |i − j| = 1 with αβ ≠ 0.    (2.8)
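As a quick illustration (ours, not part of the paper), the matrices E_i(α) and the relations (2.8) can be checked numerically; the helper name E below is an assumption of this sketch.

import numpy as np

def E(n, i, alpha):
    """E_i(alpha) of order n (1-based i, 2 <= i <= n): the identity matrix
    with alpha placed in position (i, i-1)."""
    M = np.eye(n)
    M[i - 1, i - 2] = alpha
    return M

n, a, b = 5, 0.7, -1.3
assert np.allclose(np.linalg.inv(E(n, 3, a)), E(n, 3, -a))                # inverses
assert np.allclose(E(n, 3, a) @ E(n, 3, b), E(n, 3, a + b))               # same index: parameters add
assert np.allclose(E(n, 2, a) @ E(n, 5, b), E(n, 5, b) @ E(n, 2, a))      # |i - j| > 1: commute
assert not np.allclose(E(n, 2, a) @ E(n, 3, b), E(n, 3, b) @ E(n, 2, a))  # |i - j| = 1: do not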


For a matrix A satisfying the WR condition, the Neville elimination process can be written

E_n(−m_{n,n−1}) ··· (E_3(−m_{32}) ··· E_n(−m_{n2})) · (E_2(−m_{21}) ··· E_{n−1}(−m_{n−1,1}) E_n(−m_{n1})) A = U,    (2.9)

where U is a nonsingular upper triangular matrix and the m_{ij}'s are the multipliers (2.5) satisfying (2.6).

It is easily seen that

E_i(m_{ij}) E_{i+1}(m_{i+1,j}) ··· E_n(m_{nj}) = \begin{pmatrix}
1 & & & & & & \\
0 & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 0 & 1 & & & \\
 & & & m_{i,j} & 1 & & \\
 & & & & \ddots & \ddots & \\
 & & & & & m_{n,j} & 1
\end{pmatrix}    (2.10)

is a bidiagonal, lower triangular matrix whose entries (2, 1), . . . , (i − 1, i − 2) are zero. On the contrary, the product E_n(m_{nj}) ··· E_{i+1}(m_{i+1,j}) E_i(m_{ij}) is a lower triangular matrix that, in general, is not bidiagonal. This last ordering was used in [31] and (1.2)–(1.4), and was the cause of the high number of bidiagonal factors in (1.2). Consequently (2.9) can be written in the form

F_{n−1} F_{n−2} ··· F_1 A = U    (2.11)

with

F_i = \begin{pmatrix}
1 & & & & & & \\
0 & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 0 & 1 & & & \\
 & & & −m_{i+1,i} & 1 & & \\
 & & & & \ddots & \ddots & \\
 & & & & & −m_{n,i} & 1
\end{pmatrix}.    (2.12)

From (2.9) we get the factorization of A

A = (E_n(m_{n1}) E_{n−1}(m_{n−1,1}) ··· E_2(m_{21})) · (E_n(m_{n2}) ··· E_3(m_{32})) ··· E_n(m_{n,n−1}) U.    (2.13)

Observe that the diagonal entries u_{ii} of U are the pivots p_{ii} given by (2.4). A careful and technical discussion of how the zero pattern of the matrix U is modified by the successive factors E_i of the right-hand side of (2.13) shows the following result ([19], Theorem 2.2):


Theorem 2.1. A nonsingular n × n matrix A satisfies the WR condition if and only if it can be factorized in the form (2.13) with the m_{ij}'s satisfying (2.6). If A satisfies that condition, the factorization is unique and m_{ij} is the (i, j) multiplier of the Neville elimination of A.

When no row exchanges are needed in the Neville elimination of A and U^T, we say that the process is possible without row or column exchanges and, for brevity, that A satisfies the WRC condition. In this case, according to Theorem 2.1, A can be written in the form

A = (E_n(m_{n1}) E_{n−1}(m_{n−1,1}) ··· E_2(m_{21})) · (E_n(m_{n2}) ··· E_3(m_{32})) ···
    · (E_n(m_{n,n−1})) D (E_n^T(m'_{n,n−1})) (E_{n−1}^T(m'_{n−1,n−2}) E_n^T(m'_{n,n−2})) ···
    · (E_2^T(m'_{21}) ··· E_{n−1}^T(m'_{n−1,1}) E_n^T(m'_{n1}))    (2.14)

with m_{ij} and m'_{ij} satisfying (2.6) and D a diagonal matrix. Moreover this representation is unique, and m_{ij} (respectively m'_{ij}) is the (i, j) multiplier of the Neville elimination of A (resp. U^T). Here E_i^T denotes the transpose of E_i.

By transposition of the right-hand side of (2.14) we deduce that A^T satisfies the WRC condition too and that the multipliers of the Neville elimination of A^T are those of U^T. Therefore, if we denote

m_{ij} = m'_{ji} for all i < j,    (2.15)

we can say that, for any (i, j), i ≠ j, m_{ij} is the (i, j) multiplier of the complete Neville elimination of A, in the sense that, if i > j, it is the multiplier of the Neville elimination of A and, if i < j, it is the (j, i) multiplier of the Neville elimination of A^T (or, equivalently, of U^T). On the contrary, what we have called pivots are, in general, different for the Neville elimination of U^T and A^T. However, due to the fact that U^T is lower triangular, the (i, i) entry of D in (2.14) is the same as that of U^T, and both are the (i, i) pivot of the Neville elimination of A (and of A^T).

All these questions can be summarized in the following:

Theorem 2.2. A nonsingular n × n matrix A satisfies the WRC condition if and only if it can be factorized in the form

A = (E_n(m_{n1}) E_{n−1}(m_{n−1,1}) ··· E_2(m_{21})) · (E_n(m_{n2}) ··· E_3(m_{32})) ···
    · (E_n(m_{n,n−1})) D (E_n^T(m_{n−1,n})) (E_{n−1}^T(m_{n−2,n−1}) E_n^T(m_{n−2,n})) ···
    · (E_2^T(m_{12}) ··· E_{n−1}^T(m_{1,n−1}) E_n^T(m_{1n}))    (2.16)

with D a diagonal matrix and the m_{ij}'s satisfying

m_{ij} = 0 ⇒ m_{hj} = 0 ∀ h > i, if i > j,
m_{ij} = 0 ⇒ m_{ik} = 0 ∀ k > j, if i < j.    (2.17)

Moreover, under this condition the factorization is unique, m_{ij} is the (i, j) multiplier of the complete Neville elimination of A, and the (i, i) entry of D is the (i, i) pivot of the Neville elimination of A.
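To make the grouping of factors in (2.16) concrete, the following sketch (our own illustration; the helper names are assumptions, not notation of the paper) rebuilds a matrix from the multipliers m_{ij} of its complete Neville elimination and from the diagonal matrix D.

import numpy as np

def E(n, i, alpha):
    """E_i(alpha): identity of order n with alpha in position (i, i-1) (1-based)."""
    M = np.eye(n)
    M[i - 1, i - 2] = alpha
    return M

def assemble_from_multipliers(m, d):
    """Product (2.16): lower elementary factors grouped by columns j = 1, ..., n-1,
    then D = diag(d), then transposed factors grouped by rows r = n-1, ..., 1.
    m[i-1, j-1] is the (i, j) multiplier of the complete Neville elimination."""
    n = len(d)
    A = np.eye(n)
    for j in range(1, n):
        for i in range(n, j, -1):          # E_n(m_{n,j}) ... E_{j+1}(m_{j+1,j})
            A = A @ E(n, i, m[i - 1, j - 1])
    A = A @ np.diag(d)
    for r in range(n - 1, 0, -1):
        for i in range(r + 1, n + 1):      # E_{r+1}^T(m_{r,r+1}) ... E_n^T(m_{r,n})
            A = A @ E(n, i, m[r - 1, i - 1]).T
    return A

Combined with the complete_neville sketch given above, assemble_from_multipliers(m, d) reproduces A up to rounding, which is one way to check Theorem 2.2 on small examples.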


§3. Factorization of a matrix as a product of bidiagonal matrices

In this section we only consider nonsingular matrices satisfying the WRC condition. We shall see in Section 4 that nonsingular TP matrices satisfy this condition.

As we have seen in the reasoning leading to (2.11), the factor (E_n(m_{n1}) E_{n−1}(m_{n−1,1}) ··· E_2(m_{21})) in the right-hand side of (2.16) is not, in general, a bidiagonal matrix. With the aim of finding a decomposition of A as a product of a small number of bidiagonal matrices, we can reorder the elementary factors of (2.16). By (2.8) we have, for example,

E_n(m_{n1}) E_{n−1}(m_{n−1,1}) ··· E_2(m_{21}) E_n(m_{n2}) = E_n(m_{n1}) E_{n−1}(m_{n−1,1}) E_n(m_{n2}) ··· E_2(m_{21}).

So, we can go on to write (2.16) in the form

A = (E_n(m_{n1})) · (E_{n−1}(m_{n−1,1}) E_n(m_{n2})) · (E_{n−2}(m_{n−2,1}) E_{n−1}(m_{n−1,2}) E_n(m_{n3})) ···
    · (E_2(m_{21}) E_3(m_{32}) ··· E_n(m_{n,n−1})) D (E_n^T(m_{n−1,n}) ··· E_3^T(m_{23}) E_2^T(m_{12})) ···
    · (E_n^T(m_{2n}) E_{n−1}^T(m_{1,n−1})) · (E_n^T(m_{1n}))    (3.1)

and, by (2.10),

A = F_{n−1} F_{n−2} ··· F_1 D G_1 ··· G_{n−2} G_{n−1},    (3.2)

with

F_i = \begin{pmatrix}
1 & & & & & & \\
0 & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 0 & 1 & & & \\
 & & & m_{i+1,1} & 1 & & \\
 & & & & \ddots & \ddots & \\
 & & & & & m_{n,n−i} & 1
\end{pmatrix}    (3.3)

and

G_i = \begin{pmatrix}
1 & 0 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 1 & 0 & & & \\
 & & & 1 & m_{1,i+1} & & \\
 & & & & \ddots & \ddots & \\
 & & & & & 1 & m_{n−i,n} \\
 & & & & & & 1
\end{pmatrix}.    (3.4)

Observe that in Section 2 we have stated the uniqueness of the decomposition (2.16) under the condition (2.17). Since (3.1) has been obtained from (2.16) by a completely determined reordering, (3.1) is also unique, and so is (3.2). The element (h + 1, h), h ≥ i, of F_i in (3.2) is the (h + 1, h + 1 − i) multiplier of the Neville elimination of A, and a similar interpretation can be given to the matrices G_i with respect to A^T. Therefore, (2.17) means that if the element (h + 1, h) of F_i is zero, then the element (h + 1 + k, h + k) of F_{i+k}, for k = 1, . . . , n − i, is also zero, and a similar condition holds for the G's. However, we shall see different factorizations of the matrix A as a product of bidiagonal matrices under other conditions. One of these factorizations is based upon [19], Theorem 2.6, giving rise to the following result (which appears in [19] as Theorem 3.3):
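The bidiagonal factors (3.3) and (3.4) are straightforward to build from the multipliers. Below is a small sketch (ours; the function names are illustrative, and m is assumed to be the full matrix of complete Neville elimination multipliers, with the diagonal pivots in d).

import numpy as np

def F_factor(n, i, m):
    """F_i of (3.3): unit lower bidiagonal with m_{r, r-i} in position (r, r-1)
    for r = i+1, ..., n, and zeros in the first i-1 subdiagonal positions."""
    M = np.eye(n)
    for r in range(i + 1, n + 1):
        M[r - 1, r - 2] = m[r - 1, r - i - 1]
    return M

def G_factor(n, i, m):
    """G_i of (3.4): unit upper bidiagonal with m_{r-i+1, r+1} in position (r, r+1)
    for r = i, ..., n-1, and zeros in the first i-1 superdiagonal positions."""
    M = np.eye(n)
    for r in range(i, n):
        M[r - 1, r] = m[r - i, r]
    return M

def assemble_3_2(m, d):
    """Product (3.2): A = F_{n-1} ... F_1 D G_1 ... G_{n-1}."""
    n = len(d)
    A = np.eye(n)
    for i in range(n - 1, 0, -1):
        A = A @ F_factor(n, i, m)
    A = A @ np.diag(d)
    for i in range(1, n):
        A = A @ G_factor(n, i, m)
    return A

With the multipliers and diagonal pivots of the complete Neville elimination of a matrix satisfying the WRC condition, assemble_3_2(m, d) returns that matrix; the example of Section 4 below can be checked this way.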

Theorem 3.1. Let A be a nonsingular matrix which can be factorized in the form A = LDV with L (resp. V) lower (upper) triangular, unit diagonal, and D a diagonal matrix. Then A satisfies the WRC condition if and only if the matrix B = L^{-1}CV^{-1}, with C a diagonal matrix, satisfies the same condition. In the affirmative case, the multipliers of the Neville elimination of A (resp. A^T) are the negatives of those of B (resp. B^T), but in general they occur in a different order.

Let A be a nonsingular matrix satisfying the WRC condition. From Theorems 2.1 or 2.2 we deduce that A can be written in the form A = LDV, and by Theorem 3.1 the matrix L^{-1} satisfies the WRC condition. From Theorem 2.2 we get

L^{-1} = (E_n(µ_{n,1}) E_{n−1}(µ_{n−1,1}) ··· E_2(µ_{2,1})) · (E_n(µ_{n,2}) ··· E_3(µ_{3,2})) ··· (E_n(µ_{n,n−1}))    (3.5)

with µ_{ij} the (i, j) multiplier of the Neville elimination of L^{-1}, satisfying

µ_{ij} = 0 ⇒ µ_{hj} = 0 ∀ h > i.    (3.6)

Consequently one has

L = (E_n(−µ_{n,n−1})) ··· (E_3(−µ_{3,2}) ··· E_n(−µ_{n,2})) · (E_2(−µ_{2,1}) ··· E_{n−1}(−µ_{n−1,1}) E_n(−µ_{n,1})),    (3.7)

that is,

L = H_{n−1} H_{n−2} ··· H_1    (3.8)

with

H_i = \begin{pmatrix}
1 & & & & & & \\
0 & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 0 & 1 & & & \\
 & & & −µ_{i+1,i} & 1 & & \\
 & & & & \ddots & \ddots & \\
 & & & & & −µ_{n,i} & 1
\end{pmatrix}    (3.9)

and the µ's satisfying (3.6).

Let us remark that the µ's are the multipliers of the Neville elimination of L^{-1}. If we want to interpret them in terms of the elimination of L, we have to reorder the factors of the right-hand side of (3.7) as in (2.13), keeping in mind the condition (2.6). As is seen in the proof of Theorem 2.6 of [19], this can be done as at the beginning of this section, with the reordering which has led to (3.1). If all the µ's are different from zero, the condition similar to (2.6) for the new ordering still holds. If some of the µ's are zero, a more complicated reordering, which is explained in [19], must be done. In summary, the numbers −µ_{i,j} which appear in the subdiagonals of the matrices H_i are the multipliers of the Neville elimination of L, but in general they occur in an order different from that of the elimination process of L.

Similar reasoning can be applied to the upper triangular factor V of the matrix A, to get a factorization of A of the form

A = H_{n−1} H_{n−2} ··· H_1 D K_1 ··· K_{n−2} K_{n−1}    (3.10)

with

H_i = \begin{pmatrix}
1 & & & & & & \\
0 & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 0 & 1 & & & \\
 & & & α_{i+1,i} & 1 & & \\
 & & & & \ddots & \ddots & \\
 & & & & & α_{n,i} & 1
\end{pmatrix},    (3.11)

K_i = \begin{pmatrix}
1 & 0 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 1 & 0 & & & \\
 & & & 1 & α_{i,i+1} & & \\
 & & & & \ddots & \ddots & \\
 & & & & & 1 & α_{i,n} \\
 & & & & & & 1
\end{pmatrix},    (3.12)

and the α's satisfying

α_{i,j} = 0 ⇒ α_{h,j} = 0 ∀ h > i, if i > j,
α_{i,j} = 0 ⇒ α_{i,k} = 0 ∀ k > j, if i < j.    (3.13)

Observe that Theorem 3.1 means that the computational cost of the Neville elimination of a nonsingular matrix A which satisfies the WRC condition is the same as that of the matrices B whose lower triangular factor is the inverse of that of A. This means that if the lower triangular factor L of A is the inverse of a bidiagonal lower triangular matrix (that is, if L^{-1} is bidiagonal), the computational cost of the Neville elimination of A is very low (that of L^{-1}). This property does not hold for Gauss elimination, and consequently for these matrices Neville elimination is more efficient.


§4. Totally positive matrices

In [19], Corollary 5.5, it was proved that a nonsingular matrix A is totally positive if and only if there are no row or column exchanges in the complete Neville elimination of A and all the pivots are nonnegative. Furthermore, it is clear that the diagonal pivots must be different from zero. With the notations of Section 2, and taking into account that, according to (2.5), the multipliers of the elimination process are quotients of pivots, we can reformulate this result as:

Theorem 4.1. A nonsingular n × n matrix A is totally positive if and only if it can be factorized in the form (2.16) with D a diagonal matrix with positive diagonal entries and the m_{ij}'s nonnegative numbers satisfying (2.17). Moreover, under this condition the factorization is unique, m_{ij} is the (i, j) multiplier of the complete Neville elimination of A, and the (i, i) entry of D is the (i, i) pivot of the Neville elimination of A.

Analogously, following Section 3, a nonsingular TP matrix can be expressed in two different forms as a product of bidiagonal matrices.

Theorem 4.2. A nonsingular n × n matrix A is totally positive if and only if it can be factorized in the form (3.2) (respectively (3.10)) with D a diagonal matrix with positive diagonal entries, F_i, G_i as in (3.3), (3.4) (resp. H_i, K_i as in (3.11), (3.12)), and the m_{ij}'s (α_{ij}'s) nonnegative numbers satisfying (2.17) ((3.13)). Under this condition, the factorization is unique.

Since (3.2) comes from a simple reordering of (2.16), the interpretation of the numbers m_{ij} is the same as in Theorem 4.1. The α_{ij}'s are the same numbers, in general in a different ordering. However, when all the multipliers m_{ij} are different from zero, (2.17) and (3.13) impose no restriction and, by uniqueness, both factorizations coincide.

Example: The matrix

A = \begin{pmatrix}
1 & 1 & 3 & 6 & 0 \\
1 & 2 & 7 & 15 & 2 \\
3 & 6 & 23 & 52 & 13 \\
0 & 0 & 4 & 15 & 17 \\
0 & 0 & 2 & 10 & 24
\end{pmatrix}

is totally positive because its complete Neville elimination can be carried out without row or column exchanges, with multipliers m_{51} = m_{41} = 0, m_{31} = 3, m_{21} = 1, m_{52} = m_{42} = m_{32} = 0, m_{53} = 1/2, m_{43} = 2, m_{54} = 5/2, m_{15} = 0, m_{14} = 2, m_{13} = 3, m_{12} = 1, m_{25} = 2, m_{24} = m_{23} = 1, m_{35} = 1, m_{34} = 1/2, m_{45} = 0, and diagonal pivots p_{11} = p_{22} = p_{44} = 1, p_{33} = 2, p_{55} = 8. Its factorization (3.2) is

A = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 3 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1/2 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 2 & 1 & 0 \\
0 & 0 & 0 & 5/2 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 8
\end{pmatrix}
\begin{pmatrix}
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1/2 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 3 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 2 & 0 \\
0 & 0 & 0 & 1 & 2 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.

In order to get the factorization (3.10), the first, third and fourth factors above must be replaced, respectively, by

\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1/2 & 1
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 3 & 1 & 0 & 0 \\
0 & 0 & 2 & 1 & 0 \\
0 & 0 & 0 & 5/2 & 1
\end{pmatrix}
and
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.
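As a check of the example (our own verification code, not part of the paper), the nine factors of (3.2) displayed above can be multiplied numerically; lower and upper are assumed helper names that build unit bidiagonal matrices from a given sub- or superdiagonal.

import numpy as np

def lower(sub):   # unit lower bidiagonal matrix with the given subdiagonal
    return np.eye(len(sub) + 1) + np.diag(sub, -1)

def upper(sup):   # unit upper bidiagonal matrix with the given superdiagonal
    return np.eye(len(sup) + 1) + np.diag(sup, 1)

A = np.array([[1, 1, 3, 6, 0],
              [1, 2, 7, 15, 2],
              [3, 6, 23, 52, 13],
              [0, 0, 4, 15, 17],
              [0, 0, 2, 10, 24]], dtype=float)

F4 = lower([0, 0, 0, 0])
F3 = lower([0, 0, 0, 0])
F2 = lower([0, 3, 0, 1/2])
F1 = lower([1, 0, 2, 5/2])
D  = np.diag([1, 1, 2, 1, 8.0])
G1 = upper([1, 1, 1/2, 0])
G2 = upper([0, 3, 1, 1])
G3 = upper([0, 0, 2, 2])
G4 = upper([0, 0, 0, 0])

assert np.allclose(F4 @ F3 @ F2 @ F1 @ D @ G1 @ G2 @ G3 @ G4, A)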

As recalled in Section 1, a matrix is said to be strictly totally positive (STP for brevity) if all its minors are positive. In [19], Theorem 4.1, it was proved that a square matrix is STP if and only if it satisfies the WRC condition and the multipliers and diagonal pivots of its complete Neville elimination are all positive. This can be expressed in a theorem similar to Theorem 4.2:

Theorem 4.3. A nonsingular n × n matrix A is strictly totally positive if and only if it can be factorized in the form (3.2) with D a diagonal matrix with positive diagonal entries, F_i, G_i as in (3.3), (3.4), and the m_{ij}'s positive numbers. This factorization is unique.

Observe that the condition (2.17) imposes no restriction here and that the factorizations (3.2) and (3.10) now coincide.

An important class of TP matrices is that of almost strictly totally positive matrices (see [15]). A nonsingular TP matrix A of order n is called almost strictly totally positive (ASTP for brevity) if it satisfies the following property: for any α, β ∈ Q^0_{k,n},

det A[α|β] > 0 ⟺ a_{α_h β_h} > 0, h = 1, 2, . . . , k.    (4.1)

In other words, a minor of A formed from consecutive rows and columns is positive if and only if all its diagonal entries are positive. It was proved in [15] that if A is ASTP then (4.1) holds for any α, β ∈ Q_{k,n} and, consequently, for this type of matrices we know exactly which minors are positive and which ones are zero. Important examples of ASTP matrices are the collocation matrices of B-splines (see [2]) and Hurwitz matrices (see [30]). Obviously, STP matrices form a subclass of ASTP matrices. The following theorem ([22], Theorem 4.1 and Remark 4.2) gives a characterization of ASTP matrices in terms of their factorizations:

Theorem 4.4. A nonsingular n × n matrix A is ASTP if and only if it can be factorized in the form (3.2) with D a diagonal matrix with positive diagonal entries, F_i, G_i as in (3.3), (3.4), and with m_{ij} ≥ 0 satisfying

m_{ij} = 0 and i > j ⇒ m_{rs} = 0 ∀ (r, s) with r ≥ i, s ≤ j,
m_{ij} = 0 and i < j ⇒ m_{rs} = 0 ∀ (r, s) with r ≤ i, s ≥ j.    (4.2)

Let us observe that (2.17) is contained in (4.2) and therefore the factorization is unique. Moreover, since (3.13) is also contained in (4.2), the factorizations (3.2) and (3.10) coincide for ASTP matrices.

Remark. A signature sequence of order n is a sequence ε = (ε_i)_{1≤i≤n} of real numbers with |ε_i| = 1 for all i. An n × m matrix A is called sign-regular with signature ε if ε is a signature sequence of order h = min{n, m} such that, for k = 1, 2, . . . , h,

ε_k det A[α|β] ≥ 0    (4.3)

for any α ∈ Q_{k,n}, β ∈ Q_{k,m}. A is called strictly sign-regular with signature ε if ≥ is replaced by > in (4.3). TP and STP matrices are examples of sign-regular and strictly sign-regular matrices, respectively, with ε_i = 1 for all i. Neville elimination provides an algorithmic characterization of strictly sign-regular matrices, as can be found in [17], and also a factorization of them as a product of bidiagonal matrices. In the same paper, we gave the algorithms corresponding to Theorems 4.1 and 4.3 above to check the total positivity or strict total positivity of a matrix via Neville elimination.

Some other factorizations of nonsingular totally positive matrices of order n as a product of 2n − 2 bidiagonal matrices by a diagonal matrix can be considered. The converse A^# of a matrix A = (a_{ij})_{0≤i,j≤n} is the matrix whose (i, j) entry (0 ≤ i, j ≤ n) is a_{n−i,n−j} (see [1]). It is easy to prove that (AB)^# = A^# B^#. According to Theorem 4.2, if A is a nonsingular TP matrix, then the upper triangular matrix K = K_1 ··· K_{n−2} K_{n−1} of (3.10) is TP. Consequently its converse K^# is lower triangular and TP, and again by Theorem 4.2 it can be decomposed as in (3.10):

K^# = H_{n−1} H_{n−2} ··· H_1,    (4.4)

where the matrices H_i have the form (3.11) with the bidiagonal elements satisfying (3.13). If we denote U_i := H_i^#, one has

K = U_{n−1} U_{n−2} ··· U_1.
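The converse operation used above simply reverses the ordering of both the rows and the columns, so in code it is a double flip; the small check below (ours, with an assumed function name) illustrates the identity (AB)^# = A^# B^#.

import numpy as np

def converse(A):
    """Converse A# of a square matrix: reverse the order of the rows and of the
    columns, so the (i, j) entry of A# is the (n-i, n-j) entry of A (indices 0..n)."""
    return A[::-1, ::-1]

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
assert np.allclose(converse(A @ B), converse(A) @ converse(B))

In particular, the converse of an upper triangular matrix is lower triangular, which is what is used for K^# above.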


Therefore, in (3.10), we get

A = H_{n−1} H_{n−2} ··· H_1 D U_{n−1} U_{n−2} ··· U_1,    (4.5)

with H_i given by (3.11) and

U_i = \begin{pmatrix}
1 & β_{i1} & & & & & \\
 & 1 & β_{i2} & & & & \\
 & & \ddots & \ddots & & & \\
 & & & 1 & β_{i,n−i} & & \\
 & & & & 1 & 0 & \\
 & & & & & \ddots & \ddots \\
 & & & & & & 1
\end{pmatrix},    (4.6)

with the β_{ij} satisfying

β_{ij} = 0 ⇒ β_{hj} = 0 ∀ h < i.    (4.7)

Other factorizations can easily be obtained from (3.10) and (4.5) by applying a similar idea to the lower triangular factors. First we take the converse H^# of the lower triangular factor H = H_{n−1} H_{n−2} ··· H_1 of a nonsingular TP matrix in (3.10). Afterwards we decompose H^# as in (3.10) and take converses again. From (3.10) and (4.5), respectively, we get

A = L_1 L_2 ··· L_{n−1} D K_1 ··· K_{n−2} K_{n−1}    (4.8)

and

A = L_1 L_2 ··· L_{n−1} D U_{n−1} U_{n−2} ··· U_1,    (4.9)

where

L_i = \begin{pmatrix}
1 & & & & & & \\
γ_{i1} & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & γ_{i,n−i} & 1 & & & \\
 & & & 0 & 1 & & \\
 & & & & \ddots & \ddots & \\
 & & & & & 0 & 1
\end{pmatrix},    (4.10)

with the γ_{ij} satisfying (4.7).

On the other hand, we can consider A^# instead of A, apply any of the four factorizations (3.10), (4.5), (4.8) and (4.9), and take converses again. Then we get four similar factorizations with the roles of the lower and upper triangular matrices interchanged:

A = U_{n−1} U_{n−2} ··· U_1 D L_1 L_2 ··· L_{n−1},    (4.11)

A = U_{n−1} U_{n−2} ··· U_1 D H_{n−1} ··· H_2 H_1,    (4.12)

A = K_1 K_2 ··· K_{n−1} D L_1 L_2 ··· L_{n−1},    (4.13)

A = K_1 K_2 ··· K_{n−1} D H_{n−1} ··· H_2 H_1.    (4.14)

In summary, from (3.10) we have obtained eight different factorizations of A as a product of bidiagonal matrices. Another eight factorizations, with the same types of factors as above but with different conditions on the subdiagonal elements, are obtained by starting from the factorization (2.16) with the condition (2.17). Nevertheless, recall that for ASTP matrices, and in particular for STP matrices, the factorizations obtained from (2.16) and from (3.10) coincide, and therefore only the first eight factorizations need to be considered for these matrices.

A matrix with nonnegative entries is called stochastic if its row sums are equal to one. Matrices which are nonsingular, stochastic and totally positive are of particular interest in Computer Aided Geometric Design. An elementary corner cutting is a transformation which maps any polygon P_1 P_2 ··· P_n into another one Q_1 Q_2 ··· Q_n defined by

Q_j = P_j, j ≠ i,
Q_i = λP_i + (1 − λ)P_{i+1},    (4.15)

for some 1 ≤ i ≤ n − 1, or

Q_j = P_j, j ≠ i,
Q_i = λP_i + (1 − λ)P_{i−1},    (4.16)

for some 2 ≤ i ≤ n.

A corner cutting algorithm is any composition of elementary corner cutting transformations. An elementary corner cutting is defined by a bidiagonal, nonsingular, totally positive and stochastic matrix, which is upper (respectively lower) triangular in the case (4.15) (resp. (4.16)). In [24] Goodman and Micchelli proved that a matrix which is nonsingular, totally positive and stochastic can be written as a product of bidiagonal matrices of the same type and therefore describes a corner cutting algorithm.

Observe that a nonsingular, totally positive matrix A = (a_{ij})_{1≤i,j≤n} can be factorized in the form

A = DB    (4.17)

with D = (d_{ij})_{1≤i,j≤n} a diagonal matrix with positive diagonal entries d_{ii} = ∑_{j=1}^n a_{ij} and B a nonsingular, totally positive, stochastic matrix. The factorization is obviously unique. By using (4.17) for each factor, a product of nonsingular, totally positive matrices A = A_1 A_2 ··· A_m can be written

A = D B_1 B_2 ··· B_m    (4.18)

with B_1, B_2, . . . , B_m totally positive and stochastic and D a diagonal matrix with positive diagonal. Moreover, the matrix A is stochastic if and only if D is the identity matrix I.

Hence, Theorem 4.2 can be slightly changed to characterize nonsingular, stochastic, TP matrices in terms of factorizations with bidiagonal, stochastic factors:


Theorem 4.5. A nonsingular n × n matrix A is stochastic and totally positive if and only if it can be factorized in the form

A = F_{n−1} F_{n−2} ··· F_1 G_1 ··· G_{n−2} G_{n−1},    (4.19)

with

F_i = \begin{pmatrix}
1 & & & & & & \\
0 & 1 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 0 & 1 & & & \\
 & & & α_{i+1,1} & 1 − α_{i+1,1} & & \\
 & & & & \ddots & \ddots & \\
 & & & & & α_{n,n−i} & 1 − α_{n,n−i}
\end{pmatrix}    (4.20)

and

G_i = \begin{pmatrix}
1 & 0 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & 1 & 0 & & & \\
 & & & 1 − α_{1,i+1} & α_{1,i+1} & & \\
 & & & & \ddots & \ddots & \\
 & & & & & 1 − α_{n−i,n} & α_{n−i,n} \\
 & & & & & & 1
\end{pmatrix},    (4.21)

where, for all (i, j), 0 ≤ α_{i,j} < 1 and the α_{i,j}'s satisfy (2.17) (with m_{ij} replaced by α_{i,j}). Under these conditions, the factorization is unique.

As in Theorem 4.2, an analogous result can be stated in terms of a factorization of the type (3.10) with stochastic bidiagonal matrices. And the same happens with the other similar factorizations: (4.5), (4.8), (4.9) and so on. One of them was specially considered in [24] to prove that any nonsingular, stochastic, totally positive matrix describes a corner cutting algorithm. Uniqueness conditions were not studied in that paper.

§5. Determinantal characterizations and other factorizations

In this section we shall see that the factorizations of TP matrices we have just obtained can be interpreted in terms of the signs of some minors of those matrices, improving some well-known results in the literature.

The main tool we use to relate factorization results with determinantal characterizations is again Neville elimination. In fact, as is seen in [14], Lemma 2.6, the pivots of the Neville elimination of a matrix A can be expressed as Schur complements of submatrices of A and therefore as quotients of minors of A. Since the multipliers of the elimination process are quotients of pivots (see (2.5)), hence of minors of A, a necessary and sufficient condition for a nonsingular matrix A to be TP is that the signs of the minors involved in the calculation of the multipliers and diagonal pivots of the complete Neville elimination of A ensure that the conditions of Theorem 4.1, for instance, are fulfilled.

Consequently, Theorem 4.1 leads to the following determinantal characterization, which corresponds to [16], Theorem 3.1:

Theorem 5.1. Let A be a nonsingular matrix of order n. Then A is TP if and only if it satisfies simultaneously, for each k ∈ {1, 2, . . . , n}, the following conditions:

det A[α|1, 2, . . . , k] ≥ 0 ∀ α ∈ Q_{k,n},    (5.1)

det A[1, 2, . . . , k|β] ≥ 0 ∀ β ∈ Q_{k,n},    (5.2)

det A[1, 2, . . . , k] > 0.    (5.3)
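Conditions (5.1)–(5.3) are easy to test by brute force for small matrices. The following sketch (ours, with assumed function names; it enumerates all α, β ∈ Q_{k,n}, so it is only practical for small n) implements the characterization directly.

import numpy as np
from itertools import combinations

def is_tp_nonsingular(A, tol=1e-10):
    """Check (5.1)-(5.3) of Theorem 5.1 for a nonsingular square matrix A."""
    n = A.shape[0]
    for k in range(1, n + 1):
        lead = list(range(k))
        if np.linalg.det(A[np.ix_(lead, lead)]) <= tol:          # (5.3)
            return False
        for rows in combinations(range(n), k):                   # (5.1)
            if np.linalg.det(A[np.ix_(rows, lead)]) < -tol:
                return False
        for cols in combinations(range(n), k):                   # (5.2)
            if np.linalg.det(A[np.ix_(lead, cols)]) < -tol:
                return False
    return True

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])
print(is_tp_nonsingular(A))   # True for this matrix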

This characterization reduces considerably the number of minors used in the characterization obtained by Cryer in [8], namely: a nonsingular matrix A is TP if and only if

det A[α|β] ≥ 0 ∀ α ∈ Q^0_{k,n}, β ∈ Q_{k,n}, k ∈ {1, . . . , n}.    (5.4)

On the other hand, observe the symmetry of the conditions of Theorem 5.1 with respect to rows and columns, compared with the unnatural asymmetry of (5.4). In the case of general TP matrices, not necessarily nonsingular, (5.4) is replaced in [8] by a similar condition which depends on the rank of A. These matrices can also be characterized by their Neville elimination, as can be seen in [14], Theorem 5.4.

With respect to STP matrices, Theorem 4.3 corresponds to the following result, which is contained in [14], Theorem 4.1:

Theorem 5.2. Let A be an n × m matrix. Then A is STP if and only if

det A[α|1, . . . , k] > 0 ∀ α ∈ Q^0_{k,n}, k ∈ {1, . . . , m},    (5.5)

det A[1, . . . , k|β] > 0 ∀ β ∈ Q^0_{k,m}, k ∈ {1, . . . , n}.    (5.6)

This result improves the classical characterization of STP matrices due to Fekete in [11] (another proof thereof can be found in Theorem 2.5 of [1]): an n × m matrix is STP if and only if all its minors formed by consecutive rows and columns are strictly positive.

Observe that, for example, in the case n = m, n(n + 1)(2n + 1)/6 minors should be checked according to Fekete's criterion, while only n(n + 1) are needed according to Theorem 5.2.

Concerning ASTP matrices, Theorem 4.4 is closely related to the following result, which is contained in [22], Theorem 3.3. First, we must introduce some notations related to the zero pattern of an ASTP matrix A.

For an n × n matrix A let us denote:

i_0 = 1, j_0 = 1;
for t = 1, . . . , l:
  i_t = max{i | a_{i,j_{t−1}} ≠ 0} + 1 (≤ n + 1),
  j_t = max{j | a_{i_t,j} = 0} + 1 (≤ n + 1),

where l is given in this recurrent definition by i_l = n + 1. Analogously we denote:

ĵ_0 = 1, î_0 = 1;
for t = 1, . . . , r:
  ĵ_t = max{j | a_{î_{t−1},j} ≠ 0} + 1,
  î_t = max{i | a_{i,ĵ_t} = 0} + 1,

where ĵ_r = n + 1. In other words, the entries below the places (i_1 − 1, j) with j_0 ≤ j < j_1, (i_2 − 1, j) with j_1 ≤ j < j_2, . . . , (i_{l−1} − 1, j) with j_{l−2} ≤ j < j_{l−1} are zero. So are the entries to the right of the places (i, ĵ_1 − 1) with î_0 ≤ i < î_1, (i, ĵ_2 − 1) with î_1 ≤ i < î_2, . . . , (i, ĵ_{r−1} − 1) with î_{r−2} ≤ i < î_{r−1}. On the other hand, the entries of both lists, those above the first list and those to the left of the last list, are nonzero. We shall express this by saying that the matrix A has a zero pattern given by I = {i_0, i_1, . . . , i_l}, J = {j_0, j_1, . . . , j_l}, Î = {î_0, î_1, . . . , î_r} and Ĵ = {ĵ_0, ĵ_1, . . . , ĵ_r}. Only matrices with these patterns of zeros and all the other entries positive can be ASTP, as is explained in [22] with the following result:

Theorem 5.3. Let A be a nonsingular n × n totally positive matrix with a zero pattern given by I, J, Î, Ĵ as above, satisfying

i_t > j_t, t = 1, . . . , l − 1,
ĵ_t > î_t, t = 1, . . . , r − 1.    (5.7)

Then A is an ASTP matrix if and only if it satisfies the following conditions simultaneously:

i) For 1 ≤ t ≤ l and for j_{t−1} ≤ h < j_t,

det A[i_t − 1 − h + j_k, . . . , i_t − 1 | j_k, j_k + 1, . . . , h] > 0,

where j_k = max{j_s | s ≤ t − 1, h − j_s < i_t − i_s}.

ii) For 1 ≤ t ≤ r and for î_{t−1} ≤ h < î_t,

det A[î_k, î_k + 1, . . . , h | ĵ_t − 1 − h + î_k, . . . , ĵ_t − 1] > 0,

where î_k = max{î_s | s ≤ t − 1, h − î_s < ĵ_t − ĵ_s}.

On the other hand, the decompositions studied in the preceding section can be expressed in a more compact form if we replace all the lower triangular bidiagonal matrices by their product and, analogously, all the upper triangular ones by their product. In this form we get the LDU factorization of the matrix A, and so Theorem 4.1 can be reformulated (see [9], Theorem 7.1 for a first version of this result):


Theorem 5.4. A nonsingular n × n matrix A is totally positive if and only if it can be decomposed in the form LDU with D a diagonal matrix with positive diagonal entries, L a lower triangular, unit diagonal TP matrix and U an upper triangular, unit diagonal TP matrix.

Analogously, the factorization of a strictly totally positive matrix which has been given in Theorem 4.3 can be rewritten in the form LDU, but, obviously, the triangular matrices L and U are not STP. If B is an n × n lower (resp., upper) triangular matrix such that det B[α|β] > 0 for any α, β ∈ Q_{p,n}, 1 ≤ p ≤ n, with β_k ≤ α_k (resp., β_k ≥ α_k), then B is called a ∆STP matrix. Observe that a ∆STP matrix is an ASTP matrix. Hence we get (see [8], Theorem 1.1):

Theorem 5.5. An n × n matrix A is strictly totally positive if and only if it can be decomposed in the form LDU with D a diagonal matrix with positive diagonal entries, L a lower triangular, unit diagonal ∆STP matrix and U an upper triangular, unit diagonal ∆STP matrix.

Another important class of TP matrices has not yet been considered in this paper: that of the matrices A such that A^m is STP for some positive integer m. They are called oscillatory. A triangular matrix A is called ∆-oscillatory if A is TP and A^m is ∆STP for some positive integer m. Simultaneously with the result of Theorem 5.5, it was shown in Theorem 1.1 of [8] that a square matrix A is oscillatory if and only if it has an LU-factorization such that L and U are ∆-oscillatory.

Let us give two final remarks. The QR factorization of a matrix, that is, its decomposition as the product of an orthogonal matrix and an upper triangular matrix, is important in Numerical Analysis. Its application to TP matrices has recently been considered by us in [16]. By introducing some new classes of matrices related to that of TP matrices, whose definitions are omitted here for brevity, nonsingular TP and STP matrices are characterized in terms of their QR factorization in Theorem 4.7 of that paper.

The second and last remark concerns the solution of linear systems with TP coefficient matrices. Some years ago, in [3], de Boor and Pinkus proved that partial pivoting is not necessary when Gauss elimination is used to solve them. Recently, in [18], we have studied scaled partial pivoting with respect to the l∞-norm and the Euclidean norm for Gauss and Neville elimination applied to the same systems. It has been proved that in exact arithmetic they do not need row exchanges. The same result holds, for sufficiently high precision arithmetic, in Gauss elimination and also, for almost strictly totally positive matrices, in Neville elimination.

References

1. Ando, T., Totally positive matrices, Linear Algebra Appl. 90 (1987), 165–219.

2. de Boor, C., Total positivity of the spline collocation matrix, Indiana Univ. J. Math. 25 (1976), 541–551.

3. de Boor, C., and Pinkus, A., Backward error analysis for totally positive linear systems, Numer. Math. 27 (1977), 485–490.

4. de Boor, C., and Pinkus, A., The approximation of a totally positive band matrix by a strictly banded totally positive one, Linear Algebra Appl. 42 (1982), 81–98.

5. Carnicer, J. M., and Peña, J. M., Shape preserving representations and optimality of the Bernstein basis, Advances in Computational Mathematics 1 (1993), 173–196.

6. Carnicer, J. M., and Peña, J. M., Totally positive bases for shape preserving curve design and optimality of B-splines, Computer-Aided Geom. Design 11 (1994), 633–654.

7. Cavaretta, A. S., Dahmen, W., Micchelli, C. A., and Smith, P. W., A factorization theorem for banded matrices, Linear Algebra Appl. 39 (1981), 229–245.

8. Cryer, C. W., The LU-factorization of totally positive matrices, Linear Algebra Appl. 7 (1973), 83–92.

9. Cryer, C. W., Some properties of totally positive matrices, Linear Algebra Appl. 15 (1976), 1–25.

10. Dahmen, W., and Micchelli, C. A., Banded matrices with banded inverses II: Locally finite decomposition of spline spaces, Constructive Approximation 9 (1993), 263–282.

11. Fekete, M., and Pólya, G., Über ein Problem von Laguerre, Rend. Circ. Mat. Palermo 34 (1912), 89–120.

12. Frydman, H., and Singer, B., Total positivity and the embedding problem for Markov chains, Math. Proc. Cambridge Philos. Soc. 85 (1979), 339–344.

13. Gasca, M., and Mühlbach, G., Generalized Schur complements and a test for total positivity, Applied Numer. Math. 3 (1987), 215–232.

14. Gasca, M., and Peña, J. M., Total positivity and Neville elimination, Linear Algebra Appl. 165 (1992), 25–44.

15. Gasca, M., Micchelli, C. A., and Peña, J. M., Almost strictly totally positive matrices, Numerical Algorithms 2 (1992), 225–236.

16. Gasca, M., and Peña, J. M., Total positivity, QR-factorization and Neville elimination, SIAM J. Matrix Anal. Appl. 14 (1993), 1132–1140.

17. Gasca, M., and Peña, J. M., Sign-regular and totally positive matrices: an algorithmic approach, in Multivariate Approximation: From CAGD to Wavelets, K. Jetter and L. Utreras (eds.), World Scientific Publishers, 1993, 131–146.

18. Gasca, M., and Peña, J. M., Scaled pivoting in Gauss and Neville elimination for totally positive systems, Applied Numerical Mathematics 13 (1993), 345–356.

19. Gasca, M., and Peña, J. M., A matricial description of Neville elimination with applications to total positivity, Linear Algebra Appl. 202 (1994), 33–54.

20. Gasca, M., and Peña, J. M., Corner cutting algorithms and totally positive matrices, in Curves and Surfaces in Geometric Design, P. J. Laurent, A. Le Méhauté and L. L. Schumaker (eds.), A K Peters, 1994, 177–184.

21. Gasca, M., Micchelli, C. A., and Peña, J. M., Banded matrices with banded inverses III: p-slanted matrices, in Wavelets, Images and Surface Fitting, P. J. Laurent, A. Le Méhauté and L. L. Schumaker (eds.), A K Peters, 1994, 245–268.

22. Gasca, M., and Peña, J. M., On the characterization of almost strictly totally positive matrices, to appear in Advances in Computational Mathematics.

23. Goodman, G., A probabilistic representation of totally positive matrices, Advances in Applied Mathematics 7 (1986), 236–252.

24. Goodman, T. N. T., and Micchelli, C. A., Corner cutting algorithms for the Bézier representation of free form curves, Linear Algebra Appl. 99 (1988), 225–252.

25. Goodman, T. N. T., Shape preserving representations, in Mathematical Methods in Computer Aided Geometric Design, T. Lyche and L. Schumaker (eds.), Academic Press, New York, 1989, 333–357.

26. Goodman, T. N. T., and Said, H. B., Shape preserving properties of the generalized Ball basis, Computer-Aided Geom. Design 8 (1991), 115–121.

27. Karlin, S., and McGregor, J. L., Coincidence probabilities of birth and death processes, Pacific J. Math. 9 (1959), 1109–1140.

28. Karlin, S., and McGregor, J. L., Coincidence probabilities, Pacific J. Math. 9 (1959), 1141–1164.

29. Karlin, S., Total Positivity, Vol. I, Stanford University Press, Stanford, 1968.

30. Kemperman, J. H. B., A Hurwitz matrix is totally positive, SIAM J. Math. Anal. 13 (1982), 331–341.

31. Loewner, C., On totally positive matrices, Math. Z. 63 (1955), 338–340.

32. Micchelli, C. A., and Pinkus, A., Descartes systems from corner cutting, Constr. Approx. 7 (1991), 195–208.

33. Whitney, A. M., A reduction theorem for totally positive matrices, J. d'Analyse Math. 2 (1952), 88–92.

Acknowledgements. Both authors have been partially supported by the DGICYT Spain Research Grant PB93-0310.

Mariano Gasca
Departamento de Matemática Aplicada
Facultad de Ciencias, Universidad de Zaragoza
50009 Zaragoza, Spain
[email protected]

Juan M. Peña
Departamento de Matemática Aplicada
Facultad de Ciencias, Universidad de Zaragoza
50009 Zaragoza, Spain
[email protected]