
Maximum Norms & Nonnegative Matrices


Page 1: Maximum Norms & Nonnegative Matrices

Network Systems Lab.

Korea Advanced Institute of Science and Technology


Maximum Norms & Nonnegative Matrices

Weighted maximum norm: for a weight vector w with w_i > 0 for all i,

  ‖x‖_w^∞ = max_i |x_i| / w_i.

e.g.)

[Figure: two unit balls in the (x_1, x_2)-plane. Left: the unit ball w.r.t. ‖·‖_∞ (w = (1,1)), the square with corners (±1, ±1). Right: the unit ball w.r.t. ‖·‖_w^∞, the rectangle with corners (±w_1, ±w_2).]
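As a concrete check of the definition above, here is a minimal numpy sketch; the helper name and the test vectors are my own, not from the slides:

```python
import numpy as np

def weighted_max_norm(x, w):
    """||x||_w = max_i |x_i| / w_i, with all w_i > 0 (illustrative helper)."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    assert np.all(w > 0), "weights must be positive"
    return np.max(np.abs(x) / w)

# With w = (1, 1) this reduces to the ordinary maximum norm,
# whose unit ball is the square with corners (+-1, +-1).
print(weighted_max_norm([1.0, -0.5], [1.0, 1.0]))   # -> 1.0
# With w = (2, 1) the unit ball stretches to the rectangle with corners (+-2, +-1).
print(weighted_max_norm([2.0, -1.0], [2.0, 1.0]))   # -> 1.0
```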

Page 2: Maximum Norms & Nonnegative Matrices


The induced matrix norm

  ‖A‖_w^∞ = max_{‖x‖_w^∞ = 1} ‖Ax‖_w^∞          (by Prop. A.12(a))
          = max_i (1/w_i) Σ_{j=1}^n |a_ij| w_j   (by Prop. A.13(a))

Proposition 6.2
(a) M ≥ 0 iff M maps nonnegative vectors into nonnegative vectors.
(b) If M ≥ 0, then ‖M‖_w^∞ = max_i (1/w_i) Σ_{j=1}^n m_ij w_j.
(c) ‖M‖_w^∞ = ‖ |M| ‖_w^∞, where |M| is the matrix with entries |m_ij|.
(d) Let M ≥ 0. Then, for any λ > 0, ‖M‖_w^∞ ≤ λ iff Mw ≤ λw.
(e) ρ(M) ≤ ‖M‖_w^∞.
(f) If 0 ≤ M ≤ N, then ‖M‖_w^∞ ≤ ‖N‖_w^∞.

[Figure: in the (x_1, x_2)-plane, the unit ball {x : ‖x‖_w^∞ ≤ 1} with corners ±w, and its image under M contained in the scaled ball with corners ±λw, illustrating Mw ≤ λw.]
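A minimal numpy check of parts (b) and (d) of Prop. 6.2; the example matrix and weights are my own choices for illustration:

```python
import numpy as np

def induced_wmax_norm(A, w):
    """||A||_w = max_i (1/w_i) * sum_j |a_ij| * w_j (induced weighted max norm)."""
    A, w = np.asarray(A, float), np.asarray(w, float)
    return np.max((np.abs(A) @ w) / w)

M = np.array([[0.2, 0.3],
              [0.1, 0.4]])      # M >= 0
w = np.array([1.0, 2.0])

norm = induced_wmax_norm(M, w)
# Prop 6.2(b): for M >= 0 the absolute values drop out, so ||M||_w = max_i (Mw)_i / w_i.
assert np.isclose(norm, np.max((M @ w) / w))
# Prop 6.2(d): ||M||_w <= lam iff Mw <= lam * w (checked here at lam = ||M||_w).
lam = norm
assert np.all(M @ w <= lam * w + 1e-12)
print(norm)
```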

Page 3: Maximum Norms & Nonnegative Matrices


From an n × n matrix M to a graph G = (N, A):

  N = {1, …, n},  A = {(i, j) | i ≠ j & m_ij ≠ 0}

Definition 6.1  An n × n matrix M (n ≥ 2) is called irreducible if for every i, j ∈ N there exists a positive path from i to j in the graph G.

e.g.)

  M = [ 0 0 ; 1 1 ],  N = {1, 2},  A = {(2, 1)}

[Figure: nodes 1 and 2 with the single arc from 2 to 1; there is no path from 1 to 2.]
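Definition 6.1 can be tested mechanically by checking reachability between every pair of nodes of G; the following sketch (helper name and closure method are my own) does this via Boolean powers of the adjacency matrix:

```python
import numpy as np

def is_irreducible(M):
    """Check irreducibility per Definition 6.1: every node j must be reachable
    from every node i in the graph G = (N, A), A = {(i, j) | i != j, m_ij != 0}.
    (Illustrative helper, not from the slides.)"""
    n = M.shape[0]
    adj = ((M != 0) & ~np.eye(n, dtype=bool)).astype(int)   # arcs (i, j), i != j
    reach = adj.copy()
    for _ in range(n):                                      # transitive closure
        reach = np.minimum(reach + reach @ adj, 1)
    return bool(np.all(reach > 0))

M = np.array([[0, 0],
              [1, 1]])
print(is_irreducible(M))                         # no path from 1 to 2 -> False
print(is_irreducible(np.array([[0, 1],
                               [1, 0]])))        # cycle 1 <-> 2 -> True
```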

Page 4: Maximum Norms & Nonnegative Matrices


Proposition 6.5 (Brouwer Fixed Point Theorem)  Consider the unit simplex

  S = {x ∈ R^n | x ≥ 0 and Σ_{i=1}^n x_i = 1}.

If f : S → S is a continuous fct., then there exists some w ∈ S such that f(w) = w.

e.g.)

[Figure: the unit simplex S in R^2, the segment joining (1, 0) and (0, 1).]

Page 5: Maximum Norms & Nonnegative Matrices


Proposition 6.6 (Perron-Frobenius Theorem)

Let M ≥ 0.
(a) If M is irreducible, then ρ(M) is an eigenvalue of M and there exists some ω > 0 such that Mω = ρ(M)ω. Furthermore, such an ω is unique within a scalar multiple, i.e., if some v satisfies Mv = ρ(M)v, then v = αω. Finally, ‖M‖_ω^∞ = ρ(M).
(b) ρ(M) is an eigenvalue of M & there exists some ω ≥ 0, ω ≠ 0, such that Mω = ρ(M)ω.
(c) For every ε > 0, there exists ω > 0 such that ρ(M) ≤ ‖M‖_ω^∞ ≤ ρ(M) + ε.

Proof) by yourself
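Part (a) can be observed numerically; the sketch below (example matrix mine) confirms that for an irreducible nonnegative matrix, ρ(M) is an eigenvalue with a positive eigenvector:

```python
import numpy as np

# Perron-Frobenius sketch: M >= 0 and irreducible (its graph contains a 2-cycle).
M = np.array([[0.0, 2.0],
              [1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(M)
rho = np.max(np.abs(eigvals))              # spectral radius
k = np.argmax(np.abs(eigvals))
w = np.real(eigvecs[:, k])
w = w / w[np.argmax(np.abs(w))]            # rescale so the largest entry is +1

assert np.isclose(rho, np.max(np.real(eigvals)))  # rho(M) is itself an eigenvalue
assert np.all(w > 0)                               # the Perron vector is positive
assert np.allclose(M @ w, rho * w)                 # M w = rho(M) w
print(rho)
```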

Page 6: Maximum Norms & Nonnegative Matrices


e.g.)

  M = [ 1 1 ; 0 0 ],  G = (N, A) with N = {1, 2}, A = {(1, 2)}:  not irreducible

[Figure: nodes 1 and 2 with the single arc from 1 to 2.]

  det(λI − M) = λ(λ − 1) = 0  ⟹  ρ(M) = 1

For any w > 0, ‖M‖_w^∞ = (w_1 + w_2)/w_1 > 1, so ρ(M) < ‖M‖_w^∞ for any w. However, taking w_2/w_1 → 0, ‖M‖_w^∞ → ρ(M).

Page 7: Maximum Norms & Nonnegative Matrices


Corollaries

Corollary 6.1  Let M ≥ 0. The following are equivalent:
  (i)   ρ(M) < 1;
  (ii)  there exists some w > 0 such that ‖M‖_w^∞ < 1;
  (iii) there exists some w > 0 such that Mw < w.

Corollary 6.2  Given any square matrix M, there exists some w > 0 such that ‖M‖_w^∞ < 1 iff ρ(|M|) < 1.

Corollary 6.3  Given any square matrix M, ρ(M) ≤ ρ(|M|).
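When ρ(M) < 1, a vector w as in condition (iii) of Corollary 6.1 can even be built explicitly: w = (I − M)^{-1}e with e the all-ones vector satisfies w − Mw = e > 0 and w ≥ e > 0, since (I − M)^{-1} = Σ_k M^k ≥ I. A small numpy check (example matrix mine):

```python
import numpy as np

M = np.array([[0.1, 0.4],
              [0.3, 0.2]])                # M >= 0 with rho(M) < 1
rho = np.max(np.abs(np.linalg.eigvals(M)))
assert rho < 1

e = np.ones(2)
w = np.linalg.solve(np.eye(2) - M, e)     # (I - M) w = e, hence w - Mw = e > 0
assert np.all(w > 0)                      # w is positive
assert np.all(M @ w < w)                  # condition (iii) of Corollary 6.1
print(w)
```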

Page 8: Maximum Norms & Nonnegative Matrices


Convergence analysis using maximum norms

Def 6.2  A square matrix A with entries a_ij is (row) diagonally dominant if

  Σ_{j ≠ i} |a_ij| < |a_ii|  for all i.

Prop 6.7  If A is row diagonally dominant, then the Jacobi method for solving Ax = b converges.

proof)
The Jacobi iteration is x(t+1) = −D^{-1}Bx(t) + D^{-1}b, where D is the diagonal part of A and B = A − D. For the iteration matrix M = −D^{-1}B we have m_ii = 0 and m_ij = −a_ij/a_ii for j ≠ i. Therefore, for each i,

  Σ_{j=1}^n |m_ij| = Σ_{j ≠ i} |a_ij| / |a_ii| < 1   (by diagonal dominance).

Therefore ρ(M) ≤ ‖M‖_∞ < 1 (the maximum norm with w = 1), and the Jacobi iteration converges.  Q.E.D.
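The proof above translates directly into code; this sketch (example A and b are my own) builds the Jacobi iteration matrix, verifies ‖M‖_∞ < 1, and iterates to the solution:

```python
import numpy as np

# Jacobi sketch for a row diagonally dominant A (Prop. 6.7):
# x(t+1) = -D^{-1} B x(t) + D^{-1} b, with A = D + B, D = diag(A).
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])          # |a_ii| > sum_{j != i} |a_ij| in every row
b = np.array([6.0, 8.0, 4.0])

D = np.diag(np.diag(A))
B = A - D
M = -np.linalg.solve(D, B)               # Jacobi iteration matrix
assert np.max(np.sum(np.abs(M), axis=1)) < 1   # ||M||_inf < 1 by dominance

x = np.zeros(3)
for _ in range(100):
    x = M @ x + np.linalg.solve(D, b)
assert np.allclose(A @ x, b)             # iterates converge to the solution of Ax = b
print(x)
```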

Page 9: Maximum Norms & Nonnegative Matrices


Prop. 6.8  Consider an n×n matrix M ≥ 0 associated with an iteration x := Mx + b. Let M̂ be the corresponding Gauss-Seidel iteration matrix, that is, the iteration matrix obtained if the components in the original iteration are updated one at a time. Suppose that ρ(M) < 1. Then ρ(M̂) ≤ ρ(M) < 1.

Proof) Assume that M ≥ 0 and ρ(M) < 1. Let us fix some λ such that ρ(M) < λ < 1. By Prop 6.6(c) & Prop 6.2(b), there exists some w > 0 such that ‖M‖_w^∞ ≤ λ. Therefore Mw ≤ λw (by Prop. 6.2(d)). Equivalently, for all i,

  Σ_{j=1}^n m_ij w_j ≤ λ w_i.   — (*)

Consider now some x such that ‖x‖_w^∞ ≤ 1, i.e., |x_j| ≤ w_j for all j, and let y = M̂x. (Note that M̂ is not necessarily nonnegative.)

Page 10: Maximum Norms & Nonnegative Matrices


We will prove by induction on i that |y_i| ≤ λw_i for all i. Assuming that |y_j| ≤ λw_j for j < i,

  |y_i| = | Σ_{j<i} m_ij y_j + Σ_{j≥i} m_ij x_j |
        ≤ Σ_{j<i} m_ij |y_j| + Σ_{j≥i} m_ij |x_j|
        ≤ Σ_{j<i} m_ij λw_j + Σ_{j≥i} m_ij w_j    (by the induction hypothesis & |x_j| ≤ w_j)
        ≤ Σ_{j=1}^n m_ij w_j                       (by λ ≤ 1)
        ≤ λw_i.                                    (by (*))

Therefore ‖M̂x‖_w^∞ ≤ λ for every x satisfying ‖x‖_w^∞ ≤ 1. This implies that ρ(M̂) ≤ ‖M̂‖_w^∞ ≤ λ; letting λ ↓ ρ(M) gives ρ(M̂) ≤ ρ(M).  Q.E.D.

• Prop. 6.8 implies that if M_J ≥ 0 and ρ(M_J) < 1, then ρ(M_GS) ≤ ρ(M_J) < 1.
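The conclusion can be checked numerically: with L the strictly lower part of M and R = M − L, updating components one at a time gives M̂ = (I − L)^{-1}R. A sketch with an example nonnegative M of my own choosing:

```python
import numpy as np

M = np.array([[0.0, 0.5, 0.2],
              [0.3, 0.0, 0.3],
              [0.2, 0.4, 0.0]])       # M >= 0 with row sums < 1, so rho(M) < 1

L = np.tril(M, k=-1)                  # coefficients of components already updated
R = M - L                             # coefficients of components still at old values
M_hat = np.linalg.solve(np.eye(3) - L, R)   # Gauss-Seidel iteration matrix

rho = lambda X: np.max(np.abs(np.linalg.eigvals(X)))
assert rho(M) < 1
assert rho(M_hat) <= rho(M)           # Gauss-Seidel sweep no worse (Prop. 6.8)
print(rho(M), rho(M_hat))
```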

Page 11: Maximum Norms & Nonnegative Matrices


Prop. 6.9 (Stein-Rosenberg Theorem)  Consider Ax = b where a_ij ≤ 0 for i ≠ j and a_ii > 0 for all i. (This implies that the Jacobi iteration matrix M_J is given by (M_J)_ii = 0 and (M_J)_ij = −a_ij/a_ii for i ≠ j; that is, M_J ≥ 0.)

(a) If ρ(M_J) < 1, then ρ(M_GS) ≤ ρ(M_J) — restatement of Prop. 6.8.

(b) If ρ(M_J) ≥ 1, then ρ(M_GS) ≥ ρ(M_J).

Proof) by yourself.

Page 12: Maximum Norms & Nonnegative Matrices


Prop. 6.8 implies that for nonnegative iteration matrices, if a Jacobi algorithm converges, then the corresponding Gauss-Seidel iteration also converges, and its convergence rate is no worse than that of the Jacobi algorithm.

Notice that the proofs of Prop. 6.8 and Prop. 6.9 remain valid when different updating orders of the components are considered.

Nonnegative matrices possess some intrinsic robustness w.r.t. the order of updates!

Key to asynchronous algorithms

Page 13: Maximum Norms & Nonnegative Matrices


Convergence Analysis Using a Quadratic Cost Function

Consider Ax = b where A is a symmetric positive definite matrix.
Solve Ax = b (it has a unique solution since A is invertible), i.e., find x* satisfying Ax* − b = 0.

Define a cost fct.

  F(x) = (1/2) x'Ax − x'b.

F is a strictly convex fct. (A is positive definite; by Prop. A.40(d)). x* minimizes F iff ∇F(x*) = 0, i.e., ∇F(x*) = Ax* − b = 0.

Page 14: Maximum Norms & Nonnegative Matrices


Assume that A is a symmetric positive definite matrix.

Def. A.11  An n×n square matrix A is called positive definite if A is real and x'Ax > 0 for all x ∈ R^n, x ≠ 0. It is called nonnegative definite if it is real and x'Ax ≥ 0 for all x ∈ R^n.

Prop. A.26
(a) For any real matrix A, the matrix A'A is symmetric and nonnegative definite. It is positive definite if and only if A is nonsingular.
(b) A square symmetric real matrix is nonnegative definite (positive definite) iff all of its eigenvalues are nonnegative (positive).
(c) The inverse of a symmetric positive definite matrix is symmetric and positive definite.

Page 15: Maximum Norms & Nonnegative Matrices


The meaning of the Gauss-Seidel method (& SOR) in terms of the cost fct. F:

Gauss-Seidel can be viewed as a coordinate descent method minimizing F(x). Starting from x(0), each step minimizes F along one coordinate with the others held fixed:

  x_1 := a with ∂F/∂x_1 (a, x_2(0)) = 0, giving x(1);
  x_2 updated with ∂F/∂x_2 (x(1)) = 0;
  x_1 := b with ∂F/∂x_1 (b, x_2(1)) = 0, giving x(2); and so on.

[Figure: level sets of F ("sets of points on which F is constant") in the (x_1, x_2)-plane, with the coordinate-descent path x(0) → x(1) → x(2) zig-zagging toward the minimizer x*, where ∇F(x*) = 0.]

Page 16: Maximum Norms & Nonnegative Matrices


Prop. 6.10

Let A be symmetric and positive definite, and let x* be the solution of Ax = b.

(a) If γ ∈ (0, 2), then the sequence {x(t)} generated by the SOR algorithm converges to x*.

(b) If γ ∉ (0, 2), then for every choice of x(0) different from x*, the sequence generated by the SOR algorithm does not converge to x*.

Prop. 6.11

If A is symmetric and positive definite and if γ > 0 is sufficiently small, then the JOR and Richardson's algorithms converge to the solution of Ax = b.

Both are special cases of Prop. 2.1 and Prop. 2.2 of Section 3.2.
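A quick numerical illustration of Prop. 6.10(a); the matrix, right-hand side, and relaxation values are my own, and the sweep updates components one at a time:

```python
import numpy as np

# SOR sketch: for symmetric positive definite A, convergence for gamma in (0, 2).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # symmetric positive definite
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)

def sor(A, b, gamma, iters=200):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):            # update component i using latest values
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - gamma) * x[i] + gamma * (b[i] - s) / A[i, i]
    return x

for gamma in (0.5, 1.0, 1.5):         # all inside (0, 2)
    assert np.allclose(sor(A, b, gamma), x_star)
print(x_star)
```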

Page 17: Maximum Norms & Nonnegative Matrices


Conjugate Gradient Method

To accelerate the speed of convergence of the classical iterative methods.

Consider Ax = b.
- Assume that A is n×n, symmetric, and positive definite.
- If A is not, consider the equivalent problem A'Ax = A'b. Then A'A is symmetric and positive definite (by Prop. A.26(a)).
- For convenience, assume b = 0, i.e., solve Ax = 0.

Page 18: Maximum Norms & Nonnegative Matrices


The cost function

  F(x) = (1/2) x'Ax.

An iteration of the method has the general form

  x(t+1) = x(t) + γ(t)s(t),   t = 0, 1, …

s(t) ∈ R^n is a direction of update; γ(t) is a scalar step size defined by the line minimization

  F(x(t) + γ(t)s(t)) = min_{γ ∈ R} F(x(t) + γs(t)).

Let g(t) = ∇F(x(t)) = Ax(t).

Page 19: Maximum Norms & Nonnegative Matrices


Steepest Descent Method

  s(t) = −g(t) = −∇F(x(t)),  so  x(t+1) = x(t) − γ(t)∇F(x(t)),

where γ(t) is a scalar stepsize defined by the line minimization

  F(x(t) − γ(t)∇F(x(t))) = min_{γ ∈ R} F(x(t) − γ∇F(x(t))).

Conjugate Gradient method

  s(t) = −g(t) + β(t)s(t−1)   : conjugate direction,

where

  β(t) = g(t)'g(t) / ( g(t−1)'g(t−1) ),   γ(t) = s(t)'g(t) / ( s(t)'Ag(t) ).

Prop. 7.2.  For the conjugate gradient method, the following hold: the algorithm terminates after at most n steps; that is, there exists some t ≤ n such that g(t) = 0 and x(t) = 0.
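The method above can be sketched in a few lines; to make the run nontrivial I use a general b rather than the slides' b = 0 (so the exact answer is A^{-1}b instead of 0), and the equivalent stepsize γ(t) = −s(t)'g(t) / (s(t)'As(t)) from line minimization:

```python
import numpy as np

# Conjugate gradient sketch for F(x) = (1/2) x'Ax - x'b (example A, b mine).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])       # symmetric positive definite
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)
g = A @ x - b                          # gradient g(t) = grad F(x(t))
s = -g                                 # first direction: steepest descent
for t in range(3):                     # Prop. 7.2: at most n = 3 steps needed
    gamma = -(s @ g) / (s @ (A @ s))   # line-minimization stepsize
    x = x + gamma * s
    g_new = A @ x - b
    beta = (g_new @ g_new) / (g @ g)   # conjugate-direction coefficient beta(t)
    s = -g_new + beta * s
    g = g_new

assert np.allclose(A @ x, b)           # exact solution after n steps (roundoff aside)
print(x)
```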

Page 20: Maximum Norms & Nonnegative Matrices


Geometric interpretation

{s(t)} is mutually A-conjugate, that is, s(t)'As(r) = 0 if t ≠ r.

If A = I, then s(t)'s(r) = 0 if t ≠ r, i.e., {s(t)} is orthogonal.

[Figure: two panels, each starting from x(0) with first direction s(0) = −g(0) = −∇F(x(0)), comparing steepest descent and conjugate gradient; left panel A = I, right panel A positive definite & symmetric.]