
2.4-1

2.4 Signal Space [P2.4]

• You are used to the association between n-tuples of numbers and points in space, and you even call them both “vectors.” It’s a powerful link, because you can visualize many operations on n-tuples as corresponding operations in Euclidean 3-space.

In this section, we extend spatial visualization to finite-energy functions and get a big payback. Our vectors are now:

It is easiest to relate functions and tuples by considering functions to be “indexed” by a continuous variable t, rather than by a discrete variable, as in the case of tuples. However, this is not necessary; the formal link rests on the structure below, which can even treat zero-mean random variables as vectors (see the expanded description of vector spaces in Appendix D).


2.4-2

2.4.1 Vector Space

• A vector space consists of a set of vectors that forms a commutative group under a binary operation “+”:

\[
\mathbf{x}+\mathbf{y}=\mathbf{y}+\mathbf{x},\qquad (\mathbf{x}+\mathbf{y})+\mathbf{z}=\mathbf{x}+(\mathbf{y}+\mathbf{z}),\qquad \mathbf{x}+\mathbf{0}=\mathbf{x},\qquad \mathbf{x}+(-\mathbf{x})=\mathbf{0}
\]

with a field of scalars and scalar multiplication:

\[
a(b\mathbf{x})=(ab)\mathbf{x},\qquad 1\,\mathbf{x}=\mathbf{x},\qquad 0\,\mathbf{x}=\mathbf{0},\qquad a(\mathbf{x}+\mathbf{y})=a\mathbf{x}+a\mathbf{y},\qquad (a+b)\mathbf{x}=a\mathbf{x}+b\mathbf{x}
\]

Usually the scalars are in ℝ or ℂ, but ℚ or even B = {0,1} will do, too.

o Summary: we can add different vectors together. There’s a zero element and a negative counterpart for every vector. Also, we can multiply by scalars.

o All three of our “vectors” qualify.

o A set of vectors spans the space consisting of all linear combinations of them; if they are also linearly independent, they form a basis of the space.

o Vectors are linearly independent iff no linear combination of them equals 0 (except the one with all-zero coefficients).
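The span/independence conditions above are easy to check numerically. A minimal sketch (the vectors here are made-up examples, not from the notes): a set of tuples is linearly independent iff the matrix with the vectors as columns has full rank.

```python
import numpy as np

# Hypothetical 3-tuples: they are linearly independent iff
# a*v1 + b*v2 + c*v3 = 0 forces a = b = c = 0, i.e. the matrix
# with the vectors as columns has full rank.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2                      # deliberately dependent on v1 and v2

A = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(A)   # 2: these three vectors span only a plane
```

Here the rank (2) is the dimension of the space the vectors span; any two independent columns of A form a basis of that plane.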


2.4-3

2.4.2 Inner Product Space

• Adding an inner product operator to a vector space gives us length and distance between vectors. The inner product maps pairs of vectors to complex (or real) numbers, \(V\times V\to\mathbb{C}\), and it’s denoted \((\mathbf{x},\mathbf{y})\).

o Properties:

\[
(\mathbf{x},\mathbf{y})=(\mathbf{y},\mathbf{x})^{*},\qquad (a\mathbf{x}+b\mathbf{y},\mathbf{z})=a(\mathbf{x},\mathbf{z})+b(\mathbf{y},\mathbf{z}),\qquad (\mathbf{x},\mathbf{x})\ge 0,\ \text{with equality iff } \mathbf{x}=\mathbf{0}
\]

o Familiar examples include the dot product (for arrows in space), the array product (for tuples), correlation (for time functions) and covariance (for zero-mean random variables):

\[
\mathbf{x}\cdot\mathbf{y},\qquad \mathbf{y}^{\dagger}\mathbf{x},\qquad \int_{0}^{T}x(t)\,y^{*}(t)\,dt,\qquad E\!\left[XY^{*}\right]
\]

Vectors are orthogonal iff their inner product is zero.
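These inner products can be sketched numerically; the signals, seeds, and grids below are illustrative assumptions, not from the notes.

```python
import numpy as np

# Array product y†x for complex tuples (np.vdot conjugates its FIRST argument):
x = np.array([1 + 1j, 2.0])
y = np.array([1.0, 1j])
ip_tuple = np.vdot(y, x)                      # = 1*(1+1j) + (-1j)*2 = 1 - 1j

# Correlation ∫ x(t) y*(t) dt for waveforms, midpoint-rule quadrature:
T, M = 1.0, 10_000
dt = T / M
t = (np.arange(M) + 0.5) * dt
ip_wave = np.sum(np.sin(2 * np.pi * t) * np.cos(2 * np.pi * t)) * dt  # ≈ 0

# Covariance E[XY*] for zero-mean random variables, from sample averages:
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 100_000))      # independent, hence "orthogonal"
cov = np.mean(X * Y)                          # ≈ 0
```

The sine/cosine pair and the independent random variables both come out (numerically) orthogonal, illustrating that the same orthogonality test applies to all three kinds of “vectors.”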

• The inner product lets us define the squared norm, or squared “length,” of a vector. For a waveform, it’s the energy:

\[
\|\mathbf{x}\|^{2}=\mathbf{x}^{\dagger}\mathbf{x}=\int_{0}^{T}\left|x(t)\right|^{2}dt
\]


2.4-4

• The inner product also gives us the squared “distance” between vectors:

\[
d^{2}=\|\mathbf{x}-\mathbf{y}\|^{2}=(\mathbf{x}-\mathbf{y})^{\dagger}(\mathbf{x}-\mathbf{y})=\int_{0}^{T}\left|x(t)-y(t)\right|^{2}dt
\]

2.4.3 Waveforms as Vectors

• Waveforms with finite energy (square integrable) obey the same rules as ℝⁿ or ℂⁿ (see the summary at the end of this section). Consequently, we can make vector diagrams of signals and use geometric intuition.

• Example

o Assume these basis waveforms:

They have unit energy and are orthogonal (inner product is zero), hence “orthonormal.”


2.4-5

o We can form other waveforms as linear combinations, such as

\[
x(t)=-\tfrac{1}{2}\varphi_{1}(t)+2\varphi_{2}(t),\qquad y(t)=-\varphi_{1}(t)-\varphi_{2}(t)
\]

They are in the space spanned by \(\varphi_{1}(t)\) and \(\varphi_{2}(t)\).

o A vector diagram for x(t) and y(t):

o We can form the sum and difference from the diagram. For example, \(x(t)+y(t)\) is represented by


2.4-6

o We can calculate the energy as the squared length (values are real, but

pro forma treated as complex):

As a vector:

\[
\|\mathbf{y}\|^{2}=\left(-\boldsymbol{\varphi}_{1}-\boldsymbol{\varphi}_{2},\,-\boldsymbol{\varphi}_{1}-\boldsymbol{\varphi}_{2}\right)=\|\boldsymbol{\varphi}_{1}\|^{2}+\|\boldsymbol{\varphi}_{2}\|^{2}=2
\]

or, by coordinates, \(\|\mathbf{y}\|^{2}=(-1)^{2}+(-1)^{2}=2\).

As a waveform:

\[
E_{y}=\int_{0}^{T}y^{2}(t)\,dt=\int_{0}^{T}\left(-\varphi_{1}(t)-\varphi_{2}(t)\right)^{2}dt=\int_{0}^{T}\varphi_{1}^{2}(t)\,dt+\int_{0}^{T}\varphi_{2}^{2}(t)\,dt=2
\]

(the cross term vanishes by orthogonality). Similarly, the squared length of x by coordinates is

\[
\|\mathbf{x}\|^{2}=\left(-\tfrac{1}{2}\right)^{2}+2^{2}=\tfrac{17}{4}
\]

and for x(t),

\[
E_{x}=\int_{0}^{T}x^{2}(t)\,dt=\int_{0}^{T}\left(-\tfrac{1}{2}\varphi_{1}(t)+2\varphi_{2}(t)\right)^{2}dt=\tfrac{1}{4}+4=\tfrac{17}{4}
\]

Verify them by direct integration.
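The basis waveforms themselves appear only in the slide’s figure; as a stand-in assumption, the numerical check below uses two orthonormal staircase pulses on [0, T], which give the same coordinates and hence the same energies.

```python
import numpy as np

T, M = 1.0, 20_000
dt = T / M
t = (np.arange(M) + 0.5) * dt                 # midpoint grid on [0, T]

# Assumed orthonormal basis (the actual phi1, phi2 are in the figure):
phi1 = np.ones(M) / np.sqrt(T)
phi2 = np.where(t < T / 2, 1.0, -1.0) / np.sqrt(T)

x = -0.5 * phi1 + 2.0 * phi2
y = -phi1 - phi2

Ex = np.sum(x**2) * dt                        # (1/2)^2 + 2^2 = 17/4
Ey = np.sum(y**2) * dt                        # 1 + 1 = 2
d2 = np.sum((x - y)**2) * dt                  # (1/2)^2 + 3^2 = 37/4
```

Any orthonormal pair gives the same answers, since the energies depend only on the coordinates.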


2.4-7

o We also get the squared “distance” between x(t) and y(t). First, by direct integration:

\[
\int_{0}^{T}\left(x(t)-y(t)\right)^{2}dt=9\tfrac{1}{4}
\]

and, second, from the vector diagram:

\[
d^{2}=\|\mathbf{x}-\mathbf{y}\|^{2}=(x_{1}-y_{1})^{2}+(x_{2}-y_{2})^{2}=\left(\tfrac{1}{2}\right)^{2}+3^{2}=\tfrac{1}{4}+9=9\tfrac{1}{4}
\]


2.4-8

2.4.4 Common Orthonormal Bases

• Although any set of N linearly independent vectors can provide a basis for

an N-dimensional space, most operations become simpler if the basis

vectors are orthogonal and have unit energy (i.e., are orthonormal).

With these two, finding the coefficients of some other vector requires work. But with these two, the coefficients of some other vector are just the inner products with \(\varphi_{1}\) and \(\varphi_{2}\).

Below is a collection of common orthonormal waveform basis sets.

• The Fourier series basis is familiar (defined over \([-T/2,\,T/2]\) here):

\[
\varphi_{k}(t)=\frac{1}{\sqrt{T}}\exp\!\left(j\,2\pi k\,\frac{t}{T}\right),\qquad -T/2\le t\le T/2,\qquad -L\le k\le L
\]
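A quick numerical check that these complex exponentials are orthonormal on [−T/2, T/2]; the values of T, L and the integration grid are arbitrary assumptions for the sketch.

```python
import numpy as np

T, L, M = 2.0, 3, 4096
dt = T / M
t = (np.arange(M) + 0.5) * dt - T / 2         # midpoint grid on [-T/2, T/2]

def phi(k):
    """Fourier basis function phi_k(t) = exp(j 2 pi k t / T) / sqrt(T)."""
    return np.exp(2j * np.pi * k * t / T) / np.sqrt(T)

ks = range(-L, L + 1)
# Gram matrix of inner products (phi_k, phi_m) = ∫ phi_k(t) phi_m*(t) dt:
G = np.array([[np.sum(phi(k) * np.conj(phi(m))) * dt for m in ks] for k in ks])
# G comes out as the (2L+1) x (2L+1) identity: the basis is orthonormal.
```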


2.4-9

o To represent another waveform in this space (not quite the conventional Fourier series pair):

\[
x(t)=\sum_{k=-L}^{L}x_{k}\,\frac{1}{\sqrt{T}}\,e^{j2\pi kt/T},\qquad x_{k}=\frac{1}{\sqrt{T}}\int_{-T/2}^{T/2}x(t)\,e^{-j2\pi kt/T}\,dt
\]

o The Fourier transforms of these basis signals are

\[
\Phi_{k}(f)=\sqrt{T}\,\mathrm{sinc}\!\left(fT-k\right)\qquad\text{(Why?)}
\]

o So for large L, the range of frequencies (positive and negative) is roughly \((2L+1)/T\). Rule of thumb: there are about \(D\approx 2WT\) real dimensions in (positive-frequency) bandwidth W and time T.


2.4-10

• The Walsh-Hadamard basis looks like this:

Generated by the recursion

\[
\mathbf{H}^{(0)}=\left[\,1\,\right],\qquad
\mathbf{H}^{(n)}=\begin{bmatrix}\mathbf{H}^{(n-1)} & \mathbf{H}^{(n-1)}\\ \mathbf{H}^{(n-1)} & -\mathbf{H}^{(n-1)}\end{bmatrix}
\]

o The rows of \(\mathbf{H}^{(n)}\) are orthogonal (prove by induction on n).

o They span the space of staircase-like functions with \(2^{n}\) possible equispaced zero crossings:
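The recursion and the orthogonality claim are easy to check in code; a minimal sketch:

```python
import numpy as np

def hadamard(n):
    """Walsh-Hadamard matrix H(n) built from the recursion in the notes."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H3 = hadamard(3)                  # 8 x 8 matrix with entries +/-1
# Rows are orthogonal: H H^T = 2^n I.  (Scale the rows to unit energy
# to get an orthonormal waveform basis.)
ortho = np.array_equal(H3 @ H3.T, 8 * np.eye(8, dtype=int))
```

Each row of `hadamard(n)`, read left to right, is one staircase basis waveform with 2ⁿ equal-width steps.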


2.4-11

• The rectangular pulse translates span the same space as Walsh-Hadamard:

o Their Fourier transforms are, for N pulses in time T,

\[
\Phi_{n}(f)=\sqrt{\frac{T}{N}}\,\mathrm{sinc}\!\left(\frac{fT}{N}\right)e^{-j2\pi fT/2N}\,e^{-j2\pi fnT/N}
\]

• Translated square root raised cosine pulses (Section 2.5) form another orthonormal set. For \(\beta=0.5\), a few of them are as shown below:

[Figure: root cosine translates, plotted against normalized time t/T]


2.4-12

o Why are they orthogonal? Because the inner product of two such pulses separated by a multiple of T is

\[
\int_{-\infty}^{\infty}g(t)\,g(t-nT)\,dt=R_{g}(nT)
\]

The autocorrelation function of this pulse is the raised cosine function (Appendix C), which equals zero at multiples of T:

[Figure: \(R_{g}(t)\) for the square root raised cosine pulse, \(\beta=1,\ 0.5,\ 0.35\), plotted against normalized time t/T]

Hence they are orthogonal. We also normalized them to unit energy.
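This is straightforward to verify numerically: build the square-root raised cosine pulse from its spectrum (the raised-cosine expression below is the standard one; the grid sizes are arbitrary assumptions) and correlate T-spaced translates.

```python
import numpy as np

def rc_spectrum(f, T, beta):
    """Raised-cosine spectrum; this is |G(f)|^2 for the sqrt-RC pulse g(t)."""
    af = np.abs(f)
    f1, f2 = (1 - beta) / (2 * T), (1 + beta) / (2 * T)
    H = np.where(af <= f1, T, 0.0)
    mid = (af > f1) & (af <= f2)
    H[mid] = (T / 2) * (1 + np.cos(np.pi * T / beta * (af[mid] - f1)))
    return H

T, beta = 1.0, 0.5
L, M = 8, 64                       # samples per symbol, symbols in the window
N = L * M
dt = T / L
f = np.fft.fftfreq(N, d=dt)
g = np.fft.ifft(np.sqrt(rc_spectrum(f, T, beta))).real / dt   # sqrt-RC pulse

def ip(k):
    """Inner product of g(t) with its translate g(t - kT) (circular shift)."""
    return dt * np.dot(g, np.roll(g, k * L))

# ip(0) ≈ 1 (unit energy) and ip(k) ≈ 0 for k != 0, since the pulse
# autocorrelation is the raised cosine, which is zero at multiples of T.
```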


2.4-13

2.4.5 Summary of Equivalences

| Concept | \(\mathbb{C}^{n}\) | \(L^{2}\) space |
|---|---|---|
| Scaling | \(a\mathbf{x}\) | \(a\,x(t)\) |
| Sum | \(\mathbf{x}+\mathbf{y}\) | \(x(t)+y(t)\) |
| Inner product | \(\mathbf{y}^{\dagger}\mathbf{x}\) | \(\int_{-\infty}^{\infty}x(t)\,y^{*}(t)\,dt\) |
| Orthogonality | \(\mathbf{y}^{\dagger}\mathbf{x}=0\) | \(\int_{-\infty}^{\infty}x(t)\,y^{*}(t)\,dt=0\) |
| Norm | \(\|\mathbf{x}\|^{2}=\mathbf{x}^{\dagger}\mathbf{x}\) | \(\int_{-\infty}^{\infty}|x(t)|^{2}\,dt=E_{x}\) |
| Distance | \(\|\mathbf{x}-\mathbf{y}\|^{2}=\sum_{i}|x_{i}-y_{i}|^{2}\) | \(\int_{-\infty}^{\infty}|x(t)-y(t)|^{2}\,dt\) |


2.4-14

| Concept | \(\mathbb{C}^{n}\) | \(L^{2}\) space |
|---|---|---|
| Normalized inner product | \(\dfrac{\mathbf{y}^{\dagger}\mathbf{x}}{\sqrt{\mathbf{x}^{\dagger}\mathbf{x}\;\mathbf{y}^{\dagger}\mathbf{y}}}\) | \(\dfrac{\int_{-\infty}^{\infty}x(t)\,y^{*}(t)\,dt}{\sqrt{E_{x}E_{y}}}\) |
| Triangle inequality, equality when proportional | \(\|\mathbf{x}+\mathbf{y}\|\le\|\mathbf{x}\|+\|\mathbf{y}\|\) | \(\sqrt{E_{x+y}}\le\sqrt{E_{x}}+\sqrt{E_{y}}\) |
| Projection coefficients (orthonormal basis) | \(x_{k}=\boldsymbol{\varphi}_{k}^{\dagger}\mathbf{x}\) | \(x_{k}=\int_{-\infty}^{\infty}x(t)\,\varphi_{k}^{*}(t)\,dt\) |
| Schwartz inequality, equality when proportional | \(\left|\mathbf{y}^{\dagger}\mathbf{x}\right|^{2}\le\|\mathbf{x}\|^{2}\,\|\mathbf{y}\|^{2}\) | \(\left|\int_{-\infty}^{\infty}x(t)\,y^{*}(t)\,dt\right|^{2}\le E_{x}E_{y}\) |


2.4-15

| Concept | \(\mathbb{C}^{n}\) | \(L^{2}\) space |
|---|---|---|
| Pythagoras, for orthogonal vectors | \(\|\mathbf{x}+\mathbf{y}\|^{2}=\|\mathbf{x}\|^{2}+\|\mathbf{y}\|^{2}\) if orthogonal | \(E_{x+y}=E_{x}+E_{y}\) if orthogonal |


2.4-16

2.4.6 Projections

• For any waveform \(x(t)\), we want the coefficients \(\{\hat{x}_i\}\) with respect to a given basis set \(\{u_i(t)\}\). A practical reason is that, in a computer, it’s easier to manipulate a vector of coefficients than a waveform.

• The corresponding representation and error are, respectively,

\[
\hat{x}(t)=\sum_{n=1}^{N}\hat{x}_{n}u_{n}(t)\qquad\text{and}\qquad e(t)=x(t)-\hat{x}(t).
\]

Why could there be an error?

o If N is finite, then most square-integrable functions lie at least partially outside U, the space spanned by the basis functions \(\{u_{n}(t)\}\).

o If N is infinite, the basis set may not be complete; i.e., the norm of the error may not be guaranteed to converge to zero as N becomes arbitrarily large.

• How to pick the best coefficients? What do we mean by “best”?

o A quadratic measure like the integrated squared error \(\int_{-\infty}^{\infty}|e(t)|^{2}\,dt\)?

o Or the integrated absolute value \(\int_{-\infty}^{\infty}|e(t)|\,dt\)?


2.4-17

o Or the worst-case error \(\max_{t}|e(t)|\)?

o We’ll use integrated squared error, since quadratic measures like energy, power and variance are simple, useful, tractable and well understood.

So the objective is: given any \(x(t)\), find \(\hat{x}(t)\in U\) that minimizes

\[
E_{e}=\int_{-\infty}^{\infty}|e(t)|^{2}\,dt,
\]

the squared norm of \(e(t)\).

• We can use the parallel natures of function space and Euclidean space to establish an important property of the representation error.

o Whether or not the basis vectors \(u_{1}\) and \(u_{2}\) are orthogonal, the \(\hat{\mathbf{x}}\) that minimizes the norm of e is the orthogonal projection of x onto U. Therefore the error is orthogonal to the representation, to the basis vectors and to any other vector in U:

\[
\int_{-\infty}^{\infty}e(t)\,\hat{x}^{*}(t)\,dt=0\qquad\text{and}\qquad\int_{-\infty}^{\infty}e(t)\,u_{n}^{*}(t)\,dt=0
\]

Also Pythagoras: \(E_{x}=E_{\hat{x}}+E_{e}\). We’ll prove these properties later.


2.4-18

2.4.7 Coefficient Calculation – Orthonormal Basis

• The easy case: find the coefficients if the basis is orthonormal.

o The error and its squared norm are

\[
e(t)=x(t)-\sum_{n=1}^{N}\hat{x}_{n}u_{n}(t)
\]

\[
\begin{aligned}
E_{e}&=\int_{-\infty}^{\infty}\Bigl|x(t)-\sum_{n=1}^{N}\hat{x}_{n}u_{n}(t)\Bigr|^{2}dt\\
&=E_{x}-\sum_{n=1}^{N}\hat{x}_{n}^{*}\int_{-\infty}^{\infty}x(t)\,u_{n}^{*}(t)\,dt-\sum_{n=1}^{N}\hat{x}_{n}\int_{-\infty}^{\infty}x^{*}(t)\,u_{n}(t)\,dt+\sum_{n=1}^{N}\sum_{k=1}^{N}\hat{x}_{n}\hat{x}_{k}^{*}\underbrace{\int_{-\infty}^{\infty}u_{n}(t)\,u_{k}^{*}(t)\,dt}_{=\,\delta_{nk}}\\
&=E_{x}-2\,\mathrm{Re}\Bigl[\sum_{n=1}^{N}\hat{x}_{n}^{*}\int_{-\infty}^{\infty}x(t)\,u_{n}^{*}(t)\,dt\Bigr]+\sum_{n=1}^{N}|\hat{x}_{n}|^{2}
\end{aligned}
\]

o Take the gradient of \(E_{e}\) with respect to the complex coefficients \(\hat{x}_{n}\) (see Appendix E!) and equate it to zero:

\[
\hat{x}_{n}=\int_{-\infty}^{\infty}x(t)\,u_{n}^{*}(t)\,dt,\qquad n=1,\ldots,N
\]

which is how you calculate Fourier series coefficients, for example.
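A sketch of this recipe in code. The waveform and the two-function orthonormal basis below are assumed for illustration; the run also verifies the orthogonality and Pythagoras properties claimed in Section 2.4.6.

```python
import numpy as np

M = 20_000
dt = 1.0 / M
t = (np.arange(M) + 0.5) * dt      # midpoint grid on [0, 1]

u1 = np.ones(M)                    # assumed orthonormal basis on [0, 1]
u2 = np.where(t < 0.5, 1.0, -1.0)
x = t                              # waveform to approximate (assumed)

# Coefficients by correlation: xhat_n = ∫ x(t) u_n*(t) dt
c1 = np.sum(x * u1) * dt           # ≈ 1/2
c2 = np.sum(x * u2) * dt           # ≈ -1/4
xhat = c1 * u1 + c2 * u2
e = x - xhat

Ex = np.sum(x**2) * dt
Exhat = np.sum(xhat**2) * dt
Ee = np.sum(e**2) * dt             # Pythagoras: Ex = Exhat + Ee
```

The error e(t) comes out orthogonal to both basis functions, and the three energies satisfy the Pythagoras identity to numerical precision.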


2.4-19

o This correlation operation is at the core of most modems, as the inner product of two arrays of samples. For this discussion, denote the ith correlation as \(p_{i}\).

So projection looks like this:

o Check Pythagoras. What is the value of the minimized norm? From the previous page,

\[
\begin{aligned}
E_{e}&=E_{x}-\mathbf{p}^{\dagger}\hat{\mathbf{x}}-\hat{\mathbf{x}}^{\dagger}\mathbf{p}+\hat{\mathbf{x}}^{\dagger}\hat{\mathbf{x}}\qquad\text{in general}\\
&=E_{x}-\mathbf{p}^{\dagger}\mathbf{p}=E_{x}-E_{\hat{x}}\qquad\text{after substitution of }\hat{\mathbf{x}}=\mathbf{p}
\end{aligned}
\]

o Check orthogonality of the error and an arbitrary basis function:

\[
\int_{-\infty}^{\infty}\Bigl(x(t)-\sum_{n=1}^{N}\hat{x}_{n}u_{n}(t)\Bigr)u_{k}^{*}(t)\,dt=p_{k}-\sum_{n=1}^{N}\hat{x}_{n}\delta_{nk}=0
\]


2.4-20

2.4.8 Coefficient Calculation – General Basis

• Slightly more difficult is the case of a non-orthonormal basis set.

o To obtain the coefficients \(\hat{x}_{n}\) (the components of \(\hat{\mathbf{x}}\)), adapt the expression for \(E_{e}\) from Section 2.4.7:

\[
\begin{aligned}
E_{e}&=\int_{-\infty}^{\infty}\Bigl|x(t)-\sum_{n=1}^{N}\hat{x}_{n}u_{n}(t)\Bigr|^{2}dt\\
&=E_{x}-\sum_{n=1}^{N}\hat{x}_{n}^{*}p_{n}-\sum_{n=1}^{N}\hat{x}_{n}p_{n}^{*}+\sum_{n=1}^{N}\sum_{k=1}^{N}\hat{x}_{k}^{*}\hat{x}_{n}\underbrace{\int_{-\infty}^{\infty}u_{n}(t)\,u_{k}^{*}(t)\,dt}_{=\,G_{kn}}\\
&=E_{x}-\hat{\mathbf{x}}^{\dagger}\mathbf{p}-\mathbf{p}^{\dagger}\hat{\mathbf{x}}+\hat{\mathbf{x}}^{\dagger}\mathbf{G}\hat{\mathbf{x}}
\end{aligned}
\]

where G is the “Gram matrix,” the matrix of inner products of the basis functions.

o Setting the gradient to zero gives \(\mathbf{G}\hat{\mathbf{x}}=\mathbf{p}\), so the optimum coefficients are

\[
\hat{\mathbf{x}}=\mathbf{G}^{-1}\mathbf{p}
\]

These are the “normal equations.” Projection looks like this:

Note that p contains all the information needed to construct \(\hat{x}(t)\).
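A sketch of the normal equations with an assumed non-orthonormal basis, the monomials 1, t, t² on [0, 1]:

```python
import numpy as np

M = 20_000
dt = 1.0 / M
t = (np.arange(M) + 0.5) * dt                 # midpoint grid on [0, 1]

U = np.stack([t**0, t**1, t**2])              # rows: basis functions 1, t, t^2
x = np.cos(np.pi * t)                         # waveform to represent (assumed)

G = (U @ U.T) * dt                            # Gram matrix of inner products
p = (U @ x) * dt                              # correlations p_n = ∫ x u_n dt
coef = np.linalg.solve(G, p)                  # normal equations: G xhat = p
xhat = coef @ U                               # the representation xhat(t)

# The error is orthogonal to every basis function (p - G xhat = 0):
err_ips = (U @ (x - xhat)) * dt
```

Because the monomials overlap heavily, G is far from the identity, yet solving \(\mathbf{G}\hat{\mathbf{x}}=\mathbf{p}\) still makes the error orthogonal to the whole basis.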


2.4-21

o The energy of the representation \(\hat{x}(t)\) is

\[
\begin{aligned}
E_{\hat{x}}&=\int_{-\infty}^{\infty}|\hat{x}(t)|^{2}dt=\sum_{n=1}^{N}\sum_{k=1}^{N}\hat{x}_{n}\hat{x}_{k}^{*}\int_{-\infty}^{\infty}u_{n}(t)\,u_{k}^{*}(t)\,dt\\
&=\hat{\mathbf{x}}^{\dagger}\mathbf{G}\hat{\mathbf{x}}\qquad\text{in general}\\
&=\mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{p}\qquad\text{for the optimum coefficients }\hat{\mathbf{x}}=\mathbf{G}^{-1}\mathbf{p}
\end{aligned}
\]

• Check Pythagoras:

\[
\begin{aligned}
E_{e}&=E_{x}-\hat{\mathbf{x}}^{\dagger}\mathbf{p}-\mathbf{p}^{\dagger}\hat{\mathbf{x}}+\hat{\mathbf{x}}^{\dagger}\mathbf{G}\hat{\mathbf{x}}\qquad\text{in general}\\
&=E_{x}-\mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{p}-\mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{p}+\mathbf{p}^{\dagger}\mathbf{G}^{-1}\mathbf{p}\\
&=E_{x}-E_{\hat{x}}\qquad\text{for the optimum coefficients}
\end{aligned}
\]

• Check orthogonality of the error and the basis functions. Concisely, let

\[
\mathbf{u}(t)=\left[u_{1}(t),\,u_{2}(t),\,\ldots,\,u_{N}(t)\right],
\]

a row vector (or “matrix,” if time runs downwards). Then

\[
\hat{x}(t)=\sum_{n=1}^{N}\hat{x}_{n}u_{n}(t)=\left[u_{1}(t),\,u_{2}(t),\,\ldots,\,u_{N}(t)\right]\begin{bmatrix}\hat{x}_{1}\\ \hat{x}_{2}\\ \vdots\\ \hat{x}_{N}\end{bmatrix}=\mathbf{u}(t)\,\hat{\mathbf{x}}
\]

\[
\mathbf{p}=\int_{-\infty}^{\infty}x(t)\begin{bmatrix}u_{1}^{*}(t)\\ \vdots\\ u_{N}^{*}(t)\end{bmatrix}dt=\int_{-\infty}^{\infty}x(t)\,\mathbf{u}^{\dagger}(t)\,dt
\]


2.4-22

\[
\mathbf{G}=\int_{-\infty}^{\infty}\mathbf{u}^{\dagger}(t)\,\mathbf{u}(t)\,dt=
\begin{bmatrix}
\int|u_{1}(t)|^{2}dt & \int u_{1}^{*}(t)u_{2}(t)\,dt & \cdots & \int u_{1}^{*}(t)u_{N}(t)\,dt\\
\int u_{2}^{*}(t)u_{1}(t)\,dt & \int|u_{2}(t)|^{2}dt & & \vdots\\
\vdots & & \ddots & \\
\int u_{N}^{*}(t)u_{1}(t)\,dt & \cdots & & \int|u_{N}(t)|^{2}dt
\end{bmatrix}
\]

As for orthogonality (our goal, before we developed notation),

\[
\begin{aligned}
\int_{-\infty}^{\infty}\mathbf{u}^{\dagger}(t)\,e(t)\,dt&=\int_{-\infty}^{\infty}\mathbf{u}^{\dagger}(t)\bigl(x(t)-\mathbf{u}(t)\hat{\mathbf{x}}\bigr)\,dt\\
&=\mathbf{p}-\mathbf{G}\hat{\mathbf{x}}\qquad\text{in general}\\
&=\mathbf{0}\qquad\text{for the optimum coefficients }\hat{\mathbf{x}}=\mathbf{G}^{-1}\mathbf{p}
\end{aligned}
\]

• This notation also lets us write the projection concisely as

\[
\hat{x}(t)=\mathbf{u}(t)\,\hat{\mathbf{x}}=\mathbf{u}(t)\left(\int_{-\infty}^{\infty}\mathbf{u}^{\dagger}(\tau)\,\mathbf{u}(\tau)\,d\tau\right)^{-1}\int_{-\infty}^{\infty}x(\tau)\,\mathbf{u}^{\dagger}(\tau)\,d\tau
\]