Lecture 4: Parameter Estimation
Methods
Sahar Moghimi
System Identification
We have a set of models (parameterized by the parameter vector)
…and a set of measurements
Prediction error minimization
State space
Probabilistic approach
Prediction error identification methods
Approach: choose the parameters to make the prediction error (PE) as small as possible
In order to define a scalar measure of the PE:
Linear filter: if set properly, can reduce the effect of noise
Time-varying norm
Weighted norm
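The criterion sketched above can be written out as follows (the slide equations did not survive extraction; this follows Ljung's standard prediction-error notation, so the symbols L(q), β(t, N) and ℓ(·) are assumptions):

```latex
\varepsilon(t,\theta) = y(t) - \hat{y}(t \mid \theta), \qquad
\varepsilon_F(t,\theta) = L(q)\,\varepsilon(t,\theta)
\quad \text{(filtered prediction error)}
```

```latex
V_N(\theta) = \frac{1}{N}\sum_{t=1}^{N} \beta(t,N)\,
\ell\big(\varepsilon_F(t,\theta)\big),
\qquad \hat{\theta}_N = \arg\min_{\theta} V_N(\theta)
```

Here L(q) is the prefilter, β(t, N) the time-varying weight, and ℓ(·) the scalar norm (e.g. ℓ(ε) = ½ε²).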
Least squares method
Special case of prediction error methods:
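As a minimal numerical sketch (variable names here are illustrative, not from the slides), the LS estimate solves θ̂ = (XᵀX)⁻¹Xᵀy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = X @ theta_true + noise (illustrative).
theta_true = np.array([2.0, -1.0])
X = rng.standard_normal((100, 2))
y = X @ theta_true + 0.01 * rng.standard_normal(100)

# Least-squares estimate theta_hat = (X^T X)^{-1} X^T y;
# lstsq solves it without forming the normal equations explicitly.
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With persistently exciting regressors and low noise, theta_hat recovers theta_true closely.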
On the consistency of LSE
Weighted LSE: to weight different measurements
differently:
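A minimal sketch of the weighted LS estimate θ̂ = (XᵀWX)⁻¹XᵀWy, with a diagonal weight matrix W (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1.0, 3.0])
X = rng.standard_normal((50, 2))
y = X @ theta_true                      # noise-free for clarity

# Diagonal weight matrix W: a larger weight means a more trusted measurement.
w = rng.uniform(0.5, 2.0, size=50)      # per-sample weights (illustrative)
W = np.diag(w)

# Weighted LS: theta_hat = (X^T W X)^{-1} X^T W y
theta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

In the noise-free case any positive weighting recovers the same exact solution; with noise, weights should reflect the inverse noise variance of each measurement.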
Dealing with the issue of best regressors
Weighted Principal Component Regression (PCR): consider
The number of columns (p+1) in X should be large enough to cover the whole duration of the impulse response function, so that it can approximate the system relatively accurately.
If p+1 is greater than the number of nonzero samples in the true impulse response, then the true values of the corresponding extraneous elements in A should be zero.
Assumed uncorrelated
We consider decomposing X into its principal components and solving the
linear regression problem in the domain of PCs using Singular Value
Decomposition (SVD).
Treat as black box: code widely available
In MATLAB: [U,D,V]=svd(X,0)
X = U D Vᵀ,  where D = diag(d₁, …, dₙ) (singular values on the diagonal, zeros elsewhere)
Dealing with the issue of best regressors
The dᵢ are called the singular values of X
If D is singular, some of the dᵢ will be 0
In general, rank(X) = number of nonzero dᵢ
The column vectors of U (principal components) are the eigenvectors of the matrix XXᵀ.
The column vectors of V are the eigenvectors of the matrix XᵀX, and
D is a diagonal matrix whose diagonal elements (the singular values) are the
square roots of the eigenvalues of XᵀX (or XXᵀ).
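These properties can be checked numerically; the sketch below uses the economy-size SVD, the NumPy analogue of the slide's MATLAB call `[U,D,V]=svd(X,0)`:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 4))

# Economy-size SVD: X = U @ diag(d) @ Vt
U, d, Vt = np.linalg.svd(X, full_matrices=False)

# Singular values are the square roots of the eigenvalues of X^T X.
eigvals = np.linalg.eigvalsh(X.T @ X)[::-1]   # sorted descending
singular_values_match = np.allclose(d, np.sqrt(eigvals))
```

Note that NumPy returns Vᵀ directly (`Vt`), whereas MATLAB's `svd` returns V.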
Dealing with the issue of best regressors
Why is SVD so useful?
Application #1: inverses
A⁻¹ = (Vᵀ)⁻¹ W⁻¹ U⁻¹ = V W⁻¹ Uᵀ  (writing the SVD as A = U W Vᵀ)
Using the fact that inverse = transpose
for orthogonal matrices
Since W is diagonal, W⁻¹ is also diagonal, with the reciprocals of the
entries of W
Dealing with the issue of best regressors
A⁻¹ = (Vᵀ)⁻¹ W⁻¹ U⁻¹ = V W⁻¹ Uᵀ
This fails when some wᵢ are 0
It's supposed to fail: the matrix is singular
Pseudoinverse: if wᵢ = 0, set 1/wᵢ to 0 (!)
"Closest" matrix to inverse
Defined for all matrices (even non-square, singular, etc.)
Equal to (AᵀA)⁻¹Aᵀ if AᵀA is invertible
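The construction above can be sketched directly from the SVD (matrix names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))            # non-square, full column rank

U, w, Vt = np.linalg.svd(A, full_matrices=False)

# Pseudoinverse: invert the nonzero singular values, set 1/w_i = 0 otherwise.
tol = 1e-10 * w[0]
w_inv = np.where(w > tol, 1.0 / w, 0.0)
A_pinv = Vt.T @ np.diag(w_inv) @ U.T
```

For a full-column-rank A this equals (AᵀA)⁻¹Aᵀ, and it agrees with NumPy's built-in `np.linalg.pinv`.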
Dealing with the issue of best regressors
Solving Ax = b by least squares:
x = pseudoinverse(A) times b
Compute the pseudoinverse using SVD
Lets you see if the data are singular
Even if not singular, the ratio of the largest to smallest singular value
(the condition number) tells you how stable the solution will be
Set 1/wᵢ to 0 if wᵢ is small (even if not exactly 0)
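A minimal sketch of this truncated-SVD solve on a nearly rank-deficient system (the data construction is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Nearly rank-deficient A: the third column almost duplicates the first.
A = rng.standard_normal((20, 3))
A[:, 2] = A[:, 0] + 1e-12 * rng.standard_normal(20)
b = rng.standard_normal(20)

U, w, Vt = np.linalg.svd(A, full_matrices=False)
cond = w[0] / w[-1]                 # huge => plain LS solution is unstable

# Truncated pseudoinverse: zero out 1/w_i for small w_i before solving.
tol = 1e-8 * w[0]
w_inv = np.where(w > tol, 1.0 / w, 0.0)
x = Vt.T @ (w_inv * (U.T @ b))
```

Truncation trades a small bias for a large variance reduction: the ill-determined direction is simply excluded from the solution.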
Dealing with the issue of best regressors
If some of the diagonal elements of D are small, the corresponding
PCs have small variances and consequently carry less information.
We rank the PCs in descending order of their singular values when
evaluating their contributions to the output.
We form a series of matrices U each containing a subset of PCs by adding one PC
at a time as a column vector.
The matrices U differ by the number of columns involved and their associated
regression equations (Y = UB + E ) represent the candidate models from which the
"best" model should be selected.
Note that these candidate models are data-specific.
For model selection, we employ the widely used model order selection criteria.
Dealing with the issue of best regressors
With q PCs, the corresponding U, V and D matrices are
denoted U (N × q), V (p × q) and D (q × q)
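The PCR procedure described above can be sketched as follows (data and dimensions are illustrative; the regression back-map is B̂ = V_q D_q⁻¹ U_qᵀ Y):

```python
import numpy as np

rng = np.random.default_rng(5)
N, p = 80, 10
X = rng.standard_normal((N, p))
B_true = np.zeros(p)
B_true[:3] = [1.0, -2.0, 0.5]
Y = X @ B_true + 0.01 * rng.standard_normal(N)

U, d, Vt = np.linalg.svd(X, full_matrices=False)

def pcr(q):
    """Regress Y on the first q principal components and map the
    coefficients back to the regressor space: B = V_q D_q^{-1} U_q^T Y."""
    Uq, dq, Vq = U[:, :q], d[:q], Vt[:q].T
    return Vq @ ((Uq.T @ Y) / dq)

# Residuals shrink (never grow) as PCs are added one at a time; a
# model-order criterion (e.g. AIC/BIC, not shown) would pick the best q.
resid = [np.linalg.norm(Y - X @ pcr(q)) for q in range(1, p + 1)]
```

With q = p the PCR estimate coincides with the ordinary LS solution, since all directions of X are retained.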
Estimating State space models: Subspace method
Estimating the state space model using LSE
If we do not have insight into a particular structure, then
different choices of the state vector give infinitely many
equivalent solutions
If we knew the sequence of state vectors:
Therefore the state and output equations can be estimated
using LS.
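This step can be sketched numerically: if the state sequence x(t) were known, the matrices [A B; C D] would follow from one LS regression of [x(t+1); y(t)] on [x(t); u(t)]. All system values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# A hypothetical known system used only to generate data.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
Dm = np.array([[0.0]])

T = 200
u = rng.standard_normal((T, 1))
x = np.zeros((T + 1, 2))
y = np.zeros((T, 1))
for t in range(T):
    y[t] = C @ x[t] + Dm @ u[t]
    x[t + 1] = A @ x[t] + B @ u[t]

# With the states known, regress [x(t+1), y(t)] on [x(t), u(t)].
Z = np.hstack([x[:T], u])            # regressors
W = np.hstack([x[1:T + 1], y])       # targets
Theta, *_ = np.linalg.lstsq(Z, W, rcond=None)
Theta = Theta.T                      # rows: [A B] then [C D]
A_hat, B_hat = Theta[:2, :2], Theta[:2, 2:]
```

In a real subspace method the states are of course not given; they are first reconstructed (up to a similarity transform) from the data, and this LS step is then applied.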
Estimating State space models: Subspace method
If we define
Estimating State space models: Subspace method
k-step-ahead prediction based on a finite number of
measurements:
Algorithm steps:
Instrumental variable method
Slides 14-21 from: http://www.it.uu.se/edu/course/homepage/systemid/vt05
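As a concrete reference for the exercise below, a minimal sketch of the basic IV estimator θ̂ = (ZᵀΦ)⁻¹ZᵀY, using delayed inputs as instruments (the system and all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# First-order ARX system y(t) = a*y(t-1) + b*u(t-1) + v(t), where the
# noise v is colored (MA(1)), which biases the plain LS estimate.
a_true, b_true = 0.7, 1.0
T = 5000
u = rng.standard_normal(T)
e = rng.standard_normal(T)
v = e + 0.9 * np.roll(e, 1)
v[0] = e[0]
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + v[t]

Phi = np.column_stack([y[1:-1], u[1:-1]])  # regressors [y(t-1), u(t-1)]
Z = np.column_stack([u[:-2], u[1:-1]])     # instruments: delayed inputs
Y = y[2:]

# IV estimate: theta = (Z^T Phi)^{-1} Z^T Y
theta_iv = np.linalg.solve(Z.T @ Phi, Z.T @ Y)
```

The instruments are correlated with the regressors but uncorrelated with the noise, so the IV estimate stays consistent where LS is biased by the colored disturbance.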
Example
Example:
Exercise:
Implement the LSE and IV methods on your data.
Analyze the estimation error and discuss your findings.