Page 1: Modeling methods

Modeling methods

Used in system identification and in MRAS

Page 2: Modeling methods

Linear-in-parameter models (models for describing linear systems):

• Auto-Regressive model (AR model)
• Moving Average model (MA model)
• Finite Impulse Response model (FIR model)
• Auto-Regressive model with extraneous (extra) input (ARX model)
• Auto-Regressive Moving Average model (ARMA model)
• Auto-Regressive Moving Average model with extraneous input (ARMAX model)
• Auto-Regressive Integrated Moving Average model (ARIMA model)
• Auto-Regressive Integrated Moving Average model with extraneous input (ARIMAX model)

Page 3: Modeling methods

Contd.

• Each of the above models has its own way of describing the relationship between the input, the output and the error.

• Some models allow much freedom in describing the input, some allow freedom in describing the error, others allow freedom in describing the output, and certain models describe the input, output and error all with freedom.

• Based on the plant conditions, a particular model can be chosen.

• The choice of a suitable model for a plant is very important, since the parameters to be estimated depend on the model chosen.

• Great care therefore needs to be taken in the choice of a model for the description of the plant.

Page 4: Modeling methods

Auto-Regressive Model (AR model)

• The Auto-Regressive model is given by the equation

A(q^-1) y(t) = e(t), i.e. y(t) + a1·y(t-1) + … + a_na·y(t-na) = e(t)

Where A(q^-1) = 1 + a1·q^-1 + … + a_na·q^-na and e(t) is white noise.

Page 5: Modeling methods

• This model describes a relation only between the output and error.

• The freedom in describing the output is more than the error.

• This model is seldom used alone to describe a plant, as the input is not described here; it is usually used in combination with other models.

• The block diagram representation of AR model is given by:

The parameter vector to be estimated in this model is θ = [a1 a2 … a_na]^T.
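In regression form y(t) = φ(t)^T θ + e(t) with φ(t) = [-y(t-1), …, -y(t-na)]^T, this parameter vector can be recovered by least squares. A minimal sketch in Python; the AR(2) coefficients, noise level and sample size are illustrative assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process: y(t) + a1*y(t-1) + a2*y(t-2) = e(t)
a1, a2 = -1.5, 0.7                 # illustrative stable coefficients
N = 5000
e = 0.1 * rng.standard_normal(N)   # white-noise disturbance
y = np.zeros(N)
for t in range(2, N):
    y[t] = -a1 * y[t - 1] - a2 * y[t - 2] + e[t]

# Regression form: y(t) = phi(t)^T theta + e(t),
# with phi(t) = [-y(t-1), -y(t-2)] and theta = [a1, a2]
Phi = np.column_stack([-y[1:-1], -y[:-2]])
theta_hat, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
```

The estimate `theta_hat` approaches the true [a1, a2] as N grows, since the AR regressors are uncorrelated with the current white-noise term.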

Page 6: Modeling methods

Moving Average Model (MA model)

• The moving average model is given by the equation

y(t) = C(q^-1) e(t) = e(t) + c1·e(t-1) + … + c_nc·e(t-nc)

Where C(q^-1) = 1 + c1·q^-1 + … + c_nc·q^-nc and e(t) is white noise.

Page 7: Modeling methods

• This model describes a relation only between the output and error.

• This model is called Moving Average model because the error here is expressed as a moving average of the white noise.

Page 8: Modeling methods

• More freedom is given to the description of the error than to the output.

• This method is seldom used to describe a plant as the input is not described here, but is usually used in combination with other models.

• The block diagram representation of MA model is given by:

The parameter vector to be estimated in this model is θ = [c1 c2 … c_nc]^T.
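The "moving average of the white noise" can be checked numerically: for an MA process, the output variance equals (1 + c1² + … + c_nc²) times the noise variance. A small sketch in Python (the MA(2) coefficients are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# MA(2) model: y(t) = e(t) + c1*e(t-1) + c2*e(t-2)
c1, c2 = 0.6, 0.3             # illustrative coefficients
N = 200_000
e = rng.standard_normal(N)    # unit-variance white noise
y = e.copy()
y[1:] += c1 * e[:-1]
y[2:] += c2 * e[:-2]

# For an MA process: var(y) = (1 + c1^2 + c2^2) * var(e)
predicted_var = 1 + c1**2 + c2**2   # = 1.45 here
```

Note that, unlike the AR case, the MA coefficients cannot be obtained by a single linear regression, because the past noise terms e(t-1), e(t-2) are not measured; this is one reason the model is usually combined with others.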

Page 9: Modeling methods

Finite Impulse Response Model (FIR model)

• The Finite Impulse Response model is given by the equation:

y(t) = B(q^-1) u(t) + e(t) = b1·u(t-1) + … + b_nb·u(t-nb) + e(t)

Page 10: Modeling methods

• This model describes a relation between the input, the error and the output.
• The input can be described with much freedom compared to the error and the output.
• This model can be used to describe plants where much freedom is not required for the description of errors.
• The block diagram representation of the FIR model is:

The parameter vector to be estimated in this model is θ = [b1 b2 … b_nb]^T.
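Because the FIR regressors are delayed inputs only, ordinary least squares recovers the impulse-response coefficients directly. A minimal sketch (the coefficients, input signal and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# FIR model: y(t) = b1*u(t-1) + b2*u(t-2) + b3*u(t-3) + e(t)
b_true = np.array([0.8, -0.4, 0.2])   # illustrative impulse response
N = 2000
u = rng.standard_normal(N)            # exciting input signal
e = 0.05 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(3, N):
    y[t] = b_true @ u[t - 3:t][::-1] + e[t]   # b1*u(t-1)+b2*u(t-2)+b3*u(t-3)

# Regression vector phi(t) = [u(t-1), u(t-2), u(t-3)]
Phi = np.column_stack([u[2:-1], u[1:-2], u[:-3]])
theta_hat, *_ = np.linalg.lstsq(Phi, y[3:], rcond=None)
```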

Page 11: Modeling methods

Auto-Regressive model with extraneous (extra) input (ARX model)

• Also known as the Equation Error model, the ARX model is given by:

A(q^-1) y(t) = B(q^-1) u(t) + e(t)

Page 12: Modeling methods

• This model describes a relation between the input, the error and the output.
• Also, the input and output can be described with much freedom compared to the error.
• This model can be used to describe plants where much freedom is not required for the description of errors.
• The block diagram representation of the ARX model is:
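The equation-error structure makes the ARX model directly amenable to least squares, with the regression vector built from past outputs and past inputs. A minimal first-order sketch (the parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# First-order ARX model: y(t) = -a*y(t-1) + b*u(t-1) + e(t)
a, b = -0.9, 0.5              # illustrative values; theta = [a, b]
N = 3000
u = rng.standard_normal(N)
e = 0.05 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a * y[t - 1] + b * u[t - 1] + e[t]

# Equation-error form: y(t) = phi(t)^T theta + e(t),
# with phi(t) = [-y(t-1), u(t-1)]
Phi = np.column_stack([-y[:-1], u[:-1]])
theta_hat, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
```

Because the equation error e(t) is white and uncorrelated with the regressors, the estimate is consistent; with colored noise (the ARMAX case) plain least squares would be biased.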

Page 13: Modeling methods

Auto-Regressive Moving Average Model (ARMA Model)

• The ARMA model is described by the equation

A(q^-1) y(t) = C(q^-1) e(t)

Page 14: Modeling methods

• This model is a combination of the Auto-Regressive (AR) model and the Moving Average (MA) model.

• This model gives a relation between the output and the error. Here both the output and the error are described with much freedom.

• This model is not often used to describe plants, as the input is not considered here.
• The block diagram representation of the ARMA model is:

Page 15: Modeling methods

Auto-Regressive Moving Average Model with extraneous input (ARMAX Model)

• The ARMAX model is described by the equation

A(q^-1) y(t) = B(q^-1) u(t) + C(q^-1) e(t)

Page 16: Modeling methods

• An extension of the ARMA model, where an extraneous input u(t) is included in the model.

• Used to describe the plant, as this model describes the input, error and output with full freedom.

• The block diagram representation of ARMAX model is:

Page 17: Modeling methods

Auto-Regressive Integrated Moving Average Model with eXtraneous input

(ARIMAX Model)

• The models described above are valid only for white-noise disturbances. To be able to describe disturbances which are variable or drifting in nature, the ARIMAX model is preferred.

• The equation describing the ARIMAX model is:

A(q^-1) y(t) = B(q^-1) u(t) + [C(q^-1)/Δ] e(t), where Δ = 1 - q^-1

Page 18: Modeling methods

• In this, the disturbance is described as a summation of a constant part and a variable part.

• Hence, this model can be used to describe systems where the disturbance is drifting in nature.

• The parameter vector to be estimated is θ = [a1 … a_na b1 … b_nb c1 … c_nc]^T.

Page 19: Modeling methods

Note

Page 20: Modeling methods

Parametric Estimation Techniques

• A parametric estimation technique is characterized by a finite-dimensional parameter vector and a mapping from the recorded data to the estimated parameter vector.

• So, in parametric methods, the result of identification can be expressed by a finite-dimensional parameter vector in matrix form.

• Some of the parametric estimation techniques are:
• Least Squares (LS) estimation
• Recursive Least Squares (RLS) estimation
• Extended Least Squares (ELS) estimation
• Least Mean Square (LMS) estimation

Page 21: Modeling methods

Least Square Estimation

• Carl Friedrich Gauss formulated the least squares principle.

• He stated that “the unknown parameters of a mathematical model should be chosen in such a way that the sum of squares of the differences between the actually observed and computed values, multiplied by numbers that measure the degree of precision, is a minimum”.

Page 22: Modeling methods

• The least squares principle is simple to apply to a mathematical model that can be written in the form

y(i) = φ1(i)θ1 + φ2(i)θ2 + … + φn(i)θn = φ(i)^T θ

Page 23: Modeling methods

• Such a model is called a regression model.
• The model is indexed by the variable i, which often denotes time.
• The variables φk(i) are called the regression variables, which are usually a set of inputs.
• As per the least squares principle, the parameter vector θ should be chosen to minimize the least-squares loss function given by:

V(θ) = (1/2) Σ (y(i) − φ(i)^T θ)², summed over i = 1 to N

Page 24: Modeling methods
Page 25: Modeling methods
Page 26: Modeling methods
Page 27: Modeling methods

• The first term on the right-hand side is independent of θ. The second term is always positive.

• Hence the minimum is obtained for:

θ = (Φ^T Φ)^(-1) Φ^T Y, provided Φ^T Φ is nonsingular,

where Φ has the rows φ(i)^T and Y collects the observations y(i).
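The closed-form solution can be evaluated directly from the stacked regression matrix; a minimal sketch on synthetic data (the parameter values and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

theta_true = np.array([2.0, -1.0, 0.5])   # illustrative parameters
Phi = rng.standard_normal((200, 3))       # regression matrix, rows phi(i)^T
Y = Phi @ theta_true + 0.01 * rng.standard_normal(200)

# Normal equations: (Phi^T Phi) theta_hat = Phi^T Y
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
```

In practice, solving the linear system (or using a QR-based solver) is preferred over explicitly inverting Φ^T Φ, for numerical reasons.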

Page 28: Modeling methods
Page 29: Modeling methods

Example

Page 30: Modeling methods
Page 31: Modeling methods
Page 32: Modeling methods
Page 33: Modeling methods

Statistical Properties of the Least Squares Estimation Technique

Page 34: Modeling methods

Recursive Least Square (RLS) Estimation

• In adaptive controllers, the observations are obtained sequentially in real time.

• To save computation time, the computation can be made recursive in nature.

• The computation of least square estimate can be arranged in such a way that the results obtained at time t – 1 can be used to get the estimates at time t.
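The recursion can be sketched as follows: at each step only the gain K(t), the estimate θ(t) and the matrix P(t) are updated, so the results from time t − 1 are reused at time t instead of re-solving the full batch problem. The parameter values and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

theta_true = np.array([1.0, -0.5])   # illustrative parameters
n = theta_true.size
theta = np.zeros(n)                  # initial estimate theta_hat(0)
P = 1000.0 * np.eye(n)               # P(0) = alpha*I with alpha large

for t in range(500):
    phi = rng.standard_normal(n)                 # regression vector phi(t)
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    K = P @ phi / (1.0 + phi @ P @ phi)          # gain K(t)
    theta = theta + K * (y - phi @ theta)        # update the estimate
    P = P - np.outer(K, phi) @ P                 # update P without re-inversion
```

After enough informative samples, `theta` coincides with the batch least squares estimate computed on the same data.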

Page 35: Modeling methods
Page 36: Modeling methods
Page 37: Modeling methods
Page 38: Modeling methods
Page 39: Modeling methods

RLS technique for time varying parameters

• In the least squares model given by the regression equation, the parameters are assumed to be constant,

• but in practical situations they are time-varying in nature.

• The least squares method can be extended for the following two cases:

• The parameters are assumed to change abruptly but infrequently.

• The parameters are changing continuously but slowly.

Page 40: Modeling methods

Parameter changes abruptly but infrequently:

• The case of abrupt parameter changes can be covered by resetting.

• The matrix P in the least squares algorithm is periodically reset to αI, where α is a large number.

• This implies that the gain K (t) in the estimator becomes large and the estimate can be updated with a larger step.

• A more sophisticated version is to run n estimators in parallel, which are reset sequentially.

• The estimate is then chosen by using some decision logic.

Page 41: Modeling methods

Parameters are slowly time-varying in nature:

• The case of slowly time-varying parameters can be covered by relatively simple mathematical models. The loss function in this case is taken to be:

V(θ, t) = (1/2) Σ λ^(t−i) (y(i) − φ(i)^T θ)², summed over i = 1 to t

The parameter λ is called the forgetting factor or discounting factor, with 0 < λ ≤ 1. The method is therefore called exponential forgetting or exponential discounting.
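Exponential forgetting changes the standard RLS recursion in only two places: the gain denominator uses λ instead of 1, and the P update is divided by λ so that old data are discounted. A minimal tracking sketch (the drift profile, λ and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

lam = 0.98                          # forgetting factor, 0 < lambda <= 1
theta = np.zeros(1)
P = 100.0 * np.eye(1)
N = 2000
b_true = np.linspace(1.0, 2.0, N)   # slowly drifting true parameter

for t in range(N):
    phi = np.array([rng.standard_normal()])
    y = phi[0] * b_true[t] + 0.01 * rng.standard_normal()
    K = P @ phi / (lam + phi @ P @ phi)       # gain with discounting
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam      # old data forgotten at rate lambda
```

With λ = 0.98 the effective data window is roughly 1/(1 − λ) = 50 samples, so the estimate lags the drifting parameter only slightly.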

Page 42: Modeling methods
Page 43: Modeling methods

Simplified Algorithms (Algorithms that avoid Updating of the P matrix)

• The recursive least-squares algorithm has two sets of state variables, θ and P, which must be updated at each step.

• For large n, the updating of the matrix P dominates the computing effort.
• There are several simplified algorithms that avoid updating the P matrix at the cost of slower convergence.
• Some algorithms that avoid updating of the P matrix are:

• Kaczmarz’s projection algorithm
• Projection algorithm (normalized projection algorithm)
• Stochastic approximation algorithm
• Least mean square algorithm

Page 44: Modeling methods

Kaczmarz’s Projection algorithm:
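In its standard form, Kaczmarz's algorithm updates the estimate by projecting it onto the hyperplane defined by the newest measurement, θ(t) = θ(t−1) + φ(t)(y(t) − φ(t)^T θ(t−1)) / (φ(t)^T φ(t)), with no P matrix at all. A minimal sketch (the noise-free data and parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

theta_true = np.array([1.0, 2.0])   # illustrative parameters
theta = np.zeros(2)

for t in range(2000):
    phi = rng.standard_normal(2)
    y = phi @ theta_true                  # noise-free measurement
    # Project theta onto the hyperplane phi^T theta = y
    theta = theta + phi * (y - phi @ theta) / (phi @ phi + 1e-12)
```

The small constant in the denominator guards against a (near-)zero regression vector; the normalized projection algorithm makes this guard an explicit design parameter.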

Page 45: Modeling methods

Projection algorithm: (Normalized Projection Algorithm)

Page 46: Modeling methods

Stochastic approximation algorithm:

Page 47: Modeling methods

Least mean square algorithm

• A further simplified algorithm is obtained by eliminating the term P(t) altogether. The result is the least mean square algorithm, given by

θ(t) = θ(t−1) + γ φ(t) (y(t) − φ(t)^T θ(t−1))

where γ is a fixed gain (step size).
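A minimal sketch of the LMS update; the step size γ, parameter values and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

gamma = 0.05                         # fixed step size replacing P(t)
theta_true = np.array([0.5, -0.3])   # illustrative parameters
theta = np.zeros(2)

for t in range(5000):
    phi = rng.standard_normal(2)
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    theta = theta + gamma * phi * (y - phi @ theta)   # LMS update
```

The fixed gain trades convergence speed against steady-state error: a larger γ tracks faster but leaves a larger residual fluctuation around the true parameters.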

Limitations of the Standard Least Squares Algorithm:

It can be directly applied only to systems which can be expressed in terms of the regression model. To apply the least squares principle to a system, the system must first be converted to a regression model.

Page 48: Modeling methods

Extended Least Squares Estimation Algorithm:
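Since the noise terms e(t−1) in the ARMAX regression are not measurable, ELS replaces them with the model's own prediction errors (residuals). The sketch below is an offline, iterated pseudo-linear-regression variant on simulated first-order ARMAX data; the model orders, parameter values and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)

# First-order ARMAX: y(t) = -a*y(t-1) + b*u(t-1) + e(t) + c*e(t-1)
a, b, c = -0.7, 1.0, 0.4      # illustrative parameter values
N = 8000
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a * y[t - 1] + b * u[t - 1] + e[t] + c * e[t - 1]

# Iterated pseudo-linear regression: past residuals eps(t-1) stand in
# for the unmeasurable noise terms e(t-1) in the regression vector.
eps = np.zeros(N)
theta = np.zeros(3)           # theta = [a, b, c]
for _ in range(20):
    Phi = np.column_stack([-y[:-1], u[:-1], eps[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    eps[1:] = y[1:] - Phi @ theta     # refresh the residual sequence
```

Each pass refits the parameters and then refreshes the residuals; under the usual convergence conditions on C(q^-1), the estimates approach the true ARMAX parameters, whereas plain least squares on the same data would be biased by the colored noise.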

Page 49: Modeling methods