
Univariate Time Series Analysis


Page 1: Univariate Time Series Analysis


Basic time series concepts ARMA and ARIMA

Page 2: Univariate Time Series Analysis


• In Univariate Time Series models we attempt to predict a variable using only information contained in its past values. (i.e. let the data speak for themselves)

• Stochastic Process: a sequence of random variables Y1, Y2, ..., YT. The observed values of a time series are considered one realization of the stochastic process (analogous to the relationship between a population and a sample).

• A Strictly Stationary Process: a strictly stationary process is one where the joint distribution of Y1, Y2, ..., YT is the same as that of Y1-k, Y2-k, ..., YT-k, i.e. its properties remain invariant to a time displacement. All moments are time invariant.

• A Weakly Stationary Process: if a series satisfies the following three equations, it is said to be weakly or covariance stationary:

1. E(yt) = μ, t = 1, 2, ..., ∞
2. E(yt − μ)(yt − μ) = σ² < ∞
3. E(yt1 − μ)(yt2 − μ) = γ(t2−t1), ∀ t1, t2

Univariate Time Series Models

Page 3: Univariate Time Series Analysis


A stationary series with zero mean

Page 4: Univariate Time Series Analysis


Non-stationarity due to changing mean

Page 5: Univariate Time Series Analysis


Non-stationarity due to changing variance

Page 6: Univariate Time Series Analysis

Non-stationarity in autocorrelations as well as in variance

[Figure: a driftless random walk, Xt = Xt-1 + N(0,9), plotted over 1000 observations]

Page 7: Univariate Time Series Analysis

Non-stationarity in autocorrelations as well as in mean and variance

[Figure: a random walk with drift, Xt = 0.2 + Xt-1 + N(0,9), plotted over 1000 observations]

Page 8: Univariate Time Series Analysis


Non-Stationarity due to mean and variance: real data

Source: Mukherjee et al. (1998), Econometrics and Data Analysis for Developing Countries.

Page 9: Univariate Time Series Analysis

Log transformation to remove non-stationarity in variance

Page 10: Univariate Time Series Analysis

Why is stationarity required?

For a stochastic process Y1, Y2, ..., YT we would need to estimate:

• T means: E(Y1), E(Y2), ..., E(YT)
• T variances: V(Y1), V(Y2), ..., V(YT)
• T(T−1)/2 covariances: Cov(Yi, Yj), i < j
• In all, 2T + T(T−1)/2 = T(T+3)/2 parameters

We only have T time series observations, so some simplifying assumptions are needed to reduce the number of parameters to be estimated. Such a simplification comes from the stationarity assumption.

Page 11: Univariate Time Series Analysis

• So if the process is covariance stationary, all the variances are the same and all the covariances depend only on the difference between t1 and t2. The moments

E(yt − E(yt))(yt+s − E(yt+s)) = γs, s = 0, 1, 2, ...

are known as the covariance function.

• The covariances, γs, are known as autocovariances.

• However, the value of the autocovariances depends on the units of measurement of yt.

• It is thus more convenient to use the autocorrelations, which are the autocovariances normalised by dividing by the variance:

τs = γs / γ0, s = 0, 1, 2, ...

• If we plot τs against s = 0, 1, 2, ... then we obtain the autocorrelation function or correlogram.

Univariate Time Series Models (cont'd)

Page 12: Univariate Time Series Analysis

• A white noise process is one with (virtually) no discernible structure. A definition of a white noise process is

E(yt) = μ
Var(yt) = σ²
γ(t−r) = σ² if t = r, and 0 otherwise

• Thus the autocorrelation function will be zero for s ≥ 1.

• τ̂s ~ approximately N(0, 1/T), where T = sample size.

• We can use this to test whether any autocorrelation coefficient is significantly different from zero and for constructing a confidence interval.

• For example, a 95% confidence interval would be given by ±1.96 × 1/√T. If the sample autocorrelation coefficient, τ̂s, falls outside this region for any value of s, then we reject the null hypothesis that the true value of the coefficient at lag s is zero.

A White Noise Process
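To make the confidence band concrete, here is a minimal Python sketch (numpy and statsmodels assumed available) that computes sample autocorrelations and flags any lying outside the ±1.96 × 1/√T band; the white-noise series `y` is a placeholder for any series of interest:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(0)
y = rng.normal(size=500)      # placeholder series: white noise, T = 500

T = len(y)
band = 1.96 / np.sqrt(T)      # 95% band under the white-noise null
rho = acf(y, nlags=10)        # sample autocorrelations, lags 0..10

for s, r in enumerate(rho[1:], start=1):
    verdict = "outside the band" if abs(r) > band else ""
    print(f"lag {s:2d}: {r:+.3f} {verdict}")
```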

Page 13: Univariate Time Series Analysis

• We can also test the joint hypothesis that all m of the τk correlation coefficients are simultaneously equal to zero using the Q-statistic developed by Box and Pierce:

Q = T Σ_{k=1}^{m} τ̂k²

where T = sample size and m = maximum lag length.

• The Q-statistic is asymptotically distributed as χ²(m).

• However, the Box-Pierce test has poor small sample properties, so a variant has been developed, called the Ljung-Box statistic:

Q* = T(T + 2) Σ_{k=1}^{m} τ̂k² / (T − k) ~ χ²(m)

• This statistic is very useful as a portmanteau (general) test of linear dependence in time series.

Joint Hypothesis Tests

Page 14: Univariate Time Series Analysis

• Question: suppose that a researcher has estimated the first 5 autocorrelation coefficients using a series of length 100 observations, and found them to be (from lag 1 to 5): 0.207, −0.013, 0.086, 0.005, −0.022. Test each individual coefficient for significance, and use both the Box-Pierce and Ljung-Box tests to establish whether they are jointly significant.

• Solution: a coefficient would be significant if it lay outside (−0.196, +0.196) at the 5% level, so only the first autocorrelation coefficient is significant. Q = 5.09 and Q* = 5.26, compared with a tabulated χ²(5) = 11.1 at the 5% level, so the 5 coefficients are jointly insignificant. [p-value = P(χ²(5) > 5.09) ≈ 0.41]

An ACF Example
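These numbers are easy to verify; a quick sketch (numpy and scipy only):

```python
import numpy as np
from scipy.stats import chi2

T = 100
r = np.array([0.207, -0.013, 0.086, 0.005, -0.022])  # acf at lags 1..5
k = np.arange(1, len(r) + 1)

Q = T * np.sum(r**2)                            # Box-Pierce
Q_star = T * (T + 2) * np.sum(r**2 / (T - k))   # Ljung-Box

print(f"Q  = {Q:.2f}, p-value = {chi2.sf(Q, df=5):.3f}")        # 5.09, ~0.41
print(f"Q* = {Q_star:.2f}, p-value = {chi2.sf(Q_star, df=5):.3f}")  # 5.26
```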

Page 15: Univariate Time Series Analysis

Moving Average Processes

• Some economic hypotheses lead to a moving average time series structure. For example, changes in the price of a stock from one day to the next behave as a series of uncorrelated random variables with zero mean and constant variance, i.e.

yt = Pt − Pt-1 = ut, t = 1, 2, ..., T [ut is an uncorrelated random variable]

• The random component ut reflects unexpected news, e.g. new information about the financial health of a corporation, the popularity of a product suddenly rising or falling (due to reports of desirable or undesirable effects), the emergence of a new competitor, the revelation of a management scandal, etc.

• But suppose that the full impact of any unexpected news is not completely absorbed by the market in one day. Then the price change the next day might be

yt+1 = ut+1 + θut

where ut+1 is the effect of new information received during day t+1 and θut reflects the continuing assessment of day t news.

• The equation above is a moving average process: the value of the economic variable is a weighted combination of current and past period random disturbances.

Page 16: Univariate Time Series Analysis

• Let ut (t = 1, 2, 3, ...) be a sequence of independently and identically distributed (iid) random variables with E(ut) = 0 and Var(ut) = σ². Then

yt = μ + ut + θ1 ut-1 + θ2 ut-2 + ... + θq ut-q

is a qth order moving average model, MA(q).

• Its properties are:

E(yt) = μ

Var(yt) = γ0 = (1 + θ1² + θ2² + ... + θq²)σ²

Covariances:

γs = (θs + θs+1θ1 + θs+2θ2 + ... + θqθq-s)σ², for s = 1, 2, ..., q
γs = 0, for s > q

Moving Average Processes

Page 17: Univariate Time Series Analysis

1. Consider the following MA(2) process:

Xt = ut + θ1 ut-1 + θ2 ut-2

where ut is a zero mean white noise process with variance σ².

(i) Calculate the mean and variance of Xt.
(ii) Derive the autocorrelation function for this process (i.e. express the autocorrelations τ1, τ2, ... as functions of the parameters θ1 and θ2).
(iii) If θ1 = −0.5 and θ2 = 0.25, sketch the acf of Xt.

Example of an MA Problem

Page 18: Univariate Time Series Analysis

(i) If E(ut) = 0, then E(ut-i) = 0 ∀ i. So

E(Xt) = E(ut + θ1 ut-1 + θ2 ut-2) = E(ut) + θ1 E(ut-1) + θ2 E(ut-2) = 0 (why?)

Var(Xt) = E[Xt − E(Xt)][Xt − E(Xt)], but E(Xt) = 0, so

Var(Xt) = E[(Xt)(Xt)]
= E[(ut + θ1 ut-1 + θ2 ut-2)(ut + θ1 ut-1 + θ2 ut-2)]
= E[ut² + θ1²ut-1² + θ2²ut-2² + cross-products]

But E[cross-products] = 0, since Cov(ut, ut-s) = 0 for s ≠ 0. (why?)

Solution

Page 19: Univariate Time Series Analysis

So

Var(Xt) = γ0 = E[ut² + θ1²ut-1² + θ2²ut-2²] = σ² + θ1²σ² + θ2²σ² = (1 + θ1² + θ2²)σ²

(ii) The acf of Xt:

γ1 = E[Xt − E(Xt)][Xt-1 − E(Xt-1)] (first order autocovariance)
= E[Xt][Xt-1]
= E[(ut + θ1 ut-1 + θ2 ut-2)(ut-1 + θ1 ut-2 + θ2 ut-3)]
= E[θ1 ut-1² + θ1θ2 ut-2²]
= θ1σ² + θ1θ2σ²
= (θ1 + θ1θ2)σ²

Solution (cont'd)

Page 20: Univariate Time Series Analysis

γ2 = E[Xt − E(Xt)][Xt-2 − E(Xt-2)] (second order autocovariance)
= E[Xt][Xt-2]
= E[(ut + θ1 ut-1 + θ2 ut-2)(ut-2 + θ1 ut-3 + θ2 ut-4)]
= E[θ2 ut-2²]
= θ2σ²

γ3 = E[Xt − E(Xt)][Xt-3 − E(Xt-3)]
= E[Xt][Xt-3]
= E[(ut + θ1 ut-1 + θ2 ut-2)(ut-3 + θ1 ut-4 + θ2 ut-5)]
= 0

So γs = 0 for s > 2.

Solution (cont'd)

Page 21: Univariate Time Series Analysis

Solution (cont'd)

We have the autocovariances; now calculate the autocorrelations:

τ0 = γ0/γ0 = 1

τ1 = γ1/γ0 = (θ1 + θ1θ2)σ² / [(1 + θ1² + θ2²)σ²] = (θ1 + θ1θ2) / (1 + θ1² + θ2²)

τ2 = γ2/γ0 = θ2σ² / [(1 + θ1² + θ2²)σ²] = θ2 / (1 + θ1² + θ2²)

τ3 = γ3/γ0 = 0

τs = γs/γ0 = 0, ∀ s > 2

(iii) For θ1 = −0.5 and θ2 = 0.25, substituting these into the formulae above gives τ1 = −0.476 and τ2 = 0.190.
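These values can be cross-checked numerically; a minimal sketch using statsmodels' ArmaProcess (which takes the full lag polynomials, including the leading 1):

```python
from statsmodels.tsa.arima_process import ArmaProcess

# X_t = u_t - 0.5*u_{t-1} + 0.25*u_{t-2}: MA polynomial (1, -0.5, 0.25)
ma2 = ArmaProcess(ar=[1], ma=[1, -0.5, 0.25])

print(ma2.acf(lags=5))   # [1, -0.476, 0.190, 0, 0], matching the derivation
```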

Page 22: Univariate Time Series Analysis

Thus the ACF plot will appear as follows:

ACF Plot

[Figure: acf of the MA(2) process against s = 0, 1, ..., 6: spikes of 1 at lag 0, −0.476 at lag 1 and 0.190 at lag 2, and zero thereafter]

Page 23: Univariate Time Series Analysis

Autoregressive Processes

Economic activity takes time to slow down and speed up; there is a built-in inertia in economic series. A simple process that captures this inertia is the first order autoregressive process

yt = μ + φ1 yt-1 + ut

• where μ is an intercept parameter, and it is assumed that −1 < φ1 < 1
• ut is an uncorrelated random error with mean zero and variance σ²
• yt is seen to comprise two parts (in addition to the intercept):
i. a carry-over component φ1 yt-1, depending on the last period's value of y
ii. a new shock ut to the level of the economic variable in the current period

Page 24: Univariate Time Series Analysis

• An autoregressive model of order p, an AR(p), can be expressed as

yt = μ + φ1 yt-1 + φ2 yt-2 + ... + φp yt-p + ut

• Or, using the lag operator notation Lyt = yt-1, Li yt = yt-i:

yt = μ + Σ_{i=1}^{p} φi yt-i + ut or yt = μ + Σ_{i=1}^{p} φi Li yt + ut

or φ(L)yt = μ + ut, where φ(L) = 1 − (φ1L + φ2L² + ... + φpL^p).

Autoregressive Processes

Page 25: Univariate Time Series Analysis

The Stationary Condition for an AR Model

• The condition for stationarity of a general AR(p) model is that the roots of the lag polynomial

1 − φ1L − φ2L² − ... − φpL^p = 0

all lie outside the unit circle, i.e. all have absolute value greater than one.

• A stationary AR(p) model is required for it to have an MA(∞) representation.

• Example 1: Is yt = yt-1 + ut stationary?
The characteristic root is 1, so it is a unit root process and hence non-stationary. (Simulation exercise: acf, pacf.)

• Example 2: Is yt = 1.2yt-1 − 0.32yt-2 + ut stationary?
The characteristic polynomial is

1 − 1.2L + 0.32L² = 0

The characteristic roots are 2.5 and 1.25; both lie outside the unit circle, so the process is stationary.
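Example 2 can be verified in one line (a sketch; numpy.roots expects coefficients from the highest power down):

```python
import numpy as np

# roots of the characteristic polynomial 1 - 1.2L + 0.32L^2 = 0
print(np.roots([0.32, -1.2, 1.0]))   # [2.5, 1.25]: both outside the unit circle
```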

Page 26: Univariate Time Series Analysis

• States that any stationary series can be decomposed into the sum of two unrelated processes, a purely deterministic part and a purely stochastic part, which will be an MA(∞).

• For the AR(p) model φ(L)yt = ut, ignoring the intercept, the Wold decomposition is

yt = ψ(L)ut

where ψ(L) = (1 − φ1L − φ2L² − ... − φpL^p)^(−1).

Wold's Decomposition Theorem
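As an illustration of the MA(∞) representation, the ψ weights can be computed numerically; a sketch using statsmodels' arma2ma helper (the AR polynomial is passed with its leading 1 and negated coefficients):

```python
from statsmodels.tsa.arima_process import arma2ma

# psi-weights of the AR(1) y_t = 0.5*y_{t-1} + u_t, i.e. of (1 - 0.5L)^(-1)
psi = arma2ma(ar=[1, -0.5], ma=[1], lags=6)
print(psi)   # [1, 0.5, 0.25, 0.125, ...]: psi_j = 0.5**j
```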

Page 27: Univariate Time Series Analysis

• Consider the following simple AR(1) model:

yt = μ + φ1 yt-1 + ut

(i) Calculate the (unconditional) mean of yt.

For the remainder of the question, set μ = 0 for simplicity.

(ii) Calculate the (unconditional) variance of yt.

(iii) Derive the autocorrelation function for yt.

Sample AR Problem

Page 28: Univariate Time Series Analysis

Solution

(i) Unconditional mean: assume that −1 < φ1 < 1, so that the AR(1) process is stationary. Stationarity implies that the mean and variance are the same for all t = 1, 2, ....

E(yt) = E(μ + φ1 yt-1) = μ + φ1 E(yt)

so

E(yt) − φ1 E(yt) = μ

E(yt) = μ / (1 − φ1)

Page 29: Univariate Time Series Analysis

Solution (cont'd)

(ii) Calculating the variance of yt. From Wold's decomposition theorem:

(1 − φ1L)yt = ut
yt = (1 − φ1L)^(−1) ut
yt = (1 + φ1L + φ1²L² + ...)ut
yt = ut + φ1 ut-1 + φ1² ut-2 + ...

Var(yt) = E[yt − E(yt)][yt − E(yt)], but E(yt) = 0 since we are setting μ = 0, so

Var(yt) = E[(yt)(yt)]

Page 30: Univariate Time Series Analysis

Solution (cont'd)

Var(yt) = E[(ut + φ1 ut-1 + φ1² ut-2 + ...)(ut + φ1 ut-1 + φ1² ut-2 + ...)]
= E[ut² + φ1²ut-1² + φ1⁴ut-2² + ... + cross-products]
= E[ut² + φ1²ut-1² + φ1⁴ut-2² + ...]
= σ² + φ1²σ² + φ1⁴σ² + ...
= σ²(1 + φ1² + φ1⁴ + ...)
= σ² / (1 − φ1²)

Page 31: Univariate Time Series Analysis

Solution (cont'd)

(iii) Turning now to the acf, first calculate the autocovariances:

γ1 = Cov(yt, yt-1) = E[yt − E(yt)][yt-1 − E(yt-1)]

Since μ has been set to zero, E(yt) = 0 and E(yt-1) = 0, so

γ1 = E[yt yt-1]
= E[(ut + φ1 ut-1 + φ1² ut-2 + ...)(ut-1 + φ1 ut-2 + φ1² ut-3 + ...)]
= E[φ1 ut-1² + φ1³ ut-2² + ... + cross-products]
= φ1σ² + φ1³σ² + φ1⁵σ² + ...
= φ1σ² / (1 − φ1²)

(make a bivariate table to understand the product of the two brackets)

Page 32: Univariate Time Series Analysis

Solution (cont'd)

For the second autocovariance,

γ2 = Cov(yt, yt-2) = E[yt − E(yt)][yt-2 − E(yt-2)]

Using the same rules as applied above for the lag 1 covariance:

γ2 = E[yt yt-2]
= E[(ut + φ1 ut-1 + φ1² ut-2 + ...)(ut-2 + φ1 ut-3 + φ1² ut-4 + ...)]
= E[φ1² ut-2² + φ1⁴ ut-3² + ... + cross-products]
= φ1²σ² + φ1⁴σ² + ...
= φ1²σ²(1 + φ1² + φ1⁴ + ...)
= φ1²σ² / (1 − φ1²)

Page 33: Univariate Time Series Analysis

Solution (cont'd)

• If these steps were repeated for γ3, the following expression would be obtained:

γ3 = φ1³σ² / (1 − φ1²)

and for any lag s, the autocovariance would be given by

γs = φ1^s σ² / (1 − φ1²)

The acf can now be obtained by dividing the covariances by the variance.

Page 34: Univariate Time Series Analysis

Solution (cont'd)

τ0 = γ0/γ0 = 1

τ1 = γ1/γ0 = [φ1σ²/(1 − φ1²)] / [σ²/(1 − φ1²)] = φ1

τ2 = γ2/γ0 = [φ1²σ²/(1 − φ1²)] / [σ²/(1 − φ1²)] = φ1²

τ3 = φ1³

...

τs = φ1^s
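A quick numerical confirmation that the AR(1) acf is φ1^s (same ArmaProcess helper as before):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

phi = 0.7
ar1 = ArmaProcess(ar=[1, -phi], ma=[1])   # y_t = 0.7*y_{t-1} + u_t

print(ar1.acf(lags=5))        # theoretical acf at lags 0..4
print(phi ** np.arange(5))    # phi**s: identical values
```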

Page 35: Univariate Time Series Analysis

• Measures the correlation between an observation k periods ago and the current observation, after controlling for observations at intermediate lags (i.e. all lags < k).

• So τkk measures the correlation between yt and yt-k after removing the effects of yt-k+1, yt-k+2, ..., yt-1.

• At lag 1, the acf = pacf always.

• At lag 2, τ22 = (τ2 − τ1²) / (1 − τ1²)

• For lags 3 and above, the formulae are more complex.

The Partial Autocorrelation Function (denoted τkk)
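The lag-2 formula can be checked against statsmodels' pacf estimate on a simulated AR(1) series (a sketch: the two estimators differ slightly, so expect small numerical discrepancies):

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(1)
u = rng.normal(size=5000)
y = np.zeros_like(u)
for t in range(1, len(u)):      # simulate y_t = 0.7*y_{t-1} + u_t
    y[t] = 0.7 * y[t - 1] + u[t]

r = acf(y, nlags=2)
tau22 = (r[2] - r[1] ** 2) / (1 - r[1] ** 2)   # the lag-2 formula above
print(round(tau22, 3), round(pacf(y, nlags=2)[2], 3))  # both near zero
```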

Page 36: Univariate Time Series Analysis

• The pacf is useful for telling the difference between an AR process and an ARMA process.

• In the case of an AR(p), there are direct connections between yt and yt-s only for s ≤ p.

• So for an AR(p), the theoretical pacf will be zero after lag p.

• In the case of an MA(q), this can be written as an AR(∞), so there are direct connections between yt and all its previous values.

• For an MA(q), the theoretical pacf will be geometrically declining.

The Partial Autocorrelation Function (denoted τkk) (cont'd)

Page 37: Univariate Time Series Analysis

• By combining the AR(p) and MA(q) models, we can obtain an ARMA(p,q) model:

φ(L)yt = μ + θ(L)ut

where

φ(L) = 1 − φ1L − φ2L² − ... − φpL^p

and

θ(L) = 1 + θ1L + θ2L² + ... + θqL^q

or

yt = μ + φ1 yt-1 + φ2 yt-2 + ... + φp yt-p + θ1 ut-1 + θ2 ut-2 + ... + θq ut-q + ut

with

E(ut) = 0; E(ut²) = σ²; E(ut us) = 0, t ≠ s

ARMA Processes

Page 38: Univariate Time Series Analysis

• Similar to the stationarity condition, we typically require the MA(q) part of the model to have roots of θ(z) = 0 greater than one in absolute value.

• An invertible MA(q) process can be expressed as an infinite order AR process.

• The mean of an ARMA series is given by

E(yt) = μ / (1 − φ1 − φ2 − ... − φp)

• The autocorrelation function for an ARMA process will display combinations of behaviour derived from the AR and MA parts, but for lags beyond q, the acf will simply be identical to that of the individual AR(p) model.

The Invertibility Condition

Page 39: Univariate Time Series Analysis

An autoregressive process has:
• a geometrically decaying acf
• number of spikes in the pacf = AR order

A moving average process has:
• number of spikes in the acf = MA order
• a geometrically decaying pacf

Summary of the Behaviour of the acf for AR and MA Processes

Page 40: Univariate Time Series Analysis

Summary of the Behaviour of the ACF and PACF

Page 41: Univariate Time Series Analysis

Can you identify the appropriate ARIMA model from this PACF?

Page 42: Univariate Time Series Analysis

Does a first or second difference need to be performed?

Page 43: Univariate Time Series Analysis

Some sample acf and pacf plots for standard processes

The acf and pacf below are not produced analytically from the relevant formulae for a model of that type, but rather are estimated using 100,000 simulated observations with disturbances drawn from a normal distribution.

ACF and PACF for an MA(1) Model: yt = −0.5ut-1 + ut

[Figure: acf and pacf against lags 1 to 10]

Page 44: Univariate Time Series Analysis

ACF and PACF for an MA(2) Model: yt = 0.5ut-1 − 0.25ut-2 + ut

[Figure: acf and pacf against lags 1 to 10]

Page 45: Univariate Time Series Analysis

ACF and PACF for a slowly decaying AR(1) Model: yt = 0.9yt-1 + ut

[Figure: acf and pacf against lags 1 to 10]

Page 46: Univariate Time Series Analysis

ACF and PACF for a more rapidly decaying AR(1) Model: yt = 0.5yt-1 + ut

[Figure: acf and pacf against lags 1 to 10]

Page 47: Univariate Time Series Analysis

ACF and PACF for a more rapidly decaying AR(1) Model with Negative Coefficient: yt = −0.5yt-1 + ut

[Figure: acf and pacf against lags 1 to 10]

Page 48: Univariate Time Series Analysis

ACF and PACF for a Non-stationary Model (i.e. a unit coefficient): yt = yt-1 + ut

[Figure: acf and pacf against lags 1 to 10]

Page 49: Univariate Time Series Analysis

ACF and PACF for an ARMA(1,1) Model: yt = 0.5yt-1 + 0.5ut-1 + ut

[Figure: acf and pacf against lags 1 to 10]

Page 50: Univariate Time Series Analysis

• Box and Jenkins (1970) were the first to approach the task of estimating an ARMA model in a systematic manner. There are 3 steps to their approach:
1. Identification
2. Estimation
3. Model diagnostic checking

Step 1:
- involves determining the order of the model
- use of graphical procedures
- a better procedure is now available

Building ARMA Models - The Box Jenkins Approach

Page 51: Univariate Time Series Analysis

Step 2:
- Estimation of the parameters
- AR models can be estimated by least squares, while MA and mixed (ARMA/ARIMA) models are non-linear in the parameters and are estimated iteratively using maximum likelihood

Step 3:
- Model checking

Box and Jenkins suggest 2 methods:
- deliberate overfitting
- residual diagnostics

Building ARMA Models - The Box Jenkins Approach (cont’d)

Page 52: Univariate Time Series Analysis

Estimation of ARIMA models

• Consider the MA(1) model

Xt = μ + εt + β1 εt-1

• Box and Jenkins suggest a grid search procedure.

• Estimate μ̂ = x̄, and obtain a starting value for β1 by equating the first sample and population autocorrelations, r1 = β̂1/(1 + β̂1²). Using these as starting values and assuming ε0 = ε-1 = 0, compute the εt by recursive substitution as follows:

ε1 = X1 − μ
εt = Xt − μ − β1 εt-1, t ≥ 2

• Compute the error sum of squares for each set of parameter values in a suitable range. Point estimates of the parameters are obtained where the error sum of squares is minimised. If the εt are assumed normally distributed, the maximum likelihood estimates are the same as the LS estimates. The formula for the asymptotic variances of the ML estimators can be applied to compute standard errors and confidence intervals. More complex models can be estimated similarly.
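A minimal sketch of this grid search in Python (numpy only; the data vector `x` and the grid bounds are illustrative assumptions):

```python
import numpy as np

def ma1_sse(x, beta1):
    """Sum of squared recursive residuals for X_t = mu + e_t + beta1*e_{t-1}."""
    mu = x.mean()                    # mu-hat = sample mean
    e = np.zeros_like(x)
    e[0] = x[0] - mu                 # assumes e_0 = 0
    for t in range(1, len(x)):
        e[t] = x[t] - mu - beta1 * e[t - 1]
    return np.sum(e**2)

rng = np.random.default_rng(0)
u = rng.normal(size=1000)
x = 2.0 + u[1:] + 0.6 * u[:-1]        # placeholder MA(1) data, beta1 = 0.6

grid = np.linspace(-0.99, 0.99, 199)  # invertible region (illustrative grid)
sse = [ma1_sse(x, b) for b in grid]
print("beta1-hat =", grid[int(np.argmin(sse))])  # close to 0.6
```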

Page 53: Univariate Time Series Analysis

• Identification would typically not be done using acfs alone.

• We want to form a parsimonious model. Reasons:
- the variance of the estimators is inversely proportional to the number of degrees of freedom
- models which are profligate might be inclined to fit to data-specific features

• This gives motivation for using information criteria, which embody 2 factors:
- a term which is a function of the RSS
- some penalty for adding extra parameters

• The objective is to choose the number of parameters which minimises the information criterion.

Some More Recent Developments in ARMA Modelling

Page 54: Univariate Time Series Analysis

• The information criteria vary according to how stiff the penalty term is.

• The three most popular criteria are Akaike's (1974) information criterion (AIC), Schwarz's (1978) Bayesian information criterion (SBIC), and the Hannan-Quinn criterion (HQIC):

AIC = ln(σ̂²) + 2k/T
SBIC = ln(σ̂²) + (k/T) ln T
HQIC = ln(σ̂²) + (2k/T) ln(ln T)

where k = p + q + 1 and T = sample size. So we minimise the IC subject to p ≤ p̄, q ≤ q̄. SBIC embodies a stiffer penalty term than AIC.

• Which IC should be preferred if they suggest different model orders?
- SBIC is strongly consistent (but inefficient).
- AIC is not consistent, and will typically pick "bigger" models.

Information Criteria for Model Selection
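The three formulas translate directly into code; a sketch (numpy only) given an estimated residual variance, k parameters and T observations:

```python
import numpy as np

def info_criteria(sigma2_hat, k, T):
    """AIC, SBIC and HQIC exactly as defined above."""
    aic = np.log(sigma2_hat) + 2 * k / T
    sbic = np.log(sigma2_hat) + k * np.log(T) / T
    hqic = np.log(sigma2_hat) + 2 * k * np.log(np.log(T)) / T
    return aic, sbic, hqic

print(info_criteria(sigma2_hat=0.04, k=3, T=200))   # illustrative numbers
```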

Page 55: Univariate Time Series Analysis

• The Box-Jenkins approach assumes that the variable to be modelled is stationary.

• ARIMA is distinct from ARMA models. The I stands for integrated.

• An integrated autoregressive process is one with a characteristic root on the unit circle. (i.e. a non-stationary process)

• Typically researchers difference the variable as necessary and then build an ARMA model on those differenced variables.

• An ARMA(p,q) model in the variable differenced d times is equivalent to an ARIMA(p,d,q) model on the original data.

ARIMA Models

Page 56: Univariate Time Series Analysis

• Forecasting = prediction.
• An important test of the adequacy of a model, e.g.:
- forecasting tomorrow's return on a particular share
- forecasting the price of a house given its characteristics
- forecasting the riskiness of a portfolio over the next year
- forecasting the volatility of bond returns

• We can distinguish two approaches:
- econometric (structural) forecasting
- time series forecasting

• The distinction between the two types is somewhat blurred (e.g., VARs).

Forecasting in Econometrics

Page 57: Univariate Time Series Analysis


• Expect the “forecast” of the model to be good in-sample.

• Say we have some data, e.g. monthly KSE-100 index returns for 120 months, 1990M1 - 1999M12. We could use all of it to build the model, or keep some observations back, estimating over 1990M1 - 1998M12 and holding back 1999M1 - 1999M12.

• Forecasts over the held-back period are a good test of the model, since we have not used the information from 1999M1 onwards when we estimated the model parameters.

In-Sample Versus Out-of-Sample

Page 58: Univariate Time Series Analysis

How to produce forecasts

• Multi-step ahead versus single-step ahead forecasts

• Recursive versus rolling windows

• To understand how to construct forecasts, we need the idea of conditional expectations: E(yt+1 | Ωt)

• We cannot forecast a white noise process: E(ut+s | Ωt) = 0 ∀ s > 0.

• The two simplest forecasting "methods":
1. Assume no change: f(yt+s) = yt
2. Forecasts are the long-term average: f(yt+s) = ȳ

Page 59: Univariate Time Series Analysis


Models for Forecasting (cont’d)

• Time Series Models: the current value of a series, yt, is modelled as a function only of its previous values and the current value of an error term (and possibly previous values of the error term).

• Models include:
- simple unweighted averages
- exponentially weighted averages
- ARIMA models
- non-linear models, e.g. threshold models, GARCH, etc.

Page 60: Univariate Time Series Analysis

Forecasting with ARMA Models

The forecasting model typically used is of the form:

f_{t,s} = μ + Σ_{i=1}^{p} φi f_{t,s-i} + Σ_{j=1}^{q} θj u_{t+s-j}

where f_{t,s} = y_{t+s} for s ≤ 0, while u_{t+s} = 0 for s > 0 and u_{t+s} equals the observed residual for s ≤ 0.

Page 61: Univariate Time Series Analysis

• An MA(q) only has a memory of q periods.

• E.g. say we have estimated an MA(3) model:

yt = μ + θ1 ut-1 + θ2 ut-2 + θ3 ut-3 + ut
yt+1 = μ + θ1 ut + θ2 ut-1 + θ3 ut-2 + ut+1
yt+2 = μ + θ1 ut+1 + θ2 ut + θ3 ut-1 + ut+2
yt+3 = μ + θ1 ut+2 + θ2 ut+1 + θ3 ut + ut+3

• We are at time t and we want to forecast 1, 2, ..., s steps ahead.

• We know yt, yt-1, ..., and ut, ut-1, ...

Forecasting with MA Models

Page 62: Univariate Time Series Analysis

ft,1 = E(yt+1 | Ωt) = E(μ + θ1 ut + θ2 ut-1 + θ3 ut-2 + ut+1) = μ + θ1 ut + θ2 ut-1 + θ3 ut-2

ft,2 = E(yt+2 | Ωt) = E(μ + θ1 ut+1 + θ2 ut + θ3 ut-1 + ut+2) = μ + θ2 ut + θ3 ut-1

ft,3 = E(yt+3 | Ωt) = E(μ + θ1 ut+2 + θ2 ut+1 + θ3 ut + ut+3) = μ + θ3 ut

ft,4 = E(yt+4 | Ωt) = μ

ft,s = E(yt+s | Ωt) = μ, ∀ s ≥ 4

Forecasting with MA Models (cont'd)

Page 63: Univariate Time Series Analysis

• Say we have estimated an AR(2):

yt = μ + φ1 yt-1 + φ2 yt-2 + ut
yt+1 = μ + φ1 yt + φ2 yt-1 + ut+1
yt+2 = μ + φ1 yt+1 + φ2 yt + ut+2
yt+3 = μ + φ1 yt+2 + φ2 yt+1 + ut+3

ft,1 = E(yt+1 | Ωt) = E(μ + φ1 yt + φ2 yt-1 + ut+1) = μ + φ1 E(yt) + φ2 E(yt-1) = μ + φ1 yt + φ2 yt-1

ft,2 = E(yt+2 | Ωt) = E(μ + φ1 yt+1 + φ2 yt + ut+2) = μ + φ1 E(yt+1) + φ2 E(yt) = μ + φ1 ft,1 + φ2 yt

Forecasting with AR Models

Page 64: Univariate Time Series Analysis

ft,3 = E(yt+3 | Ωt) = E(μ + φ1 yt+2 + φ2 yt+1 + ut+3) = μ + φ1 E(yt+2) + φ2 E(yt+1) = μ + φ1 ft,2 + φ2 ft,1

• We can see immediately that

ft,4 = μ + φ1 ft,3 + φ2 ft,2, etc., so

ft,s = μ + φ1 ft,s-1 + φ2 ft,s-2

• Can easily generate ARMA(p,q) forecasts in the same way.

Forecasting with AR Models (cont’d)
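The recursion above is easy to code directly; a sketch (numpy only) with illustrative parameter values:

```python
import numpy as np

def ar2_forecasts(mu, phi1, phi2, y_t, y_tm1, steps):
    """Iterate f_{t,s} = mu + phi1*f_{t,s-1} + phi2*f_{t,s-2},
    seeded with f_{t,0} = y_t and f_{t,-1} = y_{t-1}."""
    f = [y_tm1, y_t]
    for _ in range(steps):
        f.append(mu + phi1 * f[-1] + phi2 * f[-2])
    return np.array(f[2:])

# illustrative values: forecasts converge to mu/(1 - phi1 - phi2) = 2.5
print(ar2_forecasts(mu=1.0, phi1=0.5, phi2=0.1, y_t=3.0, y_tm1=2.8, steps=10))
```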

Page 65: Univariate Time Series Analysis

How can we test whether a forecast is accurate or not?

• For example, say we predict that tomorrow's return on the FTSE will be 0.2, but the outcome is actually −0.4. Is this accurate? Define ft,s as the forecast made at time t for s steps ahead (i.e. the forecast made for time t+s), and yt+s as the realised value of y at time t+s.

• Some of the most popular criteria for assessing the accuracy of time series forecasting techniques are:

Mean squared error: MSE = (1/N) Σ_{t=1}^{N} (y_{t+s} − f_{t,s})²

Mean absolute error: MAE = (1/N) Σ_{t=1}^{N} |y_{t+s} − f_{t,s}|

Mean absolute percentage error: MAPE = (100/N) Σ_{t=1}^{N} |(y_{t+s} − f_{t,s}) / y_{t+s}|
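The three criteria in code (a sketch, numpy only; `actual` and `forecast` are placeholder arrays):

```python
import numpy as np

actual = np.array([0.8, 1.1, 0.9, 1.3])     # placeholder realised values
forecast = np.array([1.0, 1.0, 1.0, 1.0])   # placeholder forecasts

err = actual - forecast
mse = np.mean(err ** 2)
mae = np.mean(np.abs(err))
mape = 100 * np.mean(np.abs(err / actual))

print(f"MSE = {mse:.4f}, MAE = {mae:.4f}, MAPE = {mape:.2f}%")
```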

Page 66: Univariate Time Series Analysis


Box-Jenkins Methodology Summarized

Page 67: Univariate Time Series Analysis

Illustrations of Box-Jenkins methodology-I (Pak GDP forecasting)

Year GDP      Year GDP      Year GDP
1961 82085    1975 180404   1989 403948
1962 86693    1976 186479   1990 422484
1963 92737    1977 191717   1991 446005
1964 98902    1978 206746   1992 480413
1965 108259   1979 218258   1993 487782
1966 115517   1980 233345   1994 509091
1967 119831   1981 247831   1995 534861
1968 128097   1982 266572   1996 570157
1969 135972   1983 284667   1997 579865
1970 148343   1984 295977   1998 600125
1971 149900   1985 321751   1999 625223
1972 153018   1986 342224
1973 163262   1987 362110
1974 174712   1988 385416

An upward non-linear trend with some evidence of increasing variability.

[Figure: Pakistan's real GDP at 1980-81 factor cost (Rs million), 1961-1999]

Page 68: Univariate Time Series Analysis

Pakistan GDP forecasting: Stationarity and Identification

• The first difference of GDP still seems to have some trend, with high variability near the end of the sample. The first difference of log GDP appears to be relatively trendless.

• Var(d(GDP)) = 71614669; Var(d(logGDP)) = 0.00039

[Figure: first difference of GDP and first difference of log(GDP), 1961-1999]

Page 69: Univariate Time Series Analysis

Pakistan's GDP forecasting: Stationarity and Identification

• Over-differencing needs to be avoided.

• Second differences also appear to be stationary, with some outliers.

• Var(d(gdp,2)) = 72503121; Var(d(log(gdp),2)) = 0.00074

[Figure: second difference of GDP and second difference of log GDP, 1961-1999]

Page 70: Univariate Time Series Analysis

Stationarity and Identification

• The GDP series appears to have a very slowly decaying autocorrelation function, and a single spike at lag 1, possibly indicating that GDP is a random walk.

• First differenced GDP has many significant autocorrelations, which can also be seen from the Ljung-Box statistics and p-values.

Page 71: Univariate Time Series Analysis

Stationarity and Identification

• The log of GDP has the same autocorrelation structure as GDP. The first difference of log(GDP) looks like white noise. Also look at the Q-stats and p-values.

Page 72: Univariate Time Series Analysis

Stationarity and Identification

• Second differencing seems to be unnecessary, so we work with the first difference of log(GDP), i.e. d = 1. The ACF and PACF do not show any clean theoretical pattern.

Page 73: Univariate Time Series Analysis

Stationarity and Identification

We will consider fitting several ARIMA(p,1,q) models.

ARIMA(0,1,4) is identified as the best model using the two model selection criteria. The smaller the values of the selection criteria, the better the in-sample fit.

ARIMA (p,d,q)   AIC     BIC
ARIMA (1,1,0)   -4.879  -4.792
ARIMA (4,1,0)   -4.932  -4.708
ARIMA (0,1,1)   -4.910  -4.824
ARIMA (0,1,4)   -5.370  -5.284
ARIMA (4,1,4)   -5.309  -5.174
ARIMA (5,1,5)   -5.249  -5.113
ARIMA (1,1,4)   -5.333  -5.202
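This comparison can be reproduced in outline with statsmodels (a sketch: the models are fitted to log GDP, and statsmodels' AIC/BIC conventions differ from EViews', so the absolute values will not match the table, though the ranking should be informative):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

gdp = np.array([  # Pakistan real GDP, 1961-1999, from the table above
    82085, 86693, 92737, 98902, 108259, 115517, 119831, 128097, 135972,
    148343, 149900, 153018, 163262, 174712, 180404, 186479, 191717, 206746,
    218258, 233345, 247831, 266572, 284667, 295977, 321751, 342224, 362110,
    385416, 403948, 422484, 446005, 480413, 487782, 509091, 534861, 570157,
    579865, 600125, 625223])

for order in [(1, 1, 0), (4, 1, 0), (0, 1, 1), (0, 1, 4), (1, 1, 4)]:
    res = ARIMA(np.log(gdp), order=order).fit()
    print(order, "AIC:", round(res.aic, 3), "BIC:", round(res.bic, 3))
```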

Page 74: Univariate Time Series Analysis

Estimation of the models

Estimation output of the two best fitting models. For the ARIMA(0,1,4) model, with yt = log(GDP):

(1 − L)yt = (1 + θ1L + θ2L² + θ3L³ + θ4L⁴)εt

(1 − L)yt = (1 − 0.104L + 0.165L² − 0.201L³ + 0.913L⁴)εt

Page 75: Univariate Time Series Analysis

Model Diagnostics

We look at the correlogram of the estimated model's residuals. The residuals appear to be white noise. The p-values of the Q-stats of ARIMA(0,1,4) are smaller.

Page 76: Univariate Time Series Analysis

Forecasting: In-sample Estimation

• To compare the out-of-sample performance of the competing forecasting models, we hold out the last few observations. In this case the out-of-sample performance will be compared using a 5-year hold-out sample, 1995-1999.

• Re-estimate the model using the sample 1961-1994.

• The ARIMA(0,1,4) model shows some underestimation near the end of the sample.

[Figure: observed GDP and fitted GDP from the ARIMA(0,1,4) model]

Page 77: Univariate Time Series Analysis

Forecasting: In-sample Estimation

• Similar underestimation is observed for the ARIMA(1,1,4) model.

• We will select the forecasting model using out-of-sample accuracy measures, e.g. RMSE or MAPE, which EViews reports under the Forecasting tab:

RMSE = √( Σ(Y − Ŷ)² / h )

[Figure: observed GDP and fitted GDP from the ARIMA(1,1,4) model]

Page 78: Univariate Time Series Analysis

Out-of-sample forecast evaluation

Using the two competing models, the forecasts are generated as follows.

Note: the static forecast option for dynamic models (e.g. ARIMA) in EViews uses actual values of the lagged dependent variable, while the dynamic forecast option uses previously forecasted values of the lagged dependent variable.

ARIMA(0,1,4) generates better forecasts, as seen by the smaller value of RMSE.

Year   Observed   ARIMA(0,1,4)   ARIMA(1,1,4)
1995   534861.0   536938.6       539376.9
1996   570157.0   569718.6       570955.6
1997   579865.0   584971.0       587198.2
1998   600125.0   615367.1       61828.2
1999   625223.0   648580.1       652246.0
RMSE              12715.77       15064.7
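A sketch of the same hold-out exercise in Python, continuing from the model-selection sketch above (which defined `gdp`): the model is re-estimated on log GDP for 1961-1994 and five multi-step (dynamic) forecasts are transformed back to levels, so the numbers will not exactly match the EViews static-forecast table:

```python
# assumes `gdp` (1961-1999) and the imports from the previous sketch
train, test = np.log(gdp[:34]), gdp[34:]        # 1961-1994 vs 1995-1999

res = ARIMA(train, order=(0, 1, 4)).fit()       # ARIMA(0,1,4) on log GDP
fcast = np.exp(res.forecast(steps=5))           # back to the level of GDP

rmse = np.sqrt(np.mean((test - fcast) ** 2))
print(np.round(fcast, 1), "RMSE =", round(rmse, 2))
```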

Page 79: Univariate Time Series Analysis

Box-Jenkins Method: Application II (Airline Passenger Data)

The given data on the number of airline passengers has been analyzed by several authors, including Box and Jenkins.

Page 80: Univariate Time Series Analysis

Airline Passenger Data: Stationarity and Identification

• The time series plot indicates an upward trend with seasonality and increasing variability.

• A log transformation seems to stabilise the variance. The seasonality has to be modelled.

[Figure: number of airline passengers (in thousands), 1949-1961, and LOG(PASSENGERS)]

Page 81: Univariate Time Series Analysis

Airline Passenger Data: Stationarity and Identification

• The first difference of log(PASSENGERS) eliminates the trend, but seasonality is still evident. A seasonal difference Yt − Yt-12 is also needed after the first difference. This is done in EViews as d(log(Yt),1,12). Both trend and seasonality then appear to be removed.

[Figure: D(LOG(PASSENGERS)) and D(LOG(PASSENGERS),1,12), 1949-1961]

Page 82: Univariate Time Series Analysis

Airline Passenger Data: Stationarity and Identification

Let's have a look at the ACF and PACF. The ACF and PACF of d(log(Yt),1,12) indicate some significant values at lags 1 and 12. We will do further work on d(log(Yt),1,12).

Page 83: Univariate Time Series Analysis

Airline Passenger Data: Identification

We will choose a suitable model using the AIC and BIC criteria, with a seasonal moving average SMA(12) or seasonal autoregressive SAR(12) term to be included. Both AIC and BIC point towards a mixed ARIMA(1,1,1) model with a seasonal moving average term of order 12.

Model                 AIC     BIC
MA(1) SMA(12)         -3.754  -3.689
AR(1) SMA(12)         -3.744  -3.678
AR(1) SAR(12)         -3.655  -3.585
MA(1) SAR(12)         -3.677  -3.609
AR(1) MA(1) SAR(12)   -3.656  -3.562
AR(1) MA(1) SMA(12)   -3.779  -3.691

Page 84: Univariate Time Series Analysis

Airline Passenger Data: Estimation

All the coefficients in the estimated model AR(1) MA(1) SMA(12) are significant. The estimated model in compact form is

(1 − ρL)yt = (1 + θ1L)(1 + wL¹²)εt, where yt = d(log(passengers),1,12)

with estimates

(1 − 0.661L)yt = (1 − 0.957L)(1 − 0.867L¹²)εt
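For reference, a rough equivalent in Python (a sketch: statsmodels' ARIMA with a seasonal_order term plays the role of the SMA(12) component; the `passengers` array below is only the first two years of the classic monthly series, so in practice the full 144-observation data set would be loaded, and the estimates need not match EViews exactly):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# First 24 months (1949-1950) of the Box-Jenkins airline passenger data;
# the full 1949-1960 series would be used in practice.
passengers = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
                       115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140])

model = ARIMA(np.log(passengers),
              order=(1, 1, 1),               # AR(1), d = 1, MA(1)
              seasonal_order=(0, 1, 1, 12))  # seasonal difference + SMA(12)
res = model.fit()
print(res.summary())
```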

Page 85: Univariate Time Series Analysis

Airline Passenger Data: Diagnostic Checks

After estimating the above model, use the EViews command ident resid. The residuals appear to be white noise.

Page 86: Univariate Time Series Analysis

Airline Passenger Data: Forecasting

Here is the graph of observed and fitted 'passengers'. The forecasts for 1961 are given in the table below.

[Figure: observed values of the number of passengers (PASSENGERS) and forecasts for 1961 (PASSENGERSF)]

Month     Forecast
1961.01   442.30
1961.02   429.45
1961.03   490.40
1961.04   484.82
1961.05   490.93
1961.06   560.17
1961.07   629.91
1961.08   626.91
1961.09   539.16
1961.10   474.11
1961.11   412.15
1961.12   462.14