
IMPROVING THE QUALITY OF FORECASTING USING GENERALIZED AR MODELS:

AN APPLICATION TO STATISTICAL

QUALITY CONTROL

M. SHELTON PEIRIS 1

Department of Statistics

The Pennsylvania State University

University Park PA 16802-2111, USA.

Abstract

This paper considers a new class of ARMA type models with indices to describe some hidden features of a time series. Various new results associated with this class are given in general form. It is shown that this class of models provides a valid, simple solution to a misclassification problem which arises in time series modeling. In particular, we show that these types of models could be used to describe data with different frequency components for suitably chosen indices. A simulation study is carried out to illustrate the theory. We justify the importance of this class of models in practice using a set of real time series data. It is shown that this approach leads to a significant improvement in the quality of forecasts of correlated data and opens a new direction of research for statistical quality control.

1 Introduction

It is known that many time series in practice (e.g. turbulent time series, financial time series) have different frequency components. For example, there are some problems which arise in time series analysis of data with low frequency components.

One extreme case that shows the importance of these low frequency components is the analysis of long memory time series (see, for instance, Beran (1994) and Chen et al. (1994)). However, there is no systematic approach or suitable class of models available in the literature to accommodate, analyze and forecast time series with changing frequency behaviour via a direct method. This paper attempts to introduce a family of autoregressive moving average (ARMA) type models with indices (generalized ARMA or GARMA) to describe hidden frequency properties of time series data. We report some new results associated with the autocorrelation function (acf) of the underlying processes.

Consider the family of standard ARMA(1,1) processes (see Abraham and Ledolter (1983), Brockwell and Davis (1991)) generated by

$$X_t = \alpha X_{t-1} + Z_t - \beta Z_{t-1}, \qquad (1.1)$$

where $|\alpha|, |\beta| < 1$ and $\{Z_t\}$ is a sequence of uncorrelated random variables (not necessarily independent) with zero mean and variance $\sigma^2$, known as white noise and denoted by $\mathrm{WN}(0, \sigma^2)$.

Using the backshift operator $B$, defined by $B^j X_t = X_{t-j}$ (i.e. $j \ge 0$), and the identity operator $I = B^0$, (1.1) can be written as

$$(I - \alpha B)\,X_t = (I - \beta B)\,Z_t. \qquad (1.2)$$

The process in (1.2) has the following properties:

i. The autocovariance function (acovf), $\gamma_k$, satisfies:
$$\gamma_0 = \sigma^2\left[1 + \frac{(\alpha - \beta)^2}{1 - \alpha^2}\right], \qquad \gamma_1 = \sigma^2\left[\frac{(1 - \alpha\beta)(\alpha - \beta)}{1 - \alpha^2}\right], \qquad \gamma_k = \alpha\,\gamma_{k-1}, \quad k \ge 2.$$

ii. The spectral density function (sdf), $f_x(\omega)$, is
$$f_x(\omega) = \frac{\sigma^2}{2\pi}\,\frac{1 - 2\beta\cos\omega + \beta^2}{1 - 2\alpha\cos\omega + \alpha^2}; \qquad -\pi \le \omega \le \pi. \qquad (1.3)$$

It is well known that for any time series data set, the density of crossings at a certain level $x_t = x$ may vary. We have noticed that this property (say, the degree of level crossings) is very common in many time series data sets (see Figures 1 to 4 in Appendix I). These series display similar patterns in the acf, the pacf, and the spectrum, and hence one may propose the same standard ARMA model (for example, an AR(1) model) for all four cases based on the traditional approach. In other words, these series cannot be distinguished from one another using standard models and/or techniques, and this leads to a 'misclassification problem' in time series. This encourages us to introduce a new, generalized version of (1.2) with additional parameters (or indices) $\delta_1\,(\ge 0)$ and $\delta_2\,(\ge 0)$ to control the degree of frequency or level crossings, satisfying

$$(I - \alpha B)^{\delta_1} X_t = (I - \beta B)^{\delta_2} Z_t, \qquad (1.4)$$

where $-1 < \alpha, \beta < 1$. This class of models covers the traditional ARMA(1,1) family given in (1.1) when

$\delta_1 = \delta_2 = 1$, and the standard AR(1) and MA(1) models when $\delta_1 = 1, \delta_2 = 0$ and $\delta_1 = 0, \delta_2 = 1$, respectively. It is interesting to note that the degree of level crossings of the data can be controlled by these additional parameters $\delta_1$ and $\delta_2$, and (1.4) covers a wide variety of important processes in practice. Hence (1.4) is called 'generalized ARMA(1,1)' or 'GARMA(1,1)'. Although (1.4) can be extended to general ARMA-type models, for simplicity this paper considers the family of GAR(1) models and its applications. That is, we consider a GAR(1) process generated by

$$(I - \alpha B)^{\delta} X_t = Z_t; \qquad -1 < \alpha < 1,\; \delta > 0. \qquad (1.5)$$

This class of models covers the standard AR(1) family when $\delta = 1$. It can be seen from the spectrum that the degree of frequency of the data can be controlled by this additional parameter $\delta$. For example, when $\alpha = 1$, $\{X_t\}$ in (1.5) converges for $0 < \delta < 1/2$ and reduces to the class of fractionally integrated white noise processes. Hence, (1.5) constitutes a general family of AR(1)-type models with a number of potential applications. There are many real-world data sets with varying frequencies (especially in finance, where the data have high frequency components), and (1.5) can be applied using existing techniques with simple modifications. With that view in mind, Section 2 reports some theoretical properties of the underlying GAR(1) process given in (1.5).
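As an aside (our illustration, not part of the paper), the role of $\delta$ is easy to see numerically from the GAR(1) spectral density derived later in (2.7), $f_x(\omega) = \frac{\sigma^2}{2\pi}(1 - 2\alpha\cos\omega + \alpha^2)^{-\delta}$; the sketch below evaluates the low-frequency power for several indices.

```python
# Illustrative sketch (ours): the GAR(1) spectrum of (2.7) for several
# indices delta, showing how delta controls the low-frequency behaviour.
import numpy as np

def gar1_spectrum(w, alpha, delta, sigma2=1.0):
    return sigma2 / (2.0 * np.pi) * (1.0 - 2.0 * alpha * np.cos(w) + alpha ** 2) ** (-delta)

for d in (0.3, 1.0, 1.7):
    # power near omega = 0 grows with delta for fixed alpha
    print(d, gar1_spectrum(0.05, 0.8, d))
```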

2 Properties of GAR(1) Processes

Let
$$(I - \alpha B)^{\delta} = \sum_{j=0}^{\infty} \pi_j\,\alpha^j B^j, \qquad (2.1)$$

where $\alpha \in \mathbb{R}$, $\delta > 0$; $B^0 = I$, $\pi_0 = 1$, and
$$\pi_j = (-1)^j \binom{\delta}{j} = \frac{(-1)^j\,\delta(\delta - 1)\cdots(\delta - j + 1)}{j!}; \qquad j \ge 1.$$

If $\delta$ is a positive integer, then $\pi_j = 0$ for $j \ge \delta + 1$.

For any non-integral $\delta > 0$, it is known that
$$\pi_j = \frac{\Gamma(j - \delta)}{\Gamma(j + 1)\,\Gamma(-\delta)}, \qquad (2.2)$$

where $\Gamma(\cdot)$ is the gamma function given by
$$\Gamma(x) = \begin{cases} \displaystyle\int_0^{\infty} t^{x-1} e^{-t}\,dt, & x > 0; \\ \infty, & x = 0; \\ x^{-1}\,\Gamma(1 + x), & x < 0. \end{cases}$$

It is easy to see that the series $\sum_{j=0}^{\infty} \pi_j\,\alpha^j$ converges for all $\delta$ since $|\alpha| < 1$. Thus $X_t$ in (1.5) is equivalent to a valid AR$(\infty)$ process of the form

$$\sum_{j=0}^{\infty} \pi_j\,\alpha^j X_{t-j} = Z_t, \qquad (2.3)$$
with $\sum_j \pi_j^2\,\alpha^{2j} < \infty$.
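As a quick numerical check (ours, not part of the paper), the $\pi_j$ can be generated by the stable recursion $\pi_{j+1} = \pi_j\,(j - \delta)/(j + 1)$ implied by (2.2); the sketch below confirms $\pi_0 = 1$, that $\sum_j \pi_j$ tends to $0$, and that the AR$(\infty)$ weights are square-summable when $|\alpha| < 1$.

```python
# Illustrative sketch: pi_j of (2.1)-(2.2) via the recursion
# pi_{j+1} = pi_j (j - delta)/(j + 1), which avoids gamma-function overflow.
import numpy as np

def pi_coeffs(delta, n):
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(n - 1):
        pi[j + 1] = pi[j] * (j - delta) / (j + 1)
    return pi

delta, alpha = 0.4, 0.7
pi = pi_coeffs(delta, 5000)
print(pi[:3])                                        # 1, -delta, delta(delta - 1)/2
print(pi.sum())                                      # tends (slowly) to 0
print(np.sum((pi * alpha ** np.arange(5000)) ** 2))  # finite, as needed for (2.3)
```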

Now we state and prove the following theorem for a stationary solution of (1.5).

Theorem 2.1: Let $\{Z_t\} \sim \mathrm{WN}(0, \sigma^2)$. Then for all $\delta > 0$ and $|\alpha| < 1$, the infinite series
$$X_t = \sum_{j=0}^{\infty} \Psi_j\,\alpha^j Z_{t-j} \qquad (2.4)$$
converges absolutely with probability one, where
$$\Psi_j = \frac{\Gamma(j + \delta)}{\Gamma(j + 1)\,\Gamma(\delta)}.$$

Proof. Let
$$(I - \alpha B)^{-\delta} = \sum_{j=0}^{\infty} \Psi_j\,\alpha^j B^j,$$
where
$$\Psi_j = (-1)^j \binom{-\delta}{j} = \frac{\Gamma(j + \delta)}{\Gamma(j + 1)\,\Gamma(\delta)}; \qquad j \ge 0.$$

Since
$$E\left(\sum_{j=0}^{\infty} \Psi_j\,\alpha^j Z_{t-j}\right)^{2} = \sum_{j=0}^{\infty} \Psi_j^2\,\alpha^{2j}\,E\!\left(Z_{t-j}^2\right)$$
and $\sum_{j=0}^{\infty} \Psi_j^2\,\alpha^{2j} < \infty$, the result follows.

Note: For $\alpha = 1$, (2.4) converges for all $0 < \delta < 1/2$.
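For intuition, a GAR(1) path can be simulated by truncating the MA$(\infty)$ form (2.4); the sketch below is illustrative only (the truncation length $m$ and the Gaussian noise are our choices, not prescribed by the paper).

```python
# Illustrative sketch: simulate a GAR(1) series by truncating (2.4), using
# the recursion Psi_{j+1} = Psi_j (j + delta)/(j + 1) with Psi_0 = 1.
import numpy as np

def simulate_gar1(T, alpha, delta, sigma=1.0, m=2000, seed=0):
    rng = np.random.default_rng(seed)
    psi = np.empty(m)
    psi[0] = 1.0
    for j in range(m - 1):
        psi[j + 1] = psi[j] * (j + delta) / (j + 1)
    weights = psi * alpha ** np.arange(m)   # Psi_j alpha^j of (2.4)
    z = rng.normal(0.0, sigma, T + m)       # Gaussian white noise
    # X_t = sum_j weights_j z_{t-j}; keep only fully-weighted outputs
    return np.convolve(z, weights, mode="full")[m - 1 : m - 1 + T]

x = simulate_gar1(500, alpha=0.8, delta=0.5)
print(x.mean(), x.var())
```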

Now we consider the autocovariance function of the underlying process in (1.5).

Let $\gamma_k = \mathrm{Cov}(X_t, X_{t-k}) = E(X_t X_{t-k})$ be the autocovariance function at lag $k$ of $\{X_t\}$ satisfying the conditions of Theorem 2.1.

It is clear from (2.3) that $\{\gamma_k\}$ satisfies a Yule-Walker type recursion
$$\sum_{j=0}^{\infty} \pi_j\,\alpha^j \gamma_{k-j} = 0; \qquad k > 0, \qquad (2.5)$$

and the corresponding autocorrelation function (acf), $\rho_k$, at lag $k$ satisfies
$$\sum_{j=0}^{\infty} \pi_j\,\alpha^j \rho_{k-j} = 0; \qquad k > 0. \qquad (2.6)$$

It is interesting to note that $\rho_k = \alpha^k$ is a solution of (2.6), since $\sum_{j=0}^{\infty} \pi_j = 0$ for any $\delta > 0$. However, the general solution for $\rho_k$ may be expressed as $\rho_k = \zeta(k, \delta, \alpha)\,\alpha^k$, where $\zeta(\cdot)$ is a suitably chosen function of $k$, $\alpha$, and $\delta$. To find this function $\zeta$, we use the following approach:

The spectrum of $\{X_t\}$ in (1.5) is
$$f_x(\omega) = \frac{\sigma^2}{2\pi}\left|1 - \alpha e^{-i\omega}\right|^{-2\delta} = \frac{\sigma^2}{2\pi}\left(1 - 2\alpha\cos\omega + \alpha^2\right)^{-\delta}; \qquad -\pi \le \omega \le \pi. \qquad (2.7)$$

Note: In a neighbourhood of $\omega = 0$, $f_x^{g}(\omega) \sim \frac{\sigma^2}{2\pi}\,(1 - \alpha)^{-2\delta}$, where the superscript $g$ stands for generalized.

Now the exact form of $\gamma_k$ (or $\rho_k$) can be obtained from
$$\gamma_k = \int_{-\pi}^{\pi} e^{ik\omega} f_x(\omega)\,d\omega = \frac{\sigma^2}{\pi} \int_0^{\pi} \frac{\cos(k\omega)}{\left(1 - 2\alpha\cos\omega + \alpha^2\right)^{\delta}}\,d\omega. \qquad (2.8)$$

We first evaluate the integral in (2.8) for $k = 0$ and report this result together with the other cases (i.e. for $k \ge 1$) in Section 3 as our main results.

3 Main Results

Theorem 3.1:
$$\mathrm{Var}(X_t) = \gamma_0 = \sigma^2\,F(\delta, \delta; 1; \alpha^2), \qquad (3.1)$$
where $F(\theta_1, \theta_2; \theta_3; \theta)$ is the hypergeometric function given by
$$F(\theta_1, \theta_2; \theta_3; \theta) = \sum_{j=0}^{\infty} \frac{\Gamma(\theta_1 + j)\,\Gamma(\theta_2 + j)\,\Gamma(\theta_3)\,\theta^j}{\Gamma(\theta_1)\,\Gamma(\theta_2)\,\Gamma(\theta_3 + j)\,\Gamma(j + 1)}. \qquad (3.2)$$
(See Gradshteyn and Ryzhik (GR) (1965), p. 1039.)

Proof: From GR, p. 384,
$$\int_0^{\pi} \left(1 - 2\alpha\cos\omega + \alpha^2\right)^{-\delta} d\omega = B\!\left(\tfrac{1}{2}, \tfrac{1}{2}\right) F(\delta, \delta; 1; \alpha^2), \qquad (3.3)$$
where $B(x, y) = \dfrac{\Gamma(x)\,\Gamma(y)}{\Gamma(x + y)}$ is the beta function. Since $B\!\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = \pi$, the result follows.
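As a numerical aside (our illustration), the series (3.2) can be summed term by term through the consecutive-term ratio $(\theta_1 + j)(\theta_2 + j)\theta/((\theta_3 + j)(j + 1))$ and checked against SciPy's Gauss hypergeometric routine.

```python
# Illustrative check of the series (3.2) against scipy.special.hyp2f1.
from scipy.special import hyp2f1

def F_series(t1, t2, t3, theta, nterms=500):
    term, total = 1.0, 1.0   # the j = 0 term equals 1
    for j in range(nterms):
        term *= (t1 + j) * (t2 + j) * theta / ((t3 + j) * (j + 1))
        total += term
    return total

print(F_series(0.4, 0.4, 1.0, 0.49))  # direct summation of (3.2)
print(hyp2f1(0.4, 0.4, 1.0, 0.49))    # SciPy reference value
```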

Using (3.2) it is easy to see that for $\delta = 1$,
$$F(1, 1; 1; \alpha^2) = \left(1 - \alpha^2\right)^{-1}.$$
(See GR, p. 1040.)

Hence (3.1) reduces to the corresponding well-known result for the variance of a standard AR(1) process satisfying $X_t = \alpha X_{t-1} + Z_t$; that is,
$$\mathrm{Var}(X_t) = \frac{\sigma^2}{1 - \alpha^2}, \qquad |\alpha| < 1.$$

Using
$$F(\theta_1, \theta_2; \theta_3; 1) = \frac{\Gamma(\theta_3)\,\Gamma(\theta_3 - \theta_1 - \theta_2)}{\Gamma(\theta_3 - \theta_1)\,\Gamma(\theta_3 - \theta_2)}, \qquad (3.4)$$

the result of Theorem 3.1 reduces to $\mathrm{Var}(X_t)$ for a fractionally differenced white noise process satisfying $(I - B)^{\delta} X_t = Z_t$ with $0 < \delta < 1/2$.

That is, when $\alpha = 1$ and $0 < \delta < 1/2$, (3.1) gives
$$\gamma_0 = \frac{\sigma^2\,\Gamma(1 - 2\delta)}{\Gamma^2(1 - \delta)}. \qquad (3.5)$$
(Compare with Brockwell and Davis (1991), p. 466, eq. 12.4.8.)

As it is not easy to evaluate the integral in (2.8) directly for $k \ne 0$, we find an expression for $\gamma_k$ via
$$\gamma_k = E(X_t X_{t-k}) = \sigma^2 \sum_{j=0}^{\infty} \Psi_j\,\Psi_{j+k}\,\alpha^{2j+k}.$$

An explicit form of $\gamma_k$ is given in Theorem 3.2.

Theorem 3.2:
$$\gamma_k = \frac{\sigma^2\,\alpha^k\,\Gamma(k + \delta)\,F(\delta, k + \delta; k + 1; \alpha^2)}{\Gamma(\delta)\,\Gamma(k + 1)}; \qquad k \ge 0. \qquad (3.6)$$

Proof: Since $X_t = \sum_{j=0}^{\infty} \Psi_j\,\alpha^j Z_{t-j}$,
$$\gamma_k = \sigma^2\,\alpha^k \sum_{j=0}^{\infty} \Psi_j\,\Psi_{j+k}\,\alpha^{2j} = \frac{\sigma^2\,\alpha^k}{\Gamma^2(\delta)} \sum_{j=0}^{\infty} \frac{\Gamma(j + \delta)\,\Gamma(j + k + \delta)}{\Gamma(j + 1)\,\Gamma(j + k + 1)}\,\alpha^{2j}.$$

From p. 556 of Abramowitz and Stegun (1965) we have
$$\sum_{j=0}^{\infty} \frac{\Gamma(\delta + j)\,\Gamma(\delta + k + j)}{\Gamma(k + 1 + j)\,\Gamma(j + 1)}\,\alpha^{2j} = \frac{\Gamma(\delta)\,\Gamma(\delta + k)}{\Gamma(k + 1)}\,F(\delta, \delta + k; k + 1; \alpha^2),$$
and hence (3.6) follows.

Note: When $k = 0$, Theorem 3.2 reduces to Theorem 3.1.

The autocorrelation function plays an important role in the analysis of the underlying process. Corollary 3.1 below gives an expression for $\rho_k$ for any $k \ge 0$.

Corollary 3.1: The autocorrelation function of the GAR(1) process in (1.5) is
$$\rho_k = \frac{\alpha^k\,\Gamma(k + \delta)\,F(\delta, k + \delta; k + 1; \alpha^2)}{\Gamma(k + 1)\,\Gamma(\delta)\,F(\delta, \delta; 1; \alpha^2)}. \qquad (3.7)$$

Note: When $\alpha = 1$, (3.7) reduces to the acf of a fractionally differenced white noise process given by Brockwell and Davis (1991), p. 466, eq. 12.4.9. Further, it is interesting to note that (3.7) reduces to the acf of a standard AR(1) process when $\delta = 1$, since
$$F(1, k + 1; k + 1; \alpha^2) = F(1, 1; 1; \alpha^2) = \left(1 - \alpha^2\right)^{-1}$$
(see GR, p. 1040).
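For numerical work, (3.6) and (3.7) can be evaluated directly with standard special-function routines. The sketch below (our illustration) also checks the $\delta = 1$ case against the AR(1) acf $\alpha^k$; for large lags one would switch to log-gamma evaluations to avoid overflow.

```python
# Illustrative sketch: gamma_k of (3.6) and rho_k of (3.7) via SciPy
# (plain gamma calls are fine for small k).
from scipy.special import gamma, hyp2f1

def gar1_acvf(k, alpha, delta, sigma2=1.0):
    return (sigma2 * alpha ** k * gamma(delta + k)
            * hyp2f1(delta, delta + k, k + 1, alpha ** 2)
            / (gamma(delta) * gamma(k + 1)))

def gar1_acf(k, alpha, delta):
    return gar1_acvf(k, alpha, delta) / gar1_acvf(0, alpha, delta)

# delta = 1 recovers the standard AR(1) acf alpha^k:
print([round(gar1_acf(k, 0.8, 1.0), 6) for k in range(5)])
```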

Thus the results of the above two theorems provide a new set of formulae in a general form. Obviously, our new result for $\gamma_k$ in (3.6) supersedes all existing results for standard AR(1) and fractionally differenced white noise processes.

Remark: An important consequence of our new result in Theorem 3.2 is that
$$\int_0^{\pi} \frac{\cos(k\omega)}{\left(1 - 2\alpha\cos\omega + \alpha^2\right)^{\delta}}\,d\omega = \frac{\pi\,\alpha^k\,\Gamma(k + \delta)\,F(\delta, k + \delta; k + 1; \alpha^2)}{\Gamma(\delta)\,\Gamma(k + 1)} \qquad (3.8)$$
for any $\delta > 0$. When $k = 0$, equation (3.8) reduces to equation (3.3) in Theorem 3.1 (also see GR, p. 384).
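Identity (3.8) is also easy to verify numerically; the sketch below (ours) compares adaptive quadrature of the left-hand side with the closed form on the right.

```python
# Illustrative numerical check of (3.8) by quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

def lhs(k, alpha, delta):
    f = lambda w: np.cos(k * w) * (1 - 2 * alpha * np.cos(w) + alpha ** 2) ** (-delta)
    return quad(f, 0.0, np.pi)[0]

def rhs(k, alpha, delta):
    return (np.pi * alpha ** k * gamma(k + delta)
            * hyp2f1(delta, k + delta, k + 1, alpha ** 2)
            / (gamma(delta) * gamma(k + 1)))

print(lhs(3, 0.6, 0.7), rhs(3, 0.6, 0.7))  # agree to quadrature accuracy
```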

Note: The new result in equation (3.8) is particularly useful in many theoretical developments of the class of generalized ARMA (GARMA) processes with indices.

We now discuss a method of estimating the parameters ($\alpha$ and $\delta$) of (1.5) in Section 4.

4 Estimation of Parameters

4.1 Exact Likelihood Estimation

Consider a stationary, normally distributed GAR(1) time series $\{X_t\}$ generated by (1.5). Let $X_T = (X_1, \ldots, X_T)'$ be a sample of size $T$ from (1.5); then it is clear that $X_T \sim N_T(0, \Sigma)$, where $0$ is a column vector of zeros of order $T \times 1$ and $\Sigma$ is the covariance matrix (of order $T \times T$) of $X_T$. Stationarity of $\{X_t\}$ implies that the covariance matrix is of Toeplitz form with entries given by equation (3.6). With the normality assumption, the joint probability density function of $X_T$ is given by

$$f(X_T, \Sigma) = (2\pi)^{-T/2}\,|\Sigma|^{-1/2} \exp\left\{-\tfrac{1}{2}\,X_T'\,\Sigma^{-1} X_T\right\}. \qquad (4.1)$$

To estimate the parameters $\alpha$, $\delta$, and $\sigma^2$, one needs a suitable optimization algorithm to maximize the likelihood function given in (4.1) (or the corresponding log-likelihood function). This can be done by choosing an appropriate set of starting values, since we have the covariance matrix $\Sigma$ in terms of the parameters $\alpha$, $\delta$, and $\sigma^2$. Denote the corresponding vector of estimates by $\hat{\eta} = (\hat{\alpha}, \hat{\delta}, \hat{\sigma}^2)'$. We use the method of moments (MOM) technique described below to find a suitable set of starting values for the ML algorithm.
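A minimal sketch of this likelihood step is given below; the optimizer, bounds, and log-gamma stabilization are our own choices (the paper prescribes none), and the covariance entries come from (3.6).

```python
# Illustrative sketch: exact Gaussian likelihood of (4.1) for GAR(1),
# with the Toeplitz covariance built from (3.6). Choices of optimizer
# and bounds are assumptions, not the paper's.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize
from scipy.special import gammaln, hyp2f1

def gar1_acvf(k, alpha, delta, sigma2):
    # gammaln keeps Gamma(delta + k)/(Gamma(delta) k!) stable for large k
    logc = gammaln(delta + k) - gammaln(delta) - gammaln(k + 1.0)
    return (sigma2 * alpha ** k * np.exp(logc)
            * hyp2f1(delta, delta + k, k + 1, alpha ** 2))

def neg_loglik(params, x):
    alpha, delta, sigma2 = params
    T = len(x)
    acvf = np.array([gar1_acvf(k, alpha, delta, sigma2) for k in range(T)])
    S = toeplitz(acvf)                      # Toeplitz covariance of X_T
    _, logdet = np.linalg.slogdet(S)
    quad = x @ np.linalg.solve(S, x)
    return 0.5 * (T * np.log(2.0 * np.pi) + logdet + quad)

def fit_gar1(x, start):
    # start = (alpha0, delta0, sigma2_0), e.g. from the MOM step of Section 4.2
    return minimize(neg_loglik, np.asarray(start, float), args=(x,),
                    method="L-BFGS-B",
                    bounds=[(-0.99, 0.99), (0.01, 2.0), (1e-6, None)])
```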

4.2 Initial Values

Method of moments estimates of $\alpha$ and $\delta$ can be obtained by approximating $\rho_1$ and $\rho_2$ from (3.7). It is easy to see that

$$\rho_1 = \frac{\alpha\,\Gamma(1 + \delta)\,F(\delta, 1 + \delta; 2; \alpha^2)}{\Gamma(2)\,\Gamma(\delta)\,F(\delta, \delta; 1; \alpha^2)} \approx \alpha\delta, \qquad (4.2)$$

and
$$\rho_2 = \frac{\alpha^2\,\Gamma(2 + \delta)\,F(\delta, 2 + \delta; 3; \alpha^2)}{\Gamma(3)\,\Gamma(\delta)\,F(\delta, \delta; 1; \alpha^2)} \approx \frac{\alpha^2\,\delta(\delta + 1)}{2}. \qquad (4.3)$$

The corresponding (approximate) MOM estimates of $\alpha$ and $\delta$ are given by
$$\hat{\alpha} = \frac{2\hat{\rho}_2}{\hat{\rho}_1} - \hat{\rho}_1 \qquad \text{and} \qquad \hat{\delta} = \frac{\hat{\rho}_1}{\hat{\alpha}},$$
where $\hat{\rho}_1$ and $\hat{\rho}_2$ are the sample autocorrelations at lags 1 and 2; $\hat{\sigma}^2$ can then be found from the equation involving $\gamma_0$ in Theorem 3.1. The next section considers a real time series and illustrates the usefulness of GAR(1) modeling in practice. Methods described by Granger and Joyeux (1980), Geweke and Porter-Hudak (1983) and Peiris and Court (1993) can also be used to estimate the parameters in (1.5); these results will be discussed in a future paper (see also Peiris et al. (2003)).
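A small sketch (ours) of these starting-value formulas, computed from the sample acf:

```python
# Illustrative sketch: MOM starting values alpha_hat = 2 rho2/rho1 - rho1
# and delta_hat = rho1/alpha_hat, with rho1, rho2 the sample autocorrelations.
import numpy as np

def sample_acf(x, k):
    xc = x - x.mean()
    return float(xc[k:] @ xc[:-k]) / float(xc @ xc) if k > 0 else 1.0

def mom_start(x):
    r1, r2 = sample_acf(x, 1), sample_acf(x, 2)
    alpha0 = 2.0 * r2 / r1 - r1
    delta0 = r1 / alpha0
    return alpha0, delta0

# Usage with the likelihood sketch of Section 4.1:
# a0, d0 = mom_start(x); res = fit_gar1(x, (a0, d0, x.var()))
```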

5 An Application

Consider the yield data (monthly differences between the yield on mortgages and the yield on government loans in the Netherlands, January 1961 to December 1973) from Abraham and Ledolter (1983). The data for January to March 1974 are also available for comparison. The time series, the acf, the pacf, and the spectrum (see Appendix II, Fig. 5) suggest that a standard AR(1) model is suitable, and Abraham and Ledolter found the following estimates (values in parentheses are the estimated standard errors):

$\hat{\mu} = 0.97\ (0.08)$, $\hat{\alpha} = 0.85\ (0.04)$, and $\hat{\sigma} = 0.024$.

However, we fit a generalized AR(1) model to the data using the methods described in Section 4. The results are: $\hat{\mu} = 0.97\ (0.13)$, $\hat{\alpha} = 0.91\ (0.06)$, $\hat{\delta} = 0.87\ (0.10)$, and $\hat{\sigma}^2 = 0.042$.

The first three forecasts from the time origin at $t = 156$, together with the corresponding 95% forecast intervals, are: $0.65\ (0.65 \pm 0.23)$, $0.78\ (0.78 \pm 0.32)$, and $0.80\ (0.80 \pm 0.40)$.

The true (observed) values of the last three readings are 0.74, 0.90, and 0.91, respectively.

The corresponding results due to Abraham and Ledolter (1983) are: $0.56\ (0.56 \pm 0.30)$, $0.62\ (0.62 \pm 0.40)$, and $0.68\ (0.68 \pm 0.46)$.

Our estimates are closer to the true values than those in Abraham and Ledolter (1983). Also note that our new results provide shorter forecast intervals in all three cases above, although the upper confidence limits are slightly higher than the corresponding results of Abraham and Ledolter (1983). Appendix II, Fig. 6 gives a simulated series from (1.5) with the estimated parameters $\delta = 0.87$, $\alpha = 0.91$, and residual variance 0.042 for comparison.

ACKNOWLEDGEMENTS

The author wishes to thank the referee and Bovas Abraham for many helpful comments and suggestions to improve the quality of the paper. This work was completed while the author was at the Pennsylvania State University. He thanks Professor C. R. Rao and the Department of Statistics for their support during his visit. He also thanks Jayanthi Peiris for her excellent typing of this paper.

REFERENCES

Abraham, B. and Ledolter, J., Statistical Methods for Forecasting, John Wiley, New York (1983).

Abramowitz, M. and Stegun, I., Handbook of Mathematical Functions, Dover, New York (1968).

Beran, J., Statistics for Long Memory Processes, Chapman and Hall, New York (1994).

Brockwell, P.J. and Davis, R.A., Time Series: Theory and Methods, Springer-Verlag, New York (1991).

Chen, G., Abraham, B. and Peiris, S., Lag window estimation of the degree of differencing in fractionally integrated time series models, J. Time Series Anal., 15, (1994) pp. 473-487.

Geweke, J. and Porter-Hudak, S., The estimation and application of long memory time series models, J. Time Series Anal. 4, (1983) pp.231-238.

Gradshteyn, I.S. and Ryzhik, I.M., Tables of Integrals, Series, and Products, 4th ed., Academic Press, New York (1980).

Granger, C.W.J. and Joyeux, R., An introduction to long memory time series models and fractional differencing, J. Time Series Anal., 1, (1980) pp. 15-29.

Peiris, M.S. and Court, J.R., A note on the estimation of degree of differencing in long memory time series analysis, Probab. and Math. Statist., 14, (1993) pp. 223-229.

Peiris, S., Allen, D. and Thavaneswaran, A., An Introduction to Generalized Moving Average Models and Applications, Journal of Applied Statistical Science (2003) (to appear).

Appendix I

Fig.1

Fig.2

Fig.3

Fig.4

Appendix II

Fig.5

Fig.6

Key words: Time series, Misclassification, Frequency, Spectrum, Estimation, Correlated data, Quality control, Forecasts, Autoregressive, Moving average, Correlation, Index.

1Current address: School of Mathematics and Statistics, The University of Sydney, NSW 2006, Australia.
