
978-1-4244-5961-2/10/$26.00 ©2010 IEEE

This work was supported by the Program for New Century Excellent Talents in University, the Special Fund for Basic Scientific Research of Central Colleges under Grant Nos. N090401002 and N090101001, and the SRF for ROCS, SEM under Grant No. 20071108-4.

2010 Sixth International Conference on Natural Computation (ICNC 2010)

Using Genetic Algorithms for Time Series Prediction

Cheng-Xiang Yang, Yi-Fei Zhu
School of Resources & Civil Engineering
Northeastern University, Shenyang 110004, P. R. China

Abstract—This paper proposes using genetic algorithms (GAs) for nonlinear time series prediction. A nesting evolution scheme is designed to evolve the forecasting models. In the outer evolution cycle, a binary-coded genetic algorithm is employed to evolve the structures of nonlinear polynomial-type models. The coefficients of the evolved models are then introduced and optimized by a real-coded genetic algorithm in the inner evolution cycle. The evolution process is repeated, using genetic operators and the principle of 'survival of the fittest', until satisfactory results are found. The proposed method is applied to deformation prediction of a dangerous rock mass in rock engineering. The results indicate that the proposed algorithm is applicable and sufficiently accurate.

Keywords-time series; genetic algorithms; nonlinear; modeling; forecasting

I. INTRODUCTION

Decision making and planning for a variety of complex systems involve prediction or forecasting, which is normally carried out by investigating patterns in historical data and speculating that future trends will follow past patterns. Much of the historical data is recorded at specific time intervals. Such time series data provides good insight into the behavior of the systems under study. Many efforts have been made over the past several decades to develop and improve time series forecasting models. Although several different types of time series models are available [1, 2], they face difficulties in selecting nonlinear model structures, and the approximation of complex real-world problems by linear models is not always satisfactory. To overcome these limitations, researchers began to introduce modern information analysis techniques for nonlinear systems, such as artificial neural networks, grey system theory and support vector machines, with interesting and encouraging results [3-7]. These techniques provide potentially powerful alternatives for nonlinear time series modeling.

Recently, evolutionary computation techniques [8] have proved themselves robust data-based modeling tools for complex system analysis. Based on the Darwinian theory of natural selection, they attempt to obtain the best solution by carrying out global optimization. They use suitable coding to represent possible solutions to a problem and guide the search using genetic operators and the principle of 'survival of the fittest'. Among these algorithms, genetic algorithms (GAs) have established themselves as powerful search and optimization tools in problem solving and function optimization [9, 10].

Based on genetic algorithms, this paper develops a nesting evolution algorithm for time series modeling. The motivation for the nesting evolution method comes from the following perspectives. First, since complex real-world problems are highly nonlinear, it is difficult for forecasters to choose the exact model structure. Usually, a limited number of different models are tried and the one with the most accurate results is selected. However, the final selected model is not necessarily the best for future use, due to many potential influencing factors such as sampling variation, model uncertainty, and structural change. By using suitable coding, the problem of model structure selection can be eased with little extra effort through the function optimization capability of GAs. Second, the model structures generated during the model identification process may include a number of coefficients that can have a great effect on the performance of a model. As a potentially good model with a favorable structure may be removed during the evolution process because of inappropriate parameters, a parameter estimation procedure has to be employed to optimize those coefficients. The fitness of each generated model structure can then be reasonably computed and assigned before natural selection is performed for further evolution. In this study, another GA evolution cycle is used for such parameter estimation. Thus, a nesting evolution scheme, i.e., an outer GA evolution of model structures coupled with a series of inner GA evolutions of the model coefficients associated with those structures, is designed. The proposed algorithm is then used to model the nonlinear dynamic deformation behavior of a dangerous rock mass to illustrate its efficiency and accuracy.

II. GENETIC ALGORITHMS

GAs are random search algorithms based on the concepts of natural selection, genetics and evolution. The major difference between GAs and classical optimization techniques is that GAs work with a population of possible solutions, whereas classical techniques work with a single solution. Another difference is that GAs use probabilistic transition rules instead of deterministic ones. In a GA search process, a group of candidate solutions, represented as genes on a chromosome (e.g., binary strings or real numbers) in the search space, is evolved to find better solutions through natural selection and the genetic operators, i.e., crossover and mutation, borrowed from natural genetics. A standard GA consists of the following steps:

Step 1. Initialize a population of possible solutions;

Step 2. Calculate the fitness of each candidate solution in the population;


Step 3. Select the solutions with higher fitness to take part in evolution.

Step 4. Create a new population by applying GA operators to the selected solutions.

Step 5. If the stopping criterion is satisfied, stop the computation and take the best solution as the final result; otherwise, go to Step 2.

GAs are mathematically simple yet powerful in their search for improvement after each generation [20]. Due to their nature, GAs have many advantages: they require no gradient information about the objective function; high nonlinearities and discontinuities in the objective function have little effect on overall optimization performance; they are resistant to becoming trapped in local optima; and they perform well on large-scale optimization problems and can be employed for a wide variety of optimization problems.
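As a concrete illustration of Steps 1-5, the following is a minimal sketch of a binary-coded GA in Python; it is not the authors' implementation, and the bit length, population size, operator rates and the toy objective are illustrative assumptions.

import random

def run_ga(fitness, n_bits=16, pop_size=20, p_cross=0.7, p_mut=0.02, generations=50):
    """Minimal binary-coded GA that maximizes `fitness` over bit strings (sketch only)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        # Step 3: roulette-wheel selection proportional to fitness
        parents = random.choices(pop, weights=scores, k=pop_size) if sum(scores) > 0 else pop
        # Step 4: one-point crossover followed by bit-flip mutation
        children = []
        for a, b in zip(parents[0::2], parents[1::2]):
            if random.random() < p_cross:
                cut = random.randint(1, n_bits - 1)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            children += [a, b]
        pop = [[1 - g if random.random() < p_mut else g for g in ind] for ind in children]
        best = max(pop + [best], key=fitness)
    return best

# Toy objective: maximize the number of ones in the string.
print(run_ga(lambda ind: sum(ind)))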

III. GENETIC EVOLUTION SCHEME FOR NONLINEAR TIME SERIES MODELING

A. Problem Description

Time series forecasting is an important area of forecasting in which past observations of the same variable are collected and analyzed to develop a model describing the underlying relationship. It is reasonable to expect a predominant correlation between the current observation and past observations. Mathematically, for an observed time series {u_t} (t = 1, 2, …), the time series model can be represented as

$u_t = f(u_{t-1}, u_{t-2}, \ldots, u_{t-p})$  (1)

where p is the number of past observations used and f(·) is the mapping relationship. By introducing new observations, the model can then be used to extrapolate the time series into the future, namely

$u_{t+1} = f(u_t, u_{t-1}, \ldots, u_{t-p+1})$
$u_{t+2} = f(u_{t+1}, u_t, \ldots, u_{t-p+2})$
$\cdots$  (2)

In (1), f(·) is usually highly nonlinear. Based on polynomial approximation theory, a polynomial-type expression, widely used in time series analysis, can be used to express such nonlinear relationships. Thus f(·) in (1) may be written as

$f(u_{t-1}, u_{t-2}, \ldots, u_{t-p}) = \sum_{j=1}^{p} \sum_{k=1}^{q} c_{jk}\, u_{t-j}^{k}$  (3)

where p, q and $c_{jk}$ (j = 1, 2, …, p; k = 1, 2, …, q) are parameters to be determined (in some cases, some of these parameters may be set to zero as appropriate). Of these parameters, p and q are integer values deciding the number of input variables and the order of the polynomial model, respectively. They construct the frame of the model expression and can be called structure parameters. The $c_{jk}$ are real-valued model coefficients. For example, f(·) in (1) with p = 2 and q = 3 can be formulated as

$u_t = f(u_{t-1}, u_{t-2}) = \sum_{j=1}^{2} \sum_{k=1}^{3} c_{jk}\, u_{t-j}^{k} = c_{11} u_{t-1} + c_{12} u_{t-1}^{2} + c_{13} u_{t-1}^{3} + c_{21} u_{t-2} + c_{22} u_{t-2}^{2} + c_{23} u_{t-2}^{3}$  (4)

Therefore, once the structure parameters p and q and the model coefficients $c_{jk}$ are determined, the nonlinear time series model represented by (3) is fully identified.
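To make the roles of the structure parameters and the coefficients concrete, a model of form (3) can be evaluated as in the following sketch; the function name and the example values of p, q and the coefficients are illustrative, not taken from the paper.

def predict_next(history, coeffs):
    """One-step prediction with a polynomial model of form (3).

    `coeffs[j][k]` holds c_{j+1,k+1}; p = len(coeffs), q = len(coeffs[0]).
    `history` must contain at least p past observations, newest last.
    """
    p, q = len(coeffs), len(coeffs[0])
    return sum(coeffs[j][k] * history[-(j + 1)] ** (k + 1)
               for j in range(p) for k in range(q))

# Illustrative p = 2, q = 3 model, matching the shape of example (4).
c = [[0.5, 0.1, -0.01],   # c_11, c_12, c_13 multiply u_{t-1}, u_{t-1}^2, u_{t-1}^3
     [0.3, 0.0,  0.02]]   # c_21, c_22, c_23 multiply u_{t-2}, u_{t-2}^2, u_{t-2}^3
print(predict_next([1.2, 1.5], c))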

For most real-world systems, which are affected by a large number of factors with complex intercorrelations among them, the resulting identification problem is highly multimodal with a large parameter space. The question then becomes how to efficiently search this solution space for a combination of parameters that provides overall agreement with the observed time series. It is difficult to obtain a globally optimal solution using conventional regression methods. GAs are therefore employed to find the set of unknown parameters that best matches the model predictions with the observed results.

A closer examination reveals a natural nesting correlation between the model structure parameters p and q and the model coefficients $c_{jk}$: the coefficients $c_{jk}$ have to be introduced and optimized under a given model structure, while the goodness of a model structure decided by p and q has to be estimated with known model coefficients. In this paper, a nesting evolution algorithm is proposed for searching these correlated parameters to find the globally optimal result, in which an outer evolution cycle of model structures is nested with a series of inner evolution cycles of model coefficients. The details are described in the subsequent sections.

B. Outer Evolution Cycle of Model Structures

In the outer evolution cycle, model structure parameters are generated by a standard GA procedure, starting from an initial population of parameter sets {p, q}_i (i = 1, 2, …, N), the initial generation, where N is the population size. Each parameter set represents a model structure (i.e., the frame of the polynomial expression without coefficients). For example, the model structure with p = 2 and q = 2 can be written as

$u_t = f(u_{t-1}, u_{t-2}) = \sum_{j=1}^{2} \sum_{k=1}^{2} u_{t-j}^{k} = u_{t-1} + u_{t-1}^{2} + u_{t-2} + u_{t-2}^{2}$  (5)

Since a relatively small number of integer-valued parameters are to be evolved, the binary coding method is used here to obtain fast convergence. Typical chromosomes (binary strings) of some model structures are shown in Fig. 1. Subsequent generations are produced by genetic operators including selection, reproduction, crossover and mutation, so the algorithm can easily adjust the model structure through crossover and mutation on the binary strings. During the crossover operation, sub-strings (genes) from two parent chromosomes are randomly selected to produce a child chromosome. The mutation operation is conducted by randomly changing the values of some bits or the order of some sub-strings, as shown in Fig. 1.
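A possible rendering of the 4-bit-per-parameter coding and the binary operators of Fig. 1 follows; this is a sketch only, the exact bit layout is known here only from the figure, and the zero-value guard in decode() is an added assumption.

import random

BITS = 4  # 4-bit coding for each of p and q, as in Fig. 1

def encode(p, q):
    return format(p, f'0{BITS}b') + format(q, f'0{BITS}b')

def decode(chrom):
    p = max(1, int(chrom[:BITS], 2))   # guard against p = 0 or q = 0 (assumption)
    q = max(1, int(chrom[BITS:], 2))
    return p, q

def crossover(c1, c2):
    cut = random.randint(1, 2 * BITS - 1)       # one-point crossover on the bit string
    return c1[:cut] + c2[cut:]

def mutate(chrom, p_mut=0.2):
    return ''.join(('1' if b == '0' else '0') if random.random() < p_mut else b
                   for b in chrom)

print(decode(crossover(encode(5, 1), encode(2, 2))))  # parents as in Fig. 1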

Page 3: [IEEE 2010 Sixth International Conference on Natural Computation (ICNC) - Yantai, China (2010.08.10-2010.08.12)] 2010 Sixth International Conference on Natural Computation - Using

4407

Figure 1. Coding and genetic operators in the structure evolution cycle (a 4-bit coding is used for each parameter)

It should be noted that the fitness of each model structure cannot be estimated at this point because the model coefficients are not yet known. Therefore, the outer evolution cycle has to be suspended until the model coefficients have been optimized.

Once the model structures have been generated, the necessary coefficients are automatically introduced; these coefficients are optimized in the inner evolution cycle.

C. Inner Evolution Cycle of Model Coefficients

For every model structure generated in the outer evolution cycle, the parameter set {p, q} is decoded according to (3) to construct the mathematical expression by introducing the necessary model coefficients $c_{jk}$ (j = 1, 2, …, p; k = 1, 2, …, q). Another standard GA procedure is then used to evolve those coefficients in the context of the current model structure. Since the unknown parameters (p × q of them) are real values, it is more suitable and convenient to represent the genes directly as real values as well. Therefore, a real-coded GA, in which all genes in a chromosome are real numbers, is used. In the present analysis, the real-coded GA employs a reset stochastic selection procedure, simulated binary crossover and polynomial mutation. For details of these genetic operators, we refer the reader to [11]. A typical crossover operation in the context of parameter estimation is shown in Fig. 2.
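A hedged sketch of these two operators (simulated binary crossover and polynomial mutation in the form described in [11]) is given below; the distribution indices, coefficient bounds and mutation probability are illustrative assumptions.

import random

def sbx_pair(x1, x2, eta_c=15.0):
    """Simulated binary crossover (SBX) for one pair of real genes."""
    u = random.random()
    beta = (2 * u) ** (1 / (eta_c + 1)) if u <= 0.5 else (1 / (2 * (1 - u))) ** (1 / (eta_c + 1))
    return (0.5 * ((1 + beta) * x1 + (1 - beta) * x2),
            0.5 * ((1 - beta) * x1 + (1 + beta) * x2))

def poly_mutate(x, lo=-2.0, hi=2.0, eta_m=20.0, p_mut=0.05):
    """Polynomial mutation of one real gene, clipped to [lo, hi]."""
    if random.random() >= p_mut:
        return x
    u = random.random()
    delta = (2 * u) ** (1 / (eta_m + 1)) - 1 if u < 0.5 else 1 - (2 * (1 - u)) ** (1 / (eta_m + 1))
    return min(hi, max(lo, x + delta * (hi - lo)))

# Crossover followed by mutation, applied gene-by-gene to two coefficient chromosomes.
parent1, parent2 = [0.4, -0.02, 0.9], [0.1, 0.05, 0.7]
child = [poly_mutate(sbx_pair(a, b)[0]) for a, b in zip(parent1, parent2)]
print(child)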

Figure 2. Crossover operation on two candidate parameter sets in the real-coded GA procedure (χ(·) denotes the crossover rule)

D. Fitness Test

In a GA evolution process for a modeling problem, all generated models have to be tested against real-world results to obtain reference values (fitness values) that control the so-called natural selection. It is therefore necessary to collect a set of input-output sample cases of the studied system. For the time series modeling in the current study, these cases can be constructed according to (1). Each case consists of a sub-series of inputs $(u_{t-1}, u_{t-2}, \ldots, u_{t-p})$, selected by a moving window of size p from the observed series, and the corresponding output $u_t$. The data set is divided into two groups: one group is used as fitness cases to obtain the optimal model, and the other group is used as testing cases to assess the out-of-sample forecasting ability of the obtained model. Note that the value of p changes frequently during the evolution process, and the data set must be reconstructed accordingly.
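A sketch of this case construction follows; the split ratio and the example series are illustrative assumptions, not taken from the paper.

def build_cases(series, p):
    """Slide a window of size p over the series to form (inputs, target) cases."""
    return [(series[t - p:t], series[t]) for t in range(p, len(series))]

def split_cases(cases, train_ratio=0.75):
    """Earlier cases serve as fitness cases, later ones as out-of-sample testing cases."""
    cut = int(len(cases) * train_ratio)
    return cases[:cut], cases[cut:]

series = [1.0, 1.3, 1.7, 2.0, 2.6, 3.1, 3.9, 4.4]   # illustrative observations
fit_cases, test_cases = split_cases(build_cases(series, p=3))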

The fitness function, which returns a measure of the fitness of a model, is generally based on the discrepancies between the model predictions and the observed results and can be defined using some statistic of those discrepancies. In the current study, the root mean square error is used, namely

$\text{Fitness} = \left\{ \frac{1}{n} \sum_{i=1}^{n} \left[ u_i - u_i^{*} \right]^{2} \right\}^{1/2}$  (6)

where $u_i$ and $u_i^{*}$ are the model prediction and the observed result, respectively, and n is the number of learning cases.
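Equation (6) translates directly into code; a minimal sketch:

from math import sqrt

def rmse_fitness(predictions, observations):
    """Root mean square error between model predictions and observed values, as in (6)."""
    n = len(observations)
    return sqrt(sum((u_hat - u) ** 2 for u_hat, u in zip(predictions, observations)) / n)

print(rmse_fitness([1.1, 2.0, 2.9], [1.0, 2.1, 3.0]))  # ~0.1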

After the inner evolution cycle, the optimal set of model coefficients for each model structure is determined and the corresponding fitness value is assigned to the model structure for further evolution.

IV. ALGORITHM DETAILS

The nesting evolution algorithm for time series modeling relies on the following steps:

Step 1. Randomly generate a population of initial model structure parameter sets {p, q}i, then step into the outer evolution cycle;

Step 2. Estimate the fitness value of each model structure parameter set through the following sub-steps.

Sub-step 1. Construct the learning cases and testing cases according to (1) with the parameter p of the current model structure;

Sub-step 2. Decode the model coefficient information (e.g., the number of model coefficients) of the current model structure, then step into the inner evolution cycle;

Sub-step 3. Evolve the model coefficients of the current model structure, starting from an initial population of coefficient sets, until the termination conditions for the inner evolution cycle are satisfied; the generated coefficient sets are tested against the fitness cases to calculate their fitness values according to (6);

Sub-step 4. Test the current model with optimized coefficients over the testing cases to estimate the fitness value of the current model structure.

Step 3. If the termination conditions for the outer evolution cycle are satisfied, output the best models and terminate the algorithm. Otherwise, go to Step 4.

Step 4. Perform genetic operations on the current population of model structures to generate a new generation of model structures. Then go to Step 2.

The whole evolution process is logically shown in Fig. 3.
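In code, the nesting amounts to calling an inner coefficient GA inside the fitness evaluation of the outer structure GA. The following compact sketch assumes helper procedures such as those sketched earlier; the names structure_ga and coefficient_ga and the parameter ranges are hypothetical, not the authors' implementation.

import random

def nested_evolve(series, structure_ga, coefficient_ga, outer_generations=10, pop_size=8):
    """Sketch of the nesting scheme: the outer cycle evolves structures (p, q);
    each structure's fitness comes from running an inner coefficient GA.

    `structure_ga(scored)` and `coefficient_ga(series, p, q)` stand for the GA
    procedures of Sections III.B and III.C and are passed in as callables.
    """
    population = [(random.randint(1, 8), random.randint(1, 4)) for _ in range(pop_size)]
    best = (None, None, float('inf'))                    # (structure, coefficients, RMSE)
    for _ in range(outer_generations):
        scored = []
        for (p, q) in population:
            coeffs, rmse = coefficient_ga(series, p, q)  # inner evolution cycle
            scored.append(((p, q), coeffs, rmse))
            if rmse < best[2]:                           # RMSE of (6): lower is better
                best = ((p, q), coeffs, rmse)
        # Outer cycle: selection, crossover and mutation on the structure parameter sets.
        population = structure_ga(scored)
    return best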

[Figure 1 content: model structure 1 ($p=5$, $q=1$), $u_{t-1}+u_{t-2}+u_{t-3}+u_{t-4}+u_{t-5}$, chromosome 01010001; model structure 2 ($p=2$, $q=2$), $u_{t-1}+u_{t-1}^2+u_{t-2}+u_{t-2}^2$, chromosome 00100010; crossover produces new model structure 3 ($p=3$, $q=1$), $u_{t-1}+u_{t-2}+u_{t-3}$, chromosome 00110001; mutation produces new model structure 4 ($p=2$, $q=4$), $u_{t-1}+u_{t-1}^2+u_{t-1}^3+u_{t-1}^4+u_{t-2}+u_{t-2}^2+u_{t-2}^3+u_{t-2}^4$, chromosome 00100100.]

[Figure 2 content: parent chromosomes $\mathbf{P}_1 = (c_{11}^1, \ldots, c_{1q}^1, c_{21}^1, \ldots, c_{jk}^1, \ldots, c_{pq}^1)^{\mathrm{T}}$ and $\mathbf{P}_2 = (c_{11}^2, \ldots, c_{pq}^2)^{\mathrm{T}}$ are crossed to produce the child chromosome $\mathbf{P}_3 = (c_{11}^3 = \chi(c_{11}^1, c_{11}^2), \ldots, c_{jk}^3 = \chi(c_{jk}^1, c_{jk}^2), \ldots)^{\mathrm{T}}$.]


Figure 3. Nesting evolution procedure for time series modeling

It should be noted that the algorithm is based on stochastic search; it is therefore probabilistic in nature and may arrive at different solutions in different runs. To reduce the effect of this inherent variation, several runs should be performed to improve the reliability of the results.

V. APPLICATION TO DANGEROUS ROCK MASS

A. Problem Presentation

Stability analysis of a dangerous rock mass (DAM) is a major task in rock engineering. The collapse of a DAM may lead to serious natural hazards, accounting each year for enormous property damage in terms of both direct and indirect costs.

Much effort has been made over the last decades to establish different models to predict the stability state. Broadly, there are two main kinds of methods. The first kind develops numerical and analytical models to assess the duration and magnitude of the movement of the DAM. However, affected by the complex time-dependent properties of geo-materials and many other engineering factors with considerable uncertainties, the movement of a DAM is highly time-dependent and commonly exhibits complex nonlinear dynamic behavior. Given that the constitutive laws of geo-materials are far from well known, it is very difficult to develop an accurate physics-based model to calculate and predict this dynamic behavior. As an alternative, the second kind of method is based on data analysis of the deformation history of the DAM, which is regularly monitored during the movement. Of these methods, deformation time series analysis has attracted much attention, with encouraging results [3-7]. In this section, the proposed evolutionary modeling approach is applied to the modeling and prediction of the dynamic evolution process of a typical DAM.

B. Research Data and Implementation Settings

To demonstrate the robustness of the proposed evolution procedure for the nonlinear time series modeling problem under consideration, we apply it to a real DAM related to the Three Gorges Project in China. The monitored deformation history from 1978 to 1993 is used for modeling and prediction analysis. The data are illustrated in Fig. 4.

Before the proposed evolution procedure is implemented, control parameters have to be chosen to control the search process; they mainly include the population size, the probabilities of the genetic operators (crossover and mutation), and the termination conditions. Since there are two different GA cycles, i.e., the structure evolution cycle and the coefficient evolution cycle, two different sets of control parameters have to be selected so that the search can be carried out efficiently. The implementation settings used in the current study are listed in Table I.

TABLE I. CONTROL PARAMETERS

Structure evolution:
  Population size: 8
  Crossover probability: 0.7
  Mutation probability: 0.2
  Termination criterion: the fitness value of the best individual has remained unchanged for 3 generations

Parameter optimization:
  Population size: 200
  Crossover probability: 0.85
  Mutation probability: 0.05
  Termination criterion: the fitness value of the best individual has remained unchanged for 3 generations
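If the two cycles were implemented along the lines of the sketches above, the Table I settings could be carried as plain configuration records; the field names below are illustrative assumptions.

# Table I settings as configuration records (representation is illustrative).
STRUCTURE_GA_SETTINGS = {"population_size": 8, "crossover_probability": 0.7,
                         "mutation_probability": 0.2, "stall_generations": 3}
COEFFICIENT_GA_SETTINGS = {"population_size": 200, "crossover_probability": 0.85,
                           "mutation_probability": 0.05, "stall_generations": 3}
# "stall_generations" encodes the termination criterion: stop when the best
# fitness has remained unchanged for 3 generations.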

C. Application Results

For the training and testing cases, several runs of the procedure described in Section IV were performed with different randomly generated initial model structures, so that each run takes a different genetic path to evolve models. After all runs finished, the optimal time series model obtained was



$u_t = 0.4359\,u_{t-1} - 0.0157\,u_{t-1}^{2} + 0.9487\,u_{t-2} + 0.0022\,u_{t-2}^{2} + 0.1872\,u_{t-3} + 0.0016\,u_{t-3}^{2}$  (7)

Fig. 4 shows the learned and predicted deformation time series using the evolved model. One can note that the present method attains a satisfactory approximation; that is, the evolved model found the underlying relationship between the historical and future deformation series of the DAM. The evolved model can thus be used to predict the movement state of the DAM and to provide information for decision makers when planning prevention schedules.
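Once evolved, model (7) can be iterated to extrapolate beyond the observed series by feeding each prediction back as an input, in the spirit of (2). The sketch below uses the coefficients of (7); the starting values are illustrative.

def model7(u1, u2, u3):
    """Evolved model (7); arguments are u_{t-1}, u_{t-2}, u_{t-3}."""
    return (0.4359 * u1 - 0.0157 * u1 ** 2 + 0.9487 * u2
            + 0.0022 * u2 ** 2 + 0.1872 * u3 + 0.0016 * u3 ** 2)

def forecast(history, steps):
    """Multi-step prediction as in (2): each prediction is fed back as an input."""
    window = list(history[-3:])                  # most recent observation last
    predictions = []
    for _ in range(steps):
        u_next = model7(window[-1], window[-2], window[-3])
        predictions.append(u_next)
        window.append(u_next)
    return predictions

print(forecast([14.2, 15.0, 15.9], steps=3))     # illustrative deformation values in mm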

Figure 4. Learned and predicted results of the evolved nonlinear deformation time series model (7) (deformation in mm versus observation step; observed, learned and predicted series)

VI. CONCLUSIONS

This paper presented a nesting evolution procedure for nonlinear time series modeling. The method automatically constructs models through an outer binary-coded GA evolution cycle for model structure selection, nested with a set of inner real-coded GA evolution cycles for model coefficient optimization. It can thus perform a coupled global search for both the structure and the coefficients of polynomial-type nonlinear models. Using this method, a typical case study on the prediction of the deformation series of a real DAM related to the Three Gorges Project was presented. Nonlinear time series models were evolved and examined against the monitored deformation history. The results show good agreement between the observed and predicted deformations, and the obtained models have interpretable forms that can easily be used for further analysis. The present method has been tested with encouraging success and can serve as a useful alternative to other data-based modeling methodologies for the prediction analysis of complex dynamic systems.

REFERENCES

[1] G. E. P. Box and G. Jenkins, Time Series Analysis, Forecasting and Control. San Francisco, CA: Holden-Day, 1970.
[2] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting. New York: Springer, 1996.
[3] X. T. Feng, Z. Q. Zhang and P. Xu, "Adaptive and intelligent prediction of deformation time series of high rock excavation slope," Transactions of Nonferrous Metals Society of China (English Edition), vol. 9, pp. 842-846, April 1999.
[4] Z. Q. Huang, T. Jiang, Z. Q. Yue, et al., "Deformation of the central pier of the permanent shiplock, Three Gorges Project, China: an analysis case study," International Journal of Rock Mechanics & Mining Sciences, vol. 40, pp. 877-892, September 2003.
[5] C.-X. Yang and Y.-F. Zhu, "Time series analysis using GA optimized neural networks," in Proceedings of the 3rd International Conference on Natural Computation, vol. IV, Los Alamitos, CA: IEEE Computer Society Press, 2007, pp. 270-274.
[6] C. Yang, X. Feng and B. Chen, "Prediction of mining induced land subsidence using support vector machines," in Land Subsidence: Proceedings of the Seventh International Symposium on Land Subsidence, vol. 2, A. G. Zhang, S. L. Gong, L. Carbognin, et al., Eds. Shanghai: Shanghai Scientific & Technical Publishers, 2005, pp. 799-806.
[7] H. B. Zhao and X. T. Feng, "Study and application of genetic-support vector machine for nonlinear displacement time series forecasting," Chinese Journal of Geotechnical Engineering, vol. 25, pp. 468-471, July 2003 (in Chinese).
[8] D. E. Goldberg, "Genetic and evolutionary algorithms come of age," Communications of the ACM, vol. 37, pp. 113-119, March 1994.
[9] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[10] J. H. Holland, Adaptation in Natural and Artificial Systems. Michigan: The University of Michigan Press, 1975.
[11] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. Chichester, UK: Wiley, 2001.
