Research Article
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

The Scientific World Journal, Volume 2014, Article ID 192862, 17 pages, http://dx.doi.org/10.1155/2014/192862

Guang Pan, Pengcheng Ye, Peng Wang, and Zhidong Yang

School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an 710072, China

Correspondence should be addressed to Pengcheng Ye; [email protected]

Received 31 March 2014; Revised 26 May 2014; Accepted 23 June 2014; Published 15 July 2014

Academic Editor: Huamin Zhou

Copyright © 2014 Guang Pan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of the density function. Afterwards, more accurate metamodels are constructed by the above procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples.

1. Introduction

In engineering, manufacturing companies strive to produce better and cheaper products more quickly. However, engineering systems are fairly large and complicated nowadays. In addition, design requirements are rigorous and stringent for such systems, especially multidisciplinary design optimization systems such as aerospace. For example, in aircraft, the design is intrinsically a daunting optimization task often involving multiple disciplines, multiple objectives, and computation-intensive processes for product simulation. Taking the computational challenge as an example, it is reported that it takes Ford Motor Company about 36–160 h to run one crash simulation [1], which is unacceptable in practice. Despite the fact that computing capacity keeps increasing, the complexity of analysis software, for example, finite element analysis (FEA) and computational fluid dynamics (CFD), seems to keep pace with computing advances [2]. To meet the challenge of increasing model complexity, design engineers are seeking new methods. As a result, the metamodel, often called a surrogate model or response surface, has been proposed and improved by researchers as a widely used approximation model to replace expensive simulations. In fact, in everyday life we try to save time and make predictions based on assumptions. The literature [3] describes a vivid example to strengthen the understanding of metamodels. When travelling on a road, we will predict the rate of turn of a bend based on the entry and surrounding landscape. Without accurately evaluating it, in our mind we are constructing metamodels using the direction of the road, its derivatives with respect to distance along the road, and local elevation information. This information is coupled with assumptions based on our experience of going round many bends in the past. Then we will calculate a suitably safe speed based on our prediction of curvature and considering a safety margin. In engineering design we are also faced with different problems, but we try to do with a surrogate model essentially what we do every day in our mind: make useful predictions based on limited information and assumptions. In the past two decades, the use of metamodels [4–7] has attracted intensive attention. It is found to be a valuable tool to support a wide scope of activities in modern engineering design, especially design optimization.

Metamodeling, which means the process of constructing metamodels, involves two important aspects: (a) choosing a sampling method to generate sampling points and (b) choosing an approximation method to represent the data, both of which influence the performance of metamodels. An important research issue associated with metamodeling is how to obtain good accuracy of metamodels with reasonable sampling methods and approximation methods.


Accordingly, sampling methods and approximation methods have been intensively studied in recent years. As one of the most effective approximation methods, radial basis function (RBF) [8–10] interpolation has gained popularity for model approximation because of its simplicity and accurate results for interpolation problems. RBF is becoming a better choice for constructing metamodels or finding the global optima of computationally expensive functions using a limited number of sampling points. Other types of approximation methods, including Kriging [11], multivariate adaptive regression splines (MARS) [12], response surface methodology (RSM) [13], and support vector machines (SVM) [14], are discussed broadly as well. Mullur et al. proposed an improved form of the typical RBF approach, that is, extended radial basis functions (E-RBF), which offers more flexibility in the metamodel construction process and provides better smoothing properties. In general, it is expected to yield more accurate metamodels than the typical RBF, so this paper uses E-RBF to construct the metamodels.

The sampling method is another important factor affecting the accuracy of a given metamodel. Sampling methods can be divided by type into classical sampling methods, "space-filling" sampling methods, and sequential sampling methods. Classical sampling methods are developed from design of experiments (DOE). These methods focus on planning experiments and tend to spread the sample points around the boundaries of the design space to eliminate random error. The classical experiment designs include alphabetical optimal design [15], factorial or fractional factorial design [16], Box-Behnken design [17], central composite design (CCD) [18], and so forth. However, Sacks et al. [19] stated that classic experiment designs can be inefficient or even inappropriate for deterministic optimization problems. Jin et al. [20] confirmed that experiment designs for deterministic computer analyses should be space filling. The space-filling sampling methods, which are correspondingly more often used in the literature, are Latin hypercube design (LHD) [21], orthogonal array design [22], Hammersley sequences (HS) [23], and uniform designs [24]. The sampling methods above are generated all at once or, in other words, at one stage. It is difficult for one-stage sampling methods to forecast the number of sampling points. On the contrary, the sequential sampling approach generates sampling points one after another according to particular criteria instead of generating all points at once. In the sequential sampling technique, new sampling points are added to the sample by taking advantage of information gathered from the existing (earlier created) metamodel, and then the corresponding response surface is updated. Therefore, sequential sampling has recently gained popularity for its advantages of flexibility and adaptability over other methods. Jin et al. [25] stated that sequential sampling allows engineers to control the sampling process and is generally more efficient than one-stage sampling. Deng et al. [26] proposed a sequential sampling design method based on Kriging. Wei et al. [27] proposed a sequential sampling method adopting a criterion to determine the optimal sampling points that maximizes the product of the curvature and the square of the minimum distance to other design sites. Kitayama et al. [28] presented a sequential approximate optimization (SAO) algorithm using the RBF network with sequential sampling methods. A novel sequential sampling method based on the maximin distance criterion was proposed by Zhu et al. [29].

In this paper, a new sequential optimization sampling method with extended radial basis functions is proposed. In order to utilize the geometrical features of metamodels, the extrema points of the response surface are added to the sample as new optimization sampling points. By using the metamodel functions constructed by extended radial basis functions, the extrema points of metamodels can be obtained expediently. Moreover, an effective function [28, 30], called the density function, for determining the sparse region in the design variable space is considered. The density function constructed by using the RBF is used to discover a sparse region in the design space. It is expected that the addition of sampling points in the sparse region will improve the accuracy of the approximation model. Thus, a new metamodeling algorithm integrating a sequential optimization sampling method is presented. To illustrate the accuracy and efficiency of the proposed algorithm, the performance measures and several numerical examples will be tested.

The remainder of this paper is organized as follows. In the next section, the RBF and E-RBF are described briefly. In Section 3, a new sequential optimization sampling method is proposed; in addition, the density function [28, 30] is introduced. In Section 4, the numerical examples, assessment measures, test results, and discussions are provided. The last section is the closure of the paper, where we summarize the important observations made from our study.

2. Radial Basis Functions

2.1. Learning of Classical Radial Basis Functions. The RBF metamodel was originally developed by Hardy [31] in 1971 to fit irregular topographic contours of geographical data. It has been known, tested, and verified for several decades, and many positive properties have been identified. Mullur and Messac [32] made radial basis functions more flexible and effective by adding so-called nonradial basis functions. Krishnamurthy [33] added a polynomial to the definition of RBF to improve the performance. Wu [34] provided criteria for positive definiteness of radial functions with compact support, which produced series of positive definite radial functions.

An RBF network is a three-layer feed-forward network, shown in Figure 1. The output of the network, f̂(x), which corresponds to the response surface, is typically given by

y = f̂(x) = ∑_{i=1}^{N} λ_i φ(‖x − x_i‖) = Φ · λ,  (1)

where N is the number of sampling points, x is a vector of design variables, x_i is the vector of design variable values at the ith sampling point, Φ = [φ_1, φ_2, ..., φ_N] (φ_i = φ(‖x − x_i‖)), λ = [λ_1, λ_2, ..., λ_N]^T, ‖x − x_i‖ is the Euclidean norm, φ is a basis function, and λ_i is the coefficient for the ith basis function.

Figure 1: Three-layer feed-forward RBF network.

Table 1: Commonly used basis functions (r = ‖x − x_i‖; c is a constant).
Linear: φ(r) = cr
Cubic: φ(r) = (r + c)^3
Thin-plate spline: φ(r) = r^2 log(cr^2)
Gaussian: φ(r) = exp(−cr^2)
Multiquadric: φ(r) = (r^2 + c^2)^{1/2}

The approximation function ŷ is actually a linear combination of RBFs with weight coefficients λ_i. The most commonly used classical radial functions are listed in Table 1. The multiquadric and Gaussian radial basis functions are the best known and most often applied. The multiquadric is nonsingular and simple to use [35]; hence, the multiquadric radial basis function is used in this paper.
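To make (1) concrete, the following minimal sketch (an illustration, not the authors' code) fits a multiquadric RBF interpolant to a few sampling points of test function 2 from Table 3 and evaluates it at a new location; the sample size, the uniform random sampling, and the constant c = 1 are illustrative assumptions (c = 1 matches the setting used later in Section 4.2).

```python
import numpy as np

def multiquadric(r, c=1.0):
    # Multiquadric basis from Table 1: phi(r) = (r^2 + c^2)^(1/2)
    return np.sqrt(r**2 + c**2)

def fit_rbf(X, y, c=1.0):
    # Solve Phi * lam = y for the weights lambda_i of Eq. (1).
    # X: (N, n) sampling points, y: (N,) observed responses.
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    Phi = multiquadric(r, c)
    return np.linalg.solve(Phi, y)

def predict_rbf(X_new, X, lam, c=1.0):
    # Evaluate f_hat(x) = sum_i lambda_i * phi(||x - x_i||) at new points.
    r = np.linalg.norm(X_new[:, None, :] - X[None, :, :], axis=-1)
    return multiquadric(r, c) @ lam

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-2.0, 2.0, size=(15, 2))             # 15 illustrative samples in [-2, 2]^2
    y = 4 * X[:, 0] * np.exp(-X[:, 0]**2 - X[:, 1]**2)   # test function 2 from Table 3
    lam = fit_rbf(X, y)
    print(predict_rbf(np.array([[-0.7, 0.0]]), X, lam))  # prediction near the true minimum
```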

2.2. Learning of Extended Radial Basis Functions. The extended radial basis functions approach is a combination of radial and nonradial basis functions, which incorporates more flexibility in the metamodels by introducing additional degrees of freedom in the metamodel definition. It provides better smoothing properties, and in general it is expected to yield more accurate metamodels than the typical RBF. Mullur and Messac [32, 36] found that the E-RBF approach results in highly accurate metamodels compared to the classical RBF and Kriging. Under the E-RBF approach, the approximation function takes the form

f̂(x) = ∑_{i=1}^{N} λ_i φ(‖x − x_i‖) + ∑_{i=1}^{N} ∑_{j=1}^{n} [α_{ij}^L ψ_L(ξ_i^j) + α_{ij}^R ψ_R(ξ_i^j) + β_{ij} ψ_β(ξ_i^j)],  (2)

where n is the number of design variables; α_{ij}^L, α_{ij}^R, and β_{ij} are coefficients to be determined for given problems; and ψ_L, ψ_R, and ψ_β are components of the so-called nonradial basis functions defined in Table 2.

Figure 2: Definition of the coordinate ξ.

Figure 3: Nonradial basis functions.

Nonradial basis functions are functions of ξ_i^j, where ξ_i is the coordinate vector of a generic point x in the design space relative to a data point x_i, defined as ξ_i = x − x_i. Thus ξ_i^j is the coordinate of any point x relative to the data point x_i along the jth dimension. The difference between the Euclidean distance r used in the RBF and the relative coordinates ξ used for the N-RBF is depicted for a two-dimensional case in Figure 2. Four distinct regions (I–IV) are depicted in Figure 3, each corresponding to a row in Table 2.

In matrix notation, the metamodel defined in (2) can be written as

[A] λ + [B] {(α^L)^T (α^R)^T β^T}^T = f.  (3)

Equation (3) can be compactly written in matrix form as

[Ã] α̃ = f,  (4)

where [Ã] = [A B], α̃ = {λ^T (α^L)^T (α^R)^T β^T}^T, and f = {f(x_i)}.

The coefficients α̃ can be evaluated by using the pseudoinverse approach to solve the underdetermined system of (4) as follows:

α̃ = [Ã]^+ f,  (5)

where [Ã]^+ denotes the pseudoinverse of [Ã].

Table 2: Nonradial basis functions ψ(ξ_i^j) (γ and η are prescribed parameters; refer to [32, 36]).
Region I (ξ_i^j ≤ −γ): ψ_L = (−ηγ^{η−1}) ξ_i^j + γ^η (1 − η), ψ_R = 0, ψ_β = ξ_i^j
Region II (−γ ≤ ξ_i^j ≤ 0): ψ_L = (ξ_i^j)^η, ψ_R = 0, ψ_β = ξ_i^j
Region III (0 ≤ ξ_i^j ≤ γ): ψ_L = 0, ψ_R = (ξ_i^j)^η, ψ_β = ξ_i^j
Region IV (ξ_i^j ≥ γ): ψ_L = 0, ψ_R = (ηγ^{η−1}) ξ_i^j + γ^η (1 − η), ψ_β = ξ_i^j

After obtaining the coefficients α̃ using E-RBF, one can evaluate the metamodel and construct the response surface using (2). The resulting metamodel is one that is guaranteed to be convex and highly accurate [32, 36]. In the following section, a series of mathematical examples are approximated by E-RBF based on the sampling method proposed in the next section.

In this section we have introduced the metamodel approach E-RBF only briefly due to space limitations. A more complete description and discussion are presented in the articles [32, 36].
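The coefficient computation of (4)-(5) reduces to a minimum-norm solution of an underdetermined linear system. The sketch below assumes the combined matrix [A B] has already been assembled from (2) and Table 2 (that assembly is omitted here) and only illustrates the pseudoinverse step; the matrix dimensions are arbitrary placeholders.

```python
import numpy as np

def solve_erbf_coefficients(A_tilde, f):
    # Eq. (5): alpha = [A]^+ f, with [A]^+ the Moore-Penrose pseudoinverse,
    # i.e. the minimum-norm solution of the underdetermined system [A] alpha = f.
    return np.linalg.pinv(A_tilde) @ f

# Illustrative dimensions only: N = 10 sampling points and 40 unknown coefficients
# (lambda, alpha_L, alpha_R, beta stacked as in Eq. (4)).
rng = np.random.default_rng(1)
A_tilde = rng.standard_normal((10, 40))
f = rng.standard_normal(10)
alpha = solve_erbf_coefficients(A_tilde, f)
print(np.allclose(A_tilde @ alpha, f))  # the interpolation conditions are satisfied
```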

3. The Sequential Optimization Sampling Method

As is known, metamodels are approximation models of real models, which are commonly complex and unknown. The accuracy of metamodels depends primarily on the approximation methods and the sampling methods [37]. The sampling method proposed in this paper includes two parts. The first part is the procedure of adding optimization sampling points, and the second part is the procedure of adding points in sparse regions [28, 30].

3.1. The Optimization Sampling Points. The optimization sampling points should have two important properties: they should be adaptive and sensitive. The focus of sampling should shift to how to generate a reasonable number of sampling points intelligently so that the metamodel can reflect the real "black-box functions" in areas of interest. General knowledge tells us that sampling points at the valleys and peaks of the response surface would improve the accuracy to the greatest extent. The valleys and peaks are extrema points of functions.

In mathematics, the points which are the largest or smallest within a given neighborhood are defined as extrema points. McDonald et al. [38] found that the radial basis function models created with (1) are twice continuously differentiable when employing the multiquadric function as the basis function for all c ≠ 0. The first approximation model is constructed from the initial sampling points using radial basis functions. Thus, the function evaluations, analytic gradients, and the Hessian matrix of second partial derivatives can be obtained from the initial functions.

Considering the RBF model with n dimensions in (1), the gradient of the equation is

∂f̂/∂x = ∑_{i=1}^{N} λ_i (φ′(r_i)/r_i(x)) (x − x_i)^T,  (6)

where φ′(r_i) = ∂φ/∂r_i and r_i(x) = ‖x − x_i‖. For the multiquadric RBF model, φ(r_i) = (r_i^2 + c^2)^{1/2} and φ′(r_i) = r_i (r_i^2 + c^2)^{−1/2}. The Hessian matrix can be calculated from (6) as

H(x) = ∂²f̂/∂x² = ∑_{i=1}^{N} (λ_i/r_i(x)) [φ′(r_i) · I + (φ″(r_i) − φ′(r_i)/r_i(x))] (x − x_i)^T.  (7)

Similarly, φ″(r_i) = ∂²φ/∂r_i². For the multiquadric RBF model, φ″(r_i) = c^2 (r_i^2 + c^2)^{−3/2}, and I is the identity matrix.

Let (6) be equal to zero and solve

∂f̂/∂x = 0,  x = (x_1, x_2, ..., x_N).  (8)

Then the critical point x_e can be obtained. Upon substitution of the critical point x_e into the Hessian matrix, we can judge the definiteness of the matrix H(x_e). The critical point is a minimum or a maximum when the matrix is positive definite or negative definite, respectively. Figures 4 and 5 show the extrema points in the 2D and 3D cases, respectively. The red square and blue pentagon indicate maxima points, while the green dot and red asterisk indicate minima points. Once an approximation model has been created, we can obtain the coordinates of its extrema points.
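A rough sketch of this extremum search is given below for a plain multiquadric RBF metamodel (Eq. (1)); the multi-start search, the use of SciPy's general-purpose minimizer on the squared gradient norm, and the finite-difference Hessian are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_gradient(x, X, lam, c=1.0):
    # Eq. (6) with phi'(r) = r / sqrt(r^2 + c^2), so phi'(r)/r = 1 / sqrt(r^2 + c^2).
    r = np.linalg.norm(x - X, axis=1)
    return ((lam / np.sqrt(r**2 + c**2))[:, None] * (x - X)).sum(axis=0)

def hessian_fd(x, X, lam, c=1.0, h=1e-5):
    # Central finite differences of the gradient stand in for Eq. (7).
    n = x.size
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        H[:, j] = (rbf_gradient(x + e, X, lam, c) - rbf_gradient(x - e, X, lam, c)) / (2 * h)
    return 0.5 * (H + H.T)

def find_extrema(X, lam, bounds, n_starts=20, c=1.0, seed=0):
    # Drive the gradient to zero from several random starts, then classify each
    # stationary point by the eigenvalues of the Hessian (Section 3.1).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    extrema = []
    for x0 in rng.uniform(lo, hi, size=(n_starts, X.shape[1])):
        res = minimize(lambda x: rbf_gradient(x, X, lam, c) @ rbf_gradient(x, X, lam, c), x0)
        eig = np.linalg.eigvalsh(hessian_fd(res.x, X, lam, c))
        if np.all(eig > 0):
            extrema.append(("minimum", res.x))   # positive definite Hessian
        elif np.all(eig < 0):
            extrema.append(("maximum", res.x))   # negative definite Hessian
    return extrema
```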

3.2. Density Function with the RBF. It is necessary to add new sampling points in the sparse region for global approximation. To achieve this, a function called the density function, which was proposed by Kitayama et al. [28, 30], is constructed using the RBF network in this paper. The aim of the density function is to discover a sparse region in the design space. This density function generates local minima in the sparse region, so that the minimum of this function can be taken as a new sampling point. The addition of new sampling points in the sparse region will improve the accuracy of the approximation model and help to find the global minimum of metamodels.

To explore the sparse region, every output f of the RBF network is replaced with +1. Let N be the number of sampling points. The procedure for constructing the density function is summarized as follows.

(1) The output vector f_D is prepared at the sampling points:

f_D = (1, 1, ..., 1)^T_{N×1}.  (9)

Figure 4: Extrema points in 2D.

Figure 5: Extrema points in 3D.

Figure 5 Extrema points in 3D

(2) The weight vector 120582

119863of the density function 119863(119909) is

calculated as follows

120582

119863= (Φ

119879Φ + Δ)

minus1

Φ

119879119891

119863

(10)

where

Φ = [ φ_1(x_1)  φ_2(x_1)  ⋯  φ_N(x_1)
      φ_1(x_2)  φ_2(x_2)  ⋯  φ_N(x_2)
      ⋮         ⋮         ⋱  ⋮
      φ_1(x_N)  φ_2(x_N)  ⋯  φ_N(x_N) ],

Δ = 10^{−3} × [ 1  0  ⋯  0
                0  1  ⋯  0
                ⋮  ⋮  ⋱  ⋮
                0  0  ⋯  1 ]_{N×N}.  (11)

(3) The additional sampling point x_D in the sparse region is found as the global minimum of the density function with the RBF:

D(x_D) = ∑_{j=1}^{N} λ_{Dj} φ_j(x_D) → min.  (12)
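The density-function construction of (9)-(12) can be sketched as follows; the multiquadric basis with c = 1 and SciPy's differential evolution as the global minimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def density_function_weights(X, c=1.0):
    # Eqs. (9)-(11): every output is replaced by +1 and the weights are
    # lambda_D = (Phi^T Phi + Delta)^(-1) Phi^T f_D with Delta = 1e-3 * I.
    N = X.shape[0]
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.sqrt(r**2 + c**2)  # multiquadric basis (illustrative choice)
    return np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(N), Phi.T @ np.ones(N))

def density_function(x, X, lam_D, c=1.0):
    # Eq. (12): D(x) = sum_j lambda_Dj * phi_j(x); its minimum marks a sparse region.
    r = np.linalg.norm(x - X, axis=1)
    return np.sum(lam_D * np.sqrt(r**2 + c**2))

def new_sparse_point(X, bounds, c=1.0):
    # Take the global minimizer of D(x) as the new sampling point x_D.
    lam_D = density_function_weights(X, c)
    res = differential_evolution(lambda x: density_function(x, X, lam_D, c), bounds, seed=0)
    return res.x
```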

3.3. Summary of the Proposed Algorithm. The detailed algorithm for the sequential optimization sampling method with the RBF network is described below, and Figure 6 shows the proposed algorithm. The proposed algorithm is roughly divided into two phases. The first phase is used to construct the response surface and to add the extrema points of the response surface as new sampling points. The second phase is used to construct the density function and to add the minimum of the density function as a new sampling point. These two phases are described in detail as follows.

Figure 6: Proposed algorithm (flowchart: starting from m initial sampling points, the first phase constructs the response surface with RBF, finds its extrema points, and adds them to the sample; the second phase constructs the density function with RBF, finds its minimum point, and adds it to the sample while count ≤ d; the sampling points are updated until m reaches m_max).

First phase: m initial sampling points are generated using the Latin hypercube sampling design. All functions are evaluated at the sampling points, and the response surface f̂(x) is constructed. The extrema points of the response surface can then be found and directly taken as new sampling points. A more accurate metamodel is constructed by repeating the procedure of adding extrema points to the sample adaptively and sequentially.

Second phase: the density function is constructed to find the sparse region. The minimum point of the density function is taken as a new sampling point. This step is repeated while a terminal criterion (count ≤ d) is satisfied. The parameter d controls the number of sampling points obtained by the density function. Kitayama et al. [28] advised d = int(n/2). In this paper the parameter d = int(n/2) + 1, where int( ) represents rounding off. A new sampling point is gained if the parameter count is less than d, and the counter is increased as count = count + 1.

The terminal criterion of the integrated algorithm is determined by the maximum number of sampling points m_max. If the number of sampling points is less than m_max, the algorithm proceeds; otherwise, the algorithm is terminated. In the algorithm, the response surface is constructed repeatedly through the addition of the new sampling points, namely, the extrema points and the minimum point of the density function. Afterwards, a more accurate metamodel is constructed by repeating the procedure of adding points to the sample adaptively and sequentially.
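Putting the two phases together, a compact sketch of the loop in Figure 6 could look as follows; it reuses fit_rbf, find_extrema, and new_sparse_point from the earlier sketches, replaces the Latin hypercube design by plain uniform sampling for brevity, and simplifies the stopping test, so it is an outline of the procedure rather than the authors' implementation.

```python
import numpy as np

def sosm(true_function, lo, hi, n_vars, m_init=10, m_max=25, c=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = int(n_vars / 2) + 1                          # density-function points per cycle (Section 3.3)
    X = rng.uniform(lo, hi, size=(m_init, n_vars))   # the paper uses LHD for the initial sample
    y = np.array([true_function(x) for x in X])
    while X.shape[0] < m_max:
        lam = fit_rbf(X, y, c)
        # First phase: add the extrema points of the current response surface.
        for _, x_new in find_extrema(X, lam, (lo, hi), c=c):
            X = np.vstack([X, x_new]); y = np.append(y, true_function(x_new))
        # Second phase: add d points located by the density function in sparse regions.
        for _ in range(d):
            x_new = new_sparse_point(X, [(lo, hi)] * n_vars, c)
            X = np.vstack([X, x_new]); y = np.append(y, true_function(x_new))
    return fit_rbf(X, y, c), X, y
```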

4. Numerical Examples

In this section we test the validity of the proposed sampling method through some well-known numerical examples and one engineering optimization problem. All these functions are approximated with the E-RBF model. The response surfaces are constructed through one-stage sampling methods (i.e., LHD and RND) and sequential sampling methods (i.e., CV [25], Kitayama et al. [28], SLE [29], and the SOSM proposed in this paper) with the same sampling size. In order to visualize the comparison between the approximated model and the actual model, the numerical examples with two design variables listed in Table 3 are tested.

4.1. Sampling Strategies. Two types of sampling methods are used in this paper: one-stage sampling methods and sequential sampling methods. One-stage sampling methods include the Latin hypercube sampling design (LHD) and the random sampling design (RND). Sequential sampling methods include the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28] (termed KSSM in this paper), the successive local enumeration sampling method (SLE) [29], and the sequential optimization sampling method (SOSM) proposed in this paper.

Table 3: Numerical functions and global minima.
Number 1: f(x) = −10 sinc(x_1) · sinc(x_2); design domain −2 ≤ x ≤ 2; global minimum x = (0, 0), f_min = −10.
Number 2: f(x) = 4x_1 · e^{−x_1² − x_2²}; design domain −2 ≤ x ≤ 2; global minimum x = (−√2/2, 0), f_min = −1.72.
Number 3: f(x) = x_1² + x_2² − 10[cos(2πx_1) + cos(2πx_2)] + 20; design domain −0.8 ≤ x ≤ 0.8; global minimum x = (0, 0), f_min = 0.
Number 4: f(x) = 3(1 − x_1)² · e^{−x_1² − (x_2+1)²} − 10(x_1/5 − x_1³ − x_2⁵) · e^{−x_1² − x_2²} − (1/3) e^{−(x_1+1)² − x_2²}; design domain −3 ≤ x ≤ 3; global minimum x = (0.228, −1.63), f_min = −6.55.
Number 5: f(x) = −60/[1 + (x_1+1)² + (x_2−3)²] − 20/[1 + (x_1−1)² + (x_2−3)²] − 30/[1 + x_1² + (x_2+4)²] + 30; design domain −6 ≤ x ≤ 6; global minimum x = (−0.97, 3), f_min = −34.63.
Number 6: f(x) = −10 sin(√(x_1² + x_2² + eps))/√(x_1² + x_2² + eps) + 10, eps = 10^{−15}; design domain −3π ≤ x ≤ 3π; global minimum x = (0, 0), f_min = 0.

Every metamodel is constructed 20 times with the same sampling method in this paper. For functions 1 and 2 we set m_max = 25. For functions 4 and 5 we set m_max = 36. For functions 3 and 6 we generate 28 and 40 sampling points, respectively.

4.2. Selection of Parameters. For the E-RBF approach we set c = 1, a prescribed parameter of the multiquadric basis functions, for all of the examples. The parameter γ is set equal to approximately 1/3 of the design domain size; Mullur and Messac [32] found that the results were not unduly sensitive to γ. Another parameter, η, is set equal to 2 for all numerical examples.

4.3. Metamodel Accuracy Measures. Generally speaking, an E-RBF response surface passes through all the sampling points exactly. Therefore, it is impossible to estimate the accuracy of an E-RBF model with the sampling points themselves. To measure the accuracy of the resulting metamodels, we use additional testing points to evaluate the model via the standard error measure, the root-mean-squared error (RMSE). The smaller the value of RMSE is, the more accurate the response surface will be. The error measure is defined as

RMSE = √( ∑_{k=1}^{K} [f(x_k) − f̂(x_k)]² / K ),  (13)

where K is the number of additional testing points generated by the grid sampling method (32 × 32 for all the examples), and f(x_k) and f̂(x_k) are the true function value and the predicted metamodel value at the kth testing point x_k, respectively.

In addition to the preceding RMSE, we also calculate the normalized root-mean-squared error (NRMSE) as follows:

NRMSE = √( ∑_{k=1}^{K} [f(x_k) − f̂(x_k)]² / ∑_{k=1}^{K} [f(x_k)]² ) × 100.  (14)

RMSE only measures the error in the units of the function itself. However, NRMSE allows comparison of the metamodel error values across different functions.

In engineering problems the global minimum is generally required, so we employ simulated annealing (SA) [39] to calculate the global minimum f̂_min based on the ultimate metamodel. The actual global minima f_min are listed in Table 3.
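Under the assumption of a square two-dimensional design domain, the two error measures of (13)-(14) on the 32 × 32 grid of testing points can be computed as in the sketch below; metamodel and true_function are placeholder callables.

```python
import numpy as np

def rmse_nrmse(metamodel, true_function, lo, hi, grid=32):
    # Eqs. (13)-(14): errors over K = grid * grid testing points on a regular grid.
    x1, x2 = np.meshgrid(np.linspace(lo, hi, grid), np.linspace(lo, hi, grid))
    pts = np.column_stack([x1.ravel(), x2.ravel()])
    f_true = np.array([true_function(p) for p in pts])
    f_hat = np.array([metamodel(p) for p in pts])
    rmse = np.sqrt(np.mean((f_true - f_hat) ** 2))
    nrmse = 100.0 * np.sqrt(np.sum((f_true - f_hat) ** 2) / np.sum(f_true ** 2))
    return rmse, nrmse
```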

4.4. Results and Discussions. In this section we discuss the results obtained after constructing metamodels through six various sampling methods, using the assessment measures described above. As mentioned in Section 4.1, 20 runs are conducted for each sampling method, and therefore there are twenty sets of accuracy results for each sampling method. The advantages and validity of the sequential optimization sampling method (SOSM) are tested in comparison with one-stage sampling methods and sequential sampling methods separately below. In addition, SOSM is used to solve a typical mechanical design optimization problem.

4.4.1. Comparison of the Performance between SOSM and One-Stage Sampling Methods. In this part, two classical one-stage sampling methods, LHD and RND, and SOSM are used to construct metamodels. The accuracy of the metamodels and the global minima of the functions are obtained and processed. The error measures RMSE and NRMSE and the global minima summarized in Table 4 are average values.

From Table 4, the RMSE and NRMSE of metamodels using the sampling method SOSM are smaller than those of the other two one-stage sampling methods for functions 1–5. For function 6 the values are close. The RND, as expected, performs poorly. That is, metamodels based on the sampling method SOSM may provide a better fit to the actual functions. As seen in Table 4, in exploring the global minimum f̂_min of the metamodels, SOSM is superior to LHD and RND through all numerical examples compared to the actual global minimum f_min. In particular, SOSM can almost always find the global minimum of all numerical examples. However, LHD and RND perform fairly poorly, particularly in the success rate, which is depicted in Figure 9.

The mean results in Table 4 cannot adequately represent the advantages and disadvantages of the different sampling methods. Thus, statistical graphics, that is, boxplots, are used to show the deviations of the accuracy and global minimum of each metamodel.

Table 4: Metamodel accuracy results for functions 1–6 between SOSM and one-stage sampling methods (RMSE, NRMSE, and f̂_min for each method; f_min is the actual global minimum).
Number 1: SOSM 0.663, 28.214, −10.002; LHD 0.722, 30.698, −9.137; RND 0.817, 34.766, −8.728; f_min = −10.
Number 2: SOSM 0.061, 9.798, −1.715; LHD 0.090, 14.524, −1.611; RND 0.122, 19.659, −1.662; f_min = −1.715.
Number 3: SOSM 1.674, 6.464, −0.002; LHD 1.790, 6.913, 0.140; RND 2.072, 8.001, 0.040; f_min = 0.
Number 4: SOSM 0.728, 38.118, −6.556; LHD 0.810, 42.456, −5.786; RND 0.861, 45.116, −5.374; f_min = −6.551.
Number 5: SOSM 2.455, 10.531, −34.243; LHD 3.353, 14.380, −12.928; RND 3.789, 16.253, −9.907; f_min = −34.63.
Number 6: SOSM 0.995, 10.011, 0.116; LHD 0.992, 9.991, 2.284; RND 1.076, 10.830, 2.070; f_min = 0.

Figure 7: Assessment of metamodels: RMSE of functions 1–6 between SOSM and one-stage sampling methods.

In descriptive statistics, a boxplot is a convenient way of graphically depicting groups of numerical data through their quartiles. The center line of each boxplot shows the 50th percentile (median) value, and the box encompasses the 25th and 75th percentiles of the data. The leader lines (horizontal lines) are plotted at a distance of 1.5 times the interquartile range in each direction or at the limit of the data (if the limit of the data falls within 1.5 times the interquartile range). The data points outside the horizontal lines are shown by placing a sign ("◻") for each point. The accuracy results (RMSE and NRMSE) and global minima f̂_min of the twenty metamodel runs with the sampling methods SOSM, LHD, and RND are illustrated in Figures 7–9 with the help of boxplots.

From the results shown in Figures 7 and 8, it is found that the median values of RMSE and NRMSE for SOSM are smaller compared to LHD and RND for functions 1–5. For function 6 the median values of RMSE and NRMSE are a little larger but very close. In addition, the box size of RMSE and NRMSE based on SOSM is the shortest for all functions except functions 4 and 5, where the difference in sizes is small. It is clear that a small box size suggests a small standard deviation of the results; a high standard deviation of results suggests large uncertainties in the predictions, and a low standard deviation suggests low uncertainty. The points outside the horizontal lines indicate runs with poor results. Above all, whether mean values or median values and box sizes of the results are taken into account, the accuracy of metamodels with the sampling method SOSM is better.

Figure 8: Assessment of metamodels: NRMSE of functions 1–6 between SOSM and one-stage sampling methods.

Figure 9: Assessment of metamodels: global minimum of functions 1–6 between SOSM and one-stage sampling methods.

Figure 9 depicts the boxplot of the global minimum f̂_min obtained over the twenty numerical experiments. It is obvious from Figure 9 that the median value for SOSM is approximately equal to the actual global minimum f_min. Meanwhile, the small sizes of the boxes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; a smaller standard deviation of results implies a more robust algorithm.

Figure 10: Global minimum points of functions 1–6. (a)–(f) show the global minimum points for functions 1–6 separately. The red pentagon indicates the actual global minimum point. The black dots indicate the global minimum points of metamodels based on the sampling method SOSM. The blue squares indicate the global minimum points of metamodels based on the sampling method LHD. The magenta diamonds indicate the global minimum points of metamodels based on the sampling method RND.

Figure 11: Function 1: actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

It is clear from Figure 9 that SOSM is a more robust sampling method than the other two sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima in Table 4 with the distribution of the global minima of the metamodels based on the sampling methods LHD and RND in Figure 9, their success rate is poor. In other words, SOSM plays a perfect role in finding the global minimum of metamodels.

In order to indicate the effectiveness of SOSM in seeking global minimum points, the positions of the global minimum points of the metamodels for functions 1–6 are shown in Figures 10(a)–10(f) separately. The red pentagon indicates the actual global minimum point. The black dots indicate the global minimum points of metamodels based on the sampling method SOSM. The blue squares indicate the global minimum points of metamodels based on the sampling method LHD. The magenta diamonds indicate the global minimum points of metamodels based on the sampling method RND. Figure 10 shows that the black dots are distributed densely around the center of the red pentagon. Meanwhile, the blue squares and magenta diamonds are scattered around the red pentagon, and the difference between the actual global minimum points and the global minimum points of metamodels based on LHD or RND is mostly quite large. The above demonstrates that SOSM is superior to LHD and RND for global optimization. Panels (a)–(d) in Figures 11, 12, 13, 14, 15, and 16 show the graphs of the actual functions and of the associated metamodels based on the three sampling methods separately. It can be observed intuitively that the metamodel surface adopting the SOSM method is smoother than those of LHD and RND, and furthermore the global minima of the metamodels based on SOSM are consistent with the actual global minima. The conclusions reached above are further confirmed by comparing the actual function surfaces to the metamodel surfaces.

4.4.2. Comparison of the Performance between SOSM and Previous Sequential Sampling Methods. In this part, three different sequential sampling methods, including the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28] (termed KSSM), and the successive local enumeration sampling method (SLE) [29], are used to construct metamodels in comparison with those constructed by SOSM. Similarly, the accuracy of the metamodels and the global minima of the functions are obtained and processed.

Figure 12: Function 2: actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

The accuracy measures RMSE and NRMSE and the global minima summarized in Table 5 are mean values. Note that a value of zero for both accuracy measures, RMSE and NRMSE, would indicate a perfect fit.

From Table 5, the RMSE and NRMSE of metamodels using the sampling method SOSM are smaller than those of the sequential sampling methods CV and KSSM for functions 1–5. For function 6 the values are close. The SLE performs best in this respect. That is, metamodels based on the sampling method SOSM may provide a better fit to the actual functions than CV and KSSM. However, the sampling method SLE performs unsatisfactorily in terms of exploring the global minimum. As seen in Table 5, in exploring the global minimum f̂_min of the metamodels, SOSM is superior to the previous sequential sampling method SLE through all numerical examples compared to the actual global minimum f_min. In addition, the sequential sampling methods CV and KSSM perform as well as SOSM in exploring the global minimum; in general KSSM is the best, SOSM takes second place, and CV is the least. It can be concluded that the sequential sampling method SOSM proposed in this paper is the best choice considering both the accuracy of the metamodels and the exploration of the global minimum.

In order to adequately represent the advantages and disadvantages of the different sequential sampling methods, the accuracy results (RMSE and NRMSE) and global minima f̂_min of the twenty metamodel runs with the sequential sampling methods, that is, SOSM, CV, KSSM, and SLE, are illustrated in Figures 17–19 with the help of boxplots.

From the results shown in Figures 17 and 18, it is found that the median values of RMSE and NRMSE based on SOSM are smaller compared to CV and KSSM for functions 1–5. For function 6 the median values of RMSE and NRMSE are a little larger but very close. Except for function 5, the accuracy of the metamodels constructed by SLE is the best. In addition, the box size of RMSE and NRMSE based on SOSM is shorter than those of CV and KSSM for functions 1–4. The box size of RMSE and NRMSE based on SLE is the shortest for all functions except functions 1 and 3. A small box size suggests a small standard deviation of results; a low standard deviation of results suggests small uncertainties in the predictions, and a large standard deviation suggests high uncertainty. The points outside the horizontal lines indicate runs with poor results. Above all, whether mean values or median values and box sizes of the results are taken into account, the accuracy of metamodels with the sampling method SOSM is superior to the sampling methods CV and KSSM.

Figure 13: Function 3: actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

Table 5: Metamodel accuracy results for functions 1–6 between SOSM and previous sequential sampling methods (RMSE, NRMSE, and f̂_min for each method; f_min is the actual global minimum).
Number 1: SOSM 0.663, 28.214, −10.002; CV [25] 0.699, 29.707, −9.995; KSSM [28] 0.709, 30.142, −10.017; SLE [29] 0.493, 20.951, −9.539; f_min = −10.
Number 2: SOSM 0.061, 9.798, −1.715; CV 0.089, 14.312, −1.715; KSSM 0.120, 19.343, −1.716; SLE 0.065, 10.504, −1.622; f_min = −1.715.
Number 3: SOSM 1.674, 6.464, −0.002; CV 2.261, 8.732, 0.000; KSSM 2.305, 8.902, −0.001; SLE 1.459, 5.636, 0.105; f_min = 0.
Number 4: SOSM 0.728, 38.118, −6.556; CV 0.813, 42.601, −6.514; KSSM 0.847, 44.359, −6.552; SLE 0.533, 27.899, −5.904; f_min = −6.551.
Number 5: SOSM 2.455, 10.531, −34.243; CV 2.508, 10.759, −33.794; KSSM 2.466, 10.575, −34.259; SLE 2.789, 11.961, −12.212; f_min = −34.63.
Number 6: SOSM 0.995, 10.011, 0.116; CV 0.943, 9.486, 0.390; KSSM 0.950, 9.558, 0.052; SLE 0.703, 7.080, 1.274; f_min = 0.

Certainly, the sampling method SLE performs best in terms of the accuracy of metamodels. However, SLE is not good at exploring the global minimum of the functions.

Figure 19 depicts the boxplot of the global minimum f̂_min obtained over the twenty numerical experiments. It is obvious from Figure 19 that the median values of SOSM, CV, and KSSM are approximately equal to the actual global minimum f_min. Meanwhile, the small sizes of the boxes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; therefore, a smaller standard deviation of results implies a more robust algorithm. It is clear from Figure 19 that the sequential sampling methods, except SLE, are robust sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima in Table 5 with the distribution of the global minima of the metamodels based on the sampling method SLE in Figure 19, its success rate is poor. It is obvious that SOSM plays a perfect role both in constructing accurate metamodels and in finding the global minimum of the functions.

Figure 14: Function 4: actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

Six various sampling methods have been used to construct metamodels for six typical functions in this paper. It can be demonstrated that sequential sampling methods perform better and more efficiently than one-stage sampling methods. Furthermore, the sequential sampling technique allows engineers to control the sampling process. In general, one-stage sampling techniques are not as good as sequential sampling methods for fitting the area where the global minimum is located. In a word, SOSM, the sequential sampling method proposed in this paper, is the best choice considering the accuracy of metamodels and locating the global minimum. In other words, SOSM is reliable in the whole fitting space and also good for fitting the area where the global minimum is located.

4.4.3. Engineering Problem. The validity of the sequential sampling method SOSM proposed in this paper is tested on a typical mechanical design optimization problem involving four design variables, that is, pressure vessel design. This problem has been studied by many researchers [40–42]. The schematic of the pressure vessel is shown in Figure 20. In this case, a cylindrical pressure vessel with two hemispherical heads is designed for minimum fabrication cost.

Four variables are identified: the thickness of the pressure vessel T_s, the thickness of the head T_h, the inner radius of the pressure vessel R, and the length of the vessel without heads L. In this case, the variable vector is given (in inches) by

X = (T_s, T_h, R, L) = (x_1, x_2, x_3, x_4).  (15)

The objective function is the combined cost of materials, forming, and welding of the pressure vessel. The mathematical model of the optimization problem is expressed as

min f(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3² + 3.1661 x_1² x_4 + 19.84 x_1² x_3

s.t. g_1(X) = −x_1 + 0.0193 x_3 ≤ 0,
     g_2(X) = −x_2 + 0.00954 x_3 ≤ 0,
     g_3(X) = −π x_3² x_4 − (4/3) π x_3³ + 129600 ≤ 0,
     g_4(X) = x_4 − 240 ≤ 0.  (16)
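For reference, the objective and constraints of (16) are straightforward to encode; the sketch below uses the constants exactly as they appear in (16) and checks the SOSM design reported later in Table 6 (the printed cost should be close to the tabulated 6237.402).

```python
import numpy as np

def pressure_vessel_cost(x):
    # Objective of Eq. (16): material, forming, and welding cost.
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    # g_i(X) <= 0 for a feasible design, with the constants as given in Eq. (16).
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 129600.0,
        x4 - 240.0,
    ])

x_sosm = np.array([1.000, 0.625, 41.523, 120.562])   # SOSM design from Table 6
print(pressure_vessel_cost(x_sosm))
print(pressure_vessel_constraints(x_sosm))
```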

Figure 15: Function 5: actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

Table 6: Comparison of optimal results for the design of a pressure vessel.
Design variable x_1: Cao and Wu [42] 1.000; Kannan and Kramer [43] 1.125; Deb [44] 0.9375; this paper using SOSM 1.000.
Design variable x_2: 0.625; 0.625; 0.5000; 0.625.
Design variable x_3: 51.1958; 58.291; 48.3290; 41.523.
Design variable x_4: 60.7821; 43.690; 112.6790; 120.562.
f(X): 7108.616; 7198.042; 6410.381; 6237.402.

The ranges of the design variables, x_1 ∈ [0.0625, 6.25], x_2 ∈ [0.0625, 6.25], x_3 ∈ [6.25, 125], and x_4 ∈ [6.25, 125], are used referring to the literature [40].

The problem formulated above is a simple nonlinear constrained problem. Now, assuming that the objective and constraint functions defined by (16) are computation-intensive functions, metamodels of the functions are constructed by RBF using the sequential sampling method SOSM.

This problem has been solved by many researchers, including Cao and Wu [42] applying an evolutionary programming model, Kannan and Kramer [43] using an augmented Lagrangian multiplier approach, and Deb [44] using a genetic adaptive search.

The average values of the optimal results from 50 runs are listed in Table 6, compared with the three results reported in the literature [42–44]. It can be seen from the table that the optimal solution in this paper is still about 2.7% better than the best solution previously reported in the literature [44].

5. Conclusions

In this paper, the sequential optimization sampling method (SOSM) for metamodels has been proposed. The recently developed extended radial basis functions (E-RBF) are introduced as an approximation method to construct the metamodels. Combining the new sampling strategy SOSM with the extended radial basis functions, the design algorithm for building metamodels is presented.

Figure 16: Function 6: actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

Figure 17: Assessment of metamodels: RMSE of functions 1–6 between SOSM and previous sequential sampling methods.

In the SOSM, the optimal sampling points of the response surface are taken as new sampling points in order to improve the local accuracy.

Figure 18: Assessment of metamodels: NRMSE of functions 1–6 between SOSM and previous sequential sampling methods.

In addition, new sampling points in the sparse region are required for better global approximation. To determine the sparse region, the density function constructed by the radial basis functions network has been applied.

Figure 19: Assessment of metamodels: global minimum of functions 1–6 between SOSM and previous sequential sampling methods.

Figure 20: Diagram of the pressure vessel design.

In the proposed algorithm, the response surface is constructed repeatedly until the terminal criterion, that is, the maximum number of sampling points m_max, is satisfied.

For the sake of examining the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for the accuracy of metamodels, that is, RMSE and NRMSE, are employed. Meanwhile, the global minimum is introduced to judge the performance of sampling methods for metamodels. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CV, KSSM, and SLE), the SOSM results in more accurate metamodels. Furthermore, SOSM is superior in exploring the global minimum of metamodels compared to the other five sampling methods.

The new sequential optimization sampling method SOSM provides another effective way to generate sampling points for constructing metamodels. It is superior in seeking the global minimum compared to the previous sampling methods and, meanwhile, can improve the accuracy of metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from the National Science Foundation (CMMI-51375389 and 51279165) is greatly acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences-Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.

[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275–286, 1999.

[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1–3, pp. 50–79, 2009.

[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Journal of Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61–70, 2011.

[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1–6, IEEE, January 2013.

[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198–207, 2013.

[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61–73, 2013.

[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194–206, 2013.

[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406–429, 2014.

[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210–226, 2013.

[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64–73, 2013.

[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914–924, 2014.

[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521–1529, 2013.

[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881–890, 2013.


[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39–46, 2012.

[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807–1815, 2013.

[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306–316, 2011.

[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764–1773, 2011.

[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.

[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Journal of Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1–13, 2001.

[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819–830, 2013.

[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254–260, 2013.

[23] L. Ke, H. B. Qiu, Z. Z. Chen, and L. Chi, "Engineering design based on Hammersley sequences sampling method and SVR," Advanced Materials Research, vol. 544, pp. 206–211, 2012.

[24] Y. Tang and H. Xu, "An effective construction method for multi-level uniform designs," Journal of Statistical Planning and Inference, vol. 143, no. 9, pp. 1583–1589, 2013.

[25] R. Jin, W. Chen, and A. Sudjianto, "On sequential sampling for global metamodeling in engineering design," in Proceedings of the ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 539–548, American Society of Mechanical Engineers, 2002.

[26] Y. M. Deng, Y. Zhang, and Y. C. Lam, "A hybrid of mode-pursuing sampling method and genetic algorithm for minimization of injection molding warpage," Materials & Design, vol. 31, no. 4, pp. 2118–2123, 2010.

[27] X. Wei, Y. Wu, and L. Chen, "A new sequential optimal sampling method for radial basis functions," Journal of Applied Mathematics and Computation, vol. 218, no. 19, pp. 9635–9646, 2012.

[28] S. Kitayama, M. Arakawa, and K. Yamazaki, "Sequential approximate optimization using radial basis function network for engineering optimization," Optimization and Engineering, vol. 12, no. 4, pp. 535–557, 2011.

[29] H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551–564, 2012.

[30] S. Kitayama, J. Srirat, and M. Arakawa, "Sequential approximate multi-objective optimization using radial basis function network," Structural and Multidisciplinary Optimization, vol. 48, no. 3, pp. 501–515, 2013.

[31] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, no. 8, pp. 1905–1915, 1971.

[32] A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306–1315, 2005.

[33] T. Krishnamurthy, "Response surface approximation with augmented and compactly supported radial basis functions," in Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, vol. 1748, pp. 3210–3224, April 2003, AIAA Paper.

[34] Z. Wu, "Compactly supported positive definite radial functions," Journal of Advances in Computational Mathematics, vol. 4, no. 1, pp. 283–292, 1995.

[35] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11–22, 1984.

[36] A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203–217, 2006.

[37] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, no. 4, pp. 370–380, 2007.

[38] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy, "Global and local optimization using radial basis function response surface models," Applied Mathematical Modelling, vol. 31, no. 10, pp. 2095–2110, 2007.

[39] P. Borges, T. Eid, and E. Bergseng, "Applying simulated annealing using different methods for the neighborhood search in forest planning problems," European Journal of Operational Research, vol. 233, no. 3, pp. 700–710, 2014.

[40] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.

[41] L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.

[42] Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443–446, Indianapolis, Ind, USA, April 1997.

[43] B. K. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, pp. 318–320, 1994.

[44] K. Deb, "GeneAS: a robust optimal design technique for mechanical component design," in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds., pp. 497–514, Springer, Berlin, Germany, 1997.



2 The Scientific World Journal

methods and approximationmethods Accordingly samplingmethods and approximation methods are intensively studiedin recent years As one of the most effective approximationmethods radial basis functions (RBF) [8ndash10] interpolationhas been gained popularity for model approximation becauseof their simplicity and accurate results for interpolationproblems RBF is becoming a better choice for constructingmetamodels or finding the global optima of computationallyexpensive functions by using a limited number of samplingpoints Other types of approximation methods includingKriging [11] multivariate adaptive regression splines (MARS)[12] response surface methodology (RSM) [13] and supportvector machines (SVM) [14] and so forth are discussedbroadly as well Mullur et al proposed an improved formof the typical RBF approach that is extended radial basisfunctions (E-RBF) which offers more flexibility in the meta-model construction process and provides better smoothingproperties In general it is expected to yield more accuratemetamodels than the typical RBF So this paper uses E-RBFto construct the metamodels

The sampling method is another important factor ofaffecting accuracy for a given metamodel The samplingmethod can be divided into the classical sampling methodldquospace-fillingrdquo sampling method and sequential samplingmethod according to the type Classic sampling methodsare developed from design of experiments (DOE) Thesemethods focus on planning experiments and tend to spreadthe sample points around boundaries of the design space foreliminating random error The classical experiment designscontain alphabetical optimal design [15] factorial or frac-tional factorial design [16] Box-Behnken [17] central com-posite design (CCD) [18] and so forth However Sacks et al[19] stated that classic experiment designs can be inefficient oreven inappropriate for deterministic optimal problems Jin etal [20] confirmed that experiment designs for deterministiccomputer analyses should be spaced filling The space-fillingsampling methods which are correspondingly more oftenused in the literature are Latin hypercube design (LHD)[21] orthogonal arrays design [22] Hammersley sequences(HS) [23] and uniform designs [24] The sampling methodsabove are generated all at once or in other words at onestage It is difficult for the one-stage sampling methods toforecast the number of sampling points On the contrarythe sequential sampling approach generates sampling pointsone after another according to the particular criteria insteadof generating all points at once In sequential samplingtechnique the new sampling points are added to the sampleby taking the advantage of information gathered from theexisting (earlier created) metamodel and then correspond-ing response surface is updated Therefore the sequentialsampling recently has gained popularity for its advantages offlexibility and adaptability over other methods Jin et al [25]stated that sequential sampling allows engineers to controlthe sampling process and it is generally more efficient thanone-stage sampling Deng et al [26] proposed a sequentialsampling design method based on Kriging Wei et al [27]proposed a sequential sampling method adopting a criterionto determine the optimal sampling points which maximizedthe value of the product of curvature and square of minimum

distance to other design sites Kitayama et al [28] presented asequential approximate optimization (SAO) algorithm usingthe RBF network with sequential sampling methods A novelsequential sampling method based on the maximin distancecriterion was proposed by Zhu et al [29]

In this paper a new sequential optimization samplingmethod with extended radial basis functions is proposedIn order to utilize the geometrical feature of metamodelsextrema points of the response surface as new optimizationsampling points are added to the sample Through using themetamodeling functions constructed by extended radial basisfunctions the extrema points of metamodels can be achievedexpediently Moreover an effective function [28 30] calledthe density function for determining the sparse region inthe design variable space is considered The density functionconstructed by using the RBF is to discover a sparse region inthe design space It is expected that the addition of samplingpoints in the sparse region will improve the accuracy ofapproximation model Thus a new metamodeling algorithmintegrating a sequential optimization sampling method ispresented To illustrate the accuracy and efficiency of theproposed algorithm the measure performance and severalnumerical examples will be tested

The remainder of this paper is organized as follows Innext section the RBF and E-RBF are described briefly InSection 3 a new sequential optimization sampling methodis proposed In addition the density function [28 30] isintroduced In Section 4 the numerical examples assessmentmeasures test results and discussions and so forth will beprovidedThe last section is the closure of the paper where wesummarize the important observations made from our study

2 Radial Basis Functions

21 Learning of Classical Radial Basis Functions The RBFmetamodel was originally developed by Hardy [31] in 1971to fit irregular topographic contours of geographical dataIt has been known tested and verified for several decadesand many positive properties have been identified Mullurand Messac [32] made radial basis functions more flexibleand effective by adding so-called nonradial basis functionsKrishnamurthy [33] added a polynomial to the definitionof RBF for improving the performance Wu [34] providedcriteria for positive definiteness of radial functions withcompact support which produced series of positive definiteradial functions

An RBF network is a three-layer feed-forward networkshown in Figure 1 The output of the network

119891(119909) whichcorresponds to the response surface is typically given by

119910 =

119891 (x) =

119873

sum

119894=1

120582

119894120601 (

1003817

1003817

1003817

1003817

x minus x119894

1003817

1003817

1003817

1003817

) = Φ sdot 120582 (1)

where 119873 is the number of sampling points x is a vector ofdesign variables x

119894is a vector values of design variables at

the 119894th sampling point Φ = [120601

1 120601

2 120601

119873] (120601

119894= 120601(x minus

x119894)) 120582 = [120582

1 120582

2 120582

119873]

119879 x minus x119894 is the Euclidean norm

120601 is a basis function and 120582

119894is the coefficient for the 119894th

The Scientific World Journal 3

x1

x2

xN

1206011

1206012

120601N

1205821

1205822

120582N

fRBFsum

Figure 1 Three-layer feed-forward RBF network

Table 1 Commonly used basis functions

Name Radial function 119903 = 119909 minus 119909

119894

2

Linear 120601(119903) = 119888119903

Cubic 120601(119903) = (119903 + 119888)

3

Thin-plate spline 120601(119903) = 119903

2 log (119888119903

2)

Gaussian 120601(119903) = exp (minus119888119903

2)

Multiquadric 120601(119903) = (119903

2+ 119888

2)

12

c is a constant

basis function The approximation function 119910 is actually alinear combination of some RBF with weight coefficients 120582

119894

Themost commonly used radial classical radial functions arelisted in Table 1 The radial basis functions multiquadric andGaussian are the best known and most often applied Themultiquadric is nonsingular and simple to use [35] Henceradial basis function multiquadric is used in this paper

22 Learning of Extended Radial Basis Functions Theextended radial basis functions approach is a combinationof radial and nonradial basis functions which incorporatemore flexibility in the metamodels by introducing additionaldegrees of freedom in the metamodels definition It providesbetter smoothing properties and in general it is expected toyieldmore accuratemetamodels than the typical RBFMullurandMessac [32 36] found that the E-RBF approach results inhighly accurate metamodels compared to the classical RBFand Kriging Under the E-RBF approach the approximationfunction takes the form

119891 (x) =

119873

sum

119894=1

120582

119894120601 (

1003817

1003817

1003817

1003817

x minus x119894

1003817

1003817

1003817

1003817

)

+

119873

sum

119894=1

119899

sum

119895=1

120572

119871

119894119895120595

119871(120585

119895

119894) + 120572

119877

119894119895120595

119877(120585

119895

119894) + 120573

119894119895120595

120573(120585

119895

119894)

(2)

where 119899 is the number of design variables 120572119871119894119895 120572119877119894119895 120573119894119895are

coefficients to be determined for given problems 120595119871 120595119877 120595120573are components of the so-called nonradial basis functionsdefined in Table 2 Nonradial basis functions are functions of

Data point xi

Radial distance r

Generic point x

Design variable x1

120585i2

120585i1

Des

ign

varia

blex

2

Figure 2 Definition of coordinate 120585

Coord 120585120574120574

I II III IV

Linear

Linear

120572L120585n + 120573120585

120572R120585n + 120573120585

Basic

func

tion120601(120585)

Figure 3 Nonradial basis functions

120585

119895

119894 which is the coordinate vector of a generic point 119909 in the

design space relative to a data point 119909119894 defined as 120585

119894= 119909 minus 119909

119894

Thus 120585119895119894is the coordinate of any point 119909 relative to the data

point 119909119894along the 119895th dimensionThe difference between the

Euclidean distance 119903 used in RBF and the relative coordinates120585 used for N-RBF for a two-dimensional case is depicted inFigure 2 Four distinct regions (IndashIV) are depicted in Figure 3each corresponding to a row in Table 2

In matrix notation the metamodel defined in (2) can bewritten as

[119860] 120582 + [119861] (120572

119871)

119879

(120572

119877)

119879

120573

119879

119879

= 119891 (3)

Equation (3) can be compactly written in matrix form as

[119860] 120572 = 119891 (4)

where [119860] = [119860119861] 120572 = 120582

119879(120572

119871)

119879

(120572

119877)

119879

120573

119879

119879

and 119891 = 119891(119909

119894)

The coefficients120572 can be evaluated by using the pseudoin-verse approach to solve the underdetermined system of (4) asfollows

120572 = [119860]

+

119891 (5)

where [119860]

+ denotes the pseudoinverse of [119860]

4 The Scientific World Journal

Table 2 Nonradial basis functions 120595(120585

119895

119894)

Region Range of 120585119895119894

120595

119871120595

119877120595

120573

I 120585

119895

119894le minus120574 (minus120578120574

120578minus1) 120585

119895

119894+ 120574

120578(1 minus 120578) 0 120585

119895

119894

II minus120574 le 120585

119895

119894le 0 (120585

119895

119894)

120578

0 120585

119895

119894

III 0 le 120585

119895

119894le 120574 0 (120585

119895

119894)

120578

120585

119895

119894

IV 120585

119895

119894ge 120574 0 (120578120574

120578minus1) 120585

119895

119894+ 120574

120578(1 minus 120578) 120585

119895

119894

120574 120578 are prescribed parameters refer to [32 36]

After obtaining the coefficients 120572 using E-RBF one canevaluate the metamodels to construct response surface using(2) The resulting metamodel is one that is guaranteed to beconvex andhighly accurate [32 36] In the following section aseries of mathematical examples are approximated by E-RBFbased on the sampling method proposed in next section

In this section we introduce the metamodel approachE-RBF briefly due to space limitations A more completedescription and discussion are presented in articles [32 36]

3 The Sequential OptimizationSampling Method

As it is known the metamodels are approximation models ofrealmodels which are commonly complex and unknownTheaccuracy of metamodels owes to the approximation methodsand sampling methods primarily [37] The sampling methodproposed in this paper includes two parts The first part isthe procedure of adding optimization sampling points andthe second part is the procedure of adding points of sparseregions [28 30]

31 The Optimization Sampling Points The optimizationsampling points should have two important properties asfollows adaptive and sensitiveThe focus on sampling shouldshift to how to generate a reasonable number of samplingpoints intelligently so that the metamodel can reflect the realldquoblack-box functionsrdquo in areas of interest General knowledgetells us that sampling points at the site of valley andpeak of theresponse surface would improve the accuracy at the greatestextent The valley and peak are extrema points of functions

In mathematics the points which are the largest orsmallest within a given neighborhood are defined extremapoints McDonald et al [38] found the radial basis functionsmodels created with (1) are twice continuously differentiablewhen employing multiquadric function as basis function forall 119888 = 0 The first approximation model is constructed withinitial sampling points through using radial basis functionsThus the function evaluations analytic gradients and theHessian matrix of second partial derivatives can be obtainedfrom the initial functions

Considering the RBF model with 119899 dimensions in (1) thegradients of the equation are

120597

119891

120597x=

119873

sum

119894=1

120582

119894120601

1015840(119903

119894)

119903

119894 (x)

(x minus x119894)

119879 (6)

where 120601

1015840(119903

119894) = 120597120601120597119903

119894 119903119894(x) = xminus x

119894 For multiquadric RBF

model 120601(119903119894) = (119903

119894

2+ 119888

2)

12 1206011015840(119903119894) = 119903

119894(119903

119894

2+ 119888

2)

minus12The Hessian matrix can be calculated from (6) as

119867(x) =

120597

2

119891

120597x2

=

119873

sum

119894=1

120582

119894

119903

119894 (x)

[120601

1015840(119903

119894) sdot 119868 + (120601

10158401015840(119903

119894) minus

120601

1015840(119903

119894)

119903

119894 (x)

)] (x minus x119894)

119879

(7)

Similarly 12060110158401015840(119903119894) = 120597

2120601120597119903

119894

2 For multiquadric RBF model120601

10158401015840(119903

119894) = 119888

2(119903

119894

2+ 119888

2)

minus32 119868 is a unit vectorLet (6) be equal to zero and solve

120597

119891

120597x= 0 x = (119909

1 119909

2 119909

119873)

(8)

Then the critical point x119890can be obtained Upon substitution

of the critical point x119890into Hessian matrix we can judge the

definition of the matrix119867(x119890)The critical point is maximum

or minimum when the matrix is definite positive or definitenegative Figures 4 and 5 separately show the extrema pointsin the case of 2D 3D The red square and blue pentagonindicate maxima points meanwhile the green dot and redasterisk indicate minima points Once an approximationmodel has been created we can obtain coordinates of extremapoints

32 Density Function with the RBF It is necessary to addnew sampling points in the sparse region for global approx-imation To achieve this a new function called the densityfunction which is proposed by Kitayama et al [28 30] isconstructed using the RBF network in this paper The aimof the density function is to discover a sparse region in thedesign spaceThis density function generates local minima inthe sparse region so that theminimumof this function can betaken as a new sampling pointThe addition of new samplingpoints in the sparse region will improve the accuracy ofapproximation model and help to find the global minimumof metamodels

To explore the sparse region every output 119891of the RBFnetwork is replacedwith+1 Let119873 be the number of samplingpointsThe procedure for constructing the density function issummarized as follows

(1) The output vector 119891

119863 is prepared at the samplingpoints

119891

119863= (1 1 1)

119879

119873lowast1 (9)

The Scientific World Journal 5

MaximumMinimum

minus25120587 minus125120587 125120587 251205870

0

1

2

minus1

Figure 4 Extrema points in 2D

minus3 minus2 minus1 0 1 2 3minus3minus2minus10123

minus6

minus4

minus2

0

2

4

6

8

MaximumMinimum

Figure 5 Extrema points in 3D

(2) The weight vector 120582

119863of the density function 119863(119909) is

calculated as follows

120582

119863= (Φ

119879Φ + Δ)

minus1

Φ

119879119891

119863

(10)

where

Φ =

[

[

[

[

[

120601

1(119909

1) 120601

2(119909

1) sdot sdot sdot 120601

119873(119909

1)

120601

1(119909

2) 120601

2(119909

2) sdot sdot sdot 120601

119873(119909

2)

d

120601

1(119909

119873) 120601

2(119909

119873) sdot sdot sdot 120601

119873(119909

119873)

]

]

]

]

]

Δ = 10

minus3times

[

[

[

[

[

1 0 sdot sdot sdot 0

0 1 sdot sdot sdot 0

d

0 0 sdot sdot sdot 1

]

]

]

]

]119873lowast119873

(11)

(3) The addition of sampling point119909119863 in the sparse regionis explored as the globalminimumof density functionwith the RBF

119863(119909

119863) =

119873

sum

119895=1

120582

119863

119895120601

119895(119909

119863) 997888rarr min (12)

33 Summary of the Proposed Algorithm The detailed algo-rithm for sequential optimization samplingmethodwith RBFnetwork is described below Figure 6 shows the proposedalgorithm The proposed algorithm is roughly divided intotwo phases The first phase is used to construct the responsesurface and add the extrema points of response surface asnew sampling points The second phase is used to constructthe density function and add the minimum of the densityfunction as a new sampling point These two phases aredescribed particularly as follows

6 The Scientific World Journal

Construction of the response surface with RBF

Find the extrema points of response surface

Add the extrema points to the sample

Construction of the density function with RBF

Find the minimum point of density function

Add the minimum point to the sample

Yes

Yes

No

No

Second phase

First phase

count = 1

count = count + 1 count le d

m lt mmax

End

Update

the

sampling

points

m

Initial sampling points m

Figure 6 Proposed algorithm

First phase119898 initial sampling points are generated usingthe Latin hypercube sampling design All functions areevaluated at the sampling points and the response surface

119891(119909) is constructed The extrema points of response surfacecan then be found and directly taken as the new samplingpoints The more accurate metamodel would be constructedby repeating the procedure of adding extrema points to thesample adaptively and sequentially

Second phase the density function is constructed to findthe sparse regionTheminimumpoint of the density functionis taken as a new sampling point This step is repeated whilea terminal criterion (count le 119889) is satisfied Parameter119889 controls the number of sampling points obtained by thedensity function Kitayama et al [28] advised 119889 = int(1198992) Inthis paper parameter 119889 = int(1198992)+1 where int( ) representsthe rounding-off The new sampling point would be gainedif the parameter count is less than 119889 and it is increased ascount = count + 1

The terminal criterion of the integrated algorithm isdetermined by the maximum number of sampling points119898max If the number of sampling points is less than 119898maxthe algorithm proceeds Otherwise the algorithm is termi-nated In the algorithm the response surface is constructedrepeatedly through the addition of the new sampling pointsnamely extrema points and minimum point of densityfunction Afterwards the more accurate metamodel would

be constructed by repeating the procedure of adding pointsto the sample adaptively and sequentially

4 Numerical Examples

In this section we would test the validity of the proposedsampling method through some well-known numericalexamples and one engineering optimization problem Allthese functions are approximated with the E-RBF modelThe response surfaces are constructed through one-stagesampling methods (ie LHD and RND) and sequentialsampling methods (ie CV [25] Kitayama et al [28] SLE[29] and SOSM proposed in this paper) with the samesampling size In order to visualize the comparison betweenapproximated model and actual model two design variablesof numerical examples listed in Table 3 are tested

41 Sampling Strategies Two types of samplingmethods usedin this paper separately are one-stage sampling methods andsequential sampling methods One-stage sampling methodsinclude Latin hypercube sampling design (LHD) and Ran-dom sampling design (RND) Sequential sampling methodsinclude cross-validation samplingmethod (CV) [25] sequen-tial samplingmethodproposed byKitayama et al [28] termedKSSM in this paper successive local enumeration sampling

The Scientific World Journal 7

Table 3 Numerical function and global minimum

Number Function Design domain Global minimum1 119891(119909) = minus10 sin 119888 (119909

1) sdot sin 119888 (119909

2) minus2 le 119909 le 2 119909 = (0 0) 119891min = minus10

2 119891(119909) = 4119909

1sdot 119890

minus1199092

1minus1199092

2minus2 le 119909 le 2 119909 = (minus

radic22 0) 119891min = minus172

3 119891(119909) = 119909

2

1+ 119909

2

2minus 10 lowast [cos (2120587119909

1) + cos (2120587119909

1)] + 20 minus08 le 119909 le 08 119909 = (0 0) 119891min = 0

4119891(119909) = 3(1 minus 119909

1)

2sdot 119890

minus1199092

1minus(1199092+1)

2

minus 10 (

119909

1

5

minus 119909

3

1minus 119909

5

2)

sdot 119890

minus1199092

1minus1199092

2minus

1

3

119890

minus(1199091+1)2minus1199092

2

minus3 le 119909 le 3 119909 = (0228 minus163) 119891min = minus655

5119891(119909) = minus60[1 + (119909

1+ 1)

2

+ (119909

2minus 3)

2

] minus 20[1 + (119909

1minus 1)

2

+ (119909

2minus 3)

2

] minus 30[1 + 119909

2

1+ (119909

2+ 4)

2

] + 30

minus6 le 119909 le 6 119909 = (minus097 3) 119891min = minus3463

6 119891(119909) = minus10(sinradic119909

2

1+ 119909

2

2+ epsradic119909

2

1+ 119909

2

2+ eps) + 10 eps = 10

minus15minus3120587 le 119909 le 3120587 119909 = (0 0) 119891min = 0

method (SLE) [29] and sequential optimization samplingmethod (SOSM) proposed in this paper Every metamodelis constructed for 20 times with the same sampling methodin this paper For functions 1 and 2 we set 119898max = 25 Forfunctions 4 and 5 we set 119898max = 36 For functions 3 and 6we generate 28 and 40 sampling points separately

42 Selection of Parameters For the E-RBF approach we set119888 = 1 which is a prescribed parameter for the multiquadricbasis functions for all of the examplesThe parameter 120574 is setequal to approximately 13 of the design domain size MullurandMessac [32] investigated that the results were not undulysensitive to 120574 Another parameter 120578 is set equal to 2 for allnumerical examples

43 Metamodel Accuracy Measures Generally speaking anE-RBF response surface passes through all the samplingpoints exactly Therefore it is impossible to estimate theaccuracy of an E-RBF model with sampling points Tomeasure the accuracy of the resulting metamodels we canuse additional testing points to evaluate the accuracy of themodel via standard error measure root-mean-squared error(RMSE)The smaller the value of RMSE is the more accuratethe response surface will be The error measure is defined as

RMSE =

radic

sum

119870

119896=1[119891 (x119896) minus

119891 (x119896)]

2

119870

(13)

where119870 is the number of additional testing points generatedby grid sampling method (32 lowast 32 for all the examples)119891(x119896) and

119891(x119896) are the true function value and predicted

metamodel value at the 119896th testing point x119896 respectively

In addition to the preceding RMSE we also calculate thenormalized root-mean-squared error (NRMSE) as follows

NRMSE =

radic

sum

119870

119896=1[119891 (x119896) minus

119891 (x119896)]

2

sum

119870

119896=1[119891 (x119896)]

2lowast 100

(14)

RMSE only calculates the error of functions themselvesHowever NRMSE allows comparison of themetamodel errorvalues with regard to different functions

In engineering problems global minimum is requiredgenerally So we employ the simulated annealing (SA) [39]to calculate the global minimum

119891min based on the ultimatemetamodel The actual global minimum 119891min are listed inTable 3

44 Results and Discussions In this section we discuss theresults obtained after constructing metamodels through sixvarious sampling methods using the assessment measuredescribed above As mentioned in Section 41 20 proceduresare conducted for each sampling method and therefore thereare twenty sets of accuracy results for each sampling methodThe advantages and validity of sequential optimization sam-pling method (SOSM) are tested in comparison with one-stage sampling methods and sequential sampling methodsseparately below In addition SOSM is used to solve a typicalmechanical design optimization problem

441 Comparison of the Performance between SOSM andOne-Stage Sampling Methods In this part two classical one-stage sampling methods LHD RND and SOSM are usedto construct metamodels The accuracy of metamodels andglobal minimum of functions are obtained and managedThe error measures RMSE andNRMSE and global minimumsummarized in Table 4 are average values

From the Table 4 the RMSE and NRMSE of metamodelsusing samplingmethod SOSM are smaller than the other twoone-stage sampling methods for functions 1ndash5 For function6 the values are close The RND as expected performspoorly That is metamodels based on sampling methodSOSMmay provide a better fit to actual functions As seen inTable 4 on exploring global minimum

119891min of metamodelsSOSM is superior to LHD and RDN through all numericalexamples compared to the actual global minimum 119891min Inparticular the SOSM can almost find the global minimumof all numerical examples at every turn However LHD andRND perform fairly poorly particularly in the success ratewhich will be depicted in Figure 9

The mean results from Table 4 cannot represent theadvantages and disadvantages of different sampling methodsadequately Thus statistical graphics that is boxplots are

8 The Scientific World Journal

Table 4 Metamodel accuracy results for functions 1ndash6 between SOSM and one-stage sampling methods

Number SOSM LHD RND119891minRMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min

1 0663 28214 minus10002 0722 30698 minus9137 0817 34766 minus8728 minus102 0061 9798 minus1715 0090 14524 minus1611 0122 19659 minus1662 minus17153 1674 6464 minus0002 1790 6913 0140 2072 8001 0040 04 0728 38118 minus6556 0810 42456 minus5786 0861 45116 minus5374 minus65515 2455 10531 minus34243 3353 14380 minus12928 3789 16253 minus9907 minus34636 0995 10011 0116 0992 9991 2284 1076 10830 2070 0

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

Sampling methods

RMSE

of f

unct

ions

1ndash6

Function 1 Function 2 Function 3 Function 4 Function 5 Function 6

0

1

2

3

4

5

Figure 7 Assessment of metamodels RMSE between SOSM andone-stage sampling methods

used to show the deviations of the accuracy and globalminimum of each metamodel In descriptive statistics box-plot is a convenient way of graphically depicting groups ofnumerical data through their quartilesThe center line of eachboxplot shows the 50th percentile (median) value and the boxencompasses the 25th and 75th percentile of the data Theleader lines (horizontal lines) are plotted at a distance of 15times the interquartile range in each direction or the limitof the data (if the limit of the data falls within 15 times theinterquartile range) The data points outside the horizontallines are shown by placing a sign (ldquo◻rdquo) for each point Thetwenty times of metamodels accuracy results (RMSE andNRMSE) and global minimum 119891min with sampling methodsthat is SOSM LHD and RND are illustrated in Figures 7ndash9with the help of boxplots

From the results shown in Figures 7 and 8 it is foundthat the median values of RMSE and NRMSE are smallercompared to LHD and RND for functions 1ndash5 For function6 the median values of RMSE and NRMSE are a little largerbut very close In addition the box size of RMSE andNRMSEbased on SOSM is the shortest for all functions except forfunctions 4 and 5 The difference of sizes is small It is clearthat a small size of the box suggests small standard deviationof results High standard deviation of results suggests largeuncertainties in the predictions and low standard suggestslow uncertainty on the contrary The points outside thehorizontal lines indicate that the results of experiments areterrible Above all either mean values or median values and

Function 1 Function 2 Function 3 Function 4 Function 5 Function 6

10

20

30

40

50

60

70

NRM

SE o

f fun

ctio

ns1ndash6

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

Sampling methods

Figure 8 Assessment of metamodels NRMSE between SOSM andone-stage sampling methods

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

Sampling methods

minus35

minus30

minus25

minus20

minus15

minus10

minus5

0

5

Glo

bal m

inim

um o

f fun

ctio

ns1ndash6 Function 1 Function 2 Function 3 Function 4 Function 5

Function 6

Figure 9 Assessment of metamodels Global minimum betweenSOSM and one-stage sampling methods

box sizes of results are taken into account the accuracy ofmetamodels with sampling method SOSM is better

Figure 9 depicts the boxplot of global minimum

119891minobtained for twenty numerical experiments It is obviousfrom Figure 9 that the median value of SOSM is probablyequal to the actual global minimum 119891min Meanwhile thesmall sizes of boxes imply small standard deviation whichis also reflected by small differences between the mean andmedian values The standard deviation of global minimumis one of the important factors for evaluating the robustnessof algorithmTherefore smaller standard deviation of results

The Scientific World Journal 9

minus015

minus01

minus005

0

005

01

minus02minus015 minus01 minus005 0 005 01

015

x1

x2

(a) Number 1

minus085 minus08 minus07 minus065minus02

minus01

0

01

02

03

x1

x2

minus075

(b) Number 2

minus001 minus0005

0005

001

minus001

minus0005

0005 001

x1

x2

0

0

(c) Number 3

0 01 02 03 04 05minus2

minus19

minus18

minus17

minus16

minus15

x1

x2

(d) Number 4

minus15 minus1 minus05 0 05 12

25

3

35

4

45

x1

x2

(e) Number 5

0

0minus15 minus1 minus05 05 1 15minus15

minus1

minus05

05

1

15

x1

x2

(f) Number 6

Figure 10 Global minima points of functions 1ndash6 (a)ndash(f) show the global minima points for functions 1ndash6 separately The red pentagonindicates the actual global minimum point The black dots indicate the global minima points of metamodels based on sampling methodSOSMThe blue square indicates the global minima points of metamodels based on sampling method LHDThemagenta diamond indicatesthe actual global minima points of metamodels based on sampling method RND

10 The Scientific World Journal

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(a) Actual function

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(b) SOSM

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(c) LHD

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(d) RND

Figure 11 Function 1 actual and metamodel surface

implies the robustness of the algorithm It is clear fromFigure 9 that the SOSM is a more robust sampling methodthan the other two sampling techniques under the parametersettings employed in this paper The success rate is badcomparing the actual global minimum from Table 4 withthe distribution of global minimum of metamodels basedon sampling methods LHD and RND from Figure 9 Inother words SOSM plays a perfect role in finding the globalminimum of metamodels

In order to indicate the effectiveness of SOSM in seekingglobal minimum points the positions of global minimumpoints of metamodels for functions 1ndash6 are shown in Figures10(a)ndash10(f) separately The red pentagon indicates the actualglobal minimum point The black dots indicate the globalminimum points of metamodels based on sampling methodSOSM The blue squares indicate the global minimumpoints of metamodels based on sampling method LHDThe magenta diamonds indicate the global minimum pointsof metamodels based on sampling method RND Figure 10shows that the black dots are distributed densely in the centerof red pentagon Meanwhile the blue squares and magentadiamonds are decentralized around the red pentagon andthe difference between actual global minimum points and

global minimum points of metamodels based on LHD orRND ismostly quite largeThe above demonstrate that SOSMis superior to LHD and RND on global optimization (a)ndash(d)in Figures 11 12 13 14 15 and 16 show the graphs of actualfunctions and of the associated metamodels based on threesampling methods separately It can be observed intuitionallythat the metamodels surface adopting SOSM method issmoother than LHD and RND and furthermore the globalminima of metamodels based on SOSM are consistent withthe actual global minima The conclusions reached above areidentified further by comparing the actual function surfacesto metamodels surfaces

442 Comparison of the Performance between SOSMand Pre-vious Sequential Sampling Methods In this part three differ-ent sequential sampling methods including cross-validationsampling method (CV) [25] sequential sampling methodproposed by Kitayama et al [28] termed KSSM and suc-cessive local enumeration sampling method (SLE) [29] areused to construct metamodels in comparison with thoseconstructed by SOSM Similarly the accuracy of metamodelsand global minimumof functions are obtained andmanaged

The Scientific World Journal 11

minus2

2

minus2

2

2

f(x)

x1x2

minus2

(a) Actual function

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(b) SOSM

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(c) LHD

minus22

minus2 minus2

2

2

f(x)

x1

x2

(d) RND

Figure 12 Function 2 actual and metamodel surface

The accuracy measures RMSE and NRMSE and global min-imum summarized in Table 5 are mean values Note that avalue of zero for both accuracy measures RMSE and NRMSEwould indicate a perfect fit

From Table 5 the RMSE and NRMSE of metamodelsusing samplingmethod SOSMare smaller than the sequentialsampling methods CV and KSSM for functions 1ndash5 Forfunction 6 the values are close The SLE performs greatestThat is metamodels based on sampling method SOSM mayprovide a better fit to actual functions than CV and KSSMHowever the samplingmethod SLEperforms unsatisfactorilyin terms of exploring global minimum As seen in Table 5on exploring global minimum

119891min of metamodels SOSM issuperior to the previous sequential sampling methods SLEthrough all numerical examples compared to the actual globalminimum119891min In addition the sequential samplingmethodsCV and KSSM perform as great as SOSM on exploring theglobal minimum In general the sequential samplingmethodKSSM is the best SOSM takes second place and CV isthe least It can be concluded that the sequential samplingmethod SOSM proposed in this paper is the best choiceconsidering the accuracy ofmetamodels and exploring globalminimum

In order to represent the advantages and disadvantages ofdifferent sequential samplingmethods adequately the twentytimes of metamodels accuracy results (RMSE and NRMSE)and globalminimum119891min with sequential samplingmethodsthat is SOSM CV KSSM and SLE are illustrated in Figures17ndash19 with the help of boxplot

From the results shown in Figures 17 and 18 it is foundthat themedian values of RMSE andNRMSE based on SOSMare smaller compared to CV and KSSM for functions 1ndash5 Forfunction 6 the median values of RMSE and NRMSE are alittle larger but very close Except for Function 5 the accuracyofmetamodels constructed by SLE is the best In addition thebox size of RMSE andNRMSEbased on SOSM is shorter thanCV and KSSM for functions 1ndash4 The box size of RMSE andNRMSE based on SLE is the shortest for all functions exceptfor functions 1 and 3 A small size of the box suggests smallstandard deviation of results and low standard deviation ofresults suggests small uncertainties in the predictions andlarge standard suggests high uncertainty on the contraryThepoints outside the horizontal lines indicate that the resultsof experiments are terrible Above all either mean values ormedian values and box sizes of results are taken into accountthe accuracy of metamodels with sampling method SOSM

12 The Scientific World Journal

minus08

08

40

minus08

08

0

f(x)

x1

x2

(a) Actual function

minus08

08

40

minus08

08

0

f(x)

x1

x2

(b) SOSM

minus08

08

40

minus08

08

0

f(x)

x1

x2

(c) LHD

minus08

08

40

minus08

08

0

f(x)

x1

x2

(d) RND

Figure 13 Function 3 actual and metamodel surface

Table 5 Metamodel accuracy results for functions 1ndash6 between SOSM and previous sequential sampling methods

Number SOSM CV [25] KSSM [28] SLE [29]119891minRMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min

1 0663 28214 minus10002 0699 29707 minus9995 0709 30142 minus10017 0493 20951 minus9539 minus102 0061 9798 minus1715 0089 14312 minus1715 0120 19343 minus1716 0065 10504 minus1622 minus17153 1674 6464 minus0002 2261 8732 0000 2305 8902 minus0001 1459 5636 0105 04 0728 38118 minus6556 0813 42601 minus6514 0847 44359 minus6552 0533 27899 minus5904 minus65515 2455 10531 minus34243 2508 10759 minus33794 2466 10575 minus34259 2789 11961 minus12212 minus34636 0995 10011 0116 0943 9486 0390 0950 9558 0052 0703 7080 1274 0

is superior to sampling methods CV and KSSM Certainlythe sampling method SLE performs greatest in terms ofthe accuracy of metamodels However SLE is not good atexploring the global minimum of functions

Figure 19 depicts the boxplot of global minimum

119891minobtained for twenty numerical experiments It is obviousfrom Figure 19 that the median value of SOSM CV andKSSM is probably equal to the actual global minimum 119891minMeanwhile the small sizes of boxes imply small standarddeviation which is also reflected by small differences betweenthemean andmedian valuesThe standard deviation of global

minimum is one of the important factors for evaluatingthe robustness of algorithm Therefore smaller standarddeviation of results implies the robustness of the algorithmIt is clear from Figure 19 that the sequential sampling meth-ods except SLE are robust sampling techniques under theparameter settings employed in this paper The success rateis bad comparing the actual global minimum from Table 5with the distribution of global minimum of metamodelsbased on sampling methods SLE from Figure 19 It is obviousthat SOSM plays a perfect role in constructing the accuratemetamodels and finding the global minimum of functions

The Scientific World Journal 13

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(a) Actual function

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(b) SOSM

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(c) LHD

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(d) RND

Figure 14 Function 4 actual and metamodel surface

Six various sampling methods have been used to con-struct metamodels for six typical functions in this paperIt can be demonstrated that sequential sampling methodsperform better and more efficient than one-stage samplingmethods Furthermore sequential sampling technique allowsengineers to control the sampling process In general one-stage sampling technique is not as good as sequential sam-pling methods for fitting the area where the global minimumlocates In a word SOSM as a sequential sampling methodproposed in this paper is the best choice considering theaccuracy of metamodels and locating global minimum Inother words SOSM is reliable in the whole fitting space andalso good for fitting the area where the global minimumlocates

443 Engineering Problem The validity of sequential sam-plingmethod SOSMproposed in this paper is tested by a typ-ical mechanical design optimization problem involving fourdesign variables that is pressure vessel design This problemhas been studied bymany researchers [40ndash42]The schematicof the pressure vessel is shown in Figure 20 In this case acylindrical pressure vessel with two hemispherical heads is

designed for minimum fabrication cost Four variables areidentified thickness of the pressure vessel 119879

119904 thickness of the

head 119879

ℎ inner radius of the pressure vessel 119877 and length of

the vessel without heads 119871 In this case the variable vectorsare given (in inches) by

119883 = (119879

119904 119879

ℎ 119877 119871) = (119909

1 119909

2 119909

3 119909

4) (15)

The objective function is the combined cost of materialsforming andwelding of the pressure vesselThemathematicalmodel of the optimization problem is expressed as

min 119891 (119883) = 06224119909

1119909

3119909

4+ 17781119909

2119909

2

3+ 31661119909

2

1119909

4

+ 1984119909

2

1119909

3

st 119892

1 (119883) = minus119909

1+ 00193119909

3le 0

119892

2 (119883) = minus119909

2+ 000954119909

3le 0

119892

3 (119883) = minus120587119909

2

3119909

4minus

4

3120587119909

3

3

+ 129600 le 0

119892

4 (119883) = 119909

4minus 240 le 0

(16)

14 The Scientific World Journal

35

minus356

6

minus6 minus6

f(x)

x1

x2

(a) Actual function

35

minus356

6

minus6 minus6

f(x)

x1x2

(b) SOSM

35

minus35

6

6

minus6 minus6

f(x)

x1

x2

(c) LHD

35

minus356

6

minus6 minus6

f(x)

x1

x2

(d) RND

Figure 15 Function 5 actual and metamodel surface

Table 6 Comparison of optimal results for the design of a pressure vessel

Design variables Cao and Wu [42] Kannan and Kramer [43] Deb [44] This paper using SOSM119909

11000 1125 09375 1000

119909

20625 0625 05000 0625

119909

3511958 58291 483290 41523

119909

4607821 43690 1126790 120562

119891(119883) 7108616 7198042 6410381 6237402

The ranges of design variables 119909

1isin [00625 625] 119909

2isin

[00625 625] 1199093isin [625 125] and 119909

4isin [625 125] are used

referring to the literature [40]The problem formulated above is a simple nonlinear con-

strained problem Now assuming that the objective and con-straint functions defined by (16) are computation-intensivefunctions hencemetamodels of functions are constructed byRBF using the sequential sampling method SOSM

This problem has been solved by many researchersincluding Cao and Wu [42] applying an evolutionary pro-gramming model Kannan and Kramer [43] solved theproblemusing an augmented Lagrangianmultiplier approachand Deb [44] using a genetic adaptive search

The average values of optimal results from 50 runs arelisted in Table 6 compared with the three results reported inthe literature [42ndash44] It can be seen from the table that theoptimal solution in this paper is still about 27 superior tothe best solution previously reported in the literature [44]

5 Conclusions

In this paper the sequential optimization sampling method(SOSM) for metamodels has been proposed The recentlydeveloped extended radial basis functions (E-RBF) are intro-duced as an approximated method to construct the meta-models Combining the new sampling strategy SOSM with

The Scientific World Journal 15

12

minus3120587 minus3120587

03120587

3120587

x1

x2

f(x)

(a) Actual function

f(x)

12


Figure 1: Three-layer feed-forward RBF network (inputs x_1, ..., x_N; basis functions φ_1, ..., φ_N; weights λ_1, ..., λ_N; summed output f_RBF).

Table 1: Commonly used basis functions (r = ||x − x_i||_2; c is a constant).

  Linear:            φ(r) = cr
  Cubic:             φ(r) = (r + c)^3
  Thin-plate spline: φ(r) = r^2 log(cr^2)
  Gaussian:          φ(r) = exp(−cr^2)
  Multiquadric:      φ(r) = (r^2 + c^2)^{1/2}

basis function. The approximation function ŷ is actually a linear combination of RBFs with weight coefficients λ_i. The most commonly used classical radial functions are listed in Table 1. The multiquadric and Gaussian radial basis functions are the best known and most often applied; the multiquadric is nonsingular and simple to use [35]. Hence, the multiquadric radial basis function is used in this paper. A minimal sketch of fitting such an RBF metamodel is shown below.
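As an illustration of this step (a sketch under stated assumptions, not the authors' code: it assumes NumPy, and the function names fit_rbf, predict_rbf, and multiquadric are ours), the interpolation weights λ are obtained by solving the N×N system built from the multiquadric basis:

```python
import numpy as np

def multiquadric(r, c=1.0):
    """Multiquadric basis phi(r) = sqrt(r^2 + c^2)."""
    return np.sqrt(r**2 + c**2)

def fit_rbf(X, f, c=1.0):
    """Solve for the weights lambda_i of f_hat(x) = sum_i lambda_i phi(||x - x_i||).

    X: (N, n) array of sampling points; f: (N,) array of responses.
    """
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    Phi = multiquadric(r, c)                                     # N x N basis matrix
    return np.linalg.solve(Phi, f)                               # interpolation weights

def predict_rbf(X_new, X, lam, c=1.0):
    """Evaluate the RBF metamodel at new points."""
    r = np.linalg.norm(X_new[:, None, :] - X[None, :, :], axis=-1)
    return multiquadric(r, c) @ lam
```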

2.2. Learning of Extended Radial Basis Functions. The extended radial basis functions (E-RBF) approach is a combination of radial and nonradial basis functions, which incorporates more flexibility by introducing additional degrees of freedom into the metamodel definition. It provides better smoothing properties and, in general, is expected to yield more accurate metamodels than the typical RBF. Mullur and Messac [32, 36] found that the E-RBF approach results in highly accurate metamodels compared to the classical RBF and Kriging. Under the E-RBF approach, the approximation function takes the form

\tilde{f}(\mathbf{x}) = \sum_{i=1}^{N} \lambda_i \,\phi(\|\mathbf{x} - \mathbf{x}_i\|) + \sum_{i=1}^{N} \sum_{j=1}^{n} \left[ \alpha^{L}_{ij}\,\psi_L(\xi^{j}_{i}) + \alpha^{R}_{ij}\,\psi_R(\xi^{j}_{i}) + \beta_{ij}\,\psi_\beta(\xi^{j}_{i}) \right],  (2)

where n is the number of design variables; α^L_{ij}, α^R_{ij}, and β_{ij} are coefficients to be determined for a given problem; and ψ_L, ψ_R, and ψ_β are the components of the so-called nonradial basis functions defined in Table 2.

Figure 2: Definition of coordinate ξ (radial distance r from a data point x_i to a generic point x, and relative coordinates ξ_i1, ξ_i2 along design variables x_1 and x_2).

Figure 3: Nonradial basis functions (basis function value ψ(ξ) over the four regions I–IV of the coordinate ξ, with linear segments α_L ξ^η + βξ and α_R ξ^η + βξ in the outer regions).

The nonradial basis functions are functions of ξ^j_i, the coordinate of a generic point x in the design space relative to a data point x_i, defined through ξ_i = x − x_i. Thus ξ^j_i is the coordinate of any point x relative to the data point x_i along the jth dimension. The difference between the Euclidean distance r used in RBF and the relative coordinates ξ used for N-RBF is depicted for a two-dimensional case in Figure 2. Four distinct regions (I–IV) are depicted in Figure 3, each corresponding to a row in Table 2.

In matrix notation, the metamodel defined in (2) can be written as

[A]\{\lambda\} + [B]\{(\alpha^{L})^{T}\;(\alpha^{R})^{T}\;\beta^{T}\}^{T} = f.  (3)

Equation (3) can be compactly written in matrix form as

[\tilde{A}]\{\tilde{\alpha}\} = f,  (4)

where [\tilde{A}] = [A\;B], \tilde{\alpha} = \{\lambda^{T}\;(\alpha^{L})^{T}\;(\alpha^{R})^{T}\;\beta^{T}\}^{T}, and f = \{f(\mathbf{x}_i)\}.

The coefficients \tilde{\alpha} can be evaluated by using the pseudoinverse approach to solve the underdetermined system (4) as follows:

\tilde{\alpha} = [\tilde{A}]^{+} f,  (5)

where [\tilde{A}]^{+} denotes the pseudoinverse of [\tilde{A}].
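A minimal numerical sketch of this step (assuming NumPy; the assembly of [A] and [B] follows (2) and Table 2, and the helper name is ours):

```python
import numpy as np

def solve_erbf_coefficients(A, B, f):
    """Solve the underdetermined E-RBF system [A B] alpha_tilde = f
    with the Moore-Penrose pseudoinverse, cf. (4)-(5).

    A: (N, N) radial basis matrix; B: (N, p) nonradial basis matrix;
    f: (N,) responses at the sampling points.
    """
    A_tilde = np.hstack([A, B])                 # [A_tilde] = [A B]
    alpha_tilde = np.linalg.pinv(A_tilde) @ f   # minimum-norm solution of (4)
    return alpha_tilde
```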


Table 2: Nonradial basis functions ψ(ξ^j_i) (γ and η are prescribed parameters; refer to [32, 36]).

Region | Range of ξ^j_i   | ψ_L                              | ψ_R                              | ψ_β
I      | ξ^j_i ≤ −γ       | (−ηγ^{η−1}) ξ^j_i + γ^η(1 − η)   | 0                                | ξ^j_i
II     | −γ ≤ ξ^j_i ≤ 0   | (ξ^j_i)^η                        | 0                                | ξ^j_i
III    | 0 ≤ ξ^j_i ≤ γ    | 0                                | (ξ^j_i)^η                        | ξ^j_i
IV     | ξ^j_i ≥ γ        | 0                                | (ηγ^{η−1}) ξ^j_i + γ^η(1 − η)    | ξ^j_i
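The piecewise definitions in Table 2 translate directly into code. The following sketch is our own illustration (assuming NumPy and the paper's setting η = 2; the function names are ours), not the authors' implementation:

```python
import numpy as np

def psi_L(xi, gamma, eta=2):
    """Left nonradial basis function psi_L over regions I-IV (Table 2)."""
    xi = np.asarray(xi, dtype=float)
    out = np.zeros_like(xi)
    left = xi <= -gamma                          # region I
    mid = (-gamma < xi) & (xi <= 0)              # region II
    out[left] = -eta * gamma**(eta - 1) * xi[left] + gamma**eta * (1 - eta)
    out[mid] = xi[mid]**eta
    return out                                   # regions III-IV: 0

def psi_R(xi, gamma, eta=2):
    """Right nonradial basis function psi_R over regions I-IV (Table 2)."""
    xi = np.asarray(xi, dtype=float)
    out = np.zeros_like(xi)
    mid = (0 < xi) & (xi <= gamma)               # region III
    right = xi > gamma                           # region IV
    out[mid] = xi[mid]**eta
    out[right] = eta * gamma**(eta - 1) * xi[right] + gamma**eta * (1 - eta)
    return out                                   # regions I-II: 0

def psi_beta(xi):
    """Linear component psi_beta = xi in all regions."""
    return np.asarray(xi, dtype=float)
```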

After obtaining the coefficients \tilde{\alpha}, one can evaluate the E-RBF metamodel and construct the response surface using (2). The resulting metamodel is guaranteed to be convex and highly accurate [32, 36]. In the following sections, a series of mathematical examples are approximated by E-RBF based on the sampling method proposed in the next section.

In this section we have introduced the E-RBF metamodeling approach only briefly, due to space limitations; a more complete description and discussion are presented in [32, 36].

3. The Sequential Optimization Sampling Method

As is known, metamodels are approximations of real models, which are commonly complex and unknown. The accuracy of metamodels owes primarily to the approximation methods and the sampling methods [37]. The sampling method proposed in this paper includes two parts: the first is the procedure of adding optimization sampling points, and the second is the procedure of adding points in sparse regions [28, 30].

3.1. The Optimization Sampling Points. The optimization sampling points should have two important properties: they should be adaptive and sensitive. The focus of sampling should shift to how to generate a reasonable number of sampling points intelligently, so that the metamodel can reflect the real "black-box" functions in the areas of interest. General knowledge tells us that sampling points placed at the valleys and peaks of the response surface improve the accuracy to the greatest extent; the valleys and peaks are the extrema points of the function.

In mathematics, the points that are the largest or smallest within a given neighborhood are defined as extrema points. McDonald et al. [38] found that the radial basis function models created with (1) are twice continuously differentiable when the multiquadric function is employed as the basis function for all c ≠ 0. The first approximation model is constructed from the initial sampling points using radial basis functions. Thus the function evaluations, the analytic gradients, and the Hessian matrix of second partial derivatives can all be obtained from this initial model.

Considering the RBF model with n dimensions in (1), the gradient of the model is

\frac{\partial \tilde{f}}{\partial \mathbf{x}} = \sum_{i=1}^{N} \lambda_i \,\frac{\phi'(r_i)}{r_i(\mathbf{x})}\, (\mathbf{x} - \mathbf{x}_i)^{T},  (6)

where \phi'(r_i) = \partial\phi/\partial r_i and r_i(\mathbf{x}) = \|\mathbf{x} - \mathbf{x}_i\|. For the multiquadric RBF model, \phi(r_i) = (r_i^2 + c^2)^{1/2} and \phi'(r_i) = r_i (r_i^2 + c^2)^{-1/2}. The Hessian matrix can be calculated from (6) as

H(\mathbf{x}) = \frac{\partial^{2} \tilde{f}}{\partial \mathbf{x}^{2}} = \sum_{i=1}^{N} \frac{\lambda_i}{r_i(\mathbf{x})} \left[ \phi'(r_i)\, I + \left( \phi''(r_i) - \frac{\phi'(r_i)}{r_i(\mathbf{x})} \right) (\mathbf{x} - \mathbf{x}_i)(\mathbf{x} - \mathbf{x}_i)^{T} \right].  (7)

Similarly, \phi''(r_i) = \partial^{2}\phi/\partial r_i^{2}; for the multiquadric RBF model, \phi''(r_i) = c^{2}(r_i^{2} + c^{2})^{-3/2}, and I is the identity matrix. Setting (6) equal to zero and solving,

\frac{\partial \tilde{f}}{\partial \mathbf{x}} = 0, \quad \mathbf{x} = (x_1, x_2, \ldots, x_N).  (8)

Then the critical point x_e can be obtained. Upon substitution of the critical point x_e into the Hessian matrix, we can judge the definiteness of the matrix H(x_e): the critical point is a minimum when the matrix is positive definite and a maximum when it is negative definite. Figures 4 and 5 show the extrema points in the 2D and 3D cases, respectively; the red square and blue pentagon indicate maxima points, while the green dot and red asterisk indicate minima points. Once an approximation model has been created, we can obtain the coordinates of its extrema points.
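A sketch of this step follows (our illustration, assuming NumPy/SciPy; for brevity the Hessian is approximated by central differences of the analytic gradient (6) rather than coded from the closed form (7), and the function names are ours):

```python
import numpy as np
from scipy.optimize import fsolve

def rbf_gradient(x, X, lam, c=1.0):
    """Gradient of the multiquadric RBF metamodel at x, cf. (6)."""
    diff = x - X                                  # (N, n)
    r = np.linalg.norm(diff, axis=1)              # distances r_i(x)
    dphi = r / np.sqrt(r**2 + c**2)               # phi'(r_i) for the multiquadric
    # phi'(r_i)/r_i; the ratio tends to 1/c as r_i -> 0
    w = np.divide(dphi, r, out=np.full_like(r, 1.0 / c), where=r > 0)
    return (lam * w) @ diff

def find_critical_point(x0, X, lam, c=1.0):
    """Solve grad f_tilde(x) = 0 from a starting guess x0, cf. (8)."""
    return fsolve(lambda x: rbf_gradient(x, X, lam, c), x0)

def classify_critical_point(x, X, lam, c=1.0, h=1e-5):
    """Classify a critical point via eigenvalues of a finite-difference Hessian."""
    n = len(x)
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        H[:, j] = (rbf_gradient(x + e, X, lam, c)
                   - rbf_gradient(x - e, X, lam, c)) / (2 * h)
    eig = np.linalg.eigvalsh((H + H.T) / 2)
    return "minimum" if np.all(eig > 0) else "maximum" if np.all(eig < 0) else "saddle"
```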

3.2. Density Function with the RBF. It is necessary to add new sampling points in the sparse region for good global approximation. To achieve this, a function called the density function, proposed by Kitayama et al. [28, 30], is constructed using the RBF network in this paper. The aim of the density function is to discover sparse regions in the design space: it generates local minima in the sparse regions, so that a minimum of this function can be taken as a new sampling point. The addition of new sampling points in the sparse regions improves the accuracy of the approximation model and helps to find the global minimum of the metamodel.

To explore the sparse region, every output f of the RBF network is replaced with +1. Let N be the number of sampling points. The procedure for constructing the density function is summarized as follows.

(1) The output vector f_D is prepared at the sampling points:

f_D = (1, 1, \ldots, 1)^{T}_{N \times 1}.  (9)


Figure 4: Extrema points in 2D (maximum and minimum points of the response curve).

Figure 5: Extrema points in 3D (maximum and minimum points of the response surface).

(2) The weight vector λ_D of the density function D(x) is calculated as follows:

\lambda_D = (\Phi^{T}\Phi + \Delta)^{-1}\Phi^{T} f_D,  (10)

where

\Phi = \begin{bmatrix} \phi_1(x_1) & \phi_2(x_1) & \cdots & \phi_N(x_1) \\ \phi_1(x_2) & \phi_2(x_2) & \cdots & \phi_N(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1(x_N) & \phi_2(x_N) & \cdots & \phi_N(x_N) \end{bmatrix}, \quad \Delta = 10^{-3} \times I_{N \times N}.  (11)

(3) The additional sampling point x_D in the sparse region is found as the global minimum of the density function built with the RBF:

D(x_D) = \sum_{j=1}^{N} \lambda_D^{j}\,\phi_j(x_D) \rightarrow \min.  (12)
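A compact sketch of this construction (our illustration, assuming NumPy and the multiquadric basis; the function names are ours):

```python
import numpy as np

def density_function_weights(X, c=1.0):
    """Weights lambda_D of the density function D(x), cf. (9)-(11).

    Every response is replaced with +1 and the regularized normal
    equations are solved for lambda_D.
    """
    N = len(X)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.sqrt(r**2 + c**2)                  # multiquadric basis matrix
    f_D = np.ones(N)                            # outputs replaced with +1, eq. (9)
    Delta = 1e-3 * np.eye(N)                    # regularization term, eq. (11)
    return np.linalg.solve(Phi.T @ Phi + Delta, Phi.T @ f_D)   # eq. (10)

def density_function(x, X, lam_D, c=1.0):
    """Evaluate D(x); its minimizer is the next sparse-region sample, eq. (12)."""
    r = np.linalg.norm(x - X, axis=1)
    return np.sqrt(r**2 + c**2) @ lam_D
```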

3.3. Summary of the Proposed Algorithm. The detailed algorithm for the sequential optimization sampling method with the RBF network is described below; Figure 6 shows the proposed algorithm. The algorithm is roughly divided into two phases. The first phase constructs the response surface and adds the extrema points of the response surface as new sampling points. The second phase constructs the density function and adds the minimum of the density function as a new sampling point. These two phases are described in detail as follows.


Figure 6: Proposed algorithm. (Flowchart: start from m initial sampling points; first phase — construct the response surface with RBF, find its extrema points, and add them to the sample; second phase — construct the density function with RBF, find its minimum point, and add it to the sample while count ≤ d; update the sampling points and repeat while m < m_max; otherwise end.)

First phase: m initial sampling points are generated using the Latin hypercube sampling design. All functions are evaluated at the sampling points, and the response surface f̃(x) is constructed. The extrema points of the response surface can then be found and directly taken as the new sampling points. A more accurate metamodel is constructed by repeating this procedure of adding extrema points to the sample adaptively and sequentially.

Second phase: the density function is constructed to find the sparse region. The minimum point of the density function is taken as a new sampling point. This step is repeated while a terminal criterion (count ≤ d) is satisfied. Parameter d controls the number of sampling points obtained from the density function; Kitayama et al. [28] advised d = int(n/2). In this paper, d = int(n/2) + 1, where int(·) represents rounding off. A new sampling point is added as long as the counter does not exceed d, and the counter is increased as count = count + 1.

The terminal criterion of the integrated algorithm is determined by the maximum number of sampling points m_max. If the number of sampling points is less than m_max, the algorithm proceeds; otherwise, it terminates. In the algorithm, the response surface is constructed repeatedly through the addition of the new sampling points, namely, the extrema points and the minimum points of the density function. In this way, a more accurate metamodel is constructed by repeating the procedure of adding points to the sample adaptively and sequentially. A compact sketch of the overall loop is given below.
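The sketch below is our own illustration under stated assumptions, not the authors' reference implementation: latin_hypercube and find_extrema are hypothetical helpers (an initial LHD generator and a routine locating the extrema of the current response surface via (6)–(8)), fit_rbf, density_function_weights, and density_function are the sketches given earlier, and SciPy's minimize stands in for whatever optimizer locates the density-function minimum.

```python
import numpy as np
from scipy.optimize import minimize

def sosm_sampling(objective, bounds, m_init, m_max, c=1.0):
    """Two-phase sequential optimization sampling loop (illustrative sketch).

    objective: expensive function to approximate; bounds: (n, 2) array;
    m_init: number of initial LHD points; m_max: total sampling budget.
    """
    n = len(bounds)
    d = int(n / 2) + 1                          # sparse-region points per cycle
    X = latin_hypercube(m_init, bounds)         # assumed helper: initial design
    f = np.array([objective(x) for x in X])

    while len(X) < m_max:
        lam = fit_rbf(X, f, c)                  # phase 1: response surface
        for x_e in find_extrema(X, lam, c, bounds):   # assumed helper: extrema of f_tilde
            X = np.vstack([X, x_e]); f = np.append(f, objective(x_e))

        for _ in range(d):                      # phase 2: sparse-region points
            lam_D = density_function_weights(X, c)
            res = minimize(lambda x: density_function(x, X, lam_D, c),
                           x0=bounds.mean(axis=1),
                           bounds=[tuple(b) for b in bounds])
            X = np.vstack([X, res.x]); f = np.append(f, objective(res.x))
    return X, f
```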

4. Numerical Examples

In this section we test the validity of the proposed sampling method on several well-known numerical examples and one engineering optimization problem. All the functions are approximated with the E-RBF model. The response surfaces are constructed with one-stage sampling methods (i.e., LHD and RND) and sequential sampling methods (i.e., CV [25], the method of Kitayama et al. [28], SLE [29], and the SOSM proposed in this paper) using the same sample size. In order to visualize the comparison between the approximated model and the actual model, the two-design-variable numerical examples listed in Table 3 are tested.

4.1. Sampling Strategies. Two types of sampling methods are used in this paper: one-stage sampling methods and sequential sampling methods. One-stage sampling methods include the Latin hypercube sampling design (LHD) and the random sampling design (RND). Sequential sampling methods include the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28] (termed KSSM in this paper), the successive local enumeration sampling


Table 3: Numerical functions and global minima.

No. 1: f(x) = -10\,\mathrm{sinc}(x_1)\cdot\mathrm{sinc}(x_2); design domain -2 \le x \le 2; global minimum x = (0, 0), f_min = -10.
No. 2: f(x) = 4 x_1 e^{-x_1^2 - x_2^2}; -2 \le x \le 2; x = (-\sqrt{2}/2, 0), f_min = -1.72.
No. 3: f(x) = x_1^2 + x_2^2 - 10[\cos(2\pi x_1) + \cos(2\pi x_2)] + 20; -0.8 \le x \le 0.8; x = (0, 0), f_min = 0.
No. 4: f(x) = 3(1 - x_1)^2 e^{-x_1^2 - (x_2+1)^2} - 10(x_1/5 - x_1^3 - x_2^5) e^{-x_1^2 - x_2^2} - \tfrac{1}{3} e^{-(x_1+1)^2 - x_2^2}; -3 \le x \le 3; x = (0.228, -1.63), f_min = -6.55.
No. 5: f(x) = -60/[1 + (x_1+1)^2 + (x_2-3)^2] - 20/[1 + (x_1-1)^2 + (x_2-3)^2] - 30/[1 + x_1^2 + (x_2+4)^2] + 30; -6 \le x \le 6; x = (-0.97, 3), f_min = -34.63.
No. 6: f(x) = -10\,\sin(\sqrt{x_1^2 + x_2^2} + \mathrm{eps})/(\sqrt{x_1^2 + x_2^2} + \mathrm{eps}) + 10, with eps = 10^{-15}; -3\pi \le x \le 3\pi; x = (0, 0), f_min = 0.

method (SLE) [29], and the sequential optimization sampling method (SOSM) proposed in this paper. Every metamodel is constructed 20 times with the same sampling method. For functions 1 and 2 we set m_max = 25; for functions 4 and 5 we set m_max = 36; for functions 3 and 6 we generate 28 and 40 sampling points, respectively.

4.2. Selection of Parameters. For the E-RBF approach we set c = 1, the prescribed parameter of the multiquadric basis functions, for all of the examples. The parameter γ is set equal to approximately 1/3 of the design domain size; Mullur and Messac [32] found that the results were not unduly sensitive to γ. The other parameter, η, is set equal to 2 for all numerical examples.

4.3. Metamodel Accuracy Measures. Generally speaking, an E-RBF response surface passes through all the sampling points exactly; therefore, it is impossible to estimate the accuracy of an E-RBF model from the sampling points themselves. To measure the accuracy of the resulting metamodels, we use additional testing points and the standard error measure, the root-mean-squared error (RMSE). The smaller the value of RMSE, the more accurate the response surface. The error measure is defined as

\mathrm{RMSE} = \sqrt{\frac{\sum_{k=1}^{K}\left[f(\mathbf{x}_k) - \tilde{f}(\mathbf{x}_k)\right]^{2}}{K}},  (13)

where K is the number of additional testing points generated by the grid sampling method (32 × 32 for all the examples), and f(\mathbf{x}_k) and \tilde{f}(\mathbf{x}_k) are the true function value and the predicted metamodel value at the kth testing point \mathbf{x}_k, respectively.

In addition to the preceding RMSE, we also calculate the normalized root-mean-squared error (NRMSE) as follows:

\mathrm{NRMSE} = \sqrt{\frac{\sum_{k=1}^{K}\left[f(\mathbf{x}_k) - \tilde{f}(\mathbf{x}_k)\right]^{2}}{\sum_{k=1}^{K}\left[f(\mathbf{x}_k)\right]^{2}}} \times 100\%.  (14)

RMSE only measures the error on the scale of the function itself, whereas NRMSE allows the metamodel error values to be compared across different functions. Both measures can be computed directly from the testing points, as sketched below.
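A direct transcription of (13)-(14), assuming NumPy (the function name is ours):

```python
import numpy as np

def rmse_nrmse(f_true, f_pred):
    """RMSE (13) and NRMSE (14) over the K testing points.

    f_true, f_pred: arrays of true and predicted values at the test grid.
    """
    err2 = np.sum((f_true - f_pred) ** 2)
    rmse = np.sqrt(err2 / len(f_true))
    nrmse = np.sqrt(err2 / np.sum(f_true ** 2)) * 100.0   # in percent
    return rmse, nrmse
```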

In engineering problems the global minimum is generally required, so we employ simulated annealing (SA) [39] to calculate the global minimum f̃_min of the final metamodel (a sketch of this search is given below). The actual global minima f_min are listed in Table 3.
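The paper does not specify the SA implementation; as a stand-in sketch, SciPy's dual_annealing can search the (cheap) metamodel for its global minimum, reusing predict_rbf from the earlier sketch:

```python
import numpy as np
from scipy.optimize import dual_annealing

def metamodel_global_minimum(X, lam, bounds, c=1.0):
    """Search the fitted RBF metamodel for its global minimum with simulated
    annealing; the metamodel is cheap, so many evaluations are affordable."""
    result = dual_annealing(
        lambda x: predict_rbf(np.atleast_2d(x), X, lam, c)[0],
        bounds=list(map(tuple, bounds)))
    return result.x, result.fun
```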

4.4. Results and Discussion. In this section we discuss the results obtained after constructing metamodels with the six sampling methods, using the assessment measures described above. As mentioned in Section 4.1, 20 runs are conducted for each sampling method, and therefore there are twenty sets of accuracy results per method. The advantages and validity of the sequential optimization sampling method (SOSM) are tested in comparison with the one-stage sampling methods and the sequential sampling methods separately below. In addition, SOSM is used to solve a typical mechanical design optimization problem.

4.4.1. Comparison of the Performance between SOSM and One-Stage Sampling Methods. In this part, the two classical one-stage sampling methods LHD and RND, together with SOSM, are used to construct metamodels. The accuracy of the metamodels and the global minima of the functions are obtained and compared. The error measures RMSE and NRMSE and the global minima summarized in Table 4 are average values.

From Table 4, the RMSE and NRMSE of the metamodels using SOSM are smaller than those of the other two one-stage sampling methods for functions 1–5; for function 6 the values are close. RND, as expected, performs poorly. That is, metamodels based on SOSM provide a better fit to the actual functions. As seen in Table 4, in exploring the global minimum f̃_min of the metamodels, SOSM is superior to LHD and RND across all numerical examples when compared with the actual global minimum f_min. In particular, SOSM can find the global minimum of all numerical examples almost every time, whereas LHD and RND perform fairly poorly, particularly in success rate, as depicted in Figure 9.

The mean results in Table 4 cannot adequately represent the advantages and disadvantages of the different sampling methods. Thus, statistical graphics, that is, boxplots, are


Table 4: Metamodel accuracy results for functions 1–6: SOSM versus one-stage sampling methods (RMSE, NRMSE (%), and f̃_min are averages over 20 runs; f_min is the actual global minimum).

No. | SOSM: RMSE, NRMSE, f̃_min | LHD: RMSE, NRMSE, f̃_min | RND: RMSE, NRMSE, f̃_min | f_min
1 | 0.663, 28.214, −10.002 | 0.722, 30.698, −9.137 | 0.817, 34.766, −8.728 | −10
2 | 0.061, 9.798, −1.715 | 0.090, 14.524, −1.611 | 0.122, 19.659, −1.662 | −1.715
3 | 1.674, 6.464, −0.002 | 1.790, 6.913, 0.140 | 2.072, 8.001, 0.040 | 0
4 | 0.728, 38.118, −6.556 | 0.810, 42.456, −5.786 | 0.861, 45.116, −5.374 | −6.551
5 | 2.455, 10.531, −34.243 | 3.353, 14.380, −12.928 | 3.789, 16.253, −9.907 | −34.63
6 | 0.995, 10.011, 0.116 | 0.992, 9.991, 2.284 | 1.076, 10.830, 2.070 | 0

Figure 7: Assessment of metamodel RMSE for functions 1–6: SOSM versus one-stage sampling methods (boxplots of RMSE for SOSM, LHD, and RND).

used to show the spread of the accuracy and global-minimum results for each metamodel. In descriptive statistics, the boxplot is a convenient way of graphically depicting groups of numerical data through their quartiles. The center line of each boxplot shows the 50th percentile (median) value, and the box encompasses the 25th and 75th percentiles of the data. The whiskers (horizontal lines) are plotted at a distance of 1.5 times the interquartile range in each direction, or at the limit of the data (if the limit of the data falls within 1.5 times the interquartile range). Data points outside the whiskers are shown by placing a "◻" sign for each point. The twenty metamodel accuracy results (RMSE and NRMSE) and global minima f̃_min obtained with the sampling methods SOSM, LHD, and RND are illustrated in Figures 7–9 with the help of boxplots.

From the results shown in Figures 7 and 8, it is found that the median values of RMSE and NRMSE for SOSM are smaller than those for LHD and RND for functions 1–5. For function 6, the median values of RMSE and NRMSE are a little larger, but very close. In addition, the box size of RMSE and NRMSE based on SOSM is the shortest for all functions except functions 4 and 5, and there the difference in size is small. Clearly, a small box size indicates a small standard deviation of the results; a high standard deviation suggests large uncertainty in the predictions, and a low standard deviation suggests low uncertainty. Points outside the whiskers indicate poor individual runs. Above all, whether mean values or median values and

Figure 8: Assessment of metamodel NRMSE for functions 1–6: SOSM versus one-stage sampling methods (boxplots of NRMSE for SOSM, LHD, and RND).

Figure 9: Assessment of metamodel global minima for functions 1–6: SOSM versus one-stage sampling methods (boxplots of f̃_min for SOSM, LHD, and RND).

box sizes of the results are taken into account, the accuracy of the metamodels built with SOSM is better.

Figure 9 depicts the boxplots of the global minima f̃_min obtained over the twenty numerical experiments. It is obvious from Figure 9 that the median value for SOSM is approximately equal to the actual global minimum f_min. Meanwhile, the small box sizes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; therefore, a smaller standard deviation of the results


Figure 10: Global minimum points of functions 1–6. Panels (a)–(f) show the global minimum points for functions 1–6 separately. The red pentagon indicates the actual global minimum point; the black dots indicate the global minimum points of metamodels based on SOSM; the blue squares indicate those based on LHD; and the magenta diamonds indicate those based on RND.


Figure 11: Function 1, actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

implies better robustness of the algorithm. It is clear from Figure 9 that SOSM is a more robust sampling method than the other two sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima from Table 4 with the distributions of the metamodel global minima based on LHD and RND in Figure 9, the success rate of those two methods is poor. In other words, SOSM performs very well in finding the global minimum of the metamodels.

In order to indicate the effectiveness of SOSM in seeking global minimum points, the positions of the global minimum points of the metamodels for functions 1–6 are shown in Figures 10(a)–10(f) separately. The red pentagon indicates the actual global minimum point; the black dots indicate the global minimum points of metamodels based on SOSM; the blue squares indicate those based on LHD; and the magenta diamonds indicate those based on RND. Figure 10 shows that the black dots are densely clustered around the red pentagon, whereas the blue squares and magenta diamonds are scattered around it, and the difference between the actual global minimum points and

the global minimum points of metamodels based on LHD or RND is mostly quite large. The above demonstrates that SOSM is superior to LHD and RND for global optimization. Panels (a)–(d) of Figures 11, 12, 13, 14, 15, and 16 show the surfaces of the actual functions and of the associated metamodels based on the three sampling methods. It can be observed intuitively that the metamodel surfaces obtained with SOSM are smoother than those with LHD and RND, and furthermore the global minima of the metamodels based on SOSM are consistent with the actual global minima. The conclusions reached above are further confirmed by comparing the actual function surfaces with the metamodel surfaces.

4.4.2. Comparison of the Performance between SOSM and Previous Sequential Sampling Methods. In this part, three different sequential sampling methods, namely the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28] (termed KSSM), and the successive local enumeration sampling method (SLE) [29], are used to construct metamodels in comparison with those constructed by SOSM. Similarly, the accuracy of the metamodels and the global minima of the functions are obtained and compared.


Figure 12: Function 2, actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

The accuracy measures RMSE and NRMSE and the global minima summarized in Table 5 are mean values. Note that a value of zero for both accuracy measures RMSE and NRMSE would indicate a perfect fit.

From Table 5, the RMSE and NRMSE of the metamodels using SOSM are smaller than those of the sequential sampling methods CV and KSSM for functions 1–5; for function 6 the values are close. SLE performs best on these accuracy measures. That is, metamodels based on SOSM provide a better fit to the actual functions than CV and KSSM, whereas the sampling method SLE performs unsatisfactorily in terms of exploring the global minimum. As seen in Table 5, in exploring the global minimum f̃_min of the metamodels, SOSM is superior to the previous sequential sampling method SLE through all numerical examples when compared with the actual global minimum f_min. In addition, the sequential sampling methods CV and KSSM perform as well as SOSM in exploring the global minimum; in that respect KSSM is slightly the best, SOSM takes second place, and CV comes last. It can be concluded that the sequential sampling method SOSM proposed in this paper is the best choice when both the accuracy of the metamodels and the exploration of the global minimum are considered.

In order to represent the advantages and disadvantages of the different sequential sampling methods adequately, the twenty metamodel accuracy results (RMSE and NRMSE) and global minima f̃_min for the sequential sampling methods SOSM, CV, KSSM, and SLE are illustrated in Figures 17–19 with the help of boxplots.

From the results shown in Figures 17 and 18, it is found that the median values of RMSE and NRMSE based on SOSM are smaller than those of CV and KSSM for functions 1–5; for function 6 the median values of RMSE and NRMSE are a little larger, but very close. Except for function 5, the accuracy of the metamodels constructed by SLE is the best. In addition, the box size of RMSE and NRMSE based on SOSM is shorter than that of CV and KSSM for functions 1–4, while the box size based on SLE is the shortest for all functions except functions 1 and 3. A small box size indicates a small standard deviation of the results; a low standard deviation suggests small uncertainty in the predictions, and a large standard deviation suggests high uncertainty. Points outside the whiskers indicate poor individual runs. Above all, whether mean values or median values and box sizes of the results are taken into account, the accuracy of the metamodels with sampling method SOSM


Figure 13: Function 3, actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

Table 5: Metamodel accuracy results for functions 1–6: SOSM versus previous sequential sampling methods (RMSE, NRMSE (%), and f̃_min are averages over 20 runs; f_min is the actual global minimum).

No. | SOSM: RMSE, NRMSE, f̃_min | CV [25]: RMSE, NRMSE, f̃_min | KSSM [28]: RMSE, NRMSE, f̃_min | SLE [29]: RMSE, NRMSE, f̃_min | f_min
1 | 0.663, 28.214, −10.002 | 0.699, 29.707, −9.995 | 0.709, 30.142, −10.017 | 0.493, 20.951, −9.539 | −10
2 | 0.061, 9.798, −1.715 | 0.089, 14.312, −1.715 | 0.120, 19.343, −1.716 | 0.065, 10.504, −1.622 | −1.715
3 | 1.674, 6.464, −0.002 | 2.261, 8.732, 0.000 | 2.305, 8.902, −0.001 | 1.459, 5.636, 0.105 | 0
4 | 0.728, 38.118, −6.556 | 0.813, 42.601, −6.514 | 0.847, 44.359, −6.552 | 0.533, 27.899, −5.904 | −6.551
5 | 2.455, 10.531, −34.243 | 2.508, 10.759, −33.794 | 2.466, 10.575, −34.259 | 2.789, 11.961, −12.212 | −34.63
6 | 0.995, 10.011, 0.116 | 0.943, 9.486, 0.390 | 0.950, 9.558, 0.052 | 0.703, 7.080, 1.274 | 0

is superior to the sampling methods CV and KSSM. Certainly, the sampling method SLE performs best in terms of metamodel accuracy; however, SLE is not good at exploring the global minimum of the functions.

Figure 19 depicts the boxplots of the global minima f̃_min obtained over the twenty numerical experiments. It is obvious from Figure 19 that the median values for SOSM, CV, and KSSM are approximately equal to the actual global minimum f_min. Meanwhile, the small box sizes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global

minimum is one of the important factors for evaluating the robustness of an algorithm; therefore, a smaller standard deviation of the results implies better robustness. It is clear from Figure 19 that the sequential sampling methods except SLE are robust sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima from Table 5 with the distribution of the metamodel global minima based on SLE in Figure 19, the success rate of SLE is poor. It is obvious that SOSM performs very well both in constructing accurate metamodels and in finding the global minimum of the functions.


Figure 14: Function 4, actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

Six different sampling methods have been used to construct metamodels for six typical functions in this paper. It can be seen that sequential sampling methods perform better and more efficiently than one-stage sampling methods; furthermore, sequential sampling allows engineers to control the sampling process. In general, one-stage sampling is not as good as sequential sampling at fitting the region where the global minimum is located. In a word, SOSM, the sequential sampling method proposed in this paper, is the best choice when both metamodel accuracy and locating the global minimum are considered: it is reliable over the whole fitting space and also good at fitting the region where the global minimum is located.

4.4.3. Engineering Problem. The validity of the sequential sampling method SOSM proposed in this paper is tested on a typical mechanical design optimization problem involving four design variables, namely, pressure vessel design. This problem has been studied by many researchers [40–42]. The schematic of the pressure vessel is shown in Figure 20. In this case, a cylindrical pressure vessel with two hemispherical heads is

designed for minimum fabrication cost. Four variables are identified: the thickness of the pressure vessel T_s, the thickness of the head T_h, the inner radius of the pressure vessel R, and the length of the vessel without the heads L. In this case the variable vector is given (in inches) by

X = (T_s, T_h, R, L) = (x_1, x_2, x_3, x_4).  (15)

The objective function is the combined cost of materials, forming, and welding of the pressure vessel. The mathematical model of the optimization problem is expressed as

\min\; f(X) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3

\text{s.t.}\; g_1(X) = -x_1 + 0.0193\,x_3 \le 0,
g_2(X) = -x_2 + 0.00954\,x_3 \le 0,
g_3(X) = -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0,
g_4(X) = x_4 - 240 \le 0.  (16)


Figure 15: Function 5, actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

Table 6: Comparison of optimal results for the design of a pressure vessel.

Design variable | Cao and Wu [42] | Kannan and Kramer [43] | Deb [44] | This paper (SOSM)
x_1 | 1.000 | 1.125 | 0.9375 | 1.000
x_2 | 0.625 | 0.625 | 0.5000 | 0.625
x_3 | 51.1958 | 58.291 | 48.3290 | 41.523
x_4 | 90.7821 | 43.690 | 112.6790 | 120.562
f(X) | 7108.616 | 7198.042 | 6410.381 | 6237.402

The ranges of the design variables, x_1 ∈ [0.0625, 6.25], x_2 ∈ [0.0625, 6.25], x_3 ∈ [6.25, 125], and x_4 ∈ [6.25, 125], are used following the literature [40]. The problem formulated above is a simple nonlinear constrained problem. Now, assuming that the objective and constraint functions defined by (16) are computation-intensive, metamodels of these functions are constructed by RBF using the sequential sampling method SOSM. A sketch of the problem functions is given below.
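For reference, the objective and constraints of (16) are straightforward to code. The following sketch (assuming NumPy, with x in inches; the function names are ours) is only a restatement of the formulas, not the authors' optimization code:

```python
import numpy as np

def pressure_vessel_objective(x):
    """Fabrication cost of the vessel, eq. (16)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    """Constraint values g_1..g_4 of eq. (16); feasible if all are <= 0."""
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1_296_000.0,
        x4 - 240.0,
    ])
```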

This problem has been solved by many researchers: Cao and Wu [42] applied an evolutionary programming model, Kannan and Kramer [43] solved the problem using an augmented Lagrange multiplier approach, and Deb [44] used a genetic adaptive search.

The average values of the optimal results from 50 runs are listed in Table 6 and compared with the three results reported in the literature [42–44]. It can be seen from the table that the optimal solution in this paper is still about 2.7% better than the best solution previously reported in the literature [44].

5. Conclusions

In this paper, the sequential optimization sampling method (SOSM) for metamodels has been proposed. The recently developed extended radial basis functions (E-RBF) are introduced as the approximation method to construct the metamodels. Combining the new sampling strategy SOSM with


Figure 16: Function 6, actual and metamodel surfaces ((a) actual function, (b) SOSM, (c) LHD, (d) RND).

Figure 17: Assessment of metamodel RMSE for functions 1–6: SOSM versus previous sequential sampling methods (boxplots of RMSE for SOSM, CV, KSSM, and SLE).

the extended radial basis functions, the design algorithm for building metamodels is presented. In the SOSM, the optimal sampling points of the response surface are taken as the new sampling points in order to improve the local accuracy.

Figure 18: Assessment of metamodel NRMSE for functions 1–6: SOSM versus previous sequential sampling methods (boxplots of NRMSE for SOSM, CV, KSSM, and SLE).

In addition, new sampling points in the sparse region are required for better global approximation. To determine the sparse region, the density function constructed by the radial basis function network has been applied. In the proposed


Figure 19: Assessment of metamodel global minima for functions 1–6: SOSM versus previous sequential sampling methods (boxplots of f̃_min for SOSM, CV, KSSM, and SLE).

Figure 20: Diagram of the pressure vessel design (thicknesses T_s and T_h, inner radius R, length L).

algorithm, the response surface is constructed repeatedly until the terminal criterion, that is, the maximum number of sampling points m_max, is satisfied.

To examine the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for metamodel accuracy, RMSE and NRMSE, are employed; meanwhile, the global minimum is introduced to judge the performance of the sampling methods for metamodels. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CV, KSSM, and SLE), SOSM results in more accurate metamodels. Furthermore, SOSM is superior in exploring the global minimum of the metamodels compared to the other five sampling methods.

The new sequential optimization sampling method SOSM thus provides another effective way of generating sampling points to construct metamodels. It is superior in seeking the global minimum compared to the previous sampling methods and, meanwhile, improves the accuracy of the metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from the National Science Foundation (CMMI-51375389 and 51279165) is gratefully acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences—Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.

[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275–286, 1999.

[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1–3, pp. 50–79, 2009.

[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Journal of Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61–70, 2011.

[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1–6, IEEE, January 2013.

[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198–207, 2013.

[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61–73, 2013.

[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194–206, 2013.

[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406–429, 2014.

[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210–226, 2013.

[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64–73, 2013.

[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914–924, 2014.

[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521–1529, 2013.

[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881–890, 2013.

[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39–46, 2012.

[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807–1815, 2013.

[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306–316, 2011.

[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764–1773, 2011.

[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.

[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Journal of Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1–13, 2001.

[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819–830, 2013.

[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254–260, 2013.

[23] L. Ke, H. B. Qiu, Z. Z. Chen, and L. Chi, "Engineering design based on Hammersley sequences sampling method and SVR," Advanced Materials Research, vol. 544, pp. 206–211, 2012.

[24] Y. Tang and H. Xu, "An effective construction method for multi-level uniform designs," Journal of Statistical Planning and Inference, vol. 143, no. 9, pp. 1583–1589, 2013.

[25] R. Jin, W. Chen, and A. Sudjianto, "On sequential sampling for global metamodeling in engineering design," in Proceedings of the ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 539–548, American Society of Mechanical Engineers, 2002.

[26] Y. M. Deng, Y. Zhang, and Y. C. Lam, "A hybrid of mode-pursuing sampling method and genetic algorithm for minimization of injection molding warpage," Materials & Design, vol. 31, no. 4, pp. 2118–2123, 2010.

[27] X. Wei, Y. Wu, and L. Chen, "A new sequential optimal sampling method for radial basis functions," Journal of Applied Mathematics and Computation, vol. 218, no. 19, pp. 9635–9646, 2012.

[28] S. Kitayama, M. Arakawa, and K. Yamazaki, "Sequential approximate optimization using radial basis function network for engineering optimization," Optimization and Engineering, vol. 12, no. 4, pp. 535–557, 2011.

[29] H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551–564, 2012.

[30] S. Kitayama, J. Srirat, and M. Arakawa, "Sequential approximate multi-objective optimization using radial basis function network," Structural and Multidisciplinary Optimization, vol. 48, no. 3, pp. 501–515, 2013.

[31] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, no. 8, pp. 1905–1915, 1971.

[32] A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306–1315, 2005.

[33] T. Krishnamurthy, "Response surface approximation with augmented and compactly supported radial basis functions," in Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, vol. 1748, pp. 3210–3224, April 2003, AIAA Paper.

[34] Z. Wu, "Compactly supported positive definite radial functions," Journal of Advances in Computational Mathematics, vol. 4, no. 1, pp. 283–292, 1995.

[35] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11–22, 1984.

[36] A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203–217, 2006.

[37] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, no. 4, pp. 370–380, 2007.

[38] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy, "Global and local optimization using radial basis function response surface models," Applied Mathematical Modelling, vol. 31, no. 10, pp. 2095–2110, 2007.

[39] P. Borges, T. Eid, and E. Bergseng, "Applying simulated annealing using different methods for the neighborhood search in forest planning problems," European Journal of Operational Research, vol. 233, no. 3, pp. 700–710, 2014.

[40] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.

[41] L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.

[42] Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443–446, Indianapolis, Ind, USA, April 1997.

[43] B. K. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, pp. 318–320, 1994.

[44] K. Deb, "GeneAS: a robust optimal design technique for mechanical component design," in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds., pp. 497–514, Springer, Berlin, Germany, 1997.


Table 2: Non-radial basis functions ψ(ξ_i^j). γ and η are prescribed parameters; refer to [32, 36].

Region | Range of ξ_i^j | ψ_L | ψ_R | β
I | ξ_i^j ≤ −γ | (−η γ^(η−1)) ξ_i^j + γ^η (1 − η) | 0 | ξ_i^j
II | −γ ≤ ξ_i^j ≤ 0 | (ξ_i^j)^η | 0 | ξ_i^j
III | 0 ≤ ξ_i^j ≤ γ | 0 | (ξ_i^j)^η | ξ_i^j
IV | ξ_i^j ≥ γ | 0 | (η γ^(η−1)) ξ_i^j + γ^η (1 − η) | ξ_i^j
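To make the piecewise definition in Table 2 concrete, a minimal sketch is given below. It assumes η = 2 (the value used later in Section 4.2) and a user-chosen γ; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def nonradial_bases(xi, gamma=1.0, eta=2.0):
    """Evaluate the E-RBF non-radial bases psi_L, psi_R and the linear term beta
    for a coordinate offset xi = x^j - x_i^j, following the four regions of Table 2.
    (Sketch only; names are illustrative, not from the paper's implementation.)"""
    xi = np.asarray(xi, dtype=float)
    slope = eta * gamma ** (eta - 1)       # magnitude of the linear extrapolation slope
    offset = gamma ** eta * (1.0 - eta)    # intercept that keeps the pieces continuous at |xi| = gamma
    psi_L = np.where(xi <= -gamma, -slope * xi + offset,
                     np.where(xi <= 0.0, xi ** eta, 0.0))
    psi_R = np.where(xi >= gamma, slope * xi + offset,
                     np.where(xi >= 0.0, xi ** eta, 0.0))
    beta = xi
    return psi_L, psi_R, beta
```

With η = 2 the linear pieces in regions I and IV meet the quadratic pieces continuously at ξ = ±γ, which keeps the behaviour of the basis well defined beyond the sampled region.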

After obtaining the coefficients α using E-RBF, one can evaluate the metamodel to construct the response surface using (2). The resulting metamodel is guaranteed to be convex and highly accurate [32, 36]. In Section 4, a series of mathematical examples is approximated by E-RBF based on the sampling method proposed in Section 3.

In this section the E-RBF metamodeling approach has been introduced only briefly due to space limitations; a more complete description and discussion are presented in [32, 36].

3. The Sequential Optimization Sampling Method

As is well known, metamodels are approximations of real models, which are commonly complex and unknown. The accuracy of a metamodel depends primarily on the approximation method and on the sampling method [37]. The sampling method proposed in this paper consists of two parts: the first is the procedure for adding optimization sampling points, and the second is the procedure for adding points in sparse regions [28, 30].

3.1. The Optimization Sampling Points. The optimization sampling points should have two important properties: they should be adaptive and sensitive. The focus of sampling should be on generating a reasonable number of sampling points intelligently, so that the metamodel reflects the real "black-box" functions in the areas of interest. General knowledge tells us that placing sampling points at the valleys and peaks of the response surface improves the accuracy to the greatest extent; the valleys and peaks are the extrema points of the function.

In mathematics, points that are the largest or smallest within a given neighborhood are defined as extrema points. McDonald et al. [38] showed that radial basis function models created with (1) are twice continuously differentiable when the multiquadric function is employed as the basis function, for all c ≠ 0. The first approximation model is constructed from the initial sampling points using radial basis functions; thus, function evaluations, analytic gradients, and the Hessian matrix of second partial derivatives can be obtained from this initial model.

Considering the RBF model with n dimensions in (1), the gradient of the model is

\frac{\partial f}{\partial \mathbf{x}} = \sum_{i=1}^{N} \lambda_i \frac{\phi'(r_i)}{r_i(\mathbf{x})} (\mathbf{x} - \mathbf{x}_i)^{T},   (6)

where \phi'(r_i) = \partial\phi / \partial r_i and r_i(\mathbf{x}) = \lVert \mathbf{x} - \mathbf{x}_i \rVert. For the multiquadric RBF model, \phi(r_i) = (r_i^2 + c^2)^{1/2} and \phi'(r_i) = r_i (r_i^2 + c^2)^{-1/2}. The Hessian matrix can be calculated from (6) as

H(\mathbf{x}) = \frac{\partial^2 f}{\partial \mathbf{x}^2} = \sum_{i=1}^{N} \frac{\lambda_i}{r_i(\mathbf{x})} \left[ \phi'(r_i) \cdot I + \left( \phi''(r_i) - \frac{\phi'(r_i)}{r_i(\mathbf{x})} \right) \right] (\mathbf{x} - \mathbf{x}_i)^{T}.   (7)

Similarly, \phi''(r_i) = \partial^2\phi / \partial r_i^2; for the multiquadric RBF model, \phi''(r_i) = c^2 (r_i^2 + c^2)^{-3/2}, and I is the unit (identity) matrix. Setting (6) equal to zero and solving

\frac{\partial f}{\partial \mathbf{x}} = 0, \qquad \mathbf{x} = (x_1, x_2, \ldots, x_N),   (8)

the critical points x_e can be obtained. Upon substituting a critical point x_e into the Hessian matrix, we can judge the definiteness of H(x_e): the critical point is a minimum when the matrix is positive definite and a maximum when it is negative definite. Figures 4 and 5 show the extrema points in the 2D and 3D cases, respectively; the red square and blue pentagon indicate maxima points, while the green dot and red asterisk indicate minima points. Once an approximation model has been created, the coordinates of its extrema points can be obtained.
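As an illustration of this step, the sketch below evaluates the gradient (6) and Hessian (7) of a plain multiquadric RBF interpolant f(x) = Σ_i λ_i φ(‖x − x_i‖), φ(r) = √(r² + c²), and classifies its critical points. The extra E-RBF terms are omitted, and critical points are located by a multi-start minimization of the squared gradient norm rather than by solving (8) in closed form; all function names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_predict(x, centers, lam, c=1.0):
    """Multiquadric RBF prediction: sum_i lam_i * sqrt(||x - x_i||^2 + c^2)."""
    r2 = np.sum((centers - x) ** 2, axis=1)
    return float(np.dot(lam, np.sqrt(r2 + c ** 2)))

def rbf_gradient(x, centers, lam, c=1.0):
    """Analytic gradient, cf. (6): each term is lam_i * (x - x_i) / sqrt(r_i^2 + c^2)."""
    diff = x - centers                      # shape (N, n)
    r2 = np.sum(diff ** 2, axis=1)
    return np.dot(lam / np.sqrt(r2 + c ** 2), diff)

def rbf_hessian(x, centers, lam, c=1.0):
    """Analytic Hessian, cf. (7): I/s - outer(d, d)/s^3 per center, s = sqrt(r^2 + c^2)."""
    diff = x - centers
    s = np.sqrt(np.sum(diff ** 2, axis=1) + c ** 2)
    n = x.size
    H = np.zeros((n, n))
    for i in range(len(lam)):
        H += lam[i] * (np.eye(n) / s[i] - np.outer(diff[i], diff[i]) / s[i] ** 3)
    return H

def find_extrema(centers, lam, bounds, n_starts=20, c=1.0, seed=0):
    """Multi-start search for critical points (grad = 0, cf. (8)), classified via the Hessian."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    def grad_norm2(x):
        g = rbf_gradient(x, centers, lam, c)
        return float(np.dot(g, g))
    extrema = []
    for _ in range(n_starts):
        x0 = lo + rng.random(len(bounds)) * (hi - lo)
        res = minimize(grad_norm2, x0, bounds=bounds)
        eig = np.linalg.eigvalsh(rbf_hessian(res.x, centers, lam, c))
        kind = "min" if np.all(eig > 0) else ("max" if np.all(eig < 0) else "saddle")
        extrema.append((res.x, kind))
    return extrema
```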

3.2. Density Function with the RBF. It is necessary to add new sampling points in sparse regions for good global approximation. To achieve this, a function called the density function, proposed by Kitayama et al. [28, 30], is constructed using the RBF network in this paper. The aim of the density function is to discover sparse regions of the design space: the density function generates local minima in the sparse regions, so that the minimum of this function can be taken as a new sampling point. The addition of new sampling points in the sparse regions improves the accuracy of the approximation model and helps to find the global minimum of the metamodel.

To explore the sparse region, every output f of the RBF network is replaced with +1. Let N be the number of sampling points. The procedure for constructing the density function is summarized as follows.

(1) The output vector f_D is prepared at the sampling points:

f_D = (1, 1, \ldots, 1)^{T}_{N \times 1}.   (9)

[Figure 4: Extrema points in 2D (maximum and minimum points of the metamodel marked).]

[Figure 5: Extrema points in 3D (maximum and minimum points of the metamodel marked).]

(2) The weight vector λ_D of the density function D(x) is calculated as follows:

\lambda_D = (\Phi^{T}\Phi + \Delta)^{-1} \Phi^{T} f_D,   (10)

where

\Phi = \begin{bmatrix} \phi_1(x_1) & \phi_2(x_1) & \cdots & \phi_N(x_1) \\ \phi_1(x_2) & \phi_2(x_2) & \cdots & \phi_N(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1(x_N) & \phi_2(x_N) & \cdots & \phi_N(x_N) \end{bmatrix}, \qquad \Delta = 10^{-3} \times \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}_{N \times N}.   (11)

(3) The additional sampling point x_D in the sparse region is found as the global minimum of the density function built with the RBF:

D(x_D) = \sum_{j=1}^{N} \lambda_{D,j}\, \phi_j(x_D) \longrightarrow \min.   (12)
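A minimal sketch of steps (1)-(3) follows, again assuming the plain multiquadric basis φ(r) = √(r² + c²). The choice of differential_evolution as the global optimizer for (12) is an assumption of this sketch, not something specified by the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def multiquadric_matrix(X, c=1.0):
    """Phi[i, j] = phi_j(x_i) = sqrt(||x_i - x_j||^2 + c^2) for the sample matrix X of shape (N, n)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2 + c ** 2)

def density_function_weights(X, c=1.0, ridge=1e-3):
    """Weights lambda_D of the density function, cf. (10)-(11): every output is replaced by +1."""
    Phi = multiquadric_matrix(X, c)
    f_D = np.ones(len(X))
    A = Phi.T @ Phi + ridge * np.eye(len(X))
    return np.linalg.solve(A, Phi.T @ f_D)

def next_sparse_point(X, bounds, c=1.0):
    """New sampling point x_D as the global minimizer of D(x) = sum_j lambda_D_j * phi_j(x), cf. (12)."""
    lam = density_function_weights(X, c)
    def D(x):
        r2 = np.sum((X - x) ** 2, axis=1)
        return float(np.dot(lam, np.sqrt(r2 + c ** 2)))
    res = differential_evolution(D, bounds, seed=0)
    return res.x
```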

3.3. Summary of the Proposed Algorithm. The detailed algorithm for the sequential optimization sampling method with the RBF network is described below, and Figure 6 shows the proposed algorithm. The algorithm is roughly divided into two phases. The first phase constructs the response surface and adds the extrema points of the response surface as new sampling points. The second phase constructs the density function and adds the minimum of the density function as a new sampling point. These two phases are described in detail as follows.

[Figure 6: Proposed algorithm. Flowchart: start from m initial sampling points; first phase: construct the response surface with RBF, find its extrema points, and add them to the sample; second phase (count = 1, while count ≤ d): construct the density function with RBF, find its minimum point, add it to the sample, and set count = count + 1; update the sampling points m and repeat while m < m_max, otherwise end.]

First phase: m initial sampling points are generated using the Latin hypercube sampling design. All functions are evaluated at the sampling points, and the response surface f(x) is constructed. The extrema points of the response surface can then be found and taken directly as new sampling points. A more accurate metamodel is obtained by repeating this procedure of adding extrema points to the sample adaptively and sequentially.

Second phase: the density function is constructed to find the sparse region, and its minimum point is taken as a new sampling point. This step is repeated while the criterion count ≤ d is satisfied. The parameter d controls the number of sampling points obtained from the density function; Kitayama et al. [28] advised d = int(n/2), whereas in this paper d = int(n/2) + 1, where int(·) denotes rounding off. A new sampling point is added as long as the counter does not exceed d, and the counter is incremented as count = count + 1.

The terminal criterion of the integrated algorithm is determined by the maximum number of sampling points m_max. If the number of sampling points is less than m_max, the algorithm proceeds; otherwise, it terminates. In the algorithm, the response surface is constructed repeatedly through the addition of the new sampling points, namely, the extrema points and the minimum point of the density function. A more accurate metamodel is thus constructed by repeating the procedure of adding points to the sample adaptively and sequentially.
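Putting the two phases together, the loop can be sketched as below. It reuses find_extrema and next_sparse_point from the earlier sketches, fit_rbf is a hypothetical stand-in for fitting model (1) (the paper actually fits the E-RBF model), and the initial design uses SciPy's Latin hypercube sampler.

```python
import numpy as np
from scipy.stats import qmc

def sosm_sampling(true_fn, bounds, m_init=10, m_max=25, c=1.0, seed=0):
    """Sketch of the SOSM loop in Figure 6 (not the authors' implementation)."""
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    n = len(bounds)
    d = int(n / 2) + 1                                   # density-function points per cycle
    X = qmc.scale(qmc.LatinHypercube(d=n, seed=seed).random(m_init), lo, hi)
    y = np.array([true_fn(x) for x in X])
    while len(X) < m_max:
        lam = fit_rbf(X, y, c)                           # hypothetical helper: fit model (1)
        # First phase: add extrema points of the current response surface.
        # (In practice, duplicate critical points would be filtered out.)
        for x_e, _ in find_extrema(X, lam, bounds, c=c):
            X = np.vstack([X, x_e]); y = np.append(y, true_fn(x_e))
            if len(X) >= m_max:
                return X, y
        # Second phase: add d minimizers of the density function (sparse regions).
        for _ in range(d):
            x_d = next_sparse_point(X, bounds, c)
            X = np.vstack([X, x_d]); y = np.append(y, true_fn(x_d))
            if len(X) >= m_max:
                return X, y
    return X, y
```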

4. Numerical Examples

In this section, we test the validity of the proposed sampling method on several well-known numerical examples and one engineering optimization problem. All of these functions are approximated with the E-RBF model. The response surfaces are constructed with one-stage sampling methods (i.e., LHD and RND) and sequential sampling methods (i.e., CV [25], the method of Kitayama et al. [28], SLE [29], and the SOSM proposed in this paper) using the same sampling size. In order to visualize the comparison between the approximated and actual models, the numerical examples listed in Table 3, each with two design variables, are tested.

4.1. Sampling Strategies. Two types of sampling methods are used in this paper: one-stage sampling methods and sequential sampling methods. The one-stage sampling methods are the Latin hypercube sampling design (LHD) and the random sampling design (RND). The sequential sampling methods are the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28] (termed KSSM in this paper), the successive local enumeration sampling method (SLE) [29], and the sequential optimization sampling method (SOSM) proposed in this paper.

Table 3: Numerical functions and global minima.

No. | Function | Design domain | Global minimum
1 | f(x) = -10 sinc(x_1) · sinc(x_2) | -2 ≤ x ≤ 2 | x* = (0, 0), f_min = -10
2 | f(x) = 4 x_1 e^{-x_1^2 - x_2^2} | -2 ≤ x ≤ 2 | x* = (-√2/2, 0), f_min = -1.72
3 | f(x) = x_1^2 + x_2^2 - 10 [cos(2π x_1) + cos(2π x_2)] + 20 | -0.8 ≤ x ≤ 0.8 | x* = (0, 0), f_min = 0
4 | f(x) = 3 (1 - x_1)^2 e^{-x_1^2 - (x_2 + 1)^2} - 10 (x_1/5 - x_1^3 - x_2^5) e^{-x_1^2 - x_2^2} - (1/3) e^{-(x_1 + 1)^2 - x_2^2} | -3 ≤ x ≤ 3 | x* = (0.228, -1.63), f_min = -6.55
5 | f(x) = -60 / [1 + (x_1 + 1)^2 + (x_2 - 3)^2] - 20 / [1 + (x_1 - 1)^2 + (x_2 - 3)^2] - 30 / [1 + x_1^2 + (x_2 + 4)^2] + 30 | -6 ≤ x ≤ 6 | x* = (-0.97, 3), f_min = -34.63
6 | f(x) = -10 sin(√(x_1^2 + x_2^2 + eps)) / √(x_1^2 + x_2^2 + eps) + 10, eps = 10^{-15} | -3π ≤ x ≤ 3π | x* = (0, 0), f_min = 0

Every metamodel is constructed 20 times with each sampling method. For functions 1 and 2, we set m_max = 25; for functions 4 and 5, m_max = 36; for functions 3 and 6, we generate 28 and 40 sampling points, respectively.
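For reference, the six benchmark functions of Table 3 can be coded as below. This is a sketch of the reconstructed formulas: the unnormalized sinc(t) = sin t / t is assumed for function 1, and the fractional form of function 5 follows the global minimum reported in Table 3.

```python
import numpy as np

def sinc(t):
    # plain sin(t)/t (note: np.sinc would include a factor of pi)
    return np.where(t == 0.0, 1.0, np.sin(t) / np.where(t == 0.0, 1.0, t))

def f1(x): return -10.0 * sinc(x[0]) * sinc(x[1])
def f2(x): return 4.0 * x[0] * np.exp(-x[0]**2 - x[1]**2)
def f3(x): return x[0]**2 + x[1]**2 - 10.0*(np.cos(2*np.pi*x[0]) + np.cos(2*np.pi*x[1])) + 20.0
def f4(x):  # MATLAB "peaks"-type function
    return (3*(1 - x[0])**2 * np.exp(-x[0]**2 - (x[1] + 1)**2)
            - 10*(x[0]/5 - x[0]**3 - x[1]**5) * np.exp(-x[0]**2 - x[1]**2)
            - np.exp(-(x[0] + 1)**2 - x[1]**2) / 3)
def f5(x):
    return (-60.0 / (1 + (x[0] + 1)**2 + (x[1] - 3)**2)
            - 20.0 / (1 + (x[0] - 1)**2 + (x[1] - 3)**2)
            - 30.0 / (1 + x[0]**2 + (x[1] + 4)**2) + 30.0)
def f6(x):
    r = np.sqrt(x[0]**2 + x[1]**2 + 1e-15)
    return -10.0 * np.sin(r) / r + 10.0
```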

4.2. Selection of Parameters. For the E-RBF approach, we set c = 1, the prescribed parameter of the multiquadric basis functions, for all examples. The parameter γ is set to approximately 1/3 of the design domain size; Mullur and Messac [32] found that the results are not unduly sensitive to γ. The parameter η is set to 2 for all numerical examples.

4.3. Metamodel Accuracy Measures. Generally speaking, an E-RBF response surface passes through all the sampling points exactly; therefore, the accuracy of an E-RBF model cannot be estimated at the sampling points themselves. To measure the accuracy of the resulting metamodels, we use additional testing points and the standard root-mean-squared error (RMSE); the smaller the RMSE, the more accurate the response surface. The error measure is defined as

\mathrm{RMSE} = \sqrt{ \frac{ \sum_{k=1}^{K} \left[ f(\mathbf{x}_k) - \hat{f}(\mathbf{x}_k) \right]^2 }{ K } },   (13)

where K is the number of additional testing points generated by the grid sampling method (32 × 32 for all the examples), and f(x_k) and \hat{f}(x_k) are the true function value and the predicted metamodel value at the kth testing point x_k, respectively.

In addition to the RMSE, we also calculate the normalized root-mean-squared error (NRMSE) as follows:

\mathrm{NRMSE} = \sqrt{ \frac{ \sum_{k=1}^{K} \left[ f(\mathbf{x}_k) - \hat{f}(\mathbf{x}_k) \right]^2 }{ \sum_{k=1}^{K} \left[ f(\mathbf{x}_k) \right]^2 } } \times 100\%.   (14)

RMSE measures the error on the scale of the function itself, whereas NRMSE allows the metamodel error to be compared across different functions.
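Both measures can be computed directly on a grid of testing points, as in the short sketch below (the 32 × 32 grid matches the text; function names are illustrative).

```python
import numpy as np

def accuracy_measures(true_fn, surrogate_fn, bounds, grid=32):
    """RMSE and NRMSE, cf. (13)-(14), on a grid x grid set of testing points."""
    (lo1, hi1), (lo2, hi2) = bounds
    g1, g2 = np.meshgrid(np.linspace(lo1, hi1, grid), np.linspace(lo2, hi2, grid))
    pts = np.column_stack([g1.ravel(), g2.ravel()])
    y_true = np.array([true_fn(p) for p in pts])
    y_pred = np.array([surrogate_fn(p) for p in pts])
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    nrmse = np.sqrt(np.sum((y_true - y_pred) ** 2) / np.sum(y_true ** 2)) * 100.0
    return rmse, nrmse
```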

In engineering problems, the global minimum is generally what is required, so we employ simulated annealing (SA) [39] to calculate the global minimum f̂_min of the final metamodel. The actual global minima f_min are listed in Table 3.
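The paper uses simulated annealing [39] for this search. As a stand-in, SciPy's dual_annealing (a generalized simulated-annealing routine) can be used to minimize the fitted surrogate; this is an assumption of the sketch, not the authors' exact implementation.

```python
from scipy.optimize import dual_annealing

def surrogate_global_minimum(surrogate_fn, bounds, seed=0):
    """Approximate global minimum of the fitted metamodel via a simulated-annealing-style search."""
    res = dual_annealing(surrogate_fn, bounds=bounds, seed=seed)
    return res.x, res.fun
```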

4.4. Results and Discussion. In this section, we discuss the results obtained after constructing metamodels with the six sampling methods, using the assessment measures described above. As mentioned in Section 4.1, 20 runs are conducted for each sampling method, so there are twenty sets of accuracy results per method. The advantages and validity of the sequential optimization sampling method (SOSM) are tested against the one-stage sampling methods and the sequential sampling methods separately below. In addition, SOSM is used to solve a typical mechanical design optimization problem.

4.4.1. Comparison of the Performance between SOSM and One-Stage Sampling Methods. In this part, the two classical one-stage sampling methods, LHD and RND, and SOSM are used to construct metamodels, and the accuracy of the metamodels and the global minima of the functions are collected. The error measures RMSE and NRMSE and the global minima summarized in Table 4 are average values.

From Table 4, the RMSE and NRMSE of the metamodels using SOSM are smaller than those of the other two one-stage sampling methods for functions 1-5; for function 6, the values are close. RND, as expected, performs poorly. That is, metamodels based on SOSM tend to provide a better fit to the actual functions. As seen in Table 4, in exploring the global minimum f̂_min of the metamodels, SOSM is superior to LHD and RND for all numerical examples when compared against the actual global minimum f_min. In particular, SOSM finds the global minimum of nearly all numerical examples in almost every run, whereas LHD and RND perform fairly poorly, particularly in success rate, as depicted in Figure 9.

The mean results in Table 4 cannot adequately represent the advantages and disadvantages of the different sampling methods. Thus, statistical graphics, namely boxplots, are

Table 4: Metamodel accuracy results for functions 1-6, SOSM versus one-stage sampling methods (averages of RMSE, NRMSE (%), and predicted global minimum f̂_min; the last column is the actual global minimum f_min).

No. | SOSM (RMSE / NRMSE / f̂_min) | LHD (RMSE / NRMSE / f̂_min) | RND (RMSE / NRMSE / f̂_min) | f_min
1 | 0.663 / 28.214 / -10.002 | 0.722 / 30.698 / -9.137 | 0.817 / 34.766 / -8.728 | -10
2 | 0.061 / 9.798 / -1.715 | 0.090 / 14.524 / -1.611 | 0.122 / 19.659 / -1.662 | -1.715
3 | 1.674 / 6.464 / -0.002 | 1.790 / 6.913 / 0.140 | 2.072 / 8.001 / 0.040 | 0
4 | 0.728 / 38.118 / -6.556 | 0.810 / 42.456 / -5.786 | 0.861 / 45.116 / -5.374 | -6.551
5 | 2.455 / 10.531 / -34.243 | 3.353 / 14.380 / -12.928 | 3.789 / 16.253 / -9.907 | -34.63
6 | 0.995 / 10.011 / 0.116 | 0.992 / 9.991 / 2.284 | 1.076 / 10.830 / 2.070 | 0

[Figure 7: Assessment of metamodel RMSE between SOSM and the one-stage sampling methods (boxplots of RMSE for functions 1-6 under SOSM, LHD, and RND).]

used to show the deviations of the accuracy and global minimum of each metamodel. In descriptive statistics, the boxplot is a convenient way of graphically depicting groups of numerical data through their quartiles. The center line of each boxplot shows the 50th percentile (median), and the box encompasses the 25th and 75th percentiles of the data. The whiskers (horizontal lines) are plotted at a distance of 1.5 times the interquartile range in each direction, or at the limit of the data if it falls within 1.5 times the interquartile range. Data points outside the whiskers are marked individually with a "◻" sign. The twenty sets of metamodel accuracy results (RMSE and NRMSE) and global minima f̂_min obtained with the sampling methods SOSM, LHD, and RND are illustrated in Figures 7-9 with the help of boxplots.
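Figures of this kind can be reproduced with any standard plotting library; a minimal sketch (assuming the twenty RMSE values per method are already collected) is shown below. The 1.5 × IQR whisker rule described above is matplotlib's default.

```python
import matplotlib.pyplot as plt

def plot_rmse_boxplots(results, labels):
    """results: list of arrays, each holding the 20 RMSE values of one sampling method."""
    fig, ax = plt.subplots()
    ax.boxplot(results, labels=labels, whis=1.5, flierprops=dict(marker="s"))
    ax.set_ylabel("RMSE")
    plt.show()
```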

From the results shown in Figures 7 and 8, the median values of RMSE and NRMSE for SOSM are smaller than those for LHD and RND for functions 1-5; for function 6, the median values are slightly larger but very close. In addition, the box size of the RMSE and NRMSE results based on SOSM is the shortest for all functions except functions 4 and 5, and even there the difference in size is small. A small box indicates a small standard deviation of the results; a high standard deviation suggests large uncertainty in the predictions, and a low one suggests low uncertainty. Points outside the whiskers correspond to runs with poor results. Above all, whether mean values or median values and

[Figure 8: Assessment of metamodel NRMSE between SOSM and the one-stage sampling methods (boxplots for functions 1-6).]

[Figure 9: Assessment of metamodel global minimum between SOSM and the one-stage sampling methods (boxplots for functions 1-6).]

box sizes are taken into account, the accuracy of the metamodels obtained with SOSM is better.

Figure 9 depicts the boxplots of the global minima f̂_min obtained over the twenty numerical experiments. It is evident from Figure 9 that the median value for SOSM is approximately equal to the actual global minimum f_min. Meanwhile, the small box sizes imply a small standard deviation, which is also reflected in the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; therefore, a smaller standard deviation of the results

[Figure 10: Global minimum points of functions 1-6. Panels (a)-(f) show the global minimum points for functions 1-6, respectively. The red pentagon indicates the actual global minimum point; the black dots indicate the global minimum points of metamodels based on SOSM; the blue squares indicate those based on LHD; and the magenta diamonds indicate those based on RND.]

[Figure 11: Function 1, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

implies a more robust algorithm. It is clear from Figure 9 that SOSM is a more robust sampling method than the other two techniques under the parameter settings employed in this paper. The poor success rates of LHD and RND can be seen by comparing the actual global minima in Table 4 with the distributions of the metamodel global minima in Figure 9. In other words, SOSM performs very well in finding the global minimum of the metamodels.

In order to indicate the effectiveness of SOSM in seeking global minimum points, the positions of the global minimum points of the metamodels for functions 1-6 are shown in Figures 10(a)-10(f), respectively. The red pentagon indicates the actual global minimum point; the black dots indicate the global minimum points of the SOSM-based metamodels; the blue squares those of the LHD-based metamodels; and the magenta diamonds those of the RND-based metamodels. Figure 10 shows that the black dots are densely clustered around the red pentagon, whereas the blue squares and magenta diamonds are scattered around it, and the difference between the actual global minimum points and the global minimum points of the metamodels based on LHD or RND is mostly quite large. This demonstrates that SOSM is superior to LHD and RND for global optimization. Panels (a)-(d) of Figures 11, 12, 13, 14, 15, and 16 show the surfaces of the actual functions and of the associated metamodels based on the three sampling methods. It can be observed intuitively that the metamodel surfaces obtained with SOSM are smoother than those of LHD and RND and, furthermore, that the global minima of the SOSM-based metamodels are consistent with the actual global minima. The conclusions reached above are thus confirmed by comparing the actual function surfaces with the metamodel surfaces.

4.4.2. Comparison of the Performance between SOSM and Previous Sequential Sampling Methods. In this part, three sequential sampling methods, the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28] (KSSM), and the successive local enumeration sampling method (SLE) [29], are used to construct metamodels in comparison with those constructed by SOSM. As before, the accuracy of the metamodels and the global minima of the functions are collected.

[Figure 12: Function 2, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

The accuracy measures RMSE and NRMSE and the global minima summarized in Table 5 are mean values. Note that a value of zero for both RMSE and NRMSE would indicate a perfect fit.

From Table 5, the RMSE and NRMSE of the metamodels using SOSM are smaller than those of the sequential sampling methods CV and KSSM for functions 1-5; for function 6, the values are close. SLE performs best in terms of accuracy. That is, metamodels based on SOSM tend to provide a better fit to the actual functions than CV and KSSM, whereas SLE, despite its accuracy, performs unsatisfactorily in exploring the global minimum. As seen in Table 5, in exploring the global minimum f̂_min of the metamodels, SOSM is superior to SLE for all numerical examples when compared against the actual global minimum f_min. The sequential sampling methods CV and KSSM perform as well as SOSM in exploring the global minimum; in that respect, KSSM is the best overall, SOSM takes second place, and CV is the weakest. It can be concluded that the sequential sampling method SOSM proposed in this paper is the best choice when both metamodel accuracy and global-minimum exploration are considered.

In order to represent the advantages and disadvantages of the different sequential sampling methods adequately, the twenty sets of metamodel accuracy results (RMSE and NRMSE) and global minima f̂_min obtained with the sequential sampling methods SOSM, CV, KSSM, and SLE are illustrated in Figures 17-19 with the help of boxplots.

From the results shown in Figures 17 and 18, the median values of RMSE and NRMSE based on SOSM are smaller than those of CV and KSSM for functions 1-5; for function 6, the median values are slightly larger but very close. Except for function 5, the accuracy of the metamodels constructed by SLE is the best. In addition, the box size of the RMSE and NRMSE results based on SOSM is shorter than that of CV and KSSM for functions 1-4, while the box size based on SLE is the shortest for all functions except functions 1 and 3. A small box indicates a small standard deviation of the results; a low standard deviation suggests small uncertainty in the predictions, and a high one suggests large uncertainty. Points outside the whiskers correspond to runs with poor results. Above all, whether mean values or median values and box sizes are taken into account, the accuracy of the metamodels obtained with SOSM

[Figure 13: Function 3, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Table 5: Metamodel accuracy results for functions 1-6, SOSM versus previous sequential sampling methods (averages of RMSE, NRMSE (%), and predicted global minimum f̂_min; the last column is the actual global minimum f_min).

No. | SOSM (RMSE / NRMSE / f̂_min) | CV [25] (RMSE / NRMSE / f̂_min) | KSSM [28] (RMSE / NRMSE / f̂_min) | SLE [29] (RMSE / NRMSE / f̂_min) | f_min
1 | 0.663 / 28.214 / -10.002 | 0.699 / 29.707 / -9.995 | 0.709 / 30.142 / -10.017 | 0.493 / 20.951 / -9.539 | -10
2 | 0.061 / 9.798 / -1.715 | 0.089 / 14.312 / -1.715 | 0.120 / 19.343 / -1.716 | 0.065 / 10.504 / -1.622 | -1.715
3 | 1.674 / 6.464 / -0.002 | 2.261 / 8.732 / 0.000 | 2.305 / 8.902 / -0.001 | 1.459 / 5.636 / 0.105 | 0
4 | 0.728 / 38.118 / -6.556 | 0.813 / 42.601 / -6.514 | 0.847 / 44.359 / -6.552 | 0.533 / 27.899 / -5.904 | -6.551
5 | 2.455 / 10.531 / -34.243 | 2.508 / 10.759 / -33.794 | 2.466 / 10.575 / -34.259 | 2.789 / 11.961 / -12.212 | -34.63
6 | 0.995 / 10.011 / 0.116 | 0.943 / 9.486 / 0.390 | 0.950 / 9.558 / 0.052 | 0.703 / 7.080 / 1.274 | 0

is superior to the sampling methods CV and KSSM. Certainly, SLE performs best in terms of metamodel accuracy; however, SLE is not good at exploring the global minimum of the functions.

Figure 19 depicts the boxplots of the global minima f̂_min obtained over the twenty numerical experiments. It is evident from Figure 19 that the median values for SOSM, CV, and KSSM are approximately equal to the actual global minimum f_min. Meanwhile, the small box sizes imply a small standard deviation, which is also reflected in the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; a smaller standard deviation of the results therefore indicates a more robust algorithm. It is clear from Figure 19 that the sequential sampling methods, except SLE, are robust sampling techniques under the parameter settings employed in this paper. The poor success rate of SLE can be seen by comparing the actual global minima in Table 5 with the distribution of the metamodel global minima in Figure 19. It is evident that SOSM performs very well both in constructing accurate metamodels and in finding the global minima of the functions.

[Figure 14: Function 4, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Six sampling methods have been used to construct metamodels for six typical functions in this paper. The results demonstrate that sequential sampling methods perform better and more efficiently than one-stage sampling methods; furthermore, a sequential sampling technique allows engineers to control the sampling process. In general, one-stage sampling techniques are not as good as sequential sampling methods at fitting the region where the global minimum is located. In a word, SOSM, the sequential sampling method proposed in this paper, is the best choice when both the accuracy of the metamodels and the location of the global minimum are considered: it is reliable over the whole fitting space and also good at fitting the region where the global minimum is located.

4.4.3. Engineering Problem. The validity of the sequential sampling method SOSM proposed in this paper is further tested on a typical mechanical design optimization problem involving four design variables, namely, pressure vessel design. This problem has been studied by many researchers [40-42]. The schematic of the pressure vessel is shown in Figure 20. A cylindrical pressure vessel with two hemispherical heads is

designed for minimum fabrication cost. Four variables are identified: the thickness of the pressure vessel T_s, the thickness of the head T_h, the inner radius of the pressure vessel R, and the length of the vessel without the heads L. The variable vector is given (in inches) by

X = (T_s, T_h, R, L) = (x_1, x_2, x_3, x_4).   (15)

The objective function is the combined cost of materials, forming, and welding of the pressure vessel. The mathematical model of the optimization problem is expressed as

min f(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3

s.t. g_1(X) = -x_1 + 0.0193 x_3 ≤ 0,
     g_2(X) = -x_2 + 0.00954 x_3 ≤ 0,
     g_3(X) = -π x_3^2 x_4 - (4/3) π x_3^3 + 1,296,000 ≤ 0,
     g_4(X) = x_4 - 240 ≤ 0.   (16)
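A direct coding of (16) is sketched below. The volume constant 1,296,000 in³ is the value commonly published for this benchmark and is assumed here because the extracted text is ambiguous at that digit; the evaluation of the SOSM solution from Table 6 is included only as a consistency check of the reconstructed numbers.

```python
import numpy as np

def pressure_vessel_objective(x):
    """Fabrication cost of the vessel, cf. (16); x = (Ts, Th, R, L) in inches."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    """Constraint values; g_i(x) <= 0 means feasible, cf. (16)."""
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,                                              # g1: shell thickness
        -x2 + 0.00954 * x3,                                             # g2: head thickness
        -np.pi * x3**2 * x4 - (4.0/3.0) * np.pi * x3**3 + 1_296_000.0,  # g3: volume (assumed constant)
        x4 - 240.0,                                                     # g4: length limit
    ])

x_sosm = np.array([1.000, 0.625, 41.523, 120.562])  # Table 6, "this paper" column (as reconstructed)
print(pressure_vessel_objective(x_sosm))            # ~6237.4, consistent with Table 6
```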

[Figure 15: Function 5, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Table 6: Comparison of optimal results for the design of a pressure vessel.

Design variable | Cao and Wu [42] | Kannan and Kramer [43] | Deb [44] | This paper (SOSM)
x_1 | 1.000 | 1.125 | 0.9375 | 1.000
x_2 | 0.625 | 0.625 | 0.5000 | 0.625
x_3 | 51.1958 | 58.291 | 48.3290 | 41.523
x_4 | 60.7821 | 43.690 | 112.6790 | 120.562
f(X) | 7108.616 | 7198.042 | 6410.381 | 6237.402

The ranges of the design variables, x_1 ∈ [0.0625, 6.25], x_2 ∈ [0.0625, 6.25], x_3 ∈ [6.25, 125], and x_4 ∈ [6.25, 125], are used following the literature [40]. The problem formulated above is a simple nonlinear constrained problem. Assuming now that the objective and constraint functions defined by (16) are computation-intensive, metamodels of these functions are constructed by RBF using the sequential sampling method SOSM.

This problem has been solved by many researchers: Cao and Wu [42] applied an evolutionary programming model, Kannan and Kramer [43] used an augmented Lagrangian multiplier approach, and Deb [44] used a genetic adaptive search.

The average optimal results over 50 runs are listed in Table 6 and compared with the three results reported in the literature [42-44]. It can be seen from the table that the optimal solution obtained in this paper is about 2.7% better than the best solution previously reported in the literature [44].

5. Conclusions

In this paper, the sequential optimization sampling method (SOSM) for metamodels has been proposed. The recently developed extended radial basis functions (E-RBF) are introduced as the approximation method used to construct the metamodels. Combining the new sampling strategy SOSM with

[Figure 16: Function 6, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

[Figure 17: Assessment of metamodel RMSE between SOSM and previous sequential sampling methods (boxplots of RMSE for functions 1-6 under SOSM, CV, KSSM, and SLE).]

the extended radial basis functions, the design algorithm for building metamodels has been presented. In SOSM, the optimal sampling points of the response surface are taken as new sampling points in order to improve local accuracy.

[Figure 18: Assessment of metamodel NRMSE between SOSM and previous sequential sampling methods (boxplots for functions 1-6).]

In addition, new sampling points in the sparse regions are required for better global approximation. To determine the sparse regions, the density function constructed with the radial basis function network has been applied. In the proposed

[Figure 19: Assessment of metamodel global minimum between SOSM and previous sequential sampling methods (boxplots for functions 1-6).]

[Figure 20: Diagram of the pressure vessel design (dimensions T_s, T_h, R, and L).]

algorithm, the response surface is constructed repeatedly until the terminal criterion, that is, the maximum number of sampling points m_max, is satisfied.

To examine the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for the accuracy of the metamodels, namely RMSE and NRMSE, are employed, and the global minimum is introduced to judge the performance of the sampling methods. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CV, KSSM, and SLE), SOSM results in more accurate metamodels; furthermore, SOSM is superior in exploring the global minimum of the metamodels compared with the other five sampling methods.

The new sequential optimization sampling method SOSM provides another effective way of generating sampling points to construct metamodels. It is superior in seeking the global minimum compared with the previous sampling methods and, at the same time, improves the accuracy of the metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from the National Science Foundation (CMMI-51375389 and 51279165) is gratefully acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences - Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.
[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275-286, 1999.
[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1-3, pp. 50-79, 2009.
[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61-70, 2011.
[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1-6, IEEE, January 2013.
[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198-207, 2013.
[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61-73, 2013.
[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194-206, 2013.
[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406-429, 2014.
[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210-226, 2013.
[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64-73, 2013.
[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914-924, 2014.
[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521-1529, 2013.
[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881-890, 2013.
[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39-46, 2012.
[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807-1815, 2013.
[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306-316, 2011.
[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764-1773, 2011.
[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409-423, 1989.
[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1-13, 2001.
[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819-830, 2013.
[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254-260, 2013.

[22] Y He and B Tang ldquoStrong orthogonal arrays and associatedLatin hypercubes for computer experimentsrdquo Biometrika vol100 no 1 pp 254ndash260 2013

[23] L Ke H B Qiu Z Z Chen and L Chi ldquoEngineering designbased on Hammersley sequences sampling method and SVRrdquoAdvanced Materials Research vol 544 pp 206ndash211 2012

[24] Y Tang and H Xu ldquoAn effective construction method formulti-level uniform designsrdquo Journal of Statistical Planning andInference vol 143 no 9 pp 1583ndash1589 2013

[25] R Jin W Chen and A Sudjianto ldquoOn sequential samplingfor global metamodeling in engineering designrdquo in Proceedingsof the ASME 2002 International Design Engineering TechnicalConferences and Computers and Information in EngineeringConference pp 539ndash548 American Society of MechanicalEngineers 2002

[26] Y M Deng Y Zhang and Y C Lam ldquoA hybrid of mode-pursuing sampling method and genetic algorithm for mini-mization of injection molding warpagerdquo Materials amp Designvol 31 no 4 pp 2118ndash2123 2010

[27] X Wei Y Wu and L Chen ldquoA new sequential optimalsampling method for radial basis functionsrdquo Journal of AppliedMathematics and Computation vol 218 no 19 pp 9635ndash96462012

[28] S KitayamaMArakawa andKYamazaki ldquoSequential approx-imate optimization using radial basis function network forengineering optimizationrdquo Optimization and Engineering vol12 no 4 pp 535ndash557 2011

[29] H Zhu L Liu T Long and L Peng ldquoA novel algorithmof maximin Latin hypercube design using successive localenumerationrdquo Engineering Optimization vol 44 no 5 pp 551ndash564 2012

[30] S Kitayama J Srirat andMArakawa ldquoSequential approximatemulti-objective optimization using radial basis function net-workrdquo Structural and Multidisciplinary Optimization vol 48no 3 pp 501ndash515 2013

[31] R L Hardy ldquoMultiquadric equations of topography and otherirregular surfacesrdquo Journal of Geophysical Research vol 76 no8 pp 1905ndash1915 1971

[32] A A Mullur and A Messac ldquoExtended radial basis functionsmore flexible and effective metamodelingrdquo AIAA Journal vol43 no 6 pp 1306ndash1315 2005

[33] T Krishnamurthy ldquoResponse surface approximation with aug-mented and compactly supported radial basis functionsrdquo inProceedings of the 44thAIAAASMEASCEAHSASC StructuresStructural Dynamics and Materials Conference vol 1748 pp3210ndash3224 April 2003 AIAA Paper

[34] Z Wu ldquoCompactly supported positive definite radial func-tionsrdquo Journal of Advances in Computational Mathematics vol4 no 1 pp 283ndash292 1995

[35] C AMicchelli ldquoInterpolation of scattered data distancematri-ces and conditionally positive definite functionsrdquo ConstructiveApproximation vol 2 no 1 pp 11ndash22 1984

[36] A A Mullur and A Messac ldquoMetamodeling using extendedradial basis functions a comparative approachrdquo Engineeringwith Computers vol 21 no 3 pp 203ndash217 2006

[37] G G Wang and S Shan ldquoReview of metamodeling techniquesin support of engineering design optimizationrdquo Journal ofMechanical Design vol 129 no 4 pp 370ndash380 2007

[38] D BMcDonaldW J GranthamW L Tabor andM JMurphyldquoGlobal and local optimization using radial basis functionresponse surface modelsrdquo Applied Mathematical Modelling vol31 no 10 pp 2095ndash2110 2007

[39] P Borges T Eid and E Bergseng ldquoApplying simulated anneal-ing using different methods for the neighborhood search inforest planning problemsrdquo European Journal of OperationalResearch vol 233 no 3 pp 700ndash710 2014

[40] C A C Coello ldquoUse of a self-adaptive penalty approach forengineering optimization problemsrdquo Computers in Industryvol 41 no 2 pp 113ndash127 2000

[41] L D S Coelho ldquoGaussian quantum-behaved particle swarmoptimization approaches for constrained engineering designproblemsrdquo Expert Systems with Applications vol 37 no 2 pp1676ndash1683 2010

[42] Y J Cao and Q H Wu ldquoMechanical design optimization bymixed-variable evolutionary programmingrdquo in Proceedings ofthe IEEE International Conference on Evolutionary Computation(ICEC rsquo97) pp 443ndash446 Indianapolis Ind USA April 1997

[43] B K Kannan and S N Kramer ldquoAn augmented Lagrangemultiplier based method for mixed integer discrete continuousoptimization and its applications tomechanical designrdquo Journalof Mechanical Design Transactions of the ASME vol 116 pp318ndash320 1994

[44] K Deb ldquoGeneAS a robust optimal design technique formechanical component designrdquo in Evolutionary Algorithms inEngineering Applications D Dasrupta andZMichalewicz Edspp 497ndash514 Springer Berlin Germany 1997

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 5: Research Article A Sequential Optimization Sampling Method for …downloads.hindawi.com/journals/tswj/2014/192862.pdf · 2019-07-31 · stage. It is di cult for the one-stage sampling

The Scientific World Journal 5

MaximumMinimum

minus25120587 minus125120587 125120587 251205870

0

1

2

minus1

Figure 4 Extrema points in 2D

minus3 minus2 minus1 0 1 2 3minus3minus2minus10123

minus6

minus4

minus2

0

2

4

6

8

MaximumMinimum

Figure 5 Extrema points in 3D

(2) The weight vector 120582

119863of the density function 119863(119909) is

calculated as follows

120582

119863= (Φ

119879Φ + Δ)

minus1

Φ

119879119891

119863

(10)

where

Φ =

[

[

[

[

[

120601

1(119909

1) 120601

2(119909

1) sdot sdot sdot 120601

119873(119909

1)

120601

1(119909

2) 120601

2(119909

2) sdot sdot sdot 120601

119873(119909

2)

d

120601

1(119909

119873) 120601

2(119909

119873) sdot sdot sdot 120601

119873(119909

119873)

]

]

]

]

]

Δ = 10

minus3times

[

[

[

[

[

1 0 sdot sdot sdot 0

0 1 sdot sdot sdot 0

d

0 0 sdot sdot sdot 1

]

]

]

]

]119873lowast119873

(11)

(3) The addition of sampling point119909119863 in the sparse regionis explored as the globalminimumof density functionwith the RBF

119863(119909

119863) =

119873

sum

119895=1

120582

119863

119895120601

119895(119909

119863) 997888rarr min (12)

33 Summary of the Proposed Algorithm The detailed algo-rithm for sequential optimization samplingmethodwith RBFnetwork is described below Figure 6 shows the proposedalgorithm The proposed algorithm is roughly divided intotwo phases The first phase is used to construct the responsesurface and add the extrema points of response surface asnew sampling points The second phase is used to constructthe density function and add the minimum of the densityfunction as a new sampling point These two phases aredescribed particularly as follows

6 The Scientific World Journal

Construction of the response surface with RBF

Find the extrema points of response surface

Add the extrema points to the sample

Construction of the density function with RBF

Find the minimum point of density function

Add the minimum point to the sample

Yes

Yes

No

No

Second phase

First phase

count = 1

count = count + 1 count le d

m lt mmax

End

Update

the

sampling

points

m

Initial sampling points m

Figure 6 Proposed algorithm

First phase119898 initial sampling points are generated usingthe Latin hypercube sampling design All functions areevaluated at the sampling points and the response surface

119891(119909) is constructed The extrema points of response surfacecan then be found and directly taken as the new samplingpoints The more accurate metamodel would be constructedby repeating the procedure of adding extrema points to thesample adaptively and sequentially

Second phase the density function is constructed to findthe sparse regionTheminimumpoint of the density functionis taken as a new sampling point This step is repeated whilea terminal criterion (count le 119889) is satisfied Parameter119889 controls the number of sampling points obtained by thedensity function Kitayama et al [28] advised 119889 = int(1198992) Inthis paper parameter 119889 = int(1198992)+1 where int( ) representsthe rounding-off The new sampling point would be gainedif the parameter count is less than 119889 and it is increased ascount = count + 1

The terminal criterion of the integrated algorithm isdetermined by the maximum number of sampling points119898max If the number of sampling points is less than 119898maxthe algorithm proceeds Otherwise the algorithm is termi-nated In the algorithm the response surface is constructedrepeatedly through the addition of the new sampling pointsnamely extrema points and minimum point of densityfunction Afterwards the more accurate metamodel would

be constructed by repeating the procedure of adding pointsto the sample adaptively and sequentially

4 Numerical Examples

In this section we would test the validity of the proposedsampling method through some well-known numericalexamples and one engineering optimization problem Allthese functions are approximated with the E-RBF modelThe response surfaces are constructed through one-stagesampling methods (ie LHD and RND) and sequentialsampling methods (ie CV [25] Kitayama et al [28] SLE[29] and SOSM proposed in this paper) with the samesampling size In order to visualize the comparison betweenapproximated model and actual model two design variablesof numerical examples listed in Table 3 are tested

4.1. Sampling Strategies. Two types of sampling methods are used in this paper: one-stage sampling methods and sequential sampling methods. One-stage sampling methods include Latin hypercube sampling design (LHD) and random sampling design (RND). Sequential sampling methods include the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28], termed KSSM in this paper, the successive local enumeration sampling method (SLE) [29], and the sequential optimization sampling method (SOSM) proposed in this paper.


Table 3: Numerical functions and global minima.

| Number | Function | Design domain | Global minimum |
| 1 | $f(x) = -10\,\mathrm{sinc}(x_1)\cdot\mathrm{sinc}(x_2)$ | $-2 \le x \le 2$ | $x = (0, 0)$, $f_{\min} = -10$ |
| 2 | $f(x) = 4x_1 e^{-x_1^2 - x_2^2}$ | $-2 \le x \le 2$ | $x = (-\sqrt{2}/2, 0)$, $f_{\min} = -1.72$ |
| 3 | $f(x) = x_1^2 + x_2^2 - 10[\cos(2\pi x_1) + \cos(2\pi x_2)] + 20$ | $-0.8 \le x \le 0.8$ | $x = (0, 0)$, $f_{\min} = 0$ |
| 4 | $f(x) = 3(1 - x_1)^2 e^{-x_1^2 - (x_2+1)^2} - 10\,(x_1/5 - x_1^3 - x_2^5)\, e^{-x_1^2 - x_2^2} - \tfrac{1}{3} e^{-(x_1+1)^2 - x_2^2}$ | $-3 \le x \le 3$ | $x = (0.228, -1.63)$, $f_{\min} = -6.55$ |
| 5 | $f(x) = -\dfrac{60}{1 + (x_1+1)^2 + (x_2-3)^2} - \dfrac{20}{1 + (x_1-1)^2 + (x_2-3)^2} - \dfrac{30}{1 + x_1^2 + (x_2+4)^2} + 30$ | $-6 \le x \le 6$ | $x = (-0.97, 3)$, $f_{\min} = -34.63$ |
| 6 | $f(x) = -10\,\dfrac{\sin\sqrt{x_1^2 + x_2^2 + \varepsilon}}{\sqrt{x_1^2 + x_2^2 + \varepsilon}} + 10$, $\varepsilon = 10^{-15}$ | $-3\pi \le x \le 3\pi$ | $x = (0, 0)$, $f_{\min} = 0$ |

Every metamodel is constructed 20 times with the same sampling method in this paper. For functions 1 and 2, we set $m_{\max} = 25$. For functions 4 and 5, we set $m_{\max} = 36$. For functions 3 and 6, we generate 28 and 40 sampling points, respectively.

4.2. Selection of Parameters. For the E-RBF approach, we set $c = 1$, a prescribed parameter of the multiquadric basis functions, for all of the examples. The parameter $\gamma$ is set equal to approximately 1/3 of the design domain size; Mullur and Messac [32] found that the results were not unduly sensitive to $\gamma$. Another parameter, $\eta$, is set equal to 2 for all numerical examples.

4.3. Metamodel Accuracy Measures. Generally speaking, an E-RBF response surface passes through all the sampling points exactly. Therefore, it is impossible to estimate the accuracy of an E-RBF model with the sampling points themselves. To measure the accuracy of the resulting metamodels, we can use additional testing points to evaluate the accuracy of the model via the standard error measure root-mean-squared error (RMSE). The smaller the value of RMSE is, the more accurate the response surface will be. The error measure is defined as

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{k=1}^{K}\left[f(\mathbf{x}_k) - \hat{f}(\mathbf{x}_k)\right]^2}{K}}, \qquad (13)$$

where $K$ is the number of additional testing points generated by the grid sampling method ($32 \times 32$ for all the examples), and $f(\mathbf{x}_k)$ and $\hat{f}(\mathbf{x}_k)$ are the true function value and the predicted metamodel value at the $k$th testing point $\mathbf{x}_k$, respectively.

In addition to the preceding RMSE, we also calculate the normalized root-mean-squared error (NRMSE) as follows:

$$\mathrm{NRMSE} = \sqrt{\frac{\sum_{k=1}^{K}\left[f(\mathbf{x}_k) - \hat{f}(\mathbf{x}_k)\right]^2}{\sum_{k=1}^{K}\left[f(\mathbf{x}_k)\right]^2}} \times 100. \qquad (14)$$

RMSE measures the absolute error for each function, whereas NRMSE allows the metamodel error values to be compared across different functions.
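For reference, a direct transcription of (13) and (14), evaluated on the 32 x 32 grid of testing points used in this section, looks as follows; the surrogate argument is any callable metamodel, and the helper name is ours.

```python
import numpy as np

def rmse_nrmse(true_f, surrogate, bounds, n_grid=32):
    axes = [np.linspace(lo, hi, n_grid) for lo, hi in bounds]
    grid = np.array(np.meshgrid(*axes)).reshape(len(bounds), -1).T   # K = 32 * 32 points
    err = np.array([true_f(x) - surrogate(x) for x in grid])
    true_vals = np.array([true_f(x) for x in grid])
    rmse = np.sqrt(np.mean(err ** 2))                                 # equation (13)
    nrmse = np.sqrt(np.sum(err ** 2) / np.sum(true_vals ** 2)) * 100  # equation (14)
    return rmse, nrmse
```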

In engineering problems, the global minimum is generally required, so we employ simulated annealing (SA) [39] to calculate the global minimum $\hat{f}_{\min}$ based on the final metamodel. The actual global minima $f_{\min}$ are listed in Table 3.
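Since the fitted metamodel is cheap to evaluate, its global minimum $\hat{f}_{\min}$ can be searched directly; the snippet below uses SciPy's dual_annealing merely as a stand-in for the simulated annealing variant of [39].

```python
from scipy.optimize import dual_annealing

def surrogate_minimum(surrogate, bounds, seed=0):
    # Search the (inexpensive) metamodel for its predicted global minimum.
    result = dual_annealing(surrogate, bounds=bounds, seed=seed)
    return result.x, result.fun   # predicted minimizer and f_min estimate
```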

4.4. Results and Discussions. In this section, we discuss the results obtained after constructing metamodels through the six sampling methods, using the assessment measures described above. As mentioned in Section 4.1, 20 runs are conducted for each sampling method, and therefore there are twenty sets of accuracy results for each sampling method. The advantages and validity of the sequential optimization sampling method (SOSM) are tested in comparison with one-stage sampling methods and sequential sampling methods separately below. In addition, SOSM is used to solve a typical mechanical design optimization problem.

4.4.1. Comparison of the Performance between SOSM and One-Stage Sampling Methods. In this part, two classical one-stage sampling methods, LHD and RND, and SOSM are used to construct metamodels. The accuracy of the metamodels and the global minima of the functions are obtained and collated. The error measures RMSE and NRMSE and the global minima summarized in Table 4 are average values.

From Table 4, the RMSE and NRMSE of the metamodels using sampling method SOSM are smaller than those of the other two one-stage sampling methods for functions 1–5. For function 6, the values are close. RND, as expected, performs poorly. That is, metamodels based on sampling method SOSM may provide a better fit to the actual functions. As seen in Table 4, in exploring the global minimum $\hat{f}_{\min}$ of the metamodels, SOSM is superior to LHD and RND for all numerical examples when compared to the actual global minimum $f_{\min}$. In particular, SOSM can almost always find the global minimum of all numerical examples. However, LHD and RND perform fairly poorly, particularly in the success rate, which will be depicted in Figure 9.

The mean results from Table 4 cannot adequately represent the advantages and disadvantages of the different sampling methods. Thus, statistical graphics, namely boxplots, are used.


Table 4: Metamodel accuracy results for functions 1–6 between SOSM and one-stage sampling methods.

| Number | SOSM RMSE | SOSM NRMSE | SOSM $\hat{f}_{\min}$ | LHD RMSE | LHD NRMSE | LHD $\hat{f}_{\min}$ | RND RMSE | RND NRMSE | RND $\hat{f}_{\min}$ | $f_{\min}$ |
| 1 | 0.663 | 28.214 | −10.002 | 0.722 | 30.698 | −9.137 | 0.817 | 34.766 | −8.728 | −10 |
| 2 | 0.061 | 9.798 | −1.715 | 0.090 | 14.524 | −1.611 | 0.122 | 19.659 | −1.662 | −1.715 |
| 3 | 1.674 | 6.464 | −0.002 | 1.790 | 6.913 | 0.140 | 2.072 | 8.001 | 0.040 | 0 |
| 4 | 0.728 | 38.118 | −6.556 | 0.810 | 42.456 | −5.786 | 0.861 | 45.116 | −5.374 | −6.551 |
| 5 | 2.455 | 10.531 | −34.243 | 3.353 | 14.380 | −12.928 | 3.789 | 16.253 | −9.907 | −34.63 |
| 6 | 0.995 | 10.011 | 0.116 | 0.992 | 9.991 | 2.284 | 1.076 | 10.830 | 2.070 | 0 |

[Figure 7: Assessment of metamodel RMSE for functions 1–6 (boxplots per sampling method: SOSM, LHD, RND).]

Boxplots are used to show the deviations of the accuracy and global minimum of each metamodel. In descriptive statistics, a boxplot is a convenient way of graphically depicting groups of numerical data through their quartiles. The center line of each boxplot shows the 50th percentile (median) value, and the box encompasses the 25th and 75th percentiles of the data. The leader lines (horizontal lines) are plotted at a distance of 1.5 times the interquartile range in each direction, or at the limit of the data (if the limit of the data falls within 1.5 times the interquartile range). The data points outside the horizontal lines are shown by placing a sign ("◻") for each point. The twenty metamodel accuracy results (RMSE and NRMSE) and global minima $\hat{f}_{\min}$ obtained with the sampling methods SOSM, LHD, and RND are illustrated in Figures 7–9 with the help of boxplots.

From the results shown in Figures 7 and 8, it is found that the median values of RMSE and NRMSE for SOSM are smaller than those of LHD and RND for functions 1–5. For function 6, the median values of RMSE and NRMSE are slightly larger but very close. In addition, the box size of RMSE and NRMSE based on SOSM is the shortest for all functions except functions 4 and 5, and there the difference in sizes is small. A small box suggests a small standard deviation of the results; a high standard deviation of the results suggests large uncertainties in the predictions, and a low standard deviation suggests low uncertainty. The points outside the horizontal lines indicate poor individual experimental results.

[Figure 8: Assessment of metamodel NRMSE for functions 1–6 (boxplots per sampling method: SOSM, LHD, RND).]

[Figure 9: Assessment of metamodel global minimum for functions 1–6 (boxplots per sampling method: SOSM, LHD, RND).]

Above all, whether mean values, median values, or box sizes of the results are taken into account, the accuracy of the metamodels with sampling method SOSM is better.

Figure 9 depicts the boxplots of the global minimum $\hat{f}_{\min}$ obtained in the twenty numerical experiments. It is obvious from Figure 9 that the median value for SOSM is approximately equal to the actual global minimum $f_{\min}$. Meanwhile, the small sizes of the boxes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; therefore, a smaller standard deviation of the results implies better robustness.


[Figure 10: Global minimum points of functions 1–6. Panels (a)–(f) show the global minimum points for functions 1–6 separately. The red pentagon indicates the actual global minimum point. The black dots indicate the global minimum points of metamodels based on sampling method SOSM. The blue squares indicate the global minimum points of metamodels based on sampling method LHD. The magenta diamonds indicate the global minimum points of metamodels based on sampling method RND.]


[Figure 11: Function 1, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

It is clear from Figure 9 that SOSM is a more robust sampling method than the other two sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima from Table 4 with the distribution of the global minima of the metamodels based on sampling methods LHD and RND in Figure 9, their success rate is poor. In other words, SOSM performs very well in finding the global minimum of the metamodels.

In order to indicate the effectiveness of SOSM in seeking global minimum points, the positions of the global minimum points of the metamodels for functions 1–6 are shown in Figures 10(a)–10(f) separately. The red pentagon indicates the actual global minimum point, the black dots indicate the global minimum points of metamodels based on sampling method SOSM, the blue squares those based on sampling method LHD, and the magenta diamonds those based on sampling method RND. Figure 10 shows that the black dots are distributed densely around the red pentagon. Meanwhile, the blue squares and magenta diamonds are scattered around the red pentagon, and the difference between the actual global minimum points and the global minimum points of the metamodels based on LHD or RND is mostly quite large. This demonstrates that SOSM is superior to LHD and RND for global optimization. Panels (a)–(d) in Figures 11, 12, 13, 14, 15, and 16 show the graphs of the actual functions and of the associated metamodels based on the three sampling methods separately. It can be observed intuitively that the metamodel surfaces obtained with the SOSM method are smoother than those of LHD and RND, and furthermore, the global minima of the metamodels based on SOSM are consistent with the actual global minima. The conclusions reached above are further confirmed by comparing the actual function surfaces to the metamodel surfaces.

4.4.2. Comparison of the Performance between SOSM and Previous Sequential Sampling Methods. In this part, three different sequential sampling methods, including the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28], termed KSSM, and the successive local enumeration sampling method (SLE) [29], are used to construct metamodels in comparison with those constructed by SOSM. Similarly, the accuracy of the metamodels and the global minima of the functions are obtained and collated.


[Figure 12: Function 2, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

The accuracy measures RMSE and NRMSE and the global minima summarized in Table 5 are mean values. Note that a value of zero for both accuracy measures RMSE and NRMSE would indicate a perfect fit.

From Table 5, the RMSE and NRMSE of the metamodels using sampling method SOSM are smaller than those of the sequential sampling methods CV and KSSM for functions 1–5. For function 6, the values are close. SLE performs best in this respect. That is, metamodels based on sampling method SOSM may provide a better fit to the actual functions than CV and KSSM. However, the sampling method SLE performs unsatisfactorily in terms of exploring the global minimum. As seen in Table 5, in exploring the global minimum $\hat{f}_{\min}$ of the metamodels, SOSM is superior to the previous sequential sampling method SLE for all numerical examples when compared to the actual global minimum $f_{\min}$. In addition, the sequential sampling methods CV and KSSM perform as well as SOSM in exploring the global minimum; in this respect, KSSM is the best, SOSM takes second place, and CV is the last. It can be concluded that the sequential sampling method SOSM proposed in this paper is the best choice considering both the accuracy of the metamodels and the exploration of the global minimum.

In order to represent the advantages and disadvantages of the different sequential sampling methods adequately, the twenty metamodel accuracy results (RMSE and NRMSE) and global minima $\hat{f}_{\min}$ obtained with the sequential sampling methods, that is, SOSM, CV, KSSM, and SLE, are illustrated in Figures 17–19 with the help of boxplots.

From the results shown in Figures 17 and 18, it is found that the median values of RMSE and NRMSE based on SOSM are smaller than those of CV and KSSM for functions 1–5. For function 6, the median values of RMSE and NRMSE are slightly larger but very close. Except for function 5, the accuracy of the metamodels constructed by SLE is the best. In addition, the box size of RMSE and NRMSE based on SOSM is shorter than those of CV and KSSM for functions 1–4, and the box size based on SLE is the shortest for all functions except functions 1 and 3. A small box suggests a small standard deviation of the results; a low standard deviation of the results suggests small uncertainties in the predictions, and a large standard deviation suggests high uncertainty. The points outside the horizontal lines indicate poor individual experimental results.


[Figure 13: Function 3, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Table 5: Metamodel accuracy results for functions 1–6 between SOSM and previous sequential sampling methods.

| Number | SOSM RMSE | SOSM NRMSE | SOSM $\hat{f}_{\min}$ | CV [25] RMSE | CV NRMSE | CV $\hat{f}_{\min}$ | KSSM [28] RMSE | KSSM NRMSE | KSSM $\hat{f}_{\min}$ | SLE [29] RMSE | SLE NRMSE | SLE $\hat{f}_{\min}$ | $f_{\min}$ |
| 1 | 0.663 | 28.214 | −10.002 | 0.699 | 29.707 | −9.995 | 0.709 | 30.142 | −10.017 | 0.493 | 20.951 | −9.539 | −10 |
| 2 | 0.061 | 9.798 | −1.715 | 0.089 | 14.312 | −1.715 | 0.120 | 19.343 | −1.716 | 0.065 | 10.504 | −1.622 | −1.715 |
| 3 | 1.674 | 6.464 | −0.002 | 2.261 | 8.732 | 0.000 | 2.305 | 8.902 | −0.001 | 1.459 | 5.636 | 0.105 | 0 |
| 4 | 0.728 | 38.118 | −6.556 | 0.813 | 42.601 | −6.514 | 0.847 | 44.359 | −6.552 | 0.533 | 27.899 | −5.904 | −6.551 |
| 5 | 2.455 | 10.531 | −34.243 | 2.508 | 10.759 | −33.794 | 2.466 | 10.575 | −34.259 | 2.789 | 11.961 | −12.212 | −34.63 |
| 6 | 0.995 | 10.011 | 0.116 | 0.943 | 9.486 | 0.390 | 0.950 | 9.558 | 0.052 | 0.703 | 7.080 | 1.274 | 0 |

Above all, whether mean values, median values, or box sizes of the results are taken into account, the accuracy of the metamodels with sampling method SOSM is superior to that of sampling methods CV and KSSM. Certainly, the sampling method SLE performs best in terms of metamodel accuracy; however, SLE is not good at exploring the global minimum of the functions.

Figure 19 depicts the boxplots of the global minimum $\hat{f}_{\min}$ obtained in the twenty numerical experiments. It is obvious from Figure 19 that the median values of SOSM, CV, and KSSM are approximately equal to the actual global minimum $f_{\min}$. Meanwhile, the small sizes of the boxes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; a smaller standard deviation of the results implies better robustness. It is clear from Figure 19 that the sequential sampling methods except SLE are robust sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima from Table 5 with the distribution of the global minima of the metamodels based on sampling method SLE in Figure 19, its success rate is poor. It is obvious that SOSM performs very well both in constructing accurate metamodels and in finding the global minimum of the functions.


[Figure 14: Function 4, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Six sampling methods have been used to construct metamodels for six typical functions in this paper. The results demonstrate that sequential sampling methods perform better and more efficiently than one-stage sampling methods. Furthermore, the sequential sampling technique allows engineers to control the sampling process. In general, a one-stage sampling technique is not as good as sequential sampling methods at fitting the area where the global minimum is located. In a word, SOSM, the sequential sampling method proposed in this paper, is the best choice considering the accuracy of the metamodels and the location of the global minimum. In other words, SOSM is reliable in the whole fitting space and is also good at fitting the area where the global minimum is located.

4.4.3. Engineering Problem. The validity of the sequential sampling method SOSM proposed in this paper is tested on a typical mechanical design optimization problem involving four design variables, namely pressure vessel design. This problem has been studied by many researchers [40–42]. The schematic of the pressure vessel is shown in Figure 20. In this case, a cylindrical pressure vessel with two hemispherical heads is designed for minimum fabrication cost. Four variables are identified: thickness of the pressure vessel $T_s$, thickness of the head $T_h$, inner radius of the pressure vessel $R$, and length of the vessel without heads $L$. In this case, the variable vector is given (in inches) by

$$X = (T_s, T_h, R, L) = (x_1, x_2, x_3, x_4). \qquad (15)$$

The objective function is the combined cost of the materials, forming, and welding of the pressure vessel. The mathematical model of the optimization problem is expressed as

$$\begin{aligned}
\min\; f(X) &= 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3 \\
\text{s.t.}\; g_1(X) &= -x_1 + 0.0193\,x_3 \le 0, \\
g_2(X) &= -x_2 + 0.00954\,x_3 \le 0, \\
g_3(X) &= -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0, \\
g_4(X) &= x_4 - 240 \le 0.
\end{aligned} \qquad (16)$$
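The objective and constraint functions of (16) are algebraically simple; the sketch below states them as plain Python functions (with the convention that a design is feasible when every $g_i(X) \le 0$), which is how they would be queried while building the RBF metamodels. The function names are ours.

```python
import math

def pressure_vessel_cost(x):
    # Combined material, forming, and welding cost from equation (16).
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pressure_vessel_constraints(x):
    # Each g_i(x) must be <= 0 for a feasible design.
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,
        x4 - 240.0,
    ]
```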


[Figure 15: Function 5, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Table 6: Comparison of optimal results for the design of a pressure vessel.

| Design variables | Cao and Wu [42] | Kannan and Kramer [43] | Deb [44] | This paper using SOSM |
| $x_1$ | 1.000 | 1.125 | 0.9375 | 1.000 |
| $x_2$ | 0.625 | 0.625 | 0.5000 | 0.625 |
| $x_3$ | 51.1958 | 58.291 | 48.3290 | 41.523 |
| $x_4$ | 60.7821 | 43.690 | 112.6790 | 120.562 |
| $f(X)$ | 7108.616 | 7198.042 | 6410.381 | 6237.402 |

The ranges of the design variables, $x_1 \in [0.0625, 6.25]$, $x_2 \in [0.0625, 6.25]$, $x_3 \in [6.25, 125]$, and $x_4 \in [6.25, 125]$, are used referring to the literature [40]. The problem formulated above is a simple nonlinear constrained problem. Now, assuming that the objective and constraint functions defined by (16) are computation-intensive, metamodels of the functions are constructed by RBF using the sequential sampling method SOSM.

This problem has been solved by many researchers: Cao and Wu [42] applied an evolutionary programming model, Kannan and Kramer [43] solved the problem using an augmented Lagrangian multiplier approach, and Deb [44] used a genetic adaptive search.

The average values of the optimal results from 50 runs are listed in Table 6, compared with the three results reported in the literature [42–44]. It can be seen from the table that the optimal solution in this paper is still about 2.7% better than the best solution previously reported in the literature [44].

5. Conclusions

In this paper, the sequential optimization sampling method (SOSM) for metamodels has been proposed. The recently developed extended radial basis functions (E-RBF) are introduced as an approximation method to construct the metamodels.


[Figure 16: Function 6, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

[Figure 17: Assessment of metamodel RMSE for functions 1–6 (boxplots per sampling method: SOSM, CV, KSSM, SLE).]

Combining the new sampling strategy SOSM with the extended radial basis functions, the design algorithm for building metamodels is presented. In the SOSM, the optimal sampling points of the response surface are taken as new sampling points in order to improve the local accuracy.

[Figure 18: Assessment of metamodel NRMSE for functions 1–6 (boxplots per sampling method: SOSM, CV, KSSM, SLE).]

In addition, new sampling points in the sparse region are required for better global approximation. To determine the sparse region, the density function constructed by the radial basis function network has been applied.


[Figure 19: Assessment of metamodel global minimum for functions 1–6 (boxplots per sampling method: SOSM, CV, KSSM, SLE).]

[Figure 20: Diagram of the pressure vessel design (vessel thickness $T_s$, head thickness $T_h$, inner radius $R$, length $L$).]

In the proposed algorithm, the response surface is constructed repeatedly until the terminal criterion, that is, the maximum number of sampling points $m_{\max}$, is satisfied.

For the sake of examining the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for the accuracy of the metamodels, that is, RMSE and NRMSE, are employed. Meanwhile, the global minimum is introduced to judge the performance of the sampling methods for metamodels. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CV, KSSM, and SLE), the SOSM results in more accurate metamodels. Furthermore, SOSM is superior in exploring the global minimum of the metamodels compared to the other five sampling methods.

The new sequential optimization sampling method SOSM provides another effective way of generating sampling points to construct metamodels. It is superior in seeking the global minimum compared to the previous sampling methods and, meanwhile, can improve the accuracy of the metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from the National Science Foundation (CMMI-51375389 and 51279165) is gratefully acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences - Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.
[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275–286, 1999.
[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1–3, pp. 50–79, 2009.
[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Journal of Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61–70, 2011.
[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1–6, IEEE, January 2013.
[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198–207, 2013.
[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61–73, 2013.
[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194–206, 2013.
[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406–429, 2014.
[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210–226, 2013.
[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64–73, 2013.
[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914–924, 2014.
[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521–1529, 2013.
[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881–890, 2013.
[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39–46, 2012.
[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807–1815, 2013.
[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306–316, 2011.
[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764–1773, 2011.
[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.
[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Journal of Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1–13, 2001.
[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819–830, 2013.
[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254–260, 2013.
[23] L. Ke, H. B. Qiu, Z. Z. Chen, and L. Chi, "Engineering design based on Hammersley sequences sampling method and SVR," Advanced Materials Research, vol. 544, pp. 206–211, 2012.
[24] Y. Tang and H. Xu, "An effective construction method for multi-level uniform designs," Journal of Statistical Planning and Inference, vol. 143, no. 9, pp. 1583–1589, 2013.
[25] R. Jin, W. Chen, and A. Sudjianto, "On sequential sampling for global metamodeling in engineering design," in Proceedings of the ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 539–548, American Society of Mechanical Engineers, 2002.
[26] Y. M. Deng, Y. Zhang, and Y. C. Lam, "A hybrid of mode-pursuing sampling method and genetic algorithm for minimization of injection molding warpage," Materials & Design, vol. 31, no. 4, pp. 2118–2123, 2010.
[27] X. Wei, Y. Wu, and L. Chen, "A new sequential optimal sampling method for radial basis functions," Journal of Applied Mathematics and Computation, vol. 218, no. 19, pp. 9635–9646, 2012.
[28] S. Kitayama, M. Arakawa, and K. Yamazaki, "Sequential approximate optimization using radial basis function network for engineering optimization," Optimization and Engineering, vol. 12, no. 4, pp. 535–557, 2011.
[29] H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551–564, 2012.
[30] S. Kitayama, J. Srirat, and M. Arakawa, "Sequential approximate multi-objective optimization using radial basis function network," Structural and Multidisciplinary Optimization, vol. 48, no. 3, pp. 501–515, 2013.
[31] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, no. 8, pp. 1905–1915, 1971.
[32] A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306–1315, 2005.
[33] T. Krishnamurthy, "Response surface approximation with augmented and compactly supported radial basis functions," in Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, vol. 1748, pp. 3210–3224, April 2003, AIAA Paper.
[34] Z. Wu, "Compactly supported positive definite radial functions," Journal of Advances in Computational Mathematics, vol. 4, no. 1, pp. 283–292, 1995.
[35] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11–22, 1984.
[36] A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203–217, 2006.
[37] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, no. 4, pp. 370–380, 2007.
[38] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy, "Global and local optimization using radial basis function response surface models," Applied Mathematical Modelling, vol. 31, no. 10, pp. 2095–2110, 2007.
[39] P. Borges, T. Eid, and E. Bergseng, "Applying simulated annealing using different methods for the neighborhood search in forest planning problems," European Journal of Operational Research, vol. 233, no. 3, pp. 700–710, 2014.
[40] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.
[41] L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.
[42] Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443–446, Indianapolis, Ind, USA, April 1997.
[43] B. K. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, pp. 318–320, 1994.
[44] K. Deb, "GeneAS: a robust optimal design technique for mechanical component design," in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds., pp. 497–514, Springer, Berlin, Germany, 1997.



6 The Scientific World Journal

Construction of the response surface with RBF

Find the extrema points of response surface

Add the extrema points to the sample

Construction of the density function with RBF

Find the minimum point of density function

Add the minimum point to the sample

Yes

Yes

No

No

Second phase

First phase

count = 1

count = count + 1 count le d

m lt mmax

End

Update

the

sampling

points

m

Initial sampling points m

Figure 6 Proposed algorithm

First phase119898 initial sampling points are generated usingthe Latin hypercube sampling design All functions areevaluated at the sampling points and the response surface

119891(119909) is constructed The extrema points of response surfacecan then be found and directly taken as the new samplingpoints The more accurate metamodel would be constructedby repeating the procedure of adding extrema points to thesample adaptively and sequentially

Second phase the density function is constructed to findthe sparse regionTheminimumpoint of the density functionis taken as a new sampling point This step is repeated whilea terminal criterion (count le 119889) is satisfied Parameter119889 controls the number of sampling points obtained by thedensity function Kitayama et al [28] advised 119889 = int(1198992) Inthis paper parameter 119889 = int(1198992)+1 where int( ) representsthe rounding-off The new sampling point would be gainedif the parameter count is less than 119889 and it is increased ascount = count + 1

The terminal criterion of the integrated algorithm isdetermined by the maximum number of sampling points119898max If the number of sampling points is less than 119898maxthe algorithm proceeds Otherwise the algorithm is termi-nated In the algorithm the response surface is constructedrepeatedly through the addition of the new sampling pointsnamely extrema points and minimum point of densityfunction Afterwards the more accurate metamodel would

be constructed by repeating the procedure of adding pointsto the sample adaptively and sequentially

4 Numerical Examples

In this section we would test the validity of the proposedsampling method through some well-known numericalexamples and one engineering optimization problem Allthese functions are approximated with the E-RBF modelThe response surfaces are constructed through one-stagesampling methods (ie LHD and RND) and sequentialsampling methods (ie CV [25] Kitayama et al [28] SLE[29] and SOSM proposed in this paper) with the samesampling size In order to visualize the comparison betweenapproximated model and actual model two design variablesof numerical examples listed in Table 3 are tested

41 Sampling Strategies Two types of samplingmethods usedin this paper separately are one-stage sampling methods andsequential sampling methods One-stage sampling methodsinclude Latin hypercube sampling design (LHD) and Ran-dom sampling design (RND) Sequential sampling methodsinclude cross-validation samplingmethod (CV) [25] sequen-tial samplingmethodproposed byKitayama et al [28] termedKSSM in this paper successive local enumeration sampling

The Scientific World Journal 7

Table 3 Numerical function and global minimum

Number Function Design domain Global minimum1 119891(119909) = minus10 sin 119888 (119909

1) sdot sin 119888 (119909

2) minus2 le 119909 le 2 119909 = (0 0) 119891min = minus10

2 119891(119909) = 4119909

1sdot 119890

minus1199092

1minus1199092

2minus2 le 119909 le 2 119909 = (minus

radic22 0) 119891min = minus172

3 119891(119909) = 119909

2

1+ 119909

2

2minus 10 lowast [cos (2120587119909

1) + cos (2120587119909

1)] + 20 minus08 le 119909 le 08 119909 = (0 0) 119891min = 0

4119891(119909) = 3(1 minus 119909

1)

2sdot 119890

minus1199092

1minus(1199092+1)

2

minus 10 (

119909

1

5

minus 119909

3

1minus 119909

5

2)

sdot 119890

minus1199092

1minus1199092

2minus

1

3

119890

minus(1199091+1)2minus1199092

2

minus3 le 119909 le 3 119909 = (0228 minus163) 119891min = minus655

5119891(119909) = minus60[1 + (119909

1+ 1)

2

+ (119909

2minus 3)

2

] minus 20[1 + (119909

1minus 1)

2

+ (119909

2minus 3)

2

] minus 30[1 + 119909

2

1+ (119909

2+ 4)

2

] + 30

minus6 le 119909 le 6 119909 = (minus097 3) 119891min = minus3463

6 119891(119909) = minus10(sinradic119909

2

1+ 119909

2

2+ epsradic119909

2

1+ 119909

2

2+ eps) + 10 eps = 10

minus15minus3120587 le 119909 le 3120587 119909 = (0 0) 119891min = 0

method (SLE) [29] and sequential optimization samplingmethod (SOSM) proposed in this paper Every metamodelis constructed for 20 times with the same sampling methodin this paper For functions 1 and 2 we set 119898max = 25 Forfunctions 4 and 5 we set 119898max = 36 For functions 3 and 6we generate 28 and 40 sampling points separately

42 Selection of Parameters For the E-RBF approach we set119888 = 1 which is a prescribed parameter for the multiquadricbasis functions for all of the examplesThe parameter 120574 is setequal to approximately 13 of the design domain size MullurandMessac [32] investigated that the results were not undulysensitive to 120574 Another parameter 120578 is set equal to 2 for allnumerical examples

43 Metamodel Accuracy Measures Generally speaking anE-RBF response surface passes through all the samplingpoints exactly Therefore it is impossible to estimate theaccuracy of an E-RBF model with sampling points Tomeasure the accuracy of the resulting metamodels we canuse additional testing points to evaluate the accuracy of themodel via standard error measure root-mean-squared error(RMSE)The smaller the value of RMSE is the more accuratethe response surface will be The error measure is defined as

RMSE =

radic

sum

119870

119896=1[119891 (x119896) minus

119891 (x119896)]

2

119870

(13)

where119870 is the number of additional testing points generatedby grid sampling method (32 lowast 32 for all the examples)119891(x119896) and

119891(x119896) are the true function value and predicted

metamodel value at the 119896th testing point x119896 respectively

In addition to the preceding RMSE we also calculate thenormalized root-mean-squared error (NRMSE) as follows

NRMSE =

radic

sum

119870

119896=1[119891 (x119896) minus

119891 (x119896)]

2

sum

119870

119896=1[119891 (x119896)]

2lowast 100

(14)

RMSE only calculates the error of functions themselvesHowever NRMSE allows comparison of themetamodel errorvalues with regard to different functions

In engineering problems global minimum is requiredgenerally So we employ the simulated annealing (SA) [39]to calculate the global minimum

119891min based on the ultimatemetamodel The actual global minimum 119891min are listed inTable 3

44 Results and Discussions In this section we discuss theresults obtained after constructing metamodels through sixvarious sampling methods using the assessment measuredescribed above As mentioned in Section 41 20 proceduresare conducted for each sampling method and therefore thereare twenty sets of accuracy results for each sampling methodThe advantages and validity of sequential optimization sam-pling method (SOSM) are tested in comparison with one-stage sampling methods and sequential sampling methodsseparately below In addition SOSM is used to solve a typicalmechanical design optimization problem

441 Comparison of the Performance between SOSM andOne-Stage Sampling Methods In this part two classical one-stage sampling methods LHD RND and SOSM are usedto construct metamodels The accuracy of metamodels andglobal minimum of functions are obtained and managedThe error measures RMSE andNRMSE and global minimumsummarized in Table 4 are average values

From the Table 4 the RMSE and NRMSE of metamodelsusing samplingmethod SOSM are smaller than the other twoone-stage sampling methods for functions 1ndash5 For function6 the values are close The RND as expected performspoorly That is metamodels based on sampling methodSOSMmay provide a better fit to actual functions As seen inTable 4 on exploring global minimum

119891min of metamodelsSOSM is superior to LHD and RDN through all numericalexamples compared to the actual global minimum 119891min Inparticular the SOSM can almost find the global minimumof all numerical examples at every turn However LHD andRND perform fairly poorly particularly in the success ratewhich will be depicted in Figure 9

The mean results from Table 4 cannot represent theadvantages and disadvantages of different sampling methodsadequately Thus statistical graphics that is boxplots are

8 The Scientific World Journal

Table 4 Metamodel accuracy results for functions 1ndash6 between SOSM and one-stage sampling methods

Number SOSM LHD RND119891minRMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min

1 0663 28214 minus10002 0722 30698 minus9137 0817 34766 minus8728 minus102 0061 9798 minus1715 0090 14524 minus1611 0122 19659 minus1662 minus17153 1674 6464 minus0002 1790 6913 0140 2072 8001 0040 04 0728 38118 minus6556 0810 42456 minus5786 0861 45116 minus5374 minus65515 2455 10531 minus34243 3353 14380 minus12928 3789 16253 minus9907 minus34636 0995 10011 0116 0992 9991 2284 1076 10830 2070 0

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

Sampling methods

RMSE

of f

unct

ions

1ndash6

Function 1 Function 2 Function 3 Function 4 Function 5 Function 6

0

1

2

3

4

5

Figure 7 Assessment of metamodels RMSE between SOSM andone-stage sampling methods

used to show the deviations of the accuracy and globalminimum of each metamodel In descriptive statistics box-plot is a convenient way of graphically depicting groups ofnumerical data through their quartilesThe center line of eachboxplot shows the 50th percentile (median) value and the boxencompasses the 25th and 75th percentile of the data Theleader lines (horizontal lines) are plotted at a distance of 15times the interquartile range in each direction or the limitof the data (if the limit of the data falls within 15 times theinterquartile range) The data points outside the horizontallines are shown by placing a sign (ldquo◻rdquo) for each point Thetwenty times of metamodels accuracy results (RMSE andNRMSE) and global minimum 119891min with sampling methodsthat is SOSM LHD and RND are illustrated in Figures 7ndash9with the help of boxplots

From the results shown in Figures 7 and 8 it is foundthat the median values of RMSE and NRMSE are smallercompared to LHD and RND for functions 1ndash5 For function6 the median values of RMSE and NRMSE are a little largerbut very close In addition the box size of RMSE andNRMSEbased on SOSM is the shortest for all functions except forfunctions 4 and 5 The difference of sizes is small It is clearthat a small size of the box suggests small standard deviationof results High standard deviation of results suggests largeuncertainties in the predictions and low standard suggestslow uncertainty on the contrary The points outside thehorizontal lines indicate that the results of experiments areterrible Above all either mean values or median values and

Function 1 Function 2 Function 3 Function 4 Function 5 Function 6

10

20

30

40

50

60

70

NRM

SE o

f fun

ctio

ns1ndash6

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

Sampling methods

Figure 8 Assessment of metamodels NRMSE between SOSM andone-stage sampling methods

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

SOSM

LHD

RND

Sampling methods

minus35

minus30

minus25

minus20

minus15

minus10

minus5

0

5

Glo

bal m

inim

um o

f fun

ctio

ns1ndash6 Function 1 Function 2 Function 3 Function 4 Function 5

Function 6

Figure 9 Assessment of metamodels Global minimum betweenSOSM and one-stage sampling methods

box sizes of results are taken into account the accuracy ofmetamodels with sampling method SOSM is better

Figure 9 depicts the boxplot of global minimum

119891minobtained for twenty numerical experiments It is obviousfrom Figure 9 that the median value of SOSM is probablyequal to the actual global minimum 119891min Meanwhile thesmall sizes of boxes imply small standard deviation whichis also reflected by small differences between the mean andmedian values The standard deviation of global minimumis one of the important factors for evaluating the robustnessof algorithmTherefore smaller standard deviation of results

The Scientific World Journal 9

minus015

minus01

minus005

0

005

01

minus02minus015 minus01 minus005 0 005 01

015

x1

x2

(a) Number 1

minus085 minus08 minus07 minus065minus02

minus01

0

01

02

03

x1

x2

minus075

(b) Number 2

minus001 minus0005

0005

001

minus001

minus0005

0005 001

x1

x2

0

0

(c) Number 3

0 01 02 03 04 05minus2

minus19

minus18

minus17

minus16

minus15

x1

x2

(d) Number 4

minus15 minus1 minus05 0 05 12

25

3

35

4

45

x1

x2

(e) Number 5

0

0minus15 minus1 minus05 05 1 15minus15

minus1

minus05

05

1

15

x1

x2

(f) Number 6

Figure 10 Global minima points of functions 1ndash6 (a)ndash(f) show the global minima points for functions 1ndash6 separately The red pentagonindicates the actual global minimum point The black dots indicate the global minima points of metamodels based on sampling methodSOSMThe blue square indicates the global minima points of metamodels based on sampling method LHDThemagenta diamond indicatesthe actual global minima points of metamodels based on sampling method RND

10 The Scientific World Journal

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(a) Actual function

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(b) SOSM

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(c) LHD

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(d) RND

Figure 11 Function 1 actual and metamodel surface

implies the robustness of the algorithm It is clear fromFigure 9 that the SOSM is a more robust sampling methodthan the other two sampling techniques under the parametersettings employed in this paper The success rate is badcomparing the actual global minimum from Table 4 withthe distribution of global minimum of metamodels basedon sampling methods LHD and RND from Figure 9 Inother words SOSM plays a perfect role in finding the globalminimum of metamodels

In order to indicate the effectiveness of SOSM in seekingglobal minimum points the positions of global minimumpoints of metamodels for functions 1ndash6 are shown in Figures10(a)ndash10(f) separately The red pentagon indicates the actualglobal minimum point The black dots indicate the globalminimum points of metamodels based on sampling methodSOSM The blue squares indicate the global minimumpoints of metamodels based on sampling method LHDThe magenta diamonds indicate the global minimum pointsof metamodels based on sampling method RND Figure 10shows that the black dots are distributed densely in the centerof red pentagon Meanwhile the blue squares and magentadiamonds are decentralized around the red pentagon andthe difference between actual global minimum points and

global minimum points of metamodels based on LHD orRND ismostly quite largeThe above demonstrate that SOSMis superior to LHD and RND on global optimization (a)ndash(d)in Figures 11 12 13 14 15 and 16 show the graphs of actualfunctions and of the associated metamodels based on threesampling methods separately It can be observed intuitionallythat the metamodels surface adopting SOSM method issmoother than LHD and RND and furthermore the globalminima of metamodels based on SOSM are consistent withthe actual global minima The conclusions reached above areidentified further by comparing the actual function surfacesto metamodels surfaces

442 Comparison of the Performance between SOSMand Pre-vious Sequential Sampling Methods In this part three differ-ent sequential sampling methods including cross-validationsampling method (CV) [25] sequential sampling methodproposed by Kitayama et al [28] termed KSSM and suc-cessive local enumeration sampling method (SLE) [29] areused to construct metamodels in comparison with thoseconstructed by SOSM Similarly the accuracy of metamodelsand global minimumof functions are obtained andmanaged

The Scientific World Journal 11

minus2

2

minus2

2

2

f(x)

x1x2

minus2

(a) Actual function

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(b) SOSM

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(c) LHD

minus22

minus2 minus2

2

2

f(x)

x1

x2

(d) RND

Figure 12 Function 2 actual and metamodel surface

The accuracy measures RMSE and NRMSE and global min-imum summarized in Table 5 are mean values Note that avalue of zero for both accuracy measures RMSE and NRMSEwould indicate a perfect fit

From Table 5 the RMSE and NRMSE of metamodelsusing samplingmethod SOSMare smaller than the sequentialsampling methods CV and KSSM for functions 1ndash5 Forfunction 6 the values are close The SLE performs greatestThat is metamodels based on sampling method SOSM mayprovide a better fit to actual functions than CV and KSSMHowever the samplingmethod SLEperforms unsatisfactorilyin terms of exploring global minimum As seen in Table 5on exploring global minimum

119891min of metamodels SOSM issuperior to the previous sequential sampling methods SLEthrough all numerical examples compared to the actual globalminimum119891min In addition the sequential samplingmethodsCV and KSSM perform as great as SOSM on exploring theglobal minimum In general the sequential samplingmethodKSSM is the best SOSM takes second place and CV isthe least It can be concluded that the sequential samplingmethod SOSM proposed in this paper is the best choiceconsidering the accuracy ofmetamodels and exploring globalminimum

In order to represent the advantages and disadvantages ofdifferent sequential samplingmethods adequately the twentytimes of metamodels accuracy results (RMSE and NRMSE)and globalminimum119891min with sequential samplingmethodsthat is SOSM CV KSSM and SLE are illustrated in Figures17ndash19 with the help of boxplot

From the results shown in Figures 17 and 18 it is foundthat themedian values of RMSE andNRMSE based on SOSMare smaller compared to CV and KSSM for functions 1ndash5 Forfunction 6 the median values of RMSE and NRMSE are alittle larger but very close Except for Function 5 the accuracyofmetamodels constructed by SLE is the best In addition thebox size of RMSE andNRMSEbased on SOSM is shorter thanCV and KSSM for functions 1ndash4 The box size of RMSE andNRMSE based on SLE is the shortest for all functions exceptfor functions 1 and 3 A small size of the box suggests smallstandard deviation of results and low standard deviation ofresults suggests small uncertainties in the predictions andlarge standard suggests high uncertainty on the contraryThepoints outside the horizontal lines indicate that the resultsof experiments are terrible Above all either mean values ormedian values and box sizes of results are taken into accountthe accuracy of metamodels with sampling method SOSM

12 The Scientific World Journal

Figure 13: Function 3, actual and metamodel surfaces. (a) Actual function; (b) SOSM; (c) LHD; (d) RND.

Table 5: Metamodel accuracy results for functions 1–6 between SOSM and previous sequential sampling methods. Each cell lists RMSE, NRMSE (%), and the global minimum f_min found on the metamodel; the last column gives the actual global minimum.

Number | SOSM                   | CV [25]                | KSSM [28]              | SLE [29]               | Actual f_min
1      | 0.663, 28.214, -10.002 | 0.699, 29.707, -9.995  | 0.709, 30.142, -10.017 | 0.493, 20.951, -9.539  | -10
2      | 0.061, 9.798, -1.715   | 0.089, 14.312, -1.715  | 0.120, 19.343, -1.716  | 0.065, 10.504, -1.622  | -1.715
3      | 1.674, 6.464, -0.002   | 2.261, 8.732, 0.000    | 2.305, 8.902, -0.001   | 1.459, 5.636, 0.105    | 0
4      | 0.728, 38.118, -6.556  | 0.813, 42.601, -6.514  | 0.847, 44.359, -6.552  | 0.533, 27.899, -5.904  | -6.551
5      | 2.455, 10.531, -34.243 | 2.508, 10.759, -33.794 | 2.466, 10.575, -34.259 | 2.789, 11.961, -12.212 | -34.63
6      | 0.995, 10.011, 0.116   | 0.943, 9.486, 0.390    | 0.950, 9.558, 0.052    | 0.703, 7.080, 1.274    | 0

is superior to that with the sampling methods CV and KSSM. Certainly, the sampling method SLE performs best in terms of the accuracy of the metamodels; however, SLE is not good at exploring the global minimum of the functions.

Figure 19 depicts the boxplots of the global minimum f_min obtained over the twenty numerical experiments. It is obvious from Figure 19 that the median values of SOSM, CV, and KSSM are approximately equal to the actual global minimum f_min. Meanwhile, the small box sizes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global

minimum is one of the important factors for evaluating the robustness of an algorithm; a smaller standard deviation of the results therefore implies better robustness. It is clear from Figure 19 that the sequential sampling methods, except SLE, are robust sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima from Table 5 with the distribution of the global minima of the metamodels based on the sampling method SLE in Figure 19, the success rate of SLE is poor. It is obvious that SOSM performs excellently both in constructing accurate metamodels and in finding the global minimum of the functions.


Figure 14: Function 4, actual and metamodel surfaces. (a) Actual function; (b) SOSM; (c) LHD; (d) RND.

Six different sampling methods have been used to construct metamodels for six typical functions in this paper. It can be demonstrated that the sequential sampling methods perform better and more efficiently than the one-stage sampling methods. Furthermore, the sequential sampling technique allows engineers to control the sampling process. In general, the one-stage sampling techniques are not as good as the sequential sampling methods at fitting the area where the global minimum is located. In a word, SOSM, the sequential sampling method proposed in this paper, is the best choice considering both the accuracy of the metamodels and the location of the global minimum. In other words, SOSM is reliable in the whole fitting space and is also good at fitting the area where the global minimum is located.

4.4.3. Engineering Problem. The validity of the sequential sampling method SOSM proposed in this paper is tested on a typical mechanical design optimization problem involving four design variables, that is, pressure vessel design. This problem has been studied by many researchers [40–42]. The schematic of the pressure vessel is shown in Figure 20. In this case, a cylindrical pressure vessel with two hemispherical heads is designed for minimum fabrication cost. Four variables are identified: the thickness of the pressure vessel T_s, the thickness of the head T_h, the inner radius of the pressure vessel R, and the length of the vessel without the heads L. The variable vector is given (in inches) by

\[ X = (T_s, T_h, R, L) = (x_1, x_2, x_3, x_4). \tag{15} \]

The objective function is the combined cost of the materials, forming, and welding of the pressure vessel. The mathematical model of the optimization problem is expressed as

\[
\begin{aligned}
\min\quad & f(X) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^{2} + 3.1661\,x_1^{2} x_4 + 19.84\,x_1^{2} x_3 \\
\text{s.t.}\quad & g_1(X) = -x_1 + 0.0193\,x_3 \le 0, \\
& g_2(X) = -x_2 + 0.00954\,x_3 \le 0, \\
& g_3(X) = -\pi x_3^{2} x_4 - \tfrac{4}{3}\,\pi x_3^{3} + 129600 \le 0, \\
& g_4(X) = x_4 - 240 \le 0.
\end{aligned}
\tag{16}
\]
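To make the formulation concrete, below is a minimal Python sketch of the objective and constraints in (16); it is an illustration written for this edit, not code from the paper. The constant 129,600 in g3 follows the value printed in (16); note that the classical statement of this benchmark uses 1,296,000 at that position.

```python
import numpy as np

def cost(x):
    """Fabrication cost f(X) of the pressure vessel, x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def constraints(x):
    """Constraint values g1..g4; a design is feasible when all are <= 0."""
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 129600.0,
        x4 - 240.0,
    ])

# Sanity check against a design reported in Table 6 (Kannan and Kramer [43]):
print(cost([1.125, 0.625, 58.291, 43.690]))  # approximately 7198
```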


Figure 15: Function 5, actual and metamodel surfaces. (a) Actual function; (b) SOSM; (c) LHD; (d) RND.

Table 6: Comparison of optimal results for the design of a pressure vessel.

Design variable | Cao and Wu [42] | Kannan and Kramer [43] | Deb [44]  | This paper using SOSM
x_1             | 1.000           | 1.125                  | 0.9375    | 1.000
x_2             | 0.625           | 0.625                  | 0.5000    | 0.625
x_3             | 51.1958         | 58.291                 | 48.3290   | 41.523
x_4             | 60.7821         | 43.690                 | 112.6790  | 120.562
f(X)            | 7108.616        | 7198.042               | 6410.381  | 6237.402

The ranges of the design variables, x_1 ∈ [0.0625, 6.25], x_2 ∈ [0.0625, 6.25], x_3 ∈ [6.25, 125], and x_4 ∈ [6.25, 125], are used, referring to the literature [40]. The problem formulated above is a simple nonlinear constrained problem. Now, assuming that the objective and constraint functions defined by (16) are computation-intensive functions, metamodels of these functions are constructed by RBF using the sequential sampling method SOSM.

This problem has been solved by many researchers, including Cao and Wu [42], who applied an evolutionary programming model; Kannan and Kramer [43], who used an augmented Lagrangian multiplier approach; and Deb [44], who used a genetic adaptive search.

The average values of the optimal results from 50 runs are listed in Table 6 and compared with the three results reported in the literature [42–44]. It can be seen from the table that the optimal solution obtained in this paper is about 2.7% superior to the best solution previously reported in the literature [44].
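A rough reproduction of this kind of surrogate-based study can be sketched with standard Python tools: sample the design box, fit an RBF interpolant of the cost, and minimize it subject to the constraints of (16). SciPy's RBFInterpolator stands in for the E-RBF metamodel used in the paper, and the sample size, kernel, and optimizer are illustrative choices, not the paper's settings; the cost and constraint definitions repeat those of the sketch following (16) so that the block is self-contained.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def cost(x):
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def g(x):  # feasible when all entries are <= 0
    x1, x2, x3, x4 = x
    return np.array([-x1 + 0.0193 * x3, -x2 + 0.00954 * x3,
                     -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 129600.0,
                     x4 - 240.0])

lb = np.array([0.0625, 0.0625, 6.25, 6.25])
ub = np.array([6.25, 6.25, 125.0, 125.0])

# Space-filling sample of the design box and an RBF metamodel of the cost.
X = qmc.scale(qmc.LatinHypercube(d=4, seed=1).random(80), lb, ub)
y = np.array([cost(x) for x in X])
surrogate = RBFInterpolator(X, y)

res = minimize(lambda x: surrogate(x.reshape(1, -1))[0],
               x0=(lb + ub) / 2.0,
               bounds=list(zip(lb, ub)),
               constraints=[{"type": "ineq", "fun": lambda x: -g(x)}],
               method="SLSQP")
print(res.x, cost(res.x), np.all(g(res.x) <= 1e-6))
```

Because the surrogate is only as good as its samples, a sequential scheme such as SOSM would refit it and re-optimize after each new point rather than optimizing a one-shot fit as done here.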

5. Conclusions

In this paper, the sequential optimization sampling method (SOSM) for metamodels has been proposed. The recently developed extended radial basis functions (E-RBF) are introduced as the approximation method used to construct the metamodels. Combining the new sampling strategy SOSM with


Figure 16: Function 6, actual and metamodel surfaces. (a) Actual function; (b) SOSM; (c) LHD; (d) RND.

Figure 17: Assessment of metamodel RMSE between SOSM and the previous sequential sampling methods (CV, KSSM, and SLE); boxplots of the RMSE of functions 1–6.

the extended radial basis functions, the design algorithm for building metamodels is presented. In the SOSM, the optimal sampling points of the response surface are taken as new sampling points in order to improve the local accuracy.

Figure 18: Assessment of metamodel NRMSE between SOSM and the previous sequential sampling methods (CV, KSSM, and SLE); boxplots of the NRMSE of functions 1–6.

In addition, new sampling points in the sparse region are required for better global approximation. To determine the sparse region, a density function constructed by the radial basis function network has been applied. In the proposed


Figure 19: Assessment of the metamodel global minimum between SOSM and the previous sequential sampling methods (CV, KSSM, and SLE); boxplots of the global minimum of functions 1–6.

Figure 20: Diagram of the pressure vessel design (thicknesses T_s and T_h, inner radius R, and length L).

algorithm, the response surface is constructed repeatedly until the terminal criterion, that is, the maximum number of sampling points m_max, is satisfied.
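As a structural summary of the procedure described above, the following sketch shows one plausible reading of the SOSM loop. It is written for this edit, not taken from the paper: SciPy's RBFInterpolator and a simple Gaussian-bump density stand in for the E-RBF metamodel and the RBF-network density function, and the initial design, optimizer, and sample counts are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def sosm(f, bounds, n_init=10, m_max=25, seed=0):
    """Sketch of the SOSM loop: each cycle adds (i) the minimizer of the
    current response surface and (ii) the point where the sampled region is
    sparsest, until m_max sampling points have been evaluated."""
    rng = np.random.default_rng(seed)
    lb, ub = np.array(bounds).T
    X = rng.uniform(lb, ub, size=(n_init, len(lb)))   # initial design (random here)
    y = np.array([f(x) for x in X])

    while len(X) < m_max:
        surrogate = RBFInterpolator(X, y)             # stand-in for the E-RBF metamodel
        x_opt = differential_evolution(               # optimal point of the response surface
            lambda x: surrogate(x.reshape(1, -1))[0], bounds, seed=seed).x

        def density(x):                               # high near existing points, low in sparse regions
            return np.sum(np.exp(-np.sum((X - x) ** 2, axis=1)))
        x_sparse = differential_evolution(density, bounds, seed=seed).x

        for x_new in (x_opt, x_sparse):
            if len(X) < m_max:
                X = np.vstack([X, x_new])
                y = np.append(y, f(x_new))
    return RBFInterpolator(X, y), X, y
```

With f set to one of the six test functions and bounds to its design domain, the returned surrogate can then be searched for the global minimum, as the paper does with simulated annealing [39].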

For the sake of examining the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for the accuracy of metamodels, that is, RMSE and NRMSE, are employed. Meanwhile, the global minimum is introduced to judge the performance of the sampling methods for metamodels. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CV, KSSM, and SLE), the SOSM results in more accurate metamodels. Furthermore, SOSM is superior in exploring the global minimum of metamodels compared to the other five sampling methods.

The new sequential optimization sampling method SOSM has been proposed, which provides another effective way of generating sampling points to construct metamodels. It is superior in seeking the global minimum compared to the previous sampling methods and, meanwhile, can improve the accuracy of metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from the National Science Foundation (CMMI-51375389 and 51279165) is greatly acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences-Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.
[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275–286, 1999.
[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1–3, pp. 50–79, 2009.
[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Journal of Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61–70, 2011.
[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1–6, IEEE, January 2013.
[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198–207, 2013.
[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61–73, 2013.
[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194–206, 2013.
[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406–429, 2014.
[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210–226, 2013.
[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64–73, 2013.
[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914–924, 2014.
[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521–1529, 2013.
[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881–890, 2013.
[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39–46, 2012.
[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807–1815, 2013.
[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306–316, 2011.
[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764–1773, 2011.
[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.
[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Journal of Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1–13, 2001.
[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819–830, 2013.
[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254–260, 2013.
[23] L. Ke, H. B. Qiu, Z. Z. Chen, and L. Chi, "Engineering design based on Hammersley sequences sampling method and SVR," Advanced Materials Research, vol. 544, pp. 206–211, 2012.
[24] Y. Tang and H. Xu, "An effective construction method for multi-level uniform designs," Journal of Statistical Planning and Inference, vol. 143, no. 9, pp. 1583–1589, 2013.
[25] R. Jin, W. Chen, and A. Sudjianto, "On sequential sampling for global metamodeling in engineering design," in Proceedings of the ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 539–548, American Society of Mechanical Engineers, 2002.
[26] Y. M. Deng, Y. Zhang, and Y. C. Lam, "A hybrid of mode-pursuing sampling method and genetic algorithm for minimization of injection molding warpage," Materials & Design, vol. 31, no. 4, pp. 2118–2123, 2010.
[27] X. Wei, Y. Wu, and L. Chen, "A new sequential optimal sampling method for radial basis functions," Journal of Applied Mathematics and Computation, vol. 218, no. 19, pp. 9635–9646, 2012.
[28] S. Kitayama, M. Arakawa, and K. Yamazaki, "Sequential approximate optimization using radial basis function network for engineering optimization," Optimization and Engineering, vol. 12, no. 4, pp. 535–557, 2011.
[29] H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551–564, 2012.
[30] S. Kitayama, J. Srirat, and M. Arakawa, "Sequential approximate multi-objective optimization using radial basis function network," Structural and Multidisciplinary Optimization, vol. 48, no. 3, pp. 501–515, 2013.
[31] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, no. 8, pp. 1905–1915, 1971.
[32] A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306–1315, 2005.
[33] T. Krishnamurthy, "Response surface approximation with augmented and compactly supported radial basis functions," in Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, vol. 1748, pp. 3210–3224, April 2003, AIAA Paper.
[34] Z. Wu, "Compactly supported positive definite radial functions," Journal of Advances in Computational Mathematics, vol. 4, no. 1, pp. 283–292, 1995.
[35] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11–22, 1984.
[36] A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203–217, 2006.
[37] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, no. 4, pp. 370–380, 2007.
[38] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy, "Global and local optimization using radial basis function response surface models," Applied Mathematical Modelling, vol. 31, no. 10, pp. 2095–2110, 2007.
[39] P. Borges, T. Eid, and E. Bergseng, "Applying simulated annealing using different methods for the neighborhood search in forest planning problems," European Journal of Operational Research, vol. 233, no. 3, pp. 700–710, 2014.
[40] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.
[41] L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.
[42] Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443–446, Indianapolis, Ind, USA, April 1997.
[43] B. K. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, pp. 318–320, 1994.
[44] K. Deb, "GeneAS: a robust optimal design technique for mechanical component design," in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds., pp. 497–514, Springer, Berlin, Germany, 1997.



global minimum points of metamodels based on LHD orRND ismostly quite largeThe above demonstrate that SOSMis superior to LHD and RND on global optimization (a)ndash(d)in Figures 11 12 13 14 15 and 16 show the graphs of actualfunctions and of the associated metamodels based on threesampling methods separately It can be observed intuitionallythat the metamodels surface adopting SOSM method issmoother than LHD and RND and furthermore the globalminima of metamodels based on SOSM are consistent withthe actual global minima The conclusions reached above areidentified further by comparing the actual function surfacesto metamodels surfaces

442 Comparison of the Performance between SOSMand Pre-vious Sequential Sampling Methods In this part three differ-ent sequential sampling methods including cross-validationsampling method (CV) [25] sequential sampling methodproposed by Kitayama et al [28] termed KSSM and suc-cessive local enumeration sampling method (SLE) [29] areused to construct metamodels in comparison with thoseconstructed by SOSM Similarly the accuracy of metamodelsand global minimumof functions are obtained andmanaged

The Scientific World Journal 11

minus2

2

minus2

2

2

f(x)

x1x2

minus2

(a) Actual function

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(b) SOSM

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(c) LHD

minus22

minus2 minus2

2

2

f(x)

x1

x2

(d) RND

Figure 12 Function 2 actual and metamodel surface

The accuracy measures RMSE and NRMSE and global min-imum summarized in Table 5 are mean values Note that avalue of zero for both accuracy measures RMSE and NRMSEwould indicate a perfect fit

From Table 5 the RMSE and NRMSE of metamodelsusing samplingmethod SOSMare smaller than the sequentialsampling methods CV and KSSM for functions 1ndash5 Forfunction 6 the values are close The SLE performs greatestThat is metamodels based on sampling method SOSM mayprovide a better fit to actual functions than CV and KSSMHowever the samplingmethod SLEperforms unsatisfactorilyin terms of exploring global minimum As seen in Table 5on exploring global minimum

119891min of metamodels SOSM issuperior to the previous sequential sampling methods SLEthrough all numerical examples compared to the actual globalminimum119891min In addition the sequential samplingmethodsCV and KSSM perform as great as SOSM on exploring theglobal minimum In general the sequential samplingmethodKSSM is the best SOSM takes second place and CV isthe least It can be concluded that the sequential samplingmethod SOSM proposed in this paper is the best choiceconsidering the accuracy ofmetamodels and exploring globalminimum

In order to represent the advantages and disadvantages ofdifferent sequential samplingmethods adequately the twentytimes of metamodels accuracy results (RMSE and NRMSE)and globalminimum119891min with sequential samplingmethodsthat is SOSM CV KSSM and SLE are illustrated in Figures17ndash19 with the help of boxplot

From the results shown in Figures 17 and 18 it is foundthat themedian values of RMSE andNRMSE based on SOSMare smaller compared to CV and KSSM for functions 1ndash5 Forfunction 6 the median values of RMSE and NRMSE are alittle larger but very close Except for Function 5 the accuracyofmetamodels constructed by SLE is the best In addition thebox size of RMSE andNRMSEbased on SOSM is shorter thanCV and KSSM for functions 1ndash4 The box size of RMSE andNRMSE based on SLE is the shortest for all functions exceptfor functions 1 and 3 A small size of the box suggests smallstandard deviation of results and low standard deviation ofresults suggests small uncertainties in the predictions andlarge standard suggests high uncertainty on the contraryThepoints outside the horizontal lines indicate that the resultsof experiments are terrible Above all either mean values ormedian values and box sizes of results are taken into accountthe accuracy of metamodels with sampling method SOSM

12 The Scientific World Journal

minus08

08

40

minus08

08

0

f(x)

x1

x2

(a) Actual function

minus08

08

40

minus08

08

0

f(x)

x1

x2

(b) SOSM

minus08

08

40

minus08

08

0

f(x)

x1

x2

(c) LHD

minus08

08

40

minus08

08

0

f(x)

x1

x2

(d) RND

Figure 13 Function 3 actual and metamodel surface

Table 5 Metamodel accuracy results for functions 1ndash6 between SOSM and previous sequential sampling methods

Number SOSM CV [25] KSSM [28] SLE [29]119891minRMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min

1 0663 28214 minus10002 0699 29707 minus9995 0709 30142 minus10017 0493 20951 minus9539 minus102 0061 9798 minus1715 0089 14312 minus1715 0120 19343 minus1716 0065 10504 minus1622 minus17153 1674 6464 minus0002 2261 8732 0000 2305 8902 minus0001 1459 5636 0105 04 0728 38118 minus6556 0813 42601 minus6514 0847 44359 minus6552 0533 27899 minus5904 minus65515 2455 10531 minus34243 2508 10759 minus33794 2466 10575 minus34259 2789 11961 minus12212 minus34636 0995 10011 0116 0943 9486 0390 0950 9558 0052 0703 7080 1274 0

is superior to sampling methods CV and KSSM Certainlythe sampling method SLE performs greatest in terms ofthe accuracy of metamodels However SLE is not good atexploring the global minimum of functions

Figure 19 depicts the boxplot of global minimum

119891minobtained for twenty numerical experiments It is obviousfrom Figure 19 that the median value of SOSM CV andKSSM is probably equal to the actual global minimum 119891minMeanwhile the small sizes of boxes imply small standarddeviation which is also reflected by small differences betweenthemean andmedian valuesThe standard deviation of global

minimum is one of the important factors for evaluatingthe robustness of algorithm Therefore smaller standarddeviation of results implies the robustness of the algorithmIt is clear from Figure 19 that the sequential sampling meth-ods except SLE are robust sampling techniques under theparameter settings employed in this paper The success rateis bad comparing the actual global minimum from Table 5with the distribution of global minimum of metamodelsbased on sampling methods SLE from Figure 19 It is obviousthat SOSM plays a perfect role in constructing the accuratemetamodels and finding the global minimum of functions

The Scientific World Journal 13

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(a) Actual function

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(b) SOSM

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(c) LHD

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(d) RND

Figure 14 Function 4 actual and metamodel surface

Six various sampling methods have been used to con-struct metamodels for six typical functions in this paperIt can be demonstrated that sequential sampling methodsperform better and more efficient than one-stage samplingmethods Furthermore sequential sampling technique allowsengineers to control the sampling process In general one-stage sampling technique is not as good as sequential sam-pling methods for fitting the area where the global minimumlocates In a word SOSM as a sequential sampling methodproposed in this paper is the best choice considering theaccuracy of metamodels and locating global minimum Inother words SOSM is reliable in the whole fitting space andalso good for fitting the area where the global minimumlocates

443 Engineering Problem The validity of sequential sam-plingmethod SOSMproposed in this paper is tested by a typ-ical mechanical design optimization problem involving fourdesign variables that is pressure vessel design This problemhas been studied bymany researchers [40ndash42]The schematicof the pressure vessel is shown in Figure 20 In this case acylindrical pressure vessel with two hemispherical heads is

designed for minimum fabrication cost Four variables areidentified thickness of the pressure vessel 119879

119904 thickness of the

head 119879

ℎ inner radius of the pressure vessel 119877 and length of

the vessel without heads 119871 In this case the variable vectorsare given (in inches) by

119883 = (119879

119904 119879

ℎ 119877 119871) = (119909

1 119909

2 119909

3 119909

4) (15)

The objective function is the combined cost of materialsforming andwelding of the pressure vesselThemathematicalmodel of the optimization problem is expressed as

min 119891 (119883) = 06224119909

1119909

3119909

4+ 17781119909

2119909

2

3+ 31661119909

2

1119909

4

+ 1984119909

2

1119909

3

st 119892

1 (119883) = minus119909

1+ 00193119909

3le 0

119892

2 (119883) = minus119909

2+ 000954119909

3le 0

119892

3 (119883) = minus120587119909

2

3119909

4minus

4

3120587119909

3

3

+ 129600 le 0

119892

4 (119883) = 119909

4minus 240 le 0

(16)

14 The Scientific World Journal

35

minus356

6

minus6 minus6

f(x)

x1

x2

(a) Actual function

35

minus356

6

minus6 minus6

f(x)

x1x2

(b) SOSM

35

minus35

6

6

minus6 minus6

f(x)

x1

x2

(c) LHD

35

minus356

6

minus6 minus6

f(x)

x1

x2

(d) RND

Figure 15 Function 5 actual and metamodel surface

Table 6 Comparison of optimal results for the design of a pressure vessel

Design variables Cao and Wu [42] Kannan and Kramer [43] Deb [44] This paper using SOSM119909

11000 1125 09375 1000

119909

20625 0625 05000 0625

119909

3511958 58291 483290 41523

119909

4607821 43690 1126790 120562

119891(119883) 7108616 7198042 6410381 6237402

The ranges of design variables 119909

1isin [00625 625] 119909

2isin

[00625 625] 1199093isin [625 125] and 119909

4isin [625 125] are used

referring to the literature [40]The problem formulated above is a simple nonlinear con-

strained problem Now assuming that the objective and con-straint functions defined by (16) are computation-intensivefunctions hencemetamodels of functions are constructed byRBF using the sequential sampling method SOSM

This problem has been solved by many researchersincluding Cao and Wu [42] applying an evolutionary pro-gramming model Kannan and Kramer [43] solved theproblemusing an augmented Lagrangianmultiplier approachand Deb [44] using a genetic adaptive search

The average values of optimal results from 50 runs arelisted in Table 6 compared with the three results reported inthe literature [42ndash44] It can be seen from the table that theoptimal solution in this paper is still about 27 superior tothe best solution previously reported in the literature [44]

5 Conclusions

In this paper the sequential optimization sampling method(SOSM) for metamodels has been proposed The recentlydeveloped extended radial basis functions (E-RBF) are intro-duced as an approximated method to construct the meta-models Combining the new sampling strategy SOSM with

The Scientific World Journal 15

12

minus3120587 minus3120587

03120587

3120587

x1

x2

f(x)

(a) Actual function

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(b) SOSM

3120587

3120587

f(x)

12

minus3120587 minus3120587

0

x1

x2

(c) LHD

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(d) RND

Figure 16 Function 6 actual and metamodel surface

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Function 1 Function 2

Function 3

Function 4

Function 5

Function 6

RMSE

of f

unct

ions

1ndash6

0

05

1

15

2

25

3

35

Figure 17 Assessment of metamodels RMSE between SOSM andprevious sequential sampling methods

the extended radial basis functions the design algorithmfor building metamodels is presented In the SOSM theoptimal sampling points of response surface are taken as thenew sampling points in order to improve the local accuracy

Function 1 Function 2 Function 3

Function 4

Function 5 Function 6

10

20

30

40

50

60

70

NRM

SE o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 18 Assessment of metamodels NRMSE between SOSM andprevious sequential sampling methods

In addition new sampling points in the sparse region arerequired for better global approximation To determine thesparse region the density function constructed by the radialbasis functions network has been applied In the proposed

16 The Scientific World Journal

minus35

minus30

minus25

minus20

minus15

minus10

minus5

0Function 1 Function 2 Function 3 Function 4 Function 5

Function 6

Glo

bal m

inim

um o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 19 Assessment of metamodels global minimum betweenSOSM and previous sequential sampling methods

TsTh

R

L

Figure 20 Diagram of pressure vessel design

algorithm the response surface is constructed repeatedlyuntil the terminal criterion that is the maximum number ofsampling points 119898max is satisfied

For the sake of examining the validity of the proposedSOSM sampling method six typical mathematical functionshave been tested The assessment measures for accuracyof metamodels that is RMSE and NRMSE are employedMeanwhile global minimum is introduced to judge theperformance of sampling methods for metamodels Theproposed sampling method SOSM is successfully imple-mented and the results are analyzed comprehensively andthoroughly In contrast to the one-stage sampling methods(LHD and RND) and sequential sampling methods (CVKSSM and SLE) the SOSM results in more accurate meta-models Furthermore SOSM is superior in exploring theglobal minimum of metamodels compared to the other fivesampling methods

The new sequential optimization sampling methodSOSM has been proposed which provides another effectiveway for generating sampling points to construct metamodelsIt is superior in seeking global minimum compared to theprevious sampling methods and meanwhile can improvethe accuracy of metamodels Therefore the sequential opti-mization sampling method SOSM significantly outperformsthe previous sampling methods

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to thank everybody for their encour-agement and support The Grant support from National Sci-ence Foundation (CMMI-51375389 and 51279165) is greatlyacknowledged

References

[1] L Gu ldquoA comparison of polynomial based regressionmodels invehicle safety analysisrdquo in Proceedingsof the ASME Design Engi-neering Technical Conferences-Design Automation Conference (DAC rsquo01) A Diaz Ed ASME Pittsburgh Pa USA September2001

[2] P N Koch TW Simpson J K Allen and FMistree ldquoStatisticalapproximations for multidisciplinary design optimization theproblem of sizerdquo Journal of Aircraft vol 36 no 1 pp 275ndash2861999

[3] A I J Forrester and A J Keane ldquoRecent advances in surrogate-based optimizationrdquo Progress in Aerospace Sciences vol 45 no1ndash3 pp 50ndash79 2009

[4] Y C Jin ldquoSurrogate-assisted evolutionary computation recentadvances and future challengesrdquo Journal of Swarm and Evolu-tionary Computation vol 1 no 2 pp 61ndash70 2011

[5] L Lafi J Feki and S Hammoudi ldquoMetamodel matchingtechniques evaluation and benchmarkingrdquo in Proceedings of theInternational Conference on Computer Applications Technology(ICCAT rsquo13) pp 1ndash6 IEEE January 2013

[6] M Chih ldquoA more accurate second-order polynomial meta-model using a pseudo-random number assignment strategyrdquoJournal of the Operational Research Society vol 64 no 2 pp198ndash207 2013

[7] L Laurent P-A Boucard and B Soulier ldquoA dedicated mul-tiparametric strategy for the fast construction of a cokrigingmetamodelrdquoComputers and Structures vol 124 pp 61ndash73 2013

[8] G S Babu and S Suresh ldquoSequential projection-basedmetacognitive learning in a radial basis function network forclassification problemsrdquo IEEE Transactions on Neural Networksand Learning Systems vol 24 no 2 pp 194ndash206 2013

[9] W Yao X Q Chen Y Y Huang and M van Tooren ldquoAsurrogate-based optimizationmethodwithRBFneural networkenhanced by linear interpolation and hybrid infill strategyrdquoOptimization Methods amp Software vol 29 no 2 pp 406ndash4292014

[10] N Vukovic and Z Miljkovic ldquoA growing and pruning sequen-tial learning algorithm of hyper basis function neural networkfor function approximationrdquo Neural Networks vol 46 pp 210ndash226 2013

[11] I Couckuyt S Koziel and T Dhaene ldquoSurrogate modeling ofmicrowave structures using kriging co-kriging and spacemap-pingrdquo International Journal of Numerical Modelling ElectronicNetworks Devices and Fields vol 26 no 1 pp 64ndash73 2013

[12] A Ozmen and G W Weber ldquoRMARS robustification ofmultivariate adaptive regression spline under polyhedral uncer-taintyrdquo Journal of Computational and Applied Mathematics vol259 pp 914ndash924 2014

[13] A J Makadia and J I Nanavati ldquoOptimisation of machiningparameters for turning operations based on response surfacemethodologyrdquoMeasurement vol 46 no 4 pp 1521ndash1529 2013

[14] M Sabzekar and M Naghibzadeh ldquoFuzzy c-means improve-ment using relaxed constraints support vector machinesrdquoApplied Soft Computing vol 13 no 2 pp 881ndash890 2013

The Scientific World Journal 17

[15] M S Sanchez L A Sarabia and M C Ortiz ldquoOn theconstruction of experimental designs for a given task by jointlyoptimizing several quality criteria pareto-optimal experimen-tal designsrdquo Analytica Chimica Acta vol 754 pp 39ndash46 2012

[16] S Ghosh and A Flores ldquoCommon variance fractional factorialdesigns and their optimality to identify a class of modelsrdquoJournal of Statistical Planning and Inference vol 143 no 10 pp1807ndash1815 2013

[17] S Kehoe and J Stokes ldquoBox-behnken design of experimentsinvestigation of hydroxyapatite synthesis for orthopedic appli-cationsrdquo Journal of Materials Engineering and Performance vol20 no 2 pp 306ndash316 2011

[18] R L J Coetzer L M Haines and L P Fatti ldquoCentral compositedesigns for estimating the optimum conditions for a second-order modelrdquo Journal of Statistical Planning and Inference vol141 no 5 pp 1764ndash1773 2011

[19] J Sacks W J Welch T J Mitchell et al ldquoDesign and analysisof computer experimentsrdquo Statistical Science vol 4 no 4 pp409ndash423 1989

[20] R Jin W Chen and T W Simpson ldquoComparative studies ofmetamodelling techniques under multiple modelling criteriardquoJournal of Structural andMultidisciplinaryOptimization vol 23no 1 pp 1ndash13 2001

[21] L Gu and J-F Yang ldquoConstruction of nearly orthogonal Latinhypercube designsrdquoMetrika vol 76 no 6 pp 819ndash830 2013

[22] Y He and B Tang ldquoStrong orthogonal arrays and associatedLatin hypercubes for computer experimentsrdquo Biometrika vol100 no 1 pp 254ndash260 2013

[23] L Ke H B Qiu Z Z Chen and L Chi ldquoEngineering designbased on Hammersley sequences sampling method and SVRrdquoAdvanced Materials Research vol 544 pp 206ndash211 2012

[24] Y Tang and H Xu ldquoAn effective construction method formulti-level uniform designsrdquo Journal of Statistical Planning andInference vol 143 no 9 pp 1583ndash1589 2013

[25] R Jin W Chen and A Sudjianto ldquoOn sequential samplingfor global metamodeling in engineering designrdquo in Proceedingsof the ASME 2002 International Design Engineering TechnicalConferences and Computers and Information in EngineeringConference pp 539ndash548 American Society of MechanicalEngineers 2002

[26] Y M Deng Y Zhang and Y C Lam ldquoA hybrid of mode-pursuing sampling method and genetic algorithm for mini-mization of injection molding warpagerdquo Materials amp Designvol 31 no 4 pp 2118ndash2123 2010

[27] X Wei Y Wu and L Chen ldquoA new sequential optimalsampling method for radial basis functionsrdquo Journal of AppliedMathematics and Computation vol 218 no 19 pp 9635ndash96462012

[28] S KitayamaMArakawa andKYamazaki ldquoSequential approx-imate optimization using radial basis function network forengineering optimizationrdquo Optimization and Engineering vol12 no 4 pp 535ndash557 2011

[29] H Zhu L Liu T Long and L Peng ldquoA novel algorithmof maximin Latin hypercube design using successive localenumerationrdquo Engineering Optimization vol 44 no 5 pp 551ndash564 2012

[30] S Kitayama J Srirat andMArakawa ldquoSequential approximatemulti-objective optimization using radial basis function net-workrdquo Structural and Multidisciplinary Optimization vol 48no 3 pp 501ndash515 2013

[31] R L Hardy ldquoMultiquadric equations of topography and otherirregular surfacesrdquo Journal of Geophysical Research vol 76 no8 pp 1905ndash1915 1971

[32] A A Mullur and A Messac ldquoExtended radial basis functionsmore flexible and effective metamodelingrdquo AIAA Journal vol43 no 6 pp 1306ndash1315 2005

[33] T Krishnamurthy ldquoResponse surface approximation with aug-mented and compactly supported radial basis functionsrdquo inProceedings of the 44thAIAAASMEASCEAHSASC StructuresStructural Dynamics and Materials Conference vol 1748 pp3210ndash3224 April 2003 AIAA Paper

[34] Z Wu ldquoCompactly supported positive definite radial func-tionsrdquo Journal of Advances in Computational Mathematics vol4 no 1 pp 283ndash292 1995

[35] C AMicchelli ldquoInterpolation of scattered data distancematri-ces and conditionally positive definite functionsrdquo ConstructiveApproximation vol 2 no 1 pp 11ndash22 1984

[36] A A Mullur and A Messac ldquoMetamodeling using extendedradial basis functions a comparative approachrdquo Engineeringwith Computers vol 21 no 3 pp 203ndash217 2006

[37] G G Wang and S Shan ldquoReview of metamodeling techniquesin support of engineering design optimizationrdquo Journal ofMechanical Design vol 129 no 4 pp 370ndash380 2007

[38] D BMcDonaldW J GranthamW L Tabor andM JMurphyldquoGlobal and local optimization using radial basis functionresponse surface modelsrdquo Applied Mathematical Modelling vol31 no 10 pp 2095ndash2110 2007

[39] P Borges T Eid and E Bergseng ldquoApplying simulated anneal-ing using different methods for the neighborhood search inforest planning problemsrdquo European Journal of OperationalResearch vol 233 no 3 pp 700ndash710 2014

[40] C A C Coello ldquoUse of a self-adaptive penalty approach forengineering optimization problemsrdquo Computers in Industryvol 41 no 2 pp 113ndash127 2000

[41] L D S Coelho ldquoGaussian quantum-behaved particle swarmoptimization approaches for constrained engineering designproblemsrdquo Expert Systems with Applications vol 37 no 2 pp1676ndash1683 2010

[42] Y J Cao and Q H Wu ldquoMechanical design optimization bymixed-variable evolutionary programmingrdquo in Proceedings ofthe IEEE International Conference on Evolutionary Computation(ICEC rsquo97) pp 443ndash446 Indianapolis Ind USA April 1997

[43] B K Kannan and S N Kramer ldquoAn augmented Lagrangemultiplier based method for mixed integer discrete continuousoptimization and its applications tomechanical designrdquo Journalof Mechanical Design Transactions of the ASME vol 116 pp318ndash320 1994

[44] K Deb ldquoGeneAS a robust optimal design technique formechanical component designrdquo in Evolutionary Algorithms inEngineering Applications D Dasrupta andZMichalewicz Edspp 497ndash514 Springer Berlin Germany 1997


Table 4: Metamodel accuracy results for functions 1–6 between SOSM and the one-stage sampling methods.

Number | SOSM: RMSE / NRMSE / f_min | LHD: RMSE / NRMSE / f_min | RND: RMSE / NRMSE / f_min | Actual f_min
1 | 0.663 / 28.214 / -10.002 | 0.722 / 30.698 / -9.137 | 0.817 / 34.766 / -8.728 | -10
2 | 0.061 / 9.798 / -1.715 | 0.090 / 14.524 / -1.611 | 0.122 / 19.659 / -1.662 | -1.715
3 | 1.674 / 6.464 / -0.002 | 1.790 / 6.913 / 0.140 | 2.072 / 8.001 / 0.040 | 0
4 | 0.728 / 38.118 / -6.556 | 0.810 / 42.456 / -5.786 | 0.861 / 45.116 / -5.374 | -6.551
5 | 2.455 / 10.531 / -34.243 | 3.353 / 14.380 / -12.928 | 3.789 / 16.253 / -9.907 | -34.63
6 | 0.995 / 10.011 / 0.116 | 0.992 / 9.991 / 2.284 | 1.076 / 10.830 / 2.070 | 0

Figure 7: Assessment of metamodels' RMSE between SOSM and the one-stage sampling methods (boxplots of the RMSE of functions 1–6 for SOSM, LHD, and RND).

Boxplots are used to show the deviations of the accuracy and global minimum of each metamodel. In descriptive statistics, a boxplot is a convenient way of graphically depicting groups of numerical data through their quartiles. The center line of each boxplot shows the 50th percentile (median) value, and the box encompasses the 25th and 75th percentiles of the data. The leader lines (horizontal lines) are plotted at a distance of 1.5 times the interquartile range in each direction, or at the limit of the data (if the limit of the data falls within 1.5 times the interquartile range). The data points outside the horizontal lines are marked individually ("◻"). The metamodel accuracy results (RMSE and NRMSE) and the global minimum f_min over the twenty repeated runs of each sampling method, that is, SOSM, LHD, and RND, are illustrated in Figures 7–9 with the help of boxplots.
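
For reference, the box, whisker, and outlier quantities described above can be computed as in the minimal sketch below; the function name and the illustrative sample data are not from the paper.

```python
import numpy as np

def boxplot_stats(values, whisker=1.5):
    """Quantities behind one box in Figures 7-9: quartiles, whisker limits, outliers."""
    values = np.sort(np.asarray(values, dtype=float))
    q1, median, q3 = np.percentile(values, [25, 50, 75])
    iqr = q3 - q1
    # Whiskers extend to the most extreme data points that still lie within
    # `whisker` times the interquartile range of the box edges.
    lo_limit, hi_limit = q1 - whisker * iqr, q3 + whisker * iqr
    lo = values[values >= lo_limit].min()
    hi = values[values <= hi_limit].max()
    outliers = values[(values < lo_limit) | (values > hi_limit)]
    return {"median": median, "q1": q1, "q3": q3,
            "whisker_low": lo, "whisker_high": hi, "outliers": outliers}

# Example: RMSE values of one metamodel over twenty repeated runs (synthetic data).
print(boxplot_stats(np.random.default_rng(0).normal(0.66, 0.05, 20)))
```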

From the results shown in Figures 7 and 8, it is found that the median values of RMSE and NRMSE based on SOSM are smaller than those of LHD and RND for functions 1–5. For function 6 the median values of RMSE and NRMSE are a little larger, but very close. In addition, the box size of RMSE and NRMSE based on SOSM is the shortest for all functions except functions 4 and 5, and even there the difference in size is small. Clearly, a small box suggests a small standard deviation of the results; a high standard deviation suggests large uncertainties in the predictions, while a low standard deviation suggests low uncertainty. The points outside the horizontal lines indicate poor individual experimental results. Above all, whether mean values, median values, or box sizes of the results are taken into account, the accuracy of the metamodels built with the sampling method SOSM is better.

Figure 8: Assessment of metamodels' NRMSE between SOSM and the one-stage sampling methods (boxplots of the NRMSE of functions 1–6 for SOSM, LHD, and RND).

Figure 9: Assessment of metamodels' global minimum between SOSM and the one-stage sampling methods (boxplots of the global minimum of functions 1–6 for SOSM, LHD, and RND).

Figure 9 depicts the boxplots of the global minimum f_min obtained over the twenty numerical experiments. It is obvious from Figure 9 that the median value obtained with SOSM is approximately equal to the actual global minimum f_min. Meanwhile, the small sizes of the boxes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; therefore, a smaller standard deviation of the results implies a more robust algorithm.

Figure 10: Global minimum points of functions 1–6. Panels (a)–(f) show the global minimum points for functions 1–6 separately, plotted in the (x1, x2) plane. The red pentagon indicates the actual global minimum point; the black dots indicate the global minimum points of the metamodels based on the sampling method SOSM; the blue squares indicate those based on LHD; and the magenta diamonds indicate those based on RND.

Figure 11: Function 1, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.

It is clear from Figure 9 that SOSM is a more robust sampling method than the other two sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima in Table 4 with the distribution of the global minima of the metamodels based on the sampling methods LHD and RND in Figure 9, their success rate is poor. In other words, SOSM performs very well in finding the global minimum of the metamodels.

In order to indicate the effectiveness of SOSM in seeking global minimum points, the positions of the global minimum points of the metamodels for functions 1–6 are shown in Figures 10(a)–10(f) separately. The red pentagon indicates the actual global minimum point. The black dots indicate the global minimum points of the metamodels based on the sampling method SOSM, the blue squares indicate those based on LHD, and the magenta diamonds indicate those based on RND. Figure 10 shows that the black dots are distributed densely around the red pentagon, while the blue squares and magenta diamonds are scattered around it, and the difference between the actual global minimum points and the global minimum points of the metamodels based on LHD or RND is mostly quite large. This demonstrates that SOSM is superior to LHD and RND for global optimization. Panels (a)–(d) of Figures 11, 12, 13, 14, 15, and 16 show the graphs of the actual functions and of the associated metamodels based on the three sampling methods separately. It can be observed intuitively that the metamodel surfaces obtained with the SOSM method are smoother than those of LHD and RND, and furthermore the global minima of the metamodels based on SOSM are consistent with the actual global minima. The conclusions reached above are confirmed further by comparing the actual function surfaces with the metamodel surfaces.

4.4.2. Comparison of the Performance between SOSM and Previous Sequential Sampling Methods. In this part, three different sequential sampling methods, namely, the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28], termed KSSM, and the successive local enumeration sampling method (SLE) [29], are used to construct metamodels in comparison with those constructed by SOSM. Similarly, the accuracy of the metamodels and the global minima of the functions are obtained and analyzed.

Figure 12: Function 2, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.

The accuracy measures RMSE and NRMSE and the global minimum summarized in Table 5 are mean values. Note that a value of zero for both accuracy measures, RMSE and NRMSE, would indicate a perfect fit.
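
For completeness, the sketch below shows one common way of computing these two accuracy measures on a set of validation points. It assumes the usual definitions (RMSE over the validation set, and NRMSE as RMSE normalized by the range of the true responses, in percent); the paper's exact NRMSE normalization is given in its earlier accuracy-assessment section, and the function names here are illustrative.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error over the validation points.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def nrmse_percent(y_true, y_pred):
    # One common normalization: RMSE divided by the range of the true
    # responses, expressed as a percentage (zero means a perfect fit).
    y_true = np.asarray(y_true)
    return 100.0 * rmse(y_true, y_pred) / (y_true.max() - y_true.min())
```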

From Table 5, the RMSE and NRMSE of the metamodels using the sampling method SOSM are smaller than those of the sequential sampling methods CV and KSSM for functions 1–5. For function 6 the values are close, and in terms of accuracy SLE performs best overall. That is, metamodels based on the sampling method SOSM may provide a better fit to the actual functions than CV and KSSM. However, the sampling method SLE performs unsatisfactorily in terms of exploring the global minimum. As seen in Table 5, in exploring the global minimum f_min of the metamodels, SOSM is superior to the previous sequential sampling method SLE through all numerical examples when compared with the actual global minimum f_min. In addition, the sequential sampling methods CV and KSSM perform as well as SOSM in exploring the global minimum; on this criterion, KSSM is in general the best, SOSM takes second place, and CV is the last. It can be concluded that the sequential sampling method SOSM proposed in this paper is the best choice considering both the accuracy of the metamodels and the exploration of the global minimum.

In order to represent the advantages and disadvantages of the different sequential sampling methods adequately, the metamodel accuracy results (RMSE and NRMSE) and the global minimum f_min over the twenty repeated runs with the sequential sampling methods, that is, SOSM, CV, KSSM, and SLE, are illustrated in Figures 17–19 with the help of boxplots.

From the results shown in Figures 17 and 18, it is found that the median values of RMSE and NRMSE based on SOSM are smaller than those of CV and KSSM for functions 1–5. For function 6 the median values of RMSE and NRMSE are a little larger, but very close. Except for function 5, the accuracy of the metamodels constructed by SLE is the best. In addition, the box size of RMSE and NRMSE based on SOSM is shorter than that of CV and KSSM for functions 1–4, while the box size of RMSE and NRMSE based on SLE is the shortest for all functions except functions 1 and 3. A small box suggests a small standard deviation of the results; a low standard deviation suggests small uncertainties in the predictions, and a large standard deviation suggests high uncertainty. The points outside the horizontal lines indicate poor individual experimental results. Above all, whether mean values, median values, or box sizes of the results are taken into account, the accuracy of the metamodels with the sampling method SOSM is superior to that of the sampling methods CV and KSSM.

Figure 13: Function 3, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.

Table 5: Metamodel accuracy results for functions 1–6 between SOSM and previous sequential sampling methods.

Number | SOSM: RMSE / NRMSE / f_min | CV [25]: RMSE / NRMSE / f_min | KSSM [28]: RMSE / NRMSE / f_min | SLE [29]: RMSE / NRMSE / f_min | Actual f_min
1 | 0.663 / 28.214 / -10.002 | 0.699 / 29.707 / -9.995 | 0.709 / 30.142 / -10.017 | 0.493 / 20.951 / -9.539 | -10
2 | 0.061 / 9.798 / -1.715 | 0.089 / 14.312 / -1.715 | 0.120 / 19.343 / -1.716 | 0.065 / 10.504 / -1.622 | -1.715
3 | 1.674 / 6.464 / -0.002 | 2.261 / 8.732 / 0.000 | 2.305 / 8.902 / -0.001 | 1.459 / 5.636 / 0.105 | 0
4 | 0.728 / 38.118 / -6.556 | 0.813 / 42.601 / -6.514 | 0.847 / 44.359 / -6.552 | 0.533 / 27.899 / -5.904 | -6.551
5 | 2.455 / 10.531 / -34.243 | 2.508 / 10.759 / -33.794 | 2.466 / 10.575 / -34.259 | 2.789 / 11.961 / -12.212 | -34.63
6 | 0.995 / 10.011 / 0.116 | 0.943 / 9.486 / 0.390 | 0.950 / 9.558 / 0.052 | 0.703 / 7.080 / 1.274 | 0

Certainly, the sampling method SLE performs best in terms of the accuracy of the metamodels; however, SLE is not good at exploring the global minima of the functions.

Figure 19 depicts the boxplots of the global minimum f_min obtained over the twenty numerical experiments. It is obvious from Figure 19 that the median values of SOSM, CV, and KSSM are approximately equal to the actual global minimum f_min. Meanwhile, the small sizes of the boxes imply a small standard deviation, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; therefore, a smaller standard deviation of the results implies a more robust algorithm. It is clear from Figure 19 that the sequential sampling methods, except SLE, are robust sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima in Table 5 with the distribution of the global minima of the metamodels based on the sampling method SLE in Figure 19, its success rate is poor. It is obvious that SOSM performs very well both in constructing accurate metamodels and in finding the global minima of the functions.

Figure 14: Function 4, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.

Six different sampling methods have been used to construct metamodels for six typical functions in this paper. It can be demonstrated that the sequential sampling methods perform better and more efficiently than the one-stage sampling methods. Furthermore, the sequential sampling techniques allow engineers to control the sampling process. In general, a one-stage sampling technique is not as good as the sequential sampling methods at fitting the area where the global minimum is located. In a word, SOSM, the sequential sampling method proposed in this paper, is the best choice considering both the accuracy of the metamodels and the location of the global minimum. In other words, SOSM is reliable over the whole fitting space and is also good at fitting the area where the global minimum is located.

4.4.3. Engineering Problem. The validity of the sequential sampling method SOSM proposed in this paper is tested on a typical mechanical design optimization problem involving four design variables, that is, pressure vessel design. This problem has been studied by many researchers [40–42]. The schematic of the pressure vessel is shown in Figure 20. In this case, a cylindrical pressure vessel with two hemispherical heads is designed for minimum fabrication cost. Four variables are identified: the thickness of the pressure vessel Ts, the thickness of the head Th, the inner radius of the pressure vessel R, and the length of the vessel without the heads L. In this case, the variable vector is given (in inches) by

X = (Ts, Th, R, L) = (x1, x2, x3, x4).   (15)

The objective function is the combined cost of materials, forming, and welding of the pressure vessel. The mathematical model of the optimization problem is expressed as

min  f(X) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3

s.t.  g1(X) = −x1 + 0.0193 x3 ≤ 0,
      g2(X) = −x2 + 0.00954 x3 ≤ 0,
      g3(X) = −π x3² x4 − (4/3) π x3³ + 1296000 ≤ 0,
      g4(X) = x4 − 240 ≤ 0.   (16)

Figure 15: Function 5, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.

Table 6: Comparison of optimal results for the design of a pressure vessel.

Design variable | Cao and Wu [42] | Kannan and Kramer [43] | Deb [44] | This paper using SOSM
x1 | 1.000 | 1.125 | 0.9375 | 1.000
x2 | 0.625 | 0.625 | 0.5000 | 0.625
x3 | 51.1958 | 58.291 | 48.3290 | 41.523
x4 | 60.7821 | 43.690 | 112.6790 | 120.562
f(X) | 7108.616 | 7198.042 | 6410.381 | 6237.402

The ranges of the design variables, x1 ∈ [0.0625, 6.25], x2 ∈ [0.0625, 6.25], x3 ∈ [6.25, 125], and x4 ∈ [6.25, 125], are used referring to the literature [40]. The problem formulated above is a simple nonlinear constrained problem. Now, assuming that the objective and constraint functions defined by (16) are computation-intensive functions, metamodels of the functions are constructed by RBF using the sequential sampling method SOSM.

This problem has been solved by many researchers: Cao and Wu [42] applied an evolutionary programming model, Kannan and Kramer [43] solved the problem using an augmented Lagrangian multiplier approach, and Deb [44] used a genetic adaptive search.

The average values of the optimal results from 50 runs are listed in Table 6 and compared with the three results reported in the literature [42–44]. It can be seen from the table that the optimal solution obtained in this paper is still about 2.7% superior to the best solution previously reported in the literature [44].
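
To make the model above concrete, the following sketch evaluates the cost function and constraint values of (16) for a candidate design. It assumes the standard pressure-vessel formulation as written in (16); the function names are illustrative, and the example point is the SOSM design reported in Table 6.

```python
import numpy as np

def vessel_cost(x):
    # Combined material, forming, and welding cost f(X) from (16).
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_constraints(x):
    # Constraint values g1..g4 from (16); a design is feasible when all are <= 0.
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1296000.0,
        x4 - 240.0,
    ])

# Design reported in Table 6 for SOSM: (Ts, Th, R, L) in inches.
x_sosm = np.array([1.000, 0.625, 41.523, 120.562])
print(vessel_cost(x_sosm))  # approximately 6237.4, matching Table 6
```

In a surrogate-based run, these two expensive-by-assumption functions would be replaced by their RBF metamodels, and vessel_constraints would be used to check the feasibility of any candidate design.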

5. Conclusions

In this paper, the sequential optimization sampling method (SOSM) for metamodels has been proposed. The recently developed extended radial basis functions (E-RBF) are introduced as the approximation method to construct the metamodels. Combining the new sampling strategy SOSM with the extended radial basis functions, the design algorithm for building metamodels is presented.

Figure 16: Function 6, actual and metamodel surfaces: (a) actual function, (b) SOSM, (c) LHD, (d) RND.

Figure 17: Assessment of metamodels' RMSE between SOSM and previous sequential sampling methods (boxplots of the RMSE of functions 1–6 for SOSM, CV, KSSM, and SLE).

In the SOSM, the optimal sampling points of the response surface are taken as the new sampling points in order to improve the local accuracy.

Figure 18: Assessment of metamodels' NRMSE between SOSM and previous sequential sampling methods (boxplots of the NRMSE of functions 1–6 for SOSM, CV, KSSM, and SLE).

In addition, new sampling points in the sparse region are required for a better global approximation. To determine the sparse region, the density function constructed by the radial basis function network has been applied.

Figure 19: Assessment of metamodels' global minimum between SOSM and previous sequential sampling methods (boxplots of the global minimum of functions 1–6 for SOSM, CV, KSSM, and SLE).

Figure 20: Diagram of the pressure vessel design (shell thickness Ts, head thickness Th, inner radius R, and length L).

In the proposed algorithm, the response surface is constructed repeatedly until the terminal criterion, that is, the maximum number of sampling points m_max, is satisfied.
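
As a rough illustration of the loop just described, the sketch below outlines one possible shape of the SOSM iteration. Here build_metamodel, find_metamodel_optima, and find_sparse_point are placeholders for the E-RBF fit, the search for the extremum points of the metamodel, and the density-function minimization described earlier in the paper; they are not implementations taken from it.

```python
import numpy as np

def sosm_loop(f, bounds, x_init, m_max,
              build_metamodel, find_metamodel_optima, find_sparse_point):
    # Schematic SOSM iteration: refit the metamodel, add its extremum points
    # (local refinement) and the minimum point of the density function
    # (global exploration), and stop once m_max samples have been evaluated.
    X = [np.asarray(x) for x in x_init]     # initial sampling plan
    y = [f(x) for x in X]
    while len(X) < m_max:
        surrogate = build_metamodel(np.array(X), np.array(y))
        candidates = list(find_metamodel_optima(surrogate, bounds))
        candidates.append(find_sparse_point(np.array(X), bounds))
        for x in candidates:
            if len(X) >= m_max:
                break
            X.append(np.asarray(x))
            y.append(f(x))
    return np.array(X), np.array(y)
```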

For the sake of examining the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for the accuracy of the metamodels, that is, RMSE and NRMSE, are employed. Meanwhile, the global minimum is introduced to judge the performance of the sampling methods for metamodels. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CV, KSSM, and SLE), SOSM results in more accurate metamodels. Furthermore, SOSM is superior in exploring the global minimum of the metamodels compared with the other five sampling methods.

The new sequential optimization sampling method SOSM provides another effective way of generating sampling points to construct metamodels. It is superior in seeking the global minimum compared with the previous sampling methods and can meanwhile improve the accuracy of the metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from the National Science Foundation (CMMI-51375389 and 51279165) is greatly acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences - Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.

[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275–286, 1999.

[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1–3, pp. 50–79, 2009.

[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Journal of Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61–70, 2011.

[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1–6, IEEE, January 2013.

[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198–207, 2013.

[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61–73, 2013.

[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194–206, 2013.

[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406–429, 2014.

[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210–226, 2013.

[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64–73, 2013.

[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914–924, 2014.

[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521–1529, 2013.

[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881–890, 2013.

[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39–46, 2012.

[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807–1815, 2013.

[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306–316, 2011.

[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764–1773, 2011.

[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.

[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Journal of Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1–13, 2001.

[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819–830, 2013.

[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254–260, 2013.

[23] L. Ke, H. B. Qiu, Z. Z. Chen, and L. Chi, "Engineering design based on Hammersley sequences sampling method and SVR," Advanced Materials Research, vol. 544, pp. 206–211, 2012.

[24] Y. Tang and H. Xu, "An effective construction method for multi-level uniform designs," Journal of Statistical Planning and Inference, vol. 143, no. 9, pp. 1583–1589, 2013.

[25] R. Jin, W. Chen, and A. Sudjianto, "On sequential sampling for global metamodeling in engineering design," in Proceedings of the ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 539–548, American Society of Mechanical Engineers, 2002.

[26] Y. M. Deng, Y. Zhang, and Y. C. Lam, "A hybrid of mode-pursuing sampling method and genetic algorithm for minimization of injection molding warpage," Materials & Design, vol. 31, no. 4, pp. 2118–2123, 2010.

[27] X. Wei, Y. Wu, and L. Chen, "A new sequential optimal sampling method for radial basis functions," Journal of Applied Mathematics and Computation, vol. 218, no. 19, pp. 9635–9646, 2012.

[28] S. Kitayama, M. Arakawa, and K. Yamazaki, "Sequential approximate optimization using radial basis function network for engineering optimization," Optimization and Engineering, vol. 12, no. 4, pp. 535–557, 2011.

[29] H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551–564, 2012.

[30] S. Kitayama, J. Srirat, and M. Arakawa, "Sequential approximate multi-objective optimization using radial basis function network," Structural and Multidisciplinary Optimization, vol. 48, no. 3, pp. 501–515, 2013.

[31] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, no. 8, pp. 1905–1915, 1971.

[32] A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306–1315, 2005.

[33] T. Krishnamurthy, "Response surface approximation with augmented and compactly supported radial basis functions," in Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, vol. 1748, pp. 3210–3224, April 2003, AIAA Paper.

[34] Z. Wu, "Compactly supported positive definite radial functions," Journal of Advances in Computational Mathematics, vol. 4, no. 1, pp. 283–292, 1995.

[35] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11–22, 1984.

[36] A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203–217, 2006.

[37] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, no. 4, pp. 370–380, 2007.

[38] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy, "Global and local optimization using radial basis function response surface models," Applied Mathematical Modelling, vol. 31, no. 10, pp. 2095–2110, 2007.

[39] P. Borges, T. Eid, and E. Bergseng, "Applying simulated annealing using different methods for the neighborhood search in forest planning problems," European Journal of Operational Research, vol. 233, no. 3, pp. 700–710, 2014.

[40] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.

[41] L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.

[42] Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443–446, Indianapolis, Ind, USA, April 1997.

[43] B. K. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, pp. 318–320, 1994.

[44] K. Deb, "GeneAS: a robust optimal design technique for mechanical component design," in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds., pp. 497–514, Springer, Berlin, Germany, 1997.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 9: Research Article A Sequential Optimization Sampling Method for …downloads.hindawi.com/journals/tswj/2014/192862.pdf · 2019-07-31 · stage. It is di cult for the one-stage sampling

The Scientific World Journal 9

minus015

minus01

minus005

0

005

01

minus02minus015 minus01 minus005 0 005 01

015

x1

x2

(a) Number 1

minus085 minus08 minus07 minus065minus02

minus01

0

01

02

03

x1

x2

minus075

(b) Number 2

minus001 minus0005

0005

001

minus001

minus0005

0005 001

x1

x2

0

0

(c) Number 3

0 01 02 03 04 05minus2

minus19

minus18

minus17

minus16

minus15

x1

x2

(d) Number 4

minus15 minus1 minus05 0 05 12

25

3

35

4

45

x1

x2

(e) Number 5

0

0minus15 minus1 minus05 05 1 15minus15

minus1

minus05

05

1

15

x1

x2

(f) Number 6

Figure 10 Global minima points of functions 1ndash6 (a)ndash(f) show the global minima points for functions 1ndash6 separately The red pentagonindicates the actual global minimum point The black dots indicate the global minima points of metamodels based on sampling methodSOSMThe blue square indicates the global minima points of metamodels based on sampling method LHDThemagenta diamond indicatesthe actual global minima points of metamodels based on sampling method RND

10 The Scientific World Journal

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(a) Actual function

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(b) SOSM

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(c) LHD

minus2

2

minus2

2

2

minus10

f(x)

x1

x2

(d) RND

Figure 11 Function 1 actual and metamodel surface

implies the robustness of the algorithm It is clear fromFigure 9 that the SOSM is a more robust sampling methodthan the other two sampling techniques under the parametersettings employed in this paper The success rate is badcomparing the actual global minimum from Table 4 withthe distribution of global minimum of metamodels basedon sampling methods LHD and RND from Figure 9 Inother words SOSM plays a perfect role in finding the globalminimum of metamodels

In order to indicate the effectiveness of SOSM in seekingglobal minimum points the positions of global minimumpoints of metamodels for functions 1ndash6 are shown in Figures10(a)ndash10(f) separately The red pentagon indicates the actualglobal minimum point The black dots indicate the globalminimum points of metamodels based on sampling methodSOSM The blue squares indicate the global minimumpoints of metamodels based on sampling method LHDThe magenta diamonds indicate the global minimum pointsof metamodels based on sampling method RND Figure 10shows that the black dots are distributed densely in the centerof red pentagon Meanwhile the blue squares and magentadiamonds are decentralized around the red pentagon andthe difference between actual global minimum points and

global minimum points of metamodels based on LHD orRND ismostly quite largeThe above demonstrate that SOSMis superior to LHD and RND on global optimization (a)ndash(d)in Figures 11 12 13 14 15 and 16 show the graphs of actualfunctions and of the associated metamodels based on threesampling methods separately It can be observed intuitionallythat the metamodels surface adopting SOSM method issmoother than LHD and RND and furthermore the globalminima of metamodels based on SOSM are consistent withthe actual global minima The conclusions reached above areidentified further by comparing the actual function surfacesto metamodels surfaces

442 Comparison of the Performance between SOSMand Pre-vious Sequential Sampling Methods In this part three differ-ent sequential sampling methods including cross-validationsampling method (CV) [25] sequential sampling methodproposed by Kitayama et al [28] termed KSSM and suc-cessive local enumeration sampling method (SLE) [29] areused to construct metamodels in comparison with thoseconstructed by SOSM Similarly the accuracy of metamodelsand global minimumof functions are obtained andmanaged

The Scientific World Journal 11

minus2

2

minus2

2

2

f(x)

x1x2

minus2

(a) Actual function

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(b) SOSM

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(c) LHD

minus22

minus2 minus2

2

2

f(x)

x1

x2

(d) RND

Figure 12 Function 2 actual and metamodel surface

The accuracy measures RMSE and NRMSE and global min-imum summarized in Table 5 are mean values Note that avalue of zero for both accuracy measures RMSE and NRMSEwould indicate a perfect fit

From Table 5 the RMSE and NRMSE of metamodelsusing samplingmethod SOSMare smaller than the sequentialsampling methods CV and KSSM for functions 1ndash5 Forfunction 6 the values are close The SLE performs greatestThat is metamodels based on sampling method SOSM mayprovide a better fit to actual functions than CV and KSSMHowever the samplingmethod SLEperforms unsatisfactorilyin terms of exploring global minimum As seen in Table 5on exploring global minimum

119891min of metamodels SOSM issuperior to the previous sequential sampling methods SLEthrough all numerical examples compared to the actual globalminimum119891min In addition the sequential samplingmethodsCV and KSSM perform as great as SOSM on exploring theglobal minimum In general the sequential samplingmethodKSSM is the best SOSM takes second place and CV isthe least It can be concluded that the sequential samplingmethod SOSM proposed in this paper is the best choiceconsidering the accuracy ofmetamodels and exploring globalminimum

In order to represent the advantages and disadvantages ofdifferent sequential samplingmethods adequately the twentytimes of metamodels accuracy results (RMSE and NRMSE)and globalminimum119891min with sequential samplingmethodsthat is SOSM CV KSSM and SLE are illustrated in Figures17ndash19 with the help of boxplot

From the results shown in Figures 17 and 18 it is foundthat themedian values of RMSE andNRMSE based on SOSMare smaller compared to CV and KSSM for functions 1ndash5 Forfunction 6 the median values of RMSE and NRMSE are alittle larger but very close Except for Function 5 the accuracyofmetamodels constructed by SLE is the best In addition thebox size of RMSE andNRMSEbased on SOSM is shorter thanCV and KSSM for functions 1ndash4 The box size of RMSE andNRMSE based on SLE is the shortest for all functions exceptfor functions 1 and 3 A small size of the box suggests smallstandard deviation of results and low standard deviation ofresults suggests small uncertainties in the predictions andlarge standard suggests high uncertainty on the contraryThepoints outside the horizontal lines indicate that the resultsof experiments are terrible Above all either mean values ormedian values and box sizes of results are taken into accountthe accuracy of metamodels with sampling method SOSM

12 The Scientific World Journal

minus08

08

40

minus08

08

0

f(x)

x1

x2

(a) Actual function

minus08

08

40

minus08

08

0

f(x)

x1

x2

(b) SOSM

minus08

08

40

minus08

08

0

f(x)

x1

x2

(c) LHD

minus08

08

40

minus08

08

0

f(x)

x1

x2

(d) RND

Figure 13 Function 3 actual and metamodel surface

Table 5 Metamodel accuracy results for functions 1ndash6 between SOSM and previous sequential sampling methods

Number SOSM CV [25] KSSM [28] SLE [29]119891minRMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min

1 0663 28214 minus10002 0699 29707 minus9995 0709 30142 minus10017 0493 20951 minus9539 minus102 0061 9798 minus1715 0089 14312 minus1715 0120 19343 minus1716 0065 10504 minus1622 minus17153 1674 6464 minus0002 2261 8732 0000 2305 8902 minus0001 1459 5636 0105 04 0728 38118 minus6556 0813 42601 minus6514 0847 44359 minus6552 0533 27899 minus5904 minus65515 2455 10531 minus34243 2508 10759 minus33794 2466 10575 minus34259 2789 11961 minus12212 minus34636 0995 10011 0116 0943 9486 0390 0950 9558 0052 0703 7080 1274 0

is superior to sampling methods CV and KSSM Certainlythe sampling method SLE performs greatest in terms ofthe accuracy of metamodels However SLE is not good atexploring the global minimum of functions

Figure 19 depicts the boxplot of global minimum

119891minobtained for twenty numerical experiments It is obviousfrom Figure 19 that the median value of SOSM CV andKSSM is probably equal to the actual global minimum 119891minMeanwhile the small sizes of boxes imply small standarddeviation which is also reflected by small differences betweenthemean andmedian valuesThe standard deviation of global

minimum is one of the important factors for evaluatingthe robustness of algorithm Therefore smaller standarddeviation of results implies the robustness of the algorithmIt is clear from Figure 19 that the sequential sampling meth-ods except SLE are robust sampling techniques under theparameter settings employed in this paper The success rateis bad comparing the actual global minimum from Table 5with the distribution of global minimum of metamodelsbased on sampling methods SLE from Figure 19 It is obviousthat SOSM plays a perfect role in constructing the accuratemetamodels and finding the global minimum of functions

The Scientific World Journal 13

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(a) Actual function

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(b) SOSM

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(c) LHD

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(d) RND

Figure 14 Function 4 actual and metamodel surface

Six various sampling methods have been used to con-struct metamodels for six typical functions in this paperIt can be demonstrated that sequential sampling methodsperform better and more efficient than one-stage samplingmethods Furthermore sequential sampling technique allowsengineers to control the sampling process In general one-stage sampling technique is not as good as sequential sam-pling methods for fitting the area where the global minimumlocates In a word SOSM as a sequential sampling methodproposed in this paper is the best choice considering theaccuracy of metamodels and locating global minimum Inother words SOSM is reliable in the whole fitting space andalso good for fitting the area where the global minimumlocates

443 Engineering Problem The validity of sequential sam-plingmethod SOSMproposed in this paper is tested by a typ-ical mechanical design optimization problem involving fourdesign variables that is pressure vessel design This problemhas been studied bymany researchers [40ndash42]The schematicof the pressure vessel is shown in Figure 20 In this case acylindrical pressure vessel with two hemispherical heads is

designed for minimum fabrication cost Four variables areidentified thickness of the pressure vessel 119879

119904 thickness of the

head 119879

ℎ inner radius of the pressure vessel 119877 and length of

the vessel without heads 119871 In this case the variable vectorsare given (in inches) by

119883 = (119879

119904 119879

ℎ 119877 119871) = (119909

1 119909

2 119909

3 119909

4) (15)

The objective function is the combined cost of materialsforming andwelding of the pressure vesselThemathematicalmodel of the optimization problem is expressed as

min 119891 (119883) = 06224119909

1119909

3119909

4+ 17781119909

2119909

2

3+ 31661119909

2

1119909

4

+ 1984119909

2

1119909

3

st 119892

1 (119883) = minus119909

1+ 00193119909

3le 0

119892

2 (119883) = minus119909

2+ 000954119909

3le 0

119892

3 (119883) = minus120587119909

2

3119909

4minus

4

3120587119909

3

3

+ 129600 le 0

119892

4 (119883) = 119909

4minus 240 le 0

(16)

14 The Scientific World Journal

35

minus356

6

minus6 minus6

f(x)

x1

x2

(a) Actual function

35

minus356

6

minus6 minus6

f(x)

x1x2

(b) SOSM

35

minus35

6

6

minus6 minus6

f(x)

x1

x2

(c) LHD

35

minus356

6

minus6 minus6

f(x)

x1

x2

(d) RND

Figure 15 Function 5 actual and metamodel surface

Table 6 Comparison of optimal results for the design of a pressure vessel

Design variables Cao and Wu [42] Kannan and Kramer [43] Deb [44] This paper using SOSM119909

11000 1125 09375 1000

119909

20625 0625 05000 0625

119909

3511958 58291 483290 41523

119909

4607821 43690 1126790 120562

119891(119883) 7108616 7198042 6410381 6237402

The ranges of design variables 119909

1isin [00625 625] 119909

2isin

[00625 625] 1199093isin [625 125] and 119909

4isin [625 125] are used

referring to the literature [40]The problem formulated above is a simple nonlinear con-

strained problem Now assuming that the objective and con-straint functions defined by (16) are computation-intensivefunctions hencemetamodels of functions are constructed byRBF using the sequential sampling method SOSM

This problem has been solved by many researchersincluding Cao and Wu [42] applying an evolutionary pro-gramming model Kannan and Kramer [43] solved theproblemusing an augmented Lagrangianmultiplier approachand Deb [44] using a genetic adaptive search

The average values of optimal results from 50 runs arelisted in Table 6 compared with the three results reported inthe literature [42ndash44] It can be seen from the table that theoptimal solution in this paper is still about 27 superior tothe best solution previously reported in the literature [44]

5 Conclusions

In this paper the sequential optimization sampling method(SOSM) for metamodels has been proposed The recentlydeveloped extended radial basis functions (E-RBF) are intro-duced as an approximated method to construct the meta-models Combining the new sampling strategy SOSM with

The Scientific World Journal 15

12

minus3120587 minus3120587

03120587

3120587

x1

x2

f(x)

(a) Actual function

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(b) SOSM

3120587

3120587

f(x)

12

minus3120587 minus3120587

0

x1

x2

(c) LHD

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(d) RND

Figure 16 Function 6 actual and metamodel surface

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Function 1 Function 2

Function 3

Function 4

Function 5

Function 6

RMSE

of f

unct

ions

1ndash6

0

05

1

15

2

25

3

35

Figure 17 Assessment of metamodels RMSE between SOSM andprevious sequential sampling methods

the extended radial basis functions the design algorithmfor building metamodels is presented In the SOSM theoptimal sampling points of response surface are taken as thenew sampling points in order to improve the local accuracy

Function 1 Function 2 Function 3

Function 4

Function 5 Function 6

10

20

30

40

50

60

70

NRM

SE o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 18 Assessment of metamodels NRMSE between SOSM andprevious sequential sampling methods

In addition new sampling points in the sparse region arerequired for better global approximation To determine thesparse region the density function constructed by the radialbasis functions network has been applied In the proposed

16 The Scientific World Journal

minus35

minus30

minus25

minus20

minus15

minus10

minus5

0Function 1 Function 2 Function 3 Function 4 Function 5

Function 6

Glo

bal m

inim

um o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 19 Assessment of metamodels global minimum betweenSOSM and previous sequential sampling methods

TsTh

R

L

Figure 20 Diagram of pressure vessel design

algorithm the response surface is constructed repeatedlyuntil the terminal criterion that is the maximum number ofsampling points 119898max is satisfied

For the sake of examining the validity of the proposedSOSM sampling method six typical mathematical functionshave been tested The assessment measures for accuracyof metamodels that is RMSE and NRMSE are employedMeanwhile global minimum is introduced to judge theperformance of sampling methods for metamodels Theproposed sampling method SOSM is successfully imple-mented and the results are analyzed comprehensively andthoroughly In contrast to the one-stage sampling methods(LHD and RND) and sequential sampling methods (CVKSSM and SLE) the SOSM results in more accurate meta-models Furthermore SOSM is superior in exploring theglobal minimum of metamodels compared to the other fivesampling methods

The new sequential optimization sampling methodSOSM has been proposed which provides another effectiveway for generating sampling points to construct metamodelsIt is superior in seeking global minimum compared to theprevious sampling methods and meanwhile can improvethe accuracy of metamodels Therefore the sequential opti-mization sampling method SOSM significantly outperformsthe previous sampling methods

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to thank everybody for their encour-agement and support The Grant support from National Sci-ence Foundation (CMMI-51375389 and 51279165) is greatlyacknowledged

References

[1] L Gu ldquoA comparison of polynomial based regressionmodels invehicle safety analysisrdquo in Proceedingsof the ASME Design Engi-neering Technical Conferences-Design Automation Conference (DAC rsquo01) A Diaz Ed ASME Pittsburgh Pa USA September2001

[2] P N Koch TW Simpson J K Allen and FMistree ldquoStatisticalapproximations for multidisciplinary design optimization theproblem of sizerdquo Journal of Aircraft vol 36 no 1 pp 275ndash2861999

[3] A I J Forrester and A J Keane ldquoRecent advances in surrogate-based optimizationrdquo Progress in Aerospace Sciences vol 45 no1ndash3 pp 50ndash79 2009

[4] Y C Jin ldquoSurrogate-assisted evolutionary computation recentadvances and future challengesrdquo Journal of Swarm and Evolu-tionary Computation vol 1 no 2 pp 61ndash70 2011

[5] L Lafi J Feki and S Hammoudi ldquoMetamodel matchingtechniques evaluation and benchmarkingrdquo in Proceedings of theInternational Conference on Computer Applications Technology(ICCAT rsquo13) pp 1ndash6 IEEE January 2013

[6] M Chih ldquoA more accurate second-order polynomial meta-model using a pseudo-random number assignment strategyrdquoJournal of the Operational Research Society vol 64 no 2 pp198ndash207 2013

[7] L Laurent P-A Boucard and B Soulier ldquoA dedicated mul-tiparametric strategy for the fast construction of a cokrigingmetamodelrdquoComputers and Structures vol 124 pp 61ndash73 2013

[8] G S Babu and S Suresh ldquoSequential projection-basedmetacognitive learning in a radial basis function network forclassification problemsrdquo IEEE Transactions on Neural Networksand Learning Systems vol 24 no 2 pp 194ndash206 2013

[9] W Yao X Q Chen Y Y Huang and M van Tooren ldquoAsurrogate-based optimizationmethodwithRBFneural networkenhanced by linear interpolation and hybrid infill strategyrdquoOptimization Methods amp Software vol 29 no 2 pp 406ndash4292014

[10] N Vukovic and Z Miljkovic ldquoA growing and pruning sequen-tial learning algorithm of hyper basis function neural networkfor function approximationrdquo Neural Networks vol 46 pp 210ndash226 2013

[11] I Couckuyt S Koziel and T Dhaene ldquoSurrogate modeling ofmicrowave structures using kriging co-kriging and spacemap-pingrdquo International Journal of Numerical Modelling ElectronicNetworks Devices and Fields vol 26 no 1 pp 64ndash73 2013

[12] A Ozmen and G W Weber ldquoRMARS robustification ofmultivariate adaptive regression spline under polyhedral uncer-taintyrdquo Journal of Computational and Applied Mathematics vol259 pp 914ndash924 2014

[13] A J Makadia and J I Nanavati ldquoOptimisation of machiningparameters for turning operations based on response surfacemethodologyrdquoMeasurement vol 46 no 4 pp 1521ndash1529 2013

[14] M Sabzekar and M Naghibzadeh ldquoFuzzy c-means improve-ment using relaxed constraints support vector machinesrdquoApplied Soft Computing vol 13 no 2 pp 881ndash890 2013



[Figure 11: Function 1, actual and metamodel surfaces over (x1, x2): (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

A smaller standard deviation of the results implies a more robust algorithm. It is clear from Figure 9 that SOSM is a more robust sampling method than the other two sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima in Table 4 with the distributions of the global minima of the metamodels based on LHD and RND in Figure 9, the success rate of those two methods is poor. In other words, SOSM performs very well in finding the global minimum of the metamodels.

To illustrate the effectiveness of SOSM in locating global minimum points, the positions of the global minimum points of the metamodels for functions 1-6 are shown in Figures 10(a)-10(f). The red pentagon marks the actual global minimum point; the black dots mark the global minimum points of the metamodels based on SOSM; the blue squares mark those based on LHD; and the magenta diamonds mark those based on RND. Figure 10 shows that the black dots cluster densely around the red pentagon, whereas the blue squares and magenta diamonds are scattered around it, and the difference between the actual global minimum points and

the global minimum points of the metamodels based on LHD or RND is mostly quite large. These observations demonstrate that SOSM is superior to LHD and RND for global optimization. Panels (a)-(d) of Figures 11, 12, 13, 14, 15, and 16 show the surfaces of the actual functions and of the associated metamodels built with the three sampling methods. It can be observed that the metamodel surfaces obtained with SOSM are smoother than those obtained with LHD and RND, and that the global minima of the SOSM-based metamodels are consistent with the actual global minima. Comparing the actual function surfaces with the metamodel surfaces thus further confirms the conclusions reached above.

4.4.2. Comparison of the Performance between SOSM and Previous Sequential Sampling Methods. In this part, three different sequential sampling methods, namely, the cross-validation sampling method (CV) [25], the sequential sampling method proposed by Kitayama et al. [28], termed KSSM, and the successive local enumeration sampling method (SLE) [29], are used to construct metamodels for comparison with those constructed by SOSM. As before, the accuracy of the metamodels and the global minima of the functions are obtained and recorded.

[Figure 12: Function 2, actual and metamodel surfaces over (x1, x2): (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

The accuracy measures RMSE and NRMSE and the global minima summarized in Table 5 are mean values. Note that a value of zero for both accuracy measures, RMSE and NRMSE, would indicate a perfect fit.
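For readers who want to reproduce the accuracy assessment, the two error measures can be computed from a set of test points as sketched below. This is a minimal sketch: the paper's own NRMSE normalization is defined earlier in the article and may differ from the range-based percentage used here, and the test data and the `predict` function named in the comments are illustrative placeholders rather than part of the original study.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error over a set of test points."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def nrmse_percent(y_true, y_pred):
    """RMSE normalized by the range of the true responses, in percent.
    One common convention; the paper's own normalization may differ."""
    y_true = np.asarray(y_true, float)
    return 100.0 * rmse(y_true, y_pred) / (y_true.max() - y_true.min())

# Illustrative usage with a hypothetical metamodel `predict` and test function `f_actual`:
# x_test = np.random.uniform(-2.0, 2.0, size=(500, 2))
# print(rmse(f_actual(x_test), predict(x_test)), nrmse_percent(f_actual(x_test), predict(x_test)))
```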

From Table 5, the RMSE and NRMSE of the metamodels built with SOSM are smaller than those of the sequential sampling methods CV and KSSM for functions 1-5; for function 6 the values are close, and SLE performs best. That is, metamodels based on SOSM may provide a better fit to the actual functions than CV and KSSM. However, SLE performs unsatisfactorily in terms of exploring the global minimum. As seen in Table 5, in exploring the global minimum f_min of the metamodels, SOSM is superior to the previous sequential sampling method SLE on all numerical examples when compared with the actual global minimum f_min. In addition, the sequential sampling methods CV and KSSM perform as well as SOSM in exploring the global minimum; among those three, KSSM is the best, SOSM takes second place, and CV is the weakest. It can be concluded that the sequential sampling method SOSM proposed in this paper is the best choice when both the accuracy of the metamodels and the exploration of the global minimum are considered.

In order to represent the advantages and disadvantages of the different sequential sampling methods adequately, the metamodel accuracy results (RMSE and NRMSE) and the global minimum f_min obtained over twenty runs with the sequential sampling methods, that is, SOSM, CV, KSSM, and SLE, are illustrated as boxplots in Figures 17-19.

From the results shown in Figures 17 and 18, the median values of RMSE and NRMSE based on SOSM are smaller than those of CV and KSSM for functions 1-5; for function 6 the median values are slightly larger but very close. Except for function 5, the accuracy of the metamodels constructed by SLE is the best. In addition, the boxes of RMSE and NRMSE based on SOSM are shorter than those of CV and KSSM for functions 1-4, while the boxes based on SLE are the shortest for all functions except functions 1 and 3. A small box indicates a small standard deviation of the results and hence small uncertainty in the predictions, whereas a large box indicates high uncertainty; the points outside the whiskers mark runs with poor results. Above all, whether mean values or median values and box sizes are considered, the accuracy of the metamodels obtained with SOSM is superior to that obtained with CV and KSSM.

[Figure 13: Function 3, actual and metamodel surfaces over (x1, x2): (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Table 5: Metamodel accuracy results for functions 1-6: SOSM and previous sequential sampling methods (mean values; the last column is the actual global minimum f_min).

No. | SOSM: RMSE, NRMSE, f_min | CV [25]: RMSE, NRMSE, f_min | KSSM [28]: RMSE, NRMSE, f_min | SLE [29]: RMSE, NRMSE, f_min | Actual f_min
1   | 0.663, 28.214, -10.002   | 0.699, 29.707, -9.995       | 0.709, 30.142, -10.017        | 0.493, 20.951, -9.539        | -10
2   | 0.061, 9.798, -1.715     | 0.089, 14.312, -1.715       | 0.120, 19.343, -1.716         | 0.065, 10.504, -1.622        | -1.715
3   | 1.674, 6.464, -0.002     | 2.261, 8.732, 0.000         | 2.305, 8.902, -0.001          | 1.459, 5.636, 0.105          | 0
4   | 0.728, 38.118, -6.556    | 0.813, 42.601, -6.514       | 0.847, 44.359, -6.552         | 0.533, 27.899, -5.904        | -6.551
5   | 2.455, 10.531, -34.243   | 2.508, 10.759, -33.794      | 2.466, 10.575, -34.259        | 2.789, 11.961, -12.212       | -34.63
6   | 0.995, 10.011, 0.116     | 0.943, 9.486, 0.390         | 0.950, 9.558, 0.052           | 0.703, 7.080, 1.274          | 0

Certainly, the sampling method SLE performs best in terms of the accuracy of the metamodels; however, SLE is not good at exploring the global minimum of the functions.

Figure 19 depicts the boxplots of the global minimum f_min obtained over the twenty numerical experiments. It is obvious from Figure 19 that the median values for SOSM, CV, and KSSM are approximately equal to the actual global minimum f_min. Meanwhile, the small box sizes imply small standard deviations, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm; a smaller standard deviation of the results implies a more robust algorithm. It is clear from Figure 19 that the sequential sampling methods, except SLE, are robust sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima in Table 5 with the distribution of the global minima of the metamodels based on SLE in Figure 19, the success rate of SLE is poor. It is obvious that SOSM performs very well both in constructing accurate metamodels and in finding the global minima of the functions.
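The robustness argument rests on the spread of the twenty repeated runs. Below is a minimal sketch of how the per-method summaries behind Figures 17-19 (mean, median, standard deviation, and the points beyond the whiskers) could be computed; the input is a hypothetical vector of twenty per-run results, not data from the paper.

```python
import numpy as np

def summarize_runs(values):
    """Summary statistics for one sampling method over repeated runs
    (e.g., twenty RMSE values or twenty estimated global minima)."""
    v = np.asarray(values, dtype=float)
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    # Points beyond the whiskers of a standard boxplot (1.5 * IQR rule)
    outliers = v[(v < q1 - 1.5 * iqr) | (v > q3 + 1.5 * iqr)]
    return {"mean": v.mean(), "median": med, "std": v.std(ddof=1),
            "iqr": iqr, "outliers": outliers}

# Hypothetical usage: rmse_runs_sosm would hold twenty RMSE values for one function.
# stats = summarize_runs(rmse_runs_sosm)
# A small std/IQR corresponds to a short box in Figures 17-19, i.e., a more robust method.
```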

[Figure 14: Function 4, actual and metamodel surfaces over (x1, x2): (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Six different sampling methods have been used to construct metamodels for six typical functions in this paper. The results demonstrate that sequential sampling methods perform better and more efficiently than one-stage sampling methods. Furthermore, sequential sampling allows engineers to control the sampling process. In general, one-stage sampling is not as good as sequential sampling at fitting the region where the global minimum is located. In a word, SOSM, the sequential sampling method proposed in this paper, is the best choice when both the accuracy of the metamodels and the location of the global minimum are considered; it is reliable in the whole fitting space and also good at fitting the region around the global minimum.

4.4.3. Engineering Problem. The validity of the sequential sampling method SOSM proposed in this paper is tested on a typical mechanical design optimization problem involving four design variables, that is, pressure vessel design. This problem has been studied by many researchers [40-42]. The schematic of the pressure vessel is shown in Figure 20. In this case, a cylindrical pressure vessel with two hemispherical heads is designed for minimum fabrication cost. Four variables are identified: the thickness of the pressure vessel Ts, the thickness of the head Th, the inner radius of the pressure vessel R, and the length of the vessel without the heads L. The variable vector is given (in inches) by

X = (Ts, Th, R, L) = (x1, x2, x3, x4).  (15)

The objective function is the combined cost of the materials, forming, and welding of the pressure vessel. The mathematical model of the optimization problem is expressed as

min f(X) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3

s.t. g1(X) = -x1 + 0.0193 x3 <= 0,
     g2(X) = -x2 + 0.00954 x3 <= 0,
     g3(X) = -pi x3^2 x4 - (4/3) pi x3^3 + 129600 <= 0,
     g4(X) = x4 - 240 <= 0.  (16)
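For completeness, (16) can be transcribed directly into code. The sketch below is only a plain transcription of the printed objective and constraints (including the constant 129600 in g3 as printed); it is not the E-RBF metamodel, merely the functions that are treated as computation-intensive and approximated in the study.

```python
import math

def pressure_vessel_cost(x):
    """Objective of (16): fabrication cost; x = (Ts, Th, R, L) = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    """Constraints g1..g4 of (16); each value must be <= 0 for a feasible design."""
    x1, x2, x3, x4 = x
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 129600.0  # constant as printed
    g4 = x4 - 240.0
    return [g1, g2, g3, g4]
```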

[Figure 15: Function 5, actual and metamodel surfaces over (x1, x2): (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

Table 6: Comparison of optimal results for the design of a pressure vessel.

Design variables | Cao and Wu [42] | Kannan and Kramer [43] | Deb [44]  | This paper, using SOSM
x1               | 1.000           | 1.125                  | 0.9375    | 1.000
x2               | 0.625           | 0.625                  | 0.5000    | 0.625
x3               | 51.1958         | 58.291                 | 48.3290   | 41.523
x4               | 60.7821         | 43.690                 | 112.6790  | 120.562
f(X)             | 7108.616        | 7198.042               | 6410.381  | 6237.402

The ranges of the design variables, x1 in [0.0625, 6.25], x2 in [0.0625, 6.25], x3 in [6.25, 125], and x4 in [6.25, 125], are used following the literature [40]. The problem formulated above is a simple nonlinear constrained problem. Assuming now that the objective and constraint functions defined by (16) are computation-intensive, metamodels of these functions are constructed by RBF using the sequential sampling method SOSM.

This problem has been solved by many researchers: Cao and Wu [42] applied an evolutionary programming model, Kannan and Kramer [43] used an augmented Lagrangian multiplier approach, and Deb [44] used a genetic adaptive search.

The average values of the optimal results over 50 runs are listed in Table 6 and compared with the three results reported in the literature [42-44]. It can be seen from the table that the optimal solution obtained in this paper is about 2.7% better than the best solution previously reported in the literature [44].
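As a quick consistency check, substituting the SOSM design reported in Table 6 into the formulation (16) as printed reproduces the reported cost of about 6237.4 and satisfies all four printed constraints. The self-contained snippet below only re-evaluates the formulas; it does not rerun the optimization.

```python
import math

# SOSM design from Table 6: (Ts, Th, R, L)
x1, x2, x3, x4 = 1.000, 0.625, 41.523, 120.562

cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
        + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
g = [-x1 + 0.0193 * x3,
     -x2 + 0.00954 * x3,
     -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 129600.0,
     x4 - 240.0]

print(round(cost, 1))            # about 6237.4, matching the f(X) column of Table 6
print(all(gi <= 0 for gi in g))  # True under the constraint constants as printed in (16)
```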

5. Conclusions

In this paper, the sequential optimization sampling method (SOSM) for metamodels has been proposed. The recently developed extended radial basis functions (E-RBF) are introduced as the approximation method used to construct the metamodels.

[Figure 16: Function 6, actual and metamodel surfaces over (x1, x2): (a) actual function, (b) SOSM, (c) LHD, (d) RND.]

[Figure 17: Assessment of metamodel RMSE for functions 1-6: boxplots for SOSM, CV, KSSM, and SLE.]

Combining the new SOSM sampling strategy with the extended radial basis functions, the design algorithm for building metamodels is presented. In the SOSM, the optimal sampling points of the response surface are taken as new sampling points in order to improve the local accuracy.

[Figure 18: Assessment of metamodel NRMSE for functions 1-6: boxplots for SOSM, CV, KSSM, and SLE.]

In addition, new sampling points in the sparse region are required for better global approximation. To determine the sparse region, the density function constructed by the radial basis function network has been applied.

[Figure 19: Assessment of metamodel global minimum for functions 1-6: boxplots for SOSM, CV, KSSM, and SLE.]

[Figure 20: Diagram of the pressure vessel design, showing the shell thickness Ts, head thickness Th, inner radius R, and length L.]

In the proposed algorithm, the response surface is constructed repeatedly until the termination criterion, that is, the maximum number of sampling points m_max, is satisfied.

To examine the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for the accuracy of the metamodels, that is, RMSE and NRMSE, are employed, and the global minimum is introduced to judge the performance of the sampling methods. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CV, KSSM, and SLE), SOSM results in more accurate metamodels. Furthermore, SOSM is superior in exploring the global minimum of the metamodels compared with the other five sampling methods.

In summary, the new sequential optimization sampling method SOSM provides another effective way of generating sampling points to construct metamodels. It is superior in seeking the global minimum compared with the previous sampling methods and meanwhile improves the accuracy of the metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from the National Science Foundation (CMMI-51375389 and 51279165) is gratefully acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences - Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.

[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275-286, 1999.

[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1-3, pp. 50-79, 2009.

[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Journal of Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61-70, 2011.

[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1-6, IEEE, January 2013.

[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198-207, 2013.

[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61-73, 2013.

[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194-206, 2013.

[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406-429, 2014.

[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210-226, 2013.

[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64-73, 2013.

[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914-924, 2014.

[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521-1529, 2013.

[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881-890, 2013.

[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39-46, 2012.

[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807-1815, 2013.

[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306-316, 2011.

[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764-1773, 2011.

[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409-423, 1989.

[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Journal of Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1-13, 2001.

[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819-830, 2013.

[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254-260, 2013.

[23] L. Ke, H. B. Qiu, Z. Z. Chen, and L. Chi, "Engineering design based on Hammersley sequences sampling method and SVR," Advanced Materials Research, vol. 544, pp. 206-211, 2012.

[24] Y. Tang and H. Xu, "An effective construction method for multi-level uniform designs," Journal of Statistical Planning and Inference, vol. 143, no. 9, pp. 1583-1589, 2013.

[25] R. Jin, W. Chen, and A. Sudjianto, "On sequential sampling for global metamodeling in engineering design," in Proceedings of the ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 539-548, American Society of Mechanical Engineers, 2002.

[26] Y. M. Deng, Y. Zhang, and Y. C. Lam, "A hybrid of mode-pursuing sampling method and genetic algorithm for minimization of injection molding warpage," Materials & Design, vol. 31, no. 4, pp. 2118-2123, 2010.

[27] X. Wei, Y. Wu, and L. Chen, "A new sequential optimal sampling method for radial basis functions," Journal of Applied Mathematics and Computation, vol. 218, no. 19, pp. 9635-9646, 2012.

[28] S. Kitayama, M. Arakawa, and K. Yamazaki, "Sequential approximate optimization using radial basis function network for engineering optimization," Optimization and Engineering, vol. 12, no. 4, pp. 535-557, 2011.

[29] H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551-564, 2012.

[30] S. Kitayama, J. Srirat, and M. Arakawa, "Sequential approximate multi-objective optimization using radial basis function network," Structural and Multidisciplinary Optimization, vol. 48, no. 3, pp. 501-515, 2013.

[31] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, no. 8, pp. 1905-1915, 1971.

[32] A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306-1315, 2005.

[33] T. Krishnamurthy, "Response surface approximation with augmented and compactly supported radial basis functions," in Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, vol. 1748, pp. 3210-3224, April 2003, AIAA Paper.

[34] Z. Wu, "Compactly supported positive definite radial functions," Journal of Advances in Computational Mathematics, vol. 4, no. 1, pp. 283-292, 1995.

[35] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11-22, 1984.

[36] A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203-217, 2006.

[37] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, no. 4, pp. 370-380, 2007.

[38] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy, "Global and local optimization using radial basis function response surface models," Applied Mathematical Modelling, vol. 31, no. 10, pp. 2095-2110, 2007.

[39] P. Borges, T. Eid, and E. Bergseng, "Applying simulated annealing using different methods for the neighborhood search in forest planning problems," European Journal of Operational Research, vol. 233, no. 3, pp. 700-710, 2014.

[40] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113-127, 2000.

[41] L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676-1683, 2010.

[42] Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443-446, Indianapolis, Ind, USA, April 1997.

[43] B. K. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, pp. 318-320, 1994.

[44] K. Deb, "GeneAS: a robust optimal design technique for mechanical component design," in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds., pp. 497-514, Springer, Berlin, Germany, 1997.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 11: Research Article A Sequential Optimization Sampling Method for …downloads.hindawi.com/journals/tswj/2014/192862.pdf · 2019-07-31 · stage. It is di cult for the one-stage sampling

The Scientific World Journal 11

minus2

2

minus2

2

2

f(x)

x1x2

minus2

(a) Actual function

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(b) SOSM

minus2

2

minus2

2

2

f(x)

x1

x2

minus2

(c) LHD

minus22

minus2 minus2

2

2

f(x)

x1

x2

(d) RND

Figure 12 Function 2 actual and metamodel surface

The accuracy measures RMSE and NRMSE and global min-imum summarized in Table 5 are mean values Note that avalue of zero for both accuracy measures RMSE and NRMSEwould indicate a perfect fit

From Table 5 the RMSE and NRMSE of metamodelsusing samplingmethod SOSMare smaller than the sequentialsampling methods CV and KSSM for functions 1ndash5 Forfunction 6 the values are close The SLE performs greatestThat is metamodels based on sampling method SOSM mayprovide a better fit to actual functions than CV and KSSMHowever the samplingmethod SLEperforms unsatisfactorilyin terms of exploring global minimum As seen in Table 5on exploring global minimum

119891min of metamodels SOSM issuperior to the previous sequential sampling methods SLEthrough all numerical examples compared to the actual globalminimum119891min In addition the sequential samplingmethodsCV and KSSM perform as great as SOSM on exploring theglobal minimum In general the sequential samplingmethodKSSM is the best SOSM takes second place and CV isthe least It can be concluded that the sequential samplingmethod SOSM proposed in this paper is the best choiceconsidering the accuracy ofmetamodels and exploring globalminimum

In order to represent the advantages and disadvantages ofdifferent sequential samplingmethods adequately the twentytimes of metamodels accuracy results (RMSE and NRMSE)and globalminimum119891min with sequential samplingmethodsthat is SOSM CV KSSM and SLE are illustrated in Figures17ndash19 with the help of boxplot

From the results shown in Figures 17 and 18 it is foundthat themedian values of RMSE andNRMSE based on SOSMare smaller compared to CV and KSSM for functions 1ndash5 Forfunction 6 the median values of RMSE and NRMSE are alittle larger but very close Except for Function 5 the accuracyofmetamodels constructed by SLE is the best In addition thebox size of RMSE andNRMSEbased on SOSM is shorter thanCV and KSSM for functions 1ndash4 The box size of RMSE andNRMSE based on SLE is the shortest for all functions exceptfor functions 1 and 3 A small size of the box suggests smallstandard deviation of results and low standard deviation ofresults suggests small uncertainties in the predictions andlarge standard suggests high uncertainty on the contraryThepoints outside the horizontal lines indicate that the resultsof experiments are terrible Above all either mean values ormedian values and box sizes of results are taken into accountthe accuracy of metamodels with sampling method SOSM

12 The Scientific World Journal

minus08

08

40

minus08

08

0

f(x)

x1

x2

(a) Actual function

minus08

08

40

minus08

08

0

f(x)

x1

x2

(b) SOSM

minus08

08

40

minus08

08

0

f(x)

x1

x2

(c) LHD

minus08

08

40

minus08

08

0

f(x)

x1

x2

(d) RND

Figure 13 Function 3 actual and metamodel surface

Table 5 Metamodel accuracy results for functions 1ndash6 between SOSM and previous sequential sampling methods

Number SOSM CV [25] KSSM [28] SLE [29]119891minRMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min

1 0663 28214 minus10002 0699 29707 minus9995 0709 30142 minus10017 0493 20951 minus9539 minus102 0061 9798 minus1715 0089 14312 minus1715 0120 19343 minus1716 0065 10504 minus1622 minus17153 1674 6464 minus0002 2261 8732 0000 2305 8902 minus0001 1459 5636 0105 04 0728 38118 minus6556 0813 42601 minus6514 0847 44359 minus6552 0533 27899 minus5904 minus65515 2455 10531 minus34243 2508 10759 minus33794 2466 10575 minus34259 2789 11961 minus12212 minus34636 0995 10011 0116 0943 9486 0390 0950 9558 0052 0703 7080 1274 0

is superior to sampling methods CV and KSSM Certainlythe sampling method SLE performs greatest in terms ofthe accuracy of metamodels However SLE is not good atexploring the global minimum of functions

Figure 19 depicts the boxplot of global minimum

119891minobtained for twenty numerical experiments It is obviousfrom Figure 19 that the median value of SOSM CV andKSSM is probably equal to the actual global minimum 119891minMeanwhile the small sizes of boxes imply small standarddeviation which is also reflected by small differences betweenthemean andmedian valuesThe standard deviation of global

minimum is one of the important factors for evaluatingthe robustness of algorithm Therefore smaller standarddeviation of results implies the robustness of the algorithmIt is clear from Figure 19 that the sequential sampling meth-ods except SLE are robust sampling techniques under theparameter settings employed in this paper The success rateis bad comparing the actual global minimum from Table 5with the distribution of global minimum of metamodelsbased on sampling methods SLE from Figure 19 It is obviousthat SOSM plays a perfect role in constructing the accuratemetamodels and finding the global minimum of functions

The Scientific World Journal 13

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(a) Actual function

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(b) SOSM

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(c) LHD

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(d) RND

Figure 14 Function 4 actual and metamodel surface

Six various sampling methods have been used to con-struct metamodels for six typical functions in this paperIt can be demonstrated that sequential sampling methodsperform better and more efficient than one-stage samplingmethods Furthermore sequential sampling technique allowsengineers to control the sampling process In general one-stage sampling technique is not as good as sequential sam-pling methods for fitting the area where the global minimumlocates In a word SOSM as a sequential sampling methodproposed in this paper is the best choice considering theaccuracy of metamodels and locating global minimum Inother words SOSM is reliable in the whole fitting space andalso good for fitting the area where the global minimumlocates

443 Engineering Problem The validity of sequential sam-plingmethod SOSMproposed in this paper is tested by a typ-ical mechanical design optimization problem involving fourdesign variables that is pressure vessel design This problemhas been studied bymany researchers [40ndash42]The schematicof the pressure vessel is shown in Figure 20 In this case acylindrical pressure vessel with two hemispherical heads is

designed for minimum fabrication cost Four variables areidentified thickness of the pressure vessel 119879

119904 thickness of the

head 119879

ℎ inner radius of the pressure vessel 119877 and length of

the vessel without heads 119871 In this case the variable vectorsare given (in inches) by

119883 = (119879

119904 119879

ℎ 119877 119871) = (119909

1 119909

2 119909

3 119909

4) (15)

The objective function is the combined cost of materialsforming andwelding of the pressure vesselThemathematicalmodel of the optimization problem is expressed as

min 119891 (119883) = 06224119909

1119909

3119909

4+ 17781119909

2119909

2

3+ 31661119909

2

1119909

4

+ 1984119909

2

1119909

3

st 119892

1 (119883) = minus119909

1+ 00193119909

3le 0

119892

2 (119883) = minus119909

2+ 000954119909

3le 0

119892

3 (119883) = minus120587119909

2

3119909

4minus

4

3120587119909

3

3

+ 129600 le 0

119892

4 (119883) = 119909

4minus 240 le 0

(16)

14 The Scientific World Journal

35

minus356

6

minus6 minus6

f(x)

x1

x2

(a) Actual function

35

minus356

6

minus6 minus6

f(x)

x1x2

(b) SOSM

35

minus35

6

6

minus6 minus6

f(x)

x1

x2

(c) LHD

35

minus356

6

minus6 minus6

f(x)

x1

x2

(d) RND

Figure 15 Function 5 actual and metamodel surface

Table 6 Comparison of optimal results for the design of a pressure vessel

Design variables Cao and Wu [42] Kannan and Kramer [43] Deb [44] This paper using SOSM119909

11000 1125 09375 1000

119909

20625 0625 05000 0625

119909

3511958 58291 483290 41523

119909

4607821 43690 1126790 120562

119891(119883) 7108616 7198042 6410381 6237402

The ranges of design variables 119909

1isin [00625 625] 119909

2isin

[00625 625] 1199093isin [625 125] and 119909

4isin [625 125] are used

referring to the literature [40]The problem formulated above is a simple nonlinear con-

strained problem Now assuming that the objective and con-straint functions defined by (16) are computation-intensivefunctions hencemetamodels of functions are constructed byRBF using the sequential sampling method SOSM

This problem has been solved by many researchersincluding Cao and Wu [42] applying an evolutionary pro-gramming model Kannan and Kramer [43] solved theproblemusing an augmented Lagrangianmultiplier approachand Deb [44] using a genetic adaptive search

The average values of optimal results from 50 runs arelisted in Table 6 compared with the three results reported inthe literature [42ndash44] It can be seen from the table that theoptimal solution in this paper is still about 27 superior tothe best solution previously reported in the literature [44]

5 Conclusions

In this paper the sequential optimization sampling method(SOSM) for metamodels has been proposed The recentlydeveloped extended radial basis functions (E-RBF) are intro-duced as an approximated method to construct the meta-models Combining the new sampling strategy SOSM with

The Scientific World Journal 15

12

minus3120587 minus3120587

03120587

3120587

x1

x2

f(x)

(a) Actual function

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(b) SOSM

3120587

3120587

f(x)

12

minus3120587 minus3120587

0

x1

x2

(c) LHD

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(d) RND

Figure 16 Function 6 actual and metamodel surface

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Function 1 Function 2

Function 3

Function 4

Function 5

Function 6

RMSE

of f

unct

ions

1ndash6

0

05

1

15

2

25

3

35

Figure 17 Assessment of metamodels RMSE between SOSM andprevious sequential sampling methods

the extended radial basis functions the design algorithmfor building metamodels is presented In the SOSM theoptimal sampling points of response surface are taken as thenew sampling points in order to improve the local accuracy

Function 1 Function 2 Function 3

Function 4

Function 5 Function 6

10

20

30

40

50

60

70

NRM

SE o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 18 Assessment of metamodels NRMSE between SOSM andprevious sequential sampling methods

In addition new sampling points in the sparse region arerequired for better global approximation To determine thesparse region the density function constructed by the radialbasis functions network has been applied In the proposed

16 The Scientific World Journal

minus35

minus30

minus25

minus20

minus15

minus10

minus5

0Function 1 Function 2 Function 3 Function 4 Function 5

Function 6

Glo

bal m

inim

um o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 19 Assessment of metamodels global minimum betweenSOSM and previous sequential sampling methods

TsTh

R

L

Figure 20 Diagram of pressure vessel design

algorithm the response surface is constructed repeatedlyuntil the terminal criterion that is the maximum number ofsampling points 119898max is satisfied

For the sake of examining the validity of the proposedSOSM sampling method six typical mathematical functionshave been tested The assessment measures for accuracyof metamodels that is RMSE and NRMSE are employedMeanwhile global minimum is introduced to judge theperformance of sampling methods for metamodels Theproposed sampling method SOSM is successfully imple-mented and the results are analyzed comprehensively andthoroughly In contrast to the one-stage sampling methods(LHD and RND) and sequential sampling methods (CVKSSM and SLE) the SOSM results in more accurate meta-models Furthermore SOSM is superior in exploring theglobal minimum of metamodels compared to the other fivesampling methods

The new sequential optimization sampling methodSOSM has been proposed which provides another effectiveway for generating sampling points to construct metamodelsIt is superior in seeking global minimum compared to theprevious sampling methods and meanwhile can improvethe accuracy of metamodels Therefore the sequential opti-mization sampling method SOSM significantly outperformsthe previous sampling methods

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to thank everybody for their encour-agement and support The Grant support from National Sci-ence Foundation (CMMI-51375389 and 51279165) is greatlyacknowledged

References

[1] L Gu ldquoA comparison of polynomial based regressionmodels invehicle safety analysisrdquo in Proceedingsof the ASME Design Engi-neering Technical Conferences-Design Automation Conference (DAC rsquo01) A Diaz Ed ASME Pittsburgh Pa USA September2001

[2] P N Koch TW Simpson J K Allen and FMistree ldquoStatisticalapproximations for multidisciplinary design optimization theproblem of sizerdquo Journal of Aircraft vol 36 no 1 pp 275ndash2861999

[3] A I J Forrester and A J Keane ldquoRecent advances in surrogate-based optimizationrdquo Progress in Aerospace Sciences vol 45 no1ndash3 pp 50ndash79 2009

[4] Y C Jin ldquoSurrogate-assisted evolutionary computation recentadvances and future challengesrdquo Journal of Swarm and Evolu-tionary Computation vol 1 no 2 pp 61ndash70 2011

[5] L Lafi J Feki and S Hammoudi ldquoMetamodel matchingtechniques evaluation and benchmarkingrdquo in Proceedings of theInternational Conference on Computer Applications Technology(ICCAT rsquo13) pp 1ndash6 IEEE January 2013

[6] M Chih ldquoA more accurate second-order polynomial meta-model using a pseudo-random number assignment strategyrdquoJournal of the Operational Research Society vol 64 no 2 pp198ndash207 2013

[7] L Laurent P-A Boucard and B Soulier ldquoA dedicated mul-tiparametric strategy for the fast construction of a cokrigingmetamodelrdquoComputers and Structures vol 124 pp 61ndash73 2013

[8] G S Babu and S Suresh ldquoSequential projection-basedmetacognitive learning in a radial basis function network forclassification problemsrdquo IEEE Transactions on Neural Networksand Learning Systems vol 24 no 2 pp 194ndash206 2013

[9] W Yao X Q Chen Y Y Huang and M van Tooren ldquoAsurrogate-based optimizationmethodwithRBFneural networkenhanced by linear interpolation and hybrid infill strategyrdquoOptimization Methods amp Software vol 29 no 2 pp 406ndash4292014

[10] N Vukovic and Z Miljkovic ldquoA growing and pruning sequen-tial learning algorithm of hyper basis function neural networkfor function approximationrdquo Neural Networks vol 46 pp 210ndash226 2013

[11] I Couckuyt S Koziel and T Dhaene ldquoSurrogate modeling ofmicrowave structures using kriging co-kriging and spacemap-pingrdquo International Journal of Numerical Modelling ElectronicNetworks Devices and Fields vol 26 no 1 pp 64ndash73 2013

[12] A Ozmen and G W Weber ldquoRMARS robustification ofmultivariate adaptive regression spline under polyhedral uncer-taintyrdquo Journal of Computational and Applied Mathematics vol259 pp 914ndash924 2014

[13] A J Makadia and J I Nanavati ldquoOptimisation of machiningparameters for turning operations based on response surfacemethodologyrdquoMeasurement vol 46 no 4 pp 1521ndash1529 2013

[14] M Sabzekar and M Naghibzadeh ldquoFuzzy c-means improve-ment using relaxed constraints support vector machinesrdquoApplied Soft Computing vol 13 no 2 pp 881ndash890 2013

The Scientific World Journal 17

[15] M S Sanchez L A Sarabia and M C Ortiz ldquoOn theconstruction of experimental designs for a given task by jointlyoptimizing several quality criteria pareto-optimal experimen-tal designsrdquo Analytica Chimica Acta vol 754 pp 39ndash46 2012

[16] S Ghosh and A Flores ldquoCommon variance fractional factorialdesigns and their optimality to identify a class of modelsrdquoJournal of Statistical Planning and Inference vol 143 no 10 pp1807ndash1815 2013

[17] S Kehoe and J Stokes ldquoBox-behnken design of experimentsinvestigation of hydroxyapatite synthesis for orthopedic appli-cationsrdquo Journal of Materials Engineering and Performance vol20 no 2 pp 306ndash316 2011

[18] R L J Coetzer L M Haines and L P Fatti ldquoCentral compositedesigns for estimating the optimum conditions for a second-order modelrdquo Journal of Statistical Planning and Inference vol141 no 5 pp 1764ndash1773 2011

[19] J Sacks W J Welch T J Mitchell et al ldquoDesign and analysisof computer experimentsrdquo Statistical Science vol 4 no 4 pp409ndash423 1989

[20] R Jin W Chen and T W Simpson ldquoComparative studies ofmetamodelling techniques under multiple modelling criteriardquoJournal of Structural andMultidisciplinaryOptimization vol 23no 1 pp 1ndash13 2001

[21] L Gu and J-F Yang ldquoConstruction of nearly orthogonal Latinhypercube designsrdquoMetrika vol 76 no 6 pp 819ndash830 2013

[22] Y He and B Tang ldquoStrong orthogonal arrays and associatedLatin hypercubes for computer experimentsrdquo Biometrika vol100 no 1 pp 254ndash260 2013

[23] L Ke H B Qiu Z Z Chen and L Chi ldquoEngineering designbased on Hammersley sequences sampling method and SVRrdquoAdvanced Materials Research vol 544 pp 206ndash211 2012

[24] Y Tang and H Xu ldquoAn effective construction method formulti-level uniform designsrdquo Journal of Statistical Planning andInference vol 143 no 9 pp 1583ndash1589 2013

[25] R Jin W Chen and A Sudjianto ldquoOn sequential samplingfor global metamodeling in engineering designrdquo in Proceedingsof the ASME 2002 International Design Engineering TechnicalConferences and Computers and Information in EngineeringConference pp 539ndash548 American Society of MechanicalEngineers 2002

[26] Y M Deng Y Zhang and Y C Lam ldquoA hybrid of mode-pursuing sampling method and genetic algorithm for mini-mization of injection molding warpagerdquo Materials amp Designvol 31 no 4 pp 2118ndash2123 2010

[27] X Wei Y Wu and L Chen ldquoA new sequential optimalsampling method for radial basis functionsrdquo Journal of AppliedMathematics and Computation vol 218 no 19 pp 9635ndash96462012

[28] S KitayamaMArakawa andKYamazaki ldquoSequential approx-imate optimization using radial basis function network forengineering optimizationrdquo Optimization and Engineering vol12 no 4 pp 535ndash557 2011

[29] H Zhu L Liu T Long and L Peng ldquoA novel algorithmof maximin Latin hypercube design using successive localenumerationrdquo Engineering Optimization vol 44 no 5 pp 551ndash564 2012

[30] S Kitayama J Srirat andMArakawa ldquoSequential approximatemulti-objective optimization using radial basis function net-workrdquo Structural and Multidisciplinary Optimization vol 48no 3 pp 501ndash515 2013

[31] R L Hardy ldquoMultiquadric equations of topography and otherirregular surfacesrdquo Journal of Geophysical Research vol 76 no8 pp 1905ndash1915 1971

[32] A A Mullur and A Messac ldquoExtended radial basis functionsmore flexible and effective metamodelingrdquo AIAA Journal vol43 no 6 pp 1306ndash1315 2005

[33] T Krishnamurthy ldquoResponse surface approximation with aug-mented and compactly supported radial basis functionsrdquo inProceedings of the 44thAIAAASMEASCEAHSASC StructuresStructural Dynamics and Materials Conference vol 1748 pp3210ndash3224 April 2003 AIAA Paper

[34] Z Wu ldquoCompactly supported positive definite radial func-tionsrdquo Journal of Advances in Computational Mathematics vol4 no 1 pp 283ndash292 1995

[35] C AMicchelli ldquoInterpolation of scattered data distancematri-ces and conditionally positive definite functionsrdquo ConstructiveApproximation vol 2 no 1 pp 11ndash22 1984

[36] A A Mullur and A Messac ldquoMetamodeling using extendedradial basis functions a comparative approachrdquo Engineeringwith Computers vol 21 no 3 pp 203ndash217 2006

[37] G G Wang and S Shan ldquoReview of metamodeling techniquesin support of engineering design optimizationrdquo Journal ofMechanical Design vol 129 no 4 pp 370ndash380 2007

[38] D BMcDonaldW J GranthamW L Tabor andM JMurphyldquoGlobal and local optimization using radial basis functionresponse surface modelsrdquo Applied Mathematical Modelling vol31 no 10 pp 2095ndash2110 2007

[39] P Borges T Eid and E Bergseng ldquoApplying simulated anneal-ing using different methods for the neighborhood search inforest planning problemsrdquo European Journal of OperationalResearch vol 233 no 3 pp 700ndash710 2014

[40] C A C Coello ldquoUse of a self-adaptive penalty approach forengineering optimization problemsrdquo Computers in Industryvol 41 no 2 pp 113ndash127 2000

[41] L D S Coelho ldquoGaussian quantum-behaved particle swarmoptimization approaches for constrained engineering designproblemsrdquo Expert Systems with Applications vol 37 no 2 pp1676ndash1683 2010

[42] Y J Cao and Q H Wu ldquoMechanical design optimization bymixed-variable evolutionary programmingrdquo in Proceedings ofthe IEEE International Conference on Evolutionary Computation(ICEC rsquo97) pp 443ndash446 Indianapolis Ind USA April 1997

[43] B K Kannan and S N Kramer ldquoAn augmented Lagrangemultiplier based method for mixed integer discrete continuousoptimization and its applications tomechanical designrdquo Journalof Mechanical Design Transactions of the ASME vol 116 pp318ndash320 1994

[44] K Deb ldquoGeneAS a robust optimal design technique formechanical component designrdquo in Evolutionary Algorithms inEngineering Applications D Dasrupta andZMichalewicz Edspp 497ndash514 Springer Berlin Germany 1997

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 12: Research Article A Sequential Optimization Sampling Method for …downloads.hindawi.com/journals/tswj/2014/192862.pdf · 2019-07-31 · stage. It is di cult for the one-stage sampling

12 The Scientific World Journal

minus08

08

40

minus08

08

0

f(x)

x1

x2

(a) Actual function

minus08

08

40

minus08

08

0

f(x)

x1

x2

(b) SOSM

minus08

08

40

minus08

08

0

f(x)

x1

x2

(c) LHD

minus08

08

40

minus08

08

0

f(x)

x1

x2

(d) RND

Figure 13 Function 3 actual and metamodel surface

Table 5 Metamodel accuracy results for functions 1ndash6 between SOSM and previous sequential sampling methods

Number SOSM CV [25] KSSM [28] SLE [29]119891minRMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min RMSE NRMSE

119891min

1 0663 28214 minus10002 0699 29707 minus9995 0709 30142 minus10017 0493 20951 minus9539 minus102 0061 9798 minus1715 0089 14312 minus1715 0120 19343 minus1716 0065 10504 minus1622 minus17153 1674 6464 minus0002 2261 8732 0000 2305 8902 minus0001 1459 5636 0105 04 0728 38118 minus6556 0813 42601 minus6514 0847 44359 minus6552 0533 27899 minus5904 minus65515 2455 10531 minus34243 2508 10759 minus33794 2466 10575 minus34259 2789 11961 minus12212 minus34636 0995 10011 0116 0943 9486 0390 0950 9558 0052 0703 7080 1274 0

As Table 5 shows, SOSM is superior to the sampling methods CV and KSSM. Admittedly, the sampling method SLE performs best in terms of metamodel accuracy; however, SLE is not good at exploring the global minimum of the functions.

Figure 19 depicts the boxplot of the global minimum fmin obtained over twenty numerical experiments. It is obvious from Figure 19 that the median values for SOSM, CV, and KSSM are approximately equal to the actual global minimum fmin. Meanwhile, the small box sizes imply small standard deviations, which is also reflected by the small differences between the mean and median values. The standard deviation of the global minimum is one of the important factors for evaluating the robustness of an algorithm: a smaller standard deviation of the results implies a more robust algorithm. It is clear from Figure 19 that the sequential sampling methods, except SLE, are robust sampling techniques under the parameter settings employed in this paper. Comparing the actual global minima from Table 5 with the distribution of the global minima of the SLE-based metamodels in Figure 19, the success rate of SLE is poor. It is obvious that SOSM performs very well both in constructing accurate metamodels and in finding the global minimum of the functions.
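The robustness comparison in Figure 19 reduces to simple summary statistics over the repeated runs. As a small illustration only (the twenty per-run results themselves are not reproduced here), such statistics could be computed as follows:

```python
import numpy as np

def run_statistics(fmin_per_run):
    """Mean, median, and standard deviation of the global minima found over repeated runs;
    a small standard deviation indicates a robust sampling method."""
    v = np.asarray(fmin_per_run, dtype=float)
    return {"mean": v.mean(), "median": np.median(v), "std": v.std(ddof=1)}
```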


Figure 14: Function 4, actual and metamodel surfaces. (a) Actual function; (b) SOSM; (c) LHD; (d) RND.

Six different sampling methods have been used to construct metamodels for six typical functions in this paper. The results demonstrate that sequential sampling methods perform better and more efficiently than one-stage sampling methods. Furthermore, sequential sampling allows engineers to control the sampling process. In general, one-stage sampling is not as good as sequential sampling at fitting the region where the global minimum is located. In a word, SOSM, the sequential sampling method proposed in this paper, is the best choice when considering both the accuracy of the metamodels and the ability to locate the global minimum: it is reliable over the whole fitting space and also fits the region around the global minimum well.

4.4.3. Engineering Problem. The validity of the sequential sampling method SOSM proposed in this paper is tested on a typical mechanical design optimization problem involving four design variables, namely, pressure vessel design. This problem has been studied by many researchers [40–42]. The schematic of the pressure vessel is shown in Figure 20. In this case, a cylindrical pressure vessel capped with two hemispherical heads is designed for minimum fabrication cost. Four variables are identified: the thickness of the pressure vessel T_s, the thickness of the head T_h, the inner radius of the pressure vessel R, and the length of the vessel without the heads L. The variable vector is given (in inches) by

X = (T_s, T_h, R, L) = (x_1, x_2, x_3, x_4).  (15)

The objective function is the combined cost of the materials, forming, and welding of the pressure vessel. The mathematical model of the optimization problem is expressed as

min  f(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3

s.t.  g_1(X) = −x_1 + 0.0193 x_3 ≤ 0,
      g_2(X) = −x_2 + 0.00954 x_3 ≤ 0,
      g_3(X) = −π x_3^2 x_4 − (4/3) π x_3^3 + 129600 ≤ 0,
      g_4(X) = x_4 − 240 ≤ 0.  (16)
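Because (16) fully specifies the test problem, it can be transcribed directly by anyone who wants to reproduce the experiment. The following minimal Python sketch follows the formulation above; the function names and the final sanity check (using the average SOSM design from Table 6) are illustrative and not part of the paper.

```python
import math

def pressure_vessel_cost(x):
    """Objective of Eq. (16): material, forming, and welding cost; x = (Ts, Th, R, L) in inches."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4
            + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4
            + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    """Constraints g1..g4 of Eq. (16); the design is feasible when every value is <= 0."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                                                   # g1: shell thickness
        -x2 + 0.00954 * x3,                                                  # g2: head thickness
        -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 129600.0,    # g3: volume, constant as printed in (16)
        x4 - 240.0,                                                          # g4: length limit
    ]

# Sanity check with the average SOSM design reported in Table 6:
# the cost evaluates to about 6237.4, matching the f(X) entry in the table.
x_sosm = (1.000, 0.625, 41.523, 120.562)
print(pressure_vessel_cost(x_sosm))
print(all(g <= 0 for g in pressure_vessel_constraints(x_sosm)))
```

Note that the classical benchmark formulation of this problem (e.g., in [40, 43]) uses 1,296,000 as the constant in g_3; the value 129,600 above is kept as printed in (16).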


Figure 15: Function 5, actual and metamodel surfaces. (a) Actual function; (b) SOSM; (c) LHD; (d) RND.

Table 6: Comparison of optimal results for the design of a pressure vessel.

Design variable | Cao and Wu [42] | Kannan and Kramer [43] | Deb [44] | This paper (SOSM)
x_1 | 1.000 | 1.125 | 0.9375 | 1.000
x_2 | 0.625 | 0.625 | 0.5000 | 0.625
x_3 | 51.1958 | 58.291 | 48.3290 | 41.523
x_4 | 90.7821 | 43.690 | 112.6790 | 120.562
f(X) | 7108.616 | 7198.042 | 6410.381 | 6237.402

The ranges of the design variables are x_1 ∈ [0.0625, 6.25], x_2 ∈ [0.0625, 6.25], x_3 ∈ [6.25, 125], and x_4 ∈ [6.25, 125], following the literature [40].

The problem formulated above is a simple nonlinear constrained problem. Assuming now that the objective and constraint functions defined by (16) are computation-intensive, metamodels of these functions are constructed by RBF using the sequential sampling method SOSM.

This problem has been solved by many researchers, including Cao and Wu [42] applying an evolutionary programming model, Kannan and Kramer [43] using an augmented Lagrangian multiplier approach, and Deb [44] using a genetic adaptive search.

The average values of the optimal results from 50 runs are listed in Table 6 and compared with the three results reported in the literature [42–44]. It can be seen from the table that the optimal solution obtained in this paper is about 2.7% better than the best solution previously reported in the literature [44].
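The 2.7% figure follows directly from the f(X) row of Table 6:

(6410.381 − 6237.402) / 6410.381 ≈ 0.027, that is, about 2.7%.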

5. Conclusions

In this paper, the sequential optimization sampling method (SOSM) for metamodels has been proposed. The recently developed extended radial basis functions (E-RBF) are introduced as the approximation method used to construct the metamodels. Combining the new sampling strategy SOSM with the extended radial basis functions, the design algorithm for building the metamodels is presented.


Figure 16: Function 6, actual and metamodel surfaces. (a) Actual function; (b) SOSM; (c) LHD; (d) RND.

Figure 17: Assessment of metamodel RMSE for functions 1–6: SOSM versus the previous sequential sampling methods (CV, KSSM, SLE).

In the SOSM, the optimal sampling points of the response surface are taken as new sampling points in order to improve the local accuracy.

Figure 18: Assessment of metamodel NRMSE for functions 1–6: SOSM versus the previous sequential sampling methods (CV, KSSM, SLE).

In addition, new sampling points in the sparse region are required for better global approximation. To determine the sparse region, the density function constructed by the radial basis function network is applied.
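The exact density function used in the paper is defined in its earlier sections. Purely to illustrate the idea of "sample where the density is lowest", the following sketch assumes a simple Gaussian-kernel density over the current sample set; the function names, kernel, and candidate-search strategy are assumptions, not the paper's method.

```python
import numpy as np

def gaussian_density(candidates, samples, width=1.0):
    """Illustrative density estimate: sum of Gaussian kernels centred at the existing samples."""
    d = np.linalg.norm(candidates[:, None, :] - samples[None, :, :], axis=-1)
    return np.exp(-(d / width) ** 2).sum(axis=1)

def sparse_region_point(samples, bounds, n_candidates=2000, rng=None):
    """Pick the candidate with the lowest density, i.e. a point in the most sparsely sampled region."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    candidates = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    return candidates[np.argmin(gaussian_density(candidates, samples))]

# Example: samples clustered in one corner of [-1, 1]^2;
# the selected point should land far from that cluster.
pts = np.array([[-0.8, -0.8], [-0.7, -0.9], [-0.9, -0.6]])
print(sparse_region_point(pts, bounds=[(-1, 1), (-1, 1)], rng=0))
```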


Figure 19: Assessment of the global minimum found on the metamodels for functions 1–6: SOSM versus the previous sequential sampling methods (CV, KSSM, SLE).

Figure 20: Diagram of the pressure vessel design (variables T_s, T_h, R, L).

In the proposed algorithm, the response surface is constructed repeatedly until the termination criterion, namely, the maximum number of sampling points m_max, is satisfied.
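As a rough outline of that loop (a schematic sketch only, not the paper's implementation; the metamodel fitting and the two infill criteria are passed in as callables here), the overall flow is:

```python
def sosm_loop(f, initial_points, m_max, fit_metamodel, find_surface_optimum, find_sparse_point):
    """Schematic SOSM-style loop: refit the metamodel, then add the optimum of the current
    response surface (local refinement) and the lowest-density point (global exploration)
    until the maximum number of sampling points m_max is reached."""
    X = list(initial_points)
    y = [f(x) for x in X]
    while len(X) < m_max:
        model = fit_metamodel(X, y)                  # e.g. an (E-)RBF response surface
        new_points = [find_surface_optimum(model),   # optimal point of the response surface
                      find_sparse_point(X)]          # minimum point of the density function
        for x in new_points:
            if len(X) >= m_max:
                break
            X.append(x)
            y.append(f(x))                           # expensive evaluation at the new sample
    return fit_metamodel(X, y), X, y
```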

To examine the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for metamodel accuracy, namely, RMSE and NRMSE, are employed; meanwhile, the global minimum is introduced to judge the performance of the sampling methods for metamodels. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CV, KSSM, and SLE), SOSM results in more accurate metamodels. Furthermore, SOSM is superior in exploring the global minimum of the metamodels compared with the other five sampling methods.
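The precise RMSE and NRMSE definitions appear earlier in the paper. As a generic illustration only (normalizing by the range of the true responses is an assumption here, as is expressing NRMSE in percent), the two measures can be computed over a set of validation points as follows:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error over validation points."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def nrmse_percent(y_true, y_pred):
    """RMSE normalized by the range of the true responses, in percent (assumed normalization)."""
    y_true = np.asarray(y_true, float)
    return 100.0 * rmse(y_true, y_pred) / (y_true.max() - y_true.min())
```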

The new sequential optimization sampling method SOSM provides another effective way of generating sampling points to construct metamodels. It is superior in seeking the global minimum compared with the previous sampling methods and, meanwhile, improves the accuracy of the metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from the National Science Foundation (CMMI-51375389 and 51279165) is gratefully acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences - Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.
[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275–286, 1999.
[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1–3, pp. 50–79, 2009.
[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Journal of Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61–70, 2011.
[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1–6, IEEE, January 2013.
[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198–207, 2013.
[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61–73, 2013.
[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194–206, 2013.
[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406–429, 2014.
[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210–226, 2013.
[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64–73, 2013.
[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914–924, 2014.
[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521–1529, 2013.
[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881–890, 2013.
[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39–46, 2012.
[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807–1815, 2013.
[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306–316, 2011.
[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764–1773, 2011.
[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.
[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Journal of Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1–13, 2001.
[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819–830, 2013.
[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254–260, 2013.
[23] L. Ke, H. B. Qiu, Z. Z. Chen, and L. Chi, "Engineering design based on Hammersley sequences sampling method and SVR," Advanced Materials Research, vol. 544, pp. 206–211, 2012.
[24] Y. Tang and H. Xu, "An effective construction method for multi-level uniform designs," Journal of Statistical Planning and Inference, vol. 143, no. 9, pp. 1583–1589, 2013.
[25] R. Jin, W. Chen, and A. Sudjianto, "On sequential sampling for global metamodeling in engineering design," in Proceedings of the ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 539–548, American Society of Mechanical Engineers, 2002.
[26] Y. M. Deng, Y. Zhang, and Y. C. Lam, "A hybrid of mode-pursuing sampling method and genetic algorithm for minimization of injection molding warpage," Materials & Design, vol. 31, no. 4, pp. 2118–2123, 2010.
[27] X. Wei, Y. Wu, and L. Chen, "A new sequential optimal sampling method for radial basis functions," Journal of Applied Mathematics and Computation, vol. 218, no. 19, pp. 9635–9646, 2012.
[28] S. Kitayama, M. Arakawa, and K. Yamazaki, "Sequential approximate optimization using radial basis function network for engineering optimization," Optimization and Engineering, vol. 12, no. 4, pp. 535–557, 2011.
[29] H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551–564, 2012.
[30] S. Kitayama, J. Srirat, and M. Arakawa, "Sequential approximate multi-objective optimization using radial basis function network," Structural and Multidisciplinary Optimization, vol. 48, no. 3, pp. 501–515, 2013.
[31] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, no. 8, pp. 1905–1915, 1971.
[32] A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306–1315, 2005.
[33] T. Krishnamurthy, "Response surface approximation with augmented and compactly supported radial basis functions," in Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, vol. 1748, pp. 3210–3224, April 2003, AIAA Paper.
[34] Z. Wu, "Compactly supported positive definite radial functions," Journal of Advances in Computational Mathematics, vol. 4, no. 1, pp. 283–292, 1995.
[35] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11–22, 1984.
[36] A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203–217, 2006.
[37] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, no. 4, pp. 370–380, 2007.
[38] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy, "Global and local optimization using radial basis function response surface models," Applied Mathematical Modelling, vol. 31, no. 10, pp. 2095–2110, 2007.
[39] P. Borges, T. Eid, and E. Bergseng, "Applying simulated annealing using different methods for the neighborhood search in forest planning problems," European Journal of Operational Research, vol. 233, no. 3, pp. 700–710, 2014.
[40] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.
[41] L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.
[42] Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443–446, Indianapolis, Ind, USA, April 1997.
[43] B. K. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, pp. 318–320, 1994.
[44] K. Deb, "GeneAS: a robust optimal design technique for mechanical component design," in Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds., pp. 497–514, Springer, Berlin, Germany, 1997.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 13: Research Article A Sequential Optimization Sampling Method for …downloads.hindawi.com/journals/tswj/2014/192862.pdf · 2019-07-31 · stage. It is di cult for the one-stage sampling

The Scientific World Journal 13

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(a) Actual function

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(b) SOSM

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(c) LHD

f(x)

x1

x2

3

3

minus3 minus3

8

minus8

(d) RND

Figure 14 Function 4 actual and metamodel surface

Six various sampling methods have been used to con-struct metamodels for six typical functions in this paperIt can be demonstrated that sequential sampling methodsperform better and more efficient than one-stage samplingmethods Furthermore sequential sampling technique allowsengineers to control the sampling process In general one-stage sampling technique is not as good as sequential sam-pling methods for fitting the area where the global minimumlocates In a word SOSM as a sequential sampling methodproposed in this paper is the best choice considering theaccuracy of metamodels and locating global minimum Inother words SOSM is reliable in the whole fitting space andalso good for fitting the area where the global minimumlocates

443 Engineering Problem The validity of sequential sam-plingmethod SOSMproposed in this paper is tested by a typ-ical mechanical design optimization problem involving fourdesign variables that is pressure vessel design This problemhas been studied bymany researchers [40ndash42]The schematicof the pressure vessel is shown in Figure 20 In this case acylindrical pressure vessel with two hemispherical heads is

designed for minimum fabrication cost Four variables areidentified thickness of the pressure vessel 119879

119904 thickness of the

head 119879

ℎ inner radius of the pressure vessel 119877 and length of

the vessel without heads 119871 In this case the variable vectorsare given (in inches) by

119883 = (119879

119904 119879

ℎ 119877 119871) = (119909

1 119909

2 119909

3 119909

4) (15)

The objective function is the combined cost of materialsforming andwelding of the pressure vesselThemathematicalmodel of the optimization problem is expressed as

min 119891 (119883) = 06224119909

1119909

3119909

4+ 17781119909

2119909

2

3+ 31661119909

2

1119909

4

+ 1984119909

2

1119909

3

st 119892

1 (119883) = minus119909

1+ 00193119909

3le 0

119892

2 (119883) = minus119909

2+ 000954119909

3le 0

119892

3 (119883) = minus120587119909

2

3119909

4minus

4

3120587119909

3

3

+ 129600 le 0

119892

4 (119883) = 119909

4minus 240 le 0

(16)

14 The Scientific World Journal

35

minus356

6

minus6 minus6

f(x)

x1

x2

(a) Actual function

35

minus356

6

minus6 minus6

f(x)

x1x2

(b) SOSM

35

minus35

6

6

minus6 minus6

f(x)

x1

x2

(c) LHD

35

minus356

6

minus6 minus6

f(x)

x1

x2

(d) RND

Figure 15 Function 5 actual and metamodel surface

Table 6 Comparison of optimal results for the design of a pressure vessel

Design variables Cao and Wu [42] Kannan and Kramer [43] Deb [44] This paper using SOSM119909

11000 1125 09375 1000

119909

20625 0625 05000 0625

119909

3511958 58291 483290 41523

119909

4607821 43690 1126790 120562

119891(119883) 7108616 7198042 6410381 6237402

The ranges of design variables 119909

1isin [00625 625] 119909

2isin

[00625 625] 1199093isin [625 125] and 119909

4isin [625 125] are used

referring to the literature [40]The problem formulated above is a simple nonlinear con-

strained problem Now assuming that the objective and con-straint functions defined by (16) are computation-intensivefunctions hencemetamodels of functions are constructed byRBF using the sequential sampling method SOSM

This problem has been solved by many researchersincluding Cao and Wu [42] applying an evolutionary pro-gramming model Kannan and Kramer [43] solved theproblemusing an augmented Lagrangianmultiplier approachand Deb [44] using a genetic adaptive search

The average values of optimal results from 50 runs arelisted in Table 6 compared with the three results reported inthe literature [42ndash44] It can be seen from the table that theoptimal solution in this paper is still about 27 superior tothe best solution previously reported in the literature [44]

5 Conclusions

In this paper the sequential optimization sampling method(SOSM) for metamodels has been proposed The recentlydeveloped extended radial basis functions (E-RBF) are intro-duced as an approximated method to construct the meta-models Combining the new sampling strategy SOSM with

The Scientific World Journal 15

12

minus3120587 minus3120587

03120587

3120587

x1

x2

f(x)

(a) Actual function

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(b) SOSM

3120587

3120587

f(x)

12

minus3120587 minus3120587

0

x1

x2

(c) LHD

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(d) RND

Figure 16 Function 6 actual and metamodel surface

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Function 1 Function 2

Function 3

Function 4

Function 5

Function 6

RMSE

of f

unct

ions

1ndash6

0

05

1

15

2

25

3

35

Figure 17 Assessment of metamodels RMSE between SOSM andprevious sequential sampling methods

the extended radial basis functions the design algorithmfor building metamodels is presented In the SOSM theoptimal sampling points of response surface are taken as thenew sampling points in order to improve the local accuracy

Function 1 Function 2 Function 3

Function 4

Function 5 Function 6

10

20

30

40

50

60

70

NRM

SE o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 18 Assessment of metamodels NRMSE between SOSM andprevious sequential sampling methods

In addition new sampling points in the sparse region arerequired for better global approximation To determine thesparse region the density function constructed by the radialbasis functions network has been applied In the proposed

16 The Scientific World Journal

minus35

minus30

minus25

minus20

minus15

minus10

minus5

0Function 1 Function 2 Function 3 Function 4 Function 5

Function 6

Glo

bal m

inim

um o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 19 Assessment of metamodels global minimum betweenSOSM and previous sequential sampling methods

TsTh

R

L

Figure 20 Diagram of pressure vessel design

algorithm the response surface is constructed repeatedlyuntil the terminal criterion that is the maximum number ofsampling points 119898max is satisfied

For the sake of examining the validity of the proposedSOSM sampling method six typical mathematical functionshave been tested The assessment measures for accuracyof metamodels that is RMSE and NRMSE are employedMeanwhile global minimum is introduced to judge theperformance of sampling methods for metamodels Theproposed sampling method SOSM is successfully imple-mented and the results are analyzed comprehensively andthoroughly In contrast to the one-stage sampling methods(LHD and RND) and sequential sampling methods (CVKSSM and SLE) the SOSM results in more accurate meta-models Furthermore SOSM is superior in exploring theglobal minimum of metamodels compared to the other fivesampling methods

The new sequential optimization sampling methodSOSM has been proposed which provides another effectiveway for generating sampling points to construct metamodelsIt is superior in seeking global minimum compared to theprevious sampling methods and meanwhile can improvethe accuracy of metamodels Therefore the sequential opti-mization sampling method SOSM significantly outperformsthe previous sampling methods

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to thank everybody for their encour-agement and support The Grant support from National Sci-ence Foundation (CMMI-51375389 and 51279165) is greatlyacknowledged

References

[1] L Gu ldquoA comparison of polynomial based regressionmodels invehicle safety analysisrdquo in Proceedingsof the ASME Design Engi-neering Technical Conferences-Design Automation Conference (DAC rsquo01) A Diaz Ed ASME Pittsburgh Pa USA September2001

[2] P N Koch TW Simpson J K Allen and FMistree ldquoStatisticalapproximations for multidisciplinary design optimization theproblem of sizerdquo Journal of Aircraft vol 36 no 1 pp 275ndash2861999

[3] A I J Forrester and A J Keane ldquoRecent advances in surrogate-based optimizationrdquo Progress in Aerospace Sciences vol 45 no1ndash3 pp 50ndash79 2009

[4] Y C Jin ldquoSurrogate-assisted evolutionary computation recentadvances and future challengesrdquo Journal of Swarm and Evolu-tionary Computation vol 1 no 2 pp 61ndash70 2011

[5] L Lafi J Feki and S Hammoudi ldquoMetamodel matchingtechniques evaluation and benchmarkingrdquo in Proceedings of theInternational Conference on Computer Applications Technology(ICCAT rsquo13) pp 1ndash6 IEEE January 2013

[6] M Chih ldquoA more accurate second-order polynomial meta-model using a pseudo-random number assignment strategyrdquoJournal of the Operational Research Society vol 64 no 2 pp198ndash207 2013

[7] L Laurent P-A Boucard and B Soulier ldquoA dedicated mul-tiparametric strategy for the fast construction of a cokrigingmetamodelrdquoComputers and Structures vol 124 pp 61ndash73 2013

[8] G S Babu and S Suresh ldquoSequential projection-basedmetacognitive learning in a radial basis function network forclassification problemsrdquo IEEE Transactions on Neural Networksand Learning Systems vol 24 no 2 pp 194ndash206 2013

[9] W Yao X Q Chen Y Y Huang and M van Tooren ldquoAsurrogate-based optimizationmethodwithRBFneural networkenhanced by linear interpolation and hybrid infill strategyrdquoOptimization Methods amp Software vol 29 no 2 pp 406ndash4292014

[10] N Vukovic and Z Miljkovic ldquoA growing and pruning sequen-tial learning algorithm of hyper basis function neural networkfor function approximationrdquo Neural Networks vol 46 pp 210ndash226 2013

[11] I Couckuyt S Koziel and T Dhaene ldquoSurrogate modeling ofmicrowave structures using kriging co-kriging and spacemap-pingrdquo International Journal of Numerical Modelling ElectronicNetworks Devices and Fields vol 26 no 1 pp 64ndash73 2013

[12] A Ozmen and G W Weber ldquoRMARS robustification ofmultivariate adaptive regression spline under polyhedral uncer-taintyrdquo Journal of Computational and Applied Mathematics vol259 pp 914ndash924 2014

[13] A J Makadia and J I Nanavati ldquoOptimisation of machiningparameters for turning operations based on response surfacemethodologyrdquoMeasurement vol 46 no 4 pp 1521ndash1529 2013

[14] M Sabzekar and M Naghibzadeh ldquoFuzzy c-means improve-ment using relaxed constraints support vector machinesrdquoApplied Soft Computing vol 13 no 2 pp 881ndash890 2013

The Scientific World Journal 17

[15] M S Sanchez L A Sarabia and M C Ortiz ldquoOn theconstruction of experimental designs for a given task by jointlyoptimizing several quality criteria pareto-optimal experimen-tal designsrdquo Analytica Chimica Acta vol 754 pp 39ndash46 2012

[16] S Ghosh and A Flores ldquoCommon variance fractional factorialdesigns and their optimality to identify a class of modelsrdquoJournal of Statistical Planning and Inference vol 143 no 10 pp1807ndash1815 2013

[17] S Kehoe and J Stokes ldquoBox-behnken design of experimentsinvestigation of hydroxyapatite synthesis for orthopedic appli-cationsrdquo Journal of Materials Engineering and Performance vol20 no 2 pp 306ndash316 2011

[18] R L J Coetzer L M Haines and L P Fatti ldquoCentral compositedesigns for estimating the optimum conditions for a second-order modelrdquo Journal of Statistical Planning and Inference vol141 no 5 pp 1764ndash1773 2011

[19] J Sacks W J Welch T J Mitchell et al ldquoDesign and analysisof computer experimentsrdquo Statistical Science vol 4 no 4 pp409ndash423 1989

[20] R Jin W Chen and T W Simpson ldquoComparative studies ofmetamodelling techniques under multiple modelling criteriardquoJournal of Structural andMultidisciplinaryOptimization vol 23no 1 pp 1ndash13 2001

[21] L Gu and J-F Yang ldquoConstruction of nearly orthogonal Latinhypercube designsrdquoMetrika vol 76 no 6 pp 819ndash830 2013

[22] Y He and B Tang ldquoStrong orthogonal arrays and associatedLatin hypercubes for computer experimentsrdquo Biometrika vol100 no 1 pp 254ndash260 2013

[23] L Ke H B Qiu Z Z Chen and L Chi ldquoEngineering designbased on Hammersley sequences sampling method and SVRrdquoAdvanced Materials Research vol 544 pp 206ndash211 2012

[24] Y Tang and H Xu ldquoAn effective construction method formulti-level uniform designsrdquo Journal of Statistical Planning andInference vol 143 no 9 pp 1583ndash1589 2013

[25] R Jin W Chen and A Sudjianto ldquoOn sequential samplingfor global metamodeling in engineering designrdquo in Proceedingsof the ASME 2002 International Design Engineering TechnicalConferences and Computers and Information in EngineeringConference pp 539ndash548 American Society of MechanicalEngineers 2002

[26] Y M Deng Y Zhang and Y C Lam ldquoA hybrid of mode-pursuing sampling method and genetic algorithm for mini-mization of injection molding warpagerdquo Materials amp Designvol 31 no 4 pp 2118ndash2123 2010

[27] X Wei Y Wu and L Chen ldquoA new sequential optimalsampling method for radial basis functionsrdquo Journal of AppliedMathematics and Computation vol 218 no 19 pp 9635ndash96462012

[28] S KitayamaMArakawa andKYamazaki ldquoSequential approx-imate optimization using radial basis function network forengineering optimizationrdquo Optimization and Engineering vol12 no 4 pp 535ndash557 2011

[29] H Zhu L Liu T Long and L Peng ldquoA novel algorithmof maximin Latin hypercube design using successive localenumerationrdquo Engineering Optimization vol 44 no 5 pp 551ndash564 2012

[30] S Kitayama J Srirat andMArakawa ldquoSequential approximatemulti-objective optimization using radial basis function net-workrdquo Structural and Multidisciplinary Optimization vol 48no 3 pp 501ndash515 2013

[31] R L Hardy ldquoMultiquadric equations of topography and otherirregular surfacesrdquo Journal of Geophysical Research vol 76 no8 pp 1905ndash1915 1971

[32] A A Mullur and A Messac ldquoExtended radial basis functionsmore flexible and effective metamodelingrdquo AIAA Journal vol43 no 6 pp 1306ndash1315 2005

[33] T Krishnamurthy ldquoResponse surface approximation with aug-mented and compactly supported radial basis functionsrdquo inProceedings of the 44thAIAAASMEASCEAHSASC StructuresStructural Dynamics and Materials Conference vol 1748 pp3210ndash3224 April 2003 AIAA Paper

[34] Z Wu ldquoCompactly supported positive definite radial func-tionsrdquo Journal of Advances in Computational Mathematics vol4 no 1 pp 283ndash292 1995

[35] C AMicchelli ldquoInterpolation of scattered data distancematri-ces and conditionally positive definite functionsrdquo ConstructiveApproximation vol 2 no 1 pp 11ndash22 1984

[36] A A Mullur and A Messac ldquoMetamodeling using extendedradial basis functions a comparative approachrdquo Engineeringwith Computers vol 21 no 3 pp 203ndash217 2006

[37] G G Wang and S Shan ldquoReview of metamodeling techniquesin support of engineering design optimizationrdquo Journal ofMechanical Design vol 129 no 4 pp 370ndash380 2007

[38] D BMcDonaldW J GranthamW L Tabor andM JMurphyldquoGlobal and local optimization using radial basis functionresponse surface modelsrdquo Applied Mathematical Modelling vol31 no 10 pp 2095ndash2110 2007

[39] P Borges T Eid and E Bergseng ldquoApplying simulated anneal-ing using different methods for the neighborhood search inforest planning problemsrdquo European Journal of OperationalResearch vol 233 no 3 pp 700ndash710 2014

[40] C A C Coello ldquoUse of a self-adaptive penalty approach forengineering optimization problemsrdquo Computers in Industryvol 41 no 2 pp 113ndash127 2000

[41] L D S Coelho ldquoGaussian quantum-behaved particle swarmoptimization approaches for constrained engineering designproblemsrdquo Expert Systems with Applications vol 37 no 2 pp1676ndash1683 2010

[42] Y J Cao and Q H Wu ldquoMechanical design optimization bymixed-variable evolutionary programmingrdquo in Proceedings ofthe IEEE International Conference on Evolutionary Computation(ICEC rsquo97) pp 443ndash446 Indianapolis Ind USA April 1997

[43] B K Kannan and S N Kramer ldquoAn augmented Lagrangemultiplier based method for mixed integer discrete continuousoptimization and its applications tomechanical designrdquo Journalof Mechanical Design Transactions of the ASME vol 116 pp318ndash320 1994

[44] K Deb ldquoGeneAS a robust optimal design technique formechanical component designrdquo in Evolutionary Algorithms inEngineering Applications D Dasrupta andZMichalewicz Edspp 497ndash514 Springer Berlin Germany 1997

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 14: Research Article A Sequential Optimization Sampling Method for …downloads.hindawi.com/journals/tswj/2014/192862.pdf · 2019-07-31 · stage. It is di cult for the one-stage sampling

14 The Scientific World Journal

35

minus356

6

minus6 minus6

f(x)

x1

x2

(a) Actual function

35

minus356

6

minus6 minus6

f(x)

x1x2

(b) SOSM

35

minus35

6

6

minus6 minus6

f(x)

x1

x2

(c) LHD

35

minus356

6

minus6 minus6

f(x)

x1

x2

(d) RND

Figure 15 Function 5 actual and metamodel surface

Table 6 Comparison of optimal results for the design of a pressure vessel

Design variables Cao and Wu [42] Kannan and Kramer [43] Deb [44] This paper using SOSM119909

11000 1125 09375 1000

119909

20625 0625 05000 0625

119909

3511958 58291 483290 41523

119909

4607821 43690 1126790 120562

119891(119883) 7108616 7198042 6410381 6237402

The ranges of design variables 119909

1isin [00625 625] 119909

2isin

[00625 625] 1199093isin [625 125] and 119909

4isin [625 125] are used

referring to the literature [40]The problem formulated above is a simple nonlinear con-

strained problem Now assuming that the objective and con-straint functions defined by (16) are computation-intensivefunctions hencemetamodels of functions are constructed byRBF using the sequential sampling method SOSM

This problem has been solved by many researchersincluding Cao and Wu [42] applying an evolutionary pro-gramming model Kannan and Kramer [43] solved theproblemusing an augmented Lagrangianmultiplier approachand Deb [44] using a genetic adaptive search

The average values of optimal results from 50 runs arelisted in Table 6 compared with the three results reported inthe literature [42ndash44] It can be seen from the table that theoptimal solution in this paper is still about 27 superior tothe best solution previously reported in the literature [44]

5 Conclusions

In this paper the sequential optimization sampling method(SOSM) for metamodels has been proposed The recentlydeveloped extended radial basis functions (E-RBF) are intro-duced as an approximated method to construct the meta-models Combining the new sampling strategy SOSM with

The Scientific World Journal 15

12

minus3120587 minus3120587

03120587

3120587

x1

x2

f(x)

(a) Actual function

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(b) SOSM

3120587

3120587

f(x)

12

minus3120587 minus3120587

0

x1

x2

(c) LHD

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(d) RND

Figure 16 Function 6 actual and metamodel surface

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Function 1 Function 2

Function 3

Function 4

Function 5

Function 6

RMSE

of f

unct

ions

1ndash6

0

05

1

15

2

25

3

35

Figure 17 Assessment of metamodels RMSE between SOSM andprevious sequential sampling methods

the extended radial basis functions the design algorithmfor building metamodels is presented In the SOSM theoptimal sampling points of response surface are taken as thenew sampling points in order to improve the local accuracy

Function 1 Function 2 Function 3

Function 4

Function 5 Function 6

10

20

30

40

50

60

70

NRM

SE o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 18 Assessment of metamodels NRMSE between SOSM andprevious sequential sampling methods

In addition new sampling points in the sparse region arerequired for better global approximation To determine thesparse region the density function constructed by the radialbasis functions network has been applied In the proposed

16 The Scientific World Journal

minus35

minus30

minus25

minus20

minus15

minus10

minus5

0Function 1 Function 2 Function 3 Function 4 Function 5

Function 6

Glo

bal m

inim

um o

f fun

ctio

ns1ndash6

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Figure 19 Assessment of metamodels global minimum betweenSOSM and previous sequential sampling methods

TsTh

R

L

Figure 20 Diagram of pressure vessel design

algorithm the response surface is constructed repeatedlyuntil the terminal criterion that is the maximum number ofsampling points 119898max is satisfied

For the sake of examining the validity of the proposedSOSM sampling method six typical mathematical functionshave been tested The assessment measures for accuracyof metamodels that is RMSE and NRMSE are employedMeanwhile global minimum is introduced to judge theperformance of sampling methods for metamodels Theproposed sampling method SOSM is successfully imple-mented and the results are analyzed comprehensively andthoroughly In contrast to the one-stage sampling methods(LHD and RND) and sequential sampling methods (CVKSSM and SLE) the SOSM results in more accurate meta-models Furthermore SOSM is superior in exploring theglobal minimum of metamodels compared to the other fivesampling methods

The new sequential optimization sampling methodSOSM has been proposed which provides another effectiveway for generating sampling points to construct metamodelsIt is superior in seeking global minimum compared to theprevious sampling methods and meanwhile can improvethe accuracy of metamodels Therefore the sequential opti-mization sampling method SOSM significantly outperformsthe previous sampling methods

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors would like to thank everybody for their encour-agement and support The Grant support from National Sci-ence Foundation (CMMI-51375389 and 51279165) is greatlyacknowledged

References

[1] L Gu ldquoA comparison of polynomial based regressionmodels invehicle safety analysisrdquo in Proceedingsof the ASME Design Engi-neering Technical Conferences-Design Automation Conference (DAC rsquo01) A Diaz Ed ASME Pittsburgh Pa USA September2001

[2] P N Koch TW Simpson J K Allen and FMistree ldquoStatisticalapproximations for multidisciplinary design optimization theproblem of sizerdquo Journal of Aircraft vol 36 no 1 pp 275ndash2861999

[3] A I J Forrester and A J Keane ldquoRecent advances in surrogate-based optimizationrdquo Progress in Aerospace Sciences vol 45 no1ndash3 pp 50ndash79 2009

[4] Y C Jin ldquoSurrogate-assisted evolutionary computation recentadvances and future challengesrdquo Journal of Swarm and Evolu-tionary Computation vol 1 no 2 pp 61ndash70 2011

[5] L Lafi J Feki and S Hammoudi ldquoMetamodel matchingtechniques evaluation and benchmarkingrdquo in Proceedings of theInternational Conference on Computer Applications Technology(ICCAT rsquo13) pp 1ndash6 IEEE January 2013

[6] M Chih ldquoA more accurate second-order polynomial meta-model using a pseudo-random number assignment strategyrdquoJournal of the Operational Research Society vol 64 no 2 pp198ndash207 2013

[7] L Laurent P-A Boucard and B Soulier ldquoA dedicated mul-tiparametric strategy for the fast construction of a cokrigingmetamodelrdquoComputers and Structures vol 124 pp 61ndash73 2013

[8] G S Babu and S Suresh ldquoSequential projection-basedmetacognitive learning in a radial basis function network forclassification problemsrdquo IEEE Transactions on Neural Networksand Learning Systems vol 24 no 2 pp 194ndash206 2013

[9] W Yao X Q Chen Y Y Huang and M van Tooren ldquoAsurrogate-based optimizationmethodwithRBFneural networkenhanced by linear interpolation and hybrid infill strategyrdquoOptimization Methods amp Software vol 29 no 2 pp 406ndash4292014

[10] N Vukovic and Z Miljkovic ldquoA growing and pruning sequen-tial learning algorithm of hyper basis function neural networkfor function approximationrdquo Neural Networks vol 46 pp 210ndash226 2013

[11] I Couckuyt S Koziel and T Dhaene ldquoSurrogate modeling ofmicrowave structures using kriging co-kriging and spacemap-pingrdquo International Journal of Numerical Modelling ElectronicNetworks Devices and Fields vol 26 no 1 pp 64ndash73 2013

[12] A Ozmen and G W Weber ldquoRMARS robustification ofmultivariate adaptive regression spline under polyhedral uncer-taintyrdquo Journal of Computational and Applied Mathematics vol259 pp 914ndash924 2014

[13] A J Makadia and J I Nanavati ldquoOptimisation of machiningparameters for turning operations based on response surfacemethodologyrdquoMeasurement vol 46 no 4 pp 1521ndash1529 2013

[14] M Sabzekar and M Naghibzadeh ldquoFuzzy c-means improve-ment using relaxed constraints support vector machinesrdquoApplied Soft Computing vol 13 no 2 pp 881ndash890 2013

The Scientific World Journal 17

[15] M S Sanchez L A Sarabia and M C Ortiz ldquoOn theconstruction of experimental designs for a given task by jointlyoptimizing several quality criteria pareto-optimal experimen-tal designsrdquo Analytica Chimica Acta vol 754 pp 39ndash46 2012

[16] S Ghosh and A Flores ldquoCommon variance fractional factorialdesigns and their optimality to identify a class of modelsrdquoJournal of Statistical Planning and Inference vol 143 no 10 pp1807ndash1815 2013

[17] S Kehoe and J Stokes ldquoBox-behnken design of experimentsinvestigation of hydroxyapatite synthesis for orthopedic appli-cationsrdquo Journal of Materials Engineering and Performance vol20 no 2 pp 306ndash316 2011

[18] R L J Coetzer L M Haines and L P Fatti ldquoCentral compositedesigns for estimating the optimum conditions for a second-order modelrdquo Journal of Statistical Planning and Inference vol141 no 5 pp 1764ndash1773 2011

[19] J Sacks W J Welch T J Mitchell et al ldquoDesign and analysisof computer experimentsrdquo Statistical Science vol 4 no 4 pp409ndash423 1989

[20] R Jin W Chen and T W Simpson ldquoComparative studies ofmetamodelling techniques under multiple modelling criteriardquoJournal of Structural andMultidisciplinaryOptimization vol 23no 1 pp 1ndash13 2001

[21] L Gu and J-F Yang ldquoConstruction of nearly orthogonal Latinhypercube designsrdquoMetrika vol 76 no 6 pp 819ndash830 2013

[22] Y He and B Tang ldquoStrong orthogonal arrays and associatedLatin hypercubes for computer experimentsrdquo Biometrika vol100 no 1 pp 254ndash260 2013

[23] L Ke H B Qiu Z Z Chen and L Chi ldquoEngineering designbased on Hammersley sequences sampling method and SVRrdquoAdvanced Materials Research vol 544 pp 206ndash211 2012

[24] Y Tang and H Xu ldquoAn effective construction method formulti-level uniform designsrdquo Journal of Statistical Planning andInference vol 143 no 9 pp 1583ndash1589 2013

[25] R Jin W Chen and A Sudjianto ldquoOn sequential samplingfor global metamodeling in engineering designrdquo in Proceedingsof the ASME 2002 International Design Engineering TechnicalConferences and Computers and Information in EngineeringConference pp 539ndash548 American Society of MechanicalEngineers 2002

[26] Y M Deng Y Zhang and Y C Lam ldquoA hybrid of mode-pursuing sampling method and genetic algorithm for mini-mization of injection molding warpagerdquo Materials amp Designvol 31 no 4 pp 2118ndash2123 2010

[27] X Wei Y Wu and L Chen ldquoA new sequential optimalsampling method for radial basis functionsrdquo Journal of AppliedMathematics and Computation vol 218 no 19 pp 9635ndash96462012

[28] S KitayamaMArakawa andKYamazaki ldquoSequential approx-imate optimization using radial basis function network forengineering optimizationrdquo Optimization and Engineering vol12 no 4 pp 535ndash557 2011

[29] H Zhu L Liu T Long and L Peng ldquoA novel algorithmof maximin Latin hypercube design using successive localenumerationrdquo Engineering Optimization vol 44 no 5 pp 551ndash564 2012

[30] S Kitayama J Srirat andMArakawa ldquoSequential approximatemulti-objective optimization using radial basis function net-workrdquo Structural and Multidisciplinary Optimization vol 48no 3 pp 501ndash515 2013

[31] R L Hardy ldquoMultiquadric equations of topography and otherirregular surfacesrdquo Journal of Geophysical Research vol 76 no8 pp 1905ndash1915 1971

[32] A A Mullur and A Messac ldquoExtended radial basis functionsmore flexible and effective metamodelingrdquo AIAA Journal vol43 no 6 pp 1306ndash1315 2005

[33] T Krishnamurthy ldquoResponse surface approximation with aug-mented and compactly supported radial basis functionsrdquo inProceedings of the 44thAIAAASMEASCEAHSASC StructuresStructural Dynamics and Materials Conference vol 1748 pp3210ndash3224 April 2003 AIAA Paper

[34] Z Wu ldquoCompactly supported positive definite radial func-tionsrdquo Journal of Advances in Computational Mathematics vol4 no 1 pp 283ndash292 1995

[35] C AMicchelli ldquoInterpolation of scattered data distancematri-ces and conditionally positive definite functionsrdquo ConstructiveApproximation vol 2 no 1 pp 11ndash22 1984

[36] A A Mullur and A Messac ldquoMetamodeling using extendedradial basis functions a comparative approachrdquo Engineeringwith Computers vol 21 no 3 pp 203ndash217 2006

[37] G G Wang and S Shan ldquoReview of metamodeling techniquesin support of engineering design optimizationrdquo Journal ofMechanical Design vol 129 no 4 pp 370ndash380 2007

[38] D BMcDonaldW J GranthamW L Tabor andM JMurphyldquoGlobal and local optimization using radial basis functionresponse surface modelsrdquo Applied Mathematical Modelling vol31 no 10 pp 2095ndash2110 2007

[39] P Borges T Eid and E Bergseng ldquoApplying simulated anneal-ing using different methods for the neighborhood search inforest planning problemsrdquo European Journal of OperationalResearch vol 233 no 3 pp 700ndash710 2014

[40] C A C Coello ldquoUse of a self-adaptive penalty approach forengineering optimization problemsrdquo Computers in Industryvol 41 no 2 pp 113ndash127 2000

[41] L D S Coelho ldquoGaussian quantum-behaved particle swarmoptimization approaches for constrained engineering designproblemsrdquo Expert Systems with Applications vol 37 no 2 pp1676ndash1683 2010

[42] Y J Cao and Q H Wu ldquoMechanical design optimization bymixed-variable evolutionary programmingrdquo in Proceedings ofthe IEEE International Conference on Evolutionary Computation(ICEC rsquo97) pp 443ndash446 Indianapolis Ind USA April 1997

[43] B K Kannan and S N Kramer ldquoAn augmented Lagrangemultiplier based method for mixed integer discrete continuousoptimization and its applications tomechanical designrdquo Journalof Mechanical Design Transactions of the ASME vol 116 pp318ndash320 1994

[44] K Deb ldquoGeneAS a robust optimal design technique formechanical component designrdquo in Evolutionary Algorithms inEngineering Applications D Dasrupta andZMichalewicz Edspp 497ndash514 Springer Berlin Germany 1997

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 15: Research Article A Sequential Optimization Sampling Method for …downloads.hindawi.com/journals/tswj/2014/192862.pdf · 2019-07-31 · stage. It is di cult for the one-stage sampling

The Scientific World Journal 15

12

minus3120587 minus3120587

03120587

3120587

x1

x2

f(x)

(a) Actual function

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(b) SOSM

3120587

3120587

f(x)

12

minus3120587 minus3120587

0

x1

x2

(c) LHD

f(x)

12

minus3120587 minus3120587

0

x1

x2

3120587

3120587

(d) RND

Figure 16 Function 6 actual and metamodel surface

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

SOSM CV

KSSM SL

ESO

SM CVKS

SM SLE

Sampling methods

Function 1 Function 2

Function 3

Function 4

Function 5

Function 6

RMSE

of f

unct

ions

1ndash6

0

05

1

15

2

25

3

35

Figure 17 Assessment of metamodels RMSE between SOSM andprevious sequential sampling methods

the extended radial basis functions the design algorithmfor building metamodels is presented In the SOSM theoptimal sampling points of response surface are taken as thenew sampling points in order to improve the local accuracy

Figure 18: Assessment of metamodels: NRMSE of Functions 1–6 obtained by SOSM and the previous sequential sampling methods (CVKSSM and SLE).

In addition, new sampling points in the sparse region are required for better global approximation. To determine the sparse region, the density function constructed by the radial basis function network has been applied.


Figure 19: Assessment of metamodels: global minimum of Functions 1–6 obtained by SOSM and the previous sequential sampling methods (CVKSSM and SLE).

Figure 20: Diagram of the pressure vessel design (design variables Ts, Th, R, and L).

In the proposed algorithm, the response surface is constructed repeatedly until the terminal criterion, that is, the maximum number of sampling points m_max, is satisfied.
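To make this loop concrete, a minimal illustrative sketch is given below in Python. It is not the authors' implementation: the Gaussian RBF fit, the RBF-network density surrogate, the random-search minimizer, and all names (fit_rbf, density, argmin_box, sosm_like) are simplifying assumptions introduced here for illustration only.

import numpy as np

def fit_rbf(X, y, c=1.0):
    # Gaussian RBF interpolant through samples X (n x d) with responses y (n,)
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-c * r2) + 1e-10 * np.eye(len(X)), y)
    return lambda Z: np.exp(-c * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)) @ w

def density(X, c=1.0):
    # RBF-network density surrogate: large near existing samples, small in sparse regions
    return lambda Z: np.exp(-c * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)).sum(-1)

def argmin_box(f, bounds, n=4096, seed=0):
    # Crude global minimizer: random search inside the box 'bounds'
    lo, hi = np.array(bounds, float).T
    Z = lo + (hi - lo) * np.random.default_rng(seed).random((n, len(lo)))
    return Z[np.argmin(f(Z))]

def sosm_like(true_fn, bounds, X0, m_max=30):
    # Sequential loop: add the metamodel optimum (local accuracy) and the
    # density-function minimum (sparse region) until m_max sampling points are reached.
    X = np.atleast_2d(np.array(X0, float))
    y = np.array([true_fn(x) for x in X])
    while len(X) < m_max:
        surrogate = fit_rbf(X, y)
        for x_new in (argmin_box(surrogate, bounds), argmin_box(density(X), bounds)):
            if len(X) < m_max:
                X = np.vstack([X, x_new])
                y = np.append(y, true_fn(x_new))
    return fit_rbf(X, y), X, y

# Example use on a 2-D test function over [-3*pi, 3*pi]^2 (assumed illustration):
#   f = lambda x: np.sin(x[0]) + np.cos(x[1])
#   model, X, y = sosm_like(f, [(-3*np.pi, 3*np.pi)] * 2,
#                           np.random.uniform(-3*np.pi, 3*np.pi, (8, 2)), m_max=25)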

For the sake of examining the validity of the proposed SOSM sampling method, six typical mathematical functions have been tested. The assessment measures for the accuracy of metamodels, that is, RMSE and NRMSE, are employed. Meanwhile, the global minimum is introduced to judge the performance of sampling methods for metamodels. The proposed sampling method SOSM is successfully implemented, and the results are analyzed comprehensively and thoroughly. In contrast to the one-stage sampling methods (LHD and RND) and the sequential sampling methods (CVKSSM and SLE), the SOSM results in more accurate metamodels. Furthermore, SOSM is superior in exploring the global minimum of metamodels compared to the other five sampling methods.
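For reference, the two accuracy measures can be computed as in the following sketch, assuming NRMSE is normalized by the range of the true responses at the validation points, which is one common convention (the paper's exact normalization is defined in the earlier sections):

import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error over a set of validation points
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def nrmse(y_true, y_pred):
    # RMSE normalized by the range of the true responses (assumed convention)
    span = float(np.max(y_true) - np.min(y_true))
    return rmse(y_true, y_pred) / span if span > 0 else float("nan")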

The new sequential optimization sampling method SOSM has been proposed, which provides another effective way of generating sampling points to construct metamodels. It is superior in seeking the global minimum compared to the previous sampling methods and, meanwhile, can improve the accuracy of metamodels. Therefore, the sequential optimization sampling method SOSM significantly outperforms the previous sampling methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank everybody for their encouragement and support. The grant support from National Science Foundation (CMMI-51375389 and 51279165) is greatly acknowledged.

References

[1] L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conferences-Design Automation Conference (DAC '01), A. Diaz, Ed., ASME, Pittsburgh, Pa, USA, September 2001.

[2] P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275–286, 1999.

[3] A. I. J. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, no. 1–3, pp. 50–79, 2009.

[4] Y. C. Jin, "Surrogate-assisted evolutionary computation: recent advances and future challenges," Journal of Swarm and Evolutionary Computation, vol. 1, no. 2, pp. 61–70, 2011.

[5] L. Lafi, J. Feki, and S. Hammoudi, "Metamodel matching techniques: evaluation and benchmarking," in Proceedings of the International Conference on Computer Applications Technology (ICCAT '13), pp. 1–6, IEEE, January 2013.

[6] M. Chih, "A more accurate second-order polynomial metamodel using a pseudo-random number assignment strategy," Journal of the Operational Research Society, vol. 64, no. 2, pp. 198–207, 2013.

[7] L. Laurent, P.-A. Boucard, and B. Soulier, "A dedicated multiparametric strategy for the fast construction of a cokriging metamodel," Computers and Structures, vol. 124, pp. 61–73, 2013.

[8] G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194–206, 2013.

[9] W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406–429, 2014.

[10] N. Vukovic and Z. Miljkovic, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210–226, 2013.

[11] I. Couckuyt, S. Koziel, and T. Dhaene, "Surrogate modeling of microwave structures using kriging, co-kriging, and space mapping," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 26, no. 1, pp. 64–73, 2013.

[12] A. Ozmen and G. W. Weber, "RMARS: robustification of multivariate adaptive regression spline under polyhedral uncertainty," Journal of Computational and Applied Mathematics, vol. 259, pp. 914–924, 2014.

[13] A. J. Makadia and J. I. Nanavati, "Optimisation of machining parameters for turning operations based on response surface methodology," Measurement, vol. 46, no. 4, pp. 1521–1529, 2013.

[14] M. Sabzekar and M. Naghibzadeh, "Fuzzy c-means improvement using relaxed constraints support vector machines," Applied Soft Computing, vol. 13, no. 2, pp. 881–890, 2013.

[15] M. S. Sanchez, L. A. Sarabia, and M. C. Ortiz, "On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs," Analytica Chimica Acta, vol. 754, pp. 39–46, 2012.

[16] S. Ghosh and A. Flores, "Common variance fractional factorial designs and their optimality to identify a class of models," Journal of Statistical Planning and Inference, vol. 143, no. 10, pp. 1807–1815, 2013.

[17] S. Kehoe and J. Stokes, "Box-Behnken design of experiments investigation of hydroxyapatite synthesis for orthopedic applications," Journal of Materials Engineering and Performance, vol. 20, no. 2, pp. 306–316, 2011.

[18] R. L. J. Coetzer, L. M. Haines, and L. P. Fatti, "Central composite designs for estimating the optimum conditions for a second-order model," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1764–1773, 2011.

[19] J. Sacks, W. J. Welch, T. J. Mitchell, et al., "Design and analysis of computer experiments," Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.

[20] R. Jin, W. Chen, and T. W. Simpson, "Comparative studies of metamodelling techniques under multiple modelling criteria," Journal of Structural and Multidisciplinary Optimization, vol. 23, no. 1, pp. 1–13, 2001.

[21] L. Gu and J.-F. Yang, "Construction of nearly orthogonal Latin hypercube designs," Metrika, vol. 76, no. 6, pp. 819–830, 2013.

[22] Y. He and B. Tang, "Strong orthogonal arrays and associated Latin hypercubes for computer experiments," Biometrika, vol. 100, no. 1, pp. 254–260, 2013.

[23] L. Ke, H. B. Qiu, Z. Z. Chen, and L. Chi, "Engineering design based on Hammersley sequences sampling method and SVR," Advanced Materials Research, vol. 544, pp. 206–211, 2012.

[24] Y. Tang and H. Xu, "An effective construction method for multi-level uniform designs," Journal of Statistical Planning and Inference, vol. 143, no. 9, pp. 1583–1589, 2013.

[25] R. Jin, W. Chen, and A. Sudjianto, "On sequential sampling for global metamodeling in engineering design," in Proceedings of the ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 539–548, American Society of Mechanical Engineers, 2002.

[26] Y. M. Deng, Y. Zhang, and Y. C. Lam, "A hybrid of mode-pursuing sampling method and genetic algorithm for minimization of injection molding warpage," Materials & Design, vol. 31, no. 4, pp. 2118–2123, 2010.

[27] X. Wei, Y. Wu, and L. Chen, "A new sequential optimal sampling method for radial basis functions," Journal of Applied Mathematics and Computation, vol. 218, no. 19, pp. 9635–9646, 2012.

[28] S. Kitayama, M. Arakawa, and K. Yamazaki, "Sequential approximate optimization using radial basis function network for engineering optimization," Optimization and Engineering, vol. 12, no. 4, pp. 535–557, 2011.

[29] H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551–564, 2012.

[30] S. Kitayama, J. Srirat, and M. Arakawa, "Sequential approximate multi-objective optimization using radial basis function network," Structural and Multidisciplinary Optimization, vol. 48, no. 3, pp. 501–515, 2013.

[31] R. L. Hardy, "Multiquadric equations of topography and other irregular surfaces," Journal of Geophysical Research, vol. 76, no. 8, pp. 1905–1915, 1971.

[32] A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306–1315, 2005.

[33] T. Krishnamurthy, "Response surface approximation with augmented and compactly supported radial basis functions," in Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, vol. 1748, pp. 3210–3224, April 2003, AIAA Paper.

[34] Z. Wu, "Compactly supported positive definite radial functions," Journal of Advances in Computational Mathematics, vol. 4, no. 1, pp. 283–292, 1995.

[35] C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions," Constructive Approximation, vol. 2, no. 1, pp. 11–22, 1984.

[36] A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203–217, 2006.

[37] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, no. 4, pp. 370–380, 2007.

[38] D. B. McDonald, W. J. Grantham, W. L. Tabor, and M. J. Murphy, "Global and local optimization using radial basis function response surface models," Applied Mathematical Modelling, vol. 31, no. 10, pp. 2095–2110, 2007.

[39] P. Borges, T. Eid, and E. Bergseng, "Applying simulated annealing using different methods for the neighborhood search in forest planning problems," European Journal of Operational Research, vol. 233, no. 3, pp. 700–710, 2014.

[40] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.

[41] L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.

[42] Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443–446, Indianapolis, Ind, USA, April 1997.

[43] B. K. Kannan and S. N. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, pp. 318–320, 1994.

[44] K. Deb, "GeneAS: a robust optimal design technique for mechanical component design," in Evolutionary Algorithms in Engineering Applications, D. Dasrupta and Z. Michalewicz, Eds., pp. 497–514, Springer, Berlin, Germany, 1997.
