International Journal of Computational and Applied Mathematics.
ISSN 1819-4966 Volume 5, Number 4 (2010), pp. 459–478
Research India Publications
http://www.ripublication.com/ijcam.htm
Response Surface Approximation using Sparse
Grid Design
P. Beena(1) and Ranjan Ganguli(2)
(1)Research Assistant and (2)Professor
(1,2)Department of Aerospace Engineering, Indian Institute of Science, Bangalore-560012, India
(1)E-mail: [email protected], (2)E-mail: [email protected]
Abstract
An approach to simplify an optimization problem is to create metamodels or
surrogates of the objective function with respect to the design variables.
Response surface approximations yield low order polynomial metamodels
which are very effective in engineering analysis and optimization.
However, response surface approximations based on design of experiments require a large number of sampling points. In this paper, the response surface
approximations are investigated using the Sparse Grid Design (SGD). SGD
requires significantly fewer analysis runs than the full grid design for the
construction of response surfaces. It is found using several test functions that
the SGD is able to capture the basic trends of the analysis using second-order
polynomial response surfaces and gives a good estimate of the actual minimum
point.
Keywords: Response surface approximation; Sparse grids; Metamodels;
Polynomial response surfaces; Function approximations; Optimization.
Introduction
For a thorough understanding of physical, economic, and other complex systems,
developing mathematical models and performing numerical simulations play a key
role [1]. With increasing capabilities of modern computers, the models are becoming
more sophisticated and realistic. It is difficult to link optimization algorithms to
complex computational models. Considerable research has been done on using
polynomial response surface approximations based on sampling points from the
theory of design of experiments to decouple the analysis and optimization problems.
However, a large number of sampling points is needed by the design of experiments.
In this paper, we propose to investigate the use of sparse grids [2] for response
surface construction. Sparse grids provide sampling points which avoid the curse of
dimensionality. Regression models are used to fit the data and construct response
surfaces for the objective function. Once the response surfaces are obtained, the
optimum can be found at low cost because the response surfaces are merely algebraic
expressions.
Response surface approximations are a collection of statistical and mathematical
techniques which were originally created for developing, improving and optimizing
products and processes. They are the most widely used metamodels in optimization.
The response surface method constructs global approximations to system behaviour based
on results calculated at various points in the design space. The response surface seeks
to find a functional relationship between an output variable and a set of input
variables. Typically, second-order polynomials are used for response surfaces. However, some studies have also used higher-order polynomial approximations.
Because response surface approximations are global in nature, they have witnessed
widespread application in recent years [3-5]. An excellent introduction to response
surface methods can be found in reference [6].
An important objective in response surface construction is to achieve an
acceptable level of accuracy while attempting to minimize the computational effort,
i.e. the number of function evaluations [5]. Sparse grids have been developed to
approximate general smooth functions of many variables. They provide a method for
reducing dimensionality problems for high dimensional function approximations. The
advantage of sparse grids over other grid-based methods is that they use fewer
parameters, which makes the sparse grid approach particularly attractive for the numerical solution of moderate- and higher-dimensional problems. The sparse grid
approach was first described by Smolyak [7] and adapted for partial differential
equations by Zenger [8]. Subsequently, Griebel et al. [9] developed an algorithm
known as the combination technique, prescribing how the collection of simple grids
can be combined to approximate high dimensional functions. More recently, Garcke
and Griebel [10, 11] demonstrated the feasibility of sparse grids in data mining by
using the combination technique in predictive modelling. Sparse grids have also been
successfully used for integral equations [12, 13] and for interpolation and approximation [14-
18]. Furthermore, there is work on stochastic differential equations [19-20],
differential forms in the context of the Maxwell equation [21], and a wavelet-based
sparse grid discretization of parabolic problems is treated in [22]. A tutorial introduction to sparse grids is available in [2]. Sparse grids are studied in detail in [23-
24].
Sparse Grids
The sparse grid method is a special discretization technique. It is based on a hierarchical
basis [25-27], a representation of a discrete function space which is equivalent to the
conventional nodal basis, and a sparse tensor product construction. Sparse grids
represent a very flexible predictive modeling and analysis system [28]. Sparse grid
methods are known under various names, such as hyperbolic cross points, discrete
7/27/2019 Response Surface Approximation Using Sparse
3/20
Response Surface Approximation using Sparse Grid Design 461
blending, boolean interpolation or splitting extrapolation as the concept is closely
related to hyperbolic crosses [29-31], boolean methods [32-33] and splitting
extrapolation methods [34].
The distribution of the points in a sparse grid is shown in Figure 1. The SGD for a
problem of dimension d=2 and level n=2 is shown in Figure 1(a). The number
of degrees of freedom in each coordinate direction is determined by N, which is equal
to 2^n + 1. Thus, an n=2 problem will have five degrees of freedom in each direction. The SGD for a 3-D problem (d=3) with level n=2 is shown in Figure 1(b). The SGD for a
problem with dimension d=2 and level n=4 is shown in Figure 1(c). An n=4
problem will have seventeen degrees of freedom in each direction. The SGD for a 3-D
problem (d=3) with level n=4 is shown in Figure 1(d). Thus, as the dimension and
the level of the points are increased, the total number of points in a SGD and their
distribution vary.
Figure 1: Distribution of points in a sparse grid.
The comparison of the experimental runs required by factorial designs and sparse
grids is given in Table 1. It can be seen from the table that the sparse grid approach
overcomes the disadvantage of the full factorial design as d increases. It employs just 221
experimental runs for d=10, as against about 10^7 for the five-level full factorial design.
[Figure 1 panels: (a) n=2, d=2: 13 nodes; (b) n=2, d=3: 25 nodes; (c) n=4, d=2: 65 nodes; (d) n=4, d=3: 177 nodes.]
7/27/2019 Response Surface Approximation Using Sparse
4/20
462 P. Beena and Ranjan Ganguli
Table 1: Number of experiments required by sparse grid, full factorial and CCD.
Method                          n   N    d=2    d=3    d=4     d=10      d=100
Sparse grid                     1   3      5      7      9       21        201
Full factorial                  -   2      4      8     16     1024     ~10^30
Central composite design (CCD)  -   2      9     15     25     1045     ~10^30
Sparse grid                     2   5     13     25     41      221      20201
Full factorial                  -   5     25    125    625    ~10^7   ~7x10^69
Central composite design (CCD)  -   5     36    141    646    ~10^7   ~7x10^69
Selection of a suitable sparse grid is essential for response surface approximation.
We have examined three possibilities for constructing the sparse grid, namely:
1. The classical maximum-norm-based sparse grid H^M, including the
boundary. The points x_j^i comprising the set of support nodes X^i are defined
by

m_i = 2^i + 1, x_j^i = (j - 1)/(m_i - 1) for j = 1, ..., m_i and i >= 1.
2. The maximum-norm-based sparse grid excluding the points on the
boundary, denoted by H^NB. Now the x_j^i are defined by

m_i = 2^i - 1, x_j^i = j/(m_i + 1) for j = 1, ..., m_i.
3. The Clenshaw-Curtis-type sparse grid H^CC, with equidistant nodes as
described in [35-36]. Here

m_i = 1 if i = 1, m_i = 2^(i-1) + 1 if i > 1,

and the x_j^i are defined by

x_j^i = 0.5 for j = 1 if m_i = 1, x_j^i = (j - 1)/(m_i - 1) for j = 1, ..., m_i if m_i > 1.
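The three 1-D node families above can be sketched directly from their definitions. This is an illustrative Python fragment only; the full sparse grid is the Smolyak combination of tensor products of these sets, which is omitted here.

```python
def nodes_maximum(i):
    # H^M: m_i = 2^i + 1 equidistant nodes, boundary included
    m = 2 ** i + 1
    return [(j - 1) / (m - 1) for j in range(1, m + 1)]

def nodes_no_boundary(i):
    # H^NB: m_i = 2^i - 1 equidistant nodes, boundary excluded
    m = 2 ** i - 1
    return [j / (m + 1) for j in range(1, m + 1)]

def nodes_clenshaw_curtis(i):
    # H^CC: m_1 = 1 (single midpoint); m_i = 2^(i-1) + 1 for i > 1
    if i == 1:
        return [0.5]
    m = 2 ** (i - 1) + 1
    return [(j - 1) / (m - 1) for j in range(1, m + 1)]

print(nodes_clenshaw_curtis(3))  # -> [0.0, 0.25, 0.5, 0.75, 1.0]
```

Note that each Clenshaw-Curtis node set nests inside the next level's set, which is one reason H^CC grows the slowest of the three families.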
Figure 2 illustrates the gridsHM
4,2,HNB
4,2 andHCC
4,2 for d= 2. Figure 3 illustrates
the gridsHM
4,3,HNB
4,3, andHCC
4,3 for d= 3. It can be seen from these figures that the
number of grid points grows much faster with increasing n (levels) and d(dimension)
forHM
. The number of points of the Clenshaw-Curtis gridHCC
increases the slowest.
In this paper, we use the Clenshaw-Curtis sparse grids for the study of response
surfaces.
Figure 2: Different sparse grids for d=2.
Figure 3: Different sparse grids for d=3.
To further illustrate the growth of the number of nodes with n and d depending on the
chosen grid type, we have included Table 2. Note that the grid H^M is not suited for
higher-dimensional problems, since at least 3^d support nodes are needed.
Table 2: Comparison of nodes in different sparse grids.
            d=2                   d=4                     d=8
n       M     NB    CC        M      NB    CC         M       NB      CC
0       9      1     1       81       1     1       6561       1       1
1      21      5     5      297       9     9      41553      17      17
2      49     17    13      945      49    41      1.9e5     161     145
3     113     49    29     2769     209   137      7.7e5    1121     849
4     257    129    65     7681     769   401      2.8e6    6401    3937
5     577    321   145    20481    2561  1105      9.3e6   31745   15713
6    1281    769   321    52993    7937  2929      3.0e7  141569   56737
7    2817   1793   705    1.3e5   23297  7537      9.1e7   5.8e5   1.9e5
Response Surface Method Using Sparse Grid
Response surfaces are smooth analytical functions that are most often approximated
by low-order polynomials. The approximation can be expressed as

y(x) = f(x) + ε (1)

where y(x) is the unknown function of interest, f(x) is a known polynomial
function of x, and ε is a random error term. If the response is well modeled by a linear
function of the k independent variables, then the approximating function is the first-
order model

f = β0 + β1 x1 + β2 x2 + ... + βk xk (2)
When nonlinearities are present, a second-order model is used:

f = β0 + Σ_{i=1}^{k} βi xi + Σ_{i=1}^{k} βii xi^2 + Σ_{i<j} βij xi xj + ε (3)
Once the design points are obtained, we need to fit a least-squares response
surface. To evaluate the parameters β0, βi, βii and βij, Equations (2) and (3) can be written as

y = Xβ + ε (5)

where y is an n x 1 vector of responses and X is an n x p matrix of sample data
points. For the second-order model in two variables,

X = | 1  x11  x12  x11^2  x12^2  x11 x12 |
    | 1  x21  x22  x21^2  x22^2  x21 x22 |
    | .   .    .     .      .       .    |
    | 1  xn1  xn2  xn1^2  xn2^2  xn1 xn2 |    (6)

Here β is a p x 1 vector of the regression parameters, ε is an n x 1 vector of error
terms, n is the number of design points, and p is the number of regression parameters. The
parameters β0, βi, βii and βij are obtained by minimizing the least-squares error
obtained from Equation (5) [6]:
L = Σ_{i=1}^{n} εi^2 = ε^T ε = (y - Xβ)^T (y - Xβ) = y^T y - 2β^T X^T y + β^T X^T X β (7)

where L is the square of the error. To minimize L, Equation (7) is differentiated
with respect to β and set to zero:

∂L/∂β = -2X^T y + 2X^T X β = 0, or β^ = (X^T X)^(-1) X^T y (8)

Therefore, the fitted regression model is

y^ = X β^ (9)
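Equations (5)-(9) amount to an ordinary least-squares fit and can be sketched in a few lines of NumPy. The sample points and test polynomial below are illustrative only, not data from the paper.

```python
import numpy as np

def quadratic_design_matrix(pts):
    # Columns of Equation (6): 1, x1, x2, x1^2, x2^2, x1*x2
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1**2, x2**2, x1 * x2])

def fit_response_surface(pts, y):
    X = quadratic_design_matrix(pts)
    # beta = (X^T X)^(-1) X^T y; lstsq solves the same problem stably
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Recover a known quadratic exactly from 20 scattered samples
rng = np.random.default_rng(0)
pts = rng.random((20, 2))
y = 1.0 + 2.0 * pts[:, 0] - 3.0 * pts[:, 1] + 0.5 * pts[:, 0] * pts[:, 1]
beta = fit_response_surface(pts, y)
print(np.round(beta, 6))  # ~ [1, 2, -3, 0, 0, 0.5]
```

Because the test response is itself a quadratic, the fit recovers the generating coefficients to machine precision; for a non-polynomial response the same code returns the least-squares projection instead.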
Numerical Studies
Response surfaces are constructed using sparse grids for a variety of test functions
given in [37-38]. A MATLAB implementation of sparse grids [39-40] is available
at http://www.ians.uni-stuttgart.de/spinterp/. We have used this toolbox to generate
sparse grid coordinates for the required dimension and level. The response surface
approximations are then minimized, and the stationary points obtained are compared
with those of the actual function.
Problem 1: Rosenbrock's function

f = 100(x2 - x1^2)^2 + (1 - x1)^2

By setting the gradient equal to zero, the minimum point of the above function is
found to be at x1 = 1 and x2 = 1. We use an (n=2, d=2) SGD to construct the response
surface for this function, where n represents the level and d represents the problem
dimension. As mentioned before, the number of degrees of freedom in each
coordinate direction is five, determined by N = 2^n + 1. Let y1
and y2 represent the coded SGD points in the domain (0, 1). We obtain the physical
points x1 and x2 by using 20% and 40% perturbations on the design variables y1
and y2 respectively, i.e., the coded variables are mapped to the physical ranges 0.8
to 1.2 and 1 to 1.4. The relation between the coded and the physical variables is
obtained by the linear transformation given in equation (10):

x1 = 0.4 y1 + 0.8, x2 = 0.4 y2 + 1 (10)
Figure 5: Sparse grid design (2, 2) with physical points.
The value of Rosenbrock's function at these data points is evaluated, and the
second-order response surface obtained after solving for the regression
coefficients is

f = 13.0889 - 52.3276 y1 + 28.813 y2 - 64.8813 y1 y2 + 58.1943 y1^2 + 16.2743 y2^2

After setting the gradient equal to zero, we see that the response surface has a
minimum at y1 = 0.47 and y2 = 0.04. Substituting the values of y1 and y2 in
equation (10), we get x1 = 0.988 and x2 = 1.016, and at this point f = 0.15.
Next, we use an (n=3, d=2) SGD as in Figure 6 and a higher-order cubic response
surface for a better fit. Using the data generated, the third-order response surface
obtained is

f = 12.6 - 45.97 y1 + 28.6 y2 - 51.94 y1 y2 + 26.8 y1^2 + 15.40 y2^2 - 12.20 y1^2 y2 + 25.80 y1^3

Finding the minimum of the cubic function yields multiple candidate solutions:
(0.15, -0.64), (0.81, 0.67) and (0.499, -4.45e-15). As a positive definite
Hessian matrix is the sufficient condition for a local minimum, we calculate the Hessian
at these stationary points and find that (0.15, -0.64) and (0.81, 0.67) are
points of inflexion, while (0.499, -4.45e-15) is the minimum point.

At y1 = 0.499 and y2 = -4.45e-15, equation (10) gives x1 = 0.999 and x2 = 1.
At this point f = 0.000064, which is much less than at the starting design. Thus, for
Rosenbrock's function a quadratic response surface provides an adequate fit and a
cubic response surface provides an excellent approximation to the objective function.
More accurate approximations can be created by using higher values of the level n;
however, this also leads to more sampling points.
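The Hessian test used above can be sketched generically. The matrices below are illustrative numbers, not the fitted coefficients from this section.

```python
import numpy as np

def classify_stationary_point(hessian):
    # positive definite -> minimum; negative definite -> maximum; else saddle
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig > 0):
        return "minimum"
    if np.all(eig < 0):
        return "maximum"
    return "saddle/inflexion"

# For a quadratic surface f = c0 + c1*y1 + c2*y2 + c12*y1*y2 + c11*y1^2 + c22*y2^2
# the Hessian is constant: [[2*c11, c12], [c12, 2*c22]]. For a cubic surface it
# varies between stationary points, so it must be evaluated at each one.
print(classify_stationary_point(np.array([[2.0, 0.5], [0.5, 3.0]])))  # -> minimum
```

An indefinite Hessian (eigenvalues of mixed sign) marks the saddle/inflexion points that are discarded in the search for the surface minimum.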
Figure 6: Sparse grid design (3, 2) with physical points.
Problem 2: Powell's badly scaled function

f = (10^4 x1 x2 - 1)^2 + [exp(-x1) + exp(-x2) - 1.0001]^2

This function has a minimum at x1 = 1.098x10^-5 and x2 = 9.106, where
f(x1, x2) = 0. We use the (n=2, d=2) SGD as in Figure 4 to construct a response surface
for this function. Next, we scale the design variables using the linear transformation
formula given by equation (11) and obtain the physical points x1 and x2. Equation
7/27/2019 Response Surface Approximation Using Sparse
10/20
468 P. Beena and Ranjan Ganguli
(11) relates the physical points to the coded SGD points, where y1 and y2
represent the coded SGD points in the domain (0, 1):

x1 = 0.00004 y1 + 0.00001, x2 = 4 y2 + 8 (11)
The value of the function at these data points is evaluated, and the second-order
response surface obtained after solving for the regression coefficients is

f = 0.727 - 1.36 y1 - 1.68 y2 - 0.16 y1 y2 + 16.16 y1^2 + 2.51 y2^2

After setting the gradient equal to zero, we see that the response surface has a
minimum at y1 = 0.05 and y2 = 0.35. Substituting these values of y1 and y2 in
equation (11), we get x1 = 0.000012 and x2 = 9.4, and at this point f = 0.016, which
is a good approximation to the original function.
Problem 3: Brown's badly scaled function

f = (x1 - 10^6)^2 + (x2 - 2x10^-6)^2 + (x1 x2 - 2)^2

This function has a minimum at x1 = 10^6 and x2 = 2x10^-6, where f(x1, x2) = 0. We
now try to fit a second-order response surface using the SGD as in Figure 4. We scale
the design variables using the linear transformation as in equation (12) and obtain the
physical points:

x1 = 10^6 y1, x2 = 5x10^-6 y2 (12)
The second-order response surface obtained after solving for the regression
coefficients is

f = 10^12 - 2x10^12 y1 - 15.52 y2 + 10.1 y1 y2 + 10^12 y1^2 + 9.8 y2^2

We see that the response surface has a minimum at y1 = 1 and y2 = 0.28.
Substituting these values of y1 and y2 in equation (12), we get x1 = 1x10^6 and
x2 = 1.4x10^-6, and at this point f = 0.36.

As this value is higher than the actual minimum, we use the SGD with n=3 and d=2
(Figure 6) and the linear transformation as in Equation (12) to evaluate a better fit.
A second-order response surface is thus obtained as

f = 10^12 - 2x10^12 y1 - 15.96 y2 + 11.1 y1 y2 + 10^12 y1^2 + 7.71 y2^2

We see that the coefficients of the linear and the quadratic terms in y2 have
changed in the response surface. Solving for y1 and y2 and substituting the values in
Equation (12) gives x1 = 1x10^6 and x2 = 2.25x10^-6, and at this point f = 0.062,
which is a better approximation. Next, we form another second-order response surface
using the (n=4, d=2) SGD as in Figure 1(c) and get x1 = 1x10^6 and x2 = 2.15x10^-6; at this
point f = 0.02, which is very close to the actual minimum. Thus, by increasing the number of
points in the SGD we have obtained a better fit of the objective function.
Problem 4: Powell's quartic function

f = (x1 + 10 x2)^2 + 5(x3 - x4)^2 + (x2 - 2 x3)^4 + 10(x1 - x4)^4

This function has a minimum point at (x1, x2, x3, x4) = (0, 0, 0, 0), where
f(x1, x2, x3, x4) = 0. The sparse grid points are generated with (n=2, d=4). For this problem,
the physical and the coded points are considered identical, i.e.,

x1 = y1, x2 = y2, x3 = y3, x4 = y4 (13)

The value of the original function at these grid points is determined and a response
surface is constructed using the data obtained:

f = 4.21 + 3.03 y1 + 3.28 y2 + 3.82 y3 + 3.30 y4 + 20 y1 y2 - 20 y1 y4 - 16 y2 y3 - 10 y3 y4 + 7.96 y1^2 + 101.15 y2^2 + 15.86 y3^2 + 11.96 y4^2

Solving for y1, y2, y3 and y4 and substituting the values in equation (13) gives
x1 = 1.45, x2 = 0.14, x3 = 0.18, x4 = 1.16, and at this point f = 13.05. As this
value is higher than the actual minimum, we use higher levels of SGD to obtain a better
fit. With the (n=3, d=4) SGD, we obtain (x1, x2, x3, x4) = (0.16, 0.01, 0.07, 0.14)
and f(x1, x2, x3, x4) = 0.092. With the (n=4, d=4) SGD, we obtain
(x1, x2, x3, x4) = (0.019, 0.005, 0.006, 0.015) and f(x1, x2, x3, x4) = 0.005. This
is a much better design when compared to the starting design.
Problem 5: Beale's function

f = [1.5 - x1(1 - x2)]^2 + [2.25 - x1(1 - x2^2)]^2 + [2.625 - x1(1 - x2^3)]^2

This function has a minimum point at (x1, x2) = (3, 0.5), where f(x1, x2) = 0. The
sparse grid points are generated with level n=2 and d=2. For this problem, the
physical points are related to the coded points by equation (14):

x1 = 4 y1 + 1, x2 = y2 (14)

The second-order response surface is obtained as

f = 4.85 - 9.85 y1 - 19.58 y2 - 21.82 y1 y2 + 29.41 y1^2 + 29.93 y2^2

Solving for y1 and y2 and substituting the values in equation (14) gives
x1 = 2.28 and x2 = 0.44, and at this point f = 0.50. With the (n=3, d=2) SGD, we obtain a
second-order response surface as

f = 3.07 - 5.23 y1 - 14.38 y2 - 30.70 y1 y2 + 23.03 y1^2 + 34.15 y2^2

Solving for y1 and y2 and substituting the values in equation (14) gives
x1 = 2.44, x2 = 0.37, and at this point f = 0.11, which is certainly a good
approximation. However, we use the (n=3, d=2) SGD and fit a cubic polynomial to see
if we can evaluate a better fit. Using the data generated, the third-order response
surface is obtained as

f = 3.36 - 32.05 y1 + 24.48 y2 - 13.89 y1 y2 + 65.69 y1^2 + 63.16 y2^2 - 39.64 y1^2 y2 - 56.45 y1 y2^2 + 9.62 y1^3 - 51.66 y2^3

Finding the minimum of the cubic function yields multiple candidate solutions:
(0.48, 0.50), (0.33, 0.19) and (-0.18, 1.24). We calculate the Hessian at these stationary
points and find that (0.33, 0.19) and (-0.18, 1.24) are points of inflexion,
while (0.48, 0.50) is the minimum point. At y1 = 0.48 and y2 = 0.50, equation (14)
gives x1 = 2.92 and x2 = 0.5. At this point f = 0.01.
Problem 6: Booth's function

f = (x1 + 2 x2 - 7)^2 + (2 x1 + x2 - 5)^2

This function has a global minimum at x1 = 1 and x2 = 3, where f(x) = 0. We use the
(n=2, d=2) SGD to construct a response surface for this function. Equation (15) relates
the physical points to the coded SGD points:

x1 = 4 y1, x2 = 4 y2 (15)

We create a second-order response surface after solving for the regression
coefficients as

f = 74 - 136 y1 - 152 y2 + 128 y1 y2 + 80 y1^2 + 80 y2^2

Solving for y1 and y2 and substituting the values in equation (15) gives
x1 = 1, x2 = 3 and f(x1, x2) = 0, exactly as for the original function.
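Problem 6 can be reproduced end to end in a few lines. Since Booth's function is itself quadratic, any well-poised sample recovers the surface exactly; a plain 4x4 grid stands in here for the SGD points (the spinterp toolbox is not assumed).

```python
import numpy as np

def booth(x1, x2):
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

g = np.linspace(0.0, 4.0, 4)
pts = np.array([(a, b) for a in g for b in g])
y = booth(pts[:, 0], pts[:, 1])
# Full second-order basis: 1, x1, x2, x1*x2, x1^2, x2^2
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] * pts[:, 1], pts[:, 0] ** 2, pts[:, 1] ** 2])
c = np.linalg.lstsq(X, y, rcond=None)[0]
# Stationary point: solve [[2*c11, c12], [c12, 2*c22]] @ x = -[c1, c2]
A = np.array([[2 * c[4], c[3]], [c[3], 2 * c[5]]])
xstar = np.linalg.solve(A, -c[1:3])
print(np.round(xstar, 6))  # -> [1. 3.]
```

The recovered stationary point (1, 3) matches the exact minimum, mirroring the exact recovery reported above for the sparse grid sample.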
Problem 7: Wood's function

f = 100(x2 - x1^2)^2 + (1 - x1)^2 + 90(x4 - x3^2)^2 + (1 - x3)^2 + 10(x2 + x4 - 2)^2 + 0.1(x2 - x4)^2

This function has a minimum point at (x1, x2, x3, x4) = (1, 1, 1, 1), where
f(x1, x2, x3, x4) = 0. We use the n=2 and d=4 SGD to construct a response surface for this problem.
The physical and the coded points are related as

xi = 0.4 yi + 0.8, i = 1, ..., 4 (16)

A second-order response surface is obtained as

f = 4.67 - 26.62 y1 + 11.77 y2 - 23.97 y3 + 10.42 y4 - 64.2 y1 y2 + 3.2 y2 y4 - 57.6 y3 y4 + 64.82 y1^2 + 17.63 y2^2 + 58.36 y3^2 + 16.03 y4^2
Solving for y1, y2, y3 and y4 and substituting the values in equation (16) gives
x1 = 0.86, x2 = 0.77, x3 = 0.98, x4 = 1.004, and at this point f = 0.68. With the (n=3,
d=4) SGD and a second-order response surface, we obtain
(x1, x2, x3, x4) = (0.88, 0.81, 0.99, 1.01) and f(x1, x2, x3, x4) = 0.44, which is a
somewhat better fit.
Problem 8: A nonlinear function of three variables

f = 1/[1 + (x1 - x2)^2] + sin((1/2) π x2 x3) + exp{-[(x1 + x3)/x2 - 2]^2}

This function has a maximum point at (x1, x2, x3) = (1, 1, 1), where f(x1, x2, x3) = 3.
We use the n=3 and d=3 SGD to construct a response surface for this problem. The
physical and coded points are related as

xi = 0.8 yi + 0.6, i = 1, 2, 3 (17)

A second-order response surface is obtained as

f = 2.21 + 0.32 y1 + 1.38 y2 + 1.30 y3 + 1.40 y1 y2 - 0.59 y1 y3 - 0.47 y2 y3 - 0.72 y1^2 - 1.74 y2^2 - 1.022 y3^2

Solving for y1, y2 and y3 and substituting the values in equation (17) gives
x1 = 1.20, x2 = 1.12, x3 = 0.8, and at this point f = 2.97, which is a reasonable
approximation.
Problem 9: Extended Raydan function

This function is defined for any general dimension d as f = Σ_{i=1}^{d} (e^{xi} - xi). Several
cases, d = 2, 3, 5, 10 and 20, are evaluated next using SGD.

Case 1: f = Σ_{i=1}^{2} (e^{xi} - xi)

By setting the gradient equal to zero, the minimum point of the given function is
found to be at x1 = 0 and x2 = 0, where f(x1, x2) = 2. For this problem, the physical
and the coded points are considered identical. Using the (2, 2) SGD, the second-order
response surface obtained is

f = 2.0058 - 0.1311 x1 - 0.1311 x2 + 0.8435 x1^2 + 0.8435 x2^2

Solving for x1 and x2, we get x1 = 0.077 and x2 = 0.077 and f(x1, x2) = 2.0062,
which is a good approximation to the objective function.
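As a quick check of Case 1, the stationary point of this separable surface follows from one division per coordinate, using the two coefficients reported above:

```python
# For f = c0 + c1*x_i + c2*x_i^2 in each coordinate, df/dx_i = 0 gives
# x_i = -c1 / (2 * c2). Coefficient values are those quoted in the text.
c1, c2 = -0.1311, 0.8435
x_star = -c1 / (2 * c2)
print(round(x_star, 4))  # -> 0.0777
```

The same one-line computation applies to each of the remaining Raydan cases, since the fitted surfaces are separable in the coordinates.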
Case 2: f = Σ_{i=1}^{3} (e^{xi} - xi)

This function has a minimum at (x1, x2, x3) = (0, 0, 0), where f(x1, x2, x3) = 3.
Using the (2, 3) SGD, the second-order response surface obtained is

f = 3.0051 - 0.1283 (x1 + x2 + x3) + 0.8435 (x1^2 + x2^2 + x3^2)

Solving for x1, x2 and x3, we get x1 = 0.076, x2 = 0.076 and x3 = 0.076 and
f(x1, x2, x3) = 3.008.
Case 3: f = Σ_{i=1}^{5} (e^{xi} - xi)

This function has a minimum at (x1, ..., x5) = (0, 0, 0, 0, 0), where
f(x1, ..., x5) = 5. Using the (2, 5) SGD, the second-order response surface obtained is

f = 5.0054 - 0.1268 (x1 + ... + x5) + 0.8435 (x1^2 + ... + x5^2)

Solving for (x1, ..., x5), we obtain (0.075, 0.075, 0.075, 0.075, 0.075) and
f(x1, ..., x5) = 5.01.
Case 4: f = Σ_{i=1}^{10} (e^{xi} - xi)

The above function has a minimum at (x1, ..., x10) = (0, ..., 0), where
f(x1, ..., x10) = 10. For a function with d=10 and five degrees of freedom in each coordinate
direction, the total number of runs required by the sparse grid approach is 221.
The two-level factorial and CCD designs would require 2^10 = 1024 and 2^10 + 2x10 + 1 = 1045 points
respectively. Using the (2, 10) SGD, the second-order response surface obtained is

f = 10.0073 - 0.1260 (x1 + ... + x10) + 0.8435 (x1^2 + ... + x10^2)

Solving for (x1, ..., x10), we get (0.074, ..., 0.074) and f(x1, ..., x10) = 10.02.
Case 5: f = Σ_{i=1}^{20} (e^{xi} - xi)

This function has a minimum at (x1, ..., x20) = (0, ..., 0), where
f(x1, ..., x20) = 20. Using the (2, 20) SGD, the second-order response surface obtained is

f = 20.0118 - 0.1256 (x1 + ... + x20) + 0.8435 (x1^2 + ... + x20^2)

Solving for (x1, ..., x20), we obtain (0.074, ..., 0.074) and f(x1, ..., x20) = 20.05.
For d=20, the SGD requires 841 runs, compared to 2^20 = 1048576 for the two-level
factorial design and 2^20 + 2x20 + 1 = 1048617 for the CCD design. Thus, for the
extended Raydan function we obtain very good approximations using SGD. Next, we
use the diagonal function to further test SGD at higher dimensions.
Problem 10: Extended Diagonal function

This function is defined for a d-dimensional problem as

f = Σ_{i=1}^{d} (i/100) xi^2 + (Σ_{i=1}^{d} xi)^2

We evaluate SGD for the cases d = 2, 3, 5, 10 and 20.
Case 1: f = (1/100) x1^2 + (2/100) x2^2 + (x1 + x2)^2

The minimum point of the given function is found to be at x1 = 0 and x2 = 0, where
f(x1, x2) = 0. Using the (2, 2) SGD, the second-order response surface obtained is

f = 2 x1 x2 + 1.01 x1^2 + 1.02 x2^2

Solving for x1 and x2, we get x1 = 0 and x2 = 0 and f(x1, x2) = 0, which is a good
approximation to the objective function.
Case 2: f = Σ_{i=1}^{3} (i/100) xi^2 + (x1 + x2 + x3)^2

This function has a minimum at (x1, x2, x3) = (0, 0, 0), where f(x1, x2, x3) = 0. Using
the (2, 3) SGD, the second-order response surface obtained is

f = 2 x1 x2 + 2 x1 x3 + 2 x2 x3 + 1.01 x1^2 + 1.02 x2^2 + 1.03 x3^2

Solving for x1, x2 and x3, we get x1 = 0, x2 = 0 and x3 = 0 and f(x1, x2, x3) = 0.
Case 3: f = Σ_{i=1}^{5} (i/100) xi^2 + (x1 + x2 + x3 + x4 + x5)^2
References
[1] Klimke, A., 2006, Uncertainty Modeling using Fuzzy Arithmetic and Sparse
Grids, Institut für Angewandte Analysis und Numerische Simulation,
Universität Stuttgart.
[2] Garcke, J., 2005, Sparse Grid Tutorial,
http://www.maths.anu.edu.au/~garcke/paper/ sparseGridTutorial.pdf.
[3] Kodiyalam, S., and Sobieszczanski-Sobieski, J., 2000, Bilevel Integrated
System Synthesis with Response Surfaces, American Institute of Aeronautics
and Astronautics Journal, 38, pp. 1479–1485.
[4] Batill, S. M., Stelmack, M. A., and Sellar, R. S., 1999, Framework of
Multidisciplinary Design Based on Response Surface Approximations,
Journal of Aircraft, 36, pp. 275–287.
[5] Roux, W. J., Stander, N., and Haftka, R. T., 1998, Response Surface
Approximations for Structural Optimization, International Journal for
Numerical Methods in Engineering, 42, pp. 517–534.
[6] Myers, R. H., and Montgomery, D. C., 1995, Response Surface Methodology:
Process and Product Optimization Using Designed Experiments, Wiley,
New York.
[7] Smolyak, S. A., 1963, Quadrature and Interpolation Formulas for Tensor
Products of Certain Classes of Functions, Dokl. Akad. Nauk SSSR, 148,
pp. 1042-1043. Russian; Engl. transl.: Soviet Math. Dokl., 4, pp. 240–243.
[8] Zenger, C., 1990, Sparse grids, Parallel Algorithms for Partial Differential
Equations, Proceedings of the Sixth GAMM-Seminar, Kiel, Vieweg-Verlag, pp. 241–251.
[9] Griebel, M., Schneider, M., and Zenger, C., 1992, A Combination Technique for
the Solution of Sparse Grid Problems, Iterative Methods in Linear Algebra,
IMACS, Elsevier, North Holland, pp. 263–281.
[10] Garcke, J., and Griebel, M., 2002, Classification with Sparse Grids using
Simplicial Basis Functions, Intelligent Data Analysis, 6, pp. 483–502.
[11] Garcke, J., Griebel, M., and Thess, M., 2001, Data Mining with Sparse Grids,
Computing, 67, pp. 225–253.
[12] Frank, K., Heinrich, S., and Pereverzev, S., 1996, Information Complexity of
Multivariate Fredholm Integral Equations in Sobolev Classes, J. of
Complexity, 12, pp. 17-34.
[13] Griebel, M., Oswald, P., and Schiekofer, T., 1999, Sparse Grids for Boundary
Integral Equations, Numer. Mathematik, 83(2), pp. 279-312.
[14] Baszenski, G., 1985, N-th order Polynomial Spline Blending, in Schempp,
W., and Zeller, K., eds., Multivariate Approximation Theory III, ISNM
75, Birkhäuser, Basel, pp. 35-46.
[15] Temlyakov, V. N., 1989, Approximation of Functions with Bounded Mixed
Derivative, Proc. Steklov Inst. Math, 1.
[16] Sickel, W., and Sprengel, F., 1999, Interpolation on Sparse Grids and
Nikol'skij-Besov Spaces of Dominating Mixed Smoothness, J. Comput. Anal.
Appl., 1, pp. 263-288.
7/27/2019 Response Surface Approximation Using Sparse
19/20
Response Surface Approximation using Sparse Grid Design 477
[17] Griebel, M., and Knapek, S., 2000, Optimized Tensor-Product
Approximation Spaces, Constructive Approximation, 16(4), pp. 525-540.
[18] Klimke, A., and Wohlmuth, B., 2005, Computing Expensive Multivariate
Functions of Fuzzy Numbers using Sparse Grids, Fuzzy Sets and Systems,
154(3), pp. 432-453.
[19] Schwab, C., and Todor, R., 2003, Sparse Finite Elements for Stochastic
Elliptic Problems - Higher Order Moments, Computing, 71(1), pp. 43-63.
[20] Schwab, C., and Todor, R. A., 2003, Sparse Finite Elements for Elliptic
Problems with Stochastic Loading, Numer. Math., 95(4), pp.707-734.
[21] Gradinaru, V., and Hiptmair, R., 2003, Multigrid for Discrete Differential
Forms on Sparse Grids, Computing, 71(1), pp.17-42.
[22] von Petersdorff, T., and Schwab, C., 2004, Numerical Solution of Parabolic
Equations in High Dimensions, Mathematical Modeling and Numerical
Analysis, 38(1), pp. 93-127.
[23] Bungartz, H. J., and Griebel, M., 2004, Sparse Grids, Acta Numerica, 13,
pp. 147-269.
[24] Bungartz, H. J., and Griebel, M., 1999, A Note on the Complexity of Solving
Poisson's Equation for Spaces of Bounded Mixed Derivatives, J. of
Complexity, 15, pp. 167-199. Also as Report No. 524, SFB 256, Univ. Bonn, 1997.
[25] Faber, G., 1909, Über stetige Funktionen, Mathematische Annalen, 66, pp.
81-94.
[26] Yserentant, H., 1992, Hierarchical bases, in R. E. O'Malley et al., eds.,
Proc. ICIAM'91, SIAM, Philadelphia.
[27] Yserentant, H., 1986, On the Multi-Level Splitting of Finite Element
Spaces, Numerische Mathematik, 49, pp. 379-412.
[28] Laffan, S. W., Nielsen, O. M., Silcock, H., and Hegland, M., 2005, Sparse
Grids: A New Predictive Modeling Method for the Analysis of Geographic
Data, International Journal of Geographical Information Science, 19(3),
pp. 267–292.
[29] Babenko, K. I., 1960, Approximation of Periodic Functions of Many
Variables by Trigonometric Polynomials, Dokl. Akad. Nauk SSSR, 132,
pp. 247-250. Russian; Engl. transl.: Soviet Math. Dokl., 1, pp. 513-516, 1960.
[30] Temlyakov, V. N., 1993, Approximation of Periodic Functions, Nova
Science, New York.
[31] Temlyakov, V. N., 1993, On Approximate Recovery of Functions with
Bounded Mixed Derivative, J. Complexity, 9, pp. 41-59.
[32] Delvos, F. J., 1982, d-Variate Boolean Interpolation, J. Approx. Theory, 34,
pp. 99-114.
[33] Delvos, F. J., and Schempp, W., 1989, Boolean Methods in Interpolation and
Approximation, Pitman Research Notes in Mathematics series 230, Longman
Scientific & Technical, Harlow.
[34] Liem, C. B., Lü, T., and Shih, T. M., 1995, The Splitting Extrapolation
Method, World Scientific, Singapore.
[35] Novak, E., and Ritter, K., 1996, High-Dimensional Integration of Smooth
Functions Over Cubes, Numer. Math., 75(1), pp. 79–97.