Reduced order optimization of large-scale nonlinear systems with nonlinear inequality constraints using steady state simulators

Panagiotis Petsagkourakis†, Ioannis Bonis†, and Constantinos Theodoropoulos*†

*Corresponding author. Tel.: +44 1612004386; fax: +44 1612367439. E-mail address: [email protected] (C. Theodoropoulos).

†School of Chemical Engineering and Analytical Science, University of Manchester, Sackville St, Manchester M13 9PL, UK

Keywords: Model reduction-based optimization, reduced Hessian, nonlinear inequality constraints, black-box simulator, large-scale optimization.


Abstract

Technological advances have led to the widespread use of computational models of increasing

complexity, in both industry and everyday life. This helps to improve the design, analysis and

operation of complex systems. Many computational models in the field of engineering consist of

systems of coupled nonlinear Partial Differential Equations (PDEs). As a result, optimization

problems involving such models may lead to computational issues because of the large number

of variables arising from the spatio-temporal discretization of the PDEs. In this work, we present

a methodology for the steady-state optimization, subject to nonlinear inequality constraints, of complex large-scale systems for which only an input/output steady-state simulator is available. The

proposed method is efficient for dissipative systems and is based on model reduction. This

framework employs a two-step projection scheme followed by three different approaches for

handling the nonlinear inequality constraints. In the first approach, partial reduction is

implemented on the equality constraints, while the inequality constraints remain the same. In the

second approach an aggregation function is applied in order to reduce the number of inequality

constraints and solve the augmented problem. The final method applies slack variables to replace

the one aggregated inequality from the previous method with an equality constraint without

affecting the eigenspectrum of the system. Only low-order Jacobian and Hessian matrices are

employed in the proposed formulations, utilizing only the available black-box simulator. The

advantages and disadvantages of each approach are illustrated through the optimization of a

tubular reactor where an exothermic reaction takes place. It is found that the approach involving

the aggregation function can efficiently handle inequality constraints while significantly reducing

the dimensionality of the system.


1. Introduction

The optimization of large-scale systems is not a trivial task and has received significant

attention over the years. Advances in technology have allowed the evolution of available

simulators that can accurately model physical systems of high complexity, such as COMSOL1 and

OpenFOAM2. However, in most cases, these simulators cannot perform optimization tasks. In

addition, many of these simulators do not offer access to the underlying modelling equations.

Distributed parameter systems (DPS)3 consist of partial differential equations (PDEs)4, which can express the physical behaviour of many engineering systems such as supercapacitor manufacturing5, thermal-fluid processes6 or convection-diffusion-reaction systems7. The most

common way to treat PDEs is to discretise them over a computational mesh producing large

systems of nonlinear (dynamic) equations. The resulting large-scale models cannot easily be

used for optimization and control applications8 as these applications require the repeated solution

of the system in real time. Computing reduced versions of the (large-scale) system gradients can therefore significantly enhance the applicability of deterministic optimization and control

algorithms for large-scale systems. Moreover, when commercial simulators are employed, the

systems’ gradients are not usually explicitly available to the user and need to be computed

numerically. Automatic differentiation9,10,11 can be utilised for the computation of numerical

derivatives in an efficient manner.

Reduced Hessian methods (rSQP)12–15 have been developed based on sequential quadratic

programming (SQP) and can be used for large-scale DPS with relatively few degrees of freedom.

The advantage of these algorithms is that a low-order projection of the Hessian matrix is used, so

less computational effort is required. The main idea is that suitable bases are constructed and


used to project the large-scale system onto the low-dimensional subspace of the system’s

independent variables, thus effectively reducing the dimensionality of the original system.

However, these methods still require the construction (and inversion) of large-scale Jacobians

and Hessians, hence requiring significant computational effort. To side-step these issues, rSQP

methods have been combined with equation-free model reduction methodologies16, when a

(black-box) dynamic simulator of the system is available, significantly enhancing the

computational efficiency of large-scale optimization problems.

Equality constraints regularly represent the physical model of the system, for which the

optimization algorithm has to find a feasible solution. In engineering practice, there are many

limitations, either physical or technical, such as bounds on the system, on the (dependent and independent) variables, and on their properties. In addition, there are economic limitations, as well as

limitations due to safety considerations (e.g. temperature bounds in the case of exothermic

reactions where sudden temperature rise can lead to runaways).

There are two main approaches for handling inequality constraints within the SQP

framework17: The first approach involves the sequential solution of inequality‐constrained QP

sub-problems (IQP) and the second involves the sequential solution of equality constrained ones

(EQP)17,18. Following the IQP rationale, at every iteration of the SQP (termed outer or major

iteration) the nonlinear inequality constraints are linearized and included in the QP sub-problem,

which in turn is solved using an active set approach. Conversely, in the EQP formulation, at

every major iteration an estimation of the active subset of the inequality constraints is identified

using estimates of the Lagrange multipliers and passed on to the QP as a working‐set, which


leads to only equality‐constrained QP. This method has the advantage of lower computational

cost and the utilization of simpler algorithms for quadratic programming. Both approaches have

advantages and disadvantages; however, neither of the two can effectively handle the nonlinear inequality and equality constraints produced by a large-scale black-box simulator for solution in real time, as they require the full system gradients.

In the framework of rSQP, the inequality constraints cannot be introduced directly as is the

case in IQP. As a result Schulz19 proposed a variant of rSQP, the so-called Partially Reduced

SQP (PRSQP), which combines the strong properties of SQP and rSQP. The main idea of this

approach is to exploit the structure of the null space of the equality constraints (or some of them)

and handle the inequality constraints as in the SQP method. One widely-used approach to handle

inequality constraints is the use of an aggregation function such as the Kreisselmeier-Steinhauser

(KS)20. This method can reduce the number of inequality constraints to just one inequality and it

can be combined with SQP20. The KS function has been also used as a barrier function21 in

chemical vapor deposition (CVD) applications in order to remove the nonlinear inequality constraints and use them as a penalty term in the objective function through the KS function. Another

interesting technique is the use of slack variables22, to turn inequality constraints into equalities,

taking into account only the active inequality constraints. This method has some disadvantages

as the number of the equality constraints may change from iteration to iteration. Therefore rSQP

may fail to produce feasible solutions. Another interesting approach to solve the optimization

problem with nonlinear inequality constraints is the barrier method, also known as interior point

method22,23, which solves an unconstrained problem by introducing a barrier function in the

objective function. The most common barrier function is the logarithmic function. Thus, when


one inequality is close to zero, the value of the barrier function increases exponentially. Hence,

this method may produce sub-optimal results, as the inequality constraints can never be active.

However, techniques exist that help the convergence of the algorithm to an active inequality

taking advantage of central path methodologies24–26.

A new optimisation technique was recently presented for large-scale dissipative systems27

based on equation-free methods28, which exploits their dissipative nature for model order

reduction (MOR). Dissipativity is expressed as separation of eigenvalues in the spectrum of the

linearized system and therefore as a separation of system modes (or scales) to slow and fast

ones29,30. This separation has been used in various ways within the MOR context, leading to

different formulations27,31–33. Nevertheless, none of the above MOR-based methods has dealt with

problems that include nonlinear constraints.

In this paper, the aforementioned model reduction technique27 was exploited in conjunction

with three different approaches to handle large-scale systems with nonlinear constraints. The first

approach combines PRSQP with equation-free model reduction, to reduce the dimensionality of

equality constraints only. The second approach adds the feature of constraint aggregation34,

where all the inequality constraints are replaced by a single KS function. This way large-scale

inequality constraints can be effectively handled. In the last approach, a slack variable is

employed to turn the aggregated inequality into an additional equality constraint. Slack variables

could potentially produce difficulties in the model reduction step; however we provide a proof

that the aggregation of the inequalities helps to avoid such issues.


The rest of the paper is organized as follows: Section 2 presents the background for this work

including a brief overview of the Partial reduced Sequential Quadratic Programming method, the

constraint aggregation method and the equation-free model reduction framework.

In section 3 we present the new methodology developed in this work for handling large-scale

nonlinear optimisation problems with nonlinear inequalities. The proposed schemes are

discussed alongside proofs of equivalence of the computed optima. We apply the three schemes

developed to an illustrative case study, the optimization of a tubular reactor, in section 4. Finally,

a comparison of the three schemes along with relevant conclusions is presented in section 5.

2. Background

2.1 Partial Reduced Sequential Quadratic Programming

Partial Reduced Sequential Quadratic Programming (PRSQP) was introduced19 in order to

extend reduced Hessian methods for problems with “additional” nonlinear equality and

inequality constraints. PRSQP reduces the space of (some) equality constraints and the rest of the

equality and inequality constraints are treated in a similar manner as in SQP. Thus, the extra

constraints (both inequality and equality constraints) are passed to the QP sub-problem. The

problem formulation is described as follows:

min f(x)
s.t. G(x) = 0,  h(x) ≤ 0,  x_L ≤ x ≤ x_U    (1)

Here f(x) is the objective function and x ∈ R^(N+dof) is the vector of (dependent, u, and independent, x_dof) variables. G: R^(N+dof) → R^N represents the N equality constraints and h: R^(N+dof) → R^(N_in) the N_in inequality constraints. Additionally, the Lagrange function, L, is defined as follows:

L(x) = f(x) + λ^T G(x) + λ_in^T h(x)    (2)


where the Lagrange multipliers, λ, that correspond to the equality constraints and the ones corresponding to the inequality constraints, λ_in, are computed directly from the solution of the QP

sub-problem. As mentioned above, large-scale problems, i.e. systems with a large number of equality and inequality constraints, still require the construction (and inversion) of large full-scale Jacobians and Hessians, compromising the computational efficiency of the optimization method. For cases with a large number of inequality constraints, aggregation methods can be applied.

2.2. Constraints aggregation

The KS function was first presented by G. Kreisselmeier and R. Steinhauser20. The function

contains an ‘aggregation parameter’, ρ, which is equivalent to the penalty factor in penalty

methods. This formulation was first used to combine multiple objectives and constraints into a

single function and has been utilised in a wide variety of applications such as CVD

optimization21 and structural optimization20,34. The KS function can be used to aggregate

inequality constraints and is described as follows:

KS(h_i) = (1/ρ) ln( Σ_i exp(ρ h_i) )    (3)

An equivalent expression is as follows:

KS(h_i) = M + (1/ρ) ln( Σ_i exp(ρ (h_i − M)) )    (4)

where ρ and M are design parameters. The second expression provides better behaviour when

one or more inequalities are positive, due to numerical difficulties that may be caused by the

exponential term. The design parameter M is suggested21 to be the maximum value of the

inequality constraints. Properties of the KS function can be found in Raspanti et al21.
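For illustration, a minimal Python/NumPy sketch of the shifted KS aggregate of eq. 4 is given below; the constraint values and the value of ρ used in the example are arbitrary placeholders, not values used in this work.

import numpy as np

def ks_aggregate(h, rho):
    # Shifted Kreisselmeier-Steinhauser function of eq. 4:
    # KS = M + (1/rho) * ln( sum_i exp(rho * (h_i - M)) ), with M = max_i h_i.
    h = np.asarray(h, dtype=float)
    M = h.max()                                   # shift avoids overflow of the exponential term
    return M + np.log(np.exp(rho * (h - M)).sum()) / rho

# Example: three hypothetical inequality values; KS tends to max(h) as rho grows.
print(ks_aggregate([-0.5, -0.1, -0.3], rho=50.0))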


2.3. Equation-Free Model Reduction

Equation-free model reduction16,31 has been successfully combined with rSQP in the PRSQP

methodology24 so that the dissipative nature of the system can be exploited. The system’s

dissipativity can be expressed as a gap in the spectrum of the eigenvalues of the linearized

problem (Jacobian of the equality constraints). A (usually) small number of eigenvalues is

clustered near the imaginary axis (red dots in Figure 1). These eigenvalues correspond to the

slow and/or unstable modes of the system. The rest of the eigenvalues beyond the gap (blue dots

in Figure 1) correspond to the fast modes. This idealized spectrum is illustrated in Figure 1.

Figure 1. Idealized separation of scales for the eigenvalues of a dissipative system.

The eigenvalues affect the stability of the static states19,30. In fact, the rightmost, slow modes in

the idealized eigenspectrum (Figure 1) enslave the rest and determine the system’s stability35.

The number, m, of the slow modes depends on the separation of scales whose existence has been

proven for parabolic PDEs36–38. A basis spanning the dominant subspace of the system, of size m,

can efficiently be computed using subspace iterations or Krylov subspace-based algorithms like

Arnoldi iterations35. The size of the basis can be heuristically derived or adaptively computed as

in the case of the recursive projection method (RPM)39. These techniques follow the matrix-free


concept as only evaluation of matrix-vector products is needed. As a result, even though the

systems’ equations and/or Jacobians may not be explicitly available, the calculation of the basis

is efficient and feasible.
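A minimal sketch of this matrix-free computation of the dominant basis is given below (Python; SciPy's ARPACK interface stands in for the Arnoldi/subspace iterations discussed above, and the residual function G, the state u and the subspace size m are assumptions of the sketch, not part of the original code):

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

def dominant_basis(G, u, m, eps=1e-6):
    # Orthonormal basis of the m right-most (slow/unstable) modes of the Jacobian of G at u.
    # Only matrix-vector products, obtained from black-box residual evaluations, are used.
    N = u.size
    def jac_vec(v):
        return (G(u + eps * v) - G(u - eps * v)) / (2.0 * eps)   # directional derivative (cf. eq. 9)
    J = LinearOperator((N, N), matvec=jac_vec)
    vals, vecs = eigs(J, k=m, which='LR')        # 'LR': eigenvalues with largest real part
    Z, _ = np.linalg.qr(np.real(vecs))           # orthonormalise (real part taken for simplicity)
    return Z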

Assume that G = 0 represents the system equations, as in eq. 1 above, contained within a (steady-state) black-box simulator, G: R^(N+dof) → R^N being Lebesgue integrable. If P is the dominant subspace of the system and Q its orthogonal complement, then:

P ⊕ Q = R^N    (5)

An orthonormal basis Z ∈ R^(N×m) for the subspace P and a projector P are defined as

P = Z Z^T    (6)

Z^T Z = I    (7)

As discussed above, an approximation of Z is computed through Krylov or Arnoldi iterations. The vector of system states, u, can be replaced with its low-dimensional projection, v = Z^T u, onto P. The reduced Jacobian, H ∈ R^(m×m), is then computed by the restriction of the full-scale Jacobian, J, onto Z:

H = Z^T J Z    (8)

side-stepping the need to calculate the full Jacobian. The reduced Jacobian is efficiently computed through m numerical perturbations for ε > 0:

J Z_j = (1/(2ε)) (G(u + ε Z_j) − G(u − ε Z_j)),  j = 1, …, m    (9)

This scheme follows the matrix-free concept, mainly to reduce memory requirements, and has its roots in the Recursive Projection Method30. Multiplying Z^T with J Z produces the desired reduced Jacobian21,27. The selection of the m slow modes is crucial for efficient model reduction and more


details can be found in Bonis & Theodoropoulos27. It should be noted that only the directional

derivatives need to be computed, as is the case in automatic differentiation (AD) approaches. As

a result, AD can be combined with our model reduction approach to enhance even further the

computational capabilities of our methodology.
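As a minimal illustration of eqs. 8-9, the sketch below (Python/NumPy) forms the reduced Jacobian from m directional perturbations of an assumed black-box residual G, without ever assembling the full Jacobian:

import numpy as np

def reduced_jacobian(G, u, Z, eps=1e-6):
    # H = Z^T J Z (eq. 8), with each column J Z_j obtained from the
    # central finite-difference perturbation of eq. 9.
    N, m = Z.shape
    JZ = np.empty((N, m))
    for j in range(m):
        JZ[:, j] = (G(u + eps * Z[:, j]) - G(u - eps * Z[:, j])) / (2.0 * eps)
    return Z.T @ JZ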

3. Methodology

Our proposed methodologies, combining equation-free model reduction with PRSQP to handle large-scale optimisation problems, are presented below. Three different ways to handle (nonlinear) inequality constraints are examined.

3.1. Equation-Free Reduced PRSQP (EF-PRSQP)

Here, equation-free model reduction is employed to project the full-scale system onto the low-

dimensional space of the slow modes, taking advantage of the separation of scales, effectively

reducing the dimensionality of the state variables given by the system equality constraints. The

inequality constraints are handled as in PRSQP.

Equation-free model reduction is successfully coupled with PRSQP using a 2-step projection

scheme32. In order to include the decision variables in the low-dimensional subspace, an

extended orthonormal basis is defined:

Z_ext = ( Z  0 ; 0  I_dof )    (10)

Here I_dof ∈ R^(dof×dof) is the identity matrix and dof is the number of decision (independent) variables. A coordinate basis of the subspace of the independent variables can be computed as:


Z_r = ( −H^(−1) Z^T ∇_z G ; I )    (11)

while a basis, Υ, for the complement subspace is given by:

Υ = ( I ; 0 )    (12)

Hence, a projection basis, Z*, equivalent to the basis computed in rSQP5-8, can be calculated as:

Z* = Z_ext Z_r = ( −Z H^(−1) Z^T ∇_z G ; I ) ∈ R^((N+dof)×dof)    (13)

where only the inverse of the low-order matrix, H, is required. This projection is equivalent to that of the reduced Hessian method, but not equal to it: the latter is the null space of the constraints, whereas the former is essentially a double projection, first onto the low-order subspace of the dominant modes and subsequently onto the null space of the constraints. Even though both methods give the same basis size, rSQP requires the construction and inversion of a large matrix (the Jacobian of the constraints), whilst our model reduction-based method computes and inverts only a low-order matrix. The reduced Hessian, B_R, is the restriction of the (unavailable) full system Hessian, B, on Z*27:

B_R = Z*^T B Z*    (14)

Here B is the Hessian of the Lagrange function, L(x), defined as L(x) = f(x) + G^T(x) λ + h^T(x) λ_in. The reduced Hessian is efficiently computed taking advantage of directional derivatives, employing the same central finite-difference scheme as the one used for the reduced Jacobian (eq. 9). To accelerate this computation, a BFGS17 approach can be followed, preserving the positive definiteness of the Hessian.
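A minimal sketch of eqs. 13-14 is given below (Python/NumPy). The quantities Z, H, the block of constraint derivatives with respect to the decision variables (dG_dz) and the gradient of the Lagrangian (grad_L) are assumed to be supplied by the preceding steps; they are placeholders of the sketch rather than part of the original formulation.

import numpy as np

def projection_basis(Z, H, dG_dz):
    # Z* = [ -Z H^{-1} Z^T dG/dz ; I ] (eq. 13); only the low-order matrix H is inverted.
    dof = dG_dz.shape[1]
    top = -Z @ np.linalg.solve(H, Z.T @ dG_dz)
    return np.vstack([top, np.eye(dof)])          # shape (N + dof, dof)

def reduced_hessian(grad_L, x, Zstar, eps=1e-5):
    # B_R = Z*^T B Z* (eq. 14) via central differences of the Lagrangian gradient,
    # i.e. directional second derivatives along the dof columns of Z* only.
    n, dof = Zstar.shape
    BZ = np.empty((n, dof))
    for j in range(dof):
        BZ[:, j] = (grad_L(x + eps * Zstar[:, j]) - grad_L(x - eps * Zstar[:, j])) / (2.0 * eps)
    return Zstar.T @ BZ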


It is important to mention that the dependent variables in the optimization problem satisfy the constraints at every iteration, so the procedure is a feasible-point algorithm. Thus, the reduced QP sub-problem is transformed as follows:

min_{p_z} (Z*^T ∇f)^T p_z + (1/2) p_z^T B_R p_z
s.t. ∇h(x) Z* p_z ≤ −h(x),  x_L − x ≤ Z* p_z ≤ x_U − x    (15)

Here p_z ∈ R^dof is the component of the search direction on the subspace of the decision variables. The basis Z* and the reduced Hessian B_R are given by eqs. 13 and 14, respectively. The low-order projections, φ, of the Lagrange multipliers, λ ∈ R^N, of the equality constraints onto P are computed as:

H^T φ = Z^T (Υ^T ∇f + λ_in^T ∇h)    (16)

where

φ = Z^T λ    (17)

It can easily be shown that the reduced optimization problem has the following KKT conditions:

∇f(x)^T + [H Z^T  Z^T ∇_z G(x)]^T φ + ∇h(x)^T λ_in = 0
P G(x) = 0
h_i λ_in,i = 0,  for i = 1, …, N_in    (18)

3.1.1 Proposed optimization scheme

The proposed method, which handles large-scale systems and nonlinear inequality constraints, is

presented in Algorithm 1 (Table 1).

Table 1. Optimization scheme including inequality constraints (EF-PRSQP), Algorithm 1

1. Choose initial values for x


2. Compute a feasible point using the black box simulator.

3. Model reduction: Compute basis Z and reduced Jacobian H .

4. Construct the basis Z* using eq. 13.

5. If the iteration number is > 1 and ‖Z* p_z‖ > ε, then compute the multipliers, φ, using values from the previous iteration.

6. Compute the reduced Hessian, B_R, in order to solve the reduced QP problem

min_{p_z} (Z*_k^T ∇f_k)^T p_z + (1/2) p_z^T B_R p_z
s.t. ∇h(x) Z* p_z ≤ −h(x),  x_L − x ≤ Z* p_z ≤ x_U − x

7. Update the values of the variables: x = x + Z* p_z, x_previous = x

8. If ‖Z* p_z‖ < ε, update the basis Z* and calculate the multipliers (eq. 16) based on the new reduced Jacobian, H.

9. H_previous = H

10. Check convergence (‖Z* p_z‖ < ε); if the problem has not converged, go to step 2.
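A compact sketch of one outer iteration of Algorithm 1 is given below (Python). Every argument other than x is a user-supplied callable standing in for the steps described above (the black-box simulator, the basis construction, the reduced Jacobian/Hessian and a standard QP solver); none of these helpers is part of the original code.

import numpy as np

def ef_prsqp_iteration(x, simulate, build_Z, build_H, build_Zstar, build_BR,
                       solve_reduced_qp, tol=1e-8):
    # One pass through steps 2-10 of Algorithm 1 (EF-PRSQP).
    x = simulate(x)                        # step 2: feasible point from the black-box simulator
    Z = build_Z(x)                         # step 3: dominant-subspace basis
    H = build_H(x, Z)                      #         reduced Jacobian (eq. 8)
    Zstar = build_Zstar(x, Z, H)           # step 4: two-step projection basis (eq. 13)
    BR = build_BR(x, Zstar)                # step 6: reduced Hessian (eq. 14)
    pz = solve_reduced_qp(x, Zstar, BR)    #         reduced QP of eq. 15
    step = Zstar @ pz
    x_new = x + step                       # step 7: update the variables
    converged = np.linalg.norm(step) < tol # step 10: convergence test
    return x_new, converged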

Implementation of Algorithm 1 produces satisfactory results as is illustrated in the case study

(see section 4), but has a major drawback: The number of inequality constraints may be large,

because the corresponding physical system is distributed-parameter. Also, inequality constraints


may hold for the whole spatial domain. Consequently, calculating the gradient of the inequality

constraints with respect to all variables will be computationally expensive.

3.1.2 Equivalence of the computed optima

In this section, we show that an optimum of the full problem is an optimum of the reduced

problem (and vice-versa), if the reduced system is a good approximation of the full-scale one and

the reduced states can accurately reproduce the full states. The KKT conditions of the full

problem are the following:

∇f + ∇G^T λ + ∇h^T λ_in = 0
G(x) = 0    (19)
h_i(x) λ_in,i = 0

for some λ_in ≥ 0 and multipliers λ.

It can also be proven that our reduced optimization scheme has super-linear convergence

properties.

Theorem 1: For dissipative systems, every optimum point satisfies eq. 19 if and only if it satisfies eq. 18.

Proof: Eq. 19 can equivalently be written as:

∇f^T + [(P + Q) ∇G (P_ext + Q_ext)]^T λ + ∇h^T λ_in = 0
∇f^T + [P ∇G P_ext]^T λ + [P ∇G Q_ext + Q ∇G P_ext + Q ∇G Q_ext]^T λ + ∇h^T λ_in = 0    (20)

Taking into account eq. 17,

∇f^T + P_ext ∇G^T Z φ + [P ∇G Q_ext + Q ∇G P_ext + Q ∇G Q_ext]^T λ + ∇h^T λ_in = 0    (21)

We can set


[P ∇G Q_ext + Q ∇G P_ext + Q ∇G Q_ext]^T λ = E    (22)

E being the error associated with the model reduction operation. Therefore, eq. 21 can be written as:

∇f^T + P_ext ∇G^T Z φ + E + ∇h^T λ_in = 0    (23)

This can easily be shown to be equivalent to:

∇f^T + [H Z^T  Z^T ∇_z G(x)]^T φ + E + ∇h^T λ_in = 0    (24)

As long as the basis, Z, approximates the maximal invariant subspace of ∇_u G and the dominant modes are captured by the model reduction with x ≈ Z v30,31, ‖E‖ is small and bounded. In addition, Z is updated at every iteration of the algorithm, to ensure that the full-scale model is adequately captured by the dominant modes. Hence E → 0 throughout, and the following holds:

∇f^T + [H Z^T  Z^T ∇_z G(x)]^T φ + ∇h^T λ_in = 0    (25)

In addition, P G(x) = 0 holds, because the (feasible-point) algorithm uses a black-box simulator, which solves the equality constraints. Also, h(x) λ_in = 0 holds, as the inequality

constraints are part of the solution of the QP problem at every iteration. Hence the equivalence of

eq. (19) with eq. (18) is proven. This of course does not guarantee that the reduced problem does

not exhibit additional stationary points which satisfy the KKT conditions.

The inverse can be shown accordingly. The KKT conditions corresponding to Algorithm 1 (eq.

18) can be written as:

∇f^T + P_ext ∇G^T Z φ + ∇h^T λ_in = 0
∇f^T + P_ext ∇G^T P λ + ∇h^T λ_in = 0    (26)

The term P_ext ∇G^T P λ from eq. 26 is equal to ∇G^T λ − E. As above, we can assume that E → 0 when the full-scale model is adequately captured by the dominant modes. Additionally,


since the algorithm is feasible-path, G(x) = 0 and P G(x) = 0. Also, h(x) λ_in = 0, as explained above. Then eq. 26 becomes equal to eq. 19. ∎

3.2. Equation-Free Reduced PRSQP with Aggregated Inequalities (EF-

PRSQP-KS)

Equation-Free Model Reduced PRSQP handles the inequality constraints effectively.

Nevertheless, the derivatives of all inequality constraints are needed, which means that the

computational efficiency may be jeopardised, since conditions, such as safety specifications or

economic restrictions, may be applied to the whole spatial domain.

An effective way to tackle this problem is to use an aggregation function, such as the KS

function in eq. 3, which can be combined with the Equation-Free PRSQP in order to produce an

efficient optimization algorithm suitable for a large number of both equality and inequality

constraints.

The main advantage of this KS aggregation function is the ability to substitute all the

inequality constraints with only one. It can easily be shown that if all the inequality constraints

are negative then the KS function is negative as well, and also if there is an active set of

inequality constraints, then KS approaches zero as ρ → ∞21. It is then easy to replace the formulation of the nonlinear optimization problem of eq. 1 with the formulation of eq. 27:

min f(x)
s.t. G(x) = 0    (27)
KS(x, ρ) ≤ 0


Yet, this formulation may produce sub-optimal results, and the solution gets closer to the optimum for high values of ρ. This behaviour can be explained since the KS function creates a smaller feasible region for the optimizer than the original one. Nevertheless, the parameter ρ cannot be too large from the beginning of the algorithm, because numerical difficulties may arise. As a result, an adaptive procedure should be implemented. Poon and Martins20 introduced such an adaptive approach in order to avoid sub-optimal results. In this approach, the aggregation parameter, ρ, changes according to the sensitivity of the KS function. The aggregation parameter is increased so that the derivative of KS, KS′, is less than (or equal to) a small number. This number can be defined as a desired value, KS′_d. Hence, assuming that KS′ has a linear dependence on the aggregation parameter20, a relationship between the current value, ρ_c, of the aggregation parameter and the desired one, ρ_d, can be found:

(log KS′_1 − log KS′_c) / (ρ_1 − ρ_c) = (log KS′_d − log KS′_c) / (ρ_d − ρ_c)    (28)

ρ_1 being the value of the parameter at a small step ahead. Solving eq. 28 with respect to ρ_d, the following equation is derived:

ρ_d = ρ_c + (ρ_1 − ρ_c) (log KS′_d − log KS′_c) / (log KS′_1 − log KS′_c)    (29)

In algorithm 2 (Table 2) an adaptive procedure is presented20,34, taking into account the constraint

functions, the current aggregation parameter ρc and the desired sensitivity.


Table 2. Adaptation procedure of the aggregation parameter, Algorithm 2

1. Compute the derivative of KS (KS′) at the current point.

2. If the current value is smaller than the desired one, return the current value.

3. Otherwise, compute ρ_d according to eq. 29.

4. Compute KS using ρ_d.
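A minimal sketch of this adaptation step is given below (Python). The callable ks_sensitivity returning KS′ for a given ρ, the probing step drho and the target value are placeholders assumed for the sketch; eq. 28 is solved directly for ρ_d.

import numpy as np

def adapt_rho(ks_sensitivity, rho_c, ks_target, drho=1.0):
    # Adaptive update of the aggregation parameter (Algorithm 2, eqs. 28-29).
    ks_c = ks_sensitivity(rho_c)
    if ks_c <= ks_target:                    # sensitivity already small enough: keep rho
        return rho_c
    rho_1 = rho_c + drho                     # probe a small step ahead
    ks_1 = ks_sensitivity(rho_1)
    # log KS' is assumed to depend linearly on rho (eq. 28); solve for rho_d (eq. 29)
    slope = (np.log(ks_1) - np.log(ks_c)) / (rho_1 - rho_c)
    return rho_c + (np.log(ks_target) - np.log(ks_c)) / slope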

The aggregation function allows us to handle all inequality constraints with a single

constraint. Consequently, the computational time will decrease dramatically, as the case study

will show.

3.2.1 Proposed optimization scheme.

In Algorithm 3 (Table 3) a modification of Algorithm 1 (Table 1) is presented, including the

adaptation procedure for the aggregation function. This algorithm is implemented in section 4,

where all approaches are applied to a chemical engineering example and are evaluated and

compared.

Table 3. EF-PRSQP-KS Algorithm 3

1. Steps 1-5 are the same as in Algorithm 1.

2. Compute the reduced Hessian27 in order to solve the reduced QP problem


3. Compute the KS function and its derivative.

4. Apply Algorithm 2 (Table 2).

5. Solve the reduced QP problem and calculate the Lagrange multiplier for the inequality constraint

min_{p_z} (Z*_k^T ∇f_k)^T p_z + (1/2) p_z^T B_R p_z
s.t. KS′(x, ρ) Z* p_z ≤ −KS(x, ρ),  x_L − x ≤ Z* p_z ≤ x_U − x

6. Steps 7-10 are the same as in Algorithm 1.

3.2.2 Equivalence of the computed optima

In this section it is shown that every optimum point of the full NLP problem is also an optimum of the reduced problem when the aggregation function is applied, provided that the reduced space is a good approximation of the full one, that the reduced states can accurately reconstruct the full states, and that the adaptive procedure of the aggregation function produces a large enough ρ when required. Then Theorem 1 can be applied to prove the equivalence of the computed optima. An additional fair assumption has been made here: the aggregation parameter ρ is large enough and can be produced by the adaptation procedure in Algorithm 2.


3.3. Equation-Free Reduced PRSQP with Aggregated Inequalities and Slack

Variable (EF-PRSQP-KS-S)

Slack variables have been utilized to handle inequality constraints; however, the straightforward approach of introducing slack variables and treating inequality constraints as equalities is not reasonable when the methodology implemented includes model reduction or, generally, partitioning of the solution space. If the model reduction includes active inequality constraints, then two undesirable effects may arise:

(i) The dimension of the basis would vary at run-time, depending on the number of active inequality constraints. It would then be difficult for the algorithm to provide a good initial guess for the basis of the dominant subspace, and the numerical efficiency would be jeopardized.

(ii) If the eigenvalues of the active inequality constraints are aggregated with the eigenspectrum of the equality constraints, this would ruin the separation of scales due to the addition of non-dissipative modes coming from the active inequality constraints.

These disadvantages are overcome using an aggregation function, as only one equality will be

added. The KS function, presented in section 2, can aggregate effectively all the inequality

constraints. If there is only one inequality in the problem then the eigen-spectrum will not change

significantly when slack variables are added. This is proven in Lemma 1.

Lemma 1:


If only one inequality is added to the problem then only one eigenvalue equal to one will be

added to the eigenspectrum of the original problem.

Proof:

G(x) = 0, h_1(x) ≤ 0  ⇔ (slack variable)  h_new = h_1(x) + s = 0    (30)

where s is the slack variable, which is zero when the inequality constraint is active and positive otherwise. The Jacobian of the augmented system is:

J_aug = ( ∇_x G^T  ∇_s G^T ; ∇_x h_new  ∇_s h_new )    (31)

The derivative of G with respect to the slack variable is always zero and the derivative of h_new with respect to s is always one. Furthermore, the derivative of h_new with respect to the states is equal to the derivative of the inequality constraint. As a result, the augmented Jacobian can be written as

J_aug = ( ∇_x G^T  0 ; ∇_x h  1 )    (32)

Every eigenvalue should be the solution of the following problem.

det( J_aug − λ_eig I_(N+1) ) = 0    (33)


⇒ det( ( ∇_x G^T  0 ; ∇_x h  1 ) − λ_eig I_(N+1) ) = 0
⇒ det( ∇_x G^T − λ_eig I_N   0 ; ∇_x h   1 − λ_eig ) = 0
⇒ (−1)^(N+2) (∂h/∂x_1) det( ∂G_1/∂x_1 − λ_eig  ⋯  0 ; ⋮  ⋱  ⋮ ; ∂G_N/∂x_1  ⋯  0 ) + … + (1 − λ_eig) det( ∇_x G^T − λ_eig I_N ) = 0    (34)

In eq. 34 all the determinants are zero, because all of them have (at least) one zero column, except for the last one. So, eq. 34 can be rewritten as

(1 − λ_eig) det( ∇_x G^T − λ_eig I_N ) = 0    (35)

The solution of eq. 35 produces the original eigenvalues of the system and/or one eigenvalue equal to one. ∎

The lemma shows that a slack variable can be used without a significant disturbance of the eigenspectrum of the system. Specifically, to ensure that no critical eigenvalue is overlooked, the dominant subspace is enlarged by one (m + 1).
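Lemma 1 can be verified numerically in a few lines of Python; the matrix A below is a random stand-in for ∇_x G^T and dh for the gradient of the single aggregated inequality:

import numpy as np

rng = np.random.default_rng(0)
N = 6
A = rng.standard_normal((N, N))          # stand-in for the original Jacobian block
dh = rng.standard_normal((1, N))         # stand-in for the gradient of the aggregated inequality

# Augmented Jacobian of eq. 32: zero column for G, unit entry for h_new
J_aug = np.block([[A, np.zeros((N, 1))],
                  [dh, np.ones((1, 1))]])

print(np.sort_complex(np.linalg.eigvals(A)))      # original spectrum
print(np.sort_complex(np.linalg.eigvals(J_aug)))  # same spectrum plus one eigenvalue equal to 1 (eq. 35)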

3.3.1 Proposed optimization scheme

In this optimization algorithm, model reduction proceeds as in Algorithm 1 and the adaptive KS function must be calculated at every step as in Algorithm 3 (Table 3). A slack variable is introduced and the solver solves an additional equation. The algorithm is shown in Table 4.

Table 4. EF-PRSQP-KS-S, Algorithm 4


1. Steps 1-2 are the same as in Algorithm 3.

2. Use Algorithm 2 to find the adaptive KS.

3. Find the slack variable.

4. Steps 3-5 are the same as in Algorithm 1.

5. Compute the reduced Hessian27 in order to solve the reduced QP problem

min_{p_z} (Z*^T ∇f)^T p_z + (1/2) p_z^T B_R p_z
s.t. x_L − x ≤ Z* p_z ≤ x_U − x

6. Steps 7-10 are the same as in Algorithm 1.
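For clarity, the slack-augmented equality system handled by this variant can be sketched as below (Python); residual and ks_value are placeholders for the discretized balances G(x) and the aggregated KS inequality of eq. 27:

import numpy as np

def augmented_residual(x, s, residual, ks_value):
    # Equality system of eq. 30: the original balances plus h_new = KS(x, rho) + s = 0.
    # The slack s is kept non-negative through the bounds of the QP sub-problem.
    return np.concatenate([residual(x), np.atleast_1d(ks_value(x) + s)])

According to Lemma 1, enlarging the dominant basis by one (m + 1) then suffices to account for the extra equality.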

3.3.2 Equivalence of computed optima

The approach in this section aggregates all the inequality constraints into one KS function, and

then a slack variable is used to transform the resulting inequality constraint into an equality

constraint. According to Lemma 1, the eigenspectrum will not be disturbed when the KS

function is used in conjunction with a slack variable. Therefore, Theorem 1 can be used to show

that the computed optimum of the reduced problem is also an optimum of the full NLP.

4. Results

To illustrate the behaviour of the proposed optimization algorithms, a case study of a tubular

reactor is presented. In the reactor an exothermic, first-order, irreversible reaction takes place (A → B). The reactor has three heat exchangers on its jacket; the temperature in each heat exchanger is considered to be constant and is used as a degree of freedom for the optimization


problem. The model of the reactor consists of 2 PDEs42 (note that in this case the system of equations 36-37 is actually a set of ODEs; however, we keep the term PDE as the methodology does not change for problems with more partial derivatives, e.g. 2- or 3-dimensional systems):

(1/Pe_1) ∂²x_1/∂y² − ∂x_1/∂y + Da (1 − x_1) exp( x_2 / (1 + x_2/γ) ) = 0    (36)

(1/(Le Pe_2)) ∂²x_2/∂y² − (1/Le) ∂x_2/∂y + (C/Le) Da (1 − x_1) exp( x_2 / (1 + x_2/γ) ) + (β/Le) (x_2w − x_2) = 0    (37)

where x_1 is the dimensionless concentration of the product, x_2 the dimensionless temperature inside the reactor, Da the Damköhler number, Le the Lewis number, Pe_1 and Pe_2 the Peclet numbers for mass and heat transfer respectively, β the dimensionless heat transfer coefficient, C the dimensionless adiabatic temperature rise, γ the dimensionless activation energy, y ∈ [0, L] the dimensionless longitudinal coordinate, L the length of the reactor and x_2w the dimensionless wall temperature, whose expression is given as a function of the longitudinal coordinate:

x_2w(y) = Σ_{i=1}^{3} x_2w,i [ H(y − y_{i−1}) − H(y − y_i) ]    (38)

where H is the Heaviside function, y_0 = 0 < y_1 < y_2 < y_3 = L the boundaries of the cooling zones and x_2w,i the dimensionless temperature at each of the 3 cooling zones. The boundary conditions are:


∂x_1/∂y = Pe_1 x_1,  ∂x_2/∂y = Pe_2 x_2  at y = 0    (39)

∂x_1/∂y = 0,  ∂x_2/∂y = 0  at y = L    (40)

The parameters of the physical model are taken from the literature42, with L = 1. The set of equations (36-37) is discretised with the central finite differences method

on a mesh of 250 nodes. This discretisation results in n = 500 dependent variables. The size of

the subspace, m, is chosen to be 10 so it can be large enough to capture the dominant dynamics

throughout the parameter space.
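A minimal sketch of such a discretized residual is given below (Python/NumPy). The parameter values, the boundary treatment and the wall-temperature handling are illustrative placeholders only and are not the values or the exact discretization used in this work.

import numpy as np

# Illustrative placeholder parameters (not the values used in the paper)
Pe1, Pe2, Le, Da, C, beta, gamma, L = 5.0, 5.0, 1.0, 0.1, 8.0, 2.0, 20.0, 1.0
nodes = 250
dy = L / (nodes - 1)

def residual(x1, x2, x2w):
    # Central finite-difference residual of the steady-state balances (eqs. 36-37);
    # x1, x2 and x2w are arrays of length 'nodes'.
    rate = Da * (1.0 - x1) * np.exp(x2 / (1.0 + x2 / gamma))
    d2 = lambda u: (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy**2
    d1 = lambda u: (u[2:] - u[:-2]) / (2.0 * dy)
    g1 = np.zeros(nodes)
    g2 = np.zeros(nodes)
    g1[1:-1] = d2(x1) / Pe1 - d1(x1) + rate[1:-1]
    g2[1:-1] = (d2(x2) / (Le * Pe2) - d1(x2) / Le
                + (C / Le) * rate[1:-1] + (beta / Le) * (x2w[1:-1] - x2[1:-1]))
    # Simple Danckwerts-type closures at the two ends, used only for this sketch
    g1[0] = (x1[1] - x1[0]) / dy - Pe1 * x1[0]
    g2[0] = (x2[1] - x2[0]) / dy - Pe2 * x2[0]
    g1[-1] = x1[-1] - x1[-2]
    g2[-1] = x2[-1] - x2[-2]
    return np.concatenate([g1, g2])         # n = 2 * nodes = 500 equality constraints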

The optimization problem aims to maximize the concentration at the outlet of the reactor by changing the 3 wall temperatures. The basic problem consists of only equality constraints, the steady-state mass and energy balances27. In reactors where exothermic reactions take place, a thermal runaway43 may produce uncontrollable situations. To control this phenomenon, inequality constraints should be applied to meet safety specifications. In this case study, the dimensionless reaction rate is given an upper bound in order to avoid thermal explosions. The optimization problem is then set up as:

max_{x_2w} x_1(y = 1)    (41)

s.t. G(x) = 0
h(x) ≤ 3.5


0 ≤ x_1 ≤ 1
0 ≤ x_2 ≤ 8
0 ≤ x_2w,i ≤ 4,  i = 1, …, 3

where h(x) is the dimensionless reaction rate across the reactor, given by eq. 42, G(x) are the PDEs of the physical model (eqs. 36-37) in discretized form and x = [x_1^T, x_2^T, x_2w^T]^T.

h(x) = Da (1 − x_1) exp( x_2 / (1 + x_2/γ) )    (42)

The inequality constraints, h(x), are applied to the whole length of the reactor to ensure the limit

is not surpassed at any point. Thus, the problem consists not only of a large number of equality constraints but also of a large number of inequality constraints.
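As an illustration of how this distributed bound is collapsed into a single constraint in EF-PRSQP-KS, a small Python sketch is given below; the rate expression and the parameter values mirror the placeholders of the previous sketch and are not the settings used in the paper:

import numpy as np

def ks_aggregate(h, rho):
    # Shifted KS function of eq. 4
    M = h.max()
    return M + np.log(np.exp(rho * (h - M)).sum()) / rho

def aggregated_rate_constraint(x1, x2, rho=50.0, bound=3.5, Da=0.1, gamma=20.0):
    # Node-wise bound rate(y) <= 3.5 replaced by one scalar KS constraint (feasible when <= 0)
    rate = Da * (1.0 - x1) * np.exp(x2 / (1.0 + x2 / gamma))
    return ks_aggregate(rate - bound, rho)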

All three approaches produce the same optimal solution. In order to illustrate and compare the 3 optimization algorithms, the methodologies are evaluated in terms of the number of iterations for the same initial guess and of the CPU time for different initial guesses. The initial guess for EF-PRSQP-KS and EF-PRSQP should satisfy the inequality constraints.

Firstly, a base case scenario is presented in order to observe the optimization path and the

general convergence behavior, where the initial guess for all wall temperatures is 0.2.

Convergence results are given in terms of ‖Z* p_z‖, which is the norm of the solution update for

each iteration. As mentioned before, the optimum point computed for all three algorithms is the


same. The results presented in Figures 2-3 depict the solution of the problem with inequality constraints alongside the solution of the problem taking into account only equality constraints (initial problem). As can be seen in Figure 3, the reaction rate and the temperature of the inequality-constrained problem are higher than those of the initial problem beyond y = 0.2, to allow the reactant concentration to reach the optimum point.

Figure 2. Dimensionless concentration at the optimum point with and without inequality

constraints.


Figure 3. (a) Dimensionless reaction rate and (b) dimensionless temperatures at the optimum

point with and without inequality constraints.

Despite the fact that all 3 methods produce identical optimum point results, the convergence path

and computational time of the methods vary. Convergence paths for the three approaches are depicted in Figure 4a, showing that the norm of the solution follows almost the same path for each method for the same initial guess (the dimensionless temperature of all 3 cooling zones being 0.2), but EF-PRSQP-KS requires the minimum number of iterations, while EF-PRSQP-KS-S requires the most. It is important to know how an algorithm converges for different initial guesses, because in most cases there is no a priori knowledge of good guesses. In order to examine this behavior, 6 experiments have been

performed for every algorithm: Initially, all wall temperatures (degrees of freedom) start with the

value 0.00, then the code runs and the CPU-time is recorded. After that, wall temperatures are

given the previous initial guess value plus 0.075, and the algorithm starts again. These

experiments are conducted for all the proposed algorithms. The corresponding results are

depicted in Figure 4b.


Figure 4. (a) Convergence behaviour for each method starting from the same initial guess. (b)

CPU times for each method for a range of initial guesses for the degrees of freedom.

As can be seen, EF-PRSQP-KS is the fastest with the minimum number of iterations, whilst the slowest is EF-PRSQP, as expected. Figure 4 shows that EF-PRSQP-KS is faster than the others for almost all the different initial guesses tested. In conclusion, all three algorithms seem to have the same trend in terms of iterations and computational time for all the experiments tested; however, the fastest algorithm regardless of the number of iterations is EF-PRSQP-KS, EF-PRSQP-KS-S comes next, and the slowest is EF-PRSQP. Thus, the aggregation of the inequality constraints into one seems to have a significant impact on the computational time, as EF-PRSQP-KS is consistently around 80% faster than that of

EF-PRSQP.

To investigate how the fastest method, EF-PRSQP-KS, scales with problem size, we have tested

the solution of the same system (eq. 36-42) for n=1000 and 1500, respectively. In addition, we


have compared its performance (in CPU s) against that of PRSQP and of a “standard” NAG

solver. As we can see in Table 5, EF-PRSQP-KS is approximately 8-10 times faster than PRSQP for all problem sizes and 17-58 times faster than the standard NAG-based SQP, which solves the full

model. All 3 methods converge to the same solution.

Table 5. Comparison of performance of EF-PRSQP-KS against PRSQP and NAG SQP for different problem sizes.

Problem size, n   EF-PRSQP-KS, m=10 (CPU, s)   PRSQP (CPU, s)   NAG SQP (CPU, s)
500               8.8                          64               150
1000              39.1                         509              1293
1500              75.8                         796              4454

In addition, to investigate the effect of the size of m, we have tested the performance of

EF-PRSQP-KS for m ranging from 10 to 50 for system sizes n = 500 and 1000. As can be seen in Table 6, the method performs equally well for the whole range of m, which attests to its robustness. It is worth noting that the method is even faster for m = 20-50 than for m = 10, as the system requires fewer iterations to converge, as seen in Fig. 5, where the convergence

behavior is plotted for m = 10-30.

Table 6. Comparison of performance of EF-PRSQP-KS for different subspace, m, sizes.

Problem size (number of nodes, n)   m=10 (CPU, s)   m=20 (CPU, s)   m=30 (CPU, s)   m=40 (CPU, s)   m=50 (CPU, s)
500                                 8.8             5.5             5.6             5.8             6.1
1000                                39.1            22              23.3            25.3            25.6


Figure 5. Convergence behaviour of EF-PRSQP-KS from the same initial guess for different

subspace, m, sizes.

The Arnoldi iterations, as well as the solution of each QP sub-problem and the SQP-based solutions, were computed using the NAG Library Mark 25 in NAG Builder 6.1. The times reported correspond to single-threaded executions of FORTRAN source code on an Intel Core i7-6700 processor (3.40 GHz) with 16 GB of RAM, running 64-bit Windows 7.

5. Conclusions

A model reduction-based deterministic optimisation method for large-scale nonlinear dissipative

PDE-constrained problems, which involve a large number of nonlinear inequality constraints, has been presented. The work builds on previous research from the group27. The proposed algorithms are based on a two-step projection scheme. The first projection is onto the


dominant space of the physical system, which is defined by the user at the beginning of the algorithm and augmented in order to include the independent variables. Then the second projection

is onto the null space of the equality constraints. In this work all the methodologies make the

assumption that the system is dissipative and thus there exists a separation of scales. As a result,

there exists a low-order basis, which can be efficiently computed in order to approximate the full

large-scale system. Three approaches have been followed to handle (the large number of)

inequality constraints: the Equation-Free Reduced PRSQP (EF-PRSQP) method reduces the number of equality constraints and then includes the inequality constraints in the quadratic sub-problem. This approach may exhibit numerical difficulties, as the full derivatives of the inequality constraints with respect to the states have to be computed. The second approach takes advantage of the KS aggregation function in order to handle only one inequality and combines it with the

former method. The last approach uses slack variables to convert the inequality constraints into

equalities. This is convenient when an aggregated function is used because the eigenvalue of the

one additional equality is known and is equal to one. The behaviour of the three approaches is

illustrated by applying them to the optimisation of a tubular reactor, also testing the behaviour of

the three algorithms for different initial guesses. In conclusion, the EF-PRSQP-KS seems to have

the best performance, as it requires the least computational time and iterations compared to the

other two. This methodology was applied to a static system, assuming that a steady-state simulator is available; however, a dynamic model where G(x) = x_{k+1} − g(x_k) = 0 may be available

instead. In this case the method presented in Theodoropoulos and Luna-Ortiz33 can be extended

to include inequalities.


Acknowledgements

The EU program CAFE (KBBE-2008-212754) and the University of Manchester Presidential

Doctoral Scholarship Award to Panagiotis Petsagkourakis are gratefully acknowledged.

References

(1) COMSOL Multiphysics® v. 5.2. www.comsol.com. COMSOL AB, Stockholm, Sweden.

(2) OpenFOAM User Guide, The Open Source CFD Toolbox. 2013, No. September, 211.

(3) Li, H. X.; Qi, C. Modeling of Distributed Parameter Systems for Applications - A Synthesized Review from Time-Space Separation. J. Process Control 2010, 20 (8), 891.

(4) Biegler, L. T. New Nonlinear Programming Paradigms for the Future of Process Optimization. AIChE J. 2017, 63 (4), 1178.

(5) Drummond, R.; Howey, D. A.; Duncan, S. R. Low-Order Mathematical Modelling of Electric Double Layer Supercapacitors Using Spectral Methods. J. Power Sources 2015, 277, 317.

(6) Balsa-Canto, E.; Alonso, A. A.; Banga, J. R. A Novel, Efficient and Reliable Method for Thermal Process Design and Optimization. Part I: Theory. J. Food Eng. 2002, 52 (3), 227.

(7) Christofides, P. D. Nonlinear and Robust Control of PDE Systems: Methods and Applications to Transport-Reaction Processes; Systems & Control: Foundations &


Applications; Birkhäuser Boston, 2012.

(8) Hazra, S. B. Large-Scale PDE-Constrained Optimization in Applications; Springer Berlin Heidelberg, 2010; Vol. 54.

(9) Andersson, J.; Åkesson, J.; Diehl, M. CasADi: A Symbolic Package for Automatic Differentiation and Optimal Control. In Recent Advances in Algorithmic Differentiation; Forth, S., Hovland, P., Phipps, E., Utke, J., Walther, A., Eds.; Lecture Notes in Computational Science and Engineering; Springer: Berlin, 2012; Vol. 87, pp 297-307.

(10) Bischof, C.; Corliss, C.; Green, L. L.; Griewank, A.; Haigler, K. J.; Newman, P. A. Automatic Differentiation of Advanced CFD Codes for Multidisciplinary Design; NASA Langley Technical Report Server, 2003.

(11) Baydin, A. G.; Pearlmutter, B. A.; Radul, A. A. Automatic Differentiation in Machine Learning: A Survey. CoRR 2015, abs/1502.0.

(12) Biegler, L. T.; Nocedal, J.; Schmid, C. A Reduced Hessian Method for Large-Scale Constrained Optimization. SIAM J. Optim. 1995, 5 (2), 314.

(13) Ternet, D. J.; Biegler, L. T. Recent Improvements to a Multiplier-Free Reduced Hessian Successive Quadratic Programming Algorithm. Comput. Chem. Eng. 1998, 22 (7-8), 963.

(14) Wang, K.; Shao, Z.; Biegler, L. T.; Lang, Y.; Qian, J. Robust Extensions for Reduced-Space Barrier NLP Algorithms. Comput. Chem. Eng. 2011, 35 (10), 1994.

(15) Bock, H. G.; Diehl, M.; Kühl, P.; Kostina, E.; Schlöder, J. P.; Wirsching, L. Numerical Methods for Efficient and Fast Nonlinear Model Predictive Control. In Assessment and


Future Directions of Nonlinear Model Predictive Control; Findeisen, R., Allgöwer, F., Biegler, L. T., Eds.; Springer Berlin Heidelberg: Berlin, Heidelberg, 2007; pp 163-179.

(16) Theodoropoulos, C.; Qian, Y.-H.; Kevrekidis, I. G. "Coarse" Stability and Bifurcation Analysis Using Time-Steppers: A Reaction-Diffusion Example. Proc. Natl. Acad. Sci. 2000, 97 (18), 9840.

(17) Nocedal, J.; Wright, S. J. Numerical Optimization; 2006.

(18) Byrd, R. H.; Hribar, M. E.; Nocedal, J. An Interior Point Algorithm for Large Scale Nonlinear Programming. SIAM J. Opt. 2000, 9 (4), 877.

(19) Schulz, V. H. Reduced SQP Methods for Large-Scale Optimal Control Problems in DAE with Application to Path Planning Problems for Satellite Mounted Robots. 1996, No. November 1995.

(20) Poon, N.; Martins, J. Adaptive Constraint Aggregation for Structural Optimization Using Adjoint Sensitivities. 2005 Can. Aeronaut. Sp. Inst. Annu. Gen. Meet. 2005, 1.

(21) Raspanti, C. G.; Bandoni, J. A.; Biegler, L. T. New Strategies for Flexibility Analysis and Design under Uncertainty. Comput. Chem. Eng. 2000, 24 (9-10), 2193.

(22) Sun, W.; Yuan, Y. Optimization Theory and Methods: Nonlinear Programming. 2006.

(23) Bazaraa, M. S.; Sherali, H. D.; Shetty, C. M. Nonlinear Programming: Theory and Algorithms; 2006.

(24) Wills, A. G.; Heath, W. P. Interior-Point Algorithms for Nonlinear Model Predictive Control. 2007, 207.


(25) Wang, Y.; Boyd, S. Fast Model Predictive Control Using Online Optimization. Control Syst. Technol. IEEE Trans. 2010, 18 (2), 267.

(26) Boyd, S. P.; Vandenberghe, L. Convex Optimization; Cambridge University Press, 2004.

(27) Bonis, I.; Theodoropoulos, C. Model Reduction-Based Optimization Using Large-Scale Steady-State Simulators. Chem. Eng. Sci. 2012, 69 (1), 69.

(28) Theodoropoulos, C.; Qian, Y.; Kevrekidis, I. G. "Coarse" Stability and Bifurcation Analysis Using Time-Steppers: A Reaction-Diffusion Example. Proc. Natl. Acad. Sci. 2000, 97 (18), 9840.

(29) Armaou, A.; Siettos, C. I.; Kevrekidis, I. G. Time-Steppers and "Coarse" Control of Distributed Microscopic Processes. Int. J. Robust Nonlinear Control 2004, 14 (2), 89.

(30) Shroff, G. M.; Keller, H. B. Stabilization of Unstable Procedures: The Recursive Projection Method. SIAM J. Numer. Anal. 1993, 30 (4), 1099.

(31) Theodoropoulos, C. Optimisation and Linear Control of Large Scale Nonlinear Systems: A Review and a Suite of Model Reduction-Based Techniques. In Lecture Notes in Computational Science and Engineering, Vol. 75; 2011; pp 37-61.

(32) Luna-Ortiz, E.; Theodoropoulos, C. An Input/Output Model Reduction-Based Optimization Scheme for Large-Scale Systems. Multiscale Model. Simul. 2005, 4 (2), 691.

(33) Theodoropoulos, C.; Luna-Ortiz, E. A Reduced Input/Output Dynamic Optimisation Method. In Model Reduction and Coarse-Graining Approaches for Multiscale Phenomena; 2006; pp 535-560.


(34) Martins, J. R. R. A.; Poon, N. M. K. On Structural Optimization Using Constraint Aggregation. Proc. 6th World Congr. Struct. Multidiscip. Optim. 2005, No. June, 1.

(35) Saad, Y. Numerical Methods for Large Eigenvalue Problems; Algorithms Archit. Adv. Sci. Comput.; 1992.

(36) Christofides, P. D.; Armaou, A. Control and Optimization of Multiscale Process Systems. Comput. Chem. Eng. 2006, 30 (10-12), 1670.

(37) El-Farra, N. H.; Armaou, A.; Christofides, P. D. Analysis and Control of Parabolic PDE Systems with Input Constraints. Automatica 2003, 39 (4), 715.

(38) Friedman, A. Partial Differential Equations of Parabolic Type; Dover Publications Inc.: Mineola, New York, 1964.

(39) Shroff, G. M.; Keller, H. B. Stabilization of Unstable Procedures: The Recursive Projection Method. SIAM J. Numer. Anal. 1993, 30 (4), 1099.

(40) Bonis, I.; Xie, W.; Theodoropoulos, C. A Linear Model Predictive Control Algorithm for Nonlinear Large-Scale Distributed Parameter Systems. AIChE J. 2012, 58, 801.

(41) Biegler, L. T.; Nocedal, J.; Schmid, C. A Reduced Hessian Method for Large-Scale Constrained Optimization. SIAM J. Optim. 1995, 5 (2), 314.

(42) Jensen, K. F.; Harmon Ray, W. The Bifurcation Behaviour of Tubular Reactors. Chem. Eng. Sci. 1982, 37 (2), 199.

(43) Ni, L.; Mebarki, A.; Jiang, J.; Zhang, M.; Pensee, V.; Dou, Z. Thermal Risk in Batch


Reactors: Theoretical Framework for Runaway and Accident. J. Loss Prev. Process Ind. 2016, 43, 75.


Table of Contents (TOC) Graphical Abstract
