SIMULATION OPTIMIZATION: A REVIEW, NEW DEVELOPMENTS, AND APPLICATIONS


Proceedings of the 2005 Winter Simulation Conference
M. E. Kuhl, N. M. Steiger, F. B. Armstrong, and J. A. Joines, eds.

    SIMULATION OPTIMIZATION: A REVIEW, NEW DEVELOPMENTS, AND APPLICATIONS

    Michael C. Fu

Smith School of Business
University of Maryland

    College Park, MD 20742, U.S.A.

    Fred W. Glover

Leeds School of Business
University of Colorado

    Boulder, CO 80309, U.S.A.

    Jay April

OptTek Systems, Inc.
1919 Seventh Street

    Boulder, CO 80302, U.S.A.

    ABSTRACT

We provide a descriptive review of the main approaches for carrying out simulation optimization, and sample some recent algorithmic and theoretical developments in simulation optimization research. Then we survey some of the software available for simulation languages and spreadsheets, and present several illustrative applications.

    1 INTRODUCTION

The advances in computing power and memory over the last decade have opened up the possibility of optimizing simulation models. This development offers one of the most exciting opportunities in simulation, and there are plenty of interesting research problems in the field. The goals of this tutorial include the following:

to provide a general overview of the primary approaches found in the research literature, and include pointers/references to the state of the art;

to survey some of the commercial software;

to illustrate the problems through examples and real-world applications.

The general optimization problem we consider is to find a setting of controllable parameters that minimizes a given objective function, i.e.,

min_{θ∈Θ} J(θ),   (1)

where θ represents the (vector of) input variables, J(θ) is the (scalar) objective function, and Θ is the constraint set, which may be either explicitly given or implicitly defined.

The assumption in the simulation optimization setting is that J(θ) is not available directly, but must be estimated through simulation; e.g., the simulation output provides Ĵ(θ), a noisy estimate of J(θ). The most common form for J is an expectation, e.g.,

J(θ) = E[L(θ, ω)],

where ω represents a sample path (simulation replication) and L is the sample performance measure. Although this form is fairly general (it includes probabilities by using indicator functions), it does exclude certain types of performance measures such as the median (and other quantiles) and the mode.

Real-World Example: Call Center Design

State-of-the-art call centers (sometimes called contact centers) integrate traditional voice operations with both automated response systems (computer account access) and Internet (Web-based) services, often spread over multiple geographically separate sites. Most of these centers handle multiple sources of jobs, including voice, e-mail, fax, and interactive Web, each of which may require a different level of operator (call agent) training, as well as different priorities, e.g., voice almost always preempting any of the other contact types (except possibly interactive Web). There are also different types of jobs according to the service required, e.g., checking the status of an order versus placing a new order versus requesting live service help. Furthermore, because of individual customer segmentation, there are different classes of customers in terms of priority levels. Designing and operating such a call center includes such problems as selecting the number of operators at each skill level, and determining what routing algorithm and type of queue discipline to use. A trade-off must be made between achieving a desired level of customer service and the cost of providing service. An objective function might incorporate costs associated with operations, such as agent wages and network utilization, as well as customer service performance metrics, such as the probability of waiting more than a certain amount of time. This is just one example of how simulation optimization can be applied to business process management. A similar design problem is considered later in the applications section for a hospital emergency room.


Toy Example: Single-Server Queue

Consider a first-come, first-served (FCFS) single-server queue. A well-studied optimization problem uses the following objective function (cf. Fu 1994):

J(θ) = E[W(θ)] + c/θ,   (2)

where W is the mean time spent in the system, θ is the mean service time of the server (so 1/θ corresponds to the server speed), and c is the cost factor for the server speed (i.e., a higher-skilled worker costs more). Since W is increasing in θ, the objective function quantifies the trade-off between customer service level and the cost of providing service. This could be viewed as the simplest possible case of the call center design problem, where there is a single operator whose skill level must be selected. For the special case of the M/M/1 queue in steady state, this problem is analytically tractable, and it serves as a test case for optimization procedures.
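The expectation in (2) can be estimated by simulating customers through the queue. The following is a minimal sketch, assuming Poisson arrivals with an illustrative arrival rate of 0.5 and cost factor c = 2 (values chosen for illustration, not taken from the paper); the waiting times are generated with the standard Lindley recursion:

```python
import random

def estimate_J(theta, c=2.0, arrival_rate=0.5, n_customers=5000, seed=1):
    """Estimate J(theta) = E[W(theta)] + c/theta for an FCFS M/M/1 queue.

    W (mean time in system) is estimated via the Lindley recursion for
    successive customers; arrival_rate and c are illustrative values.
    """
    rng = random.Random(seed)
    wait = 0.0                # queueing delay of the current customer
    total_system_time = 0.0
    for _ in range(n_customers):
        service = rng.expovariate(1.0 / theta)   # mean service time theta
        total_system_time += wait + service       # time in system = delay + service
        interarrival = rng.expovariate(arrival_rate)
        # Lindley recursion: next customer's queueing delay
        wait = max(0.0, wait + service - interarrival)
    return total_system_time / n_customers + c / theta
```

For the stable M/M/1 case above (utilization 0.5 at θ = 1), the analytic value is J(1) = 1/(1 − 0.5) + 2 = 4, which the simulation estimate approaches as the replication length grows.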

Another Academic Example: Inventory Control

The objective is to minimize a total cost function consisting of ordering, holding, and backlogging or lost sales components. The ordering policy involves two parameters, s and S, corresponding to the re-order level and the order-up-to level, respectively. When the inventory level falls below s, an order is placed for an amount that would bring the current level back up to S.
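A periodic-review version of this policy is easy to simulate. The sketch below estimates the average per-period cost of an (s, S) policy; the exponential demand and all cost parameters are illustrative assumptions, not values from the paper:

```python
import random

def simulate_sS(s, S, periods=1000, seed=0,
                order_cost=32.0, unit_cost=3.0, hold_cost=1.0,
                backlog_cost=5.0, demand_mean=25.0):
    """Estimate the average per-period cost of a periodic-review (s, S)
    policy with zero lead time; all parameters are illustrative."""
    rng = random.Random(seed)
    level = S                  # start with a full inventory position
    total = 0.0
    for _ in range(periods):
        if level < s:          # review: if below s, order up to S
            total += order_cost + unit_cost * (S - level)
            level = S
        level -= rng.expovariate(1.0 / demand_mean)   # exponential demand
        # holding cost on positive stock, backlog cost on shortfall
        total += hold_cost * max(level, 0.0) + backlog_cost * max(-level, 0.0)
    return total / periods
```

An optimizer would then search over (s, S) pairs, treating `simulate_sS` as the noisy objective Ĵ(s, S).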

The input variables can be divided into two main types: qualitative and quantitative. In the call center example, both types are present, whereas in the two simpler examples, the input variables are quantitative. Quantitative variables are then either discrete or continuous. Many of the call center variables are inherently discrete, e.g., the number of operators, whereas in the single-server queue and inventory control examples, the input variables θ and (s, S) could be specified as either, depending on the particular problem setting or solution technique being applied. As in deterministic optimization, the approaches to solve these different types of problems can differ greatly.

To get an idea of the particular challenges facing simulation optimization, consider the problem of finding the value of θ that minimizes the objective function in (1) versus the problem of finding the minimum value of the objective function itself. In terms of our notation, this is the difference between finding a setting θ* that achieves the minimum in (1) versus finding the value of J(θ*). A key difference between deterministic optimization and stochastic optimization is that these two problems are not necessarily the same! In deterministic optimization, where J can be easily evaluated, the two problems are essentially identical; however, in the stochastic setting, one or the other may be easier.

Furthermore, it may also be the case that only one or the other is the real goal of the modeling exercise. In some cases, selecting the best design is the goal, and the objective function is merely a means towards achieving this end, providing a way of measuring the relative performance of each design, whereas the absolute value of the metric may have little meaning. In other cases, the objective function may have intrinsic meaning, e.g., costs or profits. And in yet other (less frequent) cases, estimating the optimal value itself is the primary goal. An example of this is the pricing of financial derivatives with early exercise opportunities (especially when done primarily for the sake of satisfying regulatory requirements of marking a portfolio to market).

To put it another way, simulation optimization has two major components that vie for computational resources: search and evaluation. How to balance the two, i.e., how best to allocate simulation replications, is a large challenge in making simulation optimization practical. Concretely, one is choosing between additional simulation replications to get better estimates versus more iterations of the optimization algorithm to more thoroughly explore the search space.

    2 APPROACHES

We now briefly describe the main approaches in the simulation literature.

    2.1 Ranking & Selection

In the setting where it is assumed that there is a fixed set of alternatives, so that no search for new candidates is involved, the problem comes down to one of statistical inference, and ranking & selection procedures can be applied. Let the probability of correct selection be denoted by PCS, which we will not define formally here; intuitively, correct selection would mean selecting either the best solution or a solution within some prespecified tolerance of the best.

There are two main forms the resulting problem formulations can take:

(i) minimize the number of simulation replications subject to the PCS exceeding a given level;

(ii) maximize the PCS subject to a given simulation budget constraint.
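In its crudest form, formulation (ii) can be sketched as follows: split a fixed replication budget equally across the alternatives and declare the one with the best sample mean. This naive equal allocation is for illustration only; OCBA-type procedures discussed below allocate the budget adaptively instead:

```python
import random
import statistics

def select_best(simulators, budget, seed=42):
    """Spend a fixed replication budget equally across a finite set of
    alternatives; return the index with the smallest sample mean
    (minimization). Naive equal allocation, for illustration."""
    rng = random.Random(seed)
    n = budget // len(simulators)          # replications per alternative
    means = []
    for sim in simulators:
        means.append(statistics.fmean(sim(rng) for _ in range(n)))
    return min(range(len(means)), key=means.__getitem__)

# three hypothetical systems with true means 3, 1, 2 and unit-variance noise
systems = [lambda rng, m=m: m + rng.gauss(0.0, 1.0) for m in (3.0, 1.0, 2.0)]
best = select_best(systems, budget=3000)
```

With 1000 replications per system, the sample means separate cleanly and the procedure selects the system with true mean 1.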

In case (i), one ensures a level of correct selection, but has little control over how much computation this might entail. This is the traditional statistics ranking & selection approach; in the simulation setting, Kim and Nelson (2005) overview the state of the art, where multiple comparison procedures can be used to provide valid confidence intervals as well; see also Goldsman and Nelson (1998). The books by Bechhofer, Santner, and Goldsman (1995) and Hochberg and Tamhane (1987) contain more general discussion of multiple comparison procedures outside of the simulation setting.

In case (ii), one tries to do the best within a specified computational limit, but a priori one may not have any idea how good the resulting solution will be. This formulation was coined the optimal computing budget allocation (OCBA) problem, as first proposed by Chen (1995). Subsequent and related work includes Chick and Inoue (2001a, b), Chen et al. (2005), and Fu et al. (2005).

Ranking & selection procedures can also be used in the following ways that are relevant to simulation optimization: for screening a large set of alternatives, i.e., quickly (meaning based on a relatively small number of replications) eliminating poor performers in order to get a more manageable set of alternatives; for comparing among candidate solutions in an iterative algorithm, e.g., deciding whether or not an improvement has been made; and for providing some statistical guidance in assessing the quality of the declared best solution versus all the solutions visited. The latter is what Boesel, Nelson, and Kim (2003) call "cleaning up" after simulation optimization.

The framework of ordinal optimization (Ho, Sreenivas, and Vakili 1992; Ho et al. 2000) might also be classified under ranking & selection. This approach is based on the observation that in most cases it is much easier to find an ordering among candidate solutions than to carry out the estimation procedure for each solution individually and then try to rank order the solutions. This can be especially true in the simulation setting, where the user has more control; for instance, one can use common random numbers to induce positive correlation between estimates of solution performance, dramatically reducing the number of simulation replications required to make a distinction. Intuitively, it is the difference between estimating J1 − J2 = E[L1 − L2] versus estimating P(J1 > J2), say using the simple sample mean based on n simulation replications. Estimating the former using the sample mean is governed by the Monte Carlo convergence rate of n^{-1/2}, whereas deciding on the latter based on the sample mean has an exponential convergence rate. Using the theory of large deviations from probability, Dai and Chen (1997) explore this exponential rate of convergence in the discrete-event simulation context.

    2.2 Response Surface Methodology

Response surface methodology (RSM) has its roots in statistical design of experiments, and its goal is to obtain an approximate functional relationship between the input variables and the output objective function. In design of experiments terminology, these are referred to as the factors and the response, respectively. RSM carried out on the entire domain of interest results in what is called a metamodel (see Barton 2005). The two most common ways of obtaining this representation are regression and neural networks. Once a metamodel is in hand, optimization can be carried out using deterministic optimization procedures. However, when optimization is the focus, a form of sequential RSM is usually employed (Kleijnen 1998), in which a local response surface representation is obtained that guides the sequential search. For example, linear regression could be used to obtain an estimate of the direction of steepest descent. This approach is model free, well established, and fairly straightforward to apply, but it is not implemented in any of the commercial packages. Until recently, SIMUL8 had employed an optimization algorithm based on a form of sequential RSM using neural networks, but it now uses OptQuest instead. The primary drawback seems to be the excessive use of simulation points in one area before exploring other parts of the search space. This can be especially exacerbated when the number of input variables is large. Recently, kriging has been proposed as a possibly more efficient way of carrying out this step (see van Beers and Kleijnen 2003). For more information on RSM procedures for simulation optimization, see Barton (2005) and Kleijnen (1998).
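A one-dimensional sketch of a sequential-RSM move, using an invented noisy quadratic objective (minimum near θ = 2, not from the paper): fit a first-order model on a small local design by least squares, then step in the steepest-descent direction:

```python
import random

def noisy_J(theta, rng):
    # illustrative noisy objective with minimum near theta = 2
    return (theta - 2.0) ** 2 + rng.gauss(0.0, 0.1)

def rsm_step(theta, rng, delta=0.5, reps=20, step=0.3):
    """One sequential-RSM move (1-D): average replications on a local
    design, fit the first-order (linear) model by least squares, and
    move against the estimated slope."""
    design = [theta - delta, theta, theta + delta]
    means = [sum(noisy_J(x, rng) for _ in range(reps)) / reps for x in design]
    xbar = sum(design) / 3
    ybar = sum(means) / 3
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(design, means))
             / sum((x - xbar) ** 2 for x in design))
    return theta - step * slope      # steepest-descent move

rng = random.Random(7)
theta = 0.0
for _ in range(30):
    theta = rsm_step(theta, rng)
```

After a few dozen moves the iterate settles near the minimizer θ = 2; in practice the design region and step size would themselves be adapted as the search progresses.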

    2.3 Gradient-Based Procedures

The gradient-based approach tries to mimic its counterpart in deterministic optimization. In the stochastic setting, the resulting procedures usually take the form of stochastic approximation (SA) algorithms; the book by Kushner and Yin (1997) contains a general discussion of SA outside of simulation. Specifically, given a current best setting of the input variables, a movement is made in the gradient direction, similar to sequential RSM. However, unlike sequential RSM procedures, SA algorithms can be shown to be provably convergent (asymptotically, usually to a local optimum) under appropriate conditions on the gradient estimator and step sizes, and they generally require far fewer simulations per iteration. Practically speaking, the key to making this approach successful is the quality of the gradient estimator. Fu (2005) surveys the main approaches available for coming up with gradient estimators that can be implemented in simulation. These include brute-force finite differences, simultaneous perturbations, perturbation analysis, the likelihood ratio/score function method, and weak derivatives. For technical details on simultaneous perturbation stochastic approximation (SPSA), see Spall (1992), Fu and Hill (1997), and Spall (2003). SPSA has two advantages over the other methods: it requires only two simulations per gradient estimate, regardless of the number of input variables, and it can treat the simulation model as a black box, i.e., no knowledge of the workings of the system is required (model free). A one-simulation version of SPSA is also available, but in practice it appears far noisier than the two-simulation version, even accounting for the effort being half as much.

An in-depth study of various gradient-based algorithms for the single-server M/M/1 queue example can be found in L'Ecuyer, Giroux, and Glynn (1994); see Fu (1994b) and Fu and Healy (1997) for the (s, S) inventory system. Kapuscinski and Tayur (1999) describe the use of perturbation analysis in a simulation optimization framework for inventory management of a capacitated production-inventory system. This approach was implemented on the worldwide supply chain of Caterpillar (its success reported in a Fortune magazine article by Philip Siekman, October 30, 2000: "New Victories in the Supply Chain Revolution"). The primary drawback of the gradient-based approach is that it is currently only practically applicable to the continuous-variable case, notwithstanding recent attempts to apply it to discrete-valued variables. Furthermore, estimating direct gradients may require knowledge of the underlying model, and the applicability of such estimators is often highly problem dependent.
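The two-simulations-per-iteration property of SPSA can be sketched as follows; the noisy quadratic test function and the gain-sequence constants are illustrative choices (the decay exponents 0.602 and 0.101 are the standard recommendations in Spall 2003):

```python
import random

def spsa_minimize(J, theta, iters=200, a=0.1, c=0.1, seed=3):
    """SPSA sketch: two noisy evaluations per iteration give a gradient
    estimate for every coordinate simultaneously."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                 # step-size gain sequence
        ck = c / k ** 0.101                 # perturbation-size sequence
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]   # Rademacher draws
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = J(plus, rng) - J(minus, rng)                # two simulations only
        # same scalar difference drives every coordinate of the gradient estimate
        theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

def noisy_quadratic(x, rng):
    # illustrative objective with minimum at (1, -1)
    return (x[0] - 1.0) ** 2 + (x[1] + 1.0) ** 2 + rng.gauss(0.0, 0.05)

sol = spsa_minimize(noisy_quadratic, [0.0, 0.0])
```

Note that the cost per iteration stays at two simulations no matter how many input variables there are, which is the source of SPSA's appeal in high dimensions.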

    2.4 Random Search

In contrast to gradient-based procedures, random search algorithms are targeted primarily at discrete input variable problems. They were first developed for deterministic optimization, but have been extended to the stochastic setting. Like gradient-based procedures, they proceed by moving iteratively from a current best setting of the input variables (candidate solution). Instead of using a gradient, however, the next move is probabilistically drawn from the neighborhood of the current best. For example, defining the neighborhood of a solution as all of the other solutions and drawing from a uniform distribution (assuming the number of feasible solutions is finite) would give a pure random search algorithm. Practically speaking, the success of a particular random search algorithm depends heavily on the defined neighborhood structure. Furthermore, in the stochastic setting, the estimation problem must also be incorporated into the algorithm. Thus, the two features that define an algorithm in the simulation optimization setting are

(a) how the next candidate solution(s) is (are) chosen; and

(b) how to determine which is the current best solution (which is not necessarily the current iterate).

The second feature (b) does not arise in the deterministic setting, because the current best (among visited solutions) is known with certainty, since the objective function values have no estimation noise. However, in the stochastic case, due to the noise in the objective function estimates, there are many possible choices, e.g., the current solution, the solution that has been visited the most often, or the solution that has the best sample mean thus far.

Like stochastic approximation algorithms, random search algorithms can generally be shown to be provably convergent (often to a global optimum). For more details on random search methods in simulation, see Andradottir (2005); for a more general survey on discrete input simulation optimization problems, see Swisher et al. (2001). A recently proposed version of random search that is very promising is Convergent Optimization via Most-Promising-Area Stochastic Search (COMPASS), introduced by Hong and Nelson (2005), which utilizes a unique neighborhood structure and results in an algorithm provably convergent to a locally optimal solution.

    2.5 Sample Path Optimization

Sample path optimization (also known as stochastic counterpart or sample average approximation; see Rubinstein and Shapiro 1993) takes many simulations first, and then tries to optimize the resulting estimates. Specifically, if J_i(θ) denotes the estimate of J(θ) from the ith simulation replication, the sample mean over n replications is given by

J̄_n(θ) = (1/n) Σ_{i=1}^{n} J_i(θ).

If each of the J_i(θ) are i.i.d. unbiased estimates of J(θ), then by the strong law of large numbers,

J̄_n(θ) → J(θ) with probability 1.

The approach then is to optimize, for a sufficiently large n, the deterministic function J̄_n, which approximates J. Its key feature, as Robinson (1996) advocates, is that "we can bring to bear the large and powerful array of deterministic [primarily continuous variable] optimization methods that have been developed in the last half-century. In particular, we can deal with problems in which the parameters might be subject to complicated constraints, and therefore in which gradient-step methods like stochastic approximation may have difficulty."


In the simulation context, the method of common random numbers is used to provide the same sample paths for J̄_n(θ) over different values of θ. Furthermore, the availability of derivatives greatly enhances the effectiveness of the approach, as many nonlinear optimization packages require these.
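The two-phase structure (freeze the randomness, then optimize deterministically) can be sketched with an invented objective J(θ) = E[(θ − ω)²], ω ~ N(2, 1), whose true minimizer is θ = 2; any deterministic one-dimensional optimizer then applies, here a simple golden-section search:

```python
import random

# Phase 1: fix n random outcomes once (the "sample paths")
rng = random.Random(11)
omegas = [rng.gauss(2.0, 1.0) for _ in range(2000)]

def J_bar(theta):
    """Deterministic sample average J̄_n(theta) over the frozen outcomes;
    approximates E[(theta - omega)^2], minimized at theta = E[omega] = 2."""
    return sum((theta - w) ** 2 for w in omegas) / len(omegas)

# Phase 2: minimize the deterministic J̄_n with any standard method
def golden_section(f, lo, hi, tol=1e-6):
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

theta_star = golden_section(J_bar, -10.0, 10.0)
```

Because the same `omegas` are reused at every θ, J̄_n is a fixed deterministic function; this is the common-random-numbers coupling described above, and the minimizer of J̄_n converges to the true minimizer as n grows.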

    2.6 Metaheuristics

Metaheuristics are methods that guide other procedures (heuristic or truncated exact methods) to enable them to overcome the trap of local optimality for complex optimization problems. Four metaheuristics have primarily been applied with some success to simulation optimization: simulated annealing, genetic algorithms, tabu search, and scatter search (occasionally supplemented by a procedure such as neural networks in a forecasting or curve fitting role). Of these, tabu search and scatter search have proved to be by far the most effective, and are at the core of the simulation optimization software that is now most widely used. We briefly sketch the nature of these two approaches below. The general metaheuristics framework of Olafsson (2005), which looks very much like the deterministic version of random search, also contains discussion of the nested partitions method introduced by Shi and Olafsson (2000a, b) (see also Pinter 1996).

Tabu Search (TS) is distinguished by introducing adaptive memory into metaheuristic search, together with associated strategies for exploiting such memory, equipping it to penetrate complexities that often confound other approaches. Applications of TS span the realms of resource planning, telecommunications, VLSI design, financial analysis, space planning, energy, distribution, molecular engineering, logistics, pattern classification, flexible manufacturing, waste management, mineral exploration, biomedical analysis, environmental conservation, and scores of others. A partial indication of the rapid recent growth of TS applications is disclosed by the fact that a Google search on "tabu search" returns more than 90,000 pages, a figure that has been growing exponentially over the past several years.

Adaptive memory in tabu search involves an attribute-based focus, and depends intimately on the elements of recency, frequency, quality, and influence. This catalog disguises a surprising range of alternatives, which arise by differentiating attribute classes over varying regions and spans of time. The TS notion of influence, for example, encompasses changes in structure, feasibility, and regionality, and the logical constructions used to interrelate these elements span multiple dimensions, involving distinctions between sequential logic and event-driven logic, giving rise to different kinds of memory structures.

The most comprehensive reference for tabu search and its applications is the book by Glover and Laguna (1997). A new book that gives more recent applications and pseudo-code for creating various implementations is scheduled to appear in 2006.

Scatter Search (SS) has its foundations in proposals from the 1970s that also led to the emergence of tabu search, and the two methods are highly complementary and often used together. SS is an evolutionary (population-based) algorithm that constructs solutions by combining others.

Scatter search is designed to operate on a set of points, called reference points, that constitute good solutions obtained from previous solution efforts. Notably, the basis for defining "good" includes special criteria such as diversity that purposefully go beyond the objective function value. The approach systematically generates combinations of the reference points to create new points, each of which is mapped into an associated feasible point. The combinations are generalized forms of linear combinations, accompanied by processes to adaptively enforce feasibility conditions, including those of discreteness.

The SS process is organized to (1) capture information not contained separately in the original points, (2) take advantage of auxiliary heuristic solution methods (to evaluate the combinations produced and to actively generate new points), and (3) make dedicated use of strategy instead of randomization to carry out component steps. SS basically consists of five methods: a diversification generation method, an improvement method (often consisting of tabu search), a reference set update method, a subset generation method, and a solution combination method. Applications of SS, like those of TS, have grown dramatically in recent years, and its use in simulation optimization has become the cornerstone of significant advances in the field. The most complete reference on SS is the book by Laguna and Marti (2002).
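The reference-set mechanics can be illustrated with a deliberately stripped-down, one-dimensional sketch. It keeps only three of the five methods (diversification generation, solution combination, reference set update), uses randomized combination weights where real SS uses systematic strategy, and uses a deterministic test objective; none of this is from the paper:

```python
import random

def scatter_search_sketch(f, bounds, rng, ref_size=5, iters=40):
    """Toy scatter-search-flavored loop (1-D, minimization): combine pairs
    of reference points by generalized linear combinations, map each new
    point back into the feasible interval, and keep the best points seen."""
    lo, hi = bounds
    # diversification generation: spread initial trial points over the range
    initial = [lo + (hi - lo) * i / (ref_size * 2 - 1) for i in range(ref_size * 2)]
    ref = sorted(initial, key=f)[:ref_size]          # reference set update
    for _ in range(iters):
        new_pts = []
        for i in range(len(ref)):
            for j in range(i + 1, len(ref)):
                lam = rng.uniform(-0.5, 1.5)          # combine beyond the segment
                x = ref[i] + lam * (ref[j] - ref[i])
                new_pts.append(min(max(x, lo), hi))   # map into the feasible set
        ref = sorted(set(ref + new_pts), key=f)[:ref_size]
    return ref[0]

best = scatter_search_sketch(lambda x: (x - 3.1) ** 2, (-10.0, 10.0),
                             random.Random(9))
```

A real implementation would add an improvement method (e.g., tabu search applied to each combined point) and diversity-based membership in the reference set, as described above.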

    3 MODEL-BASED METHODS

Table 1: Optimization for Simulation: Commercial Software Packages

Optimization Package (simulation platform) | Vendor (URL) | Primary Search Strategies
AutoStat (AutoMod) | AutoSimulations, Inc. (www.autosim.com) | evolutionary, genetic algorithms
Evolutionary Optimizer (Extend) | Imagine That, Inc. (www.imaginethatinc.com) | evolutionary, genetic algorithms
OptQuest (Arena, Crystal Ball, ProModel, SIMUL8, et al.) | OptTek Systems, Inc. (www.opttek.com) | scatter search, tabu search, neural networks
RISKOptimizer (@RISK) | Palisade Corp. (www.palisade.com) | genetic algorithms
Optimizer (WITNESS) | Lanner Group, Inc. (www.lanner.com/corporate) | simulated annealing, tabu search

An approach that looks promising and has just begun to be explored in the simulation optimization context is the class of model-based methods. These are contrasted with what are called instance-based approaches, which generate new solutions based only on the current solution (or population of solutions) (cf. Dorigo and Stutzle 2004, pp. 139-140). The metaheuristics described earlier generally fall into this latter category, with the exception of tabu search, because it uses memory. Model-based methods, on the other hand, are not dependent explicitly on any current set of solutions, but use a probability distribution on the space of solutions to provide an estimate of where the best solutions are located. The following are some examples:

Swarm Intelligence. This approach is perhaps best known under the name of Ant Colony Optimization, because it uses ant behavior (group cooperation and the use of pheromone updates and evaporation) as a paradigm for its probabilistic workings. Because there is memory involved in the mechanisms, like tabu search, it is not instance-based; see Dorigo and Stutzle (2004) for more details.

Estimation of Distribution Algorithms (EDAs). The goal of this approach is to progressively improve a probability distribution on the solution space based on samples generated from the current distribution. The crudest form of this would utilize all samples generated up to a certain point, hence the use of memory, but in practical implementations a parameterization of the distribution is generally employed, and the parameters are updated based on the samples; see Larranaga and Lozano (2002) for more details.
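A minimal EDA sketch in the PBIL style (a specific, simple member of the EDA family, chosen here for brevity) on the standard OneMax test problem, which is an illustration rather than an example from the paper: the parameterized distribution is an independent Bernoulli probability per bit, updated toward the best sample of each round:

```python
import random

def pbil_maximize(fitness, n_bits, rng, pop=50, iters=60, lr=0.1):
    """PBIL-style EDA sketch: maintain a probability vector over bits,
    sample a population from it, and shift the vector toward the best
    sample found each round."""
    p = [0.5] * n_bits                       # initial distribution: uniform
    for _ in range(iters):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        best = max(samples, key=fitness)
        # parameter update: move the distribution toward the elite sample
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    return p

# illustrative OneMax problem: maximize the number of ones in the bitstring
probs = pbil_maximize(sum, 20, random.Random(4))
```

The probability vector concentrates near the all-ones optimum, which is exactly the "estimate of where the best solutions are located" referred to above.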

Cross-Entropy (CE) Method. This approach grew out of a procedure to find an optimal importance sampling measure by projecting a parameterized probability distribution, using cross entropy to measure the distance from the optimal measure. As in EDAs, samples are taken and used to update the parameter values for the distribution. Taking the optimal measure as a point mass at the solution optimum of an optimization problem, the procedure can be applied in that context; see De Boer et al. (2005) and Rubinstein and Kroese (2004) for more details.

Model Reference Adaptive Search. As in EDAs, this approach updates a parameterized probability distribution, and like the CE method, it also uses the cross-entropy measure to project a parameterized distribution. However, the particular projection used relies on a stochastic sequence of reference distributions rather than a single fixed reference distribution (the final optimal measure) as in the CE method, and this results in very different performance in practice. Furthermore, stronger theoretical convergence results can be established; see Hu, Fu, and Marcus (2005a, b, c) for details.

    4 SOFTWARE

Table 1 surveys a few simulation optimization software packages (either plug-ins or integrated) currently available, and summarizes their search strategies. Comparing with Table 1 in Fu (2002), one observes that ProModel and SIMUL8 have both migrated to OptQuest from their previous simulation optimization packages (SimRunner and OPTIMIZ, respectively).

    5 APPLICATIONS

Applications of optimization technology are quite diverse; they cover a broad surface of business activities. To illustrate, the user of simulation and other business or industry evaluation models may want to know:

What is the most effective factory layout?
What is the safest equipment replacement policy?
What is the most cost-effective inventory policy?
What is the best workforce allocation?
What is the most productive operating schedule?
What is the best investment portfolio?

The answers to such questions require a painstaking examination of multiple scenarios, where each scenario in turn requires the implementation of an appropriate simulation or evaluation model to determine the consequences for costs, profits, and risks. The critical missing component is to disclose which decision scenarios are the ones that should be investigated and, still more completely, to identify good scenarios automatically by a search process designed to find the best set of decisions. This is the core problem for simulation optimization in a practical setting. The following descriptions provide a sampling of uses of the technology that enable solutions to be identified efficiently.

    5.1 Project Portfolio Management

For project portfolio management, OptFolio is a software tool being implemented in several markets, including Petroleum and Energy, IT Governance, and Pharmaceuticals. The following example demonstrates the versatility of OptFolio as a simulation optimization tool.

Among many other types of initiatives, the Pharmaceutical Industry uses project portfolio optimization to manage investments in new drug development. A pharmaceutical company that is developing a new breakthrough drug is faced with the possibility that the drug may not do what it was intended to do, or may have serious side effects that make it commercially infeasible. Thus, these projects have a considerable degree of uncertainty related to the probability of success. Relatively recently, an options-pricing approach, called "real options," has been proposed to model such uncertainties. Initial feedback has indicated that an obstacle to its market penetration is that it is difficult to understand and use; furthermore, there are no research results that illustrate performance rivaling the algorithmic approach underlying OptFolio.

The following example is based on data provided by Decision Strategies, Inc., a consulting firm with numerous clients in the Pharmaceutical Industry. The data consist of twenty potential projects in drug development having rather long horizons (5 to 20 years), and the pro-forma information is given as triangular distributions for both per-period net contribution and investment. The models use a probability-of-success index from 0% to 100%, applied in such a way that, if the project fails during a simulation trial, then the investments are realized, but the net contribution of the project is not. In this way, the system can be used to model premature project terminations, providing a simple, understandable alternative to real options. In this example, we examined five cases, the first two representing approaches commonly used in practice that, as we have found, turn out to be significantly inferior to the ones using the OptFolio software.

Case 1: Simple Ranking of Projects

Projects were ranked in this approach according to a specific objective criterion, a process often adopted in currently available Project Portfolio Management tools in order to select projects under a budgetary constraint. In this case the following objective measure was selected:

R = PV(Revenues) / PV(Expenses).

Employing the customary design, the 20 projects were ranked in descending order according to this measure, and projects were added to the final portfolio as long as the budget constraint was not violated. This procedure resulted in a portfolio with the following statistics:

μ(NPV) = 7342, σ(NPV) = 2472, q.05 = 3216,

where q.05 denotes the 5th percentile (quantile), i.e., P(NPV ≤ q_p) = p. In this case, 15 projects were selected in the final portfolio. What follows is a discussion of how using OptFolio can help improve these results.
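The Case 1 procedure — rank by R and add greedily until the budget is exhausted — is simple enough to state as code. The project figures below are hypothetical, chosen only to exercise the logic:

```python
# Case 1 made concrete: rank projects by R = PV(Revenues)/PV(Expenses),
# then add them in descending order while the budget holds. Data are hypothetical.
projects = [
    # (name, PV of revenues, PV of expenses)
    ("A", 900, 300), ("B", 500, 250), ("C", 400, 100), ("D", 700, 500),
]
budget = 700

ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)
portfolio, spent = [], 0
for name, rev, exp in ranked:
    if spent + exp <= budget:          # add only if the budget is not violated
        portfolio.append(name)
        spent += exp
print(portfolio, spent)                # → ['C', 'A', 'B'] 650
```

Note that the greedy pass skips project D entirely rather than taking a fractional position — a limitation the later cases remove by optimizing participation levels directly.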

Case 2: Traditional Markowitz Approach
The decision was to determine participation levels ∈ [0,1] in each project with the objective of maximizing the expected NPV of the portfolio while keeping the standard deviation of the NPV below a specified threshold of 1000. An investment budget was also imposed on the portfolio, where Bi denotes the budget in period i.

Maximize μ(NPV) subject to

σ(NPV) ≤ 1000, B1 ≤ 125, B2 ≤ 140, B3 ≤ 160.

This formulation resulted in a portfolio with the following statistics:

μ(NPV) = 4140, σ(NPV) = 1000, q.05 = 2432.

We performed this traditional mean-variance case to provide a basis for comparison for the subsequent cases. An empirical histogram for the optimal portfolio is shown in Figure 1.
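The three statistics reported for each case — μ(NPV), σ(NPV), and the 5th percentile — are all computed from the same set of simulated portfolio NPV draws. A minimal sketch, using a normal distribution purely as a stand-in for the simulated samples:

```python
import numpy as np

rng = np.random.default_rng(1)
npv = rng.normal(4000, 1000, size=10_000)   # stand-in portfolio NPV samples

mean = npv.mean()                # μ(NPV): expected return
std = npv.std(ddof=1)            # σ(NPV): the Markowitz risk measure
q05 = np.percentile(npv, 5)      # 5th percentile: P(NPV <= q05) ≈ 0.05
print(round(mean), round(std), round(q05))
```

The cases differ only in which of these three quantities serves as the objective and which serve as constraints.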

Case 3: Risk Controlled by 5th Percentile
The decision was to determine participation levels ∈ [0,1] in each project with the objective of maximizing the expected NPV of the portfolio while keeping the 5th percentile of NPV above the value determined in Case 2

  • Fu, Glover, and April

Figure 1: Mean-Variance Portfolio (histogram of NPV vs. frequency)

(2432), keeping the same investment budget constraints on the portfolio.

Maximize μ(NPV) subject to

q.05 ≥ 2432, B1 ≤ 125, B2 ≤ 140, B3 ≤ 160.

This case has replaced standard deviation with the 5th percentile for risk containment, which is an intuitive way to control catastrophic risk (Value at Risk, or VaR, in traditional finance terminology). The resulting portfolio has the following attributes:

μ(NPV) = 7520, σ(NPV) = 2550, q.05 = 3294.

By using the 5th percentile as a measure of risk, we were able to almost double the expected return compared to the solution found in Case 2, and to improve on the simple ranking solution. Additionally, as previously discussed, the 5th percentile provides a more intuitive measure of risk, i.e., there is a 95% chance that the portfolio will achieve an NPV of 3294 or higher. The NPV distribution is shown in Figure 2. It is interesting to note that this solution has more variability but is focused on the upside of the distribution. By focusing on the 5th percentile rather than standard deviation, a superior solution was created.
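OptFolio's internal optimizer is not described here, so purely as an illustration of the Case 3 formulation — maximize expected NPV subject to a floor on the 5th percentile — the sketch below runs a naive random search over participation levels on a toy Gaussian portfolio model. All project parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for 20 projects; participation levels scale both return and spread.
n_proj = 20
mu = rng.uniform(100, 800, n_proj)       # hypothetical per-project mean NPV
sigma = rng.uniform(100, 600, n_proj)    # hypothetical per-project volatility

def portfolio_stats(x, n_trials=2000):
    """Simulate the portfolio NPV for participation vector x."""
    draws = rng.normal(mu * x, sigma * x, size=(n_trials, n_proj)).sum(axis=1)
    return draws.mean(), np.percentile(draws, 5)

# Random search: maximize mean NPV subject to q.05 >= floor (Case 3 style).
floor = 2432.0
best_x, best_mean = None, -np.inf
for _ in range(200):
    x = rng.uniform(0, 1, n_proj)        # candidate participation levels in [0,1]
    m, q05 = portfolio_stats(x)
    if q05 >= floor and m > best_mean:   # feasible and better than incumbent
        best_mean, best_x = m, x
print(best_mean)
```

A production search (scatter search, as in OptQuest/OptFolio) would exploit memory of previous candidates rather than sampling blindly, but the feasibility test against a simulated quantile is the same.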

Case 4: Maximizing Probability of Success
The decision was to determine participation levels ∈ [0,1] in each project with the objective of maximizing the probability of meeting or exceeding the mean NPV found in Case 2, keeping the same investment budget constraints on the portfolio.

Maximize Prob(NPV ≥ 4140) subject to

B1 ≤ 125, B2 ≤ 140, B3 ≤ 160.

This case focuses on maximizing the chance of obtaining a goal and essentially combines performance and risk

Figure 2: 5th Percentile Portfolio (histogram of NPV vs. frequency)

containment into one metric. The resulting portfolio has the following attributes:

μ(NPV) = 7461, σ(NPV) = 2430, q.05 = 3366.

This portfolio has a 91% chance of achieving or exceeding the NPV goal of 4140, representing a significant improvement over the Case 2 portfolio, where the probability was only 50%. The NPV distribution is shown in Figure 3.
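The 91% figure is itself a Monte Carlo estimate, so it is worth attaching a standard error to such probabilities. A sketch, using a normal distribution with the reported Case 4 mean and standard deviation as a stand-in for the simulated NPV samples (which, reassuringly, reproduces a probability of about 0.91):

```python
import numpy as np

rng = np.random.default_rng(3)
npv = rng.normal(7461, 2430, size=10_000)   # stand-in samples, reported Case 4 moments

goal = 4140
p_hat = (npv >= goal).mean()                 # estimated P(NPV >= goal)
se = np.sqrt(p_hat * (1 - p_hat) / npv.size) # binomial standard error
print(f"P(NPV >= {goal}) = {p_hat:.2f} +/- {1.96 * se:.3f}")
```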

Figure 3: 5th Percentile Portfolio (histogram of NPV vs. frequency)

Case 5: All-or-Nothing
In many real-world settings, these types of projects have all-or-nothing participation levels, whereas in the Case 4 solution, most of the optimal participation levels found were fractional. Under the same investment budget constraints on the portfolio, Case 5 modified the Case 4 constraints to allow only 0 or 1 participation levels, i.e., a project must utilize 100% participation or be excluded from the portfolio.

Maximize Prob(NPV ≥ 4140) subject to

B1 ≤ 125, B2 ≤ 140, B3 ≤ 160, Participations ∈ {0, 1}.

The resulting portfolio has the following attributes:

μ(NPV) = 7472, σ(NPV) = 2503, q.05 = 3323.


In spite of the participation restriction, this portfolio also has a 91% chance of exceeding an NPV of 4140, and has a high expected return. In this case, as in Case 1, 15 out of the 20 projects were selected in the final portfolio, but the expected returns are higher. These cases illustrate the benefits of using alternative measures for risk. Not only are percentiles and probabilities more intuitive for the decision-maker, but they also produce solutions with better financial metrics. The OptFolio system can also be used to optimize performance metrics such as Internal Rate of Return (IRR) and Payback Period.

The illustrated analyses can be applied very effectively for complex, as well as simple, sets of projects, where different measures of risk and return can produce improvements over the traditional Markowitz (mean-variance) approach, as well as over simple project ranking approaches. The flexibility to choose various measures and statistics, both as objective performance measures and as constraints, is a major advantage of using a simulation optimization approach as embedded in OptFolio. The user is given the ability to select better ways of modeling and controlling risk, while aligning the outcomes to specific corporate goals.

OptFolio also provides ways to define special relationships that often arise between and among projects. Correlations can be defined between the revenues and/or expenses of two projects. In addition, the user can define projects that are mutually exclusive, or dependent. For example, in some cases, selecting Project A implies selecting Project B; such a definition can easily be made in OptFolio.

Portfolio analysis tools are designed to aid senior management in the development and analysis of project portfolio strategies, by giving them the capability to assess the impact on the corporation of various investment decisions. To date, commercial portfolio optimization packages are relatively inflexible and are often not able to answer the key questions asked by senior management. As a result of the simulation optimization capabilities embodied in OptFolio, new techniques are made available that increase the flexibility of portfolio optimization tools and deepen the types of portfolio analysis that can be carried out.

    5.2 Business Process Management

When changes are proposed to business processes in order to improve performance, important advantages can result by evaluating the projected improvements using simulation, and then determining an optimal set of changes using simulation optimization. In this case it becomes possible to examine and quantify the sensitivity of making the changes on the ultimate objectives

to reduce the risk of actual implementation. Changes may entail adding, deleting, and modifying processes, process times, resources required, schedules, work rates within processes, skill levels, and budgets. Performance objectives may include throughput, costs, inventories, cycle times, resource and capital utilization, start-up times, cash flow, and waste. In the context of business process management and improvement, simulation can be thought of as a way to understand and communicate the uncertainty related to making the changes, while optimization provides the way to manage that uncertainty.

The following example is based on a model provided

by CACI, and simulated on SIMPROCESS. Consider the operation of an emergency room (ER) in a hospital. Figure 4 shows a high-level view of the overall process, which begins when a patient arrives through the doors of the ER, and ends when a patient is either released from the ER or admitted into the hospital for further treatment. Upon arrival, patients sign in, receive an assessment of their condition, and are transferred to an ER. Depending on their assessment, patients then go through various alternatives involving a registration process and a treatment process, before being released or admitted into the hospital.

Patients arrive either on their own or in an ambulance, according to some arrival process. Arriving patients are classified into different levels, according to their condition, with Level 1 patients being more critical than Level 2 and Level 3 patients.

Level 1 patients are taken to an ER immediately upon

arrival. Once in the room, they undergo their treatment. Finally, they complete the registration process before being either released or admitted into the hospital for further treatment.

Level 2 and Level 3 patients must first sign in with

an Administrative Clerk. Their condition is then assessed by a Triage Nurse, and then they are taken to an ER. Once in the room, Level 2 and 3 patients must first complete their registration, then go on to receive their treatment, and, finally, they are either released or admitted into the hospital for further treatment.

After undergoing the various activities involved in

registration and treatment, 90% of all patients are released from the ER, while the remaining 10% are admitted into the hospital for further treatment. The final release/hospital-admission process consists of the following activities: 1. In case of release, either a nurse or a PCT fills out the release papers (whoever is available first). 2. In case of admission into the hospital, an Administrative Clerk fills out the patient's admission papers. The patient must then wait for a hospital bed to become available. The time until a bed is available is handled by an empirical probability distribution. Finally, the patient is transferred to the hospital bed.


    Figure 4: High-level Process View

The following illustrates a simple instance of this process that is actually taken from a real-world application. In this instance, due to cost and layout considerations, hospital administrators have determined that the staffing level must not exceed 7 nurses, 3 physicians, 4 PCTs, and 4 Administrative Clerks. Furthermore, the ER has 20 rooms available; however, using fewer rooms would be beneficial, since the additional space could be used more profitably by other departments in the hospital. The hospital wants to find the configuration of the above resources that minimizes the total asset cost. The asset cost includes the staff's hourly wages and the fixed cost of each ER used. We must also make sure that, on average, Level 1 patients do not spend more than 2.4 hours in the ER. This can be formulated as an optimization problem, as follows:

Minimize Expected Total Asset Cost

subject to the following constraints:

Average Level 1 Cycle Time ≤ 2.4 hours,
# Nurses ≤ 7,
# Physicians ≤ 3,
# PCTs ≤ 4,
# Admin. Clerks ≤ 4,
# ERs ≤ 20.
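The discrete search space these staffing limits define is easy to count (assuming each resource level ranges from 1 up to its cap), and pricing out brute-force enumeration at an assumed ~2 minutes per simulation experiment shows why it is impractical:

```python
# Size of the search space implied by the staffing limits above
# (assumed ranges: 1..7 nurses, 1..3 physicians, 1..4 PCTs, 1..4 clerks, 1..20 ERs).
configs = 7 * 3 * 4 * 4 * 20
minutes = configs * 2                    # assuming ~2 minutes per experiment
print(configs, minutes, minutes / 60)    # → 6720 13440 224.0
```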

This is a relatively unimposing problem in terms of size: five variables and six constraints. However, if we were to rely solely on simulation to solve this problem, even after the hospital administrators have narrowed down our choices to the above limits, we would have to perform 7 × 3 × 4 × 4 × 20 = 6,720 experiments. If we want a sample size of, say, at least 30 runs per trial solution in order to obtain the desired level of precision, then each experiment would take about 2 minutes, based on a Dell Dimension 8100 with a 1.7GHz Intel Pentium 4 processor. This means that a complete enumeration of

all possible solutions would take approximately 13,440 minutes, or over 220 hours of computation. This is obviously too long a duration for finding a solution.

In order to solve this problem in a reasonable amount

of time, we called upon the OptQuest optimization technology integrated with SIMPROCESS. As a base case, we used the upper resource limits provided by hospital administrators to get a reasonably good initial solution. This configuration yielded an Expected Total Asset Cost of $36,840, and a Level 1 patient cycle time of 1.91 hours.

Once we set up the problem in OptQuest, we ran it

for 100 iterations (experiments), with 5 runs per iteration (each run simulates 5 days of the ER operation). Given these parameters, the best solution, found at iteration 21, was: 4 nurses, 2 physicians, 3 PCTs, 3 administrative clerks, and 12 ERs.

The Expected Total Asset Cost for this configuration came out to $25,250 (a 31% improvement over the base case), and the average Level 1 patient cycle time was 2.17 hours. The time to run all 100 iterations was approximately 28 minutes.

After obtaining this solution, we redesigned some features of the current model to improve the cycle time of Level 1 patients even further. In the redesigned model, we assume that Level 1 patients can go through the treatment process and the registration process in parallel. That is, we assume that while the patient is undergoing treatment, the registration process is being done by a surrogate or whoever is accompanying the patient. If the patient's condition is very critical, then someone else can provide the registration data; however, if the patient's condition allows it, then the patient can provide the registration data during treatment.

Figure 5 shows the model with this change. By optimizing the model that incorporates this change, we now


    Figure 5: Proposed Process

obtain an average Level 1 patient cycle time of 1.98 hours (a 12% improvement). The new solution had 4 nurses, 2 physicians, 2 PCTs,

2 administrative clerks, and 9 ERs, yielding an Expected Total Asset Cost of $24,574, and an average Level 1 patient cycle time of 1.94 hours. By using simulation optimization, we were able to find a very high quality solution in less than 30 minutes.

    6 CONCLUSIONS

In addition to chapters in the Handbooks in Operations Research and Management Science: Simulation volume cited already, more technical details on simulation optimization techniques can be found in the chapter by Andradottir (1998) and the review paper by Fu (1994a), whereas the feature article by Fu (2002) explores deeper research-versus-practice issues. Previous volumes of these Winter Simulation Conference proceedings also provide good current sources (e.g., April et al. 2003, 2004). Other books that treat simulation optimization in some technical depth include Rubinstein and Shapiro (1993), Fu and Hu (1997), Pflug (1996), and Spall (2003).

Note that the model in model-based approaches is

a probability distribution on the solution space, as opposed to a model of the response surface itself; the input variables are the same in both cases. Is there some way of combining the two approaches? One seeming advantage of the probabilistic approach is that it applies equally well to both the continuous and discrete case. The key in both cases to practical implementation is parameterization! For example, neural networks and regression are used in the former case, whereas the natural exponential family works well in the latter case.

Relatively little research has been done on multi-response simulation optimization, or, for that matter, on problems with random constraints, i.e., where the constraints themselves must be estimated. Most of the commercial software packages, however, do allow multiple responses (combined by using a weighting) and explicit inequality constraints on output performance measures, but in the latter case, there is usually no statistical estimate provided as to how likely the constraint is actually being violated (just a confidence interval on the performance measure itself).

To summarize, here are some key issues in simulation

    optimization algorithms:

- neighborhood definition;
- mechanism for exploration/sampling (search), especially how previously generated (sampled) solutions are incorporated;
- determining which candidate solution(s) to declare the best (or good), and what statistical statements can be made;
- the computational burden of each function estimate (obtained through simulation replications) relative to search (the optimization algorithm).

The first two issues are not specific to the stochastic setting of simulation optimization, but their effectiveness depends intimately on the last issue. For example, defining the neighborhood as a very large region may lead to theoretical global convergence, but it may not lead to very efficient search, especially if simulation is expensive. The model-based algorithms allow a large neighborhood, but can also allow search in a localized manner by the way the model (probability distribution) is constructed and updated.
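One concrete instance of such a model-based scheme is the cross-entropy method (De Boer et al. 2005): the "model" is a sampling distribution over the solution space, repeatedly refit to the elite candidates. A minimal Gaussian sketch on a noisy toy objective (the objective and all parameters are illustrative, not from any application above):

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_objective(x):
    """Simulation stand-in: quadratic response observed with noise."""
    return -np.sum((x - 3.0) ** 2) + rng.normal(0, 0.5)

# Cross-entropy-style search: sample from the model, keep the elites,
# refit the model to them; the distribution narrows onto good regions.
dim, n_samples, n_elite = 2, 50, 10
mean, std = np.zeros(dim), np.full(dim, 5.0)
for _ in range(30):
    xs = rng.normal(mean, std, size=(n_samples, dim))   # sample candidates
    scores = np.array([noisy_objective(x) for x in xs])
    elites = xs[np.argsort(scores)[-n_elite:]]          # keep the best
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
print(mean)  # concentrates near the optimum at (3, 3)
```

The same update works for discrete solution spaces by replacing the Gaussian with a member of the natural exponential family (e.g., independent Bernoulli parameters for 0/1 participation levels).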


    ACKNOWLEDGMENTS

Michael C. Fu was supported in part by the National Science Foundation under Grant DMI-0323220, and by the Air Force Office of Scientific Research under Grant FA95500410210.

    REFERENCES

Andradottir, S. 1998. Simulation optimization. Chapter 9 in Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. J. Banks. New York: John Wiley & Sons.

Andradottir, S. 2005. An overview of simulation optimization via random search. Chapter 21 in Handbooks in Operations Research and Management Science: Simulation, eds. S.G. Henderson and B.L. Nelson. Elsevier.

April, J., F. Glover, J.P. Kelly, and M. Laguna. 2003. Practical introduction to simulation optimization. In Proceedings of the 2003 Winter Simulation Conference, eds. S. Chick, P.J. Sanchez, D. Ferrin, and D.J. Morrice, 71-78. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.

April, J., F. Glover, J.P. Kelly, and M. Laguna. 2004. New advances and applications for marrying simulation and optimization. In Proceedings of the 2004 Winter Simulation Conference, eds. R.G. Ingalls, M.D. Rossetti, J.S. Smith, and B.A. Peters, 255-260. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.

Barton, R. 2005. Response surface methodology. Chapter 19 in Handbooks in Operations Research and Management Science: Simulation, eds. S.G. Henderson and B.L. Nelson. Elsevier.

Bechhofer, R.E., T.J. Santner, and D.M. Goldsman. 1995. Design and Analysis of Experiments for Statistical Selection, Screening, and Multiple Comparisons. New York: John Wiley & Sons.

Boesel, J., B.L. Nelson, and S.-H. Kim. 2003. Using ranking and selection to clean up after simulation optimization. Operations Research 51: 814-825.

Chen, C.H. 1995. An effective approach to smartly allocate computing budget for discrete event simulation. In Proceedings of the 34th IEEE Conference on Decision and Control, 2598-2605.

Chen, C.H., J. Lin, E. Yucesan, and S.E. Chick. 2000. Simulation budget allocation for further enhancing the efficiency of ordinal optimization. Discrete Event Dynamic Systems: Theory and Applications 10: 251-270.

Chen, C.H., E. Yucesan, L. Dai, and H.C. Chen. 2005. Efficient computation of optimal budget allocation for discrete event simulation experiment. IIE Transactions, forthcoming.

Chen, H.C., C.H. Chen, and E. Yucesan. 2000. Computing efforts allocation for ordinal optimization and discrete event simulation. IEEE Transactions on Automatic Control 45: 960-964.

Chick, S.E., and K. Inoue. 2001a. New two-stage and sequential procedures for selecting the best simulated system. Operations Research 49: 732-743.

Chick, S.E., and K. Inoue. 2001b. New procedures to select the best simulated system using common random numbers. Management Science 47: 1133-1149.

Dai, L., and C. Chen. 1997. Rate of convergence for ordinal comparison of dependent simulations in discrete event dynamic systems. Journal of Optimization Theory and Applications 94: 29-54.

De Boer, P.-T., D.P. Kroese, S. Mannor, and R.Y. Rubinstein. 2005. A tutorial on the cross-entropy method. Annals of Operations Research 134 (1): 19-67.

Dorigo, M., and T. Stutzle. 2004. Ant Colony Optimization. Cambridge, MA: MIT Press.

Fu, M.C. 1994a. Optimization via simulation: A review. Annals of Operations Research 53: 199-248.

Fu, M.C. 1994b. Sample path derivatives for (s, S) inventory systems. Operations Research 42: 351-364.

Fu, M.C. 2002. Optimization for simulation: Theory vs. practice (feature article). INFORMS Journal on Computing 14 (3): 192-215.

Fu, M.C. 2005. Gradient estimation. Chapter 19 in Handbooks in Operations Research and Management Science: Simulation, eds. S.G. Henderson and B.L. Nelson. Elsevier.

Fu, M.C., and K.J. Healy. 1997. Techniques for simulation optimization: An experimental study on an (s, S) inventory system. IIE Transactions 29 (3): 191-199.

Fu, M.C., and S.D. Hill. 1997. Optimization of discrete event systems via simultaneous perturbation stochastic approximation. IIE Transactions 29 (3): 233-243.

Fu, M.C., and J.Q. Hu. 1997. Conditional Monte Carlo: Gradient Estimation and Optimization Applications. Boston: Kluwer Academic Publishers.

Fu, M.C., J.Q. Hu, C.H. Chen, and X. Xiong. 2005. Simulation allocation for determining the best design in the presence of correlated sampling. INFORMS Journal on Computing, forthcoming.

Glover, F., and M. Laguna. 1997. Tabu Search. Boston: Kluwer Academic Publishers.

Goldsman, D., and B.L. Nelson. 1998. Comparing systems via simulation. Chapter 8 in Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. J. Banks. New York: John Wiley & Sons.

Ho, Y.C., C.G. Cassandras, C.H. Chen, and L.Y. Dai. 2000. Ordinal optimization and simulation. Journal of the Operational Research Society 51: 490-500.

Ho, Y.C., R. Sreenivas, and P. Vakili. 1992. Ordinal optimization of DEDS. Discrete Event Dynamic Systems: Theory and Applications 2: 61-88.

Hochberg, Y., and A.C. Tamhane. 1987. Multiple Comparison Procedures. New York: John Wiley & Sons.

Hu, J., M.C. Fu, and S.I. Marcus. 2005a. A model reference adaptive search algorithm for global optimization. Operations Research, submitted. Also available at http://techreports.isr.umd.edu/ARCHIVE/dsp reportList.php?year=2005&center=ISR.

Hu, J., M.C. Fu, and S.I. Marcus. 2005b. A model reference adaptive search algorithm for stochastic global optimization, submitted.

Hu, J., M.C. Fu, and S.I. Marcus. 2005c. Simulation optimization using model reference adaptive search. In Proceedings of the 2005 Winter Simulation Conference.

Kapuscinski, R., and S.R. Tayur. 1999. Optimal policies and simulation based optimization for capacitated production inventory systems. Chapter 2 in Quantitative Models for Supply Chain Management, eds. S.R. Tayur, R. Ganeshan, and M.J. Magazine. Boston: Kluwer Academic Publishers.

Kim, S.-H., and B.L. Nelson. 2005. Selecting the best system. Chapter 18 in Handbooks in Operations Research and Management Science: Simulation, eds. S.G. Henderson and B.L. Nelson. Elsevier.

Kleijnen, J.P.C. 1998. Experimental design for sensitivity analysis, optimization, and validation of simulation models. Chapter 6 in Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. J. Banks. New York: John Wiley & Sons.

Kushner, H.J., and G.G. Yin. 1997. Stochastic Approximation Algorithms and Applications. New York: Springer-Verlag.

Laguna, M., and R. Marti. 2002. Scatter Search. Boston: Kluwer Academic Publishers.

Larranaga, P., and J.A. Lozano. 2002. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Boston: Kluwer Academic Publishers.

L'Ecuyer, P., N. Giroux, and P.W. Glynn. 1994. Stochastic optimization by simulation: Numerical experiments with the M/M/1 queue in steady-state. Management Science 40: 1245-1261.

Olafsson, S. 2005. Metaheuristics. Chapter 22 in Handbooks in Operations Research and Management Science: Simulation, eds. S.G. Henderson and B.L. Nelson. Elsevier.

Pflug, G.C. 1996. Optimization of Stochastic Models. Boston: Kluwer Academic Publishers.

Pinter, J.D. 1996. Global Optimization in Action. Boston: Kluwer Academic Publishers.

Rubinstein, R.Y., and A. Shapiro. 1993. Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method. New York: John Wiley & Sons.

Rubinstein, R.Y., and D.P. Kroese. 2004. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning. New York: Springer-Verlag.

Shi, L., and S. Olafsson. 2000. Nested partitions method for global optimization. Operations Research 48: 390-407.

Spall, J.C. 1992. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control 37 (3): 332-341.

Spall, J.C. 2003. Introduction to Stochastic Search and Optimization. New York: John Wiley & Sons.

Swisher, J.R., P.D. Hyden, S.H. Jacobson, and L.W. Schruben. 2001. A survey of recent advances in discrete input parameter discrete-event simulation optimization. IIE Transactions 36 (6): 591-600.

van Beers, W.C.M., and J.P.C. Kleijnen. 2003. Kriging for interpolation in random simulation. Journal of the Operational Research Society 54 (3): 255-262.

    AUTHOR BIOGRAPHIES

MICHAEL C. FU is a Professor in the Robert H. Smith School of Business, with a joint appointment in the Institute for Systems Research and an affiliate appointment in the Department of Electrical and Computer Engineering, all at the University of Maryland. He received degrees in mathematics and EE/CS from MIT, and an M.S. and Ph.D. in applied mathematics from Harvard University. His research interests include simulation methodology and applied probability modeling, particularly with applications towards manufacturing and financial engineering. He teaches courses in simulation, stochastic modeling, computational finance, and supply chain logistics and operations management. In 1995 he received the Allen J. Krowe Award for Teaching Excellence, and he was a University of Maryland Distinguished Scholar-Teacher for 2004-2005. He currently serves as Simulation Area Editor of Operations Research, and was co-Editor for a 2003 special issue on simulation optimization in the ACM Transactions on Modeling and Computer Simulation. He is co-author of the book Conditional Monte Carlo: Gradient Estimation and Optimization Applications, which received the INFORMS College on Simulation Outstanding Publication Award in 1998. His e-mail address is [email protected].


FRED W. GLOVER is President of OptTek Systems, Inc., and is in charge of algorithmic design and strategic planning initiatives. He currently serves as Comcast Chaired Professor in Systems Science at the University of Colorado. He has authored or co-authored more than 340 published articles and seven books in the fields of mathematical optimization, computer science, and artificial intelligence, with particular emphasis on practical applications in industry and government. Dr. Glover is the recipient of the distinguished von Neumann Theory Prize, as well as of numerous other awards and honorary fellowships, including those from the American Association for the Advancement of Science, the NATO Division of Scientific Affairs, the Institute of Management Science, the Operations Research Society, the Decision Sciences Institute, the U.S. Defense Communications Agency, the Energy Research Institute, the American Assembly of Collegiate Schools of Business, Alpha Iota Delta, and the Miller Institute for Basic Research in Science. He also serves on advisory boards for numerous journals and professional organizations. His e-mail address is [email protected].

JAY APRIL is Chief Development Officer of OptTek Systems, Inc. He holds bachelor's degrees in philosophy and aeronautical engineering, an MBA, and a Ph.D. in Business Administration (with emphasis in operations research and economics). Dr. April has held several executive positions, including VP of Business Development and CIO of EG&G subsidiaries, and Director of Business Development at Unisys Corporation. He also held the position of Professor at Colorado State University, heading the Laboratory of Information Sciences in Agriculture. His e-mail address is [email protected].