Synergy for Smart Multi-Objective Optimisation
D3.2Report on refining surrogate modelling
Beate Breiderhoff, Thomas Bartz-Beielstein & Boris Naujoks
31 July 2018
SYNERGY Horizon 2020 – GA No 692286
CONTENTS
1 Introduction ......................................................................................................................... 3
1.1 Motivation................................................................................................................... 3
1.2 Structure .................................................................................................................... 4
2 Invited Lectures, Training, Tutorials and Workshops ................................................................ 4
2.1 Invited Lectures .......................................................................................................... 4
2.2 Training ...................................................................................................................... 7
2.3 Tutorials ..................................................................................................................... 8
2.4 Workshops ................................................................................................................. 9
3 Special Sessions.................................................................................................................. 10
4 Future Events ...................................................................................................................... 11
5 References .......................................................................................................................... 13
Appendix A: Lecture and tutorial handouts ................................................................................... 14
Appendix B: WCCI 2016 special session program ........................................................................ 93
D3.2 2 31 July 2018
1 Introduction

This report on refining surrogate modelling describes the actions taken to transfer Cologne University of Applied Sciences (CUAS) know-how to the Jožef Stefan Institute (JSI) and the cooperation of JSI and CUAS within different events. These range from the organisation of events at different conferences to the dissemination of know-how to the areas covered by Slovenia's Smart Specialisation Strategy. The corresponding reporting period is RP2, from month 16 to month 36.
This deliverable is part of the SYNERGY H2020 European project (http://cordis.europa.eu/project/rcn/199104_en.html). It briefly describes the invited lectures and tutorials given at different scientific events attended by JSI staff, as well as training events on site in Ljubljana. Consortium members have performed many activities related to a variety of events aimed at sharing the existing knowledge and gaining new knowledge on surrogate modelling and smart multi-objective optimisation. It is expected that innovative solutions will be developed by applying the gained knowledge to real-world problems from Slovenian industry (see Section 4).
1.1 Motivation

When faced with expensive-to-evaluate optimisation problems, one frequent approach is to replace the optimised objective function with a surrogate model. Such models, also known as response surface models or meta-models, form a well-recognised research area. Meta-model-assisted optimisation yields huge improvements in optimisation time or cost in a large number of different scenarios. Hence, it is extremely useful for numerous real-world applications. Many real-world optimisation problems involve multiple, often conflicting objectives and rely on computationally expensive simulations to assess these objectives. Such multi-objective optimisation problems can be solved more efficiently if the simulations are partly replaced by accurate surrogate models. Surrogate models are data-driven models built to simulate the processes or devices that are subject to optimisation. They are used when more precise models, such as those based on the finite element method or computational fluid dynamics, require too much time and resources. Applications include, but are not limited to, the optimisation of designs like airfoils or ship propulsion systems, chemical processes, biogas plants, composite structures, and electromagnetic circuit design. While surrogate models allow for fast simulation and assessment of the optimisation objectives, they also represent an additional source of imprecision. In multi-objective optimisation, this may constitute a particular challenge when comparing candidate solutions.
1.2 Structure

The rest of the report is organised as follows. In Section 2, we briefly describe the given lectures and tutorials related to surrogate modelling. At the beginning of every description there is a short summary of the most important characteristics of the event. The associated handouts are included in Appendix A. In Section 3, we summarise two special sessions. The first was organised by the SYNERGY members Bogdan Filipič and Thomas Bartz-Beielstein at the IEEE World Congress on Computational Intelligence (WCCI 2016) in Vancouver; the corresponding program is included in Appendix B. The second was organised by Boris Naujoks, Vanessa Volz, Tea Tušar and Pascal Kerschke at GECCO 2018. Finally, Section 4 gives an outlook on future events.
2 Invited Lectures, Training, Tutorials and Workshops
2.1 Invited Lectures
Title: On Applying Surrogates in Evolutionary (Multi-Objective) Optimization
Lecturer: Boris Naujoks
Date: 8 May 2017
Venue: University of Primorska, Koper, Slovenia
The presentation started with a short introduction to evolutionary algorithms (EAs) and the concepts of Pareto-based multi-objective optimisation that are often applied in EAs. The industrial applicability of EAs is often said to be limited by the runtime of the algorithm, which itself heavily depends on the time spent on individual fitness function evaluations. For this scenario, the method of learning and incorporating surrogate models has become state of the art over the last decade. To introduce surrogate-assisted EAs, the focus was first set on different modelling techniques that are frequently applied. After that, different techniques for integrating surrogates into EAs were presented, and typical problems arising when implementing them in real-world optimisation tasks were discussed. The talk concluded with a discussion of open issues. The slides created for this lecture are included in Appendix A on page 15.
Title: Industrial Applications of Model-based Simulation and Optimisation
Lecturer: Thomas Bartz-Beielstein, Jörg Stork and Martin Zaefferer
Date: 17 May 2016
Venue: JSI, Ljubljana
Industrial applications play a prominent role at the SPOTSeven Lab, which is located at the
Department of Computer and Engineering Sciences at Cologne University of Applied Sciences
(CUAS). This talk addressed three topics:
1. The SPOTSeven lab was introduced. This included a presentation of its partners, goals
and aims, as well as its approach to industry projects.
2. Three research projects with different industrial partners were introduced. The first dealt with the optimisation of the injection molding process required for the production of a ring gasket. The second project concerned the optimisation of an SO2 sensor. The third project involved measurement and process optimisation of drinking water and its distribution networks.
3. The background of model-based optimisation was introduced. Simple, continuous examples were used to demonstrate the ideas. Furthermore, an extension to discrete, combinatorial problems was discussed.
The case study illustrates how simulation-based optimisation can be enhanced by integrating various modelling techniques, including analytical, surrogate-based, and experimental approaches. The slides created for this lecture are included in Appendix A on page 30.
Title: A Survey of Model-based Methods for Global Optimization
Lecturer: Thomas Bartz-Beielstein
Date: 18 May 2016
Venue: The 7th International Conference on Bioinspired Optimization Methods and their Applications (BIOMA 2016), Bled, Slovenia
This lecture of one hour and twenty minutes took place at the BIOMA 2016 conference in Bled on 18 May and was held by Thomas Bartz-Beielstein. The presented work described model-based methods for global optimisation. After introducing the global optimisation framework, modelling approaches for stochastic algorithms were presented. One can differentiate between models that use a distribution and models that use an explicit surrogate model. Fundamental aspects of and recent advances in surrogate-model-based optimisation were discussed, and strategies for selecting and evaluating surrogates were presented. The lecture concluded with a description of key features of two state-of-the-art surrogate-model-based algorithms, namely the EVOlvability Learning of Surrogates (EvoLS) algorithm and Sequential Parameter Optimization (SPO). The slides created for this lecture are included in Appendix A on page 36.
Title (in Slovene): Umetna inteligenca za pametne tovarne (English: Artificial Intelligence
for Smart Factories)
Lecturers: Peter Korošec and Bogdan Filipič
Date: 3 July 2018
Venue: Chamber of Commerce and Industry of Slovenia, Ljubljana, Slovenia
The presentation started with an elevator-pitch-style talk about the competences of JSI and the goals of the SYNERGY project, followed by an introduction to artificial intelligence, where the advantages and weaknesses of using it were explained. Among the various difficulties of applying artificial intelligence, special attention was dedicated to communication and requirements from industrial partners. Following this general introduction, we focused on applying artificial intelligence in digital twins. A digital twin is a digital copy of a real-world entity (a system, a process, etc.) that authentically models the real world and reacts to changes in it. During the presentation, special attention was given to optimisation as a part of artificial intelligence. The talk concluded with a presentation of two real-world examples from practice, where different applications of artificial intelligence within the Smart Specialisation Strategy for Smart Factories were shown and discussed. The slides (in Slovene) created for this lecture are included in Appendix A on page 54.
2.2 Training
Title: A Gentle Introduction to Kriging
Author, Presenter: Martin Zaefferer
Date: 17 June 2016
Venue: CUAS, Gummersbach, Germany
In this training, the modelling technique Kriging was introduced. Kriging is a popular choice of surrogate model. It treats observations as realisations of a Gaussian process. The popularity of this technique is due to the fact that it not only produces accurate predictions, but also provides an estimate of the prediction uncertainty. This feature is used to balance exploration and exploitation in surrogate-assisted optimisation. We started by motivating Kriging with a simple, linear modelling problem that was solved by a classic regression approach. Then, Kriging was introduced and used to model a more complex problem. Finally, the uncertainty estimate of the model was presented and it was briefly explained how this is employed in optimisation algorithms. The training was based on R code to show that Kriging can be implemented very easily.

The slides created for this lecture are included in Appendix A. All employed source code is included within the slides on page 58.
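The training material itself used R; purely as an illustration of how little code the core of Kriging requires, a Gaussian process predictor with a squared-exponential kernel can be sketched in a few lines of Python (the function and parameter names below are ours, not taken from the training):

```python
import numpy as np

def kriging_predict(X, y, X_new, theta=10.0, nugget=1e-10):
    """Minimal Kriging (Gaussian process regression) sketch with a
    squared-exponential kernel; returns the predictive mean and an
    uncertainty estimate (standard deviation) at the new locations."""
    def kernel(A, B):
        # squared-exponential correlation between all pairs of rows
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-theta * d2)

    K = kernel(X, X) + nugget * np.eye(len(X))  # correlation matrix
    k = kernel(X_new, X)                        # correlations to new points
    mean = k @ np.linalg.solve(K, y)            # predictive mean
    var = 1.0 - np.sum(k * np.linalg.solve(K, k.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))  # clamp tiny negatives

# toy 1-D example: interpolate three observations
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 1.0, 0.5])
mean, sd = kriging_predict(X, y, np.array([[0.25], [0.75]]))
```

At the observed points the predicted standard deviation collapses towards zero; away from them it grows, which is exactly the property exploited to balance exploration and exploitation.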
Title: Training on Different Approaches and Methods for Multi-objective Optimization
Author, Presenter: Boris Naujoks
Date: 10-11 November 2016
Venue: JSI, Ljubljana, Slovenia
Boris Naujoks trained JSI staff on different approaches and methods for multi-objective optimisation. The training was given in an informal way, where the approaches and methods were discussed based on their applicability to real-world optimisation tasks. This provided a strong connection to such kinds of problems and directly made the attendees familiar with different ways to approach them. In addition, real-world problems were selected that could be considered for developing and testing surrogate-assisted optimisation techniques. As a result, the training concluded with a discussion of different ways to integrate both multi-objective optimisation and surrogate models into evolutionary algorithms. No slides were used for this training.
2.3 Tutorials
Title: Meta-model Assisted (Evolutionary) Optimization
Author, Presenter: Boris Naujoks, Jörg Stork, Martin Zaefferer, and Thomas Bartz-Beielstein
Date: 18 September 2016
Venue: 14th International Conference on Parallel Problem Solving from Nature (PPSN 2016),
Edinburgh, United Kingdom
This tutorial mainly focused on evolutionary optimisation assisted by meta-models and had the following aims. Firstly, it provided a detailed understanding of the established concepts and distinguished methods in meta-model-assisted optimisation. To this end, it presented an overview of current research and open issues in this field. Moreover, it aimed at a practical approach: the tutorial was expected to enable the participants to apply up-to-date meta-modelling approaches to actual problems at hand, and typical problems and their solutions were discussed with the participants. Finally, the tutorial offered new perspectives by taking a look into areas where links to meta-modelling concepts have been established more recently, e.g., the application of meta-models in multi-objective optimisation or for combinatorial search spaces. The slides created for this lecture are included in Appendix A on page 67.
2.4 Workshops
Title: Investigating the Effectiveness of Multi-Criteria Surrogate-Assisted Evolutionary Algo-
rithms
Author, Presenter: Vanessa Volz, Boris Naujoks
Date: 9 May 2017
Venue: Šmarna Gora, Slovenia
This workshop contribution presented a new approach in surrogate-assisted optimisation. The approach was recently presented at two conferences, where it was tailored for single-objective optimisation on the one hand and for multi-objective optimisation on the other. The new contribution of this approach, called SAPEO (Surrogate-Assisted Partial Order-Based Evolutionary Optimization Algorithm), is that exact, expensive evaluations of search points are only executed if certain necessities really require them. These necessities are based on the uncertainty intervals provided by the surrogate model, how they interfere with the intervals of already evaluated points, and a threshold for the corresponding uncertainties. Several ideas and implementations have been tested with the aim of reducing exact, expensive evaluations as much as possible [1], [2].
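As an illustrative reading of this interval idea (not the published SAPEO implementation; the function names are ours), the decision of whether an exact evaluation is necessary can be sketched as:

```python
def certainly_better(interval_a, interval_b):
    """Point a is certainly better than b (minimisation) if the upper bound
    of a's prediction interval lies below the lower bound of b's."""
    (lo_a, hi_a), (lo_b, hi_b) = interval_a, interval_b
    return hi_a < lo_b

def needs_exact_evaluation(interval_a, interval_b):
    """An exact, expensive evaluation is only required when the two
    prediction intervals overlap, i.e. the surrogate cannot rank the
    two points with certainty."""
    return not (certainly_better(interval_a, interval_b)
                or certainly_better(interval_b, interval_a))
```

Whenever the intervals are disjoint, the surrogate's ranking is trusted and the expensive simulator is not called.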
The presentation took approximately 40 minutes, with another 5 minutes for discussion. The presentation prompted so much good feedback that the 5 minutes planned for discussion were extended. The slides created for this lecture are included in Appendix A on page 85.
3 Special Sessions
Title: Multiobjective Optimization with Surrogate Models
Organisers: Bogdan Filipič, Thomas Bartz-Beielstein and Carlos A. Coello
Date: 29 July 2016
Venue: IEEE World Congress on Computational Intelligence (WCCI 2016), Vancouver, Canada
The aim of this special session of two hours was to bring together researchers and practitioners working with surrogate-based multi-objective optimisation algorithms to present recent achievements in the field and discuss directions for further work. Prospective authors were invited to submit their original and unpublished work on all aspects of surrogate-assisted multi-objective optimisation. The scope of the special session covered, but was not limited to, the following topics:
• State-of-the-art in multi-objective optimisation with surrogate models
• Theoretical aspects of surrogate-assisted multi-objective optimisation
• Novel surrogate-based multi-objective optimisation algorithms
• Comparative studies in multi-objective optimisation with surrogates
• Benchmark problems and performance measures for multi-objective optimisation with
surrogates
• Real-world applications of multi-objective optimisation using surrogates
The program of this session is included in Appendix B.
Title: Game-Benchmark for Evolutionary Algorithms
Organisers: Boris Naujoks, Vanessa Volz, Tea Tušar and Pascal Kerschke
Date: 19 July 2018
Venue: Genetic and Evolutionary Computation Conference (GECCO 2018), Kyoto, Japan
Games are a very interesting topic that motivates a lot of research. Key features of games are controllability, safety and repeatability, but also the ability to simulate properties of real-world problems such as measurement noise, uncertainty and the existence of multiple objectives. They have therefore been repeatedly suggested as testbeds for AI algorithms. However, until now, there has not been any concerted effort to implement such a benchmark. The workshop intended to fill this gap by motivating and coordinating the development of game-based problems for EAs and encouraging a discussion about which types of problems and function properties are of interest. For this workshop, students were invited to submit their game-based optimisation problems; these were compiled into a publicly available benchmark and analysed. We discussed the results and future work during the conference in Kyoto. As a result of the workshop, we obtained a first game-based test suite for the COCO (COmparing Continuous Optimizers) platform.
4 Future Events
Title: SYNERGY Summer School on Efficient Multi-Objective Optimisation
Organisers: JSI, CUAS and USTL
Date: 27-31 August 2018
Venue: Jožef Stefan Institute (JSI), Ljubljana
The SYNERGY Summer School on Efficient Multi-Objective Optimisation will be devoted to efficient multi-objective optimisation through parallelisation and surrogate modelling. In addition to lectures on the state of the art in this area, the summer school will offer all attendees access to Grid'5000, a large-scale infrastructure for research in grid computing. The targeted audience are students, researchers and industrial participants interested in efficient multi-objective optimisation.

The SYNERGY Summer School represents a unique chance to provide a coherent educational package that spans from large-scale parallelisation to surrogate modelling, and builds an attendee profile with knowledge of smart multi-objective optimisation and its applications.
Title: Conference on High-Performance Optimization (HPO) in Industry
Organisers: Bogdan Filipič and Thomas Bartz-Beielstein
Date: 8 October 2018
Venue: Jožef Stefan Institute (JSI), Ljubljana
The conference on High-Performance Optimization (HPO) in Industry will be a forum for presenting use cases and exchanging experience among academic and industrial partners on deploying HPO, as well as stimulating further proliferation of the methodology through promotional activities, establishing direct collaboration between academia and industry, and applying for multilateral projects. The conference is an activity of the H2020 project SYNERGY for Smart Multi-objective Optimisation and will be held as part of the International Multiconference on Information Society (IS 2018). Prospective authors are invited to submit original papers describing applied research and real-world applications. Areas of interest include high-performance computing, single- and multi-objective optimisation, evolutionary computation, surrogate modelling, surrogate-based optimisation, and their applications in science, engineering and business.
5 References

[1] Vanessa Volz, Günter Rudolph, and Boris Naujoks. Investigating uncertainty propagation in surrogate-assisted evolutionary algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '17), pages 881–888, New York, NY, USA, 2017. ACM.
[2] Vanessa Volz, Günter Rudolph, and Boris Naujoks. Surrogate-assisted partial order-based evolutionary optimisation. In Evolutionary Multi-Criterion Optimization (EMO), 2017.
Appendix A: Lecture and tutorial handouts
Surrogate Assisted (Evolutionary) Optimization
Boris Naujoks
May 8, 2017
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 1 / 89
Overview
• Motivation
• Requirements
• Concepts and methods
• Typical problems in application
• Multi-criteria optimisation
• Combinatorial optimisation
• Open Issues / Research perspectives / Fields of Interest
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 2 / 89
Most common applications
• Engineering design
• Long, expensive fitness function evaluations
  – Finite element models
  – Computational fluid dynamics models
• Examples
  – Airfoil design
  – Ship propulsion systems
  – etc.

[Excerpt from a paper reproduced on the slide:]

Additional variable geometry parameters for the linear jet, in contrast to the more simple propeller blade optimization, are:
• the hub's diameter and length
• the nozzle's length and profile angle.
This results in an optimization problem featuring 14 decision parameters, in contrast to the nine decision parameters of the pure propeller blade optimization problem described above. The stator is not included in the optimization yet.
Again, like in the task presented before, a geometry to deliver more or less thrust compared to the desired value is "punished" with a higher target value. By a similar "punishment" of a geometry generating cavitation and a "reward" for efficiency, we calculate a single target value which is returned to the optimization program after every hydrodynamic simulation.

Fig. 2. Visualization of the linear jet propulsion system; within the two-dimensional view, the three different parts (blades, hub and nozzle) can be identified.

III. META-MODEL-ASSISTED EVOLUTIONARY OPTIMIZATION

The idea to assist direct search algorithms by meta-models was first explored by Torczon et al. [8], [9] for pattern search algorithms. A similar approach can be employed in evolutionary algorithms (EAs) by incorporating a pre-screening procedure before the offspring population is evaluated with the time-consuming evaluation tool. Algorithm 1 gives an outline of MAEA, which is, in fact, a modified version of the basic (µ + λ)-EA described by Bäck, Hammel, and Schwefel [10] or Beyer and Schwefel [11]. Two features distinguish MAEA from a standard EA:
1) All exactly evaluated individuals are recorded and stored in a database. Up to 15 nearest neighbors are considered to set up the meta-model for each of the λ individuals per generation.

Fig. 3. Visualization of a more complex propulsion system featuring rotor, hub, and nozzle. Within the three-dimensional figure of the geometry, the nozzle had to be hidden except for the corresponding grid to see the other components. Nevertheless, this view enables one to see the composition of hub and blades in more detail.

2) During the pre-screening phase, the objective function values for new solutions are predicted by the meta-model before deciding whether they need to be re-evaluated by the exact and costly tool.
Thereby, at generation t, the set of offspring solutions Gt is reduced to the subset of offspring solutions Qt, which will be evaluated exactly and will also be considered in the final selection procedure (cf. [1]).

Algorithm 1: (µ + ν < λ)-MAEA
  t ← 0
  Pt ← init()                /* Pt: set of solutions */
  evaluate Pt precisely
  initialize database D
  while t < tmax do
    Gt ← generate(Pt)        /* λ new offspring */
    evaluate Gt with meta-model
    Qt ← select(Gt)          /* |Qt| = ν */
    evaluate Qt precisely
    update database
    Pt+1 ← select(Qt ∪ Pt)   /* select µ best */
    t ← t + 1
  end while

A. Pre-screening procedures

A ranking algorithm, applied over the offspring population Gt, identifies the most promising individuals in the new generation. In the general case, this algorithm is based on the values y(x) (predictions for f(x)) and s(x) (corresponding standard deviations) obtained for each individual x ∈ Gt through the meta-model. Comparisons with objective function values for the parent population Pt are necessary. Various criteria for identifying promising solutions are discussed by Emmerich et al. [1]. Once the promising subset Qt of Gt has been found, its members undergo exact evaluations.
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 4 / 89
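The pre-screening step described above can be sketched as follows; ranking by a lower confidence bound y(x) - s(x) is only one of the several possible criteria mentioned in the text, and all names here are ours:

```python
def prescreen(offspring, meta_model, nu):
    """Pre-screening sketch: rank the offspring by a lower confidence
    bound y(x) - s(x), where the meta-model returns the prediction y
    and standard deviation s, and keep the nu most promising candidates
    for exact evaluation (minimisation)."""
    def lcb(x):
        y, s = meta_model(x)
        return y - s
    return sorted(offspring, key=lcb)[:nu]

# toy meta-model: prediction x^2 with a constant uncertainty of 0.1
selected = prescreen([3.0, -1.0, 2.0, 0.5], lambda x: (x * x, 0.1), nu=2)
```

Only the `nu` selected candidates would then be passed to the expensive evaluation tool.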
Intro and Motivation
• Synonyms
  – Metamodels
  – Surrogates
  – Response surface models
  – Approximation models
  – Simulation models
  – Data-driven models
  – Emulators
• From Latin surrogatus
  – a replacement for something, a substitute or alternative; perfect passive participle of surrogare
  – variant of subrogare, from
  – sub (under) + rogare (ask)
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 5 / 89
Overview
• Motivation
• Requirements
  – Evolutionary Algorithms
  – Multi-criteria Optimisation
• Concepts and methods
• Multi-criteria optimisation
• Combinatorial optimisation
• Open Issues / Research perspectives / Fields of Interest
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 7 / 89
Multiobjective Optimization
How to compare Apples and Oranges?!?
From: http://xkcd.com/388/, modified
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 8 / 89
Multi-objective optimisation problem
• Minimize
  f : R^n → R^m,  f(x) = (f_1(x), ..., f_m(x))

Pareto dominance
• Solution x dominates solution y:
  x ≺ y  :⇔  ∀i : f_i(x) ≤ f_i(y)  (i = 1, ..., m)
          ∧  ∃j : f_j(x) < f_j(y)  (j = 1, ..., m)
• Pareto set: set of all non-dominated solutions in the search space
  {x | ∄z : z ≺ x}
• Pareto front: image of the Pareto set in objective space
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 9 / 89
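The dominance relation and the resulting non-dominated set follow directly from the definitions above; a minimal sketch (helper names are ours):

```python
def dominates(fx, fy):
    """fx Pareto-dominates fy (minimisation): no worse in every
    objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (2, 2) and (3, 3) are dominated by (1, 2); (1, 2) and (2, 1) survive
front = pareto_front([(1, 2), (2, 1), (2, 2), (3, 3)])
```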
Aggregating approaches
Combine functions (easily)
• Sum:
  f(x) = Σ_{i=1}^{m} f_i(x)
• frequently using weights ω_i:
  f(x) = Σ_{i=1}^{m} ω_i · f_i(x)
  (useful condition for the weights: Σ_{i=1}^{m} ω_i = 1)
• Weighted Sum: Frequently used first approach, however ...
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 10 / 89
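The weighted-sum scalarisation above amounts to a one-liner (illustrative sketch, names are ours):

```python
def weighted_sum(fx, weights):
    """Scalarise an objective vector f(x) = (f_1(x), ..., f_m(x))
    with weights that sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9  # normalised weights
    return sum(w * f for w, f in zip(weights, fx))

value = weighted_sum((2.0, 4.0), (0.5, 0.5))
```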
Aggregating approaches
Weighted sum: characteristics
[Figure: weighted-sum scalarisation on a convex front, a convex front with unequal weights, and a concave front]
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 11 / 89
Aggregating approaches
• Weighted sum: frequently used first approach, however ...
  – Convergence to a single point
  – Problems facing concave fronts
• Alternative approaches:
  – E.g.: min-max approach
  – But: convergence to a single point again
• Alternatively: Set- (population-) based approaches
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 12 / 89
Set- / population-based approach: Evolutionary Algorithms
• t = 0
• initialize Q(t)
• evaluate Q(t)
• do
  – Q'(t) ← recombine Q(t)
  – Q''(t) ← mutate Q'(t)
  – evaluate Q''(t)
  – Q(t + 1) ← select Q(t) ∪ Q''(t)
  – t = t + 1
• until happy

• population sizes
  – parent population: µ = |Q(t)|
  – offspring population: λ = |Q''(t)|
• EA variants, notation
  – (µ + 1)-EA
  – (µ + λ)-EA
  – (µ, λ)-EA
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 13 / 89
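The loop above can be sketched as a minimal (µ + λ)-EA for continuous minimisation (an illustrative sketch with Gaussian mutation only; names and parameter values are ours):

```python
import random

def mu_plus_lambda_ea(fitness, dim, mu=10, lam=20, sigma=0.1,
                      generations=100, seed=1):
    """Minimal (mu + lambda) evolutionary algorithm sketch: Gaussian
    mutation and truncation selection over the union of parents and
    offspring (elitist, minimisation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [[x + rng.gauss(0.0, sigma) for x in rng.choice(pop)]
                     for _ in range(lam)]
        pop = sorted(pop + offspring, key=fitness)[:mu]  # keep mu best
    return pop[0]

sphere = lambda x: sum(v * v for v in x)
best = mu_plus_lambda_ea(sphere, dim=5)
```

In the surrogate-assisted variant discussed in the talk, the expensive `fitness` calls on the offspring would be partly replaced by meta-model predictions.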
Judging the quality of sets
Criteria
• Distribution, dispersion
• Convergence, advancement
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 14 / 89
Which is the better set of points?
[Figure: two scatter plots of objective vectors (Y1 vs. Y2) for SMS-EMOA with npregen=15 and npregen=10, each comparing the population at the stop generation with generation 125]
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 15 / 89
Hypervolume
• Advantages
  – Honors uniform distribution
  – Honors convergence
  – Upgradeable to higher dimensions
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 16 / 89
SMS-EMOA
• (µ + 1) hypervolume selection
  – Generate 1 new solution
  – Omit the solution with the least hypervolume contribution
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 17 / 89
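For two objectives, the hypervolume and the least contributor that the selection above discards can be computed with a simple sweep (illustrative sketch for minimisation; names are ours):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a mutually non-dominated 2-D front (minimisation)
    with respect to the reference point ref, via a left-to-right sweep."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):      # ascending f1 => descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def least_hv_contributor(front, ref):
    """The point whose removal loses the least hypervolume, i.e. the
    candidate the (mu + 1) selection would omit."""
    total = hypervolume_2d(front, ref)
    return min(front, key=lambda p: total
               - hypervolume_2d([q for q in front if q != p], ref))

hv = hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4))
```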
Airfoil optimization
• Lift vs. drag
• Two "optimal" profiles as reference solutions
[Figure: airfoil profiles Solution 1 and Solution 2]
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 18 / 89
Dust separators - cyclones
• Separation of solid and liquid particles from gases

Goals:
• Maximize collection efficiency
• Minimize pressure loss

Design parameters (selection):
• Cyclone diameter
• Cyclone height
• Inlet width, etc.
From: en.wikipedia.org
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 19 / 89
Pareto-fronts and visualization
Direct visualization in the 2-D and 3-D case
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 20 / 89
Pareto-fronts and visualization
More sophisticated methods:
• parallel plot
• star plot
• heat map
[Figure: parallel plot with axes MU, ETA_C, ETA_M, PROB_C, PROB_M]
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 21 / 89
Overview
• Motivation
• Requirements
• Concepts and methods
• Multi-criteria optimisation
• Combinatorial optimisation
• Open Issues / Research perspectives / Fields of Interest
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 22 / 89
Surrogate Modeling - Concepts and Methods
Questions to Answer:
1. What is the core concept of surrogate modeling?
2. How does a typical surrogate optimisation cycle work?
3. Which models are common for surrogate optimisation?
4. Example method: Efficient Global Optimisation (EGO)
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 23 / 89
Costly real world (blackbox) problems
• Real-world applications: commonly blackbox problems (machines, complex processes)
• Available information is very sparse; properties of the objective function are difficult or impossible to determine
• No a priori information about modality, convexity, gradients, or the minimal function value f(x*) is known
• The most complex problems arise if physical experiments are involved, which are costly in terms of the needed resources (manpower, material, time)
• Wrong values can lead to hazardous effects, e.g., damaging or destroying experimental material
• Instead of physical experiments, simulations are used, e.g., from the field of Computational Fluid Dynamics (CFD)
• These require a lot of computational power and are very time demanding

Result: an inevitable need to evaluate candidate solutions in the search space to retrieve any information, and a high demand on resources for each of these evaluations.
B. Naujoks Surrogate Assisted (Evolutionary) Optimization May 8, 2017 24 / 89
Surrogate Modeling - Application Layers
L1 The Real-World Application
  – Direct optimisation is very costly or impossible
  – Evaluations involve resource-demanding prototype building or even hazardous experiments
L2 The Simulation Model
  – Complex computational model from fluid or structural dynamics
  – A single simulation process may take minutes, hours, or even weeks to compute
  – Available computational power limits the amount of available evaluations
L3 The Surrogate Model
  – Data-driven regression model
  – The accuracy heavily depends on the underlying surrogate type and the amount of available information
  – Typically cheap
L4 The Optimisation Process
  – Any suitable optimisation algorithm (deterministic, stochastic, metaheuristic, ...)
  – Can be tuned
Surrogate Modeling - Core Concept
[Diagram: layered optimisation scheme. A tuning procedure supplies (optimized) algorithm control parameters to the (f4) optimization algorithm; the optimizer exchanges candidate solutions and predicted fitness with the (f3) surrogate model; the surrogate is built from simulated input and estimated output / process parameters of the (f2) simulation model, which mimics the input/output behaviour of the (f1) real-world application or physical model; decision variables are returned as optimized variables.]
Surrogate Modeling - Costs and Benefits
• Each layer L1 to L4 imposes different evaluation costs and solution accuracies:
  – Most expensive and accurate: L1, the real-world application
  – Commonly cheapest: L3, the surrogate model
• The modeling process itself requires computational resources for evaluations, construction, and validation of the surrogate.

Main benefit of using surrogates: reduction of the fitness evaluations needed on the objective function during optimisation.

• Additional advantage: the surrogate itself is available and can be utilized to gain further problem insight -> particularly valuable for blackbox problems.
• The initial sampling design plan has a major impact on the optimisation performance and should be carefully selected.
Surrogate Modeling - Optimisation Cycle
A common optimisation process using surrogates is outlined by the following steps:
1 Sampling the objective function to generate a set of evaluated points
2 Selecting a suitable surrogate
3 Constructing the surrogate using the evaluated points
4 Utilizing the surrogate to predict new promising locations
5 Evaluating the objective function at one (or more) of the identified locations
6 Updating the surrogate and repeating the optimisation cycle
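As an illustration only, these six steps can be sketched in Python on a one-dimensional toy problem; the objective function, the quadratic surrogate, and all parameter values below are placeholder choices, not part of the original slides:

```python
import numpy as np

def expensive_f(x):
    """Toy stand-in for a costly objective (1-D, minimum at x = 0.3)."""
    return (x - 0.3) ** 2

rng = np.random.default_rng(42)

# Step 1: initial sampling design (here simply uniform random in [0, 1])
X = list(rng.uniform(0.0, 1.0, 5))
y = [expensive_f(x) for x in X]

for _ in range(10):                       # Step 6: repeat the cycle
    # Steps 2-3: fit a simple quadratic surrogate to the evaluated points
    surrogate = np.poly1d(np.polyfit(X, y, deg=2))

    # Step 4: search the surrogate for a promising location
    candidates = np.linspace(0.0, 1.0, 201)
    x_new = candidates[np.argmin(surrogate(candidates))]

    # Step 5: evaluate the true objective only at the chosen point
    X.append(x_new)
    y.append(expensive_f(x_new))

best = X[int(np.argmin(y))]
print(f"best x found: {best:.3f}")        # close to the true optimum 0.3
```

In practice, step 4 would typically use an infill criterion such as expected improvement rather than the plain surrogate minimum.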
Surrogate Modeling - Important Publications
Important publications featuring overviews or surveys on surrogate modeling andsurrogate optimisation:
• Design and analysis of computer experiments [Sacks et al., 1989]
• A taxonomy of global optimisation methods based on response surfaces [Jones, 2001]
• Surrogate-based analysis and optimisation [Queipo et al., 2005]
• Recent advances in surrogate-based optimisation [Forrester and Keane, 2009]
• Surrogate-assisted evolutionary computation: Recent advances and future challenges [Jin, 2011]
Linear Models
• Combination of linear predictor functions of each input to model the output
• Basic LM: y = β0 + β1·x1 + β2·x2 + · · · + βn·xn + ε, where ε is the error term
• Extensions: interactions between inputs, quadratic terms, response surface models, polynomial regression
• A polynomial model takes the form y = a0 + a1·x + a2·x^2 + a3·x^3 + · · · + an·x^n + ε
Pro: white box, easy to interpret / analyse, simple and fast
Con: not suitable for complex functions, overfitting (by using too many terms)
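Such a model can be fitted by ordinary least squares; the numpy sketch below uses invented ground-truth coefficients purely for illustration, with an interaction term as one of the mentioned extensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented ground truth: y = 1 + 2*x1 - 3*x2 + 0.5*x1*x2 (noise-free for clarity)
X = rng.uniform(-1, 1, size=(50, 2))
y = 1 + 2 * X[:, 0] - 3 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]

# Design matrix: intercept, linear terms, and an interaction term
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.round(beta, 3))   # recovers the coefficients 1, 2, -3, 0.5
```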
Decision Trees and Random Forests
• Decision Trees [Breiman et al., 1984] model the objective function using tree-based approximations.
• At each node of the tree, a split is made on the basis of a decision variable's value
• The prediction for a new point is given by the mean value of the associated points
• Random Forest Regression [Breiman, 2001]: a large number of decision trees is combined into an ensemble predictor
• Usually, each tree in the ensemble is fitted using a subset of the evaluated points to avoid overfitting
• Predictions for new individuals are given by the cumulated mean of all predictors in the ensemble

Pro: easy-to-interpret white box model (decision trees), fast, handles binary, integer, and real variables

Con: complex to interpret (RF), bad fit for complex functions (decision tree), no smooth surface, overfitting (too large a tree)
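As a minimal illustration of the tree idea (not any of the referenced implementations), a one-split regression tree, a "stump", already shows the node splitting and leaf-mean prediction described above:

```python
import numpy as np

def fit_stump(x, y):
    """Fit a one-split regression tree: choose the split that minimises
    the summed squared error of the two leaf means."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (np.inf, None)
    for i in range(1, len(x)):
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            split = (x[i - 1] + x[i]) / 2          # split halfway between points
            best = (sse, (split, left.mean(), right.mean()))
    return best[1]

def predict_stump(model, x_new):
    """Predict with the mean of the leaf the point falls into."""
    split, left_mean, right_mean = model
    return left_mean if x_new <= split else right_mean

# Two clearly separated clusters of responses
x = np.array([0.0, 0.1, 0.2, 0.8, 0.9, 1.0])
y = np.array([1.0, 1.1, 0.9, 3.0, 3.1, 2.9])
model = fit_stump(x, y)
print(predict_stump(model, 0.05), predict_stump(model, 0.95))
```

A real regression tree applies this split search recursively; a random forest averages many such trees fitted on data subsets.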
Artificial Neural Networks and Deep Learning
• Neural Networks [Haykin, 2004; Hornik et al., 1989] are inspired by the biological brain
• Utilize artificial neurons to learn and approximate the behavior of a function
• Neurons: weighted transform functions
• Several layers of neurons: input, output, and hidden layers
• Layers consist of neurons with different forward and/or backward connections
• Deep learning [Deng and Yu, 2014; Hinton et al., 2006]:
  – Complex structured networks with multiple processing layers and/or multiple non-linear transformations and stacked model approaches
  – Excellent results in approximation and especially classification tasks
  – Computationally highly complex, lots of resources needed

Pro: very accurate (deep learning), universal approximator

Con: high computational effort, difficult to interpret, very complex (deep learning)
Symbolic Regression
• Symbolic Regression [Flasch et al., 2010] is a high-level method to fit a human-readable mathematical model
• Based on Genetic Programming (GP)
• Mathematical expressions are built from building blocks (+, −, sin, cos, exp, ...)
• The model is evolved using a population-based (evolutionary) approach
Pro: easy to interpret, fast prediction
Con: high computational complexity (building process)
Kriging
• Kriging, or Gaussian Process Regression [Sacks et al., 1989]: models the error term of the model instead of only the linear coefficients
• Simplest form: y = β0 + ε, where β0 is the mean
• ε is expressed by a Gaussian stochastic process
• The error term ε is modeled with the help of a covariance distance matrix
• The correlation between errors is related to the distance between the corresponding points
• The covariance matrix is utilized to predict unknown candidates
• Outstanding feature of Kriging models: an uncertainty measure for the prediction and Expected Improvement (EI)

Pro: suitable for complex functions, uncertainty measurement and EI

Con: not suitable for high-dimensional data, high computational effort
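The prediction mechanism can be illustrated with a deliberately minimal Kriging sketch: constant mean, squared-exponential correlation, and an invented, fixed hyperparameter theta (real implementations estimate theta by maximum likelihood):

```python
import numpy as np

def kriging_fit(X, y, theta=10.0):
    """Constant-mean Kriging sketch with correlation exp(-theta * d^2)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    d2 = (X[:, None] - X[None, :]) ** 2
    Psi = np.exp(-theta * d2) + 1e-10 * np.eye(len(X))   # tiny nugget for stability
    Psi_inv = np.linalg.inv(Psi)
    ones = np.ones(len(X))
    beta0 = (ones @ Psi_inv @ y) / (ones @ Psi_inv @ ones)  # generalized LS mean
    return X, y, theta, Psi_inv, beta0

def kriging_predict(model, x_new):
    """Return predictive mean and variance (the uncertainty measure)."""
    X, y, theta, Psi_inv, beta0 = model
    psi = np.exp(-theta * (X - x_new) ** 2)              # correlations to data
    mean = beta0 + psi @ Psi_inv @ (y - beta0)
    sigma2 = ((y - beta0) @ Psi_inv @ (y - beta0)) / len(X)  # process variance
    var = sigma2 * (1.0 - psi @ Psi_inv @ psi)           # shrinks to 0 at data points
    return mean, max(var, 0.0)

X = [0.0, 0.25, 0.5, 0.75, 1.0]
y = [np.sin(6 * x) for x in X]
model = kriging_fit(X, y)
m0, v0 = kriging_predict(model, 0.5)   # at a training point: interpolates, variance ~0
m1, v1 = kriging_predict(model, 0.6)   # between points: higher uncertainty
```

The predictive variance is exactly what EI-based infill criteria exploit.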
Expected Improvement
• Current best point for minimization: x* with function value f(x*)
• For a new point x', the improvement in the objective function is

  I(x') = [f(x*) − f(x')]^+ = max(0, f(x*) − f(x'))
[Figure: outputs of a Gaussian Random Field Metamodel for an R → R mapping example. Training points (x(1), y(1)), ..., (x(3), y(3)) are interpolated by the predicted function ŷ(x); at a new point x', the model returns the prediction ŷ(x') together with a confidence range ŷ(x') ± s(x').]
Image taken from [Emmerich et al., 2006]
Intuition: Expected Improvement is the expectation of the improvement, i.e., every possible improvement value weighted by its probability.
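When the prediction at x' is Gaussian with mean mu and standard deviation sigma, this expectation has a closed form; the helper below is an illustrative sketch for minimisation (the function name and test values are invented):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimisation: E[max(0, f_best - Y)],
    where Y ~ N(mu, sigma^2) is the surrogate's prediction."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)          # degenerate, certain prediction
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi

# A point predicted slightly worse than the best but very uncertain can have
# higher EI than a point predicted equal to the best but nearly certain:
print(expected_improvement(mu=1.1, sigma=1.0, f_best=1.0))
print(expected_improvement(mu=1.0, sigma=0.01, f_best=1.0))
```

This is how EI balances exploitation (low mu) against exploration (high sigma).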
Method Example: Efficient Global Optimisation

• Efficient Global Optimisation (EGO) by [Jones et al., 1998a] is a surrogate optimisation framework specialized in utilizing Kriging and expected improvement
• Focus on optimisation of expensive blackbox functions
• The original version of EGO starts by sampling the objective function with a space-filling experimental design
• Example: an LHD with approximately k = 10n points and a convenient, finite-decimal value for the inter-point spacing, e.g., 21 design points for 2 dimensions
• A Kriging surrogate is fit to the selected design points using maximum likelihood estimation
• The surrogate is then manually analyzed by applying different diagnostic tests
• If it is satisfactory, the iterative optimisation process is started; if not, a transformation of the objective function (by log or inverse) is tried to acquire a better fit
Method Example: Efficient Global Optimisation

The optimisation cycle features the following steps:

1 Calculate and maximize the expected improvement on the surrogate by an exact branch-and-bound algorithm
2 Sample the objective function where the expected improvement is maximized
3 Re-estimate the Kriging surrogate, including the new candidate, by maximum likelihood estimation

The authors introduce a stopping criterion, which is reached if the expected improvement is less than one percent of the current best candidate's value
EGO Pseudo-Code Phase I: Building
Algorithm 1.1: EGO
begin
    phase 1, initial surrogate building:
    initialize population X of size k based on a space-filling DOE
    evaluate X on f(x)
    xc = best candidate in f(X)
    fit Kriging surrogate model fm to X by maximum likelihood estimation
    manually verify fm by diagnostic tests
    if verify(fm) = false then
        transform f(x) by log or inverse and repeat the fitting process
    end
end
EGO Pseudo-Code Phase II: Optimisation
Algorithm 1.2: EGO
begin
    phase 2, use and refine surrogate:
    while not termination-condition do
        xnew = calculate and maximize EI on the surrogate model by
               branch-and-bound optimisation
        if EI(xnew)/|f(xc)| < 0.01 then
            stop algorithm
        end
        evaluate f(xnew)
        add xnew to X
        xc = best candidate in f(X)
        re-estimate fm with X by maximum likelihood estimation
    end
end
Overview
• Motivation
• Requirements
• Concepts and methods
• Typical problems in application
• Multi-criteria optimisation
• Combinatorial optimisation
• Open Issues / Research perspectives / Fields of Interest
Typical Problems in Practice
• Previous slides: numerical issues (Kriging)
• Other, more general issues:
  – Problem definition
    • What is the objective?
    • What variables impact the objective?
    • ...
  – Algorithm design, selection of:
    • Model
    • Optimizer
    • Parameters
Typical Problems in Practice: Problem definition
• Very important, crucial to success
• Often underestimated
• Information based on
  – Discussions with application experts and practitioners
  – Literature
  – Experience
Typical Problems in Practice: Problem definition

• Consider the following:
  – Aims and goals
    • What are they?
    • Can they be clearly defined?
    • Can they be evaluated (measured, computed)?
    • Cost of evaluation?
    • Budget?
    • Desired accuracy?
  – Variables affecting the objective(s)
    • How many?
    • Independent variables?
    • Disturbance variables?
    • Data types?
  – Constraints?
  – Noise?
  – Interfacing, data exchange
• Repeat the aforementioned, e.g., after first results
[Diagram: surrogate model optimization loop. The optimizer passes independent parameters to the surrogate model and to the costly experiment or simulation; the experiment, subject to noise and disturbance parameters, yields measurements and results from which the objective(s) are computed and fed back to the model and the optimizer.]
Typical Problems in Practice: Model Selection
[Figure: cloud of surrogate model types: spline models, linear regression, Kriging, support vector machines, neural networks, symbolic regression, RBFNs, random forest, regression trees]
• A large variety of models is available
• Which to choose?
• Potential solutions:
  – Use the "default" (e.g., Kriging / EGO)
  – Exploit problem knowledge
  – Select performance-based, or combine -> ensembles (open issue)
Typical Problems in Practice: Model Selection
• No problem is truly black-box
• Use what you know, e.g.:
  • Number of parameters
    – 20 or more: Kriging and related models lose performance
  • Data types
    – Continuous: Kriging, SVMs, RBFNs
    – Integer, binary, categorical parameters: e.g., Random Forest
    – Mixed: Treed Gaussian Processes (TGP)
    – Structured / combinatorial (e.g., permutations, trees): see later slides
  • Data set sizes (budget)
    – Large: Kriging may become slow
    – Small: take care to use models that avoid overfitting
Typical Problems in Practice: Model Selection
• Structure of the fitness landscape:
  – Highly multi-modal: do not use simple linear models
  – Smooth: Kriging or related
  – Large plateaus or discontinuities: Kriging variants may perform poorly
  – Known trend: use Kriging with a trend function
• Cost of the objective function
  – Rather high: complex, powerful models (Kriging, SVMs)
  – Rather low: less complex, cheaper models (linear regression, tree-based, k-Nearest Neighbor)
• Requirements of understandability / learning from the model
  – Variable importance: most models
  – Rule extraction: regression trees
  – Human-readable formulas: linear models, genetic programming (symbolic regression)
• Availability of derivatives
  – e.g., Gradient Enhanced Kriging
Typical Problems in Practice: Model Selection
• Other considerations
  – Customer / practitioner preferences and knowledge
    • Do they understand the models?
    • Do they trust results from the models?
  – Your own preferences & experience
    • e.g., with regard to parameterization
    • or implementation
  – Note: various model types are often quite similar, related, and interchangeable
  – e.g.: spline models - Kriging - SVM - RBFN
Typical Problems in Practice: Implementation
• Once models are selected -> implementation
• Can have a significant impact
• Options
  – Frequently employed packages/libraries
    • Quality
    • Community support
    • Examples, documentation
    • Continuity of development
  – Less frequently used work
    • For special tasks?
    • Because of specific features?
  – Do it yourself
    • No other implementation available (or too slow, buggy)
    • Specific features not available
    • More fun, but also more work
    • You know what the model really does
Typical Problems in Practice: Optimizer Selection
• Similar considerations as for models
• Optimizer also depends on model type (and vice versa)
– Smooth, differentiable models like Kriging: gradient-based optimizers are fine
– Non-smooth (tree-based): GA, DE, PSO
– Multimodality (of prediction or infill criterion, e.g., EI):Population based, restarts, niching, etc.
– Simple linear regression: analytical
Typical Problems in Practice: Parameter Selection
• Similar to model selection / optimizer selection ...
• ... but with more attention to detail
  – Use expert / literature suggestions
  – Exploit problem knowledge
• Parameters affect: complexity, cost of modeling, cost of model optimization, noise handling, robustness, smoothness, ...
• Tuning, benchmarks (open issue)
Overview
• Motivation
• Requirements
• Concepts and methods
• Typical problems in application
• Multi-criteria optimisation
• Combinatorial optimisation
• Open Issues / Research perspectives / Fields of Interest
SAMCO
Surrogate-Assisted Multi-Criteria Optimisation
• Intersection of
  – Multi-criteria optimisation
  – Surrogate-assisted optimisation
• MCO – Multi-Criteria Optimisation
• EMO – Evolutionary Multi-objective Optimisation
• EMOA – Evolutionary Multi-objective Optimisation Algorithm
• MOEA – Multi-Objective Evolutionary Algorithm
Basics of Multi-criteria optimisation
• Multiple objective functions are considered
• Minimize f : Rⁿ → Rᵐ, f(x) = (f₁(x), . . . , fₘ(x))

Pareto Dominance
• Solution x dominates solution y:

  x ≺ y :⟺ ∀i : fᵢ(x) ≤ fᵢ(y) (i = 1, . . . , m) and ∃j : fⱼ(x) < fⱼ(y)

• Pareto set: the set of all non-dominated solutions in the search space, {x | ∄z : z ≺ x}
• Pareto front: the image of the Pareto set in objective space
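These definitions translate directly into code; the following is a small illustrative sketch for minimisation, with tuples as objective vectors and invented example points:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(pareto_front(pts))   # → [(1, 5), (2, 2), (4, 1)]
```

Here (3, 3) and (5, 5) are dominated by (2, 2); the remaining three points are mutually non-dominated.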
SAMCO
• Available budget: 100 to 10,000 evaluations
• Different strategies
  – Stochastic variation of EAs assisted (e.g., filtering solutions)
  – Completely replaced (e.g., optimizing a figure of merit)
• Many algorithms have already been developed
• However
  – Very heterogeneous research fields
  – Different sciences / faculties involved: engineering, statistics, computer science, mathematics, aeronautics, agriculture
• Thus: different backgrounds, and also different languages to be considered
SAMCO
• Application driven
  – Proposed algorithms are mainly tested on the respective application tasks
  – Comparison of different approaches is hard to accomplish
  – Accepted benchmarks are lacking
• Theoretical aspects almost neglected due to the focus on practical applications
• Methodological research areas
  – Choice of the surrogate model
  – Respective figure of merit (or infill criterion)
SAMCO
• Easiest approach: one model per objective
• Kriging and expected improvement are commonly used
[Figures: interval boxes for Kriging approximations in a solution space with two objectives, comparing precise evaluations, mean values of approximations, and lower confidence bounds; and an illustration of the hypervolume measure, showing the dominated and non-dominated parts Vd(P) and Vnd(P) relative to the Pareto front between fmin and fmax.]
Image taken from [Emmerich et al., 2006]
Alternative approaches
ParEGO – Pareto Efficient Global Optimisation [Knowles, 2006]

• Converts the different cost values into a single one
  – Using a parameterized scalarizing weight vector and the augmented Tchebycheff function
  – A different weight vector is used at each iteration
  – The weight vector is drawn uniformly at random
  – Allows for gradually building an approximation to the whole Pareto front
• Learns a Gaussian process model of the search landscape
  – The scalar cost of all previously visited solutions is computed
  – A DACE model of the landscape is constructed by maximum likelihood
  – The solution that maximizes the expected improvement becomes the next point
  – Evaluation on the real, expensive cost function
  – Update after every function evaluation
• Ensures that weakly dominated solutions are rewarded less than Pareto optimal ones
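The augmented Tchebycheff scalarization behind this can be sketched as follows; the function name, weight vector, and test values are illustrative, and rho is the small augmentation factor that makes weakly dominated points score worse:

```python
def augmented_tchebycheff(f, weights, rho=0.05):
    """Scalarize an objective vector f (minimisation) with weight vector
    `weights`: the weighted max term plus a small weighted sum term."""
    weighted = [w * fi for w, fi in zip(weights, f)]
    return max(weighted) + rho * sum(weighted)

# Two points identical in the max term, but the first is weakly better overall,
# so it receives the lower (better) scalar cost:
print(augmented_tchebycheff([4.0, 1.0], [0.5, 0.5]))
print(augmented_tchebycheff([4.0, 2.0], [0.5, 0.5]))
```

Without the rho term, both points would get the same scalar cost despite one weakly dominating the other.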
Alternative approaches
RASM – Rank-based aggregated surrogate models [Loshchilov et al., 2010]

• Again a mono-surrogate approach
• A single surrogate model reflects Pareto dominance in an EMO framework
• Locally approximates the Pareto dominance relation
  – Ranking neighboring points within the objective space
  – Offspring filter estimating whether they improve on their parents in terms of approximated Pareto dominance
  – Used for offspring generation in a standard EMOA
• Models Pareto dominance within the rank-SVM framework
Existing libraries and approaches
• Many libraries already exist, e.g.
  - mlrMBO
  - DiceKriging
  - SUMO
  - parEGO
  - GPareto
  - SPOT
  - Shark
  - QstatLab
• Overview on the SAMCO homepage:
  http://samco.gforge.inria.fr/doku.php?id=surr_mco
• However: an up-to-date overview is missing that would
  – List the algorithms contained
  – Compare strengths and weaknesses
Overview
• Motivation
• Requirements
• Concepts and methods
• Typical problems in application
• Multi-criteria optimisation
• Combinatorial optimisation
• Open Issues / Research perspectives / Fields of Interest
Discrete / combinatorial / structured search spaces
• Surrogate-assisted optimisation is well established in expensive, continuous optimization

What about combinatorial / discrete optimization problems?

• Let's get an overview
Survey: combinatorial surrogates

Mixed variables
model                  | optimizer | cost               | budget           | dimension | remarks / topics                               | reference
RBFN                   | ES        | cheap / ~expensive | 560 / 280        | 15 / 23   | benchmark / real-world: medical image analysis | Li et al. [2008]
Random Forest, Kriging | NSGA2     | ~expensive         | -                | 4-76      | algorithm tuning                               | Hutter et al. [2010]
RBFN + cluster         | GA        | cheap              | 2,000            | 12        | benchmark, real-world: chemical industry       | Bajer and Holeňa [2010]
RBFN + GLM             | GA        | cheap              | several thousand | 4-13      | benchmark, real-world: chemical industry       | Bajer and Holeňa [2013]
SVR                    | NSGA2     | ?                  | 2,000            | 10        | finite element, multi-criteria                 | Herrera et al. [2014]

Binary strings
model   | optimizer | cost      | budget     | dimension | remarks / topics                    | reference
ANN     | SA        | expensive | ?          | 16        | real-world, pump positioning        | Rao and Manju [2007]
RBFN    | GA        | cheap     | dimension² | 10-25     | NK-landscape                        | Moraglio and Kattan [2011a]
RBFN    | GA        | expensive | 100        | 10-40     | benchmark, package deal negotiation | Fatima and Kattan [2011]
Kriging | GA        | cheap     | dimension² | 10-25     | NK-landscape                        | Zaefferer et al. [2014b]
Survey: combinatorial surrogates

Permutations
model   | optimizer       | cost               | budget    | dimension | remarks / topics                                  | reference
custom  | brute force     | expensive          | 28        | 6         | signed permutation, real-world: weld sequence     | Voutchkov et al. [2005]
RBFN    | GA              | cheap              | 100       | 30-32     | benchmark                                         | Moraglio et al. [2011]
Kriging | GA              | cheap              | 100       | 12-32     | benchmark                                         | Zaefferer et al. [2014b]
Kriging | GA              | cheap              | 200       | 10-50     | distance selection                                | Zaefferer et al. [2014a]
Kriging | ACO             | cheap              | 100-1,000 | 50-100    | benchmark, tuning                                 | Pérez Cáceres et al. [2015]
RBFN    | GA*             | instance dependent | 1,000     | 50-1,928  | numerical stability, real-world: cell suppression | Smith et al. [2016]
Kriging | brute force, GA | cheap              | 100       | 5-10      | kernel definiteness                               | Zaefferer and Bartz-Beielstein [2016]

*Different integration: the GA produces random solutions, which are filtered by the model in each iteration
Survey: combinatorial surrogates

Trees

model | optimizer | cost | budget | remarks / topics | reference
RBFN | GA | cheap | 100 | symbolic regression | Moraglio and Kattan [2011b]
kNN | GA | expensive | 30,000 | phenotypic similarity, genetic programming | Hildebrandt and Branke [2014]
RBFN* | GA | cheap | 100 | symbolic regression, parity | Kattan and Ong [2015]
Random Forest | GA | cheap | 15,000 | benchmark, genetic programming | Pilát and Neruda [2016]

*two models: semantic and fitness

Other

model | optimizer | cost | budget | dimension | remarks / topics | reference
k-NN | GA | rather cheap | 20,000-200,000 | 161-259 | real-valued + structure, real-world: protein structure | Custódio et al. [2010]
Kriging | GA | expensive | few hundreds | - | graph-based, real-world: protein structure | Romero et al. [2013]
ANN | DE | cheap | several hundreds | 40-500 | assignment problem, dynamic | Hao et al. [2016]
Summary: Strategies
• Strategies of dealing with discrete / combinatorial searchspaces
– Inherently discrete models (e.g., regression trees)
• simple, but may not be efficient/feasible for every representation
– Dummy variables• Only for linear regression, vector-based
– Feature based• Extract real-valued features from genotype / phenotype• Requires good features
– (Dis)similarity measure based (distance, kernel)• Requires good measure
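The dummy-variable strategy above can be made concrete in a few lines. As a hedged illustration (the function names and category levels below are hypothetical, not from the slides), a one-hot encoding turns a categorical genotype into a binary vector so that vector-based models such as linear regression become applicable:

```python
def one_hot(value, levels):
    """Encode a single categorical value as a 0/1 indicator vector."""
    return [1 if value == level else 0 for level in levels]

def encode_genotype(genotype, level_sets):
    """Concatenate indicator vectors, one per categorical variable."""
    vec = []
    for value, levels in zip(genotype, level_sets):
        vec.extend(one_hot(value, levels))
    return vec

levels = [["a", "b", "c"], ["low", "high"]]
x = encode_genotype(["b", "high"], levels)
# x == [0, 1, 0, 0, 1]
```

The encoded vectors can then be fed to any vector-based surrogate; the known limitation noted above (works naturally with linear models, inflates dimensionality) still applies.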
Summary: Types of Models
• As varied as in the continuous case:
– Custom, application-specific models (expert knowledge, physics)
– Artificial Neural Networks (ANN)
– Markov Random Fields [Allmendinger et al., 2015]
– Random Forest (Integer, Mixed Integer Problems)
– (Probabilistic models - in Estimation of DistributionAlgorithms)
– (Pheromone trails - in Ant Colony Optimization)
– "Classical" kernel-based (similarity-based) models:
• k-Nearest Neighbour (k-NN)• Radial Basis Function Networks (RBFN)• Support Vector Regression (SVR)• Kriging (Gaussian Processes)
[Figure: word cloud of model types: linear regression, Markov random fields, Kriging, support vector machines, neural networks, k-NN, RBFNs, random forest, custom, probabilistic models]
Why kernel based approach, Kriging?
• Conceptually simple:
– Replace kernel or distance function
– e.g., with Gaussian kernel and arbitrary distance: k(x, x′) = exp(−θ d(x, x′))
• Transfer of popular method from continuous domain– Powerful predictor
– Elegant parameter fitting (maximum likelihood estimation)
– Uncertainty estimate, Expected Improvement
→ Efficient Global Optimization (EGO) [Jones et al., 1998b]
• Note:– None of these features exclusive to Kriging– Closely related to other model types
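The kernel substitution above is easy to sketch. The following Python fragment is illustrative (not from the slides; the value of theta is an arbitrary example — in Kriging it would be fitted by maximum likelihood), combining the Gaussian-type kernel with a Hamming distance on permutations:

```python
import math

def hamming(p, q):
    """Number of positions at which two permutations differ."""
    return sum(a != b for a, b in zip(p, q))

def kernel(p, q, theta=0.1):
    """Gaussian-type kernel over an arbitrary distance: k = exp(-theta * d)."""
    return math.exp(-theta * hamming(p, q))

p, q = (0, 1, 2, 3), (1, 0, 2, 3)
assert kernel(p, p) == 1.0      # identical permutations: maximal similarity
assert 0 < kernel(p, q) < 1.0   # similarity decays with distance
```

Any other distance (swap, Levenshtein, ...) can be dropped in for `hamming`, which is exactly why this construction transfers Kriging to combinatorial spaces.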
Combinatorial Surrogates: Research Questions
• Which kernel/distance works best and why?
• How to choose a suitable kernel/distance?
• Or else, combine?
• Genotypic vs phenotypic distances? [Hildebrandt and Branke, 2014]
• Definiteness?
• Dimensionality issues? Dimensionality reduction? See e.g., the very highdimensional problems in [Smith et al., 2016]
• Comparison of model types?1
• And again: benchmarking / testing?
1If you want to compare your approach to our methods: R package for Combinatorial Efficient Global Optimization (CEGO) - https://cran.r-project.org/package=CEGO.
Research Question: Choosing a Distance / Kernel [Zaefferer et al., 2014a]

• Choice crucial for success
• Use prior knowledge (if available?)
• Cross-validation
• Fitness Distance Correlation (FDC) (potentially misleading)

[Figure: FDC values of 14 distance measures (Hamming, Swap, Interchange, Levenshtein, R, Position, Squared position, Adjacency, LCSeq, LCStr, Euclidean, Manhattan, Chebyshev, Lee) across 18 benchmark instances; note: larger FDC values are better]
• Maximum Likelihood Estimation (MLE) (seems to work well)
[Figure: performance boxplots on instance reC19 for the different distance measures; All (MLE), GA (model-free), Posq (squared position distance); note: smaller performance values are better]
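The FDC diagnostic mentioned above can be sketched as the Pearson correlation between fitness values and the distances of the corresponding solutions to the best known solution (an illustrative implementation, not the authors' code):

```python
import math

def fdc(fitness, dist_to_best):
    """Fitness distance correlation: Pearson correlation between fitness
    values and each solution's distance to the best known solution."""
    n = len(fitness)
    mf = sum(fitness) / n
    md = sum(dist_to_best) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitness, dist_to_best))
    sf = math.sqrt(sum((f - mf) ** 2 for f in fitness))
    sd = math.sqrt(sum((d - md) ** 2 for d in dist_to_best))
    return cov / (sf * sd)

# For minimization, fitness rising with distance to the optimum gives FDC near 1:
assert abs(fdc([1, 2, 3, 4], [1, 2, 3, 4]) - 1.0) < 1e-9
```

For each candidate distance measure one computes the FDC on sampled solutions and prefers measures with larger values; as the slide warns, this can be misleading.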
Research Question: Definiteness [Zaefferer and Bartz-Beielstein, 2016]

So we can just replace the distance or kernel function with something appropriate and everything is fine, right?
Common requirement for kernels (distances): definiteness²

• Definiteness may be unknown / lacking
• Designing definite kernels may be hard / infeasible
• Required: correction procedure
• Some results from the SVM field [Ong et al., 2004; Chen et al., 2009; Loosli et al., 2015]; survey: [Schleif and Tino, 2015]
• Can be transferred to Kriging, with some tweaks

²Positive semi-definite kernel matrix: all eigenvalues are positive or zero
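The definiteness check and one common correction idea (spectrum clipping: set negative eigenvalues to zero and rebuild the matrix) can be sketched as below. This is an illustrative fragment assuming NumPy is available, not the specific procedure of Zaefferer and Bartz-Beielstein [2016]:

```python
import numpy as np

def is_psd(K, tol=1e-10):
    """A symmetric kernel matrix is positive semi-definite iff all its
    eigenvalues are >= 0 (up to numerical tolerance)."""
    return bool(np.all(np.linalg.eigvalsh(K) >= -tol))

def clip_spectrum(K):
    """Correction: clip negative eigenvalues to zero and rebuild K."""
    w, V = np.linalg.eigh(K)
    return (V * np.clip(w, 0.0, None)) @ V.T

K = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.9],
              [0.2, 0.9, 1.0]])   # indefinite: one negative eigenvalue
K_fixed = clip_spectrum(K)
assert not is_psd(K) and is_psd(K_fixed)
```

Such corrections change the model's view of the data, which is one reason the transfer to Kriging needs the "tweaks" mentioned above.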
Overview
• Motivation
• Requirements
• Concepts and methods
• Typical problems in application
• Multi-criteria optimisation
• Combinatorial optimisation
• Open Issues / Research perspectives / Fields of Interest
Overview
• Motivation
• Concepts and methods
• Practical approach: instructive application
• Typical problems in application
• Open Issues / Research perspectives / Fields of Interest– Multi-criteria optimization– Combinatorial optimization
• Discussion– Typical problems and their solutions
Open Issues
• Research perspectives• Fields of Interest
– Multi-objective:SAMCO - Surrogate Assisted Multi-Criteria Optimisation
– Combinatorial surrogate models (optimisation)
– ... both handled in more detail later!
Open Issues
• Meaningful benchmarking and testing of algorithms• Noise handling• Complex resource limitations• High-dimensional / large scale data• Constraint handling• Aggregation: Model ensembles, Multi-fidelity models• Dynamic optimization problems
Open Issues
• Meaningful benchmarking and testing of algorithms– Some benchmark sets available– (Almost?) not considered for evaluation– No standard implemented– Depending on people who apply?
• Noise handling– Surrogates considered for noisy problems– What about noise in models?
• Complex resource limitations
– Resources like computation times may not be available constantly
– Server availability, different calculation times per job . . .
– Problem handled separately
– Integration of resource handling into the algorithm needed
Open Issues
• High-dimensional / large scale data– Models may fail / not be applicable– New models might need to be considered– New integration schemes needed as well?
• Constraint handling
– Different scenarios possible
– Most common: infeasible offspring of a feasible ancestor
• Easy strategy: just omit . . . optimal?– Constraints to be considered by models as well?– Integration in algorithms?– Optimal strategy?
Open Issues
• Aggregation: Model ensembles, Multi-fidelity models– Which model in which situation?
- Again depending on many parameters- Some results available . . .
– How to aggregate ensembles best?– Setting may vary over time . . .
• Dynamic optimization problems– In general: time-varying fitness function– Surrogates used for forecasting, predicting future values– Other settings possible . . . see above
SAMCO Promising research areas
• Multiple objectives with different response surfaces
+ specific requirements of set- and indicator-based optimization
• New variants of models• New infill criteria• Approaches beyond one model per objective function
– Model dominance relations– Model performance indicator landscapes
• Ensembles of Surrogates
– Multiple surrogates simultaneously or successively
- To improve the overall quality of prediction of each objective
- Model evolves over time from a coarse-grained to a finer one
- Different parts of the search space with significantly different behavior
SAMCO Promising research areas
• Collect existing approaches and libraries
• Benchmarking of surrogate-assisted optimizers lacks rigour
– Review of common test functions (academic vs. real-world)
– Understand weaknesses and strengths of each algorithm
– Algorithm recommendations for practice
– Overview on SAMCO homepage:http://samco.gforge.inria.fr/doku.php?id=benchmarking
That’s all Folks. Thanks for hanging on.
• Any questions?
• Discussion:
– Problems you encountered in practice?
– New directions, challenges?
– What is missing in the field?
– Interesting applications?
Allmendinger, R., Coello, C. A. C., Emmerich, M. T. M., Hakanen, J., Jin, Y.,and Rigoni, E. (2015). Surrogate-assisted multicriteria optimization (wg6). InGreco, S., Klamroth, K., Knowles, J. D., and Rudolph, G., editors,Understanding Complexity in Multiobjective Optimization (Dagstuhl Seminar15031) - Dagstuhl Reports, volume 5, pages 96–163, Dagstuhl, Germany.Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
Bajer, L. and Holeňa, M. (2010). Surrogate model for continuous and discrete genetic optimization based on RBF networks. In Intelligent Data Engineering and Automated Learning – IDEAL 2010, volume 6283 LNCS, pages 251–258.
Bajer, L. and Holeňa, M. (2013). Surrogate model for mixed-variables evolutionary optimization based on GLM and RBF networks. In SOFSEM 2013: Theory and Practice of Computer Science, volume 7741 LNCS, pages 481–490.
Breiman, L. (2001). Random forests. Machine learning, 45(1):5–32.Breiman, L., Friedman, J., Stone, C. J., and Olshen, R. A. (1984). Classification
and regression trees. CRC press.Chen, Y., Gupta, M. R., and Recht, B. (2009). Learning kernels from indefinite
similarities. In Proceedings of the 26th Annual International Conference onMachine Learning, ICML ’09, pages 145–152, New York, NY, USA. ACM.
Custódio, F. L., Barbosa, H. J., and Dardenne, L. E. (2010). Full-atom ab initioprotein structure prediction with a genetic algorithm using a similarity-basedsurrogate model. In Proceedings of the Congress on Evolutionary Computation(CEC’10), pages 1–8, New York, NY, USA. IEEE.
Deng, L. and Yu, D. (2014). Deep learning: methods and applications.Foundations and Trends in Signal Processing, 7(3–4):197–387.
Emmerich, M. T. M., Giannakoglou, K. C., and Naujoks, B. (2006). Single- andmultiobjective evolutionary optimization assisted by gaussian random fieldmetamodels. IEEE Transactions on Evolutionary Computation, 10(4):421–439.
Fatima, S. and Kattan, A. (2011). Evolving optimal agendas for package dealnegotiation. In Proceedings of the 13th Annual Conference on Genetic andEvolutionary Computation, GECCO ’11, pages 505–512, New York, NY, USA.ACM.
Flasch, O., Mersmann, O., and Bartz-Beielstein, T. (2010). Rgp: An open sourcegenetic programming system for the r environment. In Proceedings of the 12thAnnual Conference Companion on Genetic and Evolutionary Computation,GECCO ’10, pages 2071–2072, New York, NY, USA. ACM.
Forrester, A., Sobester, A., and Keane, A. (2008). Engineering Design viaSurrogate Modelling. Wiley.
Forrester, A. I. and Keane, A. J. (2009). Recent advances in surrogate-basedoptimization. Progress in Aerospace Sciences, 45(1):50–79.
Hao, J., Liu, M., Lin, J., and Wu, C. (2016). A hybrid differential evolution approach based on surrogate modelling for scheduling bottleneck stages. Computers & Operations Research, 66:215–224.
Haykin, S. (2004). Neural Networks: A Comprehensive Foundation.
Herrera, M., Guglielmetti, A., Xiao, M., and Filomeno Coelho, R. (2014).
Metamodel-assisted optimization based on multiple kernel regression for mixedvariables. Structural and Multidisciplinary Optimization, 49(6):979–991.
Hildebrandt, T. and Branke, J. (2014). On using surrogates with geneticprogramming. Evolutionary Computation, pages 1–25.
Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm fordeep belief nets. Neural computation, 18(7):1527–1554.
Hornik, K., Stinchcombe, M., and White, H. (1989). Multilayer feedforwardnetworks are universal approximators. Neural networks, 2(5):359–366.
Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2010). Sequential model-based optimization for general algorithm configuration (extended version). Technical Report TR-2010-10, University of British Columbia, Department of Computer Science. Available online: http://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf.
Jin, Y. (2011). Surrogate-assisted evolutionary computation: Recent advancesand future challenges. Swarm and Evolutionary Computation, 1(2):61–70.
Jones, D. R. (2001). A taxonomy of global optimization methods based onresponse surfaces. Journal of global optimization, 21(4):345–383.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998a). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998b). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492.
Kattan, A. and Ong, Y.-S. (2015). Surrogate genetic programming: A semanticaware evolutionary search. Information Sciences, 296:345–359.
Knowles, J. (2006). Parego: A hybrid algorithm with on-line landscapeapproximation for expensive multiobjective optimization problems. IEEETransactions on Evolutionary Computation, 10(1):50–66.
Li, R., Emmerich, M. T. M., Eggermont, J., Bovenkamp, E. G. P., Bäck, T.,Dijkstra, J., and Reiber, J. (2008). Metamodel-assisted mixed integer evolutionstrategies and their application to intravascular ultrasound image analysis. InProceedings of the Congress on Evolutionary Computation (CEC’08), pages2764–2771, New York, NY, USA. IEEE.
Loosli, G., Canu, S., and Ong, C. (2015). Learning svm in krein spaces. IEEETransactions on Pattern Analysis and Machine Intelligence, 38(6):1204–1216.
Loshchilov, I., Schoenauer, M., and Sebag, M. (2010). Dominance-Based Pareto-Surrogate for Multi-Objective Optimization. In Simulated Evolution and Learning (SEAL 2010), volume 6457 of LNCS, pages 230–239. Springer.
Moraglio, A. and Kattan, A. (2011a). Geometric generalisation of surrogate modelbased optimisation to combinatorial spaces. In Proceedings of the 11thEuropean Conference on Evolutionary Computation in CombinatorialOptimization, EvoCOP’11, pages 142–154, Berlin, Heidelberg, Germany.Springer.
Moraglio, A. and Kattan, A. (2011b). Geometric surrogate model basedoptimisation for genetic programming: Initial experiments. Technical report,University of Birmingham.
Moraglio, A., Kim, Y.-H., and Yoon, Y. (2011). Geometric surrogate-basedoptimisation for permutation-based problems. In Proceedings of the 13thAnnual Conference Companion on Genetic and Evolutionary Computation,GECCO ’11, pages 133–134, New York, NY, USA. ACM.
Naujoks, B., Steden, M., Muller, S. B., and Hundemer, J. (2007). Evolutionaryoptimization of ship propulsion systems. In 2007 IEEE Congress on EvolutionaryComputation, pages 2809–2816.
Ong, C. S., Mary, X., Canu, S., and Smola, A. J. (2004). Learning withnon-positive kernels. In Proceedings of the Twenty-first International Conferenceon Machine Learning, ICML ’04, pages 81–88, New York, NY, USA. ACM.
Pilát, M. and Neruda, R. (2016). Feature extraction for surrogate models ingenetic programming. In Parallel Problem Solving from Nature – PPSN XIV,pages 335–344. Springer Nature.
Pérez Cáceres, L., López-Ibáñez, M., and Stützle, T. (2015). Ant colonyoptimization on a limited budget of evaluations. Swarm Intelligence, pages1–22.
Queipo, N. V., Haftka, R. T., Shyy, W., Goel, T., Vaidyanathan, R., and Tucker,P. K. (2005). Surrogate-based analysis and optimization. Progress in aerospacesciences, 41(1):1–28.
Rao, S. V. N. and Manju, S. (2007). Optimal pumping locations of skimmingwells. Hydrological Sciences Journal, 52(2):352–361.
Romero, P. A., Krause, A., and Arnold, F. H. (2013). Navigating the proteinfitness landscape with Gaussian processes. Proceedings of the NationalAcademy of Sciences, 110(3):E193–E201.
Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989). Design andanalysis of computer experiments. Statistical science, pages 409–423.
Schleif, F.-M. and Tino, P. (2015). Indefinite proximity learning: A review. NeuralComputation, 27(10):2039–2096.
Smith, J., Stone, C., and Serpell, M. (2016). Exploiting diverse distance metricsfor surrogate-based optimisation of ordering problems: A case study. InProceedings of the 2016 on Genetic and Evolutionary Computation Conference,GECCO ’16, pages 701–708, New York, NY, USA. ACM.
Voutchkov, I., Keane, A., Bhaskar, A., and Olsen, T. M. (2005). Weld sequenceoptimization: The use of surrogate models for solving sequential combinatorialproblems. Computer Methods in Applied Mechanics and Engineering,194(30-33):3535–3551.
Zaefferer, M. and Bartz-Beielstein, T. (2016). Efficient global optimization with indefinite kernels. In Parallel Problem Solving from Nature – PPSN XIV, pages 69–79. Springer.
Zaefferer, M., Stork, J., and Bartz-Beielstein, T. (2014a). Distance measures for permutations in combinatorial efficient global optimization. In Bartz-Beielstein, T., Branke, J., Filipic, B., and Smith, J., editors, Parallel Problem Solving from Nature – PPSN XIII, pages 373–383, Cham, Switzerland. Springer.
Zaefferer, M., Stork, J., Friese, M., Fischbach, A., Naujoks, B., and Bartz-Beielstein, T. (2014b). Efficient global optimization for combinatorial problems. In Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, GECCO ’14, pages 871–878, New York, NY, USA. ACM.
17 May 2016, Ljubljana
Industrial applications of model-based simulation and optimisation
Thomas Bartz-Beielstein, Jörg Stork, Martin Zaefferer
Our Vision

• Scientific solutions to problems in industrial optimization and process analytics
• Individual task
• We are
– passionate about technology, science, and critical reasoning
– excited to promote modern data analysis and optimization technology into industry
– proud that every success in our consulting projects brings a strong new relationship with industrial partners
• Our research motivates our students
Connections

• Close cooperation between industry and academia
• Well connected to leading research groups in the world
• Focused on solving engineering problems in
– energy,
– water,
– steel,
– automotive and
– plastic industry
• We work independently, i.e., we do not sell one specific product or method
• Customers get the best from several worlds
How we work

• We do not claim to deliver a perfect solution in a single iteration, but measurable improvements in each step
• We rely on interaction and feedback
• Cutting-edge research results
• Exploring new scientific frontiers
• Excellence in teaching, data sciences, optimization, and statistical analysis is standard
Who we are

• Team of passionate data detectives thrilled by knowledge discovery
• Experience from different projects; we form a unique team, combining complementary skills
• All different, with different backgrounds, having historically worked in different domains, but we all get a kick from having a problem solved
The SPOTSeven Process Model

• SPOTSeven defines a process model
• Extension and adaptation of the well-known Six Sigma procedure
• SPOTSeven leaves room for individual procedures
• Enables the transfer process (theory to practice)
1 Define

• Customer and business requirements specified from a high-level point of view
• Current situation described
• Necessary steps defined
• Critical parameters and statistics identified
• Project team with clearly specified responsibilities
• Result: tentative project charter
• Developed during a workshop
2 Data Acquisition and Measurement

• Selection of suitable methods, instruments, and processes to collect data
• This step also comprises a systematic gathering of measurement units, frequencies, and accuracies
• Important input/output relations defined
• Based on the current value: aimed value specified
3 Modeling and Analysis

• Functional relationships between input/output parameters
• High-quality open source software such as R, as well as commercial products such as SAS/JMP, Minitab
• Process and data analysis
– Process analysis visualizes qualitative relationships, e.g., by integrating cause-effect diagrams
– Data analysis is based on mathematical and statistical methods and relies on quantifiable measurement values
• Statistical tests
• In addition: interactive visualizations
4 Optimization

• Goal:
– understandability
– simplicity
– interpretability
– robustness
• Tools which allow an individual selection between exactness and robustness of the solution
• Communication with the practitioners
5 Integration and Deployment

• Implementation in the real-world system
• Verification of the process improvement on the real system
• New settings discussed with
– project leaders,
– technical experts, and
– users to reach a high acceptance rate
6 Control

• Statistical quality control
• Documented solutions delivered
• Workshops and training courses:
– to discuss results and
– to train practitioners
7 Meta Evaluation

• SPOTSeven itself contains a continuous improvement process
• Feedback from the customers
• Our passion:
– to deliver the best results
– to learn from the process
– to teach students
Applying Design of Experiments for VOSS Automotive

VOSS Automotive
• Severe quality issues from an injection molding process
• Gasket ring
• Several million parts were delivered to automobile manufacturers every year
VOSS Automotive

• Design of Experiments (DoE)
• Here: small data
• Project duration: 20 weeks
• One student worked full time
• Weekly meetings:
– project leaders, technical experts and users
• Analysis and DoE based on open source software
VOSS Automotive

• Techniques:
– Experimental design with two stages
– Cause-effect diagrams
– Screening
– Optimization
– Simple linear regression models and tree-based models
– Regression trees: easy to understand
– Fractional factorial designs
– Response surface methodology to visualize results
• Behavior of the system: explained with three parameters only
VOSS Automotive

• Integration and deployment:
– Not the best configuration was selected, to keep deterioration low
– Settings with reduced pressure and velocity were used instead
• Experiments performed with the settings found on the model
• Control phase: confirmed that the process is stable
Development of a Multivariate Modelling and Online Adaptive Optimization for In-situ Measurements using an SO2 Sensor

Grant reference: KF3145101WM3; total funding: 168,573.00 €
Funding: "Zentrales Innovationsprogramm Mittelstand (ZIM)" - Kooperationen; Projektform: Kooperationsprojekt (KF)
Project partner: ENOTEC GmbH

Goal:
• The development of a multivariate modelling method for in-situ measurements
• Freely parameterized and interpretable model; high robustness of the results
• Develop an optimization process which adapts the model continuously to the changing conditions in the measured gases
IMProvT: Intelligente Messverfahren zur Prozessoptimierung von Trinkwasserbereitstellung und Verteilung (intelligent measurement methods for the process optimization of drinking water supply and distribution)

Started Dec. 2015, runs until Dec. 2018; total funding: 590,445.00 €
Project partners: TH Köln (GECO-C, SPOTSeven), DVGW-Technologiezentrum Wasser, Endress+Hauser, Thüringer Fernwasser, Landeswasserversorgung Stuttgart, Wasserversorgung Kleine Kinzig, IWW Zentrum Wasser, Aggerverband

Goal: development of solutions and tools for energy-optimized drinking water production and distribution
• Optimization of sensor usage (positioning, reliability, drift detection)
• Application of machine learning methods for water quality monitoring
• Evaluation of energy saving potential for intelligent control systems
Surrogate-models for Combinatorial Search Spaces
Martin Zae�erer
17.05.2016
Martin Zae�erer 17.05.2016 1 / 10
Combinatorial Surrogate-models Motivation
Motivation: Combinatorial Surrogate-models
• Well established in expensive, continuous optimization, e.g.,
[Figure: the surrogate-model cycle: build surrogate model, optimize surrogate model, evaluate]
What about expensive, combinatorial optimization problems?
• Example applications:• Engineering: weld path optimization [Voutchkov et al., 2005],
twin-screw configuration [Teixeira et al., 2012]• Bioinformatics: protein sequence optimization [Romero et al., 2013]• Computer science: algorithm tuning/configuration [Hutter, 2009]
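The build/optimize/evaluate cycle above can be sketched end-to-end. The toy objective, the 1-nearest-neighbour surrogate, and all parameter values below are hypothetical stand-ins for illustration only:

```python
import random

def expensive_f(perm):
    """Toy stand-in for an expensive objective: deviation from identity."""
    return sum(abs(i - p) for i, p in enumerate(perm))

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

def surrogate(perm, archive):
    """1-nearest-neighbour surrogate built from evaluated solutions."""
    return min(archive, key=lambda xy: hamming(perm, xy[0]))[1]

random.seed(1)
n, budget = 8, 20
archive = []
for _ in range(5):                        # initial design
    p = tuple(random.sample(range(n), n))
    archive.append((p, expensive_f(p)))   # expensive evaluation

while len(archive) < budget:
    # "optimize" the surrogate: screen many cheap candidates on the model
    cands = [tuple(random.sample(range(n), n)) for _ in range(200)]
    best = min(cands, key=lambda p: surrogate(p, archive))
    archive.append((best, expensive_f(best)))  # evaluate only the winner

print(min(y for _, y in archive))  # best objective value found
```

In a serious implementation the surrogate would be an RBFN or Kriging model and the inner search an evolutionary algorithm, but the control flow is the same.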
Combinatorial Surrogate-models Types of Models
Types of Models
• Application specific models (expert knowledge, physics),e.g., [Voutchkov et al., 2005]
• Neural Networks
• Bayesian Networks
• Markov Random Fields [Allmendinger et al., 2015]
• Random Forest (Integer, Mixed Integer Problems) [Hutter, 2009]
• ...
• "Classical" kernel-based models:
• Radial Basis Function Networks (RBFN)[Li et al., 2008; Moraglio and Kattan, 2011]
• Support Vector Machines (SVM)• Kriging (Gaussian Processes)
[Hutter, 2009; Zaefferer et al., 2014b]
• e.g., with Gaussian kernel and arbitrary distance: k(x, x′) = exp(−θ d(x, x′))
Combinatorial Surrogate-models Types of Models
Focus
• Kernel based methods, especially Kriging
• Powerful predictor
• Elegant parameter fitting (maximum likelihood estimation)
• Uncertainty estimate, Expected Improvement1
→ Efficient Global Optimization (EGO) [Jones et al., 1998]

1Other methods do also provide an uncertainty estimate, e.g., RBFN [Sóbester et al., 2005]
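The Expected Improvement criterion used in EGO can be sketched for minimization as EI = (f_best − μ)Φ(z) + σφ(z) with z = (f_best − μ)/σ, where μ and σ are the model's prediction and uncertainty estimate at a candidate; a small illustrative implementation:

```python
from statistics import NormalDist

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: expected amount by which a candidate with
    predicted mean mu and uncertainty sigma improves on f_best."""
    if sigma <= 0:                       # no uncertainty: plain improvement
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    nd = NormalDist()
    return (f_best - mu) * nd.cdf(z) + sigma * nd.pdf(z)

# Larger uncertainty makes unexplored regions more attractive:
assert expected_improvement(1.0, 2.0, 0.5) > expected_improvement(1.0, 0.5, 0.5)
```

This is exactly the property that lets EGO balance exploitation (low μ) against exploration (high σ).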
Combinatorial Surrogate-models First Results
Does it work at all?
• Positive results [Zaefferer et al., 2014b,a]
– Genetic Algorithm (+Kriging)
– Inexpensive test functions (permutation problems)
– Rather low-dimensional: d ∈ [10, ..., 50]
• Negative results [Pérez Cáceres et al., 2015]
– Ant Colony Optimization (+Kriging)
– Inexpensive test functions (permutation problems)
– Rather high-dimensional: d ∈ [50, ..., 100]

d: number of elements in the permutations
Combinatorial Surrogate-models Choosing a Distance
Choosing a distance
• Choice of distance measure crucial for success
• Use prior knowledge (if available?)
• Cross-validation
• Fitness Distance Correlation (FDC) (potentially misleading)

[Figure: FDC values of 14 distance measures across 18 benchmark instances; larger FDC values are better]
• MLE seems to work well (for Kriging)

[Figure: performance boxplots on instance reC19; All (MLE), GA (model-free), Posq (squared position distance); smaller performance values are better]
Combinatorial Surrogate-models Outlook
Research Questions
• Which kernel/distance works best and why?
• How to choose a suitable kernel/distance?
• Or else, combine? (ensembles)
• Definiteness?
• Genotypic vs phenotypic distances? [Hildebrandt and Branke, 2014]
• Dimensionality issues? Dimensionality reduction?
• Comparison to other model types?
• Visualization?
Bibliography
Allmendinger, R., Coello, C. A. C., Emmerich, M. T. M., Hakanen, J., Jin, Y.,and Rigoni, E. (2015). Surrogate-assisted multicriteria optimization (wg6). InGreco, S., Klamroth, K., Knowles, J. D., and Rudolph, G., editors,Understanding Complexity in Multiobjective Optimization (Dagstuhl Seminar15031) - Dagstuhl Reports, volume 5, pages 96–163, Dagstuhl, Germany.Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
Hildebrandt, T. and Branke, J. (2014). On using surrogates with geneticprogramming. Evolutionary Computation, pages 1–25.
Hutter, F. (2009). Automated configuration of algorithms for solving hardcomputational problems. PhD thesis, University of British Columbia.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998). E�cient globaloptimization of expensive black-box functions. Journal of Global Optimization,13(4):455–492.
Li, R., Emmerich, M. T. M., Eggermont, J., Bovenkamp, E. G. P., Bäck, T.,Dijkstra, J., and Reiber, J. (2008). Metamodel-assisted mixed integer evolutionstrategies and their application to intravascular ultrasound image analysis. InCongress on Evolutionary Computation (CEC’08), Proceedings, pages2764–2771, New York, NY, USA. IEEE.
Bibliography
Moraglio, A. and Kattan, A. (2011). Geometric generalisation of surrogate modelbased optimisation to combinatorial spaces. In Proceedings of the 11thEuropean Conference on Evolutionary Computation in CombinatorialOptimization, EvoCOP’11, pages 142–154, Berlin, Heidelberg, Germany.Springer.
Pérez Cáceres, L., López-Ibáñez, M., and Stützle, T. (2015). Ant colonyoptimization on a limited budget of evaluations. Swarm Intelligence, pages1–22.
Romero, P. A., Krause, A., and Arnold, F. H. (2013). Navigating the protein fitness landscape with Gaussian processes. Proceedings of the National Academy of Sciences, 110(3):E193–E201.
Sóbester, A., Leary, S. J., and Keane, A. J. (2005). On the design of optimizationstrategies based on global response surface approximation models. Journal ofGlobal Optimization, 33(1):31–59.
Teixeira, C., Covas, J. A., Stützle, T., and Gaspar-Cunha, A. (2012).Multi-objective ant colony optimization for the twin-screw configurationproblem. Engineering Optimization, 44(3):351–371.
Bibliography
Voutchkov, I., Keane, A., Bhaskar, A., and Olsen, T. M. (2005). Weld sequenceoptimization: The use of surrogate models for solving sequential combinatorialproblems. Computer Methods in Applied Mechanics and Engineering,194(30-33):3535–3551.
Zaefferer, M., Stork, J., and Bartz-Beielstein, T. (2014a). Distance measures for permutations in combinatorial efficient global optimization. In Bartz-Beielstein, T., Branke, J., Filipic, B., and Smith, J., editors, Parallel Problem Solving from Nature – PPSN XIII, volume 8672 of Lecture Notes in Computer Science, pages 373–383, Cham, Switzerland. Springer.
Zaefferer, M., Stork, J., Friese, M., Fischbach, A., Naujoks, B., and Bartz-Beielstein, T. (2014b). Efficient global optimization for combinatorial problems. In Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, GECCO ’14, pages 871–878, New York, NY, USA. ACM.
Thank you for your attention
A Survey of Model-based Methods for Global Optimization
Thomas Bartz-Beielstein
SPOTSeven Labwww.spotseven.de
TH Köln – Technology, Arts, Sciences
BIOMA
Bartz-Beielstein MBO 1 / 94
Keywords
• Abelson & Sussman
• Wolpert
• Netflix
• Deep Learning
Bartz-Beielstein MBO 2 / 94
Agenda
I Part 1: Basics
I Part 2: Example
I Part 3: Considerations
I Part 4: Example
I Part 5: Conclusion
Bartz-Beielstein MBO 3 / 94
Introduction
Overview
Introduction
Stochastic Search Algorithms
Quality Criteria: How to Select Surrogates
Examples
Ensembles: Considerations
SPO2 Part 2
Stacking: Considerations
Bartz-Beielstein MBO 4 / 94
Introduction
Model-based optimization (MBO)
I Prominent role in today's modeling, simulation, and optimization processes
I Most efficient technique for expensive and time-demanding real-world optimization problems
I In the engineering domain, MBO is an important practice
I Recent advances in
I computer science,
I statistics, and
I engineering,
I in combination with progress in high-performance computing
I Tools for handling problems considered unsolvable only a few decades ago
Bartz-Beielstein MBO 5 / 94
Introduction
Global optimization (GO)
I GO can be categorized based on different criteria
I Properties of problems
I continuous versus combinatorial
I linear versus nonlinear
I convex versus multimodal, etc.
I We present an algorithmic view, i.e., properties of algorithms
I The term GO will be used in this talk for algorithms that try to find and explore globally optimal solutions of complex, multimodal objective functions [Preuss, 2015]
I GO problems are difficult: nearly no structural information (e.g., number of local extrema) available
I GO problems belong to the class of black-box functions, i.e., the analytic form is unknown
I The class of black-box functions also contains functions that are easy to solve, e.g., convex functions
Bartz-Beielstein MBO 6 / 94
Introduction
Problem
I Optimization problem given by
Minimize: f(~x) subject to ~x_l ≤ ~x ≤ ~x_u,

where f : R^n → R is referred to as the objective function and ~x_l and ~x_u denote the lower and upper bounds of the search space (region of interest), respectively
I This setting arises in many real-world systems:
I when the explicit form of the objective function f is not readily available,
I e.g., the user has no access to the source code of a simulator
I We cover stochastic (random) search algorithms; deterministic GO algorithms are not further discussed
I Random and stochastic are used synonymously
Bartz-Beielstein MBO 7 / 94
Introduction
Taxonomy of model-based approaches in GO
[1] Deterministic
[2] Random Search
  [2.1] Instance based
  [2.2] Model based optimization (MBO)
    [2.2.1] Distribution based
    [2.2.2] Surrogate Model Based Optimization (SBO)
      [2.2.2.1] Single surrogate based
      [2.2.2.2] Multi-fidelity based
      [2.2.2.3] Evolutionary surrogate based
      [2.2.2.4] Ensemble surrogate based
Bartz-Beielstein MBO 8 / 94
Stochastic Search Algorithms
Overview
Introduction
Stochastic Search Algorithms
Quality Criteria: How to Select Surrogates
Examples
Ensembles: Considerations
SPO2 Part 2
Stacking: Considerations
Bartz-Beielstein MBO 9 / 94
Stochastic Search Algorithms
Random Search
I Stochastic search algorithm: iterative search algorithm that uses a stochastic procedure to generate the next iterate
I The next iterate can be
I a candidate solution to the GO problem or
I a probabilistic model, from which solutions can be drawn
I Do not depend on any structural information of the objective function such as gradient information or convexity ⇒ robust and easy to implement
I Stochastic search algorithms can further be categorized as
I instance-based or
I model-based algorithms [Zlochin et al., 2004]
Bartz-Beielstein MBO 10 / 94
Stochastic Search Algorithms
[2.1] Instance-based Algorithms
I Instance-based algorithms: use a single solution, ~x, or a population, P(t), of candidate solutions
I Construction of new candidates depends explicitly on previously generated solutions
I Examples: Simulated annealing, evolutionary algorithms
1: t = 0. InitPopulation(P).
2: Evaluate(P).
3: while not TerminationCriterion() do
4:   Generate new candidate solutions P′(t) according to a specified random mechanism.
5:   Update the current population P(t+1) based on population P(t) and candidate solutions in P′(t).
6:   Evaluate(P(t+1)).
7:   t = t + 1.
8: end while
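As a minimal illustration, the instance-based loop above can be sketched in Python; the sphere objective, the bounds, and the Gaussian perturbation mechanism are illustrative assumptions, not part of the original listing:

```python
import numpy as np

def instance_based_search(f, dim, pop_size=20, iters=50, seed=1):
    """Minimal instance-based stochastic search: new candidates are Gaussian
    perturbations of current population members (cf. mutation in an EA)."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5, 5, size=(pop_size, dim))       # InitPopulation(P)
    fP = np.apply_along_axis(f, 1, P)                  # Evaluate(P)
    for _ in range(iters):                             # TerminationCriterion()
        P_new = P + rng.normal(0, 0.5, size=P.shape)   # generate P'(t)
        f_new = np.apply_along_axis(f, 1, P_new)       # Evaluate(P'(t))
        better = f_new < fP                            # update P(t+1) from P(t), P'(t)
        P[better], fP[better] = P_new[better], f_new[better]
    return P[np.argmin(fP)], fP.min()

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = instance_based_search(sphere, dim=2)
```

Note how new candidates are constructed directly from previously generated solutions; no model of f is ever built.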
Bartz-Beielstein MBO 11 / 94
Stochastic Search Algorithms
[2.2] MBO: Model-based Algorithms
I MBO algorithms: generate a population of new candidate solutions P′(t) by sampling from a model
I In statistics: model ≡ distribution
I The model (distribution) reflects structural properties of the underlying true function, say f
I By adapting the model (or the distribution), the search is directed into regions with improved solutions
I One of the key ideas: replacement of expensive, high-fidelity, fine-grained function evaluations, f(~x), with evaluations, f̂(~x), of an adequate cheap, low-fidelity, coarse-grained model, M
Bartz-Beielstein MBO 12 / 94
Stochastic Search Algorithms
[2.2.1] Distribution-based Approaches
I The metamodel is a distribution
I Generate a sequence of iterates (probability distributions) {p(t)} with the hope that

p(t) → p∗ as t → ∞,

where p∗ is the limiting distribution, which assigns most of its probability mass to the set of optimal solutions
I The probability distribution is propagated from one iteration to the next
I Instance-based algorithms, in contrast, propagate candidate solutions
1: t = 0. Let p(t) be a probability distribution.
2: while not TerminationCriterion() do
3:   Randomly generate a population of candidate solutions P(t) from p(t).
4:   Evaluate(P(t)).
5:   Update the distribution using population (samples) P(t) to generate a new distribution p(t+1).
6:   t = t + 1.
7: end while
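A toy distribution-based counterpart in Python, where p(t) is a univariate Gaussian refitted to the best samples in each iteration (the objective and all parameter values are illustrative assumptions):

```python
import numpy as np

def gaussian_eda(f, mu=0.0, sigma=3.0, pop_size=50, elite=10, iters=40, seed=3):
    """Distribution-based search: the distribution p(t) = N(mu, sigma^2),
    not the candidate solutions, is propagated between iterations."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        P = rng.normal(mu, sigma, size=pop_size)        # sample P(t) from p(t)
        fitness = np.array([f(x) for x in P])           # Evaluate(P(t))
        best = P[np.argsort(fitness)[:elite]]           # promising candidates
        mu, sigma = best.mean(), max(best.std(), 1e-3)  # update -> p(t+1)
    return mu

# minimizing (x - 2)^2: the distribution should concentrate near x* = 2
mu_final = gaussian_eda(lambda x: (x - 2.0) ** 2)
```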
Bartz-Beielstein MBO 13 / 94
Stochastic Search Algorithms
[2.2.1] Estimation of distribution algorithms (EDA)
I EDA: very popular in the field of evolutionary algorithms (EA)
I Variation operators such as mutation and recombination are replaced by a distribution-based procedure:
I The probability distribution is estimated from promising candidate solutions of the current population ⇒ generate new population
I Larrañaga and Lozano [2002] review different ways of using probabilistic models
I Hauschild and Pelikan [2011] discuss advantages and outline many of the different types of EDAs
I Hu et al. [2012] present recent approaches and a unified view
Bartz-Beielstein MBO 14 / 94
Stochastic Search Algorithms
[2.2.2] Focus on Surrogates
I Although distribution-based approaches play an important role in GO, they will not be discussed further in this talk
I We will concentrate on surrogate model-based approaches
I Origin in statistical design and analysis of experiments, especially in response surface methodology [G E P Box, 1951, Montgomery, 2001]
Bartz-Beielstein MBO 15 / 94
Stochastic Search Algorithms
[2.2.2] Surrogate Model-based Approaches
I In general: surrogates are used when the outcome of a process cannot be directly measured
I They imitate the behavior of the real model as closely as possible while being computationally cheaper to evaluate
I Surrogate models are also known as
I the cheap model, or
I a response surface,
I meta model,
I approximation,
I coarse-grained model
I Simple surrogate models are constructed using a data-driven approach
I Refined by integrating additional points or domain knowledge, e.g., constraints
Bartz-Beielstein MBO 16 / 94
Stochastic Search Algorithms
[2.2.2] Surrogate Model-based Approaches
[Diagram: SBO cycle — initial design, sample design space, build metamodel, validate metamodel, optimize on metamodel]

I The validation step (e.g., via CV) is optional
I Samples are generated iteratively to improve the surrogate model accuracy
Bartz-Beielstein MBO 17 / 94
Stochastic Search Algorithms
[2.2.2] Surrogate Model Based Optimization (SBO)Algorithm
1: t = 0. InitPopulation(P(t))
2: Evaluate(P(t))
3: while not TerminationCriterion() do
4:   Use P(t) to build a cheap model M(t)
5:   P′(t+1) = GlobalSearch(M(t))
6:   Evaluate(P′(t+1))
7:   P(t+1) ⊆ P(t) + P′(t+1)
8:   t = t + 1
9: end while
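A toy version of this loop in Python, with a quadratic least-squares fit as the cheap model M(t) and a dense grid search playing the role of GlobalSearch (both choices are illustrative simplifications, not the methods used later in the talk):

```python
import numpy as np

def sbo_1d(f, lower=-4.0, upper=4.0, n_init=5, iters=10, seed=5):
    """Surrogate model-based optimization: fit a cheap quadratic model to all
    evaluated points, propose the model optimum, evaluate it expensively."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, n_init)                  # InitPopulation(P(t))
    y = np.array([f(x) for x in X])                        # Evaluate(P(t))
    grid = np.linspace(lower, upper, 401)
    for _ in range(iters):
        coeffs = np.polyfit(X, y, deg=2)                   # build cheap model M(t)
        x_new = grid[np.argmin(np.polyval(coeffs, grid))]  # GlobalSearch(M(t))
        X, y = np.append(X, x_new), np.append(y, f(x_new)) # P(t+1) and Evaluate
    return X[np.argmin(y)], y.min()

x_best, f_best = sbo_1d(lambda x: (x - 1.0) ** 2 + 1.0)    # optimum at x = 1
```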
Bartz-Beielstein MBO 18 / 94
Stochastic Search Algorithms
[2.2.2] Surrogates
I Wide range of surrogates developed in the last decades ⇒ complex design decisions [Wang and Shan, 2007]:
I (a) Metamodels
I (b) Designs
I (c) Model fit
I (a) Metamodels:
I Classical regression models such as polynomial regression or response surface methodology [G E P Box, 1951, Montgomery, 2001],
I support vector machines (SVM) [Vapnik, 1998],
I neural networks [Zurada, 1992],
I radial basis functions [Powell, 1987], or
I Gaussian process (GP) models, design and analysis of computer experiments, Kriging [Schonlau, 1997], [Büche et al., 2005], [Antognini and Zagoraiou, 2010], [Kleijnen, 2009], [Santner et al., 2003]
I Comprehensive introduction to SBO in [Forrester et al., 2008]
Bartz-Beielstein MBO 19 / 94
Stochastic Search Algorithms
[2.2.2] Surrogates: Popular metamodeling techniques
I (b) Designs [Wang and Shan, 2007]:
I Classical
I Fractional factorial
I Central composite
I Box-Behnken
I A-, D-optimal (alphabetically)
I Plackett-Burman
I Space filling
I Simple grids
I Latin hypercube
I Orthogonal
I Uniform
I Minimax and Maximin
I Hybrid methods
I Random or human selection
I Sequential methods
Bartz-Beielstein MBO 20 / 94
Stochastic Search Algorithms
[2.2.2] Surrogates: Popular metamodeling techniques
I (c) Model fitting [Wang and Shan, 2007]:
I Weighted least squares regression
I Best linear unbiased predictor (BLUP)
I Likelihood
I Multipoint approximation
I Sequential metamodeling
I Neural networks: backpropagation
I Decision trees: entropy
Bartz-Beielstein MBO 21 / 94
Stochastic Search Algorithms
[2.2.2] Applications of SBO
I One of the most popular application areas for SBO:
I Simulation-based design of complex engineering problems
I computational fluid dynamics (CFD)
I finite element modeling (FEM) methods
I Exact solutions ⇒ solvers require a large number of expensive computer simulations
I Two variants of SBO
I (i) metamodel [2.2.2.1]: uses one or several different metamodels
I (ii) multi-fidelity approximation [2.2.2.2]: uses several instances with different parameterizations of the same metamodel
Bartz-Beielstein MBO 22 / 94
Stochastic Search Algorithms
[2.2.2.1] Applications of Metamodels and [2.2.2.2]Multi-fidelity Approximation
I Meta-modeling approaches
I 31-variable helicopter rotor design [Booker et al., 1998]
I Aerodynamic shape design problem [Giannakoglou, 2002]
I Multi-objective optimal design of a liquid rocket injector [Queipo et al., 2005]
I Airfoil shape optimization with CFD [Zhou et al., 2007]
I Aerospace design [Forrester and Keane, 2009]
I Multi-fidelity Approximation
I Several simulation models with different grid sizes in FEM [Huang et al., 2015]
I Sheet metal forming process [Sun et al., 2011]
I “How far have we really come?” [Simpson et al., 2012]
Bartz-Beielstein MBO 23 / 94
Stochastic Search Algorithms
[2.2.2.3] Surrogate-assisted Evolutionary Algorithms
I Surrogate-assisted EAs: EAs that decouple the evolutionary search and the direct evaluation of the objective function
I A cheap surrogate model replaces evaluations of the expensive objective function
Bartz-Beielstein MBO 24 / 94
Stochastic Search Algorithms
[2.2.2.3] Surrogate-assisted Evolutionary Algorithms
I Combination of a genetic algorithm and neural networks for aerodynamic design optimization [Hajela and Lee, 1997]
I Approximate model of the fitness landscape using Kriging interpolation to accelerate the convergence of EAs [Ratle, 1998]
I Evolution strategy (ES) with neural network based fitness evaluations [Jin et al., 2000]
I Surrogate-assisted EA framework with online learning [Zhou et al., 2007]
I Do not evaluate every candidate solution (individual), but just estimate the objective function value of some of the neighboring individuals [Branke and Schmidt, 2005]
I Survey of surrogate-assisted EA approaches [Jin, 2003]
I SBO approaches for evolution strategies [Emmerich et al., 2002]
Bartz-Beielstein MBO 25 / 94
Stochastic Search Algorithms
[2.2.2.4] Multiple Models
I Instead of using only one surrogate model, several models Mi, i = 1, 2, ..., p, are generated and evaluated in parallel
I Each model Mi: X → y uses
I the same candidate solutions, X, from the population P and
I the same results, y, from expensive function evaluations
I Multiple models can also be used to partition the search space
I The tree-based Gaussian process (TGP): regression trees partition the search space, local GP surrogates are fitted in each region [Gramacy, 2007]
I Tree-based partitioning of an aerodynamic design space, independent Kriging surfaces in each partition [Nelson et al., 2007]
I Combination of an evolutionary model selection (EMS) algorithm with the expected improvement (EI) criterion: select the best-performing surrogate model type at each iteration of the EI algorithm [Couckuyt et al., 2011]
Bartz-Beielstein MBO 26 / 94
Stochastic Search Algorithms
[2.2.2.4] Multiple Models: Ensembles
I Ensembles of surrogate models have gained popularity:
I Adaptive weighted average model of the individual surrogates [Zerpa et al., 2005]
I Use the best surrogate model or a weighted average surrogate model instead [Goel et al., 2006]
I Weighted-sum approach for the selection of model ensembles [Sanchez et al., 2006]
I Models for the ensemble are chosen based on their performance
I Weights are adaptive and inversely proportional to the local modeling errors
Bartz-Beielstein MBO 27 / 94
Quality Criteria: How to Select Surrogates
Overview
Introduction
Stochastic Search Algorithms
Quality Criteria: How to Select Surrogates
Examples
Ensembles: Considerations
SPO2 Part 2
Stacking: Considerations
Bartz-Beielstein MBO 28 / 94
Quality Criteria: How to Select Surrogates
Model Refinement: Selection Criteria for SamplePoints
I An initial model is refined during the optimization ⇒ adaptive sampling
I Identify new points, so-called infill points
I Balance between
I exploration, i.e., improving the model quality (related to the model, global), and
I exploitation, i.e., improving the optimization and determining the optimum (related to the objective function, local)
I Expected improvement (EI): popular adaptive sampling method [Mockus et al., 1978], [Jones et al., 1998]
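For a surrogate that returns a prediction mean μ(x) and standard deviation s(x) (as Kriging does), the EI criterion of Jones et al. [1998] can be computed as below; this is a generic sketch, not tied to any specific library:

```python
import math

def expected_improvement(mu, s, f_min):
    """EI(x) = (f_min - mu) * Phi(z) + s * phi(z) with z = (f_min - mu) / s,
    for minimization: low mu rewards exploitation, large s rewards exploration."""
    if s <= 0.0:
        return max(f_min - mu, 0.0)  # deterministic prediction: plain improvement
    z = (f_min - mu) / s
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # std. normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # std. normal pdf
    return (f_min - mu) * Phi + s * phi

# a predicted mean equal to the best value found so far still has positive EI
# through its prediction uncertainty:
ei = expected_improvement(mu=1.0, s=0.5, f_min=1.0)
```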
Bartz-Beielstein MBO 29 / 94
Quality Criteria: How to Select Surrogates
Model Selection Criteria
I The EI approach handles the initialization and refinement of a surrogate model
I But not the selection of the model itself
I The popular efficient global optimization (EGO) algorithm uses a Kriging model
I Because Kriging inherently determines the prediction variance (necessary for the EI criterion)
I But there is no proof that Kriging is the best choice
I Alternative surrogate models, e.g., neural networks, regression trees, support vector machines, or lasso and ridge regression, may be better suited
I An a priori selection of the best-suited surrogate model is conceptually impossible in the framework treated in this talk, because of the black-box setting
Bartz-Beielstein MBO 30 / 94
Quality Criteria: How to Select Surrogates
Single or Ensemble
I Regarding the model choice, the user can decide whether to use
I one single model, i.e., one unique global model or
I multiple models, i.e., an ensemble of different, possibly local, models
I Static SBO uses a single, global surrogate model, usually refined by adaptive sampling, whose type does not change ⇒ category [2.2.2.1]
Bartz-Beielstein MBO 31 / 94
Quality Criteria: How to Select Surrogates
Criteria for Selecting a Surrogate
I Here, we do not consider the selection of a new sample point (as done in EI)
I Instead: criteria for the selection of one (or several) surrogate models
I Usually, surrogate models are chosen according to their estimated true error [Jin et al., 2001], [Shi and Rasheed, 2010]
I Commonly used performance metrics:
I mean absolute error (MAE)
I root mean square error (RMSE)
I Generally, attaining a surrogate model that has minimal error is the desired feature
I Methods from statistics, statistical learning [Hastie, 2009], and machine learning [Murphy, 2012]:
I Simple holdout
I Cross-validation
I Bootstrap
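Cross-validation-based selection among candidate surrogates can be sketched with scikit-learn as follows; the two candidate models and the synthetic sine data are illustrative assumptions, not taken from the talk:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-4, 4, size=(80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 80)   # nonlinear "true" function

candidates = {"linear": LinearRegression(),
              "forest": RandomForestRegressor(n_estimators=50, random_state=0)}
# 5-fold CV estimate of the RMSE for each candidate surrogate
scores = {name: -cross_val_score(model, X, y, cv=5,
                                 scoring="neg_root_mean_squared_error").mean()
          for name, model in candidates.items()}
best_name = min(scores, key=scores.get)        # surrogate with smallest CV-RMSE
```

Because the ground truth is nonlinear, the random forest should achieve the smaller cross-validated error here.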
Bartz-Beielstein MBO 32 / 94
Examples
Overview
Introduction
Stochastic Search Algorithms
Quality Criteria: How to Select Surrogates
Examples
Ensembles: Considerations
SPO2 Part 2
Stacking: Considerations
Bartz-Beielstein MBO 33 / 94
Examples
Criteria for Selecting a Surrogate: Evolvability
I Model error is not the only criterion for selecting surrogate models
I Evolvability learning of surrogates approach (EvoLS) [Le et al., 2013]:
I Use fitness improvement for determining the quality of surrogate models
I EvoLS belongs to the category of surrogate-assisted evolutionary algorithms ([2.2.2.3])
I Distributed, local information
Bartz-Beielstein MBO 34 / 94
Examples
Evolvability Learning of Surrogates
I EvoLS: select surrogate models that enhance search improvement in the context of optimization
I Process information about the
I (i) different fitness landscapes,
I (ii) state of the search, and
I (iii) characteristics of the search algorithm
I to statistically determine the so-called evolvability of each surrogate model
I The evolvability of a surrogate model estimates the expected improvement of the objective function value that the new candidate solution has gained after a local search has been performed on the related surrogate model [Le et al., 2013]
Bartz-Beielstein MBO 35 / 94
Examples
Evolvability
I Local search: after recombination and mutation, a local search is performed
I It uses an individual local meta-model, M, for each offspring
I The local optimizer, ϕM, takes an offspring ~y as input and returns ~y∗ as the refined offspring
I The evolvability measure can be estimated as follows [Le et al., 2013]:
Ev_M(~x) = f(~x) − Σ_{i=1}^{K} f(~y∗_i) × w_i(~x)

with weights (selection probabilities of the offspring):

w_i(~x) = P(~y_i | P(t), ~x) / Σ_{j=1}^{K} P(~y_j | P(t), ~x)
Bartz-Beielstein MBO 36 / 94
Examples
SPO
I EvoLS: distributed, local information. Now: more centralized, global information ⇒ sequential parameter optimization (SPO)
I Goal: Analysis and understanding of algorithms
I Early versions of the SPO [Bartz-Beielstein, 2003, Bartz-Beielstein et al., 2005] combined methods from
I design of experiments (DOE) [Pukelsheim, 1993]
I response surface methodology (RSM) [Box and Draper, 1987, Montgomery, 2001]
I design and analysis of computer experiments (DACE) [Lophaven et al., 2002, Santner et al., 2003]
I regression trees [Breiman et al., 1984]
I Also: SPO as an optimizer
Bartz-Beielstein MBO 37 / 94
Examples
SPO
I SPO: sequential, model-based approach to optimization
I Nowadays: established parameter tuner and an optimization algorithm
I Extended in several ways:
I For example, Hutter et al. [2013] benchmark an SPO derivative, the so-called sequential model-based algorithm configuration (SMAC) procedure, on the BBOB set of blackbox functions
I With a small budget of 10 × d evaluations of d-dimensional functions, SMAC in most cases outperforms the state-of-the-art blackbox optimizer CMA-ES
Bartz-Beielstein MBO 38 / 94
Examples
SPO
I The most recent version, SPO2, is currently under development
I Integration of state-of-the-art ensemble learners
I SPO2 ensemble engine:
I Portfolio of surrogate models
I regression trees and random forest, least angle regression (lars), and Kriging
I Uses cross-validation to select an improved model from the portfolio of candidate models
I Creates a weighted combination of several surrogate models to build the improved model
I Uses stacked generalization to combine several level-0 models of different types with one level-1 model into an ensemble [Wolpert, 1992]
I Level-1 training algorithm: simple linear model
Bartz-Beielstein MBO 39 / 94
Examples
SPO
I Promising preliminary results
I The SPO2 ensemble engine can lead to significant performance improvements
I Rebolledo Coy et al. [2016] present a comparison of different data-driven modeling methods
I Bayesian model
I Several linear regression models
I Kriging model
I Genetic programming
I Models are built on industrial data for the development of a robust gas sensor
I Limited amount of samples and a high variance
Bartz-Beielstein MBO 40 / 94
Examples
SPO
I Two sensors are compared
I 1st sensor (MSE)
I Linear model (0.76), OLS (0.79), Lasso (0.56), Kriging (0.57), Bayes (0.79), and genetic programming (0.58)
I SPO2: 0.38
I 2nd sensor (MSE)
I Linear model (0.67), OLS (0.80), Lasso (0.49), Kriging (0.49), Bayes (0.79), and genetic programming (0.27)
I SPO2: 0.29
Bartz-Beielstein MBO 41 / 94
Examples
Begin: Jupyter Interactive Document
I The following part of this talk is based on an interactive Jupyter notebook [Pérez and Granger, 2007]
I The next slides (43 - 56) summarize the output from the Jupyter notebook
Bartz-Beielstein MBO 42 / 94
Examples DSPO: Deep Sequential Parameter Optimization
Preparation [jupyter]
I Basically
I 1. import libraries and
I 2. set the SPO2 parameters, i.e., the number of folds
I The code implements ideas from Wolpert [1992], based on Olivetti [2012]
I Libraries are shown explicitly, because we will comment on this topic later!
import matplotlib.pyplot as plt
from IPython.display import set_matplotlib_formats
from sklearn.linear_model import LassoCV, LassoLarsCV, LassoLarsIC
...
import statsmodels.formula.api as sm
Bartz-Beielstein MBO 43 / 94
Examples Data
The Complete Data Set [jupyter]

I Problem description in Rebolledo Coy et al. [2016]
I One training data set and one test data set

In [2]: dfTrain = read_csv('/Users/bartz/workspace/svnbib/Python.d/projects/pyspot2/data/training.csv')
        dfTest = read_csv('/Users/bartz/workspace/svnbib/Python.d/projects/pyspot2/data/validation.csv')

RangeIndex: 80 entries, 0 to 79
Data columns (total 9 columns):
X1    80 non-null float64
X2    80 non-null float64
..
X7    80 non-null float64
Y1    80 non-null float64
Y2    80 non-null float64
dtypes: float64(9)
memory usage: 5.7 KB

      X1         X2        X3   ...
0  -1.132054  -1.206144   ...
...
Bartz-Beielstein MBO 44 / 94
Examples Data
Data Used in this Study [jupyter]
I Here, we consider data from the second sensor
I There are seven input values and one output value (y)
I The goal of this study: predict the outcome y using the seven input measurements (X1, ..., X7)
I Output (y) plotted against each input (X1, ..., X7)
[Figure: scatter plots of the output y against each of the inputs X1, ..., X7]
Bartz-Beielstein MBO 45 / 94
Examples Data
2. Cross Validation (CV Splits) [jupyter]
I The training data are split into folds
I KFold divides all the samples into k = n_folds folds, i.e., groups of samples of equal sizes (if possible)
I If k = n, this is equivalent to the Leave-One-Out strategy
I The prediction function is learned using (k − 1) folds, and the fold left out is used for testing
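In scikit-learn terms (with illustrative sizes n = 6, k = 3):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(12).reshape(6, 2)    # n = 6 samples
kf = KFold(n_splits=3)             # k = 3 folds of equal size
splits = list(kf.split(X))
for train_idx, val_idx in splits:
    # the prediction function is learned on k-1 folds, tested on the left-out fold
    assert len(train_idx) == 4 and len(val_idx) == 2
```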
Bartz-Beielstein MBO 46 / 94
Examples Data
3. Models in the Ensemble [jupyter]
I Linear Regression
I 1. Normalized predictors
I LinearRegression(normalize=False)
I LinearRegression(normalize=True)
I 2. Intercept
I LinearRegression(normalize=False, fit_intercept=False)
I LinearRegression(normalize=True, fit_intercept=False)
I Random Forest
I RandomForestRegressor(n_estimators=10, random_state=0)
I RandomForestRegressor(n_estimators=100, oob_score=True, random_state=2)
I Lasso
I Lasso(alpha=0.1, fit_intercept=False)
I LassoCV(positive=True)
Bartz-Beielstein MBO 47 / 94
Examples Data
3. Models in the Ensemble [jupyter]
I Gaussian Processes
I The kernel specifies the covariance function of the GP
I Parameterized with different kernels:
I For example RBF, Matern, RationalQuadratic, ExpSineSquared, DotProduct, ConstantKernel
I If none is passed, the kernel RBF() is used as default
I The kernel's hyperparameters are optimized during fitting
I Kernel combinations are allowed.
I Example:

kern = 1.0 * RBF(length_scale=100.0, length_scale_bounds=(1e-2, 1e3)) \
     + WhiteKernel(noise_level=1, noise_level_bounds=(1e-10, 1e+1))

⇒ GaussianProcessRegressor(kern)
Bartz-Beielstein MBO 48 / 94
Examples Data
3. Models in the Ensemble [jupyter]
In [5]: clfs = [LinearRegression(),
                RandomForestRegressor(),
                # GaussianProcessRegressor(),
               ]
Bartz-Beielstein MBO 49 / 94
Examples Data
Matrix Preparation, Dimensions [jupyter]
I n: size of the training set (samples): X.shape[0]
I k: number of folds for CV: n_folds
I p: number of models: len(clfs)
I m: size of the test data (samples): XTest.shape[0]
I We will use two matrices:
I 1. YCV is an (n × p)-matrix. It stores the results from the cross-validation for each model. The training set is partitioned into k folds (n_folds = k).
I 2. YBT is an (m × p)-matrix. It stores the aggregated results from the cross-validation models on the test data. For each fold, p separate models are built, which are used for prediction on the test data. The predicted values from the k folds are averaged for each model, which results in (m × p) different values.

In [6]: YCV = np.zeros((X.shape[0], len(clfs)))
        YBT = np.zeros((XTest.shape[0], len(clfs)))
Bartz-Beielstein MBO 50 / 94
Examples Data
Cross-Validation [jupyter]
I Each of the p algorithms is run separately on the k folds
I The training data set is split into folds
I Each fold contains training data (train) and validation data (val)
I For each fold, a model is built using the train data
I The model is used to make predictions on
I 1. the validation data from the k-th fold; these values are stored in the matrix YCV
I 2. the test data ⇒ YBT[i]
I The average of these predictions is stored in the matrix YBT
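The loop just described can be sketched as follows; the synthetic data and the two base models are illustrative stand-ins for the notebook's setup:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X, XTest = rng.normal(size=(80, 3)), rng.normal(size=(60, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 80)

clfs = [LinearRegression(), RandomForestRegressor(n_estimators=20, random_state=0)]
n_folds = 5
YCV = np.zeros((X.shape[0], len(clfs)))        # out-of-fold predictions (n x p)
YBT = np.zeros((XTest.shape[0], len(clfs)))    # averaged test predictions (m x p)

for j, clf in enumerate(clfs):
    fold_preds = np.zeros((XTest.shape[0], n_folds))
    for i, (train, val) in enumerate(KFold(n_splits=n_folds).split(X)):
        clf.fit(X[train], y[train])            # model built on the train data
        YCV[val, j] = clf.predict(X[val])      # 1. predictions on left-out fold
        fold_preds[:, i] = clf.predict(XTest)  # 2. predictions on the test data
    YBT[:, j] = fold_preds.mean(axis=1)        # average over the k folds
```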
Bartz-Beielstein MBO 51 / 94
Examples Data
The YCV Matrix [jupyter]
I The YCV matrix has p columns and n rows
I Each row contains the predictions for the corresponding sample from the training data set

(80, 2)
[[ 0.98940238  1.72943761]
 [ 0.14445345  0.17159034]
 [-0.0969411   0.86240816]
 [ 1.24820656  0.27214466]
 [ 0.0897393   0.1542136 ]
 ...
 [-0.56801858 -0.50319222]]
Bartz-Beielstein MBO 52 / 94
Examples Data
The YBT Matrix [jupyter]
I The YBT matrix has p columns and m rows
I Each row contains the predictions for the corresponding sample from the test data set
I These values are the mean values of the k fold predictions calculated with the training data
I Each fold generates one model, which is used for prediction on the test data
I The mean value of these k predictions is stored in the matrix YBT

(60, 2)
[[  6.85725928e-01   1.18237211e+00]
 [  1.19810293e+00  -2.93516856e-01]
 [ -8.16703631e-01  -1.87229011e+00]
 [  2.06323037e+00   7.25281970e-01]
 [  1.37816630e+00  -1.35272556e-01]
 ...
 [  3.22714124e+00   1.36502285e+00]]
Bartz-Beielstein MBO 53 / 94
Examples Blending: Level-1 Model
Model Building [jupyter]
I The level-1 model maps the CV predictions of each model to the known training y-values
I It provides an estimate of the influence of the single models
I For example, if a linear level-1 model is used, the coefficient βi represents the effect of the i-th model
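Fitting such a linear level-1 model on an out-of-fold matrix makes the per-model coefficients explicit; the YCV stand-in below is synthetic (model 0 tracks the target, model 1 is noise), purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
y = rng.normal(size=100)                  # known training y-values
YCV = np.column_stack([y + rng.normal(0, 0.1, 100),   # model 0: close to y
                       rng.normal(size=100)])         # model 1: mostly noise
level1 = LinearRegression().fit(YCV, y)   # level-1 model: CV predictions -> y
beta = level1.coef_                       # beta_i = effect of the i-th model
```

The coefficient of the accurate model ends up near 1, the noise model's near 0, which is exactly the "influence estimate" described above.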
Bartz-Beielstein MBO 54 / 94
Examples Blending: Level-1 Model
Model Prediction [jupyter]
I The level-1 model is used for predictions on the YBT data, i.e., on the averaged predictions of the CV models
I Constructed using the effects of the predicted values of the single models (determined by linear regression) on the true values of the training data
I If a model predicts a value similar to the true value during the CV, then it has a strong effect
I The final predictions are made by applying the coefficients (weights) of the single models to the YBT data
I YBT data: predicted values from the corresponding models on the final test data

[[ 0.98940238  1.72943761  0.55617508]
 [ 0.14445345  0.17159034  0.28026176]
 ...
 [-0.56801858 -0.50319222 -0.28009123]]
Intercept: -0.0199327428866
Coefficients: [ 0.34705597  0.68483785]
Bartz-Beielstein MBO 55 / 94
Examples Blending: Level-1 Model
Comparing MSE [jupyter]

I Comparison of the mean squared error from the SPO2 ensemble and the single models:

SPO2 (MSE): 0.284948273406
L (MSE): 0.673695001324
R (MSE): 0.367652881967
[Figure: predicted vs. measured values for the SPO2 ensemble and the single models L and R]
Bartz-Beielstein MBO 56 / 94
Examples Blending: Level-1 Model
End: Jupyter Interactive Document
I End of the interactive Jupyter notebook [Pérez and Granger, 2007]
I The next slides (59 - 64) are based on static slides
Bartz-Beielstein MBO 57 / 94
Ensembles: Considerations
Overview
Introduction
Stochastic Search Algorithms
Quality Criteria: How to Select Surrogates
Examples
Ensembles: Considerations
SPO2 Part 2
Stacking: Considerations
Bartz-Beielstein MBO 58 / 94
Ensembles: Considerations
Why are Ensembles Better?
I The following considerations are based on van Veen [2015]
Example 1 (Error Correcting Codes)

I A signal in the form of a binary string like
0111101010100000111001010101
gets corrupted with just one bit flipped, as in the following:
0110101010100000111001010101
I Repetition code: simplest solution
I Repeat the signal multiple times in equally sized chunks and take a majority vote
I 1. Original signal: 001010101
I 2. Encoded (relay multiple times): 000000111000111000111000111
I 3. Decoding: take three bits at a time and calculate the majority
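The repetition code fits in a few lines of Python (purely illustrative):

```python
def encode(signal, k=3):
    """Repeat every bit k times (relay the signal in equally sized chunks)."""
    return "".join(bit * k for bit in signal)

def decode(received, k=3):
    """Majority vote over each chunk of k bits."""
    chunks = [received[i:i + k] for i in range(0, len(received), k)]
    return "".join("1" if c.count("1") > k // 2 else "0" for c in chunks)

encoded = encode("001010101")                  # '000000111000111000111000111'
corrupted = encoded[:5] + "1" + encoded[6:]    # flip a single bit
recovered = decode(corrupted)                  # majority vote repairs the flip
```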
Bartz-Beielstein MBO 59 / 94
Ensembles: Considerations
A Simple Machine Learning Example
I Test set of ten samples
I The ground truth is all positive: 1111111111
I Three binary classifiers (A, B, C) with a 70% accuracy
I View the classifiers as pseudo-random number generators that
I output a 1 70% of the time and
I a 0 30% of the time
I These pseudo-classifiers are able to obtain 78% accuracy through a voting ensemble [van Veen, 2015]
Bartz-Beielstein MBO 60 / 94
Ensembles: Considerations
A Simple Machine Learning Example
I All three are correct: 0.7 × 0.7 × 0.7 = 0.343
I Two are correct: 0.7 × 0.7 × 0.3 + 0.7 × 0.3 × 0.7 + 0.3 × 0.7 × 0.7 = 0.441
I Two are wrong: 0.3 × 0.3 × 0.7 + 0.3 × 0.7 × 0.3 + 0.7 × 0.3 × 0.3 = 0.189
I All three are wrong: 0.3 × 0.3 × 0.3 = 0.027
I This majority-vote ensemble is correct on average ≈ 78% of the time (0.343 + 0.441 = 0.784)
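The same numbers follow from the binomial distribution; a quick check in Python (a generic helper, not from the slides):

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that a majority of n independent classifiers, each correct
    with probability p, gives the right answer (n odd)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

acc = majority_vote_accuracy(0.7, 3)   # 0.343 + 0.441 = 0.784
```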
Bartz-Beielstein MBO 61 / 94
Ensembles: Considerations
Correlation
I Uncorrelated models clearly do better when ensembled than correlated ones
I 1111111100 = 80% accuracy
I 1111111100 = 80% accuracy
I 1011111100 = 70% accuracy
I The models are highly correlated in their predictions. A majority vote results in no improvement:
I 1111111100 = 80% accuracy
I Now 3 less-performing, but highly uncorrelated models:
I 1111111100 = 80% accuracy
I 0111011101 = 70% accuracy
I 1000101111 = 60% accuracy
I Ensembling with a majority vote results in:
I 1111111101 = 90% accuracy
I Lower correlation between ensemble model members seems to result in an increase in the error-correcting capability [van Veen, 2015]
Bartz-Beielstein MBO 62 / 94
Ensembles: Considerations
Beyond Averaging: Stacking
I Averaging, i.e., taking the mean of individual model predictions, works well for a wide range of problems (classification and regression) and metrics
I Often referred to as bagging
I Averaging predictions often reduces overfit [van Veen, 2015]
I Wolpert [1992] introduced stacked generalization before bagging was proposed by Breiman [1996]
I Wolpert is also famous for the "No free lunch" theorems in search and optimization
I Basic idea behind stacking: use a pool of base classifiers, then use another classifier to combine their predictions
Bartz-Beielstein MBO 63 / 94
Ensembles: Considerations
2-fold Stacking
I The stacker model gets information on the problem space by using the first-stage predictions as features [van Veen, 2015]:
I 1. Split the train set in 2 parts: train_a and train_b
I 2. Fit a first-stage model on train_a and create predictions for train_b
I 3. Fit the same model on train_b and create predictions for train_a
I 4. Finally, fit the model on the entire train set and create predictions for the test set
I 5. Now train a second-stage stacker model on the predictions from the first-stage model(s)
⇒ Python Example
Bartz-Beielstein MBO 64 / 94
Ensembles: Considerations
Begin: Jupyter Interactive Document
I The following part of this talk is based on an interactive Jupyter notebook [Pérez and Granger, 2007]
I The next slides (67 - 85) summarize the output from the Jupyter notebook
BIOMA
SPO2 Part 2
Overview
Introduction
Stochastic Search Algorithms
Quality Criteria: How to Select Surrogates
Examples
Ensembles: Considerations
SPO2 Part 2
Stacking: Considerations
SYNERGY Horizon 2020 – GA No 692286
D3.2 46 31 July 2018
SPO2 Part 2 Artificial Test Functions
Function Definitions [jupyter]
• Motivated by van der Laan and Polley [2010], we consider six test functions
• All simulations involve a univariate X drawn from a uniform distribution on [−4, +4]
• Test functions:
  • f1(x) = −2·I(x < −3) + 2.55·I(x > −2) − 2·I(x > 0) + 4·I(x > 2) − 1·I(x > 3) + ε
  • f2(x) = 6 + 0.4x − 0.36x² + 0.005x³ + ε
  • f3(x) = 2.83·sin(π/2 · x) + ε
  • f4(x) = 4.0·sin(3πx)·I(x ≥ 0) + ε
  • f5(x) = x + ε
  • f6(x) = N(0, 1) + ε
• I(·): indicator function; ε drawn from an independent standard normal distribution; sample size r = 100 (repeats)
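Written out as runnable code, the six functions look as follows. This is a pure-Python sketch reconstructed from the slide (the original notebook used numpy; the seed and sampled points here are illustrative):

```python
# The six test functions of van der Laan and Polley [2010], as on the
# slide; eps() is the standard-normal error term added to each sample.
import math
import random

random.seed(1)
I = lambda cond: 1.0 if cond else 0.0   # indicator function
eps = lambda: random.gauss(0, 1)        # standard normal error

def f1(x):  # step function
    return (-2 * I(x < -3) + 2.55 * I(x > -2) - 2 * I(x > 0)
            + 4 * I(x > 2) - 1 * I(x > 3) + eps())

def f2(x):  # polynomial function
    return 6 + 0.4 * x - 0.36 * x ** 2 + 0.005 * x ** 3 + eps()

def f3(x):  # sine function
    return 2.83 * math.sin(math.pi / 2 * x) + eps()

def f4(x):  # composite (linear-sine) function
    return 4.0 * math.sin(3 * math.pi * x) * I(x >= 0) + eps()

def f5(x):  # linear function
    return x + eps()

def f6(x):  # pure noise function
    return random.gauss(0, 1) + eps()

xs = [random.uniform(-4, 4) for _ in range(100)]  # univariate X ~ U[-4, 4]
data = {f.__name__: [f(x) for x in xs] for f in (f1, f2, f3, f4, f5, f6)}
```

Each call draws a fresh error term, so repeated evaluations at the same x differ, as in the experiments below.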
SPO2 Part 2 Artificial Test Functions
Function Definitions [jupyter]
• f1: Step function
[Figure: plot of f1 over x ∈ [−4, 4]]
SPO2 Part 2 Artificial Test Functions
Function Definitions [jupyter]
• f2: Polynomial function
[Figure: plot of f2 over x ∈ [−4, 4]]
SPO2 Part 2 Artificial Test Functions
Function Definitions [jupyter]
• f3: Sine function
[Figure: plot of f3 over x ∈ [−4, 4]]
SPO2 Part 2 Artificial Test Functions
Function Definitions [jupyter]
• f4: Composite function
[Figure: plot of f4 over x ∈ [−4, 4]]
SPO2 Part 2 Artificial Test Functions
Function Definitions [jupyter]
• f5: Linear function
[Figure: plot of f5 over x ∈ [−4, 4]]
SPO2 Part 2 Artificial Test Functions
Function Definitions [jupyter]
• f6: Noise function
[Figure: plot of f6 over x ∈ [−4, 4]]
SPO2 Part 2 Experiment 1: Step Function
f1: Coefficients of the Level-1 Model [jupyter]
• The coefficients can be interpreted as weights in the linear combination of the models; 0 = intercept, and 1, 2, and 3 denote the β1, β2, and β3 values, respectively
[Figure: bar chart of the level-1 model coefficients]
SPO2 Part 2 Experiment 1: Step Function
f1: R² Values [jupyter]
• R² (larger values are better) and standard deviation:
  • SPO: 0.78211976, 0.03308847
  • L: 0.4024831, 0.07134356
  • R: 0.78556947, 0.03187105
  • G: 0.76547433, 0.03564519
[Figure: box plots of the R² values per model]
SPO2 Part 2 Experiment 2: Polynomial Function
f2: Coefficients of the Level-1 Model [jupyter]
• The coefficients can be interpreted as weights in the linear combination of the models; 0 = intercept, and 1, 2, and 3 denote the β1, β2, and β3 values, respectively
[Figure: bar chart of the level-1 model coefficients]
SPO2 Part 2 Experiment 2: Polynomial Function
f2: R² Values [jupyter]
• R² (larger values are better) and standard deviation:
  • SPO: 0.79514735, 0.03602018
  • L: 0.21445917, 0.07656562
  • R: 0.79488344, 0.03604606
  • G: 0.79514727, 0.03602018
[Figure: box plots of the R² values per model]
SPO2 Part 2 Experiment 3: Sine Function
f3: Coefficients of the Level-1 Model [jupyter]
• The coefficients can be interpreted as weights in the linear combination of the models; 0 = intercept, and 1, 2, and 3 denote the β1, β2, and β3 values, respectively
[Figure: bar chart of the level-1 model coefficients]
SPO2 Part 2 Experiment 3: Sine Function
f3: R² Values [jupyter]
• R² (larger values are better) and standard deviation:
  • SPO: 0.7939634, 0.02777211
  • L: 0.11677184, 0.05688847
  • R: 0.79244941, 0.02743085
  • G: 0.79396338, 0.02777211
[Figure: box plots of the R² values per model]
SPO2 Part 2 Experiment 4: Linear-Sine Function
f4: Coefficients of the Level-1 Model [jupyter]
• The coefficients can be interpreted as weights in the linear combination of the models; 0 = intercept, and 1, 2, and 3 denote the β1, β2, and β3 values, respectively
[Figure: bar chart of the level-1 model coefficients]
SPO2 Part 2 Experiment 4: Linear-Sine Function
f4: R² Values [jupyter]
• R² (larger values are better) and standard deviation:
  • SPO: 0.74144195, 0.05779718
  • L: 0.00651219, 0.01489886
  • R: 0.75301025, 0.05133169
  • G: 0.31721598, 0.07939812
[Figure: box plots of the R² values per model]
SPO2 Part 2 Experiment 5: Linear Function
f5: Coefficients of the Level-1 Model [jupyter]
• The coefficients can be interpreted as weights in the linear combination of the models; 0 = intercept, and 1, 2, and 3 denote the β1, β2, and β3 values, respectively
[Figure: bar chart of the level-1 model coefficients]
SPO2 Part 2 Experiment 5: Linear Function
f5: R² Values [jupyter]
• R² (larger values are better) and standard deviation:
  • SPO: 0.8362937, 0.02381472
  • L: 0.8362937, 0.02381472
  • R: 0.83628043, 0.02374492
  • G: 0.8362937, 0.02381472
[Figure: box plots of the R² values per model]
SPO2 Part 2 Experiment 6: Random Noise (normal)
f6: Coefficients of the Level-1 Model [jupyter]
• The coefficients can be interpreted as weights in the linear combination of the models; 0 = intercept, and 1, 2, and 3 denote the β1, β2, and β3 values, respectively
[Figure: bar chart of the level-1 model coefficients]
SPO2 Part 2 Experiment 6: Random Noise (normal)
f6: R² Values [jupyter]
• R² (larger values are better) and standard deviation:
  • SPO: -0.02025601, 0.10308039
  • L: -0.00035958, 0.01505964
  • R: 0.3586063, 0.06232495
  • G: 0.10037904, 0.05356867
[Figure: box plots of the R² values per model]
SPO2 Part 2 Experiment 6: Random Noise (normal)
End: Jupyter Interactive Document
• End of the interactive Jupyter notebook [Pérez and Granger, 2007]
• The next slides (88–93) are based on static slides
BIOMA
Stacking: Considerations
Overview
Introduction
Stochastic Search Algorithms
Quality Criteria: How to Select Surrogates
Examples
Ensembles: Considerations
SPO2 Part 2
Stacking: Considerations
Stacking: Considerations
Data
[Figure: data flow of the stacking procedure. The training set {(Xi, yi)}i=1,...,n is split into Val_Training {(Xi, yi)}i=1,...,q and Val_Test {(Xi, yi)}i=q+1,...,n. First-stage algorithms A1 and A2 are fitted on Val_Training and produce predictions on Val_Test; the second-stage algorithm A3 is trained on these predictions together with the true Val_Test targets. The first-stage models refitted on the full training set (ATr1, ATr2) then generate predictions for the test set {(Xti, yti)}i=1,...,m, which A3 combines into the final prediction y.]
Stacking: Considerations
Blending and Meta-Meta Models
• Continuing the summary of van Veen [2015]:
  • Blending is a term introduced by the Netflix Prize winners
  • Instead of creating out-of-fold predictions for the train set, use a small holdout set of, say, 10% of the train set
  • Benefit: simpler than stacking
  • Con: the final model may overfit to the holdout set
• Further improvements: combine multiple ensembled models
• Use averaging or voting on manually selected, well-performing ensembles:
  • Start with a base ensemble
  • Add the model that increases the train set score the most
  • By allowing put-back of models, a single model may be picked multiple times (weighting)
  • Use genetic algorithms with CV scores as the fitness function
• van Veen [2015] also proposes a fully random method:
  • Create 100 or so ensembles from randomly selected models (without put-back)
  • Then pick the highest-scoring ensemble
Stacking: Considerations
“Frankenstein Ensembles”

• Why stacking and combining 1000s of models and computational hours?
• van Veen [2015] lists the following pros for these “monster models”:
  • Win competitions (Netflix, Kaggle, . . . )
  • Beat most state-of-the-art academic benchmarks with a single approach
  • Transfer knowledge from the ensemble back to a simpler shallow model
  • The loss of one model is not fatal for creating good predictions
  • Automated large ensembles don’t require much tuning or selection
  • A 1% increase in accuracy may push an investment fund from making a loss into making a little less loss. More seriously: improving healthcare screening methods helps save lives
Stacking: Considerations
Video Sequence
• The following part of this talk refers to a sequence from the video “Gerald Jay Sussman on Flexible Systems, The Power of Generic Operations”, available at https://vimeo.com/151465912
• 1:01:42 to 1:04:25 (to be deleted from the videolectures.net version)
BIOMA
Stacking: Considerations
Summary: Structure and Interpretation of Computer Programs (SICP)

• PROGRAMMING BY POKING: WHY MIT STOPPED TEACHING SICP, http://www.posteriorscience.net/?p=206
• 1. Hal Abelson and Gerald Jay Sussman got tired of teaching it
• 2. The SICP curriculum no longer prepared engineers for what engineering is like today
• In the 80s and 90s, engineers built complex systems by combining simple and well-understood parts
• Programming today is
  ”[m]ore like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?’”
• MIT chose Python as an alternative for SICP
Stacking: Considerations
Summary
• Keywords:
  • Abelson & Sussman ✓
  • Wolpert ✓
  • Netflix ✓
  • Deep learning ✓
• Related report: “Stacked Generalization of Surrogate Models—A Practical Approach” [Bartz-Beielstein, 2016]
• More: http://www.spotseven.de
Stacking: Considerations
Acknowledgement
• This work has been supported by the Bundesministerium für Wirtschaft und Energie under the grants KF3145101WM3 and KF3145103WM4.
• This work is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 692286.
References
Alessandro Baldi Antognini and Maroussa Zagoraiou. Exact optimal designs for computer experiments via Kriging metamodelling. Journal of Statistical Planning and Inference, 140(9):2607–2617, September 2010.
Thomas Bartz-Beielstein. Experimental Analysis of Evolution Strategies—Overview and Comprehensive Introduction. Technical report, November 2003.
Thomas Bartz-Beielstein. Stacked Generalization of Surrogate Models—A Practical Approach. Technical Report 05/2016, Cologne Open Science, Cologne, 2016. URL https://cos.bibl.th-koeln.de/solrsearch/index/search/searchtype/series/id/8.
Thomas Bartz-Beielstein, Christian Lasarczyk, and Mike Preuß. Sequential Parameter Optimization. In B McKay et al., editors, Proceedings 2005 Congress on Evolutionary Computation (CEC’05), Edinburgh, Scotland, pages 773–780, Piscataway NJ, 2005. IEEE Press.
Andrew J Booker, J E Dennis Jr, Paul D Frank, David B Serafini, and Virginia Torczon. Optimization Using Surrogate Objectives on a Helicopter Test Example. In Computational Methods for Optimal Design and Control, pages 49–58. Birkhäuser Boston, Boston, MA, 1998.
G E P Box and N R Draper. Empirical Model Building and Response Surfaces. Wiley, New York NY, 1987.
J Branke and C Schmidt. Faster convergence by means of fitness estimation. Soft Computing, 9(1):13–20, January 2005.
L Breiman, J H Friedman, R A Olshen, and C J Stone. Classification and Regression Trees. Wadsworth, Monterey CA, 1984.
Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996. ISSN 1573-0565. doi: 10.1023/A:1018054314350. URL http://dx.doi.org/10.1023/A:1018054314350.
D Büche, N N Schraudolph, and P Koumoutsakos. Accelerating Evolutionary Algorithms With Gaussian Process Fitness Function Models. IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), 35(2):183–194, May 2005.
Ivo Couckuyt, Filip De Turck, Tom Dhaene, and Dirk Gorissen. Automatic surrogate model type selection during the optimization of expensive black-box problems. In 2011 Winter Simulation Conference (WSC 2011), pages 4269–4279. IEEE, 2011.
Michael Emmerich, Alexios Giotis, Mutlu Özdemir, Thomas Bäck, and Kyriakos Giannakoglou. Metamodel-assisted evolution strategies. In J J Merelo Guervós, P Adamidis, H G Beyer, J L Fernández-Villacañas, and H P Schwefel, editors, Parallel Problem Solving from Nature—PPSN VII, Proceedings Seventh International Conference, Granada, pages 361–370, Berlin, Heidelberg, New York, 2002. Springer.
Alexander Forrester, András Sóbester, and Andy Keane. Engineering Design via Surrogate Modelling. Wiley, 2008.
Alexander I J Forrester and Andy J Keane. Recent advances in surrogate-based optimization. Progress in Aerospace Sciences, 45(1-3):50–79, January 2009.
G E P Box and K B Wilson. On the Experimental Attainment of Optimum Conditions. Journal of the Royal Statistical Society, Series B (Methodological), 13(1):1–45, 1951.
K C Giannakoglou. Design of optimal aerodynamic shapes using stochastic optimization methods and computational intelligence. Progress in Aerospace Sciences, 38(1):43–76, January 2002.
Tushar Goel, Raphael T Haftka, Wei Shyy, and Nestor V Queipo. Ensemble of surrogates. Struct. Multidisc. Optim., 33(3):199–216, September 2006.
Robert B Gramacy. tgp: An R Package for Bayesian Nonstationary, Semiparametric Nonlinear Regression and Design by Treed Gaussian Process Models. Journal of Statistical Software, 19(9):1–46, June 2007.
P Hajela and E Lee. Topological optimization of rotorcraft subfloor structures for crashworthiness considerations. Computers & Structures, 64(1-4):65–76, July 1997.
Trevor Hastie. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York, 2nd edition, 2009.
Mark Hauschild and Martin Pelikan. An introduction and survey of estimation of distribution algorithms. Swarm and Evolutionary Computation, 1(3):111–128, September 2011.
Jiaqiao Hu, Yongqiang Wang, Enlu Zhou, Michael C Fu, and Steven I Marcus. A Survey of Some Model-Based Methods for Global Optimization. In Daniel Hernández-Hernández and J Adolfo Minjárez-Sosa, editors, Optimization, Control, and Applications of Stochastic Systems, pages 157–179. Birkhäuser Boston, Boston, 2012.
Edward Huang, Jie Xu, Si Zhang, and Chun Hung Chen. Multi-fidelity Model Integration for Engineering Design. Procedia Computer Science, 44:336–344, 2015.
Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. An Evaluation of Sequential Model-based Optimization for Expensive Blackbox Functions. In Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, pages 1209–1216, New York, NY, USA, 2013. ACM.
R Jin, W Chen, and T W Simpson. Comparative studies of metamodelling techniques under multiple modelling criteria. Struct. Multidisc. Optim., 23(1):1–13, December 2001.
Y Jin. A comprehensive survey of fitness approximation in evolutionary computation. Soft Computing, 9(1):3–12, October 2003.
Y Jin, M Olhofer, and B Sendhoff. On Evolutionary Optimization with Approximate Fitness Functions. GECCO, 2000.
D R Jones, M Schonlau, and W J Welch. Efficient Global Optimization of Expensive Black-Box Functions. Journal of Global Optimization, 13:455–492, 1998.
Jack P C Kleijnen. Kriging metamodeling in simulation: A review. European Journal of Operational Research, 192(3):707–716, February 2009.
P Larrañaga and J A Lozano. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Kluwer, Boston MA, 2002.
Minh Nghia Le, Yew Soon Ong, Stefan Menzel, Yaochu Jin, and Bernhard Sendhoff. Evolution by adapting surrogates. Evolutionary Computation, 21(2):313–340, 2013.
S N Lophaven, H B Nielsen, and J Søndergaard. DACE—A Matlab Kriging Toolbox. Technical report, 2002.
J Mockus, V Tiesis, and A Zilinskas. Bayesian Methods for Seeking the Extremum. In L C W Dixon and G P Szegö, editors, Towards Global Optimization, pages 117–129. Amsterdam, 1978.
D C Montgomery. Design and Analysis of Experiments. Wiley, New York NY, 5th edition, 2001.
K P Murphy. Machine learning: a probabilistic perspective, 2012.
Andrea Nelson, Juan Alonso, and Thomas Pulliam. Multi-Fidelity Aerodynamic Optimization Using Treed Meta-Models. In Fluid Dynamics and Co-located Conferences. American Institute of Aeronautics and Astronautics, Reston, Virginia, June 2007.
Emanuele Olivetti. blend.py, 2012. Code available at https://github.com/emanuele/kaggle_pbr/blob/master/blend.py. Published under a BSD 3 license.
Fernando Pérez and Brian E. Granger. IPython: a system for interactive scientific computing. Computing in Science and Engineering, 9(3):21–29, May 2007. ISSN 1521-9615. doi: 10.1109/MCSE.2007.53. URL http://ipython.org.
M J D Powell. Radial Basis Functions. Algorithms for Approximation, 1987.
Mike Preuss. Multimodal Optimization by Means of Evolutionary Algorithms. Natural Computing Series. Springer International Publishing, Cham, 2015.
F Pukelsheim. Optimal Design of Experiments. Wiley, New York NY, 1993.
Nestor V Queipo, Raphael T Haftka, Wei Shyy, Tushar Goel, Rajkumar Vaidyanathan, and P Kevin Tucker. Surrogate-based analysis and optimization. Progress in Aerospace Sciences, 41(1):1–28, January 2005.
Alain Ratle. Parallel Problem Solving from Nature—PPSN V: 5th International Conference, Amsterdam, The Netherlands, September 27–30, 1998, Proceedings, pages 87–96. Springer Berlin Heidelberg, Berlin, Heidelberg, 1998.
Margarita Alejandra Rebolledo Coy, Sebastian Krey, Thomas Bartz-Beielstein, Oliver Flasch, Andreas Fischbach, and Jörg Stork. Modeling and Optimization of a Robust Gas Sensor. Technical Report 03/2016, Cologne Open Science, Cologne, 2016.
E Sanchez, S Pintos, and N V Queipo. Toward an Optimal Ensemble of Kernel-based Approximations with Engineering Applications. In The 2006 IEEE International Joint Conference on Neural Network Proceedings, pages 2152–2158. IEEE, 2006.
T J Santner, B J Williams, and W I Notz. The Design and Analysis of Computer Experiments. Springer, Berlin, Heidelberg, New York, 2003.
M Schonlau. Computer Experiments and Global Optimization. PhD thesis, University of Waterloo, Ontario, Canada, 1997.
L Shi and K Rasheed. A Survey of Fitness Approximation Methods Applied in Evolutionary Algorithms. In Computational Intelligence in Expensive Optimization Problems, pages 3–28. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010.
Timothy Simpson, Vasilli Toropov, Vladimir Balabanov, and Felipe Viana. Design and Analysis of Computer Experiments in Multidisciplinary Design Optimization: A Review of How Far We Have Come - Or Not. In 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pages 1–22, Reston, Virginia, June 2012. American Institute of Aeronautics and Astronautics.
G Sun, G Li, S Zhou, W Xu, X Yang, and Q Li. Multi-fidelity optimization for sheet metal forming process. Structural and Multidisciplinary . . ., 2011.
M J van der Laan and E C Polley. Super Learner in Prediction. UC Berkeley Division of Biostatistics Working Paper . . ., 2010.
Henk van Veen. Using Ensembles in Kaggle Data Science Competitions, 2015. URL http://www.kdnuggets.com/2015/06/ensembles-kaggle-data-science-competition-p1.html.
V N Vapnik. Statistical Learning Theory. Wiley, 1998.
G Gary Wang and S Shan. Review of Metamodeling Techniques in Support of Engineering Design Optimization. Journal of Mechanical . . ., 129(4):370–380, 2007.
David H Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, January 1992.
Luis E Zerpa, Nestor V Queipo, Salvador Pintos, and Jean-Louis Salager. An optimization methodology of alkaline–surfactant–polymer flooding processes using field scale numerical simulation and multiple surrogates. Journal of Petroleum Science . . ., 47(3-4):197–208, June 2005.
Z Zhou, Y S Ong, P B Nair, A J Keane, and K Y Lum. Combining Global and Local Surrogate Models to Accelerate Evolutionary Optimization. IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), 37(1):66–76, 2007.
Mark Zlochin, Mauro Birattari, Nicolas Meuleau, and Marco Dorigo. Model-Based Search for Combinatorial Optimization: A Critical Survey. Annals of Operations Research, 131(1-4):373–395, 2004.
J M Zurada. Analog implementation of neural networks. IEEE Circuits and Devices Magazine, 8(5):36–41, 1992.
SYNERGY Horizon 2020 – GA No 692286
D3.2 53 31 July 2018
Artificial Intelligence for Smart Factories
Visibility and Pilot Projects, 3 July 2018
Industry 4.0 Practice Exchange (IZMENJALNICA praks Industrije 4.0)

Brief Introduction

• The Jožef Stefan Institute (http://www.ijs.si/) is the leading Slovenian research institution for basic and applied research in the natural sciences
• Two of its departments take part in the Horizon 2020 project SYNERGY:
  • Computer Systems Department ([email protected])
  • Department of Intelligent Systems
Computer Systems Department

• Research areas:
  • Computational intelligence
  • IoT / embedded systems
  • HPC
  • HCI
• Relevant applications:
  • Optimisation of electric-motor geometry (36% better efficiency achieved)
  • Fast simulation and optimisation of device control
  • Optimisation of production scheduling (90+% utilisation of production lines achieved)
Department of Intelligent Systems

• Research areas:
  • Ambient intelligence
  • Computational intelligence
  • Agent systems
  • Speech and language technologies
• Relevant applications:
  • Optimisation of process parameters in steel casting
  • Multi-objective design of renewable energy supply systems
  • Quality control in the production of components for the automotive industry
The Horizon 2020 Project SYNERGY

• Goal:
  • Improve the quality of research in multi-objective optimisation and contribute to its transfer into industrial practice within the Smart Specialisation framework
  • Details: http://synergy-twinning.eu/
• Partners:
  • Jožef Stefan Institute, Slovenia
  • University of Lille, France
  • Technische Hochschule Köln, Germany
Artificial Intelligence

• “Intelligence” as exhibited by devices / machines
• Based on combining computer science and mathematics with the goal of imitating human cognitive abilities
• Example uses (methods):
  • Pattern recognition (neural networks)
  • Matching-based classification (machine learning)
  • Intelligent search for solutions (optimisation)
Advantages of Using Artificial Intelligence

• Reduces human error
• Works in environments unsuitable for humans
• Supports decision making
• “Works” without rest
• Drawback:
  • There is no “general” intelligence
  • Artificial intelligence systems operate only for a precisely defined purpose, i.e., they solve only the given problem
Complexity of Use

• Many artificial intelligence tools exist
• Their use requires substantial knowledge of the tool and of adapting the problem to the tool
• Due to the specific requirements of each problem, general-purpose tools often prove ineffective
• Hence the need for problem-tailored artificial intelligence
  • Requires knowledge of the problem
  • Requires knowledge of approaches for solving the problem effectively
The Role of Domain Experts

• From industry:
  • Knowledge of the problem
  • Evaluation of the artificial intelligence’s performance
• From research:
  • Selection of a suitable approach
  • Development of problem-tailored intelligence
  • Parameter tuning for optimal performance
• Challenge: communication between the two worlds
Artificial Intelligence in Digital Twins

• A digital copy of the physical world (assets, systems, processes)
• Requirement:
  • Using artificial intelligence and analytics on data from the physical world, build simulation models (the digital twin) that are as realistic as possible, faithfully model the physical world (the physical twin), and respond to changes in it
• Goal:
  • Optimise predictive maintenance and the planning of various procedures, systems and production processes
Using Digital Twins

• Using artificial intelligence to improve digital twins
• The digital twin is used as a tool on which we can learn without intervening in the physical twin
  • Example: assessing what the impact of a particular change would be
• The digital twin is used to improve ongoing processes
Two Examples from Practice

• Optimisation of production process parameters
• Quality control in the production of components for the automotive industry
Optimisation of Production Process Parameters
Partner: ETA Cerkno d.o.o.

PROBLEM
• ETA Cerkno manufactures components for household appliances (e.g., cooking plates, thermostats and heating elements)
• One of the main concerns in production planning is ensuring efficient process scheduling
Optimisation of Production Process Parameters
Partner: ETA Cerkno d.o.o.

SOLUTION
• A simulation tool (digital twin) for evaluating the production process, in which production is simulated based on data obtained from the company
• A tailored optimisation algorithm (artificial intelligence) that accounts for all production specifics to enable efficient scheduling
• An efficient interface that allows easy monitoring of the planned production
• Dynamic optimisation process: new orders, stock constraints and raw-material availability feed the optimisation algorithm, which maintains the current best schedule
Optimisation of Production Process Parameters
Partner: ETA Cerkno d.o.o.

RESULTS
• The weekly production schedule previously prepared by an expert was replaced by a comprehensive schedule with 90+% utilisation of all lines
• The production schedule respects deadlines and quantity constraints for all orders
• The schedule adapts quickly and efficiently to new orders
• Production can be planned efficiently with respect to multiple criteria
Quality Control in the Production of Components for the Automotive Industry
Partner: Kolektor Group d.o.o., Idrija

PROBLEM
• Kolektor is the world’s leading manufacturer of commutators for electric motors used in the automotive industry (e.g., in fuel pumps)
• The market demands high product reliability, which can only be ensured with advanced quality-control procedures in production
• Task: develop a computer-supported procedure for automatic product quality inspection
Quality Control in the Production of Components for the Automotive Industry
Partner: Kolektor Group d.o.o., Idrija

SOLUTION
• Three methodologies: computer vision, machine learning, optimisation
• Computer vision: capturing digital images of the products and determining the values of their features
• Machine learning: building predictive models that assess product quality from the images
• Optimisation: tuning the parameters of the computer vision and machine learning algorithms to increase the accuracy and efficiency of the predictive models
Quality Control in the Production of Components for the Automotive Industry
Partner: Kolektor Group d.o.o., Idrija

RESULTS
• The developed procedure is implemented as an embedded computer system on the production line
• The automated quality control is reliable and efficient, and detects defects in early production stages
• It ensures higher quality of the manufactured components and thus gives the manufacturer a competitive advantage
Thank You for Your Attention

Peter Korošec, Bogdan Filipič
A Gentle Introduction to Kriging
SYNERGY Training
Martin Zaefferer - TH Köln
2016-06-17
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 692286.
Introduction
Overview
• Purpose: an easy-to-follow introduction to Kriging
  • What is Kriging
  • How does it work
  • How do we apply it
  • Prominent features and limitations

Martin Zaefferer - TH Köln Kriging Introduction 2016-06-17 2 / 54

Introduction
Origin and Current Use
• Origin:
  • Mining for gold [Krige, 1951]
    • Based on samples from test drilling
    • Most likely distribution of gold
  • Geostatistics
• What interests us today:
  • Kriging for computer experiments [Sacks et al., 1989]
  • Efficient Global Optimization [Jones et al., 1998]
  • Frequently used in engineering optimization [Forrester et al., 2008]
First steps Linear Model
Simple model
• n samples X = {x^(i) = (x^(i)_1, ..., x^(i)_k)}_{i=1,...,n}
• k: search space dimension
• respective observations y = {y^(i)}_{i=1,...,n}
• Simple assumption: the observations are derived from the model

  y^(i) = Σ_h β_h f_h(x^(i)) + ε^(i)

• β_h is an unknown coefficient
• f_h: an arbitrary function of x
• Error of each sample: ε^(i)
• The errors are normally distributed with zero mean and variance σ²
  → the errors are assumed to be independent
• One-dimensional example:

  y^(i) = β_0 + β_1 x^(i) + ε^(i)
First steps Linear Model
R: Create Example Data

set.seed(1)  # random number generator seed
x <- 1:10  # input data
(y <- 2*x - 4 + rnorm(length(x)))  # observations

## [1] -2.6264538  0.1836433  1.1643714  5.5952808  6.3295078  7.1795316
## [7] 10.4874291 12.7383247 14.5757814 15.6946116

par(mar=c(4,4,.1,.1))  # plot options: margins
plot(x,y)
[Figure: scatter plot of the example data (x, y)]
First steps Linear Model
R: Create Model
• Create a linear model with the R function lm:

fit <- lm(y~x+1, data.frame(x,y))
fit

## Call:
## lm(formula = y ~ x + 1, data = data.frame(x, y))
##
## Coefficients:
## (Intercept)            x
##      -4.169        2.055

• Intercept: estimate of β0
• x: estimate of β1
• True values: β0 = −4 and β1 = 2
First steps Linear Model
R: Plot Model
xnew <- seq(from=1, to=10, by=0.01)
ypred <- predict(fit, data.frame(x=xnew))
ytrue <- 2*xnew - 4
par(mar=c(4,4,.1,.1))
plot(xnew, ytrue, type="l", col="black", xlab="x", ylab="y", lty=2)
lines(xnew, ypred, col="red", lty=1); points(x, y)
legend(2, 15, c("Predicted","True"), col=c("red","black"), lty=1:2)
2 4 6 8 10
05
1015
x
y
PredictedTrue
First steps Maximum Likelihood Estimation
Manual estimation of coefficients
• The lm function calculates the coefficients automatically
• But how can we do that manually?
• One option: Maximum Likelihood Estimation
• Determine the parameters for which the observed data are maximally likely
• Assumptions in our case: errors are normally distributed and independent, with ε^(i) ~ N(0, σ²) and y^(i) = β_0 + β_1 x^(i) + ε^(i), hence

  y^(i) ~ N(β_0 + β_1 x^(i), σ²).
First steps Maximum Likelihood Estimation
Likelihood
• Likelihood of a single sample: normal PDF

  pdf_N(y^(i), µ, σ) = 1/(σ√(2π)) exp[ −(y^(i) − µ)² / (2σ²) ]

• With µ = β_0 + β_1 x^(i) we get the likelihood

  L(y^(i)) = 1/(σ√(2π)) exp[ −(y^(i) − β_0 − β_1 x^(i))² / (2σ²) ]

• σ, β_0, β_1: to be estimated
First steps Maximum Likelihood Estimation
Likelihood
• Combined likelihood of all samples
• Remember: errors in this example are assumed to be independent

  L(y) = ∏_{i=1}^{n} L(y^(i))

• Interpretation: likelihood of the parameters, given the observed data

  L(σ, β_0, β_1) = L(y) = 1/(σⁿ (2π)^(n/2)) exp[ −1/(2σ²) Σ_{i=1}^{n} (y^(i) − β_0 − β_1 x^(i))² ]

• Goal: parameters σ, β_0 and β_1 that maximize the likelihood
• Maximum Likelihood Estimation (MLE)
First steps Maximum Likelihood Estimation
MLE
• Step 1: simplify by taking the logarithm

  ln(L(σ, β_0, β_1)) = −(n/2) ln(2π) − (n/2) ln(σ²) − 1/(2σ²) Σ_{i=1}^{n} (y^(i) − β_0 − β_1 x^(i))²
• Step 2: Set partial derivatives to zero → maximum
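Before working through the derivatives, note that the same estimates can also be obtained by handing the negative log-likelihood directly to a numerical optimizer; a small cross-check sketch (function and variable names are our own, not from the slides):

```r
set.seed(1)                      # same example data as before
x <- 1:10
y <- 2*x - 4 + rnorm(length(x))

# negative log-likelihood of the linear model; par = (beta0, beta1, log(sigma))
negll <- function(par, x, y) {
  -sum(dnorm(y, mean = par[1] + par[2]*x, sd = exp(par[3]), log = TRUE))
}
fit <- optim(c(0, 1, 0), negll, x = x, y = y)
fit$par[1:2]  # should be close to the lm() estimates -4.169 and 2.055
```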
First steps Maximum Likelihood Estimation
MLE β0
• Partial derivative with respect to β_0:

  ∂ln L/∂β_0 = −2 · 1/(2σ²) Σ_{i=1}^{n} (−y^(i) + β_0 + β_1 x^(i)) = 0

  Σ_{i=1}^{n} (y^(i) − β_1 x^(i)) = n β_0

  β_0 = (Σ_{i=1}^{n} (y^(i) − β_1 x^(i))) / n

  β_0 = ȳ − β_1 x̄

x̄ and ȳ: mean values of x and y
First steps Maximum Likelihood Estimation
MLE β1 and σ2
• Partial derivative with respect to β_1:

  ∂ln L/∂β_1 = −2 · 1/(2σ²) Σ_{i=1}^{n} (y^(i) − β_0 − β_1 x^(i))(−x^(i)) = 0

  β_1 = Cov(x, y) / Var(x)

• Partial derivative with respect to σ²:

  ∂ln L/∂σ² = −n/(2σ²) + 1/(2σ⁴) Σ_{i=1}^{n} (y^(i) − β_0 − β_1 x^(i))² = 0

  σ² = (Σ_{i=1}^{n} (y^(i) − β_0 − β_1 x^(i))²) / n = MSE(y, ŷ)
Cov: Covariance, Var: Variance, MSE: Mean Squared Error
First steps Maximum Likelihood Estimation
MLE in R

b1 <- cov(x,y)/var(x)  # == sum(x*(y-mean(y))) / sum(x*(x-mean(x)))
b0 <- mean(y) - b1*mean(x)
ssq <- sum((y - b0 - b1*x)^2)/length(y)

The β parameters should be identical to those from lm:

fit

##
## Call:
## lm(formula = y ~ x + 1, data = data.frame(x, y))
##
## Coefficients:
## (Intercept)            x
##      -4.169        2.055
b1
## [1] 2.054732
b0
## [1] -4.168824
Kriging Basics
Motivation of Kriging
• Problem of the earlier model: fixed structure
• Example: an even simpler model structure

  y^(i) = β_0 + ε^(i)

• for deterministic measurements

x <- 1:10
(y <- 2*x - 4)

## [1] -2  0  2  4  6  8 10 12 14 16

par(mar=c(4,4,.1,.1))
plot(x,y)
[Plot: observations y over x]
Kriging Basics
Motivation of Kriging
(fit <- lm(y~1,data.frame(x,y)))
##
## Call:
## lm(formula = y ~ 1, data = data.frame(x, y))
##
## Coefficients:
## (Intercept)
##           7

xnew <- seq(from=1, to=10, by=0.01)
ypred <- predict(fit, data.frame(x=xnew))
ytrue <- 2*xnew - 4
Kriging Basics
Motivation of Kriging
par(mar=c(4,4,.1,.1))
plot(xnew, ytrue, type="l", col="black", xlab="x", ylab="y", lty=2)
lines(xnew, ypred, col="red", lty=1); points(x, y)
legend(2, 15, c("Predicted","True"), col=c("red","black"), lty=1:2)

[Plot: constant prediction (red) versus true line (dashed) with the data points]
Kriging Basics
Motivation of Kriging
• The error is now only due to the model structure
• It depends on the location x
• → Errors are not independent any more!
• → Errors are correlated!
• Assumption:
  • Small distance → high correlation
  • Large distance → lower correlation
Kriging Basics
• Correlation function:

  Corr[ε(x^(i)), ε(x^(j))] = exp[ −δ(x^(i), x^(j)) ]

• δ(·): function to evaluate the distance between the two samples,

  δ(x^(i), x^(j)) = θ |x^(i) − x^(j)|^p.
Kriging Basics
xdist = seq(from=0, to=2.5, by=0.01)
theta = c(1,3,1,3); p = c(1,1,2,2)
r1 = exp(-theta[1] * xdist^p[1])
r2 = exp(-theta[2] * xdist^p[2])
r3 = exp(-theta[3] * xdist^p[3])
r4 = exp(-theta[4] * xdist^p[4])
par(mar=c(4,5,1,1))  # reset plot margins for larger axis labels
plot(xdist, r1, type="l", xlab=expression(abs(x^(i) - x^(j))),
     ylab=expression(paste("Corr[", epsilon(x^(i)), ",", epsilon(x^(j)), "]")),
     lty=1, ylim=c(0,1))
lines(xdist, r2, lty=2); lines(xdist, r3, lty=3); lines(xdist, r4, lty=4)
legend(1.5, 1, c(expression(paste(theta, "=1 and p=1")),
                 expression(paste(theta, "=3 and p=1")),
                 expression(paste(theta, "=1 and p=2")),
                 expression(paste(theta, "=3 and p=2"))), lty=1:4)
Kriging Basics
[Plot: correlation Corr[ε(x^(i)), ε(x^(j))] over distance |x^(i) − x^(j)| for θ = 1, 3 and p = 1, 2]
Kriging Basics
Correlation function: features and parameters
• Distance zero → correlation one
• Correlation decreases with increasing distance
• θ defines the speed of the decrease
• p defines the shape/acuteness of the curve
• But how do we determine parameters θ and p?
Kriging MLE
MLE of Kriging Parameters
• Again: MLE
• Problem:
  • Errors are not independent!
  • The total likelihood is NOT the product of the individual likelihoods
• Solution: PDF of the multivariate normal distribution

  pdf_MVN(y) = 1/((2π)^(n/2) |Σ|^(1/2)) exp[ −(1/2) (y − 1µ)ᵀ Σ⁻¹ (y − 1µ) ]

• n: number of observations
• Σ = σ²Ψ: covariance matrix
• |Σ|: determinant
• Ψ: correlation matrix with Ψ_ij = Corr[ε(x^(i)), ε(x^(j))]

  L = 1/((2πσ²)^(n/2) |Ψ|^(1/2)) exp[ −(y − 1µ)ᵀ Ψ⁻¹ (y − 1µ) / (2σ²) ]
Kriging MLE
Kriging MLE: µ and σ2
• Unknown model parameters: θ, p, µ and σ²
• Assume p = 2 (to keep it simple)
• µ and σ²: set the partial derivatives to zero and solve:

  µ = (1ᵀ Ψ⁻¹ y) / (1ᵀ Ψ⁻¹ 1)

  σ² = ((y − 1µ)ᵀ Ψ⁻¹ (y − 1µ)) / n
What about θ ?
Kriging MLE
Kriging MLE: θ
• Resubstitute µ and σ² in ln(L)
• Remove constant terms → concentrated ln-likelihood

  ln(L) ≈ conln(L) = −(n/2) ln(σ²) − (1/2) ln(|Ψ|).

• Find the θ that optimizes conln(L)
• This can be a complex, nonlinear optimization problem
• Solve it with an appropriate numerical optimizer (e.g., an evolutionary algorithm)
Kriging MLE
Kriging Predictor
• Reminder: model structure

  y^(i) = β_0 + ε(x^(i))
• where the errors ε(x^(i)) are realizations of a Gaussian process
• But how to predict?
Kriging MLE
Kriging Predictor
• How to predict y at unknown location x∗
• Basic idea: add the unknown value ŷ to the list of known observations

  ỹ = {yᵀ, ŷ}ᵀ

• Treat ŷ as a model parameter
• Again use MLE: find the ŷ that maximizes the likelihood
Kriging MLE
Kriging Predictor
• Correlations between the known locations and x∗:

  ψ = (Corr[x^(1), x∗], ..., Corr[x^(n), x∗])ᵀ

• Augmented correlation matrix:

  Ψ̃ = ( Ψ   ψ
        ψᵀ  1 )

• Substitute Ψ̃ and ỹ into the likelihood function and maximize
• Result: predictor for y at x∗:

  ŷ = µ + ψᵀ Ψ⁻¹ (y − 1µ).
Kriging Implementation in R
Implement Likelihood Function in R

conlnlik <- function(theta, x, y, p=2){
  theta <- 10^theta  # retransformation
  n <- length(y)  # number of observations
  xdist <- as.matrix(dist(x))  # distance matrix of the one-dim. input vector
  Psi <- exp(-theta*xdist^p)  # correlation matrix
  Psinv <- try(solve(Psi), TRUE)  # inversion
  if(class(Psinv) == "try-error"){
    #return(1e4)  # penalty, in case of a numerical optimizer
    return(NA)
  }
  ones <- rep(1,n)
  mu <- (ones %*% Psinv %*% y) / (ones %*% Psinv %*% ones)
  ymu <- (y - ones*mu)
  ssq <- (ymu %*% Psinv %*% ymu)/n
  LnDetPsi <- as.numeric(determinant.matrix(Psi)$modulus)
  lnlike <- -(-n/2 * log(ssq) - LnDetPsi/2)  # negated, for minimization
  return(lnlike)
}
# example call
conlnlik(-1, x, y)

##           [,1]
## [1,] -2.660267
Kriging Implementation in R
Optimize likelihood
• Next step: optimize the likelihood with a suitable algorithm
• or, in our simple case: manually
• θ values to be tested:

theta <- seq(from=-3, by=0.01, to=3)  # actually, this is log10(theta)

• calculate the (negative concentrated ln-) likelihood:

L <- as.numeric(sapply(theta, conlnlik, x=x, y=y))
Kriging Implementation in R
Likelihood Landscape
• Plot the likelihood:

par(mar=c(4,4,1,1))
plot(theta, L, type="l", xlab="lg(theta)", ylab="Neg. Con. Ln-Likelihood")
[Plot: negative concentrated ln-likelihood over lg(theta)]

optimum: θ ≈ 10^−2
Kriging Implementation in R
Compute Prediction
xnew <- seq(from=0, by=0.01, to=10)  # to be predicted
p <- 2
theta <- 10^-2
n <- length(y)  # number of observations
xdist <- as.matrix(dist(x))  # distance matrix of the one-dim. input vector
Psi <- exp(-theta*xdist^p)  # correlation matrix
Psinv <- solve(Psi)  # inversion
ones <- rep(1,n)
mu <- (ones %*% Psinv %*% y) / (ones %*% Psinv %*% ones)
ymu <- (y - ones*mu)
ypred <- NULL
for(i in 1:length(xnew)){  # predict each sample
  psi <- exp(-theta*(xnew[i]-x)^2)  # correlation between old and new solutions
  ypred <- c(ypred, mu + psi %*% Psinv %*% ymu)
}
Kriging Implementation in R
Plot Prediction

ytrue <- 2*xnew - 4
par(mar=c(4,4,1,1))
plot(xnew, ytrue, type="l", col="black", xlab="x", ylab="y", lty=2)
lines(xnew, ypred, col="red", lty=1)
points(x, y)
legend(0.5, 15, c("Predicted","True"), col=c("red","black"), lty=1:2)

[Plot: Kriging prediction (red) and true line (dashed) with the data points]
Kriging Non-linear test case
More Difficult Target Function
• What happens in the case of more complex, nonlinear problems?

tfun <- function(x) x^4 - 2*x^2 + x
x <- seq(from=-1.5, by=0.5, to=1.5)
y <- tfun(x)
theta <- seq(from=-3.5, by=0.01, to=5)
L <- as.numeric(sapply(theta, conlnlik, x=x, y=y))
Kriging Non-linear test case
Plot Likelihood

par(mar=c(4,4,1,0))
plot(theta, L, xlab="lg(theta)", type="l", ylab="Neg. Con. Ln-Likelihood")
thetahat <- theta[which.min(L)]
thetahat
## [1] 0.37
[Plot: negative concentrated ln-likelihood over lg(theta)]
Kriging Non-linear test case
Predict
xnew <- seq(from=-1.5, by=0.01, to=1.5)  # solutions to be predicted
p <- 2
theta <- 10^thetahat
n <- length(y)  # number of observations
xdist <- as.matrix(dist(x))  # distance matrix of the one-dim. input vector
Psi <- exp(-theta*xdist^p)  # correlation matrix
Psinv <- solve(Psi)  # inversion
ones <- rep(1,n)
mu <- (ones %*% Psinv %*% y) / (ones %*% Psinv %*% ones)
ymu <- (y - ones*mu)
ypred <- NULL
for(i in 1:length(xnew)){  # predict each sample
  psi <- exp(-theta*(xnew[i]-x)^2)  # correlation between old and new solutions
  ypred <- c(ypred, mu + psi %*% Psinv %*% ymu)
}
Kriging Non-linear test case
Plot Prediction

par(mar=c(4,4,1,0))
ytrue <- tfun(xnew)
plot(xnew, ytrue, type="l", col="black", xlab="x", ylab="y", lty=2)
lines(xnew, ypred, col="red", lty=1)
points(x, y)
legend(-1.25, 2, c("Predicted","True"), col=c("red","black"), lty=1:2)

[Plot: Kriging prediction (red) closely follows the true nonlinear function (dashed)]
Kriging Non-linear test case
Summary
• Simple, basic model structure
• Models the correlation of the errors
• Very flexible, regardless of the structure
• Important feature: provides an estimate of the prediction uncertainty s(x)
Kriging Uncertainty estimation
Uncertainty of Prediction

• ŷ: mean (prediction at x∗)
• ŝ: standard deviation (uncertainty of the prediction at x∗)

  s²(x) = σ² [ 1 − ψᵀ Ψ⁻¹ ψ + (1 − 1ᵀ Ψ⁻¹ ψ) / (1ᵀ Ψ⁻¹ 1) ]

• Possible uses:
  • Probability of Improvement (PI)
  • Expected Improvement (EI)
  • Probability of Feasibility (PF)
Kriging Uncertainty estimation
Kriging uncertainty estimation in R
preds <- function(xnew, x, y, theta, p){
  n <- length(y)  # number of observations
  xdist <- as.matrix(dist(x))  # distance matrix of the one-dim. input vector
  Psi <- exp(-theta*xdist^p)  # correlation matrix
  Psinv <- solve(Psi)  # inversion
  #Psinv <- chol2inv(chol(Psi))  # alternative with Cholesky decomposition
  ones <- rep(1,n)
  mu <- (ones %*% Psinv %*% y) / (ones %*% Psinv %*% ones)
  ymu <- (y - ones*mu)
  SigmaSqr <- (t(ymu) %*% Psinv %*% ymu) / n
  spred <- NULL
  for(i in 1:length(xnew)){  # predict each sample
    psi <- as.matrix(exp(-theta*(xnew[i]-x)^2))
    spred <- c(spred, SigmaSqr * (1 - diag(t(psi) %*% (Psinv %*% psi))))
  }
  pmax(0, spred)  # avoid negative uncertainty (due to numerical issues)
}
preds(x, x, y, theta, 2)  # test: should be a vector of zeros (or close to zero)

## [1] 0.000000e+00 0.000000e+00 0.000000e+00 5.396103e-16 0.000000e+00
## [6] 1.798701e-16 1.798701e-16
Kriging Uncertainty estimation
Kriging uncertainty estimation in R
spred <- preds(xnew, x, y, theta, 2)
par(mar = c(5, 4, 4, 4) + 0.3)  # leave space for a second y axis
plot(xnew, spred, type="l", col="blue", xlab="x", ylab="s", lty=3, lwd=2)
par(new = TRUE)
plot(xnew, ytrue, type="l", col="black", xaxt="n", yaxt="n", xlab="", ylab="", lty=2)
axis(4)
mtext("y", side=4, line=3)
lines(xnew, ypred, col="red", lty=1)
points(x, y, pch=20)
legend("top", c("Predicted","True","Uncertainty"),
       col=c("red","black","blue"), lty=1:3, lwd=c(1,1,3))
Kriging Uncertainty estimation
Kriging uncertainty estimation in R
[Plot: prediction (red), true function (dashed) and uncertainty s (blue, right axis)]
Kriging Uncertainty estimation
Reminder: Surrogate-model optimization, SPO
• build surrogate model
• optimize surrogate model
• evaluate

[Figures: the three SPO steps illustrated on a sample function f(x)]
Kriging Uncertainty estimation
Improvement
• Improvement: I = ymin − y
• where ymin is the observed minimum
• Improvement predicted at x∗: I(x∗) = ymin − ŷ(x∗)

[Figure: the improvement I below ymin under the predictive distribution]
Kriging Uncertainty estimation
Probability of Improvement
• Probability of Improvement: the probability of y being lower than ymin

  PI(x∗) = Φ( I(x∗) / s(x∗) )

• Φ(·): cumulative distribution function (normal distribution)

[Figure: PI(x∗) as the area below ymin under the predictive distribution with mean ŷ(x∗) and spread ŝ(x∗)]
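The PI formula maps directly to R's pnorm; a tiny sketch with illustrative numbers (the function name pi_crit is our own, not from the slides):

```r
# probability of improvement: Phi(I/s) with I = ymin - yhat; defined as 0 where s = 0
pi_crit <- function(yhat, s, ymin) {
  ifelse(s <= 0, 0, pnorm((ymin - yhat) / s))
}

pi_crit(yhat = 0, s = 1, ymin = 0)  # prediction exactly at ymin: PI = 0.5
pi_crit(yhat = 2, s = 1, ymin = 0)  # prediction far above ymin: PI is small
pi_crit(yhat = 0, s = 0, ymin = 1)  # no uncertainty: PI defined as 0 here
```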
Kriging Uncertainty estimation
Probability of Improvement
[Figure: PI at a candidate point x∗ on the surrogate model of f(x), relative to ymin]
Kriging Uncertainty estimation
Expected Improvement
• Expected value: probability-weighted average of all values of a random variable
• Intuition of EI: every possible improvement I, weighted by its probability
• In our (continuous) case: integral calculation

[Figure: improvement values below ymin weighted by the predictive density]
• If s(x∗) = 0 then EI(x∗) = 0
• Else

  EI(x∗) = I(x∗) Φ( I(x∗) / s(x∗) ) + s(x∗) φ( I(x∗) / s(x∗) )

• φ(·): probability density function (normal distribution)
Kriging Uncertainty estimation
Expected Improvement in R
In our simple example, maximizing EI and minimizing the prediction ŷ identify the same point:
xnew[which.min(ypred)]
## [1] -1.02
ei <- function(mean, sd, min) {
  EITermOne <- (min - mean) * pnorm((min - mean)/sd)
  EITermTwo <- sd * (1/sqrt(2*pi)) * exp(-(1/2) * ((min - mean)^2/(sd^2)))
  -log10(EITermOne + EITermTwo + .Machine$double.xmin)
}
xnew[which.min(ei(ypred,spred,min(y)))]
## [1] -1.02
Kriging Uncertainty estimation
Efficient Global Optimization
• Use of EI: Efficient Global Optimization [Jones et al., 1998]
• Maximize EI in surrogate-model optimization
• Balance between exploration and exploitation:
  • Explore regions where the model is not certain
  • The optimization is less likely to get stuck
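Putting the previous building blocks together, a minimal EGO loop might look as follows. This is a self-contained sketch, not production code: θ is fitted by a coarse grid search, EI is maximized over a fixed candidate grid, and a small nugget is added to the diagonal for numerical stability (the nugget and the grid sizes are our simplifications, not part of the original EGO formulation):

```r
tfun <- function(x) x^4 - 2*x^2 + x   # stand-in for an expensive function

# fit a 1-d Kriging model (p = 2) and predict mean and standard deviation at xnew
krigfit <- function(x, y, xnew, p = 2) {
  n <- length(y)
  nll <- function(lt) {               # concentrated neg. ln-likelihood over log10(theta)
    Psi <- exp(-10^lt * as.matrix(dist(x))^p) + diag(1e-8, n)
    Psinv <- solve(Psi); ones <- rep(1, n)
    mu <- (ones %*% Psinv %*% y) / (ones %*% Psinv %*% ones)
    ymu <- y - ones*mu
    as.numeric(n/2 * log((ymu %*% Psinv %*% ymu)/n) +
               as.numeric(determinant(Psi)$modulus)/2)
  }
  lt <- seq(-2, 2, by = 0.1)
  theta <- 10^lt[which.min(sapply(lt, nll))]
  Psi <- exp(-theta * as.matrix(dist(x))^p) + diag(1e-8, n)
  Psinv <- solve(Psi); ones <- rep(1, n)
  mu <- as.numeric((ones %*% Psinv %*% y) / (ones %*% Psinv %*% ones))
  ymu <- y - ones*mu
  ssq <- as.numeric((ymu %*% Psinv %*% ymu)/n)
  ypred <- spred <- numeric(length(xnew))
  for (i in seq_along(xnew)) {
    psi <- exp(-theta * (xnew[i] - x)^2)
    ypred[i] <- mu + psi %*% Psinv %*% ymu
    spred[i] <- sqrt(max(0, ssq * (1 - psi %*% Psinv %*% psi)))
  }
  list(y = ypred, s = spred)
}

ei <- function(m, s, ymin) {           # expected improvement, 0 where s = 0
  ifelse(s <= 0, 0, (ymin - m)*pnorm((ymin - m)/s) + s*dnorm((ymin - m)/s))
}

X <- c(-1.5, 0, 1.5); Y <- tfun(X)     # initial design
cand <- seq(-1.5, 1.5, by = 0.01)      # candidate grid for the EI search
for (iter in 1:5) {                    # five EGO iterations
  fit <- krigfit(X, Y, cand)
  xstar <- cand[which.max(ei(fit$y, fit$s, min(Y)))]
  X <- c(X, xstar); Y <- c(Y, tfun(xstar))
}
min(Y)   # best observed value so far
```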
Kriging Uncertainty estimation
Summary
• Advantages:
  • Predicting black-box, non-linear data
  • Very flexible
  • Provides an uncertainty estimate (PI, EI, ...)
  • Able to handle noisy observations (*)
• Disadvantages:
  • High dimensionality may be a problem (≫ 20 parameters)
  • Data should be smooth (i.e., small distances should lead to small changes in function values)
  • Strong outliers are problematic

(*) Handling noise requires some adaptations (nugget effect, re-interpolation), see [Forrester et al., 2008]
Kriging Uncertainty estimation
Disclaimer
• The code looks simple but can become arbitrarily complex, due to:
  • Including multiple parameters
  • Including noise handling
  • Including custom kernel functions
  • Including custom regression functions
  • Improving numerical stability
  • Improving speed of computation (vectorization, parallelization, implementation in C/C++)
• Hence, the presented code is instructive
• But for productive use, consider existing code as a starting point:
  • R: SPOT package – forrBuilder (fast but not that flexible), daceBuilder (slower but more flexible)
  • R: DiceKriging package (rather fast but potentially unstable)
  • R: and many more, all with their own strengths and weaknesses
  • Python: scikit-learn, GPy, ...
References
References I
Forrester, A., Sobester, A., and Keane, A. (2008). Engineering Design via Surrogate Modelling. Wiley.

Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492.

Krige, D. (1951). A statistical approach to some basic mine valuation problems on the Witwatersrand. Journal of the Chemical, Metallurgical and Mining Society of South Africa, 52(6):119–139.

Sacks, J., Welch, W. J., Mitchell, T. J., Wynn, H. P., et al. (1989). Design and analysis of computer experiments. Statistical Science, 4(4):409–423.
Meta-Model Assisted (Evolutionary) Optimization
Tutorial at PPSN 2016 – 18.09.2016
Boris Naujoks, Jörg Stork, Martin Zaefferer, Thomas Bartz-Beielstein
This project has received funding from the European Union’s Horizon 2020research and innovation programme under grant agreement No 692286.
Intro and Motivation
• Synonyms
  • Metamodels
  • Surrogates
  • Response surface models
  • Approximation models
  • Simulation models
  • Data-driven models
  • Emulators
• From Latin surrogatus – a replacement for something, a substitute or alternative; perfect passive participle of surrogare
• Variant of subrogare, from sub (under) + rogare (ask)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 2 / 74
Most common applications
• Engineering design
• Long, expensive fitness function evaluations
  • Finite element models
  • Computational fluid dynamics models
• Examples
  • Airfoil design
  • Ship propulsion systems
  • etc.
Additional variable geometry parameters for the linear jet, in contrast to the simpler propeller blade optimization, are:
• the hub's diameter and length
• the nozzle's length and profile angle.
This results in an optimization problem featuring 14 decision parameters, in contrast to the nine decision parameters of the pure propeller blade optimization problem described above. The stator is not included in the optimization yet.
Again, like in the task presented before, a geometry that delivers more or less thrust compared to the desired value is "punished" with a higher target value. By a similar "punishment" of a geometry generating cavitation and a "reward" for efficiency, we calculate a single target value which is returned to the optimization program after every hydrodynamic simulation.
Fig. 2. Visualization of the linear jet propulsion system; within the two-dimensional view, the three different parts blades, hub and nozzle can be identified.
III. META-MODEL-ASSISTED EVOLUTIONARY OPTIMIZATION
The idea to assist direct search algorithms by meta-models has first been explored by Torczon et al. [8], [9] for pattern search algorithms. A similar approach can be employed in evolutionary algorithms (EA) by incorporating a pre-screening procedure before the offspring population is evaluated with the time-consuming evaluation tool. Algorithm 1 gives an outline of MAEA which is, in fact, a modified version of the basic (µ+λ)-EA described by Bäck, Hammel, and Schwefel [10] or Beyer and Schwefel [11]. Two features distinguish MAEA from standard EA.
1) All exactly evaluated individuals are recorded and stored in a database. Up to 15 nearest neighbors are considered to set up the meta-model for each of the λ individuals per generation.
Fig. 3. Visualization of a more complex propulsion system featuring rotor, hub, and nozzle. Within the three-dimensional figure of the geometry, the nozzle had to be hidden except for the corresponding grid to see the other components. Nevertheless, this view enables to see the composition of hub and blades in more detail.
2) During the pre-screening phase, the objective function values for new solutions are predicted by the meta-model, before deciding whether they need to be re-evaluated by the exact and costly tool.
Thereby, at generation t, the set of offspring solutions Gt is reduced to the subset of offspring solutions Qt, which will be evaluated exactly and will also be considered in the final selection procedure (cf. [1]).
Algorithm 1: (µ + ν < λ)-MAEA
t ← 0
Pt ← init()               /* Pt: set of solutions */
evaluate Pt precisely
initialize database D
while t < tmax do
  Gt ← generate(Pt)       /* λ new offspring */
  evaluate Gt with meta-model
  Qt ← select(Gt)         /* |Qt| = ν */
  evaluate Qt precisely
  update database
  Pt+1 ← select(Qt ∪ Pt)  /* select µ best */
  t ← t + 1
end while
A. Pre-screening procedures
A ranking algorithm, applied over the offspring population Gt, identifies the most promising individuals in the new generation. In the general case, this algorithm is based on the values ŷ(x) (predictions for f(x)) and s(x) (corresponding standard deviations) obtained for each individual x ∈ Gt through the meta-model. Comparisons with objective function values for the parent population Pt are necessary. Various criteria for identifying promising solutions are discussed by Emmerich et al. [1]. Once the promising subset Qt of Gt has been found, its members undergo exact evaluations.
Image taken from [Naujoks et al., 2007]
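Algorithm 1 can be sketched compactly in R. The toy version below uses the mean of the k nearest database neighbours as the meta-model (echoing the nearest-neighbour construction mentioned above) and the sphere function as a stand-in for the costly simulation; all names and parameter values are illustrative:

```r
set.seed(1)
f <- function(x) sum(x^2)             # stand-in for the costly evaluation tool

mu <- 5; lambda <- 20; nu <- 5; tmax <- 20; d <- 2
P <- matrix(runif(mu*d, -5, 5), ncol = d)   # initial parent population
Py <- apply(P, 1, f)                        # evaluate parents precisely
DB <- P; DBy <- Py                          # database of exact evaluations

# meta-model: mean objective value of the k nearest neighbours in the database
predictNN <- function(xnew, DB, DBy, k = 3) {
  d2 <- colSums((t(DB) - xnew)^2)
  mean(DBy[order(d2)[1:min(k, length(DBy))]])
}

for (t in 1:tmax) {
  # generate lambda offspring by Gaussian mutation of randomly picked parents
  G <- P[sample(mu, lambda, replace = TRUE), , drop = FALSE] +
       matrix(rnorm(lambda*d, sd = 0.5), ncol = d)
  # pre-screening: rank the offspring by their meta-model prediction
  Gpred <- apply(G, 1, predictNN, DB = DB, DBy = DBy)
  Q <- G[order(Gpred)[1:nu], , drop = FALSE]  # nu most promising offspring
  Qy <- apply(Q, 1, f)                        # evaluate them precisely
  DB <- rbind(DB, Q); DBy <- c(DBy, Qy)       # update the database
  sel <- order(c(Py, Qy))[1:mu]               # (mu + nu) selection: mu best
  P <- rbind(P, Q)[sel, , drop = FALSE]; Py <- c(Py, Qy)[sel]
}
min(Py)  # best exactly evaluated objective value
```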
Other areas
. . . where surrogates are applied, involved, used . . .
• No explicit fitness function available
  • Fitness depending on external factors, e.g. human interactions
  • Music and arts
• Uncertain environments
  • Noisy environments
  • Robustness wrt. design variables
  • Dynamic fitness landscapes
• Smoothing multi-modal fitness landscapes
Overview
• Motivation
• Concepts and methods
• Practical approach: instructive application
• Typical problems in application
• Open Issues / Research perspectives / Fields of Interest
  • Multi-criteria optimization
  • Combinatorial optimization
• Discussion
  • Typical problems and their solutions
Naujoks, Stork, Zaefferer, Bartz-Beielstein 5 / 74
Overview
• Motivation
• Concepts and methods
• Practical approach: instructive application
• Typical problems in application
• Open Issues / Research perspectives / Fields of Interest
  • Multi-criteria optimization
  • Combinatorial optimization
• Discussion
  • Typical problems and their solutions
Naujoks, Stork, Zaefferer, Bartz-Beielstein 6 / 74
Surrogate Modeling - Concepts and Methods
Questions to Answer:
1 What is the core concept of surrogate modeling?
2 How does a typical surrogate optimization cycle work?
3 Which models are common for surrogate optimization?
4 Example method: Efficient Global Optimization
Naujoks, Stork, Zaefferer, Bartz-Beielstein 7 / 74
Costly real world (blackbox) problems
• Real-world applications: commonly blackbox problems (machines, complex processes)
• Available information is very sparse; properties of the objective function are difficult or impossible to determine
• No a priori information about modality, convexity, gradients, or the minimal function value f(x∗) is known
• The most complex problems arise if physical experiments are involved, which are costly in terms of needed resources (manpower, material, time)
• Wrong values can lead to hazardous effects, e.g., damaging or destroying experimental material
• Instead of physical experiments, simulations are used, e.g., from the field of Computational Fluid Dynamics (CFD)
• These require a lot of computational power and are very time demanding

There is an inevitable need to evaluate candidate solutions in the search space to retrieve any information, combined with a high demand on resources for each of these evaluations.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 8 / 74
Surrogate Modeling - Application Layers
L1 The Real-World Application
  • Direct optimization is very costly or impossible, as incorrectly chosen decision variable values can have severe consequences
  • Evaluations involve resource-demanding prototype building or even hazardous experiments
L2 The Simulation Model
  • Complex computational model from fluid or structural dynamics
  • A single simulation process may take minutes, hours, or even weeks to compute
  • Available computational power limits the amount of available evaluations
L3 The Surrogate Model
  • Data-driven regression model
  • The accuracy heavily depends on the underlying surrogate type and the amount of available information
  • Typically cheap
L4 The Optimization Process
  • Any suitable optimization algorithm (deterministic, stochastic, metaheuristic, ...)
  • Can be tuned
Naujoks, Stork, Zaefferer, Bartz-Beielstein 9 / 74
Surrogate Modeling - Core Concept
[Diagram: the four layers and their interfaces. A tuning procedure passes algorithm control parameters to the (f4) optimization algorithm and receives optimized control parameters; the optimizer exchanges candidate solutions and predicted fitness with the (f3) surrogate model; the surrogate exchanges simulated input and estimated output / process parameters with the (f2) simulation model; the simulation exchanges input and output with the (f1) real-world application / physical model. Decision variables enter the chain, and optimized variables are returned.]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 10 / 74
Surrogate Modeling - Costs and Benefits
• Each layer L1 to L4 imposes different evaluation costs and solution accuracies:
  • Most expensive: L1 real-world application
  • Commonly cheapest: L3 surrogate model
• The modeling process itself requires computational resources for evaluations, construction, and validation of the surrogate.

The main benefit of using surrogates is the reduction of fitness evaluations needed on the objective function during the optimization.

• Another advantage is the availability of a surrogate itself, which can be utilized to gain further problem insight. This is particularly valuable for blackbox problems.
• The initial sampling design plan has a major impact on the optimization performance and should be carefully selected.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 11 / 74
Surrogate Modeling - Optimization Cycle
A common optimization process using surrogates is outlined by the following steps:
1 Sampling the objective function to generate a set of evaluated points
2 Selecting a suitable surrogate
3 Constructing the surrogate using the evaluated points
4 Utilizing the surrogate to predict new promising locations
5 Evaluating the objective function on one (or more) of the identified locations
6 Updating the surrogate and repeating the optimization cycle
Naujoks, Stork, Zaefferer, Bartz-Beielstein 12 / 74
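The six steps above can be sketched in a few lines of base R. This is our own deliberately simplified illustration, not the tutorial's code: the surrogate is a plain cubic polynomial fitted with lm(), the "promising location" is simply the surrogate minimum on a grid, and the objective is the 1-d Forrester test function used later in the practical example.

```r
# Expensive objective (stand-in): the 1-d Forrester test function
f <- function(x) (6 * x - 2)^2 * sin(12 * x - 4)

# Step 1: initial sampling of the objective function
x <- seq(0, 1, length.out = 5)
y <- f(x)

for (i in 1:10) {
  # Steps 2-3: construct a (here: cubic polynomial) surrogate
  fit <- lm(y ~ poly(x, 3, raw = TRUE))
  # Step 4: predict on a fine grid, pick the most promising location
  xg <- seq(0, 1, by = 0.001)
  yg <- predict(fit, newdata = data.frame(x = xg))
  xnew <- xg[which.min(yg)]
  # Step 5: evaluate the expensive objective there
  ynew <- f(xnew)
  # Step 6: update the data set and repeat the cycle
  x <- c(x, xnew)
  y <- c(y, ynew)
}
best <- x[which.min(y)]
```

A real implementation would use a proper surrogate (e.g., Kriging) and an infill criterion such as expected improvement instead of the plain surrogate minimum; that version follows in the practical example.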
Surrogate Modeling - Important Publications
Important publications featuring overviews or surveys on surrogate modeling and surrogate optimization:
• Design and analysis of computer experiments [Sacks et al., 1989]
• A taxonomy of global optimization methods based on response surfaces [Jones, 2001]
• Surrogate-based analysis and optimization [Queipo et al., 2005]
• Recent advances in surrogate-based optimization [Forrester and Keane, 2009]
• Surrogate-assisted evolutionary computation: Recent advances and future challenges [Jin, 2011]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 13 / 74
Linear Models
• Combination of linear predictor functions of each input to model the output
• Basic LM: y = β0 + β1x1 + β2x2 + · · · + βnxn + ε, where ε is the error term
• Extensions: interactions between inputs, quadratic terms, response surface models, polynomial regression
• A polynomial model takes the form
  y = a0 + a1x + a2x^2 + a3x^3 + · · · + anx^n + ε
Pro: white box, easy to interpret / analyse, simple and fast
Con: not suitable for complex functions, overfitting (by using too many terms)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 14 / 74
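As a minimal sketch of the basic LM above, lm() in R recovers the coefficients β0, β1, β2 from noisy samples; the data-generating function and noise level below are our own illustrative choices:

```r
set.seed(1)
# Noisy data generated from y = 1 + 2*x1 - 3*x2 + eps
x1 <- runif(50)
x2 <- runif(50)
y <- 1 + 2 * x1 - 3 * x2 + rnorm(50, sd = 0.05)

# Fit the basic linear model y = b0 + b1*x1 + b2*x2 + eps
fit <- lm(y ~ x1 + x2)
coef(fit)  # estimates close to (1, 2, -3)
```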
Decision Trees and Random Forests
• Decision Trees [Breiman et al., 1984] model the objective function by using tree-based approximations
• At each node of the tree, a split is made on the basis of a decision variable value
• The prediction of a new point is given by the mean value of associated points
• Random Forests Regression [Breiman, 2001]: a large number of decision trees is combined into an ensemble predictor
• Usually, each tree in the ensemble is fitted using a subset of the evaluated points to avoid overfitting (bagging)
• Predictions of new individuals are then given by a cumulated mean of all predictors in the ensemble

Pro: easy-to-interpret white box model (decision trees), fast, binary+integer+real variables
Con: complex to interpret (RF), bad fit for complex functions (decision tree), no smooth surface, overfitting (too large tree)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 15 / 74
Artificial Neural Networks and Deep Learning
• Neural Networks [Haykin, 2004; Hornik et al., 1989] are inspired by the biological brain
• They utilize so-called connected neurons to learn and approximate the behavior of a function
• Neurons are weighted transform functions
• Several layers of neurons: input, output, and hidden layers
• Layers consist of neurons with different forward and/or backward connections
• Deep learning [Deng and Yu, 2014; Hinton et al., 2006]:
  • Complex structured networks with multiple processing layers and/or multiple non-linear transformations and stacked model approaches
  • Excellent results in approximation and especially classification tasks
  • Highly computationally complex, lots of resources needed

Pro: very accurate (deep learning), universal approximator
Con: high computational effort, difficult to interpret, very complex (deep learning)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 16 / 74
Symbolic Regression
• Symbolic Regression [Flasch et al., 2010] is a high-level method to fit a human-readable mathematical model
• Based on Genetic Programming (GP)
• Mathematical expressions as building blocks (+, −, sin, cos, exp, ...)
• The model is evolved using an evolutionary population-based approach
Pro: easy to interpret, fast prediction
Con: high computational complexity (building process)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 17 / 74
Kriging
• Kriging or Gaussian Process Regression [Sacks et al., 1989] models the error term of the model instead of the linear coefficients
• Simplest form: y = β0 + ε, where β0 is the mean
• The ε is then expressed by a Gaussian stochastic process
• Modeling of the error term ε with the help of a covariance distance matrix
• The correlation between errors is related to the distance between the corresponding points
• The covariance matrix is utilized to predict unknown candidates
• Outstanding feature of Kriging models: an uncertainty measure for the prediction and Expected Improvement (EI)

Pro: suitable for complex functions, uncertainty measurement and EI
Con: not suitable for high-dimensional data, high computational effort
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18 / 74
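For reference, the simplest (constant-mean) Kriging predictor mentioned above has a standard closed form; with Ψ the correlation matrix of the m evaluated points, ψ(x′) the vector of correlations between a new point x′ and those points, and 1 the vector of ones, the predictor and its uncertainty are (up to a small correction term for the estimation of β0, cf. Forrester et al. [2008]):

```latex
\hat{y}(x') = \hat{\beta}_0 + \psi(x')^{\top} \Psi^{-1} \left( \mathbf{y} - \hat{\beta}_0 \mathbf{1} \right),
\qquad
\hat{s}^2(x') \approx \hat{\sigma}^2 \left( 1 - \psi(x')^{\top} \Psi^{-1} \psi(x') \right)
```

The second expression is the uncertainty measure that makes Expected Improvement possible.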
Expected Improvement
Improvement:
• Current best point for minimization: x∗ with function value f(x∗)
• For a new point x′, the improvement in our objective function is
[f(x∗) − f(x′)]+
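Assuming, as here, that the unknown response at x′ is Gaussian with mean y(x′) and standard deviation s(x′), taking the expectation of this truncated improvement gives the standard closed form (Jones et al. [1998a]):

```latex
\operatorname{EI}(x') = \bigl( f(x^*) - y(x') \bigr)\, \Phi(u) + s(x')\, \varphi(u),
\qquad
u = \frac{f(x^*) - y(x')}{s(x')}
```

where Φ and φ are the CDF and PDF of the standard normal distribution, and EI(x′) = 0 if s(x′) = 0.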
[Figure: the improvement f(x∗) − f(x′) plotted over f(x′); it is positive where f(x′) < f(x∗) and truncated at 0 otherwise.]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 19 / 74
Expected Improvement
found in [7]. Recent publications show that metamodels are beneficial to speed up the evolutionary search in constrained and multi-objective optimization [15], [16], [17], [18], though there are still open questions.
Recently, screening methods that also consider the confidence of the predicted output have been suggested [3], [9], [13], [19]. This information can be obtained through Gaussian Random Field models which predict the unknown evaluation result by means of a Gaussian distribution. The use of confidence information increases the prediction accuracy of the metamodel and helps guiding the search towards less explored regions in the search space. This also prevents premature convergence.
In this paper, the term pre-screening will denote the use of metamodels for the selection of promising members which have not been evaluated so far. Criteria that can be used to support pre-screening procedures by incorporating confidence information are introduced; their concept is discussed and statistical studies of their performance are presented. These criteria figure out improvements in a set of new (offspring) solutions, assuming that the unknown response is described by a Gaussian distribution.
In order to extend the application domain of the proposed methods, pre-screening criteria used in single-objective problems will be generalized to constrained and multi-objective problems. After scrutinizing a number of mathematical optimization problems, a challenging aerodynamic design problem with 3 objectives and 6 constraints will be solved. The results presented below indicate that it is very beneficial to consider the confidence information within a MAEA, in order to improve its robustness.
The structure of this paper is as follows: In section II, GRFM are presented and discussed. In section III, the single- and multi-objective EA used are presented. In section IV, the integration of GRFM in single- and multi-objective EA is outlined. Finally, using a number of academic test cases (section V) and a real-world test problem (section VI), the efficiency of the proposed MAEA is investigated.
II. GAUSSIAN RANDOM FIELD METAMODELS
The Gaussian Random Field (GRF) theory constitutes a powerful framework for building metamodels based on data obtained through computer experiments. Gaussian Random Field Models (GRFM) will be defined below, by first putting emphasis on the information required by the model as well as its responses after training. Later, statistical assumptions, limitations and practicalities related to the model itself and its use will be discussed.
In the literature, GRFM are also known under different names, such as Kriging, Gaussian processes and Gaussian random functions methods. The term Kriging points directly to the origin of these prediction methods dating back to the sixties, when the mining engineer Krige used GRFM-like models to predict the concentration of ore in gold and uranium mines [20]. Today, Kriging includes a wide class of spatial prediction methods which do not necessarily assume Gaussian fields.
Note that the latter assumption is essential in our algorithms and the term GRFM is used herein in the standard (strict)
[Fig. 1. Outputs of Gaussian Random Field Metamodels using a R → R mapping example: training points (x(1), y(1)), (x(2), y(2)), (x(3), y(3)), the predicted function ŷ, and, at a new point x′, the prediction ŷ(x′) with confidence range from ŷ(x′) − s(x′) to ŷ(x′) + s(x′).]
sense. On the other hand, the term Gaussian random functions [21] might be misleading, because a random function is often associated with a single random variable instead of a set of them. However, in the present paper, the term Gaussian Random Field seems to be more appropriate than Gaussian Process [22] since this paper is dealing with a multidimensional – spatial – rather than a one-dimensional – temporal – input space [23].
Apart from the predicted objective function value, another information provided by a GRFM is a measure of confidence for its prediction. It is reasonable that the confidence is expected to be higher if the training point density in the neighborhood of a newly proposed point is higher. Another important output of the metamodel is the variance of the output values and the average correlation between responses at neighboring points. A GRFM interpolates data values and estimates their prediction accuracy. It provides the mean value and the standard deviation for a one-dimensional Gaussian distribution which represents the likelihood for different realizations of outcomes to represent a precise function evaluation. Figure 1 illustrates the use of GRFM in an example mapping R → R.
The user of modern optimization methods desires to operate the metamodel in the most efficient manner, i.e. to maximize its prediction capabilities and minimize the CPU cost for its training. For this purpose, a better understanding of the statistical assumptions, limitations and practicalities related to the model itself and its use is needed.
Let y : Rd → R be the output of a computationally expensive computer experiment and X = {x(1), . . . , x(m)} be a set of m input configurations which are available along with the corresponding responses y(1) = y(x(1)), y(2) = y(x(2)), . . . , y(m) = y(x(m)). No assumption on the regularity of the distribution of x(1), . . . , x(m) in S is made.
In GRF theory, the aim is to build a non-time-consuming tool capable of predicting the output corresponding to a new point x′ ∈ S, according to an approximated Rd → R mapping. With x′ ∈ X, the precise objective function value is returned. This corresponds to the well known exact interpolation problem for which a large variety of methods, ranging from splines [24] to radial basis networks [25] and Shepard polynomials [26], are available.
The basic assumption in modeling with GRFM is that the output function is a realization (sample-path) of a Gaussian
Image taken from [Emmerich et al., 2006]
Intuition: Expected Improvement is every possible improvement value weighted by its probability
Naujoks, Stork, Zaefferer, Bartz-Beielstein 20 / 74
Method Example: Efficient Global Optimization
• Efficient Global Optimization (EGO) by [Jones et al., 1998a] is a surrogate optimization framework specialized in utilizing Kriging and expected improvement
• Focus on optimization of expensive blackbox functions
• The original version of EGO starts by sampling the objective function with a space-filling experimental design
• Example: an LHD with approximately k = 10n points, choosing a convenient, finite-decimal value for the inter-point spacing, e.g., 21 design points in 2 dimensions
• A Kriging surrogate is fit using maximum likelihood estimation on the selected points
• The surrogate is then manually analyzed by applying different diagnostic tests
• If it is satisfactory, the iterative optimization process is started; if not, a transformation of the objective function (log or inverse) is tried to acquire a better fit
Naujoks, Stork, Zaefferer, Bartz-Beielstein 21 / 74
Method Example: Efficient Global Optimization
The optimization cycle has the following steps:

1 Calculate and maximize expected improvement on the surrogate by an exact branch-and-bound algorithm
2 Sample the objective function where expected improvement is maximized
3 Re-estimate the Kriging surrogate, including the new candidate, by maximum likelihood estimation

The authors introduce a stopping criterion, which is reached if the expected improvement is less than one percent of the current best candidate value.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 22 / 74
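The stopping rule can be written as a one-line check; the function name below is ours, the 1% threshold is the one stated on the slide:

```r
# EGO-style stopping criterion: stop once the maximal expected improvement
# falls below a fraction (default 1%) of the current best objective value
egoStop <- function(ei_max, f_best, rel_tol = 0.01) {
  ei_max / abs(f_best) < rel_tol
}

egoStop(0.005, 1.0)  # TRUE: EI is only 0.5% of the best value, stop
egoStop(0.5, 1.0)    # FALSE: keep optimizing
```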
EGO Pseudo-Code Phase I: Building
Algorithm 1.1: EGO
begin
  phase 1, initial surrogate building:
  initialize population X of size k based on a space-filling DOE
  evaluate X on f(x)
  xc = best candidate in f(X)
  fit Kriging surrogate model fm with X by maximum likelihood estimation
  manually verify fm by diagnostic tests
  if verify(fm) = false then
    transform f(x) by log or inverse and repeat fitting process
  end
end
Naujoks, Stork, Zaefferer, Bartz-Beielstein 23 / 74
EGO Pseudo-Code Phase II: Optimization
Algorithm 1.2: EGO
begin
  phase 2, use and refine surrogate:
  while not termination-condition do
    xnew = calculate and maximize EI on surrogate model by branch-and-bound optimization
    if EI(xnew)/|f(xc)| < 0.01 then
      stop algorithm
    end
    evaluate f(xnew)
    add xnew to X
    xc = best candidate in f(X)
    re-estimate fm with X by maximum likelihood estimation
  end
end
Naujoks, Stork, Zaefferer, Bartz-Beielstein 24 / 74
Overview
• Motivation
• Concepts and methods
• Practical approach: instructive application
• Typical problems in application
• Open Issues / Research perspectives / Fields of Interest
  • Multi-criteria optimization
  • Combinatorial optimization
• Discussion
  • Typical problems and their solutions
Naujoks, Stork, Zaefferer, Bartz-Beielstein 25 / 74
A practical example
• language: R
• installation of R and more: https://cran.r-project.org/
• optional / recommended IDE: RStudio https://www.rstudio.com/
• R tutorial: https://cran.r-project.org/doc/manuals/r-release/R-intro.html
• Uses a 1-dim benchmark function from Forrester et al. [2008]
• See following code
Naujoks, Stork, Zaefferer, Bartz-Beielstein 26 / 74
## To install the required packages, uncomment the following line:
# install.packages("SPOT")
library("SPOT") # load required package: SPOT

## Initialize random number generator seed. Reproducibility.
set.seed(1)

## Define objective function
objectFun <- function(x){
  (6*x-2)^2 * sin(12*x-4)
}

## Plot the function:
curve(objectFun(x),0,1)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
[Plot: objectFun(x) over [0, 1]; the function ranges roughly from −5 to 15.]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
## Now, let us assume objectFun is expensive.
## First, we start with making some initial
## design of experiment, which in this case
## is simply a regular grid:
x <- seq(from=0, by=0.3, to=1)

## Evaluate with objective:
y <- sapply(x,objectFun)

## Add to plot:
curve(objectFun(x),0,1)
points(x,y)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
[Plot: objectFun(x) with the initial design points (x, y) marked.]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
## Build a model (here: Kriging, with the SPOT package.
## But plenty of alternatives available)
fit <- forrBuilder(as.matrix(x), as.matrix(y),
  control=list(uselambda=FALSE # do not use nugget effect (regularization)
))

## Evaluate prediction based on model fit
xtest <- seq(from=0, by=0.001, to=1)
pred <- predict(fit, as.matrix(xtest), predictAll=T)
ypred <- pred$f
spred <- pred$s

## Plot the prediction of the model:
curve(objectFun(x),0,1)
points(x,y)
lines(xtest,ypred,lty=2,lwd=2)

## Plot suggested candidate solution
points(xtest[which.min(ypred)],ypred[which.min(ypred)],col="black",pch=20,cex=2)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
[Plot: objectFun(x), the design points, the model prediction (dashed), and the predicted optimum marked.]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
## Calculate expected improvement (EI)
ei <- 10^(-spotInfillExpImp(ypred,spred,min(y)))
## note: the function used above returns negative
## log. of EI, for optimization purposes.

## Plot EI
curve(objectFun(x),0,1)
points(x,y)
lines(xtest,ypred,lty=2,lwd=2)
par(new = T)
plot(xtest,ei,lty=3,lwd=2, type="l", axes=F, xlab=NA, ylab=NA,
  ylim=rev(range(ei)))
axis(side = 4); mtext(side = 4, line = 1.4, 'EI')

## Determine solution that maximizes EI
newx <- xtest[which.max(ei)]

## Plot suggested candidate solution, based on EI
points(newx,max(ei),col="red",pch=20,cex=2)

## Add data
x <- c(x,newx)
y <- c(y,objectFun(newx))
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
[Plot: objectFun(x), design points, model prediction (dashed), and EI on the right-hand axis (0.0 to ≈0.4); the EI-maximizing point is marked in red.]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
## Now repeat the same as often as necessary:
repeatThis <- expression({
  curve(objectFun(x),0,1)
  points(x,y)
  fit <- forrBuilder(as.matrix(x),as.matrix(y),
    control=list(uselambda=FALSE)
  )
  xtest <- seq(from=0, by=0.001, to=1)
  pred <- predict(fit,as.matrix(xtest),predictAll=T)
  ypred <- pred$f
  spred <- pred$s
  lines(xtest,ypred,lty=2,lwd=2)
  points(xtest[which.min(ypred)],ypred[which.min(ypred)],col="black",pch=20,cex=2)
  ei <- 10^(-spotInfillExpImp(ypred,spred,min(y)))
  par(new = T)
  plot(xtest,ei,lty=3,lwd=2, type="l", axes=F, xlab=NA, ylab=NA,
    ylim=rev(range(ei)))
  axis(side = 4); mtext(side = 4, line = 1.4, 'EI')
  points(xtest[which.max(ei)],max(ei),col="red",pch=20,cex=2)
  newx <- xtest[which.max(ei)]
  x <- c(x,newx)
  y <- c(y,objectFun(newx))
})
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration: objective, prediction, and EI (right-hand axis, up to ≈0.35).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration: objective, prediction, and EI (right-hand axis, up to ≈0.4).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration: objective, prediction, and EI (right-hand axis, up to ≈0.4).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration: objective, prediction, and EI (right-hand axis, up to ≈0.20).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration: objective, prediction, and EI (right-hand axis, up to ≈0.008).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration: objective, prediction, and EI (right-hand axis, up to ≈0.015).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration: objective, prediction, and EI (right-hand axis, up to ≈3e+07 — the EI scale has exploded).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration: objective, prediction, and EI (right-hand axis, up to ≈1e+09).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
• EI looks noisy, strange
• Predicted mean is completely off target
• Why?
  • Common practical problem
  • Numerical issue
  • Closeness of solutions
  • Problem for Kriging model
    • Near-identical rows in correlation matrix
    • Badly conditioned
  • Often, Kriging implementations will crash
  • Here, nonsensical predictions
• Potential remedy: use nugget (regularization) + reinterpolation [Forrester et al., 2008]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
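The near-identical-rows problem, and the effect of a nugget, can be reproduced with a toy Gaussian correlation matrix; the sample points, kernel width, and λ below are our own illustrative choices, not SPOT internals:

```r
# Four points, two of them almost coincident
x <- c(0.10, 0.50, 0.90, 0.90 + 1e-7)

# Gaussian correlation matrix (kernel width 0.1)
K <- outer(x, x, function(a, b) exp(-(a - b)^2 / 0.1))

# Rows 3 and 4 are near-identical -> K is badly conditioned
rc_plain <- rcond(K)

# Nugget (regularization): add a small lambda on the diagonal
lambda <- 1e-6
rc_nugget <- rcond(K + diag(lambda, nrow(K)))
```

rc_plain, the reciprocal condition number, is essentially zero here, while the regularized matrix is safely invertible; this is what the uselambda=TRUE option in the following code enables, with reinterpolation then repairing the uncertainty estimates.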
## repeat as often as necessary (but now with regularization):
repeatThis <- expression({
  curve(objectFun(x),0,1)
  points(x,y)
  fit <- forrBuilder(as.matrix(x),as.matrix(y),
    control=list(
      uselambda=TRUE,  # Use nugget (parameter lambda)
      reinterpolate=T  # Reinterpolation, to fix uncertainty estimates
    )
  )
  xtest <- seq(from=0, by=0.001, to=1)
  pred <- predict(fit,as.matrix(xtest),predictAll=T)
  ypred <- pred$f
  spred <- pred$s
  lines(xtest,ypred,lty=2,lwd=2)
  points(xtest[which.min(ypred)],ypred[which.min(ypred)],col="black",pch=20,cex=2)
  ei <- 10^(-spotInfillExpImp(ypred,spred,min(y)))
  par(new = T)
  plot(xtest,ei,lty=3,lwd=2, type="l", axes=F, xlab=NA, ylab=NA,
    ylim=rev(range(ei)))
  axis(side = 4); mtext(side = 4, line = 1.4, 'EI')
  points(xtest[which.max(ei)],max(ei),col="red",pch=20,cex=2)
  newx <- xtest[which.max(ei)]
  x <- c(x,newx)
  y <- c(y,objectFun(newx))
})
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration (with nugget): objective, prediction, and EI (right-hand axis, up to ≈0.0015 — EI is well-behaved again).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
eval(repeatThis)
[Plot after this iteration (with nugget): objective, prediction, and EI (right-hand axis, up to ≈0.0012).]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 18.09.2016
Overview
• Motivation
• Concepts and methods
• Practical approach: instructive application
• Typical problems in application
• Open Issues / Research perspectives / Fields of Interest
  • Multi-criteria optimization
  • Combinatorial optimization
• Discussion
  • Typical problems and their solutions
Naujoks, Stork, Zaefferer, Bartz-Beielstein 27 / 74
Typical Problems in Practice
• Previous slides: numerical issues (Kriging)
• Other, more general issues:
  • Problem definition
    • What is the objective?
    • What variables impact the objective?
    • ...
  • Algorithm design, selection of:
    • Model
    • Optimizer
    • Parameters
Naujoks, Stork, Zaefferer, Bartz-Beielstein 28 / 74
Typical Problems in Practice: Problem definition
• Very important, crucial to success
• Often underestimated
• Information based on
  • Discussions with application experts, practitioners
  • Literature
  • Experience
Naujoks, Stork, Zaefferer, Bartz-Beielstein 29 / 74
Typical Problems in Practice: Problem definition
• Consider the following:
  • Aims and goals
    • What are they?
    • Can they be clearly defined?
    • Can they be evaluated (measured, computed)?
    • Cost of evaluation?
    • Budget?
    • Desired accuracy?
  • Variables affecting the objective(s)
    • How many?
    • Independent variables?
    • Disturbance variables?
    • Data types?
  • Constraints?
  • Noise?
  • Interfacing, data exchange
• Repeat the aforementioned, e.g., after first results

[Diagram: surrogate model optimization loop. A costly experiment or simulation receives independent parameters and disturbance parameters; its measurements and results are used to compute the objective(s), subject to noise; the objectives feed the surrogate model and optimizer, which return new independent parameters.]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 30 / 74
Typical Problems in Practice: Model Selection
[Word cloud of model types: spline models, linear regression, Kriging, Support Vector Machines, neural networks, symbolic regression, RBFNs, random forest, regression trees.]

• Large variety of models available
• Which to choose?
• Potential solutions:
  • Use the "default" (e.g., Kriging / EGO)
  • Exploit problem knowledge
  • Select performance-based or combine -> Ensembles (open issue)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 31 / 74
Typical Problems in Practice: Model Selection
• No problem is truly black-box
• Use what you know, e.g.:
  • Number of parameters
    • 20 or more: Kriging and related lose performance
  • Data types
    • Continuous: Kriging, SVMs, RBFNs
    • Integer, binary, categorical parameters: e.g., Random Forest
    • Mixed: Treed Gaussian Processes (TGP)
    • Structured / combinatorial (e.g.: permutations, trees): see later slides
  • Data set sizes (budget)
    • Large: Kriging may become slow
    • Small: Take care to use models that avoid overfitting
Naujoks, Stork, Zaefferer, Bartz-Beielstein 32 / 74
Typical Problems in Practice: Model Selection
• Structure of the fitness landscape:
  • Highly multi-modal: do not use simple linear models
  • Smooth: Kriging or related
  • Large plateaus or discontinuities: Kriging variants may perform poorly
  • Known trend: use Kriging with trend function
• Cost of the objective function
  • Rather high: complex, powerful models (Kriging, SVMs)
  • Rather low: less complex, cheaper models (linear regression, tree-based, k-Nearest Neighbor)
• Requirements of understandability / learning from the model
  • Variable importance: most models
  • Rule extraction: regression trees
  • Human-readable formulas: linear models, genetic programming (symbolic regression)
• Availability of derivatives
  • e.g., Gradient Enhanced Kriging
Naujoks, Stork, Zaefferer, Bartz-Beielstein 33 / 74
Typical Problems in Practice: Model Selection
• Other considerations
  • Customer / practitioner preferences and knowledge
    • Do they understand the models?
    • Do they trust results from the models?
  • Your own preferences & experience
    • e.g., with regards to parameterization
    • or implementation
• Note: various model types are often quite similar & related, interchangeable
  • e.g.: spline models - Kriging - SVM - RBFN
Naujoks, Stork, Zaefferer, Bartz-Beielstein 34 / 74
Typical Problems in Practice: Implementation
• Once models are selected -> implementation
• Can have significant impact
• Options
  • Frequently employed packages/libraries
    • Quality
    • Community support
    • Examples, documentation
    • Continuity of development
  • Less frequently used work
    • For special tasks?
    • Because of specific features?
  • Do it yourself
    • None other available (or too slow, buggy)
    • Specific features not available
    • More fun, but also more work
    • You know what the model really does
Naujoks, Stork, Zaefferer, Bartz-Beielstein 35 / 74
Typical Problems in Practice: Optimizer Selection
• Similar considerations as for models
• Optimizer also depends on model type (and vice versa)
• Smooth, differentiable models like Kriging: gradient-based optimizers are fine
• Non-smooth (tree-based): GA, DE, PSO
• Multimodality (of prediction or infill criterion, e.g., EI): population-based, restarts, niching, etc.
• Simple linear regression: analytical
Naujoks, Stork, Zaefferer, Bartz-Beielstein 36 / 74
Typical Problems in Practice: Parameter Selection
• Similar to model selection / optimizer selection ...
  • ... but with more attention to detail
• Use expert / literature suggestions
• Exploit problem knowledge
• Parameters affect:
  • complexity,
  • cost of modeling,
  • cost of model optimization,
  • noise handling,
  • robustness,
  • smoothness,
  • ...
• Tuning, benchmarks (open issue)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 37 / 74
SYNERGY Horizon 2020 – GA No 692286
D3.2 76 31 July 2018
Overview
• Motivation
• Concepts and methods
• Practical approach: instructive application
• Typical problems in application
• Open Issues / Research perspectives / Fields of Interest
  • Multi-criteria optimization
  • Combinatorial optimization
• Discussion
  • Typical problems and their solutions
Naujoks, Stork, Zaefferer, Bartz-Beielstein 38 / 74
Open Issues
• Research perspectives
• Fields of Interest
  • Multi-objective: SAMCO - Surrogate-Assisted Multi-Criteria Optimisation
  • Combinatorial surrogate models (optimisation)
• ... both handled in more detail later!
Naujoks, Stork, Zaefferer, Bartz-Beielstein 39 / 74
Open Issues
• Meaningful benchmarking and testing of algorithms
• Noise handling
• Complex resource limitations
• High-dimensional / large-scale data
• Constraint handling
• Aggregation: model ensembles, multi-fidelity models
• Dynamic optimization problems
Naujoks, Stork, Zaefferer, Bartz-Beielstein 40 / 74
Open Issues
• Meaningful benchmarking and testing of algorithms
  • Some benchmark sets available
  • (Almost?) not considered for evaluation
  • No standard implemented
  • Depending on the people who apply them?
• Noise handling
  • Surrogates considered for noisy problems
  • What about noise in models?
• Complex resource limitations
  • Resources like computation times may not be available constantly
  • Server availability, different calculation times per job ...
  • Problem handled separately
  • Integration of resource handling in the algorithm needed
Naujoks, Stork, Zaefferer, Bartz-Beielstein 41 / 74
Open Issues
• High-dimensional / large-scale data
  • Models may fail / not be applicable
  • New models might need to be considered
  • New integration schemes needed as well?
• Constraint handling
  • Different scenarios possible
  • Most common: infeasible offspring of a feasible ancestor
  • Easy strategy: just omit ... optimal?
  • Constraints to be considered by models as well?
  • Integration in algorithms?
  • Optimal strategy?
Naujoks, Stork, Zaefferer, Bartz-Beielstein 42 / 74
Open Issues
• Aggregation: model ensembles, multi-fidelity models
  • Which model in which situation?
    - Again depends on many parameters
    - Some results available ...
  • How to best aggregate ensembles?
  • Setting may vary over time ...
• Dynamic optimization problems
  • In general: time-varying fitness function
  • Surrogates used for forecasting, predicting future values
  • Other settings possible ... see above
Naujoks, Stork, Zaefferer, Bartz-Beielstein 43 / 74
Overview
• Motivation
• Concepts and methods
• Practical approach: instructive application
• Typical problems in application
• Open Issues / Research perspectives / Fields of Interest
  • Multi-criteria optimization
  • Combinatorial optimization
• Discussion
  • Typical problems and their solutions
Naujoks, Stork, Zaefferer, Bartz-Beielstein 44 / 74
SAMCO
Surrogate-Assisted Multi-Criteria Optimisation
• Intersection of
  • Multi-criteria optimisation
  • Surrogate-assisted optimisation
• MCO – Multi-Criteria Optimisation
• EMO – Evolutionary Multi-objective Optimisation
• EMOA – Evolutionary Multi-objective Optimisation Algorithm
• MOEA – Multi-Objective Evolutionary Algorithm
Naujoks, Stork, Zaefferer, Bartz-Beielstein 45 / 74
Basics of Multi-criteria optimisation
• Multiple objective functions considered
• Minimize

    f : ℝⁿ → ℝᵐ,   f(x) = (f₁(x), ..., fₘ(x))

Pareto Dominance
• Solution x dominates solution y:

    x ≺ y  :⇔  ∀ i ∈ {1, ..., m}: fᵢ(x) ≤ fᵢ(y)  and  ∃ j ∈ {1, ..., m}: fⱼ(x) < fⱼ(y)

• Pareto set: set of all non-dominated solutions in the search space

    {x | ∄ z : z ≺ x}

• Pareto front: image of the Pareto set in objective space
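The dominance relation and the Pareto set above translate directly into code. A minimal sketch (function names are ours, minimization assumed):

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fx, fy)) and \
           any(a < b for a, b in zip(fx, fy))

def pareto_set(points):
    """Non-dominated subset: all points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, `pareto_set([(1, 2), (2, 1), (2, 2)])` keeps `(1, 2)` and `(2, 1)` and discards the dominated point `(2, 2)`.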
Naujoks, Stork, Zaefferer, Bartz-Beielstein 46 / 74
SAMCO
• Available budget: 100 to 10,000 evaluations
• Different strategies
  • Stochastic variation of EAs assisted (e.g., filtering solutions)
  • Completely replaced (e.g., optimizing a figure of merit)
• Many algorithms already developed
• However
  • Very heterogeneous research fields
  • Different sciences / faculties involved
    - Engineering, Statistics, Computer Science
    - Mathematics, Aeronautics, Agriculture
• Thus: different backgrounds, and also different languages, to be considered
Naujoks, Stork, Zaefferer, Bartz-Beielstein 47 / 74
SAMCO
• Application-driven
  • Proposed algorithms are mainly tested on their respective application tasks
  • Comparison of different approaches is hard to accomplish
  • Accepted benchmarks are lacking
• Theoretical aspects almost neglected due to the focus on practical applications
• Methodological research areas
  • Choice of the surrogate model
  • Respective figure of merit (or infill criterion)
Naujoks, Stork, Zaefferer, Bartz-Beielstein 48 / 74
SAMCO
• Easiest approach: one model per objective
• Kriging and expected improvement used commonly
[Figure: probability densities of approximations on the old Pareto front; interval boxes for approximations in a solution space with two objectives (precise evaluations, mean values of approximations, lower confidence bounds); illustration of the hypervolume measure on a Pareto front. Image taken from [Emmerich et al., 2006]]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 49 / 74
Alternative approaches
ParEGO – Pareto Efficient Global Optimisation [Knowles, 2006]
• Converts the different cost values into a single one
  • Parameterized scalarizing weight vector (augmented Tchebycheff function)
  • Different weight vector at each iteration, drawn uniformly at random
  • Allows a gradual build-up of an approximation to the whole Pareto front
• Learns a Gaussian process model of the search landscape
  • Scalar costs of all previously visited solutions are computed
  • A DACE model of the landscape is constructed by maximum likelihood
  • The solution that maximizes expected improvement becomes the next point
  • Evaluation on the real, expensive cost function
  • Update after every function evaluation
• Ensures that weakly dominated solutions are rewarded less than Pareto-optimal ones
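The scalarization step can be sketched as follows. The augmented Tchebycheff function used by ParEGO is f_λ(x) = max_i(λ_i f_i(x)) + ρ Σ_i λ_i f_i(x) with a small ρ (Knowles [2006] uses ρ = 0.05); the random-weight helper below is a simplified stand-in for ParEGO's draw from a fixed set of weight vectors:

```python
import random

def augmented_tchebycheff(f, weights, rho=0.05):
    """Scalarize an objective vector f (assumed normalized to [0, 1])
    with weight vector `weights` using the augmented Tchebycheff function."""
    terms = [w * fi for w, fi in zip(weights, f)]
    return max(terms) + rho * sum(terms)

def random_weights(m):
    """Draw a random weight vector summing to one (simplified: ParEGO
    draws uniformly from a predefined discrete set of weight vectors)."""
    w = [random.random() for _ in range(m)]
    s = sum(w)
    return [wi / s for wi in w]
```

At each iteration a new weight vector is drawn, all evaluated points are re-scalarized, and a single-objective surrogate is fitted to the scalar costs.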
Naujoks, Stork, Zaefferer, Bartz-Beielstein 50 / 74
Alternative approaches
RASM – Rank-based aggregated surrogate models [Loshchilov et al.,2010]
• Mono-surrogate approach again
  • A single surrogate model to reflect Pareto dominance in an EMO framework
  • Locally approximates the Pareto dominance relation
• Ranks neighboring points within the objective space
• Offspring filter estimating whether offspring improve on their parents in terms of approximated Pareto dominance
  • Used for offspring generation in a standard EMOA
• Models Pareto dominance within the rank-SVM framework
Naujoks, Stork, Zaefferer, Bartz-Beielstein 51 / 74
Existing libraries and approaches
• Many libraries already exist, e.g.
  - mlrMBO, DiceKriging, SUMO, parEGO
  - GPareto, SPOT, Shark, QstatLab
• Overview on the SAMCO homepage:
  http://samco.gforge.inria.fr/doku.php?id=surr_mco
• However: an up-to-date overview is missing
  • List the algorithms contained
  • Compare strengths and weaknesses
Naujoks, Stork, Zaefferer, Bartz-Beielstein 52 / 74
SAMCO Promising research areas
• Multiple objectives with different response surfaces, plus specific requirements of set- and indicator-based optimization
  • New variants of models
  • New infill criteria
• Approaches beyond one model per objective function
  • Model dominance relations
  • Model performance-indicator landscapes
• Ensembles of surrogates
  • Multiple surrogates, simultaneously or successively
    - To improve the overall quality of prediction of each objective
    - Model evolves over time from a coarse-grained to a finer one
    - Different parts of the search space with significantly different behavior
Naujoks, Stork, Zaefferer, Bartz-Beielstein 53 / 74
SAMCO Promising research areas
• Collect existing approaches and libraries
• Benchmarking of surrogate-assisted optimizers lacks rigour
  • Review of common test functions (academic vs. real-world)
  • Understand weaknesses and strengths of each algorithm
  • Algorithm recommendations for practice
• Overview on the SAMCO homepage:
  http://samco.gforge.inria.fr/doku.php?id=benchmarking
Naujoks, Stork, Zaefferer, Bartz-Beielstein 54 / 74
Overview
• Motivation
• Concepts and methods
• Practical approach: instructive application
• Typical problems in application
• Open Issues / Research perspectives / Fields of Interest
  • Multi-criteria optimization
  • Combinatorial optimization
• Discussion
  • Typical problems and their solutions
Naujoks, Stork, Zaefferer, Bartz-Beielstein 55 / 74
Discrete / combinatorial / structured search spaces
• Surrogate-assisted optimization is well established in expensive, continuous optimization
What about combinatorial / discrete optimization problems?
• Let’s get an overview
Naujoks, Stork, Zaefferer, Bartz-Beielstein 56 / 74
Survey: combinatorial surrogates

Mixed variables
model                  | optimizer | cost               | budget           | dimension | remarks / topics                               | reference
RBFN                   | ES        | cheap / ~expensive | 560 / 280        | 15 / 23   | benchmark / real-world: medical image analysis | Li et al. [2008]
Random Forest, Kriging | NSGA2     | ~expensive         | -                | 4-76      | algorithm tuning                               | Hutter et al. [2010]
RBFN + cluster         | GA        | cheap              | 2,000            | 12        | benchmark, real-world: chemical industry       | Bajer and Holeňa [2010]
RBFN + GLM             | GA        | cheap              | several thousand | 4-13      | benchmark, real-world: chemical industry       | Bajer and Holeňa [2013]
SVR                    | NSGA2     | ?                  | 2,000            | 10        | finite element, multi-criteria                 | Herrera et al. [2014]

Binary strings
model   | optimizer | cost      | budget     | dimension | remarks / topics                    | reference
ANN     | SA        | expensive | ?          | 16        | real-world, pump positioning        | Rao and Manju [2007]
RBFN    | GA        | cheap     | dimension² | 10-25     | NK-landscape                        | Moraglio and Kattan [2011a]
RBFN    | GA        | expensive | 100        | 10-40     | benchmark, package deal negotiation | Fatima and Kattan [2011]
Kriging | GA        | cheap     | dimension² | 10-25     | NK-landscape                        | Zaefferer et al. [2014b]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 57 / 74
Survey: combinatorial surrogates
Permutations
model   | optimizer       | cost               | budget      | dimension  | remarks / topics                                  | reference
custom  | brute force     | expensive          | 28          | 6          | signed permutation, real-world: weld sequence     | Voutchkov et al. [2005]
RBFN    | GA              | cheap              | 100         | 30 - 32    | benchmark                                         | Moraglio et al. [2011]
Kriging | GA              | cheap              | 100         | 12 - 32    | benchmark                                         | Zaefferer et al. [2014b]
Kriging | GA              | cheap              | 200         | 10 - 50    | distance selection                                | Zaefferer et al. [2014a]
Kriging | ACO             | cheap              | 100 - 1,000 | 50 - 100   | benchmark, tuning                                 | Pérez Cáceres et al. [2015]
RBFN    | GA*             | instance dependent | 1,000       | 50 - 1,928 | numerical stability, real-world: cell suppression | Smith et al. [2016]
Kriging | brute force, GA | cheap              | 100         | 5 - 10     | kernel definiteness                               | Zaefferer and Bartz-Beielstein [2016]

*Different integration: the GA produces random solutions, which are filtered by the model in each iteration
Naujoks, Stork, Zaefferer, Bartz-Beielstein 58 / 74
Survey: combinatorial surrogates

Trees
model         | optimizer | cost      | budget | remarks / topics                           | reference
RBFN          | GA        | cheap     | 100    | symbolic regression                        | Moraglio and Kattan [2011b]
kNN           | GA        | expensive | 30,000 | phenotypic similarity, genetic programming | Hildebrandt and Branke [2014]
RBFN*         | GA        | cheap     | 100    | symbolic regression, parity                | Kattan and Ong [2015]
Random Forest | GA        | cheap     | 15,000 | benchmark, genetic programming             | Pilát and Neruda [2016]

*two models: semantic and fitness

Other
model   | optimizer | cost         | budget           | dimension | remarks / topics                                       | reference
k-NN    | GA        | rather cheap | 20,000 - 200,000 | 161 - 259 | real-valued + structure, real-world, protein structure | Custódio et al. [2010]
Kriging | GA        | expensive    | few hundreds     | -         | graph-based, real-world, protein structure             | Romero et al. [2013]
ANN     | DE        | cheap        | several hundreds | 40 - 500  | assignment problem, dynamic                            | Hao et al. [2016]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 59 / 74
Summary: Strategies
• Strategies for dealing with discrete / combinatorial search spaces
  • Inherently discrete models (e.g., regression trees)
    • simple, but may not be efficient or feasible for every representation
  • Dummy variables
    • only for linear regression, vector-based representations
  • Feature-based
    • extract real-valued features from the genotype / phenotype
    • requires good features
  • (Dis)similarity measure based (distance, kernel)
    • requires a good measure
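The dummy-variable strategy above can be sketched in a few lines: a categorical gene is expanded into indicator columns so that a linear regression model can consume it (function and level names are ours, for illustration only):

```python
def dummy_encode(value, levels):
    """One-hot encode a categorical `value` against a fixed list of
    `levels`, dropping the first level as the reference category
    (standard dummy coding for linear regression)."""
    if value not in levels:
        raise ValueError("unknown level: %r" % (value,))
    return [1.0 if value == lv else 0.0 for lv in levels[1:]]

# A genotype with one categorical gene taking values 'A', 'B' or 'C'
# becomes a numeric vector of length len(levels) - 1.
levels = ['A', 'B', 'C']
```

For example, 'A' (the reference level) maps to `[0.0, 0.0]` and 'C' to `[0.0, 1.0]`; one regression coefficient per non-reference level then captures that level's effect.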
Naujoks, Stork, Zaefferer, Bartz-Beielstein 60 / 74
Summary: Types of Models
• As varied as in the continuous case:
  • Custom, application-specific models (expert knowledge, physics)
  • Artificial Neural Networks (ANN)
  • Markov Random Fields [Allmendinger et al., 2015]
  • Random Forest (integer, mixed-integer problems)
  • (Probabilistic models - in Estimation of Distribution Algorithms)
  • (Pheromone trails - in Ant Colony Optimization)
  • "Classical" kernel-based (similarity-based) models:
    • k-Nearest Neighbour (k-NN)
    • Radial Basis Function Networks (RBFN)
    • Support Vector Regression (SVR)
    • Kriging (Gaussian Processes)

[Word cloud: linear regression, Markov random fields, Kriging, support vector machines, neural networks, k-NN, RBFNs, random forest, custom, probabilistic models]
Naujoks, Stork, Zaefferer, Bartz-Beielstein 61 / 74
Why kernel based approach, Kriging?
• Conceptually simple:
  • Replace the kernel or distance function
  • e.g., with a Gaussian kernel and an arbitrary distance:

    k(x, x') = exp(-θ d(x, x'))

• Transfer of a popular method from the continuous domain
  • Powerful predictor
  • Elegant parameter fitting (maximum likelihood estimation)
  • Uncertainty estimate, Expected Improvement
    → Efficient Global Optimization (EGO) [Jones et al., 1998b]
• Note:
  • None of these features is exclusive to Kriging
  • Closely related to other model types
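The "replace the distance" idea can be sketched directly. Below, the Gaussian-type kernel k(x, x') = exp(-θ d(x, x')) from the slide is instantiated with a Hamming distance on permutations (function names are ours; a real combinatorial Kriging implementation, e.g. the CEGO package, would additionally fit θ by maximum likelihood):

```python
import math

def hamming(p, q):
    """Number of positions at which two equal-length sequences differ."""
    return sum(a != b for a, b in zip(p, q))

def kernel(x, y, theta=1.0, dist=hamming):
    """Gaussian-type kernel exp(-theta * d(x, y)) with a plug-in distance."""
    return math.exp(-theta * dist(x, y))

def kernel_matrix(X, theta=1.0, dist=hamming):
    """Symmetric kernel matrix over a list of candidate solutions."""
    return [[kernel(a, b, theta, dist) for b in X] for a in X]
```

Swapping `hamming` for any other (dis)similarity measure changes the model's notion of neighborhood without touching the rest of the Kriging machinery — which is exactly why the choice of distance, and its definiteness, become the central research questions discussed next.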
Naujoks, Stork, Zaefferer, Bartz-Beielstein 62 / 74
Combinatorial Surrogates: Research Questions
• Which kernel/distance works best, and why?
• How to choose a suitable kernel/distance?
• Or else, combine them?
• Genotypic vs. phenotypic distances? [Hildebrandt and Branke, 2014]
• Definiteness?
• Dimensionality issues? Dimensionality reduction? See, e.g., the very high-dimensional problems in [Smith et al., 2016]
• Comparison of model types?¹
• And again: benchmarking / testing?

¹ If you want to compare your approach to our methods: R package for Combinatorial Efficient Global Optimization (CEGO) - https://cran.r-project.org/package=CEGO
Naujoks, Stork, Zaefferer, Bartz-Beielstein 63 / 74
Research Question: Choosing a Distance / Kernel [Zaefferer et al., 2014a]

• Choice is crucial for success
• Use prior knowledge (if available)
• Cross-validation
• Fitness Distance Correlation (FDC) (potentially misleading)

[Plot: FDC values of 14 distance measures (Hamming, Swap, Interchange, Levenshtein, Reversal, Position, Squared position, Adjacency, LCSeq, LCStr, Euclidean, Manhattan, Chebyshev, Lee) across QAP, flow-shop, TSP and scheduling instances; larger FDC values are better]
• Maximum Likelihood Estimation (MLE) (seems to work well)
[Plot: optimization performance on instance reC19 for models using each distance measure, plus "All" (all distances, selected via MLE) and GA (model-free); Posq denotes the squared position distance; smaller performance values are better]
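The FDC criterion mentioned above is simply the Pearson correlation between fitness values and each solution's distance to the nearest known optimum. A minimal sketch (function name is ours):

```python
def fdc(fitness, dist_to_opt):
    """Fitness-distance correlation: Pearson correlation between fitness
    values and distances to the nearest known optimum. For minimization,
    values near +1 indicate a landscape the distance measure 'understands'."""
    n = len(fitness)
    mf = sum(fitness) / n
    md = sum(dist_to_opt) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitness, dist_to_opt))
    var_f = sum((f - mf) ** 2 for f in fitness)
    var_d = sum((d - md) ** 2 for d in dist_to_opt)
    return cov / (var_f * var_d) ** 0.5
```

As the slide cautions, FDC can be misleading: it needs a known optimum, and a single correlation coefficient may hide multimodal structure.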
Naujoks, Stork, Zaefferer, Bartz-Beielstein 64 / 74
Research Question: Definiteness[Zaefferer and Bartz-Beielstein, 2016]
So we can just replace the distance or kernel function with something appropriate, and everything is fine, right?

Common requirement for kernels (distances): definiteness²

• Definiteness may be unknown / lacking
• Designing definite kernels may be hard / infeasible
• Required: a correction procedure
• Some results from the SVM field [Ong et al., 2004; Chen et al., 2009; Loosli et al., 2015]; survey: [Schleif and Tino, 2015]
• Can be transferred to Kriging, with some tweaks

² Positive semi-definite kernel matrix: all eigenvalues are positive or zero
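One of the simplest correction procedures from this literature is spectrum clipping: eigendecompose the (symmetric) kernel matrix and set negative eigenvalues to zero. The sketch below (assuming NumPy is available) illustrates the idea; it is one of the corrections surveyed by Schleif and Tino [2015], not the exact procedure of the cited Kriging paper:

```python
import numpy as np

def clip_spectrum(K):
    """Make a symmetric, possibly indefinite kernel matrix positive
    semi-definite by clipping its negative eigenvalues to zero."""
    w, V = np.linalg.eigh(K)            # eigendecomposition of symmetric K
    return (V * np.clip(w, 0.0, None)) @ V.T  # rebuild with clipped spectrum

# Indefinite example: the eigenvalues of this matrix are 3 and -1.
K = np.array([[1.0, 2.0], [2.0, 1.0]])
K_psd = clip_spectrum(K)                # nearest PSD matrix in Frobenius norm
```

Note that such corrections change the similarity values themselves, which is one of the "tweaks" needed when transferring them to Kriging (e.g., the correction must be applied consistently to training and prediction kernels).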
Naujoks, Stork, Zaefferer, Bartz-Beielstein 65 / 74
That’s all Folks. Thanks for hanging on.
• Any questions?
• Discussion:
• Problems you encountered in practice?
• New directions, challenges?
• What is missing in the field?
• Interesting applications?
Naujoks, Stork, Zaefferer, Bartz-Beielstein 66 / 74
Allmendinger, R., Coello, C. A. C., Emmerich, M. T. M., Hakanen, J., Jin, Y.,and Rigoni, E. (2015). Surrogate-assisted multicriteria optimization (wg6). InGreco, S., Klamroth, K., Knowles, J. D., and Rudolph, G., editors,Understanding Complexity in Multiobjective Optimization (Dagstuhl Seminar15031) - Dagstuhl Reports, volume 5, pages 96–163, Dagstuhl, Germany.Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
Bajer, L. and Holeňa, M. (2010). Surrogate model for continuous and discretegenetic optimization based on rbf networks. In Intelligent Data Engineering andAutomated Learning – IDEAL 2010, volume 6283 LNCS, pages 251–258.
Bajer, L. and Holeňa, M. (2013). Surrogate model for mixed-variablesevolutionary optimization based on glm and rbf networks. In SOFSEM 2013:Theory and Practice of Computer Science, volume 7741 LNCS, pages 481–490.
Breiman, L. (2001). Random forests. Machine Learning, 45(1):5–32.

Breiman, L., Friedman, J., Stone, C. J., and Olshen, R. A. (1984). Classification and Regression Trees. CRC Press.
Chen, Y., Gupta, M. R., and Recht, B. (2009). Learning kernels from indefinitesimilarities. In Proceedings of the 26th Annual International Conference onMachine Learning, ICML ’09, pages 145–152, New York, NY, USA. ACM.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 67 / 74
Custódio, F. L., Barbosa, H. J., and Dardenne, L. E. (2010). Full-atom ab initioprotein structure prediction with a genetic algorithm using a similarity-basedsurrogate model. In Proceedings of the Congress on Evolutionary Computation(CEC’10), pages 1–8, New York, NY, USA. IEEE.
Deng, L. and Yu, D. (2014). Deep learning: methods and applications.Foundations and Trends in Signal Processing, 7(3–4):197–387.
Emmerich, M. T. M., Giannakoglou, K. C., and Naujoks, B. (2006). Single- andmultiobjective evolutionary optimization assisted by gaussian random fieldmetamodels. IEEE Transactions on Evolutionary Computation, 10(4):421–439.
Fatima, S. and Kattan, A. (2011). Evolving optimal agendas for package dealnegotiation. In Proceedings of the 13th Annual Conference on Genetic andEvolutionary Computation, GECCO ’11, pages 505–512, New York, NY, USA.ACM.
Flasch, O., Mersmann, O., and Bartz-Beielstein, T. (2010). Rgp: An open sourcegenetic programming system for the r environment. In Proceedings of the 12thAnnual Conference Companion on Genetic and Evolutionary Computation,GECCO ’10, pages 2071–2072, New York, NY, USA. ACM.
Forrester, A., Sobester, A., and Keane, A. (2008). Engineering Design viaSurrogate Modelling. Wiley.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 68 / 74
Forrester, A. I. and Keane, A. J. (2009). Recent advances in surrogate-basedoptimization. Progress in Aerospace Sciences, 45(1):50–79.
Hao, J., Liu, M., Lin, J., and Wu, C. (2016). A hybrid differential evolutionapproach based on surrogate modelling for scheduling bottleneck stages.Computers & Operations Research, 66:215–224.
Haykin, S. (2004). Neural Networks: A Comprehensive Foundation.

Herrera, M., Guglielmetti, A., Xiao, M., and Filomeno Coelho, R. (2014). Metamodel-assisted optimization based on multiple kernel regression for mixed variables. Structural and Multidisciplinary Optimization, 49(6):979–991.
Hildebrandt, T. and Branke, J. (2014). On using surrogates with geneticprogramming. Evolutionary Computation, pages 1–25.
Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm fordeep belief nets. Neural computation, 18(7):1527–1554.
Hornik, K., Stinchcombe, M., and White, H. (1989). Multilayer feedforwardnetworks are universal approximators. Neural networks, 2(5):359–366.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 69 / 74
Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2010). Sequential model-basedoptimization for general algorithm configuration (extended version). TechnicalReport TR-2010-10, University of British Columbia, Department of ComputerScience. Available online:http://www.cs.ubc.ca/˜hutter/papers/10-TR-SMAC.pdf.
Jin, Y. (2011). Surrogate-assisted evolutionary computation: Recent advancesand future challenges. Swarm and Evolutionary Computation, 1(2):61–70.
Jones, D. R. (2001). A taxonomy of global optimization methods based onresponse surfaces. Journal of global optimization, 21(4):345–383.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998a). Efficient globaloptimization of expensive black-box functions. Journal of Global optimization,13(4):455–492.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998b). Efficient globaloptimization of expensive black-box functions. Journal of Global Optimization,13(4):455–492.
Kattan, A. and Ong, Y.-S. (2015). Surrogate genetic programming: A semanticaware evolutionary search. Information Sciences, 296:345–359.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 70 / 74
Knowles, J. (2006). Parego: A hybrid algorithm with on-line landscapeapproximation for expensive multiobjective optimization problems. IEEETransactions on Evolutionary Computation, 10(1):50–66.
Li, R., Emmerich, M. T. M., Eggermont, J., Bovenkamp, E. G. P., Bäck, T.,Dijkstra, J., and Reiber, J. (2008). Metamodel-assisted mixed integer evolutionstrategies and their application to intravascular ultrasound image analysis. InProceedings of the Congress on Evolutionary Computation (CEC’08), pages2764–2771, New York, NY, USA. IEEE.
Loosli, G., Canu, S., and Ong, C. (2015). Learning svm in krein spaces. IEEETransactions on Pattern Analysis and Machine Intelligence, 38(6):1204–1216.
Loshchilov, I., Schoenauer, M., and Sebag, M. (2010). Dominance-BasedPareto-Surrogate for Multi-Objective Optimization. In Simulated Evolution andLearning (SEAL 2010), volume 6457 of LNCS, pages 230–239. Springer.
Moraglio, A. and Kattan, A. (2011a). Geometric generalisation of surrogate modelbased optimisation to combinatorial spaces. In Proceedings of the 11thEuropean Conference on Evolutionary Computation in CombinatorialOptimization, EvoCOP’11, pages 142–154, Berlin, Heidelberg, Germany.Springer.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 71 / 74
Moraglio, A. and Kattan, A. (2011b). Geometric surrogate model basedoptimisation for genetic programming: Initial experiments. Technical report,University of Birmingham.
Moraglio, A., Kim, Y.-H., and Yoon, Y. (2011). Geometric surrogate-basedoptimisation for permutation-based problems. In Proceedings of the 13thAnnual Conference Companion on Genetic and Evolutionary Computation,GECCO ’11, pages 133–134, New York, NY, USA. ACM.
Naujoks, B., Steden, M., Muller, S. B., and Hundemer, J. (2007). Evolutionaryoptimization of ship propulsion systems. In 2007 IEEE Congress on EvolutionaryComputation, pages 2809–2816.
Ong, C. S., Mary, X., Canu, S., and Smola, A. J. (2004). Learning withnon-positive kernels. In Proceedings of the Twenty-first International Conferenceon Machine Learning, ICML ’04, pages 81–88, New York, NY, USA. ACM.
Pilát, M. and Neruda, R. (2016). Feature extraction for surrogate models ingenetic programming. In Parallel Problem Solving from Nature – PPSN XIV,pages 335–344. Springer Nature.
Pérez Cáceres, L., López-Ibáñez, M., and Stützle, T. (2015). Ant colonyoptimization on a limited budget of evaluations. Swarm Intelligence, pages1–22.
Naujoks, Stork, Zaefferer, Bartz-Beielstein 72 / 74
Queipo, N. V., Haftka, R. T., Shyy, W., Goel, T., Vaidyanathan, R., and Tucker, P. K. (2005). Surrogate-based analysis and optimization. Progress in Aerospace Sciences, 41(1):1–28.
Rao, S. V. N. and Manju, S. (2007). Optimal pumping locations of skimming wells. Hydrological Sciences Journal, 52(2):352–361.
Romero, P. A., Krause, A., and Arnold, F. H. (2013). Navigating the protein fitness landscape with Gaussian processes. Proceedings of the National Academy of Sciences, 110(3):E193–E201.
Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989). Design and analysis of computer experiments. Statistical Science, pages 409–423.
Schleif, F.-M. and Tino, P. (2015). Indefinite proximity learning: A review. Neural Computation, 27(10):2039–2096.
Smith, J., Stone, C., and Serpell, M. (2016). Exploiting diverse distance metrics for surrogate-based optimisation of ordering problems: A case study. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '16, pages 701–708, New York, NY, USA. ACM.
Voutchkov, I., Keane, A., Bhaskar, A., and Olsen, T. M. (2005). Weld sequence optimization: The use of surrogate models for solving sequential combinatorial problems. Computer Methods in Applied Mechanics and Engineering, 194(30-33):3535–3551.
Zaefferer, M. and Bartz-Beielstein, T. (2016). Efficient global optimization with indefinite kernels. In Parallel Problem Solving from Nature – PPSN XIV, pages 69–79. Springer.
Zaefferer, M., Stork, J., and Bartz-Beielstein, T. (2014a). Distance measures for permutations in combinatorial efficient global optimization. In Bartz-Beielstein, T., Branke, J., Filipic, B., and Smith, J., editors, Parallel Problem Solving from Nature – PPSN XIII, pages 373–383, Cham, Switzerland. Springer.
Zaefferer, M., Stork, J., Friese, M., Fischbach, A., Naujoks, B., and Bartz-Beielstein, T. (2014b). Efficient global optimization for combinatorial problems. In Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, GECCO '14, pages 871–878, New York, NY, USA. ACM.
Investigating the Effectiveness of Multi-Criteria Surrogate-Assisted Evolutionary Algorithms

Vanessa Volz* and Boris Naujoks+
*TU Dortmund University, Germany
+TH Köln - University of Applied Sciences, Germany
09/05/2017
V. Volz, B. Naujoks SAPEO Effectiveness @CIGbalance 1 / 48
Overview / Orientation
Background
SAPEO Concepts
BBOB Benchmark
Single-Objective Results
Bi-Objective BBOB
Multi-Objective Results
Further Investigation
Surrogate-Assisted Evolutionary Optimisation

Problem: Expensive fitness function
Idea: Replace evaluations with predictions from a surrogate model
Different model management strategies exist, e.g. pre-selection*; however, the predicted estimation error is rarely considered

[Flowchart: Initialise EA + model → parent population → generate λ* ≥ λ offspring → select best λ (predicted fitness) → select best µ (fitness function) → continue? If yes, repeat; if no, return best solutions found]

*Emmerich, M. et al.: Single- and Multi-objective Evolutionary Optimization Assisted by Gaussian Random Field Metamodels. IEEE Trans. Evol. Comput. 10(4), 421-439 (2006)
Multi-Objective Optimisation

Multiple objective functions are considered. Minimize

    f : ℝⁿ → ℝᵐ, f(x) = (f₁(x), …, f_m(x))

Pareto dominance: solution x dominates solution y:

    x ≺_p y :⇔ ∀i : f_i(x) ≤ f_i(y) ∧ ∃j : f_j(x) < f_j(y)   (i, j = 1, …, m)

Pareto set: set of all non-dominated solutions in the search space: {x | ∄z : z ≺_p x}
Pareto front: image of the Pareto set in objective space
Hypervolume and SMS-EMOA

Hypervolume: space covered by the Pareto front w.r.t. a reference point

Advantages:
Honors uniform distribution
Honors convergence
Extends to higher dimensions

(µ + 1) SMS-EMOA: hypervolume selection; the solution with the least hypervolume contribution is omitted
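The (µ + 1) selection above can be sketched in a few lines. This is a minimal illustration for two objectives under minimisation, not the SMS-EMOA reference implementation; the function names are illustrative only.

```python
# Hypothetical sketch: 2-D hypervolume and per-point contribution for a set
# of mutually non-dominated minimisation points, w.r.t. a reference point.

def hypervolume_2d(points, ref):
    """Hypervolume dominated by non-dominated 2-D points (minimisation)."""
    pts = sorted(points)                      # ascending in f1, descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # slab between consecutive points
        prev_f2 = f2
    return hv

def contributions(points, ref):
    """Exclusive hypervolume contribution of each point."""
    total = hypervolume_2d(points, ref)
    return [total - hypervolume_2d(points[:i] + points[i + 1:], ref)
            for i in range(len(points))]

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
ref = (5.0, 5.0)
print(contributions(front, ref))  # [1.0, 4.0, 1.0]
```

The (µ + 1) step of SMS-EMOA would then discard the point with the smallest contribution.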
Interval Representation of Kriging Estimates

The Kriging model provides the expected value f̃(x) and an error estimate σ̃_x. Each estimate is represented by the confidence interval [f̃(x) − u_x, f̃(x) + u_x] with half-width

    u_x = σ̃_x · z_(1−α/2)

[Figure: Kriging prediction f̃ with confidence intervals; plot created using the R package DiceKriging]
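For a Gaussian predictive distribution, the half-width u_x is just the error estimate scaled by a standard normal quantile. A minimal sketch (illustrative function name):

```python
# Sketch of the interval half-width u_x = sigma_x * z_(1 - alpha/2),
# assuming a Gaussian predictive distribution, as for Kriging.
from statistics import NormalDist

def half_width(sigma_x, alpha=0.05):
    """Half-width of the (1 - alpha) confidence interval around f~(x)."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)  # standard normal quantile
    return sigma_x * z

f_hat, sigma = 3.0, 0.5
u = half_width(sigma)               # approx. 0.98 for alpha = 0.05
interval = (f_hat - u, f_hat + u)   # [f~(x) - u_x, f~(x) + u_x]
```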
Partial Order on Intervals‡

Set of closed intervals Π = {[x1, x2] ⊂ ℝ : x1 ≤ x2}

    [x1, x2] ≺ [y1, y2] ⇔ x2 < y1
    [x1, x2] = [y1, y2] ⇔ x1 = y1 ∧ x2 = y2
    [x1, x2] ⪯ [y1, y2] ⇔ x ≺ y ∨ x = y

(Π, ⪯) is a partially ordered set; intervals with nonvoid intersection are incomparable.

‡from Rudolph, G.: A Partial Order Approach to Noisy Fitness Functions. In: IEEE Congress on Evolutionary Computation (CEC 2001). pp. 318-325. IEEE Press, Piscataway, NJ (2001)
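The interval order above can be made concrete with a small comparison function (a sketch; the name is illustrative):

```python
# Sketch of the partial order on closed intervals (Rudolph, 2001):
# [x1, x2] precedes [y1, y2] iff x2 < y1; overlapping intervals are incomparable.

def interval_cmp(x, y):
    """Return '<', '>', '=', or 'incomparable' for closed intervals x, y."""
    if x == y:
        return '='
    if x[1] < y[0]:
        return '<'
    if y[1] < x[0]:
        return '>'
    return 'incomparable'   # nonvoid intersection => incomparable

print(interval_cmp((0.0, 1.0), (2.0, 3.0)))  # '<'
print(interval_cmp((0.0, 2.5), (2.0, 3.0)))  # 'incomparable'
```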
⪯f : Pareto Dominance on Function Values

    x ⪯_f y := ∀k ∈ {1, …, d} : f_k(x) ≤ f_k(y) ∧ ∃k ∈ {1, …, d} : f_k(x) < f_k(y)
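The dominance check translates directly into code. A minimal sketch for minimisation (function name is illustrative):

```python
# Minimal sketch of Pareto dominance on objective vectors (minimisation):
# x dominates y iff it is no worse in every objective and strictly better in one.

def dominates(fx, fy):
    """True iff objective vector fx Pareto-dominates fy."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

print(dominates((1.0, 2.0), (2.0, 2.0)))  # True
print(dominates((1.0, 3.0), (2.0, 2.0)))  # False: incomparable
```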
⪯u : Confidence Interval Dominance§

    x ⪯_u y := ⋀_{k ∈ [1…d]} f̃_k(x) + u_x < f̃_k(y) − u_y

§cf. Mlakar, M. et al.: GP-DEMO: Differential Evolution for Multiobjective Optimization Based on Gaussian Process Models. EUR J OPER RES 243(2), 347-361 (2015)
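Confidence interval dominance requires the intervals to be disjoint in every objective. A minimal sketch (illustrative name, scalar half-widths per solution as in the definition above):

```python
# Sketch of confidence interval dominance: x precedes y only when, in every
# objective k, the upper bound of x lies strictly below the lower bound of y.

def ci_dominates(fx, ux, fy, uy):
    """True iff [f~_k(x) +/- u_x] lies strictly below [f~_k(y) +/- u_y] for all k."""
    return all(a + ux < b - uy for a, b in zip(fx, fy))

print(ci_dominates((1.0, 1.0), 0.2, (2.0, 2.0), 0.2))  # True
print(ci_dominates((1.0, 1.0), 0.6, (2.0, 2.0), 0.6))  # False: intervals overlap
```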
⪯c : Confidence Interval Bounds as Objectives

The lower and upper confidence interval bounds of each objective are treated as separate objectives:

    x ⪯_c y := ⋀_{k ∈ [1…d]} (f̃_k(x) − u_x, f̃_k(x) + u_x) ≤ (f̃_k(y) − u_y, f̃_k(y) + u_y) componentwise,
    with strict inequality in at least one bound for some k ∈ [1…d]

[Figure: confidence intervals of x and y in objective space, and the corresponding lower/upper bounds of f1 and f2 as separate axes]
⪯p : Pareto Dominance on Predicted Values

    x ⪯_p y := ∀k ∈ {1, …, d} : f̃_k(x) ≤ f̃_k(y) ∧ ∃k ∈ {1, …, d} : f̃_k(x) < f̃_k(y)
⪯o : Pareto Dominance on Lower Bounds

    x ⪯_o y := ⋀_{k ∈ [1…d]} f̃_k(x) − u_x ≤ f̃_k(y) − u_y ∧ ∃k ∈ [1…d] : f̃_k(x) − u_x < f̃_k(y) − u_y
Secondary Criterion: Hypervolume Indicators

≤ho: hypervolume contribution of objective values
≤hc: hypervolume contribution of confidence interval bounds

[Figure: hypervolume contributions computed on objective values (left) and on confidence interval bounds, with the lower and upper bounds of f1 and f2 as separate axes (right)]
Surrogate-Assisted Partial Order-Based Evolutionary Optimisation (SAPEO)

[Flowchart: A DoE of size k forms the starting population. EA main loop: loop through the population, estimate fitness with a local model, enforce the uncertainty threshold ε, sort the individuals, select offspring, and generate the next population. If the run continues, the population is evaluated with the true fitness function (only ⪯f information enters the evaluation archive) and the uncertainty threshold is decreased; otherwise the best solutions found are returned.]
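The loop in the flowchart can be sketched at a high level. This is a deliberately simplified, runnable illustration for a single objective, with a crude nearest-neighbour "surrogate" standing in for the local Kriging model and plain fitness sorting standing in for the partial-order ranking; none of the names below are the authors' implementation.

```python
# Illustrative sketch of the SAPEO-style loop: predict offspring fitness with a
# surrogate, rank them, and spend true evaluations only on the best candidates.
import random

def sapeo_sketch(f, dim=2, budget=60, mu=5, lam=10, seed=1):
    rng = random.Random(seed)
    doe = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    archive = [(x, f(x)) for x in doe]           # DoE starting population
    evals = mu
    while evals + mu <= budget:                  # EA main loop
        parents = sorted(archive, key=lambda a: a[1])[:mu]
        offspring = [[xi + rng.gauss(0, 0.5) for xi in rng.choice(parents)[0]]
                     for _ in range(lam)]        # generate population

        def predict(x):                          # crude stand-in surrogate:
            return min(archive,                  # value of nearest archive point
                       key=lambda a: sum((u - v) ** 2
                                         for u, v in zip(a[0], x)))[1]

        offspring.sort(key=predict)              # rank by predicted fitness
        archive += [(x, f(x)) for x in offspring[:mu]]  # true evals on best mu only
        evals += mu
    return min(archive, key=lambda a: a[1])      # return best solution found

best_x, best_f = sapeo_sketch(lambda x: sum(xi ** 2 for xi in x))
```

The uncertainty threshold and the partial orders ⪯u, ⪯c, ⪯p are omitted here for brevity; the point is the structure: surrogate prediction filters the offspring before the expensive function is called.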
BBOB Single-Objective Test Suite

Benchmarking procedure:
Fixed-target scenario
Precision targets between 10⁰ and 10⁻⁸
Expected running time for restart algorithms

24 different functions in 5 classes:
Separable functions
Functions with low or moderate conditioning
Functions with high conditioning, unimodal
Multi-modal functions with adequate global structure
Multi-modal functions with weak global structure
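The expected running time (ERT) mentioned above is commonly estimated as the total number of evaluations spent across all runs divided by the number of runs that reached the target. A minimal sketch (illustrative function name):

```python
# Sketch of the ERT estimator used in fixed-target benchmarking.

def expected_running_time(evals_per_run, success_per_run):
    """ERT for a restart algorithm over independent runs."""
    successes = sum(success_per_run)
    if successes == 0:
        return float('inf')                       # target never reached
    return sum(evals_per_run) / successes

# three runs: two hit the target, one exhausted its budget
print(expected_running_time([120, 500, 90], [True, False, True]))  # 355.0
```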
Empirical Cumulative Distribution Functions

[Figure: ECDF of the proportion of function+target pairs vs. log10(# f-evals / dimension) for bbob f3 (Rastrigin separable), 15 instances, in dimensions 2, 3, 5, 10 and 20]
Average Number of Evaluations to Reach Target

[Figure: average number of evaluations vs. dimension (2, 3, 5, 10, 20, 40) for absolute targets 10¹ down to 10⁻⁸ on bbob f1 (Sphere)]
Algorithms

CMA-ES          Standard CMA-ES
SAPEO-minRisk   SAPEO with ⪯u, ⪯f
SAPEO-maxRisk   SAPEO with ⪯u, ⪯c, ⪯p
SAPEO-woMO      SAPEO with ⪯u, ⪯p

Each algorithm is run with a sample size of λ and of 2λ.
Experiment specifications and parameters

budget                                  1000 ffe per dimension
variation operators                     standard for all algorithms
number of offspring λ                   4 + ⌊3 log n⌋
number of parents µ                     ⌊λ/2⌋
individual weights w_i                  1/µ
covariance matrix update weight c_cov   2/n²
step size σ                             1
sample size for surrogate               λ
correlation assumption                  squared exponential
trend assumption                        constant
regression weights                      maximum likelihood using COBYLA
                                        (start: 10⁻², bounds: [10⁻⁴, 10¹])
Single-Objective SAPEO Result Patterns

[Figure: ECDFs of the proportion of function+target pairs for bbob f1 (Sphere), f5 (Linear slope), f9 (Rosenbrock rotated) and f13 (Sharp ridge) in 2-D, 15 instances each, comparing CMA-ES, SAPEO-minRisk-1/2, SAPEO-maxRisk-1/2 and SAPEO-woMO-1/2 against the best 2009 reference]
Single-Objective SAPEO Result Patterns (cont'd)

[Figure: average number of evaluations vs. dimension (2–40) to reach target Δf = 10⁻⁸ on f1 (Sphere), f5 (Linear slope), f9 (Rosenbrock rotated) and f13 (Sharp ridge), for the same algorithms]
BBOB Bi-Objective Benchmarking

[Figure: the 55 bi-objective benchmark functions formed by pairwise combination of 10 single-objective bbob functions: f01 Sphere, f02 Ellipsoid, f06 Attractive sector, f08 Rosenbrock, f13 Sharp ridge, f14 Sum of different powers, f15 Rastrigin, f17 Schaffer F7 (condition 10), f20 Schwefel x·sin(x), f21 Gallagher 101; the constituent functions cover the classes separable, moderate, ill-conditioned, multi-modal and weakly structured]
Algorithms

SMS-EMOA        Standard SMS-EMOA
SA-SMS-exp      SMS-EMOA with ⪯p (pre-selection)
SA-SMS-opt      SMS-EMOA with ⪯o (pre-selection)
SAPEO-minRisk   SAPEO with ⪯u, ⪯f (ranking), ≤ho (secondary criterion)
SAPEO-maxRisk   SAPEO with ⪯u, ⪯c, ⪯p (ranking), ≤ho (secondary criterion)
SAPEO-MO        SAPEO with ⪯u, ⪯c (ranking), ≤hc (secondary criterion)
Experiment specifications and parameters

budget                          1000 ffe per dimension
variation operators             standard for all algorithms
population size                 100
sample size for surrogate       15
number of candidate offspring   15 (for SA-SMS, same as sample size)
correlation assumption          squared exponential
trend assumption                constant
regression weights              maximum likelihood using COBYLA
                                (start: 10⁻², bounds: [10⁻⁴, 10¹])
SAPEO-minRisk Performance, Target 10¹ [Budget Percentage]

[Figure: heatmap of the percentage of the budget used to reach target precision 10¹, per function (f01–f55), dimension (2, 3, 5, 10, 20) and instance (i01–i10)]
Target Performance Heatmaps: SAPEO

[Figure: heatmaps for SAPEO-MO, SAPEO-maxRisk and SAPEO-minRisk at target precisions 10¹, 10⁰, 10⁻¹, 10⁻² and 10⁻³]
Target Performance Heatmaps: SA-SMS and SMS-EMOA

[Figure: heatmaps for SA-SMS-opt, SA-SMS-exp and SMS-EMOA at target precisions 10¹, 10⁰, 10⁻¹, 10⁻² and 10⁻³]
Performance on Targets 10⁰ and 10⁻¹

[Figure: heatmaps at targets 10⁰ and 10⁻¹ for SA-SMS-opt, SA-SMS-exp, SMS-EMOA, SAPEO-MO, SAPEO-maxRisk and SAPEO-minRisk]
Aggregated Performance Results [Expected Runtime]

[Figure: expected runtime (log10 scale, −2 to 5) per function (f01–f55) for SAPEO-uf-ho, SMS-EMOA, SAPEO-ucp-ho, SAPEO-uc-hc, SA-SMS-p and SA-SMS-o in dimensions 2, 3, 5, 10 and 20]
Possible Selection Errors

[Figure: a population in a two-dimensional decision space (x, y ∈ [−2, 2]), with individuals marked by the number of possible selection errors (0, 1, 2 or 3)]
Critical Individuals SAPEO-u

[Figure: boxplots of the proportion (0.0–1.0) of critical individuals (SAPEO-c) for settings 0.1, 0.5, 1, 1.5 and 2 in dimensions 2 and 10]
Confidence Interval Errors

[Figure: boxplots of the proportion (0.0–1.0) of confidence interval errors (SAPEO-c) for settings 0.1, 0.5, 1, 1.5 and 2 in dimensions 2 and 10]
Selection Errors SAPEO-u vs. SAPEO-up
[Figure: plot residue, image not recoverable. Two panels, "Selection Errors (SAPEO−c)" and "Selection Errors (SAPEO−cp)"; y-axis 0.0–1.0; x-axis parameter settings 0.1, 0.5, 1, 1.5, 2 for dimensions 2 and 10.]
V. Volz, B. Naujoks — SAPEO Effectiveness @CIGbalance, slide 41/48
Distance between Means SAPEO-u vs. SAPEO-up
[Figure: plot residue, image not recoverable. Two panels, "Distance between means (SAPEO−c)" and "Distance between means (SAPEO−cp)"; y-axis 0.0–1.2; x-axis parameter settings 0.1, 0.5, 1, 1.5, 2 for dimensions 2 and 10.]
V. Volz, B. Naujoks — SAPEO Effectiveness @CIGbalance, slide 42/48
SYNERGY Horizon 2020 – GA No 692286
D3.2 91 31 July 2018
Difference min EV SAPEO-u vs. SAPEO-up
[Figure: boxplots of the difference in minimal EV for SAPEO−c and SAPEO−cp, y-axis from −2e−15 to 2e−15, shown for parameter settings {0.1, 0.5, 1, 1.5, 2} in dimensions 2 and 10]
V. Volz, B. Naujoks SAPEO Effectiveness @CIGbalance 43 / 48
Absolute Main Axis Angle SAPEO-u vs. SAPEO-up
[Figure: boxplots of the absolute main axis angle for SAPEO−c and SAPEO−cp, y-axis from 0 to 80, shown for parameter settings {0.1, 0.5, 1, 1.5, 2} in dimensions 2 and 10]
V. Volz, B. Naujoks SAPEO Effectiveness @CIGbalance 44 / 48
Empirical Frequency of Ranking Errors
[Figure: P(X ≤ Y) (0.5 to 1.0) plotted against α (0.0 to 1.0) for rho = 1, with curves for β ∈ {0.1, 0.6, 1.1, 1.6, 2.1, 2.6}]
V. Volz, B. Naujoks SAPEO Effectiveness @CIGbalance 45 / 48
Probability of Evaluation (including Theoretical Bounds)
[Figure: probability / frequency of evaluation (0.0 to 1.0) plotted against incomparability (0 to 5)]
V. Volz, B. Naujoks SAPEO Effectiveness @CIGbalance 46 / 48
Future Work
- Interaction of Model Quality with Optimisation Result
- Theoretical Performance Bounds
  - Expected and Worst-Case Progress Rate
- Surrogate Model Sample Size and Strategy
  - Accuracy of σ prediction
  - Further Investigation of Dominance Relations
V. Volz, B. Naujoks SAPEO Effectiveness @CIGbalance 47 / 48
Thank You For Your Attention
Questions?
V. Volz, B. Naujoks SAPEO Effectiveness @CIGbalance 48 / 48
Appendix B: WCCI 2016 special session program
Session FA-13: Meta-modeling and Surrogate Models / SS CEC-55: Multiobjective Optimization with Surrogate Models
Date/Time: Friday, 29 July 2016, 8:00AM – 10:00AM
Venue: Room 203
Chair(s): Tapabrata Ray / Bogdan Filipic

8:00AM (E-16059) Multiple Surrogate Assisted Multiobjective Optimization using Improved Pre-selection
Kalyan Shankar Bhattacharjee (1), Hemant Kumar Singh (1), Tapabrata Ray (1) and Juergen Branke (2)
(1) School of Engineering and Information Technology, The University of New South Wales, Australia; (2) Warwick Business School, University of Warwick, UK.

8:20AM (E-16429) A Multi-objective Batch Infill Strategy for Efficient Global Optimization
Ahsanul Habib, Hemant Kumar Singh and Tapabrata Ray
School of Engineering and Information Technology, University of New South Wales, Australia.

8:40AM (E-16350) Ensemble of Surrogates Based on Error Classification by Unsupervised Learning
Yi Zhang, Xiaoqian Chen, Ning Wang, Wen Yao and Bingxiao Du
College of Aerospace Science and Engineering, National University of Defense Technology, Changsha, Hunan, China.

9:00AM (E-16718) Truncated Expected Hypervolume Improvement: Exact Computation and Application
Kaifeng Yang, Andre Deutz, Zhiwei Yang, Thomas Back and Michael Emmerich
Leiden Institute of Advanced Computer Science, Leiden University, Leiden, 2333 CA, The Netherlands.

9:20AM (E-17169) Study of the Approximation of the Fitness Landscape and the Ranking Process of Scalarizing Functions for Many-objective Problems
Gregorio Toscano (1) and Kalyanmoy Deb (2)
(1) CINVESTAV-Tamaulipas, Cd. Victoria, Tamaulipas, 87130, Mexico; (2) Michigan State University, East Lansing, MI, USA.