03 Operations Research


The role of Operations Research

Solving Decisional Models for Environmental Management


    The search process

[Flowchart, adapted from Seppelt, 2003: the control variable u(t) feeds the simulation model x(t+1) = f(x(t), u(t)); the simulation results (time series, maps) are scored by the performance criterion J = J(x, u); a search algorithm checks whether J is maximal and either modifies the control variable u(t) and repeats, or stops.]


    What is search

To search means to explore possible values for the control variable u, to obtain the value of the state x caused by u, and to evaluate the performance J
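As an illustration, here is a minimal Python sketch of this simulate-and-evaluate loop; the model f, the criterion J, the candidate control sequences and the exhaustive search strategy are placeholder assumptions, not taken from the slides.

# Minimal sketch of the search loop: try a control, simulate the state
# trajectory it causes, score it with J, and keep the best control found.

def simulate(f, x0, u, steps):
    """Roll the model x(t+1) = f(x(t), u(t)) forward from x0."""
    xs = [x0]
    for t in range(steps):
        xs.append(f(xs[-1], u[t]))
    return xs

def search(f, J, x0, candidate_controls, steps):
    """Exhaustively evaluate candidate control sequences (placeholder strategy)."""
    best_u, best_J = None, float("-inf")
    for u in candidate_controls:
        xs = simulate(f, x0, u, steps)
        score = J(xs, u)
        if score > best_J:
            best_u, best_J = u, score
    return best_u, best_J

# Toy example (illustrative only): a scalar linear model and a quadratic criterion.
f = lambda x, u: 0.9 * x + u
J = lambda xs, u: -sum(x ** 2 for x in xs) - 0.1 * sum(v ** 2 for v in u)
controls = [[a] * 10 for a in (-1.0, -0.5, 0.0, 0.5, 1.0)]
print(search(f, J, x0=5.0, candidate_controls=controls, steps=10))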


    The policy

The policy defines the value of u to be chosen, given the value of x, at each time step t and in each spatial location z:

P = { M_t(·) ; t = 0, 1, ... }

i.e. a sequence of control laws u_t = M_t(x_t)


    Finding the policy

The problem is hard:

effects are uncertain

decisions are dynamic and recursive


    Scenario analysis

Scenario definition requires knowledge of the modelled system, which might not be available due to complexity (Jakeman and Letcher, 2003)

In some (most) cases scenarios are too limited

Too much weight on past experience


    Optimisation

Given the goal, the computer automatically generates scenarios by combining parameters, controls and exogenous inputs

A numerical optimisation algorithm is in charge of evaluating the performance and directing the exploration in the most promising direction


    Simulation explained


    The simulation model

It is a constraint for the optimisation problem:

all model variables may vary in space

boundary and initial conditions are given

x_{t+Δt}(z) = M_{Δt}(x_t(z), u_t(z), θ(z), z)


The performance criterion

state x and control u depend on time and location

the performance criterion maps a trajectory from state and policy space to a real number

J(x_t(z), u_t(z))


    The problem

Model: x_{t+Δt}(z) = M_{Δt}(x_t(z), u_t(z), θ(z), z)

Performance criterion: J(x_t(z), u_t(z))

Problem: find u* so that

x(t) = M_{Δt}(x, u*, θ)

J(x, u*) >= J(x, u)

for all t in [0, T] and u in U


    Solving the problem

Exact methods

Approximation algorithms

Heuristic and metaheuristic algorithms


    Exact methods

The algorithm depends on the problem formulation:

Linear programming: simplex

Integer programming: branch & bound

Dynamic programming
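To make the linear-programming case concrete, here is a small example solved with SciPy's linprog; the objective, the constraints and the choice of SciPy are illustrative assumptions, not taken from the slides.

# Maximise 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimises, so the objective is passed with a flipped sign.
from scipy.optimize import linprog

result = linprog(
    c=[-3, -2],                     # minimise -(3x + 2y)
    A_ub=[[1, 1], [1, 3]],          # left-hand sides of the <= constraints
    b_ub=[4, 6],                    # right-hand sides
    bounds=[(0, None), (0, None)],  # x >= 0, y >= 0
    method="highs",
)
print(result.x, -result.fun)        # optimal (x, y) and the maximised objective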


Approximation algorithms

Find a sub-optimal solution

It is possible to say how far it is from the optimum (e.g. 95%)

Formally proven

Often very specific to the problem at hand

E.g.: neuro-dynamic programming


    Heuristics

Apply rules of thumb to the solution of the problem

No guarantee to find an optimum

No guarantee on the quality of the solution

Yet, they are fast and generic


    Metaheuristics

A set of concepts that can be used to define heuristic methods applicable to a wide set of different problems

General algorithmic frameworks which can be applied to different optimization problems with relatively few modifications to adapt them to a specific problem


    Metaheuristics are...

Simulated annealing

Evolutionary computation

Tabu search

Ant Colony Systems


    Ant Colony Systems


    Ants

Leaf cutters, fungi growers

Breeding other insects

Weaver ants



Insects, social insects, and ants

10^18 living insects (estimate)

    ~2% of all insects are social

    Examples of social insects:

    All ants

    All termites

    Some bees

    Some wasps

    50% of all social insects are ants

    Average body weight of an ant: 1 - 5 mg

    Total mass of ants ~ total mass of humans

Ants have colonized the earth for 100 million years; Homo sapiens sapiens for 50,000 years


    Ant colony society

Size of a colony: from a few (30) to millions of worker ants

Labour division:

Queen: reproduction

Soldiers: defense

Specialised workers: food gathering, offspring tending, nest grooming, nest building and maintenance


How do ants coordinate their activities?

Basic principle: stigmergy

Stigmergy is a kind of indirect communication, used by social insects to coordinate their activities


Stigmergy

Stimulation of working ants due to the results they have reached (Grassé P. P., 1959)


    Artificial Stigmergy

Indirect communication by changes in the environment, accessible only locally to the communicating agents

Features of artificial stigmergy:

Indirect communication

Local accessibility

    (Dorigo and Di Caro, 1999)


    The bridge experiment

Goss et al., 1989; Deneubourg et al., 1990


    Pheromone trail

    Ants and termites follow pheromone trails


Asymmetric bridge experiment

    Goss et al., 1989


    Ant System for TSP

Travelling Salesman Problem (TSP)

Pheromone trail deposition

Probabilistic rule to choose the path

Memory

Dorigo, Maniezzo, Colorni, 1991; Gambardella & Dorigo, 1995


Pheromone, heuristics and memory to visit the next city

An ant k in city i chooses the next city j among those not in its memory of visited cities, with probability

p_ij^k(t) = f(τ_ij(t), η_ij(t))

where τ_ij is the pheromone trail on link (i, j) and η_ij is the heuristic desirability of the link
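A minimal sketch of this transition rule in Python; the alpha and beta exponents and the 1/distance heuristic follow the common Ant System formulation and are assumptions here, not taken from the slides.

import numpy as np

def transition_probabilities(i, visited, tau, dist, alpha=1.0, beta=2.0):
    """Probability of moving from city i to each city j not yet visited,
    proportional to tau[i, j]**alpha * eta[i, j]**beta (eta assumed to be 1/distance)."""
    n = tau.shape[0]
    eta = np.zeros(n)
    reachable = dist[i] > 0
    eta[reachable] = 1.0 / dist[i][reachable]
    weights = tau[i] ** alpha * eta ** beta
    weights[list(visited)] = 0.0      # memory: cities already visited get zero probability
    return weights / weights.sum()

# Toy usage: 4 cities, uniform initial pheromone, ant k currently in city 0.
dist = np.array([[0.0, 2.0, 9.0, 10.0],
                 [2.0, 0.0, 6.0, 4.0],
                 [9.0, 6.0, 0.0, 3.0],
                 [10.0, 4.0, 3.0, 0.0]])
tau = np.ones((4, 4))
p = transition_probabilities(i=0, visited={0}, tau=tau, dist=dist)
next_city = np.random.default_rng(0).choice(4, p=p)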


Pheromone trail deposition

τ_ij(t+1) = (1 - ρ) τ_ij(t) + Δτ_ij^k(t)

Δτ_ij^k(t) = quality_k

where (i, j) are the links visited by ant k, and where quality_k is inversely proportional to the length of the solution found by ant k
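A sketch of this update in Python; the evaporation rate rho and the Q/length deposit are the usual Ant System choices and are assumptions here, not taken from the slides.

import numpy as np

def update_pheromone(tau, ant_tours, tour_lengths, rho=0.5, Q=1.0):
    """Evaporate all trails, then let each ant k deposit quality_k = Q / L_k
    on every link (i, j) of its tour (tours are lists of city indices)."""
    tau *= (1.0 - rho)                       # evaporation: tau <- (1 - rho) * tau
    for tour, length in zip(ant_tours, tour_lengths):
        deposit = Q / length                 # inversely proportional to the tour length
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[i, j] += deposit
            tau[j, i] += deposit             # symmetric TSP: reinforce both directions
    return tau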


Why does it work?

Three main components:

TIME: a short route gets pheromone more quickly

QUALITY: a short route gets more pheromone

COMBINATORICS: a short path gets pheromone more frequently, since it (usually) has a smaller number of decision points



Some results on the Travelling Salesman Problem


A constructive heuristic with local search

The best strategy to approximate the solution of a combinatorial optimisation problem is to couple a constructive heuristic with local search

It is hard to find the best coupling: Ant System (and similar algorithms) appear to have found it


The ACO metaheuristic

Ant System has been extended to apply to any problem formulated as a shortest-path problem

The extension has been named Ant Colony Optimisation (ACO) metaheuristic

The main application:

NP-hard combinatorial optimisation problems


The ACO-metaheuristic procedure

procedure ACO-metaheuristic()
  while (not termination-criterion)
    planning subroutines
      generate-and-manage-ants()
      evaporate-pheromone()
      execute-daemon-actions()   {optional, e.g. local search}
    end planning subroutines
  end while
end procedure
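As a concrete instance of this loop, here is a self-contained sketch of a basic Ant System for the TSP in Python; the parameter values (alpha, beta, rho, Q, number of ants) and the random toy instance are illustrative assumptions, not taken from the slides.

import numpy as np

def ant_system_tsp(dist, n_ants=10, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0, seed=0):
    """Basic Ant System: ants build tours with the probabilistic rule,
    pheromone evaporates, and each ant deposits Q / tour_length on its links."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    eta = np.where(dist > 0, 1.0 / np.where(dist > 0, dist, 1.0), 0.0)  # heuristic = 1/distance
    tau = np.ones((n, n))
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours, lengths = [], []
        for _ in range(n_ants):                       # generate-and-manage-ants()
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                w = (tau[i] ** alpha) * (eta[i] ** beta)
                w[tour] = 0.0                         # memory: skip visited cities
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[a, b] for a, b in zip(tour, tour[1:] + tour[:1]))
            tours.append(tour)
            lengths.append(length)
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                            # evaporate-pheromone()
        for tour, length in zip(tours, lengths):      # pheromone deposition
            for a, b in zip(tour, tour[1:] + tour[:1]):
                tau[a, b] += Q / length
                tau[b, a] += Q / length
    return best_tour, best_len

# Toy usage: random symmetric instance with 12 cities.
rng = np.random.default_rng(1)
pts = rng.random((12, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(ant_system_tsp(dist))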


    ACO: Applications

    Sequential ordering in a production line

Vehicle routing for the distribution of goods by truck

Job-shop scheduling

Project scheduling

Water distribution problems


    Genetic Algorithms


    Evolutionary computing

EC (Evolutionary Computing) =

GA (Genetic Algorithms - Holland, 1975)

ES (Evolution Strategies - Rechenberg, 1973)

EP (Evolutionary Programming - Fogel, Owens, Walsh, 1966)

GP (Genetic Programming - Koza, 1992)


    Biological basis

Evolution operates on chromosomes, which encode the structure of a living being

Natural selection favours reproduction of the most efficient chromosomes

Mutations (new chromosomes) and recombination (mixing of chromosomes) occur during reproduction


    Use

GAs are very well suited when the structure of the search space is not well known

In other words, if the model of our system is rough, we can use GAs to find the best policy


    How it works

A genetic algorithm maintains a population of candidate solutions for the problem at hand, and makes it evolve by iteratively applying a set of stochastic operators


The skeleton of the algorithm

Generate initial population P(0); t := 0
while not converging do
  evaluate P(t)
  P'(t) := select best individuals from P(t)
  P''(t) := apply reproduction operators to P'(t)
  P(t+1) := replace individuals(P(t), P''(t))
  t := t + 1
end while
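A minimal runnable version of this skeleton in Python; the bit-string encoding, tournament selection, one-point crossover, bit-flip mutation and the one-max toy fitness are all illustrative assumptions, not taken from the slides.

import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, n_gen=50,
                      p_cross=0.9, p_mut=0.02, k=3, seed=0):
    """Skeleton GA: evaluate, select, reproduce (crossover + mutation), replace."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = [fitness(ind) for ind in pop]                 # evaluate P(t)
        def tournament():                                      # select parents from P(t)
            return max(rng.sample(range(pop_size), k), key=lambda i: scores[i])
        new_pop = []
        while len(new_pop) < pop_size:                         # reproduction
            a, b = pop[tournament()], pop[tournament()]
            if rng.random() < p_cross:                         # one-point crossover
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            new_pop += [[bit ^ (rng.random() < p_mut) for bit in child]  # bit-flip mutation
                        for child in (a, b)]
        pop = new_pop[:pop_size]                               # replace P(t) with P(t+1)
    return max(pop, key=fitness)

# Toy usage: maximise the number of 1s in the bit string (one-max).
best = genetic_algorithm(fitness=sum)
print(best, sum(best))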


    Components of a GA

Encoding principles (gene, chromosome)

Initialization procedure (creation)

Selection of parents (reproduction)

Genetic operators (mutation, recombination)

Evaluation function (environment)

Termination condition


Representation (encoding)

Possible individual encodings:

Bit strings (0101 ... 1100)

Real numbers (43.2 -33.1 ... 0.0 89.2)

Permutations of elements (E11 E3 E7 ... E1 E15)

Lists of rules (R1 R2 R3 ... R22 R23)

Program elements (genetic programming)

... any data structure ...


    Choosing the encoding

Use a data structure as close as possible to the natural representation

Write appropriate genetic operators as needed

If possible, ensure that all genotypes correspond to feasible solutions

If possible, ensure that genetic operators preserve feasibility


    Initialization

Start with:

a random population

a previously saved population

a set of solutions provided by a human expert

a set of solutions provided by another heuristic


    Selection

Purpose: to focus the search in promising regions of the space

Inspiration: Darwin's survival of the fittest

Trade-off between exploration and exploitation of the search space


Linear Ranking Selection

Based on sorting the individuals by decreasing fitness. The probability of extracting the i-th individual in the ranking is a linearly decreasing function of its rank, where b can be interpreted as the expected sampling rate of the best individual
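A standard linear-ranking formula consistent with this description (stated here as an assumption, not copied from the slide) is

p_i = (1/n) (b - 2 (b - 1) (i - 1) / (n - 1)),   1 <= b <= 2

where n is the population size and i = 1 is the best-ranked individual, so that p_1 = b/n, i.e. b times the uniform rate 1/n.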


Local Tournament Selection

Extracts k individuals from the population with uniform probability (without re-insertion) and makes them play a tournament, where the probability for an individual to win is generally proportional to its fitness

Selection pressure is directly proportional to the number k of participants
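A minimal Python sketch of this selection scheme; drawing the winner with probability proportional to fitness is one reading of the slide (a deterministic best-of-k tournament is an equally common variant), and the bit-string population and one-max fitness are assumptions.

import random

def tournament_select(population, fitness, k=3, rng=random):
    """Sample k individuals without re-insertion, then pick the winner
    with probability proportional to fitness."""
    contestants = rng.sample(population, k)
    weights = [max(fitness(ind), 1e-9) for ind in contestants]  # guard against all-zero fitness
    return rng.choices(contestants, weights=weights, k=1)[0]

# Toy usage on bit strings with a one-max fitness.
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
winner = tournament_select(pop, fitness=sum)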


Recombination (Crossover)

Enables the evolutionary process to move toward promising regions of the search space

Matches good parents' sub-solutions to construct better offspring
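For instance, a one-point crossover on list-encoded individuals (the encoding and the single cut point are assumptions; two-point or uniform crossover follow the same pattern):

import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """Cut both parents at the same random point and swap the tails."""
    cut = rng.randrange(1, len(parent_a))        # cut strictly inside the string
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2

print(one_point_crossover([0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1]))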


Mutation

Purpose: to simulate the effect of errors that happen with low probability during duplication

Results:

Movement in the search space

Restoration of lost information to the population
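A bit-flip mutation sketch on the same assumed bit-string encoding, flipping each gene independently with a small probability (both the encoding and the rate are assumptions):

import random

def bit_flip_mutation(individual, p_mut=0.02, rng=random):
    """Flip each gene independently with a small probability p_mut."""
    return [bit ^ (rng.random() < p_mut) for bit in individual]

print(bit_flip_mutation([0, 1, 0, 1, 1, 0], p_mut=0.5))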


Evaluation (fitness function)

The solution is only as good as the evaluation function; choosing a good one is often the hardest part

Similarly encoded solutions should have a similar fitness


    Termination condition

    Examples:

A pre-determined number of generations or time has elapsed

A satisfactory solution has been achieved

No improvement in solution quality has taken place for a pre-determined number of generations


    Acknowledgements

Part of the material is extracted from "Introduction to Genetic Algorithms" by Assaf Zaritsky, Ben Gurion University.


    Simulated Annealing


    The origin

    It is the oldest metaheuristic

Originated in statistical mechanics (Metropolis Monte Carlo algorithm)

First presented to solve combinatorial problems by Kirkpatrick et al., 1983


    The idea

It sometimes searches in directions that lead to solutions of worse quality than the current solution

This allows it to escape local minima

The probability of exploring these unpromising directions is decreased during the search


The algorithm

s := GenerateInitialSolution()
T := InitialTemp
while not converging loop
  s' := PickAtRandomFromNeighbor(s)
  if J(s') < J(s) then
    s := s'
  else
    accept s' as the new solution with probability p(T, s, s')
  Update(T)
end while
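A compact runnable version of this loop in Python; the geometric cooling schedule, the initial temperature, the Gaussian neighbourhood move and the toy objective are illustrative assumptions, not taken from the slides.

import math
import random

def simulated_annealing(J, neighbor, s0, T0=10.0, cooling=0.95, n_iter=2000, seed=0):
    """Minimise J: always accept improving moves, accept worsening moves
    with the Boltzmann probability exp(-d / T), and cool T geometrically."""
    rng = random.Random(seed)
    s, T = s0, T0
    best = s
    for _ in range(n_iter):
        s_new = neighbor(s, rng)
        d = J(s_new) - J(s)
        if d < 0 or rng.random() < math.exp(-d / T):
            s = s_new
            if J(s) < J(best):
                best = s
        T *= cooling                  # Update(T): slow geometric cooling
    return best

# Toy usage: minimise a wiggly 1-D function with Gaussian neighbourhood moves.
J = lambda x: x * x + 10 * math.sin(3 * x)
neighbor = lambda x, rng: x + rng.gauss(0.0, 0.5)
print(simulated_annealing(J, neighbor, s0=5.0))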


    Boltzmann probability

If s' is worse than s, then s' might still be chosen as the new solution

The probability depends on d = f(s') - f(s) and on the temperature T:

the higher d, the lower the probability

the higher T, the higher the probability
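In the standard Boltzmann form assumed here, the acceptance probability for a worsening move is

p(T, s, s') = exp(-d / T),   with d = f(s') - f(s) > 0

so the probability shrinks as d grows and rises as T grows, matching the two statements above.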


    Pros and Cons

Pros:

proven to converge to the optimum

easy to implement

Cons:

converges in infinite time

the cooling process must be slow


    End of Part II