Hyper-heuristics – Part I
Ender Özcan
NATCOR – April 2016
Outline
Introduction: motivation, relevant concepts
Hyper-heuristics – Definition, Origins and Classification
Selection Hyper-heuristics
 Perturbative low level heuristics
 Constructive low level heuristics
HyFlex and CHeSC 2011
Case Studies
Heuristic Search – revisited
A heuristic is a method which seeks good, i.e. near-optimal, solutions at a reasonable cost without being able to guarantee optimality.
Good for solving ill-structured problems, or complex well-structured problems (large-scale combinatorial problems that have many potential solutions to explore).
A heuristic is a rule of thumb method derived from human intuition.
Search Paradigms I
Single point based search vs. multi-point (population) based search
Constructive
 Search on partial candidate solutions
 Search steps extend partial solutions, but never reduce them
Perturbative
 Search on complete solutions
Need for Search Methodologies (e.g. Heuristics, Metaheuristics) – Example
Travelling salesman problem: with N cities there are N! orderings to consider
N=4: 24
N=5: 120
N=7: 5 040
N=10: 3 628 800
N=81: 5.797 x 10^120
The number of particles in the universe is between 10^72 and 10^87.
Tianhe-2: 30.65 petaflops (a petaflop is 10^15 floating-point operations per second) – enumerating all N=81 orderings would take ~6 x 10^96 years.
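The factorial blow-up quoted above is easy to verify; a minimal sketch:

```python
# Number of orderings (tours) of N cities is N!; check the values on the slide.
import math

for n in (4, 5, 7, 10, 81):
    print(n, math.factorial(n))   # 81! is approximately 5.797e120
```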
Examples – Heuristics for TSP
The nearest neighbour (NN) algorithm
 Constructive
Pairwise exchange (2-opt), or Lin–Kernighan heuristics
 Perturbative
The nearest neighbour (NN) algorithm

[Figure: four cities city1–city4 with pairwise distances city1–city2 = 4, city1–city3 = 11, city1–city4 = 5, city2–city3 = 10, city2–city4 = 6, city3–city4 = 7.]

Select a starting city: <city2>
Choose the nearest unvisited city as the next move: <city2, city1>
Choose the nearest unvisited city as the next move: <city2, city1, city4>
After the choice of the last city, the algorithm terminates: <city2, city1, city4, city3> : 26
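The construction above can be sketched as code; the distance matrix is reconstructed from the figure's edge weights (cities 1..4 mapped to indices 0..3):

```python
# Nearest neighbour (NN) construction on the 4-city example.
dist = [
    [0, 4, 11, 5],
    [4, 0, 10, 6],
    [11, 10, 0, 7],
    [5, 6, 7, 0],
]

def nearest_neighbour(dist, start):
    """Greedily extend a partial tour with the nearest unvisited city."""
    tour = [start]
    unvisited = set(range(len(dist))) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_cost(dist, tour):
    # close the cycle by pairing the last city with the first
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

tour = nearest_neighbour(dist, start=1)              # start from city2
print([c + 1 for c in tour], tour_cost(dist, tour))  # → [2, 1, 4, 3] 26
```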
Pairwise exchange (2-opt)

[Figure: the same four-city instance.]

Remove two edges and replace them with two different edges, reconnecting the fragments into a new and shorter tour.

Starting tour: <city2, city1, city3, city4> : 28
Remove edges city1–city3 (11) and city4–city2 (6); add city2–city3 (10) and city4–city1 (5): (5+10) – (11+6) = -2
Resulting tour: <city1, city2, city3, city4> : 26 (28 + (-2))
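A sketch of a 2-opt pass on the same instance: reversing a tour segment is equivalent to removing two edges and reconnecting the fragments, and is kept whenever it shortens the tour.

```python
# 2-opt improvement on the 4-city example (same matrix as the NN sketch).
dist = [
    [0, 4, 11, 5],
    [4, 0, 10, 6],
    [11, 10, 0, 7],
    [5, 6, 7, 0],
]

def tour_cost(dist, tour):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def two_opt(dist, tour):
    """Repeatedly reverse a segment [i..j] while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_cost(dist, cand) < tour_cost(dist, tour):
                    tour, improved = cand, True
    return tour

start = [1, 0, 2, 3]                                  # <city2, city1, city3, city4>
best = two_opt(dist, start)
print(tour_cost(dist, start), tour_cost(dist, best))  # → 28 26
```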
Search Paradigms II
Mutational heuristics (diversification/exploration)
vs.
Hill climbing (intensification/exploitation)

[Figure: an objective-value landscape around the current solution.]
Mutational Heuristic
Processes a given candidate solution and generates a solution which is not guaranteed to be better than the input.

[Figure: a candidate solution with cost 16.0 passes through the mutational heuristic and comes out with cost 22.0 – minimising fitness/cost/penalty, e.g., the total number of constraint violations or a weighted sum of violations.]
Hill Climbing Heuristic
Processes a given candidate solution and generates a better or equal quality solution.

[Figure: a candidate solution with cost 16.0 passes through hill climbing and comes out with cost 16.0, i.e. no worse than the input – minimising fitness/cost/penalty, e.g., the total number of constraint violations or a weighted sum of violations.]
0-1 Knapsack Problem
Given a set of n items, each item i with a weight w_i and a value v_i, choose a subset of those items (where x_i = 1) yielding the maximum total value without exceeding a maximum weight capacity W.
Heuristic#1
Sort items by value in decreasing order
Choose the next item if adding its weight to the sum of the already chosen items does not exceed the capacity

Problem Instance
i    1    2    3    4    sorted:  1    3    4    2
v [350,  30, 150, 110]         [350, 150, 110,  30]
w [ 35,   5,  15,  10]         [ 35,  15,  10,   5]
W = 20

Solution: {2,3} (optimal?), Weight: 20, Value: 180
Now modify the instance slightly: let v4 = 190 instead of 110.
Heuristic#2
Sort items by value per unit weight in decreasing order
Choose the next item if adding its weight to the sum of the already chosen items does not exceed the capacity

Problem Instance
i    1    2    3    4    sorted:  4    1    3    2
v [350,  30, 150, 190]         [190, 350, 150,  30]
w [ 35,   5,  15,  10]         [ 10,  35,  15,   5]
W = 20

Solution: {2,4}, Weight: 15, Value: 220
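Both heuristics are the same greedy loop with a different sort key; a minimal sketch on the two instances above:

```python
# Greedy constructive knapsack heuristic, parameterised by the sort key.
def greedy_knapsack(values, weights, W, key):
    order = sorted(range(len(values)), key=key, reverse=True)
    chosen, total_w = [], 0
    for i in order:
        if total_w + weights[i] <= W:   # take item i only if it still fits
            chosen.append(i)
            total_w += weights[i]
    return sorted(chosen)

w = [35, 5, 15, 10]
v1 = [350, 30, 150, 110]   # instance used for Heuristic#1
v2 = [350, 30, 150, 190]   # modified instance (v4 = 190) used for Heuristic#2

h1 = greedy_knapsack(v1, w, 20, key=lambda i: v1[i])          # by value
h2 = greedy_knapsack(v2, w, 20, key=lambda i: v2[i] / w[i])   # by value per unit weight
print([i + 1 for i in h1], sum(v1[i] for i in h1))   # → [2, 3] 180
print([i + 1 for i in h2], sum(v2[i] for i in h2))   # → [2, 4] 220
```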
What is a Metaheuristic?
A metaheuristic is a high-level problem independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms.

K. Sörensen and F. Glover. Metaheuristics. In S.I. Gass and M. Fu, editors, Encyclopedia of Operations Research and Management Science, pp 960–970. Springer, New York, 2013.
Metaheuristics

Local search:
[Kirkpatrick, 1983] Simulated Annealing (SA)
[Glover, 1986] Tabu Search (TS)
[Voudouris, 1997] Guided Local Search (GLS)
[Stutzle, 1999] Iterated Local Search (ILS)
[Mladenovic, 1999] Variable Neighborhood Search (VNS)

Population-based:
[Holland, 1975] Genetic Algorithm (GA)
[Smith, 1980] Genetic Programming (GP)
[Goldberg, 1989] Genetic and Evolutionary Computation (EC)
[Moscato, 1989] Memetic Algorithm (MA)
[Kennedy and Eberhart, 1995] Particle Swarm Optimisation (PSO)

Constructive:
[Dorigo, 1992] Ant Colony Optimisation (ACO)
[Resende, 1995] Greedy Randomized Adaptive Search Procedure (GRASP)
Main Components of Metaheuristics
Representation (encoding) of candidate solutions
Evaluation function
Initialisation
Search process (guideline)
Neighbourhood relation – move operator(s)
Mechanism for escaping from local optima

[Figure: a 0-1 Knapsack solver with its components labelled Guideline, Encoding, Initialisation, Operator(s), Escape Method, Evaluation Function.]
Parameter Tuning vs Control

[Figure: the 0-1 Knapsack solver's components (Guideline, Encoding, Initialisation, Operator(s), Escape Method, Evaluation Function), each a numbered target for tuning.]
Application to Another Domain

[Figure: which of the tuned 0-1 Knapsack components (Guideline, Encoding, Initialisation, Operator(s), Escape Method, Evaluation Function) carry over to Vehicle Routing (distribution center)?]
Building a Better Solver

[Figure: for Vehicle Routing (distribution center), candidate choices of Guideline, Encoding, Initialisation, Operator(s), Escape Method and Evaluation Function are mixed, matched and tuned.]
State-of-the-art in Heuristic Optimisation

[Figure: bespoke solvers per domain – Nurse Rostering, Vehicle Routing (distribution center), 0-1 Knapsack.]
Automated Design of Search Process
Growing area of research motivated by raising the level of generality. What are the limits?

Grand Challenge: a single solver covering problems A, B and C doesn't exist… significant scope for future research. What exist today are problem specific solvers.
OBSERVATIONS
Most of the real-world optimization problems are proven to be NP-hard.
The current state of the art in search methodologies tends to focus on bespoke systems.
In general, these systems are expensive to build, but provide successful results.
Unfortunately, their application to new problem domains, or even new problem instances from a known domain, still requires expert involvement.
OBSERVATIONS (cont.)
A different heuristic might generate good performance on a different problem instance.
Balancing the exploration (diversification) and exploitation (intensification) during the search is crucial.
Iterated Local Search (ILS)

s0 = GenerateInitialSolution()   // random or construction heuristic
s* = LocalSearch(s0)
Repeat
    s' = Perturbation(s*, memory)              // random move
    s*' = LocalSearch(s')                      // hill climbing
    s* = AcceptanceCriterion(s*, s*', memory)
        // the conditions that the new local optimum
        // must satisfy to replace the current solution
Until (termination conditions are satisfied)
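The ILS skeleton above can be instantiated on a toy landscape; the 1-D function, step size and perturbation range below are illustrative choices, not from the slides:

```python
# ILS sketch: hill climbing plus random perturbation, accepting improvements only.
import random

def f(x):
    # a hypothetical landscape with a local optimum near x = -2
    # and the global optimum at x = 2
    return (x - 2) ** 2 * ((x + 2) ** 2 + 0.5)

def local_search(x, step=0.1):
    """Hill climbing: move to a better neighbour until none exists."""
    while True:
        best = min((x - step, x, x + step), key=f)
        if best == x:
            return x
        x = best

random.seed(0)
s_best = local_search(random.uniform(-4, 4))          # initial solution
for _ in range(50):
    s = local_search(s_best + random.uniform(-3, 3))  # perturb, then hill-climb
    if f(s) < f(s_best):                              # acceptance criterion
        s_best = s
print(s_best)   # ends near the global optimum at x = 2
```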
Genetic vs Memetic Algorithm

[Figures: flowcharts contrasting the genetic and memetic algorithm cycles.]
Hyper-heuristics
A hyper-heuristic is a search method or learning mechanism for selecting or generating heuristics to solve computationally difficult problems.

E. K. Burke, M. Gendreau, M. Hyde, G. Kendall, G. Ochoa, E. Özcan, R. Qu, Hyper-heuristics: A Survey of the State of the Art, Journal of the Operational Research Society, 64 (12), pp. 1695-1724, 2013.
Different Search Spaces

[Figure: a meta-heuristic (standard heuristics) operates upon potential solutions directly, whereas a hyper-heuristic operates upon low level heuristics, which in turn operate upon potential solutions.]
Characteristics of Hyper-heuristics
Operate on a search space of heuristics rather than directly on a search space of solutions
Existing (or computer generated) heuristics can be used within hyper-heuristics
Aim is to take advantage of strengths and avoid weaknesses of heuristics
Easy to implement, practical to deploy (easy, cheap, fast)
? No problem specific information flow from the domain to the hyper-heuristic layer is allowed ?
Hyper-heuristics: Origins

1961-63
Fisher H. and Thompson G.L., 1963. Probabilistic Learning Combinations of Local Job-shop Scheduling Rules. Ch 15: 225-251, Prentice Hall, New Jersey.
Crowston W.B., Glover F., Thompson G.L. and Trawick J.D., 1963. Probabilistic and Parameter Learning Combinations of Local Job Shop Scheduling Rules. ONR Research Memorandum, GSIA, CMU, Pittsburgh, (117).

1990-95
Storer R.H., Wu S.D., Vaccari R., 1992. New Search Spaces for Sequencing Problems with Application to Job Shop Scheduling, INFORMS, 38(10), 1495-1509.
Fang H.-L., Ross P. and Corne D., 1994. A Promising Hybrid GA/Heuristic Approach for Open-Shop Scheduling Problems, in ECAI, 590-594.

1997
Denzinger J., Fuchs M. and Fuchs M., 1997. High performance ATP systems by combining several AI methods. In Proc. of the 15th IJCAI, 102-107.

2001
Cowling P.I., Kendall G. and Soubeiga E., 2001. A Hyperheuristic Approach to Scheduling a Sales Summit, selected papers from PATAT 2000, Springer, LNCS 2079, 176-190.
Related Areas
Reactive search
Algorithm portfolios
Adaptive operator selection
Meta-learning
Co-evolution / multimeme memetic algorithms / memetic computing
Variable Neighbourhood Search
Cooperative (Distributed) Search
Parameter control (e.g., in EAs)
Algorithm configuration
Classification of Hyper-heuristics

Nature of the heuristic search space:

Heuristic selection – methodologies to select
 constructive heuristics
 perturbative heuristics
 (fixed heuristics, mostly human designed)

Heuristic generation – methodologies to generate
 constructive heuristics
 perturbative heuristics
 (automatically generated heuristics from components)

E. K. Burke, M. Hyde, G. Kendall, G. Ochoa, E. Özcan, and J. Woodward (2010). A Classification of Hyper-heuristic Approaches. In Gendreau, M. and Potvin, J.Y. (eds.), Handbook of Metaheuristics, International Series in Operations Research & Management Science, Volume 146, pp. 449-468. Springer.
Source of feedback during learning:

Online learning – while solving a problem instance (adapt). Examples: reinforcement learning, meta-heuristics
Offline learning – from a set of training instances (generalise). Examples: classifier systems, case-based reasoning, GP
No-learning
A Hyper-heuristic Framework

Hyper-heuristic layer: heuristic selection method + move acceptance criteria
--- Domain Barrier ---
Problem domain layer: perturbative low level heuristics
A Selection Hyper-heuristic Framework – Single Point Search

1. generate initial candidate solution p
2. while (termination criteria not satisfied) {
3.   select a heuristic (or subset of heuristics) h from {H1, ..., Hn}
4.   generate a new solution (or solutions) s by applying h to p
5.   decide whether to accept s or not
6.   if (s is accepted) then
7.     p = s }
8. return p;
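The framework above is runnable in a few lines. A sketch with simple-random selection and improving-or-equal acceptance on the earlier knapsack instance; the two bit-flip low level heuristics are illustrative stand-ins, not from the slides:

```python
# Single-point selection hyper-heuristic skeleton (steps 1-8 above).
import random

v, w, W = [350, 30, 150, 190], [35, 5, 15, 10], 20

def cost(x):
    """Minimise negative value; infeasible solutions are rejected via +inf."""
    weight = sum(wi for wi, xi in zip(w, x) if xi)
    value = sum(vi for vi, xi in zip(v, x) if xi)
    return float("inf") if weight > W else -value

# two perturbative low level heuristics (hypothetical examples)
def flip_one(x):
    i = random.randrange(len(x))
    return x[:i] + [1 - x[i]] + x[i + 1:]

def flip_two(x):
    return flip_one(flip_one(x))

heuristics = [flip_one, flip_two]

random.seed(1)
p = [0, 0, 0, 0]                      # 1. initial candidate solution
for _ in range(1000):                 # 2. termination criterion: step budget
    h = random.choice(heuristics)     # 3. select a heuristic (simple random)
    s = h(p)                          # 4. generate a new solution
    if cost(s) <= cost(p):            # 5./6. accept improving or equal (IE)
        p = s                         # 7.
print(-cost(p))                       # 8. best value found
```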
Heuristic Selection

Component name – Reference(s)
Simple Random – Cowling et al (2000, 2002b)
Random Permutation – Cowling et al (2000, 2002b)
Peckish – Cowling and Chakhlevitch (2003)
Greedy – Cowling et al (2000, 2002b); Cowling and Chakhlevitch (2003)
Random Gradient – Cowling et al (2000, 2002b)
Random Permutation Gradient – Cowling et al (2000, 2002b)
Choice Function – Cowling et al (2000, 2002b); Maashi et al (2015); Drake et al (2015)
Reinforcement Learning – Nareyek (2003); Pisinger and Ropke (2007)
Reinforcement Learning with Tabu Search – Burke et al (2003); Dowsland et al (2007)
Quality Index and Tabu based Learning Heuristic Selection – Mısır et al (2009, 2012)
Dominance-based Selection – Kheiri and Özcan (2011; 2015)
Probability-based Selection – Lehrbaum and Musliu (2012)
Adaptive Pursuit – Walker et al (2012)

These components range from heuristic selection with no learning to heuristic selection with learning.
Heuristic Selection – Greedy (GR)
Apply each low level heuristic to the candidate solution and choose the one that generates the best objective value.

[Figure: GR applies H1–H6 and obtains objective values f1–f6; f3 < f1, f2, f4, f5, f6 at step t, so H3 is selected.]
Heuristic Selection – Reinforcement Learning (RL)
A machine learning technique
Inspired by related psychological theory: reward and punishment
Concerned with how an agent ought to take actions in an environment to maximize some notion of long-term reward
Maintains a score for each heuristic
If a move is improving then increase (e.g., +1), otherwise decrease (e.g., -1) the score of the heuristic
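A minimal sketch of this scoring scheme; the heuristic names, score bounds and update trace are illustrative:

```python
# Reward/punish scoring for heuristic selection.
scores = {"H1": 0, "H2": 0, "H3": 0}

def update(heuristic, improved, lower=-5, upper=5):
    """+1 on an improving move, -1 otherwise, keeping scores within bounds."""
    delta = 1 if improved else -1
    scores[heuristic] = max(lower, min(upper, scores[heuristic] + delta))

# a hypothetical sequence of (heuristic applied, did it improve?) observations
for h, improved in [("H1", False), ("H2", True), ("H2", True), ("H3", False)]:
    update(h, improved)

best = max(scores, key=scores.get)   # prefer the highest-scoring heuristic
print(best, scores)                  # → H2 {'H1': -1, 'H2': 2, 'H3': -1}
```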
Heuristic Selection – Choice Function (CF)
The choice function maintains a record of the performance of each heuristic. Three criteria are maintained:
1) its individual performance
2) how well it has performed with other heuristics
3) the elapsed time since the heuristic was last called

[Figure: CF computes scores s1–s6 for H1–H6; s2 > s1, s3, s4, s5, s6 at step t, so H2 is selected.]
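A hedged sketch of a choice-function score as a weighted sum of the three criteria above; the weights and the performance records below are illustrative values, not from the slides:

```python
# Choice function score: f1 = individual performance, f2 = performance
# following the previously applied heuristic, f3 = time since last call.
def choice_function(f1, f2, f3, alpha=0.5, beta=0.3, delta=0.2):
    # f1 and f2 reward intensification; f3 rewards rarely used heuristics
    return alpha * f1 + beta * f2 + delta * f3

scores = {
    "H1": choice_function(f1=4.0, f2=1.0, f3=2.0),
    "H2": choice_function(f1=1.0, f2=0.5, f3=9.0),
}
print(max(scores, key=scores.get))   # → H1
```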
Move Acceptance

Acceptance criteria studied in the literature: All Moves, Only Improving, Improving & Equal, Late Acceptance, Great Deluge, Threshold Acceptance, Record-to-Record, Naïve Acceptance, Exponential Monte Carlo (EMC), Simulated Annealing (SA).

References: Cowling et al (2000, 2002b); Ayob and Kendall (2003); Kendall and Mohamad (2004b); Bai and Kendall (2005); Bilgin et al (2006); Dowsland et al (2007); Bai et al (2007a); Pisinger and Ropke (2007); Antunes et al (2009); Mısır et al (2009, 2012); Özcan et al (2009); Burke et al (2010); Kheiri and Özcan (2012; 2015); Jackson et al (2013); Asta and Özcan (2015).
Move Acceptance – Simple Criteria
AM: All Moves accepted
OI: Only Improving moves accepted
IE: Improving or Equal moves accepted
Naïve Acceptance: accept all improving moves, and a worsening move with a fixed probability p (e.g., 0.5)
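The four simple criteria above can be written as predicates (minimisation assumed):

```python
# Simple move acceptance criteria for a minimising objective.
import random

def accept_am(old, new):            # AM: all moves
    return True

def accept_oi(old, new):            # OI: only improving
    return new < old

def accept_ie(old, new):            # IE: improving or equal
    return new <= old

def accept_naive(old, new, p=0.5):  # Naïve: improving, else probability p
    return new < old or random.random() < p

print(accept_am(10, 99), accept_oi(10, 10), accept_ie(10, 10))  # → True False True
```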
Great Deluge

0-1 Knapsack: maximise the sum of v_i x_i over i = 1..n, i.e. minimise the negated sum (a minimising objective).

[Figure: the acceptance threshold (water level) decreases linearly from f(s0) to f(s_target) over iterations 0..iter_max.]
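A sketch of the Great Deluge threshold for a minimising objective: a candidate is accepted while its cost is at or below a "water level" that falls linearly from f(s0) towards a target; the numbers below are illustrative.

```python
# Linearly decreasing Great Deluge acceptance threshold.
def deluge_level(f_s0, f_target, iter_max, it):
    """Acceptance threshold (water level) at iteration `it`."""
    return f_s0 + (f_target - f_s0) * it / iter_max

f_s0, f_target, iter_max = 100.0, 20.0, 200
# accept a candidate s at iteration it if f(s) <= deluge_level(...)
print(deluge_level(f_s0, f_target, iter_max, 100))   # → 60.0
```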
Simulated Annealing

Improving moves are accepted; worsening moves are allowed using the Metropolis criterion.

Δ = f(s') − f(s). Assume that f has to be minimised.

An inferior solution s' (yielding Δ > 0) is accepted with a probability of e^(−Δ/t), i.e. when U(0,1) < e^(−Δ/t).

The annealing parameter t, called temperature, is slowly decreased:
 t is initially high – many inferior moves are accepted
 as t decreases – inferior moves are nearly always rejected

As the temperature decreases, the probability of accepting worsening moves decreases.
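The Metropolis rule above in code, with an empirical check that the same worsening move is accepted often at high temperature and rarely at low temperature:

```python
# Metropolis acceptance for simulated annealing (minimisation).
import math
import random

def metropolis_accept(delta, t):
    """delta = f(s') - f(s); accept improving moves, else with prob e^(-delta/t)."""
    return delta <= 0 or random.random() < math.exp(-delta / t)

random.seed(0)
trials = 10000
high = sum(metropolis_accept(2.0, t=10.0) for _ in range(trials)) / trials
low = sum(metropolis_accept(2.0, t=0.5) for _ in range(trials)) / trials
# theoretical rates: e^(-0.2) ≈ 0.82 and e^(-4) ≈ 0.02
print(high, low)
```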
Hyper-heuristic Tools

HYFLEX
G. Ochoa, M. Hyde, T. Curtois, J. A. Vazquez-Rodriguez, J. Walker, M. Gendreau, G. Kendall, B. McCollum, A. J. Parkes, S. Petrovic, E. K. Burke (2012). HyFlex: A Benchmark Framework for Cross-domain Heuristic Search. European Conference on Evolutionary Computation in Combinatorial Optimisation (EvoCOP 2012), J.-K. Hao and M. Middendorf (Eds.), LNCS 7245, pp. 136-147. Springer, Heidelberg.

HYPERION
J. Swan, E. Özcan, G. Kendall, Hyperion – A Recursive Hyper-heuristic Framework, The Learning and Intelligent OptimizatioN Conference (LION5), Lecture Notes in Computer Science 6683, pp. 616-630, 2011.
Selection Hyper-heuristic – revisited

[Figure: the selection hyper-heuristic framework, revisited.]
HyFlex – Hyper-heuristics Flexible Interface

Defines behaviours of components and arranges the interaction between them.
Separation between the problem-specific and the general-purpose parts, both of which are reusable and interchangeable through the HyFlex interface.

http://www.hyflex.org/
HyFlex v1.0 Java Implementation

Currently there are 6 problem domain implementations: Bin Packing, Flow Shop, Personnel Scheduling, TSP, MAX-SAT, VRP
Heuristic types: mutational, ruin-recreate, local search, crossover
Parameters: intensity, depth of search

http://www.hyflex.org/
CHeSC 2011 benchmark based on HyFlex v1.0

Domains: Bin Packing, Flow Shop, Personnel Scheduling, TSP, MAX-SAT, VRP
• 10 public training instances
• 5 test instances per domain (3 training + 2 hidden, or all hidden)
• Set problem instance, set time limit (10 min.), perform 31 runs, report the median
Ranking: Formula 1 scoring system

http://www.hyflex.org/
And the winner is... AdapHH – M. Mısır, K. Verbeeck, P. De Causmaecker, G. Vanden Berghe

AdapHH – Overview
[Figure: overview of the AdapHH approach.]
Case Study: An Iterated Multi-stage Selection Hyper-heuristic

A. Kheiri and E. Özcan, An Iterated Multi-stage Selection Hyper-heuristic, European Journal of Operational Research, 250(1):77–90, 2016.
A Multi-stage Hyper-heuristic

Stage 1
Select a low level heuristic i with a given probability
Apply the chosen heuristic
Accept/reject based on an adaptive threshold acceptance method
A Multi-stage Hyper-heuristic

Assign scores to the low level heuristics; selection probabilities follow the scores (e.g., LLH1=2, LLH2=1, LLH3=1 gives 50%, 25%, 25%)
Reduce the number of LLHs (N to n), e.g., from 6 LLHs to 3 LLHs

Stage 2 – Relay Hybridisation
Given N LLHs, e.g., LLH1 and LLH2, pair them all up and increase the number of LLHs to N + N²:
LLH3 = LLH1 + LLH1
LLH4 = LLH2 + LLH2
LLH5 = LLH1 + LLH2
LLH6 = LLH2 + LLH1
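The pairing step above can be sketched directly: every ordered pair of heuristics becomes a new composite heuristic that applies one and then the other. The two toy "heuristics" over an integer solution are illustrative only.

```python
# Relay hybridisation: grow a pool of N heuristics to N + N^2.
def relay_pool(llhs):
    pairs = [(a, b) for a in llhs for b in llhs]
    # each composite applies `a` first, then `b`, to the same solution
    relayed = [lambda x, a=a, b=b: b(a(x)) for a, b in pairs]
    return llhs + relayed

llh1 = lambda x: x + 1   # toy heuristic 1
llh2 = lambda x: x * 2   # toy heuristic 2

pool = relay_pool([llh1, llh2])
print(len(pool))      # → 6  (N + N^2 with N = 2)
print(pool[3](3))     # LLH1 then LLH2: (3 + 1) * 2 = 8
```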
[Figures: relay hybridisation performance on the PS, TSP, SAT and BP domains.]
Performance Comparison

[Figure: comparison across methods; the proposed approach ranks top with a CHeSC 2011 score of 163.60.]
Case Study: A Hybrid Approach to the Multi-mode Resource-constrained Multi-project Scheduling Problem – Winner of the MISTA 2013 Challenge

ASAP Team (ID#3), School of Computer Science: Shahriar Asta, Daniel Karapetyan, Ahmed Kheiri, Ender Özcan and Andrew J. Parkes

S. Asta, D. Karapetyan, A. Kheiri, E. Özcan, and A.J. Parkes, Combining Monte-Carlo and Hyper-heuristic methods for the Multi-mode Resource-constrained Multi-project Scheduling Problem, in review.
Problem Description

Resource-Constrained Project Scheduling
 Schedule given jobs
 Limited resources
 Precedence relations
 Minimise makespan

Multi-mode Resource-constrained Multi-project Scheduling
 Multiple modes for each job
 Multiple projects
 Local and global resources
 Minimise the sum of makespans
MISTA 2013 Challenge
Aim: develop an algorithm that produces the best possible solution to any given problem in 5 minutes.
Problem instances are not known in advance.
21 teams registered, 16 teams qualified after the first round, 9 teams qualified after the final round.
We designed a memetic algorithm – construct and improve.
Memetic Algorithm

Sequence based representation:
#    1 2 3 4 5 6
job  3 6 1 5 2 4
mode 1 1 3 2 2 1

Schedule generator: schedules each job to the earliest available time in the given order
Monte Carlo tree search based initialisation: decide on a good initial sequence of projects
Memetic Algorithm

[Figures: the improvement phase of the memetic algorithm is a hyper-heuristic, run in parallel across Core1, Core2, ..., Corek.]
Low Level Heuristics/Operators

• swap jobs, change the mode of a job (3 operators)
• reshuffle several jobs, change the modes of several jobs (10 operators)
• swap projects, move a project (4 operators)

These operators are managed by the iterated multi-stage hyper-heuristic.
Results

MISTA 2013 Challenge result: we produced the best solutions for 17 out of the 20 instances.
From the 12th second of a run onwards, our algorithm already takes the winning position.
Case Study: A Tensor-based Selection Hyper-heuristic for Cross-domain Heuristic Search

S. Asta and E. Özcan, A Tensor-based Selection Hyper-heuristic for Cross-domain Heuristic Search, Information Sciences, vol. 299, pp. 412-432, 2015.
Two Simple Hyper-heuristics Mixing Heuristics (Stochastic Local Search)

Simple Random Heuristic Selection – Improving and Equal Move Acceptance (IE)
 Reject any worsening move
Simple Random Heuristic Selection – Naïve Move Acceptance (NA)
 Accept a worsening move with a fixed probability of p (0.5 in this study)
Proposed Approach – Ideas

The balance between diversification and intensification is crucial (e.g., in ILS: intensify, diversify, intensify, ... over time)
Mix move acceptance methods
Use machine learning to partition the low level heuristics, associating a subset with each acceptance method: h_IE ∪ h_NA = h and h_IE ∩ h_NA = ∅, switching between them at times ts, 2ts, 3ts, ...

h: the set of low level heuristics (MU + RC + LS)
Tensors

Many real-world data are multidimensional
Very high-dimensional (big) with a large amount of redundancy
Multi-dimensional arrays representing such data are tensors
Many applications in signal processing, psychometrics, and more

SOURCE: http://en.wikipedia.org/wiki/File:Video_represented_as_a_third-order_tensor.jpg
Tensor Factorisation

There are different decomposition methods; we use Canonical Polyadic (CP) factorisation
This gives a projection of 3D data onto 1D vectors
Helps to discover latent structures in data, quantifying the relationship between pairs of different components

SOURCE: B. Krausz, C. Bauckhage, Action recognition in videos using nonnegative tensor factorization, in: ICPR, IEEE, 2010, pp. 1763–1766.
Proposed Approach – TeBHA-HH

[Flowchart: use SR-NA for a period tp and construct the tensor; factorise it (CP decomposition); analyse it to extract the two heuristic subgroups h_IE and h_NA; eliminate noise (exclude the poor performing heuristic group); then perform search until Tmax is reached, applying SR-XX with the associated subgroup while switching the move acceptance XX between NA and IE at every period ts.]

TeBHA-HH: Tensor Construction Phase
Represent the search history of SR-NA using the remaining low level heuristics and construct a 3rd order tensor in time tp.
Proposed Approach – Tensor Construction

[Figure sequence: an 8×8 frame indexed by (previous heuristic index, current heuristic index); entries become active as heuristic invocations are observed.]

• The completed frame is put into the (initially empty) tensor. The label of a frame is the change in the objective value (Δf) that results from applying the active elements of the frame collectively.
• Subsequent frames are appended to the tensor in the same way.
• Continuing this process results in an initial tensor.
• Frames with consecutive positive labels are selected and put into the final tensor – an emphasis on inter-frame correlations.
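A pure-Python stand-in for the construction step above; the trail of heuristic invocations and objective values is hypothetical, and a nested list plays the role of one frame of the 3rd order tensor:

```python
# Build one (previous heuristic, current heuristic) frame from a search trail.
def new_frame(n=8):
    return [[0] * n for _ in range(n)]

def record(frame, prev_h, cur_h):
    frame[prev_h][cur_h] = 1   # mark that cur_h was applied right after prev_h

# hypothetical trail of (heuristic index, resulting objective value) pairs
trail = [(3, 12.0), (1, 11.5), (4, 11.0), (1, 10.2)]

frame, f_start = new_frame(), trail[0][1]
prev = trail[0][0]
for cur, obj in trail[1:]:
    record(frame, prev, cur)
    prev = cur

label = f_start - trail[-1][1]   # positive label = the frame improved the objective
tensor = [frame] if label > 0 else []   # keep only improving frames
print(sum(map(sum, frame)), round(label, 1))   # → 3 1.8
```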
TeBHA-HH: Tensor Factorisation
Decompose the tensor using CP (Alternating Least Squares algorithm); λ: model fitness
Produce a basic frame
TeBHA-HH: Tensor Analysis
Sort all entries of the basic frame's column scores, e.g., (LS0, LS1, MU3, MU2, MU5, MU4, MU1, MU0)
Locate the pair with the maximum score, e.g., LS0, LS1
The top half goes to h_NA, the rest to h_IE
TeBHA-HH: Final Phase – Perform Search
Run the cyclic multi-stage hyper-heuristic, SR-IE with h_IE alternating with SR-NA with h_NA at every time period ts
Results – CHeSC 2011

[Figures: results on the MAX-SAT and VRP domains.]
2nd in BP, 4th in TSP, 4th in PS, worst in FS
Conclusion

Tensors can be used to represent the trail of heuristic invocations in a concise manner under a selection hyper-heuristic framework
They can be further used to extract the latent relationship between the low level heuristics which emerges during the search process
Tensor analysis can help improve the performance of a multi-stage hyper-heuristic (allowing hybridisation of move acceptance), yielding counter-intuitive (basis) results