Hydra-MIP: Automated Algorithm Configuration and Selection
for Mixed Integer Programming
Lin Xu, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown
Department of Computer Science, University of British Columbia
Solving MIP More Effectively
• Portfolio-based algorithm selection (SATzilla) [Xu et al., 2007; 2008; 2009]
• Where are the solvers? Parameter settings of a single solver (e.g. CPLEX)
• How to find good settings? Automated algorithm configuration tool [Hutter et al., 2007; 2009]
• How to find good candidates for algorithm selection? Algorithm configuration with a dynamic performance metric [Xu et al., 2010]
Xu, Hutter, Hoos, and Leyton-Brown: Hydra-MIP
Some particularly related work: [Gratch & Dejong, 1992]; [Balaprakash, Birattari & Stuetzle, 2007]; [Hutter, Babic, Hoos & Hu, 2007]; [Hutter, Hoos, Stuetzle & Leyton-Brown, 2009]
Some particularly related work: [Rice, 1976]; [Leyton-Brown, Nudelman & Shoham, 2003; 2009]; [Guerri & Milano, 2004]; [Nudelman, Leyton-Brown, Shoham & Hoos, 2004]
Hydra
[Diagram: automated algorithm configuration supplies new models to portfolio-based algorithm selection, which makes better use of the candidate solvers]
Outline
• Improve algorithm selection
  – SATzilla
  – Drawback of SATzilla
  – New SATzilla with cost-sensitive classification
  – Results
• Reduce the construction cost
  – Hydra
  – The cost
  – Make full use of configuration
  – Results
• Conclusion
SATzilla: Portfolio-Based Algorithm Selection [Xu, Hutter, Hoos, Leyton-Brown, 2007; 2008]
• Given:
  – training set of instances (incl. instance features)
  – performance metric
  – candidate solvers
  – portfolio builder
• Training:
  – collect performance data
  – portfolio builder learns predictive models
• At runtime:
  – predict performance
  – select solver
[Diagram: Training Set, Metric, and Candidate Solvers feed the Portfolio Builder, which produces a Portfolio-Based Algorithm Selector; a Novel Instance is mapped to a Selected Solver]
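The training and runtime steps above can be sketched as follows. This is a minimal illustration, not SATzilla itself: plain least-squares models stand in for SATzilla's actual regression-based runtime models, and the solver names and features are hypothetical.

```python
import numpy as np

def train_selector(features, runtimes):
    """Training phase: fit one least-squares runtime model per candidate solver.

    features: (n_instances, n_features) array of instance features
    runtimes: dict mapping solver name -> (n_instances,) observed runtimes
    Returns a dict mapping solver name -> weight vector (with bias term).
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias column
    return {s: np.linalg.lstsq(X, y, rcond=None)[0] for s, y in runtimes.items()}

def select_solver(models, x):
    """Runtime phase: predict each solver's runtime on a novel instance
    from its features, then select the solver with the best prediction."""
    x = np.append(x, 1.0)  # bias term
    return min(models, key=lambda s: float(models[s] @ x))
```

For example, with one solver that is fast on small instances and another that is fast on large ones, the selector picks per instance rather than committing to a single winner.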
Drawback of SATzilla
Algorithm selection in SATzilla is based on regression:
– predict each solver's performance independently
– select the solver with the best prediction
– i.e., classification based on regression
Goal of regression: accurately predict each solver's performance
Goal of algorithm selection: pick solvers on a per-instance basis in order to minimize some overall performance metric
Better regression does not necessarily yield better algorithm selection
Cost-Sensitive Classification for SATzilla
Loss function: the performance difference
– punish misclassifications in direct proportion to their impact on portfolio performance
– no need to predict runtime
Implementation: binary cost-sensitive classifiers, using decision forests (DF)
– build a DF for each pair of candidate solvers; each DF casts one vote for the better solver of its pair
– the solver with the most votes is selected
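The pairwise voting scheme can be sketched as follows. The decision forests themselves are replaced here by arbitrary per-pair predictor functions, and `pair_weights` only illustrates how cost sensitivity enters the training signal; all names are illustrative, not the paper's.

```python
from collections import Counter
from itertools import combinations

def pair_weights(runtimes_a, runtimes_b):
    """Cost-sensitive training signal for one solver pair: the label is the
    better solver, and the weight is the runtime difference, i.e. the cost
    the portfolio pays if this instance is misclassified."""
    labels = ["A" if ra <= rb else "B" for ra, rb in zip(runtimes_a, runtimes_b)]
    weights = [abs(ra - rb) for ra, rb in zip(runtimes_a, runtimes_b)]
    return labels, weights

def vote_select(solvers, pairwise_model, features):
    """One vote per solver pair, cast by that pair's classifier for the
    predicted-better solver; the solver with the most votes is selected."""
    votes = Counter()
    for a, b in combinations(solvers, 2):
        votes[pairwise_model[(a, b)](features)] += 1
    return votes.most_common(1)[0][0]
```

Note that no runtime prediction is needed at selection time: each classifier only answers "which of these two is better on this instance?".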
SATzilla_DF Performance

DataSet   Model   Average Time   Solved Percentage   Time Speedup
RAND      LR      177            99.1%
          DF      164            99.3%               1.08×
HAND      LR      549            92.9%
          DF      475            94.4%               1.16×
INDU      LR      545            92.1%
          DF      487            94.4%               1.12×

LR: linear regression as used in previous SATzilla; DF: cost-sensitive decision forest
MIPzilla_DF Performance

DataSet      Model   Average Time   Solved Percentage   Time Speedup
             LR      39.4           100%
             DF      39.3           100%                1.00×
             LR      102.6          100%
             DF      98.8           100%                1.04×
ISAC (new)   LR      2.36           100%
             DF      2.00           100%                1.18×
MIX          LR      56             99.6%
             DF      48             99.6%               1.05×
Hydra Procedure: Iteration 1
[Diagram: Training Set, Metric, and Parameterized Algorithm feed the Algorithm Configurator, which produces a Candidate Solver; this is added to the Candidate Solver Set, from which the Portfolio Builder constructs a Portfolio-Based Algorithm Selector]
Hydra Procedure: Iteration 2
[Diagram as in iteration 1; the configurator produces another candidate solver, which is added to the candidate solver set before the portfolio is rebuilt]
Hydra Procedure: Iteration 3
[Diagram as in iteration 1; a third configured solver is added to the candidate solver set and the portfolio is rebuilt again]
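The iterations above follow one outer loop, sketched below. The function and method names (`configurator`, `portfolio_builder`, `cost`) are placeholders for illustration; the key point is the dynamic performance metric of [Xu et al., 2010], under which a configuration is only credited on instances where it beats the current portfolio.

```python
def hydra(configurator, portfolio_builder, training_set, metric, n_iters):
    """Sketch of Hydra's outer loop: each iteration configures the
    parameterized algorithm against a dynamic metric, adds the resulting
    solver to the candidate set, and rebuilds the selector."""
    candidates = []
    selector = None
    for _ in range(n_iters):
        # Dynamic metric: an instance's cost is the better of the new
        # configuration and the current portfolio, so the configurator is
        # steered toward instances the portfolio currently handles poorly.
        def dynamic_metric(config, instance):
            cost = metric(config, instance)
            if selector is not None:
                cost = min(cost, selector.cost(instance))
            return cost
        new_solver = configurator(dynamic_metric, training_set)
        candidates.append(new_solver)
        selector = portfolio_builder(candidates, training_set, metric)
    return selector
```

With a toy configurator and portfolio builder, two iterations already produce complementary solvers: the second run gains nothing on instances the first solver covers, so it specializes on the rest.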
Hydra Procedure: After Termination
Output: Portfolio-Based Algorithm Selector
[Diagram: a Novel Instance is given to the Portfolio-Based Algorithm Selector, which returns a Selected Solver]
We Are Wasting Configuration Results!
[Diagram: Training Set, Metric, and Parameterized Algorithm feed the Algorithm Configurator, which outputs only a single Candidate Solver]
Make Full Use of Configurations
[Diagram: the same Algorithm Configurator inputs now yield k Candidate Solvers instead of one]
Make Full Use of Configurations
• Advantages:
  – adds k solvers instead of 1 in each iteration (good for algorithm selection)
  – no validation step is needed in configuration (saves time)
• Disadvantage:
  – runtime data must be collected for more solvers (costs time)
• In our experiments, the time saved roughly equaled the time cost at k = 4
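The change above can be sketched as a one-line update to each Hydra iteration (the function and argument names are hypothetical): instead of validating the configurator's independent runs to keep a single winner, the final configuration of each of k runs is kept.

```python
def add_configurations(candidate_set, configurator_runs, k=4):
    """One iteration's update under Improvement II: keep the final
    (incumbent) configuration from each of k independent configurator
    runs, skipping the validation step that would pick a single best run.
    Skipping validation saves time; gathering runtime data for k new
    solvers costs time -- the two roughly cancelled at k = 4 here."""
    final_configs = [run[-1] for run in configurator_runs[:k]]  # last config of each run
    return candidate_set + final_configs
```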
Experimental Setup: Hydra's Inputs
Portfolio Builder:
• MIPzilla_LR (SATzilla for MIP) [Xu et al., 2008]
• MIPzilla_DF (MIPzilla using cost-sensitive DF)
Parameterized Solver: CPLEX 12.1
Algorithm Configurator: FocusedILS 2.4.3 [Hutter, Hoos, Leyton-Brown, 2009]
Performance Metric:
• penalized average runtime (PAR)
Instance Sets:
• 4 heterogeneous sets built by combining homogeneous subsets [Hutter et al., 2010]; [Kadioglu et al., 2010]; [Ahmadizadeh et al., 2010]
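Penalized average runtime can be sketched as follows; the penalty factor is a parameter (PAR-10, which counts each timeout as 10× the cutoff, is a common choice).

```python
def penalized_average_runtime(runtimes, cutoff, penalty_factor=10):
    """PAR score: average runtime over a set of instances, where runs that
    reached the cutoff are counted as penalty_factor * cutoff seconds."""
    penalized = [t if t < cutoff else penalty_factor * cutoff for t in runtimes]
    return sum(penalized) / len(penalized)
```

The penalty makes timeouts expensive, so optimizing PAR rewards configurations that actually solve instances rather than merely failing quickly.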
Three Versions of Hydra for MIP
• Hydra_LR,1: original Hydra for MIP [Xu et al., 2010]
• Hydra_DF,1: Hydra for MIP with Improvement I (cost-sensitive DF selection)
• Hydra_DF,4: Hydra for MIP with Improvements I and II (also adding k = 4 solvers per iteration)
MIP-Hydra Performance on MIX
• Hydra_DF,* performs better than Hydra_LR,1
• Hydra_DF,4 performs similarly to Hydra_DF,1, but converges faster
• Performance is close to that of the Oracle and MIPzilla_DF
Conclusion
• Cost-sensitive-classification-based SATzilla outperforms the original SATzilla
• The new Hydra-MIP outperforms the CPLEX default, algorithm configuration alone, and the original Hydra on four heterogeneous MIP sets
• Technical contributions:
  – cost-sensitive classification yields better algorithm selection for SAT and MIP
  – using multiple configurations speeds up the convergence of Hydra