Chapter 12 Discrete Optimization Methods

Page 1: Chapter  12 Discrete Optimization Methods

Chapter 12

Discrete Optimization Methods

Page 2: Chapter  12 Discrete Optimization Methods

12.1 Solving by Total Enumeration

• If model has only a few discrete decision variables, the most effective method of analysis is often the most direct: enumeration of all the possibilities. [12.1]

• Total enumeration solves a discrete optimization by trying all possible combinations of discrete variable values, computing for each the best corresponding choice of any continuous variables. Among combinations yielding a feasible solution, those with the best objective function value are optimal. [12.2]

Page 3: Chapter  12 Discrete Optimization Methods

Swedish Steel Model with All-or-Nothing Constraints

min 16(75)y1 + 10(250)y2 + 8x3 + 9x4 + 48x5 + 60x6 + 53x7

s.t. 75y1 + 250y2 + x3 + x4 + x5 + x6 + x7 = 1000
0.0080(75)y1 + 0.0070(250)y2 + 0.0085x3 + 0.0040x4 ≥ 6.5
0.0080(75)y1 + 0.0070(250)y2 + 0.0085x3 + 0.0040x4 ≤ 7.5
0.180(75)y1 + 0.032(250)y2 + 1.0x5 ≥ 30.0
0.180(75)y1 + 0.032(250)y2 + 1.0x5 ≤ 30.5
0.120(75)y1 + 0.011(250)y2 + 1.0x6 ≥ 10.0
0.120(75)y1 + 0.011(250)y2 + 1.0x6 ≤ 12.0
0.001(250)y2 + 1.0x7 ≥ 11.0
0.001(250)y2 + 1.0x7 ≤ 13.0
x3, …, x7 ≥ 0; y1, y2 = 0 or 1

(12.1)

Optimal: Cost = 9967.06; y1* = 1, y2* = 0, x3* = 736.44, x4* = 160.06, x5* = 16.50, x6* = 1.00, x7* = 11.00

Page 4: Chapter  12 Discrete Optimization Methods

Swedish Steel Model with All-or-Nothing Constraints

Discrete Combination | Corresponding Continuous Solution | Objective Value

y1  y2 |    x3      x4     x5     x6     x7  |  Objective
 0   0 | 823.11  125.89  30.00  10.00  11.00 | 10340.89
 0   1 | 646.67   63.33  22.00   7.25  10.75 | 10304.08
 1   0 | 736.44  160.06  16.50   1.00  11.00 |  9967.06
 1   1 | 561.56   94.19   8.50   0.00  10.75 | 10017.94
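To see principle [12.2] in action, the sketch below reproduces this enumeration in Python. It uses scipy's linprog to compute the best continuous completion of each (y1, y2) combination of model (12.1); the choice of solver is an assumption of this illustration, and any LP code would do.

```python
# Total enumeration for the Swedish Steel model (12.1): try every (y1, y2)
# combination and solve the remaining LP in x3..x7.
from itertools import product
from scipy.optimize import linprog

best = None
for y1, y2 in product([0, 1], repeat=2):
    fixed = 75 * y1 + 250 * y2                    # kg contributed by the bars
    c = [8, 9, 48, 60, 53]                        # costs of x3..x7
    A_eq = [[1, 1, 1, 1, 1]]                      # total blend weight
    b_eq = [1000 - fixed]
    carbon = 0.0080 * 75 * y1 + 0.0070 * 250 * y2
    nickel = 0.180 * 75 * y1 + 0.032 * 250 * y2
    chrome = 0.120 * 75 * y1 + 0.011 * 250 * y2
    moly = 0.001 * 250 * y2
    # Two-sided element limits written as <= rows (negated rows encode >=).
    A_ub = [[-0.0085, -0.0040, 0, 0, 0], [0.0085, 0.0040, 0, 0, 0],
            [0, 0, -1, 0, 0], [0, 0, 1, 0, 0],
            [0, 0, 0, -1, 0], [0, 0, 0, 1, 0],
            [0, 0, 0, 0, -1], [0, 0, 0, 0, 1]]
    b_ub = [carbon - 6.5, 7.5 - carbon, nickel - 30.0, 30.5 - nickel,
            chrome - 10.0, 12.0 - chrome, moly - 11.0, 13.0 - moly]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 5, method="highs")
    if res.success:                               # combination is feasible
        total = res.fun + 16 * 75 * y1 + 10 * 250 * y2
        if best is None or total < best[0]:
            best = (total, (y1, y2), res.x)

print(best)   # expect cost ~9967.06 at (y1, y2) = (1, 0)
```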

Page 5: Chapter  12 Discrete Optimization Methods

Exponential Growth of Cases to Enumerate

• Exponential growth makes total enumeration impractical with models having more than a handful of discrete decision variables. [12.3]

Page 6: Chapter  12 Discrete Optimization Methods

12.2 Relaxation of Discrete Optimization Models

Constraint Relaxations

• Model (P̄) is a constraint relaxation of model (P) if every feasible solution to (P) is also feasible in (P̄) and both models have the same objective function. [12.4]

• Relaxations should be significantly more tractable than the models they relax, so that deeper analysis is practical. [12.5]

Page 7: Chapter  12 Discrete Optimization Methods

Example 12.1 Bison Booster

The Boosters are trying to decide what fundraising projects to undertake at the next country fair. One option is customized T-shirts, which will sell for $20 each; the other is sweatshirts selling for $30. History shows that everything offered for sale will be sold before the fair is over.

Materials to make the shirts are all donated by local merchants, but the Boosters must rent the equipment for customization. Different processes are involved, with the T-shirt equipment renting at $550 for the period up to the fair, and the sweatshirt equipment for $720. Display space presents another consideration. The Boosters have only 300 square feet of display wall area at the fair, and T-shirts will consume 1.5 square feet each, sweatshirts 4 square feet each. What plan will net the most income?

Page 8: Chapter  12 Discrete Optimization Methods

Bison Booster Example Model

• Decision variables:
x1 = number of T-shirts made and sold
x2 = number of sweatshirts made and sold
y1 = 1 if T-shirt equipment is rented; 0 otherwise
y2 = 1 if sweatshirt equipment is rented; 0 otherwise

• max 20x1 + 30x2 − 550y1 − 720y2 (Net income)
s.t. 1.5x1 + 4x2 ≤ 300 (Display space)
x1 ≤ 200y1 (T-shirts only if equipment)
x2 ≤ 75y2 (Sweatshirts only if equipment)
x1, x2 ≥ 0; y1, y2 = 0 or 1

(12.2)

Optimal: Net Income = 3450; x1* = 200, x2* = 0, y1* = 1, y2* = 0

Page 9: Chapter  12 Discrete Optimization Methods

Constraint Relaxation Scenarios

• Double capacities:
1.5x1 + 4x2 ≤ 600
x1 ≤ 400y1
x2 ≤ 150y2
x1, x2 ≥ 0; y1, y2 = 0 or 1

Net Income = 7450; x̄1* = 400, x̄2* = 0, ȳ1* = 1, ȳ2* = 0

• Dropping the first (display space) constraint:
x1 ≤ 200y1
x2 ≤ 75y2
x1, x2 ≥ 0; y1, y2 = 0 or 1

Net Income = 4980; x̄1* = 200, x̄2* = 75, ȳ1* = 1, ȳ2* = 1

Page 10: Chapter  12 Discrete Optimization Methods

Constraint Relaxation Scenarios

• Treat discrete variables as continuous:
1.5x1 + 4x2 ≤ 300
x1 ≤ 200y1
x2 ≤ 75y2
x1, x2 ≥ 0; 0 ≤ y1 ≤ 1; 0 ≤ y2 ≤ 1

Net Income = 3450; x̄1* = 200, x̄2* = 0, ȳ1* = 1, ȳ2* = 0
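This continuous relaxation is small enough to solve directly. A minimal sketch with scipy's linprog (an assumption of this illustration), negating the objective because linprog minimizes:

```python
# LP relaxation of the Bison Booster model (12.2): y1, y2 relaxed to [0, 1].
from scipy.optimize import linprog

c = [-20, -30, 550, 720]            # variables ordered (x1, x2, y1, y2)
A_ub = [[1.5, 4, 0, 0],             # display space <= 300
        [1, 0, -200, 0],            # x1 <= 200 y1
        [0, 1, 0, -75]]             # x2 <= 75 y2
b_ub = [300, 0, 0]
bounds = [(0, None), (0, None), (0, 1), (0, 1)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(-res.fun, res.x)              # expect 3450 at x = (200, 0), y = (1, 0)
```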

Page 11: Chapter  12 Discrete Optimization Methods

Linear Programming Relaxations

• Continuous relaxations (linear programming relaxations if the given model is an ILP) are formed by treating any discrete variables as continuous while retaining all other constraints. [12.6]

• LP relaxations of ILPs are by far the most used relaxation forms because they bring all the power of LP to bear on analysis of the given discrete models. [12.7]

Page 12: Chapter  12 Discrete Optimization Methods

Proving Infeasibility with Relaxations

• If a constraint relaxation is infeasible, so is the full model it relaxes. [12.8]

Page 13: Chapter  12 Discrete Optimization Methods

Solution Value Bounds from Relaxations

• The optimal value of any relaxation of a maximize model yields an upper bound on the optimal value of the full model. The optimal value of any relaxation of a minimize model yields a lower bound. [12.9]

(Diagram: the feasible solutions of the true model form a subset of the feasible solutions of the relaxation; the true optimum lies in both, so the relaxation optimum is at least as good.)

Page 14: Chapter  12 Discrete Optimization Methods

Example 11.3 EMS Location Planning

(Figure: map of the service region with the candidate EMS sites numbered 1 to 10.)

Page 15: Chapter  12 Discrete Optimization Methods

Minimum Cover EMS Model

min x1 + x2 + ⋯ + x10

s.t. x2 ≥ 1
x1 + x2 ≥ 1
x1 + x3 ≥ 1
x3 ≥ 1
x3 ≥ 1
x2 ≥ 1
x2 + x4 ≥ 1
x3 + x4 ≥ 1
x8 ≥ 1
x4 + x6 ≥ 1
x4 + x5 ≥ 1
x4 + x5 + x6 ≥ 1
x4 + x5 + x7 ≥ 1
x8 + x9 ≥ 1
x6 + x9 ≥ 1
x5 + x6 ≥ 1
x5 + x7 + x10 ≥ 1
x8 + x9 ≥ 1
x9 + x10 ≥ 1
x10 ≥ 1
xj = 0 or 1, j = 1,…,10

(12.3)

Optimal: x2* = x3* = x4* = x6* = x8* = x10* = 1, x1* = x5* = x7* = x9* = 0

Page 16: Chapter  12 Discrete Optimization Methods

Minimum Cover EMS Model with LP Relaxation

min x1 + x2 + ⋯ + x10

s.t. (same twenty cover constraints as in (12.3))
0 ≤ xj ≤ 1, j = 1,…,10

(12.4)

LP optimum: x̄2* = x̄3* = x̄8* = x̄10* = 1, x̄4* = x̄5* = x̄6* = x̄9* = 0.5, x̄1* = x̄7* = 0

Page 17: Chapter  12 Discrete Optimization Methods

Optimal Solutions from Relaxations

• If an optimal solution to a constraint relaxation is also feasible in the model it relaxes, the solution is optimal in that original model. [12.10]

Page 18: Chapter  12 Discrete Optimization Methods

Rounded Solutions from Relaxations

• Many relaxations produce optimal solutions that are easily “rounded” to good feasible solutions for the full model. [12.11]

• The objective function value of any (integer) feasible solution to a maximizing discrete optimization problem provides a lower bound on the integer optimal value, and any (integer) feasible solution to a minimizing discrete optimization problem provides an upper bound. [12.12]

Page 19: Chapter  12 Discrete Optimization Methods

Rounded Solutions from Relaxation: EMS Model

Ceiling rounding x̂j = ⌈x̄j*⌉ of the LP relaxation optimum (12.4):
x̂1 = ⌈0⌉ = 0, x̂2 = ⌈1⌉ = 1, x̂3 = ⌈1⌉ = 1, x̂4 = ⌈0.5⌉ = 1, x̂5 = ⌈0.5⌉ = 1, x̂6 = ⌈0.5⌉ = 1, x̂7 = ⌈0⌉ = 0, x̂8 = ⌈1⌉ = 1, x̂9 = ⌈0.5⌉ = 1, x̂10 = ⌈1⌉ = 1
Objective value = 8 (a feasible cover).

(12.5)

Floor rounding x̂j = ⌊x̄j*⌋:
x̂1 = ⌊0⌋ = 0, x̂2 = ⌊1⌋ = 1, x̂3 = ⌊1⌋ = 1, x̂4 = ⌊0.5⌋ = 0, x̂5 = ⌊0.5⌋ = 0, x̂6 = ⌊0.5⌋ = 0, x̂7 = ⌊0⌋ = 0, x̂8 = ⌊1⌋ = 1, x̂9 = ⌊0.5⌋ = 0, x̂10 = ⌊1⌋ = 1
Objective value = 4, but this rounding is infeasible (for example, it violates x4 + x6 ≥ 1).
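Both roundings are easy to verify mechanically. The sketch below transcribes the cover rows of (12.3)/(12.4) into Python, rounds the relaxation optimum both ways, and checks feasibility.

```python
# Round the EMS LP relaxation optimum (12.4) up and down, then check
# cover feasibility.  Each row lists the sites covering one zone.
import math

x_bar = [0, 1, 1, 0.5, 0.5, 0.5, 0, 1, 0.5, 1]   # relaxation optimum, sites 1..10
covers = [[2], [1, 2], [1, 3], [3], [3], [2], [2, 4], [3, 4], [8],
          [4, 6], [4, 5], [4, 5, 6], [4, 5, 7], [8, 9], [6, 9],
          [5, 6], [5, 7, 10], [8, 9], [9, 10], [10]]

def feasible(x):
    """Every zone must be covered by at least one opened site."""
    return all(any(x[i - 1] == 1 for i in row) for row in covers)

ceil_x = [math.ceil(v) for v in x_bar]
floor_x = [math.floor(v) for v in x_bar]
print(sum(ceil_x), feasible(ceil_x))     # 8, True: feasible cover
print(sum(floor_x), feasible(floor_x))   # 4, False: floor loses coverage
```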

Page 20: Chapter  12 Discrete Optimization Methods

12.3 Stronger LP Relaxations, Valid Inequalities, and Lagrangian Relaxation

• A relaxation is strong or sharp if its optimal value closely bounds that of the true model, and its optimal solution closely approximates an optimum in the full model. [12.13]

• Equally correct ILP formulations of a discrete problem may have dramatically different LP relaxation optima. [12.14]

Page 21: Chapter  12 Discrete Optimization Methods

Choosing Big-M Constants

• Whenever a discrete model requires sufficiently large big-M’s, the strongest relaxation will result from models employing the smallest valid choice of those constraints. [12.15]

Page 22: Chapter  12 Discrete Optimization Methods

Bison Booster Example Model with Relaxation in Big-M Constants

• Original model (12.2):
max 20x1 + 30x2 − 550y1 − 720y2 (Net income)
s.t. 1.5x1 + 4x2 ≤ 300 (Display space)
x1 ≤ 200y1 (T-shirts only if equipment)
x2 ≤ 75y2 (Sweatshirts only if equipment)
x1, x2 ≥ 0; y1, y2 = 0 or 1

LP relaxation optimum: Net Income = 3450; x̄1* = 200, x̄2* = 0, ȳ1* = 1, ȳ2* = 0

• Same model with needlessly large big-M's (12.6):
max 20x1 + 30x2 − 550y1 − 720y2 (Net income)
s.t. 1.5x1 + 4x2 ≤ 300 (Display space)
x1 ≤ 10000y1 (T-shirts only if equipment)
x2 ≤ 10000y2 (Sweatshirts only if equipment)
x1, x2 ≥ 0; y1, y2 = 0 or 1

LP relaxation optimum: Net Income = 3989; x̄1* = 200, x̄2* = 0, ȳ1* = 0.02, ȳ2* = 0

Page 23: Chapter  12 Discrete Optimization Methods

Valid Inequalities

• A linear inequality is a valid inequality for a given discrete optimization model if it holds for all (integer) feasible solutions to the model. [12.16]

• To strengthen a relaxation, a valid inequality must cut off (render infeasible) some feasible solutions to the current LP relaxation that are not feasible in the full ILP model. [12.17]

Page 24: Chapter  12 Discrete Optimization Methods

Example 11.10 Tmark Facilities Location

(Figure: map of the eight candidate call-center locations, numbered 1 to 8.)

Location, i   Fixed Cost
1               2400
2               7000
3               3600
4               1600
5               3000
6               4600
7               9000
8               2000

Page 25: Chapter  12 Discrete Optimization Methods

Example 11.10 Tmark Facilities Location

Zone j: cost per call from possible center location i, and call demand

 j    1     2     3     4     5     6     7     8    Demand
 1   1.25  1.40  1.10  0.90  1.50  1.90  2.00  2.10    250
 2   0.80  0.90  0.90  1.30  1.40  2.20  2.10  1.80    150
 3   0.70  0.40  0.80  1.70  1.60  2.50  2.05  1.60   1000
 4   0.90  1.20  1.40  0.50  1.55  1.70  1.80  1.40     80
 5   0.80  0.70  0.60  0.70  1.45  1.80  1.70  1.30     50
 6   1.10  1.70  1.10  0.60  0.90  1.30  1.30  1.40    800
 7   1.40  1.40  1.25  0.80  0.80  1.00  1.00  1.10    325
 8   1.30  1.50  1.00  1.10  0.70  1.50  1.50  1.00    100
 9   1.50  1.90  1.70  1.30  0.40  0.80  0.70  0.80    475
10   1.35  1.60  1.30  1.50  1.00  1.20  1.10  0.70    220
11   2.10  2.90  2.40  1.90  1.10  2.00  0.80  1.20    900
12   1.80  2.60  2.20  1.80  0.95  0.50  2.00  1.00   1500
13   1.60  2.00  1.90  1.90  1.40  1.00  0.90  0.80    430
14   2.00  2.40  2.00  2.20  1.50  1.20  1.10  0.80    200

Page 26: Chapter  12 Discrete Optimization Methods

Tmark Facilities Location Example with LP Relaxation

ILP optimum of model (12.8): y4* = y8* = 1, y1* = y2* = y3* = y5* = y6* = y7* = 0; Total Cost = 10153

LP relaxation optimum: ȳ1 = 0.230, ȳ2 = 0.000, ȳ3 = 0.000, ȳ4 = 0.301, ȳ5 = 0.115, ȳ6 = 0.000, ȳ7 = 0.000, ȳ8 = 0.650; Total Cost = 8036.60

Page 27: Chapter  12 Discrete Optimization Methods

Tmark Facilities Location Example with LP Relaxation

(12.8)

LP relaxation optimum with valid inequalities added: ȳ4 = 0.537, ȳ8 = 1.000, all other ȳi = 0.000; Total Cost = 10033.68. Not solvable by Excel.

Page 28: Chapter  12 Discrete Optimization Methods

Lagrangian Relaxations

• Lagrangian relaxations partially relax some of the main, linear constraints of an ILP by moving them to the objective function as terms vi(bi − Σj ai,j xj).

Here vi is a Lagrangian multiplier chosen as the relaxation is formed. If the relaxed constraint has form Σj ai,j xj ≥ bi, multiplier vi ≤ 0 for a maximize model and vi ≥ 0 for a minimize model. If the relaxed constraint has form Σj ai,j xj ≤ bi, multiplier vi ≥ 0 for a maximize model and vi ≤ 0 for a minimize model. Equality constraints have unrestricted (URS) multipliers vi. [12.18]

Page 29: Chapter  12 Discrete Optimization Methods

Example 11.6 CDOT Generalized Assignment

The Canadian Department of Transportation encountered a problem of the generalized assignment form when reviewing their allocation of coast guard ships on Canada’s Pacific coast. The ships maintain such navigational aids as lighthouses and buoys. Each of the districts along the coast is assigned to one of a smaller number of coast guard ships. Since the ships have different home bases and different equipment and operating costs, the time and cost for assigning any district varies considerably among the ships. The task is to find a minimum cost assignment.

Table 11.6 shows data for our (fictitious) version of the problem. Three ships (the Estevan, the Mackenzie, and the Skidegate) are available to serve 6 districts. Entries in the table show the number of weeks each ship would require to maintain aids in each district, together with the annual cost (in thousands of Canadian dollars). Each ship is available 50 weeks per year.

Page 30: Chapter  12 Discrete Optimization Methods

Example 11.6 CDOT Generalized Assignment

                          District, i
Ship, j              1    2    3    4    5    6
1. Estevan    Cost  130   30  510   30  340   20
              Time   30   50   10   11   13    9
2. Mackenzie  Cost  460  150   20   40   30  450
              Time   10   20   60   10   10   17
3. Skidegate  Cost   40  370  120  390   40   30
              Time   70   10   10   15    8   12

xi,j = 1 if district i is assigned to ship j; 0 otherwise

Page 31: Chapter  12 Discrete Optimization Methods

CDOT Assignment Model

min 130x1,1+460x1,2 +40x1,3 +30x2,1+150x2,2 +370x2,3 +510x3,1+20x3,2

+120x3,3 +30x4,1+40x4,2 +390x4,3 +340x5,1+30x5,2 +40x5,3 +20x6,1+450x6,2

+30x6,3

s.t. x1,1 + x1,2 + x1,3 = 1
x2,1 + x2,2 + x2,3 = 1
x3,1 + x3,2 + x3,3 = 1
x4,1 + x4,2 + x4,3 = 1
x5,1 + x5,2 + x5,3 = 1
x6,1 + x6,2 + x6,3 = 1
30x1,1 + 50x2,1 + 10x3,1 + 11x4,1 + 13x5,1 + 9x6,1 ≤ 50
10x1,2 + 20x2,2 + 60x3,2 + 10x4,2 + 10x5,2 + 17x6,2 ≤ 50
70x1,3 + 10x2,3 + 10x3,3 + 15x4,3 + 8x5,3 + 12x6,3 ≤ 50
xi,j = 0 or 1, i = 1,…,6; j = 1,…,3

(12.12)

Optimal: x1,1* = x4,1* = x6,1* = x2,2* = x5,2* = x3,3* = 1; Total Cost = 480

Page 32: Chapter  12 Discrete Optimization Methods

Lagrangian Relaxation of the CDOT Assignment Example

min 130x1,1 + 460x1,2 + 40x1,3 + 30x2,1 + 150x2,2 + 370x2,3 + 510x3,1 + 20x3,2 + 120x3,3 + 30x4,1 + 40x4,2 + 390x4,3 + 340x5,1 + 30x5,2 + 40x5,3 + 20x6,1 + 450x6,2 + 30x6,3
+ v1(1 − x1,1 − x1,2 − x1,3) + v2(1 − x2,1 − x2,2 − x2,3) + v3(1 − x3,1 − x3,2 − x3,3) + v4(1 − x4,1 − x4,2 − x4,3) + v5(1 − x5,1 − x5,2 − x5,3) + v6(1 − x6,1 − x6,2 − x6,3)

s.t. 30x1,1 + 50x2,1 + 10x3,1 + 11x4,1 + 13x5,1 + 9x6,1 ≤ 50
10x1,2 + 20x2,2 + 60x3,2 + 10x4,2 + 10x5,2 + 17x6,2 ≤ 50
70x1,3 + 10x2,3 + 10x3,3 + 15x4,3 + 8x5,3 + 12x6,3 ≤ 50
xi,j = 0 or 1, i = 1,…,6; j = 1,…,3

(12.13)

Page 33: Chapter  12 Discrete Optimization Methods

More on Lagrangian Relaxations

• Constraints chosen for dualization in Lagrangian relaxations should leave a still-integer program with enough special structure to be relatively tractable. [12.19]

• The optimal value of any Lagrangian relaxation of a maximize model using multipliers conforming to [12.18] yields an upper bound on the optimal value of the full model. The optimal value of any valid Lagrangian relaxation of a minimize model yields a lower bound. [12.20]

• A search is usually required to identify Lagrangian multiplier values defining a strong Lagrangian relaxation. [12.21]

Page 34: Chapter  12 Discrete Optimization Methods

Lagrangian Relaxation of the CDOT Assignment Example

min 130x1,1 + 460x1,2 + 40x1,3 + 30x2,1 + 150x2,2 + 370x2,3 + 510x3,1 + 20x3,2 + 120x3,3 + 30x4,1 + 40x4,2 + 390x4,3 + 340x5,1 + 30x5,2 + 40x5,3 + 20x6,1 + 450x6,2 + 30x6,3
+ v1(1 − x1,1 − x1,2 − x1,3) + v2(1 − x2,1 − x2,2 − x2,3) + v3(1 − x3,1 − x3,2 − x3,3) + v4(1 − x4,1 − x4,2 − x4,3) + v5(1 − x5,1 − x5,2 − x5,3) + v6(1 − x6,1 − x6,2 − x6,3)

s.t. 30x1,1 + 50x2,1 + 10x3,1 + 11x4,1 + 13x5,1 + 9x6,1 ≤ 50
10x1,2 + 20x2,2 + 60x3,2 + 10x4,2 + 10x5,2 + 17x6,2 ≤ 50
70x1,3 + 10x2,3 + 10x3,3 + 15x4,3 + 8x5,3 + 12x6,3 ≤ 50
xi,j = 0 or 1, i = 1,…,6; j = 1,…,3

(12.13)

Using v1 = 300, v2 = 200, v3 = 200, v4 = 45, v5 = 45, v6 = 30:
x1,1* = x2,2* = x3,3* = x4,1* = x4,2* = x5,2* = x5,3* = x6,1* = 1; Total Cost = 470
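For fixed multipliers, relaxation (12.13) separates into one small 0-1 knapsack per ship, so its bound can be evaluated by brute force. A sketch under that observation, enumerating each ship's 2^6 district subsets in pure Python:

```python
# Evaluate the CDOT Lagrangian relaxation (12.13) for fixed multipliers v.
from itertools import product

cost = [[130, 460, 40], [30, 150, 370], [510, 20, 120],
        [30, 40, 390], [340, 30, 40], [20, 450, 30]]   # cost[i][j]: district i, ship j
time = [[30, 10, 70], [50, 20, 10], [10, 60, 10],
        [11, 10, 15], [13, 10, 8], [9, 17, 12]]
v = [300, 200, 200, 45, 45, 30]                        # multipliers from the slide

bound = sum(v)                                         # constant terms v_i * 1
for j in range(3):                                     # each ship independently
    best = float("inf")
    for pick in product([0, 1], repeat=6):             # districts given to ship j
        if sum(pick[i] * time[i][j] for i in range(6)) > 50:
            continue                                   # ship capacity exceeded
        val = sum(pick[i] * (cost[i][j] - v[i]) for i in range(6))
        best = min(best, val)                          # empty pick gives 0
    bound += best
print(bound)    # prints 470: a lower bound on the ILP optimum 480
```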

Page 35: Chapter  12 Discrete Optimization Methods

12.4 Branch and Bound Search

• Branch and bound algorithms combine a subset enumeration strategy with the relaxation methods. They systematically form classes of solutions and investigate whether the classes can contain optimal solutions by analyzing associated relaxations.

Page 36: Chapter  12 Discrete Optimization Methods

Example 12.2 River Power

River Power has 4 generators currently available for production and wishes to decide which to put on line to meet the expected 700-MW peak demand over the next several hours. The following table shows the cost to operate each generator (in $1000/hr) and their outputs (in MW). Units must be completely on or completely off.

                  Generator, j
                  1     2     3     4
Operating Cost    7    12     5    14
Output Power    300   600   500  1600

Page 37: Chapter  12 Discrete Optimization Methods

Example 12.2 River Power

The decision variables are defined as xj = 1 if generator j is put on line, and 0 otherwise.

min 7x1 + 12x2 + 5x3 + 14x4
s.t. 300x1 + 600x2 + 500x3 + 1600x4 ≥ 700
xj = 0 or 1, j = 1,…,4

Optimal: x1* = x3* = 1; Cost = 12

Page 38: Chapter  12 Discrete Optimization Methods

Branch and Bound Search:Definitions

• A partial solution has some decision variables fixed, with others left free or undetermined. We denote free components of a partial solution by the symbol #. [12.22]

• The completions of a partial solution to a given model are the possible full solutions agreeing with the partial solution on all fixed components. [12.23]

Page 39: Chapter  12 Discrete Optimization Methods

Tree Search

• Branch and bound search begins at the initial or root solution x(0) = (#,…,#) with all variables free. [12.24]

• Branch and bound searches terminate or fathom a partial solution when they either identify a best completion or prove that none can produce an optimal solution in the overall model. [12.25]

• When a partial solution cannot be terminated in a branch-and-bound search of a 0-1 discrete optimization model, it is branched by creating two subsidiary partial solutions derived by fixing a previously free binary variable. One of these partial solutions matches the current one except that the chosen variable is fixed = 1; the other fixes it = 0. [12.26]

Page 40: Chapter  12 Discrete Optimization Methods

Tree Search

• Branch and bound search stops when every partial solution in the tree has been either branched or terminated. [12.27]

• Depth first search selects at each iteration an active partial solution with the most components fixed (i.e., one deepest in the search tree). [12.28]

Page 41: Chapter  12 Discrete Optimization Methods

LP-Based Branch and Bound Solution of the River Power Example

Node 0: x(0) = (#,#,#,#); x̄(0) = (0, 0, 0, .4375), v̄ = 6.125. Branch on x4.
Node 1 (x4 = 1): x(1) = (#,#,#,1); x̄(1) = (0, 0, 0, 1), v̄ = 14. Terminated by solving; incumbent v̂ = 14.
Node 2 (x4 = 0): x(2) = (#,#,#,0); x̄(2) = (0, .33, 1, 0), v̄ = 9. Branch on x2.
Node 3 (x2 = 1): x(3) = (#,1,#,0); x̄(3) = (0, 1, .2, 0), v̄ = 13. Branch on x3.
Node 4 (x3 = 1): x̄(4) = (0, 1, 1, 0), v̄ = 17. Terminated by bound (17 ≥ 14).
Node 5 (x3 = 0): x̄(5) = (.33, 1, 0, 0), v̄ = 14.33. Terminated by bound (14.33 ≥ 14).
Node 6 (x2 = 0): x(6) = (#,0,#,0); x̄(6) = (.67, 0, 1, 0), v̄ = 9.67. Branch on x1.
Node 7 (x1 = 1): x(7) = (1,0,#,0); x̄(7) = (1, 0, .8, 0), v̄ = 11. Branch on x3.
Node 8 (x3 = 1): x̄(8) = (1, 0, 1, 0), v̄ = 12. Terminated by solving; new incumbent v̂ = 12.
Node 9 (x3 = 0): x̄(9) = (1, 0, 0, 0), infeasible. Terminated by infeasibility.
Node 10 (x1 = 0): x̄(10) = (0, 0, 1, 0), infeasible. Terminated by infeasibility.

Page 42: Chapter  12 Discrete Optimization Methods

Incumbent Solutions

• The incumbent solution at any stage in a search of a discrete model is the best (in terms of objective value) feasible solution known so far. We denote the incumbent solution x̂ and its objective function value v̂. [12.29]

• If a branch and bound search stops as in [12.27], with all partial solutions having been either branched or terminated, the final incumbent solution is a global optimum if one exists. Otherwise, the model is infeasible. [12.30]

Page 43: Chapter  12 Discrete Optimization Methods

Candidate Problems

• The candidate problem associated with any partial solution to an optimization model is the restricted version of the model obtained when variables are fixed as in the partial solution. [12.31]

• The feasible completions of any partial solution are exactly the feasible solutions to the corresponding candidate problem, and thus the objective value of the best feasible completion is the optimal objective value of the candidate problem. [12.32]

Page 44: Chapter  12 Discrete Optimization Methods

Terminating Partial Solutions with Relaxations

• If any relaxation of a candidate problem proves infeasible, the associated partial solution can be terminated because it has no feasible completions. [12.33]

• If any relaxation of a candidate problem has optimal objective value no better than the current incumbent solution value, the associated partial solution can be terminated because no feasible completion can improve on the incumbent. [12.34]

• If an optimal solution to any constraint relaxation of a candidate problem is feasible in the full candidate, it is a best feasible completion of the associated partial solution. After checking whether a new incumbent has been discovered, the partial solution can be terminated. [12.35]

Page 45: Chapter  12 Discrete Optimization Methods

Algorithm 12A: LP-Based Branch and Bound (0-1 ILPs)

Step 0: Initialization. Make the only active partial solution the one with all discrete variables free, and initialize solution index t ← 0. If any feasible solutions are known for the model, also choose the best as incumbent solution x̂ with objective value v̂. Otherwise, set v̂ ← −∞ if the model maximizes, and v̂ ← +∞ if the model minimizes.
Step 1: Stopping. If active partial solutions remain, select one as x(t), and proceed to Step 2. Otherwise, stop. If there is an incumbent solution x̂, it is optimal; if not, the model is infeasible.
Step 2: Relaxation. Attempt to solve the LP relaxation of the candidate problem corresponding to x(t).

Page 46: Chapter  12 Discrete Optimization Methods

Algorithm 12A: LP-Based Branch and Bound (0-1 ILPs)

Step 3: Termination by Infeasibility. If the LP relaxation proved infeasible, there are no feasible completions of partial solution x(t). Terminate x(t), increment t ← t+1, and return to Step 1.
Step 4: Termination by Bound. If the model maximizes and the LP relaxation value v̄ satisfies v̄ ≤ v̂, or it minimizes and v̄ ≥ v̂, the best feasible completion of partial solution x(t) cannot improve on the incumbent. Terminate x(t), increment t ← t+1, and return to Step 1.

Page 47: Chapter  12 Discrete Optimization Methods

Algorithm 12A: LP-Based Branch and Bound (0-1 ILPs)

Step 5: Termination by Solving. If the LP relaxation optimum x̄(t) satisfies all binary constraints of the model, it provides the best feasible completion of partial solution x(t). After saving it as new incumbent solution x̂ ← x̄(t), terminate x(t), increment t ← t+1, and return to Step 1.
Step 6: Branching. Choose some free binary-restricted component xp that was fractional in the LP relaxation optimum, and branch x(t) by creating two new active partial solutions. One is identical to x(t) except that xp is fixed = 0, the other the same except that xp is fixed = 1. Then increment t ← t+1, and return to Step 1.
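A compact Python sketch of Algorithm 12A, using scipy's linprog for Step 2 and a depth-first stack of partial solutions (both choices are assumptions of this illustration), applied to the River Power model:

```python
# LP-based branch and bound (Algorithm 12A) for 0-1 ILPs: min c.x, A_ub x <= b_ub.
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, n):
    incumbent, v_hat = None, np.inf          # minimize: v_hat starts at +infinity
    active = [{}]                            # partial solutions: dict var -> 0/1
    while active:
        fixed = active.pop()                 # depth-first: newest node first
        # fixed vars get bounds (0,0) or (1,1); free vars get the LP bounds (0,1)
        bounds = [(fixed.get(j, 0), fixed.get(j, 1)) for j in range(n)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success:
            continue                         # terminate by infeasibility [12.33]
        if res.fun >= v_hat:
            continue                         # terminate by bound [12.34]
        frac = [j for j in range(n) if 1e-6 < res.x[j] < 1 - 1e-6]
        if not frac:
            incumbent, v_hat = res.x.round(), res.fun   # terminate by solving [12.35]
            continue
        p = min(frac, key=lambda j: abs(res.x[j] - round(res.x[j])))  # rule [12.37]
        active.append({**fixed, p: 0})       # branch: two children [12.26]
        active.append({**fixed, p: 1})
    return incumbent, v_hat

# River Power: min 7x1+12x2+5x3+14x4 s.t. 300x1+600x2+500x3+1600x4 >= 700.
c = [7, 12, 5, 14]
A_ub = [[-300, -600, -500, -1600]]           # >= 700 rewritten as <= -700
b_ub = [-700]
print(branch_and_bound(c, A_ub, b_ub, 4))    # expect x = (1, 0, 1, 0), cost 12
```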

Page 48: Chapter  12 Discrete Optimization Methods

Branching Rules for LP-Based Branch and Bound

• LP-based branch and bound algorithms always branch by fixing an integer-restricted decision variable that had a fractional value in the associated candidate problem relaxation. [12.36]

• When more than one integer-restricted variable is fractional in the relaxation optimum, LP-based branch and bound algorithms often branch by fixing the one closest to an integer value. [12.37]

• Tie-breaker

Page 49: Chapter  12 Discrete Optimization Methods

LP-Based Branch and Bound Solution of the River Power Example

The decision variables are defined as xj = 1 if generator j is put on line, and 0 otherwise.

min 7x1 + 12x2 + 5x3 + 14x4
s.t. 300x1 + 600x2 + 500x3 + 1600x4 ≥ 700
xj = 0 or 1, j = 1,…,4

Optimal: x1* = x3* = 1; Cost = 12

Page 50: Chapter  12 Discrete Optimization Methods

LP-Based Branch and Bound Solution of the River Power Example

Node 0: x(0) = (#,#,#,#); x̄(0) = (0, 0, 0, .4375), v̄ = 6.125. Branch on x4.
Node 1 (x4 = 1): x(1) = (#,#,#,1); x̄(1) = (0, 0, 0, 1), v̄ = 14. Terminated by solving; incumbent v̂ = 14.
Node 2 (x4 = 0): x(2) = (#,#,#,0); x̄(2) = (0, .33, 1, 0), v̄ = 9. Branch on x2.
Node 3 (x2 = 1): x(3) = (#,1,#,0); x̄(3) = (0, 1, .2, 0), v̄ = 13. Branch on x3.
Node 4 (x3 = 1): x̄(4) = (0, 1, 1, 0), v̄ = 17. Terminated by bound (17 ≥ 14).
Node 5 (x3 = 0): x̄(5) = (.33, 1, 0, 0), v̄ = 14.33. Terminated by bound (14.33 ≥ 14).
Node 6 (x2 = 0): x(6) = (#,0,#,0); x̄(6) = (.67, 0, 1, 0), v̄ = 9.67. Branch on x1.
Node 7 (x1 = 1): x(7) = (1,0,#,0); x̄(7) = (1, 0, .8, 0), v̄ = 11. Branch on x3.
Node 8 (x3 = 1): x̄(8) = (1, 0, 1, 0), v̄ = 12. Terminated by solving; new incumbent v̂ = 12.
Node 9 (x3 = 0): x̄(9) = (1, 0, 0, 0), infeasible. Terminated by infeasibility.
Node 10 (x1 = 0): x̄(10) = (0, 0, 1, 0), infeasible. Terminated by infeasibility.

Page 51: Chapter  12 Discrete Optimization Methods

12.5 Rounding, Parent Bounds, Enumeration Sequences, and Stopping Early in Branch and Bound

• This section develops refinements of branch and bound algorithms.

Page 52: Chapter  12 Discrete Optimization Methods

NASA Example Model

max 200x1 + 3x2 + 20x3 + 50x4 + 70x5 + 20x6 + 5x7 + 10x8 + 200x9 + 150x10 + 18x11 + 8x12 + 300x13 + 185x14

s.t. 6x1 + 2x2 + 3x3 + 1x7 + 4x9 + 5x12 ≤ 10
3x2 + 5x3 + 5x5 + 8x7 + 5x9 + 8x10 + 7x12 + 1x13 + 4x14 ≤ 12
8x5 + 1x6 + 4x10 + 2x11 + 4x13 + 5x14 ≤ 14
8x6 + 5x8 + 7x11 + 1x13 + 3x14 ≤ 14
10x4 + 4x6 + 1x13 + 3x14 ≤ 14
x4 + x5 ≤ 1
x8 + x11 ≤ 1
x9 + x14 ≤ 1
x11 ≤ x2
x4 ≤ x3
x5 ≤ x3
x6 ≤ x3
x7 ≤ x3
xj = 0 or 1, j = 1,…,14

(12.16)

Optimal: x1* = x3* = x4* = x8* = x13* = x14* = 1; Value = 765

Page 53: Chapter  12 Discrete Optimization Methods

Rounding for Incumbent Solutions

• If convenient rounding schemes are available, the relaxation optimum for every partial solution that cannot be terminated in a branch and bound search is usually rounded to a feasible solution for the full model prior to branching. The feasible solution provides a new incumbent if it is better than any known. [12.38]

Page 54: Chapter  12 Discrete Optimization Methods

Branch and Bound Family Tree Terminology

• Any node created directly from another by branching is called a child, and the branched node is its parent.

• The relaxation optimal value for the parent of any partial solution to a minimize model provides a lower bound on the objective value of any completion of its children. The relaxation optimal value for the parent in a maximize model provides an upper bound. [12.39]

• Whenever a branch and bound search discovers a new incumbent solution, any active partial solution with parent bound no better than the new incumbent solution value can immediately be terminated. [12.40]

Page 55: Chapter  12 Discrete Optimization Methods

Bounds on the Error of Stopping with the Incumbent Solution

• The least relaxation optimal value for parents of the active partial solutions in a minimizing branch and bound search always provides a lower bound on the optimal solution value of the full model. The greatest relaxation optimal value for parents in a maximizing search provides an upper bound. [12.41]

• At any stage of a branch and bound search, the difference between the incumbent solution value and the best parent bound of any active partial solution shows the maximum error in accepting the incumbent as an approximate optimum. [12.42]

Page 56: Chapter  12 Discrete Optimization Methods

Depth First, Best First, and Depth Forward Best Back Sequences

• Best first search selects at each iteration an active partial solution with best parent bound. [12.43]

• Depth forward best back search selects a deepest active partial solution after branching a node, but one with best parent bound after a termination. [12.44]

• When several active partial solutions tie for deepest or best parent bound, the nearest child rule chooses the one whose last fixed variable has value nearest the corresponding component of the parent's LP relaxation optimum. [12.45]

Page 57: Chapter  12 Discrete Optimization Methods

Branch and Cut Search

• Branch and cut algorithms modify the basic branch and bound strategy of Algorithm 12A by attempting to strengthen relaxations with new inequalities before branching a partial solution. Added constraints should hold for all feasible solutions to the full discrete model, but they should cut off (render infeasible) the last relaxation optimum. [12.46]

Page 58: Chapter  12 Discrete Optimization Methods

Algorithm 12B: Branch and Cut (0-1 ILPs)

Step 0: Initialization. Make the only active partial solution the one with all discrete variables free, and initialize solution index t ← 0. If any feasible solutions are known for the model, also choose the best as incumbent solution x̂ with objective value v̂. Otherwise, set v̂ ← −∞ if the model maximizes, and v̂ ← +∞ if the model minimizes.
Step 1: Stopping. If active partial solutions remain, select one as x(t), and proceed to Step 2. Otherwise, stop. If there is an incumbent solution x̂, it is optimal; if not, the model is infeasible.
Step 2: Relaxation. Attempt to solve the LP relaxation of the candidate problem corresponding to x(t).

Page 59: Chapter  12 Discrete Optimization Methods

Algorithm 12B: Branch and Cut (0-1 ILPs)

Step 3: Termination by Infeasibility. If the LP relaxation proved infeasible, there are no feasible completions of partial solution x(t). Terminate x(t), increment t ← t+1, and return to Step 1.
Step 4: Termination by Bound. If the model maximizes and the LP relaxation value v̄ satisfies v̄ ≤ v̂, or it minimizes and v̄ ≥ v̂, the best feasible completion of partial solution x(t) cannot improve on the incumbent. Terminate x(t), increment t ← t+1, and return to Step 1.
Step 5: Termination by Solving. If the LP relaxation optimum x̄(t) satisfies all binary constraints of the model, it provides the best feasible completion of partial solution x(t). After saving it as new incumbent solution x̂ ← x̄(t), terminate x(t), increment t ← t+1, and return to Step 1.

Page 60: Chapter  12 Discrete Optimization Methods

Algorithm 12B: Branch and Cut (0-1 ILPs)

Step 6: Valid Inequality. Attempt to identify a valid inequality for the full ILP model that is violated by the current relaxation optimum x̄(t). If successful, make the constraint part of the full model, increment t ← t+1, and return to Step 2.
Step 7: Branching. Choose some free binary-restricted component xp that was fractional in the LP relaxation optimum, and branch x(t) by creating two new active partial solutions. One is identical to x(t) except that xp is fixed = 0, the other the same except that xp is fixed = 1. Then increment t ← t+1, and return to Step 1.

Page 61: Chapter  12 Discrete Optimization Methods

12.6 Improving Search Heuristics for Discrete Optimization INLPs

• Many large combinatorial optimization models, especially INLPs with nonlinear objective functions, are too large for enumeration and lack strong relaxations that are tractable.

• Discrete Neighborhoods and Move Sets: Improving searches over discrete variables define neighborhoods by specifying a move set M of allowed moves Δx. The current solution and all solutions reachable from it in a single move Δx ∈ M comprise its neighborhood. [12.47]

Page 62: Chapter  12 Discrete Optimization Methods

Algorithm 12C: Rudimentary Improving Search Algorithm

Step 0: Initialization. Choose any starting feasible solution x(0), and set solution index t ← 0.
Step 1: Local Optimum. If no move Δx in move set M is both improving and feasible at current solution x(t), stop. Point x(t) is a local optimum.
Step 2: Move. Choose some improving feasible move Δx ∈ M as Δx(t+1).

Step 3: Step. Update x(t+1) ← x(t) + Δx(t+1).

Step 4: Increment. Increment t ← t+1, and return to Step 1.
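A minimal Python sketch of Algorithm 12C over 0-1 vectors, taking the move set M to be single-component "flips"; this move set and the stand-in functions `feasible` and `value` are assumptions of the illustration.

```python
# Rudimentary improving search (Algorithm 12C) with a single-flip move set.
def improving_search(x, feasible, value):
    while True:
        # neighborhood: all solutions one flip away from x  [12.47]
        neighbors = [x[:p] + [1 - x[p]] + x[p + 1:] for p in range(len(x))]
        improving = [y for y in neighbors if feasible(y) and value(y) > value(x)]
        if not improving:
            return x                    # Step 1: local optimum, stop
        x = max(improving, key=value)   # Steps 2-3: take an improving feasible move
```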

Page 63: Chapter  12 Discrete Optimization Methods

Example 11.8 NCB Circuit Board TSP

Figure 11.4 shows the tiny example that we will investigate for fictional board manufacturer NCB. We seek an optimal route through the 10 hole locations indicated. Table 11.7 reports straight-line distances di,j between hole locations i and j. Lines in Figure 11.4 show a fair-quality solution with total length 92.8 inches. The best route is 11 inches shorter (see Section 12.6).

       1     2     3     4     5     6     7     8     9    10
 1         3.6   5.1  10.0  15.3  20.0  16.0  14.2  23.0  26.4
 2   3.6         3.6   6.4  12.1  18.1  13.2  10.6  19.7  23.0
 3   5.1   3.6         7.1  10.6  15.0  15.8  10.8  18.4  21.9
 4  10.0   6.4   7.1         7.0  15.7  10.0   4.2  13.9  17.0
 5  15.3  12.1  10.6   7.0         9.9  15.3   5.0   7.8  11.3
 6  20.0  18.1  15.0  15.7   9.9        25.0  14.9  12.0  15.0
 7  16.0  13.2  15.8  10.0  15.3  25.0        10.3  19.2  21.0
 8  14.2  10.6  10.8   4.2   5.0  14.9  10.3        10.2  13.0
 9  23.0  19.7  18.4  13.9   7.8  12.0  19.2  10.2         3.6
10  26.4  23.0  21.9  17.0  11.3  15.0  21.0  13.0   3.6


Page 64: Chapter  12 Discrete Optimization Methods

Quadratic Assignment Formulation of the TSP

Let yk,i = 1 if the kth stop on the route is hole i, and 0 otherwise. Then the tour can be modeled as

min Σk Σi Σj di,j yk,i yk+1,j (stop indexes taken cyclically)
s.t. Σi yk,i = 1 for every stop k
Σk yk,i = 1 for every hole i
yk,i = 0 or 1

(12.20)

Page 65: Chapter  12 Discrete Optimization Methods

Choosing a Move Set

• The move set M of a discrete improving search must be compact enough to be checked at each iteration for improving feasible neighbors. [12.48]

• The solution produced by a discrete improving search depends on the move set (or neighborhood) employed, with larger move sets generally resulting in superior local optima. [12.49]

Page 66: Chapter  12 Discrete Optimization Methods

Multistart Search

• Multistart or keeping the best of several local optima obtained by searches from different starting solutions is one way to improve the heuristic solutions produced by improving search. [12.50]
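A sketch of multistart wrapping the Algorithm 12C sketch above; the random 0-1 starting points and the `starts` count are illustrative assumptions.

```python
# Multistart [12.50]: keep the best of several local optima from random starts.
import random

def multistart(n, feasible, value, starts=20):
    best = None
    for _ in range(starts):
        x0 = [random.randint(0, 1) for _ in range(n)]
        if not feasible(x0):
            continue                                 # skip infeasible random starts
        x = improving_search(x0, feasible, value)    # Algorithm 12C sketch above
        if best is None or value(x) > value(best):
            best = x                                 # keep the best local optimum
    return best
```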

Page 67: Chapter  12 Discrete Optimization Methods

12.7 Tabu, Simulated Annealing, and Genetic Algorithm Extensions of Improving Search

Difficulty with Allowing Nonimproving Moves

• Nonimproving moves will lead to infinite cycling of improving search unless some provision is added to prevent repeating solutions. [12.51]

Page 68: Chapter  12 Discrete Optimization Methods

Tabu Search

• Tabu search deals with cycling by temporarily forbidding moves that would return to a solution recently visited. [12.52]

Page 69: Chapter  12 Discrete Optimization Methods

Algorithm 12D: Tabu Search

Step 0: Initialization. Choose any starting feasible solution x(0) and an iteration limit tmax. Then set incumbent solution x̂ ← x(0) and solution index t ← 0. No moves are tabu.
Step 1: Stopping. If no non-tabu move Δx in move set M leads to a feasible neighbor of current solution x(t), or if t = tmax, then stop. Incumbent solution x̂ is an approximate optimum.
Step 2: Move. Choose some non-tabu feasible move Δx ∈ M as Δx(t+1).

Step 3: Step. Update x(t+1) ← x(t) + Δx(t+1).

Step 4: Incumbent Solution. If the objective function value of x(t+1) is superior to that of incumbent solution x̂, replace x̂ ← x(t+1).

Page 70: Chapter  12 Discrete Optimization Methods

Algorithm 12D: Tabu Search

Step 5: Tabu List. Remove from the list of tabu (forbidden) moves any that have been on it for a sufficient number of iterations, and add a collection of moves that includes any returning immediately from x(t+1) to x(t).

Step 6: Increment. Increment t t+1, and return to Step 1.
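A minimal Python sketch of Algorithm 12D with single-bit flip moves and a fixed tabu tenure; the objective and feasibility test in the usage lines are illustrative stand-ins, not part of the original slides.

```python
# Tabu search (Algorithm 12D) with single-bit "flip" moves on a 0-1 model.
def tabu_search(n, feasible, value, t_max=200, tenure=5):
    x = [0] * n                              # assumed feasible start
    best, best_val = x[:], value(x)
    tabu = {}                                # position -> iteration it frees up
    for t in range(t_max):
        moves = [p for p in range(n)
                 if tabu.get(p, 0) <= t
                 and feasible(x[:p] + [1 - x[p]] + x[p + 1:])]
        if not moves:
            break                            # no non-tabu feasible move: stop
        p = max(moves, key=lambda p: value(x[:p] + [1 - x[p]] + x[p + 1:]))
        x[p] = 1 - x[p]                      # step, even if non-improving
        tabu[p] = t + tenure                 # forbid undoing the flip for a while
        if value(x) > best_val:
            best, best_val = x[:], value(x)  # update incumbent
    return best, best_val

# Toy usage: knapsack max 5a+4b+3c s.t. 4a+3b+2c <= 6 (stand-in problem).
vals, wts = [5, 4, 3], [4, 3, 2]
feas = lambda x: sum(w * v for w, v in zip(wts, x)) <= 6
val = lambda x: sum(c * v for c, v in zip(vals, x))
print(tabu_search(3, feas, val))             # expect ([1, 0, 1], 8)
```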

Page 71: Chapter  12 Discrete Optimization Methods

Simulated Annealing Search

• Simulated annealing algorithms control cycling by accepting nonimproving moves according to probabilities tested with computer-generated random numbers. [12.53]

• The move selection process at each iteration begins with random choice of a provisional feasible move, totally ignoring its objective function impact. Next, the net objective function improvement Δobj is computed for the chosen move. The move is always accepted if it improves (Δobj > 0), and otherwise it is accepted with

probability of acceptance = e^(Δobj/q), where q is the current temperature.

Page 72: Chapter  12 Discrete Optimization Methods

Algorithm 12E: Simulated Annealing Search

Step 0: Initialization. Choose any starting feasible solution x(0), an iteration limit tmax, and a relatively large initial temperature q > 0. Then set incumbent solution x̂ ← x(0) and solution index t ← 0.
Step 1: Stopping. If no move Δx in move set M leads to a feasible neighbor of current solution x(t), or if t = tmax, then stop. Incumbent solution x̂ is an approximate optimum.
Step 2: Provisional Move. Randomly choose a feasible move Δx ∈ M as a provisional Δx(t+1), and compute the net objective function improvement Δobj for moving from x(t) to (x(t) + Δx(t+1)) (increase for a maximize, decrease for a minimize).

Page 73: Chapter  12 Discrete Optimization Methods

Algorithm 12E: Simulated Annealing Search

Step 3: Acceptance. If Δx(t+1) improves, or with probability e^(Δobj/q) if Δobj ≤ 0, accept Δx(t+1) and update

x(t+1) ← x(t) + Δx(t+1).

Otherwise, return to Step 2.
Step 4: Incumbent Solution. If the objective function value of x(t+1) is superior to that of incumbent solution x̂, replace x̂ ← x(t+1).
Step 5: Temperature Reduction. If a sufficient number of iterations have passed since the last temperature change, reduce temperature q.
Step 6: Increment. Increment t ← t+1, and return to Step 1.

Page 74: Chapter  12 Discrete Optimization Methods

Genetic Algorithms

• Genetic algorithms evolve good heuristic optima by operations combining members of an improving population of individual solutions. [12.54]

• Crossover combines a pair of “parent” solutions to produce a pair of “children” by breaking both parent vectors at the same point and reassembling the first part of one parent solution with the second part of the other, and vice versa. [12.55]

Page 75: Chapter  12 Discrete Optimization Methods

Genetic Algorithms

• The elitist strategy for implementation of genetic algorithms forms each new generation as a mixture of elite (best) solutions held over from the previous generation, immigrant solutions added arbitrarily to increase diversity, and children of crossover operations on non-overlapping pairs of solutions in the previous population. [12.56]

• Effective genetic algorithm search requires a choice for encoding problem solutions that often, if not always, preserves solution feasibility after crossover. [12.57]

Page 76: Chapter  12 Discrete Optimization Methods

Algorithm 12F: Genetic Algorithm Search

Step 0: Initialization. Choose a population size p, initial feasible solutions x(1), …, x(p), a generation limit tmax, and population subdivisions pe for elites, pi for immigrants, and pc for crossovers. Also set the generation index t ← 0.
Step 1: Stopping. If t = tmax, stop and report the best solution of the current population as an approximate optimum.
Step 2: Elites. Initialize the population of generation t+1 with copies of the pe best solutions in the current generation.
Step 3: Immigrants. Arbitrarily choose pi new immigrant feasible solutions, and include them in the t+1 population.

Page 77: Chapter  12 Discrete Optimization Methods

Algorithm 12F: Genetic Algorithm Search

Step 4: Crossover. Choose pc/2 non-overlapping pairs of solutions from the generation t population, and execute crossover on each pair at an independently chosen random cut point to complete the generation t+1 population.
Step 5: Increment. Increment t ← t+1, and return to Step 1.
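A minimal Python sketch of Algorithm 12F on 0-1 strings, using one-point crossover [12.55] and the elitist mixture of [12.56]; the population sizes, generation limit, and the fitness function in the usage line are illustrative assumptions, and feasibility of children is taken for granted (cf. [12.57]).

```python
# Elitist genetic algorithm (Algorithm 12F) on 0-1 strings.
import random

def ga(n, value, p=20, pe=4, pi=4, t_max=50):
    pc = p - pe - pi                         # children per generation (kept even)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(p)]
    for _ in range(t_max):
        pop.sort(key=value, reverse=True)    # maximize fitness
        nxt = [s[:] for s in pop[:pe]]       # Step 2: elites held over
        nxt += [[random.randint(0, 1) for _ in range(n)]
                for _ in range(pi)]          # Step 3: immigrants for diversity
        parents = random.sample(pop, pc)     # Step 4: non-overlapping pairs
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, n)     # one-point crossover [12.55]
            nxt += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        pop = nxt
    return max(pop, key=value)

# Toy usage: maximize the number of 1s in an 8-bit string.
print(ga(8, lambda s: sum(s)))
```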

Page 78: Chapter  12 Discrete Optimization Methods

12.8 Constructive Heuristics

• The improving search heuristics of Sections 12.6 and 12.7 move from complete solution to complete solution.

• The constructive search alternative follows a strategy more like that of branch and bound, proceeding through partial solutions, choosing values for decision variables one at a time and (often) stopping upon completion of a first feasible solution.

Page 79: Chapter  12 Discrete Optimization Methods

Algorithm 12G: Rudimentary Constructive Search

Step 0: Initialization. Start with the all-free initial partial solution x(0) = (#, …, #) and set the solution index t ← 0.
Step 1: Stopping. If all components of current solution x(t) are fixed, stop and output x(t) as an approximate optimum.
Step 2: Step. Choose a free component xp of partial solution x(t) and a value for it that plausibly leads to good feasible completions. Then advance to partial solution x(t+1), identical to x(t) except that xp is fixed at the chosen value.
Step 3: Increment. Increment t ← t+1, and return to Step 1.

Page 80: Chapter  12 Discrete Optimization Methods

Greedy Choices of Variables to Fix

• Greedy constructive heuristics select as the next variable to fix, and its value, the choice that does least damage to feasibility and most helps the objective function, based on what has already been fixed in the current partial solution (see the sketch below). [12.58]

• In large, especially non-linear, discrete models, or when time is limited, constructive search is often the only effective optimization-based approach to finding good solutions. [12.59]
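As an illustration of [12.58], the sketch below applies a greedy constructive heuristic to the EMS minimum cover model (12.3): at each step it fixes to 1 the free site that covers the most still-uncovered zones. The cover rows are transcribed from (12.3); the tie-breaking by lowest site index is an assumption of the illustration.

```python
# Greedy constructive search [12.58] for the EMS minimum cover model (12.3).
covers = [[2], [1, 2], [1, 3], [3], [3], [2], [2, 4], [3, 4], [8],
          [4, 6], [4, 5], [4, 5, 6], [4, 5, 7], [8, 9], [6, 9],
          [5, 6], [5, 7, 10], [8, 9], [9, 10], [10]]   # sites covering each zone

chosen, uncovered = set(), set(range(len(covers)))
while uncovered:
    # greedy choice: the site closing the most still-open cover constraints
    site = max(range(1, 11),
               key=lambda s: sum(1 for z in uncovered if s in covers[z]))
    chosen.add(site)
    uncovered = {z for z in uncovered if site not in covers[z]}
print(sorted(chosen))   # a feasible 7-site cover; the true optimum uses 6 sites
```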