
Computers ind. Engng Vol. 14, No. 4, pp. 387-393, 1988 0360-8352/88 $3.00+0.00 Printed in Great Britain. All rights reserved Copyright 1988 Pergamon Press plc

SINGLE FACILITY SCHEDULING WITH NONLINEAR PROCESSING TIMES

JATINDER N. D. GUPTA 1 and SUSHIL K. GUPTA 2 1Department of Management Science, Ball State University, Muncie, IN 47303 and 2Department of Decision Sciences,

Florida International University, Miami, FL 33199, U.S.A.

(Received for publication 10 December 1987)

Abstract--This paper considers the static single facility scheduling problem where the processing times of jobs are a monotonically increasing function of their starting (waiting) times and the objective is to minimize the total elapsed time (called the makespan) in which all jobs complete their processing. Based on a combinatorial analysis of the problem, an exact optimization algorithm is developed for the general processing time function, which is then specialized for the linear case. In view of the excessive computational burden of the exact optimization algorithm for nonlinear processing time functions, heuristic algorithms are proposed. The effectiveness of the proposed algorithms is evaluated empirically; the results indicate that these heuristic algorithms yield optimal or near-optimal schedules in many cases.

INTRODUCTION

Consider the following static single facility problem: a set of n independent, single-operation jobs is ready for processing at time zero on a single machine. Neither job splitting nor machine idleness is allowed. The processing time of each job depends on its starting (or waiting) time in the sequence. It is desired to find the processing order (schedule) that minimizes the makespan, defined as the total elapsed time in which all jobs complete their processing. Such situations often occur in chemical and metallurgical processes. For example, in steel rolling mills, ingots are heated to the required temperature before rolling. In this case, the furnace is the single facility and the ingots are the independent jobs to be processed (heated). Heating time depends upon the ingot's current temperature, which in turn depends upon how long it has been waiting. During the waiting period the ingot cools down, thus requiring more heating time in the furnace. It is desired to minimize the total time spent by all available ingots in the heating shop.

For each job i, let Pi and ti be its processing and starting times respectively. Since the processing time of job i depends on its starting time, the relationship between Pi and ti can be represented as:

Pi = f(ti) (1)

The exact form of the function f in equation (1) depends upon the specific production process under consideration. Mathematically, the linear, the quadratic, and the general forms are expressed as follows:

Pi = ai + bi ti (linear) (2)

Pi = ai + bi ti + ci ti^2 (quadratic) (3)

Pi = ai + bi ti + ci ti^2 + ... + mi ti^m (general) (4)

where ai, bi, ci, ..., mi are non-negative constants. As stated before, the objective is to find the schedule (or order) S = ([1], [2], ..., [K], ..., [n]),

where [K] is the job in the Kth position, that will minimize the makespan T(S), computed as follows:

T(S) = Σ{i = 1 to n} P[i] (5)
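The recursion behind equation (5) can be sketched as follows; this is an illustrative reconstruction, with invented coefficient values rather than the paper's example data:

```python
# Makespan T(S) of equation (5): job [k] starts when job [k-1] completes,
# and its processing time is a function f of that starting time.
def makespan(schedule, f):
    """schedule: list of job indices; f(i, t): processing time of job i started at t."""
    t = 0.0
    for i in schedule:
        t += f(i, t)  # job i starts at time t and takes f(i, t) time units
    return t

# Linear case, Pi = ai + bi*ti (equation 2), with illustrative coefficients:
a = [1.0, 2.0]
b = [0.5, 0.5]
linear = lambda i, t: a[i] + b[i] * t
# Sequencing job 0 first: t = 0, p = 1.0; job 1 then starts at t = 1.0
# with p = 2.0 + 0.5*1.0 = 2.5, giving makespan 3.5.
```

Reversing the order gives makespan 4.0, illustrating that sequence matters once processing times depend on starting times.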


Minimizing makespan on a single machine is trivial if the processing times of jobs are independent of their starting times, since all schedules then give the same makespan. However, for the scenario presented above, finding a schedule that minimizes makespan is neither trivial nor easy. In fact, the problem appears to be NP-complete, implying that it is very unlikely that polynomially bounded solution techniques will be found [1]. Branch and bound procedures are not useful since it is very difficult (if not impossible) to develop general expressions for lower bounds on the makespan. Therefore, combinatorial approaches based on sequence dominance conditions are used to describe an exact optimization algorithm for the general processing time function and a specialized version for the linear case. In view of the relative inefficiency of combinatorial approaches, several heuristic approaches are discussed to find approximate solutions to the problem.

COMBINATORIAL APPROACH

Consider two schedules S = PQ and S' = P'Q, where the partial schedules P and P' are different permutations of the same subset of jobs. Let the completion time of partial schedule P be represented as T(P). Developments in flowshop scheduling [2, 3] lead to the following result.

Theorem 1. For two partial schedules P and P' that are different permutations of the same subset of jobs, T(P) ≤ T(P') implies T(PQ) ≤ T(P'Q); hence only the partial schedule with the smaller completion time need be retained.

[Table 2. Illustration of the combinatorial algorithm: for Iterations 1-4, each partial schedule P and its completion time T(P); an asterisk marks the partial schedule retained at each iteration.]


least completion times are retained as shown in the column labelled Iteration 2. This process is continued until complete schedules are obtained as shown in Table 2.

From the calculations in Table 2 above, it follows that the optimal schedule for the example problem of Table 1 is 1-4-3-2 with a makespan of 3.2844.
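The retention scheme of Theorem 1 amounts to keeping, for every subset of jobs, only the cheapest partial schedule and extending it one job at a time. A minimal sketch of this idea (an illustrative reconstruction, not the authors' implementation; the function and variable names are invented):

```python
def combinatorial_makespan(n, f):
    """Exact minimum makespan for jobs 0..n-1 with start-time-dependent
    processing times f(i, t), assuming f is increasing in t (so the
    dominance of Theorem 1 applies): keep one partial schedule per subset."""
    best = {frozenset(): (0.0, [])}  # subset -> (completion time T(P), schedule P)
    for _ in range(n):
        nxt = {}
        for subset, (T, P) in best.items():
            for j in range(n):
                if j in subset:
                    continue
                Tj = T + f(j, T)  # append job j, which starts at time T
                key = subset | {j}
                if key not in nxt or Tj < nxt[key][0]:
                    nxt[key] = (Tj, P + [j])  # retain the cheaper permutation
        best = nxt
    (T, S), = best.values()  # after n iterations only the full job set remains
    return T, S
```

Each iteration corresponds to one column of Table 2: partial schedules grow by one job, and dominated permutations of the same subset are discarded.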

Linear processing time function case

The processing time of any job i, as a linear function of its starting (or waiting) time, is given by equation (2). For each job i, define: hi = ai/bi.

Theorem 2. For the linear case, the schedule obtained by arranging jobs in ascending order of h i values is optimal.

Proof. Consider two schedules S = PijQ and S' = PjiQ. It is obvious that:

T(Pij) = T(P) + ai + bi T(P) + aj + bj{T(P) + ai + bi T(P)}.

Similarly:

T(Pji) = T(P) + aj + bj T(P) + ai + bi{T(P) + aj + bj T(P)}.

From these two equations, it follows that T(Pij) ≤ T(Pji) if and only if ai bj ≤ aj bi, i.e. hi ≤ hj. An adjacent pairwise interchange argument then establishes the theorem.
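Theorem 2 turns the linear case into a simple sort by hi = ai/bi. A small sketch with invented data, cross-checked against complete enumeration:

```python
from itertools import permutations

def makespan_linear(order, a, b):
    """Makespan when job i takes a[i] + b[i]*t if started at time t."""
    t = 0.0
    for i in order:
        t += a[i] + b[i] * t
    return t

# Illustrative coefficients (not the paper's example problem):
a = [0.23, 0.27, 0.13, 0.29]
b = [1.00, 0.87, 1.00, 0.60]
order = sorted(range(4), key=lambda i: a[i] / b[i])  # ascending hi = ai/bi
best = min(permutations(range(4)), key=lambda p: makespan_linear(p, a, b))
# Brute force over all 24 orders attains the same makespan as the sorted order.
```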


Table 3. hi(x) for the example problem

Job(i)   hi(a)   hi(b)   hi(c)
1        0.23    0.29    0.31
2        0.27    0.31    0.32
3        0.13    1.45    0.37
4        0.29    0.48    0.30

Table 4. Schedules obtained by static heuristic algorithm

                        Processing times for jobs in positions
Rule     Schedule    1       2        3        4         Makespan

hi(a)    3-1-2-4     0.13    0.3466   0.8825   2.9150    4.274
hi(b)    1-2-4-3     0.23    0.5174   1.2858   1.7598    3.793
hi(c)    4-1-2-3     0.29    0.5250   1.5518   2.3039    4.670

As an illustration of the static heuristic algorithm, consider the four-job problem of Table 1 above. The ratios hi(x) are given in Table 3.

The schedules obtained by arranging jobs in ascending order of these ratios in Table 3 are given in Table 4.

Schedule 1-2-4-3 is the best schedule with a makespan of 3.793.

Dynamic heuristic algorithm

Partition all jobs into two mutually exclusive subsets P and Q. The jobs in subset P have already been scheduled. Let the completion time of the last job in P be T(P). For each job i ∈ Q, find the ratio Hi(x), x = a, b, c, ..., m. For the quadratic processing time function given in equation (3), the Hi(x) are defined as follows:

Hi(a) = ai/{bi + ci T(P)},
Hi(b) = {ai + bi T(P) + ci T(P)^2}/{bi + 2 ci T(P)}, and
Hi(c) = ai + bi T(P) + ci T(P)^2

Similar expressions for Hi(x) can be developed for the general non-linear processing time function given in equation (4).

The dynamic and static heuristic algorithms follow the same steps except that, in the dynamic case, step 2 is implemented as follows:

(i) Find the job j with minimum Hj(x), j ∈ Q.
(ii) Append job j to the sequence P and find T(Pj).
(iii) Let P = Pj, Q = Q - {j}, and T(P) = T(Pj).
(iv) If Q is empty, go to step (v); otherwise, find Hj(x) for each j ∈ Q and go to step (i).
(v) P represents the heuristic schedule and T(P) is the makespan.
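These steps can be sketched for the quadratic case of equation (3), using the Hi(x) ratios defined above; the coefficient values below are invented for illustration, not the paper's Table 1 data:

```python
def dynamic_heuristic(a, b, c, rule="b"):
    """Dynamic heuristic for quadratic processing times Pi = ai + bi*t + ci*t^2.
    Repeatedly appends the unscheduled job j with minimum Hj(rule), recomputing
    the ratios at the current completion time T(P)."""
    P, Q, T = [], set(range(len(a))), 0.0

    def H(i):
        p = a[i] + b[i] * T + c[i] * T * T   # processing time if started at T
        if rule == "a":
            return a[i] / (b[i] + c[i] * T)  # assumes bi + ci*T(P) > 0
        if rule == "b":
            return p / (b[i] + 2.0 * c[i] * T)
        return p                              # rule "c"

    while Q:
        j = min(Q, key=H)                     # step (i): minimum Hj(x)
        Q.remove(j)
        T += a[j] + b[j] * T + c[j] * T * T   # steps (ii)-(iii): append j, update T(P)
        P.append(j)
    return P, T                               # step (v): schedule and makespan

# Invented coefficients for illustration:
a = [0.23, 0.27, 0.13, 0.29]
b = [0.50, 0.40, 0.60, 0.30]
c = [0.10, 0.20, 0.30, 0.10]
schedule, T = dynamic_heuristic(a, b, c, rule="b")
```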

To illustrate the dynamic heuristic algorithm, consider the example problem of Table 1 again. The calculations are shown in Table 5 below and result in the same solution as the static heuristic algorithm.

Improving the solution

The above heuristic algorithms were found to be rather unpredictable in finding an optimal (or close to optimal) schedule. Therefore, the following neighborhood search technique was used to improve their effectiveness:

(0) Let i = 1 and j = 2.
(1) Let the best schedule generated by the heuristic algorithm be S = ([1], [2], ..., [k], ..., [n]), where [k] is the job in the kth position. Let T(S) be the makespan for S.

[Table 5. Schedules obtained by the dynamic heuristic algorithm: for x = a, x = b and x = c, the ratios H1(x)-H4(x) at each iteration, the job selected, the partial schedule, its completion time T(P) and the processing time of the selected job. The schedules obtained are 1-2-4-3 (makespan 3.7929) for x = a and x = b, and 3-1-4-2 (makespan 4.051) for x = c. *Not necessary to find these values.]

[Table 6. Comparative evaluation of the proposed heuristic algorithms: for each problem size n = 4 to 7, the number of optimal solutions (# OPT) and the minimum (MIN), average (AVR) and maximum (MAX) values of q, for the static and the dynamic heuristic algorithm.]

[Table 7. Experimental results of using the neighborhood search technique: for each problem size n = 4 to 7, # OPT and the MIN, AVR and MAX values of q, for a single pass and a double pass of the procedure.]


(2) Let T(S') be the makespan of the schedule obtained by interchanging [i] and [j]. If T(S') < T(S), let S = S', i = j and j = j + 1. If j < n, return to step 1; otherwise enter step 3.
(3) Set i = i + 1. If i < n, set j = i + 1 and return to step 1; otherwise enter step 4.
(4) Accept schedule S with makespan T(S) as an approximate solution.

The schedule S may still be improved by applying neighborhood search once more (i.e. repeating steps 1 through 3 above with seed as the schedule obtained at step 4).
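The interchange procedure above can be sketched as follows; this is a condensed rendering of steps 0-4 rather than a literal transcription of the loop control, and the makespan function and data are invented for illustration:

```python
def neighborhood_search(seed, makespan_fn, passes=2):
    """Pairwise-interchange improvement: try swapping positions i and j and
    keep any swap that reduces the makespan; passes=2 reuses the improved
    schedule as the new seed, as suggested in the text."""
    S, T = list(seed), makespan_fn(seed)
    for _ in range(passes):
        for i in range(len(S) - 1):
            for j in range(i + 1, len(S)):
                S2 = S[:]
                S2[i], S2[j] = S2[j], S2[i]  # interchange [i] and [j]
                T2 = makespan_fn(S2)
                if T2 < T:                   # accept only improving swaps
                    S, T = S2, T2
    return S, T

# Linear-case makespan with invented coefficients:
def ms(order, a=(1.0, 2.0, 0.5), b=(0.5, 0.3, 0.9)):
    t = 0.0
    for i in order:
        t += a[i] + b[i] * t
    return t

improved, T = neighborhood_search([1, 0, 2], ms)
```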

COMPUTATIONAL EXPERIENCE

To test the effectiveness of the proposed heuristic algorithms in finding optimal or near-optimal schedules, the proposed optimization algorithm, both heuristic algorithms and the neighborhood search technique were used to solve problems varying in size from 4 to 7 jobs. A set of 20 problems was solved for each problem size. The ai, bi and ci values were generated from a uniform distribution in the range [0, 1]. For each problem, the effectiveness of the kth heuristic algorithm, qk, is defined as:

qk = (Pk - Po)/Po

where Pk and Po are the kth heuristic and optimal solutions, respectively. Table 6 depicts the number of optimal solutions (# OPT) and the minimum (MIN), average (AVR) and maximum (MAX) values of q for each problem size and for each algorithm.

The difference in effectiveness between the two heuristic algorithms is not appreciable. In fact, the static heuristic algorithm seems to be more effective than the dynamic one, especially for larger problems. Moreover, the static rules require less computational effort than the dynamic heuristic algorithm. Therefore, further experimentation was done with the static heuristic algorithm. The neighborhood search technique was used to improve the solution obtained by the static heuristic algorithm. Table 7 shows the results of these experiments with a single and a double pass of the neighborhood search procedure.

The results in Table 7 above show that the neighborhood search technique improves the quality of the heuristic solution considerably. However, the computational effort is much more than for the static heuristic algorithm only. The tradeoff depends on other managerial considerations and the values assigned to a reduction in makespan.

CONCLUSIONS

This paper has described exact and heuristic algorithms for finding a minimum makespan schedule for a single facility scheduling problem when the processing times of jobs are nonlinear functions of their starting times. While the problem appears to be in the NP-complete category, the proposed algorithms do provide a practical way to approach it. Further refinements and improvements may be possible to provide even more efficient and effective solution procedures. Nevertheless, the suggested approaches are useful in finding practical and workable schedules for these difficult problems. Even if branch and bound approaches to these problems become available in the future, the proposed heuristic algorithms can be used to find an initial solution to act as an upper bound to curtail the domain of search.

REFERENCES

1. M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco (1979).

2. J. N. D. Gupta. An improved combinatorial algorithm for the flowshop scheduling problem. Opns Res. 20, 1753-1758 (1971).

3. J. N. D. Gupta. A review of flowshop scheduling research. Disaggregation: Problems in Manufacturing and Service Organizations (Edited by L. P. Ritzman et al.) Martinus Nijhoff, The Hague (1979).

4. J. N. D. Gupta. Optimal scheduling in a multi-stage flowshop. AIIE Trans. 4, 238-243 (1972).

5. A. Schild and I. J. Fredman. Scheduling tasks with deadlines and nonlinear loss functions. Mgmt Sci. 9, 73-81 (1962).