Crossdocking—JIT scheduling with time windows

Y Li(1), A Lim(1) and B Rodrigues(2)*

(1) Hong Kong University of Science and Technology, Clearwater Bay, Hong Kong; (2) Singapore Management University, Singapore

In this paper, we study a problem central to crossdocking that aims to eliminate or minimize storage and order picking activity using JIT scheduling. The problem is modelled naturally as a machine scheduling problem. As the problem is NP-hard, and for real-time applications, we designed and implemented two heuristics. The first uses Squeaky Wheel Optimization embedded in a Genetic Algorithm and the second uses Linear Programming within a Genetic Algorithm. Both heuristics offer good solutions in experiments where comparisons are made with the CPLEX solver.

Journal of the Operational Research Society (2004) 55, 1342–1351. doi:10.1057/palgrave.jors.2601812. Published online 7 July 2004. © 2004 Operational Research Society Ltd. All rights reserved. 0160-5682/04 $30.00. www.palgrave-journals.com/jors

Keywords: crossdock; machine scheduling; just-in-time scheduling; heuristics

*Correspondence: B Rodrigues, School of Business, Singapore Management University, Singapore 259756, Singapore. E-mail: [email protected]

Introduction

Materials handling is a central concern in storage space decisions and is mostly a cost-consuming activity. The objectives for materials handling are therefore cost focused and attempt to reduce handling costs while increasing space utilization. Materials handling can be improved through good load utilization, space layout and equipment choices. Typically, storage and order picking operations take up the bulk of handling activity in a warehouse. These operations include stock locating, stock arrangement, product sequencing, order splitting and item batching, all of which are labour intensive and expensive. Further, warehouses need to be configured to handle equipment and storage facilities to accommodate these activities, and an inventory management system has to be in place. Crossdocking attempts to lessen or eliminate these burdens by reducing warehouses to purely trans-shipment centres where receiving and shipping are the only functions.

The objective is to transfer incoming cargo onto outgoing vehicles, dispensing with costly interfacing activities. Shipments need to spend very little time at crossdocks, typically less than 24 h, before being moved on in the supply chain. Cargo with known destinations is shipped to crossdocking centres. At crossdocks, trucks arrive with cargo that is sorted, consolidated and loaded onto outbound vehicles headed for manufacturing sites, retailers or even another warehouse or crossdock. In a crossdock, the customer is predetermined before the product arrives and there is no need for storage. This is different from a typical warehouse, where stock is maintained until a customer order is filled, at which point the required product is picked, packed and shipped. Replenishments, if necessary, are stored until the next demand.

The working area at a crossdock can be divided into an import area and an export area, where breakdown and buildup occur, respectively. In the import area, incoming containers are broken down and, in the export area, containers are built up after consolidation if necessary. Incoming cargo reaches the crossdock at various times since it comes from a number of suppliers. Items, once broken down, are either shipped away directly or sent to the export area to be loaded into outgoing containers. Outbound cargo may be shipped away by vehicles with scheduled departure times, such as scheduled aircraft or trains (Figure 1). In this context, each incoming container has a release time and a due date, and each outgoing container has a due date.

Each incoming (resp. outgoing) container is processed by a breakdown (resp. buildup) process in the import (resp. export) area. This is usually accomplished by teams of workers and equipment. Since such teams are available in shifts and are limited in number, scheduling teams to jobs has to be achieved precisely. This is especially so when specialized teams and equipment are required, for example, when handling dangerous cargo at airports. It is expected that once breakdown is completed, cargo is packed into outgoing containers immediately and is ready to be shipped out. In such operations, timing is important and, for the purpose of achieving the primary objective of crossdocking, crucial. We need a schedule to specify when to start breakdown and when to complete buildup of all cargo, where the goal is to complete processing each container exactly at its due date and hence achieve the raison d'être of the crossdock.

Recently, crossdocking has received much attention as a result of the commercial successes of large trans-shippers


such as Wal-Mart.1 Research on the subject has focused on determining a crossdocking centre's position in a supply chain network,2,3 designing the shape of crossdocking centres to reduce the long-term congestion level4 and the distribution of inbound and outbound docks.5 In this work, we study a scheduling problem that allows a warehouse to function as a crossdock, where transit storage time for cargo is eliminated or minimized. We do this by building a model using machine scheduling notions to depict the problem. We then provide heuristic solution approaches for this NP-hard problem. Experiments are carried out to determine the usefulness of these techniques, and analysis and comparisons are made.

The model

We have noticed that the crossdocking scheduling problem described above can be modelled as a machine scheduling problem. Such a description is natural and is useful for our purpose, since we are then able to exploit the vast machine scheduling literature available for the problem at hand. Each incoming container and each outgoing container is a job to be processed by teams, which we think of as machines, where only a limited number are available. Further, machines handling incoming cargo can be thought of as parallel, and likewise for machines handling outgoing cargo (see Figure 1). This is because teams are able to operate simultaneously. Each incoming container can be thought of as a job that has a release time after which it can be processed, a due date and a processing time, assumed to be known beforehand. Each outgoing container has a due date, a processing time, and a number of source containers which feed it. Here, we use 'container' as a generic packing form to include containers, pallets, igloos and other packing used in warehouses. We use the terms 'team' and 'machine', and 'container' and 'job', interchangeably. The number of breakdown and buildup processing teams is also known, together with penalties for earliness and lateness. We assume we have n incoming containers, i = 1, ..., n, and N outgoing containers, j = 1, ..., N, and will use the following notation:

r_i   release time after which container i can be broken down
d_i   due date for incoming container i
p_i   processing time required to break down container i
D_j   due date of outgoing container j
P_j   processing time required to build up container j
K_j   number of sources of container j; that is, j is built from K_j different incoming containers
S_ij  the ith source of container j, i = 1, ..., K_j
m     number of breakdown machines
M     number of buildup machines
a     penalty for unit time earliness
b     penalty for unit time tardiness

Breakdown teams are described by MI = {M_I,1, M_I,2, ..., M_I,m} and the buildup teams by MO = {M_O,1, M_O,2, ..., M_O,M}, where the machines within MI and within MO are parallel. Jobs are given by JI = {J_I,1, J_I,2, ..., J_I,n} and JO = {J_O,1, J_O,2, ..., J_O,N}, where jobs in JI are inbound and processed only by teams in MI, and jobs in JO are outbound and processed only by teams in MO. A job J_I,i in JI is described by {r_i, p_i, d_i}, and a job J_O,j in JO is described by {pred, P_j, D_j}, where P_j and D_j denote its processing time and due date, respectively, and pred describes its predecessors, all belonging to JI. The earliness and tardiness of a job J_I,i are defined by e_i = max{0, d_i - c_i} and t_i = max{0, c_i - d_i}, where c_i is the job's finish time; these incur penalties a·e_i and b·t_i, where a and b are the unit earliness and tardiness penalties. The definitions for a job J_O,j are similar. The problem then is to find a schedule that minimizes the total penalty when job starting times have been specified.
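As a concrete illustration of how the total penalty of a schedule is computed from these definitions, the following sketch (ours, not from the paper; the instance numbers are invented) evaluates a set of completion times:

```python
def total_penalty(completion, due, a, b):
    """Sum of weighted earliness/tardiness penalties over all jobs:
    e_i = max(0, d_i - c_i), t_i = max(0, c_i - d_i)."""
    penalty = 0
    for c, d in zip(completion, due):
        penalty += a * max(0, d - c) + b * max(0, c - d)
    return penalty

# Three jobs due at time 10: one finishes early by 2, one on time,
# one late by 3, with unit penalties a = 1, b = 5.
print(total_penalty([8, 10, 13], [10, 10, 10], a=1, b=5))  # -> 1*2 + 5*3 = 17
```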

We do not consider the time needed for a container to travel from the import area to the export area. There are two reasons for this. The first is that this time is in many cases negligible compared to processing times. The second is that, even if we take travelling time into consideration, we can incorporate it easily into the model by advancing the due dates of outgoing containers.

[Figure 1 Typical crossdock flow: incoming cargo enters the import area for the breakdown operation; items are either shipped away directly after breakdown or sent to the export area for the buildup operation, from which outgoing cargo leaves the crossdocking centre.]

Cargo is processed in two phases, breakdown and buildup, and precedence relationships exist between jobs in these phases: each incoming container i must be broken down before an outgoing container j can commence buildup if cargo items in container j come from container i. We can assume all breakdown (buildup) machines are identical. If a job is completed earlier than its due date, an earliness penalty is applied, and if a job is finished later than its due date, a tardiness penalty is applied. The crossdocking problem can now be viewed as a machine scheduling problem, which we can describe as a two-phase parallel machine scheduling problem with earliness and tardiness, and an integer programming model can be formulated for this scheduling problem as follows:

Decision variables:

y_ik = 1 if incoming container i is processed on breakdown machine k, and 0 otherwise, for i = 1, ..., n; k = 1, ..., m
Y_jk = 1 if outgoing container j is processed on buildup machine k, and 0 otherwise, for j = 1, ..., N; k = 1, ..., M
x_ij = 1 if incoming containers i and j are processed on the same machine and i immediately precedes j, and 0 otherwise, for i, j = 1, ..., n
X_ij = 1 if outgoing containers i and j are processed on the same machine and i immediately precedes j, and 0 otherwise, for i, j = 1, ..., N
c_i   completion time of incoming container i, i = 1, ..., n
C_j   completion time of outgoing container j, j = 1, ..., N
e_i   earliness of incoming container i, i = 1, ..., n
t_i   tardiness of incoming container i, i = 1, ..., n
E_j   earliness of outgoing container j, j = 1, ..., N
T_j   tardiness of outgoing container j, j = 1, ..., N

For convenience of formulation, we introduce two dummy jobs for the breakdown and buildup areas, respectively: incoming (resp. outgoing) containers 0 and n+1 (resp. 0 and N+1). These are characterized as follows:

y_{0,k} = 1,     k = 1, ..., m
y_{n+1,k} = 1,   k = 1, ..., m
c_0 = 0
Y_{0,k} = 1,     k = 1, ..., M
Y_{N+1,k} = 1,   k = 1, ..., M
C_0 = 0

Objective function:

minimize  sum_{i=1}^{n} (a·e_i + b·t_i) + sum_{j=1}^{N} (a·E_j + b·T_j)

Constraints:

sum_{k=1}^{m} y_ik = 1,    i = 1, ..., n    (1)

sum_{k=1}^{M} Y_jk = 1,    j = 1, ..., N    (2)

sum_{i=0, i≠j}^{n} x_ij = 1,    j = 1, ..., n    (3)

sum_{i=0, i≠j}^{N} X_ij = 1,    j = 1, ..., N    (4)

sum_{j=1, j≠i}^{n+1} x_ij = 1,    i = 1, ..., n    (5)

sum_{j=1, j≠i}^{N+1} X_ij = 1,    i = 1, ..., N    (6)

sum_{j=1}^{n+1} x_{0,j} = m    (7)

sum_{j=1}^{N+1} X_{0,j} = M    (8)

sum_{i=0}^{n} x_{i,n+1} = m    (9)

sum_{i=0}^{N} X_{i,N+1} = M    (10)

x_{0,i} + x_{0,j} + y_ik + y_jk ≤ 3,    i, j = 1, ..., n+1, i≠j; k = 1, ..., m    (11)

X_{0,i} + X_{0,j} + Y_ik + Y_jk ≤ 3,    i, j = 1, ..., N+1, i≠j; k = 1, ..., M    (12)

x_ij − 1 ≤ y_ik − y_jk ≤ 1 − x_ij,    i, j = 1, ..., n, i≠j; k = 1, ..., m    (13)

X_ij − 1 ≤ Y_ik − Y_jk ≤ 1 − X_ij,    i, j = 1, ..., N, i≠j; k = 1, ..., M    (14)

c_j − c_i + G(3 − x_ij − y_ik − y_jk) ≥ p_j,    i = 0, 1, ..., n; j = 1, ..., n+1, i≠j; k = 1, ..., m    (15)

C_j − C_i + G(3 − X_ij − Y_ik − Y_jk) ≥ P_j,    i = 0, 1, ..., N; j = 1, ..., N+1, i≠j; k = 1, ..., M    (16)

c_i − r_i ≥ p_i,    i = 1, ..., n    (17)

C_j − c_{S_kj} ≥ P_j,    j = 1, ..., N; k = 1, ..., K_j    (18)

c_i − d_i = t_i − e_i,    i = 1, ..., n    (19)

C_j − D_j = T_j − E_j,    j = 1, ..., N    (20)

y_ik ∈ {0, 1},    i = 0, ..., n+1; k = 1, ..., m
Y_jk ∈ {0, 1},    j = 0, ..., N+1; k = 1, ..., M
x_ij ∈ {0, 1},    i, j = 0, ..., n+1, i≠j
X_ij ∈ {0, 1},    i, j = 0, ..., N+1, i≠j
c_i, e_i, t_i, C_j, E_j, T_j ∈ Z+,    i = 1, ..., n; j = 1, ..., N

where G is a sufficiently large constant.

Constraints 1 and 2 ensure each job (breakdown or buildup) is processed by exactly one machine. Constraints 3 and 4 ensure each non-dummy job has exactly one preceding job (possibly a dummy job). Constraints 5 and 6 ensure that each non-dummy job has exactly one succeeding job (possibly a dummy job). Constraints 7 and 8 restrict the dummy jobs 0 in the import and export areas to be the first job on each machine, and constraints 9 and 10 restrict the dummy jobs n+1 and N+1 to be the last job on each machine. Constraints 11 and 12 specify that if jobs i and j both immediately follow job 0, they must be on different machines. Constraints 13 and 14 specify that if job i immediately precedes job j, they must be on the same machine. Constraints 15 and 16 ensure that if job i precedes job j, there must be enough time between them for job j to be completed. Constraint 17 enforces release times, and constraint 18 specifies that a container can commence buildup only if all its source containers have been broken down. Constraints 19 and 20 specify each job's tardiness and earliness.

We now simplify the above model by reducing the number of constraints that are present. This is done so that the solution techniques we employ later can be applied more easily.

In order to reduce the number of constraints, we will require the following binary variable reduction lemma from Sierksma:6 let f: D → R for some domain D, and let G be a nonzero real number such that G ≥ max{f(x) | x ∈ D}. Then, for each d ∈ {0, 1} and x ∈ D, the following are equivalent: (1) d = 0 ⇒ f(x) ≤ 0, and (2) f(x) − G·d ≤ 0.

The new model has the same objective function as before, and its decision variables include the following used before:

y_ik,  i = 1, ..., n; k = 1, ..., m;    Y_jk,  j = 1, ..., N; k = 1, ..., M
c_i, e_i, t_i,  i = 1, ..., n;    C_j, E_j, T_j,  j = 1, ..., N

In replacing the remaining decision variables, we introduce the following new variables:

I_ijk,  i, j = 1, ..., n, i≠j; k = 1, ..., m
J_ijk,  i, j = 1, ..., N, i≠j; k = 1, ..., M

where I_ijk (resp. J_ijk) = 1 if incoming (resp. outgoing) containers i and j are both processed by machine k and i precedes (not necessarily immediately) j, and 0 otherwise.

Further to this, the constraints can now be written as follows.

For job-to-machine uniqueness, these are the same as constraints 1 and 2:

sum_{k=1}^{m} y_ik = 1,    i = 1, ..., n
sum_{k=1}^{M} Y_jk = 1,    j = 1, ..., N

For job precedence relationships, we first note that, for incoming containers:

y_ik + y_jk = 2  ⇔  I_ijk + I_jik = 1
y_ik + y_jk ≤ 1  ⇔  I_ijk + I_jik = 0
i, j = 1, ..., n, i≠j; k = 1, ..., m    (21)

Since I_ijk + I_jik can take only the values 0 or 1, we can view it as a single binary variable and, using the lemma given above, these relationships can be transformed, by enumerating all the possible values of y_ik, y_jk, I_ijk and I_jik, to obtain the following:

y_ik + y_jk − (I_ijk + I_jik) ≤ 1
2(I_ijk + I_jik) − y_ik − y_jk ≤ 0
i, j = 1, ..., n, i≠j; k = 1, ..., m    (22)

Similarly, for outgoing containers, we have:

Y_ik + Y_jk − (J_ijk + J_jik) ≤ 1
2(J_ijk + J_jik) − Y_ik − Y_jk ≤ 0
i, j = 1, ..., N, i≠j; k = 1, ..., M    (23)

In considering the requirement for sufficient time between jobs on the same machine, we have, for incoming containers:

c_i ≤ (c_j − p_j) + G(1 − I_ijk),    i, j = 1, ..., n, i≠j; k = 1, ..., m    (24)

and, similarly, for outgoing containers:

C_i ≤ (C_j − P_j) + G(1 − J_ijk),    i, j = 1, ..., N, i≠j; k = 1, ..., M    (25)

Constraints (17)–(20) in the first formulation are needed again for this new formulation of the model.

It is easily verified from the above that, for job precedence within the same area (import or export), the new formulation has significantly fewer constraints than the first formulation. For example, for the import area, constraints 11, 13 and 15 comprise a total of (4n²m + m) constraints, while the new formulation has only (3n²m − 3nm) constraints. It is expected, therefore, that the new formulation should be easier to solve than the first formulation, and this is verified by the experiments.
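The correctness of the transformation from relationship (21) to constraints (22) can be checked by brute-force enumeration of the four binary variables. The following sketch (ours, not part of the paper) performs this check:

```python
from itertools import product

# Enumerate all values of y_ik, y_jk, I_ijk, I_jik and confirm that the
# linear constraints (22) hold exactly when relationship (21) holds:
# I_ijk + I_jik = 1 if both jobs are on machine k, and 0 otherwise.
for y_i, y_j, I_ij, I_ji in product((0, 1), repeat=4):
    feasible = (y_i + y_j - (I_ij + I_ji) <= 1
                and 2 * (I_ij + I_ji) - y_i - y_j <= 0)
    intended = (I_ij + I_ji == 1) if y_i + y_j == 2 else (I_ij + I_ji == 0)
    assert feasible == intended, (y_i, y_j, I_ij, I_ji)
print("constraints (22) exactly encode relationship (21)")
```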

Relationships with machine scheduling

We have shown our problem can be viewed as a two-phase parallel machine scheduling problem with time window constraints. The objective for the problem is closely related to those for just-in-time (JIT) models, where a service or job is expected to be completed in a time window and not, as in traditional models, where earlier is better.

The earliest JIT machine scheduling problem is single machine scheduling with a common due date d. There are many variations even for this problem. If d is so large that any given scheduling decision will not be affected by d, we call the problem unrestricted. Such a problem can be solved in polynomial time.7,8 If d is not large, the problem is restricted, and has been shown to be NP-hard in Hall et al,9 even if unit earliness and tardiness penalties are the same for all the jobs. If earliness and tardiness penalties are job-dependent, the problem is NP-hard even for unrestricted d.9 Many approaches have been used for these problems. In Bagchi et al,8 the authors propose an enumerative method for the restricted version. More efficient heuristic algorithms are given in Baker and Chadowitz10 and Sundararaghavan and Ahmed.11 Other variations of single machine scheduling with a common due date can be found in Szwarc,12 Lee et al13 and Yeung et al,14 and an excellent survey is given in Baker and Scudder.15

Job-dependent due dates make the problem much more complicated. Most properties of optimal schedules for common due dates no longer hold. The problem is shown to be NP-complete in Garey et al.16 However, optimal schedules can be found in polynomial time if a job sequence has been determined, and the problem can therefore be reduced to finding a good job sequence. The main approach for the problem is to first search for a good job sequence and then insert idle time into it optimally. For this, several optimal idle time insertion algorithms can be found in Davis and Kanet,17 Yano and Kim18 and Lee and Choi.19 To find a good job permutation, a genetic algorithm is used in Lee and Choi.19 A linear programming approach was outlined in ten Kate et al,20 where branch-and-bound and heuristic methods are developed for small and large instances, respectively.

Most researchers do not consider job release times; all jobs are assumed to be available at time 0. Few papers deal with the situation where each job has a unique release time. Several heuristic methods have been developed for such problems in Mazzini and Armentano21 and Sridharan and Zhou.22

Although a parallel machine problem seems more suited to the crossdocking problem, there is little research on this problem. As with single machine scheduling, most research done on parallel machine scheduling deals with common due dates. Several properties of optimal schedules, corresponding to the single machine counterpart, are presented in Sridharan and Zhou.22 In Bank and Werner,23 genetic algorithms are developed; the paper takes into account sequence-dependent job setup times, and earliness and tardiness penalties are the same for all the jobs. Sivrikaya-Serifoglu and Ulusoy24 decompose the problem with a column generation method and then apply a branch-and-bound approach to get optimal schedules for small instances.

Further, few authors study problems with job-dependent due dates. A heuristic algorithm for single machine scheduling is extended to parallel machines in Heady and Zhu25 and, in Radhakrishnan and Ventura,26 a simulated annealing method is implemented. Neither paper considers job release times.

The crossdocking problem, even as a machine scheduling problem, is new and, to the best of our knowledge, no previous research has been done on the machine scheduling characterization of the problem. It is NP-hard since, if we let the due dates of outgoing jobs become large enough, the two phases can be processed independently, where each phase is NP-hard since the unrestricted single machine problem is already NP-hard.
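To make the two-phase structure concrete, the following sketch (ours; the tiny instance is invented) solves a toy crossdock instance by exhaustive search, respecting release times, the single-machine capacity of each area, and the breakdown-before-buildup precedence. Exhaustive search is, of course, only viable at this scale:

```python
from itertools import product

# Toy instance (invented): two incoming containers feed one outgoing
# container; one breakdown machine (m = 1) and one buildup machine (M = 1).
r = [0, 1]; p = [2, 2]; d = [3, 4]  # incoming: release, processing, due
P, D = 3, 8                         # outgoing: processing, due
a, b = 1, 5                         # unit earliness/tardiness penalties
HORIZON = 12

def penalty(c, due):
    return a * max(0, due - c) + b * max(0, c - due)

best = None
for s1, s2 in product(range(HORIZON), repeat=2):
    if s1 < r[0] or s2 < r[1]:
        continue  # release times (constraint 17)
    c1, c2 = s1 + p[0], s2 + p[1]
    if not (c1 <= s2 or c2 <= s1):
        continue  # one breakdown machine: intervals must not overlap
    for S in range(max(c1, c2), HORIZON):  # buildup starts after both sources
        C = S + P
        cost = penalty(c1, d[0]) + penalty(c2, d[1]) + penalty(C, D)
        if best is None or cost < best[0]:
            best = (cost, (c1, c2, C))

print(best)  # -> (1, (2, 4, 8)): one unit of earliness is unavoidable
```

With a single breakdown machine, the two incoming containers cannot both finish exactly at their due dates, so the optimum advances one of them by a time unit rather than incur the larger tardiness penalty.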

Solution approaches

We have provided two formulations for the crossdock scheduling problem. Both are appropriate for the problem, although the experiments show that an integer programming solver such as ILOG CPLEX will still take a long time to reach an optimal solution for large-scale test cases, even using the second formulation. For real-time applications, more efficient algorithms are therefore needed. In this section, we propose two approaches for the problem; both can be viewed as local search embedded in a genetic algorithm (GA). One uses the Squeaky Wheel Optimization heuristic and the other utilizes an LP solver such as CPLEX to solve a subproblem, and both of these are embedded in a GA.

GA

GAs are often used to solve NP-hard problems, for which efficient exact algorithms usually cannot be found. In particular, they have many applications in machine scheduling.27–29 A typical GA consists of an encoding scheme and selection, crossover, mutation and evaluation components.

Encoding represents a solution by a 'chromosome'. For parallel machine scheduling, a priority-based encoding is used in Li et al.30 Such an encoding scheme, on the one hand, forms a mapping between the solution space and the priority space and, on the other hand, keeps solutions feasible after crossover and mutation operations. We adopt such a method for encoding (see Figure 2). Each job is assigned a priority. A smaller number corresponds to a higher priority, and a job with higher priority will be scheduled first. Since our problem can be divided into two phases, jobs in the import area and export area are ranked separately.

For the selection, crossover and mutation operators (see Li et al30), we use the following. A steady-state selection scheme is adopted for selecting parent chromosomes. In every generation, a few good (more fit) chromosomes are selected for creating new offspring. Other, less fit chromosomes are removed and the new offspring are used in their place. The rest of the population survives to the new generation. A two-point method is used for crossover. As shown in Figure 3, two random positions divide two parents into three segments, and the offspring chromosome is formed from its parents. For the mutation operator, we randomly choose a gene of one chromosome and change it.

The performance of a GA depends on its parameters, including POPSIZE, MAXGENS, XOVER and RMUTATE. POPSIZE specifies the population in each generation, which we set to 1000 for our problem. MAXGENS is the maximum number of generations; this is usually set to 1000 for better performance. XOVER and RMUTATE are, respectively, the crossover and mutation rates. We set these to 0.85 and 0.15, respectively.
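The two-point crossover and mutation operators described above can be sketched as follows (an illustrative sketch; function and variable names are ours):

```python
import random

def two_point_crossover(parent1, parent2, rng=random):
    """Two random cut points split the parents into three segments; the
    offspring takes the outer segments from parent1 and the middle
    segment from parent2."""
    assert len(parent1) == len(parent2)
    i, j = sorted(rng.sample(range(len(parent1)), 2))
    return parent1[:i] + parent2[i:j] + parent1[j:]

def mutate(chromosome, max_priority, rng=random):
    """Randomly choose one gene and replace it with a random priority."""
    child = chromosome[:]
    child[rng.randrange(len(child))] = rng.randint(1, max_priority)
    return child

p1 = [5, 2, 4, 4, 3]  # priority chromosome for five jobs (smaller = higher)
p2 = [1, 2, 3, 1, 3]
print(two_point_crossover(p1, p2, random.Random(0)))
```

Because every offspring gene is a valid priority taken from a parent, the decoding procedure can always interpret the result, which is the feasibility-preserving property mentioned above.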

Given a chromosome, we can obtain various solutions using different decoding methods. We adopt a greedy procedure for decoding, which is effective in obtaining good schedules. Decoding in the import area can be described as follows:

1. Define a two-dimensional array to record current machine utilization: res[numofMachines][timeHorizon], where, for example, res[1][1] = 1 denotes that in time period 1, machine 1 is idle, that is, available. Initialize each of its elements to 1, which denotes unused.

2. Choose the job with the highest priority among all the unscheduled jobs. First try to schedule it just in time, that is, schedule the job to complete exactly at its due date, provided (1) there is a machine with enough idle time to process it in the interval, and (2) such an arrangement allows the job to start after its release time. If the job is assigned and there are still unscheduled jobs, repeat step 2.

3. Compare the unit penalties of earliness and tardiness. (For our crossdocking problem, the unit tardiness penalty is the larger, since we want to satisfy customer demand on time.) Without loss of generality, we take the unit tardiness penalty to be much larger, so we would rather advance the job than defer it, in the hope of minimizing the penalty. We try to schedule the job on a machine by advancing it step by step. Once an available machine is found, modify the machine utilization record and then go to step 2.

4. If we cannot find an available machine even at the job's release time, we can only defer the job from its due date. We try each machine by deferring the job step by step until we find an available machine, then modify the machine utilization record and go to step 2.

[Figure 2 Chromosome encoding scheme: each index (job) is assigned a gene (priority).]
[Figure 3 Two-point crossover.]

Decoding for outgoing jobs is similar, with the modification that the release time of each job is calculated from the completion times of its preceding incoming jobs.
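The greedy decoding procedure can be sketched as follows (a simplified sketch of steps 1–4, assuming unit time periods and a fixed horizon; all names are ours):

```python
def decode(jobs, num_machines, horizon):
    """Greedy decoding for one area.  jobs: list of (priority, release,
    processing, due); a smaller priority is scheduled first.  Returns
    {job index: (machine, start)}.  Tries just-in-time completion first,
    then advances the job, and only defers it as a last resort (tardiness
    carrying the larger unit penalty)."""
    free = [[True] * horizon for _ in range(num_machines)]  # res[k][t]

    def fits(k, start, proc):
        return all(free[k][t] for t in range(start, start + proc))

    schedule = {}
    for idx, (prio, r, p, d) in sorted(enumerate(jobs), key=lambda x: x[1][0]):
        chosen = None
        # try to finish exactly at the due date, then step-by-step earlier,
        # never starting before the release time
        for s in range(d - p, r - 1, -1):
            k = next((k for k in range(num_machines) if fits(k, s, p)), None)
            if k is not None:
                chosen = (k, s)
                break
        if chosen is None:  # no machine available: defer past the due date
            for s in range(max(r, d - p + 1), horizon - p):
                k = next((k for k in range(num_machines) if fits(k, s, p)), None)
                if k is not None:
                    chosen = (k, s)
                    break
        k, s = chosen
        for t in range(s, s + p):  # mark the machine busy
            free[k][t] = False
        schedule[idx] = chosen
    return schedule

# Two jobs competing for one machine: the higher-priority job finishes
# exactly at its due date; the other is advanced to avoid tardiness.
print(decode([(1, 0, 2, 4), (2, 0, 2, 4)], num_machines=1, horizon=10))
# -> {0: (0, 2), 1: (0, 0)}
```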

Since we cannot guarantee optimality when we evaluate a given chromosome, the solution quality is limited. We hope to obtain better solutions by utilizing a local search procedure. We can apply local search to the best chromosomes of each generation, which have the highest potential for a better solution. In the next two sections, we describe two local search techniques.

Using SWO as local search

'Squeaky Wheel' Optimization (SWO) is a relatively new heuristic first proposed in Joslin and Clements.31 The SWO framework comprises a 'Constructor–Analyzer–Prioritizer' scheme. The solution space of a problem is mapped to a priority space. According to given priorities, the 'constructor' constructs a solution, which is then analysed by the 'analyzer' to find 'trouble' elements. Based on these elements, the 'prioritizer' generates a new priority sequence. The cycle repeats, with the three components working recursively, until no improvement can be obtained or a maximum number of iterations is reached. Because SWO is easily trapped in local optima, it is often used with tabu search. We use SWO for the problem by taking the job assignment order to determine priority. Before each SWO procedure is called, we already have a relatively good solution from the current generation of the GA. The following describes the SWO algorithm.

The SWO Algorithm

1. Set obj_best to the penalty of the initial given schedule; set the number of iterations to 500 and counter = 0.

2. Evaluate the current solution: obj = Σ_{i=1}^{n}(αe_i + βt_i) + Σ_{j=1}^{N}(αE_j + βT_j). Store the current solution if obj < obj_best and set obj_best = obj; set counter = counter + 1. If counter = number of iterations, stop; otherwise, generate a new job priority sequence according to each job's penalty, so that a job with a higher penalty is scheduled earlier.

3. Generate a new schedule according to the new job priority sequence and penalties. All incoming containers are scheduled first. Choose a job to be scheduled (if it is an outgoing container, its release time is the maximum of the completion times of its source containers).

(a) If its penalty in the last iteration is not 0, go to (b). Otherwise, check whether we can schedule the job from its last starting time; if possible, go to (d). Otherwise, check whether we can schedule it as late as possible without delay; if possible, go to 2; otherwise, attempt to schedule it with as small a tardiness as possible.

(b) If the job was tardy in the last iteration, go to (c). Otherwise, try to advance the job by one time unit; if this is possible and it starts after its release time, go to (d). Otherwise, advance it further, unit by unit, to check whether it can be scheduled earlier than in the last iteration and after its release time; if this is possible, go to (d); otherwise, schedule it as early as possible from the starting time of the last iteration.

(c) Check whether we can schedule the job one time unit later than in the last iteration; if this is possible, go to 2. Otherwise, try to schedule it later step by step, and once this is done, go to (d). If the job cannot be scheduled before its due date, try to move it forward from the starting time of the last iteration; if it can be scheduled after its release time, go to (d); otherwise, schedule it as early as possible, with tardiness.

(d) Calculate the penalty of the current job: if the job is early, penalty = α(due date − completion time); if the job is late, penalty = β(completion time − due date). For an outgoing job that is late owing to the lateness of its source container(s) and whose penalty is larger than that of its source container(s), change the source container(s)' penalty to the outgoing job's penalty. Go to 2.

For the algorithm described, we note the following:

• Each job can only start after its release time. For outgoing containers, the release time is defined by their source containers.

• The penalty calculated for priority sequences is different from the penalty calculated for the objective function. When deciding a priority sequence, an incoming container will incur a tardiness penalty if it causes its destination container to be tardy, even if it is not itself tardy.

• In our implementation, we set the unit earliness and tardiness penalties to 1 and 100, respectively, and we suppose the time horizon is no greater than 100; more precisely, we suppose that no earliness penalty can itself reach 100, so a tardiness penalty always dominates. A similar modification is needed if the unit penalty parameters are changed.

• We use a greedy method for the local search in each iteration, trying to decrease the penalty. To avoid being trapped in a local optimum, or cycling, we attempt to make only a little progress each time; for example, in 3(b), when a job is tardy we advance it by one time unit if possible, rather than using an 'earlier-the-better' strategy.
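The Constructor–Analyzer–Prioritizer cycle described above can be sketched on a simplified single-machine version of the problem. The greedy earliest-slot constructor, the function names and the data layout below are our own illustrative assumptions; the paper's actual constructor is the JIT decoding of steps 3(a)–(d):

```python
ALPHA, BETA = 1, 100  # unit earliness and tardiness penalties, as in the paper

def construct(order, jobs):
    """Constructor: greedily place jobs in priority order at the earliest
    free slot at or after their release time (one machine, unit capacity).
    jobs[j] = (release, due, processing_time)."""
    busy, completion = set(), {}
    for j in order:
        release, _, proc = jobs[j]
        start = release
        while any(t in busy for t in range(start, start + proc)):
            start += 1
        busy.update(range(start, start + proc))
        completion[j] = start + proc
    return completion

def analyse(completion, jobs):
    """Analyzer: per-job earliness/tardiness penalty (the 'trouble' measure)."""
    penalty = {}
    for j, c in completion.items():
        due = jobs[j][1]
        penalty[j] = ALPHA * (due - c) if c <= due else BETA * (c - due)
    return penalty

def swo(jobs, order, iterations=500):
    """Prioritizer loop: jobs with higher penalty are moved earlier in the
    priority sequence for the next construction round."""
    best_completion = construct(order, jobs)
    best_obj = sum(analyse(best_completion, jobs).values())
    for _ in range(iterations):
        completion = construct(order, jobs)
        penalty = analyse(completion, jobs)
        obj = sum(penalty.values())
        if obj < best_obj:
            best_obj, best_completion = obj, completion
        order = sorted(order, key=lambda j: -penalty[j])
    return best_obj, best_completion
```

With two jobs competing for the same slot, one round of reprioritization moves the heavily penalized (tardy) job to the front, which is exactly the "higher penalty scheduled earlier" rule of step 2.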

Using IP as local search

Although the SWO approach is efficient, we cannot

guarantee optimal solutions when we perform the local

search. We observe that once a schedule is obtained from

any heuristic, we then have a partitioning of jobs on each

machine, that is, we know which jobs are processed by which

machine.

In the second formulation, the IP model is difficult to

solve because there are many integer variables, arising

largely from partitioning and precedence constraints


(21)–(23). However, once a partition is given, the number of

variables decreases greatly resulting in an IP model that is

easier to solve. We call such a resulting model a subproblem.

The only difference between the subproblem and the original

one is that, in the subproblem, each machine is dedicated to

process some specific jobs, or, in other words, each job can

only be processed by a specific machine.

It is easy to verify that this subproblem is NP-hard. Garey et al16 have shown that the single-machine scheduling problem with job-dependent due dates to minimize total earliness and tardiness penalties is NP-hard. For the subproblem, we can

assume that the two phases are independent if we let due

dates of jobs in export area be large enough. For the second

phase, we can then take release times to be 0 since this will

not affect decisions made. Because the jobs are partitioned to

machines, what we deal with in the subproblem is just a

‘single machine scheduling problem with job-dependent due

dates to minimize earliness and tardiness’, which we know is

NP-hard. Hence, the subproblem is NP-hard.

Since our subproblem is NP-hard, we cannot expect a polynomial-time algorithm for optimal solutions. For small-scale cases, however, obtaining optima by solving the IP model directly is a good choice. Compared with the original

problem, the subproblem formulation has the same objective

function but now only with constraints (17)–(20) and (22)–

(25). In fact, constraints (22) and (23) reduce to I_ijk + I_jik = 1 and J_ijk + J_jik = 1 only for those jobs i and j that are processed on the same machine k. Similarly, in constraints (24) and (25), only jobs on the same machine need to be considered, so the number of binary variables is greatly reduced (I_ijk and J_ijk are 0, and the corresponding constraints can be removed, when jobs i and j are not both on machine k). Another observation is that, since all the coefficients in the subproblem formulation are integers, it is easy to generate 'flow cover cuts'32 when solving the problem by a branch-and-bound algorithm. At the same

time, we can tighten the upper bound of the subproblem, since we are only interested in solutions better than the current best one. Many subproblems can be discarded if the solution of their LP relaxation is no better than the current upper bound. Experiments showed that a solver such as CPLEX can exploit these conditions to solve the subproblem efficiently.
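The reduction in model size from fixing a partition can be quantified directly: the original model carries a binary ordering variable for every ordered job pair on every machine, while the subproblem keeps one only for pairs already assigned to the same machine. A small illustrative count (the counting function is ours, not the authors' code):

```python
def ordering_vars(n_jobs, n_machines, partition=None):
    """Count binary pairwise-ordering variables of the form I_ijk.

    partition: list of job counts per machine; None means the original
    model, where any ordered pair of jobs may meet on any machine.
    """
    if partition is None:
        return n_jobs * (n_jobs - 1) * n_machines
    # Subproblem: only ordered pairs assigned to the same machine remain.
    return sum(nk * (nk - 1) for nk in partition)

print(ordering_vars(30, 5))           # 4350 in the full model
print(ordering_vars(30, 5, [6] * 5))  # 150 once the partition is fixed
```

For a 30-job, 5-machine instance with six jobs per machine, the count drops from 4350 to 150, which is why the subproblem is so much easier for a branch-and-bound solver.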

Experiments and results

Test data are generated by specifying five parameters:

number of machines in import area, number of machines in

export area, number of incoming containers, number of

outgoing containers, time horizon. Since our problem is a

practical one, we generated data as realistically as possible.

First, we generated data for the import area. For each

incoming container, we generate a release time ri uniformly

on (0, timehorizon) and a processing time pi uniformly on

(0, longestprocessingtime) where longestprocessingtime is the

longest time to process any container. The due date of this container is r_i + p_i plus a small random variable, because we expect that orders that clearly cannot be satisfied in time will not be accepted. For each outgoing job, we first randomly

generated its source containers, and then took the maximum

of their due dates as its release time to generate remaining

data. The ratios of the number of machines and the time horizon to the number of jobs determine the extent of the penalty.
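The generation scheme above can be sketched as follows; the parameter names, the bound on the number of source containers and the size of the "small random variable" (max_slack) are our own assumptions within the description:

```python
import random

def generate_instance(n_in, n_out, horizon, longest_proc, max_slack=5, seed=0):
    """Generate one crossdock test instance as described in the paper."""
    rng = random.Random(seed)
    incoming = []
    for _ in range(n_in):
        r = rng.randint(0, horizon)            # release time on (0, timehorizon)
        p = rng.randint(1, longest_proc)       # processing time
        d = r + p + rng.randint(0, max_slack)  # due date: feasible plus small slack
        incoming.append({"release": r, "proc": p, "due": d})
    outgoing = []
    for _ in range(n_out):
        # each outgoing container draws a random nonempty set of sources
        k = rng.randint(1, min(3, n_in))
        sources = rng.sample(range(n_in), k)
        # its release time is the maximum due date among its sources
        r = max(incoming[s]["due"] for s in sources)
        p = rng.randint(1, longest_proc)
        d = r + p + rng.randint(0, max_slack)
        outgoing.append({"sources": sources, "release": r, "proc": p, "due": d})
    return incoming, outgoing
```

By construction every due date is reachable (due ≥ release + processing time), matching the rationale that clearly unsatisfiable orders would not be accepted.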

In Table 1, the data set column specifies the above five parameters in order (x–x–x–x–x). Results of three approaches are listed: CPLEX applied to the second IP formulation, SWO with GA (SWOGA), and LP with GA (LPGA). Italicized results from CPLEX indicate that CPLEX was unable to obtain optima because memory limits were exceeded. Our test cases can be divided into two categories: loosely constrained and tightly constrained sets.

Table 1 Experimental results

ID  Data set       CPLEX  Time (s)  SWOGA  LPGA  Time (s)
1   2–3–10–11–30     104      1       104   104     2
2   3–2–15–14–35       8     32         8     8     2.4
3   3–3–20–21–40     618  18755      1305   715    15
4   3–4–32–34–50    1828  16500       924   530    34
5   4–5–30–29–46     211  31214       409   312    42
6   4–5–32–33–50       5  51362         5     4    44
7   5–5–30–30–90       1     54.74      1     1    43
8   5–5–40–38–60      10  42252       107    27    40
9   5–5–42–43–55     112  43972       210   111    45
10  5–6–32–35–54       3    180.69      3     3    25
11  5–6–40–43–56       4  45230       200     4    35
12  5–6–56–57–62    6569  23374      2463  1384   123
13  6–6–34–32–60       9  42436         7     6    38
14  7–8–50–60–70      10  42723       110    12    46
15  8–9–90–89–70    1147  41514       113    15    57
16  9–9–93–94–75    3500  38193       458   149   131


Test cases 1, 2, 7, 8 and 10 can be regarded as 'loosely constrained' because the ratios of the number of jobs to the number of machines are smaller, making it easier to find a good schedule with small penalties. The other test cases are 'tightly constrained', and it is difficult to find good schedules for them.

CPLEX can reach optimal or near-optimal solutions for loosely constrained test cases, although for large-scale cases it takes a long time and can run into memory errors. For tightly constrained test cases, a purely IP approach is not suitable and cannot find very good solutions even when allowed to run for long periods. These results were obtained by solving the second formulation of the model; when applied to the first formulation, CPLEX required more than 35 000 s to reach the optimum even for test case 1, the smallest of our test cases.

SWOGA was observed to be very efficient. It obtained optimal solutions for loosely constrained test cases such as 1, 2, 8 and 10, and near-optimal solutions for cases such as 6 and 13. For tightly constrained test cases it often surpassed CPLEX. One of the main advantages of this approach is that it completes in a very short time, usually less than a minute, even for very large test cases.

LPGA is both time efficient and effective. It takes a little longer than SWOGA but much less time than the purely IP approach. More importantly, it frequently achieved the best performance, especially for the relatively larger test cases. For some test cases (6, 13 and 14) it actually reached the optimum: CPLEX gave the same values as the lower bound, although CPLEX could not reach these solutions itself.

We can say that for certain NP-hard problems, such as this crossdocking problem, a hybrid approach like the one developed here, which uses a meta-heuristic with a linear (integer) programming model as a subproblem, can offer good results. It is easy to implement and is, at the same time, an efficient and effective strategy.

Summary

In this work, we studied a central problem in crossdocking

operations which is to eliminate storage and order picking

activities, allowing the crossdock to function as a pure transshipment centre. This requires JIT scheduling, and we found it natural, useful and effective to model the problem as a machine scheduling

problem. Two formulations were given where the second

formulation was found to offer better results in experiments.

As the problem is NP-hard, we designed and implemented two heuristic approaches for large-scale problems. One uses SWO as a local search embedded in a GA framework, and the other solves an IP subproblem as a local search embedded in a GA. The latter is a new technique that can possibly be applied to other problems. Both obtained good solutions in short times. Experiments showed that LPGA is superior in solution quality while SWOGA is faster.

References

1 Gue KR (2001). Crossdocking: just-in-time for distribution. Graduate School of Business & Public Policy, Naval Postgraduate School, Monterey, CA, 8 May 2001.

2 Donaldson H, Johnson EL, Ratliff HD and Zhang M (1998). Schedule-driven cross-docking networks. GIT technical report.

3 Ratliff HD, Vate JV and Zhang M (1999). Network design for load-driven cross-docking systems. GIT technical report.

4 Gue KR (2000). The best shape for a crossdock. INFORMS National Conference, San Antonio, TX, 1 November 2000.

5 Gue KR (1999). The effects of trailer scheduling on the layout of freight terminals. Transp Sci 33(4): 419–428.

6 Sierksma G (1996). Linear and Integer Programming: Theory and Practice, 2nd edn. Marcel Dekker: New York, p 230.

7 Bagchi U, Chang Y and Sullivan R (1987). Minimizing absolute and squared deviations of completion times with different earliness and tardiness penalties and a common due date. Naval Res Logist Q 34: 739–751.

8 Bagchi U, Sullivan R and Chang Y (1986). Minimizing mean absolute deviations of completion times about a common due date. Naval Res Logist Q 33: 227–240.

9 Hall N, Kubiak W and Sethi S (1989). Deviation of completion times about a restrictive common due date. Working Paper 89-15, College of Business, The Ohio State University, Columbus.

10 Baker KR and Chadowitz A (1989). Algorithms for minimizing earliness and tardiness penalties with a common due date. Working Paper 240, Amos Tuck School of Business Administration, Dartmouth College, Hanover, NH.

11 Sundararaghavan P and Ahmed M (1984). Minimizing the sum of absolute lateness in single machine and multimachine scheduling. Naval Res Logist Q 31: 325–333.

12 Szwarc W (1996). The weighted common due date single machine scheduling problem revisited. Comput Opns Res 23: 255–262.

13 Lee CY, Danusaputro SL and Lin CS (1991). Minimizing weighted number of tardy jobs and weighted earliness–tardiness penalties about a common due date. Comput Opns Res 18: 379–389.

14 Yeung WK, Oguz C and Cheng TCE (2001). Single-machine scheduling with a common due window. Comput Opns Res 28: 157–175.

15 Baker KR and Scudder GD (1990). Sequencing with earliness and tardiness penalties: a review. Opns Res 38: 22–36.

16 Garey M, Tarjan R and Wilfong G (1988). One-processor scheduling with symmetric earliness and tardiness penalties. Math Opns Res 13: 330–348.

17 Davis JS and Kanet JJ (1993). Single-machine scheduling with early and tardy completion costs. Naval Res Logist 40: 85–101.

18 Yano CA and Kim Y-D (1991). Algorithms for a class of single-machine weighted tardiness and earliness problems. Eur J Opl Res 52: 167–178.

19 Lee CY and Choi JY (1995). A genetic algorithm for job sequencing problems with distinct due dates and general early–tardy penalty weights. Comput Opns Res 22: 857–869.

20 Ten Kate HA, Wijngaard J and Zijm WHM (1995). Minimizing weighted total earliness, total tardiness and setup costs. Research Paper 95A37, University of Groningen, Research Institute SOM (Systems, Organisations and Management).

21 Mazzini R and Armentano VA (2001). A heuristic for single machine scheduling with early and tardy costs. Eur J Opl Res 128: 129–146.

22 Sridharan V and Zhou Z (1996). A decision theory based scheduling procedure for single machine weighted earliness and tardiness problems. Eur J Opl Res 94: 292–301.

23 Bank J and Werner F (2001). Heuristic algorithms for unrelated parallel machine scheduling with a common due date, release dates, and linear earliness and tardiness penalties. Math Comput Modell 33: 363–383.

24 Sivrikaya-Serifoglu F and Ulusoy G (1999). Parallel machine scheduling with earliness and tardiness penalties. Comput Opns Res 26: 773–787.

25 Heady RB and Zhu Z (1998). Minimizing the sum of job earliness and tardiness in a multimachine system. Int J Prod Res 36: 1619–1632.

26 Radhakrishnan S and Ventura JA (2000). Simulated annealing for parallel machine scheduling with earliness–tardiness penalties and sequence-dependent set-up times. Int J Prod Res 38: 2233–2252.

27 Fang HL, Ross P and Corne D (1993). A promising genetic algorithm approach to job-shop scheduling, rescheduling, and open-shop scheduling problems. Proceedings of the Fifth International Conference on Genetic Algorithms, pp 375–382.

28 Lee CY and Choi JY (1995). A genetic algorithm for job sequencing problems with distinct due dates and general early–tardy penalty weights. Comput Opns Res 22: 857–869.

29 Webster S, Gupta A and Jog PD (1997). A genetic algorithm for scheduling job families on a single machine with arbitrary earliness/tardiness penalties and an unrestricted common due date. Int J Prod Res 36: 2543–2551.

30 Li Y, Lim A and Wang F (2003). A genetic algorithm for machine scheduling problem under shared resource constraints. In: Proceedings of the Congress on Evolutionary Computation 2003, Canberra, pp 1080–1085.

31 Joslin DE and Clements DP (1999). 'Squeaky wheel' optimization. J Artif Intell Res 10: 353–373.

32 Gu ZH, Nemhauser GL and Savelsbergh MWP (1999). Lifted flow cover inequalities for mixed 0–1 integer programs. Math Program 85(3): 439–467.

Received October 2003; accepted May 2004
