Simulated annealing for lot sizing and scheduling on parallel

machines with sequence-dependent set-up times


Eduardo Diaz-Santillan; César O. Malavé
Department of Industrial Engineering
Texas A&M University
College Station, TX 77843-3131

Abstract

Splitting jobs among machines often results in improved customer service and reduced throughput time. Implicit
in determining a schedule is a lot-sizing decision that specifies how jobs are to be split. This research considers
the problem of lot-sizing and scheduling jobs with varying processing times, non-common due dates, and sequence-
dependent set-up times on parallel machines. The objective is to minimize total tardiness. The system is
non-preemptive. Most of the research on parallel machine scheduling does not consider job splitting. This research
proposes a simulated annealing method. Computational experiments indicate that the procedure can be used to solve
problems of practical size.

Keywords: Scheduling, Parallel Machines, Simulated Annealing.

1. Introduction
Make-to-order companies face pressure from customers to quote short lead times. Quoting short lead times is a
critical factor in maintaining a highly competitive market position, and it has become imperative to modify decision
processes in order to stay competitive. We present a simulated annealing (SA) method for supporting scheduling
decisions. This work studies the parallel-machine tardiness problem with job splitting and sequence-dependent
set-up times. The objective is to minimize total tardiness. The scheduling decisions required to process a set of
splittable jobs on parallel machines are: What is the appropriate sequence? What lot sizes are most efficient? Which
machines should be allocated to process which jobs? Lot sizes are expressed in product units, so an integer solution
is necessary. The problem studied here is NP-complete [5]. Using SA, it is possible to answer these questions for
practical-size problems in reasonable computing time.

2. Literature review
In this section, related literature on the parallel machine scheduling problem and on simulated annealing is
reviewed. The parallel machine scheduling problem has not been extensively studied with the objective of minimizing
total tardiness in the presence of both sequence-dependent set-up times and job splitting.
Dearing and Henderson [2] studied the problem of assigning a set of machines to produce different items for each
week of a planning horizon. A given number of machines were available each week and a quarterly demand for
each product was required. The objective function was chosen to express a preference among products and to
smooth the number of set-ups from period to period. The two major operational constraints were the long time
required to change over a loom from one product to another and the limit that available personnel placed on the
total changeover time in a week. They assumed that the products could be divided fractionally and modeled the
problem as a linear program.
Serafini [7] examined the problem of scheduling jobs with different due dates on parallel machines. Each job could be
split arbitrarily, without limitation, and processed independently on several specified machines, and preemption was
allowed; each job had a deadline. The objective was to minimize the maximum weighted tardiness. He solved the problem
by building a network flow model that exploits the particular structure embodied in the constraints, and suggested a
linear programming technique as a second approach.
Radhakrishnan and Ventura [6] studied the parallel machine earliness-tardiness, non-common due date, sequence-
dependent set-up time scheduling problem (PETNDDSP) for jobs with varying processing times. The objective was to
minimize the sum of the absolute deviations of job completion times from their corresponding due dates. Job
splitting was not considered. They used an SA algorithm to solve the problem.
SA may be considered a type of randomized heuristic for solving combinatorial optimization problems [1,3]. SA has
been applied to a wide range of combinatorial optimization problems in diverse areas such as the traveling salesman
problem, vehicle routing, scheduling, layout, graph coloring and graph partitioning, the quadratic assignment
problem, and bin packing [4]. SA incorporates aspects of iterative improvement algorithms. The use of an improvement
algorithm presupposes a current solution, an objective function, a mechanism for generating other solutions, and an
acceptance function. SA starts with an initial solution; a neighbor of this solution is generated by some suitable
mechanism and the objective function is evaluated. If there is a reduction in the objective function, the current
solution is replaced by the generated neighbor. If there is an increase in the objective function, the generated
neighbor is accepted according to a probability function, in an attempt to avoid being trapped in a local optimum.

3. Problem statement
This research studies the problem of lot sizing and scheduling n jobs on m parallel machines with sequence-
dependent set-up times. Each job j has a processing time and a due date. Jobs may be split into lots, and each lot
may be processed on a different machine. There are sequence-dependent set-up times between jobs. For each lot of a
job, a constant set-up time is needed before the first item of the lot is processed on a machine. The completion
time of a job is the time when the last item of its last lot in the sequence has finished processing. A schedule
stipulates the number of lots for each job, the lot sizes, the sequence of the lots on the machines, and the start
and finish time of each lot. The objective is to find a schedule and lot sizing that minimize the total tardiness,
i.e., the sum of the tardiness of all n jobs on the m machines:
Min \sum_{j=1}^{n} T_j    (1)

The tardiness of job j is defined by

T_j = max{0, C_j − d_j}    (2)

The completion time of job j is defined by

C_j = max_{l,k} {C_{lk}}    (3)

The completion time of lot l on machine k is defined by

C_{lk} = C_{ik} + S_{ilk} + P_{lk}·UPT_{lk}    (4)

where:
S_{ilk} is the set-up time incurred when lot l follows lot i on machine k in the sequence,
P_{lk} is the number of parts of job l processed on machine k,
UPT_{lk} is the processing time per part of lot l processed on machine k.
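
To make these definitions concrete, the following is a minimal Python sketch (illustrative only, not the authors' code; the function name and data layout are assumptions) that evaluates equations (1)-(4) for a given assignment of ordered lots to machines.

    # Illustrative sketch (assumed data layout, not the authors' code):
    # evaluate total tardiness, equations (1)-(4), for a given schedule.

    def total_tardiness(machine_sequences, setup_time, unit_time, due_date):
        """machine_sequences: one ordered list of (job, parts) lots per machine.
        setup_time(prev_job, job): set-up hours (prev_job is None for the first lot).
        unit_time[job]: processing time per part; due_date[job]: due date."""
        completion = {}                              # job -> C_j, latest lot finish
        for lots in machine_sequences:               # one lot sequence per machine k
            clock, prev = 0.0, None
            for job, parts in lots:
                clock += setup_time(prev, job)       # S_ilk in equation (4)
                clock += parts * unit_time[job]      # P_lk * UPT_lk in equation (4)
                completion[job] = max(completion.get(job, 0.0), clock)  # equation (3)
                prev = job
        # equations (1)-(2): sum over jobs of max(0, C_j - d_j)
        return sum(max(0.0, c - due_date[j]) for j, c in completion.items())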

4. Solution methodology
The problem of determining the schedule of a set of n jobs on m identical parallel machines to minimize tardiness is
NP-complete [6], which means that the running time of any currently known algorithm for finding an optimal solution
is an exponential function of the problem size. Optimal solutions cannot be obtained even for problems of reasonable
size. Using heuristic methods, such as local search, it is possible to obtain a good solution within a reasonable
computing time. In this research, SA is employed to find a good solution for the problem described earlier.

4.1 Simulated Annealing algorithm

SA is a type of local search algorithm. A simple local search algorithm starts with an initial solution; a new
solution is then generated and the change in the cost function is calculated. If an improvement in the cost function
is found, the current solution is replaced by the new solution. Only improving solutions are accepted. Obviously, a
simple local search algorithm can become trapped at a local minimum far from the global minimum. SA attempts to
avoid becoming trapped in a local optimum by sometimes accepting worse values of the cost function. The acceptance
or rejection of an increase in the cost function is controlled by a sequence of random numbers and a controlled
probability. The probability of accepting an increase ΔC in the cost function is called the acceptance function and
is set to exp(-ΔC / T), where T is a control parameter. If the acceptance function is greater than a random number
drawn from a uniform distribution on (0,1), the worse solution is accepted; otherwise it is rejected. This acceptance
function has a larger probability of accepting small increases in the cost function than large ones. When T is high,
most increases in the cost function will be accepted, but as T decreases, almost all increases will be rejected. SA
therefore starts with a relatively high value of T [1, 3].

4.2 Simulated Annealing algorithm in pseudo-code

Begin
    Select an initial solution s0 ∈ S
    Select an initial temperature T
    Select a temperature reduction function α
    Select the number of iterations for each temperature
    Select the number of temperatures
    Set Temperature Counter = 0
    Repeat
        Set Iteration Counter = 0
        Repeat
            Generate a solution s ∈ S, s a neighbor of s0
            Calculate ΔC = C(s) − C(s0)
            If ΔC < 0
                Then s0 = s
            Else
                Generate a random x uniformly in the range (0, 1)
                If x < exp(-ΔC / T) then s0 = s
            Iteration Counter = Iteration Counter + 1
        Until the number of iterations is reached
        Temperature Counter = Temperature Counter + 1
        T_{i+1} = α(T_i)
    Until the number of temperatures is reached (stopping criterion)
    s0 is the approximation to the optimal solution
End
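
As a concrete illustration, the pseudo-code above can be rendered in a few lines of Python. This is a generic sketch, not the authors' Visual Basic implementation; the functions cost and generate_neighbor are placeholders for the problem-specific pieces described in Sections 4.4 and 4.5.

    import math
    import random

    def simulated_annealing(s0, cost, generate_neighbor,
                            t_init, alpha=0.9, n_temperatures=50, n_iterations=100):
        # Generic SA loop mirroring the pseudo-code above (illustrative sketch).
        current, current_cost = s0, cost(s0)
        best, best_cost = current, current_cost      # keep the best solution seen
        t = t_init
        for _ in range(n_temperatures):              # outer loop: temperature levels
            for _ in range(n_iterations):            # inner loop: moves at this T
                s = generate_neighbor(current)
                delta = cost(s) - current_cost
                # accept improvements; accept worse moves with probability exp(-delta/T)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current, current_cost = s, current_cost + delta
                    if current_cost < best_cost:
                        best, best_cost = current, current_cost
            t *= alpha                               # geometric temperature reduction
        return best, best_cost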

4.3 Algorithm parameters

SA requires that a number of decisions be made about algorithm parameters, both generic and problem-specific. The
generic decisions concern the cooling schedule: the initial temperature, the final temperature, and the temperature
reduction function. The problem-specific decisions include how to construct an initial solution and how to generate
good neighbors of the current solution.

4.3.1 Initial temperature


The initial temperature T should be large enough that exp(-Δx/T) is approximately equal to 1, so that the probability
of accepting a worse solution at the beginning of the SA procedure is nearly one. This initial value of T is set
using information gathered before the annealing process: after a number of iterations in which each variable is
assigned a random value, the best random solution (BRS) and the worst random solution (WRS) are used to evaluate Δx,
where Δx is defined by

Δx = k·(BRS − WRS)    (5)

The initial value of T must be such that exp(-k·(BRS − WRS)/T) is approximately equal to 1.
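
For instance, the initial temperature can be derived from the spread of the random solutions roughly as follows (an illustrative sketch; the constant k and the target acceptance probability of 0.99 are assumptions, not values stated by the authors).

    import math

    def initial_temperature(random_costs, k=1.0, p_accept=0.99):
        # random_costs: objective values of the preliminary random solutions
        # (assumed not all equal, so the spread below is positive).
        brs, wrs = min(random_costs), max(random_costs)   # best and worst random solutions
        delta_x = k * abs(brs - wrs)                      # equation (5), taken in magnitude
        # choose T so that exp(-delta_x / T) = p_accept, i.e. close to 1
        return -delta_x / math.log(p_accept)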
4.3.2 Temperature Tuning
The temperature reduction function is a geometric progression given by

T_{i+1} = α·T_i    (6)

In this work, α = 0.9 is used.

4.3.3 Number of iterations


The number of iterations at each temperature in this work is 100, but the number of operations in each iteration is
a function of the number of machines m and the number of jobs n, so the neighborhood explored is proportional to
the problem size.

4.3.4 Stopping criteria


The stopping criterion is based on the number of temperatures; 50 temperature levels are used, which ensures that at
the last temperatures worse solutions will almost always be rejected: T_final = (0.9^50)·(initial temperature).
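
The cooling-schedule parameters of Sections 4.3.2 to 4.3.4 can be summarized in a few lines; the sketch below simply reproduces the arithmetic of the final temperature (with α = 0.9 and 50 levels, T_final is roughly 0.5% of the initial temperature).

    ALPHA = 0.9            # geometric temperature reduction factor (Section 4.3.2)
    N_ITERATIONS = 100     # iterations evaluated at each temperature (Section 4.3.3)
    N_TEMPERATURES = 50    # stopping criterion: number of temperature levels (Section 4.3.4)

    def final_temperature(t_init):
        # T_final = ALPHA**N_TEMPERATURES * T_init; 0.9**50 is about 0.005,
        # so worse moves are almost always rejected at the last temperatures.
        return (ALPHA ** N_TEMPERATURES) * t_init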

4.4 Initial Solution

The following algorithm generates the initial solution. First, make a priority list of the jobs in non-decreasing
due-date order. Second, assign to each job a random number between 1 and the number of machines; that number
indicates into how many lots the job will be split. The first lot is assigned to the first available machine, and
the next lot is assigned to the next available machine; whenever a lot is waiting to be processed, it is assigned to
the first available machine. Finally, compute the completion time C_{lk} of each lot l on machine k, the completion
time C_j of each job j, the tardiness T_j of each job j, and the total tardiness \sum_{j=1}^{n} T_j.
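
A minimal sketch of this construction, reusing the data layout of the earlier tardiness sketch, could look as follows (illustrative: the near-equal split of a job's demand into lots and the load proxy used for machine availability are assumptions, not details given by the authors).

    import random

    def initial_solution(jobs, demand, unit_time, due_date, n_machines):
        # jobs: list of job ids; demand[j]: number of parts of job j.
        order = sorted(jobs, key=lambda j: due_date[j])     # non-decreasing due dates
        machines = [[] for _ in range(n_machines)]          # one lot sequence per machine
        busy_until = [0.0] * n_machines                     # availability proxy (ignores set-ups)
        for j in order:
            n_lots = random.randint(1, n_machines)          # random number of lots for job j
            base, extra = divmod(demand[j], n_lots)
            for i in range(n_lots):
                parts = base + (1 if i < extra else 0)      # near-equal split (assumption)
                k = min(range(n_machines), key=lambda m: busy_until[m])  # first available machine
                machines[k].append((j, parts))
                busy_until[k] += parts * unit_time[j]
        return machines

The completion times and total tardiness of the resulting schedule can then be evaluated with the total_tardiness sketch given after equation (4).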
4.5 Generation of neighborhood solution

Once an initial solution has been generated, neighbors of this solution must be defined. A current solution is
transformed into another one, called a neighbor, by applying a generation mechanism. For this problem the neighbor
is generated by the following mechanism. First, pick the jobs in the current solution (or initial solution) that are
tardy, i.e., with (C_j − d_j) > 0, and increase the current number of lots of those jobs by one; for example, if
(C_j − d_j) > 0 and the number of lots of a job is 3, increase the number of lots of that job from 3 to 4. Second,
pick the jobs in the current solution that finish before their due date, (C_j − d_j) < 0, and decrease the number of
lots of those jobs by one; for example, if (C_j − d_j) < 0 and the number of lots is 6, decrease the number of lots
of that job from 6 to 5. Third, make a priority list of the jobs in non-decreasing due-date order. The first lot in
the list is assigned to the first available machine, and the next lot is assigned to the next available machine;
continue until all the jobs and their lots have been assigned.
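
Under the same assumptions, the lot-count adjustment at the heart of this mechanism can be sketched as follows (illustrative; the re-assignment of the adjusted lots to machines reuses the due-date ordering and first-available-machine rule of the initial-solution sketch).

    def adjust_lot_counts(n_lots, completion, due_date, n_machines):
        # n_lots[j]: current number of lots of job j; completion[j]: C_j.
        new_lots = dict(n_lots)
        for j, lots in n_lots.items():
            if completion[j] > due_date[j] and lots < n_machines:
                new_lots[j] = lots + 1          # tardy job: split into one more lot
            elif completion[j] < due_date[j] and lots > 1:
                new_lots[j] = lots - 1          # early job: merge into one fewer lot
        return new_lots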

5. Computational experiments and results


The SA initial solution and neighbor generation were coded in Visual Basic 6.0 on a Pentium III 700 MHz PC. SA was
tested on the following problems: 10 and 15 jobs on 5 machines, 15 and 30 jobs on 10 machines, and 50 jobs on 15
machines. The problems were designed with a known solution whose objective function value was zero. The average
processing time of the jobs is 47 hours, with a range from 16 to 118 hours. Each job consists of many identical
items and is identified by job number, type, family, demand, and due date at a specified hour. Each type of job has
a specific processing time per item. There is no set-up time between jobs of the same type and family. There is a
minor set-up time (0.5 hr) between jobs from the same family but of a different type, and jobs of a different type
and family need a major set-up time (2.0 hr) between them. Due to the probabilistic nature of the SA algorithm, it
is necessary to carry out multiple runs on the same instance in order to obtain meaningful results; therefore, all
instances were run ten times. Because SA may move away from a good solution, even one at a global optimum, the best
solution found is stored as an array: the number of lots and lot size for each job and the sequence of the lots. The
number of lots for each job may be between one and the number of machines in each instance. Table 1 shows the
results of SA. The first column indicates the number of machines and the second the number of jobs. The third column
is the average objective value of the initial solution, Avg Zinit, the best value from 100 random iterations. The
fourth column is the average best objective function value, Avg Zbest, the best SA solution for each instance. The
fifth column is the average CPU time in seconds needed for convergence of the SA algorithm. The sixth column is the
average improvement achieved by the SA algorithm, (Zinit − Zbest)/Zinit. The last column shows the average tardiness
per job, Zbest/Jobs, i.e., the fourth column divided by the second column.

Table 1. Results from SA algorithm

Machines   Jobs   Avg Zinit   Avg Zbest   Avg CPU time (s)   % improvement   Avg tardiness (hr)
    5       10       22.5        0.0            40.3             100.0              0.0
    5       15       34.9        2.0            49.9              94.2              0.2
   10       15       26.8        8.6           122.0              66.9              0.6
   10       30       79.1       24.9           192.6              68.3              0.8
   15       50      148.2       81.7           626.2              44.1              1.6

The SA solution to the 5-machine, 10-job problem is optimal. On the 5-machine, 15-job and 10-machine, 15-job
problems, the solutions are nearly optimal. On the 10-machine, 30-job problems, the average tardiness (Zbest / total
jobs) was 48 minutes, which is small compared with the average processing time of 47 hours. On the 15-machine,
50-job problem, the average tardiness was about 1.6 hours; even here it remains relatively small compared with the
average processing time of 47 hours.

6. Conclusions
A particular scheduling problem present in manufacturing systems has been defined and investigated with the aim of
improving company performance: lot-sizing and scheduling jobs with varying processing times, non-common due dates,
and sequence-dependent set-up times on parallel machines, with the objective of minimizing total tardiness. Because
the problem cannot be solved in polynomial time, a local search heuristic, simulated annealing, was used. The SA
output supports scheduling decisions; it consists of the number of lots for each job, the lot sizes, the sequence of
the lots on the machines, and the start and finish time of each lot.

Acknowledgments
Eduardo Diaz would like to thank CONACYT, the Mexican National Council of Science and Technology, and ITESM-CEM,
Tecnológico de Monterrey Campus Estado de México, for the financial assistance provided.

References

1. Eglese, R.W., 1990, Simulated annealing: a tool for operational research, European Journal of Operational
Research, 46, 271-281.

2. Dearing, P.M. and Henderson, R.A., 1984, Assigning looms in a textile weaving operation with changeover
limitations, Production and Inventory Management, 25, 23-31.

3. Kirkpatrick, S., Gelatt, C.D., Jr. and Vecchi, M.P., 1983, Optimization by simulated annealing, Science, 220,
671-680.

4. Koulamas, C., Antony, S.R. and Jaen, R., 1994, A survey of simulated annealing applications to operations
research problems, Omega, 22(1), 41-56.

5. Lenstra, J.K., 1977, Sequencing by Enumeration Methods, Mathematisch Centrum, Amsterdam.

6. Radhakrishnan, S. and Ventura, J.A., 2000, Simulated annealing for parallel machine scheduling with
earliness-tardiness penalties and sequence-dependent set-up times, International Journal of Production Research,
38(10), 2233-2252.

7. Serafini, P., 1996, Scheduling jobs on several machines with the job splitting property, Operations Research,
44(4), 617-628.
