
European Journal of Operational Research 260 (2017) 12–20


Discrete Optimization

Parallel batch scheduling with inclusive processing set restrictions and non-identical capacities to minimize makespan

Shuguang Li a,b,∗

a Key Laboratory of Intelligent Information Processing in Universities of Shandong (Shandong Institute of Business and Technology), Yantai 264005, China
b College of Computer Science and Technology, Shandong Institute of Business and Technology, Yantai 264005, China

∗ Correspondence to: College of Computer Science and Technology, Shandong Institute of Business and Technology, Yantai 264005, China. E-mail address: sgliytu@hotmail.com

Article history: Received 1 March 2016; Accepted 24 November 2016; Available online 1 December 2016

Keywords: Scheduling; Batching machines; Inclusive processing set restrictions; Non-identical capacities; Makespan

Abstract: We consider the problem of scheduling n jobs on m parallel batching machines with inclusive processing set restrictions and non-identical capacities. The machines differ in their functionality but have the same processing speed. The inclusive processing set restriction has the following property: the machines can be linearly ordered such that a higher-indexed machine can process all those jobs that a lower-indexed machine can process. Each job is characterized by a processing time that specifies the minimum time needed to process the job, a release date before which it cannot be processed, and a machine index which is the smallest index among the machines that can process it. Each batching machine has a limited capacity and can process a batch of jobs simultaneously as long as its capacity is not violated. The capacities of the machines are non-identical. The processing time of a batch is the maximum of the processing times of the jobs belonging to it. Jobs in the same batch have a common start time and a common completion time. The goal is to find a non-preemptive schedule so as to minimize makespan (the maximum completion time). When all jobs are released at the same time, we present two fast algorithms with approximation ratios 3 and 9/4, respectively. For the general case with unequal release dates, we develop a polynomial time approximation scheme (PTAS), which is also the first PTAS even for the case with equal release dates and without processing set restrictions.

1. Introduction

Scheduling of jobs on parallel machines is one of the classical problems in combinatorial optimization. It is well known that these problems are usually NP-hard for standard objective functions like minimizing makespan (the maximum completion time), even for two identical machines. There is a rich literature on this class of problems (see, e.g., Chen, Potts, and Woeginger (1998), Leung (2004), Brucker (2007)).

In practice, the machines running in parallel are often non-identical. They may differ in their functionality as well as their processing speeds. Such machines are called unrelated machines. In between identical and unrelated, there is a class of machines that differ in their functionality but have the same processing speed. In such settings, jobs have a restricted set of machines to which they may be assigned, called their processing sets, while the processing time of a job is independent of the machines. Scheduling problems with processing set restrictions have been studied extensively under different names. These include "scheduling typed task systems" (Jaffe, 1980; Jansen, 1994), "multi-purpose machine scheduling" (Brucker, Jurisch, & Krämer, 1997; Vairaktarakis & Cai, 2003), "scheduling with eligibility constraints" (Centeno & Armacost, 2004; Hwang, Chang, & Hong, 2004a; Lee, Leung, & Pinedo, 2011; Li, 2006), "scheduling with processing set restrictions" (Epstein & Levin, 2011; Glass & Kellerer, 2007; Huo & Leung, 2010; Li & Wang, 2010; Ou, Leung, & Li, 2008), and "scheduling with assignment restriction" (Bar-Noy, Freund, & Naor, 2001; Lam, Ting, To, & Wong, 2002). See the survey papers by Leung and Li (2008, 2016).

There are two special cases of processing set restrictions that have received increasing attention recently: (i) processing sets that do not partially overlap and are said to be nested; (ii) processing sets that are not only nested but also include one another, and are called inclusive processing sets. It is the latter case that this paper focuses on.

In the classic scheduling theory (Drozdowski, 2009), each machine can process at most one job at a time. In the past few decades, along another line of research there has been significant interest in scheduling problems concerning batching machines. A batching machine (or batch processing machine, BPM for short in the literature) is a machine that can process a group of jobs as a batch simultaneously. Webster and Baker (1995) distinguished three types of models for scheduling batching machines: the serial batch model, in which the processing time of a batch is equal to the sum of processing times of jobs belonging to it (see also Albers & Brucker (1993)); the parallel batch model, in which the processing time of a batch is the maximum of the processing times of jobs belonging to it (see also Brucker et al. (1998)); and the fixed batch model, in which the processing time of a batch is a constant, independent of the jobs it contains (see also Ahmadi, Ahmadi, Dasu, & Tang (1992)). We refer the readers to Lee, Uzsoy, and Martin-Vega (1992) for the motivation, and to Brucker et al. (1998), Potts and Kovalyov (2000), Mathirajan and Sivakumar (2006), and Mönch, Fowler, Dauzère-Pérès, Mason, and Rose (2011) for surveys of recent results. We focus on the parallel batch model in this paper.


We note, with some surprise, that despite the extensive research on scheduling with either processing set restrictions or batching, there is essentially no work which takes both of them into consideration. In this paper we initiate the study of parallel batch scheduling with inclusive processing set restrictions. The objective function we consider is minimizing makespan. This is perhaps the most popular objective considered in scheduling theory.

The problem we consider can be formally described as follows. Given a set of n jobs J = {1, 2, ..., n} and a set of m batching machines M = {M_1, M_2, ..., M_m}. The machines differ in their functionality but have the same processing speed. They can be linearly ordered such that a higher-indexed machine can process all those jobs that a lower-indexed machine can process. Each job j is characterized by a processing time p_j that specifies the minimum time needed to process the job, a release date r_j before which it cannot be processed, and a machine index a_j which is the smallest index among the machines that can process it. Job j can be processed by machine M_i if and only if i ≥ a_j. The machines in {M_{a_j}, M_{a_j+1}, ..., M_m} are called eligible machines for job j. Machine M_i has a limited capacity K_i and can process a batch of jobs simultaneously as long as the total number of the jobs in the batch does not exceed K_i, i = 1, 2, ..., m. The capacities of the machines are non-identical. The processing time of a batch is the maximum of the processing times of the jobs belonging to it. Jobs in the same batch have a common start time and a common completion time. Thus, scheduling involves grouping the jobs into batches and processing the batches on the machines. The goal is to find a non-preemptive schedule so as to minimize makespan, C_max = max_j C_j, where C_j denotes the completion time of job j in the schedule. Following Brucker (2007) and Graham, Lawler, Lenstra, and Kan (1979), we denote this problem as P|r_j, a_j, p-batch, K_i|C_max.

The problem contains many fundamental scheduling problems as special cases which are strongly NP-hard, hence it is also strongly NP-hard. Therefore, we will design approximation algorithms for this problem. An approximation algorithm can be evaluated by its approximation ratio, which is defined as the worst-case ratio between the value of the solution obtained by the algorithm and the optimal solution value (for minimization problems) on any input instance of the problem. An algorithm with approximation ratio ρ is called a ρ-approximation algorithm. A family of algorithms {A_ε} is called a polynomial time approximation scheme (PTAS) if, for any arbitrarily small positive constant ε, A_ε is a (1 + ε)-approximation algorithm running in time that is polynomial in the input size of the problem instance. If the running time is polynomial in 1/ε as well, then we have a fully polynomial time approximation scheme (FPTAS) (Papadimitriou & Steiglitz, 1998).

Below, we briefly survey the existing results on the related scheduling problems with the objective of minimizing makespan.

When all K_i = 1 and all a_j = 1, P|r_j, a_j, p-batch, K_i|C_max reduces to the well-known classical problem P|r_j|C_max, which is NP-hard even if all r_j = 0 and m = 2 (Garey & Johnson, 1979). The special case of all r_j = 0 is denoted as P||C_max, which is strongly NP-hard (Lawler, Lenstra, Kan, & Shmoys, 1993). For P||C_max, Graham proposed algorithms called list scheduling (LS) and largest processing time first (LPT) in his seminal works (Graham, 1966; 1969). The approximation ratios of LS and LPT are 2 − 1/m and 4/3 − 1/(3m), respectively. The problem P||C_max also admits a PTAS when m is part of the input (Hochbaum & Shmoys, 1987) and an FPTAS when m is a fixed number (Sahni, 1976). For problem P|r_j|C_max, Hall and Shmoys (1989) obtained the first PTAS.

When all K_i = 1, P|r_j, a_j, p-batch, K_i|C_max becomes the problem of scheduling with inclusive processing set restrictions, denoted as P|r_j, a_j|C_max. The special case of it where all jobs are released at the same time is denoted as P|a_j|C_max. Since P|a_j|C_max is a special case of the classical unrelated machines scheduling problem R||C_max, the 2-approximation algorithm (Lenstra, Shmoys, & Tardos, 1990) and the (2 − 1/m)-approximation algorithm (Shchepin & Vakhania, 2005) developed for R||C_max are applicable to P|a_j|C_max. There also exist algorithms for P|a_j|C_max with approximation ratios better than 2 − 1/m: a (2 − 1/(m − 1))-approximation algorithm (Hwang, Chang, & Lee, 2004b; Kafura & Shen, 1977), a 3/2-approximation algorithm (Glass & Kellerer, 2007), a 4/3-approximation algorithm and a PTAS (Ou et al., 2008), and an FPTAS when m is a fixed number (Ji & Cheng, 2008; Li, Li, & Zhang, 2009). The problem P|r_j, a_j|C_max also admits a PTAS when m is part of the input and an FPTAS when m is a fixed number (Li & Wang, 2010).

When all K_i = B (1 < B < n) and all a_j = 1, P|r_j, a_j, p-batch, K_i|C_max is the parallel batch scheduling problem P|r_j, p-batch, B|C_max. This problem is strongly NP-hard even for the single machine case 1|r_j, p-batch, B|C_max (Brucker et al., 1998). Lee and Uzsoy (1999) initiated the study of 1|r_j, p-batch, B|C_max and proposed a number of heuristics, one of which was proved to be a 2-approximation algorithm by Liu and Yu (2000). Deng, Poon, and Zhang (2003) obtained the first PTAS for 1|r_j, p-batch, B|C_max. Lee et al. (1992) proposed a (4/3 − 1/(3m))-approximation algorithm for P|p-batch, B|C_max (all jobs are released at the same time). For P|r_j, p-batch, B|C_max, there is a fast (7/3 − 1/(3m))-approximation algorithm (Liu, Ng, & Cheng, 2014) and a PTAS (Li, Li, & Zhang, 2005).

To the best of our knowledge, the general P|r_j, a_j, p-batch, K_i|C_max has not been studied to date. In this paper, we present two fast algorithms for P|a_j, p-batch, K_i|C_max (all jobs are released at the same time) with approximation ratios 3 and 9/4, respectively. We also develop a PTAS for the general P|r_j, a_j, p-batch, K_i|C_max problem, which is also the first PTAS even for P|p-batch, K_i|C_max (all r_j = 0 and all a_j = 1). We draw upon several ideas from Hall and Shmoys (1989), Li et al. (2005) and Li and Wang (2010), but the combination of parallel batch scheduling and inclusive processing set restrictions makes the analysis quite involved and non-trivial.

Before we proceed, we introduce some frequently used terminologies and notations. Job j is available for machine M_i if M_i is an eligible machine for it and it has been released but not yet assigned to any machine. For machine M_i (i = 1, 2, ..., m), a batch is called a full batch if it contains exactly K_i jobs; otherwise it is called a partial batch. Let p(B_g), S(B_g) and C(B_g) denote the processing time, the start time and the completion time of batch B_g, respectively. Let q(B_g) be the processing time of the smallest job in batch B_g. A batch can be naturally regarded as a set. Therefore, if job j is contained in batch B_g, we simply say j ∈ B_g. Let r(B_g) denote the release date of B_g, which is defined to be r(B_g) = max{r_j | j ∈ B_g}. Let a(B_g) denote the machine index associated with B_g, which is defined to be a(B_g) = max{a_j | j ∈ B_g}. The machines in {M_{a(B_g)}, M_{a(B_g)+1}, ..., M_m} are called eligible machines for batch B_g.

Let J_i = {j ∈ J | a_j = i}, i = 1, 2, ..., m. Then J = ∪_{i=1}^{m} J_i. The jobs in J_i can be processed on any of the machines M_i, M_{i+1}, ..., M_m.

Let p_max = max_j p_j and r_max = max_j r_j. Throughout this paper, denote by OPT the makespan of an optimal schedule.

The remainder of this paper is organized as follows. In Section 2, we present approximation algorithms with constant approximation ratios for the case of equal release dates. In Section 3, we develop a PTAS for P|r_j, a_j, p-batch, K_i|C_max. We conclude this paper in Section 4.

2. Algorithms for the case of equal release dates

In this section we study P|a_j, p-batch, K_i|C_max, i.e., the problem of minimizing makespan on parallel batching machines with inclusive processing set restrictions, where the machines have non-identical capacities but all the jobs are released at time zero.

Recall that the BLPT rule (Lee et al., 1992) works as follows. Given a capacity and a set of jobs, the rule sorts the jobs in non-increasing order of processing times, and then batches them by successively putting as many remaining jobs with the largest processing times as possible into one batch (without violating the given capacity).
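For concreteness, the BLPT rule can be sketched in a few lines of Python (an illustrative sketch, not taken from the paper; jobs are represented only by their processing times, and this helper is reused in the sketches below):

    def blpt(jobs, capacity):
        """BLPT rule: sort processing times in non-increasing order and fill
        batches of at most `capacity` jobs each, largest jobs first."""
        ordered = sorted(jobs, reverse=True)
        # Each batch is a list of processing times; the batch processing time
        # is the maximum, i.e. the first element of the list.
        return [ordered[g:g + capacity] for g in range(0, len(ordered), capacity)]

For example, blpt([4, 3, 2, 1], 3) returns [[4, 3, 2], [1]], whose batch processing times are 4 and 1.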
We first consider a special case of P|a_j, p-batch, K_i|C_max where all K_i = B, denoted as P|a_j, p-batch, B|C_max. For this special case, we now present an algorithm which is called BLPT-LIF-LS (BLPT-Largest Index First-List Scheduling). The algorithm works as follows. For i = m, m − 1, ..., 1 (this ordering is used crucially), it first applies the BLPT rule to J_i to obtain a sequence of batches B_{i,1}, B_{i,2}, ..., B_{i,h_i}, and then it assigns the batches B_{i,1}, B_{i,2}, ..., B_{i,h_i} by the List Scheduling algorithm (Graham, 1966): whenever an eligible machine is idle (ties broken arbitrarily), choose the first remaining batch in the sequence to start processing on that machine. It is not difficult to check that this algorithm is implementable in O(n log mn) time.
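A compact Python sketch of BLPT-LIF-LS under the same assumptions as above (jobs are (p_j, a_j) pairs, all capacities equal B, all release dates zero; the blpt helper is the one sketched earlier):

    def blpt_lif_ls(jobs, m, B):
        """BLPT-LIF-LS for identical capacities: handle machine indices from m
        down to 1, batch J_i by BLPT, then list-schedule the batches on the
        eligible machines M_i, ..., M_m."""
        load = [0.0] * (m + 1)                  # load[i] = current load of M_i (index 0 unused)
        schedule = {i: [] for i in range(1, m + 1)}
        for i in range(m, 0, -1):               # largest machine index first
            for batch in blpt([p for (p, a) in jobs if a == i], B):
                # list scheduling: place the batch on the least loaded eligible machine
                l = min(range(i, m + 1), key=lambda h: load[h])
                schedule[l].append(batch)
                load[l] += batch[0]             # batch processing time = largest job in it
        return schedule, max(load[1:])          # the schedule and its makespan

Since all release dates are zero, list scheduling here simply places each batch, in the BLPT order, on the currently least loaded eligible machine.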
In order to analyze the approximation ratio of BLPT-LIF-LS, we need the following observation. The batches B_{i,1}, B_{i,2}, ..., B_{i,h_i−1} are full batches and q(B_{i,g}) ≥ p(B_{i,g+1}) (g = 1, 2, ..., h_i − 1). We have the following inequality, which first appeared in Deng, Feng, Li, and Shi (2005), where Deng et al. presented the first PTAS for the problem of minimizing total completion time on a single batching machine with release dates:

Σ_{g=1}^{h_i−1} (p(B_{i,g}) − q(B_{i,g})) + p(B_{i,h_i}) ≤ p(B_{i,1}).     (1)

Let B_i = {B_{i,g} : g = 1, 2, ..., h_i}. We modify B_i to define a new set of batches B̃_i = {B̃_{i,g} : g = 1, 2, ..., h_i}, where B̃_{i,g} is obtained by letting all processing times of the jobs in B_{i,g} be equal to q(B_{i,g}) for g = 1, 2, ..., h_i − 1, and B̃_{i,h_i} is obtained by letting all processing times of the jobs in B_{i,h_i} be equal to zero. Each original job j ∈ J_i is thus modified to a new job whose processing time p̃_j is set to be p̃_j = q(B_{i,g}) if j ∈ B_{i,g} for g ∈ {1, 2, ..., h_i − 1}, and p̃_j = 0 if j ∈ B_{i,h_i}. It should be stressed that all the machine indices associated with the jobs remain unchanged. The set J̃_i denotes the set of modified jobs with machine index i. By the inequality (1), we get the following lemma.

Lemma 1. Σ_{g=1}^{h_i} p(B_{i,g}) ≤ Σ_{g=1}^{h_i} p(B̃_{i,g}) + p_max.

In fact, for any subset of B_i and its corresponding subset of B̃_i, Lemma 1 always holds.
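The modification of the batches can be made concrete with a small Python sketch (illustrative only; it builds the modified batches from the BLPT batches of the earlier sketch and checks the inequality of Lemma 1 on one instance):

    def modify_batches(batches):
        """Build the modified batches: in every batch except the last, each job
        gets the processing time q(B_{i,g}) of the smallest job in its batch;
        in the last batch every job gets processing time zero."""
        return [[min(b)] * len(b) for b in batches[:-1]] + [[0] * len(batches[-1])]

    batches = blpt([9, 7, 6, 4, 2], 2)            # [[9, 7], [6, 4], [2]]
    modified = modify_batches(batches)            # [[7, 7], [4, 4], [0]]
    # Lemma 1 with p_max = 9: total batch time 17 <= 11 + 9
    assert sum(max(b) for b in batches) <= sum(max(b) for b in modified) + 9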
The load on a machine is defined to be the total processing time of the batches scheduled on this machine. We are now ready to prove the following theorem.

Theorem 1. BLPT-LIF-LS is a (3 − 1/m)-approximation algorithm for P|a_j, p-batch, B|C_max that runs in O(n log mn) time.

Proof. Denote by SOL the makespan of the schedule generated by BLPT-LIF-LS. Assume, without loss of generality, that after all the jobs have been scheduled machine M_l has the highest load. Let B_x denote the last batch to be assigned to machine M_l. It is obvious that a(B_x) ≤ l. Let μ be the number of machines which can process B_x, μ = m − a(B_x) + 1.

By the rule of BLPT-LIF-LS, when batch B_x is assigned to machine M_l, M_l is the least loaded machine among the last μ machines M_{a(B_x)}, M_{a(B_x)+1}, ..., M_m, with a load exactly SOL − p(B_x). At this point, the batches in ∪_{i=1}^{a(B_x)−1} ∪_{g=1}^{h_i} B_{i,g} have not been assigned yet. Hence we get

Σ_{i=a(B_x)}^{m} Σ_{g=1}^{h_i} p(B_{i,g}) ≥ μ · (SOL − p(B_x)) + p(B_x).     (2)

By Lemma 1, we have:

Σ_{i=a(B_x)}^{m} Σ_{g=1}^{h_i} p(B_{i,g}) ≤ Σ_{i=a(B_x)}^{m} Σ_{g=1}^{h_i} p(B̃_{i,g}) + μ · p_max.     (3)

We claim that the following inequality is true:

OPT ≥ (Σ_{i=a(B_x)}^{m} Σ_{g=1}^{h_i} p(B̃_{i,g})) / μ.     (4)

Suppose that it is not the case and we have Σ_{i=a(B_x)}^{m} Σ_{g=1}^{h_i} p(B̃_{i,g}) > μ · OPT. Each batch in ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h_i} B̃_{i,g} with a positive processing time is a full batch and the jobs in it have the same processing time. Hence we get

Σ_{j ∈ ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h_i} B_{i,g}} p_j > Σ_{j ∈ ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h_i} B̃_{i,g}} p̃_j > μ · B · OPT.

All the jobs contained in ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h_i} B_{i,g} have to be processed on the machines M_{a(B_x)}, M_{a(B_x)+1}, ..., M_m in any feasible schedule. The above inequality thus implies that any feasible schedule cannot complete all of the jobs in ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h_i} B_{i,g} by time OPT, a contradiction.

Combining the inequalities (2), (3) and (4), we get:

OPT + p_max ≥ (SOL − p(B_x)) + p(B_x)/μ.

Since p(B_x) ≤ p_max ≤ OPT and μ ≤ m, we get:

SOL ≤ (3 − 1/μ) · OPT ≤ (3 − 1/m) · OPT. □

Next, we modify BLPT-LIF-LS to get an algorithm for P|a_j, p-batch, K_i|C_max. Note that although some jobs in J_i (i = m − 1, m − 2, ..., 1) may be assigned to the machines with indices larger than i, BLPT-LIF-LS batches all the jobs in J_i (i = m, m − 1, ..., 1) by the BLPT rule at the beginning of the (m − i + 1)-th iteration, once and for all. As a result, all the batches constructed by BLPT-LIF-LS are homogeneous, in the sense that each of them consists of the jobs associated with the same machine index. When we turn to the non-identical capacities case, however, the batches may not necessarily be homogeneous. This difficulty prevents us from determining the batch structure of the jobs a priori. We have to construct the batches dynamically, one by one during the iterations.

The algorithm, called LIF-LS, is based on the following idea. All the jobs are sorted in non-increasing order of their machine indices, and within that in non-increasing order of their processing times. The jobs will be batched and assigned in that order. Batches have to be generated greedily one by one. Whenever an eligible machine with respect to the first unassigned job is idle, a new batch to be assigned to that machine is generated which contains as many as possible of the first unassigned eligible jobs. After the batch is generated and assigned to that machine, we immediately check if all the batches assigned to that machine abide by the BLPT rule. If not, we perform an additional adjustment which simply applies the BLPT rule to all the jobs assigned to that machine. We repeatedly batch the jobs and assign the batches in this way until all the jobs are scheduled.

In LIF-LS, B_i represents the ordered list of the batches, in non-increasing order of processing times, that have been processed on machine M_i; List_i represents the ordered list of the jobs, in non-increasing order of processing times, that have been assigned to machine M_i; and Load_i represents the total processing time of the batches processed on machine M_i, i = m, m − 1, ..., 1. During the run of the i-th iteration, List represents the ordered list of the jobs, in non-increasing order of processing times, waiting to be assigned to machines M_{m−i+1}, ..., M_m, i = 1, 2, ..., m.

Algorithm LIF-LS:

Step 1. Initially, set B_i = ∅, List_i = ∅, Load_i = 0, i = 1, 2, ..., m.
Step 2. For i = m, m − 1, ..., 1 (this ordering is used crucially), do the following:
  (i) Place the jobs in J_i into List in non-increasing order of processing times.
  (ii) Select the currently least loaded eligible machine M_l, i.e., l = arg min_{i ≤ h ≤ m} {Load_h}, ties broken arbitrarily. Let B_l = {B_{l,1}, B_{l,2}, ..., B_{l,z−1}} denote the ordered list of the batches, in non-increasing order of processing times, that have been processed on machine M_l. Process a new batch B_{l,z} on M_l which accommodates as many currently largest jobs (i.e., the first jobs) in List as possible (without violating the capacity K_l of M_l). Merge these newly assigned jobs into List_l and delete them from List. Update Load_l = Load_l + p(B_{l,z}).
  (iii) If l > i, then perform an adjustment: apply the BLPT rule to List_l. Let B_l be the ordered list of the obtained batches. Update Load_l to be equal to the total processing time of the batches in B_l.
  (iv) Repeat Step 2(ii) and Step 2(iii) until List is empty.

Example 1. Here is an example to illustrate the execution of LIF-LS. Take m = 2 machines and n = 10 jobs. Suppose that machines M_1 and M_2 have capacities K_1 = 2 and K_2 = 3, respectively. Jobs j_1, j_2, j_3, j_4 are associated with the processing set {M_2} and have processing times 1, 2, 3, 4, respectively. Jobs j_5, j_6, j_7, j_8, j_9, j_{10} are associated with the processing set {M_1, M_2} and have processing times 1, 2, 3, 4, 5, 6, respectively. First, LIF-LS generates and assigns two batches {j_2, j_3, j_4} and {j_1} to M_2. Then, it generates and assigns one batch {j_9, j_{10}} to M_1. At this point, we have Load_1 = 6, Load_2 = 5. Since Load_2 < Load_1, LIF-LS generates and assigns one batch {j_6, j_7, j_8} to M_2. Since the batches processed on M_2 ({j_2, j_3, j_4}, {j_1} and {j_6, j_7, j_8}) do not abide by the BLPT rule, LIF-LS applies the BLPT rule to the jobs in these batches. After that, the batches processed on M_2 are {j_3, j_4, j_8}, {j_6, j_2, j_7} and {j_1}. After updating Load_2, we have Load_2 = 8. Since Load_1 < Load_2, LIF-LS generates and assigns one batch {j_5} to M_1. No need to perform the additional adjustment. All jobs are scheduled and LIF-LS terminates.
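The execution of LIF-LS on Example 1 can be reproduced with the following Python sketch (illustrative only; jobs are (p_j, a_j) pairs, capacities a dict {i: K_i}, and the blpt helper is the one sketched above). Tie-breaking may produce different but equivalent batch compositions:

    def lif_ls(jobs, capacities):
        """LIF-LS for non-identical capacities and equal release dates."""
        m = len(capacities)
        assigned = {i: [] for i in range(1, m + 1)}   # List_i: processing times on M_i
        load = {i: 0 for i in range(1, m + 1)}        # Load_i
        for i in range(m, 0, -1):                     # largest machine index first
            pending = sorted((p for (p, a) in jobs if a == i), reverse=True)
            while pending:
                l = min(range(i, m + 1), key=lambda h: load[h])   # least loaded eligible machine
                batch, pending = pending[:capacities[l]], pending[capacities[l]:]
                assigned[l].extend(batch)
                load[l] += batch[0]                   # batch processing time = largest job
                if l > i:                             # adjustment: re-apply BLPT to List_l
                    load[l] = sum(b[0] for b in blpt(assigned[l], capacities[l]))
        return load

    jobs = [(1, 2), (2, 2), (3, 2), (4, 2), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)]
    print(lif_ls(jobs, {1: 2, 2: 3}))                 # {1: 7, 2: 8}, matching Example 1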
Step 1 of LIF-LS requires O(m) time. In Step 2(i), sorting the jobs in J_i needs O(|J_i| log |J_i|) time. In Step 2(ii), selecting the least loaded eligible machine takes O(log m) time, opening a new batch takes O(n) time, and merging the newly assigned jobs into List_l takes O(n) time. In Step 2(iii), the adjustment takes O(n) time. For each job, Steps 2(ii) and 2(iii) will be performed at most once. Hence the overall running time of LIF-LS is O(n log m + n^2).

In order to analyze the approximation ratio of this algorithm, we re-use some notations defined at the beginning of this section, such as B_i and B̃_i, with subtly different meanings. Let B_{i,1}, B_{i,2}, ..., B_{i,h_i} be the sequence of the batches scheduled on machine M_i in non-increasing order of processing times, i = 1, 2, ..., m. The batches B_{i,1}, B_{i,2}, ..., B_{i,h_i−1} are full batches and q(B_{i,g}) ≥ p(B_{i,g+1}) (g = 1, 2, ..., h_i − 1). Let B_i = {B_{i,g} : g = 1, 2, ..., h_i}. We modify each B_i to obtain B̃_i = {B̃_{i,g} : g = 1, 2, ..., h_i} in the same way as in the context of identical capacities. Each original job j is now modified to a new job with the same machine index (and with the same release date in the next section) whose processing time p̃_j is set to be p̃_j = q(B_{i,g}) if j ∈ B_{i,g} for g ∈ {1, 2, ..., h_i − 1}, and p̃_j = 0 if j ∈ B_{i,h_i}. A crucial observation is that Lemma 1 still holds in the context of non-identical capacities, although a batch constructed in LIF-LS may contain jobs with different machine indices.

Theorem 2. LIF-LS is a 3-approximation algorithm for P|a_j, p-batch, K_i|C_max that runs in O(n log m + n^2) time.

Proof. Denote by SOL the makespan of the schedule generated by LIF-LS. Assume that after all the jobs have been scheduled machine M_l has the highest load. Let B_x denote the last batch to be assigned to machine M_l (before the adjustment has been done). It is obvious that a(B_x) ≤ l. Let μ be the number of machines which can process B_x, μ = m − a(B_x) + 1. Let t be the start time of batch B_x. We now prove that t ≤ 2OPT by contradiction.

Suppose that t > 2OPT. For i = a(B_x), a(B_x) + 1, ..., m, let B_{i,1}, B_{i,2}, ..., B_{i,h′_i} be the sequence of the batches which are started (but not necessarily completed) before time t on machine M_i, in non-increasing order of processing times. By the rule of LIF-LS, when batch B_x is assigned to machine M_l, M_l is the least loaded machine among the last μ machines M_{a(B_x)}, M_{a(B_x)+1}, ..., M_m. At this point, all the batches except for those in ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h′_i} B_{i,g} ∪ {B_x} have not been assigned yet. Hence for i = a(B_x), a(B_x) + 1, ..., m, we get

Σ_{g=1}^{h′_i} p(B_{i,g}) ≥ t > 2OPT.

By Lemma 1, for i = a(B_x), a(B_x) + 1, ..., m we have

Σ_{g=1}^{h′_i} p(B̃_{i,g}) ≥ Σ_{g=1}^{h′_i} p(B_{i,g}) − p_max > 2OPT − OPT = OPT.

Each batch in ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h′_i} B̃_{i,g} with a positive processing time is a full batch and the jobs in it have the same processing time. Hence we get

Σ_{j ∈ ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h′_i} B_{i,g}} p_j > Σ_{j ∈ ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h′_i} B̃_{i,g}} p̃_j > Σ_{i=a(B_x)}^{m} K_i · OPT.

All of the jobs contained in ∪_{i=a(B_x)}^{m} ∪_{g=1}^{h′_i} B_{i,g} have to be processed on the machines M_{a(B_x)}, M_{a(B_x)+1}, ..., M_m in any feasible schedule. Hence, the above inequality implies that any optimal schedule cannot complete all of the jobs by time OPT. This contradiction completes the proof of t ≤ 2OPT.

Since SOL ≤ t + p(B_x) (the adjustment may make SOL better), we get SOL ≤ 3OPT. □

We are going to present the second algorithm for P|a_j, p-batch, K_i|C_max. We assume here that all job processing times are integer valued. The framework is similar to that used in Ou et al. (2008) to get a 4/3-approximation algorithm for P|a_j|C_max (the special case of P|a_j, p-batch, K_i|C_max where all K_i = 1). It works as follows. We first compute a schedule using the algorithm LIF-LS. Let UB be the makespan of this schedule. Let LB = UB/3. Clearly, we have LB ≤ OPT ≤ UB. We then conduct a binary search in the interval [LB, UB]. For each value T selected in the binary search, we attempt to construct a schedule with makespan at most 9T/4 by the SIF-LPT procedure (to be described later). If we are successful, then we search the lower half of the interval; otherwise we search the upper half.

We first perform some preprocessing in O(n log n) time. For i = 1, 2, ..., m, we sort all the jobs in J_i in non-increasing order of processing times, and let J_i denote the resulting ordered set.

The procedure SIF-LPT takes as input an integer parameter T. According to T, it classifies jobs and batches as huge, median, and tiny. A job j is huge if p_j > T/2, or median if T/4 < p_j ≤ T/2, or tiny if p_j ≤ T/4. Similarly, a batch B_g is huge if p(B_g) > T/2, or median if T/4 < p(B_g) ≤ T/2, or tiny if p(B_g) ≤ T/4. Let HJ, MJ and TJ denote the sets of huge jobs, median jobs and tiny jobs, respectively.

The procedure SIF-LPT is based on the following idea. For i = 1, 2, ..., m (this ordering is used crucially), it maintains a set Q_i dynamically which represents the ordered set of the currently unassigned jobs, in non-increasing order of processing times, that can be assigned to machine M_i. If the first job in Q_i is a huge job, SIF-LPT will generate a huge batch which consists of as many as possible of the first (i.e., the largest) jobs in Q_i (without violating the capacity K_i of M_i), and assign this huge batch to M_i. Then, SIF-LPT continues to greedily generate the median and tiny batches and assign them to M_i, each of which consists of as many as possible of the largest jobs in Q_i \ HJ, such that the total processing time of the batches scheduled on machine M_i does not exceed 9T/4. We use Load_i to store the total processing time of the batches scheduled on M_i (i.e., its current load).

SIF-LPT (Smallest Index First-Largest Processing Time) procedure for parameter T:

Step 1. Let Q_0 ← ∅, Load_i ← 0 (i = 1, 2, ..., m).
Step 2. For i = 1, 2, ..., m (this ordering is used crucially), do the following:
  (i) Merge Q_{i−1} into J_i to get Q_i.
  (ii) If the first job in Q_i is a huge job, then generate a huge batch which consists of as many as possible of the first jobs in Q_i, and assign it to M_i. Set Load_i to be equal to the processing time of this batch.
  (iii) If min{p_j | j ∈ Q_i \ HJ} ≤ 9T/4 − Load_i, then let y = arg max_j {p_j | j ∈ Q_i \ HJ and p_j ≤ 9T/4 − Load_i}. Select the first y when there is a tie. Open a new batch with processing time p_y, put the K_i (or as many as possible) largest jobs in Q_i \ HJ with processing times no greater than p_y into this batch, and assign this batch to machine M_i. Denote by D the set of these newly assigned jobs. Let Load_i ← Load_i + p_y, Q_i ← Q_i \ D, and repeat Step 2(iii).

In Step 2(i), we create an ordered set Q_i, which consists of the jobs with machine index i and the unassigned jobs left over from the previous iteration, sorted in non-increasing order of their processing times. At the end of the execution of this procedure, if Q_m = ∅, then the procedure has generated a feasible schedule with makespan at most 9T/4. We now show that if the optimal makespan is at most T, the procedure will always generate such a schedule.
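The following Python sketch mirrors the SIF-LPT procedure under the assumptions used in the earlier sketches (jobs as (p_j, a_j) pairs, capacities as a dict; job identities are dropped and only processing times are tracked). It returns None exactly when some job is left unassigned, i.e. when Q_m is non-empty at the end:

    def sif_lpt(jobs, capacities, T):
        """Try to build a schedule with makespan at most 9T/4; smallest machine
        index first, largest processing times first."""
        m = len(capacities)
        schedule = {i: [] for i in range(1, m + 1)}
        carry = []                                    # unassigned jobs handed to the next machine
        for i in range(1, m + 1):
            Q = sorted(carry + [p for (p, a) in jobs if a == i], reverse=True)
            load = 0.0
            if Q and Q[0] > T / 2:                    # huge job in front: open one huge batch
                batch, Q = Q[:capacities[i]], Q[capacities[i]:]
                schedule[i].append(batch)
                load = batch[0]
            while True:                               # median/tiny batches, greedily
                fit = [p for p in Q if p <= T / 2 and p <= 9 * T / 4 - load]
                if not fit:
                    break
                p_y = fit[0]                          # largest non-huge job that still fits
                batch = [p for p in Q if p <= p_y][:capacities[i]]
                schedule[i].append(batch)
                load += p_y
                for p in batch:                       # remove the batched jobs from Q
                    Q.remove(p)
            carry = Q                                 # leftovers move on to M_{i+1}
        return schedule if not carry else None

Lemma 2 below guarantees that whenever OPT ≤ T this construction succeeds.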
Once again, we will re-use the notations previously used, such as B_i and B̃_i. This time we use them in a similar way to that used in the analysis of LIF-LS. However, it should be stressed that the SIF-LPT procedure uses a smallest-index-first ordering to schedule the machines one by one. Lemma 1 still holds here, although a batch constructed in the SIF-LPT procedure may contain jobs with different machine indices.

Lemma 2. If OPT ≤ T, then the SIF-LPT procedure will generate a feasible schedule with makespan at most 9T/4.

Proof. Let σ∗ be an optimal schedule with makespan OPT. Let σ be the schedule generated by the SIF-LPT procedure. Let B_i = {B_{i,g} : g = 1, 2, ..., h_i} be the ordered set of the batches processed on machine M_i in σ, sorted in non-increasing order of processing times.

In σ∗, at most one huge batch and one median batch, or at most three median batches, can be processed on each machine. In σ, SIF-LPT definitely assigns one huge batch on a machine whenever possible, and after that this machine still has enough room to process at least two median batches, by the choice of the threshold 9T/4. Generating batches greedily, SIF-LPT attempts to assign more (precisely, not fewer) processing times of huge jobs and median jobs on the smaller-indexed machines than σ∗ does. This intuition can be easily proved by induction on the machine indices. Hence, all huge jobs and median jobs can be assigned by the SIF-LPT procedure. Equivalently, we claim that Σ_{j ∈ (HJ∪MJ) ∩ ∪_{l=i}^{m} B_l} p_j is a lower bound on the total processing time of the huge jobs and median jobs scheduled on the machines M_i, M_{i+1}, ..., M_m in an optimal schedule, for any i = 1, 2, ..., m.

Suppose that OPT ≤ T but the SIF-LPT procedure fails to schedule all the jobs on the m machines, i.e., Q_m ≠ ∅ when the procedure terminates. Let j be an unassigned job left over at the end of the execution of the procedure. Job j must be a tiny job, because we have proved that all huge jobs and median jobs can be assigned by the SIF-LPT procedure. It follows that each of the machines M_{a_j}, M_{a_j+1}, ..., M_m has a load greater than 2T at the point when job j is considered, and hence also at the end of the procedure.

Find the largest index i_max < a_j such that in σ machine M_{i_max} has a load no more than 2T. (It is possible that i_max = 0, which means that all the m machines have loads greater than 2T.) Therefore, in σ the machines M_{i_max+1}, M_{i_max+2}, ..., M_m have loads greater than 2T. There remains room on machine M_{i_max} for any tiny job. Hence, the procedure scheduled no tiny job from ∪_{i=1}^{i_max} J_i onto machines M_{i_max+1}, M_{i_max+2}, ..., M_m.

By Lemma 1, for i = i_max + 1, ..., m we have

Σ_{g=1}^{h_i} p(B̃_{i,g}) ≥ Σ_{g=1}^{h_i} p(B_{i,g}) − p_max > 2T − T = T.

Each batch in ∪_{i=i_max+1}^{m} ∪_{g=1}^{h_i} B̃_{i,g} with a positive processing time is a full batch and the jobs in it have the same processing time. Hence we get

Σ_{j ∈ ∪_{i=i_max+1}^{m} ∪_{g=1}^{h_i} B_{i,g}} p_j > Σ_{j ∈ ∪_{i=i_max+1}^{m} ∪_{g=1}^{h_i} B̃_{i,g}} p̃_j > Σ_{i=i_max+1}^{m} K_i · T.

All the tiny jobs contained in ∪_{i=i_max+1}^{m} ∪_{g=1}^{h_i} B_{i,g} have to be processed on the machines M_{i_max+1}, M_{i_max+2}, ..., M_m in any feasible schedule. On the other hand, we have proved at the beginning that Σ_{j ∈ (HJ∪MJ) ∩ (∪_{i=i_max+1}^{m} ∪_{g=1}^{h_i} B_{i,g})} p_j is a lower bound on the total processing time of the huge and median jobs scheduled on the machines M_{i_max+1}, M_{i_max+2}, ..., M_m in an optimal schedule. Hence, the above inequality implies that any optimal schedule cannot complete all of the jobs by time T, a contradiction. □

Since the algorithm conducts a binary search in the interval [LB, UB], the number of iterations could be O(log(Σ_{j=1}^{n} p_j)). Hence we get the following theorem.

Theorem 3. There is a 9/4-approximation algorithm for P|a_j, p-batch, K_i|C_max that runs in O(mn log p_sum) time, where p_sum = Σ_{j=1}^{n} p_j.

Note that although the running time of each iteration is strongly polynomial, the overall time complexity is not strongly polynomial. In fact, a strongly polynomial time approximation algorithm can be obtained by modifying the algorithm slightly, using a technique described in Ou et al. (2008). This technique works as follows. Instead of searching for the exact integer value in the interval [LB, UB] using binary search, we divide the interval [LB, UB] into h subintervals: [LB, ξLB], (ξLB, ξ^2 LB], ..., (ξ^{h−2} LB, ξ^{h−1} LB], (ξ^{h−1} LB, UB], where ξ = 1 + 4ε/9, h = ⌈log_ξ 3⌉, and ε is a given positive constant which can be set arbitrarily close to zero. Note that UB ≤ ξ^h LB. It is easy to check that O(h) ≤ O(1/ε). We use binary search to search these subintervals. For each subinterval (C^(u−1), C^(u)] (or [C^(u−1), C^(u)]) involved in the binary search, we apply the SIF-LPT procedure with T = C^(u). This modified algorithm is a (9/4 + ε)-approximation algorithm. The binary search takes O(log h) ≤ O(log(1/ε)) iterations. Hence we get the following theorem.

Theorem 4. There is a (9/4 + ε)-approximation algorithm for P|a_j, p-batch, K_i|C_max that runs in O(n log n + mn log(1/ε)) time, where ε is a positive constant which can be set arbitrarily close to zero.
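A minimal Python sketch of this geometric binary search, reusing the lif_ls and sif_lpt sketches above (illustrative; as in the analysis, it treats the feasibility test as succeeding for every endpoint at least as large as the smallest successful one):

    import math

    def nine_fourth_eps_schedule(jobs, capacities, eps=0.1):
        """Binary search over the right endpoints of the geometric subintervals
        of [LB, UB]; each probe calls SIF-LPT with T equal to that endpoint."""
        UB = max(lif_ls(jobs, capacities).values())   # makespan of the LIF-LS schedule
        LB = UB / 3.0                                 # LIF-LS is a 3-approximation
        xi = 1.0 + 4.0 * eps / 9.0
        h = math.ceil(math.log(3.0) / math.log(xi))   # number of subintervals
        C = [min(LB * xi ** u, UB) for u in range(h + 1)]
        best = sif_lpt(jobs, capacities, C[h])        # T = UB always succeeds since OPT <= UB
        lo, hi = 1, h
        while lo <= hi:
            mid = (lo + hi) // 2
            sched = sif_lpt(jobs, capacities, C[mid])
            if sched is not None:
                best, hi = sched, mid - 1             # feasible: try smaller endpoints
            else:
                lo = mid + 1
        return best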
3. A PTAS for the case of unequal release dates

In this section we study P|r_j, a_j, p-batch, K_i|C_max, i.e., the problem of minimizing makespan on parallel batching machines with inclusive processing set restrictions, non-identical machine capacities and non-identical job release dates. We will present a polynomial time approximation scheme for this problem. Let ε be an arbitrarily small positive constant. For simplicity, we shall assume that 1/ε is an integer and 0 < ε < 1.

We begin by providing lower and upper bounds on the optimal value OPT. To do so, we first ignore all job release dates and schedule the jobs using LIF-LS, the 3-approximation algorithm for P|a_j, p-batch, K_i|C_max developed in the preceding section; then we increase the start time of all jobs by r_max. Clearly, we will get a feasible schedule with makespan at most 4OPT. Let UB be the makespan of this schedule. Then UB/4 is a lower bound on OPT. Note that r_max and p_max are also lower bounds on OPT. Let LB = max{r_max, p_max, UB/4}. We have

LB ≤ OPT ≤ 4LB.     (5)

We will perform several transformations to simplify the input instance. Each transformation may potentially increase the objective value by O(ε)·OPT or by a factor of 1 + O(ε). If a transformation potentially increases the objective value by a factor of 1 + O(ε), we say, as in Afrati et al. (1999), that it produces 1 + O(ε) loss. If it potentially increases the objective value by O(ε)·OPT, we say that it produces O(ε)·OPT loss.

Let δ = ε·LB. Following Hall and Shmoys (1989), we round each release date down to the nearest multiple of δ. Since r_max ≤ (1/ε)δ, the number of different release dates is now bounded by β = 1/ε + 1. (The quantity 1/ε + 1 is used frequently in the development of the PTAS, so it is convenient to give it the specific notation β.) After obtaining a schedule for the instance with rounded release dates, we can get a feasible schedule for the original instance by adding δ to each job's start time. Since δ ≤ ε·OPT, we get the following lemma.

Lemma 3. With ε·OPT loss, we assume that there are at most β different release dates.

The inequalities (5) guarantee that any optimal schedule finishes no later than (4/ε)δ. We partition the time interval [0, (4/ε)δ) into β disjoint intervals and denote by Δ_k = [(k − 1)δ, kδ) the kth interval, k = 1, 2, ..., β − 1, and Δ_β = [(1/ε)δ, (4/ε)δ). We assume without loss of generality that the release dates take on β values, which we denote by ρ_k = (k − 1)δ, k = 1, 2, ..., β.

We classify both jobs and batches as large and small according to their processing times. A job j is large if p_j > δ/β, otherwise it is small. Similarly, a batch B_g is large if p(B_g) > δ/β, otherwise it is small.

Lemma 4. With 1 + ε loss, the number γ of different processing times of large jobs can be bounded from above by (1 + ε)/ε^3 − 1/ε.

Proof. We round the processing time of each large job j down to the nearest integral multiple of ε·δ/β, i.e., p_j = ⌊p_j/(ε·δ/β)⌋ · ε·δ/β. If we obtain an optimal schedule for the instance with rounded processing times, we can get a feasible schedule for the original instance by re-introducing the original processing times. Since the optimal value of the rounded instance cannot be greater than OPT, and for each large job j we have p_j > δ/β, we may increase the objective value by a factor of 1 + ε. Since δ/β < p_j ≤ (1/ε)δ, we get γ ≤ (1/ε)δ/(ε·δ/β) − (δ/β)/(ε·δ/β) = (1 + ε)/ε^3 − 1/ε, as claimed. □

If the rounded processing time of a large job is exactly δ/β, then this job will be treated as a small job. Thus, all large jobs have processing times of the form h·(ε·δ/β), where h ∈ {1/ε + 1, 1/ε + 2, ..., (1 + ε)/ε^3}. Without loss of generality, let γ = (1 + ε)/ε^3 − 1/ε and let P_t = (t + 1/ε)·(ε·δ/β) denote the tth processing time of the large jobs, t ∈ {1, 2, ..., γ}.

For simplicity, we call the instance with the rounded release dates and processing times the rounded instance. We have the following two lemmas, which are crucial in solving the rounded instance.

Lemma 5. With 2ε·OPT loss, we can restrict our attention to the schedules with the following properties:

(i) small jobs are contained only in small batches, and
(ii) each small batch consists of the small jobs with the same release date, and
(iii) on each machine, the batches are processed in non-decreasing order of release dates.

Proof. Denote by σ a (1 + 2ε)-approximate schedule that satisfies Lemmas 3 and 4, i.e., σ is an optimal schedule for the rounded instance.

To prove property (i), we apply the BLPT rule to re-batch and schedule the jobs started in the same interval on each machine in σ. We get a schedule (without increasing makespan) in which, on each machine, among the batches started in the same interval only one large batch may contain small jobs. Therefore, we stretch each interval to make an extra space of length δ/β to accommodate these small jobs. Since there are β intervals, we may increase the objective value by at most δ ≤ ε·OPT. Denote by σ_1 the obtained schedule in which small jobs are contained only in small batches.

A small job with release date ρ_k is called a ρ_k-small job; otherwise it is called a non-ρ_k-small job, k = 1, 2, ..., β. We will transform σ_1 into a schedule satisfying property (ii). To do so, let M_i (i = 1, 2, ..., m) be an arbitrary machine. We now explain how to modify the small batches processed on M_i in σ_1.

We start with the small batches which contain both ρ_β-small jobs and non-ρ_β-small jobs. Note that all these small batches have release dates ρ_β. We exchange some jobs between these batches such that these batches abide by the BLPT rule. Denote them by B_1, B_2, ..., B_h, where p(B_1) ≥ p(B_2) ≥ ... ≥ p(B_h). We move some jobs in the following way until finally each of B_1, B_2, ..., B_h contains only ρ_β-small jobs or only non-ρ_β-small jobs.

Split B_1 into two small batches B′_1 and B″_1 (with the same processing time as that of B_1) such that B′_1 contains only the ρ_β-small jobs in B_1 and B″_1 contains only the non-ρ_β-small jobs in B_1. Let n′_1 (resp. n″_1) be the number of the ρ_β-small jobs (resp. non-ρ_β-small jobs) in B_1. Likewise, let n′_2 (resp. n″_2) be the number of the ρ_β-small jobs (resp. non-ρ_β-small jobs) in B_2. There are two different cases to consider:

Case 1: n′_1 ≤ n″_2. We move n′_1 non-ρ_β-small jobs from B_2 into B″_1, and move all ρ_β-small jobs from B_2 into B′_1. After that, B″_1 becomes full and contains only non-ρ_β-small jobs. Set B′_1 ← B′_1, B″_1 ← B_2, B_2 ← B_3.

Case 2: n′_1 > n″_2. Since B_1 and B_2 are full batches, we have n″_1 ≤ n′_2. We move n″_1 ρ_β-small jobs from B_2 into B′_1, and move all non-ρ_β-small jobs from B_2 into B″_1. After that, B′_1 becomes full and contains only ρ_β-small jobs. Set B′_1 ← B_2, B″_1 ← B″_1, B_2 ← B_3.

In each case, we reduce the number of small batches containing both ρ_β-small jobs and non-ρ_β-small jobs. We repeat this process by repeatedly choosing the next batch as B_2, until finally B_h contains only ρ_β-small jobs or only non-ρ_β-small jobs.

Note that we only created a new batch with processing time p(B_1). Therefore, we may increase makespan by at most p(B_1) ≤ δ/β.

We repeat the above process on M_i (i = 1, 2, ..., m) for ρ_k (k = β, β − 1, ..., 1). In the end, we get a schedule σ_2 in which each small batch contains only the small jobs with the same release date. Since there are β intervals, we may increase makespan by δ ≤ ε·OPT.

Property (iii) can be proved easily. Let B_1 and B_2 be two adjacent batches processed on the same machine in σ_2 such that r(B_1) < r(B_2) and S(B_1) > S(B_2). Since r(B_1) < r(B_2) ≤ S(B_2), we let B_1 start at S(B_2). This will delay some batches but will not increase makespan. Repeat this process until the batches on each machine are processed in non-decreasing order of release dates. □

The dynamic programming procedure underlying our PTAS handles the machines in increasing order of their indices. That is, it starts from M_1, continues to go through M_2, ..., M_{m−1}, and finishes at M_m. In this sense it is similar to SIF-LPT.

We first perform some preprocessing in O(n log n) time. Recall that J_i denotes the set of the jobs with machine index i. For i = 1, 2, ..., m and k = 1, 2, ..., β, we sort all the small jobs with release date ρ_k in J_i in non-increasing order of processing times, and let SJ_i^k denote the ordered set.

Lemma 6. With 2ε·OPT loss, the small jobs with the same release date can be assigned to the machines M_1, M_2, ..., M_m (this ordering is used crucially) in non-decreasing order of their machine indices, and within that in non-increasing order of their processing times.

Proof. By Lemma 5, we consider a schedule σ with makespan at most (1 + 2ε)·OPT for the rounded instance in which, on each machine, the small batches with release date ρ_k (k = 1, 2, ..., β) are processed successively one after another. Note that each small batch in σ consists of the small jobs with the same release date. Let TS_i^k be the total processing time of the small batches with release date ρ_k processed on machine M_i in σ, i = 1, 2, ..., m, k = 1, 2, ..., β. From σ we will construct a schedule which satisfies the property of this lemma.

We keep all the large batches in σ unchanged. That is, we schedule all the large jobs in the new schedule in the same way as σ does.

We now explain how to assign the small jobs with the same release date to the machines. Initially, let Q_0^k = ∅ (k = 1, 2, ..., β). We handle the machines in increasing order of their indices. Machine M_i (i = 1, 2, ..., m) is only handled after all the machines M_1, M_2, ..., M_{i−1} have already been handled. Suppose that we are handling machine M_i. Let Q_i^k = Q_{i−1}^k || SJ_i^k, where || denotes the concatenation of the ordered sets Q_{i−1}^k and SJ_i^k. (We stress that we do not merge Q_{i−1}^k and SJ_i^k.)

We assign the jobs in Q_i^k (k = 1, 2, ..., β) to machine M_i as follows. Let List_i^k denote the ordered list of the small jobs with release date ρ_k which have been assigned to M_i. The jobs in List_i^k are sorted in non-increasing order of their processing times. Initially, let List_i^k = ∅. Pick the first K_i jobs in Q_i^k. (If there are fewer than K_i jobs left in Q_i^k, then pick all of them.) Assign them to M_i. Update Q_i^k by deleting these newly picked jobs. Update List_i^k by merging these newly picked jobs. Let Load_i^k denote the total processing time of the small batches obtained by applying the BLPT rule to all the jobs in List_i^k. We repeatedly assign the jobs in Q_i^k to M_i in this manner until the first time that Load_i^k ≥ TS_i^k + δ/β, or until Q_i^k = ∅.

Let B_i^k = {B_{i,g}^k : g = 1, 2, ..., h_i^k} denote the ordered set of the small batches with release date ρ_k that are sorted in non-increasing order of processing times and have been assigned to M_i when the above procedure terminates, k = 1, 2, ..., β. We modify B_i^k to define a new set of batches B̃_i^k = {B̃_{i,g}^k : g = 1, 2, ..., h_i^k}, where B̃_{i,g}^k is obtained by letting all processing times of the jobs in B_{i,g}^k be equal to q(B_{i,g}^k) for g = 1, 2, ..., h_i^k − 1, and B̃_{i,h_i^k}^k is obtained by letting all processing times of the jobs in B_{i,h_i^k}^k be equal to zero.

Note that Load_i^k = Σ_{g=1}^{h_i^k} p(B_{i,g}^k). Since Load_i^k ≥ TS_i^k + δ/β and any small batch has processing time no more than δ/β, by Lemma 1 we get Σ_{g=1}^{h_i^k} p(B̃_{i,g}^k) ≥ TS_i^k (unless there are no available small jobs with release date ρ_k that can be assigned to M_i). This inequality, plus the definition of TS_i^k, guarantees to maximize the total processing time of small jobs with release date ρ_k processed on M_i. That is, we assign more processing times of small jobs with release date ρ_k on M_i than σ does (unless there are no such jobs that can be assigned), i = 1, 2, ..., m. We conclude that after we handle machine M_m, all small jobs can be assigned to the machines.

On the other hand, we observe that Σ_{g=1}^{h_i^k} p(B_{i,g}^k) ≤ TS_i^k + 2δ/β holds for k = 1, 2, ..., β. Let L_i be the makespan of machine M_i (the completion time of the last batch on M_i) in σ. Let L′_i be the makespan of M_i in the schedule generated by the above procedure. We have L′_i ≤ L_i + 2δ ≤ L_i + 2ε·OPT, i = 1, 2, ..., m. Therefore, with 2ε·OPT loss we have constructed a schedule from σ which satisfies the property of the lemma. □

To solve the rounded instance, we need two central definitions, which are motivated from Li and Wang (2010): outline and assignment. They provide useful information that can make the enumeration and dynamic programming more efficient. Roughly speaking, an outline for machines M_1, M_2, ..., M_i specifies the jobs (including the small and the large jobs) processed on machines M_1, M_2, ..., M_i, while an assignment for machine M_i specifies the jobs (including the small and the large jobs) processed on machine M_i. Both the outline for machines M_1, M_2, ..., M_i and the assignment for machine M_i are represented as ((γ + 1)β)-dimensional vectors, i = 1, 2, ..., m.

Before we clarify the two definitions, we introduce the following notations for the convenience of description. Let n_{ktl} be the number of large jobs with release date ρ_k, processing time P_t and machine index l, k = 1, 2, ..., β, t = 1, 2, ..., γ, l = 1, 2, ..., m. Let n_{kt}(i) = Σ_{l=1}^{i} n_{ktl}, which is the maximum number of large jobs with release date ρ_k and processing time P_t that can be assigned to machine M_i, i = 1, 2, ..., m. Let n_{kt} be the total number of large jobs with release date ρ_k and processing time P_t, i.e., n_{kt} = n_{kt}(m) = Σ_{l=1}^{m} n_{ktl}. Likewise, let n_{k0l} be the number of small jobs with release date ρ_k and machine index l, k = 1, 2, ..., β, l = 1, 2, ..., m. Let n_{k0}(i) = Σ_{l=1}^{i} n_{k0l}, which is the maximum number of small jobs with release date ρ_k that can be assigned to machine M_i, i = 1, 2, ..., m. Let n_{k0} be the total number of small jobs with release date ρ_k, i.e., n_{k0} = n_{k0}(m) = Σ_{l=1}^{m} n_{k0l}.

Below we formally describe the two central definitions. We fix a specific schedule with makespan at most (1 + 4ε)·OPT for the rounded instance, σ, that satisfies Lemmas 5 and 6. We handle the machines one by one in increasing order of their indices. Let the current machine we are handling be M_i (i = 1, 2, ..., m).

The outline for machines M_1, M_2, ..., M_i (with respect to σ) is defined to be a vector z_i = (z_{kti} | 1 ≤ k ≤ β, 0 ≤ t ≤ γ), where z_{k0i} represents the number of small jobs with release date ρ_k that are processed on machines M_1, M_2, ..., M_i, and z_{kti} represents the number of large jobs with release date ρ_k and processing time P_t that are processed on machines M_1, M_2, ..., M_i, k = 1, 2, ..., β, t = 1, 2, ..., γ. (The index t = 0 is used for the small jobs.) Since σ is a feasible schedule, we have the constraints z_{kti} ≤ n_{kt}(i), k = 1, 2, ..., β, t = 0, 1, ..., γ. Given the outline z_i = (z_{kti} | 1 ≤ k ≤ β, 0 ≤ t ≤ γ) for machines M_1, M_2, ..., M_i, by Lemma 6 we know that the small jobs with release date ρ_k processed on machines M_1, M_2, ..., M_i are exactly the first z_{k0i} jobs in the ordered set SJ_1^k || SJ_2^k || ··· || SJ_i^k, k = 1, 2, ..., β.

The outline for machines M_1, M_2, ..., M_i just specifies the jobs processed on M_1, M_2, ..., M_i. It does not tell us directly how to assign these jobs onto M_1, M_2, ..., M_i. To achieve the assignment of the jobs onto the machines in the dynamic programming procedure, we define the assignment for a single machine using vector subtraction. Let z_0 = (z_{kt0} = 0 | 1 ≤ k ≤ β, 0 ≤ t ≤ γ). The assignment for machine M_i (with respect to σ) is defined to be x_i = z_i − z_{i−1}, i.e., the outline for machines M_1, M_2, ..., M_i minus the outline for machines M_1, M_2, ..., M_{i−1}, i = 1, 2, ..., m. Given the assignment x_i = z_i − z_{i−1} for machine M_i (with respect to σ), by Lemma 6 we know that the small jobs with release date ρ_k processed on M_i are exactly the first z_{k0i} − z_{k0(i−1)} jobs in the ordered set Q_{i−1}^k || SJ_i^k (which is determined at the beginning of handling M_i), k = 1, 2, ..., β.

Given the assignment x_i for M_i, let C_i(x_i) be the makespan of M_i for scheduling the jobs specified by x_i, i.e., the minimum completion time of the last batch on M_i. Scheduling the jobs specified by x_i is essentially an instance of 1|r_j, p-batch, B|C_max where the release dates and processing times have been rounded down and the jobs have been sorted already. Hence, we can use the PTAS developed in Li et al. (2005) to compute C_i(x_i). It follows that:

Lemma 7. With ε·OPT loss, C_i(x_i) can be computed in O(n_i · f(1/ε)) time, where n_i ≤ n denotes the number of the jobs to be scheduled on M_i (specified by x_i) and f is a function exponentially depending on 1/ε.

Let Z_i (i = 1, 2, ..., m) denote the set of all feasible outlines for machines M_1, M_2, ..., M_i, i.e., Z_i is defined to be the set consisting of all possible vectors z_i = (z_{kti} | 1 ≤ k ≤ β, 0 ≤ t ≤ γ) with the constraints z_{kti} ≤ n_{kt}(i), k = 1, 2, ..., β, t = 0, 1, ..., γ. Therefore, z_{kti} takes on n_{kt}(i) + 1 ≤ n + 1 different values, k = 1, 2, ..., β, t = 0, 1, ..., γ. We have Z_1 ⊆ Z_2 ⊆ ... ⊆ Z_m and (replacing β by 1/ε + 1) |Z_m| = Π_{t=0}^{γ} Π_{k=1}^{1/ε+1} (n_{kt} + 1) = O((n + 1)^{(1/ε+1)(γ+1)}).

We are now ready to describe the dynamic programming framework which solves the rounded instance. It can be viewed as a generalization of the framework used in Li and Wang (2010), where Li and Wang presented a PTAS for problem P|r_j, a_j|C_max (the special case of P|r_j, a_j, p-batch, K_i|C_max with all K_i = 1). Note that we are looking for a schedule for the rounded instance with makespan at most (1 + 5ε)·OPT (that satisfies Lemmas 5, 6 and 7; rounding down the release dates and the processing times in Lemmas 3 and 4 cannot increase makespan).

As mentioned above, the dynamic programming handles the machines in increasing order of their indices. It starts from M_1, and works its way towards M_m. When it handles M_i (i = 1, 2, ..., m), it computes the minimum makespan with 5ε·OPT loss (by Lemmas 5, 6 and 7) for any given outline for machines M_1, M_2, ..., M_i. In the end, when it handles M_m, it computes the minimum makespan with 5ε·OPT loss for only one outline for machines M_1, M_2, ..., M_m, i.e., (z_{ktm} = n_{kt} | 1 ≤ k ≤ β, 0 ≤ t ≤ γ).

For each z_i ∈ Z_i (i = 1, 2, ..., m), the dynamic programming table entry F_i(z_i) stores the minimum makespan if we use the outline z_i for machines M_1, M_2, ..., M_i. We will compute F_i(z_i) with 5ε·OPT loss. Since the optimal makespan for the rounded instance is no more than OPT and OPT ≤ (4/ε)δ, we claim that F_i(z_i) ≤ (4/ε)δ + 5δ. If there is an outline z_i ∈ Z_i such that F_i(z_i) > (4/ε)δ + 5δ, then we delete this outline from Z_i.

Let Z_0 = {0} and F_0(0) = 0. For i = 1, 2, ..., m, we have the following recurrence relation:

F_i(z_i) = min_{z_{i−1} ∈ Z_{i−1} s.t. z_{i−1} ≤ z_i} {max{F_{i−1}(z_{i−1}), C_i(z_i − z_{i−1})}},

where z_{i−1} ≤ z_i means that z_{kt(i−1)} ≤ z_{kti} for 1 ≤ k ≤ β, 0 ≤ t ≤ γ.

Finally, the procedure returns F_m((z_{ktm} = n_{kt} | 1 ≤ k ≤ β, 0 ≤ t ≤ γ)), which is not greater than (1 + 5ε)·OPT.
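A Python skeleton of this dynamic program is given below (illustrative only). Job classes (k, t) index the vector components; counts[(k, t)] is assumed to be the list [n_{kt}(0), n_{kt}(1), ..., n_{kt}(m)] with n_{kt}(0) = 0, and C(i, x) is a placeholder for the single-machine makespan C_i(x_i), which the paper computes with the PTAS of Li et al. (2005). The pruning of over-large table entries is omitted:

    from itertools import product

    def dp_over_outlines(counts, C):
        """Evaluate F_i(z_i) = min over z_{i-1} <= z_i of
        max(F_{i-1}(z_{i-1}), C_i(z_i - z_{i-1})), and return F_m at the
        outline that assigns every job."""
        classes = sorted(counts)                    # the (k, t) job classes
        m = len(counts[classes[0]]) - 1
        F = {tuple(0 for _ in classes): 0.0}        # F_0: only the all-zero outline
        for i in range(1, m + 1):
            F_new = {}
            # feasible outlines for M_1..M_i: z_kt <= n_kt(i) componentwise
            for z in product(*(range(counts[c][i] + 1) for c in classes)):
                best = None
                for z_prev, val in F.items():       # previous outline must satisfy z_prev <= z
                    if all(a <= b for a, b in zip(z_prev, z)):
                        x = dict(zip(classes, (b - a for a, b in zip(z_prev, z))))
                        cand = max(val, C(i, x))    # x is the assignment for machine M_i
                        best = cand if best is None else min(best, cand)
                if best is not None:
                    F_new[z] = best
            F = F_new
        return F[tuple(counts[c][m] for c in classes)]

The table size matches the bound on |Z_m| derived above, which is polynomial in n for any fixed ε.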
SJ1k ||SJ2k || · · · ||SJik , k = 1, 2, . . . , β .
Since we need an actual schedule, we extend the dynamic pro-
The outline for machines M1 , M2 , . . . , Mi just specifies the jobs
gramming approach to record not only the objective value com-
processed on M1 , M2 , . . . , Mi . It does not tell us directly how to as-
puted for each outline, but also a choice that led to the value.
sign these jobs onto M1 , M2 , . . . , Mi . To achieve the assignment of
With this information, we can readily construct a schedule with
the jobs onto the machines in the dynamic programming proce-
makespan at most (1 + 5ε ) · OP T for the rounded instance. After
dure, we define the assignment for a single machine using vec-
re-introducing the original release dates and processing times, we
tor subtraction. Let z0 = (zkt0 = 0|1 ≤ k ≤ β , 0 ≤ t ≤ γ ). The assign-
can get a schedule for the original instance whose makespan is at
ment for machine Mi (with respect to  ) is defined to be xi =
most (1 + 5ε )(1 + ε ) · OP T + ε · OP T ≤ (1 + 12ε ) · OP T .
zi − zi−1 , i.e., the outline for machines M1 , M2 , . . . , Mi subtracts the
It is easy to see that the overall running time of the algorithm
outline for machines M1 , M2 , . . . , Mi−1 , i = 1, 2, . . . , m. Given the
is O(m · (n + 1 )(2/ε+2 )(γ +1 )+1 · f (1/ε )). Therefore we get:
assignment xi = zi − zi−1 for machine Mi (with respect to  ), by
Lemma 6 we know that the small jobs with release date ρ k pro- Theorem 5. Problem P |r j , a j , p − batch, Ki |Cmax admits a PTAS.
cessed on Mi are exactly the first zk0i − zk0(i−1 ) jobs in the ordered
set Qik−1 ||SJik (which is determined at the beginning of handling 4. Conclusion
Mi ), k = 1, 2, . . . , β .
Given the assignment xi for Mi , let Ci (xi ) be the makespan of In this paper we initiated the study of scheduling parallel
Mi for scheduling the jobs specified by xi , i.e., the minimum com- batching machines with inclusive processing set restrictions and
pletion time of the last batch on Mi . Scheduling the jobs specified non-identical machine capacities. The objective is to minimize
by xi is essentially an instance of 1|r j , p − batch, B|Cmax where the makespan. For the case of equal release dates, we presented two
release dates and processing times have been rounded down and fast algorithms with approximation ratios 3 and 9/4, respectively.
the jobs have been sorted already. Hence, we can use the PTAS de- For the case with unequal release dates, we developed a polyno-
veloped in Li et al. (2005) to compute Ci (xi ). It follows that mial time approximation scheme. To the best of our knowledge,
this is also the first PTAS even for the case with equal release
Lemma 7. With ε · OPT loss, Ci (xi ) can be computed in O(ni · f(1/ε )) dates and without processing set restrictions. For future research,
time, where ni ≤ n denotes the number of the jobs to be scheduled it would be interesting to see if there are fast approximation algo-
on Mi (specified by xi ) and f is a function exponentially depending on rithms with approximation ratios better than 9/4. Another future
1/ε . research direction is to investigate other objective functions, such
as minimizing total (weighted) completion time. Moreover, exten-
Let Zi (i = 1, 2, . . . , m) denote the set of all feasible outlines sions to uniformly related batching machines scheduling problems
for machines M1 , M2 , . . . , Mi , i.e., Zi is defined to be the set con- with inclusive processing sets, warrant further research.
sisting of all possible vectors zi = (zkti |1 ≤ k ≤ β , 0 ≤ t ≤ γ ) with
the constraints of zkti ≤ nkt (i), k = 1, 2, . . . , β , t = 0,1, . . . , γ . There- Acknowledgments
fore, zkti takes on nkt (i ) + 1 ≤ n + 1 different values, k = 1, 2, . . . , β ,
t = 0, 1, . . . , γ . We have Z1 ⊆ Z2 ⊆  ⊆ Zm and (replacing β by This work is supported by the National Natural Science Founda-

γ
/ε+1
1/ε + 1) |Zm | = t=0 1k=1 (nkt + 1 ))=O( (n + 1 )(1/ε+1)(γ +1) ). tion of China (Nos. 61373079, 61472227, 61272244 and 61672327),
We are now ready to describe the dynamic programming framework which solves the rounded instance. It can be viewed as a generalization of the framework used in Li and Wang (2010), where Li and Wang presented a PTAS for problem P|r_j, a_j|C_max (the special case of P|r_j, a_j, p-batch, K_i|C_max with all K_i = 1). Note that we are looking for a schedule for the rounded instance with makespan at most (1 + 5ε) · OPT (one that satisfies Lemmas 5, 6 and 7; rounding down the release dates and the processing times in Lemmas 3 and 4 cannot increase the makespan).
Lemmas 3 and 4 cannot increase makespan). of computer science (pp. 32–43).
As mentioned above, the dynamic programming handles the Ahmadi, J. H., Ahmadi, R. H., Dasu, S., & Tang, C. S. (1992). Batching and scheduling
jobs on batch and discrete processors. Operations research, 40(4), 750–763.
machines in increasing order of their indices. It starts from M1 , and Albers, S., & Brucker, P. (1993). The complexity of one-machine batching problems.
works its way towards Mm . When it handles Mi (i = 1, 2, . . . , m), it Discrete Applied Mathematics, 47(2), 87–107.
20 S. Li / European Journal of Operational Research 260 (2017) 12–20

Bar-Noy, A., Freund, A., & Naor, J. (2001). On-line load balancing in a hierarchical server topology. SIAM Journal on Computing, 31(2), 527–549.
Brucker, P. (2007). Scheduling algorithms (5th ed.). Springer.
Brucker, P., Gladky, A., Hoogeveen, H., Kovalyov, M. Y., Potts, C. N., Tautenhahn, T., et al. (1998). Scheduling a batching machine. Journal of Scheduling, 1(1), 31–54.
Brucker, P., Jurisch, B., & Krämer, A. (1997). Complexity of scheduling problems with multi-purpose machines. Annals of Operations Research, 70, 57–73.
Centeno, G., & Armacost, R. L. (2004). Minimizing makespan on parallel machines with release time and machine eligibility restrictions. International Journal of Production Research, 42(6), 1243–1256.
Chen, B., Potts, C. N., & Woeginger, G. J. (1998). A review of machine scheduling: complexity, algorithms and approximability. Handbook of combinatorial optimization (pp. 1493–1641). Springer.
Deng, X., Feng, H., Li, G., & Shi, B. (2005). A PTAS for semiconductor burn-in scheduling. Journal of Combinatorial Optimization, 9(1), 5–17.
Deng, X., Poon, C. K., & Zhang, Y. (2003). Approximation algorithms in batch processing. Journal of Combinatorial Optimization, 7(3), 247–257.
Drozdowski, M. (2009). Classic scheduling theory. Scheduling for parallel processing (pp. 55–86). Springer.
Epstein, L., & Levin, A. (2011). Scheduling with processing set restrictions: PTAS results for several variants. International Journal of Production Economics, 133(2), 586–595.
Garey, M. R., & Johnson, D. S. (1979). Computers and intractability: A guide to the theory of NP-completeness. San Francisco, CA: Freeman.
Glass, C. A., & Kellerer, H. (2007). Parallel machine scheduling with job assignment restrictions. Naval Research Logistics, 54(3), 250–257.
Graham, R. L. (1966). Bounds for certain multiprocessing anomalies. Bell System Technical Journal, 45(9), 1563–1581.
Graham, R. L. (1969). Bounds on multiprocessing timing anomalies. SIAM Journal on Applied Mathematics, 17(2), 416–429.
Graham, R. L., Lawler, E. L., Lenstra, J. K., & Kan, A. R. (1979). Optimization and approximation in deterministic sequencing and scheduling: a survey. Annals of Discrete Mathematics, 5, 287–326.
Hall, L. A., & Shmoys, D. B. (1989). Approximation schemes for constrained scheduling problems. In Proceedings of the 30th annual IEEE symposium on foundations of computer science (pp. 134–139).
Hochbaum, D. S., & Shmoys, D. B. (1987). Using dual approximation algorithms for scheduling problems: theoretical and practical results. Journal of the ACM, 34(1), 144–162.
Huo, Y., & Leung, J. Y.-T. (2010). Fast approximation algorithms for job scheduling with processing set restrictions. Theoretical Computer Science, 411(44), 3947–3955.
Hwang, H.-C., Chang, S. Y., & Hong, Y. (2004a). A posterior competitiveness for list scheduling algorithm on machines with eligibility constraints. Asia-Pacific Journal of Operational Research, 21(1), 117–125.
Hwang, H.-C., Chang, S. Y., & Lee, K. (2004b). Parallel machine scheduling under a grade of service provision. Computers and Operations Research, 31(12), 2055–2061.
Jaffe, J. M. (1980). Bounds on the scheduling of typed task systems. SIAM Journal on Computing, 9(3), 541–551.
Jansen, K. (1994). Analysis of scheduling problems with typed task systems. Discrete Applied Mathematics, 52(3), 223–232.
Ji, M., & Cheng, T. E. (2008). An FPTAS for parallel-machine scheduling under a grade of service provision to minimize makespan. Information Processing Letters, 108(4), 171–174.
Kafura, D. G., & Shen, V. (1977). Task scheduling on a multiprocessor system with independent memories. SIAM Journal on Computing, 6(1), 167–187.
Lam, T.-W., Ting, H.-F., To, K.-K., & Wong, W.-H. (2002). On-line load balancing of temporary tasks revisited. Theoretical Computer Science, 270(1), 325–340.
Lawler, E. L., Lenstra, J. K., Kan, A. H. R., & Shmoys, D. B. (1993). Sequencing and scheduling: algorithms and complexity. In S. C. Graves, A. H. G. Rinnooy Kan, & P. Zipkin (Eds.), Handbooks in operations research and management science: Vol. 4 (pp. 445–522). North-Holland.
Lee, C.-Y., & Uzsoy, R. (1999). Minimizing makespan on a single batch processing machine with dynamic job arrivals. International Journal of Production Research, 37(1), 219–236.
Lee, C.-Y., Uzsoy, R., & Martin-Vega, L. A. (1992). Efficient algorithms for scheduling semiconductor burn-in operations. Operations Research, 40(4), 764–775.
Lee, K., Leung, J. Y.-T., & Pinedo, M. L. (2011). Scheduling jobs with equal processing times subject to machine eligibility constraints. Journal of Scheduling, 14(1), 27–38.
Lenstra, J. K., Shmoys, D. B., & Tardos, E. (1990). Approximation algorithms for scheduling unrelated parallel machines. Mathematical Programming, 46(1), 259–271.
Leung, J. Y. (2004). Handbook of scheduling: Algorithms, models, and performance analysis. CRC Press.
Leung, J. Y.-T., & Li, C.-L. (2008). Scheduling with processing set restrictions: a survey. International Journal of Production Economics, 116(2), 251–262.
Leung, J. Y.-T., & Li, C.-L. (2016). Scheduling with processing set restrictions: a literature update. International Journal of Production Economics, 175, 1–11.
Li, C.-L. (2006). Scheduling unit-length jobs with machine eligibility restrictions. European Journal of Operational Research, 174(2), 1325–1328.
Li, C.-L., & Wang, X. (2010). Scheduling parallel machines with inclusive processing set restrictions and job release times. European Journal of Operational Research, 200(3), 702–710.
Li, S., Li, G., & Zhang, S. (2005). Minimizing makespan with release times on identical parallel batching machines. Discrete Applied Mathematics, 148(1), 127–134.
Li, W., Li, J., & Zhang, T. (2009). Approximation schemes for scheduling on parallel machines with GoS levels. Lecture Notes in Operations Research-Operations Research and Its Applications, 10, 53–60.
Liu, L., Ng, C., & Cheng, T. (2014). Scheduling jobs with release dates on parallel batch processing machines to minimize the makespan. Optimization Letters, 8(1), 307–318.
Liu, Z., & Yu, W. (2000). Scheduling one batch processor subject to job release dates. Discrete Applied Mathematics, 105(1), 129–136.
Mathirajan, M., & Sivakumar, A. (2006). A literature review, classification and simple meta-analysis on scheduling of batch processors in semiconductor. The International Journal of Advanced Manufacturing Technology, 29(9), 990–1001.
Mönch, L., Fowler, J. W., Dauzère-Pérès, S., Mason, S. J., & Rose, O. (2011). A survey of problems, solution techniques, and future challenges in scheduling semiconductor manufacturing operations. Journal of Scheduling, 14(6), 583–599.
Ou, J., Leung, J. Y. T., & Li, C. L. (2008). Scheduling parallel machines with inclusive processing set restrictions. Naval Research Logistics, 55(4), 328–338.
Papadimitriou, C. H., & Steiglitz, K. (1998). Combinatorial optimization: Algorithms and complexity. Courier Dover Publications.
Potts, C. N., & Kovalyov, M. Y. (2000). Scheduling with batching: a review. European Journal of Operational Research, 120(2), 228–249.
Sahni, S. K. (1976). Algorithms for scheduling independent tasks. Journal of the ACM, 23(1), 116–127.
Shchepin, E. V., & Vakhania, N. (2005). An optimal rounding gives a better approximation for scheduling unrelated machines. Operations Research Letters, 33(2), 127–133.
Vairaktarakis, G. L., & Cai, X. (2003). The value of processing flexibility in multipurpose machines. IIE Transactions, 35(8), 763–774.
Webster, S., & Baker, K. R. (1995). Scheduling groups of jobs on a single machine. Operations Research, 43(4), 692–703.