
A Hybrid Approach for the 0-1 Multidimensional Knapsack Problem

Michel Vasquez (1) and Jin-Kao Hao (2)
(1) EMA-EERIE, Parc Scientifique G. Besse, F-30035 Nimes cedex 1, vasquez@eerie.fr
(2) Université d'Angers, 2 bd Lavoisier, F-49045 Angers cedex 1, hao@info.univ-angers.fr
To appear in the Proceedings of IJCAI-01, Seattle, Washington, August 2001.

Abstract
We present a hybrid approach for the 0-1 multidimensional knapsack problem. The proposed approach combines linear programming and Tabu Search. The resulting algorithm improves significantly on the best known results of a set of more than 150 benchmark instances.

Given the practical and theoretical importance of the MKP01, it is not surprising to find a large number of studies in the literature; we give a brief review of these studies in the next section.

2 State of the Art


As for many NP-hard combinatorial optimization problems, both exact and heuristic algorithms have been developed for the MKP01. Existing exact algorithms are essentially based on the branch and bound method [Shih, 1979]. These algorithms differ from one another in the way the upper bounds are obtained. For instance, in [Shih, 1979], Shih solves exactly each of the m single-constraint relaxed knapsack problems and selects the minimum of the objective function values as the upper bound. Better algorithms have been proposed by using tighter upper bounds, obtained with other MKP01 relaxation techniques such as Lagrangean, surrogate and composite relaxations [Gavish and Pirkul, 1985]. Due to their exponential time complexity, exact algorithms are limited to small size instances. Heuristic algorithms are designed to produce near-optimal solutions for larger problem instances. The first heuristic approach for the MKP01 relies for a large part on greedy methods. These algorithms construct a solution by adding, according to a greedy criterion, one object at a time into a current solution without violating the knapsack constraints. The second heuristic approach is based on linear programming, solving various relaxations of the MKP01. A first possibility is to relax the integrality constraints and solve the relaxed problem optimally with the simplex algorithm. Other possibilities include surrogate and composite relaxations [Osorio et al., 2000]. More recently, several algorithms based on metaheuristics have been developed, including simulated annealing [Drexl, 1988], tabu search [Glover and Kochenberger, 1996; Hanafi and Fréville, 1998] and genetic algorithms [Chu and Beasley, 1998]. The metaheuristic approach has made it possible to obtain very competitive results on large instances compared with other methods. Most of the above heuristics use the so-called pseudo-utility criterion for selecting the objects to be added into a solution. For the single constraint case (the KP01), this criterion corresponds to the profit/resource ratio. For the general MKP01, the pseudo-utility of an object $j$ is usually defined as
$u_j = \dfrac{c_j}{\sum_{i=1}^{m} w_i\, a_{ij}}$
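To make the use of such a criterion concrete, here is a small illustrative sketch in Python (ours, not taken from the cited papers): it builds a solution greedily by decreasing pseudo-utility for an arbitrary, user-supplied multiplier vector w.

```python
def greedy_mkp01(c, A, b, w):
    """Greedy construction for the MKP01 guided by the pseudo-utility
    u_j = c_j / sum_i w_i * a_ij (w: one multiplier per constraint).
    Items are scanned by decreasing pseudo-utility and added whenever
    no knapsack constraint would be violated."""
    n, m = len(c), len(b)
    utility = [c[j] / sum(w[i] * A[i][j] for i in range(m)) for j in range(n)]
    x, load = [0] * n, [0] * m
    for j in sorted(range(n), key=lambda j: utility[j], reverse=True):
        if all(load[i] + A[i][j] <= b[i] for i in range(m)):
            x[j] = 1
            for i in range(m):
                load[i] += A[i][j]
    return x
```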

1 Introduction
The NP-hard 0-1 multidimensional knapsack problem (MKP01) consists in selecting a subset of given objects (or items) in such a way that the total profit of the selected objects is maximized while a set of knapsack constraints are satisfied. More formally, the MKP01 can be stated as follows.

(MKP01)   maximize $c \cdot x = \sum_{j=1}^{n} c_j x_j$   subject to $A x \le b$ and $x \in \{0,1\}^n$

where $c \in \mathbb{N}^n$, $A \in \mathbb{N}^{m \times n}$ and $b \in \mathbb{N}^m$. The binary components $x_j$ of $x$ are decision variables: $x_j = 1$ if the object $j$ is selected, $x_j = 0$ otherwise. $c_j$ is the profit associated to the object $j$. Each of the $m$ constraints $\sum_{j=1}^{n} a_{ij} x_j \le b_i$ is called a knapsack constraint. The special case of the MKP01 with $m = 1$ is the classical knapsack problem (KP01), whose usual statement is the following: given a knapsack of capacity $b$ and $n$ objects, each being associated a profit (gain) and a volume occupation, one wants to select $k$ ($k \le n$, $k$ not fixed) objects such that the total profit is maximized and the capacity of the knapsack is not exceeded. It is well known that the KP01 is not strongly NP-hard because there are polynomial approximation algorithms to solve it. This is not the case for the general MKP01. The MKP01 can formulate many practical problems such as capital budgeting, where project $j$ has profit $c_j$ and consumes $a_{ij}$ units of resource $i$; the goal is to determine a subset of the projects such that the total profit is maximized and all resource constraints are satisfied. Other important applications include cargo loading [Shih, 1979], cutting stock problems, and processor allocation in distributed systems [Gavish and Pirkul, 1982]. Notice that the MKP01 can be considered as a general 0-1 integer programming problem with non-negative coefficients.
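As a minimal illustration of this formulation, the following Python sketch evaluates the profit and the feasibility of a configuration on a small made-up instance (the numbers are ours, chosen only for the example):

```python
def profit(c, x):
    """Objective value c.x of a 0-1 configuration x."""
    return sum(cj * xj for cj, xj in zip(c, x))

def is_feasible(A, b, x):
    """Check the m knapsack constraints A x <= b."""
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))

# Made-up toy instance (4 objects, 2 knapsack constraints), for illustration only.
c = [10, 7, 6, 3]
A = [[4, 3, 2, 1],
     [2, 4, 3, 2]]
b = [6, 6]
x = [1, 0, 1, 0]          # select objects 0 and 2
assert is_feasible(A, b, x) and profit(c, x) == 16
```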

where $k$ is the sum of the components of $x$, defining a hyperplane. We thus have several (promising) points around which a careful search will be carried out. To show the interest of this extra constraint, take again the previous example, which leads to three relaxed problems.

3 A Hybrid Approach for the MKP01


The basic idea of our approach is to search around the fractional optimum $\bar{x}$ of some relaxed MKP. Our hypothesis is that the search space around $\bar{x}$ should contain high quality solutions¹. This general principle can be put into practice via a two phase approach. The first phase consists in solving exactly a relaxation MKP of the initial MKP01 to find a fractional optimal solution $\bar{x}$. The second phase consists in exploring carefully and efficiently some areas around this fractional optimal point. Clearly, there are many possible ways to implement this general approach. For this work, we have made the following choices: the first phase is carried out by using the simplex algorithm to solve a relaxed MKP, and the second phase is ensured by a Tabu Search algorithm. Let us notice first that relaxing the integrality constraints alone may not be sufficient, since the optimal solution of this relaxation may be far away from an optimal binary solution. To be convinced, let us consider a small example with five objects and one knapsack constraint.

In this example, one of the relaxed problems directly gives an optimal binary solution without further search. In general, however, a search is necessary to explore around each fractional optimum $\bar{x}$, and this is carried out by an effective TS algorithm (Section 5). In order not to go far away from each fractional optimum $\bar{x}$, we constrain TS to explore only the points $x$ such that the distance between $x$ and $\bar{x}$² is no greater than a given threshold $\delta$. To sum up, the proposed approach is composed of three steps: 1. determine interesting hyperplanes $\sum_{j=1}^{n} x_j = k$; 2. run the simplex algorithm to obtain the corresponding fractional optima $\bar{x}^k$; 3. run Tabu Search around each $\bar{x}^k$, limited to a sphere of fixed radius.

4 Determining Hyperplanes: Simplex Phase


Given an instance of the MKP01, it is obvious that all the values of $k$ do not have the same interest regarding the optimum. Furthermore, exploring all the possible hyperplanes would be too time consuming. We propose, in this section, a simple way to compute good values of $k$. Starting from a 0-1 lower bound $LB$ on the optimal cost, the method consists in solving, with the simplex algorithm, the two following problems:
MKP[$\min$]:   minimize $\sum_{j=1}^{n} x_j$   subject to $A x \le b$, $c \cdot x \ge LB$ and $x \in [0,1]^n$
² This point is discussed more formally in Section 5.2.

¹ This hypothesis is confirmed by experimental results presented later. Of course, the hypothesis does not exclude the possibility that other areas may contain good solutions.

Let $k_{\min}$ be the optimal value of this problem, rounded up to the nearest integer. A knapsack that holds fewer items than $k_{\min}$ no longer respects the constraint $c \cdot x \ge LB$.

MKP[$\max$]:   maximize $\sum_{j=1}^{n} x_j$   subject to $A x \le b$, $c \cdot x \ge LB$ and $x \in [0,1]^n$

For the example, this relaxed problem leads to a fractional optimal solution that is far away from the optimal binary solution. However, the above relaxation can be reinforced to give more precise information regarding the discrete aspect of the problem. To do this, let us notice that all solutions of the MKP01 verify the property $\sum_{j=1}^{n} x_j = k$ with $k \in \mathbb{N}$. Now, if we add this constraint, for a fixed $k$, to the relaxed MKP, we obtain a new relaxed problem MKP[$k$].

Let $k_{\max}$ be the optimal value of this problem, rounded down to the nearest integer. It is not possible to hold more items than $k_{\max}$ without violating one of the knapsack constraints $A x \le b$.
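For illustration, both bounding problems can be solved with any LP code. The sketch below is ours and uses scipy.optimize.linprog; c, A and b denote the profit vector, the constraint matrix and the capacities, and LB is a known 0-1 lower bound.

```python
import numpy as np
from scipy.optimize import linprog

def hyperplane_bounds(c, A, b, LB):
    """k_min and k_max from the two relaxed problems
    min / max sum_j x_j  s.t.  A x <= b,  c.x >= LB,  0 <= x_j <= 1."""
    n = len(c)
    ones = np.ones(n)
    A_ub = np.vstack([A, -np.asarray(c, dtype=float)])   # c.x >= LB  <=>  -c.x <= -LB
    b_ub = np.concatenate([np.asarray(b, dtype=float), [-LB]])
    lo = linprog(ones, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))
    hi = linprog(-ones, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))
    k_min = int(np.ceil(lo.fun - 1e-9))     # round the fractional optima
    k_max = int(np.floor(-hi.fun + 1e-9))
    return k_min, k_max
```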

where $w = (w_1, \dots, w_m)$ is a vector of multipliers applied to the column vector $(a_{1j}, \dots, a_{mj})$ of the constraint matrix. Notice that it is impossible to obtain "optimal" multipliers; thus the pseudo-utility criterion may mislead the search. In this paper, we propose an alternative way to explore the search space. We use fractional optimal solutions given by linear programming to guide a neighborhood search (Tabu Search, TS) algorithm. The main idea is to exploit the area around a fractional optimal solution with the help of additional constraints. We experiment with this approach on a very large variety of MKP01 benchmark instances and compare our results with the best known ones. We show that this hybrid approach outperforms the previously best algorithms on the set of tested instances. The paper is organized as follows. The next section presents the general scope of our approach. Section 4 describes the algorithm used to determine promising sub-spaces of the whole search space, from which we run a TS algorithm. This algorithm is presented in Section 5. Finally, we give computational results on a large set of MKP01 instances in Section 6.
A move consists in changing a small set of components of a configuration $x$, giving a new configuration $x'$; it is denoted by $m(x \rightarrow x')$. In this binary context, the flipped variables of a move can be identified by their indexes in the vector: these indexes are the attributes of the move. The neighborhood of $x$, denoted $N(x)$, is the subset of configurations reachable from $x$ in one move. In this binary context, a move from $x$ to $x'$ can be identified without ambiguity by the attribute $j$ if $x'$ is obtained by flipping the $j$-th element of $x$. More generally, we use $m(j_1, \dots, j_p)$ to denote the move where the components $j_1, \dots, j_p$ are flipped. Such a move is called a $p$-change move.

The neighborhood which satisfies the hyperplane constraint $\sum_{j=1}^{n} x_j = k$ is the classical add/drop neighborhood: we remove an object from the current configuration and add another object to it at the same time. This neighborhood is used in this study. The set of neighboring configurations of a configuration $x$ is thus defined by the formula:
$N(x) = \{\, x' \in \{0,1\}^n \mid \sum_{j=1}^{n} x'_j = \sum_{j=1}^{n} x_j \ \text{ and } \ \sum_{j=1}^{n} |x_j - x'_j| = 2 \,\}$

It is easy to see that this neighborhood is a special case of the $p$-change move with $p = 2$ (cf. Sect. 5.1). In the following sections we use $x'$ and the attribute pair $(i, j)$ indifferently to designate a neighbor of $x$. As TS evaluates the whole neighborhood before making a move, each iteration of the algorithm has a time complexity proportional to $k \times (n - k)$, the size of this neighborhood.
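A direct enumeration of this add/drop neighborhood can be sketched as follows (an illustration in Python, not the authors' implementation); it yields exactly the k(n - k) neighbors mentioned above.

```python
def add_drop_neighbors(x):
    """Yield ((i, j), y) for every add/drop move: drop selected object i,
    add unselected object j. The number of selected objects, i.e. the
    hyperplane sum, is preserved; there are exactly k*(n-k) such moves."""
    ones = [j for j, v in enumerate(x) if v == 1]
    zeros = [j for j, v in enumerate(x) if v == 0]
    for i in ones:
        for j in zeros:
            y = list(x)
            y[i], y[j] = 0, 1
            yield (i, j), y
```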

5.4 Tabu List Management


5.2 Search Space Reduction


For the first point, we use $d(x, y) = \sum_{j=1}^{n} |x_j - y_j|$ to define the distance, where $x$ and $y$ may be binary or continuous. We use the following heuristic to estimate the maximal distance $\delta$ authorized from the starting point $\bar{x}^k$. Let the elements of the vector $\bar{x}^k$ be sorted in decreasing order; only a few of these components are fractional, the others being equal to 0 or 1. Let $u$ be the number of components having the value 1.

2. limitation of the number of objects taken in the configurations visited by the tabu search algorithm to the constant $k$ (intersection of the search space with the hyperplane $\sum_{j=1}^{n} x_j = k$).

1. limitation of the search to a sphere of fixed radius $\delta$ around the point $\bar{x}^k$, the optimal solution of the relaxed problem MKP[$k$], that is: $\sum_{j=1}^{n} |x_j - \bar{x}^k_j| \le \delta$;

Regarding the knapsack constraints $A x \le b$, the set $\{0,1\}^n$ is the completely relaxed search space. It is also the largest possible search space. We specify in this section a reduced search space which will be explored by our Tabu Search algorithm. To define this reduced space, we take up the ideas discussed previously (Section 3):

The reverse elimination method (REM), introduced by Fred Glover [Glover, 1990], leads to an exact tabu status (i.e. a move is tabu if and only if it leads to an already visited configuration). It consists in storing in a running list the attributes (pairs of flipped components) of all completed moves. Telling whether a move is forbidden or not requires tracing back the running list. Doing so, one builds another list, the so-called residual cancellation sequence (RCS), in which attributes are either added, if they are not yet in the RCS, or dropped otherwise. The condition where the RCS reduces exactly to the attributes of a candidate move corresponds to a move leading to an already visited configuration. For more details on this method see [Dammeyer and Voß, 1993; Glover, 1990]. REM has a time complexity per iteration proportional to the length of the running list, i.e. to the current value of the move counter. For our specific 2-change move, we need only one trace of the running list: each time the RCS contains exactly two attributes $i$ and $j$, the move with attributes $i$ and $j$ is said tabu. The following algorithm updates the tabu status of the whole neighborhood of a configuration and corresponds to this equivalence.
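The following Python sketch illustrates this backward trace for 2-change moves, assuming the running list stores the pair of flipped indexes of each completed move; it is a simplified reading of the reverse elimination method, not the authors' code.

```python
def tabu_add_drop_pairs(running_list):
    """Reverse elimination trace for 2-change (add/drop) moves.
    running_list: attributes (i, j) of all completed moves, oldest first.
    Returns the set of index pairs whose flip would recreate an already
    visited configuration; these moves are tabu at the current iteration."""
    rcs = set()                 # residual cancellation sequence
    tabu = set()
    for move in reversed(running_list):
        for attr in move:       # attributes are added if absent, dropped otherwise
            if attr in rcs:
                rcs.discard(attr)
            else:
                rcs.add(attr)
        if len(rcs) == 2:       # exactly two residual attributes: flipping them
            tabu.add(frozenset(rcs))   # leads back to a visited configuration
    return tabu

# Example: after moves (1, 4) then (4, 7), flipping {1, 7} would undo both.
assert frozenset((1, 7)) in tabu_add_drop_pairs([(1, 4), (4, 7)])
```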


the unconstrained search space is defined to be the set $\{0,1\}^n$, including both feasible and infeasible configurations;

5.3 Neighborhood


where $c^{*}$ is the best value of a feasible configuration found so far.

The following notations are specifically oriented to the MKP01: a configuration $x$ is a binary vector with $n$ components;

5.1 Notations

Note that all the reduced search spaces $X^k$ are disjoint; this is a good feature for a distributed implementation of our approach. Finally, in order to further limit TS to interesting areas of the search space, we add a qualitative constraint on the visited configurations:
$c \cdot x > c^{*}$

A comprehensive introduction to Tabu Search may be found in [Glover and Laguna, 1997]. We briefly give below some notations necessary for the understanding of our TS algorithm.


5 Tabu Search Phase


Consequently, local search will run only in the hyperplanes $\sum_{j=1}^{n} x_j = k$ for which $k$ is bounded by $k_{\min}$ and $k_{\max}$. Hence we compute, using once again the simplex algorithm, the relaxed problems MKP[$k$] for $k_{\min} \le k \le k_{\max}$, which give us the continuous optima $\bar{x}^k$ used by the TS phase.
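Again for illustration only, the fractional optimum of MKP[k] can be obtained by adding the hyperplane as an equality row in the linear program; the sketch below uses scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

def fractional_optimum(c, A, b, k):
    """Fractional optimum xbar_k of the relaxed problem MKP[k]:
    max c.x  s.t.  A x <= b,  sum_j x_j = k,  0 <= x_j <= 1."""
    n = len(c)
    res = linprog(-np.asarray(c, dtype=float), A_ub=A, b_ub=b,
                  A_eq=np.ones((1, n)), b_eq=[k], bounds=(0, 1))
    return res.x
```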

In the worst case, we select whole items rather than fractional components. The limiting case corresponds to the situation where $\bar{x}^k$ is already a binary vector and no search radius is needed. Depending on the MKP01 instances, we choose the value of $\delta$ accordingly. Hence, each tabu process running around $\bar{x}^k$ has its own search space $X^k$.

This algorithm traces back the running list from its last entry to its first entry.

5.5 Evaluation Function

We use a two-component evaluation function to assess the configurations: one component measures the violation of the knapsack constraints and the other the profit $c \cdot x$. To make a move from a configuration $x$, we take among $N(x)$ the best non-tabu configuration with respect to these two components.

5.6 Tabu Algorithm

Based on the above considerations, our TS algorithm for the MKP01 works on a series of problems of the form:
maximize $c \cdot x$ subject to $x \in \{0,1\}^n$, $\sum_{j=1}^{n} x_j = k$, $\sum_{j=1}^{n} |x_j - \bar{x}^k_j| \le \delta$ and $c \cdot x > c^{*}$
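A compact, self-contained Python sketch of such a tabu process is given below. The evaluation used here (knapsack violation first, then profit) and the stopping rule are simplifications of ours; the sketch is not a faithful reproduction of Algorithms 1 and 2.

```python
import random

def tabu_search_sketch(c, A, b, x0, max_stall=1000, seed=0):
    """Sketch of one tabu process: add/drop moves on the hyperplane
    sum(x) = sum(x0), an exact tabu status from a reverse elimination trace
    of the running list, and a simplified two-component evaluation."""
    rng = random.Random(seed)

    def violation(y):           # total overflow of the knapsack constraints
        return sum(max(0, sum(a * v for a, v in zip(row, y)) - bi)
                   for row, bi in zip(A, b))

    def profit(y):
        return sum(cj * v for cj, v in zip(c, y))

    x = list(x0)
    best_x, best_val = None, float("-inf")
    running = []                # (drop, add) attributes of all completed moves
    stall = 0
    while stall < max_stall:
        tabu, rcs = set(), set()
        for move in reversed(running):          # reverse elimination trace
            rcs.symmetric_difference_update(move)
            if len(rcs) == 2:
                tabu.add(frozenset(rcs))
        ones = [j for j, v in enumerate(x) if v]
        zeros = [j for j, v in enumerate(x) if not v]
        moves = [(i, j) for i in ones for j in zeros
                 if frozenset((i, j)) not in tabu]
        if not moves:
            break
        rng.shuffle(moves)                      # random tie breaking

        def score(move):
            i, j = move
            y = list(x)
            y[i], y[j] = 0, 1
            return (violation(y), -profit(y))

        i, j = min(moves, key=score)
        x[i], x[j] = 0, 1
        running.append((i, j))
        stall += 1
        if violation(x) == 0 and profit(x) > best_val:
            best_x, best_val = list(x), profit(x)
            running.clear()     # the text resets the running list on improvement
            stall = 0
    return best_x, best_val
```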


All the procedures of our algorithm have been coded in the C language. The algorithm has been executed on different kinds of CPUs (up to 20 distributed computers such as PII 350, PIII 500, UltraSparc 5 and 30 stations). For all the instances solved below, we run the algorithm with 10 random seeds (0..9) of the standard srand() C function. We have first tested our approach on the 56 classical problems used in [Aboudi and Jörnsten, 1994; Balas and Martin, 1980; Chu and Beasley, 1998; Dammeyer and Voß, 1993; Drexl, 1988; Fréville and Plateau, 1986; 1993; Glover and Kochenberger, 1996; Shih, 1979; Toyoda, 1975]. The size of these problems varies from 6 to 105 items and from 2 to 30 constraints. These instances are easy to solve for state-of-the-art algorithms. Indeed, our approach finds the optimal value (known for all these instances) in an average time
Table 1: Comparative results on the 7 largest GK problems.

One column shows the number of items of the best solution found by our algorithm, together with its cost. From the table, we observe that all the results are improved. Two further columns give the number of moves and the time elapsed to reach this best solution; recall that the algorithm runs 50000 additional moves after it. The search process thus takes an average of 380 seconds for the test problems GK18 to GK22, 600 seconds for GK23 and 1100 seconds for GK24. The third set of test problems concerns the 30 largest benchmarks (500 items, 30 constraints) of the OR-Library³, proposed recently by Chu and Beasley [Chu and Beasley, 1998].


6 Computational Results
where the successive values of $c^{*}$ form a strictly increasing sequence. Let $|RL|$ be the running list size. Considering the last feature of our algorithm given just above (Sect. 5.5), $|RL|$ is twice the maximum number of iterations without improvement of the cost function. The starting configuration is built with the following simple greedy heuristic: choose the $k$ items which have the largest value.
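Read literally, this starting rule can be sketched as follows, assuming that "value" refers to the profit c_j (our reading):

```python
def greedy_start(c, k):
    """Starting configuration: select the k items of largest value
    (read here as the largest profit c_j), breaking ties arbitrarily."""
    x = [0] * len(c)
    for j in sorted(range(len(c)), key=lambda j: c[j], reverse=True)[:k]:
        x[j] = 1
    return x
```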
Random choice is used to break ties. Each time a better feasible configuration is found, the running list is reset and $c^{*}$ is updated. Hence the tabu algorithm explores areas of the search space where $c \cdot x > c^{*}$, trying to reach feasibility.

of 1 second (details are thus omitted). The running list size $|RL|$ is fixed to 4000. For the next two sets of problems, the running list size $|RL|$ is fixed to 100000; hence the maximum number of moves without improvement is 50000. The second set of tested instances is constituted of the last seven (also the largest ones, with 100 to 500 items and 15 to 25 constraints) of the 24 benchmarks proposed by Glover and Kochenberger [Glover and Kochenberger, 1996]. These instances are known to be particularly difficult to solve for Branch & Bound algorithms. Table 1 shows a comparison between our results (columns 4 to 7) and the best known ones reported in [Hanafi and Fréville, 1998].
Algorithm 1: UPDATE_TABU

Algorithm 2: the tabu search procedure

Table 3: Average performance over the 90 largest CB problems.

One column indicates the average gap values, in percentage, between the continuous relaxed optimum and the best cost value found. The Fix+Cuts and CPLEX algorithms were stopped after 3 hours of computing or when the tree size memory of 250M bytes was exceeded, while our algorithm never requires more than 4M bytes of memory. Except for one subset of the instances, our approach outperforms all the other algorithms. To finish the presentation on the Chu and Beasley benchmarks, Tables 4 and 5 show the best values obtained by our algorithm on the 30 CB 5.500 and the 30 CB 10.500 instances.


Table 2: Comparative results on the 30 largest CB problems.

No.  value   items    No.  value   items    No.  value   items
0    117779  134      10   217343  256      20   304351  378
1    119190  134      11   219036  259      21   302333  380
2    119194  135      12   217797  256      22   302408  379
3    118813  137      13   216836  258      23   300757  378
4    116462  134      14   213859  256      24   304344  381
5    119504  137      15   215034  257      25   301754  375
6    119782  139      16   217903  261      26   304949  378
7    118307  135      17   219965  256      27   296441  379
8    117781  136      18   214341  258      28   301331  379
9    119186  138      19   220865  255      29   307078  378

Table 5: Best values for CB 10.500.

³ Available at http://mscmga.ms.ic.ac.uk.

To conclude this section, we present our results on the 11 latest instances proposed very recently by Glover and Kochenberger (available at http://hces.bus.olemiss.edu/tools.html). These benchmarks contain up to 2500 items and 100 constraints, and are thus very large. Table 6 compares our results (columns 4 and 5) with the best known results taken from the above web site. Once again, our approach gives improved solutions for 9 out of the 11 instances. Let us mention that the experimentation on these instances showed some limits of our approach regarding the computing time for solving some very large instances: given the size of our neighborhood (see Section 5.3), up to 3 days were needed to get the reported values.
Table 2 compares our results with those reported in [Chu and Beasley, 1998], which are among the best results for these instances. From the table, we see that our approach improves significantly on all these results. The average time of the 50000 last iterations is equal to 1200 seconds for these instances. These results can be further improved by giving more CPU time (iterations) to our algorithm; for example, a better solution is found after 6000 seconds for the instance CB 30.500.0. The Chu and Beasley benchmark contains 90 instances with 500 variables: 30 instances with 5 constraints, 30 with 10 constraints and 30 with 30 constraints (whose results were detailed just above). Each set of 30 instances is divided into 3 series of 10 according to the tightness of the constraints. Table 3 compares, for each subset of 10 instances, the averages of the best results obtained by our algorithm with those obtained more recently by Osorio, Glover and Hammer [Osorio et al., 2000]. The new algorithm of [Osorio et al., 2000] uses advanced techniques such as cutting and surrogate constraint analysis (column Fix+Cuts). We also reproduce from [Osorio et al., 2000], in the column CPLEX, the best values obtained by the MIP solver CPLEX v6.5.2 alone.

No.  value   items    No.  value   items    No.  value   items
0    120134  146      10   218428  267      20   295828  383
1    117864  148      11   221191  265      21   308083  383
2    121112  145      12   217534  264      22   299796  385
3    120804  149      13   223558  264      23   306478  385
4    122319  147      14   218966  267      24   300342  385
5    122024  153      15   220530  262      25   302561  384
6    119127  145      16   219989  266      26   301329  385
7    120568  150      17   218194  266      27   306454  383
8    121575  148      18   216976  262      28   302822  382
9    120707  151      19   219704  267      29   299904  379

Table 4: Best values for CB 5.500.


References
[Aboudi and Jörnsten, 1994] R. Aboudi and K. Jörnsten. Tabu Search for General Zero-One Integer Programs using the Pivot and Complement Heuristic. ORSA Journal on Computing, 6(1):82-93, 1994.
[Balas and Martin, 1980] E. Balas and C.H. Martin. Pivot and Complement: a Heuristic for 0-1 Programming. Management Science, 26:86-96, 1980.
[Chu and Beasley, 1998] P.C. Chu and J.E. Beasley. A genetic algorithm for the multidimensional knapsack problem. Journal of Heuristics, 4:63-86, 1998.
[Dammeyer and Voß, 1993] F. Dammeyer and S. Voß. Dynamic tabu list management using the reverse elimination method. Annals of Operations Research, 41:31-46, 1993.
[Drexl, 1988] A. Drexl. A simulated annealing approach to the multiconstraint zero-one knapsack problem. Computing, 40:1-8, 1988.
[Fréville and Plateau, 1986] A. Fréville and G. Plateau. Heuristic and reduction methods for multiple constraints 0-1 linear programming problems. European Journal of Operational Research, 24:206-215, 1986.
[Fréville and Plateau, 1993] A. Fréville and G. Plateau. Sac à dos multidimensionnel en variables 0-1 : encadrement de la somme des variables à l'optimum. Recherche Opérationnelle, 27(2):169-187, 1993.
[Gavish and Pirkul, 1982] B. Gavish and H. Pirkul. Allocation of data bases and processors in a distributed computing system. Management of Distributed Data Processing, 31:215-231, 1982.
[Gavish and Pirkul, 1985] B. Gavish and H. Pirkul. Efficient algorithms for solving multiconstraint zero-one knapsack problems to optimality. Mathematical Programming, 31:78-105, 1985.
[Glover and Kochenberger, 1996] F. Glover and G.A. Kochenberger. Critical event tabu search for multidimensional knapsack problems. In I.H. Osman and J.P. Kelly, editors, Metaheuristics: Theory and Applications, pages 407-427. Kluwer Academic Publishers, 1996.
[Glover and Laguna, 1997] F. Glover and M. Laguna. Tabu Search. Kluwer Academic Publishers, 1997.
[Glover, 1990] F. Glover. Tabu search. ORSA Journal on Computing, 2:4-32, 1990.
[Hanafi and Fréville, 1998] S. Hanafi and A. Fréville. An efficient tabu search approach for the 0-1 multidimensional knapsack problem. European Journal of Operational Research, 106:659-675, 1998.
[Osorio et al., 2000] M.A. Osorio, F. Glover, and P. Hammer. Cutting and surrogate constraint analysis for improved multidimensional knapsack solutions. Technical Report HCES-08-00, Hearin Center for Enterprise Science, 2000.
[Shih, 1979] W. Shih. A branch and bound method for the multiconstraint zero-one knapsack problem. Journal of the Operational Research Society, 30:369-378, 1979.

Table 6: Comparison on the latest 11 MK_GK problems.

7 Conclusion
In this paper, we have presented a hybrid approach for tackling the NP-hard 0-1 multidimensional knapsack problem (MKP01). The proposed approach combines linear programming and Tabu Search. The basic idea consists in obtaining "promising" continuous optima and then using TS to explore carefully and efficiently binary areas close to these continuous optima. This is carried out by introducing several additional constraints:

1. hyperplane constraint: $\sum_{j=1}^{n} x_j = k$, with $k \in \mathbb{N}$;
2. geometrical constraint: $\sum_{j=1}^{n} |x_j - \bar{x}^k_j| \le \delta$;
3. qualitative constraint: $c \cdot x > c^{*}$.

Our Tabu Search algorithm also integrates some advanced features such as an efficient implementation of the reverse elimination method for dynamic tabu list management in the context of a special 2-change move. This hybrid approach has been tested on more than 100 state-of-the-art benchmark instances and has led to improved solutions for most of the tested instances. One inconvenience of the proposed approach is its computing time for very large instances (> 1000 items); this is partially due to the time complexity of the neighborhood used. Despite this, this study constitutes a promising starting point for designing more efficient algorithms for the MKP01. Some possibilities are: 1) to develop a partial evaluation of the relaxed knapsack constraints; 2) to study more precisely the relationship between the fractional optima and the binary optima for a given instance; 3) to find a good crossover operator and a cooperative distributed implementation of the algorithm. Finally, we hope the basic idea behind the proposed approach may be explored to tackle other NP-hard problems. Acknowledgement: we would like to thank the reviewers of the paper for their useful comments.


[Toyoda, 1975] Y. Toyoda. A simplified algorithm for obtaining approximate solutions to zero-one programming problems. Management Science, 21(12):1417-1427, 1975.
