
Approximation algorithms for knapsack problems with cardinality constraints





Alberto Caprara a, Hans Kellerer b, Ulrich Pferschy b, David Pisinger c,*

a DEIS, University of Bologna, viale Risorgimento 2, I-40136 Bologna, Italy
b Universität Graz, Institut für Statistik und Operations Research, Universitätsstr. 15, A-8010 Graz, Austria

c Department of Computer Science, DIKU, University of Copenhagen, Universitetsparken 1, DK-2100 Copenhagen, Denmark

Abstract

We address a variant of the classical knapsack problem in which an upper bound is imposed on the number of items that can be selected. This problem arises in the solution of real-life cutting stock problems by column generation, and may be used to separate cover inequalities with small support within cutting-plane approaches to integer linear programs. We focus our attention on approximation algorithms for the problem, describing a linear-storage Polynomial Time Approximation Scheme (PTAS) and a dynamic-programming based Fully Polynomial Time Approximation Scheme (FPTAS). The main ideas contained in our PTAS are used to derive PTASs for the knapsack problem and its multi-dimensional generalization which improve on the previously proposed PTASs. We finally illustrate better PTASs and FPTASs for the subset sum case of the problem, in which profits and weights coincide. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Packing; Approximation algorithms; Dynamic programming; Knapsack problem

1. Introduction

The classical Knapsack Problem (KP) is defined by a set N := {1, ..., n} of items, each having a positive integer profit p_j and a positive integer weight w_j, and by a positive integer knapsack capacity c. The problem calls for selecting the set of items with maximum overall profit among those whose overall weight does not exceed the knapsack capacity. KP has the following immediate Integer Linear Programming (ILP) formulation:

maximize    Σ_{j∈N} p_j x_j                    (1)
subject to  Σ_{j∈N} w_j x_j ≤ c,               (2)
            x_j ∈ {0, 1},  j ∈ N,              (3)

where each binary variable x_j, j ∈ N, is equal to 1 if and only if item j is selected. The Subset Sum Problem (SSP) is the special case of KP arising

European Journal of Operational Research 123 (2000) 333–345
www.elsevier.com/locate/orms

* Corresponding author. Tel.: +45-35-32-14-00; fax: +45-35-32-14-01.
E-mail addresses: [email protected] (A. Caprara), [email protected] (H. Kellerer), [email protected] (U. Pferschy), [email protected] (D. Pisinger).

0377-2217/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0377-2217(99)00261-1


when p_j = w_j for each j ∈ N. For notational convenience, later in the paper we will use the notation p(S) := Σ_{j∈S} p_j and w(S) := Σ_{j∈S} w_j for S ⊆ N.
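Formulation (1)-(3) can be checked directly on small instances by enumerating all binary vectors x. A minimal brute-force sketch (the function name solve_kp is ours):

```python
from itertools import product

def solve_kp(profits, weights, c):
    """Brute-force KP: maximize (1) over all binary vectors x satisfying (2)-(3)."""
    n = len(profits)
    best = 0
    for x in product((0, 1), repeat=n):
        # constraint (2): total weight must not exceed the capacity c
        if sum(w * xj for w, xj in zip(weights, x)) <= c:
            best = max(best, sum(p * xj for p, xj in zip(profits, x)))
    return best
```

The SSP special case is obtained by passing the same vector as profits and weights. The 2^n enumeration is of course hopeless beyond tiny n, which is why the paper studies polynomial-time approximation.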

KP has been widely studied in the literature; see the book of Martello and Toth [13] and the recent survey by Pisinger and Toth [19] for a comprehensive illustration of the problem. Among the several applications of KP, two very important ones are the following. First, KP is the subproblem which appears when instances of the one-dimensional Cutting Stock Problem are solved by column generation, see e.g. [22]. Second, the separation of cover inequalities in cutting-plane/branch-and-cut approaches to general ILPs calls for the solution of a KP, see e.g. [15].

The m-Dimensional Knapsack Problem (m-DKP) is the generalization of KP in which each item j ∈ N has m nonnegative integer weights w_{1j}, ..., w_{mj}, and m positive integer knapsack capacities c_1, ..., c_m are specified. The objective is to select a subset of the items with maximum overall profit among those whose weight does not exceed the knapsack capacity in any of the m dimensions. Formally, the problem can be formulated as (1) subject to (3) and the capacity constraints

Σ_{j∈N} w_{ij} x_j ≤ c_i,  i ∈ M,              (4)

where M := {1, ..., m}. The importance of this problem follows from its generality: it is a generic ILP in which all variables are binary and all coefficients nonnegative. In the sequel we refer to m-DKP by implicitly assuming that m is fixed, i.e. not part of the problem input. This implies, for instance, that a time complexity of O(n^m) will be considered polynomial.

In this paper we address a problem which is at the same time a generalization of KP and a special case of 2-DKP, namely the k-item Knapsack Problem (kKP), which is a KP in which an upper bound of k is imposed on the number of items that can be selected in a solution. The problem can be formulated as (1), (2) and (3) with the additional constraint

Σ_{j∈N} x_j ≤ k,                               (5)

with 1 ≤ k ≤ n. The Exact k-item Knapsack Problem (E-kKP) is the variant of kKP where the number of items in a feasible solution must be exactly equal to k. Clearly E-kKP can be formulated as kKP by replacing (5) by

Σ_{j∈N} x_j = k.                               (6)

Without loss of generality we assume w_j ≤ c for j ∈ N. Actually, for E-kKP we shall assume that, for each j ∈ N, w_j plus the sum of the smallest k − 1 weights of items in N \ {j} does not exceed c. The k-item Subset Sum Problem (kSSP) and Exact k-item Subset Sum Problem (E-kSSP) are the special cases of kKP and E-kKP, respectively, in which item profits and weights coincide.

Observe that kKP and E-kKP can easily be transformed into each other. More precisely, any kKP instance can be solved as the following E-kKP. The value of k is the same, whereas the profits and the weights of the original items are multiplied by k and the new capacity is defined as k(c+1) − 1, where c is the original capacity. Finally, k "dummy" items with profit and weight 1 are added. By definition, an item subset of cardinality, say, k' ≤ k is feasible for the original kKP instance if and only if it satisfies the knapsack constraint for the E-kKP instance, in which case it can be made feasible by adding k − k' dummy items. The ranking by profit of the feasible solutions of the two instances is the same due to profit scaling. On the other hand, E-kKP can be solved as a kKP by adding a suitably large quantity M to the item profits, e.g. M := p(N). If the optimal value z* of kKP is smaller than kM then E-kKP has no solution, otherwise its optimal solution value is z* − kM. This latter construction cannot be applied to show that E-kSSP can be transformed into kSSP; in fact, we are not aware of any simple transformation between the two problems.
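The kKP-to-E-kKP reduction can be checked by brute force on small instances. A sketch, with the new capacity written as k(c+1) − 1 and all function names ours:

```python
from itertools import combinations

def solve_kkp(profits, weights, c, k):
    """Brute-force kKP: best profit over subsets of at most k items fitting in c."""
    n = len(profits)
    best = 0
    for r in range(k + 1):
        for S in combinations(range(n), r):
            if sum(weights[j] for j in S) <= c:
                best = max(best, sum(profits[j] for j in S))
    return best

def solve_ekkp(profits, weights, c, k):
    """Brute-force E-kKP: best profit over subsets of exactly k items (None if infeasible)."""
    best = None
    for S in combinations(range(len(profits)), k):
        if sum(weights[j] for j in S) <= c:
            v = sum(profits[j] for j in S)
            best = v if best is None else max(best, v)
    return best

def kkp_as_ekkp(profits, weights, c, k):
    """Transform a kKP instance into an E-kKP instance as described in the text:
    scale profits and weights by k, set the capacity to k*(c+1) - 1, and add
    k dummy items of profit and weight 1; then recover the kKP optimum."""
    p2 = [p * k for p in profits] + [1] * k
    w2 = [w * k for w in weights] + [1] * k
    opt = solve_ekkp(p2, w2, k * (c + 1) - 1, k)
    # each real item contributes a multiple of k, dummies add strictly less
    # than k, so integer division recovers the original profit
    return opt // k
```

The capacity k(c+1) − 1 separates the two cases exactly: a feasible original subset has scaled weight at most kc + (k − k') ≤ kc + k − 1, while an infeasible one has scaled weight at least k(c+1).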

kKP is the subproblem to be solved when instances of the Cutting Stock Problem with cardinality constraints are tackled by column generation techniques. For instance, the problem appears when the number of pieces cut from each stock is bounded by a constant due to the limited



number of knives. kKP also appears in processor scheduling problems on computers with k processors and shared memory. The Collapsing Knapsack Problem presented by Posner and Guignard [20], which has applications in satellite communication, where transmissions on the band require gaps between the portions of the band assigned to each user, can be solved to optimality by considering n kKP problems defined on different capacities but with the same set of items.

Furthermore, kKP could replace KP in the separation of cover inequalities, as outlined in the following. In general, separation calls for a constraint in some class which is violated by the optimal solution of the current LP relaxation, see [15]. Therefore, separation is a recognition rather than an optimization problem. For cover inequalities, the exact solution of a KP allows for the determination of the most violated such inequality, if any exists, without any control on the number of variables with nonzero coefficient in the inequality, which is the same as the number of items selected by the optimal KP solution. On the other hand, within cutting-plane approaches it is typically better to separate inequalities with a small number of nonzero coefficients, mainly because such inequalities are easier to handle for the LP solvers. This suggests focusing on the separation of violated cover inequalities having a small number of nonzero coefficients, which can be carried out by explicitly imposing a cardinality constraint on the KP to be solved. In fact, cover inequalities are typically "lifted" before being added to the current LP relaxation, increasing the number of nonzero coefficients, as well as the strength of the inequality. Nevertheless, a cover inequality with few nonzero coefficients before lifting will correspond to a set of items with large weights, which are more likely to be selected if an upper bound on the cardinality of the solution is imposed. If the weights of the items with nonzero coefficient in the inequality before lifting are large with respect to the other weights, it is known [15] that most of the lifting coefficients will be equal to 0, regardless of the lifting procedure used. Accordingly, a cover inequality with fewer nonzero coefficients before lifting will tend to have fewer nonzero coefficients also after lifting.

It is well known that both KP and m-DKP are NP-hard but pseudopolynomially solvable through dynamic programming; therefore the same properties hold for kKP and E-kKP (actually, the fact that E-kKP is solvable in pseudopolynomial time follows from the structure of the dynamic programming recursion that solves kKP, which will be shown later). On the other hand, for fixed k, both kKP and E-kKP are polynomially solvable in O(n^k) time by brute force. A natural question is whether these problems are fixed-parameter tractable [2], i.e. whether there is a constant a and an algorithm which solves either problem in time O(n^a f(k)), where f of course may have exponential growth. The results reported in [3] state however that the existence of such an algorithm is unlikely, since this would imply that the classes FPT and W[1] are equal (something similar to P = NP).

In this paper we will mainly address polynomial time approximation algorithms for kKP (and E-kKP). For a generic problem P we will denote by z* the optimal solution value and by z^H the value of the solution returned by a heuristic algorithm H. Similarly, for a subinstance S of P, z*_S and z^H_S will denote the optimal and heuristic solution values, respectively, for the subinstance. Assume P is a maximization problem, and consider ε ∈ (0, 1). We say that H is a (1−ε) approximation algorithm if z^H ≥ (1−ε) z* for all instances of P. A Polynomial Time Approximation Scheme (PTAS) for P is an algorithm taking as input an instance of P and a value ε ∈ (0, 1), delivering a solution of value z^H ≥ (1−ε) z*, and running in time polynomial in the size of the P instance. If the running time is also polynomial in 1/ε the algorithm is called a Fully Polynomial Time Approximation Scheme (FPTAS).

A PTAS for KP was proposed by Sahni [21], and the first FPTAS by Ibarra and Kim [7], later on improved by Lawler [11], Magazine and Oguz [12] and Kellerer and Pferschy [8]. While a PTAS for KP typically requires only O(n) storage, all the FPTASs are based on dynamic programming and their memory requirement increases rapidly with the accuracy ε, which makes them impractical even for relatively large values of ε. For SSP, the best PTAS requiring O(n) storage is due to Fischetti [4]. FPTASs were developed by Lawler [11], Gens



and Levner [6], and recently improved by Kellerer et al. [9]. Also the more general m-DKP (recall that we assume m not to be part of the input) admits a PTAS, as shown by Oguz and Magazine [16] and Frieze and Clarke [5], whereas Korte and Schrader [10] proved that the problem does not admit an FPTAS unless P = NP. The latter results clearly imply that kKP has a PTAS, while leaving open the question about the existence of an FPTAS, as well as of a PTAS for E-kKP. Actually, the question of an FPTAS for kKP is even more interesting, as Korte and Schrader [10] proved that even for the two-dimensional KP such a scheme does not exist unless P = NP.

In this paper we adapt classical PTASs and FPTASs for KP and SSP to kKP, E-kKP and kSSP. As a byproduct of our analysis, we propose some modifications to the classical PTASs for KP and m-DKP. With these changes, one can achieve the same approximation guarantee with a considerably reduced running time.

2. Linear programming and a PTAS for kKP

We start by defining a simple approximation algorithm for kKP based on the LP relaxation of the problem, obtained by replacing constraints (3) by

0 ≤ x_j ≤ 1,  j ∈ N.                           (7)

Denote the optimal solution value of this LP by z^LP.

Lemma 1. An optimal basic solution x* of the LP relaxation (1), (2), (5) and (7) has at most two fractional components. Let J₁ := {k : x*_k = 1}. If the basic solution has two fractional components x*_i and x*_j, supposing w.l.o.g. w_j ≤ w_i, then p(J₁) + p_i ≥ z* and the solution defined by J₁ ∪ {j} is feasible for kKP.

Proof. If upper bounds on the variables are treated implicitly, there will be two basic variables x*_i, x*_j in a basic LP solution, whereas the other variables will take an integer value. If x*_i and x*_j are both fractional, then x*_i + x*_j = 1, p(J₁) + p_i x*_i + p_j x*_j = z^LP and w_i x*_i + w_j x*_j = c − w(J₁). Therefore, if w_j ≤ w_i, then p_i ≥ p_j, otherwise one could improve the LP solution by setting x_i = 0 and x_j = 1. Hence, p_i x*_i + p_j x*_j ≤ p_i (x*_i + x*_j) = p_i, showing that z* ≤ z^LP ≤ p(J₁) + p_i. Moreover, w_j = w_j (x*_i + x*_j) ≤ w_i x*_i + w_j x*_j = c − w(J₁), which yields the feasibility of J₁ ∪ {j}. □

Lemma 1 extends immediately to E-kKP, for which the number of fractional components in the LP solution can be either 0 or 2.

The algorithm below is a rather straightforward generalization of the well-known 1/2-approximation algorithm for KP, see e.g. [13], and proceeds as follows.

Algorithm H^{1/2}:
1. Compute an optimal basic solution of the LP relaxation of kKP. Let J₁ and J_F contain, respectively, the indices of the variables at 1 and of the fractional variables in the LP solution.
2. If J_F = ∅, return the (optimal) kKP solution J₁, of value z^H := p(J₁).
3. If J_F = {i} for some i ∈ N, return the best kKP solution among J₁ and {i}, of value z^H := max{p(J₁), p_i}.
4. If J_F = {i, j} for some i, j ∈ N such that w_j ≤ w_i, return the best kKP solution among J₁ ∪ {j} and {i}, of value z^H := max{p(J₁) + p_j, p_i}.
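Given a basic optimal LP solution described by (J₁, J_F), Steps 2-4 of H^{1/2} amount to a few comparisons. A sketch (names ours; the O(n) computation of the LP solution by the method of Megiddo and Tamir is assumed available and is not implemented here):

```python
def h_half(profits, weights, J1, JF):
    """Steps 2-4 of H^{1/2}: round a basic optimal LP solution of kKP, given as
    J1 (indices of variables at 1) and JF (fractional indices, at most two)."""
    pJ1 = sum(profits[j] for j in J1)
    if not JF:                                   # Step 2: LP solution is integral
        return list(J1), pJ1
    if len(JF) == 1:                             # Step 3: one fractional variable
        i = JF[0]
        return (list(J1), pJ1) if pJ1 >= profits[i] else ([i], profits[i])
    i, j = JF                                    # Step 4: two fractional variables
    if weights[i] < weights[j]:                  # ensure w_j <= w_i as in the text
        i, j = j, i
    if pJ1 + profits[j] >= profits[i]:
        return list(J1) + [j], pJ1 + profits[j]
    return [i], profits[i]
```

On the tight instance discussed below (p = (2, M, M), w = (1, M, M), c = 2M, k = 3) the LP solution has J₁ = {1, 2} and one fractional variable, and the rounding returns value M + 2.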

Proposition 2. H^{1/2} is a 1/2-approximation algorithm for kKP and runs in O(n) time.

Proof. The time complexity is due to the fact that the LP relaxation of kKP can be solved in O(n) time, as shown by Megiddo and Tamir [14]. (The method by Megiddo and Tamir is in fact more general and rather sophisticated, applying Lagrangian relaxation and a multi-dimensional search procedure.) The feasibility of the solution returned follows immediately from Lemma 1. As to the approximation ratio, if the algorithm terminates at Step 2 the solution returned is clearly optimal; otherwise, denoting by z^LP the optimal LP solution value and letting z^G := p(J₁) if the algorithm stops at Step 3 and z^G := p(J₁) + p_j if it stops at Step 4, one has z* ≤ z^LP ≤ z^G + p_i ≤ 2 z^H. □



A class of instances for which the approximation ratio can be arbitrarily close to 1/2 is defined by n = 3, p₁ = 2, w₁ = 1, p₂ = p₃ = M, w₂ = w₃ = M, k = 3 and c = 2M, where M is some "big" integer. The optimal solution consists of items 2 and 3, whereas the best solution computed by H^{1/2} selects item 1 and only one item from 2 and 3, which yields z^H/z* = (M + 2)/(2M) → 1/2 as M → ∞.

The adaptation of H^{1/2} to E-kKP is quite easy. Using the same notation as in the description of H^{1/2}, observe that in the E-kKP case J_F has cardinality zero or two. In the second case, the heuristic solution returned is the best among J₁ ∪ {j} and {i} ∪ R_i, where R_i is the set of the k − 1 items with smallest weight in N \ {i}, which can be determined in O(n) time by partial sorting techniques [1].

Our PTAS for kKP (and E-kKP) is based on heuristic H^{1/2}, and has the same flavor as the classical approximation schemes by Sahni [21] for KP and by Oguz and Magazine [16] and Frieze and Clarke [5] for m-DKP.

Algorithm PTASkKP:
1. Let ε ≤ 1/2 be the required accuracy, and define ℓ := min{⌈1/ε⌉ − 2, k}. Initialize the solution H to be the empty set and set the corresponding value z^H to 0.
2. Consider each L ⊆ N such that |L| ≤ ℓ − 1. If w(L) ≤ c and p(L) > z^H, let H := L and z^H := p(L).
3. Consider each L ⊆ N such that |L| = ℓ. If w(L) ≤ c, apply algorithm H^{1/2} to the subinstance S defined by item set {k ∈ N \ L : p_k ≤ min_{j∈L} p_j}, by capacity c − w(L) and by the cardinality upper bound k − ℓ. Let T and z^H_S denote the solution and the solution value returned by H^{1/2}. If p(L) + z^H_S > z^H, let H := L ∪ T and z^H := p(L) + z^H_S.
4. Return solution H of value z^H.

The main difference from the scheme of Sahni [21] is that in Step 3 we complete the partial solution L by considering only items whose profit is at most equal to the smallest profit of the items in L, following [5,16].
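The enumeration skeleton of PTASkKP can be sketched as follows. For brevity this sketch replaces the LP-based H^{1/2} of Step 3 by a simple greedy completion compared against the best single item, so it illustrates the structure of the scheme but does not carry the proven approximation guarantee; it assumes ε < 1/2, so that ℓ ≥ 1. All names are ours.

```python
import math
from itertools import combinations

def greedy_completion(items, cap, card):
    """Stand-in for H^{1/2}: greedy by profit/weight ratio, compared against the
    best single item, respecting both the capacity and the cardinality bound."""
    chosen, w_used, p_used = [], 0, 0
    for j, p, w in sorted(items, key=lambda t: t[1] / t[2], reverse=True):
        if len(chosen) < card and w_used + w <= cap:
            chosen.append(j); w_used += w; p_used += p
    best_set, best_val = chosen, p_used
    for j, p, w in items:
        if card >= 1 and w <= cap and p > best_val:
            best_set, best_val = [j], p
    return best_set, best_val

def ptas_kkp(profits, weights, c, k, eps):
    """Sketch of the PTASkKP enumeration for eps < 1/2 (so that ell >= 1)."""
    n = len(profits)
    ell = min(math.ceil(1 / eps) - 2, k)
    best_set, best_val = [], 0
    # Step 2: try every subset L with |L| <= ell - 1 directly.
    for r in range(ell):
        for L in combinations(range(n), r):
            if sum(weights[j] for j in L) <= c:
                pL = sum(profits[j] for j in L)
                if pL > best_val:
                    best_set, best_val = list(L), pL
    # Step 3: complete every subset L with |L| = ell heuristically, using only
    # items no more profitable than the cheapest item of L.
    for L in combinations(range(n), ell):
        wL = sum(weights[j] for j in L)
        if wL > c:
            continue
        pL = sum(profits[j] for j in L)
        pmin = min(profits[j] for j in L)
        rest = [(j, profits[j], weights[j]) for j in range(n)
                if j not in L and profits[j] <= pmin]
        T, pT = greedy_completion(rest, c - wL, k - ell)
        if pL + pT > best_val:
            best_set, best_val = list(L) + T, pL + pT
    return best_set, best_val
```

The guess of the ℓ most profitable items of an optimal solution is what the analysis below exploits; the completion heuristic only ever sees items of smaller profit.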

Proposition 3. PTASkKP is a PTAS for kKP which runs in O(n^{⌈1/ε⌉−1}) time and requires linear space.

Proof. As defined in Step 1, let ℓ := min{⌈1/ε⌉ − 2, k}. The time and space requirements follow from the fact that algorithm H^{1/2} and the definition of subinstance S by partial sorting require O(n) time and are executed O(n^ℓ) times in Step 3, whereas in Step 2 the algorithm considers O(n^{ℓ−1}) subsets, for each one performing operations that clearly require O(n) time. What remains to be shown is that PTASkKP is a (1−ε) approximation algorithm. Clearly, if an optimal solution contains at most ℓ items, the solution returned by PTASkKP is optimal. Otherwise, let {j*₁, j*₂, ..., j*_ℓ, ...} be the set of items in an optimal solution, ordered so that p_{j*₁} ≥ p_{j*₂} ≥ ... ≥ p_{j*_ℓ} ≥ .... In one of the iterations of Step 3, the set L considered by PTASkKP is L* := {j*₁, j*₂, ..., j*_ℓ}. Let S* be the corresponding subinstance, on which algorithm H^{1/2} is applied, returning a solution of value z^H_{S*}. The optimal solution value is clearly given by z* = p(L*) + z*_{S*}, where z*_{S*} is the optimal solution value for S*. As z^H ≥ p(L*) + z^H_{S*}, it is sufficient to show that p(L*) + z^H_{S*} ≥ ((ℓ+1)/(ℓ+2)) z*, and then the claim follows from the definition of ℓ. We distinguish between two cases.

(i) p(L*) ≥ (ℓ/(ℓ+2)) z*. From Proposition 2 we get

p(L*) + z^H_{S*} ≥ p(L*) + (1/2) z*_{S*}
               = p(L*) + (1/2)(z* − p(L*))
               = (1/2)(z* + p(L*))
               ≥ (1/2)(z* + (ℓ/(ℓ+2)) z*)
               = ((ℓ+1)/(ℓ+2)) z*.

(ii) p(L*) < (ℓ/(ℓ+2)) z*. Obviously, the smallest profit of an item in L*, and hence all the profits of items in S*, are smaller than z*/(ℓ+2). We can assume that the solution of the LP relaxation of subinstance S* is fractional, otherwise the solution returned by PTASkKP is optimal. In particular, let p_{i*} be the profit of the item of larger weight whose associated variable is fractional, and p(J*₁) be the sum of the profits of the items with variable at 1 in the LP solution (see algorithm H^{1/2}). Due to Lemma 1 one has

z* = p(L*) + z*_{S*} ≤ p(L*) + p(J*₁) + p_{i*} ≤ p(L*) + z^H_{S*} + p_{i*} ≤ p(L*) + z^H_{S*} + z*/(ℓ+2),

and hence

((ℓ+1)/(ℓ+2)) z* ≤ p(L*) + z^H_{S*}. □

A class of instances for which the approximation ratio can be arbitrarily close to (ℓ+1)/(ℓ+2) for a given ℓ is defined by n = ℓ+3, p₁ = 2, w₁ = 1, p₂ = ... = p_n = M, w₂ = ... = w_n = M, k = n, and c = (ℓ+2)M, where M is some "big" integer. The optimal solution consists of all ℓ+2 big items, whereas the best solution computed by PTASkKP selects item 1 and only ℓ+1 big items, which yields

z^H/z* = ((ℓ+1)M + 2)/((ℓ+2)M) → (ℓ+1)/(ℓ+2)  as M → ∞.

Again, algorithm PTASkKP can be easily modified to handle E-kKP. In particular, Step 2 can be skipped, whereas in Step 3 one has to replace the call to heuristic H^{1/2} by a call to the corresponding heuristic for E-kKP. The call must in fact be preceded by a preprocessing of subinstance S, which removes the items j such that w_j plus the smallest k − ℓ − 1 weights of items in S other than j exceeds the capacity. Such a preprocessing can be done in O(n) time by partial sorting techniques. The time and space complexity and the approximation analysis of the resulting algorithm are essentially unchanged.

3. Improved PTAS for KP and m-DKP

The main difference between the PTAS proposed in the previous section and the PTASs presented in the literature for KP [21] and m-DKP [5,16] is the fact that we use procedure H^{1/2} instead of simply rounding down the LP solution. This modification yields improved PTASs also for KP and m-DKP, in the sense that we get better approximations for a given asymptotic running time bound. In fact, for KP we can make additional considerations to further reduce the complexity of our scheme.

The straightforward adaptation of PTASkKP to KP just requires running the algorithm with the kKP instance obtained from a KP by setting the cardinality upper bound k := n. This would already constitute an improvement with respect to the scheme proposed by Sahni [21]. As anticipated, we can do better by exploiting the structure of optimal LP solutions of KP. Namely, it is well known that an LP solution of KP can be determined by sorting the items by decreasing profit/weight ratios, and this ordering solves the LP relaxation of KP for any given capacity. Let PTASKP be the algorithm derived from PTASkKP as follows, assuming ε ≤ 1/3.

In Step 1, we set ℓ := min{⌈1/ε⌉ − 2, n} (ℓ ≥ 3) and sort and store the items according to increasing profits. Furthermore, items are also sorted both by decreasing weights and by decreasing profit/weight ratios, and their consecutive indices according to these sortings are stored in arrays W and PW, respectively. This step requires O(n log n) time.

In Step 3, we replace H^{1/2} by its (straightforward) counterpart for KP, which returns the best solution among the one defined by the set of items with variable at 1 in the solution of the LP relaxation and the one defined by the item with fractional variable (if any). Moreover, the enumeration of the subsets L of cardinality ℓ and the corresponding computation of a heuristic solution is carried out in the following way. Let every subset L consist of {i₁, i₂, ..., i_ℓ} and assume p₁ ≤ p₂ ≤ ... ≤ p_n.

For i₁ := 1, ..., n − ℓ + 1:
    Let S := {1, ..., i₁ − 1} and sort the items in S in increasing order of profit/weight ratio in O(n) time by using PW.
    For i_{ℓ−1} := i₁ + ℓ − 2, ..., n − 1:
        For every T ⊆ {i₁ + 1, ..., i_{ℓ−1} − 1} with |T| = ℓ − 3:
            Let the array R contain the items {i_{ℓ−1} + 1, ..., n} sorted in decreasing order of weight by using W.
            Let i_ℓ := R[1] and compute the LP solution and the associated heuristic solution for a knapsack with item set S and capacity c − w(L).
            For i_ℓ := R[2], ..., R[n − i_{ℓ−1}]:



                Recompute the LP solution and the associated heuristic solution for a knapsack with item set S and capacity c − w(L), starting from the previous solution.

In an outer loop, we let the item i₁ with smallest profit in the current set L go from 1 to n − ℓ + 1. For each i₁, we let the item in L with the second largest profit, i.e. item i_{ℓ−1}, go through all possible indices from i₁ + ℓ − 2 to n − 1, and iteratively consider all subsets T of ℓ − 3 items within {i₁ + 1, ..., i_{ℓ−1} − 1}. These subsets, jointly with i₁ and i_{ℓ−1}, form O(n^{ℓ−1}) subsets to consider. For each such subset we let the largest item in L, i_ℓ, go through all indices from i_{ℓ−1} + 1 to n, in decreasing order of weight. Now we can compute the LP solution, and therefore the heuristic one, consecutively for all values of i_ℓ by inserting the items in S into a knapsack of capacity c − w(L) according to their order. As soon as the capacity is filled, we compute the heuristic solution and choose the next i_ℓ, which yields a capacity c − w(L) not smaller than before. Therefore, we can continue the computation of the LP solution from the point it was stopped for the previous set L, and hence the heuristic solutions for all possible items i_ℓ can indeed be determined in linear time. Overall, we get a running time bound of O(n^ℓ) for Step 3, whereas three calls to a sorting procedure are required in Step 1. By using the same analysis as for PTASkKP, we have

Proposition 4. PTASKP is a PTAS for KP which runs in O(n^{⌈1/ε⌉−2} + n log n) time, and requires linear space.

To compare our algorithm with the Sahni PTAS [21], notice that the running time complexity of the latter is O(n^{⌈1/ε⌉}). In other words, for ℓ ≥ 2, to achieve an approximation ratio of (ℓ+1)/(ℓ+2) the Sahni scheme requires a running time of O(n^{ℓ+2}), whereas our algorithm runs in O(n^ℓ) time. For example, with a still reasonable running time of O(n³) we get an approximation ratio of 4/5 compared to the previous 2/3. As both PTASs are practical only for very small values of ℓ, this speedup by a factor of O(n²) is a considerable gain in performance and enables one to get a solution with an approximation ratio "improved by two levels" within the same asymptotic running time (cf. Table 1).

In order to extend our PTAS and the corresponding analysis to m-DKP for any fixed m, we initially modify procedure H^{1/2} so as to get its obvious extension H^{1/(m+1)}, which solves the LP relaxation of m-DKP determining a basic solution with at most m fractional components. Again, let J₁ and J_F = {i₁, ..., i_k}, k ≤ m, contain respectively the indices of the variables at 1 and with fractional value in the solution. The solution returned by H^{1/(m+1)} is the best one among J₁, {i₁}, ..., {i_k}, of value max{p(J₁), p_{i₁}, ..., p_{i_k}}. It was shown by Megiddo and Tamir [14] that the LP relaxation of m-DKP can be solved in O(n) time (recall that we assume m to be fixed). Hence, an immediate counterpart of Proposition 2 is the following.

Proposition 5. H^{1/(m+1)} is a 1/(m+1)-approximation algorithm for m-DKP and runs in O(n) time.

Table 1
Comparison of our PTASs and the previous ones for KP, kKP and 2-DKP

KP      Complexity       O(n)   O(n log n)   O(n²)   O(n³)   ...   O(n^ℓ)
        Previous [21]    1/2    1/2          1/2     2/3     ...   (ℓ−1)/ℓ
        Ours             1/2    2/3          3/4     4/5     ...   (ℓ+1)/(ℓ+2)

kKP     Complexity       O(n)   O(n²)        O(n³)   ...     O(n^ℓ)
        Previous [5,16]  −      1/3          1/2     ...     (ℓ−1)/(ℓ+1)
        Ours             1/2    2/3          3/4     ...     ℓ/(ℓ+1)

2-DKP   Complexity       O(n)   O(n²)        O(n³)   ...     O(n^ℓ)
        Previous [5,16]  −      1/3          1/2     ...     (ℓ−1)/(ℓ+1)
        Ours             1/3    1/2          3/5     ...     ℓ/(ℓ+2)



The corresponding PTAS, called PTASm-DKP, is obtained by modifying PTASkKP, defining ℓ := min{⌈m/ε⌉ − (m+1), n} in Step 1, and replacing the call to H^{1/2} by a call to H^{1/(m+1)} in Step 3, after having removed the items whose weight exceeds the residual knapsack capacity in some of the dimensions.

Proposition 6. PTASm-DKP is a PTAS for m-DKP which runs in O(n^{⌈m/ε⌉−m}) time.

Proof. Analogous to the proof of Proposition 3. In particular, in the analysis of the approximation ratio one has to show that

p(L*) + z^H_{S*} ≥ ((ℓ+1)/(ℓ+m+1)) z*.

This is done again by considering two cases, namely

p(L*) ≥ (ℓ/(ℓ+m+1)) z*  and  p(L*) < (ℓ/(ℓ+m+1)) z*. □

The comparison with the previously proposed PTASs for m-DKP [5,16] shows that our scheme guarantees the same approximation ratio with an asymptotic running time reduced by a factor of n.

We give an overview of our improvements in Table 1, where we report the approximation achieved for a fixed running time bound by our PTASs and by the previous ones for KP, kKP and 2-DKP. For KP we consider the Sahni PTAS for time complexities ≥ O(n²) and the classical greedy algorithm for lower complexities.

4. Dynamic programming and an FPTAS for kKP and E-kKP

In this section we first describe how to solve kKP and E-kKP to optimality by a pseudopolynomial dynamic programming scheme which is an immediate adaptation of the schemes for KP. The presented scheme is then used to derive an FPTAS for both problems. Note that dynamic programming algorithms for KP with cardinality constraints are also considered in [17,18].

Let b be an upper bound on the optimal solution value. A straightforward dynamic programming recursion, which has time complexity O(nkb) and space complexity O(k²b), can be stated as follows. Denote by g_i(a, ℓ), for i = 1, ..., n, ℓ = 1, ..., k, a = 0, ..., b, the optimal solution value of the following problem:

g_i(a, ℓ) := min { Σ_{j=1}^{i} w_j x_j : Σ_{j=1}^{i} p_j x_j = a, Σ_{j=1}^{i} x_j = ℓ, x_j ∈ {0, 1}, j = 1, ..., i }.     (8)

One initially sets g_0(a, ℓ) := +∞ for all ℓ = 0, ..., k, a = 0, ..., b, and then g_0(0, 0) := 0. Then, for i = 1, ..., n the entries of g_i can be computed from those of g_{i−1} by using the formula

g_i(a, ℓ) := min { g_{i−1}(a, ℓ), g_{i−1}(a − p_i, ℓ − 1) + w_i }   if ℓ > 0 and a ≥ p_i,
g_i(a, ℓ) := g_{i−1}(a, ℓ)                                          otherwise.              (9)

The optimal solution value of E-kKP is given by max_{a=0,...,b} {a : g_n(a, k) ≤ c}, whereas the optimal solution value of kKP is max_{a=0,...,b; ℓ=0,...,k} {a : g_n(a, ℓ) ≤ c}.

To achieve the stated time and space complexities we observe that only the entries of g_{i−1} need to be stored in order to derive g_i. However, to determine the optimal solution vector we must save some previous entries as follows. Every modified entry g_i(a, ℓ) is generated from a previous entry g_{i−1}(a − p_i, ℓ − 1) by adding a new item i in recursion (9). Hence, associated with g_i(a, ℓ) we store a pointer to the added item and a pointer to the previous entry in g_{i−1}. In this way, the representation of the solution vectors can be seen as a directed rooted tree with nodes corresponding to items. Some nodes are linked to a function entry in g_i, and the nodes on the path from the root of the tree to such a linked node represent the corresponding set of chosen items. Clearly, each of the kb function entries may contain a list of at most k items, which yields the O(k²b) space bound.

After each iteration of (9) we also have to release the nodes in g_{i−1} which are no longer part of a solution vector. Thus we keep a counter in each node indicating the number of immediate child nodes. Moving from stage i − 1 to i, each counter associated with g_{i−1}(a, ℓ) is decreased by one. If the counter reaches 0, the node is put back into the pool of unused memory (organized as a garbage list) and its predecessor's counter is also decreased, with the obvious recursive continuation.

To get the time complexity of O(nkb), observe that recursion (9) demands nkb operations, and at each update only one counter is increased in some node. Hence, in total we can also have at most nkb decrements of counters (and thus freeings of nodes) throughout the algorithm.

By using the dynamic programming scheme above, we can devise a simple FPTAS for KP and kKP, referred to as FPTAS_kKP. Let z^H be the solution value returned by heuristic H^{1/2}. Scale the item profit space by replacing each value p_j by q_j := ⌈p_j k/(z^H ε)⌉, where ε is the accuracy required, and set the upper bound b for the new instance to 2⌈k/ε⌉ + k (this upper bound is correct since 2z^H is an upper bound for the original instance, and the optimal solution values before and after rounding up differ by at most k). Then apply the above dynamic programming scheme, and return as heuristic solution the optimal solution for the scaled profits. It is easy to see that

Proposition 7. FPTAS_kKP is an FPTAS for kKP which runs in O(nk²/ε) time and requires O(n + k³/ε) space.

Proof. The time and space requirements are easily computed by plugging the value of b into the time and space requirements of the dynamic programming scheme. We still have to show that the solution value returned is within (1 − ε) of the optimum. Let T* and T̃* be an optimal item set for the original and for the scaled problem, respectively, and observe that T̃* is a feasible solution of the original problem. Using the notation q(S) := Σ_{j∈S} q_j, we further observe that q(T̃*) ≥ q(T*). Using the fact that

(z^H ε/k)(q_j − 1) < p_j ≤ (z^H ε/k) q_j

we obtain

p(T*) − p(T̃*) ≤ (z^H ε/k)(q(T*) + |T̃*| − q(T̃*)) ≤ (z^H ε/k)|T̃*| ≤ ε z*. □

It is straightforward to check that the above scheme works for E-kKP as well. Note that the scheme may be further improved by using more complicated techniques as given in [11]; however, it is beyond the scope of this paper to elaborate all the details presented in that paper.

5. Approximation algorithms for kSSP

As kSSP is a special case of kKP, all the approximation algorithms presented in the previous section are suited for kSSP as well. In fact, by exploiting the fact that weights and profits coincide, it is possible to derive considerably improved approximation schemes for this latter problem. The key result for the derivation of such algorithms is Lemma 8 below.

All the algorithms presented in this section split the set of items into two subsets. More precisely, let ε ∈ (0, 1) be the accuracy required: the set of small items S contains the items whose weight does not exceed εc, whereas the set of large items L contains the remaining ones, whose weight is larger than εc. The first obvious observation is that at most ℓ := min{⌈1/ε⌉ − 1, k} large items can be selected by a feasible solution. On the other hand, an accuracy of ε in the solution of the subinstance defined by the items in L is sufficient to guarantee the same accuracy for the overall instance, as explained in the sequel.

We start by describing a simple linear-time greedy procedure SUB(c′, k′) which returns a feasible solution of cardinality at most k′ for the subinstance defined by the small items and capacity c′:

Procedure SUB(c′, k′):
1. Let T be the set of the k′ items in S with largest weight.
2. If w(T) ≤ c′, return T.
3. Otherwise, remove iteratively an arbitrary item from T until w(T) ≤ c′ and then return T.
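A minimal Python sketch of SUB follows (naming is ours; the paper obtains the k′ heaviest items in O(n) time by median-based partial selection, whereas this sketch simply sorts for brevity):

```python
def sub(small_weights, cap, k_prime):
    """SUB(c', k'): take the k' heaviest small items, then drop items
    until the remaining set fits into the capacity cap."""
    # Step 1: the k' small items of largest weight
    T = sorted(small_weights, reverse=True)[:k_prime]
    # Steps 2-3: while infeasible, remove an (arbitrary) item --
    # here the lightest one currently in T
    while sum(T) > cap:
        T.pop()
    return T
```

For example, sub([3, 5, 2, 4], 8, 3) first selects weights [5, 4, 3] and then drops items until the total fits, returning [5].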

A large item heuristic for kSSP is a heuristic algorithm structured as follows. For each value ℓ′ ∈ {0, ..., ℓ} the algorithm determines a heuristic solution L(ℓ′) ∪ S(ℓ′), where L(ℓ′) is a solution of cardinality ℓ′ for the subinstance defined by the large items, while S(ℓ′) is the solution returned by SUB(c − w(L(ℓ′)), k − ℓ′). Then the algorithm outputs the best solution among the ℓ + 1 computed, of value, say, z^H. For ℓ′ = 0, ..., ℓ, let z^H(ℓ′) := w(L(ℓ′)) and denote by z*(ℓ′) the value of the best solution of cardinality ℓ′ for the kSSP-subinstance defined by the large items.

Lemma 8. Let H be a large item heuristic for kSSP. If z^H(ℓ′) ≥ (1 − ε) z*(ℓ′) for all ℓ′ ∈ {0, ..., ℓ}, then H is a (1 − ε)-approximation algorithm.

Proof. Let ℓ* be the number of large items in an optimal solution of kSSP. Let z*_L denote the total weight of these ℓ* large items and z*_S the weight of the corresponding small items, respectively. Of course, z* = z*_L + z*_S and z*_L ≤ z*(ℓ*).

Consider the iteration of H for which ℓ′ = ℓ*. If procedure SUB(c − w(L(ℓ*)), k − ℓ*) performs Step 3, i.e. some small items must be removed in order to satisfy the capacity constraint, then

z^H ≥ z^H(ℓ*) + w(S(ℓ*)) ≥ (1 − ε)c ≥ (1 − ε)z*.

Otherwise, procedure SUB(c − w(L(ℓ*)), k − ℓ*) stops at Step 2, i.e. the k − ℓ* largest small items do not completely fill the capacity. In this case, z*_S ≤ w(S(ℓ*)), and hence

z^H ≥ z^H(ℓ*) + w(S(ℓ*)) ≥ (1 − ε)z*(ℓ*) + z*_S ≥ (1 − ε)z*_L + z*_S ≥ (1 − ε)z*. □
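To make the structure of a large item heuristic concrete, the following Python sketch uses exhaustive enumeration over the large items as a stand-in for the (approximate) large-item solvers the paper actually plugs in; all names are ours, and the enumeration is only illustrative since it is exponential in ℓ:

```python
import math
from itertools import combinations

def large_item_heuristic(weights, c, k, eps):
    """For each l' = 0..l, pick a maximum-weight feasible set L(l') of l'
    large items (brute-force placeholder solver) and complete it with
    small items as SUB(c - w(L(l')), k - l') would."""
    large = [w for w in weights if w > eps * c]
    small = sorted((w for w in weights if w <= eps * c), reverse=True)
    l_max = min(math.ceil(1 / eps) - 1, k, len(large))
    best = []
    for lp in range(l_max + 1):
        # feasible large-item sets of cardinality lp
        cands = [s for s in combinations(large, lp) if sum(s) <= c]
        if not cands:
            continue
        L = max(cands, key=sum)
        T = small[:k - lp]          # the k - l' heaviest small items
        while sum(L) + sum(T) > c:  # Step 3 of SUB
            T.pop()
        sol = list(L) + T
        if sum(sol) > sum(best):
            best = sol
    return best
```

With weights (6, 5, 4, 2, 1), c = 10, k = 3 and ε = 0.3, the sketch returns a solution of total weight 10 (the large items of weight 6 and 4), which here is optimal.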

As an illustration of the above result, we present a simple 3/4-approximation linear-time heuristic H^{3/4}.

Algorithm H^{3/4}:
1. Initialize L := {j ∈ N : w_j > c/4}, S := {j ∈ N : w_j ≤ c/4} and ℓ := min{3, k, |L|}.
2. Let H(0) be the solution returned by SUB(c, k).
3. Let H(1) be the solution defined by the union of the largest item i in L and the solution returned by SUB(c − w_i, k − 1).
4. If ℓ < 2 go to 6. Otherwise, let i, j be the two largest items in L with weight ≤ c/2. If two such items exist let L(2) := {i, j}, otherwise let L(2) := ∅. Let h be the smallest item in L and let k̄ be the smallest item in L with weight ≥ c/2. If such an item k̄ exists and w(L(2)) < w_h + w_k̄ ≤ c, update L(2) := {h, k̄}. If L(2) ≠ ∅, let H(2) be the solution defined by the union of L(2) and the solution returned by SUB(c − w(L(2)), k − 2), otherwise let H(2) := ∅.
5. If ℓ < 3 go to 6. Otherwise, let L(3) be the set of the three smallest items in L. If w(L(3)) ≤ c, let H(3) be the solution defined by the union of L(3) and the solution returned by SUB(c − w(L(3)), k − 3), otherwise let H(3) := ∅.
6. Return the best solution among H(ℓ′), ℓ′ = 0, 1, 2, 3.

Proposition 9. H^{3/4} is a 3/4-approximation algorithm for kSSP and runs in O(n) time.

Proof. The running-time bound follows from the fact that SUB can be performed in linear time by standard partial-sorting algorithms. To complete the proof, one can readily check that for every ℓ′ = 0, 1, 2, 3 the large items selected have an overall weight at least equal to 3z*(ℓ′)/4. The claim then follows from Lemma 8. □

The best known PTAS for SSP requiring only linear storage is due to Fischetti [4], and can be briefly described as follows. For a given accuracy ε, the set of items is subdivided into small and large items as defined above. Furthermore, the set L of large items is partitioned into a minimal number q of buckets B_1, ..., B_q such that the difference between the smallest and the biggest item in a bucket is at most εc. Clearly, q ≤ 1/ε − 1. A q-tuple (m_1, ..., m_q) is called proper if there exists a set L̃ ⊆ L with |L̃ ∩ B_i| = m_i (i = 1, ..., q) and w(L̃) ≤ c. Then a procedure LOCAL returns for any proper q-tuple a subset L̃ ⊆ L with |L̃ ∩ B_i| = m_i (i = 1, ..., q) and w(L̃) ≤ c such that w(L̃) ≥ (1 − ε)c or w(L̃) ≥ w(L̄) for each L̄ ⊆ L with |L̄ ∩ B_i| = m_i (i = 1, ..., q) and w(L̄) ≤ c. Procedure LOCAL is completed by adding small items in a greedy way. Since the q-tuple which corresponds to an optimal solution is not generally known, all possible values for m_1, ..., m_q are tried; therefore, only the current best solution value produced by LOCAL has to be stored. Any call of LOCAL can be done in linear time, and an upper bound on the number of proper q-tuples is given by 2^(⌈√(1/ε)⌉ − 3), which gives a running-time bound of O(n 2^(⌈√(1/ε)⌉ − 2)), whereas the memory requirement is only O(n), see [4]. In fact, a more careful analysis shows that the time complexity is also bounded by O(n log n + φ(1/ε)), where φ is a suitably defined (exponentially growing) function. Moreover, a (simplified) variant of the scheme runs in O(n + (1/ε) log²(1/ε) + φ(1/ε)) time, but requires O(n + 1/ε) space.

It is easy to adapt the above scheme and its variant to kSSP. Since the number of large items in a set L̃ which corresponds to (m_1, ..., m_q) is just m := Σ_{i=1}^{q} m_i, we only have to discard the tuples such that m > k and to replace the call to a greedy algorithm in LOCAL by a call to SUB(c − w(L̃), k − m). Time and storage requirements remain the same. Let PTAS_kSSP be the resulting scheme and PTAS′_kSSP its variant. The following propositions follow immediately from the results of [4] and Lemma 8.

Proposition 10. PTAS_kSSP is a PTAS for kSSP which runs in O(min{n 2^(⌈√(1/ε)⌉ − 2), n log n + φ(1/ε)}) time and requires linear space.

Proposition 11. PTAS′_kSSP is a PTAS for kSSP which runs in O(n + (1/ε) log²(1/ε) + φ(1/ε)) time and requires O(n + 1/ε) space.

We conclude this section by showing an exact algorithm and an FPTAS for kSSP. First, a special dynamic programming algorithm is presented.

Let g_i(d), for i = 1, ..., n, d = 0, ..., c, be the optimal solution value of the following kSSP defined on the first i items and with knapsack capacity d:

g_i(d) := min { ℓ : Σ_{j=1}^{i} w_j x_j = d, Σ_{j=1}^{i} x_j = ℓ, x_j ∈ {0, 1}, j = 1, ..., i },   (10)

where g_i(d) := n + 1 if no solution satisfies the two constraints. One initially sets g_0(0) := 0 and g_0(d) := n + 1 for d = 1, ..., c. Then, for i = 1, ..., n the entries of g_i are computed from those of g_{i−1} by using the recursion

g_i(d) := min { g_{i−1}(d), g_{i−1}(d − w_i) + 1 if d ≥ w_i },   (11)

and the optimal solution value of kSSP is given by z* = max_{d=0,...,c} {d : g_n(d) ≤ k}. This means that we compute for every reachable subset value the minimum number of items summing up to this value. The recursion has time complexity O(nc) and space complexity O(kc), improving by a factor of k over the recursion for kKP. Indeed, we solve the problem in O(nc) time for all cardinalities k′ ≤ k and all capacities c′ ≤ c.
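A compact Python sketch of recursion (10)–(11) follows (naming is ours; this one-dimensional version stores only the solution values, not the O(kc) bookkeeping needed to recover the chosen item set):

```python
def kssp_dp(w, c, k):
    """g[d] = minimum number of items whose weights sum to exactly d,
    with n + 1 marking an unreachable value, as in (10)-(11)."""
    n = len(w)
    g = [n + 1] * (c + 1)
    g[0] = 0  # the empty set reaches weight 0 with 0 items
    for wi in w:
        # downward loop so that each item is used at most once
        for d in range(c, wi - 1, -1):
            if g[d - wi] + 1 < g[d]:
                g[d] = g[d - wi] + 1
    # z* = largest weight sum reachable with at most k items
    return max(d for d in range(c + 1) if g[d] <= k)
```

For instance, kssp_dp([6, 5, 4, 2, 1], 10, 2) returns 10 (items of weight 6 and 4), while with k = 1 the result is 6.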

Unfortunately, we could not find a way to construct an FPTAS directly from this dynamic programming scheme. However, most classical FPTASs can be extended to kSSP by "expanding by a factor of k". In the following we briefly explain such an approach, called FPTAS_kSSP.

Given the accuracy ε, we partition the item set into big and small items as above, consider the subinstance defined by the big items in L, and perform the dynamic programming scheme (9) on intervals instead of all possible weight sums. Therefore, we let ℓ := min{⌈1/ε⌉ − 1, k} and partition the range of values [0, c] into ⌈1/ε⌉ intervals of equal length cε. Instead of determining every reachable weight sum in [0, c], we keep at most 2ℓ values for each interval. In particular, we consider a new item i by adding w_i to all currently stored weight sums, but then keep only the smallest and the largest weight sums for every interval and for every cardinality ℓ′ = 1, ..., ℓ of the associated subset. The corresponding modification of recursion (9) is easy to implement.


After performing the dynamic programming algorithm on the large items, consider for each ℓ′ = 0, ..., ℓ the item set L(ℓ′) corresponding to a value in a dynamic programming interval with largest weight among all such sets of cardinality ℓ′, and complete this partial solution by adding the items returned by SUB(c − w(L(ℓ′)), k − ℓ′). The final solution is the best among these.

It is shown by induction in [9] that this yields an FPTAS. In fact, one can show that after considering a new item, for every weight sum in an optimal dynamic programming scheme there exist an upper and a lower bound in the above dynamic programming array which differ by at most cε.

Obviously, the size of the above dynamic programming array is O(k/ε) and each entry contains a subset of at most ℓ items. This yields the following improvement on Proposition 7.

Proposition 12. FPTAS_kSSP is an FPTAS for kSSP which runs in O(nℓ/ε) time and requires O(n + ℓ²/ε) space.

Assuming the more natural case of k ≤ 1/ε, one has an improvement by a factor of k in both time and space complexity. Again, this scheme may be further improved by using more sophisticated techniques, e.g. from [9], but presenting all the details is beyond the scope of this paper.

Acknowledgements

The hospitality of the Universities of Bologna and Graz is gratefully acknowledged. The work of the first author was partially supported by CNR and MURST, Italy. The work of the fourth author was partially supported by the EC Network DIMANET through the European Research Fellowship No. ERBCHRXCT-94 0429.

References

[1] N. Blum, R.W. Floyd, V. Pratt, R.L. Rivest, R.E. Tarjan, Time bounds for selection, Journal of Computer and System Sciences 7 (1973) 448–461.
[2] R.G. Downey, M.R. Fellows, Fixed-parameter tractability and completeness I: Basic results, SIAM Journal on Computing 24 (1995) 203–225.
[3] R.G. Downey, M.R. Fellows, Fixed-parameter tractability and completeness II: On completeness for W[1], Theoretical Computer Science 141 (1995) 109–131.
[4] M. Fischetti, A new linear storage polynomial-time approximation scheme for the subset-sum problem, Discrete Applied Mathematics 26 (1990) 61–77.
[5] A.M. Frieze, M.R.B. Clarke, Approximation algorithms for the m-dimensional 0–1 knapsack problem: Worst-case and probabilistic analyses, European Journal of Operational Research 15 (1984) 100–109.
[6] G.V. Gens, E.V. Levner, Fast approximation algorithms for knapsack type problems, in: K. Iracki, K. Malinowski, S. Walukiewicz (Eds.), Optimization Techniques, Part 2, Lecture Notes in Control and Information Sciences, vol. 23, Springer, Berlin, 1980, pp. 185–194.
[7] O.H. Ibarra, C.E. Kim, Fast approximation algorithms for the knapsack and sum of subset problems, Journal of the ACM 22 (1975) 463–468.
[8] H. Kellerer, U. Pferschy, A new fully polynomial approximation scheme for the knapsack problem, Journal of Combinatorial Optimization 3 (1999) 59–71.
[9] H. Kellerer, R. Mansini, U. Pferschy, M.G. Speranza, An efficient fully polynomial approximation scheme for the subset-sum problem, Technical Report, Faculty of Economics, University of Graz, submitted; see also Proceedings of the 8th ISAAC Symposium, Springer Lecture Notes in Computer Science 1350 (1997) 394–403.
[10] B. Korte, R. Schrader, On the existence of fast approximation schemes, in: O.L. Mangasarian, R.R. Meyer, S.M. Robinson (Eds.), Nonlinear Programming 4, Academic Press, 1981, pp. 415–437.
[11] E.L. Lawler, Fast approximation algorithms for knapsack problems, Mathematics of Operations Research 4 (1979) 339–356.
[12] M.J. Magazine, O. Oguz, A fully polynomial approximation algorithm for the 0–1 knapsack problem, European Journal of Operational Research 8 (1981) 270–273.
[13] S. Martello, P. Toth, Knapsack Problems, Wiley, Chichester, 1990.
[14] N. Megiddo, A. Tamir, Linear time algorithms for some separable quadratic programming problems, Operations Research Letters 13 (1993) 203–211.
[15] G.L. Nemhauser, L.A. Wolsey, Integer and Combinatorial Optimization, Wiley, New York, 1988.
[16] O. Oguz, M.J. Magazine, A polynomial time approximation algorithm for the multidimensional knapsack problem, Working Paper, University of Waterloo, Waterloo, Ont., 1980.
[17] U. Pferschy, D. Pisinger, G.J. Woeginger, Simple but efficient approaches for the collapsing knapsack problem, Discrete Applied Mathematics 77 (1997) 271–280.
[18] D. Pisinger, A fast algorithm for strongly correlated knapsack problems, Discrete Applied Mathematics 89 (1998) 197–212.
[19] D. Pisinger, P. Toth, Knapsack problems, in: D.Z. Du, P. Pardalos (Eds.), Handbook of Combinatorial Optimization, Kluwer, Norwell, MA, 1998, pp. 1–89.
[20] M.E. Posner, M. Guignard, The collapsing 0–1 knapsack problem, Mathematical Programming 15 (1978) 155–161.
[21] S. Sahni, Approximate algorithms for the knapsack problem, Journal of the ACM 22 (1975) 115–124.
[22] F. Vanderbeck, Computational study of a column generation algorithm for bin packing and cutting stock problems, TR-14/96, University of Cambridge, 1996.