
ARTICLE IN PRESS

Computers & Operations Research

www.elsevier.com/locate/cor

A case-based distance model for multiple criteria ABC analysis


Ye Chen^a, Kevin W. Li^b, D. Marc Kilgour^c, Keith W. Hipel^a,*

a Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada N2L 3G1
b Odette School of Business, University of Windsor, Windsor, ON, Canada N9B 3P4
c Department of Mathematics, Wilfrid Laurier University, Waterloo, ON, Canada N2L 3C5

Abstract

In ABC analysis, a well-known inventory planning and control technique, stock-keeping units (SKUs) are sorted into three categories. Traditionally, the sorting is based solely on annual dollar usage. The aim of this paper is to introduce a case-based multiple-criteria ABC analysis that improves on this approach by accounting for additional criteria, such as lead time and criticality of SKUs, thereby providing more managerial flexibility. Using decisions from cases as input, preferences over alternatives are represented intuitively using weighted Euclidean distances, which can be easily understood by a decision maker. Then a quadratic optimization program finds optimal classification thresholds. This system of multiple criteria decision aid is demonstrated using an illustrative case study.
© 2006 Elsevier Ltd. All rights reserved.
Keywords: Inventory management; ABC analysis; Multiple criteria decision aid; Case-based distance model; Euclidean distance

1. Introduction

Efficient and effective inventory management helps a firm maintain competitive advantage, especially in a time of accelerating globalization [1]. The number of stock-keeping units (SKUs) held by larger firms can easily reach tens of thousands. Clearly, it is not economically feasible to design an inventory management policy for each individual SKU. In addition, different SKUs may play quite different roles in the firm's business and, hence, necessitate different levels of management attention. In order to implement a sound inventory control scheme, it is necessary to group SKUs into manageable and meaningful categories first, and then design different policies for each group according to the group's importance to the firm [2]. Thus, a generic inventory management policy requiring a certain level of effort and control from management is applied to all items in each category. This aggregation process should dramatically reduce the number of SKUs requiring extensive management attention. ABC analysis is the most frequently used approach to classifying SKUs. This traditional method is based solely on annual dollar usage, reflecting the principle that a small proportion of SKUs accounts for a majority of the dollar usage. Classical ABC analysis follows from Pareto's famous observations on the uneven distribution of incomes [3], and hence is sometimes referred to as Pareto analysis. Because of its easy-to-implement nature and remarkable effectiveness in many inventory systems, this approach is still widely used in practice.
Corresponding author.

E-mail address: kwhipel@uwaterloo.ca (K.W. Hipel).

0305-0548/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.cor.2006.03.024


However, although annual dollar usage is a critical determinant of the importance of SKUs in an inventory system, many other criteria may also deserve management's attention, and hence affect the classification of SKUs. For instance, in the technology industries, some parts may become obsolete in a very short time and should therefore be closely monitored by inventory managers. In this case, obsolescence becomes a critical criterion for classifying SKUs. Other factors, such as length and variability of lead time, substitutability, reparability and criticality, may also affect management's decision [4]. Various multiple criteria ABC analysis (MCABC) methods have been developed to complement classical ABC analysis, including some based on the AHP (analytic hierarchy process) [5,6], statistical analysis [7], weighted linear programming [8], artificial neural networks [9], and genetic algorithms [10], to name a few. The approach presented in this paper is motivated by the work of Flores and Whybark [4], wherein dollar usage is combined with another criterion relevant to the firm's inventory system. But their approach cannot handle situations in which three or more criteria must be taken into account at the same time to classify all SKUs. Our research aims to lift this restriction and allow any finite number of criteria to be considered simultaneously. Moreover, in our approach criterion weights and sorting thresholds are generated mathematically based on the decision maker's assessment of a case set and, therefore, difficulties associated with the direct acquisition of preference information are avoided. The remainder of the paper is organized as follows. Section 2 summarizes classical ABC analysis and Flores and Whybark's MCABC extension. Section 3 provides an introduction to multiple criteria decision aid (MCDA) and describes its connection with MCABC. Section 4 proposes a case-based distance model for MCABC.
Then a case study is carried out to demonstrate the proposed model in Section 5. Finally, the paper concludes with some comments in Section 6.

2. ABC analysis and its extensions

Classical ABC analysis aggregates SKUs into groups based solely on annual dollar usage. The most important SKUs in terms of dollar usage are placed in group A, which demands the greatest effort and attention from management; the least important SKUs fall into group C, where minimal effort is applied; the remaining SKUs belong to the middle group B. The 80-20 (or 90-10) rule, that 80% (or 90%) of total annual usage comes from 20% (or 10%) of SKUs, constitutes the basis of classical ABC analysis. This rule suggests that the number of SKUs in A is substantially smaller than the total number of SKUs. Although exact values vary from industry to industry, the 80-20 rule applies to many real-world situations. Fig. 1 captures the essence of this rule. The classification obtained from ABC analysis is sometimes subject to further adjustments. For example, the dollar usage of some SKUs may not be significant, but their stock-out cost may be extremely high; other SKUs may have high dollar usage, but sufficient and consistent supply. In these cases, SKUs may appropriately be switched to another group. The relevance of this re-classification process is that some criteria, other than dollar usage, may come into play in determining how much attention should be paid to specific SKUs. Flores and Whybark [4] proposed a multiple criteria framework to handle ABC analysis and applied it to a service organization and a manufacturing firm [11]. This approach begins with selecting another critical criterion, in addition to dollar usage. This criterion, which depends on the nature of the industry, may be obsolescence, lead time,

Fig. 1. Example of dollar usage distribution curve. (The curve plots the cumulative percentage of dollar usage against the cumulative percentage of SKUs, reaching 80% of usage at roughly 20% of SKUs.)
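For illustration (not part of the original paper), the dollar-usage ranking behind Fig. 1 can be sketched in Python; the cumulative cut-offs of 80% and 95% and the function name are our own assumptions, since practical cut-offs vary by industry:

```python
def classic_abc(usage, a_cut=0.80, b_cut=0.95):
    """Classical ABC analysis: rank SKUs by annual dollar usage and split
    them at cumulative-usage cut-offs (illustrative values)."""
    total = sum(usage.values())
    groups, cum = {}, 0.0
    # Process SKUs from highest to lowest annual dollar usage.
    for sku, dollars in sorted(usage.items(), key=lambda kv: -kv[1]):
        cum += dollars / total
        groups[sku] = "A" if cum <= a_cut else ("B" if cum <= b_cut else "C")
    return groups

# A tiny example using four SKUs from Table 1 of the case study.
print(classic_abc({"S1": 5840.64, "S2": 5670.00, "S11": 1075.20, "S47": 25.38}))
```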


Fig. 2. Joint matrix for two criteria (adapted from [4]):

                          Second Critical Criterion
                            A       B       C
      Dollar Usage    A    AA      AB      AC
                      B    BA      BB      BC
                      C    CA      CB      CC
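For illustration, the regrouping rule of the Fig. 2 joint matrix can be written as a small lookup; the function name and set-based encoding are ours:

```python
def flores_whybark_group(dollar_grade, second_grade):
    """Map a (dollar usage, second criterion) grade pair to a final group,
    per the joint-matrix rule of Fig. 2:
    AB and BA join AA; AC, CA and BB form BB; BC, CB and CC form CC."""
    pair = {dollar_grade, second_grade}          # unordered: AB == BA
    if pair == {"A"} or pair == {"A", "B"}:
        return "AA"
    if pair == {"B"} or pair == {"A", "C"}:
        return "BB"
    return "CC"
```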

substitutability, reparability, criticality or commonality [4]. Next, all SKUs are divided into three levels of importance, A, B, and C, with respect to each of the two criteria. The model then reclassifies SKUs into three categories, AA, BB, and CC, which represent the three inventory control groups, according to a simple rule. The structure of the model can be conveniently represented as a joint criteria matrix, as shown in Fig. 2 (adapted from [4]). As indicated by the arrows, the rule categorizes AB and BA with AA, AC and CA with BB, and BC and CB with CC.

3. Multiple criteria decision aid

A brief introduction to MCDA is provided next, as MCABC can be regarded as a special kind of MCDA problem. MCDA is a research area that aims to assist a single decision maker (DM) to choose, rank, or sort alternatives within a finite set according to two or more criteria [12]. MCDA begins with the processes of: (1) defining objectives; (2) arranging them into criteria and identifying all possible alternatives; and (3) measuring the consequences of each alternative on each criterion. A consequence is a direct measurement of the success of an alternative against a criterion (e.g. cost in dollars, capacity in millions of liters), and is usually an objective physical measurement that includes no preferential information. The basic structure of an MCDA problem is shown in Fig. 3. In this figure, N = {A^1, A^2, ..., A^i, ..., A^n} is the set of alternatives, and Q = {1, 2, ..., j, ..., q} is the set of criteria. The consequence on criterion j of alternative A^i is expressed as c_j(A^i) or c_j^i. The DM may conceive the decision problem in several ways. Roy [12] described these MCDA problématiques, which are applicable in the MCABC context:

Choice problématique: Choose the best alternative from N.
Sorting problématique: Sort the alternatives of N into relatively homogeneous groups that can be arranged in preference order. For example, MCABC can be regarded as a three-group sorting problem, which arranges SKUs into group A, B or C, where SKUs in group A require the most management attention, and those in C the least.
Ranking problématique: Rank the alternatives of N from best to worst.

The DM's preferences are crucial to the solution of any MCDA problem. There are two kinds of preference expressions: values (preferences on consequences) and weights (preferences on criteria). Values are refined from the consequence data to reflect the needs and objectives of the DM. The relation between consequences and values can be expressed symbolically as
v_j^i = f_j(c_j^i),   (1)

where c_j^i is the consequence of alternative i on criterion j, v_j^i is the value of alternative i on criterion j, and the mapping f_j(·) corresponds to the DM's objectives. Then, v(A^i) = (v_1^i, v_2^i, ..., v_q^i) is called the preference vector for alternative A^i. Preferences on criteria refer to expressions of the relative importance of the criteria to the DM. We assume that the weight for criterion j ∈ Q is w_j ∈ R, where w_j ≥ 0 for all j, and Σ_{j∈Q} w_j = 1. A typical weight vector is denoted w = (w_1, w_2, ..., w_j, ..., w_q), and the set of all possible weight vectors is denoted W ⊆ R^q.


Fig. 3. The structure of MCDA. (A matrix whose columns are the alternatives A^1, A^2, ..., A^i, ..., A^n, whose rows are the criteria 1, 2, ..., j, ..., q, and whose entries are the consequences c_j^i.)

After the MCDA problem has been structured as in Fig. 3, and after the DM's preferences have been acquired, the preferences are aggregated and the specified problem (choosing, ranking or sorting) can be solved. For all A^i ∈ N,

V(A^i) = F(v(A^i), w),   (2)

where V(A^i) is the evaluation of alternative A^i, and F(·, ·) is a real-valued mapping from the preference vector v(A^i) and the weight vector w to the evaluation result. Usually the solution of the MCDA problem (choice, ranking, or sorting) is based on the values of V(A^1), V(A^2), ..., V(A^n). For example, the linear additive value function, which is used in many practical applications, is defined as

V(A^i) = Σ_{j∈Q} w_j v_j(A^i).   (3)
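As a minimal sketch (names and the dict-per-criterion layout are ours), Eq. (3) amounts to a weighted sum over criteria:

```python
def additive_value(values, weights):
    """Linear additive value function of Eq. (3): V(A^i) = sum_j w_j * v_j(A^i).
    Both arguments map criterion names to numbers; weights must sum to one."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * values[j] for j, w in weights.items())

# E.g., an alternative scoring 0.9 on usage and 0.4 on lead time:
v = additive_value({"usage": 0.9, "lead": 0.4}, {"usage": 0.7, "lead": 0.3})
```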

4. A case-based distance model for MCABC

4.1. Multiple criteria sorting and case-based reasoning

With the evolution of MCDA and the appearance of powerful new tools to deal with classification, research on sorting in MCDA is now receiving more attention. For example, Doumpos and Zopounidis [13] wrote the first book on sorting in MCDA. Kilgour et al. [14] study the problem of screening (two-group sorting) alternatives in subset selection problems. Zopounidis and Doumpos [15] give a comprehensive literature review of sorting problems in MCDA. Chen et al. [16], and Malakooti and Yang [17], study, from different points of view, the extension of traditional sorting methods to multiple criteria nominal classification problems. The main difficulty with many existing MCDA methods lies in the acquisition of the DM's preference information in the form of values or weights. Case-based reasoning is an approach to finding preferential information using cases selected by the DM [18,19]. The choice of cases may involve: (1) past decisions taken by the DM; (2) decisions taken for a limited set of fictitious but realistic alternatives; or (3) decisions rendered for a representative subset of the alternatives under consideration, which are sufficiently familiar to the DM that they are easy to evaluate. A main advantage of case-based reasoning is that decision makers may prefer to make exemplary decisions rather than explain them in terms of specific functional model parameters [13]. Chen et al. [20] developed a case-based distance model for screening alternatives in MCDA and, later, Chen et al. [21] extended this idea to develop a procedure for sorting alternatives. In this paper, we refine the previous models and design a case-based distance procedure for MCABC.

Assume an MCABC problem is to classify the SKUs in N (|N| = n) into groups A, B, and C based on the criteria set Q (|Q| = q). Several representative SKUs for the MCABC problem are available, and they are partitioned into three categories T_A, T_B, and T_C, which represent case sets for groups A, B, and C, respectively. The number of SKUs in group g (g = A, B, C) is denoted n_g, and z_g^r denotes a representative SKU in T_g. The SKUs in the case sets may, for example, be fabricated by the DM, or obtained by having the DM modify historical records so that the case sets are representative for the DM. Note that all criteria in Q must apply, and c_j(A^i) must be measurable for all SKUs in the case sets and all j ∈ Q. Preference and indifference of SKUs are implied by the case sets. Two SKUs in the same group are equally preferred by the DM: for example, for z_B^k, z_B^l ∈ T_B, z_B^k ∼ z_B^l (∼ means the DM is indifferent between z_g^k and z_g^l); any


SKU in a more important group is preferred to any SKU in a less important group: for example, for z_A^k ∈ T_A and z_B^l ∈ T_B, z_A^k ≻ z_B^l (≻ denotes strict preference). Our case-based reasoning idea rests on a distance-based preference expression: cases should lie close to a pre-defined point, within a range, for their own group, and farther from that point than the cases of more preferred groups. We use T_g, g = A, B, C, to estimate the criterion weight vector w and a distance threshold vector R, so that this information can be applied to classify (sort) the SKUs in N. Fig. 4 provides an illustration of this idea. Two ellipses partition the SKUs into three case sets and represent preference sequences: ellipses closer to the reference point o represent more preferred groups. For z_A^r ∈ T_A, z_B^r ∈ T_B, and z_C^r ∈ T_C, z_A^r ≻ z_B^r ≻ z_C^r. Then, by a properly designed transformation from the original consequence data space to a weighted normalized consequence data space (preference space), ellipse-based distances can be transformed to circle-based distances and, accordingly, this information can be applied to classify the SKUs in N. Note that in Fig. 4, the distance of z_A^r from o is less than R_A, the distance of z_B^r is greater than R_A and less than R_B, and the distance of z_C^r is greater than R_B.

Fig. 4. The idea of a case-based distance model. (Concentric ellipses around the reference point o separate the A, B, and C case sets.)

4.2. Distance assumptions

Assuming that the DM's preferences over Q are monotonic, two kinds of criteria are defined as follows: (1) positive criteria, Q^+, for which the greater the consequence, the greater the DM's preference; and (2) negative criteria, Q^-, for which the smaller the consequence, the greater the DM's preference. Thus, Q = Q^+ ∪ Q^-. For example, the manager of a manufacturing company may treat the criterion of dollar usage as positive and the criterion of lead time as negative. Furthermore, the DM can identify the maximum consequence on criterion j (j ∈ Q), c_j^max ∈ R_+, and the minimum consequence, c_j^min ∈ R_+, where c_j^max > c_j^min. Note that c_j^max and c_j^min are extreme values for criterion j, so that the consequence of any SKU satisfies c_j^min ≤ c_j(A^i) ≤ c_j^max.


Two fictitious SKUs, the ideal SKU A^+ and the anti-ideal SKU A^-, are defined by

c_j(A^+) = c_j^max if j ∈ Q^+, and c_j^min if j ∈ Q^-;
c_j(A^-) = c_j^min if j ∈ Q^+, and c_j^max if j ∈ Q^-.

For j = 1, 2, ..., q, define d_j^max = (c_j^max − c_j^min)^2 to be the normalization factor for criterion j. For g = A, B, C and r = 1, 2, ..., n_g, the normalized distance between z_g^r ∈ T_g and A^+ on criterion j is

d_j(z_g^r, A^+) = d_j(z_g^r)^+ = (c_j(z_g^r) − c_j(A^+))^2 / d_j^max.   (4)

Note that (4) defines d_j(A^i)^+ if z_g^r = A^i, A^i ∈ N. Similarly, the distance between any SKU z_g^r ∈ T_g and A^- on criterion j is

d_j(z_g^r, A^-) = d_j(z_g^r)^- = (c_j(z_g^r) − c_j(A^-))^2 / d_j^max.   (5)

Note that (5) defines d_j(A^i)^- if z_g^r = A^i, A^i ∈ N. It is easy to verify that d_j(z_g^r)^+ ∈ [0, 1], d_j(A^i)^+ ∈ [0, 1], d_j(z_g^r)^- ∈ [0, 1], and d_j(A^i)^- ∈ [0, 1].

A weighted Euclidean distance has a clear geometric meaning, and is easily understood and accepted as a representation of the DM's aggregated preference. The relative order of non-negative numbers (distances) is the same as the relative order of their squares, so an ordering of an SKU set can be determined equally well using squares. Therefore, instead of Euclidean distances, we employ their squares, which are easier to compute while preserving order. The aggregated distance between z_g^r ∈ T_g and A^+ over the criteria set Q is

D(z_g^r, A^+) = D(z_g^r)^+ = Σ_{j∈Q} w_j^+ d_j(z_g^r)^+,   (6)

where w_j^+ ∈ w^+ is the A^+-based weight (relative importance) of criterion j. The weight vector w^+ is to be determined; it is assumed that 0 < w_j^+ ≤ 1 and Σ_{j∈Q} w_j^+ = 1. Similarly, the aggregated distance from z_g^r to A^- is

D(z_g^r, A^-) = D(z_g^r)^- = Σ_{j∈Q} w_j^- d_j(z_g^r)^-,   (7)

where w_j^- ∈ w^- is the A^--based weight of criterion j. Note that (6) and (7) define D(A^i)^+ and D(A^i)^-, respectively, if z_g^r = A^i, A^i ∈ N.

Based on the above definitions, the distance of an SKU from A^+ or A^- is applied to identify its group membership. In terms of the aggregation approach to MCDA discussed above, d_j(A^i)^+ is analogous to v_j^i in (1), and D(A^i)^+ is analogous to V(A^i) in (3). It is assumed that the closer A^i is to A^+, the greater the DM's preference; therefore, smaller values of D(A^i)^+ indicate a greater preference. The quantities d_j(A^i)^- and D(A^i)^- have the opposite interpretation.
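Eqs. (4)-(7) translate directly into code. The sketch below assumes (our convention, not the paper's) that consequences, reference points, extremes, and weights are stored per criterion name:

```python
def normalized_distance(c, c_ref, c_max, c_min):
    """Eqs. (4)-(5): squared gap between a consequence and the (anti-)ideal
    reference, normalized by d_max = (c_max - c_min)^2; result lies in [0, 1]."""
    return (c - c_ref) ** 2 / (c_max - c_min) ** 2

def aggregated_distance(consequences, ref, c_max, c_min, weights):
    """Eqs. (6)-(7): weighted sum of the normalized per-criterion distances."""
    return sum(
        w * normalized_distance(consequences[j], ref[j], c_max[j], c_min[j])
        for j, w in weights.items()
    )
```

For a positive criterion, `ref` is the criterion maximum when measuring distance to A^+, and the minimum when measuring distance to A^-.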

4.3. Model construction

Taking A^+ as the origin o, A^+-MCABC analysis is explained as follows: a compact ball (in q dimensions) with radius R_A^+ ∈ R_+ includes (in principle) every case in T_A, and any case that is not in T_A lies (in principle) outside that ball. Similarly, a compact ball with radius R_B^+ ∈ R_+ includes every case in T_A and T_B, and any case in T_C lies outside it. Therefore, R_A^+ and R_B^+ can be employed to classify the SKUs in N; Fig. 5 demonstrates this idea. Similarly,


Fig. 5. Relationships among R_A^+, R_B^+ and D(A^i)^+. (Along the distance axis from A^+, SKUs A^i ∈ N with D(A^i)^+ up to R_A^+ fall in group A, those up to R_B^+ in group B, and the remainder in group C.)

an A^--MCABC analysis can be developed by taking A^- as the origin, in which case a greater distance indicates a greater preference. A ball with radius R_C^- includes z_C^r ∈ T_C, and a ball with radius R_B^- includes z_B^r ∈ T_B and z_C^r ∈ T_C. The A^+-MCABC model construction is explained in detail next; since the procedure for A^--MCABC is similar, its details are omitted.

For j ∈ Q, w_j^+ refers to the DM's preference on criterion j, and represents the relative importance of criterion j within the aggregated distance. R_A^+ and R_B^+ stand for the thresholds that classify SKUs into different groups. Here, we obtain w^+ = (w_1^+, w_2^+, ..., w_q^+), R_A^+ and R_B^+ by a case-based reasoning model founded upon T_A, T_B, and T_C. For an MCABC problem, the SKUs in the case set T_A (belonging to A) are assessed by the DM, with the preference relationships described above, and they are preferred to the cases in T_B and T_C. Therefore, based on distance measurement from A^+, the following constraints are set (writing α_g^r for the upper-bound and β_g^r for the lower-bound error adjustment parameters):

The distance of z_A^r ∈ T_A to A^+ is less than R_A^+, provided that there are no inconsistent judgements. Thus, for r = 1, 2, ..., n_A,

D(z_A^r)^+ + α_A^r ≤ R_A^+   or   Σ_{j∈Q} w_j^+ d_j(z_A^r)^+ + α_A^r ≤ R_A^+,   (8)

where −1 ≤ α_A^r ≤ 0 is an upper-bound error adjustment parameter (keeping the distance of z_A^r less than R_A^+).

The distance of z_B^r ∈ T_B to A^+ is larger than R_A^+ and less than R_B^+, provided that there are no inconsistent judgements. Thus, for r = 1, 2, ..., n_B,

D(z_B^r)^+ + α_B^r ≤ R_B^+   or   Σ_{j∈Q} w_j^+ d_j(z_B^r)^+ + α_B^r ≤ R_B^+,   (9)

D(z_B^r)^+ + β_B^r ≥ R_A^+   or   Σ_{j∈Q} w_j^+ d_j(z_B^r)^+ + β_B^r ≥ R_A^+,   (10)

where −1 ≤ α_B^r ≤ 0 is an upper-bound error adjustment parameter (keeping the distance of z_B^r less than R_B^+) and 0 ≤ β_B^r ≤ 1 is a lower-bound error adjustment parameter (keeping the distance of z_B^r larger than R_A^+).

The distance of z_C^r ∈ T_C to A^+ is larger than R_B^+, provided that there are no inconsistent judgements. Thus, for r = 1, 2, ..., n_C,

D(z_C^r)^+ + β_C^r ≥ R_B^+   or   Σ_{j∈Q} w_j^+ d_j(z_C^r)^+ + β_C^r ≥ R_B^+,   (11)

where 0 ≤ β_C^r ≤ 1 is a lower-bound error adjustment parameter (keeping the distance of z_C^r larger than R_B^+).

Accordingly, the overall squared error over all case sets is denoted

ERR = Σ_{r=1}^{n_A} (α_A^r)^2 + Σ_{r=1}^{n_B} [(α_B^r)^2 + (β_B^r)^2] + Σ_{r=1}^{n_C} (β_C^r)^2.   (12)

Then, the following optimization model, D(w, R), can be adopted to find the most descriptive weight vector w^+ and the distance thresholds R_A^+ and R_B^+:

Minimize:   ERR = Σ_{r=1}^{n_A} (α_A^r)^2 + Σ_{r=1}^{n_B} [(α_B^r)^2 + (β_B^r)^2] + Σ_{r=1}^{n_C} (β_C^r)^2

subject to:
  Σ_{j∈Q} w_j^+ d_j(z_A^r)^+ + α_A^r ≤ R_A^+,  r = 1, 2, ..., n_A;
  Σ_{j∈Q} w_j^+ d_j(z_B^r)^+ + α_B^r ≤ R_B^+,  r = 1, 2, ..., n_B;
  Σ_{j∈Q} w_j^+ d_j(z_B^r)^+ + β_B^r ≥ R_A^+,  r = 1, 2, ..., n_B;
  Σ_{j∈Q} w_j^+ d_j(z_C^r)^+ + β_C^r ≥ R_B^+,  r = 1, 2, ..., n_C;
  0 < R_A^+ < 1,  0 < R_B^+ < 1,  R_A^+ < R_B^+;
  −1 ≤ α_g^r ≤ 0,  g = A, B;   0 ≤ β_g^r ≤ 1,  g = B, C;
  Σ_{j∈Q} w_j^+ = 1;  w_j^+ > 0,  j ∈ Q.
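The optimization program of Section 4.3 can be explored with a toy solver. At the optimum, each error parameter takes its smallest feasible magnitude, so the objective reduces to hinge penalties on threshold violations; the random search below is our stand-in for the quadratic programming software used in the paper, and all names are ours:

```python
import random

def err(w, ra, rb, cases_a, cases_b, cases_c):
    """Objective (12): each case is a vector of normalized per-criterion
    distances d_j(z)^+; the squared error is a hinge on the gap between
    the aggregated distance and the relevant threshold."""
    dist = lambda z: sum(wj * dj for wj, dj in zip(w, z))
    e = sum(max(0.0, dist(z) - ra) ** 2 for z in cases_a)           # from (8)
    e += sum(max(0.0, dist(z) - rb) ** 2                            # from (9)
             + max(0.0, ra - dist(z)) ** 2 for z in cases_b)        # from (10)
    e += sum(max(0.0, rb - dist(z)) ** 2 for z in cases_c)          # from (11)
    return e

def fit(cases_a, cases_b, cases_c, q, iters=20000, seed=1):
    """Crude random search over the weight simplex and 0 < R_A < R_B < 1,
    standing in for a proper quadratic programming solver."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        raw = [rng.random() + 1e-9 for _ in range(q)]
        w = [x / sum(raw) for x in raw]                  # weights sum to 1
        ra, rb = sorted(rng.random() for _ in range(2))  # ordered thresholds
        e = err(w, ra, rb, cases_a, cases_b, cases_c)
        if best is None or e < best[0]:
            best = (e, w, ra, rb)
    return best
```

On a cleanly separated toy case set the search drives ERR to zero, matching the consistency condition discussed after Theorem 1.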

Theorem 1. D(w, R) has at least one optimal solution.

Proof. The constraints of D(w, R) constitute a convex set (see Steuer [22] for a detailed definition of convex sets). The objective function ERR is a quadratic function on this set. As all of the variables are continuous and bounded, D(w, R) attains its minimum at least once.

An indifference distance threshold, γ, is set to evaluate the error ERR generated by D(w, R): when ERR ≤ γ, the error is small and can be ignored, so the information in the case sets provided by the DM is considered consistent; when ERR > γ, the error cannot be ignored, there is some inconsistency in the case sets, and the DM should reconsider them. Let γ = 1/k, where k ∈ R_+ is an adjustment parameter. A suggested value of k is |N|. When the case set is large and the likelihood of error is high, then k > |N| may be better; when the case set is small and the likelihood of error is small, then k < |N| should be considered.

Furthermore, the DM could provide rough information about weights to ensure that the results reflect his or her intrinsic preferences, insofar as they are known. The imprecise preference expressions proposed by Sage and


White [23] and Eum et al. [24] can be used for this purpose. Some examples of imprecise weight preference expressions follow:

Weak ranking: w_1 ≥ w_2 ≥ ... ≥ w_q > 0;
Strict ranking: w_j − w_{j+1} ≥ ε_j, j ∈ Q, where ε_j is a small positive value;
Difference ranking: w_1 − w_2 ≥ w_2 − w_3 ≥ ... ≥ w_{q−1} − w_q ≥ 0;
Fixed bounds: L_j ≤ w_j ≤ U_j, j ∈ Q, where L_j and U_j are lower and upper bounds for w_j, respectively.

In our method, the following imprecise weight preference expressions are proposed to align with the MCABC scenario:

w_d ≥ w_k,  k ∈ Q,   (13)
L_j ≤ w_j ≤ U_j,  j ∈ Q,   (14)

where w_d represents the weight of annual dollar usage, and L_j and U_j are, respectively, lower and upper bounds for w_j. For simplicity, we could set L_j = L and U_j = U for all criteria. (Setting 0 < L < U < 1 ensures that all specified criteria count in the final classification; no criterion can be discarded. In particular, the value of L should be set to some non-negligible amount to ensure that no criterion is effectively dropped from the model.) If constraints (13) and (14) are incorporated directly into D(w, R), the program will still have at least one optimal solution. Note that constraints (13) and (14) must be carefully checked to make sure that incorporating them into D(w, R) will not significantly affect the overall error. Alternatively, when (13) and (14) are not included in D(w, R), they can guide the DM in selecting the most suitable solutions when multiple optimal results are identified, as will be explained in Section 4.5, Post-optimality analyses.

4.4. Distance-based sorting

Assume ERR ≤ γ, and let A+, B+, and C+ denote the A^+-MCABC-based groups A, B and C, respectively. With w^+ = (w_1^+, w_2^+, ..., w_q^+), R_A^+ and R_B^+ obtained from D(w, R), A^+-MCABC can be carried out to classify the SKUs in N as follows:

If D(A^i)^+ ≤ R_A^+, then A^i ∈ A+;
If R_A^+ < D(A^i)^+ ≤ R_B^+, then A^i ∈ B+;
If D(A^i)^+ > R_B^+, then A^i ∈ C+.

Employing similar procedures, w^- = (w_1^-, w_2^-, ..., w_q^-), R_B^- and R_C^- can be calculated, and A^--MCABC can thus be carried out to classify the SKUs in N as follows:

If D(A^i)^- ≤ R_C^-, then A^i ∈ C−;
If R_C^- < D(A^i)^- ≤ R_B^-, then A^i ∈ B−;
If D(A^i)^- > R_B^-, then A^i ∈ A−.

Note that A−, B− and C− denote the A^--MCABC-based groups A, B, and C, respectively. Next, a process similar to that of Flores and Whybark [4] is designed to finalize the classification of the SKUs in N into different groups, as shown in Fig. 6. Based on the classification results of A^+-MCABC and A^--MCABC, nine combination groups, A−A+, A−B+, A−C+, B−A+, B−B+, B−C+, C−A+, C−B+, and C−C+, are identified. These combination groups are then reclassified into three categories, A−A+, B−B+ and C−C+, which represent the most important, the medium important and the least important groups, respectively. The guideline, as indicated by the arrows, is to regroup A−B+ and B−A+ as A−A+, A−C+ and C−A+ as B−B+, and B−C+ and C−B+ as C−C+.


Fig. 6. The joint matrix for two MCABC methods:

                            A^+-MCABC analysis
                           A+       B+       C+
  A^--MCABC      A−      A−A+     A−B+     A−C+
  analysis       B−      B−A+     B−B+     B−C+
                 C−      C−A+     C−B+     C−C+
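The two sorting passes and the Fig. 6 regrouping rule can be sketched as follows (function names and the set-based encoding are ours):

```python
def sort_plus(d_plus, ra_p, rb_p):
    """A+-MCABC: smaller distance to the ideal SKU A+ is better."""
    return "A" if d_plus <= ra_p else ("B" if d_plus <= rb_p else "C")

def sort_minus(d_minus, rc_m, rb_m):
    """A--MCABC: larger distance to the anti-ideal SKU A- is better."""
    return "C" if d_minus <= rc_m else ("B" if d_minus <= rb_m else "A")

def combine(g_minus, g_plus):
    """Fig. 6 regrouping: A-B+ and B-A+ join A-A+; A-C+ and C-A+ join B-B+;
    B-C+ and C-B+ join C-C+."""
    pair = {g_minus, g_plus}                      # unordered grade pair
    if pair == {"A"} or pair == {"A", "B"}:
        return "A"
    if pair == {"B"} or pair == {"A", "C"}:
        return "B"
    return "C"
```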

4.5. Post-optimality analyses

Because D(w, R) may have many sets of criterion weights and distance thresholds that are optimal or near-optimal, we discuss how the robustness of each solution can be examined using post-optimality analysis. There are several ways to assess whether multiple or near-optimal solutions of D(w, R) exist.

Programming-based near-optimality analyses: Assuming the optimal objective value of D(w, R) is ERR*, some suggestions of Jacquet-Lagrèze and Siskos [18] can be adapted for use in post-optimality analysis of w_j (for each j ∈ Q) using the following programs:

D'(w, R, w_j)   Maximize: w_j
                subject to: ERR ≤ max{γ, (1 + δ)ERR*},
                all constraints of D(w, R),
                constraints (13) and (14) as applicable;

D''(w, R, w_j)  Minimize: w_j
                subject to: ERR ≤ max{γ, (1 + δ)ERR*},
                all constraints of D(w, R),
                constraints (13) and (14) as applicable.

In both programs, δ is a small positive number. These programs obtain maximum and minimum values for w_j. Similarly, the programs D'(w, R, R_A^+), D'(w, R, R_B^+), D''(w, R, R_A^+) and D''(w, R, R_B^+) yield, respectively, maximum and minimum values for R_A^+ and R_B^+. The difference between the generated minimum and maximum values for a criterion weight or distance threshold is a measure of the robustness of the initial solution. There are two ways to use this robustness information to determine a final sorting.

(1) Average value method: Based on the suggestions of Jacquet-Lagrèze and Siskos [18] and Siskos et al. [25], the averages of the initial solutions and of the maximum and minimum values for each criterion weight or distance threshold generated by the above procedures may be taken as a more representative solution of D(w, R), and used to sort the SKUs.

(2) Percentage value method: Each of D(w, R), D'(w, R, w_j), D''(w, R, w_j), D'(w, R, R_A^+), D'(w, R, R_B^+), D''(w, R, R_A^+) and D''(w, R, R_B^+) generates a vector of solutions for all criterion weights and distance thresholds. Each of these vectors implies a different sorting of the SKUs. For each SKU, the frequency of sorting into A, B and C can be calculated. (A sharply peaked distribution is another indicator of robustness.) Each SKU is then assigned to the group in which it appears most often, for both A^+-MCABC and A^--MCABC. Finally, the procedure explained in Fig. 6 is applied to generate the final sorting result.

Multiple optimal solution identification and selection: Another way to conduct post-optimality analyses is to employ optimization software packages, such as LINGO and Matlab, in which varying the initialization of the optimization algorithm can identify multiple optimal solutions. When D(w, R) has multiple solutions, the DM could select a solution that is, in some sense, closest to the imprecise weight information he or she has supplied, as in (13) and (14). For example, solutions that do not satisfy constraint (13) can simply be screened out. Also, the mean of the two parameters of (14), (L + U)/2, can be employed as a centroid, in which case the solution at the minimum distance from the centroid should be regarded as best. Note that here (13) and (14) are not incorporated into D(w, R) as constraints.

Transformation of the objective function: The sum of squared errors in (12), the objective function of D(w, R), measures the overall error in the representation of the entire case set. Because it is similar to linear regression in statistics, this representation may be easily understood by DMs. Nevertheless, there are other ways to express the overall error. For instance, following the example of Siskos and Yannacopoulos [26] and Siskos et al. [25], the sum of absolute errors, Σ_{r=1}^{n_A} (−α_A^r) + Σ_{r=1}^{n_B} [(−α_B^r) + β_B^r] + Σ_{r=1}^{n_C} β_C^r, also measures the overall error. This transforms D(w, R) into a linear rather than a quadratic program. The same procedures described by Siskos et al. [25] can then be employed to carry out post-optimality analyses. The constraints (13) and (14) may be incorporated in D(w, R), depending on the DM's available information.
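The percentage value method can be sketched as a vote over the solution vectors produced by post-optimality analysis; the data layout and names below are our assumptions:

```python
from collections import Counter

def modal_group(sku_distance, solutions):
    """Percentage value method: sort one SKU under every solution vector and
    keep the group in which it appears most often. Each solution is a
    (weights, R_A, R_B) triple; sku_distance(w) returns the SKU's aggregated
    distance D(A^i)^+ under weights w."""
    votes = Counter()
    for w, ra, rb in solutions:
        d = sku_distance(w)
        votes["A" if d <= ra else ("B" if d <= rb else "C")] += 1
    return votes.most_common(1)[0][0]
```

The same vote is run for the A^--MCABC pass, and the two modal groups are then merged with the Fig. 6 rule.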

5. A case study of MCABC

5.1. Background

A case study to demonstrate the proposed procedure is carried out based upon data provided by Flores et al. [5] on a hospital inventory management problem. In that example, 47 disposable SKUs used in a hospital-based respiratory therapy unit are classified using AHP-based [27] MCABC. Table 1 lists the 47 disposable SKUs, referred to as S1 through S47. Four criteria are defined for the MCABC analysis: (1) average unit cost ($), which ranges from a low of $5.12 to a high of $210.00; (2) annual dollar usage ($), which ranges from $25.38 to a high of $5840.64; (3) critical factor, where a value of 1, 0.50, or 0.01 is assigned to each of the 47 disposable SKUs: 1 indicates very critical, 0.50 moderately critical, and 0.01 non-critical; and (4) lead time (weeks), the time that it takes to receive replenishment after an SKU is ordered, which ranges from 1 to 7 weeks.

5.2. Case set settings

In this case study, all criteria are assumed to be positive: the greater the consequence, the greater the DM's preference. Note that for a product buyer, such as a hospital, lead time is a positive criterion, while for a producer it may be a negative criterion. The settings of A^+ and A^- are listed in Table 2. It is assumed that the DM would like to provide the case information using three representative SKUs for A, four for B, and four for C among the 47 SKUs. We start with an assumed most representative case set, as shown in Table 2. Based on this information, the normalized consequence data of the case sets for A^+-MCABC and A^--MCABC are calculated using (4) and (5), and listed in Tables 3 and 4, respectively.

5.3. Model construction


First, D(·, ·) is employed to find w+ = (w1+, w2+, w3+, w4+) as well as RA+ and RB+, which represent the weights for average unit cost, annual dollar usage, critical factor, and lead time, and the distance thresholds for groups A and B in A+ ABC, respectively. The imprecise weight information is assumed to be as follows: for j = 1, 2, 3, 4, 0.01 ≤ wj+ ≤ 0.9; w2+ ≥ w1+; w2+ ≥ w3+; w2+ ≥ w4+. These two groups of constraints guarantee that each weight is positive, so that each criterion contributes to the classification, and that annual dollar usage is the most important criterion. The results found using Lingo [28] software are ERR = 1.1878 × 10−9; w+ = (0.0915, 0.3118, 0.2987, 0.2980); RA+ = 0.2700 and RB+ = 0.6717. Assuming δ = 1/n = 1/47, since ERR < δ, the error can be ignored.

Table 1
Listing of SKUs with multiple criteria, adapted from Flores et al. [5]

SKU    Average unit cost ($)   Annual dollar usage ($)   Critical factor   Lead time (weeks)
S1      49.92                  5840.64                   1                 2
S2     210.00                  5670.00                   1                 5
S3      23.76                  5037.12                   1                 4
S4      27.73                  4769.56                   0.01              1
S5      57.98                  3478.80                   0.5               3
S6      31.24                  2936.67                   0.5               3
S7      28.20                  2820.00                   0.5               3
S8      55.00                  2640.00                   0.01              4
S9      73.44                  2423.52                   1                 6
S10    160.50                  2407.50                   0.5               4
S11      5.12                  1075.20                   1                 2
S12     20.87                  1043.50                   0.5               5
S13     86.50                  1038.00                   1                 7
S14    110.40                   883.20                   0.5               5
S15     71.20                   854.40                   1                 3
S16     45.00                   810.00                   0.5               3
S17     14.66                   703.68                   0.5               4
S18     49.50                   594.00                   0.5               6
S19     47.50                   570.00                   0.5               5
S20     58.45                   467.60                   0.5               4
S21     24.40                   463.60                   1                 4
S22     65.00                   455.00                   0.5               4
S23     86.50                   432.50                   1                 4
S24     33.20                   398.40                   1                 3
S25     37.05                   370.50                   0.01              1
S26     33.84                   338.40                   0.01              3
S27     84.03                   336.12                   0.01              1
S28     78.40                   313.60                   0.01              6
S29    134.34                   268.68                   0.01              7
S30     56.00                   224.00                   0.01              1
S31     72.00                   216.00                   0.5               5
S32     53.02                   212.08                   1                 2
S33     49.48                   197.92                   0.01              5
S34      7.07                   190.89                   0.01              7
S35     60.60                   181.80                   0.01              3
S36     40.82                   163.28                   1                 3
S37     30.00                   150.00                   0.01              5
S38     67.40                   134.80                   0.5               3
S39     59.60                   119.20                   0.01              5
S40     51.68                   103.36                   0.01              6
S41     19.80                    79.20                   0.01              2
S42     37.70                    75.40                   0.01              2
S43     29.89                    59.78                   0.01              5
S44     48.30                    48.30                   0.01              3
S45     34.40                    34.40                   0.01              7
S46     28.80                    28.80                   0.01              3
S47      8.46                    25.38                   0.01              5

The optimization problem is:

Minimize:

ERR = \sum_{r=1}^{3} (\varepsilon_A^r)^2 + \sum_{r=1}^{4} \left[ (\varepsilon_B^{r+})^2 + (\varepsilon_B^{r-})^2 \right] + \sum_{r=1}^{4} (\varepsilon_C^r)^2

Table 2
The basic information settings

             Average unit cost ($)   Annual dollar usage ($)   Critical factor   Lead time (weeks)
A+               250.00                   6000.00                 1                 7
A−                 1.00                     10.00                 0                 1
max dj         62001.00               35880100.00                 1.00             36.00
TA   S1           49.92                   5840.64                 1.00              2.00
     S2          210.00                   5670.00                 1.00              5.00
     S13          86.50                   1038.00                 1.00              7.00
TB   S10         160.50                   2407.50                 0.50              4.00
     S29         134.34                    268.68                 0.01              7.00
     S36          40.82                    163.28                 1.00              3.00
     S45          34.40                     34.40                 0.01              7.00
TC   S4           27.73                   4769.56                 0.01              1.00
     S25          37.05                    370.50                 0.01              1.00
     S27          84.03                    336.12                 0.01              1.00
     S34           7.07                    190.89                 0.01              7.00

Subject to:
Distance constraints for the A cases (S1, S2, S13):

0.6457 w1+ + 0.0007 w2+ + 0.0000 w3+ + 0.6944 w4+ − ε_A^1 ≤ RA+;
0.0258 w1+ + 0.0030 w2+ + 0.0000 w3+ + 0.1111 w4+ − ε_A^2 ≤ RA+;
0.4312 w1+ + 0.6862 w2+ + 0.0000 w3+ + 0.0000 w4+ − ε_A^3 ≤ RA+;

Distance constraints for the B cases (S10, S29, S36, S45), whose distances must lie between the two thresholds:

0.1292 w1+ + 0.3597 w2+ + 0.2500 w3+ + 0.2500 w4+ + ε_B^{1+} ≥ RA+;
0.2158 w1+ + 0.9155 w2+ + 0.9801 w3+ + 0.0000 w4+ + ε_B^{2+} ≥ RA+;
0.7057 w1+ + 0.9495 w2+ + 0.0000 w3+ + 0.4444 w4+ + ε_B^{3+} ≥ RA+;
0.7497 w1+ + 0.9919 w2+ + 0.9801 w3+ + 0.0000 w4+ + ε_B^{4+} ≥ RA+;
0.1292 w1+ + 0.3597 w2+ + 0.2500 w3+ + 0.2500 w4+ − ε_B^{1−} ≤ RB+;
0.2158 w1+ + 0.9155 w2+ + 0.9801 w3+ + 0.0000 w4+ − ε_B^{2−} ≤ RB+;
0.7057 w1+ + 0.9495 w2+ + 0.0000 w3+ + 0.4444 w4+ − ε_B^{3−} ≤ RB+;
0.7497 w1+ + 0.9919 w2+ + 0.9801 w3+ + 0.0000 w4+ − ε_B^{4−} ≤ RB+;

Distance constraints for the C cases (S4, S25, S27, S34):

0.7968 w1+ + 0.0422 w2+ + 0.9801 w3+ + 1.0000 w4+ + ε_C^1 ≥ RB+;
0.7314 w1+ + 0.8833 w2+ + 0.9801 w3+ + 1.0000 w4+ + ε_C^2 ≥ RB+;
0.4443 w1+ + 0.8941 w2+ + 0.9801 w3+ + 1.0000 w4+ + ε_C^3 ≥ RB+;
0.9518 w1+ + 0.9405 w2+ + 0.9801 w3+ + 0.0000 w4+ + ε_C^4 ≥ RB+;

Threshold, error and weight constraints:

0 ≤ RA+ ≤ 1; 0 ≤ RB+ ≤ 1; RA+ < RB+;
0 ≤ ε_A^r ≤ 1, 0 ≤ ε_B^{r+} ≤ 1, 0 ≤ ε_B^{r−} ≤ 1, 0 ≤ ε_C^r ≤ 1, for all r;
0.01 ≤ w1+ ≤ 0.9; 0.01 ≤ w2+ ≤ 0.9; 0.01 ≤ w3+ ≤ 0.9; 0.01 ≤ w4+ ≤ 0.9;
w2+ ≥ w1+; w2+ ≥ w3+; w2+ ≥ w4+;
w1+ + w2+ + w3+ + w4+ = 1.
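The Lingo solution reported in the text can be checked against these constraints numerically. The sketch below is a verification aid, not part of the original model: it recomputes the weighted distances of the eleven case SKUs from the Table 3 data and confirms that, at the rounded solution reported above, no case constraint is violated.

```python
# Check that the reported solution (w+ and the thresholds R_A+, R_B+)
# satisfies the case constraints with negligible error.
w = (0.0915, 0.3118, 0.2987, 0.2980)  # w1+..w4+ from the text
RA, RB = 0.2700, 0.6717               # distance thresholds

def dist(row):
    """Weighted distance score from the ideal profile, using Table 3 data."""
    return sum(wj * cj for wj, cj in zip(w, row))

A_rows = [(0.6457, 0.0007, 0.0000, 0.6944),   # S1
          (0.0258, 0.0030, 0.0000, 0.1111),   # S2
          (0.4312, 0.6862, 0.0000, 0.0000)]   # S13
B_rows = [(0.1292, 0.3597, 0.2500, 0.2500),   # S10
          (0.2158, 0.9155, 0.9801, 0.0000),   # S29
          (0.7057, 0.9495, 0.0000, 0.4444),   # S36
          (0.7497, 0.9919, 0.9801, 0.0000)]   # S45
C_rows = [(0.7968, 0.0422, 0.9801, 1.0000),   # S4
          (0.7314, 0.8833, 0.9801, 1.0000),   # S25
          (0.4443, 0.8941, 0.9801, 1.0000),   # S27
          (0.9518, 0.9405, 0.9801, 0.0000)]   # S34

violation = sum(max(0.0, dist(r) - RA) for r in A_rows)          # A cases above R_A+
violation += sum(max(0.0, RA - dist(r)) + max(0.0, dist(r) - RB)
                 for r in B_rows)                                # B cases outside [R_A+, R_B+]
violation += sum(max(0.0, RB - dist(r)) for r in C_rows)         # C cases below R_B+
print(violation)  # -> 0.0 (all constraints hold at the rounded solution)
```

This is consistent with the reported ERR of about 10⁻⁹, i.e. the case set is representable essentially without error.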

Table 3
The normalized consequence data of case sets for A+ MCABC

           Average unit cost ($)   Annual dollar usage ($)   Critical factor   Lead time (weeks)
TA   S1         0.6457                  0.0007                  0.0000             0.6944
     S2         0.0258                  0.0030                  0.0000             0.1111
     S13        0.4312                  0.6862                  0.0000             0.0000
TB   S10        0.1292                  0.3597                  0.2500             0.2500
     S29        0.2158                  0.9155                  0.9801             0.0000
     S36        0.7057                  0.9495                  0.0000             0.4444
     S45        0.7497                  0.9919                  0.9801             0.0000
TC   S4         0.7968                  0.0422                  0.9801             1.0000
     S25        0.7314                  0.8833                  0.9801             1.0000
     S27        0.4443                  0.8941                  0.9801             1.0000
     S34        0.9518                  0.9405                  0.9801             0.0000

Table 4
The normalized consequence data of case sets for A− MCABC

           Average unit cost ($)   Annual dollar usage ($)   Critical factor   Lead time (weeks)
TA   S1         0.0386                  0.9475                  1.0000             0.0278
     S2         0.7045                  0.8929                  1.0000             0.4444
     S13        0.1179                  0.0295                  1.0000             1.0000
TB   S10        0.4103                  0.1602                  0.2500             0.2500
     S29        0.2868                  0.0019                  0.0001             1.0000
     S36        0.0256                  0.0007                  1.0000             0.1111
     S45        0.0180                  0.0000                  0.0001             1.0000
TC   S4         0.0115                  0.6314                  0.0001             0.0000
     S25        0.0210                  0.0036                  0.0001             0.0000
     S27        0.1112                  0.0030                  0.0001             0.0000
     S34        0.0006                  0.0009                  0.0001             1.0000

Similar procedures are carried out for A− MCABC. The details are omitted; the results obtained are ERR = 9.9018 × 10−10; w− = (0.2039, 0.3138, 0.2639, 0.2184); RC− = 0.2205 and RB− = 0.4502. As ERR < δ, the error is ignored. Then, both the A+ ABC and A− ABC methods are applied to classify the 47 SKUs into A, B, and C, the re-classification procedures shown in Fig. 6 are implemented, and the results are shown in Table 5.

5.4. Post-optimality analyses

The percentage value method, one of the techniques described in Section 4.5, is chosen to demonstrate post-optimality analyses. The post-optimality programs for A+ MCABC, which in turn maximize and minimize each criterion weight, wj+, and each distance threshold, RA+ and RB+, are formulated. The minimum threshold is fixed at 0.01. The results are listed in Table 6. Based on the information in Table 6, all 13 sortings of the 47 SKUs were generated; for each SKU, the percentages of sortings into A, B and C are shown in Table 7. Table 7 also provides the final sorting for the A+ MCABC method, based on the rule that the group with the largest percentage is used to represent the sorting

Table 5
The initial results of A+ ABC and A− ABC classification

SKU    D(Ai)+    A+ ABC results    D(Ai)−    A− ABC results    Final results
S1     0.2662    A+                0.5751    A                 A
S2     0.0364    A+                0.7848    A                 A
S3     0.1581    A+                0.5412    A                 A
S4     0.6768    C+                0.2005    C                 C
S5     0.3168    B+                0.2062    C                 C
S6     0.3593    B+                0.1682    C                 C
S7     0.3676    B+                0.1617    C                 C
S8     0.5215    B+                0.1247    C                 C
S9     0.1654    A+                0.4837    A                 A
S10    0.2731    B+                0.2545    B                 B
S11    0.5062    B+                0.2799    B                 B
S12    0.3988    B+                0.1737    C                 C
S13    0.2534    A+                0.5156    A                 A
S14    0.3641    B+                0.2091    C                 C
S15    0.4097    B+                0.3106    B                 B
S16    0.5032    B+                0.1022    C                 C
S17    0.4747    B+                0.1254    C                 C
S18    0.3963    B+                0.2284    B                 B
S19    0.4245    B+                0.1729    C                 C
S20    0.4693    B+                0.1333    C                 C
S21    0.4160    B+                0.3221    B                 B
S22    0.4669    B+                0.1358    C                 C
S23    0.3833    B+                0.3441    B                 B
S24    0.4745    B+                0.2929    B                 B
S25    0.9331    C+                0.0054    C                 C
S26    0.7727    C+                0.0288    C                 C
S27    0.9102    C+                0.0236    C                 C
S28    0.6255    B+                0.1722    C                 C
S29    0.5980    B+                0.2775    B                 B
S30    0.9362    C+                0.0104    C                 C
S31    0.4453    B+                0.1800    C                 C
S32    0.5553    B+                0.2792    B                 B
S33    0.6778    C+                0.1051    C                 C
S34    0.6731    C+                0.2188    C                 C
S35    0.7723    C+                0.0362    C                 C
S36    0.4931    B+                0.2936    B                 B
S37    0.6947    C+                0.1000    C                 C
S38    0.5553    B+                0.1049    C                 C
S39    0.6799    C+                0.1085    C                 C
S40    0.6612    B+                0.1602    C                 C
S41    0.8825    C+                0.0073    C                 C
S42    0.8712    C+                0.0106    C                 C
S43    0.7040    C+                0.0999    C                 C
S44    0.7931    C+                0.0317    C                 C
S45    0.6706    B+                0.2221    B                 B
S46    0.8073    C+                0.0268    C                 C
S47    0.7222    C+                0.0973    C                 C

result for an SKU. Most of the sorting results are quite robust; only S4 is ambiguous, in that its percentages in B and C are close. The post-optimality programs for A− MCABC were solved similarly; the results are shown in Tables 8 and 9. In that case, only S5, S10, S18 and S45 do not produce robust sortings. Overall, in this case study the A+ MCABC method is more robust than the A− MCABC method. The final sorting, based on the re-arrangement procedure described in Fig. 6, is shown in Table 10.
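Under the assumption, consistent with Tables 3 and 5, that D(Ai)+ is the weighted sum of the normalized consequences, a few Table 5 entries can be reproduced with a minimal sketch:

```python
# Reproduce a few Table 5 entries: the A+ score of an SKU is its weighted
# distance from the ideal profile; group assignment compares the score
# with the thresholds R_A+ = 0.2700 and R_B+ = 0.6717 reported in the text.
w = (0.0915, 0.3118, 0.2987, 0.2980)
RA, RB = 0.2700, 0.6717

def classify_plus(norm_row):
    d = sum(wj * cj for wj, cj in zip(w, norm_row))
    group = "A" if d <= RA else ("B" if d <= RB else "C")
    return round(d, 4), group

# Normalized consequences (as in Table 3) for three SKUs:
print(classify_plus((0.6457, 0.0007, 0.0000, 0.6944)))  # S1  -> (0.2662, 'A')
print(classify_plus((0.7057, 0.9495, 0.0000, 0.4444)))  # S36 -> (0.4931, 'B')
print(classify_plus((0.7314, 0.8833, 0.9801, 1.0000)))  # S25 -> (0.9331, 'C')
```

The computed scores match the D(Ai)+ column of Table 5 to four decimals, which supports the weighted-sum reading of the distance measure.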

Table 6
Post-optimality analyses and final solutions for A+ MCABC

                         w1+       w2+       w3+       w4+       RA+       RB+
1.  Initial solution   0.0915    0.3118    0.2987    0.2980    0.2700    0.6717
2.  max(w1+)           0.1141    0.3022    0.3022    0.2816    0.2694    0.6814
3.  min(w1+)           0.0779    0.3074    0.3074    0.3074    0.2696    0.6645
4.  max(w2+)           0.0861    0.3395    0.2562    0.3183    0.2768    0.6523
5.  min(w2+)           0.0962    0.3013    0.3013    0.3013    0.2715    0.6681
6.  max(w3+)           0.0794    0.3134    0.3134    0.2938    0.2673    0.6775
7.  min(w3+)           0.0861    0.3394    0.2562    0.3183    0.2768    0.6523
8.  max(w4+)           0.0813    0.3206    0.2774    0.3206    0.2754    0.6509
9.  min(w4+)           0.1141    0.3022    0.3022    0.2816    0.2694    0.6814
10. max(RA+)           0.0861    0.3395    0.2561    0.3183    0.2769    0.6523
11. min(RA+)           0.0794    0.3134    0.3134    0.2938    0.2555    0.6775
12. max(RB+)           0.1083    0.3019    0.3019    0.2879    0.2701    0.6829
13. min(RB+)           0.0813    0.3206    0.2774    0.3206    0.2754    0.6509
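A sketch of the percentage value method as applied here: each SKU is re-sorted under every optimal solution of Table 6 and the share of A, B and C assignments is recorded. For brevity, only two of the 13 solutions and two SKUs are used; the classification rule (weighted score compared with the thresholds) is an assumption consistent with Table 5.

```python
# Percentage value method sketch: re-sort SKUs under several optimal
# solutions of Table 6 and tabulate how often each group is assigned.
solutions = [  # (w1+, w2+, w3+, w4+, R_A+, R_B+) from Table 6
    (0.0915, 0.3118, 0.2987, 0.2980, 0.2700, 0.6717),  # initial solution
    (0.1141, 0.3022, 0.3022, 0.2816, 0.2694, 0.6814),  # a second optimal solution
]
skus = {  # normalized consequences as in Table 3
    "S2":  (0.0258, 0.0030, 0.0000, 0.1111),
    "S25": (0.7314, 0.8833, 0.9801, 1.0000),
}

def sort_once(row, sol):
    w, (RA, RB) = sol[:4], sol[4:]
    d = sum(wj * cj for wj, cj in zip(w, row))
    return "A" if d <= RA else ("B" if d <= RB else "C")

percentages = {}
for name, row in skus.items():
    labels = [sort_once(row, sol) for sol in solutions]
    percentages[name] = {g: 100.0 * labels.count(g) / len(labels) for g in "ABC"}
print(percentages)
```

With all 13 solutions, the same loop yields the percentages of Table 7, e.g. 100% A for S2 and 100% C for S25.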

Table 7
The percentage value method based post-optimality analyses for A+ ABC

SKU     A          B          C          Final results
S1      100.00%      0.00%      0.00%    A+
S2      100.00%      0.00%      0.00%    A+
S3      100.00%      0.00%      0.00%    A+
S4        0.00%     46.15%     53.85%    C+
S5        0.00%    100.00%      0.00%    B+
S6        0.00%    100.00%      0.00%    B+
S7        0.00%    100.00%      0.00%    B+
S8        0.00%    100.00%      0.00%    B+
S9      100.00%      0.00%      0.00%    A+
S10      38.46%     61.54%      0.00%    B+
S11       0.00%    100.00%      0.00%    B+
S12       0.00%    100.00%      0.00%    B+
S13      92.31%      7.69%      0.00%    A+
S14       0.00%    100.00%      0.00%    B+
S15       0.00%    100.00%      0.00%    B+
S16       0.00%    100.00%      0.00%    B+
S17       0.00%    100.00%      0.00%    B+
S18       0.00%    100.00%      0.00%    B+
S19       0.00%    100.00%      0.00%    B+
S20       0.00%    100.00%      0.00%    B+
S21       0.00%    100.00%      0.00%    B+
S22       0.00%    100.00%      0.00%    B+
S23       0.00%    100.00%      0.00%    B+
S24       0.00%    100.00%      0.00%    B+
S25       0.00%      0.00%    100.00%    C+
S26       0.00%      0.00%    100.00%    C+
S27       0.00%      0.00%    100.00%    C+
S28       0.00%    100.00%      0.00%    B+
S29       0.00%    100.00%      0.00%    B+
S30       0.00%      0.00%    100.00%    C+
S31       0.00%    100.00%      0.00%    B+

Table 7 (continued)

SKU     A          B          C          Final results
S32       0.00%    100.00%      0.00%    B+
S33       0.00%     15.38%     84.62%    C+
S34       0.00%     30.77%     69.23%    C+
S35       0.00%      0.00%    100.00%    C+
S36       0.00%    100.00%      0.00%    B+
S37       0.00%      7.69%     92.31%    C+
S38       0.00%    100.00%      0.00%    B+
S39       0.00%     15.38%     84.62%    C+
S40       0.00%     92.31%      7.69%    B+
S41       0.00%      0.00%    100.00%    C+
S42       0.00%      0.00%    100.00%    C+
S43       0.00%      0.00%    100.00%    C+
S44       0.00%      0.00%    100.00%    C+
S45       0.00%    100.00%      0.00%    B+
S46       0.00%      0.00%    100.00%    C+
S47       0.00%      0.00%    100.00%    C+

Table 8
Post-optimality analyses and final solutions for A− MCABC

                      w1−       w2−       w3−       w4−       RC−       RB−
Initial solution    0.2039    0.3138    0.2639    0.2184    0.2205    0.4502
max(w1−)            0.3153    0.3153    0.1725    0.1970    0.2027    0.3786
min(w1−)            0.0567    0.3584    0.3584    0.2266    0.2269    0.5208
max(w2−)            0.1676    0.3813    0.2115    0.2396    0.2427    0.4308
min(w2−)            0.2500    0.2500    0.2500    0.2500    0.2523    0.4445
max(w3−)            0.0571    0.3585    0.3585    0.2259    0.2270    0.5206
min(w3−)            0.3138    0.3138    0.1711    0.2013    0.2017    0.3806
max(w4−)            0.2416    0.2646    0.2292    0.2646    0.2650    0.4415
min(w4−)            0.2759    0.2759    0.2759    0.1723    0.1774    0.4305
max(RC−)            0.2463    0.2616    0.2305    0.2616    0.2661    0.4403
min(RC−)            0.2746    0.2746    0.2746    0.1761    0.1765    0.4316
max(RB−)            0.0567    0.3584    0.3584    0.2266    0.2269    0.6022
min(RB−)            0.2729    0.3005    0.2387    0.1879    0.1929    0.2667

5.5. Comparisons and explanation

Table 11 compares the classification outcomes in Table 10 with the AHP findings of Flores et al. [5]. Some of the main results are explained below:

- There are no inconsistent classifications in the most important group, A. The AHP method places ten SKUs in A, while our method produces five, all of which are included in the top AHP group.
- There are five different classifications in group B and seven in group C. The proportions of SKUs in these two groups are 14/23 for the AHP method and 12/30 for our method. Both methods convey roughly consistent information: a larger number of SKUs is assigned to group C, as in traditional ABC analysis.
- The weight generation mechanisms are different: the AHP method estimates a weight set by subjective judgements to suit all situations, while our method uses quadratic programming to estimate the weights. Based on the distances to an ideal SKU and an anti-ideal SKU, different weight sets are obtained. In our method, the weight of a criterion is connected with value (preference over consequences), in that when the definitions of values change, the weight sets

Table 9
The percentage value method based post-optimality analyses for A− ABC

SKU     A          B          C          Final results
S1      100.00%      0.00%      0.00%    A
S2      100.00%      0.00%      0.00%    A
S3      100.00%      0.00%      0.00%    A
S4        0.00%      7.69%     92.31%    C
S5        0.00%     53.85%     46.15%    B
S6        0.00%      0.00%    100.00%    C
S7        0.00%      0.00%    100.00%    C
S8        0.00%      0.00%    100.00%    C
S9       84.62%     15.38%      0.00%    A
S10       0.00%     53.85%     46.15%    B
S11       0.00%     61.54%     38.46%    B
S12       0.00%      0.00%    100.00%    C
S13     100.00%      0.00%      0.00%    A
S14       0.00%     30.77%     69.23%    C
S15       7.69%     92.31%      0.00%    B
S16       0.00%      0.00%    100.00%    C
S17       0.00%      0.00%    100.00%    C
S18       0.00%     53.85%     46.15%    B
S19       0.00%      0.00%    100.00%    C
S20       0.00%      0.00%    100.00%    C
S21       7.69%     92.31%      0.00%    B
S22       0.00%      0.00%    100.00%    C
S23       7.69%     92.31%      0.00%    B
S24       0.00%     69.23%     30.77%    B
S25       0.00%      0.00%    100.00%    C
S26       0.00%      0.00%    100.00%    C
S27       0.00%      0.00%    100.00%    C
S28       0.00%      0.00%    100.00%    C
S29       0.00%    100.00%      0.00%    B
S30       0.00%      0.00%    100.00%    C
S31       0.00%      0.00%    100.00%    C
S32       0.00%     61.54%     38.46%    B
S33       0.00%      0.00%    100.00%    C
S34       0.00%      0.00%    100.00%    C
S35       0.00%      0.00%    100.00%    C
S36       0.00%     61.54%     38.46%    B
S37       0.00%      0.00%    100.00%    C
S38       0.00%      0.00%    100.00%    C
S39       0.00%      0.00%    100.00%    C
S40       0.00%      0.00%    100.00%    C
S41       0.00%      0.00%    100.00%    C
S42       0.00%      0.00%    100.00%    C
S43       0.00%      0.00%    100.00%    C
S44       0.00%      0.00%    100.00%    C
S45       0.00%     53.85%     46.15%    B
S46       0.00%      0.00%    100.00%    C
S47       0.00%      0.00%    100.00%    C

are different. Because of its clear geometric meaning, our method can be readily understood, and may thereby be more easily accepted, by a DM.

It is worth mentioning that the classification results in Flores et al. [5] do not necessarily provide a benchmark against which to evaluate the merits or limitations of other methods. Because the proportions of SKUs in groups A, B and C are 5/47, 12/47, and 30/47, which are close to the 80-20 rule observed in many practical inventory systems, our model provides a sound classification result.

Table 10
The final sorting results for the percentage value based post-optimality analyses

SKU    A+ ABC    A− ABC    Final results
S1     A+        A         A
S2     A+        A         A
S3     A+        A         A
S4     C+        C         C
S5     B+        B         B
S6     B+        C         C
S7     B+        C         C
S8     B+        C         C
S9     A+        A         A
S10    B+        B         B
S11    B+        B         B
S12    B+        C         C
S13    A+        A         A
S14    B+        C         C
S15    B+        B         B
S16    B+        C         C
S17    B+        C         C
S18    B+        B         B
S19    B+        C         C
S20    B+        C         C
S21    B+        B         B
S22    B+        C         C
S23    B+        B         B
S24    B+        B         B
S25    C+        C         C
S26    C+        C         C
S27    C+        C         C
S28    B+        C         C
S29    B+        B         B
S30    C+        C         C
S31    B+        C         C
S32    B+        B         B
S33    C+        C         C
S34    C+        C         C
S35    C+        C         C
S36    B+        B         B
S37    C+        C         C
S38    B+        C         C
S39    C+        C         C
S40    B+        C         C
S41    C+        C         C
S42    C+        C         C
S43    C+        C         C
S44    C+        C         C
S45    B+        B         B
S46    C+        C         C
S47    C+        C         C

6. Conclusions

The classical ABC analysis is a straightforward approach that assists a DM in achieving cost-effective inventory management by arranging SKUs according to their annual dollar usage. However, in many situations, the DM should consider other criteria, such as lead time and criticality, in addition to annual dollar usage. MCABC procedures furnish an inventory manager with additional flexibility to account for more factors in classifying SKUs. This paper proposes a

Table 11
Comparison of results with the Flores et al. [5] method

                    Case-based distance model
The AHP method      A      B      C      Total
A                   5      5      0      10
B                   0      7      7      14
C                   0      0      23     23
Total               5      12     30     47
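As a quick arithmetic check on Table 11, the overall agreement between the two methods is the diagonal share of the cross-tabulation:

```python
# Overall agreement between the two methods implied by Table 11:
# the diagonal counts (both methods assign the same group) over 47 SKUs.
confusion = {  # rows: AHP group, columns: case-based distance model group
    "A": {"A": 5, "B": 5, "C": 0},
    "B": {"A": 0, "B": 7, "C": 7},
    "C": {"A": 0, "B": 0, "C": 23},
}
total = sum(sum(row.values()) for row in confusion.values())   # 47 SKUs
agreement = sum(confusion[g][g] for g in "ABC") / total        # (5 + 7 + 23) / 47
print(total, round(agreement, 3))  # -> 47 0.745
```

So the two methods agree on 35 of 47 SKUs (about 74.5%), with all disagreements confined to adjacent groups.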

case-based distance model to handle MCABC problems under the umbrella of MCDA theory. A case study is developed to illustrate how the procedure can be applied; the results demonstrate that this approach is robust and can produce sound classifications of SKUs when multiple criteria are to be considered. The procedure described in this paper can easily be extended to handle cases in which more than three groups are required for the classification of SKUs in an inventory system. In particular, D(·, ·) can be revised to incorporate more than three groups and to generate the corresponding criterion weights and group thresholds. Future research could compare the sorting abilities of this method with those of other methods, such as the ones discussed by Doumpos and Zopounidis [13]. We also plan to address the problem of uncertain and inconsistent information in the case set.

Acknowledgments

The authors wish to express their sincere appreciation to the anonymous referee and the Editor for their many constructive suggestions, which helped to significantly improve the quality of this paper.

References
[1] Silver EA, Pyke DF, Peterson R. Inventory management and production planning and scheduling. 3rd ed. New York: Wiley; 1998.
[2] Chakravarty AK. Multi-item inventory aggregation into groups. Journal of the Operational Research Society 1981;32(1):19-26.
[3] Pareto V. Manual of political economy. New York: A.M. Kelley Publishers; 1971 [English translation].
[4] Flores BE, Whybark DC. Multiple criteria ABC analysis. International Journal of Operations and Production Management 1986;6(3):38-46.
[5] Flores BE, Olson DL, Dorai VK. Management of multicriteria inventory classification. Mathematical and Computer Modelling 1992;16(12):71-82.
[6] Partovi FY, Hopton WE. The analytic hierarchy process as applied to two types of inventory problems. Production and Inventory Management Journal 1994;35(1):13-9.
[7] Cohen MA, Ernst R. Multi-item classification and generic inventory stock control policies. Production and Inventory Management Journal 1988;29(3):6-8.
[8] Ramanathan R. ABC inventory classification with multiple-criteria using weighted linear optimization. Computers and Operations Research 2006;33(3):695-700.
[9] Partovi FY, Anandarajan M. Classifying inventory using an artificial neural network approach. Computers and Industrial Engineering 2002;41:389-404.
[10] Guvenir HA, Erel E. Multicriteria inventory classification using a genetic algorithm. European Journal of Operational Research 1998;105(1):29-37.
[11] Flores BE, Whybark DC. Implementing multiple criteria ABC analysis. Journal of Operations Management 1987;7(1):79-84.
[12] Roy B. Multicriteria methodology for decision aiding. Dordrecht: Kluwer; 1996.
[13] Doumpos M, Zopounidis C. Multicriteria decision aid classification methods. Dordrecht: Kluwer; 2002.
[14] Kilgour DM, Rajabi S, Hipel KW, Chen Y. Screening alternatives in multiple criteria subset selection. INFOR 2004;42(1):43-60.
[15] Zopounidis C, Doumpos M. Multicriteria classification and sorting methods: a literature review. European Journal of Operational Research 2002;138(2):229-46.
[16] Chen Y, Kilgour DM, Hipel KW. Multiple criteria classification with an application in water resources planning. Computers and Operations Research 2006;33(11):3301-23.
[17] Malakooti B, Yang ZY. Clustering and group selection of multiple criteria alternatives with application to space-based networks. IEEE Transactions on Systems, Man and Cybernetics, Part B 2004;34(1):40-51.
[18] Jacquet-Lagrèze E, Siskos Y. Assessing a set of additive utility functions for multicriteria decision-making: the UTA method. European Journal of Operational Research 1982;10(2):151-64.
[19] Slowinski R. Rough set approach to decision analysis. AI Expert Magazine 1995;10(3):18-25.
[20] Chen Y, Kilgour DM, Hipel KW. A case-based distance model for screening in multiple criteria decision aid. OMEGA 2006, in press.

[21] Chen Y, Hipel KW, Kilgour DM. A case-based model for sorting problems in multiple criteria decision analysis. In: Proceedings of the 2005 IEEE international conference on systems, man and cybernetics. Hawaii, US, October 10-12, 2005. p. 215-20.
[22] Steuer RE. Multiple criteria optimization: theory, computation and application. New York: Wiley; 1986.
[23] Sage AP, White CC. ARIADNE: a knowledge-based interactive system for planning and decision support. IEEE Transactions on Systems, Man and Cybernetics 1984;14:35-47.
[24] Eum YS, Park KS, Kim SH. Establishing dominance and potential optimality in multi-criteria analysis with imprecise weight and value. Computers and Operations Research 2001;28(5):397-409.
[25] Siskos Y, Grigoroudis E, Matsatsinis NF. UTA methods. In: Figueira J, Greco S, Ehrgott M, editors. Multiple criteria decision analysis: state of the art surveys. Boston, Dordrecht, London: Springer; 2005. p. 297-344.
[26] Siskos Y, Yannacopoulos D. UTASTAR: an ordinal regression method for building additive value functions. Investigação Operacional 1985;5(1):39-53.
[27] Saaty TL. The analytic hierarchy process. New York: McGraw-Hill; 1980.
[28] LINDO Systems, Inc. Lingo software, http://www.lindo.com/, accessed on September 16, 2005.
