
Goal Programming

James P. Ignizio
Resource Management Associates

Carlos Romero
Technical University of Madrid

I. INTRODUCTION
II. HISTORICAL SKETCH
III. THE MULTIPLEX MODEL
IV. FORMS OF THE ACHIEVEMENT FUNCTION
V. GENERAL FORM OF THE MULTIPLEX MODEL
VI. THE MULTIDIMENSIONAL DUAL
VII. ALGORITHMS FOR SOLUTION
VIII. GOAL PROGRAMMING AND UTILITY OPTIMIZATION
IX. EXTENSIONS
X. THE FUTURE

GLOSSARY
achievement function  The function that serves to measure the achievement of the minimization of unwanted goal deviation variables in the goal programming model.
goal function  A mathematical function that is to be achieved at a specified level (i.e., at a prespecified aspiration level).
goal program  A mathematical model, consisting of linear or nonlinear functions and continuous or discrete variables, in which all functions have been transformed into goals.
multiplex  Originally this referred to the multiphase simplex algorithm employed to solve linear goal programs. More recently it defines certain specific models and methods employed in multiple- or single-objective optimization in general.
negative deviation  The amount of deviation for a given goal by which it is less than the aspiration level.
positive deviation  The amount of deviation for a given goal by which it exceeds the aspiration level.
satisfice  An old Scottish word referring to the desire, in the real world, to find a practical solution to a given problem, rather than some utopian result for an oversimplified model of the problem.

GOAL PROGRAMMING, a powerful and effective methodology for the modeling, solution, and analysis of problems having multiple and conflicting goals and objectives, has often been cited as the workhorse of multiple-objective optimization (i.e., the solution of problems having multiple, conflicting goals and objectives), based on its extensive list of successful applications in actual practice. Here we describe the method and its history, cite its mathematical models and algorithms, and chronicle its evolution from its original form into a potent methodology that now incorporates techniques from artificial intelligence (particularly genetic algorithms and neural networks). The article concludes with a discussion of recent extensions and a prediction of the role of goal programming in real-world problem solving in the 21st century.

I. INTRODUCTION

A. Definitions and Origin


Real-world decision problems, unlike those found in textbooks, involve multiple, conflicting objectives and goals, subject to the satisfaction of various hard and soft constraints. In short, and as the experienced practitioner is well aware, the problems one encounters outside the classroom are invariably massive, messy, changeable, complex, and resistant to treatment via conventional approaches. Yet the vast majority of traditional approaches to such problems utilize conventional models and methods that idealistically and unrealistically (in most cases) presume the optimization of a single objective subject to a set of rigid constraints.


Goal programming was introduced in an attempt to eliminate or, at the least, mitigate this disquieting disconnect. Conceived and developed by Abraham Charnes and William Cooper, goal programming was originally dubbed constrained regression. Constrained regression, in turn, was and is a powerful nonparametric method for the development of regression functions (e.g., curve fitting) subject to side constraints. Charnes and Cooper first applied constrained regression in the 1950s to the analysis of executive compensation. Recognizing that the method could be extended to a more general class of problems, that is, any quantifiable problem having multiple objectives and soft as well as rigid constraints, Charnes and Cooper later renamed the method goal programming when describing it within their classic 1961 two-volume text Management Models and Industrial Applications of Linear Programming.

B. Philosophical Basis

The two philosophical concepts that serve to best distinguish goal programming from conventional (i.e., single-objective) methods of optimization are the incorporation of flexibility in constraint functions (as opposed to the rigid constraints of single-objective optimization) and the adherence to the philosophy of satisficing, as opposed to optimization. Satisficing, in turn, is an old Scottish word that defines the desire to find a practical, real-world solution to a problem rather than a utopian, optimal solution to a highly simplified (and very possibly oversimplified) model of that problem. The concept of satisficing, as opposed to optimization, was introduced by Herbert Simon in 1956. As a consequence of the principle of satisficing, the goodness of any solution to a goal programming problem is represented by an achievement function, rather than the objective function of conventional optimization. The goal programming achievement function measures the degree of nonachievement of the problem goals. The specific way in which this nonachievement is measured characterizes the particular subtype of goal programming approach that is being employed, and may be defined so as to include the achievement of commensurable as well as noncommensurable goals. It should be emphasized that, because a goal programming problem is to be satisficed, it is possible that the solution derived may not fit, conveniently and comfortably, into the concept of optimization or efficiency (i.e., nondominated solutions) as used by more conventional forms of mathematical modeling. This is because, in goal programming, we seek a useful, practical, implementable, and attainable solution rather than one satisfying the mathematician's desire for global optimality. (However, if one wishes, it is relatively trivial to develop efficient, or nondominated, solutions for any goal programming problem. That matter is briefly described in a section to follow.)

C. A Brief List of Applications


Goal programming's label as the workhorse of multiple-objective optimization has been achieved by its successful solution of important real-world problems over a period of more than 50 years. Included among these applications are:

The analysis of executive compensation for General Electric during the 1950s
The design and deployment of the antennas for the Saturn II launch vehicle as employed in the Apollo manned moon-landing program
The determination of a siting scheme for the Patriot Air Defense System
Decisions within fisheries in the United Kingdom
A means to audit transactions within the financial sector (e.g., for the Commercial Bank of Greece)
The design of acoustic arrays for U.S. Navy torpedoes

as well as a host of problems in the general areas of agriculture, finance, engineering, energy, and resource allocation.


D. Overview of Material to Follow


In this article, the topic of goal programming is covered in a brief but comprehensive manner. Sections to follow discuss the past, present, and future of goal programming as well as the models and algorithms for its implementation. The reader having previous exposure to the original goal programming approach will (or should) immediately notice the many significant changes and extensions that occurred during the 1990s. As just one example, powerful and practical hybrid goal programming and genetic algorithm modeling and solution methods will be discussed. Readers seeking more detailed explanations of any of the material covered herein are referred to the Bibliography at the end of this article.



II. HISTORICAL SKETCH



As mentioned, goal programming was conceived by Abraham Charnes and William Cooper nearly a half century ago. The tool was extended and enhanced by their students and, later, by other investigators, most notably Ijiri, Jääskeläinen, Huss, Ignizio, Gass, Romero, Tamiz, and Jones. In its original form, goal programming was strictly limited to linear multiple-objective problems. Ignizio, in the 1960s, extended the method to both nonlinear and integer models, developed the associated algorithms for these extensions, and successfully applied them to a number of important real-world problems, including, as previously mentioned, the design of the antenna systems for the Saturn II launch vehicle as employed in the Apollo manned moon-landing program. During that same period, and in conjunction with Paul Huss, Ignizio developed a sequential algorithm that permits one to extend, with minimal modification, any single-objective optimization software package to the solution of any class of goal programming models (the approach was also developed, independently, by Dauer and Kruger). Later in that same decade, Ignizio developed the concept of the multidimensional dual, providing goal programming with an effective economic interpretation of its results as well as a means to support sensitivity and postoptimality analysis.

Huss and Ignizio's contributions in engineering, coupled with the work of Charnes, Cooper, Ijiri, Jääskeläinen, Gass, Romero, Tamiz, Jones, Lee, Olson, and others in management science, served to motivate the interest in multiple-objective optimization that continues today. Goal programming is the most widely applied tool of multiple-objective optimization/multicriteria decision making. However, today's goal programming models, methods, and algorithms differ significantly from those employed even in the early 1990s. Goal programming, as discussed later, may be combined with various tools from the artificial intelligence sector (most notably genetic algorithms and neural networks) so as to provide an exceptionally robust and powerful means to model, solve, and analyze a host of real-world problems. In other words, today's goal programming, while maintaining its role as the workhorse of multiple-objective decision analysis, is a much different tool than that described in most textbooks, even those published relatively recently.

III. THE MULTIPLEX MODEL

A. Numerical Illustrations

Any single-objective problem, and most multiple-objective ones, can be placed into a model format that has been designated as the multiplex model, and then solved via the most appropriate version of a multiplex (or sequential goal programming) algorithm. For example, consider a conventional (albeit simple) linear programming problem taking on the following traditional form:

Maximize z = 10x1 + 4x2   (1)
Subject to:
x1 + x2 ≤ 100   (2)
x2 ≥ 4   (3)
x ≥ 0   (4)

Ignoring the fact that this undemanding single-objective model can be solved by inspection, let us transform it into the multiplex form for the sake of illustration. To do so, we add a negative deviation variable (ηi) to, and subtract a positive deviation variable (ρi) from, each constraint. In addition, we transform the maximizing objective function into a minimizing form by simply multiplying the original objective function by minus one. The resultant model, in multiplex form, can be written:

Lexicographically minimize U = {(ρ1 + η2), (-10x1 - 4x2)}   (5)
Satisfy:
x1 + x2 + η1 - ρ1 = 100   (6)
x2 + η2 - ρ2 = 4   (7)
x, η, ρ ≥ 0   (8)

The new variables (i.e., the negative and positive deviation variables that have been added to the constraints) indicate that a solution to the problem may result, for a given constraint i, in a negative deviation (ηi > 0), a positive deviation (ρi > 0), or no deviation (ηi = ρi = 0). That is to say that we can underachieve a goal (be it a hard or soft constraint), overachieve it, or precisely satisfy it. In the multiplex formulation, the deviation variables that are to be minimized appear in the first (highest priority) term of the achievement function, function (5). While this new formulation may appear unusual (at least to those schooled in traditional, single-objective optimization), it provides an accurate representation of the linear programming problem originally posed.

To appreciate this, examine the achievement function, as represented by formula (5). The multiplex achievement function is a vector, rather than a scalar as in conventional single-objective optimization (e.g., linear programming). The terms in this vector are ordered according to priority. The first term [i.e., ρ1 + η2 in function (5)] is reserved for the unwanted deviation variables for all rigid constraints, or hard goals, restrictions that supposedly must be satisfied for the solution to be deemed feasible. Any solution in which this first term takes on a value of zero is thus, in math programming terms, a feasible solution. In goal programming, such a solution is deemed implementable, meaning that it could actually be implemented in the real-world problem under consideration. Once the first term has been minimized, the next term (the second term, or -10x1 - 4x2 in this case) can be dealt with. The algorithm will seek a solution that minimizes the value of this second term, but this must be accomplished without degrading the value already achieved in the higher priority term. This is the manner in which one seeks the lexicographic minimum of an ordered vector.

Again, this formulation may appear unusual, but it not only accurately represents the linear programming (LP) problem, it also indicates the way in which most commercial software actually solves LP models. Specifically, LP problems are generally solved by the two-phase simplex algorithm, wherein the first phase attempts to find a feasible solution and the second seeks an optimal solution that does not degrade the feasibility achieved in phase 1. Multiplex algorithms simply extend this notion to any number of phases, according to the formulation employed to represent the given problem.
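To make the mechanics concrete, the phase-by-phase logic just described can be reproduced with any off-the-shelf LP solver. The sketch below (our own illustration, not code from the article) solves the multiplex model (5)-(8) as reconstructed above using SciPy's linprog: priority level 1 is minimized first, and its attained value is then locked in as a constraint while the second level is minimized.

```python
# Sequential (multiphase) solution of the multiplex model (5)-(8); a sketch only.
from scipy.optimize import linprog

# Variable order: v = [x1, x2, eta1, rho1, eta2, rho2]; all variables >= 0 by default.
A_eq = [[1, 1, 1, -1, 0,  0],    # (6)  x1 + x2 + eta1 - rho1 = 100
        [0, 1, 0,  0, 1, -1]]    # (7)  x2 + eta2 - rho2 = 4
b_eq = [100, 4]

# Priority level 1: minimize the unwanted deviations of the hard goals (rho1 + eta2).
c1 = [0, 0, 0, 1, 1, 0]
p1 = linprog(c1, A_eq=A_eq, b_eq=b_eq)

# Priority level 2: minimize -10*x1 - 4*x2 without degrading the level-1 value.
# A small tolerance guards against declaring the locked level infeasible numerically.
c2 = [-10, -4, 0, 0, 0, 0]
p2 = linprog(c2, A_ub=[c1], b_ub=[p1.fun + 1e-9], A_eq=A_eq, b_eq=b_eq)

print("achievement vector:", (round(p1.fun, 6), round(p2.fun, 6)))
print("x1, x2 =", p2.x[0], p2.x[1])
```

This is exactly the pattern of the two-phase simplex method, extended to as many phases as the achievement function has terms.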

B. Multiplex Form of the Goal Programming Problem

While any single-objective optimization problem can be represented in a manner similar to that described above, our interest lies in multiple-objective optimization and, more specifically, in goal programming (GP). Consequently, let us examine the multiplex model for a specific GP problem. Consider the illustrative problem represented below, wherein there are two objective functions to be optimized, functions (9) and (10), and a set of constraints to be satisfied. For purposes of discussion, we assume that the constraints of (11) through (13) are hard, or rigid.

Maximize z1 = 3x1 + x2 (profit per time period)   (9)
Maximize z2 = 2x1 + 3x2 (market shares captured per time period)   (10)
Satisfy:
2x1 + x2 ≤ 50 (raw material limitations)   (11)
x1 ≤ 20 (market saturation level, product 1)   (12)
x2 ≤ 30 (market saturation level, product 2)   (13)
x ≥ 0 (nonnegativity conditions)   (14)

If the reader cares to graph this problem, he or she will see that there is no way in which to optimize both objectives simultaneously, as is the case in virtually any nontrivial, real-world problem. However, the purpose of goal programming is to find a solution, or solutions, that simultaneously satisfice all objectives. But first these objectives must be transformed into goals. To transform an objective into a goal, one must assign some estimate (usually the decision maker's preliminary estimate) of the aspired level for that goal. Let's assume that the aspiration level for profit [i.e., function (9)] is 50 units while that of market shares [i.e., function (10)] is 80 units. Consequently, the multiplex model for the goal programming problem is shown below, wherein the two transformed objectives now appear as (soft) goals (19) and (20), respectively:

Lexicographically minimize U = {(ρ1 + ρ2 + ρ3), (η4 + η5)}   (15)
Satisfy:
2x1 + x2 + η1 - ρ1 = 50 (raw material limitations)   (16)
x1 + η2 - ρ2 = 20 (market saturation level, product 1)   (17)
x2 + η3 - ρ3 = 30 (market saturation level, product 2)   (18)
3x1 + x2 + η4 - ρ4 = 50 (profit goal)   (19)
2x1 + 3x2 + η5 - ρ5 = 80 (market shares goal)   (20)
x, η, ρ ≥ 0 (nonnegativity conditions)   (21)
The multiplex model for the problem indicates, via the achievement function, that the first priority is to satisfy the hard goals of (16), (17), and (18). Note that the nonnegativity conditions of (21) will be implicitly satisfied by the algorithm. Once the deviation variables (i.e., ρ1, ρ2, and ρ3) associated with those hard goals have been minimized (albeit not necessarily to a value of zero), the associated multiplex (or sequential GP) algorithm proceeds to minimize the unwanted deviations (i.e., η4 and η5) associated with the profit and market share goals, while not degrading the values of any higher ordered achievement function terms.

Figure 1 serves to indicate the nature of the goal programming problem that now exists. Note that the solid lines represent the constraints, or hard goals, while the dashed lines indicate the original objectives, now transformed into soft goals. It is particularly important to note that those solutions satisficing this problem form a region, bounded by points A, B, C, and D, and including all points within and on the edges of the bounded region. This contrasts with conventional optimization in which the optimal solution is most often a single point.

Figure 1 The satisficing region for the example.

The achievement functions for the linear programming and goal programming illustrations posed previously represent but two possibilities from a large and growing number of choices. We describe a few of the more common achievement functions in the next section.

IV. FORMS OF THE ACHIEVEMENT FUNCTION

The three earliest, and still most common, forms of the multiplex achievement function are listed here and discussed in turn:

1. Archimedean (also known as weighted goal programming)
2. Non-Archimedean (also known as lexicographic, or preemptive, goal programming)
3. Chebyshev (also known as fuzzy programming).

A. Archimedean Goal Programming

The achievement function for an Archimedean GP model consists of exactly two terms. The first term always contains all the unwanted deviation variables associated with the hard goals (rigid constraints) of the problem. The second term lists the unwanted deviation variables for all soft goals (flexible constraints), each weighted according to importance. Returning to our previous goal programming formulation, assume that the market shares goal [i.e., function (20)] is considered, by the decision maker, to be twice as important as the profit goal. Consequently, the Archimedean form of the achievement function could be written as follows. Notice carefully that η5 has now been weighted by 2:

Lexicographically minimize U = {(ρ1 + ρ2 + ρ3), (η4 + 2η5)}   (22)

Note that, as long as the unwanted deviations are minimized, we have achieved a satisficing solution. This may mean that we reach a satisficing solution, for our example, whether the profit achieved is 50 or more units. This implies a one-sided measure of achievement. If we wish to reward an overachievement of the profit, then either the model should be modified or we should employ one of the more recent developments in achievement function formatting. The latter matter is discussed in a forthcoming section. For the moment, however, our assumption is that we simply seek a solution to the achievement function given in Eq. (22), one that provides a satisficing result. Realize that Archimedean, or weighted, goal programming makes sense only if you believe that numerical weights can be assigned to the nonachievement of each soft goal. In many instances, goals are noncommensurable and thus other forms of the achievement function are more realistic. One of these is the non-Archimedean, or lexicographic, achievement function.
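Because the weighted achievement function (22) contains a single commensurable term once the hard goals are dealt with, the Archimedean model reduces to one LP over the deviation variables. A minimal sketch follows (our own construction, assuming SciPy; the hard goals (16)-(18) are kept as ordinary constraints here because they can be met exactly for this data).

```python
# Archimedean (weighted) goal programming for achievement function (22); a sketch only.
from scipy.optimize import linprog

# Variable order: v = [x1, x2, eta4, rho4, eta5, rho5]
c = [0, 0, 1, 0, 2, 0]                 # minimize eta4 + 2*eta5 (market share weighted by 2)
A_ub = [[2, 1, 0, 0, 0, 0],            # 2*x1 + x2 <= 50   (raw material)
        [1, 0, 0, 0, 0, 0],            # x1 <= 20          (saturation, product 1)
        [0, 1, 0, 0, 0, 0]]            # x2 <= 30          (saturation, product 2)
b_ub = [50, 20, 30]
A_eq = [[3, 1, 1, -1, 0, 0],           # (19) 3*x1 + x2 + eta4 - rho4 = 50
        [2, 3, 0, 0, 1, -1]]           # (20) 2*x1 + 3*x2 + eta5 - rho5 = 80
b_eq = [50, 80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("weighted achievement:", res.fun, " x1, x2 =", res.x[0], res.x[1])
```

For this particular data the weighted achievement is driven to zero, so the solver simply returns one point of the satisficing region ABCD of Fig. 1; the weights begin to matter only when the goals cannot all be attained.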

B. Non-Archimedean Goal Programming

The achievement function for a non-Archimedean GP model consists of two or more terms. As in the case of the Archimedean form, the first term always contains the unwanted deviation variables for all the hard goals. After that, the deviation variables for all soft goals are arranged according to priority, more specifically, a preemptive priority. To demonstrate, consider the problem previously posed as an Archimedean GP. Assume that we are unable, or unwilling, to assign weights to the profit or market share goals. But we are convinced that the capture of market share is essential to the survival of the firm. It might then make sense to assign a higher priority to market share than to profit, resulting in the non-Archimedean achievement function given here:

Lexicographically minimize U = {(ρ1 + ρ2 + ρ3), (η5), (η4)}   (23)

While the achievement function of Eq. (23) consists of but a single deviation variable for the second and third terms, the reader should understand that several deviation variables may appear in a given term if you are able to weight each according to its perceived importance.
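The lexicographic minimization of (23) can be carried out with the same sequential device used earlier, now with a loop over priority levels: each level is minimized in turn and then locked in before the next is considered. The sketch below is our own illustration (SciPy assumed); the hard goals (16)-(18), attainable exactly for this data, are again kept as ordinary constraints rather than as a separate first level.

```python
# Sequential solution of the non-Archimedean achievement function (23); a sketch only.
from scipy.optimize import linprog

# Variable order: v = [x1, x2, eta4, rho4, eta5, rho5]
A_ub = [[2, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]]
b_ub = [50, 20, 30]                              # hard goals (16)-(18)
A_eq = [[3, 1, 1, -1, 0, 0], [2, 3, 0, 0, 1, -1]]
b_eq = [50, 80]                                  # soft goals (19)-(20)

priorities = [[0, 0, 0, 0, 1, 0],                # minimize eta5 (market share) first
              [0, 0, 1, 0, 0, 0]]                # then minimize eta4 (profit)
achieved = []
for c in priorities:
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    achieved.append(round(res.fun, 6))
    A_ub = A_ub + [c]                            # lock this level in before moving on
    b_ub = b_ub + [res.fun + 1e-9]

print("achievement vector:", achieved, " x1, x2 =", res.x[0], res.x[1])
```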

C. Chebyshev (Fuzzy) Goal Programming

There are numerous forms of Chebyshev, or fuzzy, goal programming, but we restrict our coverage to just one in this subsection. The notion of Chebyshev GP is that the solution sought is the one that minimizes the maximum deviation from any single soft goal. Returning to our profit and market shares model, one possible transformation is as follows:

Minimize δ   (24)
Satisfy:
2x1 + x2 ≤ 50 (raw material limitations)   (25)
x1 ≤ 20 (market saturation level, product 1)   (26)
x2 ≤ 30 (market saturation level, product 2)   (27)
(U1 - z1)/(U1 - L1) ≤ δ   (28)
(U2 - z2)/(U2 - L2) ≤ δ   (29)
δ, x ≥ 0   (30)

where:
Uk = the best possible value for objective k (e.g., optimize the problem without regard to any other objectives but objective k)
Lk = the worst possible value for objective k (e.g., optimize the problem without regard to objective k)
δ = a dummy variable representing the worst deviation level
zk = the value of the function representing the kth objective (e.g., z1 = 3x1 + x2 and z2 = 2x1 + 3x2).

Given the specific model of (24) through (30), the resulting Chebyshev formulation is simply:

Minimize δ
Satisfy:
2x1 + x2 ≤ 50 (raw material limitations)
x1 ≤ 20 (market saturation level, product 1)
x2 ≤ 30 (market saturation level, product 2)
(70 - 3x1 - x2)/(70 - 60) ≤ δ
(110 - 2x1 - 3x2)/(110 - 70) ≤ δ
δ, x ≥ 0

This model may be easily transformed into the multiplex form by means of adding the necessary deviation variables and forming the associated achievement function. However, it is clear that the Chebyshev model, as shown, is simply a single-objective optimization problem in which we seek to minimize a single variable, δ. In other words, we seek to minimize the single worst deviation from any one of the problem goals/constraints. While the Archimedean, non-Archimedean, and Chebyshev forms of the achievement function are the most common, other, newer versions may offer certain advantages. As mentioned, these newer forms of the achievement function are briefly described in a later section on extensions of GP.
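Once the normalized-deviation conditions are rearranged, the numeric Chebyshev model above is an ordinary LP in x1, x2, and δ. A minimal sketch (our own, assuming SciPy):

```python
# Chebyshev (MINMAX) goal programming for the numeric model above; a sketch only.
from scipy.optimize import linprog

# Variable order: v = [x1, x2, delta]
c = [0, 0, 1]                                   # minimize delta
A_ub = [[ 2,  1,   0],                          # 2*x1 + x2 <= 50
        [ 1,  0,   0],                          # x1 <= 20
        [ 0,  1,   0],                          # x2 <= 30
        [-3, -1, -10],                          # (70 - 3*x1 - x2)/10    <= delta
        [-2, -3, -40]]                          # (110 - 2*x1 - 3*x2)/40 <= delta
b_ub = [50, 20, 30, -70, -110]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print("worst normalized deviation:", res.fun, " x1, x2 =", res.x[0], res.x[1])
```

For this data the optimum balances the two normalized deviations (δ = 0.5 at x1 = 15, x2 = 20), which is precisely the MINMAX behavior described above.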

V. GENERAL FORM OF THE MULTIPLEX MODEL

Whatever the form of the achievement function, the multiplex model takes on the following general form:

Lexicographically minimize U = {c(1)Tv, c(2)Tv, ..., c(K)Tv}   (31)
Satisfy:
F(v) = b   (32)
v ≥ 0   (33)

where:
K = the total number of terms (e.g., priority levels) in the achievement function
F(v) = the problem goals, of either linear or nonlinear form, and in which negative and positive deviation variables have been augmented
b = the right-hand side vector
v = the vector of all structural (e.g., xj) and deviation (i.e., ηi and ρi) variables
c(k) = the vector of coefficients, or weights, of v in the kth term of the achievement function
c(k)T = the transpose of c(k).

We designate functions (31) through (33) as the primal form of the multiplex model. Starting with this primal form, we may easily derive the dual of any goal programming, or multiplex, model.

VI. THE MULTIDIMENSIONAL DUAL


The real power underlying conventional single-objective mathematical programming, particularly linear programming, lies in the fact that there exists a dual for any conventional mathematical programming model. For example, the dual of a maximizing LP model, subject to type I (≤) constraints, is a minimizing LP model, subject to type II (≥) constraints. The property of duality allows one to exploit, for example, LP models so as to develop additional theories and algorithms, as well as provide a useful economic interpretation of the dual variables. One of the alleged drawbacks of goal programming has been the lack of a dual formulation. It is difficult to understand why this myth has endured, as the dual of goal programming problems was developed by Ignizio in the early 1970s. It was later extended to the more general multiplex model in the 1980s. However, space does not permit an exhaustive summary of the multidimensional dual (MDD) and thus we present only a brief description.

We listed the general form of the multiplex model in (31) through (33). Simply for the sake of discussion, let us examine the dual formulation of a strictly linear multiplex model, taking on the primal form listed below:

PRIMAL:
Lexicographically minimize U = {c(1)Tv, c(2)Tv, ..., c(K)Tv}   (34)
Satisfy:
Av = b   (35)
v ≥ 0   (36)

where:
K = the total number of terms (e.g., priority levels) in the achievement function
v = the vector of all structural (e.g., xj) and deviation (i.e., ηi and ρi) variables
c(k) = the vector of coefficients, or weights, of v in the kth term of the achievement function
Av = b are the linear constraints and goals of the problem, as transformed via the introduction of negative and positive deviation variables.

If you are familiar with single-objective optimization, you may recall that the dual of a linear programming model is still a linear programming model. However, in the case of a GP, or multiplex, model, its dual, the multidimensional dual, takes on the form of a model in which (1) the constraints have multiple, prioritized right-hand sides and (2) the objective function is in the form of a vector. More specifically, the general form of the multidimensional dual is given as:

DUAL:
Find Y so as to lexicographically maximize w = bTY   (37)
Subject to:
ATY ⪯ {c(1), c(2), ..., c(K)}   (38)
Y, the dual variables, are unrestricted and multidimensional   (39)

Note that the symbol ⪯ indicates the lexicographic nature of the inequalities involved (i.e., the left-hand side of each function is lexicographically less than or equal to the multiple right-hand sides). Physically, this means that we first seek a solution subject to the first column of right-hand side elements. Next, we find a solution subject to the second column of right-hand side elements, but one that cannot degrade the solution achieved for the previous column of right-hand side values. We continue in this manner until a complete set of solutions has been obtained for all right-hand side values.

The transformation from primal to dual may be summarized as follows:
A lexicographically minimized achievement function for the primal translates into a set of lexicographically ordered right-hand sides for the dual.
If the primal is to be lexicographically minimized, the dual is to be lexicographically maximized.
For every priority level (achievement function term) in the primal, there is an associated vector of dual variables in the dual.
Each element of the primal achievement function corresponds to an element in the right-hand side of the dual.
The technological coefficients of the dual are the transpose of the technological coefficients of the primal.

The development of the MDD for goal programming (or general multiplex) models leads immediately to

both a means for economic interpretation of the dual variable matrix as well as supporting algorithms for solution. A comprehensive summary of both aspects, as well as numerous illustrative numerical examples, is provided in the references.


VII. ALGORITHMS FOR SOLUTION

A. Original Approach

As noted, algorithms exist for the solution of goal programming problems (as well as any problem that can be placed into the multiplex format) in either the primal or dual form. The original emphasis of goal programming was, as discussed, on linear goal programs, and the GP algorithms derived then (by Charnes and Cooper, and their students) were multiphase simplex algorithms. That is, they were based on a straightforward extension of the two-phase simplex algorithm.

B. Serial Algorithms: Basic Approach

Assuming that a serial algorithm (one in which only a single solution exists at a given time, as is the case with the well-known simplex algorithm for linear programming) is used to solve a goal programming problem, the fundamental steps are as follows:

Step 1. Transform the problem into the multiplex format.
Step 2. Select a starting solution. [In the case of a linear model, the starting solution is often the one in which all the structural variables (i.e., the xj's) are set to zero.]
Step 3. Evaluate the present solution (i.e., determine the achievement function vector).
Step 4. Determine if a termination criterion has been satisfied. If so, stop the search process. If not, go to step 5. [Note that, in the case of linear models, the shadow prices (dual variable values) are used to determine if optimality has been reached.]
Step 5. Explore the local region about the present best solution to determine the best direction of movement. (For linear models, we move in the direction indicated by the best single shadow price.)
Step 6. Determine how far to move in the direction of best improvement, and then do so. (In linear models, this is determined by the so-called Theta, or blocking variable, rule.)
Step 7. Repeat steps 3 through 6 until a termination rule is satisfied.

C. Parallel Algorithms: Basic Approach

The references provide details and illustrations of the use of such serial multiplex algorithms for linear, nonlinear, and integer goal programming problems. However, with the advent of hybrid multiplex algorithms, the use of parallel algorithms is both possible and, in the minds of some, preferable. A parallel algorithm follows somewhat the same steps as listed previously for serial algorithms. The primary difference is that rather than employing a single solution point at each step, multiple points, or populations of solutions, exist at each iteration (or generation) of the algorithm. There are two very important advantages to the employment of parallel algorithms in goal programming. The first is that, if the algorithm is supported by parallel processors, the speed of convergence to the final solution is significantly increased. The second advantage is less well known, but quite likely even more significant. Specifically, it would appear from the evidence so far that solutions produced by certain types of parallel algorithms (more specifically, those employing evolutionary operations) are far more stable, and less risky, than those derived by conventional means.

D. Hybrid Algorithm Employing Genetic Algorithms


The basics of a hybrid goal programming/genetic algorithm for solving multiplex models are listed below. Such an approach has been found, in actual practice, to achieve exceptionally stable solutions, and to do so rapidly. Furthermore, it is particularly amenable to parallel processing.

Step 1. Transform the problem into the multiplex format.
Step 2. Randomly select a population of trial solutions (typically 20 to a few hundred initial solutions will compose the first generation).
Step 3. Evaluate the present solutions (i.e., determine the achievement function vector for each member of the present population).
Step 4. Determine if a termination criterion has been satisfied. If so, stop the search process. If not, go to step 5.
Step 5. Utilize the genetic algorithm operations of selection, mating, reproduction, and mutation to develop the next generation of solutions (see the references for details on genetic algorithms).
Step 6. Repeat steps 3 through 5 until a termination rule is satisfied.

The parallel algorithm developed by combining goal programming with genetic algorithms offers a convenient way in which to address any single- or multiple-objective optimization problem, be it linear, nonlinear, or discrete in nature. The single disadvantage is that global optimality cannot be ensured.
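The following toy sketch (our own construction, not the authors' software) illustrates the hybrid idea: a small real-coded genetic algorithm whose fitness is the two-term achievement vector of model (16)-(21), with tuple comparison again providing the lexicographic ordering. Population size, operators, and parameter values are arbitrary illustrative choices.

```python
# A toy goal programming / genetic algorithm hybrid for model (16)-(21); a sketch only.
import random

def achievement(x1, x2):
    hard = max(0.0, 2*x1 + x2 - 50) + max(0.0, x1 - 20) + max(0.0, x2 - 30)
    soft = max(0.0, 50 - (3*x1 + x2)) + max(0.0, 80 - (2*x1 + 3*x2))
    return (hard, soft)

def evolve(pop_size=40, generations=60, lo=0.0, hi=40.0, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda ind: achievement(*ind))   # lexicographic fitness
        elite = ranked[: pop_size // 2]                           # selection (truncation)
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)                           # mating
            w = rng.random()
            child = [w * a[i] + (1 - w) * b[i] for i in range(2)] # blend crossover
            if rng.random() < 0.2:                                # mutation
                j = rng.randrange(2)
                child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 2.0)))
            children.append(tuple(child))
        pop = elite + children                                    # next generation
    best = min(pop, key=lambda ind: achievement(*ind))
    return best, achievement(*best)

print(evolve())
```

Because every member of the population is evaluated independently, the evaluation step parallelizes trivially, which is the first of the two advantages noted above.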


VIII. GOAL PROGRAMMING AND UTILITY OPTIMIZATION


Although goal programming is developed within a satisficing framework, its different variants can be interpreted from the point of view of utility theory, as highlighted in this section. This type of analysis helps to clarify the different goal programming variants as well as to provide the foundations of certain extensions of the achievement function.

Let us start with lexicographic goal programming, where the noncompatibility between lexicographic orderings and utility functions is well known. To properly assess the effect of this property on the pragmatic value of the approach, we must understand that the reason for this noncompatibility is exclusively the noncontinuity of preferences underlying lexicographic orderings. Therefore, a worthwhile matter of discussion is not to argue against lexicographic goal programming because it implicitly assumes a noncontinuous system of preferences, but to determine whether the characteristics of the problem situation justify a system of continuous preferences. Hence, the possible problem associated with the use of the lexicographic variant does not lie in its noncompatibility with utility functions, but in the careless use of this approach. In fact, in contexts where the decision maker's preferences are clearly continuous, a model based on nonpreemptive weights should be used. Moreover, it is also important to note that a large number of priority levels can lead to a solution where every goal, except those situated in the first two or three priority levels, is redundant. In this situation, the possible poor performance of the model is not due to the lack of utility meaning of the achievement function but to an excessive number of priority levels or to overly optimistic aspiration levels (i.e., levels close to the ideal values of the goals).

Regarding weighted (Archimedean) goal programming, we know that this option corresponds to the maximization of a separable and additive utility function in the goals considered. Thus, the Archimedean solution provides the maximum aggregate achievement among the goals considered. Consequently, it seems advisable to test the separability between attributes before the decision problem is modeled with the help of this variant.

Regarding Chebyshev goal programming, it is recognized that underlying this variant is a utility function in which the maximum (worst) deviation level is minimized. In other words, the Chebyshev option corresponds to the optimization of a MINMAX utility function, for which the most balanced solution among the achievements of the different goals is obtained. These insights are important for the appropriate selection of the goal programming variant. In fact, the appropriate variant should not be chosen in a mechanistic way but in accordance with the decision maker's structure of preferences. These results also give theoretical support to the extensions of the achievement function discussed in the next section.

IX. EXTENSIONS

A. Transforming Satisficing Solutions into Efficient Solutions


The satisficing logic underlying goal programming implies that its formulations may produce solutions that do not meet classic optimality requirements like efficiency; that is, solutions for which the achievement of at least one of the goals can be improved without degrading the achievement of the others. However, if one wishes, it is very simple to force the goal programming approach to produce efficient solutions. To secure efficiency, it is enough to augment the achievement function of the multiplex formulation with an additional priority level in which the sum of the wanted deviation variables is maximized. For instance, in the example plotted in Fig. 1 the closed domain ABCD represents the set of satisficing solutions and the edge BD the set of satisficing and efficient solutions. Thus, if the achievement function of the multiplex model (15) through (21) is augmented with the term (ρ4 + ρ5) placed in a third priority level, then the lexicographic process will produce solution point B, a point that is satisficing and efficient at the same time. Note that there are more refined methods capable of distinguishing the efficient goals from the inefficient ones, as well as simple techniques to restore the efficiency of the goals previously classified as inefficient. Technical details about this type of procedure can be found in the Bibliography.
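The device is easy to reproduce. Because the first two priority levels of model (15)-(21) can be driven to zero for this data (the region ABCD is nonempty), the added third level reduces to maximizing the wanted deviations ρ4 + ρ5 = (3x1 + x2 - 50) + (2x1 + 3x2 - 80) over the satisficing region. A sketch (our own, assuming SciPy):

```python
# Restoring efficiency: maximize the wanted deviations over the satisficing region ABCD.
from scipy.optimize import linprog

c = [-5, -4]                       # maximize 5*x1 + 4*x2 (the constant -130 is dropped)
A_ub = [[ 2,  1],                  # 2*x1 + x2 <= 50
        [ 1,  0],                  # x1 <= 20
        [ 0,  1],                  # x2 <= 30
        [-3, -1],                  # 3*x1 + x2 >= 50   (profit goal held at its aspiration)
        [-2, -3]]                  # 2*x1 + 3*x2 >= 80 (market share goal held)
b_ub = [50, 20, 30, -50, -80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print("satisficing and efficient point:", res.x)   # point B of Fig. 1 (x1 = 10, x2 = 30)
```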

B. Extensions of the Achievement Function


According to the arguments developed in Section VIII, from a preferential point of view the weighted and the Chebyshev goal programming solutions represent two opposite poles. Since the weighted option maximizes the aggregate achievement among the goals considered, the results obtained with this option can be biased against the performance achieved by one particular goal. On the other hand, because of the preponderance of just one of the goals, the Chebyshev model can provide results with a poor aggregate performance across the different goals. The extreme character of both solutions can lead in some cases to solutions that are unacceptable to the decision maker. A possible modeling solution for this type of problem consists of compromising the aggregate achievement of the Archimedean model with the MINMAX (balanced) character of the Chebyshev model. Thus, the example of the earlier section can be reformulated with the help of the following multiplex extended goal programming model:

Lexicographically minimize U = {(ρ1 + ρ2 + ρ3), [(1 - Z)δ + Z(η4 + η5)]}   (40)
Satisfy:
2x1 + x2 + η1 - ρ1 = 50 (raw material limitations)   (41)
x1 + η2 - ρ2 = 20 (market saturation level, product 1)   (42)
x2 + η3 - ρ3 = 30 (market saturation level, product 2)   (43)
3x1 + x2 + η4 - ρ4 = 50 (profit goal)   (44)
2x1 + 3x2 + η5 - ρ5 = 80 (market shares goal)   (45)
(1 - Z)η4 - δ ≤ 0   (46)
(1 - Z)η5 - δ ≤ 0   (47)
δ, x, η, ρ ≥ 0   (48)

where the parameter Z weights the importance attached to the minimization of the sum of unwanted deviation variables. For Z = 0, we have a Chebyshev goal programming model. For Z = 1 the result is a weighted goal programming model, and for other values of the parameter Z belonging to the interval (0, 1) intermediate solutions between the solutions provided by the two goal programming options are obtained. Hence, through variations in the value of the parameter Z, compromises between the solution of maximum aggregate achievement and the MINMAX solution can be obtained. In this sense, this extended formulation allows for a combination of goal programming variants that, in some cases, can reflect a decision maker's actual preferences with more accuracy than any single variant.

Other extensions of the achievement function have been derived by taking into account that in all traditional goal programming formulations there is the underlying assumption that any unwanted deviation with respect to its aspiration level is penalized according to a constant marginal penalty. In other words, any marginal change is of equal importance no matter how distant it is from the aspiration level. This type of formulation only allows for a linear relationship between the value of the unwanted deviation and the penalty contribution. This corresponds to the case of the achievement functions underlying a weighted goal programming model or each priority level of a lexicographic model. This type of function has been termed a one-sided penalty function, when only one deviation variable is unwanted, or a V-shaped penalty function, when both deviation variables are unwanted. However, other penalty function structures have been used. Thus, we have two-sided penalty functions, when the decision maker feels satisfied when the achievement of a given goal lies within a certain aspiration level interval, or U-shaped penalty functions, if the marginal penalties increase monotonically with respect to the aspiration levels. Several authors have proposed improvements and refinements to the goal programming model with penalty functions. Details about this type of modeling are described in the Bibliography.
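The sketch below (our own construction) solves a version of model (40)-(48) for several values of Z. Two assumptions are ours alone: the hard goals (41)-(43) are kept as ordinary constraints because they can be met exactly for this data, and deliberately optimistic aspiration levels (the ideal values, 70 for profit and 110 for market share) replace 50 and 80 so that the soft goals actually conflict and the effect of Z becomes visible. SciPy is assumed.

```python
# Extended goal programming with the control parameter Z; a sketch on assumed data only.
from scipy.optimize import linprog

def extended_gp(Z, t_profit=70, t_share=110):
    # Variable order: v = [x1, x2, eta4, rho4, eta5, rho5, delta]
    c = [0, 0, Z, 0, Z, 0, 1 - Z]              # (1 - Z)*delta + Z*(eta4 + eta5)
    A_ub = [[2, 1, 0, 0, 0, 0, 0],             # 2*x1 + x2 <= 50
            [1, 0, 0, 0, 0, 0, 0],             # x1 <= 20
            [0, 1, 0, 0, 0, 0, 0],             # x2 <= 30
            [0, 0, 1 - Z, 0, 0, 0, -1],        # (46) (1 - Z)*eta4 - delta <= 0
            [0, 0, 0, 0, 1 - Z, 0, -1]]        # (47) (1 - Z)*eta5 - delta <= 0
    b_ub = [50, 20, 30, 0, 0]
    A_eq = [[3, 1, 1, -1, 0, 0, 0],            # profit goal at aspiration t_profit
            [2, 3, 0, 0, 1, -1, 0]]            # market share goal at aspiration t_share
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[t_profit, t_share])
    return res.x[0], res.x[1], res.x[2], res.x[4]   # x1, x2, eta4, eta5

for Z in (0.0, 0.5, 1.0):                      # Z = 0: MINMAX model; Z = 1: weighted model
    print(Z, extended_gp(Z))
```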

C. Goal Programming and MCDM


Goal programming is but one of several approaches possible within the broader field of multiple criteria decision making (MCDM). It is a common practice within MCDM to present its different approaches in an independent, disjoint manner, giving the impression that each approach is completely autonomous. However, this is not the case. In fact, significant similarities exist among most of the MCDM methods. In this sense, the multiplex approach presented in Section V is a good example of a goal programming structure encompassing several single- and multiple-objective optimization methods. Furthermore, goal programming can provide a unifying basis for most MCDM models and methods. With this purpose, extended lexicographic goal programming has recently been proposed. To illustrate this concept, consider the representation provided in Eqs. (49) through (52):

Lexicographically minimize
U = { λ1D1 + μ1 Σ_{i∈h1} (αiηi + βiρi)^p, ..., λjDj + μj Σ_{i∈hj} (αiηi + βiρi)^p, ..., λQDQ + μQ Σ_{i∈hQ} (αiηi + βiρi)^p }   (49)

Satisfy:
αiηi + βiρi - Dj ≤ 0,  i ∈ hj,  j ∈ {1, ..., Q}   (50)
fi(x) + ηi - ρi = ti,  i ∈ {1, ..., q}   (51)
x ∈ F   (52)

where p is a real number belonging to the interval [1, ∞) or equal to ∞. Parameters αi and βi are the weights reflecting preferential and normalizing purposes attached to the negative and positive deviation variables of the ith goal, respectively; λj and μj are control parameters; and hj represents the index set of goals placed in the jth priority level. The block of rigid constraints, x ∈ F, can be transferred to an additional first priority level in order to formulate the model within a multiplex format.

If the above structure is considered the primary model, then it is easy to demonstrate that an important number of multiple-criteria methods are just secondary models of the extended lexicographic goal programming model. Thus, the following multicriteria methods can be straightforwardly deduced just by applying different parameter specifications to the above model:

1. Conventional single-objective mathematical programming model
2. Nonlinear and linear weighted goal programming
3. Lexicographic linear goal programming
4. Chebyshev goal programming
5. Reference point method
6. Compromise programming (L1 bound) and (L∞ bound, or fuzzy programming with a linear membership function)
7. Interactive weighted Tchebycheff procedure.

The use of goal programming as a unifying framework seems interesting for at least the following reasons. The extended lexicographic goal programming model stresses similarities between MCDM methods that can help reduce the gaps between advocates of different approaches. Moreover, this unifying approach can become a useful teaching tool in the introduction of MCDM, thus avoiding the common presentation based upon a disjoint system of methods. Finally, the extended lexicographic goal programming approach allows us to model decision-making problems for which a good representation of a decision maker's preferences requires a mix of goal programming variants. In short, this type of general formulation can increase the enormous potential flexibility inherent in goal programming.

X. THE FUTURE
In the nearly half century since its development, goal programming has achieved and maintained its reputation as the workhorse of the multiple-objective optimization field. This is due to a combination of simplicity of form and practicality of approach. The more recent extensions to the approach have, thankfully, not negatively impacted these attributes. We envision a future in which goal programming will continue to utilize methods from other sectors, particularly artificial intelligence. The most significant advances, from a strictly pragmatic perspective, involve combining goal programming with evolutionary search methods, specifically genetic algorithms. Such extensions permit the rapid development of exceptionally stable solutions, the type of solutions needed in most real-world situations.


SEE ALSO THE FOLLOWING ARTICLES


Decision Support Systems • Evolutionary Algorithms • Executive Information Systems • Game Theory • Industry, Artificial Intelligence in • Model Building Process • Neural Networks • Object-Oriented Programming • Strategic Planning for/of Information Systems • Uncertainty

BIBLIOGRAPHY
Charnes, A., and Cooper, W. W. (1961). Management models and industrial applications of linear programming, Vols. 1 and 2. New York: John Wiley.
Charnes, A., and Cooper, W. W. (1977). Goal programming and multiple objective optimization. Part I. European Journal of Operational Research, Vol. 1, 39-54.
Dauer, J. P., and Kruger, R. J. (1977). An iterative approach to goal programming. Operational Research Quarterly, Vol. 28, 671-681.
Gass, S. I. (1986). A process for determining priorities and weights for large-scale linear goal programming models. Journal of the Operational Research Society, Vol. 37, 779-784.
Ignizio, J. P. (1963). S-II trajectory study and optimum antenna placement, Report SID-63. Downey, CA: North American Aviation Corporation.
Ignizio, J. P. (1976). Goal programming and extensions. Lexington Series. Lexington, MA: D. C. Heath & Company.
Ignizio, J. P. (1985). Introduction to linear goal programming. Beverly Hills, CA: Sage Publishing.
Ignizio, J. P., and Cavalier, T. M. (1994). Linear programming. Upper Saddle River, NJ: Prentice Hall.
Markowski, C. A., and Ignizio, J. P. (1983). Theory and properties of the lexicographic LGP dual. Large Scale Systems, Vol. 5, 115-121.
Romero, C. (1991). Handbook of critical issues in goal programming. Oxford, UK: Pergamon Press.
Romero, C. (2001). Extended lexicographic goal programming: A unifying approach. OMEGA, International Journal of Management Science, Vol. 29, 63-71.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, Vol. 63, 129-138.
Tamiz, M., and Jones, D. (1996). Goal programming and Pareto efficiency. Journal of Information and Optimization Sciences, Vol. 17, 291-307.
Tamiz, M., Jones, D. F., and Romero, C. (1998). Goal programming for decision making: An overview of the current state-of-the-art. European Journal of Operational Research, Vol. 111, 569-581.
Vitoriano, B., and Romero, C. (1999). Extended interval goal programming. Journal of the Operational Research Society, Vol. 50, 1280-1283.
