
A linear program (LP) is a mathematical formulation of a problem.

We define a set of decision variables which fully describe the decisions we wish to make. We then use these variables to define an objective function which we wish to minimize or maximize, and a set of constraints which restrict the decision options open to us. In a linear program, the variables must be continuous and the objective function and constraints must be linear expressions. An expression is linear if it can be written in the form c1x1 + c2x2 + ... + cnxn for some constants c1, c2, ..., cn. For example, 2x1 + 7x2 is a linear expression; x1² and sin x1 are not. We can think of an LP as having three sections: the objective function, the constraints, and the sign restrictions on the decision variables.

1. The Objective Function

The objective function is a mathematical expression for what we are trying to accomplish. This expression will always start with either "maximize" or "minimize" and be followed by a linear expression in terms of the variables. The objective function takes the following form:

    maximize (or minimize)  (Coef.)(Var.) + (Coef.)(Var.) + ... + (Coef.)(Var.)

where each coefficient (Coef.) is a real number and it is followed by the variable name (Var.). For example, let's say we are producing three products, call them p1, p2, and p3, and they sell for $2, $3, and $8 respectively. If our goal is to maximize our revenues, then our objective function would be:

    maximize 2 p1 + 3 p2 + 8 p3

2. The Constraints
In all systems that we model, we will be faced with constraints which restrict the values of our variables. We can have many constraints. Each constraint will be a linear expression of our variables, with any appropriate coefficients, followed by the type of restriction and the value of the right-hand side. We can think of a constraint looking like this:

    (Coef.)(Var.) + (Coef.)(Var.) + ... + (Coef.)(Var.)  [Constraint Sense]  RHS

The "Constraint Sense" will either be less than or equal to (≤), greater than or equal to (≥), or equal to (=). Note that we cannot have strict inequalities, i.e. we cannot use greater than (>) or less than (<). The right-hand side (RHS) will be the limiting value of this expression.

For example, we may have a budget for producing our three products. Let's assume that each product has a per unit production cost of $1, $2, and $5, respectively, and we have a budget of $300. An appropriate constraint would be:

    1 p1 + 2 p2 + 5 p3 ≤ 300

Or maybe we have demand for the products that requires that we produce a combination of at least 50 units of p1 and p2. Then we would add the following constraint:

    p1 + p2 ≥ 50

Note that not all variables need to appear in all constraints. We could also think of the above constraint as 1 p1 + 1 p2 + 0 p3 ≥ 50. Let's also assume that we have exactly 400 hours of available production time and each unit requires 2, 4, and 5 hours of production time, respectively. Then we would write a constraint to say:

    2 p1 + 4 p2 + 5 p3 = 400

3. The Sign Restrictions

We can think of the sign restrictions as additional constraints on our variables, but we treat them as a separate section because they are important and often forgotten. In this section we specify whether our variables are non-negative (≥ 0), non-positive (≤ 0), or free. The sign restrictions will take the following form:

    Var. ≥ 0   (or Var. ≤ 0)

In most practical cases, we will want our variables to be non-negative. In our example, we cannot produce negative quantities of a product. Therefore our Sign Restriction section will include these three expressions:

    p1 ≥ 0,  p2 ≥ 0,  p3 ≥ 0

In the case where we do not want to impose sign restrictions on a variable, we will not include an expression for that variable in this section, implying that the variable is free. So, putting these pieces together, we have the complete linear program for our example:

    maximize 2 p1 + 3 p2 + 8 p3
    subject to:  1 p1 + 2 p2 + 5 p3 ≤ 300
                 1 p1 + 1 p2 + 0 p3 ≥ 50
                 2 p1 + 4 p2 + 5 p3 = 400
                 p1, p2, p3 ≥ 0
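As an aside, a formulation like this can be handed directly to an off-the-shelf LP solver. Here is a minimal sketch, assuming Python with scipy is available (any LP solver would do); scipy's linprog minimizes, so we negate the objective coefficients to maximize:

```python
# Solving the three-product example with scipy's linprog (an assumed,
# generic LP solver). linprog minimizes, so "maximize 2 p1 + 3 p2 + 8 p3"
# is entered with negated coefficients.
from scipy.optimize import linprog

c = [-2, -3, -8]                 # negated revenue coefficients
A_ub = [[1, 2, 5],               # budget: 1 p1 + 2 p2 + 5 p3 <= 300
        [-1, -1, 0]]             # demand: p1 + p2 >= 50, rewritten as <=
b_ub = [300, -50]
A_eq = [[2, 4, 5]]               # production time: exactly 400 hours
b_eq = [400]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)    # sign restrictions: all p >= 0
print(res.x, -res.fun)           # optimal production plan and its revenue
```

Running this reproduces the solution discussed in the next section: produce 100 units of p1, none of p2, and 40 units of p3, for a revenue of $520.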

Solutions to Linear Programs


The constraints of a linear program restrict the values of our decision variables. A solution to an LP is a set of values for our decision variables. Every solution has an objective value, which is the value of the objective function when it is computed for the variable values given by the solution. We call a solution feasible if it satisfies all our constraints, and infeasible if it violates one or more of them. For instance, p1 = 40, p2 = 50, and p3 = 24 is a feasible solution to our example problem because it satisfies all our constraints. The objective value of this solution is $422 ($2*40 + $3*50 + $8*24 = $422).

The task of solving a linear program is to find the feasible solution to the problem that optimizes our objective function. If it is a maximization problem, this will be the solution with the largest objective value. If it is a minimization problem, then we want the solution with the smallest objective value. This solution is called the optimal solution. For our sample problem, the optimal solution is p1 = 100, p2 = 0, and p3 = 40, which has an objective value of $520.

In writing the mathematical formulation of a linear program, we need to define three things: the variables, the objective function, and the constraints. Sometimes the formulation will be obvious and other times constructing the model will be more challenging. Let's start with an example.

The Problem
ChemCo produces two fertilizers, FastGro and ReallyFastGro. Each is made of a mix of two growth agents. FastGro is a 50/50 mix of the two, and it sells for $13 per pound. ReallyFastGro is a 25/75 mix (i.e., it's 25% Agent 1 and 75% Agent 2), and sells for $15/pound. Agent 1 costs $2/pound and Agent 2 costs $3/pound. ChemCo can purchase up to 250 pounds of Agent 1 and up to 350 pounds of Agent 2 each day. What is ChemCo's optimal production strategy? That is, how much of each product should it produce in order to maximize its profits? We start by defining the variables.

In defining the variables, we need to ask ourselves what it is that we wish the model to determine. In this case, we need to know how much of each product to produce. We will use the units that are specified; namely, pounds and days. These observations lead us to believe the following might be appropriate variable definitions:

FG = number of pounds of FastGro to produce each day.

RFG = number of pounds of ReallyFastGro to produce each day.


We need to define our objective function in terms of the variables that we defined for this problem. We are interested in maximizing our profits, so our objective function should express our total profits in terms of our variables. We need to include profit from sales and the expense of the raw materials. We can write our profits as:

    Profit = $13 FG + $15 RFG − $2 (0.5 FG + 0.25 RFG) − $3 (0.5 FG + 0.75 RFG)

We can simplify this expression to:

    Profit = $10.5 FG + $12.25 RFG

Therefore our objective function is:

    maximize 10.5 FG + 12.25 RFG



Our only two constraints in this problem arise from the supply limitations for the two agents. We can express these restrictions with the following inequalities:

    Agent 1:  0.5 FG + 0.25 RFG ≤ 250
    Agent 2:  0.5 FG + 0.75 RFG ≤ 350

And finally, we need to determine if it is necessary to impose sign restrictions on our variables. In this example, both our variables must be non-negative. Therefore we impose the following non-negativity conditions:

    FG ≥ 0,  RFG ≥ 0

Putting all the pieces together, we have the following formulation:

    maximize 10.5 FG + 12.25 RFG
    subject to:  0.5 FG + 0.25 RFG ≤ 250
                 0.5 FG + 0.75 RFG ≤ 350
                 FG, RFG ≥ 0
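As with the earlier example, the formulation can be checked numerically. A minimal sketch, again assuming scipy's linprog (which minimizes, hence the negated profit coefficients):

```python
# Checking the ChemCo formulation with linprog (an assumed solver).
from scipy.optimize import linprog

res = linprog([-10.5, -12.25],          # negated: maximize 10.5 FG + 12.25 RFG
              A_ub=[[0.5, 0.25],        # Agent 1 supply: <= 250 lbs/day
                    [0.5, 0.75]],       # Agent 2 supply: <= 350 lbs/day
              b_ub=[250, 350])          # default bounds give FG, RFG >= 0

print(res.x, -res.fun)                  # -> [400. 200.] with profit 6650.0
```

So ChemCo's best daily plan is 400 pounds of FastGro and 200 pounds of ReallyFastGro, which uses both agents to their limits and earns a profit of $6,650.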



We are done.

Let's look now at an example from the world of finance.

The Problem
We are given $10 million to invest in a diversified portfolio. We can choose between 5 bond funds with the following properties:
Bond Name   Bond Type   Moody's Quality   Bank's Quality   Years to Maturity   After-tax Yield
A           Municipal   Aa                2                9                   4.3%
B           Agency      Aa                2                15                  2.7%
C           Govt.       Aaa               1                4                   2.5%
D           Govt.       Aaa               1                3                   2.2%
E           Municipal   Ba                5                2                   4.5%

In addition, we wish our portfolio to include at least $4 million invested in government and agency bonds, an average quality rating of no more than 1.4 on the bank's scale, and an average maturity which doesn't exceed 5 years. Once again, we start by defining the variables.

In defining the variables, we need to ask ourselves what it is that we wish the model to determine. In this case, we need to know how much money to invest in each bond. Therefore, we will use the following variable definitions (in $ millions):

A = amount to be invested in Bond A
B = amount to be invested in Bond B
C = amount to be invested in Bond C
D = amount to be invested in Bond D
E = amount to be invested in Bond E

We need to define our objective function in terms of the variables that we defined for this problem. We are interested in maximizing our after-tax earnings, so our objective function should express our total earnings in terms of our variables. The following accomplishes this:

    Total Return = 0.043 A + 0.027 B + 0.025 C + 0.022 D + 0.045 E

Therefore our objective function is:

    maximize 0.043 A + 0.027 B + 0.025 C + 0.022 D + 0.045 E


In this problem, we have several constraints. The first is the amount of money we have. Namely, we can only invest $10 million. This constraint can be expressed with the following expression:

    Cash:  A + B + C + D + E ≤ 10

Our next constraint states that we must invest at least $4 million in government and agency bonds (Bonds B, C, and D). As a constraint, this is written:

    G&A:  B + C + D ≥ 4

Next we state that our average quality rating must be no more than 1.4 on the bank's scale. To compute the average rating, we will use the following expression:

    (2 A + 2 B + 1 C + 1 D + 5 E) / (A + B + C + D + E) ≤ 1.4

This can be rewritten as:

    Quality:  0.6 A + 0.6 B − 0.4 C − 0.4 D + 3.6 E ≤ 0

And finally, we need to limit the average maturity to no more than 5 years. We will compute the average maturity as we did the average quality:

    (9 A + 15 B + 4 C + 3 D + 2 E) / (A + B + C + D + E) ≤ 5

Similarly, this can be rewritten as:

    Maturity:  4 A + 10 B − 1 C − 2 D − 3 E ≤ 0

And as a final step, we need to determine if it is necessary to impose sign restrictions on our variables. In this example, our variables must be non-negative. Therefore we impose the following non-negativity conditions:

    A, B, C, D, E ≥ 0

Putting all the pieces together, we have the following formulation:

    maximize 0.043 A + 0.027 B + 0.025 C + 0.022 D + 0.045 E
    subject to:  A + B + C + D + E ≤ 10
                 B + C + D ≥ 4
                 0.6 A + 0.6 B − 0.4 C − 0.4 D + 3.6 E ≤ 0
                 4 A + 10 B − 1 C − 2 D − 3 E ≤ 0
                 A, B, C, D, E ≥ 0
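The same solver sketch works here too (scipy assumed; amounts are in $ millions, and each ≥ constraint is negated into ≤ form):

```python
# Solving the bond portfolio LP with linprog (an assumed solver).
from scipy.optimize import linprog

c = [-0.043, -0.027, -0.025, -0.022, -0.045]   # negated after-tax yields
A_ub = [[1, 1, 1, 1, 1],                       # cash: total invested <= 10
        [0, -1, -1, -1, 0],                    # G&A: B + C + D >= 4
        [0.6, 0.6, -0.4, -0.4, 3.6],           # avg quality <= 1.4, linearized
        [4, 10, -1, -2, -3]]                   # avg maturity <= 5, linearized
b_ub = [10, -4, 0, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)         # default bounds: all vars >= 0
print(res.x, -res.fun)
```

The solver invests the full $10 million, splitting it among Bonds A, C, and E for an after-tax return of roughly $0.298 million per year.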



We are done. Now let's look at a blending example.

The Problem *
We need to blend iron ore from 4 mines in order to produce tire tread. To assure proper quality, minimum requirements of aluminum, boron, and carbon must be present in the final blend. In particular, there must be 5, 100, & 30 pounds, respectively, in each ton of ore. These elements exist at different levels in the mines. In addition, the cost of mining differs for each mine. The data is in the following tables. How much should be mined from each mine in order to attain a proper blend at minimum cost?
Mine   Aluminum (lbs/ton)   Boron (lbs/ton)   Carbon (lbs/ton)   Cost
1      10                   90                45                 $800
2      3                    150               25                 $400
3      8                    75                20                 $600
4      2                    175               37                 $500

Now that we have defined the problem, we can start writing a formulation.
* This problem was adapted from "Introductory Management Science" by Eppen, Gould, Schmidt, Moore and Weatherford.

Once again, we start by defining the variables.

In defining the variables, we need to ask ourselves what it is that we wish the model to determine. In this case, we need to know what fraction of a ton of the final blend should come from each mine. Therefore, we will use the following variable definitions:

x1 = fraction of a ton to be mined from Mine 1
x2 = fraction of a ton to be mined from Mine 2
x3 = fraction of a ton to be mined from Mine 3
x4 = fraction of a ton to be mined from Mine 4

Now that we have variable definitions, we can think about how to write the objective function.


We need to define our objective function in terms of the variables that we defined for this problem. In this problem, we are interested in minimizing our costs. For each ton of ore we take from a particular mine, we incur that mine's mining cost; for each fraction of a ton, we incur that fraction of the cost. Therefore, the following objective function captures our total per-ton cost (writing xi for the fraction of a ton mined from Mine i):

    minimize 800 x1 + 400 x2 + 600 x3 + 500 x4


We will now concentrate on our constraints. First, we have our constraints for the minimum requirements for each of the three elements. These constraints can be expressed with the following expressions (writing xi for the fraction of a ton mined from Mine i):

    Aluminum:  10 x1 + 3 x2 + 8 x3 + 2 x4 ≥ 5
    Boron:     90 x1 + 150 x2 + 75 x3 + 175 x4 ≥ 100
    Carbon:    45 x1 + 25 x2 + 20 x3 + 37 x4 ≥ 30

In addition to these, we have an implied constraint on our variables that we must include in the formulation. We have defined our variables as fractions, and we need them to sum to one. This constraint is easy to miss since it isn't part of our problem definition, but rather a by-product of our variable definition. As a constraint, this is written:

    Blend:  x1 + x2 + x3 + x4 = 1

And as a final step, we need to determine if it is necessary to impose sign restrictions on our variables. In this example, our variables must be non-negative. Therefore we impose the following non-negativity conditions. Note that these conditions, together with the blend constraint, assure that our variables will be between 0 and 1, which is what we want.

    x1, x2, x3, x4 ≥ 0

Putting all the pieces together, we have the following formulation:

    minimize 800 x1 + 400 x2 + 600 x3 + 500 x4
    subject to:  10 x1 + 3 x2 + 8 x3 + 2 x4 ≥ 5
                 90 x1 + 150 x2 + 75 x3 + 175 x4 ≥ 100
                 45 x1 + 25 x2 + 20 x3 + 37 x4 ≥ 30
                 x1 + x2 + x3 + x4 = 1
                 x1, x2, x3, x4 ≥ 0
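A numerical check of the blend, again assuming scipy's linprog; the ≥ requirements are negated into ≤ rows, and the blend constraint becomes an equality:

```python
# Solving the ore-blending LP with linprog (an assumed solver).
from scipy.optimize import linprog

c = [800, 400, 600, 500]                       # mining cost per ton, by mine
A_ub = [[-10, -3, -8, -2],                     # aluminum: >= 5 lbs per ton
        [-90, -150, -75, -175],                # boron:    >= 100 lbs per ton
        [-45, -25, -20, -37]]                  # carbon:   >= 30 lbs per ton
b_ub = [-5, -100, -30]
A_eq = [[1, 1, 1, 1]]                          # the fractions sum to one
b_eq = [1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x, res.fun)
```

The minimum cost comes to about $511.11 per ton of blend, drawn mostly from Mines 1 and 2 with a small contribution from Mine 3.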

We will look at another production planning example.

The Problem
A manufacturer produces 4 wire cloth products: industrial wire cloth, insect screen, roofing mesh, and snow fence. For each product, aluminum wire is stretched to an appropriate thickness and then the wire is woven together to form a mesh. Production requirements and profit margins vary by product according to the chart below. There are 600 hours available on the wire-drawing machine and 1000 hours on the loom. In addition, 15 cwt of aluminum wire is available to be used as raw material. How much of each product should be produced so as to maximize profit? Assume that the company can sell everything it produces.
Data per 1000 sq. ft. of each product:

Product            Aluminum wire (cwt)   Wire drawing (100s of hrs.)   Weaving (100s of hrs.)   Profit Margin ($100s)
Industrial Cloth   1                     1                             2                        6
Insect Screen      3                     1                             1                        5
Roofing Mesh       3                     2                             1.5                      3.8
Snow Fence         2.5                   1.5                           2                        4

Given a clear problem description, the first step in writing the formulation is to define the variables.

In defining the variables, we need to ask ourselves what it is that we wish the model to determine. In this case, we need to know how much of each of the 4 products to produce. Therefore, we will use the following variable definitions:

IC = 1000s sq. ft. of industrial cloth to produce
IS = 1000s sq. ft. of insect screen to produce
RM = 1000s sq. ft. of roofing mesh to produce
SF = 1000s sq. ft. of snow fence to produce

Next we define the objective function.


We need to define our objective function in terms of the variables that we defined for this problem. In this problem, we are interested in maximizing profits. For each product, we have an associated profit margin. Note that we need to be sure that our units are correct. In this case, our variables and our profits are both given in terms of 1000s sq. ft. Therefore, the following objective function captures our total profit (in $100s):

    maximize 6 IC + 5 IS + 3.8 RM + 4 SF


We will now concentrate on our constraints. We have three constraints. The first is the capacity of the wire-drawing machine: our total production must not consume more than 600 hours (i.e., 6 hundred hours) on it. This constraint can be written as follows:

    Wire drawing:  1 IC + 1 IS + 2 RM + 1.5 SF ≤ 6

We have a similar capacity constraint on the loom:

    Loom:  2 IC + 1 IS + 1.5 RM + 2 SF ≤ 10

And our third constraint is on the availability of raw materials. We only have 15 cwt of aluminum wire available. In constraint form, this translates into:

    Wire:  1 IC + 3 IS + 3 RM + 2.5 SF ≤ 15
And as a final step, we need to determine if it is necessary to impose sign restrictions on our variables. In this example, our variables must be non-negative. Therefore we impose the following non-negativity conditions:

    IC, IS, RM, SF ≥ 0

Putting all the pieces together, we have the following formulation:

    maximize 6 IC + 5 IS + 3.8 RM + 4 SF
    subject to:  1 IC + 1 IS + 2 RM + 1.5 SF ≤ 6
                 2 IC + 1 IS + 1.5 RM + 2 SF ≤ 10
                 1 IC + 3 IS + 3 RM + 2.5 SF ≤ 15
                 IC, IS, RM, SF ≥ 0
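A numerical check, assuming scipy; note that the coefficients below follow one reading of the (poorly preserved) data table, so treat them as illustrative rather than authoritative:

```python
# Solving the wire-cloth LP with linprog (an assumed solver); the variables
# are 1000s of sq. ft. of industrial cloth, insect screen, roofing mesh,
# and snow fence, in that order.
from scipy.optimize import linprog

c = [-6, -5, -3.8, -4]                 # negated profit margins ($100s)
A_ub = [[1, 1, 2, 1.5],                # wire drawing: <= 6 (100s of hrs)
        [2, 1, 1.5, 2],                # weaving/loom: <= 10 (100s of hrs)
        [1, 3, 3, 2.5]]                # aluminum wire: <= 15 cwt
b_ub = [6, 10, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds: all vars >= 0
print(res.x, -res.fun)                  # plan and total profit in $100s
```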

The Problem
We wish to develop a beverage that meets specified nutritional needs for protein, calcium, and vitamin C, while not exceeding a caloric and volume restriction. We have three available ingredients: yogurt, bananas, and strawberries. We know the following information:
Food         Quantity   Volume (ounces)   Cost ($)   Calories   Protein (grams)   Calcium (% RDA)   Vitamin C (% RDA)
Yogurt       1 ounce    1                 0.1        18         1.5               5                 0.5
Banana       1          2                 0.2        105        1                 1                 17
Strawberry   1          0.2               0.1        4          None              0.2               7

Our beverage must contain no more than 300 calories, contain at least 6 grams of protein, at least 15% of our daily calcium recommendation and at least 30% of our vitamin C recommendation. In addition, we wish our beverage to fit in an 8-ounce container (but it can be less than 8 ounces). We would like to find a solution that will cost us the least amount to produce.

The Variables

We will define our variables as follows: Y = number of ounces of yogurt to include B = number of bananas to include S = number of strawberries to include

The Formulation
See if you can fill in the blanks in order to provide a complete formulation for this problem:
Objective:   ______   ___ Y + ___ B + ___ S

Subject to:
Calories:    ___ Y + ___ B + ___ S   ___   ___
Protein:     ___ Y + ___ B + ___ S   ___   ___
Calcium:     ___ Y + ___ B + ___ S   ___   ___
Vitamin C:   ___ Y + ___ B + ___ S   ___   ___
Volume:      ___ Y + ___ B + ___ S   ___   ___

Signs of Vars:  Y ___ 0,  B ___ 0,  S ___ 0

Once you feel that you have the correct coefficients for all the variables, compare your answer with the formulation below.
The formulation is:

    Minimize    0.1 Y + 0.2 B + 0.1 S

    Subject to:
    Calories:    18 Y + 105 B + 4 S  <= 300
    Protein:    1.5 Y + 1 B + 0 S    >= 6
    Calcium:      5 Y + 1 B + 0.2 S  >= 15
    Vitamin C:  0.5 Y + 17 B + 7 S   >= 30
    Volume:       1 Y + 2 B + 0.2 S  <= 8
    Signs of Vars:  Y >= 0,  B >= 0,  S >= 0
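And once more the formulation can be verified with a solver sketch (scipy assumed; ≥ rows negated into ≤ form):

```python
# Solving the beverage LP with linprog (an assumed solver); Y is ounces of
# yogurt, B the number of bananas, S the number of strawberries.
from scipy.optimize import linprog

c = [0.1, 0.2, 0.1]                    # cost ($) to minimize
A_ub = [[18, 105, 4],                  # calories <= 300
        [-1.5, -1, 0],                 # protein  >= 6 grams
        [-5, -1, -0.2],                # calcium  >= 15% RDA
        [-0.5, -17, -7],               # vitamin C >= 30% RDA
        [1, 2, 0.2]]                   # volume   <= 8 ounces
b_ub = [300, -6, -15, -30, 8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds: Y, B, S >= 0
print(res.x, res.fun)
```

The cheapest blend uses about 2.88 ounces of yogurt and 1.68 bananas with no strawberries, at a cost of roughly $0.62.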

We can take a problem with only two variables and graph it. From the graph we can then determine the optimal solution. While it may be rare that we would need to solve a 2-variable problem in the "real world," understanding the geometry can lead us to better intuition about LPs and how we can solve them. Let's try this with the following example:

    Maximize  x + y
    Subject to:  x + 2 y >= 2
                 x <= 3
                 y <= 4
                 x >= 0, y >= 0

We first look at the nonnegativity constraints. If x and y are greater than or equal to zero, we only need to concern ourselves with the positive quadrant of our graph. Therefore, we will start our graphing exercise with the graph on the right.

We will now look at our first constraint: x + 2 y >= 2. We want to find all the points which satisfy this constraint. We therefore will graph x + 2 y = 2, which will provide the border for the region of points that satisfy it. We can see that the point (0,0) does not satisfy this constraint; therefore it's the points above this line that are feasible for this constraint.

Our second constraint says that x <= 3. We can add this to the graph.

And our final constraint imposes y <= 4 which we can also add to the graph.

We now have all our constraints graphed and we can see the region where all the constraints are satisfied. This is called the feasible region. Any point in this region satisfies all the constraints, and is called feasible. Any point outside of this region violates one or more of them, and is called infeasible.

Our optimal solution must be feasible, therefore it must lie in the green region. For each point in this region we can calculate the objective value. For instance, the point (1,2) has an objective value of 1+2=3. Our task is to find the point which gives the maximum objective value. Clearly, (1,2) is not optimal since the point (1,3) has a larger objective value of 4. But which point has the largest value? Next we will look at how we find this solution geometrically.

We can now look at our objective function. If we plot the line x + y = c for some constant c, all points on this line will have an objective value of c. For instance, the blue line is x + y = 4. The points along this line have an objective value of 4. Can we do better than this?

We want to maximize our objective function, so if we move the blue line in the direction of the arrow, we will be improving our objective value. Here we have moved it to x + y = 5. All points on this line have an objective value of 5. Those points on the line that are within the green region are feasible solutions with an objective value of 5. This is better than our previous objective value of 4, but can we do even better?

This time we will try x + y = 6. Now we have feasible solutions with an objective value of 6. The point (2,4) is one such point. We are improving our objective value, but can we still do better? What happens if we try x + y = 7?

Indeed we still have a feasible solution that lies on our line. Namely, (3,4) has an objective value of 7. But can we do better than this? Notice that if we push the objective function any further in the direction of the arrow, the line will lie entirely outside of our feasible region. This means that we can't improve any further and we have found our optimal solution.

The point (3,4) is the feasible solution that optimizes our objective function, therefore we call it the optimal solution.

Notice that the optimal solution lies on two of the constraints. We call these active or binding constraints. The constraint x + 2y >= 2 is nonbinding. In fact, at our optimal solution, x + 2y = 3 + 2(4) = 11. The slack of this constraint then is 11-2 = 9. So, we just took a 2 variable LP and solved it graphically. We first plotted all the constraints in order to find the feasible region. We then pushed the objective function as far as we could before leaving the feasible region. This showed us where our optimal solution was. We will now look at another example.
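The same answer falls out of a solver. A quick sketch (scipy assumed) for the LP we just solved by hand:

```python
# The 2-variable example solved algebraically with linprog (assumed solver):
# maximize x + y subject to x + 2y >= 2, x <= 3, y <= 4, x, y >= 0.
from scipy.optimize import linprog

res = linprog([-1, -1],                  # negated: maximize x + y
              A_ub=[[-1, -2],            # x + 2y >= 2, rewritten as <=
                    [1, 0],              # x <= 3
                    [0, 1]],             # y <= 4
              b_ub=[-2, 3, 4])           # default bounds give x, y >= 0

print(res.x, -res.fun)                   # -> [3. 4.] with objective value 7.0
```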

Let's try solving this example geometrically:

    Minimize  x - y
    Subject to:  1/3 x + y <= 4
                 -2 x + 2 y <= 4
                 x <= 3
                 x >= 0, y >= 0

Again, because of the nonnegativity constraints, we only need to concern ourselves with the positive quadrant of our graph.

Our first constraint is: 1/3 x + y <= 4. So we first graph 1/3 x + y = 4 which will provide the border for the region of points that satisfy it. To determine which side of this line is feasible, we can check any point. For instance, (2,2) satisfies this constraint. Therefore, we want to consider points below this line [including (2,2)].

Our second constraint says that -2 x + 2 y <=4. We can add this to the graph much like we did the previous constraint.

And our final constraint imposes x <= 3 which we can also add to the graph.

We now have all our constraints graphed and we can see the region where all the constraints are satisfied. Again, this shows us the "feasible region". Any point in this region satisfies all the constraints. Any point outside of this region violates one or more of them.

We should also note that sometimes our constraints are inconsistent and our feasible region is "empty." Suppose we had the following two constraints: x + y >= 4 and x + y <= 3. We can't satisfy both of these constraints at once. If we graphed them we would see that there is no point that satisfies both. We would then declare this problem to be "infeasible" and would not attempt to solve it.

But the problem we have here does have a feasible region, so we will look to solve it. We again need to find a feasible point which gives us the optimal objective value. In this case we are minimizing, so we will look for a point with the minimum objective value.

We can now look at our objective function: Minimize x - y. We'll start by plotting x - y = 1. The points along this blue line have an objective value of 1. But we can do better than this. We want to minimize our objective function, so we would like to find solutions with an objective value of less than 1. This corresponds to moving our objective function in the direction of the arrow.

We could look for solutions with an objective value of 0, but we'll be ambitious and look for solutions with an objective value of -1. The blue line here plots x-y = -1 and shows us feasible solutions with an objective value of -1, for instance (1,2). Can you see what is going to happen as we try to do even better?

This time we will try x - y = -2. Now we have feasible solutions with an objective value of -2. Do you think we can do better than this? Notice that if we push the objective function any further, the line will lie entirely outside of our feasible region. This means that we can't improve any further. But also notice that our objective function coincides with one of our constraints. This means that any point along the intersection of this blue line and our feasible region is feasible and has the same optimal objective value.

In this case, all the points between the two blue circles (on the blue line) are optimal. They all have an objective value of -2.

Just as in the first example, we took a 2 variable LP and solved it graphically. We first plotted all the constraints in order to find the feasible region. We then pushed the objective function as far as we could before leaving the feasible region. In this case we found that there were multiple optimal solutions. We will now look at one more example.

Let's look at one more example and then we'll draw some conclusions.

    Minimize  x + 1/3 y
    Subject to:  x + y >= 20
                 -2 x + 5 y <= 150
                 x >= 5
                 x >= 0, y >= 0

This time around we will do things more quickly. To the right you have a graph of the non-negative quadrant (representing the nonnegativity constraints) and the three constraints. What does the feasible region look like in this case?

Our feasible region goes on forever off to the right. This region is called "unbounded" because we can go in one direction forever. For instance, the point (x, 10) is feasible for any x greater than or equal to 10, no matter how large!

We will next look at our objective function. Here we have graphed x + 1/3 y = 30. Now we would like to minimize our objective function, so we want to move this line so as to find solutions with lower objective values, i.e. in the direction of the arrow. And rather than take incremental steps this time, let's just move it as far as we can in that direction in one bold move!

And there we have it. We have moved our objective function as far as we can. If we move it any further we will leave the feasible region. Therefore we have found our optimal solution at (5,15) with an objective value of 10.

Let's look again at our objective function. What would have happened if we had wanted to maximize it? Now our arrow would point the other way and we would be encouraged to move the objective function as far as we could in that direction. Well, we could keep pushing out the objective function forever. It would never leave the feasible region because our feasible region is unbounded in that direction. We call this an "unbounded LP" and there is no optimal solution.

So we solved another 2-variable LP graphically. This time we had an unbounded feasible region. Depending on the objective function, this may cause no problems in solving the LP and we can find an optimal solution. But in other cases it will lead to an unbounded LP and there will be no optimal solution. In essence, we have taken math and turned it into geometry. We have seen examples with a single optimal solution and one with multiple optimal solutions. We have seen bounded feasible regions and unbounded feasible regions. In addition, we have looked at what can happen if a feasible region is unbounded: it can lead to an unbounded LP, in which case there is no optimal solution. We also mentioned another case when you would not have an optimal solution: if your feasible region is empty.
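Solvers report both of these pathological cases rather than returning a solution. A sketch with scipy (assumed), whose linprog signals them through its status code (2 means infeasible, 3 means unbounded):

```python
# How a solver reports the two "no optimal solution" cases (scipy assumed).
from scipy.optimize import linprog

# the inconsistent pair x + y >= 4 and x + y <= 3 from the text
infeasible = linprog([1, 1],
                     A_ub=[[-1, -1],     # x + y >= 4, rewritten as <=
                           [1, 1]],      # x + y <= 3
                     b_ub=[-4, 3])

# maximizing x + 1/3 y over this section's unbounded feasible region
unbounded = linprog([-1, -1 / 3],        # negated for maximization
                    A_ub=[[-1, -1],      # x + y >= 20
                          [-2, 5],       # -2x + 5y <= 150
                          [-1, 0]],      # x >= 5
                    b_ub=[-20, 150, -5])

print(infeasible.status, unbounded.status)
```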

Let's now look again at the results from these three examples and see if we can draw any conclusions.

Do you notice anything about these 3 examples? Do they appear to have anything in common?

How about this: they all have an optimal solution which lies on a corner of the feasible region.

This turns out to be a fundamental truth of linear programming. If an optimal solution exists, you can always find an optimal solution which is at a corner of the feasible region. These corner points are also called "extreme points" or "basic feasible solutions". Let's see if we can convince ourselves that this is true in 3 dimensions as well.

As we just saw in 3 examples, the constraints of a 2-variable problem define a 2-dimensional feasible region. We determined the bounds of this region by graphing each constraint as a line and dividing the 2-dimensional space into 2 halves - a feasible half and an infeasible half. We then found where these half-spaces intersected, and that defined the feasible region. If we extend that to a 3-variable problem, we would graph each constraint as a plane and divide the 3-dimensional space into a feasible half and an infeasible half. Our feasible region would similarly be defined by where these half-spaces intersect. The result would be a three-dimensional space like the one below. Each face of this region corresponds to a constraint.

Now imagine adding an objective function to this. Our objective function would also be in 3 variables so it would look like a plane cutting through this feasible region. We would then want to move this plane as far as we could in one direction. We would again push it until it was just about to leave the feasible region. Can you see that this would again give us a corner point optimal solution?

This is the case for any feasible region and any objective function - we will always get a corner point optimal solution (as long as the feasible region isn't empty or the LP unbounded). Can we extend this to a 4-variable problem? Well, unless you can see in 4 dimensions, it is tough to show geometrically, but in fact there are formal proofs that show that indeed it is true for all problems - big or small.

Let's review what we have just learned. First, the constraints of an LP give rise to a geometrical shape which defines the feasible region. We call this shape a polyhedron. For the purpose of building intuition, we can assume that the dimension of the polyhedron will be equal to the number of variables in the problem. Second, as long as the feasible region is non-empty and the LP is bounded, the objective function is always optimized over this feasible region at a corner of this polyhedron. Therefore, there will always be an optimal solution that is a corner point. There may be other optimal solutions - but at least one will be at a corner. In the section on the Simplex Method, you will see how this knowledge can be used to think about solving large LPs.

In the previous section, we looked at the geometry of linear programming. Without going into an elaborate proof, we provided some intuition for why "normal" linear programs have optimal solutions at corners. We say "normal" because we need to ignore infeasible or unbounded LPs, which don't have optimal solutions at all. This leads us to believe that if we could list all the corner points of our feasible region, we could calculate the objective value at each and take the best one. Given that there is always a corner point optimal solution, we would be assured that this process would give us an optimal solution. Note that there may be others (as we saw in the second example) but we would have found one optimal solution.

This raises two questions: 1) How do we find the corners? and 2) What if there are a lot of corners? Well, like many things, there are short answers to these questions and long ones. We will provide short answers here, but there is a wealth of insight that can be gained by understanding the longer, more formal answers.

First, how do we find the corner points? You should notice that corners occur where constraints intersect. In 2 dimensions, they occur where 2 constraints intersect. In 3 dimensions, they occur where 3 constraints intersect. And, you guessed it, in n dimensions, they occur where n constraints intersect.

Finding where n constraints intersect is a matter of solving a system of n equations which we can do efficiently using Gaussian elimination. Therefore, we can find our corner points by solving a series of systems of n equations.
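Finding a corner this way can be sketched in a few lines of Python. This is an illustration only (the helper name is my own, and for simplicity it handles just a 2-by-2 system without the row pivoting a full Gaussian elimination would use); it computes the corner where the boundaries x + y = 20 and x = 5 from this section's example meet:

```python
from fractions import Fraction

def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve the system  a11*x + a12*y = b1,  a21*x + a22*y = b2
    by Gaussian elimination, in exact rational arithmetic.
    Simplification: assumes a11 != 0 (no row pivoting)."""
    a11, a12, b1 = Fraction(a11), Fraction(a12), Fraction(b1)
    a21, a22, b2 = Fraction(a21), Fraction(a22), Fraction(b2)
    factor = a21 / a11          # eliminate x from the second equation
    a22 = a22 - factor * a12
    b2 = b2 - factor * b1
    y = b2 / a22                # back-substitute for y, then x
    x = (b1 - a12 * y) / a11
    return x, y

# Corner where the boundaries x + y = 20 and x = 5 intersect:
print(solve_2x2(1, 1, 20, 1, 0, 5))  # (Fraction(5, 1), Fraction(15, 1))
```

Repeating this for every choice of n constraint boundaries, and discarding intersection points that violate the remaining constraints, enumerates the corner points of the feasible region.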

But what if there are a lot of corners? You can imagine, even in 3 dimensions, a polyhedron with hundreds or even thousands of corner points. Surely we don't need to find them all. We would like to be intelligent about which corners we consider. This brings us to the Simplex Method. This is an efficient approach for solving linear programs and is currently the method used in most commercial solvers. In essence, the simplex method intelligently moves from corner to corner until it can be proven that it has found the optimal solution. Each corner that it visits is an improvement over the previous one. Once it can't find a better corner, it knows that it has the optimal solution.

Needless to say, this is a quick description and there are many interesting features of the algorithm, but we will leave it at this. For the purpose of solving most LPs with commercial solvers, this basic understanding of the algorithm will suffice.

Once we have solved an LP, we may find ourselves interested in more than just the solution. We may be interested in knowing what happens to our solution as we change one of the coefficients. In other words, we may wish to know how sensitive our solution is to changes in the data. We call this sensitivity analysis. One way to learn this information is to change one of the coefficients and re-solve the problem. Fortunately, there is a better way. The output of the simplex method gives us information about what happens as we change the right hand side of a constraint, or as we change the objective function coefficient on a variable. In order to motivate this analysis, we will return to a 2-dimensional example and look at the graphic effects of changing coefficients. We will first consider changing the right hand side of a constraint.


We will first look at what happens as we change the right hand side (RHS) of a constraint. Let's look at our third example from the section on graphing 2-dimensional LPs:

Minimize:    x + (1/3)y
Subject to:  -2x + 5y ≤ 150
             x + y ≥ 20
             x ≥ 5
             x ≥ 0, y ≥ 0

To the right you have the graphical solution that we developed previously. The optimal solution to this problem was x = 5, y = 15, which has an objective value of 10. Now let's look at what happens if we decrease the right hand side of the second constraint from 20 to 10.

To the right we have regraphed the constraint. The dotted line represents its old location, while the solid line is its new one. Notice that decreasing the right hand side translates into moving the constraint out, while keeping the same slope. Any time we move a constraint, our feasible region changes. In this case, our feasible region enlarges to include the darker green area. With this new feasible region, is our old solution (the blue dot) still optimal?

Notice that now that our feasible region is larger, we can push our objective function (the blue line) even further than we could before. In fact, we can move it from the dotted line, to the solid line. This gives us a new optimal solution with a new objective value.

Notice that when we decrease the right hand side of this constraint, we find a solution with a lower objective value. This is because a decrease in the right hand side makes this constraint less restrictive; therefore the size of our feasible region increases and we can expect that we might be able to find a "better" solution. To the right we have the graphical solution when the RHS = 10. The optimal solution to this problem is x = 5, y = 5, which has an objective value of 6 2/3. Now let's look at what happens if we further decrease the right hand side of the second constraint from 10 to 5.

To the right we have regraphed the constraint. Again, the dotted line represents its old location, while the solid line is its new one. We have again moved this constraint out and added the darker green area to our feasible region. With this new feasible region, is our old solution (the blue dot) still optimal?

As before, we can push our objective function from the dotted blue line, to the solid blue line. This gives us a new optimal solution with a new objective value.

We are going to try this one more time. To the right we have the graphical solution when the RHS = 5. The optimal solution to this problem is x = 5, y = 0, which has an objective value of 5. Now let's look at what happens if we decrease the right hand side of the second constraint yet again, this time from 5 to 0.

To the right we have regraphed the constraint. As before, the change in the right hand side translates into a shift of the constraint. Notice that this time, however, our feasible region does not change. Do you think our optimal solution will change?

Because our feasible region does not change, we can't move our objective function any further. (The constraints x ≥ 5 and y ≥ 0 already guarantee x + y ≥ 5, so a right hand side below 5 is redundant.) Therefore we are left with the same optimal solution as before.

So we have now decreased the right hand side of the second constraint 3 times and looked at how each change affected our feasible region and our optimal solution. Let's look now at what happens as we INCREASE the right hand side of this same constraint. Let's first check our intuition. Which direction do you expect the objective value to go? Do you think we will find a solution with an objective value higher than 10 or lower than 10? (10 is the optimal objective value when the RHS = 20.)

Since increasing the right hand side of a ≥ constraint makes it more restrictive, the feasible region shrinks, so we should expect an objective value higher than 10.

We will go back to our original formulation, with the original right hand side value of 20. To the right we have the graphical representation of the optimal solution. Now let's consider increasing the right hand side of the second constraint from 20 to 30.

We will do two steps in one this time. As before, the change in the right hand side translates into a shift of the constraint; this time the constraint shifts in the opposite direction from when we were decreasing the right hand side. This change in the constraint shrinks the feasible region to the dark green area. Now our previous solution is no longer feasible. We therefore have to slide the objective function in a bit. This gives us a new optimal solution.

Now let's increase the right hand side again, this time from 30 to 37. 37 may seem arbitrary, but you'll see why we chose that value soon.

Once again, we will shift the constraint in, which shrinks the feasible region. We then have to shift the objective function in so that our optimal solution is feasible. And again this gives us a new optimal solution with a new objective value.

OK, we are going to do this one last time, now with a change from 37 to 40.

Once again, we will shift the constraint in, which shrinks the feasible region. We then have to shift the objective function in so that our optimal solution is feasible. And again this gives us a new optimal solution with a new objective value.

Now we will graph the results of all this work. Below you see a plot of the right hand side value versus the optimal objective value.

Notice that the result is piecewise linear, and the slope of the segment that contains the original right hand side value of 20 is equal to 1/3. This implies that as we change the value of the right hand side of this constraint by one unit, within this range, the optimal objective value changes by 1/3. This value, 1/3, is called the SHADOW PRICE of this constraint. Here is a more formal definition:
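As a check on this piecewise-linear picture, here is a small sketch (the function name is my own, and it uses brute-force corner enumeration in exact rational arithmetic, not the simplex method a real solver would use) that re-solves the example for several right hand side values of the constraint x + y ≥ b, the one whose RHS we have been varying:

```python
from fractions import Fraction as F
from itertools import combinations

def solve_lp(b):
    """Minimize x + y/3 subject to x + y >= b, -2x + 5y <= 150,
    x >= 5, y >= 0, by intersecting every pair of constraint
    boundaries and comparing the feasible corners on the objective."""
    # Each boundary line as (a1, a2, rhs), meaning a1*x + a2*y = rhs.
    lines = [(F(1), F(1), F(b)),      # x + y = b
             (F(-2), F(5), F(150)),   # -2x + 5y = 150
             (F(1), F(0), F(5)),      # x = 5
             (F(0), F(1), F(0))]      # y = 0
    best = None
    for (a1, a2, r1), (c1, c2, r2) in combinations(lines, 2):
        det = a1 * c2 - a2 * c1
        if det == 0:                  # parallel boundaries: no corner
            continue
        x = (r1 * c2 - a2 * r2) / det
        y = (a1 * r2 - r1 * c1) / det
        if x + y >= b and -2 * x + 5 * y <= 150 and x >= 5 and y >= 0:
            obj = x + y / 3
            if best is None or obj < best:
                best = obj
    return best

for b in [0, 5, 10, 20, 30, 37, 40]:
    print(b, solve_lp(b))
```

Between b = 5 and b = 37, each unit increase in b raises the optimal objective by exactly 1/3; below 5 the objective stays at 5, and above 37 the slope changes.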

SHADOW PRICE: the change in the objective value per unit increase in the right hand side, given that all other data remain the same.

Notice that the definition refers to the change in the objective value resulting from an increase in the right hand side. If we decrease the right hand side, we can expect the same change (as long as we are within the range) but in the opposite direction. In other words, if the shadow price is positive, then the objective value will increase by the amount of the shadow price for each unit increase in the right hand side, and it will decrease by the same amount for each unit decrease in the right hand side. Similarly, if the shadow price is negative, then the objective value will decrease by the amount of the shadow price for each unit increase in the right hand side, and it will increase by the same amount for each unit decrease.

That is confusing to digest, but you can always fall back on your logic and intuition: if you are making the constraint less restrictive, then you are increasing the size of the feasible region and you might be able to find a "better" solution (if you are minimizing, a better solution has a smaller objective value; if you are maximizing, a higher one). If you are making the constraint more restrictive, then you are decreasing the size of the feasible region and you might have to settle for a "worse" solution.

Here are some important facts about shadow prices. Associated with each constraint is a shadow price. The shadow price is the change in the objective value per unit change in the right hand side, given that all other data remain the same. Associated with each shadow price is a range over which this shadow price holds. Most solvers provide shadow prices and ranges as part of the solution information. Shadow prices are also called dual values.

The shadow price on a non-binding constraint is zero. If we have not used all of a resource available to us (the constraint is non-binding), then small changes in the right hand side do not affect the optimal solution. (Think about the 2-dimensional graph to see why this is so.) With the information we are given by most solvers, we can only state the exact effects of changes to the right hand side if they are within the specified range. Changing the right hand side of a constraint to values outside the range of the shadow price will affect the objective value, but we cannot state by how much, at least not without re-solving the problem. In the case of the example we did graphically, we know the effects of changing the right hand side to 40 because we graphed it, but a solver would only tell us that within the range [5, 37] the shadow price is 1/3. Note also that although we can state with certainty the new optimal objective value as we change the right hand side within the allowable range, we cannot make any statements about what the new optimal solution is.
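To make the arithmetic concrete, here is a minimal sketch (the function name is my own) of how the shadow price is used in practice for this example, where the shadow price is 1/3, the range is [5, 37], and the original RHS of 20 gives an objective value of 10:

```python
from fractions import Fraction

shadow_price = Fraction(1, 3)
base_rhs, base_obj = 20, Fraction(10)

def predicted_objective(new_rhs):
    """Predict the new optimal objective value for a changed RHS,
    valid only inside the shadow-price range [5, 37]."""
    assert 5 <= new_rhs <= 37, "shadow price only holds within its range"
    return base_obj + shadow_price * (new_rhs - base_rhs)

print(predicted_objective(10))  # 20/3, the 6 2/3 found graphically
print(predicted_objective(37))  # 47/3, i.e. 15 2/3
```

Note that this predicts only the new objective value, not the new optimal solution, and that calling it with a RHS outside [5, 37] (such as 40) is deliberately rejected.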

If you feel comfortable with the concept of shadow prices, we can move on to the next topic: reduced costs.

We can think of non-negativity constraints as additional constraints on our system. How then would we interpret the shadow prices on these constraints? First, we will give them a different name. We call this new value the reduced cost, and we will associate it with the variable that is non-negative. In non-negativity constraints, our right hand side is zero. Using a shadow price interpretation, then, the reduced cost of a variable is the change in the objective function if we require that variable to be greater than or equal to one (instead of zero), assuming a feasible solution still exists. In fact, the reduced cost is the rate of change of the objective function per unit increase in the right hand side of this non-negativity constraint.

It should be clear that if, at an optimal solution, a variable has a positive value, the reduced cost for that variable will be zero. It is optimal to give this variable a value greater than zero, so forcing the variable to be greater than zero should not change the optimal solution or the objective value. If, however, at an optimal solution a variable has a value of zero, then forcing this variable to be greater than zero will change the optimal solution and most likely the optimal objective value (if there are multiple optimal solutions, it may be that the optimal solution changes but the optimal objective value does not). In this case of a variable with a zero optimal value, we can interpret the reduced cost as the amount by which the objective value changes if we increase the value of this variable to one (or at least the rate of change). Alternatively, the reduced cost of a variable with a zero optimal value is the amount by which its objective coefficient would have to decrease in order for that variable to take a positive optimal value.

So, to summarize, we make the following points about reduced costs. If, at an optimal solution, a variable has a positive value, the reduced cost for that variable will be 0. If, at an optimal solution, a variable has a value of 0, the reduced cost for that variable can be interpreted as the amount by which the objective value will change if we increase the value of this variable to one, assuming a feasible solution still exists. It can also be interpreted as the amount by which the objective coefficient would have to decrease in order for that variable to have a positive value in an optimal solution.

Now let's make a quick check of your intuition. We already stated that a variable with a positive optimal value will have a reduced cost of zero. But what will the sign be on the reduced costs of variables that have a zero optimal value? If we are minimizing our objective function and, in the optimal solution, the value of x equals zero, will the reduced cost for x be positive or negative?

Since we are minimizing, forcing x up from zero can only leave the objective value the same or make it larger, so the reduced cost for x will be positive (or zero). You should be able to see why the sign flips if we are instead maximizing our objective function.
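To make this sign argument concrete, here is a sketch using a small hypothetical LP (my own example, not the one from this section): minimize 2x + y subject to x + y ≥ 10, x ≥ 0, y ≥ 0. The optimum is x = 0, y = 10 with objective value 10, and forcing x up from zero can only hurt a minimization:

```python
from fractions import Fraction as F

def solve(lower_x):
    """Minimize 2x + y subject to x + y >= 10, x >= lower_x, y >= 0,
    by checking the corner points of the 2-variable feasible region."""
    t = F(lower_x)
    # The two candidate corners: x at its lower bound on x + y = 10,
    # and the corner on the y = 0 axis.
    corners = [(t, F(10) - t), (max(t, F(10)), F(0))]
    feasible = [(x, y) for x, y in corners
                if x + y >= 10 and x >= t and y >= 0]
    return min(2 * x + y for x, y in feasible)

print(solve(0))  # 10: optimal at x = 0, y = 10
print(solve(1))  # 11: forcing x >= 1 raises the objective by 1
# The reduced cost of x is therefore solve(1) - solve(0) = 1, positive,
# exactly as the sign argument for a minimization predicts.
```

Equivalently, the objective coefficient of x would have to drop by 1 (from 2 to 1, matching the per-unit cost of covering the constraint with y) before x could take a positive value in an optimal solution.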
