
ASSIGNMENTS - MBA – II SEMESTER

MB0032

SET 1

OPERATIONS RESEARCH

Q.1:- Describe in detail the different areas of application of Operations Research?

Ans:- Operations Research (OR) in the USA, South Africa and Australia, and Operational
Research in Europe and Canada, is an interdisciplinary branch of applied mathematics and
formal science that uses methods such as mathematical modeling, statistics, and algorithms to
arrive at optimal or near-optimal solutions to complex problems. It is typically concerned
with maximizing some quantity (profit, assembly line performance, crop yield, bandwidth, etc.)
or minimizing another (loss, risk, etc.) expressed as an objective function. Operations research
helps management achieve its goals using scientific methods.

The terms operations research and management science are often used synonymously.
When a distinction is drawn, management science generally implies a closer relationship to
the problems of business management. The field of operations research is closely related to
Industrial engineering. Industrial engineers typically consider Operations Research (OR)
techniques to be a major part of their toolset.

Some of the primary tools used by operations researchers are statistics, optimization,
probability theory, queuing theory, game theory, graph theory, decision analysis, and
simulation. Because of the computational nature of these fields, OR also has ties to computer
science, and operations researchers use custom-written and off-the-shelf software.

Operations research is distinguished by its frequent use to examine an entire
management information system, rather than concentrating only on specific elements (though
this is often done as well). An operations researcher faced with a new problem is expected to
determine which techniques are most appropriate given the nature of the system, the goals for
improvement, and constraints on time and computing power. For this and other reasons, the
human element of OR is vital. Like any other tools, OR techniques cannot solve problems by
themselves.

Scope of Operations Research

Examples of applications in which operations research is currently used include:


• Critical path analysis or project planning: identifying those processes in a complex
project which affect the overall duration of the project
• Designing the layout of a factory for efficient flow of materials

• Constructing a telecommunications network at low cost while still guaranteeing QoS
(quality of service) or QoE (quality of experience) if particular connections become
very busy or get damaged
• Road traffic management and 'one way' street allocations, i.e., allocation problems
• Determining the routes of school buses (or city buses) so that as few buses are needed
as possible
• Designing the layout of a computer chip to reduce manufacturing time (and therefore
cost)
• Managing the flow of raw materials and products in a supply chain based on uncertain
demand for the finished products
• Efficient messaging and customer response tactics
• Robotizing or automating human-driven operations processes
• Globalizing operations processes in order to take advantage of cheaper materials,
labor, land or other productivity inputs
• Managing freight transportation and delivery systems (examples: less-than-truckload
(LTL) shipping, intermodal freight transport)
• Scheduling:
o Personnel staffing
o Manufacturing steps
o Project tasks
o Network data traffic: these are known as queuing models or queueing systems.
o Sports events and their television coverage
• Blending of raw materials in oil refineries
• Determining optimal prices, in many retail and B2B settings, within the disciplines of
pricing science
Operations research is also used extensively in government where evidence-based
policy is used.

Q.2:- What do you understand by Linear Programming Problem? What are the
requirements of L.P.P.? What are the basic assumptions of L.P.P.?
Ans:-

Linear programming problem (LPP): The standard form of the linear programming
problem is used to develop the procedure for solving a general programming problem.
A general LPP is of the form
Maximize (or minimize) Z = c1x1 + c2x2 + … + cnxn
subject to the constraints
a11x1 + a12x2 + … + a1nxn (≤, = or ≥) b1
a21x1 + a22x2 + … + a2nxn (≤, = or ≥) b2
..................................................
am1x1 + am2x2 + … + amnxn (≤, = or ≥) bm
and x1, x2, …, xn ≥ 0,
where x1, x2, …, xn are called the decision variables and c1, c2, …, cn, b1, b2, …, bm,
a11, a12, …, amn are all known constants. Z is called the "objective function" of the LPP in
n variables, which is to be maximized or minimized.

Requirements of L.P.P.: There are mainly four steps in the mathematical formulation of a
linear programming problem as a mathematical model. We will discuss the formulation of
problems which involve only two variables.

• Identify the decision variables and assign symbols x and y to them. These decision
variables are those quantities whose values we wish to determine.
• Identify the set of constraints and express them as linear equations/inequations in
terms of the decision variables. These constraints are the given conditions.
• Identify the objective function and express it as a linear function of decision variables.
It might take the form of maximizing profit or production or minimizing cost.
• Add the non-negativity restrictions on the decision variables, as in the physical
problems, negative values of decision variables have no valid interpretation.

There are many real life situations where an LPP may be formulated. The following
examples will help to explain the mathematical formulation of an LPP.

Example-1. A diet is to contain at least 4000 units of carbohydrates, 500 units of fat and
300 units of protein. Two foods A and B are available. Food A costs 2 dollars per unit and
food B costs 4 dollars per unit. A unit of food A contains 10 units of carbohydrates, 20 units
of fat and 15 units of protein. A unit of food B contains 25 units of carbohydrates, 10 units of
fat and 20 units of protein. Formulate the problem as an LPP so as to find the minimum cost
for a diet that consists of a mixture of these two foods and also meets the minimum
requirements.

Suggested answer:
The given information can be tabulated as follows:

Nutrient         Food A (per unit)   Food B (per unit)   Minimum requirement
Carbohydrates    10                  25                  4000
Fat              20                  10                  500
Protein          15                  20                  300
Cost per unit    2 dollars           4 dollars

Let the diet contain x units of food A and y units of food B.

Total cost = 2x + 4y

The LPP formulated for the given diet problem is
Minimize Z = 2x + 4y
subject to the constraints
10x + 25y ≥ 4000 (carbohydrates)
20x + 10y ≥ 500 (fat)
15x + 20y ≥ 300 (protein)
x ≥ 0, y ≥ 0.
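As a cross-check, the same diet LPP can be solved numerically. The short Python sketch
below (assuming SciPy is available; the solver choice is ours and not part of the original
problem) feeds the objective and the three nutrient constraints to scipy.optimize.linprog:

from scipy.optimize import linprog

# Minimize 2x + 4y, the cost of x units of food A and y units of food B
c = [2, 4]

# linprog expects A_ub @ [x, y] <= b_ub, so each ">=" requirement is
# multiplied through by -1.
A_ub = [
    [-10, -25],   # carbohydrates: 10x + 25y >= 4000
    [-20, -10],   # fat:           20x + 10y >= 500
    [-15, -20],   # protein:       15x + 20y >= 300
]
b_ub = [-4000, -500, -300]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)   # optimal (x, y) and the minimum diet cost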

Basic Assumptions of L.P.P.: Linear programming is applicable only to problems where the
constraints and the objective function are linear, i.e., where they can be expressed as linear
equations or inequalities in the decision variables. In real life situations, when constraints or
objective functions are not linear, this technique cannot be used.

• Factors such as uncertainty, weather conditions etc. are not taken into consideration.

• The solution may not be an integer; e.g., the number of men required may be a fraction,
and the nearest integer may not be the optimal solution. That is, the linear programming
technique may give a fractional-valued answer, which is not desirable in such cases.

• Only a single objective is dealt with, while in real life situations problems come with
multiple objectives.

• Parameters are assumed to be constants but in reality they may not be so.

Q.3:- Describe the different steps needed to solve a problem by the simplex method?

Ans:- Simplex method: The simplex method is a procedure for solving linear
programming problems. Invented by George Dantzig in 1947, it tests adjacent vertices of
the feasible set (which is a polytope) in sequence so that at each new vertex the objective
function improves or is unchanged. The simplex method is very efficient in practice,
generally taking 2m to 3m iterations at most (where m is the number of equality constraints),
and converging in expected polynomial time for certain distributions of random inputs
(Nocedal and Wright 1999, Forsgren 2002). However, its worst-case complexity is
exponential, as can be demonstrated with carefully constructed examples (Klee and Minty
1972).

A different class of methods for linear programming problems is interior point
methods, whose complexity is polynomial for both the average and the worst case. These
methods construct a sequence of strictly feasible points (i.e., lying in the interior of the
polytope but never on its boundary) that converges to the solution. Research on interior point
methods was spurred by a paper of Karmarkar (1984). In practice, one of the best interior-point
methods is the predictor-corrector method of Mehrotra (1992), which is competitive with the
simplex method, particularly for large-scale problems.

Dantzig's simplex method should not be confused with the downhill simplex method
(Spendley 1962, Nelder and Mead 1965, Press et al. 1992). The latter method solves an
unconstrained minimization problem in n dimensions by maintaining at each iteration n+1
points that define a simplex. At each operation, this simplex is updated by applying certain
transformations to it so that it "rolls downhill" until it finds a minimum.

The Simplex Method is "a systematic procedure for generating and testing candidate
vertex solutions to a linear program." (Gill, Murray, and Wright, p. 337) It begins at an
arbitrary corner of the solution set. At each iteration, the Simplex Method selects the variable
that will produce the largest change towards the minimum (or maximum) solution. That
variable replaces one of its compatriots that is most severely restricting it, thus moving the
Simplex Method to a different corner of the solution set and closer to the final solution. In
addition, the Simplex Method can determine if no solution actually exists. Note that the
algorithm is greedy since it selects the best choice at each iteration without needing
information from previous or future iterations.

The Simplex Method solves a linear program of the form described in Figure 3. Here,
the coefficients cj represent the respective weights, or costs, of the variables xj. The
minimized statement is similarly called the cost of the solution. The coefficients of the
system of equations are represented by aij, and any constant values in the system of
equations are combined on the right-hand side of the inequalities as the constants bi.
Combined, these statements represent a linear program, to which we seek a solution of
minimum cost.

Figure 3: A Linear Program (the cost c1x1 + … + cnxn is minimized subject to linear
constraints with coefficients aij and right-hand sides bi, with xj ≥ 0).

Solving this linear program involves solving the set of equations. If no solution to
the set of equations is yet known, slack variables, adding no cost to the solution, are
introduced. The initial basic feasible solution (BFS) will be the solution of the linear
program in which the original variables are zero and the slack variables take whatever
values satisfy the equations.

Once a solution to the linear program has been found, successive improvements are
made to the solution. In particular, one of the nonbasic variables (with a value of zero) is
chosen to be increased so that the value of the cost function decreases. That variable
is then increased, maintaining the equality of all the equations while keeping the other
nonbasic variables at zero, until one of the basic (nonzero) variables is reduced to zero and
thus removed from the basis. At this point, a new solution has been determined at a different
corner of the solution set.

The process is then repeated with a new variable becoming basic as another becomes
nonbasic. Eventually, one of three things will happen. First, a solution may occur where no
nonbasic variable will decrease the cost, in which case the current solution is the optimal
solution. Second, a non-basic variable might increase to infinity without causing a basic
variable to become zero, resulting in an unbounded solution. Finally, no solution may
actually exist and the Simplex Method must abort. As is common for research in linear
programming, the possibility that the Simplex Method might return to a previously visited
corner will not be considered here.

The primary data structure used by the Simplex Method is "sometimes called a
dictionary, since the values of the basic variables may be computed (‘looked up’) by
choosing values for the nonbasic variables." (Gill, Murray, and Wright, p. 337) Dictionaries
contain a representation of the set of equations appropriately adjusted to the current basis.
The use of dictionaries provides an intuitive understanding of why each variable enters and
leaves the basis. The drawback to dictionaries, however, is the necessary step of updating
them, which can be time-consuming. Computer implementation is possible, but a version of
the Simplex Method has evolved with a more efficient matrix-oriented approach to the same
problem. This new implementation became known as the Revised Simplex Method.

The steps of the Simplex Method also need to be expressed in the matrix format of the
Revised Simplex Method. The basis matrix, B, consists of the column entries of A
corresponding to the coefficients of the variables currently in the basis. That is, if a variable
is the fourth entry of the basis, then the column of A associated with that variable is the
fourth column of B. (Note that B is therefore an m × m matrix.) The non-basic columns of A
constitute a similar, though likely not square, matrix referred to here as V.
Simplex Method vs. Revised Simplex Method:

1. Determine the current basis, d.
2. Choose the variable to enter the basis based on the greatest cost contribution (in the
revised method, using the reduced costs computed from the basis matrix B).
3. If no entering variable can decrease the cost (in the revised method, if all reduced costs
are nonnegative), d is the optimal solution.
4. Determine the basic variable that first exits the basis (becomes zero) as the entering
variable increases.
5. If the entering variable can increase without causing another variable to leave the basis
(in the revised method, if no entry of the updated entering column is positive for all i),
the solution is unbounded.
6. Update the dictionary (in the revised method, update B or, equivalently, its inverse).

Examples:

Use the two phase method to
Maximize z = 3x1 – x2
Subject to 2x1 + x2 ≥ 2
x1 + 3x2 ≤ 2
x2 ≤ 4,
x1, x2 ≥ 0

Rewriting in the standard form,
Maximize z = 3x1 – x2 + 0S1 – MA1 + 0S2 + 0S3
Subject to 2x1 + x2 – S1 + A1 = 2
x1 + 3x2 + S2 = 2
x2 + S3 = 4,
x1, x2, S1, S2, S3, A1 ≥ 0.

Phase I: Consider the new objective,
Maximize Z* = – A1
Subject to 2x1 + x2 – S1 + A1 = 2
x1 + 3x2 + S2 = 2
x2 + S3 = 4,
x1, x2, S1, S2, S3, A1 ≥ 0.

Solving by Simplex method, the initial simplex table is given by

Basis   cB     x1     x2     S1     A1     S2     S3     b     Ratio
Cj              0      0      0     –1      0      0
A1      –1      2*     1     –1      1      0      0     2     2/2 = 1
S2       0      1      3      0      0      1      0     2     2/1 = 2
S3       0      0      1      0      0      0      1     4     –
Zj – Cj        –2     –1      1      0      0      0    –2

Work column: x1; * marks the pivot element.

x1 enters the basic set replacing A1.

The first iteration gives the following table:

Basis   cB     x1     x2     S1     A1     S2     S3     b
Cj              0      0      0     –1      0      0
x1       0      1     1/2   –1/2    1/2     0      0     1
S2       0      0     5/2    1/2   –1/2     1      0     1
S3       0      0      1      0      0      0      1     4
Zj – Cj         0      0      0      1      0      0     0

Phase I is complete, since there are no negative elements in the last row. The Optimal
solution of the new objective is Z* = 0.
Phase II: Consider the original objective function,
Maximize z = 3x1 – x2 + 0S1 + 0S2 + 0S3
Subject to x1 + (1/2)x2 – (1/2)S1 = 1
(5/2)x2 + (1/2)S1 + S2 = 1
x2 + S3 = 4
x1, x2, S1, S2, S3 ≥ 0
with the initial solution x1 = 1, S2 = 1, S3 = 4, the corresponding simplex table is

Basis   cB     x1     x2     S1     S2     S3     b     Ratio
Cj              3     –1      0      0      0
x1       3      1     1/2   –1/2     0      0     1     –
S2       0      0     5/2    1/2*    1      0     1     1/(1/2) = 2
S3       0      0      1      0      0      1     4     –
Zj – Cj         0     5/2   –3/2     0      0     3

Work column: S1; * marks the pivot element.


Proceeding to the next iteration, we get the following table:

Basis   cB     x1     x2     S1     S2     S3     b
Cj              3     –1      0      0      0
x1       3      1      3      0      1      0     2
S1       0      0      5      1      2      0     2
S3       0      0      1      0      0      1     4
Zj – Cj         0     10      0      3      0     6

Since all elements of the last row are nonnegative, the current solution is optimal.
The maximum value of the objective function is Z = 6, which is attained for x1 = 2, x2 = 0.
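The hand computation above can be verified with a solver. The following Python sketch
(SciPy assumed; maximization is handled by negating the objective) should reproduce
x1 = 2, x2 = 0 and z = 6:

from scipy.optimize import linprog

# Maximize z = 3x1 - x2  ->  minimize -3x1 + x2
c = [-3, 1]

# Constraints rewritten as "<=" rows:
#   2x1 + x2 >= 2   ->   -2x1 - x2 <= -2
#   x1 + 3x2 <= 2
#   x2 <= 4
A_ub = [[-2, -1], [1, 3], [0, 1]]
b_ub = [-2, 2, 4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)   # expected: [2. 0.] and 6.0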

Q.4:- Describe the economic importance of the Duality concept?

Ans:- The importance of the duality concept is due to two main reasons:

i. If the primal contains a large number of constraints and a smaller number of variables,
the labour of computation can be considerably reduced by converting it into the dual
problem and then solving it.
ii. The interpretation of the dual variables from the loss or economic point of view proves
extremely useful in making future decisions about the activities being programmed.

Economic importance of duality concept The linear programming problem can be thought
of as a resource allocation model in which the objective is to maximize revenue or profit
subject to limited resources. Looking at the problem from this point of view, the associated
dual problem offers interesting economic interpretations of the L.P resource allocation model.
We consider here a representation of the general primal and dual problems in which the
primal takes the role of a resource allocation model.

Primal

Maximize z = ∑ cj xj (sum over j = 1, 2, …, n)

subject to ∑ aij xj ≤ bi (sum over j = 1, 2, …, n), i = 1, 2, …, m

xj ≥ 0, j = 1, 2, …, n

Dual

Minimize w = ∑ bi yi (sum over i = 1, 2, …, m)

subject to ∑ aij yi ≥ cj (sum over i = 1, 2, …, m), j = 1, 2, …, n

yi ≥ 0, i = 1, 2, …, m
From the above resource allocation model, the primal problem has n economic
activities and m resources. The coefficient cj in the primal represents the profit per unit of
activity j. Resource i, whose maximum availability is bi, is consumed at the rate aij units per
unit of activity j.

Interpretation of Dual Variables –

For any pair of feasible primal and dual solutions,

(Objective value in the maximization problem) ≤ (Objective value in the minimization
problem)

At the optimum, the relationship holds as a strict equation. Note: Here the sense of
optimization is very important. Hence clearly for any two primal and dual feasible solutions,
the values of the objective functions, when finite, must satisfy the following inequality.

z = ∑ cj xj ≤ ∑ bi yi = w (sums over j = 1, …, n and i = 1, …, m respectively)

The strict equality, z = w, holds when both the primal and dual solutions are optimal.

Consider the optimal condition z = w first. Given that the primal problem represents a
resource allocation model, we can think of z as representing profit in Rupees. Because bi
represents the number of units available of resource i, the equation z = w can be expressed as
profit (Rs) = ∑ (units of resource i) × (profit per unit of resource i)

This means that the dual variables yi, represent the worth per unit of resource i
[variables yi are also called as dual prices, shadow prices and simplex multipliers].

With the same logic, the inequality z < w associated with any two feasible primal and
dual solutions is interpreted as (profit) < (worth of resources)

This relationship implies that as long as the total return from all the activities is less
than the worth of the resources, the corresponding primal and dual solutions are not optimal.
Optimality is reached only when the resources have been exploited completely, which can
happen only when the input equals the output (profit). Economically the system is said to
remain unstable (non optimal) when the input (worth of the resources) exceeds the output
(return). Stability occurs only when the two quantities are equal.
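A small numerical illustration of z = w may help. The data below is assumed purely for
illustration (it is not from the text): a primal resource-allocation LP and its dual are both
solved with SciPy, and the two optimal objective values coincide, the optimal dual variables
being the per-unit worth (shadow prices) of the resources.

from scipy.optimize import linprog

# Primal (assumed data): maximize 3x1 + 5x2 subject to
#   x1 <= 4,   2x2 <= 12,   3x1 + 2x2 <= 18,   x1, x2 >= 0
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]
c = [3, 5]
primal = linprog([-cj for cj in c], A_ub=A, b_ub=b, method="highs")

# Dual: minimize 4y1 + 12y2 + 18y3 subject to A^T y >= c, y >= 0
# (written as -(A^T) y <= -c for linprog)
neg_AT = [[-1, 0, -3], [0, -2, -2]]
dual = linprog(b, A_ub=neg_AT, b_ub=[-cj for cj in c], method="highs")

print("z* =", -primal.fun)   # 36.0
print("w* =", dual.fun)      # 36.0; dual.x gives the shadow prices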

Q.5:- How can you use the Matrix Minimum method to find the initial basic feasible
solution in the transportation problem?

Ans:- The Initial Basic Feasible Solution using the Matrix Minimum Method

Let us consider a T.P. involving m origins and n destinations. Since the sum of the origin
capacities equals the sum of the destination requirements, a feasible solution always exists.
Moreover, any one of the m + n constraints is redundant and hence can be deleted. This also
means that a feasible solution to a T.P. can have at most m + n – 1 strictly positive
components; otherwise the solution will degenerate.

It is always possible to assign an initial feasible solution to a T.P. in such a manner
that the rim requirements are satisfied. This can be achieved either by inspection or by
following some simple rules. We begin by imagining that the transportation table is blank,
i.e. initially all xij = 0. The simplest procedures for initial allocation are discussed in the
following section.

Matrix Minimum Method

Step 1: Determine the smallest cost in the cost matrix of the transportation table. Let it be
cij. Allocate xij = min (ai, bj) in the cell (i, j).

Step 2: If xij = ai, cross off the ith row of the transportation table and decrease bj by ai; go to
step 3.

If xij = bj, cross off the jth column of the transportation table and decrease ai by bj; go to step 3.

If xij = ai = bj, cross off either the ith row or the jth column, but not both.

Step 3: Repeat steps 1 and 2 for the resulting reduced transportation table until all the
rim requirements are satisfied. Whenever the minimum cost is not unique, make an arbitrary
choice among the minima.
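A compact sketch of this rule in Python is given below (the cost matrix and rim values are
assumed data; ties on the minimum cost are broken by the first cell found, which is the
arbitrary choice mentioned in step 3):

import numpy as np

def matrix_minimum(cost, supply, demand):
    cost = np.array(cost, dtype=float)
    supply, demand = list(supply), list(demand)
    alloc = np.zeros_like(cost)
    active = np.ones(cost.shape, dtype=bool)      # cells not yet crossed off
    while active.any():
        # Step 1: smallest cost among the remaining cells
        i, j = np.unravel_index(np.where(active, cost, np.inf).argmin(),
                                cost.shape)
        x = min(supply[i], demand[j])             # allocate x_ij = min(a_i, b_j)
        alloc[i, j] = x
        supply[i] -= x
        demand[j] -= x
        # Step 2: cross off the exhausted row or the satisfied column, not both
        if supply[i] == 0:
            active[i, :] = False
        else:
            active[:, j] = False
    return alloc

# Balanced example with assumed data: 3 origins, 4 destinations
cost = [[4, 6, 8, 13], [13, 11, 10, 8], [14, 4, 10, 13]]
print(matrix_minimum(cost, supply=[50, 70, 30], demand=[25, 35, 45, 45]))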

Q.6:- Describe the Integer Programming Problem. Describe the Gomory’s All-I.P.P.
method for solving the I.P.P. problem?

Ans:- Integer Programming Problem: The Integer Programming Problem (I.P.P.) is a
special case of the L.P.P. in which all or some of the variables are constrained to assume
nonnegative integer values. This type of problem has a lot of applications in business and
industry, where the discrete nature of the variables is involved in many decision making
situations. E.g., in manufacturing, production is frequently scheduled in terms of batches,
lots or runs; in distribution, a shipment must involve a discrete number of trucks, aircraft or
freight cars.
An integer programming problem can be described as follows:
Determine the value of unknowns x1, x2, … , xn

so as to optimize z = c1x1 +c2x2 + . . .+ cnxn

subject to the constraints

ai1 x1 + ai2 x2 + . . . + ain xn = bi, i = 1, 2, …, m

and xj ≥ 0, j = 1, 2, …, n,

where xj is integer valued for j = 1, 2, …, k (k ≤ n).

If all the variables are constrained to take only integral values, i.e. k = n, it is called an
all (or pure) integer programming problem. In case only some of the variables are restricted to
take integral values and the rest (n – k) variables are free to take any nonnegative values, the
problem is known as a mixed integer programming problem.

Gomory’s All – IPP Method

An optimum solution to an I.P.P. is first obtained by using the simplex method,
ignoring the restriction of integral values. If all the variables in the optimum solution have
integer values, the current solution will be the desired optimum integer solution. Otherwise
the given I.P.P. is modified by inserting a new constraint, called Gomory's or secondary
constraint, which represents a necessary condition for integrality and eliminates some
non-integer solutions without losing any integral solution. After adding the secondary
constraint, the problem is then solved by the dual simplex method to get an optimum integral
solution. If all the values of the variables in this solution are integers, an optimum integer
solution is obtained; otherwise another new constraint is added to the modified L.P.P. and
the procedure is repeated. An optimum integer solution will be reached eventually after
introducing enough new constraints to eliminate all the superior non-integer solutions. The
construction of these additional constraints, called secondary or Gomory's constraints, is so
important that it needs special attention.

The iterative procedure for the solution of an all integer programming problem is as follows:

Step 1: Convert the minimization I.P.P. into one of maximization, if it is in the
minimization form. The integrality condition should be ignored at this stage.

Step 2: Introduce the slack or surplus variables, wherever necessary to convert the
inequations into equations and obtain the optimum solution of the given L.P.P. by using
simplex algorithm.

Step 3: Test the integrality of the optimum solution

a) If the optimum solution contains all integer values, an optimum basic feasible integer
solution has been obtained.
b) If the optimum solution does not include all integer values then proceed onto next
step.

Step 4: Examine the constraint equations corresponding to the current optimum
solution. Let these equations be represented by

∑ yij xj = bi (sum over j = 0, 1, 2, …, n′), i = 0, 1, 2, …, m′,

where n′ denotes the number of variables and m′ the number of equations.

Choose the largest fractional part of the bi values, i.e., find max over i of {bi}. Let it be
{bk}, and write it as fk0.

Step 5: Express each of the negative fractions if any, in the k th row of the optimum
simplex table as the sum of a negative integer and a nonnegative fraction.

Step 6: Find the Gomorian constraint

∑ fkj xj ≥ fk0 (sum over j = 0, 1, …, n′)

and add the equation

Gsla(1) = – fk0 + ∑ fkj xj (sum over j = 0, 1, …, n′)

to the current set of equation constraints.

Step 7: Starting with this new set of equation constraints, find the new optimum
solution by dual simplex algorithm. (So that Gsla (1) is the initial leaving basic variable).

Step 8: If this new optimum solution for the modified L.P.P. is an integer solution, it
is also feasible and optimum for the given I.P.P.; otherwise return to step 4 and repeat the
process until an optimum feasible integer solution is obtained.
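Step 6 is simple enough to sketch in a few lines of Python. The function below takes an
assumed source row of the optimum table (it is not from a worked problem in the text) and
returns the fractional parts that make up the Gomorian constraint ∑ fkj xj ≥ fk0:

import math

def gomory_cut(row_coeffs, rhs):
    # fractional parts of the k-th row: y_kj = integer part + f_kj, 0 <= f_kj < 1
    f = [y - math.floor(y) for y in row_coeffs]
    f0 = rhs - math.floor(rhs)
    return f, f0

# Assumed source row:  x1 + 0.5 x3 - 0.25 x4 = 3.75
f, f0 = gomory_cut([1, 0, 0.5, -0.25], 3.75)
print(f, f0)   # [0.0, 0.0, 0.5, 0.75] and 0.75 -> cut: 0.5 x3 + 0.75 x4 >= 0.75

Note that –0.25 is expressed as –1 + 0.75, exactly as required in Step 5.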

ASSIGNMENTS - MBA – II SEMESTER

MB0032

SET 2

OPERATIONS RESEARCH

Q.1:- What are the important features of Operations Research? Describe in detail the
different phases of Operations Research?

Ans:- Important features of OR are: It is system oriented: OR studies the problem
from the overall point of view of the organization or situation, since the optimum result for
one part of the system may not be optimum for some other part.

(i) It imbibes an inter-disciplinary team approach. Since no single individual can have
a thorough knowledge of all fast developing scientific know-how, personalities from
different scientific and managerial cadres form a team to solve the problem.
(ii) It makes use of scientific methods to solve problems.
(iii) OR increases the effectiveness of management's decision making ability.
(iv) It makes use of computers to solve large and complex problems.
(v) It gives quantitative solutions.
(vi) It considers the human factors also.

Phases of Operations Research: The scientific method in an OR study generally involves
the following three phases:

(i) Judgment Phase: This phase consists of:-

(a) Determination of the operation.


(b) Establishment of the objectives and values related to the operation.
(c) Determination of the suitable measures of effectiveness and
(d) Formulation of the problems relative to the objectives.

(ii) Research Phase: This phase utilizes

(a) Operations and data collection for a better understanding of the problems.
(b) Formulation of hypothesis and model.
(c) Observation and experimentation to test the hypothesis on the basis
of additional data.
(d) Analysis of the available information and verification of the hypothesis
using pre-established measures of effectiveness.

(e) Prediction of various results and consideration of alternative
methods.

(iii) Action Phase: It consists of making recommendations for the decision process
by those who first posed the problem for consideration, or by anyone in a position to
make a decision influencing the operation in which the problem occurred.

Q.2:- Describe a Linear Programming Problem in detail in canonical form?

Ans:- Linear Programming: The Linear Programming Problem (LPP) is a class of
mathematical programming problems in which the functions representing the objectives and
the constraints are linear. Here, by optimization, we mean either to maximize or to minimize
the objective function. The general linear programming model is usually defined as follows:

Maximize or Minimize
Z = c1 x1 + c2 x 2 +..................................+cn x n

subject to the constraints,

a11 x1 + a12 x2 + ....................................+ a1n xn ~ b1


a21 x1 + a22 x2 + .................+ a2n xn ~ b2
...................................................
...................................................
am1 x1 + am2 x2 + ................ + amn xn ~ bm
and x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0.

where cj, bi and aij (i = 1, 2, …, m; j = 1, 2, …, n) are constants determined from the
technology of the problem and xj (j = 1, 2, …, n) are the decision variables. Here ~ is either
≤ (less than or equal to), ≥ (greater than or equal to) or = (equal to). Note that, in terms of the
above formulation, the coefficients cj, aij and bi are interpreted physically as follows: if bi is
the available amount of resource i, and aij is the amount of resource i that must be allocated
to each unit of activity j, then the "worth" per unit of activity j is equal to cj.

Canonical forms :

The general Linear Programming Problem (LPP) defined above can always be
put in the following form which is called as the canonical form:

Maximise Z = c1 x1 + c2 x2 + ....... + cn xn

Subject to
a11 x1 + a12 x2 + ................. + a1n xn ≤ b1
a21 x1 + a22 x2 + ................. + a2n xn ≤ b2
.....................................................................
am1 x1 + am2 x2 + ...... + amn xn ≤ bm
x1, x2, x3, ..., xn ≥ 0.

The characteristics of this form are:

1) All decision variables are nonnegative.
2) All constraints are of the ≤ type.
3) The objective function is of the maximization type.

Any LPP can be put in the canonical form by the use of five elementary transformations:

1. The minimization of a function is mathematically equivalent to the maximization of the
negative of that function. That is, Minimize Z = c1x1 + c2x2 + ....... + cnxn is equivalent to
Maximize – Z = – c1x1 – c2x2 – ... – cnxn.

2. Any inequality in one direction (≤ or ≥) may be changed to an inequality in the opposite
direction (≥ or ≤) by multiplying both sides of the inequality by –1.

For example, 2x1 + 3x2 ≥ 5 is equivalent to –2x1 – 3x2 ≤ –5.

3. An equation can be replaced by two inequalities in opposite directions. For example,
2x1 + 3x2 = 5 can be written as 2x1 + 3x2 ≤ 5 and 2x1 + 3x2 ≥ 5, or as 2x1 + 3x2 ≤ 5 and
–2x1 – 3x2 ≤ –5.

4. An inequality constraint with its left hand side in absolute value form can be changed into
two regular inequalities. For example, |2x1 + 3x2| ≤ 5 is equivalent to 2x1 + 3x2 ≤ 5 and
2x1 + 3x2 ≥ –5 (i.e., –2x1 – 3x2 ≤ 5).

5. A variable which is unconstrained in sign (i.e., it may be positive, negative or zero) can be
replaced by the difference of two nonnegative variables. For example, if x is unconstrained in
sign, then x = (x+ – x–) where x+ ≥ 0 and x– ≥ 0.
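As a brief illustration of how these transformations combine (the numbers are chosen only
for illustration), consider: Minimize Z = 2x1 + 3x2 subject to x1 + x2 ≥ 4, x1 – x2 = 1,
x1, x2 ≥ 0. Applying transformations 1, 2 and 3, the canonical form is:
Maximize – Z = –2x1 – 3x2
subject to –x1 – x2 ≤ –4, x1 – x2 ≤ 1, –x1 + x2 ≤ –1, x1, x2 ≥ 0.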

Q.3:- What are the different steps needed to solve a system of equations by the simplex
method?

Ans:- To solve a problem by the Simplex Method:

1. Introduce slack variables (Si) for "≤" type constraints.
2. Introduce surplus variables (Si) and artificial variables (Ai) for "≥" type constraints.
3. Introduce only artificial variables for "=" type constraints.
4. The cost (Cj) of slack and surplus variables will be zero and that of an artificial
variable will be "–M". Find Zj – Cj for each variable.
5. Slack and artificial variables will form the basic variables for the first simplex table.
Surplus variables will never become basic variables for the first simplex table.
6. Zj = sum of [cost of each basic variable × its coefficient in the column of the variable];
Zj – Cj is this sum minus the profit or cost coefficient of the variable.
7. Select the most negative value of Zj – Cj. That column is called the key column.
The variable corresponding to this column will become the basic variable for the next
table.
8. Divide the quantities by the corresponding values of the key column to get ratios, and
select the minimum ratio. This row becomes the key row. The basic variable
corresponding to this row will be replaced by the variable found in step 7.
9. The element that lies on both the key column and the key row is called the pivotal
element.
10. Ratios corresponding to negative or zero values in the key column are not considered
for determining the key row.
11. Once an artificial variable is removed as a basic variable, its column will be deleted
from the next iteration onwards.
12. For maximisation problems the decision variables' coefficients will be the same as in
the objective function. For minimization problems the decision variables' coefficients will
have opposite signs as compared to the objective function.
13. The cost of the artificial variables will always be –M for both maximisation and
minimization problems.
14. The process is continued till all Zj – Cj ≥ 0.
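Steps 7 to 9 can be sketched as a single tableau iteration in Python. The layout assumed here
(last row holding Zj – Cj, last column holding the quantities) is our own convention for the
illustration, not something fixed by the text:

import numpy as np

def simplex_iteration(tableau):
    t = np.array(tableau, dtype=float)
    # Step 7: key column = most negative Zj - Cj entry
    key_col = int(np.argmin(t[-1, :-1]))
    if t[-1, key_col] >= 0:
        return t, None                      # already optimal (step 14)
    # Step 8: ratios of the quantities to the key-column values
    # (step 10: rows with non-positive key-column entries are ignored)
    col = t[:-1, key_col]
    ratios = [t[r, -1] / col[r] if col[r] > 0 else np.inf
              for r in range(len(col))]
    key_row = int(np.argmin(ratios))
    # Step 9: pivot on the element at (key_row, key_col)
    t[key_row] /= t[key_row, key_col]
    for r in range(t.shape[0]):
        if r != key_row:
            t[r] -= t[r, key_col] * t[key_row]
    return t, (key_row, key_col)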

Q.4:- What do you understand by the transportation problem? What is the basic
assumption behind the transportation problem? Describe the MODI method of solving
transportation problem?

Ans:- This model studies the minimization of the cost of transporting a commodity from a
number of sources to several destinations. The supply at each source and the demand at each
destination are known. The transportation problem involves m sources, each of which
has available ai (i = 1, 2, .....,m) units of homogeneous product and n destinations, each of
which requires bj (j = 1, 2...., n) units of products. Here ai and bj are positive integers. The
cost cij of transporting one unit of the product from the i th source to the j th destination is
given for each i and j. The objective is to develop an integral transportation schedule that
meets all demands from the inventory at a minimum total transportation cost. It is assumed
that the total supply and the total demand are equal, i.e., ∑ ai = ∑ bj ... (1)

The condition (1) is guaranteed by creating either a fictitious destination with a
demand equal to the surplus if total demand is less than the total supply, or a (dummy)
source with a supply equal to the shortage if total demand exceeds total supply. The costs of
transportation to the fictitious destination and from the fictitious source are assumed to be
zero, so that the total cost of transportation will remain the same.

The Transportation Algorithm (MODI Method)

The first approximation to (2) is always integral and therefore always a feasible
solution. Rather than determining a first approximation by a direct application of the
simplex method, it is more efficient to work with the transportation table. The transportation
algorithm is the simplex method specialized to the format of this table; it involves:

(i) Finding an integral basic feasible solution


(ii) Testing the solution for optimality
(iii) Improving the solution, when it is not optimal
(iv) Repeating steps (ii) and (iii) until the optimal solution is obtained.

The solution to a T.P. is obtained in two stages. In the first stage we find a basic feasible
solution by any one of the following methods: a) North-West Corner Rule, b) Matrix
Minimum Method (least cost method), or c) Vogel's Approximation Method. In the second
stage we test the B.F.S. for its optimality either by the MODI method or by the stepping
stone method.
Modified Distribution Method / MODI Method / U–V Method
Step 1: Under this method we construct penalties for rows and columns by
subtracting the least value of the row / column from the next least value.

Step 2: We select the highest penalty constructed for both rows and columns, enter
that row / column, select the minimum cost and allocate min (ai, bj).
Step 3: Delete the row or column or both if the rim availability / requirement is met.
Step 4: We repeat steps 1 and 2 till all allocations are over.
Step 5: For all allocated cells, form the equations ui + vj = cij; set one of the dual
variables ui / vj to zero and solve for the others.
Step 6: Use these values to find Δij = cij – ui – vj for the unallocated cells. If all
Δij ≥ 0, then the solution is optimal.
Step 7: If any Δij < 0, select the most negative cell and form a loop. The starting point
of the loop is +ve and alternately the other corners of the loop are –ve and +ve. Examine
the quantities allocated at the –ve places and select the minimum. Add it at the +ve places
and subtract it from the –ve places.
Step 8: Form the new table and repeat steps 5 to 7 till all Δij ≥ 0.
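Steps 5 and 6 (computing ui, vj and Δij) are sketched below in Python; the cost matrix and
the set of allocated (basic) cells are assumed data used only to show the mechanics:

import numpy as np

def modi_check(cost, basic_cells):
    m, n = np.array(cost).shape
    u, v = [None] * m, [None] * n
    u[0] = 0                                    # set one dual variable to zero
    # Step 5: repeatedly solve u_i + v_j = c_ij over the allocated cells
    for _ in range(m + n):
        for (i, j) in basic_cells:
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]
    # Step 6: reduced costs of the unallocated cells
    delta = {(i, j): cost[i][j] - u[i] - v[j]
             for i in range(m) for j in range(n) if (i, j) not in basic_cells}
    return u, v, delta

# Assumed 3 x 3 example with m + n - 1 = 5 allocated cells
cost = [[2, 7, 4], [3, 3, 1], [5, 4, 7]]
basic = {(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)}
u, v, delta = modi_check(cost, basic)
print(u, v, delta)   # any negative delta means the solution is not yet optimal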

Q.5:- Describe the North-West Corner rule for finding the initial basic feasible
solution in the transportation problem?

Ans:- North West Corner Rule

Step1: The first assignment is made in the cell occupying the upper left hand (north west)
corner of the transportation table. The maximum feasible amount is allocated there, that is
x11 = min (a1,b1)

So that either the capacity of origin O1 is used up or the requirement at destination D1 is
satisfied, or both. This value of x11 is entered in the upper left hand corner (small square)
of cell (1, 1) of the transportation table.

Step 2: If b1 > a1, the capacity of origin O1 is exhausted but the requirement at destination
D1 is still not satisfied, so that at least one more variable in the first column will have to
take on a positive value. Move down vertically to the second row and make the second
allocation of magnitude x21 = min (a2, b1 – x11) in the cell (2, 1). This either exhausts the
capacity of origin O2 or satisfies the remaining demand at destination D1.

If a1 > b1, the requirement at destination D1 is satisfied but the capacity of origin O1 is not
completely exhausted. Move to the right horizontally to the second column and make the
second allocation of magnitude x12 = min (a1 – x11, b2) in the cell (1, 2). This either
exhausts the remaining capacity of origin O1 or satisfies the demand at destination D2.

If b1 = a1, the capacity of origin O1 is completely exhausted and the requirement at
destination D1 is completely satisfied. There is then a tie for the second allocation, and an
arbitrary tie-breaking choice is made: make the second allocation of magnitude
x12 = min (a1 – a1, b2) = 0 in the cell (1, 2) or x21 = min (a2, b1 – b1) = 0 in the cell (2, 1).

Step 3: Start from the new north west corner of the transportation table and, satisfying
destination requirements and exhausting origin capacities one at a time, move down towards
the lower right corner of the table until all the rim requirements are satisfied.
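A short Python sketch of this rule is given below (the supplies and demands are assumed,
balanced data; variable names are illustrative):

def north_west_corner(supply, demand):
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        x = min(supply[i], demand[j])          # x_ij = min(a_i, b_j)
        alloc[i][j] = x
        supply[i] -= x
        demand[j] -= x
        if supply[i] == 0:                     # origin exhausted: move down
            i += 1
        else:                                  # destination satisfied: move right
            j += 1
    return alloc

# Example: a = (20, 30, 50), b = (40, 30, 30)
print(north_west_corner([20, 30, 50], [40, 30, 30]))
# expected allocations: [[20, 0, 0], [20, 10, 0], [0, 20, 30]]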

Q.6:- Describe the Branch and Bound Technique to solve an I.P.P. problem?

Ans:- The Branch and Bound Technique: Sometimes a few or all of the variables of
an I.P.P. are constrained by their upper or lower bounds, or by both. The most general
technique for the solution of such constrained optimization problems is the branch and
bound technique. The technique is applicable to all-integer as well as mixed I.P.P.s. The
technique for a maximization problem is discussed below:

Let the I.P.P. be
Maximize z = c1x1 + c2x2 + … + cnxn –––– (1)
subject to the constraints
∑ aij xj ≤ bi (sum over j = 1, 2, …, n), i = 1, 2, …, m –––– (2)
xj is integer valued, j = 1, 2, …, r (r < n) –––– (3)
xj ≥ 0, j = r + 1, …, n –––– (4)

Further let us suppose that for each integer valued xj, we can assign lower and upper
bounds for the optimum values of the variable by

Lj ≤ xj ≤ Uj, j = 1, 2, …, r –––– (5)


The following idea underlies the branch and bound technique.
Consider any variable xj, and let I be some integer value satisfying Lj ≤ I ≤ Uj – 1. Then
clearly an optimum solution to (1) through (5) shall also satisfy either the linear constraint

xj ≥ I + 1 –––– (6)

or the linear constraint xj ≤ I –––– (7)

To explain how this partitioning helps, let us assume that there were no integer
restrictions (3), and suppose that this then yields an optimal solution to the L.P.P. – (1), (2),
(4) and (5) – indicating, say, x1 = 1.66. Then we formulate and solve two L.P.P.s, each
containing (1), (2) and (4).

But (5) for j = 1 is modified to be 2 ≤ x1 ≤ U1 in one problem and L1 ≤ x1 ≤ 1 in the other.
Further, suppose each of these problems possesses an optimal solution satisfying the integer
constraints (3).

Then the solution having the larger value for z is clearly optimum for the given I.P.P.
However, it usually happens that one (or both) of these problems has no optimal solution
satisfying (3), and thus some more computations are necessary. We now discuss step
wise the algorithm that specifies how to apply the partitioning (6) and (7) in a systematic
manner to finally arrive at an optimum solution.

We start with an initial lower bound for z, say z(0), at the first iteration, which is less
than or equal to the optimal value z*. This lower bound may be taken as the starting Lj for
some xj.

In addition to the lower bound z(0), we also have a list of L.P.P.s (called the
master list) differing only in the bounds (5). To start with (the 0th iteration), the master list
contains a single L.P.P. consisting of (1), (2), (4) and (5). We now discuss below the step by
step procedure that specifies how the partitioning (6) and (7) can be applied systematically
to eventually get an optimum integer valued solution.
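A compact Python sketch of the branching idea is given below. It solves the LP relaxation
with SciPy, and whenever some xj comes out fractional it creates the two subproblems
xj ≤ I and xj ≥ I + 1 (I being the integer part of xj), exactly as in (6) and (7), keeping the
better of the two integer answers. The LP data at the bottom is assumed for illustration only,
and no bounding or pruning of the master list is attempted here:

import math
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds):
    # Maximize c.x subject to A x <= b, the given bounds, and x integer
    res = linprog([-cj for cj in c], A_ub=A, b_ub=b, bounds=bounds,
                  method="highs")
    if not res.success:
        return None, -math.inf                  # infeasible subproblem
    x, z = res.x, -res.fun
    for j, xj in enumerate(x):
        if abs(xj - round(xj)) > 1e-6:          # fractional: branch on x_j
            lo, hi = bounds[j]
            left  = bounds[:j] + [(lo, math.floor(xj))] + bounds[j + 1:]
            right = bounds[:j] + [(math.floor(xj) + 1, hi)] + bounds[j + 1:]
            x1, z1 = branch_and_bound(c, A, b, left)
            x2, z2 = branch_and_bound(c, A, b, right)
            return (x1, z1) if z1 >= z2 else (x2, z2)
    return x, z                                 # all components integral

# Assumed example: maximize 7x1 + 9x2, -x1 + 3x2 <= 6, 7x1 + x2 <= 35, x >= 0
print(branch_and_bound([7, 9], [[-1, 3], [7, 1]], [6, 35],
                       bounds=[(0, None), (0, None)]))
# expected integer optimum: x = (4, 3), z = 55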

