
Definition

Interest which is calculated not only on the initial principal but also on the
accumulated interest of prior periods. Compound interest differs from simple
interest in that simple interest is calculated solely as a percentage of the principal
sum.
The equation for compound interest is: P = C(1 + r/n)^{nt}
Where:
P = future value
C = initial deposit
r = interest rate (expressed as a fraction: e.g., 0.06 for 6%)
n = # of times per year interest is compounded
t = number of years invested
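
A minimal sketch of this formula in Python (the deposit, rate, and term in the example call are invented figures):

    def compound_interest(c, r, n, t):
        # Future value of an initial deposit c at annual rate r,
        # compounded n times per year for t years
        return c * (1 + r / n) ** (n * t)

    # e.g. $1,000 at 6% compounded monthly for 10 years
    print(round(compound_interest(1000, 0.06, 12, 10), 2))  # 1819.4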

History

Leonid Kantorovich

The problem of solving a system of linear inequalities dates back at least as far
as Fourier, who in 1827 published a method for solving them,[1] and after whom the
method of Fourier–Motzkin elimination is named.
The first linear programming formulation of a problem that is equivalent to the general
linear programming problem was given by Leonid Kantorovich in 1939, who also
proposed a method for solving it.[2] He developed it during World War II as a way to plan
expenditures and returns so as to reduce costs to the army and increase losses incurred
by the enemy. About the same time as Kantorovich, the Dutch-American economist T. C.
Koopmans formulated classical economic problems as linear programs. Kantorovich and
Koopmans later shared the 1975 Nobel prize in economics.[1] In 1941, Frank Lauren
Hitchcock also formulated transportation problems as linear programs and gave a
solution very similar to the later simplex method;[2] Hitchcock had died in 1957, and the
Nobel prize is not awarded posthumously.
During 1946–1947, George B. Dantzig independently developed a general linear
programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig
also invented the simplex method, which for the first time efficiently tackled the linear
programming problem in most cases. When Dantzig arranged a meeting with John von
Neumann to discuss his simplex method, von Neumann immediately conjectured the theory
of duality by realizing that the problem he had been working on in game theory was
equivalent. Dantzig provided a formal proof in an unpublished report, "A Theorem on Linear
Inequalities", on January 5, 1948.[3] After the war, many industries found linear programming
useful in their daily planning.
Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The
computing power required to test all the permutations to select the best assignment is
vast; the number of possible configurations exceeds the number of particles in the
observable universe. However, it takes only a moment to find the optimum solution by
posing the problem as a linear program and applying the simplex algorithm. The theory
behind linear programming drastically reduces the number of possible solutions that must
be checked.
The linear programming problem was first shown to be solvable in polynomial time
by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the
field came in 1984, when Narendra Karmarkar introduced a new interior-point method for
solving linear programming problems.

Linear programming is the process of taking various linear inequalities relating to some
situation, and finding the "best" value obtainable under those conditions. A typical example
would be taking the limitations of materials and labor, and then determining the "best"
production levels for maximal profits under those conditions.
In "real life", linear programming is part of a very important area of mathematics called
"optimization techniques". This field of study (or at least the applied results of it) are used
every day in the organization and allocation of resources. These "real life" systems can have
dozens or hundreds of variables, or more. In algebra, though, you'll only work with the simple
(and graphable) two-variable linear case.
The general process for solving linear-programming exercises is to graph the inequalities
(called the "constraints") to form a walled-off area on the x,y-plane (called the "feasibility
region"). Then you figure out the coordinates of the corners of this feasibility region (that is,
you find the intersection points of the various pairs of lines), and test these corner points in
the formula (called the "optimization equation") for which you're trying to find the highest or
lowest value.
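
A small sketch of this corner-point procedure in Python, using an invented two-variable exercise (maximize P = 3x + 4y subject to x + 2y <= 14, 3x - y >= 0, x - y <= 2):

    from itertools import combinations
    import numpy as np

    # Boundary line of each constraint, written as a*x + b*y = c
    lines = [(1, 2, 14), (3, -1, 0), (1, -1, 2)]

    def feasible(x, y):
        # All three inequalities, with a small tolerance for rounding
        return (x + 2*y <= 14 + 1e-9 and 3*x - y >= -1e-9 and x - y <= 2 + 1e-9)

    corners = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        A = np.array([[a1, b1], [a2, b2]], dtype=float)
        if abs(np.linalg.det(A)) < 1e-12:
            continue  # parallel boundaries never intersect
        x, y = np.linalg.solve(A, np.array([c1, c2], dtype=float))
        if feasible(x, y):
            corners.append((x, y))

    # Test the optimization equation at every corner of the feasibility region
    best = max(corners, key=lambda p: 3*p[0] + 4*p[1])
    print(best, 3*best[0] + 4*best[1])  # (6.0, 4.0) with P = 34.0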

Examples of Linear Programming Applications


Presented are examples of linear programming applications; the last two are very simple. In the
case of a real-life application, it is necessary to extend the model to consider additional
constraints of the modelled situation. Despite that, linear programming problems are quite
tractable: with adequate effort, even relatively large problems (hundreds of thousands of
variables and constraints) can be solved.

Transportation Problem
A company has a stock of goods allocated in m storehouses. The goods are to be delivered
to n customers, each of whom requests a certain quantity of the goods. (It is supposed that
the quantity of the goods in the storehouses is sufficient to cover the customers' requests.) The
transportation cost of one unit of the goods from storehouse no. i to customer
no. j is c_ij for i = 1, 2, …, m and j = 1, 2, …, n. The goal is to make up a transportation plan so
that the requests of the customers are met and the total transportation costs are minimal.
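
Written out as a linear program (a standard formulation; the shipped quantity x_{ij}, the stock a_i of storehouse i, and the request b_j of customer j are notation introduced here, not named in the text above):

\min \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij}\, x_{ij}
\quad \text{subject to} \quad
\sum_{j=1}^{n} x_{ij} \le a_i \; (i = 1, \dots, m), \quad
\sum_{i=1}^{m} x_{ij} = b_j \; (j = 1, \dots, n), \quad
x_{ij} \ge 0.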

Minimization of production costs


A company produces n different kinds of goods. It has received orders from customers to supply
a certain quantity of each kind of the goods. The company produces the goods by m activities
(processes). Each of the activities no. 1, 2, …, m produces all the kinds of the goods
no. 1, 2, …, n in a certain ratio. (For example, the distillation of crude oil yields petrol, oil,
paraffin oil, asphalt, and so on. The production of iron in the blast furnace yields iron as well as slag,
which can be used in the building industry. And so forth.) The unit production costs of the i-th
activity are c_i. The goal is to make up an optimal production programme, i.e., to determine the
production levels of the activities, so that the orders of the customers are met and the total
production costs are minimal.
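
In the same spirit (with x_i the level of activity i, a_{ij} the amount of good j yielded by one unit of activity i, and b_j the ordered quantity of good j, all notation introduced here):

\min \sum_{i=1}^{m} c_i\, x_i
\quad \text{subject to} \quad
\sum_{i=1}^{m} a_{ij}\, x_i \ge b_j \; (j = 1, \dots, n), \quad x_i \ge 0.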

Maximization of profit
A company performs n activities. It produces n kinds of goods, provides n kinds of services, and
so forth. The company sells its activities (products, services). Each unit of the j-th activity sold
yields a profit of c_j for j = 1, 2, …, n. The company needs m kinds of resources to run its
activities. Each of the resources (in the given period of time) is available only in a certain
amount. The goal is to make up an optimal programme of the activities so that the resources
are not overdrawn and the total profit is maximized.
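
A sketch of such a problem in Python, with invented unit profits, resource limits, and usage rates (scipy's linprog minimizes, so the profits are negated):

    from scipy.optimize import linprog

    c = [-3, -5]                  # unit profits 3 and 5, negated for minimization
    A = [[1, 0], [0, 2], [3, 2]]  # units of each resource consumed per unit of activity
    b = [4, 12, 18]               # available amount of each resource

    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)        # levels (2, 6) and profit 36 for these numbers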

A Short History of Probability


From Calculus, Volume II by Tom M. Apostol (2nd edition, John Wiley & Sons, 1969):
"A gambler's dispute in 1654 led to the creation of a mathematical theory of probability by
two famous French mathematicians, Blaise Pascal and Pierre de Fermat. Antoine Gombaud,
Chevalier de Méré, a French nobleman with an interest in gaming and gambling questions,
called Pascal's attention to an apparent contradiction concerning a popular dice game. The
game consisted in throwing a pair of dice 24 times; the problem was to decide whether or
not to bet even money on the occurrence of at least one "double six" during the 24 throws.
A seemingly well-established gambling rule led de Méré to believe that betting on a double
six in 24 throws would be profitable, but his own calculations indicated just the opposite.

This problem and others posed by de Méré led to an exchange of letters between Pascal
and Fermat in which the fundamental principles of probability theory were formulated for
the first time. Although a few special problems on games of chance had been solved by
some Italian mathematicians in the 15th and 16th centuries, no general theory was
developed before this famous correspondence.
The Dutch scientist Christian Huygens, a teacher of Leibniz, learned of this correspondence
and shortly thereafter (in 1657) published the first book on probability; entitled De
Ratiociniis in Ludo Aleae, it was a treatise on problems associated with gambling. Because
of the inherent appeal of games of chance, probability theory soon became popular, and
the subject developed rapidly during the 18th century. The major contributors during this
period were Jakob Bernoulli (1654-1705) and Abraham de Moivre (1667-1754).
In 1812 Pierre de Laplace (1749-1827) introduced a host of new ideas and mathematical
techniques in his book, Théorie Analytique des Probabilités. Before Laplace, probability
theory was solely concerned with developing a mathematical analysis of games of chance.
Laplace applied probabilistic ideas to many scientific and practical problems. The theory of
errors, actuarial mathematics, and statistical mechanics are examples of some of the
important applications of probability theory developed in the 19th century.
Like so many other branches of mathematics, the development of probability theory has
been stimulated by the variety of its applications. Conversely, each advance in the theory
has enlarged the scope of its influence. Mathematical statistics is one important branch of
applied probability; other applications occur in such widely different fields as genetics,
psychology, economics, and engineering. Many workers have contributed to the theory
since Laplace's time; among the most important are Chebyshev, Markov, von Mises, and
Kolmogorov.
One of the difficulties in developing a mathematical theory of probability has been to arrive
at a definition of probability that is precise enough for use in mathematics, yet
comprehensive enough to be applicable to a wide range of phenomena. The search for a
widely acceptable definition took nearly three centuries and was marked by much
controversy. The matter was finally resolved in the 20th century by treating probability
theory on an axiomatic basis. In 1933 a monograph by a Russian mathematician A.
Kolmogorov outlined an axiomatic approach that forms the basis for the modern theory.
(Kolmogorov's monograph is available in English translation as Foundations of Probability
Theory, Chelsea, New York, 1950.) Since then the ideas have been refined somewhat and
probability theory is now part of a more general discipline known as measure theory."
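
De Méré's dice question is easy to settle with modern notation: the chance of no double six in one throw of two dice is 35/36, so the probability of at least one double six in 24 independent throws is

1 - \left(\frac{35}{36}\right)^{24} \approx 0.491,

just under one half, so the even-money bet is indeed slightly unfavorable, exactly as de Méré's own calculations indicated.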

Discrete Probability Distributions


In statistics and probability theory, a discrete probability distribution is a
distribution characterized by a probability mass function. Such distributions
arise, for example, in computer programs that make equal-probability random
selections among a number of choices. The most common examples of discrete
probability distributions are the binomial, Poisson, geometric, and Bernoulli
distributions.

A random variable that belongs to a discrete distribution is called a discrete random
variable. A random variable can take two types of values: either fixed numbers, that is,
discrete values, or a range, that is, continuous values. In continuous-type data,
the values can lie anywhere within the specified range. For example, the number of
apples in the basket is discrete, while the time needed to drive from school to home is
continuous. So the probability distribution over a random variable X, where
X takes discrete values, is said to be a discrete probability
distribution.

Formula
When we say that the probability distribution of an experiment is discrete then
the sum of probabilities of all possible values of the random variable must be
equal to 1. That is if X is a discrete random variable, then,

\sum_{e} P(X = e) = 1

Here, the sum runs over all values e that the variable X can take.
For example, consider the experiment of tossing two coins, with sample space

S = {HH, HT, TH, TT}.

Let Y be the number of tails that occur. Clearly Y takes only the discrete values 0, 1, 2:

For Y = 0 (outcome HH): P(Y = 0) = 1/4
For Y = 1 (outcomes HT, TH): P(Y = 1) = 2/4
For Y = 2 (outcome TT): P(Y = 2) = 1/4

Adding all three, we get 1/4 + 2/4 + 1/4 = 1.

Thus we have verified our formula using a very common example.


The discrete probability distribution can always be represented in the form
of a table, as below:

Y    P(Y)
0    1/4 = 0.25
1    2/4 = 0.50
2    1/4 = 0.25

For any discrete probability distribution we can always find the mean, or
the expected value, by:

E[X] = \sum_{e} e \, P(X = e)

In the above example, the expected value = 0 + 2/4 + 2/4 = 1. But the expected
value need not equal 1; it can be any value.

Example
Example 1: Find the expected value of the following discrete distribution.

Y    P(Y)
0    0.30
1    0.20
2    0.25
3    0.15
4    0.10

Solution:

Y    P(Y)    Y·P(Y)
0    0.30    0
1    0.20    0.20
2    0.25    0.50
3    0.15    0.45
4    0.10    0.40

So expected value = 0 + 0.20 + 0.50 + 0.45 + 0.40 = 1.55
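
A quick check of this arithmetic in Python, as a direct translation of the table above:

    # Expected value of Example 1's distribution
    values = [0, 1, 2, 3, 4]
    probs  = [0.30, 0.20, 0.25, 0.15, 0.10]

    expected = sum(y * p for y, p in zip(values, probs))
    print(round(expected, 2))  # 1.55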


Example 2: We flip a coin 10 times. Find the probability that 6 heads are
obtained.
Solution:
We solve this using the binomial distribution.
A binomial distribution is written B(n, p), where n is the number of trials
made and p is the probability of success in each trial, so (1 − p) is the
probability of failure in each trial. If k is the number of successes out of
the n trials, the binomial probability is calculated as below:

P(X = k) = \binom{n}{k} p^k (1 - p)^{n - k}

The term \binom{n}{k} is known as the binomial coefficient and is calculated as:

\binom{n}{k} = \frac{n!}{k!\,(n - k)!}

Here, n = 10, k = 6, and p = 1/2 = 0.5, so 1 − p = 0.5.

Using this we get, P(X = 6) = 0.2051
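
This figure can be verified directly, for instance in Python with the standard-library binomial coefficient:

    # Binomial probability for n = 10 trials, k = 6 successes, p = 0.5
    from math import comb

    n, k, p = 10, 6, 0.5
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(round(prob, 4))  # 0.2051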

Continuous Probability Distribution


If a random variable is a continuous variable , its probability distribution is called
a continuous probability distribution .
A continuous probability distribution differs from a discrete probability distribution
in several ways.

The probability that a continuous random variable will assume a particular
value is zero.

As a result, a continuous probability distribution cannot be expressed in
tabular form.

Instead, an equation or formula is used to describe a continuous
probability distribution.

The equation used to describe a continuous probability distribution is called
a probability density function (pdf). All probability density functions satisfy the
following conditions:

The random variable Y is a function of X; that is, y = f(x).

The value of y is greater than or equal to zero for all values of x.

The total area under the curve of the function is equal to one.

The charts below show two continuous probability distributions. The first chart
shows a probability density function described by the equation y = 1 over the
range of 0 to 1 and y = 0 elsewhere.

y=1
The next chart shows a probability density function described by the equation y =
1 - 0.5x over the range of 0 to 2 and y = 0 elsewhere. The area under the curve is
equal to 1 for both charts.

y = 1 - 0.5x
The probability that a continuous random variable falls in the interval
between a and b is equal to the area under the pdf curve between a and b. For
example, in the first chart above, the shaded area shows the probability that the
random variable X will fall between 0.6 and 1.0. That probability is 0.40. And in
the second chart, the shaded area shows the probability of falling between 1.0
and 2.0. That probability is 0.25.
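
The second value can be checked by integrating the pdf over the interval:

P(1 \le X \le 2) = \int_{1}^{2} (1 - 0.5x)\,dx = \left[\, x - 0.25x^2 \,\right]_{1}^{2} = 1 - 0.75 = 0.25.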

Compound interest

Effective interest rates

The effect of earning 20% annual interest on an initial $1,000 investment at various compounding
frequencies

The addition of interest to the principal sum of a loan or deposit is
called compounding. Compound interest is interest on interest. It is the result of
reinvesting interest, rather than paying it out, so that interest in the next period is
then earned on the principal sum plus previously accumulated interest.
Compound interest is standard in finance and economics.

Compound interest may be contrasted with simple interest, where interest is not
added to the principal, so there is no compounding. The simple annual interest
rate is the interest amount per period, multiplied by the number of periods per
year. The simple annual interest rate is also known as the nominal interest
rate (not to be confused with nominal as opposed to real interest rates).
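
A short sketch reproducing the scenario in the caption above (a 20% nominal rate on $1,000, held for one year at different compounding frequencies):

    nominal = 0.20  # 20% nominal annual rate
    for n, label in [(1, "annually"), (2, "semiannually"), (4, "quarterly"),
                     (12, "monthly"), (365, "daily")]:
        value = 1000 * (1 + nominal / n) ** n  # value after one year
        print(f"compounded {label}: ${value:,.2f}")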
