
Dichotomous Search method

(One-Dimensional Elimination
Numerical Method)
By
Prof. N. K. Jain
Delhi College of Engineering

What we have learnt
in the
previous lectures

OPTIMIZATION

• Optimization is the process of finding the best result under given conditions.

• Optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function.

• Mathematical programming techniques are useful in finding the minimum of a function of several variables under a prescribed set of constraints.

Optimization is a natural process

• Plant orientation
• Darwin's theory of survival of the fittest
• Businessman
• Housewife
• A student coming to college to study, for an exam, or for an interview, etc.

• The historical developments in optimization.
• Definition of a general optimization problem.
• Classification of optimization techniques.
• Applications of optimization in various engineering fields.

Classification of optimization problems

1. Linear/nonlinear programming
2. Single-objective/multi-objective programming
3. Constrained/unconstrained programming
4. One-dimensional/multidimensional programming
5. Direct/first-order/second-order/quasi-Newton techniques
6. Quadratic programming
7. Deterministic/statistical/probabilistic programming
8. Analytical/numerical problems
9. Static/dynamic programming
10. Optimal/non-optimal programming
11. Integer programming
• Some Important Points from
Lecture 2

The basic philosophy of most of the numerical methods of optimization is to produce a sequence of improved approximations to the optimum according to the following scheme.

Assumption of Unimodality

A unimodal function is one that has only one peak (maximum) or valley (minimum) in a given interval.

[Figure: three examples of unimodal functions on the interval (a, b)]
CLASSIFICATION OF
NUMERICAL METHODS

1.ELIMINATION METHODS

2.INTERPOLATION METHODS

Elimination methods are:
1. Unrestricted search
2. Exhaustive search
3. Fibonacci method
4. Golden section methods
5. Dichotomous search
6. Interval Halving method.

Various One Dimensional Techniques
Covered in Lecture 2

1. Unrestricted search with fixed step size.
2. Unrestricted search with accelerated step size.
3. Exhaustive search.

• Some Important Points from
Lecture 3

INTERVAL HALVING METHOD

• Elimination method.
• Exactly one half of the current
interval of uncertainty is deleted in
every stage.

STEPS OF SOLUTION

STEP 1:- Division of initial interval of uncertainty.
Divide the initial interval of uncertainty L0 = (a, b) into four equal parts and label the intermediate points as x1, x0, and x2.

Interval Halving Method


STEP 2:- Evaluation of function.
Find the value of the function at the three interior points as f1 = f(x1), f0 = f(x0), and f2 = f(x2).


STEP 3:- Reduction of interval of uncertainty.
3(a). If f2 > f0 > f1, delete (x0, b); label x1 and x0 as the new x0 and b, respectively, and go to step 4.

[Figure: f2 > f0 > f1 over (a, x1, x0, x2, b); the portion (x0, b) is eliminated]
3(b). If f1 > f0 > f2, delete (a, x0); label x2 and x0 as the new x0 and a, respectively, and go to step 4.

[Figure: f1 > f0 > f2 over (a, x1, x0, x2, b); the portion (a, x0) is eliminated]


3(c). If f1 > f0 and f2 > f0, delete (a, x1) and (x2, b); label x1 and x2 as the new a and b, respectively, and go to step 4.

[Figure: f1 > f0 and f2 > f0 over (a, x1, x0, x2, b); the portions (a, x1) and (x2, b) are eliminated]


STEP 4:- Test of convergence.
If the new interval of uncertainty L = (b − a) satisfies the convergence criterion L ≤ ε, where ε is a small quantity, stop the procedure. Otherwise, set the new L0 = L and go to step 1.


REMARKS
Interval of uncertainty remaining at the end of n experiments:

Ln = (1/2)^((n−1)/2) L0

where n ≥ 3 and odd. Note that n need not be a multiple of 3.
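The four steps above can be sketched in code; this is a minimal illustration (the test function, interval, and tolerance are made-up inputs, not the lecture's own program):

```python
def interval_halving(f, a, b, eps=1e-4):
    """Interval halving: each stage keeps the half of (a, b)
    that must contain the minimum of a unimodal f."""
    x0 = (a + b) / 2          # step 1: middle point of the interval
    f0 = f(x0)
    while (b - a) > eps:      # step 4: convergence test L <= eps
        x1 = a + (b - a) / 4  # quarter points x1 and x2
        x2 = b - (b - a) / 4
        f1, f2 = f(x1), f(x2)
        if f1 < f0:           # case 3(a): minimum lies in (a, x0)
            b, x0, f0 = x0, x1, f1
        elif f2 < f0:         # case 3(b): minimum lies in (x0, b)
            a, x0, f0 = x0, x2, f2
        else:                 # case 3(c): f1 > f0 and f2 > f0, keep (x1, x2)
            a, b = x1, x2
    return (a + b) / 2

# Made-up unimodal test function with its minimum at x = 2
xmin = interval_halving(lambda x: (x - 2.0) ** 2, 0.0, 3.0)
```

Each pass through the loop deletes exactly one half of the current interval of uncertainty, as the slides describe.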


Dichotomous Search method
(One-Dimensional Elimination
Numerical Method)

By
Prof. N. K. Jain
Delhi College of Engineering

INDEX
-Introduction
-Limitation
-Basic Process
-Result
-Flow Chart
-Example

BASIC BRACKETING ALGORITHM

[Figure: points X1 and X2 placed inside the bracket (a, b)]

Two-point search (dichotomous search) for finding the solution to minimizing ƒ(x):
1) Assume an interval [a, b].
2) Find x1 = a + (b − a)/2 − δ/2 and x2 = a + (b − a)/2 + δ/2, where δ is the resolution.
3) Compare ƒ(x1) and ƒ(x2).
4) If ƒ(x1) < ƒ(x2), then eliminate x > x2 and set b = x2. If ƒ(x1) > ƒ(x2), then eliminate x < x1 and set a = x1. If ƒ(x1) = ƒ(x2), then pick another pair of points.
5) Continue placing point pairs until the interval < 2δ.
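Steps 1)–5) can be sketched as follows; a minimal illustration on a made-up test function (for simplicity, ties in step 4 are folded into the second branch rather than re-sampled):

```python
def dichotomous_search(f, a, b, delta):
    """Two-point (dichotomous) search for the minimum of a
    unimodal f on [a, b]; delta is the resolution."""
    while (b - a) >= 2 * delta:   # step 5: stop when interval < 2*delta
        mid = a + (b - a) / 2
        x1 = mid - delta / 2      # step 2: pair of points about the center
        x2 = mid + delta / 2
        if f(x1) < f(x2):         # steps 3-4: compare and eliminate
            b = x2                # minimum cannot lie to the right of x2
        else:                     # ties folded in here for simplicity
            a = x1                # minimum cannot lie to the left of x1
    return (a + b) / 2

# Made-up unimodal test function with its minimum at x = 1
xmin = dichotomous_search(lambda x: (x - 1.0) ** 2, 0.0, 3.0, delta=1e-4)
```

Each pass eliminates almost half of the current interval, so the bracket shrinks toward the resolution δ.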

DICHOTOMOUS SEARCH

Dichotomous Search is a sequential search method in which the result of any experiment influences the location of the subsequent experiments.

LIMITATIONS
The function has to be a unimodal function, where it is assumed that there is only one maximum or minimum in the initial interval of uncertainty.

THE BASIC PROCESS

In dichotomous search, two experiments are placed as close as possible at the center of the interval of uncertainty.

Based on the relative values of the objective function at the two points, almost half the interval of uncertainty is eliminated.

DICHOTOMOUS SEARCH
Main Entry: di·chot·o·mous. Pronunciation: dI-'kät-&-m&s also d&-. Function: adjective: dividing into two parts.

● Conceptually simple idea: try to split the interval in half in each step.

[Figure: interval (a0, b0) of length L0; a pair of experiments d apart (d << L0) about the midpoint reduces the interval to roughly L0/2 after comparing f at the two points]
Dichotomous search (2)

● Interval size after 1 step (2 evaluations):

L1 = (L0 + d)/2

● Interval size after m steps (2m evaluations):

Lm = L0/2^m + d(1 − 1/2^m)

● Proper choice for d, relative to the ideal reduction Lm^ideal = L0/2^m:

d = Lm^ideal/10 = L0/(10 · 2^m)
Dichotomous search (3)

● Example: m = 10:

L10^ideal = L0/2^10 = L0/1024 (the ideal interval reduction)

d = L10^ideal/10 = L0/10240
[Figure: a pair of experiments X1 and X2, placed d/2 on either side of the center of the interval (Xs, Xf) of length L0, with function values f1 and f2 compared]
The above figure shows the position of the two experiments, where δ is a small positive number chosen so that the two experiments give significantly different results.
The new interval of uncertainty is given by (L0/2 + δ/2).

The next pair of experiments is conducted by taking a pair of points at the center of the current interval of uncertainty.

RESULT:-
This results in the reduction of the interval of uncertainty by nearly A FACTOR OF TWO.
The table shows the interval of uncertainty at the end of different pairs of experiments:

No. of experiments    Final interval of uncertainty
2                     (1/2)(L0 + δ)
4                     (1/2)((L0 + δ)/2) + δ/2
6                     (1/2)(((L0 + δ)/4) + δ/2) + δ/2

In general, the final interval of uncertainty after conducting n experiments (n even) is given by

Ln = L0/2^(n/2) + δ(1 − 1/2^(n/2))

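The table entries and the closed form for Ln can be cross-checked numerically; a small sketch with made-up values of L0 and δ:

```python
L0, delta = 3.0, 0.0001

def L_closed(n):
    """Closed-form interval of uncertainty after n experiments (n even)."""
    m = n // 2                  # number of experiment pairs
    return L0 / 2**m + delta * (1 - 1 / 2**m)

# Recurrence from the table: each pair of experiments maps L to L/2 + delta/2
L = L0
for n in (2, 4, 6):
    L = L / 2 + delta / 2
    assert abs(L - L_closed(n)) < 1e-12
```

Unrolling the recurrence reproduces each table row, confirming the general formula term by term.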
FLOW CHART

START
DEFINE FUNCTION f(X)
READ A1, B1, ε, δ; SET K = 1
STEP: X1 = AK + (BK − AK)/2 − δ/2
      X2 = AK + (BK − AK)/2 + δ/2
IF |BK − AK| ≤ ε:
    IF f(X1) ≤ f(X2): Xopt = X1, ELSE Xopt = X2
    PRINT SOLUTION; STOP
ELSE:
    IF f(X1) ≤ f(X2): AK+1 = AK, BK+1 = X2
    ELSE: AK+1 = X1, BK+1 = BK
    K = K + 1; GO TO STEP
Let's take an example to illustrate.
Prob:- Find the minimum of the function

f(x) = 0.65 − 0.75/(1 + x²) − 0.65 x tan⁻¹(1/x)

using the Dichotomous method in the interval (0, 3) to achieve an accuracy of within 5% of the exact value, using δ = 0.0001.

Solution:

f(x) = 0.65 − 0.75/(1 + x²) − 0.65 x tan⁻¹(1/x)

Ln = L0/2^(n/2) + δ(1 − 1/2^(n/2))

Accuracy within 5% of the exact value requires
(1/2)(Ln/L0) ≤ 5/100
i.e. 1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2)) ≤ 1/10

Since δ = 0.0001 and L0 = 3,
1/2^(n/2) + (1/30000)(1 − 1/2^(n/2)) ≤ 1/10
which gives 2^(n/2) ≥ 10 (approximately).
Since n has to be even, this inequality gives the minimum admissible value of n as 8.

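The minimum admissible n can also be found by iterating the inequality directly; a small sketch using the problem's δ and L0:

```python
L0, delta = 3.0, 0.0001

def ratio(n):
    """Ln / L0 for an even number n of experiments."""
    m = n // 2
    return 1 / 2**m + (delta / L0) * (1 - 1 / 2**m)

# Step over even n until Ln/L0 <= 1/10 (the 5% accuracy requirement)
n = 2
while ratio(n) > 1 / 10:
    n += 2
```

The loop stops at n = 8, matching the minimum admissible value derived above.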
• The search is made as follows. The first two experiments are made at
x1 = (L0/2) − δ/2 = 1.49995
x2 = (L0/2) + δ/2 = 1.50005

[Figure: x1 = 1.49995 and x2 = 1.50005 at the center of the interval (Xs = 0, Xf = 3)]

With this, the function values are
f1 = −0.15407831 and
f2 = −0.154065228

Since f2 > f1, we delete (x2, 3) and obtain the new interval of uncertainty as (0, x2), i.e. (0, 1.50005). The second pair of experiments is conducted at
x3 = ((0 + 1.50005)/2) − (0.0001/2) = 0.749975
x4 = ((0 + 1.50005)/2) + (0.0001/2) = 0.750075

[Figure: x3 = 0.749975 and x4 = 0.750075 at the center of (0, 1.50005)]

which gives the function values
f3 = −0.282069886 and
f4 = −0.282043663.
Since f4 > f3, we delete (x4, 1.50005) and the new interval of uncertainty (0, x4) is (0, 0.750075).

The third pair of experiments will be conducted at
x5 = ((0 + 0.750075)/2) − (0.0001/2) = 0.3749875
x6 = ((0 + 0.750075)/2) + (0.0001/2) = 0.3750875

[Figure: x5 = 0.3749875 and x6 = 0.3750875 at the center of (0, 0.750075)]

which gives the function values
f5 = −0.302963728 and f6 = −0.302977898.

Since f5 > f6, we delete (0, x5), and the new interval of uncertainty (x5, 0.750075) is (0.3749875, 0.750075).

The final pair of experiments will be conducted at
x7 = ((0.3749875 + 0.750075)/2) − (0.0001/2) = 0.56248125
x8 = ((0.3749875 + 0.750075)/2) + (0.0001/2) = 0.56258125

[Figure: x7 = 0.56248125 and x8 = 0.56258125 at the center of (x5 = 0.3749875, 0.750075)]

which gives the function values
f7 = −0.306714385 and f8 = −0.306706715.

Since f8 > f7, we delete (x8, 0.750075), and the new interval of uncertainty (x5, x8) is (0.3749875, 0.56258125). The middle point of this interval can be taken as the optimum point, and hence

Xopt = (0.3749875 + 0.56258125)/2 = 0.468784375
fopt = f(Xopt) ≈ −0.3099

[Figure: Xopt at the middle of the final interval (x5 = 0.3749875, x8 = 0.56258125)]

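The whole worked example can be replayed in a few lines; this sketch runs the four experiment pairs on the stated function and interval (the function and constants come from the problem statement):

```python
from math import atan

def f(x):
    # Objective from the example: f(x) = 0.65 - 0.75/(1 + x^2) - 0.65 x tan^-1(1/x)
    return 0.65 - 0.75 / (1 + x * x) - 0.65 * x * atan(1 / x)

a, b, delta = 0.0, 3.0, 0.0001
for _ in range(4):                    # n = 8 experiments = 4 pairs
    mid = (a + b) / 2
    x1, x2 = mid - delta / 2, mid + delta / 2
    if f(x1) < f(x2):
        b = x2                        # minimum cannot lie in (x2, b)
    else:
        a = x1                        # minimum cannot lie in (a, x1)

x_opt = (a + b) / 2                   # middle of the final interval
```

Running this reproduces the final interval (0.3749875, 0.56258125) and x_opt = 0.468784375, consistent with the sequence of experiments above.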
Thanks
For your
patience
