CHAPTER 6
KNOWLEDGE-BASED WORKCELL ATTRIBUTE ORIENTED
DYNAMIC SCHEDULERS FOR FMS
6.1 INTRODUCTION
Revising the schedule by regeneration requires much time to provide an optimal or near-optimal solution. On the other hand, real-time systems are capable of providing solutions in a short time. Knowledge-based expert systems, which belong to the real-time category, are evolving to a great extent in order to handle dynamic scheduling environments. Several knowledge-based scheduling algorithms have been proposed and analysed (Chapter 2, section 2.4.2.5). This chapter addresses two knowledge-based scheduling schemes (Work Cell Attribute Oriented Dynamic Schedulers, 'WCAODSs') to control the flow of parts efficiently in real time for FMSs wherein the part mix varies continually with the planning horizon. This is an alternative approach to the off-line rescheduling schemes addressed in chapter 5, which generate schedules for 'n' jobs on 'm' WCs under negligible transportation times. The objective of the present work is to evolve a schedule knowledge base*, which furnishes WCwise-pdrs (a WCwise-pdr set**) for the minimum makespan performance criterion, based on the WC attributes*** to schedule the processes and regulate them in real time.
The concept of applying knowledge-based systems to scheduling is not new. The AI approaches have aimed to provide a control strategy that most favourably suits the status of the system, and have applied knowledge acquisition technologies of artificial intelligence to induce decision-making knowledge from the collected experience data.
Footnotes:
* Knowledge base: It is a set of IF-THEN rules generated through the learning process, which shows the relationship between the features extracted and the pdrs.
** WCwise-pdr set: It is the set of pdrs, one pdr for each WC, that resolves the conflicts between the contending jobs.
*** WC attributes: It is a set of parameters that describes the processing requirements for the given part mixes at each WC.
[Figure 6.1: Framework of the WCAODSs. The training example generation module generates a job matrix, obtains the best WC-pdr set with a genetic program, and extracts the attributes of the WCs; the run module extracts the WC attributes and calculates the features for the parts to be scheduled.]
the weights are refined with the learning rule so that some more information gain may be obtained. Hyperplanes are gradually added to the current hidden layer as long as the information entropy continuously decreases, until it becomes zero or no further improvement is in sight. Next, a new layer is added to the current network. The current layer uses the original training data and the outputs from all previously generated nodes (hyperplanes) as its input. The addition of new layers continues until a single hyperplane separates the training examples (i.e. they become linearly separable). This self-generated neural net then converts the training examples into decision trees.
Genetic CID3 algorithm
The convergence of the network with the CID3 algorithm depends on how the weight vector for each attribute is generated. The random generation of weight vectors in the CID3 algorithm results in more layers in the NN architecture. In this context, a genetic search process is proposed for the generation of the weight vectors in the CID3 algorithm, to obtain a precise NN architecture with fast convergence and fewer layers. The following four stages are involved in the GCID3 algorithm.
Stage I: Generation of initial node
First, the training data set is classified into +ve and -ve classes, and it becomes the starting node of any layer. Let there be 'N' training examples in a training data set. Define the pdr for which learning is carried out as the +ve class and the others as the -ve class. For example, if learning is done for the SPT rule, then the examples with the SPT rule are +ve class examples and the remaining become -ve examples. This classification separates the training data set into N+ +ve class examples and N- -ve class examples,

i.e. N = N+ + N-    ... (6.1)
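As an illustration of this one-vs-rest labelling, the following minimal Python sketch splits a set of training examples for a chosen pdr. The function name, the illustrative attribute values and the pdr coding (0 = SPT, 1 = LPT, 2 = EDT, 3 = MinSLK, assumed here to match the codes used later in this chapter) are all hypothetical:

    # One-vs-rest labelling of training examples for a given pdr (sketch).
    def label_examples(examples, target_pdr):
        """Split the N examples into N+ positive and N- negative (cf. eq. 6.1)."""
        labelled = [(attrs, 1 if pdr == target_pdr else 0) for attrs, pdr in examples]
        n_pos = sum(d for _, d in labelled)
        n_neg = len(labelled) - n_pos        # N = N+ + N-
        return labelled, n_pos, n_neg

    # e.g. learning the SPT rule (pdr code 0 by the coding assumed above):
    examples = [([6, 17, 0.541], 2), ([11, 16, 0.907], 1), ([7, 15, 0.562], 0)]
    labelled, n_pos, n_neg = label_examples(examples, target_pdr=0)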
Stage II: Addition of hyperplanes
In the second stage, nodes are continually generated by the gradual addition of hyperplanes/adalines (h) until the information entropy continuously decreases and becomes zero, or until no further improvement in the information entropy gain is in sight. The maximum number of nodes thus generated at hyperplane 'h' is 2^h. The number of nodes at any hyperplane will be less than this maximum because the nodes with zero entropy are not classified further. A GA, which evaluates the information entropy of the weight vectors of an initial population, determines the optimal initial weight vector (the weight vector that provides
maximum information gain) of a hyperplane. The steps of the proposed GA for the generation of the initial weight vector are given below. A feasible weight vector of a hyperplane classifies each of the 'R' nodes of the earlier classification in such a way that the classification satisfies the following conditions:
(i) N1r+ < Nr+
(ii) N1r- < Nr-
(iii) N0r+ < Nr+
(iv) N0r- < Nr-
(v) N1r + N0r = Nr
(vii) Both N1r+ and N1r- should not be equal to zero, and
(viii) Both N0r+ and N0r- should not be equal to zero.
where,
D_i = 1 for examples belonging to the +ve class
    = 0 for examples belonging to the -ve class.

Out_i = [1 + exp{-Σp=0..dim (MA_p × W_p)}]^(-1)    ... (6.6)

Out_i is the threshold value (logistic function) of training example 'i' and is the output of a training example.
MA_p is the value of the attribute 'p'.
W_p is the weight of WC attribute 'MA_p' (including the bias attribute MA_0).

Calculate the information entropy 'E'. It is the average of the entropies of all R nodes at that hyperplane and is given as equation 6.7.
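The following Python sketch shows equation 6.6 together with one reconstruction of the entropy computation (equation 6.7 itself is not reproduced on this page); the weighted-average form below is an assumption, but it agrees with the worked value 0.93407 obtained later in this chapter for the 7+/13- split of 20 examples:

    import math

    def out_i(ma, w):
        """Eq. 6.6: logistic output of a training example; ma includes the bias
        attribute MA0 = 1 and w the matching weights."""
        return 1.0 / (1.0 + math.exp(-sum(a * p for a, p in zip(ma, w))))

    def entropy(nodes, n_total):
        """Information entropy E: entropy of each node weighted by its share of
        all N examples, summed over the R nodes (reconstructed form of eq. 6.7)."""
        e = 0.0
        for n_pos, n_neg in nodes:
            for c in (n_pos, n_neg):
                if c > 0:
                    e -= (c / n_total) * math.log2(c / (n_pos + n_neg))
        return e

    print(entropy([(7, 13)], n_total=20))   # 0.93407..., the initial E of the example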
Further, the learning rate, which is set to 0.01, refines the weights so that some more information gain may be obtained. The changes in the weight vector and in the entropy are calculated using equations 6.8 and 6.9 respectively.
where,
ρ = learning rate (0.01);

∂E/∂W_p = Σr=1..R [ (∂E/∂N1r+) (∂N1r+/∂W_p) + (∂E/∂N1r) (∂N1r/∂W_p) ]    ... (6.8a)

∂E/∂N1r+ = (1/(N ln 2)) [ ln((N1r - N1r+)/N1r+) - ln((Nr - N1r - Nr+ + N1r+)/(Nr+ - N1r+)) ]    ... (6.8b)

∂E/∂N1r = (1/(N ln 2)) [ ln(N1r/(N1r - N1r+)) - ln((Nr - N1r)/(Nr - N1r - Nr+ + N1r+)) ]    ... (6.8c)

∂N1r+/∂W_p = Σi D_i Out_i (1 - Out_i) MA_ip    ... (6.8d)

ΔE = Σr=1..R [ (∂E/∂N1r+) ΔN1r+ + (∂E/∂N1r) ΔN1r ]    ... (6.9)

where,
ΔN1r+ = Σp=0..dim (∂N1r+/∂W_p) ΔW_p    ... (6.9a)
ΔN1r = Σp=0..dim (∂N1r/∂W_p) ΔW_p    ... (6.9b)
By exponentially scaling the value of ΔE, the fitness value of a chromosome is found.
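For instance, this exponential scaling might take the following one-line form in Python (a sketch only; the scaling constant k is an assumption, as no value is given in the text):

    import math

    def fitness(delta_e, k=10.0):
        """Fitness of a chromosome by exponentially scaling the entropy change."""
        return math.exp(k * delta_e)     # larger ΔE (information gain) -> fitter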
Step 8: Find the change in the weight vector and the change in the information entropy in a hyperplane using equations 6.8 and 6.9.
Step 9: If ΔE > 0.0 then E_h^(k+1) = E_h^k + ΔE and W_h^(k+1) = W_h^k + ΔW_h. Go to step 10. Otherwise record E_h^k as E_h and W_h^k as W_h^opt. Go to step 11.
Step 11: If E_h = E_(h-1), go to step 12. Otherwise add a hyperplane to the current layer and go to step 3.
Step 12: If there is only one hyperplane (i.e. h = 1) at the current layer, STOP (learning has been done). Otherwise, add a new layer (i.e. set L = L + 1) to the current network and add a hyperplane (set h = 1) to this layer. Go back to step 1 with inputs from both the original training data and the outputs from all previously generated nodes.
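Putting the steps together, the overall GCID3 control flow can be sketched as below. The callables ga_best_weights, refine_weights, entropy_after and augment are stand-ins (assumed names) for the GA search of the initial weight vector, the delta-rule refinement of equations 6.8-6.9, the entropy of equation 6.7, and the augmentation of the inputs with the generated node outputs, respectively:

    def gcid3(train_data, ga_best_weights, refine_weights, entropy_after,
              augment, eps=1e-9):
        """Skeleton of the GCID3 learning loop (steps 1-12); a sketch only."""
        network, inputs = [], train_data
        while True:
            layer = []
            prev_e = entropy_after(inputs, layer)      # entropy before any split
            while True:                                # stage II: add hyperplanes
                w0 = ga_best_weights(inputs, layer)    # GA picks initial weights
                w = refine_weights(inputs, layer, w0)  # delta rule, eqs. 6.8/6.9
                layer.append(w)
                e = entropy_after(inputs, layer)
                if e <= eps or prev_e - e <= eps:      # zero entropy or no gain
                    break
                prev_e = e
            network.append(layer)
            if len(layer) == 1:                        # linearly separable: STOP
                return network
            # next layer: original data plus outputs of all generated nodes
            inputs = augment(train_data, network)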
Deduction of scheduling rules
The neural net generated converts its hidden layers into scheduling decision rules in the following manner. The features (Feature_Lhn) corresponding to each node 'n' of hyperplane 'h' in layer 'L', which relate the training examples and the decision tree of the generated NN architecture, are found with the connection weights obtained, using the relation given in equation 6.11:

Feature_Lhn = Σp=1..dim (MA_p × W_p^h,opt) + W_0^h,opt    ... (6.11)

Then, a set of 'IF ... THEN' rules with one or more combinations of features is formulated for every terminal node that contains examples of the positive class only.
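In executable form, equation 6.11 and one such deduced rule might look as follows (a sketch; the dictionary of optimal weights keyed by node labels is an assumed representation, and the thresholds shown are those of the worked example later in this chapter):

    def feature_value(ma, w_opt, w0_opt):
        """Eq. 6.11: Feature_Lhn = sum_p(MA_p * W_p^h,opt) + W_0^h,opt."""
        return sum(a * p for a, p in zip(ma, w_opt)) + w0_opt

    def rule_lpt(ma, weights):
        """An extracted IF...THEN rule, e.g. RULE 3 of the worked example:
        IF Feature111 > 0 AND Feature121 > 0 AND Feature131 <= 0 THEN LPT."""
        f = {node: feature_value(ma, w, w0) for node, (w, w0) in weights.items()}
        return f["111"] > 0 and f["121"] > 0 and f["131"] <= 0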
6.2.2.2 SKAL module of WCAODS2
The SKAL module of WCAODS2 employs a Classification Decision Tree (CDT) algorithm. The framework of the learning tree formulated, the structure of the CDT, and the mechanism of learning with the CDT algorithm are dealt with in this section.
General framework of the learning tree
A tree structure is framed to establish the relationship between the WC attributes and their corresponding pdrs, which is hereafter referred to as the learning tree. The tree structure is formulated as a set of layers. Each layer 'L_Ap' corresponds to one attribute 'A_p' and comprises one or more nodes 'N_Ap', depending upon the number of nodes and the number of classes of attribute 'A_(p-1)' in the previous layer 'L_A(p-1)'. Each node in layer 'L_Ap' represents a WC attribute 'A_p' and divides into several branches, the number of branches being equal to the number of classes 'NC_Ap' of the attribute 'A_p'. For example, if the number of classes 'NC_Ap' of attribute 'A_p' and the number of nodes in layer 'L_Ap' are 3 and 4 respectively, then the number of nodes 'N_A(p+1)' in layer 'L_A(p+1)' is 12 (i.e. 3 × 4). The corresponding tree diagram is shown in figure 6.2.
Structure of CDT
The first step in the formulation of the CDT structure for learning is the fixation of the different classes of each of the WC attributes. The number of classes 'NC_Ap' of an attribute 'A_p' depends on the range within which the data of the attribute 'A_p' fall. If the range is large, then more classes are to be fixed. In this work, each attribute is divided into two classes, classified with a certain decision formula for each attribute. The decision formula compares the attribute values with a standard value established for each set of training examples using one of the relational operators <, > and =. This fits all attributes into either of the two classes. They are then specified as either 0 or 1, depending upon whether or not they satisfy the condition,

i.e. NC_Ap = 2 and C(A_p) = 0 or 1, for all A_p.

The decision rules used to find the classes of each attribute for the generation of the tree structure are given in TABLE 6.1.
TABLE 6.1 Decision rules for fixing the classes of the WC attributes

Attribute                               Standard value                           > standard   Otherwise
A1  No. of operations in a WC           Half the number of jobs in the part      Class 1      Class 0
                                        mixes ('n/2')
A2  Average processing time of all      The mean value of the range given        Class 1      Class 0
    processes in a WC                   during problem generation
A3  Total time required for all         No. of jobs multiplied by the mean       Class 1      Class 0
    operations in a WC                  value of the range given during
                                        problem generation
A4  Variance of the processing times    Average of the training set variance     Class 1      Class 0
    of the processes within a WC        (i.e. total variance of the samples
                                        generated divided by the number of
                                        examples generated)
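A direct transcription of TABLE 6.1 into code could read as follows (a sketch; the parameter names are assumptions):

    def attribute_classes(ops_count, avg_time, total_time, variance,
                          n_jobs, range_mean, train_var_mean):
        """Classes per TABLE 6.1: 1 if an attribute exceeds its standard, else 0."""
        return (
            1 if ops_count  > n_jobs / 2.0        else 0,   # A1 vs 'n/2'
            1 if avg_time   > range_mean          else 0,   # A2 vs range mean
            1 if total_time > n_jobs * range_mean else 0,   # A3 vs n * range mean
            1 if variance   > train_var_mean      else 0,   # A4 vs training variance
        )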
With this classification scheme, the tree structure is formulated as five layers. The top four layers are for the attributes A1 to A4, and the fifth layer consists of the 4 classes of pdr rules (0, 1, 2 and 3). The CDT that is structured for learning is shown in figure 6.3. This results in 64 output nodes (i.e. 4 layers of 2 classes and one layer of 4 classes), and every node is identified with an array of five dimensions denoted as counter[a][b][c][d][cp], where the values of a, b, c and d are either 0 or 1, and the value of cp, which indicates the pdr code, is 0, 1, 2 or 3.
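The five-dimensional counter array and its use can be sketched as follows (helper names assumed; learning tallies the best pdr observed for each attribute-class path, and classification reads the tally back):

    # counter[a][b][c][d][cp]: a..d are the classes (0/1) of A1..A4,
    # cp the pdr code (0..3); 2*2*2*2*4 = 64 cells in all.
    counter = [[[[[0] * 4 for _ in range(2)] for _ in range(2)]
                for _ in range(2)] for _ in range(2)]

    def record(a, b, c, d, cp):          # learning phase
        counter[a][b][c][d][cp] += 1

    def best_pdr(a, b, c, d):            # classification phase
        cell = counter[a][b][c][d]
        return cell.index(max(cell))     # pdr most often optimal for this path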
6.2.3 RN module
The WCwise-pdr set for the set of parts that requires processing at each decision point or planning horizon is evolved through the decision rules arrived at earlier with the SKAL module. These pdrs generate the schedule and control the flow of parts during the planning period or until a new part arrives at the system.
Step 1: Read the job data.
Calculate the WC attributes.
Step 2: For every WC,
    In WCAODS1:
        Calculate the features.
        Count the number of rules that satisfy each pdr.
        Select the pdr that has the maximum count.
    In WCAODS2:
        Find the classes of each WC attribute.
        Select the pdr corresponding to the set of classes evolved.
Step 3: Generate schedule using GT procedure with the WCwise-pdr set evolved.
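The two branches of step 2 can be sketched as below (hypothetical names; the rule base is taken to be a list of (predicate, pdr) pairs for WCAODS1, and the CDT knowledge base a mapping from attribute-class tuples to a pdr for WCAODS2):

    def select_pdrs_wcaods1(wc_features, rule_base):
        """WCAODS1: count the rules satisfied per pdr at each WC, take the max."""
        pdr_set = []
        for feats in wc_features:
            votes = [0, 0, 0, 0]                     # SPT, LPT, EDT, MinSLK
            for predicate, pdr in rule_base:
                if predicate(feats):
                    votes[pdr] += 1
            pdr_set.append(votes.index(max(votes)))
        return pdr_set

    def select_pdrs_wcaods2(wc_classes, cdt_lookup):
        """WCAODS2: look up the pdr for each WC's attribute-class tuple."""
        return [cdt_lookup[classes] for classes in wc_classes]

    # Either WCwise-pdr set then drives the GT schedule generation of step 3.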
[Job matrix for the illustrative example (jobs 3-15); the table heading and jobs 1-2 fall on the preceding page.]

Job no.   WC sequence (k = 1..5)   Processing times 'Tij'
 3        2                        13
 4        2                        24
 5        3 1                      16 10
 6        2                        22
 7        2 1 3 5                  18 23 16 25
 8        1 3 2                    21 10 11
 9        5                        23
10        4                        16
11        5 2 1 4 3                25 12 24 18 14
12        4 3 2                    23 22 10
13        5 4 3                    24 14 13
14        5 4 1 2 3                19 24 11 23 18
15        4 2                      19 11
TABLE 6.3 Optimal WCwise-pdr set and its corresponding makespan time
WC number pdr code
WC1 2
WC2 1
WC3 0
WC4 1
WC5 1
Makespan time of the schedule 194
Calculation of WC attributes
WC attributes derived for WC1 are given below.
MA1 = Total number of operations in WC1 = 6
By repeating the above steps three more times, twenty examples that are given in
TABLE 6.5 have been generated to illustrate the SKAL module.
TABLE 6.5 Training examples generated to illustrate the SKAL module

 1   1    6   17   0.541   1   2   3   0   0   16   2.167   2
 2   1   11   16   0.907   5   2   3   1   0   19   1.564   1
 3   1    7   15   0.562   1   2   2   0   2   14   1.355   0
 4   1    8   19   0.809   3   4   0   1   0   15   1.179   1
 5   1    6   21   0.655   5   0   0   1   0   17   1.772   1
 6   1   10   23   0.690   4   2   1   3   0   24   1.183   0
 7   1   10   20   0.611   1   5   3   1   0   24   1.389   0
 8   1   11   21   0.690   7   2   1   0   1   23   1.282   1
 9   1    7   25   0.519   1   2   3   1   0   23   1.187   2
10   1   13   24   0.929   7   2   2   1   1   22   1.080   0
11   1   18   30   0.823   9   4   3   0   2   29   1.439   0
12   1   14   28   0.591   3   3   6   2   0   29   1.589   1
13   1   15   26   0.608   2   3   2   5   3   32   1.589   2
14   1   17   30   0.786   6   3   4   3   1   29   1.428   1
15   1   16   31   0.768   5   5   2   2   2   28   1.557   0
16   1    6   18   0.529   0   1   4   0   1   24   1.453   1
17   1    7   22   0.740   2   3   1   1   0   22   2.231   0
18   1    6   25   0.740   2   1   0   3   0   21   1.312   3
19   1    7   18   0.615   3   2   1   1   0   22   2.119   0
20   1    5   22   0.534   3   1   0   0   1   23   1.929   0
This initial classification results in 7 +ve and 13 -ve class examples. The information entropy 'E' with this initial classification is 0.93407 {i.e. E = -(7/20) log2(7/20) - (13/20) log2(13/20)}. After the learning process, a four-layer neural network consisting of eight hyperplanes is constructed, as depicted in figure 6.4. It is to be noted that the 20 training examples are classified by three hyperplanes in the first layer, two hyperplanes in the second layer, two hyperplanes in the third layer, and a single hyperplane of highest dimension in the final layer. The connection weights obtained for each hyperplane in each layer are given in TABLE 6.6. Besides, the corresponding decision trees and information entropies of all layers are given in figures 6.5a, 6.5b, 6.5c and 6.5d.
[Figures 6.5c and 6.5d: decision trees and information entropies of layers 3 and 4. Layer 3: the entropy of 0.93407 is reduced to 0.25986 by Feature311 and to 0.00000 by Feature321. Layer 4: the entropy of 0.93407 is reduced to 0.00000 by Feature411.]
The decision rules resulting from the hidden layers of the NN architecture are as follows:

RULE 3: IF Feature111 > 0 AND Feature121 > 0 AND Feature131 <= 0 THEN LPT
TABLE 6.7 Job matrix for generating a set of training examples in WCAODS2

Job no. 'i'   WC no. 'j' / Processing time 'Tij' (job matrix)
              k = 1    k = 2    k = 3
1             1        2        -
              39       34       -
2             1        2        3
              40       45       27
3             3        1        -
              41       28       -
Extraction of attributes:
The WC attributes 'A_p', along with the standard 'SA_p' of each attribute, extracted for the above job matrix are given in TABLE 6.9.
A sufficient number of training examples is built by repeating the above steps.
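For concreteness, the attribute extraction for this 3-job example might be sketched as follows (names assumed; the variance is taken here as the population variance of the WC's processing times):

    def wc_attributes(job_matrix, wc):
        """A1-A4 for one WC from a job matrix of (WC, time) operation pairs."""
        times = [t for ops in job_matrix for w, t in ops if w == wc]
        a1 = len(times)                              # no. of operations in the WC
        a3 = float(sum(times))                       # total time of all operations
        a2 = a3 / a1 if a1 else 0.0                  # average processing time
        a4 = sum((t - a2) ** 2 for t in times) / a1 if a1 else 0.0   # variance
        return a1, a2, a3, a4

    # The job matrix of TABLE 6.7:
    jobs = [[(1, 39), (2, 34)], [(1, 40), (2, 45), (3, 27)], [(3, 41), (1, 28)]]
    print(wc_attributes(jobs, wc=1))   # WC1: 3 operations totalling 107 time units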
The schedule knowledge base rules extracted from the decision tree are presented below:

2. IF feature111 > 0 AND feature121 > 0 AND feature131 <= 0 AND feature142 <= 0 THEN LPT
3. IF feature111 > 0 AND feature121 > 0 AND feature131 <= 0 AND feature142 > 0 AND feature153 <= 0 AND feature166 > 0 THEN LPT
4. IF feature111 > 0 AND feature121 > 0 AND feature131 <= 0 AND feature142 > 0 AND feature153 > 0 AND feature165 > 0 AND feature179 <= 0 THEN LPT
5. IF feature111 > 0 AND feature121 > 0 AND feature131 <= 0 AND feature142 > 0 AND feature153 > 0 AND feature165 > 0 AND feature179 > 0 AND feature1817 > 0 AND feature1933 > 0 THEN LPT.
6. IF feature111 > 0 AND feature121 > 0 AND feature131 <= 0 AND feature142 > 0 AND feature153 <= 0 AND feature166 <= 0 AND feature1712 <= 0 AND feature1824 <= 0 AND feature1948 <= 0 THEN LPT.
7. IF feature211 > 0 AND feature221 > 0 AND feature231 <= 0 AND feature242 > 0 AND feature253 > 0 THEN LPT.
8. IF feature211 > 0 AND feature221 > 0 AND feature231 <= 0 AND feature242 > 0 AND feature253 <= 0 AND feature266 > 0 THEN LPT.
10. IF feature211 <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 <= 0 THEN LPT
11. IF feature211 <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 > 0 AND feature2513 > 0 AND feature2626 <= 0 THEN LPT
12. IF feature211 <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 > 0 AND feature2513 > 0 AND feature2625 <= 0 THEN LPT
14. IF feature311 > 0 AND feature321 <= 0 AND feature332 <= 0 THEN LPT
15. IF feature311 <= 0 AND feature322 > 0 AND feature333 <= 0 AND feature346 > 0 AND feature3511 > 0 THEN LPT
16. IF feature411 <= 0 AND feature422 > 0 AND feature433 > 0 AND feature445 > 0 THEN LPT
17. IF feature411 <= 0 AND feature422 <= 0 AND feature434 <= 0 AND feature448 > 0 THEN LPT
18. IF feature411 > 0 AND feature421 <= 0 AND feature432 <= 0 THEN LPT
20. IF feature511 > 0 AND feature521 <= 0 AND feature532 > 0 THEN LPT.

2. IF feature111 <= 0 AND feature122 > 0 AND feature133 <= 0 AND feature146 <= 0 AND feature1512 <= 0 THEN rule EDT
3. IF feature111 <= 0 AND feature122 > 0 AND feature133 <= 0 AND feature146 <= 0 AND feature1512 > 0 AND feature1623 <= 0 THEN rule EDT.
4. IF feature111 <= 0 AND feature122 > 0 AND feature133 <= 0 AND feature146 <= 0 AND feature1512 > 0 AND feature1623 > 0 AND feature1745 <= 0 AND feature1890 <= 0 THEN rule EDT.
5. IF feature111 <= 0 AND feature122 > 0 AND feature133 <= 0 AND feature146 <= 0 AND feature1512 > 0 AND feature1623 > 0 AND feature1745 <= 0 AND feature1890 > 0 AND feature19179 > 0 THEN rule EDT.
6. IF feature111 <= 0 AND feature122 > 0 AND feature133 <= 0 AND feature146 <= 0 AND feature1512 > 0 AND feature1623 > 0 AND feature1745 <= 0 AND feature1890 > 0 AND feature19179 <= 0 AND feature110358 <= 0 AND feature111716 > 0 THEN rule EDT.
7. IF feature111 <= 0 AND feature122 > 0 AND feature133 <= 0 AND feature146 <= 0 AND feature1512 > 0 AND feature1623 > 0 AND feature1745 <= 0 AND feature1890 > 0 AND feature19179 <= 0 AND feature110358 > 0 AND feature111715 > 0 THEN rule EDT.
8. IF feature211 <= 0 AND feature222 > 0 AND feature233 > 0 AND feature245 <= 0 THEN rule EDT.
9. IF feature211 <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 <= 0 THEN rule EDT.
10. IF feature211 <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 > 0 AND feature2513 <= 0 AND feature2626 <= 0 AND feature2752 <= 0 THEN rule EDT.
11. IF feature211 <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 > 0 AND feature2513 <= 0 AND feature2626 > 0 AND feature2751 <= 0 AND feature28102 > 0 THEN rule EDT.
12. IF feature211 <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 > 0 AND feature2513 <= 0 AND feature2626 <= 0 AND feature2752 > 0 AND feature28103 <= 0 THEN rule EDT.
13. IF feature311 <= 0 AND feature322 > 0 AND feature333 > 0 THEN rule EDT.
14. IF feature311 <= 0 AND feature322 > 0 AND feature333 <= 0 AND feature346 > 0 THEN rule EDT.
15. IF feature311 <= 0 AND feature322 > 0 AND feature333 <= 0 AND feature346 <= 0 AND feature3512 <= 0 THEN rule EDT.
16. IF feature311 <= 0 AND feature322 > 0 AND feature333 <= 0 AND feature346 <= 0 AND feature3512 > 0 AND feature3623 > 0 THEN rule EDT.
17. IF feature311 <= 0 AND feature322 <= 0 AND feature334 > 0 AND feature347 <= 0 THEN rule EDT.
19. IF feature411 <= 0 AND feature422 <= 0 AND feature434 > 0 THEN rule EDT.
2. IF feature111 > 0 AND feature121 > 0 AND feature131 <= 0 THEN rule MinSLK.
3. IF feature111 <= 0 AND feature122 <= 0 AND feature134 > 0 AND feature147 <= 0 THEN rule MinSLK.
4. IF feature211 <= 0 AND feature222 > 0 AND feature233 > 0 AND feature245 <= 0 THEN rule MinSLK.
5. IF feature211 <= 0 AND feature222 > 0 AND feature233 <= 0 AND feature246 > 0 AND feature2511 > 0 THEN rule MinSLK.
7. IF feature311 > 0 AND feature321 <= 0 AND feature332 <= 0 THEN rule MinSLK.
8. IF feature311 > 0 AND feature321 <= 0 AND feature332 > 0 AND feature343 <= 0 THEN rule MinSLK.
2. IF feature111 <= 0 AND feature122 > 0 AND feature134 > 0 THEN rule SPT.
3. IF feature111 <= 0 AND feature122 > 0 AND feature133 > 0 AND feature145 <= 0 AND feature1510 <= 0 THEN rule SPT.
5. IF feature111 <= 0 AND feature122 <= 0 AND feature134 <= 0 AND feature148 <= 0 AND feature1516 <= 0 AND feature1632 > 0 AND feature1763 > 0 THEN rule SPT.
9. IF feature211 > 0 AND feature221 <= 0 AND feature232 > 0 AND feature243 > 0 THEN rule SPT.
10. IF feature211 > 0 AND feature221 <= 0 AND feature232 <= 0 AND feature244 <= 0 THEN rule SPT.
11. IF feature211 > 0 AND feature221 > 0 AND feature231 <= 0 AND feature242 > 0 AND feature253 <= 0 THEN rule SPT.
12. IF feature211 > 0 AND feature221 > 0 AND feature231 <= 0 AND feature242 > 0 AND feature253 <= 0 THEN rule SPT.
13. IF feature211 > 0 AND feature221 > 0 AND feature231 > 0 AND feature241 <= 0 THEN rule SPT.
14. IF feature211 > 0 AND feature221 > 0 AND feature231 > 0 AND feature241 > 0 AND feature251 > 0 AND feature261 <= 0 THEN rule SPT.
16. IF feature311 > 0 AND feature321 <= 0 AND feature332 > 0 THEN rule SPT.
17. IF feature311 > 0 AND feature321 <= 0 AND feature332 <= 0 AND feature344 > 0 AND feature357 > 0 AND feature3613 > 0 THEN rule SPT.
18. IF feature311 > 0 AND feature321 <= 0 AND feature332 <= 0 AND feature344 <= 0 AND feature358 <= 0 AND feature3616 > 0 THEN rule SPT.
19. IF feature311 > 0 AND feature321 <= 0 AND feature332 <= 0 AND feature344 <= 0 AND feature358 <= 0 AND feature3616 <= 0 AND feature3732 <= 0 AND feature3864 <= 0 THEN rule SPT.
20. IF feature411 > 0 AND feature421 > 0 AND feature431 > 0 AND feature441 <= 0 THEN rule SPT.
21. IF feature411 > 0 AND feature421 <= 0 AND feature432 <= 0 AND feature444 > 0 THEN rule SPT.
23. IF feature411 <= 0 AND feature422 <= 0 AND feature434 > 0 AND feature447 <= 0 THEN rule SPT.
24. IF feature511 > 0 AND feature521 <= 0 AND feature532 <= 0 THEN rule SPT.
25. IF feature511 > 0 AND feature521 > 0 AND feature531 <= 0 THEN rule SPT.
26. IF feature511 <= 0 AND feature522 > 0 AND feature533 > 0 THEN rule SPT.
28. IF feature611 <= 0 AND feature622 <= 0 AND feature634 > 0 THEN rule SPT.
If (A1 ∈ class 0 and A2 ∈ class 0 and A3 ∈ class 0 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 0 and A3 ∈ class 0 and A4 ∈ class 1), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 0 and A3 ∈ class 1 and A4 ∈ class 0), then select rule 2 (EDT).
If (A1 ∈ class 0 and A2 ∈ class 0 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 2 (EDT).
If (A1 ∈ class 0 and A2 ∈ class 1 and A3 ∈ class 0 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 1 and A3 ∈ class 0 and A4 ∈ class 1), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 0 and A3 ∈ class 0 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 0 and A3 ∈ class 0 and A4 ∈ class 1), then select rule 1 (LPT).
If (A1 ∈ class 1 and A2 ∈ class 0 and A3 ∈ class 1 and A4 ∈ class 0), then select rule 1 (LPT).
If (A1 ∈ class 1 and A2 ∈ class 0 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 2 (EDT).
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 0 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 0 and A4 ∈ class 1), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 0 (SPT).
6.7 CONCLUSIONS
In this chapter, two different knowledge-based scheduling schemes (WCAODSs) that provide independent pdrs to be followed by the WCs in different planning horizons are proposed, and their performances compared. The comparison made with a GA-based scheduling methodology shows that the WCAODSs provide solutions closer to the optimum. The application of the GT procedure with a dynamic WCwise-pdr set assures good schedules and performance. Also, they are capable of addressing the dynamic scheduling problems in FMS. The WCAODSs need less time than the GA-based methodology, and their computational time is independent of the size of the problem. They infer the rules quickly for real-time control, which is essential for handling the dynamic states of the system. Hence the KB scheduling rules can be used for rescheduling. Further, the embedded knowledge acquisition mechanism makes the WCAODSs intelligent and flexible, and ensures the most appropriate dispatching rules to be followed by the WCs depending upon the status of the system.
The crucial phase in the proposed methodologies is the acquisition of the scheduling knowledge base, which is dependent on many factors such as the number of good training examples, the number and characteristics of the problem attributes, the classes of each attribute, and the operation environment. Since the learning is done off-line, the time required for it is not critical. Here the scheduling knowledge is defined with randomly generated data; however, it can be defined well for a specific system with its past part-mix data and the attributes that affect the scheduling process.
The next chapter addresses the AGV scheduling problem, in which transportation times are included and must go along with the production schedule. The production schedule, which is obtained by neglecting transportation times using either the off-line heuristics or the knowledge-based schemes, is used as the input to derive the modified production schedule that integrates the AGV schedule.