
CHAPTER 6
KNOWLEDGE-BASED WORKCELL ATTRIBUTE ORIENTED
DYNAMIC SCHEDULERS FOR FMS

6.1 INTRODUCTION
Revising the schedule by regeneration requires much time to provide an optimal or near-optimal solution. On the other hand, real time systems are capable of providing solutions in a short time. Knowledge-based expert systems, which belong to the real time category, are evolving to a great extent in order to handle dynamic scheduling environments. Several knowledge-based scheduling algorithms have been proposed and analysed (Chapter 2, section 2.4.2.5). This chapter addresses two knowledge-based scheduling schemes (Work Cell Attribute Oriented Dynamic Schedulers 'WCAODSs') to control the flow of parts efficiently in real time for an FMS wherein the part-mix varies continually with the planning horizon. This is an alternative approach to the off-line rescheduling schemes addressed in chapter 5, which generate schedules for 'n' jobs on 'm' WCs under negligible transportation times. The objective of the present work is to evolve a schedule knowledge base*, which furnishes WCwise-pdrs (a WCwise-pdr set**) for the minimum makespan performance criterion, based on the WC attributes*** to schedule the processes and regulate them in real time.
The concept of applying a knowledge-based system for scheduling is not new. The AI approaches have aimed to provide a control strategy that most favourably suits the status of the system, and have applied knowledge acquisition technologies of artificial intelligence to induce decision-making knowledge from collected experience data.

Footnote
*Knowledge base: It is a set of IF-THEN rules derived through the learning process, which shows the relationship between the features extracted and the pdrs.
**WCwise-pdr set: It is the set of pdrs, one pdr for each WC, that resolves the conflicts between the contending jobs.
***WC attributes: It is a set of parameters that describes the processing requirements for the given part mixes at each WC.

The generalised procedure of AI technique is as follows. First, a large number of


training examples that describe the system's various dynamic environments (i.e. the problem
instances usually specified by a set of attributes) and their best operating policies are
generated. In the next step, a knowledge base is built by learning the examples and decision
formulae are found out. Finally, at each real time scheduling instance, the current state of the
system is analysed and then the control policy (from a series of rule based policies) is chosen
by pruning through the decision formulae (schedule knowledge rules).
The present work employs a hybrid optimisation approach within the generalised AI framework. In this thesis, instead of following one single control policy (scheduling rule) corresponding to the system attributes, a method which operates with a set of dispatching rules, one for each machine/work cell, is suggested, and two knowledge-based scheduling schemes (Work Cell Attribute Oriented Dynamic Schedulers 'WCAODS1' and 'WCAODS2') are proposed. All the earlier works used system attributes to reflect the dynamic state of the system and to find a particular control policy to follow at the various scheduling instances. The GA based heuristic addressed in chapter 5 is used to generate the training examples. It provides an optimal combination of priority rules, one for each work cell 'WC' (a WCwise-pdr set), for each of the problem instances characterised by their WC attributes. The WC attributes reflect the information about each individual WC's operating environment. Two inductive learning algorithms are employed to learn the examples, and the scheduling rules are
formulated as a knowledge base. The learning algorithms employed are: Genetic CID3 (Continuous Interactive Dichotomister3 algorithm extended with a genetic program for weight optimisation) and the Classification Decision Tree (CDT) algorithm. The knowledge bases obtained through the above learning schemes generate robust and effective schedules intelligently with respect to the part-mix changes in real time for the makespan criterion. The comparison made with a GA based scheduling methodology (Chapter 5) shows that WCAODSs provide solutions closer to optimum.
The rest of the chapter is organised as follows. In section 6.2, the proposed scheduling
schemes are presented. The knowledge building schemes of WCAODS1 and WCAODS2 are
illustrated in sections 6.3 and 6.4 respectively. Their performances are compared with GA, the method that has been used for building the knowledge bases, in sections 6.5 and 6.6.
Conclusions are drawn in the last section.

6.2 THE STRUCTURE OF WCAODSs (WCAODS1 AND WCAODS2)


The structure of the proposed dynamic schedulers involves the following three modules, which are schematically represented in figure 6.1.

1. Training Example Generation (TEG) module

2. Scheduling Knowledge Acquisition and Learning (SKAL) module

3. Run (RN) module


The following sections furnish the details of WCAODSs.

[Figure 6.1 Scheme of WCAODS. Training Example Generation module: Generate job matrix -> Obtain best WC-pdr (genetic program) -> Extract attributes of WCs -> Accumulate training examples (WC attributes vs best pdr). Scheduling Knowledge Acquisition module: learning through the genetic CID3 algorithm 'WCAODS1' or through the binary tree classification algorithm 'WCAODS2' -> Generation of decision rules. Run module: Read job matrix -> Calculate features / Extract WC attributes -> Select pdr for each WC -> Generate schedule (using GT).]



6.2.1 TEG module


This module generates the training examples that are required for learning and
subsequently acquiring the scheduling knowledge base. The TEG module constitutes: i) Problem generation, ii) Evolution of optimal WCwise-pdr set, iii) Extraction of WC attributes and iv) Formation of training set.
6.2.1.1 Problem generation
The specific values pertaining to the model are the number of WCs 'm' in the system,
number of jobs to be processed 'n' during the planning horizon under consideration and their
processing requirements. With the inputs of n, m and the range of processing times, the
processing requirements' data are randomly generated with discrete uniform distribution. The
processing requirements' data, referred as 'job-matrix', include the number of processes
associated with each job, the sequence at which the processes are carried out, and for each
process, the WC number in which the process is performed and the processing time.
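As a concrete illustration of this step, a minimal C sketch of the job-matrix generation is given below; the array layout, the helper name gen_job_matrix and the use of rand() are illustrative assumptions, not the exact implementation used in the thesis.

#include <stdlib.h>

#define MAX_JOBS 50
#define MAX_OPS  10

int job_wc[MAX_JOBS][MAX_OPS];   /* WC number of the k-th operation of job i  */
int job_pt[MAX_JOBS][MAX_OPS];   /* processing time of the k-th operation     */
int n_ops[MAX_JOBS];             /* number of operations of job i             */

/* Generate a random job matrix for n jobs on m WCs with processing times
   drawn from the discrete uniform range [t_lo, t_hi].  (For brevity this
   sketch does not prevent a job from revisiting a WC.)                      */
void gen_job_matrix(int n, int m, int t_lo, int t_hi)
{
    int i, k;
    for (i = 0; i < n; i++) {
        n_ops[i] = 1 + rand() % m;               /* number of operations      */
        for (k = 0; k < n_ops[i]; k++) {
            job_wc[i][k] = 1 + rand() % m;       /* WC visited by operation k */
            job_pt[i][k] = t_lo + rand() % (t_hi - t_lo + 1);
        }
    }
}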
6.2.1.2 Evolution of optimal WC-wise pdr set
The methodology employed to evolve independent WCwise-pdr set for resolving
conflicts is a genetic search process addressed in chapter 5.
6.2.1.3 Extraction of WC attributes
The attributes should reflect the circumstance under which a particular WC operates.
The WC attributes for WCAODS1 and WCAODS2 are different and are given below.
WC attributes for WCAODS1
The WC attributes (MAp) considered in WCAODS1 are: number of processes (MA1); average processing time (MA2); machine utilisation (MA3); number of processes with process sequence 'k' = 1 (MA4); number of processes with process sequence 'k' = 2 (MA5); number of processes with process sequence 'k' = 3 (MA6); number of processes with process sequence 'k' = 4 (MA7); number of processes with process sequence 'k' = 5 (MA8); average processing time of the remaining processes after that WC (MA9); and standard deviation of the processing times (MA10). A bias input (MA0) is included to learn the examples with the GCID3 algorithm.
WC attributes for WCAODS2
WCAODS2 considers four WC attributes (Ap) to describe the operating circumstance of a WC. They are: number of processes (A1); average processing time of the processes (A2); total processing time of all processes (A3); and variance of the processing times of the jobs visiting that WC (A4).
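A minimal C sketch of how these four attributes could be computed for one WC, reusing the job-matrix arrays of the earlier sketch, is given below; the population variance is used, which is consistent with the worked example in section 6.4.1, but the function name and layout are assumptions.

/* Compute the WCAODS2 attributes A[1]..A[4] of work cell 'wc' (1-based)
   from the job-matrix arrays of the previous sketch.                     */
void wc_attributes(int n, int wc, double A[5])
{
    int i, k, cnt = 0;
    double sum = 0.0, sumsq = 0.0, mean;

    for (i = 0; i < n; i++)
        for (k = 0; k < n_ops[i]; k++)
            if (job_wc[i][k] == wc) {
                cnt++;
                sum   += job_pt[i][k];
                sumsq += (double)job_pt[i][k] * job_pt[i][k];
            }
    mean = (cnt > 0) ? sum / cnt : 0.0;
    A[1] = cnt;                                         /* A1: number of processes      */
    A[2] = mean;                                        /* A2: average processing time  */
    A[3] = sum;                                         /* A3: total processing time    */
    A[4] = (cnt > 0) ? sumsq / cnt - mean * mean : 0.0; /* A4: variance of the times    */
}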

6.2.1.4 Formation of Training set


The attributes extracted from the job matrix for all the WCs and their corresponding pdrs evolved through the GA process are grouped to form 'm' training examples. The total number of training examples influences the learning of the scheduling knowledge; the more the examples, the better the accuracy.
6.2.2 SKAL module
The learning module is also called knowledge building module. In this module, the
scheduling knowledge base that shows the relationship between the WC attributes and its
respective optimal pdr is established.
6.2.2.1 SKAL module of WCAODS1
The SKAL module of WCAODS1 employs an inductive learning algorithm referred to
as GCID3 algorithm. It is the extension of Continuous Interactive Dichotomister 'CID3' (Cios
and Liu 1992) with a genetic search process that finds the optimal initial weight vector.
Inductive Learning and CID3 Algorithm
Inductive learning is a machine learning process, which uses common sense for the
formulation of general laws by examination of particular cases. There are several experimental
methods of inductive learning such as the method of agreement, the method of differences,
the method of residues, the method of concomitance, and so on (Forsythe 1986). On the other
hand, using the theoretic measure of information entropy 'E' (E = -plog2p, where probability p
is the percentage of a class in a given set of classes) is well known in machine learning
research, where numerous algorithms have been developed to solve pattern recognition
problems. Quinlan (Quinlan 1987) used the information entropy for finding the variable that is
most discriminating, and partitioning the data concerning that variable. His ID3 algorithm
generates a decision tree using entropy functions by the most discriminatory variable until all
the subsets contain data of only one kind. But it requires data of training examples with class
membership (discrete values). Cios and Liu (1992) developed the 'CID3' algorithm for
handling the continuous values based on the concept of ID3. The CID3 algorithm has the
feature of self-generating Neural Network (NN) architecture dynamically till all the training
examples data become linearly separable. Their algorithm employs information entropy to find
the hyperplane/adaline with which the training examples of a current set are further classified
into subsets with the most discriminatory variable. To start with, an initial hyperplane, whose position in space is indicated by the connection weight vector, is generated randomly. Then

the weights are refined with the learning rule so that further information gain may be obtained. Hyperplanes are gradually added to the current hidden layer as long as the information entropy keeps decreasing, until it becomes zero or no further improvement is in sight. Next, a new layer is added to the current network. The new layer uses the original training data and the outputs of all previously generated nodes (hyperplanes) as its input. The addition of new layers continues until a single hyperplane separates the examples of the training set (i.e. they become linearly separable). This self-generated neural net converts the training examples into decision trees.
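For reference, a small C helper computing the information entropy of a two-class set (the quantity that ID3/CID3 tries to drive to zero) might look as follows; this is an illustrative sketch, not code from the thesis.

#include <math.h>

/* Information entropy (in bits) of a two-class set holding n_pos positive
   and n_neg negative examples; it is zero when the set is pure.            */
double set_entropy(int n_pos, int n_neg)
{
    double n = (double)(n_pos + n_neg), e = 0.0, p;
    if (n_pos > 0) { p = n_pos / n; e -= p * log2(p); }
    if (n_neg > 0) { p = n_neg / n; e -= p * log2(p); }
    return e;
}
/* For example, set_entropy(7, 13) = 0.93407, the starting entropy of the
   worked illustration in section 6.3.2.                                    */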
Genetic CID3 algorithm
The convergence of the network with the CID3 algorithm depends on how the weight for each attribute is generated. The random generation of weight vectors in the CID3 algorithm results in more layers in the NN architecture. In this context, a genetic search process is proposed for the generation of weight vectors in the CID3 algorithm to obtain a compact NN architecture with fast convergence and fewer layers. The following four stages are involved in the GCID3 algorithm.
Stage I: Generation of initial node
First, the training data set is classified into +ve and -ve classes and it becomes the starting node of any layer. Let there be 'N' training examples in the training data set. Define the pdr for which learning is carried out as the +ve class and the others as the -ve class. For example, if learning is done for the SPT rule, then the examples with the SPT rule are +ve class examples and the remaining become -ve examples. This classification separates the training data set into N+ positive class examples and N- negative class examples,
i.e. N = N+ + N-    (6.1)
Stage II: Addition of hyperplanes
In the second stage, nodes are continually generated by the gradual addition of hyperplanes/adalines (h) as long as the information entropy keeps decreasing, until it becomes zero or further improvement in information entropy gain is not in sight. The maximum number of nodes thus generated at hyperplane h is 2^(h-1). The number of nodes at any hyperplane may be less than this maximum because nodes with zero entropy are not classified further. A GA, which evaluates the information entropy gain of the weight vectors of an initial population, determines the optimal initial weight vector (the weight vector that provides

maximum information gain) of a hyperplane. The proposed GA for the generation of initial

weights is detailed below.


Step 1: Population initialisation
An initial population of feasible weight vectors (represented as chromosomes) is generated randomly. A chromosome (c) comprises a set of genes (each gene represents a weight), the number of which is equal to the number of attributes. An eight-digit coding scheme represents a gene. The first digit is either 1 or 2 and specifies the sign of the weight (1 represents negative and 2 represents positive). The remaining seven digits are decimal values (0-9). With this representation a value for a weight is obtained. A chromosome thus yields an initial feasible weight vector. A feasible weight vector of a hyperplane classifies each of the 'R' nodes of the earlier classification in such a way that the classification satisfies the following conditions:
(i) N1r+ <= Nr+
(ii) N1r- <= Nr-
(iii) N0r+ <= Nr+
(iv) N0r- <= Nr-
(v) N1r+ + N0r+ = Nr+
(vi) N1r- + N0r- = Nr-
(vii) Both N1r+ and N1r- should not be equal to zero, and
(viii) Both N0r+ and N0r- should not be equal to zero.
where,
Nr   : number of training examples in node 'r'
Nr+  : number of positive class examples in node 'r'
Nr-  : number of negative class examples in node 'r'
N1r+ : examples on the positive side of the adaline from Nr+
N0r+ : examples on the negative side of the adaline from Nr+
N1r- : examples on the positive side of the adaline from Nr-
N0r- : examples on the negative side of the adaline from Nr-
h    : hyperplane number
R    : number of nodes in that hyperplane
The size of the population (popsize) depends on the number of training examples: the more the examples, the larger the population size. The population size is user defined, and 20 is used in this analysis.
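Since the thesis states only that 'a value for a weight is obtained' from the seven value digits, the decoding below is a hypothetical C sketch: the first value digit is taken as the integer part and the remaining six as the fractional part, which matches the magnitude of the connection weights reported later in TABLE 6.6.

/* Decode one eight-digit gene (e.g. "26853546") into a signed weight.
   gene[0]: '1' = negative, '2' = positive (sign digit).
   gene[1..7]: assumed here to hold one integer digit and six fractional
   digits, so "26853546" decodes to +6.853546.                           */
double decode_gene(const char gene[8])
{
    double w = 0.0;
    int d;
    for (d = 1; d < 8; d++)
        w = w * 10.0 + (gene[d] - '0');
    w /= 1.0e6;
    return (gene[0] == '1') ? -w : w;
}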

Step 2: Population evaluation


Since the goal of this genetic search process is the determination of the weight vectors with
maximum information entropy gain, the fitness parameter is the information entropy gain. This
fitness parameter evaluates the population. The calculation involves the following steps.
Classify the examples into the positive and negative sides of the hyperplane using equations
6.2 to 6.6.
N_{1r}^{+} = \sum_{x=1}^{N_r} D_x \, Out_x    (6.2)

N_{0r}^{+} = N_r^{+} - N_{1r}^{+}    (6.3)

N_{1r}^{-} = \sum_{x=1}^{N_r} (1 - D_x) \, Out_x    (6.4)

N_{0r}^{-} = N_r^{-} - N_{1r}^{-}    (6.5)

where,
D_x = 1 for examples belonging to the +ve class,
D_x = 0 for examples belonging to the -ve class.

Out_x = \Big[ 1 + \exp\Big( -\sum_{p=0}^{dim} MA_p \, W_p \Big) \Big]^{-1}    (6.6)

Out_x is the threshold value (logistic function) of training example 'x' and is the output of that training example.
MA_p is the value of the attribute 'p'.
W_p is the weight of WC attribute 'MA_p' (including the bias attribute MA_0).
Calculate the information entropy 'E'. It is the average of the entropies of all R nodes at that hyperplane and is given as equation 6.7.

E = -\frac{1}{N} \sum_{r=1}^{R} \bigg[ N_{1r}^{+} \ln\frac{N_{1r}^{+}}{N_{1r}^{+}+N_{1r}^{-}} + N_{1r}^{-} \ln\frac{N_{1r}^{-}}{N_{1r}^{+}+N_{1r}^{-}} + (N_r^{+}-N_{1r}^{+}) \ln\frac{N_r^{+}-N_{1r}^{+}}{N_r-N_{1r}^{+}-N_{1r}^{-}} + (N_r^{-}-N_{1r}^{-}) \ln\frac{N_r^{-}-N_{1r}^{-}}{N_r-N_{1r}^{+}-N_{1r}^{-}} \bigg]    (6.7)

ln(..) refers to log2(..) in the above equation 6.7.

Further, the learning rate, which is set to 0.01, is used to refine the weights so that some more information gain may be obtained. The changes in the weight vector and in the entropy are calculated using equations 6.8 and 6.9 respectively.

\Delta W_p = -\rho \, \frac{\partial E}{\partial W_p}    (6.8)

where,
\rho = learning rate (0.01);

\frac{\partial E}{\partial W_p} = \sum_{r=1}^{R} \bigg[ \frac{\partial E}{\partial N_{1r}^{+}} \, \frac{\partial N_{1r}^{+}}{\partial W_p} + \frac{\partial E}{\partial N_{1r}^{-}} \, \frac{\partial N_{1r}^{-}}{\partial W_p} \bigg]    (6.8a)

\frac{\partial E}{\partial N_{1r}^{+}} = -\frac{1}{N \ln 2} \bigg[ \ln\frac{N_{1r}^{+}}{N_{1r}^{+}+N_{1r}^{-}} - \ln\frac{N_r^{+}-N_{1r}^{+}}{N_r-N_{1r}^{+}-N_{1r}^{-}} \bigg]    (6.8b)

\frac{\partial E}{\partial N_{1r}^{-}} = -\frac{1}{N \ln 2} \bigg[ \ln\frac{N_{1r}^{-}}{N_{1r}^{+}+N_{1r}^{-}} - \ln\frac{N_r^{-}-N_{1r}^{-}}{N_r-N_{1r}^{+}-N_{1r}^{-}} \bigg]    (6.8c)

\frac{\partial N_{1r}^{+}}{\partial W_p} = \sum_{x=1}^{N_r} D_x \, Out_x (1-Out_x) \, MA_p    (6.8d)

\frac{\partial N_{1r}^{-}}{\partial W_p} = \sum_{x=1}^{N_r} (1-D_x) \, Out_x (1-Out_x) \, MA_p    (6.8e)

\Delta E = \sum_{r=1}^{R} \bigg[ \frac{\partial E}{\partial N_{1r}^{+}} \, \Delta N_{1r}^{+} + \frac{\partial E}{\partial N_{1r}^{-}} \, \Delta N_{1r}^{-} \bigg]    (6.9)

where,

\Delta N_{1r}^{+} = \sum_{p=0}^{dim} \frac{\partial N_{1r}^{+}}{\partial W_p} \, \Delta W_p    (6.9a)

\Delta N_{1r}^{-} = \sum_{p=0}^{dim} \frac{\partial N_{1r}^{-}}{\partial W_p} \, \Delta W_p    (6.9b)

dim = total number of WC attributes in each training example.

By exponentially scaling the value of \Delta E, the fitness value of a chromosome is found,

i.e. fitness parameter F(c) = e^{s \, \Delta E}    (6.10)

where s is a scaling constant.
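To make the evaluation step concrete, a minimal C sketch of equations 6.2 to 6.7 and 6.10 for a single node is given below; the array layout, the helper names and the treatment of the scaling constant s are illustrative assumptions, not the thesis implementation.

#include <math.h>

#define DIM 10   /* WC attributes MA1..MA10 of WCAODS1; index 0 is the bias MA0 */

/* xlog2x(a, b) = a * log2(a / b), taken as 0 when a or b is 0.               */
static double xlog2x(double a, double b)
{
    return (a > 0.0 && b > 0.0) ? a * log2(a / b) : 0.0;
}

/* Contribution of one node r to the entropy of equation 6.7: the node holds
   nt examples with attribute values ma[x][p] (p = 0..DIM), class labels
   d[x] (1 = +ve, 0 = -ve), evaluated for the candidate weight vector w[p].   */
double node_entropy(int nt, double ma[][DIM + 1], const int d[], const double w[])
{
    int x, p, nrp = 0, nrn = 0;          /* Nr+ and Nr-                       */
    double s, out, n1p = 0.0, n1n = 0.0; /* N1r+ and N1r- (eqs 6.2 and 6.4)   */

    for (x = 0; x < nt; x++) {
        for (s = 0.0, p = 0; p <= DIM; p++)
            s += ma[x][p] * w[p];
        out = 1.0 / (1.0 + exp(-s));     /* adaline output, equation 6.6      */
        if (d[x]) { n1p += out; nrp++; }
        else      { n1n += out; nrn++; }
    }
    /* Four terms of equation 6.7 for this node; the caller sums the node
       contributions over all R nodes and divides by N to obtain E.           */
    return -( xlog2x(n1p,       n1p + n1n)
            + xlog2x(n1n,       n1p + n1n)
            + xlog2x(nrp - n1p, nt - n1p - n1n)
            + xlog2x(nrn - n1n, nt - n1p - n1n) );
}

/* Fitness of a chromosome (equation 6.10): the entropy gain dE achieved by
   its decoded weight vector, scaled exponentially with the constant s.       */
double fitness(double dE, double s)
{
    return exp(s * dE);
}

The GA would evaluate each node classified by a candidate weight vector in this way, compute the resulting entropy gain dE, and retain the chromosome with the largest fitness.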



Step 3: New population generation


The new population is generated with the genetic operations of selection of the fittest chromosomes to represent the next generation, reproduction of children with two-point partially mapped crossover (PMX) and uniform mutation.
Step 4: Sorting the best weight vector and termination check
The entire process of new population generation and its evaluation refers to one
iteration. A number of iterations are carried out till the optimality is achieved (i.e. when there
is no improvement in the value of information entropy gain). The best among the iterated
results is the optimal weight vector. The schema of the proposed genetic search process is
outlined with an illustration in Appendix III.
Stage III: Termination of classification in a layer
In stage three, the determination of optimal initial weight vector of a hyperplane that is
evolved through combined genetic search and learning, and the gradual additions of
hyperplanes are repeated until the information entropy becomes zero or further improvement
in information entropy gain is not possible.
Stage IV: Addition of new layer
The above three stages are repeated until the information entropy becomes zero with a
single hyperplane (i.e. all of the training examples are linearly separable). When a new layer is
added to the current network, the original training data and outputs of the all previously
generated nodes (hyperplanes) become the inputs for the current layer. The procedural steps
of the GCID3 algorithm are given as Lern_mod() below.
Lern_mod() /* LEARNING PROCESS BY GENETIC CID3 ALGORITHM */
# Define Lerrate = 0.01, Bias input = 1 /* Lerrate - learning rate */
Step 1 : Read the training examples and classify them into two classes as +ve (N+) and -ve (N-) classes (+ve class: the pdr for which the learning is initiated; -ve class: the others);
Step 2 : Set: current layer (L) = 1 and hyperplane (h) = 1;
Step 3 : Generate the initial weight vector Wh using the GA search for hyperplane h;
Step 4 : Set the index of the training iteration kl = 1;
Step 5 : Calculate: number of nodes R = 2^(h-1);
Step 6 : Classify the examples using equations 6.2 to 6.6;
Step 7 : Find the information entropy using equation 6.7;
Step 8 : Find the change in the weight vector and the change in information entropy for the hyperplane using equations 6.8 and 6.9;
Step 9 : If dE > 0.0 then Eh(kl+1) = Eh(kl) + dE and Wh(kl+1) = Wh(kl) + dWh; go to step 10.
          Otherwise record Eh(kl) as Eh and Wh(kl) as Wh,opt; go to step 11.
Step 10: If Eh(kl+1) > 0.0, set kl = kl + 1 and go back to step 5.
          Otherwise record Eh(kl+1) as Eh and Wh(kl+1) as Wh,opt; go to step 11.
Step 11: If Eh = Eh-1 (i.e. the added hyperplane brings no further improvement), go to step 12. Otherwise add a hyperplane to the current layer and go to step 3.
Step 12: If there is only one hyperplane (i.e. h = 1) at the current layer, STOP (learning has been done). Otherwise, add a new layer (i.e. set L = L + 1) to the current network and add a hyperplane (set h = 1) to this layer. Go back to step 1 with inputs from both the original training data and the outputs from all previously generated nodes.
Deduction of scheduling rules
The neural net generated converts its hidden layers into scheduling decision rules in the following manner. The features (Feature_Lhr) corresponding to each node 'r' of hyperplane 'h' in layer 'L', which relate the training examples to the decision tree of the generated NN architecture, are found out from the connection weights obtained, using the relation given in equation 6.11.

Feature_{Lhr} = \sum_{p=1}^{dim} MA_p \, W_p^{h,opt} + W_0^{h,opt}    (6.11)

Then, a set of 'IF ... THEN' rules with one or more combinations of features is formulated for every terminal node that contains examples of the positive class only.
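A minimal C sketch of the feature evaluation of equation 6.11, as the RN module would use it, is given below; the array names and the example weight-vector labels are hypothetical.

/* Feature of node r of hyperplane h in layer L (equation 6.11): the WC
   attributes MA[1..dim] (MA[0] is the bias input, fixed at 1) weighted by
   the optimal connection weights Wopt[] stored for that node.              */
double feature(const double MA[], const double Wopt[], int dim)
{
    double f = Wopt[0];                  /* bias weight W0,opt               */
    int p;
    for (p = 1; p <= dim; p++)
        f += MA[p] * Wopt[p];
    return f;
}
/* A deduced rule is then tested as, for example:
   if (feature(MA, W_111, 10) > 0 && feature(MA, W_121, 10) <= 0)
       the WC follows LPT (W_111 and W_121 being stored weight vectors).    */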
6.2.2.2 SKAL module of WCAODS2
The SKAL module of WCAODS2 employs a Classification Decision Tree (CDT)
algorithm. The framework of a learning tree formulated, the structure of CDT and the
mechanism of learning with CDT algorithm are dealt with in this section.
General framework of the learning tree
A tree structure is framed to establish the relationship between the WC attributes and their corresponding pdr; it is hereafter referred to as the learning tree. The tree structure is formulated as a set of layers. Each layer 'LAp' corresponds to one attribute 'Ap' and comprises one or more nodes, the number of which depends upon the number of nodes and the number of classes of the attribute in the previous layer 'LAp-1'. Each node in layer 'LAp' represents the WC attribute 'Ap' and divides into several branches, the number of branches being equal to the number of classes 'NCAp' of the attribute 'Ap'. For example, if the number of classes 'NCAp' of attribute 'Ap' and the number of nodes in layer 'LAp' are 3 and 4 respectively, then the number of nodes in layer 'LAp+1' is 12 (i.e. 3 x 4). The corresponding tree diagram is shown in figure 6.2.

Structure of CDT
The first step in the formulation of the CDT structure for learning is the fixation of different classes for each of the WC attributes. The number of classes 'NCAp' of an attribute 'Ap' depends on the range within which the data of that attribute fall. If the range is large, then more classes are to be fixed. In this work, each attribute is divided into two classes, classified with a certain decision formula for each attribute. The decision formula compares the attribute value with a standard value established for each set of training examples using one of the relational operators <, > and =. This fits every attribute into one of the two classes, specified as either 0 or 1, depending upon whether or not it satisfies the condition,
i.e. NCAp = 2 and C(Ap) = 0 or 1 for every Ap.
The decision rules used to find the classes of each attribute for the generation of the tree structure are given in TABLE 6.1 (a small sketch of this test follows the table).

TABLE 6.1 Decision rules for WC attributes classification
(For each attribute, the relational operator is >; class 1 when the condition 'Ap > SAp' is satisfied, class 0 when it is not.)

Attribute 'Ap'                                                    Standard value of attribute 'SAp' for comparison
A1: No. of operations in a WC                                     Half of the number of jobs in the part mix, 'n/2'
A2: Average processing time of all processes in a WC             The mean value of the range given during problem generation
A3: Total time required for all operations in a WC               No. of jobs multiplied by the mean value of the range given during problem generation
A4: Variance of the processing times of the processes in a WC    Average of the training set variance (i.e. total variance of the samples generated divided by the number of examples generated)
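The classification of TABLE 6.1 reduces to a comparison against the standard value; a minimal C sketch, with the standards passed in rather than recomputed, is shown below.

/* Class of an attribute value 'a' against its standard value 'sa'
   (TABLE 6.1): class 1 when the condition a > sa is satisfied,
   class 0 otherwise.                                                  */
int attr_class(double a, double sa)
{
    return (a > sa) ? 1 : 0;
}
/* For one WC with 'n' jobs and processing-time range [t_lo, t_hi]:
   CA1 = attr_class(A1, n / 2.0);
   CA2 = attr_class(A2, (t_lo + t_hi) / 2.0);
   CA3 = attr_class(A3, n * (t_lo + t_hi) / 2.0);
   CA4 = attr_class(A4, avg_training_variance);                        */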

With this classification scheme, the tree structure is formulated as five layers. The top four layers are for the attributes A1 to A4 and the fifth layer consists of the 4 classes of pdr rules (0, 1, 2 and 3). The CDT that is structured for learning is shown in figure 6.3. This results in 64 output nodes (i.e. four layers of 2 classes each and one layer of 4 classes), and every node is identified with an array of five dimensions denoted as counter[a][b][c][d][cp]; the values of a, b, c and d are either 0 or 1, and the value of cp, which indicates the pdr code, is 0, 1, 2 or 3.

Knowledge acquisition and learning process
First, all the 64 counters are initialised to zero. The classes of each WC attribute of the set of 'm' training examples of a job matrix ('n' jobs on 'm' WCs) generated using the TEG module are found out. With the classes obtained, the corresponding counters are incremented by one (a small sketch of this update is given below). For example, if the classes of attributes A1, A2, A3 and A4 are 0, 1, 1 and 0 respectively, and the pdr evolved is 2, then the value of counter[0][1][1][0][2] increases by one unit. This entire process is repeated till sufficient data are accumulated for scheduling rule generation. The counters indicate the number of times a particular rule has been selected for a particular combination of attribute classes. Then, a set of 'IF ... THEN' rules (16 rules) is formulated for every combination of attribute classes based on the dominating pdr as indicated by the counter values.
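A minimal C sketch of the counter update, using a five-dimensional array as described above, could look like this.

/* counter[a][b][c][d][cp]: number of training examples whose attribute
   classes were a, b, c, d (each 0 or 1) and whose best pdr code was cp
   (0, 1, 2 or 3), as evolved by the GA of the TEG module.               */
int counter[2][2][2][2][4];

/* Accumulate one training example into the statistics.                  */
void add_example(int ca1, int ca2, int ca3, int ca4, int pdr)
{
    counter[ca1][ca2][ca3][ca4][pdr]++;
}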

6.2.3 RN module
The WCwise-pdr set for the set of parts that requires processing at each decision point or planning horizon is evolved through the decision rules arrived at earlier with the SKAL module. These pdrs generate the schedule and control the flow of parts during the planning period or until a new part arrives at the system.
Step 1: Read the job data.
        Calculate the WC attributes.
Step 2: For every WC,
        In WCAODS1:
            Calculate the features.
            Count the number of rules that are satisfied for each pdr (see the sketch after step 3).
            Select the pdr that has the maximum count.
        In WCAODS2:
            Find the classes of each WC attribute.
            Select the pdr corresponding to the set of classes evolved.
Step 3: Generate the schedule using the GT procedure with the WCwise-pdr set evolved.
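For WCAODS1, the selection in step 2 amounts to counting, for each pdr, how many of its knowledge-base rules are satisfied by the current WC attributes and picking the pdr with the largest count; a schematic C sketch follows, with hypothetical rule_fires() and n_rules() stand-ins for the learnt feature tests.

#define N_PDR 4   /* pdr codes: 0 SPT, 1 LPT, 2 EDT, 3 MinSLK (mapping of code 3 assumed) */

/* Placeholders for the learnt knowledge base: rule_fires() would evaluate
   the feature tests of rule 'r' of pdr 'p' on the WC attributes MA[].       */
static int n_rules(int p)                              { (void)p; return 0; }
static int rule_fires(int p, int r, const double MA[]) { (void)p; (void)r; (void)MA; return 0; }

/* Step 2 of the RN module for WCAODS1: follow the pdr with the maximum
   number of satisfied knowledge-base rules at this WC.                      */
int select_pdr(const double MA[])
{
    int p, r, best = 0, best_count = -1, count;
    for (p = 0; p < N_PDR; p++) {
        count = 0;
        for (r = 0; r < n_rules(p); r++)
            count += rule_fires(p, r, MA);
        if (count > best_count) { best_count = count; best = p; }
    }
    return best;
}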

6.3 WCAODS1 ILLUSTRATED

The scheduling methodology of WCAODS1 is illustrated with an example in this section.
6.3.1 TEG module
Input data:
The inputs given to generate a set of training examples are as follows:
No. of jobs 'n' : 15; No. of WCs 'm' : 5;
Range of operation times in the WCs 'Tij' : 10-25;
Generation of job matrix:
The job matrix generated for the above input is given in TABLE 6.2.

TABLE 6.2 Job-matrix 'j/Tij' data generated

Job no. 'i'    Operation sequence k = 1, 2, ... : WC no. 'j' (processing time Tij)
 1             5 (13), 4 (21), 2 (20)
 2             2 (12), 4 (22), 1 (13)
 3             2 (13)
 4             2 (24)
 5             3 (16), 1 (10)
 6             2 (22)
 7             2 (18), 1 (23), 3 (16), 5 (25)
 8             1 (21), 3 (10), 2 (11)
 9             5 (23)
10             4 (16)
11             5 (25), 2 (12), 1 (24), 4 (18), 3 (14)
12             4 (23), 3 (22), 2 (10)
13             5 (24), 4 (14), 3 (13)
14             5 (19), 4 (24), 1 (11), 2 (23), 3 (18)
15             4 (19), 2 (11)

Evolution of optimal WCwise-pdr set


The optimal WCwise-pdr set obtained through GPEOWCPDR ALGORITHM along
with its makespan time is given in TABLE 6.3.

TABLE 6.3 Optimal WCwise-pdr set and its corresponding makespan time
WC number pdr code
WC1 2
WC2 1
WC3 0
WC4 1
WC5 1
Makespan time of the schedule 194

Calculation of WC attributes
The WC attributes derived for WC1 are given below.
MA1 = Total number of operations in WC1 = 6
MA2 = Average processing time of operations in WC1
    = (Sum of processing times of all operations to be carried out in WC1) / MA1 = 102/6 = 17
MA3 = Machine utilisation
    = (Sum of processing times of all operations to be carried out in WC1) / (Makespan time) = 102/194 = 0.541
MA4 = Number of jobs visiting WC1 with their 1st operation in that WC = 1
MA5 = Number of jobs visiting WC1 with their 2nd operation in that WC = 2
MA6 = Number of jobs visiting WC1 with their 3rd operation in that WC = 3
MA7 = Number of jobs visiting WC1 with their 4th operation in that WC = 0
MA8 = Number of jobs visiting WC1 with their 5th operation in that WC = 0
MA9 = Average processing time of the remaining operations to be carried out (over all jobs)
    = Sum over jobs of (processing times of the remaining operations to be carried out after WC1) / Sum over jobs of (number of remaining operations to be carried out after WC1) = 16
MA10 = Standard deviation of the processing times of the operations in WC1 = 2.167


Formation of a training set
The attributes MA1 to MA10 along with a bias input (MA0) and the pdr of a WC give one training example. The set of training examples resulting from the above job matrix is given in TABLE 6.4.

TABLE 6.4 One set of training examples


Training Example  MA0  MA1  MA2  MA3    MA4  MA5  MA6  MA7  MA8  MA9  MA10   RULE
1                 1    6    17   0.541  1    2    3    0    0    16   2.167  2
2                 1    11   16   0.907  5    2    3    1    0    19   1.564  1
3                 1    7    15   0.562  1    2    2    0    2    14   1.355  0
4                 1    8    19   0.809  3    4    0    1    0    15   1.179  1
5                 1    6    21   0.655  5    0    0    1    0    17   1.772  1

By repeating the above steps three more times, twenty examples that are given in
TABLE 6.5 have been generated to illustrate the SKAL module.

Table 6.5 Training examples data used for SKAL illustration


Training Example  MA0  MA1  MA2  MA3    MA4  MA5  MA6  MA7  MA8  MA9  MA10   RULE

1 1 6 17 0.541 1 2 3 0 0 16 2.167 2

2 1 11 16 0.907 5 2 3 1 0 19 1.564 1
3 1 7 15 0.562 1 2 2 0 2 14 1.355 0
4 1 8 19 0.809 3 4 0 1 0 15 1.179 1

5 1 6 21 0.655 5 0 0 1 0 17 1.772 1

6 1 10 23 0.690 4 2 1 3 0 24 1.183 0
7 1 10 20 0.611 1 5 3 1 0 24 1.389 0
8 1 11 21 0.690 7 2 1 0 1 23 1.282 1
9 1 7 25 0.519 1 2 3 1 0 23 1.187 2
10 1 13 24 0.929 7 2 2 1 1 22 1.080 0
11 1 18 30 0.823 9 4 3 0 2 29 1.439 0
12 1 14 28 0.591 3 3 6 2 0 29 1.589 1

13 1 15 26 0.608 2 3 2 5 3 32 1.589 2
14 1 17 30 0.786 6 3 4 3 1 29 1.428 1
15 1 16 31 0.768 5 5 2 2 2 28 1.557 0
16 1 6 18 0.529 0 1 4 0 1 24 1.453 1
17 1 7 22 0.740 2 3 1 1 0 22 2.231 0
18 1 6 25 0.740 2 1 0 3 0 21 1.312 3
19 1 7 18 0.615 3 2 1 1 0 22 2.119 0
20 1 5 22 0.534 3 1 0 0 1 23 1.929 0

6.3.2 SKAL module


In this part, the inductive learning process carried out for the LPT rule (pdr code 1) is illustrated. In the rest of this section, LPT is referred to as the +ve class and the other rules as the -ve class. This classifies the above 20 training examples into 7 +ve class examples and 13 -ve class examples. The information entropy 'E' is 0.93407 {i.e. E = -7/20 log2(7/20) - 13/20 log2(13/20)} with this initial classification. After the learning process, a four-layer neural network consisting of eight hyperplanes is constructed as depicted in figure 6.4. It is to be noted that the 20 training examples are classified by three hyperplanes in the first layer, two hyperplanes in the second layer, two hyperplanes in the third layer, and a single hyperplane of highest dimension in the final layer. The connection weights obtained for each hyperplane in each layer are given in TABLE 6.6. Besides, the corresponding decision trees and information entropies of all layers are given in figures 6.5a, 6.5b, 6.5c and 6.5d.

[Figure 6.4 A four layer neural network generated by GCID3]
[Figure 6.5c Classification at Layer 3: the entropy reduces from 0.93407 to 0.25986 with Feature311 and to 0.00000 with Feature321]
[Figure 6.5d Classification at Layer 4: the entropy reduces from 0.93407 to 0.00000 with Feature411]

TABLE 6.6 Connection weights


LAYER 1 LAYER 2 LAYER 3 LAYER 4
Hyperplane 1 Hyperplane 2 Hyperplane 3 Hyperplane 1 Hyperplane 2 Hyperplane 1 Hyperplane 2 Hyperplane 1
Node 1 Node 2
6.853546 -4.165894 -3.700043 8.196228 -2.984402 4.566376 0.649655 -0.547333 0.001434
6.304291 -1.221649 -3.78198 3.343292 1.184986 3.112610 0.890799 0.066345 0.964142
3.260406 3.865417 -3.87564 -7.465027 -2.139095 4.948795 0.705818 0.851654 -0.151489
-6.960327 3.228180 -0.796387 -6.815674 0.566498 0.203918 3.652083 -6.624695 -0.672638
7.24981 8.036743 -6.945374 3.141205 -0.18916 -8.764954 -3.091706 -0.121613 -0.685669
-3.331543 7.143494 -1.886597 4.396240 -6.643563 -7.458740 0.888652 3.238312 -0.556641
5.026428 6.584717 -1.027771 4.966980 8.831677 5.411987 -3.053453 -3.624817 -0.478668
-1.750275 1.114227 -3.811829 3.173553 4.203671 8.477661 0.529778 -3.168091 -0.742340
-7.305542 6.509613 -0.913605 2.109222 -5.937287 6.341858 3.937391 -6.424469 -0.763672
-6.645538 -6.619232 5.124023 7.241913 1.291581 6.279999 -0.531043 -0.822784 0.071014
-7.220428 -7.705750 -3.165558 -7.936279 -1.942136 4.770569 -0.731631 -6.790863 0.423981
4.676595 -6.783661 -3.507796 6.957916 3.477051

-8.495296 8.423615 -6.572748 6.984344 -0.738525


-7.295685 5.623718 6.287033 -0.490570 ^1.190460
-6.723939 6.164062 3.114349

6.5723551 -3.613190 -4.060303


-3.588501
3.686981

Decision rules resulting from the hidden layers of the NN architecture are as follows:

RULE 1: If Feature111 <= 0 and Feature122 > 0 THEN LPT
RULE 2: If Feature111 > 0 and Feature121 <= 0 THEN LPT
RULE 3: If Feature111 > 0 and Feature121 > 0 and Feature131 < 0 THEN LPT
RULE 4: If Feature211 > 0 THEN LPT
RULE 5: If Feature211 < 0 and Feature222 < 0 THEN LPT
RULE 6: If Feature311 < 0 THEN LPT
RULE 7: If Feature311 > 0 and Feature321 > 0 THEN LPT
RULE 8: If Feature411 > 0 THEN LPT

Similarly, learning can be carried out for the remaining pdrs and their decision rules can be extracted.

6.4 WCAODS2 ILLUSTRATED


The methodology for the acquisition of scheduling knowledge in WCAODS2 is illustrated with an example to give an insight into its working.
6.4.1 TEG module
Input data:
The data used for generating a set of training examples are given below.
Number of jobs 'n' to be processed : 3; Number of WCs 'm' in the system : 3;
Range of operation times in the WCs 'Tij' : 25 - 45;
Generation of job matrix:
The job matrix generated for the above input is given in TABLE 6.7.

TABLE 6.7 Job matrix for generating a set of training examples in WCAODS2

Job no. 'i'    WC no. 'j' (processing time Tij)
               k = 1        k = 2        k = 3
1              1 (39)       2 (34)       3 (45)
2              1 (40)       2 (45)       3 (27)
3              3 (41)       1 (28)       -

Evolution of optimal WCwise-pdr set:


The optimal WCwise-pdr set obtained through GPEOWCPDR ALGORITHM is given in
TABLE 6.8.

TABLE 6.8 Optimal WCwise-pdr set


WC number pdr code
WC1 1
WC2 0
WC3 2

Extraction of attributes:
The WC attributes 'Ap' along with the standard value of each attribute 'SAp', extracted for the above job matrix, are given in TABLE 6.9.

TABLE 6.9 WC attributes and standards for classification

WC no. 'j'   A1   SA1 (n/2)   A2       SA2   A3    SA3 (A1 x SA2)   A4       SA4 (mean of A4 over all WCs)
1            3    1.5         35.667   35    107   105              29.556   39.79
2            2    1.5         39.500   35    79    70               30.250   39.79
3            3    1.5         37.667   35    113   105              59.556   39.79

Formation of training set:


The classes of each attribute 'CAp' (0 or 1), derived by comparing each attribute with its corresponding standard value 'SAp', and the optimal WCwise-pdr set evolved through the GA are given in TABLE 6.10; these provide a set of training examples.

TABLE 6.10 A set of training examples


Example Classes of WC attributes Best
no. CA, CA, ca3 ca4 pdr
1 1 1 1 0 1
2 1 1 0 0 0
3 1 1 1 1 2

A sufficient number of training examples is built by repeating the above steps.

6.4.2 SKAL module


The classes of the attributes 'CAp' are used to prune the learning tree, and each of the following counters is incremented by one unit:
counter[1][1][1][0][1] = counter[1][1][1][0][1] + 1
counter[1][1][0][0][0] = counter[1][1][0][0][0] + 1
counter[1][1][1][1][2] = counter[1][1][1][1][2] + 1
Each set of training examples updates the counters in this way.
Repeating the above procedure of training example generation and counter updating many times results in the dominance of a particular pdr within each set of classes. This is used for inferring and formulating the scheduling rules.
Suppose counter[1][1][1][1][0], among the counters counter[1][1][1][1][0], counter[1][1][1][1][1], counter[1][1][1][1][2] and counter[1][1][1][1][3] that represent the same class combination (i.e. CA1 = 1, CA2 = 1, CA3 = 1 and CA4 = 1), dominates the other counters; then the scheduling rule is formulated as follows:
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 0 (SPT).
As a result of learning with a large number of training examples, sixteen such rules, one for each set of classes, evolve.
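A sketch of this rule inference (an argmax over the pdr index for each of the sixteen class combinations, reusing the counter array of the earlier sketch) is given below in C.

/* Infer the sixteen scheduling rules from the accumulated counters: for each
   combination of attribute classes, the pdr with the largest counter value
   becomes the right-hand side of the corresponding IF-THEN rule.             */
void infer_rules(int counter[2][2][2][2][4], int rule[2][2][2][2])
{
    int a, b, c, d, cp, best;
    for (a = 0; a < 2; a++)
      for (b = 0; b < 2; b++)
        for (c = 0; c < 2; c++)
          for (d = 0; d < 2; d++) {
              best = 0;
              for (cp = 1; cp < 4; cp++)
                  if (counter[a][b][c][d][cp] > counter[a][b][c][d][best])
                      best = cp;
              rule[a][b][c][d] = best;   /* dominating pdr for this class set */
          }
}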

6.5 WCAODS1 COMPARED WITH GA

To evaluate the performance of WCAODS1, scheduling knowledge bases are obtained for a hypothetical FMS model that consists of five special-purpose WCs. A data set consisting of one hundred training examples, built to reflect the operating environment of the FMS and to cover all possible variations in WC attributes, is used to learn the examples with the GCID3 algorithm.
Scheduling knowledge bases are derived from the hidden layers of self-generated neural
network architectures of all pdrs. The scheduling knowledge base rules of all pdrs inferred by
means of their respective decision trees, connection weights and features are given in the
following section.
6.5.1 Knowledge base obtained through WCAODS1
6.5.1.1 Scheduling knowledge base rules resulting through learning the LPT rule
The decision trees of the hidden layers generated for the LPT rule are shown in figures 6.6a to 6.6g respectively.

[Figures 6.6a to 6.6g Decision trees of the hidden layers for LPT rule learning; figure 6.6a shows the decision tree of layer 1]

The schedule knowledge base rules extracted from the decision tree are presented below:

1. IF feature, „ > 0 AND feature121 <= 0 THEN LPT

2. IF feature,,, >0 AND featurel21 > 0 AND feature,,, <= 0 AND featurel42 <= 0
THEN LPT

3. IF feature,,, > 0 AND feature,2, > 0 AND feature,3, <= 0 AND feature,42 >0
AND feature,53 <= 0 AND feature,66 > 0 THEN LPT

4. IF feature,,, > 0 AND feature,2, > 0 AND feature,3, <= 0 AND featurel42 > 0
AND feature,53 > 0 AND f eature,65 > 0 AND feature,79 <= 0
THEN LPT

5. IF feature,,, > 0 AND feature,2, > 0 AND feature,3, <= 0 AND feature,42 > 0
AND feature,53 > 0 AND feature,65 > 0 AND feature,79 > 0 AND featurem > 0
AND feature,933 > 0 THEN LPT.

6. IF feature,,, > 0 AND feature,2, > 0 AND feature,3, <= 0 AND feature,42 > 0 AND
feature,53 <= 0 AND feature,66 <= 0 AND feature,.,, <= 0 AND feature,824 <=0 AND
feature, ,48 <= 0 THEN LPT.

7. IF feature2I, > 0 AND feature22, > 0 AND feature23, <= 0 AND feature242 > 0 AND
feature253 > 0 THEN LPT.

8. IF feature.,,, > 0 AND feature22, > 0 AND feature23, <= 0 AND feature242 > 0 AND
feature,53 <= 0 AND feature,,6 > 0 THEN LPT.

9. IF feature,,, <= 0 AND feature,,, > 0 THEN LPT

10. IF feature,,, <= 0 AND feature,,, <= 0 AND feature234 > 0 AND feature,47 <= 0
THEN LPT

11. IF feature,,, <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 > 0 AND
feature,,,, > 0 AND feature2626 <= 0 THEN LPT

12. IF feature211 <= 0 AND feature222 <= 0 AND feature234 > 0 AND feature247 > 0 AND
feature2513 > 0 AND feature2625 <= 0 THEN LPT

13. IF feature,,, <= 0 AND feature322 <= 0 THEN LPT

14. IF feature3ll > 0 AND feature321 <= 0 AND feature332 <= 0 THEN LPT

15. IF feature,,, <= 0 AND feature322 > 0 AND feature333 <= 0 AND feature346 > 0 AND
feature,,,, > 0 THEN LPT

16. IF feature4U <= 0 AND feature422 >= 0 AND feature4„ > 0 AND feature^, > 0
THEN LPT

17. IF feature4n <= 0 AND feature422 <= 0 AND feature434 <= 0 AND feature^ >0
THEN LPT

18. IF feature4II > 0 AND feature42, <= 0 AND feature4„ <= 0 THEN LPT

19. IF feature,,, <= 0 THEN LPT

20. IF feature,,, > 0 AND feature,,, <= 0 AND feature,,, > 0 THEN LPT.

21. IF feature,,, > 0 AND feature,,, <= 0 THEN LPT.

22. IF feature,,, <= 0 AND feature,22 > = 0 THEN LPT.

23. IF feature,,, <= 0 THEN LPT.


The connection weights generated for every classification node of all layers are stored
in separate data files. The run module uses the features that are derived from the connection
weights and predicts the rules that are to be followed by the WCs.
6.5.1.2 Scheduling knowledge base rules resulting through learning the EDT rule
The scheduling knowledge base rules extracted from the decision trees generated for
EDT pdr are given below:

1. IF feature,,, <= 0 AND feature,„ <= 0 THEN rule EDT.

2. IF feature,,, <= 0 AND feature,22 > 0 AND feature,,, <= 0 AND feature,^<=0 AND
feature,„, <= 0 THEN rule EDT

3. IF feature,,, <= 0 AND feature,22 > 0 AND feature,,, <= 0 AND feature,^<=0 AND
feature,5,2 > 0 feature,62, <= 0 THEN rule EDT.

4. IF feature,,, <= 0 AND feature,22 > 0 AND feature,33 <= 0 AND feature,^ <= 0 AND
feature,5,2 > 0 feature,623 > 0 AND featurel745 <= 0 AND feature,890 <= 0 THEN rule
EDT.

5. IF feature,,, <= 0 AND feature,22 > 0 AND feature,33 <= 0 AND feature,^<=0 AND
feature,5,2 > 0 feature,62, > 0 AND feature,745 <= 0 AND feature,890 > 0 AND
feature,9179 > 0 THEN rule EDT.

6. IF feature,,, <= 0 AND feature,22 > 0 AND feature,33 <= 0 AND feature,^<=0 AND
featureI5I2 > 0 feature,623 > 0 AND feature,745 <= 0 AND feature,890 > 0 AND
feature,9,79 <= 0 AND feature,,0358 <= 0 AND feature,,,7,6 > 0 THEN rule EDT.

7. IF feature,,, <= 0 AND feature,22 > 0 AND feature,,, <= 0 AND feature,^<=0 AND
feature,M2 > 0 feature,623 > 0 AND feature,745 <= 0 AND feature,890 > 0 AND
feature,9179 <= 0 AND feature,,0358 > 0 AND feature,,m5 > 0 THEN rule EDT.

8. IF feature.,,, <= 0 AND feature222 > 0 AND feature233 >0 AND feature245 <= 0
THEN rule EDT.

9. IF feature2I1 <= 0 AND feature222 <= 0 AND feature234 >0 AND feature247 <= 0
THEN rule EDT.

10. IF feature2n <= 0 AND feature222 <= 0 AND feature234 >0 AND feature247 > 0 AND
feature2513 <= 0 AND feature2626 <= 0 AND feature2752 <= 0 THEN rule EDT.

11. IF feature.,,, <= 0 AND feature222 <= 0 AND feature234 >0 AND feature247 > 0 AND
feature,,,, <= 0 AND feature2626 > 0 AND feature,,,, <= 0 AND feature28102 >0
THEN rule EDT.

12. IF feature.,,, <= 0 AND feature222 <= 0 AND feature234 >0 AND feature247 > 0 AND
feature25,3 <= 0 AND feature2626 <= 0 AND feature2752 > 0 AND feature28,03 <= 0
THEN rule EDT.

13. IF feature.,,, <= 0 AND feature32, > 0 AND feature333 >0 THEN rule EDT.

14. IF feature,,, <= 0 AND feature,.,, > 0 AND feature,,, <= 0 AND feature,46 > 0 THEN
rule EDT.

15. IF feature,,, <= 0 AND feature,2I > 0 AND feature333 <= 0 AND feature,^ <= 0 AND
feature,,,, <= 0 THEN rule EDT.

16. IF feature3n <= 0 AND feature,,, > 0 AND feature333 <= 0 AND feature346 <= 0 AND
feature,,,, > 0 AND feature,,,, > 0 THEN rule EDT.

17. IF feature311 <= 0 AND feature32, <= 0 AND feature334 > 0 AND feature347 <= 0 THEN
rule EDT.

18. IF feature411 <= 0 AND feature422 > 0 THEN rule EDT.

19. IF feature411 <= 0 AND feature422 <= 0 AND feature434 > 0 THEN rule EDT.

20. IF feature,,, <= 0 THEN rule EDT.


6.5.1.3 Scheduling knowledge base rules resulting through learning the MSLK rule
The scheduling knowledge base rules extracted from the decision trees generated for
MSLK pdr are given below:

1. IF feature111 > 0 AND feature121 <= 0 AND feature132 > 0 THEN rule MinSLK.

2. IF feature, n >0 AND feature,2I >0 AND feature,3, <= 0 THEN rule MinSLK.

3. IF feature,,, <= 0 AND feature,22 <= 0 AND feature,34 >0 AND feature,47 <= 0
THEN rule MinSLK.

4. IF feature.,,, <= 0 AND feature222 > 0 AND feature233 > 0 AND feature245 <= 0
THEN rule MinSLK.

5. IF feature.,,, <= 0 AND feature222 > 0 AND feature233 <= 0 AND feature246 > 0 AND
feature,,,, > 0 THEN rule MinSLK.

6. IF feature,,, > 0 AND feature,,, > 0 THEN rule MinSLK.

7. IF feature,,, > 0 AND feature,,, <=0 AND feature,,2 <= 0 THEN rule MinSLK.

8. IF feature,,, > 0 AND feature,,, <= 0 AND feature,,, > 0 ANDfeature,4, <= 0 THEN
rule MinSLK.

9. IF feature41, > 0 AND feature4„ > 0 THEN rule MinSLK.

10. IF feature,,, <= 0 AND feature,,, > 0 THEN rule MinSLK.

11. IF feature,,, <= 0 AND feature,,, > 0 THEN rule MinSLK.

12. IF feature,,, <= 0 THEN rule MinSLK.


6.5.1.4 Scheduling knowledge base rules resulting through learning the SPT rule
The scheduling knowledge base rules extracted from the decision trees generated for
SPT pdr are given below:

1. IF feature 111 > 0 THEN rule SPT.

2. IF feature 111 <= 0 AND featurel22 > 0 AND featurel34 > 0 THEN rule SPT.

3. IF featurel 11 <= 0 AND featurel22 > 0 AND featurel33 > 0 AND featurel45
<= 0 AND featurel 510 <= 0 THEN rule SPT.

4. IF featurel 11 <= 0 AND featurel22 <= 0 AND featurel34 <= 0 AND


feature 148 <= 0 AND featurel516 >= 0 AND feature 1631 > 0 AND feature 1761
> 0 THEN rule SPT.

5. IF featurel 11 <= 0 AND featurel22 <= 0 AND featurel34 <= 0 AND featurel48
<= 0 AND feature1516 <= 0 AND feature1632 > 0 AND feature1763 > 0 THEN
rule SPT.

6. IF featurel 11 <= 0 AND featurel22 <= 0 AND featurel34 <= 0 AND


feature 148 <= 0 AND featurel516 <= 0 AND feature 1632 > 0 AND feature 1763
< = 0 AND featurel8126 > 0 THEN rule SPT.

7. IF featurel 11 <= 0 AND featurel22 <= 0 AND featurel34 <= 0 AND


feature 148 <= 0 AND featurel516 <= 0 AND featurel632 > 0 AND featurel763
<=0 AND featurel8126 <=0 AND featurel9252 <= 0 THEN rule SPT.

8. IF feature211 <= 0 THEN rule SPT.

9. IF feature211 > 0 AND feature221 <= 0 AND feature232 > 0 AND feature243 > 0
THEN rule SPT.

10. IF feature211 > 0 AND feature221 <= 0 AND feature232 <= 0 AND feature244
<= 0 THEN rule SPT.

11. IF feature211 > 0 AND feature221 > 0 AND feature231 <= 0 AND feature242 > 0
AND feature253 <= 0 THEN rule SPT.

12. IF feature211 > 0 AND feature221 > 0 AND feature231 <= 0 AND feature242 > 0
AND feature253 <= 0 THEN rule SPT.

13. IF feature211 > 0 AND feature221 > 0 AND feature231 > 0 AND feature241 <= 0
THEN rule SPT.

14. IF feature211 > 0 AND feature221 > 0 AND feature231 > 0 AND feature241 > 0
AND feature251 > 0 AND feature261 <= 0 THEN rule SPT.

15. IF feature311 <= 0 THEN rule SPT.



16. IF feature311 > 0 AND feature321 <= 0 AND feature332 > 0 THEN rule SPT.

17. IF feature311 > 0 AND feature321 <= 0 AND feature332 <= 0 AND feature344 >
0 feature357 > 0 AND feature3613 > 0 THEN rule SPT.

18. IF feature311 > 0 AND feature321 <= 0 AND feature332 <= 0 AND feature344
<= 0 feature358 <= 0 AND feature3616 > 0 THEN rule SPT.

19. IF feature311 > 0 AND feature321 <= 0 AND feature332 <= 0 AND feature344
<= 0 feature358 <= 0 AND feature3616 <= 0 AND feature3732 <=0 AND
feature3864 <= 0 THEN rule SPT.

20. IF feature411 > 0 AND feature421 > 0 AND feature431 > 0 AND feature441 <=
0 THEN rule SPT.

21. IF feature411 > 0 AND feature421 <= 0 AND feature432 <= 0 AND feature444 >
0 THEN rule SPT.

22. IF feature411 <= 0 AND feature422 > 0 THEN rule SPT.

23. IF feature411 <= 0 AND feature422 <= 0 AND feature434 > 0 AND feature447
<= 0 THEN rule SPT.

24. IF feature511 > 0 AND feature521 <= 0 AND feature532 <= 0 THEN rule SPT.

25. IF feature511 > 0 AND feature521 > 0 AND feature531 <= 0 THEN rule SPT.

26. IF feature511 <= 0 AND feature522 > 0 AND feature533 > 0 THEN rule SPT.

27. IF feature611 > 0 AND feature621 <= 0 THEN rule SPT.

28. IF feature611 <= 0 AND feature622 <= 0 AND feature634 > 0 THEN rule SPT.

29. IF feature711 <= 0 THEN rule SPT.


6.5.1.5 Computational experience with learning process
The system used for carrying out the learning process is an HP 720 workstation (series 700). The program, which works under the UNIX environment, is written in the 'C' language. The computational time taken by the system for each pdr is mentioned below.
LPT : 9 hours and 40 minutes
EDT : 6 hours and 35 minutes
MinSLK : 5 hours and 25 minutes
SPT : 12 hours and 30 minutes

6.5.2 Performance comparison


The performance of the proposed WCAODS1 is compared with GA (the method used to build the knowledge base) in terms of makespan achievement and computational time, using the results obtained for problems of various sizes generated with different numbers of jobs and ranges of processing times. TABLES 6.11a, 6.11b, 6.11c and 6.11d show the results of the sample problems tested. The following observations are made from the comparison. The makespan times of the schedules obtained with WCAODS1 are comparable to those of GA. The mean makespan time deviation of WCAODS1 from GA is 7.60% (with a minimum of 0.00% and a maximum of 15.75%). The number of jobs does not much influence the solution quality. No significant variation in the solution output with respect to the range of processing times is observed. As far as computational time is concerned, WCAODS1 provides a solution in a time that is independent of the processing time range and lies between 3 and 4 seconds, depending upon the size of the problem. In the GA process, the computational time increases exponentially with the size of the problem.

TABLE 6.1 la Makespan and Computation times of the sample problems


(Processing time range 5-10)
S.No. No. of WCAODS1 GA MM
jobs 'n' %deviation
WCwise-pdr MM CT WCwise-pdr MM CT
1 10 2-0-1-2-1 84 2.867 1-2-1-1-2 80 3.242 5.00
2 10 0-0-1-0-1 74 2.890 1-3-1-2-0 69 2.692 7.25
n 0-1-1-0-0 98 3.234 3-2-1-1-0 93 5.714 5.37
4 14 3-0-1-1-1 98 3.210 1-0-3-1-0 90 4.890 8.89
5 15 0-1-0-0-1 118 3.338 1-0-2-2-3 104 6.923 13.46
6 15 1-0-1-1-3 90 3.553 2-0-2-1-3 89 5.714 1.12
7 16 1-0-1-0-3 126 3.531 3-1-0-1-0 113 8.620 11.50
8 17 0-0-0-3-1 125 3.640 0-0-0-2-0 120 11.260 4.17
9 19 1-1-0-0-0 130 3.752 1-1-2-0-1 118 12.800 10.17
10 20 1-0-0-0-0 130 3.898 1-0-2-0-0 125 14.505 4.00

MEAN MM DEVIATION (%) 7.09

MM: Makespan time; CT: Computation time (seconds)

MM % deviation = (MM of WCAODS1 - MM of GA)/(MM of GA) x 100

TABLE 6.11b Makespan and Computation times of the sample problems


(Processing time range 5-15)
S.No. No. of WCAODS1 GA MM
jobs 'n' %deviation
WCwise-pdr MM CT WCwise-pdr MM CT
1 11 3-3-0-1-1 148 3.123 0-2-0-0-1 131 3.846 12.97
2 11 1-0-1-3-1 100 3.452 1-0-1-2-2 100 3.626 0.00
3 14 0-1-3-1-1 108 3.134 2-0-2-1-1 108 6.318 0.00
4 14 1-1-3-3-1 112 3.245 1-1-3-3-1 112 5.769 0.00
5 15 1-1-1-1-0 138 3.467 1-2-2-1-1 126 6.813 9.52
6 16 0-1-1-1-1 156 3.578 0-0-2-1-1 139 9.010 12.23
7 16 0-0-1-1-0 126 3.678 0-3-3-0-2 113 7.912 11.50
8 18 1-1-2-1-1 151 3.790 0-0-0-2-0 135 10.980 11.85
9 18 0-3-1-0-1 180 3.962 0-0-0-0-2 170 12.198 5.88
10 19 0-3-0-1-0 229 3.956 2-2-2-1-2 204 15.55 12.25
MEAN MM DEVIATION ( %) 7.62

TABLE 6.11c Makespan and Computation times of the sample problems


(Processing time range 5-20)
S.No. No. of WCAODS1 GA MM
jobs 'n' %deviation
WCwise-pdr MM CT WCwise-pdr MM CT
1 11 1-1-1-1-1 154 3.024 1-1-1-1-2 135 3.321 14.07
2 11 1-1-1-3-1 144 3.123 1-2-1-0-1 129 3.406 11.62
3 12 1-1-1-1-0 191 3.245 2-3-0-1-2 165 3.860 15.75
4 13 3-1-1-1-1 165 3.346 1-0-2-0-0 148 4.060 11.48
5 13 1-3-1-1-1 166 3.456 0-2-1-1-0 167 5.054 0.60
6 14 1-3-3-3-2 216 3.567 0-0-0-2-0 191 5.380 13.09
7 16 1-3-0-1-1 212 3.621 0-0-2-2-0 188 7.520 12.76
8 17 1-1-1-1-1 246 3.821 3-2-1-2-1 214 10.930 14.95
9 19 2-1-2-1-2 264 3.982 2-0-2-2-2 229 15.000 15.28
10 20 0-1-1-1-1 226 4.028 1-2-0-1-3 194 14.340 16.49
MEAN MM DEVIATION (%) 12.61

TABLE 6.1 Id Makespan and Computation times of the sample problems


(Processing time range 10-20)
S.No. No. of WCAODS1 GA MM
jobs 'n' WCwise-pdr MM CT WCwise-pdr MM CT %deviation
1 10 1-3-1-3-1 119 2.989 1-0-1-3-1 119 1.758 0.00
2 10 0-0-3-3-1 131 3.273 2-3-0-3-0 129 1.480 1.55
3 11 1-0-3-3-0 117 3.289 2-2-1-3-3 116 2.690 0.86
4 12 3-3-1-1-1 213 3.456 3-2-3-3-1 197 3.570 8.12
5 14 1-1-0-3-1 242 3.543 1-2-0-3-0 241 6.318 0.41
6 15 1-3-1-3-1 181 3.623 2-2-1-0-2 177 5.109 0.57
7 15 1-1-3-1-1 236 3.768 0-2-2-0-1 224 8.186 5.35
8 15 1-3-3-2-1 209 3.856 0-0-1-0-1 199 7.250 5.03
9 19 1-1-0-0-2 266 3.988 1-1-0-0-2 266 16.530 0.00
10 20 3-1-2-1-3 306 4.121 1-1-0-1-2 281 18.077 8.90
MEAN MM DEVIATION (%) 3.08

6.6 WCAODS2 COMPARED WITH GA


6.6.1 Knowledge base obtained through WCAODS2
With many different problems of various sizes generated with the TEG module, all the 64 counters are updated through the SKAL module. Sixteen scheduling rules are obtained with 4500 training examples, one for each combination of WC attribute classes, from the dominating pdr in that combination. The pdr that dominates for a given combination of WC attribute classes is found out from the counter values. The scheduling knowledge base rules thus formulated are given below.

Scheduling knowledge base rules

If (A1 ∈ class 0 and A2 ∈ class 0 and A3 ∈ class 0 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 0 and A3 ∈ class 0 and A4 ∈ class 1), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 0 and A3 ∈ class 1 and A4 ∈ class 0), then select rule 2 (EDT).
If (A1 ∈ class 0 and A2 ∈ class 0 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 2 (EDT).
If (A1 ∈ class 0 and A2 ∈ class 1 and A3 ∈ class 0 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 1 and A3 ∈ class 0 and A4 ∈ class 1), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 0 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 0 and A3 ∈ class 0 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 0 and A3 ∈ class 0 and A4 ∈ class 1), then select rule 1 (LPT).
If (A1 ∈ class 1 and A2 ∈ class 0 and A3 ∈ class 1 and A4 ∈ class 0), then select rule 1 (LPT).

If (A1 ∈ class 1 and A2 ∈ class 0 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 2 (EDT).
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 0 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 0 and A4 ∈ class 1), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 0), then select rule 0 (SPT).
If (A1 ∈ class 1 and A2 ∈ class 1 and A3 ∈ class 1 and A4 ∈ class 1), then select rule 0 (SPT).

6.6.2 Performance comparison


The performance of the proposed WCAODS2 is compared with GA using the results of the sample problems tested, which are shown in TABLES 6.12a, 6.12b, 6.12c and 6.12d. The following observations are made from the comparison. The mean makespan time deviation of WCAODS2 from GA is 7.96% (with a variation between 0.00% and 21.01%). The number of jobs does not much influence the solution quality. The results indicate no significant variation in the solution output with respect to the range of processing times. WCAODS2 provides a solution in a time that is independent of the processing time range; the time required for computation is 0.6 to 1.4 seconds, depending upon the number of jobs to be processed. The computational time is less than that of WCAODS1.

TABLE 6.12a Makespan and Computation times of the sample problems


(Processing time range 5-10)
S.No. No. of WCAODS2 GA MM
jobs 'n' WCwise-pdr MM CT WCwise-pdr MM CT %deviation

1 11 0-0-1-0-0 84 0.783 0-2-0-0-1 76 2.692 10.53


2 13 1-0-1-0-1 102 0.821 0-0-1-2-0 98 4.945 4.08
3 13 0-1-0-0-1 97 0.912 0-3-1-0-2 93 4.066 4.30
4 14 0-1-0-1-0 100 1.102 3-0-3-0-0 91 5.769 9.89
5 14 0-1-0-1-0 108 1.112 0-1-2-3-0 90 4.725 20.00
6 16 0-1-0-0-0 117 1.254 0-1-0-1-1 101 8.021 15.4
7 16 1-0-1-0-1 120 1.125 1-2-1-0-0 104 9.231 15.38
8 17 0-1-0-2-0 112 1.386 1-2-2-0-1 102 10.549 9.80
9 18 0-0-0-0-1 115 1.402 0-2-2-0-2 112 10.274 2.68
10 20 0-1-0-0-0 144 1.398 1-2-0-1-2 119 14.121 21.01
MEAN MM DEVIATION (%) 11.37

TABLE 6.12b Makespan and Computation times of the sample problems


(Processing time range 5-15)
S. No No. of WCAODS2 GA MM
jobs ’n' %deviation
WCwise-pdr MM CT WCwise-pdr MM CT
1 11 2-0-0-1-2 96 0.673 0-0-0-2-0 98 2.967 0.00
2 12 0-0-0-1-1 126 0.763 0-0-1-0-1 124 5.165 1.61
3 12 2-0-0-0-1 119 0.883 1-3-2-0-1 112 4.670 6.25
4 13 0-0-1-1-0 138 0.789 0-0-1-2-1 134 5.989 2.99
5 14 0-0-0-0-0 150 0.973 0-3-2-0-2 142 6.319 5.63
6 15 1-0-0-0-1 135 1.121 0-2-1-2-1 119 8.022 13.45
7 16 0-1-0-1-0 129 1.231 2-0-0-0-0 119 7.802 8.40
8 17 0-1-0-1-0 171 1.321 2-0-1-0-1 161 12.473 6.21
9 19 0-0-0-0-1 170 1.378 2-1-0-1-3 162 13.461 4.94
10 20 1-0-0-0-0 225 1.409 2-0-2-3-1 193 20.604 16.58
MEAN MM DEVIATION (%) 6.61

TABLE 6.12c Makespan and Computation times of the sample problems


(Processing time range 5-20)
S.No. No. of WCAODS2 GA MM
jobs 'n' %deviation
WCwise-pdr MM CT WCwise-pdr MM CT
1 10 0-0-0-0-0 108 0.782 0-2-0-0-2 103 2.527 4.85
2 12 0-0-0-0-2 161 0.745 1-2-3-0-0 160 4.890 0.62
3 12 0-0-0-0-0 126 0.897 0-0-0-2-2 121 3.681 4.13
4 13 0-0-2-0-1 190 0.992 2-2-0-1-0 180 6.428 5.56
5 14 0-0-1-0-0 183 1.103 0-0-0-0-1 191 7.527 0.00
6 15 1-0-0-1-0 227 1.202 2-1-0-2-2 199 8.956 14.07
7 17 0-0-0-0-0 270 1.263 1-2-0-0-2 268 14.560 0.75
8 18 1-1-0-2-0 197 1.345 2-1-1-2-0 190 13.791 3.68
9 19 2-0-0-0-0 193 1.332 0-2-1-2-0 186 13.681 3.76
10 20 0-0-0-0-0 220 1.392 1-2-0-0-2 212 16.648 3.78
MEAN MM DEVIATION (%) 4.12

TABLE 6.12d Makespan and Computation times of the sample problems


(Processing time range 10-20)
S.No. No. of WCAODS2 GA MM
jobs 'n' %deviation
WCwise-pdr MM CT WCwise-pdr MM CT
1 10 0-0-0-0-0 165 0.629 0-0-2-2-2 156 3.131 8.33
2 11 1-0-0-0-0 264 0.745 2-0-0-1-2 222 4.890 18.92
3 12 0-0-0-0-0 229 0.788 2-2-1-2-1 215 5.824 6.51
4 13 0-1-0-0-0 230 0.992 0-2-2-1-2 209 7.088 10.05
5 13 0-0-0-0-0 246 1.023 2-0-1-1-2 228 7.088 7.89
6 14 0-2-0-1-0 198 1.123 2-3-3-2-3 186 6.429 6.45
7 17 1-0-0-0-0 248 1.234 0-3-0-0-2 225 10.989 10.22
8 17 2-0-0-2-0 258 1.332 2-0-0-0-0 239 11.703 7.95
9 19 0-1-0-0-1 338 1.378 0-0-2-1-2 280 17.033 20.71
10 20 0-1-0-0-0 293 1.421 0-0-2-0-0 291 17.802 0.69
MEAN MM DEVIATION % 8.78

6.7 CONCLUSIONS
In this chapter, two different knowledge-based scheduling schemes (WCAODSs), which provide independent pdrs to be followed by the WCs in different planning horizons, are proposed and their performances compared. The comparison made with a GA based scheduling methodology shows that WCAODSs provide solutions closer to optimum. The application of the GT procedure with a dynamic WCwise-pdr set assures good schedules and performance. Also, the schemes are capable of addressing dynamic scheduling problems in FMS.
WCAODSs need less time than the GA based methodology, and their computational time is independent of the size of the problem. They infer the rules quickly for real time control, which is essential for handling the dynamic states of the system. Hence the knowledge-based scheduling rules can be used for rescheduling. Further, the embedded knowledge acquisition mechanism makes WCAODSs intelligent and flexible, and ensures the most appropriate dispatching rules to be followed by the WCs depending upon the status of the system.
The crucial phase in the proposed methodologies is the acquisition of scheduling
knowledge base that is dependent on many factors such as the number of good training
examples, the number and characteristics of problem attributes, the classes of each attribute,
and the operating environment. Since the learning is done off-line, the time required for it is not critical. Here, the scheduling knowledge is defined with randomly generated data; however, it can be defined well for a specific system using its past part-mix data and the attributes that affect its scheduling process.

The next chapter addresses the AGV scheduling problem, in which transportation times are included and must go along with the production schedule. The production schedule, which is obtained by neglecting transportation times using either the off-line heuristics or the knowledge-based schemes, is used as the input to derive a modified production schedule that integrates the AGV schedule.
