
Soft Computing vs Hard Computing

Soft computing differs from hard computing in that, unlike hard
computing, it is tolerant of imprecision, uncertainty & partial truth.

In effect, the role model of soft computing is the human mind. Hard
computing methods are predominantly based on mathematical
approaches & therefore demand precise inputs.

The guiding principle of soft computing is to exploit the tolerance for
imprecision, uncertainty & partial truth to achieve tractability,
robustness & low solution cost.
Four major AI approaches are

1. Artificial Neural Network

2. Fuzzy Logic

3. Genetic Algorithm

4. Expert System
Some properties of ANN :
1. Inherent parallelism in the network architecture.
This leads to the possibility of very fast hardware implementation of NN.

2. Capability of learning information by example.

3. Ability to generalize to new inputs.


4. Robustness to noisy data that is found in the real
world

5. Fault tolerance
If one of the neurons or connections is damaged, the
whole network still works quite well.
Classification of ANN
No. of layers: single layer / multilayer
Existence of feedback: feedforward / feedback (recurrent)
Types of learning: supervised, unsupervised, reinforced, competitive
Change of weights during use: adaptive / non-adaptive
Type of mathematics used: deterministic / stochastic / fuzzy
Physical aspect: simulated in software / implemented in hardware
Combined with other AI approaches: fuzzy / genetic / evolutionary
Type of task for which used: pattern classification, clustering, function approximation, prediction, optimization
It may be noted that it may not be possible to draw a clear-cut demarcation between the
various architectures & paradigms. There is a lot of overlap, redundancy & fuzziness.
Tasks that neural networks can perform
Pattern Classification (Pattern Recognition):

An input pattern is passed to the network, and the network produces the
representative class as output. The task of pattern classification is to
assign an input pattern to some class from a group of pre-specified
classes. Some well-known applications are ECG waveform
classification, blood cell classification and, in the engineering field,
classification of faulty and non-faulty condition data.

Clustering/Categorization:

In clustering there are no pre-specified classes; instead, a clustering
algorithm explores the similarities between patterns and places similar
patterns in a cluster. This is mainly used for data compression and
exploratory data analysis.
Function Approximation:
Suppose a set of n labeled training patterns has been generated
from an unknown function subject to noise; the task of function
approximation is to find an estimate of the unknown function. Various
engineering problems require function approximation.

Prediction/Forecasting:
Given a set of n samples in a time sequence t1, t2, …, tn,
the task is to predict the sample at a future time tn+1. ANN-based load
forecasting is one such application.

Optimization:
The goal of an optimization algorithm is to find a solution satisfying
a set of constraints such that an objective function is maximized or
minimized. Economic load dispatch is one such engineering problem.
McCulloch-Pitts neuron model
Transfer Functions :

Hard limit Transfer Function:
φ[u(t)] = 1 for u(t) >= 0
        = 0 for u(t) < 0

Symmetrical Hard limit Transfer Function:
φ[u(t)] = 1 for u(t) >= 0
        = -1 for u(t) < 0

Log-Sigmoid Transfer Function:
φ[u(t)] = 1 / (1 + exp(-u(t))), for -∞ < u(t) < ∞

Linear Transfer Function:
φ[u(t)] = u(t), for -∞ < u(t) < ∞

Bipolar Sigmoid Transfer Function:
φ[u(t)] = tanh[u(t)] = (e^u(t) - e^-u(t)) / (e^u(t) + e^-u(t))
        = (1 - e^-2u(t)) / (1 + e^-2u(t)), for -∞ < u(t) < ∞
A particular transfer function is chosen to satisfy some specification of
the problem that the particular neuron is attempting to solve.
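A minimal NumPy sketch of these transfer functions; the function names (hardlim, logsig, tansig, etc.) are illustrative and not taken from the slides:

import numpy as np

def hardlim(u):
    # Hard limit: 1 for u >= 0, 0 for u < 0
    return np.where(u >= 0, 1.0, 0.0)

def hardlims(u):
    # Symmetrical hard limit: 1 for u >= 0, -1 for u < 0
    return np.where(u >= 0, 1.0, -1.0)

def logsig(u):
    # Log-sigmoid: 1 / (1 + exp(-u)), output in (0, 1)
    return 1.0 / (1.0 + np.exp(-u))

def purelin(u):
    # Linear: output equals the net input
    return u

def tansig(u):
    # Bipolar (tanh) sigmoid: (1 - exp(-2u)) / (1 + exp(-2u)), output in (-1, 1)
    return np.tanh(u)

u = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(hardlim(u), logsig(u), tansig(u))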
Notation for multi-input neuron
Two or more
neurons operate
in parallel in a
layer.
Multi-layer network
A layer whose output is the network output is called the output layer.
The other layers are called hidden layers.
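As a rough illustration of how a multi-layer feedforward network computes its output, a sketch that propagates an input through a list of (weight, bias, transfer function) layers; the layer sizes and transfer functions below are arbitrary choices for the example:

import numpy as np

def forward(p, layers):
    # layers: list of (W, b, f) tuples. Each hidden layer's output feeds
    # the next layer; the last layer's output is the network output.
    a = p
    for W, b, f in layers:
        a = f(W @ a + b)
    return a

# Example: 3 inputs -> 4 hidden neurons (log-sigmoid) -> 1 linear output neuron
rng = np.random.default_rng(0)
logsig = lambda u: 1.0 / (1.0 + np.exp(-u))
layers = [(rng.standard_normal((4, 3)), np.zeros(4), logsig),
          (rng.standard_normal((1, 4)), np.zeros(1), lambda u: u)]
print(forward(np.array([0.5, -1.0, 2.0]), layers))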
ANN Learning
It is a process by which the network adapts itself to the
stimulus and eventually produces the desired
response.
During this process, the network adjusts its weights so that its
actual output response converges to the desired output
response.
Once the network has completed the learning phase, it
has acquired knowledge.
Learning Techniques:
1. Supervised 2. Unsupervised
3. Reinforced 4. Competitive
Supervised Learning:

In supervised learning, both the inputs and the outputs are provided.
The network then processes the inputs and compares its resulting
outputs against the desired outputs. Errors are then calculated,
causing the system to adjust the weights which control the network.
This process occurs over and over as the weights are continually
improved in an iterative process.
Two common supervised learning rules/algorithms:
1. LMS Rule (Delta Rule)
2. BP Algorithm
LMS Rule:
Objective is to adjust the weights and biases of the network in
order to minimize the mean square error.
Error is the difference between the target output and network
output.
For weight and bias update (α is the learning rate):
W(k+1) = W(k) + 2α e(k) p^T(k)
b(k+1) = b(k) + 2α e(k)
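A minimal sketch of one pass of the LMS rule for a single-layer linear network, assuming the learning rate α shown in the update rule above:

import numpy as np

def lms_epoch(W, b, P, T, alpha=0.01):
    # One pass of the LMS (Widrow-Hoff) rule over training pairs.
    # W: (n_out, n_in) weights, b: (n_out,) biases,
    # P: list of input vectors, T: list of target vectors.
    for p, t in zip(P, T):
        a = W @ p + b                          # network output
        e = t - a                              # error = target - output
        W = W + 2 * alpha * np.outer(e, p)     # W(k+1) = W(k) + 2*alpha*e(k)*p(k)^T
        b = b + 2 * alpha * e                  # b(k+1) = b(k) + 2*alpha*e(k)
    return W, b

# Hypothetical usage: learn a 1-output linear map from 2 inputs
P = [np.array([1.0, 2.0]), np.array([-1.0, 0.5])]
T = [np.array([3.0]), np.array([-0.5])]
W, b = lms_epoch(np.zeros((1, 2)), np.zeros(1), P, T)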
Back propagation Algorithm :
This algorithm is central to much of research work in the field.
This is based on the gradient principle.
The values of the weights are adjusted by an amount proportional
to the first derivative (gradient), with respect to the weight, of the
error between the desired output and the actual output of the
neuron.
The goal is to decrease (descend) the error function avoiding local
minima and reaching the global minimum.
Weight update (η is the learning rate):
Wpq(k+1) = Wpq(k) - η ∂E/∂Wpq
where
E = (1/2) [y^d(t) - y(t)]^2
with y^d the desired output and y the actual output.
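A minimal sketch of the gradient-descent step for a single log-sigmoid neuron, not the full multi-layer back propagation algorithm; eta stands for the learning rate in the update rule above:

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def gradient_step(W, p, y_d, eta=0.1):
    # One step of W <- W - eta * dE/dW for E = 0.5*(y_d - y)^2,
    # where y = sigmoid(W . p) is the neuron output and y_d the desired output.
    y = sigmoid(W @ p)
    dE_dy = -(y_d - y)            # dE/dy
    dy_du = y * (1.0 - y)         # derivative of the log-sigmoid
    grad = dE_dy * dy_du * p      # dE/dW by the chain rule
    return W - eta * grad

W = np.zeros(3)
W = gradient_step(W, np.array([1.0, 0.5, -1.0]), y_d=1.0)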
Unsupervised (self organized) learning
There is no target output.

During the training sessions the ANN receives input patterns and
arbitrarily organizes the patterns into categories (classes).

Later, when a new input pattern is applied, the ANN provides an
output response indicating the category to which the input
belongs. If a category cannot be found for the input, a new
category is generated.

Unsupervised learning algorithms aim to learn rapidly and can be used in
real time.

Hebb Rule: This does not require any information concerning the
target output.
w_ij(new) = w_ij(old) + a_iq p_jq
where a_iq is the output of the i-th neuron and p_jq the j-th element of the input for pattern q.
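A sketch of the unsupervised Hebb update in matrix form, assuming a is the layer output for a pattern and p the corresponding input (no target output is needed):

import numpy as np

def hebb_update(W, p, a):
    # Hebb rule: w_ij(new) = w_ij(old) + a_i * p_j
    return W + np.outer(a, p)

# Hypothetical usage with a hard-limit layer
W = np.zeros((2, 3))
p = np.array([1.0, -1.0, 1.0])
a = np.where(W @ p >= 0, 1.0, 0.0)   # current layer output
W = hebb_update(W, p, a)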
Tasks:
Clustering - Group patterns based on similarity,
Feature Extraction - Reduce dimensionality of input by
removing unimportant features (i.e. those that do not help in
clustering)
Reinforced Learning
In supervised learning it is assumed that there is a target
output value for each input value. However, in many
situations, there is less detailed information available. In
extreme situations, there is only a single bit of
information after a long sequence of inputs telling
whether the output is right or wrong. Reinforcement
learning is one method developed to deal with such
situations.
The teacher tells whether the actual output is the same as the target output
or not. Thus there is only a pass/fail indication.
(In supervised learning, there is an indication of how close the
actual output is to the desired output.)
If the indication is bad, the network adjusts its parameters in an
iterative process.

Reinforcement learning (RL) is a kind of supervised learning in
that some feedback from the environment is given. However, the
feedback signal is only evaluative, not instructive.

Reinforcement learning is often called learning with a critic, as
opposed to learning with a teacher.
Competitive Learning
In this method, those output neurons which respond strongly to
the input have their weights updated.
When the input pattern is presented, all the neurons in the output
layers compete and the winning neuron output is the network
output.
This is called the winner-takes-all strategy.
The output neuron that wins the competition is called the winner-
takes-all neuron.
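A sketch of a winner-takes-all update in the Kohonen style, where only the winning neuron (the one whose weight vector is closest to the input) has its weights moved toward the input; the distance measure and learning rate are assumptions, not taken from the slides:

import numpy as np

def competitive_step(W, p, alpha=0.1):
    # W: (n_neurons, n_in) weight matrix, p: input vector.
    # Returns the index of the winning neuron and the updated weights.
    distances = np.linalg.norm(W - p, axis=1)   # competition among output neurons
    winner = int(np.argmin(distances))          # winner-takes-all
    W = W.copy()
    W[winner] += alpha * (p - W[winner])        # only the winner's weights are updated
    return winner, W

winner, W = competitive_step(np.random.default_rng(1).standard_normal((3, 2)),
                             np.array([0.5, 0.5]))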
Simple pattern recognition problem

Consider sorting of two parts of a certain machine/mechanism.
Part A is elliptical, smooth and weighs < 1 lb.
Part B is round, smooth and weighs < 1 lb.
Sensors are used to sense shape, texture and weight.

The Hamming network described below is designed explicitly to solve such binary pattern recognition problems.

The first (feedforward) layer performs the Hamming operation: it carries out a correlation
between each of the prototype patterns and the input pattern. The rows of its weight
matrix are set to the prototype patterns, and each element of the bias vector b is set
equal to the number of elements in the input vector.

The second (recurrent) layer is called the competitive layer. The neurons in this layer
compete with each other to determine a winner. After the competition, only
one neuron will have a nonzero output. The winning neuron indicates
which category of the input was presented to the network.

The neurons of the recurrent layer are initialized with its input vector (the first-layer
output). When the outputs of successive iterations produce the same result, the
network has converged.
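A compact sketch of the two-layer Hamming network described above: rows of the first-layer weight matrix are the prototype patterns, each bias equals the number of input elements, and the recurrent second layer competes until only one neuron stays nonzero. The inhibition constant eps and the example patterns are assumptions for illustration:

import numpy as np

def poslin(u):
    # Positive linear transfer function used in the recurrent layer
    return np.maximum(0.0, u)

def hamming_net(prototypes, p, eps=0.5, max_iter=100):
    # prototypes: (n_classes, n_in) matrix of +1/-1 prototype patterns.
    # eps should be smaller than 1/(n_classes - 1).
    W1 = np.array(prototypes, dtype=float)      # rows of W1 = prototype patterns
    b1 = np.full(W1.shape[0], W1.shape[1])      # each bias = no. of elements in input
    a = W1 @ p + b1                             # first (feedforward) layer: correlation
    n = len(a)
    W2 = np.eye(n) - eps * (np.ones((n, n)) - np.eye(n))   # self-excitation, lateral inhibition
    for _ in range(max_iter):                   # second (recurrent, competitive) layer
        a_next = poslin(W2 @ a)
        if np.allclose(a_next, a):              # outputs stop changing: converged
            break
        a = a_next
    return int(np.argmax(a))                    # winning neuron = category of the input

# Hypothetical prototypes for the two parts (shape, texture, weight sensors)
print(hamming_net([[1, 1, -1], [-1, 1, -1]], np.array([1.0, -1.0, -1.0])))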
Summary
The Methodology of applying ANN
Start:
1. State the problem clearly. What are the inputs & outputs?
2. Do analytical modeling for generating training data: collect inputs and the corresponding outputs.
3. Create a repertoire of input-output patterns.
4. Decide on the network architecture & paradigm.
5. Execute the learning algorithm.
6. Carry out testing.
7. If the results are not OK, modify the network and/or include new training cases, and go back to step 5.
8. If the results are OK, proceed to hardware / software implementation.
Application areas in Electrical Engg. based on ANNs*

Application Proportion

System identification 12%

Control 15%

Security assessment 19%

Fault diagnosis 14%

Load forecasting 21%

Operation, planning 7%

Protection 4%

*Aggarwal R., Song Y., Artificial Neural Networks in Power Systems, IEE
Power Engineering Journal, 1998, pp. 129-134.
ANN Application To Improve Selectivity Of OC Relay Adaptively

OCRs need careful settings to avoid maloperation. However,
relays may maloperate as a result of changed system conditions. This
can be tackled to some extent by adding adaptivity to the relay settings.

Tr,A > Tr,B + Tcb,B
Thus maloperation can be detected by observing the current at bus A.

To avoid maloperation in the future, the time setting of relay
Ra is incremented by a suitable value. Modern numerical relays have the facility
to change their settings online.

The NN can be trained in an offline manner.


Load Forecasting: Estimating the active load at various load buses ahead
of the actual load occurrence.

Nature of forecast / lead time / application:
Very short term (a few seconds to several minutes): generation and distribution schedules, contingency analysis for system security
Short term (half an hour to a few hours): allocation of spinning reserve, operational planning and unit commitment, maintenance scheduling
Medium term (a few days to a few weeks): planning for seasonal peak (winter, summer)
Long term (a few months to a few years): planning generation
Let y(k) represent the total load demand at the discrete time k = 1, 2, 3, …

It is generally possible to decompose y(k) into two parts of the form
y(k) = yd(k) + ys(k)
where the subscript d indicates the deterministic part and the
subscript s indicates the stochastic part of the demand.
If k is considered to be the present time, then y(k + j), j > 0 would
represent a future load demand with the index j being the lead time.
For a chosen value of the index j, the forecasting problem is then the
problem of estimating the value of y(k +j) by processing adequate data
for the past load demand.

Extrapolation techniques involve fitting trend curves to basic historical
data adjusted to reflect the growth trend itself. With a trend curve the
forecast is obtained by evaluating the trend curve function at the
desired future point.
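As an illustration of the extrapolation idea, a sketch that fits a simple polynomial trend curve to past peak loads and evaluates it at a future point; the data values and the quadratic form are made up for the example:

import numpy as np

# Hypothetical historical data: year index vs. peak load (MW)
years = np.array([0, 1, 2, 3, 4, 5], dtype=float)
load  = np.array([950, 990, 1040, 1100, 1150, 1210], dtype=float)

# Fit a quadratic trend curve to the historical data
coeffs = np.polyfit(years, load, deg=2)

# Forecast = value of the trend-curve function at the desired future point
forecast = np.polyval(coeffs, 6.0)
print(f"Trend-curve forecast for year 6: {forecast:.0f} MW")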
1) T. M. Peng, N. F. Hubele, G. G. Karady, Advancement in the Application
of Neural Networks for Short-Term Load Forecasting, IEEE Transactions
on Power Systems, Vol. 7, No. 1, Feb. 1992, pp. 250-257.

An improved neural network approach is proposed to produce
short-term electric load forecasts. A new strategy, suitable for selecting
the training cases for the neural network, is presented. This strategy
uses a minimum distance measurement to identify the appropriate
historical patterns of load and temperature readings to be used to
estimate the network weights.

In applying a neural network approach to load forecasting,
advancements have been proposed in the selection of training cases
and the structure of the network.
A search procedure is performed to find those days with load and temperature
readings which most closely resemble the information used to produce a forecast.
In order to produce a daily forecast, three steps are followed:
1. For a given forecast day, the values for the input nodes are identified. These values
are then compared to historical values for the same day type using a selection
procedure. Those days which have load and maximum and minimum weather values
that come closest to the forecast-day inputs are selected to train the system. To
minimize the number of required cases, and based on the results of prior research,
only seven training cases or days are selected for each forecast.
2. The network is trained using these seven cases until the observed error is less than
10^-8.
3. Finally, the input values for the forecast day (the same values used to search the
database in Step 1) are applied to the network, and a forecast for total daily load is
produced.
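A sketch of the minimum-distance selection of training days described in Step 1; the feature layout and the use of Euclidean distance are assumptions, and the exact measure in the paper may differ:

import numpy as np

def select_training_days(forecast_features, history, n_cases=7):
    # history: (n_days, n_features) array of past days' load / max / min temperature.
    # Returns the indices of the n_cases days closest to the forecast-day inputs.
    d = np.linalg.norm(history - forecast_features, axis=1)   # distance to each past day
    return np.argsort(d)[:n_cases]                            # the seven nearest days

# Hypothetical usage: features = [prior-day load, forecast max temp, forecast min temp]
history = np.random.default_rng(0).uniform([800, 20, 5], [1200, 38, 20], size=(365, 3))
idx = select_training_days(np.array([1000.0, 30.0, 12.0]), history)
print("Training-case day indices:", idx)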
The network inputs are the values of the variables of the prior day together with the
predicted values of the variables of the forecast day; the network output is the
forecasted load.
To achieve an accurate and more comprehensive approach to load
forecasting, the days which have similar load and temperature values
should be selected from the past to train the network.
In this work, two years of load and temperature data were available.
The neural network is used to forecast the load in the second year.
A forecast represents a 24-hour ahead total daily load.
The training cases are selected from all previous data.
For example, if the forecasted load is in the 53rd week, the training
cases are selected from the previous 52 weeks.
The network is re-trained each day.
Load forecasting is important due to economic considerations.

Load forecasting is difficult because load is based on the distributed,
individual decisions of people responding to their desire for electricity.

Some factors affecting load demand:
Climatic factors: temperature, cloudiness, wind speed, etc.
Sociological factors: day of the week, festivals, etc.
Random factors: storms, strikes, etc.

Research shows that ANNs produce load forecasts with better accuracy
as compared to other methods (e.g. statistical methods).
2) Y. Zhang, O. P. Malik, G. S. Hope, G. P. Chen, Application of an Inverse Input/Output
Mapped ANN as a Power System Stabilizer, IEEE Transactions on Energy
Conversion, Vol. 9, No. 3, Sept. 1994, pp. 433-441.

An artificial neural network, trained as an inverse of the controlled plant to function as
a power system stabilizer, is presented in this paper.

Data used to train the ANN PSS consisted of the control input and the synchronous
machine response with an adaptive PSS controlling the generator. During training, the
ANN was required to memorize the reverse input/output mapping of the synchronous
machine.
After the training, the output of the synchronous machine was applied as the input of
the ANN PSS and the output of the ANN PSS was used as the control signal.

Simulation results show that the proposed ANN PSS can provide good damping of the
power system over a wide operating range and significantly improve the system
performance.
3) M. Azizur Rahman,M. Ashraful Hoque, On-Line Self-Tuning ANN-Based
Speed Control of a PM DC Motor, IEEE/ASME Transactions On
Mechatronics, Vol. 2, No. 3, Sept.1997, pp.167-178.

This paper presents an on-line self-tuning ANN-based speed control scheme for
a permanent magnet (PM) dc motor.
For precise speed control, an on-line training algorithm with an adaptive
learning rate is introduced, rather than using fixed weights and biases of the
ANN.
The complete system is implemented in real time using a digital signal
processor controller board (DS1102) on a laboratory PM dc motor.
To validate its efficacy, the performances of the proposed ANN-based scheme
are compared with a proportional-integral (PI)-controller-based PM dc motor
drive system under different operating conditions.
The comparative results show that the ANN-based speed control scheme is
robust, accurate, and insensitive to parameter variations and load disturbances.
4) K. R. Cho, Y. C. Kang, S. S. Kim, J. K. Park, S. H. Kang, K. H. Kim,
An ANN Based Approach to Improve the Speed of a Differential
Equation Based Distance Relaying Algorithm, IEEE Transactions on
Power Delivery, Vol. 14, No. 2, April 1999, pp. 349-357.

This paper presents an ANN based approach to improve the speed of a
differential equation based distance relaying algorithm.

As the differential equation used for transmission line protection is
valid only at low frequencies, the distance relaying algorithm requires a
lowpass filter to remove frequency components higher than those
used for relaying. However, the lowpass filter delays the
components used for relaying.

The proposed approach was tested on a 345 kV transmission system
and compared with the conventional distance relaying algorithm without
ANNs from the speed and accuracy viewpoints; the proposed
approach was found to improve the speed of the relaying algorithm.
5) G. K. Singh and Saad Ahmed Saleh Al Kazzaz, Development of an
Intelligent Diagnostic System for Induction Machine Health Monitoring ,
IEEE Systems Journal, Vol. 2, No. 2, June 2008, pp. 273-288

In this paper, the development of an intelligent diagnostic system based on
a neural network for induction machine health monitoring is presented.
The results of using the developed intelligent algorithm for efficient and accurate
identification of machine health are demonstrated.

C++ and MATLAB programs have been used for the development of the
intelligent algorithm (a multi-layer feed-forward neural network with the back
propagation algorithm) for machine diagnostics and monitoring.
6) Whei-Min Lin ; Chih-Ming Hong ; Chiung-Hsing Chen, Neural-Network-
Based MPPT Control of a Stand-Alone Hybrid Power Generation System,
IEEE Transactions on Power Electronics, Vol. 26, no. 12, Dec. 2011, pp. 3571 -
3581

A stand-alone hybrid power system consisting of solar power, wind power,
a diesel engine, and an intelligent power controller is proposed.

MATLAB/Simulink was used to build the dynamic model and simulate the
system. To achieve a fast and stable response for the real power control, the
intelligent controller consists of a radial basis function network (RBFN) and an
improved Elman neural network (ENN) for maximum power point tracking
(MPPT).

The pitch angle of the wind turbine is controlled by the ENN, and the solar system
uses the RBFN, whose output signal is used to control the dc/dc boost
converters to achieve MPPT.
