
COMPARISON OF TWO NEURAL NETWORK METHODS APPLIED TO A TRACKING CONTROL SYSTEM

Gaston Lefranc, Senior Member, IEEE, and Beder Cisternas

Department of Electrical Engineering, Catholic University of Valparaiso, Chile

Fax: 56-32-212746. E-mail: glefranc@ais1.ucv.cl. P.O. Box 4059, Valparaiso, Chile.

Abstract- This paper presents the comparison of the backpropagation neural network and the random search method applied to the tracking control of a direct current motor. The neural network control system uses a very simple scheme and can be used in real time. This technique is based on an adaptive distributed architecture. The two alternatives are compared in terms of the learning process and the performance of the control system.

The neural network is applied as a tracking controller, with feedback error trajectories as inputs. The controller classifies the feedback error signals and generates the appropriate control action for the motor. The control system can follow any arbitrary trajectory, even when the trajectory is changed to one not used in the training process. The neural network is programmed in a personal computer. The motor speed is sensed with a tachometer, and the output of the neural network actuates on the field voltage of the motor.
1. INTRODUCTION

A common industrial problem is to make the output of a system track a given reference trajectory. For example, driving servomotors [7] or moving robot joints [6] to assemble equipment requires that the machine follow a prescribed trajectory. To get a good tracking performance, the dynamics of the controlled systems are usually simple (e.g., linear) and known, so that a modern control policy can be applied. When the structure of the plant is unknown or the parameter variation is excessive, the effectiveness of modern control diminishes. For example, when the environment changes widely and a fixed controller setting is used, the performance of the system is not satisfactory. If a reasonably accurate model is used, the control algorithm could be so computationally intensive that it becomes impossible to implement in a real-time control environment.

The output of the plant, in a tracking system, is controlled trying to maintain it nearly equal to a desired reference input. In that situation, the output is said to track the reference input. In motor drive applications, the motor has to follow a predetermined position or speed trajectory, without causing excessive stresses to the system hardware and with no excessive inrush current. Different techniques are used in tracking control: variable structure [5], sliding mode [4], self-tuning and model reference adaptive controls [8]. The first two techniques require a valid model of the plant being controlled, but they are not robust, in the sense that the controller is sensitive to large parameter variations and noise. Adaptive controls are more effective in compensating the influence of the structured uncertainty of the plant, but it is not clear that they can overcome that uncertainty. Additionally, adaptive control schemes require information about the plant, and may not guarantee the stability of the system in the presence of unmodeled dynamics [7]-[8]. The algorithms obtained are more complicated, needing excessive computation in real time. A self-tuning controller does not track the reference when the plant is nonlinear, and must be redesigned if the plant has to follow different references [9].

The application of neural networks provides a high computation rate, by parallelism; a great degree of robustness, due to the distributed representation; and the ability of adaptation, learning and generalization to improve performance [9]-[10]. Learning control systems employ a neural network to learn the characteristics of the inverse dynamics of the controlled system. Most of them use the desired response and/or the plant output as inputs to the neural network [11]. Network training is based on on-line observation of the inputs and outputs of the plant. This operation can take time, as in the case of the back-propagation method. A model learning scheme using a simple dynamic model for the generalized learning of the neural network [8] provides an efficient procedure for learning the plant dynamics in an off-line way. The controllers obtained can respond flexibly to unmodeled dynamics and unexpected situations, by using their learning and adaptation capabilities, unlike conventional controllers, which have to be programmed to respond to the environmental changes.

This paper presents the comparison of two methods based on neural networks applied to the speed tracking control of a D.C. motor. The system uses a simple real-time control, using feedback error trajectories as inputs to the neural network tracking controller, instead of learning the inverse dynamics of the plant, to determine the controller that generates the proper control signal to achieve the desired performance of the speed of the motor. It is not necessary to identify or to learn the motor dynamics. The methods compared are the backpropagation neural network and the method based on minimization by random search techniques. The two alternatives are compared in terms of the learning process and the performance of the control system.
2. BACKPROPAGATION NEURAL NETWORK

The neural networks used in this work are three-layer neural networks, with the error back-propagation learning algorithm [13] or the random optimization method of Matyas as the learning algorithm [3]. The network consists of input, hidden and output layers. Each layer contains processing elements with sigmoidal nonlinearities. It has been shown that this net can be used to model any continuous nonlinear transformation. A back-propagation neural network is used as a pattern classifier, instead of acting


as the inverse of the controlled plant. This network is feedforward, where each unit receives inputs only from the units in the preceding layer. The information that goes to the input layer units is recoded into an internal representation, and the outputs are generated by that representation rather than by the inputs. The network converts the inputs according to the connection weights. These weights are adjusted during the learning process, to minimize the sum of the squared errors between the desired outputs and the network outputs. The errors are propagated back to assign credit to each connection weight.

The backpropagation method uses the steepest descent method in the learning process of the weights of the neural network. In that process each step size is always a constant η, and the process often converges to a local minimum, although even that cannot be ensured theoretically. The value of each weight and threshold has to be incrementally adjusted, proportionally to the contribution of each unit to the total error. The change in each weight and threshold is calculated as:

Δw_rs(l+1) = η δ_r Y_s + α Δw_rs(l) ,   r = k, j ;  s = j, i      (5)

where η controls the rate of learning and l denotes the number of times a set of input patterns has been presented to the network. The parameter α determines the effect of previous weight changes on the current direction of movement in weight space.
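As an illustration, the forward pass of eqs. (1)-(2) and the momentum update of eqs. (3)-(5) can be sketched in NumPy. This is a minimal sketch, not the authors' implementation; the layer sizes, learning rate and momentum values are assumptions:

```python
import numpy as np

def sigmoid(net, theta, theta0=1.0):
    # Sigmoidal activation of eq. (2); theta is the unit threshold,
    # theta0 the slope parameter (assumed to be 1 in the paper).
    return 1.0 / (1.0 + np.exp(-(net - theta) / theta0))

def backprop_step(w_hid, w_out, th_hid, th_out, x, t, dw_hid, dw_out,
                  eta=0.5, alpha=0.5):
    """One steepest-descent step with momentum, following eqs. (1)-(5)."""
    y_hid = sigmoid(w_hid @ x, th_hid)                  # eqs. (1)-(2), hidden layer
    y_out = sigmoid(w_out @ y_hid, th_out)              # eqs. (1)-(2), output layer
    d_out = (t - y_out) * y_out * (1.0 - y_out)         # output error, eq. (3)
    d_hid = y_hid * (1.0 - y_hid) * (w_out.T @ d_out)   # back-propagated error, eq. (4)
    dw_out = eta * np.outer(d_out, y_hid) + alpha * dw_out  # weight change, eq. (5)
    dw_hid = eta * np.outer(d_hid, x) + alpha * dw_hid      # weight change, eq. (5)
    return w_hid + dw_hid, w_out + dw_out, dw_hid, dw_out
```

Repeating `backprop_step` over the training pairs drives the squared output error down, at the risk of a local minimum noted below.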

In this work, the initial values of the connection weights of the network were set in the range -1 to +1, to reduce the number of iterations and to prevent the hidden units from acquiring identical weights during learning. The backpropagation scheme uses a neural network with six neurons in each layer, because the words needed have six bits. The number of hidden neurons is chosen according to the processing time of the calculations. The same scheme is employed for training with the random optimization method, which requires fewer than one hundred weights to ensure 100% convergence.

Fig. 1.- Neural Network (input, hidden and output layers).

A back-propagation neural network is shown in Fig. 1. The total input to unit j in the hidden layer, or to unit k in the output layer, is

net_r = Σ_s w_rs Y_s ,   r = k, j ;  s = j, i      (1)

where i refers to the input layer, w_rs is the weight from the sth unit to the rth unit, and Y_s represents the output of unit s in the hidden and input layers. A sigmoidal nonlinearity is applied to each unit r to obtain the output Y_r:

Y_r = f(net_r) = 1 / (1 + exp[-(net_r - θ_r)/θ_0])      (2)

where θ_r serves as a threshold of unit r and θ_0 determines the slope of the activation function f. It is assumed that θ_0 = 1. Each layer communicates only with the successive layer, because there is no feedback.

In the learning process, the network is presented with a pair of patterns: an input pattern and a corresponding desired output pattern. Learning consists of changing the weights and thresholds to minimize the mean squared error between the actual outputs and the desired output patterns, in a gradient-descent manner. The activity of each unit is propagated forward through each layer of the network by using (1) and (2). Then, the resulting output pattern is compared with the desired output pattern, and an error δ_k for each output unit is calculated as:

δ_k = (t_k - Y_k) Y_k (1 - Y_k)      (3)

where t_k is the desired output and Y_k is the actual output. The error at the output is back-propagated recursively to each lower layer as follows:

δ_j = Y_j (1 - Y_j) Σ_k δ_k w_kj      (4)

3.- THE MODIFIED RANDOM OPTIMIZATION METHOD OF MATYAS

Backpropagation is one of the most widely used methods to adapt neural networks for pattern classification. However, an important limitation is that it sometimes falls into a local minimum of the error function. The random optimization method of Matyas and its modified algorithm are utilized here to learn the weights and parameters of the neural network. This algorithm is used to find the global minimum of the error function of the neural network [1]. Given a function f from R^n to R and a subset S of R^n, the goal is a point x that minimizes f on S, or at least yields an acceptable approximation of the infimum of f on S. The algorithm is based on the modified random optimization method proposed by Matyas [1], [3]. Let the next equations represent the input/output relations of the neural network:

y_ip^(s)(w) = f_i( z_ip^(s)(w) )      (6)

z_ip^(s)(w) = Σ_j w_ij y_jp^(s-1)(w)      (7)

where y_ip^(s)(w) and y_jp^(s-1)(w) are the output of the ith neuron of the sth layer and the jth output from the (s-1)th layer, corresponding to the pth input pattern, respectively. The weights of the neural network are w_ij (vector w), and f(·) is a nondecreasing smooth function.

The algorithm of the modified random optimization method proposed by Matyas is the following:
Step 1: Select the initial point, vector x(0) in S, and set k = 0. Let M be the maximum number of steps.
Step 2: Generate a Gaussian random vector ξ(k). If x(k) + ξ(k) ∈ S, go to Step 3. Otherwise, go to Step 4.


Step 3: If f(x(k) + ξ(k)) < f(x(k)), let x(k+1) = x(k) + ξ(k) and b(k+1) = 0.4 ξ(k) + 0.2 b(k).
If f(x(k) + ξ(k)) ≥ f(x(k)) and f(x(k) - ξ(k)) < f(x(k)), let x(k+1) = x(k) - ξ(k) and b(k+1) = b(k) - 0.4 ξ(k).
Otherwise, let x(k+1) = x(k) and b(k+1) = 0.5 b(k).
Step 4: If k = M, stop the calculation. If k < M, let k = k + 1 and go to Step 2.

This method ensures convergence to a global minimum of the objective function, with probability 1, on the compact set. It calculates the value of the objective function at the reverse side, x(k) - ξ(k), if the forward step fails to improve the current value of the objective function, and it exhibits faster convergence than the original Matyas method. The method exploits the information of the bias b(k), which is the center of the Gaussian random vector ξ(k) at the kth step. The method is useful when the dimension of the variables becomes very large [2].

4.- NEURAL NETWORK TRACKING CONTROL SYSTEM

The block diagram of the neural network tracking control system is shown in Fig. 2. The plant is a DC motor with speed controlled by the armature. The speed of the motor is sensed by a tachometer that sends the information to a computer through an Input Analog Module. This module digitizes the signal, normalized to +5 volts. The word received by the computer is compared with the reference word, determining the error, which is the input to the neural network. The neural network has to be trained to recognize the error ranges, to obtain an output pattern and to determine the delta to be added to or subtracted from the reference. This information is converted to a voltage and sent to the driver control card that controls the DC motor. Figure 3 shows a detail of the tracking control system. The communication card is based on an 8031 microcomputer, composed of an Input Analog Module, an Output Analog Module and the Microcomputer Module. The card is connected to a 40 MHz 80386 computer through a serial input. The purpose of this card is to have an intelligent interface between the computer, the sensor and the actuator. The driver control card, the actuator, consists of an amplifier, a digital-analog converter, an EPROM memory, a counter, a comparator, a zero-crossing detector, an isolation circuit, and a thyristor power unit. Both cards have been developed at the Catholic University of Valparaiso labs.

The main objective is to find a weight vector w which gives small values of the total error function E(w), the objective function to be minimized:

E(w) = Σ_p E_p(w)
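As a small sketch of this objective (assuming a sum-of-squared-errors E_p and a caller-supplied `forward` function, both of which are illustrative, not the paper's code):

```python
import numpy as np

def total_error(forward, w, patterns):
    """E(w) = sum over input patterns p of E_p(w), where E_p is taken here
    as the squared error between desired and actual outputs."""
    # `patterns` is a sequence of (input, desired output) pairs;
    # `forward(w, x)` returns the network output for input x under weights w.
    return float(sum(np.sum((t - forward(w, x)) ** 2) for x, t in patterns))
```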

Fig. 3.- Neural network tracking control system.

Training involves using the error signals e between the plant output signals and the desired signals as inputs to the neural network. The neural network tracking controller, shown in Fig. 4, contains four units: preprocessing, neural network classifier, look-up table, and servo drive unit. The preprocessing part scales the error signal into the range of -1 to 1 and partitions it into several groups, each group clustering those error signals that an appropriate control action would correct. These errors serve as inputs to the neural network classifier. The neural network classifier is a feedforward three-layer backpropagation network, which consists of the input layer, with a number of units depending on the application and the number of trajectories to follow; one hidden layer; and six neurons in the output layer.
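The preprocessing and grouping stages can be illustrated as follows. In the paper the grouping is learned by the network; the direct threshold version below is only a sketch of the Table 1 partition, and the scaling constant is an assumption:

```python
def preprocess(error, max_error):
    """Scale the raw speed error into [-1, 1] (preprocessing unit)."""
    return max(-1.0, min(1.0, error / max_error))

def classify(e):
    """Partition the scaled error into the seven groups of Table 1 and
    return the 6-bit output pattern (all zeros means no correction)."""
    groups = [(-1.00, -0.50, "100000"),
              (-0.50, -0.20, "010000"),
              (-0.20, -0.05, "001000"),
              (-0.05,  0.05, "000000"),
              ( 0.05,  0.20, "000100"),
              ( 0.20,  0.50, "000010"),
              ( 0.50,  1.00, "000001")]
    for lo, hi, pattern in groups:
        if lo <= e <= hi:   # boundary values fall in the first matching group
            return pattern
    return "000000"
```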

Fig. 2.- Block diagram of the system.

The modified random optimization method proposed by Matyas, applied to the learning of the weights of the neural network, takes the weight vector as x and the total error function as the objective function f(x). The algorithm converges with probability 1 on a compact set. However, the set of w is not necessarily compact, and these parameters could take arbitrarily large values. This means that the method ensures convergence if the calculations are confined to a compact region, for example, the region in which the absolute values of all components of the w vector are below 100. In this work, the neural network has three layers of six neurons each, which requires 72 weights. This method is used in a neural network applied to the speed tracking control of a DC motor, and it is compared to the backpropagation method.
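Steps 1-4 of the Matyas method, confined to such a compact region, can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation; the step deviation σ, the step budget M and the bound on S are illustrative assumptions:

```python
import numpy as np

def matyas_minimize(f, x0, sigma=0.05, max_steps=1000, seed=0,
                    in_S=lambda x: bool(np.all(np.abs(x) < 100.0))):
    """Modified random optimization method of Matyas (Steps 1-4)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)   # Step 1: initial point x(0) in S
    b = np.zeros_like(x)              # bias b(k), center of the Gaussian vector
    for _ in range(max_steps):        # Step 4: stop after M steps
        xi = b + sigma * rng.standard_normal(x.shape)  # Step 2: Gaussian vector xi(k)
        if not in_S(x + xi):          # outside the compact set: skip to next step
            continue
        fx = f(x)
        if f(x + xi) < fx:            # Step 3: forward step improves
            x, b = x + xi, 0.4 * xi + 0.2 * b
        elif f(x - xi) < fx:          # reverse step improves
            x, b = x - xi, b - 0.4 * xi
        else:                         # no improvement: shrink the bias
            b = 0.5 * b
    return x
```

For the tracking controller, x would be the 72-component weight vector and f(x) the total error E(w).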


Fig. 4.- Neural network tracking controller (preprocessing, classifier, look-up table and servo drive units).

After the learning process, the controller performs the function of classification and/or mapping. The outputs of the neural net classifier are rounded off to 1 or 0, where 1 indicates an abnormal case. The look-up table determines whether to increase or to reduce the control signal, based on the output decision produced by the neural net classifier. Table 1 relates the possible error ranges and their corresponding actions. This signal is fed through the servo drive unit to generate the adequate signal Va to the motor to correct the deviation. The servo unit contains the D/A converter, amplifier, trigger, and SCR. The error signal is scaled down to values within the range -1 to +1, according to the specified operation range. The coefficients k1, k2, k3 are chosen based on the simulation results.

TABLE 1. Error ranges, output patterns and control actions

Error range e      Output pattern   Control action
[-1.0, -0.5]       1 0 0 0 0 0      Va - k3
[-0.5, -0.2]       0 1 0 0 0 0      Va - k2
[-0.2, -0.05]      0 0 1 0 0 0      Va - k1
[-0.05, +0.05]     0 0 0 0 0 0      Va
(+0.05, 0.2]       0 0 0 1 0 0      Va + k1
(0.2, 0.5]         0 0 0 0 1 0      Va + k2
(0.5, 1.0]         0 0 0 0 0 1      Va + k3

5.- TRAINING COMPARISON

The weight values obtained during the training of the neural network based on the Matyas method have the characteristic that the error associated to the input and output patterns is close to zero percent, as shown in Table 2. The final error, for a backpropagation neural network, is equal to the desired error, below 1%. In the modified random optimization method of Matyas, the final error is less than the desired error, observing 0% below 0.5% desired error. The error corresponds to the mean square error. The maximum error desired is given by the user before starting the training. The final error is the error of the neural network after the training process, equal to or less than the desired error. The number of iterations is the number of algorithm steps needed to reach the condition that the final error is equal to or less than the desired error. Table 3 shows the same for the backpropagation neural network. For reaching the maximum error of 0.5%, the backpropagation neural network needs 94 iterations, and the random optimization method of Matyas needs 95. The advantage of the Matyas method is to obtain a smaller final error than the backpropagation method. The backpropagation method steps rapidly toward the minimum, but it stays within the limits. The other method ensures the global minimum.

TABLE 2. Modified method of Matyas

% Maximum error desired   % Final error obtained   Number of iterations
5                         1.1                      2
1                         1                        2
0.9                       0.76                     50
0.8                       0.275                    2
0.7                       0.48                     92
0.6                       0.098                    2
0.5                       0.483                    95
0.4                       0                        101
0.3                       0                        92
0.2                       0.04                     6
0.1                       0                        6

TABLE 3. Backpropagation method

% Maximum error desired   % Final error obtained   Number of iterations
5                         4.87                     23
1                         1                        52
0.9                       0.9                      56
0.8                       0.8                      61
0.7                       0.7                      68
0.6                       0.6                      78
0.5                       0.5                      94
0.4                       0.4                      124
0.3                       0.3                      180
0.2                       0.2                      350
0.1                       0.1                      1288

To avoid oscillations in the output, an automatic reset is done when the number of iterations is greater than 200. This problem occurs with a frequency of 15% in the random optimization method of Matyas. In the backpropagation neural network, the CPU time increases in a geometrical way, to obtain a logic zero or logic one with errors below 0.2%. In the random optimization method of Matyas, the CPU time is less than in the backpropagation method for errors below 3%. With the random optimization method of Matyas, it is possible to search for the adequate weights and parameters of the neural network in a faster way than with the backpropagation method.

The selection of the patterns for the error ranges is based on the weights and biases for every pair of input-output patterns of the network. The patterns used in the training are shown in Table 4.

Table 4.- Iterations and errors for every pattern

Pattern        Backprop iterations   Backprop error %   Matyas iterations   Matyas error %
0 1 0 0 0 0    180                   0.3                202                 0.0
0 0 1 0 0 0    180                   0.3                55                  0.0
1 0 0 0 0 0    183                   0.3                118                 0.0
0 0 0 0 0 0    181                   0.3                1                   0.0
0 0 0 1 0 0    180                   0.3                47                  0.0
0 0 0 0 1 0    183                   0.3                83                  0.0
0 0 0 0 0 1    180                   0.3                102                 0.0

A database is created with the information of the weights and biases for every pair of input-output patterns of the network. The output pattern is given by the output layer of the network, depending on the error value. The logic zero and the logic one are considered for the output word at the output of the neural network, related to the armature voltage. The output word of the net has 6 bits, with a sequence of 0s and only one 1.
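The look-up table step that decodes the 6-bit output word into an armature-voltage correction can be sketched as below; the gain values k1, k2, k3 are hypothetical placeholders, since the paper chooses them from simulation results:

```python
def control_action(word, Va, k=(1.0, 2.0, 3.0)):
    """Map the 6-bit output word to a corrected armature voltage,
    following the pattern/action pairing of Table 1."""
    k1, k2, k3 = k  # hypothetical gains; the paper picks them by simulation
    table = {"100000": -k3, "010000": -k2, "001000": -k1,
             "000000": 0.0,
             "000100": +k1, "000010": +k2, "000001": +k3}
    return Va + table.get(word, 0.0)  # unrecognized words leave Va unchanged
```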


6.- PERFORMANCE OF THE CONTROL SYSTEM.



To show the performance of the neural network tracking control system, a speed set point, a change of set point, and a disturbance on the shaft of the motor are applied. To compare the performance of the neural network controllers, the trajectory tracking responses of the motor were obtained.
Fig. 5 and Fig. 6 show the output of the system with each neural network, the backpropagation method and the modified random optimization method of Matyas, respectively. It is observed that the second one is smoother than the first one. The peaks due to the backpropagation neural network appear when the algorithm exceeds the time needed to arrive at the global minimum. Fig. 7 and Fig. 8 show the performance of the system with the different neural networks when the set point is changed from 2700 rpm to 2300 rpm. Both arrive at the desired value, in a smoother way with the neural network based on the modified random optimization method of Matyas. Fig. 9 and Fig. 10 show the reaction of the control system to a disturbance applied to the shaft of the motor, at the 2700 rpm set point. It is observed that the control system based on the backpropagation neural network acts in a slower way in comparison with the control system based on the neural network with the modified random optimization method of Matyas.

Fig. 5.- Tracking control system with the backpropagation neural network controller.
Fig. 6.- Tracking control system with the neural network controller based on the modified random optimization method of Matyas.

Fig. 9.- Tracking control system with the backpropagation neural network controller.

Fig. 7.- Tracking control system with the backpropagation neural network controller.

Fig. 8.- Tracking control system with the neural network controller based on the modified random optimization method of Matyas.

Fig. 10.- Tracking control system with the neural network controller based on the modified random optimization method of Matyas.


7.- CONCLUSIONS

The comparison of two methods based on neural networks, applied to the speed tracking control of a direct current motor, has been presented. The system uses a simple real-time control, using feedback error trajectories as inputs to the neural network tracking controller, to determine the controller that generates the proper control signal to achieve the desired performance of the speed of the motor. It is not necessary to identify or to learn the motor dynamics. The neural network is programmed in a personal computer, the motor speed is sensed with a tachometer, and the output of the neural network actuates on the field voltage of the motor. The methods compared are the backpropagation neural network and the method based on the modified random optimization method of Matyas.

In the learning process, the backpropagation method exhibits the advantage of converging to the minimum, with errors of more than 1%, but it requires an excessive amount of time to find the global minimum of the total error function, and it sometimes falls into a local minimum. The choice of η affects, in an important way, the learning speed. The modified random optimization method of Matyas is able to find the weights and parameters of the neural network faster than the backpropagation method. This method permits finding the global minimum of the total error function of the neural network. The value of the variance of ξ has a crucial effect on the learning speed.

The outputs of the system with the two neural networks, the backpropagation method and the modified random optimization method of Matyas, are similar: both reach the set point, but the second one is smoother than the first one. The peaks due to the backpropagation neural network appear when the algorithm exceeds the time needed to arrive at the global minimum.
In the reaction of the control system to a disturbance applied to the shaft of the motor, the control system based on the backpropagation neural network acts in a slower way in comparison with the control system based on the modified random optimization method of Matyas. Both systems can follow any arbitrary trajectory, even when the desired trajectory is changed to one not used in the training.

REFERENCES

[1] Baba N., "A new approach for finding the global minimum of error function of neural networks", Neural Networks, vol. 2, no. 5, pp. 367-373, 1989.

[2] Solis F. and Wets R., "Minimization by random search techniques", Mathematics of Operations Research, vol. 6, no. 1, Feb. 1981.

[3] Matyas J., "Random optimization", Automation and Remote Control, vol. 26, pp. 246-253, 1965.

[4] Hashimoto H., Maruyama K., Harashima F., "A microprocessor-based robot manipulator control with sliding mode", IEEE Trans. Ind. Electronics, vol. IE-34, pp. 11-18, 1987.

[5] El-Sharkawi M., Huang C., "A variable structure tracking of the motor for high performance applications", IEEE Trans. Energy Conversion, vol. 4, pp. 643-650, 1989.

[6] Hsia T.C., Lasky T., and Guo Z., "Robust independent joint controller design for industrial robot manipulators", IEEE Trans. Ind. Electronics, vol. 38, pp. 21-25, 1991.

[7] El-Sharkawi M., Akherraz M., "Tracking control technique for induction motors", IEEE Trans. Energy Conversion, vol. 4, pp. 81-87, 1989.

[8] Naitoh H., Tadakuma S., "Microprocessor-based adjustable speed DC motor drives using model reference adaptive control", IEEE Trans. Industry Applications, vol. IA-43, pp. 313-318, 1991.

[9] Kraft L.G., Campagna D.P., "A comparison between CMAC neural network control and two traditional adaptive control systems", IEEE Control Systems Magazine, vol. 10, no. 3, pp. 36-43, 1990.

[10] Ozaki T. et al., "Trajectory control of robotic manipulators using neural networks", IEEE Trans. Ind. Electronics, vol. 38, pp. 195-202, 1991.

[11] Yamada T., Yabuta T., "An extension of neural network direct controller", IEEE Control Systems Magazine, vol. 8, no. 2, pp. 17-21, 1988.

[12] Pao Y., "Adaptive Pattern Recognition and Neural Networks", Addison-Wesley, USA, 1989.

[13] Tai H., Wang J., Ashenayi K., "A neural network-based tracking control system", IEEE Trans. Ind. Electronics, vol. 38, pp. 195-202, 1991.

[14] Gehlot N., Alsina P.J., "A comparison of control strategies of robotic manipulators using neural networks", Proc. IEEE IECON, pp. 688-693, 1992.

[15] Lefranc G., Zazopulos J., Ruz R., "Reconocimiento de imágenes geométricas simples mediante redes neurales asociativas Hopfield", Proceedings X Congreso Chileno de Ingeniería Eléctrica, Valdivia, 1993.

