
Artificial Neural Network training Add-in

for

Particle Swarm Optimization Research Toolbox, Version 20101128i
By Tricia Rambharose


tricia.rambharose@sta.uwi.edu

Table of Contents
Introduction
Steps to implement this NN add-in
Functions
    main.m
    trainpso.m
        first_pso
    plot_epochs.m
    plotPSO_particles.m
    display_NN_results.m
    display_NN_settings.m
    ObjFun_NN.m
Changes made to PSORT functions
    Control_Panel.m
    Display_Settings.m
    Objectives.m
    Ethics.m
    RegPSO_main.m
    lbest_core.m and gbest_core.m
Support for users
References

Tricia Rambharose

README: NN add-in for PSORT

Page 1 of 8

Introduction
This add-in to the PSO Research Toolbox (Evers 2009) allows an artificial neural network (ANN, or simply NN) to be trained using the Particle Swarm Optimization (PSO) technique (Kennedy, Eberhart et al. 2001). The add-in acts as a bridge, or interface, between MATLAB's NN toolbox and the PSO Research Toolbox: MATLAB's NN functions call the NN add-in, which in turn calls the PSO Research Toolbox for NN training. This approach to training a NN by PSO treats each PSO particle as one candidate combination of weight and bias values for the NN (Settles and Rylander; Mendes et al. 2002; Venayagamoorthy 2003). The PSO particles therefore move about the search space aiming to minimize the output of the NN performance function.
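As a rough sketch of this particle encoding (illustrative MATLAB only, using the NN toolbox's getwb/setwb weight-vector helpers; this is not code from the add-in itself), one PSO particle corresponds to one candidate weight/bias vector:

```matlab
% Sketch: each PSO particle holds one candidate weight/bias vector for the net.
inputs  = rand(2, 10);  targets = rand(1, 10);  % toy data for illustration
net = feedforwardnet(3);                        % small two-layer example network
net = configure(net, inputs, targets);          % size the weights for this data
dim = numel(getwb(net));                        % PSO dimension = number of weights + biases
particle = -1 + 2*rand(dim, 1);                 % one particle: candidate values in [-1, 1]
net = setwb(net, particle);                     % load the particle into the network
perf = perform(net, targets, net(inputs));      % performance value the swarm minimizes
```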

Steps to implement this NN add-in


1. Download the most recent version of the PSO Research Toolbox.
2. Download the most recent version of the NN add-in and unzip it alongside the PSO Research Toolbox folder, named PSORTyyyymmdd.
3. In the MATLAB interface, go to File -> Set Path -> Add with Subfolders, then find the location of the add-in folder and click OK.
4. In the PSORT, open Control_Panel.m and set the following switches:
   OnOff_Tricias_NN_training = logical(1);
   Note: This is the first switch in the control panel.
   OnOff_user_input_validation_required = logical(0);
   Note: This switch can be found in the control panel's section "(1) BASIC SWITCHES & PSO ALGORITHM SELECTION" under the heading "MISCELLANEOUS FEATURES".
5. Execute the add-in file NN_training_demo.m in one of the following ways:
   5.1. Opening NN_training_demo.m and pressing F5 on the keyboard.
   5.2. Clicking the run button at the top of the MATLAB editor window for NN_training_demo.m.
   5.3. Typing NN_training_demo in the MATLAB command window and pressing Enter.
NN settings can easily be modified in the body of the NN_training_demo file, and PSO parameters can be modified in Control_Panel.m of the PSORT.
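The two switch settings from step 4 look like this inside Control_Panel.m:

```matlab
% In Control_Panel.m of the PSORT:
OnOff_Tricias_NN_training = logical(1);              % enable NN training via PSO
OnOff_user_input_validation_required = logical(0);   % skip repeated input validation
```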


Functions
This section describes the add-in functions created for NN training with PSO.

main.m
This is a simple function used as an example of creating a neural network (NN) and using the PSO algorithm as the training function. Very basic input and target values are used, similar to the examples given in MATLAB's Neural Network toolbox help file. A two-layer NN is created and NN parameters are set. The NN parameters most often adjusted for testing, and therefore set in main, are: the NN training function, transfer functions, goal, max epochs, max_fail, and a new PSO plot function. The NN is first simulated and then trained. For comparative analysis of results, the NN settings and results are displayed by calling the respective functions created for this purpose. Finally, a function is called to plot the NN performance versus epochs to enable more meaningful analysis of the results.
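The steps above can be sketched as follows (a minimal sketch with illustrative data and parameter values, not the contents of the actual main.m; only the 'trainpso' training-function name comes from this add-in):

```matlab
% Hypothetical sketch of what main.m does.
inputs  = [0 1 2 3 4 5];           % very basic input values
targets = [0 0 0 1 1 1];           % matching target values

net = feedforwardnet(3);           % two-layer NN (one hidden layer, one output layer)
net.trainFcn = 'trainpso';         % use the PSO add-in as the training function
net.trainParam.goal   = 1e-3;      % NN performance goal (illustrative value)
net.trainParam.epochs = 100;       % max epochs, i.e. max PSO iterations

y_before = sim(net, inputs);       % simulate before training
net = train(net, inputs, targets); % train via PSO
y_after  = sim(net, inputs);       % compare results after training
```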

trainpso.m
This is a modification of the existing neural network training algorithms provided in MATLAB's Neural Network toolbox, using ideas from Brian Birge's PSO toolbox (Birge 2005), which was created for MATLAB in 2005. This PSO training function is the interface between the MATLAB NN toolbox and the PSO Research Toolbox. For consistency, the formatting and structure of the other NN training algorithms are followed. Training here is based on the idea that all weight and bias values are determined using a swarm optimization approach. Further explanations of the variables and steps in this function are given in the documentation of the function file.

first_pso
This variable is used in trainpso.m as a flag to tell the PSO toolbox that the PSO algorithm is being run for the first time. It prevents redundant PSO parameter validation and display, which are needed only when the PSORT is not being used for NN training. Initially the value of this variable is set to 1; after the first PSO trial its value is automatically set to 0 in RegPSO_main.m.

plot_epochs.m
This is a script to plot NN epochs versus performance at the end of the NN training session. NN Epoch number is given on the x-axis and the NN performance is given on the y-axis. For PSO training of a NN, each PSO iteration corresponds to a NN epoch. For added analysis, the NN training goal is shown as a horizontal line. Figure 1 below gives a screenshot of the plot generated from plot_epochs.m.
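A minimal sketch of this kind of plot (illustrative data and values only; not the actual plot_epochs.m code):

```matlab
% Sketch: NN performance per epoch with the training goal as a horizontal line.
perf_history = [0.9 0.5 0.3 0.2 0.15 0.12];   % example performance per epoch
goal = 0.1;                                   % example NN training goal
plot(1:numel(perf_history), perf_history, '-o');
hold on;
plot([1 numel(perf_history)], [goal goal], '--');  % goal shown as a horizontal line
xlabel('Epoch (PSO iteration)');
ylabel('NN performance');
legend('performance', 'goal');
```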


Figure 1: trainpso epochs vs performance

plotPSO_particles.m
This is a modification of a PSO plotting function created by Brian Birge in his PSO toolbox for MATLAB (Birge 2005). To enable it, net.trainParam.plotPSO must be set to true in NN_training_demo.m. The plot is a dynamic 3D scatter diagram: the x-axis is PSO dimension one, the y-axis is PSO dimension two, and the z-axis is the NN performance, i.e. the PSO objective function result. In this implementation, the number and range of PSO dimensions are determined by the NN weights and biases, so each PSO dimension represents one NN weight or bias. Since no more than three dimensions can be plotted, only the first and last PSO dimensions plus the performance value are used for this plot. Each PSO particle is shown as a red marker. The position of each particle is updated every PSO iteration, and positions over all iterations are displayed, which visually presents the convergence or divergence of the PSO particles in the search space. The NN goal is shown as a straight line in the search space to indicate the approximate area to which the PSO particles are desired to converge. Figure 2 below shows a plot generated from plotPSO_particles.m.

Figure 2: PSO particles (red markers) in the search space, with the NN training goal shown as a line.
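A reduced sketch of this kind of display (illustrative names and random data only; the add-in's function differs):

```matlab
% Sketch: plot particles using the first dimension, last dimension, and performance.
positions = rand(20, 5);     % 20 particles, 5 PSO dimensions (weights/biases)
perf = rand(20, 1);          % objective value (NN performance) per particle
scatter3(positions(:,1), positions(:,end), perf, 'r.');  % particles as red markers
hold on;
xlabel('PSO dimension 1 (first weight)');
ylabel('PSO dimension 5 (last bias)');
zlabel('NN performance');
```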

display_NN_results.m
This is a simple script used to display the main NN results after training, which helps make output more readable and interpretable.

display_NN_settings.m
This is a simple script used to display the main NN settings, which helps make output more readable and interpretable.

ObjFun_NN.m
This is a new objective function added to the PSORT. For NN training the objective is to minimize the NN error (NN target minus NN output); therefore, this is given as the PSO objective function. Each particle's current position is evaluated by this objective function; the results are the performance values used on the 3D scatter plot's z-axis and in the epoch versus performance plot.
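A hedged sketch of what such an objective function looks like (illustrative signature and name; the actual ObjFun_NN.m may differ in its details):

```matlab
% Sketch of an NN objective function for PSO (not the actual ObjFun_NN.m).
function perf = ObjFun_NN_sketch(position, net, inputs, targets)
    net  = setwb(net, position(:));   % candidate weights/biases from the particle
    out  = sim(net, inputs);          % NN output at this position
    err  = targets - out;             % NN error (target minus output)
    perf = mean(err(:).^2);           % mean squared error as the performance value
end
```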

Changes made to PSORT functions


Control_Panel.m
Added a new switch OnOff_Tricias_NN_training to indicate if the PSORT is being used for PSO training of a NN. If this switch is set to true then num_trials must be 1, dim is set to the number of NN weights and biases, objective_id is set to 11 and true_global_minimum is set to the NN training goal.

Display_Settings.m
Inserted OnOff_user_input_validation_required switch to prevent unnecessary display of PSO settings for NN training.

Objectives.m
Defined a new objective id for NN training and specified settings. The center is set as 0 and the range is set as [-1, 1] since it is assumed for NN training that the weight and bias values will be within this range.

Ethics.m
Check if PSORT is being used for NN training. If not, inputs are validated as usual. If so, inputs are only validated on the first call of the PSORT since they are unchanged by subsequent calls.

RegPSO_main.m
Check if the PSORT is being used for NN training. If not, the standard output is displayed.

lbest_core.m and gbest_core.m


Inserted a section to check for PSO stopping conditions when the PSORT is used for NN training. The new stopping conditions here are: the elapsed time meets or exceeds the maximum time allowed for NN training, and PSO stagnation. PSO stagnation here refers to no improvement in the PSO result for a specified number of consecutive NN epochs. Another mechanism that may be useful for detecting stagnation is activated by the switch OnOff_NormR_stag_det in the PSORT's control panel.
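The stagnation and time-limit checks described above can be sketched as follows (all variable names are illustrative, not copied from lbest_core.m or gbest_core.m):

```matlab
% Sketch of a stagnation + time-limit stopping check (illustrative names).
if global_best_perf < best_so_far - tol
    best_so_far = global_best_perf;    % improvement found: reset the counter
    stall_count = 0;
else
    stall_count = stall_count + 1;     % no improvement this NN epoch
end
if stall_count >= max_stall_epochs || toc(start_time) >= max_train_time
    stop_training = true;              % stagnation or max training time reached
end
```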


Support for users


It is expected that acknowledgement and citation of this NN add-in will be made whenever it is used. The author is willing to provide support for this add-in and will consider possible research collaborations. Contact information for the author of this add-in is as follows:

Ms. Tricia Rambharose
Department of Computing & Information Technology
The University of the West Indies
St. Augustine, Trinidad, W.I.
Office phone: +1868 662 2002 ext. 3640
Email: tricia.rambharose@sta.uwi.edu or tricia.rambharose@acm.org
Skype: tricia_rambharose
Website: http://www.tricia-rambharose.com


References
Birge, B. (2005). "Particle Swarm Optimization Toolbox." MATLAB Central.
Evers, G. (2009). "PSO Research Toolbox (written in MATLAB)." George Evers, MSE.
Kennedy, J., Eberhart, R., et al. (2001). Swarm Intelligence. San Francisco: Morgan Kaufmann Publishers.
Mendes, R., Cortez, P., Rocha, M., and Neves, J. (2002). "Particle Swarms for Feedforward Neural Network Training."
Settles, M. and Rylander, B. "Neural Network Learning Using Particle Swarm Optimizers."
Venayagamoorthy, G. K. and Gudise, V. G. (2003). "Comparison of Particle Swarm Optimization and Backpropagation as Training Algorithms for Neural Networks."

