
Adaptive Back Propagation Radial Basis Function Networks for Time Series Prediction


Copyright © 2008, Douglas D. Hammer; Principal Investigator
clearlineofsight@gmail.com

In this paper we explore an adaptive radial basis function network (ARBFN) as an extension of the traditional radial basis function network (RBFN) and the back propagation network (BPNN). In the ARBFN, signals are normalized to the range (0..1), as in the traditional BPNN. Each hidden layer node consists of a center and a bandwidth. From a normalized perspective, all of the centers, or mean values, lie between (0..1). When the hidden layer is initialized, each hidden node's center is set to its index divided by the number of hidden nodes, so that the centers span the range of mean values. In addition, at each hidden node signal event, the node's bandwidth is computed from its maximum input and divided into the argument of its Gaussian kernel function. Compared to both the BPNN and the RBFN, the ARBFN makes more accurate predictions because its bandwidth is updated to reflect the actual signal.

INTRODUCTION
Radial basis functions are based on Gaussian kernel functions (Eq. 1). Designed as function approximators, an RBF (Eq. 2) evaluates an input value against a mean, measuring its metric distance from the center. This distance is passed into a Gaussian kernel (Eq. 3) and multiplied by a weight coefficient. In the context of a neural network, hidden layer nodes take multiple inputs from an input layer and apply an RBF in the hidden layer. The normal radial basis function assumes a bandwidth of one, covering the entire spectrum of input and mean values, whereas in the adaptive model the width, or bandwidth, of the kernel function is based on a hidden node's maximum input. The ARBFN (Eq. 4) has greater accuracy than both the RBFN and the BPNN.
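To make the computation concrete, here is a minimal Python sketch of a single adaptive hidden node, assuming the node sums its weighted inputs before measuring the distance from its center; the combination rule, the eps guard, and all names here are our own illustration, not the paper's code.

import math

def arbf_activation(inputs, weights, center, eps=1e-8):
    # Adaptive bandwidth: the node's maximum input (Eq. 5), guarded against zero.
    h = max(max(inputs), eps)
    # Weighted sum of the incoming signals from the input layer.
    s = sum(w * x for w, x in zip(weights, inputs))
    # Distance of the combined signal from this node's center (Eq. 2).
    r = s - center
    # Adaptive Gaussian kernel with the argument divided by the bandwidth (Eq. 5).
    return math.exp(-(r / h) ** 2)

# Two normalized inputs, unit weights, a node centered at 0.5:
print(arbf_activation([0.42, 0.57], [1.0, 1.0], 0.5))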

PROCESS MODEL
The ARBFN is a three layer neural network with two inputs, a hidden layer composed of ARBF nodes, and an output node. The network is trained with the two previous days as input values and the next value as the target trained on the output node. The training process uses a traditional back propagation algorithm (Fig. 1). For a more illustrative example of the ARBF, Fig. 2 shows the relation between the output weights and the hidden layer node's bandwidth. An additional structural improvement was implemented that reduced the number of connections: rather than having two separate connections between nodes, only one is required (see the sketch below). Each connection keeps a pointer from its source to its destination. When a connection is created, the destination node takes a pointer to the incoming connection and places it in its back connection array. At each update iteration, the error that is back propagated travels through the same pointers that carry the input, because each connection holds a pointer to its source. This process reduced memory use in addition to structural complexity.
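A minimal sketch of the single-connection scheme in Python; the Node and Connection classes and their field names are hypothetical, since the paper does not publish its data structures.

class Node:
    def __init__(self, name):
        self.name = name
        self.out_connections = []    # connections this node feeds forward
        self.back_connections = []   # pointers to incoming connections, walked during back propagation

class Connection:
    # One shared link: carries the signal forward and the error backward.
    def __init__(self, source, destination, weight=0.1):
        self.source = source
        self.destination = destination
        self.weight = weight
        source.out_connections.append(self)
        destination.back_connections.append(self)  # destination stores the back pointer

hidden = Node("hidden")
output = Node("output")
link = Connection(hidden, output)

# The back-propagated error re-uses link.source, so no second connection is needed.
assert output.back_connections[0].source is hidden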

CONCLUSIONS
The ARBFN is an effective and efficient neural network that outperforms both the BPNN and the RBFN (Fig. 3, 4). One side effect of updating the bandwidth from the maximum input is that signals with a mean near one become flooded. Additional bandwidth update rules could be constructed to find a maximum within a range of inputs over time. This could be implemented by having each hidden node store an internal vector of inputs over a range of values (a sketch follows). Overall, the ARBFN is a useful modification to an existing back propagation neural network that adds a new range of possible configurations based on actual signal input.
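One possible reading of that proposed rule, sketched in Python; the window length and every name below are our assumptions rather than the paper's.

from collections import deque

class WindowedBandwidth:
    # Bandwidth taken as the maximum input seen within a sliding window,
    # using the internal vector of recent inputs suggested above.
    def __init__(self, window=50, eps=1e-8):
        self.inputs = deque(maxlen=window)
        self.eps = eps

    def update(self, x):
        self.inputs.append(x)
        return max(max(self.inputs), self.eps)

bw = WindowedBandwidth(window=3)
for x in [0.2, 0.9, 0.4, 0.3, 0.1]:
    print(bw.update(x))  # 0.2, 0.9, 0.9, 0.9, 0.4 -- the old maximum expires once it leaves the window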
[Figure 1 diagram: input nodes f(x) and f(x-1) feed ARBF hidden nodes with normalized centers 0.1, 0.2, ..., 1.0 through weighted connections; the hidden sums pass through a sigmoid output node that emits f(x+1).]
Figure 1. Adaptive Radial Basis Function Network. Inputs are normalized to [0..1], accounting for the two previous values of an objective function, where the goal of the network is to output the next value in the function. The hidden layer is composed of multiple nodes, each assigned a center. Hidden bandwidths are computed from a node's maximum input. In the normalized range, the centers, or mean values, are placed sequentially from 0.1 to 1.0, spaced by one divided by the number of hidden nodes. The output node is trained in this example on the next value, f(x+1), and errors are back-propagated using the traditional BP architecture. Original actual values are recovered through the inverse of the normalization.
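The normalization and its inverse might look like this minimal min-max sketch; min-max scaling and the function names are our assumptions, as the paper only states that signals are normalized to [0..1].

def normalize(series):
    # Min-max scale a series into [0, 1]; return the scaled values
    # along with the (lo, hi) pair needed to invert the mapping.
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series], (lo, hi)

def denormalize(y, lo, hi):
    # Map a network output back onto the original scale.
    return y * (hi - lo) + lo

scaled, (lo, hi) = normalize([101.0, 104.5, 99.0, 107.25])
print(denormalize(scaled[1], lo, hi))  # 104.5, the original value restored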

[Figure 2 plot: the adaptive Gaussian kernel over an input range of about -1 to 1.5, annotated with the connection weight w and the bandwidth h.]

Figure 2. The Adaptive Radial Basis Function is a standard Gaussian kernel function divided by a bandwidth (h). In combination with the back propagation network, each hidden node carries its own bandwidth and center. At each node signal event, a new bandwidth is derived from the node's maximum input signal. In the figure above, (w) is the connection weight between nodes, shown together with its effect on the output signal.
[Figure 3 plot: single step ahead predictions over 200 test days, y-axis approximately 0.06 to 0.18, with curves for the SIGNAL, ARBFN, BPNN, and RBFN.]
Figure 3. Single Step Ahead Prediction. Each network was trained on six hundred days. The plot above shows the network test, with only the two previous days as network input, and the resulting output. The ARBFN tracks the standard back propagation network (BPNN) closely and can make more accurate predictions. The standard RBFN is similar to an ARBFN with a fixed bandwidth of 1.0, but has a greater margin of error.

[Figure 4 plot: error over 200 test days, y-axis 0 to 0.006, with curves for the ARBFN, BPNN, and RBFN.]
Figure 4. Convergence between the target and the actual network output. The BPNN and the ARBFN show close to the same error over time. Initial actual values diverge somewhat from the target signal; this difference is measured over time as the RMSE between the network output and the actual signal.
Adaptive Radial Basis Function Equations Plate

Standard Gaussian Function

1.)  f(x) = e^{-x^2}

Normal Radial Basis Function

2.)  \phi(x) = \sum_i w_i K(\lVert x - \mu_i \rVert)

Normal Gaussian Kernel

3.)  K(r) = e^{-r^2}

Adaptive Radial Basis Function

4.)  \phi(x) = \sum_i w_i K_{h_i}(\lVert x - \mu_i \rVert), \quad h_i = \max(\text{inputs to node } i)

Adaptive Gaussian Kernel

5.)  K_h(r) = e^{-(r/h)^2}

Root Mean Squared Error Function

6.)  \mathrm{RMSE} = \sqrt{\tfrac{1}{n} \sum_{t=1}^{n} (y_t - \hat{y}_t)^2}
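As a numeric check of Eq. 6, a short Python sketch; the series values are invented purely to exercise the formula.

import math

def rmse(targets, outputs):
    # Root mean squared error between the target signal and the network output (Eq. 6).
    n = len(targets)
    return math.sqrt(sum((y - o) ** 2 for y, o in zip(targets, outputs)) / n)

print(rmse([0.10, 0.12, 0.11], [0.09, 0.13, 0.11]))  # ~0.0082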
