
Complex-Valued Neural Networks

The complex-valued neural network is an extension of a (usual) real-valued neural network, whose input and output signals and parameters such as weights and thresholds are all complex numbers (the activation function is necessarily a complex-valued function).
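
To make the definition concrete, here is a minimal sketch of a single complex-valued neuron, in Python with NumPy (both assumed here purely for illustration). The "split" activation tanh(Re z) + i tanh(Im z) used below is one common choice of complex activation function, not the only one:

    import numpy as np

    def complex_neuron(x, w, b):
        # Weighted sum with complex inputs, weights, and threshold (bias).
        z = w @ x + b
        # "Split" activation: apply tanh to the real and imaginary parts separately.
        return np.tanh(z.real) + 1j * np.tanh(z.imag)

    x = np.array([0.5 + 0.2j, -0.3 + 0.8j])   # complex input signals
    w = np.array([0.1 - 0.4j, 0.7 + 0.2j])    # complex weights
    b = 0.05 + 0.05j                          # complex threshold
    print(complex_neuron(x, w, b))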

Neural networks have been applied to various fields such as communication systems, image processing, and speech recognition, in which complex numbers are often used through the Fourier transform. This indicates that complex-valued neural networks are useful. In addition, in the human brain an action potential may have different pulse patterns, and the distance between pulses may differ. This suggests that introducing complex numbers, which can represent phase and amplitude, into neural networks is appropriate.

In recent years, complex-valued neural networks have expanded their application fields to image processing, computer vision, optoelectronic imaging, communication, and so on. This potentially wide applicability gives rise to new theory needed for novel or more effective functions and mechanisms.
(1) Representation of information
Since the input and output signals are complex numbers (i.e., two-dimensional), complex-valued neural networks can represent two-dimensional information naturally, to say nothing of complex-valued signals.

(2) Characteristics of learning

The learning speed of the complex-valued back-propagation learning algorithm (called Complex-BP) for multi-layered complex-valued neural networks is two to three times faster than that of the real-valued one (called Real-BP). In addition, the required number of parameters such as weights and thresholds is only about half that of the real-valued case, which is natural because a single complex parameter carries two real degrees of freedom.

An example application:

(a) The authors applied the Complex-BP algorithm to the recognition and classification of epileptiform patterns in EEG, in particular spike and eye-blink patterns, and reconfirmed the learning characteristics described above.

Complexity Analysis of Neural Networks

There are two fundamental issues in neuro-computation: learning-algorithm development and network-topology design. In fact, these two issues are closely related to each other. The learning ability of a neural network is not only a function of time (or training iterations), but also a function of the network structure. A typical neural network contains an input layer, an output layer, and one or more hidden layers. The number of outputs and the number of inputs are usually fixed (note that in some applications even the number of inputs can change, i.e., there may exist some inputs that are not actually related to the system dynamics and/or the solution of the problem), while the number of hidden layers and the number of hidden neurons in each hidden layer are parameters that can be specified for each application. In a feedforward neural network, all the neurons are connected only in the forward direction (Fig. 1). This is the class of neural networks used most often in system identification and adaptive control (1; 8; 19; 24; 26; 27). As we know from the literature, a multi-layer feedforward neural network with correct weight values (and appropriate transfer functions of the neurons) is capable of approximating any measurable continuous function to any degree of accuracy with only one hidden layer, though the required number of hidden neurons may grow without bound as the accuracy requirement tightens (18; 20; 25). However, in practice it is impossible to construct such a neural network with an arbitrarily large number of hidden nodes. On the other hand, none of the above results indicates how the neural network can be trained to generate the expected output, and in fact no existing learning law can guarantee that this happens in a finite time period.
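
To make the structure concrete, here is a minimal sketch, in Python with NumPy (assumed here for illustration), of a one-hidden-layer feedforward network of the kind the approximation results above refer to; increasing n_hidden (the hidden-layer width) is what buys additional approximation accuracy:

    import numpy as np

    def mlp_forward(x, W1, b1, W2, b2):
        # Hidden layer: affine map followed by a nonlinear transfer function.
        h = np.tanh(W1 @ x + b1)
        # Output layer: affine map (linear output units).
        return W2 @ h + b2

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 3, 16, 1     # hidden width is a free design parameter
    W1 = rng.normal(size=(n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(size=(n_out, n_hidden))
    b2 = np.zeros(n_out)
    print(mlp_forward(rng.normal(size=n_in), W1, b1, W2, b2))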

Neuro-Fuzzy Learning and Genetic Algorithms


Neural network applications involve three phases in solving any problem. The first is training, during which the weights of the network connections are adjusted: the output of the network is compared with the training data, and the error of the trained neural network is evaluated. The network is verified in the second phase: the values of the connection weights are held constant, and the trained network is checked to confirm that its output matches the one obtained in the training phase. Generalization is the last phase, in which the network output is evaluated on data that were not used in the training procedure.

Neuro-fuzzy systems based on interpretable knowledge can be initialized with some domain-expert knowledge. The last layer, which performs the division, is removed, so the system has two outputs [1], [7]-[8]. The error on the first output is computed taking into account the desired output from the learning data; the desired signal on the second output is constant and equal to one (see Fig. 1). After learning shapes the structure according to the initial idea, we can build a modular system in which the rules can be arranged in arbitrary order (see Fig. 2). At the beginning of the numerical simulation, the input fuzzy sets are determined by a modified fuzzy c-means clustering algorithm (sketched below); then all parameters are tuned by the backpropagation algorithm.
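
The text cites a modified fuzzy c-means procedure without specifying the modification, so the following is a minimal sketch of the standard fuzzy c-means algorithm, in Python with NumPy (both assumed here for illustration):

    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-6, seed=0):
        # X: (n_samples, n_features); c: number of clusters; m > 1: fuzzifier.
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster means
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U_new = inv / inv.sum(axis=1, keepdims=True)     # standard membership update
            if np.abs(U_new - U).max() < tol:
                U = U_new
                break
            U = U_new
        return centers, U

The resulting cluster centers and membership degrees can then serve as the initial input fuzzy sets that backpropagation fine-tunes.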
The Architecture of a Self-Organizing Map

We shall concentrate on the SOM system known as a Kohonen network. This has a feed-forward structure with a single computational layer of neurons arranged in rows and columns, and each neuron is fully connected to all the source units in the input layer. A one-dimensional map will have just a single row or column in the computational layer.

(Figure: an input layer fully connected to a grid-shaped computational layer.)

The SOM Algorithm

The aim is to learn a feature map from the spatially continuous input space, in which our input vectors live, to the low-dimensional, spatially discrete output space, which is formed by arranging the computational neurons into a grid. The stages of the SOM algorithm that achieve this can be summarised as follows: (1) initialise the weight vectors; (2) draw a sample input and find the best-matching (winning) neuron; (3) move the weights of the winner and of its neighbours on the grid towards the input; (4) repeat, gradually shrinking the learning rate and the neighbourhood radius.
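
A minimal sketch of these stages, in Python with NumPy (assumed here for illustration; the exponential decay schedules and the Gaussian neighbourhood function below are common choices, not the only ones):

    import numpy as np

    def train_som(X, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
        rng = np.random.default_rng(seed)
        rows, cols = grid
        W = rng.random((rows, cols, X.shape[1]))   # (1) initialise weight vectors
        coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                      indexing="ij"), axis=-1)
        for t in range(n_iter):
            lr = lr0 * np.exp(-t / n_iter)         # decaying learning rate
            sigma = sigma0 * np.exp(-t / n_iter)   # shrinking neighbourhood radius
            x = X[rng.integers(len(X))]            # (2) sample an input ...
            d = np.linalg.norm(W - x, axis=2)
            bmu = np.unravel_index(d.argmin(), d.shape)  # ... and find the winner
            g = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=2) / (2 * sigma**2))
            W += lr * g[..., None] * (x - W)       # (3) pull neighbourhood towards x
        return W

    W = train_som(np.random.default_rng(1).random((500, 3)))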
Property 2: Topological Ordering

The feature map computed by the algorithm is topologically ordered: the spatial location of a neuron in the grid corresponds to a particular domain or feature of the input patterns, so that nearby neurons respond to similar inputs.
Independent Component Analysis

Independent component analysis (ICA) is a recently developed method whose goal is to find a linear representation of non-Gaussian data such that the components are statistically independent, or as independent as possible.

More generally, ICA is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals.

ICA defines a generative model for the observed multivariate data, which is typically given as a large database of samples. In the model, the data variables are assumed to be linear mixtures of some unknown latent variables, and the mixing system is also unknown. The latent variables are assumed to be non-Gaussian and mutually independent, and they are called the independent components of the observed data. These independent components, also called sources or factors, can be found by ICA.
ICA is superficially related to principal component analysis and factor analysis. However, ICA is a much more powerful technique, capable of finding the underlying factors or sources when these classic methods fail completely.

The data analyzed by ICA can originate from many different application fields, including digital images, document databases, economic indicators, and psychometric measurements. In many cases, the measurements are given as a set of parallel signals or time series; the term blind source separation is used to characterize this problem. Typical examples are mixtures of simultaneous speech signals that have been picked up by several microphones, brain waves recorded by multiple sensors, interfering radio signals arriving at a mobile phone, or parallel time series obtained from some industrial process.

We are given two linear mixtures of two source signals which we know to be independent of each other, i.e., observing the value of one signal gives no information about the value of the other. The blind source separation (BSS) problem is then to determine the source signals given only the mixtures.

Putting this into mathematical notation, we model the problem by

x = As,

where s is a two-dimensional random vector containing the independent source signals, A is the two-by-two mixing matrix, and x contains the observed (mixed) signals.
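
A minimal sketch of this setup, in Python with NumPy and scikit-learn's FastICA (both assumed to be available; the sources and mixing matrix below are invented purely for illustration):

    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 8, 2000)
    s1 = np.sin(2 * np.pi * t)              # source 1: sinusoid
    s2 = np.sign(np.sin(3 * np.pi * t))     # source 2: square wave
    S = np.c_[s1, s2]                       # independent, non-Gaussian sources
    A = np.array([[1.0, 0.5],
                  [0.7, 1.0]])              # unknown 2x2 mixing matrix
    X = S @ A.T                             # observed mixtures: each row is one x = As
    S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
    # S_hat recovers the sources only up to permutation and scaling,
    # which is an inherent ambiguity of the ICA model.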
The first plot (below) shows the signal mixtures on the left and the corresponding joint density plot on the right. That is, at a given time instant, the value of the top signal is the first component of x, and the value of the bottom signal is the corresponding second component. The plot on the right is then constructed simply by plotting each such point x. The marginal densities are also shown at the edges of the plot.
