Abstract—In this work, an algorithm that adjusts parameters using a Bayesian method for cumulative rainfall time series forecasting, implemented by an ANN-based filter, is presented. The adjustment criterion generates a posterior probability distribution of time series values from the forecasted time series, where the structure is changed by Bayesian inference. These values are approximated by the ANN-based predictor, in which a new input is taken in order to change the structure and parameters of the filter. The proposed technique is based on the prior distribution assumptions. Predictions are obtained by weighting all possible models and parameter values according to their posterior distribution. Furthermore, depending on whether the time series is smooth or rough, the fitting algorithm can be adapted: as a function of the long- or short-term stochastic dependence of the time series, an on-line heuristic law sets the training process, modifies the NN topology, and changes the number of patterns and iterations, in addition to the Bayesian inference, in accordance with the Hurst parameter H, taking into account that the forecasted series has the same H as the real time series.
The performance of the approach is tested over a time series obtained from samples of the Mackey-Glass delay differential equation and cumulative rainfall time series from several geographical points of Córdoba, Argentina.

Index Terms—Bayesian approach, neural networks, time series forecast, Hurst parameter.
I. INTRODUCTION
The ANNs have recently been used as predictor filters with an unknown number of parameters by many authors, such as in [2], [13], [16]. In turn, in this



Manuscript received April 18, 2012. This work was supported in part by Universidad Nacional de Córdoba, FONCYT-PDFT PRH N3 (UNC Program RRHH03), SECYT UNC, National University of Catamarca, the National Agency for Scientific and Technological Promotion (ANPCyT), and the Department of Electrotechnics, FCEFyN, Universidad Nacional de Córdoba.
C. Rodriguez Rivero is with Universidad Nacional de Córdoba, Department of Electrical and Electronic Engineering, Faculty of Exact, Physical and Natural Sciences, Córdoba, Argentina (phone: 0054-351-4334147; e-mail: cristian.rodriguezrivero@gmail.com).
J. A. Pucheta is with Universidad Nacional de Córdoba, Department of Electrical and Electronic Engineering, Faculty of Exact, Physical and Natural Sciences, Córdoba, Argentina (phone: 0054-351-4334147; e-mail: julian.pucheta@gmail.com).
M. R. Herrera is with National University of Catamarca, Department of Electronic Engineering, Faculty of Technology and Applied Sciences (e-mail: martincitohache@gmail.com).

work the main purpose is to estimate water availability, which is useful for control problems in agricultural activities such as seedling growth and decision-making. The number of filter parameters is set as a function of the roughness of the time series. These parameters are considered as random variables whose distribution is inferred by posterior probability from the data, in which the number of hidden neurons and the modeling uncertainty are included as additional parameters [1]. The model selection problem is, therefore, unavoidable; researchers must decide which model best summarizes the data for each task of interest.
The Bayesian approach permits the propagation of uncertainty in unknown quantities to other assumptions in the model, which may be more generally valid or easier to guess in the problem. For neural networks, the Bayesian approach was pioneered in [2]-[3] and reviewed in [5], [6] and [7]. The main difficulty in model building is controlling the complexity of the model. It is well known that the optimal number of degrees of freedom in the model depends on the number of training samples, the amount of noise in the samples and the complexity of the underlying function being estimated.
The procedure of determining the prior density and likelihood functions associated with rainfall time series uncertainty is very complicated, and it is necessary to assume a linear and normal distribution within the framework of the proposed parameters. The model selection problem often amounts to discovering an organization of a model's parameters that is well matched to the task, such as the network topology (e.g., number of patterns, layers, and hidden units per layer) that yields the best generalization performance. A common result is that models with too many free parameters tend to overfit the training data and, thus, show poor generalization performance.
A model attempting to estimate the value of a random variable may have access to a wide range of measurements regarding the state of the environment. Some of these quantities may provide the model with useful information regarding the random variable, whereas others may not. In the context of neural networks, only the useful quantities should be used as inputs to a network. A network that receives both useful inputs and nuisance inputs will contain too many free parameters and, thus, be prone to overfitting the training data, leading to poor generalization.


Time series forecasting using Bayesian method:
application to cumulative rainfall
C. Rodriguez Rivero, J. Pucheta, M. Herrera, V. Sauchelli and S. Laboret, Member, IEEE
II. BAYESIAN APPROACH

One of the key principles of the Bayesian technique is to construct the posterior probability distributions for all the unknown entities in a model, given the data sample. To use the model, marginal distributions are constructed for all the entities of interest, i.e., the outputs of the ANNs. These can be the parameters in parametric models, or the predictions in (non-parametric) regression or classification tasks. Use of the posterior probabilities requires explicit definition of the prior probabilities for the parameters. The posterior probability for the parameters θ in a model M given the data D is, according to Bayes' rule,


P(θ | D, M) = P(D | θ, M) P(θ | M) / P(D | M),    (1)

where P(D|θ,M) is the likelihood of the parameters θ, P(θ|M) is the prior probability of θ, and P(D|M) is a normalizing constant, called the evidence of the model M. The term M denotes all the hypotheses and assumptions that are made in defining the model. All the results are conditioned on these assumptions, and to make this clear we prefer to have the term M explicitly in the equations. In this notation the normalization term P(D|M) is directly understandable as the marginal probability of the data, conditional on M, integrated over everything the chosen assumption M and prior P(θ|M) comprise

p(x_e | x_n) = ∫ p(x_e | x_n, θ) p(θ) dθ.    (2)
When having several models, P(D|M_i) is the likelihood of model i, which can be used in comparing the probabilities of the models, hence the term evidence of the model. A widely used Bayesian model choice method between two models is based on Bayes factors, P(D|M_1)/P(D|M_2). The more common notation of Bayes' formula, with M dropped, more easily causes misinterpretation of the denominator P(D) as some kind of probability of obtaining the data D in the studied problem (or prior probability of the data before modeling).
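For a single scalar parameter, Bayes' rule (1) and the evidence P(D|M) can be approximated on a grid. The sketch below is purely illustrative: the Gaussian data model, the prior scale and all numeric values are assumptions, not the paper's setup.

```python
import numpy as np

# Grid approximation of Eq. (1): posterior over one parameter theta
# (here: the mean of Gaussian data with known noise std 0.1).
rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=0.1, size=20)   # toy data, true mean 0.3

theta = np.linspace(-1.0, 1.0, 2001)             # parameter grid
dtheta = theta[1] - theta[0]
prior = np.exp(-theta**2 / (2 * 0.5**2))         # Gaussian prior, sigma = 0.5
prior /= prior.sum() * dtheta                    # normalize P(theta|M)

# Likelihood P(D|theta, M): product of iid Gaussian terms, computed in log form
log_lik = np.array([-0.5 * np.sum(((data - t) / 0.1) ** 2) for t in theta])
lik = np.exp(log_lik - log_lik.max())

# Evidence P(D|M) normalizes the posterior (up to the subtracted constant)
evidence = (lik * prior).sum() * dtheta
posterior = lik * prior / evidence

theta_map = theta[np.argmax(posterior)]          # mass concentrates near 0.3
```

The same `evidence` quantity, computed per model, is what enters the Bayes factor P(D|M_1)/P(D|M_2) mentioned above.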
The prior distribution expresses our initial beliefs about the parameter values before any data are observed. After new data D = {(x^(1), x_e^(1)), ..., (x^(n), x_e^(n))} are observed, the prior distribution is updated to the posterior distribution using Bayes' rule, where L(θ|D) is the likelihood function of the unknown model parameters given the observed data. In the case of independent and exchangeable data points, the likelihood function is

P(D | θ) = ∏_{i=1}^{n} P(x_e^(i) | x^(i), θ),    (3)
where n is the number of data points.
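The iid likelihood product of Eq. (3) is usually evaluated in log form to avoid numerical underflow. A minimal sketch, assuming a hypothetical Gaussian one-step model x_e = a·x + noise (a stand-in, not the paper's network):

```python
import numpy as np

def log_likelihood(a, x, x_next, noise_std=0.1):
    """Sum of log N(x_next | a*x, noise_std^2) over all n data points,
    i.e. the log of the product in Eq. (3) for this toy model."""
    resid = x_next - a * x
    n = len(x)
    return (-0.5 * np.sum((resid / noise_std) ** 2)
            - n * np.log(noise_std * np.sqrt(2 * np.pi)))

x = np.array([0.1, 0.2, 0.4, 0.5])
x_next = 0.9 * x                      # data generated with a = 0.9, no noise
# The likelihood is maximized at the generating parameter
better = log_likelihood(0.9, x, x_next)
worse = log_likelihood(0.5, x, x_next)
```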
To predict the new output x_e^(n+1) for the new input x^(n+1), the predictive distribution is obtained by integrating the predictions of the model with respect to the posterior distribution of the model parameters


P(x_e^(n+1) | D) = ∫_Ω P(x_e^(n+1) | x^(n+1), θ) P(θ | D) dθ,    (4)
where Ω is the space of all possible parameters. Note that the predictive distribution for x_e^(n+1) is implicitly conditioned on the hypotheses that hold throughout; to be more explicit, it can be written as [5]

P(x_e^(n+1) | D, M) = ∫_Ω P(x_e^(n+1) | x^(n+1), θ, M) P(θ | D, M) dθ,    (5)
where M refers to the set of hypotheses or assumptions used to define the model. In practice, the posterior distribution for the parameters in Eq. (4) is very complex, with many modes. As a result, evaluating the above integral is a difficult task. Neal [7] introduced the Markov Chain Monte Carlo (MCMC) method to perform this kind of difficult integration. Since then, MCMC methods have been utilized by other authors.
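The predictive integral of Eq. (4) can be approximated by averaging the model's predictions over MCMC samples of the posterior. The sketch below uses a random-walk Metropolis sampler on a one-parameter linear model; the model, prior, noise level and step size are all illustrative assumptions, not the paper's network.

```python
import numpy as np

# Approximate Eq. (4): P(x_e|x_new, D) ~ (1/S) sum_s P(x_e|x_new, theta_s),
# with theta_s drawn from the posterior P(theta|D) by Metropolis sampling.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 30)
t = 0.7 * x + rng.normal(0, 0.05, 30)      # toy data generated with slope 0.7

def log_post(a):
    # Gaussian likelihood (noise std 0.05) plus a broad Gaussian prior on a
    return -0.5 * np.sum(((t - a * x) / 0.05) ** 2) - 0.5 * a**2 / 100.0

samples, a = [], 0.0
lp = log_post(a)
for _ in range(5000):
    prop = a + rng.normal(0, 0.05)         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        a, lp = prop, lp_prop
    samples.append(a)
post = np.array(samples[1000:])            # drop burn-in

# Monte Carlo predictive mean for a new input x_new = 0.5
x_new = 0.5
pred_mean = np.mean(post * x_new)
```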
III. ARCHITECTURE OF THE ANN

In Fig. 1 the block diagram of the nonlinear prediction scheme based on an ANN filter is shown. Here, a prediction device [15], [13] is designed such that, starting from a given sequence {x_n} at time n corresponding to a time series, the best prediction {x_e} for the following sequence of 18 values can be obtained. Hence, a predictor filter is proposed with an input vector l_x, which is obtained by applying the delay operator, Z^-1, to the sequence {x_n}. Then, the filter output will generate x_e as the next value, which should be equal to the present value x_n. So, the prediction error at time k can be evaluated as
e(k) = x_n(k) − x_e(k),    (6)

which is used for the learning rule to adjust the NN weights.
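The tapped-delay input vector l_x described above can be sketched as follows; the window length and toy series are illustrative choices.

```python
import numpy as np

# Build training patterns with the delay operator Z^-1:
# each pattern is the delayed window (x[n-l], ..., x[n-1]) -> target x[n].
def make_patterns(series, lags):
    X, y = [], []
    for n in range(lags, len(series)):
        X.append(series[n - lags:n])   # delayed inputs (the vector l_x)
        y.append(series[n])            # present value the filter must predict
    return np.array(X), np.array(y)

s = np.arange(10, dtype=float)         # toy series 0..9
X, y = make_patterns(s, lags=3)        # first pattern: [0,1,2] -> 3
```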











Fig. 1. Block diagram of the nonlinear prediction scheme: the input signal feeds a tapped delay line (Z^-1) into the NN-based nonlinear filter, whose one-step prediction is compared with the signal; the estimated prediction error forms the error-correction signal.

The coefficients of the nonlinear filter are adjusted on-line in the learning process by considering an on-line heuristic criterion that modifies, at each pass over the time series, the number of patterns, the number of iterations and the length of the tapped-delay line, as a function of the Hurst value H calculated from the time series, taking into account the Bayesian inference of the output values [9].
According to the stochastic behavior of the series, H can be greater or smaller than 0.5, which means that the series tends to present long- or short-term dependence [14], respectively.
Furthermore, the NN weights are tuned by means of the Levenberg-Marquardt rule, which considers the long- or short-term stochastic dependence of the time series measured by the Hurst parameter H. The learning rule consists of changing the number of patterns, the filter's length and the number of iterations for each corresponding time series using Bayesian inference in accordance with the Hurst parameter H, taking into account that the forecasted series has the same H as the real time series.
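The Hurst parameter H that drives this learning rule can be estimated, for example, by a basic rescaled-range (R/S) analysis; the paper does not specify its estimator, so the sketch below is one common choice, with illustrative window sizes.

```python
import numpy as np

# Rescaled-range (R/S) estimate of the Hurst parameter H:
# slope of log(mean R/S) versus log(window size).
# H > 0.5 suggests long-term dependence, H < 0.5 short-term dependence.
def hurst_rs(x, windows=(8, 16, 32, 64)):
    x = np.asarray(x, dtype=float)
    points = []
    for w in windows:
        rs = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = seg.std()
            if s > 0:
                rs.append(r / s)
        points.append((np.log(w), np.log(np.mean(rs))))
    lw, lrs = zip(*points)
    return np.polyfit(lw, lrs, 1)[0]           # fitted slope ~ H

rng = np.random.default_rng(2)
h = hurst_rs(rng.normal(size=1024))            # white noise: H near 0.5
```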

A. Application to ANN predictor

When a short or long series is being analyzed, it is important to use the simplest possible models. Specifically, the number of unknown parameters must be kept at a minimum.
Gamma distributions have been considered in the literature for this purpose. When a Bayesian analysis is conducted, inferences about the unknown parameters are derived from the posterior distribution. This is a probability model which describes the knowledge gained after observing a set of data. The regression problem involves the corresponding neural network function y(x, w) and a data set consisting of N pairs of input vectors l_x and targets t_n (n = 1, ..., N). Assuming Gaussian noise on the targets, the likelihood function takes the form


P(D | w, M) = (β / 2π)^(N/2) exp{ −(β/2) Σ_{n=1}^{N} [ y(x_n; w) − t_n ]² },    (7)
where β is a hyper-parameter representing the inverse of the noise variance. In this work we consider a single hidden layer of tanh units and linear output units.
To complete the Bayesian approach for this work, prior information for the network is required. It is proposed to use, analogously to penalty terms, the following equation

P(w) = (2π σ_w²)^(−N_w/2) exp( −‖w‖² / (2 σ_w²) ),    (8)
assuming that the expected scale of the weights is given by σ_w, set by hand. This was carried out considering that the network function f(x_{n+1}, w) is approximately linear with respect to w in the vicinity of this mode; in fact, the predictive distribution for y_{n+1} will be another multivariate Gaussian.
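Taken together, Eqs. (7) and (8) imply a negative log-posterior made of a β-weighted sum-of-squares data term plus a Gaussian weight penalty with scale σ_w. A sketch of that objective, with a stand-in linear model in place of the paper's tanh network:

```python
import numpy as np

# Negative log-posterior implied by Eqs. (7)-(8), up to additive constants:
# (beta/2) * sum of squared errors + ||w||^2 / (2 sigma_w^2).
def neg_log_posterior(w, x, t, beta=100.0, sigma_w=1.0):
    y = x @ w                                  # placeholder network output
    data_term = 0.5 * beta * np.sum((y - t) ** 2)
    prior_term = 0.5 * np.sum(w ** 2) / sigma_w ** 2
    return data_term + prior_term

x = np.eye(2)                                  # toy inputs
t = np.array([0.5, -0.2])                      # toy targets
good = neg_log_posterior(np.array([0.5, -0.2]), x, t)   # fits the data
bad = neg_log_posterior(np.zeros(2), x, t)              # ignores the data
```

Minimizing this objective gives the MAP weights around which the Gaussian predictive approximation above is taken.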

B. Performance measure of the ANN predictor filter

In order to test the proposed design procedure for the ANN predictor, an experiment with time series obtained from the MG solution and cumulative rainfall time series was performed. The performance of the filter is evaluated using the Symmetric Mean Absolute Percent Error (SMAPE), widely used as an evaluation metric, defined by

SMAPE_s = (1/n) Σ_{t=1}^{n} |X_t − F_t| / ((X_t + F_t)/2) × 100,    (9)
where t is the observation time, n is the size of the test set, s is each time series, and X_t and F_t are the actual and the forecasted time series values at time t, respectively. The SMAPE of each series s calculates the symmetric absolute error in percent between the actual X_t and its corresponding forecast value F_t, across all observations t of the test set of size n for each time series s.
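Eq. (9) translates directly into code; a minimal sketch:

```python
import numpy as np

# SMAPE of Eq. (9): symmetric absolute percent error over the n test points.
def smape(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs(actual - forecast) / ((actual + forecast) / 2)) * 100

# A perfect forecast scores 0; e.g. smape([1], [3]) = |1-3| / 2 * 100 = 100
perfect = smape([1.0, 2.0], [1.0, 2.0])
off = smape([1.0], [3.0])
```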
IV. MAIN RESULTS
A. MG and Santa Francisca rainfall generation

The time series are obtained from the solution of the MG equation [10]. The MG equation is the time-delay differential equation defined as

ẏ(t) = α y(t − τ) / (1 + y^c(t − τ)) − β y(t),    (10)
and the rainfall time series are obtained from Santa Francisca, San Bartolomé and La Sevillana, in Córdoba, Argentina, with parameters shown in Table 1. This collection of coefficients was chosen to generate time series MG1 and MG2, whose H parameters vary between 0 and 1; each one was selected in accordance with its roughness.
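Eq. (10) can be integrated numerically, for instance with a forward-Euler scheme over the delayed state. The coefficients, step size and constant initial history below are common illustrative choices, not necessarily the paper's settings.

```python
import numpy as np

# Forward-Euler integration of the MG delay equation (10):
# dy/dt = alpha*y(t-tau)/(1 + y(t-tau)^c) - beta*y(t).
def mackey_glass(n_steps, alpha=0.2, beta=0.1, c=10.0, tau=17.0, dt=1.0):
    delay = int(tau / dt)
    y = np.full(n_steps + delay, 1.2)          # constant initial history
    for k in range(delay, n_steps + delay - 1):
        y_tau = y[k - delay]                   # delayed state y(t - tau)
        dydt = alpha * y_tau / (1.0 + y_tau ** c) - beta * y[k]
        y[k + 1] = y[k] + dt * dydt
    return y[delay:]

series = mackey_glass(500)                     # bounded, irregular trajectory
```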

Table 1. Parameters used to generate the time series.
No.  Time Series Type                      Parameters  H
1    MG1                                   β = 1.9     0.22
2    MG2                                   β = 2.1     0.13
3    Cumulative Rainfall Santa Francisca   –           0.022
4    Cumulative Rainfall San Bartolomé     –           0.0132
5    Cumulative Rainfall La Sevillana      –           0.259

B. Set-up of the ANN algorithm

The initial conditions for the filter and the learning algorithm are shown in Table 2. Note that the initial number of hidden neurons and iterations are set as a function of the number of inputs. These initial conditions of the learning algorithm were used to forecast the primitive of the time series, whose length is 102 values.

TABLE 2. INITIAL CONDITIONS OF THE PARAMETERS
Variable  Initial Condition
l_x       12
H_o       7
it        100
H         0.5

C. Time Series Prediction Results

Each time series is composed of the MG solutions and cumulative rainfall time series. There are three classes of data sets. The first is the original time series used by the algorithm to produce the forecast, which comprises 102 values (or 79, as in the La Sevillana rainfall series). The second is the primitive, obtained by integrating the values of the original time series. The third is used to judge whether the forecast is acceptable: the last 18 values validate the performance of the prediction system, so that 102 or 61 values form the data set, and 120 or 79 values constitute the forecasted and the real ones. A comparison is made between this work and the ANN H-dependent predictor filter presented earlier in [18].
The Monte Carlo method was used to forecast the next 18 values of the MG time series and rainfall time series. The outcomes are shown in Fig. 2 to Fig. 6.
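Forecasting an 18-value horizon by iterated one-step prediction, feeding each prediction back into the tapped delay line, can be sketched as follows; the least-squares linear predictor below is a hypothetical stand-in for the paper's ANN.

```python
import numpy as np

# Iterated one-step forecasting: predict x[n+1], append it to the delay
# window, shift, and repeat for the whole 18-step horizon.
def forecast_18(series, lags=12, horizon=18):
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = np.array(series[lags:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares stand-in model
    window = list(series[-lags:])
    preds = []
    for _ in range(horizon):
        nxt = float(np.dot(window, w))
        preds.append(nxt)
        window = window[1:] + [nxt]            # shift the tapped delay line
    return preds

s = np.sin(np.arange(120) * 0.3)               # toy smooth series
p = forecast_18(s)                             # 18 fed-back predictions
```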

Fig. 2. Results obtained from the MG1 time series (MG parameters: β = 1.9, α = 30, c = 10, τ = 100; real mean = 0.10534, forecasted mean = 0.1261, H = 0.2263, H_e = 0.17241, l_x = 20, SMAPE = 1.1693; H-dependent algorithm, neural-network-based predictor).

Fig. 3. Results obtained from the MG2 time series (MG parameters: β = 2.1, α = 40, c = 10, τ = 100; real mean = 0.11226, forecasted mean = 0.097659, H = 0.13835, H_e = 0.1472, l_x = 20, SMAPE = 2.2398; H-dependent algorithm, neural-network-based predictor).

Fig. 4. Results obtained from the monthly cumulative rainfall time series of Santa Francisca, Córdoba, Argentina (real mean = 0.20763, forecasted mean = 0.25411, H = 0.022508, H_e = 0.17071, l_x = 20, SMAPE = 2.5966; H-dependent algorithm, neural-network-based predictor).


Fig. 5. Results obtained from the monthly cumulative rainfall time series of San Bartolomé, Córdoba, Argentina (real mean = 0.14141, forecasted mean = 0.20693, H = 0.12252, H_e = 0.23774, l_x = 20, SMAPE = 8.7109; H-dependent algorithm, neural-network-based predictor).
Fig. 6. Results obtained from the monthly cumulative rainfall time series of La Sevillana, Córdoba, Argentina (real mean = 0.25847, forecasted mean = 0.21356, H = 0.25994, H_e = 0.027721, l_x = 20, SMAPE = 4.8701; H-dependent algorithm, neural-network-based predictor).

D. Comparative Results

TABLE 3. RESULTS OBTAINED BY THE PROPOSED APPROACH

Series           H      H_e    Real Mean  Forecasted Mean  SMAPE
MG1              0.22   0.17   0.105      0.126            1.16
MG2              0.13   0.14   0.11       0.097            2.23
Santa Francisca  0.022  0.17   0.207      0.254            2.59
San Bartolomé    0.12   0.23   0.141      0.206            8.71
La Sevillana     0.25   0.027  0.258      0.217            4.87


The obtained results are assessed by comparing the performance of the proposed filter with the earlier works [18], [19] and [20], all based on ANNs. Although the difference between the filters resides only in the algorithm that adjusts their coefficients, each one behaves differently. It can be noted that the rainfall time series are rougher than the MG solutions, so the Bayesian approach applied to the parameters of the ANN shows a clear improvement, in which an adequate prior distribution model was chosen in order to tune the parameters and outputs of the predictor filter.
V. CONCLUSIONS

In this work, an algorithm to adjust parameters using a Bayesian method for cumulative rainfall time series forecasting, implemented by an ANN-based filter, was presented. The contribution of this work resides in modeling the NN-based filter with an uncertain number of parameters, which is adjusted by the Bayesian approach. Thereby, these parameters are considered as random variables whose posterior probability distribution is inferred from the data.
The learning rule proposed to adjust the NN weights is based on the Levenberg-Marquardt method. Furthermore, as a function of the long- or short-term stochastic dependence of the time series, the proposed approach changes the number of patterns and iterations using Bayesian inference in accordance with the parameter H evaluated for each time series. An on-line heuristic adaptive law was proposed to update the ANN topology at each time stage, taking into account that the forecasted series has the same H as the real time series. The main result shows that the predictor system performs well on several MG time series and cumulative rainfall series against the classical predictor filter presented in [20], in particular for time series whose H parameter indicates high roughness of the signal. These results show better performance compared with [19]. This fact encourages us to apply the proposed approach to meteorological time series when the observations are taken from a single point.

REFERENCES

[1] Tanner, M. A. (1993). Tools for statistical inference: methods for the
exploration of posterior distributions and likelihood functions. New
York: Springer-Verlag.
[2] Buntine, W. L., & Weigend, A. S. (1991). Bayesian back-propagation.
Complex Systems, 5(6), 603.
[3] MacKay, D. J. C. (1992). A practical Bayesian framework for backpropagation
networks. Neural Computation, 4, 448-472.
[4] Neal, R. M. (1992). Bayesian training of backpropagation networks by
the hybrid Monte Carlo method. Technical report CRG-TR-92-1,
Department of Computer Science, University of Toronto.
[5] Bishop, C. (2006). Pattern Recognition and Machine Learning. Boston:
Springer.
[6] MacKay, D. J. C. (1995). Probable networks and plausible predictions -
a review of practical Bayesian methods for supervised neural networks.
Network: Computation in Neural Systems, 6 (3), 469-505.
[7] R.M. Neal, in: Bayesian learning for neural networks, Lecture Notes in
Statistics, Vol. 118, Springer, New York, 1996.
[8] Liu, J.N.K.; Lee, R.S.T (1999). Rainfall forecasting from multiple point
sources using neural networks. In proc. of the International Conference
on Systems, Man, and Cybernetics, 3, 429-434.
[9] Mendoza, M. & de Alba, E. (2006). Forecasting an accumulated series
based on partial accumulation II: A new Bayesian method for short
series with seasonal patterns, International Journal of Forecasting, Issue
4, 781-798.
[10] Mandelbrot, B. B., (1983), The Fractal Geometry of Nature, Freeman,
San Francisco, CA. 1983.
[11] Masulli, F., Baratta, D., Cicione, G., Studer, L (2001). Daily Rainfall
Forecasting using an Ensemble Technique based on Singular Spectrum
Analysis. In Proceedings of the IEEE International Joint Conference on
Neural Networks IJCNN 01, 1, 263-268.
[12] Mozer, M. C (1994). Neural Net Architectures for Temporal Sequence
Processing. Time Series Predictions: Forecasting the Future and
Understanding the Past. IEEE International Conference on Image
Processing, 243-264.
[13] Pucheta, J., Patiño, H., Schugurensky, C., Fullana, R., Kuchen, B.
(2007a). Optimal Control Based-Neurocontroller to Guide the Crop
Growth under Perturbations. Dynamics Of Continuous, Discrete And
Impulsive Systems. Special Volume Advances in Neural Networks-
Theory and Applications. DCDIS A Supplement, Advances in Neural
Networks, 14(S1), 618-623.
[14] Pucheta, J., Patiño, H.D., Kuchen, B. (2007). Neural Networks-Based
Time Series Prediction Using Long and Short Term Dependence in the
Learning Process. In proc. of the 2007 International Symposium on
Forecasting, New York, USA.
[15] Pucheta, J., Patino, D. and Kuchen, B. A Statistically Dependent
Approach For The Monthly Rainfall Forecast from One Point
Observations. In Book Series Title: IFIP Advances in Information and
Communication Technology, Book Title: Computer and Computing
Technologies in Agriculture II, Volume 2, IFIP International Federation
for Information Processing Volume 294, Computer and Computing
Technologies in Agriculture II, Volume 2, eds. D. Li, Z. Chunjiang,
(Boston: Springer), ISBN: 978-1-4419-0210-8, pp. 787-798, Url:
http://dx.doi.org/10.1007/978-1-4419-0211-5_1, Doi: 10.1007/978-1-
4419-0211-5_1. (2009).
[16] Pucheta, J., Patiño, H., Schugurensky, C., Fullana, R., Kuchen, B.
Optimal Control Based-Neurocontroller to Guide the Crop Growth under
Perturbations. Dynamics Of Continuous, Discrete And Impulsive
Systems Special Volume Advances in Neural Networks-Theory and
Applications. DCDIS A Supplement, Advances in Neural Networks,
Watam Press, Vol. 14(S1), pp. 618-623. 2007.
[17] J.A. Pucheta, C. Schugurensky, R. Fullana, H. Patiño and B. Kuchen.
A Neuro-Dynamic Programming-Based Optimal Controller for Tomato
Seedling Growth in Greenhouse Systems. Neural Processing letters.
Editorial Springer Verlag (Springer Netherlands). ISSN 1370-4621
(Print) 1573-773X (Online) DOI 10.1007/s11063-006-9022-9, Volume
24, Number 3 / December, 2006, Pages 241-260.
[18] J. Pucheta, C. Rodríguez Rivero, M. Herrera, C. Salas, D. Patiño
and B. Kuchen. A Feed-forward Neural Networks-Based Nonlinear
Autoregressive Model for Forecasting Time Series. Revista
Computación y Sistemas, Centro de Investigación en Computación-IPN,
México D.F., México, Computación y Sistemas Vol. 14 No. 4, pp. 423-435,
ISSN 1405-5546, 2011.
http://www.cic.ipn.mx/sitioCIC/images/revista/vol14-4/art07.pdf
[19] C. Rodríguez Rivero, J. Pucheta, J. Baumgartner, M. Herrera, D. Patiño
and B. Kuchen. A NN-based model for time series forecasting in function
of energy associated of series, Proc. of the International Conference on
Applied, Numerical and Computational Mathematics (ICANCM'11),
Barcelona, Spain, September 15-17, 2011, ISBN 978-1-61804-030-5,
Pp. 80-86. (2011).
[20] C. Rodríguez Rivero, J. Pucheta, J. Baumgartner, H.D. Patiño and B.
Kuchen, An Approach for Time Series Forecasting by simulating
Stochastic Processes Through Time-Lagged feed-forward neural
network. The 2010 World Congress in Computer Science, Computer
Engineering, and Applied computing. Las Vegas, Nevada, USA, July
12-15, 2010. DMIN10 Proceedings ISBN 1-60132-138-4 CSREA
Press,p.p 278, (CD ISBN 1-60132-131-7), USA, (2010).






C. Rodriguez Rivero received the Electrical Engineering degree from the Faculty of Exact, Physical and Natural Sciences at the National University of Córdoba, Argentina, in 2007, and he is currently pursuing the Ph.D. degree in electrical engineering at the National University of Córdoba, Argentina, under the supervision of Dr. Pucheta and a PRH Grant from the National Agency for Scientific and Technological Promotion (ANPCyT), in the field of automatic control for slow-dynamics processes, stochastic control, optimization and time series forecasting. He joined the Mathematics Research Laboratory Applied to Control in 2009. IEEE member of CSS and CIS.




J. Pucheta received the Electrical Engineering degree from the National Technological University, Córdoba Regional Faculty, Argentina, and the M.S. and Ph.D. degrees from the National University of San Juan, Argentina, in 1999, 2002 and 2006, respectively. He is currently a Professor at the Laboratory of Research in Mathematics Applied to Control, National University of Córdoba, Argentina. His research interests include stochastic and optimal control, time series forecasting and machine learning. IEEE member.




M. Herrera received the Electrical Engineering degree in 2007 from the Faculty of Exact, Physical and Natural Sciences at the National University of Córdoba, Argentina. He is an Associate Professor and researcher at the Faculty of Technology and Applied Sciences of the National University of Catamarca. His research interests include control systems and automation.




V. Sauchelli received the Electrical Engineering and Ph.D. degrees from the National University of Córdoba, and a Major in University Education from the National Technological University, Córdoba Regional Faculty (UTN, FRC), Argentina, in 1973 and 1997, respectively. He is currently Principal of the Telecommunication Postgraduate Program, National University of Córdoba, and Professor of Control Systems and Signal Processing in electrical and electronic engineering. He is also Director of the Mathematics Research Laboratory Applied to Control (LIMAC) and of Research Projects at the Secretary of Sciences and Technology (SECyT), National University of Córdoba. His interests include automatic control, neural networks, fractional-order controllers and signal processing.
