Working Paper · May 2016
DOI: 10.13140/RG.2.1.1954.1368


Stock Market Prediction
Using Neural Network Time Series Forecasting
Lev Ertuna

Abstract
Time series forecasting is a powerful computational tool that predicts future outcomes of a system based on how the system behaved previously, and it has a great number of applications in many areas of science. A wide range of events, both of natural and human-generated origin, can be predicted using neural networks and machine learning methods. Predictions based only on previous responses of the system are especially appealing for problems in which the input variables of the system cannot be clearly defined. But forecasting with neural networks has its own challenges: in particular, it is difficult to choose a neural network architecture for a specific problem, since an overly simplified model may not be able to learn successfully, while an excessively complicated model may lead to overfitting and data memorization. In this paper the application of time series prediction to stock market forecasting is examined, and a comparative study of different neural network structures and different learning methods is performed in order to obtain a better understanding of how the quality of predictions changes with various approaches to solving a given problem.

1. Introduction
An artificial neural network is a computational model inspired by biological nervous systems. Neural networks are widely used for estimation and approximation of unknown, complicated functions and systems that may depend on a large number of input variables. Artificial neural networks are commonly modeled as layers of neurons exchanging information with each other. Connections between neurons have numeric weights that are adjusted during the learning process [1].
Neural networks obtain intelligent behavior by learning from provided data or from interactions with the environment. Because they can learn complex non-linear mappings, neural networks are widely used for solving advanced problems such as pattern recognition and classification. The process of learning the data is called training, and while there are several approaches to neural network training, this study employs supervised learning (training) methods [2].
In supervised training, pairs of input-output data must be provided to the network, and the network tries to learn the mapping implied by that data. The robustness of the neural network often depends on the training algorithm used to learn the data. Two training methods are compared in this paper: backpropagation and resilient backpropagation [3].
Over the last decade neural networks have proven to be one of the most powerful tools in modelling and forecasting. Recently neural networks have expanded their forecasting applications to many areas, such as urban traffic state predictions [4], disease predictions [5] [6], earthquake magnitude forecasting [7], river flow forecasting [8] [9] [10], air quality and pollution forecasting [11] [12], solar power forecasting [13], and weather forecasting [14].
Forecasting with neural networks is also widely used in analyzing complex financial systems and market-based relations: credit risk evaluations [15] [16], gas price and production level predictions [17] [18] [19], forecasting of vehicle sales [20], forecasting demand for consumable parts in production [21], predicting tourism demand [22], forecasting airline data [23], forecasting stock index prices [24] [25] [26] [27] [28] and currency exchange rates [29] [30]. Since neural networks have proved reliable for chaotic time series forecasting, financial firms worldwide are employing them to solve difficult prediction problems, and it is anticipated that neural networks will eventually outperform even the best traders and investors [31].
2. Time Series Forecasting
A time series can be represented as a sequence of data points that depend on time t: {y(t_0), y(t_1), ..., y(t_k), ...}. Normally a single element of this sequence, y(t), can be described as a function of some independent variables and time: y(t) = f(a, b, c, ..., n, t).
In order to represent such time series as a neural network model, it is necessary to define input variables (independent
variables and time) and output variables (one or multiple elements of a given sequence). Then the network can be trained
on some historical records of previous input variables, resulting in certain outputs of the system; and it can be used to predict
future outputs of the system, if future values of input variables are known.
To use neural networks and machine learning techniques for such a time series model, the input variables must be clearly defined. But what happens when the input to the system is not so straightforward?

3. Forecasting Challenges in Application to the Stock Market


The stock market can be viewed as a system whose input variables are not well defined. Indeed, it cannot be determined directly what influences stock market prices; it is a very chaotic system driven by complicated human interactions and decisions [31]. How can the inputs of such a system be defined, and how can they be represented as numerical data? While there is no clear answer to this question, it is still desirable to use neural networks and time series prediction for stock market data analysis.
In order to still be able to forecast such data using neural networks, the system can be assumed to depend only on its previous states. This assumption is not ideal, since many possibly important factors are discarded, but the approach is suitable for problems with a large amount of historical data available for analysis. The system can then be described as a function of its previous states: y(t) = f(y(t-1), y(t-2), y(t-3), ..., y(t-n)). A neural network for this function can be modeled with input variables defined as previous states of the system.
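
As a concrete illustration of this autoregressive formulation, the sketch below (plain Java; the array name prices and the window size n are illustrative assumptions, not taken from the paper) builds input/output training pairs of the form y(t) = f(y(t-1), ..., y(t-n)) by sliding a window of n previous values over a price series.

    /** Sketch: turn a price series into supervised pairs of the form y(t) = f(y(t-1), ..., y(t-n)). */
    public final class SlidingWindow {

        /** Returns {inputs, ideals}: inputs[i] holds n past values, ideals[i] the value that follows them. */
        public static double[][][] buildPairs(double[] prices, int n) {
            int samples = prices.length - n;
            double[][] inputs = new double[samples][n];
            double[][] ideals = new double[samples][1];
            for (int t = 0; t < samples; t++) {
                for (int lag = 0; lag < n; lag++) {
                    inputs[t][lag] = prices[t + lag];   // the n previous observations form the input vector
                }
                ideals[t][0] = prices[t + n];           // the next observation is the desired output
            }
            return new double[][][] { inputs, ideals };
        }
    }

With n set to 1, 5 or 30, this produces exactly the three input configurations examined in Section 7.
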
What challenges does such a neural network present? First of all, it is not obvious how many previous states of the system should be taken into account when calculating its future state. Will it depend on the past 2 values, 3, 10, 100? Finding the number of past states that gives the best prediction results becomes a subject of experimentation [32]. Another thing to consider is the internal neural network structure: the number of layers and neurons and their interconnections have a great influence on the quality of predictions and on the time it takes to train a network [33]. There is no golden rule for determining a suitable neural network architecture for a given problem; the choice of network structure is unique to the specific problem, since an overly simplified model may not be able to learn the data, while a too complicated model may lead to overfitting and data memorization [34] [35] [36].

4. Neural Network Structure


Since this study is mostly concentrated on the analysis of different network architectures, some basic ideas about neural network structures should be introduced here. A neural network structure (also known as topology or architecture) usually consists of several layers of neurons; neurons in the same layer are not connected to each other, but connections exist between neurons of two adjacent layers [3].
One of the main types of neural network topologies is the feedforward network, also known as the multilayer perceptron. In this architecture the neurons are grouped in several layers: the input layer, one or more hidden layers (so called because they are invisible from the outside) and the output layer; each neuron in a layer has only directed connections to the neurons of the next layer (starting from the input layer and moving towards the output layer) [3] [7] [37].
The network structure has dramatic effects on performance; however, the exact relation between network structure and prediction performance depends on the chosen problem and cannot be inferred directly [38]. It therefore requires experimentation on specific problems to determine the optimal solution.
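
To make the feedforward topologies discussed above concrete, the sketch below shows how such a network could be assembled with the Encog framework used for the experiments in Section 7. This is a minimal sketch assuming the Encog 3 Java API; the helper class and its parameters are illustrative, not part of the original experiments.

    import org.encog.engine.network.activation.ActivationSigmoid;
    import org.encog.neural.networks.BasicNetwork;
    import org.encog.neural.networks.layers.BasicLayer;

    public final class NetworkFactory {

        /** Builds a feedforward network with the given input size, hidden layer sizes and one output neuron. */
        public static BasicNetwork build(int inputs, int... hidden) {
            BasicNetwork network = new BasicNetwork();
            network.addLayer(new BasicLayer(null, true, inputs));                        // input layer
            for (int size : hidden) {
                network.addLayer(new BasicLayer(new ActivationSigmoid(), true, size));   // hidden layer
            }
            network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));         // single output neuron
            network.getStructure().finalizeStructure();
            network.reset();                                                              // randomize weights before training
            return network;
        }
    }

A call such as build(5, 20, 100, 20) would then correspond to the 5-20-100-20-1 topology examined later in the paper.
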
5. Learning Strategy
In this study the two most commonly used learning methods were examined. Backpropagation (short for backward propagation of errors) is a method of training neural networks that attempts to gradually decrease the error of the network; backpropagation is used in supervised learning, as it requires a known output for each input in order to calculate the error. Resilient backpropagation is a modification of the backpropagation algorithm that is considered a faster and more efficient training method [3].
When using the backpropagation or resilient backpropagation training method, the speed and accuracy of the learning procedure can be controlled by the learning rate and learning momentum. The choice of learning rate and momentum depends significantly on the problem, the network structure and the training data set. If the learning rate is too high, learning becomes uncontrolled and might not find a solution for the given data set; but the speed of the learning procedure is roughly proportional to the learning rate, so decreasing it might result in a huge, often unacceptable, amount of time spent on training. It is important to adjust these parameters to the specific problem [3]. In this paper these parameters are kept constant, with the learning rate and learning momentum at relatively low values, which allows the study to concentrate on the neural network's topology. The duration of the network's training process (measured in epochs, i.e. iterations of the training algorithm) is also fixed in this study.
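
The sketch below shows how the two training strategies could be set up with the Encog library used in this study; the Encog 3 API is assumed, and the helper method and its arguments are illustrative. The learning rate (0.005), momentum (0.001) and epoch count (1000) match the fixed values described in Section 7; resilient propagation adapts its own per-weight step sizes, so it takes no such parameters.

    import org.encog.ml.data.MLDataSet;
    import org.encog.ml.data.basic.BasicMLDataSet;
    import org.encog.neural.networks.BasicNetwork;
    import org.encog.neural.networks.training.propagation.Propagation;
    import org.encog.neural.networks.training.propagation.back.Backpropagation;
    import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

    public final class TrainingSketch {

        /** Trains the network on the given input/ideal pairs with either strategy, for a fixed number of epochs. */
        public static void train(BasicNetwork network, double[][] inputs, double[][] ideals,
                                 boolean useResilient) {
            MLDataSet trainingSet = new BasicMLDataSet(inputs, ideals);
            Propagation trainer = useResilient
                    ? new ResilientPropagation(network, trainingSet)              // adaptive step sizes
                    : new Backpropagation(network, trainingSet, 0.005, 0.001);    // learning rate, momentum
            for (int epoch = 0; epoch < 1000; epoch++) {                          // 1000 epochs, as in this study
                trainer.iteration();
            }
            trainer.finishTraining();
        }
    }
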

6. Data Preparation
In this study historical stock prices for Apple Inc. (AAPL), provided by Yahoo Finance, were used. The historical records of daily stock prices were taken for the period between January 1, 2000 and January 1, 2016. Only the stock open prices were used for this analysis. The same analysis can be performed with other stocks and other specific prices; the observed behavior of the neural networks will not change dramatically, but the network structure that yields the best performance might be different.
In order to feed the data into the neural network and perform the training, the data must be normalized [3] [39] to some specific range. The most commonly used ranges are 0.0-1.0, 0.1-0.9 and 0.2-0.8 [10]. The normalization range was the same for all networks in this paper, although normalization also has some effect on a network's performance [10] and deserves a separate discussion. For this study the 0.1-0.9 range was used for data normalization.
The data was also split into two parts: the training data set, on which the neural network was learning, and the testing data set, which was not exposed to the network during training; after learning was finished, the network was tested on it, and the error on this testing set was the most important criterion for evaluating the network's performance. Different sizes of testing sets were used: 1%, 10% and 20% of the provided historical data.
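
The sketch below illustrates this preparation step. The paper does not spell out its exact normalization formula, so a simple min-max mapping into the 0.1-0.9 range is assumed here, together with a chronological split in which the last fraction of the series (1%, 10% or 20%) is held out for testing.

    import java.util.Arrays;

    public final class DataPreparation {

        /** Linearly maps every value into [0.1, 0.9] based on the series minimum and maximum (assumed formula). */
        public static double[] normalize(double[] prices) {
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (double p : prices) {
                min = Math.min(min, p);
                max = Math.max(max, p);
            }
            double[] normalized = new double[prices.length];
            for (int i = 0; i < prices.length; i++) {
                normalized[i] = 0.1 + 0.8 * (prices[i] - min) / (max - min);
            }
            return normalized;
        }

        /** Splits the series chronologically; the last testFraction (0.01, 0.10 or 0.20) is kept for testing. */
        public static double[][] split(double[] series, double testFraction) {
            int testSize = (int) Math.round(series.length * testFraction);
            int trainSize = series.length - testSize;
            return new double[][] {
                    Arrays.copyOfRange(series, 0, trainSize),               // training part
                    Arrays.copyOfRange(series, trainSize, series.length)    // held-out testing part
            };
        }
    }
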
7. Experiment
During this study three major models of neural networks for stock price time series forecasting were analyzed. The models had one, five and thirty inputs respectively. The structure of the hidden layers was varied for these networks: the examined structures had one hidden layer; two hidden layers with the same number of neurons in each layer; three hidden layers in a diamond shape (expanding from the input layer towards the middle, and shrinking from the middle towards the output layer); or ten hidden layers with the same number of neurons in each layer (referred to as the deep neural networks). The complexity of each structure was varied by changing the number of neurons in the hidden layers.
The initial assumption when dealing with neural networks was as follows: a more complex structure, with more inputs and more hidden layers, should produce better results. But structures that are too complicated might lead to memorization of the data, also known as the overfitting problem: the network gets perfect results on the training data set but suffers from bad performance on data sets that were not exposed to it during training.

It was analyzed whether this overfitting problem occurs for the attempted network structures. Plots of network complexity versus testing set error are provided for testing sets of different sizes, except for the deep neural networks (since only 3 deep networks were tested).
The neural network’s performance also depends on the training strategy that was used to learn the data. To make training
conditions equal, networks were trained with backpropagation and resilient backpropagation methods separately, the
learning rate was 0.005, learning momentum was 0.001, and the networks were trained for 1000 epochs. These two training
methods were compared and opposed in terms of stability, predictability of behavior, and performance.
The experiments conducted during this study were performed using the Java programming language and the Encog machine learning framework for Java [39].
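
The paper does not state the exact formula behind the error percentages reported in the tables that follow; one plausible reading is a mean absolute percentage error of one-step-ahead predictions over the testing set, sketched below under that assumption (Encog 3 API; the class and method names are illustrative).

    import org.encog.ml.data.MLData;
    import org.encog.ml.data.basic.BasicMLData;
    import org.encog.neural.networks.BasicNetwork;

    public final class Evaluation {

        /** Mean absolute percentage error of the network's predictions on a testing set (assumed metric). */
        public static double meanAbsolutePercentageError(BasicNetwork network,
                                                         double[][] inputs, double[][] ideals) {
            double sum = 0.0;
            for (int i = 0; i < inputs.length; i++) {
                MLData output = network.compute(new BasicMLData(inputs[i]));   // one-step-ahead prediction
                double predicted = output.getData(0);
                double actual = ideals[i][0];
                sum += Math.abs((actual - predicted) / actual);
            }
            return 100.0 * sum / inputs.length;
        }
    }
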
7.1 One input, one output

The significance of this model is that, based on only one previous response of the system, a neural network can predict its future outcomes. This approach does not seem reliable, and it bears little relation to how real traders or investors analyze the stock market, but it demonstrates the capabilities of machine learning as a computational tool.

Table 1 – One input, one output networks with backpropagation


Structure | Input Neurons | Hidden Layers | Output Neurons | 1% Set Error % | 10% Set Error % | 20% Set Error %
Single Layer | 1 | 5 | 1 | 1.367 | 3.061 | 3.394
Single Layer | 1 | 10 | 1 | 9.723 | 38.020 | 14.586
Single Layer | 1 | 50 | 1 | 2.704 | 3.100 | 2.776
Single Layer | 1 | 100 | 1 | 2.677 | 3.287 | 2.675
Single Layer | 1 | 250 | 1 | 4.786 | 4.632 | 3.455
Single Layer | 1 | 500 | 1 | 18.382 | 16.889 | 11.233
Two Symmetric Layers | 1 | 5-5 | 1 | 37.664 | 20.857 | 12.628
Two Symmetric Layers | 1 | 10-10 | 1 | 4.318 | 20.160 | 10.636
Two Symmetric Layers | 1 | 50-50 | 1 | 36.322 | 31.247 | 21.689
Two Symmetric Layers | 1 | 85-85 | 1 | 28.683 | 39.758 | 21.823
Two Symmetric Layers | 1 | 100-100 | 1 | 13.217 | 62.286 | 17.258
Two Symmetric Layers | 1 | 120-120 | 1 | 27.354 | 34.370 | 18.198
Three Diamond Layers | 1 | 5-10-5 | 1 | 57.236 | 59.127 | 28.826
Three Diamond Layers | 1 | 10-50-10 | 1 | 15.012 | 62.565 | 26.571
Three Diamond Layers | 1 | 20-100-20 | 1 | 47.896 | 18.852 | 13.218
Three Diamond Layers | 1 | 50-100-50 | 1 | 50.288 | 70.014 | 22.314
Deep Network | 1 | 10 layers by 5 neurons | 1 | 51.836 | 60.621 | 48.584
Deep Network | 1 | 10 layers by 10 neurons | 1 | 51.836 | 60.621 | 48.583
Deep Network | 1 | 10 layers by 45 neurons | 1 | 51.836 | 60.621 | 48.584

Table 2 – One input, one output networks with resilient backpropagation


Structure | Input Neurons | Hidden Layers | Output Neurons | 1% Set Error % | 10% Set Error % | 20% Set Error %
Single Layer | 1 | 5 | 1 | 21.300 | 25.174 | 21.235
Single Layer | 1 | 10 | 1 | 39.301 | 45.654 | 37.264
Single Layer | 1 | 50 | 1 | 5.346 | 7.921 | 5.229
Single Layer | 1 | 100 | 1 | 4.940 | 7.515 | 4.919
Single Layer | 1 | 250 | 1 | 5.124 | 7.587 | 4.969
Single Layer | 1 | 500 | 1 | 5.023 | 7.586 | 4.952
Two Symmetric Layers | 1 | 5-5 | 1 | 22.520 | 27.108 | 19.967
Two Symmetric Layers | 1 | 10-10 | 1 | 4.804 | 7.514 | 4.717
Two Symmetric Layers | 1 | 50-50 | 1 | 4.512 | 7.206 | 4.748
Two Symmetric Layers | 1 | 85-85 | 1 | 4.009 | 7.016 | 4.454
Two Symmetric Layers | 1 | 100-100 | 1 | 3.809 | 6.477 | 4.336
Two Symmetric Layers | 1 | 120-120 | 1 | 4.136 | 6.763 | 4.501
Three Diamond Layers | 1 | 5-10-5 | 1 | 47.773 | 55.416 | 45.152
Three Diamond Layers | 1 | 10-50-10 | 1 | 5.077 | 7.912 | 5.232
Three Diamond Layers | 1 | 20-100-20 | 1 | 4.389 | 7.145 | 4.782
Three Diamond Layers | 1 | 50-100-50 | 1 | 4.444 | 7.133 | 4.724
Deep Network | 1 | 10 layers by 5 neurons | 1 | 50.760 | 59.256 | 47.327
Deep Network | 1 | 10 layers by 10 neurons | 1 | 11.079 | 14.014 | 9.599
Deep Network | 1 | 10 layers by 45 neurons | 1 | 3.488 | 7.444 | 4.668
Table 3 – One input, one output networks: graphical comparison of performance (plots of network complexity vs testing data set error %)

Table 4 – One input, one output networks: best performance cases

Best achieved error with corresponding network structure and training method

Testing Set | Best Error % | Network Structure | Training Method
1% Set | 1.367 | 1-5-1 | Backpropagation
10% Set | 3.061 | 1-5-1 | Backpropagation
20% Set | 2.776 | 1-50-1 | Backpropagation

The general approach of a one input, one output network proved to work, although it did not seem likely to produce good results. The backpropagation method was very effective with the smallest network structure (1-5-1), demonstrating errors of 1.367% on the 1% testing data set and 3.061% on the 10% testing data set. But for more complicated networks, backpropagation was incapable of providing stable performance on the testing data sets. It was also observed that backpropagation is incapable of training deep neural networks, resulting in errors above the 45% level.

On the other hand, resilient backpropagation showed very stable results on the testing data sets, with no signs of overfitting whatsoever. It did not demonstrate the best performance in terms of errors, but this learning strategy was able to reach errors below 10% with most networks, while backpropagation mostly showed results above the 20% error level. Networks trained with resilient backpropagation behaved exactly as expected: a more complicated structure gave better performance in terms of errors on the testing data sets. Deep neural networks did not demonstrate any performance improvement compared to other network structures trained with resilient backpropagation, and only the network with 45 neurons in its hidden layers managed to achieve errors below the 10% level.
7.2 Five inputs, one output

This is a more logical approach to modelling the neural network: the last 5 observations are taken into consideration when trying to predict future system responses, which is somewhat closer to how a human analyst would examine the data.

Table 5 – Five inputs, one output networks with backpropagation


Structure | Input Neurons | Hidden Layers | Output Neurons | 1% Set Error % | 10% Set Error % | 20% Set Error %
Single Layer | 5 | 5 | 1 | 1.961 | 6.831 | 5.939
Single Layer | 5 | 10 | 1 | 64.928 | 47.533 | 13.861
Single Layer | 5 | 50 | 1 | 15.403 | 18.118 | 8.890
Single Layer | 5 | 100 | 1 | 7.719 | 42.222 | 14.450
Single Layer | 5 | 250 | 1 | 10.613 | 13.385 | 8.042
Single Layer | 5 | 500 | 1 | 5.217 | 24.806 | 7.198
Two Symmetric Layers | 5 | 5-5 | 1 | 21.420 | 23.381 | 14.196
Two Symmetric Layers | 5 | 10-10 | 1 | 38.292 | 53.552 | 33.996
Two Symmetric Layers | 5 | 50-50 | 1 | 32.672 | 25.360 | 13.873
Two Symmetric Layers | 5 | 85-85 | 1 | 19.610 | 37.912 | 22.174
Two Symmetric Layers | 5 | 100-100 | 1 | 42.662 | 62.571 | 10.234
Two Symmetric Layers | 5 | 120-120 | 1 | 14.951 | 27.214 | 7.743
Three Diamond Layers | 5 | 5-10-5 | 1 | 37.347 | 38.586 | 15.129
Three Diamond Layers | 5 | 10-50-10 | 1 | 27.976 | 61.336 | 21.800
Three Diamond Layers | 5 | 20-100-20 | 1 | 57.981 | 65.192 | 21.579
Three Diamond Layers | 5 | 50-100-50 | 1 | 50.648 | 42.263 | 7.075
Deep Network | 5 | 10 layers by 5 neurons | 1 | 52.562 | 60.586 | 48.454
Deep Network | 5 | 10 layers by 10 neurons | 1 | 52.560 | 60.584 | 48.454
Deep Network | 5 | 10 layers by 45 neurons | 1 | 52.562 | 60.586 | 48.454

Table 6 – Five inputs, one output networks with resilient backpropagation


Structure | Input Neurons | Hidden Layers | Output Neurons | 1% Set Error % | 10% Set Error % | 20% Set Error %
Single Layer | 5 | 5 | 1 | 52.224 | 60.303 | 48.246
Single Layer | 5 | 10 | 1 | 2.891 | 8.863 | 5.871
Single Layer | 5 | 50 | 1 | 3.464 | 7.027 | 4.681
Single Layer | 5 | 100 | 1 | 3.775 | 7.649 | 5.095
Single Layer | 5 | 250 | 1 | 3.498 | 8.305 | 5.547
Single Layer | 5 | 500 | 1 | 3.779 | 9.002 | 6.307
Two Symmetric Layers | 5 | 5-5 | 1 | 2.978 | 8.134 | 6.256
Two Symmetric Layers | 5 | 10-10 | 1 | 5.669 | 10.596 | 6.834
Two Symmetric Layers | 5 | 50-50 | 1 | 4.117 | 8.281 | 5.504
Two Symmetric Layers | 5 | 85-85 | 1 | 2.523 | 6.369 | 4.322
Two Symmetric Layers | 5 | 100-100 | 1 | 1.608 | 6.126 | 4.468
Two Symmetric Layers | 5 | 120-120 | 1 | 2.806 | 6.668 | 4.489
Three Diamond Layers | 5 | 5-10-5 | 1 | 8.437 | 15.549 | 9.641
Three Diamond Layers | 5 | 10-50-10 | 1 | 5.159 | 10.009 | 6.710
Three Diamond Layers | 5 | 20-100-20 | 1 | 1.216 | 4.908 | 3.424
Three Diamond Layers | 5 | 50-100-50 | 1 | 1.401 | 5.532 | 4.016
Deep Network | 5 | 10 layers by 5 neurons | 1 | 52.243 | 60.090 | 48.043
Deep Network | 5 | 10 layers by 10 neurons | 1 | 10.477 | 17.402 | 10.680
Deep Network | 5 | 10 layers by 45 neurons | 1 | 1.443 | 5.781 | 4.094
Table 7 – Five inputs, one output networks: graphical comparison of performance (plots of network complexity vs testing data set error %)

Table 8 – Five inputs, one output networks: best performance cases

Best achieved error with corresponding network structure and training method

Testing Set | Best Error % | Network Structure | Training Method
1% Set | 1.216 | 5-20-100-20-1 | Resilient backpropagation
10% Set | 4.908 | 5-20-100-20-1 | Resilient backpropagation
20% Set | 3.424 | 5-20-100-20-1 | Resilient backpropagation

When the input to the neural network became more complicated, the backpropagation training method could no longer demonstrate valuable performance. Only neural networks with one hidden layer, trained with backpropagation, managed to achieve errors of around 10% with reasonably consistent behavior. All other structures showed chaotic results under backpropagation training. As in the previous approach, it was observed that backpropagation was incapable of training deep neural networks, resulting in errors above the 45% level.

Very stable behavior was once again observed with the resilient backpropagation training method; most of the structures trained with this method showed errors below the 10% level. Some evidence of the overfitting problem was observed, but it occurred within a very small error range and does not necessarily signify overfitting. In general, networks trained with resilient backpropagation showed better performance with more complicated network structures. The best behavior on all testing data sets (1%, 10% and 20%) was demonstrated by the 5-20-100-20-1 neural network trained with the resilient backpropagation method. As previously observed, deep neural networks did not produce any performance improvement compared to other network structures trained with resilient backpropagation; only the network with 45 neurons in its hidden layers managed to achieve errors below the 10% level.
7.3 Thirty inputs, one output

A more advanced model that requires the last 30 data samples to predict the future outcome of the system was developed and analyzed. It was expected to demonstrate the best performance, since it had the greatest theoretical computational power.

Table 9 – Thirty inputs, one output networks with backpropagation


Structure | Input Neurons | Hidden Layers | Output Neurons | 1% Set Error % | 10% Set Error % | 20% Set Error %
Single Layer | 30 | 5 | 1 | 56.815 | 50.613 | 22.559
Single Layer | 30 | 10 | 1 | 56.815 | 50.284 | 34.330
Single Layer | 30 | 50 | 1 | 34.078 | 44.613 | 17.967
Single Layer | 30 | 100 | 1 | 56.815 | 58.826 | 47.819
Single Layer | 30 | 250 | 1 | 56.897 | 88.572 | 44.890
Single Layer | 30 | 500 | 1 | 95.285 | 84.584 | 48.405
Two Symmetric Layers | 30 | 5-5 | 1 | 84.490 | 65.308 | 38.027
Two Symmetric Layers | 30 | 10-10 | 1 | 83.870 | 63.808 | 43.744
Two Symmetric Layers | 30 | 50-50 | 1 | 56.815 | 18.007 | 18.660
Two Symmetric Layers | 30 | 85-85 | 1 | 40.666 | 53.828 | 39.705
Two Symmetric Layers | 30 | 100-100 | 1 | 56.815 | 58.826 | 58.970
Two Symmetric Layers | 30 | 120-120 | 1 | 26.742 | 17.744 | 31.340
Three Diamond Layers | 30 | 5-10-5 | 1 | 28.595 | 48.387 | 25.524
Three Diamond Layers | 30 | 10-50-10 | 1 | 62.284 | 61.076 | 40.791
Three Diamond Layers | 30 | 20-100-20 | 1 | 47.439 | 67.366 | 13.985
Three Diamond Layers | 30 | 50-100-50 | 1 | 56.815 | 58.826 | 20.623
Deep Network | 30 | 10 layers by 5 neurons | 1 | 56.814 | 58.825 | 47.680
Deep Network | 30 | 10 layers by 10 neurons | 1 | 56.814 | 58.825 | 47.679
Deep Network | 30 | 10 layers by 45 neurons | 1 | 56.815 | 58.826 | 47.680

Table 10 – Thirty inputs, one output networks with resilient backpropagation


Structure | Input Neurons | Hidden Layers | Output Neurons | 1% Set Error % | 10% Set Error % | 20% Set Error %
Single Layer | 30 | 5 | 1 | 23.492 | 27.624 | 19.932
Single Layer | 30 | 10 | 1 | 45.526 | 50.690 | 36.539
Single Layer | 30 | 50 | 1 | 1.065 | 4.286 | 7.782
Single Layer | 30 | 100 | 1 | 2.840 | 4.294 | 5.979
Single Layer | 30 | 250 | 1 | 1.438 | 3.571 | 9.039
Single Layer | 30 | 500 | 1 | 1.367 | 5.258 | 10.046
Two Symmetric Layers | 30 | 5-5 | 1 | 54.201 | 56.219 | 47.013
Two Symmetric Layers | 30 | 10-10 | 1 | 0.941 | 3.015 | 8.655
Two Symmetric Layers | 30 | 50-50 | 1 | 0.806 | 1.928 | 4.752
Two Symmetric Layers | 30 | 85-85 | 1 | 1.132 | 2.478 | 5.843
Two Symmetric Layers | 30 | 100-100 | 1 | 1.857 | 3.481 | 6.697
Two Symmetric Layers | 30 | 120-120 | 1 | 1.749 | 2.860 | 5.974
Three Diamond Layers | 30 | 5-10-5 | 1 | 47.296 | 50.119 | 41.574
Three Diamond Layers | 30 | 10-50-10 | 1 | 2.121 | 2.760 | 7.495
Three Diamond Layers | 30 | 20-100-20 | 1 | 2.227 | 2.522 | 6.435
Three Diamond Layers | 30 | 50-100-50 | 1 | 1.200 | 2.208 | 5.640
Deep Network | 30 | 10 layers by 5 neurons | 1 | 9.631 | 12.148 | 34.310
Deep Network | 30 | 10 layers by 10 neurons | 1 | 1.805 | 5.411 | 8.033
Deep Network | 30 | 10 layers by 45 neurons | 1 | 1.980 | 2.140 | 6.215
Table 11 – Thirty inputs, one output networks: graphical comparison of performance (plots of network complexity vs testing data set error %)

Table 12 – Thirty inputs, one output networks: best performance cases

Best achieved error with corresponding network structure and training method

Testing Set | Best Error % | Network Structure | Training Method
1% Set | 0.806 | 30-50-50-1 | Resilient backpropagation
10% Set | 1.928 | 30-50-50-1 | Resilient backpropagation
20% Set | 4.752 | 30-50-50-1 | Resilient backpropagation

When the input to the neural network becomes even more complex, the backpropagation training method completely fails to cope with the training process. With input consisting of 30 samples of the system's previous responses, backpropagation could hardly achieve errors below the 30% level. It showed no valuable results and behaved very chaotically. Once again, backpropagation demonstrated its inability to train deep neural networks, resulting in errors above the 45% level.

Resilient backpropagation demonstrated the best performance with the 30-50-50-1 neural network, showing a 0.806% error on the 1% testing data set, a 1.928% error on the 10% testing data set, and a 4.752% error on the 20% testing data set. Similar to previous observations, it demonstrated very stable behavior, and most of the structures trained with this method showed errors below the 10% level. Deep neural networks performed better than in the previous experiments: the networks with 10 and with 45 neurons in their hidden layers managed to achieve errors below the 10% level.
8. Best Testing Set Performance
The best performance, in terms of errors, on the 1% and 10% testing sets was achieved by the 30-50-50-1 neural network trained with resilient backpropagation, and on the 20% testing set by the 1-50-1 neural network trained with backpropagation. The figures below demonstrate what this performance means in terms of forecasting power: the actual system response for the testing sets was compared with the predicted system response.
9. Conclusion
During this study the application of time series prediction to stock market forecasting was examined, and a comparative study of different neural network structures and different learning methods was performed. The best network topologies for solving the time series forecasting problem (stock market price prediction) were determined. It was demonstrated that the resilient backpropagation training method worked equally well with most network structures, showing more predictable error behavior, and was in general more reliable than backpropagation. It was also observed that the complexity of the inputs to the network was not directly related to the performance of the network, but networks with more input data tended to produce lower errors on the testing sets; networks with only one input nonetheless proved to be very effective as well, which was a spectacular demonstration of neural networks' computational power. Overfitting occurred to some extent with most of the networks, but it produced only a very slight increase in errors and could be ignored for most purposes. Among the other network structures, a special case, deep neural networks, was analyzed and shown to be not very effective; it did not result in the best prediction quality. Deep networks still produced acceptable error performance (below the 10% level), but simpler network topologies outperformed them.
While performing this study, many networks demonstrated errors below 10%, and some even below 5%, which shows that neural networks are a great tool for time series forecasting, with huge computing power and the ability to learn chaotic time series with no underlying mathematical model. Thus, they can be successfully applied to the stock market and many other fields where such prediction tools are required.

References

[1] S. Russell and P. Norvig, Artificial Intelligence a Modern Approach, New Jersey: Pearson Education, 2010.

[2] R. O. Duda, P. E. Hart and D. G. Stork, Pattern Classification, Wiley, 2004.

[3] D. Kriesel, A Brief Introduction to Neural Networks, 2005.

[4] H. Peng and K.-L. Du, "Urban Traffic State Detection Based on Support". Patent US 9,037,519 B2, 19 May 2015.

[5] M. N. R. Deepthi Gurram, "A Decision Support System for Predicting Heart Disease Using Multilayer Perceptron
and Factor Analysis," International Review on Computers and Software, vol. 10, no. 8, August 2015.

[6] M. O. G. Nayeem, M. N. Wan and M. K. Hasan, "Prediction of Disease Level Using Multilayer Perceptron of
Artificial Neural Network for Patient Monitoring," 2015.

[7] J. Mahmoudi, M. A. Arjomand, M. Rezaei and M. Mohammadi, "Predicting the Earthquake Magnitude Using the
Multilayer Perceptron Neural Network with Two Hidden Layers," Civil Engineering Journal, January 2016.

[8] A. Atiya, S. M. El-Shoura, S. I. Shaheen and M. S. El-Sherif, "A comparison between neural-network forecasting
techniques-case study: river flow forecasting," IEEE Transactions on Neural Networks, April 1999.

[9] M. Shafaei and O. Kisi, "Predicting river daily flow using wavelet-artificial neural networks based on regression
analyses in comparison with artificial neural networks and support vector machine models," Neural Computing and
Applications, April 2016.

[10] A. Singh, R. Panda and N. Pramanik, "Appropriate data normalization range for daily river flow forecasting using
an artificial neural network," January 2009.

[11] H. Abderrahim, M. R. Chellali and A. Hamou, "Forecasting PM10 in Algiers: efficacy of multilayer perceptron
networks," Environmental Science and Pollution Research, September 2015.
[12] L. Hrust, Z. B. Klaic, J. Križan, O. Antonić and P. Hercog, "Neural network forecasting of air pollutants hourly
concentrations using optimised temporal averages of meteorological variables and pollutant concentrations,"
Atmospheric Environment, November 2009.

[13] C. Poolla, A. Ishihara, S. Rosenberg, R. Martin, A. Fong, S. Ray and C. Basu, "Neural network forecasting of solar
power for NASA Ames sustainability base," 2015.

[14] S. Tasdemir and A. Cinar, "Application of artificial neural network forecasting of daily maximum temperature in
Konya".

[15] C.-L. Huang, M.-C. Chen and C.-J. Wang, "Credit scoring with a data mining approach based on support vector
machines," Expert Systems with Applications, November 2007.

[16] H. A. Abdou, S. T. Alam and J. Mulkeen, "Would credit scoring work for Islamic finance? A neural network
approach," International Journal of Islamic and Middle Eastern Finance and Management, April 2014.

[17] I. S. Agbon and J. C. Araque, "Predicting Oil and Gas Spot Prices Using Chaos Time Series Analysis and Fuzzy
Neural Network Model," 2003.

[18] S. M. Al-Fattah and R. Startzman, "Predicting Natural Gas Production Using Artificial Neural Network," in SPE
Hydrocarbon Economics and Evaluation Symposium, Dallas, 2001.

[19] S. Hosseinipoor, Forecasting Natural Gas Prices in the United States Using Artificial Neural Networks, 2016.

[20] X. X. Zhang and D. T. Zhang, "A Neural Network Forecasting Model of Beijing Motor Vehicles Sold Based on Set
Pare Analysis," 2011.

[21] Y.-T. Jou, H.-M. Wee, H.-C. Chen, Y.-H. Hsieh and L. Wang, "A neural network forecasting model for consumable
parts in semiconductor manufacturing," Journal of Manufacturing Technology Management, March 2009.

[22] S. C. Kon and L. W. Turner, "Neural network forecasting of tourism demand," Tourism Economics, September
2005.

[23] L. R. Weatherford, T. W. Gentry and B. Wilamowski, "Neural network forecasting for airlines: A comparative
analysis," Journal of Revenue & Pricing Management, January 2003.

[24] Y.-H. Wang, "Nonlinear neural network forecasting model for stock index option price: hybrid GJR-GARCH
approach.," Expert Systems with Applications, January 2009.

[25] T.-S. Lee and C.-C. Chiu, "Neural network forecasting of an opening cash price index," International Journal of
Systems Science, February 2002.

[26] J.-S. Chen and P.-C. Wu, "Neural network forecasting of TAIMEX index futures," 2000.

[27] M. Dixon, D. Klabjan and J. H. Bang, "Classification-based Financial Markets Prediction using Deep Neural
Networks," March 2016.

[28] B. W. Wanjawa and L. Muchemi, "ANN Model to Predict Stock Prices at Stock Exchange Markets," August 2014.

[29] G. Zhang and M. Y. Hu, "Neural network forecasting of the British Pound/US Dollar exchange rate," Omega,
August 1998.

[30] A. D. Aydin and S. C. Cavdar, "Comparison of Prediction Performances of Artificial Neural Network (ANN) and
Vector Autoregressive (VAR) Models by Using the Macroeconomic Variables of Gold Prices, Borsa Istanbul
(BIST) 100 Index and US Dollar-Turkish Lira (USD/TRY) Exchange Rates," Procedia Economics and Finance,
December 2015.

[31] R. R. Trippi and E. Turban, "Neural Networks in Finance and Investing: Using AI to Improve Real World
Performance," 1993.

[32] S. Zhang, H.-X. Liu, T. Gao and S.-D. Du, "Determining the input dimension of a neural network for nonlinear time
series prediction," Chinese Physics, June 2003.

[33] Q. Li and D. Zheng, "Determining topology architecture for chaotic time series neural network," February 1999.

[34] P. L. Rosin and F. Fierens, "Improving Neural Network Generalisation," 1995.

[35] E. Baum and D. Haussler, "What Size Net Gives Valid Generalization?," in Advances in Neural Information
Processing Systems, Denver, 1988.

[36] S. Aras and I. D. Kocakoç, "A new model selection strategy in time series forecasting with artificial neural
networks: IHTS," Neurocomputing, October 2015.

[37] S. F. Abdullah, A. F. N. A. Rahman, Z. A. Abas and W. H. B. M. Saad, "Multilayer Perceptron Neural Network In
Classifying Gender Using Fingerprint Global Level Features," Indian Journal of Science and Technology, February
2016.

[38] U. Smyczyńska, J. Smyczynska and R. Tadeusiewicz, "Influence of neural network structure and data-set size on its
performance in the prediction of height of growth hormone-treated patients," Bio-Algorithms and Med-Systems,
January 2016.

[39] J. T. Heaton, "Encog: Library of Interchangeable Machine Learning Models for Java and C#," 2015.
