IRACST - International Journal of Computer Science and Information Technology & Security (IJCSITS), ISSN: 2249-9555
Vol. 5, No6, December 2015
affects both producers and consumers and has macroeconomic implications as well. A steep rise in the prices of primary commodities spills over to other sectors of the economy and leads to an increase in the overall rate of inflation.

Data mining provides the methodology to transform these data into useful information for decision making. Vegetable prices change quickly and are unstable, which has a great impact on our daily life. Vegetable price series are highly nonlinear and noisy, so it is hard to predict the vegetable price.

Machine learning technique

Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Machine learning explores the construction and study of algorithms that can learn from and make predictions on data.
2. RELATED WORK

Amit Jain et al. [1] proposed a technique for the short term load forecasting problem. Fuzzy logic is used to obtain the next day load forecast. A swarm optimization technique is applied to the training data set, and the Euclidean norm with weight factors is used for the selection of similar days. This provides reasonable accuracy for the predicted details.

Mukesh and Rohini T.V. et al. [5] used a multilayer perceptron neural network to solve the problem of stock market prediction. The least mean square (LMS) algorithm and the sigmoid delta algorithm are compared on RMS error; the least mean square algorithm has a lower RMS error than the sigmoid delta algorithm. The MapReduce programming model is used so that the large amount of data can be processed rapidly. This provides better accuracy for the predicted data and also shows effectiveness in reducing the time of the prediction process.

Chauhan, Bhargwant et al. [12] implemented neural networks with the backpropagation algorithm for the stock market. The error rate is reduced using this algorithm, and the data are predicted easily through artificial neural networks.

Juan Peralta Donate et al. [7] used evolving artificial neural networks (EANN) in a forecasting application. Two methods are used to evolve the neural network architectures: the genetic algorithm and the differential evolution algorithm. Comparing these algorithms, the genetic algorithm gives improved accuracy in the final forecast produced by the artificial neural networks.

K.K. Sureshkumar et al. [15] implemented prediction tools that are used to predict future stock prices, and their performance statistics are evaluated. This would help the investor make better business decisions, such as whether to buy or sell a stock.

Alioune Dieng et al. [8] used two forecasting approaches obtained from the methods, evaluated using quantitative and qualitative criteria. A Box-Jenkins autoregressive model is used for generating vegetable price forecasts for producers and consumers.

Ozgur Kisi et al. [16] presents a comparison of different artificial neural network (ANN) algorithms for short term daily streamflow forecasting. Four different ANN algorithms, namely backpropagation, conjugate gradient, cascade correlation, and Levenberg-Marquardt, are applied to continuous streamflow data of the North Platte River in the United States. The models are verified with untrained data, and the results from the different algorithms are compared with each other.
There is no clear-cut guideline for deciding the architecture of an ANN. It is problem dependent, and there is no formula to determine the number of neurons in the hidden layers. If the number of neurons in the hidden layer is increased, the computation time also increases. The exact number of neurons in the hidden layer is determined based on experience.

The number of neurons in the hidden layer can be selected by one of the following rules of thumb:

a) (n - 1) neurons, where n is the number of input neurons.

b) (n + 1) neurons, where n is the number of input neurons.

c) For every input neuron, 8 hidden neurons can be taken.

d) Number of input neurons / number of output neurons.

e) Half the sum of input and output neurons.

f) P / n neurons, where n is the number of neurons and P represents the number of training samples.

In this paper, the former three weeks of tomato price data are taken as input and the following one week of data as output for weekly price prediction. So three input neurons are considered for weekly price prediction. A three-layer feed-forward network structure is used for weekly vegetable price prediction. The network structure includes an input layer, a hidden layer and an output layer. Each nerve cell connects to all nerve cells in the next layer, but there is no connection among nerve cells in the same layer. This is because the price of the vegetable that is output for a certain period serves as the input price for the following period.

The choice of activation function, learning rate and optimization target were determined by experiment. In this paper the activation function from the input layer to the hidden layer is tansig() and from the hidden layer to the output layer is purelin(). The optimization algorithms were compared and the Levenberg-Marquardt algorithm was chosen, as it leads to fast convergence and a higher hit rate compared to the gradient descent algorithm.

Construction of Neural Network Model Based on Genetic Algorithm

The neural network based on GA is constructed. Its structure is similar to the BPNN. The main process of using GA for optimization of the neural network model is as follows:

(1) Gene encoding. From the BPNN, obtain the number of weights. Every weight represents a gene, and all the genes together form a chromosome.

(2) Initial chromosome group generation. Choose the number of chromosomes in the initial population. For each chromosome, generate weights randomly in the given range to construct the initial group.

(3) Individual fitness computation. Use the training samples to train the individual chromosome, which represents an ANN, and then calculate the individual learning error E. The formula is as follows:

E = sum over i = 1..n and l = 1..m of (e_li)^2

Here, n is the number of training samples, m is the number of output units, and e_li is the difference between the actual value and the expected value of the l-th output when the i-th sample is used for training. The fitness function fs is as below:

fs = 1 / E

(4) Selection operation. Select individuals using the roulette wheel and retain the best individuals.
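The three-input feed-forward network and the GA fitness computation fs = 1/E can be sketched in Python. This is a minimal illustration, not the authors' code: the hidden width of 4, the helper names, and the use of NumPy's tanh in place of MATLAB's tansig() (with a plain linear map standing in for purelin()) are all assumptions.

```python
import numpy as np

N_IN, N_HIDDEN, N_OUT = 3, 4, 1   # 3 weekly prices in, 1 out; width 4 is assumed
# One gene per weight or bias of the network.
N_GENES = N_IN * N_HIDDEN + N_HIDDEN + N_HIDDEN * N_OUT + N_OUT

def decode(chromosome):
    """Split the flat chromosome into the layer weights and biases."""
    c = np.asarray(chromosome, dtype=float)
    i = 0
    W1 = c[i:i + N_IN * N_HIDDEN].reshape(N_HIDDEN, N_IN); i += N_IN * N_HIDDEN
    b1 = c[i:i + N_HIDDEN];                                  i += N_HIDDEN
    W2 = c[i:i + N_HIDDEN * N_OUT].reshape(N_OUT, N_HIDDEN); i += N_HIDDEN * N_OUT
    b2 = c[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(chromosome, x):
    """Feed-forward pass: tanh hidden layer, linear output layer."""
    W1, b1, W2, b2 = decode(chromosome)
    return W2 @ np.tanh(W1 @ x + b1) + b2

def fitness(chromosome, samples):
    """fs = 1/E, where E is the summed squared output error over all samples."""
    E = sum(float(np.sum((forward(chromosome, x) - y) ** 2)) for x, y in samples)
    return 1.0 / E
```

A chromosome here is simply a vector of N_GENES (= 21) real numbers, so the GA evolves weight vectors directly instead of adjusting them by backpropagation.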
(5) Crossover operation. Assume x1 and x2 are parents; their children y1 and y2 after crossover are obtained by the formula below:

y1 = a * x1 + (1 - a) * x2
y2 = a * x2 + (1 - a) * x1

Here, a is a parameter which changes with the evolution generation.

(6) Mutation operation. A Gaussian approximation is adopted to improve the local search performance of the GA in key search areas. During mutation, a random number drawn from a normal distribution whose mean is P and whose variance is P^2, where P is the value of the original gene, replaces that gene.

The accuracy is shown in this graph. On the X-axis the comparison of BPNN and GANN is taken, and on the Y-axis accuracy is taken. The accuracy of the proposed system is high compared to the existing back-propagation neural network (BPNN) method.

[Figure: Accuracy comparison of BPNN and GANN. X-axis: Methods (BPNN, GANN); Y-axis: Accuracy.]

4.2. Mean square error (MSE)

There are many measures of predictor accuracy.

[Figure: Mean square error comparison of BPNN and GANN. X-axis: Methods (BPNN, GANN); Y-axis: MSE.]
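The roulette-wheel selection, arithmetic crossover and Gaussian mutation operations described above can be sketched as below. This is a hedged illustration, not the authors' implementation: chromosomes are plain Python lists of weights, and the mutation rate and the treatment of the crossover parameter as a caller-supplied constant are assumptions.

```python
import random

def roulette_select(population, fitnesses):
    """Pick one chromosome with probability proportional to its fitness fs."""
    r = random.random() * sum(fitnesses)
    acc = 0.0
    for chrom, fs in zip(population, fitnesses):
        acc += fs
        if acc >= r:
            return chrom
    return population[-1]   # guard against floating-point round-off

def crossover(x1, x2, a):
    """Arithmetic crossover: y1 = a*x1 + (1-a)*x2, y2 = a*x2 + (1-a)*x1."""
    y1 = [a * g1 + (1 - a) * g2 for g1, g2 in zip(x1, x2)]
    y2 = [a * g2 + (1 - a) * g1 for g1, g2 in zip(x1, x2)]
    return y1, y2

def mutate(chromosome, rate=0.05):
    """Gaussian mutation: with probability `rate`, replace gene P by a draw
    from a normal distribution with mean P and variance P^2 (std |P|)."""
    return [random.gauss(g, abs(g)) if random.random() < rate else g
            for g in chromosome]
```

One generation of the GA then amounts to repeatedly selecting two parents, crossing them over, mutating the children, and carrying the best individuals forward unchanged.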
[14] Zabir Haider Khan and Tasnim Sharmin Alin, "Price Prediction of Share Market using Artificial Neural Network (ANN)", International Journal of Computer Applications (ISSN 0975-8887), Vol. 22, No. 2, 2011.

[16] Ozgur Kisi, "Streamflow Forecasting Using Different Artificial Neural Network Algorithms", Journal of Hydrologic Engineering, Vol. 12, Issue 5, 2007, pp. 532-539.
Authors Details