

An ANFIS Model for Stock Price Prediction: The
Case of Tehran Stock Exchange
Akbar Esfahanipour, Parvin Mardani
Department of Industrial Engineering and Management Systems, Amirkabir University of Technology, Tehran, Iran
esfahaa@aut.ac.ir, mardaniparvin@aut.ac.ir

Abstract— The main purpose of forecasting in financial markets is to estimate future trends and to reduce the risks of decision making. This research suggests an ANFIS model to improve prediction accuracy in stock price forecasting. For doing so, we applied fuzzy subtractive clustering for structure identification of our ANFIS model. We implemented the proposed model for predicting the Tehran Stock Exchange Price Index (TEPIX) using a dataset including TEPIX data from 25 March 2001 until 25 September 2010. To demonstrate the advantages of this model, we first compared our results with an Artificial Neural Network (ANN) model of type Multi Layer Perceptron (MLP). Then, we compared our results with ANFIS models using grid partitioning and Fuzzy C-Mean (FCM) clustering. The comparative results show the superiority of our proposed ANFIS model against the ANN model and the ANFIS models with no clustering and with FCM clustering.

Keywords- Stock price prediction; ANFIS; fuzzy subtractive clustering; Artificial Neural Network; Tehran Stock Exchange

I. INTRODUCTION

Followers of random walk theory and the Efficient Market Hypothesis (EMH) believe that the stock price in an efficient market is unpredictable. However, adherents of technical analysis, fundamental analysis, nonlinear dynamics and chaos theory believe in the predictability of the stock price, and they have provided several methods for this purpose.

Recent advances in soft computing techniques provide suitable tools to predict chaotic environments such as the stock market while taking their nonlinear behavior into account [1]. Fuzzy Inference Systems (FISs), Artificial Neural Networks (ANNs) and heuristic techniques such as the Genetic Algorithm (GA) are principal constituents of soft computing. The hybridization of these techniques is quite effective, as it synergistically exploits the advantages of two or more of these areas [2]. Neuro-fuzzy systems are an example of these combinatorial methodologies. The Adaptive Network based Fuzzy Inference System (ANFIS), as presented by Roger Jang in 1993, is one of the most powerful of these structures and is widely used in financial markets.

Although neuro-fuzzy systems have the advantages of faster convergence, higher prediction accuracy, and no need to make any assumptions about the statistical properties of the data, defining their structure is not a simple task. Structure identification relates to effective partitioning of the input space to generate an FIS with the minimum number of rules that distinguishes the data behavior and achieves promising results [3].

In this perspective, the purpose of this research is to present a successful application of ANFIS using fuzzy subtractive clustering to determine an appropriate number of rules and membership functions (MFs) in a first order Takagi-Sugeno-Kang (TSK) fuzzy rule base for stock price prediction. Additionally, the performance of our proposed ANFIS model has been compared with ANFIS models based on different clustering methods and grid partitioning, as well as with an Artificial Neural Network (ANN) model of type Multi Layer Perceptron (MLP).

The rest of this paper is organized as follows. Section II provides the relevant studies. Section III reviews the overall structure and learning procedure of ANFIS besides the fuzzy subtractive clustering method. The performance measures used for evaluating our model are described in Section IV. Section V presents the structure of our proposed model. Section VI describes the implementation of the proposed approach in the Tehran Stock Exchange (TSE) and the comparison results of the models. Finally, some concluding remarks are pointed out in Section VII.

II. LITERATURE REVIEW

Two studies [4-5] provided several pieces of evidence for the predictability of the stock price in the Tehran Stock Exchange (TSE), as well as for its chaotic and nonlinear behavior in this emerging market. Therefore, it is reasonable to apply an ANFIS model, as a more suitable approach, for prediction of the stock price in this market rather than the traditional forecasting methods, which were not efficient enough.

Table I summarizes the characteristics of recent studies that utilized an ANFIS model for stock price prediction, in comparison with this study. Some of these studies compare their derived ANFIS model with other techniques including Support Vector Machine (SVM) and various types of Artificial Neural Networks such as the Back Propagation Neural Network (BPNN), Multi Layer Perceptron (MLP), Different Boosting Neural Network (DBNN) and Radial Basis Function (RBF). All these researches justify the high potential of ANFIS based modeling in stock market prediction.



TABLE I. RECENT STUDIES THAT UTILIZED AN ANFIS MODEL FOR STOCK PRICE PREDICTION

Reference | Fuzzy clustering method | Input data | Sample size | Model comparisons | Performance measures | Case study
[6] | - | open, close, high and low index prices | 7 and 4 years of NASDAQ and NIFTY data | ANN, SVM, DBNN | RMSE, MAPE, maximum absolute percentage error | NASDAQ and S&P CNX NIFTY stock indices
[7] | - | index values of five previous days | 244 observations | - | fitting error, forecasting error, ratio of correct tendency | Shanghai Stock Market Index
[8] | - | daily stock price | 4775 observations | B&H and thirteen reported soft computing models such as RBF, MLP, FIS, etc. | Hit, RMSE, MAE, MSE | some stocks of the Athens and New York Stock Exchange (NYSE)
[9] | c-mean, subtractive | NASDAQ, DJ and TAIEX values of one previous day | 7 years | two reported fuzzy models in the literature for time series prediction | RMSE | Taiwan Stock Exchange Index (TAIEX)
[10] | c-mean | six-day moving average and six-day bias | 614 and 479 observations of the Taiwan and Tehran indices | multiple regression analysis, BPNN and one reported TSK model in the literature | MAPE | Taiwan and Tehran Stock Exchange indices
[11] | - | six monthly macroeconomic indicators and three indices (DJI, DAX and BOVESPA) | 228 observations | - | RMSE, R^2, coefficient of variance | Istanbul Stock Exchange return
[12] | - | close, high and low index prices | 1000 observations | BPNN, SVM and two hybrid models based on ANFIS, BPNN and SVM | MSE | S&P 500 Index
This study | c-mean, subtractive | TEPIX values of four previous days | 2297 observations | MLP, ANFIS based on two clustering methods and grid partitioning | RMSE, MAE, MAPE, NMSE, R^2 | Tehran Stock Exchange Price Index (TEPIX)

Reference [9] proposed a volatility model based on fusion ANFIS models with Fuzzy C-mean (FCM) and subtractive clustering for partitioning the universe of discourse. They put both clustering methods into practice in the implementation steps of their model and selected the one with better results. They concluded that the best empirical results had been obtained by ANFIS with subtractive clustering. Reference [10] presented an adaptive neuro-fuzzy model based on a TSK fuzzy inference system and applied FCM clustering for identifying the number of rules, which showed higher performance in comparison with multiple regression and BPNN.

Collazo et al. carried out a comparison between FCM and subtractive clustering as the basis of a fuzzy model identification algorithm [13]. They used both generated FISs for predicting air pollution in the northwest of England. Their study asserted the feasibility of both methods to forecast the behavior of complex dynamic systems.

Pereira and Dourado constructed a neuro-fuzzy system based on RBF and SVM that takes advantage of subtractive clustering for complexity reduction [14]. They contend that using subtractive clustering is more efficient than FCM, in the sense that the number of clusters does not need a prior specification, while the quality of FCM strongly depends on the choice of the number of centers and the initial cluster positions.

FCM and fuzzy subtractive clustering require setting prerequisite parameters, namely the number of clusters and the clustering radius, respectively. A pre-assigned cluster radius implicitly determines the number of clusters in the latter technique, as mentioned in [15-17]. On the other hand, in most cases we have no idea or prior knowledge about the number of clusters in a given data set. In such conditions, fuzzy subtractive clustering is somewhat preferred and has had successful applications such as modeling of a solar power plant [14], PI-type fuzzy logic controllers [18], Twin Rotor MIMO System (TRMS) control [17] and prediction of chaotic traffic flow time series [16].

III. THEORETICAL BASICS

A. ANFIS Architecture and Learning Procedure

To give a concise explanation of ANFIS as the basis of our proposed model, we assume a first order TSK fuzzy inference system with input x = (x_1, x_2) ∈ U ⊂ R^2 and two fuzzy rules:

If x_1 is C_1^l and x_2 is C_2^l then y^l = c_0^l + c_1^l x_1 + c_2^l x_2,   l = 1, 2

where C_i^l represents fuzzy sets or linguistic values, c_i^l denotes constant coefficients, and l is the index of the rule. The following is a layer by layer description of the ANFIS architecture based on the above-mentioned TSK system.

• Layer 1: There are two inputs and two Membership Functions (MFs) associated with each one, so this layer contains four adaptive nodes. The output of a node, μ_{C_i^l}(x), is the MF of C_i^l. The MF can be chosen as any continuous and piecewise differentiable function between 0 and 1, such as the Gaussian function of (1), which is used in this study. It has two parameters, {a_i, b_i}, which are the antecedent parameters.

μ_{C_i^l}(x) = exp[ -((x - b_i) / a_i)^2 ]   (1)

• Layer 2: It has two fixed nodes; every node generates the firing strength (weight) of the l-th rule by (2).

w_l = ∏_{i=1,2} μ_{C_i^l}(x_i) = μ_{C_1^l}(x_1) × μ_{C_2^l}(x_2)   (2)

• Layer 3: The number and type of nodes in this layer are the same as in the previous layer. The output of a fixed node is a normalized weight, defined as (3).

w̄_l = w_l / (w_1 + w_2)   (3)

• Layer 4: There are two adaptive nodes that calculate the rule outputs based on the consequent parameters and then multiply them by the related normalized weight; as a result, w̄_l y^l is sent from each node to the next layer. Each rule's output is obtained by (4).

y^l = c_0^l + c_1^l x_1 + c_2^l x_2   (4)

• Layer 5: The final output of the network is obtained in this layer. The single node in this last layer calculates f(x) ∈ V ⊂ R, which is equivalent to the first order TSK fuzzy inference system output. It is obtained by summing the incoming signals from layer 4, as shown by (5).

f(x) = Σ_l w̄_l y^l = w̄_1 y^1 + w̄_2 y^2   (5)

In this study, parameter recalibration has been performed through a combination of Error Back Propagation (EBP) and Least Squares (LS) estimates, known as the hybrid learning rule. In the MATLAB implementation, we can control the step size, step size decrease rate and step size increase rate, as well as the error tolerance and the number of training epochs. For more details of the ANFIS architecture and hybrid learning please refer to [3].
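As a concrete illustration of layers 1-5 (this is our own minimal NumPy sketch, not code from the paper, which relied on the MATLAB ANFIS implementation), the forward pass of the two-input, two-rule first order TSK system can be written as follows; the array names a, b and c for the antecedent and consequent parameters are ours.

import numpy as np

def anfis_forward(x, a, b, c):
    """Forward pass of the two-input, two-rule first order TSK ANFIS described above.
    x: input vector of shape (2,); a, b: Gaussian MF widths/centers of shape (rules, inputs);
    c: consequent parameters (c0, c1, c2) of shape (rules, 3)."""
    mu = np.exp(-(((x[None, :] - b) / a) ** 2))   # Layer 1: membership grades, eq. (1)
    w = mu.prod(axis=1)                           # Layer 2: rule firing strengths, eq. (2)
    w_bar = w / w.sum()                           # Layer 3: normalized strengths, eq. (3)
    y = c[:, 0] + c[:, 1:] @ x                    # Layer 4: rule outputs, eq. (4)
    return float(w_bar @ y)                       # Layer 5: weighted sum, eq. (5)

# toy usage with arbitrary parameter values
x = np.array([0.3, 0.7])
a = np.ones((2, 2))
b = np.array([[0.2, 0.8], [0.6, 0.4]])
c = np.array([[0.1, 1.0, -0.5], [0.0, 0.5, 0.5]])
print(anfis_forward(x, a, b, c))

During training, the hybrid rule mentioned above estimates c by least squares in the forward pass and updates a and b by error back propagation in the backward pass.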
B. The Fuzzy Subtractive Clustering

As stated in [19], fuzzy subtractive clustering was introduced by Chiu as an expansion of the grid-based mountain clustering method proposed by Yager and Filev. The algorithm was designed for estimating the number of cluster centers in a set of data, particularly for high dimensional problems [19]. Assuming there are n data points in an m-dimensional space and letting k be the counter of cluster numbers, the algorithm performs the following steps:

• Step 1: Rescale the data into [0, 1] in each dimension.

• Step 2: Set the cluster radius r_a, the quash factor, the acceptance ratio ε and the rejection ratio ε'. The cluster radius specifies a cluster center's range of influence in each of the data dimensions. The quash factor is used in subsequent steps to determine the neighborhood of a cluster center in the presence of other cluster centers. Upper and lower thresholds for potential cluster centers are obtained by means of the acceptance and rejection ratios in the following steps.

• Step 3: Each data point is assumed to be a potential cluster center. If {x_1, x_2, ..., x_n} denotes the normalized points, the potential value of the i-th data point as a cluster center is defined by (6).

P_i = Σ_{j=1}^{n} exp[ -||x_i - x_j||^2 / (r_a / 2)^2 ]   (6)

• Step 4: The data point x_t with the maximum potential P_t is considered as the first cluster center, k = 1; hence we denote it as (x_t^(1), P_t^(1)).

• Step 5: The first cluster center and all data points in its neighborhood must be removed, and the potential values of the remaining points must be revised. In other words, the new potential value of x_i excludes the influence of the first cluster center. In general, equation (7) shows the revised potential value after obtaining the k-th cluster center, where r_b = quash factor × r_a.

P_i^new = P_i^old - P_t^(k) exp[ -||x_i - x_t^(k)||^2 / (r_b / 2)^2 ]   (7)

• Step 6: The data point x_t with the highest revised potential value P_t is a candidate for the next cluster center. If P_t > ε P_t^(1) or [(d_min / r_a) + (P_t / P_t^(1))] ≥ 1, where d_min is the minimum distance between x_t and all the previous cluster centers, we accept x_t as a new cluster center, set the cluster counter to k = k + 1 so that we have (x_t^(k+1), P_t^(k+1)), and go back to the previous step. If neither of these conditions is satisfied, the next data point with the highest potential value is tested. This process continues until P_t < ε' P_t^(1), which is used as the stopping criterion for acquiring new cluster centers in this algorithm.

Next, each cluster center can be considered as a basis for constructing a rule, so that the initial parameters of a rule can be identified [13, 15, 19]. Applying this process in our proposed model, we obtain an appropriate number of rules and membership functions for each input by finding the number of clusters in the input-output training set, besides the initial rule parameters. The number of clusters is equal to the number of fuzzy rules.
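The clustering steps above can be sketched in NumPy as follows (our own simplified reconstruction, using as defaults the parameter values that appear later in the paper: r_a = 0.45, quash factor 1.25, acceptance ratio 0.5, rejection ratio 0.15; the handling of rejected candidates is slightly simplified with respect to the original algorithm).

import numpy as np

def subtractive_clustering(X, ra=0.45, quash=1.25, eps_accept=0.5, eps_reject=0.15):
    """Estimate cluster centers of X (n samples x m dimensions) following Steps 1-6."""
    # Step 2: ra, quash, eps_accept and eps_reject play the roles of r_a, the quash factor, eps and eps'
    # Step 1: rescale each dimension into [0, 1]
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    rb = quash * ra                                   # neighborhood radius used in eq. (7)
    # Step 3: initial potentials, eq. (6)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    P = np.exp(-d2 / (ra / 2) ** 2).sum(axis=1)
    # Step 4: the point with the largest potential becomes the first cluster center
    t = int(P.argmax()); P1 = P[t]
    centers = [X[t].copy()]
    # Step 5: subtract the influence of the new center, eq. (7)
    P = P - P[t] * np.exp(-((X - X[t]) ** 2).sum(axis=1) / (rb / 2) ** 2)
    # Step 6: keep testing candidates until the stopping criterion is met
    while True:
        t = int(P.argmax()); Pt = P[t]
        if Pt < eps_reject * P1:                      # stopping criterion
            break
        dmin = min(np.linalg.norm(X[t] - c) for c in centers)
        if Pt > eps_accept * P1 or (dmin / ra + Pt / P1) >= 1:
            centers.append(X[t].copy())               # accept as a new cluster center
            P = P - P[t] * np.exp(-((X - X[t]) ** 2).sum(axis=1) / (rb / 2) ** 2)
        else:
            P[t] = 0.0                                # set aside and test the next candidate
    return np.array(centers)

Each returned center then seeds one fuzzy rule: its coordinates give the membership function centers, and the cluster radius fixes their widths.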
IV. PERFORMANCE MEASURES

We applied the performance measures commonly used in forecasting problems to compare and evaluate the accuracy of our model. These performance measures are the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Normalized Mean Square Error (NMSE) and coefficient of determination (R^2), which are calculated by (8)-(12), where Q is the number of testing patterns, d_q is the actual output for pattern q, z_q is the model output for pattern q, and d̄ is the average of all actual outputs related to the Q testing patterns.

RMSE = sqrt( Σ_{q=1}^{Q} (d_q - z_q)^2 / Q )   (8)

MAE = Σ_{q=1}^{Q} |d_q - z_q| / Q   (9)

MAPE = (100 / Q) × Σ_{q=1}^{Q} |(d_q - z_q) / d_q|   (10)

NMSE = Σ_{q=1}^{Q} (d_q - z_q)^2 / Σ_{q=1}^{Q} (d_q - d̄)^2   (11)

R^2 = 1 - [ Σ_{q=1}^{Q} (d_q - z_q)^2 / Σ_{q=1}^{Q} (d_q - d̄)^2 ]   (12)
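For completeness, a small NumPy helper (ours, not from the paper) that evaluates all five measures of (8)-(12) on a set of actual and predicted values:

import numpy as np

def forecast_metrics(d, z):
    """d: actual outputs d_q, z: model outputs z_q; returns the measures of eqs. (8)-(12)."""
    d, z = np.asarray(d, float), np.asarray(z, float)
    e = d - z
    rmse = np.sqrt(np.mean(e ** 2))                      # eq. (8)
    mae = np.mean(np.abs(e))                             # eq. (9)
    mape = 100.0 * np.mean(np.abs(e / d))                # eq. (10)
    nmse = np.sum(e ** 2) / np.sum((d - d.mean()) ** 2)  # eq. (11)
    r2 = 1.0 - nmse                                      # eq. (12), equal to 1 - NMSE here
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "NMSE": nmse, "R2": r2}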
V. THE PROPOSED MODEL FOR STOCK PRICE PREDICTION

The following steps show the implementation process of the proposed model.

• Step 1: Collect data.
• Step 2: Determine the training, validation and testing input-output sets.
• Step 3: Generate the initial first order TSK fuzzy inference system through fuzzy subtractive clustering.
    Step 3-1: Estimate the clusters.
    Step 3-2: Extract the rules and the initial rule parameters.
• Step 4: Tune the antecedent and consequent parameters by the ANFIS hybrid learning algorithm. After setting the training parameters, the learning algorithm continues until the specified error goal or the maximum number of epochs is reached.
• Step 5: Calculate the performance measures.
• Step 6: Reset the cluster radius and step size and then run the model again.
• Step 7: Select the model with the minimum validation error.

By using the cluster information and the neural methodology of the ANFIS structure, this procedure establishes a first order TSK fuzzy inference system that models the data behavior in a better way. The flowchart of our proposed model is shown in Fig. 1.

[Figure 1. Flowchart of the proposed model: collect data; determine training, validation and testing input-output sets; structure identification (set cluster radius, quash factor, acceptance and rejection ratios; perform fuzzy subtractive clustering on the input-output training set; generate the initial FIS); ANFIS structure (set number of epochs, step size, step size decrease and increase rates; adjust antecedent and consequent parameters of the rules by hybrid learning); calculate performance measures; reset cluster radius and step size and run the model; choose the model with the minimum validation RMSE.]
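To make Steps 3-7 concrete, the following sketch (ours; the paper itself used the MATLAB 7.11.0 implementation of subtractive clustering and ANFIS hybrid learning) sweeps the cluster radius, builds an initial first order TSK structure from the cluster centers, estimates the consequent parameters by least squares (the LS half of the hybrid rule; the gradient tuning of the antecedents is omitted for brevity), and keeps the structure with the minimum validation RMSE. It reuses the subtractive_clustering and forecast_metrics helpers sketched above and assumes the inputs and targets have already been rescaled into [0, 1].

import numpy as np
# reuses subtractive_clustering(...) and forecast_metrics(...) from the sketches above

def build_design(X, centers, a):
    """Normalized firing strengths turned into a first order TSK design matrix (one block per rule)."""
    w = np.exp(-(((X[:, None, :] - centers[None, :, :]) / a) ** 2).sum(axis=2))
    w_bar = w / (w.sum(axis=1, keepdims=True) + 1e-12)
    ones = np.ones((X.shape[0], 1))
    return np.hstack([w_bar[:, [k]] * np.hstack([ones, X]) for k in range(centers.shape[0])])

def sweep_radius(Xtr, ytr, Xval, yval, radii=np.arange(0.10, 0.51, 0.05)):
    """Steps 3-7: build one candidate FIS per cluster radius, keep the minimum validation RMSE."""
    best = None
    for ra in radii:
        # Step 3: initial structure from subtractive clustering of the input-output training set
        centers = subtractive_clustering(np.column_stack([Xtr, ytr]), ra=ra)[:, :Xtr.shape[1]]
        a = ra / np.sqrt(8.0)        # MF width tied to the cluster radius (one common choice)
        # Step 4 (simplified): least-squares estimate of the consequent parameters only
        Phi = build_design(Xtr, centers, a)
        theta, *_ = np.linalg.lstsq(Phi, ytr, rcond=None)
        # Steps 5-7: the validation RMSE decides which radius (i.e., rule count) is kept
        rmse_val = forecast_metrics(yval, build_design(Xval, centers, a) @ theta)["RMSE"]
        if best is None or rmse_val < best[0]:
            best = (rmse_val, ra, centers, theta)
    return best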

VI. IMPLEMENTATION OF THE PROPOSED MODEL IN TSE

In this section, we describe the best structure obtained for the proposed model to forecast the Tehran Stock Exchange Price Index (TEPIX). Comparative studies are presented for a more accurate assessment.

A. Input Data

Four lags of TEPIX, that is x_{t-1}, x_{t-2}, x_{t-3} and x_{t-4}, are used as input variables, and the model predicts the Index one step ahead, x̂_t, by finding the function f in x̂_t = f(x_{t-1}, x_{t-2}, x_{t-3}, x_{t-4}).

In order to test the generalization capability of the FIS at each epoch, we separated our data into training, validation and test sets of 70%, 15% and 15%, respectively. Our training and validation sets include 1957 data points that encompass TEPIX values from 25 March 2001 to 4 May 2009. The test set contains 340 data points from 5 May 2009 to 25 September 2010.
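A small helper (ours, not from the paper) that builds the four-lag input-output patterns and the chronological 70/15/15 split described above:

import numpy as np

def make_lagged_sets(tepix, n_lags=4, train=0.70, val=0.15):
    """Build (x_{t-4}, ..., x_{t-1}) -> x_t patterns and split them chronologically."""
    tepix = np.asarray(tepix, float)
    X = np.column_stack([tepix[i:len(tepix) - n_lags + i] for i in range(n_lags)])
    y = tepix[n_lags:]
    n = len(y)
    i_tr, i_val = int(train * n), int((train + val) * n)
    return (X[:i_tr], y[:i_tr]), (X[i_tr:i_val], y[i_tr:i_val]), (X[i_val:], y[i_val:])

The split is chronological, so the most recent observations (here 5 May 2009 to 25 September 2010) form the test set.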
B. Numerical Results and Model Comparisons

The best fitting structure of our proposed model to predict TEPIX is a first order TSK fuzzy inference system with four inputs, one output and four fuzzy rules. Each input variable has four Gaussian membership functions, and each rule's output is a linear combination of the input variables plus a constant term.

The result of fuzzy subtractive clustering was to separate the input-output training set into four clusters, where a cluster radius of 0.45 had been specified for all data dimensions, since it gave the best generalization power. The quash factor, acceptance ratio and rejection ratio were set as 1.25, 0.5 and 0.15, respectively; these are the default values of MATLAB 7.11.0. ANFIS hybrid learning tuned the antecedent and consequent parameters within 2000 epochs with an error tolerance of zero, a step size of 0.1, a step size increase rate of 1.1 and a step size decrease rate of 0.9.

To set the cluster radius, and according to [16], we consider the restriction 0.1 ≤ r_a ≤ 0.5. In fuzzy subtractive clustering, the range of influence strongly affects the number of clusters (i.e., the number of rules) and the performance level of the model. To show this, we performed a comparison of the number of rules and the validation RMSE against the change of cluster radius. As shown in Fig. 2, a larger range of influence leads to fewer clusters and hence fewer rules, and vice versa.

[Figure 2. The comparison of our model results with respect to the change of cluster radius: validation RMSE versus cluster radius (0.1 to 0.5) for each data dimension, annotated with the resulting number of rules.]

Since we considered a validation set, the final FIS structure is the one associated with the minimum validation error. This structure is used for calculating the network output for the training, validation and testing sets. Fig. 3 shows the comparison of the actual output (solid line) and the proposed model's output (dashed line) for the testing data.

[Figure 3. The comparison of actual and model output for the testing set: TEPIX values and the corresponding prediction error plotted against the data number.]

For further investigation of the response of our proposed model, we employed ANN techniques to forecast TEPIX. According to the Universal Function Approximation theorem, we established an MLP; the best results were achieved by the one with four nodes in the input layer, four sigmoidal neurons with Tansig activation function in the hidden layer, and one node with linear activation function in the output layer. Faster training techniques, including heuristic techniques and numerical optimization techniques, were employed to improve the Error Back Propagation (EBP) algorithm. The minimum error was achieved with the Levenberg-Marquardt algorithm in 5000 epochs. To acquire appropriate results with the neural network model, we performed pre-processing on the inputs and outputs to normalize the mean and standard deviation. In this situation, the output of the network is trained to produce zero mean and unit standard deviation. Therefore, we used post-processing to convert these outputs back to the original scale.
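As a minimal sketch of this pre- and post-processing (assuming plain z-score scaling, which is what normalizing the mean and standard deviation amounts to; the exact MATLAB routines used are not specified in the paper):

import numpy as np

def standardize(a, mu=None, sd=None):
    """Z-score the columns of a; fit mu/sd on the training data and reuse them elsewhere."""
    a = np.asarray(a, float)
    if mu is None:
        mu, sd = a.mean(axis=0), a.std(axis=0) + 1e-12
    return (a - mu) / sd, mu, sd

def destandardize(a, mu, sd):
    """Map standardized network outputs back to the original TEPIX scale."""
    return np.asarray(a, float) * sd + mu

The statistics are estimated on the training set only, and the network's standardized predictions are mapped back with destandardize before the performance measures are computed.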

TABLE II. NUMERICAL RESULTS

Model | No. of MFs | No. of Rules | RMSE | MAE | MAPE | NMSE | R^2
Multi Layer Perceptron | - | - | 80.0096 | 54.6414 | 0.4133 | 0.0009957 | 0.9990
ANFIS with grid partitioning | 2 | 16 | 178.8469 | 91.7062 | 0.6313 | 0.0050 | 0.9950
ANFIS with FCM clustering | 4 | 4 | 80.0793 | 53.9239 | 0.4103 | 0.0009975 | 0.9990
Our proposed ANFIS model | 4 | 4 | 78.0690 | 53.3059 | 0.4050 | 0.0009480 | 0.9991

To evaluate the effectiveness of subtractive clustering on the performance of the ANFIS model, we implemented ANFIS without clustering and with FCM clustering. Table II shows the numerical results of the best observed performance for the MLP, ANFIS with grid partitioning, ANFIS with FCM clustering and our proposed model.

Our empirical results confirm the positive impact of clustering on increasing the strength of ANFIS. These techniques reduced the number of rules while providing a higher level of generalization performance. The superiority of our proposed model in comparison with the neural network model justifies the application of ANFIS for stock price prediction rather than an MLP model.

VII. CONCLUSION

By combining a fuzzy inference system into an ANFIS structure, we take advantage of the neural methodology to train a fuzzy system, which intensifies its capability for dealing with nonlinearity and imprecision in addition to its easier implementation. In this research, we synthesized an ANFIS model with fuzzy subtractive clustering, aimed at some dimensionality reduction, to construct a first order TSK fuzzy inference system with the minimum number of rules.

We implemented our proposed model on the Tehran Stock Exchange (TSE) to investigate its accuracy. More than 8 years of TEPIX values were utilized, and the results were compared with the best obtained results of ANFIS with no clustering, ANFIS with FCM clustering and a Multi Layer Perceptron with the Levenberg-Marquardt training algorithm. Based on our comparative results, clustering has a positive influence on the performance of the ANFIS model. This advantage is more observable when a large number of input data leads to an explosion of the number of rules in constructing an ANFIS structure based on grid partitioning. The utilized performance measures confirmed the higher forecasting accuracy of our model in comparison with the best results of the MLP and ANFIS with FCM clustering. Our results also show the predictability of the stock price trend using historical prices, which violates the weak form of the Efficient Market Hypothesis (EMH) in the TSE.

REFERENCES
[1] G. S. Atsalakis and K. P. Valavanis, "Surveying stock market forecasting techniques - part II: soft computing methods," Expert Systems with Applications, vol. 36, 2009, pp. 5932-5941.
[2] F. Karray and C. D. Silva, Soft Computing and Intelligent Systems Design, Addison Wesley, 2004.
[3] J. R. Jang, "ANFIS: Adaptive network based fuzzy inference system," IEEE Transactions on Systems, Man and Cybernetics, vol. 23, no. 3, 1993, pp. 665-685.
[4] H. Khalouzadeh and A. Khaki, "Evaluating stock price prediction methods and representing a nonlinear model based on neural networks," (in Persian), Economic Researches (published by Tehran University Faculty of Economics), no. 64, 2003, pp. 43-85.
[5] Kh. Nasrollahi, S. Samadi, and R. Saghafi, "An appraisal of predictability in TEPIX," (in Persian), Quarterly Journal of Tehran Securities & Exchange Organization, vol. 2, no. 6, 2009, pp. 5-30.
[6] A. Abraham, N. S. Philip, and P. Saratchandran, "Modeling chaotic behavior of stock indices using intelligent paradigms," 2004, available at http://arxiv.org/abs/cs.
[7] R. J. Li and Zh. B. Xiong, "Forecasting stock market with fuzzy neural networks," in Proc. 4th International Conference on Machine Learning and Cybernetics, 2005, pp. 3475-3479.
[8] G. S. Atsalakis and K. P. Valavanis, "Forecasting stock market short term trends using a neuro-fuzzy based methodology," Expert Systems with Applications, vol. 36, no. 7, 2009, pp. 10696-10707.
[9] Ch. H. Cheng, L. Y. Wei, and Y. Sh. Chen, "Fusion ANFIS models based on multi-stock volatility causality for TAIEX forecasting," Neurocomputing, vol. 72, no. 16-18, 2009, pp. 3462-3468.
[10] A. Esfahanipour and W. Aghamiri, "Adapted neuro-fuzzy inference system on indirect approach TSK fuzzy rule base for stock market analysis," Expert Systems with Applications, vol. 37, 2010, pp. 4742-4748.
[11] M. A. Boyacioglu and D. Avci, "An Adaptive Network-Based Fuzzy Inference System (ANFIS) for the prediction of stock market return: The case of the Istanbul Stock Exchange," Expert Systems with Applications, vol. 37, no. 12, 2010, pp. 7908-7912.
[12] F. Zhai, Q. Wen, Z. Yang, and Y. Song, "Hybrid forecasting model research on stock data mining," in Proc. 4th International Conference on New Trends in Information Science and Service Science (NISS), 2010, pp. 630-633.
[13] J. I. Collazo Cuevas, M. A. Aceves Fernandez, E. Gorrostieta Hurtado, J. C. Pedraza Ortega, A. Sotomayor Olmedo, and M. Delgado Rosas, "Comparison between fuzzy c-mean clustering and fuzzy clustering subtractive in urban air pollution," in Proc. 20th International Conference on Electronics, Communications and Computers (CONIELECOMP), 2010, pp. 174-179.
[14] C. Pereira and A. Dourado, "Neuro-fuzzy systems complexity reduction by subtractive clustering and support vector learning for nonlinear process modeling," EUNITE, 2001.
[15] A. Ghodsi, "Efficient parameter selection for system identification," in Proc. IEEE Annual Meeting of the Fuzzy Information (NAFIPS'04), 2004, pp. 750-755.
[16] M. B. Pang and X. P. Zhao, "Traffic flow prediction of chaos time series by using subtractive clustering for fuzzy neural network modeling," in Proc. Second International Symposium on Intelligent Information Technology Application, 2008, pp. 23-27.
[17] T. Sh. Mahmoud, M. H. Marhaban, and T. S. Hong, "ANFIS controller with fuzzy subtractive clustering method to reduce coupling effects in twin rotor MIMO systems (TRMS) with less memory and time usage," in Proc. International Conference on Advanced Computer Control, 2008, pp. 19-23.
[18] S. Chopra, R. Mitra, and V. Kumar, "Identification of rules using subtractive clustering with application to fuzzy controllers," in Proc. Third International Conference on Machine Learning and Cybernetics, Shanghai, 2004, pp. 4125-4130.
[19] S. Chiu, "Method and software for extracting fuzzy classification rules by subtractive clustering," IEEE, 1996, pp. 461-465.
