
European Journal of Scientific Research

ISSN 1450-216X Vol.49 No.3 (2011), pp.468-483 EuroJournals Publishing, Inc. 2011 http://www.eurojournals.com/ejsr.htm

Neural Network Based Respiratory Signal Classification Using Various Feed-Forward Back Propagation Training Algorithms
A. Bhavani Sankar
Research Scholar (Part Time), Anna University of Technology Trichirappalli
Assistant Professor, Department of ECE, Anjalai Ammal-Mahalingam Engineering College, Kovilvenni-614 403, Tamilnadu, India
E-mail: absankar72@gmail.com
Tel: 9443890164

D. Kumar
Dean/Research, Periyar Maniyammai University, Vallam, Tamilnadu, India
E-mail: kumar_durai@yahoo.com

K. Seethalakshmi
Senior Lecturer, Department of ECE, Anjalai Ammal-Mahalingam Engineering College, Kovilvenni-614 403, Tamilnadu, India
E-mail: seetha.au@gmail.com

Abstract

In this work, we examine the classification of respiratory signals using an Artificial Neural Network (ANN) system. Respiratory signals contain potentially precise information that could assist clinicians in making appropriate and timely decisions during sleep disorders and labor. The extraction and detection of sleep apnea from composite abdominal signals with powerful and advanced methodologies is becoming a very important requirement in apnea patient monitoring. The patient states were classified into normal respiration, sleep apnea and respiratory signals with artifacts. Four significant features extracted from the respiratory signal were the Energy Index, Respiration Rate, Dominant Frequency and Strength of the Dominant Frequency. We analyze the performance of five back propagation training algorithms, namely Levenberg-Marquardt, Scaled Conjugate Gradient, the Quasi-Newton BFGS algorithm, One Step Secant and Powell-Beale Restarts, for classification of the respiratory states. First, the two most significant features, Energy Index and Respiration Rate, were fed as input parameters to the ANN for classification. The result was then further improved by taking all four features as inputs to the ANN classifier. The Levenberg-Marquardt algorithm was observed to be correct in approximately 99% of the test cases.

Keywords: Sleep Apnea, Motion Artifact, Back propagation training algorithms, Levenberg-Marquardt, Scaled Conjugate Gradient, BFGS algorithm, One Step Secant, Powell-Beale Restarts


1. Introduction
Respiration monitors are of crucial importance in providing timely information about pulmonary function in adults and about the incidence of Sudden Infant Death Syndrome (SIDS) in neonates. However, to monitor respiration accurately, the noise inherent in measuring devices, as well as artifacts introduced by body movements, must be removed or discounted. One can imagine a multitude of intelligent classification algorithms that could lead to better identification mechanisms. For example, an algorithm should be capable of classifying different types of signals with different characteristic features; such an algorithm has the potential to become a major classification tool. There has been enormous growth in developing efficient algorithms for the classification of respiratory signals, with reduced computational steps, a reduced number of parameters, an increased capability to differentiate between signals, and implementations that are easy to realize in hardware to provide clinical support.

Artificial Neural Networks (ANNs) are biologically inspired networks modeled on the human brain, with its organization of neurons and decision-making processes, and are useful in application areas such as pattern recognition and classification. The decision-making process of an ANN is more holistic, based on the aggregate of entire input patterns, whereas a conventional computer has to wade through the processing of individual data elements to arrive at a conclusion. Neural networks derive their power from their massively parallel structure and their ability to learn from experience. They can be used for fairly accurate classification of input data into categories, provided they are previously trained to do so. The accuracy of the classification depends on the efficacy of training, which, in turn, depends upon the rigor and depth of the training. The knowledge gained by the learning experience is stored in the form of connection weights, which are used to make decisions on fresh input. The processing elements are organized into layers, and the layers are interconnected to form a network. The inputs to a processing unit are weighted signals derived from similar processing units of the previous layer. Usually, a processing element is linked to all the neurons of its immediate neighboring layers, which gives rise to massive parallelism in the architecture.

2. Previous Research
Many researchers have suggested various techniques, including unconventional approaches such as engineering diagnostic techniques, for determining patient conditions. A review of the literature follows. In [1], a method for the online classification of sleep/wake states based on cardio-respiratory signals produced by wearable sensors was described. The method was conceived in view of its applicability to a wearable sleepiness monitoring device; it uses a fast Fourier transform as the main feature extraction tool and a feed-forward artificial neural network as a classifier. Two approaches to classifying ECG biomedical signals are presented in [2]: one is an Artificial Neural Network (ANN) with a multilayer perceptron and the other is Fuzzy Logic with a Fuzzy Knowledge Base Controller (FKBC). The work in [3] focuses on eye blink detection using kurtosis and amplitude analysis of the EEG signal; an ANN is trained to detect the eye blink artifact. In [4], a simple scheme was proposed to identify the type as well as the distance of moving vehicles based on the noise emanating from them. Using simple feature extraction techniques, the one-third-octave band frequency spectrum of the noise was computed and used as a feature set; the feature sets were then used to train a feed-forward network with the back propagation algorithm. In [5], a classification method for time-series EEG signals with back propagation neural networks (BPNN) was proposed; to test the improvement in EEG classification performance with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis (BLDA). The work in [6] takes a step in that direction by introducing a hybrid evolutionary neural network classifier (HENC), combining an evolutionary algorithm, which has a powerful global exploration capability, with a gradient-based local search method, which can exploit the optimum offspring, to develop a diagnostic aid that accurately differentiates malignant from benign patterns.


A spike sorting method using a simplified feature set with a nonparametric clustering algorithm was presented in [7]. The performances of different off-line methods for two different electroencephalograph (EEG) signal classification tasks, motor imagery and finger movement, are investigated in [8]. The main purpose of that paper is to provide a fair and extensive comparison of some commonly employed classification methods under the same conditions, so that the assessment of different classifiers is more convincing; as a result, a guideline for choosing appropriate algorithms for EEG classification tasks is provided. An alternative evaluation of Obstructive Sleep Apnea (OSA) based on the ECG signal during sleep was proposed in [9]; a K-Nearest Neighbor (KNN) supervised learning classifier was employed to categorize apnea events from normal ones, on a minute-by-minute basis for each recording. The authors of [10] focused on various schemes for extracting useful features of ECG signals for use with artificial neural networks; once feature extraction is done, ANNs can be trained to classify the patterns reasonably accurately. The paper [11] deals with a novel method of analysis of EEG signals using the wavelet transform and classification using an artificial neural network (ANN) and logistic regression (LR). A modification of the Levenberg-Marquardt algorithm for MLP neural network learning with good convergence was proposed in [12]. The paper [13] reports the results of a clinical evaluation for the detection and classification of sleep apnea syndromes. An automatic classification of respiratory signals using a Field Programmable Gate Array (FPGA) was implemented in [14]. The design of an automatic infant cry recognition system that classifies three different kinds of cries, coming from normal, deaf and asphyxiating infants of ages from one day up to nine months, was presented in [15]. The classification of the states of patients with certain diseases in the intensive care unit, using their ECG and an Artificial Neural Network (ANN) classification system, was examined in [16]. The paper [17] proposed a novel and simple local neural classifier for the recognition of mental tasks from on-line spontaneous EEG signals; the proposed classifier recognizes three mental tasks, with correct recognition around 70%. The work in [18] introduces an innovative signal classification method that is capable of on-line detection of the presence or absence of normal breathing. A classification method for respiratory sounds (RSs) in patients with asthma and in healthy subjects is presented in [19], where a Grow and Learn (GAL) neural network is used for the classification. The trade-off between the time-consuming training of ANNs and their performance in ECG analysis is explored in [20], [21]. Multilayer perceptrons trained with the back propagation algorithm are tested in detection and classification tasks and compared to optimal algorithms resulting from likelihood ratio tests in [22]. The extraction of features derived from autoregressive modeling and threshold crossing schemes, used to classify respiratory signals, is described in [23]. The Matlab coding and the different back propagation training algorithms for the classification of respiratory signals using a neural network were developed with reference to [24], [25].

3. Proposed Work
Our work focuses on the classification of the respiratory signal using a neural network. The capability of classifying respiratory signals and detecting apnea episodes is of crucial importance for clinical purposes. The features are derived from autoregressive modeling and modified threshold crossing schemes and are fed as inputs to the neural network. The network classifies the respiratory signals into the following categories: (1) normal respiration, (2) respiration with artifacts and (3) sleep apnea. This classification is capable of detecting fatigue by identifying sleep apnea, allows early detection of sleep troubles and disorders in groups at risk, and reduces the risk of serious heart disease in the future. The main contribution of this paper is the analysis of the different back propagation training algorithms needed for classification of the respiratory signals, which yields not only the classification but also supports the analysis of various ailments.


4. Respiratory Data Analysis


The traditional methods for the assessment of sleep-related breathing disorders are sleep studies with recordings of ECG, EEG, EMG and respiratory effort. Sleep apnea detection with ECG recordings requires a larger number of electrodes on the skin, which people may have to wear continuously for effective monitoring. EEG measurement can also be used for the detection of sleep apnea, but the brain signals are random in nature, so a complete detection requires a larger number of samples for analysis. The mathematical modeling of EMG signals is also very complex for sleep apnea detection. From the results in [1], the respiratory signals alone are sufficient and perform even better than ECG, EEG and EMG. In this paper, we consider only the respiratory signal for the detection of sleep apnea, since it is more convenient and does not require many electrodes on the skin. Our previous work deals with feature extraction using a modified threshold-based algorithm, which plays a vital role since the classification is completely based on the values of the extracted features. The fundamental features of the respiratory signal provide numerical values which are compared with threshold values to produce the classification results. A synthetic respiratory signal of 0.5 Hz is used for simulation. The signal was sampled at 16 Hz and the four main features of the respiratory signal were extracted using the modified threshold-based algorithm. A total of 300 samples with a four-feature set was used for simulation. The fundamental features of the respiratory signal that are extracted and used for simulation are listed below (an illustrative extraction sketch follows the list):
1. Energy Index (EI)
2. Respiration frequency estimated by a modified zero crossing scheme (FZX)
3. Dominant frequency estimated by AR modeling (FAR)
4. Strength of the dominant frequency estimated by AR modeling (STR)
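The following MATLAB sketch illustrates how four such features could be computed for one segment of the synthetic 0.5 Hz signal sampled at 16 Hz. It is a minimal illustration under assumed parameters (segment length, AR order, use of arburg from the Signal Processing Toolbox), not the authors' exact modified threshold-based algorithm.

    fs = 16;                                      % sampling frequency (Hz), as stated above
    t  = (0:10*fs-1)/fs;                          % one 10 s segment (assumed length)
    x  = sin(2*pi*0.5*t) + 0.05*randn(size(t));   % synthetic 0.5 Hz respiration with noise

    % 1. Energy Index (EI): mean squared amplitude of the segment
    EI = mean(x.^2);

    % 2. Respiration frequency via a simplified zero-crossing count (FZX)
    xc  = x - mean(x);
    zc  = sum(abs(diff(sign(xc))) > 0);           % number of zero crossings
    FZX = zc * fs / (2*length(x));                % crossings/2 per second = breaths per second

    % 3-4. Dominant frequency and its strength from an AR model (FAR, STR)
    p        = 8;                                 % AR model order (assumed)
    a        = arburg(xc, p);                     % AR coefficients (Signal Processing Toolbox)
    [H, f]   = freqz(1, a, 512, fs);              % AR spectrum of 1/A(z)
    [STR, k] = max(abs(H).^2);                    % strength of the dominant spectral peak
    FAR      = f(k);                              % dominant frequency (Hz)

    features = [EI; FZX; FAR; STR];               % one four-feature sample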

5. Neural Classifier
The use of neural network systems in respiratory signal analysis offers several advantages over conventional techniques. The neural network can perform the necessary transformation and clustering operations automatically and simultaneously, and it is able to recognize complex and nonlinear groupings in the hyperspace; the latter ability is a distinct advantage over many conventional techniques. Neural networks have been defined as systems composed of many simple processing elements that operate in parallel and whose function is determined by the network's structure, the strength of its connections, and the processing carried out by the processing elements or nodes. Generally, neural networks are adjusted, or trained, so that a particular input leads to a specified or desired output. The training of a network is done through changes to the weights based on a set of input vectors: after the whole set of input vectors has been presented and an output has been obtained from the network, the output is compared with the desired output and the connection weights of the nodes are adjusted. Neural networks have been trained to perform complex functions in many application areas, including pattern recognition, identification, classification, speech, vision, and control systems.



Figure 1: Overview of proposed approach

In general, training can be supervised or unsupervised. Supervised training methods are the most commonly used when labeled samples are available. Among the most popular models are feed-forward neural networks trained under supervision with the back-propagation algorithm. The proposed approach for the classification of the respiratory signal involves preprocessing of the respiratory signal, extraction of characteristic features and classification using ANN techniques. An overview of the proposed approach is shown in Figure 1. Using neural network techniques, the patient states were classified into three classes: normal, sleep apnea and respiration with artifacts.

6. Multilayer Feed-Forward Network


A multilayer feed-forward network architecture is made up of multiple layers: an input layer, a number of hidden layers and an output layer, as shown in Figure 2. Neurons are the computing elements in each layer. The strengthening or weakening of the input signals is modeled by the weights. The weighted sum of the inputs to each neuron is passed through an activation function to obtain the output of the neuron. In addition to the inputs, each neuron also has a bias.
Figure 2: A Multilayer feed-forward network
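In equation form, the computation performed by each neuron $j$ described above can be written as

$y_j = f\left( \sum_{i} w_{ji}\, x_i + b_j \right)$

where $x_i$ are the inputs from the previous layer, $w_{ji}$ the corresponding weights, $b_j$ the bias of the neuron and $f(\cdot)$ its activation function.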


7. Back Propagation Algorithms


The decision-making process of artificial neural networks (ANNs) is holistic, based on the features of the input patterns, and is suitable for the classification of biomedical data. Typically, multilayer feed-forward neural networks can be trained as non-linear classifiers using the generalized back propagation algorithm (BPA). The BPA is a supervised learning algorithm in which a sum-squared error function is defined, and the learning process aims to reduce the overall system error to a minimum. Training a neural network can therefore be viewed as the minimization of an error function, and the performance can be improved if suitable error functions and minimization algorithms are chosen. Regarding the minimization algorithm, we selected five different back propagation training algorithms, namely Levenberg-Marquardt, Scaled Conjugate Gradient, the Quasi-Newton BFGS algorithm, One Step Secant and Powell-Beale Restarts, and compared their performance in the classification of the respiratory signal. A short sketch showing how each of these algorithms is selected in software is given at the end of this section.

7.1. Levenberg-Marquardt
The Levenberg-Marquardt algorithm was designed to approach second-order training speed without having to compute the Hessian matrix H. When the performance function has the form of a sum of squares (as is typical in training feed-forward networks), the Hessian matrix can be approximated as

$H = J^T J$    (1)

and the gradient can be computed as

$g = J^T e$    (2)

where $J$ is the Jacobian matrix that contains the first derivatives of the network errors with respect to the weights and biases, and $e$ is a vector of network errors. The Jacobian matrix can be computed through a standard back propagation technique that is much less complex than computing the Hessian matrix. The Levenberg-Marquardt algorithm uses this approximation to the Hessian matrix in the following update:

$x_{k+1} = x_k - [J^T J + \mu I]^{-1} J^T e$    (3)

When the scalar $\mu$ is zero, this is just Newton's method using the approximate Hessian matrix. When $\mu$ is large, this becomes gradient descent with a small step size. Thus, $\mu$ is decreased after each successful step (reduction in the performance function) and is increased only when a tentative step would increase the performance function. In this way, the performance function is always reduced at each iteration of the algorithm.

7.2. Scaled Conjugate Gradient
Each of the conjugate gradient algorithms requires a line search at each iteration. This line search is computationally expensive, since it requires that the network response to all training inputs be computed several times for each search. The scaled conjugate gradient algorithm was designed to avoid this time-consuming line search. The algorithm is too complex to explain in a few lines, but the basic idea is to combine the model-trust-region approach used in the Levenberg-Marquardt algorithm with the conjugate gradient approach.

7.3. BFGS Algorithm
Newton's method is an alternative to the conjugate gradient methods for fast optimization. The basic step of Newton's method is

$x_{k+1} = x_k - A_k^{-1} g_k$    (4)

where $A_k$ is the Hessian matrix of the performance index at the current values of the weights and biases. Newton's method often converges faster than conjugate gradient methods. Unfortunately, it is complex and expensive to compute the Hessian matrix for feed-forward neural networks.


There is a class of algorithms that is based on Newton's method but does not require the calculation of second derivatives. These are called quasi-Newton (or secant) methods. They update an approximate Hessian matrix at each iteration of the algorithm, and the update is computed as a function of the gradient. The quasi-Newton method that has been most successful in published studies is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update.

7.4. One Step Secant Algorithm
Since the BFGS algorithm requires more storage and computation at each iteration than the conjugate gradient algorithms, there is a need for a secant approximation with smaller storage and computation requirements. The one step secant (OSS) method is an attempt to bridge the gap between the conjugate gradient algorithms and the quasi-Newton (secant) algorithms. This algorithm does not store the complete Hessian matrix; it assumes that at each iteration the previous Hessian was the identity matrix. This has the additional advantage that the new search direction can be calculated without computing a matrix inverse.

7.5. Powell-Beale Restarts
For all conjugate gradient algorithms, the search direction is periodically reset to the negative of the gradient. The standard reset point occurs when the number of iterations equals the number of network parameters (weights and biases), but there are other reset methods that can improve the efficiency of training. One such reset method was proposed by Powell, based on an earlier version proposed by Beale. In this technique, training restarts if there is very little orthogonality left between the current gradient and the previous gradient. This is tested with the following inequality:

$|g_{k-1}^T g_k| \ge 0.2\,\|g_k\|^2$    (5)

If this condition is satisfied, the search direction is reset to the negative of the gradient.
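As referenced at the start of this section, the sketch below shows how the five training algorithms are typically selected in MATLAB's Neural Network Toolbox, which is the environment used in this work (Section 9). The use of feedforwardnet and the variable names are illustrative assumptions; only the training-function string differs between the five variants.

    algs = {'trainlm', ...    % Levenberg-Marquardt
            'trainscg', ...   % Scaled Conjugate Gradient
            'trainbfg', ...   % BFGS quasi-Newton
            'trainoss', ...   % One Step Secant
            'traincgb'};      % Conjugate gradient with Powell-Beale restarts
    for k = 1:numel(algs)
        net = feedforwardnet(20);     % same architecture for every run (Section 9)
        net.trainFcn = algs{k};       % select the back propagation training variant
        % [net, tr] = train(net, inputs, targets);  % inputs/targets as prepared in Section 9
    end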

8. Performance Measures
The neural network's performance in our classification process is evaluated by means of the following four performance indices: Mean Squared Error, Confusion Matrix, Receiver Operating Characteristics curve and Linear Regression curve.

8.1. Mean Squared Error
The MSE is computed by taking the differences between the target and the actual neural network output, squaring them and averaging over all classes and internal validation samples. Because the neural network outputs are real numbers between 0 and 1, this results in a mean squared error between 0 and 1. As the neural network is iteratively trained, the MSE should drop to some small, stable value. Each neural network has its MSE plotted independently; some components may stop if they reach stability earlier than others, and hence have MSE plots which do not extend over all iterations.

8.2. Confusion Matrix
Given a classifier and an instance, there are four possible outcomes. If the instance is positive and it is classified as positive, it is counted as a true positive; if it is classified as negative, it is counted as a false negative. If the instance is negative and it is classified as negative, it is counted as a true negative; if it is classified as positive, it is counted as a false positive. Given a classifier and a set of instances (the test set), a two-by-two confusion matrix, as shown in Table 1, can be constructed representing the dispositions of the set of instances. The matrix can also be extended to nine or more possible outcomes (e.g., the 3 x 3 case for our three classes). A small sketch of how the MSE and a multi-class confusion matrix are computed from the network outputs is given after Table 1.

Table 1: Confusion Matrix

                                    True Class: p       True Class: n
Hypothesized Class: positive        True Positives      False Positives
Hypothesized Class: negative        False Negatives     True Negatives
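The following sketch shows, assuming that targets and outputs are 3-by-N one-hot/score matrices (variable names are illustrative, not the authors' code), how the MSE and a three-class confusion matrix of the kind summarized in Table 1 can be computed in MATLAB.

    err      = targets - outputs;               % element-wise error
    mseValue = mean(err(:).^2);                 % mean squared error over all classes and samples

    [~, trueClass] = max(targets);              % index of the target class for each sample
    [~, predClass] = max(outputs);              % index of the winning output neuron
    C = zeros(3);                               % rows: true class, columns: hypothesized class
    for n = 1:numel(trueClass)
        C(trueClass(n), predClass(n)) = C(trueClass(n), predClass(n)) + 1;
    end
    accuracy = sum(diag(C)) / sum(C(:));        % correctly classified samples / total samples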

8.3. Receiver Operating Characteristics Curve
Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years they have been used increasingly in machine learning and data mining research. A ROC graph is a plot with the false positive rate on the X axis and the true positive rate on the Y axis. The point (0,1) is the perfect classifier: it classifies all positive and negative cases correctly. It is (0,1) because the false positive rate is 0 (none) and the true positive rate is 1 (all). The point (0,0) represents a classifier that predicts all cases to be negative, while the point (1,1) corresponds to a classifier that predicts every case to be positive. The point (1,0) is the classifier that is incorrect for all classifications. In many cases, a classifier has a parameter that can be adjusted to increase TP at the cost of an increased FP, or to decrease FP at the cost of a decrease in TP. Each parameter setting provides an (FP, TP) pair, and a series of such pairs can be used to plot an ROC curve. A non-parametric classifier is represented by a single ROC point, corresponding to its (FP, TP) pair.

8.3.1. Features of ROC Graphs
An ROC curve or point is independent of class distribution or error costs. An ROC graph encapsulates all the information contained in the confusion matrix, since FN is the complement of TP and TN is the complement of FP. ROC curves provide a visual tool for examining the trade-off between the ability of a classifier to correctly identify positive cases and the number of negative cases that are incorrectly classified. The more each curve hugs the left and top edges of the plot, the better the classification.

8.4. Linear Regression
A data model explicitly describes a relationship between predictor and response variables. Linear regression fits a data model that is linear in the model coefficients. The most common type of linear regression is a least-squares fit, which can fit both lines and polynomials, among other linear models. Regression is the study of the behavior of one variable in relation to changes in another variable. By using the regression line or equation, as shown in Figure 3, we can predict scores on the dependent variable from those of the independent variable. There are different nomenclatures for independent and dependent variables. The R value is an indication of the relationship between the outputs and targets. If R = 1, there is an exact linear relationship between outputs and targets; if R is close to zero, there is no linear relationship between outputs and targets.
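For reference, the rates plotted in the ROC graph, and the regression R value discussed above, can be written as follows. The R expression is the standard Pearson correlation between outputs $y$ and targets $t$, which is the usual definition of the regression R; this explicit formulation is our addition rather than a quotation from the source text.

$\mathrm{TPR} = \dfrac{TP}{TP + FN}, \qquad \mathrm{FPR} = \dfrac{FP}{FP + TN}, \qquad R = \dfrac{\sum_i (y_i - \bar{y})(t_i - \bar{t})}{\sqrt{\sum_i (y_i - \bar{y})^2 \; \sum_i (t_i - \bar{t})^2}}$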



Figure 3: Regression Line

9. Training, Validation and Testing of Neural Network


A synthetic respiratory signal of 0.5 Hz is used for simulation. The signal was sampled at 16 Hz and the four main features of the respiratory signal were extracted using the modified threshold-based algorithm that is part of our research work. A total of 300 samples, with two-feature and four-feature sets, was used for simulation. The neural network and the training algorithms are implemented with Matlab's Neural Network Toolbox. To train the networks, the data were divided into three sets: 1) training, 2) validation, and 3) test. The training set (TR) contained the data used to update the synaptic weights. The performance of the network was evaluated on the validation set (VA) after each iteration, and training was stopped if the performance on VA did not improve for more than 15 training iterations or the minimal gradient was reached. The test set (TE) was used to measure the performance of the network after training. The procedure to train the neural network for classification is as follows (an illustrative sketch is given after the list):
1. Load the data, consisting of input vectors and target vectors.
2. Create a network. We used a feed-forward network with the tan-sigmoid transfer function in the hidden layer and a linear transfer function in the output layer. This structure is useful for data classification problems. We use 20 neurons (chosen arbitrarily) in one hidden layer. The network has three output neurons, because there are three classes.
3. Train the network. The network uses five different back propagation algorithms for training. The application divides the input vectors and target vectors into three sets: 60% are used for training, 20% are used to validate that the network is generalizing and to stop training before overfitting, and the last 20% are used as a completely independent test of network generalization.
4. The classification accuracy was calculated by taking the number of samples correctly classified, divided by the total number of samples.
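A minimal sketch of this procedure in the Neural Network Toolbox, assuming a 4-by-300 feature matrix inputs and a 3-by-300 one-hot target matrix targets (variable names are assumptions), could look as follows.

    net = feedforwardnet(20);                 % one hidden layer with 20 neurons
    net.layers{1}.transferFcn = 'tansig';     % tan-sigmoid hidden layer (Table 2)
    net.layers{2}.transferFcn = 'purelin';    % linear output layer (Table 2)
    net.trainFcn = 'trainlm';                 % Levenberg-Marquardt; swap as in Section 7
    net.divideParam.trainRatio = 0.60;        % 60% training
    net.divideParam.valRatio   = 0.20;        % 20% validation (early stopping)
    net.divideParam.testRatio  = 0.20;        % 20% independent test

    [net, tr] = train(net, inputs, targets);  % train with early stopping on the validation set

    outputs    = net(inputs);                 % network outputs for all samples
    [~, pred]  = max(outputs);                % winning output neuron = predicted class
    [~, truth] = max(targets);
    testAcc = mean(pred(tr.testInd) == truth(tr.testInd));  % accuracy on the independent test set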

10. Simulation Results and Discussion


The specifications of the neural network employed for the classification of the respiratory signal are given in Table 2.

Table 2: Specifications for Neural Network

S.No   Parameter                                    Value
1.     Type of network                              Feed forward
2.     No. of neurons in the input layer            2
3.     No. of neurons in the hidden layer           20
4.     No. of neurons in the output layer           3
5.     Performance function                         MSE
6.     Activation function in the hidden layer      Tan-sigmoid
7.     Activation function in the output layer      Linear
8.     Learning rate                                0.05
9.     Maximum no. of epochs                        1000
10.    Minimum MSE value                            0

The class distributions of the samples in the training, validation and testing data sets are given in Table 3. Each sample includes the four main features of the respiratory signal.
Table 3: Class Distributions of the Samples

Class          Training   Validation   Testing
Normal         62         18           20
Sleep apnea    60         20           20
Artifacts      58         20           22
Total          180        58           62

First, the two most significant features extracted from the respiratory signal were fed as input parameters to the ANN for classification. The result was then further improved by taking four features as inputs to the ANN classifier. The results with four features are demonstrated in the following figures, which plot the mean square error versus the number of epochs for the different algorithms. First, we train the network to classify the respiratory signal using the Levenberg-Marquardt algorithm. The mean square error plot and the ROC plot for this simulation are shown in Figure 4.
Figure 4: (a) Plot of MSE (b) Plot of ROC using Levenberg-Marquardt algorithm

(a)

(b)

The results are reasonable because of the following considerations: the final mean-square error is small; the test set error and the validation set error have similar characteristics; no significant overfitting has occurred by iteration 7, where the best validation performance occurs; and each curve in the ROC plot hugs the left and top edges of the plot, indicating good classification.
Figure 5: Confusion Matrix

The confusion matrix, showing 98.7% correct classification after 13 training epochs and a mean square error of 0.0453, is shown in Figure 5. The classification accuracy was calculated by taking the number of samples correctly classified, divided by the total number of samples. The regression plot in Figure 6 shows the relationship between the outputs of the network and the targets. If the training were perfect, the network outputs and the targets would be exactly equal, but the relationship is rarely perfect in practice.
Figure 6: Regression plot for training, testing and validation


The three axes represent the training, validation and testing data. The dashed line in each axis represents the perfect result, outputs = targets. The solid line represents the best-fit linear regression line between outputs and targets. The R value is an indication of the relationship between the outputs and targets: if R = 1, there is an exact linear relationship between outputs and targets; if R is close to zero, there is no linear relationship. The training data indicate a good fit, and the validation and test results also show R values greater than 0.9. We then repeated the training using the Scaled Conjugate Gradient, Quasi-Newton BFGS, Powell-Beale Restarts and One Step Secant algorithms; the MSE and ROC plots are shown in Figures 7 and 8.
Figure 7: MSE plot for a) Scaled Conjugate Gradient b) BFGS c) Powell-Beale Restarts d) One Step Secant

(a)

(b)

(c)

(d)


Figure 8: ROC plot for a) Scaled Conjugate Gradient b) BFGS c) Powell-Beale Restarts d) One Step Secant

(a)

(b)

(c)

(d)

11. Comparative Evaluation


It is very difficult to know in advance which training algorithm will be the fastest for a given problem. It depends on many factors, including the complexity of the problem, the number of data points in the training set, the number of weights and biases in the network, the error goal, and whether the network is being used for pattern recognition or function approximation. The comparative evaluation was carried out for all the algorithms with two and four features as input parameters, and the results are tabulated in Tables 4 and 5. In both cases, it is observed that training the neural network with the LM algorithm is faster than with the other algorithms. With the LM algorithm, the number of iterations taken to achieve the performance goal is smaller with four features (13 iterations) than with two features (17 iterations). The other algorithms, however, reach comparable accuracy in fewer iterations with two features than with four features. With the other algorithms, the neural network takes more epochs to approach the defined MSE goal of zero, whereas with the LM algorithm the network converges after training has reached only around 7 epochs. Scaled Conjugate Gradient and BFGS achieve 97.3% and 96.7% accuracy but take 35 and 39 iterations, respectively, to reach the performance goal. The Powell-Beale Restarts and One Step Secant algorithms provide 97.7% classification accuracy and a very low MSE value of 0.033, but they take 57 and 88 iterations, respectively, to achieve the performance goal. With the reported experiments, we confirm that it is easier to classify the respiratory signals using the LM algorithm.
Table 4: Performance Evaluation with 2 Features

S.No   Training Algorithm          Lowest MSE Obtained   Epoch at which Performance Goal is Met   No. of Iterations   Accuracy (%)
1      Levenberg-Marquardt         0.0222                11                                       17                  98.7
2      Scaled Conjugate Gradient   0.0415                16                                       22                  96.7
3      BFGS                        0.0411                19                                       25                  98
4      Powell-Beale Restarts       0.0437                13                                       19                  97
5      One Step Secant             0.0416                38                                       44                  96.3

Table 5: Performance Evaluation with 4 Features

S.No   Training Algorithm          Lowest MSE Obtained   Epoch at which Performance Goal is Met   No. of Iterations   Accuracy (%)
1      Levenberg-Marquardt         0.0453                7                                        13                  98.7
2      Scaled Conjugate Gradient   0.0512                29                                       35                  97.3
3      BFGS                        0.0471                33                                       39                  96.7
4      Powell-Beale Restarts       0.0333                51                                       57                  97.7
5      One Step Secant             0.0333                82                                       88                  97.7

Since the LM algorithm is designed for least-squares problems that are approximately linear, its performance is better relative to the other algorithms. From the literature, it is observed that LM is suitable for small and medium-sized networks when enough memory is available. If memory is a problem, the scaled conjugate gradient or BFGS algorithm should be preferred. SCG and the other algorithms are more suitable for general pattern recognition problems than for this classification problem.

12. Conclusion and Future Work


In the present study, an automatic system for the classification of respiratory states, employing ANN techniques for decision making, was developed and implemented. The decision making was performed using features extracted from respiratory signals. Emphasis was placed on the selection of the characteristic features and on the accurate extraction of these features. From the results, it can be seen that the Levenberg-Marquardt back propagation algorithm provided excellent performance for the studied application. The performance of, and the trade-offs between, the five training methods were also studied. The choice of algorithm is a trade-off between performance, convergence power and training input. The proposed Levenberg-Marquardt approach exhibited superior performance in terms of classification accuracy and was also easier and simpler to implement and use. The performance of the system can be further enhanced by training with a larger number of training inputs, which increases the network's ability to classify unknown signals.

References
[1] Walter Karlen, Claudio Mattiussi and Dario Floreano, 2009. "Sleep and Wake Classification with ECG and Respiratory Effort Signals", IEEE Transactions on Biomedical Circuits and Systems, Vol. 3, No. 2.
[2] Saad Alshaban and Rawaa Ali, 2010. "Using Neural and Fuzzy Software for the Classification of ECG Signals", Research Journal of Applied Sciences, Engineering and Technology.
[3] Brijil Chambayil, Rajesh Singla and R. Jha, 2010. "EEG Eye Blink Classification Using Neural Network", Proceedings of the World Congress on Engineering 2010, London, Vol. I.
[4] Norasmadi Abdul Rahim, Paulraj M. P., Abdul Hamid Adom and Sathishkumar Sundararaj, 2010. "Moving Vehicle Noise Classification using Backpropagation Algorithm", 6th International Colloquium on Signal Processing & Its Applications (CSPA).
[5] Arjon Turnip, Keum-Shik Hong and Shuzhi Sam Ge, 2010. "Backpropagation Neural Networks Training for Single Trial EEG Classification", Proceedings of the 29th Chinese Control Conference, China.
[6] R. El hamdi, M. Njah and M. Chtourou, 2010. "Breast Cancer Diagnosis Using a Hybrid Evolutionary Neural Network Classifier", 18th Mediterranean Conference on Control & Automation, Morocco.
[7] Zhi Yang, Qi Zhao and Wentai Liu, 2009. "Neural Signal Classification Using a Simplified Feature Set with Nonparametric Clustering", Elsevier.
[8] Boyu Wang, Chi Man Wong, Feng Wan, Peng Un Mak, Pui In Mak and Mang I Vai, 2009. "Comparison of Different Classification Methods for EEG-Based Brain Computer Interfaces: A Case Study", Proceedings of the International Conference on Information and Automation, China, IEEE.
[9] Martin O. Mendez, Davide D. Ruini, Omar P. Villantieri and Matteo Matteucci, 2007. "Detection of Sleep Apnea from Surface ECG Based on Features Extracted by an Autoregressive Model", Proceedings of the 29th Annual International Conference of the IEEE EMBS, France.
[10] Rajesh Ghongade and A. A. Ghatol, 2007. "A Brief Performance Evaluation of ECG Feature Extraction Techniques for Artificial Neural Network Based Classification", IEEE.
[11] Abdul Hamit Subasi and Ergun Ercelebi, 2005. "Classification of EEG Signals Using Neural Network and Logistic Regression", Computer Methods and Programs in Biomedicine, Elsevier.
[12] Amir Abolfazl Suratgar, Mohammad Bagher Tavakoli and Abbas Hoseinabadi, 2005. "Modified Levenberg-Marquardt Method for Neural Networks Training", World Academy of Science, Engineering and Technology.
[13] Khaled M. Al-Ashmouny, Ahmed A. Morsy and Shahira F. Loza, 2005. "Sleep Apnea Detection and Classification Using Fuzzy Logic: Clinical Evaluation", Proceedings of the IEEE Engineering in Medicine and Biology 27th Annual Conference, China.
[14] Bozidar Marinkovic, Matthew Gillette and Taikang Ning, 2005. "FPGA Implementation of Respiration Signal Classification Using a Soft-Core Processor", IEEE.
[15] Mohamed Soltane, Mahamood Ismail and Zainol Abinin Abdul Rashid, 2004. "Artificial Neural Networks (ANN) Approach to PPG Signal Classification", International Journal of Computing and Information Sciences, Vol. 2, No. 1.
[16] Orion F. Reyes-Galaviz and Carlos Alberto Reyes-Garcia, 2004. "A System for the Processing of Infant Cry to Recognize Pathologies in Recently Born Babies with Neural Networks", 9th Conference Speech and Computer, St. Petersburg, Russia.
[17] N. Kannathal, U. Rajendra Acharya, Choo Min Lim, P. K. Sadasivan and S. M. Krishnan, 2003. "Classification of Cardiac Patient States Using Artificial Neural Networks", Journal in Clinical Cardiology, Vol. 8, No. 4.
[18] Jose del R. Millan, Josep Mourino, Marco Franze and Febo Cincotti, 2002. "A Local Neural Classifier for the Recognition of EEG Patterns Associated to Mental Tasks", IEEE Transactions on Neural Networks, Vol. 13, No. 3.
[19] Peter Varady, Tamas Micsik, Sandor Benedek and Zoltan Benyo, 2002. "A Novel Method for the Detection of Apnea and Hypopnea Events in Respiration Signals", IEEE Transactions on Biomedical Engineering, Vol. 49, No. 9.
[20] Li Gang, Ye Wenyu, Tin Ling, Yu Qilian and Yu Xuemin, 2000. "An Artificial-Intelligence Approach to ECG Analysis", IEEE Engineering in Medicine and Biology.
[21] Rosaria Silipo and Carlo Marchesi, 1998. "Artificial Neural Networks for Automatic ECG Analysis", IEEE Transactions on Signal Processing, Vol. 46, No. 5.
[22] Z. H. Michalopoulou, L. W. Nolte and D. Alexandrou, 1995. "Performance Evaluation of Multilayer Perceptrons in Signal Detection and Classification", IEEE Transactions on Neural Networks, Vol. 6, No. 2.
[23] Taikang Ning and Joseph D. Bronzino, 1989. "Automatic Classification of Respiratory Signals", IEEE Engineering in Medicine & Biology Society 11th Annual International Conference.
[24] S. N. Sivanandam, S. Sumathi and S. N. Deepa, 2006. "Introduction to Neural Networks using Matlab 6.0", Tata McGraw-Hill.
[25] Mark Hudson Beale, Martin T. Hagan and Howard B. Demuth, 2010. "Neural Network Toolbox 7 User's Guide", The MathWorks Inc.
