
A New Approach to Fault Detection and Diagnosis in Cellular Systems Using Competitive Learning

Guilherme A. Barreto, João C. M. Mota, Luís G. M. Souza, Rewbenio A. Frota and Leonardo Aguayo
Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza-CE, Brazil
Email: {guilherme,mota}@deti.ufc.br

José S. Yamamoto and Pedro E. O. Macedo
CPqD Telecom & IT Solutions, Campinas-SP, Brazil
Email: {sindi,macedo}@cpqd.com.br

Abstract
We propose a new approach to fault detection and diagnosis in third-generation (3G) cellular networks using competitive neural algorithms. For density estimation purposes, a given neural model is trained with data vectors representing the normal functioning of a CDMA2000 cellular system. After training is completed, a normality profile is constructed by means of the sample distribution of the quantization errors of the training vectors. Then, we find empirical confidence intervals for testing hypotheses of normal/abnormal functioning of the cellular network. The trained network is also used to generate inference rules that identify the causes of the faults. We compared the performance of four neural algorithms, and the results suggest that the proposed approaches outperform current methods.

1. Introduction
The third generation (3G) of wireless systems promises to provide mobile users with ubiquitous access to multimedia information services, offering higher data rates by means of new radio access technologies such as UMTS, WCDMA and CDMA2000 [1]. This multi-service aspect brings entirely new requirements into the network optimization process and the radio resource management algorithms, which differ significantly from the traditional speech-dominated second-generation (2G) approach. One challenge relates to quality of service (QoS) requirements and control: for each provided service and service profile, the QoS targets have to be set and met. Because of these requirements, operation and maintenance of 3G cellular networks will be challenging. The mobile cells interact and interfere more, they have hundreds of adjustable parameters, and they monitor and record several hundred different variables in each cell, thus producing a huge amount of spatiotemporal data consisting of base station parameters and call quality information. Thus, performance evaluation of 3G cellular systems will be carried out by means of network output data analysis, which in turn requires efficient multivariate data analysis techniques, such as neural networks. The goal of this paper is to propose straightforward methods to deal with fault detection and diagnosis (FDD) of 3G cellular systems using competitive learning algorithms [2]. Competitive neural models are particularly suitable for these tasks because they are able to extract statistical regularities

from the input data vectors, organizing them in clusters, which can then be further analyzed in order to reveal hidden data structures. We formalize our approach within the context of statistical hypothesis testing, comparing the performance of four neural algorithms (WTA, FSCL, SOM and Neural-Gas). We show through simulations that the proposed methods outperform current standard approaches for FDD tasks. We also evaluate the sensitivity of the proposed approaches to changes in the training parameters of the neural models, such as number of neurons, training epochs and the size of the training set. The remainder of the paper is organized as follows. In Section 2, we describe the competitive neural models to be used in the simulations. In Section 3, we describe the data vectors for the training of the neural algorithms. In Section 4, we introduce a general approach for the fault detection task based on the sample distribution of the quantization errors associated with a given competitive neural model. In Section 5, we present an approach to generating inference rules from a trained neural model. Computer simulations evaluating the neural algorithms in FDD tasks for several scenarios of the cellular system are presented in Section 6. The paper is concluded in Section 7.

2. Competitive Neural Models


Competitive learning models are based on the concept of the winning neuron, defined as the one whose weight vector is closest to the current input vector. During the learning phase, the weight vectors of the winning neurons are modified incrementally in order to extract average features from the input patterns. Using the Euclidean distance, the simplest strategy to find the winning neuron, i*(t), is given by:

    i*(t) = arg min_i ||x(t) − w_i(t)||    (1)

where x(t) ∈ R^n denotes the current input vector, w_i(t) ∈ R^n is the weight vector of neuron i, and t indexes the iterations of the algorithm. Then, the weight vector of the winning neuron is modified as follows:

    w_{i*}(t+1) = w_{i*}(t) + α(t)[x(t) − w_{i*}(t)]    (2)

where 0 < α(t) < 1 is the learning rate, which should decay with time to guarantee convergence of the weight vectors to

stable states. We always adopt an exponential decay, given by α(t) = α0 (αT / α0)^(t/T), where α0 and αT are the initial and final values of α(t), respectively. The competitive learning strategy in (1) and (2) is referred to as Winner-Take-All (WTA), since only the winning neuron has its weight vector modified per iteration of the algorithm. An undesirable characteristic of this algorithm is its high sensitivity to weight initialization, which can eventually lead to the occurrence of dead neurons, i.e., neurons that are never chosen as winners. To avoid this, the plain WTA has been modified to give all neurons the opportunity to become winners at some point during the learning phase. We present next the modifications of interest to this paper. The first method, called Frequency-Sensitive Competitive Learning (FSCL) [3], changes (1) slightly by penalizing those neurons which are chosen as winners too frequently:

    i*(t) = arg min_i { f_i(t) ||x(t) − w_i(t)|| }    (3)
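To make the updates concrete, here is a minimal NumPy sketch of WTA training with the optional FSCL penalty f_i(t) = (c_i/t)^z applied when choosing the winner. The function name, network size, decay schedule and all parameter defaults below are our own illustrative choices, not values prescribed by the paper:

```python
import numpy as np

def train_wta(X, n_neurons=20, epochs=50, alpha0=0.9, alphaT=1e-5,
              freq_sensitive=False, z=5.0, seed=0):
    """Train a simple competitive (WTA) network on data matrix X (N x d).
    With freq_sensitive=True, the FSCL penalty f_i(t) = (c_i/t)^z is used."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((n_neurons, X.shape[1]))  # weight vectors w_i
    counts = np.zeros(n_neurons)                            # win counters c_i
    T = epochs * len(X)                                     # total iterations
    t = 0
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            t += 1
            alpha = alpha0 * (alphaT / alpha0) ** (t / T)   # exponential decay of the learning rate
            dist = np.linalg.norm(x - W, axis=1)            # ||x(t) - w_i(t)||
            if freq_sensitive:
                dist = (counts / t) ** z * dist             # penalize frequent winners, Eq. (3)
            i_star = int(np.argmin(dist))                   # winning neuron, Eq. (1)
            counts[i_star] += 1
            W[i_star] += alpha * (x - W[i_star])            # winner update, Eq. (2)
    return W
```

After training, the rows of `W` are the prototype vectors that quantize the input distribution.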

system. These KPIs are gathered, for example, from the cellular system operator, drive tests, customer complaints or protocol analyzers, and put together in a pattern vector x(t), which summarizes the state of the system at time t:

    x(t) = [KPI_1(t)  KPI_2(t)  . . .  KPI_n(t)]^T    (7)

where f_i(t) = (c_i/t)^z, so that c_i is the number of times neuron i was the winner up to the current iteration t, and z > 1 is a constant. No change in (2) is required.

The second method is the well-known Self-Organizing Map (SOM) [4]. This algorithm finds the winning neuron as in (1), but alters the learning equation (2) to allow adjustment of the weight vectors of the winning neuron and of those neurons belonging to its neighborhood:

    w_i(t+1) = w_i(t) + α(t) h(i*, i; t) [x(t) − w_i(t)]    (4)

where h(i*, i; t) = exp(−||r_i(t) − r_{i*}(t)||² / σ²(t)) is a Gaussian weighting function which limits the neighborhood of the winning neuron. The parameter σ(t) defines the radius of the neighborhood, while r_i(t) and r_{i*}(t) are, respectively, the positions of neurons i and i* in the array. The variable σ(t) should decay in time similarly to the learning rate α(t).

The third method, called the Neural-Gas algorithm (NGA) [5], modifies both (1) and (2) as follows. A single winning neuron is not directly searched for; rather, all the neurons are ranked according to the distance of their weight vectors to the current input x(t):

    ||x(t) − w_{i1}(t)|| < ||x(t) − w_{i2}(t)|| < . . . < ||x(t) − w_{in}(t)||    (5)

where w_{i1}(t) is the weight vector closest to x(t), w_{i2}(t) is the second-closest to x(t), and so on. Then, the weight vectors are adapted as follows:

    w_{ik}(t+1) = w_{ik}(t) + α(t) h_λ(k, t) [x(t) − w_{ik}(t)]    (6)

where n is the number of KPIs chosen to monitor the cellular network. Among the huge number of KPIs available for selection, we have chosen the following ones:

Number of Users: number of initial mobile users attempting to use the services provided by the cellular network.
Downlink Throughput: total throughput in the downlink direction, in Kb/s, summed over all the active links in the cell being analyzed.
Noise Rise: ratio between overall interference and thermal noise in the analyzed cell, in dB.
Other-Cells Interference: the interference (in the uplink direction) from other cells to the cell being analyzed, in dBm.

To train the neural model we need hundreds of samples of the state vector x(t), each corresponding to a certain instant of observation of the cellular network. These sample vectors, from now on called the training set, must be representative of the normal functioning of the cellular system. We have developed a static simulation tool [6] in order to obtain the performance data for training the neural models. Furthermore, since the KPIs may differ considerably in amplitude, we applied to each component x_j of the state vector x, with corresponding mean μ_j and variance σ_j², the linear transformation y_j = (x_j − μ_j)/σ_j in order to obtain a normalized variable y_j with zero mean and unit variance.
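The normalization step described above can be sketched as follows; the function name is ours, and the KPI matrix layout (rows = observation instants, columns = the four KPIs listed above) is an assumption for illustration:

```python
import numpy as np

def normalize_kpis(X):
    """Z-score each KPI column: y_j = (x_j - mu_j) / sigma_j, so that
    every indicator has zero mean and unit variance over the training set."""
    mu = X.mean(axis=0)      # per-KPI mean mu_j
    sigma = X.std(axis=0)    # per-KPI standard deviation sigma_j
    return (X - mu) / sigma, mu, sigma
```

At test time, the same (mu, sigma) estimated on the training set would be reused to normalize incoming state vectors, so that train and test data share one scale.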

4. Fault Detection via Competitive Models


Our approach can be summarized as follows. First, we choose one of the neural models presented in Section 2 and train it with state vectors x(t) collected during normal functioning of the cellular network (i.e., no examples of abnormal behavior are available for training). After training is completed, we compute the quantization error e(t) associated with each state vector x(t) used during training:

    e(t) = ||E(t)|| = ||x(t) − w_{i*}(t)||,   t = 1, . . . , N    (8)

where h_λ(k, t) = exp{−(k − 1)/λ(t)} plays a role similar to the neighborhood function of the SOM algorithm. For the sake of convergence, α(t) and λ(t) should decay in time.
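One NGA adaptation step following (5)-(6) can be sketched in NumPy as follows; the function name, the rank computation via `argsort`, and the in-place update are our own implementation choices:

```python
import numpy as np

def nga_step(W, x, alpha, lam):
    """One Neural-Gas step: rank all neurons by distance to x, then move
    each with weight h_lambda(k) = exp(-(k-1)/lambda), as in Eq. (6)."""
    dist = np.linalg.norm(x - W, axis=1)       # distances ||x - w_i||
    order = np.argsort(dist)                   # i1, i2, ..., in as in Eq. (5)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(W) + 1)    # rank k of each neuron (1 = closest)
    h = np.exp(-(ranks - 1) / lam)             # neighborhood weights h_lambda(k)
    W += alpha * h[:, None] * (x - W)          # rank-weighted update, Eq. (6)
    return W
```

Note that every neuron moves toward x, but with a weight that decays exponentially with its rank; the SOM step differs only in that the weight depends on grid distance to the winner rather than on rank.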

3. Specifying the State Vectors


To evaluate the performance of the competitive models just described on FDD tasks, we need to define a set of KPIs (Key Performance Indicators), which consist of a number of variables responsible for monitoring the QoS of a cellular

where E(t) denotes the quantization error vector and i* is the winning neuron for the state vector x(t). In other words, the quantization error is simply the distance from the state vector x(t) to the weight vector w_{i*}(t) of its winning neuron. We refer to the distribution of the N quantization errors resulting from the training vectors as the normality profile of the cellular system. Using the normality profile we can then define a numerical interval representing normal behavior of the system by computing lower and upper limits via percentiles. The percentile of a distribution of values is a number e_p such that a percentage p of the population values are less than or

equal to e_p. In this paper, we are interested in an interval within which a given percentage p (e.g. p = 0.95) of the normal values of the variable can be found. We compute the lower and upper limits of this interval as follows:

Lower Limit (e_p^−): the 100(1 − p)/2-th percentile.
Upper Limit (e_p^+): the 100(1 + p)/2-th percentile.

In Statistics jargon, the probability p defines the confidence level and, hence, the normality interval [e_p^−, e_p^+] is called an (empirical) confidence interval. This interval can then be used to classify a new state vector as normal/abnormal by means of a simple hypothesis test:

    IF e_new ∈ [e_p^−, e_p^+] THEN x_new is NORMAL, ELSE x_new is ABNORMAL    (9)

In words, the rule in (9) says that if the quantization error associated with a new state vector x_new is within the limits of normality [e_p^−, e_p^+], then the state of the cellular system is considered normal; otherwise, it is abnormal. The main advantages of this approach are its simplicity, its generality, since it can be used with a number of competitive learning algorithms, and, as we will see in Section 6, its better detection performance in comparison to other methods available in the literature.

4.1. On Hypothesis Testing

It is worth presenting the detection rule (9) under the formalism of Statistics in order to establish criteria to measure the performance of the neural models on the fault detection task. Firstly, it is necessary to define a null hypothesis, i.e., the hypothesis to be tested. It stems directly from the problem statement and is denoted H0. For this paper, we have:

H0: The input vector reflects the NORMAL activity of the cellular system.

The so-called alternative hypothesis, H1, is given by:

H1: The input vector reflects the ABNORMAL activity of the cellular system.

Thus, when formulating a conclusion regarding the condition of the cellular system based on the definitions of H0 and H1, two types of errors are possible:

Type I error: this error occurs when the null hypothesis (H0) is rejected when it is, in fact, true. The probability of making a type I error is denoted by α and is called the significance level. The value of α is set by the investigator in relation to the consequences of such an error. In this paper, a type I error is referred to as a False Alarm, i.e., an incoming state vector is NORMAL but is called ABNORMAL.

Type II error: this error occurs when the null hypothesis (H0) is not rejected when it should be. The probability of making a type II error is denoted by β (which is generally unknown). In this paper, a type II error is referred to as an Absence of Alarm, i.e., an incoming state vector is ABNORMAL but is called NORMAL. A type II error is mainly due to the sample size N being too small.

The probabilities α and β are responsible for specifying the width of the interval of normality [e_p^−, e_p^+]. As discussed next, this width is decided through the analysis of the costs that each type of error has on the cellular system. The ideal would be to have α = 0 and β = 0, but this is not possible in practice. So, it is necessary to manage the α and β error probabilities based on the overall costs of the FDD system. The difficulty is that, for a given set of data, the occurrences of type I and type II errors are inversely related: the smaller the risk of one, the higher the risk of the other.

4.2. Controlling Type I and Type II Errors

If α is very low, there will be few false alarms (type I errors), but there is a larger chance that we will not find a significant difference even when H0 is false; i.e., reducing the value of α decreases the chance of a false alarm but increases the chance of an absence of alarm (type II error), and vice versa. So, α and β are interrelated. To decrease both α and β, we may increase the sample size N. Since it is not simple to increase the sample size N, which in this application is closely related to the number of neurons in the neural model, we decided to minimize the type II error, which is much more serious (i.e., costly) for an actual mobile communication company. So, α is increased in order to minimize β. The consequence of an increase in α is a reduction of the width of the normality interval [e_p^−, e_p^+], since the confidence level p depends on α.
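The detection scheme of Section 4 (normality profile, percentile limits, and the test in (9)) can be sketched as follows. This is a minimal NumPy sketch with our own function names; `np.percentile` is used to compute the empirical limits:

```python
import numpy as np

def normality_interval(errors, p=0.95):
    """Empirical confidence interval [e_p^-, e_p^+] from the sample
    distribution of training quantization errors (the normality profile)."""
    lo = np.percentile(errors, 100 * (1 - p) / 2)   # lower percentile limit
    hi = np.percentile(errors, 100 * (1 + p) / 2)   # upper percentile limit
    return lo, hi

def is_abnormal(x_new, W, interval):
    """Hypothesis test of Eq. (9): reject H0 (normality) when the
    quantization error of x_new falls outside [e_p^-, e_p^+]."""
    e_new = np.linalg.norm(x_new - W, axis=1).min()  # distance to winner, Eq. (8)
    lo, hi = interval
    return not (lo <= e_new <= hi)
```

In this formulation, raising the confidence level p widens the interval and therefore trades fewer false alarms (type I errors) for more absences of alarm (type II errors), exactly the trade-off discussed above.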

5. Fault Diagnosis via Competitive Models


Once a fault has been detected, it is necessary to investigate which of the attributes (KPIs) of the problematic input vector are responsible for the fault. From the weight vectors of a trained competitive neural model it is possible to extract inference rules that determine the faulty KPIs, so that the cellular network supervision system can be invoked to take corrective action. All the previous works generate inference rules through the analysis of the clusters formed by a subset of the NORMAL/ABNORMAL state vectors [8], [9]. This approach is not adequate for our purposes, since our state vectors reflect only the normal functioning of the cellular network. We propose instead to evaluate the absolute values of the quantization errors of each KPI, computed for each training state vector:

    ABS(E(t)) = [ |E_1(t)|, |E_2(t)|, . . . , |E_n(t)| ]^T
              = [ |x_1(t) − w_{i*,1}(t)|, |x_2(t) − w_{i*,2}(t)|, . . . , |x_n(t) − w_{i*,n}(t)| ]^T    (10)

This approach is similar to that used in the fault detection task, but now we build n sample distributions using the absolute values of each component of the quantization error vector E. (For the detection task we used only one sample distribution, built from the norm of the quantization error vector, as described in (8).) Then, for all the sample distributions { |E_j(t)| }, t = 1, . . . , N and j = 1, . . . , n, we compute the corresponding confidence intervals [ |E_j|^−, |E_j|^+ ], where |E_j|^− and |E_j|^+ are the lower and upper bounds of the j-th interval. Thus, whenever an incoming state vector x_new is signaled as abnormal by the fault detection stage, we take the absolute value of each component E_j^new of the corresponding quantization error vector and execute the following test:

    IF |E_j^new| ∈ [ |E_j|^−, |E_j|^+ ] THEN KPI x_j is normal, ELSE x_j is one (possible) cause of the fault.

In words, if the quantization error computed for the KPI x_j is within the range defined by the interval [ |E_j|^−, |E_j|^+ ], then it is not responsible for the previously detected fault; otherwise, it is indicated as a possible cause of the detected fault. If none of the KPIs is found to be faulty, then a false alarm has occurred and can be corrected. Confidence levels of 95% and 99% are commonly used.

TABLE I
TYPICAL FALSE ALARM (FA) MEAN RATES AND CONFIDENCE INTERVALS FOR THE VARIOUS NEURAL MODELS.

            Proposed Approach                               Approach by [7]
Model   CI, FA (95%)           CI, FA (99%)           CI, FA (95%)           CI, FA (99%)
WTA     [0.366, 1.534], 12.43  [0.074, 1.836], 5.41   [0.000, 0.465], 17.91  [0.000, 1.018], 7.13
FSCL    [0.214, 1.923], 10.20  [0.136, 4.584], 1.80   [0.000, 1.126], 12.20  [0.000, 0.385], 3.00
NGA     [0.277, 1.944], 9.50   [0.1651, 4.218], 2.10  [0.000, 1.329], 10.10  [0.000, 0.941], 2.30
SOM     [0.361, 1.815], 8.75   [0.187, 2.710], 1.43   [0.000, 1.122], 13.28  [0.000, 1.191], 2.71
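The per-KPI diagnosis rule above can be sketched in the same style. Again a minimal NumPy sketch with our own function names; `E_abs` is assumed to be the N x n matrix of absolute component-wise errors from (10), collected over the training set:

```python
import numpy as np

def kpi_intervals(E_abs, p=0.95):
    """Per-KPI confidence intervals [|E_j|^-, |E_j|^+] from the absolute
    component-wise quantization errors of the training set (N x n)."""
    lo = np.percentile(E_abs, 100 * (1 - p) / 2, axis=0)
    hi = np.percentile(E_abs, 100 * (1 + p) / 2, axis=0)
    return lo, hi

def faulty_kpis(x_new, W, lo, hi):
    """Return indices of KPIs whose absolute error falls outside the
    per-component interval; an empty result suggests a false alarm."""
    i_star = np.argmin(np.linalg.norm(x_new - W, axis=1))  # winning neuron
    e_abs = np.abs(x_new - W[i_star])                      # |E_j^new| for each KPI
    return np.where((e_abs < lo) | (e_abs > hi))[0]
```

Running this only on vectors already flagged by the detection stage keeps diagnosis cheap: one distance computation plus n interval checks per alarm.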

6. Computer Simulations
The simulated 3G cellular environment is macrocellular, with two rings of interfering cells around the central one, resulting in a total of 19 cells. Other configurations are possible, with 1, 7 or 37 cells. All base stations use omnidirectional antennas at 30 meters above ground level, and the RF propagation model is the classic Okumura-Hata model for a 900 MHz carrier frequency. Fast fading, as well as shadow-fading multipath phenomena, are also included in the channel modeling. Each mobile user is assigned voice or data services with different transmission rates, and the admission control is constrained both by the carrier received power and the overall cell load. Quality parameters, such as the Eb/Nt target and the maximum Noise Rise level, are set to 5 dB and 6 dB, respectively. The number of initial mobile users is 60; users can be removed from the system by a power control algorithm. For each Monte Carlo simulation (drop) of the cellular environment, a set of KPIs is stored and used for ANN training/testing procedures. Each data set corresponding to a specific network scenario is formed by 500 state vectors (collected from 500 drops of the static simulation tool), from which 400 vectors are selected randomly for training and the remaining 100 vectors are used for testing the neural models.

The first set of simulations evaluates the performance of the neural models by quantifying the occurrence of false alarms after training them. The chosen network scenario corresponds to 100 mobile stations initially trying to connect to 7 base stations. No shadow fading is considered, and only voice services are allowed. The results (in percentage) are organized in Table I, where we show the intervals found for two confidence levels (95% and 99%). For comparison purposes, we also show the results obtained with the single-threshold approach. The error rates were averaged over 100 independent training runs. For all neural models, the number of neurons and the number of training epochs were set to 20 and 50, respectively, and the initial and final values of the learning rate were set to α0 = 0.9 and αT = 10^−5. For the FSCL, the parameter z in (3) was set to 5. For the NGA, we set λ0 = 40 and λT = 0.01. It is worth noting that the NGA/SOM models performed much better than the WTA/FSCL models. In a superficial analysis, the main difference among them is that the former algorithms possess the topology-preserving property in addition to the inherent clustering abilities of simple competitive learning rules. This property is incorporated into the SOM/NGA learning rules through weighting factors which depend on the distance of the winning neuron to its neighbors. The weighting factors transform the simple competitive learning rule in (2) into the Hebbian-like learning rules (4) and (6). It is well known that Hebbian learning rules can learn second-order statistics of the input distribution, while plain competitive learning rules learn only first-order statistics. To give an idea of the distribution of the quantization errors produced by the neural algorithms for this scenario, typical normality profiles for the WTA and the SOM are shown in Figures 1a and 1b, respectively. The vertical thick lines correspond to the 95% confidence intervals.
The second set of simulations evaluates the sensitivity of the neural models to changes in their training parameters. The goal is to understand how the number of neurons, the number of training epochs and the size of the training set affect the occurrence of false alarms after training the neural models. The results are shown in Figures 2, 3 and 4, respectively. For each case, we compare the interval-based approach proposed in this paper with the single-threshold approach presented in [7]. The chosen network scenario corresponds to 120 mobile stations initially trying to connect to 7 base stations, for which fast and shadow fading are considered this time. Voice and data

[Figure 1 appears here: histograms of quantization errors, with panels "Normality Profile and Confidence Interval 95%" for the WTA and the SOM.]

Fig. 1. Typical normality profiles for (a) the WTA model and (b) the SOM model. Vertical lines represent the confidence interval.

[Figures 2 and 3 appear here: mean false-alarm error rates for the single-threshold and confidence-interval approaches.]

Fig. 2. Evolution of the false-alarm error rate with the number of neurons for the FSCL model.

Fig. 3. Evolution of the false-alarm error rate with the number of training epochs for the SOM model.

services are allowed. For the sake of simplicity, results are shown for one neural model only, since similar patterns are observed for the others. For a given value of a parameter (e.g. the number of neurons), the neural model is trained 100 times with different initial weights. For each training run, state vectors are selected randomly for the training and testing data sets. Also, the order of presentation of the state vectors in each training epoch is changed randomly. Then, the final value of the false-alarm error rate is averaged over the 100 testing runs. These independent training and testing runs are necessary to avoid biased estimates of the error rate. In Figure 2, the number of neurons is varied from 1 to 200, and each training run lasts 50 epochs. In Figure 3, the number of epochs is varied from 1 to 100, while the number of neurons is fixed at 30. Finally, in Figure 4, the number of neurons and the number of training epochs are fixed at 30 and 50, respectively, while the size of the training set (i.e., the number of state vectors used for training) is varied from 10 to 490. The size of the testing set varies accordingly from 490 to 10. From these results, we can infer that on average the proposed approach produces better results than the single-threshold method.

The last set of simulations evaluates the method proposed in Section 5 for generating inference rules from competitive ANNs. Table II depicts the obtained results, averaged over 100 Monte Carlo simulations. ERROR I refers to the false-alarm rate, while ERROR II refers to the absence-of-alarm rate. The indicator PERF (%) denotes the mean accuracy of the FDD system and is computed as PERF = 100(1 − ERRORS/S), where S is the total number of state vectors used for testing. For each simulation, there were 8 state vectors corresponding to ABNORMAL conditions

of the cellular network, and 52 state vectors reflecting NORMAL conditions. Thus, we have S = 60, and one can infer that the maximum possible value of ERRORS is S, reached only in the case of a very unreliable FDD system. Two faulty state vectors per KPI were simulated by adding or subtracting random values obtained from Gaussian distributions with standard deviations greater than 1. The underlying idea of this procedure is to generate random values outside the range of normality of each KPI and thereby test the sensitivity of the FDD system. It is worth emphasizing that all neural models performed very well, irrespective of their performance in the fault detection task. The only remaining error is the false alarm, which is the less crucial one in a cellular network; even this type of error presented a very low rate of occurrence. All the ABNORMAL vectors were found and their causes correctly assigned, i.e., all the faulty KPIs inserted in each ABNORMAL state vector were detected.

Fig. 4. Evolution of the false-alarm rate with the size of the training set for the NGA model.

TABLE II
RESULTS (IN PERCENTAGE) FOR THE JOINT FAULT DETECTION AND DIAGNOSIS TASKS. FA = FALSE ALARM, AA = ABSENCE OF ALARM.

               p = 95%                  p = 99%
Model    FA     AA     PERF       FA     AA     PERF
WTA      5.54   0.00   91.50      2.20   0.04   95.80
FSCL     4.30   0.00   92.83      1.10   0.00   98.17
NGA      5.54   0.00   91.50      0.65   0.00   99.00
SOM      4.67   0.00   92.80      0.98   0.00   98.50

7. Conclusion

In this paper we proposed general methods for fault detection and diagnosis in 3G cellular networks using competitive neural models. Unlike the available qualitative methods [10], [11], [12], [13], the approach we took focuses on quantitative (numerical) results, which are more adequate for online performance analysis, being based on the statistically oriented and widely accepted method of computing confidence intervals. The proposed approaches select lower and upper limits for the threshold, defining an acceptable range (interval) of variation for all the quantization errors of interest. These intervals can then be used for hypothesis-testing purposes. In [7], only a single threshold is defined, which corresponds to the upper threshold of our approach. There is an inherent problem in using a single (upper) threshold in the analysis of cellular networks: depending on the chosen KPIs, excessively low values of a given variable (e.g. the number of users or the transmission power in the downlink) can also be indicative of a fault in the system. Hence, a lower bound for the threshold should also be computed to take these cases into account. This is why the proposed confidence-interval-based methods outperformed the currently available single-threshold methods.

Acknowledgements

The authors thank CPqD Telecom & IT Solutions/Instituto Atlântico and CNPq (DCR grant 305275/2002-0) for the financial support.

References

[1] R. Prasad, W. Mohr, and W. Konhäuser, Third Generation Mobile Communication Systems - Universal Personal Communications, Artech House Publishers, 2000.
[2] J. C. Principe, N. R. Euliano, and W. C. Lefebvre, Neural and Adaptive Systems: Fundamentals through Simulations, John Wiley & Sons, 2000.
[3] S. Ahalt, A. Krishnamurthy, P. Chen, and D. Melton, "Competitive learning algorithms for vector quantization," Neural Networks, vol. 3, pp. 277-290, 1990.
[4] T. Kohonen, "The self-organizing map," Proceedings of the IEEE, vol. 78, no. 9, pp. 1464-1480, 1990.
[5] T. M. Martinetz and K. J. Schulten, "A neural-gas network learns topologies," in Artificial Neural Networks, T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, Eds., pp. 397-402, North-Holland, Amsterdam, 1991.
[6] C. E. Fernandes, L. Aguayo, C. A. Fernandes, J. M. Maciel, G. A. Barreto, and J. C. Mota, "A simulation tool for CDMA2000 system-level performance evaluation," IEEE Transactions on Education, 2003, submitted.
[7] J. Laiho, M. Kylväjä, and A. Höglund, "Utilisation of advanced analysis methods in UMTS networks," in Proceedings of the IEEE Vehicular Technology Conference (VTS/spring), Birmingham, Alabama, 2002, pp. 726-730.
[8] B. Hammer, A. Rechtien, M. Strickert, and T. Villmann, "Rule extraction from self-organizing networks," Lecture Notes in Computer Science, vol. 2415, pp. 877-882, 2002.
[9] M. Siponen, J. Vesanto, O. Simula, and P. Vasara, "An approach to automated interpretation of SOM," in Proceedings of the 2001 Workshop on the Self-Organizing Map (WSOM), N. Allinson, H. Yin, L. Allinson, and J. Slack, Eds., pp. 89-94, Springer, 2001.
[10] T. Binzer and F. M. Landstorfer, "Radio network planning with neural networks," in Proceedings of the IEEE Vehicular Technology Conference (VTS/fall), Boston, MA, 2000, pp. 811-817.
[11] K. Raivio, O. Simula, and J. Laiho, "Neural analysis of mobile radio access network," in Proceedings of the IEEE International Conference on Data Mining (ICDM), San Jose, California, 2001, pp. 457-464.
[12] J. Laiho, K. Raivio, P. Lehtimäki, K. Hätönen, and O. Simula, "Advanced analysis methods for 3G cellular networks," Tech. Rep. A65, Helsinki University of Technology, Publications in Computer and Information Science, 2002; submitted to IEEE Transactions on Wireless Communications.
[13] K. Raivio, O. Simula, J. Laiho, and P. Lehtimäki, "Analysis of mobile radio access network using the Self-Organizing Map," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management, Colorado Springs, Colorado, 2003, pp. 439-451.
