
Software Reliability Assessment using Artificial Intelligence

Indrajit Mandal, 4th Year, ISE, SaIT, Bangalore. Mobile: +919047879869. Email: indrajit1729@gmail.com

Abstract: Fuzzy-neural networks have emerged as a promising technology in applications that require generalization, abstraction, adaptation and learning. This paper presents an artificial neural network with fuzzy logic for software reliability prediction. It explores the use of backpropagation neural networks as a model for software reliability estimation and modeling. The neural network approach exhibits consistent behavior in prediction, and its predictive performance is comparable to that of parametric models. The paper shows how to apply fuzzy logic with the backpropagation algorithm to predict software reliability by designing the different elements of the neural network. Furthermore, the fuzzy neural network approach is used to build a dynamic weighted combinational model. The applicability of the proposed model is demonstrated on software failure data sets. The experimental results show that the proposed model significantly outperforms traditional software reliability models.

Key Terms: fuzzy-neural networks, software reliability prediction, backpropagation algorithm, software failure data set, reliability models.

Training input/output data is presented to the system, the computed output is compared to the given one, and, as with a regular neural network, the system learns through the backpropagation algorithm. From the point of view of design, artificial neural networks are computing devices that use design principles similar to human brain processing.

Fig. 1. Neural network model

Figure 1 shows the architecture of the neural network used for software reliability assessment. The first (input) layer has two nodes or neurons, represented by circles, whose inputs are MTBF and time. The second (hidden) layer has seven nodes. The last (output) layer, where the outputs are received, has one neuron, whose output is the software reliability.

1. INTRODUCTION
Neural networks operate in basically two modes: training and recall. Training corresponds to the adaptation of the link weights when new patterns/vectors (training data) are applied to the neural network at the input layer. Depending on whether desired responses are present for the applied inputs, the training is labeled supervised or unsupervised. In either case, the neural network changes the values of its connection weights after the inputs are applied, in such a manner that the outputs yield a satisfactory result. The degree to which the outputs are considered satisfactory also depends on the presence of desired given outputs. A fuzzy neural network [7] is functionally equivalent to a fuzzy inference model in which fuzzy logic is used for bringing the data values within the range of zero to one.
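The scaling into the zero-to-one range described above can be sketched as a simple min-max normalization. This is a minimal illustration, not the paper's actual fuzzification code; the function name and sample data are hypothetical.

```python
def normalize(values):
    """Scale raw failure data (e.g. MTBF values) into [0, 1] by min-max scaling."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# hypothetical MTBF readings in hours
mtbf = [12.0, 45.0, 3.0, 30.0]
print(normalize(mtbf))  # every value now lies in [0, 1]
```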

3. RELIABILITY THEORY
Reliability is defined as the probability that a device will perform its intended function during a specified period of time under stated conditions [2]. Mathematically, this may be expressed as

R(t) = ∫_t^∞ f(x) dx,

where f(x) is the failure probability density function and t is the length of the period of time (which is assumed to start from time zero).

First, reliability is a probability. This means that failure is regarded as a random phenomenon. Second, reliability is predicated on "intended function"; generally, this is taken to mean operation without failure. The system requirements specification is the criterion against which reliability is measured. Third, reliability applies to a specified period of time. In practical terms, this means that a system has a specified chance that it will operate without failure before time t. Fourth, reliability is restricted to operation under stated conditions. In this paper, fuzzy logic is applied to a software reliability data set [2].

7. ACTUAL IMPLEMENTATION
The rules are not implemented by a direct method; the following steps are followed. [The backward propagation follows these rules.]

Let x, y, z be the outputs of neurons in the input, hidden and output layers respectively, with a subscript indicating which neuron is referred to, and let:

P: desired output, p1, p2, ..., pr
m: number of input neurons, with inputs x1, x2, ..., xm
r: number of output neurons
n: number of hidden neurons
λo: learning rate for the output layer
λh: learning rate for the hidden layer
θ: threshold of a hidden-layer neuron
τ: threshold of an output-layer neuron
Eb: error in the output of the output layer
Tb: error in the output of the hidden layer
f(x) = 1 / (1 + exp(-kx)): activation function

Output of the b-th hidden-layer neuron:
y_b = f( Σ_a x_a M1[a][b] + θ_b )

Output of the a-th output-layer neuron:
z_a = f( Σ_b y_b M2[b][a] + τ_a )

a-th component of the output difference (desired value minus computed value), i.e. of the output-layer error:
E_a = p_a - z_a

a-th component of the hidden-layer error:
T_a = y_a (1 - y_a) Σ_b M2[a][b] E_b

Adjustments for the weight between the a-th hidden neuron and the b-th output neuron, for the input-to-hidden weights, and for the thresholds:
ΔM2[a][b] = λo y_a E_b
ΔM1[a][b] = λh x_a T_b
Δτ_b = λo E_b

With a momentum term (coefficient α), the weight updates at iteration t become:
ΔM2[a][b](t) = λo y_a E_b + α ΔM2[a][b](t-1)
ΔM1[a][b](t) = λh x_a T_b + α ΔM1[a][b](t-1)
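These update rules can be sketched for the paper's 2-7-1 architecture. This is a minimal illustration following the rules as stated (error taken as p - z at the output, sigmoid derivative applied at the hidden layer, momentum omitted); all names are hypothetical, and the actual simulator may differ.

```python
import math

def sigmoid(x, k=1.0):
    """f(x) = 1 / (1 + exp(-kx)) from the rules above."""
    return 1.0 / (1.0 + math.exp(-k * x))

def train_step(x, p, M1, M2, th_h, th_o, lr_h=0.5, lr_o=0.5):
    """One forward/backward pass. x: inputs, p: desired outputs,
    M1: input-to-hidden weights, M2: hidden-to-output weights,
    th_h/th_o: hidden and output thresholds (updated in place)."""
    # forward pass through hidden and output layers
    y = [sigmoid(sum(x[a] * M1[a][b] for a in range(len(x))) + th_h[b])
         for b in range(len(th_h))]
    z = [sigmoid(sum(y[b] * M2[b][c] for b in range(len(y))) + th_o[c])
         for c in range(len(th_o))]
    # output-layer error: desired minus computed
    E = [p[c] - z[c] for c in range(len(z))]
    # hidden-layer error, back-propagated through M2
    T = [y[b] * (1 - y[b]) * sum(M2[b][c] * E[c] for c in range(len(E)))
         for b in range(len(y))]
    # weight and threshold adjustments
    for b in range(len(y)):
        for c in range(len(E)):
            M2[b][c] += lr_o * y[b] * E[c]
    for a in range(len(x)):
        for b in range(len(y)):
            M1[a][b] += lr_h * x[a] * T[b]
    for c in range(len(E)):
        th_o[c] += lr_o * E[c]
    for b in range(len(y)):
        th_h[b] += lr_h * T[b]
    return z
```

Repeatedly calling `train_step` on a pattern drives the computed output toward the desired one.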

5. MOTIVATION
There are a large number of models in current software reliability engineering [1],[2],[3],[8]. I have also come across several neural network models, but I found that the backpropagation algorithm combined with fuzzy logic has not been used so far, so I decided to work in this area. It made me curious to work on and contribute something to the field of neural networks.

6. BACKPROPAGATION ALGORITHM
Backpropagation [6], or propagation of error, is a common method of teaching artificial neural networks how to perform a given task. It is a supervised learning method, and is an implementation of the delta rule. Backpropagation requires that the activation function used by the artificial neurons (or "nodes") be differentiable. Summary of the backpropagation technique:
1. Present a training sample to the neural network.
2. Compare the network's output to the desired output for that sample. Calculate the error in each output neuron.
3. For each neuron, calculate what the output should have been, and a scaling factor, i.e. how much lower or higher the output must be adjusted to match the desired output. This is the local error.
4. Adjust the weights of each neuron to lower the local error.
5. Assign "blame" for the local error to neurons at the previous level, giving greater responsibility to neurons connected by stronger weights.
6. Repeat from step 3 on the neurons at the previous level, using each one's "blame" as its error.
The activation function used in the neural network training is the sigmoid function given in (1), where k is an integer constant and x is the input value.

f(x) = 1 / (1 + e^(-kx))    (1)
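Equation (1) can be written down directly; this small sketch (names hypothetical) shows the role of the constant k in steepening the curve:

```python
import math

def f(x, k=1):
    """Sigmoid activation of Eq. (1); larger k gives a steeper curve."""
    return 1.0 / (1.0 + math.exp(-k * x))

# f(0) is 0.5 for any k; away from the origin a larger k
# pushes the output faster toward 0 or 1
print(f(0), f(1, 1), f(1, 10))
```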

The simulator developed for this work has the following objectives:
1. Allow the user to specify the number and size of all layers.
2. Allow the use of one or more hidden layers.
3. Be able to save and restore the state of the network.
4. Run from an arbitrarily large training data set or test data set.
5. Query the user for key network and simulation parameters.
6. Display key information at the end of the simulation.

8. SIMULATION MODES
8.1 Training mode
Here the user provides a training file in the current directory called training.dat. This file contains example patterns; each pattern has a set of inputs followed by a set of outputs, with the values separated by one or more spaces. Another file used in training is the weight file called weights.dat. Once the simulator reaches the error tolerance specified by the user, or the maximum number of iterations, it saves the state of the network by writing all of its weights to weights.dat. This file can then be used in a subsequent run of the simulator in non-training mode. To give some idea of how the network has done, information about the total and average error is presented at the end of the simulation. In addition, the output generated by the network for each input vector is written to an output file called output.dat.

8.2 Test mode
Here the user provides test data to the simulator in a file called test.dat. This file contains only input patterns. When this file is applied to an already trained network, an output.dat file is generated, which contains the outputs from the network for all of the input patterns. The network goes through one cycle of operation in this mode, covering all the patterns in the test.dat file. weights.dat is read to initialise the state of the network. test.dat holds the user's test data together with the expected output. In all cases the values chosen are in the range 0-1, as the backpropagation neural network is efficient for this range; any parameter can be brought into it by a suitable conversion. All the cases considered showed a similar trend and almost the same values.

Expected output | Result obtained
0.977364 | 0.976441
0.976650 | 0.971797
0.792536 | 0.792681
0.796503 | 0.794138
0.408812 | 0.395036
0.665736 | 0.671583
0.727370 | 0.729866
0.646120 | 0.646411
0.767112 | 0.765696

Table 1
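The training.dat pattern format described above (inputs followed by outputs, separated by one or more spaces) can be parsed with a small helper. This is a hypothetical sketch, not part of the original simulator.

```python
def load_patterns(lines, n_inputs=2, n_outputs=1):
    """Parse training.dat-style lines: each line holds the inputs
    followed by the outputs, separated by one or more spaces."""
    patterns = []
    for line in lines:
        vals = [float(v) for v in line.split()]
        if len(vals) != n_inputs + n_outputs:
            continue  # skip blank or malformed lines
        patterns.append((vals[:n_inputs], vals[n_inputs:]))
    return patterns

# usage: patterns = load_patterns(open("training.dat"))
```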

9. RESULTS OBTAINED
9.1 Case Study 1


Case study one is done using a standard data set [2] collected for software reliability. Around 80% of the data is used for training and the remaining 20% for testing, so that the results obtained can be compared with the standard data and the accuracy of the neural network can be assessed. Fuzzy logic is used for the normalization of the data, so that the data values lie in the range 0 to 1. Inputs 1 and 2 to the simulator are MTBF and time; the expected and obtained outputs are shown in Table 1.

After training the neural network, the weights for case one are generated; they are summarized below:
1 1.451231 7.179730 -5.012209 0.927069 1.165352 1.720945 -1.379831
1 -10.801326 -0.004935 0.206632 -2.451720 0.491591 -0.796951 1.666641
2 2.883634 0.395986 -0.871015 1.069819 7.713162 2.212281 -0.390156 0.766935
2 0.379247 -0.849840 -0.262248 -1.933704 -10.675800 -4.154297 0.577442 0.171044
2 -0.134482 -1.253082 0.020250 -3.479356 5.270518 2.888530 -0.374376 1.935062
2 1.792643 -0.856076 -0.097033 -2.221425 1.577753 1.226687 -1.653748 1.262026
2 -2.010406 -0.794682 -1.168519 0.791442 2.399513 -0.401209 -0.636701 -0.682632
2 -0.760431 0.240488 0.056293 1.516305 1.262841 -1.613197 0.069580 2.180249
2 2.490106 -0.999430 0.331698 -2.274142 -4.280077 -0.205699 -1.471257 2.064625
3 2.668287
3 -0.632551
3 0.684229
3 -3.858803
3 8.174465
3 4.115244
3 -0.787614
3 2.363752
(the first number on each row is the layer index)


Fig. 2. Result for case 1

Results obtained:
average error per cycle = 0.09927
error last cycle = 0.079994
error last cycle per pattern = 0.009
total cycles = 85530
total patterns = 1068870
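The per-pattern error behind summary figures like these can be reproduced from the expected/obtained columns of Table 1 with a simple mean absolute error. This is a sketch; the simulator's exact error definition may differ.

```python
def mean_abs_error(expected, obtained):
    """Average absolute difference between desired and network outputs."""
    return sum(abs(e - o) for e, o in zip(expected, obtained)) / len(expected)

# expected/obtained outputs from the case-1 table
expected = [0.977364, 0.976650, 0.792536, 0.796503, 0.408812,
            0.665736, 0.727370, 0.646120, 0.767112]
obtained = [0.976441, 0.971797, 0.792681, 0.794138, 0.395036,
            0.671583, 0.729866, 0.646411, 0.765696]
print(mean_abs_error(expected, obtained))
```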

9.2 Case Study 2
Here is a list of the standard data below:

Expected result | Result obtained
0.945162 | 0.945168
0.565294 | 0.565299
0.969272 | 0.968279
0.976441 | 0.976448
0.991051 | 0.999063
0.945162 | 0.945168
0.395036 | 0.395031
0.671583 | 0.671589
0.729866 | 0.725860
0.646411 | 0.646409
0.765696 | 0.765699
0.570136 | 0.570149
0.929805 | 0.929806
0.987139 | 0.987135
0.999871 | 0.999877

average error per cycle = 0.027662
error last cycle = 0.007597
error last cycle per pattern = 0.000019
total cycles = 98000
total patterns = 6880000

CONCLUSION
In this paper, a fuzzy neural network approach is applied to software reliability engineering. Although many researchers think that the neural network approach is a black-box method, the neural networks can still be explained from a different aspect, that is, from the mathematical viewpoint of software reliability modeling. I have shown in detail how to design the elements of the neural network to represent the existing models. The inputs to the neural network are MTBF and time, and the output is software reliability. The output of the neural network is validated using the exponential distribution model [2],[3],[4]. The correlation coefficient, which shows the degree of accuracy of the fuzzy neural network model, is given below:

Case study | Correlation coefficient
Case Study 1 | 0.999579
Case Study 2 | 0.999931
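Correlation coefficients of this kind can be checked with the standard Pearson computation. This sketch uses the case-1 expected/obtained outputs; the paper's full data set is larger, so the value here only illustrates the calculation.

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient between expected and predicted values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# expected/obtained outputs from the case-1 table
expected = [0.977364, 0.976650, 0.792536, 0.796503, 0.408812,
            0.665736, 0.727370, 0.646120, 0.767112]
obtained = [0.976441, 0.971797, 0.792681, 0.794138, 0.395036,
            0.671583, 0.729866, 0.646411, 0.765696]
print(correlation(expected, obtained))
```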

Table2

As neural networks are considered a black box, the results of the proposed model are verified against an existing model; the exponential distribution model is used for validation of the results:

R = exp(-λt)

where
R = reliability,
λ = the failure rate, i.e. the number of failures per hour (the reciprocal of MTBF),
t = the time period for which the reliability is to be calculated.

Neural networks deal well with complex, nonlinear relationships and noisy data. There can be a tremendous amount of trial and error in processing and selecting inputs. Training a neural network can be computationally intensive, and the results are sensitive to the selection of the learning parameters. Another drawback of neural networks is that the knowledge stored in their weights cannot be interpreted in simple terms related to software metrics, which some analytical models can do.
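The exponential validation model can be evaluated directly. A minimal sketch (example numbers hypothetical), with λ as the failure rate, the reciprocal of MTBF:

```python
import math

def reliability(failure_rate, t):
    """Exponential model R = exp(-lambda * t), with lambda in failures/hour."""
    return math.exp(-failure_rate * t)

# e.g. a component with an MTBF of 100 hours (lambda = 0.01) run for 10 hours
print(reliability(1.0 / 100.0, 10.0))
```

Reliability is 1 at t = 0 and decays monotonically as the mission time grows.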

Fig. 3. Result for case 2

After training the neural network, the weights are generated; they are summarized below:
1 -0.714588 -2.295422 1.294301 -0.097654 -1.342859 -3.219371 2.783578 2.284196
1 -1.109957 0.262510 -7.740844 1.284753 2.320924 0.302445 -0.549055 0.583874
2 -0.390682 -0.617271 -3.954391 1.791666 1.817389 -1.033096
2 3.248093 0.736044 -1.721729 0.930586 0.602024 0.521881
2 5.836539 -0.303995 -0.767941 -0.001422 0.531786 -0.597786
2 -0.751609 -0.863094 0.480778 -1.713410 0.039522 0.111841
2 -1.931814 -0.419395 0.793617 -0.338331 -1.115845 -1.144012
2 3.774179 0.251104 -2.062135 0.286253 -0.197699 -0.429261
2 -2.980067 -0.406325 0.231854 0.859418 -0.455224 -0.348361
2 -2.718219 -1.113166 -0.503107 -0.186728 -0.713809 -0.952001
3 6.116115
3 0.624598
3 -3.945798
3 2.442640
3 2.025535
4 0.055965
(the first number on each row is the layer index)

11. FUTURE WORKS


The simulator can be used for many other problems where the relation between input(s) and output(s) is not defined or cannot be understood. Fuzzy logic is incorporated in the backpropagation simulator to make the tool more powerful. A genetic algorithm can be incorporated to make the fuzzy-neural network more robust and efficient.


However, there are many research issues that I want to address in order for the model to be fully specified and applied in real software development projects for estimation and prediction.

REFERENCES
[1] S. Inoue and S. Yamada, "Generalized Discrete Software Reliability Modeling With Effect of Program Size", IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 37, no. 2, March 2007, pp. 170-179, DOI 10.1109/TSMCA.2006.889475.
[2] J. D. Musa et al., Software Reliability Engineering, John Wiley & Sons, New York, 1998.
[3] K. Goseva-Popstojanova and K. S. Trivedi, "Failure correlation in software reliability models", IEEE Transactions on Reliability, vol. 49, no. 1, March 2000, pp. 37-48, DOI 10.1109/24.855535.
[4] Kai-Yuan Cai, "Censored software-reliability models", IEEE Transactions on Reliability, vol. 46, no. 1, March 1997, pp. 69-75, DOI 10.1109/24.589930.
[5] C.-J. Lin and C.-T. Lin, "Reinforcement learning for ART-based fuzzy adaptive learning control networks", in Proc. FUZZ-IEEE/IFES, March 1995, vol. 3, pp. 1299-1306.
[6] W. Schiffmann, M. Joost and R. Werner, "Comparison of Optimized Backpropagation Algorithms", in Proc. European Symposium on Artificial Neural Networks (ESANN'93), April 1993, pp. 97-103.
[7] N. Karunanithi, D. Whitley and Y. K. Malaiya, "Using neural networks in reliability prediction", IEEE Software, July 1992, pp. 53-59.
[8] Shiyi Xu, "An Accurate Model of Software Reliability", in Proc. 13th Pacific Rim International Symposium on Dependable Computing (PRDC 2007), 17-19 Dec. 2007, pp. 77-84, DOI 10.1109/PRDC.2007.12.
[9] C. T. Lin and C. S. G. Lee, "Neural-network-based fuzzy logic control and decision system", IEEE Transactions on Computers, vol. 40, pp. 1320-1336, Dec. 1991.
[10] Shin-ichi Horikawa, Takeshi Furuhashi and Yoshiki Uchikawa, "On Fuzzy Modeling Using Fuzzy Neural Networks with the Back-Propagation Algorithm", IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 801-806, Sep. 1992.
