TERMOANDES S.A.
INDEX

1. INTRODUCTION
2. Theoretical Distributions for Analysis
   2.1 Normal Distribution
   2.2 Log Normal Distribution
   2.3 Exponential Distribution
   2.4 Weibull Distribution
3. Decision Process
4. Stages in the Decision Process
   4.1 Bar chart. Descriptive Measures
   4.2 Goodness-of-fit tests
      4.2.1 Kolmogorov-Smirnov Test (K-S)
      4.2.2 Test K-S Lilliefors (normality)
      4.2.3 Stephens Test
      4.2.4 Shapiro Wilks (Normality)
      4.2.5 Probability Plot
      4.2.6 Anderson Darling Test
      4.2.7 Weibull Distribution Analysis
         4.2.7.1 Linearization
         4.2.7.2 Estimation of F(t)
         4.2.7.3 Estimation of β0 and β1 parameters for the linear equation
         4.2.7.4 Estimation of β and η in Weibull Distributions
         4.2.7.5 Plotting the linearized Weibull distribution
         4.2.7.6 B Life, Definition and How to Compute It
      4.2.8 How to calculate correlation coefficient ρxy
5. How to calculate MTBF and Useful Life
6. Failure Finding Tasks in RCM II Software
1. INTRODUCTION
The aim of this module is to define:
1. The frequency for preventive maintenance (scheduled restoration or discard tasks).
2. The Mean Time Between Failures (MTBF), which determines when corrective tasks must be carried out.
The calculation algorithms depend on the distribution that best fits the data (failures). Consequently, the very first step is to determine the most suitable distribution. This task involves two kinds of analysis: a qualitative one, comprising bar charts and descriptive measures, and a quantitative one, which involves a series of tests (depending on the user's view) together with probability plots for several distributions and their associated correlation coefficients, which measure the degree of linearization.
These distributions are: Normal, Log Normal, Exponential and Weibull.
The normal and log normal distributions are applied to the wear-out zone of the bathtub curve, while the Weibull distribution can also be used in the infant mortality and useful life zones.
The exponential distribution is used to analyse random failures, that is to say, those with a constant failure rate (the useful life zone). If the data fit this distribution, there is no rapid increase in failure rate (see Figure 1); therefore, neither a useful life value (preventive maintenance) nor an MTBF measure (corrective maintenance) can be determined. In this case it is useless to apply preventive maintenance, and that is the reason why the possibility of predictive maintenance (MdP) should be considered instead. Although there is an MTBF for the exponential distribution, which we will call the exponential MTBF, it only enables us to compare components and parts of the same kind, such as bearings, with different MTBFs, which is useful for choosing the component with the highest MTBF value.
To select the best distribution for the data we can make use, as said before, of both a qualitative and a quantitative analysis; it is advisable to use both of them to be more precise about which distribution is most adequate. (See the flow chart for the Decision Process.)
Figure 1
This statistical module uses historical data and comprises the following sections:
Descriptive analysis of data: a qualitative analysis which allows determining visually the distribution that best fits the data. It uses a graphic representation of the data by means of a bar chart of frequencies, and it computes central tendency parameters, scatter parameters, and asymmetry, skewness and kurtosis coefficients, all of which help to draw conclusions regarding the normality of the data. There are also other parameters which can help us find out whether a given distribution is normal, for example IQR/s (where IQR is the interquartile range and s the standard deviation of the sample); if this coefficient is close to its normal-theory value of about 1.3 (within roughly 0.13) we can infer the distribution is normal.
Goodness-of-fit tests: these determine in a quantitative way the distribution most appropriate for the data. The methods used are: Kolmogorov-Smirnov (K-S), K-S Lilliefors, Anderson-Darling and Shapiro-Wilk. The distributions to bear in mind are: exponential, normal, log normal and Weibull. Additionally, another procedure, probability paper, can be used to confirm such distributions: if the values (data) closely resemble a straight line, we can conclude the distribution fits the data. This is shown together with a correlation coefficient that gives an idea of the quality of the fit. Another option is to compare the correlation coefficients obtained for the various distributions and choose the highest one.
Computing MP frequency and MTBF: once the distribution is selected, the next step is to compute the time at which to carry out preventive maintenance (the end of useful life) and the MTBF, which indicates the frequency of corrective maintenance.
2. Theoretical Distributions for Analysis

2.1 Normal Distribution

Probability Density Function

f(t) = (1/(σ·√(2π))) · e^(−(t−μ)²/(2σ²))

Parameters

μ: mean; σ²: variance

Distribution Function

F(t) = ∫_{−∞}^{t} (1/(σ·√(2π))) · e^(−(x−μ)²/(2σ²)) dx

Reliability

R(t) = 1 − F(t) = ∫_{t}^{∞} (1/(σ·√(2π))) · e^(−(x−μ)²/(2σ²)) dx

Failure Rate

z(t) = f(t) / R(t)
The failure rate z(t) is increasing, therefore the normal curve can be used to describe wear-out. However, as we will see, when the wear-out zone does not match a normal distribution, a log normal or a Weibull distribution might be the most appropriate.
2.2 Log Normal Distribution

A variable t has a log normal distribution when ln t has a normal distribution with mean μ and standard deviation σ, i.e. when

y = (ln t − μ) / σ   (1)

has a standard Normal (0, 1) distribution.
2.3 Exponential Distribution

Probability Density Function

f(t) = λ·e^(−λt),  t ≥ 0,  λ > 0 (failure rate)

Parameter

λ: failure rate

Distribution Function

F(t) = 1 − e^(−λt)

Reliability

R(t) = e^(−λt)

Failure Rate

z(t) = λ

Mean Life or Mean Time Between Failures

TMEF = 1/λ
For a device to fail exponentially it must not be sensitive to age and use. The distribution is applied to components with a long useful life, exceeding the service life of the systems involved, and to components which are replaced preventively before wear signs are present.
The parameter used in exponential distributions is the failure rate λ [failures/time]; its inverse is the Mean Time Between Failures, or mean life, which is ultimately what we are estimating:

TMEF = 1/λ̂ = (Σ_{i=1}^{n} t_i) / n

where n is the number of failures in a given period and t_i is the time at which the i-th failure occurs.
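As a minimal sketch (Python), the exponential MTBF estimate above is just the sample mean of the times between failures; the values used here are the seven illustrative times from the worked example table in section 4.2.7:

```python
# Estimate of the exponential MTBF (mean life) from failure data.
t = [287, 349, 412, 453, 492, 521, 604]  # hours between failures (illustrative)

n = len(t)                # number of failures in the period
tmef = sum(t) / n         # TMEF = (sum of t_i) / n = 1 / lambda_hat
lam = 1 / tmef            # estimated failure rate [failures/hour]

print(round(tmef, 1))     # mean time between failures in hours
```

The same mean is reused later as the estimate of λ when an exponential null hypothesis is tested.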
2.4 Weibull Distribution
As a general rule, mechanical and electromechanical components such as gears, bearings, motors, relays and breakers fail mainly due to wear, though they can have more than one wear mechanism. In this case the density can be asymmetric, and it is then wise to use a Weibull distribution instead of a normal one. This distribution is quite flexible: for one value of the shape parameter β it can represent an exponential distribution, while for another value of β it can approach a normal distribution, a log normal, etc. It can be used in the three zones of the bathtub (Davies) curve: infant mortality, useful life and wear-out.
Probability Density Function

f(t) = (β/η)·((t−γ)/η)^(β−1) · e^(−((t−γ)/η)^β)

Parameters:
β: shape parameter
η: scale parameter or characteristic life
γ: location parameter, origin, guarantee period, or minimum life (γ = 0 means failures can start at t = 0)

Distribution Function

F(t) = 1 − e^(−((t−γ)/η)^β)

Reliability

R(t) = e^(−((t−γ)/η)^β)

Failure Rate

z(t) = (β/η)·((t−γ)/η)^(β−1)

Mean Life

E(t) = γ + η·Γ(1 + 1/β)

where Γ(1 + 1/β) is the gamma function evaluated at 1 + 1/β.
For γ = 0 and η = 1, the effect of β on the failure rate (see bathtub curve) can be interpreted as follows:
For β < 1, z(t) is decreasing; it can be used in the infant mortality period.
For β = 1, z(t) is constant and equals 1/η, and the Weibull distribution reduces to the exponential with mean η; it can be used in the useful life period.
For β > 1, z(t) is increasing; it can be used in the wear-out zone. For β = 2 the function z(t) is linear, and for β ≈ 3.5 the Weibull distribution closely resembles the normal one.
Its Cumulative Distribution Function (hereafter FDA) gives area values under the curve (probability):

F(t) = 1 − e^(−(t/η)^β)

This is an example: the duration in hours of a drill bit in a factory has a Weibull distribution with parameters β = 2 and η = 100. Let us calculate the probability that a drill bit fails before 80 hours of operation:

F(80) = 1 − e^(−(80/100)²) = 1 − e^(−0.64) = 0.47

In our case we will use F(t) as data, representing the risk α; the usual values for the risk α are 20%, 10%, 5%, 1% and 0.1%.
Then the time t at which a preventive maintenance task should be carried out for a given risk α is:

t = η·[−ln(1 − α)]^(1/β)
In the example we assumed that the data fit a Weibull distribution and that we know the parameters, or at least an estimate of them. Later on we will see how to find out whether the data fit a Weibull distribution and how to estimate the associated parameters.
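The drill bit example above can be checked numerically. A minimal sketch (Python), using the stated parameters β = 2, η = 100 and, for the preventive time, an assumed risk of 10%:

```python
import math

beta, eta = 2.0, 100.0          # shape and scale (characteristic life), from the example

# Probability of failure before 80 h: F(t) = 1 - exp(-(t/eta)**beta)
F80 = 1 - math.exp(-(80 / eta) ** beta)
print(round(F80, 2))            # -> 0.47, as in the worked example

# Time for a preventive task at risk alpha: t = eta * (-ln(1 - alpha))**(1/beta)
alpha = 0.10
t = eta * (-math.log(1 - alpha)) ** (1 / beta)
print(round(t, 1))              # -> 32.5 hours
```

With a 10% risk, preventive replacement of the drill bit would be scheduled at about 32.5 hours.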
3. Decision Process

[Flow chart: Data → Qualitative Analysis (histogram, descriptive measures) → Quantitative Analysis (K-S test, probability paper) → Does a Normal, Log Normal or Weibull distribution fit? If the fit is exponential: no preventive maintenance → MTBF and MP frequency calculations.]

These characteristics give us a clear idea of the distribution of the data. The probability paper and the goodness-of-fit tests can be used to verify the distribution model.
4. Stages in the Decision Process

4.1 Bar chart. Descriptive Measures

Calculations

Data xi, i = 1..n: the data are sorted in ascending order. The data to be used must be hours between failures; to obtain these values we subtract each date from the previous one and convert the result into hours. (The Hours column is calculated by the software from the Date column.)

Arithmetic mean:

x̄ = (1/n)·Σ_{i=1}^{n} xi

Median Me: the value from the sample that splits the group of data into two equal halves. If n is odd, the median is the value in the middle position of the data sorted in ascending order. If n is even, the median is the mean of the two most central values.
Mode Mo: the value which has the highest frequency.
Dispersion-Scattering

Range R: the difference between the highest and the lowest values.

R = x_max − x_min

Variance S²: the mean of the squared deviations of each value from the mean; it is denoted S² when it refers to the sample:

S² = Σ_{i=1}^{n} (xi − x̄)² / (n − 1) = [Σ_{i=1}^{n} xi² − (Σ_{i=1}^{n} xi)²/n] / (n − 1)
Asymmetry-Skewness

Pearson's skewness coefficient:

As = (x̄ − Mo) / S

Fisher's skewness coefficient:

g = Σ_{i=1}^{n} (xi − x̄)³ / [(n − 1)·S³]
Bowley's asymmetry coefficient: the two coefficients described so far share one issue, namely that they can take very large absolute values, especially for distributions with extreme asymmetry, for example J-shaped distributions. Bowley's coefficient allows working with a bounded scale:

Asb = (Q3 + Q1 − 2·Me) / (Q3 − Q1)
Kurtosis coefficient:

K = Σ_{i=1}^{n} (xi − x̄)⁴ / [(n − 1)·S⁴]

Percentile kurtosis coefficient:

Kp = [(Q3 − Q1)/2] / (P90 − P10),  with 0 < Kp < 0.5
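A compact sketch (Python, standard library only) of the descriptive measures above, applied to illustrative hour-between-failure values:

```python
import statistics as st

x = [287, 349, 412, 453, 492, 521, 604]      # illustrative hours between failures
n = len(x)

mean = st.mean(x)                             # x-bar
median = st.median(x)                         # Me
s2 = st.variance(x)                           # S^2, sample variance (n - 1 divisor)
s = s2 ** 0.5
r = max(x) - min(x)                           # range R

# Fisher's skewness coefficient, as defined above
g = sum((xi - mean) ** 3 for xi in x) / ((n - 1) * s ** 3)

# Kurtosis coefficient, as defined above
k = sum((xi - mean) ** 4 for xi in x) / ((n - 1) * s ** 4)

print(round(mean, 1), median, r, round(g, 3), round(k, 3))
```

The quartile-based coefficients (Bowley, Kp) would be computed the same way once Q1, Q3, P10 and P90 are obtained from the sorted data.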
4.2 Goodness-of-fit tests

4.2.1 Kolmogorov-Smirnov Test (K-S)

This test compares the empirical distribution Fn(x), built from the data, with the distribution function F(x) of the theoretical model, in our case the Normal, Exponential, Log Normal and Weibull distributions. The test is advised when the hypothetical distribution is fully specified, that is to say, with given parameters.
Order statistics: the result of a random sample is represented by the sequence x1, x2, ..., xn, with xi the value in the i-th place. If we sort the values in ascending order, we obtain x(1), x(2), ..., x(n), with x(1) ≤ x(2) ≤ ... ≤ x(n). The random variable X(i) is known as the order statistic of order i.

Empirical distribution function: suppose X is a random variable with distribution function F(x). A random sample of n values of X comprises the values x1, x2, ..., xn. It is advisable to re-sort the data in ascending order; the sorted xi will be x(1) ≤ x(2) ≤ ... ≤ x(n). The empirical distribution function is then:

Fn(x) = (i−1)/n,  if x(i−1) ≤ x < x(i),  i = 1, ..., n
Fn(x) = 1,        if x ≥ x(n)

where x(0) = −∞.
Our FDA (Cumulative Distribution Function) Fn(x) is then matched against the hypothetical FDA F0(x). If Fn(x) is too different from F0(x), this is evidence that F(x) is not F0(x). The discrepancy between Fn(x) and F0(x) is the magnitude of the absolute difference over all x, that is:

D = max_x |F0(x) − Fn(x)|

From the sorted sample, D+ and D− are generated:

D+ = max_{1≤i≤n} [ i/n − F0(x(i)) ]
D− = max_{1≤i≤n} [ F0(x(i)) − (i−1)/n ]

D = max(D+, D−)
The observed value D is then compared with the critical value Dc obtained from the sample size and the level of significance. If the observed D > Dc, the hypothesis is rejected; if D < Dc, the null hypothesis is not rejected. However, this test is not very powerful, and there may be other distributions for which we would also obtain D < Dc; it is therefore more accurate to say that the analysis of the data does not contradict the distribution specified in the null hypothesis.
The critical values are obtained from Annex 1; for samples with sizes bigger than 35 we apply the formulas also shown in that annex.
The calculation of D depends on the kind of hypothetical distribution:

Exponential distribution:
FDA: F0(t) = 1 − e^(−λt) (see section 2.3).
Parameter estimation: λ is estimated as the inverse of the sample mean.

Weibull distribution:
FDA: F0(t) = 1 − e^(−(t/η)^β) (see section 2.4).
Parameter estimation: as explained in section 4.2.7.4.

Normal distribution:
FDA: instead of using the associated expression it is more convenient to use tables of the standard normal distribution.
Parameter estimation: through the mean and standard deviation values explained in section 4.1.
Note: samples with size lower than 20 do not allow differentiation between distributions, i.e. several distributions may appear equally suitable.
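A sketch of the D statistic above (Python). For illustration only, the exponential null hypothesis with λ estimated from the data is used; strictly speaking the plain K-S test assumes given parameters:

```python
import math

x = sorted([287, 349, 412, 453, 492, 521, 604])   # illustrative sample, ascending
n = len(x)

# Hypothetical FDA: exponential with lambda = 1 / sample mean (illustration only)
lam = n / sum(x)
F0 = [1 - math.exp(-lam * xi) for xi in x]

# D+ = max(i/n - F0(x(i))), D- = max(F0(x(i)) - (i-1)/n), D = max(D+, D-)
d_plus = max((i + 1) / n - F0[i] for i in range(n))
d_minus = max(F0[i] - i / n for i in range(n))
d = max(d_plus, d_minus)

print(round(d, 3))   # compare against the critical value Dc from Annex 1
```

For this wear-out-like sample the discrepancy from the exponential model is large, consistent with the remark that exponential data show no rapid increase in failure rate.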
4.2.2 Test K-S Lilliefors (normality)

When we estimate the parameters to compute F0(x), the typical tabulation of this test leads to a very conservative contrast, with a true significance level much lower than the one in the table. Lilliefors tabulated this statistic (Dn) for the case where the parameters μ and σ² of the normal distribution are estimated with x̄ and s².
The contrast is carried out by computing the statistic D and rejecting the normality hypothesis when the computed D is significantly high, i.e. higher than the table value for the chosen significance level.
The power of this test for medium-sized samples is low; for example, to distinguish between an N(0,1) and a uniform distribution on (−√3, √3), more than 100 values are needed.
The critical values are obtained from the tables in Annex 2; for sizes higher than 30 we resort to the formulae shown at the bottom of that table.
4.2.3 Stephens Test

For large samples, a modified statistic D is compared directly with the following critical values, depending on the case and the significance level:

Case                             D modified                          0.15    0.10    0.05    0.025   0.01
F(x) fully specified             D·(√n + 0.12 + 0.11/√n)             1.138   1.224   1.358   1.480   1.626
F(x) normal, μ and σ² unknown    D·(√n − 0.01 + 0.85/√n)             0.775   0.819   0.895   0.955   1.035
F(x) exponential, λ unknown      (D − 0.2/n)·(√n + 0.26 + 0.5/√n)    0.926   0.990   1.094   1.190   1.308
4.2.4 Shapiro Wilks (Normality)

The statistic is:

W = [ Σ_{j=1}^{h} a_jn·(x(n−j+1) − x(j)) ]² / (n·s²)

where n·s² = Σ(xi − x̄)², h is n/2 if n is even and (n−1)/2 if it is odd, the coefficients a_jn are tabulated, and x(j) is the value of the sorted sample whose location is in the j-th place.
The distribution of W is tabulated, and normality is rejected when the calculated value is lower than the critical one from the table. This is because W measures fit to the straight line, not disparity with respect to the hypothesis.
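In practice the tabulated coefficients a_jn and critical values are built into library routines. A sketch using SciPy's implementation (assuming SciPy is available):

```python
from scipy import stats

x = [287, 349, 412, 453, 492, 521, 604]   # illustrative hours between failures

w, p = stats.shapiro(x)                    # W statistic and its p-value

# Normality is rejected when W is low (equivalently, when p < alpha).
print(round(w, 3), round(p, 3))
if p < 0.05:
    print("reject normality at the 5% level")
else:
    print("data do not contradict normality")
```

Note the direction of the test: unlike K-S, small values of W (not large ones) argue against normality.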
Coefficient table a_in for Shapiro-Wilk:

[Table: tabulated Shapiro-Wilk coefficients a_in, for sample sizes up to n = 50 and coefficient indices i = 1 to 25.]

[Table: critical values of the Shapiro-Wilk W statistic for significance levels 0.01, 0.02, 0.05 and 0.10, for sample sizes up to n = 50; the critical value grows with n, from about 0.69 towards 0.96.]
Note: there is no perfect or near-perfect contrast to prove the hypothesis of normality. This is due to the fact that the true power of each test depends on the size of the sample and the real distribution that generated the data. From that not-so-rigorous point of view, the Shapiro-Wilk contrast is, as a general rule, more suitable for small samples (n < 30), while Kolmogorov-Smirnov, in its Lilliefors modified version, is more suitable for large samples.
4.2.5 Probability Plot

The probability plot is formed by the points (G(m(i)), x(i)), where the m(i) are the uniform order statistic medians (as we will see below) and G is the percent point function of the given distribution. The percent point function is the inverse of the FDA, the Cumulative Distribution Function (the probability that x is lower than or equal to a value). In other words, given a probability, we look up the corresponding x from the FDA.
The uniform order statistic medians are:

m(i) = 1 − m(n)                 for i = 1
m(i) = (i − 0.3175)/(n + 0.365) for i = 2, 3, ..., n−1
m(i) = 0.5^(1/n)                for i = n
In addition, we can draw a straight line fitted to the points as a reference.
This definition implies that a probability plot can easily be generated for any distribution for which the percent point function can be calculated.
One advantage of this method is that the estimates of intercept and slope are in fact estimates of the location and scale parameters of the distribution. This is not so important for the normal distribution (since these parameters are estimated by the mean and standard deviation), but it can be very useful for other distributions.
The probability plot is meant to answer the following questions for a given distribution, for example Weibull:
Is it suitable for my data?
Which is the best distribution for my data?
Example: the first step is to sort the data in ascending order. Then we calculate the uniform order statistic medians, which do not depend on the distribution, and finally we compute G, the percent point function, for each candidate distribution.
The parameters for the Weibull distribution are estimated as explained in paragraph 4.2.7.4.
In the case of the Log Normal distribution, we first apply Ln to the original data, calculate the mean and standard deviation of the transformed data, and finally compute the percentage points G(m(i)).
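A sketch of these steps for the normal case (Python with SciPy, whose `norm.ppf` is the percent point function, i.e. the inverse FDA); only the first five sorted values of the example are used, for brevity:

```python
from scipy.stats import norm

x = sorted([2780.27, 2861.48, 2892.86, 2914.04, 2924.46])  # first 5 sorted values
n = len(x)

# Uniform order statistic medians m(i), as defined above
m = [(i - 0.3175) / (n + 0.365) for i in range(1, n + 1)]
m[-1] = 0.5 ** (1 / n)      # i = n
m[0] = 1 - m[-1]            # i = 1

g = [norm.ppf(mi) for mi in m]   # percent point function of the standard normal

# Plotting (g, x) and fitting a straight line gives the normal probability plot;
# the closer the points lie to a line, the better the normal fit.
for gi, xi in zip(g, x):
    print(round(gi, 3), xi)
```

For the full 25-point example, the same recipe reproduces the m(i) and normal G(m(i)) columns of the table below (e.g. m(1) = 0.0273, G = −1.921).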
Probability plot calculations for the example data:

i    sorted data   m(i)     Normal     Weibull    Exponential   LN data   Log Normal
                            G(m(i))    G(m(i))    G(m(i))                 G(m(i))
1    2780.27      0.0273    -1.921     2784.172      82.924     7.930     2836.082
2    2861.48      0.0663    -1.504     2844.035     205.274     7.959     2868.806
3    2892.86      0.1058    -1.249     2876.775     334.308     7.970     2888.913
4    2914.04      0.1452    -1.057     2899.747     469.161     7.977     2904.196
5    2924.46      0.1846    -0.898     2917.710     610.382     7.981     2916.939
6    2925.79      0.2240    -0.759     2932.634     758.603     7.981     2928.121
7    2948.42      0.2635    -0.633     2945.530     914.554     7.989     2938.267
8    2954.62      0.3029    -0.516     2956.989    1079.087     7.991     2947.693
9    2979.04      0.3423    -0.406     2967.388    1253.201     7.999     2956.609
10   2980.58      0.3817    -0.301     2976.987    1438.080     8.000     2965.168
11   2982.11      0.4212    -0.199     2985.973    1635.145     8.000     2973.488
12   2984.65      0.4606    -0.099     2994.490    1846.117     8.001     2981.664
13   2998.25      0.5000     0.000     3002.650    2073.107     8.006     2989.782
14   3006.77      0.5394     0.099     3010.552    2318.749     8.009     2997.923
15   3009.29      0.5788     0.199     3018.280    2586.387     8.009     3006.166
16   3020.70      0.6183     0.301     3025.915    2880.349     8.013     3014.601
17   3025.33      0.6577     0.406     3033.540    3206.385     8.015     3023.328
18   3034.51      0.6971     0.516     3041.245    3572.360     8.018     3032.473
19   3035.31      0.7365     0.633     3049.137    3989.444     8.018     3042.201
20   3045.65      0.7760     0.759     3057.355    4474.265     8.021     3052.742
21   3051.01      0.8154     0.898     3066.098    5053.173     8.023     3064.445
22   3053.02      0.8548     1.057     3075.680    5771.699     8.024     3077.890
23   3087.05      0.8942     1.249     3086.673    6719.331     8.035     3094.174
24   3109.85      0.9337     1.504     3100.370    8114.473     8.042     3115.860
25   3166.46      0.9727     1.921     3121.004   10764.765     8.060     3151.812

mean      2990.86              3028.59                          8.003
std dev     81.66                42.6089                        0.027     0.00033435
[Figures: probability plots of the data for the four candidate distributions. Normal Distribution Graph: R² = 0.9673, ρ = 0.9835; the remaining plots show ρ = 0.986, ρ = 0.891 and, for the Log Normal Distribution Graph, ρ = 0.9822.]
4.2.6 Anderson Darling Test

The statistic is (standard Anderson-Darling form):

A² = −n − (1/n)·Σ_{i=1}^{n} (2i−1)·[ln w_i + ln(1 − w_{n−i+1})]

where n is the size of the sample and w_i is the standard normal cumulative distribution function (FDA) evaluated at (x(i) − μ)/σ.
For small samples this statistic needs a modification (Stephens):

A²* = A²·(1 + 0.75/n + 2.25/n²)

The statistic is then compared with the proper critical value, depending on the level of significance α, as shown in the table below:

α        0.1     0.05    0.025   0.01
A²crit   0.631   0.752   0.873   1.035
In the case of the Weibull distribution the statistic is the same as in the previous case, but now w_i is the Weibull FDA:

w_i = 1 − e^(−(x(i)/η)^β)

where η and β are the scale and shape parameters respectively.
For small samples the formula needs a modification (Stephens):

A²* = A²·(1 + 0.2/√n)

It then has to be compared with the critical value from the table below:

α        0.1     0.05    0.025   0.01
A²crit   0.637   0.757   0.877   1.038
Any Weibull distribution with shape parameter β = 1 generates an exponential distribution as a special case.
Note: the Anderson-Darling test (like K-S or Shapiro-Wilk) does not tell us that the data fit a normal distribution; it only tells us when the data do not fit the specified distribution. This statement may sound like a play on words, but any statistical test is designed to refute a hypothesis. In the same way that a dry wall is evidence it has not rained, a wet wall could be evidence of rain or of water from a sprinkler: the wet wall is no proof that it has rained, while a dry wall does prove it did not rain.
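As a sketch, SciPy ships an Anderson-Darling implementation with built-in critical values (assuming SciPy is available; `dist='norm'` tests normality):

```python
from scipy import stats

x = [287, 349, 412, 453, 492, 521, 604]   # illustrative hours between failures

res = stats.anderson(x, dist='norm')       # A^2 plus tabulated critical values

print(round(res.statistic, 3))
for crit, sig in zip(res.critical_values, res.significance_level):
    verdict = "reject" if res.statistic > crit else "do not reject"
    print(f"{sig}%: critical value {crit} -> {verdict} normality")
```

SciPy's critical values for the normal case differ slightly from the table above because they are indexed by different significance levels; the decision rule (reject when A² exceeds the critical value) is the same.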
4.2.7 Weibull Distribution Analysis

4.2.7.1 Linearization

F(t) = 1 − e^(−(t/η)^β)

We proceed to linearize the function, that is to say, to put it in the form y = β0·x + β1, so that we can then estimate the parameters of the distribution:

ln(1 − F(t)) = −(t/η)^β
ln R(t) = −(t/η)^β
ln[−ln R(t)] = β·ln t − β·ln η
ln ln[(1 − F(t))^(−1)] = β·ln t − β·ln η
4.2.7.2 Estimation of F(t)

In short, these are the steps we need to follow:
We estimate the parameters of the Weibull distribution by means of the least squares method; to do that, we first linearize the Weibull distribution function.
We use a combination of methods, the Median Ranks method and tabulated values, to estimate the cumulative distribution function F(t).
Once we have estimated F(t), we calculate the parameters of the Weibull distribution; this estimation is also used by other procedures such as the K-S and Anderson-Darling tests. We then calculate the time t for preventive maintenance for a previously selected risk α. Likewise, the calculated E(t) (expected value of time) allows estimating the MTBF for corrective maintenance. Finally, we plot the linearized Weibull distribution, computing and showing the correlation coefficient ρ, which indicates the quality of the fit; the same graph also shows the parameter values and the B life.
The calculations for the linearization of the Weibull FDA can be seen in the following table:
Data t   Xi = ln t   F(t)     Yi = ln[ln(1−F(t))^(−1)]
287      5.659       0.0943   -2.312
349      5.855       0.2295   -1.344
412      6.021       0.3648   -0.790
453      6.116       0.5000   -0.366
492      6.198       0.6352    0.008
521      6.256       0.7705    0.386
604      6.404       0.9057    0.859
For every point (xi, yi) on the straight line, the xi value is the logarithm of the i-th sorted time. Finding the yi coordinate is a bit more complicated, since it involves the estimated value F(ti). For that estimation we use the Median Ranks method, which, in the case of Weibull distributions with complete data (no suspensions), can be calculated by means of the following equation:

i-th Median Rank: F(ti) = [1 + F(0.5; m; n)·(n − i + 1)/i]^(−1)

where F(0.5; m; n) is the median of Snedecor's F distribution with 2(n − i + 1) and 2i degrees of freedom, i is the ordinal position of the failure and n the size of the sample.
One alternative to this formula is a table of pre-established values of the function for each sample size:

[Table: pre-established median rank values by sample size.]

In this software we use the values from the table, and when the size of the sample is greater than 30 we resort to the formula previously mentioned.
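A sketch of this estimation (Python). Benard's approximation F(ti) ≈ (i − 0.3)/(n + 0.4) is used here as a common stand-in for the exact F-distribution formula; for n = 7 it reproduces the F(t) column of the table above to about two decimals:

```python
n = 7   # sample size, matching the table above

# Benard's approximation to the i-th median rank (the exact formula uses the
# median of an F distribution; this approximation is standard practice).
median_ranks = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

for i, mr in enumerate(median_ranks, start=1):
    print(i, round(mr, 4))
```

For example, i = 4 gives exactly 0.5000 and i = 1 gives 0.0946, versus 0.0943 in the table.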
Once every point associated with the values has been calculated, we have to find the equation of the line; to do that we use the least squares method, which is explained in the next paragraph.
4.2.7.3 Estimation of β0 and β1 parameters for the linear equation

The expressions needed to calculate these parameters of the linear equation y = β0·x + β1, using the method of least squares, are the following:

β0 = [n·Σ(xi·yi) − Σxi·Σyi] / [n·Σxi² − (Σxi)²]

β1 = ȳ − β0·x̄
4.2.7.4 Estimation of β and η in Weibull Distributions

The result of the linearization of the Weibull FDA is as follows:

ln ln(1/R(t)) = β·ln t − β·ln η

By comparing this last expression with the linear equation we can conclude:

β0 = β
x = ln t
β1 = −β·ln η
y = ln[ln(1 − F(t))^(−1)]

This way, the shape parameter β equals the slope β0 of the line, and the scale parameter is:

η = e^(−β1/β0)
From the equation of the regression line we obtain β0 = 1.4392 and β1 = −3.4605. We can also compute the correlation coefficient ρ using the expressions we have already dealt with; its value, very close to 1, gives an idea of how good the fit is.
Then we recover the parameters of the Weibull distribution:

β = β0 = 1.4392

β1 = −β·ln η  =>  η = e^(−β1/β0) = e^(3.4605/1.4392) = 11.07

4.2.7.5 Plotting the linearized Weibull distribution

Transforming the arithmetic axes to the double logarithm of 1/[1 − F(t)] versus the logarithm of the data yields what is known as Weibull probability paper (see the graph above).
4.2.7.6 B Life, Definition and How to Compute It

B10 life: the period of time by which 10% of the components could have failed (design view), or the time at which there is a probability of failure of 0.1 (maintenance view).
For example, in real life the reliability of bearings is measured with B10 life: this is the life for which the manufacturer guarantees that no more than 10% of them will fail under given conditions of load and speed.
So if B10 life (a.k.a. L10 life or N10 life) for bearing model X is double that of bearing model XY, then model X is the more reliable one. This is quite useful when it comes to defining purchase orders for bearings.
The expression to calculate the Bx life is the following:

t = η·[−ln(1 − F(t))]^(1/β)
Where x is the percentage of components that could have failed by that time (Bx life) or, in maintenance terms, x is the probability that the component fails before that moment (Bx life).
For example, the B10 life for the above example is:

B10 = 11.07·[−ln(1 − 0.1)]^(1/1.439) = 2.31
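The Bx formula above is a one-liner; the function name below is illustrative, and the parameters are those of the worked example.

```python
from math import log

def life_bx(x, beta, eta):
    """Bx life: time by which a fraction x of components has failed.

    t = eta * (-ln(1 - x))**(1/beta), from the rearranged Weibull CDF.
    """
    return eta * (-log(1.0 - x)) ** (1.0 / beta)

# B10 with the parameters from the example above
b10 = life_bx(0.10, 1.439, 11.07)
```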
ρxy = Cov(x, y) / (σx·σy),  with −1 ≤ ρxy ≤ 1

Where σx and σy are the standard deviations of x and y, respectively, and Cov(x, y) is the covariance of x and y, computed as follows:

Cov(x, y) = (1/n)·Σ_{i=1}^{n} (xi − x̄)(yi − ȳ)
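The correlation coefficient used to judge the goodness of the linear fit can be computed directly from these two definitions; the function name and sample data are illustrative.

```python
from math import sqrt

def correlation(xs, ys):
    """Pearson correlation rho = Cov(x, y) / (sigma_x * sigma_y),
    with Cov(x, y) = (1/n) * sum((xi - xbar) * (yi - ybar))."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - xbar) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - ybar) ** 2 for y in ys) / n)
    return cov / (sx * sy)
```

A value close to +1 (or −1) indicates a near-perfect linear fit, which is what we look for on the Weibull probability plot.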
The Mean Time Between Failures (MTBF), at which corrective tasks must be done, matches the mean value calculated from the data, i.e., the estimation of μ (the mean of the population).
Log Normal Distribution: In the same way as with the normal distribution, we need to find the time t at which preventive tasks must be done, as well as the time for corrective tasks; the latter can be estimated through the arithmetic mean of the data or through their median value, the last option being more common.
As y has Normal distribution (0, 1), then:
F(y) = risk
Therefore, for risk:
25% => y = -0.6745
10% => y = -1.2816
5% => y = -1.645
1% => y = -2.326
Re-arranging the formula from (1):

t = e^(σ·y + μ)   (2)
The values μ and σ are estimated from the data, more precisely from ln t (formulae in Descriptive Analysis).
The median value corresponds to F(y) = 0.5, i.e., to y = 0 (for Normal(0, 1)).
Then, replacing y = 0 in (1) we have:

ln t = μ  =>  t = e^μ
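The preventive and corrective times derived above can be sketched as follows; the quantile table matches the risk values listed in the text, and the function names and numeric parameters are illustrative assumptions.

```python
from math import exp

# Standard normal quantiles for the risk values listed above
Y_FOR_RISK = {0.25: -0.6745, 0.10: -1.2816, 0.05: -1.645, 0.01: -2.326}

def lognormal_preventive_time(mu, sigma, risk):
    """Time t = e**(sigma*y + mu) at which the failure probability
    equals the accepted risk (log-normal model, formula (2))."""
    return exp(sigma * Y_FOR_RISK[risk] + mu)

def lognormal_corrective_time(mu):
    """Median life t = e**mu (y = 0), used for corrective tasks."""
    return exp(mu)

# Illustrative parameters: mu = 2, sigma = 0.5 (of ln t)
t_prev = lognormal_preventive_time(2.0, 0.5, 0.10)
t_corr = lognormal_corrective_time(2.0)
```

The preventive time is always earlier than the median, since the accepted risk is below 0.5.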
The mean of the Weibull distribution is:

E(t) = η·Γ(1 + 1/β)

where Γ(1 + 1/β) is the Gamma function evaluated at 1 + 1/β.
The CDF expression for Weibull allows determining areas (probability) under the curve:

F(t) = 1 − e^(−(t/η)^β)
Now, in order to calculate the moment for preventive maintenance tasks, we apply a certain risk value α and re-arrange the variables; this is what we get:

t = η·[−ln(1 − α)]^(1/β)
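Both Weibull results above (the mean life and the preventive-task time for risk α) are direct to compute; the function names are illustrative, and `math.gamma` supplies Γ.

```python
from math import gamma, log

def weibull_mean(beta, eta):
    """Mean life E(t) = eta * Gamma(1 + 1/beta)."""
    return eta * gamma(1.0 + 1.0 / beta)

def weibull_preventive_time(beta, eta, alpha):
    """Preventive-task time for accepted risk alpha:
    t = eta * (-ln(1 - alpha))**(1/beta)."""
    return eta * (-log(1.0 - alpha)) ** (1.0 / beta)
```

As a sanity check: for β = 1 the Weibull reduces to the exponential distribution, whose mean is simply η.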
The crucial element for acceptable service of a hidden function is the availability demanded to reduce the probability of multiple failure to a tolerable level. In consequence, from the previous formula we have:

I_DP = PFM / PFP

PFM is a tolerance value established by the company; PFP is usually a known value.
These two probabilities are estimated from the MTBF.
Let us analyse the previous statements considering random failures, that is to say, an exponential distribution of failures, which is a good model for electronic components such as protective devices. According to the analysis we can state that the probability of failure is approximately 1/10; in other words, there is a failure every 10 periods (see Annex Prob=lambda), which at the same time is the failure rate, always bearing in mind that it is an exponential distribution (strictly speaking, this is only an approximation).
Then, since the failure rate can be obtained from the MTBF (its inverse), with this last piece of information we get the probability of multiple failure PFM, and with the same analysis we also get the probability of failure of the protected function PFP.
Then, the formula for unavailability is:

I_DP = TFP / TFM
Making use of the availability D = 1 − I_DP obtained from the calculated unavailability, we resort to the table below to determine the frequency of failure-finding tasks IBF as a percentage of the MTBF for multiple failures:
Availability (%)   IBF (% of MTBF_MF)
89.00              25.00
90.00              22.50
91.00              20.00
92.00              17.50
93.00              15.00
95.00              10.00
97.50               5.00
98.00               4.00
99.00               2.00
99.50               1.00
99.90               0.20
99.95               0.10
99.99               0.02
The software provides the user with a combo box to choose one of these values; in addition, the user can type custom values.
Once the value for availability is chosen, the TDP time, which is the Mean Time Between Failures of the protective device, is estimated. The TDPS time, the MTBF of the redundant device needed to calculate the FFI, is processed internally by means of the Gamma distribution, as described in Method 2.
Method 2
IBF = 2 x (TFP/TFM) x TDPS
Where:
o TFP: mean time between failures of protected function
o TFM: mean time between failures of multiple failures
o TDPS: mean time between failures of protective device of all the protective system
Note: all the protective system means the redundant protective system.
Calculating FFI
In order to compute the FFI we need the MTBF of the protective device of the whole redundant system, TDPS.
Whether we use Method 1 or 2, the formula is as follows:
Where R(t) is the reliability of the protective system (redundant system), calculated using:

R(t) = Σ_{x=k}^{n} C(n, x)·(e^(−λt))^x·(1 − e^(−λt))^(n−x)
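The k-out-of-n reliability sum above is a binomial tail in the single-device reliability e^(−λt); a minimal sketch, with an illustrative function name:

```python
from math import comb, exp

def r_system(k, n, lam, t):
    """Reliability of a k-out-of-n redundant protective system whose
    devices fail exponentially with rate lam:

    R(t) = sum_{x=k}^{n} C(n, x) * p**x * (1 - p)**(n - x),
    where p = e**(-lam*t) is the single-device reliability.
    """
    p = exp(-lam * t)
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))
```

For the 2-out-of-3 case used later in the example, the sum collapses to the closed form 3e^(−2λt)·(1 − e^(−λt)) + e^(−3λt).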
[Figure: reliability curve R(t), showing ordinates R(ti) and R(ti+1) at abscissae ti and ti+1.]

The area under R(t) is approximated by trapezoids of equal width Δ:

A1 = Δ · [R(t2) + R(t1)] / 2
A2 = Δ · [R(t3) + R(t2)] / 2
...
Ai = Δ · [R(ti+1) + R(ti)] / 2

A_total ≈ ∫₀^∞ R(t) dt
The idea is to change the quantity of intervals (they are all equal in size) or their size, while keeping within predefined error margins.
By doing this, the FFI is calculated for both methods.
This is what we see on the screen.
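The trapezoidal integration described above can be sketched as follows. The upper integration limit and step count are illustrative choices made so that the truncated tail and the discretisation error are negligible; the function names are not from the original.

```python
from math import comb, exp

def r_system(k, n, lam, t):
    # k-out-of-n reliability with exponentially failing devices (rate lam)
    p = exp(-lam * t)
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))

def tdps_trapezoid(k, n, lam, t_max=60.0, steps=12000):
    """TDPS = integral of R(t) dt, approximated with equal-width
    trapezoids A_i = delta * (R(t_{i+1}) + R(t_i)) / 2."""
    delta = t_max / steps
    total = 0.0
    for i in range(steps):
        t0, t1 = i * delta, (i + 1) * delta
        total += delta * (r_system(k, n, lam, t0) + r_system(k, n, lam, t1)) / 2
    return total
```

For the 2-out-of-3 system with λ = 0.25 used in the example below, the exact integral is 3/(2λ) − 2/(3λ) = 10/3 ≈ 3.33 years, which the trapezoid sum reproduces.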
Where:
o TFP: MTBF of the protected function.
o TDP: MTBF of the single protective device.
o TFM: MTBF of the multiple failure, i.e. the protected function fails when the hidden function is already in a failed state.
Redundancy at least 1 out of 5: means that at least 1 out of 5 protections must work to cause automatic disconnection.
If Method 1 is selected, then we only need to fill in the values for TDP, Availability and, of course, the redundancy of the protective device.
If Method 2 is selected, then the Availability field is not enabled.
Example: We will develop the following exercise applying both methods and solving it by hand; then we will verify the result using the FFI from the RCM II software.
There is an automatic disconnection (TRIP) if at least 2 out of 3 signals are sent:
Lube oil pressure is below the setting of switch MBV26CP001 (i.e., 4.5 bar).
Pressure of oil flow to the bearing, measured by pressure transducer MBV26CP101, is below the permissible limit (i.e., 1.5 bar) for longer than 3 seconds.
Lube oil pressure is below the setting of switch MBV26CP002 (i.e., 1.0 bar) for longer than 3 seconds.
Bear in mind there are 3 protective devices: pressure switches MBV26CP001 and MBV26CP002 and pressure transducer MBV26CP101. At least 2 of them have to fail for the protection (TRIP) to fail.
The information provided by operations is the following:
o TFP: MTBF of protected function is 2 years.
o TDP:MTBF of a single protective device is 4 years.
o TFM: MTBF of the multiple failure, i.e. the protected function fails after the hidden function is already in a failed state; it is estimated as 6 years.
o D: availability demanded, 98.0%, since functional failures are operational.
Reliability R(t), with n = 3, k = 2 and λ = 1/TDP = 0.25 per year:

R(t) = Σ_{x=2}^{3} C(3, x)·(e^(−λt))^x·(1 − e^(−λt))^(3−x) = 3e^(−2λt)·(1 − e^(−λt)) + e^(−3λt)

TDPS = E(t) = ∫₀^∞ R(t) dt = ∫₀^∞ (3e^(−0.5t) − 2e^(−0.75t)) dt = 3.33 years
Method 1:
IBF = 2 × I × TDPS = 2 × (1 − 98%) × 3.33 = 0.133 years
IBF ≈ 1 month 18 days
Method 2:
IBF = 2 × (TFP/TFM) × TDPS = 2 × (2/6) × 3.33 = 2.222 years
IBF ≈ 2 years 2 months 20 days
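The arithmetic of the worked example can be verified in a few lines; the variable names are illustrative, and TDPS is taken from the exact integral rather than the numerical one.

```python
# Figures from the worked example: TDP = 4 years (lam = 0.25/yr),
# TFP = 2 years, TFM = 6 years, demanded availability 98%,
# 2-out-of-3 protective system.
lam = 1 / 4.0

# Exact integral of R(t) = 3e^(-2*lam*t) - 2e^(-3*lam*t) over [0, inf)
tdps = 3 / (2 * lam) - 2 / (3 * lam)

ibf_method1 = 2 * (1 - 0.98) * tdps       # 2 * unavailability * TDPS
ibf_method2 = 2 * (2.0 / 6.0) * tdps      # 2 * (TFP / TFM) * TDPS
```

Method 1 yields about 0.133 years (roughly 1 month 18 days) and Method 2 about 2.22 years, matching the manual results.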
Conclusion: The results obtained through manual calculations are the same as those obtained with the RCM II software. It is worth mentioning that the purpose of this exercise is not to prove that the results from the software are right, since that would have to be done in a general way; rather, the aim is to detect any failure or error in the programming algorithm by contradiction, and none occurred.
One last note: the choice of method depends on the importance of the case and on the reliability of the information we have about the MTBF of the protective device, of the protected function, and of the multiple failure. If the information is not reliable enough and the issue is not very important, we can choose Method 1, selecting a demanded availability. Otherwise, we should resort to Method 2.