
Journal of Loss Prevention in the Process Industries 26 (2013) 1285–1292


An imprecise Fault Tree Analysis for the estimation of the Rate of OCcurrence Of Failure (ROCOF)

Giuseppe Curcurù, Giacomo Maria Galante, Concetta Manuela La Fata*

Dipartimento di Ingegneria Chimica, Gestionale, Informatica, Meccanica (DICGIM), Università degli Studi di Palermo, 90128 Palermo, Italy

Article history:
Received 11 March 2013
Received in revised form 9 July 2013
Accepted 9 July 2013

Abstract

The paper proposes an imprecise Fault Tree Analysis in order to characterize systems affected by the lack of reliability data. Differently from other research works, the paper introduces a classification of basic events into two categories, namely Initiators and Enablers. Actually, in real industrial systems some events refer to component failures or process parameter deviations from normal operating conditions (Initiators), whereas others refer to the functioning of safety barriers to be activated on demand (Enablers). As a consequence, the output parameter of interest is not the classical probability of occurrence of the top event, but its Rate of OCcurrence (ROCOF) over a stated period of time. In order to characterize the basic events, interval-valued information supplied by experts is properly aggregated and propagated to the top. To this purpose, the Dempster–Shafer Theory of evidence is proposed as a more appropriate mathematical framework than the classical probabilistic one. The proposed methodology, applied to a real industrial scenario, can be considered a helpful tool to support risk managers working in industrial plants.
© 2013 Elsevier Ltd. All rights reserved.

Keywords:
Rate of Occurrence of Failure
Fault Tree Analysis
Initiator events
Enabler events
Dempster–Shafer Theory

1. Introduction

Risk Analysis (RA) is defined as the process of systematic use of available information in order to identify hazards and to estimate the risk (IEC 60300-3-9, 1999). It consists of four basic steps, namely hazard analysis, consequence analysis, likelihood assessment and risk estimation (AIChE, 2000). Each step makes use of different qualitative and quantitative techniques, which collectively guide toward estimating the risk and ensuring the system safety.
With relation to the likelihood assessment, Fault Tree Analysis (FTA) is the most popular and recommended technique. It makes possible the identification and analysis of the conditions and factors that lead to the occurrence of a defined undesired event (i.e. top event) significantly affecting the system performance (IEC 61025, 2006). After identifying all the possible dangerous top events, the risk analyst needs to identify all the possible causes (i.e. basic events) whose combination can generate the undesired event. Commonly, researchers do not distinguish among different types of basic events and are mainly interested in the characterization of the top event in terms of its probability of occurrence.

* Corresponding author. Tel.: +39 09123861842.
E-mail addresses: giuseppe.curcuru@unipa.it (G. Curcurù), giacomomaria.galante@unipa.it (G.M. Galante), concettamanuela.lafata@unipa.it, manuela.lafata@gmail.com (C.M. La Fata).
0950-4230/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.jlp.2013.07.006

The latter two aspects constitute the core of the present paper. Actually, in high risk plants the role played by the identified basic events is quite different: some of them are inherent to the process control, others refer to the functioning of safety barriers. Furthermore, it seems more realistic and appropriate to characterize the undesired event in terms of its rate of occurrence over a stated period of time rather than of its probability of occurrence. Therefore, the paper proposes an imprecise FTA in which two kinds of basic events are considered and whose output parameter is the top event rate of occurrence. In particular, basic events are classified into Initiators and Enablers. The first refer to component failures or process parameter deviations with respect to the standard conditions, whereas the latter represent the failure of the safety barriers. Therefore, the top event arises as a consequence of the occurrence of some initiators together with the failure of all the safety barriers. For the two aforementioned categories of basic events, two different imprecise input parameters are suggested, namely the Rate of OCcurrence Of Failure (ROCOF) for initiators, and the average Probability of Failure on Demand (PFD) or the classical steady-state unavailability (Q) for enablers.
In the context of the present paper, FTA is called imprecise because the input parameters are realistically assumed to be unlikely to be exactly known, i.e. assessed by single values. Actually, the uncertainty on their true value leads to an interval-valued characterization of them and to the use of a more suitable mathematical framework than the classical probabilistic one. Helton, Johnson,


Abbreviations

RA Risk Analysis
FTA Fault Tree Analysis
ROCOF Rate of OCcurrence of Failure
PFD Probability of Failure on Demand
DST Dempster–Shafer Theory
FOD Frame Of Discernment
bpa Basic Probability Assignment
Bel Belief
Pl Plausibility
ENF Expected Number of Failures
NHPP Non-Homogeneous Poisson Process
HPP Homogeneous Poisson Process
LE Level Element
LIC Level Indicator and Controller
LCV Level Control Valve
SIF Safety Instrumented Function
SIS Safety Instrumented System

& Oberkampf (2004) use several simple test problems to show different approaches (probability theory, evidence theory, possibility theory and interval analysis) for the representation of the uncertainty in model predictions that derives from the uncertainty in model inputs. The authors emphasize the care that must be used in interpreting the different results arising from the proposed uncertainty representations. In particular, when uncertainty on input parameters is modeled by means of a uniform probability distribution and no further information is available to distinguish between the potential values that the parameter can assume in the interval, the supplied results are exact only in appearance. Evidence theory overcomes this appearance of exactness by leading to a display of the lowest and the highest probability that are consistent with a probabilistic interpretation of the given information. Therefore, in the present paper the Dempster–Shafer Theory of evidence is proposed as the mathematical framework to deal with these imprecise data.
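The contrast just drawn can be made concrete with a small numeric sketch (the interval [0.1, 0.4] and the threshold 0.2 below are invented for illustration): assuming a uniform distribution over the interval yields a single, exact-looking probability, while reading the same information as a single interval-valued focal element yields only honest bounds.

```python
# A parameter is known only to lie in [a, b]; nothing else is known.
a, b = 0.1, 0.4
x = 0.2  # event of interest: "parameter <= x"

# Uniform-distribution assumption: a single value that looks exact.
p_uniform = (x - a) / (b - a)  # about 0.333, only apparently exact

# Evidence-theoretic treatment: the whole interval [a, b] is one focal
# element with mass 1, so the event is only partially supported.
bel = 1.0 if b <= x else 0.0  # [a, b] entirely inside [0, x]?
pl = 1.0 if a <= x else 0.0   # [a, b] intersects [0, x]?
# bel = 0.0 and pl = 1.0: any probability in [0, 1] is consistent
# with the given information.
```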
2. Literature review

Performing a quantitative FTA requires the characterization of basic events by means of specific reliability input data that are often difficult to come by, especially in process plants wherein the occurrence of severe accident scenarios is rare. As a consequence, the available reliability data are generally poor, so that the knowledge of processes is partial or incomplete. Furthermore, even if available, data have many inherent uncertainty issues, such as variant failure modes, design faults, poor understanding of failure mechanisms, as well as the vagueness of the system phenomena.
In the literature, uncertainty due to natural variation or randomized behavior of physical systems is called aleatory or objective uncertainty, whereas that due to the lack of knowledge or incompleteness about the parameters characterizing the physical systems is known as epistemic or subjective uncertainty (Ferson & Ginzburg, 1996; Hoffman & Hammonds, 1994). In the RA field, the probabilistic approach has been widely used to manage both these uncertainties. However, such an approach requires known probability density functions, generated from historical data, that are commonly not available in process plants. As a consequence, meaningful attention has been paid by researchers to theoretical approaches alternative to the probabilistic one. In particular, the possibility theory and the evidence theory (Shafer, 1976), also known as Dempster–Shafer Theory (DST), have been considered the most promising methodologies to deal with epistemic uncertainty. In addition, the DST suggests several combination rules to aggregate evidence coming from different sources of information (Sentz & Ferson, 2002).
Generally, DST has been mainly applied to design problems (Bae, Grandhi, & Canfield, 2003, 2004), whereas its application in the reliability field has not yet been widely researched. Furthermore, the application of DST in the reliability field has been mainly focused on the use of the so-called three-valued logic, wherein a discrete Frame Of Discernment (FOD) is defined (Guth, 1991). The structure of the FOD within the three-valued logic approach is of the type {T, F}, where T and F stand for true and false respectively. Such a kind of FOD leads to four subsets in the power set, forcing the experts to express a judgment in terms of degree of belief for an event to be true (i.e. the event occurs) or false (i.e. the event does not occur). However, in real engineering applications, it is not reasonable to expect that experts can supply a degree of belief about an event being true and false. To overcome such a drawback, an innovative application of DST in the FTA is proposed in (Curcurù, Galante, & La Fata, 2012a, 2012b). Firstly, a continuous FOD, coincident with the real interval [0,1], is defined. Then, in order to supply the probability of occurrence of basic events with an associated degree of belief, the involvement of a team of experts is proposed. Through the Dempster combination rule, the supplied information is aggregated and then propagated to the top. Belief and Plausibility functions are employed to estimate the uncertainty about the top event probability of occurrence.
The application of the possibility theory to the FTA reduces to the employment of the classical fuzzy set theory (Zimmermann, 1991). A methodology for a fuzzy-based computer-aided FTA tool is presented in (Ferdous, Khan, Sadiq, & Amyotte, 2009), where the robustness of the fuzzy-based approach is compared with that of the conventional probabilistic technique. In (Markowski, Mannan, & Bigoszewska, 2009) a fuzzy-based bow-tie model, consisting of the combined representation of the fault and the event trees, is presented for accident scenario risk assessment. Also in (Ferdous, Khan, Sadiq, Amyotte, & Veitch, 2012) a generic framework for bow-tie analysis under uncertainty is developed. It proposes the use of appropriate techniques to handle data uncertainty and introduces the interdependence of input events. Fuzzy-based and evidence theory-based approaches are developed to address the uncertainty.
Generally, FTA is proposed in order to calculate the top event probability of occurrence. The characterization of the top event in terms of its probability of occurrence is often questionable from the perspective of the risk management team. Actually, it seems more useful to know the rate of occurrence of the top event. Such a problem has not yet been explored in depth in the literature and constitutes the main focus of the present paper.
The remainder of the paper is organized as follows. A brief introduction to the Dempster–Shafer Theory is supplied in Section 3. The reliability parameter ROCOF is presented in Section 4, whereas Section 5 deals with the proposed imprecise FTA. A case study for an industrial system is presented in Section 6 and finally conclusions are drawn in Section 7.
3. The Dempster–Shafer theory of evidence

In 1967 Arthur P. Dempster and later Glenn Shafer introduced the theory of evidence, also known as Dempster–Shafer Theory (DST), as a mathematical framework for the representation of epistemic uncertainty. It is based on three different measures, namely the Basic Probability Assignment (bpa), the Belief measure (Bel), and the Plausibility measure (Pl). Within the DST, the Frame Of


Discernment (FOD) Ω is defined as a set of mutually exclusive and exhaustive elements, whereas the power set, P(Ω), comprises all the 2^|Ω| possible subsets of Ω, including the empty set. Here, |Ω| stands for the cardinality of the FOD.

Definition 1. The bpa is the amount of knowledge associated to every subset p_i of P(Ω) and it is denoted by m(p_i). The sum of all the bpas associated to the elements of P(Ω) is equal to 1.

Each element p_i having m(p_i) > 0 is called a focal element of P(Ω). The following assumptions hold for bpas:

m(p_i): P(Ω) → [0, 1]   (1)

m(∅) = 0   (2)

Σ_{p_i ∈ P(Ω)} m(p_i) = 1   (3)

With relation to Eq. (2), it means that in the evidence theory no possibility is given for an uncertain parameter to be located outside of the FOD.

Definition 2. The Belief is defined as the sum of all the bpas of the proper subsets p_k of the element of interest p_i, namely:

Bel(p_i) = Σ_{p_k ⊆ p_i} m(p_k)   (4)

Therefore, the Belief can be considered as a lower bound for the set p_i.

Definition 3. The Plausibility is the sum of all the bpas of the subsets p_k that intersect the set of interest p_i, namely:

Pl(p_i) = Σ_{p_k ∩ p_i ≠ ∅} m(p_k)   (5)

The Plausibility can be considered as the upper bound of the set p_i.
In order to aggregate evidence coming from different and independent sources of information, the DST offers several combination rules. Among them, the first rule defined within the framework of the evidence theory is the Dempster one. Assuming the independence of two generic sources of information, the combination of the corresponding bpas on p_i can be obtained as follows:

m_1 ⊕ m_2 (p_i) = 0, for p_i = ∅
m_1 ⊕ m_2 (p_i) = [Σ_{p_a ∩ p_b = p_i} m_1(p_a)·m_2(p_b)] / (1 − K), for p_i ≠ ∅   (6)

where m_1 and m_2 are the bpas expressed by the two sources with relation to the events p_a and p_b respectively. The parameter (1 − K) in Eq. (6) is a normalization factor that allows respecting axiom (3). The parameter K represents the amount of conflicting evidence between the two sources and it is calculated as follows:

K = Σ_{p_a ∩ p_b = ∅} m_1(p_a)·m_2(p_b)   (7)

Dempster's rule verifies some interesting properties and its use has been justified theoretically by several authors (Dubois & Prade, 1986; Klawonn & Schwecke, 1992; Voorbraak, 1991). Anyway, it ignores contradicting evidence among sources by means of the normalization factor and exhibits numerical instability if the conflict among sources is large. As a consequence, Yager (1987) proposed a combination rule in which all the conflicting mass is assigned to the ignorance rather than to the normalization factor. Thus, for a high conflict case (i.e. a higher value of the parameter K), the Yager combination rule gives more stable and robust results than the Dempster one.

4. Rate of occurrence of failure – ROCOF

In a quantitative FTA, different reliability parameters can characterize the top event. Actually, for non repairable systems, the system unreliability is the parameter of interest. Instead, for repairable systems, risk analysts can be focused on the estimation of the Expected Number of Failures over a time horizon (Rausand & Høyland, 2004).
Let us consider a time interval [0, t]. N(t) represents the number of failures over this time interval. If s and t are two different time instants with s < t, the difference [N(t) − N(s)] indicates the number of failures occurring in the interval [s, t]. By indicating with ENF(t) the Expected Number of Failures at t, i.e. E[N(t)] = ENF(t), the unconditional intensity of failure u(t) is defined as follows:

u(t) = dENF(t)/dt = lim_{Δt→0} E[N(t + Δt) − N(t)] / Δt   (8)

Since the time interval Δt is supposed to be small, it is possible to approximate the previous equation to:

u(t) ≈ E[N(t + Δt) − N(t)] / Δt   (9)

From Eq. (9), u(t) can be interpreted as the ratio between the mean number of failures in the interval Δt and the interval itself. Considering the previous assumption on Δt, the probability of the number of failures being greater than one can be approximately set to 0. Therefore, [N(t + Δt) − N(t)] can be 0 or 1, and E[N(t + Δt) − N(t)] can be interpreted as the probability of occurrence of a failure event in the interval Δt. Then, the following equation holds:

u(t)·Δt = Pr(t ≤ T ≤ t + Δt)   (10)

where T is the stochastic variable time to failure. Therefore, the unconditional intensity of failure u(t) of a generic repairable component, also known as Rate of OCcurrence Of Failure (ROCOF), multiplied by the interval Δt, represents the probability of a failure occurrence in that interval.
On the contrary, the conditional failure rate λ(t) is representative of the component reliability behavior and is defined as follows:

λ(t)·Δt = Pr(t ≤ T ≤ t + Δt | F)   (11)

where F is the event "the component is working at time t". By indicating with A the event "the component fails within the interval [t, t + Δt]", Eq. (11) can be written as follows:

λ(t)·Δt = Pr(t ≤ T ≤ t + Δt | F) = Pr(A | F)   (12)

Therefore, Eq. (10) turns into the following one:

u(t)·Δt = Pr(t ≤ T ≤ t + Δt) = Pr(A)   (13)

Furthermore, it is possible to state that:

A = (A ∩ F) ∪ (A ∩ F̄)   (14)

However, the probability of the event A ∩ F̄ is negligible because it would imply, in the small interval Δt, the occurrence of a repair followed by a failure. As a consequence, Eq. (14) can be written as follows:
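As a quick sketch of Eqs. (6) and (7) (our own illustration, not code from the paper), the Dempster combination and its conflict term K can be implemented on a discrete FOD {T, F} of the three-valued-logic kind recalled in Section 2; the two bpas below are invented:

```python
# Minimal sketch of the Dempster combination rule, Eqs. (6)-(7),
# for bpas given as dicts {frozenset focal element: mass}.
def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for s1, w1 in m1.items():
        for s2, w2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # this accumulates K of Eq. (7)
    # normalization by (1 - K) restores axiom (3)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

T, F, TF = frozenset({"T"}), frozenset({"F"}), frozenset({"T", "F"})
m1 = {T: 0.6, F: 0.3, TF: 0.1}   # source 1 (invented)
m2 = {T: 0.5, F: 0.4, TF: 0.1}   # source 2 (invented)
m12, K = dempster(m1, m2)
# K = 0.6*0.4 + 0.3*0.5 = 0.39; the combined masses sum to 1
```

A large K here is exactly the high-conflict situation for which the text notes that the Yager rule, which moves the conflicting mass to the ignorance set {T, F} instead of renormalizing, behaves more robustly.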


A = A ∩ F   (15)

Therefore, from Eq. (15) it follows that Eq. (12) can be written as follows:

λ(t)·Δt = Pr(A ∩ F) / Pr(F) = Pr(A) / Pr(F) = u(t)·Δt / A(t)   (16)

where A(t) is the component availability at time t. With relation to a non repairable component, it is functioning at time t on condition that it did not fail during the interval [0, t]. As a consequence, the availability A(t) in Eq. (16) turns into the component reliability R(t) and the unconditional intensity of failure reduces to f(t) (i.e. the probability density function of the variable T). Equation (16) allows formulating the following relation between λ(t) and u(t):

u(t) = λ(t)·A(t)   (17)

The latter equation is time-dependent. However, the knowledge of the instantaneous value of the conditional failure rate λ(t) for a renewal process is almost impossible, as is that of the time-dependent availability A(t). As a consequence, Eq. (17) has a practical utility under the assumption of a constant conditional failure rate and a steady-state value of A(t). Therefore, Eq. (17) turns into the following one:

u = λ·A   (18)

In the present paper, as commonly assumed in real applications, the conditional failure rate is considered constant.
In the literature, different methodologies are proposed for the estimation of the ROCOF. For instance, in (Tan, Jiang, & Bai, 2008) the failure process of complex repairable systems, generally constituting a Non-Homogeneous Poisson Process (NHPP), is modeled as a Homogeneous Poisson Process (HPP), which represents a simplified approach for the reliability analysis. As a consequence of this assumption, times between successive failures are independent and identically distributed exponential random variables. Then, the ROCOF is constant and can be easily estimated. In (Phillips, 2001), an NHPP is used to model the occurrence of failure events over time. In this case, the intensity function is not constant. The paper presents a parametric estimate of the expected cumulative intensity function and a non-parametric estimate of the expected ROCOF. In both cases, confidence regions for the ROCOF are proposed. The focus of the present paper is not the ROCOF estimation from historical data. Actually, to achieve this purpose, reliability data should be available to the analysts. This is unfortunately the main drawback of high risk process plants, where rare failure events imply a poor availability of data. Therefore, in the present paper the ROCOF is supposed to be supplied by a team of experts. The imprecision of their available knowledge is also taken into account.
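Under the HPP simplification recalled above (times between failures i.i.d. exponential), the constant-ROCOF estimate reduces to a count divided by the observation time; a minimal sketch with invented failure times, not case-study data:

```python
# Illustrative failure record of a repairable component (hours);
# the times are invented for the sketch, not taken from the paper.
failure_times = [120.0, 410.0, 980.0, 1500.0, 2100.0]
observation_window = 2500.0  # total observed hours

# Under an HPP, N(t) ~ Poisson(u * t), so the maximum-likelihood
# estimate of the constant ROCOF is simply n / t.
u_hat = len(failure_times) / observation_window
# u_hat = 0.002 failures per hour
```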
5. Imprecise FTA

As previously mentioned, the quantification of risk requires the implementation of an FTA in order to characterize the top events previously determined in the hazard analysis step. For the purpose of this work, the distinction of basic events into the two aforementioned categories (initiators and enablers) is essential. For instance, Fig. 1 shows the fault tree related to the top event "Explosion". In such a case, the event "Fire Starts" is classified as initiator, whereas the event "Protective System Unavailable" is the enabler. It is obvious that the top event takes place when a fire starts and the protection system is unavailable.

Fig. 1. Example of a fault tree with initiator and enabler events.

5.1. Input data for initiators and enablers

Information related to basic events is here supposed to be supplied by a team of experts. In particular, each expert reports to the analyst an interval in which he/she believes the parameter of interest could lie. The imprecise information for the initiator events consists of a real positive interval in which the ROCOF lies, with an associated belief mass. Otherwise, for the enabler events the interval refers to the average PFD or the steady-state unavailability Q, with the corresponding belief mass. The two aforementioned parameters can be defined as follows.
The time-dependent PFD is defined as the probability that an undetected failure has occurred at or before the time t at which the intervention of the component is needed. In order to decrease the PFD, components are periodically tested at regular time intervals of length τ. If at the generic inspection instant nτ the component is found in a failed state, then it is replaced with another component assumed as good as new. In most applications it is sufficient to refer to the average value over a period of length τ (Rausand & Høyland, 2004).
The instantaneous unavailability Q(t) is defined as the probability that a repairable component is in a failed state at time t. In real applications, this parameter is approximated by the steady-state unavailability Q (Zio, 2007).
In order to acquire all the input information in interval form with the related belief mass, different scenarios can be hypothesized. For each basic event, it is supposed that the analyst suggests a belief mass (i.e. m) and receives from the experts the corresponding intervals. The two kinds of basic events do not share the same FOD. Actually, the ROCOF is defined in [0, ∞), and the PFD and Q in [0, 1]. Therefore, the two previous intervals represent the two frames of discernment to be considered for the two parameters. According to the evidence theory, the quantity (1 − m) indicates the ignorance mass associated to the corresponding FOD.
The collection of the interval-valued information about each basic event is only the first step of the whole procedure described in Fig. 2. Actually, for each basic event, information coming from the different sources needs to be aggregated. In order to perform this step, the classical Dempster aggregation rule (6) is applied. However, the aggregation process generates more than one aggregated interval for each basic event. As a consequence, all their possible combinations must be considered during the propagation phase, as will be detailed in the next section.
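The aggregation step can be sketched by applying Dempster's rule (6) directly to interval-valued focal elements, intersection of intervals playing the role of set intersection. This is our own minimal sketch, not the authors' code; the inputs are the BE1 and BE4 expert intervals of Table 3, each with mass 0.9 and the residual 0.1 assigned to the FOD:

```python
# Dempster aggregation of two expert bpas whose focal elements are
# intervals, given as dicts {(lo, hi): mass}. Returns the combined
# bpa and the conflict K of Eq. (7).
def combine_intervals(bpa1, bpa2):
    out, K = {}, 0.0
    for (a1, b1), w1 in bpa1.items():
        for (a2, b2), w2 in bpa2.items():
            lo, hi = max(a1, a2), min(b1, b2)
            if lo <= hi:
                out[(lo, hi)] = out.get((lo, hi), 0.0) + w1 * w2
            else:
                K += w1 * w2  # disjoint intervals: conflicting mass
    return {iv: w / (1.0 - K) for iv, w in out.items()}, K

INF = float("inf")
# BE1 (initiator, FOD = [0, inf)): overlapping expert intervals, K = 0
be1, K1 = combine_intervals({(0.030, 0.035): 0.9, (0.0, INF): 0.1},
                            {(0.030, 0.040): 0.9, (0.0, INF): 0.1})
# BE4 (enabler, FOD = [0, 1]): disjoint expert intervals, K = 0.81
be4, K4 = combine_intervals({(0.20, 0.30): 0.9, (0.0, 1.0): 0.1},
                            {(0.35, 0.40): 0.9, (0.0, 1.0): 0.1})
```

The BE1 output reproduces the corresponding row of Table 4 ([0.030, 0.035] with mass 0.90, [0.030, 0.040] with 0.09, the FOD with 0.01), while the disjoint BE4 intervals give K = 0.81 and the normalized masses 4.737E-01, 4.737E-01 and 5.263E-02 of Table 5.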


Fig. 2. Flow-chart of the proposed procedure: information acquisition (ω and PFD/Q) → judgments aggregation → propagation to the top event → calculation of Belief and Plausibility.

5.2. Propagation to the top event

Within the classical FTA, the method commonly used to determine the top event probability of occurrence (P_top) is the minimal cut-set method. A minimal cut-set in a fault tree is the minimal set of independent basic events whose occurrence implies the top event. Therefore, once the minimal cut-sets have been determined, P_top can be calculated as follows (Modarres, Kaminsky, & Krivtsov, 2010):

P_top = P(C_1 ∪ C_2 ∪ … ∪ C_m)   (19)

where C_k is the kth minimal cut-set within the total number of minimal cut-sets m.
The occurrence probability, Q_{C_k}(t), and the unconditional rate of occurrence, u_{C_k}(t), of a generic minimal cut-set C_k are determined by expressions (20) and (21):

Q_{C_k}(t) = ∏_{i=1}^{n} Q_i(t)   (20)

u_{C_k}(t) = Σ_{j=1}^{n} u_j(t) · ∏_{i=1, i≠j}^{n} Q_i(t)   (21)

where n is the total number of events in the minimal cut-set C_k, Q_i(t) is the unavailability or the Probability of Failure on Demand of the ith event in the minimal cut-set C_k, and u_j(t) is the rate of occurrence of the jth event in the minimal cut-set C_k. Therefore, the top event rate of occurrence u_top(t) is calculated by the following equation:

u_top(t) = Σ_{k=1}^{m} u_{C_k}(t) · ∏_{z=1, z≠k}^{m} [1 − Q_{C_z}(t)]   (22)

The latter allows an exact computation of u_top(t) under the hypothesis of independence of the minimal cut-sets. However, in the context of high risk process plants, the terms Q_{C_z}(t) in (22) are negligible because the Q_i(t) are very small. In any case, this approximation is widely used by risk analysts because it implies an overestimation of the parameter u_top(t). Therefore, Eq. (22) turns into the following one:

u_top(t) ≈ Σ_{k=1}^{m} u_{C_k}(t)   (23)

Eqs. (20)–(23), expressed as functions of time, can be written referring to constant values of the involved parameters, as previously specified.
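Eqs. (21) and (23) can be sketched with point (non-interval) values as follows; the (u, Q) numbers are invented for illustration only:

```python
# Point-value sketch of Eq. (21) (cut-set rate of occurrence) and of
# the rare-event approximation of Eq. (23) (top-event ROCOF).
def cutset_rocof(events):
    """events: list of (u, Q) pairs, one per basic event in the cut-set."""
    total = 0.0
    for j, (u_j, _) in enumerate(events):
        prod = 1.0
        for i, (_, q_i) in enumerate(events):
            if i != j:
                prod *= q_i  # product of the other events' Q_i
        total += u_j * prod
    return total

def top_rocof(cutsets):
    """Eq. (23): sum of the cut-set rates (Q_{C_z} terms neglected)."""
    return sum(cutset_rocof(cs) for cs in cutsets)

# Two invented minimal cut-sets of two events each, as (u, Q) pairs.
cs1 = [(1e-3, 0.02), (5e-4, 0.01)]
cs2 = [(2e-3, 0.05), (1e-4, 0.03)]
u_top = top_rocof([cs1, cs2])
# cs1 contributes 1e-3*0.01 + 5e-4*0.02 = 2e-5; cs2 contributes 6.5e-5
```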
Since two kinds of basic events are here supposed to take place, the order of occurrence of the events needs to be taken into account. Therefore, in the application of Eq. (21) one must consider that not all sequences are allowed. Let us suppose the minimal cut-set of a generic system comprises the basic events A, B, C and D. If all sequences are admitted, namely no distinction between initiators and enablers is considered, the minimal cut-set rate of occurrence is calculated by Eq. (21), which turns into the following expression:

u_cut = u_A·Q_B·Q_C·Q_D + u_B·Q_A·Q_C·Q_D + u_C·Q_A·Q_B·Q_D + u_D·Q_A·Q_B·Q_C   (24)

On the contrary, and this is the case, if A is an initiator event, whereas B, C and D are enablers, the only allowed sequence is A → B ∩ C ∩ D, where A precedes the enabler events B, C and D. As a consequence, (24) turns into:

u_cut = u_A·Q_B·Q_C·Q_D   (25)

Therefore, once all the u_cut have been calculated taking into account the admitted sequences, Eq. (23) can be applied to determine u_top.
In this particular context, Eqs. (20)–(23) previously introduced involve the interval-valued parameters arising from the aggregation phase. The computation of such equations is based on the ordinary arithmetic operations among intervals. Considering that generally different aggregated intervals can be associated to each basic event, one needs to consider all their possible combinations leading to the top. Then, for each combination z, an interval-valued u_top (i.e. I_{utop,z}) is computed with the related bpa. The latter is calculated by means of the Cartesian product of the masses
Fig. 3. Case study process diagram (vessel V with gas inlet, gas outlet and fluid outlet, compressor K, level loop LE 1125 / LIC 1125 / LCV 1125, high level alarm LAH 1158, high level SIF 1130, PSV and RD to flare).

Table 1. List of acronyms.

V Vessel
K Compressor
M Compressor engine
LE Level element
LIC Level indicator and controller
LCV Level control valve
SIF High level safety instrumented function
LAH Level alarm high
PSV Pressure safety valve
RD Rupture disc


Fig. 4. Fault tree for the top event TOP1 "Liquid to the compressor K": GATE1 "Process control system fails" (OR of initiators BE1 "LE 1125 fails: no signal to the LIC 1125", BE2 "LIC 1125 fails: no signal to the LCV 1125" and BE3 "LCV 1125 fails to open") AND GATE2 "Process safety system fails" (GATE3 "Operator fails", i.e. the OR of enablers BE4 "Alarm LAH 1158 fails" and BE5 "Operator does not operate on alarm", AND enabler BE6 "High level SIF 1130 fails").

characterizing those intervals involved in the cut-sets (Ferson, Cooper, & Myers, 2000).
However, the computed I_{utop,z} can be effectively used to determine the belief and the plausibility of the event u_top ≤ u_th, where u_th is a target threshold desired inside the company or suggested by the standards. In order to calculate these two belief measures, the following steps have to be performed:

1) increasing ordering of the lower and upper bounds of all the I_{utop,z};

2) computation of the Belief by adding the belief masses of all those intervals I_{utop,z} whose upper bounds are lower than or equal to the threshold value u_th, i.e. the intervals completely included in [0, u_th]:

Bel([0, u_th]) = Σ_{I_{utop,z} ⊆ [0, u_th]} m(I_{utop,z})   (26)

3) computation of the Plausibility by adding the belief masses of those intervals I_{utop,z} that have a non-empty intersection with the interval [0, u_th]:

Pl([0, u_th]) = Σ_{I_{utop,z} ∩ [0, u_th] ≠ ∅} m(I_{utop,z})   (27)

6. Case study

The methodology described above has been applied to calculate the ROCOF of the top event "Liquid to the compressor" referring to a

Table 2. Minimal cut-sets.

MCS 1: BE1, BE4, BE6
MCS 2: BE1, BE5, BE6
MCS 3: BE2, BE4, BE6
MCS 4: BE2, BE5, BE6
MCS 5: BE3, BE4, BE6
MCS 6: BE3, BE5, BE6

Table 3. Basic events input data (lower bound LB, upper bound UB, belief mass m).

BE1 (I): Expert 1 [3.00E-02, 3.50E-02] m = 0.90; Expert 2 [3.00E-02, 4.00E-02] m = 0.90
BE2 (I): Expert 1 [1.00E-01, 1.50E-01] m = 0.90; Expert 2 [1.00E-01, 2.00E-01] m = 0.90
BE3 (I): Expert 1 [1.50E-01, 2.00E-01] m = 0.90; Expert 2 [1.00E-01, 2.00E-01] m = 0.90
BE4 (E): Expert 1 [2.00E-01, 3.00E-01] m = 0.90; Expert 2 [3.50E-01, 4.00E-01] m = 0.90
BE5 (E): Expert 1 [1.00E-03, 2.50E-03] m = 0.90; Expert 2 [2.00E-03, 3.00E-03] m = 0.90
BE6 (E): Expert 1 [1.50E-03, 2.50E-03] m = 0.90; Expert 2 [1.00E-03, 2.00E-03] m = 0.90


Table 4. Aggregated opinions of initiator events (aggregated opinions AO1, AO2, AO3 with masses m).

BE1: AO1 [3.000E-02, 3.500E-02] m = 9.000E-01; AO2 [3.000E-02, 4.000E-02] m = 9.000E-02; AO3 [0, ∞) m = 1.000E-02
BE2: AO1 [1.000E-01, 1.500E-01] m = 9.000E-01; AO2 [1.000E-01, 2.000E-01] m = 9.000E-02; AO3 [0, ∞) m = 1.000E-02
BE3: AO1 [1.500E-01, 2.000E-01] m = 9.000E-01; AO2 [1.000E-01, 2.000E-01] m = 9.000E-02; AO3 [0, ∞) m = 1.000E-02

Table 5. Aggregated opinions of enabler events (aggregated opinions AO1 to AO4 with masses m).

BE4: AO1 [2.0E-01, 3.0E-01] m = 4.737E-01; AO2 [3.5E-01, 4.0E-01] m = 4.737E-01; AO3 [0, 1] m = 5.263E-02
BE5: AO1 [2.0E-03, 2.5E-03] m = 8.1E-01; AO2 [1.0E-03, 2.5E-03] m = 9.0E-02; AO3 [2.0E-03, 3.0E-03] m = 9.0E-02; AO4 [0, 1] m = 1.0E-02
BE6: AO1 [1.5E-03, 2.0E-03] m = 8.1E-01; AO2 [1.5E-03, 2.5E-03] m = 9.0E-02; AO3 [1.0E-03, 2.0E-03] m = 9.0E-02; AO4 [0, 1] m = 1.0E-02

period of time of one year. The case study process diagram is reported in Fig. 3. Acronyms used in such a diagram are synthesized
in Table 1. The gas to be compressed is separated from the liquid
in the vessel V. Then, the separated gas is led to the compressor K.
The process is controlled by a process control system implemented
by means of the loop 1125. The latter comprises three components,
namely the level sensor (LE), the level indicator and controller
(LIC), and the level control valve (LCV). If such a loop fails
(initiator event), then an independent process safety system should
function (enabler event) to prevent the top event. In particular, the
process safety system consists of the two following protection
layers:
1. the operator that is asked to intervene when the high level
alarm is activated (LAH 1158);
2. the high level Safety Instrumented Function (SIF) (IEC 61508,
1999; IEC 61511, 2003) that stops the compressor engine.
Such a SIF is supposed to be performed by a Safety Instrumented System (SIS) that is actually not illustrated in Fig. 3.
The top event fault tree is reported in Fig. 4 where BE1, BE2 and
BE3 are the initiators (I) while BE4, BE5 and BE6 are the enablers (E).
By applying the minimal cut-set method, the following minimal
cut-sets (MCS) are found (Table 2).
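The list of MCS in Table 2 is not reproduced here; as a hedged sketch, assume each minimal cut-set couples one initiator with the failed safety barriers. Under the rare-event approximation, the ROCOF contribution of such a cut-set is the initiator's unconditional failure intensity multiplied by the enablers' probabilities of failure on demand, and the top-event ROCOF is approximated by the sum over the cut-sets (function names and point values below are hypothetical):

```python
def mcs_rocof(w_initiator, enabler_pfds):
    """ROCOF contribution of one minimal cut-set: the initiator's
    unconditional failure intensity times the probabilities that
    every enabler (safety barrier) in the cut-set fails on demand."""
    w = w_initiator
    for q in enabler_pfds:
        w *= q
    return w

def top_rocof(cut_sets):
    """Rare-event approximation: sum the contributions of all MCS.
    cut_sets: list of (initiator intensity, [enabler PFDs]) pairs."""
    return sum(mcs_rocof(w, qs) for w, qs in cut_sets)

# Hypothetical point values for illustration (not the paper's data):
cut_sets = [(1.0e-1, [0.25, 2.0e-3]),   # initiator 1 with both barriers failed
            (1.5e-1, [0.25, 2.0e-3])]   # initiator 2 with both barriers failed
print(top_rocof(cut_sets))
```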
It is supposed that two experts supply the interval-valued input data for the parameters u and PFD or Q with a belief mass, suggested by the analyst and here fixed to 0.9. The input data are simulated so that they match those reported in databases of similar industrial contexts, and are summarized in Table 3. Table 4 reports the aggregated intervals for the initiators, whereas Table 5 shows the aggregated intervals for the enablers. In both tables, the related belief masses are reported.

Fig. 5. Belief and plausibility of the event u_top ≤ u_th.
In order to characterize the top event, all 1296 combinations of the aggregated intervals have been computed and propagated to the top event through a purpose-written Visual Basic macro. By applying the procedure described in Section 5, the Belief (BEL) and Plausibility (PL) measures of the event u_top ≤ u_th are computed and shown in Fig. 5.
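The propagation step can be sketched as follows. Since the top-event ROCOF of a coherent fault tree is monotone increasing in every input parameter, each joint focal element maps to the interval obtained by evaluating the tree at the lower and upper bounds of its components; a focal element contributes to the belief of u_top ≤ u_th when it lies entirely below the threshold, and to the plausibility when it at least intersects it. A minimal Python sketch with a hypothetical single-cut-set tree and illustrative bodies of evidence (not the paper's 1296-combination model):

```python
from itertools import product

def top(w1, q4):
    """Hypothetical single-cut-set ROCOF: initiator intensity x barrier PFD."""
    return w1 * q4

def bel_pl(bpas, w_th):
    """Belief and plausibility of the event u_top <= w_th.
    bpas: one dict {(lb, ub): mass} per input parameter.  Because 'top'
    is monotone increasing, each joint focal element maps to the
    interval [top(lower bounds), top(upper bounds)]."""
    bel = pl = 0.0
    for combo in product(*(d.items() for d in bpas)):
        mass = 1.0
        for _, m in combo:
            mass *= m                          # joint mass of the combination
        lo = top(*(iv[0] for iv, _ in combo))  # evaluate at lower bounds
        hi = top(*(iv[1] for iv, _ in combo))  # evaluate at upper bounds
        if hi <= w_th:                         # entirely below the threshold
            bel += mass
        if lo <= w_th:                         # at least intersects it
            pl += mass
    return bel, pl

w1 = {(3.0e-2, 3.5e-2): 0.9, (0.0, 1.0): 0.1}   # initiator intensity
q4 = {(2.0e-3, 2.5e-3): 0.9, (0.0, 1.0): 0.1}   # enabler PFD
print(bel_pl([w1, q4], 1.0e-3))
```

The resulting [BEL, PL] pair is the belief interval of the event, read off Fig. 5 in the paper's full model.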
For instance, with a threshold value u_th set to 1E-03, the belief interval of the event u_top ≤ 1E-03 is [0.95, 1]. Conversely, the analyst could be interested in fixing a belief value and determining the corresponding threshold u_th.
7. Conclusions
The paper proposes an imprecise Fault Tree Analysis (FTA) whose output parameter is the Rate of OCcurrence of Failure (ROCOF) of a top event. The proposed methodology aims at dealing with systems characterized by a poor availability of reliability data, as often happens in high-risk plants. Differently from the classical approach, focused on the evaluation of the probability of occurrence of the top event, the FTA proposed here differentiates basic events into two categories, namely initiators and enablers. The first category refers to component failures or process parameter deviations from normal operating conditions, whereas the second represents the failure of the safety barriers that should be activated to avoid the occurrence of the top event.
The imprecise characterization of these events relies on a mathematical framework based on imprecise probabilities, i.e. the Dempster-Shafer Theory of evidence. The consideration of two categories of basic events implies different kinds of input reliability data: initiators have been characterized by the unconditional failure intensity (u), whereas enablers by the average probability of failure on demand or the steady-state unavailability. The different phases constituting the proposed procedure (information acquisition, judgment aggregation, and propagation to the top event) have been applied to a real industrial scenario.
The methodology can be considered a helpful tool for risk analysts. Indeed, it characterizes the top event by a more informative parameter (the ROCOF) than its probability of occurrence. Furthermore, when reliability data are scarce, it allows the basic events to be characterized by the only available source of information, namely properly aggregated expert judgments. In this way, the uncertainty is directly allocated to the original

sources and properly propagated to the top event by an easy resolution of the fault tree. The final result is an interval estimation of the belief that a fixed threshold value of the ROCOF is not exceeded.
References
American Institute of Chemical Engineers (AIChE). (2000). Guidelines for chemical process quantitative risk analysis (2nd ed.). New York: Center for Chemical Process Safety of the AIChE.
Bae, H. R., Grandhi, R. V., & Canfield, R. A. (2003). Uncertainty quantification of structural response using evidence theory. American Institute of Aeronautics and Astronautics Journal, 41(10), 2062-2068.
Bae, H. R., Grandhi, R. V., & Canfield, R. A. (2004). An approximation approach for uncertainty quantification using evidence theory. Reliability Engineering and System Safety, 86, 215-225.
Curcurù, G., Galante, G. M., & La Fata, C. M. (2012a). Epistemic uncertainty in fault tree analysis approached by the evidence theory. Journal of Loss Prevention in the Process Industries, 25, 667-676.
Curcurù, G., Galante, G. M., & La Fata, C. M. (2012b). A bottom-up procedure to calculate the top event probability in presence of epistemic uncertainty. In Proceedings of the PSAM11 & ESREL 2012, Helsinki, Finland.
Dubois, D., & Prade, H. (1986). On the unicity of Dempster's rule of combination. International Journal of Intelligent Systems, 1, 133-142.
Ferdous, R., Khan, F., Sadiq, R., & Amyotte, P. (2009). Methodology for computer aided fuzzy fault tree analysis. Process Safety and Environmental Protection, 87, 217-226.
Ferdous, R., Khan, F., Sadiq, R., Amyotte, P., & Veitch, B. (2012). Handling and updating uncertain information in bow-tie analysis. Journal of Loss Prevention in the Process Industries, 25, 8-19.
Ferson, S., Cooper, J. A., & Myers, D. (2000). Beyond point estimates: risk assessment using interval, fuzzy and probabilistic arithmetic. In Workshop notes at Society of Risk Analysis annual meeting.
Ferson, S., & Ginzburg, L. R. (1996). Different methods are needed to propagate ignorance and variability. Reliability Engineering and System Safety, 54, 133-144.
Guth, M. A. S. (1991). A probabilistic foundation for vagueness & imprecision in fault-tree analysis. IEEE Transactions on Reliability, 40(5), 563-571.
Helton, J. C., Johnson, J. D., & Oberkampf, W. L. (2004). An exploration of alternative approaches to the representation of uncertainty in model predictions. Reliability Engineering and System Safety, 85, 39-71.
Hoffman, F. O., & Hammonds, J. S. (1994). Propagation of uncertainty in risk assessments: the need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability. Risk Analysis, 14(5), 707-712.
IEC 61025. (2006). Fault tree analysis (2nd ed.).
IEC 61508. (1999). Functional safety of electrical/electronic/programmable electronic safety-related systems. Geneva: IEC (International Electrotechnical Commission).
IEC 61511. (2003). Functional safety - Safety instrumented systems for the process industry sector. Geneva: IEC (International Electrotechnical Commission).
ISO/IEC Guide 51, IEC 60300-3-9. (1999). Risk analysis of technological systems.
Klawonn, F., & Schwecke, E. (1992). On the axiomatic justification of Dempster's rule of combination. International Journal of Intelligent Systems, 7, 469-478.
Markowski, A. S., Mannan, M. S., & Bigoszewska, A. (2009). Fuzzy logic for process safety analysis. Journal of Loss Prevention in the Process Industries, 22, 695-702.
Modarres, M., Kaminsky, M., & Krivtsov, V. (2010). Reliability engineering and risk analysis: A practical guide (2nd ed.). CRC Press, Taylor and Francis Group.
Phillips, M. J. (2001). Estimation of the expected ROCOF of a repairable system with bootstrap confidence region. Quality and Reliability Engineering International, 17, 159-162.
Rausand, M., & Høyland, A. (2004). System reliability theory: Models, statistical methods, and applications (2nd ed.). New Jersey: John Wiley & Sons, Inc.
Sentz, K., & Ferson, S. (2002). Combination of evidence in Dempster-Shafer theory. Technical Report SAND2002-0835. Albuquerque, New Mexico: Sandia National Laboratories.
Shafer, G. (1976). A mathematical theory of evidence. Princeton: Princeton University Press.
Tan, F. R., Jiang, Z. B., & Bai, T. S. (2008). Reliability analysis of repairable systems using stochastic point processes. Journal of Shanghai Jiaotong University (Science), 13(3), 366-369.
Voorbraak, F. (1991). On the justification of Dempster's rule of combination. Artificial Intelligence, 48, 171-197.
Yager, R. R. (1987). On the Dempster-Shafer framework and new combination rules. Information Sciences, 41, 93-137.
Zimmermann, H. J. (1991). Fuzzy set theory and its applications (2nd ed.). Kluwer Academic Publishers.
Zio, E. (2007). An introduction to the basics of reliability and risk analysis (Series on Quality, Reliability and Engineering Statistics). Singapore: World Scientific Publishing Co. Pte. Ltd.
