
Reliability Engineering and System Safety 84 (2004) 271–284

www.elsevier.com/locate/ress

Reliability and cost optimization of electronic devices considering the component failure rate uncertainty

E.P. Zafiropoulos, E.N. Dialynas*

School of Electrical and Computer Engineering, National Technical University of Athens, 9, Iroon Polytechneiou Street, Zografou 157 73, Greece

* Corresponding author. Fax: +30-210-722-3586. E-mail address: dialynas@power.ece.ntua.gr (E.N. Dialynas).

Received 11 November 2003; accepted 25 November 2003
doi:10.1016/j.ress.2003.11.012

Abstract
The objective of this paper is to present an efficient computational methodology to obtain the optimal system structure of electronic
devices by using either a single or a multiobjective optimization approach, while considering the constraints on reliability and cost. The
component failure rate uncertainty is taken into consideration and is modeled with two alternative probability distribution functions.
The Latin hypercube sampling method is used to simulate the probability distributions. An optimization approach was developed using the
simulated annealing algorithm because of its flexibility to be applied in various system types with several constraints and its efficiency in
computational time. This optimization approach can handle efficiently either the single or the multiobjective optimization modeling of the
system design. The developed methodology was applied to a power electronic device and the results were compared with the results of
the complete enumeration of the solution space. The stochastic nature of the best solutions for the single objective optimization modeling of
the system design was sampled extensively and the robustness of the developed optimization approach was demonstrated.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Single objective and multiobjective optimization; System reliability; Failure rate uncertainty; Monte Carlo simulation; Latin hypercube sampling method; Simulated annealing algorithm

1. Introduction

During the design phase of a product, reliability engineers are called upon to evaluate the reliability of the developing system [1]. Electronic devices are usually designed using commercial off-the-shelf components with known cost and reliability. However, some component characteristics, such as the failure rate, exhibit significant unit-to-unit variability. This component failure rate uncertainty can be modeled assuming a probability distribution function with appropriate parameters. The stochastic nature of the component failure rates is propagated to the system failure rate and results in more realistic estimations of the system characteristics.

The question of how to meet a reliability goal for the system under specific budget constraints arises when several choices can be made concerning the type of components that will be used and their different configurations [2,3]. This question can be formulated mathematically as a single objective or a multiobjective optimization problem, with several constraints on cost and system reliability performance [4–6]. The optimization techniques being applied to solve these problems can be broadly classified into two classes, which are the gradient-based methods and the direct search methods. Gradient-based methods use the first order or higher derivatives of the objective function to determine a suitable search direction. Direct search methods require only the computation of the objective function values to select suitable search directions. Since evaluating derivatives of the objective functions is usually laborious, and in some cases impossible, this gives an advantage to the direct search class of algorithms [7,8]. However, conventional search techniques, such as hill climbing, are often incapable of optimizing non-linear multi-modal functions. Directed random search techniques, such as genetic algorithms, simulated annealing and tabu search, can find the optimal solution in complex multidimensional search spaces.
The purpose of this paper is to present an efficient computational methodology that was developed to solve the non-linear reliability and cost optimization problem of system design under reliability and cost constraints. The optimization methodology is based on the simulated annealing algorithm and it is extended in order to be applied to single objective and multiobjective optimization problems [8,9]. The component failure rates are considered to be stochastic variables and two different probability distributions are used to model this uncertainty. The simulation of the component failure rate distributions is performed using the Latin Hypercube Sampling (LHS) method [10,11]. The developed methodology is applied to a power electronic device and the obtained results are presented. In the case of the single objective optimization problem the optimal combination of components is found, while in the case of the multiobjective optimization problem the Pareto optimal solutions are specified. Further analysis of the stochastic nature of the solutions of the single objective optimization problem is conducted and the relevant results are presented.

2. Simulation modeling of the stochastic nature of the component failure rate uncertainty

A probability distribution can be used to represent the stochastic nature of the components' failure rates of an electronic device, but its selection procedure is a critical issue for the estimation of the system's failure rate characteristics. For this purpose, two probability distributions were selected in order to provide alternative ways of simulating the components' failure rate uncertainty, so that the obtained results of the optimal system configuration could be compared and their relative differences identified. The normal and the triangular distribution functions were selected; their density functions are given in Eqs. (1) and (2), respectively, while their curves are shown in Fig. 1.

$$f_{\mathrm{triang}}(x)=\begin{cases}0, & x<L \ \text{or}\ x>U\\[4pt] \dfrac{2(x-L)}{(M-L)(U-L)}, & L\le x\le M\\[4pt] \dfrac{2(U-x)}{(U-M)(U-L)}, & M<x\le U\end{cases}\qquad(1)$$

$$f_{\mathrm{norm}}(x)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)\qquad(2)$$

In these two equations, μ and σ are the mean value and the standard deviation of the normal distribution function, while L, U and M are the triangular distribution parameters. The values M and μ correspond to the point values of the component failure rates, while the values U, L and μ + 3σ, μ − 3σ correspond to the upper and lower values that were reported or empirically estimated for the components' failure rates, for the triangular and the normal distribution, respectively.

The component failure rate uncertainty can be simulated and propagated to the system failure rate. The simulation of the component failure rate probability distribution function can be performed using typical Monte Carlo simulation or a stratified sampling method, such as the LHS method [10,11]. A random variable X with probability density function f(x) and cumulative distribution function F(x) can be simulated by generating random numbers, uniformly distributed between 0 and 1, and calculating the corresponding values of the inverse cumulative distribution function F⁻¹(y). In the case of the normal distribution function, the inverse cumulative distribution function can be calculated using appropriate computer programs. In the case of the triangular distribution function, the cumulative distribution function F_triang(x) and the inverse cumulative distribution function F⁻¹_triang(y) are given in Eqs. (3) and (4), respectively.

$$F_{\mathrm{triang}}(x)=\int_{-\infty}^{x} f_{\mathrm{triang}}(u)\,du=\begin{cases}0, & x\le L\\[4pt] \dfrac{(x-L)^{2}}{(U-L)(M-L)}, & L<x<M\\[4pt] 1-\dfrac{(x-U)^{2}}{(U-L)(U-M)}, & M\le x\le U\\[4pt] 1, & x>U\end{cases}\qquad(3)$$

$$F^{-1}_{\mathrm{triang}}(y)=\begin{cases}L+\sqrt{y(M-L)(U-L)}, & 0\le y\le \dfrac{M-L}{U-L}\\[6pt] U-\sqrt{(1-y)(U-L)(U-M)}, & \dfrac{M-L}{U-L}<y\le 1\end{cases}\qquad(4)$$

In typical Monte Carlo simulation, sample values are drawn at random from the distributions of each of the input variables: random numbers uniformly distributed between 0 and 1 are produced and the samples are obtained from the corresponding values of the inverse cumulative distribution function. Stratified sampling techniques produce a more representative distribution of points in this space. In the LHS method, the interval 0–1 is divided into equally probable subintervals and random numbers, uniformly distributed within each subinterval, are produced; the samples are then obtained from the corresponding values of the inverse cumulative distribution function. As shown in Fig. 2, the LHS method guarantees that values from the entire range of a distribution are sampled in proportion to their probability.

Fig. 1. Triangular (a) and normal (b) distribution functions.
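To illustrate the sampling scheme, the following sketch implements the triangular inverse cumulative distribution function of Eq. (4) together with the LHS stratification described above. It is an illustrative Python rendering, not the authors' Visual Basic 6 program mentioned in Section 5.1; the example parameters are those of candidate 1 of component C in Table 1.

```python
import math
import random

def triang_inv_cdf(y, L, M, U):
    """Inverse CDF of the triangular distribution, Eq. (4):
    maps y in [0, 1] to a sample x in [L, U]."""
    if y <= (M - L) / (U - L):
        return L + math.sqrt(y * (M - L) * (U - L))
    return U - math.sqrt((1.0 - y) * (U - L) * (U - M))

def lhs_triangular(n_samples, L, M, U):
    """Latin hypercube sampling of a triangular distribution.

    The interval [0, 1] is divided into n_samples equally probable
    subintervals; one uniform random number is drawn inside each
    subinterval and mapped through the inverse CDF, so the whole range
    of the distribution is covered in proportion to its probability.
    The samples are shuffled so that successive calls give a different
    order of sampled realizations, as described in Section 6.1."""
    samples = []
    for i in range(n_samples):
        y = (i + random.random()) / n_samples   # stratified uniform draw
        samples.append(triang_inv_cdf(y, L, M, U))
    random.shuffle(samples)
    return samples

# Example: candidate 1 of component C in Table 1 (L=12, M=14, U=16).
if __name__ == "__main__":
    s = lhs_triangular(200, L=12.0, M=14.0, U=16.0)
    print(min(s), sum(s) / len(s), max(s))
```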
Fig. 2. Simulation of normal random variables using the typical Monte Carlo simulation random sampling (a) and the Latin hypercube sampling method (b).

The number of samples required to represent a probability distribution adequately is substantially smaller than for typical Monte Carlo simulation. Using LHS, the samples are taken randomly within intervals of equal probability and represent better even the extreme regions of the distribution.

3. General aspects of the simulated annealing algorithm

The simulated annealing algorithm is based on the analogy between a combinatorial problem and physical systems integrating a large number of particles [7,8]. The algorithm consists of a sequence of iterations and it is usually used for the minimization of an objective function, but it can also be used for the maximization of an objective function with appropriate settings of its parameters. Each iteration consists of randomly changing the current solution to create a new solution in the 'neighborhood' of the current solution. The neighborhood is defined by the choice of the generation mechanism. Once a new solution is created, the corresponding change in the objective function is computed to decide whether the newly produced solution can be accepted as the current solution. In the case of minimization problems, if the value of the objective function for the newly produced solution is smaller than the value for the current solution, the newly produced solution is directly taken as the current solution and the optimal solution is updated if required. Otherwise, it is accepted as the current solution according to the Metropolis criterion, based on the Boltzmann probability. According to the Metropolis criterion, if the difference between the objective function values of the current and the newly produced solutions is smaller than zero, a random number δ in [0, 1] is generated from a uniform distribution and, if

$$\delta \le e^{-\Delta E/kT}\qquad(5)$$

then the newly produced solution is accepted as the current solution. If not, the current solution is unchanged. The parameter k is a constant and T is the temperature parameter that is updated after n_max iterations according to a certain schedule (the 'cooling schedule'). In Eq. (5), ΔE is the difference between the new solution's objective function value and the current solution's objective function value. The parameters to be optimized are usually represented in string form using various binary coding schemes. In the case of maximization problems, ΔE is taken with the opposite sign, so that ΔE is always a positive number.

In order to implement the simulated annealing algorithm for the solution of an optimization problem, there are four principal choices that must be made. These are the representation of solutions, the definition of the objective function, the definition of the generation mechanism for the neighbors and the design of a cooling schedule. The generation mechanism produces a new tentative solution by applying two steps. The first step randomly selects a bit in the bit string of the current solution by applying a certain procedure. The second step generates a random number between 0 and 1; if the random number is greater than 0.5 the selected bit is flipped one level up, while in all other cases it is flipped one level down. In designing the cooling schedule for a simulated annealing algorithm, four parameters must be specified. These are the initial and the lowest temperature, the temperature update rule, the parameter n_max of the maximum number of iterations for each temperature level and the maximum number of accepted 'worse-than-current' solutions (CW_max). The cooling schedules employ different temperature updating schemes. Stepwise reduction schemes are widely used and include very simple cooling strategies. This rule updates the temperature by the following equation:

$$T_{i+1} = cT_i, \qquad i = 0, 1, \ldots\qquad(6)$$

where c is a temperature factor, a constant smaller than 1 but close to 1.
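The following sketch shows how these choices fit together for a maximization problem, combining the Metropolis criterion of Eq. (5), the stepwise cooling rule of Eq. (6) and the two-step bit-flipping generation mechanism; the default parameter values are those later reported in Section 6.1. It is a simplified illustration, not the authors' program: in particular, the wrap-around at the level boundaries and the use of CW_max as a per-temperature early exit are assumptions, and `evaluate` stands for whatever objective function is being maximized.

```python
import math
import random

def neighbor(solution, n_levels=4):
    """Generation mechanism: pick a random position in the bit string
    and flip it one level up (probability 0.5) or one level down."""
    s = list(solution)
    i = random.randrange(len(s))
    step = 1 if random.random() > 0.5 else -1
    s[i] = (s[i] + step) % n_levels   # wrap at the level bounds (assumption)
    return s

def simulated_annealing(evaluate, initial, t_init=100.0, t_final=1.0,
                        c=0.9, k=0.005, n_max=1000, cw_max=500):
    """Maximize `evaluate` over bit strings (lists of ints 0..3) using
    the Metropolis criterion of Eq. (5) and the cooling rule of Eq. (6)."""
    current = list(initial)
    f_current = evaluate(current)
    best, f_best = current, f_current
    t = t_init
    while t > t_final:
        accepted_worse = 0
        for _ in range(n_max):
            cand = neighbor(current)
            f_cand = evaluate(cand)
            delta = f_current - f_cand          # > 0 when candidate is worse
            if delta <= 0:
                accept = True
            else:
                accept = random.random() <= math.exp(-delta / (k * t))
                if accept:
                    accepted_worse += 1
            if accept:
                current, f_current = cand, f_cand
                if f_current > f_best:
                    best, f_best = current, f_current
            if accepted_worse >= cw_max:        # CW_max limit (assumed per level)
                break
        t *= c                                   # Eq. (6)
    return best, f_best
```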
4. Reliability modeling of electronic devices

Electronic devices usually consist of boards or assemblies that are, in general, all connected to each other in order to perform the required operation. In order to estimate the reliability indices of the electronic devices, a reliability engineer in cooperation with a design engineer should interpret the functional block diagram and the device drawings into a reliability block diagram [1]. It is assumed that the components are characterized by a constant failure rate, which means that their reliability has an exponential distribution over time. The reliability block diagram is usually constructed using the minimal cut set method [1]. A minimal cut set is defined as a set of system components which, when failed, causes failure of the system, but if any of them operates then the system is operating. The minimal cut sets are connected in series in the reliability block diagram, while within every minimal cut set the components are connected in parallel. The system failure rate λ_sys is calculated using Eq. (7):

$$\lambda_{\mathrm{sys}}=\sum_{i=1}^{n}\lambda_{i}\qquad(7)$$

In this equation λ_i is the failure rate of the minimal cut set i, which is calculated applying appropriate equations [1]. For example, if the minimal cut set i is a second-order failure event of components A and B, the failure rate λ_i is calculated by Eq. (8):

$$\lambda_{i}=\frac{\lambda_{A}\lambda_{B}\,(r_{A}+r_{B})}{1+\lambda_{A}r_{A}+\lambda_{B}r_{B}}\qquad(8)$$

In this equation (λ_A, r_A) and (λ_B, r_B) are the failure rates and repair times of components A and B, respectively.

Electronic devices can be constructed from components of several off-the-shelf types, which can be provided by different manufacturers. However, these components are characterized by unit-to-unit variability, which means that the actual component failure rate is characterized by uncertainty. This component failure rate uncertainty will be propagated to the system failure rate. The manufacturer of the devices is concerned with the proportion of the devices that exhibit a failure rate higher than a certain limit and require warranty repair. Therefore, the need to model the system failure characteristics in a stochastic manner is imminent, and this can be achieved using the modeling procedure described in Section 2 of the paper.

The static transfer switch (STS) can be used as a typical power electronic device in order to apply the developed computational methodology. The STS is a very fast switch that connects a critical load to an alternate feeder in the case where the voltage in the primary feeder suffers from sags or short interruptions [12,13]. The single-phase model of the STS is shown in Fig. 3. It consists of two thyristor blocks at the primary (P) and the alternate (A) source, which connect the load to the two alternate sources. Each thyristor block is composed of three thyristor modules corresponding to the three phases of the load. A redundant thyristor module is connected in parallel for each phase in order to withstand high load currents. In each thyristor module, two sets of thyristor switches are connected in opposite directions to allow the load current to flow in both positive and negative directions. Mechanical bypass three-phase switches are used in parallel with the thyristor blocks of the primary and alternate source to supply the load even if the thyristor switches are out of operation. The control circuit of the device takes as an input the voltage magnitudes of the two feeders and performs a load transfer when needed.

Fig. 3. Structure of a static transfer switch (single-phase).

The device failure is considered to be its inability to fully perform its objective, which is the switch from the primary to the alternate feeder and vice versa, supplying the load through the fast thyristor switches. The mechanical bypass switches are not taken into consideration in the present system reliability evaluation. The device is decomposed into several components, such as the control circuit, the two thyristors and the firing circuit for each phase, and the redundant two thyristors and the firing circuit for each phase. The minimal cut set method can be applied in order to deduce all the failure events causing system failure [1]. These events constitute the equivalent reliability block diagram of the system, which has the following components and is shown in Fig. 4.

C        control circuit component
TH       thyristor component in the primary feeder's thyristor module
TH1      thyristor component in the primary feeder's redundant thyristor module
F        firing circuit for the two thyristor components TH
F1       firing circuit for the two thyristor components TH1
UNIT P   two thyristor components THP and the corresponding firing circuit F
UNIT P1  two thyristor components THP1 and the corresponding firing circuit F1
THA      thyristor component in the alternate feeder's thyristor module
THA1     thyristor component in the alternate feeder's redundant thyristor module
FA       firing circuit for the two thyristor components THA
FA1      firing circuit for the two thyristor components THA1
UNIT A   two thyristor components THA and the corresponding firing circuit FA
UNIT A1  two thyristor components THA1 and the corresponding firing circuit FA1

It can be noticed that there are certain failure events (for example, failure of units P and P1) that still permit the switch from the primary to the alternate feeder. However, these failure events are considered as system failures because the device cannot fully perform its objective (for example, to switch from the alternate back to the primary). The STS has 55 minimal cut sets and the failure rate λ_sys of the device is calculated from Eq. (9):

Fig. 4. Reliability block diagram of the static transfer switch.

$$\lambda_{\mathrm{sys}}=\lambda_{C}+12\lambda_{TH,TH1}+6\lambda_{F,TH1}+6\lambda_{F1,TH}+3\lambda_{F,F1}+12\lambda_{THA,THA1}+6\lambda_{FA,THA1}+6\lambda_{FA1,THA}+3\lambda_{FA,FA1}\qquad(9)$$

In this equation, λ_C is the control circuit failure rate (first-order minimal cut), while all the other failure rates correspond to the second-order failure events involving the respective components.
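A compact sketch of how Eqs. (8) and (9) propagate component data to the system failure rate is given below. The failure rates and repair times must be supplied in consistent units (e.g. failures per hour and hours), and the input dictionaries are placeholders for one sampled realization of the data of Tables 1–3.

```python
def cutset2_rate(lam_a, r_a, lam_b, r_b):
    """Failure rate of a second-order minimal cut set, Eq. (8).
    lam_* are component failure rates, r_* the repair times."""
    return lam_a * lam_b * (r_a + r_b) / (1.0 + lam_a * r_a + lam_b * r_b)

def sts_system_rate(lam, r):
    """System failure rate of the STS, Eq. (9). `lam` and `r` map
    component names to failure rates and repair times."""
    return (lam["C"]
            + 12 * cutset2_rate(lam["TH"], r["TH"], lam["TH1"], r["TH1"])
            + 6 * cutset2_rate(lam["F"], r["F"], lam["TH1"], r["TH1"])
            + 6 * cutset2_rate(lam["F1"], r["F1"], lam["TH"], r["TH"])
            + 3 * cutset2_rate(lam["F"], r["F"], lam["F1"], r["F1"])
            + 12 * cutset2_rate(lam["THA"], r["THA"], lam["THA1"], r["THA1"])
            + 6 * cutset2_rate(lam["FA"], r["FA"], lam["THA1"], r["THA1"])
            + 6 * cutset2_rate(lam["FA1"], r["FA1"], lam["THA"], r["THA"])
            + 3 * cutset2_rate(lam["FA"], r["FA"], lam["FA1"], r["FA1"]))
```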
5. Optimization method using the simulated annealing algorithm

5.1. Single objective optimization modeling

The selection of the optimal component combination for the construction of the electronic devices constitutes one of the major aspects of the design phase, taking into consideration the reliability and cost constraints of the components. The components that will be used are off-the-shelf and it can be assumed that their cost and failure rate data are available. The failure rates of the components are considered to be stochastic variables, which means that the system failure rate will also be a stochastic variable. It is required that the 95th percentile of the system failure rate is lower than a certain value λ_d, while the total system cost should not exceed a certain limit c_d. This means that only 5% of the produced electronic devices, or fewer, should exceed the failure rate λ_d. The optimal solution will be characterized by a low value of the 95th percentile of the system failure rate and a low system cost. These two factors are taken into consideration when constructing the objective function. In order to limit the problem to single objective optimization, a certain combination of weight factors w1 and w2 can be used. Their positive values are selected according to the relative importance of each of the two factors to the final decision for the design of the electronic device. Therefore, the objective function to be maximized is the following:

$$f_{\mathrm{obj}}(x)=\begin{cases}0, & \text{95th perc\_fr}(x)>\lambda_{d}\ \text{or}\ \mathrm{Cost}(x)>c_{d}\\[6pt] w_{1}\,\dfrac{\lambda_{d}-\text{95th perc\_fr}(x)}{\lambda_{d}}+w_{2}\,\dfrac{c_{d}-\mathrm{Cost}(x)}{c_{d}}, & \text{otherwise}\end{cases}\qquad(10)$$

where x is the component configuration and 95th perc_fr(x) is the value of the 95th percentile of the system failure rate. The total system cost Cost(x) is calculated as the sum of the costs of all system components. This optimization problem is defined as a maximization problem of the objective function in Eq. (10), but it can also be defined as a minimization problem. In order to obtain the optimal solution of the design of the electronic devices, an efficient method was developed applying the simulated annealing algorithm, together with the associated computer program written in the Visual Basic 6 language. The first part of the developed method applies the LHS method to simulate the component failure rate uncertainty. The user may choose parameters such as the number of LHS trials and intervals, the parameters governing the cooling procedure of the simulated annealing algorithm and the initial solution. A flowchart of the developed computational method and the associated computer program is presented in Fig. 5.

Fig. 5. Flowchart of the developed method and the associated computer program for the single objective optimization problem.
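A direct transcription of Eq. (10) might look as follows; the default constraint and weight values are those later used in Section 6.1, and the percentile and cost arguments are assumed to have been computed beforehand.

```python
def f_obj(perc95_fr, cost, lam_d=150.0, c_d=9600.0, w1=0.75, w2=0.25):
    """Single objective function of Eq. (10). `perc95_fr` is the 95th
    percentile of the system failure rate for configuration x and
    `cost` its total cost; infeasible configurations score 0."""
    if perc95_fr > lam_d or cost > c_d:
        return 0.0
    return w1 * (lam_d - perc95_fr) / lam_d + w2 * (c_d - cost) / c_d
```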

5.2. Multiobjective optimization modeling

Different design factors such as reliability and cost have different and conflicting impacts on the selection procedure of the optimal solution, and this is the motivation for a multiobjective perspective, which relieves the designer from weighting a priori the effect of the different objectives. Two separate objective functions are used in order to quantify the impact of each component combination on the system reliability and cost. Consequently, the optimization problem is characterized as a multiobjective one. The solution to the multiobjective optimization problem is the set of all the solutions that represent the best possible trade-off between reliability and cost; these solutions are known as Pareto optimal solutions. A solution is characterized as Pareto optimal when it is not dominated by any other solution in the solution space. A solution 'a' is said to dominate solution 'b' if its performance against each of the objective functions is at least as good as that of 'b' and its performance is better against at least one objective function. For a maximization problem, this is expressed as:

$$\forall i\in\{1,2,\ldots,n\}:\ f_{i}(a)\ge f_{i}(b)\quad\text{and}\quad \exists j\in\{1,2,\ldots,n\}:\ f_{j}(a)>f_{j}(b)\qquad(11)$$

The problem of the optimal selection of the component combination for electronic devices can be expressed as a multiobjective problem having the two following objective functions f1 and f2:

$$f_{1}=\begin{cases}0, & \text{95th perc\_fr}(x)>\lambda_{d}\\[4pt] \left(\lambda_{d}-\text{95th perc\_fr}(x)\right)/\lambda_{d}, & \text{95th perc\_fr}(x)\le\lambda_{d}\end{cases}\qquad(12)$$

$$f_{2}=\begin{cases}0, & \mathrm{Cost}(x)>c_{d}\\[4pt] \left(c_{d}-\mathrm{Cost}(x)\right)/c_{d}, & \mathrm{Cost}(x)\le c_{d}\end{cases}\qquad(13)$$

The optimization method presented in Fig. 5 has been modified in order to solve the multiobjective maximization problem with the two objective functions of Eqs. (12) and (13). The modified method and the associated computer program were developed accordingly and their flowchart is shown in Fig. 6. It should be noted that, when applying the Metropolis criterion, the quantity ΔE of Eq. (5) is the minimum absolute difference between the new solution's value of the objective function f1 and the values of the corresponding objective function for the solutions in the Pareto optimal set. For the implementation of the Metropolis criterion, one objective function should be chosen in order to calculate a numerical difference between the current and the neighbor bit string.
Fig. 6. Flowchart of the developed method and the associated computer program for the multiobjective optimization problem.

The use of function f2 instead of f1 in the Metropolis criterion would not influence the algorithm's efficiency, because the efficiency of the algorithm is basically influenced by the parameters of the cooling schedule and the generation mechanism for the neighboring solutions [14–16].
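A minimal sketch of the dominance test of Eq. (11) and of extracting the Pareto optimal set from a list of (f1, f2) evaluations is shown below; the numeric values in the example are illustrative only.

```python
def dominates(a, b):
    """Pareto dominance, Eq. (11): `a` dominates `b` when it is at least
    as good in every objective and strictly better in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the solutions not dominated by any other solution."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example with illustrative (f1, f2) pairs:
front = pareto_front([(0.12, 0.10), (0.10, 0.20), (0.11, 0.05)])
# -> [(0.12, 0.10), (0.10, 0.20)]; (0.11, 0.05) is dominated.
```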
6. Analysis of the static transfer switch

6.1. General aspects

In order to demonstrate the application of the methods being developed for the single and multiobjective modeling, the optimal design of the STS was conducted. The components that will be used are off-the-shelf and it can be assumed that their cost and failure rate data are available. The candidate components' failure rate uncertainty is modeled in two ways, using the triangular and the normal distribution function, and the optimization problem is solved for the two uncertainty models, respectively. The failure rate data for the candidate components are presented in Tables 1 and 2, assuming the respective two probability distributions, while their costs and repair times are presented in Table 3. The values being used for the parameters of the cooling schedule are k = 0.005, initial temperature = 100, final temperature = 1, temperature factor c = 0.9, n_max = 1000 and CW_max = 500. The values used for the parameters λ_d, c_d, w1, w2 in Eq. (10) are λ_d = 150 frs/10⁵ h, c_d = 9600 monetary units (mu), w1 = 0.75 and w2 = 0.25. For the search procedure, each component configuration was coded as a string of 9 bits with 4 levels each. For example, string 033212220 corresponds to a combination of candidates 1, 4, 4, 3, 2, 3, 3, 3, 1 for components C, TH, TH1, F, F1, THA, THA1, FA, FA1, respectively.
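For illustration, the decoding of such a configuration string might be written as follows; the mapping of levels 0–3 to candidates 1–4 follows the example just given.

```python
COMPONENTS = ["C", "TH", "TH1", "F", "F1", "THA", "THA1", "FA", "FA1"]

def decode(bit_string):
    """Map a 9-digit configuration string with levels 0-3 to the
    candidate number (1-4) chosen for each component, e.g.
    '033212220' -> candidates 1, 4, 4, 3, 2, 3, 3, 3, 1."""
    return {comp: int(ch) + 1 for comp, ch in zip(COMPONENTS, bit_string)}
```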
Table 1
Characteristics of the candidate components' failure rates (uncertainty modeled with the triangular probability density function); failure rates in frs/10⁵ h

Component   Candidate 1       Candidate 2       Candidate 3       Candidate 4
            U    M    L       U    M    L       U    M    L       U    M    L
C           16   14   12      10   8    6       7.5  6    4.5     5    4    3
TH          12   10   8       9    7    5       4    3    2       2.5  2    1.5
TH1         12   10   8       9    7    5       4    3    2       2.5  2    1.5
F           15   13   11      12   10   8       10   8    6       8.5  7.5  6.5
F1          15   13   11      12   10   8       10   8    6       8.5  7.5  6.5
THA         16   14   12      12   10   8       8.5  7    5.5     7    5    3
THA1        16   14   12      12   10   8       8.5  7    5.5     7    6    5
FA          18   16   14      15   13   11      12   10   8       10   8    6
FA1         18   16   14      15   13   11      12   10   8       10   8    6

A complete enumeration of the solution space was calculated in order to test the performance of the developed methods for the single and the multiobjective optimization modeling. In the single objective case, the solution space consists of 4⁹ = 262,144 calculations of the objective function of Eq. (10), since one calculation is needed for each combination of the candidate components. In the multiobjective case, the solution space consists of the same number of calculations for each of the two objective functions f1 and f2 of Eqs. (12) and (13), respectively. Each calculation requires 200 samples using the LHS method in order to obtain the 95th percentile of the system failure rate distribution. The array containing the 200 samples for each candidate component is rearranged, using a random procedure, each time a system configuration's failure rate is calculated, in order to obtain a different order of sampled realizations.

The application of the two methods being developed was conducted using a Pentium 4 processor at 2 GHz. The search procedure required approximately 18 s, while the full enumeration analysis of the typical system required approximately 532 s (8.87 min).

6.2. Single objective optimization modeling

6.2.1. Modeling of component failure rate uncertainty using the triangular distribution

The failure rate uncertainty of the system components is modeled using the triangular distribution with the parameters presented in Table 1. The complete enumeration of the solution space showed that there are 22 solutions with objective function values higher than 0.12 and 199 solutions with objective function values between 0.10 and 0.12. These values change when a new simulation is executed, due to the stochastic nature of the component failure rates. However, the percentage of solutions that lie in the specific intervals of the objective function values does not change significantly. The results of the complete enumeration with the 10 highest values of the objective function are presented in Table 4. This table also presents the respective values of the 95th percentile of the system failure rate and the system cost. Additionally, this set of 10 component configurations was sampled several times and the respective values of the 95th percentile of the system failure rate were calculated. Using these data, the respective mean value and its confidence interval for a confidence level of 0.95 were calculated; they are also presented in Table 4.
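The percentile and confidence-interval figures reported in Table 4 can be obtained along the following lines; the paper does not state its exact percentile estimator, so the order-statistic convention below and the normal-approximation interval are assumptions.

```python
import math
import statistics

def percentile95(samples):
    """95th percentile of a set of simulated system failure rates,
    taken here as the value below which 95% of the samples fall."""
    s = sorted(samples)
    return s[int(0.95 * len(s)) - 1]   # 0.95 * 200 = 190 -> 190th ordered value

def mean_ci95(estimates):
    """Mean and half-width of a 0.95 confidence interval for repeated
    95th-percentile estimates (normal approximation, z = 1.96)."""
    m = statistics.mean(estimates)
    half = 1.96 * statistics.stdev(estimates) / math.sqrt(len(estimates))
    return m, half
```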

Table 2
Characteristics of the candidate components' failure rates (uncertainty modeled with the normal probability density function); failure rates in frs/10⁵ h

Component   Candidate 1        Candidate 2        Candidate 3        Candidate 4
            μ    3σ   σ        μ    3σ   σ        μ    3σ   σ        μ    3σ   σ
C           14   2    0.67     8    2    0.67     6    1.5  0.5      4    1    0.33
TH          10   2    0.67     7    2    0.67     3    1    0.33     2    0.5  0.17
TH1         10   2    0.67     7    2    0.67     3    1    0.33     2    0.5  0.17
F           13   2    0.67     10   2    0.67     8    2    0.67     7.5  1    0.33
F1          13   2    0.67     10   2    0.67     8    2    0.67     7.5  1    0.33
THA         14   2    0.67     10   2    0.67     7    1.5  0.5      5    2    0.67
THA1        14   2    0.67     10   2    0.67     7    1.5  0.5      6    1    0.33
FA          16   2    0.67     13   2    0.67     10   2    0.67     8    2    0.67
FA1         16   2    0.67     13   2    0.67     10   2    0.67     8    2    0.67

Table 3
Cost and repair time of candidate components

Component   Off-the-shelf cost (in monetary units)                    Repair time (in h)
            Candidate 1   Candidate 2   Candidate 3   Candidate 4
C           1000          1300          1500          1750            48
TH          680           900           1200          1300            60
TH1         680           900           1200          1300            60
F           800           1000          1100          1150            70
F1          800           1000          1100          1150            70
THA         580           680           900           1100            60
THA1        580           680           900           1000            60
FA          630           800           1000          1200            70
FA1         630           800           1000          1200            70
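Using a decoded configuration together with the cost data of Table 3, the total system cost Cost(x) of Section 5.1 can be reproduced; the sketch below is illustrative, and for the dominant solution 133133030 it yields 9560 monetary units, in agreement with Table 4.

```python
# Cost data transcribed from Table 3 (monetary units), candidates 1-4.
COST = {
    "C":    [1000, 1300, 1500, 1750],
    "TH":   [680,   900, 1200, 1300],
    "TH1":  [680,   900, 1200, 1300],
    "F":    [800,  1000, 1100, 1150],
    "F1":   [800,  1000, 1100, 1150],
    "THA":  [580,   680,  900, 1100],
    "THA1": [580,   680,  900, 1000],
    "FA":   [630,   800, 1000, 1200],
    "FA1":  [630,   800, 1000, 1200],
}

def total_cost(bit_string):
    """Total system cost Cost(x): the sum of the chosen candidates' costs."""
    comps = ["C", "TH", "TH1", "F", "F1", "THA", "THA1", "FA", "FA1"]
    return sum(COST[c][int(level)] for c, level in zip(comps, bit_string))

assert total_cost("133133030") == 9560   # matches Table 4
```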

The developed methodology finds an excellent solution, with an objective function value higher than 0.12, after executing approximately 3500 calculations of the objective function. Since the complete enumeration of the solution space requires 262,144 calculations of the objective function, the developed methodology requires almost 1.3% of the time that is required by the complete enumeration to find the optimal solution. The developed methodology was applied several times. In 60% of these runs the optimization algorithm found a solution with an objective function value higher than 0.12; in 20% of the runs it found a solution with an objective function value between 0.10 and 0.12; and in the remaining 20% of the runs it found a solution with an objective function value between 0.08 and 0.10. In all cases, the optimization method needed less than 5% of the time that is required by the complete enumeration to find the optimal solution. The results of the complete enumeration are presented in Table 4 in order to verify the efficiency of the developed optimization method. It should be noted that certain solution strings result in identical system configurations (identical solutions) but exhibit slight deviations in the values of the objective function. For example, the solution strings 133133030 and 133313030 result in the same system configuration, because of the existing redundancies in the system of Fig. 4. Theoretically, they should exhibit exactly the same value of failure rate, but the stochastic nature of the problem justifies the small deviations in their values of the objective function.

6.2.2. Modeling of component failure rate uncertainty using the normal distribution

When the component failure rate uncertainty is modeled with a normal distribution function, the complete enumeration of the solution space showed that there are 13 solutions with objective function values higher than 0.14 and 175 solutions with objective function values between 0.12 and 0.14. The 10 solutions that exhibit the highest objective function values are shown in Table 5. The developed methodology found the optimal solution 133133030 by running 5182 calculations of the objective function, while the complete enumeration requires 262,144 calculations. This means that the developed method requires almost 2% of the time that is required by the complete enumeration to find the optimal solution. The developed method was applied several consecutive times in order to test the robustness of the optimization algorithm. In 30% of these runs, solutions with an objective function value higher than 0.14 were found; in 30% of the runs, solutions with an objective function value between 0.12 and 0.14 were found; and in the remaining 40% of the runs, solutions with an objective function value between 0.10 and 0.12 were found.

Table 4
Dominant solutions for the single objective optimization problem using the triangular distribution to model the component failure rate uncertainty

Solution x   f(x)     95th perc_fr(x)   Estimation of the 95th perc_fr(x) (frs/10⁵ h)      Cost
                      (frs/10⁵ h)       Mean value   Confidence interval (95%)             (mu)
133133030    0.1294   124.3217          124.3219     0.0346                                9560
133313030    0.1293   124.3404          124.3404     0.0354                                9560
033333130    0.1277   124.9294          124.9293     0.0438                                9510
123233030    0.1255   125.1074          125.1072     0.0330                                9560
233033030    0.1252   125.1513          125.1514     0.0358                                9560
132323030    0.1252   125.1651          125.1652     0.0382                                9560
233303030    0.1251   125.1816          125.1818     0.0292                                9560
133333020    0.1247   125.5100          125.5102     0.0303                                9510
033333030    0.1243   126.1232          126.1233     0.0292                                9410
133033130    0.1222   126.2814          126.2814     0.0396                                9460
Table 5
Dominant solutions for the single objective optimization problem using the normal distribution to model the component failure rate uncertainty

Solution x   f(x)     95th perc_fr(x)   Estimation of the 95th perc_fr(x) (frs/10⁵ h)      Cost
                      (frs/10⁵ h)       Mean value   Confidence interval (95%)             (mu)
133133030    0.1473   120.7330          120.7331     0.0429                                9560
133313030    0.1473   120.7360          120.7359     0.0312                                9560
033333130    0.1446   121.5416          121.5415     0.0310                                9510
123233030    0.1439   121.4161          121.4162     0.0384                                9560
132323030    0.1439   121.4193          121.4191     0.0234                                9560
233033030    0.1427   121.6534          121.6532     0.0333                                9560
233303030    0.1427   121.6544          121.6544     0.0378                                9560
133333020    0.1417   122.1239          122.1238     0.0333                                9510
132233030    0.1412   121.9513          121.9511     0.0240                                9560
123323030    0.1412   121.9560          121.9558     0.0294                                9560

In all cases, the developed method did not need more than 5% of the time that is required by the complete enumeration to find the optimal solution. The comparison of the objective function values of the identical solutions showed a very small difference, due to the stochastic nature of the problem, but it was obvious that solution 133133030 was the best one because it exhibited the highest value of the objective function. The results of the complete enumeration with the 10 highest values of the objective function are presented in Table 5. This table also presents the respective values of the 95th percentile of the system failure rate and the system cost. Additionally, this set of 10 component configurations was sampled several times and the respective values of the 95th percentile of the system failure rate were calculated. Using these data, the respective mean value and its confidence interval for a confidence level of 0.95 were calculated; they are also presented in Table 5.

6.2.3. Comparison of the solutions

The results obtained by applying the two models of the failure rate uncertainty were compared. Taking into consideration that solutions 133133030 and 133313030 are identical, it can be noticed that the best solutions for the optimization problem with the single objective function are the bit strings 133133030 (solution 1), 033333130 (solution 2), 123233030 (solution 3) and 233033030 (solution 4). There are slight differences in the values being obtained for the 95th percentile of the system failure rate when choosing the identical configurations or running a new simulation. This happens because of the variability induced by the failure rate uncertainty and the random sampling method used to simulate this uncertainty. These four dominant solutions were sampled extensively in order to check the ranges of the objective function values and of the 95th percentile estimates of the system failure rate. Several sets of 200 samples of component failure rates were generated and the respective values of the 95th percentile of the system failure rate and the corresponding objective function were calculated. This procedure was performed for the four optimal component combinations and the obtained results are presented in ascending order in Figs. 7–10. From Figs. 7 and 9, it can be noticed that the optimal solution is solution 1 with string 133133030, regardless of the model being used for the failure rate uncertainty.

Fig. 7. Variability of the single objective function values for the best four solutions when the failure rate uncertainty is modeled with the triangular distribution.
Fig. 8. Variability of the 95th percentile of the system failure rate for the best four solutions when the failure rate uncertainty is modeled with the triangular distribution.

Fig. 9. Variability of the single objective function values for the best four solutions when the failure rate uncertainty is modeled with the normal distribution.

Fig. 10. Variability of the 95th percentile of the system failure rate for the best four solutions when the failure rate uncertainty is modeled with the normal distribution.
Furthermore, from Figs. 8 and 10, it can be noticed that solution 1 corresponds to a system configuration that exhibits the lowest value of the 95th percentile of the system failure rate under the budget constraint of 9600 monetary units, regardless of the model being used for the failure rate uncertainty.

6.3. Multiobjective optimization modeling

6.3.1. Modeling of component failure rate uncertainty using the triangular distribution

The failure rate uncertainty of the system components is modeled using the triangular distribution with the parameters presented in Table 1. The set of solutions that satisfy the constraints is presented in Fig. 11 and the Pareto optimal solutions can be easily identified using a simple computer program. Fig. 12 shows the Pareto optimal solutions found by the developed methodology. In Table 6, the Pareto optimal solutions found by the developed methodology and by the complete enumeration are presented. The time needed to identify the Pareto optimal solutions was approximately 3% of the time being required by the complete enumeration, since only 6549 calculations of the two objective functions were executed. It can be seen from Table 6 that the performance of the developed methodology is very good, since the biggest part of the Pareto optimal solutions was found.

Fig. 11. Set of solutions that satisfy the constraints, when the failure rate uncertainty is modeled with the triangular distribution.

Fig. 12. Pareto optimal solutions found by the developed methodology, when the failure rate uncertainty is modeled with the triangular distribution.

6.3.2. Modeling of component failure rate uncertainty using the normal distribution

The failure rate uncertainty of the system components is modeled using the normal distribution with the parameters presented in Table 2. The set of solutions that satisfy the constraints is presented in Fig. 13 and the Pareto optimal solutions can be easily identified using a simple computer program. Fig. 14 shows the Pareto optimal solutions found by the developed methodology. In Table 7, the Pareto optimal solutions found by the developed method and by the complete enumeration are presented. The time needed for the developed methodology
to identify the Pareto optimal solutions was approximately 3% of the time the complete enumeration requires, since only 7489 calculations of the two objective functions were executed. It can be seen that the performance of the developed method is very good, since the biggest part of the Pareto optimal solutions was found.

Table 6
Solutions found by the developed methodology (failure rate uncertainty modeled with the triangular distribution)

Pareto optimal solutions      Not Pareto optimal   Not found
133133030   023033030         133333020            033333130
033333030   203033030         023333020            003033120
133033030   103033130         032203030
033223030   103033030
123033030   013033030
033333020   103033120
033033130   003033130
033033030   003033030
203033130   030203030

Fig. 13. Set of solutions that satisfy the constraints, when the failure rate uncertainty is modeled with the normal distribution.

Fig. 14. Pareto optimal solutions found by the multiobjective optimization algorithm and the complete enumeration of the solution space, when the failure rate uncertainty is modeled with the normal distribution.

6.3.3. Comparison of the solutions

From the comparison of the solutions presented in Tables 6 and 7, it can be noticed that the developed methodology found almost all the Pareto optimal solutions, while only 2–3 of the solutions found were not Pareto optimal ones. Furthermore, these Pareto optimal solutions are almost the same in both cases of the failure rate uncertainty modeling, using either the triangular or the normal probability distribution function. The best solutions 133133030 and 033333130, found applying the developed single objective optimization method, are also included in the Pareto optimal solutions set. This means that, if there is a modification in the weight factors w1 and w2 in Eq. (10), the optimal solution can be found easily in the space of the Pareto optimal solutions instead of searching the whole solution space. A complete enumeration of only 20–22 objective function values is required and the optimal solution for the single objective optimization problem can be selected among them.
Table 7
Solutions found by the developed method (failure rate uncertainty modeled with the normal distribution)

Pareto optimal solutions      Not Pareto optimal   Not found
133133030   033023030         003133030            103033120
033333130   023033030         303033130            003023030
033333030   203033030         003133030            003033020
133033030   103033130
033223030   103033030
132303030   013033030
033333020   003033130
033033130   003033030
033033030   003033120
203033130

7. Conclusions

The purpose of this paper is to present an efficient computational methodology to obtain the optimal system structure for electronic devices using various component types, while considering the constraints on reliability and cost. The developed methodology was applied for the design of a power electronic device using various off-the-shelf component types. The failure rate uncertainty of the system components was modeled using two different probability distributions, the triangular distribution and the normal distribution. The LHS method was used to simulate the component failure rate distributions, and the stochastic nature of the component failure rates was propagated to the system failure rate.

The optimization problem of selecting the configuration of component types under reliability and cost constraints can be defined as either a single objective or a multiobjective optimization problem. The results obtained from the application of the developed methodology were compared with the results of the complete enumeration. It was shown that the developed methodology performed extremely well in a very small fraction of the time required by the complete enumeration, regardless of the probability distribution used for the failure rate uncertainty. Furthermore, the best solutions of the single objective optimization problem were sampled extensively and it was shown that the developed methodology was robust despite the statistical noise induced by the failure rate uncertainty.

References

[1] Billinton R, Allan RN. Reliability evaluation of engineering systems: concepts and techniques. London: Pitman; 1985.
[2] Marseguerra M, Zio E. System design optimization by genetic algorithms. 2000 Annual Reliability and Maintainability Symposium, Los Angeles, USA; 2000. p. 222–7.
[3] Coit DW, Smith AE. Reliability optimization of series–parallel systems using a genetic algorithm. IEEE Trans Reliab 1996;45(2):254–60.
[4] Coit DW, Smith AE. Redundancy allocation to maximize a lower percentile of the system time to failure distribution. IEEE Trans Reliab 1998;47(1):79–87.
[5] Busacca PG, Marseguerra M, Zio E. Multiobjective optimization by genetic algorithms: application to safety systems. Reliab Engng Syst Safety 2001;72:59–74.
[6] Marseguerra M, Zio E, Cipollone M. Designing optimal degradation tests via multiobjective genetic algorithms. Reliab Engng Syst Safety 2003;79:87–94.
[7] Rao SS. Engineering optimization. New York: Wiley; 1996.
[8] Pham DT, Karaboga D. Intelligent optimisation techniques: genetic algorithms, Tabu search, simulated annealing and neural networks. Great Britain: Springer; 2000.
[9] Pereira J, Saraiva JT, Ponce de Leao MT. Identification of operation strategies of distribution networks using a simulated annealing approach. IEEE Power Tech '99 Conference, Budapest, Hungary; 1999.
[10] Cheng J, Druzdzel MJ. Latin hypercube sampling in Bayesian networks. 13th International Artificial Intelligence Research Symposium Conference FLAIRS-2000, FL, USA; 2000. p. 287–92.
[11] Ross SM. A course in simulation. New York: Macmillan; 1991.
[12] Sannino A, Bollen MHJ. Effect of adverse weather on the voltage dip mitigation capability of a static transfer switch. Seventh International Conference on Probabilistic Methods Applied to Power Systems, Naples, Italy; 2002. p. 581–6.
[13] Moschakis MN, Hatziargyriou ND. Thyristor based static transfer switch: theory, modelling and analysis. MedPower 2002, Athens, Greece; 2002.
[14] Papadogiannis KA, Hatziargyriou ND, Saraiva JT. Short term active/reactive operation planning in market environment using simulated annealing. ISAP 2003, Limnos; 31 August–3 September 2003.
[15] Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science 1983;220(4598):671–80.
[16] Aarts E, Korst J. Simulated annealing and Boltzmann machines. New York: Wiley; 1990.
