
Prediction of Machine Health Condition Using Neuro-Fuzzy and Bayesian Algorithms

Chaochao Chen1**, Bin Zhang2, George Vachtsevanos1

1 School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 USA
2 Impact Technologies LLC, Rochester, NY 14623

** Chaochao Chen is the corresponding author. Phone: +1-404-894-4130, Fax: +1-404-894-4130, Email: chaochao.chen@gatech.edu
Email of Bin Zhang: Bin.Zhang@impact-tek.com
Email of George Vachtsevanos: george.vachtsevanos@ece.gatech.edu

Abstract:
This paper proposes a novel approach for machine health condition prognosis based on neuro-fuzzy
systems (NFS) and Bayesian algorithms. The NFS, after training with machine condition data, is employed
as a prognostic model to forecast the evolution of the machine fault state with time. An on-line model update
scheme is developed on the basis of the Probability Density Function (PDF) of the NFS's residuals between
the actual and predicted condition data. Bayesian estimation algorithms adopt the model's predicted data as
prior information, in combination with on-line measurements, to update the degree of belief in the forecasting
estimations. In order to simplify the implementation of the proposed approach, a recursive Bayesian
algorithm called particle filtering is utilized to calculate in real time a posterior PDF by a set of random
samples (or particles) with associated weights. When new data become available, the weights of all particles
are updated and predictions are carried out, which together form the PDF of the predicted estimations.
developed method is evaluated via two experimental cases --- a cracked carrier plate and a faulty bearing.
The prediction performance is compared with three prevalent machine condition predictors --- recurrent
neural networks (RNN), neuro-fuzzy systems (NFS) and recurrent neuro-fuzzy systems (RNFS). The results
demonstrate that the proposed approach can predict machine conditions more accurately.
Keywords: Prediction, machinery condition monitoring, Bayesian algorithms, recurrent neural networks,
neuro-fuzzy systems, recurrent neuro-fuzzy systems

I. Introduction
With the increased complexity and criticality of modern industrial systems, Condition-Based Maintenance
has gained considerable attention in recent years. Instead of traditional scheduled or breakdown maintenance,
Condition-Based Maintenance utilizes on-line condition indicators extracted from available observations to
detect, identify and forecast the evolution in time of potentially detrimental fault conditions. Thus, the
system health condition can be assessed in real-time and maintenance practices can be scheduled to avoid
malfunctions or even catastrophic failure. Condition-Based Maintenance contains six main components,
namely data collection, data processing, feature extraction, fault diagnosis, failure prognosis and decision
making [1]. Amongst them, prognosis is an important part that possesses the ability to predict accurately and
precisely the future condition and remaining useful life of a failing component or subsystem. Since prognosis
projects the current condition of the fault indicator in the absence of future observations and necessarily
entails large-grain uncertainty, it is widely considered the Achilles' heel of Condition-Based Maintenance.
Recently, many efforts have been reported in the area of machinery prognosis [2-12].
In general, most of the existing machine prognostic approaches can be divided into two principal
categories: model-based (or physics-based) and data-driven methods [13, 14]. Model-based methods utilize
mathematical models to predict the fault progression. Given a proper model for a specific system, model-based
methods can provide accurate prediction estimates. However, it is usually difficult to develop accurate
models in most practical instances, especially when the process of fault propagation is complex and/or is not
fully understood. Data-driven methods, on the other hand, employ the collected condition data to derive the
fault propagation models. For instance, Tse et al. [2] proposed recurrent neural networks to forecast
machine deterioration via vibration signals; Wang and Vachtsevanos [6] developed dynamic wavelet neural
networks for fault prognostics; Zhao et al. [3] used a neuro-fuzzy predictor to forecast the bearing fault trend,
and the results show that its prediction accuracy is better than that of a radial-basis-function neural network
predictor. Similar applications using neuro-fuzzy predictors can also be found in [9, 11]. By combining
fuzzy systems with different types of neural networks, Wang et al. [4, 5, 12] proposed a series of condition
predictors that have been applied successfully in machine health prediction; Niu and Yang [10] used a
Dempster-Shafer regression method to perform time-series predictions. Since most data-driven methods,
such as recurrent neural networks (RNN), neuro-fuzzy systems (NFS) and recurrent neuro-fuzzy systems
(RNFS), can be applied to a variety of systems, they have become a popular prediction tool in machinery
prognosis.
Recently, the NFS, as one type of data-driven methods, has been employed successfully in the prediction
of machine condition degradation [3, 4]. The prediction performance of the NFS has been shown to
outperform conventional neural-network-based predictors such as feedforward-neural-network, radial-basis-function, and recurrent-neural-network based models. Through off-line training using available data
sets, the NFS is utilized to model machine dynamics so that accurate predictions of machine health
conditions may be expected. However, since the machine dynamics in real applications change with time, the
trained NFS cannot carry out accurate predictions if the new dynamics/states are not taken into account
during the prediction process. Since Bayesian algorithms can update system states in real-time via new data,
the NFS is integrated with Bayesian algorithms so that on-line data can be used to improve the prediction
accuracy.
It is well known that Bayesian estimation algorithms are suitable for solving problems of real-time state
estimation, by constructing the posterior Probability Density Function (PDF) of the state and considering the
likelihood of sequential observations [15-17]. As a recursive Bayesian algorithm, particle filtering is a
sequential Monte Carlo method that approximates the state PDF by using point masses (or particles) with
associated discrete probability masses (or weights). Recently, particle filtering has been employed in
machinery prognosis since the prediction can be undertaken when a new measurement becomes available
[18-20]. In most applications, mathematical models have been established to describe the fault propagation
process. The derivations of these models are, however, very complex and also require expert knowledge
about the degradation process, e.g., a detailed Finite Element Analysis model to estimate the values of the
parameters of the fault growth model. In this work, machine condition prognosis is considered
as a real-time state estimation problem using particle filtering, while NFS is employed to build the fault
growth model through off-line training.
Note that errors between the actual machine condition and the prediction estimates from the NFS model
do exist even after the NFS is well trained. Moreover, machine dynamics change during the forecasting
process. Therefore, an on-line model adaptation scheme for fault propagation is desirable. In this paper, a
sliding window of given length is applied to the residual signal. The residuals screened by
the window generate an error PDF that is employed to update the model parameters in real time.
The integration of the NFS and the Bayesian algorithm in this paper forms a new approach for machine health
condition prognosis that possesses the merits of each, including non-linear mapping and real-time state
estimation. In the literature, these two techniques have typically been used individually to perform machine
prognosis; this is the first time the combined system is developed and used to carry out machine prognosis.
The on-line model update scheme is able to adapt the fault growth model to changing machine dynamics quickly. As a
recursive Bayesian algorithm, particle filtering is utilized to forecast the machine condition based on the
particles and associated weights. Experimental vibration data from a damaged helicopter transmission
component and a faulty bearing are employed to validate the proposed approach. The results demonstrate
that it outperforms classical predictors.
The remainder of this paper is organized as follows: In the next section, the RNN, NFS and RNFS are
described as applied to machine condition prediction. Section III describes the proposed prediction
approach, which includes Bayesian estimation and its integration with particle filtering, a fault growth model
and model adaptation scheme. Section IV presents experimental results of the proposed approach on two real
systems. Section V provides some concluding remarks.

II. Prediction Using RNN, NFS and RNFS


Since the RNN, NFS and RNFS are capable of learning highly nonlinear machine dynamics without the
necessity of deriving complex mathematical models to forecast the machine's behavior, they have been
employed widely in machinery prognosis. For these predictors, the input variables,
$\{x_{t-nr}, x_{t-(n-1)r}, x_{t-(n-2)r}, \ldots, x_{t-3r}, x_{t-2r}, x_{t-r}, x_t\}$, and the output/forecasting variable, $x_{t+r}$, are monitoring
indices that characterize the machine health condition, where $r$ denotes the prediction step, i.e., when $r=1$,
$x_{t+r}$ is a one-step-ahead prediction, and $n$ defines the number of previous time steps, i.e., when $n=3$, the
values of three previous time steps and the current value are used to carry out the prediction. For example,
when $n=3$, the input is $\{x_{t-3r}, x_{t-2r}, x_{t-r}, x_t\}$. In order to capture the dynamics of fault propagation, the RNN, NFS
and RNFS are trained via the gradient descent approach. The training process is terminated when the number
of training iterations has reached a predefined value or the desired training error has been achieved. The
RNN, NFS and RNFS are described in sequence below; they are used for comparison purposes with the
proposed predictor.
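To make the input/output construction concrete, the short Python sketch below (an illustration added here, not code from the original study) forms such training pairs from a monitoring-index series for given n and r; the function name and the synthetic series are placeholders.

import numpy as np

def make_training_pairs(x, n=3, r=1):
    """Build (input, target) pairs {x_{t-nr}, ..., x_{t-r}, x_t} -> x_{t+r}
    from a 1-D monitoring-index series x."""
    x = np.asarray(x, dtype=float)
    inputs, targets = [], []
    for t in range(n * r, len(x) - r):
        # n previous values (spaced by r) plus the current value form the input
        inputs.append([x[t - k * r] for k in range(n, -1, -1)])
        targets.append(x[t + r])            # value r steps ahead is the target
    return np.array(inputs), np.array(targets)

# Example: a slowly growing fault indicator with noise
series = np.cumsum(np.abs(np.random.randn(200))) * 0.01
X, y = make_training_pairs(series, n=3, r=1)
print(X.shape, y.shape)   # (196, 4) (196,)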

A. RNN Predictor
The RNN predictor is similar to a feedforward neural network in structure but with additional feedback
connections. Many researchers have claimed that the closed-loop structure makes the RNN more suitable for
capturing the temporal behavior of dynamic systems [2, 6, 7]. Fig. 1 shows the RNN predictor with three layers.
The input values are transmitted through the nodes in the input layer (or Layer 1), with the feedback values
from the output layer (or Layer 3), to the hidden layer (or Layer 2), where every node possesses a sigmoid
function as the activation function. Then, the node in the output layer is activated via a sigmoid function
while receiving the signals from the hidden layer. For more information on recurrent neural networks, see
[23].

Fig. 1. Architecture of the RNN predictor. Z^{-1} is a unit delay operator; S is a sigmoid function.
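As a rough, minimal sketch only, the forward pass of such a three-layer RNN predictor might be written as follows; the weight shapes, the random initialization and the scalar feedback of the previous output are assumptions made for illustration rather than details taken from Fig. 1.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_predict(x_seq, W_in, W_fb, W_out, y_prev=0.0):
    """One-step-ahead predictions for each input pattern in x_seq.
    Layer 1 passes inputs through; Layer 2 (hidden) receives the inputs plus
    the fed-back previous output; Layer 3 (output) uses a sigmoid node."""
    preds = []
    for x in x_seq:
        h = sigmoid(W_in @ x + W_fb * y_prev)   # hidden layer with output feedback
        y_prev = float(sigmoid(W_out @ h))      # single sigmoid output node
        preds.append(y_prev)
    return np.array(preds)

# Toy usage with 4 inputs (n = 3, r = 1) and 5 hidden nodes
rng = np.random.default_rng(0)
W_in, W_fb, W_out = rng.normal(size=(5, 4)), rng.normal(size=5), rng.normal(size=5)
x_seq = rng.random((10, 4))
print(rnn_predict(x_seq, W_in, W_fb, W_out))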

B. NFS Predictor
The NFS predictor is in essence a fuzzy logic system, where the system parameters are optimized via
neural network training. The architecture of the NFS is schematically shown in Fig. 2. The NFS consists of
five layers, namely input layer, Membership Function (MF) layer, rule layer, normalized layer and output
layer, respectively. There are $l$ input nodes in the input layer and each input node is related to $m$ term nodes
in the MF layer. Thus, the number of nodes in the MF layer is $l \times m$, where $m$ denotes the number of rules. For
greater detail on neuro-fuzzy systems, see [24]. The signal propagation in the NFS proceeds as follows:
Layer 1 (Input layer): The input values are transmitted directly to the next layer without any computation.

The outputs of this layer can be expressed by

$O_i^{(1)} = x_i^{(1)}$    (1)

Layer 2 (MF layer): Each node in this layer performs a membership function calculation. Sigmoid
membership functions are utilized here, as shown below:

$u_{ij}^{(2)} = \frac{1}{1 + \exp\left(-b_{ij}^{(2)}\left(O_i^{(1)} - m_{ij}^{(2)}\right)\right)}$    (2)

where $u_{ij}^{(2)}$ is the MF layer's output associated with the jth term of the ith input $O_i^{(1)}$, and $b_{ij}^{(2)}$ and $m_{ij}^{(2)}$ are
the parameters of the sigmoid function.
Layer 3 (Rule layer): The following max-product operation is carried out in this layer:

$O_j^{(3)} = \prod_{i=1}^{l} u_{ij}^{(2)}$    (3)

where the output $O_j^{(3)}$ represents the firing strength of the jth fuzzy rule.
Layer 4 (Normalized layer): This layer performs the normalization operation for all the rule firing
strengths. The resulting output is given by

$O_j^{(4)} = \frac{O_j^{(3)}}{\sum_{j=1}^{m} O_j^{(3)}}$    (4)

Layer 5 (Output layer): The output of the NFS is calculated by using the centroid defuzzification
procedure. That is:

$x_{t+r} = O_k^{(5)} = \sum_{j=1}^{m} O_j^{(4)} w_{jk}^{(5)}$    (5)

where $w_{jk}^{(5)}$ is the kth estimated output weight associated with the jth rule.

Fig. 2. Architecture of the NFS predictor; S is a sigmoid function; T means max-product operation; N means normalization operation.
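To make Equations (1)-(5) concrete, the following minimal Python sketch evaluates one forward pass of the five-layer NFS for a single input pattern; the random parameter values and the sign convention inside the sigmoid are assumptions made for illustration.

import numpy as np

def nfs_forward(x, b, m_c, w):
    """Forward pass of the NFS predictor.
    x : (l,) input pattern; b, m_c : (l, m) sigmoid MF parameters;
    w : (m,) rule consequents. Returns the prediction x_{t+r}."""
    u = 1.0 / (1.0 + np.exp(-b * (x[:, None] - m_c)))   # Layer 2: membership values, Eq. (2)
    o3 = np.prod(u, axis=0)                             # Layer 3: rule firing strengths, Eq. (3)
    o4 = o3 / np.sum(o3)                                 # Layer 4: normalization, Eq. (4)
    return float(np.dot(o4, w))                          # Layer 5: centroid defuzzification, Eq. (5)

rng = np.random.default_rng(1)
l, m = 4, 5                      # 4 inputs, 5 fuzzy rules
x = rng.random(l)
print(nfs_forward(x, b=rng.normal(size=(l, m)),
                  m_c=rng.random((l, m)), w=rng.random(m)))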

C. RNFS Predictor
The RNFS predictor (Fig. 3) possesses additional feedback links added in Layer 2 (MF layer) as compared
to the NFS predictor. Each node in Layer 2 functions as a memory unit that performs the following
operation:

$u_{ij}^{(2)} = \frac{1}{1 + \exp\left(-b_{ij}^{(2)}\left(x_{ij}^{(2)} - m_{ij}^{(2)}\right)\right)}$    (6)

where $u_{ij}^{(2)}$ is the MF layer's output associated with the jth term of the ith input $x_{ij}^{(2)}$, and $b_{ij}^{(2)}$ and $m_{ij}^{(2)}$ are
the parameters of the sigmoid function. Note that the inputs of this layer involve the feedback components.
Thus, we have

$x_{ij}^{(2)}(t) = O_i^{(1)}(t) + \theta_{ij}^{(2)} u_{ij}^{(2)}(t-1)$    (7)

where $\theta_{ij}^{(2)}$ is the feedback link weight. It is clear that the activation responses $u_{ij}^{(2)}(t-1)$ at the previous time
step are utilized as one part of the current input values, which allows the RNFS predictor to memorize past
information so that it can deal with temporal issues. More information on recurrent neuro-fuzzy systems can
be found in [25].

Fig. 3. Architecture of the RNFS predictor; Z^{-1} is a unit delay operator; S is a sigmoid function; T means max-product operation; N means normalization operation.
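A brief sketch of the recurrent Layer 2 computation in Equations (6) and (7) is shown below; the feedback weight values and the loop over a toy input sequence are illustrative assumptions.

import numpy as np

def rnfs_mf_layer(o1_seq, b, m_c, theta):
    """Recurrent MF layer: at each time step the previous activations u(t-1),
    scaled by the feedback weights theta, are added to the current inputs (Eq. 7)
    before the sigmoid membership functions are evaluated (Eq. 6)."""
    u_prev = np.zeros_like(b)
    outputs = []
    for o1 in o1_seq:                         # o1 : (l,) Layer-1 outputs at time t
        x2 = o1[:, None] + theta * u_prev     # Eq. (7): input with feedback memory
        u_prev = 1.0 / (1.0 + np.exp(-b * (x2 - m_c)))   # Eq. (6)
        outputs.append(u_prev)
    return outputs                            # list of (l, m) membership arrays

rng = np.random.default_rng(2)
l, m = 4, 5
seq = rng.random((10, l))
u_seq = rnfs_mf_layer(seq, b=rng.normal(size=(l, m)),
                      m_c=rng.random((l, m)), theta=0.1 * rng.random((l, m)))
print(u_seq[-1].shape)   # (4, 5)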

III. Proposed Prediction Scheme


The proposed prediction approach is discussed in this section. We first introduce the Bayesian estimation
algorithm, and then illustrate the integration of the NFS in a recursive Bayesian estimation technique called
particle filtering. Next, an on-line update scheme is suggested to adapt the fault growth model to the new
machine dynamics quickly. Lastly, a thorough description of the algorithm steps is presented.

A. Bayesian Estimation
Bayesian estimation theory is the mechanism for updating target states via new observation information.
An overview of Bayesian estimation can be found in [16]. Since measurements for prognosis are available at
discrete times via digital devices, the dynamic state estimation in this paper will focus on discrete-time
systems.
The state estimation can be achieved recursively in two steps: prediction and update. The prediction step is
intended to obtain the prior PDF of the state at the next time instant k by using the following Chapman-Kolmogorov equation:

$p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, dx_{k-1}$    (8)

where the probabilistic process model $p(x_k \mid x_{k-1})$, which represents the evolution of the state with time, is
defined by

$x_k = f_k(x_{k-1}, \omega_{k-1})$    (9)

where $x_k$ is the system state at time k, $\omega_{k-1}$ is an i.i.d. process noise at time k-1, and $f_k$ is a possibly nonlinear
function of the state $x_{k-1}$.
When a new measurement becomes available, the update step is carried out. By considering the new
measurement, the prior state PDF, the likelihood function $p(y_k \mid x_k)$, and Bayes' rule, the posterior state PDF
can be calculated by

$p(x_k \mid y_{1:k}) = \frac{p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})}{p(y_k \mid y_{1:k-1})}$    (10)

where

$p(y_k \mid y_{1:k-1}) = \int p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})\, dx_k$    (11)

and the likelihood function $p(y_k \mid x_k)$ is defined by the following measurement model:

$y_k = h_k(x_k, v_k)$    (12)

where $y_k$ is the measurement, $v_k$ is an i.i.d. measurement noise, and $h_k$ is a possibly nonlinear function that
denotes the non-linear mapping between the system states and the noisy measurements.
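Before moving to particle filtering, the recursion in Equations (8)-(11) can be illustrated numerically on a discretized state space; the random-walk process model, the Gaussian likelihood and the grid resolution in the sketch below are assumptions chosen only to make the example self-contained, not models from this paper.

import numpy as np

grid = np.linspace(0.0, 2.0, 401)                 # discretized state values x
dx = grid[1] - grid[0]
gauss = lambda e, s: np.exp(-0.5 * (e / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Assumed models: x_k = x_{k-1} + w,  y_k = x_k + v
trans = gauss(grid[:, None] - grid[None, :], 0.05)   # p(x_k | x_{k-1}) on the grid

posterior = gauss(grid - 0.5, 0.1)                   # initial belief
posterior /= posterior.sum() * dx

for y_k in [0.55, 0.62, 0.70]:                       # incoming measurements
    prior = trans @ posterior * dx                   # Eq. (8): Chapman-Kolmogorov prediction
    likelihood = gauss(y_k - grid, 0.08)             # p(y_k | x_k) from Eq. (12)
    evidence = np.sum(likelihood * prior) * dx       # Eq. (11): normalizing constant
    posterior = likelihood * prior / evidence        # Eq. (10): Bayes' rule update
    print("state estimate:", np.sum(grid * posterior) * dx)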

B. Integration of NFS in Particle Filtering


The recursive computation of the posterior state PDF $p(x_k \mid y_{1:k})$ is more conceptual than practical, since
the integrals in Equations (8) and (11) generally do not have an analytical solution. Therefore, many estimation
methods have been developed to address this problem [16]. In this paper, a particle filtering method is
employed to approximate the optimal Bayesian solution.
In general, particle filtering is a Monte Carlo method that employs a Sequential Importance Sampling
algorithm. The posterior PDF can be approximated by a set of random samples (or particles) with associated
weights, as shown below:

$p(x_k \mid y_{1:k}) \approx \sum_{i=1}^{N} w_k^i\, \delta\!\left(x_k - x_k^i\right)$    (13)

where N is the total number of particles, $w_k^i$ is the weight of the ith particle at time k, $x_k^i$ is the ith particle at
time k, and $\delta(\cdot)$ is the Dirac delta measure.
According to the importance sampling principle, the weights are updated as

$w_k^i \propto w_{k-1}^i\, \frac{p(y_k \mid x_k^i)\, p(x_k^i \mid x_{k-1}^i)}{q(x_k^i \mid x_{k-1}^i, y_k)}$    (14)

where $q(x_k^i \mid x_{k-1}^i, y_k)$ is a proposal density called the importance density.

If we simply choose

$q(x_k^i \mid x_{k-1}^i, y_k) = p(x_k^i \mid x_{k-1}^i)$    (15)

and then substitute Equation (15) into (14), we obtain

$w_k^i \propto w_{k-1}^i\, p(y_k \mid x_k^i)$    (16)

In importance sampling theory, note that all particles are drawn from the importance
density $q(x_k^i \mid x_{k-1}^i, y_k)$. From Equation (15), we can see that the particles can also be drawn from the
probabilistic model of the state evolution $p(x_k \mid x_{k-1})$, defined by Equation (9), because we simply choose
$q(x_k^i \mid x_{k-1}^i, y_k)$ to be $p(x_k \mid x_{k-1})$.

Since the system process model (Equation (9)) is a first-order Markov model, where the system state $x_k$
depends only on the previous state $x_{k-1}$ and the process noise $\omega_{k-1}$, a modified NFS is introduced with the
process noise to represent the dynamics of the system, as shown below:

$x_k = \hat{x}_k + \omega_{k-1}$    (17)

$\hat{x}_k = g_k(x_{k-1})$    (18)

where $g_k(x_{k-1})$ is the nonlinear function realized by the NFS, as shown in Fig. 4.


Here, the adopted NFS in the proposed predictor can be considered as a single-input single-output (SISO)
neuro-fuzzy system. Since it only possesses a single input variable, instead of the four input variables in the
conventional RNN, NFS and RNFS mentioned above, all of the T nodes in Layer 3 in Fig. 2 (conventional
NFS) are superfluous and thus Layer 3 is deleted. The architecture of the NFS is fixed ahead of time [26],
e.g., the fuzzy rules are determined as:

Rule #m: IF $x$ is $A_m$ THEN $y = w_m$,

where $A_m$ is the mth fuzzy set and $w_m$ is the mth fuzzy rule consequent. Then, the parameters of the NFS are
optimized by using the training data.

Note that if the number of fuzzy if-then rules in the NFS is equal to the number of receptive field units in a
radial basis function neural network (RBFN) and sigmoidal functions are adopted in the RBFN, the NFS is
actually functionally equivalent to a sigmoidal basis function neural network [27, 28].

Fig. 4. Architecture of the NFS predictor in the particle filtering framework; S is a sigmoid function; N means normalization
operation.
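A minimal Python sketch of one prediction/update cycle with this SISO NFS process model follows; the placeholder function g(x) stands in for the trained NFS of Equation (18), and the noise levels, particle count and resampling threshold are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def g(x):
    # Placeholder for the trained SISO NFS g_k(x_{k-1}) of Eq. (18)
    return x + 0.02 * np.tanh(5.0 * x)

def pf_step(particles, weights, y_k, mu_w, sigma_w, sigma_v):
    # Draw x_k^i from p(x_k | x_{k-1}^i): NFS output plus process noise, Eqs. (17)-(18)
    particles = g(particles) + rng.normal(mu_w, sigma_w, particles.size)
    x_pred = np.sum(weights * particles)                      # prediction, Eq. (21)
    # Update weights with the Gaussian likelihood p(y_k | x_k^i), Eq. (16)
    weights = weights * np.exp(-0.5 * ((y_k - particles) / sigma_v) ** 2)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:       # effective sample size test
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles, weights = particles[idx], np.full(particles.size, 1.0 / particles.size)
    return particles, weights, x_pred

N = 500
particles = rng.normal(0.5, 0.05, N)      # x_{k-1}^i
weights = np.full(N, 1.0 / N)
particles, weights, x_pred = pf_step(particles, weights, y_k=0.56,
                                     mu_w=0.0, sigma_w=0.01, sigma_v=0.02)
print("one-step-ahead prediction:", x_pred)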

C. On-line Model Update


In Equation (17), we note that errors always occur when the NFS attempts to simulate the fault growth
process, especially when the machine dynamics change during the prediction process due to many factors,
e.g., changes in operational conditions. The process noise $\omega$ in Equation (17) is a stochastic variable that can
be assumed to follow a Gaussian distribution, $\omega \sim N(\mu, \sigma^2)$.


Therefore, the mean and standard deviation of the noise at time k can be calculated by

$\mu_k = \frac{1}{n}\sum_{i=0}^{n-1} z_{k-i}$    (19)

$\sigma_k = \sqrt{\frac{1}{n-1}\sum_{i=0}^{n-1}\left(z_{k-i} - \mu_k\right)^2}$    (20)

where $z_{k-i}$ is the residual between the actual condition data and the prediction estimate from the NFS at time
k-i, and n is the number of NFS residuals in the window.


In order to estimate the process noise in real time, a sliding window containing n NFS
residuals is utilized, as shown in Fig. 5. Once a new measurement becomes available, the window moves forward one
time step so as to include the latest model error information.

Fig. 5. Estimation of process noise via a sliding window.
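The window-based estimate of Equations (19) and (20) can be realized in a few lines, for example as in the sketch below; the window length and the helper class name are implementation choices for illustration, not prescriptions from the paper.

from collections import deque
import numpy as np

class ResidualWindow:
    """Sliding window of the last n NFS residuals z_k used to estimate
    the mean and standard deviation of the process noise (Eqs. 19-20)."""
    def __init__(self, n=10):
        self.buffer = deque(maxlen=n)

    def update(self, actual, predicted):
        self.buffer.append(actual - predicted)    # newest residual pushes out the oldest
        z = np.array(self.buffer)
        mu = z.mean()                             # Eq. (19)
        sigma = z.std(ddof=1) if len(z) > 1 else 0.0   # Eq. (20), sample standard deviation
        return mu, sigma

window = ResidualWindow(n=5)
for actual, predicted in [(0.51, 0.50), (0.53, 0.52), (0.56, 0.53)]:
    print(window.update(actual, predicted))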

D. Algorithm Steps
The detailed algorithm steps for condition prognosis may be stated as:

Step 1: The NFS is trained with available condition data to model the fault propagation process.

Step 2: The fault growth model (17), represented by the NFS and the process noise, is employed in the
particle filtering formulation to draw a set of particles. According to the values of the particles and the
current weights, the condition prediction can be carried out via:

$\tilde{x}_k = \sum_{i=1}^{N} w_{k-1}^i\, x_k^i$    (21)

When a new measurement becomes available, the weights are updated according to Equation (16). If severe
degeneracy exists, resampling is performed.

Step 3: Update the process noise using Equations (19) and (20).

Step 4: Repeat Step 2 and Step 3 until machine prognosis is complete.

Here, Step 2 can be considered as the execution of a Sequential Importance Sampling and Resampling
algorithm, which is summarized in Fig. 6. The flowchart of the proposed algorithm is shown in Fig. 7; a
code-level sketch of these steps is given after Fig. 7.

Fig. 6. Sequential Importance Sampling and Resampling algorithm, where k = 1, 2, 3, ..., T.

Fig. 7. Flowchart of the proposed algorithm for machine condition prognosis.
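As referenced above, the following sketch strings Steps 1-4 together at a high level; the helpers pf_step and ResidualWindow refer to the hypothetical sketches given earlier (or any equivalent implementation), and Step 1 is assumed to have already produced the trained NFS used inside pf_step.

import numpy as np

def run_prognosis(test_data, pf_step, window, N=500, x0=0.5):
    """High-level loop for Steps 2-4, assuming Step 1 (off-line NFS training)
    has already produced the model g(x) used inside pf_step."""
    particles = np.full(N, x0, dtype=float)            # initialize particles at the last known state
    weights = np.full(N, 1.0 / N)
    mu_w, sigma_w = 0.0, 1e-3                          # initial process-noise statistics
    predictions = []
    for y_k in test_data:                              # one new measurement per time step
        # Step 2: draw particles from the NFS-based process model, predict via Eq. (21);
        # pf_step also updates the weights (Eq. 16) and resamples if degeneracy is severe
        particles, weights, x_pred = pf_step(particles, weights, y_k,
                                             mu_w, sigma_w, sigma_v=0.02)
        predictions.append(x_pred)
        # Step 3: refresh the process-noise statistics from the newest residual (Eqs. 19-20)
        mu_w, sigma_w = window.update(y_k, x_pred)
        sigma_w = max(sigma_w, 1e-6)                   # keep the noise spread strictly positive
    return np.array(predictions)                       # Step 4: repeat until prognosis is complete

# Usage with the hypothetical pf_step and ResidualWindow sketched earlier:
# preds = run_prognosis(test_series, pf_step, ResidualWindow(n=10), x0=test_series[0])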

IV. Experimental Studies


In this section, the proposed prediction approach is implemented to carry out real-time condition prognosis
in two experimental cases: a cracked carrier plate of a helicopter's gearbox and a faulty helicopter bearing.
Three classical predictors, namely the RNN, NFS and RNFS, are also implemented for comparison purposes.
In this paper, one-step-ahead prediction is carried out to demonstrate a performance comparison between the
proposed and the three conventional predictors. Moreover, since Bayesian estimation is integrated in the
proposed predictor, the experiments were run as a Monte Carlo set of 50 realizations for the proposed predictor.

In order to evaluate the prediction performance, the root mean square error (RMSE) is defined as:

$\mathrm{RMSE} = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(y_i - \hat{y}_i\right)^2}$    (22)

where M is the total number of data points, and $y_i$ and $\hat{y}_i$ are the ith actual and predicted values, respectively.
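For reference, Equation (22) corresponds to the straightforward computation sketched below (array names are placeholders).

import numpy as np

def rmse(y_actual, y_pred):
    """Root mean square error of Eq. (22)."""
    y_actual, y_pred = np.asarray(y_actual), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_actual - y_pred) ** 2)))

print(rmse([0.51, 0.53, 0.56], [0.50, 0.52, 0.53]))   # ~0.019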

A. Cracked Carrier Plate


A-I. System Condition Monitoring
The main transmission of Blackhawk and Seahawk helicopters employs a five-planet epicyclic gear
system. Recently, a crack in the planetary carrier plate was discovered during regular maintenance, as shown
in Fig. 8. Such a crack endangers the pilot's life, with a possible loss of the aircraft, and thus a condition
prognosis scheme is needed to carry out accurate prediction of the asset's remaining useful life in real time
so that timely maintenance can be implemented before catastrophic events occur.

Fig. 8. Crack of planetary gear carrier plate

In order to derive an appropriate condition monitoring index (or feature), the gearbox is mounted on a test
cell with a seeded crack fault on the planetary gear carrier. An accelerometer is mounted at a fixed point at
position 0 on the gearbox to collect the vibration signals, as shown schematically in Fig. 9. Surrounding
the sun gear, the planet gears ride on the planetary carrier and also rotate inside the outer ring gear (or
annulus gear). Due to the complex operational environment and the large number of noise sources in the
system, a blind deconvolution de-noising algorithm has been developed to improve the signal-to-noise ratio.
The sideband ratio (SBR) is adopted as the condition monitoring index; it is the ratio between the energy of
the NonRMC sidebands and that of all sidebands:

$\mathrm{SBR}(X) = \frac{\sum_{g \in \mathrm{NonRMC}} X_g}{\sum_{g \in \mathrm{NonRMC}} X_g + \sum_{g \in \mathrm{RMC}} X_g}$    (23)

where RMC denotes the Regular Meshing Components (or apparent sidebands) and NonRMC denotes the
Non-Regular Meshing Components.
Since the focus of this paper is machine condition prediction, we will not describe the de-noising algorithm
and the feature extraction in detail; interested readers are referred to [21, 22].
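As an illustration of Equation (23) only, the small sketch below computes the sideband ratio from the spectral energies of the regular and non-regular meshing components; how those components are identified from the de-noised spectrum is covered in [21, 22], so the two input arrays here are placeholders.

import numpy as np

def sideband_ratio(non_rmc_energy, rmc_energy):
    """Eq. (23): energy of the non-regular meshing components divided by
    the total sideband energy (non-regular plus regular)."""
    non_rmc = float(np.sum(non_rmc_energy))
    rmc = float(np.sum(rmc_energy))
    return non_rmc / (non_rmc + rmc)

# Placeholder sideband energies extracted from a de-noised vibration spectrum
print(sideband_ratio([0.12, 0.08, 0.05], [0.90, 0.75, 0.60]))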

Fig. 9. Configuration of an epicyclic gear system

A-II. Performance Evaluation


The initial length of the seeded crack on the carrier plate is 1.344 inches, and it grows as the gearbox
continues to operate. The gearbox operates for a period of 1000 Ground-Air-Ground (GAG) cycles, and
each cycle lasts about 3 minutes at three different torque levels: 20%, 40% and around 100%. For each GAG
cycle, the vibration feature (or monitoring index) at the 20% torque level is used for training the RNN, NFS and
RNFS. The features at the 40% and 100% torque levels are used for testing, respectively. The time
step is set as one GAG cycle.
Fig. 10 shows the prediction results for the monitoring index (or condition indicator) of the damaged
gearbox operating at the 40% torque level. It is seen that the proposed approach captures the system's dynamics
quickly and accurately, and the prediction results are quite close to the actual values. The prediction accuracy
of the RNN, NFS and RNFS is clearly inferior to that of the proposed approach, especially at the beginning of
the testing phase.

The prediction at the 100% torque level is shown in Fig. 11. From Fig. 11(a), it is seen that the
prediction results of the RNN cannot match the actual values since it fails to capture the system's new
dynamics. The NFS, RNFS and proposed approach exhibit better prediction accuracy than the RNN.
Basically, the NFS and RNFS can track the fault propagation trend well, but the prediction accuracy for both
is lower than that of the proposed approach, particularly at the end of the testing phase.

Fig. 10. Prediction results for the monitoring index at the 40% torque level: (a) RNN; (b) NFS; (c) RNFS; (d) one of 50 realizations using the proposed approach.

Fig. 11. Prediction results for the monitoring index at the 100% torque level: (a) RNN; (b) NFS; (c) RNFS; (d) one of 50 realizations using the proposed approach.

Table I gives the prediction performance comparison in terms of the RMSE metric, where the results of the
proposed approach are the average RMSE for 50 realizations. It is clear that the prediction accuracy of the
proposed approach is superior to that of the RNN, NFS and RNFS.

Table I. Prediction RMSE comparison for the cracked carrier plate. The results of the proposed approach are the average RMSE over 50 realizations.

                      RNN      NFS      RNFS     Proposed Approach
40% torque level      0.1477   0.0743   0.0693   0.0612
100% torque level     0.4415   0.1604   0.1499   0.1369

B. Faulty Helicopter Oil-Cooler Bearing

A helicopter oil-cooler bearing with an unknown fault mode is utilized in this section to evaluate the proposed
prediction approach with a different time scale and monitoring index. The data are provided with the information
that the bearing is faulty, but without detailed fault information. The predictor is trained with a monitoring
index (the sum of weighted frequency components related to harmonics of the frequency of interest) acquired
from a faulty bearing with spalling. The testing monitoring index is the Root Mean Square (RMS) value of
the vibration signal in a frequency band, which is acquired from an accelerometer on board a helicopter. Fig.
12 exhibits the prediction results. It can be seen that the proposed predictor tracks the system response
very closely except at the end of the testing phase, which may be caused by the intense dynamic
fluctuations of the system during that period of time. The prediction accuracy of the proposed predictor is
higher than that of the three classical predictors, as shown in Table II.

Fig. 12. Prediction results for the monitoring index of the faulty bearing: (a) RNN; (b) NFS; (c) RNFS; (d) one of 50 realizations using the proposed approach.

Table II. Prediction RMSE comparison for the faulty bearing. The results of the proposed approach are the average RMSE over 50 realizations.

RNN      NFS      RNFS     Proposed Approach
0.1867   0.0619   0.0715   0.0506

V. Conclusions
This paper proposes an integrated machine condition prognosis approach based on neuro-fuzzy systems
(NFS) and Bayesian algorithms. The NFS, trained as a prognostic model, is integrated in a recursive
Bayesian algorithm called particle filtering to simulate the machine fault propagation process. Through the
estimation of the Probability Density Function (PDF) of the NFS's residuals between the actual and
predicted condition data, an on-line update scheme is developed to adapt this model to new machine
dynamics quickly. The prediction estimates are calculated via the particles and associated weights of the
particle filtering scheme. On-line condition data can be used to update these weights in order to improve the
prediction accuracy. Experimental vibration data from the main gearbox of a helicopter subjected to a seeded
carrier crack fault and from a faulty helicopter bearing are utilized to evaluate the prediction performance of the
proposed approach. The results show that the prediction accuracy of the proposed predictor is better than that
of the conventional predictors, which suggests that the proposed predictor has potential as a new tool for
machine condition prognosis.


References
1. G. Vachtsevanos, F. Lewis, M. Roemer, A. Hess, and B. Wu, Intelligent Fault Diagnosis and Prognosis for Engineering Systems. USA: Wiley, 2006.
2. P. Tse, D. Atherton, "Prediction of machine deterioration using vibration based fault trends and recurrent neural networks," J. Vib. Acoust., vol. 121, pp. 355-362, July 1999.
3. F. Zhao, J. Chen, L. Guo, X. Lin, "Neuro-fuzzy based condition prediction of bearing health," J. Vib. Control, vol. 15, no. 7, pp. 1079-1091, 2009.
4. W. Wang, F. Golnaraghi, F. Ismail, "Prognosis of machine health condition using neuro-fuzzy systems," Mech. Syst. Signal Process., vol. 18, pp. 813-831, 2004.
5. J. Liu, W. Wang, F. Golnaraghi, "A multi-step predictor with a variable input pattern for system state forecasting," Mech. Syst. Signal Process., vol. 23, pp. 1586-1599, 2009.
6. P. Wang, G. Vachtsevanos, "Fault prognostics using dynamic wavelet neural networks," Artif. Intell. Eng. Des. Anal. Manuf., vol. 15, pp. 349-365, 2001.
7. G. Wen, X. Zhang, "Prediction method of machinery condition based on recurrent neural networks models," J. Appl. Sci., vol. 4, pp. 675-679, 2004.
8. J. P. Cusumano, D. Chelidze, A. Chatterjee, "A dynamical systems approach to damage evolution tracking, part 2: model-based validation and interpretation," J. Vib. Acoust., vol. 124, pp. 258-264, 2002.
9. B. Samanta, C. Nataraj, "Prognostics of machine condition using soft computing," Robot. Comput.-Integr. Manuf., vol. 24, pp. 816-823, 2008.
10. G. Niu, B. Yang, "Dempster-Shafer regression for multi-step-ahead time-series prediction towards data-driven machinery prognosis," Mech. Syst. Signal Process., vol. 23, pp. 740-751, 2009.
11. V. Tran, B. Yang, A. Tan, "Multi-step ahead direct prediction for machine condition prognosis using regression trees and neuro-fuzzy systems," Expert Syst. Appl., vol. 36, pp. 9378-9387, 2009.
12. W. Wang, "An adaptive predictor for dynamic system forecasting," Mech. Syst. Signal Process., vol. 21, pp. 809-823, 2007.
13. J. Lee, "A systematic approach for developing and deploying advanced prognostics technologies and tools: methodology and application," in Proc. of the Second World Congress on Engineering Asset Management, Harrogate, UK, pp. 1195-1206, 2007.
14. A. K. S. Jardine, D. Lin, D. Banjevic, "A review on machinery diagnostics and prognostics implementing condition-based maintenance," Mech. Syst. Signal Process., vol. 20, pp. 1483-1510, 2006.
15. C. Andrieu, A. Doucet, E. Punskaya, "Sequential Monte Carlo methods for optimal filtering," in: A. Doucet, N. de Freitas, N. Gordon (Eds.), Sequential Monte Carlo Methods in Practice, Springer-Verlag, NY, 2001.
16. M. S. Arulampalam, S. Maskell, N. Gordon, T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Trans. Signal Process., vol. 50, pp. 174-188, 2002.
17. A. Doucet, S. Godsill, C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Stat. Comput., vol. 10, no. 3, pp. 197-208, 2000.
18. M. Orchard, "A Particle Filtering-based Framework for On-line Fault Diagnosis and Failure Prognosis," Ph.D. dissertation, Georgia Inst. Technol., Atlanta, GA, 2007.
19. M. Orchard, G. Vachtsevanos, "A particle filtering approach for on-line fault diagnosis and failure prognosis," Trans. Inst. Meas. Control, vol. 31, no. 3-4, pp. 221-246, June 2009.
20. B. Saha, K. Goebel, S. Poll, J. Christophersen, "Prognostics methods for battery health monitoring using a Bayesian framework," IEEE Trans. Instrum. Meas., vol. 58, no. 2, pp. 291-296, February 2009.
21. B. Zhang, T. Khawaja, R. Patrick, G. Vachtsevanos, M. Orchard, A. Saxena, "Application of blind deconvolution denoising in failure prognosis," IEEE Trans. Instrum. Meas., vol. 58, no. 2, pp. 303-310, February 2009.
22. B. Zhang, T. Khawaja, R. Patrick, G. Vachtsevanos, "Blind deconvolution denoising for helicopter vibration signals," IEEE/ASME Trans. Mechatronics, vol. 13, no. 5, pp. 558-565, 2008.
23. D. Mandic and J. Chambers, Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. England: Wiley, 2001.
24. C. Lin and C. Lee, Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems. Upper Saddle River, NJ: Prentice-Hall, 1996.
25. C.-H. Lee and C.-C. Teng, "Identification and control of dynamic systems using recurrent fuzzy neural networks," IEEE Trans. Fuzzy Syst., vol. 8, no. 4, August 2000.
26. J. Mendel, Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions. Upper Saddle River, NJ: Prentice-Hall, 2001.
27. M. Golob, B. Tovornik, "Input-output modelling with decomposed neuro-fuzzy ARX model," Neurocomputing, vol. 71, pp. 875-884, 2008.
28. J.-S. R. Jang, C.-T. Sun, "Functional equivalence between radial basis function networks and fuzzy inference systems," IEEE Trans. Neural Netw., vol. 4, pp. 156-159, 1993.

