
GE Research & Development Center
______________________________________________________________

Applied Neural Networks for Predicting Approximate Structural Response Behavior Using Learning and Design Experience

S Nagendra, J Laflen, and A Wafa

97CRD117, September 1997

Class 1

Technical Information Series

Copyright 1997 General Electric Company. All rights reserved.

Corporate Research and Development


Technical Report Abstract Page

Title

Applied Neural Networks for Predicting Approximate Structural Response Behavior Using Learning and Design Experience

Author(s): S Nagendra, J Laflen, A Wafa
Component: Engineering Mechanics Laboratory
Phone: (518) 387-6280; 8*833-6280
Report Number: 97CRD117
Date: September 1997
Number of Pages: 11
Class: 1
Key Words: neural networks, turbine blades, frequency assessment, learning, design, historical training

Neural networks are a class of synergistic computational paradigms that can be distinguished from
others by their inherent fine grain parallelism, distributed adaptation, and biological inspiration. Neural
networks offer solutions to problems that are very difficult to solve using traditional algorithmic
decomposition techniques. The potential benefits of neural nets are the ability to learn from interaction
with the environment (e.g., in-situ data acquisition), few restrictions on the functional relationships (e.g.,
inverse problems), an inherent ability to generalize training information to similar situations (e.g.,
response surface approximations), and inherently parallel design and load distribution (e.g., multi-modal
response prediction). While the initial motivation for developing artificial neural nets was to create
computer models that could imitate certain brain functions, neural nets can be thought of as another way
of developing response surfaces. In the present study, all the above aspects of neural nets are evaluated in a practical industrial application: estimating the multi-modal frequencies of different families of aircraft engine turbine blades.

Manuscript received August 7, 1997

Applied Neural Networks for Predicting Approximate Structural Response Behavior Using Learning and Design Experience
S. Nagendra
Engineering Mechanics Lab.
GE Corporate R&D

J. Laflen
TaCOE
GE Aircraft Engines

A. Wafa
Engineering Mechanics Lab.
GE Corporate R&D

Introduction
This report documents a design methodology for predicting turbine blade frequencies using neural networks [1,2] as a design tool that draws on existing design information to develop a design response behavior prediction model. Previous work on the application of neural nets to predicting structural engineering response has hinted at the promise of neural nets as function approximators, optimizers, and response-surface-based design tools [3-7]. A further aim of this work is to achieve peak performance on current-generation machines by exploiting their super-scalar processors and fast (but limited) cache memories. To that end, this effort implements a robust Back-Propagation (BP) algorithm, rearranged to operate only on matrices [8,9] and coupled with fast (and portable) matrix multiplication code, resulting in a generic test bed entitled the Turbine Blade Quick Frequency Prediction Testbed (TBQFPT) with embedded matrix back-propagation (EMBP).
A practical design problem of interest to the industry is how to use available design
knowledge as well as experimental data amassed over the years to predict response
behavior of in-production or future products. The present study uses a turbine blade
example (Figures 1, 2) to describe and evaluate the basic design response prediction
procedure for a family of eight aircraft engines (denoted as A-H). The adopted approach
utilizes design information such as the length of the blade, chord length, wall thickness,
rib thickness, dovetail and shank heights, and material properties (calculated at various
span locations) as initial variables. The neural net is trained with respect to response
quantities (e.g., modal frequencies) measured during in-situ bench tests and used as target
responses. The trained net is then used to predict frequencies (interpolate or extrapolate)
for the trained modes. Currently up to four critical modes can be evaluated by the system
outlined in Figure 3.
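As a rough illustration of the arrangement in Figure 3, the sketch below (Python, not the actual TBQFPT code) trains one small back-propagation net per target mode, all fed the same preliminary design parameters; the names MODES and train_single_net are illustrative placeholders, not identifiers from the report.

MODES = ("axial", "flexural", "torsional", "two_stripe")

def train_all_modes(design_params, bench_frequencies, train_single_net):
    """design_params: P x N_0 array of blade design variables;
    bench_frequencies: dict mapping each mode name to its P normalized targets;
    train_single_net: any routine returning a trained net for one mode."""
    return {mode: train_single_net(design_params, bench_frequencies[mode])
            for mode in MODES}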
Method of Approach
This section describes the basic ingredients needed to write the given engine family data in matrix form and present it to the neural net for training. Consider the status $s_n^{(p)}(l)$ of neuron n in layer l for pattern p (Figure 4): the pattern array consists of the physical quantities that influence the frequency or response characteristic of interest. For several known patterns, the statuses can be written efficiently in matrix form as:

S (1) T ( l)
2T
S ( ) ( l)
S ( l) ==
=


S ( P ) T ( l)

S (1) ( l) S (1) ( l)
2
1
2
(
)
(
S ( l) S 2) ( l)
2
1


S ( P ) ( l) S ( P ) ( l)
2
1

1
 S N( ) ( l)
1

2)
(
 S N ( l)
1



P
 S N( ) ( l)
1

(1)

Row p of matrix S(l) contains the status of all the neurons in layer l when pattern p is applied to the network. For simplicity, the input patterns are similarly organized in matrix S(0). The weights and propagated errors of the associated neurons can be arranged as

$$W(l) = \begin{bmatrix} \mathbf{w}^{(1)}(l) & \mathbf{w}^{(2)}(l) & \cdots & \mathbf{w}^{(N_l)}(l) \end{bmatrix} = \begin{bmatrix} w_1^{(1)}(l) & w_1^{(2)}(l) & \cdots & w_1^{(N_l)}(l) \\ w_2^{(1)}(l) & w_2^{(2)}(l) & \cdots & w_2^{(N_l)}(l) \\ \vdots & \vdots & & \vdots \\ w_{N_{l-1}}^{(1)}(l) & w_{N_{l-1}}^{(2)}(l) & \cdots & w_{N_{l-1}}^{(N_l)}(l) \end{bmatrix} \tag{2}$$

$$\Delta(l) = \begin{bmatrix} \boldsymbol{\delta}^{(1)T}(l) \\ \boldsymbol{\delta}^{(2)T}(l) \\ \vdots \\ \boldsymbol{\delta}^{(P)T}(l) \end{bmatrix} = \begin{bmatrix} \delta_1^{(1)}(l) & \delta_2^{(1)}(l) & \cdots & \delta_{N_l}^{(1)}(l) \\ \delta_1^{(2)}(l) & \delta_2^{(2)}(l) & \cdots & \delta_{N_l}^{(2)}(l) \\ \vdots & \vdots & & \vdots \\ \delta_1^{(P)}(l) & \delta_2^{(P)}(l) & \cdots & \delta_{N_l}^{(P)}(l) \end{bmatrix} \tag{3}$$

where column n of matrix W(l) contains the weights of neuron n of layer l, and row p of matrix Δ(l) contains the error propagated to layer l when pattern p is applied. In addition, a matrix T with the same structure as S(L) is needed to contain the target values, and a matrix ΔW(l) with the same structure as W(l) to contain the weight variations. Thus the basic neural network structure can be viewed as a multi-layer perceptron with L layers and no connections crossing a layer (this simplifies the structure of the embedded MBP). A simple notational convention is used to describe the network in terms of tangible physical and mathematical quantities: italics for scalars such as indices or constants (e.g., i, j, n, l, N, L), bold lowercase letters for vectors (e.g., x, y, w), and capital letters for matrices (e.g., A, B).
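To make the matrix organization of Eqs. (1)-(3) concrete, the following minimal sketch (Python/NumPy, not the report's EMBP code) allocates the pattern, weight, bias, and error matrices for an illustrative layer layout; the layer widths and the random initialization are assumptions, not values taken from the report.

import numpy as np

P = 8                  # number of patterns (e.g., engine families A-H)
sizes = [6, 4, 1]      # illustrative layer widths N_0, N_1, N_2 (assumed)

rng = np.random.default_rng(0)
S = [np.zeros((P, n)) for n in sizes]                 # S(l): row p = status of layer l for pattern p
W = [rng.normal(0.0, 0.1, (sizes[l - 1], sizes[l]))   # W(l): column n = weights of neuron n
     for l in range(1, len(sizes))]
b = [np.zeros(n) for n in sizes[1:]]                  # bias vector for each layer
D = [np.zeros((P, n)) for n in sizes[1:]]             # Delta(l): row p = errors for pattern p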
Neural Layers
The input of the network is indicated as layer 0; it contains no real neurons because its purpose is simply to distribute the design parameters to the neurons of the first hidden layer. The hidden layers are numbered from 1 to L-1, and the output layer is layer L. In general, the lth layer contains $N_l$ neurons; therefore, the input layer has $N_0$ elements and the output layer has $N_L$ neurons (Figure 4). A neuron n in layer l is connected to all the neurons in layer l-1 through exactly $N_{l-1}$ connections, each one associated with a weight. The weights of neuron n are organized as a vector $\mathbf{w}^{(n)}(l)$ (Figure 4), with a corresponding bias $b_n(l)$.

Feed-Forward Phase
Suppose that our design knowledge data consist of P patterns $\mathbf{s}^{(p)}$, $p = 1, \dots, P$, where a pattern in the present case is the set of physical quantities (e.g., blade length, material parameters, etc.) that influence the frequency of the turbine blade. If we apply a pattern $\mathbf{s}^{(p)}$ to the input, it propagates from input to output through every layer. Each layer responds with a precise pattern that we call the status of the layer. In general, the lth layer will be in status $\mathbf{s}^{(p)}(l)$ and the output of the network will be $\mathbf{s}^{(p)}(L)$. The status of neuron n in layer l can then be computed with the feed-forward rule:
$$s_n^{(p)}(l) = f\!\left( \sum_{i=1}^{N_{l-1}} s_i^{(p)}(l-1)\, w_i^{(n)}(l) + b_n(l) \right) \tag{4}$$

$$s_n^{(p)}(l) = f\!\left( \mathbf{s}^{(p)T}(l-1)\, \mathbf{w}^{(n)}(l) + b_n(l) \right) = f\!\left( \sum_{i=1}^{N_{l-1}} s_i^{(p)}(l-1)\, w_i^{(n)}(l) + b_n(l) \right) \tag{4a}$$

where f(x) is the activation function of the neuron; in the present case f(x) = tanh(x) is used.
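In matrix form, Eq. (4a) amounts to one matrix product and one tanh evaluation per layer. The sketch below (an illustrative Python/NumPy transcription, not the embedded MBP implementation) propagates all P patterns at once:

import numpy as np

def feed_forward(S0, W, b):
    """S0: P x N_0 input patterns; W[l]: N_{l-1} x N_l weights; b[l]: N_l biases.
    Returns the status matrices S(0)..S(L) of Eq. (1)."""
    S = [S0]
    for Wl, bl in zip(W, b):
        S.append(np.tanh(S[-1] @ Wl + bl))   # Eq. (4a) applied to a whole pattern matrix
    return S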
Prediction-Response Error Computation
The total error E is defined as the normalized mean squared difference (MSE) between the output of the network when pattern $\mathbf{s}^{(p)}$ is applied and the corresponding desired target response vector $\mathbf{t}^{(p)}$:

$$E = \frac{1}{P\,N_L} \sum_{p=1}^{P} \left\| \mathbf{t}^{(p)} - \mathbf{s}^{(p)}(L) \right\|^2 = \frac{1}{P\,N_L} \sum_{p=1}^{P} \sum_{n=1}^{N_L} \left( t_n^{(p)} - s_n^{(p)}(L) \right)^2 \tag{5}$$
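Eq. (5) translates directly into code. The helper below is an illustrative Python/NumPy sketch, with T and S_L the P x N_L target and output matrices:

import numpy as np

def total_error(T, S_L):
    """Normalized mean squared error of Eq. (5)."""
    P, N_L = T.shape
    return float(np.sum((T - S_L) ** 2)) / (P * N_L)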

Back-Propagating the Error


In order to minimize E with respect to the weights, the weights have to be adjusted iteratively, using the error function as a guide toward the minimum error. A simple method is to start from a random point in the weight space and descend step by step toward a minimum of E. If the minimum is satisfactory, the algorithm stops; otherwise another random point is chosen and the descent is repeated. To choose the right direction, the gradient of E (i.e., ∇E) can be computed at each step. In the steepest descent approach, the direction of maximum growth of E is given by ∇E, and the step size is a parameter denoted herein as η. The basic approach is described below as pseudo-code outlining the individual phases of the algorithm.

Phase I: Feed-Forward
for each layer l := 1 to L
  for each neuron n := 1 to N_l
    for each pattern p := 1 to P

$$s_n^{(p)}(l) = f\!\left( \mathbf{s}^{(p)T}(l-1)\, \mathbf{w}^{(n)}(l) + b_n(l) \right) \tag{6a}$$

Phase II: Error Computation
for each neuron in the output layer n := 1 to N_L
  for each pattern p := 1 to P

$$\delta_n^{(p)}(L) = \frac{2}{P\,N_L} \left[ t_n^{(p)} - s_n^{(p)}(L) \right] \left[ 1 - s_n^{(p)}(L)^2 \right] \tag{6b}$$

Phase III: Error Back-Propagation
for each layer l := L-1 to 1
  for each neuron n := 1 to N_l
    for each pattern p := 1 to P

$$\delta_n^{(p)}(l) = \left[ 1 - s_n^{(p)}(l)^2 \right] \sum_{j=1}^{N_{l+1}} \delta_j^{(p)}(l+1)\, w_n^{(j)}(l+1) \tag{6c}$$

Phase IV: Step Computation
for each layer l := 1 to L
  for each neuron n := 1 to N_l

$$\Delta b_n(l) = \eta \sum_{p=1}^{P} \delta_n^{(p)}(l) \tag{6d}$$

    for each weight i := 1 to N_{l-1}

$$\Delta w_i^{(n)}(l) = \eta \sum_{p=1}^{P} \delta_n^{(p)}(l)\, s_i^{(p)}(l-1) \tag{6e}$$
Phase V: Weight Updating
for each layer l := 1 to L
  for each neuron n := 1 to N_l

$$b_n^{\text{new}}(l) = b_n^{\text{old}}(l) + \Delta b_n(l)$$

    for each weight i := 1 to N_{l-1}

$$w_i^{(n),\text{new}}(l) = w_i^{(n),\text{old}}(l) + \Delta w_i^{(n)}(l) \tag{6f}$$
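Collapsed into matrix operations, Phases I-V reduce to a few lines per iteration. The following batch update is an illustrative Python/NumPy sketch, not the report's EMBP code; the step size eta is assumed to enter the step computation as written in Eqs. (6d)-(6e).

import numpy as np

def train_step(S0, T, W, b, eta=0.1):
    """One batch back-propagation step over all P patterns (Phases I-V)."""
    # Phase I: feed-forward, Eq. (6a)
    S = [S0]
    for Wl, bl in zip(W, b):
        S.append(np.tanh(S[-1] @ Wl + bl))

    P, N_L = T.shape
    # Phase II: output-layer error, Eq. (6b); (1 - S^2) is the tanh derivative
    delta = (2.0 / (P * N_L)) * (T - S[-1]) * (1.0 - S[-1] ** 2)

    W_new, b_new = list(W), list(b)
    for l in range(len(W) - 1, -1, -1):
        dW = eta * S[l].T @ delta            # Phase IV: weight step, Eq. (6e)
        db = eta * delta.sum(axis=0)         # Phase IV: bias step, Eq. (6d)
        if l > 0:                            # Phase III: back-propagate the error, Eq. (6c)
            delta = (delta @ W[l].T) * (1.0 - S[l] ** 2)
        W_new[l] = W[l] + dW                 # Phase V: weight update, Eq. (6f)
        b_new[l] = b[l] + db
    return W_new, b_new, S[-1]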

Search Momentum and Acceleration Term


During steepest descent, a sudden change in the direction of consecutive gradients generates a zig-zag effect that should be smoothed in order to proceed more quickly toward the minimum [10,11]. One simple way to accomplish this is to remember the direction of the last step and use it to modify the direction of the current one:
$$\Delta b_n^{\text{new}}(l) = \eta \sum_{p=1}^{P} \delta_n^{(p)}(l) + \alpha\, \Delta b_n^{\text{old}}(l) \tag{7}$$

$$\Delta w_i^{(n),\text{new}}(l) = \eta \sum_{p=1}^{P} \delta_n^{(p)}(l)\, s_i^{(p)}(l-1) + \alpha\, \Delta w_i^{(n),\text{old}}(l) \tag{8}$$

The parameter α regulates the influence of the previous steps on the current one. Even though it can heavily modify the behavior of the algorithm, for practical reasons it is often set to a fixed value (usually 0.9).
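Eqs. (7)-(8) simply blend the previous step into the current one. A minimal sketch (illustrative; the default alpha = 0.9 follows the value quoted above):

def momentum_step(current_step, previous_step, alpha=0.9):
    """Smoothed step of Eqs. (7)-(8): the current gradient term plus alpha
    times the previous delta-W (or delta-b); works on NumPy arrays or scalars."""
    return current_step + alpha * previous_step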
Current optimization methods for error minimization are reliable, but their convergence
to the minimum error is slow. It is essential to have quick convergence. Based on the
work performed in reference [10], a very simple improvement was implemented. The
main purpose of the acceleration technique is to vary the learning step and the momentum
during the learning in order to adapt them on-line to the shape of the error surface.
The technique can be briefly summarized as follows:
$$\begin{aligned}
&\text{(a)}\quad \eta(0) = \eta_0; \quad \alpha(0) = \alpha_0 \\
&\text{(b)}\quad \text{if } E(t) < E(t-1) \text{ then } \eta(t) = \varphi\,\eta(t-1);\ \alpha(t) = \alpha_0 \\
&\phantom{\text{(b)}\quad} \text{if } E(t) < (1+\varepsilon)\,E(t-1) \text{ then } \eta(t) = \eta(t-1);\ \alpha(t) = 0 \\
&\phantom{\text{(b)}\quad} \text{if } E(t) > (1+\varepsilon)\,E(t-1) \text{ then discard the last step; } \eta(t) = \psi\,\eta(t-1);\ \alpha(t) = 0
\end{aligned} \tag{9}$$

where t is the iteration counter, φ > 1 is the acceleration factor, ψ < 1 is the deceleration factor, and 0.01 ≤ ε ≤ 0.05. If the error decreases, the learning step is increased by a factor φ (because the direction is correct and we want to go faster) and the momentum is retained (because it will aid convergence). Otherwise, if the step produces an error greater than the previous value (by more than a few percent), the learning rate is decreased by a factor ψ (for the reasons described before) and the momentum is discarded (because it would be more misleading than beneficial). If the error increased only a few percent, the step is accepted; some experience shows that this may be advantageous in escaping from a local minimum. The present method is quite robust with respect to the starting step; however, if the initial step is too large, some iterations are wasted at the beginning of the learning until a good η is found. On the other hand, this technique is quite sensitive to the acceleration and deceleration parameters, as they can heavily influence the convergence speed.
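The rule of Eq. (9) can be written as a small helper. In the sketch below the default phi, psi, and eps are assumed illustrative values; the report only constrains phi > 1, psi < 1, and 0.01 <= eps <= 0.05.

def adapt_learning(E_now, E_prev, eta, alpha0, phi=1.1, psi=0.5, eps=0.03):
    """Return (eta, alpha, keep_step) for the next iteration per Eq. (9)."""
    if E_now < E_prev:                    # error decreased: accelerate, keep momentum
        return phi * eta, alpha0, True
    if E_now < (1.0 + eps) * E_prev:      # error grew only a few percent: accept the step,
        return eta, 0.0, True             # but drop the momentum
    return psi * eta, 0.0, False          # error grew too much: discard the step, decelerate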
Amplified Acceleration Technique (AAT)
The notion of the amplified acceleration [12] is that instead of setting the search
acceleration and deceleration factors to fixed values, we change them at every learning
step using variable amplification factors
$$\varphi(t) = 1 + \frac{K_a}{K_a + \eta(t-1)}, \qquad \psi(t) = \frac{K_d}{K_d + \eta(t-1)} \tag{10}$$
where K_a and K_d are two constants (for learning acceleration and deceleration, respectively). AAT has the same number of parameters as the original acceleration technique. The TBQFPT procedure contains an improvement of the acceleration technique based on references [14,15,16]. In the acceleration phase, if η(t-1) is small (η(t-1) << K_a) then φ(t) ≈ 2 and η(t) ≈ 2η(t-1); if instead η(t-1) >> K_a, then φ(t) ≈ 1 and η(t) ≈ η(t-1). The heuristic behind this is the following: we want to increase η rapidly if it is very small, but we want to be cautious if η is very large. The deceleration phase shows the opposite behavior: if η(t-1) >> K_d then η(t) ≈ K_d; in other words, the learning step is immediately (and drastically) shortened. If η(t-1) << K_d, then η(t) ≈ η(t-1); in this case we are probably stuck in a local minimum (because the total error has increased even though η is very small), so leaving the step almost unchanged can help to escape from it.
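Eq. (10) and the limiting behavior just described translate directly into code (an illustrative sketch; K_a and K_d must still be chosen by the user):

def amplified_factors(eta_prev, Ka, Kd):
    """Variable acceleration/deceleration factors of Eq. (10)."""
    phi = 1.0 + Ka / (Ka + eta_prev)   # ~2 when eta_prev << Ka, ~1 when eta_prev >> Ka
    psi = Kd / (Kd + eta_prev)         # ~1 when eta_prev << Kd, ~Kd/eta_prev when eta_prev >> Kd
    return phi, psi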
Results
The basic method utilizes design knowledge based on material properties and geometric
properties of individual blades that can even remotely influence the frequencies of the
blade. Design parameters like blade height and chord length, measured at different span
locations, are used as initial variables. The target frequencies are the first flexure, axial
and torsion modes and the higher two stripe mode. The net is constructed based on the
flow of information shown in Figure 3. All the frequencies are normalized and the net
trained in a normalized design space that happens to be a unit hyper-cube. Preliminary
results indicate that the good correlation can be arrived for different frequencies using the
same set of design variables and individual target modal frequencies.
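The report does not spell out the normalization, but the unit hyper-cube description and the 0-to-1 range of the bench values in Tables 1-4 are consistent with a per-variable min-max scaling over the engine families, sketched below (Python/NumPy; an assumption rather than the documented procedure):

import numpy as np

def to_unit_hypercube(X):
    """Scale each column of X (design variables or frequencies) to [0, 1]
    over the available engine families; also return the bounds needed to map
    new designs into the same normalized space."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo), lo, hi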
Results for the initial data used to train the net for 500-600 iterations (from a computational perspective, about 2-3 CPU seconds) are shown in Tables 1-4 for the modes considered. Bench frequencies are normalized modal frequencies measured during in-situ tests for representative blades of the individual engine families. The NN-PRED values are the neural network predictions based on the trained data. Among the engine data, the training set comprises engine families (A,B,D,E,F,G,H). After training, the weights were validated against a validation set comprising engine families (A,D,E,F,H). The validated weights were then used to predict the normalized frequencies of engine family C.

Thus a different set of weights is arrived at for each mode, and good correlation is achieved between the bench frequencies (from A-H) and the neural net predictions. The maximum error, about 4.26%, is observed in predicting the normalized axial frequencies. The predictions of the neural net are well within acceptable error limits from a practical perspective.
Conclusion
The neural network test bed (TBQFPT) provides a basic framework for estimating frequencies based on available design information. Design information from analysis as well as experiments can be used to capture design details and design peculiarities as a design pattern. A robust neural net is trained on the design pattern against the available observed frequency modes. The neural nets are then validated, and the validated set of weights is used to predict the frequencies of different engines. Good correlation (within 5%) was observed between the bench frequencies and the neural net predictions using the established procedure. This indicates that neural nets can be trained to predict approximate structural response functions and can be extremely useful in preliminary design.
Planned Future Research
The basic approach outlined here will be enhanced in the near future to include higher modes and new design data, and will be evaluated for blisks (blade + disk combinations). In addition to aircraft engine turbine blades, industrial turbine blades will also be included as part of the design data. Correlation with commercially available neural network tools such as MATLAB [17] is planned.
Acknowledgements
The present work was carried out as part of the design productivity enhancement initiative in GE Aircraft Engines. The first and third authors would like to thank GE CRD for the time provided to work on this particular aspect through the design productivity initiative.

Table 1. Mode: Normalized Flexural Frequencies

FAMILY    BENCH     NN-PRED
A         0.3402    0.3401
B         0.6954    0.7004
C         1.0000    0.9479
D         0.6876    0.7085
E         0.0577    0.0571
F         0.0164    0.0142
G         0.4103    0.4094
H         0.0000    0.0011
ERROR %   Max: 3.52   Min: 0.001

Table 2. Mode: Normalized Axial Frequencies

FAMILY    BENCH     NN-PRED
A         0.3454    0.3452
B         0.7668    0.7767
C         1.0000    0.9294
D         0.7442    0.7809
E         0.0464    0.0466
F         0.0000    -0.0042
G         0.4169    0.4042
H         0.0206    0.0232
ERROR %   Max: 4.26   Min: 0.006

Table 3. Mode: Normalized Torsional Frequencies

FAMILY    BENCH     NN-PRED
A         0.3934    0.3966
B         0.9820    0.9816
C         1.0000    0.9862
D         0.9871    0.9793
E         0.1988    0.1908
F         0.1854    0.1976
G         0.4010    0.4082
H         0.0000    -0.0091
ERROR %   Max: 1.43   Min: 0.02

Table 4. Mode: Normalized Two-Stripe Frequencies

FAMILY    BENCH     NN-PRED
A         0.1486    0.1489
B         0.3560    0.3569
C         1.0000    0.9654
D         0.5107    0.5207
E         0.0000    -0.0014
F         0.3310    0.3310
G         0.6003    0.5996
H         0.2381    0.2384
ERROR %   Max: 1.61   Min: 0.01

Figure 1. Design Airfoil Schematic

Figure 2. Design Blade Schematic.

[Figure 3 (schematic): preliminary design parameters feed four neural nets whose target frequencies are the axial (Neural Net 1), flexural (Neural Net 2), torsional (Neural Net 3), and two-stripe (Neural Net 4) modes.]

Figure 3. Neural Net Testbed: TBQFPT Schematic

[Figure 4 (schematic): the input pattern S(p)(0) is distributed to layer 1 and propagated through layers 1, ..., k, ..., L; each neuron n in a layer has a weight vector w(n) and a bias b(n), and the network output is S(p)(L).]

Figure 4. A Feed-Forward Neural Network (Multi-Layer)


REFERENCES
[1] Rumelhart, D.E., McClelland, J., and the PDP Research Group, Parallel Distributed Processing, Vols. I and II, The MIT Press, 1986.
[2] Anderson, J., and Rosenfeld, E., Neurocomputing: Foundations of Research, MIT Press, Cambridge, MA, 1986.
[3] Vanluchene, R.D., and Sun, R., Neural Networks in Structural Engineering, Microcomputers in Civil Engineering, Vol. 5, No. 3, pp. 207-215, 1990.
[4] Hajela, P., and Berke, L., Neurobiological Computational Models in Structural Analysis and Design, AIAA-90-1133-CP, 31st SDM Conference, Baltimore, MD, April 8-10, pp. 335-343, 1991.
[5] Lee, H., and Hajela, P., Estimating MFN Trainability for Predicting Turbine Performance, Advances in Engineering Software, Vol. 27, No. 1/2, pp. 129-136, October/November 1996, Elsevier Applied Science.
[6] Lee, H., and Hajela, P., Prediction of Turbine Performance Using MFN by Reducing Mapping Nonlinearity, in Artificial Intelligence and Object Oriented Approaches for Structural Engineering, edited by B.H.V. Topping and M. Papadrakakis, pp. 99-105, 1995.
[7] Lee, H., and Hajela, P., On a Unified Geometrical Interpretation of Multilayer Feedforward Networks, Part II: Quantifying Mapping Nonlinearity, 1994 World Congress on Neural Networks, San Diego, CA, June 4-9, 1994.
[8] Anguita, D., Parodi, G., and Zunino, R., Speed Improvement of the Back-Propagation on Current-Generation Workstations, WCNN '93, Portland, OR, July 11-15, 1993, pp. 165-168.
[9] Anguita, D., Parodi, G., and Zunino, R., An Efficient Implementation of BP on RISC-Based Workstations, Neurocomputing (in press).
[10] Vogl, T.P., Mangis, J.K., Rigler, A.K., Zink, W.T., and Alkon, D.L., Accelerating the Convergence of the Back-Propagation Method, Biological Cybernetics, Vol. 59, pp. 257-263, 1988.
[11] Anguita, D., Pampolini, M., Parodi, G., and Zunino, R., YPROP: Yet Another Accelerating Technique for the Back Propagation, ICANN '93, Amsterdam, The Netherlands, September 13-16, 1993, p. 500.
[12] Hertz, J., Krogh, A., and Palmer, R.G., Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.
[13] Hecht-Nielsen, R., Neurocomputing, Addison-Wesley, 1991.
[14] Battiti, R., First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method, Neural Computation, Vol. 4, pp. 141-166, 1992.
[15] Jervis, T.T., and Fitzgerald, W.J., Optimization Schemes for Neural Networks, CUED/F-INFENG/TR 144, Cambridge University Engineering Department.
[16] Fahlman, S.E., An Empirical Study of Learning Speed in Back-Propagation Networks, CMU-CS-88-162, Carnegie Mellon University.
[17] The MathWorks, Inc., MATLAB V5, 24 Prime Park Way, Natick, MA 01760-1500, USA.


