
Computer-Aided Civil and Infrastructure Engineering 27 (2012) 640–654. DOI: 10.1111/j.1467-8667.2012.00779.x

Structural Analysis with Fuzzy Data and Neural Network Based Material Description

W. Graf, S. Freitag, J.-U. Sickert & M. Kaliske
Institute for Structural Analysis, Technische Universität Dresden, Dresden, Germany
To whom correspondence should be addressed. E-mail: wolfgang.graf@tu-dresden.de

Abstract: In the article, a new approach is presented utilizing artificial neural networks for uncertain time-dependent structural behavior. Recurrent neural networks (RNNs) for fuzzy data can be trained by uncertain experimental data to describe arbitrary stress–strain–time dependencies. The benefit is a generalized formulation, which can be applied to describe the behavior of several materials without definition of a specific material model. Model-free material descriptions can be used as numerically efficient material formulations within the finite element method. To perform fuzzy or fuzzy stochastic finite element analyses, a new approach is introduced. An α-level optimization is utilized for signal computation and training of RNNs for fuzzy data. The applicability is demonstrated by means of examples.

1 INTRODUCTION
Artificial neural networks are powerful tools to learn functional dependencies in data. They are applied for several tasks in civil engineering, see for example, Adeli (2001). In structural engineering and mechanics, neural networks are utilized, for example, for response surface approximation (Pannier et al., 2009; Papadrakakis and Lagaros, 2002), parameter identification (Kučerová et al., 2007; Novák and Lehký, 2006), system identification (Adeli and Jiang, 2006; Jiang and Adeli, 2005), damage detection (Arangio and Bontempi, 2010; Jiang and Adeli, 2007), lifetime prediction (Freitag et al., 2009), time series prediction (Reuter and Möller, 2010), earthquake prediction (Adeli and Panakkat, 2009), structural control (Adeli and Jiang, 2007; Jiang and Adeli, 2008), etc. Applications of artificial neural networks in structural analysis are presented, for example, in Graf et al. (2011) and Sickert et al. (2011).

The focus of this article is to develop neural network concepts for constitutive modeling (Ghaboussi et al., 1991) taking uncertain time-dependent material behavior into account. The terminology model-free material description is selected to indicate that neural networks are utilized as material formulation in computational mechanics instead of physically based material models. The benefit is a wide applicability for several materials without restrictions to specific material characteristics. Additionally, the computation of neural networks is numerically efficient. Dependencies between stresses and strains can be identified from data obtained from real or numerical experiments. In Hashash et al. (2004), feed forward networks are used to describe nonlinear stress–strain dependencies. Current and previous stress and strain states can be considered, if the material behavior is path dependent, see for example, Ghaboussi and Sidarta (1998). In Jung and Ghaboussi (2006), the stress and strain rates are used to describe viscoelasticity with feed forward networks. The neural networks can be utilized as material formulation for structural analysis. An application within the finite element method (FEM) is presented in Hashash et al. (2004). In Haj-Ali et al. (2001), feed forward networks are used as material description at the macrolevel within a multiscale approach.
Whereas the discussed approaches are based on feed forward networks, recurrent neural network (RNN) concepts are developed in this article. RNNs are suitable to consider time-history effects in data series, especially long-term dependencies, see for example, Panakkat and Adeli (2009), Puscasu et al. (2009), and Schäfer et al. (2008). Special RNNs are universal approximators of dynamical systems in form of state space models, see Schäfer and Zimmermann (2007). In feed forward networks, time-history effects can only be regarded by a limited number of prior states (time window), whereas the whole history is considered in RNNs. A second difference to the known neural network based material descriptions is that uncertain data are taken into account for network training. Stress and strain data are described as fuzzy processes. The uncertainty model fuzziness is applied to consider imprecise measurements, see for example, Möller and Beer (2008), resulting in material formulations with fuzzy parameters. The selection of the uncertainty model fuzziness is motivated to describe epistemic uncertainty, which results, for example, from imprecise measurement devices, incomplete observations, small sample sizes, and imprecise boundary conditions of experimental investigations. However, it can be combined with the uncertainty model randomness (to describe aleatory uncertainty) resulting in the generalized uncertainty model fuzzy randomness, see for example, Möller and Beer (2004).
RNNs for fuzzy data, see for example, Aliev et al. (2008), Aliev et al. (2009), Graf et al. (2010), Freitag et al. (2011c), and Freitag et al. (2011a), can be utilized to map fuzzy strain components onto fuzzy stress components or vice versa. The outputs of these networks are fuzzy numbers in contrast to neuro-fuzzy systems, see for example, Adeli and Jiang (2003), González-Olvera et al. (2010), and Theodoridis et al. (2012), where fuzzy rules are used to compute deterministic outputs. Whereas fuzzy signals are computed by interval arithmetic operations in our previous works (Graf et al., 2010; Freitag et al., 2011a, c), an α-level optimization according to Möller et al. (2000) is utilized in this article. This enables to apply RNNs for fuzzy data as material description within fuzzy finite element (FE) analyses. They can also be utilized within fuzzy stochastic FE analyses, where fuzzy and stochastic parameters, see for example, Balu and Rao (2012), or fuzzy and fuzzy stochastic parameters, see for example, Möller and Beer (2004) and Sickert et al. (2011), are considered to compute fuzzy stochastic structural responses such as fuzzy failure probabilities.
Algorithms for signal computation and network training based on α-level optimization are presented. Model-free material descriptions for strain to stress and stress to strain mappings are introduced for incremental iterative solution strategies. The corresponding tangential stiffnesses are formulated.
Two examples are presented at the end of this article. First, the new approach is verified utilizing a constitutive model based on the fractional Newton element. In a second example, the application capability is demonstrated by a fuzzy FE analysis of a 3D structure under long-term loading.


Fig. 1. Fuzzy process with discrete functional values.

2 MODEL-FREE MATERIAL DESCRIPTION


2.1 Fuzzy processes

The fuzzy set theory, see for example, Zadeh (1965), is the basis for the mathematical description of fuzzy processes. Uncertain time-dependent material parameters can be described as fuzzy processes $\tilde{x}(\tau)$, see for example, Möller and Beer (2004) and Graf et al. (2010). The tilde is used to symbolize fuzziness. From experimental investigations, fuzzy processes

$\tilde{x}(\tau) = \left( {}^{[1]}\tilde{x}, \ldots, {}^{[n]}\tilde{x}, \ldots, {}^{[N]}\tilde{x} \right)$    (1)

with discrete functional values can be obtained. The fuzzy process in Equation (1) is defined by a series of fuzzy numbers (measurements of a physical parameter, e.g., displacement). For each time point ${}^{[n]}\tau$, the measured value is represented as a fuzzy number ${}^{[n]}\tilde{x}$. The time steps between all time points are equidistant, that is, $\Delta\tau = {}^{[n]}\tau - {}^{[n-1]}\tau$ for $n = 2, \ldots, N$. In Figure 1, a fuzzy process according to Equation (1) is presented. It consists of N = 5 convex fuzzy numbers, which are uncertain sets gradually assessed by piecewise linear membership functions $\mu(x)$ defined in [0, 1]. For each realization x, its level of membership $\mu$ to the set ${}^{[n]}\tilde{x}$ is between 0 and 1.
A set of $s = 1, \ldots, S$ α-cuts is used to describe the fuzzy numbers. The α-cut representation of fuzzy numbers is common in engineering. It allows to handle fuzzy numbers similar to intervals in numerical simulations, that is, interval operations can be performed for each α-cut. Intervals

${}^{[n]}x_{\alpha_s} = \left[ {}^{[n]}_{sl}x,\; {}^{[n]}_{sr}x \right]$    (2)

are obtained for each level of membership $\alpha_s = \mu({}^{[n]}_{sl}x) = \mu({}^{[n]}_{sr}x)$. The interval bounds of all α-cuts are required to compute fuzzy numbers within numerical analyses. The sorted sequence

${}^{[n]}\tilde{x} = \left\langle {}^{[n]}_{1l}x, \ldots, {}^{[n]}_{Sl}x,\; {}^{[n]}_{Sr}x, \ldots, {}^{[n]}_{1r}x \right\rangle$    (3)

contains all left and right interval bounds. The simplest case is a triangular fuzzy number ${}^{[n]}\tilde{x} = \langle {}^{[n]}_{1l}x,\; {}^{[n]}_{2}x,\; {}^{[n]}_{1r}x \rangle$ (membership function with triangular shape). With four elements ${}^{[n]}\tilde{x} = \langle {}^{[n]}_{1l}x,\; {}^{[n]}_{2l}x,\; {}^{[n]}_{2r}x,\; {}^{[n]}_{1r}x \rangle$, a membership function with trapezoidal shape is created. In Figure 1, S = 3 α-cuts are used. The trajectories of the corresponding interval bounds can be utilized to show fuzzy processes in figures, see Section 4.
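The α-cut representation of Equations (2) and (3) can be illustrated with a short sketch. The class below is a hypothetical helper, not part of the presented method's implementation; the triangular constructor and the chosen number of α-cuts are illustrative assumptions.

```python
import numpy as np

class FuzzyNumber:
    """Convex fuzzy number stored by the interval bounds of S alpha-cuts,
    cf. Equations (2) and (3): bounds[s] = [x_sl, x_sr] for level alpha_s."""

    def __init__(self, bounds):
        self.bounds = np.asarray(bounds, dtype=float)  # shape (S, 2)

    @classmethod
    def triangular(cls, left, peak, right, n_cuts=3):
        # piecewise linear membership function: interpolate bounds for S alpha levels
        alphas = np.linspace(0.0, 1.0, n_cuts)
        lo = left + alphas * (peak - left)
        hi = right - alphas * (right - peak)
        return cls(np.column_stack([lo, hi]))

    def sorted_sequence(self):
        # Equation (3): all left bounds, followed by all right bounds in reverse order
        return np.concatenate([self.bounds[:, 0], self.bounds[::-1, 1]])

# a fuzzy measurement <1.8, 2.0, 2.3> represented by S = 3 alpha-cuts
x = FuzzyNumber.triangular(1.8, 2.0, 2.3)
print(x.bounds)             # interval bounds per alpha-cut, Equation (2)
print(x.sorted_sequence())  # sorted sequence, Equation (3)
```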
2.2 Uncertain stress–strain–time dependencies

Constitutive models are utilized to capture time-dependent material behavior within numerical analyses of structures. Dependencies between stress and strain tensors can be described by functional relationships. The required components of the symmetric stress and strain tensors are represented in the vectors $\sigma$ and $\varepsilon$. The number of stress and strain components depends on the material and structural formulation (6 for 3D, 3 for 2D, and 1 for 1D).

Stress–strain–time dependencies can be described by the mapping of strain processes onto stress processes or vice versa. For uncertain stress–strain–time dependencies, three types of mapping can be formulated for each mapping direction. The mapping of time-varying deterministic or fuzzy strains onto time-varying fuzzy stresses can be realized by (see Freitag et al., 2011b):

Type 1 mapping

$\tilde{\varepsilon}(\tau) \;\rightarrow\; \tilde{\sigma}(\tau)$    (4)

The vector of fuzzy strain processes $\tilde{\varepsilon}(\tau)$ is mapped onto the vector of fuzzy stress processes $\tilde{\sigma}(\tau)$ with deterministic mapping parameters.

Type 2 mapping

$\varepsilon(\tau) \;\rightarrow\; \tilde{\sigma}(\tau)$    (5)

The vector of deterministic strain processes $\varepsilon(\tau)$ is mapped onto the vector of fuzzy stress processes $\tilde{\sigma}(\tau)$ with fuzzy mapping parameters.

Type 3 mapping

$\tilde{\varepsilon}(\tau) \;\rightarrow\; \tilde{\sigma}(\tau)$    (6)

The vector of fuzzy strain processes $\tilde{\varepsilon}(\tau)$ is mapped onto the vector of fuzzy stress processes $\tilde{\sigma}(\tau)$ with fuzzy mapping parameters.

Type 3 mapping is the general case including Type 1 and Type 2 mappings as special cases. The mapping direction can be changed in Equations (4)–(6) to map deterministic or fuzzy stress processes onto fuzzy strain processes. For both mapping directions, numerical solutions based on artificial neural networks are provided in the next section. They can be applied within the FEM.

Fig. 2. Strain to stress mapping with recurrent neural networks for fuzzy data.

2.3 Numerical formulations for uncertain stress–strain–time dependencies

Incremental iterative solution strategies are required to compute structures with time-dependent nonlinear behavior. The Newton–Raphson method can be utilized for quasistatic analyses. As a result of each iteration in nonlinear FE analyses, the displacement field of increment (time step) [n] is obtained. The nodal displacements are used to compute the incremental strains ${}^{[n]}\Delta\varepsilon^{FE}$ at the integration points inside the FEs. These incremental strains are required to compute the structural stresses and the tangential stiffness. For this purpose, incremental formulations of uncertain stress–strain–time dependencies are necessary. Incremental fuzzy stress and strain components of time step [n] are computed by

${}^{[n]}\Delta\tilde{\sigma} = {}^{[n]}\tilde{\sigma} - {}^{[n-1]}\tilde{\sigma}$    (7)

and

${}^{[n]}\Delta\tilde{\varepsilon} = {}^{[n]}\tilde{\varepsilon} - {}^{[n-1]}\tilde{\varepsilon}$    (8)

respectively.

In the following, two approaches for model-free material descriptions are introduced.
2.3.1 Strain to stress mapping with RNNs for fuzzy data. The incremental strains ${}^{[n]}\Delta\varepsilon^{FE}$ at the integration points of the FEs can directly be used to compute incremental fuzzy stress components and the tangential stiffness matrix. Material formulations based on a strain to stress mapping ($\tilde{\varepsilon}(\tau) \rightarrow \tilde{\sigma}(\tau)$) are suitable, if the whole material behavior (total stresses and total strains) is described by a RNN for fuzzy data, see Figure 2. The prior and current strains ${}^{[1]}\tilde{\varepsilon}, \ldots, {}^{[n]}\tilde{\varepsilon}$ are considered to compute the fuzzy stresses ${}^{[n]}\tilde{\sigma}$ of time step [n].

The $j = 1, \ldots, J$ fuzzy strain components ${}^{[n]}\tilde{\varepsilon}_j$ are scaled to dimensionless input signals

${}^{[n]}\tilde{x}_j^{(1)} = \dfrac{{}^{[n]}\tilde{\varepsilon}_j}{x_j^{sc}}$    (9)

and the dimensionless output signals are scaled to fuzzy stress components

${}^{[n]}\tilde{\sigma}_k = {}^{[n]}\tilde{x}_k^{(M)} \, z_k^{sc}$    (10)

For each component j and k, the scaling parameters $x_j^{sc}$ and $z_k^{sc}$ can be defined as its maximal absolute input and output value.

The tangential stiffness matrix of the material description ${}^{[n]}\tilde{C}$ is determined in linearized form by the partial derivatives of the incremental fuzzy stress components with respect to the incremental fuzzy strain components

${}^{[n]}\tilde{C}_{kj} = \dfrac{\partial\,{}^{[n]}\Delta\tilde{\sigma}_k}{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_j}$    (11)

The chain rule is applied two times in Equation (11) leading to

${}^{[n]}\tilde{C}_{kj} = \dfrac{\partial\,{}^{[n]}\Delta\tilde{\sigma}_k}{\partial\,{}^{[n]}\tilde{x}_k^{(M)}} \cdot \dfrac{\partial\,{}^{[n]}\tilde{x}_k^{(M)}}{\partial\,{}^{[n]}\tilde{x}_j^{(1)}} \cdot \dfrac{\partial\,{}^{[n]}\tilde{x}_j^{(1)}}{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_j}$    (12)

The first factor of the right-hand side in Equation (12) is obtained by

$\dfrac{\partial\,{}^{[n]}\Delta\tilde{\sigma}_k}{\partial\,{}^{[n]}\tilde{x}_k^{(M)}} = \dfrac{\partial\left[\left({}^{[n]}\tilde{x}_k^{(M)} - {}^{[n-1]}\tilde{x}_k^{(M)}\right) z_k^{sc}\right]}{\partial\,{}^{[n]}\tilde{x}_k^{(M)}} = z_k^{sc}$    (13)

taking into account, that the incremental fuzzy stresses

${}^{[n]}\Delta\tilde{\sigma}_k = \left({}^{[n]}\tilde{x}_k^{(M)} - {}^{[n-1]}\tilde{x}_k^{(M)}\right) z_k^{sc}$    (14)

are computed by replacing the stresses of time steps [n] and [n − 1] in Equation (7) by Equation (10).

The third factor of the right-hand side in Equation (12) is evaluated by

$\dfrac{\partial\,{}^{[n]}\tilde{x}_j^{(1)}}{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_j} = \dfrac{\partial\left[\left({}^{[n]}\Delta\tilde{\varepsilon}_j + {}^{[n-1]}\tilde{\varepsilon}_j\right)/x_j^{sc}\right]}{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_j} = \dfrac{1}{x_j^{sc}}$    (15)

using Equation (9) and a rearranged form of Equation (8).

Equations (13) and (15) are substituted in Equation (12) to get the components of the tangential stiffness matrix

${}^{[n]}\tilde{C}_{kj} = \dfrac{z_k^{sc}}{x_j^{sc}} \cdot \dfrac{\partial\,{}^{[n]}\tilde{x}_k^{(M)}}{\partial\,{}^{[n]}\tilde{x}_j^{(1)}}$    (16)

The partial derivatives of the dimensionless fuzzy output signals ${}^{[n]}\tilde{x}_k^{(M)}$ with respect to the dimensionless fuzzy input signals ${}^{[n]}\tilde{x}_j^{(1)}$ of the RNN can be computed using multiple applications of the chain rule. An efficient algorithm to compute these partial derivatives is presented in Freitag et al. (2011b).
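The scaling of Equations (9) and (10) and the assembly of the tangential stiffness according to Equation (16) can be illustrated with a short sketch. The Jacobian of the dimensionless output signals with respect to the dimensionless input signals is replaced here by a dummy array, standing in for the chain-rule algorithm of Freitag et al. (2011b); the numerical values are illustrative assumptions.

```python
import numpy as np

def tangential_stiffness(dxout_dxin, z_sc, x_sc):
    """Equation (16): C_kj = (z_k^sc / x_j^sc) * d x_k^(M) / d x_j^(1).
    dxout_dxin is the (K x J) Jacobian of the dimensionless RNN outputs
    with respect to the dimensionless RNN inputs."""
    z_sc = np.asarray(z_sc).reshape(-1, 1)   # output scaling parameters z_k^sc
    x_sc = np.asarray(x_sc).reshape(1, -1)   # input scaling parameters x_j^sc
    return z_sc / x_sc * dxout_dxin

# dummy Jacobian standing in for the chain-rule evaluation through the RNN
J_dummy = np.array([[0.8, 0.1],
                    [0.1, 0.9]])
C = tangential_stiffness(J_dummy, z_sc=[200.0, 200.0], x_sc=[0.01, 0.01])
print(C)  # tangential stiffness of the material description, Equation (16)
```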
2.3.2 Stress to strain mapping with RNNs for fuzzy data.
Material formulations based on rheological elements can be solved by stress to strain mappings, see for example, Oeser and Pellinien (2012). With respect to small strains, the whole strain of time step [n] can be split

${}^{[n]}\tilde{\varepsilon} = {}^{[n]}\tilde{\varepsilon}^{el} + {}^{[n]}\tilde{\varepsilon}^{pl} + {}^{[n]}\tilde{\varepsilon}^{ve} + {}^{[n]}\tilde{\varepsilon}^{vp}$    (17)

into its elastic (el), plastic (pl), viscoelastic (ve), and viscoplastic (vp) parts. Rheological elements oriented in series enable the computation of the corresponding strains due to the applied stress. RNNs for fuzzy data can be utilized as parts of rheological models, for example, to describe viscoelastic responses. If neural networks are combined with rheological elements (springs and dashpots), a stress to strain mapping form of the material description is required. The following algorithm is presented for a neural network based description of viscoelastic strains.
Artificial neural networks for fuzzy data can be utilized to map fuzzy stress processes $\tilde{\sigma}(\tau)$ onto fuzzy strain processes $\tilde{\varepsilon}^{ve}(\tau)$, see Figure 3. If recurrent neural networks are applied, the whole uncertain stress history ${}^{[1]}\tilde{\sigma}, \ldots, {}^{[n-1]}\tilde{\sigma}$ and the current stresses ${}^{[n]}\tilde{\sigma}$ are considered to compute the current fuzzy strains ${}^{[n]}\tilde{\varepsilon}^{ve}$.

According to Section 2.3.1, the $j = 1, \ldots, J$ fuzzy stress components ${}^{[n]}\tilde{\sigma}_j$ of time step [n] are scaled to dimensionless input signals

${}^{[n]}\tilde{x}_j^{(1)} = \dfrac{{}^{[n]}\tilde{\sigma}_j}{x_j^{sc}}$    (18)

of the RNN. The dimensionless output signals are scaled to fuzzy strain components

${}^{[n]}\tilde{\varepsilon}_k^{ve} = {}^{[n]}\tilde{x}_k^{(M)} \, z_k^{sc}$    (19)

Fig. 3. Stress to strain mapping with recurrent neural networks for fuzzy data.

In case of stress to strain mapping, the strain results of the FE analysis cannot directly be applied to the neural network. The fuzzy strain components are outputs of the network, see Figure 3. To compute the tangential stiffness and the stresses, the material description is solved iteratively (iteration at the material level). The objective of this iteration is to get the incremental fuzzy stress components ${}^{[n]}\Delta\tilde{\sigma}_j$, which lead to nearly the same incremental fuzzy strains ${}^{[n]}\Delta\tilde{\varepsilon}_k$ of the material model and the corresponding incremental fuzzy strains ${}^{[n]}\Delta\tilde{\varepsilon}_k^{FE}$ computed by the nodal displacements of the FE analysis

${}^{[n]}\Delta\tilde{\varepsilon}_k^{FE} = {}^{[n]}\Delta\tilde{\varepsilon}_k$    (20)

If only viscoelastic strains are taken into account (${}^{[n]}\Delta\tilde{\varepsilon}_k = {}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}$), Equation (20) is rearranged to

${}^{[n]}\tilde{d}_k = {}^{[n]}\Delta\tilde{\varepsilon}_k^{FE} - {}^{[n]}\Delta\tilde{\varepsilon}_k^{ve} = 0$    (21)

and solved with the Newton–Raphson method. A Taylor series is created (polynomial of degree one)

${}^{[n]}\tilde{d}_k + \displaystyle\sum_{j=1}^{J} \dfrac{\partial\,{}^{[n]}\tilde{d}_k}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j}\; \delta\,{}^{[n]}\Delta\tilde{\sigma}_j = 0$    (22)

with respect to the unknown incremental fuzzy stress components ${}^{[n]}\Delta\tilde{\sigma}_j$. The partial derivatives in Equation (22) are

$\dfrac{\partial\,{}^{[n]}\tilde{d}_k}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} = \dfrac{\partial\left({}^{[n]}\Delta\tilde{\varepsilon}_k^{FE} - {}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}\right)}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} = -\dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j}$    (23)

where ${}^{[n]}\tilde{d}_k$ is replaced by Equation (21).
The partial derivatives of the incremental fuzzy strain components ${}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}$ with respect to the incremental fuzzy stress components ${}^{[n]}\Delta\tilde{\sigma}_j$ in Equation (23) are evaluated using the chain rule two times resulting in

$\dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} = \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}}{\partial\,{}^{[n]}\tilde{x}_k^{(M)}} \cdot \dfrac{\partial\,{}^{[n]}\tilde{x}_k^{(M)}}{\partial\,{}^{[n]}\tilde{x}_j^{(1)}} \cdot \dfrac{\partial\,{}^{[n]}\tilde{x}_j^{(1)}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j}$    (24)

The first factor in Equation (24) is computed by

$\dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}}{\partial\,{}^{[n]}\tilde{x}_k^{(M)}} = \dfrac{\partial\left[\left({}^{[n]}\tilde{x}_k^{(M)} - {}^{[n-1]}\tilde{x}_k^{(M)}\right) z_k^{sc}\right]}{\partial\,{}^{[n]}\tilde{x}_k^{(M)}} = z_k^{sc}$    (25)

replacing the incremental fuzzy strains ${}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}$ by Equation (8) and the fuzzy strain components of time steps [n] and [n − 1] by Equation (19), respectively. Using Equation (18) and a rearranged form of Equation (7), the third factor in Equation (24) is computed by

$\dfrac{\partial\,{}^{[n]}\tilde{x}_j^{(1)}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} = \dfrac{\partial\left[\left({}^{[n-1]}\tilde{\sigma}_j + {}^{[n]}\Delta\tilde{\sigma}_j\right)/x_j^{sc}\right]}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} = \dfrac{1}{x_j^{sc}}$    (26)

With Equations (25) and (26), the partial derivatives of the incremental fuzzy strain components with respect to the incremental fuzzy stress components are obtained as

$\dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} = \dfrac{z_k^{sc}}{x_j^{sc}} \cdot \dfrac{\partial\,{}^{[n]}\tilde{x}_k^{(M)}}{\partial\,{}^{[n]}\tilde{x}_j^{(1)}}$    (27)

The partial derivatives of the dimensionless network output signals ${}^{[n]}\tilde{x}_k^{(M)}$ with respect to the dimensionless network input signals ${}^{[n]}\tilde{x}_j^{(1)}$ can be evaluated using multiple applications of the chain rule, see Freitag et al. (2011b).
The Taylor series in Equation (22) can be written as a system of equations

$\begin{bmatrix} \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_1^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_1} & \cdots & \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_1^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} & \cdots & \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_1^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_J} \\ \vdots & & \vdots & & \vdots \\ \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_1} & \cdots & \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} & \cdots & \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_k^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_J} \\ \vdots & & \vdots & & \vdots \\ \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_K^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_1} & \cdots & \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_K^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_j} & \cdots & \dfrac{\partial\,{}^{[n]}\Delta\tilde{\varepsilon}_K^{ve}}{\partial\,{}^{[n]}\Delta\tilde{\sigma}_J} \end{bmatrix} \cdot \begin{bmatrix} \delta\,{}^{[n]}\Delta\tilde{\sigma}_1 \\ \vdots \\ \delta\,{}^{[n]}\Delta\tilde{\sigma}_j \\ \vdots \\ \delta\,{}^{[n]}\Delta\tilde{\sigma}_J \end{bmatrix} = \begin{bmatrix} {}^{[n]}\tilde{d}_1 \\ \vdots \\ {}^{[n]}\tilde{d}_k \\ \vdots \\ {}^{[n]}\tilde{d}_K \end{bmatrix}$    (28)

using Equations (27) and (23). In matrix form, the system of equations (28) is denoted as

${}^{[n]}\tilde{D}\; \delta\,{}^{[n]}\Delta\tilde{\sigma} = {}^{[n]}\tilde{d}$    (29)

where ${}^{[n]}\tilde{D}$ is the inverse of the stiffness matrix ${}^{[n]}\tilde{C}$ (viscoelastic part).

Equation (29) is rearranged and solved in each iteration step [h]

$\delta\,{}^{[n]}\Delta\tilde{\sigma}^{[h]} = \left({}^{[n]}\tilde{D}^{[h]}\right)^{-1}\, {}^{[n]}\tilde{d}^{[h]}$    (30)

to compute the unknown incremental fuzzy stresses by

${}^{[n]}\Delta\tilde{\sigma}^{[h+1]} = {}^{[n]}\Delta\tilde{\sigma}^{[h]} + \delta\,{}^{[n]}\Delta\tilde{\sigma}^{[h]}$    (31)

The iteration is stopped in step [h = H], if the incremental iterative stress differences $\delta\,{}^{[n]}\Delta\tilde{\sigma}^{[H]}$ are small enough ($\delta\,{}^{[n]}\Delta\tilde{\sigma}^{[H]} \approx 0$). Hence, the incremental fuzzy stresses

${}^{[n]}\Delta\tilde{\sigma} = {}^{[n]}\Delta\tilde{\sigma}^{[H]}$    (32)

and the viscoelastic part of the stiffness matrix

${}^{[n]}\tilde{C} = \left({}^{[n]}\tilde{D}^{[H]}\right)^{-1}$    (33)

are obtained.

The algorithm can be extended to consider also elastic, plastic, and viscoplastic fuzzy strain components. For this purpose, the inverse of the stiffness matrix ${}^{[n]}\tilde{D}$ has to be modified.
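For a single deterministic realization within the α-level optimization, the material-level iteration of Equations (21)–(31) can be sketched as follows. The function eval_model is a placeholder for the RNN-based stress to strain mapping and its derivative matrix D; it is replaced here by a linear dummy so that the sketch is self-contained, and the numerical values are illustrative assumptions.

```python
import numpy as np

def material_iteration(d_eps_fe, eval_model, max_iter=20, tol=1e-10):
    """Newton-Raphson iteration at the material level, Equations (21)-(31).
    eval_model(d_sigma) returns (d_eps_ve, D), where D_kj is the derivative of
    the viscoelastic strain increment with respect to the stress increment."""
    d_sigma = np.zeros_like(d_eps_fe)          # start value for the stress increment
    for _ in range(max_iter):
        d_eps_ve, D = eval_model(d_sigma)      # model strain increment and matrix D
        d = d_eps_fe - d_eps_ve                # residual, Equation (21)
        delta = np.linalg.solve(D, d)          # correction, Equation (30)
        d_sigma = d_sigma + delta              # update, Equation (31)
        if np.linalg.norm(delta) < tol:        # stopping criterion
            break
    C = np.linalg.inv(D)                       # viscoelastic stiffness, Equation (33)
    return d_sigma, C

# dummy linear compliance standing in for the trained RNN (illustrative values)
D_dummy = np.array([[2.0e-3, -0.5e-3],
                    [-0.5e-3, 2.0e-3]])
model = lambda ds: (D_dummy @ ds, D_dummy)

d_sigma, C = material_iteration(np.array([1.0e-3, 0.5e-3]), model)
print(d_sigma, C)
```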

3 RNNs AS MODEL-FREE MATERIAL DESCRIPTION

RNNs are suitable to identify and describe time-dependent material behavior, see for example Oeser and Freitag (2009). As discussed in Graf et al. (2010), the treatment of fuzzy data requires modified signal computation and training strategies with respect to deterministic data. Two ways of computation are mentioned in Graf et al. (2010):
- interval arithmetic (for each α-cut)
- optimization (α-level optimization)
The interval arithmetic approach for deterministic network parameters presented in Graf et al. (2010) is extended for a priori defined and trainable fuzzy network parameters in Freitag et al. (2011c) and Freitag et al. (2011a), respectively. Here, RNNs for fuzzy data are applied as material description within fuzzy or fuzzy stochastic FE analyses, which are performed by an α-level optimization according to Möller et al. (2000). In this case, α-level optimization is also required for signal computation and network training. Of course, this leads to a higher computational effort in comparison to interval arithmetic, where only interval bounds of the α-cuts are evaluated. But an application of interval arithmetic trained networks within fuzzy or fuzzy stochastic FE analyses based on α-level optimization might cause incorrect results.

3.1 Signal computation

RNNs for fuzzy data can be applied for strain to stress mapping according to Section 2.3.1 or stress to strain mapping, see Section 2.3.2. They enable the computation of dimensionless fuzzy output signals ${}^{[n]}\tilde{x}_k^{(M)}$ under consideration of dimensionless fuzzy input signals ${}^{[n]}\tilde{x}_j^{(1)}$, see Figure 4. In addition to feed forward networks, prior inputs are considered for the computation of the current outputs by feedback connections. The RNN in Figure 4 is an extended Elman network, containing feedback elements of Jordan networks (Jordan, 1990) and Elman networks (Elman, 1990). It consists of (M) layers. Each hidden and each output neuron has an additional context neuron for processing the material history. Whereas the number of input and output neurons is defined by J and K (number of fuzzy strain and stress components), the number of hidden layers and neurons has to be defined with respect to the complexity of the material formulation.

Equivalent to feed forward networks, the signals are computed layer by layer in each time step [n]. The output signal of neuron i in layer (m) is obtained by

${}^{[n]}\tilde{x}_i^{(m)} = \tilde{\varphi}_i^{(m)}\!\left( \displaystyle\sum_{h=1}^{H^{(m-1)}} {}^{[n]}\tilde{x}_h^{(m-1)}\, \tilde{w}_{ih}^{(m)} + \displaystyle\sum_{q=1}^{I^{(m)}} {}^{[n]}\tilde{y}_q^{(m)}\, \tilde{c}_{iq}^{(m)} + \tilde{b}_i^{(m)} \right)$    (34)

In Equation (34), $\tilde{\varphi}_i^{(m)}(.)$ is a fuzzy activation function, ${}^{[n]}\tilde{x}_h^{(m-1)}$ are the current fuzzy output signals of the previous layer (m − 1), and ${}^{[n]}\tilde{y}_q^{(m)}$ are the fuzzy context signals. Unknown fuzzy network parameters are the fuzzy weights $\tilde{w}_{ih}^{(m)}$, the fuzzy context weights $\tilde{c}_{iq}^{(m)}$, the fuzzy bias values $\tilde{b}_i^{(m)}$, and parameters of the fuzzy activation function. How the unknown network parameters are determined is presented in Section 3.2.
By synaptic connections, the computed fuzzy output signal is transferred to all neurons of the next layer (m + 1) and to its context neuron. The corresponding fuzzy context signal

${}^{[n+1]}\tilde{y}_i^{(m)} = \tilde{\psi}_i^{(m)}\!\left( {}^{[n]}\tilde{x}_i^{(m)}\, \tilde{\lambda}_i^{(m)} + {}^{[n]}\tilde{y}_i^{(m)}\, \tilde{\kappa}_i^{(m)} \right)$    (35)

for the next time step [n + 1] is also obtained as a result of a fuzzy activation function $\tilde{\psi}_i^{(m)}(.)$. Its argument is computed by multiplying the fuzzy output signal of neuron i with the fuzzy memory factor $\tilde{\lambda}_i^{(m)}$ and adding the product of the current fuzzy context signal ${}^{[n]}\tilde{y}_i^{(m)}$ and the fuzzy feedback factor $\tilde{\kappa}_i^{(m)}$. The fuzzy memory factors $\tilde{\lambda}_i^{(m)}$ and the fuzzy feedback factors $\tilde{\kappa}_i^{(m)}$ are fuzzy network parameters. They can be fuzzy numbers between zero and one. If all fuzzy memory factors are zero, a feed forward neural network for fuzzy data is obtained as a special case of a RNN for fuzzy data.
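Equations (34) and (35) can be illustrated for one deterministic realization of the fuzzy signals and parameters. The layer size, the arsinh activation, and the parameter values in the sketch below are illustrative assumptions and do not correspond to the trained networks of Section 4.

```python
import numpy as np

def arsinh_activation(v, a=0.75):
    # nonlinear activation in the form of the area hyperbolic sine
    return a * np.arcsinh(v)

def layer_step(x_prev, y_ctx, W, C, b, lam, kap):
    """One layer of the extended Elman network for one time step.
    Equation (34): x_i = phi( sum_h x_h w_ih + sum_q y_q c_iq + b_i )
    Equation (35): y_i (next step) = psi( x_i * lambda_i + y_i * kappa_i ), psi = identity."""
    x = arsinh_activation(W @ x_prev + C @ y_ctx + b)
    y_next = x * lam + y_ctx * kap
    return x, y_next

rng = np.random.default_rng(0)
J, H = 1, 5                                   # one input component, five hidden neurons
W, C = rng.uniform(-1, 1, (H, J)), rng.uniform(-1, 1, (H, H))
b = rng.uniform(-1, 1, H)
lam, kap = np.full(H, 0.14), np.zeros(H)      # memory and feedback factors (realizations)

y = np.zeros(H)                               # context signals, initially zero
for x_in in [0.1, 0.3, 0.2]:                  # scaled input signals over three time steps
    x_hidden, y = layer_step(np.array([x_in]), y, W, C, b, lam, kap)
print(x_hidden)
```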
The fuzzy output signals ${}^{[n]}\tilde{x}_k^{(M)}$ of the RNN are computed by an α-level optimization according to Möller et al. (2000). In Figure 5, a scheme for the application of an α-level optimization for the Type 1 mapping of a fuzzy input process $\tilde{x}_1^{(1)}(\tau)$ onto a fuzzy output process $\tilde{x}_1^{(M)}(\tau)$ is presented.

The objective of an α-level optimization is to compute the minimal and maximal output values (the left and right interval bounds ${}^{[n]}_{sl}x_k^{(M)}$ and ${}^{[n]}_{sr}x_k^{(M)}$ of each α-cut s) using a RNN as deterministic fundamental solution. For this purpose, Equations (34) and (35) are evaluated with deterministic realizations of their fuzzy input values and parameters.


Fig. 4. Recurrent neural network for fuzzy data.

Fig. 5. Application of an -level optimization for signal computation of recurrent neural networks for fuzzy data.

The search space for time step [n] is represented by the respective intervals of the fuzzy input signals ${}^{[1]}\tilde{x}_j^{(1)}, \ldots, {}^{[n]}\tilde{x}_j^{(1)}$ and all fuzzy network parameters (fuzzy weights $\tilde{w}_{ih}^{(m)}$, fuzzy context weights $\tilde{c}_{iq}^{(m)}$, fuzzy bias values $\tilde{b}_i^{(m)}$, fuzzy memory factors $\tilde{\lambda}_i^{(m)}$, fuzzy feedback factors $\tilde{\kappa}_i^{(m)}$, and parameters of the fuzzy activation functions $\tilde{\varphi}_i^{(m)}(.)$ and $\tilde{\psi}_i^{(m)}(.)$).

An α-level optimization is carried out for each time step [n] and each of the K fuzzy output values. For fuzzy processes with N time steps, N · K α-level optimizations are required. As results of the α-level optimization in time step [n], the left interval bounds ${}^{[n]}_{sl}x_k^{(M)}$ and the right interval bounds ${}^{[n]}_{sr}x_k^{(M)}$ of each α-cut are obtained. Hence, the fuzzy output signals of the RNN are calculated and can be used to compute the fuzzy stress or strain components. Additionally, the corresponding realizations of the fuzzy input signals and the realizations of the fuzzy network parameters are available, which lead to the left and right interval bounds, respectively. These realizations are required for the determination of the unknown fuzzy network parameters within the network training, see Section 3.2.
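For one output component, one time step, and one α-cut, the α-level optimization amounts to a minimization and a maximization over the box spanned by the interval bounds of the fuzzy inputs and parameters. The sketch below uses a general-purpose global optimizer and a toy response function as stand-ins for the optimization algorithm of Möller et al. (2000) and the RNN of Figure 4; the bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def alpha_level_bounds(fun, box):
    """Minimal and maximal response over the alpha-cut box, evaluated with a
    deterministic fundamental solution fun (here a toy function)."""
    res_min = differential_evolution(fun, box, seed=1, tol=1e-8)
    res_max = differential_evolution(lambda z: -fun(z), box, seed=1, tol=1e-8)
    return res_min.fun, -res_max.fun

# toy deterministic fundamental solution standing in for the RNN evaluation
def toy_response(z):
    x1, x2, w = z                       # two fuzzy input realizations, one fuzzy weight
    return w * np.arcsinh(x1 + 0.5 * x2)

box = [(0.9, 1.1), (0.4, 0.6), (0.7, 0.8)]   # alpha-cut intervals (search space)
lo, hi = alpha_level_bounds(toy_response, box)
print(lo, hi)                                # left/right bounds of the output alpha-cut
```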
As demonstrated in Figure 5, the dimension of the search space increases linearly with increasing time steps for Type 1 and Type 3 mappings. In the case of Type 2 mapping, the dimension of the search space is constant and identical to the number of fuzzy network parameters. A high number of time steps and/or networks with many hidden neurons result in high dimensional search spaces, where optimization is time-consuming. The numerical effort can be reduced by two strategies:
- fading memory
- hybrid neural networks
In structural mechanics, so-called fading memory means that stresses which have been applied to a structure with a distance in time gradually lose their impact on the future structural behavior, see for example Oeser and Freitag (2009). It can be assumed, that the uncertainty of prior stresses has less influence on the current strains than the uncertainty of current stresses. The same is valid vice versa for strain to stress mapping. The dimension of the search space in time step [n] can be limited, if only the $n_f$ prior and the current inputs ${}^{[n-n_f]}\tilde{x}_j^{(1)}, \ldots, {}^{[n]}\tilde{x}_j^{(1)}$ are considered as fuzzy numbers and all other prior inputs ${}^{[1]}\tilde{x}_j^{(1)}, \ldots, {}^{[n-n_f-1]}\tilde{x}_j^{(1)}$ are transformed to deterministic (defuzzified) numbers ${}^{[1]}x_j^{(1)}, \ldots, {}^{[n-n_f-1]}x_j^{(1)}$. Methods for defuzzification can be found, for example in Rommelfanger (1988), Jain (1976), and Chen (1985). The fading memory strategy can be applied for Type 1 and Type 3 mappings. After time step $[n_f]$, the dimension of the search space remains constant.
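The fading memory strategy can be sketched as follows; the mean-of-bounds defuzzification used here is only an illustrative choice, not the Max-Method applied in Section 4, and the numerical values are illustrative.

```python
def fading_memory(history, n_f):
    """history: list of (left, right) interval bounds per time step (one alpha-cut).
    Returns the defuzzified crisp values for all older inputs and the recent
    fuzzy inputs that remain in the search space."""
    cut = max(0, len(history) - (n_f + 1))
    crisp_old = [0.5 * (lo + hi) for lo, hi in history[:cut]]  # illustrative defuzzification
    fuzzy_recent = history[cut:]                               # stays in the search space
    return crisp_old, fuzzy_recent

stress_bounds = [(1.0, 1.2), (0.8, 1.1), (0.9, 1.0), (1.1, 1.3), (1.0, 1.4)]
crisp, fuzzy = fading_memory(stress_bounds, n_f=2)
print(crisp)   # defuzzified prior inputs
print(fuzzy)   # the n_f prior and the current fuzzy inputs
```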
Hybrid neural networks are artificial neural networks with fuzzy and deterministic network parameters. In Ishibuchi and Nii (2001), partially fuzzified neural networks are presented, which are hybrid network structures for feed forward networks. Fuzzy weights are used for the synaptic connections between the neurons of layer (M − 1) and the output neurons, which have fuzzy bias values, too. All other network parameters are defined as deterministic numbers in Ishibuchi and Nii (2001). This hybrid neural network strategy can be generalized to all network parameters, and it can be extended to RNNs for fuzzy data. Which network parameters are fuzzy numbers and which are deterministic numbers has to be defined a priori. However, it might also happen, that fuzzy network parameters are modified to deterministic numbers during network training, see Section 3.2. The hybrid neural network strategy can be applied to reduce the dimension of the search space for Type 2 and Type 3 mappings. It can also be combined with the fading memory strategy for Type 3 mapping.
3.2 Training

The unknown network parameters are determined within the network training by an inverse analysis. All available data are divided into training and validation patterns. Whereas the training pattern is used to solve an optimization task, the validation pattern is utilized to check the generalization capability of neural network based material formulations.

The objective of the optimization is the minimization of the averaged total training error

$E^{av} = \dfrac{1}{P K S} \displaystyle\sum_{p=1}^{P} \left( \dfrac{1}{N_P} \displaystyle\sum_{n=1}^{N_P} {}^{[n]}E \right)$    (36)

where $p = 1, \ldots, P$ are the training patterns. The scaling with the number of patterns P, the number of time steps $N_P$ per pattern, the number of components K, and the number of α-cuts S in Equation (36) is done due to practical reasons, because it is easier to compare and evaluate errors with different numbers of P, $N_P$, K, and S. The error of each time step is computed by

${}^{[n]}E = \dfrac{1}{2} \displaystyle\sum_{k=1}^{K} \displaystyle\sum_{s=1}^{S} \left[ \left( {}^{[n]}_{sl}x_k^{(M)} - {}^{[n]}_{sl}d_k \right)^2 + \left( {}^{[n]}_{sr}x_k^{(M)} - {}^{[n]}_{sr}d_k \right)^2 \right]$    (37)

evaluating the distance between the processes of fuzzy output signals $\tilde{x}_k^{(M)}(\tau)$ and the scaled desired responses (DR) $\tilde{d}_k(\tau)$. In Equation (37), the factor 1/2 is selected to cancel later with the factor 2 resulting from the derivatives of the squared terms.
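Equations (36) and (37) can be written compactly for arrays of interval bounds; the array shapes and values below are illustrative assumptions and do not correspond to the data of Section 4.

```python
import numpy as np

def averaged_training_error(x_out, d_des):
    """Equations (36) and (37).
    Arrays of shape (P, N, K, S, 2): RNN output bounds and scaled desired
    responses; the last axis holds the left and right interval bounds."""
    P, N, K, S, _ = x_out.shape
    step_error = 0.5 * np.sum((x_out - d_des) ** 2, axis=(2, 3, 4))  # Equation (37)
    return np.sum(step_error.mean(axis=1)) / (P * K * S)             # Equation (36)

rng = np.random.default_rng(2)
x_out = rng.normal(size=(5, 100, 1, 2, 2))    # P = 5 patterns, N = 100 steps, K = 1, S = 2
d_des = x_out + 0.01 * rng.normal(size=x_out.shape)
print(averaged_training_error(x_out, d_des))
```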
Different approaches for solving the optimization task can be applied. For artificial neural networks, gradient based search algorithms (backpropagation algorithms) are common. The temporal succession of the data series has to be considered, if RNNs are trained with backpropagation. In Graf et al. (2010), Freitag et al. (2011c), and Freitag et al. (2011a), modified backpropagation algorithms for RNNs have been developed for deterministic, a priori defined fuzzy, and trainable fuzzy network parameters, respectively. These approaches can be applied, if the signals are computed by interval arithmetic. In this article, a backpropagation algorithm for signal computation based on an α-level optimization is introduced. It is presented for fuzzy and deterministic network parameters, with respect to hybrid networks.

Backpropagation training can be repeated or processed in parallel with varying network structures and different a priori defined parameters:
- number of hidden layers and hidden neurons;
- deterministic or fuzzy network parameters (hybrid networks);
- types and parameters of fuzzy activation functions; and
- fuzzy or deterministic memory and feedback factors.

Adjustable fuzzy or deterministic network parameters are the weights, the context weights, and the bias values, which are initialized randomly as deterministic or fuzzy numbers in an interval, for example [−1, 1].
A sequential training mode is applied, which enables to update the network parameters in each time step [n]. As results of the α-level optimization, the realizations of the fuzzy input signals and the realizations of the fuzzy network parameters are obtained, which lead to the left and right interval bounds of the fuzzy output signal k. These realizations, denoted by the index k, are used to determine realizations for the incremental correction of the deterministic or fuzzy weights

${}^{[n]}\Delta\,{}_{sl,r\,k}w_{ih}^{(m)} = -{}^{[n]}\eta\, \dfrac{\partial\,{}^{[n]}E}{\partial\,{}^{[n]}_{sl,r\,k} w_{ih}^{(m)}} + \beta\, {}^{[n-1]}\Delta\,{}_{sl,r\,k}w_{ih}^{(m)}$    (38)

The correction of fuzzy or deterministic context weights and bias values is straightforward. The parameter ${}^{[n]}\eta$ is the learning rate and $\beta$ is a momentum constant, see for example Haykin (1999) and Freitag et al. (2011a). For fuzzy weights, the error gradient in Equation (38) is obtained by

$\dfrac{\partial\,{}^{[n]}E}{\partial\,{}^{[n]}_{sl,r\,k} w_{ih}^{(m)}} = {}^{[n]}_{sl,r\,k}x_h^{(m-1)}\; {}^{[n]}_{sl,r\,k}\delta_i^{(m)}$    (39)

If the weight is a deterministic number, it yields

$\dfrac{\partial\,{}^{[n]}E}{\partial\,{}^{[n]} w_{ih}^{(m)}} = \displaystyle\sum_{s=1}^{S} \left( {}^{[n]}_{slk}\delta_i^{(m)}\; {}^{[n]}_{slk}x_h^{(m-1)} + {}^{[n]}_{srk}\delta_i^{(m)}\; {}^{[n]}_{srk}x_h^{(m-1)} \right)$    (40)

Equations (39) and (40) can be formulated for fuzzy and deterministic context weights resulting in

$\dfrac{\partial\,{}^{[n]}E}{\partial\,{}^{[n]}_{sl,r\,k} c_{iq}^{(m)}} = {}^{[n]}_{sl,r\,k}y_q^{(m)}\; {}^{[n]}_{sl,r\,k}\delta_i^{(m)}$    (41)

and

$\dfrac{\partial\,{}^{[n]}E}{\partial\,{}^{[n]} c_{iq}^{(m)}} = \displaystyle\sum_{s=1}^{S} \left( {}^{[n]}_{slk}\delta_i^{(m)}\; {}^{[n]}_{slk}y_q^{(m)} + {}^{[n]}_{srk}\delta_i^{(m)}\; {}^{[n]}_{srk}y_q^{(m)} \right)$    (42)

respectively. For the fuzzy bias values, the error gradients are determined by

$\dfrac{\partial\,{}^{[n]}E}{\partial\,{}^{[n]}_{sl,r\,k} b_i^{(m)}} = {}^{[n]}_{sl,r\,k}\delta_i^{(m)}$    (43)

whereas

$\dfrac{\partial\,{}^{[n]}E}{\partial\,{}^{[n]} b_i^{(m)}} = \displaystyle\sum_{s=1}^{S} \left( {}^{[n]}_{slk}\delta_i^{(m)} + {}^{[n]}_{srk}\delta_i^{(m)} \right)$    (44)

is used for the error gradients taking deterministic bias values into account.
The interval bounds ${}^{[n]}_{sl,r\,k}\delta_i^{(m)}$ of the local gradients in Equations (39)–(44) are computed layer by layer backward through the RNN. Starting in the output neurons (index i = k), the realizations ${}^{[n]}_{sl,r\,k}\delta_k^{(M)}$ of the local gradients can be computed from the output error and the derivative of the fuzzy activation function

${}^{[n]}_{sl,r\,k}\delta_k^{(M)} = \left( {}^{[n]}_{sl,r\,k}d_k - {}^{[n]}_{sl,r\,k}x_k^{(M)} \right)\, \dfrac{\mathrm{d}\,\varphi_k^{(M)}}{\mathrm{d}\,{}^{[n]}_{sl,r\,k}\nu_k^{(M)}}$    (45)

where ${}^{[n]}_{sl,r\,k}\nu_k^{(M)}$ denotes the realization of the activation function argument according to Equation (34). For the hidden neurons h in layer (m − 1), the local gradients of the neurons i in layer (m) are required to compute their local gradients by

${}^{[n]}_{sl,r\,k}\delta_h^{(m-1)} = \dfrac{\mathrm{d}\,\varphi_h^{(m-1)}}{\mathrm{d}\,{}^{[n]}_{sl,r\,k}\nu_h^{(m-1)}}\; \displaystyle\sum_{i=1}^{I^{(m)}} {}^{[n]}_{sl,r\,k}\delta_i^{(m)}\; {}_{sl,r\,k}w_{ih}^{(m)}$    (46)

The incremental corrections of the deterministic or fuzzy weights according to Equation (38) are used to compute updated realizations of the weights by

${}^{[n+1]}_{sl,r\,k}w_{ih}^{(m)} = {}^{[n]}_{sl,r\,k}w_{ih}^{(m)} + {}^{[n]}\Delta\,{}_{sl,r\,k}w_{ih}^{(m)}$    (47)

which is valid also for the context weights and bias values. If the weight $w_{ih}^{(m)}$ is an a priori defined deterministic number, the updated weight is directly obtained by Equation (47). In case of fuzzy weights, the left and right interval bounds of the updated weight are computed by

${}^{[n+1]}_{sl}w_{ih}^{(m)} = \min\left( {}^{[n+1]}_{slk}w_{ih}^{(m)},\; {}^{[n+1]}_{srk}w_{ih}^{(m)} \right)$    (48)

and

${}^{[n+1]}_{sr}w_{ih}^{(m)} = \max\left( {}^{[n+1]}_{slk}w_{ih}^{(m)},\; {}^{[n+1]}_{srk}w_{ih}^{(m)} \right)$    (49)

respectively.

Fig. 6. Correction of improper fuzzy network parameters.

The updated fuzzy network parameters have to be checked according to convexity. If the fuzzy number is improper, the interval bounds must be modified (rearranged). Hence, two variants are proposed, which are demonstrated in Figure 6. In variant 1, all computed interval bounds are sorted, see for example Ishibuchi et al. (1995). Variant 2 is a check of the interval bounds from α-cut s = S to α-cut s = 1. If ${}^{[n+1]}_{s-1\,l}w_{ih}^{(m)} > {}^{[n+1]}_{sl}w_{ih}^{(m)}$, the left interval bound of α-cut s − 1 is redefined by ${}^{[n+1]}_{s-1\,l}w_{ih}^{(m)} = {}^{[n+1]}_{sl}w_{ih}^{(m)}$. The corresponding right interval bound is changed to ${}^{[n+1]}_{s-1\,r}w_{ih}^{(m)} = {}^{[n+1]}_{sr}w_{ih}^{(m)}$, if ${}^{[n+1]}_{s-1\,r}w_{ih}^{(m)} < {}^{[n+1]}_{sr}w_{ih}^{(m)}$, see Figure 6. Both variants have been tested resulting in similar training and validation errors.
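Variant 1 of the convexity check corresponds to a sorting of all computed interval bounds (Ishibuchi et al., 1995). The following sketch is an illustrative stand-in for this repair step applied after Equations (48) and (49); the numerical values are assumptions.

```python
import numpy as np

def repair_convexity_sorted(bounds):
    """Variant 1: sort all updated interval bounds so that the alpha-cuts are
    nested again. bounds has shape (S, 2) with rows ordered from alpha_1 to alpha_S."""
    flat = np.sort(bounds.ravel())
    S = bounds.shape[0]
    # left bounds in ascending order, right bounds in descending order
    return np.column_stack([flat[:S], flat[S:][::-1]])

# an improper fuzzy weight after a training step: the alpha-cuts are not nested
w = np.array([[-0.40, 0.30],
              [-0.45, 0.10],   # left bound lies outside the lower alpha-cut
              [-0.20, 0.15]])
print(repair_convexity_sorted(w))
```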

4 EXAMPLES

4.1 Verification with fractional Newton element

The RNN approach has been verified by traditional material models. Here, results are presented for verification by a fractional Newton element, which can be used as a constitutive model to describe viscoelastic material behavior. The differential equation

$\tilde{\sigma}(\tau) = \tilde{p}\, \dfrac{\mathrm{d}^{\tilde{r}}\tilde{\varepsilon}(\tau)}{\mathrm{d}\tau^{\tilde{r}}}$    (50)

of the fractional Newton element, see for example Oeser and Freitag (2009), contains a fractional derivative of strain $\tilde{\varepsilon}(\tau)$ with respect to time $\tau$. The operator $\tilde{r}$ represents the order of the derivative. In general, it is a fuzzy number between zero (linear elastic spring) and one (dashpot). If $\tilde{r} = r = 0$, the parameter $\tilde{p}$ is an uncertain modulus of elasticity, whereas it is an uncertain viscosity for $\tilde{r} = r = 1$. The creep function of the fuzzy fractional Newton element can be obtained by a Laplace transform using the stress boundary condition $\sigma(\tau) = \sigma$ (constant stress). A convolution of the creep function and its evaluation for equidistant time steps $\Delta\tau$ leads to

${}^{[n]}\tilde{\varepsilon}(\tau) = \dfrac{\Delta\tau^{\tilde{r}}}{\tilde{p}\, \Gamma(\tilde{r} + 2)} \displaystyle\sum_{i=1}^{n} {}^{[i]}\tilde{\sigma} \left[ (n + 1 - i)^{(\tilde{r}+1)} - (n - i)^{(\tilde{r}+1)} \right]$    (51)
The function $\Gamma(.)$ in Equation (51) is the Gamma function. The strain in time step [n] depends on the current stress and the whole stress history. In contrast to common rheological elements (e.g., Kelvin elements) used for viscoelasticity, a recursive form of Equation (51) cannot be obtained for fractional rheological elements, that is, it is not possible to reduce the influence of the stress history to an internal variable updated for each time step. As a consequence, stress–strain–time dependencies are more complicated to learn.
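A deterministic realization of Equation (51) can be evaluated directly, as in the sketch below. The constant stress history is an arbitrary illustrative input; the parameter values correspond to those used later in Section 4.1.1.

```python
import numpy as np
from math import gamma

def fractional_newton_strain(sigma_hist, d_tau, r, p):
    """Equation (51): strain at time step n of a fractional Newton element from
    the whole stress history (deterministic realization)."""
    n = len(sigma_hist)
    i = np.arange(1, n + 1)
    weights = (n + 1 - i) ** (r + 1) - (n - i) ** (r + 1)
    return d_tau ** r / (p * gamma(r + 2)) * np.sum(sigma_hist * weights)

# illustrative constant stress history of 100 time steps
sigma = np.full(100, 10.0)            # MN/m^2
eps = fractional_newton_strain(sigma, d_tau=100.0, r=0.14, p=101000.0)
print(eps)
```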
By means of a numerical experiment, Equation (51) is used to compute training and validation patterns by performing a fuzzy analysis according to Möller et al. (2000). The time step length $\Delta\tau$ = 100 s is selected, and two α-cuts are evaluated. The same model parameters, training and validation patterns as used for verification in Freitag et al. (2011c) (signal computation with interval arithmetic) are utilized here for signal computation with α-level optimization. Results for Type 1 and Type 2 mappings are presented.

Fig. 7. Fuzzy stress processes – inputs for training.

Fig. 8. Fuzzy strain processes – outputs for training (Type 1 mapping).

Fig. 9. Fuzzy stress processes – inputs for validation.
4.1.1 Type 1 mapping. The five fuzzy stress processes plotted in Figure 7 are used as inputs for stress to strain mapping. According to Section 2.1, fuzzy processes are represented by trajectories of interval bounds for selected membership values $\mu$. The corresponding fuzzy strain processes in Figure 8 (indicated with the same number) are determined as DR by computing Equation (51) with deterministic parameters $\tilde{r} = r = 0.14$ and $\tilde{p} = p = 101{,}000$ MN s$^r$/m$^2$. The unit MN s$^r$/m$^2$ reflects that a fractional Newton element has a behavior between a spring (for r = 0, MN/m$^2$ is a unit for the modulus of elasticity) and a dashpot (for r = 1, MN s/m$^2$ is a unit for the viscosity). Each training pattern h = 1, ..., 5 is represented by fuzzy processes with $N_h$ = 100 time steps.

A RNN with the same 1–5–5–3–1 architecture (one input neuron, three hidden layers with five, five, and three neurons, and one output neuron) as used in Freitag et al. (2011c) is selected to map the fuzzy stress processes onto the corresponding fuzzy strain processes. The stress history is taken into account by context neurons in the hidden and output layers. A nonlinear deterministic activation function in the form of the area hyperbolic sine ($\varphi_i^{(m)}(\nu) = a\, \mathrm{arsinh}(\nu)$) is used in the hidden neurons of the layers (m) = (2), (3), (4), and a linear deterministic activation function ($\varphi_1^{(5)}(\nu) = a\, \nu$) is selected for the output neuron. The parameter a is a priori defined as a = 0.75. The identity function ($\psi_i^{(m)}(\nu) = \nu$) is used to activate the context neurons.

The fading memory strategy is applied to reduce the dimension of the search space for increasing time steps. Only the $n_f$ = 10 prior stresses and the current stress are considered as fuzzy numbers to compute the fuzzy strain of time step [n]. All other prior stresses are transformed to deterministic numbers using the Max-Method (Bothe, 1993). In this case, α-level optimizations with up to 11-dimensional search spaces have to be performed.

The backpropagation algorithm according to Section 3.2 is applied to determine the deterministic network parameters. The final RNN responses show a very good agreement with the DR, see Figure 8.

A validation with five additional patterns of fuzzy stress processes (Figure 9) and their corresponding fuzzy strain processes (Figure 10) with up to 200 time steps shows the generalization capability of the trained RNN for fuzzy data.
4.1.2 Type 2 mapping. The same network architecture is used as selected for the Type 1 mapping verification. Uncertain network parameters are required for the mapping of deterministic stress processes onto fuzzy strain processes. A hybrid neural network with fuzzy and deterministic parameters is created. The factor of the activation function is defined as a triangular fuzzy number $\tilde{a} = \langle 0.74625,\, 0.75,\, 0.75375 \rangle$ resulting in fuzzy activation functions in the hidden and output neurons. One memory factor $\tilde{\lambda}_1^{(m)}$ in each hidden layer and the memory factor of the context neuron in the output layer are defined as an identical triangular fuzzy number $\tilde{\lambda}_1^{(m)} = \langle 0.13,\, 0.14,\, 0.15 \rangle$, for (m) = (2), ..., (5). Additionally, one trainable fuzzy weight $\tilde{w}_{IH}^{(m)}$ and one trainable fuzzy context weight $\tilde{c}_{II}^{(m)}$ are considered per layer. All other network parameters are deterministic numbers resulting in a 10-dimensional search space.

Fig. 10. Fuzzy strain processes – outputs for validation (Type 1 mapping).

Fig. 11. Fuzzy strain processes – outputs for training (Type 2 mapping).
For Type 2 mapping, five deterministic stress processes (trajectories for $\mu$ = 1 in Figure 7) are used as inputs. In contrast to the Type 1 mapping, the parameter $\tilde{r}$ is selected as a triangular fuzzy number with $\tilde{r} = \langle 0.13,\, 0.14,\, 0.15 \rangle$.

The DR and the RNN predictions of the fuzzy strain processes used for training and validation are presented in Figures 11 and 12, respectively.

It can be seen that the same prediction quality is possible with α-level optimization compared to the verification results presented in Freitag et al. (2011c), where interval arithmetic has been applied for signal computation.

Fig. 12. Fuzzy strain processesoutputs for validation (Type


2 mapping).

4.2 Application within fuzzy FE analysis


A model-free material description is applied to investigate the long-term behavior of a 3D structure. In Figure 13, the FE model of the structure is shown. It consists of a plate, a beam, and a column. Due to geometric and load symmetry, only half of the structure is modeled. The FE mesh consists of 2,376 isoparametric 20-node displacement elements. In the column, the element size is 0.1 × 0.1 × 0.125 m. Elements with size 0.3 × 0.3 × 0.067 m are used in the plate section of the structure. The beam section elements are 0.1 × 0.1 × 0.1 m. The mesh in the plate section is refined for coupling with the beam section. Displacements of nodes in the supports are constrained to zero (v = 0), see marked surfaces in Figure 13. Horizontal displacements of nodes in the symmetry plane are set to zero ($v_2$ = 0).

Fig. 13. FE model.

Fig. 14. Fuzzy load process $\tilde{l}(\tau)$.
In addition to the deterministic self load, the uncertain live load of the structure is considered for long-term structural analysis. The upper surface of the plate is loaded by a fuzzy load process $\tilde{l}(\tau)$, see Figure 14, which is one sampled trajectory of a fuzzy stochastic process $\tilde{L}(\tau)$ with independent logistically distributed realizations ${}^{[n]}\tilde{l}$ for each time step. The cumulative distribution function of the used logistic distribution with fuzzy parameters is given by

$\tilde{F}_{\tilde{L}}\!\left({}^{[n]}l\right) = \dfrac{1}{1 + e^{-\left({}^{[n]}l - \tilde{a}\right)/\tilde{b}}}$    (52)

where the fuzzy parameters $\tilde{a} = \langle 5,\, 5.5,\, 6 \rangle$ and $\tilde{b} = \langle 0.063,\, 0.125,\, 0.188 \rangle$ are triangular fuzzy numbers.
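For a fixed realization of the fuzzy parameters $\tilde{a}$ and $\tilde{b}$, a trajectory of the load process can be sampled by inverting the distribution function of Equation (52). The sketch below is only an illustration and not the sampling procedure used to generate Figure 14.

```python
import numpy as np

def sample_load_trajectory(a, b, n_steps, rng):
    """Inverse of Equation (52) for one deterministic realization (a, b) of the
    fuzzy distribution parameters: l = a + b * ln(u / (1 - u)), u ~ U(0, 1)."""
    u = rng.uniform(size=n_steps)
    return a + b * np.log(u / (1.0 - u))

rng = np.random.default_rng(3)
# one realization of the triangular fuzzy parameters (here: their peak values)
load = sample_load_trajectory(a=5.5, b=0.125, n_steps=50, rng=rng)
print(load[:5])   # independent realizations [n]l, one per time step
```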
Uncertain time-dependent material behavior is described by a RNN for fuzzy data. In contrast to Section 4.1, the network is used for the mapping of stresses onto viscoelastic strains only, see Section 2.3.2. It is combined (in series) with a linear elastic model to capture elastic and viscoelastic behavior of the structure.

Uncertain time-dependent structural responses are computed within a fuzzy FE analysis. Fading memory and hybrid neural network strategies are combined to reduce the numerical effort. Three α-cuts have been evaluated for 50 increments with a time step length of one year ($\Delta\tau$ = 1 a). As an example, the computed fuzzy displacement process $\tilde{v}_3(\tau)$ (vertical displacements $v_3$ of node 7072, see Figure 13) is shown in Figure 15. The uncertainty of the fuzzy displacement process results from the fuzzy load process and additional material uncertainty considered by fuzzy network parameters.

Fig. 15. Fuzzy displacement process $\tilde{v}_3(\tau)$ (vertical displacement of node 7072).

5 CONCLUSION
A new approach has been presented using Computational Intelligence in Structural Engineering and Mechanics. Uncertain time-dependent material behavior is considered by model-free material descriptions based on artificial neural networks. RNNs for fuzzy data are applied to describe uncertain stress–strain–time dependencies from real or numerical experiments. An α-level optimization is used to compute the fuzzy output signals of the networks. Fuzzy or deterministic network parameters are identified with a new backpropagation algorithm taking fuzzy signals into account. After training and validation with experimental data, RNNs can be utilized instead of or in combination with constitutive models for structural analysis. Often, the calculation of RNNs is faster compared to complicated material models, for example fractional rheological elements, which leads to less computational time in FE analyses. Numerical formulations for uncertain stress–strain–time dependencies have been developed, which can be used for incremental iterative solution strategies within fuzzy or fuzzy stochastic FE analyses. The new approach has been verified by a constitutive model. Its applicability is demonstrated by computing the fuzzy displacements of a structure under uncertain long-term loading.
Different further developments are possible. Special network structures can be created to consider physical boundary conditions of several materials. This step can be realized by postprocessing strategies or directly by using particle swarm optimization (PSO) for network training, see Freitag et al. (2012). PSO will also enable indirect training approaches using inhomogeneous stress and strain fields inside specimens. Extensions for applications in multiphysics are envisaged, for example neural network based descriptions for temperature dependent material behavior.

ACKNOWLEDGMENTS
The second author gratefully acknowledges the support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the project FR 3044/1-1 in the framework of a research fellowship.
REFERENCES
Adeli, H. (2001), Neural networks in civil engineering: 1989–2000, Computer-Aided Civil and Infrastructure Engineering, 16, 126–42.
Adeli, H. & Jiang, X. (2003), Neuro-fuzzy logic model for freeway work zone capacity estimation, Journal of Transportation Engineering, 129, 484–93.
Adeli, H. & Jiang, X. (2006), Dynamic fuzzy wavelet neural network model for structural system identification, Journal of Structural Engineering, 132, 102–11.
Adeli, H. & Jiang, X. (2007), Toward smart structures: Novel wavelet-chaos-dynamic neural network models for vibration control and health monitoring of highrise building and bridge structures under extreme dynamic loading, in B. H. V. Topping (ed.), Civil Engineering Computations: Tools and Techniques, Chapter 1, Saxe-Coburg Publications, Stirlingshire, pp. 1–24.
Adeli, H. & Panakkat, A. (2009), A probabilistic neural network for earthquake magnitude prediction, Neural Networks, 22, 1018–24.
Aliev, R. A., Fazlollahi, B., Aliev, R. R. & Guirimov, B. G. (2008), Linguistic time series forecasting using fuzzy recurrent neural networks, Soft Computing – A Fusion of Foundations, Methodologies and Applications, 12, 183–90.
Aliev, R. A., Guirimov, B. G., Fazlollahi, B. & Aliev, R. R. (2009), Evolutionary algorithm-based learning of fuzzy neural networks. Part 2: recurrent fuzzy neural networks, Fuzzy Sets and Systems, 160, 2553–66.
Arangio, S. & Bontempi, F. (2010), Soft computing based multilevel strategy for bridge integrity monitoring, Computer-Aided Civil and Infrastructure Engineering, 25, 348–62.
Balu, A. S. & Rao, B. N. (2012), Multicut-high dimensional model representation for structural reliability bounds estimation under mixed uncertainties, Computer-Aided Civil and Infrastructure Engineering, 27, 419–38.
Bothe, H. H. (1993), Fuzzy-Logic, Springer, Berlin.
Chen, S. H. (1985), Ranking fuzzy numbers with maximizing set and minimizing set, Fuzzy Sets and Systems, 17, 113–29.
Elman, J. L. (1990), Finding structure in time, Cognitive Science, 14, 179–211.
Freitag, S., Beer, M., Graf, W. & Kaliske, M. (2009), Lifetime prediction using accelerated test data and neural networks, Computers and Structures, 87, 1187–94.
Freitag, S., Graf, W. & Kaliske, M. (2011a), Recurrent neural networks for fuzzy data, Integrated Computer-Aided Engineering, 18, 265–80.
Freitag, S., Graf, W. & Kaliske, M. (2011b), Recurrent neural networks for fuzzy data as a material description within the finite element method, in Y. Tsompanakis and B. H. V. Topping (eds.), Proceedings of the Second International Conference on Soft Computing Technology in Civil, Structural and Environmental Engineering, Chania, Civil-Comp Press, Stirlingshire, paper 28, pp. 1–20.
Freitag, S., Graf, W., Kaliske, M. & Sickert, J.-U. (2011c), Prediction of time-dependent structural behaviour with recurrent neural networks for fuzzy data, Computers and Structures, 89, 1971–81.
Freitag, S., Muhanna, R. L. & Graf, W. (2012), A particle swarm optimization approach for training artificial neural networks with uncertain data, in M. Vořechovský, V. Sadílek, S. Seitl, V. Veselý, R. L. Muhanna and R. L. Mullen (eds.), Proceedings of the 5th International Conference on Reliable Engineering Computing, Litera, Brno, pp. 151–70.
Ghaboussi, J., Garret, J. H. & Wu, X. (1991), Knowledge-based modeling of material behavior with neural networks, Journal of Engineering Mechanics, 117, 132–53.
Ghaboussi, J. & Sidarta, D. E. (1998), New nested adaptive neural networks (NANN) for constitutive modeling, Computers and Geotechnics, 22, 29–52.
González-Olvera, M. A., Gallardo Hernández, A. G., Tang, Y., Revilla Monsalve, M. C. & Islas-Andrade, S. (2010), A discrete-time recurrent neurofuzzy network for black-box modeling of insulin dynamics in diabetic type-1 patients, International Journal of Neural Systems, 20, 149–58.
Graf, W., Freitag, S., Kaliske, M. & Sickert, J.-U. (2010), Recurrent neural networks for uncertain time-dependent structural behavior, Computer-Aided Civil and Infrastructure Engineering, 25, 322–33.
Graf, W., Sickert, J.-U., Freitag, S., Pannier, S. & Kaliske, M. (2011), Neural network approaches in structural analysis under consideration of imprecision and variability, in Y. Tsompanakis and B. H. V. Topping (eds.), Soft Computing Methods for Civil and Structural Engineering, Saxe-Coburg Publications, Stirlingshire, pp. 59–85.
Haj-Ali, R., Pecknold, D. A., Ghaboussi, J. & Voyiadjis, G. Z. (2001), Simulated micromechanical models using artificial neural networks, Journal of Engineering Mechanics, 127, 730–8.
Hashash, Y. M. A., Jung, S. & Ghaboussi, J. (2004), Numerical implementation of a neural network based material model in finite element analysis, International Journal for Numerical Methods in Engineering, 59, 989–1005.
Haykin, S. (1999), Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ.
Ishibuchi, H., Morioka, K. & Turksen, I. B. (1995), Learning by fuzzified neural networks, International Journal of Approximate Reasoning, 13, 327–58.
Ishibuchi, H. & Nii, M. (2001), Numerical analysis of the learning of fuzzified neural networks from fuzzy if-then rules, Fuzzy Sets and Systems, 120, 281–307.
Jain, R. (1976), Decisionmaking in the presence of fuzzy variables, IEEE Transactions on Systems, Man, and Cybernetics, 6, 698–703.
Jiang, X. & Adeli, H. (2005), Dynamic wavelet neural network for nonlinear identification of highrise buildings, Computer-Aided Civil and Infrastructure Engineering, 20, 316–30.
Jiang, X. & Adeli, H. (2007), Pseudospectra, MUSIC, and dynamic wavelet neural network for damage detection of highrise buildings, International Journal for Numerical Methods in Engineering, 71, 606–29.
Jiang, X. & Adeli, H. (2008), Dynamic fuzzy wavelet neuroemulator for non-linear control of irregular building structures, International Journal for Numerical Methods in Engineering, 74, 1045–66.
Jordan, M. I. (1990), Attractor dynamics and parallelism in a connectionist sequential machine, in J. Diederich (ed.), Artificial Neural Networks: Concept Learning, IEEE Press, Piscataway, NJ, pp. 112–27.
Jung, S. & Ghaboussi, J. (2006), Neural network constitutive model for rate-dependent materials, Computers and Structures, 84, 955–63.
Kučerová, A., Lepš, M. & Zeman, J. (2007), Back analysis of microplane model parameters using soft computing methods, Computer Assisted Mechanics and Engineering Sciences, 14, 219–42.
Möller, B. & Beer, M. (2004), Fuzzy Randomness – Uncertainty in Civil Engineering and Computational Mechanics, Springer, Berlin, Heidelberg, New York.
Möller, B. & Beer, M. (2008), Engineering computation under uncertainty – capabilities of non-traditional models, Computers and Structures, 86, 1024–41.
Möller, B., Graf, W. & Beer, M. (2000), Fuzzy structural analysis using α-level optimization, Computational Mechanics, 26, 547–65.
Novák, D. & Lehký, D. (2006), ANN inverse analysis based on stochastic small-sample training set simulation, Engineering Application of Artificial Intelligence, 19, 731–40.
Oeser, M. & Freitag, S. (2009), Modeling of materials with fading memory using neural networks, International Journal for Numerical Methods in Engineering, 78, 843–62.
Oeser, M. & Pellinien, T. (2012), Computational framework for common visco-elastic models in engineering based on the theory of rheology, Computers and Geotechnics, 42, 145–56.
Panakkat, A. & Adeli, H. (2009), Recurrent neural network for approximate earthquake time and location prediction using multiple seismicity indicators, Computer-Aided Civil and Infrastructure Engineering, 24, 280–92.
Pannier, S., Sickert, J.-U. & Graf, W. (2009), Patchwork approximation scheme for reliability assessment and optimization, in H. Furuta, D. M. Frangopol, and M. Shinozuka (eds.), Safety, Reliability and Risk of Structures, Infrastructures and Engineering Systems, Taylor & Francis, London, pp. 482–9.
Papadrakakis, M. & Lagaros, N. D. (2002), Reliability-based structural optimization using neural networks and Monte Carlo simulation, Computer Methods in Applied Mechanics and Engineering, 191, 3491–507.
Puscasu, G., Codres, B., Stancu, A. & Murariu, G. (2009), Nonlinear system identification based on internal recurrent neural networks, International Journal of Neural Systems, 19, 115–25.
Reuter, U. & Möller, B. (2010), Artificial neural networks for forecasting of fuzzy time series, Computer-Aided Civil and Infrastructure Engineering, 25, 363–74.
Rommelfanger, H. (1988), Fuzzy Decision Support-Systeme, Springer, Berlin.
Schäfer, A. M., Udluft, S. & Zimmermann, H.-G. (2008), Learning long-term dependencies with recurrent neural networks, Neurocomputing, 71, 2481–8.
Schäfer, A. M. & Zimmermann, H.-G. (2007), Recurrent neural networks are universal approximators, International Journal of Neural Systems, 17, 253–63.
Sickert, J.-U., Freitag, S. & Graf, W. (2011), Prediction of uncertain structural behaviour and robust design, International Journal for Reliability and Safety, 5, 358–77.
Theodoridis, D., Boutalis, Y. & Christodoulou, M. (2012), Dynamical recurrent neuro-fuzzy identification schemes employing switching parameter hopping, International Journal of Neural Systems, 22, 1250004, 16 p.
Zadeh, L. A. (1965), Fuzzy sets, Information and Control, 8, 338–53.
