1 INTRODUCTION
Artificial neural networks are powerful tools to learn functional dependencies in data. They are applied for several tasks in civil engineering, see for example, Adeli (2001). In structural engineering and mechanics, neural networks are utilized, for example, for response surface approximation (Pannier et al., 2009; Papadrakakis and Lagaros, 2002), parameter identification (Kučerová et al., 2007; Novák and Lehký, 2006), earthquake prediction (Adeli and Panakkat, 2009), structural control (Adeli and Jiang, 2007; Jiang and Adeli, 2008), etc. Applications of artificial neural networks in structural analysis are presented, for example, in Graf et al. (2011) and Sickert et al. (2011).
© 2012 Computer-Aided Civil and Infrastructure Engineering. DOI: 10.1111/j.1467-8667.2012.00779.x
The focus of this article is to develop neural network
concepts for constitutive modeling (Ghaboussi et al.,
1991) taking uncertain time-dependent material behavior into account. The terminology model-free material
description is selected to indicate that neural networks
are utilized as material formulation in computational
mechanics instead of physically based material models.
The benefit is a wide applicability for several materials without restrictions to specific material characteristics. Additionally, the computation of neural networks
is numerically efficient. Dependencies between stresses
and strains can be identified from data obtained from
real or numerical experiments. In Hashash et al. (2004),
feed forward networks are used to describe nonlinear
stress-strain dependencies. Current and previous stress
and strain states can be considered, if the material behavior is path dependent, see for example, Ghaboussi
and Sidarta (1998). In Jung and Ghaboussi (2006),
the stress and strain rates are used to describe viscoelasticity with feed forward networks. The neural networks can be utilized as material formulation for structural analysis. An application within the finite element
method (FEM) is presented in Hashash et al. (2004). In
Haj-Ali et al. (2001), feed forward networks are used
as material description at the macrolevel within a multiscale approach.
Whereas the discussed approaches are based on feed
forward networks, recurrent neural network (RNN)
concepts are developed in this article. RNNs are suitable to consider time-history effects in data series,
especially long-term dependencies, see for example,
Panakkat and Adeli (2009), Puscasu et al. (2009), and
Schäfer et al. (2008). Special RNNs are universal approximators of dynamical systems in form of state space models (Schäfer and Zimmermann, 2007). The α-level optimization according to Möller et al. (2000) is utilized in this article. This enables to apply RNNs for fuzzy data as material description within fuzzy finite element (FE) analyses. They can also be utilized within fuzzy stochastic FE analyses, where fuzzy and stochastic parameters, see for example, Balu and Rao (2012), or fuzzy and stochastic processes are taken into account.
The α-cuts

[n]s x = [ [n]sl x, [n]sr x ]    (2)

are obtained for each level of membership αs = μ([n]sl x) = μ([n]sr x), s = 1, …, S. The interval bounds of all α-cuts are required to compute fuzzy numbers within numerical analyses. The sorted sequence

[n]x = ( [n]1l x, …, [n]Sl x, [n]Sr x, …, [n]1r x )    (3)

contains the left and right interval bounds of all α-cuts of time step [n]. The incremental fuzzy stresses and strains of time step [n] are defined as

Δ[n]σ = [n]σ − [n−1]σ    (7)

and

Δ[n]ε = [n]ε − [n−1]ε    (8)

respectively.
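For illustration, the α-cut representation and the incremental quantities of Equation (7) can be sketched in a few lines; the function names are hypothetical and a triangular membership function is assumed for simplicity:

```python
def alpha_cuts(left, peak, right, S):
    """Interval bounds [sl_x, sr_x] of a triangular fuzzy number
    for S alpha-levels alpha_s = s/S, s = 1..S (alpha_S = 1 is the peak)."""
    cuts = []
    for s in range(1, S + 1):
        a = s / S
        sl = left + a * (peak - left)    # left bound of the alpha-cut
        sr = right - a * (right - peak)  # right bound of the alpha-cut
        cuts.append((sl, sr))
    return cuts

def sorted_sequence(cuts):
    """Sorted sequence of Eq. (3): 1l_x, ..., Sl_x, Sr_x, ..., 1r_x."""
    return [sl for sl, _ in cuts] + [sr for _, sr in reversed(cuts)]

def interval_difference(cur, prev):
    """Incremental fuzzy value per alpha-cut (cf. Eq. 7), interval arithmetic:
    [a, b] - [c, d] = [a - d, b - c]."""
    return [(cl - pr, cr - pl) for (cl, cr), (pl, pr) in zip(cur, prev)]
```

The interval subtraction shows why increments of fuzzy quantities are themselves fuzzy with wider support than the operands.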
In the following, two approaches for model-free material descriptions are introduced.
2.3.1 Strain to stress mapping with RNNs for fuzzy
data. The incremental strains Δ[n]ε_FE at the integration points of the FEs can directly be used to compute incremental fuzzy stress components and the tangential stiffness matrix. Material formulations based on a strain to stress mapping (ε(τ) → σ(τ)) are suitable, if the whole material behavior (total stresses and total strains) is described by a RNN for fuzzy data, see Figure 2. The prior and current strains [1]ε, …, [n]ε are considered to compute the fuzzy stresses [n]σ of time step [n].
The j = 1, …, J fuzzy strain components [n]ε_j are scaled to dimensionless input signals

[n](1)x_j = [n]ε_j / ε_j^sc    (9)

The k = 1, …, K dimensionless fuzzy output signals [n](M)x_k of the RNN are rescaled to obtain the fuzzy stresses

[n]σ_k = z_k^sc · [n](M)x_k    (10)

The components of the tangential stiffness matrix

C_kj = ∂Δ[n]σ_k / ∂Δ[n]ε_j    (11)

are evaluated using the chain rule two times resulting in

∂Δ[n]σ_k/∂Δ[n]ε_j = ( ∂Δ[n]σ_k/∂[n](M)x_k ) · ( ∂[n](M)x_k/∂[n](1)x_j ) · ( ∂[n](1)x_j/∂Δ[n]ε_j )    (12)

The first factor of the right-hand side in Equation (12) is

∂Δ[n]σ_k/∂[n](M)x_k = z_k^sc    (13)

taking into account, that the incremental fuzzy stresses

Δ[n]σ_k = z_k^sc ( [n](M)x_k − [n−1](M)x_k )    (14)

are computed by replacing the stresses of time steps [n] and [n − 1] in Equation (7) by Equation (10).
The third factor of the right-hand side in Equation (12) is evaluated by

∂[n](1)x_j/∂Δ[n]ε_j = ∂( [n−1]ε_j + Δ[n]ε_j ) / ( ε_j^sc · ∂Δ[n]ε_j ) = 1 / ε_j^sc    (15)

using Equation (9) and a rearranged form of Equation (8).
Equations (13) and (15) are substituted in Equation (12) to get the components of the tangential stiffness matrix

C_kj = ( z_k^sc / ε_j^sc ) · ∂[n](M)x_k/∂[n](1)x_j    (16)
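A scalar sketch of Equation (16) may clarify the role of the scaling factors; the one-neuron tanh mapping stands in for the trained RNN, and all numerical values are hypothetical:

```python
import math

# Hypothetical scaling factors and a one-neuron surrogate for the RNN mapping.
eps_sc = 0.01   # strain scaling (cf. Eq. 9)
z_sc = 200.0    # stress scaling (cf. Eq. 10)
w = 0.8         # toy network weight

def net_grad(x1):
    """Derivative d x^(M) / d x^(1) of the surrogate mapping tanh(w * x1)."""
    return w * (1.0 - math.tanh(w * x1) ** 2)

def tangent_stiffness(strain):
    """C = (z_sc / eps_sc) * d x^(M) / d x^(1)   (cf. Eq. 16, scalar case)."""
    x1 = strain / eps_sc          # scaling of Eq. (9)
    return z_sc / eps_sc * net_grad(x1)
```

The ratio of the stress and strain scaling factors converts the dimensionless network sensitivity back into a stiffness.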
2.3.2 Stress to strain mapping with RNNs for fuzzy data. Some tasks in computational mechanics can be solved by stress to strain mappings (σ(τ) → ε(τ)), see for example, Oeser and Pellinien (2012). With respect to small strains, the whole strain of time step [n] can be split

[n]ε = [n]ε_el + [n]ε_pl + [n]ε_ve + [n]ε_vp    (17)

into its elastic (el), plastic (pl), viscoelastic (ve), and viscoplastic (vp) parts. In series oriented rheological elements enable the computation of the corresponding strains due to the applied stress. RNNs for fuzzy data can be utilized as parts of rheological models, for example, to describe viscoelastic responses. If neural networks are combined with rheological elements (springs and dashpots), a stress to strain mapping form of the material description is required. The following algorithm is presented for a neural network based description of viscoelastic strains.
Artificial neural networks for fuzzy data can be utilized to map fuzzy stress processes σ(τ) onto fuzzy strain processes ε_ve(τ), see Figure 3. If recurrent neural networks are applied, the whole uncertain stress history [1]σ, …, [n−1]σ and the current stresses [n]σ are considered to compute the current fuzzy strains [n]ε_ve.
According to Section 2.3.1, the j = 1, …, J fuzzy stress components [n]σ_j of time step [n] are scaled to dimensionless input signals

[n](1)x_j = [n]σ_j / σ_j^sc    (18)

The k = 1, …, K dimensionless fuzzy output signals of the RNN are rescaled to obtain the viscoelastic fuzzy strains

[n]ε_ve,k = z_k^sc · [n](M)x_k    (19)

Within the FE analysis, the strains at the integration points are given as

[n]ε_FE,k = [n]ε_k    (20)

If only viscoelastic strains are taken into account, the defect equation

[n]d_k = [n]ε_FE,k − [n]ε_ve,k = 0    (21)

is obtained and solved with the Newton–Raphson method. A Taylor series is created (polynomial of degree one)

[n]d_k = [n]d̄_k + Σ_{j=1}^{J} ( ∂[n]d_k/∂Δ[n]σ_j ) · dΔ[n]σ_j = 0    (22)

where [n]d̄_k is the defect of the current iteration. The partial derivatives

∂[n]d_k/∂Δ[n]σ_j = −∂Δ[n]ε_ve,k/∂Δ[n]σ_j    (23)

follow, where [n]d_k is replaced by Equation (21).
The partial derivatives of the incremental fuzzy strain components Δ[n]ε_ve,k with respect to the incremental fuzzy stress components Δ[n]σ_j in Equation (23) are evaluated using the chain rule two times resulting in

∂Δ[n]ε_ve,k/∂Δ[n]σ_j = ( ∂Δ[n]ε_ve,k/∂[n](M)x_k ) · ( ∂[n](M)x_k/∂[n](1)x_j ) · ( ∂[n](1)x_j/∂Δ[n]σ_j )    (24)

Analogously to Equations (13) and (14), the first factor is

∂Δ[n]ε_ve,k/∂[n](M)x_k = z_k^sc    (25)

and the third factor is evaluated by

∂[n](1)x_j/∂Δ[n]σ_j = ∂( [n−1]σ_j + Δ[n]σ_j ) / ( σ_j^sc · ∂Δ[n]σ_j ) = 1 / σ_j^sc    (26)

With Equations (25) and (26), the partial derivatives of the incremental fuzzy strain components with respect to the incremental fuzzy stress components are obtained as

∂Δ[n]ε_ve,k/∂Δ[n]σ_j = ( z_k^sc / σ_j^sc ) · ∂[n](M)x_k/∂[n](1)x_j    (27)

The partial derivatives of the dimensionless fuzzy output signals [n](M)x_k with respect to the dimensionless fuzzy network input signals [n](1)x_j can be evaluated using multiple applications of the chain rule, see Freitag et al. (2011b).
The Taylor series in Equation (22) can be written as a system of equations

[ ∂Δ[n]ε_ve,1/∂Δ[n]σ_1  …  ∂Δ[n]ε_ve,1/∂Δ[n]σ_J ]   [ dΔ[n]σ_1 ]   [ [n]d̄_1 ]
[          …                        …             ] · [    …     ] = [    …    ]    (28)
[ ∂Δ[n]ε_ve,K/∂Δ[n]σ_1  …  ∂Δ[n]ε_ve,K/∂Δ[n]σ_J ]   [ dΔ[n]σ_J ]   [ [n]d̄_K ]

using Equations (27) and (23). In matrix form, the system of equations (28) is denoted as

[n]D · dΔ[n]σ = [n]d̄    (29)

The stress corrections

dΔ[n]σ = ( [n]D )^(−1) · [n]d̄    (30)

are used to update the incremental fuzzy stresses in each iteration step [h]

Δ[n][h+1]σ = Δ[n][h]σ + dΔ[n]σ    (31)

The iteration is repeated until the defect is small enough, [n][H]d̄ → 0. Hence, the incremental fuzzy stresses

Δ[n]σ = Δ[n][H]σ    (32)

of the last iteration step [H] are obtained, and the total fuzzy stresses of time step [n] follow as

[n]σ = [n−1]σ + Δ[n]σ    (33)
The algorithm can be extended to consider also elastic, plastic, and viscoplastic fuzzy strain components.
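The Newton–Raphson iteration on the defect of Equation (21) can be sketched for a single stress component; the closed-form surrogate below replaces the RNN response and is purely hypothetical:

```python
def strain_ve(dsigma):
    """Hypothetical surrogate for the viscoelastic strain response
    to a stress increment (stands in for the trained RNN)."""
    return 1.0e-4 * dsigma + 2.0e-9 * dsigma ** 3

def solve_increment(eps_fe, tol=1e-12, max_iter=50):
    """Newton-Raphson on the defect d = eps_fe - strain_ve(dsigma) = 0."""
    dsigma = 0.0
    for _ in range(max_iter):
        d = eps_fe - strain_ve(dsigma)
        if abs(d) < tol:
            break
        # derivative of the defect, here by a central finite difference
        h = 1e-6
        dd = -(strain_ve(dsigma + h) - strain_ve(dsigma - h)) / (2 * h)
        dsigma -= d / dd
    return dsigma
```

In the article's setting the derivative is supplied analytically by Equation (27) rather than by finite differences; the finite difference is used here only to keep the sketch self-contained.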
3 RECURRENT NEURAL NETWORKS FOR FUZZY DATA
RNNs are suitable to identify and describe time-dependent material behavior, see for example Oeser and Freitag (2009). As discussed in Graf et al. (2010), the treatment of fuzzy data requires modified signal computation and training strategies with respect to deterministic data. Two ways of computation are mentioned in Graf et al. (2010):

interval arithmetic (for each α-cut)
optimization (α-level optimization)

The interval arithmetic approach for deterministic network parameters presented in Graf et al. (2010) is extended for a priori defined and trainable fuzzy network parameters in Freitag et al. (2011c) and Freitag et al. (2011a), respectively. Here, RNNs for fuzzy data are applied as material description within fuzzy or fuzzy stochastic FE analyses, which are performed by an α-level optimization according to Möller et al. (2000). In this case, α-level optimization is also required for signal computation and network training. Of course, this leads to a higher computational effort in comparison to interval arithmetic, where only interval bounds of the α-cuts are evaluated. But an application of interval arithmetic trained networks within fuzzy or fuzzy stochastic FE analyses based on α-level optimization might cause incorrect results.
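The idea of the α-level optimization, that is, searching the extreme output values over all realizations of an input α-cut, can be sketched as follows (the sampled search stands in for a proper optimization algorithm, and the response function is hypothetical):

```python
import math

def output(x):
    """Hypothetical crisp network response for one input realization."""
    return math.tanh(1.5 * x) - 0.3 * x

def alpha_level_bounds(interval, samples=1001):
    """alpha-level optimization (sketch): search the minimum and maximum of
    the output over all realizations in the input alpha-cut interval."""
    lo, hi = interval
    values = [output(lo + (hi - lo) * i / (samples - 1)) for i in range(samples)]
    return min(values), max(values)
```

Because the response is in general non-monotonic, the extrema need not occur at the interval endpoints, which is why interval arithmetic and α-level optimization can give different results.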
3.1 Signal computation
RNNs for fuzzy data can be applied for strain to stress mapping according to Section 2.3.1 or stress to strain mapping, see Section 2.3.2. They enable the computation of dimensionless fuzzy output signals [n](M)x_k under consideration of dimensionless fuzzy input signals [n](1)x_j, see Figure 4. In addition to feed forward networks, prior inputs are considered for the computation of the current outputs by feedback connections. The RNN in Figure 4 is an extended Elman network, containing feedback elements of Jordan networks (Jordan, 1990) and Elman networks (Elman, 1990). It consists of (M) layers. Each hidden and each output neuron has an additional context neuron for processing the material history. Whereas the number of input and output neurons is defined by J and K (number of fuzzy strain and stress components), the number of hidden layers and hidden neurons can be varied. The signal of neuron i in layer (m) is computed by

[n](m)x_i = φ_i^(m) ( Σ_h w_ih^(m) · [n](m−1)x_h + Σ_{q=1}^{I} c_iq^(m) · [n](m)y_q + b_i^(m) )    (34)

from the signals [n](m−1)x_h of the previous layer, the context neuron signals [n](m)y_q with the context weights c_iq^(m), and the bias value b_i^(m), where φ_i^(m)(.) denotes the activation function of neuron i in layer (m).
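The signal computation of Equation (34) and the context-neuron feedback can be sketched for a single neuron; the tanh activation and the names of the memory and feedback factors are assumptions of this sketch:

```python
import math

def neuron_step(x_prev_layer, y_context, w, c, b):
    """One neuron of layer (m), cf. Eq. (34): weighted inputs from layer
    (m-1) plus context-neuron signals, bias, and a tanh activation (assumed)."""
    z = sum(wi * xi for wi, xi in zip(w, x_prev_layer))
    z += sum(ci * yi for ci, yi in zip(c, y_context))
    return math.tanh(z + b)

def context_step(x_old, y_old, lam, rho):
    """Context-neuron update (sketch): the memory factor lam weights the
    previous neuron output, the feedback factor rho the previous context value."""
    return lam * x_old + rho * y_old
```

The context neurons carry the material history forward, which is what distinguishes the RNN from a feed forward network with the same layer structure.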
Fig. 5. Application of an α-level optimization for signal computation of recurrent neural networks for fuzzy data.
The search space for time step [n] is represented by the respective intervals of the fuzzy input signals [1](1)x_j, …, [n](1)x_j and all fuzzy network parameters (fuzzy weights w_ih^(m), fuzzy context weights c_iq^(m), fuzzy bias values b_i^(m), fuzzy memory factors, fuzzy feedback factors, and parameters of the fuzzy activation functions).
An α-level optimization is carried out for each time step [n] and each of the K fuzzy output values. For fuzzy processes with [N] time steps, N · K α-level optimizations are required. As results of the α-level optimization in time step [n], the left interval bounds [n](M)sl x_k and the right interval bounds [n](M)sr x_k of each α-cut are obtained. Hence, the fuzzy output signals of the RNN are calculated and can be used to compute the fuzzy stress or strain components. Additionally, the corresponding realizations of the fuzzy input signals and the realizations of the fuzzy network parameters are available, which lead to the left and right interval bounds, respectively. These realizations are required for the determination of the unknown fuzzy network parameters within the network training, see Section 3.2.
As demonstrated in Figure 5, the dimension of the search space increases linearly with increasing time steps for Type 1 and Type 3 mappings. In the case of Type 2 mapping, the dimension of the search space is constant and identical to the number of fuzzy network parameters. A high number of time steps and/or networks with many hidden neurons result in high dimensional search spaces, where optimization is time-consuming. The numerical effort can be reduced by two strategies:

fading memory
hybrid neural networks

In structural mechanics, so-called fading memory means that stresses which have been applied to a structure with a distance in time gradually lose their impact on the future structural behavior, see for example Oeser and Freitag (2009). It can be assumed, that the uncertainty of prior stresses has less influence on the current strains than the uncertainty of current stresses. The same is valid vice versa for strain to stress mapping. The dimension of the search space in time step [n] can be limited, if only the [n_f] prior and the current inputs [n−n_f](1)x_j, …, [n](1)x_j are considered as fuzzy numbers and all other prior inputs [1](1)x_j, …, [n−n_f−1](1)x_j are transformed to deterministic (defuzzified) numbers. Methods for defuzzification can be found, for example in Rommelfanger (1988), Jain (1976), and Chen (1985). The fading memory strategy can be applied for Type 1 and Type 3 mappings. After time step [n_f], the dimension of the search space remains constant.
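The fading memory strategy can be sketched as follows; fuzzy inputs are represented as lists of α-cut interval pairs, and the midpoint defuzzification is only one possible choice:

```python
def defuzzify(cuts):
    """Midpoint of the highest alpha-cut: one simple defuzzification choice."""
    sl, sr = cuts[-1]
    return 0.5 * (sl + sr)

def fading_memory(history, n_f):
    """Keep the current and the n_f prior fuzzy inputs; replace all older
    fuzzy inputs by crisp (defuzzified) numbers."""
    cut_at = max(0, len(history) - (n_f + 1))
    return [defuzzify(x) for x in history[:cut_at]] + history[cut_at:]
```

After the cut-off, only the n_f + 1 most recent inputs contribute interval bounds to the search space, so its dimension stops growing with the number of time steps.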
Hybrid neural networks are artificial neural networks with fuzzy and deterministic network parameters. In Ishibuchi and Nii (2001), partially fuzzified neural networks are presented, which are hybrid network structures for feed forward networks. Fuzzy weights are used for the synaptic connections between the neurons of layer (M − 1) and the output neurons, which have fuzzy bias values, too. All other network parameters are defined as deterministic numbers in Ishibuchi and Nii (2001). This hybrid neural network strategy can be generalized to all network parameters, and it can be extended to RNNs for fuzzy data. Which network parameters are fuzzy numbers and which are deterministic numbers has to be defined a priori. However, it might also happen that fuzzy network parameters are modified to deterministic numbers during network training, see Section 3.2. The hybrid neural network strategy can be applied to reduce the dimension of the search space for Type 2 and Type 3 mappings. It can also be combined with the fading memory strategy for Type 3 mapping.
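A minimal sketch of the hybrid parameter idea, assuming fuzzy parameters are stored as α-cut interval lists and crisp parameters as plain floats (the spread of 0.1 and both function names are hypothetical):

```python
def as_parameter(value, fuzzy):
    """A hybrid network parameter: either a crisp float or a fuzzy number
    stored as a list of alpha-cut interval pairs (here two cuts, +/- 0.1)."""
    if fuzzy:
        return [(value - 0.1, value + 0.1), (value, value)]
    return value

def search_space_dimension(params):
    """Only the fuzzy parameters contribute their interval bounds to the
    search space of the alpha-level optimization."""
    return sum(2 * len(p) for p in params if isinstance(p, list))
```

Declaring a parameter deterministic thus removes its bounds from the search space, which is exactly how the hybrid strategy reduces the optimization effort.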
3.2 Training
The unknown network parameters are determined
within the network training by an inverse analysis. All
available data are divided into training and validation
patterns. Whereas the training pattern is used to solve
an optimization task, the validation pattern is utilized
to check the generalization capability of neural network
based material formulations.
The objective of the optimization is the minimization of the averaged total training error

E^av = 1/(P · K · S) Σ_{p=1}^{P} ( 1/N_P Σ_{n=1}^{N_P} [n]E )    (36)

where p = 1, …, P are the training patterns. The scaling with the number of patterns P, the number of time steps N_P per pattern, the number of components K, and the number of α-cuts S in Equation (36) is done due to practical reasons, because it is easier to compare and evaluate errors with different numbers of P, N_P, K, and S. The error of each time step is computed by

[n]E = 1/2 Σ_{k=1}^{K} Σ_{s=1}^{S} [ ( [n](M)sl x_k − [n]sl d_k )² + ( [n](M)sr x_k − [n]sr d_k )² ]    (37)

evaluating the distance between the processes of fuzzy output signals (M)x_k(τ) and the scaled desired responses d_k(τ). The factor 1/2 is introduced to cancel later with the factor 2 resulting from the derivatives of the squared terms.
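Equations (36) and (37) can be sketched directly; outputs and desired responses are stored as (sl, sr) bound pairs per α-cut, and all names are hypothetical:

```python
def step_error(x_out, d):
    """[n]E of Eq. (37): half the sum of squared distances between the left
    and right alpha-cut bounds of outputs x_out[k][s] and desired d[k][s]."""
    e = 0.0
    for xk, dk in zip(x_out, d):
        for (xl, xr), (dl, dr) in zip(xk, dk):
            e += (xl - dl) ** 2 + (xr - dr) ** 2
    return 0.5 * e

def averaged_error(patterns, K, S):
    """E_av of Eq. (36): scaled by P, N_P, K, and S for comparability."""
    P = len(patterns)
    total = 0.0
    for pattern in patterns:            # pattern = list of (x_out, d) per time step
        N_P = len(pattern)
        total += sum(step_error(x, d) for x, d in pattern) / N_P
    return total / (P * K * S)
```

The scaling makes error values comparable across training runs with different numbers of patterns, time steps, components, and α-cuts.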
Different approaches for solving the optimization task can be applied. For artificial neural networks, gradient based search algorithms (backpropagation algorithms) are common. The temporal succession of the data series has to be considered, if RNNs are trained with backpropagation. In Graf et al. (2010), Freitag et al. (2011c), and Freitag et al. (2011a), modified backpropagation algorithms for RNNs have been developed for deterministic, a priori defined fuzzy, and trainable fuzzy network parameters, respectively. These approaches can be applied, if the signals are computed by interval arithmetic. In this article, a backpropagation algorithm for signal computation based on an α-level optimization is introduced. It is presented for fuzzy and deterministic network parameters, with respect to hybrid networks.
Backpropagation training can be repeated or processed in parallel with varying network structures and different a priori defined parameters:

number of hidden layers and hidden neurons;
deterministic or fuzzy network parameters (hybrid networks);
types and parameters of fuzzy activation functions; and
fuzzy or deterministic memory and feedback factors.
Adjustable fuzzy or deterministic network parameters are the weights, the context weights and the bias values, which are initialized randomly as deterministic or fuzzy numbers in an interval, for example [−1, 1]. A sequential training mode is applied, which enables the update of the network parameters in each time step [n]. As results of the α-level optimization, the realizations of the fuzzy input signals and the realizations of the fuzzy network parameters are obtained, which lead to the left and right interval bounds of the fuzzy output signal k. These realizations, denoted by the indices slk and srk, are used to determine realizations for the incremental correction of the deterministic or fuzzy weights
Δ[n]sl,rk w_ih^(m) = −η · ∂[n]E / ∂[n]sl,rk w_ih^(m) = −η · [n]sl,rk x_h^(m−1) · [n]sl,rk δ_i^(m)    (39)

where η is the learning rate and [n]sl,rk δ_i^(m) denotes the error term of neuron i in layer (m) for the respective bound realization. The error gradient of the weight itself is obtained as the sum of the contributions of all α-cuts

∂[n]E/∂[n]w_ih^(m) = Σ_{s=1}^{S} ( [n]slk δ_i^(m) · [n]slk x_h^(m−1) + [n]srk δ_i^(m) · [n]srk x_h^(m−1) )    (40)

Analogously, the incremental corrections of the context weights are determined by

Δ[n]sl,rk c_iq^(m) = −η · ∂[n]E / ∂[n]sl,rk c_iq^(m) = −η · [n]sl,rk y_q^(m) · [n]sl,rk δ_i^(m)    (41)

and

∂[n]E/∂[n]c_iq^(m) = Σ_{s=1}^{S} ( [n]slk δ_i^(m) · [n]slk y_q^(m) + [n]srk δ_i^(m) · [n]srk y_q^(m) )    (42)

respectively. For the fuzzy bias values, the error gradients are determined by

∂[n]E/∂[n]sl,rk b_i^(m) = [n]sl,rk δ_i^(m)    (43)

whereas

∂[n]E/∂[n]b_i^(m) = Σ_{s=1}^{S} ( [n]slk δ_i^(m) + [n]srk δ_i^(m) )    (44)

The error terms [n]sl,rk δ_i^(m) are computed backwards, layer by layer. At the output layer (M) they follow from the differences between the computed bound realizations and the desired responses, whereas in the hidden layers they are obtained from the error terms of the subsequent layer

[n]sl,rk δ_h^(M) = [n](M)sl,rk x_h − [n]sl,r d_h
[n]sl,rk δ_h^(m) = [n]sl,rk φ'_h^(m) · Σ_{i=1}^{I} [n]w_ih^(m+1) · [n]sl,rk δ_i^(m+1)    (46)

where φ'_h^(m) denotes the derivative of the activation function. The updated realizations of the network parameters are obtained by adding the increments, for example for the weights

[n+1]sl,rk w_ih^(m) = [n]sl,rk w_ih^(m) + Δ[n]sl,rk w_ih^(m)    (47)
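The incremental correction of a bound realization and the recombination of the updated bounds into a proper interval (cf. Equations (48) and (49)) can be sketched as:

```python
def weight_increment(eta, delta_i, x_h):
    """Increment of one bound realization: -eta * delta_i * x_h (cf. Eq. 39)."""
    return -eta * delta_i * x_h

def updated_bounds(w_old, delta_l, delta_r):
    """Update both bound realizations of a fuzzy weight and restore a proper
    interval by min/max (cf. Eqs. 48 and 49)."""
    wl = w_old[0] + delta_l
    wr = w_old[1] + delta_r
    return min(wl, wr), max(wl, wr)
```

The min/max step is needed because the left and right bound realizations are corrected independently and may cross after the update.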
which is valid also for the context weights and bias values. If the weight w_ih^(m) is an a priori defined deterministic number, the updated weight is directly obtained by Equation (47). In case of fuzzy weights, the left and right interval bounds of the updated weight are computed by

[n+1]sl w_ih^(m) = min ( [n+1]slk w_ih^(m), [n+1]srk w_ih^(m) )    (48)

and

[n+1]sr w_ih^(m) = max ( [n+1]slk w_ih^(m), [n+1]srk w_ih^(m) )    (49)

respectively.
The updated fuzzy network parameters have to be checked according to convexity. If the fuzzy number is improper, the interval bounds must be modified (rearranged). Hence, two variants are proposed, which are demonstrated in Figure 6. In variant 1, all computed interval bounds are sorted, see for example Ishibuchi et al. (1995). Variant 2 is a check of the interval bounds from α-cut s = S to α-cut s = 1. If [n+1](s−1)l w_ih^(m) > [n+1]sl w_ih^(m), the left interval bound of α-cut s − 1 is redefined by [n+1](s−1)l w_ih^(m) = [n+1]sl w_ih^(m). The corresponding right interval bounds are treated analogously.

4 EXAMPLES

4.1 Verification with fractional Newton element

The RNN approach has been verified by traditional material models. Here, results are presented for verification by a fractional Newton element, which can be used as a constitutive model to describe viscoelastic material behavior. The differential equation

σ(τ) = p̃ · d^r̃ ε(τ) / dτ^r̃    (50)

of the fractional Newton element, see for example Oeser and Freitag (2009), contains a fractional derivative of the strain ε(τ) with respect to the time τ. The operator r̃ represents the order of the derivative. In general, it is a fuzzy number between zero (linear elastic spring) and one (dashpot). If r̃ = r = 0, the parameter p̃ is an uncertain modulus of elasticity, whereas it is an uncertain viscosity for r̃ = r = 1. The creep function of the fuzzy fractional Newton element can be obtained by a Laplace transform using the stress boundary condition σ(τ) = σ̄ (constant stress). A convolution of the creep function and its evaluation for equidistant time steps Δτ leads to

[n]ε = Δτ^r̃ / ( p̃ · Γ(r̃ + 2) ) · Σ_{i=1}^{n} Δ[i]σ ( (n + 1 − i)^(r̃+1) − (n − i)^(r̃+1) )    (51)

The function Γ(.) in Equation (51) is the Gamma function. The strain in time step [n] depends on the current stress and the whole stress history. In contrast to common rheological elements (e.g., Kelvin elements) used for viscoelasticity, a recursive form of Equation (51) cannot be obtained for fractional rheological elements, that is, it is not possible to reduce the influence of the stress history to an internal variable updated for each time step. As a consequence, stress-strain-time dependencies are more complicated to learn.
By means of a numerical experiment, Equation (51) is used to compute training and validation patterns.
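The evaluation of Equation (51) for a given history of stress increments can be sketched as follows (scalar quantities and deterministic parameters for brevity):

```python
import math

def creep_strain(stress_increments, p, r, dtau):
    """Strain of Eq. (51) for a fractional Newton element: the current strain
    depends on the whole history of stress increments via the Gamma function."""
    n = len(stress_increments)
    factor = dtau ** r / (p * math.gamma(r + 2))
    total = 0.0
    for i in range(1, n + 1):
        total += stress_increments[i - 1] * (
            (n + 1 - i) ** (r + 1) - (n - i) ** (r + 1)
        )
    return factor * total
```

For r = 1 the element reduces to a dashpot: a stress of 2.0 ramped up during the first of three steps of length 0.5 with viscosity p = 4.0 gives the strain 0.625, which matches the integrated dashpot response.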
the fuzzy load process and additional material uncertainty considered by fuzzy network parameters.
5 CONCLUSION
A new approach has been presented using computational intelligence in structural engineering and mechanics. Uncertain time-dependent material behavior is considered by model-free material descriptions based on artificial neural networks. RNNs for fuzzy data are applied to describe uncertain stress-strain-time dependencies from real or numerical experiments. An α-level optimization is used to compute the fuzzy output signals of the networks. Fuzzy or deterministic network parameters are identified with a new backpropagation algorithm taking fuzzy signals into account. After training and validation with experimental data, RNNs can be utilized instead of or in combination with constitutive models for structural analysis. Often, the calculation of RNNs is faster compared to complicated material models, for example fractional rheological elements, which leads to less computational time in FE analyses. Numerical formulations for uncertain stress-strain-time dependencies have been developed, which can be used for incremental iterative solution strategies within fuzzy or fuzzy stochastic FE analyses. The new approach has been verified by a constitutive model. Its applicability is demonstrated by computing the fuzzy displacements of a structure under uncertain long-term loading.
Different further developments are possible. Special network structures can be created to consider physical boundary conditions of several materials. This step can be realized by postprocessing strategies or directly by using particle swarm optimization (PSO) for network training, see Freitag et al. (2012). PSO will also enable indirect training approaches using inhomogeneous stress and strain fields inside specimens. Extensions for applications in multiphysics are envisaged, for example neural network based descriptions of temperature-dependent material behavior.
ACKNOWLEDGMENTS
The second author gratefully acknowledges the support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the project FR 3044/1-1 in the framework of a research fellowship.
REFERENCES
Adeli, H. (2001), Neural networks in civil engineering: 1989–2000, Computer-Aided Civil and Infrastructure Engineering, 16, 126–42.
Adeli, H. & Jiang, X. (2003), Neuro-fuzzy logic model for freeway work zone capacity estimation, Journal of Transportation Engineering, 129, 484–93.
Adeli, H. & Jiang, X. (2006), Dynamic fuzzy wavelet neural network model for structural system identification, Journal of Structural Engineering, 132, 102–11.
Adeli, H. & Jiang, X. (2007), Toward smart structures: Novel wavelet-chaos-dynamic neural network models for vibration control and health monitoring of highrise building and bridge structures under extreme dynamic loading, in B. H. V. Topping (ed.), Civil Engineering Computations: Tools and Techniques, Chapter 1, Saxe-Coburg Publications, Stirlingshire, pp. 1–24.
Adeli, H. & Panakkat, A. (2009), A probabilistic neural network for earthquake magnitude prediction, Neural Networks, 22, 1018–24.
Aliev, R. A., Fazlollahi, B., Aliev, R. R. & Guirimov, B. G. (2008), Linguistic time series forecasting using fuzzy recurrent neural networks, Soft Computing – A Fusion of Foundations, Methodologies and Applications, 12, 183–90.
Aliev, R. A., Guirimov, B. G., Fazlollahi, B. & Aliev, R. R. (2009), Evolutionary algorithm-based learning of fuzzy neural networks. Part 2: recurrent fuzzy neural networks, Fuzzy Sets and Systems, 160, 2553–66.
Arangio, S. & Bontempi, F. (2010), Soft computing based multilevel strategy for bridge integrity monitoring, Computer-Aided Civil and Infrastructure Engineering, 25, 348–62.
Balu, A. S. & Rao, B. N. (2012), Multicut-high dimensional model representation for structural reliability bounds estimation under mixed uncertainties, Computer-Aided Civil and Infrastructure Engineering, 27, 419–38.
Bothe, H. H. (1993), Fuzzy-Logic, Springer, Berlin.
Chen, S. H. (1985), Ranking fuzzy numbers with maximizing set and minimizing set, Fuzzy Sets and Systems, 17, 113–29.
Elman, J. L. (1990), Finding structure in time, Cognitive Science, 14, 179–211.
Freitag, S., Beer, M., Graf, W. & Kaliske, M. (2009), Lifetime prediction using accelerated test data and neural networks, Computers and Structures, 87, 1187–94.
Freitag, S., Graf, W. & Kaliske, M. (2011a), Recurrent neural networks for fuzzy data, Integrated Computer-Aided Engineering, 18, 265–80.
Freitag, S., Graf, W. & Kaliske, M. (2011b), Recurrent neural networks for fuzzy data as a material description within the finite element method, in Y. Tsompanakis and B. H. V. Topping (eds.), Proceedings of the Second International Conference on Soft Computing Technology in Civil, Structural and Environmental Engineering, Chania, Civil-Comp Press, Stirlingshire, paper 28, pp. 1–20.
Freitag, S., Graf, W., Kaliske, M. & Sickert, J.-U. (2011c), Prediction of time-dependent structural behaviour with recurrent neural networks for fuzzy data, Computers and Structures, 89, 1971–81.
Freitag, S., Muhanna, R. L. & Graf, W. (2012), A particle swarm optimization approach for training artificial neural networks with uncertain data, in M. Vořechovský, V. Sadílek, S. Seitl, V. Veselý, R. L. Muhanna and R. L. Mullen (eds.), Proceedings of the 5th International Conference on Reliable Engineering Computing, Litera, Brno, pp. 151–70.
Ghaboussi, J., Garret, J. H. & Wu, X. (1991), Knowledge-based modeling of material behavior with neural networks, Journal of Engineering Mechanics, 117, 132–53.
Ghaboussi, J. & Sidarta, D. E. (1998), New nested adaptive neural networks (NANN) for constitutive modeling, Computers and Geotechnics, 22, 29–52.
González-Olvera, M. A., Gallardo-Hernández, A. G., Tang, Y., Revilla-Monsalve, M. C. & Islas-Andrade, S. (2010), A discrete-time recurrent neurofuzzy network for black-box modeling of insulin dynamics in diabetic type-1 patients, International Journal of Neural Systems, 20, 149–58.
Graf, W., Freitag, S., Kaliske, M. & Sickert, J.-U. (2010), Recurrent neural networks for uncertain time-dependent structural behavior, Computer-Aided Civil and Infrastructure Engineering, 25, 322–33.
Graf, W., Sickert, J.-U., Freitag, S., Pannier, S. & Kaliske, M. (2011), Neural network approaches in structural analysis under consideration of imprecision and variability, in Y. Tsompanakis and B. H. V. Topping (eds.), Soft Computing Methods for Civil and Structural Engineering, Saxe-Coburg Publications, Stirlingshire, pp. 59–85.
Haj-Ali, R., Pecknold, D. A., Ghaboussi, J. & Voyiadjis, G. Z. (2001), Simulated micromechanical models using artificial neural networks, Journal of Engineering Mechanics, 127, 730–8.
Hashash, Y. M. A., Jung, S. & Ghaboussi, J. (2004), Numerical implementation of a neural network based material model in finite element analysis, International Journal for Numerical Methods in Engineering, 59, 989–1005.
Haykin, S. (1999), Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ.
Ishibuchi, H., Morioka, K. & Turksen, I. B. (1995), Learning by fuzzified neural networks, International Journal of Approximate Reasoning, 13, 327–58.
Ishibuchi, H. & Nii, M. (2001), Numerical analysis of the learning of fuzzified neural networks from fuzzy if-then rules, Fuzzy Sets and Systems, 120, 281–307.
Jain, R. (1976), Decisionmaking in the presence of fuzzy variables, IEEE Transactions on Systems, Man, and Cybernetics, 6, 698–703.
Jiang, X. & Adeli, H. (2005), Dynamic wavelet neural network for nonlinear identification of highrise buildings, Computer-Aided Civil and Infrastructure Engineering, 20, 316–30.
Jiang, X. & Adeli, H. (2007), Pseudospectra, MUSIC, and dynamic wavelet neural network for damage detection of highrise buildings, International Journal for Numerical Methods in Engineering, 71, 606–29.
Jiang, X. & Adeli, H. (2008), Dynamic fuzzy wavelet neuroemulator for non-linear control of irregular building structures, International Journal for Numerical Methods in Engineering, 74, 1045–66.
Jordan, M. I. (1990), Attractor dynamics and parallelism in a connectionist sequential machine, in J. Diederich (ed.), Artificial Neural Networks: Concept Learning, IEEE Press, Piscataway, NJ, pp. 112–27.
Jung, S. & Ghaboussi, J. (2006), Neural network constitutive model for rate-dependent materials, Computers and Structures, 84, 955–63.
Kučerová, A., Lepš, M. & Zeman, J. (2007), Back analysis of microplane model parameters using soft computing methods, Computer Assisted Mechanics and Engineering Sciences, 14, 219–42.
Möller, B. & Beer, M. (2004), Fuzzy Randomness: Uncertainty in Civil Engineering and Computational Mechanics, Springer, Berlin, Heidelberg, New York.
Möller, B. & Beer, M. (2008), Engineering computation under uncertainty: capabilities of non-traditional models, Computers and Structures, 86, 1024–41.
Möller, B., Graf, W. & Beer, M. (2000), Fuzzy structural analysis using α-level optimization, Computational Mechanics, 26, 547–65.
Novák, D. & Lehký, D. (2006), ANN inverse analysis based on stochastic small-sample training set simulation, Engineering Applications of Artificial Intelligence, 19, 731–40.
Oeser, M. & Freitag, S. (2009), Modeling of materials with fading memory using neural networks, International Journal for Numerical Methods in Engineering, 78, 843–62.
Oeser, M. & Pellinien, T. (2012), Computational framework for common visco-elastic models in engineering based on the theory of rheology, Computers and Geotechnics, 42, 145–56.
Panakkat, A. & Adeli, H. (2009), Recurrent neural network for approximate earthquake time and location prediction using multiple seismicity indicators, Computer-Aided Civil and Infrastructure Engineering, 24, 280–92.
Schäfer, A. M., Udluft, S. & Zimmermann, H.-G. (2008), Learning long-term dependencies with recurrent neural networks, Neurocomputing, 71, 2481–8.
Schäfer, A. M. & Zimmermann, H.-G. (2007), Recurrent neural networks are universal approximators, International Journal of Neural Systems, 17, 253–63.
Sickert, J.-U., Freitag, S. & Graf, W. (2011), Prediction of uncertain structural behaviour and robust design, International Journal for Reliability and Safety, 5, 358–77.
Theodoridis, D., Boutalis, Y. & Christodoulou, M. (2012), Dynamical recurrent neuro-fuzzy identification schemes employing switching parameter hopping, International Journal of Neural Systems, 22, 1250004, 16 p.
Zadeh, L. A. (1965), Fuzzy sets, Information and Control, 8, 338–53.