
Application of Soft Computing Techniques for Prediction of Slope Failure

Department of Mining Engineering


National Institute of Technology, Rourkela

MN591: Research Project I (Summer)

Submitted By: Abhijeet Dutta (711MN1172)

Guided By: Prof. N. Prakash

Table of Contents
Introduction ..................................................................................................................... 2
Literature Review ............................................................................................................ 3
Artificial Neural Network .................................................................................................. 5
Introduction .................................................................................................................. 5
Back-Propagation Algorithm: ....................................................................................... 7
Variable selection ......................................................................................................... 8
Formation of training, testing and validation sets ......................................................... 9
Neural network architecture ......................................................................................... 9
Evaluation criteria ........................................................................................................ 9
Neural network training ................................................................................................ 9
Fuzzy Inference System ................................................................................................ 11
Conclusion .................................................................................................................... 13
References .................................................................................................................... 14

Table of Figures
Figure 1: Three layer neural network ..................................................................................................... 5
Figure 2: Graphical presentation of neuron in ANN .............................................................................. 6
Figure 3: A multi-layer feedforward network .......................................................................................... 7
Figure 4: A recurrent neural network ....................................................................................................... 7
Figure 5: Input, sum-operator and bias working together for producing the output node. ............ 10
Figure 6: Flow chart to determine the properties and analyse the slope stability by ANN ......... 11
Figure 7: (a) Crisp Set (b) Fuzzy Set ................................................................................................... 12

Introduction
Mining is one of the earliest primary industries of human civilization. It is considered a key industry for many countries, and it has large ripple effects on other industries. In comparison to past centuries, the efficiency of modern mining has been dramatically improved through the development of associated technologies. Many innovative mining methods and theories have been developed by a multitude of scholars and engineers. Advanced computing technologies, together with improved machinery, have contributed significantly to the development of the mining industry. In fact, modern mining is an advanced amalgamation of all the fundamental sciences. In actual mining operations, however, the mining manager frequently encounters complex decision-making problems without sufficient data or precise information available to resolve them. An inappropriate decision could endanger people's lives and cause irreversible damage to the mining economy, given the large capital investments involved in mining. The main causes of difficulty in the decision-making processes in mining can be broadly categorized.
The determination of the non-linear behaviour of multivariate dynamic systems often presents a
challenging and demanding problem. Slope stability estimation is an engineering problem that
involves several parameters. The impact of these parameters on the stability of slopes is
investigated through the use of computational tools called neural networks. A number of networks of threshold logic units with adjustable weights were tested. The computational method for the training process was a back-propagation learning algorithm. In this paper, the input data for slope
stability estimation consist of values of geotechnical and geometrical input parameters. As an
output, the network estimates the factor of safety (FS) that can be modelled as a function
approximation problem, or the stability status (S) that can be modelled either as a function
approximation problem or as a classification model. The performance of the network is measured
and the results are compared to those obtained by means of standard analytical methods.
Since their introduction, artificial neural networks and their applications have continued to captivate scientists and engineers from a variety of disciplines. This growing interest among researchers stems from the fact that these learning machines perform very well in pattern recognition and in modelling the non-linear relationships of multivariate dynamic systems. This paper investigates the validity of using artificial neural networks for the physical problem of slope stability prediction.
The behaviour of complex engineering mechanisms is determined by a series of interactive
parameters with interrelations not yet entirely understood.
According to Jing and Hudson (2002) and Jing (2003), all numerical modelling methods (analytical methods, basic numerical methods, the Finite Element Method, the Boundary Element Method, the Distinct Element Method, hybrid methods, extended numerical methods and fully coupled models) attempt to achieve a one-to-one mechanism mapping in the model. In other words, a mechanism occurring in reality, such as a clearly defined stress-strain relationship, is modelled directly.

Literature Review
The term one-to-one mapping refers to the direct modelling of geometry and physical
mechanisms, either specifically or through equivalent properties. The neural network approach is
a non-one-to-one mapping method; in such a model, the mechanism mapping is not totally direct. The model nevertheless provides predictive capabilities, which is why it has been used for rock and soil parameter identification and prediction.
Some recent publications on various geotechnical engineering topics are given below:
Performance monitoring of rock masses for mining geomechanics (Millar and Hudson, 1994).
Liquefaction assessment (Goh, 1995b).
Rock mass classification (Sklavounos and Sakellariou, 1995).
Estimation of load capacity of driven piles (Goh, 1995a).
Estimation of permeability of compacted clay layers (Najjar and Basheer, 1996).
Subsurface characterization (Gangopadhyay et al., 1999).
Different types of displacements of rock slopes (Deng and Lee, 2001).
Lithofacies identification (Chang et al., 2002).
Assessment of geotechnical properties (Yang and Rosenbaum, 2001).
As evidenced by the list of references above, the neural network modelling approach has already been applied to a variety of subjects in rock and soil mechanics. It is also evident that the method has significant potential on account of its non one-to-one mapping and because it may become possible in the future for such networks to include creative ability, perception and judgement. However, the method has not yet proved to be an adequate alternative to conventional modelling (Jing and Hudson, 2002; Jing, 2003). Approaching the problem of slope stability estimation from the perspective of artificial neural networks is not an easy task; it requires sophisticated modelling techniques, experience, deep engineering knowledge and a large amount of experimental data.

The accurate estimation of the stability of a rock or soil slope is a difficult problem, mainly because of the complexity of the physical system itself and the difficulty of determining the necessary input data associated with the geotechnical parameters. We are faced with a non-linear dynamical system that is also spatially distributed, with the further problem that usually only a rough overall (macroscopic) description of the physical and geometric characteristics of the slope can be given. For these reasons, it is difficult to determine the values of the essential input data. Neural networks provide descriptive and predictive capabilities and, for this reason, have been applied across the range of rock and soil parameter identification and engineering activities (Jing and Hudson, 2002; Jing, 2003). This paper is a continuation of the research conducted by our research group (Sakellariou and Ilias, 1997; Roussos, 2000).
The following approaches will be taken for the rest of my project work:
Application of the neural network method in the field of slope stability, and investigation of the
performance and convergence of artificial neural networks.
Investigation of the accuracy and flexibility of the method when applied to specific real-world
data sets, with reference to circular failure mechanism, plane failure mechanism and wedge
failure mechanism, in soil or highly fractured rock and rock slopes. Additionally, exploration of the
data set and testing the quality of the data.
Validation of the method by comparing its results to those obtained by standard engineering
techniques and simple empirical equations.
Examination of the relative importance of the input parameters.
Estimation of the stability and the factor of safety from the perspective of a dynamic system in which descriptive data, such as the status of stability, can be incorporated.

Artificial Neural Network


Introduction
An artificial neural network (ANN), usually called neural network (NN), is a mathematical model
or computational model that is inspired by the structure and/or functional aspects of biological
neural networks. A neural network consists of an interconnected group of artificial neurons, and
it processes information using a connectionist approach to computation. Neural networks are powerful tools for modelling, especially when the underlying relationship in the data is unknown. ANNs can identify and learn correlated patterns between input data sets and corresponding target values. After training, ANNs can be used to predict the outcome of new independent input data.

ANNs have been applied to many geotechnical engineering problems such as in pile capacity
prediction, modelling soil behaviour, site characterisation, earth retaining structures,
settlement of structures, slope stability, design of tunnels and underground openings,
liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling and
classification of soils.

Figure 1: Three layer neural network

Figure 1 shows a three-layer neural network consisting of a first layer of input neurons, a second layer of hidden neurons and a third layer of output neurons. Supervised neural networks are trained to produce desired outputs in response to a training set of inputs; the network is trained by presenting it with input patterns and the matching output patterns. Supervised learning is used in the modelling and control of dynamic systems, the classification of noisy data, and the prediction of future events. Unsupervised neural networks, on the other hand, are trained by letting the network continually adjust itself to new inputs. This is also called self-organisation, in which an (output) unit is trained to respond to clusters of patterns within the inputs. Reinforcement learning can be considered an intermediate form of the above two types of learning: here the learning machine performs some action on the environment and receives a feedback response from the environment.
For an artificial neuron, the weight is a number and represents the synapse. A negative weight reflects an inhibitory connection, while positive values designate excitatory connections. All inputs are summed and modified by the weights; this is referred to as a linear combination. Finally, an activation function controls the amplitude of the output; for example, an acceptable output range is usually between 0 and 1, or between -1 and 1.
A neuron is a real-valued function of the input vector (x1, x2, ..., xk). The output of neuron j is obtained as y_j = f( Σ_i w_ij x_i + b_j ), where the w_ij are the connection weights, b_j is the bias and f is an activation function, typically the sigmoid (logistic or hyperbolic tangent) function. A graphical presentation of a neuron is given in Figure 2. Mathematically, a multi-layer perceptron network is a function consisting of compositions of weighted sums of the functions corresponding to the neurons.
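As an illustration only, the short sketch below implements this weighted-sum-and-activation computation for a single neuron in Python/NumPy (the examples in this work are produced with MATLAB); the input values, weights and bias shown are hypothetical.

# Illustrative sketch: forward pass of one artificial neuron with a logistic
# (sigmoid) activation, matching y_j = f( sum_i w_i * x_i + b ).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, b):
    # weighted sum of inputs plus bias, passed through the activation function
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.8, 0.3, 0.5])      # hypothetical input vector
w = np.array([0.4, -0.6, 0.2])     # hypothetical synaptic weights
b = 0.1                            # bias term
print(neuron_output(x, w, b))      # output lies in (0, 1)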

Figure 2: Graphical presentation of neuron in ANN

There are several types of NN architectures; however, the two most widely used are feed-forward networks and recurrent networks. In a feed-forward network, information flows in one direction along connecting pathways, from the input layer via the hidden layers to the final output layer. There is no feedback (loop), i.e. the output of any layer does not affect that same layer or a preceding layer. In feed-forward neural networks, the data flow from the input to the output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present.

Figure 3: A multi-layer feedforward network

Recurrent neural networks differ from feed-forward architectures in that they contain at least one feedback loop. Thus, in these networks there could, for example, exist one layer with feedback connections, as shown in the figure below. There could also be neurons with self-feedback links, i.e. the output of a neuron is fed back into itself as input. In some cases, the activation values of the units undergo a relaxation process such that the neural network evolves to a stable state in which these activations no longer change. In other applications, the changes in the activation values of the output neurons are significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).

Figure 4: A recurrent neural network

Back-Propagation Algorithm:
The back-propagation algorithm is a non-linear extension of the least mean squares (LMS) algorithm for multi-layer perceptrons. It is the most widely used of the neural network paradigms and has been successfully applied in many fields of model-free function estimation. The back-propagation network (BPN) is computationally expensive, especially during the training process; however, a properly trained BPN tends to produce reasonable results when presented with new input data sets.
A BPN is usually layered, with each layer fully interconnected to the layers below and above it. The first layer is the input layer, the only layer in the network that can receive external input. The second layer is the hidden layer, in which the processing units are interconnected to the layers below and above it. The third layer is the output layer. Each unit of the hidden layer is interconnected with the units of the output layer. Units are not interconnected to other units within the same layer. Each interconnection is assigned an associative connection strength, expressed as a weight (Figure 1). The weights are adjusted during the training of the network. In a BPN the training is supervised, i.e. the network is presented with target values for each input pattern. Unlike a single-layer perceptron, the BPN does not require the input space to be linearly separable.
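The short sketch below is one possible, simplified rendering of such a training loop for a three-layer BPN. It is written in Python/NumPy for illustration rather than in the MATLAB environment used in this work, and the data, layer sizes and learning rate are hypothetical.

# Minimal back-propagation sketch for a three-layer network (one hidden layer).
# All data and dimensions are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.random((50, 4))            # 50 hypothetical training patterns, 4 inputs each
t = rng.random((50, 1))            # hypothetical targets (e.g. a safety factor)

n_in, n_hid, n_out = 4, 3, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid));  b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)
lr = 0.1                           # learning rate

for epoch in range(1000):
    # forward pass
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    y = sigmoid(h @ W2 + b2)       # network output
    err = y - t                    # error with respect to the targets

    # backward pass: propagate the error and compute deltas
    d_out = err * y * (1 - y)              # delta at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)   # delta at the hidden layer

    # gradient-descent update on the sum of squared errors
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

In practice the number of epochs would be governed by the error on the testing set rather than fixed in advance.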
The various steps in developing a neural network model are summarized below; the worked examples are implemented using MATLAB software.

Variable selection
The input variables important for modelling the variable(s) under study are selected by suitable variable selection procedures.

Formation of training, testing and validation sets


The data set is divided into three distinct sets called the training, testing and validation sets. The training set is the largest and is used by the neural network to learn the patterns present in the data. The testing set is used to evaluate the generalization ability of a supposedly trained network. A final check on the performance of the trained network is made using the validation set.
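A minimal sketch of such a three-way split is given below; the 70/15/15 proportions and the NumPy-based implementation are assumptions made for illustration, not prescriptions from this work.

# Simple random split of a data set into training, testing and validation subsets.
import numpy as np

def split_dataset(X, y, train=0.70, test=0.15, seed=0):
    n = len(X)
    idx = np.random.default_rng(seed).permutation(n)   # shuffled row indices
    n_train = int(train * n)
    n_test = int(test * n)
    tr, te, va = np.split(idx, [n_train, n_train + n_test])
    return (X[tr], y[tr]), (X[te], y[te]), (X[va], y[va])

# usage: (X_tr, y_tr), (X_te, y_te), (X_va, y_va) = split_dataset(X, y)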

Neural network architecture


Neural network architecture defines the structure of the network, including the number of hidden layers, the number of hidden nodes and the number of output nodes.
Number of hidden layers: The hidden layer(s) provide the network with its ability to generalize. In theory, a neural network with one hidden layer containing a sufficient number of hidden neurons is capable of approximating any continuous function. In practice, neural networks with one, and occasionally two, hidden layers are widely used and have been found to perform very well.
Number of hidden nodes: There is no magic formula for selecting the optimum number of hidden neurons; however, some rules of thumb are available for calculating it. A rough approximation can be obtained by the geometric pyramid rule proposed by Masters (1993): for a three-layer network with n input and m output neurons, the hidden layer would have sqrt(n*m) neurons (see the short sketch below).
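The following small sketch simply evaluates this rule of thumb; the nine-input, one-output configuration used here anticipates the slope stability network described later.

# Geometric pyramid rule (Masters, 1993): roughly sqrt(n*m) hidden neurons
# for n inputs and m outputs.
import math

def pyramid_rule(n_inputs, n_outputs):
    return max(1, round(math.sqrt(n_inputs * n_outputs)))

print(pyramid_rule(9, 1))   # nine inputs, one output -> 3 hidden neurons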
Activation function: Activation functions are mathematical formulae that determine the output of a processing node. Each unit takes its net input and applies an activation function to it. Non-linear functions such as the logistic and tanh functions are used as activation functions. Transfer functions such as the sigmoid, f(x) = 1 / (1 + e^(-x)), are commonly used because they are non-linear and continuously differentiable, which is desirable for network learning.

Evaluation criteria
The most common error function minimized in neural networks is the sum of squared errors, E = (1/2) Σ_i (t_i - o_i)^2, where t_i are the target values and o_i the network outputs. Other error functions offered by different software packages include least absolute deviations, least fourth powers, asymmetric least squares and percentage differences.
Neural network training
Training a neural network to learn patterns in the data involves iteratively presenting it with examples of the correct known answers. The objective of training is to find the set of weights between the neurons that determines the global minimum of the error function. This involves decisions regarding the number of iterations, i.e. when to stop training the neural network, and the selection of the learning rate.

Figure 5: Input, sum-operator and bias working together for producing the output node.

Various researchers have used ANNs to predict slope stability, slope failure or the factor of safety. A back-propagation neural network is used to calculate the factor of safety. Nine input parameters and one output parameter are used in the analysis. The output parameter is the factor of safety of the slope; the input parameters are the height of the slope, the inclination of the slope, the height of the water level, the depth of the firm base, the cohesion of the soil, the friction angle of the soil and the unit weight of the soil, together with the important horizontal and vertical seismic coefficients.
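The sketch below indicates how a nine-input, one-output model of this kind could be set up. It uses scikit-learn in Python instead of the MATLAB tools referred to in this report, and the training data are random placeholders standing in for real slope case records; it is an illustrative sketch, not the author's actual model.

# Hedged sketch of a back-propagation regressor with the nine inputs listed above
# (slope height, slope inclination, water level height, depth of firm base,
# cohesion, friction angle, unit weight, horizontal and vertical seismic
# coefficients) and the factor of safety as the single output.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.random((200, 9))                 # 200 hypothetical slope cases, 9 parameters each
fs = rng.uniform(0.8, 2.0, size=200)     # hypothetical factor-of-safety targets

model = MLPRegressor(hidden_layer_sizes=(3,),   # pyramid rule: sqrt(9*1) = 3
                     activation='logistic',
                     solver='lbfgs',
                     max_iter=5000,
                     random_state=0)
model.fit(X, fs)
print(model.predict(X[:5]))              # predicted FS for the first five cases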
Slope failures are complex natural phenomena that constitute a serious natural hazard in many
countries. To prevent or mitigate the landslide damage, slope-stability analyses and stabilization
require an understanding and evaluation of the processes that govern the behavior of the slopes.
The factor of safety, based on an appropriate geotechnical model, is required as an index of stability in order to evaluate slope stability. Many variables are involved in slope stability evaluation, and the calculation of the factor of safety requires geometrical data, physical data on the geologic materials and their shear-strength parameters (cohesion and angle of internal friction), information on pore-water pressures, etc.
A series of ANNs were created in order to predict the safety factor and estimate stability against
the circular failure mechanism and the wedge failure mechanism.

ANNs and fuzzy sets can primarily be used in two ways in slope stability analysis. One is the prediction of various strength and physico-mechanical properties from previously measured properties. The other is the direct prediction of the factor of safety or the stability status, based on the simulation of a large data set or on the incorporation of case studies.
Rock slopes play an important role in the design and excavation of open pit mines and civil engineering projects all around the world. Initially, the cohesion and friction angle can be trained by a neural network with compressive strength as the input parameter; the cohesion and friction angle calculated from the compressive strength are then used as inputs to a finite difference code to analyse the slope stability and determine the factor of safety.

Figure 6: Flow chart to determine the properties and analyse the slope stability by ANN

Fuzzy Inference System


Fuzzy logic is a form of many-valued logic that deals with reasoning that is approximate rather than fixed and exact. The nature of uncertainty in a slope design is very important and should be considered. Fuzzy set theory was developed specifically to deal with uncertainties that are non-random in nature.
Fuzzy sets were first introduced in 1965 by Lotfi Zadeh as a mathematical way to represent linguistic vagueness, and fuzzy set theory can be considered a generalization of classical set theory. In a classical set, an element either belongs or does not belong to the set; that is, the membership of an element is crisp (0 or 1), and a crisp set of real objects is described by a unique membership function such as XA in Figure 7(a).

Figure 7: (a) Crisp Set (b) Fuzzy Set

In contrast, a fuzzy set is a generalization of an ordinary set that allows the degree of membership of each element to range over the unit interval between 0 and 1, as shown in Figure 7(b). In addition, fuzzy set theory can be used to develop rule-based models which combine physical insight, expert knowledge and numerical data in a transparent way that closely resembles the real world. An element of a variable can be a member of a fuzzy set through a membership function that takes values in the range from 0 to 1. Membership functions (MFs) can either be chosen arbitrarily by the user, based on the user's experience, or be designed using machine learning methods (e.g. artificial neural networks, genetic algorithms, etc.). Membership functions come in different shapes: triangular, trapezoidal, piecewise-linear, Gaussian, bell-shaped, etc. Fuzzy rules provide a system for describing complex (uncertain, vague) systems by relating input and output parameters using linguistic variables. A fuzzy if-then rule assumes the form "if x is A then y is B", where A and B are linguistic values defined by fuzzy sets on the universes of discourse X and Y, respectively.
Fuzzy inference is the process of mapping an input fuzzy set to an output fuzzy set using fuzzy logic. In fact, the core section of a fuzzy system is the FIS, which combines the facts obtained from fuzzification with the rule base and conducts the fuzzy reasoning process. Generally, the basic structure of a FIS consists of three conceptual components: a rule base, a database and a reasoning mechanism. The rule base contains a selection of fuzzy rules and the database defines the membership functions used in the fuzzy rules. The reasoning mechanism performs the fuzzy reasoning based on the rules and the given facts to derive a reasonable output or conclusion. Several FISs have been employed in various applications; the most commonly used include:
Mamdani Fuzzy Model;
Takagi-Sugeno-Kang fuzzy (TSK) model;
Tsukamoto fuzzy model;
Singleton fuzzy model.

The differences between these FISs lie in the consequents of their fuzzy rules, and their aggregation and defuzzification procedures therefore differ accordingly. Defuzzification is the process of reducing an aggregated (or clipped) fuzzy set to a crisp number, presumably the most representative value of that fuzzy set interval. Two methods are generally used for defuzzification: the centre of area (centroid) method and the ranking index method.
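The sketch below walks through these steps once, in plain Python/NumPy, for a single hypothetical rule relating cohesion and slope angle to a stability index: triangular membership functions, min-implication in the Mamdani style, and centroid defuzzification. The variables, membership function parameters and the rule itself are illustrative assumptions, not values taken from this work.

# Minimal Mamdani-style inference sketch: fuzzification, one rule, centroid defuzzification.
import numpy as np

def trimf(x, a, b, c):
    # triangular membership function with feet at a and c and peak at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# universe of discourse for the (hypothetical) output variable "stability index"
s = np.linspace(0.0, 1.0, 501)
stable_mf = trimf(s, 0.4, 0.7, 1.0)        # fuzzy set "stable" on the output

# fuzzification of two hypothetical crisp inputs
cohesion_high = trimf(np.array([28.0]), 20.0, 35.0, 50.0)[0]   # degree ~0.53
slope_gentle  = trimf(np.array([32.0]), 20.0, 30.0, 45.0)[0]   # degree ~0.87

# rule: IF cohesion is high AND slope angle is gentle THEN slope is stable
firing = min(cohesion_high, slope_gentle)      # AND via the minimum operator
clipped = np.minimum(stable_mf, firing)        # Mamdani min-implication (clipping)

# centre of area (centroid) defuzzification
crisp = np.sum(s * clipped) / np.sum(clipped)
print(crisp)                                   # crisp stability index in [0, 1]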
The Mamdani fuzzy model is often used in geotechnical problems because of its simplicity and its effectiveness in handling linguistic variables. As noted above, the rule base, the database and the reasoning mechanism are the three conceptual elements of a FIS: the fuzzy rules constitute the rule base, the database determines the membership functions associated with the input parameters used in the rule base, and the reasoning mechanism provides the platform for deriving an adequate conclusion (output) using fuzzy logic. At this stage the extraction of a crisp value from a fuzzy set, called defuzzification, is performed.
Fuzzy logic provides an inference structure that enables human reasoning capabilities to be applied to artificial knowledge-based systems. It offers a means of converting a linguistic strategy into control actions and thus provides high-level computation. Fuzzy logic gives mathematical strength to the emulation of certain perceptual and linguistic attributes associated with human cognition, whereas the science of neural networks provides a new computing tool with learning and adaptation capabilities. The theory of fuzzy logic provides an inference mechanism under cognitive uncertainty, while computational neural networks offer advantages such as learning, adaptation, fault tolerance, parallelism and generalization.
Neuro-fuzzy inference systems have been used in many areas of civil engineering. A stability assessment model for epimetamorphic rock slopes has been developed using an Adaptive Neuro-Fuzzy Inference System (ANFIS) for its capacity for dynamic non-linear analysis. The inference system is employed to predict the stability of the slope by choosing the bulk density, the height H, the inclination, and the shear strength parameters c and φ of the slope as inputs, with the stability state as the output.

Conclusion
In order to forecast the factor of safety (FS) or the status of stability (S) of rock or soil slopes, the factors that influence FS and S have to be determined. The output layer is composed of a single output parameter, either the factor of safety FS or the status of stability. Considering that the uncertainties in stability analysis have both random and fuzzy characteristics, the author uses the maximum membership degree principle to analyse and evaluate slope stability. Ridge distributions for the quantitative effect factors and trapezium distributions for the ratio effect factors are applied here to construct the membership functions. A two-class synthetic assessment method is adopted to analyse the stability of the slope. The slope stability is assessed by each factor, such as cohesion, angle of internal friction, UCS, slope angle, etc. The membership functions and the distribution of the importance weights can also be applied to analyse the stability of similar slopes. The judgment of the fuzzy comprehensive evaluation is used as the input to a neural network in MATLAB, and the final judgment is then produced by the neural network, which possesses learning ability. Since ANN models have this learning ability, they are of much practical use and can be employed in the further prediction of those slopes that are vulnerable to stability-disturbing factors such as moisture content, as well as inherent rock properties such as cohesion, angle of internal friction and density.


References
1. H. B. Wang, W.Y. Xu and R. C. Xu (2005) Slope stability evaluation using back
propagation neural networks, Engineering Geology, 80(3-4), 302-315.
2. A. T. C. Goh (1999) Genetic algorithm search for critical slip surface in multiple wedge
stability analysis, Can. Geotech. J. 36(2), 382-391.
3. Al-Karni, A. (2000) Study of the effect of soil anisotropy on slope stability using method
of slices, Computers and Geotechnics, 26(2), 83-103.
4. Munoz, A. and Sanz-Bobi, M. A. (1998) An incipient fault detection system based on the
probabilistic radial basis function network: application to the diagnosis of the condenser
of a coal power plant, Neurocomputing, 23, 177-194.
5. Magelssen, G. R. and Elling, J. W. (1997) Chromatography pattern recognition of Aroclors
using iterative probabilistic neural networks, Journal of Chromatography, 775(1-2), 231-242.
6. Shan, Y. and Zhao, R. (2002) Application of probabilistic neural network in the clinical
diagnosis of cancers based on clinical chemistry data, Analytica Chimica Acta, 471(1),
77-86.
7. Tam, C. M. and Tong, T. K. L. (2004) Diagnosis of pre-stressed concrete pile defects
using probabilistic neural networks, Engineering Structures, 26(8), 1155-1162.
8. Singer, D. A. and Bliss, J. D. (2003) Use of a probabilistic neural network to reduce costs
of selecting construction rock, Natural Resources Research, 12(2), 135-140.
9. Specht, D. F. (1990) Probabilistic neural networks, Neural Networks, 3(1), 109-118.
10. Flood, I. (1994) Neural networks in civil engineering. I: Principles and understanding, J.
Comput. Civ. Eng., 8(2), 131-148.
11. Hajmeer, M. and Basheer, I. (2002) A probabilistic neural network approach for modeling
and classification of bacterial growth/no-growth data, J. of Microbiological Methods, 51(2),
217-226.
12. Huang, D. S. and Zhao, W. B. (2005) Determining the centers of radial basis probabilistic
neural networks by recursive orthogonal least square algorithms, Appl. Math.
Computation, 162(1), 461-473.
13. Gomm, J. B. and Yu, D. L. (2000) Selecting radial basis function network centers with
recursive orthogonal least squares training, IEEE Trans. Neural Networks, 11(2), 306-314.
14. Sah, N. K. (1994) Maximum Likelihood Estimation Of Slope Stability, Int. J. Rock Mech.
Min. Sci. & Geomech. Abstr., 31(1), 47-53.

