SPE 59308

IOR Evaluation and Applicability Screening Using Artificial Neural Networks


Leonid Surguchev, PETEC Software & Services AS, Lun Li, RF - Rogaland Research

Copyright 2000, Society of Petroleum Engineers Inc.


This paper was prepared for presentation at the 2000 SPE/DOE Improved Oil Recovery
Symposium held in Tulsa, Oklahoma, 3-5 April 2000.
This paper was selected for presentation by an SPE Program Committee following review of
information contained in an abstract submitted by the author(s). Contents of the paper, as
presented, have not been reviewed by the Society of Petroleum Engineers and are subject to
correction by the author(s). The material, as presented, does not necessarily reflect any
position of the Society of Petroleum Engineers, its officers, or members. Papers presented at
SPE meetings are subject to publication review by Editorial Committees of the Society of
Petroleum Engineers. Electronic reproduction, distribution, or storage of any part of this paper
for commercial purposes without the written consent of the Society of Petroleum Engineers is
prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300
words; illustrations may not be copied. The abstract must contain conspicuous
acknowledgment of where and by whom the paper was presented. Write Librarian, SPE, P.O.
Box 833836, Richardson, TX 75083-3836, U.S.A., fax 01-972-952-9435.

Abstract
The oil industry relies heavily on predictions of the recovery
processes in order to make sound operational decisions for
reservoir exploitation. For planning of IOR applications, one
would like to predict the performance of several competing
strategies before making a decision. The main tool for
performing such predictions is the reservoir simulator. Reservoir
simulators, however, (1) require extensive information about
the reservoir that may not be available or may be unreliable at
the initial evaluation stage, and (2) are computationally
intensive.
Very often, decisions on recovery strategies must be
taken at an early stage of field development planning. A
reliable first-order screening evaluation tool enables the
making of critical decisions on potential recovery strategies
with limited reservoir information. A multicriterion model,
based on the method of Artificial Neural Networks (ANN)
and interval theory, has been applied to the assessment and
screening of EOR/IOR processes, such as water and gas shut-off
methods. The assessment relies on existing industry
experience and field practice at different reservoir conditions.
It is based on applicable ranges for key reservoir parameters
(permeability, porosity, depth, fluid properties, heterogeneity,
rock type, salinity, etc.). Historical data are used for ANN
training by Back Propagation (BP) and Scaled Conjugate
Gradient (SCG) algorithms. The ANN model permits the
integration of different types of data, eliminates arbitrariness
in decision making, and provides fast and accurate
computation. The developed ANN model is capable of
assessing the efficiency of a large number of EOR/IOR and
well shut-off methods even when key reservoir parameters are
deficient or noisy.

IOR Screening
Today the reservoir simulator is the main tool for predicting
the performance of oil recovery strategies. In order to establish
a simulation model of the field, a comprehensive description of
the reservoir and fluids is required. Full-field simulations are
also time-consuming and computationally intensive. In
situations where the available reservoir information is limited
and a fast evaluation of recovery strategies is needed, a
pre-simulation screening tool can be very useful and assist in
the assessment of recovery strategies and IOR methods.
Applicability of IOR methods and recovery strategies at
different reservoir conditions can be assessed based on the
existing field experience and IOR knowledge [1]. The
multicriterion approach, based on the Artificial Neural
Network (ANN) and interval theory, can be used to establish
models for fast IOR applicability evaluation. Provided the
expert system is populated with reliable information, the ANN
tool facilitates quantified assessments of IOR method
efficiency at different reservoir conditions.
ANN
ANN is an information-processing system that has certain
performance characteristics in common with biological neural
networks. A neural network is characterized by a) its pattern
of connections between the neurons, i.e. the network
architecture, b) its method of determining the weights on the
connections, i.e. the training (or learning) algorithm, c) its
activation function.
ANNs have been developed as generalizations of
mathematical models of human cognition or neural biology,
based on the following assumptions:
1. Information processing occurs at many simple elements
called neurons.
2. Signals are passed between neurons over connection links.
3. Each connection link has an associated weight, which, in a
typical neural net, multiplies the signal transmitted.
4. Each neuron applies an activation function to its net input
to determine its output signal.
The neuron has an internal state or activity level, which is
a function of the inputs it has received. Typically, a neuron
sends its activation as a signal to several other neurons. The
common activation function is the logistic sigmoid function:

$$f(x) = \frac{1}{1 + \exp(-x)}$$

The weights represent the information used by the net to
solve a problem.
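As a minimal illustration (a generic sketch, not code from the screening tool itself), the logistic sigmoid and its derivative, which is needed later for the error terms in backpropagation, can be written as:

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid activation: f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    """Derivative f'(x) = f(x) * (1 - f(x)), used in the backpropagation error terms."""
    fx = sigmoid(x)
    return fx * (1.0 - fx)
```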
Architecture
Neural nets are often classified as single-layer or multi-layer.
The arrangement of neurons into layers and the connection
patterns within and between layers is called the net
architecture [2]. Many neural nets have an input layer in which
the activation of each unit is equal to an external input signal.
The widely used multi-layer neural network has one layer of
hidden units, as illustrated in Fig. 1. For solving different
problems it is important to select the number of layers and the
number of neurons within each layer. In determining the number
of layers, the input units are not counted as a layer, because
they perform no computation. Equivalently, the number of
layers in the net can be defined to be the number of layers of
weighted interconnect links between the slabs of neurons.
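To make the architecture concrete, the following sketch computes the forward pass of a net with one hidden layer, using the sigmoid defined above. The 12 inputs and 18 outputs match the screening module described later in this paper; the hidden-layer size and the weight initialization are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 12, 10, 18        # hidden size is an arbitrary choice

V  = rng.normal(scale=0.1, size=(n_in, n_hidden))   # input -> hidden weights v_ij
v0 = np.zeros(n_hidden)                             # hidden biases v_0j
W  = rng.normal(scale=0.1, size=(n_hidden, n_out))  # hidden -> output weights w_jk
w0 = np.zeros(n_out)                                # output biases w_0k

def forward(x):
    """Feed-forward pass; the input layer performs no computation."""
    z = sigmoid(v0 + x @ V)   # hidden activations z_j = f(v_0j + sum_i x_i v_ij)
    y = sigmoid(w0 + z @ W)   # output activations y_k = f(w_0k + sum_j z_j w_jk)
    return z, y
```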
Algorithms
The method of setting the values of the weights (training) is an
important distinguishing characteristic of different neural nets.
Many of the tasks that neural nets can be trained to perform
fall into the areas of mapping, clustering and constrained
optimization. In general, two types of training for neural
networks (supervised and unsupervised) are distinguished. For
the most typical neural nets, training is accomplished by
presenting a sequence of training vectors, or patterns, each
with an associated target output vector. The weights are then
adjusted according to a learning algorithm. This process is
known as supervised training.
In recent years, several adaptive training algorithms for
feed-forward neural networks have been developed. Many of
these algorithms are based on the gradient descent algorithm
that is well known in optimization theory. They usually have a
poor convergence rate and depend on certain parameters
(learning rate and momentum parameter). The values of these
parameters are often crucial for the success of the algorithm.
Here, a new variation of the conjugate gradient method,
Scaled Conjugate Gradient (SCG), was implemented. SCG
avoids the line search per training iteration by using a
Levenberg-Marquardt approach to scale the step size [3].
Another widely used supervised ANN training algorithm is
the Back-Propagation (BP) training method [4]. The training
of a network by the BP algorithm involves three stages: 1) the
feed-forward of the input training pattern, 2) the calculation
and backpropagation of the associated error, and 3) the
adjustment of the weights.
After training, the ANN can be used to solve the particular
problem. For an entered set of data for the input layer, the
result is obtained from the output layer.


ANN Application
Both the BP and SCG methods were implemented to train the
neural networks for the IOR screening module. This example
network consists of one hidden layer, with up to 12 input
neurons and up to 18 output neurons. The input neurons are the
main reservoir parameters and the output neurons are the IOR
methods to be evaluated. The successful applicability level for
IOR methods was set at the 0.7-1.0 interval. The critical
intervals for key reservoir parameters characterizing
applicability conditions for IOR methods are specified for
ANN training [5]. As an example, Table 1 contains
applicability intervals for hydrocarbon gas injection, steam
injection and cyclic waterflooding. In this way, the knowledge
about the IOR methods is provided as input for ANN training.
The user of the program can replace or modify the parameter
intervals and train the neural network again with a new set of
parameters.
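A hedged sketch of how such interval-based training samples could be generated is given below. The steam-injection intervals are taken from Table 1; the sampling scheme itself, and the simplification that a non-applicable sample violates every interval, are our illustrative assumptions rather than the published training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Applicability intervals for steam injection (from Table 1), as (low, high) pairs
steam_intervals = {
    "thickness_m":    (6.0, 20.0),
    "porosity_pct":   (18.0, 35.0),
    "permeability_D": (0.2, 5.0),
    "depth_m":        (30.0, 1000.0),
    "oil_visc_cp":    (50.0, 1500.0),
}

def make_sample(intervals, applicable):
    """Draw one training vector inside (target 1.0) or outside (target 0.0) the intervals."""
    x = []
    for lo, hi in intervals.values():
        if applicable:
            x.append(rng.uniform(lo, hi))          # inside the applicability window
        elif rng.random() < 0.5:
            x.append(rng.uniform(0.0, lo))         # below the window
        else:
            x.append(rng.uniform(hi, 2.0 * hi))    # above the window
    return np.array(x), (1.0 if applicable else 0.0)

# e.g. 300 samples per method, as used for the BP training described below
samples = [make_sample(steam_intervals, rng.random() < 0.5) for _ in range(300)]
```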
BP training: In order to enhance the assessment capability
of the example ANN for each IOR method, 300 samples were
used to train the neural network. After 60,000 iterations the
training was successful, i.e. the total training error reached the
defined minimum (Appendix A).
SCG training: The same samples for each IOR method as
for the BP algorithm were used for training by the SCG method.
The training was successful after 9,000 iterations (Appendix B).
In many cases the SCG method gives better convergence and a
shorter training time than the BP method.
Field Examples
The average reservoir parameters for two field cases are given
in Table 2. Fields A and B are sandstone reservoirs from
the same province with quite similar reservoir properties
except for permeability and oil viscosity.
The ANN models trained by the SCG and BP algorithms were
used to evaluate the applicability of the IOR methods (see Figs.
2 and 3). The difference between the estimates is very small
and results from the differences in convergence achieved by
SCG and BP.
Water-based IOR methods and Water-Alternating-Gas
(WAG) injection methods appear to have the highest potential
for these sandstone fields, A and B (Figs. 2 and 3). Steam
injection and continuous gas injection methods are not
attractive for the given reservoir conditions and can be
excluded from further evaluation.
Conclusions
1. The pre-simulation and screening tool based on the
Artificial Neural Network (ANN) algorithm allows for fast
evaluation of potential IOR methods and their applicability at
different reservoir conditions.
2. The IOR estimates based on the multicriterion ANN
approach are useful at an early or qualifying phase of field
planning and development, when available reservoir data are
limited. The ANN tool assists the decision-making process by
screening the most applicable methods and processes.


Acknowledgements
The authors are grateful to Conoco and Elf for supporting the
work on further development of SWORD programs and thank
Gerard Gaquerel and Trond A. Rolfsvåg for valuable
discussions.

References
1. Surguchev, L. M., Zolotukhin, A. B. and Bratvold, R. B.:
"Reservoir Characterization and Modelling Supported by Expert
Knowledge Systems," SPE 24279, pp. 183-192, Proc. SPE European
Petroleum Computer Conference, Stavanger, May 24-27 (1992).
2. Kalam, M. Z., Al-Alawi, S. M. and Al-Mukheini, M.: "The
Application of Artificial Neural Networks to Reservoir Engineering,"
Symp. on Production Optimization, Paper IATMI 950104, Bandung,
Indonesia (1995).
3. Møller, M. F.: "A Scaled Conjugate Gradient Algorithm for Fast
Supervised Learning," Neural Networks, 6, pp. 525-533 (1993).
4. Hecht-Nielsen, R.: "Theory of the Backpropagation Neural
Network," Proc. of the International Joint Conference on Neural
Networks, 1 (593), IEEE, Washington DC (1989).
5. Reich, E. M., Li, L. and Surguchev, L. M.: "Screening of IOR
Methods Using a Fast Pre-Simulation Tool," Paper 097, Proc.
European IOR Symposium, Brighton, August 1999.

Appendix A: BP Training Algorithm

The training of a network by the BP algorithm is as follows:
STEP 0. Initialize weights (using small random values).
STEP 1. While the stopping condition is false, do Steps 2-8.
STEP 2. For each training pair, do Steps 3-7.
Feedforward:
STEP 3. Each input unit ($X_i$) receives input signal $x_i$ and broadcasts this signal to all units in the layer above (the hidden units).
STEP 4. Each hidden unit ($Z_j$) sums its weighted input signals, $z\_in_j = v_{0j} + \sum_i x_i v_{ij}$, applies its activation function to compute its output signal, $z_j = f(z\_in_j)$, and sends this signal to all units in the layer above (the output units).
STEP 5. Each output unit ($Y_k$) sums its weighted input signals, $y\_in_k = w_{0k} + \sum_j z_j w_{jk}$, and applies its activation function to compute its output signal, $y_k = f(y\_in_k)$.
Backpropagation of error:
STEP 6. Each output unit ($Y_k$) receives a target pattern corresponding to the input training pattern, computes its error information term, $\delta_k = (t_k - y_k)\, f'(y\_in_k)$, calculates its weight correction term used to update $w_{jk}$, $w_{jk}(K+1) = w_{jk}(K) + \alpha \delta_k z_j + \mu \left[ w_{jk}(K) - w_{jk}(K-1) \right]$, calculates its bias correction term used to update $w_{0k}$, $w_{0k}(K+1) = w_{0k}(K) + \alpha \delta_k + \mu \left[ w_{0k}(K) - w_{0k}(K-1) \right]$, and sends $\delta_k$ to the units in the layer below.
STEP 7. Each hidden unit ($Z_j$) sums its delta inputs from the units in the layer above, $\delta\_in_j = \sum_{k=1}^{m} \delta_k w_{jk}$, multiplies by the derivative of its activation function to calculate its error information term, $\delta_j = \delta\_in_j \, f'(z\_in_j)$, calculates its weight correction term used to update $v_{ij}$, $v_{ij}(K+1) = v_{ij}(K) + \alpha \delta_j x_i + \mu \left[ v_{ij}(K) - v_{ij}(K-1) \right]$, and calculates its bias correction term used to update $v_{0j}$, $v_{0j}(K+1) = v_{0j}(K) + \alpha \delta_j + \mu \left[ v_{0j}(K) - v_{0j}(K-1) \right]$.
STEP 8. Test the stopping condition. In general, the stopping condition is defined as follows: the error $E = 0.5 \sum_k (t_k - y_k)^2$ to be minimized has fallen below a defined value (for example, E = 0.0001), or a fixed number of iterative steps has been reached.

After training, application of the net involves only the
computations of the feedforward phase.
Nomenclature
The nomenclature we use in the training algorithm for the backpropagation net is as follows:
$x$: input training vector, $x = (x_1, \ldots, x_n)$
$t$: output target vector, $t = (t_1, \ldots, t_m)$
$\delta_k$: portion of the error correction weight adjustment for $w_{jk}$ that is due to an error at output unit $Y_k$; also, the information about the error at unit $Y_k$ that is propagated back to the hidden units that feed into unit $Y_k$
$\delta_j$: portion of the error correction weight adjustment for $v_{ij}$ that is due to the backpropagation of error information from the output layer to hidden unit $Z_j$
$\alpha$, $\mu$: learning rate and momentum parameter
$K$: iteration counter
$x_i$: input unit $i$
$v_{0j}$: bias on hidden unit $j$
$Z_j$: hidden unit $j$; the net input to $Z_j$ is denoted $z\_in_j = v_{0j} + \sum_i x_i v_{ij}$, and the output signal (activation) of $Z_j$ is denoted $z_j = f(z\_in_j)$
$w_{0k}$: bias on output unit $k$
$Y_k$: output unit $k$; the net input to $Y_k$ is denoted $y\_in_k = w_{0k} + \sum_j z_j w_{jk}$, and the output signal (activation) of $Y_k$ is denoted $y_k = f(y\_in_k)$
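For readers who prefer code, here is a minimal NumPy sketch of one BP training epoch implementing Steps 3-8 above. This is our generic illustration, not the paper's implementation; the learning rate and momentum values are arbitrary, and momentum on the bias terms is omitted for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_epoch(X, T, V, v0, W, w0, alpha=0.5, mu=0.9):
    """One backpropagation epoch with momentum; updates weights in place.

    X: (P, n) input patterns; T: (P, m) target patterns.
    V, v0: input->hidden weights/biases; W, w0: hidden->output weights/biases.
    """
    dV_prev, dW_prev = np.zeros_like(V), np.zeros_like(W)
    total_error = 0.0
    for x, t in zip(X, T):
        # Feedforward (Steps 3-5)
        z = sigmoid(v0 + x @ V)
        y = sigmoid(w0 + z @ W)
        # Backpropagation of error (Steps 6-7); f'(u) = f(u)(1 - f(u)) for the sigmoid
        delta_k = (t - y) * y * (1.0 - y)          # output error terms
        delta_j = (W @ delta_k) * z * (1.0 - z)    # hidden error terms
        # Weight corrections with momentum (the mu term carries the previous change)
        dW = alpha * np.outer(z, delta_k) + mu * dW_prev
        dV = alpha * np.outer(x, delta_j) + mu * dV_prev
        W += dW; w0 += alpha * delta_k
        V += dV; v0 += alpha * delta_j
        dW_prev, dV_prev = dW, dV
        total_error += 0.5 * np.sum((t - y) ** 2)  # E = 0.5 sum_k (t_k - y_k)^2 (Step 8)
    return total_error
```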

Appendix B: SCG Training Algorithm

Since the main difference between the SCG method and the BP method is in the weight update, we give the weight update method of SCG below.

Let $w_1$ be the weight vector defined as $w_1 = (\ldots, w_{ij}, \ldots)$, where $w_{ij}$ is the weight connecting node $i$ in layer $l$ to node $j$ in layer $l+1$.

STEP 1. Choose weight vector $w_1$ and scalars $0 < \sigma \le 10^{-4}$, $0 < \lambda_1 \le 10^{-6}$, $\bar{\lambda}_1 = 0$. Set $p_1 = r_1 = -E'(w_1)$, $k = 1$ and success = true.
STEP 2. If success = true, then calculate second order information:
$\sigma_k = \sigma / |p_k|$,
$s_k = \left[ E'(w_k + \sigma_k p_k) - E'(w_k) \right] / \sigma_k$,
$\delta_k = p_k^T s_k$.
STEP 3. Scale $\delta_k$: $\delta_k = \delta_k + (\lambda_k - \bar{\lambda}_k) |p_k|^2$.
STEP 4. If $\delta_k \le 0$, then make the Hessian matrix positive definite:
$\bar{\lambda}_k = 2 \left( \lambda_k - \delta_k / |p_k|^2 \right)$,
$\delta_k = -\delta_k + \lambda_k |p_k|^2$,
$\lambda_k = \bar{\lambda}_k$.
STEP 5. Calculate the step size:
$\mu_k = p_k^T r_k$,
$\alpha_k = \mu_k / \delta_k$.
STEP 6. Calculate the comparison parameter:
$\Delta_k = 2 \delta_k \left[ E(w_k) - E(w_k + \alpha_k p_k) \right] / \mu_k^2$.
STEP 7. If $\Delta_k \ge 0$, then a successful reduction in error can be made:
$w_{k+1} = w_k + \alpha_k p_k$, $r_{k+1} = -E'(w_{k+1})$, $\bar{\lambda}_k = 0$, success = true.
If $k \bmod N = 0$, then restart the algorithm: $p_{k+1} = r_{k+1}$,
else: $\beta_k = \left( |r_{k+1}|^2 - r_{k+1}^T r_k \right) / \mu_k$ and $p_{k+1} = r_{k+1} + \beta_k p_k$.
If $\Delta_k \ge 0.75$, then reduce the scale parameter: $\lambda_k = \frac{1}{4} \lambda_k$,
else: $\bar{\lambda}_k = \lambda_k$, success = false.
STEP 8. If $\Delta_k < 0.25$, then increase the scale parameter: $\lambda_k = \lambda_k + \delta_k (1 - \Delta_k) / |p_k|^2$.
STEP 9. If the steepest descent direction $r_k \ne 0$, then set $k = k + 1$ and go to STEP 2, else terminate and return $w_{k+1}$ as the desired minimum.

Nomenclature
The nomenclature we use in the training algorithm for SCG is as follows:
$N$: number of weights and biases
$P$: number of patterns in the training set
$E(w)$: global error function
$E'(w)$: gradient of the global error function
$w_k$: weight vector
$k$: iteration counter
$p_k$: conjugate search direction vector
$r_k$: steepest descent (negative gradient) vector
$\alpha_k$: step size
$\sigma_k$: step size used for the second order estimate
$\sigma$: scalar
$\lambda_k$: scale parameter
$\bar{\lambda}_k$: scalar
$\beta_k$: scalar
$s_k$: second order vector
$\delta_k$: second order scalar
$\Delta_k$: comparison parameter
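The weight update above can be sketched compactly in code. The following is our generic illustration of Møller's SCG algorithm for an arbitrary error function E with gradient dE, not the paper's own implementation; the convergence tolerance and the iteration cap are arbitrary choices:

```python
import numpy as np

def scg(E, dE, w, max_iter=500, sigma=1e-5, lam=1e-6, tol=1e-8):
    """Scaled Conjugate Gradient minimization of error function E with gradient dE."""
    N = w.size                     # restart interval = number of weights
    lam_bar = 0.0
    r = -dE(w)                     # steepest descent direction r_k
    p = r.copy()                   # conjugate search direction p_k
    success = True
    delta = 0.0
    for k in range(1, max_iter + 1):
        if success:                                 # Step 2: second order information
            sigma_k = sigma / np.linalg.norm(p)
            s = (dE(w + sigma_k * p) - dE(w)) / sigma_k
            delta = p @ s
        p_norm2 = p @ p
        delta += (lam - lam_bar) * p_norm2          # Step 3: scale delta
        if delta <= 0.0:                            # Step 4: force positive definiteness
            lam_bar = 2.0 * (lam - delta / p_norm2)
            delta = -delta + lam * p_norm2
            lam = lam_bar
        mu = p @ r                                  # Step 5: step size
        alpha = mu / delta
        Delta = 2.0 * delta * (E(w) - E(w + alpha * p)) / mu**2   # Step 6
        if Delta >= 0.0:                            # Step 7: successful error reduction
            w = w + alpha * p
            r_new = -dE(w)
            lam_bar = 0.0
            success = True
            if k % N == 0:                          # restart in steepest descent direction
                p = r_new.copy()
            else:
                beta = (r_new @ r_new - r_new @ r) / mu
                p = r_new + beta * p
            r = r_new
            if Delta >= 0.75:
                lam *= 0.25
        else:
            lam_bar = lam
            success = False
        if Delta < 0.25:                            # Step 8: increase scale parameter
            lam += delta * (1.0 - Delta) / p_norm2
        if np.linalg.norm(r) < tol:                 # Step 9: converged
            break
    return w
```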

TABLE 1 - RESERVOIR PARAMETERS AND APPLICABILITY INTERVALS FOR SOME IOR METHODS

Parameter                  | Gas injection | Steam injection | Cyclic waterflooding
Thickness, m               | 2-25          | 6-20            | 2-20
Porosity, %                | 5-35          | 18-35           | 10-40
Permeability, D            | 0.001-0.15    | 0.2-5           | 0.1-5
Initial oil saturation, %  | 60-100        | 70-100          | 40-100
Reservoir depth, m         | 1600-5000     | 30-1000         | 30-3500
Oil viscosity, cp          | 0.4-15        | 50-1500         | 0.4-25
Reservoir pressure, MPa    | 20-60         | 5-20            | 5-60
Net-gross ratio            | 0.7-1         | 0.5-1           | 0.3-1
Initial temperature, °C    | 10-250        | 10-250          | 80-500
Anisotropy, frac.          | 0.7-1         | 0.6-1           | 0.1-1
Mineralization, g/l        | 0-350         | 0-350           | 0-350
Oil density, kg/m3         | 800-900       | 750-850         | 650-850

TABLE 2 - FIELD EXAMPLES

Parameter                  | Field A | Field B
Thickness, m               | 15      | 12
Porosity, %                | 20      | 20
Permeability, D            | 0.5     | 0.2
Initial oil saturation, %  | 73      | 89
Reservoir depth, m         | 1800    | 1800
Oil viscosity, cp          | 12.2    | 8
Reservoir pressure, MPa    | 18      | 18
Clay content, %            | 10      | 10
Initial temperature, K     | 314     | 313
Salinity of water, g/l     | 20      | 15
Mineralization, g/l        | 350     | 300
Oil density, kg/m3         | 850     | 850
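As a simple worked illustration of the interval-based screening idea (our sketch, not the SWORD implementation), the Field A parameters from Table 2 can be checked against the applicability intervals from Table 1:

```python
# Applicability intervals from Table 1 (subset of parameters)
intervals = {
    "steam injection": {"thickness_m": (6, 20), "porosity_pct": (18, 35),
                        "permeability_D": (0.2, 5), "depth_m": (30, 1000),
                        "oil_visc_cp": (50, 1500)},
    "cyclic waterflooding": {"thickness_m": (2, 20), "porosity_pct": (10, 40),
                             "permeability_D": (0.1, 5), "depth_m": (30, 3500),
                             "oil_visc_cp": (0.4, 25)},
}

# Field A parameters from Table 2
field_a = {"thickness_m": 15, "porosity_pct": 20, "permeability_D": 0.5,
           "depth_m": 1800, "oil_visc_cp": 12.2}

for method, windows in intervals.items():
    # Fraction of parameters falling inside the applicability window
    hits = [lo <= field_a[p] <= hi for p, (lo, hi) in windows.items()]
    print(f"{method}: {sum(hits)}/{len(hits)} parameters in range")

# Steam injection fails on depth (1800 m > 1000 m) and viscosity (12.2 cp < 50 cp);
# cyclic waterflooding passes all five checks, consistent with Figs. 2 and 3.
```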

[Fig. 1 - The ANN model: input layer, hidden layer and output layer.]

[Fig. 2 - IOR methods applicability for oil field example A. Comparison of SCG and BP training algorithms. Y-axis: applicability level, fraction; x-axis: IOR methods, including waterflood, air injection, CO2/WAG, steam injection, acid well treatment, hot water injection, CO2 injection, WAG, polymer flood, surfactant flood, cyclic waterflood, gas injection, enriched gas injection and solvent flood.]

[Fig. 3 - IOR methods applicability for oil field example B. Comparison of SCG and BP training algorithms. Axes as in Fig. 2.]