
Acta Math Vietnam

DOI 10.1007/s40306-015-0150-z

A Modified Extragradient Method


for Infinite-Dimensional Variational Inequalities

Pham Duy Khanh1

Received: 8 September 2014 / Accepted: 19 November 2014


© Institute of Mathematics, Vietnam Academy of Science and Technology (VAST) and Springer
Science+Business Media Singapore 2015

Abstract A modified form of the extragradient method for solving infinite-dimensional


variational inequalities is considered. The weak convergence and the strong convergence
for the iterative sequence generated by this method are studied. We also propose several
examples to analyze the obtained results.

Keywords Modified extragradient method · Variant stepsizes · Pseudomonotone mapping · Monotone mapping · Lipschitz continuity

Mathematics Subject Classification (2010) 47J20 · 49J40 · 49M30

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$, and $K$ a nonempty closed convex subset of $H$. For a constant $L > 0$, a mapping $F : K \to H$ is said to be $L$-Lipschitz continuous on $K$ if
$$\|F(u) - F(v)\| \le L\|u - v\| \quad \forall u, v \in K. \tag{1}$$
The variational inequality problem defined by $K$ and $F$, which is denoted by VI$(K, F)$, is that of finding a vector $u^* \in K$ such that
$$\langle F(u^*), u - u^* \rangle \ge 0 \quad \forall u \in K. \tag{2}$$

The author is funded by Vietnam National Foundation for Science and Technology Development
(NAFOSTED) under grant number 101.01-2014.56.

 Pham Duy Khanh


pdkhanh182@gmail.com

1 Department of Mathematics, Ho Chi Minh City University of Pedagogy, 280 An Duong Vuong,
Ho Chi Minh, Vietnam

The solution set of (2) is abbreviated to Sol$(K, F)$. As usual, we say that $F$ is monotone on $K$ if
$$\langle F(u) - F(v), u - v \rangle \ge 0 \quad \forall u, v \in K. \tag{3}$$
If the property (3) is replaced by the weaker one
$$\langle F(u), v - u \rangle \ge 0 \implies \langle F(v), v - u \rangle \ge 0 \quad (u, v \in K), \tag{4}$$
then $F$ is said to be pseudomonotone (in Karamardian's sense) on $K$.
Problem (2) includes such important models in applied mathematics as systems of nonlinear equations, necessary optimality conditions for optimization problems, and complementarity problems. It is well known that (2) has a tight connection with fixed point problems. For solving VI$(K, F)$, one may use projection methods that employ the metric projection onto the feasible set $K$. In particular, to deal with VI$(K, F)$ in the finite-dimensional Euclidean space $\mathbb{R}^n$, the extragradient method (EGM for short) was proposed by Korpelevich [8]. Let us recall the EGM and its basic convergence property in the following statement.

Theorem 1.1 (see [8, Theorem 2]) Suppose that (2) has a solution, and the mapping $F : K \to \mathbb{R}^n$ is monotone and $L$-Lipschitz continuous on $K$. Let $\{u^k\}$ be the sequence generated by the two-step computation
$$\bar u^k = P_K\big(u^k - \tau F(u^k)\big), \qquad u^{k+1} = P_K\big(u^k - \tau F(\bar u^k)\big),$$
where $u^0 \in K$ is an arbitrary initial point and $\tau \in (0, 1/L)$ is a fixed constant. Here, $P_K(u) = \{x \in K : \|u - x\| = \inf_{y \in K} \|u - y\|\}$ denotes the metric projection of $u$ on $K$. Then the sequence $\{u^k\}$ converges to some $u^* \in$ Sol$(K, F)$.
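The two-step computation in Theorem 1.1 is easy to sketch numerically. The following Python sketch is illustrative only: the affine map $F$, the nonnegative-orthant constraint, and all parameter values are hypothetical choices, not data from [8].

```python
import numpy as np

def extragradient(F, proj_K, u0, tau, iters=2000):
    """Classical EGM of Theorem 1.1: two projections per iteration, fixed stepsize tau."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        u_bar = proj_K(u - tau * F(u))   # predictor: u_bar^k = P_K(u^k - tau F(u^k))
        u = proj_K(u - tau * F(u_bar))   # corrector: u^{k+1} = P_K(u^k - tau F(u_bar^k))
    return u

# Hypothetical test data: K = R^2_+, F(u) = A u + b, monotone with L = ||A||
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
F = lambda u: A @ u + b
proj = lambda u: np.maximum(u, 0.0)      # metric projection onto the orthant
L = np.linalg.norm(A, 2)                 # spectral norm; L = 3 here
u_star = extragradient(F, proj, [1.0, 1.0], tau=0.9 / L)   # tau in (0, 1/L)
```

Since $A$ is symmetric positive definite, $F$ is monotone and $L$-Lipschitz with $L = \|A\|$; the unique solution $(1/3, 1/3)$ lies in the interior of $K$, so it is simply the zero of $F$.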

It is a known fact [3, Theorem 12.1.11] that the above monotonicity assumption on F
can be replaced by the weaker pseudomonotonicity described in (4).
For monotone variational inequalities, the Tikhonov regularization method (TRM for
brevity; see, e.g., [3, 17, 18]) and the proximal point algorithm (PPA for short; see, e.g., [10,
14, 17]) have proved to be efficient solution methods.
For pseudomonotone variational inequalities, as shown by Tam et al. [17, p. 260 and p. 265], it may happen that every regularized problem generated by TRM (resp., every auxiliary problem generated by PPA) is non-pseudomonotone. This means that the regularization procedures performed in TRM and PPA may completely destroy the pseudomonotonicity structure of the original problem and bring in auxiliary problems which are more difficult than the original one. A natural question arises: is there an algorithm that can solve pseudomonotone variational inequalities in an efficient way? To our knowledge, the above-mentioned EGM is such an algorithm.
Motivated by the idea of two-step projection due to Korpelevich, the present paper considers a modified extragradient method with variant stepsizes (modified EGM for brevity) in infinite-dimensional Hilbert spaces. This modification of the original EGM is aimed at increasing the computational flexibility and thereby improving the convergence rate of the iteration process; see, e.g., [14] for an iterative scheme with variant stepsizes. We will obtain some results on the weak convergence and the strong convergence of the iterative sequence generated by the modified EGM. Then the results are analyzed through several examples. Relationships between the boundedness of the iterative sequences and the solution existence for the given problem are investigated in detail. Moreover, we show that exact bounds for the segment containing the variant stepsizes can be established.

While giving due credit to the advantages of TRM and PPA, which require only the weak continuity [17, p. 255] of $F$ for deriving convergence theorems (see, e.g., [17, Theorems 2.3 and 3.1]), we emphasize that the convergence theorems of the modified EGM given below rely heavily on the Lipschitz continuity of $F$ (even in the monotone case!).
Turning back to the EGM and the modified EGM, we would like to make the following remark. If the Lipschitz constant $L$ does not exist or cannot be found explicitly, then one should use the method of [6] (see also [5, 9, 11, 12, 15, 16, 19]). If $L$ exists and can be found explicitly (by using the mean value theorem for vector-valued functions [4, p. 27] for the concrete forms of $F$ and $K$), then computations on the basis of the original EGM [8] or the modified EGM suggested in this paper (Algorithm 1) are much simpler and hence preferable.
Section 2 describes the modified EGM with variant stepsizes. Section 3 establishes the
strong convergence of the iterative sequences under the pseudomonotonicity of F . Exact
bounds for the region of variant stepsizes are obtained in Section 4. The weak convergence
of the iterative sequences under monotonicity is studied in Section 5. The final section gives
some remarks and questions for further development of the modified EGM.

2 Modified Extragradient Method

Employing some ideas given in [1, 13], we consider the following modified EGM, which requires that the Lipschitz condition (1) is fulfilled for some $L > 0$.

Algorithm 1 Choose an initial point $u^0 \in K$ and a sequence of stepsizes $\{\lambda_k\} \subset [a, b] \subset (0, 1/L)$. For $k = 0, 1, 2, \ldots$, compute
$$\bar u^k = P_K\big(u^k - \lambda_k F(u^k)\big), \qquad u^{k+1} = P_K\big(u^k - \lambda_k F(\bar u^k)\big). \tag{5}$$

Remark 2.1 Unlike the classical EGM due to Korpelevich, the stepsize $\lambda_k$ in each iteration of this modified EGM varies from step to step. The numbers $\lambda_k$, $k \in \mathbb{N}$, are contained in a closed interval $[a, b] \subset (0, \frac{1}{L})$. When $a = b = \lambda$, that is, when the stepsize is fixed, the modified EGM collapses to the classical EGM already recalled in Theorem 1.1.
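Under the assumption that Algorithm 1 is the two-projection scheme with stepsizes $\lambda_k \in [a, b] \subset (0, 1/L)$ described in Remark 2.1, a minimal Python sketch reads as follows; the scalar test problem at the bottom ($K = [0, +\infty)$, $F(u) = u - 1$) is a hypothetical illustration, not taken from the paper.

```python
import numpy as np

def modified_egm(F, proj_K, u0, stepsize, iters=2000):
    """Modified EGM sketch: stepsize(k) returns lambda_k, allowed to vary in [a, b]."""
    u = np.asarray(u0, dtype=float)
    for k in range(iters):
        lam = stepsize(k)
        u_bar = proj_K(u - lam * F(u))   # u_bar^k = P_K(u^k - lambda_k F(u^k))
        u = proj_K(u - lam * F(u_bar))   # u^{k+1} = P_K(u^k - lambda_k F(u_bar^k))
    return u

# Hypothetical scalar VI: K = [0, +inf), F(u) = u - 1, so L = 1 and Sol(K, F) = {1}
F = lambda u: u - 1.0
proj = lambda u: np.maximum(u, 0.0)
lam = lambda k: 0.25 if k % 2 == 0 else 0.75   # {lambda_k} in [1/4, 3/4], a subset of (0, 1/L)
u = modified_egm(F, proj, 2.0, lam)            # converges to the solution 1
```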

3 Strong Convergence Under Pseudomonotonicity


 
We now establish some basic properties of Algorithm 1. Let $\{u^k\}$ be an iterative sequence generated by that algorithm.
The next lemma will be of frequent use in the sequel. The proof scheme is adapted from the original one in [8].

Lemma 3.1 Let $K$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F$ be a pseudomonotone $L$-Lipschitz continuous mapping of $K$ into $H$. Suppose that $u^*$ is a solution of VI$(K, F)$. For every $k \in \mathbb{N}$, it holds that
$$\|u^{k+1} - u^*\|^2 \le \|u^k - u^*\|^2 - \big(1 - \lambda_k^2 L^2\big)\|u^k - \bar u^k\|^2. \tag{6}$$

Proof By the variational characterization of the metric projection [7, Theorem 2.3 on p. 9], for every $u \in H$, we have
$$\langle u - P_K(u), v - P_K(u) \rangle \le 0 \quad \forall v \in K. \tag{7}$$
Since
$$\|u - v\|^2 = \|(u - P_K(u)) - (v - P_K(u))\|^2 = \|u - P_K(u)\|^2 + \|v - P_K(u)\|^2 - 2\langle u - P_K(u), v - P_K(u) \rangle,$$
by (7), we get
$$\|u - v\|^2 \ge \|u - P_K(u)\|^2 + \|v - P_K(u)\|^2 \quad \forall v \in K,\ u \in H.$$
Hence,
$$\|v - P_K(u)\|^2 \le \|u - v\|^2 - \|u - P_K(u)\|^2 \quad \forall v \in K,\ u \in H. \tag{8}$$
Substituting $v = u^* \in K$, $u = u^k - \lambda_k F(\bar u^k)$ into (8) and using the relation $u^{k+1} = P_K\big(u^k - \lambda_k F(\bar u^k)\big)$ in (5) yield
$$\begin{aligned}
\|u^{k+1} - u^*\|^2 &\le \|u^k - \lambda_k F(\bar u^k) - u^*\|^2 - \|u^k - \lambda_k F(\bar u^k) - u^{k+1}\|^2\\
&= \|(u^k - u^*) - \lambda_k F(\bar u^k)\|^2 - \|(u^k - u^{k+1}) - \lambda_k F(\bar u^k)\|^2\\
&= \|u^k - u^*\|^2 - \|u^k - u^{k+1}\|^2 + 2\lambda_k \langle F(\bar u^k), u^* - u^{k+1} \rangle. \tag{9}
\end{aligned}$$

Since $u^* \in$ Sol$(K, F)$, we have $\langle F(u^*), u - u^* \rangle \ge 0$ for all $u \in K$. Thus, by the pseudomonotonicity of $F$, we get $\langle F(u), u - u^* \rangle \ge 0$ for all $u \in K$. For $u = \bar u^k \in K$, the last inequality implies that $\langle F(\bar u^k), u^* - \bar u^k \rangle \le 0$. Hence,
$$\langle F(\bar u^k), u^* - u^{k+1} \rangle = \langle F(\bar u^k), u^* - \bar u^k \rangle + \langle F(\bar u^k), \bar u^k - u^{k+1} \rangle \le \langle F(\bar u^k), \bar u^k - u^{k+1} \rangle. \tag{10}$$

By (9) and (10), we have
$$\begin{aligned}
\|u^{k+1} - u^*\|^2 &\le \|u^k - u^*\|^2 - \|u^k - u^{k+1}\|^2 + 2\lambda_k \langle F(\bar u^k), \bar u^k - u^{k+1} \rangle\\
&= \|u^k - u^*\|^2 - \|(u^k - \bar u^k) + (\bar u^k - u^{k+1})\|^2 + 2\lambda_k \langle F(\bar u^k), \bar u^k - u^{k+1} \rangle\\
&= \|u^k - u^*\|^2 - \|u^k - \bar u^k\|^2 - \|\bar u^k - u^{k+1}\|^2 - 2\langle u^k - \bar u^k, \bar u^k - u^{k+1} \rangle + 2\lambda_k \langle F(\bar u^k), \bar u^k - u^{k+1} \rangle\\
&= \|u^k - u^*\|^2 - \|u^k - \bar u^k\|^2 - \|\bar u^k - u^{k+1}\|^2 + 2\big\langle u^k - \bar u^k - \lambda_k F(\bar u^k), u^{k+1} - \bar u^k \big\rangle. \tag{11}
\end{aligned}$$

To estimate the last scalar product, we break it into the sum of two other scalar products and note that (1) the nonpositiveness of the first scalar product follows from (7) for $u = u^k - \lambda_k F(u^k)$, $v = u^{k+1}$, and from the relation $\bar u^k = P_K\big(u^k - \lambda_k F(u^k)\big)$; (2) the second scalar product can be evaluated by the Schwarz inequality and (1). Namely, we have
$$\begin{aligned}
\big\langle u^k - \bar u^k - \lambda_k F(\bar u^k), u^{k+1} - \bar u^k \big\rangle &= \big\langle u^k - \lambda_k F(u^k) - \bar u^k, u^{k+1} - \bar u^k \big\rangle + \big\langle \lambda_k F(u^k) - \lambda_k F(\bar u^k), u^{k+1} - \bar u^k \big\rangle\\
&\le \lambda_k \big\langle F(u^k) - F(\bar u^k), u^{k+1} - \bar u^k \big\rangle\\
&\le \lambda_k \|F(u^k) - F(\bar u^k)\|\,\|u^{k+1} - \bar u^k\|\\
&\le \lambda_k L \|u^k - \bar u^k\|\,\|u^{k+1} - \bar u^k\|. \tag{12}
\end{aligned}$$


Combining (11) with (12) gives
$$\|u^{k+1} - u^*\|^2 \le \|u^k - u^*\|^2 - \|u^k - \bar u^k\|^2 - \|\bar u^k - u^{k+1}\|^2 + 2\lambda_k L \|u^k - \bar u^k\|\,\|u^{k+1} - \bar u^k\|.$$
Since
$$2\lambda_k L \|u^k - \bar u^k\|\,\|u^{k+1} - \bar u^k\| \le \lambda_k^2 L^2 \|u^k - \bar u^k\|^2 + \|u^{k+1} - \bar u^k\|^2,$$
it follows that
$$\begin{aligned}
\|u^{k+1} - u^*\|^2 &\le \|u^k - u^*\|^2 - \|u^k - \bar u^k\|^2 - \|\bar u^k - u^{k+1}\|^2 + \lambda_k^2 L^2 \|u^k - \bar u^k\|^2 + \|u^{k+1} - \bar u^k\|^2\\
&= \|u^k - u^*\|^2 - \big(1 - \lambda_k^2 L^2\big)\|u^k - \bar u^k\|^2.
\end{aligned}$$
The proof is complete.

Now, we can state the main result of this section.

Theorem 3.2 Let $K$ be a nonempty closed convex set in a Hilbert space $H$. Suppose that $F : K \to H$ is pseudomonotone and $L$-Lipschitz continuous on $K$. Let $\{u^k\}$ be a sequence generated by Algorithm 1. If Sol$(K, F)$ is nonempty, then $\{u^k\}$ is a bounded sequence. Moreover, if there exists a subsequence $\{u^{k_i}\}$ of $\{u^k\}$ converging strongly to some $\bar u \in K$, then $\bar u \in$ Sol$(K, F)$ and the whole sequence $\{u^k\}$ converges strongly to $\bar u$.

Proof Let $u^*$ be an arbitrary element of Sol$(K, F)$. For each $k \in \mathbb{N}$, we put
$$\gamma_k := 1 - \lambda_k^2 L^2. \tag{13}$$
Since $0 < a \le \lambda_k \le b < \frac{1}{L}$, it holds that
$$0 < 1 - b^2 L^2 \le \gamma_k \le 1 - a^2 L^2 < 1. \tag{14}$$
By Lemma 3.1 and the condition $\gamma_k > 0$, we can infer that the sequence $\{\|u^k - u^*\|\}$ is monotonically decreasing and therefore $\{u^k\}$ is bounded. Moreover, $\{\|u^k - u^*\|\}$ is a convergent sequence. The first assertion of the theorem has been proved.
To prove the second assertion, suppose that there is a subsequence $\{u^{k_i}\}$ of $\{u^k\}$ converging in norm to an element $\bar u \in K$. Let us show that $\bar u \in$ Sol$(K, F)$. By (6) and (13),
$$\gamma_k \|u^k - \bar u^k\|^2 \le \|u^k - u^*\|^2 - \|u^{k+1} - u^*\|^2 \quad \forall k \in \mathbb{N}. \tag{15}$$

For each $n \in \mathbb{N}$, adding the first $(n + 1)$ inequalities in (15), we get
$$\sum_{k=0}^{n} \gamma_k \|u^k - \bar u^k\|^2 \le \|u^0 - u^*\|^2 - \|u^{n+1} - u^*\|^2 \le \|u^0 - u^*\|^2.$$
Hence,
$$\sum_{k=0}^{\infty} \gamma_k \|u^k - \bar u^k\|^2 \le \|u^0 - u^*\|^2.$$
This yields
$$\lim_{k \to \infty} \gamma_k \|u^k - \bar u^k\|^2 = 0.$$
Since $\gamma_k \ge 1 - b^2 L^2 > 0$ by (14), it follows that
$$\lim_{k \to \infty} \|u^k - \bar u^k\|^2 = 0. \tag{16}$$
As the subsequence $\{u^{k_i}\}$ converges in norm to $\bar u \in K$, (16) implies that $\{\bar u^{k_i}\}$ also converges in norm to $\bar u$.
From the inclusion $\{\lambda_{k_i}\} \subset [a, b]$, it follows that $\{\lambda_{k_i}\}$ has a convergent subsequence. Without loss of generality, we may assume that $\{\lambda_{k_i}\}$ converges to some $\lambda \ge a > 0$. By (5) and by the continuity of $F$ and $P_K(\cdot)$,
$$\bar u = \lim_{i \to \infty} \bar u^{k_i} = \lim_{i \to \infty} P_K\big(u^{k_i} - \lambda_{k_i} F(u^{k_i})\big) = P_K\big(\bar u - \lambda F(\bar u)\big).$$
The equality $\bar u = P_K(\bar u - \lambda F(\bar u))$ and a well-known characterization of the metric projection [7, Theorem 2.3 on p. 9] imply [7, the proof of Theorem 3.1 on p. 12] that $\bar u \in$ Sol$(K, F)$.
It remains to show that the whole sequence $\{u^k\}$ converges strongly to $\bar u$. Applying Lemma 3.1 with $\bar u$ in place of $u^*$, we deduce that the sequence $\{\|u^k - \bar u\|\}$ is monotonically decreasing and therefore converges. Since
$$\lim_{k \to \infty} \|u^k - \bar u\| = \lim_{i \to \infty} \|u^{k_i} - \bar u\| = 0,$$
$\{u^k\}$ converges strongly to $\bar u$.
 
Remark 3.3 Under the assumptions of Theorem 3.2, the existence of subsequences of $\{u^k\}$ converging in norm to different points of $K$ is excluded. This property is useful for checking the correctness of computations made by electronic equipment.

If $\dim H < +\infty$, say $H = \mathbb{R}^n$, then the convergence theorem can be stated as follows.

Theorem 3.4 Suppose that $K \subset \mathbb{R}^n$ is a nonempty closed convex set and $F : K \to \mathbb{R}^n$ is pseudomonotone and $L$-Lipschitz continuous on $K$. If Sol$(K, F)$ is nonempty and $\{u^k\}$ is a sequence generated by Algorithm 1, then there exists $u^* \in$ Sol$(K, F)$ such that $u^k \to u^*$.

Proof It suffices to apply Theorem 3.2 and the fact that any bounded sequence in $\mathbb{R}^n$ has a convergent subsequence.

Let us consider an illustrative example for Theorems 3.2 and 3.4.



Example 3.5 Consider the problem VI$(K, F)$ with $K = \mathbb{R}$ and $F(x) = x$. We can easily check that $F$ is $L$-Lipschitz continuous with $L = 1$ and monotone on $K$, and that Sol$(K, F) = \{0\}$. Choose $u^0 = 1 \in K$ and the real sequence $\{\lambda_k\} \subset [\frac14, \frac34] \subset (0, 1)$ as follows:
$$\lambda_k = \begin{cases} \frac14 & \text{if } k \text{ is even},\\[2pt] \frac34 & \text{if } k \text{ is odd}. \end{cases} \tag{17}$$
The iterative sequence $\{u^k\}$ defined by (5) is given by
$$\begin{aligned}
u^{k+1} &= P_K\Big(u^k - \lambda_k F\big(P_K(u^k - \lambda_k F(u^k))\big)\Big)\\
&= u^k - \lambda_k F\big(u^k - \lambda_k F(u^k)\big)\\
&= u^k - \lambda_k\big(u^k - \lambda_k u^k\big)\\
&= \big(1 - \lambda_k + \lambda_k^2\big) u^k.
\end{aligned}$$
By the condition $u^0 = 1$ and by (17), noting that $1 - \lambda + \lambda^2 = \frac{13}{16}$ for both $\lambda = \frac14$ and $\lambda = \frac34$, we have
$$u^{k+1} = \prod_{i=0}^{k} \big(1 - \lambda_i + \lambda_i^2\big) = \Big(\frac{13}{16}\Big)^{k+1} \quad \forall k \in \mathbb{N}.$$
Since $\frac{13}{16} \in (0, 1)$, the sequence $\{u^k\}$ converges to $0 \in$ Sol$(K, F)$.
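As a quick sanity check, the recursion of Example 3.5 can be replayed numerically; since $K = \mathbb{R}$, the projection $P_K$ is the identity and nothing beyond the data of the example is assumed.

```python
# Replaying Example 3.5: u^0 = 1, lambda_k alternating between 1/4 and 3/4.
u = 1.0
for k in range(30):
    lam = 0.25 if k % 2 == 0 else 0.75
    u_bar = u - lam * u                  # P_K is the identity since K = R
    u = u - lam * u_bar                  # one step of (5): u^{k+1} = (1 - lam + lam^2) u^k
    assert abs(u - (13 / 16) ** (k + 1)) < 1e-12   # both stepsizes give the factor 13/16
```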

We now show that if the iterative sequence converges strongly, then the limit point is a
solution of (2).

Theorem 3.6 Suppose that $K$ is a nonempty closed convex set in a Hilbert space $H$. Let $F : K \to H$ be an $L$-Lipschitz continuous mapping. If the sequence $\{u^k\}$ generated by Algorithm 1 converges strongly to some $u^*$, then $u^*$ is a solution of VI$(K, F)$.

Proof By (5), we have
$$u^{k+1} = P_K\Big(u^k - \lambda_k F\big(P_K(u^k - \lambda_k F(u^k))\big)\Big). \tag{18}$$
Since $\{\lambda_k\} \subset [a, b] \subset (0, 1/L)$, $\{\lambda_k\}$ has a convergent subsequence. Without loss of generality, we may assume that $\lambda_k \to \lambda$ with $\lambda \in [a, b] \subset (0, 1/L)$. Letting $k \to \infty$ and using the continuity of $F$ and $P_K(\cdot)$, from (18), we get
$$u^* = P_K\Big(u^* - \lambda F\big(P_K(u^* - \lambda F(u^*))\big)\Big). \tag{19}$$
Then $u^*$ is a solution of VI$(K, F)$. Indeed, by (19) and the nonexpansiveness of $P_K$, we have
$$\begin{aligned}
\big\|u^* - P_K(u^* - \lambda F(u^*))\big\| &= \Big\|P_K\Big(u^* - \lambda F\big(P_K(u^* - \lambda F(u^*))\big)\Big) - P_K\big(u^* - \lambda F(u^*)\big)\Big\|\\
&\le \Big\|\big[u^* - \lambda F\big(P_K(u^* - \lambda F(u^*))\big)\big] - \big[u^* - \lambda F(u^*)\big]\Big\|\\
&= \lambda \big\|F\big(P_K(u^* - \lambda F(u^*))\big) - F(u^*)\big\|\\
&\le \lambda L \big\|P_K\big(u^* - \lambda F(u^*)\big) - u^*\big\|.
\end{aligned}$$
Since $0 < \lambda L < 1$, the latter forces $u^* = P_K(u^* - \lambda F(u^*))$. Hence, $u^* \in$ Sol$(K, F)$.

The next theorem is a strengthened form of Theorem 3.6.



Theorem 3.7 Let $K$ be a nonempty closed convex set in a Hilbert space $H$. Suppose that $F : K \to H$ is pseudomonotone and $L$-Lipschitz continuous on $K$. Let $\{u^k\}$ be a sequence generated by Algorithm 1. If $\{u^k\}$ is bounded and has a strongly convergent subsequence, then the whole sequence converges strongly to a solution of VI$(K, F)$.

Proof Since $\{u^k\}$ is bounded, $\{F(u^k)\}$ is also bounded by the Lipschitz continuity of $F$. Hence, there exists $R > 0$ such that
$$\|u^k\| \le R \quad \text{and} \quad \|F(u^k)\| \le R \quad \forall k \in \mathbb{N}.$$
Therefore, by the nonexpansiveness of the metric projection and by the Lipschitz continuity of $F$, for all $k \in \mathbb{N}$, we have
$$\begin{aligned}
\big\|P_K\big(u^k - \lambda_k F(u^k)\big)\big\| &= \big\|P_K\big(u^k - \lambda_k F(u^k)\big) - P_K(u^k) + P_K(u^k)\big\|\\
&\le \big\|P_K\big(u^k - \lambda_k F(u^k)\big) - P_K(u^k)\big\| + \|u^k\|\\
&\le \big\|u^k - \lambda_k F(u^k) - u^k\big\| + R\\
&\le |\lambda_k|\,\|F(u^k)\| + R\\
&\le L^{-1} R + R.
\end{aligned}$$
(It holds that $P_K(u^k) = u^k$ because $u^k \in K$.) Let $K_B$ be the intersection of $K$ and $B$, where $B$ is the closed ball centered at the origin with radius $L^{-1}R + R$. Then $K_B$ is nonempty, closed, and convex in $H$. Since $F$ is pseudomonotone on $K$, the problem VI$(K_B, F)$ has a solution by [20, Theorem 2.3]. As $\big\|P_K\big(u^k - \lambda_k F(u^k)\big)\big\| \le L^{-1}R + R$, the point $P_K\big(u^k - \lambda_k F(u^k)\big)$ is contained in $B$. Hence, we have
$$P_K\big(u^k - \lambda_k F(u^k)\big) = P_{K_B}\big(u^k - \lambda_k F(u^k)\big),$$
and so
$$u^{k+1} = P_K\Big(u^k - \lambda_k F\big(P_K(u^k - \lambda_k F(u^k))\big)\Big) = P_{K_B}\Big(u^k - \lambda_k F\big(P_{K_B}(u^k - \lambda_k F(u^k))\big)\Big).$$
The latter means that $\{u^k\}$ can be regarded as a sequence generated by the modified EGM for solving VI$(K_B, F)$. Since Sol$(K_B, F)$ is nonempty and the sequence $\{u^k\}$ has a strongly convergent subsequence by our assumption, $\{u^k\}$ converges strongly to some $\bar u \in K_B$ by Theorem 3.2. Applying Theorem 3.6, we can conclude that $\bar u \in$ Sol$(K, F)$.

If $\dim H < +\infty$, then any bounded sequence has a (strongly) convergent subsequence. Thus, in this case, it follows from Theorem 3.7 that if Algorithm 1 produces a bounded sequence $\{u^k\}$, then Sol$(K, F) \ne \emptyset$.
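This observation can be illustrated with a deliberately unsolvable problem. The data below ($K = \mathbb{R}$, $F \equiv 1$, $\lambda_k = 1/2$) are a hypothetical choice: VI$(K, F)$ then has no solution, and the iterates of Algorithm 1 drift off to $-\infty$.

```python
# Hypothetical unsolvable VI: K = R, F(u) = 1. F is constant, hence monotone and
# Lipschitz continuous, yet no u* satisfies 1 * (u - u*) >= 0 for every real u.
u = 0.0
for _ in range(1000):
    lam = 0.5                  # any fixed stepsize works here
    u_bar = u - lam * 1.0      # P_K is the identity since K = R
    u = u - lam * 1.0          # every step moves u by -1/2
# the iterates form the unbounded sequence u^k = -k/2, so u is now -500.0
```

By the contrapositive of the remark above, the unboundedness of this sequence is consistent with Sol$(K, F)$ being empty.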

4 Exact Bounds for a and b

We are interested in finding a lower bound for the constant a and an upper bound for the
constant b in Algorithm 1 and in Theorems 3.2 and 3.4.

Example 4.1 Let $K$, $F$, $u^0$ be the same as in Example 3.5 and define the real sequence $\{\lambda_k\} \subset (0, 1)$ by
$$\lambda_k = \frac{1}{2^{k+1}} \quad \forall k \in \mathbb{N}. \tag{20}$$
Calculating similarly as in Example 3.5, we obtain the following formula for the iterative sequence $\{u^k\}$ in (5):
$$u^{k+1} = \prod_{i=0}^{k} \big(1 - \lambda_i + \lambda_i^2\big) \quad \forall k \in \mathbb{N}. \tag{21}$$
From (20) and (21), we deduce that
$$u^{k+1} = \prod_{i=0}^{k} \big(1 - 2^{-(i+1)} + 4^{-(i+1)}\big) = \prod_{i=1}^{k+1} \big(1 - 2^{-i} + 4^{-i}\big)$$
for every $k \in \mathbb{N}$. We are going to prove that $\{u^k\}$ converges to some $u^* > 0$, and so it does not converge to any solution of VI$(K, F)$.
It is easily seen that $\{u^k\}$ is decreasing and bounded from below, and therefore convergent. For each $x \in [0, \frac12]$, we have
$$1 - \frac{x}{2} \ge e^{-x}. \tag{22}$$
Indeed, for each $t \in [0, \frac12]$, the inequality $e^{-t} \ge \frac12$ holds. Integrating both sides of this inequality over the interval $[0, x]$, where $x \in [0, \frac12]$, gives $1 - e^{-x} \ge \frac{x}{2}$, which yields (22). Note that, for every $i \ge 1$, the value $x := 2^{1-i} - 2 \cdot 4^{-i}$ belongs to the segment $[0, \frac12]$. Substituting it into (22) gives
$$1 - 2^{-i} + 4^{-i} \ge e^{-2^{1-i} + 2 \cdot 4^{-i}} \quad \forall i \ge 1. \tag{23}$$
For each $k \in \mathbb{N}$, multiplying the first $(k + 1)$ inequalities in (23), we get
$$\prod_{i=1}^{k+1} \big(1 - 2^{-i} + 4^{-i}\big) \ge \prod_{i=1}^{k+1} e^{-2^{1-i} + 2 \cdot 4^{-i}} = e^{-\sum_{i=1}^{k+1} 2^{1-i} + \sum_{i=1}^{k+1} 2 \cdot 4^{-i}}.$$
Letting $k \to \infty$, we obtain
$$u^* = \lim_{k \to \infty} u^{k+1} \ge e^{-\sum_{i=1}^{\infty} 2^{1-i} + \sum_{i=1}^{\infty} 2 \cdot 4^{-i}} = e^{-4/3} > 0.$$

Example 4.1 leads us to the following


Conclusion 1: The exact lower bound for a is 0.
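Conclusion 1 can also be observed numerically: the sketch below multiplies out the product from Example 4.1 and checks that the iterates stay above the exponential lower bound derived there (the two geometric series sum to $2$ and $\frac23$, respectively, giving the bound $e^{-4/3}$).

```python
import math

# Product from Example 4.1: lambda_k = 2^{-(k+1)} tends to 0 too fast, and the
# iterates u^{k+1} = prod_{i=0}^{k} (1 - lambda_i + lambda_i^2) stay away from
# the unique solution 0.
u = 1.0
for k in range(200):
    lam = 2.0 ** (-(k + 1))
    u *= 1.0 - lam + lam * lam
assert u >= math.exp(-2.0 + 2.0 / 3.0)   # the lower bound e^{-4/3} from (22)-(23)
```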

Example 4.2 Let $K$, $F$, $u^0$ be the same as in Example 3.5 and let
$$\lambda_k = 1 - \frac{1}{2^{k+1}} \quad \forall k \in \mathbb{N}. \tag{24}$$
From (24), we have
$$1 - \lambda_i + \lambda_i^2 = 1 - 2^{-(i+1)} + 4^{-(i+1)} \quad \forall i \in \mathbb{N}.$$
Thus, the iterative sequence in (5) has the form
$$u^{k+1} = \prod_{i=1}^{k+1} \big(1 - 2^{-i} + 4^{-i}\big) \quad \forall k \in \mathbb{N}.$$
An analysis similar to that in Example 4.1 shows that $\{u^k\}$ converges to some $u^* > 0$. Hence, it does not converge to any element of Sol$(K, F)$.

Example 4.2 leads us to the following


Conclusion 2: The exact upper bound for b is 1/L.
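Conclusion 2 admits the same kind of numerical check: with $\lambda_k = 1 - 2^{-(k+1)}$ approaching $1/L = 1$ from below, the factors $1 - \lambda_k + \lambda_k^2$ coincide with those of Example 4.1, so the iterates again stall at a positive limit.

```python
# Product from Example 4.2: lambda_k = 1 - 2^{-(k+1)} tends to 1/L = 1 from below.
u = 1.0
for k in range(200):
    lam = 1.0 - 2.0 ** (-(k + 1))
    u *= 1.0 - lam + lam * lam           # equals 1 - 2^{-(k+1)} + 4^{-(k+1)}
assert u > 0.25                          # the iterates do not approach Sol(K, F) = {0}
```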

5 Weak Convergence Under Monotonicity

In this section, we prove a weak convergence theorem for the modified EGM. A proof of the weak convergence of the classical EGM can be found in [2]. Compared with the proof of [2, Theorem 3.1], ours is more elementary because it does not use any fact from the theory of maximal monotone operators.

Theorem 5.1 Suppose that $K$ is a nonempty closed convex set in a Hilbert space $H$. Let $F : K \to H$ be monotone and $L$-Lipschitz continuous on $K$. Suppose that Sol$(K, F)$ is nonempty and $\{u^k\}$ is a sequence generated by Algorithm 1. Then, there exists $\bar u \in$ Sol$(K, F)$ such that $\{u^k\}$ converges weakly to $\bar u$.

Proof On the one hand, since monotonicity implies pseudomonotonicity, by Theorem 3.2, the sequence $\{u^k\}$ is bounded and
$$\lim_{k \to \infty} \|u^k - \bar u^k\| = 0. \tag{25}$$
On the other hand, $F$ is $L$-Lipschitz continuous on $K$, and so
$$\|F(u^k) - F(\bar u^k)\| \le L\|u^k - \bar u^k\| \quad \forall k \in \mathbb{N}.$$
Combining this with (25), we get $\lim_{k \to \infty} \|F(u^k) - F(\bar u^k)\| = 0$.
As $\{u^k\}$ is a bounded sequence in a Hilbert space, we can find a subsequence $\{u^{k_i}\}$ of $\{u^k\}$ converging weakly to an element $\bar u \in K$. Since the sequence $\{u^{k_i} - \bar u^{k_i}\}$ converges strongly to $0$ by (25), the subsequence $\{\bar u^{k_i}\}$ of $\{\bar u^k\}$ converges weakly to $\bar u$.
By the monotonicity of $F$, for each $y \in K$, we have
$$\langle F(y), y - \bar u^{k_i} \rangle \ge \langle F(\bar u^{k_i}), y - \bar u^{k_i} \rangle. \tag{26}$$
Since $\bar u^{k_i} = P_K\big(u^{k_i} - \lambda_{k_i} F(u^{k_i})\big)$, using the characterization of the metric projection in [7, Theorem 2.3 on p. 9], we assert that
$$\big\langle u^{k_i} - \lambda_{k_i} F(u^{k_i}) - \bar u^{k_i},\, y - \bar u^{k_i} \big\rangle \le 0 \quad \forall y \in K,$$
or equivalently,
$$\Big\langle \frac{1}{\lambda_{k_i}}\big(u^{k_i} - \bar u^{k_i}\big) - F(u^{k_i}),\, y - \bar u^{k_i} \Big\rangle \le 0 \quad \forall y \in K. \tag{27}$$
From (26) and (27), we get
$$\begin{aligned}
\langle F(y), y - \bar u^{k_i} \rangle &\ge \langle F(\bar u^{k_i}), y - \bar u^{k_i} \rangle + \Big\langle \frac{1}{\lambda_{k_i}}\big(u^{k_i} - \bar u^{k_i}\big) - F(u^{k_i}),\, y - \bar u^{k_i} \Big\rangle\\
&= \langle F(\bar u^{k_i}) - F(u^{k_i}), y - \bar u^{k_i} \rangle + \frac{1}{\lambda_{k_i}}\big\langle u^{k_i} - \bar u^{k_i}, y - \bar u^{k_i} \big\rangle.
\end{aligned}$$
Letting $i \to \infty$, we obtain $\langle F(y), y - \bar u \rangle \ge 0$. By the Minty lemma [7, Lemma 1.5 on p. 84] and the assumptions on $F$, from this we can deduce that $\bar u \in$ Sol$(K, F)$.
Let us show that $\{u^k\}$ converges weakly to $\bar u$. Referring back to the choice of $\bar u$, we see that $\{u^{k_i}\}$ converges weakly to $\bar u$. Let $\{u^{k_j}\}$ be another subsequence of $\{u^k\}$ converging weakly to $\hat u$. We have to show that $\hat u = \bar u$. As has been shown above, $\hat u \in$ Sol$(K, F)$. By Lemma 3.1, the sequences $\{\|u^k - \bar u\|\}$ and $\{\|u^k - \hat u\|\}$ are monotonically decreasing and therefore converge. Set
$$\alpha = \lim_{k \to \infty} \|u^k - \bar u\| \quad \text{and} \quad \beta = \lim_{k \to \infty} \|u^k - \hat u\|.$$
We have
$$\|u^{k_i} - \hat u\|^2 = \|u^{k_i} - \bar u\|^2 + \|\bar u - \hat u\|^2 + 2\langle u^{k_i} - \bar u, \bar u - \hat u \rangle$$
for all $i \in \mathbb{N}$. This implies that
$$\lim_{i \to \infty} 2\langle u^{k_i} - \bar u, \bar u - \hat u \rangle = \beta^2 - \alpha^2 - \|\bar u - \hat u\|^2.$$
Since $\{u^{k_i}\}$ converges weakly to $\bar u$, we deduce that
$$0 = \beta^2 - \alpha^2 - \|\bar u - \hat u\|^2.$$
Changing the roles of $\bar u$ and $\hat u$, and of $\{u^{k_i}\}$ and $\{u^{k_j}\}$, we obtain
$$0 = \alpha^2 - \beta^2 - \|\bar u - \hat u\|^2.$$
Adding the last two equalities yields $\|\bar u - \hat u\| = 0$. Thus, we can assert that the whole sequence $\{u^k\}$ converges weakly to $\bar u$.

Example 5.2 Let $H = \ell^2$, the Hilbert space whose elements are the square-summable sequences $(u_1, u_2, \ldots, u_i, \ldots)$ of scalars, i.e.,
$$H = \Big\{u = (u_1, u_2, \ldots, u_i, \ldots) : \sum_{i=1}^{\infty} |u_i|^2 < +\infty\Big\}.$$
Let $K = \{u \in \ell^2 : u_i \ge 0 \text{ for all } i \in \mathbb{N}\}$ and let $F : H \to H$ be the linear mapping defined by
$$F(u) = (u_1, 0, u_3, 0, \ldots, u_{2i-1}, 0, \ldots) \quad \forall u = (u_1, u_2, \ldots, u_{2i-1}, u_{2i}, \ldots) \in H.$$
For each $u = (u_1, u_2, \ldots, u_i, \ldots) \in H$, we have
$$\langle F(u), u \rangle = \sum_{i=1}^{\infty} u_{2i-1}^2 \ge 0 \quad \text{and} \quad \|F(u)\|^2 = \sum_{i=1}^{\infty} u_{2i-1}^2 \le \sum_{i=1}^{\infty} u_i^2 = \|u\|^2.$$
Hence, $F$ is monotone and Lipschitz continuous on $H$ with the Lipschitz constant $L = 1$. The solution set of VI$(K, F_{/K})$, where $F_{/K}$ stands for the restriction of $F$ to $K$, is given by
$$\mathrm{Sol}(K, F_{/K}) = \{u \in \ell^2 : u_{2i-1} = 0 \text{ and } u_{2i} \ge 0 \text{ for all } i \in \mathbb{N}\}.$$
Choose $\lambda_k = \frac12 \in (0, 1)$ for all $k \in \mathbb{N}$ and $u^0 = (u^0_1, u^0_2, \ldots, u^0_i, \ldots) \in K$. Since all the conditions of Theorem 5.1 are satisfied, the sequence $\{u^k\}$ generated by Algorithm 1 converges weakly to an element of Sol$(K, F)$. In fact, the sequence $\{u^k\}$ converges strongly to $u^* = (0, u^0_2, 0, u^0_4, \ldots, 0, u^0_{2i}, \ldots) \in K$. To see this, we use (5) and observe that
$$\bar u^k = P_K\big(u^k - (1/2)F(u^k)\big) = P_K\Big(\tfrac12 u^k_1, u^k_2, \tfrac12 u^k_3, u^k_4, \ldots, \tfrac12 u^k_{2i-1}, u^k_{2i}, \ldots\Big) = \Big(\tfrac12 u^k_1, u^k_2, \tfrac12 u^k_3, u^k_4, \ldots, \tfrac12 u^k_{2i-1}, u^k_{2i}, \ldots\Big)$$
and
$$u^{k+1} = P_K\big(u^k - (1/2)F(\bar u^k)\big) = P_K\Big(\tfrac34 u^k_1, u^k_2, \tfrac34 u^k_3, u^k_4, \ldots, \tfrac34 u^k_{2i-1}, u^k_{2i}, \ldots\Big) = \Big(\tfrac34 u^k_1, u^k_2, \tfrac34 u^k_3, u^k_4, \ldots, \tfrac34 u^k_{2i-1}, u^k_{2i}, \ldots\Big).$$
From this, we have $\|u^{k+1} - u^*\| \le \big(\tfrac34\big)^{k+1}\|u^0\|$ for all $k \in \mathbb{N}$, and so the sequence $\{u^k\}$ converges strongly to $u^*$.
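A finite-dimensional truncation of Example 5.2 (keeping only the first six coordinates of $\ell^2$; the initial point is a hypothetical choice) makes the contraction visible: the odd coordinates shrink by the factor $\frac34$ per iteration, while the even coordinates never move.

```python
import numpy as np

def F(u):
    """Truncation of the map in Example 5.2: keep odd (1-based) coordinates, zero out the even ones."""
    v = np.zeros_like(u)
    v[0::2] = u[0::2]
    return v

proj = lambda u: np.maximum(u, 0.0)            # P_K for the nonnegative orthant
u = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # a hypothetical u^0 in K
for _ in range(80):
    u_bar = proj(u - 0.5 * F(u))               # lambda_k = 1/2 for every k
    u = proj(u - 0.5 * F(u_bar))
# u tends to u* = (0, 2, 0, 4, 0, 6): odd entries decay like (3/4)^k
```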

6 Concluding Remarks

We have considered a modified EGM with variant stepsizes for solving infinite-dimensional variational inequalities. We have obtained several strong convergence theorems and a weak convergence theorem for the iterative sequences generated by this method. We have also given exact outer bounds for the segment containing the stepsizes. Furthermore, we have shown that the solution existence of (2) can be characterized by the behavior of the iterative sequence, provided that an additional assumption is fulfilled.
Open problems remain in this topic. For instance, it is of interest to study the following questions:
(Q1) Is the assumption on the existence of a strongly convergent subsequence $\{u^{k_i}\}$ in Theorem 3.2 a redundant one?
(Q2) If the monotonicity of $F$ in Theorem 5.1 is replaced by pseudomonotonicity, is the weak convergence of $\{u^k\}$ still guaranteed?
(Q3) Under the assumptions of Theorem 5.1, does $\{u^k\}$ converge strongly to $\bar u$?

Acknowledgments The guidance of Prof. N. D. Yen and Dr. T. C. Dieu is gratefully acknowledged. The
author would like to thank the two anonymous referees for valuable remarks and suggestions.

References

1. Ceng, L.C., Huang, S., Petrusel, A.: Weak convergence theorem by a modified extragradient method for nonexpansive mappings and monotone mappings. Taiwanese J. Math. 13, 225–238 (2009)
2. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
3. Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vols. I and II. Springer, New York (2003)
4. Ioffe, A.D., Tihomirov, V.M.: Theory of Extremal Problems. North-Holland, Amsterdam (1979)
5. Iusem, A.N., Svaiter, B.F.: A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 42, 309–321 (1997)
6. Khobotov, E.N.: Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. (Russian) Zh. Vychisl. Mat. i Mat. Fiz. 27, 1462–1473 (1987)
7. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
8. Korpelevich, G.M.: An extragradient method for finding saddle points and for other problems. (Russian) Ekonom. i Mat. Metody 12, 747–756 (1976)
9. Marcotte, P.: Application of Khobotov's algorithm to variational inequalities and network equilibrium problems. Inform. Systems Oper. Res. 29, 258–270 (1991)
10. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Revue Française Automatique Informatique et Recherche Opérationnelle 4, 154–158 (1970)
11. Noor, M.A.: Some algorithms for general monotone mixed variational inequalities. Math. Comput. Modelling 29, 1–9 (1999)
12. Noor, M.A.: Some developments in general variational inequalities. Appl. Math. Comput. 152, 199–277 (2004)
13. Nadezhkina, N., Takahashi, W.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191–201 (2006)
14. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
15. Solodov, M.V., Tseng, P.: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 34, 1814–1830 (1996)
16. Solodov, M.V., Svaiter, B.F.: A new projection method for variational inequality problems. SIAM J. Control Optim. 37, 765–776 (1999)
17. Tam, N.N., Yao, J.-C., Yen, N.D.: Solution methods for pseudomonotone variational inequalities. J. Optim. Theory Appl. 138, 253–273 (2008)
18. Thanh Hao, N.: Tikhonov regularization algorithm for pseudomonotone variational inequalities. Acta Math. Vietnam. 31, 283–289 (2006)
19. Tinti, F.: Numerical solution for pseudomonotone variational inequality problems by extragradient methods. In: Variational Analysis and Applications. Nonconvex Optim. Appl. 79, 1101–1128 (2005)
20. Yao, J.-C.: Multi-valued variational inequalities with K-pseudomonotone operators. J. Optim. Theory Appl. 83, 391–403 (1994)
