Abstract

This paper presents and analyzes a strongly convergent approximate proximal point algorithm for finding zeros of maximal monotone operators in Hilbert spaces. The proposed method combines the proximal subproblem with a more general correction step that exploits more information from the existing iterates. As applications, convex programming problems and generalized variational inequalities are considered. Some preliminary computational results are reported.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\| = \sqrt{\langle\cdot,\cdot\rangle}$, and let $T: H \to 2^H$ be a maximal monotone set-valued operator. A canonical problem associated with $T$ is the maximal monotone inclusion problem, that is, to find a vector $x \in H$ such that

$$0 \in T(x). \quad (1.1)$$

The maximal monotone inclusion problem provides a powerful general framework for the study of many important optimization problems, such as convex programming problems and variational inequalities; see [1, 2], for example. It has received considerable attention; interested readers may consult the monographs by Facchinei and Pang [3] and the survey papers [4, 14].

Many methods have been proposed to solve the maximal monotone inclusion problem. One of the classical methods is the proximal point algorithm (PPA), originally proposed by Martinet [5]. Let $x_k \in H$ be the current iterate; PPA generates the next iterate $x_{k+1}$ by solving the proximal subproblem

$$x_k \in x + c_k T(x). \quad (1.2)$$

If the sequence $\{c_k\}$ is bounded away from zero, then the sequence $\{x_k\}$ generated by (1.2) converges weakly to a solution of (1.1). However, since (1.2) is a nonsmooth equation and an implicit scheme, solving (1.2) exactly is either impossible or as difficult as solving the original problem (1.1); see [6]. This makes straightforward applications of PPA impractical in many cases. To overcome this drawback, Rockafellar [7] generalized PPA and presented the following approximate proximal point algorithm:

$$x_k + e_k \in x + c_k T(x), \quad (1.3)$$

where the error sequence $\{e_k\}$ satisfies a summable error criterion. Because of the relaxed accuracy requirement, the approximate proximal point algorithm is more practical than the exact one. Furthermore, Rockafellar posed an open question: does the sequence generated by (1.3) converge strongly? In 1992, by exhibiting a proper closed convex function in the infinite-dimensional Hilbert space $\ell^2$, Güler [8] showed that it does not converge strongly in general. Naturally, the question arises whether PPA can be modified, preferably in a simple way, so that strong convergence is guaranteed.
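To make the exact scheme (1.2) concrete, the following sketch (ours, not from the paper) runs PPA on the linear monotone operator $T(x) = Ax$ with $A$ positive semidefinite, for which the resolvent $(I + c_k T)^{-1}$ reduces to a linear solve; the matrix and all numerical choices are illustrative assumptions.

```python
import numpy as np

# Exact PPA (1.2) for T(x) = A x with A positive semidefinite (monotone).
# The zero set of T is the x2-axis; PPA should drive x1 to 0 while a
# coordinate already in the zero set stays fixed.
A = np.array([[2.0, 0.0],
              [0.0, 0.0]])
x = np.array([4.0, 3.0])          # starting point x_0
c = 1.0                           # c_k bounded away from zero
for _ in range(60):
    # x_{k+1} = (I + c T)^{-1} x_k, a linear solve for this T
    x = np.linalg.solve(np.eye(2) + c * A, x)
print(x)
```

Each step contracts the first coordinate by a factor of $1/3$, while the second coordinate, already a zero of $T$, is left unchanged.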

One point deserves attention here. Weak and strong convergence differ only in infinite-dimensional spaces. Many real-world problems in economics and engineering are modeled in infinite-dimensional spaces, such as optimal control and structural design problems. When such problems are solved, numerical implementations of algorithms are of course applied to finite-dimensional approximations of these problems. Nevertheless, as is pointed out in [9], it is important to develop convergence theory for the infinite-dimensional case, because it guarantees robustness and stability with respect to the discretization schemes employed for obtaining finite-dimensional approximations of infinite-dimensional problems.

Recently, a number of researchers have concentrated on the development of approximate proximal point algorithms, covering theoretical analysis, algorithm design, and practical applications. To find a solution of the desired problem in a restricted region $C \subseteq H$, many authors used an additional projection or extragradient step to correct the approximate solution, thus obtaining prediction-correction proximal point algorithms. Recent developments on approximate proximal point algorithms also focus on replacing the correction step with more general steps; see, for example, [10–13]. To mention a few, for a fixed vector $u \in H$, Zhou et al. [12] presented the following correction step:

$$x_{k+1} = \alpha_k u + (1 - \alpha_k) P_C\left[x_k - \rho_k (x_k - y_k + e_k)\right], \quad (1.4)$$

where $y_k$ is generated by (1.3). For a suitably chosen parameter sequence $\{\alpha_k\}$ and under certain assumptions on the error term $e_k$, they proved that the method converges strongly. Later, Qin et al. [13] developed a more general iterative scheme:

$$x_{k+1} = \alpha_k u + \beta_k (y_k - e_k) + \gamma_k u_k, \quad (1.5)$$

where $y_k$ is generated by (1.3) and $\{u_k\}$ is a bounded sequence. For suitably chosen parameter sequences $\{\alpha_k\}$, $\{\beta_k\}$, and $\{\gamma_k\}$ and under certain assumptions on the error term $e_k$, they obtained the strong convergence of the method as well.

In this paper, we propose a strongly convergent approximate proximal point algorithm for the maximal monotone inclusion problem, obtained by combining the proximal subproblem (1.3) with a more general correction step. Compared with the methods of [10–13], the proposed method exploits more information from the existing iterates. For practical implementation, we give two applications of the proposed method, to convex programming problems and to generalized variational inequalities. Preliminary numerical experiments, reported in Section 5, demonstrate the efficiency of the method in practice.

This paper is organized as follows. Section 2 introduces some useful preliminaries. Section 3 describes the proposed method formally and presents the convergence analysis. Section 4 discusses some applications of the proposed method. Section 5 presents some numerical experiments, and some final conclusions are given in the last section.

Throughout this paper, we assume that the solution set of (1.1), denoted by $T^{-1}0$, is nonempty.

2. Preliminaries

This section summarizes some fundamental concepts and lemmas that are useful in the subsequent analysis.

Definition 2.1. Let $H$ be a real Hilbert space and let $T: H \to 2^H$ be a set-valued operator. Then:
(i) the effective domain of $T$, denoted by $D(T)$, is $D(T) = \{x : \exists y \in H, (x, y) \in T\}$; (2.1)
(ii) the range or image of $T$, denoted by $R(T)$, is $R(T) = \{y : \exists x \in H, (x, y) \in T\}$; (2.2)
(iii) the graph of $T$, denoted by $G(T)$, is $G(T) = \{(x, y) \in H \times H : x \in D(T), y \in T(x)\}$; (2.3)
(iv) the inverse of $T$, denoted by $T^{-1}$, is $T^{-1} = \{(y, x) : (x, y) \in T\}$. (2.4)

Definition 2.2. Let $H$ be a real Hilbert space and let $T: H \to 2^H$ be a set-valued operator. Then $T$ is called monotone on $H$ if $\langle x - x', y - y' \rangle \ge 0$ for all $(x, y), (x', y') \in T$. (2.5) Furthermore, a monotone operator $T$ is called maximal monotone if its graph $G(T)$ is not properly contained in the graph of any other monotone operator on $H$.

Definition 2.3. Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $S$ be an operator from $C$ into $H$. Then:
(i) $S$ is called firmly nonexpansive if $\|Sx - Sx'\|^2 \le \langle x - x', Sx - Sx' \rangle$ for all $x, x' \in C$; (2.6)
(ii) $S$ is called nonexpansive if $\|Sx - Sx'\| \le \|x - x'\|$ for all $x, x' \in C$. (2.7)

Definition 2.4. Let $H$ be a real Hilbert space and let $C$ be a nonempty closed convex subset of $H$. Then the orthogonal projection from $H$ onto $C$, denoted by $P_C(\cdot)$, is $P_C(x) = \arg\min\{\|x - y\| : y \in C\}$, $x \in H$. (2.8)

It is easy to verify that the orthogonal projection operator is nonexpansive.
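Nonexpansiveness is easy to check numerically for a projection with a closed form, such as the projection onto the unit ball; the routine below is our own illustration, not part of the paper.

```python
import numpy as np

# P_C for the closed unit ball C = {x : ||x|| <= 1}, together with a random
# check of nonexpansiveness ||P_C x - P_C y|| <= ||x - y|| (Definition 2.3(ii)).
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = np.linalg.norm(proj_ball(x) - proj_ball(y))
    assert lhs <= np.linalg.norm(x - y) + 1e-12
print("nonexpansiveness holds on 1000 random pairs")
```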

Definition 2.5. Given any positive scalar $c$ and operator $T$, define the resolvent of $T$ by $J_c = (I + cT)^{-1}$, (2.9) where $I$ denotes the identity operator on $H$. Also, define the Yosida approximation $T_c$ by $T_c = \frac{1}{c}(I - J_c)$. (2.10)

We know that $T_c(x) \in T(J_c x)$ for all $x \in H$; that $\|T_c(x)\| \le |T(x)|$ for all $x \in D(T)$, where $|T(x)| = \inf\{\|y\| : y \in T(x)\}$; and that $T^{-1}0 = F(J_c)$ for all $c > 0$, where $F(J_c)$ denotes the fixed point set of $J_c$.
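For a simple operator these objects have closed forms. Taking $T = I$ (so $T^{-1}0 = \{0\}$), the sketch below (our illustration) computes $J_c$ and $T_c$ and checks the properties just listed, including the behavior of $J_t$ as $t \to \infty$ described in Lemma 2.8.

```python
import numpy as np

# Resolvent J_c = (I + cT)^{-1} and Yosida approximation T_c = (I - J_c)/c
# for the monotone operator T(x) = x, whose only zero is the origin.
def J(c, x):
    return x / (1.0 + c)              # (I + c I)^{-1} x for T = I

def yosida(c, x):
    return (x - J(c, x)) / c          # equals x/(1+c) for this T

x = np.array([3.0, -4.0])
# T_c(x) lies in T(J_c x); here both equal x/(1+c)
assert np.allclose(yosida(2.0, x), J(2.0, x))
# As t -> infinity, J_t x approaches the zero of T nearest to x (Lemma 2.8)
print(J(1e9, x))
```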

In the following, we list some useful lemmas.

Lemma 2.6 (see [14]). Let $c$ be any positive scalar. An operator $T$ is monotone if and only if its resolvent $J_c$ is firmly nonexpansive. Furthermore, $T$ is maximal monotone if and only if $J_c$ is firmly nonexpansive and $D(J_c) = H$.

Lemma 2.7 (see [15]). Let $T$ be a maximal monotone operator and let $c$ be a positive scalar. Then $(I + cT)^{-1}$ is single valued on all of $H$; that is, $(I + cT)^{-1}(x) = J_c(x)$ for every $x \in H$. (2.11)

Lemma 2.8 (see [16]). For all $x \in H$, $\lim_{t \to \infty} J_t x$ exists and is the point of $T^{-1}0$ nearest to $x$.

Lemma 2.9 (see [17]). Let $\{a_k\}$, $\{b_k\}$, and $\{c_k\}$ be sequences of nonnegative numbers satisfying $a_{k+1} \le (1 - \lambda_k) a_k + b_k + c_k$, $k \ge 0$, (2.12) where $\{\lambda_k\}$ is a sequence in $[0, 1]$. Assume that the following conditions are satisfied: (i) $\lambda_k \to 0$ as $k \to \infty$ and $\sum_{k=0}^{\infty} \lambda_k = \infty$; (ii) $b_k = o(\lambda_k)$; (iii) $\sum_{k=0}^{\infty} c_k < \infty$. Then $\lim_{k \to \infty} a_k = 0$.

3. The Algorithm and Convergence Analysis

In this section, we formally present the proposed approximate proximal point algorithm and analyze its convergence.

Algorithm 3.1. Given $x_0 \in H$, $u \in H$, and $\{c_k\} \subset (0, \infty)$ with $c_k \to \infty$ as $k \to \infty$. Find $y_k \in H$ satisfying

$$x_k + e_k \in y_k + c_k T(y_k), \quad (3.1)$$

under the inexact error criterion

$$\|e_k\| \le \eta_k \|x_k - y_k\|, \qquad \sup_{k \ge 0} \eta_k = \eta < 1. \quad (3.2)$$

Generate the new iterate by

$$x_{k+1} = \alpha_k u + \beta_k P_C\left[x_k - \rho_k (x_k - y_k + e_k)\right] + \gamma_k x_k, \quad (3.3)$$

where the stepsize $\rho_k$ is defined by

$$\rho_k = \frac{\langle x_k - y_k, x_k - y_k + e_k \rangle}{\|x_k - y_k + e_k\|^2}, \quad (3.4)$$

and $\{\alpha_k\}$, $\{\beta_k\}$, and $\{\gamma_k\}$ are real sequences in $[0, 1]$ satisfying: (i) $\alpha_k + \beta_k + \gamma_k = 1$; (ii) $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=0}^{\infty} \alpha_k = \infty$; (iii) $\sum_{k=0}^{\infty} \gamma_k < \infty$.
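The following is a minimal numerical sketch of Algorithm 3.1 under assumptions of ours: the operator is $T(x) = x$, the proximal subproblem is solved exactly (so $e_k = 0$ and the stepsize (3.4) evaluates to $\rho_k = 1$), $C$ is a box, and the parameter sequences are simple choices satisfying conditions (i)–(iii).

```python
import numpy as np

# Sketch of Algorithm 3.1 for T(x) = x on C = [-1, 1]^2 with exact proximal
# steps (e_k = 0). Every concrete choice below is illustrative.
def proj_box(x):
    return np.clip(x, -1.0, 1.0)        # P_C for the box C

u = np.array([0.9, -0.5])               # anchor point u
x = np.array([1.0, 1.0])                # x_0
for k in range(200):
    c_k = k + 1.0                       # c_k -> infinity
    y = x / (1.0 + c_k)                 # y_k solves x_k in y + c_k T(y) exactly
    d = x - y                           # d_k = x_k - y_k + e_k with e_k = 0
    rho = 1.0                           # (3.4) gives rho_k = 1 when e_k = 0
    a_k = 1.0 / (k + 2.0)               # alpha_k -> 0, sum alpha_k = infinity
    g_k = 0.5 ** (k + 1)                # gamma_k summable
    b_k = 1.0 - a_k - g_k               # enforces alpha + beta + gamma = 1
    x = a_k * u + b_k * proj_box(x - rho * d) + g_k * x   # step (3.3)
print(x)
```

In line with Theorem 3.6, the iterates approach the zero of $T$ nearest to $u$, which here is the origin.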

The following remark gives the relationships between Algorithm 3.1 and some existing algorithms.

Remark 3.2. When $\alpha_k = 0$, $\beta_k = 1$, $\gamma_k = 0$, and $\rho_k = 1$, Algorithm 3.1 reduces to the method proposed by He et al. [10]; when $\alpha_k = 0$, $\beta_k = 1$, and $\gamma_k = 0$, without the error term $e_k$ in (3.3), Algorithm 3.1 reduces to the method proposed by Yang and He [11]; when $\gamma_k = 0$, so that $\beta_k = 1 - \alpha_k$, Algorithm 3.1 reduces to the method proposed by Zhou et al. [12]; when $\rho_k = 1$ and $C = H$, Algorithm 3.1 reduces to the method proposed by Qin et al. [13] with $u_k = x_k$.

In the following, we give the convergence analysis of Algorithm 3.1, beginning with some lemmas. For convenience, we use the notation $d_k = x_k - y_k + e_k$. (3.5)

Lemma 3.3 (see [6]). Let $H$ be a real Hilbert space and let $C$ be a nonempty closed convex subset of $H$. For given $x_k \in H$, $c_k > 0$, and $e_k \in H$, there exists $y_k \in H$ conforming to the set-valued equation (3.1). Furthermore, for any $x^* \in T^{-1}0$, one has $\langle x_k - y_k, d_k \rangle \le \langle x_k - x^*, d_k \rangle$. (3.6)

Lemma 3.4. Let $\rho_k$ be defined by (3.4). Then $\rho_k > \frac{1}{2}$. (3.7)

Proof. By the notation of $d_k$, it follows from (3.2) that

$$\langle x_k - y_k, d_k \rangle = \|x_k - y_k\|^2 + \langle x_k - y_k, e_k \rangle > \frac{1}{2}\|x_k - y_k\|^2 + \langle x_k - y_k, e_k \rangle + \frac{1}{2}\|e_k\|^2 = \frac{1}{2}\|d_k\|^2, \quad (3.8)$$

where the strict inequality holds because $\|e_k\| \le \eta_k \|x_k - y_k\|$ with $\eta_k < 1$. By the definition of $\rho_k$, the proof is complete.

Lemma 3.5. Let $x^*$ be any zero of $T$ in $C$. Then $\|P_C[x_k - \rho_k d_k] - x^*\| \le \|x_k - x^*\|$. (3.9)

Proof. Since $x^* \in C$ and $P_C(\cdot)$ is nonexpansive, by Lemma 3.3 we have

$$\|P_C[x_k - \rho_k d_k] - x^*\|^2 \le \|x_k - \rho_k d_k - x^*\|^2 = \|x_k - x^*\|^2 - 2\rho_k \langle x_k - x^*, d_k \rangle + \rho_k^2 \|d_k\|^2 \le \|x_k - x^*\|^2 - 2\rho_k \langle x_k - y_k, d_k \rangle + \rho_k^2 \|d_k\|^2. \quad (3.10)$$

Considering the last two terms of the above inequality, by the definition of $\rho_k$ and $d_k$, we have

$$-2\rho_k \langle x_k - y_k, d_k \rangle + \rho_k^2 \|d_k\|^2 = -\rho_k \langle x_k - y_k, d_k \rangle \le -\rho_k \left( \|x_k - y_k\|^2 - \|x_k - y_k\| \|e_k\| \right). \quad (3.11)$$

Taking into account that $\|e_k\| \le \eta_k \|x_k - y_k\|$, by Lemma 3.4 we further obtain

$$\|P_C[x_k - \rho_k d_k] - x^*\|^2 \le \|x_k - x^*\|^2 - \frac{1}{2}(1 - \eta_k)\|x_k - y_k\|^2. \quad (3.12)$$

Since $\sup_{k \ge 0} \eta_k = \eta < 1$, the assertion follows from (3.12) immediately.

We now prove the strong convergence of Algorithm 3.1.

Theorem 3.6. Let $\{x_k\}$ be generated by Algorithm 3.1. Suppose that $\sum_{k=0}^{\infty} \|e_k\| < \infty$. Then the sequence $\{x_k\}$ converges strongly to a zero point $\bar{x}$ of $T$, where $\bar{x} = \lim_{t \to \infty} J_t u$.

Proof. We divide the proof into three parts.

Claim 1. The sequence $\{x_k\}$ is bounded.

Take any $x^* \in T^{-1}0 \cap C$ and set $M = \max\{\|u - x^*\|, \|x_0 - x^*\|\}$. We prove by induction that $\|x_k - x^*\| \le M$ for all $k \ge 0$. (3.13) It is easy to see that (3.13) is true for $k = 0$. Now, assume that (3.13) holds for some $k \ge 0$. By the definition of $x_{k+1}$, it follows from $\alpha_k + \beta_k + \gamma_k = 1$ that

$$\|x_{k+1} - x^*\| \le \alpha_k \|u - x^*\| + \beta_k \left\| P_C[x_k - \rho_k d_k] - x^* \right\| + \gamma_k \|x_k - x^*\| \le \alpha_k \|u - x^*\| + \beta_k \|x_k - x^*\| + \gamma_k \|x_k - x^*\| \le \alpha_k M + (1 - \alpha_k) M = M, \quad (3.14)$$

where the second inequality follows from Lemma 3.5. Hence the sequence $\{x_k\}$ is bounded. Moreover, since $\|y_k - x^*\| \le \|x_k - x^*\| + \|e_k\|$ and $\|e_k\| \le \eta \|x_k - y_k\|$ with $\eta < 1$, it is easy to verify that $\{y_k\}$, $\{e_k\}$, and $\{d_k\}$ are bounded as well.
Claim 2. Show that $\limsup_{k \to \infty} \langle u - \bar{x}, x_{k+1} - \bar{x} \rangle \le 0$, where $\bar{x} = \lim_{t \to \infty} J_t u$. Note that the existence of $\lim_{t \to \infty} J_t u$ is guaranteed by Lemma 2.8.

Fix $t > 0$. By Lemma 2.7, the inclusion (3.1) means exactly that $y_k = J_{c_k}(x_k + e_k)$. Since $T$ is maximal monotone, $(u - J_t u)/t \in T(J_t u)$ and $(x_k + e_k - y_k)/c_k \in T(y_k)$, so the monotonicity of $T$ gives

$$\langle u - J_t u, y_k - J_t u \rangle \le \frac{t}{c_k} \langle x_k + e_k - y_k, y_k - J_t u \rangle. \quad (3.15)$$

Since $c_k \to \infty$ as $k \to \infty$ and the sequences involved are bounded by Claim 1, we obtain, for every fixed $t > 0$,

$$\limsup_{k \to \infty} \langle u - J_t u, y_k - J_t u \rangle \le 0. \quad (3.16)$$

Since $\|(y_k - e_k) - y_k\| = \|e_k\| \to 0$ as $k \to \infty$, it follows that

$$\limsup_{k \to \infty} \langle u - J_t u, (y_k - e_k) - J_t u \rangle \le 0. \quad (3.22)$$

Adopt the notation $z_k = \frac{\beta_k}{\beta_k + \gamma_k} P_C[x_k - \rho_k d_k] + \frac{\gamma_k}{\beta_k + \gamma_k} x_k$. Since $\alpha_k + \beta_k + \gamma_k = 1$, we have

$$x_{k+1} = \alpha_k u + (1 - \alpha_k) z_k. \quad (3.23)$$

By the definition of $\rho_k$, $(1 - \rho_k)\|d_k\|^2 = \langle e_k, d_k \rangle$, and hence $|1 - \rho_k| \|d_k\| \le \|e_k\|$. Noting that $y_k - e_k = x_k - d_k$ and that $P_C(\cdot)$ is nonexpansive, we obtain

$$\|z_k - (y_k - e_k)\| \le \frac{\beta_k}{\beta_k + \gamma_k} \left\| P_C[x_k - \rho_k d_k] - (y_k - e_k) \right\| + \frac{\gamma_k}{\beta_k + \gamma_k} \|x_k - (y_k - e_k)\| \le |1 - \rho_k| \|d_k\| + \frac{\gamma_k}{\beta_k + \gamma_k} \|d_k\| \le \|e_k\| + \frac{\gamma_k}{\beta_k + \gamma_k} \|d_k\|. \quad (3.24)$$

Since $\|e_k\| \to 0$ and $\gamma_k \to 0$ as $k \to \infty$ (recall that $\sum_{k=0}^{\infty} \gamma_k < \infty$), we get

$$\lim_{k \to \infty} \|z_k - (y_k - e_k)\| = 0. \quad (3.25)$$

By (3.23),

$$\|x_{k+1} - (y_k - e_k)\| \le \alpha_k \|u - (y_k - e_k)\| + (1 - \alpha_k) \|z_k - (y_k - e_k)\|. \quad (3.26)$$

Since $\lim_{k \to \infty} \alpha_k = 0$, it follows from (3.25) and (3.26) that

$$\lim_{k \to \infty} \|x_{k+1} - (y_k - e_k)\| = 0. \quad (3.28)$$

Combining (3.28) with (3.22), we obtain, for every $t > 0$,

$$\limsup_{k \to \infty} \langle u - J_t u, x_{k+1} - J_t u \rangle \le 0. \quad (3.29)$$

Letting $t \to \infty$ and noting that $\bar{x} = \lim_{t \to \infty} J_t u$, we get

$$\limsup_{k \to \infty} \langle u - \bar{x}, x_{k+1} - \bar{x} \rangle \le 0. \quad (3.30)$$
Claim 3. Show that $x_k \to \bar{x}$ as $k \to \infty$.

Since $\bar{x} \in T^{-1}0 = F(J_{c_k})$, by Lemma 2.7 and the nonexpansivity of $J_{c_k}$ we have

$$\|y_k - \bar{x}\| = \|J_{c_k}(x_k + e_k) - \bar{x}\| \le \|x_k + e_k - \bar{x}\| \le \|x_k - \bar{x}\| + \|e_k\|. \quad (3.31)$$

Consequently, since $|1 - \rho_k| \|d_k\| \le \|e_k\|$ and $y_k - e_k = x_k - d_k$,

$$\left\| P_C[x_k - \rho_k d_k] - \bar{x} \right\| \le \left\| P_C[x_k - \rho_k d_k] - (y_k - e_k) \right\| + \|y_k - e_k - \bar{x}\| \le \|e_k\| + \|y_k - \bar{x}\| + \|e_k\| \le \|x_k - \bar{x}\| + 3\|e_k\|. \quad (3.32)$$

By the definition of $x_{k+1}$,

$$x_{k+1} - \bar{x} = \alpha_k (u - \bar{x}) + \beta_k \left( P_C[x_k - \rho_k d_k] - \bar{x} \right) + \gamma_k (x_k - \bar{x}). \quad (3.33)$$

Taking the inner product of (3.33) with $x_{k+1} - \bar{x}$ and using (3.32) together with the Cauchy–Schwarz inequality, we obtain

$$\|x_{k+1} - \bar{x}\|^2 \le \alpha_k \langle u - \bar{x}, x_{k+1} - \bar{x} \rangle + (1 - \alpha_k) \|x_k - \bar{x}\| \|x_{k+1} - \bar{x}\| + 3\|e_k\| \|x_{k+1} - \bar{x}\| \le \alpha_k \langle u - \bar{x}, x_{k+1} - \bar{x} \rangle + \frac{1 - \alpha_k}{2} \left( \|x_k - \bar{x}\|^2 + \|x_{k+1} - \bar{x}\|^2 \right) + 3\|e_k\| \|x_{k+1} - \bar{x}\|. \quad (3.34)$$

Set $\sigma_k = \max\{\langle u - \bar{x}, x_{k+1} - \bar{x} \rangle, 0\}$; by Claim 2, $\sigma_k \to 0$ as $k \to \infty$. Rearranging (3.34) and using $2/(1 + \alpha_k) \le 2$, we get

$$\|x_{k+1} - \bar{x}\|^2 \le (1 - \alpha_k) \|x_k - \bar{x}\|^2 + 2\alpha_k \sigma_k + 6\|e_k\| \|x_{k+1} - \bar{x}\|. \quad (3.37)$$

Since $\{x_k\}$ is bounded by Claim 1 and $\sum_{k=0}^{\infty} \|e_k\| < \infty$, we have

$$\sum_{k=0}^{\infty} 6\|e_k\| \|x_{k+1} - \bar{x}\| < \infty. \quad (3.38)$$

Denote $a_k = \|x_k - \bar{x}\|^2$, $b_k = 2\alpha_k \sigma_k$, and $\delta_k = 6\|e_k\| \|x_{k+1} - \bar{x}\|$. Since $\sigma_k \to 0$, we have $b_k = o(\alpha_k)$, and applying Lemma 2.9 with $\lambda_k = \alpha_k$ we conclude that $a_k \to 0$ as $k \to \infty$, that is, $x_k \to \bar{x}$ as $k \to \infty$. The proof is complete.

4. Applications

This section considers two applications: convex programming problems and generalized variational inequalities.

Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $f: H \to (-\infty, +\infty]$ be a proper closed convex function. Consider the convex programming problem

$$\min_{x \in H} f(x). \quad (4.1)$$

In 1965, Moreau [18] showed that if $f$ is a proper closed convex function, then $\partial f$ is a maximal monotone operator, where $\partial f(x)$ denotes the subdifferential of $f$, that is,

$$\partial f(x) = \{s \in H : f(y) \ge f(x) + \langle y - x, s \rangle, \ \forall y \in H\}, \quad x \in H. \quad (4.2)$$

Since $x$ is a minimizer of $f$ if and only if $0 \in \partial f(x)$, problem (4.1) can be transformed directly into the maximal monotone inclusion problem (1.1). Note that the inclusion

$$x_k + e_k \in y_k + c_k \partial f(y_k) \quad (4.3)$$

is equivalent to the equation

$$y_k = \arg\min_{y \in H} \left\{ f(y) + \frac{1}{2c_k} \|y - x_k - e_k\|^2 \right\}. \quad (4.4)$$

Hence, when dealing with convex programming problems, we use (4.4) instead of (3.1); see [12, 13], for example. Specifically, Theorem 3.6 can be stated in detail as follows.

Theorem 4.1. Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $f: H \to (-\infty, +\infty]$ be a proper closed convex function such that $(\partial f)^{-1}(0) \cap C \ne \emptyset$. Let $\{c_k\} \subset (0, \infty)$ with $c_k \to \infty$ as $k \to \infty$, and let $\{e_k\}$ be a sequence in $H$ satisfying $\|e_k\| \le \eta_k \|x_k - y_k\|$ with $\sup_{k \ge 0} \eta_k = \eta < 1$ and $\sum_{k=0}^{\infty} \|e_k\| < \infty$. Given $x_0 \in H$ and $u \in H$, let $\{x_k\}$ be generated by

$$y_k = \arg\min_{y \in H} \left\{ f(y) + \frac{1}{2c_k} \|y - x_k - e_k\|^2 \right\}, \qquad x_{k+1} = \alpha_k u + \beta_k P_C\left[x_k - \rho_k (x_k - y_k + e_k)\right] + \gamma_k x_k, \quad (4.5)$$

where the stepsize $\rho_k$ is defined by

$$\rho_k = \frac{\langle x_k - y_k, x_k - y_k + e_k \rangle}{\|x_k - y_k + e_k\|^2}, \quad (4.6)$$

and $\{\alpha_k\}$, $\{\beta_k\}$, and $\{\gamma_k\}$ are real sequences in $[0, 1]$ satisfying: (i) $\alpha_k + \beta_k + \gamma_k = 1$; (ii) $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=0}^{\infty} \alpha_k = \infty$; (iii) $\sum_{k=0}^{\infty} \gamma_k < \infty$. Then the sequence $\{x_k\}$ converges strongly to the minimizer of $f$ nearest to $u$.
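For a concrete instance of the proximal subproblem in (4.5), take $f(y) = \|y\|_1$; the minimization then has the familiar soft-thresholding solution. The function below is our own sketch, not part of the paper.

```python
import numpy as np

# Closed-form solution of argmin_y { ||y||_1 + (1/(2c)) ||y - v||^2 },
# i.e. the subproblem of Theorem 4.1 with f the l1 norm, where v plays
# the role of x_k + e_k. Illustrative naming throughout.
def prox_l1(v, c):
    return np.sign(v) * np.maximum(np.abs(v) - c, 0.0)

y = prox_l1(np.array([3.0, -0.2, 1.5]), 1.0)
print(y)  # large components shrink by c; the small one is zeroed out
```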

We now turn to another application of the proposed method. In recent years, approximate proximal point algorithms have become an important family of methods for solving monotone variational inequalities; see [19–21], for example. Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $F: C \to 2^H$ be a monotone set-valued mapping. Consider the following generalized variational inequality GVI$(F, C)$: find a vector $x^* \in C$ and $w^* \in F(x^*)$ such that

$$\langle x - x^*, w^* \rangle \ge 0, \quad \forall x \in C. \quad (4.7)$$

When $F$ is single valued, GVI$(F, C)$ reduces to the classical variational inequality VI$(F, C)$. Let $T(x) = F(x) + N_C(x)$, where $N_C(\cdot)$ denotes the normal cone operator with respect to $C$, namely,

$$N_C(x) = \begin{cases} \{z : \langle z, y - x \rangle \le 0, \ \forall y \in C\}, & \text{if } x \in C, \\ \emptyset, & \text{otherwise}. \end{cases} \quad (4.8)$$

In this way, GVI$(F, C)$ can easily be transformed into the maximal monotone inclusion problem (1.1). In particular, for given $x_k$ and $c_k > 0$, using the proximal subproblem (3.1) to solve problem (4.7) is equivalent to finding $y_k \in C$ and $w_k \in F(y_k)$ such that

$$y_k = P_C\left[x_k + e_k - c_k w_k\right]. \quad (4.9)$$

Specifically, when dealing with GVI$(F, C)$, Theorem 3.6 can be stated in detail as follows.

Theorem 4.2. Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $F: C \to 2^H$ be a monotone set-valued mapping. Suppose that the solution set of the generalized variational inequality (4.7) is nonempty. Let $\{c_k\} \subset (0, \infty)$ with $c_k \to \infty$ as $k \to \infty$, and let $\{e_k\}$ be a sequence in $H$ such that $\|e_k\| \le \eta_k \|x_k - y_k\|$ with $\sup_{k \ge 0} \eta_k = \eta < 1$ and $\sum_{k=0}^{\infty} \|e_k\| < \infty$. Given $x_0 \in H$ and $u \in H$, let $\{x_k\}$ be generated by

$$y_k = P_C\left[x_k + e_k - c_k w_k\right], \quad w_k \in F(y_k), \qquad x_{k+1} = \alpha_k u + \beta_k P_C\left[x_k - \rho_k (x_k - y_k + e_k)\right] + \gamma_k x_k, \quad (4.10)$$

where the stepsize $\rho_k$ is defined by

$$\rho_k = \frac{\langle x_k - y_k, x_k - y_k + e_k \rangle}{\|x_k - y_k + e_k\|^2}, \quad (4.11)$$

and $\{\alpha_k\}$, $\{\beta_k\}$, and $\{\gamma_k\}$ are real sequences in $[0, 1]$ satisfying: (i) $\alpha_k + \beta_k + \gamma_k = 1$; (ii) $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=0}^{\infty} \alpha_k = \infty$; (iii) $\sum_{k=0}^{\infty} \gamma_k < \infty$. Then the sequence $\{x_k\}$ converges strongly to the solution of GVI$(F, C)$ nearest to $u$.

5. Preliminary Computational Results

In this section, we report some numerical experiments and compare Algorithm 3.1 with the algorithm presented by Qin et al. [13]. All codes were written in MATLAB 7.0 and run on a computer with an Intel Core 2 1.86 GHz CPU under Windows XP. Throughout the computational experiments, the parameters were chosen as $\eta = 0.9$, $c_k = k + 1$, $\alpha_k = 1/[10(k+1)]$, and $\gamma_k = 1/2^k$. The stopping criterion is $\|\min\{x_k, F(x_k)\}\|_\infty \le 10^{-6}$.

Example 5.1. Consider the following generalized variational inequality problem with four variables, which is tested in [22]. Let $C = \{x \in \mathbb{R}^4_+ : \sum_{i=1}^4 x_i = 1\}$ and let $F: C \to 2^{\mathbb{R}^4}$ be defined by

$$F(x) = \left\{ (t, \ t + 2x_2, \ t + 3x_3, \ t + 4x_4)^T : t \in [0, 1] \right\}. \quad (5.1)$$

Then $(1, 0, 0, 0)$ is a solution of this problem.

We solve this problem with different starting points. The numbers of iterations (It. num.) and the computation times (CPU(s)) are summarized in Table 1. From Table 1, we can see that Algorithm 3.1 performs better. In addition, for the considered problem, the number of iterations and the computation time are not very sensitive to the starting point.
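Reproducing experiments of this kind requires the projection onto the simplex $C$ of Example 5.1; a standard sort-based routine (our implementation sketch, not the authors' code) is:

```python
import numpy as np

# Projection onto the unit simplex C = {x in R^n : x >= 0, sum x = 1},
# computed by the classical sort-and-threshold method.
def proj_simplex(v):
    u = np.sort(v)[::-1]                          # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)        # shift making the sum 1
    return np.maximum(v + theta, 0.0)

p = proj_simplex(np.array([0.9, 0.6, 0.1, -0.4]))
print(p, p.sum())  # a feasible point of C
```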

6. Conclusion

This paper proposes an approximate proximal point algorithm for maximal monotone inclusion problems that adopts a more general correction step. Under suitable and standard assumptions on the algorithm parameters, we obtain the strong convergence of the algorithm. Note that Algorithm 3.1 includes several existing methods as special cases. Therefore, the proposed algorithm is expected to be widely applicable.

Acknowledgments

The authors are grateful to the anonymous reviewer and the editor for their valuable comments and suggestions, which have greatly improved the paper. This work was supported by the National Natural Science Foundation of China (60974082) and the Fundamental Research Funds for the Central Universities (K50511700006, K50511700008).