Abstract

In this paper, we propose and study a diagonal inexact version of Bregman proximal methods for solving convex optimization problems with and without constraints. The proposed method provides a unified framework that covers several existing algorithms and yields new ones.

1. Introduction

Let the convex functions and the nonempty subset C of be defined by

Let us consider the convex optimization problem:

To solve , many authors [17] have combined exterior penalty methods with the proximal method (PM) defined by where is the set of proper closed convex functions on . PM and its variants have been studied by several authors [6, 8–13]. In this work, we generalize this process by introducing the Bregman distance defined by where h is a Bregman function [14].
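For reference, the classical forms of these objects, as standard in the literature (the paper's own displayed formulas, lost here, may use slightly different notation), are:

```latex
% Bregman distance associated with a differentiable, strictly convex kernel h
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle,
\qquad x \in \overline{S},\; y \in S.

% Proximal iteration with Bregman regularization
x^{k+1} \in \operatorname*{argmin}_{x}
  \left\{ f(x) + \frac{1}{\lambda_k}\, D_h(x, x^k) \right\}.
```

Taking h(x) = ‖x‖²/2 gives D_h(x, y) = ‖x − y‖²/2 and recovers the classical proximal method.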

In order to solve , we study the coupling of exterior penalty methods with the diagonal inexact version of the Bregman proximal method defined by

The exact version, BPM, defined by has been studied by several authors [15–18].

We propose and study a diagonal inexact version of the Bregman proximal method, which we call DBPM, defined by where the sequence is given and approaches f.
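To make the diagonal scheme concrete, the following sketch runs the iteration with the Euclidean kernel h(x) = x²/2 (so the Bregman term reduces to a squared distance) on a one-dimensional problem with an illustrative approximating sequence f_n → f. All concrete choices (f, f_n, the step λ) are ours, not the paper's:

```python
# Illustrative sketch of a diagonal proximal iteration with the Euclidean
# kernel h(x) = x^2/2, for which D_h(x, y) = (x - y)^2 / 2.
# The target f(x) = (x - 3)^2 is approximated by f_n(x) = f(x) + x^2/n.

def diagonal_prox(x0, n_iters=200, lam=1.0):
    """Minimize f(x) = (x - 3)^2 through the approximations f_n."""
    x = x0
    for n in range(1, n_iters + 1):
        # Closed-form minimizer of f_n(x) + (1/(2*lam)) * (x - x_prev)^2:
        # stationarity: 2(x - 3) + 2x/n + (x - x_prev)/lam = 0
        x = (6.0 + x / lam) / (2.0 + 2.0 / n + 1.0 / lam)
    return x

print(diagonal_prox(x0=0.0))  # approaches 3, the minimizer of f
```

Since the perturbation 2x/n in the stationarity condition vanishes as n grows, the iterates drift toward the minimizer of the limit function f.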

By introducing the penalty functions in DBPM, we deduce a solution of .

If , the proposed method appears as an inexact version of (6) and solves the unconstrained convex optimization problem:

For , DBPM coincides with the diagonal proximal method of Alart and Lemaire [1] as well as with the penalization method given by Auslender [2].

2. Preliminary

In this section, we recall some theoretical properties of the so-called entropic approximations studied by Kabbadj in [17]. This study covers the regularity and approximation properties of the Moreau–Yosida approximation [19]. These results are necessary for the analysis of the methods proposed in Section 3.

Let S be an open convex subset of and

We define by

Let us consider the following hypotheses:
: h is continuously differentiable on S.
: h is continuous and strictly convex on .
: the sets and are bounded, where
(i): if is such that , then .
: if and are two sequences of S such that and , then

Definition 1. (i) h: is a Bregman-type function on S, or “D-function,” if h verifies , and . (ii) is called an entropic distance if h is a Bregman function. We put A(S) = { verifying and } and B(S) = { verifying , and }.

Theorem 1 (see [17]). Let and such that
If one of the two following conditions is verified: (i) and h verifies ; (ii) ; then for all and for all , the function has a unique minimum point on .

Definition 2. Let f and h verify the hypotheses of Theorem 1. (i) The entropic approximation of f with respect to h, of parameter , is the function defined by (ii) The entropic proximal mapping of f with respect to h, of parameter λ, is the operator defined by

Proposition 1 (see [17]). Let and such that(a)ri (dom f) (b)Then, .

Proposition 2 (see [17]). We suppose that h and f verify the conditions of Proposition 1.

If and h verify , then is a continuous mapping.

Proposition 3 (see [17]). We suppose that h and f verify the hypotheses of Proposition 2.

If h is twice continuously differentiable on S and and are jointly convex, then is continuously differentiable and convex, and for all : where

Proposition 4. We suppose that h and f verify the hypotheses of Proposition 3. If H is positive definite, then

Proof. Let . Since H is positive definite, we then deduce that . From (17), we have We then obtain
Conversely, let be such that . From (16) and (18), we have thus , which completes the proof.
Some examples of Bregman functions are given below.

Example 1. If and , then

Example 2. If and , with the convention , then

Example 3. If and  = , then We easily verify that
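The first two examples can be checked numerically. The sketch below evaluates the general Bregman distance D_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩ for the squared-norm kernel (which yields half the squared Euclidean distance) and the negative-entropy kernel (which yields the Kullback–Leibler-type entropic distance); the helper names are ours:

```python
import numpy as np

def bregman(h, grad_h, x, y):
    """D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>."""
    return h(x) - h(y) - np.dot(grad_h(y), x - y)

# Kernel 1: h(x) = ||x||^2 / 2  ->  D_h(x, y) = ||x - y||^2 / 2.
h_sq = lambda x: 0.5 * np.dot(x, x)
g_sq = lambda x: x

# Kernel 2: h(x) = sum_i x_i log x_i (negative entropy on the positive
# orthant)  ->  D_h(x, y) = sum_i [x_i log(x_i / y_i) - x_i + y_i].
h_ent = lambda x: np.sum(x * np.log(x))
g_ent = lambda x: np.log(x) + 1.0

x = np.array([1.0, 2.0]); y = np.array([3.0, 1.0])
print(bregman(h_sq, g_sq, x, y))   # 2.5 = ||x - y||^2 / 2
print(bregman(h_ent, g_ent, x, y)) # entropic (KL-type) distance
```

Note that the entropic distance is not symmetric in (x, y), unlike the Euclidean case.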

3. Analysis of the Diagonal Bregman Proximal Method

In this section, we assume the following:(A): and (B): , (C): lim inf

From (15), we can then construct the sequence defined by (Algorithm 1):

(1)Input:
(2)Choose and , and find such that
(3)Set and go to step 2
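The three steps above can be sketched as follows, with the entropic kernel h(x) = x log x of Example 2 and a summable inexactness tolerance for the inner minimization. The inner solver (a ternary search, valid because the subproblem is strictly convex), the approximating sequence f_n, and the schedules are illustrative choices, not the paper's:

```python
# Minimal sketch of Algorithm 1 (DBPM): at step n, minimize
# f_n(x) + (1/lam) * D_h(x, x_n) up to a tolerance eps_n.
import math

def d_h(x, y):
    """Entropic Bregman distance for h(x) = x*log(x), x, y > 0."""
    return x * math.log(x / y) - x + y

def inner_min(phi, lo, hi, tol):
    """Ternary search for the minimum of a unimodal phi on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def dbpm(x0=1.0, n_iters=60, lam=1.0):
    x = x0
    for n in range(1, n_iters + 1):
        f_n = lambda t: (t - 2.0) ** 2 + t / n   # f_n -> f(t) = (t - 2)^2
        phi = lambda t: f_n(t) + d_h(t, x) / lam
        eps_n = 1.0 / n ** 2                      # summable inexactness
        x = inner_min(phi, 1e-6, 10.0, eps_n)
    return x

print(dbpm())  # approaches 2, the minimizer of f on (0, +inf)
```

The strict convexity of f_n plus the entropic prox term guarantees the inner subproblem has a unique minimizer (as in Theorem 1), so an inexact solve to tolerance eps_n is well defined.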

In what follows, we derive a convergence result (Theorem 2) for the DBPM framework. First, we need to establish a few technical results.

Lemma 1 (see [20]). Let be two functions of . If there exists at which is finite and continuous, then for , for all ,

Definition 3. The sequence verifies the K-property if and only if the following properties hold: is bounded and

Lemma 2. If the sequence verifies the K-property, then

Proof. If the sequence does not tend to zero, then there exist and a subsequence of such that The sequence is bounded and Adh ; there exist a subsequence of and such that . and allow us to write, from and . On the other hand, is continuous on S, so . It follows that is a subsequence of ; from , with the entropic proximal method (29), we have , so
Let us now consider the function defined by ,

Proposition 5.

Proof. which is equivalent to According to , there exists such that which means Replacing x by in (34), we get Finally, Conversely, let z be such that Replacing by , we get (34). From the above, which establishes the desired equality.

Definition 4.

Theorem 2. We assume that (i) , where (ii) the sequence generated by DBPM is bounded. Then (a) (b) Moreover, if f and h verify the conditions of Proposition 4, then

Proof. According to (18), we can write The sequence is bounded; let be such that Considering (45), Therefore, So, from (i), we have On the one hand, on the other hand, we have Finally, the two previous inequalities make it possible to write If , then and . So, If , then, from (52), Let us show that ; from (45),
when
As , we have On the other hand, From Lemma 1, there exists such that and Since increases with ε, we have Therefore, there exists such that From Proposition 5, there exists such that Finally, there exists such that From (55) and (61), we have Since Adh , the sequence then verifies the K-property. From Lemma 2, . On the other hand, for all there exists such that for all , By replacing y by in (63), we get Moreover, is bounded. Indeed, so there exists such that
From (i), so From (64), we have From , is bounded. Passing to the limit in (66), we have then, Finally, we have (b) Let ; there then exists a subsequence of such that ; we have From (20), we have

4. Exterior Penalty Coupled with Bregman Proximal Method

Let be convex functions and let C be the constraint set given by

We suppose that C satisfies Slater's condition:

Let us consider the linear exterior penalty functions defined by and the quadratic exterior penalty defined by where  = max and is an increasing sequence of strictly positive real numbers tending to
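For constraints of the form g_i(x) ≤ 0, the standard linear and quadratic exterior penalties (the classical forms; the paper's exact displays were lost, so coefficients and notation here are the textbook ones) can be written as:

```python
import numpy as np

def penalty_linear(g_vals, r_n):
    """Linear exterior penalty: r_n * sum_i max(g_i(x), 0)."""
    return r_n * np.sum(np.maximum(g_vals, 0.0))

def penalty_quadratic(g_vals, r_n):
    """Quadratic exterior penalty: r_n * sum_i max(g_i(x), 0)^2."""
    return r_n * np.sum(np.maximum(g_vals, 0.0) ** 2)

g = np.array([-1.0, 0.5])          # first constraint satisfied, second violated
print(penalty_linear(g, 10.0))     # 5.0
print(penalty_quadratic(g, 10.0))  # 2.5
```

Both penalties vanish on C and grow with r_n outside it, which is what drives the penalized minimizers toward the feasible set as r_n → +∞.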

Let us put :

In what follows, we assume(A′): (B′):

so that conditions (A), (B), and (C) of Section 3 are verified for f and , j = 1, 2.

We give below an estimate of , j = 1, 2.

Proposition 6. (a)(b)

Proof. Let and . Since Slater's condition is verified, by Ekeland–Temam [21] (Chapter 3, Theorem 5.2) there exist Lagrange multipliers such that . From (18), we have ; by replacing y with , we obtain where verifies (3). On the other hand, Let us put It follows that where Therefore, which leads to From (86), (89), and (91), we obtain (a) From (85), For , from (92), we have Thus, for n such that Conversely, Therefore, (b) If denotes the Euclidean norm on , then by proceeding as above, we deduce the result. Let us now consider the exterior penalty methods coupled with the entropic proximal method (Algorithm 2):

(1)Input:
(2)Choose and , and find such that
(3)Set and go to step 2

Theorem 3. Suppose that (i) and (ii) is coercive. Then the sequence generated by , j = 1, 2, is bounded, , and

Proof. Let us show that is bounded. By replacing u by in (100), we obtain Let R = ; since is increasing, we put We deduce which leads to Let ε be such that ; since , we have From (i) and (ii), we deduce that the sequence is bounded, so by application of Theorem 2 and Proposition 6, the result is immediate.
In a similar way, we deduce the result for j = 2.
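The coupling of Algorithm 2 can be illustrated on a small one-dimensional instance: minimize (x − 3)² subject to x ≤ 1, whose constrained minimizer is x = 1. The quadratic exterior penalty with r_n = n is folded into f_n, and each step takes a Euclidean proximal step (the kernel, problem data, and schedules are our illustrative choices):

```python
# Sketch of exterior quadratic penalty coupled with a Euclidean proximal step
# for min (x - 3)^2 subject to x <= 1.

def inner_min(phi, lo, hi, tol=1e-8):
    """Ternary search for the minimum of a unimodal phi on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def penalized_prox(x0=0.0, n_iters=100, lam=1.0):
    x = x0
    for n in range(1, n_iters + 1):
        r_n = float(n)   # increasing penalty parameter, r_n -> +inf
        f_n = lambda t: (t - 3.0) ** 2 + r_n * max(t - 1.0, 0.0) ** 2
        phi = lambda t: f_n(t) + (t - x) ** 2 / (2.0 * lam)
        x = inner_min(phi, -5.0, 5.0)
    return x

print(penalized_prox())  # approaches 1, the constrained minimizer
```

The unconstrained minimizer 3 is infeasible, so the growing penalty pushes the iterates to the boundary point x = 1, as Theorem 3 predicts for this class of couplings.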

5. Example

Let us consider the following optimization problem:where and .

The algorithm can be applied to solve (). We take

Let us consider the function h: defined by

We easily show that and , which verifies (A′).

Let us put : is compact and is continuous, so

We have

The sequence generated by the algorithm is defined by and

By writing the optimality condition, we have

On the other hand,

Then, where is coercive, so by applying Theorem 3, we have

Remarks 1. (i) The convergence performance of can be discussed in terms of the parameter . Note that for , (ii) Let where A is a symmetric positive definite matrix and . The methods developed above can solve optimization problems of the type

6. Conclusion

The class of methods studied in this work constitutes a unified framework for several existing methods that solve convex optimization problems with and without constraints, while providing new ones. More precisely: (i) For , DBPM coincides with the diagonal proximal method DPM studied by Alart and Lemaire [1]. (ii) If in DBPM, j = 1, 2, we recover the penalization methods studied by Auslender [2]. (iii) If , DBPM appears as an inexact version of BPM and solves the unconstrained convex optimization problem:

the convergence of this version is included in our analysis and answers the question raised by Eckstein in [15]. (i) If and in DBPM, we recover BPM as studied in [15–18]. (ii) If and in DBPM, we recover PM as studied in [6, 8–13]. (iii) If and , DBPM allows minimizing f on

Data Availability

No data were used to support this study.

Conflicts of Interest

The author declares that there are no conflicts of interest.