
A convergence result for an outer approximation scheme

R.S. BurachikI,*; J.O. LopesII,†

IEngenharia de Sistemas e Computação, COPPE-UFRJ, Cx. Postal 68511, 21941-972 Rio de Janeiro, RJ, Brazil, E-mail: regi@cos.ufrj.br

IIDepartamento de Matemáticas, Universidade Federal de Piauí, 21941-972 Piauí, PI, Brazil, E-mail: jurandir@ufpi.br

ABSTRACT

In this work we study the variational inequality problem in finite dimensional spaces. The constraint set we consider has the structure of semi-infinite programming. Standard convergence analysis for outer approximation methods includes boundedness of the constraint set, or, alternatively, coerciveness of the data. Using recession tools, we are able to replace these assumptions by the hypotheses of boundedness of the solution set and that the domain of the operator contains the constraint set.

Mathematical subject classification: 47H05, 90C34, 47J20.

Key words: maximal monotone operators, Banach spaces, outer approximation algorithm, semi-infinite programs.

1 Introduction

Let W ⊂ ℝⁿ be a nonempty, closed and convex set. Given a maximal monotone operator T: ℝⁿ ⇉ ℝⁿ, we consider the classical variational inequality problem for T and W, VIP(T, W), defined by: Find x* ∈ W such that there exists u* ∈ T(x*) with

⟨u*, x − x*⟩ ≥ 0    (1.1)

for all x ∈ W. The set W will be called the feasible set for Problem (1.1). We denote by S* the set of solutions of VIP(T, W).

In the particular case in which T is the subdifferential of f: ℝⁿ → ℝ ∪ {+∞}, where f is proper, convex and lower semicontinuous, (1.1) reduces to the nonsmooth constrained optimization problem:

min f(x) subject to x ∈ W.    (1.2)

This paper considers a feasible set W explicitly defined by:

W := {x ∈ ℝⁿ | g(x, y) ≤ 0 for all y ∈ Y}.    (1.3)

We assume throughout this work that the set Y and the function g: ℝⁿ × Y → ℝ satisfy the following assumptions:

(G1) Y is a compact set contained in ℝᵖ;

(G2) g(·, y) is a proper, lower semicontinuous and convex function for all y ∈ Y;

(G3) g is continuous on ℝⁿ × Y.

There are several applications for which the feasible set has the form (1.3) (see, e.g., [10, 4]). This kind of formulation was studied by Blankenship and Falk [3] and, in the context of reflexive Banach spaces, in [9]. The methods studied in [3, 9] can be cast in the family of ''outer approximation methods''. Various algorithms of this kind have been proposed in the past four decades, having been introduced by Cheney and Goldstein [8] and Kelley [15] in the form of cutting plane algorithms. The basic idea of these methods is to replace (1.2) by a sequence of problems (Pn) in which the feasible sets Wn contain the original set W. More specifically, an outer approximation algorithm can be seen as a method that generates a sequence {xn} which converges to a solution of the original problem without demanding that xn belong to the original set W. The outer approximation methods considered in [3, 9] were applied to solve the constrained optimization problem (1.2). In these works, the convergence results are established under the hypothesis of boundedness of the constraint set.

The aim of our work is to solve problem (1.1), with W given by (1.3). To this end, we extend the method proposed in [3] in two ways. First, we solve problem (1.1), which is, as stated before, a generalization of problem (1.2). Second, we allow W to be unbounded. The usual convergence result in this context is optimality of all accumulation points of the iterates. To obtain this result, we assume that the solution set is nonempty and bounded, and that the domain of the operator contains the constraint set. In our analysis, we apply a technique used by Auslender and Teboulle in [1], based on recession tools.

Our work is built around the following generic outer approximation scheme for solving VIP(T, W).

Algorithm.

Step 0 (Initialization): Set k = 1 and take W1 ⊇ W.

Step 1 (Iteration k): Given Wk ⊇ W, find xk solving the k-th approximating problem (i.e., xk solves VIP(T, Wk)).

The paper is organized as follows. Section 2 contains some theoretical preliminaries which are necessary for our analysis. In Section 3 we introduce the algorithm and establish the convergence result.

2 Theoretical preliminaries

2.1 Maximal monotone operators

For an arbitrary point-to-set operator T: ℝⁿ ⇉ ℝⁿ, we recall the following definitions:

Domain of T:

• D(T) := {x ∈ ℝⁿ | T(x) ≠ ∅}

Graph of T:

• G(T) := {(x, v) ∈ ℝⁿ × ℝⁿ | v ∈ T(x)}

Range of T:

• R(T) := {v ∈ ℝⁿ | v ∈ T(x) for some x ∈ ℝⁿ}

The operator T: ℝⁿ ⇉ ℝⁿ is monotone if

⟨u − v, x − y⟩ ≥ 0

for all x, y ∈ ℝⁿ and for all u ∈ T(x), v ∈ T(y). A monotone operator T is called maximal if for any other monotone operator T′ such that T′(x) ⊇ T(x) for all x ∈ ℝⁿ, it holds that T′ = T.

From now on, T is a maximal monotone operator.

Our convergence theorems require two conditions on the operator T, namely para- and pseudomonotonicity, which we discuss next. The notion of paramonotonicity was introduced in [6] and further studied in [7, 13]. It is defined as follows.

Definition 2.1. The operator T is paramonotone in W if it is monotone and ⟨v − u, y − z⟩ = 0 with y, z ∈ W, v ∈ T(y), u ∈ T(z) implies that u ∈ T(y) and v ∈ T(z). The operator T is paramonotone if this property holds in the whole space.

Proposition 2.2 (see [13, Proposition 4]). Assume that T is paramonotone and let x̄ be a solution of VIP(T, W). Let x* ∈ W be such that there exists an element u* ∈ T(x*) with ⟨u*, x* − x̄⟩ ≤ 0. Then x* also solves VIP(T, W).
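To indicate why this criterion works, we sketch the standard argument (our summary; here x̄ is the given solution and ū ∈ T(x̄) satisfies ⟨ū, x − x̄⟩ ≥ 0 for all x ∈ W):

```latex
% Monotonicity and the fact that \bar{x} solves VIP(T,W) give
\langle u^{*}-\bar{u},\, x^{*}-\bar{x}\rangle \ge 0,
\qquad
\langle \bar{u},\, x^{*}-\bar{x}\rangle \ge 0 .
% Combined with the hypothesis \langle u^{*}, x^{*}-\bar{x}\rangle \le 0,
0 \le \langle u^{*}-\bar{u},\, x^{*}-\bar{x}\rangle
  = \langle u^{*},\, x^{*}-\bar{x}\rangle
    - \langle \bar{u},\, x^{*}-\bar{x}\rangle \le 0 ,
% so both inner products vanish.  Paramonotonicity then yields
% \bar{u}\in T(x^{*}), and for every x\in W,
\langle \bar{u},\, x-x^{*}\rangle
  = \langle \bar{u},\, x-\bar{x}\rangle
    + \langle \bar{u},\, \bar{x}-x^{*}\rangle
  = \langle \bar{u},\, x-\bar{x}\rangle \ge 0 ,
% which is precisely the statement that x^{*} solves VIP(T,W).
```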

The remark below contains some examples of operators which are paramonotone.

Remark 2.3. If T is the subdifferential of a convex function f, then T is paramonotone. Another condition which guarantees paramonotonicity of a point-to-point operator T: ℝⁿ → ℝⁿ is that T be differentiable with the symmetrization of its Jacobian matrix having the same rank as the Jacobian matrix itself. However, relevant operators fail to satisfy this condition. More precisely, the saddle-point operator T := (∂ₓL(x, y), −∂ᵧL(x, y)), where L is the Lagrangian associated to a constrained convex optimization problem, is not paramonotone, except in trivial instances. For more details on paramonotone operators see [13].

Next we recall the definition of pseudomonotonicity, which was taken from [5] and should not be confused with other uses of the same word (see e.g., [14]).

Definition 2.4. Let F: ℝⁿ ⇉ ℝⁿ be a multifunction such that D(F) is closed and convex. F is said to be pseudomonotone if it satisfies the following condition: whenever the sequence {(xk, uk)} ⊂ G(F) satisfies

(a) {xk} converges to x*;

(b) lim supₖ ⟨uk, xk − x*⟩ ≤ 0;

then for every w ∈ D(F) there exists an element u* ∈ F(x*) such that

⟨u*, x* − w⟩ ≤ lim infₖ ⟨uk, xk − w⟩.

Remark 2.5. If T is the subdifferential of a convex function f, then T is pseudomonotone. The same is true if T is point-to-point and continuous. Hence, if T is the gradient of a differentiable convex function, then T is both para- and pseudomonotone. An example of a non-strictly monotone operator which is both para- and pseudomonotone is the subdifferential of the function φ: ℝ → ℝ defined by φ(t) = |t| for all t.
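For the last example, the subdifferential in question can be written down explicitly:

```latex
\partial\varphi(t) =
\begin{cases}
\{-1\}, & t < 0,\\
[-1,\,1], & t = 0,\\
\{+1\}, & t > 0.
\end{cases}
```

Here T = ∂φ is constant on (0, +∞) and on (−∞, 0), so it is not strictly monotone; nevertheless, whenever ⟨v − u, y − z⟩ = 0 with v ∈ ∂φ(y) and u ∈ ∂φ(z), a direct case check shows u ∈ ∂φ(y) and v ∈ ∂φ(z), as paramonotonicity requires.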

2.2 Level-boundedness and recession function of a point-to-set mapping

Recall that a function h: ℝⁿ → ℝ ∪ {+∞} is said to be level-bounded [16] when

lim_{||x||→∞} h(x) = +∞.    (2.1)

It is easy to see that the property above is equivalent to the boundedness of all level sets of h. Given a nonempty closed and convex set C ⊂ ℝⁿ and a convex lower semicontinuous function h: ℝⁿ → ℝ ∪ {+∞}, denote by C∞ the recession cone of C and by h∞ the recession function associated with h. When C = W is nonempty and given by (1.3), it holds [16, Propositions 3.9 and 3.23] that

W∞ = {d ∈ ℝⁿ | (g(·, y))∞(d) ≤ 0 for all y ∈ Y}.    (2.2)
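As a simple illustration (our example, not part of the original text), suppose each g(·, y) is affine, say g(x, y) = ⟨a(y), x⟩ − b(y) with a(y) ∈ ℝⁿ and b(y) ∈ ℝ. The recession function of an affine function is its linear part, so (2.2) takes the explicit form:

```latex
\bigl(g(\cdot,y)\bigr)_{\infty}(d) = \langle a(y),\, d\rangle ,
\qquad
W_{\infty} = \{\, d \in \mathbb{R}^{n} \mid
  \langle a(y),\, d\rangle \le 0 \ \text{for all } y \in Y \,\}.
```

In particular, W∞ collects exactly the directions along which every constraint remains satisfied no matter how far one travels, which is the role W∞ plays in the convergence analysis of Section 3.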

Assume now that T is maximal monotone and such that W ⊂ D(T). Following [1], the recession function associated with VIP(T, W) (see [1, Section 2]) is given by

r(d) := sup_{u ∈ T(W)} ⟨u, d⟩,    (2.3)

where T(W) = {u | there exists x ∈ W such that u ∈ T(x)}.

When W ⊂ D(T), it is established in [1, Section 2] that the solution set (T + N_W)⁻¹(0) of VIP(T, W) is nonempty and compact if and only if

r(d) > 0 for all d ∈ W∞ with d ≠ 0.    (2.4)

The above equivalence is the key tool that will allow us to obtain convergence results in the finite dimensional case, under the only assumptions that W ⊂ D(T) and that the solution set of VIP(T, W) is nonempty and compact.

2.3 Assumptions

Throughout this work we consider the following assumptions:

(H0) W ⊂ D(T) and the solution set of VIP(T, W) is nonempty and compact.

(H1) T is paramonotone and pseudomonotone with closed and convex domain.

As pointed out above, standard convergence analysis for outer approximation methods includes boundedness of the constraint set, or, alternatively, coerciveness of the data. We mention now two important problems for which these standard conditions do not hold, but (H0) and (H1) are satisfied. The first example is the problem of projecting a point x0 ∈ ℝⁿ onto an unbounded set W, where W := ∩_{y∈Y} W_y is an intersection of infinitely many closed and convex sets W_y := {x | g(x, y) ≤ 0}, and g, Y are as in (G1)-(G3). The corresponding maximal monotone operator is the subdifferential of the convex function f(·) := (1/2)||· − x0||². It is easy to check that T(x) = ∂f(x) = x − x0, a para- and pseudomonotone operator. The solution of this problem is the unique projection of x0 onto W. Another example is given by the linear programming problem when the constraint set is unbounded but the solution set is bounded. The operator T is in this case constant, and hence trivially para- and pseudomonotone.
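The first example above can be checked numerically. The sketch below (ours; the particular x0 and test points are arbitrary) verifies the identity ⟨T(x) − T(y), x − y⟩ = ||x − y||² for T(x) = x − x0, which gives monotonicity and makes the paramonotonicity implication immediate, since the inner product vanishes only when x = y:

```python
# Numerical illustration (not a proof) for the projection example:
# T(x) = x - x0 is the gradient of f(x) = (1/2)||x - x0||^2, and
# <T(x) - T(y), x - y> = ||x - y||^2, so the inner product vanishes
# only when x = y.
x0 = (1.0, -2.0)  # the point being projected (arbitrary choice)

def T(x):
    return (x[0] - x0[0], x[1] - x0[1])

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

def monotonicity_gap(x, y):
    """<T(x) - T(y), x - y>, which should equal ||x - y||^2."""
    u, v = T(x), T(y)
    return inner((u[0] - v[0], u[1] - v[1]), (x[0] - y[0], x[1] - y[1]))

for x, y in [((0.0, 0.0), (3.0, 1.0)), ((-1.5, 2.0), (0.5, 0.5))]:
    d2 = inner((x[0] - y[0], x[1] - y[1]), (x[0] - y[0], x[1] - y[1]))
    assert abs(monotonicity_gap(x, y) - d2) < 1e-12
```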

3 Outer approximation algorithm

Now we can state the algorithm formally. To do so, we need some notation:

• Yk is a finite subset of Y;

• Wk := {x ∈ ℝⁿ | g(x, y) ≤ 0 for all y ∈ Yk};

• {εk}k is an exogenous nonnegative sequence converging to zero;

• Given Wk, the k-th approximating problem (Pk) is: find xk ∈ Wk and uk ∈ T(xk) such that ⟨uk, x − xk⟩ ≥ 0 for all x ∈ Wk;

• Given xk a solution of (Pk), the k-th auxiliary problem (Ak) is: maximize g(xk, y) subject to y ∈ Y; its solution is denoted yk+1.

Outer Approximation Scheme (OAS)

Step 0. Set k = 1. If there exists ȳ ∈ Y such that g(·, ȳ) is level-bounded, then choose Y1 ⊂ Y finite, nonempty and such that ȳ ∈ Y1.

If g(·, y) is not level-bounded for any y ∈ Y, then choose any Y1 ⊂ Y finite and nonempty.

Iteration: For k = 1, 2, ...,

Step 1. Given Wk, find xk a solution of (Pk).

Step 2. For xk obtained in Step 1, solve (Ak), obtaining yk+1.

Step 3 (Check for solution and update if necessary). If g(xk, yk+1) ≤ −εk, stop. Otherwise, set Yk+1 := Yk ∪ {yk+1}.

Step 4. Set k := k + 1 and return to Step 1.
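To make the scheme concrete, the sketch below (our illustration, not from the paper) runs (OAS) on the first example of Section 2.3: projecting x0 = (2, 2) onto the unbounded set W = {x ∈ ℝ² : cos(y)x1 + sin(y)x2 ≤ 1 for all y ∈ [0, π/2]}, whose solution is (1/√2, 1/√2). Here (Pk) is a projection onto finitely many halfspaces, solved exactly by enumerating faces and vertices in ℝ², and (Ak) has an analytic solution; both solvers are specific to this toy instance:

```python
import math

# Toy instance of (OAS): project X0 onto
# W = {x in R^2 : g(x, y) <= 0 for all y in Y},  Y = [0, pi/2],
# with g(x, y) = cos(y)*x1 + sin(y)*x2 - 1.  W is unbounded, but the
# solution set (the projection of X0) is a singleton, so (H0) holds.
X0 = (2.0, 2.0)

def g(x, y):
    return math.cos(y) * x[0] + math.sin(y) * x[1] - 1.0

def solve_auxiliary(x):
    # (Ak): maximize g(x, .) over Y = [0, pi/2]; the unconstrained
    # maximizer of cos(y)*x1 + sin(y)*x2 is y = atan2(x2, x1), clipped to Y.
    return min(max(math.atan2(x[1], x[0]), 0.0), math.pi / 2)

def solve_approximating(x0, cuts):
    # (Pk): exact projection of x0 onto {x : a.x <= 1, a in cuts} in R^2.
    # The minimizer is x0 itself, its projection onto a single halfspace
    # (each a has unit length), or a vertex where two cuts intersect.
    def feasible(x):
        return all(a[0] * x[0] + a[1] * x[1] <= 1.0 + 1e-9 for a in cuts)
    candidates = [x0]
    for a in cuts:
        v = max(0.0, a[0] * x0[0] + a[1] * x0[1] - 1.0)
        candidates.append((x0[0] - v * a[0], x0[1] - v * a[1]))
    for i in range(len(cuts)):
        for j in range(i + 1, len(cuts)):
            a, b = cuts[i], cuts[j]
            det = a[0] * b[1] - a[1] * b[0]
            if abs(det) > 1e-12:  # intersection of a.x = 1 and b.x = 1
                candidates.append(((b[1] - a[1]) / det, (a[0] - b[0]) / det))
    return min((c for c in candidates if feasible(c)),
               key=lambda c: (c[0] - x0[0]) ** 2 + (c[1] - x0[1]) ** 2)

def oas(x0, max_iter=40):
    Y_k = [0.0]                              # Step 0: Y1 finite, nonempty
    for k in range(1, max_iter + 1):
        eps_k = 1.0 / k                      # exogenous tolerance, eps_k -> 0
        cuts = [(math.cos(y), math.sin(y)) for y in Y_k]
        x_k = solve_approximating(x0, cuts)  # Step 1
        y_next = solve_auxiliary(x_k)        # Step 2
        if g(x_k, y_next) <= -eps_k:         # Step 3: stopping rule
            return x_k
        Y_k.append(y_next)                   # Y_{k+1} := Y_k U {y_{k+1}}
    return x_k                               # Step 4 loops back

x_star = oas(X0)
```

On this instance the iterates approach (1/√2, 1/√2) from outside W, each iteration adding the currently most violated cut, in the spirit of Remark 3.3 below.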

Remark 3.1. As mentioned in the Introduction, the authors of [3] propose an outer approximation algorithm for (1.2). The scheme above is an adaptation of this algorithm to the variational inequality problem.

Remark 3.2. Note that

W ⊂ Wk+1 ⊂ Wk for all k,    (3.2)

and Wk is convex for all k. If there exists ȳ ∈ Y such that g(·, ȳ) is level-bounded, then it is clear that W1 is bounded, and hence the whole sequence {xk} will be bounded.

Remark 3.3. If the solution yk+1 of the k-th auxiliary problem (Ak) obtained in Step 2 satisfies

g(xk, yk+1) ≤ −εk,

then, since yk+1 maximizes g(xk, ·) over Y, it holds that g(xk, y) ≤ 0 for all y ∈ Y, i.e., xk ∈ W. Thus,

⟨uk, x − xk⟩ ≥ 0 for all x ∈ W,

so that xk is a solution of problem (1.1). This justifies the stopping rule of Step 3.

Our first theoretical result establishes that boundedness of the sequence is enough to obtain optimality of all accumulation points.

Lemma 3.4. Let the sequence {xk} be generated by (OAS). Suppose that (H1) holds and that the solution set S* is nonempty. If {xk} is bounded, then any accumulation point is a solution of the VIP(T, W).

Proof. If {xk} is bounded, we can choose a subsequence {x^{k_j}} of {xk} converging to x*. We will first prove that x* ∈ W. Assume that the opposite is true, i.e., x* ∉ W. Then there exist a point y* ∈ Y and a positive δ such that

g(x*, y*) ≥ δ > 0.

By extracting a further subsequence of {x^{k_j}} if necessary, we may assume, by compactness of Y, that the sequence {y^{k_j+1}} has a limit ȳ.

For j large enough, by continuity of g in the first variable, g(x^{k_j}, y*) ≥ δ/2. Thus, by definition of y^{k_j+1} as a maximizer of g(x^{k_j}, ·) over Y,

g(x^{k_j}, y^{k_j+1}) ≥ g(x^{k_j}, y*) ≥ δ/2.    (3.5)

Taking limits in (3.5), we get

g(x*, ȳ) ≥ δ/2 > 0,    (3.6)

where we are using the definition of {y^{k_j+1}}, {x^{k_j}} and the fact that {εk} converges to zero. As x^{k_{j+1}} ∈ W_{k_{j+1}}, we have that g(x^{k_{j+1}}, y) ≤ 0 for all y ∈ Y_{k_{j+1}}. Because k_{j+1} ≥ k_j + 1, we have Y_{k_{j+1}} ⊇ Y_{k_j+1}, so that

g(x^{k_{j+1}}, y^{k_j+1}) ≤ 0.    (3.7)

Using the facts that lim_j x^{k_j} = x* and lim_j y^{k_j+1} = ȳ, and taking limits in (3.7), we have that

g(x*, ȳ) ≤ 0,

which contradicts (3.6). Hence, x* ∈ W.

Next we prove that any accumulation point is a solution of VIP(T, W). Let x* be an accumulation point and take a subsequence {x^{k_j}} converging to x*. Since x^{k_j} is a solution of P_{k_j}, we know that there exists u^{k_j} ∈ T(x^{k_j}) such that

⟨u^{k_j}, x − x^{k_j}⟩ ≥ 0 for all x ∈ W_{k_j}.

Then by (3.2) we have that

⟨u^{k_j}, x − x^{k_j}⟩ ≥ 0 for all x ∈ W.    (3.10)

Since x* ∈ W it holds that ⟨u^{k_j}, x* − x^{k_j}⟩ ≥ 0. This implies that

lim sup_j ⟨u^{k_j}, x^{k_j} − x*⟩ ≤ 0.

Take x̄ ∈ S*. By pseudomonotonicity of T, we conclude that there exists u* ∈ T(x*) such that

⟨u*, x* − x̄⟩ ≤ lim inf_j ⟨u^{k_j}, x^{k_j} − x̄⟩.

Since x̄ ∈ W, (3.10) implies that ⟨u^{k_j}, x^{k_j} − x̄⟩ ≤ 0 for all j. Combining the last two inequalities we have that

⟨u*, x* − x̄⟩ ≤ 0.

Finally, by paramonotonicity of T and Proposition 2.2 we conclude that x* is a solution of VIP(T, W). □

Now we are in a position to present our convergence result, which establishes optimality of all accumulation points of the sequence. As far as we know, this result is new even in the optimization case.

Theorem 3.5. Assume that (H0) and (H1) hold. Let {xk} be the sequence generated by (OAS). Then any accumulation point of {xk} is a solution of VIP(T, W).

Proof. By Lemma 3.4, it is enough to establish boundedness of {xk}. If there exists ȳ ∈ Y such that (2.1) holds for h := g(·, ȳ), then, by Step 0, W1 is bounded and hence {xk} is also bounded. So we can assume that

for all y ∈ Y, g(·, y) is not level-bounded.    (3.16)

Suppose that {xk} is unbounded. This implies that there exists an infinite subset of indices K for which the subsequence {xk}_{k∈K} verifies

(i) ||xk|| → ∞, and

(ii) xk/||xk|| → d̄ ≠ 0.

Without loss of generality, we assume that the whole sequence verifies (i)-(ii) above. Since by [2] for any function h we have

h∞(d) = lim inf_{t→∞, d′→d} h(td′)/t,

we get for every y ∈ Y

(g(·, y))∞(d̄) ≤ lim inf_k g(xk, y)/||xk|| ≤ 0,

where the rightmost inequality holds by (3.16). Using now (2.2) we conclude that d̄ ∈ W∞. Take uk ∈ T(xk) as in Step 1 and use monotonicity of T to get

⟨u, xk − x⟩ ≤ ⟨uk, xk − x⟩ ≤ 0 for all x ∈ W, u ∈ T(x),

where the last inequality holds because x ∈ W ⊂ Wk and xk solves VIP(T, Wk). In particular, for all x ∈ W, u ∈ T(x) we have that

⟨u, xk⟩ ≤ ⟨u, x⟩.

So,

⟨u, xk/||xk||⟩ ≤ ⟨u, x⟩/||xk||,

and taking limits in the above expression we obtain ⟨u, d̄⟩ ≤ 0 for all u ∈ T(W). By (2.3) we conclude that r(d̄) ≤ 0. This is in contradiction with our hypothesis (H0), in view of (2.4) and the fact that d̄ ≠ 0. Hence the sequence must be bounded and the proof is complete. □

Remark 3.6. We point out that convergence of the whole sequence requires stronger hypotheses than those of the theorem above. It is clear that if S* is a singleton, then the accumulation point is unique and convergence of the whole sequence holds. Otherwise, (OAS) can generate a sequence with more than one accumulation point. Indeed, assume that x*, y* ∈ S* are different and suppose that Y = {ȳ}, so that Y1 = Y and Wk = W for all k. The sequence defined as

xk := x* for k even, xk := y* for k odd,

is admissible for (OAS), and it does not converge.

4 Concluding remarks

In outer approximation schemes like the one we consider in this work, the complexity of the subproblems increases at each step, and we may quickly face a subproblem which, while having a finite number of constraints, is as difficult as the original problem. This motivates the quest for approximating procedures for defining the subproblems. One such procedure replaces the constraint g(x, yk+1) ≤ 0, which is added at iteration k to the finite set of constraints, by its linear approximation. Another approach for avoiding the increasing complexity of the subproblems is to devise adaptive rules involving constraint-dropping schemes such as those developed by Hogan [12] and Gonzaga and Polak [11]. However, these approximating schemes seem to require a much more involved convergence analysis, or extra hypotheses on the data. We believe that approximating schemes of the kind just mentioned are important within the study of outer approximation algorithms, and we plan to incorporate them into our future research in order to improve our current convergence results.

5 Acknowledgment

We want to express our most profound gratitude to Prof. Alfred Auslender, for his enlightening suggestions and fruitful comments regarding this work. The authors are also thankful to the two referees, whose helpful and essential corrections greatly improved the final version of the manuscript.

Received: 19/II/03.

Accepted: 30/IX/03.

#564/03.

  • [1] A. Auslender and M. Teboulle, Lagrangian duality and related multiplier methods for variational inequalities, SIAM J. Optim., 10 (2000), pp. 1097-1115.
  • [2] C. Baiocchi, G. Buttazzo, F. Gastaldi and F. Tomarelli, General existence theorems for unilateral problems in continuum mechanics, Arch. Rational Mech. Anal., 100 (1988), pp. 149-189.
  • [3] J.W. Blankenship and J.E. Falk, Infinitely constrained optimization problems, J. Optim. Theory Appl., 19, 2 (1976), pp. 261-281.
  • [4] J. Bracken and J.F. McGill, Mathematical programs with optimization problems in the constraints, Operations Research, 21, 1 (1973).
  • [5] F.E. Browder, Nonlinear operators and nonlinear equations of evolution in Banach spaces, Proceedings of Symposia in Pure Mathematics, 18, Part 2, American Mathematical Society (1976).
  • [6] R.E. Bruck, An iterative solution of a variational inequality for certain monotone operators in a Hilbert space, Bull. Amer. Math. Soc., 81 (1975), pp. 890-892 (with corrigendum in 82 (1976), p. 353).
  • [7] Y. Censor, A. Iusem and S. Zenios, An interior point method with Bregman functions for the variational inequality problem with paramonotone operators, Math. Programming, 81 (1998), pp. 373-400.
  • [8] E.W. Cheney and A.A. Goldstein, Newton's method for convex programming and Tchebycheff approximation, Numer. Math., 1 (1959), pp. 253-268.
  • [9] P.L. Combettes, Strong convergence of block-iterative outer approximation methods for convex optimization, SIAM J. Control Optim., 38, 2 (2000), pp. 538-565.
  • [10] J.M. Danskin, The Theory of Max-Min, Springer-Verlag, Berlin (1967).
  • [11] C. Gonzaga and E. Polak, On constraint dropping schemes and optimality functions for a class of outer approximation algorithms, SIAM J. Control Optim., 17 (1979), pp. 477-493.
  • [12] W.W. Hogan, Applications of a general convergence theory for outer approximation algorithms, Math. Programming, 5 (1973), pp. 151-168.
  • [13] A.N. Iusem, On some properties of paramonotone operators, J. Convex Anal., 5 (1998), pp. 269-278.
  • [14] S. Karamardian, Complementarity problems over cones with monotone and pseudomonotone maps, J. Optim. Theory Appl., 18 (1976), pp. 445-455.
  • [15] J.E. Kelley, The cutting-plane method for solving convex programs, J. SIAM, 8 (1960), pp. 703-712.
  • [16] R.T. Rockafellar and R.J.-B. Wets, Variational Analysis, Springer-Verlag, New York (1998).
  • * Partially supported by CNPq Grant 301280/94-0 and by PRONEX-Optimization.
  • † Partially supported by PICDT/UFPI-CAPES.
  • Publication in this collection: 20 July 2004. Date of issue: 2003.
  • Sociedade Brasileira de Matemática Aplicada e Computacional (SBMAC), São Carlos, SP, Brazil. E-mail: sbmac@sbmac.org.br