
A sequential quadratic programming algorithm that combines merit function and filter ideas*

Francisco A.M. Gomes

Department of Applied Mathematics, IMECC, Universidade Estadual de Campinas 13081-970, Campinas, SP, Brazil. E-mail: chico@ime.unicamp.br

ABSTRACT

A sequential quadratic programming algorithm for solving nonlinear programming problems is presented. The new feature of the algorithm is related to the definition of the merit function. Instead of using one penalty parameter per iteration and increasing it as the algorithm progresses, we suggest that a new point be accepted if it stays sufficiently below the piecewise linear function defined by some previous iterates in the (f, ‖C‖₂²)-space. Therefore, the penalty parameter is allowed to decrease between successive iterations. Besides, one need not decide how to update the penalty parameter. This approach resembles the filter method introduced by Fletcher and Leyffer [Math. Program., 91 (2002), pp. 239-269], but it is less tolerant, since a merit function is still used. Numerical comparison with standard methods shows that this strategy is promising.

Mathematical subject classification: 65K05, 90C55, 90C30, 90C26.

Key words: sequential quadratic programming, merit functions, filter methods.

1 Introduction

In this paper we are concerned with the problem

    minimize f(x)
    subject to C(x) = 0,                                   (1)
               l ≤ x ≤ u,

where f : ℝⁿ → ℝ is a C² nonlinear function, C : ℝⁿ → ℝᵐ represents a set of C² nonlinear constraints and we suppose that

    −∞ < li < ui < ∞, for i = 1, …, n.

Naturally, some of the components of x in (1) may be slack variables generated when converting inequality constraints to this form.

Algorithms based on the sequential quadratic programming (SQP) approach are among the most effective methods for solving (1). Some interesting algorithms of this class are given, for example, in [3, 4, 10, 21]. A complete coverage of such methods can be found in [6, 20].

Since SQP algorithms do not require the iterates to be feasible, they have to deal with two conflicting objectives at each iteration: the reduction of the infeasibility and the reduction of the function f. Both objectives must be taken into account when deciding if the new iterate is to be accepted or rejected. To make this choice, most algorithms combine optimality and feasibility into one single merit function. Filter methods, on the other hand, usually require only one of these two goals to be satisfied, avoiding the necessity of defining a weighted sum of them. Both approaches have advantages and drawbacks. In this paper, these approaches are mixed with the objective of reducing their disadvantages while keeping their best features. To motivate the new algorithm, both methods are briefly introduced below.

1.1 A prototypical SQP algorithm

The algorithm presented in this paper will inherit the structure of the merit function based SQP method proposed in [10], hereafter called the GMM algorithm.

This method divides each iteration into two components, a normal and a tangential step. The first of these components is used to reduce the infeasibility, while the aim of the second is to improve the objective function. A trust region approach is used as the globalization strategy. All of the iterates are required to satisfy the bound constraints l ≤ x ≤ u, so the merit function chosen was the augmented Lagrangian, written here (and in [10]) in an unusual way as

    L(x, λ, θ) = θ ℓ(x, λ) + (1 − θ) φ(x).                 (2)

In (2), θ ∈ [0, 1] is a ''penalty parameter'' used as a weight to balance the Lagrangian function (for the equality constrained subproblem), defined as

    ℓ(x, λ) = f(x) + C(x)ᵀλ,

with a measure of the infeasibility, given by

    φ(x) = ‖C(x)‖₂².
At iteration k, a new point x+ = xk + s is accepted if the ratio between the actual and the predicted reduction of the merit function (when moving from xk to x+) is greater than a positive constant.

The actual reduction of the augmented Lagrangian at the candidate point x+ is defined as

    Ared(xk, s, θ) = L(xk, λ, θ) − L(xk + s, λ, θ).

The predicted reduction of the merit function depends on the strategy used to approximately solve (1). The GMM algorithm approximates (1) by the quadratic programming problem

    minimize ½ sᵀH s + ∇ℓ(x, λ)ᵀ s
    subject to A(x)s + C(x) = 0, l ≤ x + s ≤ u, ‖s‖∞ ≤ Δ,

where H is a symmetric n×n matrix and A(x) = (∇C1(x), …, ∇Cm(x))ᵀ is the Jacobian of the constraints.

In this case, denoting

    M(x, s) = ‖A(x)s + C(x)‖₂²

as the approximation of φ around x, the predicted reduction of the augmented Lagrangian merit function is given by

    Pred(H, x, s, θ) = θ Predℓ(H, x, s) + (1 − θ) Predφ(x, s),    (3)

where

    Predφ(x, s) = M(x, 0) − M(x, s)                        (4)

is the predicted reduction of the infeasibility and

    Predℓ(H, x, s) = −∇ℓ(x, λ)ᵀ s − ½ sᵀH s                (5)

is the predicted reduction of the Lagrangian.

The penalty parameter θ plays a crucial role in the acceptance of the step. For an augmented Lagrangian of the form (2), (θ − 1)/θ can be viewed as the slope of the line that defines the forbidden region in the (φ, ℓ)-plane, that is, the half-plane that contains all of the points that are not acceptable at the current iteration. This is illustrated in Figure 1, where the forbidden region is highlighted for different values of θ.


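To make the geometry of this acceptance rule concrete, the sketch below evaluates the merit function (2) for a few values of θ. It is only a minimal illustration with hypothetical (φ, ℓ) values, not part of the original method; a trial pair is treated as forbidden when its merit value fails to improve on the current one.

```python
def merit(ell, phi, theta):
    # Augmented Lagrangian value (2): theta*ell + (1 - theta)*phi.
    return theta * ell + (1.0 - theta) * phi

def forbidden(ell_new, phi_new, ell_k, phi_k, theta):
    # The boundary of the forbidden half-plane is the line through
    # (phi_k, ell_k) with slope (theta - 1)/theta in the (phi, ell)-plane.
    return merit(ell_new, phi_new, theta) >= merit(ell_k, phi_k, theta)

ell_k, phi_k = 1.0, 0.5          # hypothetical current pair
trial = (1.3, 0.3)               # less infeasible, but larger ell
for theta in (0.9, 0.5, 0.1):
    print(theta, forbidden(trial[0], trial[1], ell_k, phi_k, theta))
```

For θ close to 1 the trial point is rejected (the increase in ℓ dominates), while for the smallest θ the feasibility gain prevails, which is precisely the trade-off shown in Figure 1.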
Merit functions have been criticized for many reasons. First, it is not so easy to choose an initial value for θ, since ℓ(x, λ) and φ(x) usually have very different meanings and units. Besides, it is necessary to decrease θ as the algorithm progresses, to force it to find a feasible solution. If the initial penalty parameter is near 1 and θ is decreased slowly, the algorithm may take too many iterations to reach a feasible point. On the other hand, starting from a small θ, or decreasing this factor too quickly, may force the iterates to stay almost feasible, shortening the steps even when we are far from the optimal solution.

As shown in [10], the adoption of a non-monotone strategy for the reduction of θ is very effective in avoiding this premature step shortening. However, it also allows the algorithm to cycle between small and large penalty parameters, inducing some zigzagging in many cases. This undesired behavior can be controlled if an efficient choice of the non-monotonicity parameter is adopted, but this choice is very problem dependent, so the criticism of the method still applies.

1.2 The filter method

To overcome some of the difficulties inherent to merit functions, Fletcher and Leyffer [8] introduced the idea of using a filter. Instead of combining infeasibility and optimality in one single function, the filter method borrows the idea of nondominance from multi-criteria optimization theory. Thus, a trial point is refused only if it is dominated by some other point, generated in a previous iteration.

This approach was promptly followed by many authors, mainly in conjunction with SLP (sequential linear programming), SQP and interior-point type methods (see, for instance, [1, 5, 6, 7, 9, 11, 12, 15, 16, 17, 22, 23, 24, 25]).

The SQP-filter algorithm presented in [6] illustrates how this nondominance criterion works. In this method, the objective function f is used to measure optimality, while infeasibility is measured by a function ϕ(x) that differs from φ(x) in the norm used. Besides, the violation of the bound constraints is also taken into account in ϕ, since the filter method does not require the iterates to satisfy these constraints.

For each approximate solution x, we can plot a point (ϕ(x), f(x)) in the (ϕ, f)-plane, just as we did with (φ(x), ℓ(x, λ)) in Figure 1. A third, equivalent plane will be introduced in the next section, as the algorithm presented in this paper works with (φ, f) pairs.

Let 𝓕 be a set of previously generated pairs of the form (ϕj, fj). An iterate xk is accepted by the filter whenever it satisfies

    ϕ(xk) ≤ (1 − γ) ϕj  or  f(xk) ≤ fj − γ ϕj,  for all (ϕj, fj) ∈ 𝓕,    (6)

where γ ∈ (0, 1) is a constant.

One advantage of this type of Pareto dominance criterion is that it does not require the redefinition of a penalty parameter at each iteration. However, to avoid the acceptance of iterates that are too close to the points in 𝓕, it was necessary to add to the last inequality of (6) a term that depends on the infeasibility, giving a merit function flavor to this specific filter.

The main disadvantage of the SQP-filter based on (6) is that this acceptance criterion is too tolerant. In fact, requiring only the infeasibility or the optimality to be improved makes it possible to accept points that are only marginally less infeasible but show a large increase in f(x) over the current iterate, or vice-versa.
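The fragment below sketches this acceptance test in the envelope form reconstructed in (6); the margin γ and the stored pairs are hypothetical, and the second call exhibits exactly the kind of marginally better point criticized above.

```python
def acceptable_for_filter(phi_new, f_new, pairs, gamma=1e-5):
    # A trial pair must improve on EVERY stored pair, either in the
    # infeasibility measure or in the objective (with a small margin).
    return all(phi_new <= (1.0 - gamma) * phi_j or
               f_new <= f_j - gamma * phi_j
               for (phi_j, f_j) in pairs)

filter_pairs = [(0.8, 1.0), (0.4, 2.0), (0.1, 3.5)]     # hypothetical
print(acceptable_for_filter(0.05, 3.6, filter_pairs))   # True: nearly feasible
print(acceptable_for_filter(0.39, 1.99, filter_pairs))  # True: barely better
```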

Even so, the SQP-filter method can give us some good hints on how to improve algorithms based on merit functions.

The first hint is that the same merit function that is reliable for points in the (φ, f)-plane that are near (φ(xk), f(xk)) may not be so useful when the step is large, so that the trial point is far from the current iterate. As illustrated in Figure 1, for values of θ near 1, the acceptance criterion based on a merit function cuts off a significant portion of the feasible region, including, in many cases, the optimal solution of the problem.

The second good idea behind the SQP-filter method is that a restoration step should be used sometimes. The objective of a restoration is to obtain a point that is less infeasible than the current one and is also acceptable for the filter. In [6, sec. 15.5], a restoration step is computed when the trust region quadratic subproblem is incompatible (i.e., has an empty feasible set). In our method, this strategy will be used whenever staying far from feasibility does not seem worthwhile. In other words, infeasible iterates are welcome only if they generate large reductions of the objective function. If the decrease in f is small and the current point is very infeasible, it is better to move away and find a more feasible point.

The last lesson we can take from the SQP-filter method is that feasible points should never be refused. In [6, sec. 15.5], feasible points are never added to 𝓕, so a feasible iterate will always satisfy the first inequality of (6). This assures that the optimal solution will always be accepted by the algorithm and that a restoration will always succeed.

1.3 Motivation and structure of the paper

The objective of this paper is to present an algorithm that takes advantage of both the merit function and the filter ideas. The algorithm introduced here is a merit function SQP method, in the sense that it still combines feasibility and optimality into one single function and it still uses penalty parameters. However, not one but several penalty parameters are defined per iteration, each one related to a portion of the (φ, f)-space. These parameters are also automatically computed from some (φ, f)-pairs collected at previous iterations, so no update scheme needs to be defined for θ.

This paper is organized as follows. In the next section, we present the piecewise linear function we use to accept or reject points. Section 3 introduces the proposed algorithm. In section 4, we prove that the algorithm is well defined. Sections 5 and 6 contain the main convergence results. Finally, in section 7 some conclusions are presented, along with lines for future work.

Throughout the paper, we will omit some (or even all) of the arguments of a function if this does not lead to confusion. Therefore, Q(H, x, s) will sometimes be expressed as Q(s), for example, if there is no ambiguity about H and x.

2 A merit function that uses several penalty parameters per iteration

As we have seen, a merit function deals with two different concepts: the infeasibility and the optimality of the current point.

In this paper, we will introduce a new merit function that uses information collected at previous iterations to accept new points. Since we compare points generated at different stages of the algorithm, this function cannot be based on the augmented Lagrangian, as in [10], because it would depend on the Lagrange multiplier estimates used and, obviously, these estimates change from one iteration to another. Therefore, we decided to adopt the so-called smooth ℓ2 merit function, defined as

    ψ(x, θ) = θ f(x) + (1 − θ) φ(x).                       (7)

The actual reduction of the ℓ2 merit function will be given by

    Ared(x, s, θ) = ψ(x, θ) − ψ(x + s, θ)
                  = θ [f(x) − f(x + s)] + (1 − θ) [φ(x) − φ(x + s)].

Similarly, the predicted reduction of the merit function will be defined as in (3), replacing (5) by

    Predopt(H, x, s) = Q(H, x, 0) − Q(H, x, s),

where

    Q(H, x, s) = f(x) + ∇f(x)ᵀ s + ½ sᵀH s                 (8)

is a quadratic approximation of f in a neighborhood of x.

Generally, for a trial point to be accepted, it is necessary that the actual reduction of the merit function satisfies

    Ared(x, s, θ) ≥ η Pred(H, x, s, θ),

where η ∈ (0, 1) is a given parameter.

However, this scheme based on a linear merit function is usually unreliable for trial points that are far from the current iterate. Therefore, we suggest the use of a piecewise linear function to accept or reject new points, which corresponds to using several merit functions per iteration.

In order to define the new merit function, let F be a set of p points (φi, fi) in the (φ, f)-plane. Suppose that these pairs are ordered so that φ1 < φ2 < … < φp. Suppose also that each point (φi, fi) in F is below the line segment joining (φi−1, fi−1) and (φi+1, fi+1), for i = 2, …, p − 1. Thus the piecewise linear function that passes through all of the points in F is convex.

For each point (φi, fi) in F, define another point (φ̄i, f̄i) by moving a little towards the southwest (a precise definition of φ̄i and f̄i is given in (10) and (11) below). Let F̄ be the set of points (φ̄i, f̄i). The convex piecewise linear function P(F̄, φ) that connects the points in F̄ is then defined, extending the first and last segments beyond φ̄1 and φ̄p, the slope of the final segment being −γs, where γs is a small positive constant, such as 10⁻⁴.

This new function, illustrated in Figure 2, is formed by p + 1 line segments that can be viewed as merit functions in the form (7). The i-th of these functions is defined by the penalty parameter

    θi = 1/(1 − mi),                                       (9)

where mi is the slope of the i-th segment of P(F̄, φ), and by a particular choice of η that will be defined below.


The region between the piecewise linear function that passes through the points in F and the function P(F̄, φ) acts as a margin that prevents the acceptance of iterates that are not ''sufficiently'' better than the pairs in F.
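A compact way to implement this test is to store the shifted pairs of F̄ sorted by φ̄ and interpolate between consecutive points. The sketch below is a simplified version with made-up data, which merely extends the outermost segments; it is not the exact routine used in the paper.

```python
import bisect

def envelope(Fbar, phi):
    # Piecewise-linear function through the shifted pairs of Fbar,
    # sorted by increasing phi (and decreasing f); the outermost
    # segments are extended beyond the first and last breakpoints.
    xs = [p for p, _ in Fbar]
    i = min(max(bisect.bisect_left(xs, phi), 1), len(Fbar) - 1)
    (p0, f0), (p1, f1) = Fbar[i - 1], Fbar[i]
    return f0 + (f1 - f0) / (p1 - p0) * (phi - p0)

def below_envelope(phi_new, f_new, Fbar):
    # First acceptance test: the trial pair must lie strictly below
    # the envelope, i.e., outside the forbidden region of Figure 2.
    return f_new < envelope(Fbar, phi_new)

Fbar = [(0.1, 3.0), (0.5, 1.0), (2.0, 0.5)]   # hypothetical shifted pairs
print(below_envelope(1.0, 0.6, Fbar))          # True: the point is accepted
```

Note that each segment here has negative slope m, so, under the slope-to-parameter relation given in (9), the associated penalty parameter θ = 1/(1 − m) indeed lies in (0, 1).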

At each iteration k, F̄k is generated by defining, for each point (φi, fi) ∈ Fk, another point (φ̄i, f̄i) according to (10) and (11), whose margins are controlled by two constants satisfying 0 < γf < γc < 1. Reasonable values for these constants are γf = 10⁻⁴ and γc = 10⁻³.

Our algorithm starts with F0 = ∅. At the beginning of an iteration, say k, we define the temporary set F̂k as

    F̂k = Fk ∪ {(φ(xk), f(xk))}.

As will become clear in the next section, depending on the behavior of the algorithm, the pair (φ(xk), f(xk)) may be permanently added to Fk+1 at the end of the iteration. In this case, each pair (φ(xi), f(xi)) that is not below the line segment that joins (φ(xi−1), f(xi−1)) to (φ(xi+1), f(xi+1)) is removed from Fk+1, to keep the associated piecewise linear function convex.
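The pruning rule of the previous paragraph can be written as a small routine: after a pair is permanently added, any interior point lying on or above the chord joining its neighbors is discarded until the set is convex again. This is only a sketch with hypothetical data.

```python
def restore_convexity(F):
    # F holds (phi, f) pairs sorted by increasing phi and decreasing f.
    # Remove every interior pair that is not below the segment joining
    # its neighbors, so the piecewise-linear function stays convex.
    changed = True
    while changed and len(F) > 2:
        changed = False
        for i in range(1, len(F) - 1):
            (p0, f0), (p1, f1), (p2, f2) = F[i - 1], F[i], F[i + 1]
            chord = f0 + (f2 - f0) * (p1 - p0) / (p2 - p0)
            if f1 >= chord:              # on or above the chord: drop it
                del F[i]
                changed = True
                break
    return F

F = [(0.1, 3.0), (0.4, 2.9), (0.5, 1.0), (2.0, 0.5)]
print(restore_convexity(F))              # (0.4, 2.9) is removed
```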

A new iterate x+ = xk + sc is rejected if f(xk + sc) is above the piecewise-linear function P(F̂k, φ(xk + sc)), or if we predict a good reduction for the merit function but the actual reduction is disappointing (in the sense that (14) occurs).

To express the first of these conditions in the SQP jargon, we say that x+ is not accepted if

    Ared(xk, sc, θi) < η̄ Pred(Hk, xk, sc, θi),             (12)

where θi is the penalty parameter (9) associated with the segment of P(F̂k, φ) in which the trial point falls, and η̄ is the parameter that defines the region between Fk and F̄k. A general formula for η̄ cannot be written explicitly, since we do not know in advance which of the terms in (10) and (11) will be used to define φ̄i and f̄i. Fortunately, this formula will only be used in the proof of Lemma 15 and, in that case, a simple expression is known.

Some agreement between the model function and the objective function of (1) is a usual second requirement for accepting the step in filter methods. In the SQP-filter method presented in [6], for example, this extra condition is used whenever the predicted reduction of f is sufficiently greater than a small fraction of the infeasibility at xk. In this case, sc is only accepted if the ratio between the actual and the predicted reduction of f is greater than or equal to γg. Most filter algorithms, such as those presented in [5, 7, 9, 16, 17, 22, 23], include similar tests.

The combination of infeasibility and optimality into a single merit function allows us to adopt a less stringent condition, rejecting x+ only if

    Predopt(Hk, xk, sc) ≥ κ φ(xk)  and  θ̃ < γm,            (14)

where θ̃ is the largest penalty parameter θ ∈ [0, 1] for which Ared(xk, sc, θ) ≥ γg Pred(Hk, xk, sc, θ), and κ > 0, γm ∈ (0, 1) and γg ∈ (0, 1).

In words, (14) states that, when we predict a good reduction for the optimality part of the merit function, the step is only accepted if there exists a penalty parameter θ̃ ∈ [γm, 1] such that Ared/Pred ≥ γg.

This condition may seem somewhat inelegant, since one would expect all of the iterates that do not belong to the forbidden region shown in Figure 2 to be accepted. However, if we choose a small γm, say 10⁻⁴, (14) becomes less stringent than (12) in most cases, so it is seldom used. In fact, we generally have θ0 (≡ 0) < γm < θ1, so this condition only applies when xk is the leftmost point in F̄k. And, even in this case, the forbidden region is only slightly enlarged.

To finish this section, Figure 3 illustrates how the new merit function reduces the region of acceptable points in comparison with the filter. The comparison was made for problem 7 from the Hock and Schittkowski test collection [18]. In the figure, the current iterate, xk = (−0.25, 2.27), is shown as a black dot. The points in the (φ, f)-space used to define the set F̄ were (0.01, −1.5), (2.28, −2.21) and (20, −2.7). The forbidden region for the filter method, as defined in (6), is shown in dark gray. The points that are acceptable for the filter but not for the new merit function are shown in light gray. The white region contains the points that are acceptable by both criteria. The contour lines of f(x) = log(1 + x1²) − x2 are concave up, while the concave down contour lines are related to the constraint C(x) = (1 + x1²)² + x2² − 4. The narrow, white, concave down region at the bottom of the graph corresponds to points that are almost feasible, i.e., points that satisfy φ(x) ≤ 0.01.
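For reference, the data of this example can be reproduced directly from the Hock-Schittkowski definition of problem 7. The snippet below evaluates the objective and the constraint at the iterate shown in the figure (the choice of infeasibility norm is left out, since only f and C are needed here).

```python
import numpy as np

def f(x):        # HS7 objective: log(1 + x1^2) - x2
    return np.log(1.0 + x[0] ** 2) - x[1]

def C(x):        # HS7 equality constraint: (1 + x1^2)^2 + x2^2 - 4
    return (1.0 + x[0] ** 2) ** 2 + x[1] ** 2 - 4.0

xk = np.array([-0.25, 2.27])       # the black dot of Figure 3
print(f(xk), C(xk))                # f(xk) is approximately -2.21
```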


3 An SQP algorithm

In the general framework of a trust region sequential quadratic programming algorithm, a step sc is obtained by approximating problem (1), in a neighborhood of an iterate xk, by a quadratic programming (QP) problem.

In our case, this QP problem has the form

    minimize Q(Hk, xk, s)
    subject to A(xk)s + C(xk) = 0,                         (16)
               l ≤ xk + s ≤ u, ‖s‖∞ ≤ Δk,

where Q(H, x, s) is defined by (8), xk is supposed to belong to

    Ω = {x ∈ ℝⁿ | l ≤ x ≤ u},

and Hk is an approximation of the Hessian of the Lagrangian at xk. It should be noticed that Hk need not be positive definite, so this problem must be carefully handled. The infinity norm was chosen here so that the bound constraints and the trust region constraint of (16) can be grouped.

We will use the term φ-stationary to say that a point satisfies the first order optimality conditions of

    minimize φ(x)  subject to  l ≤ x ≤ u.                  (17)

Unfortunately, if xk is not φ-stationary, the constraints of (16) may be inconsistent, so this problem may not have a solution. Some algorithms, such as the SQP-filter method presented in [6], include a restoration step, called to find a new point that makes (16) compatible. Another common practice to overcome this difficulty is to directly divide the step sc into two components. The first of these components, called the normal step, or simply sn, is obtained as the solution of the feasibility problem

    reduce M(xk, s)  subject to  l ≤ xk + s ≤ u, ‖s‖∞ ≤ βd Δk,    (18)

where βd ∈ (0, 1] is a given constant. If M(xk, sn) = 0, then xk can be replaced by xk + sn in (16) to make this problem feasible, so it can be solved by any QP algorithm. Otherwise, the second component of sc, called the tangential step, or st, is computed so that Q is reduced but the predicted reduction of the infeasibility obtained so far is retained. In other words, sc is the solution of the (now consistent) problem

    reduce Q(Hk, xk, s)
    subject to A(xk)s = A(xk)sn,                           (19)
               l ≤ xk + s ≤ u, ‖s‖∞ ≤ Δk.

Usually, βd is set to some value around 0.8, so that the trust region is enlarged from (18) to (19). This is done to prevent sn from being the only solution of (19) when this point is on the border of the trust region of (18).

To ensure a sufficient decrease of M, a Cauchy point, s̄n, is computed. This Cauchy point is based on a descent direction for φ(x) given by PΩ(xk − ∇φ(xk)) − xk, where PΩ(z) denotes the orthogonal projection of z onto Ω. The solution of (18) is required to retain at least ninety percent of the reduction obtained by s̄n.
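With φ(x) = ‖C(x)‖₂², this projected-gradient direction is cheap to compute, since the projection onto the box Ω is a componentwise clipping. The sketch below mirrors step 1.3.1 of the algorithm on hypothetical data.

```python
import numpy as np

def proj_box(z, l, u):
    # Orthogonal projection onto Omega = {x : l <= x <= u}.
    return np.minimum(np.maximum(z, l), u)

def normal_direction(xk, A, Cx, l, u, gamma_n=1.0):
    # Step 1.3.1: dn = P_Omega(xk - gamma_n * grad(phi(xk))) - xk,
    # where grad(phi) = 2 A^T C for phi = ||C||_2^2.
    g = 2.0 * A.T @ Cx
    return proj_box(xk - gamma_n * g, l, u) - xk

# One linear constraint x1 + x2 - 1 = 0, current point (2, 2).
xk = np.array([2.0, 2.0])
A = np.array([[1.0, 1.0]])
Cx = np.array([xk[0] + xk[1] - 1.0])
dn = normal_direction(xk, A, Cx, np.full(2, -5.0), np.full(2, 5.0))
print(dn)        # points towards the feasible line
```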

A similar procedure is adopted for (19). In this case, the Cauchy point, s̄t, is obtained from a descent direction for f(x) on the tangent space, given by Pχ(−∇Q(sn)), the orthogonal projection of −∇Q(sn) onto the set

    χ = {z ∈ ℝⁿ | A(xk)z = 0, l ≤ xk + sn + z ≤ u}.

Again, the decrease of Q obtained by the solution of (19) must not be less than a fixed percentage of the reduction supplied by the Cauchy point.

Besides using this two-step scheme, the algorithm presented here also performs a restoration whenever the step becomes too small and the current point is very infeasible. Although this feasibility reinforcement may seem unnecessary, the restoration plays a very important role in accelerating the method and in keeping the trust region radius sufficiently large.

This role is better explained by an example. Going back to Figure 2, let us suppose that the current iterate xk is represented by the rightmost point of the figure and that the optimal point has a function value greater than f(xk). In this case, unless the trust region radius is sufficiently large and our quadratic model is really good (so that the algorithm can jump over a large portion of the forbidden region), it will be necessary to perform a considerable number of short-step iterations to traverse from the southeast to the northwest part of the figure. To shorten the path between the current point and the desired solution, it is necessary to focus only on reducing the infeasibility, and this is exactly what the restoration step does.

One may notice that the use of a standard filter is not useful for circumventing this difficulty. In fact, the problem is aggravated in the presence of (almost) right angles on the frontier of the forbidden region. To see why this happens, let us suppose now that the restoration is used only to ensure that problem (16) is compatible and that xk is in the vicinity of an almost horizontal segment of the filter envelope. In this case, to escape from this region, it may be necessary to severely reduce the trust region radius before (16) becomes inconsistent and the restoration is called.

The main steps of the algorithm are given below. We start from k = 0 and take F0 = ∅ as the initial set of points used to define the piecewise linear function P(F̄, φ). An initial point x0 ∈ Ω, an initial trust-region radius Δ0 ≥ Δmin and an initial symmetric matrix H0 also need to be given.

Algorithm 1. A new SQP algorithm

1. WHILE the stopping criteria are not satisfied
   1.1. F̂k ← Fk ∪ {(φ(xk), f(xk))};
   1.2. IF ‖C(xk)‖ = 0 (xk is feasible),
        1.2.1. sn ← 0;
   1.3. ELSE
        1.3.1. Compute dn (a descent direction for φ(x)):
               dn ← PΩ(xk − γn∇φ(xk)) − xk;
        1.3.2. Determine s̄n (the decrease step for φ(x)), the solution of the one-dimensional problem of minimizing M(xk, t dn) for t such that xk + t dn ∈ Ω and ‖t dn‖∞ ≤ βdΔk;
        1.3.3. Compute sn (the normal step) such that
               l ≤ xk + sn ≤ u,
               ‖sn‖∞ ≤ βdΔk, and
               M(xk, 0) − M(xk, sn) ≥ βm[M(xk, 0) − M(xk, s̄n)];
   1.4. Compute dt (a descent direction for f(x) on the tangent space):
        dt ← Pχ(−γt∇Q(sn));
   1.5. Determine s̄t (the decrease step for f(x)), the solution of the one-dimensional problem of minimizing Q(sn + t dt) for t such that xk + sn + t dt ∈ Ω and ‖sn + t dt‖∞ ≤ Δk;
   1.6. Compute a trial step sc such that
        A(xk)sc = A(xk)sn,
        l ≤ xk + sc ≤ u,
        ‖sc‖∞ ≤ Δk, and
        Q(sn) − Q(sc) ≥ βq[Q(sn) − Q(s̄t)];
   1.7. IF (f(xk + sc) ≥ P(F̂k, φ(xk + sc))) OR
        (Predopt(Hk, xk, sc) ≥ κ φ(xk) AND θ̃ < γm),
        1.7.1. Δk ← αR min{Δk, ‖sc‖∞}; (reduce Δ)
   1.8. ELSE
        1.8.1. ρk ← [f(xk) − f(xk + sc)] / Predopt(Hk, xk, sc);
        1.8.2. IF Predopt(Hk, xk, sc) < κ φ(xk) OR ρk < γr,
               1.8.2.1. Fk+1 ← F̂k; (include (φ(xk), f(xk)) in F)
        1.8.3. ELSE Fk+1 ← Fk;
        1.8.4. Accept the trial point:
               xk+1 ← xk + sc;
               Δk+1 ← max{Δmin, αA ‖sc‖∞};
               Determine Hk+1;
               k ← k + 1;
   1.9. IF Δk < Δrest AND φ(xk) > h Δk²,
        1.9.1. Compute a restoration step sr so that
               (φ(xk + sr) ≤ h Δk² AND f(xk + sr) < P(F̂k, φ(xk + sr)))
               OR xk + sr is φ-stationary but infeasible;
        1.9.2. Fk+1 ← F̂k; (include (φ(xk), f(xk)) in F)
        1.9.3. Accept the new point:
               xk+1 ← xk + sr;
               Δk+1 ← max{βr Δrest, Δmin};
               Determine Hk+1;
               k ← k + 1;

The constants used here must satisfy 0 < βd ≤ 1, 0 < βm < 1, 0 < βq < 1, κ > 0, 0 < γr < γg < η < 1, γn > 0, γt > 0, 0 < Δmin < Δrest, 0 < αR < 1, αA ≥ 1, h > 0 and βr > 0. Parameters κ, γn, γt, Δmin, Δrest, h and βr are problem dependent and must be chosen according to some measure of the problem data. Reasonable values for the remaining parameters are βd = 0.8, βm = 0.9, βq = 0.9, γr = 0.01, γg = 0.05, η = 0.9, αR = 0.25 and αA = 2.5. The constant η should not be confused with the parameter η̄ defined in (12).

Algorithm 1 may seem somewhat inefficient, since it requires the solution of several quadratic programming problems per iteration. However, as will become clear in section 7, only steps 1.3.3 and 1.6 usually require the solution of a quadratic problem. Besides, these problems need not be solved exactly, so the algorithm is competitive with modern nonlinear programming codes.

The restoration step is called whenever the trust region radius becomes too small compared to φ(xk). In this case, we need to find a point that is less infeasible than xk and that is also acceptable for the piecewise linear merit function. To ensure that such a point can always be obtained, it would be necessary to use an algorithm for global minimization. Of course, this alternative is unaffordable, so in practice the restoration is computed by an algorithm for solving the box constrained problem (17). The only drawback of this approach is that we cannot guarantee that a stationary but infeasible point will not be reached. Therefore, we say that the main algorithm fails if this happens.

If xk is feasible, then the condition Predopt(Hk, xk, sc) < κ φ(xk) is never satisfied, since Predopt is always greater than or equal to zero. Besides, the condition ρk < γr is also never satisfied when xk is feasible and f(xk + sc) < P(F̂k, φ(xk + sc)). Therefore, all of the points in Fk are infeasible, although F̂k may contain a feasible point. This result is very important for two reasons. First, it prevents the optimal solution of problem (1) from being refused by the algorithm. Moreover, it also assures that the algorithm is well defined, as stated in the next section.
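The accept/reject logic of steps 1.7-1.9 can be summarized in a few lines of code. The sketch below is a simplified skeleton, not the full method: the θ̃ test of (14) is omitted, the envelope value P(F̂k, ·) is taken as an input, and the restoration trigger uses the threshold h·Δ² reconstructed above.

```python
def step_decision(phi_k, f_trial, envelope_value, ared_opt, pred_opt,
                  delta, delta_rest, kappa=1e-4, gamma_r=0.01,
                  h=1e2, alpha_r=0.25):
    # Returns (accepted, new_delta, add_pair_to_F, call_restoration).
    if f_trial >= envelope_value:            # step 1.7: forbidden region
        delta = alpha_r * delta              # step 1.7.1: shrink Delta
        accepted, add_pair = False, False
    else:
        rho = ared_opt / pred_opt            # step 1.8.1
        add_pair = pred_opt < kappa * phi_k or rho < gamma_r  # step 1.8.2
        accepted = True
    restore = delta < delta_rest and phi_k > h * delta ** 2  # step 1.9
    return accepted, delta, add_pair, restore
```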

4 The algorithm is well defined

An iteration of algorithm 1 ends only when a new point xk + s is below the piecewise linear function P(F̂k, φ(xk + s)), besides satisfying some other conditions stated at steps 1.7 or 1.9.1. While such a point is not found, the trust region radius is reduced and the iteration is repeated. It is not obvious that an acceptable point will be obtained, as we may generate a sequence of points that are always rejected by the algorithm. In this section, we prove that the algorithm is well defined, i.e. a new iterate xk+1 can always be obtained, unless the algorithm stops by finding a φ-stationary but infeasible point or a feasible but not regular point.¹

In the following lemma, we consider the case where xk is infeasible.

Lemma 1. If xk is not φ-stationary, then after a finite number of repetitions of steps 1.1 to 1.9, a new iterate xk+1 is obtained by the algorithm.

Proof. At each iteration k, if f(xk + sc) < P(F̂k, φ(xk + sc)) and one of the conditions

    Predopt(Hk, xk, sc) < κ φ(xk)  or  Ared(xk, sc, θ̃) ≥ γg Pred(Hk, xk, sc, θ̃)

(for some θ̃ ≥ γm) is satisfied, then xk + sc is accepted and we move to iteration k + 1. Otherwise, Δk is reduced and, after some unfruitful steps, Δk < Δrest and φ(xk) > h Δk², so a restoration is called.

Suppose that a φ-stationary but infeasible point is never reached (otherwise, the algorithm fails). As the restoration generates a sequence of steps {sj} converging to feasibility, and since F̂k does not include feasible points (because xk is infeasible and no feasible point is ever included in Fk), there must exist an iterate xk + sr that satisfies the conditions stated at step 1.9.1, so we can proceed to the next iteration.

Now, in order to prove that the algorithm is also well defined when xk is feasible, we need to make the following assumptions.

A1. f(x) and Ci(x) are twice-continuously differentiable functions of x.

A2. The sequence of Hessian approximations {Hk} is bounded.

As a consequence of A1 and A2, the difference between the actual and the predicted reduction of the merit function is proportional to Δ², so the step is accepted for a sufficiently small trust region radius, as stated in the following lemma.

Lemma 2. Suppose that A1 and A2 hold and that xk is feasible and regular for problem (1) but the KKT conditions do not hold. Then, after a finite number of trust region reductions, the algorithm finds a new point xk + sc that satisfies f(xk + sc) < P(F̂k, φ(xk + sc)) and Ared(xk, sc, θ̃) ≥ γg Pred(Hk, xk, sc, θ̃) for some θ̃ ≥ γm.

Proof. Since xk is feasible, sn = 0. Supposing that xk is regular and non-stationary, there must exist a vector dt ≠ 0 satisfying the conditions stated at step 1.4.

Let us define, for all Δ > 0,

    p(Δ) = t(Δ) dt,

where t(Δ), defined in (20), is the largest step along dt that keeps xk + t dt in Ω within the trust region. Clearly, xk + dt ∈ Ω, so we have that ‖t(Δ)dt‖∞ = Δ whenever Δ ≤ ‖dt‖∞. Define, in this case, sc(Δ) as the trial step generated at step 1.6 with this trust region radius.

Since Q(s̄t) ≤ Q(p(Δ)), by elementary properties of one-dimensional quadratics, there exists Δ1 ∈ (0, ‖dt‖∞] such that, for all Δ ∈ (0, Δ1), the decrease Q(0) − Q(sc(Δ)) is proportional to Δ (see (21)).

Moreover, since xk is feasible and A(xk)sn = 0, we have that M(xk, 0) = M(xk, sc) = 0, so the trial point remains feasible with respect to the linearized constraints.

Once xk is feasible, (φ(xk), f(xk)) is the first pair in F̂k. Thus, there exists Δ̄2 ∈ (0, Δ1] such that, for Δ < Δ̄2, we only need to consider the portion of P(F̂k, φ) defined on the interval [0, φ̄2]. This linear function may be rewritten so that the condition

    f(xk + sc) < P(F̂k, φ(xk + sc))

is equivalent to (22), where the coefficients of (22) are given by (23) and θ̄1 > 0 is given by (9).

Now, by A1, A2 and the definition of Pred, we have the bound (24) on the difference between the actual and the predicted reductions. So, using (23) and (24), we deduce (25). Thus, there exists Δ̄3 ∈ (0, Δ̄2] such that, for Δ < Δ̄3, the inequality (22) necessarily takes place.

Now, using the fact that θ̃ = 1 for xk feasible and replacing η̄1 by θ̄1 in (25), we can conclude that, for Δ < Δ̄4 (defined in (26)), the condition Ared(xk, sc(Δ), θ̃) ≥ γg Pred(Hk, xk, sc(Δ), θ̃) is also satisfied and the step is accepted.

5 The algorithm converges to a feasible point

As mentioned in the last section, our algorithm can stop if a φ-stationary but infeasible point is found. Naturally, this unexpected behavior of the algorithm makes the title of this section somewhat pretentious.

Formally, what we will prove is that, supposing that a φ-stationary but infeasible point is never reached and that the restoration always succeeds, an infinite sequence of iterates converges to feasibility.

In the proofs of the lemmas presented here, we will suppose that A1 and the following assumption are satisfied.

A3. The sequence of iterates {xk} lies within a closed and bounded domain Ω0.

This requirement is easy to fulfill, as we usually can define finite lower and upper limits for the variables. Besides, as mentioned in [6, p. 730], assumptions A1 and A3 together ensure that, for all k,

    fmin ≤ f(xk) ≤ fmax  and  0 ≤ φ(xk) ≤ φmax,

for some constants fmin, fmax and φmax > 0. Our analysis will be based on the fact that the rectangle A0 = [0, φmax] × [fmin, fmax] is covered by a finite number of rectangles with area greater than a small constant. Therefore, each time we expand the forbidden region (see Fig. 2) by adding a small rectangle to it, we drive the iterates towards feasibility.

Let us start by investigating what happens to φ(x) when an infinite sequence of iterates is added to F, following the skeleton of Lemma 15.5.2 of [6].

Lemma 3. Suppose that A1 and A3 hold and that {ki} is any infinite subsequence at which the iterate xki is added to F. Then

    limi→∞ φ(xki) = 0.

Proof. Let us suppose, for the purpose of obtaining a contradiction, that there exists an infinite subsequence {kj} ⊆ {ki} for which

    φ(xkj) ≥ ε,                                            (27)

where ε > 0.

At iteration kj, the (φ, f)-pair associated with xkj is included in F at position m, which means that φm−1 < φ(xkj) (≡ φm) < φm+1 and fm−1 > f(xkj) (≡ fm) > fm+1. Thus, as long as the pair (φm, fm) remains in F, no other (φ, f)-pair is accepted within the rectangle rm delimited by (φm, fm) and (φ̄m, f̄m). Notice that, by (10), (11) and (27), the area of this rectangle is bounded below by a positive constant, say Ā.

Assume now that (φm, fm) is excluded from F by another pair (φ′, f′), included in F at an iteration kl > kj. This case is illustrated in Fig. 4. Notice that (φ′, f′) cannot fall in regions I and V since, in this case, (φm, fm) would not be excluded from F. It can easily be verified that the worst case occurs when (φ′, f′) lies on p1(φ) or p2(φ).

Suppose that (φ′, f′) lies on p2(φ), as depicted in Fig. 4. In this case, the rectangle rm will be entirely above ℓ2, the line that connects (φ̄′, f̄′) to (φ̄m+1, f̄m+1). Since ℓ2 will be included in the new piecewise linear function P(F̄), no point within rm can ever be reached by a new iterate.

The same idea applies when (φ′, f′) lies on p1(φ). Therefore, once (φ′, f′) is included in F, rm will always be above P(F̄). Since the area of this rectangle is at least Ā and the set A0 is completely covered by at most Surf(A0)/Ā of such rectangles, it is impossible for an infinite subsequence of {ki} to satisfy (27), and the conclusion follows.

Finally, we are going to consider the case where no point is added to Fk for k sufficiently large.

Lemma 4. Suppose that assumptions A1 and A3 hold. Suppose also that, for all k ≥ k0, xk is never included in Fk. Then

    limk→∞ φ(xk) = 0.                                      (28)

Proof. Since xk is not included in Fk, no restorations are made and both conditions stated at step 1.8.2 of algorithm 1 are never satisfied for k ≥ k0. Therefore, we have

    f(xk) − f(xk+1) ≥ γr Predopt(Hk, xk, sc) ≥ γr κ φ(xk)  (29)

for all k ≥ k0, which means that the objective function always decreases between infeasible iterations. Since A1 and A3 imply fmin ≤ f(xk) ≤ fmax, we must have

    limk→∞ [f(xk) − f(xk+1)] = 0.                          (30)

Then, (28) follows from (29) and (30).

6 The algorithm finds a critical point

Finally, we are able to prove the convergence of the algorithm to a stationary point of (1). In order to do that, we will need to make one additional assumption on the choice of the normal step sn.

A4. There exist κn, kN > 0 such that, if {xki} is a subsequence of iterates that converges to a feasible point, the choice of sn at step 1.3.3 of algorithm 1 satisfies

    ‖sn(xki, Δki)‖ ≤ κn ‖C(xki)‖₂

for ki ≥ kN.

This requirement is also easy to fulfill since it only applies when the infeasibility is small and, in this case, it is reasonable to suppose that the normal step will also be small.

In the following lemma, derived from Lemma 6.1 of [10], we show that in the neighborhood of a feasible, regular and non-stationary point, the directional derivative of the quadratic model (8) along dt is bounded away from zero.

Lemma 5. Suppose that A2 and A4 hold and that {xki} is an infinite subsequence that converges to the feasible and regular point x* ∈ Ω, which is not stationary for (1). Then there exist k1, c1 > 0 such that the bound (31) on the directional derivative of Q along dt holds for all x ∈ {xki | ki ≥ k1}. Moreover, ‖dt(H, x, Δ)‖ is bounded and bounded away from zero for all x ∈ {xki | ki ≥ k1}.

Proof. For all x ∈ {xki}, we have that

    dt(H, x, Δ) = Pχ(−γt∇Q(sn(x, Δ))) = Pχ(−γt[H sn(x, Δ) + ∇f(x)]).

By the contractive property of orthogonal projections,

    ‖Pχ(−γt[H sn(x, Δ) + ∇f(x)]) − Pχ(−γt∇f(x))‖2 ≤ γt ‖H‖2 ‖sn(x, Δ)‖2.

So, by A2 and A4, we obtain the bound (32) for ki ≥ kN. By the continuity of ∇f(x) and the fact that {xki} converges, we deduce (33).

Notice that Pχ(−γt∇f(x)) is the solution of a projection problem over the set χ.

Now, define P*(−γt∇f(x*)) as the solution of the corresponding problem (34) stated at x*. Since x* is regular but is not a stationary point of (1), it follows that z = 0 is not a solution of (34). So, P*(−γt∇f(x*)) ≠ 0. Moreover, since z = 0 is feasible for (34), the optimal objective value of (34) is smaller than the one attained at z = 0, which implies that ∇f(x*)ᵀP*(−γt∇f(x*)) < 0.

Using the fact that Pχ(−γt∇f(x)) is a continuous function of x and sn for all regular x (see [10]), we can define c̄2, c̄3, c̄4 > 0 and k̄2 such that, for all x ∈ {xki | ki ≥ k̄2}, we have (35).

Now, from (32), (33) and (35), the continuity of C(x) and the feasibility of x*, there exists k̄3 ≥ max{k̄2, kN} such that, whenever x ∈ {xki | ki ≥ k̄3}, ‖dt(H, x, Δ)‖ is bounded and bounded away from zero.

Finally, since dt ∈ 𝒩(A(x)), assumptions A2 and A4 hold, and ‖dt‖ is bounded, we have that, for all x ∈ {xki | ki ≥ k̄3}, the directional derivative ∇Q(sn)ᵀdt differs from ∇f(x)ᵀdt by a term bounded by ḡ3‖C(x)‖, where ḡ3 > 0. Then, (31) follows by defining c1 = c̄4/4 and choosing k1 ≥ k̄3 such that ‖C(x)‖ ≤ c̄4/(4ḡ3).

Using Lemma 5, we state in the next lemma that, in the neighborhood of a feasible, regular and non-stationary point, the decrease of the quadratic model (8) is proportional to the trust region radius Δ.

Lemma 6. Suppose that A2 and A4 hold and that {xki} is an infinite subsequence that converges to the feasible and regular point x* ∈ Ω, which is not stationary for (1). Then there exist c2, k2 > 0 and Δ1 ∈ (0, Δmin) such that

    Q(sn) − Q(sc) ≥ c2 min{Δ, Δ1}                          (36)

for all x ∈ {xki | ki ≥ k2}.

Proof. See Lemma 6.2 of [10].

Now, we are able to present a crucial lemma, derived from Lemma 6.3 of [10], that relates the predicted reduction of the quadratic model to the trust region radius in the neighborhood of a feasible point.

Lemma 7. Suppose that A1, A2 and A4 hold and that {xki} is an infinite subsequence that converges to the feasible and regular point x* ∈ Ω, which is not stationary for (1). Then there exist ε̄, c3, k3 > 0 and Δ1 ∈ (0, Δmin) such that, for ki ≥ k3, if

    φ(x) ≤ ε̄ Δ²,                                           (37)

we have that

    Predopt(H, x, sc) = Q(0) − Q(sc) ≥ c3 min{Δ, Δ1}.      (38)

Proof. By Lemma 6, assumptions A1 and A4 and the convergence of {xki}, we have that

    Q(0) − Q(sc) ≥ Q(sn) − Q(sc) − |Q(0) − Q(sn)| ≥ c2 min{Δ, Δ1} − γ4 ‖C(x)‖

for all x ∈ {xki | ki ≥ k2}, where c2, k2 and Δ1 are defined as in Lemma 6 and γ4 > 0. Therefore, (38) follows if we choose c3 < c2 and k3 ≥ k2 such that ε̄ ≤ α²(c2 − c3)²/γ4², where α = min{1, Δ1/Δ}.

We next examine what happens if D is bounded away from zero and an infinite subsequence of points is added to F.

Lemma 8. Suppose that A1, A2, A3 and A4 hold and that {xki} is an infinite subsequence at which the iterate xki is added to F. Suppose, furthermore, that the limit points of this sequence are feasible and regular, that the restoration always terminates successfully and that Δki ≥ Δ2, where Δ2 is a positive scalar. Then there exists a limit point of this sequence that is a feasible, regular and stationary point for (1).

Proof. From assumption A3, we know that {xki} has a convergent subsequence. To simplify the notation, let us also call this subsequence {xki}, and let us suppose, for the purpose of obtaining a contradiction, that its limit point is not stationary for (1).

From Lemma 3 we know that there exists k̄5 such that, for ki ≥ k̄5,

    φ(xki) ≤ h Δ2² ≤ h Δki².

Thus, a restoration is never called for ki ≥ k̄5. So, the hypothesis that xki is added to F implies that Ared(xki, sc, θ̃) ≥ γg Pred(Hki, xki, sc, θ̃) for some θ̃ ≥ γm, and that one of the inequalities stated at step 1.8.2 of the algorithm is satisfied at iteration ki.

From Lemma 3 and (38), there exists k̄6 ≥ k̄5 such that φ(xki) ≤ ε̄ Δki² and

    Predopt(Hki, xki, sc) ≥ c3 min{Δ1, Δ2}

for all ki ≥ k̄6.

Using Lemma 3 again, we can deduce that there exists k̄7 ≥ k̄6 such that φ(xki) < (c3/κ) min{Δ1, Δ2} and the condition Predopt(Hki, xki, sc) < κ φ(xki) is never satisfied for ki ≥ k̄7.

Therefore, ρk < γr must hold. To show that this is not possible, let us write the inequality Ared(xki, sc, θ̃) ≥ γg Pred(Hki, xki, sc, θ̃) in terms of the optimality and the feasibility parts of the merit function. Using the hypothesis that ρk < γr and the fact that Predφ(xki, sc) ≥ 0, we deduce that the feasibility part must compensate the insufficient reduction of f. Then, taking k4 ≥ k3 (defined in Lemma 7), we deduce from (38) that, for ki ≥ k4,

    (1 − θ̃)(φ(xki) − φ(xki + sc)) ≥ θ̃ (γg − γr) c3 min{Δ1, Δ2}.

But, since γg > γr and limi→∞ φ(xki) = 0, we must have limi→∞ θ̃ = 0, which contradicts the fact that θ̃ ≥ γm. Therefore, {xki} must converge to a stationary point for (1).

Supposing again that Δ is bounded away from zero, we will now complete our analysis by investigating what happens when no iterates are added to F for k sufficiently large.

Lemma 9. Suppose that A1, A2, A3 and A4 hold, that xk is always accepted but Fk remains unchanged for k ≥ k5, and that Δk ≥ Δ3 for some positive Δ3. Suppose also that the limit points of the infinite sequence {xk} are feasible and regular. Then there exists a limit point of {xk} that is a feasible, regular and stationary point of (1).

Proof. Assumption A3 implies that there exists a convergent subsequence {xki}. If the limit point of this subsequence is not stationary for (1), then from Lemma 7, we have

    Predopt(Hki, xki, sc) ≥ c3 min{Δ1, Δ3}                 (39)

for all ki ≥ max{k5, k3}. Moreover, since xki is always accepted and Fk is not changed, we deduce, from step 1.8.2 of the algorithm, that ρk ≥ γr. Therefore,

    f(xki) − f(xki + sc) ≥ γr Predopt(Hki, xki, sc).       (40)

From (39) and (40) we conclude that f(xki) − f(xki + sc) ≥ γr c3 min{Δ1, Δ3} for all ki sufficiently large, which contradicts the compactness assumption A3. Thus, the limit point of {xki} must be stationary.

In the last part of this section, we will discuss the behavior of the algorithm when Δ → 0. We will start by showing that the predicted reduction of the quadratic model is sufficiently large when Δ is small.

Lemma 10. Suppose that A2 and A4 hold and that {xki} is an infinite subsequence that converges to the feasible and regular point x* ∈ Ω, which is not stationary for (1). Suppose also that φ(xki) satisfies (37) and that

    Δki ≤ min{c3/(κ ε̄), Δ1}                                (41)

for ki ≥ k6, where c3, ε̄ and Δ1 are defined as in Lemma 7. Then Predopt(Hki, xki, sc) ≥ κ φ(xki).

Proof. Suppose, for the purpose of obtaining a contradiction, that Predopt(Hki, xki, sc) < κ φ(xki) for some ki ≥ k6. Then, from (37) and (38), we have

    c3 min{Δ, Δ1} ≤ Predopt(Hki, xki, sc) < κ φ(xki) ≤ κ ε̄ Δ²,

which is impossible because of (41). Thus Predopt(Hki, xki, sc) ≥ κ φ(xki) must hold.

The purpose of the next four lemmas is to prove that there exists a sufficiently small trust region radius such that the step is always accepted and Δ is not reduced further at step 1.7.1 of algorithm 1.

The first lemma shows the relation between the predicted reduction of the infeasibility and D.

Lemma 11. Suppose that assumption A1 holds and that xk is not φ-stationary. Then there exist Δ6, c4 > 0 such that

    Predφ(xk, sc) = M(xk, 0) − M(xk, sc) ≥ c4 Δk           (42)

if Δk ∈ (0, Δ6).

Proof. The proof of this lemma is based on the same arguments used to obtain (21). However, here we deal with the reduction of the infeasibility instead of the reduction of the objective function, so some modifications need to be made.

Firstly, we must notice that dn ≠ 0, since we suppose that xk is not φ-stationary. Thus, we can redefine (20) as

    t(Δ) = max{t > 0 | [xk, xk + t dn] ⊂ Ω and ‖t dn‖ ≤ βd Δ}.

Now, using the fact that, for Δ sufficiently small, we have ‖t(Δ)dn‖ = βd Δ, and defining c̄ > 0 as the corresponding rate of decrease of M along dn, there must exist Δ6 ∈ (0, ‖dn‖] such that

    M(xk, 0) − M(xk, t(Δ)dn) ≥ c̄ βd Δ

for all Δ ∈ (0, Δ6). Therefore, for the normal step sn computed at step 1.3.3 of algorithm 1, we have

    M(xk, 0) − M(xk, sn) ≥ βd βm c̄ Δk.

But, since A(xk)sc = A(xk)sn, we deduce from (4) that M(xk, sc) = M(xk, sn), and the desired inequality follows with c4 = βd βm c̄.

In order to prove that xk + sc will be accepted, we need to consider how φ̄k and f̄k are computed. Let us begin by using the previous lemma to show that, for a small Δ, φ̄k, defined in (10), will depend on the predicted reduction of the infeasibility.

Lemma 12. Suppose that A1 holds and that xk is not φ-stationary. Then there exists Δ7 > 0 such that

    γc Predφ(xk, sc) ≥ γf φ(xk),

if φ(xk) ≤ h Δk² and Δk ∈ (0, Δ7).

Proof. Lemma 11 ensures that

    γc Predφ(xk, sc) ≥ γc c4 Δk.

Defining Δ7 = min{γc c4/(γf h), Δ6}, where Δ6 is given in Lemma 11, we have that

    γc Predφ(xk, sc) ≥ γc c4 Δk ≥ γf h Δk² ≥ γf φ(xk)

for all Δ ∈ (0, Δ7), so the desired result follows.

Similarly, we can also show that f̄k, defined in (11), will depend on the predicted reduction Predopt if Δ is sufficiently small.

Lemma 13. Suppose that A1, A2 and A4 hold, that {xki} is an infinite subsequence that converges to the feasible and regular point x* ∈ Ω, which is not stationary for (1), and that φ̄k is given by (10). Then there exists Δ8 > 0 such that

    γf Predopt(Hk, xk, sc) ≥ φ(xk) − φ̄k,

if φ(xk) ≤ min{h, ε̄} Δk² and Δk ∈ (0, Δ8), where ε̄ is defined as in Lemma 7.

Proof. From Lemma 7 we deduce that, if Δk ∈ (0, Δ1], then

    γf Predopt(Hk, xk, sc) ≥ γf c3 Δk.

Now, defining Δ8 = min{γf c3/h, Δ1}, we have

    γf Predopt(Hk, xk, sc) ≥ h Δ8 Δk ≥ h Δk² ≥ φ(xk) ≥ φ(xk) − φ̄k,

and the desired conclusion follows.

Let us now prove that, for an infeasible x, the ratio between the actual and the predicted reduction of the merit function is sufficiently large if Δ is small, no matter how the penalty parameter θ is chosen.

Lemma 14. Suppose that A1, A2 and A4 hold, that θ ∈ [0, 1] and γ ∈ (0, 1) are given parameters and that {xki} is an infinite subsequence that converges to the feasible and regular point x* ∈ Ω, which is not stationary for (1). Then there exists Δ9 > 0 such that, for ki sufficiently large,

    Ared(xki, sc, θ) ≥ γ Pred(Hki, xki, sc, θ)

for all Δki ∈ (0, Δ9), if xki is infeasible.

Proof. Since limi→∞ φ(xki) = 0 and, at the beginning of iteration k, the trust region radius satisfies Δk ≥ Δmin, there must exist k7 ≥ k3 (defined in Lemma 7) such that, for ki ≥ k7, condition (37) is satisfied, so (38) holds. Besides, since xki is infeasible, (42) also holds if we take Δ9 ≤ Δ6 (defined in Lemma 11). Therefore, from the definition of Pred, we have that

    Pred(Hki, xki, sc, θ) ≥ θ c3 min{Δki, Δ1} + (1 − θ) c4 Δki,

where Δ1, c3 and c4 are defined as in Lemmas 7 and 11.

For any θ ∈ [0, 1], the above inequality implies that

    Pred(Hki, xki, sc, θ) ≥ min{Δki, Δ1} min{c3, c4} > 0.

But, from A1 and A4, we also have that

    |Ared(xki, sc, θ) − Pred(Hki, xki, sc, θ)| ≤ c5 Δki²

for some c5 > 0. From the last two inequalities, we deduce that, for Δki ≤ Δ1,

    Ared(xki, sc, θ)/Pred(Hki, xki, sc, θ) ≥ 1 − c5 Δki/min{c3, c4}.

Therefore, defining Δ9 = min{(1 − γ) min{c3, c4}/c5, Δ1, Δ6}, we obtain the required result.

In our last lemma, we will use the previous results to prove that, if Δ → 0, there is no infinite subsequence that converges to a point that is not stationary for (1).

Lemma 15. Suppose that A1, A2, A3 and A4 hold. Suppose also that the limit points of the infinite sequence {xk} are feasible and regular and that limk→∞ Δk = 0. Then there exists a limit point of {xk} that is a stationary point of (1).

Proof. Assumption A3 implies that there exists a convergent subsequence {xki}. Let us suppose, for the purpose of obtaining a contradiction, that the limit point of this subsequence is not stationary for (1).

Now, we need to consider separately two mutually exclusive situations. First, let us suppose that xki is feasible. In this case, Lemma 2 assures that, for Δ < Δ̄4 (defined in (26)), the step is accepted and the trust region radius need not be reduced further.

On the other hand, if xki is not φ-stationary, Lemmas 12 and 13 assure that, for Δki < min{Δ7, Δ8} = Δ10, we have Predφ(xki, sc) ≥ γf φi/γc and Predopt(Hki, xki, sc) ≥ (φi − φ̄i)/γf, so the parameter η̄ used in (12) admits a simple expression, where i is defined in such a manner that φ̄i−1 ≤ φ(x+) < φ̄i. Thus, 0 < η̄ < 1 and the condition f(xki + sc) ≥ P(F̂ki, φ(xki + sc)) is equivalent to (12).

Now, using Lemmas 10 and 14, we can deduce that, for Δki < Δ9, the step is always accepted. Consequently, Δki ≥ αR min{Δ̄4, Δ9, Δ10}, which contradicts the hypothesis that limk→∞ Δk = 0, so we conclude that the limit point of the subsequence {xki} is a stationary point of (1).

Finally, let us state a theorem that puts together all of the results presented so far.

Theorem 1. Suppose that A1, A2, A3 and A4 hold and that {xk} is an infinite sequence generated by Algorithm 1. Then either the restoration converges to a φ-stationary but infeasible point of (1), or limk→∞ φ(xk) = 0. Moreover, if the restoration always succeeds and all of the limit points of {xk} are regular, there exists a limit point x* that is a regular and stationary point for (1). In particular, if all of the φ-stationary points are feasible and regular, then there exists a subsequence of {xk} that converges to a feasible, regular and stationary point of (1).

Proof. This result is a direct consequence of Lemmas 3, 4, 8, 9 and 15.

7 Numerical experience

The description of some steps of Algorithm 1 was intentionally left vague, to suggest to the reader that several alternative implementations are available. In this section, we describe one such possible implementation and present the numerical results obtained by applying the algorithm to some problems from the CUTEr collection [13].

7.1 Algorithmic details

The computational effort of algorithm 1 may be decomposed into three main parts. The first is related to the reduction of the infeasibility and includes steps 1.3.1 to 1.3.3. The aim of the second part, that comprises steps 1.4 to 1.6, is to improve the objective function. Finally, a restoration is called, at step 1.9.1, if the infeasibility needs to be drastically reduced. Each one of these parts is briefly described below.

Taking a closer look at steps 1.3.1 and 1.3.2, one may notice that the vectors dn and s̄n can easily be determined, since the first involves computing a projection onto a box and the second requires only the solution of a one-dimensional problem.

The normal step, sn, can be obtained by solving the bound-constrained least squares problem (18), replacing the word reduce by minimize. In our experiments, the Quacon package [2] was used for this purpose. The computation of sn is declared successful when both conditions

    M(xk, 0) − M(xk, sn) ≥ βm [M(xk, 0) − M(xk, s̄n)]

and

    ‖gp(sn)‖2 ≤ 0.001 ‖gp(0)‖2

are satisfied, where βm = 0.9 and gp is the projected gradient of the quadratic function being minimized. Otherwise, the computation is interrupted when the limit of max{1000, 6n} iterations is reached.
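For readers without access to Quacon, problem (18) can be handed to any bound-constrained least-squares solver, because the box and the trust region are both simple bounds on s. The sketch below swaps in scipy's lsq_linear purely for illustration; it is not the solver used in the experiments reported here.

```python
import numpy as np
from scipy.optimize import lsq_linear

def normal_step(A, Cx, xk, l, u, delta, beta_d=0.8):
    # (18): minimize ||A s + C(xk)||_2 subject to
    #       l <= xk + s <= u and ||s||_inf <= beta_d * delta.
    lb = np.maximum(l - xk, -beta_d * delta)
    ub = np.minimum(u - xk, beta_d * delta)
    return lsq_linear(A, -Cx, bounds=(lb, ub)).x

A = np.array([[1.0, 2.0], [3.0, 4.0]])
xk = np.zeros(2)
sn = normal_step(A, np.array([1.0, 1.0]), xk, -np.ones(2), np.ones(2), 1.0)
print(sn, A @ sn + np.array([1.0, 1.0]))   # step and remaining residual
```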

The vector sc is computed by applying the MINOS 5.4 [19] solver to the quadratic problem (19), stopping the algorithm when both condition (44), stated in terms of the projected gradient gp and of the vector λ of Lagrange multipliers, and condition (36) are satisfied. A limit of max{1000, 3n} iterations also applies.

The vector dt is obtained, at step 1.4, by using the MINOS package to approximately solve the projection problem stated there, with γt = 0.001. Since dt is much more difficult to obtain than dn, we decided to compute this vector only if sc fails to satisfy conditions (36) and (44). In this case, after determining dt and s̄t, sc is computed again.

To determine the restoration step sr, successive normal steps were computed until the conditions stated at step 1.9.1 were satisfied. A more sophisticated restoration procedure could be devised, but this very simple scheme seemed to be satisfactory.

In all of the experiments, exact Hessians were computed, using the Lagrange multipliers supplied by MINOS as the approximate dual variables for problem (1).

Algorithm 1 terminates when ‖C(xk)‖∞ ≤ 10⁻⁶ and one of the conditions

1. ‖∇ℓ(xk, λk)‖∞ ≤ εg max{1, ‖∇ℓ(xtyp, λtyp)‖∞},

2. ‖sc‖∞ ≤ εx max{1, ‖xk‖∞}, or

3. the reductions of f and φ stay below εs and εo, respectively (for three successive iterations),

is satisfied, where εg = 10⁻⁶, εx = 10⁻¹⁸, εs = εo = 10⁻⁸, and xtyp and λtyp are typical primal and dual variables, determined at the first iterations of the algorithm.

The algorithm also stops if, after a restoration step, ‖∇φ(xk + sr)‖ ≤ 10⁻¹⁰. In this case, it is more than likely that a φ-stationary but infeasible point was found.

The remaining parameters used are Δ0 = 10⁵, Δmin = 10⁻⁵, γn = 1.0, γt = 10⁻⁸, γr = 10⁻², γg = 5×10⁻², κ = 10⁻⁴, η = 0.9, αA = 2.5, αR = 0.25, βd = 0.8, βq = 0.9, βr = 16.0, γs = 10⁻⁴, γf = 10⁻⁶, γc = 10⁻³, γm = 10⁻⁴, Δrest = 10⁻¹ and h = 10².

7.2 Algorithm performance

To analyze the behavior of the algorithm just described, a set of 82 small to medium-size problems was extracted from the CUTEr collection [13]. The selected problems are presented in Tables 1, 2 and 3. The number of variables of the original CUTEr problem is given by n, while s is the number of slack variables required to convert the problem to the form shown in (1). The number of constraints is given by m.

The algorithm, hereafter called GMM99, was implemented in FORTRAN 77 and the executable program was generated using the ifort 8.1 compiler, under the Fedora 3 Linux operating system. To evaluate its performance, the algorithm was compared to Lancelot (release B), the well known nonlinear programming package distributed along with the Galahad library [14]. The tests were performed on a Dell Optiplex GX280 computer, using an Intel Pentium 4, 540 processor, with a clock speed of 3.2GHz, 1MB of cache memory, a 800MHz front side bus and the Intel 915G chipset. The Lancelot default parameters were used, except for the maximum number of iterations that was increased to 10000. Exact first and second derivatives were computed by both methods.

The new algorithm failed to obtain a feasible point for problems LUBRIF, ROTDISC and YORKNET. Besides, it also could not find an objective function value as good as the one obtained by Lancelot for problems BRAINPC5, CATENARY, DIXCHLNG, DTOC2, EG3, ELATTAR, NGONE and OPTCTRL3. These 11 occurrences were classified as GMM99 failures.

Lancelot attained the limit of 10000 iterations without finding a feasible point for problems CRESC132, DITTERT, LAUNCH, NET3, OPTMASS and SMMPSF. For problems DRUGDIS, GAUSSELM, HALDMADS, HVYCRASH, READING3 and SINROSNB, the new algorithm found a better objective function value. Finally, for problems BRIDGEND and ROTDISC, Lancelot exceeded the limit of 3000 seconds of running time. These 14 cases were treated as Lancelot failures.

Table 4 shows a summary of the results obtained comparing the new algorithm to Lancelot. According to this table, GMM99 outperformed Lancelot in 47 problems (out of 82), while Lancelot achieved the best performance in 32 cases (including GMM99 failures). As mentioned above, both algorithms failed to solve problem ROTDISC.

Table 5 shows the number of times GMM99 and Lancelot obtained the best solution for each type of objective function and constraint. In this table, the problems are also divided according to their size.

For this specific selection of problems, GMM99 performed better for problems with a linear objective function, while Lancelot was the best option for solving problems with a quadratic function and nonlinear constraints. These results are probably related to the fact that MINOS was used to compute the tangential step in GMM99. In general, GMM99 had a slightly better performance.

The new algorithm was also compared with GMM, the standard trust-region SQP method presented in [10]. The results are shown in Tables 6 and 7.

The GMM algorithm was not able to find a feasible solution for problems CATENARY, CRESC132, LAKES, LUBRIF, ROTDISC and STEENBRC. Moreover, the objective function value at the solution was worse than the value obtained by GMM99 for problems AUG2D, COSHFUN, EIGMINB, HAGER1, HAGER2, HIMMELBI, NGONE and SSNLBEAM. For the OPTCTRL3 problem, the algorithm found a solution too far from the one supplied by Lancelot. These problems were classified as GMM failures.

The 9 GMM99 failures cited in Table 6 include problems LUBRIF, ROTDISC and YORKNET, for which a feasible point was not found, problem OPTCTRL3, because Lancelot obtained a much better solution, and also problems BRAINPC5, ELATTAR, GAUSSELM, HVYCRASH and SINROSNB, for which GMM found a lower objective function value.

Naturally, the definition of failure used here is arguable, since the fact that an algorithm has found a better solution does not imply that the other method failed to find a local optimum. Besides, problems CATENARY, DIXCHLNG, DTOC2, EG3 and NGONE, classified as GMM99 failures when comparing this algorithm to Lancelot, are not considered failures in Table 6, since GMM was not able to find a better solution in any of these cases. Even so, we decided to use the term failure to stress that an algorithm that finds a lower solution performs better than its rival, no matter the time spent to find the solution.

Finally, Table 7 shows how the good performance of GMM and GMM99 is distributed among different types of problems.

Tables 6 and 7 clearly show the superiority of GMM99 over GMM. The new algorithm is more robust and attained the best performance for all of the problem classes. Taking into consideration that GMM also uses MINOS to compute the tangential step and Quacon to compute the normal step, so that the core of the algorithm is the same as that of GMM99, these results suggest that the use of the new acceptance criterion, combined with a restoration procedure, can improve the performance of the SQP algorithm.

For some problems, no points were included in F, so the behaviors of GMM and GMM99 were very similar. Considering the whole set of 82 problems, the mean number of points in F at the last iteration of GMM99 was 5. However, for 7 problems, the cardinality of the F set was greater than or equal to 20.

The restoration was called at least once for 24 problems. The purpose of the restoration is to avoid the excessive reduction of the step size when xk is very infeasible. This strategy proved to be effective for 18 problems, while the performance was not significantly altered for two problems and was slightly worsened for 4 problems.

8 Conclusions

In this paper, we depict the general framework of a new SQP algorithm that combines ideas from both merit functions and the filter introduced by Fletcher and Leyffer in [8]. The use of several penalty parameters, defined automatically by the previous iterates, avoids the premature reduction of θ, as well as the zigzagging that can occur when a non-monotone strategy is used to update this parameter. The new method is also less tolerant than the filter method, since we do not accept points that only marginally reduce the infeasibility or the objective function.

The algorithm presented here is unique in the sense that it combines

  • The use of a piecewise-linear function to accept or reject iterates. This function can be interpreted as a merit function that selects the penalty parameter according to the position of the predicted point in the (φ, f)-plane, but also as a convex, piecewise-linear filter.

  • The effective use of the points in F to define the forbidden region in the (φ, f)-plane. In addition to dominated points, filter-SQP methods cut off the entire northwestern part of the allowed region when the predicted reduction of the objective function is positive. The GMM99 algorithm rejects points if they belong to the forbidden region depicted in Figure 2 or if condition (14) is satisfied. However, this second condition is so mild that it is seldom used in practice.

  • The use of the restoration to prevent the step shortening inherent to methods that use points in the (φ, f)-plane to define the forbidden region.

The results shown in the previous section suggest that the algorithm, as implemented, is superior to the method presented in [10] and slightly better than Lancelot for small to medium-size problems. As future work, we intend to test other strategies for computing the restoration, as well as other solvers for computing the normal and tangential steps, in order to deal with larger problems. Perhaps the library used to solve the quadratic subproblems can be selected dynamically, after knowing the size of the problem and the type of objective function and constraints.

Acknowledgments. The author would like to thank Prof. José Mario Martínez for his insightful comments and Tadao Ando Jr. for the revision of the text.

Received: 05/III/07. Accepted: 12/IV/07.

#716/07.

  • [1] C. Audet and J.E. Dennis, A pattern search filter method for nonlinear programming without derivatives. SIAM J. Optimization, 14 (2004), 980-1010.
  • [2] R.H. Bielschowsky, A. Friedlander, F.A.M. Gomes, J.M. Martínez and M. Raydan, An adaptative algorithm for bound constrained quadratic minimization. Investigación Operativa, 7 (1997), 67-102.
  • [3] R.H. Byrd, J.C. Gilbert and J. Nocedal, A trust region method based on interior point techniques for nonlinear programming. Math. Programming, 89 (2000), 149-185.
  • [4] R.H. Byrd, M.E. Hribar and J. Nocedal, An interior point algorithm for large-scale nonlinear programming. SIAM J. Optimization, 9 (2000), 877-900.
  • [5] C.M. Chin and R. Fletcher, On the global convergence of an SLP-filter algorithm that takes EQP steps. Math. Programming, 96 (2003), 161-177.
  • [6] A. Conn, N.I.M. Gould and P.L. Toint, Trust Region Methods. SIAM, Philadelphia (2000).
  • [7] R. Fletcher, N.I.M. Gould, S. Leyffer, P.L. Toint and A. Wächter, Global convergence of a trust-region SQP-filter algorithm for general nonlinear programming. SIAM J. Optimization, 13 (2002), 635-659.
  • [8] R. Fletcher and S. Leyffer, Nonlinear programming without a penalty function. Math. Programming, 91 (2002), 239-269.
  • [9] R. Fletcher, S. Leyffer and P.L. Toint, On the global convergence of a filter-SQP algorithm. SIAM J. Optimization, 13 (2002), 44-59.
  • [10] F.A.M. Gomes, M.C. Maciel and J.M. Martínez, Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters. Math. Programming, 84 (1999), 161-200.
  • [11] C. Gonzaga, E. Karas and M. Vanti, A globally convergent filter method for nonlinear programming. SIAM J. Optimization, 14 (2003), 646-669.
  • [12] N.I.M. Gould, S. Leyffer and P.L. Toint, A multidimensional filter algorithm for nonlinear equations and nonlinear least-squares. SIAM J. Optimization, 15 (2005), 17-38.
  • [13] N.I.M. Gould, D. Orban and P.L. Toint, CUTEr (and SifDec), a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Software, 29 (2003), 373-394.
  • [14] N.I.M. Gould, D. Orban and P.L. Toint, GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization. ACM Trans. Math. Software, 29 (2003), 353-372.
  • [15] N.I.M. Gould, C. Sainvitu and P.L. Toint, A filter-trust-region method for unconstrained optimization. SIAM J. Optimization, 16 (2006), 341-357.
  • [16] N.I.M. Gould and P.L. Toint, Global convergence of a hybrid trust-region SQP-filter algorithm for general nonlinear programming. in System Modeling and Optimization XX, E. Sachs and R. Tichatschke (eds.), Kluwer, Dordrecht (2003), pp. 23-54.
  • [17] N.I.M. Gould and P.L. Toint, Global convergence of a non-monotone trust-region filter algorithm for non-linear programming. Tech. report RAL-TR-2003-003, CLRC Rutherford Appleton Laboratory, Oxfordshire, UK (2003).
  • [18] W. Hock and K. Schittkowski, Test examples for nonlinear programming codes. Lecture Notes in Economics and Mathematical Systems, 187 (1981), Springer, Heidelberg.
  • [19] B.A. Murtagh and M.A. Saunders, Large-scale linearly constrained optimization. Math. Programming, 14 (1978), 41-72.
  • [20] J. Nocedal and S.J. Wright, Numerical Optimization. Springer, New York (1999).
  • [21] T.D. Plantenga, A trust-region method for nonlinear programming based on primal interior point techniques. SIAM J. Sci. Computing, 20 (1999), 282-305.
  • [22] M. Ulbrich, S. Ulbrich and L.N. Vicente, A globally convergent primal-dual interior point filter method for nonconvex nonlinear programming. Math. Programming, 100 (2004), 379-410.
  • [23] S. Ulbrich, On the superlinear local convergence of a filter-SQP method. Math. Programming, 100 (2004), 217-245.
  • [24] A. Wächter and L.T. Biegler, Line search filter methods for nonlinear programming: motivation and global convergence. SIAM J. Optimization, 16 (2005), 1-31.
  • [25] A. Wächter and L.T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Programming, 106 (2006), 25-57.
* This work was supported by FAPESP grant 2004/05891-1.

¹ We say that a point is regular if the linear independence constraint qualification (LICQ) holds.