
A Note on the McCormick Second-Order Constraint Qualification

ABSTRACT

The study of optimality conditions and constraint qualifications is a key topic in nonlinear optimization. In this work, we present a reformulation of the well-known second-order constraint qualification described by McCormick in [17]. This reformulation is based on the use of feasible arcs, but is independent of Lagrange multipliers. Using such a reformulation, we can show that a local minimizer verifies the strong second-order necessary optimality condition. We can also prove that the reformulation is weaker than the relaxed constant rank constraint qualification introduced in [19]. Furthermore, we demonstrate that the condition is related neither to the joint condition MFCQ + WCR in [8] nor to the CCP2 condition, the companion constraint qualification associated with the second-order sequential optimality condition AKKT2 in [5].

Keywords:
Nonlinear programming; second-order optimality conditions; constraint qualification

1 INTRODUCTION

In this paper, we consider the nonlinear optimization problem of the form:

\text{Minimize } f(x) \quad \text{s.t.} \quad h_i(x) = 0, \ i = 1, \dots, m; \qquad g_j(x) \le 0, \ j = 1, \dots, p, \qquad (1.1)

where the functions f : ℝⁿ → ℝ, h_i : ℝⁿ → ℝ, i = 1, ..., m, and g_j : ℝⁿ → ℝ, j = 1, ..., p, are twice continuously differentiable on ℝⁿ.

We denote by

\Omega = \{ x \in \mathbb{R}^n : h(x) = 0, \ g(x) \le 0 \}

the feasible set of problem (1.1). For each x ∈ Ω, we define A(x) = { j ∈ {1, ..., p} : g_j(x) = 0 }, the index set of active inequality constraints at x.

The notion of optimality and, especially, of how to characterize an optimal solution is crucial for the study of nonlinear optimization problems due to its close relation to the construction of algorithms to find such points.

The best-known first-order analytical optimality condition for (1.1) is the Fritz-John property presented in [9]: given a feasible point x of (1.1), there exist multipliers (μ₀, λ, μ) ∈ ℝ^{1+m+p}, not all zero, such that μ₀ ≥ 0 and

\mu_0 \nabla f(x) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x) + \sum_{j=1}^{p} \mu_j \nabla g_j(x) = 0, \qquad \mu_j \ge 0, \ \mu_j g_j(x) = 0, \ j = 1, \dots, p.

However, the Fritz-John optimality conditions can be satisfied by many points which are not local optimal solutions of the problem when μ₀ = 0. Thus, when an additional regularity condition is assumed on the constraints, the Fritz-John conditions give rise to the most useful and important stationarity property for (1.1): the well-known Karush-Kuhn-Tucker conditions. We say that a feasible point x of problem (1.1) verifies the Karush-Kuhn-Tucker conditions (KKT conditions, [15]) if there exist multipliers (λ, μ) ∈ ℝ^{m+p} such that

\nabla f(x) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x) + \sum_{j=1}^{p} \mu_j \nabla g_j(x) = 0, \qquad \mu_j \ge 0, \ \mu_j g_j(x) = 0, \ j = 1, \dots, p. \qquad (1.2)

The vectors λ and µ presented in (1.2) are known as Lagrange multipliers. The set of vectors (λ,µ) satisfying (1.2) at x is denoted by ∆(x).

A point that verifies (1.2) is a stationary point of the Lagrangian function associated to (1.1):

l(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \lambda_i h_i(x) + \sum_{j=1}^{p} \mu_j g_j(x). \qquad (1.3)
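
As a small numerical illustration (ours, not part of the original development), the following Python sketch checks the stationarity condition (1.2) for the hypothetical problem of minimizing x₁² + x₂² subject to x₁ + x₂ − 1 = 0, whose solution (0.5, 0.5) admits the Lagrange multiplier λ = −1.

    import numpy as np

    # Hypothetical toy problem: minimize x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
    def grad_f(x):
        return np.array([2.0 * x[0], 2.0 * x[1]])

    def grad_h(x):
        return np.array([1.0, 1.0])

    x_star = np.array([0.5, 0.5])   # candidate minimizer
    lam = -1.0                      # candidate Lagrange multiplier

    # Stationarity of the Lagrangian (1.3): grad f(x) + lambda * grad h(x) = 0.
    residual = grad_f(x_star) + lam * grad_h(x_star)
    print(np.allclose(residual, 0.0))   # True: (0.5, 0.5) is a KKT point with lambda = -1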

Unfortunately, as we have already mentioned, (1.2) is not by itself a first-order necessary optimality condition for a local minimizer. First-order constraint qualifications are conditions on the constraints under which it can be claimed that, if x is a local minimizer, then x is a stationary point of the Lagrangian function (1.3). The most widely used first-order constraint qualification is the linear independence of the gradients of the equality and active inequality constraints at a given feasible point (LICQ). It is well known that LICQ implies that ∆(x) is a singleton. There are other, weaker first-order constraint qualifications in the literature, ranging from easily verifiable but somewhat restrictive conditions to very abstract conditions that are difficult to check but are enjoyed by many feasible sets. On the one hand, among the easily verifiable conditions we can mention: the Mangasarian-Fromovitz condition (MFCQ) presented in [16]; the constant-rank constraint qualification (CRCQ) discussed in [13]; the relaxed constant-rank constraint qualification (rCRCQ) shown in [19]; the constant positive linear dependence condition (CPLD) described in [7], [20]; and the relaxed constant positive linear dependence condition (rCPLD) given in [6]. On the other hand, among the more abstract and difficult to check, but weaker, first-order constraint qualifications we can mention: pseudonormality in [9]; quasinormality presented in [12]; the cone continuity property (CCP) described in [5]; Abadie’s CQ shown in [1]; and Guignard’s CQ given in [11].

To check local optimality at the candidate points obtained using the KKT conditions, second-order necessary optimality conditions are studied and developed. These conditions take into account the curvature of the Lagrangian function over critical directions.

Given a KKT point x with multiplier (λ, μ) ∈ ∆(x), we define

C(x) = \left\{ d \in \mathbb{R}^n : \begin{array}{ll} \nabla h_i(x)^T d = 0, & i = 1, \dots, m\\ \nabla g_j(x)^T d = 0, & j \in A(x) : \mu_j > 0\\ \nabla g_j(x)^T d \le 0, & j \in A(x) : \mu_j = 0 \end{array} \right\}

as the critical cone, or cone of critical directions, at x ∈ Ω. We are interested in the so-called strong second-order optimality condition (SSOC) described in [10], [17]: assume that x is a feasible point and (λ, μ) ∈ ∆(x); then SSOC holds at x with multiplier (λ, μ) if

d^T \nabla^2 l(x, \lambda, \mu)\, d \ge 0, \qquad (1.4)

for all directions d ∈ C(x).
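
As a simple illustration (our own example, not taken from the cited works), consider minimizing f(x₁, x₂) = x₂ subject to g(x₁, x₂) = x₁² − x₂ ≤ 0, whose minimizer is x = (0, 0). A short worked computation:

\nabla f(0,0) + \mu \nabla g(0,0) = (0, 1) + \mu\, (0, -1) = 0 \ \Rightarrow\ \mu = 1 > 0, \qquad C(x) = \{ d : \nabla g(0,0)^T d = -d_2 = 0 \} = \{ (d_1, 0) : d_1 \in \mathbb{R} \},

\nabla^2 l(x, \mu) = \nabla^2 f(0,0) + \mu \nabla^2 g(0,0) = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}, \qquad d^T \nabla^2 l(x, \mu)\, d = 2 d_1^2 \ge 0 \ \text{ for all } d \in C(x),

so SSOC (1.4) holds at (0, 0) with the unique multiplier μ = 1.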

It is well established in the literature that, if a local minimizer of (1.1) verifies LICQ, then there is a unique KKT multiplier vector (λ, μ), and SSOC holds for it (see [9]). Strong second-order constraint qualifications are conditions on the constraints under which it can be claimed that, if x is a local minimizer, then x verifies the KKT conditions and there is at least one KKT multiplier vector (λ, μ) that verifies SSOC.

In the last few years, weak constraint qualifications have been studied with the aim of obtaining strong optimality results under mild assumptions.

In [4], the authors have proved that if x is a local minimizer that verifies CRCQ, defined in [13], then SSOC (1.4) holds for all (λ, μ) ∈ ∆(x). In [6], the authors observed that the same result proved with CRCQ can be demonstrated using rCRCQ, presented in [19]: if x is a local minimizer that verifies rCRCQ, then SSOC (1.4) holds for all (λ, μ) ∈ ∆(x).

Recently, in [2], the SSOC has been obtained by means of a “modified” Abadie constraint qualification (see Theorem 3.2 in [2]). However, we note that the “modified” Abadie constraint qualification introduced in [2] is not a proper constraint qualification, as it involves the sign of the multipliers associated with the active inequality constraints in its definition.

In [18], the authors have introduced the notion of critical regularity condition (CRC) and have proved the validity of SSOC at a local minimizer when CRC holds at this point. It is worth mentioning that, even though CRC ensures the existence of Lagrange multipliers at a given solution, it is not a constraint qualification, since its definition depends on the objective function.

Some practical second-order algorithms (see for example [3]) take into account the analysis of the Hessian of the Lagrangian function on the following tangent subspace:

C_0(x) = \left\{ d \in \mathbb{R}^n : \nabla h_i(x)^T d = 0, \ i = 1, \dots, m; \ \nabla g_j(x)^T d = 0, \ j \in A(x) \right\}.

Clearly, for a feasible point x for which ∆(x) ≠ ∅, we have C₀(x) ⊂ C(x), and C₀(x) is independent of the Lagrange multiplier associated with a given KKT point. Considering C₀(x), we can state the so-called weak second-order optimality condition (WSOC): assume that x is a feasible point and (λ, μ) ∈ ∆(x); then WSOC holds at x with multiplier (λ, μ) if

d^T \nabla^2 l(x, \lambda, \mu)\, d \ge 0, \qquad (1.5)

for all d ∈ C₀(x).
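
Since C₀(x) is the null space of the matrix whose rows are the gradients of the equality and active inequality constraints, checking (1.5) amounts to verifying positive semidefiniteness of the Hessian of the Lagrangian reduced to that subspace. A minimal numerical sketch (ours, with hypothetical data) follows.

    import numpy as np
    from scipy.linalg import null_space

    # Hypothetical data at a KKT point x in R^3: the rows of A are the gradients of the
    # equality and active inequality constraints; H is the Hessian of the Lagrangian at x.
    A = np.array([[1.0, 0.0, 0.0]])
    H = np.diag([-1.0, 2.0, 3.0])

    Z = null_space(A)                 # columns of Z span C_0(x)
    reduced = Z.T @ H @ Z             # Hessian of the Lagrangian restricted to C_0(x)
    wsoc_holds = np.all(np.linalg.eigvalsh(reduced) >= -1e-12)
    print(wsoc_holds)                 # True: d^T H d >= 0 for every d in C_0(x)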

In [8], the authors have proved that if x is a local minimizer that satisfies MFCQ and the weak constant-rank condition (WCR, see Definition 3.3), then there exists (λ, μ) ∈ ∆(x) such that WSOC (1.5) holds. Then, in [4], the same result has been proved for all (λ, μ) ∈ ∆(x).

In [5], the authors have introduced the second-order cone-continuity property (CCP2), a second-order constraint qualification strictly weaker than the joint condition MFCQ + WCR [8], CRCQ [13] and rCRCQ [19], which can be used in the global convergence analysis of the second-order algorithms defined in [3], [8], [14]. The CCP2 condition is the companion second-order constraint qualification associated with the sequential second-order optimality condition called AKKT2 (see Definition 3.1 in [5]), and the authors have proved that if x is a local minimizer that satisfies CCP2, then there exists a multiplier (λ, μ) ∈ ∆(x) for which WSOC holds.

In Theorem 4 of the original paper by McCormick [17], it is shown that a local minimizer verifies WSOC under the following first- and second-order constraint qualifications based on arcs:

  • A feasible point x verifies the McCormick first-order constraint qualification (McCormick FOCQ) presented in [17] if, for any nonzero vector d ∈ L(x), where L(x) is the linearized constraint set for Ω given by

L(x) = \{ d \in \mathbb{R}^n : \nabla h_i(x)^T d = 0, \ i = 1, \dots, m; \ \nabla g_j(x)^T d \le 0, \ j \in A(x) \}, \qquad (1.6)

    there exists an arc α(t) contained in the feasible set such that α(0) = x, α′(0) = d, and α is differentiable for all t ∈ [0, δ].

  • A feasible point x verifies the McCormick second-order constraint qualification (McCormick SOCQ) in [17] if for any nonzero vector d ∈ C₀(x) there exists a twice differentiable arc α(t) such that α(0) = x, α′(0) = d and, ∀t ∈ [0, δ],

h_i(\alpha(t)) = 0, \ i = 1, \dots, m; \qquad g_j(\alpha(t)) = 0, \ j \in A(x).

It is worth mentioning that, although C(x) is the cone that explicitly appears in SSOC, the first-order cone of feasible variations L(x) in (1.6) is a more natural approximation of the tangent directions of the feasible set Ω, and C₀(x) ⊂ C(x) ⊂ L(x). The set L(x) is essential to demonstrate the existence of the multipliers at a local minimizer. In fact, it appears in McCormick FOCQ precisely because that condition yields the existence of the Lagrange multipliers at a given solution, but it does not ensure that (1.4) holds. At the same time, McCormick shows that McCormick SOCQ is a second-order CQ which does not imply McCormick FOCQ.

In this work, we present a strong second-order CQ (Definition 2.1) which is a reformulation of the McCormick FOCQ and McCormick SOCQ and is independent of the sign of the Lagrange multipliers associated with a KKT point. Using such a reformulation, we can show that a local minimizer verifies SSOC. We also show that the reformulation is weaker than the rCRCQ described in [19] and that it is independent of (neither implies nor is implied by) the joint condition MFCQ + WCR in [8] and the CCP2 condition presented in [5].

The rest of this paper is organized as follows. In Section 2, we give the formal definition of the reformulation of McCormick FOCQ and McCormick SOCQ. In Section 3, we show the relationships between REF-McCormick and other strong second-order CQs. In Section 4, we present some concluding remarks.

2 THE REFORMULATION OF MCCORMICK FOCQ AND MCCORMICK SOCQ

Definition 2.1. Let x ∈ Ω be a feasible point. We say that x verifies the reformulation of the McCormick FOCQ and McCormick SOCQ (REF-McCormick) if for any nonzero vector d ∈ L(x) there exists a twice differentiable arc α(t), t ∈ [0, δ], such that α(0) = x, α′(0) = d and, ∀t ∈ (0, δ],

\begin{array}{ll} h_i(\alpha(t)) = 0, & i = 1, \dots, m;\\ g_j(\alpha(t)) = 0, & j \in A(x) : \nabla g_j(x)^T d = 0;\\ g_j(\alpha(t)) < 0, & j \in A(x) : \nabla g_j(x)^T d < 0. \end{array} \qquad (2.1)
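
As a trivial illustration (ours), consider the single constraint g(x₁, x₂) = x₁ ≤ 0 in ℝ² and x = (0, 0), so that A(x) = {1} and L(x) = {d : d₁ ≤ 0}. Linear arcs already realize (2.1):

d_1 = 0:\quad \alpha(t) = t\, d \ \Rightarrow\ g(\alpha(t)) = 0 \quad (\text{case } \nabla g(x)^T d = 0); \qquad d_1 < 0:\quad \alpha(t) = t\, d \ \Rightarrow\ g(\alpha(t)) = t d_1 < 0 \quad (\text{case } \nabla g(x)^T d < 0),

so REF-McCormick holds at the origin with twice differentiable (indeed linear) arcs.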

The following theorem establishes that REF-McCormick is a strong second-order constraint qualification. We include the proof here for completeness.

Theorem 2.1. Suppose that x* ∈ Ω is a local minimizer of (1.1) and that REF-McCormick holds at x*. Then x* is a KKT point and, for every (λ, μ) ∈ ∆(x*), (1.4) holds.

Proof. Let us suppose that x* is a local minimizer of (1.1). Then, for every nonzero d ∈ L(x*), by the reformulated condition REF-McCormick, there exists a twice differentiable arc α(t) such that α(0) = x*, α′(0) = d and, ∀t ∈ (0, δ], (2.1) holds. In particular, α(t) is feasible for t > 0 small (the inactive inequality constraints remain negative by continuity), so f(α(t)) ≥ f(x*) and hence ∇f(x*)ᵀd ≥ 0 for every d ∈ L(x*). Then,

-\nabla f(x^*) \in (L(x^*))^{\circ} = \left\{ z \in \mathbb{R}^n : z = \sum_{i=1}^{m} \lambda_i \nabla h_i(x^*) + \sum_{j \in A(x^*)} \mu_j \nabla g_j(x^*), \ \mu_j \ge 0 \ \forall j \in A(x^*) \right\}.

The notation C° indicates the polar cone of C. Therefore, x* is a KKT point.

Let us consider a Lagrange multiplier vector (λ*, μ*) ∈ ∆(x*) and a nonzero direction d ∈ C(x*). Since C(x*) ⊂ L(x*), by REF-McCormick, d is tangent to a twice differentiable arc α(t), t ∈ [0, δ], along which α(0) = x*, α′(0) = d and (2.1) holds.

Let us define ℓ(t) = l(α(t), λ*, μ*). By feasibility and complementarity, we have ℓ(0) = f(x*) + Σ_{i=1}^{m} λ*_i h_i(x*) + Σ_{j=1}^{p} μ*_j g_j(x*) = f(x*). By the KKT conditions,

\ell'(0) = \nabla_x l(x^*, \lambda^*, \mu^*)^T d = 0

and, since \nabla_x l(x^*, \lambda^*, \mu^*) = 0,

\ell''(0) = d^T \nabla_x^2 l(x^*, \lambda^*, \mu^*)\, d.

Thus, using a Taylor expansion of ℓ around t = 0, we obtain

\ell(t) = f(x^*) + \frac{t^2}{2}\, d^T \nabla_x^2 l(x^*, \lambda^*, \mu^*)\, d + o(t^2).

Then, from the definition of ℓ and (2.1), we have ℓ(t) = f(α(t)): along α, h_i(α(t)) = 0 for all i, and μ*_j g_j(α(t)) = 0 for all j, since μ*_j > 0 forces ∇g_j(x*)ᵀd = 0 (because d ∈ C(x*)) and hence g_j(α(t)) = 0. Therefore

f(\alpha(t)) = f(x^*) + \frac{t^2}{2}\, d^T \nabla_x^2 l(x^*, \lambda^*, \mu^*)\, d + o(t^2).

Since x* is a local minimizer, for all t > 0 small enough,

0 \le f(\alpha(t)) - f(x^*) = \frac{t^2}{2}\, d^T \nabla_x^2 l(x^*, \lambda^*, \mu^*)\, d + o(t^2).

Dividing the last inequality by t² and taking the limit as t → 0⁺, we obtain dᵀ∇²_x l(x*, λ*, μ*) d ≥ 0, which completes the proof. □

3 RELATIONS

In this section we present the relationship between REF-McCormick and other well-known second-order CQs.

Definition 3.2. (Ref. [19]) Let x ∈ Ω. We say that the relaxed constant rank constraint qualification (rCRCQ) holds at x if there exists a neighbourhood V of x such that, for every index set J ⊂ A(x), the family of gradients

\{ \nabla h_i(y) \}_{i=1,\dots,m} \cup \{ \nabla g_j(y) \}_{j \in J}

has the same rank for all y ∈ V.

Theorem 3.2. Suppose that x* ∈ Ω and that rCRCQ holds at x*. Then REF-McCormick holds at x*.

Proof. Let us consider a nonzero direction d ∈ L(x*). Without loss of generality, we rename the equality constraints as c_i(x) = h_i(x), i = 1, ..., m, and the inequality constraints as c_{m+j}(x) = g_j(x), j = 1, ..., p.

Define the index set I₀(x*, d) = { j ∈ {1, ..., m+p} : c_j(x*) = 0 and ∇c_j(x*)ᵀd = 0 }.

By the rCRCQ hypothesis, the family of gradients {∇c_i(y)}_{i ∈ I₀(x*, d)} has the same rank for every y in a neighbourhood N of x*. Let us suppose that this rank is l, and denote by I_rCR a set of indices of l linearly independent gradients of the family, so that l = |I_rCR|.

This means that, in N, the l functions {c_i}_{i ∈ I_rCR} have linearly independent gradients. Without loss of generality, we can assume that the first l functions c₁, ..., c_l are those, and that the other functions of the family (if they exist) depend, locally, on c₁, ..., c_l.

Define the vector function c : ℝⁿ → ℝ^l by c(x) = (c₁(x), ..., c_l(x)) and consider C : ℝ^{n+1} → ℝ^l given by

C(y, t) := c(x^* + t d + y). \qquad (3.1)

Thus,

C(0, 0) = c(x^*) = 0.

Moreover, using the chain rule, the Jacobian of C with respect to y is the matrix

C_y(y, t) = Jc(x^* + t d + y)

and, in particular,

C_y(0, 0) = Jc(x^*).

Since the gradients {∇c_i(x*)}_{i ∈ I_rCR} are linearly independent, the matrix C_y(0, 0) has rank l.

Without loss of generality, we can assume that the rank of C_y(0, 0) is equal to l with respect to the first l coordinates of the vector y. Denote y = (y₁, y₂), where y₁ = (y₁, ..., y_l) and y₂ = (y_{l+1}, ..., y_n).

The Implicit Function Theorem (Theorem 2.13 in Spivak [22]) ensures that, near (y, t) = (0, 0), there exists a continuously differentiable implicit function r̄ : ℝ^{n−l+1} → ℝ^l, r̄(y₂, t) = y₁, such that C((r̄(y₂, t), y₂), t) = 0 and r̄(0, 0) = 0. Let us define the function r : (−ε₀, ε₀) → ℝ^l as r(t) = r̄(0, t). Then the curve r(t) is a differentiable arc for which

C((r(t), 0), t) = 0 \qquad (3.2)

and r(0) = 0 hold.

Let us show that r′(0) = 0. Differentiating (3.2), we have C_{y₁}((r(t), 0), t) r′(t) + C_t((r(t), 0), t) = 0 and, taking t = 0,

C_{y_1}(0, 0)\, r'(0) + C_t(0, 0) = 0. \qquad (3.3)

By (3.1), C_t(y, t) = Jc(x^* + t d + y)\, d. Then, since ∇c_i(x*)ᵀd = 0 for every i ∈ I_rCR ⊂ I₀(x*, d),

C_t(0, 0) = Jc(x^*)\, d = 0.

Therefore, since the matrix C_{y₁}(0, 0) has rank l (it is nonsingular), from (3.3) we obtain that r′(0) = 0.

Using r(t), we define, on a suitable open interval containing t = 0, the differentiable arc

\alpha(t) = x^* + t d + (r(t), 0).

By construction, α(0) = x*, α′(0) = d, and we have that

c(\alpha(t)) = c(x^* + t d + (r(t), 0)) = C((r(t), 0), t) = 0

on (−ε, ε). Since the remaining functions c_i with i ∈ I₀(x*, d) depend, locally, on c₁, ..., c_l and vanish at x*, they also vanish along α(t).

If i corresponds to an active inequality constraint and i ∉ I₀(x*, d), we have that ∇c_i(x*)ᵀd < 0. In this case, consider the auxiliary function φ(t) = c_i(α(t)), which satisfies

\varphi(0) = c_i(\alpha(0)) = c_i(x^*) = 0

and

\varphi'(0) = \nabla c_i(x^*)^T \alpha'(0) = \nabla c_i(x^*)^T d < 0.

Then, using Taylor’s theorem, we obtain that there exists ε i > 0 such that ϕ(t) < 0 for all t ∈ (0, ε i ). Taking ε = min{ε i }, we finish the proof. □

Counterexample 1. rCRCQ is strictly stronger than REF-McCormick.

In ℝ², consider (x₁*, x₂*) = (0, 0) and the following inequality constraints:

g_1(x_1, x_2) = x_1; \qquad g_2(x_1, x_2) = x_1 e^{x_2}.

Then,

\nabla g_1(x_1, x_2) = (1, 0), \quad \nabla g_1(0,0) = (1, 0); \qquad \nabla g_2(x_1, x_2) = (e^{x_2}, x_1 e^{x_2}), \quad \nabla g_2(0,0) = (1, 0).

Hence, taking J = {1, 2}, the family {∇g₁(y), ∇g₂(y)} has rank 1 at y = (0, 0) but rank 2 at nearby points with y₁ ≠ 0 (in particular at feasible points with y₁ < 0), so rCRCQ fails.

For any nonzero vector d ∈ L(0, 0) = {(d₁, d₂) : d₁ ≤ 0, d₂ ∈ ℝ}, we consider the following two cases.

In the first case, we take the directions d = (0, d₂). We propose the curve α(t) ∈ C² given by α(t) = (0, d₂ t), t ∈ [0, δ], which satisfies α(0) = (0, 0) and α′(0) = d. Then g₁(α(t)) = 0 and ∇g₁(0, 0)ᵀd = 0, ∀t ∈ (0, δ].

For the second constraint, we have ∇g₂(0, 0)ᵀd = 0 and g₂(α(t)) = 0, ∀t ∈ (0, δ].

In the second case, we consider d = (d₁, d₂) with d₁ < 0. Then, the curve α(t) ∈ C² given by α(t) = (d₁ t, d₂ t), t ∈ [0, δ], satisfies α(0) = (0, 0) and α′(0) = d. Furthermore, since g₁(α(t)) = d₁ t, we obtain ∇g₁(0, 0)ᵀd = d₁ < 0 and g₁(α(t)) < 0, ∀t ∈ (0, δ].

For g₂, we have g₂(α(t)) = d₁ t e^{d₂ t}. Then ∇g₂(0, 0)ᵀd = d₁ < 0 and g₂(α(t)) = d₁ t e^{d₂ t} < 0, ∀t ∈ (0, δ].

Hence, REF-McCormick holds.
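
A quick numerical check (ours) of the rank behaviour behind the failure of rCRCQ in this example:

    import numpy as np

    # Gradients of g1(x) = x1 and g2(x) = x1 * exp(x2), stacked as rows.
    def grads(x1, x2):
        return np.array([[1.0, 0.0],
                         [np.exp(x2), x1 * np.exp(x2)]])

    print(np.linalg.matrix_rank(grads(0.0, 0.0)))    # 1 at the point of interest
    print(np.linalg.matrix_rank(grads(-1e-3, 0.0)))  # 2 at a nearby feasible point, so rCRCQ fails
    print(np.linalg.matrix_rank(grads(0.0, 1e-3)))   # 1 along the arc alpha(t) = (0, d2 t)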

Definition 3.3. (Ref. [8]) Let x ∈ Ω. We say that the weak constant rank condition (WCR) holds at x if there is a neighbourhood V of x such that the matrix formed by the gradients

\{ \nabla h_i(y) \}_{i=1,\dots,m} \cup \{ \nabla g_j(y) \}_{j \in A(x)}

has the same rank for all y ∈ V.

Definition 3.4. (Ref. [16]) We say that x ∈ Ω satisfies the Mangasarian-Fromovitz constraint qualification (MFCQ) if the gradients {∇h_i(x)}_{i=1,...,m} are linearly independent and there exists a vector d ∈ ℝⁿ such that ∇h_i(x)ᵀd = 0, i = 1, ..., m, and ∇g_j(x)ᵀd < 0, j ∈ A(x).

The following counterexamples show that MFCQ + WCR and REF-McCormick are independent.

Counterexample 2. REF-McCormick does not imply WCR + MFCQ.

In ℝ², consider (x₁*, x₂*) = (0, 0) and the inequality constraints defined by

g_1(x_1, x_2) = x_1; \quad g_2(x_1, x_2) = x_1 - x_2^2; \quad g_3(x_1, x_2) = x_1 + x_2; \quad g_4(x_1, x_2) = -x_1 - x_2.

Then,

\nabla g_1(x_1, x_2) = (1, 0); \quad \nabla g_2(x_1, x_2) = (1, -2x_2); \quad \nabla g_3(x_1, x_2) = (1, 1); \quad \nabla g_4(x_1, x_2) = (-1, -1);

and L(0, 0) = {(d₁, −d₁) : d₁ ≤ 0}. For any nonzero vector d ∈ L(0, 0), consider the curve α(t) ∈ C², α(t) = (t d₁, −t d₁), t ∈ [0, δ], which verifies α(0) = (0, 0), α′(0) = d and, ∀t ∈ (0, δ],

\begin{array}{ll} g_1(\alpha(t)) = t d_1 < 0, & \nabla g_1(0,0)^T d < 0;\\ g_2(\alpha(t)) = t d_1 (1 - t d_1) < 0, & \nabla g_2(0,0)^T d < 0;\\ g_3(\alpha(t)) = 0, & \nabla g_3(0,0)^T d = 0;\\ g_4(\alpha(t)) = 0, & \nabla g_4(0,0)^T d = 0. \end{array}

Clearly, REF-McCormick holds. On the other hand, MFCQ does not hold, since ∇g₄(0, 0) = −∇g₃(0, 0), so no direction d can satisfy ∇g₃(0, 0)ᵀd < 0 and ∇g₄(0, 0)ᵀd < 0 simultaneously.
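
The failure of MFCQ can also be verified numerically: a standard test (our sketch, using a small linear program) maximizes a slack s subject to ∇g_j(0, 0)ᵀd + s ≤ 0 over a box; since there are no equality constraints here, MFCQ holds at the origin if and only if the optimal slack is positive.

    import numpy as np
    from scipy.optimize import linprog

    # Gradients of the active constraints of Counterexample 2 at (0, 0), as rows.
    G = np.array([[1.0, 0.0],     # grad g1(0, 0)
                  [1.0, 0.0],     # grad g2(0, 0)
                  [1.0, 1.0],     # grad g3(0, 0)
                  [-1.0, -1.0]])  # grad g4(0, 0)

    # Variables (d1, d2, s): minimize -s subject to G d + s <= 0, |d_i| <= 1, 0 <= s <= 1.
    c = np.array([0.0, 0.0, -1.0])
    A_ub = np.hstack([G, np.ones((G.shape[0], 1))])
    b_ub = np.zeros(G.shape[0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-1, 1), (-1, 1), (0, 1)])
    print(res.x[-1] > 1e-8)       # False: the optimal slack is 0, so MFCQ fails at the origin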

Counterexample 3. MFCQ + WCR does not imply REF-McCormick.

Consider the following example given in [18]. In ℝ³, consider (x₁*, x₂*, x₃*) = (0, 0, 0) and the following inequality constraints:

g_1(x_1, x_2, x_3) = -x_1 + x_2 - x_3^2; \quad g_2(x_1, x_2, x_3) = -x_1 - x_2; \quad g_3(x_1, x_2, x_3) = -x_1; \quad g_4(x_1, x_2, x_3) = -x_1^2 - x_2^2 - x_3^2 + x_3.

Then,

\begin{array}{ll} \nabla g_1(x_1,x_2,x_3) = (-1, 1, -2x_3), & \nabla g_1(0,0,0) = (-1, 1, 0);\\ \nabla g_2(x_1,x_2,x_3) = (-1, -1, 0), & \nabla g_2(0,0,0) = (-1, -1, 0);\\ \nabla g_3(x_1,x_2,x_3) = (-1, 0, 0), & \nabla g_3(0,0,0) = (-1, 0, 0);\\ \nabla g_4(x_1,x_2,x_3) = (-2x_1, -2x_2, -2x_3 + 1), & \nabla g_4(0,0,0) = (0, 0, 1). \end{array}

It is easy to see that WCR + MFCQ holds: for instance, d = (1, 0, −1) satisfies ∇g_j(0, 0, 0)ᵀd < 0 for j = 1, ..., 4, and the rank of the family {∇g_j(y)}_{j=1,...,4} equals 3 for every y in a neighbourhood of the origin.

We will see that REF-McCormick does not hold.

We have that L(0, 0, 0) = {(d₁, d₂, d₃) : d₁ ≥ 0, −d₁ ≤ d₂ ≤ d₁, d₃ ≤ 0}. Let us take a generic arc α(t) = (α₁(t), α₂(t), α₃(t)) for t ∈ [0, δ].

Let us consider the direction d = (0, 0, d₃) with d₃ < 0. Then ∇g_i(0, 0, 0)ᵀd = 0 for i = 1, 2, 3 and ∇g₄(0, 0, 0)ᵀd = d₃ < 0.

As the arc has to be feasible for t ∈ [0, δ] and (2.1) must hold, the equalities g₁(α(t)) = g₂(α(t)) = g₃(α(t)) = 0 must be verified, ∀t ∈ (0, δ].

However, g₃(α(t)) = 0 gives α₁(t) = 0; then g₂(α(t)) = 0 gives α₂(t) = 0; and finally g₁(α(t)) = −α₃(t)² = 0 gives α₃(t) = 0. Hence the only admissible arc is the null one, with α′(0) = 0 ≠ d, a contradiction. Therefore, REF-McCormick fails.

In [5], the authors have presented the CCP2 condition defined below.

Let us consider x* ∈ Ω. For x ∈ ℝn define the cone

C^W(x, x^*) = \left\{ d \in \mathbb{R}^n : \nabla h_i(x)^T d = 0, \ i = 1, \dots, m; \ \nabla g_j(x)^T d = 0, \ j \in A(x^*) \right\}

and denote by K₂^W(x) the following set:

K_2^W(x) = \left\{ \left( \sum_{i=1}^{m} \lambda_i \nabla h_i(x) + \sum_{j \in A(x^*)} \mu_j \nabla g_j(x),\ H \right) : \lambda \in \mathbb{R}^m, \ \mu_j \ge 0, \ H \succeq \sum_{i=1}^{m} \lambda_i \nabla^2 h_i(x) + \sum_{j \in A(x^*)} \mu_j \nabla^2 g_j(x) \ \text{on } C^W(x, x^*) \right\},

where, for symmetric matrices A and B, we write A ⪰ B on C^W(x, x*) if dᵀAd ≥ dᵀBd for all d ∈ C^W(x, x*). The set K₂^W(x) is a convex cone contained in ℝⁿ × Sym(n), where Sym(n) denotes the set of symmetric matrices of order n [5].

Definition 3.5. (Ref. [5]) We say that x* ∈ Ω satisfies the second-order cone-continuity property (CCP2) if the set-valued mapping (multifunction) x ⇉ K₂^W(x) is outer semicontinuous at x*, that is,

\limsup_{x \to x^*} K_2^W(x) \subset K_2^W(x^*).

The authors proved that CCP2 is less stringent than MFCQ + WCR and rCRCQ. In the following counterexamples, we show that REF-McCormick neither implies nor is implied by CCP2.

Counterexample 4. REF-McCormick does not imply CCP2.

Consider the following example given in [21], where it is proved that CCP2 fails. In ℝ², consider (x₁*, x₂*) = (0, 0) and the inequality constraints given by

g_1(x_1, x_2) = -x_1; \qquad g_2(x_1, x_2) = x_1 + \max\{x_1, 0\}^2 e^{x_2^2}.

Then,

\nabla g_1(x_1, x_2) = (-1, 0); \qquad \nabla g_2(x_1, x_2) = \left(1 + 2\max\{x_1, 0\}\, e^{x_2^2},\ 2 x_2 \max\{x_1, 0\}^2 e^{x_2^2}\right), \quad \nabla g_2(0, 0) = (1, 0);

and L(0, 0) = {(0, d₂) : d₂ ∈ ℝ}. For any nonzero vector d ∈ L(0, 0) there exists the curve α(t) ∈ C², α(t) = (0, d₂ t), ∀t ∈ [0, δ], such that α(0) = (0, 0), α′(0) = d and, ∀t ∈ (0, δ],

g_1(\alpha(t)) = 0, \quad \nabla g_1(0,0)^T d = 0; \qquad g_2(\alpha(t)) = 0, \quad \nabla g_2(0,0)^T d = 0.

Clearly, REF-McCormick holds.

Counterexample 5. CCP2 does not imply REF-McCormick. In ℝ², consider the following example given in [5]: (x₁*, x₂*) = (0, 0) and the equality and inequality constraints

h_1(x_1, x_2) = x_1; \qquad g_1(x_1, x_2) = -x_1^2 + x_2; \qquad g_2(x_1, x_2) = -x_1^2 + x_2^3.

We have

\nabla h_1(x_1, x_2) = (1, 0), \quad \nabla h_1(0,0) = (1, 0); \qquad \nabla g_1(x_1, x_2) = (-2x_1, 1), \quad \nabla g_1(0,0) = (0, 1); \qquad \nabla g_2(x_1, x_2) = (-2x_1, 3x_2^2), \quad \nabla g_2(0,0) = (0, 0).

As a result, we see that rCRCQ fails at (x₁*, x₂*) (for J = {2}, the rank of {∇h₁(y), ∇g₂(y)} is 1 at the origin and 2 at nearby points with y₂ ≠ 0). Now, since C^W((x₁, x₂), (0, 0)) = {(0, 0)} for every (x₁, x₂), we get K₂^W(x₁, x₂) = ℝ × ℝ₊ × Sym(2). Clearly, the multifunction x ⇉ K₂^W(x) is outer semicontinuous on ℝ² and CCP2 holds.

We will see that REF-McCormick does not hold. We have that L(0, 0) = {(0, d₂) : d₂ ≤ 0}. Let us take d = (0, d₂) with d₂ < 0 and a generic arc α(t) = (α₁(t), α₂(t)) for t ∈ [0, δ]. As the arc has to be feasible and (2.1) must hold, the equalities h₁(α(t)) = g₂(α(t)) = 0 must be verified, ∀t ∈ (0, δ].

But h₁(α(t)) = 0 gives α₁(t) = 0, and then g₂(α(t)) = α₂(t)³ = 0 gives α₂(t) = 0, so the arc is null and α′(0) = 0 ≠ d, a contradiction. Therefore, REF-McCormick fails.

In Figure 1, we show the relationship between the CQs discussed in this article.

Figure 1:
Relationship of second-order CQs. An arrow between two CQs means that one is strictly stronger than the other.

4 FINAL REMARKS

In the present paper, we have introduced the condition called REF-McCormick, a second-order constraint qualification which unifies the McCormick FOCQ and McCormick SOCQ conditions presented in [17]. Using REF-McCormick, we have proved that a local minimizer verifies SSOC. We have also shown that REF-McCormick is weaker than the strong second-order constraint qualification rCRCQ described in [19]. Furthermore, we have demonstrated that REF-McCormick is independent of the CCP2 and MFCQ + WCR conditions, which imply WSOC.

Acknowledgments

This work has been partially supported by ANPCyT (Grants PICT 2016-0921 and PICT 2019 - 02172), Argentina. We are deeply indebted to the anonymous referee whose insightful comments helped us to improve the quality of the paper.

REFERENCES

  • 1
    J. Abadie. On the Kuhn-Tucker theorem. In J. Abadie (editor), “Nonlinear Programming”. North-Holland, Amsterdam (1967).
  • 2
    R. Andreani, R. Behling, G. Haeser & P.J. Silva. On second-order optimality conditions in nonlinear optimization. Optimization Methods and Software, 32 (2017), 22-38.
  • 3
    R. Andreani, E.G. Birgin, J.M. Martínez & M.L. Schuverdt. Second-order negative-curvature methods for box-constrained and general constrained optimization. Computational Optimization and Applications, 45 (2010), 209-263.
  • 4
    R. Andreani, C.E. Echagüe & M.L. Schuverdt. Constant-Rank Condition and Second-Order Constraint Qualification. Journal of Optimization Theory and Applications, 146 (2010), 255-266.
  • 5
    R. Andreani, G. Haeser, A. Ramos & P. Silva. A second-order sequential optimality condition associated to the convergence of optimization algorithms. IMA Journal of Numerical Analysis, 37 (2017), 1902-1929.
  • 6
    R. Andreani, G. Haeser, M.L. Schuverdt & P.J.S. Silva. A relaxed constant positive linear dependence constraint qualification and applications. Mathematical Programming, 135 (2012), 255-273.
  • 7
    R. Andreani, J.M. Martínez & M.L. Schuverdt. On the relation between constant positive linear dependence condition and quasinormality constraint qualification. Journal of Optimization Theory and Applications, 125 (2005), 473-485.
  • 8
    R. Andreani, J.M. Martínez & M.L. Schuverdt. On second-order optimality conditions for Nonlinear Programming. Optimization, 56 (2007), 529-542.
  • 9
    D. Bertsekas. “Nonlinear Programming”. 2nd edition, Athena Scientific, Belmont (1999).
  • 10
    A. Fiacco & G. McCormick. “Nonlinear Programming: Sequential Unconstrained Minimization Techniques”. John Wiley, New York (1968).
  • 11
    M. Guignard. Generalized Kuhn-Tucker conditions for mathematical programming problems in a Banach space. SIAM Journal on Control, 7 (1969), 232-241.
  • 12
    M. Hestenes. “Optimization Theory: The Finite Dimensional Case”. John Wiley (1975).
  • 13
    R. Janin. Directional derivative of the marginal function in nonlinear programming. Mathematical Programming Studies, 21 (1984), 127-138.
  • 14
    P.E. Gill, V. Kungurtsev & D. Robinson. A stabilized SQP method: global convergence. IMA Journal of Numerical Analysis, 37 (2017), 407-443.
  • 15
    H. Kuhn & A. Tucker. Nonlinear programming. In J. Neyman (editor), “Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability”. University of California Press, Berkeley (1951), p. 481-492.
  • 16
    O. Mangasarian & S. Fromovitz. The Fritz John necessary optimality conditions in presence of equality and inequality constraints. Journal of Mathematical Analysis and Applications, 17(1) (1967), 37-47.
  • 17
    G.P. McCormick. Second Order Conditions for Constrained Minima. SIAM Journal on Applied Mathematics, 15 (1967), 641-652.
  • 18
    L. Minchenko & A. Leschov. On strong and weak second-order necessary optimality conditions for nonlinear programming. Optimization, 65 (2016), 1693-1702.
  • 19
    L. Minchenko & S. Stakhovski. On relaxed constant rank regularity condition in mathematical programming. Optimization: A Journal of Mathematical Programming and Operations Research, 60 (2011), 429-440.
  • 20
    L. Qi & Z. Wei. On the constant positive linear dependence condition and its application to SQP methods. SIAM Journal on Optimization, 10 (2000), 963-981.
  • 21
    A. Ramos. “Tópicos em Condições de Otimalidade para Otimização não Linear”. Ph.D. thesis, IME-USP, Departamento de Matemática Aplicada, São Paulo-SP, Brazil (2016).
  • 22
    M. Spivak. “Cálculo en Variedades”. Editorial Reverté (1970).

Publication Dates

  • Publication in this collection
    14 Nov 2022
  • Date of issue
    Oct-Dec 2022

History

  • Received
    27 Sept 2021
  • Accepted
    30 June 2022