
A Full Rank Condition for Continuous-Time Optimization Problems with Equality and Inequality Constraints

ABSTRACT

First and second order necessary optimality conditions of the Karush-Kuhn-Tucker type are established for continuous-time optimization problems with equality and inequality constraints. A full rank type regularity condition along with a uniform implicit function theorem are used in order to achieve such necessary conditions.

Keywords:
continuous-time programming; necessary optimality conditions; constraint qualifications


1 INTRODUCTION

We are concerned with the general nonlinear continuous-time optimization problem with equality and inequality constraints in the form

\[
\begin{aligned}
\text{maximize}\quad & P(z) = \int_0^T \phi(z(t),t)\,dt\\
\text{subject to}\quad & h(z(t),t) = 0 \ \ \text{a.e. } t \in [0,T],\\
& g(z(t),t) \ge 0 \ \ \text{a.e. } t \in [0,T],\\
& z \in L^\infty([0,T];\mathbb{R}^n),
\end{aligned}
\qquad (1.1)
\]

where $\phi : \mathbb{R}^n \times [0,T] \to \mathbb{R}$, $h : \mathbb{R}^n \times [0,T] \to \mathbb{R}^p$ and $g : \mathbb{R}^n \times [0,T] \to \mathbb{R}^m$, $p + m \le n$. Here, $L^\infty([0,T];\mathbb{R}^n)$ denotes the Banach space of all Lebesgue-measurable essentially bounded $n$-dimensional vector functions defined on the compact interval $[0,T]$, with the norm $\|\cdot\|_\infty$ defined by

\[
\|z\|_\infty = \max_{1 \le i \le n}\ \operatorname*{ess\,sup}_{t \in [0,T]} |z_i(t)|.
\]

All vectors are column vectors; transposes are denoted by a prime. All integrals are in the Lebesgue sense.

Continuous-time problems arise often in the literature and were first proposed by Bellman [2, 3] in his studies of some dynamic models of production and inventory called “bottleneck processes”, which gave rise to continuous-time linear programming. Such problems can be posed as

\[
\begin{aligned}
\text{maximize}\quad & P(z) = \int_0^T a' z(t)\,dt\\
\text{subject to}\quad & z(t) \ge 0,\ \ 0 \le t \le T,\\
& B z(t) \le c + \int_0^t K z(s)\,ds,\ \ 0 \le t \le T,\\
& z \in L^\infty([0,T];\mathbb{R}^n),
\end{aligned}
\]

where B and K are m × n matrices, a is an n-vector and c is an m-vector. Considering a certain dynamic generalization of an ordinary linear programming problem, he formulated a corresponding dual problem, established a weak duality theorem, and suggested some computational procedures. Subsequently, Bellman’s formulation and duality theory were substantially extended to more general forms of continuous-time linear programming problems, and also to certain classes of continuous-time nonlinear programming problems. For a summary of the results pertaining to duality theory in continuous-time programming and a fairly extensive list of relevant references the reader is referred to Zalmai [24].

Optimality conditions of the Karush-Kuhn-Tucker type were first considered in continuous-time programming by Hanson and Mond [14] for the following linearly constrained nonlinear program:

\[
\begin{aligned}
\text{maximize}\quad & P(z) = \int_0^T \phi(z(t))\,dt\\
\text{subject to}\quad & z(t) \ge 0,\ \ 0 \le t \le T,\\
& B(t) z(t) \le c(t) + \int_0^t K(s,t)\, z(s)\,ds,\ \ 0 \le t \le T,\\
& z \in L^\infty([0,T];\mathbb{R}^n),
\end{aligned}
\]

where $B(t)$ is an $m \times n$ piece-wise continuous matrix on $[0,T]$, $c(t) \in \mathbb{R}^m$ is piece-wise continuous on $[0,T]$, $K(s,t)$ is an $m \times n$ piece-wise continuous matrix on $[0,T] \times [0,T]$ and $\phi$ is a given twice continuously differentiable concave scalar function. For this purpose, certain positivity conditions on $B(t)$, $c(t)$ and $K(s,t)$ were imposed and the objective function was linearized in order to apply an extended version of Levinson’s linear duality result [15]. A duality theorem for the nonlinear problem under consideration is then established, and the Karush-Kuhn-Tucker conditions are deduced as a consequence of this nonlinear duality theorem. In this way, many other authors obtained necessary and sufficient conditions for continuous-time problems with nonlinear inequality constraints, for example, Farr and Hanson [12, 13] and Reiland and Hanson [20].

Roughly speaking, optimality conditions for continuous-time nonlinear programming problems have been obtained by direct methods. In Abrham and Buie [1] a certain regularity assumption is used to establish the Karush-Kuhn-Tucker conditions for a class of convex programming problems. Reiland [19], employing a continuous-time version of Zangwill’s constraint qualification [26] introduced in [20], and an infinite-dimensional form of Farkas’ theorem [5], established optimality conditions and duality relations for differentiable continuous-time programs. In Zalmai [23], by means of a geometric approach along with a generalized Gordan’s theorem, optimality conditions of Fritz John and Karush-Kuhn-Tucker types are obtained.

Brandão, Rojas-Medar and Silva tackled nonsmooth continuous-time optimization problems, first in [21], where sufficient conditions were obtained, and then in [4], which refers to necessary conditions. In [8], de Oliveira and Rojas-Medar generalized the concepts of KKT-invexity and WD-invexity introduced by Martin [18] for mathematical programming problems, proving that the notion of KKT-invexity is a necessary and sufficient condition for global optimality of a Karush-Kuhn-Tucker point and that the notion of WD-invexity is a necessary and sufficient condition for weak duality. The same authors established KKT-invexity for nonsmooth continuous-time programming problems in [9]. The multi-objective case was considered in [7]. de Oliveira [6] also studied multi-objective continuous-time programming problems, but without imposing any differentiability assumption; saddle point type optimality conditions, duality theorems as well as results on the scalarization method were presented, and the concept of pre-invexity was utilized.

It is worth mentioning that optimality conditions of Karush-Kuhn-Tucker type for problems defined between abstract spaces cannot be applied to problem (1.1). One can assume that, for each feasible solution $z$, $t \mapsto g(z(t),t)$ and $t \mapsto h(z(t),t)$ are maps in $L^\infty$ or $L^1$ (or in the space $\Lambda^1$ as in Zalmai [23]). However, in the first case, the Lagrange multipliers would belong to the topological dual of $L^\infty$, which is a space with a complicated nature. In the second case, although we know the dual of $L^1$, the positive cone has empty interior. In general, in the literature on abstract optimization (see Luenberger [16], for instance), it is assumed that such a cone has a non-empty interior.
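For completeness, we recall the standard argument showing that the positive cone of $L^1$ has empty interior. Given any $f \ge 0$ in $L^1([0,T])$ and any $\varepsilon > 0$, choose $c > 0$ such that $|\{t : f(t) < c\}| > 0$ (possible since $f$ is finite almost everywhere) and a measurable set $E \subset \{t : f(t) < c\}$ with $0 < |E| < \varepsilon/c$. Then
\[
g = f - c\,\mathbf{1}_E \quad\text{satisfies}\quad \|f - g\|_1 = c\,|E| < \varepsilon \quad\text{and}\quad g < 0 \ \text{on } E,
\]
so every $L^1$-ball centered at $f$ contains functions outside the cone.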

In the formulation given in (1.1), where equality and inequality constraints are present and the feasible solutions belong to $L^\infty([0,T];\mathbb{R}^n)$, necessary optimality conditions are not found in the literature. We believe this is due to the fact that, until relatively recently, a crucial tool for the treatment of equality constraints, namely the uniform implicit function theorem, was not available. When fixing $t$, one can apply the classical implicit function theorem; but the implicit function thereby obtained may not have good properties with respect to $t$, such as measurability, for example. Such a result only appeared in 1997, in a paper by de Pinho and Vinter [11]. On the other hand, in $L^\infty([0,T];\mathbb{R}^n)$ and with inequality constraints only, there are a large number of references in the literature, as cited above. In the case of formulations in other spaces, we cite Zalmai [25], for instance, where the feasible solutions are in a Hilbert space.

In this work, by means of the uniform implicit function theorem presented by de Pinho and Vinter [11] and the use of a full rank type condition, we obtain first and second order necessary optimality conditions for continuous-time programming problems with equality and inequality constraints. Let us compare the full rank condition considered here with some constraint qualifications found in the literature. For example, Reiland [19] presented a condition which is analogous to Zangwill’s condition [26]. This condition is weaker than the full rank condition. However, it is very difficult to verify, especially in infinite dimensions, since it involves the calculation of the closure of the cone of feasible directions. Furthermore, standing alone, this constraint qualification does not guarantee the validity of the Karush-Kuhn-Tucker conditions for an optimal solution. It is necessary to assume a Slater type condition (Reiland calls it a Slater condition, but, in fact, it is a Mangasarian-Fromovitz type condition) along with some (strong) regularity conditions, which require that the range of a certain operator (defined between infinite dimensional spaces) be closed and that its kernel be of finite dimension. Zalmai [23] obtained Karush-Kuhn-Tucker type optimality conditions by making use of a Karlin type constraint qualification, which, in turn, is equivalent to the Slater constraint qualification. There are problems which satisfy the Slater condition but do not satisfy the full rank condition, and vice-versa (see Example 4.1). However, in general, the Slater constraint qualification is difficult to verify since it requires the convexity (or concavity) of the constraints. It is important to say that the constraint qualifications cited above were defined in [19] and [23] for continuous-time problems with inequality constraints, while the full rank condition used here can be applied to problems with both equality and inequality constraints.

The paper is organized as follows. In Section 2, we give some preliminaries. In Section 3, we consider problems with equality constraints only. In Section 4, the general case is treated. Finally, in Section 5, concluding remarks are presented.

2 PRELIMINARIES

The unit ball with center at the origin will be denoted by $B$ regardless of the dimension of the space. The Euclidean norm will be denoted by $\|\cdot\|$. The same symbol will be used to denote matrix norms induced by the Euclidean vector norm.

We denote by $\delta P(z;\gamma)$ the Fréchet derivative of $P$ at $z$ with increment $\gamma \in L^\infty([0,T];\mathbb{R}^n)$.

We denote by

\[
\Omega = \left\{ z \in L^\infty([0,T];\mathbb{R}^n) \ \middle|\ h(z(t),t) = 0,\ \ g(z(t),t) \ge 0 \ \ \text{a.e. } t \in [0,T] \right\}
\]

the feasible set of problem (1.1). For simplicity, given $\bar z \in \Omega$, we will write
\[
\bar\phi(t) = \phi(\bar z(t),t) \quad\text{and}\quad \nabla\bar\phi(t) = \nabla\phi(\bar z(t),t) \quad \text{a.e. } t \in [0,T],
\]
as well as for $h$, $\nabla h$, $g$, $\nabla g$ and their components. We set the index sets $I = \{1,\dots,p\}$ and $J = \{1,\dots,m\}$ and we define, for almost every $t \in [0,T]$, the index set of all binding constraints at $\bar z \in \Omega$ as
\[
I_a(t) = \{ j \in J \mid \bar g_j(t) = 0 \},
\]

and $I_c(t) = J \setminus I_a(t)$, its complement. For almost every $t \in [0,T]$, let $q_a(t)$ and $q_c(t)$ be the cardinalities of $I_a(t)$ and $I_c(t)$, respectively.

Definition 2.1. We say that $\bar z \in \Omega$ is a local optimal solution of problem (1.1) if there exists $\varepsilon > 0$ such that $P(\bar z) \ge P(z)$ for all $z \in \Omega$ satisfying $\|z - \bar z\|_\infty < \varepsilon$.

Let $\{F_a : \mathbb{R}^n \to \mathbb{R}^n \mid a \in A\}$ be a family of maps parameterized by points $a$ in a subset $A \subset \mathbb{R}^k$. If $\nabla F_a$ is nonsingular at some point $x_0$ for all $a \in A$, we know by the classical inverse mapping theorem that, for each $a$, there exists some neighborhood of $x_0$ on which $F_a$ is smoothly invertible. The following uniform inverse mapping theorem (de Pinho and Vinter [11], Proposition 4.1) and, consequently, the uniform implicit function theorem ([11], Corollary 4.2), which will play important roles in the proofs of the results of Sections 3 and 4, give conditions under which the same neighborhood of $x_0$ can be chosen for all $a \in A$.

Proposition 2.1 (Uniform Implicit Function Theorem [11]). Consider a set $A \subset \mathbb{R}^k$, a number $\alpha > 0$, a family of functions
\[
\{\psi_a : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^n \mid a \in A\},
\]
and a point $(u_0,v_0) \in \mathbb{R}^m \times \mathbb{R}^n$ such that $\psi_a(u_0,v_0) = 0$ for all $a \in A$. Assume that:

(i) $\psi_a$ is continuously differentiable on $(u_0,v_0) + \alpha B$, uniformly in $a \in A$;

(ii) there exists a monotone increasing function $\theta : [0,\infty) \to [0,\infty)$, with $\theta(s) \downarrow 0$ as $s \downarrow 0$, such that
\[
\|\nabla\psi_a(\tilde u,\tilde v) - \nabla\psi_a(u,v)\| \le \theta\big(\|(\tilde u,\tilde v) - (u,v)\|\big)
\]
for all $a \in A$ and $(u,v), (\tilde u,\tilde v) \in (u_0,v_0) + \alpha B$;

(iii) $\nabla_v \psi_a(u_0,v_0)$ is nonsingular for each $a \in A$ and there exists $c > 0$ such that
\[
\left\| \left[ \nabla_v \psi_a(u_0,v_0) \right]^{-1} \right\| \le c \quad \text{for all } a \in A.
\]

Then there exist $\delta > 0$ and a family of continuously differentiable functions
\[
\{\varphi_a : u_0 + \delta B \to v_0 + \alpha B \mid a \in A\},
\]
which are Lipschitz continuous with a common Lipschitz constant $K$, such that
\[
v_0 = \varphi_a(u_0),\ a \in A, \qquad \psi_a(u,\varphi_a(u)) = 0,\ u \in u_0 + \delta B,\ a \in A,
\]
and
\[
\nabla_u \varphi_a(u_0) = -\left[ \nabla_v \psi_a(u_0,v_0) \right]^{-1} \nabla_u \psi_a(u_0,v_0).
\]
The numbers $\delta$ and $K$ depend only on $\theta(\cdot)$, $c$ and $\alpha$. Furthermore, if $A$ is a Borel set and $a \mapsto \psi_a(u,v)$ is a Borel measurable function for each $(u,v) \in (u_0,v_0) + \alpha B$, then $a \mapsto \varphi_a(u)$ is a Borel measurable function for each $u \in u_0 + \delta B$.
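To see why the uniform bound in (iii) cannot be dropped, consider the following simple illustrative family (not taken from [11]): let $A = (0,1]$, $\psi_a(u,v) = av - u$ and $(u_0,v_0) = (0,0)$. Conditions (i) and (ii) hold trivially, since the derivatives are constant, and each $\nabla_v\psi_a(0,0) = a$ is nonsingular, but
\[
\left\|\left[\nabla_v\psi_a(0,0)\right]^{-1}\right\| = \frac{1}{a}
\]
is unbounded on $A$. The implicit functions $\varphi_a(u) = u/a$ do exist for each $a$, yet they admit no common Lipschitz constant on any ball $u_0 + \delta B$, so the uniform conclusion of Proposition 2.1 indeed fails.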

3 KKT CONDITIONS FOR PROBLEMS WITH EQUALITY CONSTRAINTS

In this section, we will consider the continuous-time programming problem with equality constraints only. The general case is postponed to the next section. We start, instead, with the unconstrained continuous-time problem; the necessary optimality conditions for unconstrained problems will be used later in the proof of the main result of this section.

Consider the unconstrained continuous-time problem, namely,

\[
\begin{aligned}
\text{maximize}\quad & P(z) = \int_0^T \phi(z(t),t)\,dt\\
\text{subject to}\quad & z \in L^\infty([0,T];\mathbb{R}^n).
\end{aligned}
\qquad (3.1)
\]

Assume that

(H1) $\phi(\cdot,t)$ is twice continuously differentiable throughout $[0,T]$; $\phi(z,\cdot)$ is measurable for each $z$ and there exists $K_\phi > 0$ such that
\[
\|\nabla\phi(z(t),t)\| \le K_\phi \quad \text{a.e. } t \in [0,T].
\]

Proposition 3.2. If $\bar z$ is a local optimal solution for problem (3.1), then
\[
\nabla\bar\phi(t) = 0 \quad \text{a.e. } t \in [0,T]
\]
and
\[
\int_0^T \gamma(t)'\,\nabla^2\bar\phi(t)\,\gamma(t)\,dt \le 0 \quad \text{for all } \gamma \in L^\infty([0,T];\mathbb{R}^n).
\]

Proof. Let $\gamma \in L^\infty([0,T];\mathbb{R}^n)$. From the local optimality of $\bar z$, there exists $\bar\tau > 0$ such that $P(\bar z) \ge P(\bar z + \tau\gamma)$ for all $\tau \in [0,\bar\tau]$.

By the first order Taylor expansion in Banach spaces (see [17]) we have that
\[
0 \ge P(\bar z + \tau\gamma) - P(\bar z) = \tau\,\delta P(\bar z;\gamma) + \varepsilon(\tau),
\]
where $\varepsilon(\tau)/\tau \to 0$ as $\tau \to 0$. Dividing both sides by $\tau > 0$ and taking limits as $\tau \downarrow 0$ we have that $\delta P(\bar z;\gamma) \le 0$. Similarly, $\delta P(\bar z;-\gamma) \le 0$. Therefore, $\delta P(\bar z;\gamma) = 0$, that is,
\[
\int_0^T \nabla\phi(\bar z(t),t)'\gamma(t)\,dt = 0 \quad \text{for all } \gamma \in L^\infty([0,T];\mathbb{R}^n).
\]

From the last equality we see that $\nabla\phi(\bar z(t),t) = 0$ for almost every $t \in [0,T]$.
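In detail, since $t \mapsto \nabla\phi(\bar z(t),t)$ belongs to $L^\infty([0,T];\mathbb{R}^n)$ by (H1), we may take $\gamma(t) = \nabla\phi(\bar z(t),t)$ in the last equality, which gives
\[
\int_0^T \|\nabla\phi(\bar z(t),t)\|^2\,dt = 0,
\]
and hence $\nabla\phi(\bar z(t),t) = 0$ almost everywhere on $[0,T]$.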

By the second order Taylor expansion, we can write
\[
0 \ge P(\bar z + \tau\gamma) - P(\bar z) = \tau\,\delta P(\bar z;\gamma) + \tfrac{1}{2}\tau^2\,\delta^2 P(\bar z;(\gamma,\gamma)) + \varepsilon(\tau) = \tfrac{1}{2}\tau^2\,\delta^2 P(\bar z;(\gamma,\gamma)) + \varepsilon(\tau),
\]
where $\varepsilon(\tau)/\tau^2 \to 0$ as $\tau \to 0$. Dividing both sides by $\tau^2 > 0$ and taking limits as $\tau \downarrow 0$, we obtain
\[
\tfrac{1}{2}\,\delta^2 P(\bar z;(\gamma,\gamma)) \le 0 \ \Longleftrightarrow\ \int_0^T \gamma(t)'\,\nabla^2\phi(\bar z(t),t)\,\gamma(t)\,dt \le 0.
\]

The proof is complete.
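A one-dimensional illustration of Proposition 3.2 (an instance chosen here only for illustration): take $n = 1$, $T = 1$ and $\phi(z,t) = -(z - t)^2$, so that
\[
P(z) = \int_0^1 -(z(t) - t)^2\,dt \le 0 \quad \text{for all } z \in L^\infty([0,1];\mathbb{R}),
\]
with equality exactly at $\bar z(t) = t$. At this solution, $\nabla\bar\phi(t) = -2(\bar z(t) - t) = 0$ and $\nabla^2\bar\phi(t) = -2$, so $\int_0^1 \gamma(t)\,\nabla^2\bar\phi(t)\,\gamma(t)\,dt = -2\int_0^1 \gamma(t)^2\,dt \le 0$ for every $\gamma$, as asserted.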

Now, consider the continuous-time problem with equality constraints:

\[
\begin{aligned}
\text{maximize}\quad & P(z) = \int_0^T \phi(z(t),t)\,dt\\
\text{subject to}\quad & h(z(t),t) = 0 \ \ \text{a.e. } t \in [0,T],\\
& z \in L^\infty([0,T];\mathbb{R}^n).
\end{aligned}
\qquad (3.2)
\]

In this case,

\[
\Omega = \left\{ z \in L^\infty([0,T];\mathbb{R}^n) \ \middle|\ h(z(t),t) = 0 \ \ \text{a.e. } t \in [0,T] \right\}.
\]

Let us recall that $\phi : \mathbb{R}^n \times [0,T] \to \mathbb{R}$ and $h : \mathbb{R}^n \times [0,T] \to \mathbb{R}^p$, $p \le n$. Given $\varepsilon > 0$ and $\bar z \in \Omega$, we assume that, in addition to (H1), the following hypotheses are valid:

(H2) $h(z,\cdot)$ is measurable for each $z$ and $h(\cdot,t)$ is twice continuously differentiable on $\bar z(t) + \varepsilon\bar B$ for almost every $t \in [0,T]$.

(H3) There exists an increasing function $\tilde\theta : [0,\infty) \to [0,\infty)$, $\tilde\theta(s) \downarrow 0$ as $s \downarrow 0$, such that, for all $\tilde z, z \in \bar z(t) + \varepsilon\bar B$,
\[
\|\nabla h(\tilde z,t) - \nabla h(z,t)\| \le \tilde\theta(\|\tilde z - z\|) \quad \text{a.e. } t \in [0,T].
\]
There exists $K_0 > 0$ such that
\[
\|\nabla\bar h(t)\| \le K_0 \quad \text{a.e. } t \in [0,T].
\]

(H4) There exists $K > 0$ such that
\[
\det\left[\nabla\bar h(t)\,\nabla\bar h(t)'\right] \ge K \quad \text{a.e. } t \in [0,T].
\]

Remark 1. Hypothesis (H4) guarantees that the rows of $\nabla\bar h(t)$, formed by the gradient vectors $\nabla\bar h_i(t)$, $i \in I$, are linearly independent for almost every $t \in [0,T]$. Moreover, along with (H3), it also guarantees that the norm of $[\nabla\bar h(t)\nabla\bar h(t)']^{-1}$ is uniformly bounded, a property required in the application of the uniform implicit function theorem. See the proposition below.

Proposition 3.3. Consider a subset $A \subset \mathbb{R}^k$ and a family $\{M_a\}_{a \in A}$ of $p \times p$ matrices such that
\[
\det M_a \ge K,\ a \in A, \quad\text{and}\quad \|M_a\| \le L,\ a \in A,
\]
for some $K, L > 0$. Then there exists $C > 0$ such that
\[
\|M_a^{-1}\| \le C,\ a \in A.
\]

Proof. Consider the singular value decomposition
\[
M_a = U_a \Sigma_a V_a^{-1},\ a \in A,
\]
where $U_a$ and $V_a$ are $p \times p$ orthogonal matrices for all $a \in A$ and $\Sigma_a = \mathrm{diag}(\sigma_i(a))_{i=1}^p$ are diagonal matrices with the singular values ordered, without loss of generality, in decreasing order:
\[
\sigma_1(a) \ge \sigma_2(a) \ge \dots \ge \sigma_p(a) > 0,\ a \in A.
\]
Thus,
\[
L \ge \|M_a\| = \|U_a \Sigma_a V_a^{-1}\| = \|\Sigma_a\|,\ a \in A,
\]
so that
\[
\sigma_i(a) \le \max_{1 \le i \le p}\sigma_i(a) = \|\Sigma_a\| \le L,\ a \in A,\ i = 1,\dots,p,
\]
which, in turn, implies that
\[
\prod_{i=1}^{p-1}\sigma_i(a) \le L^{p-1},\ a \in A.
\]
On the other hand,
\[
\det M_a = \prod_{i=1}^{p}\sigma_i(a) \ge K,\ a \in A \ \Longrightarrow\ \sigma_p(a) \ge K\left[\prod_{i=1}^{p-1}\sigma_i(a)\right]^{-1} \ge \frac{K}{L^{p-1}},\ a \in A.
\]
Therefore,
\[
\|M_a^{-1}\| = \|V_a \Sigma_a^{-1} U_a^{-1}\| = \|\Sigma_a^{-1}\| = \max_{1 \le i \le p}\frac{1}{\sigma_i(a)} = \frac{1}{\sigma_p(a)} \le \frac{L^{p-1}}{K},\ a \in A,
\]
which concludes the proof with $C = L^{p-1}/K$.
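A simple illustration of the bound, and of its sharpness: for $p = 2$ take $M_a = \mathrm{diag}(L,\sigma_a)$ with $\sigma_a \in (0,L]$. Then $\|M_a\| = L$ and $\det M_a = L\sigma_a$, so the hypothesis $\det M_a \ge K$ forces $\sigma_a \ge K/L$ and
\[
\|M_a^{-1}\| = \frac{1}{\sigma_a} \le \frac{L}{K} = \frac{L^{p-1}}{K},
\]
with equality when $L\sigma_a = K$. Hence the constant $C = L^{p-1}/K$ cannot be improved in general.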

We are now in a position to state and prove the main result of this section: first order necessary optimality conditions of Karush-Kuhn-Tucker type and second order conditions for problem (3.2) under the full rank condition (H4).

Theorem 3.1. Let $\bar z$ be a local optimal solution for problem (3.2) and suppose that (H1)-(H4) hold. Then there exists $u \in L^\infty([0,T];\mathbb{R}^p)$ such that
\[
\nabla\bar\phi(t) + \sum_{i=1}^{p} u_i(t)\nabla\bar h_i(t) = 0 \quad \text{a.e. } t \in [0,T], \qquad (3.3)
\]
and
\[
\int_0^T \gamma(t)'\left[\nabla^2\bar\phi(t) + \sum_{i=1}^{p} u_i(t)\nabla^2\bar h_i(t)\right]\gamma(t)\,dt \le 0 \quad \text{for all } \gamma \in N, \qquad (3.4)
\]
where $N$ is given by
\[
N = \left\{ \gamma \in L^\infty([0,T];\mathbb{R}^n) \ \middle|\ \nabla\bar h(t)\gamma(t) = 0 \ \ \text{a.e. } t \in [0,T] \right\}.
\]

Proof. Let $\bar z$ be a local optimal solution of problem (3.2) on an $\varepsilon$-neighbourhood. The proof is divided into several steps.

STEP 1: We define a map that satisfies the conditions of Proposition 2.1. Let $S_0 \subset [0,T]$ be the set of all $t$ at which at least one of the conditions in (H1)-(H4) fails. We know from the assumptions that $S_0$ has Lebesgue measure zero. It follows from (Rudin [22], p. 309) that there exists a Borel set $S$, which is the intersection of a countable collection of open sets, such that $S_0 \subset S$ and $S \setminus S_0$ has Lebesgue measure zero. Hence $S$ is a Borel set of Lebesgue measure zero, so that $[0,T] \setminus S$ has full measure. In Proposition 2.1, identify the Borel set $[0,T] \setminus S$ with $A$, $t$ with $a$, $(\xi,\eta)$ with $(u,v)$ and $(0,0)$ with $(u_0,v_0)$.

Define $\mu : \mathbb{R}^n \times \mathbb{R}^p \times [0,T] \to \mathbb{R}^p$ as
\[
\mu(\xi,\eta,t) = h(\bar z(t) + \xi + \nabla\bar h(t)'\eta,\, t).
\]
Let us check that the assumptions of Proposition 2.1 are fulfilled. First note that, setting $\alpha = \min\{\varepsilon/2,\ \varepsilon/(2K_0)\}$, we have that
\[
\|\bar z(t) + \xi + \nabla\bar h(t)'\eta - \bar z(t)\| = \|\xi + \nabla\bar h(t)'\eta\| \le \|\xi\| + \|\nabla\bar h(t)'\|\cdot\|\eta\| \le \varepsilon,
\]

whenever $(\xi,\eta) \in (0,0) + \alpha B$. We also have that
\[
\mu(0,0,t) = h(\bar z(t),t) = 0 \quad \text{a.e. } t \in [0,T].
\]

Let $(\tilde\xi,\tilde\eta), (\xi,\eta) \in (0,0) + \alpha B$ and $t \in A$. Then,
\[
\begin{aligned}
\|\nabla\mu(\tilde\xi,\tilde\eta,t) - \nabla\mu(\xi,\eta,t)\|
&= \left\| \left[ \nabla h(\bar z(t)+\tilde\xi+\nabla\bar h(t)'\tilde\eta,t) - \nabla h(\bar z(t)+\xi+\nabla\bar h(t)'\eta,t) \right] \left[\, I_n \ \ \nabla\bar h(t)' \,\right] \right\|\\
&\le \left\| \nabla h(\bar z(t)+\tilde\xi+\nabla\bar h(t)'\tilde\eta,t) - \nabla h(\bar z(t)+\xi+\nabla\bar h(t)'\eta,t) \right\| \cdot \left\| \left[\, I_n \ \ \nabla\bar h(t)' \,\right] \right\|\\
&\le \tilde\theta\big(\|\tilde\xi - \xi + \nabla\bar h(t)'(\tilde\eta - \eta)\|\big)\,(1 + K_0)\\
&\le \tilde\theta\big(\|\tilde\xi - \xi\| + K_0\|\tilde\eta - \eta\|\big)\,(1 + K_0)\\
&\le \tilde\theta\big(\|(\tilde\xi - \xi, \tilde\eta - \eta)\| + K_0\|(\tilde\xi - \xi, \tilde\eta - \eta)\|\big)\,(1 + K_0)\\
&= \theta\big(\|(\tilde\xi,\tilde\eta) - (\xi,\eta)\|\big),
\end{aligned}
\]
where $\theta : [0,\infty) \to [0,\infty)$, $\theta(s) = (1+K_0)\,\tilde\theta(s + K_0 s)$, is a monotone increasing function such that $\theta(s) \to 0$ as $s \to 0$. We have that

\[
\nabla_\eta\mu(0,0,t) = \nabla\bar h(t)\nabla\bar h(t)' \quad \text{a.e. } t \in [0,T].
\]

Thus, by hypothesis (H4), $\nabla_\eta\mu(0,0,t)$ is nonsingular for each $t \in A$. By making use of (H3), it follows from Proposition 3.3 that there exists $M > 0$ such that
\[
\left\|\left[\nabla\bar h(t)\nabla\bar h(t)'\right]^{-1}\right\| \le M \quad \text{a.e. } t \in [0,T]. \qquad (3.5)
\]

By Proposition 2.1 there exist $\sigma \in (0,\varepsilon)$, $\delta \in (0,\varepsilon)$ and an implicit function $d : \sigma B \times A \to \delta B$ such that $d(\xi,\cdot)$ is measurable for fixed $\xi$, the functions of the family $\{d(\cdot,t) \mid t \in A\}$ are Lipschitz continuous with a common Lipschitz constant, $d(\cdot,t)$ is continuously differentiable for each $t \in A$, and, for almost every $t \in [0,T]$,

\[
d(0,t) = 0, \qquad (3.6)
\]
\[
\mu(\xi, d(\xi,t), t) = 0, \quad \xi \in \sigma B, \qquad (3.7)
\]
\[
\nabla d(0,t) = -\left[\nabla\bar h(t)\nabla\bar h(t)'\right]^{-1}\nabla\bar h(t). \qquad (3.8)
\]

Choose $\sigma_1 > 0$ and $\delta_1 > 0$ such that
\[
\sigma_1 \in \left(0, \min\{\sigma, \varepsilon/2\}\right), \quad \delta_1 \in \left(0, \min\{\delta, \varepsilon/2\}\right), \quad \sigma_1 + K_0\delta_1 \in (0, \varepsilon/2), \qquad (3.9)
\]
where $K_0$ is given by (H3). In the following steps and without loss of generality, we consider the implicit function $d$ defined on $\sigma_1 B \times [0,T]$ and taking values in $\delta_1 B$.

STEP 2: We show that if $\bar z$ is a local optimal solution of (3.2), then it is a local optimal solution of the following auxiliary problem:
\[
\begin{aligned}
\text{maximize}\quad & \tilde P(z) = \int_0^T \varphi(z(t),t)\,dt\\
\text{subject to}\quad & z \in L^\infty([0,T];\mathbb{R}^n),
\end{aligned}
\qquad (3.10)
\]

where $\varphi(z(t),t) = \phi(z(t) + \nabla\bar h(t)'d(z(t)-\bar z(t),t),\, t)$. Indeed, suppose that $\tilde z$, with $\|\tilde z - \bar z\|_\infty < \sigma_2$ for arbitrary $0 < \sigma_2 < \sigma_1$, is a feasible solution of problem (3.10) such that $\tilde P(\tilde z) > \tilde P(\bar z)$. Consider

\[
\hat z(t) = \tilde z(t) + \nabla\bar h(t)'d(\tilde z(t) - \bar z(t), t) \quad \text{a.e. } t \in [0,T].
\]

Using (3.9) and (H3), we have that
\[
\|\hat z(t) - \bar z(t)\| = \|\tilde z(t) - \bar z(t) + \nabla\bar h(t)'d(\tilde z(t)-\bar z(t),t)\| \le \|\tilde z(t) - \bar z(t)\| + \|\nabla\bar h(t)'\|\cdot\|d(\tilde z(t)-\bar z(t),t)\| < \sigma_1 + K_0\delta_1 < \varepsilon.
\]

As $\|\tilde z - \bar z\|_\infty < \sigma_1$, using the definition of $\mu$ we have that, for almost every $t \in [0,T]$,
\[
\mu(\tilde z(t)-\bar z(t),\, d(\tilde z(t)-\bar z(t),t),\, t) = 0 \ \Longleftrightarrow\ h(\tilde z(t) + \nabla\bar h(t)'d(\tilde z(t)-\bar z(t),t),\, t) = 0,
\]

that is, $h(\hat z(t),t) = 0$ for almost every $t \in [0,T]$. But
\[
P(\hat z) = \tilde P(\tilde z) > \tilde P(\bar z) = P(\bar z),
\]

contradicting the fact that z¯ is a local optimal solution of (3.2).

STEP 3: Applying Proposition 3.2 to problem (3.10), we have that, for almost every $t \in [0,T]$,
\[
\begin{aligned}
0 = \nabla\varphi(\bar z(t),t) &= \left[I_n + \nabla\bar h(t)'\nabla d(0,t)\right]'\nabla\phi(\bar z(t) + \nabla\bar h(t)'d(0,t),\, t)\\
&= \nabla\phi(\bar z(t),t) + \nabla d(0,t)'\nabla\bar h(t)\nabla\phi(\bar z(t),t)\\
&= \nabla\phi(\bar z(t),t) + \nabla h(\bar z(t),t)'\left[-\left(\nabla\bar h(t)\nabla\bar h(t)'\right)^{-1}\nabla\bar h(t)\nabla\phi(\bar z(t),t)\right]\\
&= \nabla\phi(\bar z(t),t) + \nabla h(\bar z(t),t)'u(t)\\
&= \nabla\phi(\bar z(t),t) + \sum_{i=1}^{p}u_i(t)\nabla h_i(\bar z(t),t),
\end{aligned}
\]
where
\[
u(t) = -\left[\nabla\bar h(t)\nabla\bar h(t)'\right]^{-1}\nabla\bar h(t)\nabla\phi(\bar z(t),t) \quad \text{a.e. } t \in [0,T].
\]

Observe that $u$ is unique and that
\[
\|u(t)\| \le M K_0 K_\phi \quad \text{a.e. } t \in [0,T],
\]
by hypotheses (H1) and (H3) and by (3.5), so that $u \in L^\infty([0,T];\mathbb{R}^p)$.
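In detail, by the submultiplicativity of the induced matrix norm,
\[
\|u(t)\| \le \left\|\left[\nabla\bar h(t)\nabla\bar h(t)'\right]^{-1}\right\|\cdot\|\nabla\bar h(t)\|\cdot\|\nabla\phi(\bar z(t),t)\| \le M K_0 K_\phi \quad \text{a.e. } t \in [0,T].
\]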

Now, since $\phi(\cdot,t)$ and $h(\cdot,t)$ are twice continuously differentiable on $\bar z(t) + \varepsilon\bar B$ throughout $[0,T]$, it follows directly from its definition that $\mu(\cdot,\cdot,t)$ is twice continuously differentiable for almost every $t \in [0,T]$. Consequently, from Proposition 2.1, $d(\cdot,t)$ is twice continuously differentiable in a neighborhood of $\xi = 0$ (for simplicity, consider this neighborhood to be $\sigma_1 B$). By Proposition 3.2 we have that

\[
\int_0^T \gamma(t)'\,\nabla^2\varphi(\bar z(t),t)\,\gamma(t)\,dt \le 0 \quad \text{for all } \gamma \in L^\infty([0,T];\mathbb{R}^n).
\]

Let us calculate $\nabla^2\varphi$. Writing, to shorten the notation, $y(t) = z(t) + \nabla\bar h(t)'d(z(t)-\bar z(t),t)$, we have that
\[
\nabla\varphi(z(t),t) = \left[I_n + \nabla\bar h(t)'\nabla d(z(t)-\bar z(t),t)\right]'\nabla\phi(y(t),t) = \nabla\phi(y(t),t) + \nabla d(z(t)-\bar z(t),t)'\nabla\bar h(t)\,\nabla\phi(y(t),t),
\]
where
\[
\nabla d(z(t)-\bar z(t),t)'\nabla\bar h(t) = \sum_{i=1}^{p}\nabla d_i(z(t)-\bar z(t),t)\,\nabla\bar h_i(t)' \quad \text{a.e. } t \in [0,T].
\]

Putting $z = \bar z$ and using (3.8) results in
\[
-\nabla\bar h(t)'\left[\nabla\bar h(t)\nabla\bar h(t)'\right]^{-1}\nabla\bar h(t) = \nabla d(0,t)'\nabla\bar h(t) = \sum_{i=1}^{p}\nabla d_i(0,t)\,\nabla\bar h_i(t)' \quad \text{a.e. } t \in [0,T].
\]

From the expression for $\nabla\varphi$, we obtain
\[
\nabla^2\varphi(z(t),t) = \left[I_n + \nabla d(z(t)-\bar z(t),t)'\nabla\bar h(t)\right]\nabla^2\phi(y(t),t)\left[I_n + \nabla\bar h(t)'\nabla d(z(t)-\bar z(t),t)\right] + \sum_{i=1}^{p}\left[\nabla\bar h_i(t)'\nabla\phi(y(t),t)\right]\nabla^2 d_i(z(t)-\bar z(t),t).
\]

In particular, for $z = \bar z$ and $\gamma \in N$, since $\nabla\bar h(t)\gamma(t) = 0$ implies $\left[I_n + \nabla\bar h(t)'\nabla d(0,t)\right]\gamma(t) = \gamma(t)$, it follows, for almost every $t \in [0,T]$, that
\[
\gamma(t)'\,\nabla^2\varphi(\bar z(t),t)\,\gamma(t) = \gamma(t)'\left[\nabla^2\bar\phi(t) + \sum_{i=1}^{p}\nabla^2 d_i(0,t)\,\nabla\bar h_i(t)'\nabla\bar\phi(t)\right]\gamma(t),
\]
and integrating from $0$ to $T$ one has
\[
\int_0^T \gamma(t)'\left[\nabla^2\bar\phi(t) + \sum_{i=1}^{p}\nabla^2 d_i(0,t)\,\nabla\bar h_i(t)'\nabla\bar\phi(t)\right]\gamma(t)\,dt \le 0. \qquad (3.11)
\]

On the other hand, since
\[
\mu_i(\xi, d(\xi,t), t) = h_i(\bar z(t) + \xi + \nabla\bar h(t)'d(\xi,t),\, t) \quad \text{a.e. } t \in [0,T],\ i \in I,
\]

we have
\[
\begin{aligned}
\nabla\left[\mu_i(\xi, d(\xi,t), t)\right] &= \left[I_n + \nabla\bar h(t)'\nabla d(\xi,t)\right]'\nabla h_i(\bar z(t)+\xi+\nabla\bar h(t)'d(\xi,t),\, t)\\
&= \nabla h_i(\bar z(t)+\xi+\nabla\bar h(t)'d(\xi,t),\, t) + \nabla d(\xi,t)'\nabla\bar h(t)\,\nabla h_i(\bar z(t)+\xi+\nabla\bar h(t)'d(\xi,t),\, t), \quad i \in I.
\end{aligned}
\]

By (3.7), for almost every $t \in [0,T]$ and for each $i \in I$, we get
\[
\begin{aligned}
0 = \nabla^2\left[\mu_i(\xi, d(\xi,t), t)\right] &= \left[I_n + \nabla d(\xi,t)'\nabla\bar h(t)\right]\nabla^2 h_i(\bar z(t)+\xi+\nabla\bar h(t)'d(\xi,t),\, t)\left[I_n + \nabla\bar h(t)'\nabla d(\xi,t)\right]\\
&\quad + \sum_{j=1}^{p}\left[\nabla\bar h_j(t)'\nabla h_i(\bar z(t)+\xi+\nabla\bar h(t)'d(\xi,t),\, t)\right]\nabla^2 d_j(\xi,t).
\end{aligned}
\]

We now put $\xi = 0$ in the last expression, multiply it by $u_i(t)$ for almost every $t \in [0,T]$, sum from $i = 1$ to $p$ and evaluate the resulting quadratic form at $\gamma \in N$. Since $\nabla\bar h(t)\gamma(t) = 0$, this yields, for almost every $t \in [0,T]$,
\[
0 = \gamma(t)'\left[\sum_{i=1}^{p}u_i(t)\nabla^2\bar h_i(t)\right]\gamma(t) + \gamma(t)'\left[\sum_{j=1}^{p}\nabla^2 d_j(0,t)\,\nabla\bar h_j(t)'\sum_{i=1}^{p}u_i(t)\nabla\bar h_i(t)\right]\gamma(t).
\]
Integrating from $0$ to $T$ gives
\[
\int_0^T \gamma(t)'\left[\sum_{i=1}^{p}u_i(t)\nabla^2\bar h_i(t) + \sum_{j=1}^{p}\nabla^2 d_j(0,t)\,\nabla\bar h_j(t)'\sum_{i=1}^{p}u_i(t)\nabla\bar h_i(t)\right]\gamma(t)\,dt = 0. \qquad (3.12)
\]

Adding (3.11) and (3.12) and using (3.3) results in (3.4).
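To illustrate Theorem 3.1 and the multiplier formula used in Step 3, consider the simple instance (chosen here only for illustration) with $n = 2$, $p = 1$, $T = 1$, $\phi(z,t) = -z_1^2 - z_2^2$ and $h(z,t) = z_1 + z_2 - 1$. Pointwise maximization of $-z_1^2 - z_2^2$ over $z_1 + z_2 = 1$ gives the optimal solution $\bar z(t) \equiv (1/2,1/2)$, and
\[
u(t) = -\left[\nabla\bar h(t)\nabla\bar h(t)'\right]^{-1}\nabla\bar h(t)\nabla\bar\phi(t) = -\tfrac{1}{2}\,(1\ \ 1)\begin{pmatrix}-1\\-1\end{pmatrix} = 1,
\]
so that (3.3) reads $(-1,-1)' + 1\cdot(1,1)' = 0$. Moreover, $\nabla^2\bar\phi(t) = -2I_2$ and $\nabla^2\bar h(t) = 0$, so the integrand in (3.4) is $-2\|\gamma(t)\|^2 \le 0$ for every $\gamma \in N$, as it should be.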

4 KKT CONDITIONS FOR PROBLEMS WITH EQUALITY AND INEQUALITY CONSTRAINTS

The general case is now tackled. Consider the problem (1.1) with equality and inequality constraints. Given ε > 0 and z¯ Ω, we assume that, in addition to (H1), the following hypotheses are valid:

(H5) $h(z,\cdot)$ and $g(z,\cdot)$ are measurable for each $z$, and $h(\cdot,t)$ and $g(\cdot,t)$ are twice continuously differentiable on $\bar z(t) + \varepsilon\bar B$ for almost every $t \in [0,T]$.

(H6) There exists an increasing function $\bar\theta : [0,\infty) \to [0,\infty)$, $\bar\theta(s) \downarrow 0$ as $s \downarrow 0$, such that, for all $\tilde z, z \in \bar z(t) + \varepsilon\bar B$,
\[
\|\nabla(h,g)(\tilde z,t) - \nabla(h,g)(z,t)\| \le \bar\theta(\|\tilde z - z\|) \quad \text{a.e. } t \in [0,T].
\]
There exists $K_1 > 0$ such that
\[
\|\nabla(h,g)(\bar z(t),t)\| \le K_1 \quad \text{a.e. } t \in [0,T].
\]

(H7) There exists $K > 0$ such that
\[
\det\left[\Upsilon(t)\Upsilon(t)'\right] \ge K \quad \text{a.e. } t \in [0,T],
\]
where
\[
\Upsilon(t) = \begin{bmatrix} \nabla\bar h(t) & 0\\[2pt] \nabla\bar g(t) & \mathrm{diag}\left(-2\bar w_j(t)\right)_{j \in J} \end{bmatrix}
\]
and $\bar w_j(t) = [\bar g_j(t)]^{1/2}$ a.e. $t \in [0,T]$, $j \in J$.

Remark 1. It is easy to see that the matrix $\Upsilon(t)$ in Hypothesis (H7) has full rank if, and only if, the vector set $\{\nabla\bar h_i(t) \mid i \in I\} \cup \{\nabla\bar g_j(t) \mid j \in I_a(t)\}$ is linearly independent for almost every $t \in [0,T]$. Moreover, as pointed out in de Pinho and Ilchmann [10], if (H7) is valid, then $\det[\Gamma(t)\Gamma(t)'] \ge K$ a.e. $t \in [0,T]$, where
\[
\Gamma(t) = \begin{bmatrix} \nabla\bar h(t)\\[2pt] \nabla\bar g_{I_a(t)}(t) \end{bmatrix} \quad \text{a.e. } t \in [0,T]
\]
($\nabla\bar g_{I_a(t)}(t)$ denotes the matrix obtained from $\nabla\bar g(t)$ by removing the rows with indices not belonging to $I_a(t)$). The converse does not hold, as can be seen in Example 3.5 in [10].

Next we present three examples referring to assumption (H7).

Example 4.1. Consider the simple two-dimensional example where $(\bar z_1(t), \bar z_2(t)) \equiv (0,0)$ and there is a single inequality constraint $g(z(t),t) = z_1(t)^3 + z_1(t) - z_2(t) \ge 0$. Since $g$ is not concave, the Slater condition in Zalmai [23] is not satisfied. It is easy to see that the full rank condition (H7) is valid. If $g(z(t),t) = -z_1(t)^2 + z_2(t) \ge 0$, then both the Slater and (H7) conditions are satisfied. Now, consider two constraints given by $g_1(z(t),t) = -(z_1(t)-1)^2 - z_2(t)^2 + 1 \ge 0$ and $g_2(z(t),t) = -4(z_1(t)-1/2)^2 - z_2(t)^2/4 + 1 \ge 0$. In this case, the Slater condition is satisfied while (H7) does not hold at $(\bar z_1(t), \bar z_2(t)) \equiv (0,0)$.
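In detail, for the last pair of constraints both $g_1$ and $g_2$ are active at $(\bar z_1(t),\bar z_2(t)) \equiv (0,0)$, so $\bar w_1(t) = \bar w_2(t) = 0$, while
\[
\nabla\bar g_1(t) = \big(-2(\bar z_1(t)-1),\, -2\bar z_2(t)\big)' = (2,0)', \qquad \nabla\bar g_2(t) = \big(-8(\bar z_1(t)-1/2),\, -\bar z_2(t)/2\big)' = (4,0)'.
\]
The two gradients are linearly dependent, so, by Remark 1, $\Upsilon(t)$ cannot have full rank and (H7) fails; on the other hand, the point $(1/2,0)$ satisfies both constraints strictly, so the Slater condition holds.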

Example 4.2. Consider the problem
\[
\begin{aligned}
\text{maximize}\quad & \int_0^1 \left[-z_1^2(t) - z_2^2(t)\right]dt\\
\text{subject to}\quad & h(z(t),t) = z_1(t) - z_2(t) = 0 \ \ \text{a.e. } t \in [0,1],\\
& g_1(z(t),t) = z_1(t) + \tfrac{1}{2}z_1^2(t) \ge 0 \ \ \text{a.e. } t \in [0,1],\\
& g_2(z(t),t) = z_1(t)z_2(t) + 1 \ge 0 \ \ \text{a.e. } t \in [0,1],
\end{aligned}
\]
where $z = (z_1,z_2) \in L^\infty([0,1];\mathbb{R}^2)$ and $h, g_1, g_2 : \mathbb{R}^2 \times [0,1] \to \mathbb{R}$. It is easy to see that $\bar z = (0,0)$ is an optimal solution and that $I_a(t) = \{1\}$ a.e. $t \in [0,1]$. Thus, the matrix in assumption (H7) is given by
\[
\Upsilon(t) = \begin{bmatrix} 1 & -1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & -2 \end{bmatrix} \quad \text{a.e. } t \in [0,1],
\]
which has full rank for almost every $t \in [0,1]$. Note that $\{\nabla\bar h(t), \nabla\bar g_1(t)\}$ is linearly independent for almost every $t \in [0,1]$.

Example 4.3. Consider $h, g_1, g_2 : \mathbb{R}^3 \times [0,1] \to \mathbb{R}$ and
\[
\begin{aligned}
\text{maximize}\quad & \int_0^1 \left[-(z_1(t)-1)^2 - (z_2(t)-1)^2\right]dt\\
\text{subject to}\quad & h(z(t),t) = -z_1^2(t) - z_2^2(t) + z_3(t) + 1 = 0 \ \ \text{a.e. } t \in [0,1],\\
& g_1(z(t),t) = -2z_1(t)z_2(t) + 4z_2(t) + z_3(t) - 3 \ge 0 \ \ \text{a.e. } t \in [0,1],\\
& g_2(z(t),t) = -z_1(t) + \tfrac{1}{2}z_3(t) + \tfrac{1}{2} \ge 0 \ \ \text{a.e. } t \in [0,1].
\end{aligned}
\]
The feasible point $\bar z = (1,1,1)$ is an optimal solution for this problem. Note that, for almost every $t \in [0,1]$, $I_a(t) = \{1,2\}$ and
\[
\Upsilon(t) = \begin{bmatrix} -2 & -2 & 1 & 0 & 0\\ -2 & 2 & 1 & 0 & 0\\ -1 & 0 & \tfrac{1}{2} & 0 & 0 \end{bmatrix} \quad \text{a.e. } t \in [0,1].
\]
Since $\mathrm{rank}\,\Upsilon(t) = 2$ a.e. $t \in [0,1]$, (H7) is not valid, even though $\bar z$ is an optimal solution.
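Indeed, the rank deficiency can be seen directly: the third row of $\Upsilon(t)$ is a linear combination of the first two,
\[
\tfrac{1}{4}\left[(-2,-2,1,0,0) + (-2,2,1,0,0)\right] = (-1,0,\tfrac{1}{2},0,0),
\]
so $\mathrm{rank}\,\Upsilon(t) = 2 < 3 = p + m$ and $\det[\Upsilon(t)\Upsilon(t)'] = 0$.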

In what follows, the Karush-Kuhn-Tucker type optimality conditions are obtained for the general case.

Theorem 4.2. Let $\bar z \in \Omega$ be a local optimal solution for problem (1.1). Suppose that (H1), (H5)-(H7) hold and that $g(\bar z(\cdot),\cdot)$ is bounded on $[0,T]$. Then there exists $(u,v) \in L^\infty([0,T];\mathbb{R}^p \times \mathbb{R}^m)$ such that, for almost every $t \in [0,T]$, one has

\[
\nabla\bar\phi(t) + \sum_{i=1}^{p}u_i(t)\nabla\bar h_i(t) + \sum_{j=1}^{m}v_j(t)\nabla\bar g_j(t) = 0, \qquad (4.1)
\]
\[
v(t) \ge 0, \qquad (4.2)
\]
\[
v_j(t)\,\bar g_j(t) = 0,\quad j \in J. \qquad (4.3)
\]
Moreover,
\[
\int_0^T \gamma(t)'\left[\nabla^2\bar\phi(t) + \sum_{i=1}^{p}u_i(t)\nabla^2\bar h_i(t) + \sum_{j=1}^{m}v_j(t)\nabla^2\bar g_j(t)\right]\gamma(t)\,dt \le 0 \qquad (4.4)
\]
for all $\gamma \in \tilde N$, where $\tilde N$ is given by
\[
\tilde N = \left\{ \gamma \in L^\infty([0,T];\mathbb{R}^n) \ \middle|\ \nabla\bar h(t)\gamma(t) = 0,\ \ \nabla\bar g_j(t)'\gamma(t) = 0,\ j \in I_a(t),\ \ \text{a.e. } t \in [0,T] \right\}.
\]

Proof. Let $w : [0,T] \to \mathbb{R}^m$ be a measurable function and consider the auxiliary problem below:
\[
\begin{aligned}
\text{maximize}\quad & \tilde P(z,w) = \int_0^T \phi(z(t),t)\,dt\\
\text{subject to}\quad & h(z(t),t) = 0 \ \ \text{a.e. } t \in [0,T],\\
& g(z(t),t) - w^2(t) = 0 \ \ \text{a.e. } t \in [0,T],
\end{aligned}
\qquad (4.5)
\]
where
\[
w^2(t) = \left(w_1^2(t),\, w_2^2(t),\, \dots,\, w_m^2(t)\right)' \quad \text{a.e. } t \in [0,T].
\]

We proceed in several steps.

STEP 1: If $\bar z$ is a local optimal solution for problem (1.1), then $(\bar z,\bar w)$ is a local optimal solution for problem (4.5), where
\[
\bar w_j(t) = \left[\bar g_j(t)\right]^{1/2} \quad \text{a.e. } t \in [0,T],\ j \in J.
\]

Indeed, if $\bar z$ is a solution of (1.1) in an $\varepsilon$-neighbourhood, suppose that for all $0 < \delta < \varepsilon$ there exists a feasible solution $(\tilde z,\tilde w)$ of (4.5) with $\|(\tilde z,\tilde w) - (\bar z,\bar w)\|_\infty < \delta$ and $\tilde P(\tilde z,\tilde w) > \tilde P(\bar z,\bar w)$. Noticing that
\[
h(\tilde z(t),t) = 0 \quad\text{and}\quad g(\tilde z(t),t) = \tilde w^2(t) \ge 0 \quad \text{a.e. } t \in [0,T],
\]
we see that $\tilde z$ is feasible for problem (1.1) and
\[
P(\tilde z) = \int_0^T \phi(\tilde z(t),t)\,dt = \tilde P(\tilde z,\tilde w) > \tilde P(\bar z,\bar w) = \int_0^T \phi(\bar z(t),t)\,dt = P(\bar z),
\]

contradicting the local optimality of z¯ for (1.1).

STEP 2: Define
\[
\psi(z,w,t) = \begin{pmatrix} h(z,t)\\ g(z,t) - w^2 \end{pmatrix} \in \mathbb{R}^{p+m}.
\]

We will verify that the auxiliary problem (4.5) satisfies conditions (H1)-(H4) with $\psi$ and $(z,w)$ playing the roles of $h$ and $z$, respectively. Hypotheses (H1) and (H2) are immediate. Considering $(\tilde z,\tilde w), (z,w) \in (\bar z(t),\bar w(t)) + \varepsilon\bar B$ we have, for almost every $t \in [0,T]$, that
\[
\begin{aligned}
\|\nabla\psi(\tilde z,\tilde w,t) - \nabla\psi(z,w,t)\|
&\le \|\nabla_z\psi(\tilde z,\tilde w,t) - \nabla_z\psi(z,w,t)\| + \|\nabla_w\psi(\tilde z,\tilde w,t) - \nabla_w\psi(z,w,t)\|\\
&= \|\nabla(h,g)(\tilde z,t) - \nabla(h,g)(z,t)\| + \left\|-2\,\mathrm{diag}\left(\tilde w_i - w_i\right)\right\|\\
&\le \bar\theta(\|\tilde z - z\|) + 2\|\tilde w - w\|\\
&\le \bar\theta(\|(\tilde z,\tilde w) - (z,w)\|) + 2\|(\tilde z,\tilde w) - (z,w)\|\\
&= \tilde\theta(\|(\tilde z,\tilde w) - (z,w)\|),
\end{aligned}
\]
where $\tilde\theta : [0,\infty) \to [0,\infty)$ is given by $\tilde\theta(s) = \bar\theta(s) + 2s$; $\tilde\theta$ is an increasing function and $\tilde\theta(s) = \bar\theta(s) + 2s \to 0$ as $s \to 0$. Also,
\[
\|\nabla\psi(\bar z(t),\bar w(t),t)\| \le \|\nabla_z\psi(\bar z(t),\bar w(t),t)\| + \|\nabla_w\psi(\bar z(t),\bar w(t),t)\| = \|\nabla(h,g)(\bar z(t),t)\| + \left\|2\,\mathrm{diag}\left(\bar w_i(t)\right)\right\| \le K_0 \quad \text{a.e. } t \in [0,T],
\]
where $K_0 = K_1 + 2\|\bar w\|_\infty$ comes from (H6) and from the assumption that $g(\bar z(\cdot),\cdot)$, and hence $\bar w(\cdot) = [g(\bar z(\cdot),\cdot)]^{1/2}$, is uniformly bounded on $[0,T]$. Hypothesis (H3) is then verified. Finally, as

\[
\nabla\psi(\bar z(t),\bar w(t),t) = \Upsilon(t) \quad \text{a.e. } t \in [0,T],
\]
hypothesis (H7) implies (H4).

STEP 3: By Theorem 3.1, there exists $(u,v) \in L^\infty([0,T];\mathbb{R}^p \times \mathbb{R}^m)$ such that
\[
\begin{pmatrix} \nabla\bar\phi(t)\\ 0 \end{pmatrix} + \nabla\psi(\bar z(t),\bar w(t),t)'\begin{pmatrix} u(t)\\ v(t) \end{pmatrix} = 0 \quad \text{a.e. } t \in [0,T],
\]
which implies that
\[
\nabla\bar\phi(t) + \sum_{i=1}^{p}u_i(t)\nabla\bar h_i(t) + \sum_{j=1}^{m}v_j(t)\nabla\bar g_j(t) = 0 \quad \text{a.e. } t \in [0,T]
\]
and
\[
\bar w_j(t)\,v_j(t) = 0 \ \ \text{a.e. } t \in [0,T] \ \Longleftrightarrow\ \bar g_j(t)\,v_j(t) = 0 \ \ \text{a.e. } t \in [0,T],\ j \in J,
\]
resulting in (4.1) and (4.3).

STEP 4: We will verify the second order condition (4.4). Let us denote, for almost every $t \in [0,T]$,
\[
L(z(t),w(t),t) = \phi(z(t),t) + \sum_{i=1}^{p}u_i(t)h_i(z(t),t) + \sum_{j=1}^{m}v_j(t)\left[g_j(z(t),t) - w_j^2(t)\right],
\]

where $u$ and $v$ are the multipliers obtained in Step 3. Then, for almost every $t \in [0,T]$,
\[
\nabla L(z(t),w(t),t) = \begin{pmatrix} \nabla_z L(z(t),w(t),t)\\ \nabla_w L(z(t),w(t),t) \end{pmatrix} = \begin{pmatrix} \nabla\phi(z(t),t) + \sum_{i=1}^{p}u_i(t)\nabla h_i(z(t),t) + \sum_{j=1}^{m}v_j(t)\nabla g_j(z(t),t)\\[4pt] \left(-2w_1(t)v_1(t),\, \dots,\, -2w_m(t)v_m(t)\right)' \end{pmatrix}
\]
and
\[
\nabla^2 L(z(t),w(t),t) = \begin{bmatrix} \nabla_{zz}L(z(t),w(t),t) & 0\\[2pt] 0 & \mathrm{diag}\left(-2v_j(t)\right)_{j=1}^{m} \end{bmatrix},
\]
where
\[
\nabla_{zz}L(z(t),w(t),t) = \nabla^2\phi(z(t),t) + \sum_{i=1}^{p}u_i(t)\nabla^2 h_i(z(t),t) + \sum_{j=1}^{m}v_j(t)\nabla^2 g_j(z(t),t).
\]

By Theorem 3.1, we have that
\[
\int_0^T (\gamma(t),\nu(t))'\,\nabla^2 L(\bar z(t),\bar w(t),t)\,(\gamma(t),\nu(t))\,dt \le 0 \qquad (4.6)
\]
for all $(\gamma,\nu) \in L^\infty([0,T];\mathbb{R}^n \times \mathbb{R}^m)$ satisfying
\[
\nabla h(\bar z(t),t)\gamma(t) = 0 \quad\text{and}\quad \nabla g_j(\bar z(t),t)'\gamma(t) - 2\bar w_j(t)\nu_j(t) = 0,\ j \in J, \qquad (4.7)
\]
for almost every $t \in [0,T]$. For each $\gamma \in \tilde N$, consider $\nu$ defined, for almost every $t \in [0,T]$, as
\[
\nu_j(t) = \begin{cases} 0, & \text{if } j \in I_a(t),\\[4pt] \dfrac{\nabla g_j(\bar z(t),t)'\gamma(t)}{2\bar w_j(t)}, & \text{if } j \in I_c(t). \end{cases}
\]
Then, note that $(\gamma,\nu)$ satisfies (4.7), that $\bar w_j(t)\nu_j(t) = 0$ a.e. $t \in [0,T]$ for $j \in I_a(t)$, and that, by (4.3), the multipliers satisfy $v_j(t) = 0$ a.e. $t \in [0,T]$ for $j \in I_c(t)$. Thus,
\[
v_j(t)\,\nu_j(t) = 0 \quad \text{a.e. } t \in [0,T],\ j \in J.
\]

Replacing $(\gamma,\nu)$ in (4.6), we obtain
\[
\int_0^T \gamma(t)'\,\nabla_{zz}L(\bar z(t),\bar w(t),t)\,\gamma(t)\,dt \le 0,
\]
with arbitrary $\gamma \in \tilde N$, which implies (4.4).

STEP 5: We will show the non-negativity condition (4.2). Suppose now that $v_l(t) < 0$ for all $t \in D \subset [0,T]$, where $D$ has positive measure, for some $l \in J$. By (4.3), $l \in I_a(t)$ for all $t \in D$. Take $(\gamma,\zeta)$ such that $\gamma(t) \equiv 0$, $\zeta_j(t) \equiv 0$ for $j \neq l$ and
\[
\zeta_l(t) = \begin{cases} 0, & t \in [0,T]\setminus D,\\ k, & t \in D, \end{cases}
\]
where $k \neq 0$ is an arbitrary constant. Observe that $(\gamma,\zeta)$ satisfies (4.7). From (4.6), it follows that $-2\int_D v_l(t)\,\zeta_l(t)^2\,dt \le 0$, that is,
\[
\int_D v_l(t)\,\zeta_l(t)^2\,dt \ge 0.
\]
But,
\[
v_l(t) < 0 \ \text{and}\ \zeta_l(t) = k \neq 0,\ t \in D \ \Longrightarrow\ \int_D v_l(t)\,\zeta_l(t)^2\,dt < 0,
\]

which is a contradiction. Therefore (4.2) holds.
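As a simple illustration of conditions (4.1)-(4.3), consider the instance (introduced here only for illustration) with $n = m = 1$, $p = 0$, $T = 1$, $\phi(z,t) = z$ and $g(z,t) = 1 - z$, whose optimal solution is $\bar z(t) \equiv 1$. Here $\bar g(t) = 0$, $\nabla\bar\phi(t) = 1$ and $\nabla\bar g(t) = -1$, so (4.1) gives $1 + v(t)(-1) = 0$, that is, $v(t) \equiv 1 \ge 0$, and the complementary slackness condition (4.3) holds trivially since $\bar g(t) = 0$. Moreover, $\Upsilon(t) = (-1\ \ 0)$ and $\det[\Upsilon(t)\Upsilon(t)'] = 1$, so (H7) is satisfied.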

Remark 2. For the particular case in which problem (1.1) has only inequality constraints, Theorem 4.2 is similar to Theorem 3.4 in Zalmai [23]. The following points should be raised:

(i) In [23], a minimization problem is considered and the inequality constraint is given as $g(z(t),t) \le 0$ a.e. $t \in [0,T]$. The range space of the constraint map is contained in the normed space $\Lambda_1^m[0,T]$ (see [23] for details). A Slater type constraint qualification is assumed (so that $g$ is required to be convex);

(ii) The index set of all the binding constraints in [23] is given by
\[
I(\bar z) = \left\{ j \in J \ \middle|\ g_j(\bar z(t),t) = 0 \ \ \text{a.e. } t \in [0,T] \right\}.
\]
Note that $j \in I(\bar z)$ if and only if the $j$-th constraint is active throughout $[0,T]$, while in our case the set of binding constraints varies with the parameter $t$;

(iii) The multiplier rule (4.1) obtained here in Theorem 4.2 is stronger than the one in Theorem 3.4 in [23], since in [23] it holds only upon integration over $[0,T]$, that is,
\[
\int_0^T \left[\nabla\bar\phi(t) + \sum_{i=1}^{p}u_i(t)\nabla\bar h_i(t) + \sum_{j=1}^{m}v_j(t)\nabla\bar g_j(t)\right]dt = 0.
\]

5 CONCLUDING REMARKS

In this article, first and second order necessary optimality conditions of Karush-Kuhn-Tucker type are established for continuous-time problems with equality and inequality constraints. These conditions can be seen as a generalization to the continuous-time case of optimality conditions for finite-dimensional nonlinear programming. It is worth highlighting two important contributions of this paper:

  1. the derivation of necessary optimality conditions for continuous-time problems in the presence of equality constraints, and

  2. the presentation of second order necessary optimality conditions for continuous-time problems.

These contributions do not appear in the literature when the continuous-time problem is defined in $L^\infty([0,T];\mathbb{R}^n)$. Such contributions were made possible by the use of the uniform implicit function theorem of [11] to address the equality constraints present in problem (1.1).

We know that optimality conditions of Karush-Kuhn-Tucker type have an important role in many aspects of finite-dimensional nonlinear programming, for example, in duality theory, in sensitivity analysis, and in the computational implementation of algorithms. Analogously, the conditions presented here may be used for formulating duality theorems, deriving results on sensitivity analysis and developing computational procedures in order to obtain numerical solutions for continuous-time programming problems.

ACKNOWLEDGMENTS

V.A. de Oliveira is partially supported by grants 2013/07375-0 and 2016/03540-4, São Paulo Research Foundation (FAPESP), and by grants 457785/2014-4 and 310955/2015-7, National Council for Scientific and Technological Development (CNPq).

REFERENCES

  • 1. J. Abrham & R.N. Buie. Kuhn-Tucker conditions and duality in continuous programming. Utilitas Math, 16(1) (1979), 15-37.
  • 2. R. Bellman. Bottleneck problems and dynamic programming. In “Proceedings of the National Academy of Sciences”, volume 39. National Academy of Sciences (1953), pp. 947-951.
  • 3. R. Bellman. “Dynamic Programming”. Princeton University Press (1957).
  • 4. A.J.V. Brandão, M.A. Rojas Medar & G.N. Silva. Nonsmooth continuous-time optimization problems: necessary conditions. Computers and Mathematics with Applications, 41(12) (2001), 1477-1486.
  • 5. B.D. Craven & J.J. Koliha. Generalizations of Farkas Theorem. SIAM Journal on Mathematical Analysis, 8(6) (1977), 983-997.
  • 6. V.A. de Oliveira. Vector continuous-time programming without differentiability. Journal of Computational and Applied Mathematics, 234(3) (2010), 924-933.
  • 7. V.A. de Oliveira & M.A. Rojas-Medar. Continuous-time multiobjective optimization problems via invexity. Abstract and Applied Analysis, 2007(1) (2007), 11.
  • 8. V.A. de Oliveira & M.A. Rojas-Medar. Continuous-time optimization problems involving invex functions. Journal of Mathematical Analysis and Applications, 327(2) (2007), 1320-1334.
  • 9. V.A. de Oliveira, M.A. Rojas Medar & A.J.V. Brandão. A note on KKT-invexity in nonsmooth continuous-time optimization. Proyecciones (Antofagasta), 26(3) (2007), 269-279.
  • 10. M.R. de Pinho & A. Ilchmann. Weak maximum principle for optimal control problems with mixed constraints. Nonlinear Analysis, 48 (2002), 1179-1196.
  • 11. M.R. de Pinho & R.B. Vinter. Necessary conditions for optimal control problems involving nonlinear differential algebraic equations. Journal of Mathematical Analysis and Applications, 212(2) (1997), 493-516.
  • 12. W.H. Farr & M.A. Hanson. Continuous time programming with nonlinear constraints. Journal of Mathematical Analysis and Applications, 45(1) (1974), 96-115.
  • 13. W.H. Farr & M.A. Hanson. Continuous time programming with nonlinear time delayed constraints. Journal of Mathematical Analysis and Applications, 46(1) (1974), 41-61.
  • 14. M.A. Hanson & B. Mond. A class of continuous convex programming problems. Journal of Mathematical Analysis and Applications, 22(2) (1968), 427-437.
  • 15. N. Levinson. A class of continuous linear programming problems. Journal of Mathematical Analysis and Applications, 16(1) (1966), 73-83.
  • 16. D.G. Luenberger. “Optimization by Vector Space Methods”. John Wiley & Sons, Inc., New York-London-Sydney (1969), xvii+326 pp.
  • 17. L.A. Lusternik & V.J. Sobolev. “Elements of Functional Analysis”. Frederick Ungar, New York (1961).
  • 18. D.H. Martin. The essence of invexity. Journal of Optimization Theory and Applications, 47(1) (1985), 65-76.
  • 19. T.W. Reiland. Optimality conditions and duality in continuous programming I. Convex programs and a theorem of the alternative. Journal of Mathematical Analysis and Applications, 77(1) (1980), 297-325.
  • 20. T.W. Reiland & M.A. Hanson. Generalized Kuhn-Tucker conditions and duality for continuous nonlinear programming problems. Journal of Mathematical Analysis and Applications, 74(2) (1980), 578-598.
  • 21. M.A. Rojas Medar, A.J. Brandão & G.N. Silva. Nonsmooth continuous-time optimization problems: sufficient conditions. Journal of Mathematical Analysis and Applications, 227(2) (1998), 305-318.
  • 22. W. Rudin. “Principles of Mathematical Analysis”, 3rd edition. McGraw-Hill, New York (1976).
  • 23. G.J. Zalmai. The Fritz John and Kuhn-Tucker optimality conditions in continuous-time nonlinear programming. Journal of Mathematical Analysis and Applications, 110 (1985), 503-518. doi: 10.1016/0022-247X(85)90312-9.
  • 24. G.J. Zalmai. Optimality conditions and Lagrangian duality in continuous-time nonlinear programming. Journal of Mathematical Analysis and Applications, 109(2) (1985), 426-452.
  • 25. G.J. Zalmai. Proper efficiency principles and duality models for a class of continuous-time multiobjective fractional programming problems with operator constraints. Journal of Statistics and Management Systems, 1(1) (1998), 11-59.
  • 26. W.I. Zangwill. “Nonlinear Programming: A Unified Approach”. Prentice-Hall (1969).

Publication Dates

  • Publication in this collection
    10 June 2019
  • Date of issue
    Jan-Apr 2019

History

  • Received
    19 Dec 2017
  • Accepted
    01 Aug 2018