Optimal pest control problem in population dynamics

Marat RafikovI; José Manoel BalthazarII

IDepartment of Physics, Statistics and Mathematics, UNIJUI, Ijuí University, Cx. Postal 560, 98700-00 Ijuí, RS, Brazil, E-mail: rafikov@unijui.tche.br

IIDepartment of Statistics, Applied Mathematics and Computation, UNESP, Universidade Estadual Paulista, Cx. Postal 178, 13500-230, Rio Claro, SP, Brazil, E-mail: jmbaltha@rc.unesp.br

ABSTRACT

One of the main goals of pest control is to maintain the density of the pest population at an equilibrium level below that which causes economic damage. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered. These functions drive the pest-natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that characterizes quadratic deviations from this level. The first problem was solved through the application of the Maximum Principle of Pontryagin. Dynamic Programming was used for the solution of the second optimal pest control problem.

Mathematical subject classification: 92D25, 34H05, 49N90.

Key words: optimal pest control, Maximum Principle of Pontryagin, Hamilton-Jacobi-Bellman equation.

1 Introduction

Pests are species that interfere with human activity or cause injury, loss, or irritation to a crop, stored product, animal, or people. Most methods of pest control in agriculture are based on chemical insecticides. The disadvantages of applying these insecticides are: a) progressive reduction of efficiency due to increased resistance by pest insects; b) high negative impact on beneficial insect populations; c) reduction of natural pest control due to destruction of the pests' natural enemies; d) new and more destructive pest surges; e) incidence of secondary pests and/or new and different types of pests; f) chemical residues in crops; g) ecological accidents; h) long-term chemical residues in the agricultural ecosystem; i) a high number of accidents that intoxicate human beings and in some cases lead to their deaths [7].

The general definition of biological control is the use of parasitoids, predators, or pathogens to maintain the density of an organism at a lower level than would occur without these natural enemies [2]. Van den Bosch et al. [16] defined applied biological control as the manipulation of natural enemies by man to control pests, and natural biological control as control that occurs without man's intervention.

From the ecological viewpoint, a species is considered a pest if its population density surpasses the economic injury level, i.e. the pest population density at which insect-induced (or other organism-induced) damage can no longer be tolerated and therefore the level at or before which it is desirable to initiate control activities. Thus, the premise of classical control is the reduction and establishment of the pest population density at an equilibrium level lower than the economic injury level.

There are four major approaches to biological control in agricultural, greenhouse, and urban ecosystems today: 1) classical, which can be defined as the importation and establishment of natural enemies, which achieves control of the target pest without further assistance; 2) environmental manipulation, which encompasses a broad range of techniques including use of alternative prey, addition of the pest itself, use of attractants or subsidiary foods, and modification of cropping practices; 3) periodic augmentative releases of natural enemies, which provide an immediate or a delayed (through reproduction) effect on the pest population; and 4) preservation of the existing natural enemy fauna through the development of minimally disruptive management techniques [5], [11]. There are many examples of successful use of biological control, such as the complex of imported parasites which controls the alfalfa weevil [15], or augmentative releases of natural enemies, which have been applied in greenhouses in Europe for control of many vegetable pests [11]. Unfortunately, there are also many cases where effective exotic natural enemies simply have not been found or have not been successfully established in the target area. According to Thomas and Willis [15], less than 40% of introductions of biological agents against weeds and insects actually result in substantial control. For biological control to succeed, the dynamics of the pest and its enemy populations have to be understood.

Mathematical models are more and more used for the study of agricultural problems, because with simulation tools the environment-pest-natural enemy system can be better understood. This gives the researcher a general view of the system and allows computational experiments to be formulated and carried out on a model rather than on the real system, saving material expenses and time when compared to real experiments. Besides, researchers in the area can use models to aid the planning of field experiments, through the indication of the parameters which should be observed.

More specifically, the use of mathematical modeling applied to problems of biological pest control allows a qualitative and quantitative evaluation of the interaction between the pest and its natural enemy populations. Mathematical modeling can thus be used as a tool to design stable systems of the prey-predator or host-parasitoid type. This can be achieved by seeking a natural enemy with characteristics that provide stability to the system. Mathematics is useful in this case because it makes it possible to determine the parameter region in which the system is stable. Another way in which mathematical modeling can be used in pest control is the formulation of an optimal control strategy, through the dynamic manipulation of the control variables of the pest-natural enemy system.

As mentioned, the ecological view considers an insect a pest if and only if the amount of this insect causes economic injury to the crops. This view can serve as a basis for the formulation of the optimal pest control problem. Optimal pest control in the prey-predator system has the purpose of maintaining the pest population at an equilibrium level below the economic injury level. The strategy of biological pest control should satisfy the following important conditions:

i) through the biological control, the pest-natural enemy ecosystem should arrive at an equilibrium state in which the pest population is stabilized at a level below the economic injury level and the natural enemies' population is stabilized at a level sufficient for pest control;

ii) this equilibrium state of the controlled ecosystem has to be stable;

iii) the biological pest control has to be economical in the sense of minimizing the amount of applications to the ecosystem.

There are various studies of the application of optimal control theory to pest control [4], [8], [14]. Goh [9] used optimal control theory to formulate optimal feedback policies for some simple models. In these models the control functions do not directly influence the reproduction, competition and interaction processes.

In population control in general, and in pest control in particular, large numbers of individuals are usually removed from the system during the period of application. These individuals no longer participate in the reproduction, competition and interaction processes. Conversely, predator individuals are introduced into the system and immediately begin to participate in the reproduction, competition and predation processes. In this paper we introduce the control functions in such a way that their influence on the reproduction, competition and interaction processes is taken into account.

2 Formulation of the population control problem

Consider a general model of n interacting populations which is described by the set of differential equations:

where xi(t) is the density of population i at instant t and fi(x1, x2, ..., xn) are continuous functions of the variables xi.

The system (2.1) describes the development of the population system without the application of control. Let Ui(t) be the number of individuals removed from the system or introduced into the system at instant t. Suppose that individuals of the first n1 populations are removed from the system and individuals of the remaining n - n1 populations are introduced into the system. The equations that describe the dynamics of the system under the applied control can be written in the following form:

where ki are positive constants that characterize the technical conditions of the application.

Let the control functions Ui(t) satisfy the constraints:

Suppose it is desirable to keep the removed populations below some threshold, to augment the numbers of the introduced populations, and to have a low cost of using the control variables. To take these objectives into account we use the weighted performance index:

where

ci are positive constants that characterize the weight of each type of control; 0 and T are the initial and final moments of the control application, respectively.

By minimizing the performance index (2.4) we minimize the values of the control functions and the removed populations during the application period, and we maximize the introduced populations at the final point.

The optimal control problem is to choose an admissible control program which will drive the system (2.2) from the initial state

to a terminal state such that the performance index (2.4) is minimized.

This optimization problem for the dynamic system can be solved by applying the Maximum Principle [12].
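
For reference, the Maximum Principle conditions used in the derivation below have the following generic form (a sketch only; L stands for the integrand of the performance index and g_i for the right-hand sides of the controlled system, both fixed by the formulas of this section, while y0 and yi are the adjoint variables introduced next):

```latex
% Generic form of the Maximum Principle conditions (sketch; L is the integrand
% of the performance index and g_i are the right-hand sides of the controlled
% system; both are fixed by the formulas of this section, not by this sketch).
\[
H(x,U,y) = y_0\, L(x,U) + \sum_{i=1}^{n} y_i\, g_i(x,U), \qquad
\dot{y}_i = -\frac{\partial H}{\partial x_i}, \qquad
U^{*}(t) = \arg\max_{U}\, H\bigl(x(t),U,y(t)\bigr).
\]
```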

For the sake of convenience, let us introduce new variables w and xi:

The derivative of the function w is:

Adding equations (2.9) to the system (2.2), we obtain a new formulation of the optimal control problem: choose admissible control variables Ui(t) which will drive the system (2.2), (2.9), (2.8) from the initial state (2.6) to a terminal state such that the performance index

is minimized.

Define the Hamiltonian function:

where y0 and yi are the adjoint variables determined by the following equations

with the final conditions:

According to the Maximum Principle [12], the optimal control functions maximize the function H. The necessary conditions for a maximum of the function H are:

From the first equation of the system (2.12) we have:

y0 = const.

Applying the first final condition (2.13) we obtain:

From the system (2.14) we have

On the other hand,

hence the second equation of the system (2.12) can be written as:

The general solution of the equation (2.16) is:

yi = A e^(ki t).

Applying the second final condition (2.13) we obtain A = 0, and consequently:

Applying (2.19), (2.21) and (2.22) to (2.15) we obtain:

Now from (2.14) we obtain a system of n equations

Using (2.8) and the values of xi, calculated from the system (2.19), we get:

Consider the Lotka-Volterra model [10], [17] with competition between species. In this case we have:
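
In generic notation, the Lotka-Volterra model with competition between species has the form sketched below (the symbols εi and γij are placeholders for the growth and interaction coefficients; the original equation fixes the exact notation):

```latex
% Standard n-species Lotka-Volterra form with competition/interaction terms
% (generic symbols, used here only as a sketch of the model class).
\[
\dot{x}_i = x_i\Bigl(\varepsilon_i + \sum_{j=1}^{n}\gamma_{ij}\,x_j\Bigr),
\qquad i = 1,\dots,n.
\]
```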

Equations (2.19) can then be written as

Solving (2.23), we obtain:

3 Application to optimal pest control in soybeans

We now illustrate the application of the proposed methodology to optimal pest control in soybeans. As an example, consider the prey-predator relations between the soybean caterpillar (Anticarsia gemmatalis) and its predators (Nabis spp., Geocoris spp., arachnids, etc.). The coefficients of the Lotka-Volterra model were identified by Rafikov [13].
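
For the sketches in this and the next section, the classical two-species prey-predator form of this model is assumed (an assumption for illustration; since Section 2 mentions competition between species, the identified model may contain additional competition terms):

```latex
% Classical Lotka-Volterra prey-predator model (assumed form for illustration;
% x1: soybean caterpillar (pest), x2: predator complex).
\[
\dot{x}_1 = x_1\,(a - \alpha\,x_2), \qquad
\dot{x}_2 = x_2\,(-b + \beta\,x_1).
\]
```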

In Figure 1 the phase diagram of the optimal feedback control policy is displayed for the parameters a = 0.216, α = 0.0108, b = 0.173, β = 0.0029, c1 = 1, c2 = 1.78, k1 = 1, k2 = 2. The values of c1 and c2 were chosen to establish the pest density level equal to the threshold xd recommended by EMBRAPA (xd = 20 pests/m2 for the large soybean caterpillar, longer than 1.5 cm). Densities at or below this value do not cause economic damage to soybean crops.


In Figure 1 the straight lines x1 = x̄1 and x2 = x̄2 are switching lines which divide the positive quadrant into four parts A, B, C and D. In this case x̄1 = 19.3 and x̄2 = 13.5.

If the initial state is in D, the system has the control variables U2 = x̄2 - x2 and U1 = 0 until the solution intersects the switching line x1 = x̄1. At the point of intersection, the control variable U1 is switched on and maintained at the rate U1 = x1 - x̄1 until the state of the system reaches the equilibrium point P. If the initial state is in A, the system is allowed to move with null control until the solution intersects the switching line x2 = x̄2. At the point of intersection, the control variable U2 is switched on and maintained at the rate U2 = x̄2 - x2 with U1 = 0, and the system passes into D. If the initial state is in B, the system has the control variables U1 = x1 - x̄1 and U2 = 0, and there are two types of solutions. One type of trajectory intersects the switching line x1 = x̄1 and the system passes into A. The other type of trajectory in B intersects the switching line x2 = x̄2 and the system passes into C, where the control variables are U1 = x1 - x̄1 and U2 = x̄2 - x2 until the state of the system reaches the equilibrium point P. To maintain the system at the equilibrium point P it is necessary to assign the following values to the control functions: U1 = 0.67 and U2 = 1.59. The graphs of the control functions for the initial conditions x10 = 32, x20 = 16 are shown in Figure 2.
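
The switching logic above reduces to a simple feedback rule: remove pests only when x1 exceeds x̄1, and introduce predators only when x2 falls below x̄2. A minimal Python sketch of this rule follows (the thresholds 19.3 and 13.5 are those quoted above; the function only evaluates the control values and makes no assumption about how they enter the model equations):

```python
def switching_control(x1, x2, x1_bar=19.3, x2_bar=13.5):
    """Region-based feedback policy of Section 3 (see Figure 1).

    Regions of the positive quadrant:
      A: x1 < x1_bar, x2 > x2_bar -> no control
      B: x1 > x1_bar, x2 > x2_bar -> pest removal only
      C: x1 > x1_bar, x2 < x2_bar -> pest removal and predator introduction
      D: x1 < x1_bar, x2 < x2_bar -> predator introduction only
    Returns the pair (U1, U2).
    """
    u1 = max(x1 - x1_bar, 0.0)   # U1: pest removal, active above the line x1 = x1_bar
    u2 = max(x2_bar - x2, 0.0)   # U2: predator introduction, active below the line x2 = x2_bar
    return u1, u2


# The initial condition used for Figure 2 lies in region B, so only removal is active.
print(switching_control(32.0, 16.0))   # approximately (12.7, 0.0)
```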


Simulations for several values of the model parameters showed that the coefficients k1 and k2 have little influence on the long-term dynamics of the system, while changing the coefficients c1 and c2 alters the dynamics of the system by shifting the position of the equilibrium point.

The analysis of the control functions in Figure 2 allows us to discuss the practical application of biological or chemical control in the considered ecosystem. The curve of the control U1 in Figure 2 exhibits an abrupt decrease, and after two days the pest population reaches the desired level. In this case the control function U1 can be accomplished through the application of a biological or chemical insecticide (one that kills only this species). After the insecticide application, the predators' introduction can be accomplished. There is technology that permits this introduction according to the algorithm presented above.

It is difficult to accomplish the daily pest removal in small amounts U1 (0.67 soybean caterpillars per square meter in the example above) after the second day of control in order to maintain the system at the desired equilibrium point. It is therefore better to divide the pest control process into two periods. In the first period (2 days in our example) the control should be accomplished according to the algorithm proposed in this section. In the second period the pest control has to be accomplished by introducing only natural enemies. This problem is formulated and solved in the next section.

4 Optimal control through the natural enemies' introduction

The objective of this section is to obtain a pest control strategy based on the introduction of natural enemies. This control moves the system to an equilibrium state in which the pest density is stabilized without causing economic damage, and the natural enemies' population is stabilized at a level sufficient to control the pests. This problem is formulated for the prey-predator model, which is a particular case of the model (2.1) when n = 2:

where x1 and x2 are respectively the prey and predator densities.

The optimal control strategy maintains the pest population at the level x* = xd; in this case the value y* is calculated from the first equation of the system (4.1):
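
In the general notation of (4.1), this condition simply requires the prey equation to vanish at the desired pest level (a restatement of the relation behind (4.2)):

```latex
% Equilibrium condition for the prey equation at the desired pest density x_d.
\[
f_1\bigl(x_d,\,y^{*}\bigr) = 0.
\]
```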

The model that describes the dynamics of the system with one control function can be written in the following way:

In (4.3) the control function consists of two parts, a feedback part u and a feedforward part u*, which is determined as

Next, we determine the optimal control strategy u that drives the system (4.3) from any initial state to the desired fixed point (x*, y*) in such a way that it minimizes the following functional:

where Y is the vector of deviations of the state from the desired equilibrium point and Q is a positive definite matrix.
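
One standard way of writing such a functional, consistent with the description of Y and Q above, is sketched below (the exact weights of (4.5), including any penalty r on the control u, are those of the original formula):

```latex
% A sketch of a quadratic deviation functional of the type described in the text.
\[
J = \int_{0}^{\infty}\bigl(Y^{T} Q\,Y + r\,u^{2}\bigr)\,dt, \qquad
Y = \begin{pmatrix} x_1 - x^{*} \\ x_2 - y^{*} \end{pmatrix}, \qquad
Q = Q^{T} > 0, \quad r \ge 0.
\]
```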

According to Dynamic Programming, if the minimum exists and is a smooth function S of the initial condition, then it satisfies the Hamilton-Jacobi-Bellman equation [3]:

where

Equation (4.6) is a partial differential equation and, in the nonlinear case, even if a smooth solution exists, finding the solution of the Hamilton-Jacobi-Bellman equation that satisfies the final condition

is quite difficult. There are several methods for the numerical solution of this problem.
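
For reference, the generic form of the Hamilton-Jacobi-Bellman equation for minimizing the integral of L(Y, u) subject to dynamics dY/dt = F(Y, u) is sketched below; equations (4.6)-(4.8) specialize it to the functional (4.5) and the dynamics (4.3):

```latex
% Generic Hamilton-Jacobi-Bellman equation with zero terminal cost (sketch).
\[
-\frac{\partial S}{\partial t}
 = \min_{u}\Bigl[\,L(Y,u) + \nabla_{Y}S\cdot F(Y,u)\,\Bigr], \qquad
S(T,\,\cdot\,) = 0.
\]
```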

The nonlinear optimal control problem (4.3)-(4.5) was solved only for the Lotka-Volterra prey-predator model (2.22) in [6], where the solution of the Hamilton-Jacobi-Bellman equation (4.6) was sought in the analytic form

where v1 and v2 are positive constants, and the optimal control function u(t) for the Lotka-Volterra prey-predator model (2.22) was found in [6] as:

To solve the optimal control problem (4.3)-(4.5) in the general case, we assume that the initial state of the controlled system (4.3) is close to the desired equilibrium point, and we consider the optimal control of the linearized controlled system

or

where
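
In generic terms, the linearization step reads as sketched below, where F denotes the right-hand side of (4.3), A is its Jacobian evaluated at the desired equilibrium, and B collects the coefficients multiplying the control u:

```latex
% Linearization of the controlled system about the desired equilibrium (sketch).
\[
Y = \begin{pmatrix} x_1 - x^{*} \\ x_2 - y^{*} \end{pmatrix}, \qquad
\dot{Y} \approx A\,Y + B\,u, \qquad
A = \left.\frac{\partial F}{\partial(x_1,x_2)}\right|_{(x^{*},\,y^{*})}.
\]
```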

The optimal control problem (4.12), (4.5) is the well-known infinite-time linear quadratic optimal control problem [1]. The optimal control is given by

where P, a 2 × 2 constant, positive definite, symmetric matrix, is the solution of the nonlinear matrix algebraic Riccati equation (ARE):
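
In the standard notation of the infinite-time linear quadratic problem with quadratic state and control weights Q and R, the control law and the Riccati equation read as sketched below; (4.14) and (4.15) are the corresponding relations in the paper's own notation and weighting:

```latex
% Standard LQR feedback law and algebraic Riccati equation (sketch).
\[
u = -R^{-1} B^{T} P\,Y, \qquad
A^{T} P + P A - P B R^{-1} B^{T} P + Q = 0.
\]
```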

To perform the numerical simulations, the same ecosystem with the prey-predator relations between the soybean caterpillar (Anticarsia gemmatalis) and its predators (Nabis spp., Geocoris spp., arachnids, etc.) that was modeled in the previous section was considered. It was supposed that the optimal strategy with two control functions was applied only during the first two days. When the density reached the value of 20.6 caterpillars per square meter, the control was accomplished only through the predators' introduction, according to the algorithm presented in this section. The parameters of the Lotka-Volterra model used are a = 0.216, α = 0.0108, b = 0.173, β = 0.0029. Considering the desired equilibrium point (20, 19.815), the matrix A of the linearized system is obtained. Choosing the weighting matrix Q and solving the ARE (4.15) using the LQR function in MATLAB, one obtains the matrix P and, from (4.14), the optimal control law (4.16).
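
A short Python sketch of this computation is given below. It assumes, for illustration only, the classical Lotka-Volterra form of Section 3 for the Jacobian, an input matrix B = (0, 1)^T (the control introduces predators only), and unit weights Q = I, R = 1; these choices are assumptions and not the paper's exact matrices.

```python
# LQR design for the linearized prey-predator system (illustrative sketch).
# Assumptions: classical Lotka-Volterra Jacobian, B = [0, 1]^T, Q = I, R = 1.
import numpy as np
from scipy.linalg import solve_continuous_are

a, alpha, b, beta = 0.216, 0.0108, 0.173, 0.0029   # parameters quoted in the text
x1s, x2s = 20.0, 19.815                            # desired equilibrium point

# Jacobian of x1' = x1*(a - alpha*x2), x2' = x2*(-b + beta*x1) at (x1s, x2s).
A = np.array([[a - alpha * x2s, -alpha * x1s],
              [beta * x2s,      -b + beta * x1s]])
B = np.array([[0.0], [1.0]])    # assumed input matrix: control enters the predator equation
Q = np.eye(2)                   # assumed state weighting
R = np.array([[1.0]])           # assumed control weighting

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation, cf. (4.15)
K = np.linalg.solve(R, B.T @ P)        # feedback gain, u = -K Y, cf. (4.14)

def feedback(x1, x2):
    """Feedback part u of the control; the feedforward u* holds the equilibrium."""
    Y = np.array([x1 - x1s, x2 - x2s])
    return (-(K @ Y)).item()

print(K)                      # feedback gain matrix
print(feedback(20.6, 12.4))   # control at the start of the second period
```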

Figure 3 presents the pest and predator population variations under the optimal control strategy (4.16), following the algorithm described above.


5 Discussion and conclusions

The process of pest control was divided into two periods. In the first period (2 days in our example) the optimal pest control problem was formulated and solved using two control functions. In the second period the pest control was accomplished by introducing only natural enemies. The analysis of the population dynamics in the second period (see Figure 3) showed that the pest population grew on the third and fourth days. This can be explained by the fact that the initial value of the predator population in the second period, 12.4, was smaller than the desired equilibrium level of predators, 19.815, calculated through formula (4.2). The introduction of 7.415 predators/m2 (the difference between 19.815 and 12.4) at the beginning of the second period can improve the dynamics of the pest control. Mathematically this means altering the predator initial conditions. Figure 4 shows the pest and predator population variations with the predator initial condition x20 = 19.815.


Thus the biological pest control consists of three stages: 1) the application of two control functions (the elimination of the pest population and the natural enemies' introduction), 2) the natural enemies' impulsive introduction to bring the predator population up to the desired equilibrium level, and 3) the natural enemies' continuous introduction in order to maintain the stability of this level. It is observed that the methodologies presented above for the first and third stages make use of optimal control theory for non-impulsive systems. The natural enemies' impulsive introduction in the second stage points to a future formulation of the problem in terms of impulsive control theory.

Received: 05/I/04. Accepted: 09/XI/04

#594/04.

  • [1] B.D.O. Anderson and J.B. Moore, Optimal control: linear quadratic methods, Prentice Hall, (1990).
  • [2] L.A. Andres, E.R. Oatman and R.G. Simpson, Re-examination of pest control practices, In D.W. Davis, S.C. Hoyt, J.A. McMurtry, M.T. AliNiazee (eds.), Biological control and insect pest management. Division of Agricultural Sciences, University of California. Publication 4096, Oakland, (1979).
  • [3] R. Bellman, Dynamic Programming, Princeton, New Jersey, (1957).
  • [4] G.R. Conway, Mathematical Models in Applied Ecology, Nature, 269 (1977), pp. 291-297.
  • [5] P. Debach, Biological control by natural enemies, Univ. Press, (1974).
  • [6] C.C. Feltrin and M. Rafikov, Aplicação da função de Lyapunov num problema de controle ótimo de pragas, Tendências em Matemática Aplicada e Computacional, 3 (2) (2002), pp. 83-92.
  • [7] D.L. Gazzoni, Manejo de pragas da soja: uma abordagem histórica, Londrina: EMBRAPA-CNPSO, (1994).
  • [8] B.S. Goh, G. Leitmann and T.L. Vincent, The optimal control of a prey-predator system, Math. Biosci., 19 (1974), pp. 263-286.
  • [9] B.S. Goh, Management and Analysis of Biological Populations, Elsevier Scientific Publishing Company, Amsterdam, (1980).
  • [10] A.J. Lotka, Elements of physical biology, William and Wilkins, Baltimore, (1925).
  • [11] M.P. Parrella, K.M. Heinz and L. Nunney, Biological Control through Augmentative Releases of Natural Enemies: A Strategy Whose Time Has Come, American Entomologist, 38 (3) (1992), pp. 172-179.
  • [12] L.S. Pontryagin, V.G. Boltyanskii, R.V.Gamkrelidze and E.F. Mischenko, The Mathematical Theory of Optimal Processes, Interscience Publishers, Inc., New York, (1962).
  • [13] M. Rafikov, Determinação dos parâmetros de modelos biomatemáticos, Ciência e Natura, UFSM, Santa Maria, 19 (1997), pp. 7-20.
  • [14] C.A. Shoemaker, Optimization of Agricultural Pest Management III: Results and Extension of a Model, Math. Biosci., (1973).
  • [15] M.B. Thomas and A.J. Willis, Biocontrol - risky but necessary, Trends in Ecology and Evolution, 13 (1998), pp. 325-329.
  • [16] R. van den Bosch, P.S. Messenger and A.P. Gutierrez, An Introduction to Biological Control, Plenum Press, New York, (1982).
  • [17] V. Volterra, Fluctuations in the abundance of a species considered mathematically, Nature, 118 (1926), pp. 558-560.
