
A HYBRIDIZED MULTI-OBJECTIVE MEMETIC ALGORITHM FOR THE MULTI-OBJECTIVE STOCHASTIC QUADRATIC KNAPSACK PROBLEM

ABSTRACT

The knapsack problem is basic in combinatorial optimization and possesses several variants and extensions. In this paper, we focus on the multi-objective stochastic quadratic knapsack problem with random weights. We propose a Multi-Objective Memetic Algorithm with Selection Neighborhood Pareto Local Search (MASNPL). At each iteration of this algorithm, crossover, mutation, and local search are applied to a population of solutions to generate new solutions that constitute an offspring population. Then, a selection operator retains the best solutions from the combined parent and offspring populations. The selection operation relies on the determination of the non-domination rank and the crowding distance, obtained respectively by the Non-dominated Sort Algorithm and the Crowding-Distance Computation Algorithm. To evaluate the performance of our algorithm, we compare it with both an exact algorithm and the NSGA-II algorithm. Our experimental results show that the MASNPL algorithm is highly efficient.

KEYWORDS:
non-dominated sort algorithm; crowding distance; gradient algorithm; memetic algorithm with selection neighborhood Pareto local search

1 INTRODUCTION

Knapsack problems are widely studied combinatorial optimization problems that serve as models for many real-world problems in a variety of fields. The canonical version (KP) of this class of problems is NP-hard: given a set of items, each with a positive weight and profit, and a knapsack of fixed positive capacity, it consists of choosing a subset of items that fits in the knapsack and has maximum total profit. For instance, suppose a firm has a fixed budget for investment in several projects, each of which requires a budget and yields a payoff if implemented. The problem for decision-makers is to select a subset of projects to invest in so that the total profit of all chosen projects is maximum and the total budget invested in them does not exceed the available budget.

Two fine monographs are entirely devoted to knapsack problems in their different versions and variants. The first one, by Martello & Toth (1990), explicitly describes and comments in great detail on the algorithms and computer implementations of knapsack problems and contains, in 250 pages with 200 references, the main results in this field published during the three previous decades. The second monograph, by Kellerer et al. (2004), contains more than 500 pages, with about 500 references, two-thirds of which were published after the first monograph, which shows the interest of researchers in this field. In addition to these works, surveys are regularly published to update knowledge in the field. Cacchiani et al. (2022a,b) have recently proposed a survey on knapsack problems structured into two parts. Part I is devoted to problems whose goal is to optimally assign items to a single knapsack and reviews problems with special constraints and relatively recent fields of investigation, such as robust and bilevel problems. Part II covers multiple, multidimensional, and quadratic knapsack problems, and includes a succinct treatment of online and multi-objective knapsack problems.

Here we focus on the quadratic form of KP, known as the Quadratic Knapsack Problem (QKP). Given a knapsack with a fixed capacity and a set of items, each item associated with a positive integer weight, an individual profit, and a paired profit achieved if both items of a pair are selected, we aim to choose a subset of items whose overall weight does not exceed the given knapsack capacity and that maximizes the overall profit. A well-known survey by Pisinger (2007) gives numerous details about the problem and its resolution aspects. Gallo et al. (1980) introduced the quadratic knapsack problem and derived the first bounds by using the concept of "upper plane"; the authors presented and discussed different uses of this concept in a branch-and-bound scheme for solving such a problem. Aïder et al. (2022a) considered a branch-and-solve strategies-based algorithm for solving large-scale quadratic multiple knapsack problems; they developed an enhanced fix-and-solve solution procedure and embedded it in a local branching-based method, where the branches reflect intensification and diversification search around a solution. The same authors (Aïder et al., 2022b) used a hybrid population-based algorithm (namely HBPA) to efficiently solve the bi-objective quadratic multiple knapsack problem. Létocart et al. (2014) proposed, for the 0-1 exact k-item quadratic knapsack problem, a fast and efficient heuristic method that produces both good lower and upper bounds on the value of the problem in a reasonable time; specifically, it integrates a primal heuristic and a semidefinite programming reduction phase within a surrogate dual heuristic. To solve the generalized quadratic multiple knapsack problem (GQMKP), Zhou et al. (2022) proposed an efficient hybrid evolutionary search algorithm (HESA) that relies on a knapsack-based crossover operator to generate new offspring solutions, as well as a feasible and infeasible adaptive tabu search to improve them. Other new features of HESA include a dedicated strategy to ensure a diversified and high-quality initial population and a streamlining technique to speed up the evaluation of candidate solutions.

However, real-life problems are non-deterministic, that is, some parameters are not known precisely at the time a decision must be made. These parameters can be modeled by continuously or discretely distributed random variables, which turn the underlying problem into a stochastic optimization problem. In this context, we refer to the work of Kosuch & Lisser (2011), who considered two models of stochastic knapsack problems with random weights. The first model is an unconstrained problem, namely the stochastic knapsack problem with simple recourse, for which the authors proposed different solving methods for the corresponding relaxed problem, such as a branch-and-bound algorithm; the second is a two- or multi-stage problem that allows later corrections of the decision made in advance, for which the authors proposed a method to compute upper and lower bounds used within a branch-and-bound framework. Kosuch et al. (2017) studied the stochastic knapsack problem with expectation constraints and proposed to solve the relaxed version of this problem using a stochastic gradient algorithm in order to provide upper bounds for a branch-and-bound framework. Blado & Toriello (2021) considered a two-stage stochastic multiple knapsack problem together with a set of possible disturbances with known probabilities of occurrence and proposed two branch-and-price approaches to solve it. For the same problem, Tönissen et al. (2021) used a branch-and-price scheme and compared two different decomposition approaches. Range et al. (2018) considered the stochastic knapsack problem and combined the chance-constrained knapsack problem and the stochastic knapsack problem with simple recourse; they formulated the resulting model as a network problem and showed that it could be solved by a dynamic-programming labeling approach for the shortest path problem with resource constraints. Tönissen & Schlicher (2021) introduced the two-stage stochastic 3D-printing knapsack problem, for which they provided a two-stage stochastic programming formulation that they later reformulated into an equivalent integer linear program. Song et al. (2018) studied the stochastic quadratic multiple knapsack problem with the objective of finding a solution with the best expected quality under all possible cases. The authors used the recoverable robustness technique, which consists of first finding a solution that is feasible for the associated static problem, and then adjusting the solution according to the emerging stochastic scenarios.

In this paper, we study the case where the item weights are random and assumed to be independent and normally distributed with known means and variances, while the capacity and benefits remain deterministic. This case entails the following difficulty: we cannot be sure that the items chosen in advance (i.e., before the actual weights are revealed) satisfy the knapsack capacity constraint. We use a stochastic gradient algorithm via a convolution method, and we refer to Andrieu et al. (2007) and Kosuch & Lisser (2009).

The Stochastic Quadratic Knapsack Problem (SQKP) is NP-hard. For solving such a problem, exact and heuristic algorithms constitute two main and complementary solution approaches in the literature. On the one hand, the computing time needed to find an optimal solution by an exact algorithm may become prohibitive for large instances. On the other hand, heuristic algorithms aim to find satisfactory sub-optimal solutions (for problem instances that are too large) in an acceptable computing time. For solving the stochastic quadratic knapsack problem, Lisser & Lopez (2010) used solution techniques based on semidefinite relaxations, while Song et al. (2018), as mentioned above, relied on a repair-based recoverable robustness approach for the stochastic quadratic multiple knapsack problem.

Most real-world problems, such as scheduling and resource allocation, require the simultaneous optimization of several criteria. These are called multi-objective optimization problems (MOOPs) and can, in mathematical terms, be formulated as:

$\max \; F(x) = (f_1(x), \ldots, f_m(x))^T \quad \text{subject to } x \in \Omega,$ (1)

where the integer $m \ge 2$ is the number of objectives and $\Omega$ is the feasible set of decision vectors, typically $\Omega \subseteq \mathbb{R}^n$.

Our work is mainly inspired by the work of Chen & Hao (2016), who extended the single-objective quadratic multiple knapsack problem to a bi-objective quadratic multiple knapsack problem (BOQMKP) and proposed a hybrid two-stage (HTS) algorithm to approximate the Pareto front.

We consider the following scenario: given solutions in some space of possible solutions, the so-called decision space, which can be evaluated using the so-called objective functions, the goal is to find a solution on which the decision-maker can agree and that is optimal in some sense.

Let $u, v \in \mathbb{R}^m$; $u$ dominates $v$ if $f_j(u) \ge f_j(v)$ for every objective $j$ and $f_j(u) > f_j(v)$ for at least one objective $j \in \{1, \ldots, m\}$. A solution $x^* \in \Omega$ is Pareto optimal (or efficient) for (1) if there is no point $x \in \Omega$ such that $x$ dominates $x^*$; $F(x^*)$ is then called a Pareto optimal (objective) vector. In other words, any improvement of a Pareto optimal point in one objective must lead to a deterioration in at least one other objective. The set of all Pareto optimal points is called the Pareto set (PS), and the set of all Pareto optimal objective vectors is the Pareto front (PF). It is well known that, under certain conditions, a Pareto optimal solution of a MOOP is an optimal solution of a scalar optimization problem in which the objective is an aggregation of all the objective functions. Therefore, the approximation of the PF can be decomposed into several scalar objective optimization subproblems. This is a basic idea behind many traditional mathematical programming methods for approximating the PF.

Classical optimization methods (including multi-criteria decision-making methods) suggest converting a multi-objective optimization problem into a single-objective optimization problem in order to solve it. We were inspired by the methods used to solve the multi-objective capacitated arc routing problem (MO-CARP), for which Tang et al. (2009) proposed a memetic algorithm with extended neighborhood search (MAENS), and Mei et al. (2011) developed a new memetic algorithm (MA) called the decomposition-based memetic algorithm with extended neighborhood search (D-MAENS). Shang et al. (2014) proposed another version of D-MAENS with two improvements: on the one hand, accelerating the convergence speed by replacing solutions immediately once an offspring is generated, in the spirit of steady-state evolutionary algorithms; on the other hand, implementing elitism by using an archive to maintain the current best solution in its decomposition direction during the search.

Zhang & Li (2007) suggested a multi-objective evolutionary algorithm based on decomposition (MOEA/D) that decomposes a multi-objective optimization problem into many scalar optimization subproblems and optimizes them simultaneously, each subproblem being optimized using only information from its several neighboring subproblems. Bhuvana & Aravindan (2016) introduced a preferential local search mechanism to further fine-tune the global optimal solutions, together with an adaptive weight mechanism for combining multiple objectives. Kim & Liou (2012) proposed a novel fitness sharing method for multi-objective genetic algorithms by combining a new sharing function and sided degradations in the sharing process, with preference given to either of two close solutions. Arshad et al. (2009) presented a sequence-based genetic algorithm for the symmetric traveling salesman problem, where a set of sequences is extracted from the best individuals and used to guide the search, and procedures are applied to maintain diversity by breaking the selected sequences into subtours if the best individual of the population does not improve.

Some of these ideas have been integrated into NSGA-II to obtain a new memetic algorithm for solving multi-objective optimization problems. Deb et al. (2002) suggested a non-dominated sorting-based multi-objective evolutionary algorithm (MOEA), called the non-dominated sorting genetic algorithm NSGA-II, and Chu & Yu (2018) proposed an improved crowding distance for the standard NSGA-II.

The remainder of the paper is organized as follows. Section 2 presents a mathematical formulation of the quadratic multi-objective stochastic knapsack problem with simple recourse. Section 3 then describes, in detail, our solution methods that aim to provide a high-quality approximation of the Pareto front. We first introduce the NSGA-II algorithm, then sketch the greedy algorithm and the principle of subpopulation construction before describing the steps of the non-dominated sorting algorithm and detailing the crowding-distance algorithm. In the next subsection, we describe the memetic algorithm with local Pareto search by selection neighborhood. The numerical performance of our proposed method is analyzed and evaluated in Section 4, before concluding in Section 5.

2 MATHEMATICAL FORMULATION

We consider a stochastic quadratic multi-objective knapsack problem of the following form: given a knapsack with a fixed weight capacity $c > 0$ as well as a set of $n$ items, $i = 1, \ldots, n$, each item has a weight that is not known in advance, i.e., the decision of which items to choose must be made without exact knowledge of their weights. Therefore, we treat the weights as random variables and assume that the weights $\chi_i$, $i = 1, \ldots, n$, are independent and normally distributed with means $\mu_i > 0$ and standard deviations $\sigma_i$, $i = 1, \ldots, n$. Moreover, each item $i = 1, \ldots, n$ has a fixed $m$-vector reward per weight unit $r_i = (r_i^1, \ldots, r_i^m)^T$, $r_i^k \in \mathbb{R}^+$, $k = 1, \ldots, m$, and to each pair of items $i$ and $j$, $1 \le i < j \le n$, is associated an $m$-vector of joint rewards per unit of weight $r_{ij} = (r_{ij}^1, \ldots, r_{ij}^m)^T$, $r_{ij}^k \in \mathbb{R}^+$, $k = 1, \ldots, m$. The choice of a reward per unit of weight can be justified by the fact that the value of an item often depends on its weight, which is not known in advance.

In case of overweight, items must be removed, and a penalty d must be paid for each unit of weight that is unpacked. Our goal is, therefore, to minimize the total penalty.

The selection of the item i is indicated by a binary decision variable xi which takes the value 1 if item i is included in the selection and 0 otherwise.

The Multi-objective Stochastic Quadratic Knapsack Problem can be mathematically formulated as follows:

2.1 Stochastic Quadratic Knapsack Problem with simple recourse

$\max_{x \in \{0,1\}^n} \; \mathbb{E}\Big[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r_{ij}^{(k)} x_i x_j (\chi_i + \chi_j)\Big] + \mathbb{E}\Big[\sum_{i=1}^{n} r_i^{(k)} x_i \chi_i\Big] - d\,\mathbb{E}\Big[\Big(\sum_{i=1}^{n} x_i \chi_i - c\Big)^{+}\Big], \quad k = 1,\ldots,m.$ (2)

Equation (2) aims to maximize the total profit of all assigned objects.

The special case with two objectives (m = 2) is called the bi-objective binary knapsack problem and is denoted 0-1 BOKP.

2.2 Constrained Knapsack Problem

A. Expectation Constrained Knapsack Problem

$\begin{cases} \max_{x \in \{0,1\}^n} \; \mathbb{E}\Big[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r_{ij}^{k} x_i x_j (\chi_i + \chi_j)\Big], \quad k = 1,\ldots,m & (3.1)\\[4pt] \text{s.t. } \mathbb{E}\big[\mathbf{1}_{\mathbb{R}^+}(c - g(x,\chi))\big] \ge p. & (3.2) \end{cases}$ (3)

B. Chance Constrained Knapsack Problem

$\begin{cases} \max_{x \in \{0,1\}^n} \; \mathbb{E}\Big[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r_{ij}^{k} x_i x_j (\chi_i + \chi_j)\Big], \quad k = 1,\ldots,m & (4.1)\\[4pt] \text{s.t. } \mathbb{P}\big[g(x,\chi) \le c\big] \ge p, & (4.2) \end{cases}$ (4)

where:

  • $\mathbb{P}[A]$ denotes the probability of an event $A$,

  • $\mathbb{E}[\cdot]$ denotes the expectation,

  • $\mathbf{1}_{\mathbb{R}^+}$ denotes the indicator function of the positive real interval,

  • $g(x,\chi) = \sum_{i=1}^{n} x_i \chi_i$,

  • $d \in \mathbb{R}^+$ is the penalty factor,

  • $p \in [0.5, 1]$ is the prescribed probability.

In the formulation of the multi-objective stochastic quadratic knapsack problem with simple recourse, the capacity constraint has been included in the objective function by using the penalty function [.]+ and a penalty factor d >0. In the case of an overload, items have to be removed and a penalty d has to be paid for each unit of weight that is unpacked.

We write the objective function of the Stochastic Quadratic Knapsack Problem with simple recourse as follows:

$J^{(k)}(x,\chi) = \mathbb{E}\Big[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r_{ij}^{(k)} x_i x_j (\chi_i + \chi_j)\Big] + \mathbb{E}\Big[\sum_{i=1}^{n} r_i^{(k)} x_i \chi_i\Big] - d\,\mathbb{E}\Big[\Big(\sum_{i=1}^{n} x_i \chi_i - c\Big)^{+}\Big], \quad k = 1,\ldots,m.$ (5)

Since the function $J$ is not differentiable, we present an approximation of its gradient, called the approximation by convolution. This is one of the two methods presented by Andrieu et al. (2007).

The basic idea of this method, called the "approximation by convolution" method, is to approximate the indicator function $\mathbf{1}_{\mathbb{R}^+}$ by its convolution with a function

$h_t(x) := \dfrac{1}{t}\, h\!\left(\dfrac{x}{t}\right)$ (6)

that approximates the Dirac delta function as the parameter $t$ goes to 0.

Let us consider a function $h: \mathbb{R} \to \mathbb{R}$ with the following properties:

$\begin{array}{ll} a) & h \text{ has a unique maximum at } x = 0,\\ b) & \forall x \in \mathbb{R},\; h(x) \ge 0,\\ c) & \forall x \in \mathbb{R},\; h(x) = h(-x),\\ d) & \int_{-\infty}^{+\infty} h(x)\, dx = 1. \end{array}$ (7)

Given any other function $\rho: \mathbb{R} \to \mathbb{R}$ and a small positive number $t$, the convolution of the two functions $\rho$ and $h_t$ is defined as follows:

$(\rho * h_t)(x) = \dfrac{1}{t}\int_{-\infty}^{+\infty} \rho(y)\, h\!\left(\dfrac{y-x}{t}\right) dy.$ (8)

The function $(\rho * h_t)$ is differentiable, with:

$(\rho * h_t)'(x) = \dfrac{1}{t^2}\int_{0}^{+\infty} \rho(y)\, h'\!\left(\dfrac{x-y}{t}\right) dy = \dfrac{1}{t}\, h\!\left(\dfrac{x}{t}\right).$ (9)

In the case $\rho = \mathbf{1}_{\mathbb{R}^+}$, we have:

$(\rho * h_t)(x) = \dfrac{1}{t}\,\mathbb{E}\left[\int_{-\infty}^{+\infty} \mathbf{1}_{\mathbb{R}^+}(y)\, h\!\left(\dfrac{x-y}{t}\right) dy\right] = \dfrac{1}{t}\,\mathbb{E}\left[\int_{0}^{+\infty} h\!\left(\dfrac{y-x}{t}\right) dy\right]$ (10)

$(\rho * h_t)(x) = \mathbb{E}[\rho_t(x)],$ (11)

$\rho_t(x) = \dfrac{1}{t}\int_{0}^{+\infty} h\!\left(\dfrac{y-x}{t}\right) dy.$ (12)

Then,

$\rho_t'(x) = \dfrac{1}{t^2}\int_{0}^{+\infty} h'\!\left(\dfrac{x-y}{t}\right) dy = -\dfrac{1}{t}\, h\!\left(\dfrac{x}{t}\right).$ (13)

Based on the method explained above, we obtain the following approximation $\nabla J_t^{(k)}$ of the gradient of the function $J^{(k)}$:

$\nabla J_t^{(k)}(x,\chi) = \Big[\big(r_1^{(k)}\chi_1,\ldots,r_n^{(k)}\chi_n\big)^{T} + \Big(\sum_{j\neq 1} r_{1j}^{(k)}(\chi_1+\chi_j)x_j,\ldots,\sum_{j\neq n} r_{nj}^{(k)}(\chi_n+\chi_j)x_j\Big)^{T}\Big] - d\left(-\dfrac{1}{t}\, h\!\Big(\dfrac{g(x,\chi)-c}{t}\Big)\cdot\chi\cdot(g(x,\chi)-c) + \mathbf{1}_{\mathbb{R}^+}(g(x,\chi)-c)\cdot\chi\right), \quad k = 1,\ldots,m.$ (14)

Andrieu et al. (2007) and Kosuch & Lisser (2011) proposed various functions that may be chosen for $h$. For each function, they computed a reference value for the mean square error of the resulting approximated gradient and compared them. It turned out that the function

$h(x) = \dfrac{3}{4}\,(1 - x^2)\,\mathbf{1}_{[-1,1]}(x)$ (15)

offered the smallest value of this error (Andrieu et al., 2007).

Here the indicator function $\mathbf{1}_{[-1,1]}$ is defined as:

$\mathbf{1}_{[-1,1]}(x) = \begin{cases} 1 & \text{if } -1 \le x \le 1,\\ 0 & \text{otherwise}. \end{cases}$ (16)

Based on the results of Kosuch & Lisser (2011), we get the following estimate of the gradient of $J$:

$\nabla J_t^{(k)}(x,\chi) = \Big[\big(r_1^{(k)}\chi_1,\ldots,r_n^{(k)}\chi_n\big)^{T} + \Big(\sum_{j\neq 1} r_{1j}^{(k)}(\chi_1+\chi_j)x_j,\ldots,\sum_{j\neq n} r_{nj}^{(k)}(\chi_n+\chi_j)x_j\Big)^{T}\Big] - d\left(-\dfrac{3}{4t}\Big(1-\Big(\dfrac{g(x,\chi)-c}{t}\Big)^{2}\Big)\mathbf{1}_{[-1,1]}\Big(\dfrac{g(x,\chi)-c}{t}\Big)\cdot\chi\cdot(g(x,\chi)-c) - \mathbf{1}_{\mathbb{R}^+}(g(x,\chi)-c)\cdot\chi\right), \quad k = 1,\ldots,m.$ (17)
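For illustration only, the following Python sketch shows how a single sample-based evaluation of a gradient estimate of the form (17) could be organized; the NumPy data layout (a symmetric joint-reward matrix with zero diagonal), the function name, and the sign conventions taken directly from (17) as written are assumptions of this sketch, not part of the original method.

```python
import numpy as np

def grad_estimate(x, chi, r, R, c, d, t):
    """Sample-based gradient estimate of the form (17) for one objective k.

    x   : 0/1 selection vector of length n
    chi : one sample of the random weights, length n
    r   : linear rewards per weight unit r_i^{(k)}, length n
    R   : symmetric matrix of joint rewards r_ij^{(k)}, zero diagonal
    c   : knapsack capacity, d : penalty factor, t : smoothing parameter
    """
    lin = r * chi                                 # gradient of the linear reward term
    quad = chi * (R @ x) + R @ (x * chi)          # gradient of the quadratic reward term
    g = float(np.dot(x, chi))                     # total selected weight g(x, chi)
    z = (g - c) / t
    h = 0.75 * (1.0 - z ** 2) * (abs(z) <= 1.0)   # h((g - c)/t) with h of eq. (15)
    # Penalty part, with the sign conventions of (17) as written in the text
    penalty = -(1.0 / t) * h * chi * (g - c) - (g > c) * chi
    return lin + quad - d * penalty
```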

2.3 Deterministic Equivalent Problem

The new random variable is defined as follows:

$X := \sum_{i=1}^{n} x_i \chi_i,$ (18)

which is normally distributed with mean

$\hat{\mu} := \sum_{i=1}^{n} \mu_i x_i,$ (19)

standard deviation

$\hat{\sigma} := \sqrt{\sum_{i=1}^{n} \sigma_i^2 x_i^2},$ (20)

density function

$\varphi(x) = \dfrac{1}{\hat{\sigma}}\, f\!\left(\dfrac{x-\hat{\mu}}{\hat{\sigma}}\right)$ (21)

and cumulative distribution function

$\Phi(x) = F\!\left(\dfrac{x-\hat{\mu}}{\hat{\sigma}}\right),$ (22)

where $f$ and $F$ denote, respectively, the density and the cumulative distribution function of the standard normal distribution.

Cohn & Barnhart (1998) and Kosuch & Lisser (2009) proposed to rewrite the objective function in a deterministic way by using the following calculations:

$\begin{aligned} \mathbb{E}\big[[X-c]^{+}\big] &= \int_{-\infty}^{\infty} [X-c]^{+}\,\varphi(X)\,dX = \int_{c}^{\infty} (X-c)\,\varphi(X)\,dX \\ &= \int_{c}^{\infty} X\varphi(X)\,dX - c\int_{c}^{\infty}\varphi(X)\,dX \\ &= \hat{\mu}\int_{c}^{\infty}\varphi(X)\,dX - \hat{\sigma}^{2}\int_{c}^{\infty}\varphi'(X)\,dX - c\int_{c}^{\infty}\varphi(X)\,dX \\ &= -\hat{\sigma}^{2}\big[\varphi(X)\big]_{c}^{\infty} + (\hat{\mu}-c)\big[\Phi(X)\big]_{c}^{\infty} \\ &= \hat{\sigma}^{2}\varphi(c) + (\hat{\mu}-c)\big[1-\Phi(c)\big] \\ &= \hat{\sigma}\, f\!\Big(\dfrac{c-\hat{\mu}}{\hat{\sigma}}\Big) + (\hat{\mu}-c)\Big[1-F\!\Big(\dfrac{c-\hat{\mu}}{\hat{\sigma}}\Big)\Big] \end{aligned}$ (23)

Finally, we write the deterministic equivalent objective function as follows:

$J_{\det}^{(k)}(x) = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r_{ij}^{(k)} x_i x_j (\mu_i + \mu_j) + \sum_{i=1}^{n} r_i^{(k)} \mu_i x_i - d\left[\hat{\sigma}\, f\!\Big(\dfrac{c-\hat{\mu}}{\hat{\sigma}}\Big) - (c-\hat{\mu})\Big[1 - F\!\Big(\dfrac{c-\hat{\mu}}{\hat{\sigma}}\Big)\Big]\right], \quad k = 1,\ldots,m.$ (24)
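As an illustration, here is a minimal Python sketch of how the deterministic equivalent objective (24) could be evaluated for one objective k, assuming that f and F are the standard normal density and cumulative distribution function (taken here from scipy.stats) and that joint rewards are stored in a symmetric matrix with zero diagonal; the handling of an empty selection is an added safeguard.

```python
import numpy as np
from scipy.stats import norm

def j_det(x, mu, sigma, r, R, c, d):
    """Deterministic equivalent objective (24) for one objective k.

    x     : 0/1 selection vector of length n
    mu    : mean weights, sigma : standard deviations of the weights
    r     : linear rewards r_i^{(k)},  R : joint rewards r_ij^{(k)}
            (symmetric matrix with zero diagonal)
    c, d  : knapsack capacity and penalty factor
    """
    mu_hat = float(np.dot(mu, x))
    sigma_hat = float(np.sqrt(np.dot(sigma ** 2, x ** 2)))
    # sum_{i<j} r_ij x_i x_j (mu_i + mu_j): symmetric double sum counted once
    quad = 0.5 * x @ (R * (mu[:, None] + mu[None, :])) @ x
    lin = float(np.dot(r * mu, x))
    if sigma_hat == 0.0:                       # empty selection: no randomness left
        recourse = max(mu_hat - c, 0.0)
    else:
        z = (c - mu_hat) / sigma_hat
        recourse = sigma_hat * norm.pdf(z) - (c - mu_hat) * (1.0 - norm.cdf(z))
    return quad + lin - d * recourse
```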

3 RESOLUTION METHOD

In the decomposition framework, the original MO-SQKP is first decomposed into many Single-Objective Stochastic Quadratic Knapsack Problems (SO-SQKP). To be more precise, given the objective vector

$F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T,$ (25)

we include the capacity constraint in the objective function by using the penalty function [.]+ and a penalty factor d >0. This can be interpreted as follows: in the case where our choice of items leads to a capacity excess, a penalty occurs per overweight unit.

$J^{(k)}(x,\chi) = \mathbb{E}\Big[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r_{ij}^{(k)} x_i x_j (\chi_i + \chi_j)\Big] + \mathbb{E}\Big[\sum_{i=1}^{n} r_i^{(k)} x_i \chi_i\Big] - d\,\mathbb{E}\Big[\Big(\sum_{i=1}^{n} x_i \chi_i - c\Big)^{+}\Big], \quad k = 1,\ldots,m.$ (26)

3.1 Population initialization

The initial population is constructed according to a greedy constructive heuristic. Initially, the knapsack is empty; the heuristic then adds objects to the knapsack one by one, each time selecting, among the remaining available objects, the one with the highest value density. The greedy constructive heuristic for the MO-SQKP examines the relative value density $v_d(i, S)$ of an object $i$ in order to select the objects to assign to the knapsack. The value density $v_d(i, S)$ of an object $i$ relative to a set $S$ of other objects is the sum of the value of object $i$ and its joint values with the objects in $S$, divided by its weight (Hiley & Julstrom, 2006):

$v_d(i, S) = \dfrac{r_i + \sum_{j \in S} r_{ij}}{w_i}$ (27)


Greedy Constructive Heuristic
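To make the construction concrete, the following Python sketch gives one possible reading of the greedy constructive heuristic; using the mean weights as deterministic weights and checking the residual capacity are assumptions of this sketch rather than details stated in the text.

```python
import numpy as np

def greedy_construct(mu, r, R, c):
    """Greedy constructive heuristic: repeatedly add the item with the highest
    value density (27) relative to the items already packed.

    mu : mean weights (used here as deterministic weights w_i, an assumption)
    r  : linear rewards, R : joint rewards matrix (symmetric, zero diagonal)
    c  : knapsack capacity
    """
    n = len(mu)
    selected = np.zeros(n, dtype=int)
    remaining = float(c)
    candidates = set(range(n))
    while candidates:
        best_i, best_density = None, -np.inf
        for i in candidates:
            if mu[i] > remaining:              # skip items that no longer fit
                continue
            density = (r[i] + R[i] @ selected) / mu[i]   # value density v_d(i, S)
            if density > best_density:
                best_i, best_density = i, density
        if best_i is None:                     # nothing fits any more
            break
        selected[best_i] = 1
        remaining -= mu[best_i]
        candidates.remove(best_i)
    return selected
```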

Subpopulation construction First, we need to manipulate our rewards per objective function to create a new individual in the population. To do that, we need a vector_reward $V$ at each iteration. This vector_reward $V$ represents the ratio by which we want to weight and favor each objective function. For instance, if we want to favor only the first objective function, then the vector_reward looks like $V = (1, 0, 0, \ldots, 0)$, where each value in $V$ multiplies the corresponding reward, i.e.,

$\mathrm{new\_rewards}_k = V(k) \cdot \mathrm{reward}_k, \quad k = 1, \ldots, \text{number of objectives}.$ (28)


Subpopulation construction

For the first number_of_objectives individuals of the population, we favor the $k$-th objective function by setting the $k$-th element of $V$ to 1 and all other elements to 0. Then we send the resulting new_rewards to our greedy algorithm in order to obtain a new individual. For the rest of the population, we first evaluate the individuals already in the population, and then we search for the maximum value max* among all the evaluated values. We define a new vector_max by

$\mathrm{vector\_max} = \left(\dfrac{\max(f_1^*)}{\max^*}, \dfrac{\max(f_2^*)}{\max^*}, \ldots, \dfrac{\max(f_m^*)}{\max^*}\right),$ (29)

where $f_i^*$ is the vector of evaluations of the $i$-th objective across all the individuals. Based on vector_max, we create a new vector_reward:

$V = \dfrac{\mathrm{vector\_max}}{\sum \mathrm{vector\_max}}.$ (30)

We use V as above to create a new_rewards and send it to our greedy algorithm in order to get a new individual. This process is repeated until we generate a population of size N.
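A minimal Python sketch of this subpopulation construction scheme, covering equations (28)-(30), is given below; the function name and the array layout are illustrative assumptions.

```python
import numpy as np

def next_reward_vector(objective_values, m, built_so_far):
    """Construction of the vector_reward V used in equations (28)-(30).

    objective_values : array of shape (built_so_far, m) with f_k(x) for the
                       individuals already generated
    m                : number of objectives
    built_so_far     : number of individuals generated so far
    """
    if built_so_far < m:
        V = np.zeros(m)
        V[built_so_far] = 1.0                   # favor one objective at a time
    else:
        col_max = objective_values.max(axis=0)  # max(f_k^*) for each objective
        overall_max = col_max.max()             # max^*
        vector_max = col_max / overall_max      # equation (29)
        V = vector_max / vector_max.sum()       # equation (30)
    return V

# The greedy heuristic is then called with new_rewards_k = V[k] * reward_k (eq. 28).
```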

3.2 NSGA-II Algorithm

Our resolution method is inspired by the non-dominated sorting genetic algorithm II (NSGA-II). NSGA-II is one of the most popular multi-objective optimization algorithms; it is more efficient than its previous version, NSGA, and has quickly become widespread. Its main advantage is its strategy for preserving the diversity of solutions. NSGA-II can be detailed as follows:

  • Step 1. Population initialization: The population is initialized based on the problem range and constraints.

  • Step 2. Non-dominated sort: The initialized population is sorted based on non-domination. The fast sorting algorithm is described in Section 3.3.

  • Step 3. Crowding distance: The Crowding distance algorithm is described in Section 3.4.

  • Step 4. Selection: The selection of individuals is carried out using a binary tournament selection with the crowded-comparison operator.

  • Step 5. Genetic operators: Real-coded genetic operators, namely simulated binary crossover and polynomial mutation, are applied.

  • Step 6. Recombination and selection: The offspring population and the current generation population are combined, and the individuals of the next generation are chosen by selection. The new generation is filled front by front until adding a further front would exceed the population size N; the last admitted front is then truncated so that exactly N individuals are kept.

3.3 Non-dominated Sort Algorithm

In this section, we describe the two steps of the Non-dominated Sorting Algorithm, whose main objective is to sort all individuals in the population according to different levels of non-dominance.

The first step aims at finding the first level of non-dominated solutions in the population of size N. This requires comparing each solution, for each objective function, with all the other solutions in the population to check whether it is dominated. The above process (Lines 3-17 of Algorithm 1) is repeated until all non-dominated members of the first front in the population are found. Once all individuals of the first non-dominated front are found, the process proceeds to the next step to find the individuals of each subsequent non-dominated front.

Algorithm 1
Non-dominated Sort Algorithm

Let p be any solution and denote by:

  • ηp: an indicator that counts the number of solutions that dominate p.

  • Sp: the set of solutions that p dominates.

After the first step of the non-dominated sorting procedure, the first non-dominated level is found and the two entities above are computed for each solution. All the solutions of the first non-dominated level have their domination count equal to zero, $\eta_p = 0$.

Then, in the second step of the procedure, for each solution $p$ with $\eta_p = 0$ we visit each member $q$ of its set $S_p$ and reduce its domination count by one. Whenever the domination count of a member $q$ drops to zero ($\eta_q = 0$), we add $q$ to a separate list $Q$; these members belong to the second non-dominated level. We then continue the above procedure with each member of $Q$ to identify the third front, and this process continues until all levels are identified. Once a solution is assigned a non-dominated level, it is never visited again.
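The two steps described above correspond to the classical fast non-dominated sorting procedure; a compact Python sketch, assuming maximization as in this paper, is given below for illustration.

```python
def fast_non_dominated_sort(F):
    """Two-step non-dominated sorting of a population (maximization assumed).

    F : list of objective vectors.  Returns a list of fronts, each front being
        a list of solution indices.
    """
    n = len(F)

    def dominates(a, b):
        return all(u >= v for u, v in zip(a, b)) and any(u > v for u, v in zip(a, b))

    S = [[] for _ in range(n)]      # S_p : solutions dominated by p
    eta = [0] * n                   # eta_p: number of solutions dominating p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(F[p], F[q]):
                S[p].append(q)
            elif dominates(F[q], F[p]):
                eta[p] += 1
        if eta[p] == 0:             # p belongs to the first front
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        Q = []                      # members of the next front
        for p in fronts[i]:
            for q in S[p]:
                eta[q] -= 1
                if eta[q] == 0:
                    Q.append(q)
        i += 1
        fronts.append(Q)
    return fronts[:-1]              # drop the trailing empty front
```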

3.4 Crowding-Distance Algorithm

The crowding distance value of a solution provides an estimate of the density of solutions surrounding a particular solution in the population. We calculate the average distance of two points on either side of this point along each of the objectives. Figure 1 shows the computation of the crowding distance of the i th solution, which is an estimate of the size of the largest cuboid enclosing i without including any other point.

Figure 1
Computation of the Crowding distance.

The crowding-distance computation requires sorting the population according to each objective function value in non-decreasing order. The crowding-distance value of a particular solution is obtained from the distances to its two neighboring solutions. For each objective function, the boundary solutions with the lowest and the highest objective function values are given an infinite crowding-distance value. All other intermediate solutions are assigned a distance value equal to the absolute normalized difference in the function values of their two adjacent solutions. This process is repeated for each objective function. The overall crowding-distance value of a solution is computed by summing its individual crowding-distance values corresponding to each objective function.

Algorithm 2 below outlines the Crowding-Distance Computation procedure of all solutions in a non-dominated set P.

Algorithm 2
Crowding-Distance Computation Algorithm

Note that $P[i].k$ refers to the $k$-th objective function value of individual $i$ in the set $P$, and $f_k^{max}$ and $f_k^{min}$ are, respectively, the maximum and the minimum values of the $k$-th objective function.
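For illustration, one possible Python transcription of this crowding-distance computation is sketched below; the data structures (a list of objective vectors and a dictionary of distances) are assumptions of the sketch.

```python
def crowding_distance(F, front):
    """Crowding distance (Algorithm 2) of the solutions of one front.

    F     : list of objective vectors
    front : list of indices belonging to one non-dominated front
    """
    m = len(F[0])
    distance = {i: 0.0 for i in front}
    for k in range(m):
        ordered = sorted(front, key=lambda i: F[i][k])   # sort by objective k
        f_min, f_max = F[ordered[0]][k], F[ordered[-1]][k]
        distance[ordered[0]] = distance[ordered[-1]] = float("inf")
        if f_max == f_min:
            continue                                     # no spread on this objective
        for pos in range(1, len(ordered) - 1):
            gap = F[ordered[pos + 1]][k] - F[ordered[pos - 1]][k]
            distance[ordered[pos]] += gap / (f_max - f_min)
    return distance
```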

At this stage, all population members in the set $P$ are assigned a distance metric. A solution with a smaller value of this distance measure is, in some sense, more crowded by other solutions. This is exactly what the crowded-comparison operator described next compares.

Crowded-comparison operator The crowded-comparison operator guides the selection process at the different stages of the algorithm towards a uniformly spread Pareto optimal front. The crowding distance is used only when individuals of the same non-domination rank have to be compared, i.e., the crowding distance is the criterion for selecting among individuals of the same rank.

Suppose that each individual i in the population has two attributes:

  • non-domination rank (i rank ),

  • crowding distance (i distance ).

The individuals are selected by using a binary tournament selection with the crowded-comparison operator, according to the following rules:

  • If $x$ and $y$ have different non-domination ranks ($x_{rank} \neq y_{rank}$), the solution with the lower (better) rank is selected.

  • If $x$ and $y$ belong to the same front $F_i$ ($x_{rank} = y_{rank}$), the solution with the larger crowding distance ($x_{distance} > y_{distance}$), i.e., the one located in the less crowded region, is selected.
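These rules can be summarized in a small Python sketch of the crowded-comparison operator; the dictionary-based bookkeeping of ranks and distances is an illustrative assumption.

```python
def crowded_comparison(x, y, rank, distance):
    """Return the index preferred by the crowded-comparison operator."""
    if rank[x] != rank[y]:
        return x if rank[x] < rank[y] else y            # lower rank is better
    # same rank: prefer the larger crowding distance (less crowded region)
    return x if distance[x] >= distance[y] else y
```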

3.5 Selection Neighborhood Pareto Local Search (SNPLS) Algorithm

Algorithm 3
Selection Neighborhood Pareto Local Search SNPLS Algorithm

In the decomposition framework, the original MO-SQKP is first decomposed into many SO-SQKPs. To be specific, given the objective vector $F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T$ and a weight vector $\lambda = (\lambda_1, \ldots, \lambda_m)^T$ whose components sum to 1, the objective function of a subproblem is stated as:

$\begin{cases} J(x,\chi) = \sum_{k=1}^{m} \lambda^{(k)} f^{(k)}(x) & (31.1)\\[4pt] \sum_{k=1}^{m} \lambda^{(k)} = 1 & (31.2)\\[4pt] \lambda^{(k)} \ge 0, \quad k = 1,\ldots,m & (31.3) \end{cases}$ (31)

We can write J(x, χ) as follows:

$J(x,\chi) = \sum_{k=1}^{m} \lambda^{(k)}\left[\mathbb{E}\Big[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r_{ij}^{(k)} x_i x_j(\chi_i+\chi_j)\Big] + \mathbb{E}\Big[\sum_{i=1}^{n} r_i^{(k)} x_i\chi_i\Big] - d\,\mathbb{E}\Big[\Big(\sum_{i=1}^{n} x_i\chi_i - c\Big)^{+}\Big]\right],$ (32)

and an approximation of the gradient of the function J (x, χ) is given by:

$\nabla J_t^{(k)}(x,\chi) = \lambda^{(k)}\left[\Big[\big(r_1^{(k)}\chi_1,\ldots,r_n^{(k)}\chi_n\big)^{T} + \Big(\sum_{j\neq 1} r_{1j}^{(k)}(\chi_1+\chi_j)x_j,\ldots,\sum_{j\neq n} r_{nj}^{(k)}(\chi_n+\chi_j)x_j\Big)^{T}\Big] - d\left(-\dfrac{3}{4t}\Big(1-\Big(\dfrac{g(x,\chi)-c}{t}\Big)^{2}\Big)\mathbf{1}_{[-1,1]}\Big(\dfrac{g(x,\chi)-c}{t}\Big)\cdot\chi\cdot(g(x,\chi)-c) - \mathbf{1}_{\mathbb{R}^+}(g(x,\chi)-c)\cdot\chi\right)\right], \quad k = 1,\ldots,m.$ (33)

The weight vectors are calculated based on the value of the Euclidean norm and parameters ω i as follows:

$\lambda_k = \dfrac{\omega_k}{\sum_{i=1}^{m} \omega_i},$ (34)

where:

  • $\omega_k$ is given by:

$\omega_k = \dfrac{f_k(x)}{\|f(x)\|},$ (35)

  • $m$ is the number of objective functions,

  • $f_k(x)$ is the value of the $k$-th objective function of a solution $x$,

  • $\|f(x)\|$ is the Euclidean norm, given by:

$\|f(x)\| = \sqrt{(f_1(x))^2 + \cdots + (f_m(x))^2}.$ (36)

Once the individual weights are determined for all the objectives, they are combined together into a single objective F as follows:

$F = \lambda_1 f_1 + \lambda_2 f_2 + \cdots + \lambda_m f_m.$ (37)
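A short Python sketch of this weight construction and aggregation, following equations (34)-(37) and assuming that the objective values of x are positive (so that the norm and the sum of weights are nonzero), is given below.

```python
import numpy as np

def scalarize(fx):
    """Weights (34)-(35) from the Euclidean norm (36) and aggregation (37).

    fx : array with the m objective values f_k(x) of one solution x
         (assumed positive, so that the norm and the weight sum are nonzero).
    """
    norm_fx = np.linalg.norm(fx)        # ||f(x)||, equation (36)
    omega = fx / norm_fx                # omega_k = f_k(x) / ||f(x)||, equation (35)
    lam = omega / omega.sum()           # lambda_k = omega_k / sum_i omega_i, eq. (34)
    F = float(np.dot(lam, fx))          # F = sum_k lambda_k f_k, equation (37)
    return lam, F
```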

We then apply a local gradient algorithm to compute the new local solution $x^{p+1}$ from the current solution $x^{p}$.

After that, the fitness of the new solution is evaluated, since the fitness value is our criterion for comparing the new and the old solutions. Finally, we keep the solution with the best fitness.

We repeat the above process for each offspring.

Having built these two algorithms (the Non-dominated Sort Algorithm and the Crowding-Distance Computation Algorithm), including a crowded-comparison operator, and having determined the parameters $i_{rank}$ and $i_{distance}$, we are now ready to describe the memetic algorithm.

3.6 Memetic Algorithm with Selection Neighborhood Pareto Local Search

In this section, we detail our Memetic Algorithm with Selection Neighborhood Pareto Local Search (MASNPL) for the MO-SQKP, including initialization, crossover, and local search.

Then, the fitness of each individual is evaluated and non-dominated sorting is applied to assign to each individual a non-domination rank $i_{rank}$ equal to its non-domination level, and the crowding distance $i_{distance}$ is computed for each individual $i$ in the population.

These two parameters, $i_{rank}$ and $i_{distance}$, are used to select individuals in the least crowded regions and to maintain the diversity of solutions on the Pareto front. Then, binary tournament selection is applied to choose the parents, after which crossover and mutation operators are performed to generate new candidate solutions, i.e., the offspring population of size N.

Crossover operator Select randomly two parents (chromosomes) $P_1 = \{x_1^1, x_2^1, x_3^1, \ldots, x_n^1\}$ and $P_2 = \{x_1^2, x_2^2, x_3^2, \ldots, x_n^2\}$ from the current population. A random crossover point is selected; hence, two substrings are generated before and after the cut point in each parent (chromosome). At this cut, the genetic information to the left (or right) of the point is swapped between the two parents (chromosomes) to produce two offspring chromosomes (children).

Copy the first substring from $P_1$ and the second substring from $P_2$, and insert them, as they are, into offspring 1.

Copy the first substring from $P_2$ and the second substring from $P_1$, and insert them, as they are, into offspring 2.
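For illustration, a minimal Python sketch of this one-point crossover follows; representing individuals as 0/1 lists is an assumption of the sketch.

```python
import random

def single_point_crossover(p1, p2):
    """One-point crossover of two parent chromosomes given as 0/1 lists."""
    n = len(p1)
    cut = random.randint(1, n - 1)       # random cut point strictly inside
    child1 = p1[:cut] + p2[cut:]         # first part of P1, second part of P2
    child2 = p2[:cut] + p1[cut:]         # first part of P2, second part of P1
    return child1, child2
```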

Mutation Mutation is an operator that randomly changes one or more genes in order to introduce diversity into a new offspring; it is usually applied with a low probability $p_m$.

In the mutation stage, one individual is selected randomly from the current population, then one of its genes is flipped from 0 to 1 or from 1 to 0. After that, this individual is inserted into the population. The mutation probability is fixed to 0.95.
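The following Python sketch illustrates one reading of this mutation step, in which a single randomly chosen gene of the selected individual is flipped with probability p_m; this single-gene interpretation is an assumption of the sketch.

```python
import random

def mutate(individual, p_m=0.95):
    """With probability p_m, flip one randomly chosen gene (0 <-> 1)."""
    child = list(individual)
    if random.random() < p_m:
        i = random.randrange(len(child))
        child[i] = 1 - child[i]
    return child
```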

Tournament We select two individuals randomly from the current population. After that, a competition takes place between the selected individuals to determine the one to be used in the generation of the new population: the winner is the individual with the best non-domination rank.
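A compact Python sketch of this binary tournament is given below; using the crowding distance as a tie-breaker when the ranks are equal follows the crowded-comparison operator of Section 3.4 and is an assumption here.

```python
import random

def binary_tournament(population, rank, distance):
    """Pick two random individuals and keep the preferred one.

    rank, distance : lists (or dicts) giving the non-domination rank and the
                     crowding distance of each individual in the population.
    """
    x, y = random.sample(range(len(population)), 2)
    if rank[x] != rank[y]:
        winner = x if rank[x] < rank[y] else y              # better rank wins
    else:
        winner = x if distance[x] >= distance[y] else y     # larger distance wins
    return population[winner]
```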

The next stage is to apply a single step of the SNPLS algorithm to the new offspring. The combined population $R_t = P_t \cup Q_t$ is formed, where:

  • Pt: is the parent population of size N.

  • Qt: is the offspring population of size N.

  • Rt: is the combined population of size 2N.

Non-dominated sorting and crowding distance are applied to the combined population. Then, the population $R_t$ is sorted according to non-domination. Solutions belonging to the best non-dominated set ℱ1 are the best solutions in the combined population and must be emphasized more than any other solution in the combined population.

If the size of ℱ1 equals N, all members of the set ℱ1 form the new population P t+1 .

If the size of ℱ1 is smaller than N, all members of the set ℱ1 are placed in the new population P t+1 , and the remaining members of P t+1 are chosen from subsequent non-dominated fronts in the order of their ranking.

Thus, solutions from the set ℱ2 are chosen next, followed by solutions from the set ℱ3, and so on.

The above process (Lines 17-20 of Algorithm 4) is continued until a final set of non-dominated solutions of size N is obtained. The new population P t+1 of size N is then used for selection (binary tournament selection), crossover, and mutation to create a new population Q t+1 of size N. The above process (Lines 6-23 of Algorithm 4) is repeated until the maximum number of generations NG max is reached.
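For illustration, the selection step on the combined population described above can be sketched in Python as follows; truncating the last admitted front by decreasing crowding distance follows standard NSGA-II practice and is an assumption of this sketch.

```python
def environmental_selection(fronts, distance, N):
    """Fill the next population from the combined population R_t = P_t U Q_t.

    fronts   : list of fronts (lists of indices) sorted by non-domination rank
    distance : crowding distance of each index
    N        : target population size
    """
    next_pop = []
    for front in fronts:
        if len(next_pop) + len(front) <= N:
            next_pop.extend(front)             # the whole front fits
        else:
            remaining = N - len(next_pop)
            by_distance = sorted(front, key=lambda i: distance[i], reverse=True)
            next_pop.extend(by_distance[:remaining])
            break
    return next_pop
```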

Algorithm 4
Gradient algorithm

Algorithm 5
Memetic Algorithm With Selection Neighborhood Pareto Local Search (MASNPL)

4 NUMERICAL RESULTS

In this section, we present our numerical results. Since the multi-objective stochastic quadratic knapsack problem has not been treated before, in order to evaluate the performance of our resolution method we compare our algorithm first with an exact method and then with NSGA-II. The results of our comparison are presented in the following subsections.

Note that all our experiments were run on an Intel Core i5-4200U CPU machine at 1.60 GHz with 4 GB of RAM.

4.1 Performance comparison of MASNPL and an exact method

To solve the multi-objective stochastic knapsack problem with simple recourse by an exact method, we implement the equivalent deterministic problem derived above and execute it on randomly generated instances. Note that the comparison is performed for three examples, and each algorithm is executed once. The results for each example are shown in the figures below.

The instances are created randomly with the parameters given below.

Example 4.1.1.

  • Number of objective functions: 2.

  • Weights : uniformly distributed with mean 225 and variance 25.

  • Rewards per weight: generated uniformly between 1 and 10.

  • Number of objects is 50.

  • Knapsack capacity is 9000.

  • Penalty d: 330.

  • Number of generations for MASNPL: 50.

The result of our comparison is presented graphically in Figure 2.

The figure shows the non-dominated solutions obtained by the exact method and by our resolution method (the MASNPL algorithm). The CPU time needed to obtain the final solution with the exact method is 52179.75 seconds (about 14 hours and 30 minutes), while the total CPU time with the MASNPL algorithm is 41.62 seconds (less than a minute). The MASNPL algorithm is thus much faster than the exact method and provides good-quality solutions.

Figure 2
Comparison between Pareto fronts of an exact algorithm and MASNPL for 50 objects.

Example 4.1.2.

  • Number of objective functions: 2.

  • Weights : uniformly distributed with mean 225 and variance 25.

  • Rewards per weight: generated uniformly between 1 and 10.

  • Number of objects is 20.

  • Knapsack capacity is 3600.

  • Penalty d: 132.

  • Number of generations for MASNPL: 50.

The result of our comparison is presented graphically in Figure 3 .

The figure shows the non-dominated solutions obtained by the exact method and by our resolution method (the MASNPL algorithm). The CPU time needed to obtain the final solution with the exact method is 2983.02 seconds (about 50 minutes), while the total CPU time with the MASNPL algorithm is 43.69 seconds (less than a minute). The MASNPL algorithm is thus much faster than the exact method and provides good-quality solutions.

Figure 3
Comparison between Pareto Fronts of an exact algorithm and MASNPL for 20 objects.

Example 4.1.3.

  • Number of objective functions: 2.

  • Weights : uniformly distributed with mean 225 and variance 25.

  • Rewards per weight: generated uniformly between 1 and 10.

  • Number of objects is 15.

  • Knapsack capacity is 2700.

  • Penalty d: 99.

  • Number of generations for MASNPL: 50.

The result of our comparison is presented graphically in Figure 4.

The figure shows the non-dominated solutions obtained by the exact method and by our resolution method (the MASNPL algorithm). The CPU time needed to obtain the final solution with the exact method is 1175.48 seconds (about 20 minutes), while the total CPU time with the MASNPL algorithm is 35.91 seconds. The MASNPL algorithm is thus much faster than the exact method and provides good-quality solutions.

Figure 4
Comparison between Pareto fronts of an exact algorithm and MASNPL for 15 objects.

4.2 Performance comparison of MASNPL and NSGA-II

To compare our MASNPL algorithm with NSGA-II, we implemented both methods in MATLAB and used multi-objective stochastic quadratic knapsack instances created randomly with the following common parameters:

  • Weights : uniformly distributed with mean equal to 225 and variance equal to 25.

  • Rewards per weight: generated uniformly between 1 and 10.

  • Number of generations: 50 for MASNPL and 2500 for NSGA-II.

We run each algorithm 20 times to create a final population of size 50; in each run, the two algorithms (MASNPL and NSGA-II) use the same parameters and the same instance created as above.

We calculate the value of the objective functions $f_i$ for each individual in the population. After this, we sort the population into different levels of non-domination and assign a crowding-distance value to each individual in the population. For each of the 20 runs, we save the non-dominated solutions (rank equal to 1).

The next step of our comparison consists in merging all the saved non-dominated solutions obtained by the MASNPL and NSGA-II algorithms, then applying the non-dominated sort algorithm in order to rank the merged solutions into different non-domination levels, and finally assigning a crowding-distance value to each individual of the merged population by applying the crowding-distance algorithm.

Example 4.2.1. The parameters of this instance are:

  • Number of objective functions: 2.

  • Number of objects: 50.

  • Knapsack capacity: 9000.

  • Penalty d: 330.

Table 1 represents the results (number of non-dominated solutions) obtained by MASNPL and NSGA-II for each test.

Table 1
Numbers of non-dominated solutions obtained by MASNPL and NSGA-II.

To evaluate the quality of the solutions obtained by the MASNPL and NSGA-II algorithms, we use the ratio giving, for each algorithm, the percentage of its non-dominated solutions among the total number of rank-1 solutions.

Over the 10 tests, we note that in 6 tests the rank-1 solutions obtained by the MASNPL algorithm dominate the solutions obtained by the NSGA-II algorithm (which contributes no rank-1 solutions), although the number of generations for NSGA-II is 50 times that for the MASNPL algorithm.

For the rest of the tests, we can observe that:

  • the number of non-dominated solutions obtained by the MASNPL algorithm is greater than the number of non-dominated solutions obtained by the NSGA-II algorithm,

  • the rank-1 solutions obtained by the MASNPL algorithm have larger crowding-distance values than the solutions obtained by NSGA-II, i.e., they lie in less crowded regions of the front,

  • in terms of CPU time, MASNPL is faster than NSGA-II.

According to these results, we can conclude that the MASNPL algorithm performs significantly better than the NSGA-II algorithm.

Example 4.2.2. The parameters of this instance are:

  • Number of objective functions: 2.

  • Number of objects: 100.

  • Knapsack capacity: 21000.

  • Penalty d: 7520.

Table 2 reports the results (number of non-dominated solutions) obtained by the MASNPL and NSGA-II algorithms for each test.

Table 2
Numbers of non-dominated solutions obtained by MASNPL and NSGA-II.

In Table 2, over the 10 tests, we note that in 9 tests the rank-1 solutions obtained by MASNPL dominate the solutions obtained by NSGA-II (which contributes no rank-1 solution), although the number of generations for NSGA-II is 50 times that for the MASNPL algorithm.

In the remaining test, the number of non-dominated solutions obtained by MASNPL equals the number of rank-1 solutions obtained by NSGA-II.

In terms of CPU time, for all tests, MASNPL is faster than NSGA-II.

According to these results, we can say that the MASNPL algorithm performs significantly better than NSGA-II.

Example 4.2.3. The parameters of this instance are:

  • Number of objective functions: 2.

  • Number of objects: 200.

  • Knapsack capacity: 40000.

  • Penalty d: 1424.

Table 3 reports the results (number of non-dominated solutions) obtained by the MASNPL and NSGA-II algorithms for each test.

Table 3
Numbers of non-dominated solutions obtained by MASNPL and NSGA-II.

In Table 3, over the 10 tests, we note that in 9 tests the rank-1 solutions obtained by MASNPL dominate the solutions obtained by NSGA-II (which contributes no rank-1 solution), although the number of generations for NSGA-II is 50 times that for the MASNPL algorithm.

In the remaining test, the number of non-dominated solutions obtained by MASNPL is twice the number of rank-1 solutions obtained by NSGA-II.

In terms of CPU time, for all tests, MASNPL is faster than NSGA-II.

According to these results, we can say that the MASNPL algorithm performs significantly better than NSGA-II.

Example 4.2.4. The parameters of this instance are:

  • Number of objective functions: 4.

  • Number of objects: 50.

  • Knapsack capacity: 9000.

  • Penalty d: 330.

Table 4 reports the results (number of non-dominated solutions) obtained by the MASNPL and NSGA-II algorithms for each test.

Table 4
Numbers of non-dominated solutions obtained by MASNPL and NSGA-II.

In Table 4, over the 10 tests, we note that in 7 tests MASNPL obtained more rank-1 solutions than NSGA-II, although the number of generations for NSGA-II is 50 times that for MASNPL. We also observe that the non-dominated solutions obtained by MASNPL have larger crowding-distance values (i.e., they are better spread along the front) than the non-dominated solutions obtained by NSGA-II.

In the remaining 3 tests, the number of non-dominated solutions obtained by MASNPL is smaller than the number of rank-1 solutions obtained by NSGA-II, but the rank-1 solutions obtained by MASNPL have larger crowding-distance values than those obtained by NSGA-II.

In terms of CPU time, for all tests, MASNPL is faster than NSGA-II.

According to these results, we can conclude that the MASNPL algorithm performs significantly better than NSGA-II.

Example 4.2.5. The parameters of this instance are:

  • Number of objective functions: 3.

  • Number of objects: 100.

  • Knapsack capacity: 21000.

  • Penalty d: 752.

Table 5 reports the results (number of non-dominated solutions) obtained by the MASNPL and NSGA-II algorithms for each test.

Table 5
Numbers of non-dominated solutions obtained by MASNPL and NSGA-II.

In Table 5, over the 10 tests, we note that in 3 tests the rank-1 solutions obtained by MASNPL dominate those obtained by NSGA-II (which did not obtain any rank-1 solution), although the number of generations for NSGA-II is 50 times that for MASNPL. We also observe that the non-dominated solutions obtained by MASNPL have larger crowding-distance values.

In 6 tests, MASNPL obtained more solutions than NSGA-II, and the non-dominated solutions obtained by MASNPL have larger crowding-distance values than those obtained by NSGA-II.

For the remaining test, however, the rank-1 solutions obtained by NSGA-II dominate those obtained by MASNPL.

In terms of CPU time, for all tests, MASNPL is faster than NSGA-II.

According to these results, we can conclude that the MASNPL algorithm performs significantly better than NSGA-II.

Example 4.2.6. The parameters of this instance are:

  • Number of objective functions: 5.

  • Number of objects: 150.

  • Knapsack capacity: 28000.

  • Penalty d: 1008.

Table 6 reports the results (number of non-dominated solutions) obtained by the MASNPL and NSGA-II algorithms for each test.

Table 6
Numbers of non-dominated solutions obtained by MASNPL and NSGA-II.

In Table 6, over the 10 tests, we note that in 1 test MASNPL obtained rank-1 solutions while NSGA-II did not, although the number of generations for NSGA-II is 50 times that for MASNPL. We also observe that the non-dominated solutions obtained by MASNPL have larger crowding-distance values than those obtained by NSGA-II.

In 6 tests, MASNPL obtained more solutions than NSGA-II, and the non-dominated solutions obtained by MASNPL also have larger crowding-distance values than those obtained by NSGA-II.

However, for the remaining 3 tests, the rank-1 solutions obtained by NSGA-II dominate those obtained by MASNPL.

In terms of CPU time, for all tests, MASNPL is faster than NSGA-II.

According to these results, we can conclude that the MASNPL algorithm performs significantly better than NSGA-II.

5 CONCLUSION

In this paper, we detailed the model of the multi-objective stochastic quadratic knapsack problem with simple recourse and random weights. As the objective functions are not differentiable, we approximate their gradients by the convolution-based approximation method. We apply a greedy heuristic for the MO-SQKP to obtain an initial population. Then we use the Non-dominated Sort Algorithm to sort the population into different non-domination levels. After that, we determine the crowding-distance value of each solution by applying the Crowding-Distance Computation Algorithm. We thus obtain a population sorted by non-domination levels, together with the crowding distance of each individual. We then apply a series of mutations, crossovers, and local searches to this population to generate an offspring population. To improve the offspring population, we apply the Selection Neighborhood Pareto Local Search (SNPLS) algorithm, based on the comparison between a current solution (offspring) and a new solution obtained by the gradient algorithm. Then, the Non-dominated Sort Algorithm and the Crowding-Distance Computation Algorithm are applied to the improved offspring to select the best individuals of the population. Finally, the experimental results of the comparison between MASNPL and NSGA-II show that hybridizing NSGA-II with the gradient algorithm performs significantly better and more efficiently than NSGA-II alone.

Our study may open new research perspectives for the stochastic quadratic multi-objective knapsack problem. The methodology we adopted can easily be adapted to two-stage or multi-stage problems where the weights or rewards are not known in advance. Moreover, the efficiency of our method shows that hybridization yields gains both in the quality of the obtained solutions and in execution time. Other ideas could be incorporated into the algorithm to improve it further.

ACKNOWLEDGEMENTS

The authors thank the anonymous referees for their helpful comments and suggestions which contributed to the improvement of the contents of this paper.

The authors wish to thank the Directorate-General of the Scientific Research and the Technological Development of Algeria for its institutional support.

REFERENCES

  • AÏDER M, GACEM O & HIFI M. 2022a. Branch and solve strategies-based algorithm for the quadratic multiple knapsack problem. Journal of the Operational Research Society, 73(2): 540- 557.
  • AÏDER M, GACEM O & HIFI M. 2022b. A hybrid population-based algorithm for the bi-objective quadratic multiple knapsack problem. Expert Systems With Applications, 191: 116238.
  • ANDRIEU L, COHEN G & VÁZQUEZ-ABAD F. 2007. Stochastic programming with probability constraints. Arxiv, Available at: http://fr.arxiv.org/abs/0708.0281
    » http://fr.arxiv.org/abs/0708.0281
  • ARSHAD S, YANG S & LI C. 2009. A sequence based genetic algorithm with local search for the travelling salesman problem. In: Proceedings of the 2009 UK Workshop on Computational Intelligence. pp. 98-105.
  • BHUVANA J & ARAVINDAN C. 2016. Memetic algorithm with Preferential Local Search using adaptive weights for multi-objective optimization problems. Soft Comput., (20): 1365-1388.
  • BLADO D & TORIELLO A. 2021. A column and constraint generation algorithm for the dynamic knapsack problem with stochastic item sizes. Mathematical Programming Computation, (13): 1.
  • CACCHIANI V, IORI M, LOCATELLI A & MARTELLO S. 2022a. Knapsack problems - An overview of recent advances. Part I: Single knapsack problems. Computers & Operations Research, 143: 105692.
  • CACCHIANI V, IORI M, LOCATELLI A & MARTELLO S. 2022b. Knapsack problems - An overview of recent advances. Part II: Multiple, multidimensional, and quadratic knapsack problems. Computers & Operations Research, 143: 105693.
  • CHEN Y & HAO J. 2016. The bi-objective quadratic multiple knapsack problem: Model and heuristics. Knowl.-Based Syst., pp. 89-100.
  • CHU X & YU X. 2018. Improved Crowding Distance for NSGA-II. Arxiv, Available at: https://arxiv.org/abs/1811.12667v1
    » https://arxiv.org/abs/1811.12667v1
  • COHN A & BARNHART C. 1998. The stochastic knapsack problem with random weights: A heuristic approach to robust transportation planning. In: Proceedings of the Triennial Symposium on Transportation Analysis (TRISTAN III).
  • DEB K, PRATAP A, AGARWAL S & MEYARIVAN T. 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2): 182-197.
  • GALLO G, HAMMER PL & SIMEONE B. 1980. Quadratic knapsack problems. Springer.
  • HILEY A & JULSTROM B. 2006. The quadratic multiple knapsack problem and three heuristic approaches to it. In: Proceedings of the 8th annual conference on Genetic and evolutionary computation (GECCO’06). pp. 547-552.
  • KELLERER H, PFERSCHY U & PISINGER D. 2004. Knapsack Problems. Springer.
  • KIM H & LIOU MS. 2012. New fitness sharing approach for multi-objective genetic algorithms. J Glob Optim, 55(3): 579-595.
  • KOSUCH S, LETOURNEL M & LISSER A. 2017. Stochastic Knapsack Problem: Application To Transportation Problems. Pesquisa Operacional, 37(3): 597-613.
  • KOSUCH S & LISSER A. 2009. Upper bounds for the 0-1 stochastic knapsack problem and a B & B algorithm. Ann. Oper. Res., 176(1): 77-93.
  • KOSUCH S & LISSER A. 2011. On two-stage stochastic knapsack problems. Discrete Appl. Math., 159(16): 1827-1841.
  • LÉTOCART L, PLATEAU MC & PLATEAU G. 2014. An efficient hybrid heuristic method for the 0-1 exact k-item quadratic knapsack problem. Pesquisa Operacional, 34(1): 49-72.
  • LISSER A & LOPEZ R. 2010. Stochastic Quadratic Knapsack with Recourse. Electronic Notes in Discrete Mathematics, 36: 97-104.
  • MARTELLO S & TOTH P. 1990. Knapsack Problems: Algorithms and Computer Implementations. John Wiley & Sons Ltd.
  • MEI Y, TANG K & YAO X. 2011. Decomposition-Based Memetic Algorithm for Multiobjective Capacitated Arc Routing Problem. IEEE Transactions on Evolutionary Computation, 15(2): 151-165.
  • PISINGER D. 2007. The quadratic knapsack problem-a survey. Discrete Appl. Math., 155(5): 623-648.
  • RANGE TM, KOZLOWSKI D & PETERSEN NC. 2018. A shortest-path-based approach for the stochastic knapsack problem with non-decreasing expected overfilling costs. Computers & Operations Research, 97: 111-124.
  • SONG B, LI Y, CHEN Y, YAO F & CHEN Y. 2018. A Repair-based approach for stochastic quadratic multiple knapsack problem, 145: 145-155.
  • TANG K, MEI Y & YAO X. 2009. Memetic Algorithm With Extended Neighborhood Search for Capacitated Arc Routing Problems. IEEE Trans Evol Comput, 13(5): 1151-1166.
  • TÖNISSEN DD & SCHLICHER L. 2021. Using 3D-printing in disaster response: The two-stage stochastic 3D-printing knapsack problem, 133: 105356.
  • TÖNISSEN DD, VAN DEN AKKER JM & HOOGEVEEN JA. 2021. Column generation strategies and decomposition approaches for the two-stage stochastic multiple knapsack problem. Computers & Operations Research, 83: 125-139.
  • ZHANG Q & LI H. 2007. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput., 11: 712-731.
  • ZHOU Q, HAO JK & WU Q. 2022. A hybrid evolutionary search for the generalized quadratic multiple knapsack problem. European Journal of Operational Research, 296(3): 788-803.

Publication Dates

  • Publication in this collection
    18 July 2022
  • Date of issue
    2022

History

  • Received
    16 Oct 2021
  • Accepted
    19 Apr 2022