
A Comparison Among Simple Algorithms for Linear Programming

ABSTRACT

This paper presents a comparison between a family of simple algorithms for linear programming and the optimal pair adjustment algorithm. The optimal pair adjustment algorithm improves the convergence of von Neumann’s algorithm, which is very attractive because of its simplicity. However, it is not practical for solving linear programming problems to optimality, since its convergence is slow. The family of simple algorithms results from generalizing the optimal pair adjustment algorithm, introducing a parameter for the number of chosen columns instead of fixing just a pair of them. This generalization preserves the nice features of the simple algorithms. Significant improvements over the optimal pair adjustment algorithm are demonstrated through numerical experiments on a set of linear programming problems.

Keywords:
Linear programming; von Neumann’s algorithm; Simple algorithms

1 INTRODUCTION

The von Neumann algorithm was reported by Dantzig in the early 1990s [4, 5], and it was later studied by Epelman and Freund [7] and by Beck and Teboulle [1]. Among the advantages of this method are its low computational cost per iteration, which is dominated by a matrix-vector multiplication, its ability to exploit the sparsity of the original problem data, and its usually fast initial advance. Epelman and Freund [7] refer to this algorithm as “elementary”, since each iteration involves only simple computations; it is unsophisticated, especially when compared with modern interior point algorithms.

In [11], three algorithms were proposed to overcome some convergence difficulties of von Neumann’s method: the optimal pair adjustment algorithm, the weight reduction algorithm, and the projection algorithm. The optimal pair adjustment algorithm (OPAA) provides the best results among them and inherits the best properties of von Neumann’s algorithm. Although the OPAA converges faster than von Neumann’s algorithm, its convergence is still slow, making it impractical for solving linear optimization problems to optimality.

This work presents a comparison between a family of simple algorithms for linear programming and the optimal pair adjustment algorithm. This family originated from generalizing the idea presented by Gonçalves, Storer and Gondzio in [11] to develop the OPAA; the result is the optimal adjustment algorithm for p coordinates. For each value of p a different algorithm is defined, where p is limited by the order of the problem, thus resulting in a family of algorithms. This family of simple algorithms maintains the ability to exploit the sparsity of the original problem and a fast initial convergence. Significant improvements over the OPAA are demonstrated through numerical experiments on a set of linear programming problems.

The paper is organized as follows. Section 2 contains a description of von Neumann’s algorithm. Section 3 presents both the weight reduction algorithm and the OPAA. Section 4 discusses the family of simple algorithms, the theoretical convergence properties of the optimal adjustment algorithm for p coordinates, and a sufficient condition for its iterations to improve upon those of von Neumann’s algorithm. Section 5 describes the computational experiments comparing the family with the OPAA. The conclusions and perspectives for future work are presented in the last section.

2 VON NEUMANN’S ALGORITHM

The von Neumann algorithm for solving linear programming problems was first described by Dantzig in the early 1990s [4, 5]. The algorithm actually solves the equivalent problem described below.

Consider the following set of linear constraints and the search for a feasible solution:

$$Px = 0, \qquad e^t x = 1, \qquad x \geq 0, \qquad (2.1)$$

where $P \in \Re^{m \times n}$ and $\|P_j\| = 1$ for $j = 1, \ldots, n$ (the columns have norm one), $x \in \Re^n$, and $e \in \Re^n$ is the vector of all ones.

Geometrically, the columns $P_j$ are points on the m-dimensional hypersphere with unit radius centered at the origin. The problem above therefore assigns non-negative weights $x_j$ to the columns $P_j$ so that the origin is their rescaled center of gravity. Note that any linear programming problem can be reduced to problem (2.1) (see [10]).

Figure 1 illustrates the algorithm. In the $k$-th iteration, the residual is $Px^{k-1} = b^{k-1}$. The next residual $b^k$ is the projection of the origin onto the line segment joining $b^{k-1}$ to $P_s$, where $P_s$ is the column forming the largest angle with the residual $b^{k-1}$. The triangle $b^{k-1} 0 b^k$ has hypotenuse $0b^{k-1}$ and leg $0b^k$, and thus $\|b^k\| < \|b^{k-1}\|$ at every iteration.

Figure 1:
Illustration of von Neumann’s algorithm.
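This projection step admits a closed form worth recording here: minimizing $\|\lambda b^{k-1} + (1 - \lambda) P_s\|^2$ over $\lambda$ and using $\|P_s\| = 1$ gives

$$\lambda^* = \frac{1 - P_s^t b^{k-1}}{\|b^{k-1} - P_s\|^2}, \qquad b^k = \lambda^* b^{k-1} + (1 - \lambda^*) P_s.$$

This is the same computation carried out for subproblem (4.4) in Section 4.1.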

The steps of von Neumann’s algorithm are described below:

Algorithm 1:
von Neumann’s Algorithm

In the computational experiments, the stopping criterion was $\|b^k - b^{k-1}\| / \|b^k\| < \varepsilon$, where $\varepsilon$ is a specified tolerance, and the starting point $x_j^0 = 1/n$, $j = 1, \ldots, n$, was used.

The effort per iteration of von Neumann’s algorithm is dominated by the matrix-vector multiplication required to select the column $P_s$, which is O(mn). The number of operations required in this multiplication is significantly lower if the matrix P is sparse. For more details, see [10, 11].
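A minimal sketch of this iteration in Python with NumPy follows (the paper’s own implementation is in C); the function name, tolerance, and iteration cap are illustrative assumptions.

```python
import numpy as np

def von_neumann(P, eps=1e-4, max_iter=1000):
    # Approximately solve Px = 0, e^t x = 1, x >= 0 (problem (2.1)).
    # P must have unit-norm columns. A sketch of Algorithm 1, not the
    # authors' C implementation.
    m, n = P.shape
    x = np.full(n, 1.0 / n)              # starting point x_j^0 = 1/n
    b = P @ x                            # residual b^0 = P x^0
    for _ in range(max_iter):
        s = int(np.argmin(P.T @ b))      # column with the largest angle to b
        # Project the origin onto the segment joining b^{k-1} and P_s:
        lam = (1.0 - P[:, s] @ b) / np.linalg.norm(b - P[:, s]) ** 2
        b_new = lam * b + (1.0 - lam) * P[:, s]
        x *= lam                         # x_j^k = lambda * x_j^{k-1}, j != s
        x[s] += 1.0 - lam                # x_s^k = lambda * (x_s^{k-1} - 1) + 1
        done = np.linalg.norm(b_new - b) / np.linalg.norm(b_new) < eps
        b = b_new
        if done:                         # stopping criterion of Section 2
            break
    return x, b
```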

3 THE WEIGHT REDUCTION AND THE OPTIMAL PAIR ADJUSTMENT ALGORITHMS

In this section, two algorithms proposed by Gonçalves, Storer and Gondzio [11] are described: the weight-reduction algorithm and the optimal pair adjustment algorithm. Both are based on von Neumann’s algorithm and were developed to improve its convergence.

In the weight-reduction algorithm, the residual $b^{k-1}$ is moved closer to the origin by increasing the weight $x_j$ of some columns $P_j$ and decreasing the weight $x_i$ of other columns $P_i$. Figure 2 shows the geometric interpretation of the weight-reduction algorithm.

Figure 2:
Illustration of the weight-reduction algorithm.

At each iteration, the residual $b^{k-1}$ moves in the direction $P_{s^+} - P_{s^-}$, where the columns $P_{s^+}$ and $P_{s^-}$ make the largest and smallest angles with $b^{k-1}$, respectively. The new residual $b^k$ is the projection of the origin onto that line. Only the weights $x_+$ and $x_-$ are changed. There is no guarantee that an iteration of this algorithm improves the residual as much as an iteration of von Neumann’s algorithm [11].

In contrast, the OPAA improves the residual at least as much as von Neumann’s algorithm [11].

First, the OPAA finds the columns $P_{s^+}$ and $P_{s^-}$. Then, the algorithm computes the values $x_{s^+}^k$, $x_{s^-}^k$, and $\lambda$, where $x_j^k = \lambda x_j^{k-1}$ for all $j \neq s^+$ and $j \neq s^-$, that minimize the distance between $b^k$ and the origin, subject to the convexity and non-negativity constraints. The solution to this optimization problem is easy to find by examining the Karush-Kuhn-Tucker (KKT) conditions (see [11]).

The optimal pair adjustment algorithm is described below.

The main difference between the two algorithms is that the OPAA modifies all the weights $x_i^k$, while the weight-reduction algorithm modifies only the weights of the columns $P_{s^+}$ and $P_{s^-}$.

Algorithm 2:
Optimal Pair Adjustment Algorithm

4 OPTIMAL ADJUSTMENT ALGORITHM FOR P COORDINATES

This section presents the optimal adjustment algorithm for p coordinates, developed by Silva [12]. This algorithm generalizes the idea behind subproblem (10) of the OPAA: instead of using two columns to formulate the subproblem, any number of columns can be used, thus assigning relevancy to any number of variables. For each value of p, a different algorithm is obtained; hence, a family of algorithms was developed.

The p variables can be chosen by different methods, according to the problem. A natural choice is to take p/2 columns making the largest angles with the vector $b^k$ and p/2 columns making the smallest angles with $b^k$. If p is odd, an extra column is taken from the set of columns forming the largest angles with $b^k$, for instance.

The optimal adjustment algorithm for p coordinates computes a better direction than the OPAA. It still maintains simplicity, since at each iteration only one matrix-vector multiplication is performed and one small linear system with a positive definite matrix is solved.

The steps of the optimal adjustment algorithm for p coordinates are similar to those of the OPAA. First, the $s_1$ columns forming the largest angles and the $s_2$ columns forming the smallest angles with the residual $b^{k-1}$ are identified, where $s_1 + s_2 = p$ and p is the number of columns to be prioritized. Next, the subproblem is solved, and then the residual and the current point are updated. A sketch of the column selection is given after Algorithm 3.

Algorithm 3:
Optimal Adjustment Algorithm for p Coordinates
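The column selection in Step 1 can be sketched as anticipated above. This is an illustrative Python sketch, not the authors’ C code; the tie-breaking for odd p follows the suggestion of taking the extra column from the largest-angle side.

```python
import numpy as np

def choose_p_columns(P, b, p):
    # Pick s1 = ceil(p/2) columns making the largest angles with b and
    # s2 = floor(p/2) columns making the smallest angles (Section 4).
    inner = P.T @ b                # proportional to cos(angle_j), since ||P_j|| = 1
    s1 = (p + 1) // 2              # extra column goes to the largest-angle side
    s2 = p // 2
    largest = np.argpartition(inner, s1 - 1)[:s1]    # most negative inner products
    if s2 > 0:
        smallest = np.argpartition(inner, -s2)[-s2:]
    else:
        smallest = np.array([], dtype=int)           # p = 1: no smallest-angle column
    return largest, smallest
```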

Subproblem (8) is built in such a way that the residual decreases as much as possible while preserving the constraints $x \geq 0$ and $e^t x = 1$.
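The explicit statement of subproblem (8) is part of Algorithm 3, which is not reproduced in this version of the text. From the special cases (4.2) and (4.5) derived in Section 4.1, its general form can plausibly be reconstructed as

$$\begin{array}{ll} \text{minimize} & \|\bar{b}\|^2 \\ \text{s.t.} & \lambda_0 \Big(1 - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1}\Big) + \sum_{i=1}^{s_1} \lambda_{\eta_i^+} + \sum_{j=1}^{s_2} \lambda_{\eta_j^-} = 1, \quad \lambda \geq 0, \end{array}$$

with $\bar{b}$ as in the expression given below for p > 2 coordinates.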

4.1 Relation between von Neumann’s algorithm and the algorithm with p = 1, and geometric interpretation

The optimal adjustment algorithm for p coordinates with p = 1 and von Neumann’s algorithm generate the same points $x^k$ and the same residuals $b^k$, as shown in Figure 1.

The optimal λ for von Neumann’s algorithm is computed by projecting the origin onto the line segment joining $b^{k-1}$ to $P_s$.

In the algorithm with p = 1, the following subproblem is solved:

$$\text{minimize } \|\bar{b}\|^2 \quad \text{s.t. } \lambda_0 (1 - x_s^{k-1}) + \lambda_1 = 1, \quad \lambda_i \geq 0 \text{ for } i = 0, 1, \qquad (4.2)$$

where $\bar{b} = \lambda_0 (b^{k-1} - x_s^{k-1} P_s) + \lambda_1 P_s$.

Subproblem (4.2) can be rewritten as follows:

$$\lambda_0 (1 - x_s^{k-1}) + \lambda_1 = 1 \implies \lambda_1 = 1 - \lambda_0 (1 - x_s^{k-1}) \geq 0,$$
$$\bar{b} = \lambda_0 (b^{k-1} - x_s^{k-1} P_s) + \lambda_1 P_s = \lambda_0 (b^{k-1} - x_s^{k-1} P_s) + \big(1 - \lambda_0 (1 - x_s^{k-1})\big) P_s = \lambda_0 b^{k-1} + (1 - \lambda_0) P_s. \qquad (4.3)$$

Thus, problem (4.2) becomes:

$$\text{minimize } \|\bar{b}\|^2 \quad \text{s.t. } \lambda \in \left[0, \frac{1}{1 - x_s^{k-1}}\right], \qquad (4.4)$$

where $\bar{b} = \lambda b^{k-1} + (1 - \lambda) P_s$.

If the term $\frac{1}{1 - x_s^{k-1}}$ is larger than 1, then the set of feasible solutions is larger than in von Neumann’s algorithm. However, the geometric interpretation of the algorithm with p = 1, presented in Figure 3, shows that the optimal λ is the same for both algorithms.

Figure 3:
Illustration of the algorithm with p = 1.

In Step 4 of von Neumann’s algorithm, $x^k$ is updated by

$$x_j^k = \begin{cases} \lambda x_j^{k-1}, & j \neq s \\ \lambda (x_s^{k-1} - 1) + 1, & j = s. \end{cases}$$

In Step 4 of the algorithm with p = 1, $x^k$ is updated by

$$x_j^k = \begin{cases} \lambda x_j^{k-1}, & j \neq s \\ 1 - \lambda (1 - x_s^{k-1}), & j = s. \end{cases}$$

Since $\lambda (x_s^{k-1} - 1) + 1 = 1 - \lambda (1 - x_s^{k-1})$, the two updates coincide; therefore, the algorithm with p = 1 and von Neumann’s algorithm generate the same points $x^k$ and the same residuals $b^k$.

For p = 2, subproblem (8) reduces to the following form:

$$\text{minimize } \|\bar{b}\|^2 \quad \text{s.t. } \lambda_0 (1 - x_{s^+}^{k-1} - x_{s^-}^{k-1}) + \lambda_1 + \lambda_2 = 1, \quad \lambda_i \geq 0 \text{ for } i = 0, 1, 2, \qquad (4.5)$$

where $\bar{b} = \lambda_0 (b^{k-1} - x_{s^+}^{k-1} P_{s^+} - x_{s^-}^{k-1} P_{s^-}) + \lambda_1 P_{s^+} + \lambda_2 P_{s^-}$.

The term $\bar{b}$ can be rewritten as:

$$\bar{b} = \lambda_0 (b^{k-1} - x_{s^+}^{k-1} P_{s^+} - x_{s^-}^{k-1} P_{s^-}) + \lambda_1 P_{s^+} + \lambda_2 P_{s^-} = \lambda_0 b^{k-1} + (\lambda_1 - \lambda_0 x_{s^+}^{k-1}) P_{s^+} + (\lambda_2 - \lambda_0 x_{s^-}^{k-1}) P_{s^-}.$$

Thus, $\bar{b}(\lambda_0, \lambda_1, \lambda_2)$ is a linear transformation. When the vectors $\{(b^{k-1} - x_{s^+}^{k-1} P_{s^+} - x_{s^-}^{k-1} P_{s^-}), P_{s^+}, P_{s^-}\}$ are linearly independent, this linear transformation is injective. It maps the triangle generated by $\lambda_0 (1 - x_{s^+}^{k-1} - x_{s^-}^{k-1}) + \lambda_1 + \lambda_2 = 1$, together with its interior, onto the triangle with vertices $P_{s^+}$, $P_{s^-}$, and $P_\nu = \frac{1}{1 - x_{s^+}^{k-1} - x_{s^-}^{k-1}} (b^{k-1} - x_{s^+}^{k-1} P_{s^+} - x_{s^-}^{k-1} P_{s^-})$, together with its interior.

Therefore, the optimal residual $b^k$ is the projection of the origin onto this triangle. The geometric interpretation of the algorithm with p = 2 is given in Figure 4.

Figure 4:
Illustration of the algorithm with p = 2.

For p > 2 coordinates, the term $\bar{b}$ of subproblem (8) becomes

$$\bar{b} = \lambda_0 \Big(b^{k-1} - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} P_{\eta_i^+} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1} P_{\eta_j^-}\Big) + \sum_{i=1}^{s_1} \lambda_{\eta_i^+} P_{\eta_i^+} + \sum_{j=1}^{s_2} \lambda_{\eta_j^-} P_{\eta_j^-} = \lambda_0 b^{k-1} + \sum_{i=1}^{s_1} (\lambda_{\eta_i^+} - \lambda_0 x_{\eta_i^+}^{k-1}) P_{\eta_i^+} + \sum_{j=1}^{s_2} (\lambda_{\eta_j^-} - \lambda_0 x_{\eta_j^-}^{k-1}) P_{\eta_j^-},$$

with $\lambda_0 + \sum_{i=1}^{s_1} (\lambda_{\eta_i^+} - \lambda_0 x_{\eta_i^+}^{k-1}) + \sum_{j=1}^{s_2} (\lambda_{\eta_j^-} - \lambda_0 x_{\eta_j^-}^{k-1}) = 1$, so $\bar{b}$ is again an affine combination. The optimal residual $b^k$ is therefore the projection of the origin onto the region of the affine space with vertices at the p columns and at the vector $P_\nu$, where

$$P_\nu = \frac{1}{1 - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1}} \Big(b^{k-1} - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} P_{\eta_i^+} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1} P_{\eta_j^-}\Big).$$

4.2 Subproblem Solution Using Interior Point Methods

In [11], Gonçalves, Storer and Gondzio solved subproblem (10) by checking all feasible solutions that satisfy its KKT conditions, of which there are $2^3 - 1$ possible cases. For the optimal adjustment algorithm for p coordinates, subproblem (8) has $2^{p+1} - 1$ possible solutions [9]. The strategy used in [11] is therefore inefficient even for moderate values of p, since the number of cases grows exponentially. Instead, subproblem (8) can be solved by interior point methods.

Consider subproblem (8). Its KKT conditions are given by:

$$A^t A \lambda + c \tau - \mu = 0, \quad \mu^t \lambda = 0, \quad c^t \lambda - 1 = 0, \quad \lambda \geq 0, \qquad (4.6)$$

where τ is a vector of free variables and μ ≥ 0. The vectors τ and μ are the Lagrange multipliers of the equality and inequality constraints, respectively, and $A^t A$ is a $(p+1) \times (p+1)$ matrix.

The path-following interior point method is used to solve system (4.6). At each iteration of the interior point method, the linear system (4.7) is solved to compute the search directions $(d_\lambda, d_\tau, d_\mu)$:

$$\begin{bmatrix} A^t A & c & -I \\ U & 0 & \Lambda \\ c^t & 0 & 0 \end{bmatrix} \begin{bmatrix} d_\lambda \\ d_\tau \\ d_\mu \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} \qquad (4.7)$$

where $U = \mathrm{diag}(\mu)$, $\Lambda = \mathrm{diag}(\lambda)$, $r_1 = \mu - c\tau - A^t A \lambda$, $r_2 = -\Lambda U e$, and $r_3 = 1 - c^t \lambda$.

The directions $d_\mu$, $d_\lambda$, and $d_\tau$ are given by:

$$d_\mu = \Lambda^{-1} r_2 - \Lambda^{-1} U d_\lambda,$$
$$d_\lambda = (A^t A + \Lambda^{-1} U)^{-1} r_4 - (A^t A + \Lambda^{-1} U)^{-1} c \, d_\tau,$$
$$c^t (A^t A + \Lambda^{-1} U)^{-1} c \, d_\tau = c^t (A^t A + \Lambda^{-1} U)^{-1} r_4 - r_3,$$

where $r_4 = r_1 + \Lambda^{-1} r_2$.

Let $l_1 = (A^t A + \Lambda^{-1} U)^{-1} c$ and $l_2 = (A^t A + \Lambda^{-1} U)^{-1} r_4$; then, computing the directions requires solving the linear systems $(A^t A + \Lambda^{-1} U) l_1 = c$ and $(A^t A + \Lambda^{-1} U) l_2 = r_4$.

The matrix $A^t A + \Lambda^{-1} U$ is a symmetric positive definite $(p+1) \times (p+1)$ matrix, so both systems can be solved with the same Cholesky factorization, as sketched below.
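A sketch of one step of this computation in Python with NumPy/SciPy follows (the paper’s implementation is in C). It forms $M = A^t A + \Lambda^{-1} U$ once, factorizes it, and reuses the factor for both right-hand sides; the centering term follows the perturbation $\mu^t \lambda / (p+1)^2$ mentioned in Subsection 5.1, and the exact form of the perturbed residual $r_2$ is an assumption consistent with system (4.7).

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newton_step(AtA, c, lam, tau, mu):
    # One path-following Newton step for the KKT system (4.6); sketch only.
    sigma = (mu @ lam) / lam.size ** 2    # perturbation mu^t lambda / (p+1)^2 (Subsec. 5.1)
    r1 = mu - c * tau - AtA @ lam         # residual of A^t A lambda + c tau - mu = 0
    r2 = sigma - lam * mu                 # perturbed complementarity residual (assumption)
    r3 = 1.0 - c @ lam                    # residual of c^t lambda = 1
    r4 = r1 + r2 / lam                    # r4 = r1 + Lambda^{-1} r2
    M = AtA + np.diag(mu / lam)           # A^t A + Lambda^{-1} U, symmetric positive definite
    F = cho_factor(M)                     # one Cholesky factorization ...
    l1 = cho_solve(F, c)                  # ... reused to solve M l1 = c
    l2 = cho_solve(F, r4)                 # ... and M l2 = r4
    d_tau = (c @ l2 - r3) / (c @ l1)      # from c^t M^{-1} c d_tau = c^t M^{-1} r4 - r3
    d_lam = l2 - l1 * d_tau               # d_lambda = M^{-1} r4 - M^{-1} c d_tau
    d_mu = (r2 - mu * d_lam) / lam        # d_mu = Lambda^{-1} r2 - Lambda^{-1} U d_lambda
    return d_lam, d_tau, d_mu
```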

4.3 Theoretical Properties of the Family of Algorithms

Theorem 3.1 in [11] ensures that the OPAA converges, in the worst case, at the same rate as von Neumann’s algorithm. This result can be extended to the optimal adjustment algorithm for p coordinates: increasing the value of p yields an algorithm that performs at least as well per iteration. This is shown in Theorem 4.1; only the second part is proved here.

Theorem 4.1. The residual $\|b^k\|$ after an iteration of the optimal adjustment algorithm for p coordinates is, in the worst case, equal to the residual after an iteration of von Neumann’s algorithm. Furthermore, suppose that $b_{p_1}^k$ is the residual after an iteration of the optimal adjustment algorithm for $p_1$ coordinates and $b_{p_2}^k$ is the residual after an iteration of the optimal adjustment algorithm for $p_2$ coordinates. If $p_1 \leq p_2 \leq n$, then $\|b_{p_1}^k\| \geq \|b_{p_2}^k\|$, where n is the number of columns of P.

Proof. Let $k \geq 1$ and let $b^{k-1}$ be the residual at the beginning of iteration k. For the algorithm prioritizing $p_2$ coordinates, let $\{P_{\eta_1^+}, \ldots, P_{\eta_{s_1}^+}\}$ and $\{P_{\eta_1^-}, \ldots, P_{\eta_{s_2}^-}\}$ be the sets of columns forming the largest and the smallest angles with $b^{k-1}$, respectively, where $s_1 + s_2 = p_2$. Likewise, for the algorithm prioritizing $p_1$ coordinates, let $\{P_{\eta_1^+}, \ldots, P_{\eta_{s_3}^+}\}$ and $\{P_{\eta_1^-}, \ldots, P_{\eta_{s_4}^-}\}$ be the corresponding sets, where $s_3 + s_4 = p_1$.

After the k-th iteration of the optimal adjustment algorithm for $p_2$ coordinates, the residual $b_{p_2}^k$ is

$$b_{p_2}^k = \bar{\lambda}_1 \Big(b^{k-1} - \sum_{i=1}^{s_1} x_{\eta_i^+}^{k-1} P_{\eta_i^+} - \sum_{j=1}^{s_2} x_{\eta_j^-}^{k-1} P_{\eta_j^-}\Big) + \sum_{i=1}^{s_1} \bar{\lambda}_{\eta_i^+} P_{\eta_i^+} + \sum_{j=1}^{s_2} \bar{\lambda}_{\eta_j^-} P_{\eta_j^-},$$

where $(\bar{\lambda}_1, \bar{\lambda}_{\eta_1^+}, \ldots, \bar{\lambda}_{\eta_{s_1}^+}, \bar{\lambda}_{\eta_1^-}, \ldots, \bar{\lambda}_{\eta_{s_2}^-})$ is the optimal solution of subproblem (8) prioritizing $p_2$ coordinates.

The optimal solution of subproblem (8) prioritizing $p_1$ coordinates,

$$(\tilde{\lambda}_1, \tilde{\lambda}_{\eta_1^+}, \ldots, \tilde{\lambda}_{\eta_{s_3}^+}, \tilde{\lambda}_{\eta_1^-}, \ldots, \tilde{\lambda}_{\eta_{s_4}^-}),$$

is also a feasible solution for the subproblem (8) when p 2 coordinates are prioritized.

Therefore,

$$\Big\| \tilde{\lambda}_1 \Big(b^{k-1} - \sum_{i=1}^{s_3} x_{\eta_i^+}^{k-1} P_{\eta_i^+} - \sum_{j=1}^{s_4} x_{\eta_j^-}^{k-1} P_{\eta_j^-}\Big) + \sum_{i=1}^{s_3} \tilde{\lambda}_{\eta_i^+} P_{\eta_i^+} + \sum_{j=1}^{s_4} \tilde{\lambda}_{\eta_j^-} P_{\eta_j^-} \Big\| = \|b_{p_1}^k\| \geq \|b_{p_2}^k\|,$$

where $b_{p_1}^k$ is the residual after an iteration of the optimal adjustment algorithm for $p_1$ coordinates. Consequently, the reduction of the residual after an iteration of the algorithm for $p_2$ coordinates is, in the worst case, equal to the reduction after an iteration of the algorithm for $p_1$ coordinates. □

This theorem does not ensure that one iteration of the family of algorithms is better than one iteration of von Neumann’s algorithm. The next subsection gives a sufficient condition for that to happen.

4.4 Sufficient Conditions for $\|b^k\| < \|b_\nu^k\|$

Let $b^k$ be the residual of the algorithm with p = 2 at iteration k, and let $P_{s^+}$ and $P_{s^-}$ be the columns forming the largest and smallest angles with the vector $b^{k-1}$. If the projection of the origin lies in the interior of the triangle $b^{k-1} P_{s^+} P_{s^-}$ and coincides with the projection of the origin onto the plane determined by $b^{k-1}$, $P_{s^+}$, and $P_{s^-}$, then $\|b^k\| < \|b_\nu^k\|$, where $b_\nu^k$ is the residual of von Neumann’s algorithm at iteration k. Indeed, this can be seen in Figure 5, noting that the triangle $0 b^k b_\nu^k$ has hypotenuse $0 b_\nu^k$ and leg $0 b^k$.

Figure 5:
Illustration of the Sufficient Condition.

Thus, we conclude that under these conditions the optimal adjustment algorithm for p coordinates performs better than von Neumann’s algorithm.

5 COMPUTATIONAL EXPERIMENTS

The algorithm with p = 2 is the same as the optimal pair adjustment algorithm. In particular, for p = 2 we could use the same strategy and enumerate all possible solutions of the subproblem, since there are only $2^3 - 1$ of them. However, this strategy becomes impractical for larger values of p, so the subproblem is solved by interior point methods for all values of p.

We performed two experiments. In the first one, the performance of the family of algorithms for moderate values of p is compared with the OPAA (p = 2). Moderate values of p were chosen because, for larger values, the cost of solving the subproblem at each iteration becomes noticeable.

The second experiment explores the fact that the optimal adjustment algorithm for p coordinates allows a dynamic choice of p based on the size of the linear problem to be solved. This approach was proposed in [8]; the heuristic was developed from numerical experiments and is given by

0 < (m + n) < 10000 → p = 4;
10000 < (m + n) < 20000 → p = 8;
20000 < (m + n) < 400000 → p = 20;
400000 < (m + n) < 600000 → p = 40;
(m + n) > 600000 → p = 80,

where m is the number of rows and n is the number of columns of the constraint matrix A of the linear problem.

For the second experiment, we propose a new heuristic, $p = nz(A)/\sqrt{mn}$, where nz(A) is the number of nonzero entries of A. We compare this new heuristic with the heuristic proposed in [8].
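Both heuristics can be sketched as below. The size cutoffs follow the rule displayed above; the rounding in the new heuristic is an illustrative assumption, since only the formula $p = nz(A)/\sqrt{mn}$ is given.

```python
import math

def p_heuristic_H(m, n):
    # Dynamic choice of p by problem size (heuristic of [8]).
    size = m + n
    if size < 10000:
        return 4
    if size < 20000:
        return 8
    if size < 400000:
        return 20
    if size < 600000:
        return 40
    return 80

def p_heuristic_NH(m, n, nnz):
    # New heuristic based on the nonzero count of A; rounding to the
    # nearest integer (with a floor of 1) is an assumption.
    return max(1, round(nnz / math.sqrt(m * n)))
```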

For all experiments, a collection of 151 linear programming problems is used. The problems are divided into 95 Netlib problems [3], 16 Kennington problems [2], and 40 other problems supplied by Gonçalves [11]. The experiments were performed on an Intel Core 2 Quad Q9550 2.83 GHz machine with 4 GB of RAM, running Linux, using the gcc compiler.

5.1 Implementation Details

The family of algorithms is implemented in C, using the problem format given in [10], Subsection 2.5.1. The matrix P is not built explicitly; more precisely, it is divided into two blocks, and only one of these blocks is considered in Step 1 of each iteration to find the $s_1$ and $s_2$ columns forming the largest and smallest angles with the residual $b^k$, respectively.

To solve the subproblem, the classical path-following interior point method was implemented in C. The perturbation of the complementarity is $\mu^t \lambda / (p+1)^2$.

The initial point has all coordinates equal to one, and the tolerance is $10^{-12}$; since the linear equation $e^t x = 1$ of problem (2.1) makes each component $x_j$ small, the method may not work properly if the solution is not computed with good precision.

5.2 Experiment Design

The first experiment was performed following the steps presented by Gonçalves, Storer and Gondzio in [11]:

  1. Initially, von Neumann’s algorithm is run on all problems;

  2. Next, when the relative difference between $\|b^{k-1}\|$ and $\|b^k\|$ becomes less than 0.5%, the time t1 (CPU seconds) and the number of iterations up to t1 are recorded;

  3. Additionally, the times t2, t3, t4, and t5 (CPU seconds), corresponding respectively to 3, 5, 10, and 20 times the number of iterations in t1, are also recorded;

  4. Next, the optimal adjustment algorithm for p coordinates, with p = 2, p = 4, p = 10, p = 20, p = 40, and p = 100, is run on the test problems;

  5. Finally, at each time ti, i = 1, …, 5, the residual $\|b^k\|$ is recorded.

In the second experiment, both approaches use the termination criteria described in [8]. That is, the experiment stops when the algorithm exceeds its allotted maximum number of iterations (100) or when the relative error of the residual norm is smaller than a tolerance of $10^{-4}$. The most successful approach is the one with the smallest residual $\|b^k\|$.

Table 1 shows the problems and the results for time t5, which was used in the performance profile. Time t5 is used because it gives the algorithms the longest running time.

Table 1:
Test problems and time t5

5.3 Computational Results

Table 2 shows the relative gain of each algorithm over all problems at the five different times, i.e., the percentage of problems on which the algorithm obtained the lowest value of the residual $\|b^k\|$ at times t1 up to t5.

Table 2:
Percentage of problems on which each algorithm achieved the lowest residual, at five different times

According to Table 2, the optimal adjustment algorithm with p = 100 and with p = 4 coordinates obtained the lowest residual on more problems than the OPAA (p = 2) at all five times, whereas for p = 20 and p = 40 the performance was worse. In particular, the optimal adjustment algorithm with p = 100 coordinates had the best performance.

The performance of the algorithms was also analyzed using performance profiles [6]. The distance of the residual $\|b^k\|$ to the origin was used to measure performance. In these graphs, ρ(1) represents the algorithm’s total gain, and the best-performing algorithms are the ones whose curves lie above the others.
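For reference, a performance profile over final residuals can be computed as in the sketch below, where `residuals` is assumed to be an array with one row per problem and one column per algorithm, and ρ(τ) is the fraction of problems on which an algorithm is within a factor τ of the best residual.

```python
import numpy as np

def performance_profile(residuals, taus):
    # residuals: (num_problems, num_algorithms) array of final ||b^k|| values.
    # Returns rho of shape (len(taus), num_algorithms):
    # rho[t, a] = fraction of problems where algorithm a is within taus[t] of the best.
    best = residuals.min(axis=1, keepdims=True)   # best residual per problem
    ratios = residuals / best                     # performance ratios >= 1
    taus = np.asarray(taus).reshape(-1, 1, 1)
    return (ratios[None, :, :] <= taus).mean(axis=1)
```

In particular, evaluating rho at τ = 1 recovers the ρ(1) total-gain value mentioned above.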

In Figure 6, the performance profiles of the six algorithms at time t5 are compared. This figure shows that the optimal adjustment algorithm for p coordinates is more efficient than the OPAA (p = 2) for p = 100, p = 10, and p = 4. However, it loses in performance for p = 20 and p = 40. Table 2 shows the performance of the algorithms in the t5 column. In terms of robustness, the behaviour is the same for all values of p.

Figure 6:
Performance profile of six algorithms in t5 time.

Figures 7 to 11 show the performance profiles comparing each family algorithm, for p = 100, p = 40, p = 20, p = 10, and p = 4, with the OPAA at time t5. The five figures show that the five family algorithms are more efficient and robust. The highest efficiency was achieved by the algorithm with p = 4 (88%), followed by the algorithms with p = 10 (84%), p = 20 (74%), p = 40 (60%), and p = 100 (57%). Thus, at time t5, the family algorithms achieved higher efficiency than the OPAA. As for robustness, the five figures show that the curves of the five family algorithms lie above the curve of the OPAA, demonstrating their strength.

Figure 7:
Performance profile of the algorithms with p = 2 and p = 100 in time t5.

Figure 8:
Performance profile of the algorithms with p = 2 and p = 40 in time t5.

Figure 9:
Performance profile of the algorithms with p = 2 and p = 20 in time t5.

Figure 10:
Performance profile of the algorithms with p = 2 and p = 10 in time t5.

Figure 11:
Performance profile of the algorithms with p = 2 and p = 4 in time t5.

Thus, the performance profiles show that the family of algorithms has good performance for moderate p values.

The results obtained in the second experiment are presented in Table 3. The best-performing approach is the one given by the heuristic $p = nz(A)/\sqrt{mn}$: it achieves a smaller number of iterations in 52% of the 151 problems tested, a larger number of iterations in 35% of the problems, and the same number of iterations in 13% of them. Table 3 presents the value of p and the number of iterations (it) for the approach in [8] (H) and for the new heuristic (NH).

Table 3:
Heuristic: H × NH

6 CONCLUSIONS

The family of simple algorithms arose from the generalization of the OPAA. The major advantages of this family of algorithms are its simplicity and fast initial convergence. This paper presented a comparison between the family of simple algorithms for linear programming and the OPAA. In addition, it was proved that the algorithm with p = 1 is equivalent to von Neumann’s algorithm, and a sufficient condition was given under which the family of algorithms performs better than von Neumann’s algorithm.

The first experiment was performed in a framework similar to that reported by Gonçalves, Storer and Gondzio in [11], and the second in a framework similar to that reported in [9]. In the first experiment, the computational results show the superiority of the family of algorithms for moderate values of p in comparison with the OPAA; the performance profile graphs indicate that the family of algorithms is significantly more efficient and robust. In the second experiment, the new heuristic $p = nz(A)/\sqrt{mn}$ achieves better results than the approach given in [9].

Despite the improvements over the OPAA, the family of algorithms is not practical for solving linear programming problems to optimality. However, it can be useful in some instances, such as improving the starting point for interior point methods, as in [8], or working in combination with interior point methods by exploiting its fast initial convergence, as reported in [9].

Nevertheless, further research is needed to measure the impact that the family of algorithms can have in this direction.

ACKNOWLEDGEMENT

This work was partially funded by the São Paulo Research Foundation (FAPESP) and the Brazilian National Council for Scientific and Technological Development (CNPq).

The authors are grateful to the referees for their valuable suggestions which improved this article.

REFERENCES

  • 1
    A. Beck & M. Teboulle. A Conditional Gradient Method with Linear Rate of Convergence for Solving Convex Linear Systems. Mathematical Methods of Operations Research, (2004).
  • 2
    W. Carolan, J. Hill, J. Kennington, S. Niemi & S. Wichmann. An empirical evaluation of the KORBX algorithms for military airlift applications. Oper. Res, 38 (1990), 240-248.
  • 3
    NETLIB LP test set collection. NETLIB LP repository. Online at http://www.netlib.org/lp/data
  • 4
    G.B. Dantzig. Converting a converging algorithm into a polynomially bounded algorithm. Technical report, Stanford University (1991).
  • 5
    G.B. Dantzig. An ε-precise feasible solution to a linear program with a convexity constraint in 1/ε² iterations independent of problem size. Technical report, Stanford University (1992).
  • 6
    E. Dolan & J. Moré. Benchmarking optimization software with performance profiles. Mathematical Programming, 91 (2002), 201-213.
  • 7
    M. Epelman & R.M. Freund. Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system. Mathematical Programming, 88 (2000), 451-485.
  • 8
    C.T.L.S. Ghidini, A.R.L. Oliveira & J. Silva. Optimal Adjustment Algorithm for p Coordinates and the Starting Point in Interior Point Methods. American Journal of Operations Research, (2011).
  • 9
    C.T.L.S. Ghidini, A.R.L. Oliveira, J. Silva & M. Velazco. Combining a Hybrid Preconditioner and an Optimal Adjustment Algorithm to Accelerate the Convergence of Interior Point Methods. Linear Algebra and its Applications, 436 (2012), 1-18.
  • 10
    J.P.M. Gonçalves. “A Family of Linear Programming Algorithms Based on the Von Neumann Algorithm”. Ph.D. thesis, Lehigh University, Bethlehem (2004).
  • 11
    J.P.M. Gonçalves, R.H. Storer & J. Gondzio. A Family of Linear Programming Algorithms Based on an Algorithm by von Neumann. Optimization Methods and Software, (2009).
  • 12
    J. Silva. “Uma Família de Algoritmos para Programação Linear Baseada no Algoritmo de Von Neumann”. Ph.D. thesis, IMECC - UNICAMP, Campinas SP (2009). In Portuguese.

Publication Dates

  • Publication in this collection
    May-Aug 2018

History

  • Received
    28 July 2017
  • Accepted
    10 Mar 2018