Mathematical programming and Simplex method

Introduction

Mathematical Programming

Mathematical programming techniques can be applied to problems where one wants to minimize (or maximize) a function, called the objective function, subject to a set of constraints. A solution is said to be feasible if it satisfies the constraints. A solution is said to be optimal if it is feasible and yields the smallest (or largest) value of the objective function among all feasible solutions. A problem may have more than one solution that yields the optimal value.
 

Linear Programming

In linear programming, both the objective function and the constraints are linear and so a linear programming (LP) problem can be formulated in the following form:
'minimize'    c'x

subject to:
              lb(i) <= A(i,*)x <= ub(i)  for i = 1,..., m

              lc(j) <= x(j) <= uc(j)   for j = 1,..., n
where:
  • x is a column vector of the problem variables, x(j). It has length n.
  • c is the column vector of coefficients of the objective function, c(j). It has length n.
  • A is the coefficient matrix of the constraints. It has dimension m * n.
  • lb(i) is the lower bound for the row activity A(i,*)'x.
  • ub(i) is the upper bound for the row activity A(i,*)'x.
  • lc(j) is the lower bound for the variable x(j).
  • uc(j) is the upper bound for the variable x(j).
    Another way of defining a linear programming problem is the following:
    'minimize'    c'x
    
    subject to:
                  A(i,*)x sgn b(i)  for i = 1,..., m
    
                  lc(j) <= x(j) <= uc(j)   for j = 1,..., n
    where:
  • sgn is either =, <= or >=.
  • b(i) is the right-hand side for the row activity A(i,*)'x. It has length m.
  • x is a column vector of the problem variables, x(j). It has length n.
  • c is the column vector of coefficients of the objective function. It has length n.
  • A is the coefficient matrix of the constraints. It has dimension m * n.
  • lc(j) is the lower bound for the variable x(j).
  • uc(j) is the upper bound for the variable x(j).
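    As a small illustration of this second form, the sketch below sets up a problem with one <= row, one = row, and simple variable bounds, and solves it with the general-purpose routine scipy.optimize.linprog (not part of OSL; the data are made up):

        import numpy as np
        from scipy.optimize import linprog

        c = np.array([2.0, 3.0])                 # objective coefficients, length n = 2

        A_ub = np.array([[1.0, 1.0]])            # one row with sgn "<=":  x(1) + x(2) <= 10
        b_ub = np.array([10.0])

        A_eq = np.array([[1.0, -1.0]])           # one row with sgn "=":   x(1) - x(2) = 2
        b_eq = np.array([2.0])

        bounds = [(0.0, 8.0), (0.0, 8.0)]        # lc(j) <= x(j) <= uc(j)

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        print(res.x, res.fun)                    # optimal activities and objective value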
    To deal with the problem in a form that is easier to manipulate, logical variables can be added to turn inequalities into equalities, and variable substitution can be performed to eliminate negative variables. The result is a more textbook-oriented form (a small sketch of this conversion follows the definitions below):
    'minimize'    c'x
    
    subject to:
                  Ax = b
    
                  x >= 0
    where:
  • b is the column vector of right-hand sides. It has length m.
  • x is a column vector of the problem variables, x(j). It has length n.
  • c is the column vector of coefficients of the objective function. It has length n.
  • A is the coefficient matrix of the constraints. It has dimension m * n.
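    The conversion just described can be sketched in a few lines. The helper below is an illustrative, hypothetical function, not OSL code: it appends one nonnegative logical (slack) variable per <= row so that every row becomes an equality, and omits the substitution step for free or negative variables:

        import numpy as np

        def to_standard_form(c, A_ub, b_ub, A_eq, b_eq):
            """Append one logical (slack) variable per <= row so that all
            rows become equalities:  [A_ub  I; A_eq  0] x = [b_ub; b_eq]."""
            m_ub, n = A_ub.shape
            m_eq = A_eq.shape[0]
            A = np.block([[A_ub, np.eye(m_ub)],
                          [A_eq, np.zeros((m_eq, m_ub))]])
            b = np.concatenate([b_ub, b_eq])
            c_std = np.concatenate([c, np.zeros(m_ub)])   # logical variables cost nothing
            return c_std, A, b

        # Reusing the small example above:
        c = np.array([2.0, 3.0])
        A_ub = np.array([[1.0, 1.0]]); b_ub = np.array([10.0])
        A_eq = np.array([[1.0, -1.0]]); b_eq = np.array([2.0])
        c_std, A, b = to_standard_form(c, A_ub, b_ub, A_eq, b_eq)
        print(A)     # [[ 1.  1.  1.]
                     #  [ 1. -1.  0.]]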
    Geometrically, the constraints can be thought of as defining an (n - m)-dimensional polyhedron in an n-dimensional space. This is known as the feasible region. The goal is to find a point of the polyhedron that minimizes the objective function; such a point is called an optimal solution. If no point satisfies the constraints, the problem is infeasible; if the objective function can be made arbitrarily small over the feasible region, the problem is unbounded.

    The above problem is known as the primal problem. There is a related problem, called the dual of the primal problem, shown below. Any feasible solution of the dual provides a lower bound on the optimal value of the primal problem, and when the primal problem has a finite optimal value, it equals the optimal value of the dual problem.

    'maximize' y'b
    subject to:
               y'A <= c
               y unrestricted
    where:
  • y is a column vector of the dual problem variables. It has length m.
  • b is the column vector of (primal) right-hand sides (dual objective). It has length m.
  • c is the column vector of coefficients of the (primal) objective function (dual right hand side). It has length n.
  • A is the coefficient matrix of the constraints. It has dimension m * n.
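    Strong duality can be checked numerically on a small instance. The sketch below (again using scipy.optimize.linprog rather than OSL, with made-up data) solves a standard-form primal and the dual written above and compares the two optimal values:

        import numpy as np
        from scipy.optimize import linprog

        # Primal:  minimize c'x  subject to  Ax = b, x >= 0
        A = np.array([[1.0,  1.0, 1.0],
                      [1.0, -1.0, 0.0]])
        b = np.array([10.0, 2.0])
        c = np.array([2.0, 3.0, 0.0])
        primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")

        # Dual:  maximize y'b  subject to  y'A <= c, y unrestricted
        # (linprog minimizes, so minimize -b'y instead)
        dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2,
                       method="highs")

        print(primal.fun, -dual.fun)             # the two optimal values agree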
    Several approaches for solving linear programming problems are discussed below.

    Linear Programming: Simplex Method

    The simplex method for solving linear programming problems was developed by Dantzig in 1947. It is an iterative procedure for finding an optimal solution. In 1975, the Nobel Prize in Economics was awarded to the mathematician L. V. Kantorovich for his contributions to the application of linear programming to the problem of allocating scarce resources.
    It can be shown that if an optimum exists, there is an optimal solution among the vertices of the polyhedron defining the feasible region. Any solution that is a vertex of the polyhedron defining the feasible region is called a basic feasible solution (BFS).

    The primal simplex method searches for an optimal solution among the BFSs. A BFS corresponds to using m columns of the coefficient matrix A as a basis for the vector space spanned by the columns of A. Given an initial BFS, a new BFS is computed by choosing one of the remaining (n - m) columns to enter the basis and one of the current m basic columns to leave the basis. The entering column is chosen so the value of the primal objective function decreases, or at least does not increase, when this column enters the basis; the exiting column is chosen so that the new BFS is still primal feasible.

    In the primal simplex algorithm, you begin with a primal feasible solution which may be dual infeasible. At each step the primal objective function is reduced by introducing variables into the basis corresponding to infeasibilities in the dual problem. An optimum BFS is detected when there are no remaining dual infeasibilities.
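    A minimal textbook-style sketch of the revised primal simplex iteration for the standard form (min c'x, Ax = b, x >= 0) is shown below. It is not the OSL implementation: it uses plain Bland's rule, has no factorization updates or degeneracy safeguards, and assumes an initial feasible basis is supplied.

        import numpy as np

        def primal_simplex(A, b, c, basis, max_iter=100):
            """Revised primal simplex for  min c'x  s.t.  Ax = b, x >= 0.
            'basis' is a list of m column indices giving an initial BFS."""
            m, n = A.shape
            for _ in range(max_iter):
                B = A[:, basis]
                x_B = np.linalg.solve(B, b)              # current basic solution
                y = np.linalg.solve(B.T, c[basis])       # simplex multipliers
                reduced = c - A.T @ y                    # reduced costs
                entering = [j for j in range(n)
                            if j not in basis and reduced[j] < -1e-10]
                if not entering:                         # no dual infeasibility left
                    x = np.zeros(n)
                    x[basis] = x_B
                    return x, c @ x
                j = min(entering)                        # Bland's rule: entering column
                d = np.linalg.solve(B, A[:, j])          # change in the basic variables
                ratios = [(x_B[i] / d[i], i) for i in range(m) if d[i] > 1e-10]
                if not ratios:
                    raise ValueError("problem is unbounded")
                _, leave = min(ratios)                   # ratio test: leaving column
                basis[leave] = j
            raise ValueError("iteration limit reached")

        # Small example; the slack columns 2 and 3 give the starting basis:
        A = np.array([[1.0, 2.0, 1.0, 0.0],
                      [3.0, 1.0, 0.0, 1.0]])
        b = np.array([4.0, 6.0])
        c = np.array([-3.0, -2.0, 0.0, 0.0])
        print(primal_simplex(A, b, c, basis=[2, 3]))     # x = (1.6, 1.2, 0, 0), value -7.2

    Bland's rule is chosen here only because it is simple and cannot cycle; practical codes use more elaborate pricing and ratio-test strategies.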

    In the dual simplex algorithm, you begin with a dual feasible solution which may be primal infeasible. At each step, the dual objective function is increased by removing variables from the basis corresponding to infeasibilities in the primal problem. An optimum BFS is detected when there are no remaining primal infeasibilities.
     
     

    Linear Programming: Interior-Point Barrier Method

    Interior-point methods for linear programming problems have only been shown to be computationally efficient in recent years. Interior-point methods are also iterative techniques for finding an optimal solution.

    While the simplex algorithm searches for the optimal solution among the vertices of the feasible polyhedron, interior-point methods search for solutions in the interior of the feasible region. OSL contains three interior-point methods that are types of the logarithmic barrier method. These are the primal barrier method, the primal-dual barrier method, and the primal-dual method with the predictor-corrector feature. The latter will often be referred to simply as the predictor-corrector method. The primal barrier method takes a primal linear programming problem and solves a sequence of subproblems of the form:

    'minimize'  F(x;mu)  =  c'x - mu * SUM(j) ln x(j)
    
    subject to:
                 Ax = b
    
                 mu > 0.
    Here c, x, A, and b are as defined above. The scalar mu is known as the barrier parameter and is specified for each subproblem. The equality constraints are handled directly. Each function F(x;mu) is then approximately optimized to obtain a solution x(mu). If x* is an optimal solution to the primal linear programming problem, then the sequence of approximate solutions x(mu) converges to x* as mu approaches 0.
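    The idea can be illustrated with general-purpose routines. This is only a rough sketch with made-up data: it uses scipy.optimize.minimize with the SLSQP solver rather than the specialized OSL barrier code, and small lower bounds are imposed purely to keep the logarithms defined:

        import numpy as np
        from scipy.optimize import minimize

        # Tiny primal problem:  minimize x(1) + 2*x(2)  s.t.  x(1) + x(2) = 1, x >= 0
        c = np.array([1.0, 2.0])
        A = np.array([[1.0, 1.0]])
        b = np.array([1.0])

        def F(x, mu):
            return c @ x - mu * np.sum(np.log(x))        # barrier objective F(x;mu)

        x = np.array([0.5, 0.5])                         # strictly interior starting point
        for mu in [1.0, 0.1, 0.01, 0.001]:
            res = minimize(F, x, args=(mu,), method="SLSQP",
                           bounds=[(1e-9, None)] * 2,
                           constraints=[{"type": "eq", "fun": lambda x: A @ x - b}])
            x = res.x
            print(mu, x)                                 # x(mu) drifts toward x* = (1, 0)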

    Given an initial feasible interior solution, a new solution is computed by choosing a direction to travel from the initial point and a distance to travel along that direction. The direction and distance are chosen so that the value of the objective function decreases along the step and so that the step taken is relatively long. Primal-dual barrier methods consider not only the primal problem but the dual as well. The dual problem can be written as:

    'maximize' b'y
    subject to:
               A'y + z = c
               z >= 0
               y unrestricted
    Using a barrier parameter mu, this problem may be considered as a sequence of subproblems of the form:
    'maximize'    b'y  + mu * SUM(j) ln z(j)
    subject to:
                  A'y + z = c
                   mu > 0
    Assuming the same sequence of mu values for the primal and dual problems, the necessary conditions for optimality of each subproblem are:
               Ax = b
               A'y + z = c
               XZe = mu e
    where e is a vector of 1's and X and Z are diagonal matrices with the elements of x and z, respectively, on their diagonals. Note that this product of variables makes the subproblem nonlinear. It may be solved (or partially solved) by the classical approach of Newton's method for each value of mu considered. In practice, only one Newton iteration is performed for each value of mu. When x, y, and z are feasible, they approach primal and dual optimal solutions as mu approaches zero. The fact that the primal and dual problems must have identical optimal objective values provides a check on convergence.
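    One Newton iteration on these conditions can be written out directly. The sketch below is purely illustrative: it builds the full linear system densely, uses a simple step length that keeps x and z strictly positive, and is not the OSL predictor-corrector code:

        import numpy as np

        def newton_step(A, b, c, x, y, z, mu):
            """One Newton iteration on  Ax = b,  A'y + z = c,  XZe = mu*e."""
            m, n = A.shape
            X, Z = np.diag(x), np.diag(z)
            e = np.ones(n)
            K = np.block([                               # linearized conditions in (dx, dy, dz)
                [A,                np.zeros((m, m)), np.zeros((m, n))],
                [np.zeros((n, n)), A.T,              np.eye(n)       ],
                [Z,                np.zeros((n, m)), X               ],
            ])
            r = np.concatenate([b - A @ x,
                                c - A.T @ y - z,
                                mu * e - X @ Z @ e])
            d = np.linalg.solve(K, r)
            dx, dy, dz = d[:n], d[n:n + m], d[n + m:]
            alpha = 1.0                                  # step length: keep x and z positive
            for v, dv in ((x, dx), (z, dz)):
                neg = dv < 0
                if np.any(neg):
                    alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
            return x + alpha * dx, y + alpha * dy, z + alpha * dz

        # Tiny example:  minimize x(1) + 2*x(2)  s.t.  x(1) + x(2) = 1, x >= 0
        A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
        x, y, z = np.array([0.5, 0.5]), np.array([0.0]), np.array([1.0, 2.0])
        for mu in [0.1, 0.01, 0.001, 0.0001]:
            x, y, z = newton_step(A, b, c, x, y, z, mu)
        print(x)                                         # moves toward the optimal vertex (1, 0)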
     

    Linear Programming: Sensitivity Analysis

    Sensitivity analysis determines the effect that changes in objective function coefficients (costs) or the row and column bounds (for example, limits on availability) have on the solution. Suppose an oil company blends gasoline using a few types of oil out of the many available. There may be many choices of types of oil that give the desired results, but one mixture optimizes the objective function. Solving the LP gives the optimal answer, that is, how much of which types of oil should be used to give gasoline meeting the requirements at the minimum cost.
    After obtaining the optimal solution you may want to know how sensitive the choices of oil types are to the coefficients in the objective function. These coefficients might represent the current selling prices of gasoline and the purchase prices of oil. You may also want to know how sensitive the solution is to the upper and lower bounds on rows and columns. These bounds may represent availability. Sensitivity analysis determines how much these factors have to change to cause the optimal solution to occur at a different set of activity values. This is essentially asking: What changes would cause the solution to occur with a different basis? This is what is meant by sensitivity analysis.
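    OSL reports sensitivity information directly, but part of the idea can be seen from the dual values (shadow prices) returned by any LP solver. The sketch below uses scipy.optimize.linprog (a recent version with the HiGHS methods) on a made-up blending problem; each dual value estimates how much the optimal cost changes per unit change in the corresponding row bound, while full ranging (how far a cost or bound can move before the basis changes) is what sensitivity analysis itself reports:

        import numpy as np
        from scipy.optimize import linprog

        # Toy blending problem: minimize 20*x(1) + 25*x(2)
        # subject to  x(1) + x(2) >= 10  (demand)  and  x(1) <= 8  (availability of type 1)
        c = np.array([20.0, 25.0])
        A_ub = np.array([[-1.0, -1.0],
                         [ 1.0,  0.0]])
        b_ub = np.array([-10.0, 8.0])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
        print("activities :", res.x)                     # optimal blend
        print("dual values:", res.ineqlin.marginals)     # change in cost per unit change in each row bound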

    Linear Programming: Parametrics

    Parametric analysis is similar to sensitivity analysis, but instead of asking what changes in a single cost or bound are necessary to cause the solution to occur at a different basis, a range of values is specified for the objective function coefficients and bounds. OSL then determines what happens to the solution as the objective function coefficients and bounds are varied simultaneously through their specified ranges.

    Again, think of the oil company that has an optimal solution for blending gasoline from crude oils. Instead of just answering how much of a change in the assumptions is required to cause a basis change, parametric analysis gives the optimal solution for an entire range of assumptions. For example, assume that when the oil company LP was solved, type 1 crude oil cost $20 per barrel. Sensitivity analysis might say that the basis would change if the price of type 1 crude oil rose to $23 per barrel or fell to $18 per barrel. Parametric analysis, however, can answer the question: "What is the optimal solution for every price of type 1 crude oil between $10 and $20 in $2 increments?" In addition, the parametric algorithm reports on all the solutions at basis changes between $10 and $20.
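    A crude way to mimic this on a small example is simply to re-solve the LP over a grid of cost values and report where the basis changes, as the sketch below does with made-up data loosely following the crude-oil story (again using scipy.optimize.linprog rather than the OSL parametric algorithm, which does this far more efficiently):

        import numpy as np
        from scipy.optimize import linprog

        A_ub = np.array([[-1.0, -1.0],                   # blend at least 10 units in total
                         [ 1.0,  0.0]])                  # at most 8 units of type 1
        b_ub = np.array([-10.0, 8.0])
        cost_type2 = 15.0

        previous = None
        for price in np.arange(10.0, 22.0, 2.0):         # sweep the type-1 price $10..$20
            c = np.array([price, cost_type2])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
                          method="highs")
            basis = tuple(np.flatnonzero(res.x > 1e-8))
            note = "  <- basis change" if previous is not None and basis != previous else ""
            print(f"price {price:4.1f}: x = {res.x}, cost = {res.fun:6.1f}{note}")
            previous = basis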

    It should be noted that parametric analysis usually requires the solution of many LP problems, and so it may take considerably longer to run than sensitivity analysis.


