Linear programming is a widely used tool for computing optimal solutions to problems involving the allocation of scarce resources. A classic reference is Linear Programming, V. Chvátal (W. H. Freeman and Company, New York, 1983), and a recently updated reference is Linear Programming: Foundations and Extensions, R. J. Vanderbei (Kluwer Academic Publishers, Boston, 2001).

A linear-programming (LP) problem is one of maximizing or minimizing a linear function subject to linear equality and linear inequality constraints. A general form of an LP problem is

Maximize    c_1 * x_1 + c_2 * x_2 + ... + c_n * x_n
Subject To
    a_11 * x_1 + a_12 * x_2 + ... + a_1n * x_n <= b_1
    a_21 * x_1 + a_22 * x_2 + ... + a_2n * x_n <= b_2
             .
             .
             .
    a_m1 * x_1 + a_m2 * x_2 + ... + a_mn * x_n <= b_m
    l_1 <= x_1 <= u_1
    l_2 <= x_2 <= u_2
             .
             .
             .
    l_n <= x_n <= u_n

where $x_1, x_2, \ldots, x_n$ are the variables and the remaining elements are input data. Each input datum can be any (rational) number; each of $l_1, l_2, \ldots, l_n$ can also be assigned the value $-\infty$, and each of $u_1, u_2, \ldots, u_n$ can also be assigned the value $+\infty$. The ``Maximize'' can alternatively be ``Minimize'', and each ``$\leq$'' relation can alternatively be an ``$=$'' or a ``$\geq$'' relation.

There is common terminology associated with the parts of an LP problem. The linear function

$c_1x_1 + c_2x_2 + \cdots + c_nx_n$

that we seek to maximize (or minimize) is called the objective function, and $c_1, c_2, \ldots, c_n$ are called the objective coefficients. The inequalities
$a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n \leq b_i$ (for $i = 1, \ldots , m)$
are referred to as the constraints, and the values $b_1, b_2, \ldots , b_m$ are called the right-hand-side values. The values $l_1, l_2, \ldots , l_n$ are called lower bounds, and the values $u_1, u_2, \ldots , u_n$ are called upper bounds. Note that by setting $l_i = -\infty$ and $u_i = +\infty$, we allow the variable $x_i$ to have no explicit bounds; such a variable is called free.

It is often convenient to express an LP problem in matrix notation. To do this, we define a matrix

A =
      a_11 a_12 ... a_1n
      a_21 a_22 ... a_2n
              .
              .
              .
      a_m1 a_m2 ... a_mn

and vectors

 x = (x_1, x_2, ..., x_n)
 c = (c_1, c_2, ..., c_n)
 l = (l_1, l_2, ..., l_n)
 u = (u_1, u_2, ..., u_n)
 b = (b_1, b_2, ..., b_m)

The general LP problem can then be formulated as

Maximize
    c^T x
Subject To
    Ax <= b
    l <= x <= u

where $c^T$ is the transpose of the $c$-vector. The matrix $A$ is called the constraint matrix. We will sometimes refer to the $i$th constraint as the $i$th ``row'' and refer to the $j$th variable as the $j$th ``column''.
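As an illustration of this matrix notation, the following Python sketch (using NumPy, with small made-up data that is not part of the running example) builds $A$, $b$, $c$, $l$, and $u$, then checks whether a candidate point $x$ satisfies $Ax \leq b$ and $l \leq x \leq u$ and evaluates its objective value $c^T x$:

```python
import numpy as np

# Hypothetical input data for an LP with m = 2 constraints and n = 3 variables.
A = np.array([[3.0, 2.0, 1.0],
              [1.0, 4.0, 2.0]])   # constraint matrix (m x n)
b = np.array([10.0, 8.0])         # right-hand-side values (length m)
c = np.array([5.0, 4.0, 3.0])     # objective coefficients (length n)
l = np.zeros(3)                   # lower bounds l_j = 0
u = np.full(3, np.inf)            # upper bounds u_j = +infinity

x = np.array([1.0, 1.0, 1.0])     # an arbitrary candidate point

# Feasibility: every row constraint and every bound must hold.
feasible = np.all(A @ x <= b) and np.all((l <= x) & (x <= u))
objective = c @ x                 # the value c^T x

print(feasible, objective)
```

This only evaluates a given point; finding the feasible point that maximizes $c^T x$ is the job of an LP solver.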

As an example of an LP problem, consider the following formulation with 3 variables ($n = 3$) and 2 constraints ($m = 2$).

Maximize 3.0x1 + 2.0x2 + 4.0x3
Subject To
   3.1x1 + 2.3x2 + 1.4x3 <= 12.2
   5.0x1 + 1.1x2 = 10.0
   0.0 <= x1 <= +infty
   -infty <= x2 <= +infty
   0.0 <= x3 <= 10.0

In describing such an LP problem, it is standard practice to assume that any variable $x_i$ whose bounds are not explicitly given in the model has lower bound $l_i = 0$ and upper bound $u_i = +\infty$. So this example could be expressed as

Maximize 3.0x1 + 2.0x2 + 4.0x3
Subject To 
   3.1x1 + 2.3x2 + 1.4x3 <= 12.2
   5.0x1 + 1.1x2 = 10.0
   x2 free 
   x3 <= 10.0

where (as we mentioned above) ``free'' is the shorthand way of expressing the fact that $x_2$ has no explicit bounds.
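If SciPy happens to be available, this three-variable example can be solved numerically with `scipy.optimize.linprog`. Note that `linprog` minimizes, so the objective coefficients are negated to perform a maximization; this sketch is only an illustration, not part of the formulation above.

```python
from scipy.optimize import linprog

# linprog minimizes, so negate the coefficients to maximize 3.0x1 + 2.0x2 + 4.0x3.
c = [-3.0, -2.0, -4.0]

# Inequality constraint: 3.1x1 + 2.3x2 + 1.4x3 <= 12.2
A_ub = [[3.1, 2.3, 1.4]]
b_ub = [12.2]

# Equality constraint: 5.0x1 + 1.1x2 = 10.0 (x3 has coefficient 0 here).
A_eq = [[5.0, 1.1, 0.0]]
b_eq = [10.0]

# Bounds: x1 >= 0, x2 free, 0 <= x3 <= 10.
bounds = [(0, None), (None, None), (0, 10)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, -res.fun)  # optimal point and maximized objective value
```

Working the example by hand (eliminating $x_2$ via the equality constraint) puts the optimum near $x = (3.09, -4.94, 10.0)$ with objective value about $39.37$, which the solver should reproduce.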