A breakthrough in solution methods for the traveling salesman problem (TSP) came in 1954, when George Dantzig, Ray Fulkerson, and Selmer Johnson [2] published a description of a method for solving the TSP and illustrated the power of this method by solving an instance with 49 cities, an impressive size at that time. Riding the wave of excitement over the numerous applications of the simplex method (designed by George Dantzig in 1947), they attacked the salesman with linear programming as follows.
Each TSP instance with $n$ cities can be specified as a vector of length $n(n-1)/2$ (whose components, indexed by the edges of the complete graph, specify the costs) and each tour through the $n$ cities can be represented by its incidence vector of length $n(n-1)/2$ (with each component set at 1 if the corresponding edge is a part of the tour, and set at 0 otherwise); if $c$ denotes the cost vector (thought of as a row vector) and if $\mathcal S$ denotes the set of the incidence vectors (thought of as column vectors) of all the tours, then the problem is to
\[
\text{minimize } cx \quad\text{subject to}\quad x\in\mathcal S. \qquad (0.1)
\]
In place of (0.1), Dantzig, Fulkerson, and Johnson worked with linear programming problems of the form
\[
\text{minimize } cx \quad\text{subject to}\quad Ax\le b, \qquad (0.2)
\]
in which $Ax\le b$ is a system of linear inequalities satisfied by all the points of $\mathcal S$.
Since (0.2) is a relaxation of (0.1) in the sense that
every feasible solution of (0.1) is a feasible solution of
(0.2), the optimal value of (0.2) provides a lower bound on
the optimal value of (0.1). The ground-breaking idea of Dantzig,
Fulkerson, and Johnson was that solving (0.2) can help with
solving (0.1) in a far more substantial way than just by providing
a lower bound: having satisfied oneself that the wallet is not under
the streetlamp, one can pick the streetlamp up and bring it a little
closer to the place where the wallet was lost. It is a characteristic
feature of the simplex method that the optimal solution $x^*$ it finds is an extreme point of the polyhedron defined by $Ax\le b$; in particular, if $x^*$ is not one of the points in $\mathcal S$, then it lies outside the convex hull of $\mathcal S$. In that case, $x^*$ can be separated from $\mathcal S$ by a hyperplane: some linear inequality is satisfied by all the points in $\mathcal S$ and violated by $x^*$. Such an inequality is called a cutting plane or simply a cut. Having found a cut, one can add it to the system $Ax\le b$, solve the resulting tighter relaxation by the simplex method, and iterate this process until a relaxation (0.2) with an optimal solution in $\mathcal S$ is found.
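To make this loop concrete, here is a minimal sketch in Python; it is only an illustration of the scheme just described, not the authors' procedure. It assumes a hypothetical separation routine \verb|separate| that either certifies that the current optimum lies in $\mathcal S$ or returns a violated inequality, and it uses \verb|scipy.optimize.linprog| as the LP solver in place of the 1954 simplex computations.
\begin{verbatim}
# A sketch of the generic cutting-plane loop (illustration only).
# `separate(x_star)` is a hypothetical user-supplied routine: it returns None
# when x_star lies in S, and otherwise a cut (a, b0) with a.x <= b0 satisfied
# by every point of S but violated by x_star.
import numpy as np
from scipy.optimize import linprog

def cutting_plane_loop(c, A, b, separate, max_rounds=100):
    """Minimize c.x over successively tighter relaxations of the form A x <= b."""
    A = [np.asarray(row, dtype=float) for row in A]
    b = [float(v) for v in b]
    x_star = None
    for _ in range(max_rounds):
        res = linprog(c,
                      A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None,
                      bounds=(0, 1), method="highs")
        x_star = res.x
        cut = separate(x_star)
        if cut is None:                  # x_star is in S: optimal for (0.1)
            return x_star
        a_new, b0 = cut                  # add the cut to the system A x <= b
        A.append(np.asarray(a_new, dtype=float))
        b.append(float(b0))
    return x_star                        # best relaxation reached within the limit
\end{verbatim}
The box bounds $0\le x\le1$ merely stand in for the trivial bounds on incidence vectors; the cap on the number of rounds is only a safeguard, and all of the interesting work is, of course, hidden in \verb|separate|.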
We shall illustrate the Dantzig-Fulkerson-Johnson method on the
Dantzig-Fulkerson-Johnson example, the 49-city problem. This instance
of the TSP was created by picking one city from each of the 48 states
of the U.S.A. (Alaska and Hawaii became states only in 1959) and
adding Washington, D.C.; the costs of travel between different cities
were defined as road distances taken from an atlas. (In the actual
computations, each distance in miles was replaced by a scaled-down value rounded up to the nearest integer, so that each of the resulting
numbers could be stored as a single byte.) Rather than solving this
49-city problem, Dantzig, Fulkerson and Johnson solved the 42-city
problem obtained by removing Baltimore, Wilmington, Philadelphia,
Newark, New York, Hartford, and Providence. As it turned out, an
optimal tour of the 42 cities used the edge from Washington, D.C. to
Boston; since the shortest route between these two cities passes
through the seven removed cities, this solution of the 42-city problem
yields a solution of the 49-city problem.
Here, and throughout the paper, we shall conform to the following
conventions. The symbol $V$ is reserved for the set of cities; each edge of the complete graph with vertex-set $V$ is simply a two-point subset of $V$; if $x$ is a vector whose components are indexed by the edges of this complete graph, then $x_e$ or $x_{uv}$ denotes the component indexed by the edge $e=\{u,v\}$; for any two disjoint subsets $A$ and $B$ of $V$ we write
\[
x(A,B)=\sum\{\,x_{uv}:\ u\in A,\ v\in B\,\}.
\]
Trivially, each $x$ in $\mathcal S$ satisfies
\[
x(\{v\},V\setminus\{v\})=2 \quad\text{for all cities } v, \qquad (0.3)
\]
\[
0\le x_e\le 1 \quad\text{for all edges } e. \qquad (0.4)
\]
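As a concrete illustration (ours, not part of [2]), the following sketch sets up this initial relaxation, consisting of (0.3) and (0.4), for an arbitrary symmetric distance matrix and solves it with \verb|scipy.optimize.linprog|; the ordering of the edges used to index $x$ is an arbitrary choice.
\begin{verbatim}
# Sketch: build and solve the initial relaxation (0.3)-(0.4) of a TSP instance.
import itertools
import numpy as np
from scipy.optimize import linprog

def initial_relaxation(dist):
    """dist is a symmetric n-by-n cost matrix; returns the edge list and x*."""
    n = len(dist)
    edges = list(itertools.combinations(range(n), 2))   # two-point subsets of V
    c = np.array([dist[u][v] for u, v in edges], dtype=float)
    # Degree equations (0.3): each city is met by exactly two units of x.
    A_eq = np.zeros((n, len(edges)))
    for j, (u, v) in enumerate(edges):
        A_eq[u, j] = 1.0
        A_eq[v, j] = 1.0
    b_eq = np.full(n, 2.0)
    # The bounds encode (0.4): 0 <= x_e <= 1 for every edge e.
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return edges, res.x
\end{verbatim}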
In the 42-city problem, the graph of the optimal solution
of this initial relaxation is disconnected: one of its two components
has vertices 1,2,41,42 and the other component has vertices
3,4,...,40. This structural fault makes the first cut obvious:
since every tour must cross every demarcation line separating $V$ into two nonempty parts at least twice, every $x$ in $\mathcal S$ satisfies
\[
x(W,V\setminus W)\ge 2 \quad\text{whenever } \emptyset\ne W\subsetneq V. \qquad (0.5)
\]
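This first kind of cut is easy to find mechanically: whenever the graph of $x^*$ (the edges $e$ with $x^*_e>0$) is disconnected, the vertex-set $W$ of any one of its components yields a violated instance of (0.5), since $x^*(W,V\setminus W)=0$. A minimal sketch of this check, again only an illustration:
\begin{verbatim}
# Sketch: a subtour cut (0.5) from a disconnected support graph of x*.
def reachable(start, adj):
    """Vertices reachable from `start` in the adjacency map `adj` (simple DFS)."""
    seen, stack = {start}, [start]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def disconnected_subtour_cut(n, edges, x_star, eps=1e-9):
    """Return a set W violating (0.5) if the graph of x* is disconnected, else None."""
    adj = {v: [] for v in range(n)}
    for (u, v), val in zip(edges, x_star):
        if val > eps:                     # edge belongs to the graph of x*
            adj[u].append(v)
            adj[v].append(u)
    W = reachable(0, adj)
    if len(W) < n:                        # disconnected: x*(W, V \ W) = 0 < 2
        return W                          # so add x(W, V \ W) >= 2 to the relaxation
    return None
\end{verbatim}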
The next two iterations are similar: the graphs of the new optimal solutions $x^*$ are disconnected and we add two more subtour constraints (in each case taking $W$ to be the vertex-set of one of the components). Then the graph of $x^*$ becomes connected but not 2-connected: removal of city 18 splits it into two connected components, one with vertices 13, 14, ..., 17 and the other with vertices 19, 20, ..., 42, 1, 2, ..., 12. Again, this structural fault in the graph points out a violated subtour constraint: more generally, if the removal of a single city, say $w$, splits the rest of the graph into connected components with vertex-sets $V_1, V_2, \ldots, V_k$ ($k\ge 2$), then trivially $x^*(V_i, V\setminus V_i)<2$ for at least one $i$ (each $x^*(V_i, V\setminus V_i)$ equals $x^*(V_i,\{w\})$ and, by (0.3), these $k\ge2$ quantities add up to 2, so at least one of them is at most 1). We add the subtour constraint (0.5) with $W$ equal to such a $V_i$ and continue through three similar iterations (each time adding a subtour constraint found in the same way) until we obtain a relaxation (0.2) whose optimal solution $x^*$ is shown in Fig. 1.1: the solid edges $e$ carry $x^*_e=1$, the dashed edges $e$ carry $x^*_e=1/2$, and the long path 28-29-30-...-41-42-1-2-...-8-9-10 consists entirely of solid edges. The graph of this $x^*$ is 2-connected; no violated subtour constraints are apparent; in fact, there are none.
Fig. 1.1: What is wrong with this vector?
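The cut-vertex argument above can be mechanized in the same spirit; the sketch below (ours, purely for illustration) deletes each city in turn from the graph of $x^*$ and, whenever the remainder falls apart, returns a vertex-set whose subtour constraint is violated.
\begin{verbatim}
# Sketch: subtour cuts from a cut vertex, as in the removal-of-city-18 argument.
# `edges` are pairs (u, v) with u < v; `x_star` lists the corresponding values.
def cut_vertex_subtour_cut(n, edges, x_star, eps=1e-9):
    adj = {v: set() for v in range(n)}
    x_val = dict(zip(edges, x_star))
    for (u, v), val in zip(edges, x_star):
        if val > eps:
            adj[u].add(v)
            adj[v].add(u)

    def reach(start, banned):
        seen, stack = {start}, [start]
        while stack:
            for v in adj[stack.pop()]:
                if v != banned and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    for w in range(n):                          # candidate cut vertex
        rest = [v for v in range(n) if v != w]
        V1 = reach(rest[0], banned=w)
        if len(V1) < n - 1:                     # removing w disconnects the graph
            # x*(V1, V \ V1) = x*(V1, {w}); by (0.3) the crossings of the pieces
            # to w add up to 2, so V1 or its complement in V \ {w} violates (0.5).
            crossing = sum(x_val.get((min(u, w), max(u, w)), 0.0) for u in V1)
            return V1 if crossing < 2 - eps else set(rest) - V1
    return None
\end{verbatim}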
Now Dantzig, Fulkerson, and Johnson add two more linear
inequalities satisfied by all $x$ in $\mathcal S$ and say in a footnote, ``We are indebted to I. Glicksberg of Rand for pointing out relations of this kind to us''. These two inequalities are (0.6) and (0.7); at least one of them is violated by the vector of Fig. 1.1.
By adding constraints (0.6) and (0.7) to the previous
relaxation of (0.1), Dantzig, Fulkerson, and Johnson finally put the streetlamp next to the wallet: they obtained a relaxation whose
optimal solution is the incidence vector of a tour (passing through
the 42 cities in the order of their labels). And that was the end of
the 49-city problem.
The influence of this work reached far beyond the narrow confines of the TSP.
On the one hand, the method can be used to attack any problem of the
form (0.1), regardless of the particular choice of $\mathcal S$, as long as there is an efficient algorithm to recognize points of $\mathcal S$ (we have to know a good thing when we see it returned as an optimal solution of a relaxation (0.2)). Many problems in combinatorial optimization have this form: in the maximum clique problem, $\mathcal S$ consists of the incidence vectors of all cliques in the input graph $G$ (the components of these vectors are indexed by the vertices of $G$); in the maximum cut problem, $\mathcal S$ consists of the incidence vectors of all edge-cuts in the input graph $G$ (the components of these vectors are indexed by the edges of $G$); and so on. In each of these particular examples, the challenge is to come up with linear inequalities satisfied by all the points in $\mathcal S$ and violated by the optimal solution $x^*$ of the current relaxation (0.2). To meet this challenge, one often begins with common sense to establish some combinatorial property of $\mathcal S$ and only then (as we have seen in the 42-city example) expresses this property in terms of linear inequalities. This line of attack led to the development of the flourishing field of polyhedral combinatorics.
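The phrase ``recognize points of $\mathcal S$'' is easy to make concrete for the TSP: a vector is the incidence vector of a tour if and only if it is 0-1 valued, satisfies the degree equations (0.3), and has a connected graph. A small sketch of such a recognizer (ours, purely for illustration):
\begin{verbatim}
# Sketch: recognize whether a relaxation optimum x* is the incidence vector of a tour.
def is_tour(n, edges, x_star, eps=1e-9):
    # 0/1-valued?
    if any(min(abs(val), abs(val - 1.0)) > eps for val in x_star):
        return False
    adj = {v: [] for v in range(n)}
    for (u, v), val in zip(edges, x_star):
        if val > 0.5:                          # a chosen edge
            adj[u].append(v)
            adj[v].append(u)
    # Degree two at every city, i.e. the equations (0.3)?
    if any(len(neighbours) != 2 for neighbours in adj.values()):
        return False
    # A single cycle through all the cities?
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n
\end{verbatim}
Analogous recognizers for cliques and edge-cuts are equally straightforward to write.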
On the other hand, the method can be used to attack any integer linear programming problem. (To cast the TSP in this form, note that its $\mathcal S$ consists of all integer solutions of (0.3), (0.4), (0.5).) Again, the challenge is to come up with linear inequalities satisfied by all the integer feasible solutions of the current relaxation (0.2) and violated by its optimal solution $x^*$: can I. Glicksberg's ingenuity be replaced by an automatic
procedure to generate cutting planes? Ralph Gomory [3],
[4], [5] answered this challenge with breathtaking
elegance by his design of cutting-plane algorithms.
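To give a flavour of such an automatic procedure, here is a standard textbook instance of a Gomory fractional cut (an illustration of the idea, not an excerpt from [3], [4], [5]): if the optimal simplex tableau of a pure integer program with nonnegative variables contains the row
\[
x_1+\tfrac{7}{2}x_3-\tfrac{1}{2}x_4=\tfrac{5}{2},
\]
then, keeping only the fractional parts of the coefficients and of the right-hand side, every integer feasible solution must satisfy
\[
\tfrac{1}{2}x_3+\tfrac{1}{2}x_4\ge\tfrac{1}{2},
\]
an inequality violated by the current basic solution, in which $x_3=x_4=0$; this cutting plane is generated with no ingenuity at all.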
In this way, the work of Dantzig, Fulkerson, and Johnson became the
prototype of two different methodologies: polyhedral combinatorics in
combinatorial optimization and cutting-plane algorithms in integer
linear programming.
Further discussions of cutting-plane algorithms for the TSP can be found in Padberg and Grötschel [6] and Padberg and Rinaldi [7].