
OPERATIONS RESEARCH
Vol. 56, No. 4, July-August 2008, pp. 992-1009
issn 0030-364X | eissn 1526-5463 | 08 | 5604 | 0992
doi 10.1287/opre.1080.0524
© 2008 INFORMS

The Pseudoflow Algorithm: A New Algorithm
for the Maximum-Flow Problem

Dorit S. Hochbaum
Department of Industrial Engineering and Operations Research and Walter A. Haas School of Business,
University of California, Berkeley, California 94720, hochbaum@ieor.berkeley.edu

We introduce the pseudoflow algorithm for the maximum-flow problem that employs only pseudoflows and does not generate flows explicitly. The algorithm solves directly a problem equivalent to the minimum-cut problem, the maximum blocking-cut problem. Once the maximum blocking-cut solution is available, the additional complexity required to find the respective maximum flow is O(m log n). A variant of the algorithm is a new parametric maximum-flow algorithm generating all breakpoints in the same complexity required to solve the constant-capacities maximum-flow problem. The pseudoflow algorithm also has a simplex variant, pseudoflow-simplex, that can be implemented to solve the maximum-flow problem. One feature of the pseudoflow algorithm is that it can initialize with any pseudoflow. This feature allows it to reach an optimal solution quickly when the initial pseudoflow is close to an optimal solution. The complexities of the pseudoflow algorithm, the pseudoflow-simplex, and the parametric variants of the pseudoflow and pseudoflow-simplex algorithms are all O(mn log n) on a graph with n nodes and m arcs. Therefore, the pseudoflow-simplex algorithm is the fastest simplex algorithm known for the parametric maximum-flow problem. The pseudoflow algorithm is also shown to solve the maximum-flow problem on s,t-tree networks in linear time, where s,t-tree networks are formed by joining a forest of capacitated arcs, with nodes s and t adjacent to any subset of the nodes.

Subject classifications: flow algorithms; parametric flow; normalized tree; lowest label; pseudoflow algorithm; maximum flow.
Area of review: Optimization.
History: Received May 2001; revisions received December 2002, June 2003, June 2004, April 2005, May 2007; accepted May 2007.

1. Introduction

The maximum-flow problem is to find, in a network with a source, a sink, and arcs of given capacities, a flow that satisfies the capacity constraints and flow-balance constraints at all nodes other than source and sink, so that the amount of flow leaving the source is maximized.

The past five decades have witnessed prolific developments of algorithms for the maximum-flow problem. These algorithms can be classified in two major classes: feasible-flow algorithms and preflow algorithms. The feasible-flow algorithms work with augmenting paths, incrementing the flow at every iteration. The first algorithm of this type was devised by Ford and Fulkerson (1957). A preflow is a flow that may violate the restriction on the balance of the incoming flow and the outgoing flow into each node other than source and sink by permitting excesses (more inflow than outflow). It appears that the first use of preflow for the maximum-flow problem was in work by Boldyreff (1955), in a push-type algorithm with heuristically chosen labels. Boldyreff's technique, named "flooding technique," does not guarantee an optimal solution. The push-relabel algorithm of Goldberg and Tarjan (1988) uses preflows, and is efficient both theoretically and empirically.

The literature on maximum-flow algorithms includes numerous algorithms. Most notable for efficiency among feasible-flow algorithms is Dinic's (1970) algorithm of complexity O(n^2 m). An improved version of this algorithm runs in time O(n^3) (Karzanov 1974, Malhotra et al. 1978). Goldberg and Rao (1998) based their algorithm on an extension of Dinic's algorithm for unit-capacity networks, with run time of O(min{n^{2/3}, m^{1/2}} m log(n^2/m) log U) for U the largest arc capacity. Goldberg and Tarjan's (1988) push-relabel algorithm with dynamic trees implementation has complexity O(mn log(n^2/m)), and King et al. (1994) devised an algorithm of complexity O(mn log_{m/(n log n)} n).

We describe here a novel approach for solving the maximum-flow and minimum-cut problems which is based on the use of pseudoflow, permitting excesses and deficits.¹ Source and sink nodes have no distinguished role in this algorithm, and all arcs adjacent to source and sink in the maximum-flow problem instance are maintained saturated throughout the execution of the algorithm. The method seeks a partition of the set of nodes to subsets, some of which have excesses and some of which have deficits, so that all arcs going from excess subsets to deficit subsets are saturated. The partition with this property is provably a minimum cut in the graph.

The pseudoflow algorithm uses a certificate of optimality inspired by the algorithm of Lerchs and Grossmann (1965) for the maximum-closure problem defined on a digraph with node weights. That algorithm was devised for the purpose of finding the optimal solution (contour)

of the open-pit mining problem. The algorithm of Lerchs and Grossmann (1965) does not work with flows but rather with a concept of "mass" representing the total sum of node weights in a given subset. It is shown here that the concept of mass generalizes in capacitated networks to the notion of pseudoflow. The reader interested in further investigation of the conceptual link between our algorithm and Lerchs and Grossmann's algorithm is referred to Hochbaum (2001).

The pseudoflow algorithm solves, instead of the maximum-flow problem, the maximum blocking-cut problem (Radzik 1993). The blocking-cut problem is defined on a directed graph with arc capacities and node weights that does not contain source and sink nodes. The objective of the blocking-cut problem is to find a subset of the nodes that maximizes the sum of node weights, minus the capacities of the arcs leaving the subset. This problem is equivalent to the minimum s,t-cut problem (see §3) that is traditionally solved by deriving a maximum flow first.

At each iteration of the pseudoflow algorithm, there is a partition of the nodes to subsets with excesses and subsets with deficits such that the total excess can only be greater than the maximum blocking-cut value. In that sense, the union of excess subsets forms a superoptimal solution. The algorithm is thus interpreted as a dual algorithm for the maximum blocking-cut problem. If there are no unsaturated arcs between excess subsets and deficit subsets, then the union of the excess subsets forms an optimal solution to the maximum blocking-cut problem. A schematic description of the partition at an iteration of the algorithm is given in Figure 1.

The pseudoflow algorithm works with a tree structure called a normalized tree. This tree preserves some information about residual paths in the graph. The normalized tree is used as a basic arcs tree in a simplex-like variant of the pseudoflow algorithm, described in §10. We call this variant the pseudoflow-simplex.

A part of the investigation here is on the sensitivity analysis of the maximum-flow problem using the pseudoflow algorithm. The goal of sensitivity analysis, or parametric analysis, is to find the maximum flow as a function of a parameter when source- and sink-adjacent arc capacities are monotone nondecreasing and nonincreasing functions of the parameter, respectively. We distinguish two types of sensitivity analysis: In simple sensitivity analysis, we are given k parameter values for the arc capacities functions, and the problem is to find the optimal solution for each of these values. In complete parametric analysis, the goal is to find all the maximum flows (or minimum cuts) for any value of the parameter.

Figure 1. A schematic description of the graph during the execution of the pseudoflow algorithm with a partition to subsets of nodes of excess/deficit marked +/-, respectively.

Martel (1989) showed that a variant of Dinic's algorithm can solve the simple sensitivity analysis in O(n^3 + kn^2). Gallo et al. (1989) showed that simple sensitivity analysis for k = O(n) and complete parametric analysis problems for linear functions of the parameter can be solved in the same time as a single run of the push-relabel preflow algorithm, O(mn log(n^2/m) + km log(n^2/m)). The simple sensitivity analysis was subsequently improved by Gusfield and Tardos (1994) to O(mn log(n^2/m) + kn log(n^2/m)), permitting the increase of the number of parameter values to k = O(m) while still maintaining the same complexity. We show here that both simple sensitivity analysis for k = O(m log n) and complete parametric analysis can be performed using the pseudoflow algorithm or the pseudoflow-simplex algorithm in the same time as a single run, O(mn log n + kn). The complete parametric analysis algorithm can be extended to any monotone functions, for the pseudoflow algorithm and its variants, and also for the push-relabel algorithm, by adding O(n log U) steps, where U is the range of the parameter (Hochbaum 2003). The pseudoflow and the pseudoflow-simplex algorithms are thus the only alternatives to the push-relabel algorithm known to date that can solve the complete parametric analysis efficiently.

The contributions here include:
(1) A new algorithm for the maximum-flow problem of complexity O(mn log n). This is the first algorithm specifically designed for the maximum-flow problem that makes use of pseudoflows.
(2) A new pseudoflow-based simplex algorithm for maximum flow, matching the best complexity of a simplex algorithm for the problem (Goldberg et al. 1991).
(3) A simple sensitivity analysis algorithm for the maximum-flow problem on k parameter values with the pseudoflow algorithm of complexity O(mn log n + kn). The pseudoflow-simplex algorithm for simple sensitivity analysis also runs in O(mn log n + kn) time. This improves on the previously best-known simplex-based algorithm of Goldfarb and Chen (1997) for simple sensitivity analysis with complexity O(mn^2 + kn).
(4) A complete parametric analysis with the pseudoflow or pseudoflow-simplex algorithms generating all breakpoints for any monotone functions of the parameter in O(mn log n + n log U) steps and in O(mn log n) steps for linear functions. The pseudoflow-simplex is the first

simplex-based algorithm that performs the complete parametric analysis problem in the same complexity as a single constant-capacities instance.
(5) An efficient procedure for warm starting the algorithm when the graph arcs and capacities are modified arbitrarily.
(6) A linear-time algorithm for maximum flow on an s,t-tree network, which is a network with tree topology (in the undirected sense) appended by source and sink nodes that are connected to any subset of the nodes of the tree. This is used as a subroutine, e.g., in solving the minimum-cost network flow (see Vygen 2002).

This paper is organized as follows. The next section introduces notations and relevant definitions. In §3, we discuss the relationship of the maximum blocking-cut problem to the minimum-cut problem, the maximum-flow problem, and the maximum-closure problem. Section 4 introduces the normalized tree and its properties. Section 5 describes the pseudoflow algorithm and establishes its correctness as a maximum blocking-cut algorithm. In §6, the generic pseudoflow algorithm is shown to have pseudopolynomial run time, a scaling variant is shown to have polynomial run time, and a labeling variant is shown to have strongly polynomial run time. Section 7 presents several strongly polynomial variants of the pseudoflow algorithm. In §8, it is demonstrated how to recover from a normalized tree a feasible flow in time O(m log n) and O(m log n + n log U), respectively. At optimality, the flow amount is equal to the capacity of the cut arcs, and thus we conclude that the pseudoflow algorithm is also a maximum-flow algorithm. In §9, we discuss the parametric features of the algorithm and show that simple sensitivity analysis and complete parametric analysis can be implemented in O(mn log n) for linear functions, and with an additive factor of O(n log U) for arbitrary monotone functions. The pseudoflow-simplex and its parametric implementation are presented in §10. Section 11 describes an implementation of the push-relabel algorithm as a pseudoflow-based method. The methodologies of the pseudoflow, the pseudoflow-simplex, and the push-relabel algorithms are compared and contrasted in §12. The online appendices contain a new algorithm for normalizing any given tree in a network in O(m) steps. The implications for efficient warm starts, minimum directed cuts, and a linear-time maximum-flow algorithm for s,t-tree networks are discussed in the appendices as well. An electronic companion to this paper is available as part of the online version that can be found at http://or.journal.informs.org/.

2. Preliminaries and Notations

For a directed graph G = (V, A), the number of arcs is denoted by m = |A| and the number of nodes by n = |V|. A graph is called an s,t-graph if its set of nodes contains two distinguished nodes s and t.

For P, Q ⊂ V, the set of arcs going from P to Q is denoted by (P, Q) = {(u, v) ∈ A | u ∈ P and v ∈ Q}. The capacity of an arc (u, v) ∈ A is a nonnegative real number c_uv, and the flow on that arc is f_uv. For simplicity, we set all lower-bound capacities to zero, yet all results reported apply also in the presence of nonzero lower bounds. A pseudoflow f in an arc-capacitated s,t-graph is a mapping that assigns to each arc (u, v) a real value f_uv so that 0 ≤ f_uv ≤ c_uv.

For a given pseudoflow f in a simple s,t-graph (containing at most one arc for each pair of nodes), the residual capacity of an arc (u, v) ∈ A is c^f_uv = c_uv - f_uv, and the residual capacity of the backwards arc (v, u) is c^f_vu = f_uv. An arc or a backwards arc is said to be a residual arc if its residual capacity is positive. So, the set of residual arcs A^f is A^f = {(i, j) | f_ij < c_ij and (i, j) ∈ A, or f_ji > 0 and (j, i) ∈ A}.

For P, Q ⊂ V with P ∩ Q = ∅, the capacity of the cut separating P from Q is C(P, Q) = Σ_{(u,v)∈(P,Q)} c_uv. For a given pseudoflow f, the total flow from a set P to a set Q is denoted by f(P, Q) = Σ_{(u,v)∈(P,Q)} f_uv. For a given pseudoflow f, the total capacity of the residual cut from a set P to a set Q is denoted by C^f(P, Q) = Σ_{(u,v)∈(P,Q)} c^f_uv.

Even though the underlying graph considered is directed, the directions of the arcs are immaterial in parts of the algorithm and discussion. An arc (u, v) of an unspecified direction is referred to as edge [u, v]. So, we say that [u, v] ∈ A if either (u, v) or (v, u) ∈ A. The capacity of an edge e is denoted by c_e. The flow on an edge e is denoted by f_e.

The ordered list (v_1, v_2, ..., v_k) denotes a directed path from v_1 to v_k with (v_1, v_2), ..., (v_{k-1}, v_k) ∈ A. A directed path is said to be a residual path if (v_1, v_2), ..., (v_{k-1}, v_k) ∈ A^f. An undirected path from v_1 to v_k is denoted by [v_1, v_2, ..., v_k] with [v_1, v_2], ..., [v_{k-1}, v_k] ∈ A.

An s,t-graph is called a closure graph if the only arcs of finite capacities are those adjacent to the source and sink nodes.

A rooted tree is a collection of arcs that forms an undirected acyclic connected graph T with one node designated as a root. A rooted tree is customarily depicted with the root above and the tree nodes suspended from it below. A node u is called an ancestor of node v if the (unique) path from v to the root contains u. All nodes that have node u as an ancestor are called the descendants of node u. T_v denotes the subtree suspended from node v that contains v and all the descendants of v in T. T_[v,p(v)] = T_v ∪ {[v, p(v)]} is the subtree suspended from the edge [v, p(v)]. The parent of a node v, denoted by p(v), is the unique node that follows v on the path from v to the root of the tree. All nodes that have node v as a parent are the immediate descendants of v and are called children of v. A child of v is denoted by ch(v). We will occasionally refer to the nodes or the arcs of a tree T as the set T whenever there is no risk of ambiguity.

We introduce three related equivalent representations of a graph: G, G_st, and G^ext.
(1) The directed graph G = (V, A) has real node weights w_i for i ∈ V and (positive) arc capacities c_ij for (i, j) ∈ A.
(2) The graph G_st = (V_st, A_st) is an s,t-graph with only arc capacities. It is constructed from the graph G as

follows: The set of nodes is V_st = V ∪ {s, t}, and the set of arcs A_st comprises the arcs of A appended by sets of arcs adjacent to s and t, A_s and A_t. The arcs of A_s = {(s, j) | w_j > 0} connect s to all nodes of positive weight, each of capacity equal to the weight of the respective node, c_sj = w_j. Analogously, A_t = {(j, t) | w_j < 0} and c_jt = -w_j = |w_j| for (j, t) ∈ A_t. Zero-weight nodes are connected neither to the source nor to the sink. Thus, G_st = (V_st, A_st) = (V ∪ {s, t}, A ∪ A_s ∪ A_t).

The inverse map from a graph G_st to a graph G is as follows: A node-weighted graph G = (V, A) is constructed by assigning to every node v adjacent to s a weight w_v = c_sv, and every node u adjacent to t is assigned a weight w_u = -c_ut. Nodes that are adjacent neither to the source nor to the sink are assigned the weight zero. For a node v adjacent to both s and t in G_st, the lower-capacity arc among the two, of value δ = min{c_sv, c_vt}, is removed, and the value δ is subtracted from the other arc's capacity. Therefore, each node can be assumed to be adjacent to either source or sink or to neither. The source and sink nodes are then removed from V_st.

(3) The extended network, G^ext, corresponds to an s,t-graph G_st by adding to A_st, for each node v, two arcs of infinite capacity, (t, v) and (v, s), and then shrinking s and t into a single node r called root. We refer to the appended arcs from sink t as the deficit arcs and the appended arcs to the source s as the excess arcs, and denote them by A_r = ∪_{v∈V} {(v, r), (r, v)}. Figure 2 provides a schematic description of such a network. The extended network is the graph G^ext = (V ∪ {r}, A^ext), where A^ext = A ∪ A_s ∪ A_t ∪ A_r.

The maximum-flow problem is defined on a directed s,t-graph, G_st = (V ∪ {s, t}, A_st). An arc of infinite capacity (t, s) is added from sink to source to turn the problem into a circulation problem. The standard formulation of the maximum-flow problem, with variable f_ij indicating the amount of flow on arc (i, j), is

Max f_ts
subject to Σ_i f_ki - Σ_j f_jk = 0, k ∈ V ∪ {s, t},
0 ≤ f_ij ≤ c_ij, (i, j) ∈ A_st.

Figure 2. An extended network prior to shrinking source and sink into r. [Figure: the excess arcs (v, s) and deficit arcs (t, v) of infinite capacity appended to G_st.]

The objective function value, which is also the total flow leaving the source (or arriving at the sink), is denoted by |f|. In this formulation, the first set of (equality) constraints is called the flow-balance constraints. The second set of (inequality) constraints is called the capacity constraints. A preflow violates the flow-balance constraints in one direction, permitting nonnegative excess: Σ_i f_ki - Σ_j f_jk ≥ 0. A pseudoflow may violate the flow-balance constraints in both directions. Capacity constraints are satisfied by both preflow and pseudoflow.

Claim 2.1. For any pseudoflow in G_st, there is a corresponding feasible flow on the graph G^ext.

Proof. The feasible flow is constructed by sending the excess or deficit of each node v back to node r via the excess arc (v, r) or the deficit arc (r, v). □

Let f be a pseudoflow vector in G_st with 0 ≤ f_ij ≤ c_ij, and let inflow(D), outflow(D) be the total amount of flow incoming and outgoing to and from the set of nodes D. For each subset of nodes, D ⊆ V, the excess of D is the net inflow into D,

excess(D) = inflow(D) - outflow(D) = Σ_{(u,v)∈(V∪{s}\D, D)} f_uv - Σ_{(v,u)∈(D, V∪{t}\D)} f_vu.

The excess of a singleton node v is excess(v). A negative-valued excess is called deficit, with deficit(D) = -excess(D).

For S ⊆ V, we let the complement of S be S̄ = V\S. The complement sets used here are only with respect to V, even for the graphs G_st and G^ext.

Next, we introduce the maximum blocking-cut problem and the concept of surplus of a set.

Problem Name: Maximum blocking-cut.
Instance: Given a directed graph G = (V, A), node weights (positive or negative) w_i for all i ∈ V, and nonnegative arc weights c_ij for all (i, j) ∈ A.
Optimization Problem: Find a subset of nodes S ⊆ V such that

surplus(S) = Σ_{i∈S} w_i - Σ_{i∈S, j∈S̄} c_ij is maximum.

The notion of surplus of a set of nodes in G has related definitions in the graphs G_st and G^ext.

Definition 2.1. The surplus of S ⊆ V in the graph G is Σ_{j∈S} w_j - Σ_{i∈S, j∈S̄} c_ij. The surplus of S ⊆ V in the graph G_st is C({s}, S) - C(S, S̄ ∪ {t}). The surplus of S ⊆ V in the graph G^ext is Σ_{j∈S, (r,j)∈A_s} c_rj - Σ_{j∈S, (j,r)∈A_t} c_jr - Σ_{(i,j)∈(S,S̄)} c_ij.

These definitions of surplus in the corresponding graphs G^ext, G_st, and G are equivalent: in G^ext and G_st, the definitions are obviously the same because the capacity of

the arcs in A_s with endpoints in S is C({s}, S), the capacity of the arcs in A_t with endpoints in S is C(S, {t}), and C(S, S̄ ∪ {t}) = C(S, {t}) + C(S, S̄). To see the equivalence in G_st and G, observe that the sum of weights of nodes in S is also the sum of capacities C({s}, S) - C(S, {t}), where the first term corresponds to positive weights in S, and the second term to negative weights in S. Therefore,

Σ_{j∈S} w_j - Σ_{i∈S, j∈S̄} c_ij = C({s}, S) - C(S, {t}) - C(S, S̄) = C({s}, S) - C(S, S̄ ∪ {t}).

The expression on the left-hand side is the surplus of S in G, whereas the expression on the right-hand side is the surplus of S in G_st.

3. The Maximum-Flow and the Maximum Blocking-Cut Problems

The blocking-cut problem is closely related to the maximum-flow and minimum-cut problems, as shown next. This relationship was previously noted by Radzik (1993).

Lemma 3.1. For S ⊆ V, {s} ∪ S is the source set of a minimum cut in G_st if and only if (S, S̄) is a maximum blocking cut in the graph G.

Proof. We rewrite the objective function in the maximum blocking-cut problem for the equivalent graph G_st:

max_{S⊆V} [C({s}, S) - C(S, S̄ ∪ {t})]
= max_{S⊆V} [C({s}, V) - C({s}, S̄) - C(S, S̄ ∪ {t})]
= C({s}, V) - min_{S⊆V} [C({s}, S̄) + C(S, S̄ ∪ {t})].

In the last expression, the term C({s}, V) is a constant from which the minimum-cut value is subtracted. Thus, the set S maximizing the surplus is also the source set of a minimum cut and, vice versa, the source set of a minimum cut also maximizes the surplus. □

We note that the blocking-cut problem has appeared in several forms in the literature. The Boolean quadratic minimization problem with all the quadratic terms having positive coefficients is a restatement of the blocking-cut problem.² More closely related is the feasibility condition of Gale (1957) for a network with supplies and demands, or of Hoffman (1960) for a network with lower and upper bounds. Verifying feasibility is equivalent to ensuring that the maximum blocking cut is zero in a graph with node weights equal to the respective supplies and demands with opposite signs. If the maximum blocking cut is positive, then there is no feasible flow satisfying the supply and demand balance requirements. The names maximum blocking cut and maximum surplus cut were used for the problem by Radzik (1993).

3.1. The Blocking-Cut Problem and the Maximum-Closure Problem

The pseudoflow algorithm is a generalization of the algorithm solving the maximum-closure problem in closure graphs, described in Hochbaum (2001). Indeed, the blocking-cut problem generalizes the closure problem. The maximum-closure problem is to find in a node-weighted directed graph a maximum-weight subset of nodes that forms a closed set, i.e., a set of nodes that contains all successors of each node in the closed set. Picard (1976) demonstrates how to formulate the maximum-closure problem on a graph G = (V, A) as a flow problem on the respective G_st graph: All arc capacities in A are set to infinity, a source node s and a sink node t are added to V, and arcs A_s and A_t are appended as in the description of how to derive G_st from G (A_s contains arcs from s to all nodes of positive weight with capacity equal to that weight, and A_t contains arcs from all nodes of negative weight to t with capacities equal to the absolute value of the weight). In G_st, any finite cut C(S, S̄ ∪ {t}) must have (S, S̄) = ∅. This implies that for such a finite cut, the set S is closed. It follows that C(S, S̄ ∪ {t}) is equal to C(S, {t}), and thus Lemma 3.1 demonstrates that the source set of a minimum cut is also a maximum-weight closed set.

The blocking-cut problem generalizes the maximum-closure problem by relaxing the closure requirement: Nodes that are successors of nodes in S, i.e., nodes that are the heads of arcs with tails in nodes of S, may be excluded from the set S, but at a penalty equal to the capacity of the respective arcs.

From Lemma 3.1, we conclude that any maximum-flow or minimum-cut algorithm solves the maximum blocking-cut problem: The source set of a minimum cut is a maximum surplus set. The pseudoflow algorithm works in the reverse direction by solving the maximum blocking-cut problem first, which provides a minimum cut, and then recovering a maximum flow.

4. A Normalized Tree

The pseudoflow algorithm maintains a construction called a normalized tree, after the use of this term by Lerchs and Grossmann (1965). A normalized tree T = (V ∪ {r}, E_T) is defined on a spanning tree in G^ext rooted in r so that E_T ⊆ A ∪ (∪_{v∈V} {(v, r), (r, v)}). The children of r in such a spanning tree are denoted by r_i, and called the roots of their respective branches (also referred to as trees or subtrees). In a normalized tree, only the branch roots r_i are permitted to carry nonzero deficits or excesses. For a given pseudoflow f, a branch T_{r_i} is called strong if excess(T_{r_i}) = excess(r_i) = f_{r_i,r} > 0, and weak otherwise. All nodes of strong branches are considered strong, and all nodes of weak branches are considered weak. Branches with zero excess are called zero-deficit branches, and are considered to be weak.
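As a small illustration of the strong/weak classification just defined, the sketch below uses a hypothetical data layout that is not from the paper: a normalized tree stored as parent pointers into the root r, with the excess f_{r_i,r} recorded at each branch root r_i. Every node is then classified by the sign of its branch root's excess, with zero-excess (zero-deficit) branches counted as weak:

```python
# Illustrative sketch (names and data layout assumed, not from the paper):
# a normalized tree as parent pointers into the root "r"; excesses are
# stored only at the branch roots, the children of "r".

def classify_branches(parent, excess):
    """Return (strong_nodes, weak_nodes) for a normalized tree."""
    def branch_root(v):
        # Climb the tree until reaching a child of the root r.
        while parent[parent[v]] is not None:
            v = parent[v]
        return v

    strong, weak = set(), set()
    for v in parent:
        if parent[v] is None:      # skip the root r itself
            continue
        # A branch is strong iff the excess of its root is positive.
        (strong if excess[branch_root(v)] > 0 else weak).add(v)
    return strong, weak

# Two branches hang off r: node 1 (excess 2, so strong) with child 3,
# and node 2 (excess 0, a zero-deficit branch, so weak).
parent = {"r": None, 1: "r", 2: "r", 3: 1}
excess = {1: 2, 2: 0}
strong, weak = classify_branches(parent, excess)
```

Node 3 carries no excess of its own; it inherits "strong" from its branch root 1, matching the convention that all nodes of a strong branch are strong.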

The tree T induces a forest of all the branches in G = (V, A) formed by the set of arcs E_T ∩ A. The arcs in the set E_T ∩ A are called in-tree arcs, and the arcs in the set A\E_T are called out-of-tree arcs.

Recall that the root is placed at the top of the tree, so that the reference to the downwards direction is equivalent to pointing away from the root, and the upwards direction is equivalent to pointing toward the root. The topology of a normalized tree with three branches is illustrated in Figure 3. The branch rooted at r_1 is strong because the amount of excess of the branch T_{r_1} (and of r_1) is positive. Branch T_{r_3} has nonpositive excess and is thus considered weak.

Definition 4.1. A spanning tree T in G^ext with a pseudoflow f in G_st is called normalized if it satisfies Properties 1, 2, 3, and 4.

Property 1. The pseudoflow f saturates all source-adjacent arcs and all sink-adjacent arcs, A_s ∪ A_t.

Property 2. The pseudoflow f on out-of-tree arcs is equal to the lower- or the upper-bound capacities of the arcs.

Property 3. In every branch, all downwards residual capacities are strictly positive.

Property 4. The only nodes that do not satisfy flow-balance constraints in G_st are the roots of their respective branches.

Property 4 means that for T to be a normalized tree, the arcs connecting the roots of the branches to r in G^ext are the only excess and deficit arcs permitted. It also means that the excess of a branch is equal to the excess of its root, excess(T_{r_i}) = excess(r_i).

It is shown next that a crucial property, the superoptimality property, is satisfied by any normalized tree and is thus a corollary of Properties 1, 2, 3, and 4.

Figure 3. A schematic description of a normalized tree. [Figure: the root r (s and t merged) with three branches suspended from it, rooted at r_1, r_2, and r_3; the strong branch rooted at r_1 has excess > 0, the weak branches have excess ≤ 0, and a deficit arc connects r_3 to r. Note. Each r_i is a root of a branch.]

Property 5 (Superoptimality). The set of strong nodes of a normalized tree T is a superoptimal solution to the blocking-cut problem. That is, the sum of excesses of the strong branches is an upper bound on the maximum surplus.

Proof. To establish the superoptimality property, we first prove two lemmata. Recall that for a pseudoflow f and any D ⊆ V, the capacity of the residual cut from D to D̄ = V\D is C^f(D, D̄) = Σ_{(i,j)∈A∩(D,D̄)} (c_ij - f_ij) + Σ_{(j,i)∈A∩(D̄,D)} f_ji.

Lemma 4.1. For a pseudoflow f saturating A_s ∪ A_t, surplus(D) = excess(D) - C^f(D, D̄).

Proof.

excess(D) = C({s}, D) - C(D, {t}) + f(D̄, D) - f(D, D̄)
= C({s}, D) - C(D, {t}) + f(D̄, D) - [C(D, D̄) - Σ_{(i,j)∈A∩(D,D̄)} c^f_ij]
= C({s}, D) - C(D, {t}) - C(D, D̄) + C^f(D, D̄)
= surplus(D) + C^f(D, D̄). □

Definition 4.2. For a pseudoflow f saturating A_s ∪ A_t and a normalized tree T, we define surplus^T(D) = excess(D) - C^f((D, D̄) ∩ T).

From this definition and Lemma 4.1, it follows that excess(D) ≥ surplus^T(D) ≥ surplus(D). All three terms are equal when C^f(D, D̄) = 0, which happens when f(D̄, D) = 0 and f(D, D̄) = C(D, D̄).

Lemma 4.2. For a normalized tree T with pseudoflow f, strong nodes S, and weak nodes S̄, max_{D⊆V} surplus^T(D) = surplus^T(S).

Proof. The maximum of excess(D) is attained for a set of nodes D* that contains all the nodes of positive excess, the roots of the strong branches, and excludes all nodes of negative excess, the roots of weak branches. Recall that all nonroot nodes have zero excess.

Suppose that for a strong branch B ⊆ S, only a proper subset B_1 ⊂ B is contained in D*, where B_1 contains the root of the branch. Then, the set of residual arcs in (B_1, B\B_1) ∩ T is nonempty due to Property 3, and surplus^T(B_1) ≤ surplus^T(B). Thus, B maximizes the value surplus^T(D) for any D ⊆ B. Similarly, including any subset of a weak branch that does not include the weak root cannot increase the value of surplus^T(D). Therefore, the maximum is attained for the set of all strong nodes S. □

In particular, observing that (S, S̄) ∩ T = ∅, it follows that

excess(S) = max_{D⊆V} surplus^T(D) ≥ max_{D⊆V} surplus(D).

Thus, the excess of the strong nodes S is an upper bound on the value of the optimal solution to the blocking-cut problem, and the superoptimality is proved. □
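Lemma 4.1 can be checked numerically on a tiny instance. The sketch below (illustrative names and data layout, not from the paper) takes a two-node graph with weights w_1 = 3 and w_2 = -1, a single arc (1, 2) of capacity 2 carrying pseudoflow 1, and a pseudoflow that saturates the source- and sink-adjacent arcs, and verifies surplus(D) = excess(D) - C^f(D, D̄) for every subset D:

```python
# Numeric sanity check (illustrative) of Lemma 4.1: for a pseudoflow f
# saturating all source- and sink-adjacent arcs,
#     surplus(D) = excess(D) - C^f(D, Dbar).

V = {1, 2}
w = {1: 3, 2: -1}        # node weights; (s, 1) has capacity 3, (2, t) has 1
A = {(1, 2): 2}          # arc capacities within V
f = {(1, 2): 1}          # pseudoflow on the arcs of A

def excess(D):
    inflow = sum(w[j] for j in D if w[j] > 0)               # saturated (s, j)
    inflow += sum(fv for (i, j), fv in f.items() if i not in D and j in D)
    outflow = sum(-w[j] for j in D if w[j] < 0)             # saturated (j, t)
    outflow += sum(fv for (i, j), fv in f.items() if i in D and j not in D)
    return inflow - outflow

def residual_cut(D):
    # Forward residual capacities c_ij - f_ij out of D, plus backward
    # residual capacities f_ji on arcs entering D.
    out = sum(c - f[(i, j)] for (i, j), c in A.items() if i in D and j not in D)
    back = sum(f[(i, j)] for (i, j) in A if i not in D and j in D)
    return out + back

def surplus(D):
    return sum(w[i] for i in D) - sum(c for (i, j), c in A.items()
                                      if i in D and j not in D)

for D in [{1}, {2}, {1, 2}]:
    assert surplus(D) == excess(D) - residual_cut(D)
```

For D = {1}, for instance, excess(D) = 3 - 1 = 2 and C^f(D, D̄) = (2 - 1) = 1, giving surplus(D) = 1, which matches w_1 - c_12 = 3 - 2 directly.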

When the residual capacity of the arcs in (S, S̄) is zero, then excess(S) = surplus^T(S) = surplus(S). With this and Lemma 4.2, we have:

Corollary 4.1 (Optimality Condition). For a normalized tree T with a pseudoflow f saturating A_s and A_t and a set of strong nodes S, if C^f(S, S̄) = 0, then S is a maximum surplus set in the graph and (S, S̄) is a maximum blocking cut.

Definition 4.3. A normalized tree with pseudoflow f is optimal if, for the set of strong nodes S in the tree, (S, S̄) ∩ A^f = ∅.

Corollary 4.2 (Minimality). If S is the set of strong nodes for an optimal normalized tree, then it is a minimal maximum surplus set in the graph.

Proof. From the proof of Lemma 4.2, the maximum surplus set contains all strong nodes. Therefore, a set strictly contained in S cannot be a maximum surplus set. □

5. The Description of the Generic Pseudoflow Algorithm

The algorithm begins with a normalized tree and an associated pseudoflow saturating source- and sink-adjacent arcs in G_st. An iteration of the algorithm consists of seeking a residual arc from a strong node to a weak node, a merger arc. If such an arc does not exist, the normalized tree is optimal. Otherwise, the selected merger arc is appended to the tree, the excess arc of the strong merger branch is removed, and the strong branch is merged with the weak branch. The entire excess of the respective strong branch is then pushed along the unique path from the root of the strong branch to the root of the weak branch. Any arc encountered along this path that does not have sufficient residual capacity to accommodate the amount pushed is split, and the tail node of that arc becomes a root of a new strong branch with excess equal to the difference between the amount pushed and the residual capacity. The process of pushing excess and splitting is called normalization. The residual capacity of the split arc is pushed further until it either reaches another arc to split or the deficit arc adjacent to the root of the weak branch.

To initialize the algorithm, we need a normalized tree. One normalized tree, called a simple normalized tree, corresponds to a pseudoflow in G_st saturating all arcs A_s and A_t and zero on all other arcs. In a simple normalized tree, each node is a singleton branch for which the node serves as a root, as in Figure 4. Thus, all nodes adjacent to the source are strong nodes, and all those adjacent to the sink are weak nodes. All the remaining nodes have zero inflow and outflow, and are thus of zero deficit and set as weak.

The following procedure outputs a simple normalized tree.

Figure 4. A simple normalized tree where N(s)/N(t) are the nodes adjacent to s/t, respectively. [Figure: the root r (s and t merged) with every node a singleton branch; each j ∈ N(s) has excess(j) = c_sj and is strong, each i ∈ N(t) has deficit(i) = c_it and is weak, and all remaining nodes have excess = 0 and are weak.]

procedure Initialize (G_st):
S = W = ∅.
begin
  ∀(i, j) ∈ A: f_ij = 0.
  ∀(s, j) ∈ A_s: f_sj = c_sj; r_j = j; excess(r_j) = f_{r_j,r} = c_sj; S ← S ∪ {j}.
  ∀(j, t) ∈ A_t: f_jt = c_jt; r_j = j; excess(r_j) = f_{r,r_j} = -c_jt; W ← W ∪ {j}.
  ∀j ∈ V\(S ∪ W): W ← W ∪ {j}; r_j = j; excess(r_j) = f_{r,r_j} = 0.
  Output T = ∪_{j∈V} {[r, r_j]}, S, W.
end

Another type of normalized tree, the saturate-all tree, is generated by a pseudoflow saturating all arcs in the graph G_st. Here, the branches are also singletons, and the strong nodes are those that have incoming capacity greater than outgoing capacity. Other types of initial normalized trees are described in Anderson and Hochbaum (2002).

We first present the generic version of the pseudoflow algorithm, which does not specify which merger arc to select. We later elaborate on selection rules for a merger arc that lead to more efficient variants of the algorithm.

The input parameters to procedure pseudoflow consist of a pseudoflow f in G_st saturating A_s ∪ A_t, a normalized tree T associated with f, and the respective sets of strong and weak nodes, S and W. As before, A^f is the set of residual arcs in A with respect to f.

procedure pseudoflow (G_st, f, T, S, W):
begin
  while (S, W) ∩ A^f ≠ ∅ do
    Select (s′, w) ∈ (S, W);
    Let r_{s′}, r_w be the roots of the branches containing s′ and w, respectively.
    Let δ = excess(r_{s′}) = f_{r_{s′},r};
    Merge: T ← T \ {[r, r_{s′}]} ∪ {[s′, w]};
    Renormalize {push δ units of flow along the path (r_{s′}, ..., s′, w, ..., r_w, r)}:
      i = 1;
      Until v_{i+1} = r:
        Let [v_i, v_{i+1}] be the ith edge on the path;
Hochbaum: The Pseudoow Algorithm: A New Algorithm for the Maximum-Flow Problem
Operations Research 56(4), pp. 9921009, 2008 INFORMS 999

{Push ow} If cvfi  vi+1  $ augment ow by $, 6. The Complexity of the Pseudoow


fvi  vi+1 fvi  vi+1 + $; Algorithm
Else, split {vi  vi+1  $ cvfi  vi+1 };
Set $ cvfi  vi+1 ; 6.1. The Pseudopolynomiality of the Generic
Set fvi  vi+1 cvi  vi+1 ; Algorithm
ii+1 The next lemma demonstrates that for integer capacities,
end the generic algorithm is nite.
end
Lemma 6.1. At each iteration, either the total excess of the
end
strong nodes is strictly reduced, or at least one weak node
procedure split a b M becomes strong.
T T \a b a r; excessa = far = M;
{a is a root of a strong branch} Proof. Because downwards residual capacities are posi-
Af Af b a \ a b ; tive, a positive portion of the excess pushed arrives at the
end weak branch. Then, either a positive amount of excess
arrives at rw , or some upwards arc (u pu) in the weak
An illustration of the steps at one iteration is given in branch has zero residual capacity. In the rst case, a posi-
Figure 5. The merger arc is (s w). In Figure 5(a), the tive amount of excess arrives at node rw and the total excess
weak and strong merger branches, Trw and Trs , are shown. is strictly reduced. In the latter case, there is no change in
The subtrees Tw and Ts are contained in those branches. excess but the nodes of the subtree Tu pu that include the
Figure 5(b) shows the inversion of the path rs      s in head of the merger arc, w, become strong. 
the strong branch that follows a merger. Edges e1 and e2 are
the only blocking edges found in the normalization process, Let M + = C s  V  be the sum of arc capacities in As
and the new partition following the split of those edges and M = CV  t  be the sum of arc capacities in At.
is shown in Figure 5(c). The incoming ow into b is the These quantities are the total excess of the strong nodes
excess frs  r and the incoming ow into d is the amount that and the total decit of the weak nodes in the initial simple
got through (b a), cba f
. normalized tree. From Lemma 6.1, it follows that two con-
The correctness of the generic algorithm follows rst secutive reductions in total excess may be separated by at
from Lemma 6.1, shown in 6, proving that the algorithm most n mergers because each merger that is not associated
terminates. Second, at each iteration the excess is pushed with a strict reduction in excess must result in a decrease
and arcs split so that the resulting tree is normalized and in the number of weak nodes. Thus, we have for integer
all nonroot nodes satisfy ow-balance constraints in Gst . capacities:
The algorithm thus terminates with a normalized tree and Corollary 6.1. The complexity of the algorithm is
no residual arc between strong and weak nodes. This is the OnM +  iterations.
certicate of optimality given in Corollary 4.1. Therefore,
at termination the set of strong nodes is the source set of a The complexity expression in Corollary 6.1 depends on
minimum-cut and a maximum-surplus set. the total sum of excesses in the initial tree. It is thus

Figure 5. The merger arc is (s w); rs and rw are the roots of the strong and weak branches respectively; e1 = b a is
the rst split arc on the merger path; e2 = d e is the second split arc. (a) Merger arc identied. (b) Inverting
the strong branch. (c) After renormalization.
r

rw
r r

c
rw rs
Trw e2
d d b
rw
Trw Trs rs
w c w
w s
WS s
Tw
s
Tw Ts Tw Ts
a a
e1
b Trs
Ts
rs

(a) (b) (c)



possible to take advantage of the symmetry of source and sink and solve the problem on the reverse graph (reversing all arcs and the roles of source and sink), resulting in O(nM−) iterations. Therefore, the total number of iterations is O(n min{M+, M−}).

Corollary 6.2. The complexity of the algorithm is O(n min{M+, M−}) iterations.

It is important to note that even though the algorithm terminates when total excess is zero, this is only a sufficient condition for termination, not a necessary condition. At termination, both excess and deficit may be positive, as long as the cut arcs (S, W) are all saturated. This observation leads to another corollary.

A procedure for feasible flow recovery is given in §8. The feasible flow recovered in Gst has the flow on all out-of-tree arcs unchanged. In particular, for an optimal normalized tree, the flow saturates all arcs in (S, W), and (S, W) is a minimum cut. We conclude that for optimum minimum-cut value C(S, W) = C*, the remaining excess at termination is M+ − C*. We then get a somewhat tighter complexity expression:

Corollary 6.3. Let the minimum-cut capacity be C*. Then, the complexity of the algorithm is O(nC*) iterations.

6.2. A Scaling Polynomial-Time Improvement

Using a standard technique for scaling integer arc capacities, the running time of the algorithm becomes polynomial. This works as follows: Let P = ⌈log2 max(i,j)∈Ast cij⌉ − 1. Then, at each scaling iteration p, p = P, P−1, …, 1, 0, the problem is solved with arc capacities cij(p) = ⌊cij/2^p⌋ for all (i, j) ∈ Ast. At the next iteration, the value of p is reduced by one, thus adding one more significant bit to the value of the capacities.

Now between two consecutive scaling iterations, the value of the residual cut is increased by at most m scaling units. This is because the residual cut capacity at the end of the previous scaling iteration is zero, and when the additional bit is added to the capacities of at most m arcs, the value of the minimum cut in the graph is bounded by this residual cut. With Corollary 6.3, this implies a running time of O(mn) iterations per scaling step. Because there are O(log min{M+, M−}) scaling steps, we have:

Corollary 6.4. The complexity of the scaling pseudoflow algorithm is O(mn log min{M+, M−}) iterations.

6.3. A Strongly Polynomial Labeling Scheme

We describe here a labeling scheme for the pseudoflow algorithm and show that it satisfies four invariants. The pseudoflow algorithm satisfying these invariants is shown to have complexity O(mn log n), which is strongly polynomial.

In this section, a merger arc (s′, w) has node s′ (which is unrelated to the source node) as the strong endpoint of the merger arc, called the strong merger node, and w, called the weak merger node.

Our labeling scheme is similar, but not identical, to the distance labeling used in Goldberg and Tarjan (1988). The basic labeling scheme restricts the selection of a merger arc (s′, w) so that w is a lowest label weak node among all possible residual arcs in (S, W).

Initially, all nodes are assigned the label 1: lv = 1 ∀v ∈ V. After a merger iteration involving the merger arc (s′, w), the label of all strong nodes, including the strong merger node s′, can only increase to the label of w plus 1. Formally, the statement "Select (s′, w) ∈ (S, W)" is replaced by:

Select (s′, w) ∈ (S, W), so that lw = min(u,v)∈(S,W) lv;
{relabel} ∀v ∈ S, lv ← max{lv, lw + 1}.

The labels satisfy the following invariants throughout the execution of the algorithm:

Invariant 1. For every arc (u, v) ∈ A^f, lu ≤ lv + 1.

Invariant 2 (Monotonicity). Labels of nodes on any downwards path in a branch are nondecreasing.

Invariant 3. The labels of all nodes are lower bounds on their distance to the sink. Furthermore, the difference between the labels of any pair of nodes u and v, lu − lv, is a lower bound on the shortest residual path distance from u to v.

Invariant 4. Labels of nodes are nondecreasing over the execution of the algorithm.

We now prove the validity of the invariants.

Proof of Invariant 1. Assume by induction that the invariant holds through iteration k, and prove that it holds through iteration k + 1 as well. Obviously, the invariant is satisfied at the outset, when all labels are equal to one. Consider a residual arc (u, v) after the relabeling at iteration k + 1, and let arc (s′, w) be the merger arc in that iteration. Let lu, lv be the labels prior to the relabeling in iteration k + 1, and l′u, l′v be the labels after the relabeling is complete. There are four different cases corresponding to the status of the nodes at the beginning of iteration k + 1.

u strong, v weak: At iteration k + 1, only the label of node u can change because weak nodes are not relabeled. The lowest label choice of w implies that lv ≥ lw, and therefore l′u = max{lu, lw + 1} ≤ lv + 1 = l′v + 1.

u strong, v strong: Here, the inequality can potentially be violated only when the label of u goes up and the label of v does not. Suppose that the label of u has increased. Then, l′u = lw + 1. If the label of v has not likewise increased, then lv ≥ lw + 1 and l′v = lv ≥ lw + 1 = l′u, so the inequality is satisfied.

u weak, v strong: Only the label of v can change, and then only upwards.

u weak, v weak: Weak nodes do not get relabeled, so, by induction, the inequality is still satisfied at the end of iteration k + 1. □
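The relabel step and the check of Invariant 1 can be illustrated with a short, self-contained sketch. This is not part of the paper's implementation; the dictionary representation of labels and the list-of-pairs representation of residual arcs are assumptions of this example.

```python
# Toy sketch (not the paper's code): the relabel rule of Section 6.3 and a
# direct check of Invariant 1. Labels are a dict node -> int, and residual
# arcs a list of (u, v) pairs -- an assumed, simplified representation.

def relabel_after_merger(strong, labels, l_w):
    """Relabel step: every strong node's label rises to at least l(w) + 1.
    Labels never decrease, which is Invariant 4."""
    return {v: (max(l, l_w + 1) if v in strong else l)
            for v, l in labels.items()}

def satisfies_invariant1(residual_arcs, labels):
    """Invariant 1: l(u) <= l(v) + 1 for every residual arc (u, v)."""
    return all(labels[u] <= labels[v] + 1 for (u, v) in residual_arcs)
```

For instance, a merger into a weak node w with lw = 1 raises all strong labels to at least 2, and Invariant 1 continues to hold on the residual arcs into w.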

Proof of Invariant 2. Assume, by induction, that monotonicity is satisfied through iteration k. The operations that might affect monotonicity at iteration k + 1 are relabeling, merging, and splitting of branches. As a result of relabeling, the nodes on the strong section of the merger path [rs′, …, s′] are all labeled with the label lw + 1 because previously all the labels of these nodes were at most ls′ ≤ lw + 1 by the inductive assumption of monotonicity. After merging and inverting the strong branch, that path has the roles of parents and children reversed along the path. But because the nodes along the strong section of the path all have the same labels, the monotonicity still holds. Monotonicity also holds for all subtrees that are suspended from the merger path nodes because the parent/child relationship is not modified there, and all labels are ≤ lw + 1. □

Proof of Invariant 3. This is a corollary of Invariant 1. Along a residual path, labels increase by at most one unit for each arc on the path. Therefore, the difference between the labels of the endpoints is less than or equal to the distance between them along any residual path. Formally, for a residual path on k nodes (v1, v2, …, vk), we have

lv1 ≤ lv2 + 1 ≤ ⋯ ≤ lvk + (k − 1).

Therefore, lv1 − lvk ≤ k − 1. □

Invariant 4 is obviously satisfied because relabeling can only increase labels.

Lemma 6.2. Between two consecutive mergers using merger arc (s′, w), the labels of s′ and w must increase by at least one unit each.

Proof. Let the label of w be lw = L during the first merger using (s′, w). After the merger's relabeling, ls′ ≥ L + 1. Prior to (s′, w) serving again as a merger arc, flow must be pushed back on (w, s′) so that (s′, w) may become an out-of-tree residual arc. This can happen if:
• w is above s′ in a strong branch and the strong merger node is either s′ or a descendant of s′. After such a merger, the label of w must satisfy lw = ls′ ≥ L + 1. Or,
• (w, s′) serves as a merger arc. But then w is relabeled to be at least ls′ + 1, and lw ≥ ls′ + 1 ≥ L + 2.
In either case, lw ≥ ls′ ≥ L + 1. Upon repeated use of (s′, w) as a merger arc, ls′ ≥ lw + 1 ≥ L + 2. Thus, the labels of s′ and w must increase by at least one each between the consecutive mergers. □

Invariants 3 and 4 imply that no label can exceed n because a label of a node increases only if there is a residual path from the node to a weak root. It follows that each arc can serve as merger arc at most n − 1 times throughout the algorithm.

Corollary 6.5. The labeling pseudoflow algorithm executes at most O(mn) mergers.

We now bound the number of edge splits throughout the algorithm.

Corollary 6.6. There are at most O(mn) calls to procedure split throughout the execution of the pseudoflow algorithm.

Proof. At any iteration, there are at most n branches because there are only n nodes in the graph. Each call to split creates an additional branch. The number of branches after a merger can:
• Increase, when there are at least two splits.
• Remain unchanged, when there is exactly one split.
• Decrease by one, when there is no edge split. In this case, all the strong nodes in the branch containing the strong merger node become weak.
Because there are only O(mn) iterations, the total accumulated decrease in the number of branches can be at most O(mn) throughout the algorithm. Therefore, there can be at most O(mn + n) edge splits. □

6.3.1. Data Structures. We maintain a set of n buckets, where bucket k, Bk, contains all the k-labeled branch roots. The buckets are updated in the following cases:
(1) There is a merger and the root of the strong branch no longer serves as root. Here, this root is removed from its bucket.
(2) A root is relabeled, and then moved to another, higher label bucket.
(3) A split operation creates a new root node. Here, the new root node is inserted into the bucket of its label.
The number of bucket updates is thus dominated by the number of mergers and the number of edge splits, O(mn).

For the tree operations, we use the data structure called dynamic trees devised by Sleator and Tarjan (1983, 1985). Dynamic trees is a data structure that manages a collection of node-disjoint trees. Among the operations enabled by the dynamic trees data structure are:
• findroot(v): find the root of the tree that v belongs to.
• findmin(v): find the minimum key value on the path from v to the root of its tree.
• addcost(v, δ): add the value δ to the keys of all nodes on the path from v to the root of its tree.
• invert(v): invert the tree that v belongs to so it is rooted at v instead of at findroot(v).
• merge(u, v): link a tree rooted at u with node v of another tree so that v becomes the parent of u.
• split(u, p(u)): split the tree that u and p(u) belong to so that the descendants of u form a separate tree Tu.
All these operations, and several others, can be performed in time O(log n) per operation (either in amortized time or in worst case, depending on the version of dynamic trees implemented; see Sleator and Tarjan 1983, 1985). The only operation required for the pseudoflow algorithm which is not directly available is the operation of finding the next edge along the merger path that has residual capacity less than the amount of excess δ. We call the operation that finds the first arc on the path from v to the root of the tree with residual capacity less than δ, FIND-FIRST(v, δ).

We note that FIND-FIRST can be implemented as a minor modification of findmin(v), and in the same complexity as findmin(v), O(log n).

Lemma 6.3. The complexity of the labeling pseudoflow algorithm is O(mn log n).

Proof. The number of merger iterations is at most O(mn) and the number of splits is O(mn). Each iteration requires:
(1) Identifying a merger arc (s′, w), if one exists, with w of lowest label.
(2) Inverting and merging trees.
(3) Updating residual capacities along the merger path.
(4) Finding the next split edge on the merger path.
For operation (1), a merger arc can be identified efficiently using several possible data structures. We show, in the next section, that in all the labeling variants the search for a merger arc initiates at a strong node of a specific label. It is then established that scanning all arcs adjacent to all strong nodes of label l requires only O(m) operations. This results in O(mn) steps to identify all merger arcs throughout the algorithm, or O(1) steps on average per merger arc.
Operations (2) and (3) use dynamic trees, and operation (4) is the FIND-FIRST operation. The complexity of each of these operations is O(log n). The total complexity is therefore O(mn log n). □

7. Strongly Polynomial Variants

7.1. The Lowest Label Variant

Under the lowest label variant, the selection rule depends on the labels of strong nodes. The merger arc is selected between a lowest label strong node of label l and a node of label l − 1:

Select (s′, w) ∈ A^f for s′ ∈ S satisfying ls′ = minv∈S lv and lw = ls′ − 1;
{relabel} If no such arc exists and ∀ch(s′), lch(s′) ≥ ls′ + 1, relabel: ls′ ← ls′ + 1.

Because s′ is of lowest label among the strong nodes, the node labeled l − 1 is necessarily weak. The relabel is an increase of the label of a single strong node by one unit when it has no neighbor of lower label and all its children in the strong branch have larger labels.

Lemma 7.1. Invariants 1, 2, 3, and 4 hold for the lowest label variant.

Proof. Invariant 1 holds because the relabel of u occurs only when for all residual arcs (u, v), lu < lv + 1. After increasing the label of u by one unit, the invariant inequality still holds.
The relabel operation satisfies the monotonicity invariant, Invariant 2, by construction. Invariant 3 is satisfied because it is a corollary of Invariant 1, and Invariant 4 is satisfied by construction. □

The search for a merger arc is implemented efficiently by utilizing the monotonicity property. The strong nodes are scanned in depth-first-search (DFS) order starting from a lowest label root of a strong branch. Such a root node is found easily in the lowest label nonempty bucket. Each backtrack in the strong branch is associated with a relabel of the corresponding node. For each node, we maintain a neighbor-pointer to the last scanned neighbor in the adjacency list since the last relabel. When a node is relabeled, this pointer is reset to the start position. A node that has its pointer at the end position and has had all its neighbors scanned for a potential merger is a candidate for relabeling. Maintaining the status of out-of-tree arcs is easy, as an arc changes its status either in a merger, when the merger arc becomes an in-tree arc, or in a split, when an in-tree arc becomes out-of-tree. Either one of these cases happens only O(mn) times throughout the algorithm, and the update of status for each takes O(1). To summarize:

Lemma 7.2. Finding all merger arcs throughout a phase requires at most O(m) operations, for a total of O(mn) operations for the entire algorithm.

Proof. Let phase l be the collection of iterations when the lowest label among the strong nodes is l. From Invariant 3, it follows that there are no more than n phases. The DFS process scans all arcs in the normalized tree at most twice per phase, and all out-of-tree arcs are scanned at most once per phase. Therefore, during a single phase, the total complexity of searching for merger arcs is O(m). □

7.2. The Highest Label Variant

In the highest label variant, we select as strong merger branch the one that has a highest label root node. The mergers are still performed from a node of lowest label l in the branch (rather than among all strong nodes) to a node of label l − 1, and the relabeling rule is identical. Unlike the lowest label variant, the head of the merger arc may not be a weak node. Still, all the invariants hold, as the proof of Lemma 7.1 applies, and so does the complexity analysis. Note that the search for merger arcs at each phase is accomplished in O(m), as it relies only on the monotonicity invariant.

7.3. A Hybrid Variant

Here, any strong branch can be selected to initiate the mergers from. The merger sought at each iteration is from a lowest label strong node in the selected branch, and thus of label identical to the label of the root of the branch. As before, all the invariants hold and the complexity analysis is the same as for the lowest label variant.

7.4. Free Arcs Normalized Tree

An arc (i, j) is said to be free with respect to a pseudoflow f if it is residual in both directions, 0 < fij < uij. In the pseudoflow algorithm, a split occurs on arc (i, j)

on the merger path when the amount of pushed excess δ strictly exceeds the residual capacity c^f(i, j). Therefore, when δ = c^f(i, j), arc (i, j) remains in the tree but is not free. The free arcs variant of the algorithm splits an arc if the excess pushed is greater than or equal to the residual capacity. The split branch resulting when the excess is equal to the residual capacity has zero excess and is considered weak. With this splitting rule, all in-tree arcs are free.

The free arcs variant tends to create smaller branches. Note that the weakly polynomial complexity bound resulting from Lemma 6.1 does not apply for this case. The free arcs variant was used for the bipartite matching pseudoflow algorithm in Hochbaum and Chandran (2004), leading to particularly small (up to three nodes) branches.

7.5. Global Relabeling

Because labels are bounded by the residual distance to the sink, we can use a process of assigning labels that are equal to the respective distances to the sink, provided that monotonicity is satisfied in each branch. These labels are found using breadth-first-search (BFS) from the sink in the residual network. To preserve monotonicity, the label assigned to each node is the minimum between its distance to the sink and the maximum label of its children in the branch. We call this labeling process global relabeling, after an analogous process used for push-relabel. The frequency of use of global relabeling should be balanced against the increased complexity of performing BFS in the residual network to identify the distance labels.

7.6. Delayed Normalization

A heuristic idea is to process a merger through merger arc (s′, w) while normalizing for the strong portion of the path only, from rs′ to s′ and w. The normalization process in the weak section of the merger path is delayed, and the excess at node w is recorded. After several mergers have been performed, the normalization of the weak branches is executed jointly for all the weak nodes by a single scan of the weak branches. If several strong branches were merged to the same weak branch, the weak sections of their merger paths overlap. In that case, instead of normalizing each path separately, we normalize them jointly for a potential improvement in the running time. The extent of improvement in performance depends on the magnitude of the overlap and on the overhead required.

8. Flow Recovery

At termination of the pseudoflow algorithm, we have a minimum cut, but not a maximum feasible flow. We show in this section how to construct a feasible flow from any pseudoflow and a corresponding normalized tree, not necessarily optimal.

Definition 8.1. A feasible flow f′ is said to be associated with a normalized tree T and pseudoflow f if all free arcs of f′ are in T, and if for all out-of-tree arcs (i, j) ∈ A \ T, f′ij = fij.

If the normalized tree is optimal, then the cut between strong and weak nodes is saturated, and therefore the corresponding feasible flow is maximum.

Theorem 8.1. Every normalized tree with pseudoflow f has a feasible flow in Gst associated with it that can be constructed in O(m log n) time.

The following lemma is needed for the proof of the theorem. The concept of a strictly strong or strictly weak node refers to a node in a strong (respectively, weak) branch with strictly positive excess (respectively, deficit).

Lemma 8.1. For any strictly strong node, there exists a residual path either to the source or to some strictly weak node.

Proof. Suppose not. Let v be a strictly strong node for which there is no residual path to the source or to a strictly weak node. Therefore, the set of nodes reachable from v in the residual graph, R(v), includes only nodes with nonnegative excess.
Because no node outside R(v) is reachable from R(v) in the residual graph, inflow(R(v)) = 0, and thus in particular, inflow(R(v)) − outflow(R(v)) ≤ 0. On the other hand, for all u ∈ R(v), the excess is nonnegative: inflow(u) − outflow(u) ≥ 0. Now, because v ∈ R(v) has strictly positive excess,

0 ≥ inflow(R(v)) − outflow(R(v)) = Σu∈R(v) [inflow(u) − outflow(u)] > 0.

This leads to a contradiction of the assumption. □

An analogous argument proves that for any strictly weak node, there exists a residual path either to the sink or to a strictly strong node.

Proof of Theorem 8.1. Given the pseudoflow f corresponding to the normalized tree, a feasible flow is constructed by eliminating the positive excess at the roots of strong branches and the negative excess at the roots of (strictly) weak branches. This is done by using a process analogous to flow decomposition, which decomposes a feasible flow in an s,t-graph into a collection of up to m simple s,t-paths and cycles. To use flow decomposition, we construct a graph with source and sink nodes in which the pseudoflow f is feasible by appending nodes and arcs to the graph Gst as follows: let all strictly strong nodes in Gst with respect to f be adjacent to a sink node t̄, with arcs from the excess nodes to t̄ carrying the amount of excess. These excess nodes include the node t with excess equal to the total capacity of A(t), C(A(t)). All strictly weak nodes have arcs incoming to them from a node s̄ carrying the deficit flow. The deficit nodes include the source node s with deficit equal to the capacity of the arcs A(s). The resulting graph has a feasible flow from s̄ to t̄. Such a graph

is shown in Figure 6(a) (where the quantity excess(v) is denoted by Mv).

We first decompose the sum of all excesses (which is the portion of the flow from the excess nodes other than t to t̄). This is done by reversing the graph and the flows on each arc, with the amount of flow becoming the residual capacity of each arc in the opposite direction. This graph contains a residual path from every excess node to s̄, as proved in Lemma 8.1. Once all the excesses have been disposed of, there may be some strictly positive deficits left. These are denoted by the flows cj in Figure 6(b). All these deficits must now reach t̄ via t in the residual graph because there are no other positive excess nodes left. Again, flow decomposition is employed to send these deficits to t̄.

Figure 6. The graph in which flow decomposition generates the corresponding feasible flow. Here, Mv = excess(v). (a) The graph in which excesses are eliminated. (b) The graph after excesses have been eliminated and before applying flow decomposition to eliminate deficits. [Figure: both panels show the appended nodes s̄ and t̄ around the graph G, with deficit-branch roots rw1, rw2, …, rwp, excess-branch roots rs1, rs2, …, rsq, excess amounts Mrw1, Mrs2, …, remaining deficits c1, c2, …, cp, and the capacities C(A(s)) on the arcs A(s) and C(A(t)) on the arcs A(t).]

Finding the flow paths can be done using DFS, where at each iteration the procedure identifies a cycle or a path along which flow is pushed back, and eliminates at least one bottleneck arc. The complexity of this DFS procedure is O(mn). A more efficient algorithm for flow decomposition using dynamic trees was devised by Sleator and Tarjan (1985). The complexity of that algorithm is O(m log n). □

In an optimal normalized tree, with a set of strong nodes S, there are no residual arcs in (S, S̄). In that case, it follows from Lemma 8.1 that all positive excess is sent back to the source s using paths traversing strong nodes only, and all positive deficit is sent back to the sink t via weak nodes only. Thus, for an optimal normalized tree, the pseudoflow saturates (S, S̄), and the associated feasible flow also saturates the arcs (S, S̄). So, the flow on (S, S̄) is equal to the capacity of the cut C(S, S̄). Given the maximum blocking-cut solution and minimum cut, the maximum flow is therefore found in time O(m log n), as proved in Theorem 8.1. This is an alternative proof of the optimality of a normalized tree with C^f(S, S̄) = 0.

Remark 8.1. For closure graphs (graphs in which all arcs not adjacent to the source and sink have infinite capacity), the pseudoflow on the normalized tree is transformed to a feasible flow in O(n) time (Hochbaum 2001). That algorithm is more efficient for finding a feasible flow than the algorithm for general graphs because for closure graphs, all out-of-tree arcs must have zero flow on them, and the flow decomposition involves only the arcs within the respective branch. This is the case also for s,t-tree networks, discussed in the online appendix.

9. The Parametric Pseudoflow Algorithm

Whereas parametric analysis for general linear programming problems is restricted to a sequence of changes of one parameter at a time, for the maximum-flow problem parametric changes can be made simultaneously to all source-adjacent and sink-adjacent capacities, provided that the changes are monotone nondecreasing on one side and monotone nonincreasing on the other. In our discussion, the capacities of arcs adjacent to the source are monotone nondecreasing functions of the parameter λ, e.g., asj + λbsj for bsj ≥ 0, and the capacities of arcs adjacent to the sink are monotone nonincreasing functions, e.g., ait − λbit for bit ≥ 0. Gallo et al. (1989) showed how to solve the parametric maximum-flow problem for a sequence of k parameter values with the push-relabel algorithm in O(mn log(n²/m) + km log(n²/m)) steps. For k = O(n), this complexity is the same as required for solving a constant capacity maximum-flow instance (referred to here as a single instance).

As in the introduction, the parametric analysis for a given set of sorted parameter values, λ1 < λ2 < ⋯ < λk, is referred to as the simple sensitivity analysis. The complete parametric analysis is more general and generates all possible breakpoints b1 < b2 < ⋯ < bq, at which the

The complete parametric analysis is more general and generates all possible breakpoints b1 < b2 < ... < bq at which the minimal source set of a minimum cut is changing. (From Lemma 9.1, q ≤ n.) A complete parametric analysis solution provides the solutions for any parameter value λ. Specifically, if λ ∈ [b_l, b_{l+1}), then the minimum cut for the parameter b_l, with source set S_{b_l}, is also a minimum cut for λ. In other words, the optimal solution for any given parameter value is found from the complete parametric analysis output by identifying the consecutive pair of breakpoints between which the value lies.

To date, only the push-relabel algorithm has been shown to solve the complete parametric analysis efficiently, in the same complexity as a single instance. This was shown in Gallo et al. (1989) for linear functions of the parameter. In Hochbaum (2003), we demonstrated that the complete parametric analysis can be implemented for any monotone functions (for both push-relabel and pseudoflow) with an additive run time of O(n log U), where U is the range of the parameter. This additive run time is the complexity of finding zeroes of the monotone functions, which is provably impossible to avoid (Hochbaum 2003).

We first show that the pseudoflow algorithm solves the simple sensitivity analysis in O(mn log n + kn) time, which is the complexity of solving a single instance for k = O(m log n). The pseudoflow algorithm is then shown to solve the complete parametric analysis in O(mn log n) for linear functions, and with an additive run time of O(n log U) for arbitrary monotone functions. The pseudoflow-simplex algorithm is later shown to solve the respective parametric problems in the same complexities as the pseudoflow algorithm. It is the only known simplex algorithm that solves the simple sensitivity analysis in the same complexity as a single instance, and substantially faster than other simplex algorithms. Moreover, the pseudoflow-simplex algorithm is the only simplex algorithm known to solve the complete parametric analysis efficiently.

Let S_λ be a minimal (maximal) source set of a minimum cut in the graph G_λ in which the capacities are set as a function of λ. It is well known (e.g., Gale 1957) that as λ increases, so does the set S_λ. This can also be deduced from the construction of an optimal normalized tree: as λ increases, all excesses of branches can only go up. So, strong branches can only become stronger, while some weak branches may become strong or have lesser deficit. (The procedure of adjusting the normalized tree for changes of capacities, renormalization, is given in the online appendix.)

Lemma 9.1 (Nestedness). For k parameter values λ1 < λ2 < ... < λk, the corresponding minimal (maximal) source set minimum cuts satisfy S_λ1 ⊆ S_λ2 ⊆ ... ⊆ S_λk.

Corollary 9.1 (Contraction Corollary). For λ ∈ [λ1, λ2], a source set of a minimum cut S_λ in the graph G_λ, in which the set S_λ1 is contracted with the source and the complement of S_λ2 is contracted with the sink, is also a source set of a minimum cut in G_λ.

The key to the efficiency of the parametric solution is to leave the distance labels unchanged between consecutive calls to monotonically increasing values of the parameter. Adjusting the graph for a new parameter value in the push-relabel algorithm requires O(m log(n²/m)) time, for a total of O(mn log(n²/m) + km log(n²/m)). For the pseudoflow algorithm, the normalized tree remains the same except that it may require renormalization at a complexity of O(n) when the value of the parameter is increased. The running time is then O(mn log n + kn).

We now sketch briefly the main concepts of the complete parametric analysis algorithm of Gallo et al. (1989), mimicked here for the pseudoflow algorithm. Each call to the algorithm is made with respect to a value λ and an interval [λ1, λ2], where λ ∈ [λ1, λ2] and where the graph has the maximal source set of the minimum cut on G_λ1 shrunk with the source, and the maximal sink set of the minimum cut on G_λ2 shrunk with the sink. Each such call is provided with two free runs, one starting from G_λ1 and the other on the reverse graph starting from the solution on G_λ2. For a given interval where we search for breakpoints, we run the algorithm twice: from the lower endpoint of the interval, where the maximal source set of the cut obtained at that value is shrunk into the source, and from the highest endpoint of the interval, where the maximal sink set of the cut is shrunk into the sink. The runs proceed for the graph and reverse graph until the first one is done. The newly found cut subdivides the graph into a source set and a sink set, G1 and G2, with n1 and n2 nodes, respectively, and m1 and m2 edges, respectively. Assuming that n1 ≤ n2, then n1 ≤ n/2. In the smaller interval, corresponding to the graph on n1 nodes, two new runs are initiated from both endpoints. In the larger interval, however, we continue the previous runs using two properties:

Reflectivity: The complexity of the algorithm remains the same whether running it on the graph or the reverse graph.

Monotonicity: Running the algorithm on a monotone sequence of parameter values has the same complexity as a single run.

The implementation of the complete parametric analysis with pseudoflow requires two minimum-cut solutions, one with minimal and one with maximal source sets. From Corollary 4.2, the set of strong nodes at the optimal solution is a minimal source set. To find a maximal source set minimum cut, we solve the problem in the reverse graph. Alternatively, if using the free arcs variant, the set of strong nodes appended with all zero-deficit branches that do not have residual arcs to strictly weak branches forms a maximal source set. This collection of zero-deficit branches can also be generated by a process of normalization, even if the free arcs variant is not used.
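The nestedness property of Lemma 9.1 can be observed on a toy instance. The sketch below (Python) uses a plain BFS augmenting-path max-flow, not the pseudoflow algorithm, and a hypothetical five-arc instance; it computes the minimal source set of a minimum cut for increasing λ and checks that the sets are nested:

```python
from collections import deque, defaultdict

def min_cut_source_set(edges, s, t):
    """BFS augmenting-path max-flow; returns the minimal source set of a
    minimum cut (the nodes reachable from s in the final residual graph)."""
    cap, adj, flow = defaultdict(int), defaultdict(set), defaultdict(int)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return set(parent)  # no augmenting path: residual-reachable set
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(cap[e] - flow[e] for e in path)
        for u, v in path:
            flow[(u, v)] += delta
            flow[(v, u)] -= delta

def parametric_instance(lam):
    # Hypothetical instance: source-adjacent capacities grow with lam,
    # sink-adjacent capacities shrink with lam, internal arcs are constant.
    return [("s", "a", 1 + 2 * lam), ("s", "b", lam),
            ("a", "b", 1), ("a", "t", 4 - lam), ("b", "t", 2)]

cuts = [min_cut_source_set(parametric_instance(lam), "s", "t")
        for lam in (0, 1, 2)]
assert cuts[0] <= cuts[1] <= cuts[2]   # Lemma 9.1: nested source sets
```

For this instance the minimal source sets are {s}, {s}, and {s, a, b}: the breakpoint where the source set grows lies between λ = 1 and λ = 2.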
Using the reflectivity and monotonicity properties for the labeling pseudoflow algorithm, we choose a quantity Q to be Q = c log n for some constant c, with m1 ≤ m, m2 ≤ m, n1 + n2 ≤ n + 1, and n1 ≤ n/2. Let T(m, n) be the running time of the labeling parametric pseudoflow algorithm on a graph with m arcs and n nodes. The recursive equation satisfied by T(m, n) is

T(m, n) = T(m1, n1) + T(m2, n2) + 2Qm1n1.

The solution to this recursion is T(m, n) = O(Qmn) = O(mn log n). This omits the operation of finding the value of λ*, which is done as the intersection of the two cut functions for the parameter value λ1 and the parameter value λ2. This intersection is computed at most O(n) times, each at O(1) steps for the linear functions. Computing the intersection of arbitrary monotone functions can be done in O(log U) steps each with binary search, thus requiring an additive factor of O(n log U).

10. Pseudoflow-Simplex

A generic simplex network-flow algorithm works with a feasible flow and a spanning tree of the arcs in the basis, called the basis tree. All free arcs are basic, and are thus part of the basis tree. A simplex iteration is characterized by having one arc entering the basis, and one arc leaving the basis.

A normalized tree can serve as a basis tree in the extended network for a simplex algorithm that solves the maximum blocking-cut problem. A merger arc qualifies as an entering arc, but in the pseudoflow algorithm there are potentially numerous leaving arcs (split arcs). The simplex version of the pseudoflow algorithm removes the first arc with the bottleneck residual capacity on the cycle beginning at (r, r_s, ..., s', w', ..., r_w, r). At each iteration, there is precisely one split edge; it may be that the strong merger branch is eliminated altogether when the excess arc is the bottleneck (the residual capacities of all arcs on the cycle are greater than or equal to the amount of excess), or that the weak merger branch is eliminated when the deficit arc is the bottleneck. The requirement to remove the first bottleneck arc is shown below to be essential to retain the downwards positive residual capacities in each branch, a property of normalized trees. Removing a first bottleneck arc on the cycle has been used previously in the concept of a strong basis introduced by Cunningham (1976).

Our simplex algorithm for solving the maximum blocking-cut problem is called the pseudoflow-simplex algorithm. Although the algorithmic steps taken by the pseudoflow algorithm and the pseudoflow-simplex on the same normalized tree can lead to different outcomes in the subsequent iteration (see §12), the complexity of both algorithms is the same.

Let an out-of-tree residual arc between a strong node s' and a weak node w' be referred to as an entering arc. The cycle created by adding an entering arc to a normalized tree is (r, r_s, ..., s', w', ..., r_w, r), where r represents, as before, the root in the extended network. The largest amount of flow that can be augmented along the cycle is the bottleneck residual capacity along the cycle. The first arc (u, v) along the cycle starting with (r, r_s) attaining this bottleneck capacity δ is the leaving arc, where

δ = c^f(u, v) = min{ excess(r_s), min_{e ∈ (r_s, ..., s', w', ..., r_w)} c^f(e), deficit(r_w) } = min_{e ∈ (r, r_s, ..., s', w', ..., r_w, r)} c^f(e).

The new basis tree is T ∪ {(s', w')} \ {(u, v)}. The roots of the branches remain unchanged provided that neither u nor v is the node r. If u = r, then the excess arc is the bottleneck arc, and the strong branch T_{r_s} is eliminated and joins the weak branch T_{r_w}. If v = r, then the deficit arc is the bottleneck arc, and the weak branch is eliminated and joined with T_{r_s}. This means that throughout the algorithm, the number of branches is nonincreasing.

procedure pseudoflow-simplex (G_st, f, T, S, W):
begin
  while (S, W) ∩ A^f ≠ ∅ do
    Select (s', w') ∈ (S, W); {(s', w') is the entering arc.}
    {Leaving arc and flow update:}
    Let δ be the minimum residual capacity along the cycle (r, r_s, ..., s', w', ..., r_w, r), attained first for arc (u, v);
    {If δ > 0, push δ units of flow along the path (r_s, ..., s', w', ..., r_w, r):}
    Until v_{i+1} = r:
      Let (v_i, v_{i+1}) be the ith edge on the path;
      {Push flow} Set f(v_i, v_{i+1}) ← f(v_i, v_{i+1}) + δ;
      If f(v_i, v_{i+1}) = c(v_i, v_{i+1}) then
        A^f ← A^f ∪ {(v_{i+1}, v_i)} \ {(v_i, v_{i+1})};
      i ← i + 1;
    end
    Set T ← T \ {(u, v)} ∪ {(s', w')}.
  end
end

The four properties of a normalized tree apply at the end of each iteration: the choice of the leaving arc as the first bottleneck arc ensures that all downwards residual capacities remain positive (Property 3 of a normalized tree). This is because other arcs on this path, which after the flow update have zero residual capacity, are all in the weak side of the tree, and the zero residual capacity is in the upwards direction.

Lemma 6.1 applies to pseudoflow-simplex, so a simplex iteration results in either a strict decrease in the total (positive) excess, or at least one weak node becomes strong. The complexity of pseudoflow-simplex is thus O(nM⁺) iterations. Furthermore, the termination rule and all complexity improvements still apply. The use of the labeling-based selection rule for an entering arc with the dynamic trees data structure leads to precisely the same complexity as that of the labeling algorithm, O(mn log n), although some modification is required as noted next.
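The leaving-arc rule and the flow update can be sketched as follows (Python; the cycle is given as an ordered list of hypothetical arcs with their residual capacities, leaving out all of the tree machinery):

```python
def pivot(cycle):
    """cycle: ordered (arc, residual) pairs along (r, r_s, ..., s', w', ..., r_w, r).
    Returns the leaving arc (the FIRST arc attaining the bottleneck delta)
    and the residual capacities after pushing delta around the cycle.
    Assumes delta > 0, i.e., the entering arc creates an augmenting cycle."""
    delta = min(res for _, res in cycle)                       # bottleneck capacity
    leaving = next(arc for arc, res in cycle if res == delta)  # first arc attaining it
    updated = [(arc, res - delta) for arc, res in cycle]
    return leaving, delta, updated

# Hypothetical merger cycle: excess arc, in-tree arcs, merger arc, deficit arc.
cycle = [("excess(r,rs)", 5), ("(rs,s')", 3), ("(s',w')", 7),
         ("(w',rw)", 3), ("deficit(rw,r)", 4)]
leaving, delta, updated = pivot(cycle)
assert delta == 3 and leaving == "(rs,s')"   # tie broken toward the root r_s
```

Taking the first rather than an arbitrary bottleneck arc is what leaves any other zero-residual arcs on the weak side of the tree, preserving the downwards positive residual capacities of the normalized tree.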
Although Invariants 1, 3, and 4 hold, the monotonicity invariant does not hold for pseudoflow-simplex. The reason is that the tail of the split bottleneck arc may not become the new root of the strong tree, and no inversion takes place. So, when the bottleneck arc is within the weak branch, the labels of the weak nodes that join the strong branch are smaller than that of the strong merger node, which becomes their ancestor. This lack of monotonicity disables the efficient scanning for merger arcs with DFS. Instead, we use a dynamic tree representation of the normalized tree with an additional set of key values indicating the labels of the respective nodes. Finding a minimum key-valued node in a branch requires O(log n) time at most, and the updating of these keys can all be performed within the run time dominated by the run time of the overall algorithm, and therefore without increasing the complexity.

Because all properties of pseudoflow-simplex needed to prove the complexity of solving the maximum-flow problem and its parametric versions are the same as those of the pseudoflow algorithm, it follows that the complexity of pseudoflow-simplex is the same as that of the pseudoflow algorithm for these problems.

11. A Pseudoflow Variant of the Push-Relabel Algorithm

It is possible to set up the push-relabel algorithm as a pseudoflow-based algorithm. The advantage is that with pseudoflow, the algorithm can be initialized with any pseudoflow.³

We sketch the push-relabel algorithm: the algorithm solves the maximum-flow problem in the graph G_st, initializing with a preflow saturating the source-adjacent arcs A_s and setting all other flows to zero. The source is labeled n, the sink is labeled zero, and all other nodes are labeled one. An iteration of the algorithm consists of finding an excess node of label < 2n (or < n if we only search for a minimum cut) and pushing, along a residual arc, the minimum of the excess and of the residual capacity to a neighbor of lower label. If no such neighbor exists, then the node is relabeled to the minimum label of its out-neighbors in the residual graph plus one. When no excess node of label < 2n exists, the algorithm terminates with a maximum flow.

The push-relabel algorithm works with a set of excess nodes, but does not permit nodes with deficits. However, in the extended network G_ext, any pseudoflow is a feasible flow. In particular, deficit nodes can be represented as balanced nodes with flow on the deficit arcs equal to the deficit amount. In G_ext, it is possible to start the push-relabel algorithm with any pseudoflow that also saturates sink-adjacent arcs. This requires adding deficit arcs from the sink to all deficit nodes generated by the given pseudoflow. Adding excess arcs that go to the source from all excess nodes not adjacent to the source is not necessary, because the push-relabel algorithm can work with any arbitrary set of excess nodes. With the added deficit arcs, the push-relabel algorithm works without modification. Because the algorithm does not generate strict deficits, there is no need to add new deficit arcs during execution. At termination of the push-relabel algorithm, some of the deficit arcs may carry positive flows on them. In that case, generating a feasible flow can be done by the procedure described in §8.

Running push-relabel with pseudoflows permits the use of arbitrary initial pseudoflows, notably, the saturate-all initialization. This version of push-relabel allows the use of warm starts. Also, experimental studies we conducted show that for some classes of problems, the pseudoflow-push-relabel utilizing some of the initialization schemes runs faster than the standard push-relabel algorithm (Chandran and Hochbaum 2003).

12. Comparing the Simplex, Pseudoflow, and Push-Relabel Algorithms for Maximum Flow

Here, we compare the strategies of three generic algorithms for the maximum-flow problem: simplex (network simplex), push-relabel, and pseudoflow. We also describe the differences between the pseudoflow algorithm and pseudoflow-simplex.

Two extreme strategies are manifested in the simplex and push-relabel algorithms. Simplex is a "global" algorithm that maintains a spanning tree in the graph, and each iteration involves the entire graph. Push-relabel, on the other hand, is a "local" algorithm that can execute operations based only on information at a node and its neighbors. In that sense, push-relabel is a suitable algorithm for a distributed mode of computation, whereas simplex is not.

Our algorithm is positioned in a middle ground between simplex and push-relabel: instead of maintaining individual nodes as push-relabel does, it maintains subsets of nodes with feasible flows within the subset or branch. These subsets retain information on where the flow should be pushed to, along the paths going between the roots of the respective branches. This path information is not captured in a push-relabel algorithm, which pushes flow guided only by distance labels toward lower label nodes, without paths or partial paths information.

The subsets of nodes maintained by the pseudoflow algorithm tend to be smaller in size than those maintained by simplex, which are the subtrees rooted at a source-adjacent or a sink-adjacent node (this latter point is made clearer by contrasting the split process; see below). Moreover, compared to pseudoflow-simplex, the pseudoflow algorithm does not strive to increase flow feasibility. Pseudoflow-simplex sends only an amount of excess that does not violate the feasibility of any residual arc along the augmentation path (the merger path in our terminology). In contrast, the pseudoflow algorithm pushes the entire excess of the branch until it gets blocked. This quantity is always larger than or equal to the amount pushed by simplex.
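The generic push-relabel algorithm sketched in §11 can be written out as follows (Python; a minimal textbook implementation with an arbitrary active-node selection rule and pushes restricted to admissible arcs with label(u) = label(v) + 1, not the pseudoflow-initialized variant, and the instance data is hypothetical):

```python
from collections import defaultdict

def push_relabel_max_flow(n, edges, s, t):
    """Generic push-relabel: saturate the source-adjacent arcs, label the
    source n, the sink 0, and all other nodes 1, then repeatedly push from
    or relabel an active (excess) node until none remains."""
    cap, adj, flow = defaultdict(int), defaultdict(set), defaultdict(int)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)
    label = {v: 1 for v in range(n)}
    label[s], label[t] = n, 0
    excess = defaultdict(int)
    for v in adj[s]:                       # initial preflow saturates A_s
        flow[(s, v)] = cap[(s, v)]
        flow[(v, s)] = -cap[(s, v)]
        excess[v] = cap[(s, v)]
    active = lambda: next((v for v in range(n)
                           if v not in (s, t) and excess[v] > 0), None)
    while (u := active()) is not None:
        for v in adj[u]:                   # try an admissible push
            res = cap[(u, v)] - flow[(u, v)]
            if res > 0 and label[u] == label[v] + 1:
                d = min(excess[u], res)    # push min(excess, residual)
                flow[(u, v)] += d
                flow[(v, u)] -= d
                excess[u] -= d
                excess[v] += d
                break
        else:                              # no admissible arc: relabel
            label[u] = 1 + min(label[v] for v in adj[u]
                               if cap[(u, v)] - flow[(u, v)] > 0)
    return excess[t]                       # all excess ends at t (or back at s)

edges = [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)]
assert push_relabel_max_flow(4, edges, s=0, t=3) == 5
```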
Figure 7. Comparing the split of an edge with an update step in the simplex algorithm. [Figure not recovered in extraction: panel (a) shows the tree rooted at r with branches rooted at r_s and r_w and the identified edge (u, v); panels (b) and (c) show the branches after the simplex split and the pseudoflow split, respectively.]

To contrast the split process in pseudoflow as compared to the one in simplex, consider Figure 7. In Figure 7(a), the merger arc is added, and edge (u, v) is identified for pseudoflow as the first infeasible edge, or as the leaving arc for simplex. Figure 7(b) shows the resulting branches following a simplex split, and Figure 7(c) shows the branches after a pseudoflow split. (In general, the choice of the split arc for simplex may be different from the choice in the corresponding iteration of the pseudoflow algorithm.) Note that the rooting of the branch on the left is different. For pseudoflow-simplex, the set of roots of the normalized tree with which the procedure works is always a subset of the initial set of roots, whereas for pseudoflow, the set of the strong roots can change arbitrarily.

Another aspect in which the two algorithms differ is that pseudoflow performs several arc exchanges in the tree during a single-merger iteration, whereas simplex performs exactly one.

13. Electronic Companion

An electronic companion to this paper is available as part of the online version that can be found at http://or.journal.informs.org/.

Endnotes

1. We note that the concept of pseudoflow has been used previously in algorithms solving the minimum-cost network-flow problem.
2. We thank Michel Minoux for mentioning the Boolean quadratic minimization problem's relationship to the blocking-cut problem, and the anonymous referees for pointing out the references Radzik (1993), Gale (1957), and Hoffman (1960).
3. We are grateful to an anonymous referee for pointing out this possibility.

Acknowledgments

This research was supported in part by NSF award DMI-0620677.

References

Anderson, C., D. S. Hochbaum. 2002. The performance of the pseudoflow algorithm for the maximum flow and minimum cut problems. Manuscript, University of California, Berkeley.
Boldyreff, A. W. 1955. Determination of the maximal steady state flow of traffic through a railroad network. J. Oper. Res. Soc. Amer. 3(4) 443–465.
Chandran, B., D. S. Hochbaum. 2003. Experimental study of the pseudoflow push-relabel algorithm. Manuscript, University of California, Berkeley.
Cunningham, W. H. 1976. A network simplex method. Math. Programming 1(1) 105–116.
Dinic, E. A. 1970. Algorithm for solution of a problem of maximal flow in a network with power estimation. Soviet Math. Dokl. 11 1277–1280.
Ford, L. R., Jr., D. R. Fulkerson. 1957. A simple algorithm for finding maximal network flows and an application to the Hitchcock problem. Canad. J. Math. 9 210–218.
Gale, D. 1957. A theorem of flows in networks. Pacific J. Math. 7 1073–1082.
Gallo, G., M. D. Grigoriadis, R. E. Tarjan. 1989. A fast parametric maximum flow algorithm and applications. SIAM J. Comput. 18(1) 30–55.
Goldberg, A. V., S. Rao. 1998. Beyond the flow decomposition barrier. J. ACM 45 783–797.
Goldberg, A. V., R. E. Tarjan. 1988. A new approach to the maximum flow problem. J. ACM 35 921–940.
Goldberg, A. V., M. D. Grigoriadis, R. E. Tarjan. 1991. The use of dynamic trees in a network simplex algorithm for the maximum flow problem. Math. Programming 50 277–290.
Goldfarb, D., W. Chen. 1997. On strongly polynomial dual algorithms for the maximum flow problem. Special issue of Math. Programming, Ser. B 78(2) 159–168.
Gusfield, D., E. Tardos. 1994. A faster parametric minimum-cut algorithm. Algorithmica 11(3) 278–290.
Hochbaum, D. S. 2001. A new-old algorithm for minimum-cut and maximum-flow in closure graphs. Networks 37(4) 171–193.
Hochbaum, D. S. 2003. Efficient algorithms for the inverse spanning tree problem. Oper. Res. 51(5) 785–797.
Hochbaum, D. S., B. G. Chandran. 2004. Further below the flow decomposition barrier of maximum flow for bipartite matching and maximum closure. Submitted.
Hoffman, A. J. 1960. Some recent applications of the theory of linear inequalities to extremal combinatorial analysis. R. Bellman, M. Hall Jr., eds. Proc. Sympos. Appl. Math., Vol. 10. Combinatorial Analysis. American Mathematical Society, Providence, 113–127.
Karzanov, A. V. 1974. Determining the maximal flow in a network with a method of preflows. Soviet Math. Dokl. 15 434–437.
King, V., S. Rao, R. Tarjan. 1994. A faster deterministic maximum flow algorithm. J. Algorithms 17(3) 447–474.
Lerchs, H., I. F. Grossmann. 1965. Optimum design of open-pit mines. Trans. Canad. Inst. Mining, Metallurgy, Petroleum 68 17–24.
Malhotra, V. M., M. P. Kumar, S. N. Maheshwari. 1978. An O(|V|³) algorithm for finding maximum flows in networks. Inform. Proc. Lett. 7(6) 277–278.
Martel, C. 1989. A comparison of phase and nonphase network flow algorithms. Networks 19(6) 691–705.
Picard, J.-C. 1976. Maximal closure of a graph and applications to combinatorial problems. Management Sci. 22(11) 1268–1272.
Radzik, T. 1993. Parametric flows, weighted means of cuts, and fractional combinatorial optimization. P. M. Pardalos, ed. Complexity in Numerical Optimization. World Scientific, Hackensack, NJ, 351–386.
Sleator, D. D., R. E. Tarjan. 1983. A data structure for dynamic trees. J. Comput. System Sci. 24 362–391.
Sleator, D. D., R. E. Tarjan. 1985. Self-adjusting binary search trees. J. ACM 32 652–686.
Vygen, J. 2002. On dual minimum cost flow algorithms. Math. Methods Oper. Res. 56 101–126.