
A note on Kirchhoff's formula and Wilson's algorithm

Ron Rosenthal
Based on a basic notions lecture given on 2.5.2013 by Shlomo Sterenbrg
May 6, 2013
1 Introduction
The goal of this note is to give a relatively simple proof of Kirchhoff's formula and of the theorem behind Wilson's algorithm (both to be stated below). Let G = (V, E) be a finite connected graph. A spanning tree of G is a subgraph H = (V_H, E_H) ⊆ G such that H is a tree and V_H = V. Denote by τ = τ(G) the number of spanning trees of G.
To each graph G as above we can associate an operator Δ : C(V) → C(V) given by

Δf(v) = deg(v) f(v) − ∑_{w∼v} f(w),

where w ∼ v means that w is a neighbor of v, i.e. {v, w} ∈ E. If f is a constant function then Δf = 0, and therefore zero is always an eigenvalue of the Laplacian. Since we assumed that G is connected, the constant functions are the only eigenfunctions with eigenvalue zero. Moreover, Δ is self-adjoint and positive semi-definite. Thus we can write the eigenvalues of Δ as a sequence

0 = λ_1 < λ_2 ≤ λ_3 ≤ ··· ≤ λ_n,
where |V | = n.
We are now ready to state Kirchhoff's formula.
Theorem 1.1 (Kirchhoff's formula). For every finite connected graph G,

τ = (1/n) · ∏_{i=2}^{n} λ_i.
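For example, for the triangle K_3 the Laplacian eigenvalues are 0, 3, 3, so the formula gives τ = (1/3) · 3 · 3 = 3, in agreement with the three spanning trees obtained by deleting any single edge of the triangle.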
Next we turn to define loop erased random walks (LERW). Given a finite graph G = (V, E) and a path γ = (v_1, v_2, ..., v_m) of a random walk in it, we define the loop erasure of γ as the new simple path created by erasing all the loops of γ in chronological order. Formally, we define i_1 = 1 and inductively i_{j+1} = max{ i ≤ m : γ(i) = γ(i_j) } + 1 (if this index exceeds m, that is, γ(i_j) is the last vertex of the path, we stop the process). Then the loop erased version of γ, denoted by LE(γ), is the path ( γ(i_1), γ(i_2), ..., γ(i_J) ), where J is the last index in the induction process.
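For example, if γ = (v_1, v_2, v_3, v_1, v_4), with all vertices distinct except for the repeated v_1, then i_1 = 1 and the last visit to γ(i_1) = v_1 occurs at time 4, so i_2 = 5; the next index would exceed m = 5, so the process stops and LE(γ) = (v_1, v_4): the loop v_1 v_2 v_3 v_1 is erased in a single step.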
Wilson's algorithm suggests a method to construct a spanning tree of any finite connected graph.
Wilson's algorithm
Fix some ordering of the vertices V = {v_1, v_2, ..., v_n}.
1. Denote A_1 = {v_1} and take a path γ_1 of simple random walk starting from v_2 until it hits A_1.
2. Define A_2 = A_1 ∪ LE(γ_1).
3. If A_2 = V the algorithm is finished. Otherwise, take a path γ_2 of simple random walk starting from the vertex with the minimal index in V \ A_2 until it hits A_2.
4. Define A_3 = A_2 ∪ LE(γ_2).
5. Continue inductively until all vertices are visited.
Note that the resulting graph is a spanning tree of G; a minimal code sketch of the procedure is given below.
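For concreteness, here is a minimal Python sketch of the algorithm. The function names loop_erase and wilson, and the adjacency-list dictionary adj, are our own choices (not part of the note); the sketch assumes a finite connected simple graph and returns the edge set of the sampled tree.

import random

def loop_erase(path):
    # Chronological loop erasure: repeatedly jump to the last visit of the current vertex.
    erased, i = [], 0
    while i < len(path):
        v = path[i]
        erased.append(v)
        i = max(j for j in range(i, len(path)) if path[j] == v) + 1
    return erased

def wilson(adj, order):
    # adj: dict mapping each vertex to the list of its neighbors (finite connected graph).
    # order: the fixed ordering v_1, ..., v_n of the vertices; order[0] plays the role of A_1 = {v_1}.
    in_tree = {order[0]}
    tree_edges = set()
    for start in order[1:]:
        if start in in_tree:
            continue
        # simple random walk from `start` until it hits the current tree
        walk = [start]
        while walk[-1] not in in_tree:
            walk.append(random.choice(adj[walk[-1]]))
        branch = loop_erase(walk)
        tree_edges.update(frozenset(e) for e in zip(branch, branch[1:]))
        in_tree.update(branch)
    return tree_edges

# Example: the 4-cycle. Each run returns one of its four spanning trees.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(wilson(C4, [0, 1, 2, 3]))

Theorem 1.2 below asserts that the output of this procedure is uniform over the spanning trees of G.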
The following theorem is the main result regarding the algorithm:
Theorem 1.2 (Wilson's algorithm). The random spanning tree constructed by the algorithm is uniformly distributed
among all spanning trees of G.
2 Proof details
2.1 Simple result from linear algebra
Claim 2.1. Let M be an n × n matrix and (i_1, i_2, ..., i_n) any ordering of the numbers 1, 2, ..., n. Denote M_{(0)} = M, and for 1 ≤ j ≤ n − 1 let M_{(j)} be the matrix obtained from M by removing the i_k-th rows and columns for k = 1, ..., j. Keep the index names as in the original matrix M, e.g., after removing the i_1-th row and column, and assuming that i_2 > i_1, the row that was originally the i_2-th row and is now the (i_2 − 1)-th one is still called the i_2-th row. Then

1/det M = ∏_{j=0}^{n−1} ( M_{(j)}^{−1} )_{i_{j+1}, i_{j+1}}.
Remark 2.2. Note that the result is independent of the order in which the indices are removed. This fact will play a crucial role in what follows, and it is the reason that the order in which we erase loops in the LERW is not important.
Example 2.3. The case n = 2. Let M = [ a b ; c d ]. Using the order i_1 = 1, i_2 = 2 we get

∏_{j=0}^{1} ( M_{(j)}^{−1} )_{i_{j+1}, i_{j+1}} = ( M_{(0)}^{−1} )_{i_1, i_1} · ( M_{(1)}^{−1} )_{i_2, i_2} = ( M^{−1} )_{1,1} · d^{−1} = ( d/(ad − bc) ) · (1/d) = 1/(ad − bc) = 1/det M.
Proof. The proof follows by induction. The case n = 1 is immediate. Assume that the claim holds for n − 1 and let M be an n × n matrix. Applying the induction assumption to the (n − 1) × (n − 1) matrix M_{(1)} we get that

∏_{j=1}^{n−1} ( M_{(j)}^{−1} )_{i_{j+1}, i_{j+1}} = 1/det M_{(1)}.

Consequently, using the fact that ( A^{−1} )_{i,i} = (1/det A) (adj A)_{i,i} = det A_{(i)} / det A, it follows that

∏_{j=0}^{n−1} ( M_{(j)}^{−1} )_{i_{j+1}, i_{j+1}} = ( M_{(0)}^{−1} )_{i_1, i_1} · (1/det M_{(1)}) = ( det M_{(1)} / det M ) · ( 1/det M_{(1)} ) = 1/det M,

as required.
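The claim is also easy to check numerically. The following sketch (our own, not part of the note) uses numpy with a positive definite test matrix, so that every principal submatrix is invertible, and compares the product of diagonal entries with 1/det M.

import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
M = A @ A.T + np.eye(n)          # positive definite, so every principal submatrix is invertible
order = list(rng.permutation(n)) # an arbitrary ordering i_1, ..., i_n of the indices

prod, remaining = 1.0, list(range(n))
for idx in order:
    sub = M[np.ix_(remaining, remaining)]   # the matrix M_(j): removed rows and columns deleted
    pos = remaining.index(idx)              # position of the original index idx inside M_(j)
    prod *= np.linalg.inv(sub)[pos, pos]    # the diagonal entry (M_(j)^{-1})_{i_{j+1}, i_{j+1}}
    remaining.remove(idx)

print(prod, 1.0 / np.linalg.det(M))         # the two numbers agree up to floating point error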
2.2 Markov chains with sinks
Before turning to the proofs of Theorems 1.1 and 1.2, we discuss the model of finite Markov chains with sinks.
Definition 2.4. Let A be a finite set. A homogeneous Markov chain M on the state space A is defined by a matrix M = (M_{i,j})_{i,j∈A} such that M_{i,j} ≥ 0 for every i, j ∈ A and ∑_{j∈A} M_{i,j} = 1 for every i ∈ A. The Markov chain is then a stochastic process {X_k}_{k≥0} defined by the transition law

P( X_{k+1} = j | X_k = i ) = M_{i,j}.

If we also specify the initial distribution P(X_0 = i) = μ_i (which we will usually take to be just the indicator of some i_0 ∈ A), this defines the process completely.
Definition 2.5. Let M = (M_{i,j})_{i,j∈A} be a homogeneous Markov chain on the finite state space A.
1. We say that a point i ∈ A is a sink of the Markov chain if M_{i,i} = 1. This immediately implies that M_{i,j} = 0 for every j ≠ i. We denote by ∂A the set of sinks of the Markov chain M.
2. The Markov chain is called sink irreducible if for every i ∈ A there exist a sink j ∈ ∂A and k ∈ ℕ such that P( X_k = j | X_0 = i ) > 0, i.e., we can get from every state in A to one of the sinks in finite time.
From now on we assume that our Markov chain has at least one sink and that it is sink irreducible. Given such a Markov chain, up to a permutation of the states, we can write the matrix associated with M in the block form

M = [ I 0 ; R Q ],

where I is the |∂A| × |∂A| identity matrix and Q contains the transition probabilities between the states of A \ ∂A.
Next we claim that the matrix I − Q is invertible and that its inverse is given by ∑_{k=0}^{∞} Q^k. This follows once we show that the spectral radius of Q is strictly less than 1, which is a consequence of sink irreducibility.
Denote G = (I − Q)^{−1}. The matrix G is the Green function of the Markov chain, and for i, j ∈ A \ ∂A we have

G_{i,j} = E[ number of visits to j starting at i before hitting ∂A ].

Indeed, denote T_{i,j} = E[ number of visits to j starting at i before hitting ∂A ]. Using the definition of the Markov chain (by conditioning on the first step) we get

T_{i,j} = δ_{i,j} + ∑_{k ∈ A\∂A} Q_{i,k} T_{k,j}.

In matrix form this reads T = I + QT, and therefore T = (I − Q)^{−1} = G.
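As a quick illustration (our own example, not from the note), the following numpy sketch builds a four-state chain in which state 0 is a sink, extracts the block Q, and computes G = (I − Q)^{−1}; the row sums of G recover the expected number of steps before absorption.

import numpy as np

# A four-state chain: state 0 is a sink, and states 1, 2, 3 perform simple random
# walk on the path 0 - 1 - 2 - 3 (so state 3 moves to 2 with probability 1).
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
Q = M[1:, 1:]                        # the block Q from the decomposition M = (I 0; R Q)
G = np.linalg.inv(np.eye(3) - Q)     # Green function G = (I - Q)^{-1}
print(G)                             # G[i, j] = expected visits to state j+1 started from state i+1
print(G @ np.ones(3))                # row sums = expected number of steps before absorption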
2.3 Generalized Wilson's algorithm
Let M be a Markov chain on the finite state space A as defined in Subsection 2.2. The generalized Wilson algorithm is defined as follows:
1. Denote A_0 = ∂A.
2. Assume now that the set A_j is defined and A \ A_j ≠ ∅. Fix some v ∈ A \ A_j and sample a path γ_j of loop erased random walk (with transitions given by the Markov chain M) from v until it hits A_j.
3. Denote A_{j+1} = A_j ∪ γ_j (including all edges the path passed through).
4. If A_{j+1} = A the algorithm is complete. Otherwise return to step 2 with j replaced by j + 1.
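The Python sketch given in the Introduction adapts directly to this setting, again only as a rough sketch under our own naming: initialize in_tree with the set of sinks ∂A instead of a single root, and replace the uniform neighbor choice random.choice(adj[v]) by a draw from the row M_{v,·}; the loop erasure step is unchanged.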
Remark 2.6.
1. Note that, unlike in the original Wilson's algorithm, the resulting graph is not necessarily connected. In the general case the resulting graph is a spanning forest.
2. The last algorithm also defines an ordering on the vertices of A \ ∂A, as follows. Denote

γ_j = ( y^j_1, ..., y^j_{|γ_j|−1}, y^j_{|γ_j|} ),

where |γ_j| is the length of γ_j (recall that the last vertex of the path belongs to A_j). The ordering is then given by

y^0_1, y^0_2, ..., y^0_{|γ_0|−1}, y^1_1, y^1_2, ..., y^1_{|γ_1|−1}, ..., y^J_1, y^J_2, ..., y^J_{|γ_J|−1},

where J is the index of the last path sampled by the algorithm (so that A_{J+1} = A).
Definition 2.7. Given a Markov chain M as above and a spanning forest T of A, we define the probability of the forest as

p(T) = ∏_{j=0}^{J} ∏_{i=1}^{|γ_j|−1} M_{ y^j_i , y^j_{i+1} }.
Remark 2.8. Back in the original Wilson's algorithm, where A is the set of vertices of the graph G, M is the transition matrix of the simple random walk and ∂A = {v_0} for some choice of v_0 ∈ V, we get that

p(T) = ∏_{v ∈ V, v ≠ v_0} 1/deg(v),

since the resulting spanning tree has exactly one edge going out of each vertex except the vertex v_0, and the probability of using this edge is exactly one over the degree of the edge's origin vertex.
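For instance, on the cycle with four vertices every vertex has degree 2, so p(T) = (1/2)^3 = 1/8 for each of its four spanning trees, regardless of the choice of v_0. Note that these numbers need not sum to 1; the normalization is supplied by the factor det G appearing in Lemma 2.11 below (here det G = 2, so each tree is produced with probability 1/4).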
2.4 Two observations regarding the Laplacian of a graph
Let G be a finite graph and Δ the Laplace operator on it.
Claim 2.9. There exists a constant C such that

adj Δ = C · J_n,    (1)

where J_n denotes the n × n all-ones matrix.
Proof. Recall that zero is an eigenvalue of the Laplacian and that the space of eigenfunctions corresponding to zero is the space of constant functions. This implies that

det Δ = ∏_{i=1}^{n} λ_i = 0,

and therefore

Δ · adj Δ = det Δ · I = 0.

The last equation says that every column of adj Δ lies in the kernel of Δ; since G is connected, this kernel consists of the constant functions, so each column of adj Δ is constant. In addition, the symmetry of Δ implies that adj Δ is symmetric as well, and therefore the rows are also constant. Altogether this implies that adj Δ is a constant matrix, i.e. a multiple of J_n.
Claim 2.10. The constant C from Claim 2.9 satisfies

C = (1/n) · ∏_{i=2}^{n} λ_i.
Proof. Denote by P(x) = det(xI − Δ) the characteristic polynomial of Δ. On the one hand, using the observations from the introduction on the eigenvalues of Δ, it follows that P is of the form

P(x) = x (x − λ_2)(x − λ_3) ··· (x − λ_n),

and therefore the coefficient of x in the polynomial is exactly (−1)^{n−1} ∏_{i=2}^{n} λ_i. On the other hand, by Jacobi's formula, the coefficient of x is also given by

P′(0) = [ (d/dx) det(xI − Δ) ]_{x=0} = [ trace( adj(xI − Δ) · (d/dx)(xI − Δ) ) ]_{x=0} = trace( adj(−Δ) ) = (−1)^{n−1} n C.

Comparing the two expressions gives the result.
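A small numerical check of Claims 2.9 and 2.10 (our own, using numpy on the 4-cycle) illustrates that the eigenvalue product (1/n) ∏_{i=2}^{n} λ_i and any cofactor of the Laplacian give the same number, which here equals 4, the number of spanning trees of the 4-cycle.

import numpy as np

# Laplacian of the 4-cycle (all degrees equal to 2).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

eigs = np.sort(np.linalg.eigvalsh(L))   # eigenvalues 0, 2, 2, 4
n = L.shape[0]
print(np.prod(eigs[1:]) / n)            # (1/n) * lambda_2 * ... * lambda_n = 4.0

# Any cofactor of the Laplacian gives the same constant C:
print(np.linalg.det(np.delete(np.delete(L, 0, axis=0), 0, axis=1)))  # also 4.0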
2.5 Main Lemmas
The proofs of both theorems follow from the following two lemmas.
Lemma 2.11. Let M be a Markov chain on a finite state space A with set of sinks ∂A ⊆ A. The probability that T is the forest (tree) produced by Wilson's algorithm is

p(T) · det G,

where G is the Green function defined in Subsection 2.2.
Lemma 2.12. For any finite connected graph G = (V, E), fix a vertex v_0 ∈ V and define M to be the Markov chain obtained from the simple random walk by turning the vertex v_0 into a sink. Then

det G = ∏_{v∈V} deg(v) / ( deg(v_0) · C ),

where C is the constant from Claim 2.9 and G is the corresponding Green function.
Proof of the main theorems assuming Lemmas 2.11 and 2.12. Using both lemmas together with Remark 2.8, the probability that a given spanning tree T is the result of Wilson's algorithm is

p(T) · det G = ( deg(v_0) / ∏_{v∈V} deg(v) ) · ( ∏_{v∈V} deg(v) / ( deg(v_0) · C ) ) = 1/C,

which does not depend on T. Consequently the distribution is uniform. Summing over all spanning trees gives τ(G)/C = 1, so C = τ(G), and together with Claim 2.10 this yields τ = (1/n) ∏_{i=2}^{n} λ_i, i.e. Kirchhoff's formula.
Proof of Lemma 2.12. Without loss of generality assume that v_0 corresponds to the first row and column in the matrix of Δ. Recall that Δ = D − A, where D is the diagonal degree matrix and A is the adjacency matrix. Denote by D_{(1)}, A_{(1)} and Δ_{(1)} the matrices obtained from D, A and Δ respectively by removing the first row and column, and let Q = ( D_{(1)} )^{−1} A_{(1)} be the transition sub-matrix of the simple random walk restricted to V \ {v_0}. It follows that

I − Q = I − ( D_{(1)} )^{−1} A_{(1)} = ( D_{(1)} )^{−1} ( D_{(1)} − A_{(1)} ) = ( D_{(1)} )^{−1} Δ_{(1)}.

Consequently,

det(G)^{−1} = det(I − Q) = det( ( D_{(1)} )^{−1} Δ_{(1)} ) = det( Δ_{(1)} ) / det( D_{(1)} ) = adj(Δ)_{1,1} / det( D_{(1)} ) = C · deg(v_0) / ∏_{v∈V} deg(v).

Taking reciprocals gives the claimed formula for det G.
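A numerical check of the lemma (our own, again on the 4-cycle with v_0 the first vertex): build Q for the walk with v_0 turned into a sink, compute det G directly, and compare with ∏_{v} deg(v) / ( deg(v_0) · C ).

import numpy as np

# Lemma 2.12 on the 4-cycle, with v_0 taken to be vertex 0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A                       # the Laplacian Delta = D - A

Q = A[1:, 1:] / deg[1:, None]              # transition sub-matrix of the walk restricted to V \ {v_0}
G = np.linalg.inv(np.eye(3) - Q)           # Green function of the chain with v_0 turned into a sink

C = np.linalg.det(np.delete(np.delete(L, 0, axis=0), 0, axis=1))  # C = adj(Delta)_{1,1} = 4
lhs = np.linalg.det(G)
rhs = np.prod(deg) / (deg[0] * C)          # prod_v deg(v) / (deg(v_0) * C)
print(lhs, rhs)                            # both equal 2.0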
Proof of Lemma 2.11. Fix some spanning forest (tree) T in A and assume that the ordering of the vertices used in the algorithm is {v_1, v_2, ..., v_n}, with v_1 the sink. The probability that T is the resulting spanning forest is computed as follows. The process starts with a path from v_2 to some fixed vertex v_j determined by T. The probability that this occurs is

∑_{γ : v_2 → v_j, v_1 ∉ γ, LE(γ) = (v_2, v_j)} P(γ),

where P is the probability of the path with respect to the law of the simple random walk. Each such path can be written as the concatenation of a closed path from v_2 to itself (avoiding v_1) and the single step (v_2, v_j). Since the probability of this last step is exactly 1/deg(v_2), we can write the probability above as

(1/deg(v_2)) ∑_{γ : v_2 → v_2, v_1 ∉ γ} P(γ)
  = (1/deg(v_2)) ∑_{n=0}^{∞} ∑_{γ : v_2 → v_2, v_1 ∉ γ, |γ| = n} P(γ)
  = (1/deg(v_2)) ∑_{n=0}^{∞} (Q^n)_{v_2, v_2}
  = (1/deg(v_2)) ( (I − Q)^{−1} )_{v_2, v_2}
  = (1/deg(v_2)) G_{v_2, v_2}
  = (1/deg(v_2)) ( G_{(0)} )_{v_2, v_2}.
Turning to the next part of the forest, we know that T contains a path from v_j to another vertex v_k, but now the walk cannot use either v_1 or v_2: indeed, if the path had used the vertex v_2, this would close an additional loop through v_2, which contradicts the construction so far. Repeating the last computation, the probability of this event is given by

(1/deg(v_j)) ∑_{γ : v_j → v_j, v_1, v_2 ∉ γ} P(γ),

which is the same expression as before, except that we now think of both v_1 and v_2 as sinks. Thus the last term is exactly

(1/deg(v_j)) ( G_{(1)} )_{v_j, v_j},

where G_{(1)} is the Green function of the Markov chain obtained by observing the simple random walk with the two sinks v_1, v_2. The proof now continues by induction. Multiplying the above probabilities, we get that the probability to find the spanning forest T is

( ∏_{v ∈ V, v ≠ v_1} 1/deg(v) ) · ∏_{j=0}^{n−2} ( G_{(j)} )_{y_j, y_j},

where {y_j}_{j=0}^{n−2} is the order in which the vertices are added (so y_0 = v_2). By Claim 2.1, applied to the matrix G^{−1} = I − Q with the indices removed in the order y_0, y_1, ..., y_{n−2}, the second product equals 1/det(G^{−1}) = det(G). Recalling from Remark 2.8 that the first product is p(T) (with v_1 playing the role of v_0 there), the probability above equals

p(T) · det(G),

as required.
