
A PERSONAL INTERVIEW WITH THE SINGULAR VALUE DECOMPOSITION

MATAN GAVISH

Part 1. Theory

1. The Polar Decomposition

In what follows, F denotes either ℝ or ℂ. The vector space F^n is an inner product space with the standard inner product ⟨·, ·⟩. Let us denote by M_{n×m}(F) the set of matrices over F with n rows and m columns. Vectors denote columns; that is, we write v ∈ F^n as a column vector, and v* for the corresponding conjugate-transposed row vector.

The following fact is known as the (Right) Polar Decomposition: let m ≤ n. Any matrix A ∈ M_{n×m}(F) may be factorized as A = U P, where U ∈ M_{n×m}(F) has orthonormal columns and P ∈ M_{m×m}(F) is positive semi-definite.

When F = ℝ and n = m, the polar decomposition makes precise the statement that every linear transformation T : ℝ^n → ℝ^n is a composition of a dilation and a rotation/reflection. The proof will imply that in this case there is an orthonormal basis (the eigenvectors of the matrix A*A) on which A acts as a dilation (P) composed with a rotation/reflection (U).


Remark. The polar decomposition follows directly from a remarkable connection between the (general) matrix A and the positive semi-definite matrix P = √(A*A).

Date: June 30, 2010.


1.1. The Matrix A*A. We first note that the matrix A*A ∈ M_{m×m}(F) is self-adjoint and positive semi-definite, since for any v ∈ F^m we have

0 ≤ ‖Av‖² = ⟨Av, Av⟩ = ⟨v, A*Av⟩ = v* A*A v .

There are several equivalent ways to define the matrix √(A*A). To name one, note that A*A can be diagonalized over F, A*A = W D W*, where W is unitary and D is diagonal. In fact, since A*A is positive semi-definite, all its eigenvalues are real and non-negative, and we can denote

D = diag(σ_1², …, σ_m²)

for some numbers 0 ≤ σ_1, …, σ_m ∈ ℝ, called the Singular Values of A. We define

√(A*A) = W diag(σ_1, …, σ_m) W* .

This notation is justified since (√(A*A))² = A*A. Note that √(A*A) is also self-adjoint and positive semi-definite.

Why is the matrix √(A*A) interesting? One reason is that for all v ∈ F^m,


‖Av‖² = ⟨Av, Av⟩ = ⟨v, A*Av⟩ = ⟨v, √(A*A) √(A*A) v⟩ = ⟨√(A*A) v, √(A*A) v⟩ = ‖√(A*A) v‖² ,

that is,

(1.1)  ‖Av‖ = ‖√(A*A) v‖ .

Besides the geometrical interpretation that A and √(A*A) have the same effect on a vector's length, this has the useful consequence

ker A = ker √(A*A) .

As m = dim ker A + rank A = dim ker √(A*A) + rank √(A*A), it also follows that

rank √(A*A) = rank A .

We denote their common value by ℓ.
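The identity (1.1) and the rank equality are easy to check numerically. The following sketch (NumPy, with a randomly drawn A as a stand-in example) builds √(A*A) from the diagonalization A*A = W D W* described above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # n = 5, m = 3, so m <= n

# Diagonalize A*A = W D W* (A*A is symmetric positive semi-definite).
eigvals, W = np.linalg.eigh(A.T @ A)
eigvals = np.clip(eigvals, 0.0, None)    # clip tiny negative round-off

# sqrt(A*A) = W diag(sigma_1, ..., sigma_m) W*
sqrtAA = W @ np.diag(np.sqrt(eigvals)) @ W.T

# (1.1): ||Av|| = ||sqrt(A*A) v|| for every v
v = rng.standard_normal(3)
print(np.linalg.norm(A @ v), np.linalg.norm(sqrtAA @ v))

# rank sqrt(A*A) = rank A
print(np.linalg.matrix_rank(sqrtAA) == np.linalg.matrix_rank(A))
```

The two printed norms agree up to floating-point error, and the rank check prints True.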


1.2. Proof of the (Right) Polar Decomposition. Since A*A is self-adjoint, there exists an orthonormal basis φ_1, …, φ_m of F^m whose elements are eigenvectors of A*A. Using our previous notation for the eigenvalues of A*A, suppose that A*A φ_i = σ_i² φ_i for i = 1, …, m. For simplicity, let us assume that σ_1, …, σ_ℓ > 0 (where m ≥ ℓ = rank A), or in other words, that φ_{ℓ+1}, …, φ_m are basis vectors for ker A = ker √(A*A) (in case A has a non-trivial kernel). To factor A = U P, let P = √(A*A) and define U ∈ M_{n×m}(F) as follows.

Consider the vectors (1/σ_1)Aφ_1, …, (1/σ_ℓ)Aφ_ℓ. This set is orthonormal: for 1 ≤ i, j ≤ ℓ, we have

⟨(1/σ_i)Aφ_i, (1/σ_j)Aφ_j⟩ = (1/(σ_i σ_j)) ⟨Aφ_i, Aφ_j⟩ = (1/(σ_i σ_j)) ⟨φ_i, A*Aφ_j⟩ = (1/(σ_i σ_j)) ⟨φ_i, σ_j² φ_j⟩ = (σ_j/σ_i) ⟨φ_i, φ_j⟩ = δ_{i,j}

(recall that σ_1, …, σ_m are real).


Remark. In fact, we discovered that φ_{ℓ+1}, …, φ_m is an orthonormal basis for ker A, and (1/σ_1)Aφ_1, …, (1/σ_ℓ)Aφ_ℓ is an orthonormal basis for Im A.

We now use the assumption m ≤ n. If ℓ < m, take an orthonormal completion ψ_{ℓ+1}, …, ψ_m such that (1/σ_1)Aφ_1, …, (1/σ_ℓ)Aφ_ℓ, ψ_{ℓ+1}, …, ψ_m constitutes an orthonormal set of vectors in F^n. This is possible since m ≤ n. Finally, define

U = [ (1/σ_1)Aφ_1 | ⋯ | (1/σ_ℓ)Aφ_ℓ | ψ_{ℓ+1} | ⋯ | ψ_m ] [ φ_1 | ⋯ | φ_m ]* ∈ M_{n×m}(F) .

Let us check that the matrix U has the desired properties. First, U has orthonormal columns: the right-hand matrix in the above product is unitary and hence has orthonormal columns, and the left-hand matrix has orthonormal columns by construction. It is simple to verify that, generally, a product of matrices with orthonormal columns again has orthonormal columns. Next, since

[ φ_1 | ⋯ | φ_m ]* φ_i = e_i , the i-th standard basis vector,


we find that

U φ_i = (1/σ_i)Aφ_i  for 1 ≤ i ≤ ℓ ,   U φ_i = ψ_i  for ℓ < i ≤ m ,

so that

U P φ_i = U σ_i φ_i = Aφ_i  for 1 ≤ i ≤ ℓ ,   U P φ_i = U 0 = 0 = Aφ_i  for ℓ < i ≤ m ,

where we have used the fact that Aφ_i = 0 for i = ℓ + 1, …, m (these vectors actually span ker A). Since the identity A = U P holds on the basis vectors φ_1, …, φ_m, we are done.
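A minimal numerical sketch of this construction (NumPy, with a random A that has full column rank, so ℓ = m and no orthonormal completion is needed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))            # m = 3 <= n = 5; full column rank, so l = m

# Orthonormal eigenbasis phi_1, ..., phi_m of A*A, with eigenvalues sigma_i^2
sig2, Phi = np.linalg.eigh(A.T @ A)
sigma = np.sqrt(np.clip(sig2, 0.0, None))

P = Phi @ np.diag(sigma) @ Phi.T           # P = sqrt(A*A)

# U = [ (1/sigma_1) A phi_1 | ... | (1/sigma_m) A phi_m ] [ phi_1 | ... | phi_m ]*
U = ((A @ Phi) / sigma) @ Phi.T

print(np.allclose(A, U @ P))               # A = U P
print(np.allclose(U.T @ U, np.eye(3)))     # U has orthonormal columns
```

Both checks print True: A = U P with U having orthonormal columns and P positive semi-definite.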

Remark. If A is invertible, ker A = ker √(A*A) implies that √(A*A) is invertible. In this case, U is uniquely determined, since U = A P⁻¹.

Remark. If A ∈ M_{n×m}(F) and m > n, we can perform a right polar decomposition for A* ∈ M_{m×n}(F). We obtain a matrix U ∈ M_{m×n}(F) with orthonormal columns such that A* = U √(AA*), or equivalently, A = √(AA*) U*. Here √(AA*) is again positive semi-definite, but now U* has orthonormal rows. This is the Left Polar Decomposition. If n = m, that is, if our matrix A is square, both decompositions are possible: there exist unitary matrices U, U′ ∈ M_{n×n}(F) such that

A = U √(A*A) = √(AA*) U′ .

It is interesting to note that, in this case, U = U′ holds if and only if A is normal (AA* = A*A).

2. The Singular Value Decomposition

2.1. Roughly Speaking. At least two different decompositions go by the name of Singular Value Decomposition (SVD):

(1) Any matrix A ∈ M_{n×m}(F) may be factorized as

A = V D W* ,

where V ∈ M_{n×p}(F) and W ∈ M_{m×p}(F) have orthonormal columns for p = min {m, n}, and D ∈ M_{p×p}(F) is diagonal with non-negative entries.


(2) Any matrix A ∈ M_{n×m}(F) may be factorized as A = V D W*, where V ∈ M_{n×n}(F) and W ∈ M_{m×m}(F) are unitary matrices, and D ∈ M_{n×m}(F) has non-negative entries on the main diagonal and zeros elsewhere. (The main diagonal of a matrix D ∈ M_{n×m}(F) is (D_11, …, D_pp) for p = min {m, n}.)
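Both definitions are available in standard software. As a sketch: NumPy's `numpy.linalg.svd` returns the factor W* (conventionally named `Vh`) and the main diagonal of D as a vector, and its `full_matrices` flag selects between the two forms:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))                 # n = 5, m = 3, p = min(m, n) = 3

# Definition (1): V is 5x3, W is 3x3, D is 3x3 diagonal
V, d, Wh = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, V @ np.diag(d) @ Wh))

# Definition (2): V is 5x5 unitary, W is 3x3 unitary, D is 5x3
Vf, df, Wfh = np.linalg.svd(A, full_matrices=True)
D = np.zeros((5, 3))
np.fill_diagonal(D, df)                         # singular values on the main diagonal
print(np.allclose(A, Vf @ D @ Wfh))
```

Both reconstructions print True.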
We will work out the SVD using the first definition.¹ Suppose that such a decomposition does exist, and denote

V = [ v_1 | ⋯ | v_p ] ;   W = [ w_1 | ⋯ | w_p ] ;   D = diag(d_1, …, d_p) .

Since W*W = V*V = I_p, for each 1 ≤ i ≤ p, we have

(2.1)  A*A w_i = W D V* V D W* w_i = W D² W* w_i = d_i² w_i

and similarly

(2.2)  AA* v_i = V D W* W D V* v_i = V D² V* v_i = d_i² v_i .

Should such a decomposition exist, then, the column w_i of W must be an eigenvector of A*A with eigenvalue d_i². Using notation from the previous section, if m ≤ n (so p = m) we must simply have

W = [ φ_1 | ⋯ | φ_m ] ;   D = diag(σ_1, …, σ_m) .

¹Matlab, for example, uses the second one.
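The eigenvector relations (2.1) and (2.2) can be confirmed directly: the columns of W (the rows of W*) are eigenvectors of A*A, and the columns of V are eigenvectors of AA*, with eigenvalues d_i². A quick NumPy check on a random example:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))

V, d, Wh = np.linalg.svd(A, full_matrices=False)
W = Wh.conj().T                                        # columns w_i

for i in range(3):
    # (2.1): A*A w_i = d_i^2 w_i
    assert np.allclose(A.T @ A @ W[:, i], d[i] ** 2 * W[:, i])
    # (2.2): AA* v_i = d_i^2 v_i
    assert np.allclose(A @ A.T @ V[:, i], d[i] ** 2 * V[:, i])
print("(2.1) and (2.2) verified")
```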


2.2. SVD from the Polar Decomposition. Formally, the SVD is an easy consequence of the polar decomposition. Assume that m ≤ n (so p = m) and A ∈ M_{n×m}(F). We use the notation defined above, and employ the right polar decomposition:

A = U √(A*A)
  = [ (1/σ_1)Aφ_1 | ⋯ | (1/σ_ℓ)Aφ_ℓ | ψ_{ℓ+1} | ⋯ | ψ_m ] [ φ_1 | ⋯ | φ_m ]* W diag(σ_1, …, σ_m) W*
  = [ (1/σ_1)Aφ_1 | ⋯ | (1/σ_ℓ)Aφ_ℓ | ψ_{ℓ+1} | ⋯ | ψ_m ] diag(σ_1, …, σ_m) W* ,

where in the last step we used W = [ φ_1 | ⋯ | φ_m ], so that [ φ_1 | ⋯ | φ_m ]* W = I_m. The three factors play the roles of V, D, and W*, respectively.

Define, then,

V = [ (1/σ_1)Aφ_1 | ⋯ | (1/σ_ℓ)Aφ_ℓ | ψ_{ℓ+1} | ⋯ | ψ_m ] ;   D = diag(σ_1, …, σ_m) ,

and we have, by definition, an SVD decomposition of A. Manifestly, (2.1) holds. Let us convince ourselves that (2.2) also holds, namely, that the column v_i of V is an eigenvector of AA*, which corresponds to the eigenvalue σ_i². Indeed, for 1 ≤ i ≤ ℓ we have

AA* v_i = AA* (1/σ_i)Aφ_i = (1/σ_i) A (A*A φ_i) = (1/σ_i) A σ_i² φ_i = σ_i² (1/σ_i)Aφ_i = σ_i² v_i .


For the case where ℓ + 1 ≤ i ≤ m, recall that by our construction ψ_{ℓ+1}, …, ψ_m ∈ (Im A)⊥. We always have (Im A)⊥ ⊆ ker AA*: indeed, fix v ∈ (Im A)⊥; then for any u ∈ F^n,

0 = ⟨AA* u, v⟩ = ⟨u, AA* v⟩ ,

which implies AA* v = 0. Now, if ℓ + 1 ≤ i ≤ m, then σ_i = 0 and v_i = ψ_i ∈ ker AA*, so that

AA* v_i = 0 = 0 · ψ_i = σ_i² v_i .

Thus, in either case, the column v_i of V is an eigenvector of AA* that corresponds to the eigenvalue σ_i².
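The whole construction of this subsection fits in a few lines of NumPy. A sketch under the simplifying assumption that A has full column rank (so all σ_i > 0 and no completion vectors ψ are needed):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))            # m = 3 <= n = 5, full column rank

# W = [ phi_1 | ... | phi_m ]: orthonormal eigenvectors of A*A; sigma_i^2 its eigenvalues
sig2, W = np.linalg.eigh(A.T @ A)
sigma = np.sqrt(np.clip(sig2, 0.0, None))

# Columns of V are (1/sigma_i) A phi_i ; D = diag(sigma_1, ..., sigma_m)
V = (A @ W) / sigma
D = np.diag(sigma)

print(np.allclose(A, V @ D @ W.T))         # A = V D W*
print(np.allclose(V.T @ V, np.eye(3)))     # V has orthonormal columns
```

Both checks print True, so V D W* is indeed an SVD of A in the sense of the first definition.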
