
Singular Value Decomposition


So far, we have seen how to decompose symmetric matrices into a product of orthogonal matrices (the eigenvector matrix) and a diagonal matrix (the eigenvalue matrix).

What about non-symmetric matrices? The key insight behind SVD is to find two orthogonal basis representations $U$ and $V$ such that for any matrix $A$, the following decomposition holds:

$$A = U \Sigma V^T$$

where $U$ and $V$ are orthogonal matrices (so $UU^T = I$ and $VV^T = I$),

and $\Sigma$ is a diagonal matrix whose entries are called singular values (hence the name singular value decomposition).
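To make this concrete, here is a minimal NumPy sketch (the matrix $A$ is an arbitrary example, not one from these slides) that computes the decomposition and checks the orthogonality claims:

```python
import numpy as np

# Arbitrary non-symmetric example matrix (not from the slides).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# np.linalg.svd returns U, the singular values, and V^T.
U, s, Vt = np.linalg.svd(A)
Sigma = np.diag(s)

# The decomposition A = U Sigma V^T holds.
assert np.allclose(A, U @ Sigma @ Vt)

# U and V are orthogonal: U U^T = I and V V^T = I.
assert np.allclose(U @ U.T, np.eye(2))
assert np.allclose(Vt.T @ Vt, np.eye(2))
```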


Singular Value Decomposition


We want to choose the orthogonal matrices in such a way that not only are their columns orthonormal, but the results of applying $A$ to those columns are also orthogonal.

That is, the $v_i$ form an orthonormal basis set (unit vectors), and moreover the images $Av_i$ are also orthogonal to each other.

We can then construct

$$u_i = \frac{Av_i}{\|Av_i\|}$$

to give us the second orthonormal basis set.

In the $2 \times 2$ case, writing $\sigma_i = \|Av_i\|$, this gives

$$A \begin{bmatrix} v_1 & v_2 \end{bmatrix} = \begin{bmatrix} \sigma_1 u_1 & \sigma_2 u_2 \end{bmatrix} = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix}$$

So, we get the following relationships:

$$AV = U \Sigma \quad\text{or}\quad U^{-1} A V = \Sigma \quad\text{or}\quad U^T A V = \Sigma$$

$$AV = U \Sigma \quad\text{or}\quad A V V^{-1} = U \Sigma V^{-1} \quad\text{or}\quad A = U \Sigma V^T$$
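A quick numerical check of this construction (again with an arbitrary invertible example matrix, so that no $\|Av_i\|$ is zero): normalizing the images $Av_i$ reproduces the columns $u_i$ of $U$, and $AV = U\Sigma$ holds.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
U, s, Vt = np.linalg.svd(A)
V = Vt.T

# The images A v_i are orthogonal; normalizing each column gives u_i.
AV = A @ V
U_built = AV / np.linalg.norm(AV, axis=0)
assert np.allclose(U_built, U)

# A v_i = sigma_i u_i, i.e. AV = U Sigma.
assert np.allclose(AV, U @ np.diag(s))
```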


Finding Orthonormal Basis Representations for SVD


How do we go about finding $U$ and $V$?

Here is the trick. We can “eliminate” $U$ from the equation $AV = U\Sigma$ by premultiplying $A$ by its transpose:

$$A^T A = (U \Sigma V^T)^T (U \Sigma V^T) = (V \Sigma^T U^T)(U \Sigma V^T) = V \Sigma^T \Sigma V^T$$

Since $A^T A$ is always symmetric (why?), the above expression gives us exactly the familiar spectral decomposition we have seen before (namely, $Q \Lambda Q^T$), except that now $V$ represents the orthonormal eigenvector set of $A^T A$.

In a similar fashion, we can “eliminate” $V$ from the equation $AV = U\Sigma$ by postmultiplying $A$ by its transpose:

$$AA^T = (U \Sigma V^T)(U \Sigma V^T)^T = (U \Sigma V^T)(V \Sigma^T U^T) = U \Sigma \Sigma^T U^T$$

The diagonal matrix $\Sigma^T \Sigma$ (equivalently $\Sigma \Sigma^T$) is now the eigenvalue matrix of $A^T A$ (and of $AA^T$), so the singular values are the square roots of these eigenvalues: $\sigma_i = \sqrt{\lambda_i}$. Similarly, $U$ is the orthonormal eigenvector set of $AA^T$.
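Here is a sketch of this recipe in NumPy (same arbitrary example matrix as before; note that eigenvectors, and hence singular vectors, are only determined up to sign, so the comparison is made up to sign):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Eigendecompositions of the symmetric matrices A^T A and A A^T.
# np.linalg.eigh returns eigenvalues in ascending order; flip to descending.
lam_v, V = np.linalg.eigh(A.T @ A)
lam_v, V = lam_v[::-1], V[:, ::-1]
lam_u, U = np.linalg.eigh(A @ A.T)
lam_u, U = lam_u[::-1], U[:, ::-1]

# The singular values are the square roots of the shared eigenvalues.
sigma = np.sqrt(lam_v)

# Compare against NumPy's SVD (up to the sign of each column).
U_svd, s, Vt_svd = np.linalg.svd(A)
assert np.allclose(sigma, s)
assert np.allclose(np.abs(V), np.abs(Vt_svd.T))
assert np.allclose(np.abs(U), np.abs(U_svd))
```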


Examples of SVD


Find the SVD of the following matrices:

$$A = \begin{bmatrix} 2 & 2 \\ 1 & 1 \end{bmatrix} \qquad\qquad A = \begin{bmatrix} 2 & 1 \\ 2 & 1 \end{bmatrix}$$
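As a numerical check on the second matrix (the entries above are reconstructed from a garbled original, so treat them as an assumption): it has rank 1, because the rows repeat, so there is a single nonzero singular value.

```python
import numpy as np

# Assumed reconstruction of the second exercise matrix (rank 1).
A = np.array([[2.0, 1.0],
              [2.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
print(s)                             # ~[3.1623, 0.0], i.e. sigma_1 = sqrt(10)

# Cross-check: the eigenvalues of A^T A are sigma_i^2.
print(np.linalg.eigvalsh(A.T @ A))   # ~[0.0, 10.0]
```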


SVD Technology: How Google™ works


Google™ uses SVD to accelerate finding relevant web pages. Here is a brief explanation of the general idea.

Define a web site as an authority if many sites link to it.

Define a web site as a hub if it links to many sites.

We want to compute a ranking $x_1, \ldots, x_N$ of authorities and $y_1, \ldots, y_M$ of hubs.

As a first pass, we can compute the ranking scores as follows: $x_i^0$ is the number of links pointing to $i$, and $y_i^0$ is the number of links going out of $i$.

But not all links should be weighted equally. For example, links from authorities (or hubs) should count more. So, we can revise the rankings as follows:

$$x_i^1 = \sum_{j \,\text{links to}\, i} y_j^0 \qquad\qquad y_i^1 = \sum_{i \,\text{links to}\, j} x_j^0$$

or, in matrix form, $x^1 = A^T y^0$ and $y^1 = A x^0$.

Here, $A$ is the adjacency matrix, where $a_{ij} = 1$ if $i$ links to $j$.
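A small sketch of these scores, using a hypothetical four-page link graph (the adjacency matrix below is made up for illustration):

```python
import numpy as np

# Hypothetical link graph: a[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 1],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# First pass: x0[i] = links pointing to i (column sums),
#             y0[i] = links going out of i (row sums).
x0 = A.sum(axis=0)
y0 = A.sum(axis=1)

# One refinement: weight each link by the score of its source.
x1 = A.T @ y0   # authority: sum of y0[j] over all pages j linking to i
y1 = A @ x0     # hub: sum of x0[j] over all pages j that i links to
print(x1, y1)
```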


SVD Technology: How Google™ works


Clearly, we can keep iterating, and compute

$$x^k = A^T A \, x^{k-1} \qquad\text{and}\qquad y^k = A A^T \, y^{k-1}$$

This is basically doing an iterative SVD computation, and as $k$ increases, the largest eigenvalue $\sigma_1^2$ dominates.
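A sketch of this iteration in NumPy (reusing the hypothetical link graph from the previous sketch, and normalizing each step so the iterates stay bounded): assuming the top singular value is strictly dominant, $x^k$ lines up with the top eigenvector of $A^TA$, i.e. the top right singular vector of $A$.

```python
import numpy as np

# Same hypothetical link graph as in the previous sketch.
A = np.array([[0, 1, 1, 1],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

x = np.ones(A.shape[1])
for _ in range(100):
    x = A.T @ (A @ x)        # x^k = A^T A x^{k-1}
    x /= np.linalg.norm(x)   # keep the iterate at unit length

# x now approximates the dominant authority-score vector,
# the top right singular vector of A.
_, _, Vt = np.linalg.svd(A)
print(abs(x @ Vt[0]))        # ~1.0: aligned with the top singular vector
```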

Google™ is doing an SVD over a matrix $A$ of size $3 \times 10^9$ by $3 \times 10^9$!
