
UNIVERSITY OF CALIFORNIA, BERKELEY
Department of Civil Engineering
Structural Engineering, Mechanics and Materials
Fall 2003
Professor: S. Govindjee

Miscellaneous Linear Algebra: Eigenvalues and Eigenvectors

Definition of Eigenvalues and Eigenvectors


Given a tensor $A$, the scalar $\lambda$ and the vector $p$ which satisfy

$$Ap = \lambda p \qquad (1)$$

are known as an eigenvalue and eigenvector of $A$. Note that these eigenpairs are not unique. As a means of finding the eigenpairs, note that (1) can be written as

$$(A - \lambda 1)p = 0 . \qquad (2)$$

The only way for (2) to have a non-trivial solution is for the tensor in the parentheses to have zero determinant. Thus, the governing equation for the eigenvalues is given by

$$\det[A - \lambda 1] = 0 . \qquad (3)$$

This can be expanded to give a (characteristic) polynomial in $\lambda$ of degree $n$, where $n$ is the dimension of the underlying space. For the case of second order tensors in three dimensions this expansion gives

$$-\lambda^3 + I_A \lambda^2 - II_A \lambda + III_A = 0 , \qquad (4)$$

where $I_A = \mathrm{tr}[A] = \lambda_1 + \lambda_2 + \lambda_3$, $II_A = \frac{1}{2}\{(\mathrm{tr}[A])^2 - \mathrm{tr}[A^2]\} = \lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_3\lambda_1$, and $III_A = \det[A] = \lambda_1\lambda_2\lambda_3$ are the 3 invariants of $A$. Solving this polynomial for the $\lambda$'s gives the eigenvalues of $A$. Once the eigenvalues are known, Eq. (2) can be used to determine the corresponding eigenvectors. Note that the eigenvectors are not unique; by convention we will always normalize them to have unit length.
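As a quick numerical illustration (not part of the original notes), the three invariants and Eq. (4) can be checked with numpy; the matrix values below are hypothetical example data chosen to give simple eigenvalues:

```python
import numpy as np

# A symmetric second order tensor, written as a 3x3 matrix
# (illustrative values, not from the notes).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# The three invariants appearing in Eq. (4)
I_A   = np.trace(A)
II_A  = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
III_A = np.linalg.det(A)

# Eigenvalues computed directly (ascending order); lams ≈ [1., 2., 4.]
lams = np.linalg.eigvalsh(A)

# Each eigenvalue is a root of -lam^3 + I_A lam^2 - II_A lam + III_A = 0
residuals = -lams**3 + I_A * lams**2 - II_A * lams + III_A
```

The invariants also recover the sum and product of the eigenvalues, which is a convenient sanity check on hand computations.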

Eigenvalues are real for S3

Eigenvalues are real for a real-valued symmetric tensor. Proof: Assume that $\lambda$ and $n$ are an eigenpair; i.e.

$$\epsilon n = \lambda n . \qquad (5)$$

Conjugating (5), and noting that $\bar\epsilon = \epsilon$ since $\epsilon$ is real, gives

$$\epsilon \bar{n} = \bar\lambda \bar{n} , \qquad (6)$$

where the bar indicates complex conjugation. Dot (6) by $n$:

$$n \cdot \epsilon \bar{n} = \bar\lambda \, n \cdot \bar{n} . \qquad (7)$$

Dot (5) by $\bar{n}$:

$$\bar{n} \cdot \epsilon n = \lambda \, \bar{n} \cdot n . \qquad (8)$$

Apply the definition of the transpose to (8), using $\epsilon = \epsilon^T$, to give

$$\bar{n} \cdot \epsilon n = \epsilon^T \bar{n} \cdot n = \epsilon \bar{n} \cdot n = n \cdot \epsilon \bar{n} . \qquad (9)$$

Compare (9) to (7) to reveal that

$$\lambda \, \bar{n} \cdot n = \bar\lambda \, n \cdot \bar{n} , \qquad (10)$$

which, since $\bar{n} \cdot n > 0$, implies that $\lambda = \bar\lambda$. Thus we have that $\lambda$ is a real number.
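The claim can be observed numerically (an illustrative check, not from the notes): symmetrizing a random real matrix always yields purely real eigenvalues, even when computed with a general (complex-capable) eigensolver:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
S = B + B.T            # symmetrize: S = S^T, real valued

# The general eigensolver allows complex output; for a real
# symmetric tensor the imaginary parts must vanish, as proved above.
evals = np.linalg.eigvals(S)
```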

Eigenvectors are orthogonal for S3

Proof of orthogonality of eigenvectors: Let $(\lambda_1, x_1)$ and $(\lambda_2, x_2)$ be two eigenpairs for $\epsilon = \epsilon^T$ (real), where $\lambda_1 \neq \lambda_2$. Then

$$\epsilon x_1 = \lambda_1 x_1 \quad \text{and} \quad \epsilon x_2 = \lambda_2 x_2 . \qquad (11)$$

Dot these two expressions by $x_2$ and $x_1$, respectively. Thus

$$x_2 \cdot \epsilon x_1 = \lambda_1 \, x_2 \cdot x_1 \quad \text{and} \quad x_1 \cdot \epsilon x_2 = \lambda_2 \, x_1 \cdot x_2 . \qquad (12)$$

Apply the definition of the transpose and invoke the relation $\epsilon = \epsilon^T$ together with (11) to give

$$x_2 \cdot \epsilon x_1 = \epsilon^T x_2 \cdot x_1 = \epsilon x_2 \cdot x_1 = \lambda_2 \, x_2 \cdot x_1 . \qquad (13)$$

Combining (13) with (12) yields

$$(\lambda_1 - \lambda_2) \, x_2 \cdot x_1 = 0 . \qquad (14)$$

This implies that $x_1 \cdot x_2 = 0$, since the eigenvalues were assumed distinct.
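A short numerical check of this orthogonality (illustrative, not from the notes): for a random real symmetric tensor, the matrix of pairwise dot products of the unit eigenvectors is the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
S = B + B.T                    # real symmetric tensor

lams, X = np.linalg.eigh(S)    # columns of X are unit eigenvectors

# Gram matrix of the eigenvectors: X^T X = 1 means the eigenvectors
# are mutually orthogonal and of unit length.
gram = X.T @ X
```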

Spectral Representations

For a real symmetric 2nd order tensor, $A$, one has the following useful representation theorems. If one has distinct eigenvalues $\{\lambda_1, \lambda_2, \lambda_3\}$ with associated eigenvectors $\{n_1, n_2, n_3\}$, then we can write

$$A = \sum_{i=1}^{3} \lambda_i \, n_i \otimes n_i . \qquad (15)$$

The proof of this form follows easily if one notes that the identity can be expressed in terms of the eigenvectors as $1 = \sum_i n_i \otimes n_i$. If we have $\lambda_1$ and $\lambda_2 = \lambda_3$, with $n_1$ the eigenvector associated with $\lambda_1$, then we have

$$A = \lambda_1 \, n_1 \otimes n_1 + \lambda_2 (1 - n_1 \otimes n_1) . \qquad (16)$$

Note that $1 - n_1 \otimes n_1$ represents a projection onto the subspace orthogonal to $n_1$ and thus represents a plane of eigenvectors. If we have $\lambda_1 = \lambda_2 = \lambda_3$, then

$$A = \lambda_1 1 \qquad (17)$$

and all vectors are eigenvectors.
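Eq. (15) can be verified directly (an illustrative numpy sketch, not part of the notes): reassembling a random symmetric tensor from its eigenpairs reproduces the original tensor:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
A = B + B.T                    # real symmetric tensor

lams, N = np.linalg.eigh(A)    # eigenpairs (lam_i, n_i), n_i = N[:, i]

# Eq. (15): A = sum_i lam_i n_i (x) n_i, with (x) the dyadic product
A_rebuilt = sum(lams[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
```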

Cayley-Hamilton Theorem

The Cayley-Hamilton Theorem states that a tensor satisfies its own characteristic polynomial. For simplicity we will only deal with the real-symmetric case. To prove this we first start with an auxiliary result. Assume that $f(\lambda)$ is a real polynomial and that $\hat\lambda$ is a solution of the characteristic polynomial for a tensor $A$ (i.e. an eigenvalue, with eigenvector $p$); then $f(\hat\lambda)$ is an eigenvalue of $f(A)$. The proof is performed via the principle of mathematical induction (PMI). First note that there exists a $p$ such that $A^r p = \hat\lambda^r p$ for $r = 1$ (by assumption). Now assume

$$A^n p = \hat\lambda^n p \qquad (18)$$

for some fixed $n > 1$. Thus

$$A^{n+1} p = A(A^n p) = A \, \hat\lambda^n p = \hat\lambda^n A p = \hat\lambda^{n+1} p . \qquad (19)$$

Thus by PMI we have that $A^r p = \hat\lambda^r p$ for all $r$. The remainder of the proof follows directly by expansion of $f(A)$ acting on $p$.

To show the main result, assume that the polynomial $f(\lambda)$ is the characteristic polynomial. Then $f(\hat\lambda)$ is an eigenvalue of

$$f(A) = -A^3 + I_A A^2 - II_A A + III_A 1 . \qquad (20)$$

But $f(\hat\lambda) \equiv 0$ by our choice of $f(\cdot)$. This implies that all the eigenvalues of $f(A)$ are zero. For real symmetric tensors, the only such tensor is the zero tensor (by the spectral representation), so we have the final result:

$$-A^3 + I_A A^2 - II_A A + III_A 1 = 0 . \qquad (21)$$

Note that the result is more general than the symmetric-real case (it works for all tensors), but the proof is a bit more involved. See Gilbert Strang, Linear Algebra and Its Applications, for a proof of the more general result utilizing Schur's lemma.

Max-min properties of eigenvalues

If we consider the strain tensor $\epsilon$, then the eigenvalues are the max, min, and saddle point normal strains over all directions. Proof: Let $\lambda_I$ be the eigenvalues of $\epsilon$ and $n_I$ the corresponding eigenvectors. Find $v$ such that

$$v \cdot \epsilon v \qquad (22)$$

is a critical value (min, max, or saddle point), where $v$ is a unit vector. First write $\epsilon$ in the basis defined by the eigenvectors:

$$\epsilon = \sum_{I=1}^{3} \lambda_I \, n_I \otimes n_I . \qquad (23)$$

Also assume that $\lambda_1 > \lambda_2 > \lambda_3$. Then

$$v \cdot \epsilon v = \lambda_1 (n_1 \cdot v)^2 + \lambda_2 (n_2 \cdot v)^2 + \lambda_3 (n_3 \cdot v)^2 . \qquad (24)$$

Let $l = n_1 \cdot v$, $m = n_2 \cdot v$, and $n = n_3 \cdot v$, and note that $l^2 + m^2 + n^2 = 1$ since $v$ is assumed to be a unit vector. Therefore we need to find the critical values of $\lambda_1 l^2 + \lambda_2 m^2 + \lambda_3 n^2$ subject to the constraint that $l^2 + m^2 + n^2 = 1$. This is done via the method of Lagrange multipliers; form the Lagrangian

$$L = \lambda_1 l^2 + \lambda_2 m^2 + \lambda_3 n^2 + \mu (l^2 + m^2 + n^2 - 1) , \qquad (25)$$

where $\mu$ is the multiplier.

The critical equations of the Lagrangian are

$$\frac{\partial L}{\partial l} = 0 , \quad \frac{\partial L}{\partial m} = 0 , \quad \frac{\partial L}{\partial n} = 0 , \quad \frac{\partial L}{\partial \mu} = 0 . \qquad (26\text{--}29)$$

Expanding the derivatives gives

$$(\lambda_1 + \mu) l = 0 , \quad (\lambda_2 + \mu) m = 0 , \quad (\lambda_3 + \mu) n = 0 , \quad l^2 + m^2 + n^2 = 1 . \qquad (30)$$

To satisfy Eqs. (30) there are several choices:

1. $\mu = -\lambda_1$, $l = \pm 1$, $m = 0$, $n = 0$, which implies that $v \cdot \epsilon v = \lambda_1$.
2. $\mu = -\lambda_2$, $m = \pm 1$, $l = 0$, $n = 0$, which implies that $v \cdot \epsilon v = \lambda_2$.
3. $\mu = -\lambda_3$, $n = \pm 1$, $l = 0$, $m = 0$, which implies that $v \cdot \epsilon v = \lambda_3$.

Remark: Choice 1 leads to a maximum, Choice 3 leads to a minimum, and Choice 2 leads to a saddle point. Thus the primary conclusion is that $\lambda_1$ is the maximum normal strain and $\lambda_3$ is the minimum normal strain.
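The max-min conclusion can be probed with a quick Monte-Carlo sketch (illustrative, not from the notes): sampling many random unit directions $v$, every normal strain $v \cdot \epsilon v$ falls between the smallest and largest eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
eps = B + B.T                            # stand-in for a strain tensor

lams = np.sort(np.linalg.eigvalsh(eps))  # lam_3 <= lam_2 <= lam_1

# Sample 1000 random unit directions and evaluate v . eps v for each
V = rng.standard_normal((1000, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)
normal_strains = np.einsum('ij,jk,ik->i', V, eps, V)
```

Every sampled value lies in $[\lambda_3, \lambda_1]$, with the extremes attained only along the corresponding eigenvectors.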