
Test 1 practice

Note: GEPP = Gaussian elimination with partial pivoting

(1) Let
$$A = \begin{pmatrix} 4 & 2 & 2 \\ 4 & 5 & -1 \\ 2 & 0 & 7 \end{pmatrix}.$$

(a) Compute the LU decomposition of A.

The sequence of row operations is: subtract 1 times row 1 from row 2, subtract 1/2 times row 1 from row 3, then subtract $-1/3$ times row 2 from row 3 (that is, add 1/3 of row 2 to row 3). This leaves
$$L = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1/2 & -1/3 & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} 4 & 2 & 2 \\ 0 & 3 & -3 \\ 0 & 0 & 5 \end{pmatrix}, \qquad P = I.$$

(b) What is det(A)?

With $r = 0$ the number of row swaps, we have $\det(A) = (-1)^r \det(U) = 4 \cdot 3 \cdot 5 = 60$.

(c) Solve
$$Ax = \begin{pmatrix} 2 \\ 17 \\ -9 \end{pmatrix}.$$

Solving $Lc = b$ gives $c = (2, 15, -5)$. Solving $Ux = c$ gives $x = (-1, 4, -1)$.
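As a quick sanity check (not part of the original solutions), here is a minimal MATLAB sketch using lu() and backslash; the exact output of lu() depends on the pivoting choices made internally, but for this A no row swaps should be needed, so P should come out as the identity.

    % Verify the factorization and the solve in problem (1).
    A = [4 2 2; 4 5 -1; 2 0 7];
    b = [2; 17; -9];

    [L, U, P] = lu(A);   % GEPP factors with P*A = L*U
    detA = det(A)        % should be 60 (up to roundoff)
    x = A\b              % should be (-1, 4, -1) (up to roundoff)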

(2) Give an example of a system Ax = b, with A a square matrix, that has no solutions.

We can take the equations for two parallel (but unequal) lines; this system will have no solutions because the lines have no points in common. For instance, we can take $3x + 5y = 10$ and $3x + 5y = 11$. In matrix form:
$$\begin{pmatrix} 3 & 5 \\ 3 & 5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 10 \\ 11 \end{pmatrix}.$$
A more trivial example is to let A be a matrix of zeros and let $b \neq 0$. Then Ax = b has no solutions, since Ax will always be 0.
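A hedged illustration (not from the original): in MATLAB, inconsistency of such a system shows up as the augmented matrix having larger rank than A.

    % Check that the parallel-lines system from (2) is inconsistent.
    A = [3 5; 3 5];
    b = [10; 11];
    rank(A)        % 1
    rank([A b])    % 2 > rank(A), so Ax = b has no solution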

(3) Suppose A has the following LU decomposition:
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0.78 & 1 & 0 \\ 0.6 & 0.2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 & 1 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{pmatrix}.$$

(a) What is the rank of A?

The rank of A is 2 (the number of nonzero rows of U).

(b) Since $L^{-1}$ always exists, we will have $Ax = 0$ iff $Ux = 0$. Find a vector $x \neq 0$ such that $Ax = 0$ by finding a nonzero solution to $Ux = 0$.

We can let $x_3$ be anything we want, and then back substitute to find $x_1$ and $x_2$. (In general, if a column of U has no pivot, we can let the corresponding variable take on any value.) Taking $x_3 = 1$, back substitution in $Ux = 0$ gives $0 \cdot x_1 + x_2 + 3x_3 = 0$, so $x_2 = -3$; then $2x_1 + x_2 + x_3 = 2x_1 + (-3) + 1 = 0$, so $x_1 = 1$. So, we have
$$x = \begin{pmatrix} 1 \\ -3 \\ 1 \end{pmatrix}$$
as a solution to $Ux = 0$ and $Ax = 0$.
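A small hedged check (not part of the original solutions): rebuild A from the given factors and confirm the null vector numerically.

    % Verify the null vector found in (3b); L and U are the given factors.
    L = [1 0 0; 0.78 1 0; 0.6 0.2 1];
    U = [2 1 1; 0 1 3; 0 0 0];
    A = L*U;
    x = [1; -3; 1];
    U*x            % zero vector
    A*x            % also zero, since A*x = L*(U*x)
    rank(A)        % 2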

(4) Give an example of a square matrix A that is lower triangular, with ones on the diagonal, such that if we compute its LU decomposition using GEPP, we do not simply get L = A, U = I, P = I.

The key here is that the GEPP LU decomposition always has entries of L whose absolute values are at most 1. So, if we let
$$A = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix},$$
then the LU decomposition cannot possibly have L = A. (What ends up happening is that the partial pivoting swaps the rows of A in the first step. If we did not do any pivoting, then we would have L = A whenever A is lower triangular with ones on the diagonal.)

(5) If V is known to be unitary/orthogonal, describe a simple way to solve Vx = b.

If V is unitary/orthogonal, then $V^*V = VV^* = I$. In particular, multiplying both sides of $Vx = b$ on the left by $V^*$ yields
$$V^*Vx = V^*b \quad\Longrightarrow\quad x = V^*b.$$
(One might protest that, in practice, one should avoid doing computations with matrix inverses, and that because V is unitary, $V^* = V^{-1}$. However, that is an oversimplification. A better rule of thumb is that if you are using the function inv(), there is probably a better way to solve your problem; we did not have to use inv(V) here, since our code would simply be x = V'*b.)

(6) Let B be an $m \times n$ matrix and let $A = B^*B$.

(a) What are the dimensions of A?

$B^*$ is $n \times m$, so $A = B^*B$ will be $n \times n$.

(b) Show that for any $x \in \mathbb{R}^k$ (for the appropriate k), we have $x^*Ax \geq 0$.

Since A is $n \times n$, we must consider $x \in \mathbb{R}^n$. For $x \in \mathbb{R}^n$, we have
$$x^*Ax = x^*B^*Bx = (Bx)^*(Bx) \geq 0.$$
(The inequality comes from the fact that $(Bx)^*(Bx)$ is the sum of the squares of the entries of $Bx$. At a more abstract level, $(Bx)^*(Bx) = \|Bx\|^2 \geq 0$.)

(7) Suppose A is a square matrix with $\|A\| = 10$ and $\|A^{-1}\| = 10^2$.

(a) Find $\kappa(A)$, the condition number of A.

$\kappa(A) = \|A\| \, \|A^{-1}\| = 10 \cdot 10^2 = 10^3$.

(b) Assuming the only errors involved come from roundoff errors, with relative roundoff error $\epsilon = 2 \times 10^{-16}$, about how large can the relative error in x be when you solve an equation Ax = b?

The relative error in x when solving Ax = b, if roundoff errors are the only issue, will typically be bounded by $\kappa(A)\epsilon$, in this case about $2 \times 10^{-13}$. A more conservative bound, that is very rarely exceeded, is $10\kappa(A)\epsilon$, in this case about $2 \times 10^{-12}$.
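A hedged numerical illustration (not from the original solutions) of the estimate in part (b): the matrix below is constructed, via an assumed random-orthogonal-factor recipe, to have $\|A\| = 10$ and $\|A^{-1}\| = 10^2$ as in problem (7).

    % Build A with singular values 10 and 1e-2, so norm(A) = 10,
    % norm(inv(A)) = 1e2, and kappa(A) = 1e3.
    [Q1, ~] = qr(randn(2));
    [Q2, ~] = qr(randn(2));
    A = Q1 * diag([10, 1e-2]) * Q2';

    x_true = randn(2, 1);
    b = A * x_true;
    x = A \ b;

    rel_err = norm(x - x_true) / norm(x_true)   % typically well below the bound
    bound   = cond(A) * eps                     % kappa(A)*eps, about 2e-13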

(c) Putting roundoff errors aside, how much can the solution to Ax = b change if we change b by an amount with norm 0.07? More precisely, if we have $Ax = b$ and $A\tilde{x} = \tilde{b}$, with $\|b - \tilde{b}\| = 0.07$, how large can $\|x - \tilde{x}\|$ be?

Since $x = A^{-1}b$ and $\tilde{x} = A^{-1}\tilde{b}$, we have $\|x - \tilde{x}\| = \|A^{-1}(b - \tilde{b})\| \leq \|A^{-1}\| \, \|b - \tilde{b}\|$, so $\|A^{-1}\|$ puts an upper bound on how sensitive x is to changes in b. So, if we change b by an amount with norm 0.07, the resulting change in x has norm at most $\|A^{-1}\| \cdot 0.07 = 10^2 \cdot 0.07 = 7$.

(8) A square matrix A is normal if $A^*A = AA^*$. Show that every Hermitian/symmetric matrix is normal.

If A is Hermitian/symmetric (so $A^* = A$), we have
$$A^*A = AA = AA^*,$$
so A is normal. (Normal matrices have the useful property that they can be decomposed as $VDV^*$ where V is unitary/orthogonal and D is diagonal. Informally, this says that, with an appropriate change of coordinates that does not distort space, A becomes a diagonal matrix.)

(9) Recall the identity $v \cdot w = \|v\| \, \|w\| \cos\theta$, where $\theta$ is the angle between v and w. (In fact, this identity can be used to define the angle between two vectors whenever a space has a notion of dot product.) Let x = linspace(0,2*pi*511/512, 512) (be sure you have a column vector). Think of this as an approximation of the interval $[0, 2\pi]$, just as our mesh in the last assignment was an approximation of a two-dimensional region. Create vectors a = cos(0*x); b = cos(x); c = cos(2*x); d = sin(x); f = cos(x + 1.234);

(a) Thinking of these vectors as functions defined on $[0, 2\pi]$, sketch them.

Calling plot(x,a) will plot a against x, and so on.

[Figure: two sketches over $x \in [0, 2\pi]$, $y \in [-1, 1]$; the first shows a, d, and f, the second shows b and c.]
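One possible way to produce these sketches (not from the original; the subplot layout is just a choice):

    % Plot the vectors from (9) as functions on [0, 2*pi].
    x = linspace(0, 2*pi*511/512, 512)';   % column vector
    a = cos(0*x); b = cos(x); c = cos(2*x); d = sin(x); f = cos(x + 1.234);

    subplot(2,1,1); plot(x, a, x, d, x, f); legend('a','d','f');
    subplot(2,1,2); plot(x, b, x, c);       legend('b','c');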


(b) What is the angle between b and each of the other vectors? (Recall that $\cos^{-1}(x) = \arccos(x)$ can be computed with acos(x).)

Solving for $\theta$ in $v \cdot w = \|v\| \, \|w\| \cos\theta$ gives
$$\theta = \cos^{-1}\!\left(\frac{v \cdot w}{\|v\| \, \|w\|}\right).$$
So we can compute this angle with acos(v'*w/(norm(v)*norm(w))). This gives us the following angles: $\pi/2$ (b and a), $\pi/2$ (b and c), $\pi/2$ (b and d), 1.234 (b and f). In particular, we are seeing that sines and cosines of different frequencies are orthogonal to each other, and that a sine and cosine of the same frequency are orthogonal to each other. Two waves of the same frequency, but out of phase by an angle $\varphi$ (as with b and f), have angle $\varphi$ between them.
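A brief hedged sketch (not in the original) computing these angles, assuming the vectors from (9) are already in the workspace:

    % Angle between two column vectors, as in (9b).
    ang = @(v, w) acos(v'*w / (norm(v)*norm(w)));
    ang(b, a)    % approximately pi/2
    ang(b, c)    % approximately pi/2
    ang(b, d)    % approximately pi/2
    ang(b, f)    % approximately 1.234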

(10) Find the fixed points of the function $g(x) = x^2 - 2x + 2$, and for each one, determine whether it is superattracting, attracting, neutral, or repelling.

The fixed points of g are the solutions of $g(x) = x$:
$$x^2 - 2x + 2 = x \iff x^2 - 3x + 2 = 0 \iff (x-2)(x-1) = 0 \iff x = 1, 2.$$
So the fixed points are $x = 1$ and $x = 2$. We have $g'(x) = 2x - 2$, so $g'(1) = 0$, making 1 a superattracting fixed point, and $g'(2) = 2 > 1$, making 2 a repelling fixed point.
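A hedged illustration (not in the original): iterating g from starting points near each fixed point shows the two behaviors, with rapid convergence near 1 and escape from 2.

    % Fixed-point iteration x_{k+1} = g(x_k) for g(x) = x^2 - 2x + 2.
    g = @(x) x.^2 - 2*x + 2;

    x = 1.4;                       % start near the superattracting point 1
    for k = 1:6, x = g(x); end
    x                              % very close to 1 (convergence is quadratic)

    y = 2.01;                      % start near the repelling point 2
    for k = 1:6, y = g(y); end
    y                              % has moved away from 2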
