
Math 206, Spring 2016

Assignment 11 Solutions

Due: April 15, 2016

Part A.
(1) Give conditions on n, a and b which are necessary and sufficient for Pn ≅ R^{a×b}. [As always, carefully
justify your answer, and provide proofs for any claims you make which we didn't discuss in class.]
Solution. Because dimension characterizes isomorphism, we know that Pn ≅ R^{a×b} if and only if
dim(Pn) = dim(R^{a×b}). We also saw in class that dim(R^{a×b}) = ab, and we will prove momentarily that
dim(Pn) = n + 1. Hence we get

Pn ≅ R^{a×b} if and only if n + 1 = ab.
Now we prove that dim(Pn) = n + 1. For this we will exhibit a basis for Pn. Consider B =
{1, x, x^2, ..., x^n}. First, recall that by definition we have

Pn = {a0 + a1 x + ... + an x^n : a0, a1, ..., an ∈ R}.

From this definition we see that any given element of Pn is a linear combination of the polynomials from
B, and hence B spans. To show that B is independent will require a bit more work. Suppose that we
have c0, c1, ..., cn ∈ R so that

c0(1) + c1(x) + c2(x^2) + ... + cn(x^n) = 0.

[That is, the given polynomial is identically zero as a function.] We want to argue that c0 = c1 = ... = cn = 0.
One could use some heavy machinery to prove this result. For instance, suppose (for the sake of
contradiction) that at least one of the ci ≠ 0; this means we have a nonzero polynomial of degree at most n. The
fundamental theorem of algebra tells us that the polynomial c0 + c1 x + ... + cn x^n has at most n
complex roots (counted with multiplicity); in particular, there are at most n real numbers we can plug
into the polynomial so that the polynomial vanishes. Since there are far more than n real numbers
we could plug into this polynomial, we conclude that there must exist some real number r so that
c0 + c1 r + ... + cn r^n ≠ 0. In other words, the polynomial c0 + c1 x + ... + cn x^n is not identically zero;
this is a contradiction to our assumption.
One could approach this in a more elementary manner. Again, assume we are given constants
c0, c1, ..., cn ∈ R so that

c0 + c1 x + ... + cn x^n = 0.    (1)

[Again: by this we mean that the polynomial is identically zero.] We wish to conclude all the coefficients
are 0. If we plug x = 0 into equation (1), then we recover

c0 + c1(0) + c2(0)^2 + ... + cn(0)^n = 0.
Of course, this just means that c0 = 0. Now let T be the linear transformation T(p) = dp/dx; we've seen
in class that this transformation is linear. If we apply this transformation to equation (1), then we get
the equation

c1 + 2c2 x + ... + n cn x^{n-1} = 0.
[The right side is zero because any linear transformation T ∈ L(V, W) has T(0_V) = 0_W.] If we evaluate
this polynomial at x = 0, then we recover

c1 + 2c2(0) + ... + n cn(0)^{n-1} = 0,

and so c1 = 0. Continuing in this manner, let 1 ≤ i ≤ n be given, and apply T^i to equation (1) to
recover

i! ci + ((i+1)!/1!) c_{i+1} x + ... + (n!/(n-i)!) cn x^{n-i} = 0.
http://palmer.wellesley.edu/~aschultz/w16/math206


If we plug x = 0 into this equation, we recover ci = 0. Since this applies to all the ci, we therefore have
that the original relation was trivial.
Using either proof, we have that B is independent. Hence B is a basis for Pn, and so dim(Pn) = n + 1.
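The independence argument can also be spot-checked numerically: evaluating the monomials 1, x, ..., x^n at n + 1 distinct points produces a Vandermonde matrix, and full rank means the only coefficient vector giving the zero function is the zero vector. A minimal sketch, not part of the assignment, assuming numpy is available:

```python
import numpy as np

n = 5  # check independence of {1, x, ..., x^5} in P_5
points = np.arange(n + 1, dtype=float)  # n + 1 distinct evaluation points

# V[i, j] = points[i] ** j, so V @ c evaluates c_0 + c_1 x + ... + c_n x^n
V = np.vander(points, N=n + 1, increasing=True)

# Full rank means V @ c = 0 forces c = 0: the monomials are independent,
# so dim(P_n) = n + 1.
rank = np.linalg.matrix_rank(V)
print(rank)  # 6
```

This mirrors the "far more than n real numbers" argument: distinct evaluation points already pin down all n + 1 coefficients.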

(2) Suppose that B ∈ R^{n×n}, and define ann(B) to be the set {S ∈ R^{n×n} : SB = 0}.
(a) Prove that ann(B) is a subspace of R^{n×n}.
Solution. We'll use the subspace test. First, note that the zero matrix 0 has the property that
0B = 0. Hence 0 ∈ ann(B).
Now let S1, S2 ∈ ann(B) be given. This means that S1 B = S2 B = 0. Then we have

(S1 + S2)B = S1 B + S2 B    (distributivity of matrix multiplication)
           = 0 + 0          (since S1, S2 ∈ ann(B))
           = 0.

Hence S1 + S2 ∈ ann(B).
Finally, let S ∈ ann(B) and k ∈ R. Then we have

(kS)B = k(SB)    (associativity of matrix scaling)
      = k0       (since S ∈ ann(B))
      = 0.

Hence kS ∈ ann(B), and the subspace test is satisfied.
[Note: we've used some significant facts about matrix algebra above which have shown up in the
text, but which we haven't explicitly talked about in class. So let me offer a few proofs. I don't
expect anyone to have gone to this level of care in their problem set, but it's worth seeing these
kinds of calculations carried out once.
First, I'll check that matrix products distribute across matrix sums. So let A, B ∈ R^{r×c}, and
let C ∈ R^{c×q}. We claim that (A + B)C = AC + BC. To prove this result, we'll check that
Col_i((A + B)C) = Col_i(AC + BC) for each i. On the left side of this expression, we have

Col_i((A + B)C) = (A + B) Col_i(C)                                      (definition of matrix/matrix multiplication)
  = c_{1i} Col_1(A + B) + ... + c_{ci} Col_c(A + B)                     (expanding the product)
  = c_{1i}(Col_1(A) + Col_1(B)) + ... + c_{ci}(Col_c(A) + Col_c(B))     (definition of matrix addition)
  = (c_{1i} Col_1(A) + ... + c_{ci} Col_c(A))
      + (c_{1i} Col_1(B) + ... + c_{ci} Col_c(B))                       (commutativity)
  = A Col_i(C) + B Col_i(C)                                             (definition of matrix/vector product)
  = Col_i(AC) + Col_i(BC)                                               (definition of matrix/matrix product)
  = Col_i(AC + BC)                                                      (definition of matrix addition).

The other identities (such as 0X = 0 for any X, and k(AB) = (kA)B) follow from a similar type
of calculation.]
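As a quick sanity check, the identities in the note above can be verified numerically on random matrices; a small sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

# (A + B)C = AC + BC, checked entrywise up to floating-point error
assert np.allclose((A + B) @ C, A @ C + B @ C)

# the related identities used in the subspace proof
k = 2.5
assert np.allclose((k * A) @ C, k * (A @ C))  # k(AB) = (kA)B
assert np.allclose(np.zeros((3, 4)) @ C, 0)   # 0X = 0
print("all identities hold")
```

Of course this only tests particular matrices; the column-by-column computation above is what proves the identity in general.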

(b) Give an example of B ∈ R^{3×3} for which dim(ann(B)) = 9.
Solution. Let B be the zero matrix in R^{3×3}. Then for any S ∈ R^{3×3} we have
SB = 0 (as in the previous part). Hence we have R^{3×3} = ann(B). But since we know that
dim(R^{3×3}) = 9 from class, the desired result follows.



(c) Compute dim(ann([ 1 1 ; 1 1 ])), where [ a b ; c d ] denotes the 2×2 matrix with rows (a, b) and (c, d).

Solution. Observe that ann([ 1 1 ; 1 1 ]) is defined as

{ [ a b ; c d ] : [ a b ; c d ] [ 1 1 ; 1 1 ] = [ 0 0 ; 0 0 ] }.

If we carry out this matrix product, we then have that

ann([ 1 1 ; 1 1 ]) = { [ a b ; c d ] : [ a+b a+b ; c+d c+d ] = [ 0 0 ; 0 0 ] }.


Since two matrices are equal if and only if they are equal entry-by-entry, it follows that ann([ 1 1 ; 1 1 ])
corresponds to the solution set of the system

1a + 1b + 0c + 0d = 0
1a + 1b + 0c + 0d = 0
0a + 0b + 1c + 1d = 0
0a + 0b + 1c + 1d = 0.

Row reducing this system gives that the full set of solutions corresponds to

{ [ -b b ; -d d ] : b, d ∈ R }.
Of course this is equivalent to saying that

ann([ 1 1 ; 1 1 ]) = span{ [ -1 1 ; 0 0 ], [ 0 0 ; -1 1 ] }.

If we can show that this spanning set is independent, then we'll have a basis for the space of
interest, and in particular we'll know that it's 2-dimensional.
To see that this collection is independent, suppose we have c1, c2 ∈ R so that

c1 [ -1 1 ; 0 0 ] + c2 [ 0 0 ; -1 1 ] = [ 0 0 ; 0 0 ].

Combining terms on the left side then gives the equality

[ -c1 c1 ; -c2 c2 ] = [ 0 0 ; 0 0 ].

Comparing the top-right and bottom-right entries in these two matrices, we get c1 = c2 = 0, as
desired.
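As a cross-check on parts (b) and (c), the condition SB = 0 can be rewritten using the standard vec/Kronecker identity vec(SB) = (B^T ⊗ I)vec(S), which gives dim(ann(B)) = n^2 - rank(B^T ⊗ I) = n^2 - n·rank(B). This identity was not used in the solutions above; the sketch below assumes numpy:

```python
import numpy as np

def dim_ann(B):
    """Dimension of {S : SB = 0}, via vec(SB) = (B^T kron I) vec(S)."""
    n = B.shape[0]
    M = np.kron(B.T, np.eye(n))          # matrix of the map S -> SB on vec(S)
    return n * n - np.linalg.matrix_rank(M)

# part (c): the all-ones 2x2 matrix has a 2-dimensional annihilator
print(dim_ann(np.array([[1.0, 1.0], [1.0, 1.0]])))  # 2

# part (b): the zero matrix in R^{3x3} is annihilated by everything
print(dim_ann(np.zeros((3, 3))))  # 9
```

The formula n^2 - n·rank(B) also explains part (b): only rank-0 matrices, i.e. B = 0, achieve dimension 9.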


Part B.
(1) Define T : P2 → P3 and S : P3 → P4 to be the functions which take a polynomial f and return the
antiderivative of f whose constant term is zero. (So, for instance, T(1 + x - x^2) = x + (1/2)x^2 - (1/3)x^3.) Let
B = {1 + x, x + x^2, 1 + x + x^2}, D = {1, 1 + x, 1 + x + x^2, 1 + x + x^2 + x^3} and E = {1, x, x^2, x^3, x^4}; it
is a fact that these are bases of (respectively) P2, P3 and P4.

(a) Compute the matrix representations for T, S and S ∘ T relative to these bases.
Solution. We know that

Rep_{D,B}(T) = [ Rep_D(T(b1))  Rep_D(T(b2))  Rep_D(T(b3)) ]
             = [ Rep_D(x + (1/2)x^2)  Rep_D((1/2)x^2 + (1/3)x^3)  Rep_D(x + (1/2)x^2 + (1/3)x^3) ].

To calculate the representations for these polynomials, one example should suffice to explain the
basic procedure. For instance, what is Rep_D(x + (1/2)x^2 + (1/3)x^3)? It is a vector c ∈ R^4 so that

c1(1) + c2(1 + x) + c3(1 + x + x^2) + c4(1 + x + x^2 + x^3) = x + (1/2)x^2 + (1/3)x^3.

We can rearrange terms on the left side to recover an equivalent expression:

(c1 + c2 + c3 + c4)1 + (c2 + c3 + c4)x + (c3 + c4)x^2 + c4 x^3 = x + (1/2)x^2 + (1/3)x^3.

Now we can simply set the coefficients for {1, x, x^2, x^3} equal on both sides of the expression to
recover a system of equations

c1 + c2 + c3 + c4 = 0
     c2 + c3 + c4 = 1
          c3 + c4 = 1/2
               c4 = 1/3.

If we solve the system, we find that the desired c is (-1, 1/2, 1/6, 1/3).
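The back-substitution can be double-checked by handing the same triangular system to a linear solver; a quick sketch assuming numpy:

```python
import numpy as np

# c1 + c2 + c3 + c4 = 0,  c2 + c3 + c4 = 1,  c3 + c4 = 1/2,  c4 = 1/3
A = np.array([[1.0, 1, 1, 1],
              [0,   1, 1, 1],
              [0,   0, 1, 1],
              [0,   0, 0, 1]])
b = np.array([0.0, 1.0, 0.5, 1.0 / 3.0])

c = np.linalg.solve(A, b)
print(c)  # c is approximately (-1, 1/2, 1/6, 1/3)
```

The upper-triangular shape is exactly why back-substitution works by hand: each equation determines one new coefficient.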
We proceed in a similar manner to find the other columns of the matrix representation for T, and
we find

Rep_{D,B}(T) = [ -1     0   -1  ]
               [ 1/2  -1/2  1/2 ]
               [ 1/2   1/6  1/6 ]
               [  0    1/3  1/3 ].

In the same way we find that

Rep_{E,D}(S) = [ 0   0    0    0  ]
               [ 1   1    1    1  ]
               [ 0  1/2  1/2  1/2 ]
               [ 0   0   1/3  1/3 ]
               [ 0   0    0   1/4 ],

Rep_{E,B}(S ∘ T) = [  0     0     0  ]
                   [  0     0     0  ]
                   [ 1/2    0    1/2 ]
                   [ 1/6   1/6   1/6 ]
                   [  0   1/12  1/12 ].


(b) Verify (by direct computation) that the matrices you calculated in the previous part satisfy
Rep_{E,B}(S ∘ T) = Rep_{E,D}(S) Rep_{D,B}(T).

Solution. We're asked to check that a certain matrix product computes as we expect. We'll
check the product column-by-column. For the first column, we apply Rep_{E,D}(S) to the first
column (-1, 1/2, 1/2, 0) of Rep_{D,B}(T):

Col_1(Rep_{E,D}(S) Rep_{D,B}(T)) = ( 0,  -1 + 1/2 + 1/2,  1/4 + 1/4,  1/6,  0 )
                                 = ( 0,  0,  1/2,  1/6,  0 ).

For the second column, we apply Rep_{E,D}(S) to (0, -1/2, 1/6, 1/3):

Col_2(Rep_{E,D}(S) Rep_{D,B}(T)) = ( 0,  -1/2 + 1/6 + 1/3,  -1/4 + 1/12 + 1/6,  1/18 + 1/9,  1/12 )
                                 = ( 0,  0,  0,  1/6,  1/12 ).

For the third column, we apply Rep_{E,D}(S) to (-1, 1/2, 1/6, 1/3):

Col_3(Rep_{E,D}(S) Rep_{D,B}(T)) = ( 0,  -1 + 1/2 + 1/6 + 1/3,  1/4 + 1/12 + 1/6,  1/18 + 1/9,  1/12 )
                                 = ( 0,  0,  1/2,  1/6,  1/12 ).

These are precisely the columns of Rep_{E,B}(S ∘ T) computed in part (a).
Hence the product is the desired matrix.
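The same verification can be carried out in one step with numpy; the arrays below are the representation matrices computed in part (a):

```python
import numpy as np

rep_T = np.array([[-1.0,  0.0,    -1.0],
                  [ 0.5, -0.5,     0.5],
                  [ 0.5,  1 / 6,   1 / 6],
                  [ 0.0,  1 / 3,   1 / 3]])       # Rep_{D,B}(T)

rep_S = np.array([[0.0, 0.0, 0.0,   0.0],
                  [1.0, 1.0, 1.0,   1.0],
                  [0.0, 0.5, 0.5,   0.5],
                  [0.0, 0.0, 1 / 3, 1 / 3],
                  [0.0, 0.0, 0.0,   0.25]])       # Rep_{E,D}(S)

rep_ST = np.array([[0.0,   0.0,    0.0],
                   [0.0,   0.0,    0.0],
                   [0.5,   0.0,    0.5],
                   [1 / 6, 1 / 6,  1 / 6],
                   [0.0,   1 / 12, 1 / 12]])      # Rep_{E,B}(S o T)

# composition of maps corresponds to matrix multiplication of representations
assert np.allclose(rep_S @ rep_T, rep_ST)
print("Rep(S o T) = Rep(S) Rep(T)")
```

This is the general fact the problem illustrates: relative to compatible bases, the representation of a composition is the product of the representations.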


(2) Complete problem 58 from section 4.1.

Solution. For part (a), suppose that g(x) ∈ V. This means that g''(x) = -g(x). Now consider

h(x) = (g(x))^2 + (g'(x))^2.

The chain rule and the fact that g(x) ∈ V give that

h'(x) = 2g(x)g'(x) + 2g'(x)g''(x) = 2g(x)g'(x) - 2g'(x)g(x) = 0.

Since the derivative of h is 0, we conclude that h(x) is constant, as desired.
For part (b), suppose that g(x) ∈ V satisfies g(0) = g'(0) = 0. From above we know that (g(x))^2 +
(g'(x))^2 is constant. Since we have g(0) = g'(0) = 0, this constant value is just (g(0))^2 + (g'(0))^2 =
0^2 + 0^2 = 0. Hence we've shown that for all x ∈ R, we have

(g(x))^2 + (g'(x))^2 = 0.

Since the quantities on the left side are nonnegative, the only way they can sum to zero for all x ∈ R is
if both are zero for all x ∈ R. Hence we conclude that g(x) = 0 for all x ∈ R.
Finally, for part (c) suppose that f(x) ∈ V, and write g(x) = f(x) - f(0)cos(x) - f'(0)sin(x). We
have that g(x) ∈ V since

g''(x) = d^2/dx^2 [ f(x) - f(0)cos(x) - f'(0)sin(x) ]                      (definition of g)
       = d^2/dx^2 [f(x)] - f(0) d^2/dx^2 [cos(x)] - f'(0) d^2/dx^2 [sin(x)] (since d^2/dx^2 is linear)
       = -f(x) + f(0)cos(x) + f'(0)sin(x)                                  (since f(x), cos(x), sin(x) ∈ V)
       = -( f(x) - f(0)cos(x) - f'(0)sin(x) )                              (distributing scalars)
       = -g(x)                                                             (definition of g).
Furthermore we have that

g(0) = f(0) - f(0)cos(0) - f'(0)sin(0) = f(0) - f(0) = 0
g'(0) = f'(0) + f(0)sin(0) - f'(0)cos(0) = f'(0) - f'(0) = 0.

By part (b) we must have that g(x) = 0 for all x. But substituting the definition of g(x), we conclude
that

f(x) = f(0)cos(x) + f'(0)sin(x).

Hence f(x) is a linear combination of cos(x) and sin(x), which is the desired result.
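The conclusion of part (c) can be sanity-checked symbolically for a particular element of V; the choice f = 3cos(x) - 2sin(x) below is just a hypothetical example, and sympy is assumed:

```python
import sympy as sp

x = sp.symbols('x')
f = 3 * sp.cos(x) - 2 * sp.sin(x)   # a hypothetical element of V

# f solves f'' = -f, so f really lies in V ...
assert sp.simplify(sp.diff(f, x, 2) + f) == 0

# ... and f is recovered from its initial data, exactly as part (c) claims
f0 = f.subs(x, 0)                    # f(0)
df0 = sp.diff(f, x).subs(x, 0)       # f'(0)
assert sp.simplify(f - (f0 * sp.cos(x) + df0 * sp.sin(x))) == 0
print("f(x) = f(0) cos(x) + f'(0) sin(x)")
```

A single example of course proves nothing; the uniqueness argument via part (b) is what shows every element of V has this form.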

