
Solutions to Online Exercises for Linear Algebra M51A

This document gives the solutions to all of the online exercises for M51A. The section (§)
numbers refer to the textbook. TYPE I questions are True/False. Answers are in square
brackets [ ].

Lecture 02 (§1.1)
TYPE I:
1) The matrix [3 2 1] has order 3 × 1. [F. This matrix has 1 row and 3 columns, so its order
is 1 × 3.]
2) If A is the 2 × 2 matrix defined by Ai,j = i − j for i = 1, 2 and j = 1, 2, then all diagonal
elements of A are zero. [T. The diagonal elements of A are A1,1 and A2,2 . But by the
definition of A, A1,1 = 1 − 1 = 0 and A2,2 = 2 − 2 = 0.]
3)
    [ 1  2 ]     [ 1  3  5 ]
    [ 3  4 ]  =  [ 2  4  6 ].
    [ 5  6 ]

[F. The two matrices have different orders so they cannot be equal (regardless of the
similarity of their elements). Later we will see that these two matrices are related. Each
is the “transpose” of the other.]
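
If you want to check this numerically, here is a minimal numpy sketch (the names A and B
are ours):

    import numpy as np

    A = np.array([[1, 2], [3, 4], [5, 6]])   # the 3 x 2 matrix on the left
    B = np.array([[1, 3, 5], [2, 4, 6]])     # the 2 x 3 matrix on the right
    print(A.shape, B.shape)                  # (3, 2) (2, 3): different orders
    print(np.array_equal(A.T, B))            # True: each is the transpose of the other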

Lecture 03 (§1.1)
TYPE I:
1) Matrix addition, subtraction, and scalar multiplication are all defined elementwise. [T. See
definitions given in lecture or in the textbook.]
2) Matrix addition and subtraction are commutative and associative. [F. Matrix addition,
like ordinary addition of real numbers, is commutative and associative. But like ordinary
subtraction of real numbers, matrix subtraction is neither commutative nor associative.]

3) If λ = 0, then for any p × n matrix A, λA = 0p×n . [T.

(λA)i,j = (0A)i,j = 0 · Ai,j = 0.

Since every element is zero, λA is the p × n zero matrix.]

Lecture 04 (§1.2)
TYPE I:
1) Matrix multiplication is defined elementwise. [F. See the definition of matrix multiplication
given in lecture.]
2) For all matrices A and B, AB ≠ BA. [F. In general, matrix multiplication is not com-
mutative, but there are some particular cases in which commutativity holds. For example, if
A = B, then AB = AA = BA.]
3) Matrix multiplication is associative. [T. Associativity of matrix multiplication is asserted
in a theorem given in the lecture, and its proof is a homework exercise.]
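
A quick numerical illustration of items 2) and 3), assuming numpy is available (the
particular matrices are our own):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])
    C = np.array([[2, 0], [0, 3]])
    print(np.array_equal(A @ B, B @ A))              # False: AB and BA differ here
    print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True: associativity holds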

Lecture 05 (§1.3)
TYPE I:
1)
                             [ 1  −3 ]
    [  1  4  2  5 ]T         [ 4   6 ]
    [ −3  6  8 11 ]     =    [ 2   8 ]
                             [ 5  11 ].

[T. When transposing a matrix, the first row becomes the first column, the second row
becomes the second column, etc.]
2) There exists a nonzero matrix A which is both skew-symmetric and diagonal. [F. If A is
skew-symmetric, then all diagonal elements must be zero. But if A is diagonal, then all
nondiagonal elements must be zero. Hence, if A is both skew-symmetric and diagonal,
then A = 0.]
3) If A ≠ 0 and AT = kA for some real number k, then k = ±1. [T. Try to prove it. Notice
that if AT = kA, then
A = (AT )T = (kA)T = kAT = k 2 A.
Since A ≠ 0, this forces k 2 = 1, so k = ±1.]

Lecture 06 (§1.3)
TYPE I:

1) The matrix A shown below is symmetrically partitioned.

        [  0 |  1  2  3 |  4 ]
        [  5 |  6  7  8 |  9 ]
        ----------------------
    A = [ 10 | 11 12 13 | 14 ]
        [ 15 | 16 17 18 | 19 ]
        ----------------------
        [ 20 | 21 22 23 | 24 ].

[F. The vertical partition lines occur after the first and fourth columns, but the horizontal
partition lines occur after the second and fourth rows. A symmetrically partitioned matrix
is a square matrix that is partitioned in such a way that the vertical partition lines are
in the same places with respect to the sequence of columns as are the horizontal partition
lines with respect to the sequence of rows.]
2) Every upper triangular matrix is in row-reduced form. [F. The matrix [1 2; 0 2] is upper
triangular, but it is not in row-reduced form because the first nonzero element in the second
row is not a 1.]
3) If A is both upper triangular and lower triangular, then A must be a zero matrix. [F.
Every diagonal matrix is both upper triangular and lower triangular.]

Lecture 07 (§1.4)
TYPE I:
1) If x1 and x2 are distinct solutions to a system of linear equations, then z = .3x1 + .7x2 is
also a solution to this system. [T. A theorem proven in lecture asserts that if x1 and x2
are solutions to a given system and if α + β = 1, then αx1 + βx2 is also a solution to the
system.]

2) There exists a system of linear equations whose set of solutions has exactly 5 elements. [F.
As proven in lecture, the set of solutions to a system of linear equations must have either
0, 1, or infinitely many elements.]

3) Every consistent system is homogeneous. [F. The system Ax = b is consistent if it has at
least one solution; and it is homogeneous if b = 0. Just because a system has a solution
does not mean that b must be 0. For example, the system given by the two equations
x + y = 2, −x + y = 2 is consistent, because (0, 2) is a solution. But it is not homogeneous.
It is true that every homogeneous system is consistent, because every homogeneous system
has the trivial solution x = 0.]

Lecture 08 (§1.4)
TYPE I:
1) If S is the set of solutions to the equations E1 , . . . , Em in the variables x1 , . . . , xn , then S
is also the set of solutions to the equations E1 − 2E2 , E2 , E3 , . . . , Em , where E1 − 2E2 is the
equation obtained by adding −2 times equation E2 to equation E1 . [T. As discussed
in lecture, adding to one equation a scalar times another equation does not change the set
of solutions of the system of equations. Do you see why?]
2) For a given system of equations, the derived set of equations (obtained by doing Gaussian
elimination on the augmented matrix for the original system) has the same set of solutions
as the original system of equations. [T. The derived set of equations corresponds to the
new augmented matrix put in row-reduced form by successive elementary row operations.
But elementary row operations on an augmented matrix do not change the solution set of
the corresponding system of equations.]
3) If, after doing elementary row operations, an augmented matrix for a linear system in the
variables x, y, and z has the form

    [ 1 1 1 | 1 ]
    [ 0 1 1 | 1 ],
    [ 0 0 1 | 1 ]

then the (unique) solution to the original system is x = 1, y = 1, z = 1. [F. The derived
set of equations is
x + y + z = 1
y + z = 1
z = 1.
So z = 1 and using back-substitution gives y = 1 − z = 1 − 1 = 0 and x = 1 − y − z =
1 − 0 − 1 = 0. So x = 0, y = 0, z = 1 is the unique solution.]
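
You can confirm the back-substitution with numpy (a minimal sketch; the triangular matrix
below is just the derived system):

    import numpy as np

    U = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])
    b = np.array([1.0, 1.0, 1.0])
    print(np.linalg.solve(U, b))   # [0. 0. 1.], i.e., x = 0, y = 0, z = 1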

Lecture 09 (§1.4)
TYPE I:
1) If after applying elementary row operations to an augmented matrix there exists a row of
zeros, then the corresponding system of equations must have infinitely many solutions. [F.
The existence of a row of zeros does not necessarily imply infinitely many solutions. For
example, you can check that the row-reduced form of the augmented matrix for the system

x + 2y = 5
−x − 3y = −7
2x + 5y = 12

has a row of zeros, but the system has exactly one solution, namely x = 1, y = 2.]
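
A sketch of that check using sympy's rref (assuming sympy is available):

    from sympy import Matrix

    M = Matrix([[1, 2, 5],
                [-1, -3, -7],
                [2, 5, 12]])   # the augmented matrix [A | b]
    R, pivots = M.rref()
    print(R)        # the row-reduced form has a zero row
    print(pivots)   # (0, 1): both variables are pivots, so x = 1, y = 2 is unique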

2) If the row-reduced form for the augmented matrix of a system (in the variables x, y, and
z) is

    [ 1 a b | c ]
    [ 0 1 d | f ]
    [ 0 0 0 | g ]
where a, b, c, d, f, g are real numbers, then the system does not have a unique solution. [T.
If g ≠ 0, then the system is inconsistent, so there are no solutions. Otherwise, if g = 0,
then z is unrestricted, so there must be infinitely many solutions.]
3) A homogeneous system with more variables than equations must have infinitely many
solutions. [T. Every homogeneous system is consistent, and if there are more variables
than derived equations, then there must be infinitely many solutions.]

Lecture 10 (§1.5)
TYPE I:
1) If A is not invertible, then A must have a zero row. [F. A sufficient condition for nonin-
vertibility is the existence of a zero row, but this condition is not necessary. For example,
the 2 × 2 matrix with all entries equal to 1 is not invertible. For any real numbers a, b, c, d,
if

    [ a b ] [ 1 1 ]
    [ c d ] [ 1 1 ]  =  I2 ,

then

    [ a+b  a+b ]     [ 1 0 ]
    [ c+d  c+d ]  =  [ 0 1 ].

Equating first rows gives a + b = 1 and a + b = 0, which is impossible. So [1 1; 1 1] has
no inverse.]
2) If A is invertible and symmetric, then A−1 is symmetric as well. [T. A matrix B is
symmetric if B T = B. Now notice that (A−1 )T = (AT )−1 = (A)−1 (the first equality
was proved in this lecture, and the second follows from the fact that A is symmetric).
Therefore, (A−1 )T = A−1 , implying that A−1 is symmetric.]
3) If A and b are the matrices

    A = [ 5  0 ]        b = [ 4 ]
        [ 0 −2 ],           [ 3 ],

then the matrix equation Ax = b has the unique solution

    x = [ 1/5    0 ] [ 4 ]
        [   0 −1/2 ] [ 3 ].

[T. A is a diagonal matrix with all diagonal elements nonzero, so A is invertible, and
A−1 = [1/5 0; 0 −1/2]. Therefore, the unique solution is

    x = A−1 b = [ 1/5    0 ] [ 4 ]
                [   0 −1/2 ] [ 3 ].]
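
A numerical confirmation (a minimal numpy sketch):

    import numpy as np

    A = np.array([[5.0, 0.0], [0.0, -2.0]])
    b = np.array([4.0, 3.0])
    print(np.linalg.inv(A))       # approximately [[0.2, 0.], [0., -0.5]]
    print(np.linalg.solve(A, b))  # [0.8, -1.5], the unique solution x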

Lecture 11 (§1.5)
TYPE I:

1) The elementary matrix

    E = [ 0 0 1 ]
        [ 0 1 0 ]
        [ 1 0 0 ]
corresponds to the elementary row operation of interchanging the first and third rows of
any 3 × n matrix. [T. Just check the assertion directly by left-multiplying any 3 × n matrix
by E. Notice that E can be obtained by interchanging the first and third rows of I3 .]
2) An n × n matrix is invertible if and only if it can be transformed using elementary row
operations into the identity matrix In . [T. As mentioned in this lecture, it is proven later
in the course that an n × n matrix is invertible if and only if it can be transformed using
elementary row operations into a row-reduced form with all diagonal elements nonzero. In
this lecture, it was demonstrated how such row-reduced matrices can be further transformed
into the identity matrix.]
3) If a matrix A is invertible, then A−1 can be computed by applying to the identity ma-
trix any sequence of elementary row operations that transform A into the identity. [T.
Applying an elementary row operation to A is equivalent to left-multiplying by an elemen-
tary matrix. If E1 , . . . , Ek are elementary matrices such that Ek Ek−1 · · · E1 A = In , then
A−1 = Ek Ek−1 · · · E1 = Ek Ek−1 · · · E1 In , which can be obtained by transforming In using
the corresponding sequence of elementary row operations.]
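
The [A | I] procedure this describes can be sketched with sympy (the 2 × 2 matrix is our
own example):

    from sympy import Matrix, eye

    A = Matrix([[1, 2], [3, 5]])
    aug = A.row_join(eye(2))   # form the augmented matrix [A | I]
    R, _ = aug.rref()          # row operations transform it to [I | A^(-1)]
    print(R[:, 2:])            # Matrix([[-5, 2], [3, -1]]), which equals A.inv()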

Lecture 12 (§1.6)
TYPE I:
1) If A = LU , for some matrices L and U , and L is invertible, then A must be invertible. [F.
A is invertible if and only if both L and U are invertible. Try to prove this.]
2) If the lower triangular matrix L has a zero on its diagonal, then L is not invertible. [T.
This was proven as a lemma in the lecture. L can be transformed to a matrix L′ with
a row of zeros, and that matrix cannot be invertible. Since L′ is not invertible, it cannot
be transformed into the identity matrix. But then L also cannot be transformed to the
identity matrix. Hence, L is not invertible.]
3) If the nonsingular matrix A can be transformed to upper triangular form using only the
third elementary row operation, then A has an LU decomposition. [F. As proven in lecture,
A has an LU decomposition if and only if A can be transformed to upper triangular form
using only elementary row operations R3 (i, j, k) where i > j. The restriction “i > j” is
important: Consider the matrix A = [0 1; 1 0]. If A had an LU decomposition, then
L−1 A = U would be upper triangular. Let L−1 = [a 0; c b]. Then L−1 A = [0 a; b c],
which is upper triangular only if b = 0, in which case L−1 would be singular, implying A
would be singular, a contradiction. Therefore, A does not have an LU decomposition.
However, by first adding row 2 to row 1, then adding (−1) times row 1 to row 2, A is
transformed into the matrix B = [1 1; 0 −1], which is upper triangular. So A has been
transformed to upper triangular form using only operation R3 , but A does not have an
LU decomposition. Notice that the first operation applied to A is of the form R3 (1, 2, 1),
so i = 1 < 2 = j.]
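
A small sketch of this counterexample, assuming scipy is installed (scipy.linalg.lu
factors with a row permutation, A = P L U):

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[0.0, 1.0], [1.0, 0.0]])
    P, L, U = lu(A)                   # scipy computes A = P @ L @ U
    print(P)                          # the permutation swaps the rows of A
    print(np.allclose(P @ L @ U, A))  # True; without P, A has no LU decomposition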

Lecture 13 (§2.1)
TYPE I:
1) For any two vectors u and v in a vector space V , u ⊙ v = v ⊙ u. [F. The expressions
“u ⊙ v” and “v ⊙ u” make no sense. ⊙ denotes scalar multiplication, which is a binary
operation whose input is a scalar and vector pair, and whose output is a vector.]

2) For any three vectors u, v, and w in the vector space V , u ⊕ (v ⊕ w) = v ⊕ (u ⊕ w). [T.

u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w     (associativity)
            = (v ⊕ u) ⊕ w     (commutativity)
            = v ⊕ (u ⊕ w)     (associativity).]

3) R3 , with vector addition and scalar multiplication defined componentwise, is a real vector
space. [T. Some of the axioms were verified in lecture. You should verify the others.]

Lecture 14 (§2.1)
TYPE I:
1) Rn , with vector addition and scalar multiplication defined componentwise, is a complex
vector space. [F. Both Rn and Cn are real vector spaces. But Rn is not a complex vector
space. Scalar multiplication of an n-tuple of real numbers by a complex number may not
give an n-tuple of real numbers. For example, if i = √−1 ∈ C, then i ⊙ (1, 1, 1) = (i, i, i) ∉
R3 . So, if the scalars are chosen to be complex numbers, then R3 does not have the
requisite closure property for ⊙.]
2) The set of all polynomials with real coefficients and having degree less than or equal to 4,
with vector addition and scalar multiplication defined as usual for polynomials, is a real
vector space. [T. Try to prove this by verifying the requisite properties.]
3) The set of all n × n lower triangular matrices (having entries in R), with vector addition
and scalar multiplication being matrix addition and scalar multiplication, is a real vector
space. [T. Note that the lower triangular matrices are closed under matrix addition and
scalar multiplication. The zero matrix is lower triangular, and the additive inverse of a
lower triangular matrix is lower triangular. Furthermore, all the other axioms of a vector
space automatically hold for the set of n×n lower triangular matrices because those axioms
hold for the set of all n × n matrices.]

Lecture 15 (§2.1)
TYPE I:
1) In some vector spaces there are vectors that have more than one additive inverse. [F. As
proven in lecture, each vector in a given vector space has a unique additive inverse.]
2) For any vector u in a vector space and any scalar α, α ⊙ u = 0 if and only if either α = 0
or u = 0. [T. One of the theorems proven in lecture says that α = 0 implies α ⊙ u = 0 for
any vector u; another theorem proven in the lecture says that u = 0 implies α ⊙ u = 0 for
any scalar α; and a third theorem proven in lecture says that for any vector u and scalar
α, α ⊙ u = 0 implies either α = 0 or u = 0.]
3) For any vector u in a vector space V and any scalar α,

−(α ⊙ u) = (−α) ⊙ u = α ⊙ (−u).

[T. Try to prove this. HINT: Show that when either (−α) ⊙ u or α ⊙ (−u) is added to
α ⊙ u, you get 0.]

Lecture 16 (§2.1)
TYPE I:

1) If V is a vector space, then V is a subspace of V . [T. V is a subset of V and certainly V
is a vector space (using the same operations as defined for V !). So V is a subspace of V .
Note: The subspaces V and {0} are often called the trivial subspaces of the vector space
V .]

2) There exists a subspace of R2 with exactly 10 elements. [F. A subspace S of any real
(or complex) vector space has either 1 element (the zero vector), or it has infinitely many
elements. If there exists a nonzero u ∈ S, then by closure under scalar multiplication
n ⊙ u ∈ S for every integer n. All of these vectors are distinct (why?), so S must have
infinitely many elements.]

3) If S is a nonempty subset of the vector space V and α ⊙ u ⊕ β ⊙ v ∈ S whenever u, v ∈ S
and α and β are scalars, then S must be a subspace of V . [T. The assertion is exactly the
corollary proven in lecture.]

Lecture 17 (§2.2)
TYPE I:
1) S = {(x1 , x2 , x3 ) ∈ R3 | 2x1 − 3x2 + 4x3 = 1} is a subspace of R3 . [F. It is easy to see
that 0 = (0, 0, 0) ∉ S, so S is not a subspace. If the “1” is changed to a “0” in the defining
equation for S, then S would be a subspace of R3 .]
2) If S is a subspace of V , then S = span(S). [T. For each u ∈ S, u = 1u ∈ span(S).
So S ⊂ span(S). However, if S is a subspace, then span(S) ⊂ S because, as proved in
lecture, span(S) is contained in every subspace containing S. Thus, since S and span(S)
are subsets of each other, S = span(S).]
3) If vn is a linear combination of v1 , . . . , vn−1 , then span{v1 , . . . , vn−1 } = span{v1 , . . . , vn−1 , vn }.
[T. Try to prove this. HINT: First show that any linear combination of v1 , . . . , vn can be
written as a linear combination of v1 , . . . , vn−1 .]

Lecture 18 (§2.3)
TYPE I:
1) Any set of three vectors in R4 must be linearly independent. [F. There are many coun-
terexamples. For example, the set {(1, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0)} is linearly dependent
because the last vector is the sum of the first two. Note that by a result alluded to in
lecture, any 4 vectors in R3 must be linearly dependent.]
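
A rank computation makes the dependence visible (a sympy sketch, stacking the vectors as
rows):

    from sympy import Matrix

    M = Matrix([[1, 0, 0, 0],
                [0, 1, 0, 0],
                [1, 1, 0, 0]])
    print(M.rank())   # 2 < 3, so the three vectors are linearly dependent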
2) If V is the set of all real-valued functions defined on R, and S = {sin t, cos t}, then S is
linearly independent. [T. If S were linearly dependent, then there would exist a scalar α
such that sin t = α cos t, implying that tan t = (sin t)/(cos t) = α is a constant function,
which is obviously false.]
3) If S ⊂ T and T is linearly dependent, then S must be linearly dependent. [F. For example,
in R2 , let S = {(1, 0), (0, 1)} and T = {(1, 0), (0, 1), (1, 1)}. T is linearly dependent but
S is linearly independent. Note that, in general, if S ⊂ T and T is linearly independent,
then S is linearly independent as well.]

Lecture 19 (§2.4)
TYPE I:
1) A basis for the vector space V is a set S ⊂ V such that S is linearly independent and spans
V . [T. That’s exactly the definition of basis given in lecture.]

2) The set S = {t, t2 , t3 } is a basis for the vector space V of all polynomials q(t) such that
the degree of q(t) is less than or equal to 3 and q(0) = 0. [T. We know that S is linearly
independent. Also if q(t) has degree less than or equal to 3, then q(t) = a0 +a1 t+a2 t2 +a3 t3 .
But q(0) = 0 implies a0 = 0, so q(t) is a linear combination of t, t2 , and t3 . Thus S spans
V , implying that S is a basis for V .]
3) If S has n elements and is a spanning set for some vector space V , then any subset of V
with more than n elements must be linearly independent. [F. As proven in lecture, if V
has a spanning set with n elements, then any subset of V with more than n elements must
be linearly dependent. ]

Lecture 20 (§2.4)
TYPE I:
1) There exists a basis of P 3 with 5 elements. [F. P 3 , the set of all polynomials of degree less
than or equal to 3, has dimension 4, so all bases of P 3 must have 4 elements.]
2) Every linearly independent subset S of R3 that contains exactly 3 vectors must be a basis
for R3 . [T. If S = {v1 , v2 , v3 } were linearly independent but not a basis, then S must fail
to span. So there exists v4 ∈ R3 such that v4 ∉ span(S). Therefore, {v1 , v2 , v3 , v4 } is
linearly independent. But since dim(R3 ) = 3 every set with more than 3 elements must
be linearly dependent. This contradiction implies span(S) = R3 .]
3) If S is a 10 element subset of M4×2 that spans M4×2 , then two elements can be removed
from S so that the remaining 8 element subset is a basis for M4×2 . [T. From the theorem
given in lecture, some subset of S must be a basis for M4×2 . But dim(M4×2 ) = 4 · 2 = 8,
so any subset of S that is a basis must contain 8 elements. Therefore, there exists at least
one (maybe more) 8 element subset of S that is a basis for M4×2 .]

Lecture 21 (§2.5)
TYPE I:
1) If V is a vector space, S ⊂ T ⊂ V , and T = span(X) for some set X ⊂ V , then
span(span(S)) ⊂ T . [T. The hypotheses say that S ⊂ span(X). Therefore, since taking
the span preserves the subset relation, and applying span twice is the same as applying it
once,
span(span(S)) = span(S) ⊂ span(span(X)) = span(X) = T.
]
2) The row rank of a matrix A is the number of nonzero rows of A. [F. For example, if
A = [1 1; 1 1], then the row space of A is the one-dimensional vector space spanned by
[1 1]. In general, the row rank of A is the number of nonzero rows of the matrix B obtained
by transforming A to row-reduced form using elementary row operations.]
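
A sketch of this distinction in sympy:

    from sympy import Matrix

    A = Matrix([[1, 1], [1, 1]])
    print(A.rank())      # 1, even though A has two nonzero rows
    print(A.rref()[0])   # the row-reduced form has exactly one nonzero row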
3) If A is an upper triangular n×n matrix with nonzero diagonal elements, then rowspace(A) =
Rn . [T. If A has nonzero elements on its diagonal, then A can be transformed to the iden-
tity matrix In using elementary row operations. But the rows of the identity matrix In
give the standard basis for Rn , so rowspace(In ) = Rn . Since A can be transformed to In
by elementary row operations, rowspace(A) = rowspace(In ) = Rn .]

Lecture 22 (§2.5)
TYPE I:

1) If the 3 × 4 matrix A can be transformed using elementary row operations to the matrix

    B = [ 1 1 0 0 ]
        [ 0 0 1 5 ]
        [ 0 0 0 1 ],

then rowrank(A) = 3. [T. B is obtained from A by elementary row operations, so
rowspace(A) = rowspace(B). Since B is in row-reduced form, the number of nonzero
rows of B gives the dimension of rowspace(B), which is the dimension of rowspace(A),
which is rowrank(A).]
2) If V is a vector space with basis S and the set {u1 , . . . , um } is linearly independent, then
the set of coordinate representations (with respect to S) of u1 , . . . , um is linearly
independent as a subset of Rn , where n = dim(V ). [T. Essentially, this fact was proven in
the lecture. You should try to prove it yourself, without referring back to the lecture.]
3) If V is a vector space and A is the k × n matrix whose rows are the coordinates of the
vectors in some set S ⊂ V , and S is linearly independent, then rowrank(A) = n. [F. A
has k rows, and according to the hypotheses, as a set these rows are linearly independent.
So this set forms a basis for rowspace(A), implying that rowrank(A) = k.]

Lecture 23 (§2.6)
TYPE I:
1) If A is a square matrix and the rows of A are linearly independent, then the columns of
A are also linearly independent. [T. If A is n × n and the rows are linearly independent,
then they form a basis for the row space of A, which therefore has dimension n. So the
column space also has dimension n, and it is spanned by the columns of A. Since there
are n columns, these columns must form a basis for the column space. Thus, the columns
are linearly independent.]
2) If A is k×n and r(A) = n, then the system Ax = 0 must have infinitely many solutions. [F.
As shown in lecture, Ax = 0 has nontrivial solutions only if r(A) is less than the number
n of variables in the system.]
3) If A is any k×n matrix and b is any k×1 matrix, then r(A) ≤ r([A|b]), and r(A) < r([A|b])
if and only if the system Ax = b is inconsistent. [T. If A1 , . . . , An are the columns of A,
then for any column matrix b, {A1 , . . . , An } ⊂ {A1 , . . . , An , b}, so span{A1 , . . . , An } ⊂
span{A1 , . . . , An , b}. Taking dimensions gives

dim(span{A1 , . . . , An }) ≤ dim(span{A1 , . . . , An , b}),

or
r(A) ≤ r([A|b]).
In the lecture, it was shown that r(A) = r([A|b]) if and only if Ax = b is consistent.
Therefore, r(A) < r([A|b]) if and only if Ax = b is inconsistent.]
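
The rank test for consistency is easy to try numerically (a numpy sketch with an
inconsistent system of our own choosing):

    import numpy as np

    A = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, 2.0]])
    b = np.array([[2.0], [2.0], [5.0]])
    print(np.linalg.matrix_rank(A))                  # 2
    print(np.linalg.matrix_rank(np.hstack([A, b])))  # 3 > 2, so Ax = b is inconsistent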

Lecture 24 (§2.6)
TYPE I:
1) If A is an n × n matrix and r(A) < n, then for any n × n matrix C, r(CA) < n. [T. Try
to prove this. HINT: each row of CA is a linear combination of the rows of A, so
rowspace(CA) ⊂ rowspace(A), giving r(CA) ≤ r(A) < n.]
2) If A and B are n×n matrices and AB = In , then both A and B are invertible and A−1 = B
and B −1 = A. [T. As proven in lecture, AB = In implies BA = In . Therefore, both A
and B are invertible and each is the other’s inverse.]

3) If A is a square matrix and not invertible, and if B is the row-reduced matrix obtained from
A by using elementary row operations, then B has at least one zero on its diagonal. [T. As
proven in lecture, A is invertible if and only if it can be transformed using elementary row
operations to an upper triangular matrix with all diagonal elements equal to 1. (Remember
that row-reduced form implies that the first nonzero entry in each row is 1.)]

Lecture 25 (§3.1,3.2)
TYPE I:
1) By definition, a linear transformation is one-to-one. [F. A linear transformation is a func-
tion (between vector spaces) that preserves vector addition and scalar multiplication. It
need not be one-to-one. For example, the zero transformation from a nontrivial vector
space V to a vector space W is linear but definitely not one-to-one, because all vectors in
V are mapped to the same vector (0) in W .]
2) A transformation T : V → W is onto if for every v ∈ V there exists some w ∈ W such
that T (v) = w. [F. T is onto if for every w ∈ W there exists v ∈ V such that T (v) = w.]
3) If T : V → W is linear, then T (v − w) = T v − T w for all v, w ∈ V . [T.

T (v − w) = T (v + (−1)w) = T (v) + T ((−1)w) = T v + (−1)T w = T v − T w.]

Lecture 26 (§3.3)
TYPE I:
1) If {v1 , . . . , vn } is a basis for the vector space V , and T1 : V → W and T2 : V → W are
linear transformations satisfying T1 (vj ) = T2 (vj ) for j = 1, . . . , n, then T1 v = T2 v for all
v ∈ V . [T. A linear transformation is determined by its action on a basis of its domain. So
if two linear transformations with domain V agree on a basis for V , then they must agree
on all vectors in V . More explicitly, if v = c1 v1 + · · · + cn vn , then

T1 v = T1 (c1 v1 + · · · + cn vn )
= c1 T1 v1 + · · · + cn T1 vn
= c1 T2 v1 + · · · + cn T2 vn (because T1 (vj ) = T2 (vj ), j = 1, . . . , n)
= T2 (c1 v1 + · · · + cn vn )
= T2 v.

]
2) If T : V → W is a linear transformation, B is a basis for V , C is a basis for W , and A is
a matrix such that (T v)C = A(v)B for all v ∈ V , then the jth row of A consists of the
coordinates of T vj with respect to C. [F. The coordinates of T vj with respect to C form
the jth column of A. Just note that (vj )B is the column matrix ej having a 1 in the jth
position and zeros elsewhere. So A(vj )B = Aej , which is the jth column of A.]
3) If the vector space V has basis {v1 , . . . , vn }, then the transformation ψ : V → Rn defined
by

    ψ(c1 v1 + · · · + cn vn ) = [ c1 · · · cn ]T ∈ Rn
must be linear, one-to-one, and onto. [T. If B = {v1 , . . . , vn }, then ψ is the transformation
ψ(v) = (v)B . As shown in lecture, this transformation is linear, one-to-one, and onto.]

Lecture 27 (§3.4)
TYPE I:
1) If B, C, and D are bases for the finite dimensional vector space V , then P_C^D P_B^C = P_B^D . [T.

    P_C^D P_B^C (v)B = P_C^D (v)C = (v)D = P_B^D (v)B .]

2) If A_B^B is the matrix representation of T : V → V with respect to basis B of V , and A_C^C is
the matrix representation of T with respect to basis C of V , then A_B^B P_B^C = P_B^C A_C^C . [F.
The correct formula is A_C^C P_B^C = P_B^C A_B^B . Notice that

    A_C^C P_B^C (v)B = A_C^C (v)C = (T v)C = P_B^C (T v)B = P_B^C A_B^B (v)B .]
3) Two matrices A and à are similar if there exists an invertible matrix P such that P à =
AP −1 . [F. A and à are similar if there exists an invertible matrix P such that à = P −1 AP ,
or P Ã = AP .]

Lecture 28 (§3.5)
TYPE I:
1) If T : V → W is linear, then both ker(T ) and image(T ) are subspaces of V . [F. ker(T ) is
a subspace of V , but image(T ) is a subspace of W .]
2) If T : R2 → R2 is defined by T (a, b) = (0, b), then null(T ) = 1. [T. Check that ker(T ) =
{(a, b) ∈ R2 | b = 0} = {(a, 0) | a ∈ R}. Since (a, 0) = a(1, 0), {(1, 0)} is a basis for the
kernel. Hence, null(T ) = dim(ker(T )) = 1.]
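
A sympy sketch of the same computation, using the standard-basis matrix of T:

    from sympy import Matrix

    A = Matrix([[0, 0], [0, 1]])   # T(a, b) = (0, b) in the standard basis
    print(A.nullspace())           # [Matrix([[1], [0]])]: kernel spanned by (1, 0)
    print(len(A.nullspace()))      # 1 = null(T)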
3) If T : V → V is linear, V is n-dimensional with basis B, and the matrix representation of
T with respect to B is invertible, then r(T ) = n. [T. If A is the matrix representation of
T with respect to B, then A is invertible (by hypothesis), so its rank is n. Consequently,
r(T ) = r(A) = n.]

Lecture 29 (§3.5)
TYPE I:
1) If T : V → W is linear and has a 3 dimensional image, and if dim(V ) = 7, then null(T ) = 4.
[T. As proven in lecture, r(T ) + null(T ) = dim(V ).]
2) If T : V → W is linear and T is one-to-one, then the kernel of T is nontrivial. [F. If T is
one-to-one, then the kernel must be the trivial subspace consisting only of the zero vector.]
3) If V and W are n-dimensional vector spaces, then there must exist a linear transforma-
tion T : V → W that is one-to-one and onto. [T. All vector spaces with the same (finite)
dimension are isomorphic.]

Lecture 30 (§4.1, 4.2)
TYPE I:
1) If A = [a b; c d], then det(A) = det(AT ). [T. Just compute. det(A) = ad − cb = ad − bc =
det(AT ). In the next lecture, we’ll show that det(A) = det(AT ) holds for all square matrices
A.]

2) If A has all zeros on its diagonal, then det(A) = 0. [F. For example,

    | 0 1 |
    | 1 0 |  =  −1.]
3) If A is an invertible n × n matrix and B is a row-reduced matrix obtained from A using
elementary row operations, then det(B) = 1. [T. The rank of B must equal the rank of A,
which is n. But if B has rank n and is in row-reduced form, then B is upper triangular
with 1’s on its diagonal. Therefore, det(B) = 1n = 1.]

Lecture 31 (§4.2)
TYPE I:
1) If

        [ v1 ]             [ v3 ]
    A = [ v2 ]   and   B = [ v1 ],
        [ v3 ]             [ v2 ]

where v1 , v2 , v3 ∈ R3 , then det(B) = det(A). [T. B
is obtained by first interchanging row 1 and row 3 of A, then, on the resulting matrix,
interchanging row 2 and row 3. Each pairwise interchange of rows contributes a factor of
−1 to the determinant. Therefore, det(B) = (−1) · (−1) det(A) = det(A).]
2) For any square matrix A, det(−A) = − det(A). [F. If A is n × n and λ is any scalar,
then det(λA) = λn det(A). Letting λ = −1 gives det(−A) = (−1)n det(A). Therefore,
det(−A) = − det(A) if and only if n is odd.]
3) If square matrix B is obtained from A by adding 5 times column 3 of A to column 1 of
A, then det(B) = det(A). [T. B T is obtained by adding 5 times row 3 of AT to row 1
of AT . By a result proved in lecture, this elementary row operation does not change the
determinant; so det(B T ) = det(AT ). But det(B T ) = det(B) and det(AT ) = det(A).]

Lecture 32 (§4.2)
TYPE I:
1) For any square matrix A, A is invertible if and only if det(A) = 0. [F. As proven in lecture,
A is invertible if and only if det(A) ≠ 0.]
2) For any n × n matrices A and B, det(AB) = det(BA). [T. det(AB) = det(A) det(B) =
det(B) det(A) = det(BA).]
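
A numerical spot check (a numpy sketch with random matrices):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    print(np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A)))   # True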
3) If A, B, and C are n × n matrices, B is nonsingular, and ABC = B, then det(AC) = 1.
[T. Taking determinants of both sides of ABC = B yields det(A) det(B) det(C) = det(B).
Dividing through by det(B) (which we can do because B is nonsingular, implying that
det(B) ≠ 0) gives det(A) det(C) = 1. Therefore, det(AC) = det(A) det(C) = 1.]

Lecture 33 (§4.3)
TYPE I:
1) For any linear transformation T : V → V , the real number 3 is an eigenvalue of T because
T 0 = 0 = 3 · 0. [F. By definition, an eigenvector must be nonzero. So 3 is an eigenvalue of T
if and only if there exists a nonzero vector v such that T v = 3v.]
2) If A, B, and P are n × n matrices such that B = P −1 AP , and if x is an eigenvector of A
with eigenvalue λ, then P −1 x is an eigenvector of B with eigenvalue λ. [T.

BP −1 x = P −1 AP (P −1 x) = P −1 Ax = P −1 λx = λP −1 x.]

3) If A is a diagonal n × n matrix with λ1 , . . . , λn on its diagonal, then λ1 , . . . , λn are the
eigenvalues of A. [T. If ej is the column matrix with a 1 in the jth spot and zeros elsewhere,
then Aej = λj ej , showing that each λj is an eigenvalue. By writing out the equation
Ax = λx, try to prove that any eigenvalue of A must be one of λ1 , . . . , λn . Remember that
by definition, an eigenvector is a nonzero vector.]

Lecture 34 (§4.3, 4.4)
TYPE I:
1) The matrix A = [0 −1; 1 0] has eigenvalues +1 and −1. [F. The characteristic equation
det(A − λI2 ) = 0 simplifies to λ2 + 1 = 0. Since this equation has no real roots, A has
no real eigenvalues. Its complex eigenvalues are +i and −i, with eigenvectors [i 1]T and
[1 i]T respectively.]
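
numpy’s eigensolver, which works over the complex numbers, confirms this (a minimal
sketch; the ordering of the eigenvalues may vary):

    import numpy as np

    A = np.array([[0.0, -1.0], [1.0, 0.0]])
    vals, vecs = np.linalg.eig(A)
    print(vals)   # [0.+1.j  0.-1.j]: the eigenvalues are +i and -i, not +1 and -1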

2) If 0 is an eigenvalue of A, then det(A) = 0. [T. The eigenvalues are the solutions to
det(A − λI) = 0. Since λ = 0 is an eigenvalue, det(A) = det(A − 0I) = 0.]
3) The matrices A = [1 1; 0 1] and B = [1 0; 0 1] have the same characteristic equations but
are not similar. [T. det(A − λI2 ) = (1 − λ)2 = det(B − λI2 ). But B is I2 , which commutes
with every matrix. So for any invertible matrix P , P −1 BP = P −1 I2 P = I2 . So the only
matrix similar to B = I2 is B = I2 . Therefore, A and B are not similar. In fact, A is not
similar to any diagonal matrix. Think about why.]

Lecture 35 (§4.4)
TYPE I:

1) If A has a nonzero eigenvalue, then A must be invertible. [F. As proven in lecture, A is
invertible if and only if all of its eigenvalues are nonzero.]

2) If 4 is an eigenvalue of A, then 64 is an eigenvalue of A3 . [T. If Ax = 4x, then

A3 x = A(A(Ax)) = A(A(4x)) = 4A(Ax) = 4 · A(4x) = 16Ax = 64x.

]
3) If det(A − λIn ) = (λ − 1)(λ − 2) · · · (λ − n), then tr(A) = n(n + 1)/2. [T. The roots of the
characteristic equation

det(A − λIn ) = (λ − 1)(λ − 2) · · · (λ − n) = 0

are λ = 1, 2, . . . , n, and these are the eigenvalues of A. According to a theorem stated in
lecture, their sum 1 + 2 + · · · + n = n(n + 1)/2 is the trace of A.]
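
The trace/eigenvalue-sum theorem is easy to verify numerically (a numpy sketch with a
small symmetric example of our own):

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
    print(np.trace(A))                                          # 4.0
    print(np.isclose(np.trace(A), np.linalg.eigvals(A).sum()))  # True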

Lecture 36 (§4.5)
TYPE I:
1) If A is a 2 × 2 matrix with 2 eigenvectors, then A must be diagonalizable. [F. A is
diagonalizable if and only if A has 2 linearly independent eigenvectors. Note that any
multiple of an eigenvector is an eigenvector, so the existence of one eigenvector always
implies the existence of infinitely many eigenvectors.]

2) If M −1 AM = D is a diagonal matrix, then each column of M must be an eigenvector of A.
[T. Let xj be the jth column of M , let λj be the jth diagonal element of D, and let ej be
the jth standard basis vector.

Axj = AM ej = M Dej = M λj ej = λj M ej = λj xj .

So xj is an eigenvector of A with eigenvalue λj .]
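
A numpy sketch of the diagonalization (np.linalg.eig returns the eigenvectors as the
columns of M):

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    vals, M = np.linalg.eig(A)
    D = np.linalg.inv(M) @ A @ M
    print(np.allclose(D, np.diag(vals)))   # True: each column of M is an eigenvector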

3) If λ1 , . . . , λn are the eigenvalues (listed with multiplicity) of the n × n matrix A, then any
diagonal matrix to which A is similar must have λ1 , . . . , λn on its diagonal, although not
necessarily in that order. [T. This assertion is a slight re-wording of the second theorem
proven in the lecture. As an exercise you should prove that any two n×n diagonal matrices
with λ1 , . . . , λn on their diagonals must be similar. Hint: elementary matrices acting on
the left can permute rows, and acting on the right can permute columns.]

Lecture 37 (§4.5)
TYPE I:
1) If A is 3 × 3 and the real eigenvalues of A are −1, −2, and −3, then A is diagonalizable.
[T. A has 3 distinct real eigenvalues. Any three corresponding eigenvectors are linearly
independent. Therefore, A is diagonalizable.]
2) If A is 3×3 and the real eigenvalues of A are −1 and −2, then A must not be diagonalizable.
[F. A may or may not be diagonalizable. If A has two linearly independent eigenvectors
with eigenvalue −1, or two linearly independent eigenvectors with eigenvalue −2, then A
will have 3 linearly independent eigenvectors. In this case A is diagonalizable. However,
if dim(S−1 ) = dim(S−2 ) = 1, then there does not exist a set of 3 linearly independent
eigenvectors, so in this case A is not diagonalizable.]
3) If A is 5 × 5 and r(A − 6I5 ) = 3, then 6 is an eigenvalue of A and dim(S6 ) = 2. [T. If
r(A−6I5 ) = 3 < 5, then det(A−6I5 ) = 0, implying that 6 is an eigenvalue of A. According
to a theorem proven in lecture, dim(S6 ) = 5 − r(A − 6I5 ) = 2.]
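
A quick check of the rank computation (a numpy sketch with a diagonal example of our own):

    import numpy as np

    A = np.diag([6.0, 6.0, 1.0, 2.0, 3.0])       # 6 is an eigenvalue of multiplicity 2
    r = np.linalg.matrix_rank(A - 6 * np.eye(5))
    print(r, 5 - r)                              # 3 2: r(A − 6I5) = 3, so dim(S6) = 2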

