
Invertible matrices are very important in many areas of science.

For example, decrypting a coded message uses invertible matrices (see the coding page). The problem of finding the inverse of a matrix will be discussed in a different page (click here). Definition. An n×n matrix A is called nonsingular or invertible iff there exists an n×n matrix B such that

AB = BA = In,

where In is the identity matrix. The matrix B is called the inverse matrix of A.
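The defining equations AB = BA = In can be checked directly. Below is a minimal sketch in Python using plain lists of rows; the matrices A and B are illustrative choices (the originals on this page were lost in extraction), picked so that B really is the inverse of A.

```python
# Check the definition: B is the inverse of A exactly when AB = BA = I_n.
# A and B below are illustrative 2x2 matrices, not from the original page.

def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    """The n x n identity matrix I_n."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[2, 1],
     [1, 1]]
B = [[1, -1],
     [-1, 2]]

print(matmul(A, B) == identity(2))  # True
print(matmul(B, A) == identity(2))  # True
```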

Elementary operations for matrices play a crucial role in finding the inverse or solving linear systems. They may also be used for other calculations. On this page, we will discuss this type of operation. Before we define an elementary operation, recall that to an n×m matrix A, we can associate n rows and m columns. For example, consider the matrix

Its rows are

Its columns are

Let us consider the matrix transpose of A

Its rows are

As we can see, the columns of A become the rows of AT. So the transpose operation interchanges the rows and the columns of a matrix.
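This row/column interchange is easy to see computationally. The sketch below uses an illustrative 2×3 matrix (the matrix displayed above was lost in extraction): each row of the transpose is a column of the original.

```python
# The transpose swaps rows and columns: row i of A^T is column i of A.
# The matrix A here is an illustrative example.

def transpose(M):
    """Return the transpose of M (a list of rows)."""
    return [list(col) for col in zip(*M)]

A = [[1, 2, 3],
     [4, 5, 6]]

AT = transpose(A)
print(AT)      # [[1, 4], [2, 5], [3, 6]]
print(AT[0])   # first row of A^T = first column of A
```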

Therefore many techniques which are developed for rows may be easily translated to columns via the transpose operation. Thus, we will only discuss elementary row operations, but the reader may easily adapt these to columns.

Elementary Row Operations.
1. Interchange two rows.
2. Multiply a row by a nonzero number.
3. Add to a row a multiple of another row.

Definition. Two matrices are row equivalent if and only if one may be obtained from the other via elementary row operations. Example. Show that the two matrices

are row equivalent. Answer. We start with A. If we keep the first row and add the first row to the second one, we get

We keep the first row. Then we replace the second row by three times itself minus the first row. We get

We keep the first row and subtract the first row from the second one. We get

which is the matrix B. Therefore A and B are row equivalent. One powerful use of elementary operations consists in finding solutions to linear systems and the inverse of a matrix. This happens via Echelon Form and Gauss-Jordan Elimination. In order to appreciate these two techniques, we need to discuss when a matrix is row equivalent to a triangular matrix. Let us illustrate this with an example. Example. Consider the matrix

First we will transform the first column via elementary row operations into one with the top number equal to 1 and the bottom ones equal to 0. Indeed, if we interchange the first row with the last one, we get

Next, we keep the first and last rows, and subtract the first row multiplied by 2 from the second one. We get

We are almost there. Looking at this matrix, we see that we can still take care of the 1 (from the last row) under the -2. Indeed, if we keep the first two rows

and add the second one to the last one multiplied by 2, we get

We can't do more. Indeed, we stop the process whenever we have a matrix which satisfies the following conditions:
1. any row consisting of zeros is below any row that contains at least one nonzero number;
2. the first (from left to right) nonzero entry of any row is to the left of the first nonzero entry of any lower row.
Now if we make sure that the first nonzero entry of every row is 1, we get a matrix in row echelon form. For example, the matrix above is not in echelon form. But if we divide the second row by -2, we get

This matrix is in echelon form.
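The two stopping conditions above, plus the requirement that every leading entry be 1, can be turned into a mechanical check. Here is a sketch; the test matrices are illustrative, since the matrices on this page were lost in extraction.

```python
# Check whether a matrix (list of rows) is in row echelon form:
# zero rows sit at the bottom, each leading entry lies strictly to the
# left of the leading entries of lower rows, and leading entries are 1.

def leading_index(row):
    """Column index of the first nonzero entry, or None for a zero row."""
    for j, x in enumerate(row):
        if x != 0:
            return j
    return None

def is_row_echelon(M):
    last = -1
    seen_zero_row = False
    for row in M:
        lead = leading_index(row)
        if lead is None:
            seen_zero_row = True       # condition 1: zero rows at the bottom
            continue
        if seen_zero_row:              # a nonzero row below a zero row
            return False
        if lead <= last:               # condition 2: leading entries move right
            return False
        if row[lead] != 1:             # leading entries must equal 1
            return False
        last = lead
    return True

print(is_row_echelon([[1, 2, 0],
                      [0, 1, 3],
                      [0, 0, 0]]))     # True
print(is_row_echelon([[1, 2],
                      [0, -2]]))       # False: leading entry -2 is not 1
```

The second example mirrors the situation above: the matrix fails the check until its second row is divided by -2.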

Operations


There are three types of elementary matrices, which correspond to three types of row operations (respectively, column operations):
Row switching: a row within the matrix can be switched with another row.
Row multiplication: each element in a row can be multiplied by a non-zero constant.
Row addition: a row can be replaced by the sum of that row and a multiple of another row.
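The three operations can be sketched directly as functions on a matrix stored as a list of rows. The matrix M below is an illustrative example; each function returns a new matrix rather than modifying its argument.

```python
# The three elementary row operations, as pure functions.

def swap_rows(M, i, j):
    """Row switching: interchange rows i and j."""
    R = [row[:] for row in M]
    R[i], R[j] = R[j], R[i]
    return R

def scale_row(M, i, c):
    """Row multiplication: multiply row i by a nonzero constant c."""
    assert c != 0
    R = [row[:] for row in M]
    R[i] = [c * x for x in R[i]]
    return R

def add_multiple(M, i, j, c):
    """Row addition: replace row i by row i + c * row j."""
    R = [row[:] for row in M]
    R[i] = [x + c * y for x, y in zip(R[i], R[j])]
    return R

M = [[1, 2],
     [3, 4]]
print(swap_rows(M, 0, 1))         # [[3, 4], [1, 2]]
print(scale_row(M, 0, 5))         # [[5, 10], [3, 4]]
print(add_multiple(M, 1, 0, -3))  # [[1, 2], [0, -2]]
```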

If E is an elementary matrix, as described below, then to apply the elementary row operation to a matrix A, one multiplies A by the elementary matrix on the left: EA. The elementary matrix for any row operation is obtained by executing the operation on the identity matrix.
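This recipe can be sketched concretely: perform the row operation on the identity matrix to get E, then check that EA equals the result of performing the same operation on A directly. The matrices here are illustrative.

```python
# Build an elementary matrix by performing the row operation on I_n,
# then apply it by left multiplication: EA.

def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# Operation: add 2 * row 0 to row 1. Executing it on I_2 gives E.
E = identity(2)
E[1][0] = 2

A = [[1, 2],
     [3, 4]]

EA = matmul(E, A)
print(EA)   # [[1, 2], [5, 8]] -- row 1 of A plus 2 * row 0
```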

Row-switching transformations


The first type of row operation on a matrix A switches all matrix elements on row i with their counterparts on row j. The corresponding elementary matrix is obtained by swapping row i and row j of the identity matrix.

So TijA is the matrix produced by exchanging row i and row j of A.

Properties


The inverse of this matrix is itself: Tij^−1 = Tij.

Since the determinant of the identity matrix is unity, and swapping two rows changes the sign of the determinant, det[Tij] = −1. It follows that for any square matrix A (of the correct size), we have det[TijA] = −det[A].
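Both properties can be verified on a small case. The sketch below uses T12 for 2×2 matrices and an illustrative matrix A; note that swapping two rows flips the sign of the determinant.

```python
# T_12 for 2x2 matrices: the identity with its two rows swapped.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

T = [[0, 1],
     [1, 0]]

print(matmul(T, T) == [[1, 0], [0, 1]])  # True: T is its own inverse
print(det2(T))                           # -1

A = [[3, 1],
     [2, 5]]
print(det2(matmul(T, A)) == -det2(A))    # True: the sign flips
```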

Row-multiplying transformations
The next type of row operation on a matrix A multiplies all elements on row i by m where m is a nonzero scalar (usually a real number). The corresponding elementary matrix is a diagonal matrix, with diagonal entries 1 everywhere except in the ith position, where it is m.

So Ti(m)A is the matrix produced from A by multiplying row i by m.

Properties


The inverse of this matrix is: Ti(m)^−1 = Ti(1/m). The matrix and its inverse are diagonal matrices. det[Ti(m)] = m. Therefore for a square matrix A (of the correct size), we have det[Ti(m)A] = m det[A].
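A quick sketch with m = 5 on 2×2 matrices; A is illustrative. The inverse simply rescales by 1/m, and the determinant picks up a factor of m.

```python
# T_2(5): multiply row 2 by m = 5. Its inverse multiplies row 2 by 1/5.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

T = [[1, 0],
     [0, 5]]
Tinv = [[1, 0],
        [0, 1 / 5]]

print(matmul(T, Tinv) == [[1, 0], [0, 1]])  # True
print(det2(T))                              # 5

A = [[1, 2],
     [3, 4]]
print(det2(matmul(T, A)) == 5 * det2(A))    # True
```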

Row-addition transformations
The final type of row operation on a matrix A adds row j multiplied by a scalar m to row i. The corresponding elementary matrix is the identity matrix but with an m in the (i,j) position.

So Ti,j(m)A is the matrix produced from A by adding m times row j to row i.

Properties


These transformations are a kind of shear mapping, also known as transvections. Tij(m)^−1 = Tij(−m) (inverse matrix). The matrix and its inverse are triangular matrices. det[Tij(m)] = 1. Therefore, for a square matrix A (of the correct size) we have det[Tij(m)A] = det[A]. Row-addition transforms satisfy the Steinberg relations.
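These properties can also be checked on a small case. The sketch below uses T21(3) (add 3 times row 1 to row 2) for 2×2 matrices; A is illustrative. The inverse undoes the addition with −m, and the determinant is unchanged.

```python
# T_21(3): identity with m = 3 in the (2,1) position.
# Its inverse is T_21(-3).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

m = 3
T = [[1, 0],
     [m, 1]]
Tinv = [[1, 0],
        [-m, 1]]

print(matmul(T, Tinv) == [[1, 0], [0, 1]])  # True
print(det2(T))                              # 1

A = [[1, 2],
     [3, 4]]
print(det2(matmul(T, A)) == det2(A))        # True: determinant unchanged
```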
