Here are two facts about row reduced echelon form.
For every matrix $A$, there is a sequence of row operations taking $A$ to a matrix in row reduced echelon form.
We prove this by induction on the number of columns of $A$. When $A$ has one column, either $A$ is the zero vector (in which case it is already in RREF) or it has a nonzero entry $a$. Swap $a$ to the top row, multiply the top row by $a^{-1}$, and use the resulting entry as a pivot to eliminate the other entries of $A$. The result is the vector with a 1 at the top and zeroes elsewhere, which is in RREF.
For the inductive step, suppose that $A$ has $n$ columns and that the result is true for all matrices with $n - 1$ columns. We then know that there is a sequence of row operations we can do to $A$ resulting in a matrix whose first $n - 1$ columns form a RREF matrix. Suppose the matrix formed by these $n - 1$ columns has $z$ rows of zeroes at the bottom. If the final column has zeroes in its bottom $z$ entries, the matrix is in RREF. If not, swap a nonzero entry to the top of these $z$ rows, use it as a pivot to eliminate all the other nonzero entries in the final column, and multiply its row by a scalar so that the entry is 1. The result is in RREF. ∎
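The column-by-column reduction in this proof can be sketched as a short program. This is a minimal illustration, not part of the notes: the function name `rref` is ours, and we assume exact rational arithmetic via Python's `Fraction` to avoid floating-point error.

```python
# Sketch of the row reduction algorithm from the proof, column by column.
# Assumes exact rational arithmetic; the name `rref` is illustrative.
from fractions import Fraction

def rref(mat):
    """Row reduce a list-of-lists matrix, returning a new matrix in RREF."""
    A = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(A), len(A[0])
    pivot_row = 0                       # next row that should receive a pivot
    for col in range(cols):
        # find a nonzero entry in this column at or below pivot_row
        r = next((i for i in range(pivot_row, rows) if A[i][col] != 0), None)
        if r is None:
            continue                    # column of zeroes: nothing to do
        A[pivot_row], A[r] = A[r], A[pivot_row]           # swap it to the top
        scale = A[pivot_row][col]
        A[pivot_row] = [x / scale for x in A[pivot_row]]  # make the pivot 1
        for i in range(rows):           # eliminate the rest of the column
            if i != pivot_row and A[i][col] != 0:
                factor = A[i][col]
                A[i] = [a - factor * p for a, p in zip(A[i], A[pivot_row])]
        pivot_row += 1
    return A
```

For example, `rref([[2, 4], [1, 3]])` returns the identity matrix `[[1, 0], [0, 1]]`.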
Let $A$ be a matrix. If $R$ and $S$ are RREF matrices that can be obtained by doing row operations to $A$, then $R = S$.
This theorem says that there is only one RREF matrix which can be obtained by doing row operations to $A$, so we are justified in calling the unique RREF matrix reachable from $A$ the row reduced echelon form of $A$.
Again, the proof is by induction on the number of columns of $A$. There are only two RREF column vectors: the zero vector and the vector with a 1 at the top and all other entries zero. Clearly no sequence of row operations takes one of these to the other (row operations send the zero vector only to itself), so the base case of the induction holds.
For the inductive step, suppose that $A$ has $n$ columns and that $R$ and $S$ are RREF matrices reachable from $A$. Let $A'$, $R'$, and $S'$ be the matrices formed by the first $n - 1$ columns of $A$, $R$, and $S$ respectively. The matrices $R'$ and $S'$ are RREF matrices formed by doing row operations to $A'$, so by induction they are equal. Suppose for a contradiction that $R \neq S$, so that there is some $i$ such that the $i$th entry in the last column of $R$ differs from the corresponding entry of $S$.
Theorem 3.9.1 tells us that the equations $A\mathbf{x} = \mathbf{0}$, $R\mathbf{x} = \mathbf{0}$, and $S\mathbf{x} = \mathbf{0}$ all have exactly the same solutions.
Let $\mathbf{v}$ be any solution of $R\mathbf{x} = \mathbf{0}$. We have $R\mathbf{v} = \mathbf{0}$, so $S\mathbf{v} = \mathbf{0}$ as well, and therefore $(R - S)\mathbf{v} = \mathbf{0}$. Since the first $n - 1$ columns of $R - S$ are all zeroes, we get $v_n(\mathbf{u} - \mathbf{w}) = \mathbf{0}$, where $\mathbf{u}$ and $\mathbf{w}$ are the last columns of $R$ and $S$ and $v_n$ is the last entry of $\mathbf{v}$. Since $\mathbf{u} \neq \mathbf{w}$, we get $v_n = 0$. In other words, every solution of $R\mathbf{x} = \mathbf{0}$ has last entry zero.
The RREF matrix $R'$ has some nonzero rows and then some zero rows. Say there are $z$ zero rows and $k$ nonzero rows. We can then write $R'$ like this
$$R' = \begin{pmatrix} B \\ 0 \end{pmatrix}$$
where $B$ has no zero rows. Then $R$ and $S$ have the form
$$R = \begin{pmatrix} B & \mathbf{r} \\ 0 & \mathbf{r}' \end{pmatrix}, \qquad S = \begin{pmatrix} B & \mathbf{s} \\ 0 & \mathbf{s}' \end{pmatrix}$$
where $\mathbf{r}$ and $\mathbf{s}$ are the top $k$ entries of the last columns of $R$ and $S$, and $\mathbf{r}'$ and $\mathbf{s}'$ are the bottom $z$ entries.
I claim that $\mathbf{r}' \neq \mathbf{0}$. Suppose for a contradiction that $\mathbf{r}' = \mathbf{0}$. The first $k$ rows of $R$ all have leading entries. For $1 \leq i \leq k$, let the leading entry in row $i$ of $R$ occur in column number $c_i$. Let $\mathbf{r}$ have entries $r_1, \ldots, r_k$, where $k$ is the number of rows of $B$. Notice that column $c_i$ of $R$ has a 1 in position $i$ and zeroes everywhere else, so if we add up $r_1$ times column $c_1$, $r_2$ times column $c_2$, and so on, up to $r_k$ times column $c_k$, we get the vector
$$\begin{pmatrix} \mathbf{r} \\ \mathbf{0} \end{pmatrix}$$
which is the last column of $R$. It follows that the vector with $-1$ in position $n$, with $r_i$ in position $c_i$ for $1 \leq i \leq k$, and with zeroes elsewhere is a solution to $R\mathbf{x} = \mathbf{0}$. This contradicts every solution to $R\mathbf{x} = \mathbf{0}$ having last entry zero.
Since $R$ is in RREF and its last column contains a leading entry, $\mathbf{r}'$ must have a 1 at the top and all other entries zero, and $\mathbf{r} = \mathbf{0}$. The same argument applies to $S$, so $\mathbf{s}'$ has a 1 at the top and all other entries zero, and $\mathbf{s} = \mathbf{0}$. This shows $R = S$, contradicting our assumption that $R \neq S$. ∎
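Though no substitute for the proof, the theorem can be sanity-checked: two different sequences of row operations applied to the same matrix must land on the same RREF. Here is a small check under exact `Fraction` arithmetic; the starting matrix and all names are illustrative, not from the notes.

```python
# Two different reduction paths applied to the same matrix reach the
# same RREF, as the uniqueness theorem predicts. Matrix is hypothetical.
from fractions import Fraction as F

def scale(row, c):      return [c * x for x in row]
def add(row, c, other): return [x + c * y for x, y in zip(row, other)]

A = [[F(2), F(2), F(4)],
     [F(1), F(3), F(5)]]

# Path 1: scale the first row, eliminate downward, then eliminate upward.
r1, r2 = A
r1 = scale(r1, F(1, 2))      # [1, 1, 2]
r2 = add(r2, F(-1), r1)      # [0, 2, 3]
r2 = scale(r2, F(1, 2))      # [0, 1, 3/2]
r1 = add(r1, F(-1), r2)      # [1, 0, 1/2]
path1 = [r1, r2]

# Path 2: swap the rows first and reduce in a different order.
s1, s2 = A[1], A[0]          # swapped
s2 = add(s2, F(-2), s1)      # [0, -4, -6]
s2 = scale(s2, F(-1, 4))     # [0, 1, 3/2]
s1 = add(s1, F(-3), s2)      # [1, 0, 1/2]
path2 = [s1, s2]

print(path1 == path2)        # True: both paths reach the same RREF
```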
Let $A$ be the matrix to be reduced. We want to perform a sequence of row operations on $A$ ending with a matrix in RREF. Row 1 has a leading entry in position $(1,1)$, but the other entries in column 1 aren't 0. We use the $(1,1)$ entry as a pivot to eliminate the other entries in column 1. That is, we apply row operations of the form $r_i \mapsto r_i + \lambda r_1$ to make the other entries in column 1 equal to 0.
This matrix isn't in RREF. One reason is that the leading entry in row 2, in position $(2,2)$, isn't equal to 1. To make that leading entry 1 we can use the row operation that multiplies row 2 by the reciprocal of that entry:
Now we have a leading entry in row 2, column 2 which is equal to 1, but there are other nonzero entries in that column. We use the $(2,2)$ entry as the next pivot to eliminate the other entries in column 2.
This matrix is in RREF, so we are done.
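The three steps just performed can be traced in code. The matrix below is a hypothetical stand-in (a $2 \times 3$ matrix with a leading 1 already in position $(1,1)$), chosen only so that each step of the walkthrough is visible; exact `Fraction` arithmetic keeps the entries exact.

```python
# Tracing the three steps of the worked example on a hypothetical matrix.
from fractions import Fraction as F

A = [[F(1), F(2), F(3)],
     [F(2), F(6), F(4)]]

# Step 1: use the (1,1) entry as a pivot, r2 -> r2 - 2*r1.
A[1] = [x - 2 * y for x, y in zip(A[1], A[0])]   # row 2 becomes [0, 2, -2]

# Step 2: the leading entry in row 2 is 2, not 1; multiply row 2 by 1/2.
A[1] = [F(1, 2) * x for x in A[1]]               # row 2 becomes [0, 1, -1]

# Step 3: use the (2,2) entry as a pivot, r1 -> r1 - 2*r2.
A[0] = [x - 2 * y for x, y in zip(A[0], A[1])]   # row 1 becomes [1, 0, 5]

print(A)   # [[1, 0, 5], [0, 1, -1]], which is in RREF
```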