Is there a way to know if a row reduction of a matrix has been done correctly?

We know that elementary row operations do not change the row space of a matrix. And if a matrix is in reduced row echelon form, then it is relatively easy to check whether a given vector belongs to its row space.

So suppose you have a matrix $A$ and a matrix $B$ in reduced row echelon form. If $R_A$ and $R_B$ are their row spaces, you can easily check whether $R_A\subseteq R_B$. Of course, this is only "half"1 of the verification that $R_A=R_B$, which is equivalent to $A\sim B$.


Example. Suppose that I have a matrix $$A= \begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ \end{pmatrix}.$$

And that after Gaussian elimination I get: $$B= \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 &-1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} $$

To check whether $R_A\subseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$, i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, we can see that in the coordinates where the pivots are, such a combination gives exactly $c_1$, $c_2$ and $c_3$. So the coefficients are easy to read off.

Let us try with the fourth row: $(1,2,1,1)$. We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether $$(\boxed{1},\boxed{2},\boxed{1},1)= 1\cdot(1,0,0,1)+2\cdot(0,1,0,-1)+1\cdot(0,0,1,2) $$ We see that this is true. If the same thing works for each row of $A$, this shows that $R_A\subseteq R_B$.
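If you want to automate this check, here is a minimal SymPy sketch of the same idea (the variable names are mine; the candidate coefficients are read off the pivot columns exactly as above, and every row of $A$ is tested):

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 2],
            [1, 1, 0, 0],
            [0, 1, 1, 1],
            [1, 2, 1, 1]])
B = Matrix([[1, 0, 0, 1],
            [0, 1, 0, -1],
            [0, 0, 1, 2],
            [0, 0, 0, 0]])

pivot_cols = [0, 1, 2]          # columns of B containing the pivots
pivot_rows = [B.row(i) for i in range(len(pivot_cols))]

for i in range(A.rows):
    v = A.row(i)
    # In the pivot coordinates the combination gives exactly the coefficients,
    # so the only candidate is c_j = v[pivot_cols[j]].
    combo = sum((v[c] * r for c, r in zip(pivot_cols, pivot_rows)),
                Matrix.zeros(1, A.cols))
    print(i + 1, combo == v)    # True for every row iff R_A ⊆ R_B
```

Running it prints `True` for all four rows, which is exactly the inclusion $R_A\subseteq R_B$ checked by hand above.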

Now let me try another example, in which I make a mistake on purpose, to see how such a mistake can be found. $$\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ \end{pmatrix}\overset{(1)}\sim \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ \end{pmatrix}\overset{(2)}\sim \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ \end{pmatrix}\overset{(3)}\sim \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{pmatrix}\overset{(4)}\sim \begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}\overset{(5)}\sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} $$

We can check that $$(1,1,1,2)\ne 1\cdot(1,0,0,0)+1\cdot(0,1,0,0)+1\cdot(0,0,1,1),$$ so the first row of $A$ is not in the row space of the result, and the mistake is detected.

I can even make the same verification for the matrix obtained after each step. For example, for the matrix after step $(2)$, i.e., $\begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ \end{pmatrix}$, the check passes: every row is a linear combination of the rows of the final matrix. So the error must have occurred before this point, i.e., in step $(1)$ or $(2)$. (Indeed, if the intended operation in step $(1)$ was to subtract the second row from the first, the result should have been $(0,0,1,2)$ rather than $(0,0,1,1)$.)
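Here is a rough sketch of how such a step-by-step check could be automated (again SymPy; I test the inclusion by a rank comparison instead of reading off pivot coefficients, which amounts to the same thing; only the original matrix and the matrix after step $(2)$ are included, as an illustration):

```python
from sympy import Matrix

# Final (incorrect) result of the reduction above.
B_final = Matrix([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 0]])

# Original matrix and the matrix obtained after step (2).
A = Matrix([[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]])
after_step_2 = Matrix([[0, 0, 1, 1], [1, 1, 0, 0], [0, 1, 0, 0], [1, 2, 0, 0]])

def rows_contained(M, B):
    # All rows of M lie in the row space of B exactly when
    # appending M below B does not increase the rank.
    return B.col_join(M).rank() == B.rank()

print(rows_contained(A, B_final))             # False -> something went wrong
print(rows_contained(after_step_2, B_final))  # True  -> the error is in step (1) or (2)
```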


I will stress once again that this is only half of the verification: I have only checked $R_A\subseteq R_B$, not $R_B\subseteq R_A$.

So it is possible to make a mistake that this check does not notice. Here is a (rather naive) example:

$$\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 &-1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix}\sim \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix} $$

The sanity check described above passes. (We check that $R_A\subseteq R_B$, which is true here: the final matrix has rank $4$, so its row space contains every vector.) But the result is incorrect: the fourth row of $A$ is the sum of the second and third rows, so $A$ has rank $3$ and its reduced row echelon form must contain a zero row.
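To illustrate the limitation concretely, here is a short SymPy sketch (the helper is the same rank-based inclusion test as above): the forward check $R_A\subseteq R_B$ passes, but the reverse inclusion fails, which is exactly the half that was not checked.

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]])
B_wrong = Matrix([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rows_contained(M, B):
    # All rows of M lie in the row space of B iff the rank does not grow.
    return B.col_join(M).rank() == B.rank()

print(rows_contained(A, B_wrong))   # True  -> the one-sided check R_A ⊆ R_B passes
print(rows_contained(B_wrong, A))   # False -> R_B ⊆ R_A fails, e.g. (0,0,0,1) is not in R_A
```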


If I want to be able to check both inclusions, and additionally to be able to make a check after each step, I can use an extended matrix that records how each row was obtained from the rows of the original matrix. (But this is much more work.)

In our example I would do the following $$ \left(\begin{array}{cccc|cccc} 1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 &-1 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 1 & 0 &-1 &-1 & 0 & 1 &-1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right)\sim \left(\begin{array}{cccc|cccc} 1 & 0 & 0 & 1 & 1 & 0 &-1 & 0 \\ 0 & 1 & 0 &-1 &-1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right) $$ Now the four numbers on the right of each row are the coefficients which tell me how to obtain that row as a linear combination of the rows of the original matrix. For example, if I look at the first row, I can check that $$1\cdot(1,1,1,2)-1\cdot(0,1,1,1)=(1,0,0,1).$$ By making a similar verification for each row I can test that $R_B\subseteq R_A$.
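A quick SymPy verification of this bookkeeping (the matrix `E` below is simply the right-hand block of the final extended matrix; the identity $EA=B$ says precisely that each row of $B$ is the stated combination of the rows of $A$):

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]])
B = Matrix([[1, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, 2], [0, 0, 0, 0]])

# Right-hand block of the final extended matrix: row i holds the
# coefficients expressing row i of B in terms of the rows of A.
E = Matrix([[ 1,  0, -1, 0],
            [-1,  1,  1, 0],
            [ 1, -1,  0, 0],
            [ 0, -1, -1, 1]])

print(E * A == B)   # True: every row of B lies in R_A, so R_B ⊆ R_A
```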

Notice that I can also do this halfway through the computation. For example, the last row of the third matrix is $$\left(\begin{array}{cccc|cccc} 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\ \end{array}\right)$$ and I can check that $$-1\cdot(1,1,0,0)-1\cdot(0,1,1,1)+1\cdot(1,2,1,1)=(0,0,0,0).$$


1 This is similar to the advice given in the comments: if you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all solutions, so this is just a "half-check".
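As a tiny sketch of that kind of substitution check (the system and candidate solution here are made up for illustration):

```python
from sympy import Matrix

# A hypothetical system M x = b and a candidate solution x found by elimination.
M = Matrix([[1, 1], [1, -1]])
b = Matrix([3, 1])
x = Matrix([2, 1])

print(M * x == b)   # True: x really is a solution...
# ...but this alone does not tell you whether you have found *all* solutions.
```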