If $AB = I$ then $BA = I$

Dilawar says in comment 2 that he knows linear dependence! So I will give a proof, similar to TheMachineCharmer's, which uses linear independence.

Suppose each matrix is $n$ by $n$. We regard all of our matrices as acting on some $n$-dimensional vector space with a chosen basis (hence the isomorphism between linear transformations and $n$ by $n$ matrices).

Then $AB$ has range equal to the full space, since $AB=I$. Thus the range of $B$ must also have dimension $n$. For if it did not, then a set of $n-1$ vectors would span the range of $B$, so the range of $AB$, which is the image under $A$ of the range of $B$, would also be spanned by a set of $n-1$ vectors, hence would have dimension less than $n$.
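(As a quick sanity check of the rank fact used here, $\operatorname{rank}(AB) \le \min(\operatorname{rank} A, \operatorname{rank} B)$, here is a small NumPy sketch; the low-rank example matrices are mine and are not part of the argument.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A rank-3 matrix B, built as a product of a 5x3 and a 3x5 factor.
B = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))
A = rng.standard_normal((n, n))   # a generic (almost surely full-rank) A

rank = np.linalg.matrix_rank
print(rank(A), rank(B), rank(A @ B))   # typically: 5 3 3
```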

Now note that $B=BI=B(AB)=(BA)B$. By the distributive law, $(I-BA)B=0$. Since $B$ has full range, every vector is of the form $Bv$, so $I-BA$ sends every vector to $0$. But this means that it must be the $0$ matrix, so $I=BA$.


So you want to find a proof of this well-known fact which avoids the usual "indirect" proofs? I also pondered this some time ago.

We have the following general assertion:

Let $M$ be a finite-dimensional $K$-algebra, and let $a,b \in M$ be such that $ab=1$; then $ba=1$. [For example, $M$ could be the algebra of $n \times n$ matrices.]

Proof: The sequence of subspaces $\cdots \subseteq b^{k+1} M \subseteq b^k M \subseteq \cdots \subseteq M$ must become stationary, since $M$ is finite-dimensional. Thus there are some $k$ and some $c \in M$ such that $b^k = b^{k+1} c$. Now multiply by $a^k$ on the left; since $ab=1$ gives $a^k b^k = 1$, this yields $1 = bc$. Then $ba = ba \cdot 1 = ba(bc) = b(ab)c = bc = 1$. QED
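To see the stationarity phenomenon concretely, here is a small NumPy sketch. The matrix is my own example and does not satisfy $ab=1$; it only illustrates that the dimensions $\dim b^k M = \operatorname{rank}(b^k)$ must eventually freeze.

```python
import numpy as np

# A nilpotent 3x3 matrix: the chain bM, b^2 M, ... strictly drops, then freezes at 0.
b = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

for k in range(1, 5):
    bk = np.linalg.matrix_power(b, k)
    print(k, np.linalg.matrix_rank(bk))   # dim(b^k M) = rank(b^k): 2, 1, 0, 0
```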

No commutativity condition is needed. The proof shows, more generally, that the claim holds in every left- or right-artinian ring $M$.

Note that we needed some finiteness condition in an essential way. There is no purely algebraic manipulation with $a,b$ that shows $ab = 1 \Rightarrow ba=1$ (and shift operators provide a concrete counterexample). Every proof uses some finiteness argument of the type above. For example, when you want to argue with linear maps, you have to know that every subspace of a finite-dimensional(!) vector space of the same dimension is actually the whole vector space, for which there is also no "direct" proof; I doubt that there is one.
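To make the shift-operator counterexample concrete: on the polynomial ring $K[x]$, let $B$ be multiplication by $x$ and let $A$ drop the constant term and divide by $x$. Here is a small SymPy sketch of this (my rendering; the function names are of course made up):

```python
# Shift operators on K[x]: A(B(p)) = p for every p (so "AB = 1"),
# but B(A(p)) = p - p(0), which loses the constant term ("BA != 1").
import sympy as sp

x = sp.symbols('x')

def B(p):                 # right shift: p(x) |-> x*p(x)
    return sp.expand(x * p)

def A(p):                 # left shift: p(x) |-> (p(x) - p(0)) / x
    return sp.expand(sp.cancel((p - p.subs(x, 0)) / x))

p = 3 + 2*x + 5*x**2
print(A(B(p)))            # 5*x**2 + 2*x + 3   -> p recovered, AB = 1
print(B(A(p)))            # 5*x**2 + 2*x       -> constant term lost, BA != 1
```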


PS. See here for a proof of $AB=1 \Rightarrow BA=1$ for square matrices over a commutative ring.


If $\rm\,B\,$ is a linear map on a finite-dimensional vector space $\rm\, V\,$ over a field $\rm\,K,\,$ then easily by finite dimensionality (cf. Note below) there is a polynomial $\rm\,0\ne p(x)\in K[x]\;$ with $\rm\,p(B) = 0.\,$ W.l.o.g. $\rm\,p(0) \ne 0,\,$ by canceling any factors of $\rm\,B\;$ from $\rm\;p(B)\;$ by left-multiplying by $\rm\,A,\,$ using $\rm\, AB = 1.$

Notice $\rm\ AB=1 \, \Rightarrow\, (BA-1)\, B^n =\, 0\;$ for $\,\rm\;n>0,\;$ since $\rm\; (BA-1)\,B^n =\, B\,(AB)\,B^{n-1} -\, B^n =\, 0$

so by linearity $\rm\, 0 \,=\, (BA-1)\ p(B)\, =\, (BA-1)\ p(0) \;\Rightarrow\; BA=1 \quad\quad$ QED
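Here is a concrete instance of this, sketched in SymPy; the example matrix is mine, and I take $\rm\,p(x)\,$ to be the characteristic polynomial, which annihilates $\rm\,B\,$ by Cayley-Hamilton and has $\rm\,p(0) = \pm\det B \ne 0.$

```python
# Sketch: for this B, Cayley-Hamilton gives p(B) = 0 with p(0) != 0,
# and rearranging p(B) = 0 exhibits a two-sided inverse of B as a
# polynomial in B itself, so AB = 1 forces BA = 1.
import sympy as sp

B = sp.Matrix([[2, 1],
               [1, 1]])
lam = sp.symbols('lam')
p = B.charpoly(lam).as_expr()        # lam**2 - 3*lam + 1, with p(0) = 1 != 0
print(p)

# Cayley-Hamilton: B**2 - 3*B + I = 0, so B*(3*I - B) = I,
# i.e. A := 3*I - B is a two-sided inverse of B.
I2 = sp.eye(2)
A = 3*I2 - B
print(A * B == I2, B * A == I2)      # True True
```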

This is essentially a special case of computing inverses by the Euclidean algorithm - see my Apr 13 1999 sci.math post on Google or mathforum.

Note $ $ The existence of $\rm\;p(x)\;$ follows simply from the fact that $\rm\,V\;$ finite-dimensional implies the same for the vector space $\rm\,L(V)\,$ of linear maps on $\rm\,V\,$ (indeed, if $\rm\,V\;$ has dimension $\rm\,n\,$ then a linear map is uniquely determined by its matrix of $\,\rm n^2\,$ coefficients). So the $\rm\,n^2\!+1\,$ maps $\rm\, 1,\, B,\, B^2,\,\cdots,\, B^{\,n^2}$ are $\rm\,K$-linearly dependent in $\rm\, L(V),\,$ which yields the sought nonzero polynomial annihilating $\rm\,B.$
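The dependence in this Note can be found mechanically: vectorize $\rm\,1,\, B,\,\cdots,\, B^{\,n^2}\,$ into columns of length $\rm\,n^2\,$ and take a null-space vector of the resulting matrix. A SymPy sketch (example matrix assumed):

```python
# Flatten the powers of B into columns; more columns than rows forces a
# nontrivial null-space vector, whose entries are the coefficients of a
# nonzero polynomial p with p(B) = 0.
import sympy as sp

B = sp.Matrix([[2, 1],
               [1, 1]])
n = B.shape[0]

# Columns: vectorized powers I, B, B^2, ..., B^(n^2).
powers = [B**k for k in range(n**2 + 1)]
S = sp.Matrix.hstack(*[P.reshape(n*n, 1) for P in powers])

coeffs = S.nullspace()[0]            # p(x) = sum_k coeffs[k] * x**k
print(coeffs.T)

p_of_B = sum((c * P for c, P in zip(coeffs, powers)), sp.zeros(n, n))
print(p_of_B)                        # the zero matrix: p(B) = 0
```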