Matrix Inverses and Eigenvalues

(Assuming $\mathbf{A}$ is a square matrix, of course). Here's a solution that does not invoke determinants or diagonalizability, but only the definition of eigenvalue/eigenvector, and the characterization of invertibility in terms of the nullspace. (Added for clarity: $\mathbf{N}(\mathbf{A}) = \mathrm{ker}(\mathbf{A}) = \{\mathbf{x}\mid \mathbf{A}\mathbf{x}=\mathbf{0}\}$, the nullspace/kernel of $\mathbf{A}$.)

\begin{align*}
\mbox{$\mathbf{A}$ is not invertible} &\Longleftrightarrow \mathbf{N}(\mathbf{A})\neq\{\mathbf{0}\}\\
&\Longleftrightarrow \mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}\mathbf{x}=\mathbf{0}$}\\
&\Longleftrightarrow \mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}\mathbf{x}=0\mathbf{x}$}\\
&\Longleftrightarrow \mbox{there exists an eigenvector of $\mathbf{A}$ with eigenvalue $\lambda=0$}\\
&\Longleftrightarrow \mbox{$\lambda=0$ is an eigenvalue of $\mathbf{A}$.}
\end{align*}
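
(Not part of the argument, but here is a quick NumPy check if you want to see it numerically; the singular matrix below is just an example I picked for illustration.)

```python
import numpy as np

# A singular matrix (the second row is twice the first), chosen only as an example:
# x = (2, -1) lies in the nullspace, so 0 must appear among the eigenvalues.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.eigvals(A))   # approximately [0. 5.]

x = np.array([2.0, -1.0])
print(A @ x)                  # [0. 0.], i.e. A x = 0 x
```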

Note that this argument holds even in the case where $\mathbf{A}$ has no eigenvalues (which can happen when working over a field that is not algebraically closed), in which case the condition "the eigenvalues of $\mathbf{A}$ are all nonzero" is vacuously true.

For $\mathbf{A}$ invertible:
\begin{align*}
\mbox{$\lambda\neq 0$ is an eigenvalue of $\mathbf{A}$} &\Longleftrightarrow \mbox{$\lambda\neq 0$ and there exists $\mathbf{x}\neq \mathbf{0}$ such that $\mathbf{A}\mathbf{x}=\lambda\mathbf{x}$}\\
&\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}({\textstyle\frac{1}{\lambda}}\mathbf{x}) = \mathbf{x}$}\\
&\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq \mathbf{0}$ such that $\mathbf{A}^{-1}\mathbf{A}({\textstyle\frac{1}{\lambda}}\mathbf{x}) = \mathbf{A}^{-1}\mathbf{x}$}\\
&\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq \mathbf{0}$ such that ${\textstyle\frac{1}{\lambda}}\mathbf{x} = \mathbf{A}^{-1}\mathbf{x}$}\\
&\Longleftrightarrow\mbox{${\textstyle\frac{1}{\lambda}}$ is an eigenvalue of $\mathbf{A}^{-1}$.}
\end{align*}
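
(Again purely as a sanity check, not part of the proof: a small NumPy example, with a matrix chosen for illustration, showing that the eigenvalues of $\mathbf{A}^{-1}$ are the reciprocals of those of $\mathbf{A}$.)

```python
import numpy as np

# An invertible, upper-triangular matrix, chosen so the eigenvalues (1 and 3)
# can be read off the diagonal.
A = np.array([[1.0, 1.0],
              [0.0, 3.0]])

print(np.sort(np.linalg.eigvals(A)))                 # [1. 3.]
print(np.sort(np.linalg.eigvals(np.linalg.inv(A))))  # [0.333... 1.], the reciprocals
```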


Here is a short proof using the fact that the eigenvalues of ${\bf A}$ are precisely the solutions in $\lambda$ to the equation $\det ({\bf A}-\lambda {\bf I})=0$.

Suppose one of the eigenvalues is zero, say $\lambda_k=0$. Then $\det ({\bf A}-\lambda_k {\bf I})=\det ({\bf A})=0$, so ${\bf A}$ is not invertible.

On the other hand, suppose all eigenvalues are nonzero. Then $\lambda=0$ is not a solution to the equation $\det ({\bf A}-\lambda {\bf I})=0$, so $\det({\bf A})=\det({\bf A}-0\cdot{\bf I})\neq 0$, and hence ${\bf A}$ is invertible.
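
(If a numerical illustration helps, here is one with a matrix I picked arbitrarily: $\det({\bf A}-\lambda{\bf I})$ vanishes exactly at the eigenvalues, and its value at $\lambda=0$ is $\det({\bf A})$.)

```python
import numpy as np

# Example matrix with eigenvalues 1 and 3, so det(A) = 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
I = np.eye(2)

# det(A - lambda*I) vanishes exactly at the eigenvalues ...
for lam in np.linalg.eigvals(A):
    print(lam, np.linalg.det(A - lam * I))   # ~0 in both cases

# ... and evaluating at lambda = 0 recovers det(A) itself,
# which is nonzero because 0 is not an eigenvalue.
print(np.linalg.det(A - 0.0 * I))            # 3.0
```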

I'll leave the second question to you.


A different approach from Joseph's (which also shows the form of $A^{-1}$):

Let us assume for simplicity that $A$ is diagonalizable (the argument can most likely be extended to the general case using the Jordan normal form). The matrix $A$ can then be brought into the form $$A= T D T^{-1}$$ with $D = \text{diag}(\lambda_i)$ a diagonal matrix containing the eigenvalues, and $T$ an invertible matrix. The inverse of $A$ therefore reads $$A^{-1} = (T D T^{-1})^{-1} = T D^{-1} T^{-1}.$$ This inverse exists if $D^{-1}$ exists. But $D^{-1}$ is easy to calculate: it is given by $D^{-1} =\text{diag}(\lambda_i^{-1})$, which exists precisely when all $\lambda_i$ are nonzero.
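
(A small NumPy sketch of this, under the same diagonalizability assumption and with a matrix chosen just for illustration: build $A^{-1}$ as $T D^{-1} T^{-1}$ and compare it with the directly computed inverse.)

```python
import numpy as np

# A diagonalizable, invertible matrix chosen for illustration
# (eigenvalues 5 and 2, both nonzero).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, T = np.linalg.eig(A)     # A = T D T^{-1}, columns of T are eigenvectors
D_inv = np.diag(1.0 / eigvals)    # D^{-1} = diag(1/lambda_i)

A_inv = T @ D_inv @ np.linalg.inv(T)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```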