How to determine the diagonalizability of these two matrices?

The diagonalization theorem, here for example, states that you can take $$ A = \left[\begin{matrix} -1 & 0 & 1\\3 & 0 & -3\\1 & 0 & -1\end{matrix}\right]$$ and turn it into a diagonal matrix $$ V = \left[\begin{matrix} 0 & 0 & 0\\0 & 0 & 0\\0 & 0 & -2\end{matrix}\right], $$ whose diagonal entries are the eigenvalues $(0,0,-2)$ of $A$, via $$V = P^{-1} A P,$$ where $P = (v_1 \quad v_2 \quad v_3)$ is invertible. This is possible precisely when $A$ has $n = 3$ linearly independent eigenvectors $v_1, v_2, v_3$. In this case, although $\lambda_1 = \lambda_2 = 0$, you still get a non-singular $$ P = \left[\begin{matrix} 1 & 0 & -1\\0 & 1 & 3\\1 & 0 & 1\end{matrix}\right]. $$

To decide diagonalizability, then, find all the eigenvectors, form $P$, and check whether $P$ is non-singular (equivalently, whether $v_1, v_2, v_3$ are linearly independent). For the first matrix, however, $$P = \left[\begin{matrix} 1/4 & 1 & 0\\1/2 & 1 & 0\\1 & 1 & 0\end{matrix}\right], $$ which is singular, so that matrix is not diagonalizable.
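If you want to verify this numerically, here is a minimal sketch using SymPy (the library choice is mine, not part of the answer); note that `diagonalize()` may order the eigenvalues along the diagonal differently from $(0,0,-2)$:

```python
import sympy as sp

# The worked example above: A has eigenvalues 0, 0, -2.
A = sp.Matrix([[-1, 0,  1],
               [ 3, 0, -3],
               [ 1, 0, -1]])

# diagonalize() returns (P, D) with P**-1 * A * P == D,
# and raises an error if A lacks 3 independent eigenvectors.
P, D = A.diagonalize()
print(D)                       # diagonal matrix of eigenvalues (order may vary)
print(P.inv() * A * P == D)    # True

# The singular P from the first matrix: a zero column forces det = 0.
P_bad = sp.Matrix([[sp.Rational(1, 4), 1, 0],
                   [sp.Rational(1, 2), 1, 0],
                   [1,                 1, 0]])
print(P_bad.det())  # 0, so this P is not invertible
```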


The converse of the theorem does not hold: if an $n\times n$ matrix does not have $n$ distinct eigenvalues, that does not mean it cannot be diagonalized. In fact, we cannot decide this from the number of distinct eigenvalues alone; having $n$ distinct eigenvalues is just a sufficient (but not necessary) condition for diagonalizability. Check out the first example here: http://en.wikipedia.org/wiki/Diagonalizable_matrix
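To make the point concrete, here is a small SymPy sketch (my own illustration, not from the linked article): the identity matrix has a single eigenvalue of multiplicity $3$ yet is diagonalizable, while a Jordan block with one repeated eigenvalue is not:

```python
import sympy as sp

# The 3x3 identity has only one distinct eigenvalue (1, multiplicity 3),
# yet it is diagonalizable -- it is already diagonal.
print(sp.eye(3).is_diagonalizable())  # True

# A Jordan block has the same repeated eigenvalue but only one
# independent eigenvector, so it is not diagonalizable.
print(sp.Matrix([[1, 1], [0, 1]]).is_diagonalizable())  # False
```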


To check whether $A=\left[\begin{matrix} 0 & 1 & 0\\0 & 0 & 1\\2 & -5 & 4\end{matrix}\right]$ is diagonalizable (assuming the entries are taken over the field $\mathbb R$):

Suppose with respect to some basis $\beta$ of $\mathbb R^3_\mathbb R,~[T]_{\beta}=A$ for some linear operator $T$ of $\mathbb R^3_\mathbb R.$ Then $\chi_T(x)=(x-1)^2(x-2).$ Consequently the characteristic values of $T$ are $1,2$ (since $1,2\in\mathbb R$).
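The factorization of $\chi_T$ can be checked quickly with SymPy (an illustrative choice of tool, not part of the answer):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0,  1, 0],
               [0,  0, 1],
               [2, -5, 4]])

chi = A.charpoly(x)              # characteristic polynomial of A
print(sp.factor(chi.as_expr()))  # (x - 2)*(x - 1)**2
print(A.eigenvals())             # {1: 2, 2: 1}: eigenvalue -> algebraic multiplicity
```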

Let us first check whether $T$ is diagonalizable:

$E_1(T)=\{v\in\mathbb R^3:Tv=v\}=Ker~(T-I_{\mathbb R^3})$

$E_2(T)=\{v\in\mathbb R^3:Tv=2v\}=Ker~(T-2I_{\mathbb R^3})$

Now

$Rank~[T-I_{\mathbb R^3}]_\beta=Rank\left[\begin{matrix} -1 & 1 & 0\\0 & -1 & 1\\2 & -5 & 3\end{matrix}\right]=2$ (the determinant vanishes since $1$ is an eigenvalue, so the rank is at most $2$; the first two rows are linearly independent, so it is exactly $2$) and

$Rank~[T-2I_{\mathbb R^3}]_\beta=Rank\left[\begin{matrix} -2 & 1 & 0\\0 & -2 & 1\\2 & -5 & 2\end{matrix}\right]= 2$ (operating on rows).

Recall: for any two finite-dimensional vector spaces $V$ and $W$ over the same field, $T\in L(V,W)\implies Rank~T=Rank$ of any matrix representing $T.$

Consequently, $\dim E_1(T)=Nullity~(T-I_{\mathbb R^3})=3-Rank~(T-I_{\mathbb R^3})=3-Rank~[T-I_{\mathbb R^3}]_\beta=3-2=1.$ Similarly $\dim E_2(T)=3-2=1.$
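These ranks and nullities are easy to confirm numerically; a minimal SymPy sketch (my tooling choice, using $A$ as the matrix of $T$ in the basis $\beta$):

```python
import sympy as sp

A = sp.Matrix([[0,  1, 0],
               [0,  0, 1],
               [2, -5, 4]])
I = sp.eye(3)

# Each eigenspace dimension is the nullity of T - lambda*I,
# i.e. 3 minus the rank of its matrix.
print((A - I).rank(),   3 - (A - I).rank())    # 2, 1  -> dim E_1 = 1
print((A - 2*I).rank(), 3 - (A - 2*I).rank())  # 2, 1  -> dim E_2 = 1
```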

Now $\dim E_1(T)+\dim E_2(T)=2\ne 3=\dim \mathbb R^3.$ Consequently $T$ is not diagonalizable.

$($Alternatively, $\chi_T(x)=(x-1)^2(x-2)\neq (x-1)^{\dim E_1(T)}(x-2)^{\dim E_2(T)}.$ Consequently $T$ is not diagonalizable.$)$

Therefore $A$ is not diagonalizable.
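The whole conclusion can be double-checked in one call; again a SymPy sketch under the same assumptions as above:

```python
import sympy as sp

A = sp.Matrix([[0,  1, 0],
               [0,  0, 1],
               [2, -5, 4]])

# Sum the geometric multiplicities: eigenvects() returns tuples of
# (eigenvalue, algebraic multiplicity, basis of the eigenspace).
geom = sum(len(basis) for _, _, basis in A.eigenvects())
print(geom)                   # 2, which is less than dim R^3 = 3
print(A.is_diagonalizable())  # False
```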