Block diagonal matrix diagonalizable

Short answer: the minimal polynomial of $C$ is the monic lcm of the minimal polynomials of $A$ and $B$. And a square matrix is diagonalizable if and only if its minimal polynomial splits over the field under consideration (which is automatic over $\mathbb{C}$, of course) with only simple roots. In other words, as pointed out by DonAntonio: if and only if its minimal polynomial is a product of pairwise distinct monic linear factors over that field.
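Here is a quick SymPy sanity check of this claim (the matrices $A$ and $B$ below are my own toy choices, picked so that their minimal polynomials are known by construction; this is only an illustration, not part of the argument):

```python
# Toy check: the lcm of the minimal polynomials annihilates C,
# while a smaller candidate does not.
from sympy import Matrix, diag, eye, zeros

A = Matrix([[1, 0], [0, 2]])   # diagonal, minimal polynomial (x-1)(x-2)
B = Matrix([[1, 1], [0, 1]])   # Jordan block, minimal polynomial (x-1)^2
C = diag(A, B)                 # block diagonal 4x4 matrix

def p_A(M):   return (M - eye(M.rows)) * (M - 2*eye(M.rows))       # (x-1)(x-2)
def p_B(M):   return (M - eye(M.rows))**2                          # (x-1)^2
def p_lcm(M): return (M - eye(M.rows))**2 * (M - 2*eye(M.rows))    # lcm of the two

print(p_A(A) == zeros(2, 2))     # True
print(p_B(B) == zeros(2, 2))     # True
print(p_lcm(C) == zeros(4, 4))   # True: the lcm annihilates C
print(p_A(C) == zeros(4, 4))     # False: (x-1)(x-2) is too small for C
```

Consistently with the criterion, this $C$ is not diagonalizable: its minimal polynomial, the lcm, has the repeated root $1$.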

Now I'll give a detailed argument without explicit use of minimal polynomials.

Fact: a square matrix $M$ with coefficients in a field $K$ is diagonalizable if and only if there exists a nonzero polynomial $p(X)\in K[X]$ which splits over $K$ with simple roots and such that $p(M)=0$.

Proof: if $M$ is diagonalizable and if $\{\lambda_1,\ldots,\lambda_k\}$ is the set of its (non-repeated) eigenvalues, then $p(X)=(X-\lambda_1)\cdots(X-\lambda_k)$ annihilates $M$. Conversely, if such a polynomial $p(X)=(X-\lambda_1)\cdots(X-\lambda_k)$ with $\lambda_j$ pairwise distinct annihilates $M$, then, since the factors $X-\lambda_j$ are pairwise coprime, we have (by Bezout, essentially): $K^n=\mbox{Ker } p(M)=\bigoplus_{j=1}^k\mbox{Ker } (M-\lambda_j I_n)$. So $K^n$ is the direct sum of eigenspaces of $M$, and concatenating bases of these eigenspaces yields a basis of eigenvectors, whence diagonalizability. QED.
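To see the converse direction of the fact in action, here is a small SymPy illustration of the kernel decomposition (the matrix is an arbitrary example of mine, annihilated by $(X-1)(X-3)$):

```python
# (x-1)(x-3) has simple roots and annihilates M, so Ker(M - I) and Ker(M - 3I)
# together span K^2, and stacking bases of them gives a diagonalizing basis.
from sympy import Matrix, eye, zeros

M = Matrix([[2, 1], [1, 2]])
print((M - eye(2)) * (M - 3*eye(2)) == zeros(2, 2))   # True: p(M) = 0

basis = (M - eye(2)).nullspace() + (M - 3*eye(2)).nullspace()
S = Matrix.hstack(*basis)                    # eigenvectors as columns
print(S.det() != 0)                          # True: they form a basis of K^2
print(S.inv() * M * S)                       # diag(1, 3)
```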

Now for every polynomial $p(X)$, you have $$ p(C)=\left(\matrix{p(A)&0\\0&p(B)}\right) $$ This gives you the annoying direction: if $C$ is diagonalizable, the fact provides a polynomial $p(X)$ which splits with simple roots and satisfies $p(C)=0$; then $p(A)=0$ and $p(B)=0$ by the identity above, so $A$ and $B$ are diagonalizable by the fact again.
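A one-line check of this block identity with SymPy, for one arbitrary polynomial and arbitrary blocks of my own choosing:

```python
# p(C) = diag(p(A), p(B)) for p(X) = X^2 - 3X + 5I.
from sympy import Matrix, diag, eye

A = Matrix([[0, 1], [2, 3]])
B = Matrix([[5, 0], [1, 4]])
C = diag(A, B)                               # block diagonal matrix

def p(M):
    return M**2 - 3*M + 5*eye(M.rows)

print(p(C) == diag(p(A), p(B)))              # True
```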

The converse is easier. Take $P$ and $Q$ invertible such that $PAP^{-1}$ and $QBQ^{-1}$ are diagonal. Then $$ R:=\left(\matrix{P&0\\0&Q}\right) $$ is invertible with $$ R^{-1}=\left(\matrix{P^{-1}&0\\0&Q^{-1}}\right)\qquad \mbox{and}\qquad RCR^{-1}=\left(\matrix{PAP^{-1}&0\\0&QBQ^{-1}}\right) $$ is diagonal.
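In SymPy this construction looks as follows (note that `diagonalize()` returns $(S,D)$ with $S^{-1}AS=D$, so I invert it to match the $PAP^{-1}$ convention used above; the blocks are again arbitrary examples of mine):

```python
from sympy import Matrix, diag

A = Matrix([[2, 1], [1, 2]])                 # eigenvalues 1, 3
B = Matrix([[0, -2], [1, 3]])                # eigenvalues 1, 2
C = diag(A, B)

SA, DA = A.diagonalize()                     # A = SA * DA * SA**-1
SB, DB = B.diagonalize()
P, Q = SA.inv(), SB.inv()                    # so P*A*P**-1 = DA, Q*B*Q**-1 = DB

R = diag(P, Q)                               # block diagonal change of basis
print(R * C * R.inv() == diag(DA, DB))       # True: R C R^{-1} is diagonal
```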

Note: you can also do the converse with the fact above: just take the lcm of the minimal polynomials of $A$ and $B$. It annihilates $C$, and it splits with simple roots because the minimal polynomials of $A$ and $B$ do.


When $C$ is diagonalisable, we have $S^{-1}CS=\Lambda$, or equivalently, $CS=S\Lambda$ for some invertible matrix $S$ and some diagonal matrix $\Lambda=\operatorname{diag}(\lambda_1,\ldots,\lambda_{m+n})$ (here $A$ is $n\times n$ and $B$ is $m\times m$). Hence $$ \pmatrix{A&0\\ 0&B}\pmatrix{x_j\\ y_j}=\lambda_j\pmatrix{x_j\\ y_j} $$ where $\pmatrix{x_j\\ y_j}$ is the $j$-th column of $S$. Consequently $Ax_j=\lambda_jx_j$. As $S$ is invertible, the top $n$ rows of $S$, i.e. the matrix $(x_1,x_2,\ldots,x_{m+n})$, must have full row rank, i.e. rank $n$. Since row rank is always equal to column rank, it follows that the $n\times n$ submatrix $X=(x_{k_1},x_{k_2},\ldots,x_{k_n})$ is invertible for some $k_1,\ldots,k_n\in\{1,2,\ldots,m+n\}$. Hence the equality $Ax_j=\lambda_jx_j$ implies that $AX=XD$, where $D=\operatorname{diag}(\lambda_{k_1},\ldots,\lambda_{k_n})$. So $X^{-1}AX=D$, i.e. $A$ is diagonalisable. For a similar reason, $B$ is diagonalisable too.
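Here is the column-selection step of this argument traced on a tiny example with SymPy (with $n=2$, $m=1$, and blocks of my own choosing); the pivot columns of the top $n$ rows of $S$ play the role of $x_{k_1},\ldots,x_{k_n}$:

```python
from sympy import Matrix, diag

A = Matrix([[2, 1], [1, 2]])                 # n = 2
B = Matrix([[3]])                            # m = 1
C = diag(A, B)

S, Lam = C.diagonalize()                     # C*S = S*Lam
top = S[:2, :]                               # the matrix (x_1, ..., x_{m+n})
_, pivots = top.rref()                       # indices k_1, ..., k_n of independent columns
X = Matrix.hstack(*[top[:, k] for k in pivots])
D = diag(*[Lam[k, k] for k in pivots])
print(X.inv() * A * X == D)                  # True: X diagonalizes A
```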

Remark. As the OP asks for a way to find $n$ linearly independent vectors among $x_1,\ldots,x_{m+n}$, this answer directly addresses the OP's question. However, in my opinion, the preferred way to show that $A$ and $B$ are diagonalizable is to employ minimal polynomials. See Julien's answer for instance. Sadly, undergraduate linear algebra courses nowadays are not quite as "algebraic" as they used to be.