There exists $C\neq0$ with $CA=BC$ iff $A$ and $B$ have a common eigenvalue

For the $\Rightarrow$ direction, if $A$ is diagonalizable, then it's easy to finish off your argument: let $x_1, \ldots, x_m$ be a basis of $V$ consisting of eigenvectors of $A$, with corresponding eigenvalues $\lambda_1, \ldots, \lambda_m$. Since $C$ is nonzero, some $Cx_i$ must be nonzero, so your argument shows that $\lambda_i$ is an eigenvalue of $B$.
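
A quick numerical illustration of this case (my example, not part of the original argument): here $A$, $B$, $C$ are concrete matrices with $CA = BC$, and the eigenvector of $A$ that $C$ does not kill is sent to an eigenvector of $B$ for the same eigenvalue.

```python
import numpy as np

A = np.diag([1.0, 2.0])          # eigenvalues 1 and 2
B = np.diag([1.0, 3.0])          # shares only the eigenvalue 1 with A
C = np.array([[1.0, 0.0],
              [0.0, 0.0]])       # nonzero, and CA = BC holds

assert np.allclose(C @ A, B @ C)

x = np.array([1.0, 0.0])         # eigenvector of A for lambda = 1
lam = 1.0
assert np.linalg.norm(C @ x) > 0                 # C does not kill x
assert np.allclose(B @ (C @ x), lam * (C @ x))   # Cx is a lambda-eigenvector of B
```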

If $A$ is not diagonalizable then it's trickier. One way is to use generalized eigenvectors: a nonzero vector $v \in V$ is a generalized eigenvector of $A$ if for some positive integer $k$ and scalar $\lambda$, we have $(A-\lambda I)^k v = 0$. The scalar $\lambda$ is always an eigenvalue of $A$ if such an equation holds. The key fact about generalized eigenvectors is that, over an algebraically closed field (say $\mathbb{C}$), for every matrix $A$ there is a basis for $V$ consisting of generalized eigenvectors of $A$.
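
For instance (a small sympy check of the definitions, my addition): the Jordan block below has only a one-dimensional eigenspace, but the standard basis still consists of generalized eigenvectors for $\lambda = 1$.

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [0, 1]])
lam = 1
N = A - lam * sp.eye(2)

e1, e2 = sp.Matrix([1, 0]), sp.Matrix([0, 1])
assert N * e1 == sp.zeros(2, 1)        # e1 is an honest eigenvector (k = 1)
assert N * e2 != sp.zeros(2, 1)        # e2 is not an eigenvector ...
assert N**2 * e2 == sp.zeros(2, 1)     # ... but (A - I)^2 e2 = 0, so k = 2
```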

Take a basis $x_1, \ldots, x_m$ of generalized eigenvectors of $A$, with corresponding eigenvalues $\lambda_1, \ldots, \lambda_m$. Then, as before, some $Cx_i$ is nonzero, and a short computation using the condition $CA=BC$ shows that $$(B-\lambda_i I)^k C = C (A-\lambda_i I)^k.$$ (Indeed, $(B-\lambda_i I)C = BC - \lambda_i C = CA - \lambda_i C = C(A-\lambda_i I)$, and the general case follows by induction on $k$.) Using this, we conclude that $Cx_i$ is a generalized eigenvector of $B$, and therefore $\lambda_i$ is an eigenvalue of $B$.
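
Here is a concrete sympy check of this step (my own example, not from the question): $A$ has a Jordan block for $\lambda = 5$ together with the eigenvalue $7$, $B$ is a Jordan block for $5$, and $C$ is a nonzero intertwiner.

```python
import sympy as sp

A = sp.Matrix([[5, 1, 0],
               [0, 5, 0],
               [0, 0, 7]])
B = sp.Matrix([[5, 1],
               [0, 5]])
C = sp.Matrix([[0, 1, 0],
               [0, 0, 0]])
assert C * A == B * C                  # C is a nonzero intertwiner

lam, k = 5, 2
lhs = (B - lam * sp.eye(2))**k * C
rhs = C * (A - lam * sp.eye(3))**k
assert lhs == rhs                      # the displayed identity

x = sp.Matrix([0, 1, 0])               # generalized eigenvector of A for 5
assert C * x != sp.zeros(2, 1)         # C does not kill x ...
assert (B - lam * sp.eye(2))**k * (C * x) == sp.zeros(2, 1)
# ... so Cx is a generalized eigenvector of B, and 5 is an eigenvalue of B
```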

For the $\Leftarrow$ direction, again the diagonalizable case is easy: take a common eigenvalue $\lambda$ and map an eigenvector of $A$ to an eigenvector of $B$, just as you did above, and send the rest of the basis of eigenvectors of $A$ to $0$. You can check that $CA=BC$ holds on this basis of eigenvectors.
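
As a sketch of this construction in code (my addition): sending one eigenvector of $A$ to an eigenvector of $B$ and the rest of the basis to $0$ amounts to taking the rank-one matrix $C = wu^T$, where $u$ is a left eigenvector of $A$ and $w$ an eigenvector of $B$, both for the common eigenvalue $\lambda$; then $CA = w(u^TA) = \lambda wu^T = (Bw)u^T = BC$.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 5.0]])
B = np.array([[5.0, 0.0],
              [0.0, 9.0]])
lam = 5.0
u = np.array([[0.0], [1.0]])     # left eigenvector: u^T A = lam u^T
w = np.array([[1.0], [0.0]])     # right eigenvector: B w = lam w

C = w @ u.T                      # rank one, hence nonzero
assert np.allclose(C @ A, B @ C)
```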

The general case is done by using generalized eigenvectors. Take a common eigenvalue $\lambda$. Now we have to be careful: we can't always map an eigenvector of $A$ to an eigenvector of $B$. (Let $A$ be $$\left( \begin{matrix} 1 & 1\cr 0 & 1 \end{matrix} \right)$$ and $B = I$; show that every matrix $C$ satisfying $CA=BC$ must take the unique eigenspace of $A$ to zero.) Consider the generalized eigenspaces $$V_{\mu} = \{v \in V : (A - \mu I)^k v = 0 \mbox{ for some positive integer } k\}.$$ The space $V$ is the direct sum of all the $V_{\mu}$. For $\mu \not = \lambda$, you send $V_{\mu}$ to $0$. The tricky part is what to do with $V_{\lambda}$ itself.
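
The parenthetical exercise can be checked mechanically (a sympy computation, my addition): solving $CA = BC$ for a general $2\times 2$ matrix $C$ forces the first column of $C$ to vanish, i.e. $C$ kills the eigenspace $\langle e_1 \rangle$.

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [0, 1]])
B = sp.eye(2)
a, b, c, d = sp.symbols('a b c d')
C = sp.Matrix([[a, b],
               [c, d]])

# Treat the matrix equation CA - BC = 0 entrywise and solve:
sols = sp.solve(C * A - B * C, [a, b, c, d], dict=True)
print(sols)   # [{a: 0, c: 0}] -> the first column of C is zero,
              # so C e1 = 0 for every intertwining C
```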

Since we're just worrying about $V_{\lambda}$ now, we can replace $V$ by $V_\lambda$; thus we may assume that $\lambda$ is the only eigenvalue of $A$. Let $k$ be the smallest positive integer such that $(A-\lambda I)^k v = 0$ for all $v \in V$. For lack of a better term, let's call $k$ the index of $A$ for $V$. We proceed by induction on $k$. If $k=1$, we are in the diagonalizable case.

If $k>1$, let $v$ be an eigenvector of $A$ for $\lambda$. Then $A$ preserves the line $\langle v \rangle$ and therefore acts on the quotient $\overline{V} = V/\langle v \rangle$. Now we can see that $(A-\lambda I)^{k-1}$ kills $\overline{V}$ (since otherwise $(A - \lambda I)^k$ would not kill $V$). Hence the index of $A$ in $\overline{V}$ is less than $k$. Inductively, we have a nonzero map $\overline{C} : \overline{V} \to W$ such that $\overline{C} A = B \overline{C}$, and by composing with the projection from $V$ to $\overline{V}$, we get a nonzero map $C : V \to W$ satisfying $CA = BC$.

EDIT: I don't think the fact that the index of $A$ in $\overline{V}$ is smaller than $k$ is as trivial as I made it sound above. You need to look at the Jordan blocks of $A$. Or, instead of induction on $k$, just note that $\dim \overline{V} < \dim V$ and use induction on $\dim V$ instead.
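
As an aside (not part of the argument above), the whole equivalence is easy to test computationally: writing $c = \operatorname{vec}(C)$ in column-major order, $CA = BC$ becomes $(A^T \otimes I - I \otimes B)\,c = 0$, and the eigenvalues of this Kronecker operator are exactly the differences $\lambda_i(A) - \mu_j(B)$, so a nonzero $C$ exists iff the spectra intersect. A minimal numpy sketch:

```python
import numpy as np

def has_intertwiner(A, B):
    # CA = BC  <=>  (A^T kron I - I kron B) vec(C) = 0, column-major vec
    m, n = B.shape[0], A.shape[0]
    K = np.kron(A.T, np.eye(m)) - np.kron(np.eye(n), B)
    return K.shape[1] - np.linalg.matrix_rank(K) > 0   # nonzero null space?

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                    # Jordan block, eigenvalue 1
print(has_intertwiner(A, np.eye(2)))          # True:  common eigenvalue 1
print(has_intertwiner(A, np.array([[2.0]])))  # False: spectra disjoint
```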


(This is not an answer! Modified question/comment:)

We are considering a nonzero linear transformation $C$ such that $CA=BC$, and this implies that $A$ and $B$ have a common eigenvalue.

If $C$ is invertible, then $B = CAC^{-1}$, so $A$ and $B$ have all their eigenvalues in common.

So does the number of common eigenvalues depend on the rank of $C$? Since $C$ is nonzero, its rank is at least $1$.

[I couldn't find the button of "comment" below question, therefore posting the remark/sub-question here.]


Here is a mild generalization. Let $A$ be a principal ideal domain, and let $V$ and $W$ be two finitely generated torsion modules. Then there is a nonzero $A$-linear map from $V$ to $W$ if and only if there is an irreducible element $p$ such that $pV\not=V$ and $pW\not=W$.

We can assume $V=A/(p^r)$, $W=A/(q^s)$, where $p$ and $q$ are irreducible elements, and $r$ and $s$ are positive integers, for in the general case $V$ and $W$ will be finite direct sums of modules of this form. We must check that there is a nonzero $A$-linear map from $V$ to $W$ if and only if $(p)=(q)$.

If $(p)=(q)$, we compose the canonical projection $A/(p^r)\to A/(p)$ with the $A$-linear map $A/(p)\to A/(p^s)$ induced by multiplication by $p^{s-1}$; this composite is nonzero because it sends $1$ to $p^{s-1}\not=0$ in $A/(p^s)$.

Assume $(p)\not=(q)$. Let $f:A/(p^r)\to A/(q^s)$ be $A$-linear, and let $x$ be in $A/(p^r)$. We have $p^rf(x)=f(p^rx)=f(0)=0$. As $p^r$ is invertible mod $q^s$, this forces $f(x)=0$, and the proof is complete.
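
A concrete instance with $A = \mathbb{Z}$ (my illustration): an additive map $\mathbb{Z}/n \to \mathbb{Z}/m$ is determined by the image $x$ of $1$, subject to $nx \equiv 0 \pmod m$, so there are exactly $\gcd(n,m)$ homomorphisms, and a nonzero one exists iff $\gcd(n,m) > 1$:

```python
from math import gcd

def hom_count(n, m):
    # Number of additive maps Z/n -> Z/m: count valid images x of 1
    return sum(1 for x in range(m) if (n * x) % m == 0)

print(hom_count(8, 4))   # 4 -> nonzero maps Z/2^3 -> Z/2^2 exist (p = q = 2)
print(hom_count(8, 9))   # 1 -> only the zero map Z/2^3 -> Z/3^2 (p != q)
assert hom_count(8, 4) == gcd(8, 4) and hom_count(8, 9) == gcd(8, 9)
```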

EDIT. Here is a further generalization. Let $A$ be an a priori noncommutative ring. "Module" shall mean left $A$-module.

Assume that $V$ and $W$ are finite length nonzero modules, that $S$ and $T$ are simple modules, that all simple subquotients of $V$ are isomorphic to $S$, and that all simple subquotients of $W$ are isomorphic to $T$. Then $\mathrm{Hom}(V,W)$ is nonzero if and only if $S$ and $T$ are isomorphic.

This is clear: if $S$ and $T$ are isomorphic, compose a surjection $V \to S$ (onto a simple quotient of $V$) with an embedding $T \to W$ (of a simple submodule of $W$) to get a nonzero map; conversely, the image of a nonzero map is a nonzero subquotient of both $V$ and $W$, and any simple subquotient of it shows that $S$ and $T$ are isomorphic.