*-homomorphisms between matrix algebras

The algebra $M_n(\mathbb{C})$ of $n \times n$ complex matrices is Morita equivalent to $\mathbb{C}$, which implies that every unital representation of $M_n(\mathbb{C})$ is isomorphic to a direct sum of copies of the defining representation. Thus a unital homomorphism $\rho:M_n \to M_k$ exists precisely when $k$ is a multiple of $n$, and after a change of basis in the target, $\rho(A)$ is just copies of $A$ on the diagonal. This holds for all homomorphisms, whether or not they are *-homomorphisms, and the answer for *-homomorphisms is the same; the only difference is that for *-homomorphisms the change of basis can be taken to be unitary.
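A quick numerical sanity check of this block-diagonal picture (a sketch in Python/NumPy; the function name `rho` and the use of the Kronecker product $I_m \otimes A$ to realize "copies of $A$ on the diagonal" are my own illustration):

```python
import numpy as np

def rho(A, m):
    """Illustrative unital *-homomorphism M_n -> M_{mn}:
    send A to m copies of A on the diagonal, i.e. I_m (kron) A."""
    return np.kron(np.eye(m), A)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# *-homomorphism properties: multiplicative, adjoint-preserving, unital.
assert np.allclose(rho(A @ B, 2), rho(A, 2) @ rho(B, 2))
assert np.allclose(rho(A, 2).conj().T, rho(A.conj().T, 2))
assert np.allclose(rho(np.eye(3), 2), np.eye(6))
```

Here $n = 3$ and $m = 2$, so the target is $M_6$: the multiple-of-$n$ condition in action.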

The nonunital homomorphisms are not much more general. Up to a change of basis, you can pad a unital homomorphism with extra rows and columns that are all 0.
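The zero-padding can be made concrete with the same kind of sketch (again my own illustrative construction, placing $m$ diagonal copies of $A$ inside a larger zero matrix):

```python
import numpy as np

def rho_nonunital(A, m, k):
    """Illustrative nonunital *-homomorphism M_n -> M_k (k >= m*n):
    m diagonal copies of A, padded with zero rows and columns."""
    n = A.shape[0]
    out = np.zeros((k, k), dtype=complex)
    out[:m * n, :m * n] = np.kron(np.eye(m), A)
    return out

A = np.array([[1, 2], [3, 4]], dtype=complex)
R = rho_nonunital(A, 2, 5)  # two copies of A plus one zero row/column

assert np.allclose(rho_nonunital(A @ A, 2, 5), R @ R)            # multiplicative
assert not np.allclose(rho_nonunital(np.eye(2), 2, 5), np.eye(5))  # not unital
```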

There is a similar result for direct sums of matrix algebras. It is summarized by the concept of a "Bratteli diagram", which describes a homomorphism between two direct sums of matrix algebras. The homomorphism can be thought of as bin packing -- packing items into bins -- with repetition of items allowed. The Bratteli diagram records how many copies of each item (matrix summand of the domain) go into each bin (matrix summand of the target). In the unital case, the bins have to be filled exactly.
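The bin-packing bookkeeping amounts to a simple dimension count, sketched here (the function `fits` and the multiplicity-matrix convention `M[i][j]` = copies of the $j$-th domain summand in the $i$-th target summand are my own notation, not standard API):

```python
def fits(ns, ks, M, unital):
    """Can a homomorphism from (+) M_{ns[j]} to (+) M_{ks[i]} have
    multiplicity matrix M?  Each target bin of size ks[i] receives
    sum_j M[i][j] * ns[j] diagonal rows; unital means filled exactly."""
    used = [sum(M[i][j] * ns[j] for j in range(len(ns)))
            for i in range(len(ks))]
    if unital:
        return used == list(ks)                       # bins filled exactly
    return all(u <= k for u, k in zip(used, ks))      # leftover = 0-padding

# M_2 (+) M_3 -> M_8 (+) M_3: one copy of M_2 and two of M_3 fill the
# first bin exactly (2 + 6 = 8), one copy of M_3 fills the second.
assert fits([2, 3], [8, 3], [[1, 2], [0, 1]], unital=True)
assert fits([2, 3], [9, 3], [[1, 2], [0, 1]], unital=False)
assert not fits([2, 3], [9, 3], [[1, 2], [0, 1]], unital=True)
```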


I do understand this is an old question, but considering that (1) you might still be interested in a simpler (and surely less elegant) proof, and (2) I had the same problem, so this might be helpful for others in the future, I'll tell you how I solved it.

Take your *-homomorphism $\lambda:M_n\to B$, where $B$ is any other C*-algebra. Set $$F_{ij}:=\lambda(E_{ij}),\qquad\forall i,j=1,\ldots,n,$$ where the $E_{ij}$ are the canonical matrix units of $M_n$; then the $F_{ij}$ satisfy the very same relations, i.e. $F_{ij}F_{kl}=\delta_{jk}F_{il}$. Suppose that the kernel of $\lambda$ is non-trivial. Then there exists $a\in M_n\smallsetminus \{0\}$ s.t. $\lambda(a)=0$. This is a linear constraint among the $F_{ij}$: there are coefficients $\alpha_{ij}$, not all zero, such that $$\sum_{ij} \alpha_{ij}F_{ij}=0.$$

Note that if even one $F_{ij}$ is $0$, then $\lambda=0$, because the $F_{kk}$ are all Murray-von Neumann equivalent projections, and the "off-diagonal" elements $F_{ij}$, $i\neq j$, are partial isometries linking them. So assume $\lambda\neq 0$, so that no $F_{ij}$ vanishes, and sandwich the linear combination above between $F_{kk}$ and $F_{mm}$ to obtain $$0=\sum_{ij}\alpha_{ij}F_{kk}F_{ij}F_{mm} = \alpha_{km}F_{km},$$ which implies $$\alpha_{km}=0$$ for every pair $(k,m)$, contradicting $a\neq 0$. Hence $\lambda$ is injective and the $F_{ij}$ are linearly independent, meaning that, as a vector space, $B$ must have enough room to accommodate at least one copy of $M_n$.

Taking into account once again that all the projections $F_{kk}$ are Murray-von Neumann equivalent, we can conclude that there is a subalgebra $C$ of $B$ with $M_n\otimes C\subset B$ and a projection $P\in C$ such that, up to unitary equivalence, $$\lambda(E_{ij})=E_{ij}\otimes P\in M_n\otimes C.$$ If the rank of $P$ is $k$, then $B$ must be large enough to accommodate $k$ copies of $M_n$: for $B=M_m$ this means $m\geq kn$ (and if $m>kn$ you get some $0$-padding).
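The matrix-unit relations and the sandwiching trick can both be checked numerically. Here is a sketch that uses the concrete $\lambda(A) = I_m \otimes A$ from the first answer as a stand-in for a general $\lambda$ (that choice of $\lambda$, and the dictionary-of-matrix-units representation, are my own illustration):

```python
import numpy as np

n, m = 3, 2

# Canonical matrix units E_ij of M_n, and their images F_ij = lambda(E_ij)
# under the illustrative lambda(A) = I_m (kron) A.
E = {(i, j): np.zeros((n, n)) for i in range(n) for j in range(n)}
for i, j in E:
    E[i, j][i, j] = 1.0
F = {(i, j): np.kron(np.eye(m), E[i, j]) for (i, j) in E}

# Matrix-unit relations survive: F_ij F_kl = delta_jk F_il.
for (i, j) in F:
    for (k, l) in F:
        expected = F[i, l] if j == k else np.zeros_like(F[i, l])
        assert np.allclose(F[i, j] @ F[k, l], expected)

# Sandwiching picks out one coefficient:
# F_kk (sum_ij alpha_ij F_ij) F_mm = alpha_km F_km.
rng = np.random.default_rng(1)
alpha = rng.standard_normal((n, n))
S = sum(alpha[i, j] * F[i, j] for (i, j) in F)
for k in range(n):
    for mm in range(n):
        assert np.allclose(F[k, k] @ S @ F[mm, mm], alpha[k, mm] * F[k, mm])
```

So if $S = 0$, every $\alpha_{km} F_{km}$ vanishes, and since no $F_{km}$ is zero, every coefficient does: exactly the linear-independence step above.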

Hope this helps as it did for me!