Relation between linear maps and matrices

You are correct on some counts, but there seems to be a bit of confusion as well. Let us first address the example in your first question.

Suppose we have a linear operator $T:V\rightarrow V$ where $\dim V = n$ for odd $n$. Let us fix a basis $\mathcal{B}$ for $V$ and let $A = [T]_\mathcal{B}$ be the matrix of the mapping with respect to $\mathcal{B}$. As you've said, the map $[\ ]_\mathcal{B}: L(V)\rightarrow M_n(\mathbb{R})$ is a vector space isomorphism between the space of operators on $V$ and the space of $n\times n$ matrices.

First of all, note that you are not "free to choose" $V$ to be $\mathbb{R}^n$. $T$ is already defined to be a linear operator on $V$ and in this case $V$, whatever it is, is fixed. However, the power of representing the mapping by a matrix is that we can effectively carry out all the calculations as if the mapping were from $\mathbb{R}^n$ to $\mathbb{R}^n$; this is precisely what an isomorphism allows us to do.

For example, suppose we have a linear mapping $T$ on $P_2(\mathbb{R})$, the vector space of polynomials with real coefficients of degree at most $2$: $$T(ax^2 + bx + c) = bx + c$$ In this case, our vector space $V$ is $P_2(\mathbb{R})$. We are not free to change it. However, what we are allowed to do is to study the matrix $$A=\begin{pmatrix}0 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}$$ which is just the matrix representation of $T$ with respect to the standard basis $\{x^2,\ x,\ 1\}$. The point here is that $A$ is not $T$. It is a representation of $T$ which happens to share many of the same properties. Therefore, by studying $A$, we gain valuable insight into the behaviour of $T$. For example, one way of finding eigenvectors for $T$ would be to find the eigenvectors of $A$. The eigenvectors of $A$ then correspond uniquely, via the isomorphism, to the eigenvectors of $T$.
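To make this concrete, here is that eigenvector computation carried out for this example (a calculation of my own, just for illustration). The matrix $A$ has eigenvalues $1$ and $0$. For $\lambda=1$ the eigenvectors of $A$ are spanned by $$\begin{pmatrix}0\\1\\0\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}0\\0\\1\end{pmatrix},$$ which correspond, via the coordinate isomorphism for the basis $\{x^2,\ x,\ 1\}$, to the polynomials $x$ and $1$; indeed $T(x)=x$ and $T(1)=1$. For $\lambda=0$ the eigenvectors of $A$ are spanned by $(1,0,0)^T$, corresponding to $x^2$, and indeed $T(x^2)=0$.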

You ask "If I'm given a matrix with entries in $\mathbb{F}$, how exactly would I go about determining information about it from linear maps?", but this question is a little backwards. If we have a matrix, then its information is readily available to us. For example, a huge amount of information can be obtained by simply row reducing the matrix. In general, it is easier to study matrices than to study abstract linear transformations and this is precisely why we represent linear transformations with matrices.
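For instance, row reducing the matrix $A$ above (swap the rows so that the zero row sits at the bottom) gives $$\begin{pmatrix}0 & 1 & 0\\0 & 0 & 1\\0 & 0 & 0\end{pmatrix},$$ from which we read off that $A$ has rank $2$ and nullity $1$. Carried back through the isomorphism, this says that the image of $T$ is two-dimensional (the span of $x$ and $1$) and the kernel of $T$ is one-dimensional (spanned by $x^2$), without ever manipulating polynomials directly.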

The bottom line is that matrices serve as simpler representatives for linear mappings. Given an arbitrary linear mapping, we can fix bases for the domain and codomain and obtain a corresponding matrix representation of the mapping. Conversely, for a given choice of bases, each matrix can be interpreted as a linear map. However, we seldom use the latter fact, since it is easier to work with matrices than with general linear mappings.

Some of your questions were a little hard to interpret so I hope I have addressed your main concerns here. Please do not hesitate to ask for clarification.


The topic can indeed create some confusion. First of all, a linear map $T\colon V\to W$ is just a function and not a matrix.

However, any $m\times n$ matrix $A$ can be used to define a linear map $$ T_A\colon F^n\to F^m $$ by $T_A(v)=Av$ (writing vectors in $F^n$ as columns). This is, in fact, where the general concept of a linear map originated.
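For instance, taking (a matrix chosen only for illustration) $$ A=\begin{bmatrix}1&0&2\\0&1&-1\end{bmatrix}, $$ we get the map $T_A\colon F^3\to F^2$ given by $$ T_A\!\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}x+2z\\y-z\end{bmatrix}, $$ and the linearity of $T_A$ is exactly the content of the matrix rules $A(v+w)=Av+Aw$ and $A(\alpha v)=\alpha(Av)$.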

When $V$ and $W$ are finite dimensional, say $\dim V=n$ and $\dim W=m$, once we have a basis $\mathcal{B}=\{v_1;v_2;\dots;v_n\}$, we can define a bijective linear map $$ C_{\mathcal{B}}\colon V\to F^n $$ by $$ C_{\mathcal{B}}(v)= \begin{bmatrix}\alpha_1\\\alpha_2\\\vdots\\\alpha_n\end{bmatrix} \quad\text{if and only if}\quad v=\alpha_1v_1+\alpha_2v_2+\dots+\alpha_nv_n $$ Given a linear map $T\colon V\to W$ and a basis $\mathcal{D}=\{w_1;w_2;\dots;w_m\}$ of $W$, we can define the matrix $$ [T]_{\mathcal{B},\mathcal{D}}= \begin{bmatrix} C_{\mathcal{D}}(T(v_1)) & C_{\mathcal{D}}(T(v_2)) & \dots & C_{\mathcal{D}}(T(v_n)) \end{bmatrix} $$ (where we specify the matrix by its columns) with the property that, for every $v\in V$, $$ C_{\mathcal{D}}(T(v))=[T]_{\mathcal{B},\mathcal{D}}C_{\mathcal{B}}(v) $$
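As a small worked example (not taken from the question), consider differentiation $T\colon P_2(\mathbb{R})\to P_1(\mathbb{R})$, $T(p)=p'$, with $\mathcal{B}=\{1;x;x^2\}$ and $\mathcal{D}=\{1;x\}$. Since $C_{\mathcal{D}}(T(1))=C_{\mathcal{D}}(0)=\begin{bmatrix}0\\0\end{bmatrix}$, $C_{\mathcal{D}}(T(x))=C_{\mathcal{D}}(1)=\begin{bmatrix}1\\0\end{bmatrix}$ and $C_{\mathcal{D}}(T(x^2))=C_{\mathcal{D}}(2x)=\begin{bmatrix}0\\2\end{bmatrix}$, we get $$ [T]_{\mathcal{B},\mathcal{D}}=\begin{bmatrix}0&1&0\\0&0&2\end{bmatrix} $$ and indeed, for $v=a+bx+cx^2$, $$ [T]_{\mathcal{B},\mathcal{D}}C_{\mathcal{B}}(v)=\begin{bmatrix}0&1&0\\0&0&2\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix}=\begin{bmatrix}b\\2c\end{bmatrix}=C_{\mathcal{D}}(b+2cx)=C_{\mathcal{D}}(T(v)). $$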

The matrix $[T]_{\mathcal{B},\mathcal{D}}$ is easily seen to be the unique matrix with this property and, as such, it contains all the information about $T$. Computations about $T$ can be translated into computations on the associated matrix $[T]_{\mathcal{B},\mathcal{D}}$. For instance, the rank of $T$ is the same as the rank of the associated matrix, and the latter can be computed by row reduction or other methods.
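For instance, if we happened to find (a matrix made up only for illustration) $$ [T]_{\mathcal{B},\mathcal{D}}=\begin{bmatrix}1&2\\2&4\end{bmatrix}, $$ row reduction gives $\begin{bmatrix}1&2\\0&0\end{bmatrix}$, so $T$ has rank $1$; by the rank-nullity theorem (here $\dim V=2$) the kernel of $T$ is one-dimensional, and we learned all of this without touching $T$ itself.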

In the particular case when $V=F^n$, $W=F^m$, $\mathcal{B}$ is the canonical basis of $V$ and $\mathcal{D}$ is the canonical basis of $W$, we have that $$ C_{\mathcal{B}}(v)=v $$ and similarly on $W$, so the relation becomes $$ T(v)=[T]_{\mathcal{B},\mathcal{D}}v $$ and so we prove that

any linear map $F^n\to F^m$ is of the form $T_A$, for a unique $m\times n$ matrix $A$.

Of course, if we compute the matrix associated to $T_A$ when we choose bases for $V$ or $W$ different from the canonical ones, we will (in general) get a matrix different from $A$. This is where confusion may arise, but keeping the concepts of linear map and associated matrix (once a choice of bases is made) clearly distinct will help in understanding the matter.
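For a concrete instance (chosen only for illustration), take $A=\begin{bmatrix}1&1\\0&2\end{bmatrix}$ and $T_A\colon\mathbb{R}^2\to\mathbb{R}^2$. With the canonical basis on both sides the associated matrix is $A$ itself, but with the basis $\mathcal{B}=\left\{\begin{bmatrix}1\\0\end{bmatrix};\begin{bmatrix}1\\1\end{bmatrix}\right\}$ on both domain and codomain we find $T_A\!\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}1\\0\end{bmatrix}$ and $T_A\!\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}2\\2\end{bmatrix}=2\begin{bmatrix}1\\1\end{bmatrix}$, hence $$ [T_A]_{\mathcal{B},\mathcal{B}}=\begin{bmatrix}1&0\\0&2\end{bmatrix}\ne A. $$ Same map, different matrices.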


An important particular case is that of linear maps from $V$ to $V$. In this case it is quite natural to consider the same basis on the domain and the codomain (though it is not mandatory, of course). So if $T\colon V\to V$ is a linear map and $\mathcal{B}$ is a basis of $V$, we have the associated matrix $[T]_{\mathcal{B},\mathcal{B}}=[T]_{\mathcal{B}}$ (it makes sense to mention the basis only once) with the property that, for all $v\in V$, $$ C_{\mathcal{B}}(T(v))=[T]_{\mathcal{B}}C_{\mathcal{B}}(v) $$ If $v$ is an eigenvector of $T$ relative to the eigenvalue $\lambda$, we have $T(v)=\lambda v$ and so $$ [T]_{\mathcal{B}}C_{\mathcal{B}}(v)= C_{\mathcal{B}}(T(v))=C_{\mathcal{B}}(\lambda v)=\lambda C_{\mathcal{B}}(v) $$ and so $w=C_{\mathcal{B}}(v)$ is an eigenvector of the matrix $[T]_{\mathcal{B}}$ relative to the eigenvalue $\lambda$. Conversely, if $[T]_{\mathcal{B}}w=\lambda w$ (and $w\ne0$), we can define $v$ as the unique vector such that $w=C_{\mathcal{B}}(v)$ and then $$ C_{\mathcal{B}}(T(v))=C_{\mathcal{B}}(\lambda v) $$ (just follow the path backwards); since $C_{\mathcal{B}}$ is injective, we deduce $T(v)=\lambda v$. So the eigenvalues of $[T]_{\mathcal{B}}$ are exactly the eigenvalues of $T$.
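A quick worked example (again, not from the question): let $T\colon P_1(\mathbb{R})\to P_1(\mathbb{R})$ be defined by $T(a+bx)=b+ax$ and take $\mathcal{B}=\{1;x\}$. Then $$ [T]_{\mathcal{B}}=\begin{bmatrix}0&1\\1&0\end{bmatrix}, $$ whose eigenvalues are $1$ and $-1$ with eigenvectors $\begin{bmatrix}1\\1\end{bmatrix}$ and $\begin{bmatrix}1\\-1\end{bmatrix}$. Pulling these back through $C_{\mathcal{B}}$ gives the polynomials $1+x$ and $1-x$, and indeed $T(1+x)=1+x$ while $T(1-x)=-1+x=-(1-x)$.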

Note that the choice of basis is irrelevant to whether $\lambda$ is an eigenvalue: all matrices associated to $T$ (when the same basis is chosen on the domain and the codomain) share the same eigenvalues.
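One way to see why: if $\mathcal{B}'$ is another basis of $V$ and $P=[\mathrm{id}_V]_{\mathcal{B}',\mathcal{B}}$ is the change-of-basis matrix (so that $C_{\mathcal{B}}(v)=P\,C_{\mathcal{B}'}(v)$ for all $v$), then $$ [T]_{\mathcal{B}'}=P^{-1}[T]_{\mathcal{B}}P $$ and therefore $$ \det([T]_{\mathcal{B}'}-\lambda I)=\det\bigl(P^{-1}([T]_{\mathcal{B}}-\lambda I)P\bigr)=\det([T]_{\mathcal{B}}-\lambda I), $$ so the two matrices have the same characteristic polynomial, hence the same eigenvalues.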

The eigenvectors are not shared, but the isomorphism $C_{\mathcal{B}}$ allows us to recover the eigenvectors of $T$ from the eigenvectors of $[T]_{\mathcal{B}}$.