Are matrices and second rank tensors the same thing?

A second-order tensor can be represented by a matrix, just as a first-order tensor can be represented by a 1-dimensional array. But there is more to a tensor than just its arrangement of components; we also need to specify how those components transform under a change of basis. So a tensor is an n-dimensional array satisfying a particular transformation law.

So, yes, a third-order tensor can be represented as a 3-dimensional array of numbers -- in conjunction with an associated transformation law.
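As a small sketch of this (using NumPy, with a change-of-basis matrix `P` chosen arbitrarily for illustration), the same second-order tensor has different component matrices in different bases, related by the transformation law; basis-independent quantities like the trace and determinant come out the same either way:

```python
import numpy as np

# Components of a (1,1) second-order tensor in the original basis
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# An invertible change-of-basis matrix (columns are the new basis
# vectors written in the old basis) -- chosen arbitrarily here.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Transformation law for a (1,1) tensor: T' = P^{-1} T P
T_new = np.linalg.inv(P) @ T @ P

# Invariants such as the trace and determinant are basis-independent,
# one way to see that both matrices represent the same tensor.
print(np.trace(T), np.trace(T_new))            # both 5.0
print(np.linalg.det(T), np.linalg.det(T_new))  # both 6 (up to float rounding)
```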


Matrices are often first introduced to students to represent linear transformations taking vectors from $\mathbb{R}^n$ and mapping them to vectors in $\mathbb{R}^m$. A given linear transformation may be represented by infinitely many different matrices depending on the basis vectors chosen for $\mathbb{R}^n$ and $\mathbb{R}^m$, and a well-defined transformation law allows one to rewrite the linear operation for each choice of basis vectors.
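A minimal sketch of that change-of-representation, with `A`, `P`, and `Q` made up for illustration: if `P` changes basis in $\mathbb{R}^3$ and `Q` changes basis in $\mathbb{R}^2$, the same linear map is represented in the new bases by $A' = Q^{-1} A P$:

```python
import numpy as np

# A linear map from R^3 to R^2, as a matrix in the standard bases.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

# Arbitrary invertible basis changes for the domain and codomain.
P = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # new basis for R^3
Q = np.array([[2.0, 0.0],
              [1.0, 1.0]])        # new basis for R^2

# The same linear map expressed in the new bases: A' = Q^{-1} A P
A_new = np.linalg.inv(Q) @ A @ P

# Check: applying the map and then converting coordinates agrees with
# converting coordinates first and then applying the map.
x = np.array([1.0, -1.0, 2.0])    # a vector's coordinates, old basis
x_new = np.linalg.inv(P) @ x      # the same vector, new coordinates
assert np.allclose(A_new @ x_new, np.linalg.inv(Q) @ (A @ x))
```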

Second rank tensors are quite similar, but there is one important difference that comes up for applications in which non-Euclidean (non-flat) distance metrics are considered, such as general relativity. 2nd rank tensors may map not just $\mathbb{R}^n$ to $\mathbb{R}^m$, but may also map between the dual spaces of either $\mathbb{R}^n$ or $\mathbb{R}^m$. The transformation law for tensors is similar to the one first learned for linear operators, but adds the flexibility of letting the tensor act on dual spaces as well.

Note that for a Euclidean distance metric, the metric gives a canonical identification between the dual space and the original vector space, so this distinction doesn't matter in that case.

Moreover, 2nd rank tensors can act not just as maps from one vector space to another. The operation of tensor "contraction" (a generalization of the dot product for vectors) allows a 2nd rank tensor to act on another 2nd rank tensor to produce a scalar. This contraction process generalizes to higher rank tensors, allowing for contractions between tensors of varying ranks to produce products of varying ranks.
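A quick illustration of contraction in components (working in a fixed basis with a Euclidean metric, so upper and lower indices need not be distinguished; the tensors here are made up for the example):

```python
import numpy as np

# Two second-rank tensors (components in some fixed basis).
S = np.array([[1.0, 2.0],
              [3.0, 4.0]])
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Full contraction of two rank-2 tensors gives a scalar,
# generalizing the dot product of vectors: sum_ij S_ij T_ij
scalar = np.einsum('ij,ij->', S, T)   # 2 + 3 = 5.0

# A single contraction of a rank-2 tensor with a rank-1 tensor
# produces a rank-1 tensor (the familiar matrix-vector product).
v = np.array([1.0, -1.0])
w = np.einsum('ij,j->i', S, v)        # [-1.0, -1.0]

# Contracting one index pair of two rank-2 tensors leaves rank 2
# (ordinary matrix multiplication).
M = np.einsum('ik,kj->ij', S, T)
```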

To echo another answer posted here, a 2nd rank tensor can at any time be represented by a matrix, which simply means rows and columns of numbers on a page. What I'm trying to do is draw a distinction between matrices as they are first introduced, to represent linear operators between vector spaces, and matrices that represent the slightly more flexible objects I've described.


A matrix is a special case of a second rank tensor with 1 index up and 1 index down. It takes vectors to vectors (by contracting the upper index of the vector with the lower index of the tensor), covectors to covectors (by contracting the lower index of the covector with the upper index of the tensor), and in general, it can take an m-upper/n-lower tensor to another m-upper/n-lower tensor by acting on one of the upper indices, to an m-upper/n-lower tensor by acting on one of the lower indices, or to an (m-1)-upper/(n-1)-lower tensor by contracting with one upper and one lower index.
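These index contractions can be sketched in components (the particular tensor, vector, and covector below are made up for illustration):

```python
import numpy as np

# A (1,1) tensor M^i_j -- a "matrix" in the sense described above.
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])

v = np.array([3.0, 1.0])    # a vector (one upper index)
a = np.array([1.0, -1.0])   # a covector (one lower index)

# Vector to vector: contract the tensor's lower index with the
# vector's upper index: (Mv)^i = M^i_j v^j
Mv = np.einsum('ij,j->i', M, v)   # [5.0, 1.0]

# Covector to covector: contract the tensor's upper index with the
# covector's lower index: (aM)_j = a_i M^i_j
aM = np.einsum('i,ij->j', a, M)   # [1.0, 1.0]

# Acting on one upper index of a 2-upper tensor leaves a 2-upper tensor:
T = np.outer(v, v)                 # T^ij = v^i v^j
MT = np.einsum('ik,kj->ij', M, T)  # still rank 2, both indices up
```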

There is no benefit to matrix notation if you know tensors; it is the special case where a tensor product followed by one contraction produces an object of the same type. Tensor notation generalizes the calculus of vectors and linear algebra properly, yielding the right mathematical objects.