Showing that 4D rank-2 anti-symmetric tensor always contains a polar and axial vector

  1. OP is asking for the branching rules for $$\begin{align}H~:=~&O(3)\cr ~\cong~&\begin{pmatrix} O(3)&\cr &1\end{pmatrix}_{4\times 4}~~\cr ~\subseteq~& O(3,1)~=:~G.\end{align}\tag{1}$$

  2. The 4-vector representation decomposes as $${\bf 4}~\cong~\underbrace{\bf 3}_{\text{vector}}\oplus \underbrace{\bf 1}_{\text{scalar}}.\tag{2}$$

  3. Therefore the tensor product representation becomes $$\begin{align} {\bf 16} ~\cong~& {\bf 4}\otimes{\bf 4}\cr ~\cong~&({\bf 3}\oplus {\bf 1})\otimes({\bf 3}\oplus {\bf 1}) \cr ~\cong~&{\bf 3}\otimes{\bf 3}\oplus \overbrace{\underbrace{{\bf 3}\otimes{\bf 1} \oplus{\bf 1}\otimes{\bf 3}}_{~\cong~{\bf 3}_S ~ \oplus~ {\bf 3}_A}}^{\text{off-diagonal blocks}} \oplus {\bf 1}\otimes{\bf 1}.\tag{3} \end{align}$$ Here ${\bf 3}_S$ and ${\bf 3}_A$ denote the symmetric and antisymmetric combination of the off-diagonal blocks, respectively.

  4. The symmetric part of the tensor product ${\bf 4}\otimes{\bf 4}$ reads $${\bf 10}~\cong~ {\bf 4}\odot{\bf 4} ~\cong~\underbrace{{\bf 3}\odot {\bf 3}}_{~\cong~{\bf 5} ~ \oplus~ {\bf 1}}\oplus {\bf 3}_S \oplus {\bf 1},\tag{4} $$ while the antisymmetric part is

    $$ {\bf 6}~\cong~{\bf 4}\wedge{\bf 4} ~\cong~\underbrace{{\bf 3}\wedge {\bf 3}}_{\text{axial vector}}\oplus \underbrace{{\bf 3}_A}_{\text{vector}} ,\tag{5} $$

    cf. OP's title question. In eq. (5) we used Hodge duality in 3D, cf. e.g. this Phys.SE post.
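As a sanity check, the dimension counts in eqs. (2)-(5) can be tallied in a few lines (a minimal Python sketch; the numbers are read off directly from the decompositions above):

```python
# Dimension bookkeeping for the branching rules above.
# Full tensor product, eq. (3): 16 = 9 + 3 + 3 + 1.
assert 9 + 3 + 3 + 1 == 4 * 4
# Symmetric part, eq. (4): 10 = (5 + 1) + 3_S + 1.
assert (5 + 1) + 3 + 1 == 4 * 5 // 2
# Antisymmetric part, eq. (5): 6 = 3 (axial vector) + 3 (vector).
assert 3 + 3 == 4 * 3 // 2
```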


Qmechanic's answer is beautiful. I'll clarify one non-obvious detail, namely why the $\textbf{3}\wedge \textbf{3}$ transforms as a vector under the identity component of the rotation group. (It doesn't transform as a vector under reflections, which is why we call it an axial vector.)

Let $F_{ab}$ be an antisymmetric tensor in 4d spacetime, and use $0$ for the "time" index and $\{1,2,3\}$ for the "space" indices. When Lorentz transformations are restricted to rotations, the components $F_{jk}$ with $j,k\in\{1,2,3\}$ do not mix with the components $F_{0k}=-F_{k0}$, so we can consider only the components $F_{jk}$. These are the components of the $\textbf{3}\wedge \textbf{3}$ in Qmechanic's answer.
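This non-mixing is easy to check numerically. The following sketch (assuming NumPy; the QR construction is just an illustrative way to generate a random element of $SO(3)$) embeds a spatial rotation as $\Lambda=\mathrm{diag}(1,R)$ and confirms that the two blocks transform separately:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random proper rotation R in SO(3), via QR decomposition (illustrative).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))  # force det R = +1

# Embed R as a spatial rotation in 4d: Lambda = diag(1, R), index 0 = time.
Lam = np.eye(4)
Lam[1:, 1:] = R

# A random antisymmetric F_{ab}.
A = rng.normal(size=(4, 4))
F = A - A.T

# Transform with one factor of Lambda per index.
Fp = Lam @ F @ Lam.T

# The spatial block F_{jk} transforms among itself...
assert np.allclose(Fp[1:, 1:], R @ F[1:, 1:] @ R.T)
# ...and the components F_{0k} mix only with each other.
assert np.allclose(Fp[0, 1:], R @ F[0, 1:])
```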


For the rest of this answer, all indices (including $a,b,c$) are restricted to the spatial values $\{1,2,3\}$.

The antisymmetry condition, $F_{jk}=-F_{kj}$, implies that this tensor has only $3$ independent components, which is the correct number of components for a vector, but something doesn't seem quite right: under rotations, the transformation rule for a vector uses only one rotation matrix, while the transformation rule for $F_{jk}$ uses two rotation matrices, one for each index. How can these possibly be equivalent to each other? Of course they're not equivalent for rotations with determinant $-1$, which is why we call it an axial vector, but they are equivalent for rotations with determinant $+1$, and the purpose of this answer is to explain why that's true.

Let $R_{jk}$ be the components of a rotation matrix whose determinant is $+1$. This condition means $$ \sum_{j,k,m}\epsilon_{jkm}R_{1j}R_{2k}R_{3m} = 1, \tag{1} $$ which can also be written $$ \epsilon_{abc} = \sum_{j,k,m}\epsilon_{jkm}R_{aj}R_{bk}R_{cm}. \tag{2} $$ The fact that $R$ is a rotation matrix also implies $$ \sum_c R_{cm}R_{cn}=\delta_{mn}, \tag{3} $$ which is the component version of the matrix equation $R^TR=1$. Contract (2) with $R_{cn}$ and then use (3) to get $$ \sum_c\epsilon_{abc}R_{cn} = \sum_{j,k}\epsilon_{jkn}R_{aj}R_{bk}. \tag{4} $$

Equation (4) is the key. The effect of a rotation on $F_{jk}$ is $$ F_{jk}\to \sum_{a,b}R_{aj}R_{bk}F_{ab}, \tag{5} $$ with one rotation matrix for each index. Since $F_{ab}$ is antisymmetric, we can represent it using only three components like this: $$ v_m\equiv\sum_{j,k}\epsilon_{jkm}F_{jk}. \tag{6} $$

The question is, how does $v$ transform under a rotation whose determinant is $+1$? To answer this, use (5) to get $$ v_m\to v_m'=\sum_{j,k}\epsilon_{jkm}\sum_{a,b}R_{aj}R_{bk}F_{ab} \tag{7} $$ and then use (4) to get $$ v_m' =\sum_{a,b,c}\epsilon_{abc}R_{cm}F_{ab} =\sum_c R_{cm} v_c. \tag{8} $$ This shows that $v$ transforms like a vector under rotations whose determinant is $+1$.
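The key identity (4) and the conclusion (8) can be verified numerically. A short NumPy sketch (the random-rotation construction via QR is illustrative, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random proper rotation R (det = +1), via QR decomposition (illustrative).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))

# The Levi-Civita symbol epsilon_{jkm}.
eps = np.zeros((3, 3, 3))
for j, k, m in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, m], eps[k, j, m] = 1.0, -1.0

# Identity (4): sum_c eps_{abc} R_{cn} = sum_{jk} eps_{jkn} R_{aj} R_{bk}.
lhs = np.einsum('abc,cn->abn', eps, R)
rhs = np.einsum('jkn,aj,bk->abn', eps, R, R)
assert np.allclose(lhs, rhs)

# A random antisymmetric F and its dual, eq. (6): v_m = sum_{jk} eps_{jkm} F_{jk}.
A = rng.normal(size=(3, 3))
F = A - A.T
v = np.einsum('jkm,jk->m', eps, F)

# Rotate F with one R per index, eq. (5), and take the dual again.
Fp = np.einsum('aj,bk,ab->jk', R, R, F)
vp = np.einsum('jkm,jk->m', eps, Fp)

# Eq. (8): v'_m = sum_c R_{cm} v_c, i.e. v transforms like a vector.
assert np.allclose(vp, np.einsum('cm,c->m', R, v))
```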

For rotations whose determinant is $-1$ (reflections), the right-hand side of equation (1) is replaced by $-1$, which introduces a minus sign in equation (4), which ends up putting a minus sign in equation (8). That's why we call $v$ an axial vector instead of just a vector.
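The reflection case can be checked the same way. In the sketch below (assuming NumPy), $P=\mathrm{diag}(1,1,-1)$ is a reflection through the $xy$-plane with $\det P=-1$, and the dual vector picks up the extra minus sign:

```python
import numpy as np

rng = np.random.default_rng(2)

# The Levi-Civita symbol epsilon_{jkm}.
eps = np.zeros((3, 3, 3))
for j, k, m in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, m], eps[k, j, m] = 1.0, -1.0

# An improper "rotation": reflection through the xy-plane, det P = -1.
P = np.diag([1.0, 1.0, -1.0])
assert np.isclose(np.linalg.det(P), -1.0)

# A random antisymmetric F and its dual vector, as in eq. (6).
A = rng.normal(size=(3, 3))
F = A - A.T
v = np.einsum('jkm,jk->m', eps, F)

# Transform F with one P per index, as in eq. (5).
Fp = np.einsum('aj,bk,ab->jk', P, P, F)
vp = np.einsum('jkm,jk->m', eps, Fp)

# The factor det P = -1 flips the sign relative to eq. (8):
# v'_m = -sum_c P_{cm} v_c, the hallmark of an axial vector.
assert np.allclose(vp, -np.einsum('cm,c->m', P, v))
```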


More generally, in $N$-dimensional space:

  • Pseudovector and axial vector are synonymous with "completely antisymmetric tensor of rank $N-1$." Intuitively, an ordinary (polar) vector has only one index, and a pseudovector/axial vector is missing only one index. As a result, they both transform the same way under rotations, but only under rotations. They transform differently in other respects, including reflections and dilations.

  • Under an arbitrary coordinate transform, a (polar) vector transforms as $v_{j}\to \Lambda^a_j v_{a}$.

  • Under an arbitrary coordinate transform, a rank-2 tensor transforms as $F_{jk}\to \Lambda^a_j\Lambda^b_k F_{ab}$. (The components of $\Lambda$ are the partial derivatives of one coordinate system's coordinates with respect to the other's. Sums over repeated indices are implied.)

  • If $N\neq 3$, then angular momentum is an antisymmetric rank-2 tensor (also called a bivector), not an axial vector. A bivector has 2 indices, but an axial vector has $N-1$ indices.

  • To illustrate the different transformation laws for (polar) vectors and bivectors, consider a dilation (also called dilatation) that multiplies the spatial coordinates by a constant factor $\kappa$. Then each factor of $\Lambda$ contributes one factor of $\kappa$, so $F_{jk}\to\kappa^2 F_{jk}$, but a vector goes like $v_j\to \kappa v_j$.
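The scaling laws in the last bullet can be confirmed directly (a NumPy sketch; the factor $\kappa=2.5$ is an arbitrary illustrative choice):

```python
import numpy as np

kappa = 2.5                # dilation factor (illustrative choice)
Lam = kappa * np.eye(3)    # each index contributes one factor of kappa

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))
F = A - A.T                # bivector (rank-2 antisymmetric tensor)
v = rng.normal(size=3)     # polar vector

Fp = np.einsum('aj,bk,ab->jk', Lam, Lam, F)  # two factors of Lambda
vp = np.einsum('aj,a->j', Lam, v)            # one factor of Lambda

assert np.allclose(Fp, kappa**2 * F)  # bivector scales as kappa^2
assert np.allclose(vp, kappa * v)     # polar vector scales as kappa
```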

Axial vectors and bivectors are the same in 3d space, but they are not really vectors at all, even though they both happen to have 3 components in 3d space. If we only consider rotations (with determinant $+1$), then they might as well be vectors, but even that's only true in 3d space, not in spaces of other dimensions.