Fourth-order tensors: double dot product and inverse computation

I'll answer my own question, since I was able to find the solution to my problem with the help of one commenter. The definition of the identity tensor $I = \delta_{ik}\delta_{jl}\, e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$ is correct, but it does not yield a tensor with the minor symmetries.

My mistake was in using this definition of the identity and applying the Mandel transformation to it. The Mandel transformation preserves the double dot product and the inverse if and only if the transformed tensors have the minor symmetries. As suggested in the review by Helnwein mentioned by @user3658307, whom I thank for his help, the definition one should use for the identity tensor in the presence of minor symmetries is:

$$ I = \frac{1}{2}( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l} $$

This tensor has the minor symmetries and can thus be put into the Mandel representation, which yields:

$$ [I] = \begin{bmatrix} 1 & 0&0&0&0&0 \\ 0&1&0&0&0&0\\0&0&1&0&0&0\\0&0&0&1&0&0\\0&0&0&0&1&0\\0&0&0&0&0&1\end{bmatrix} $$
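For anyone who wants to verify this numerically, here is a minimal NumPy sketch (mine, not part of the original derivation). It assumes the common Mandel ordering $(11, 22, 33, 23, 13, 12)$ with weights $(1,1,1,\sqrt{2},\sqrt{2},\sqrt{2})$; `to_mandel` is a name I made up for the map.

```python
import numpy as np

# Assumed Mandel ordering of index pairs (0-based): 11, 22, 33, 23, 13, 12
PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
W = np.array([1.0, 1.0, 1.0, np.sqrt(2), np.sqrt(2), np.sqrt(2)])

def to_mandel(C):
    """Map a 4th-order tensor with minor symmetries to its 6x6 Mandel matrix."""
    M = np.empty((6, 6))
    for a, (i, j) in enumerate(PAIRS):
        for b, (k, l) in enumerate(PAIRS):
            M[a, b] = W[a] * W[b] * C[i, j, k, l]
    return M

# Symmetrized identity: I_ijkl = (delta_ik delta_jl + delta_il delta_jk) / 2
d = np.eye(3)
I = 0.5 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d))
print(np.allclose(to_mandel(I), np.eye(6)))   # True
```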

If you are interested in a more rigorous presentation of this subject (the matrix representation of tensors), I highly recommend reading the aforementioned review. In particular, one should be very careful about the covariance and contravariance of the tensors being handled, since in some particular cases their matrix representations can differ even if $A^{ij}_{kl} = A_{ij}^{kl}$.

Note on the inverse:

My original goal was to find an easy way to invert fourth-order tensors with minor symmetries using standard matrix inversion algorithms. This is not always possible in the general case, since the matrix representation of a fourth-order tensor possessing only the minor symmetries is not always invertible in the space of $6\times 6$ matrices. However, it is known that a random $n \times n$ matrix with real entries is invertible with 'probability' $1$, in the sense that the singular matrices form a Lebesgue-null set. In linear elasticity, and in physics in general, one should thus be able to compute the inverse in practically every case.
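As an illustration of this inversion route, here is a sketch under the conventions above, reusing `PAIRS`, `W`, `to_mandel`, and `I` from the previous snippet (`from_mandel` and `inv4` are names I invented):

```python
def from_mandel(M):
    """Rebuild a minor-symmetric 4th-order tensor from its 6x6 Mandel matrix."""
    C = np.zeros((3, 3, 3, 3))
    for a, (i, j) in enumerate(PAIRS):
        for b, (k, l) in enumerate(PAIRS):
            v = M[a, b] / (W[a] * W[b])
            # distribute the value over the minor-symmetric index positions
            C[i, j, k, l] = C[j, i, k, l] = C[i, j, l, k] = C[j, i, l, k] = v
    return C

def inv4(C):
    """Invert via the 6x6 Mandel matrix; numpy.linalg.inv raises LinAlgError
    on the (measure-zero) singular cases discussed above."""
    return from_mandel(np.linalg.inv(to_mandel(C)))

print(np.allclose(inv4(I), I))   # the symmetrized identity is its own inverse
```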


In this modern computer age, I'm not sure why anyone is enamored of these old compressed $\,6\times 6\,$ formats anymore.

Column-stacking will transform any $\,3\times 3\,$ matrix into a $\,9\times 1\,$ vector.
$$ [S] = {\rm vec}(S) = \begin{bmatrix} S_{11} \\S_{21}\\S_{31}\\S_{12}\\S_{22}\\S_{32}\\S_{13}\\S_{23} \\S_{33} \end{bmatrix} $$

The index $(\alpha)$ of the vector elements can be calculated from the index-pair $(i,j)$ of the matrix elements with simple mod-3 arithmetic, and vice versa.
$$\eqalign{ &\alpha = 1+(i-1)\,\%\,3 + (j-1)\times 3 \\ &i = 1+(\alpha-1)\,\%\,3,\quad j = 1+(\alpha-1)\,//\,3 \\ &\alpha\in[1\ldots 9],\quad\quad i,j\in[1\ldots 3] }$$

Extending this index mapping to an additional index/index-pair $\big(\beta,\, (k,l)\big)$ allows a fourth-order tensor $C_{ijkl}$ to be flattened into a matrix $[C]_{\alpha\beta}\,$ in a perfectly reversible way.
$$ [C] = \begin{bmatrix} C_{1111} &C_{1121}&C_{1131}&C_{1112}&C_{1122}&C_{1132}&C_{1113}&C_{1123} &C_{1133}\\ C_{2111} &C_{2121}&C_{2131}&C_{2112}&C_{2122}&C_{2132}&C_{2113}&C_{2123} &C_{2133}\\ C_{3111} &C_{3121}&C_{3131}&C_{3112}&C_{3122}&C_{3132}&C_{3113}&C_{3123} &C_{3133}\\ C_{1211} &C_{1221}&C_{1231}&C_{1212}&C_{1222}&C_{1232}&C_{1213}&C_{1223} &C_{1233}\\ C_{2211} &C_{2221}&C_{2231}&C_{2212}&C_{2222}&C_{2232}&C_{2213}&C_{2223} &C_{2233}\\ C_{3211} &C_{3221}&C_{3231}&C_{3212}&C_{3222}&C_{3232}&C_{3213}&C_{3223} &C_{3233}\\ C_{1311} &C_{1321}&C_{1331}&C_{1312}&C_{1322}&C_{1332}&C_{1313}&C_{1323} &C_{1333}\\ C_{2311} &C_{2321}&C_{2331}&C_{2312}&C_{2322}&C_{2332}&C_{2313}&C_{2323} &C_{2333}\\ C_{3311} &C_{3321}&C_{3331}&C_{3312}&C_{3322}&C_{3332}&C_{3313}&C_{3323} &C_{3333} \end{bmatrix} $$
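In NumPy this bookkeeping is essentially free, since column-stacking is exactly a Fortran-order (`order='F'`) reshape. A minimal sketch; the names `vec`, `flatten`, and `unflatten` are mine:

```python
import numpy as np

def vec(S):
    """Column-stack a 3x3 matrix into a 9-vector."""
    return S.reshape(9, order='F')

def flatten(C):
    """Flatten C_ijkl to the 9x9 matrix [C]_ab: rows index (i,j), columns (k,l)."""
    return C.reshape(9, 9, order='F')

def unflatten(M):
    """Undo the flattening exactly; no information is lost."""
    return M.reshape(3, 3, 3, 3, order='F')

# Spot-check against the mod-3 index map (0-based here): a = i + 3*j, b = k + 3*l
C = np.random.rand(3, 3, 3, 3)
i, j, k, l = 2, 0, 1, 2
assert flatten(C)[i + 3 * j, k + 3 * l] == C[i, j, k, l]
```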

This is a very clean and easy-to-code transformation, since it doesn't require goofy scale factors like $2$ or ${\sqrt 2}$ on three-quarters of the terms.

A double-dot product between two tensors becomes a single-dot product in the flattened matrix representation, i.e. $$\eqalign{ C &= A:B &\implies C_{ijmn} = A_{ijkl}B_{klmn} \\ [C] &= [A]\cdot[B] &\implies [C]_{\alpha\lambda} = [A]_{\alpha\beta}\,[B]_{\beta\lambda} }$$
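A quick numerical check of this correspondence, using the `flatten` sketched above and `einsum` for the double contraction:

```python
A = np.random.rand(3, 3, 3, 3)
B = np.random.rand(3, 3, 3, 3)
C = np.einsum('ijkl,klmn->ijmn', A, B)            # C = A : B
assert np.allclose(flatten(C), flatten(A) @ flatten(B))
```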

Finally, the inverse $[C]^{-1}$ of the $\,9\times 9\,$ matrix (should it exist) can be reconstituted into a fourth-order tensor. Not in some probabilistic sense, but in every single case.
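In code (continuing the sketch above): whenever `numpy.linalg.inv` succeeds on the $9\times 9$ matrix, the reconstituted tensor is an exact two-sided inverse under the double-dot product, with the unsymmetrized identity $\delta_{ik}\delta_{jl}$ playing the role of $1$:

```python
Id = np.einsum('ik,jl->ijkl', np.eye(3), np.eye(3))   # delta_ik delta_jl
C = np.random.rand(3, 3, 3, 3)                        # generic tensor: invertible a.e.
Cinv = unflatten(np.linalg.inv(flatten(C)))
assert np.allclose(np.einsum('ijkl,klmn->ijmn', Cinv, C), Id)
assert np.allclose(np.einsum('ijkl,klmn->ijmn', C, Cinv), Id)
```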