Are the off-diagonal elements of exp(At) log-concave in t, for nonnegative matrices?

The following Matlab code (unless I coded something wrong, which is entirely possible) finds a few random counterexamples on each run, even when the check is restricted to off-diagonal entries:

n = 3;
for trial = 1:100
   A = rand(n);         % random nonnegative matrix
   B1 = log(expm(A));   % log acts entrywise here
   B2 = log(expm(2*A));
   B3 = log(expm(3*A));

   C = B2 - (B3+B1)/2;  % its off-diagonal entries should be >= 0 if the claim holds
   C = C - diag(diag(C));
   if not(all(all(C >= 0)))
       A                % display the counterexample
   end
end

For instance:

A =
   0.804449583613070   0.535664190667238   0.989144909700340
   0.986104241895970   0.087077219900892   0.066946258397750
   0.029991950269390   0.802091440555804   0.939398361884535
A =
   0.018177533636696   0.534137567882728   0.625937626080496
   0.683838613746355   0.885359450931142   0.137868992412558
   0.783736480083219   0.899004898906140   0.217801593712125
A =
   0.133503859661312   0.300819018069489   0.286620388894259
   0.021555887203497   0.939409713873458   0.800820286951535
   0.559840705872510   0.980903636046859   0.896111351432604
A =
   0.108016694136759   0.559370572403004   0.848709226458282
   0.516996758096945   0.004579623947323   0.916821270253738
   0.143156022083576   0.766681998621487   0.986968274783658
A =
   0.068357220470829   0.026107108154905   0.961558573103663
   0.436327077480103   0.954678274080449   0.762414484002993
   0.173853037365001   0.430596519859417   0.007348661102847

Quick remarks with some tips for numerical experimentation:

  1. For continuous functions, midpoint concavity is equivalent to concavity, so I only tested the midpoint inequality. The points $t=1,2,3$ were chosen simply because they looked like the simplest thing to try.
  2. Out of habit, I first ran this with 1000 tries and $5\times 5$ matrices; the $100$ tries and $3\times 3$ matrices here are just for quicker display. The experiment runs so fast that it hardly matters, so it is better to err on the side of more and larger examples.
  3. One should be careful with instructions such as `C - diag(diag(C))` (which subtracts from a matrix its own diagonal): they can hide numerical mistakes (what if a `-1e-16` pops up on the diagonal?). In this case, though, the subtractions are of the form $a - a$, which is guaranteed to return exactly $0$ even in double-precision arithmetic. Unfortunately, Matlab has no simpler way to set the diagonal of a matrix to zero or to ignore it. (I had first written `C(1:n+1:n^2) = 0`, but replaced it because it is hackish and hard to read.)
  4. There are lots of factoids about matrices (especially about monotonicity of eigenvalues of nonsymmetric matrices and about the matrix exponential) that look true at first sight but have counterexamples. I suggest always trying some random experiments like these. Once one gets into the habit, writing the code is faster than thinking it through. :)

For $2\times2$ matrices, the log-concavity is true. By Cayley–Hamilton, one has $e^{tA}=f(t)I_2+g(t)A$. Writing that the eigenvalues of $e^{tA}$ are the exponentials of those of $tA$, we find $$g(t)=\frac{e^{t\mu}-e^{t\lambda}}{\mu-\lambda}\,,$$ where $\mu,\lambda$ are the eigenvalues of $A$. Thus we only have to prove that $g$ is log-concave, that is, $$gg''-g'^2=-e^{t(\mu+\lambda)}\le0.$$ Notice that the nonnegativity assumption is used implicitly: it ensures that the eigenvalues are real and that $g>0$.
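Unless I slipped somewhere, both the representation and the identity can be confirmed numerically; here is a Python sketch (the recovery of $f$ via $f+\lambda g=e^{t\lambda}$ is not in the argument above, just a convenient way to test):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.random((2, 2))          # nonnegative, so the eigenvalues are real
lam, mu = np.linalg.eigvals(A)
t = 0.7

# g from the closed formula, f recovered from f + lam*g = e^{t*lam}
g = (np.exp(t * mu) - np.exp(t * lam)) / (mu - lam)
f = np.exp(t * lam) - lam * g

# check the Cayley-Hamilton representation e^{tA} = f(t) I + g(t) A
assert np.allclose(expm(t * A), f * np.eye(2) + g * A)

# derivatives of g, then the log-concavity identity g g'' - g'^2 = -e^{t(mu+lam)}
g1 = (mu * np.exp(t * mu) - lam * np.exp(t * lam)) / (mu - lam)
g2 = (mu**2 * np.exp(t * mu) - lam**2 * np.exp(t * lam)) / (mu - lam)
print(g * g2 - g1**2, -np.exp(t * (mu + lam)))
```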

Edit. The formula above seems to be a particular case of a more general one. Suppose $A$ is $n\times n$. With Cayley–Hamilton, we have $$e^{tA}=f(t)I_n+g(t)A+\cdots+h(t)A^{n-1}.$$ Let us form the Hankel matrix $M_h(t)=\left(h^{(i+j-2)}(t)\right)_{1\le i,j\le n}$. Then $\det M_h(t)=(-1)^{n(n-1)/2}e^{t\,{\rm Tr}\,A}$; for $n=2$ this is exactly the identity $gg''-g'^2=-e^{t(\mu+\lambda)}$ above.
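The determinant (and in particular its sign) can be checked numerically. The sketch below, for $n=3$, uses that for distinct eigenvalues Lagrange interpolation gives the top coefficient as $h(t)=\sum_i e^{t\lambda_i}/\prod_{j\ne i}(\lambda_i-\lambda_j)$, hence $h^{(k)}(t)=\sum_i \lambda_i^k e^{t\lambda_i}/\prod_{j\ne i}(\lambda_i-\lambda_j)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.random((n, n))
lam = np.linalg.eigvals(A)       # distinct almost surely; may be complex
t = 0.3

# leading Lagrange coefficients c_i = 1 / prod_{j != i} (lam_i - lam_j)
c = np.array([1.0 / np.prod([lam[i] - lam[j] for j in range(n) if j != i])
              for i in range(n)])

def h_deriv(k):
    """k-th derivative of the top coefficient h at time t."""
    return np.sum(c * lam**k * np.exp(t * lam))

M = np.array([[h_deriv(i + j) for j in range(n)] for i in range(n)])
ratio = np.linalg.det(M) / np.exp(t * np.trace(A))
print(ratio)   # real part close to -1 = (-1)^{n(n-1)/2} for n = 3, imaginary part negligible
```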

Remark that a smooth function $h$ satisfies a linear ODE of order $n-1$ with constant coefficients if, and only if, $\det M_h\equiv0$.
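As a small numerical illustration of the "if" direction (my own example, with $n=3$): $h(t)=3e^{2t}-5e^{-t}$ satisfies $h''-h'-2h=0$, a constant-coefficient ODE of order $2=n-1$, so its $3\times3$ Hankel matrix of derivatives should be singular for every $t$:

```python
import numpy as np

# h(t) = 3 e^{2t} - 5 e^{-t} solves h'' - h' - 2h = 0 (characteristic roots 2 and -1),
# so each row of M_h is a fixed linear combination of the previous two.
def h_deriv(k, t):
    return 3 * 2**k * np.exp(2 * t) - 5 * (-1)**k * np.exp(-t)

for t in (0.0, 0.5, 1.3):
    M = np.array([[h_deriv(i + j, t) for j in range(3)] for i in range(3)])
    print(t, np.linalg.det(M))   # close to 0 each time, up to rounding
```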