Different Methods in NIntegrate

New answer (because the old one was based on a mistake in the posted code):

First, multidimensional integrals can be hard to compute. Both easy and hard ones are common in dimension 2. The proportion of hard ones seems to increase with the dimension. Integrating over infinite domains can be hard if the integrand is oscillatory, which is not the case here. Integrands with singularities can be hard, too, which also is not the case here. Each of these problems is sufficiently common to have methods to address them.

The Monte Carlo methods are modestly useful when all else fails: they give a rough approximation fairly quickly, but they converge very slowly, so using them to pursue high precision is usually futile.
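As a quick illustration on a toy smooth integrand (a stand-in, since the OP's integrand is defined in the question), the exact value here is Pi/4, and "MonteCarlo" gets a couple of digits quickly but no more:

```mathematica
(* toy integrand for illustration; exact value is Pi/4 ≈ 0.785398 *)
NIntegrate[Exp[-T] Cos[x0 T], {T, 0, ∞}, {x0, 0, 1},
 Method -> "MonteCarlo", PrecisionGoal -> 2]
```

Raising PrecisionGoal much beyond this makes the sampling cost blow up, since the Monte Carlo error shrinks only like 1/Sqrt[n].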

This seems a moderately difficult integral. The integrand does not seem pathological, but the default rule, a medium-order "MultidimensionalRule", seems to struggle. In fact, it seems to get the wrong answer with the global-adaptive strategy. It turns out that the local-adaptive strategy in the OP's code is accurate. How can we verify that?

Generally, a Cartesian-product rule based on the Gauss-Kronrod or Clenshaw-Curtis rule is effective on a smooth integrand. The main drawback is that such rules tend to be slow in high-dimensional integrals because of excessive sampling. We can use them to verify the local-adaptive result.

In fact, though, my usual first step with a smooth integrand is to raise the order of the multidimensional rule with the suboption "Generators" -> 9. This turns out to be a good method here, too.

There is no need for MinRecursion or other options. I'll use both a medium- and a high-order Gauss-Kronrod rule to check consistency. (Another way to check consistency is to double the working precision with WorkingPrecision -> 32, but I'll omit that.)

(* high-order multidimensional rule *)
i1[d_?NumericQ, x_?NumericQ, y_?NumericQ, xp_?NumericQ] := 
 NIntegrate[
  integrand[d, x, y, xp, x0, T], {T, 0, ∞}, {x0, 0, 1}, 
  Method -> {"MultidimensionalRule", "Generators" -> 9}];

(* Gauss-Kronrod cartesian product rule *)
i2[d_?NumericQ, x_?NumericQ, y_?NumericQ, xp_?NumericQ] := 
 NIntegrate[
  integrand[d, x, y, xp, x0, T], {T, 0, ∞}, {x0, 0, 1}, 
  Method -> "GaussKronrodRule"];

(* High-order Gauss-Kronrod cartesian product rule: a double check *)
i3[d_?NumericQ, x_?NumericQ, y_?NumericQ, xp_?NumericQ] := 
 NIntegrate[
  integrand[d, x, y, xp, x0, T], {T, 0, ∞}, {x0, 0, 1}, 
  Method -> {"GaussKronrodRule", "Points" -> 11}];

The OP's table, computed with each of these methods, gives results that agree:

Table[i1[3, x, 1, 0], {x, 0.05, 1, 0.05}] // AbsoluteTiming
(*
{4.46711, {-20.7877, -19.7131, -17.9935, -15.7272, -13.0363,
 -10.0544, -6.91493, -3.74124, -0.63984, 2.30356, 5.02495, 7.48073, 
  9.64493, 11.5056, 13.061, 14.316, 15.2788, 15.9584, 16.3626, 
  16.4967}}
*)

Table[i2[3, x, 1, 0], {x, 0.05, 1, 0.05}] // AbsoluteTiming
(*
{4.37294, {-20.7877, < same as above >, 16.4967}}
*)

Table[i3[3, x, 1, 0], {x, 0.05, 1, 0.05}] // AbsoluteTiming
(*
{7.19945, {-20.7877, < same as above >, 16.4967}}
*)

The derivative with respect to y

One way is to differentiate under the integral sign:

i2dy[d_?NumericQ, x_?NumericQ, y_?NumericQ, xp_?NumericQ] := 
 NIntegrate[
  D[integrand[d, x, \[FormalY], xp, x0, T], \[FormalY]] /. \[FormalY] -> y,
   {T, 0, ∞}, {x0, 0, 1},
   Method -> "GaussKronrodRule"];

Another is to use complex-step differentiation. A third is the central-difference formula. Below is an example of each:

i2dy[3, 0.1, 1, 0]
i2[3, 0.1, 1 + Sqrt@$MachineEpsilon*I, 0]/Sqrt@$MachineEpsilon // Im
(i2[3, 0.1, 1 + 0.5 Sqrt@$MachineEpsilon, 0] - 
   i2[3, 0.1, 1 - 0.5 Sqrt@$MachineEpsilon, 0])/Sqrt@$MachineEpsilon
(*
  77.8076
  77.8076
  77.8076
*)

The integral is zero for Element[{x, y}, Reals] (thanks to Michael E2's answer):

Integrate[integrand[3, x, y, 0, x0, T], {T, 0, \[Infinity]}, {x0, 0, 1}]
(*ConditionalExpression[0, Re[y^2] > 0]*)

Addendum

The integral depending on x, y, and xp is zero for Element[y, Reals]:

Integrate[integrand[3, x, y, xp, x0, T], {T, 0, \[Infinity]}, {x0, 0, 1}]
(*ConditionalExpression[0, Re[y^2] > 0]*)

I've found similar issues when doing high-dimensional integrals. A reliable method is "QuasiMonteCarlo", since the set of sampling points it uses is more evenly distributed than in "MonteCarlo", so it converges faster. However, if your integral receives most of its contribution from a single point, e.g. a spike or singularity, then an adaptive method works better, since it preferentially samples near the singularity (as long as the initial grid refinement is fine enough to see it in the first place) and therefore converges faster.
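For comparison, here is the same kind of call with "QuasiMonteCarlo" on a toy smooth integrand (a stand-in; its exact value is Pi/4). The low-discrepancy point set typically reaches a given accuracy with far fewer samples than plain "MonteCarlo":

```mathematica
(* toy integrand; exact value is Pi/4 ≈ 0.785398 *)
NIntegrate[Exp[-T] Cos[x0 T], {T, 0, ∞}, {x0, 0, 1},
 Method -> "QuasiMonteCarlo"]
```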

In your case, identify any singularities and then do some integrals focused around them to see whether they make a large contribution to the whole. If they don't, "QuasiMonteCarlo" should be fine. If they do contribute a lot, then I recommend breaking the integral into several domains, so that you can integrate the singularities separately from the rest of the domain.
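One convenient way to split the domain is to list break points directly in the NIntegrate range specification; NIntegrate then treats each listed point as a potential singularity and subdivides there. A sketch with a hypothetical integrand that has an integrable singularity at x == 1/2:

```mathematica
(* listing 1/2 as an intermediate point splits the domain there *)
NIntegrate[1/Sqrt[Abs[x - 1/2]], {x, 0, 1/2, 1}]
(* exact value: 2 Sqrt[2] ≈ 2.82843 *)
```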