Why do isolated large values of WorkingPrecision fail in NDSolve?

bads[1/4, 1/16] generates four distinct error messages, in addition to General::stop, which limits the number of identical errors printed. They are

Do[test[1/4, 1/16, x, y], {x, 5, 40}, {y, x, 80}]; $MessageList // Union // Rest
(* {HoldForm[InterpolatingFunction::dmval], HoldForm[NDSolveValue::mxst], 
    HoldForm[NDSolveValue::nderr], HoldForm[NDSolveValue::ndtol]} *)
  • InterpolatingFunction::dmval occurs here when integration stops before reaching x = 0
  • NDSolveValue::ndtol occurs when the requested PrecisionGoal and AccuracyGoal are too large in comparison with WorkingPrecision
  • NDSolveValue::mxst occurs when the number of integration steps exceeds MaxSteps
  • NDSolveValue::nderr is an inexplicable error often associated with large WorkingPrecision

The ListPlot in the question can be modified to distinguish among these errors, for instance,

[modified ListPlot: the failing {AccuracyGoal, WorkingPrecision} pairs, colored by error type]

which is intended to indicate the primary error. Of course, multiple errors can occur in a single call to NDSolveValue, and almost any other error also triggers InterpolatingFunction::dmval. As noted in my earlier comment, in those instances where it is the only error, the integration has inexplicably stopped just short of the endpoint; adjusting the endpoint slightly seems to solve the problem here. NDSolveValue::ndtol is resolved by increasing WorkingPrecision or (here) decreasing AccuracyGoal. NDSolveValue::mxst is eliminated simply by increasing MaxSteps. Method -> "StiffnessSwitching" sometimes eliminates NDSolveValue::nderr, and sometimes not.
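For instance, here is a sketch of one way to produce such a plot: record a "primary" message tag for each {AccuracyGoal, WorkingPrecision} pair with nested Check (the tag names and the priority order, outermost highest, are my own choices; test is the function from the question):

ClearAll[primaryMessage];
primaryMessage[x_, y_] := Quiet @ Check[
    Check[
     Check[
      Check[test[1/4, 1/16, x, y]; "none",
       "dmval", InterpolatingFunction::dmval],
      "mxst", NDSolveValue::mxst],
     "nderr", NDSolveValue::nderr],
    "ndtol", NDSolveValue::ndtol];

grid = Flatten[Table[{x, y, primaryMessage[x, y]}, {x, 5, 40}, {y, x, 80}], 1];
byMsg = GroupBy[DeleteCases[grid, {_, _, "none"}], Last -> Most];
ListPlot[Values @ byMsg, PlotLegends -> Keys @ byMsg, PlotRange -> All]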


Summary

NDSolve::nderr.

My guess about NDSolve::nderr, at least when the default LSODA method is used, is that the LSODA routine returned its repeated-error-test-failure error as it reduced the step size and order in trying to satisfy the error test (for instance, search for "error test" in the (SciPy) LSODA code or the LSODE manual). Changing the method can take care of this error.

InterpolatingFunction::dmval.

After some analysis, I cannot say in the end that the picture is clear. I'm pretty sure the extrapolation warning InterpolatingFunction::dmval is due to a bug, though I'm not sure of its nature. It could be that some sort of integration error before the dmval message is not caught or reported. Or it could be that NDSolve mishandles the last step, which is dropped when the solution is returned, resulting in a truncated domain and the extrapolation warning. In the OP's examples, the issue seems connected to round-off error in the last step and to discrete jumps in the underlying storage size of the mantissa of arbitrary-precision numbers, which occur as WorkingPrecision increases. It seems remarkable to me that the OP's example so frequently runs into such an edge-case problem, if that is what it is.

Analysis

I'm not sure any of the following is of great interest, unless you want to see some of the computations I used to analyze the performance of NDSolve.

Utility code

Some slight alterations to the OP's setup: one can get better control over the integration process through the NDSolve`StateData object. It is also convenient to save the state and the solution so that they can be inspected.

ClearAll[ivp];
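(* mE, mF, mG and the substitution variables u, v are assumed to be defined as in the OP's setup *)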
ivp[s_, t_, q_, r_] :=
  {Operate[Derivative[1], q] == (((-mF - I Sqrt[mE mG - mF^2])/mE) /.
    {u -> q, v -> r}), 
   Head[q][t] == s};

(* Iterate[] all the way to user-specified r0 *)
ClearAll[test6];
test6[s_, t_, x_, y_, r0_, opts___] :=
  ({state} = 
    NDSolve`ProcessEquations[{ivp[s, t, q[r], r]}, q, {r, r0, t}, 
     opts, PrecisionGoal -> Infinity, AccuracyGoal -> x, 
     WorkingPrecision -> y];
   NDSolve`Iterate[state, r0];
   sol = NDSolve`ProcessSolutions[state];
   q[0] /. sol);

(* test5: shift r -> r+1 *)
ClearAll[test5];
test5[s_, t_, x_, y_, r0_, opts___] :=
  ({state} = 
    NDSolve`ProcessEquations[ivp[s, 1 + t, q[r], r - 1], 
     q, {r, 1 + r0, 1 + t}, opts, PrecisionGoal -> Infinity, 
     AccuracyGoal -> x, WorkingPrecision -> y];
   NDSolve`Iterate[state, 1 + 0];
   sol = NDSolve`ProcessSolutions[state];
   q[1 + 0] /. sol);

(* test3: Integrate to r0; WhenEvent to stop at r == 0 *)
ClearAll[test3];
test3[s_, t_, x_, y_, r0_, opts___] :=
  ({state} = 
    NDSolve`ProcessEquations[{ivp[s, t, q[r], r], 
      WhenEvent[r == 0, "StopIntegration"]}, q, {r, r0, t}, opts, 
     PrecisionGoal -> Infinity, AccuracyGoal -> x, 
     WorkingPrecision -> y];
   NDSolve`Iterate[state, r0];
   sol = NDSolve`ProcessSolutions[state];
   q[0] /. sol);

ClearAll[bads2];
bads2[run_, s_, t_, r0_, opts___] := Quiet[
  PrintTemporary@Dynamic@{x, y, Clock@Infinity};
  Replace[
   Last@Reap[
     Do[Check[run[s, t, x, y, r0, opts], Sow[{x, y}]], {x, 5, 40}, {y,
        x, 80}]],
   {bad : {{_Integer, _Integer} ..}} :> ListPlot[bad, PlotRange -> All]
   ]]
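For reference, the baseline scan with the default method over the same grid as in the question looks like this (it can take quite a while to run):

(* test6, default method: plot the {AccuracyGoal, WorkingPrecision} pairs that raise messages *)
bads2[test6, 1/4, 1/16, 0]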

NDSolve::nderr, NDSolve::mxst

The default method (LSODA) resets the step size to a very small step from time to time. Whether, when, and how often it has to do that varies unpredictably with the settings of WorkingPrecision and AccuracyGoal. Sometimes it can't restart and we get an error-test failure. Sometimes it restarts so often that it reaches the limit MaxSteps before the integration is complete (which also leads to an InterpolatingFunction::dmval warning). Usually when you get NDSolve::nderr, it fails at the first attempt at restarting.

The following shows that LSODA drastically scales down the step size a few times. Many times it does not have to reset the step size at all. (I will address the InterpolatingFunction::dmval warning in a later section.) Note that the integration runs backwards from 1 to 0, and I reversed the steps to show them in left-to-right order.

(* test6: Iterate[] all the way to user-specified r0 = 0 *)
test6[1/4, 1/16, 27, 45, 0];
state;
sol;
q@"Grid" /. sol // Differences // Flatten // Reverse // ListLogPlot

In the case below, LSODA resets the step size an excessive number of times and runs into the MaxSteps limit. A close-up shows that the step structure is the same as above: after a few steps at each size, the step size is increased until LSODA runs into trouble.

(* test6: Iterate[] all the way to user-specified r0 = 0 *)
test6[1/4, 1/16, 43, 55, 0];
state;
sol;
q@"Grid" /. sol // Differences // Flatten // Reverse // ListLogPlot
Show[% /. _PointSize -> PointSize[1./240], 
 PlotRange -> {{0, 240}, All}]

[log plot of the step sizes along the integration, with a close-up showing the repeated pattern of a few steps at each size]
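A rough check that this run really is stopped by the step limit is to count the grid points stored in the returned solution; if I remember correctly, the default MaxSteps for a problem like this is 10000:

(* number of steps retained in the returned InterpolatingFunction *)
Length[Flatten[q@"Grid" /. sol]]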

Changing the method eliminates NDSolve::nderr and NDSolve::mxst

Changing the method gets rid of NDSolve::nderr and NDSolve::mxst, but not InterpolatingFunction::dmval (with Method set to "Extrapolation", "ExplicitRungeKutta", or "ImplicitRungeKutta").

(* test6: Iterate[] all the way to user-specified r0 = 0 *)
bads2[test6, 1/4, 1/16, 0,
 Method -> "Extrapolation", StartingStepSize -> 1/200]

Translating the problem to {r, 1, 2} using test5[] shows no errors, which suggests that round-off error is connected with the InterpolatingFunction::dmval errors when the integration is over {r, 0, 1}:

(* test5: shift r -> r+1 *)
bads2[test5, 1/4, 1/16, 0, Method -> "ExplicitRungeKutta"]
bads2[test5, 1/4, 1/16, 0, Method -> "Extrapolation"]
(*  {}  *)
(*  {}  *)

Inspecting state shows how far NDSolve seems to have integrated.
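For example (assuming, as I believe, that "CurrentTime" is a supported NDSolve`StateData property):

state@"CurrentTime"  (* how far the integration actually got *)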

InterpolatingFunction::dmval due to round-off error?

The following is the same example as above under NDSolve::nderr, but here we focus on a different aspect: the integration goes past the specified endpoint r0 == 0, to about -7*10^-59 according to the NDSolve`StateData object; but the returned solution loses the last step, and its domain is truncated at about 0.00075. The overstep amounts to a negligible round-off error at the working precision. This happened in every InterpolatingFunction::dmval case I examined.

(* test6: Iterate[] all the way to user-specified r0 = 0 *)
test6[1/4, 1/16, 27, 45, 0];
state
sol

[formatted NDSolve`StateData object and the returned solution, showing the overstepped stop point and the truncated domain]
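The overstepped stop point and the truncated domain can also be compared programmatically; here I am again assuming the "CurrentTime" property, while "Domain" is a standard InterpolatingFunction property:

{state@"CurrentTime", (q /. sol)@"Domain"}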

Workaround using WhenEvent[] to stop integration

Not particularly important, but in case one needs a workaround:

(* test3: Integrate to r0; WhenEvent to stop at r == 0 *)
bads2[test3, 1/4, 1/16, -1, Method -> "Extrapolation"]
(*  {}  *)

However, it does not always stop exactly at r == 0 (below it stops around -8*10^-59):

(* test3: Integrate to r0; WhenEvent to stop at r == 0 *)
test3[1/4, 1/16, 25, 46, -1/16^12, Method -> "Extrapolation"]
state
sol

[formatted NDSolve`StateData object and the returned solution from the WhenEvent run]
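More generally, when a returned solution's domain falls just short of (or just past) the requested endpoint, clipping the evaluation point to the actual domain avoids the extrapolation warning. A sketch:

(* evaluate at 0 if it lies in the domain, otherwise at the nearest end of the actual domain *)
With[{qf = q /. sol},
 qf[Clip[0, First @ qf@"Domain"]]]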

Jumps in the size of arbitrary-precision numbers

While I know how to use arbitrary-precision numbers, their internal workings are, to me, opaque and complicated. The mantissa seems to be stored in a whole number of machine words. For a given precision, there seems to be a minimum size, but numbers at the same precision might have different sizes. It's confusing. The following shows jumps in size near the problematic WorkingPrecision settings above:

ListPlot[#, PlotLabel -> 
    Row[SparseArray[Differences@#]["NonzeroPositions"], ", "]] &@
 Table[ByteCount@N[1, wp], {wp, 80}]

I'd be surprised if there were no connection to the dmval errors, but I cannot explain what the connection is.


My thanks to the others who have responded to my question over the last three months. But now someone from the Mathematica support team has figured out what was really going on and has provided an easy fix.

They tell me that the ODE in my example has small intervals of stiff behavior, and the default solver that NDSolveValue employs can't deal with that. They suggest that I specify Method -> "StiffnessSwitching" in the call to NDSolveValue. This gives vastly better behavior. Here are two graphs:

[two plots of warnings over the {AccuracyGoal, WorkingPrecision} grid: the default method vs. Method -> "StiffnessSwitching"]

The first shows the warnings that I get with the default solving method for various AccuracyGoal and WorkingPrecision settings, with this color code:

  1. Blue is InterpolatingFunction::dmval
  2. Red is NDSolveValue::ndtol
  3. Green is NDSolveValue::nderr

The second shows the warnings when I specify Method -> "StiffnessSwitching". Once the AccuracyGoal exceeds 10, the latter scheme is vastly superior, showing trouble only for a WorkingPrecision near 46 (actually, from 45.76 up through 46.05). (There is also one blue dot at an AccuracyGoal of 77 and a WorkingPrecision of 84.)

I have only a vague understanding of stiffness, so I can't comment on whether my system really is skirting stiffness or not. But it does seem that specifying "StiffnessSwitching" produces better behavior.
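For reference, the change amounts to adding the Method option to the call. The following is only a sketch; eqns, rmin, rmax, wp, and ag stand in for the system and settings from my question, which I have not repeated here:

NDSolveValue[eqns, q, {r, rmin, rmax},
 WorkingPrecision -> wp, AccuracyGoal -> ag, PrecisionGoal -> Infinity,
 Method -> "StiffnessSwitching"]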