Converting a backward/forward sweep code for optimal control to _Mathematica_

According to chapter 9, "Mathematica’s NDSolve can take in boundary conditions, and system (9.26) can be directly input into it" (p. 239). Let's give it a try.
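
Before coding that up, here is a quick symbolic sanity check of the pieces used below. I do not have (9.26) at hand, so I am assuming the running cost is w1 i(t) + u(t)^2; under that assumption the optimality condition ∂H/∂u = 0 gives the control characterization u* = λ i / 2, and -∂H/∂i gives the right-hand side of the adjoint equation:

Block[{β, μ, γ, n, w1, λ, i, u, ham},
 ham = w1 i + u^2 + λ (β (n - i) i - (μ + γ) i - u i);  (* assumed Hamiltonian *)
 {Solve[D[ham, u] == 0, u], -D[ham, i] // Simplify}     (* control characterization and adjoint RHS *)
]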

β = 0.05; μ = 0.01; γ = 0.5; n = 100; w1 = 1; i0 = 10; T = 0.9; umax = 100;

ust[t_] := Min[umax, Max[0, λ[t] i[t]/2]];

sol = NDSolve[{
     i'[t] == β (n - i[t]) i[t] - (μ + γ) i[t] - ust[t] i[t],
     λ'[t] == -w1 - λ[t] (β (n - i[t]) - β i[t] - (μ + γ) - ust[t]),
     i[0] == i0, λ[T] == 0}, {i, λ}, {t, 0, T}][[1]];

Plot[Evaluate[ust[t] /. sol], {t, 0, T}, AxesLabel -> {"t", "u*"}]
Plot[Evaluate[i[t] /. sol], {t, 0, T}, AxesLabel -> {"t", "i"}]

[Plots of the resulting optimal control u*(t) and state i(t).]
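
As a rough consistency check, the value of the objective can be evaluated for this solution (again under the assumed running cost w1 i(t) + u(t)^2; adjust if (9.26) uses a different integrand):

NIntegrate[(w1 i[t] + ust[t]^2) /. sol, {t, 0, T}]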


I am not 100% sure whether the following reproduces the MATLAB forward/backward sweep exactly, but it could be a start; please double-check it for correctness. Two caveats: this snippet reuses n (now the number of mesh intervals, with P playing the role of the population) and assigns λ and x as lists, so Clear those symbols or restart the kernel before going back to the NDSolve version, and it integrates over [0, 1] rather than [0, 0.9].

n = 100;   (* number of mesh intervals *)
h = 1./n;   (* step size *)
t = Subdivide[0., 1., n];   (* time grid on [0, 1] *)
Δ = 0.001;   (* tolerance for the convergence test *)
β = 0.05;
μ = 0.01;
γ = 0.5;
P = 100;   (* total population (called n in the NDSolve attempt above) *)
w1 = 1.;
forwardstep[x_, u_, u1_] := Module[{k1, k2, k3, k4},
   (* RK4 step for x' = β (P - x) x - (μ + γ) x - u x, with u interpolated across the step *)
   k1 = β (P - x) x - (μ + γ) x - u x;
   k2 = β (P - x - 0.5 k1 h) (x + 0.5 k1 h) - (μ + γ) (x + 0.5 k1 h) - 0.5 (u + u1) (x + 0.5 k1 h);
   k3 = β (P - x - 0.5 k2 h) (x + 0.5 k2 h) - (μ + γ) (x + 0.5 k2 h) - 0.5 (u + u1) (x + 0.5 k2 h);
   k4 = β (P - x - k3 h) (x + k3 h) - (μ + γ) (x + k3 h) - u1 (x + k3 h);
   x + (h/6.) (k1 + 2. k2 + 2. k3 + k4)
   ];
backwardstep[λ_, x_, x1_, u_, u1_] := Module[{xm = 0.5 (x + x1), k1, k2, k3, k4},
   (* RK4 step backwards in time for λ' = -w1 - λ (β (P - 2 x) - (μ + γ) - u); *)
   (* x and u are interpolated between the two grid points of the step *)
   k1 = -w1 - λ (β (P - 2 x) - (μ + γ) - u);
   k2 = -w1 - (λ - 0.5 k1 h) (β (P - 2 xm) - (μ + γ) - 0.5 (u + u1));
   k3 = -w1 - (λ - 0.5 k2 h) (β (P - 2 xm) - (μ + γ) - 0.5 (u + u1));
   k4 = -w1 - (λ - k3 h) (β (P - 2 x1) - (μ + γ) - u1);
   λ - (h/6.) (k1 + 2. k2 + 2. k3 + k4)
   ];

u = ConstantArray[0., n + 1];   (* initial guess for the control *)
λ = ConstantArray[0., n + 1];
x = ConstantArray[0., n + 1];
x[[1]] = 10.;   (* initial state, matching i0 = 10 above *)

test = -1.;
While[test < 0,
  oldu = u;
  oldx = x;
  oldλ = λ;
  Do[x[[i + 1]] = forwardstep[x[[i]], u[[i]], u[[i + 1]]], {i, 1, n}];   (* sweep the state forward from x[[1]] *)
  Do[λ[[j - 1]] = backwardstep[λ[[j]], x[[j]], x[[j - 1]], u[[j]], u[[j - 1]]],
   {j, -1, -n, -1}];   (* sweep the adjoint backward from λ[[-1]] = 0; negative indices count from the end *)
  u1 = Clip[0.5 λ x, {0., 100}];   (* control characterization u* = λ x / 2, clipped to [0, 100] *)
  u = 0.5 (u1 + oldu);   (* damped update of the control *)
  temp1 = Δ Total[Abs[u]] - Total[Abs[oldu - u]];
  temp2 = Δ Total[Abs[x]] - Total[Abs[oldx - x]];
  temp3 = Δ Total[Abs[λ]] - Total[Abs[oldλ - λ]];
  test = Min[temp1, temp2, temp3];   (* stop once all relative changes fall below Δ *)
  ];
ListLinePlot[Transpose[{t, u}]]
ListLinePlot[Transpose[{t, x}]]
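
For a final check, the discrete objective along the sweep solution can be computed with the trapezoidal rule, using the same assumed running cost (here w1 x + u^2 evaluated on the grid):

cost = w1 x + u^2;   (* assumed running cost, element-wise on the grid *)
Jsweep = h (Total[cost] - 0.5 (First[cost] + Last[cost]))   (* trapezoidal rule on [0, 1] *)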