Solve the functional equation $ h(y)+h^{-1}(y)=2y+y^2 $

Setting $y = h(x)$ in the equation gives $h(h(x)) + x = 2h(x) + h^2(x)$, that is,

$$ x = 2h(x)+h^2(x)-h(h(x))\Rightarrow h(x) = \pm\sqrt{h(h(x))+x+1}-1. $$

Now, using an iterative approximation on the positive branch, such as

$$ h_{k+1}(x) = \sqrt{h_k(h_k(x))+x+1}-1 $$

or, equivalently, the following Mathematica script (inspired by a fruitful discussion with Semiclassical):

Clear[h]
h[x_, 1] := x
h[x_, n_] := h[x, n] = Sqrt[h[h[x, n - 1], n - 1] + x + 1] - 1

Beginning with $h_1(x) = x$, we obtain the successive approximations shown in red, with $h_5(x)$ in black, indicating good convergence for $x > 0$.

[Plot: successive approximations $h_k(x)$ in red, with $h_5(x)$ in black]

How well the identity $h^{-1}(x)+h(x) = 2x+x^2$ is satisfied can also be assessed from the following plot:

err5 = y - h[2 y + y^2 - h[y, 5], 5];
Plot[Abs[err5], {y, 0, 4}, PlotStyle -> {Thick, Black}]

[Plot: $|\mathrm{err}_5|$ for $0 \le y \le 4$]
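For readers without Mathematica, both the recursion and the error check can be reproduced with a rough Python port (a sketch; the function `h` mirrors the memoized script above):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def h(x, n):
    """n-th iterate of h_{k+1}(x) = sqrt(h_k(h_k(x)) + x + 1) - 1 with h_1(x) = x."""
    if n == 1:
        return x
    return math.sqrt(h(h(x, n - 1), n - 1) + x + 1) - 1

# the approximations satisfy 0 < h_5(x) < x on the positive axis
for x in (0.5, 1.0, 2.0, 4.0):
    assert 0 < h(x, 5) < x

# residual of h^{-1}(y) = 2y + y^2 - h(y), i.e. err5 = y - h_5(2y + y^2 - h_5(y))
y = 1.0
err5 = y - h(2 * y + y**2 - h(y, 5), 5)
assert abs(err5) < 0.05
```

Each level roughly doubles the number of recursive calls, so a small iteration count such as $n=5$ is all that is needed here.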


Unfortunately, your proposed functional equation $$ h(y)+h^{-1}(y)=2y+y^2 \tag{1} $$ has no twice-differentiable solutions near $\,y=0.\,$ Suppose that $$ h(y) = 0 + a_1 y + O(y^2). \tag{2} $$ Then its inverse function is $$ h^{-1}(y) = 0 + \frac1{a_1} y + O(y^2). \tag{3} $$ Substituting equations $(2)$ and $(3)$ into $(1)$ gives $$ h(y)\!+\!h^{-1}(y) = \frac{a_1^2+1}{a_1}y \!+\! O(y^2) = 2y \!+\! O(y^2). \tag{4}$$ Since $\,a_1^2+1=2a_1\,$ factors as $\,(a_1-1)^2=0,\,$ the only solution is $\,a_1=1,\,$ which is consistent with the given condition $$ 0<h(x)<x. \tag{5}$$

Suppose we want more terms in the power series $$ h(y) = 0 + y + a_2y^2 + O(y^3). \tag{6}$$ The inverse function is now $$ h^{-1}(y) = 0 + y - a_2y^2 + O(y^3). \tag{7}$$ Adding these two equations gives $$ h(y) + h^{-1}(y) = 0 + 2y + 0y^2 + O(y^3). \tag{8}$$ Since the $\,y^2\,$ coefficient vanishes for every $\,a_2,\,$ equation $(1)$ cannot be satisfied.
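The coefficient bookkeeping in $(6)$–$(8)$ can be checked mechanically; here is a minimal sketch (in Python, as an illustration; `inverse_series` hard-codes the standard compositional-inverse coefficients of a series $y + c_2y^2 + c_3y^3$):

```python
from fractions import Fraction

def inverse_series(c2, c3):
    """Compositional inverse of f(y) = y + c2*y^2 + c3*y^3 + O(y^4):
    f^{-1}(y) = y - c2*y^2 + (2*c2**2 - c3)*y^3 + O(y^4)."""
    return (-c2, 2 * c2**2 - c3)

for a2 in (Fraction(1, 2), Fraction(-3), Fraction(7, 5)):
    i2, _ = inverse_series(a2, Fraction(0))
    # the y^2 coefficient of h(y) + h^{-1}(y) vanishes for every a2 ...
    assert a2 + i2 == 0
    # ... so h + h^{-1} = 2y + 0*y^2 + O(y^3) can never equal 2y + y^2
```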

There remains the possibility of a non-integer exponent $\,e\,$ such that $\, h(y) = 0 + y + c y^e + \cdots, \,$ which, as the comments suggest, is worth studying.

In fact, define $$ g(x) := \sqrt{h(x^2)}. \tag{9}$$ Then solving for a power series expansion gives $$ g(x) = x - \frac{x^2}{\sqrt{6}} + \frac{x^3}6 + O(x^4) \tag{10}$$ which implies that $$ h(x^2) = (x^2+x^4/2)+\frac1{\sqrt{6}}f(x) \tag{11} $$ where $$ f(x) \!=\! -2 x^3 \!-\!\frac{11x^5}{24} \!+\! \frac{117x^7}{1280} \!-\!\frac{5491x^9}{110592} \!+\! \frac{156538363x^{11}}{3715891200} \!+\! O(x^{13}). \tag{12}$$

Wolfram Language code to calculate $\,g(x)\,$ is

ClearAll[x, g, gx];
gx[3] = x - 1/Sqrt[6]*x^2 + O[x]^3;
Do[g = Normal[gx[n]] + O[x]^(2+n); gx[n+1] = Simplify[
    g + (x^2 + (Normal[g]/.x -> g)^2 - g^4 - 2*g^2) * 3 /
    ((4+n)*x^2*Sqrt[6])], 
  {n, 3, 6}] 

As some comments suggest, it seems that the power series for $\,f(x)\,$ has zero radius of convergence, which makes finding its properties difficult. Perhaps we should instead let $\,x\,$ approach $\,\infty.$ In this case we find that $\,h(x) \approx \sqrt{x}\,$ with an infinite number of further terms. Using equation $(1)$ we can find the expansion $$ h(x) = x^{1/2} - 1 + \frac12 x^{-1/4} + \frac1x s(x) \tag{13} $$ where $$ s(x) := \sum_{k=2}^\infty 2^{-k}(x^{\frac32 2^{-k}} - x^{2^{-k}} ). \tag{14} $$ This gives moderately good approximations for large $\,x,\,$ remaining reasonable down to $\,x=1.$

One method that leads to equations $(13)$ and $(14)$ is as follows. From equation $(1)$ we immediately get $$ h^{-1}(y) = y^2 + 2y - h(y) \tag{15} $$ and if $\,y=h(x),\,$ then $$ x = h(x)^2 + 2h(x) - h(h(x)). \tag{16} $$ We start with an approximation and try to find what additional term will satisfy equation $(16)$. So we guess $$ h(x) = x^{1/2} + cx^e + \cdots \tag{17} $$ where $\,\cdots\,$ denotes terms with smaller exponents. Substituting equation $(17)$ into equation $(16)$ gives $$ x \!=\! (x \!+\! 2cx^{1/2+e} \!+\! \cdots) \!+\! 2x^{1/2} \!+\! \cdots \!=\! x \!+\! 2x^{1/2}(cx^e \!+\! 1) + \cdots. \tag{18} $$ This forces $\,cx^e + 1 = 0,\,$ that is $\,c=-1\,$ and $\,e=0,\,$ so the next guess (with a fresh correction term $\,cx^e$) is $$ \,h(x) = x^{1/2} - 1 + cx^e + \cdots. \tag{19} $$ Repeating this process leads to equations $(13)$ and $(14)$.
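As a numerical sanity check (a Python sketch; only the three leading terms $x^{1/2}-1+\frac12x^{-1/4}$ of $(13)$ are used), the residual of equation $(16)$ is small and shrinks as $x$ grows:

```python
def happrox(x):
    # three leading terms of the large-x expansion (13)
    return x**0.5 - 1 + 0.5 * x**-0.25

def residual(x):
    # defect in equation (16): x = h(x)^2 + 2 h(x) - h(h(x))
    hx = happrox(x)
    return happrox(hx) - hx**2 - 2 * hx + x

# the residual is tiny relative to x and decreases as x grows
r4, r8 = abs(residual(1e4)), abs(residual(1e8))
assert r4 < 0.5 and r8 < r4
assert r4 / 1e4 < 1e-4
```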

The series in $(14)$ appears to converge but I have no proof, only numerical evidence. I also have no proof that $(13)$ satisfies the functional equation $(1)$.

An answer to this question by 'Semiclassical' contains the sequence recursions $$ u_n = u_{n-1} - j_{n-1} \quad \text{ and } \quad j_n = j_{n-1} - u_n^2 \tag{20} $$ with the property that $\, u_n = h(u_{n-1})\,$ and that $\,u_n \to 0^+.\,$ I found that for suitable starting values of $\,u_1\,$ and $\,j_1\,$ we have $$ u_n = 6n^{-2} - \frac{15}2n^{-4} + \frac{663}{40} n^{-6} - \frac{43647}{800}n^{-8} + \cdots. \tag{21} $$
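Expansion $(21)$ can be probed numerically. A sketch (in Python, as an illustration): iterate the recursion $(20)$ backwards, i.e. $j_{n-1}=j_n+u_n^2$ followed by $u_{n-1}=u_n+j_{n-1}$, from a seed built out of the first two terms of $(21)$; backward iteration appears numerically stable here, unlike forward iteration.

```python
def u_asym(n):
    # first two terms of the asymptotic expansion (21)
    return 6 / n**2 - 7.5 / n**4

# seed at n = 100: u_n from (21), and j_n = u_n - u_{n+1}
N = 100
u, j = u_asym(N), u_asym(N) - u_asym(N + 1)

# iterate backwards from n = 100 down to n = 50
for n in range(N, 50, -1):
    j = j + u * u          # j_{n-1} = j_n + u_n^2
    u = u + j              # u_{n-1} = u_n + j_{n-1}

# the iterate at n = 50 still matches the asymptotic formula closely
assert abs(u / u_asym(50) - 1) < 1e-4
```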

An answer to this question by 'Cesareo' uses recursion to construct a sequence of functions $\,h_n(x)\,$ which seems to converge to a global solution. It would be nice to give a proof of this.


The following is not intended as an answer so much as it is a development of the physics problem (hence it's a community-wiki answer). Let $U_k$ be the potential difference across the $k$th nonlinear element with corresponding current $\alpha U_k^2$, for $k\geq 1$. By virtue of Kirchhoff's loop rule, we deduce that the voltage across the $k$th resistor is given by $U_{k-1}-U_{k}$. Since each resistor is ohmic, we conclude that the current in the $k$th resistor is $$J_{k-1}=R^{-1}(U_{k-1}-U_{k}).$$ In particular, we have $J_0=R^{-1}(U_{0}-U_{1})$. Kirchhoff's junction rule then demands $$J_{k} = J_{k-1}-\alpha U_k^2.$$ Introducing $(u_k,j_k):=(\alpha R U_k,\alpha R^2 J_k)$ and rearranging, we obtain the dimensionless recurrence relations \begin{align} u_k = u_{k-1}-j_{k-1},\qquad j_k &= j_{k-1}-u_k^2\\&=j_{k-1}-(u_{k-1}-j_{k-1})^2. \end{align} This may be compactly expressed as $$(u_k,j_k)=g(u_{k-1},j_{k-1})=\cdots=g^k(u_0,j_0)$$ where $g(u,j):=(u-j,j-(u-j)^2)$. That is, the sequence of voltages and currents is obtained by iterating from an initial choice of $(u_0,j_0)$.

However, on physical grounds we are only concerned with solutions for which the currents and voltages are positive and monotonically decreasing to zero. This is rather delicate, as numerical experimentation demonstrates that the fixed point $(u,j)=(0,0)$ is badly unstable. (I don't know how to formally prove this. We can note, though, that the conditions $u-j>0$ and $j>j-(u-j)^2>0$ require $j<u<j+\sqrt{j}$. Hence the range of possible $j$ gets smaller and smaller as $j\to 0^+$, which to me seems consistent with the origin being unstable.)
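A quick numerical sketch of this delicacy (in Python, as an illustration; the seed uses the decaying-branch asymptotics $u_n \approx 6/n^2$ reported in another answer, with $j_n = u_n - u_{n+1}$): iterates of $g$ started near the physical branch do stay positive and monotonically decreasing, at least initially.

```python
def g(u, j):
    # the map g(u, j) = (u - j, j - (u - j)^2)
    return (u - j, j - (u - j) ** 2)

# seed near the decaying branch u_n ~ 6/n^2 at n = 25, with j_n = u_n - u_{n+1}
n0 = 25
u, j = 6 / n0**2, 6 / n0**2 - 6 / (n0 + 1) ** 2

for k in range(5):
    u_next, j_next = g(u, j)
    # currents and voltages remain positive and monotonically decreasing
    assert 0 < u_next < u and 0 < j_next < j
    u, j = u_next, j_next

# after 5 steps the voltage still tracks 6/n^2
assert abs(u * (n0 + 5) ** 2 / 6 - 1) < 0.01
```

Pushing the iteration much further from an imperfect seed eventually drives the iterates negative, consistent with the instability of the origin described above.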

Finally, to obtain the advertised functional equation, we seek a solution of the form $j_k=f(u_k)$. In that case, applying the first equation to $j_{k+1}=f(u_{k+1})$ yields

$$j_{k+1} = f(u_{k+1}) = f(u_k-j_k) = f(u_k-f(u_k)).$$

On the other hand, from the second equation we have

$$j_{k+1} = j_k - u_{k+1}^2 = j_k - (u_k-j_k)^2 = f(u_k)-(u_k - f(u_k))^2.$$ Together, we obtain the desired functional equation $$f(x-f(x)) = f(x)-(x-f(x))^2$$ under the identification $x=u_k$. (With $h(x):=x-f(x)$ this is equivalent to $x = h(x)^2+2h(x)-h(h(x))$, and we may further conclude that $h(u_k) = u_{k+1}$, i.e., the voltages are obtained by iterating $h$ on $u_0$.)

I'm not confident I know how to justify the condition $j_k=f(u_k)$ rigorously. Physically, however, it has a simple interpretation: Suppose we draw a vertical line between the first nonlinear element and the second resistor. Then on the right we have a copy of the infinite chain, but now driven by voltage $U_1$ and current $J_1$. Since it's the same chain, it must have the same voltage-current relationship as the original, i.e., $j_1=f(u_1)$. We may then repeat this logic with the next element-resistor pair and so on, yielding $j_k=f(u_k)$ for all $k$ as desired.

At this point I altogether run out of firm conclusions. But I do have some more observations:

  • In a problem with multiple dimensionful parameters, it's often wise to study limiting cases for which the problem simplifies. For instance, one has the trivial limit $U_k,J_k\to 0$ as $U_0\to 0$. More interesting is the case $\alpha\to \infty$, where each nonlinear element is a short circuit and so the entire current $J_0$ will flow through the first nonlinear element (path of least resistance). Therefore $J_0=R^{-1} U_0$ as $\alpha R U_0\to\infty$. Other limits are not helpful: If $\alpha\to 0$, then all the nonlinear elements become open circuits and therefore one has an infinite chain of identical resistors, i.e., infinite resistance. Similarly, if $R\to 0$ then the voltage across each nonlinear element is $U_0$ and therefore the current required would be infinite. Neither situation is physical and so there's no evident conclusion to draw. (To render them physical, one could either introduce ohmic resistances on the branches or consider a finite chain.)

  • It's worth noting that, while the dimensionless variables chosen above seem obvious enough, they're not the only ones possible. For instance, we could just as well have taken $u'_k:=U_k/U_0$, $j'_k:=(R/U_0)J_k$ to obtain the dimensionless equations $$j'_{k-1} = u'_{k-1} - u'_k,\qquad j'_k = j'_{k-1} - \gamma {u'_k}^2$$ where $\gamma = \alpha R U_0$. The principal benefit of this is that we can now explicitly consider $\alpha \to \infty$, in which case $\gamma\to \infty$ and so the second equation collapses to $u'_k=0$ for $k\geq 1$. Hence $j'_0=1$, i.e., $J_0=R^{-1}U_0$, as stated previously. This moreover raises the possibility of solving the equations perturbatively in powers of $\gamma^{-1}$, though I've run into trouble proceeding along this line.

  • In the above, I've presented the problem in terms of a system of nonlinear first-order difference equations. This is readily converted into a single nonlinear difference equation of second order: $$u_k^2 = j_{k-1}-j_k = (u_{k-1}-u_k)-(u_{k}-u_{k+1})=u_{k+1}-2u_k+u_{k-1}.$$ This second-order difference equation is analogous to the differential equation $u''(x) = u(x)^2$ with $j(x):=-u'(x)$. By taking a first integral, we obtain $$\frac{1}{3}u(x)^3 = \frac12 u'(x)^2+C.$$ The requirement that $u(x),j(x)\to 0$ as $x\to \infty$ then imposes $C=0$. The resulting first-order ODE is separable with solution $u(x)=u(0)(1+x\sqrt{u(0)/6})^{-2}$; note that for large $x$ this behaves as $u(x)\approx 6/x^2$, consistent with the $6n^{-2}$ asymptotics found in another answer. Then $j(0)=\sqrt{2/3}\,u(0)^{3/2}$, suggesting $$j_0\propto \sqrt{\tfrac23}\,u_0^{3/2}\implies J_0\propto \sqrt{\tfrac23}\,\frac{(\alpha R U_0)^{3/2}}{\alpha R^2}$$ (for small $U_0$, I think?) as the asymptotics for the original difference equation. The above is not exactly rigorous, so I don't know how much stock to put in it; however, it does seem to match up with what numerics I've done.

  • One solution idea, in line with the above use of a first integral to solve the differential equation, is to search for a conservation law, i.e., a function $H(x,y)$ such that $$H(u_k,j_k) = (H\circ g)(u_{k-1},j_{k-1})=H(u_{k-1},j_{k-1}).$$ (Or, for $u_k$ alone, a function $H'$ such that $H'(u_{k+1},u_k)=H'(u_k,u_{k-1})$.) This would dictate $$H(u_0,j_0)=\cdots =H(u_k,j_k)=\cdots=\lim_{k\to \infty} H(u_k,j_k)=H(0,0).$$ As such, the set of physical $(u_0,j_0)$ would be prescribed as the level set of $H$ through $(0,0)$. However, I've yet to come up with such $H(x,y)$ and so have no definitive conclusions here.
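The continuum approximation in the third bullet can be checked directly. Since the second difference $u_{k+1}-2u_k+u_{k-1}$ tends to $u''(x)$ for unit step size, the ODE is $u''=u^2$, whose decaying solution is $u(x)=u(0)\bigl(1+x\sqrt{u(0)/6}\bigr)^{-2}$. A sketch (in Python, as an illustration) verifying this and its $6/x^2$ tail:

```python
import math

A = 2.0                                # u(0); an arbitrary positive value
b = math.sqrt(A / 6)
u = lambda x: A * (1 + b * x) ** -2    # candidate decaying solution of u'' = u^2

# central-difference check of u'' = u^2 at a sample point
dx = 1e-3
x0 = 1.0
upp = (u(x0 + dx) - 2 * u(x0) + u(x0 - dx)) / dx**2
assert abs(upp - u(x0) ** 2) < 1e-6

# large-x behaviour matches the 6/x^2 asymptotics of the difference equation
assert abs(u(100.0) * 100.0**2 - 6) / 6 < 0.05
```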