Functions That Are Their Own $n$th Derivatives for Real $n$

The set of functions $f$ such that $f^{(n)}-f=0$ is a vector space of dimension $n$, spanned by the functions $e^{\lambda t}$ with $\lambda$ an $n$th root of unity. In particular, there are many such functions, not just one: writing $\zeta=e^{2\pi i/n}$, the general such function has the form $$f(t)=\sum_{k=0}^{n-1}a_k e^{\zeta^k t}.$$ This is explained in every text on ordinary differential equations; I remember fondly, for example, Theory of Ordinary Differential Equations by Earl A. Coddington and Norman Levinson, but I am sure you can find more modern expositions in every library.
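For instance, with $n=2$ the roots of unity are $\pm 1$, so every solution of $f''=f$ has the form $$f(t)=a_0e^{t}+a_1e^{-t},$$ equivalently a linear combination of $\cosh t$ and $\sinh t$.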


What you are interested in here is what Spanier and Oldham term "cyclodifferential functions": functions that are regenerated after being differintegrated to the appropriate order (or, to put it another way, functions that are eigenfunctions of the differintegration operator).

Consider the cyclodifferential equation

$${}_0 D_x^{\alpha}y=y$$

(or, in more familiar notation, $$\frac{\mathrm{d}^{\alpha}y}{\mathrm{d}x^{\alpha}}=y,$$ though the problem with this notation in the setting of differintegrals is that it neglects the lower limit that is present in both the Caputo and Riemann–Liouville definitions).

As you might have seen, $\exp(x)$ is a cyclodifferential function for ${}_0 D_x^{1}$ (i.e. $\exp(x)$ is its own derivative), and $\cosh(x)$ and $\sinh(x)$ are cyclodifferentials for ${}_0 D_x^{2}$ (differentiating either of these functions twice gives back the original);

$$\frac1{\sqrt{\pi x}}+\exp(x)\mathrm{erfc}(-\sqrt{x})$$

($\mathrm{erfc}(x)$ is the complementary error function)

is a cyclodifferential for ${}_0 D_x^{1/2}$ (it is its own semiderivative), and in general

$$x^{\alpha-1}\sum_{j=0}^\infty \frac{C^j x^{\alpha j}}{\Gamma(\alpha(j+1))}$$

for $\alpha > 0$ and $C$ an appropriate eigenvalue, is a cyclodifferential for ${}_0 D_x^{\alpha}$. (This general solution can alternatively be written in terms of the two-parameter Mittag-Leffler function as $x^{\alpha-1}E_{\alpha,\alpha}(Cx^{\alpha})$.)
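As a quick check, differintegrate the series term by term using the standard Riemann–Liouville power rule ${}_0D_x^{\alpha}\,x^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\alpha)}\,x^{\mu-\alpha}$ (valid for $\mu>-1$):
$${}_0D_x^{\alpha}\sum_{j=0}^\infty \frac{C^j x^{\alpha(j+1)-1}}{\Gamma(\alpha(j+1))}=\sum_{j=1}^\infty \frac{C^j x^{\alpha j-1}}{\Gamma(\alpha j)}=C\,x^{\alpha-1}\sum_{j=0}^\infty \frac{C^j x^{\alpha j}}{\Gamma(\alpha(j+1))},$$
where the $j=0$ term drops out because $1/\Gamma(0)=0$. So the function is an eigenfunction with eigenvalue $C$, and taking $C=1$ solves ${}_0D_x^{\alpha}y=y$; for $\alpha=\tfrac12$ and $C=1$ the series sums to the $\frac1{\sqrt{\pi x}}+\exp(x)\mathrm{erfc}(-\sqrt{x})$ expression above.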


The solutions to a homogeneous linear equation with constant coefficients $$ a_{n}y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1y' + a_0y = 0$$ correspond to roots of the "auxiliary polynomial" $a_nt^n + \cdots + a_0$. If the polynomial has (complex) roots $r_1,\ldots,r_k$ with multiplicities $m_1,\ldots,m_k$, then a basis for the solutions is given by $$\{ e^{r_1x}, xe^{r_1x},\ldots, x^{m_1-1}e^{r_1x},e^{r_2x},\ldots,x^{m_k-1}e^{r_kx}\}.$$ Here the complex exponential is used, so that if $a$ and $b$ are real numbers and $i$ is the square root of $-1$, we have $$e^{a+bi} = e^a(\cos(b) + i \sin(b)).$$
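For example, $y''-2y'+y=0$ has auxiliary polynomial $t^2-2t+1=(t-1)^2$, with a double root at $r=1$, so a basis of solutions is $\{e^{x},\,xe^{x}\}$.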

In your case, you are looking at the polynomial $t^n - 1$, whose roots are the $n$th roots of unity. They are all distinct, so you can either take the complex exponentials, or their real and imaginary parts. So if $\lambda$ is a primitive $n$th root of $1$, then a basis for the space of solutions is $$ e^x, e^{\lambda x}, e^{\lambda^2 x},\ldots,e^{\lambda^{n-1}x}.$$ The general solution is a linear combination of these with complex coefficients: $$f(x) = a_0e^x + a_1e^{\lambda x} + a_2e^{\lambda^2x}+\cdots+a_{n-1}e^{\lambda^{n-1}x},\qquad a_0,\ldots,a_{n-1}\in\mathbb{C}.$$ If you don't want complex values, you can take the general form above and take real and imaginary parts separately.
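For instance, with $n=4$ you can take $\lambda=i$; separating the real and imaginary parts of $e^{\pm ix}$ turns the complex basis $e^x, e^{ix}, e^{-x}, e^{-ix}$ into the real basis $$e^x,\quad e^{-x},\quad \cos x,\quad \sin x$$ for the solutions of $y^{(4)}=y$.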