Computational complexity of calculating the nth root of a real number

First, note that the asymptotic complexity of arithmetic operations stated in the common literature concerns operations on numbers with arbitrary precision, and the running time is expressed as a function of the desired number of digits. From the standpoint of asymptotic complexity it makes no sense to ask for operations with constant precision (e.g., single floats, as you mentioned in the comments): there are only $O(1)$ such numbers, hence the operation can be evaluated in time $O(1)$ (e.g., by a look-up table).

Let me thus denote the desired precision as $m$ (since you use the customary $n$ for something else). The result you quote is that $\sqrt a$ can be computed in time $O(M(m))$, where $M(m)$ is any function (satisfying some mild regularity conditions) such that multiplication of two $m$-bit integers can be performed in time $M(m)$. (The currently known asymptotically fastest multiplication algorithm has $M(m)=m\log m\,2^{O(\log^*m)}$.) The algorithm uses the Newton iteration $x\mapsto x-\frac{x^2-a}{2x}$. This iteration has a quadratic rate of convergence, hence $O(\log m)$ iterations suffice, and each step takes $O(1)$ multiplications and divisions, leading to the estimate $O(M(m)\log m)$ on the total running time.

The extra factor of $\log m$ can be removed by the following observation: since the number of correct digits roughly doubles with each iteration, we do not have to perform all operations with precision $m$; it suffices to use precision sufficient to accommodate the correct digits. Thus only the last iteration is performed with precision $m$, the one before it with precision $m/2$, the one before that with precision $m/4$, and so on. The running time is then $O(M(m)+M(m/2)+M(m/4)+\cdots)$. Since $M$ is essentially linear, this can be bounded by a geometric series, whose sum is $O(M(m))$. (Note by the way that the fact that division can be done in time $O(M(m))$ also rests on a similar Newton iteration argument.)
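To make the precision-doubling scheme concrete, here is a short Python sketch for the integer square root (adapted from the pure-Python reference algorithm documented in CPython's `math.isqrt` sources; the function name and the scaling remark afterwards are my own). The working precision $d$ roughly doubles on each pass, so only the final pass runs at full width, exactly as in the analysis above:

```python
def isqrt(n):
    """Floor of the square root of n >= 0, via Newton-type steps whose
    working precision d roughly doubles on each pass; only the last pass
    runs at the full precision, matching the O(M(m)) bound above."""
    if n < 0:
        raise ValueError("isqrt requires a nonnegative argument")
    if n == 0:
        return 0
    c = (n.bit_length() - 1) // 2
    a, d = 1, 0
    for s in reversed(range(c.bit_length())):
        # a approximates the square root of the top 2*d bits of n
        e, d = d, c >> s
        a = (a << d - e - 1) + (n >> 2 * c - e - d + 1) // a
    # a is within 1 of the true root; correct the possible overshoot
    return a - (a * a > n)
```

To get $\sqrt a$ to $m$ fractional bits rather than an integer root, one can call `isqrt(a << 2*m)` and read the result as a fixed-point number with $m$ fractional bits.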

Now, what about $n$th roots in general? You can use Newton iteration again, as Denis suggests. The analysis is similar to the square root case, but since each step takes $O(\log n)$ multiplications, you get a bound $O(M(m)\log n)$. Note that if $n$ is given in binary, $\log n$ is the length of the input, hence this is an algorithm with worse than a quadratic running time. Another approach is to compute $\sqrt[n]a$ as $\exp((\log a)/n)$. Using binary splitting, the Taylor series for $\exp$ and $\log$ can be evaluated in time $O(M(m)(\log m)^2)$; using algorithms based on the arithmetic-geometric mean this can be reduced to $O(M(m)\log m)$, leading to $n$th root computation with the same time bound. This also has an extra $\log$ factor, but it is independent of $n$. I don’t know how to compute $\sqrt[n]a$ in time $O(M(m))$, and I am somewhat skeptical that such a thing is known. It might well be that the comment in Alt’s paper is only intended to cover the case of constant $n$.
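To illustrate the $\exp((\log a)/n)$ approach, here is a small sketch using the mpmath library (the helper name and the guard-digit margin are my own choices, not anything from the sources quoted above). The cost of the two elementary-function evaluations depends on the working precision, not on $n$; the division by $n$ is comparatively cheap:

```python
from mpmath import mp, mpf, exp, log

def nth_root_via_exp_log(a, n, digits=50):
    """Compute a**(1/n) as exp(log(a)/n) to `digits` significant
    decimal digits, for a > 0. The dominant cost is the exp and log
    evaluations, whose running time grows with the precision only."""
    mp.dps = digits + 10  # a few guard digits; a crude safety margin
    return exp(log(mpf(a)) / n)

print(nth_root_via_exp_log(2, 1000))  # 2**(1/1000)
```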


For the computation of $A^{1/p}$, the Newton-Raphson algorithm uses the sequence $u_0=A$, $u_{n+1}=u_n-\frac{u_n^p-A}{pu_n^{p-1}}$, whose rate of convergence, always quadratic, is essentially independent of $p$ (and $A$). So each step costs $O(\log p)$ multiplications (computing $u_n^{p-1}$ by repeated squaring) and one division.
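A double-precision sketch of this iteration in Python (the starting guess $\max(A,1)$ and the stopping rule are my own choices; the answer's $u_0=A$ also works, but $\max(A,1)$ keeps the iteration monotone from above for all $A>0$). The explicit square-and-multiply helper makes the $O(\log p)$ multiplication count per step visible:

```python
def powi(x, k):
    """Square-and-multiply powering: about 2*log2(k) multiplications,
    which is where the log(p) factor per Newton step comes from."""
    r = 1.0
    while k:
        if k & 1:
            r *= x
        x *= x
        k >>= 1
    return r

def pth_root(A, p, rel_tol=1e-14):
    """Newton iteration u -> u - (u**p - A)/(p*u**(p-1)) for A**(1/p),
    A > 0, at ordinary double precision."""
    u = max(A, 1.0)  # >= the root, so the iteration decreases monotonically
    while True:
        v = powi(u, p - 1)                  # O(log p) multiplications
        u_next = u - (u * v - A) / (p * v)  # one division per step
        if abs(u - u_next) <= rel_tol * u_next:
            return u_next
        u = u_next

print(pth_root(2.0, 5))  # ~ 2**0.2 = 1.1486983549970350
```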