Newton-Raphson Division With Big Integers

First of all, you can implement division in time O(n^2) with a reasonable constant, so it's not (much) slower than naive multiplication. However, if you use a Karatsuba-like algorithm, or even an FFT-based multiplication algorithm, then you can indeed speed up your division using Newton-Raphson.

A Newton-Raphson iteration for calculating the reciprocal of x is q[n+1]=q[n]*(2-q[n]*x).

Suppose we want to calculate floor(2^k/B), where B is a positive integer. WLOG B≤2^k; otherwise the quotient is 0. The Newton-Raphson iteration for x=B/2^k yields q[n+1]=q[n]*(2-q[n]*B/2^k), which we can rearrange as

q[n+1]=q[n]*(2^(k+1)-q[n]*B) >> k

Each iteration of this kind requires only integer multiplications and bit shifts. Does it converge to floor(2^k/B)? Not necessarily. However, in the worst case, it eventually alternates between floor(2^k/B) and ceiling(2^k/B) (Prove it!). So you can use some not-so-clever test to see if you are in this case, and extract floor(2^k/B). (This "not-so-clever test" should be a lot faster than the multiplications in each iteration; however, it would still be nice to optimize it.)
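For concreteness, here is a minimal Python sketch of this integer iteration (Python only because its integers are unbounded). The function name and the specific choices are mine: it starts from an underestimate of 2^k/B, which keeps the iterates non-decreasing and never above floor(2^k/B), and it replaces the floor/ceiling test described above with a short correction loop at the end.

```python
def reciprocal_floor(k, B):
    """Sketch: compute floor(2^k / B) for integers 1 <= B <= 2^k using the
    integer iteration  q <- q*(2^(k+1) - q*B) >> k.

    The starting guess underestimates 2^k / B by at most a factor of two,
    so the iterates increase monotonically and stay <= floor(2^k / B);
    a short correction loop then finishes the job."""
    assert 1 <= B <= (1 << k)
    # 2^(k - bitlength(B)) lies in [2^k/(2B), 2^k/B]; clamping the shift at 0
    # also covers the edge case B = 2^k.
    q = 1 << max(k - B.bit_length(), 0)
    while True:
        q_next = q * ((1 << (k + 1)) - q * B) >> k
        if q_next <= q:   # no more progress: q is within a unit or two of the answer
            break
        q = q_next
    # Final correction; with the underestimating start this runs O(1) times.
    while (q + 1) * B <= (1 << k):
        q += 1
    return q
```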

Indeed, calculating floor(2^k/B) suffices to calculate floor(A/B) for any positive integers A,B. Take k such that A*B≤2^k, and verify that floor(A/B)=A*ceiling(2^k/B) >> k. (The condition A*B≤2^k guarantees that rounding 2^k/B up to an integer perturbs A/B by less than 1/B, so it cannot change the floor.)
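Continuing the sketch, the reduction in the previous paragraph might look as follows; it reuses the hypothetical reciprocal_floor above and picks k from the bit length of A*B.

```python
def floor_div(A, B):
    """Sketch of the reduction above: choose k with A*B <= 2^k, turn
    floor(2^k/B) into ceiling(2^k/B), and recover floor(A/B) with one
    multiplication and a shift.  Uses the reciprocal_floor sketch above."""
    assert A > 0 and B > 0
    k = (A * B).bit_length()      # then A*B < 2^k, so certainly A*B <= 2^k
    r = reciprocal_floor(k, B)    # floor(2^k / B)
    if r * B != (1 << k):         # round up unless B divides 2^k exactly
        r += 1                    # now r = ceiling(2^k / B)
    return (A * r) >> k
```

For example, floor_div(10**40 + 3, 10**17) should agree with the built-in (10**40 + 3) // 10**17.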

Lastly, a simple but important optimization for this approach is to truncate the multiplications (i.e. calculate only the higher bits of the products) in the early iterations of the Newton-Raphson method. The reason is that the early iterates are far from the quotient anyway, so there is no harm in computing them inaccurately. (Refine this argument and show that, done appropriately, this lets you divide two ≤n-bit integers in time O(M(2n)), assuming you can multiply two ≤k-bit integers in time M(k) and that M is an increasing convex function.)
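To illustrate the shape of this optimization (a sketch, not a tuned implementation), here is one way to organize it: compute floor(2^(n-1+p)/B) for an n-bit B by recursing at roughly half the precision, then do a single Newton step in which B is replaced by its top ~p bits and a common power of two is divided out, so both products involve only O(p)-bit operands. The scaling, the base-case cutoff, the guard bits and the correction loop are all illustrative choices.

```python
def scaled_reciprocal(B, p):
    """Sketch of the truncated-multiplication idea: compute floor(2^(n-1+p)/B),
    where n is the bit length of B, by recursing at about half the precision
    and then performing one Newton step against the top ~p bits of B."""
    n = B.bit_length()
    target = 1 << (n - 1 + p)
    if p <= 32:                    # quotient has few bits: plain division is cheap
        return target // B
    ph = (p >> 1) + 1              # half the precision, plus a guard bit
    v = scaled_reciprocal(B, ph)   # exact floor(2^(n-1+ph)/B), ~ph significant bits
    w = v << (p - ph)              # lift the estimate to precision p
    # Truncate B to roughly p significant bits and cancel the factor 2^s,
    # so the multiplications below involve only O(p)-bit operands.
    s = max(n - p - 2, 0)
    B_top = B >> s
    w = w * ((1 << (n + p - s)) - w * B_top) >> (n - 1 + p - s)
    # Rounding and truncation leave w within a few units of the exact value.
    while w * B > target:
        w -= 1
    while (w + 1) * B <= target:
        w += 1
    return w
```

Since the working precision roughly doubles from one recursion level to the next, the multiplication costs form a geometrically growing series dominated by the top level, which is the shape of the argument behind the O(M(2n)) bound.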