What is the next number on the constructibility sequence? And what is the asymptotic growth?

I have written some Haskell code to compute the next number in the constructibility sequence. It confirmed everything we had already established and gave me the following extra results:

There are $149714263$ line-line intersections at the 4th step (computed in ~14 hours). Pace Nielsen's approximation was off by only 8! This count includes some pairs of points whose distance from each other is between $10^{-13}$ and $10^{-12}$.

I have found the fourth number in the constructibility sequence: $$1723816861$$ I computed this by splitting the first quadrant into sections along the x-axis, counting the intersections in each section, and combining the results. The program was not parallel, but the work was split amongst 6 processes on 3 machines. It took approximately 6 days to complete, and each process never used more than 5 GB of RAM.

My data can be found here. I've produced two graphs from my data, which give a sense of how the points are distributed:

[graph: distribution of the intersection points]

If we focus on the area from $0$ to $1$ we get:

[graph: distribution of the points between $0$ and $1$]


Implementation Details:

I represent constructible reals as pairs of a 14 decimal digit approximation (using ireal) and a symbolic representation (using constructible). This is done to speed up comparisons: the approximations give us quick but partial comparison functions, while the symbolic representation gives us slower but total comparison functions.
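
As a rough illustration of how the two representations work together (the type and field names below are placeholders of my own, not the actual API of ireal or constructible), a comparison can consult the approximation first and fall back to the symbolic value only when the approximating intervals overlap:

```haskell
-- Sketch only: the Rational bounds stand in for ireal's 14-digit approximation
-- and SymExpr for the symbolic value from the constructible package.
data SymExpr = SymExpr        -- symbolic representation (details omitted)
  deriving (Eq, Ord)

data CReal = CReal
  { approxLo :: Rational      -- lower bound of the approximation
  , approxHi :: Rational      -- upper bound of the approximation
  , symbolic :: SymExpr       -- exact symbolic representation
  }

-- Fast path: if the approximating intervals are disjoint, they decide the
-- comparison; otherwise fall back to the slow but total symbolic comparison.
compareCReal :: CReal -> CReal -> Ordering
compareCReal x y
  | approxHi x < approxLo y = LT
  | approxHi y < approxLo x = GT
  | otherwise               = compare (symbolic x) (symbolic y)
```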

Lines are represented by a pair $\langle m, c \rangle$ such that $y = mx + c$. To deal with vertical lines, we create a data-type that's enhanced with an infinite value. Circles are triples $\langle a, b, r \rangle$ such that $(x-a)^2 + (y-b)^2 = r^2$.
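
A minimal sketch of these representations (the type and field names are illustrative, not necessarily what my code uses):

```haskell
-- Slope extended with an infinite value so vertical lines fit the same scheme.
-- Deriving Ord makes Vertical compare greater than every finite slope.
data Slope q = Finite q | Vertical
  deriving (Eq, Ord, Show)

-- A line y = m*x + c; when the slope is Vertical, c is read as the line x = c.
data Line q = Line { lineSlope :: Slope q, lineIntercept :: q }
  deriving (Eq, Ord, Show)

-- A circle (x - a)^2 + (y - b)^2 = r^2.
data Circle q = Circle { centerX :: q, centerY :: q, radius :: q }
  deriving (Eq, Ord, Show)
```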

I use a sweep-line algorithm to compute the number of intersections in a given rectangle. It extends the Bentley-Ottmann algorithm to allow checking of intersections between circles as well as lines. The idea behind the algorithm is that we have a vertical line moving from the left face of the rectangle to the right face. We think of this line as a strictly ordered set of objects (lines and circles). This requires some care to get right: circles need to be split into their top and bottom semicircles; objects with equal y-coordinates need to be further ordered by their slopes; and we need to deal with circles that are tangential to each other at a point. The composition or order of this set can change in 3 ways as we move from left to right:

  1. Addition: We reach the leftmost point on an object and so we add it to our sorted set.
  2. Deletion: We reach the rightmost point of an object and so we remove it from our sorted set.
  3. Intersection: Several objects intersect. This is the only way the order of objects can change. We reorder them and note the intersection.

We keep track of these events in a priority queue, and deal with them in the order they occur as we move from left to right.
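
A skeleton of this event loop might look as follows. This is a sketch only: it reuses the Line and Circle types from the sketch above, uses Data.Set as the priority queue, and omits the ordered status structure and the neighbour checks that actually discover new intersection events.

```haskell
import qualified Data.Set as Set

-- Sweep objects: lines, plus circles split into their top and bottom
-- semicircles so that every object is x-monotone.
data Arc = LineArc (Line Rational)
         | TopSemi (Circle Rational)
         | BotSemi (Circle Rational)
  deriving (Eq, Ord, Show)

-- The three kinds of events, keyed by the x-coordinate at which they occur.
data Event = Addition Arc          -- leftmost point of an object
           | Deletion Arc          -- rightmost point of an object
           | Intersection [Arc]    -- several objects meet at one point
  deriving (Eq, Ord, Show)

type EventQueue = Set.Set (Rational, Event)

-- Process events from left to right, counting the intersection events.
sweep :: EventQueue -> Int
sweep queue = go queue 0
  where
    go q acc = case Set.minView q of
      Nothing               -> acc
      Just ((_x, ev), rest) -> case ev of
        Addition _     -> go rest acc        -- insert into the ordered status set
                                             -- and check new neighbours (omitted)
        Deletion _     -> go rest acc        -- remove from the status set and check
                                             -- the arcs that become adjacent (omitted)
        Intersection _ -> go rest (acc + 1)  -- swap the arcs and count one point
```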

The big advantage of this algorithm over the naive approach is that it doesn't require us to keep track of the intersection points themselves. It also has the advantage that if we compute the number of intersections in two rectangles, it is very easy to combine them: we just add the counts for each rectangle, making sure we aren't double counting the borders. This makes it very easy to distribute the computation. It also has very reasonable RAM demands: its RAM use should be $O(n)$, where $n$ is the number of lines and circles, and the constant is quite small.
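
For instance, this is roughly how counts from vertical strips of the first quadrant get combined; countInStrip is a hypothetical stand-in for the sweep over one rectangle, and using half-open strips $[x_i, x_{i+1})$ is one way to keep border points from being counted twice:

```haskell
-- Combine per-strip counts; each strip [lo, hi) owns its left border, so no
-- intersection point is attributed to two strips.
totalIntersections :: [Rational]                     -- strip boundaries x0 < x1 < ...
                   -> (Rational -> Rational -> Int)  -- count in the strip [lo, hi)
                   -> Int
totalIntersections boundaries countInStrip =
  sum [ countInStrip lo hi | (lo, hi) <- zip boundaries (tail boundaries) ]
```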


Mathematica has a command to solve systems of equations over the real numbers; or one can just solve them equationally. It also has a command to find the minimal polynomial of an algebraic number. Thus intersection points between lines and circles can be found using exact arithmetic (as numbered roots of minimal polynomials over $\mathbb{Q}$), as can the slopes of lines and radii of circles. Using such methods, there are exactly 17,562 distinct lines and 32,719 distinct circles at the next stage.

Finding the minimal polynomial of an algebraic number this way is somewhat slow (there may be ways to speed that up), but these lines and circles can also be found in just a few minutes if we instead use (10 digit) floating point approximations.

I've now optimized the code a bit, and using those floating point approximations, in a little under 21 hours I compute that there are at least $$149,714,255$$ distinct intersections between those 17,562 lines. This could be undercounting, because the floating point arithmetic might make us think that two distinct intersection points are the same. However, the computations shouldn't take much longer using 20 digit floating points (but they would take a lot more RAM). I expect that the numbers won't change much, if at all. But I did see changes going from 5 digit to 10 digit approximations, so trying the 20 digit computation would be useful.

Storing those 10 digits, for a little more than a hundred million intersection points, was taking most of my RAM. It appears that if I try to do the same computation with the circle intersections, it will exceed my RAM limits. However, it is certainly doable, and I'm happy to give my code to anyone who has access to a computer with a lot of RAM (just email me; my computer has 24 GB, so you'd want quite a bit more than that). The code may still have some areas where it can be sped up, but taking less than 1 day to find all intersection points between lines is already quite nice.

Another option would be to store these points on the hard disk, but there are computers out there with enough RAM to make that an unnecessary change.


Edited to add: I found a computer that is slightly slower than my own but has a lot of RAM. It took about 6 weeks, and about 360 GB of RAM, but the computation finished. It is still only an approximation (not exact arithmetic, only 10 digits of precision past the decimal point). The number of crossings I get is $$ 1,723,814,005 $$ If you have a real need to do exact arithmetic, I could probably do that, but it would take a bit longer. Otherwise I'll consider this good enough.