Can we cover the unit square by these rectangles?

Edit: I've replaced the broken links with inline images -- apologies if this takes up more space than the answer deserves. As I mention in a reply, the Paulhus paper cited in a different answer is the good stuff.

As a bit of fun, I have written a program that attempts to fit the first $n$ rectangles into the square. (I accept that this is not an obvious route to a proof.)

Initially, I planned to jumble the rectangles without any strategy, except that I constrained each new rectangle to share a vertex with at least one previous rectangle. Unfortunately, I quickly found that backtracking is extremely time-consuming. In retrospect, this makes sense: if a state is reached where there are only $N$ spaces big enough to accept the next $N+1$ rectangles, backtracking will probably need to try all $N!$ permutations before deciding to backtrack further. (And this is as it should be, because one of the permutations may free up a corner to allow progress.) So, without strategy, 255 rectangles go in and then there is no more progress for a long time in this algorithm. [image: dead end]

So, I added a bit of strategy: try to make as many edge-to-edge joins as possible. With this algorithm, I have reached 40000 (and still going) without any need at all for backtracking. (In fact, it's quite rare to find an exact fit into a gap, where a new rectangle has edge-to-edge contact with three existing rectangles. Therefore, in retrospect, it would probably be roughly as good to insist that new rectangles have two or more edge-to-edge contacts -- which will effectively mean fitting into "corners" where the new rectangle fills the only remaining quadrant at a vertex.)
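For concreteness, here is a minimal sketch of the scoring step, reconstructed rather than taken from the actual program: the `Rect` type, the `EPS` value and the example placement are my own illustration, and the real code would presumably also count contacts with the sides of the unit square.

```c
#include <stdio.h>
#include <math.h>

/* Sketch of the contact-counting heuristic: a candidate placement is
   scored by how many existing rectangles it touches edge-to-edge, and
   the highest-scoring position wins.  EPS plays the role of the
   "exact contact" tolerance discussed below. */
typedef struct { long double x0, y0, x1, y1; } Rect;

#define EPS 1e-19L

/* 1 if [a0,a1] and [b0,b1] overlap in a segment of positive length
   (a shared endpoint alone doesn't count). */
static int overlaps(long double a0, long double a1,
                    long double b0, long double b1) {
    return fminl(a1, b1) - fmaxl(a0, b0) > EPS;
}

/* 1 if r and s share (part of) an edge. */
static int edge_contact(const Rect *r, const Rect *s) {
    if ((fabsl(r->x1 - s->x0) < EPS || fabsl(r->x0 - s->x1) < EPS)
        && overlaps(r->y0, r->y1, s->y0, s->y1)) return 1;  /* vertical edges   */
    if ((fabsl(r->y1 - s->y0) < EPS || fabsl(r->y0 - s->y1) < EPS)
        && overlaps(r->x0, r->x1, s->x0, s->x1)) return 1;  /* horizontal edges */
    return 0;
}

/* Score a candidate against the n rectangles placed so far. */
static int contact_score(const Rect *cand, const Rect *placed, int n) {
    int score = 0;
    for (int i = 0; i < n; i++)
        score += edge_contact(cand, &placed[i]);
    return score;
}

int main(void) {
    /* the 1 x 1/2 rectangle along the bottom of the unit square,
       then a 1/2 x 1/3 candidate placed flush on top of it */
    Rect placed[] = { {0.0L, 0.0L, 1.0L, 0.5L} };
    Rect cand = {0.0L, 0.5L, 0.5L, 0.5L + 1.0L/3.0L};
    printf("contacts: %d\n", contact_score(&cand, placed, 1));
    return 0;
}
```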

Here's an image of the situation after 10000 rectangles: [image: maximized contact]. There is a different pattern, arguably just as good, if the first position with 2 edge-to-edge contacts is selected: [image: two contacts, after 1000 rectangles]. This is quicker.

For the squeamish, look away now: I have been using floating-point arithmetic. With the gcc compiler's somewhat lame "long double", this stores about 20 decimal places. So, I have insisted that an "exact" contact must have coordinates that match to at least 19 decimal places. A "clear" gap or overlap between non-contacts must be at least, say, $10^{-14}$ -- so there are 5 orders of magnitude between "presumably touching" and "presumably separate". You could regard this as having a probabilistic chance of a mistake, and I guess (without justification) the probability might be of order $10^{-5}$.
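In code, the three-way test implied here might look like the following sketch (my illustration, not the actual program); the thresholds are the ones just described.

```c
#include <stdio.h>
#include <math.h>

/* Coordinate differences below ~1e-19 count as exact contact, above
   1e-14 as a genuine gap or overlap; the five orders of magnitude in
   between are flagged as suspect. */
typedef enum { CONTACT, SEPARATE, AMBIGUOUS } Verdict;

static const char *label[] = { "contact", "separate", "AMBIGUOUS" };

static Verdict classify(long double a, long double b) {
    long double d = fabsl(a - b);
    if (d < 1e-19L) return CONTACT;   /* matches to ~19 places */
    if (d > 1e-14L) return SEPARATE;  /* comfortably apart     */
    return AMBIGUOUS;                 /* stop and investigate  */
}

int main(void) {
    printf("%s\n", label[classify(0.5L, 0.25L + 0.25L)]);        /* contact   */
    printf("%s\n", label[classify(1.0L/3, 0.333333333333333L)]); /* AMBIGUOUS */
    printf("%s\n", label[classify(1.0L/3, 0.3333L)]);            /* separate  */
    return 0;
}
```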

If gaps are required to be at least $10^{-12}$, then the algorithm is unsure whether $$ {1\over 3912} + {1\over 4124} - {1\over 4050} - {1\over 3981} = {1\over 3612702562200} $$ is zero or gap. If gaps are at least $10^{-13}$, the same happens with $$ {1\over 26981}+{1\over 29981}-{1\over 14201} = {1\over 11487435443561}.$$ These are real examples, and it's easy to concoct other situations that would challenge higher precision. For example, try $$ {1\over 30234}+{1\over 26811}-{1\over 28672}-{1\over 28172} = {1\over 27281801667907584}. $$ So far, no in-between gaps (between $10^{-19}$ and $10^{-14}$) have been encountered.
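As a sanity check, the first of these identities can be verified exactly without any bignum library, since the common denominator fits comfortably in 128 bits; this sketch (mine, not part of the original program) uses gcc's `__int128` extension.

```c
#include <stdio.h>

int main(void) {
    long long a = 3912, b = 4124, c = 4050, d = 3981;

    /* 1/a + 1/b - 1/c - 1/d over the common denominator a*b*c*d */
    __int128 den = (__int128)a * b * c * d;
    __int128 num = (__int128)b*c*d + (__int128)a*c*d
                 - (__int128)a*b*d - (__int128)a*b*c;

    /* prints num = 72 and den/num = 3612702562200, confirming that
       the difference is exactly 1/3612702562200 */
    printf("num = %lld, den/num = %lld\n",
           (long long)num, (long long)(den / num));

    /* the same quantity in long double: about 2.77e-13, a clear gap
       at the 1e-14 threshold but ambiguous at 1e-12 */
    long double g = 1.0L/a + 1.0L/b - 1.0L/c - 1.0L/d;
    printf("long double gap = %.3Le\n", g);
    return 0;
}
```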

I have recently started checking the results using arbitrary-precision rational numbers (using the IMath package). This is slower, of course. The size of the denominator could be excessive (see A003418), but only 138 base-10 digits were required up to 4800 rectangles. This took about 5 hours on a desktop. The code isn't designed for efficiency, and gets progressively slower in a variety of ways.

It may seem pointless to press on beyond 1000, or 2000 or whatever, and it probably is. However, there is an exciting crunch point at about 17000: until this point, there has been a clear region of unfilled space, substantially larger than the incoming rectangles. Any rectangle that doesn't fit conveniently elsewhere can go in there. This is quite a luxurious position: you can tell at a glance that deadlock won't be reached in the next few placements. Once that space is filled, the question is whether the remaining slivers are large enough: the incoming rectangles are now comparable in size to the remaining gaps, so no gap looks like wide-open space any more. Initial experience suggests that this crunch is survived, but of course there may be more crunches to come.

Here are images:

Wide open space at 10000: [image]

Impending crunch at 15000: [image]

Crunch at 17000, zoomed in: [image]

Crunch averted so far, at 30000: [image]

@Kevin Buzzard: I hope this doesn't take the fun out of your interactive applet. I think you're right that a bit of insight comes out of this square-bashing: the hope is that there are enough small rectangles to more or less fill the gaps between medium rectangles, enough really small rectangles to more or less fill the gaps between small rectangles, and so on, rather than relying on clever arrangements of exact matches.
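For what it's worth, the telescoping sum makes that hope quantitative: the rectangles from index $n$ onwards have total area $$ \sum_{k=n}^{\infty} {1\over k(k+1)} = \sum_{k=n}^{\infty}\left({1\over k} - {1\over k+1}\right) = {1\over n}, $$ so at every stage the remaining rectangles have exactly the area of a $1 \times {1\over n}$ strip with which to fill the remaining gaps.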

I can be specific about the rarity of exact fits under this algorithm: 20 three-edge contacts among the first 1000 rectangles, 6 in the next 1000, and 4 in the 1000 after that. Presumably more could be arranged by thinking ahead. Also, a better algorithm could do a lot more to avoid small gaps (which must be the killer in the end, if there is a killer).


This problem actually goes back to Leo Moser.

The best result that I'm aware of is due to D. Jennings, who proved that all the rectangles of size $k^{-1} \times (k+1)^{-1}$, $k = 1, 2, 3, \ldots$, can be packed into a square of side $133/132$ (link).
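(For context: the total area of these rectangles is exactly 1, $$ \sum_{k=1}^{\infty} {1\over k}\cdot{1\over k+1} = \sum_{k=1}^{\infty}\left({1\over k} - {1\over k+1}\right) = 1, $$ so a square of side 1 would be a perfect packing, and results like these squeeze the wasted area towards zero.)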

Edit 1. A web search via Google Scholar turned up this article by V. Bálint, which claims that the rectangles can be packed into a square of side $501/500$.

Edit 2. The state of the art for this and related packing problems due to Leo Moser is discussed in Chapter 3 of "Research Problems in Discrete Geometry" by P. Brass, W. O. J. Moser and J. Pach. The problem was still unsettled as of 2005.


It's been a long time since I considered this problem, so, prompted by seeing this question, I was intrigued to find out more about V. Bálint's bound of $501/500$.

A quick search revealed Bálint's paper "A Packing Problem and Geometrical Series". In this article it is only stated that, with some patience, one can pack the first 499 rectangles into the unit square. However, the main difficulty of the problem is packing the larger rectangles, so it would have been nice to see a demonstration.

Bálint addresses the question again in "Two Packing Problems", but I do not have easy access to that paper, and so now I'm concerned that a similar claim, without a demonstration, may have been made there too.

Please could someone with access to the paper lay my concern to rest?

I would very much like to have confidence in the later bound, as its validity makes the problem yet more interesting: can we get arbitrarily close to 1? I still see no good reason why this should be the case, but it's a fascinating possibility that hints at something quite deep going on.