Efficient way to find overlaps among N rectangles

The exception you're getting comes from the last line of the code you show. The expression list[rect] is not valid, since list is the built-in list class, and the [] syntax in that context tries to index the class itself rather than create a list. You probably want just [rect], which creates a new list containing the single item rect.
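For illustration (the variable name here is made up, since your failing line isn't shown):

# overlapping = list[rect]  # wrong: indexes the list class itself
#                           # (a TypeError before Python 3.9; a useless
#                           # typing alias on newer versions)
overlapping = [rect]        # right: a new list containing the single item rect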

There are several other basic issues with your code. For instance, your Rect.__init__ method doesn't set a left attribute, which your collision-testing method seems to expect. You've also used different capitalization for r1 and r2 in different parts of the overlap method (Python doesn't consider r1 to be the same as R1).
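For reference, the rest of this answer assumes rectangles carry left, right, bottom and top attributes, with y increasing upward. A minimal sketch of such a class (the constructor's argument order here is an assumption):

class Rect:
    def __init__(self, left, right, bottom, top):
        self.left = left      # smallest x
        self.right = right    # largest x
        self.bottom = bottom  # smallest y (y increases upward)
        self.top = top        # largest y

The standalone overlap() function further down takes the rectangle as its first argument, so it can also be attached as a method (Rect.overlap = overlap) to support calls like a.overlap(b).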

Those issues don't really have anything to do with testing more than two rectangles, which is what your question asks about. The simplest way to do that (and I strongly advise sticking to simple algorithms if you're having basic issues like the ones mentioned above) is to compare each rectangle with each other rectangle using the existing pairwise test. You can use itertools.combinations to easily get all pairs of items from an iterable (such as a list):

import itertools

list_of_rects = [rect1, rect2, rect3, rect4]  # assume these are defined elsewhere

for a, b in itertools.combinations(list_of_rects, 2):
    if a.overlap(b):
        ...  # do whatever you want to do when two rectangles overlap here

A couple of minor potential efficiency improvements follow. First, fix your overlap() function: it potentially does calculations it needn't, since it can return as soon as one separating test succeeds:

def overlap(r1, r2):
    # Separated horizontally: one rectangle lies entirely left or right of the other.
    if r1.left > r2.right or r1.right < r2.left:
        return False
    # Separated vertically (y increases upward): one lies entirely above or below.
    if r1.top < r2.bottom or r1.bottom > r2.top:
        return False
    return True

Second, calculate the containing rectangle for one of the lists and use it to screen the other list -- any rectangle that doesn't overlap the container doesn't need to be tested against any of the rectangles that contributed to it:

def containing_rectangle(rectangles):
    # Smallest rectangle enclosing every rectangle in the list.
    return Rect(
        min(r.left for r in rectangles),
        max(r.right for r in rectangles),
        min(r.bottom for r in rectangles),
        max(r.top for r in rectangles),
    )

c = containing_rectangle(listA)

for b in listB:
    if b.overlap(c):  # only rectangles overlapping the container need the full scan
        for a in listA:
            if b.overlap(a):
                ...  # handle the overlapping pair (a, b) here

In my testing with hundreds of random rectangles, this screening avoided only a single-digit percentage of the comparisons (e.g. 2% or 3%), and occasionally it even increased the number of comparisons. However, presumably your data isn't random, and it might fare better with this type of screening.

Depending on the nature of your data, you could break this up into a container rectangle check for each batch of 10K rectangles out of your 50K, or whatever batch size gives you maximum efficiency. It may also help to presort the rectangles (e.g. by their centers) before assigning them to container batches.
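For example, a center presort might look like this (a sketch; the particular sort key is an assumption, not something benchmarked here):

def center_key(r):
    # (center x, center y) of a rectangle, so consecutive slices hold nearby
    # rectangles and the containing rectangle of each batch stays tight
    return ((r.left + r.right) / 2, (r.bottom + r.top) / 2)

listA.sort(key=center_key)
listB.sort(key=center_key)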

We can break up and batch both lists with container rectangles:

# Slice each list into batches of 10 and prepend each batch's containing
# rectangle at index 0 to act as a cheap screen.
listAA = [listA[x:x + 10] for x in range(0, len(listA), 10)]
for i, batch in enumerate(listAA):
    listAA[i] = [containing_rectangle(batch)] + batch

listBB = [listB[x:x + 10] for x in range(0, len(listB), 10)]
for i, batch in enumerate(listBB):
    listBB[i] = [containing_rectangle(batch)] + batch

for bb in listBB:
    for aa in listAA:
        if bb[0].overlap(aa[0]):  # container vs. container
            for b in bb[1:]:
                if b.overlap(aa[0]):  # rectangle vs. container
                    for a in aa[1:]:
                        if b.overlap(a):
                            ...  # handle the overlapping pair (a, b) here

With my random data, this decreased the number of comparisons by roughly 15% to 20%, even counting the container rectangle comparisons. The batching of rectangles above is arbitrary, and you can likely do better.


First off: as with many a problem from computational geometry, specifying the parameters for order-of-growth analysis needs care. Calling the lengths of the lists m and n, the worst case in just those parameters is Ω(m×n), as all areas might overlap (in this regard, the algorithm from the question is asymptotically optimal). It is usual to also include the size of the output: the run time is t = f(m, n, o) (see output-sensitive algorithm).
Trivially, f ∈ Ω(m+n+o) for the problem presented.


Line sweep is a paradigm to reduce geometric problems by one dimension - in its original form, from 2D to 1D, plane to line. It works here as follows (a sketch in code follows the steps):

- Imagine all the rectangles in the plane, with a different colour for each list.
- Sweep a line across this plane - left to right, conventionally, and infinitesimally further to the right "for low y-coordinates" (handle coordinates in increasing x-order, and in increasing y-order for equal x).
- Throughout this sweep (or scan), keep one set per colour representing the "y-intervals" of all rectangles of that colour at the current x-coordinate, starting empty. (Use a data structure supporting insertion, deletion, and enumeration of all intervals that overlap a query interval: see below.)
- On meeting the left side of a rectangle, add its y-interval to the data structure for its colour, and report the overlapping intervals/rectangles of the other colour.
- On meeting a right side, remove the interval.
- Depending on the definition of "overlapping", handle left sides before right sides - or the other way round.
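A minimal sketch of this sweep, assuming the Rect attributes used earlier (the function name sweep_overlaps is made up). For brevity, each colour's "set of y-intervals" is a plain dict scanned linearly, so this sketch is still O(m×n) in the worst case; substituting the interval data structure discussed below is what yields the better bound:

def sweep_overlaps(listA, listB):
    # Events: (x, side, colour, rect) with side 0 = left edge, 1 = right edge.
    events = []
    for colour, rects in enumerate((listA, listB)):
        for r in rects:
            events.append((r.left, 0, colour, r))
            events.append((r.right, 1, colour, r))
    # Sort by x; at equal x, process left edges before right edges, so that
    # rectangles that merely touch count as overlapping (swap the side codes
    # for the other convention).
    events.sort(key=lambda e: (e[0], e[1]))

    active = ({}, {})  # per colour: id(rect) -> rect currently cut by the sweep line
    pairs = []
    for x, side, colour, r in events:
        if side == 0:
            # The new rectangle overlaps exactly those active other-colour
            # rectangles whose y-intervals intersect its own.
            for other in active[1 - colour].values():
                if r.bottom <= other.top and r.top >= other.bottom:
                    pairs.append((r, other) if colour == 0 else (other, r))
            active[colour][id(r)] = r
        else:
            del active[colour][id(r)]
    return pairs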


There are many data structures supporting insertion and deletion of intervals, and finding all intervals that overlap a query interval. Currently, I think augmented search trees may be the easiest to understand, implement, test, and analyse.
Using such a structure, enumerating all o intersecting pairs of axis-aligned rectangles (a, b) from listA and listB should be possible in O((m+n)log(m+n)+o) time and O(m+n) space. For sizeable problem instances, avoid data structures needing more than linear space ((original) segment trees, for one example pertaining to interval overlap).
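For illustration, a minimal sketch of the augmented-search-tree idea: a binary search tree keyed on each interval's low endpoint, where every node is augmented with the maximum high endpoint in its subtree, enabling pruning during overlap queries. Balancing and deletion (both needed for the bounds above, and for the right-side events of the sweep) are omitted for brevity:

class IntervalNode:
    # One interval [low, high] plus the largest high endpoint in this subtree.
    def __init__(self, low, high, payload):
        self.low, self.high, self.payload = low, high, payload
        self.max_high = high
        self.left = self.right = None

def insert(node, low, high, payload):
    if node is None:
        return IntervalNode(low, high, payload)
    if low < node.low:
        node.left = insert(node.left, low, high, payload)
    else:
        node.right = insert(node.right, low, high, payload)
    node.max_high = max(node.max_high, high)
    return node

def query(node, low, high, out):
    # Append the payload of every stored interval that overlaps [low, high].
    if node is None:
        return
    # Prune the left subtree when nothing in it reaches up to the query.
    if node.left is not None and node.left.max_high >= low:
        query(node.left, low, high, out)
    if node.low <= high and node.high >= low:
        out.append(node.payload)
    # Right-subtree keys are >= node.low; prune once they start past the query.
    if node.low <= high:
        query(node.right, low, high, out)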


Another paradigm in algorithm design is divide & conquer: with a computational geometry problem, choose one dimension in which the problem can be divided into independent parts, and a coordinate such that the sub-problems for "coordinates below" and "coordinates above" are close in expected run time. Quite possibly, another (and different) sub-problem "including the coordinate" needs to be solved. This tends to be beneficial when a) the run time for solving sub-problems is "super-log-linear", and b) there is a cheap (linear) way to construct the overall solution from the solutions of the sub-problems.
This approach lends itself to concurrent problem solving, and can be combined with any other approach for the sub-problems, including line sweep. A sketch follows.
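To illustrate the paradigm on this problem (a sketch, not a tuned implementation; it assumes the overlap() helper and rectangle attributes from earlier, and the function names are made up): split on a vertical line x = x0, recurse on the rectangles lying entirely to either side, and handle the rectangles crossing x0 directly. All rectangles crossing x0 share that x-coordinate, so testing against them is essentially the 1D interval-overlap problem in y; brute force stands in for that here:

def dc_overlaps(A, B, report):
    # Base case: brute-force small instances (this also catches empty lists).
    if len(A) * len(B) <= 64:
        for a in A:
            for b in B:
                if overlap(a, b):
                    report(a, b)
        return
    # Dividing coordinate: median of all left edges (an arbitrary choice).
    xs = sorted(r.left for r in A + B)
    x0 = xs[len(xs) // 2]
    leftA, rightA, midA = partition(A, x0)
    leftB, rightB, midB = partition(B, x0)
    # Every pair with at least one rectangle crossing x0 is reported here.
    for a in midA:
        for b in B:
            if overlap(a, b):
                report(a, b)
    for b in midB:
        for a in leftA + rightA:
            if overlap(a, b):
                report(a, b)
    # All remaining pairs lie entirely on one side: independent sub-problems.
    dc_overlaps(leftA, leftB, report)
    dc_overlaps(rightA, rightB, report)

def partition(rects, x0):
    # Split into: entirely left of x0, entirely right of x0, crossing x0.
    left = [r for r in rects if r.right < x0]
    right = [r for r in rects if r.left > x0]
    mid = [r for r in rects if r.left <= x0 <= r.right]
    return left, right, mid

# Usage: collect the overlapping pairs into a list.
found = []
dc_overlaps(listA, listB, lambda a, b: found.append((a, b)))

The two independent recursive calls at the end are what make this amenable to concurrent execution.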


There will be many ways to tweak each approach, starting with disregarding input items that can't possibly contribute to the output. To "fairly" compare implementations of algorithms of like order of growth, don't aim for an equal "level of tweakedness": try to invest equal amounts of time in tweaking each.