What is the most efficient way of finding all the factors of a number in Python?

from functools import reduce

def factors(n):
    # Pair each divisor i <= sqrt(n) with its cofactor n // i, flatten the
    # pairs into one list, and let set() drop the duplicate for perfect squares.
    return set(reduce(list.__add__,
                ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0)))

This will return all of the factors of a number n very quickly.
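
For example, a quick check (run under Python 3; a set prints in arbitrary order):

print(factors(36))
# {1, 2, 3, 4, 6, 9, 12, 18, 36}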

Why square root as the upper limit?

sqrt(x) * sqrt(x) = x. So if the two factors are the same, they're both the square root. If you make one factor bigger, you have to make the other factor smaller. This means that one of the two will always be less than or equal to sqrt(x), so you only have to search up to that point to find one of the two matching factors. You can then use x / fac1 to get fac2.
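
As a standalone illustration of that pairing (not part of the answer's code), every divisor found at or below the square root yields its partner by a single division:

n = 36
for fac1 in range(1, int(n**0.5) + 1):
    if n % fac1 == 0:
        fac2 = n // fac1          # the matching larger factor
        print(fac1, fac2)
# 1 36
# 2 18
# 3 12
# 4 9
# 6 6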

The reduce(list.__add__, ...) is taking the little lists of [fac1, fac2] and joining them together in one long list.
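
To see that flattening step in isolation (a standalone snippet, using the factor pairs of 36):

from functools import reduce

pairs = [[1, 36], [2, 18], [3, 12], [4, 9], [6, 6]]
print(reduce(list.__add__, pairs))
# [1, 36, 2, 18, 3, 12, 4, 9, 6, 6]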

The [i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0 returns a pair of factors if the remainder when you divide n by the smaller one is zero (it doesn't need to check the larger one too; it just gets that by dividing n by the smaller one).

The set(...) on the outside is getting rid of duplicates, which only happen for perfect squares. For n = 4, this will return 2 twice, so set gets rid of one of them.
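
Tracing the whole pipeline for the perfect square n = 4 makes that deduplication visible (a standalone illustration):

from functools import reduce

n = 4
pairs = [[i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0]
print(pairs)                 # [[1, 4], [2, 2]]
flat = reduce(list.__add__, pairs)
print(flat)                  # [1, 4, 2, 2]
print(set(flat))             # {1, 2, 4} (set order may vary)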


The solution presented by @agf is great, but one can achieve ~50% faster run time for an arbitrary odd number by checking for parity. Since the factors of an odd number are always odd themselves, there is no need to test even candidates when n is odd.

I've just started solving Project Euler puzzles myself. In some problems, a divisor check is called inside two nested for loops, and the performance of this function is thus essential.

Combining this fact with agf's excellent solution, I've ended up with this function:

from functools import reduce
from math import sqrt
def factors(n):
    # Odd numbers have only odd factors, so for odd n skip even candidates.
    step = 2 if n % 2 else 1
    return set(reduce(list.__add__,
                ([i, n//i] for i in range(1, int(sqrt(n)) + 1, step) if n % i == 0)))
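
A quick sanity check that the parity shortcut returns the same results as the original for both odd and even inputs:

assert factors(21) == {1, 3, 7, 21}                    # odd n: only odd i tested
assert factors(36) == {1, 2, 3, 4, 6, 9, 12, 18, 36}  # even n: behaviour unchanged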

However, on small numbers (roughly n < 100), the extra overhead of this alteration may make the function take longer.

I ran some tests to check the speed. Below is the code used. To produce the different plots, I altered the assignment X = range(1,100,1) accordingly.

import timeit
from functools import reduce
from math import sqrt
from matplotlib.pyplot import plot, legend, show

def factors_1(n):
    step = 2 if n%2 else 1
    return set(reduce(list.__add__,
                ([i, n//i] for i in range(1, int(sqrt(n))+1, step) if n % i == 0)))

def factors_2(n):
    return set(reduce(list.__add__,
                ([i, n//i] for i in range(1, int(sqrt(n)) + 1) if n % i == 0)))

X = range(1,100000,1000)
Y = []
for i in X:
    f_1 = timeit.timeit('factors_1({})'.format(i), setup='from __main__ import factors_1', number=10000)
    f_2 = timeit.timeit('factors_2({})'.format(i), setup='from __main__ import factors_2', number=10000)
    Y.append(f_1/f_2)
plot(X,Y, label='Running time with/without parity check')
legend()
show()

X = range(1,100,1)

No significant difference here, but with bigger numbers, the advantage is obvious:

X = range(1,100000,1000) (only odd numbers)

X = range(2,100000,100) (only even numbers)

X = range(1,100000,1001) (alternating parity)