Python Efficiency / Optimization Project Euler #5 example

I see that although a faster solution has been posted, no one has actually answered the question. It's a rather difficult one to answer, in fact! The fundamental explanation is that function calls are relatively expensive. In order to make this conclusion persuasive, though, I'll have to dig fairly deeply into Python internals. Prepare yourself!

First of all, I'm going to disassemble (your third version of) ProjectEulerFive and find_solution from the "optimized" solution, using dis.dis. There's a lot here, but a quick scan is all that's required to confirm that your code calls no functions at all:

>>> dis.dis(ProjectEulerFive)
  2           0 LOAD_FAST                0 (m)
              3 STORE_FAST               1 (a)

  3           6 LOAD_CONST               1 (11)
              9 STORE_FAST               2 (start)

  4          12 LOAD_FAST                2 (start)
             15 STORE_FAST               3 (b)

  5          18 SETUP_LOOP              64 (to 85)
        >>   21 LOAD_FAST                3 (b)
             24 LOAD_FAST                0 (m)
             27 COMPARE_OP               0 (<)
             30 POP_JUMP_IF_FALSE       84

  6          33 LOAD_FAST                1 (a)
             36 LOAD_FAST                3 (b)
             39 BINARY_MODULO       
             40 LOAD_CONST               2 (0)
             43 COMPARE_OP               3 (!=)
             46 POP_JUMP_IF_FALSE       71

  7          49 LOAD_FAST                1 (a)
             52 LOAD_FAST                0 (m)
             55 INPLACE_ADD         
             56 STORE_FAST               1 (a)

  8          59 LOAD_FAST                2 (start)
             62 STORE_FAST               3 (b)

  9          65 JUMP_ABSOLUTE           21
             68 JUMP_ABSOLUTE           21

 11     >>   71 LOAD_FAST                3 (b)
             74 LOAD_CONST               3 (1)
             77 INPLACE_ADD         
             78 STORE_FAST               3 (b)
             81 JUMP_ABSOLUTE           21
        >>   84 POP_BLOCK           

 12     >>   85 LOAD_FAST                1 (a)
             88 RETURN_VALUE        
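
For reference, that bytecode corresponds to source along these lines. This is a reconstruction from the disassembly, not necessarily the poster's exact code, and it's written in Python 3 syntax (`print` as a function):

```python
def ProjectEulerFive(m):
    # a walks through multiples of m; b walks through the trial divisors.
    a = m
    start = 11            # divisibility by 1..10 is implied by 11..20
    b = start
    while b < m:
        if a % b != 0:
            # a failed a divisibility test: move to the next multiple
            # of m and restart the divisor scan.
            a += m
            b = start
            continue
        b += 1
    return a

answer = ProjectEulerFive(20)
print(answer)  # 232792560
```

Note that no names are looked up globally and no functions are called anywhere in the loop body, which matches the disassembly above.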

Now let's look at find_solution:

>>> dis.dis(find_solution)
  2           0 SETUP_LOOP              58 (to 61)
              3 LOAD_GLOBAL              0 (xrange)
              6 LOAD_FAST                0 (step)
              9 LOAD_CONST               1 (999999999)
             12 LOAD_FAST                0 (step)
             15 CALL_FUNCTION            3
             18 GET_ITER            
        >>   19 FOR_ITER                38 (to 60)
             22 STORE_DEREF              0 (num)

  3          25 LOAD_GLOBAL              1 (all)
             28 LOAD_CLOSURE             0 (num)
             31 BUILD_TUPLE              1
             34 LOAD_CONST               2 (<code object <genexpr> at 
                                            0x10027eeb0, file "<stdin>", 
                                            line 3>)
             37 MAKE_CLOSURE             0
             40 LOAD_GLOBAL              2 (check_list)
             43 GET_ITER            
             44 CALL_FUNCTION            1
             47 CALL_FUNCTION            1
             50 POP_JUMP_IF_FALSE       19

  4          53 LOAD_DEREF               0 (num)
             56 RETURN_VALUE        
             57 JUMP_ABSOLUTE           19
        >>   60 POP_BLOCK           

  5     >>   61 LOAD_CONST               0 (None)
             64 RETURN_VALUE        

Immediately it becomes clear that (a) this code is much less complex, but (b) it also calls three different functions. The first is a single call to xrange, made once before the loop begins; the other two calls happen inside the outermost for loop. The first of those is the call to all; the second, I suspect, is the generator expression's next method being called. But it doesn't really matter what the functions are; what matters is that they are called inside the loop.

Now you might think, "What's the big deal? It's just a function call; a few nanoseconds here or there, right?" But in fact, those nanoseconds add up. Since the outermost loop proceeds through a total of 232792560 / 20 == 11639628 iterations, any overhead gets multiplied by more than eleven million. A quick timing using the %timeit command in IPython shows that a function call, all by itself, costs about 120 nanoseconds on my machine:

>>> def no_call():
...     pass
... 
>>> def calling():
...     no_call()
...     
>>> %timeit no_call()
10000000 loops, best of 3: 107 ns per loop
>>> %timeit calling()
1000000 loops, best of 3: 235 ns per loop

So every function call that appears inside the loop costs an extra 120 nanoseconds * 11000000 = 1.32 seconds. And if I'm right that the second function call is a call to the generator expression's next method, then that function is called even more times: once for every iteration through the generator expression, probably 3-4 times per outer iteration on average.
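
Outside of IPython, the same measurement can be made with the standard timeit module. The absolute numbers will vary by machine, but the gap between the two functions approximates the raw cost of a single call:

```python
import timeit

def no_call():
    pass

def calling():
    no_call()

N = 1_000_000
t_no = timeit.timeit(no_call, number=N) / N      # empty function body
t_call = timeit.timeit(calling, number=N) / N    # one extra call per run
print("per-call overhead: about {:.0f} ns".format((t_call - t_no) * 1e9))
```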

Now to test this hypothesis. If function calls are the problem, then eliminating function calls is the solution. Let's see...

def find_solution(step):
    for num in xrange(step, 999999999, step):
        for n in check_list:
            if num % n != 0:
                break
        else:
            return num
    return None

Here's a version of find_solution that does almost exactly what the original does, using for/else syntax instead of all plus a generator expression. The only function call left is the outer one, to xrange, which shouldn't cause any problems. Now, when I timed the original version, it took 11 seconds:

found an answer: 232792560
took 11.2349967957 seconds

Let's see what this new, improved version manages:

found an answer: 232792560
took 2.12648200989 seconds

That's a hair faster than the performance of your fastest version of ProjectEulerFive on my machine:

232792560
took 2.40848493576 seconds

And everything makes sense again.


This ought to take almost no time:

def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)

def lcm(a, b):
    return abs(a * b) // gcd(a, b)

def euler5(n):
    if n == 1:
        return 1
    return lcm(n, euler5(n - 1))

print euler5(20)
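
The same idea, folding lcm over 1..20, fits in a couple of lines of stock Python 3, where math.gcd is built in:

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def euler5(n):
    # Fold lcm over 1..n; lcm is associative, so the order doesn't matter.
    return reduce(lcm, range(1, n + 1))

print(euler5(20))  # 232792560
```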

Not an answer to your question (hence the community wiki), but here is a useful decorator for timing functions:

from functools import wraps
import time

def print_time(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        t0 = time.time()
        result = f(*args, **kwargs)
        print "{0} took {1}s".format(f.__name__, time.time() - t0)
        return result
    return wrapper

Usage is as follows:

@print_time
def foo(x, y):
    time.sleep(1)
    return x + y

And in practice:

>>> foo(1, 2)
foo took 1.0s
3
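
A Python 3 version of the same decorator only needs two changes: print becomes a function, and time.time is better replaced by the monotonic, higher-resolution time.perf_counter:

```python
from functools import wraps
import time

def print_time(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()    # monotonic clock, immune to system time changes
        result = f(*args, **kwargs)
        print("{0} took {1:.4f}s".format(f.__name__, time.perf_counter() - t0))
        return result
    return wrapper

@print_time
def foo(x, y):
    time.sleep(0.1)
    return x + y
```

Thanks to functools.wraps, the decorated function keeps its original __name__ and docstring.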