strange numpy fft performance

I think that padding an array to a power-of-2 length sometimes has disadvantages:

  1. If you truncate the array to a power-of-2 length, you lose a lot of data for large arrays
  2. If you pad the array with zeros, it produces the so-called "edge effect"

I have found in this topic that FFT performance depends on the prime factorization of the array size: if the array length is a prime number, the calculation takes much longer. So I propose the following code, which decreases the array length until it finds one with a good factorization.

import numpy as np
from sympy.ntheory import factorint

FACTOR_LIMIT = 100

def bestFFTlength(n):
    # shrink n until its largest prime factor is below FACTOR_LIMIT
    while max(factorint(n)) >= FACTOR_LIMIT:
        n -= 1
    return n

a = np.zeros(166400)
audio_fft = np.fft.fft(a, bestFFTlength(len(a)))

Divide-and-conquer FFT algorithms, such as Cooley-Tukey, work much better the more factors the input length has. Powers of 2 work especially well, whereas primes (like 165037) require alternate, slower implementations. If you can pad your input to a power-of-2 length, you may be able to drastically speed up slow FFTs.
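
For illustration, here is a rough timing comparison (a minimal sketch: absolute timings are machine-dependent, and the prime length 165037 is taken from the example above):

import numpy as np
from timeit import timeit

x = np.random.rand(165037)  # prime length: worst case for Cooley-Tukey

# transform at the prime length vs. zero-padded to the next power of 2
t_prime = timeit(lambda: np.fft.fft(x), number=10)
t_pow2 = timeit(lambda: np.fft.fft(x, 2**18), number=10)  # 2**18 = 262144 >= 165037

print(f"prime length:   {t_prime:.3f} s")
print(f"power-of-2 pad: {t_pow2:.3f} s")

On most machines the padded transform should come out substantially faster, even though it computes more output points.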


A simple way to solve this - one that isn't mentioned in the other answers - is to use scipy.fftpack.next_fast_len to pad the array. Given a target length, it returns the smallest length >= target that is a composite of 2, 3, and 5.
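
For example (a minimal sketch; the input length here is arbitrary):

import numpy as np
from scipy.fftpack import next_fast_len

a = np.random.rand(165037)  # awkward (prime) length
n = next_fast_len(len(a))   # smallest composite of 2, 3 and 5 that is >= len(a)

# np.fft.fft zero-pads the input up to the fast length n
audio_fft = np.fft.fft(a, n)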

As the other answers have pointed out, FFT performs worst when the array's length is a prime number. Its efficiency improves when the length factors into small primes (2, 3, 5, etc.).

Tags: python, numpy, fft