Performing FFT at low frequencies but high resolution?

I assume for "high speed" you mean a small delay from data collection to the resultant FFT. With a low sample rate, your computational ability isn't the limiting factor, given modern computers. The delay problem lies in having enough data for analysis. If you want your 1Hz bin to be different from DC/0Hz, you have to accumulate enough signal data to capture a full cycle of that signal. This is why, for a fixed sample rate, a longer FFT gives you a higher frequency resolution.
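
To make that relationship concrete: the bin spacing is simply the sample rate divided by the FFT length. A quick check in Python/NumPy (a sketch; the 128Hz rate is the one from the question):

    import numpy as np

    fs = 128  # sample rate in Hz (from the question)
    for n in (128, 256, 512):
        bins = np.fft.rfftfreq(n, d=1/fs)  # center frequency of each FFT bin
        print(n, "points ->", bins[1], "Hz per bin")  # 1.0, 0.5, 0.25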

Thus, for very low frequencies, your low sample rate (128Hz) means that relatively few points are needed to resolve them: a 128-point FFT will have 1Hz resolution and a 256-point FFT will have 0.5Hz resolution. The problem lies in getting that data: 256 points takes a whole 2 seconds to accumulate at a 128Hz sample rate. For a faster FFT update rate, you can re-use samples: treat, say, every 32 samples as a data block, then compute a 256-point FFT over the most recent 8 blocks. Each time 32 new samples arrive, throw out the oldest block, and the FFT updates 4 times per second.
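
A sketch of that block-reuse scheme in Python/NumPy (the 32-sample block and 256-point FFT are the numbers from above; on_new_block() is a hypothetical callback for whatever delivers your samples):

    import numpy as np
    from collections import deque

    FS = 128      # sample rate in Hz
    BLOCK = 32    # samples per data block
    NFFT = 256    # FFT length -> 0.5Hz bins at 128Hz

    blocks = deque(maxlen=NFFT // BLOCK)  # keeps the most recent 8 blocks

    def on_new_block(block):
        # hypothetical callback: invoked whenever 32 fresh samples arrive
        blocks.append(block)
        if len(blocks) == blocks.maxlen:  # full 256-sample window available
            window = np.concatenate(blocks)
            return np.abs(np.fft.rfft(window))  # magnitude spectrum, 0.5Hz bins
        return None  # still filling the window

Because the deque has a maxlen, appending a new block automatically drops the oldest one, which is exactly the "throw out the oldest" step.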

Essentially, you have encountered the trade-offs required in creating a spectrogram: you have to choose between frequency resolution and time locality. More frequency resolution requires more samples, making your FFT represent a longer span of time; using a shorter span of time means fewer samples, and thus coarser frequency resolution. You'll have to choose which is more important in your application.
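
You can see the trade-off directly with scipy's spectrogram (a sketch; the two-tone test signal is made up for illustration):

    import numpy as np
    from scipy.signal import spectrogram

    fs = 128
    t = np.arange(0, 20, 1/fs)
    x = np.sin(2*np.pi*2.0*t) + np.sin(2*np.pi*2.5*t)  # tones 0.5Hz apart

    for nperseg in (128, 256):
        f, times, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
        # longer windows: finer frequency bins, but fewer time slices
        print(nperseg, "->", f[1] - f[0], "Hz bins,", len(times), "time slices")

Only the 256-point windows put the two tones in separate bins, but they give you about half as many looks at the signal over time.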


One usually needs to acquire multiple samples per waveform period to get good results from an FFT. The Nyquist limit of 2 samples per period is a lower bound, but in practice 10 samples per period or more is typical. So to analyze a 64Hz signal you probably want to acquire samples at 640Hz or more.

Also, up to a point, you will get better results when measuring actual periodic signals if you acquire multiple waveform periods' worth of samples. You will need to determine what window size makes the most sense for your application, but to capture 1Hz signals I would suggest capturing somewhere around 10s worth of data.

So basically you need to acquire samples at a high rate relative to your highest frequency and for a long time relative to your lowest frequency to get good results. This mandates that there will be some sort of processing delay that will be a multiple of the period of your lowest frequency. This does not however prevent you from performing that processing as often as once every sample time.
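
Putting those rules of thumb into numbers (a sketch; the factor-of-10 heuristics come from the paragraphs above, and the power-of-two rounding is an assumption of mine, though it reproduces the 819.2Hz figure used in the example below):

    import math

    def suggest_params(f_min, f_max):
        fs_min = 10 * f_max    # ~10 samples per period of the highest frequency
        duration = 10 / f_min  # ~10 periods of the lowest frequency
        # round the FFT length up to a power of two, then back out the sample rate
        n = 2 ** math.ceil(math.log2(fs_min * duration))
        return n / duration, duration, n

    fs, duration, n = suggest_params(f_min=1.0, f_max=64.0)
    print(fs, duration, n)  # 819.2 Hz, 10 s window, 8192-point FFT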

So if you want to analyze the frequency components of a signal as it changes over time, and you want to see the FFT at a high rate, you can take in however many samples you need, run the FFT, shift all the samples over one position, and then run the FFT again at the next sample time.

EXAMPLE:
1) Sample at 819.2 samples per second with a time window of 10 s.
2) Let samples accumulate for 10 s (for a total of 8192 samples).
3) Run the FFT on the 8192 samples.
4) Discard the oldest sample in the buffer and shift the other 8191 samples over one position.
5) 1/819.2 seconds later, add the next sample to the end of the buffer and re-run the FFT.
6) Repeat steps 4 and 5 until you have completed your analysis.

This would give you an FFT that analyzes a sliding window of data 819.2 times a second.
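
A minimal sketch of that loop in Python/NumPy (next_sample() is a hypothetical stand-in for your real acquisition API):

    import numpy as np

    FS = 819.2  # samples per second (from the example)
    N = 8192    # 10s window

    def next_sample():
        return np.random.randn()  # hypothetical: replace with real acquisition

    buf = np.zeros(N)
    for i in range(N):  # steps 1-2: accumulate 10s of samples
        buf[i] = next_sample()

    for _ in range(1000):  # steps 3-6: re-run the FFT at every sample time
        spectrum = np.abs(np.fft.rfft(buf))  # FFT of the current 8192-sample window
        buf[:-1] = buf[1:]       # step 4: discard the oldest sample and shift
        buf[-1] = next_sample()  # step 5: append the newest sample

In a real implementation you would use a ring buffer rather than shifting 8191 samples every time, but the shift mirrors the steps as written.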

The processing power needed for the example would be approximately 13*8192*819.2 multiply-accumulate operations per second (about 87 million MACs/s), where 13 is log2(8192). An ordinary PC could easily handle this. You can of course reduce the processing power by a factor of N by only running the FFT every N samples (for example, running it every 8 samples only requires about 11M MACs per second).
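
Where those figures come from, using an N*log2(N) cost model for a radix-2 FFT (a rough estimate, like the one above; real FFT libraries vary):

    import math

    N, rate = 8192, 819.2
    macs_per_fft = N * math.log2(N)       # 13 * 8192 = 106496 MACs per FFT
    print(macs_per_fft * rate / 1e6)      # ~87.2 MMACs/s running every sample
    print(macs_per_fft * rate / 8 / 1e6)  # ~10.9 MMACs/s running every 8 samples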


Modern DSP chips can easily handle the function you describe; if you only sample 128 times per second, you could easily do an FFT on every sample and shift one sample per FFT.