Trying to implement a digital IIR filter: why is this happening?

  1. It looks like you already found the first bug (https://arm-software.github.io/CMSIS_5/DSP/html/arm__biquad__cascade__df1__f32_8c.html): the function needs float arrays, not int arrays.
  2. By fixing this you just moved the problem to a different spot. You will get integers from your DMA, so somewhere you need to do INT -> FLOAT -> INT conversions (see the sketch after this list).
  3. Your block size is very small, so your interrupt rate is very high. Depending on how much interrupt overhead you have, the processor may not be able to keep up.
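
A minimal sketch of how those pieces fit together in one processing block. The buffer names, block size, and coefficients below are hypothetical placeholders, not the actual project code; only the CMSIS-DSP function names and signatures come from the library:

    /* Sketch only: adc_raw, dac_raw, BLOCK_SIZE and the coefficients are placeholders. */
    #include "arm_math.h"

    #define BLOCK_SIZE  32                    /* samples per DMA block (hypothetical) */
    #define NUM_STAGES  1                     /* biquad sections in the cascade       */

    static int32_t   adc_raw[BLOCK_SIZE];     /* what the ADC DMA actually delivers   */
    static float32_t filt_in[BLOCK_SIZE];     /* float buffers the DSP function needs */
    static float32_t filt_out[BLOCK_SIZE];
    static int32_t   dac_raw[BLOCK_SIZE];     /* what the timer DMA sends back out    */

    /* Placeholder coefficients (b0 b1 b2 a1 a2 per stage): unity passthrough */
    static float32_t coeffs[5 * NUM_STAGES] = { 1.0f, 0.0f, 0.0f, 0.0f, 0.0f };
    static float32_t state[4 * NUM_STAGES];
    static arm_biquad_casd_df1_inst_f32 iir;

    void filter_init(void)
    {
        arm_biquad_cascade_df1_init_f32(&iir, NUM_STAGES, coeffs, state);
    }

    void process_block(void)                  /* e.g. from the DMA complete interrupt */
    {
        for (uint32_t i = 0; i < BLOCK_SIZE; i++)     /* INT -> FLOAT */
            filt_in[i] = (float32_t)adc_raw[i];

        arm_biquad_cascade_df1_f32(&iir, filt_in, filt_out, BLOCK_SIZE);

        for (uint32_t i = 0; i < BLOCK_SIZE; i++)     /* FLOAT -> INT */
            dac_raw[i] = (int32_t)filt_out[i];
    }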

In general it's useful to debug this in separate steps.

  1. Confirm you can write an "output equals input" passthrough program. Make sure there are no dropouts or framing issues and that the HW is properly initialized & configured. This is also useful for benchmarking your baseline CPU load.
  2. Do something very simple and well understood, like "scale by half". Do this first in the "native" ADC and DAC data formats and then in the data type you want to do your actual processing in (a sketch follows after this list).
  3. Now insert the desired processing. Verify with a few cases where the output is known. If the actual processing is fairly complicated, verify the code of the processing function FIRST in an off-line test rig with known test vectors and result vectors before dropping it into a real-time application. Measure your CPU load.
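
For step 2, a minimal sketch of "scale by half" done both ways; the buffer names and block size are hypothetical:

    #include <stdint.h>

    #define BLOCK_SIZE 32                      /* samples per DMA block (hypothetical) */

    /* Variant A: stay in the native ADC/DAC integer format. */
    void scale_half_native(const int32_t *in, int32_t *out)
    {
        for (uint32_t i = 0; i < BLOCK_SIZE; i++)
            out[i] = in[i] / 2;
    }

    /* Variant B: the same operation through the float path the real filter will use. */
    void scale_half_float(const int32_t *in, int32_t *out)
    {
        for (uint32_t i = 0; i < BLOCK_SIZE; i++) {
            float x = (float)in[i];            /* INT -> FLOAT     */
            x *= 0.5f;                         /* the "processing" */
            out[i] = (int32_t)x;               /* FLOAT -> INT     */
        }
    }

If both variants come out clean at the DAC, the int/float data path and the DMA framing are fine, and anything still wrong is in the filter itself.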

Okay, so I actually and finally figured it out.

The issue was that the DMA from the ADC sends the data out as an INT, while the DSP function requires a float, and the timer DMA then wants an INT to be sent back out.

The way I figured it out was:

  1. Double-checked what Hilmar said in the post above.
  2. Wrote a simple 'for loop' that converted INT -> FLOAT and FLOAT -> INT:

    // INT -> FLOAT before the DSP call
    for (int i = 0; i < 2; i++) { ADC_Value_f[i] = (float)ADC_Value[i]; }

    // ... the DSP function runs on the float buffers here ...

    // FLOAT -> INT before the timer DMA sends it back out
    for (int i = 2; i < 4; i++) { ADC_Value_Output[i] = (int)ADC_Value_Output_f[i]; }

and it worked!
