Hi all!
I’ve been working with juce::dsp::FFT recently and I’ve run into what may be a bug. At first I attributed it to floating-point rounding error, but I’m starting to think something more fundamental is going on.
Behavior:
When passed a fully zeroed buffer of frequency-domain data, the JUCE FFT returns time-domain data containing a small but (I think) non-negligible Nyquist-frequency signal.
Example:
All values were read during an LLDB debugging session. These readings are from the same buffer, before and after a single call to performRealOnlyInverseTransform(). The buffer size is 2048 floats.
// buffer: pre-transform
(float) [0] = 0
(float) [1] = 0
(float) [2] = 0
(float) [3] = 0
...
// buffer: post-transform
(float) [0] = 0.000221714377
(float) [1] = -0.000221714377
(float) [2] = 0.000221714377
(float) [3] = -0.000221714377
...
// this pattern repeats to the end of the buffer
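For reference, here’s roughly how I’m exercising the transform. This is a minimal sketch rather than my exact code, and it assumes an FFT order of 10, so that getSize() is 1024 and the buffer passed to performRealOnlyInverseTransform() holds 2 * 1024 = 2048 floats (the include will depend on your project setup):

// Minimal sketch: inverse-transform a fully zeroed buffer and inspect the output.
#include <juce_dsp/juce_dsp.h>   // or <JuceHeader.h>, depending on the project setup
#include <iostream>
#include <vector>

int main()
{
    constexpr int order = 10;                          // assumed: getSize() == 1024
    juce::dsp::FFT fft (order);

    // performRealOnlyInverseTransform() works in place on 2 * getSize() floats.
    std::vector<float> buffer ((size_t) fft.getSize() * 2, 0.0f);

    fft.performRealOnlyInverseTransform (buffer.data());

    // Print the first few time-domain samples; I would expect all zeros here.
    for (int i = 0; i < 8; ++i)
        std::cout << "[" << i << "] = " << buffer[(size_t) i] << '\n';

    return 0;
}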
The value in this repeating Nyquist signal (i.e. its amplitude) seems to be different every time I run this, but the pattern is always the same: one number, alternating positive and negative every sample. Is this small enough to chalk up to floating-point rounding error? I’m not familiar enough with the vDSP implementation of the FFT algorithm to know where such errors could crop up.
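For what it’s worth, the reason I’m calling it a Nyquist signal: a sequence that alternates +A, -A every sample is a cosine at half the sample rate, which is exactly what you get when the only non-zero bin in the spectrum is the Nyquist bin (the inverse DFT of X[N/2] = c, all other bins zero, is x[n] = (c/N)(-1)^n, up to scaling convention). One way I can double-check that the residue is confined to that bin is to forward-transform the post-inverse buffer and see where the energy lands. Again a hedged sketch with the same assumed order of 10; it also assumes the real-only forward transform packs its output as interleaved (re, im) pairs:

// Sketch: check whether the inverse-transform residue is pure Nyquist energy.
#include <juce_dsp/juce_dsp.h>   // or <JuceHeader.h>, depending on the project setup
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    constexpr int order = 10;                          // assumed order, as above
    juce::dsp::FFT fft (order);
    const auto fftSize = (size_t) fft.getSize();

    std::vector<float> buffer (2 * fftSize, 0.0f);
    fft.performRealOnlyInverseTransform (buffer.data());   // produces the residue in question

    // Forward-transform the result and find the bin with the most energy.
    // Assumption: output bins are interleaved (re, im) pairs, bin k at 2k and 2k + 1.
    fft.performRealOnlyForwardTransform (buffer.data());

    int loudestBin = 0;
    float loudestMag = 0.0f;

    for (int k = 0; k <= (int) fftSize / 2; ++k)
    {
        const float mag = std::hypot (buffer[(size_t) (2 * k)], buffer[(size_t) (2 * k + 1)]);
        if (mag > loudestMag) { loudestMag = mag; loudestBin = k; }
    }

    std::cout << "loudest bin: " << loudestBin << " (Nyquist is bin " << fftSize / 2 << ")\n";
    return 0;
}

If that consistently reports the Nyquist bin, then at least the shape of the residue makes sense; the open question is where the non-zero value comes from in the first place.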
This could also easily be a simple oversight on my part, hence my calling this only a possible bug report. Thank you in advance for any help you can provide!