If I run an FFT on a single cycle waveform to obtain magnitudes, shouldn’t I get the original waveform back if I run the inverse fft?
float fftBuffer[4096];
zeromem(fftBuffer, sizeof(fftBuffer)); // zeromem takes a size in bytes
loadWaveform(fftBuffer); // waveform is 2048 samples
// transform into frequency domain
fft.performRealOnlyForwardTransform(fftBuffer, false);
// now go back to time domain
fft.performRealOnlyInverseTransform(fftBuffer);
What I’m getting is a 512-sample waveform that kind of looks like part of my original, and the rest is zeros.
To reverse the FFT and recover the original waveform, you’ll need to retain the complex values (real + imaginary), not just the real. Instead of performRealOnlyForwardTransform and performRealOnlyInverseTransform, use the “perform” method.
The forward FFT will also apply a gain factor of the length of the FFT (4096 in your case), so when you do the reverse FFT you should scale the reverse FFT output by 1.0 / 4096.0.
You may also want to apply a windowing function to the data before the forward FFT (see dsp::WindowingFunction).
I beg to differ: if the initial data is real, which is true for audio signals, you can call performRealOnlyForwardTransform, and on that now-complex data you can call performRealOnlyInverseTransform to get your initial data back.
The realOnly transforms exploit a special property of the FFT when the input data is real:
the complex coefficients of the negative frequencies (the bins above the Nyquist frequency) are the complex conjugates of their positive counterparts. That data is redundant and therefore not needed for the inverse transform.
I guess the problem here is just a wrong FFT size. But we will see once @pizzafilms tells us.
Edit: Just noticed:
As his data is 2048 samples, I assume the FFT size is also 2048, and if I am not mistaken, the scaling factor is also 2048. The 4096 is because complex data needs twice the amount of memory.
Sorry about the delay getting back here…that damn day gig!
As it turns out, my FFT size was okay, I stupidly forgot that memcpy() likes total number of bytes, not number of floats…duh. Once I corrected that, all is well.
But while I have you guys here, I’ve got a few more related questions…
The reconstructed wave looks nearly identical to the original, but I noticed that the right edge (the last few samples) isn’t exact…I’m wondering if a windowing function would help there. For this use, what would be the best window function?
Interesting, the levels look nearly exact. I don’t see why I would need to scale the values by 1.0 / 4096.0.
Also, could you explain the values in the bins after the first performRealOnlyForwardTransform()? Where is level and where is phase? And by ‘level’, I believe they’re actually magnitude? What’s the correct way to convert those to either dB or linear values?
Interesting, the levels look nearly exact. I don’t see why I would need to scale the values
Oh - looks like JUCE is doing this for you on the inverse transform - at least using the fallback engine. See the “perform” method, juce_FFT.cpp, line 101.
In my experience, there is no “best” windowing function - each function has pros and cons.
Also, could you explain the values in the bins after the first performRealOnlyForwardTransform()? Where is level and where is phase? And by ‘level’, I believe they’re actually magnitude? What’s the correct way to convert those to either dB or linear values?
Each bin is a complex value, so each bin is actually two numbers: the real part and the imaginary part. Think of these as x and y values on a two-dimensional plane: the real part is the x and the imaginary part is the y.
Now picture that point on the two-dimensional graph in polar coordinates - so instead of x and y you need a distance from the origin and an angle. The distance from the origin is the magnitude and the angle is the phase.
It’s easier if you use the new std::complex data type - then you can just use the built in functions. See FFT::performFrequencyOnlyForwardTransform (juce_FFT.cpp, line 835). Note the cast to a std::complex pointer and the use of std::abs to get the linear magnitude.
run an FFT to deconstruct the wave into arrays of level and phase
manipulate those levels and phases in normalized space
then recreate a waveform
Thanks to your help, I have most of this working.
const int fftOrder = 11;
const int fftSize = 1 << fftOrder;      // 2048-point FFT
const int waveSize = fftSize;           // the waveform fills one FFT frame
const int fftBufferSize = fftSize * 2;  // complex data needs twice the memory
const int numBins = fftSize / 2;        // unique positive-frequency bins
float fftBuffer[fftBufferSize];
float levels[numBins];
float phases[numBins];
auto fft = std::make_unique<dsp::FFT> (fftOrder);
bufferClear (fftBuffer, fftBufferSize);
loadWaveIntoBuffer();
fft->performRealOnlyForwardTransform (fftBuffer, true);
// view the buffer as complex bins, then zero out DC (bin 0) and Nyquist (bin fftSize / 2)
auto* complexBins = reinterpret_cast<std::complex<float>*> (fftBuffer);
complexBins[0] = 0.0f;
complexBins[fftSize / 2] = 0.0f;
// convert the complex bins to magnitude/phase and find the max magnitude
float maxValue = -1.0f;
for (int i = 1; i < numBins; ++i)
{
    levels[i] = std::abs (complexBins[i]);
    phases[i] = std::arg (complexBins[i]);
    maxValue = jmax (levels[i], maxValue);
}
// normalize levels (start at 1: bin 0 was zeroed above)
for (int i = 1; i < numBins; ++i)
{
    levels[i] /= maxValue;
    // anything required to normalize phase values?
}
// edit levels and phases using normalized values......................
// reconstruct complex numbers from levels and phases and put back into fftBuffer
??????
// run the inverse FFT to create the new waveform
fft->performRealOnlyInverseTransform (fftBuffer);
So, given that, my three questions are:
Do I need to do anything to the phase values to normalize them after getting them from std::arg(realBuffer[i]); ?
How do I reconstruct complex numbers from the normalized levels and phases?