That's going to be 2^10 = 1024 bins from 2048 input samples.
Now create float x[2048], give it a waveform -- maybe a sawtooth with only 1 tooth. So, a ramp from -1 to 1.
Throw it into performRealOnlyForwardTransform
Reinterpret it as complex bins: auto z = reinterpret_cast<FFT::Complex*> (x);
Now you have your frequency bins.
If you want to make a crude lowpass filter, for example, zero everything from z[512] upwards.
Now convert back to the time domain with performRealOnlyInverseTransform
Try doing a basic loopback and check you get back out what you put in.
Once you have that working, you are ready to work on a big waveform: you will step a size-2048 window through it in steps of 512, giving 4x overlap.
For each frame, you want to envelope/window it (look up the Hann window, often called a Hanning window)
 then fft, filter, ifft
 and then add the result to your output waveform at that location.
Sure, this is my code. It just does an FFT and then inverts it, but the result is just random floats, not even normalized.
If I remove the 2 transforms the sound is right.
Really, just a couple of lines as an example of a transform and a reverse transform would be nice, even without the overlapping.
void MainContentComponent::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    mInstrumentPlayer->getNextAudioBlock (bufferToFill); // fill the buffer with audio
    int numSamples = bufferToFill.buffer->getNumSamples(); // 512 here

    for (int channel = 0; channel < 2; ++channel)
    {
        float* channelData = bufferToFill.buffer->getWritePointer (channel);
        FFT lFFT (9, false); // order 9 -> 512-point transform, matching the block size

        // The real-only transforms need 2 * numSamples floats of working space.
        float* freqArray = new float[numSamples * 2];
        memcpy (freqArray, channelData, numSamples * sizeof (float)); // first half: the signal
        std::fill (freqArray + numSamples, freqArray + numSamples * 2, 0.0f); // second half: zeroed, not left uninitialised

        lFFT.performRealOnlyForwardTransform (freqArray);
        lFFT.performRealOnlyInverseTransform (freqArray);

        bufferToFill.buffer->copyFrom (channel, 0, freqArray, numSamples);
        delete[] freqArray; // was leaking every block
    }
}
There is a demo at https://github.com/julianstorer/JUCE/tree/master/examples/SimpleFFTExample/Source
Does that help?
π
PS If you can construct a minimal example demonstrating failure, I recommend posting it! Then the team can sort it out if it is a bug. Looking through the FFT source code is going to be really hard work!
I have this project coming up where we have to create a phase vocoder.
I was wondering why you are using getNextAudioBlock() instead of processBlock().
I think processBlock() applies more to a VST plugin, which is what I’m supposed to make.
Do you think I could/should use what you’re using?
Hey, it’s a bit old, but at the time I ended up using an external library called SoundTouch. You can also look at RubberBand; I just found SoundTouch easier to use.