Smooth or overlap-add between frames

Is there any way to do overlap-add? I’m implementing an algorithm that creates fast changes between frames, so I need to smooth the transitions between them. The algorithm is already tested in Matlab and works fine there (with overlap-add).

up :frowning:

Well, overlap-add is quite straightforward to implement; you’ve already implemented it in Matlab, right? Depending on the rate at which your data comes in, e.g. on a fixed-blocksize basis from an IFFT, or with a variable blocksize (like the processBlock callback), you can either use a simple overlap buffer, or a ringbuffer where you add/write your blocks and delete/read them afterwards.
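For the fixed-blocksize case, the core of the overlap buffer idea is tiny. Here is a minimal sketch at 50% overlap (all names and sizes are hypothetical, and it only handles the two-frame overlap that 50% gives you):

```cpp
#include <vector>
#include <algorithm>

// Core of overlap-add at 50% overlap: the head of each processed frame is
// added to the stored tail of the previous frame.
struct OverlapAdd
{
    static constexpr int frameSize = 1024;
    static constexpr int hopSize   = 512;

    std::vector<float> tail = std::vector<float> (frameSize - hopSize, 0.0f);

    // frame: frameSize processed (windowed) samples; out: hopSize output samples
    void process (const float* frame, float* out)
    {
        // head of the new frame plus the remembered tail of the previous one
        for (int i = 0; i < hopSize; ++i)
            out[i] = frame[i] + tail[i];

        // remember this frame's tail for the next call
        std::copy (frame + hopSize, frame + frameSize, tail.begin());
    }
};
```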

Well, I’m implementing the paper Autonomous Multitrack Equalization Based on Masking Reduction, which basically equalizes each track on each frame, trying to reduce overloaded frequency bands. So I always have the same input size, and the output is just the input equalized with an IIR filter.

The problem is that I’m not sure how the framework sets the frame input/output location. I mean, I guess the processBlock input buffer points at sample position “x”, and the output also points at “x”. Then, on the next processBlock call, the buffer points at “x” + frameSize, which does not allow me to do any overlap. Maybe I can read the samples at position “x” but write to another position?

I’m not very familiar with JUCE and VST plugins, so I don’t really know what the framework allows me to do. How can I implement the overlap buffer you mentioned?

I’m also considering the possibility of rotating the phase to match the frame boundaries, but I’m not sure it’s going to work.

And thanks for the answer! :smiley:

The main difference between real-time audio plug-ins and Matlab scripts is that with audio plug-ins you only get one frame / buffer at a time, and have to write one back before you get the next one. In Matlab you generally have access to the whole file or signal.
So everything you want to keep from one frame’s worth of information, you have to store somewhere. Best practice is to create an AudioBuffer<float> as a member of your processor, resize it during prepareToPlay to a size that fits your requirements, and write back all the information you’ll need in the next frame. In the next call you read it, and so on.
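A minimal sketch of that pattern (class and buffer names are hypothetical, other AudioProcessor overrides omitted):

```cpp
#include <JuceHeader.h>

class MyProcessor : public juce::AudioProcessor
{
    juce::AudioBuffer<float> overlapBuffer; // survives between callbacks

public:
    void prepareToPlay (double sampleRate, int samplesPerBlock) override
    {
        // allocate once here, never on the audio thread
        overlapBuffer.setSize (getTotalNumInputChannels(), 512);
        overlapBuffer.clear();
    }

    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
    {
        // read what the previous call stored in overlapBuffer,
        // process the current block, then store whatever the
        // next call will need back into overlapBuffer
    }
};
```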

Sure, but let’s say the frame size is 1024. If I want to do overlapping, I need the next frame to advance by 512 samples, not 1024, so that I have 512 samples of overlap.

That’s the second big difference: when you create audio plug-ins, you are not in a position to request stuff :wink: I also grew up with Matlab, DSP-wise; it’s quite a different way of thinking when it comes to real-time audio. If you really need a specific frame size, you have to do the buffering yourself, which of course will introduce a little bit of latency, nothing a proper DAW can’t handle.
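If you report that latency to the host via AudioProcessor::setLatencySamples, the DAW can compensate for it. A minimal sketch (the value is hypothetical; the exact latency depends on your buffering scheme):

```cpp
void MyProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    // a frame-sized FIFO typically delays the signal by one full frame;
    // report it so the host can line the tracks up again
    setLatencySamples (1024);
}
```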

Writing that buffering layer can be quite cruel, as you can’t rely on receiving the same number of samples in each call: it can be the prepared number of samples, or just 1 sample. Quite a long time ago I wrote such a buffering layer for overlapping DFT processing: https://github.com/DanielRudrich/OverlappingFFTProcessor/blob/master/Source/OverlappingFFTProcessor.h It looks very cruel to my own eyes :slight_smile:
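The general shape of such a layer looks roughly like this. This is a sketch of the idea, not the linked code: single channel, hypothetical names, and it assumes hopSize <= frameSize / 2; std::deque is used for clarity, a real-time version would use preallocated ring buffers, since std::deque may allocate on the audio thread.

```cpp
#include <algorithm>
#include <deque>
#include <functional>
#include <vector>

// The host hands us arbitrary block sizes; we hand the algorithm fixed
// frames every hopSize samples and overlap-add the results.
class OverlapFifo
{
public:
    OverlapFifo (int frameSizeIn, int hopSizeIn,
                 std::function<void (std::vector<float>&)> processFrameIn)
        : frameSize (frameSizeIn), hopSize (hopSizeIn),
          processFrame (std::move (processFrameIn)),
          history (frameSize, 0.0f),
          ready (frameSize - hopSize, 0.0f), // primes one frame of latency
          accum (frameSize, 0.0f), frame (frameSize, 0.0f) {}

    // call from processBlock with whatever numSamples the host delivers
    void process (const float* in, float* out, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            // keep the most recent frameSize input samples
            history.pop_front();
            history.push_back (in[i]);

            if (++hopCount == hopSize) // a full hop has arrived
            {
                hopCount = 0;

                // hand the algorithm a fixed-size frame
                std::copy (history.begin(), history.end(), frame.begin());
                processFrame (frame); // e.g. window -> filter -> window

                // overlap-add the processed frame into the accumulator
                for (int n = 0; n < frameSize; ++n)
                    accum[n] += frame[n];

                // the first hopSize samples get no further contributions:
                // they are finished and can be queued for output
                for (int n = 0; n < hopSize; ++n)
                    ready.push_back (accum[n]);

                // shift the accumulator left by one hop, zero the tail
                std::copy (accum.begin() + hopSize, accum.end(), accum.begin());
                std::fill (accum.end() - hopSize, accum.end(), 0.0f);
            }

            out[i] = ready.front(); // output, frameSize samples late
            ready.pop_front();
        }
    }

private:
    int frameSize, hopSize;
    std::function<void (std::vector<float>&)> processFrame;
    std::deque<float> history, ready;
    std::vector<float> accum, frame;
    int hopCount = 0;
};
```

One thing to keep in mind: for the overlapped frames to sum back to the original level, the analysis/synthesis windows applied inside processFrame have to add up to one across the overlaps (the COLA condition), e.g. a Hann window at 50% overlap.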

Sorry for the late reply. Yeah, so basically I did what you said and it’s working fine. Thanks a lot, it was actually quite a dumb thing to get stuck on hahaha