Difference between the getNextAudioBlock function and the processBlock function

Hello everyone,

I'm a beginner in JUCE, currently trying to make a harmonizer using the SoundTouch library.
In order to do it, I'm combining what I learned in the JUCE tutorials with the code of a working, open-source harmonizer JUCE project that uses SoundTouch.
I took the simple audio processing project from the JUCE tutorials as a base, and I'm trying to transform the output buffer with a pitch shift from the SoundTouch library instead of simply adding white noise.

One problem I have is that my project uses a getNextAudioBlock function, while that other project uses a processBlock function. I struggle to understand exactly what differentiates the two, and whether I need the former, the latter, or both.

My hunch right now is that I need one or the other but not both, and that processBlock somehow calls getNextAudioBlock implicitly (since getNextAudioBlock is apparently a callback function that the system calls itself to get its load of samples to feed to the audio hardware, and as such seems to always be needed in an audio app). But I'd like it if someone could explain precisely the difference between the two.

The second problem is that the other project uses the getSampleData function on a buffer, and that function is now deprecated. If I understood correctly, the equivalent is to use getWritePointer on a buffer I want to write to, and/or getReadPointer on a buffer I want to read from. I'd like a confirmation on that too.
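
For example, where the old code does something like buffer.getSampleData (channel), my assumption is that the modern equivalent would be:

// Old (deprecated) call used in the other project:
// float* data = buffer.getSampleData (channel);

// What I assume is the current replacement:
const float* input  = buffer.getReadPointer  (channel);   // read-only access
float*       output = buffer.getWritePointer (channel);   // writable access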

Thanks for your help 🙂


There’s no functional difference between the methods. Both are called when the audio system wants the buffer contents processed. The difference is mainly in how the AudioBuffer needs to be filled: processBlock requires all the samples in the buffer to be processed, while getNextAudioBlock may require only a portion of the buffer to be processed (determined by the startSample and numSamples members of the AudioSourceChannelInfo object).

If you are doing a plugin, it’s going to be easiest if everything uses plain AudioBuffers and the processBlock style of methods. I would, by the way, recommend doing your thing as a plugin; the AudioAppComponent-based things are IMHO a bit messy to deal with. When you do a plugin project, you can also build a standalone application from it.
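
A minimal sketch of the two styles (not from your project, just to illustrate the point; the 0.5f gain is an arbitrary stand-in for real processing):

// AudioAppComponent style: you get an AudioSourceChannelInfo and must
// respect its startSample/numSamples region.
void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
{
    for (int ch = 0; ch < bufferToFill.buffer->getNumChannels(); ++ch)
    {
        auto* data = bufferToFill.buffer->getWritePointer (ch, bufferToFill.startSample);

        for (int i = 0; i < bufferToFill.numSamples; ++i)
            data[i] *= 0.5f;   // process only the requested region
    }
}

// AudioProcessor (plugin) style: the whole buffer is yours to process.
void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
    {
        auto* data = buffer.getWritePointer (ch);

        for (int i = 0; i < buffer.getNumSamples(); ++i)
            data[i] *= 0.5f;   // process every sample in the buffer
    }
}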

You could have chosen an easier thing to do as a beginner project. SoundTouch isn’t the easiest library to work with… (But if you already have some working code for it from another project, maybe it’s not so hard to figure out.)


So what do you think I should do, try hard to use the SoundTouch library? I got how to use its classes and functions, but I can’t find a way to make my other files find SoundTouch.h or to get the linking to work correctly.

You need to add the SoundTouch include directory to your include directories, and most or all of the .cpp files of SoundTouch to your project. Note, however, that this would be static linking of SoundTouch into your finished binary, which has some licensing implications if you distribute your binaries publicly. Using SoundTouch from a dynamic library is a pretty bad headache, especially if you try to do that in a plugin project.
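
For example (the paths here are hypothetical, assuming the SoundTouch source sits next to your project): in the Projucer, add something like ../soundtouch/include to the project’s Header Search Paths, and add the .cpp files from ../soundtouch/source/SoundTouch to the project as source files. Then in your own code:

#include "SoundTouch.h"

// The main class lives in the soundtouch namespace:
soundtouch::SoundTouch shifter;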

I think I have a SoundTouch library for Mac. It has a readme (which of course I read, but it doesn’t explain much), an include folder, and a libsoundtouch.a. Where are my own include directories?
It won’t be distributed publicly; it’s a personal/school project. No worries.

It can be quite a headache to get static (.a) libraries to work on Mac too… (especially if you didn’t build the static library yourself on your own machine). I would recommend trying to build SoundTouch from its source code directly in your project.

Also, if you finally get SoundTouch working, it is going to have pretty severe latency for real-time pitch shifting. It is much better suited for offline use where real-time input signals are not involved.

I don’t know how to do that…
Is the latency really that severe?
While testing stuff, I was apparently able to get the Projucer to accept the RubberBand library and build an Xcode project with it. May I use that instead?

RubberBand has a lot of real-time latency too, but if you already got your project compiling and linking with it, it’s probably easiest to use that. (I just tried doing the “compile from source” approach with SoundTouch myself on macOS and there were some problems.)

I made a JUCE standalone application / AudioAppComponent-based project that compiles SoundTouch in directly:

However, it doesn’t actually do any sound processing, and there are no guarantees it would run correctly on macOS anyway. (I had to do a small hack in the SoundTouch code to allow compilation to succeed, but the resulting code might not actually work, or might not work optimally.)

Cool. I actually decided to switch to a plugin configuration like you suggested. So I used the code you provided to make a new plugin project, and it worked. Thank you very much!

Now I have something that works, but the sound is hyper fuzzy. So I’m trying to use the windowing functions of the dsp module, but to no avail. It just doesn’t work; it doesn’t even seem to be able to see the dsp module, because it’s saying things like “‘WindowingFunction’ is not a class, namespace, or enumeration”, when it’s actually a class in the dsp module.
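
Roughly what I’m aiming for is something like this (channelData is just a placeholder pointer to one channel’s samples), assuming the juce_dsp module is actually added to the project:

// Needs the juce_dsp module enabled in the Projucer and the fully
// qualified name, otherwise the compiler won't find the class:
juce::dsp::WindowingFunction<float> window (512, juce::dsp::WindowingFunction<float>::hann);

// Apply the window in place to a block of 512 samples:
window.multiplyWithWindowingTable (channelData, 512);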

The inside of my processBlock function right now looks like this:

ScopedNoDenormals noDenormals;
auto totalNumInputChannels = getTotalNumInputChannels();
auto totalNumOutputChannels = getTotalNumOutputChannels();
auto numSamples = buffer.getNumSamples();

for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
    buffer.clear (i, 0, buffer.getNumSamples());


for (auto channel = 0; channel < getTotalNumOutputChannels(); ++channel)
{
    
    auto actualInputChannel = channel % getTotalNumInputChannels();
    auto* inBuffer = buffer.getReadPointer (actualInputChannel);
    auto* outBuffer = buffer.getWritePointer (channel);
    
    AudioBuffer<float> buffer1;
    buffer1.makeCopyOf(buffer);
    auto* inBuffer1 = buffer1.getReadPointer (actualInputChannel);
    auto* outBuffer1 = buffer1.getWritePointer (channel);
    
    m_st->putSamples(inBuffer1, numSamples);
    m_st->receiveSamples(outBuffer1, numSamples);
    
    for (int i = 0; i < numSamples; ++i)
    {
        outBuffer[i] = outBuffer1[i];
    }
}

That isn’t going to work; you need to put enough samples into SoundTouch to be able to receive enough of them back. That is, putting numSamples samples into SoundTouch isn’t necessarily going to allow you to get numSamples samples back with the receiveSamples call. (This is where the real-time latency comes from: you may need to output silence for some time before SoundTouch has enough processed samples to output.)

There may also be other issues in the code. For example, allocating an AudioBuffer in the audio processing function isn’t a good idea. (If you need helper AudioBuffers, you should have them as member variables of your AudioProcessor and preallocate them in the prepareToPlay method.)
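
A minimal sketch of that pattern (pitchShifter and tempBuffer are hypothetical member names, and this assumes a mono signal for simplicity, since SoundTouch expects interleaved samples for stereo):

// Hypothetical members of the AudioProcessor:
//   soundtouch::SoundTouch pitchShifter;
//   juce::AudioBuffer<float> tempBuffer;

void prepareToPlay (double sampleRate, int samplesPerBlock) override
{
    pitchShifter.setSampleRate ((unsigned int) sampleRate);
    pitchShifter.setChannels (1);             // mono for simplicity
    pitchShifter.setPitchSemiTones (4.0);     // example pitch shift
    tempBuffer.setSize (1, samplesPerBlock);  // preallocate here, not in processBlock
}

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    const int numSamples = buffer.getNumSamples();

    // Feed the input into SoundTouch...
    pitchShifter.putSamples (buffer.getReadPointer (0), (unsigned int) numSamples);

    // ...but only take back what is actually ready; the rest of the
    // block stays silent until SoundTouch has built up enough output.
    const int ready = (int) pitchShifter.receiveSamples (tempBuffer.getWritePointer (0),
                                                         (unsigned int) numSamples);
    buffer.clear();
    buffer.copyFrom (0, 0, tempBuffer, 0, 0, ready);
}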

Alright, I see the problem, although I struggle to find the solution.
What should I do, use some data structure to store the data from receiveSamples? And then use it?

I made a plugin project example:

I didn’t test whether it compiles on macOS yet, but I don’t suppose there’s any particular reason it wouldn’t.