Converting the audio buffer for another library to use


#1

I’m trying to use the SoundTouch library to do some processing in my VST.

I have the library included and building fine, but I’m not sure how to convert the JUCE buffer into a buffer that SoundTouch can read.

Admittedly this seems like a simple C++ typecast but I’m not sure how to approach it.

soundTouch->putSamples(buffer, buffer.getNumSamples());

No viable conversion from 'AudioBuffer' to 'const soundtouch::SAMPLETYPE *' (aka 'const float *')


#2

The AudioBuffer is a container for the audio samples. To access the raw samples, use AudioBuffer::getReadPointer (channel)

soundTouch->putSamples (buffer.getReadPointer (0), buffer.getNumSamples());

#3

However, more complicated processing is needed if there is more than one channel to process, because SoundTouch wants interleaved audio buffers. The audio from the JUCE AudioBuffer (which stores each channel separately) needs to be interleaved into a helper buffer that is then passed into SoundTouch.


#4

Good shout, I haven’t used SoundTouch myself. Can it do mono processing, so that you could create two separate instances and run them as multi-mono?
Otherwise there is no way around copying the samples to a temporary buffer.


#5

It’s of course possible to create multiple mono instances of SoundTouch, but then the channels will drift apart in timing. So using a helper/work buffer for stereo/multichannel audio is really the recommended way to implement it.


#6

Thanks for the tips! I’ll try mono as a proof of concept, then try to make it work in stereo.

Are there any good examples for interleaving the buffer?


#7

Add a std::vector<float> as a member of your AudioProcessor and resize it to numChannels * samplesPerBlockExpected in prepareToPlay.

Then something like this in processBlock to interleave:

const int numChannels = buffer.getNumChannels();
for (int ch = 0; ch < numChannels; ++ch)
  for (int i = 0; i < buffer.getNumSamples(); ++i)
    m_tempbuffer[i * numChannels + ch] = buffer.getSample (ch, i);

I think JUCE also has a helper function for that, which may be able to vectorize the operation to make it faster, but I don’t recall its name or how to use it right now.


#8

It’s in the AudioDataConverters namespace:
AudioDataConverters::interleaveSamples()

// prepare outside the audio thread; the factor of 2 assumes stereo
AudioBuffer<float> output (1, numSamples * 2);

// in processing
AudioDataConverters::interleaveSamples (buffer.getArrayOfReadPointers(), 
                                        output.getWritePointer (0),  
                                        buffer.getNumSamples(), 
                                        buffer.getNumChannels());

#9

Ok, I don’t quite understand how to initialize the new output buffer, but before that, I can’t even get it to work in mono.

soundTouch->putSamples(buffer.getReadPointer(0), buffer.getNumSamples());
soundTouch->receiveSamples(buffer.getWritePointer(0), buffer.getNumSamples());

I’m no longer receiving audio out from my synth.


#10

It’s difficult to say just from that what could be wrong. What else is going on in the code? Is the SoundTouch code even in the AudioProcessor subclass? Have you initialized SoundTouch with the required number of channels and sample rate?


#11

I have a sampler synth that I created to play back samples on a single MIDI note. It also has a reverb and a low-pass filter.

In my processor constructor

soundTouch = new soundtouch::SoundTouch();

In my prepareToPlay

soundTouch->setChannels(getTotalNumOutputChannels());
soundTouch->setSampleRate((unsigned int) sampleRate); // setRate() would change the playback rate, not the sample rate

Process block

    const int totalNumInputChannels = getTotalNumInputChannels();
    const int totalNumOutputChannels = getTotalNumOutputChannels();
    
    for (int i = totalNumInputChannels; i < totalNumOutputChannels; ++i) {
        buffer.clear(i, 0, buffer.getNumSamples());
    }
    
    soundTouch->flush();

    if (MajorParamChange)
    {
        updateParams();
        MajorParamChange = false;
    }
    
//    playHead = this->getPlayHead();
//    playHead->getCurrentPosition (currentPositionInfo);
    
    mySynth.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());
    
    soundTouch->putSamples(buffer.getReadPointer(0), buffer.getNumSamples());
    soundTouch->receiveSamples(buffer.getWritePointer(0), buffer.getNumSamples());
    
    dsp::AudioBlock<float> block (buffer);
    
    if(filterState) {
        lowPassFilter.process(dsp::ProcessContextReplacing<float> (block));
    }
    
    if(reverbState) {
        reverb.process(dsp::ProcessContextReplacing<float> (block));
    }

There are other bits setting up the synth and reverb but everything works there until I try to manipulate buffer with SoundTouch.


#12

SoundTouch::flush clears the audio that is available from the stretcher, so you shouldn’t be calling it in processBlock. You might call it in prepareToPlay to clear the stretcher of the old audio.


#13

Ah yeah, I put that in there while testing different things and forgot to remove it. Unfortunately that’s not the issue: removing flush from processBlock has no effect, the output still doesn’t work.


#14
soundTouch->setChannels(getTotalNumOutputChannels());

That probably makes SoundTouch expect stereo audio, so putSamples and receiveSamples get messed up when you are testing as mono. Try setChannels(1);

Note that even if that gets the audio running, it may still be corrupt, for example because receiveSamples didn’t produce enough output. You can check whether that happens from the return value of receiveSamples; it may not always be as many samples as you requested. You will likely have to let SoundTouch process additional audio before trying to get output from it, which will cause latency… It’s not really the best library to use in a virtual instrument the way you are trying to use it.


#15

I see. I am still new to VST development and trying to patch together a full instrument. What are the best practices when looking for standardized algorithms for, say, pitch shifting and BPM matching? I’ve found this repo to be quite resourceful. I’d love to look at more like it if there are any buried somewhere.