Looping a grain of audio in a SynthesiserVoice renderNextBlock method

I’m writing a granular synthesizer. I originally partitioned the source into multiple AudioBuffers, each of which corresponded to one grain. For performance reasons (I don’t want to re-partition every time the grain size changes) and for code clarity, I’d rather just store a start and end index into one large AudioBuffer. So now, instead of seeing a pre-partitioned AudioBuffer of, let’s say, 10 ms, my SynthesiserVoice subclass sees one large buffer, and I’m unable to restrict playback to a shorter range. What is unclear to me is why I can’t hardcode startSample and numSamples to be, say, 0 and 441 for a 10 ms grain. Whenever I do this, it just plays the full buffer, so I think I’m missing something fundamental. This code is more or less copied from the SamplerVoice code, and is called in the standard fashion:

void GranularAudioSource::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    synth.renderNextBlock (*bufferToFill.buffer, incomingMidi,
                           bufferToFill.startSample, bufferToFill.numSamples);
}
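(As a quick sanity check on the numbers in the question: 441 samples is indeed 10 ms at a 44.1 kHz sample rate. A hypothetical helper for that conversion, not part of the original code:)

```cpp
#include <cassert>

// Hypothetical helper (not from the original code): convert a grain
// duration in milliseconds to a sample count at a given sample rate.
inline int grainLengthInSamples (double sampleRate, double grainMs)
{
    return (int) (sampleRate * grainMs / 1000.0 + 0.5); // round to nearest
}
```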

I think the change I need must be in the following code:

void GranularVoice::renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples)
{
    if (auto* playingSound = static_cast<GranularSound*> (getCurrentlyPlayingSound().get()))
    {
        // grainData is an AudioBuffer<float>&
        auto& data = playingSound->grainData;
        const float* const inL = data.getReadPointer (0);
        const float* const inR = data.getNumChannels() > 1 ? data.getReadPointer (1) : nullptr;

        // Are we in stereo? If yes, point outR at the second write pointer of
        // the output buffer, else set outR to nullptr.
        float* outL = outputBuffer.getWritePointer (0, startSample);
        float* outR = outputBuffer.getNumChannels() > 1 ? outputBuffer.getWritePointer (1, startSample) : nullptr;

        // We will fill the buffer with numSamples samples, but if the number of
        // samples in a grain is less than numSamples, we'll need to loop through
        // the grain some number of times to fill the buffer.
        while (--numSamples >= 0)
        {
            auto pos = (int) sourceSamplePosition;
            auto alpha = (float) (sourceSamplePosition - pos);
            auto invAlpha = 1.0f - alpha;

            // Using a very simple linear interpolation here.
            float l = (inL[pos] * invAlpha + inL[pos + 1] * alpha);
            // Again, only process the right channel if we're in stereo.
            float r = (inR != nullptr) ? (inR[pos] * invAlpha + inR[pos + 1] * alpha)
                                       : l;

            auto envelopeValue = adsr.getNextSample();

            l *= lgain * envelopeValue;
            r *= rgain * envelopeValue;

            if (outR != nullptr)
            {
                *outL++ += l;
                *outR++ += r;
            }
            else
            {
                *outL++ += (l + r) * 0.5f;
            }

            sourceSamplePosition += pitchRatio;

            if (sourceSamplePosition > playingSound->length)
            {
                // stopNote (0.0f, true);
                // break;
                sourceSamplePosition = 0;
            }
        }
    }
}
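My current suspicion (untested) is that startSample and numSamples only describe where to write into the output buffer, not which part of the source to read: the read position is sourceSamplePosition, and the loop above only wraps it at playingSound->length, which would explain why the full buffer plays. A minimal sketch of wrapping the read position inside a grain range instead (plain C++, mono, no envelope or JUCE; grainStart/grainEnd are hypothetical names I haven’t added yet):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch: read from one large source buffer, but wrap the read position
// inside a [grainStart, grainEnd) range instead of the whole buffer.
// The linear interpolation mirrors the voice code above; everything else
// is simplified.
void renderGrain (const std::vector<float>& source,
                  std::vector<float>& output,
                  double& sourceSamplePosition,
                  double pitchRatio,
                  int grainStart, int grainEnd)
{
    for (auto& out : output)
    {
        const int pos = (int) sourceSamplePosition;
        const float alpha = (float) (sourceSamplePosition - pos);

        // Same simple linear interpolation as above.
        out += source[pos] * (1.0f - alpha) + source[pos + 1] * alpha;

        sourceSamplePosition += pitchRatio;

        // Key change: wrap at the grain's end index back to its start,
        // rather than at the full buffer length back to 0.
        if (sourceSamplePosition >= grainEnd)
            sourceSamplePosition = grainStart
                + std::fmod (sourceSamplePosition - grainStart,
                             (double) (grainEnd - grainStart));
    }
}
```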

Did you find the fix for your problem in the end?

Unfortunately not, though I’m aware of a few projects on GitHub that implement granular synthesis which may be useful for you.

ATM I am reading through the JUCE looping-sound tutorial and trying to port passivist/GRNLR (a granular synthesis plugin, part of my bachelor’s thesis project) to the most recent JUCE version, so that might be helpful if you’re looking for more on GitHub.