Hi all,
I am pretty new to JUCE and DSP.
I built a simple delay plugin which uses circular buffers.
I tried initializing a buffer on the heap with a pointer but it seemed pretty unstable and prone to crackles or worse. I thought perhaps somehow I was writing outside the buffer.
I thought this might be a reasonable solution to just declare an array as a private member of the audio processor class and initialize it to zero.

The plugin sounds lovely and I have sent it to a lot of friends (20 or more) with different operating systems and DAWs, but for one user it managed to crash his version of Cubase on a new computer.
It sounds suspiciously like some sort of seg fault to me.
Do I really need to use a juce audio buffer class to do this sort of thing?
Can you run valgrind or something on an audio plugin?

If the array is initialized to zero (I assume when the class is created), do I need to do more, like memset it in prepareToPlay or the constructor, or both? Or declare it in the constructor instead?

    float mCircularBufferLeft[96000] = {  };
    float mCircularBufferRight[96000] = {  };
    float mCircularBufferLeftTwo[96000] = {  };
    float mCircularBufferRightTwo[96000] = {  };

I tried using one buffer instead of 16 but that did crash my DAW.
It doesn’t seem to me like it is the amount of memory allocation that a modern computer would worry about.

Perhaps some more experienced C++ and JUCE users will explain to me how silly an idea this really is!

Thanks for your advice!

First question: how do you come to the conclusion that the crash is caused by your buffering? To be honest, it could be almost anything. The only thing that helps here is a stack trace of the crash, where you can see which function caused what kind of crash, so better ask your friend to send you the crash report. It could be a segfault, but also some kind of unhandled exception, a division by zero, or the use of a CPU instruction not supported by the CPU in that PC…

Anyway, a few thoughts regarding your buffer question:

Best practice and coding style aside, if you do it right, both ways should work correctly: allocating memory on the heap inside your processor constructor or prepare routine, or declaring arrays as members – they will end up on the heap anyway, as the whole processor will likely be allocated on the heap :wink: So to me your “solution” actually sounds more like “works by accident”.

First of all, magic numbers, especially for array bounds, are a common source of error: a simple typo can have bad side effects, and if you decide to change the size at some point, you have to remember every place you specified it – and you’ll likely miss at least one. If you want to go with those raw C arrays, better declare a member like static constexpr int circularBufferSize = 96000 and use that everywhere, both to create buffers of that size and to check bounds in functions accessing these arrays.
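As a sketch of that idea (member and function names here are hypothetical, not from the original plugin), one named constant used for both the array sizes and the bounds handling might look like:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Hypothetical delay state: one named constant instead of a magic number
// repeated in four places.
struct DelayState
{
    static constexpr int circularBufferSize = 96000; // 1 second at 96 kHz

    std::array<float, circularBufferSize> circularBufferLeft {};  // zero-initialised
    std::array<float, circularBufferSize> circularBufferRight {};

    // Bounds-safe write: wrap the index instead of trusting the caller.
    void write (int index, float left, float right)
    {
        const int wrapped = index % circularBufferSize;
        circularBufferLeft[(std::size_t) wrapped]  = left;
        circularBufferRight[(std::size_t) wrapped] = right;
    }
};
```

Now changing the size means editing exactly one line, and an out-of-range index can no longer scribble past the end of the array.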

But better, don’t use raw C arrays at all. Generally I’d advise you to write C++ and use a suitable container class that knows its own size, can handle dynamic resizing if needed, and supports bounds checking. juce::AudioBuffer is a great container for multichannel audio buffers, and it has convenient functions for clearing the buffer too. I’d maybe also consider writing my own circular buffer class that handles all that circular buffering stuff, maybe they even added one to JUCE in the meantime…?
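A minimal version of such a circular buffer class could look like the sketch below (plain C++, no JUCE types, names made up for illustration): push a sample each tick, read back N samples later, and all the wrapping lives in one place.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A minimal circular delay line: one write head, wrapping handled internally.
class CircularDelayLine
{
public:
    explicit CircularDelayLine (int sizeInSamples)
        : data ((std::size_t) sizeInSamples, 0.0f) {}

    // Store one sample and advance (and wrap) the write head.
    void push (float sample)
    {
        data[(std::size_t) writeHead] = sample;
        writeHead = (writeHead + 1) % (int) data.size();
    }

    // Read the sample written `delayInSamples` pushes ago.
    float read (int delayInSamples) const
    {
        int readHead = writeHead - delayInSamples;
        if (readHead < 0)
            readHead += (int) data.size();
        return data[(std::size_t) readHead];
    }

private:
    std::vector<float> data;
    int writeHead = 0;
};
```

With eight of these as members, the processBlock code shrinks to push/read calls and there is no index arithmetic left to get wrong per channel.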


There is still Timur’s great juce::AbstractFifo to manage read and write operations. You just need to implement the actual copy procedures yourself, since it is general purpose.


i’m a rather heavy cubase user and here are my thoughts on this issue:

some daws, like fl, don’t know the concept of having different amounts of ins and outs on a track, but cubase pro lets you define that flexibly. so ask your friend which version he has, and if it’s pro, ask whether he tried the plugin on tracks with different channel configurations. if the problem persists there, you’re probably not handling the edge case of input channel count != output channel count. does every channel have its own ringbuffer? are you actively clearing all possibly unused channels?

ask your friend to change the buffer size of his daw and tell you if anything changes. if it does (even if the problem persists), then your issue likely has something to do with the way you access your ringbuffer for each sample of the daw’s audio buffer. the goal is that the distance between the write and read index is exactly the length in samples that the user specifies with the parameter. the sampleRate plays into this calculation, but the daw’s buffer size does not.
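to make that calculation concrete, here’s a tiny sketch (the helper name is made up): the read offset depends only on the delay time and the sample rate, and the host’s block size never appears in it.

```cpp
#include <cassert>
#include <cmath>

// Convert a user-facing delay time in seconds into a read offset in
// samples. Only the sample rate matters here, never the host's buffer size.
int delayTimeToSamples (double delaySeconds, double sampleRate)
{
    return (int) std::round (delaySeconds * sampleRate);
}
```

if that offset were accidentally derived from the block size instead, the plugin would behave differently at every buffer-size setting, which is exactly what the experiment above would reveal.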

speaking of sampleRate, maybe you allocated your ringbuffer for a fixed sampleRate and your friend just uses a different one. then it wouldn’t be a cubase issue, of course. that’s not unlikely, because most people use 44.1 but maybe he doesn’t. cubase people are sometimes a little oldschool and esoteric and still believe they get wildly better audio quality when producing at 96khz
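the robust fix is to size the buffer from the sample rate the host actually reports (in JUCE, that arrives in prepareToPlay). a framework-free sketch of that sizing logic, with a made-up helper name:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Size the ring buffer from the host's reported sample rate instead of
// hard-coding 96000: +1 so a full maxDelaySeconds fits even after rounding.
std::vector<float> makeDelayBuffer (double sampleRate, double maxDelaySeconds)
{
    const auto size = (std::size_t) (sampleRate * maxDelaySeconds) + 1;
    return std::vector<float> (size, 0.0f); // zeroed, so no garbage playback
}
```

a hard-coded 96000-sample buffer still *holds* one second at 96 kHz, so it wouldn’t overflow at lower rates, but any index math that assumes the wrong rate will read from the wrong offset.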

Lots of great advice here, and I really liked this last reply, which makes a lot of sense. The read and write operations on those buffers are below.
The buffer size is arbitrary. My idea was that 96000 would give one second of delay at 96k, much more than needed for this plugin, which is a simple chorus/reverb type deal.

    void WaylodelayUdAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages)
    {
        juce::ScopedNoDenormals noDenormals;
        auto totalNumInputChannels  = getTotalNumInputChannels();
        auto totalNumOutputChannels = getTotalNumOutputChannels();

        // clear any output channels that have no corresponding input
        for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
            buffer.clear (i, 0, buffer.getNumSamples());

        float* LeftChannel  = buffer.getWritePointer (0);
        float* RightChannel = buffer.getWritePointer (1);

        for (int i = 0; i < buffer.getNumSamples(); ++i)
        {
            //// other stuff !

            // shove some of the input into the circular buffer, also add some of the feedback
            mCircularBufferLeft[mCircularBufferWriteHead]  = LeftChannel[i] + mfeedbackLeft;
            mCircularBufferRight[mCircularBufferWriteHead] = RightChannel[i] + mfeedbackRight;

            buffer.setSample (0, i, buffer.getSample (0, i) * *mDryGainParameter
                              + delay_sample_Left      * *mDelayOneGainParameter   + delay_sample_Right      * *mDelayOneGainParameter
                              + delay_sample_LeftThree * *mDelayThreeGainParameter + delay_sample_RightThree * *mDelayThreeGainParameter
                              + delay_sample_LeftFive  * *mDelayFiveGainParameter  + delay_sample_RightFive  * *mDelayFiveGainParameter
                              + delay_sample_LeftSeven * *mDelaySevenGainParameter + delay_sample_RightSeven * *mDelaySevenGainParameter);
            buffer.setSample (1, i, buffer.getSample (1, i) * *mDryGainParameter
                              + delay_sample_LeftTwo   * *mDelayTwoGainParameter   + delay_sample_RightTwo   * *mDelayTwoGainParameter
                              + delay_sample_LeftFour  * *mDelayFourGainParameter  + delay_sample_RightFour  * *mDelayFourGainParameter
                              + delay_sample_LeftSix   * *mDelaySixGainParameter   + delay_sample_RightSix   * *mDelaySixGainParameter
                              + delay_sample_LeftEight * *mDelayEightGainParameter + delay_sample_RightEight * *mDelayEightGainParameter);
        }
    }


ok let’s see. 2 buffers called mCircularBufferSomething with a writehead called mCircularBufferWriteHead. it’s not clear to me where the writeHead is being updated. it would have to go up by one for each sample, but i suppose you do that somewhere and i just don’t see it, since it only fails in one daw. so you take the Left- and RightChannel at sample i and add the feedback to it. i think this would have to be a multiplication instead; adding a constant is usually just a dc offset. maybe i’m misunderstanding something here, but it looks to me like you’re attempting to write a feedback delay.

then you set both channels’ samples to new values. you could use your Left- and RightChannel pointers for that as well btw, since you already have them there as write pointers.

so, you take the dryGainParameter’s value, multiply it with the dry sample, and then add the rest to it, which is: delay_sample_Left times some other gain parameter. by the way, writing the * right next to a value makes it look a lot like pointer stuff, which is a bit confusing to read for me. then you add delay_sample_Right multiplied by yet another gain parameter, and so on and so on. there are a lot of delay samples, it seems, but where do they come from? where are they calculated? and why are left and right mixed into one channel? is this going to be some sort of reverb? i’d encourage you to make shorter statements per line to make it more readable when you want to discuss it with other people. i don’t really understand what’s going on there. i first thought you meant a normal feedback delay, but i’m not sure anymore
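for reference, here’s what one sample of a plain feedback delay usually looks like, boiled down to framework-free c++ (all names made up, not taken from the plugin above): the write head advances and wraps every sample, and the feedback term is the *delayed sample scaled by a gain*, not an added constant.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One channel of a feedback delay. The write head advances and wraps on
// every sample; the feedback is the delayed sample times a gain < 1.
struct FeedbackDelay
{
    std::vector<float> buffer;
    int writeHead = 0;
    int delaySamples;
    float feedbackGain;

    FeedbackDelay (int size, int delay, float gain)
        : buffer ((std::size_t) size, 0.0f), delaySamples (delay), feedbackGain (gain) {}

    float processSample (float input)
    {
        int readHead = writeHead - delaySamples;
        if (readHead < 0)
            readHead += (int) buffer.size();

        const float delayed = buffer[(std::size_t) readHead];

        // input plus scaled feedback goes into the ring buffer
        buffer[(std::size_t) writeHead] = input + delayed * feedbackGain;

        writeHead = (writeHead + 1) % (int) buffer.size(); // advance and wrap
        return delayed;
    }
};
```

if the wrap on writeHead (or on the read head) is missing anywhere, the code walks straight off the end of the array, which is exactly the kind of bug that works fine in one daw and segfaults in another.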
