Solving platform-dependent problems

Hey everyone,

a friend asked me to make him a Haas-effect plugin with a few little extra features. :slight_smile: I did, and it works perfectly fine in my DAW (Cubase Artist 9.5.5), but in his DAW, FL Studio, he gets a lot of weird clicking noises, as if some discontinuity occurs at random intervals. I'm just wondering how stuff like that can happen in general. I mean, I didn't even use any of JUCE's delay implementations. I just very traditionally write samples into a vector while reading samples from another part of the vector (a circular buffer). Nothing special, and especially nothing that should produce different results in different DAWs or even operating systems. (We both use Windows, though, so at least that's not the problem.)

At the beginning I rewrote some parts of my code to be more readable and added some safety measures, like clearing all channels >= 2, which definitely helped a bit (for some reason, not sure why), but now I'm at a point where I'm really out of ideas about what else to try. I'm glad he's a friend, but what if he were a customer? How do you debug things that don't happen on your own machine? Is there anything you can tell them to do to give you more valuable information? I wish I could just open the project on his computer and DBG a lot of things from there, but he doesn't live near me, and obviously that also isn't a good strategy with customers later in my career, so I should get into the habit of whatever you recommend to me in this post!

FL Studio (like a few other hosts, but not all of them) tends to split the block into smaller sizes. Is it possible your code assumed the block size was constant between processBlock() calls?
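One generic way to catch this class of bug on your own machine is a tiny offline harness that feeds the same input through your processor in blocks of different sizes and checks that the outputs match. A minimal sketch in plain C++ (the `SimpleDelay` here is a hypothetical stand-in, not the actual plugin code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical single-channel fixed-delay line, just for the harness below.
struct SimpleDelay {
    std::vector<float> data;
    int wIdx = 0;
    explicit SimpleDelay(int size) : data((size_t) size, 0.0f) {}
    float process(float x) {
        data[(size_t) wIdx] = x;                  // write the newest sample
        int rIdx = wIdx - (int) data.size() + 1;  // oldest sample = full delay
        if (rIdx < 0) rIdx += (int) data.size();
        float y = data[(size_t) rIdx];
        if (++wIdx >= (int) data.size()) wIdx = 0;
        return y;
    }
};

// Feed `input` to a fresh delay in blocks of the given sizes and collect the
// output, mimicking a host that changes the block size between calls.
std::vector<float> runInBlocks(const std::vector<float>& input,
                               const std::vector<int>& blockSizes,
                               int delaySize) {
    SimpleDelay d(delaySize);
    std::vector<float> out;
    std::size_t pos = 0, blk = 0;
    while (pos < input.size()) {
        int n = blockSizes[blk++ % blockSizes.size()];
        for (int s = 0; s < n && pos < input.size(); ++s, ++pos)
            out.push_back(d.process(input[pos]));
    }
    return out;
}
```

If the output differs between, say, one 512-sample block and a run of 1-, 3-, and 7-sample blocks, some state in the processor depends on the block size.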

For every block (processBlock) I used buffer.getNumSamples() to determine the length of the sample loop.

Btw: huge thanks for wanting to help me with this example in particular. If you also have some ideas for troubleshooting issues like that in general, that's great too, and I'd like to encourage everyone who reads this to participate in this post, whether you have an idea about this specific problem or about the general question of how to deal with cross-platform issues.

Are you sure this is not a case of the #1 most common programming mistake that we see on the forum?

Which basically means, in your case: are you sure that you are allocating a circular buffer for EVERY channel that you have, and that, for every one of those channels, you are writing and reading samples from the corresponding buffer?

Because if you are dealing with two channels, and you process the first (reading and writing from a circular buffer), and then process the second (by reading and writing from the SAME circular buffer used for the first channel), that’s probably NOT what you want.
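For illustration, a minimal sketch of that idea in plain C++ (hypothetical names, not the actual plugin code): one delay object per channel, each with its own circular buffer, so interleaving channels never corrupts each other's history.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One circular buffer per channel; delay = delaySamples.
class MonoDelay {
public:
    void prepare(int delaySamples) {
        data.assign((size_t) delaySamples + 1, 0.0f);
        wIdx = 0;
    }
    float process(float x) {
        data[(size_t) wIdx] = x;               // write the newest sample
        int rIdx = wIdx + 1;                   // oldest sample in the buffer
        if (rIdx >= (int) data.size()) rIdx = 0;
        float y = data[(size_t) rIdx];
        if (++wIdx >= (int) data.size()) wIdx = 0;
        return y;
    }
private:
    std::vector<float> data;
    int wIdx = 0;
};

// One delay object per channel; process each channel's block in turn.
void processStereo(float* const* channels, int numChannels, int numSamples,
                   std::vector<MonoDelay>& delays) {
    for (int ch = 0; ch < numChannels; ++ch)
        for (int s = 0; s < numSamples; ++s)
            channels[ch][s] = delays[(size_t) ch].process(channels[ch][s]);
}
```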

Other than that, isn't FL Studio available for a limited-time evaluation? If so, you can start the FL Studio demo and hopefully reproduce, then debug, the issue your friend is seeing.
If it is not, you can contact its manufacturer directly and ask for a Not For Resale copy of FL Studio for the purpose of testing plug-in compatibility.


I indeed only have one delay object in my code, but don't stop reading here yet. I figured no one using a Haas effect would ever automate across the center, because that sounds ridiculous, and most people use Haas effects statically anyway. So I thought: why not save some CPU here? I declared a variable that can be either 0 or 1; depending on whether the delay parameter is positive or negative, that variable is flipped. When it comes to processing a sample, that variable controls which channel writes into and reads from the delay. So yes, there is a click on that transition, but that's not my problem. My problem is that there are clicking noises while staying within one channel, long after 0 has been crossed, and the buffer is not much longer than 20 ms either.

Also, please remember: the problem does not occur on my own system. If I were using one processor incorrectly with two channels' data, I'd get the problem on my system as well.

I treat the samples as a single stream, so it doesn’t matter how long each block is to me.
Like ‘eyalamir’ said, FL Studio can send really small blocks to your processor. I’ve seen it as low as 1 sample. Yes 1 sample!

Can you please go into detail about what exactly that means? How can I prevent my implementation from failing when the block sizes vary?

void processBlock(juce::AudioBuffer<float>& buffer) {
   auto samples = buffer.getArrayOfWritePointers();
   for (auto s = 0; s < buffer.getNumSamples(); ++s)
       processSample(samples, s);
}
void processSample(float** sample, int s) {
   sample[chDelay][s] = delay.process(sample[chDelay][s]);
}

This is basically what my implementation currently looks like. (chDelay can be 0 or 1, depending on my delay parameter, as described earlier.)

It means you have to be prepared for buffer.getNumSamples() to be 1.
Your calls seem to be OK, but where is chDelay set?

It seems like you’re using the same delay line to process each channel instead of separate ones, so you should probably do delay[chDelay].process()

BTW, it would be easier on the CPU to pass a pointer to a memory block of samples to the delay's process function and let it run through each channel's block, one at a time, to be wary of memory cache scrubbing.
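A hedged sketch of what that block-based shape could look like (plain C++, hypothetical names, not the actual plugin): the delay exposes a `process(dst, src, numSamples)` that walks one channel's contiguous block in a single pass.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical block-based delay: one call per channel, walking that
// channel's contiguous samples in a single pass (cache friendly),
// instead of calling a per-sample function from outside.
struct BlockDelay {
    std::vector<float> data;   // circular buffer, allocated up front
    int wIdx = 0;

    void prepare(int delaySamples) {
        data.assign((size_t) delaySamples + 1, 0.0f);
        wIdx = 0;
    }

    void process(float* dst, const float* src, int numSamples) {
        const int size = (int) data.size();
        for (int s = 0; s < numSamples; ++s) {
            data[(size_t) wIdx] = src[s];   // write the newest sample
            int rIdx = wIdx + 1;            // oldest sample = full delay
            if (rIdx >= size) rIdx = 0;
            dst[s] = data[(size_t) rIdx];
            if (++wIdx >= size) wIdx = 0;
        }
    }
};
```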

As described above, I only use one delay processor because I thought: well, it's a Haas effect, no need to waste CPU on two delays just so the transition at 0 ms sounds OK. chDelay is a bool that represents whether the delay should be applied to channel 0 or 1. It's set in the method where the delay gets its time in ms, because the parameter goes from -20 to 20; negative numbers represent the other channel, which gives chDelay its value.

I'm having a hard time understanding your last paragraph, to be honest. Sorry for that; I'm still very much a beginner when it comes to programming. I assume I get a pointer to the two-dimensional array with buffer.getArrayOfWritePointers(). Isn't that supposed to be the memory block of samples I want to process with my delay? I googled "memory cache scrubbing" and Wikipedia gave me a definition that had something to do with computational errors that only happen once a year due to some weird radio waves in the air or something. I guess that's not what you mean here… or is it?

One thing at a time, then.
Your 'chDelay' should be an integer. You can't guarantee a bool is 0 or 1; it could be 0 and -1 (or anything the compiler wants it to be), and it certainly is a really bad idea to access an array with a boolean!

        const float **readPuts = buffer.getArrayOfReadPointers();
        float **writePuts = buffer.getArrayOfWritePointers();
        int numSamples  = buffer.getNumSamples();
        int numChannels = buffer.getNumChannels();

        for (int ch = 0; ch < numChannels; ch++)
        {
             delay[ch].process( writePuts[ch], readPuts[ch], numSamples );
        }

You could of course just send the pointers and let the delay handle the stereo, so it can handle things like cross feeding channels for ping-pong delays and the like.
Otherwise, you really do need a delay for each channel like the above code.

If you are using std::vector to store the samples inside the delay class, then you have to be aware of how std::vector works and implement the circular buffering logic properly to avoid reallocation of the vector.
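A minimal sketch of that point in plain C++: allocate the vector once up front and wrap an index, rather than growing it on the audio thread (push_back can reallocate and move the storage, which invalidates any pointers into it).

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Circular buffer that allocates once and never reallocates afterwards:
// the storage stays fixed and only the write index wraps around.
struct CircularBuffer {
    std::vector<float> data;
    int wIdx = 0;

    void prepare(int size) {
        data.assign((size_t) size, 0.0f);   // the only allocation
        wIdx = 0;
    }

    void push(float x) {                    // constant size, no allocation
        data[(size_t) wIdx] = x;
        if (++wIdx >= (int) data.size()) wIdx = 0;
    }
};
```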

OK, so I just changed chDelay from bool to int and rebuilt the plugin. Honestly, I'd be surprised if this were the solution, because the random clicks in my friend's audio playback don't come when the delay parameter has changed; they come constantly, at irregular intervals.

Not sure why you call both of these. Don't they point to the same memory address and therefore hold the same data, just with the difference that one of them can't be written through?

I'd appreciate it if we stopped talking about having to use two delays now. Using only one delay here was a decision based on the fact that a Haas delay only needs to delay one channel's sample data.

When the sampleRate changes, I allocate a little more than enough samples for 20 ms. It works on my machine, so I suppose the logic is correct.

I guess I'm doing that, but you can check if you want to:

float process(float sample) {
    ++wIdx;                             // advance the write index...
    if (wIdx >= size)
        wIdx = 0;                       // ...and wrap it
    data[wIdx] = sample;                // write the incoming sample
    dIdx += inertia * (delay - dIdx);   // smooth towards the target delay
    if (dIdx >= size)
        dIdx -= size;                   // (leftover check, see edit below)
    rIdx = wIdx - dIdx;                 // read position trails the write index
    if (rIdx < 0)
        rIdx += size;                   // wrap into the buffer
    return data[(int)rIdx];             // read, no interpolation
}

wIdx determines where to write in the buffer; it just moves up by one for each sample.
data is the vector.
dIdx is the current index of the delay, a value between 0 and whatever number of samples corresponds to 20 ms.
inertia is a variable with some value well below 1, meant to slow down the movement of dIdx on a parameter change.
delay is the current delay in samples.
Not sure right now why I check whether dIdx goes above size, since it shouldn't be able to do that; I guess the check does nothing. (I'll try removing it.)
rIdx is the index we read a sample from, so it relates wIdx and dIdx.
Then we read and return the sample with no interpolation. (I also tested linear interpolation and spline interpolation, but the problem persists.)
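For reference, a hedged sketch of what a linearly interpolated read from such a buffer could look like (plain C++, with names mirroring the post; this is an illustration, not the code above):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Linearly interpolated read from a circular buffer at a fractional
// delay `dIdx` behind the write position `wIdx`.
float readInterpolated(const std::vector<float>& data, int wIdx, float dIdx) {
    const int size = (int) data.size();
    float rIdx = (float) wIdx - dIdx;
    if (rIdx < 0.0f) rIdx += (float) size;
    const int i0 = (int) rIdx;             // sample at/before the read point
    const int i1 = (i0 + 1) % size;        // next sample, wrapped
    const float frac = rIdx - (float) i0;  // fractional position between them
    return data[(size_t) i0] + frac * (data[(size_t) i1] - data[(size_t) i0]);
}
```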

Edit: OK, just checked. Deleting the check of dIdx against size makes no difference. I guess it was left over from another experiment.

Edit 2: I have a question, maybe a little weird, but… is it always guaranteed that channel 0 is left and channel 1 is right, or do different DAWs interpret that differently?

As a side note: converting bool to any other integral type maps false to 0 and true to 1; this is guaranteed by the standard. Indexes are size_t, so both int -> size_t and bool -> size_t are conversions, but both are well defined.
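A tiny self-contained demonstration of that guarantee (the function name is made up for illustration):

```cpp
#include <cassert>

// The standard guarantees false -> 0 and true -> 1 on integral conversion,
// so indexing an array with a bool is well defined (even if an int is
// clearer about intent).
int pickChannel(bool delayRight) {
    const int channels[2] = {0, 1};
    return channels[delayRight];   // bool converts to exactly 0 or 1
}
```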

The read and write pointers can be different addresses in different DAWs; you can't presume they're the same everywhere.
The weird clicking noise might be down to your friend's buffer sizes. Can you get them to check their audio settings and possibly increase FL's audio buffer size?
Edit: BTW, you could allocate your buffers in here:

 void prepareToPlay (double sampleRate, int maximumExpectedSamplesPerBlock) 

Are you 100% sure of this? I'm not sure how this can be true in JUCE, as the only difference between getReadPointer and getWritePointer is the constness of what the returned pointer is pointing at.

    const Type* getReadPointer (int channelNumber) const noexcept
    {
        jassert (isPositiveAndBelow (channelNumber, numChannels));
        return channels[channelNumber];
    }

    Type* getWritePointer (int channelNumber) noexcept
    {
        jassert (isPositiveAndBelow (channelNumber, numChannels));
        isClear = false;
        return channels[channelNumber];
    }

Of course I may well have misunderstood something, but it certainly looks to me like the only differences are setting the isClear member when getting a write pointer, and the return type being const Type* for a read pointer as opposed to Type* for the write pointer.

If this is the case, then my implementation with chDelay being a bool should work fine and be well defined. I changed it to int now anyway and am still waiting for my friend to test it. If the problem persists, I'll change it back to bool, since that seems to be well defined.

OK, from looking at these methods I'm pretty sure it's the same address for read and write. However, I use getArrayOfWritePointers(). The documentation for this method says:

Don't modify any of the pointers that are returned,
and bear in mind that these will become invalid if the buffer is resized.

So when I do something like sample[ch][s] to get the sample at channel ch and index s, I assume I'm not "modifying the pointer" but just reading from its memory address, right? It also says the pointers become invalid if the buffer is resized, but I get a new pointer at the start of each processBlock. So can I assume the buffer won't be resized until the end of processBlock, or could this be my issue?

I assume that by using buffer.getNumSamples() I always use the number of samples the DAW is giving me right now, so it should work independently of platform and buffer size, right? I'll still try telling him to change his buffer size and to tell me if it changed anything. Worth a shot.

Yeah, I noticed that prepareToPlay also gives you the number of samples per block, but I'm not sure how that can be used. I mean, imagine I iterated over this "maximum size" and some of the entries were just empty or full of garbage because the actual buffer is smaller. Or maybe I misunderstand something about this argument; I just never found a way to utilize it yet.

Edit: oh, you meant allocating the buffers. Yes, I already do that. prepareToPlay triggers the resizing of my delay, so that it holds a bit more than 20 ms of samples.


Please keep in mind that the plugin works perfectly fine in my DAW. That means the logic is correct everywhere; it must be a different kind of mistake, something that makes a difference between his computer and mine, or his DAW and mine.

If chDelay is not getting flipped somehow, inertia is always within 0…1, and your delay line is not resized outside prepareToPlay, then I can only suspect a performance problem, as Dave suggests, at least with the information we have so far.

When the delay-in-ms parameter is changed, chDelay is flipped to either 0 or 1 and the delay gets the absolute value (0 to 20) of the parameter (-20 to 20). The delay then converts this to a buffer length in samples and resizes itself; it received the sample rate from prepareToPlay beforehand. inertia is supposed to be < 1 and is only meant to slow down delay-time changes. That already works fine in both of our DAWs; we checked with a test signal on an oscilloscope. About the performance issue: it's just a little Haas effect. Even if it had the most terrible code ever, it would never max out the CPU so much that the crackles come from that. Please consider that I might be pretty much a programming beginner, but not so terrible that I'd reallocate vectors all the time, or pass objects bigger than a double by value, or anything like that.

EDIT: WAIT! Or is it actually possible that FL Studio calls prepareToPlay more often than just when the sample rate changes? Because that would really suck, and it would explain why the crackles come at irregular intervals.

I just tested FL Studio with one of my plugs: prepareToPlay got called at least 20 times on boot-up! Then again when I clicked stop twice on the transport bar.
It's probably best to check for any changes before doing allocations in there.
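A minimal sketch of that guard in plain C++ (hypothetical names; in the real plugin this logic would live in prepareToPlay): only reallocate when the configuration actually changed, so repeated prepareToPlay calls with the same sample rate leave the delay buffer untouched.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Only reallocate when the sample rate actually changed. FL Studio (per the
// post above) calls prepareToPlay many times, and an unconditional resize
// there can clear or move the delay buffer mid-stream.
struct DelayState {
    std::vector<float> data;
    double preparedRate = 0.0;
    int allocationCount = 0;   // for illustration only

    void prepareToPlay(double sampleRate) {
        if (sampleRate == preparedRate)
            return;                          // nothing changed: keep the buffer
        preparedRate = sampleRate;
        const int size = (int) (0.025 * sampleRate);   // a bit more than 20 ms
        data.assign((size_t) size, 0.0f);
        ++allocationCount;
    }
};
```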


This actually gives me hope, man :slight_smile: This would explain everything, as my vector reallocates in prepareToPlay; I never check whether the sampleRate is still the same, since that was never a problem in Cubase.