[Resolved] Is there something wrong with processing audio buffers in the Synthesiser class like this? Is there MIDI data in there?

I am almost done with the second of my three major synths, but I have encountered a quirk I can’t understand.

I am using the JUCE Synthesiser class. I have a distortion module which is essentially just an oversampled tanh/atan function. I can run this on a per-sample basis in SynthVoice.h (my synth voice, for polyphonic distortion). I can also run it in PluginProcessor.cpp without any quirks, using a function like this to render blocks:

void renderNextBlockMono(AudioBuffer<float>& bufferIn)
{
	float* chanbuf = bufferIn.getWritePointer(0);
	for (int j = 0; j < bufferIn.getNumSamples(); ++j)
	{
		chanbuf[j] = getNextSample(chanbuf[j]);

		for (int i = 1; i < bufferIn.getNumChannels(); ++i)
		{
			float* chanbufcopy = bufferIn.getWritePointer(i);
			chanbufcopy[j] = chanbuf[j];
		}
	}

}

where getNextSample runs the distortion on each sample.
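
For reference, getNextSample is conceptually just a waveshaper along these lines (a simplified sketch only: the real version wraps this in oversampling, and drive here is a placeholder standing in for my actual parameter):

#include <cmath>

// Simplified shape of the per-sample distortion; the real module
// oversamples around this call.
float getNextSample (float in)
{
    const float drive = 4.0f; // placeholder drive amount
    return std::tanh (drive * in);
}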

However, when I move the same function to the Synthesiser level, it starts acting very strangely. If I feed it zero-velocity signals that don’t make noise in any other part of the synthesiser, it crackles in response, as if MIDI values were somehow being fed directly into it.

I have confirmed this isn’t just an amplification of crackling occurring somewhere else in the project, because, as I said, it doesn’t do this even with the same distortion placed elsewhere.

So I’m wondering if there’s something about the audio buffer at the Synthesiser level that would be causing me problems. Does it somehow contain some other type of data, like MIDI, at this level that would need to be stripped out?

This is my “SynthesiserInherited” relevant function:

class SynthesiserInherited
	: public Synthesiser,
	public AudioProcessorValueTreeState::Listener
{
public: 
....
//BLOCK BASED PROCESSING OF VOICES
void renderVoices(AudioBuffer<float>& outputAudio,
	int startSample, int numSamples) override
{
	//BASE
	Synthesiser::renderVoices(outputAudio, startSample, numSamples);

	//MONOCOMP
	if (monoCompOnOff) {
		monoComp.render(outputAudio, startSample, numSamples);
	}

	//DISTORTION
	if (monoDistOnOff) {
		monoDistortion.renderNextBlockMono(outputAudio);
	}
	//DELAY
	if (delayOnOff) {
		monoDelay.renderNextBlockMono(outputAudio, delayTime, delayFeedback, delayPrePostMix, delayDWMix);
	}
}

How on earth could MIDI data be getting into the monoDistortion here to trigger crackling? It very much seems to be coming from there, because if I turn off the monoDistortion, then no matter how much distortion I apply at other levels of the project (or how far I turn up the volume), there is no similar crackling.

I am using a MIDI guitar, and if I just tap on the keys it makes zero-velocity “notes” that are causing this monoDistortion to crackle. But there is absolutely no sound from the synth if I do the same thing with monoDistortion off, even with the volume as high as possible, or with the same distortion applied in other areas.

Is this crazy or what?

Thanks for any help.

Well, I just confirmed I’m not crazy, although I still don’t get it. I moved the distortion back to PluginProcessor.cpp like this:

void AudioPlugInAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{
    ScopedNoDenormals noDenormals;

    auto totalNumInputChannels  = getTotalNumInputChannels();
    auto totalNumOutputChannels = getTotalNumOutputChannels();

    for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    buffer.clear();

    mySynth.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());

    monoDist.renderNextBlockMono (buffer);

And presto: the crackling in response to zero-velocity MIDI data disappears again. So there is nothing wrong with the distortion or the synth in general.

There is just clearly something wrong with processing the buffer in the overridden Synthesiser class as I tried to do in the OP. But I need to do some processing there (e.g. monoComp must live there), so I think it is important for me to understand how this MIDI data is getting into the audio buffer when the distortion sits at the Synthesiser level.

Thanks for any help.

Is it because you are not passing startSample and numSamples into the monoDist.renderNextBlockMono (buffer) call? Every time you send the synth a MIDI note (even with zero velocity), the renderVoices function gets called in chunks, so startSample won’t always be 0. I suspect this is throwing your distortion off.

There are more problems than this:
renderNextBlock in the Synthesiser engine is adding data, not replacing, since it iterates all active voices over the same buffer.
Either you want an effect applied to the sum, in which case you process that in AudioProcessor::processBlock() on the whole buffer, after all synth voices have been added; or you want the effect per voice (especially important if your effect is somehow linked to the ADSR). In that case your voice needs its own pre-allocated private buffer, where it renders the block (taking care to use the right number of samples), applies the effect, and then adds the result to the block it got from the Synthesiser class. See the sketch after the next paragraph.

N.B.: when you iterate channels and samples, make the channel loop the outer one and the sample loop the inner one, for cache-coherency reasons.
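
Something like this, as a rough untested sketch (EffectVoice, voiceBuffer and the numbered comments are made-up names to illustrate the idea, not your code or JUCE API):

struct EffectVoice : public SynthesiserVoice
{
    AudioBuffer<float> voiceBuffer; // pre-allocate, e.g. setSize (numChannels, maxBlockSize) in your prepare code

    bool canPlaySound (SynthesiserSound*) override { return true; }
    void startNote (int, float, SynthesiserSound*, int) override {}
    void stopNote (float, bool) override { clearCurrentNote(); }
    void pitchWheelMoved (int) override {}
    void controllerMoved (int, int) override {}

    void renderNextBlock (AudioBuffer<float>& output, int startSample, int numSamples) override
    {
        voiceBuffer.clear();

        // 1) render this voice's raw signal into voiceBuffer, samples [0 .. numSamples)
        // 2) run the per-voice effect (distortion etc.) on voiceBuffer only
        // 3) add the finished result into the shared output block
        for (int ch = 0; ch < output.getNumChannels(); ++ch)
            output.addFrom (ch, startSample, voiceBuffer, ch, 0, numSamples);
    }
};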

Thanks daniel, but I’m not sure I understand exactly. Let me elaborate.

CASE 1: No Crackling

PluginProcessor:

void AudioPlugInAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{
    ScopedNoDenormals noDenormals;

    auto totalNumInputChannels  = getTotalNumInputChannels();
    auto totalNumOutputChannels = getTotalNumOutputChannels();

    for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    buffer.clear();

    mySynth.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());

    monoDistortion.renderNextBlockMono (buffer);

Synthesiser:

	void renderVoices(AudioBuffer<float>& outputAudio,
		int startSample, int numSamples) override
	{
		
		Synthesiser::renderVoices(outputAudio, startSample, numSamples);

CASE 2: Crackling

PluginProcessor:

void AudioPlugInAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{
    ScopedNoDenormals noDenormals;

    auto totalNumInputChannels  = getTotalNumInputChannels();
    auto totalNumOutputChannels = getTotalNumOutputChannels();

    for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    buffer.clear();

    mySynth.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());

Synthesiser:

	void renderVoices(AudioBuffer<float>& outputAudio,
		int startSample, int numSamples) override
	{
		
		Synthesiser::renderVoices(outputAudio, startSample, numSamples);

        monoDistortion.renderNextBlockMono(outputAudio);

Question:
In what way are these two cases actually different? In both cases, aren’t we running:

Synthesiser::renderVoices(outputAudio, startSample, numSamples);

then:

monoDistortion.renderNextBlockMono(outputAudio);
or
monoDistortion.renderNextBlockMono(buffer);

With just a different name for the buffer and nothing in between either way? So why would it work differently one way vs. the other?

Like Daniel said, in the latter case you’re applying the distortion to the whole buffer, possibly multiple times (depending on how many sub-buffers the Synthesiser class divided the buffer into based on the timing of the MIDI messages).

Essentially, you need to allow your renderNextBlockMono() function to take int startSample and int numSamples rather than always processing from sample 0 to the number of samples in the buffer.

Thanks Martin. Perhaps you can help me understand what I’m doing wrong, then. I tried implementing startSample and numSamples when I read @alibarker1’s suggestion in the thread, but it hasn’t stopped the crackling in response to zero-velocity MIDI signals.

Here’s how I wrote the distortion function. It’s meant to process only channel 0 and then copy that to any other channels, since this is a mono synth and there’s no need to waste processing power on left and right channels separately. getNextSample here is the function that applies the distortion per sample.

void renderNextBlockMonoStartSample(AudioBuffer<float>& bufferIn, int startSample, int numSamples)
{
	float* chanbuf = bufferIn.getWritePointer(0);

	// process only the chunk [startSample, startSample + numSamples)
	for (int j = startSample; j < startSample + numSamples; ++j)
	{
		chanbuf[j] = getNextSample(chanbuf[j]);

		for (int i = 1; i < bufferIn.getNumChannels(); ++i)
		{
			float* chanbufcopy = bufferIn.getWritePointer(i);
			chanbufcopy[j] = chanbuf[j];
		}
	}
}

Is that not correct? Why would this still be misbehaving? Thanks.

AudioPlugInAudioProcessor::processBlock() is called with full blocks, and the buffer needs to be processed from sample 0 to the last sample, so here your approach works.

This is calling mySynth.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());
What mySynth.renderNextBlock() does is iterate all active voices and add their samples into the buffer (which is why you clear the buffer first).

When a noteOn event happens in the middle of the buffer, it calls SynthVoice::renderNextBlock() with startSample set to the sample where the noteOn occurred, e.g. 93, and numSamples set to buffer.getNumSamples() - 93.

The same happens for a noteOff: if that happens 73 samples into the buffer, the renderNextBlock call before it will have numSamples set to 73.
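
Schematically, the base class behaves roughly like this inside a Synthesiser subclass (a heavily simplified sketch, not the actual JUCE source; the real code also merges events that fall very close together):

void renderNextBlockSketch (AudioBuffer<float>& buffer, const MidiBuffer& midi,
                            int startSample, int numSamples)
{
    int pos = startSample;

    for (const auto metadata : midi)
    {
        // render all active voices up to this event; note that 'pos' is
        // only 0 for the first chunk
        renderVoices (buffer, pos, metadata.samplePosition - pos);

        handleMidiEvent (metadata.getMessage());
        pos = metadata.samplePosition;
    }

    // render whatever remains after the last event
    renderVoices (buffer, pos, startSample + numSamples - pos);
}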

  1. If you press more than one note, every following voice will alter samples that were already delivered by previous voices. You have to do all per-voice processing inside the SynthVoice, in its own buffer, and only when the result is finished add it to the buffer that was referenced in the calling argument.

  2. If your distortion model is stateful, you need a separate instance of that state for each voice, and for each channel within it; otherwise the state’s continuity differs from the signal’s continuity, and that will result in jumps in the signal.

  3. When you iterate through a multi-channel buffer, be sure where possible (which is 95% of the cases) to iterate a full channel and process the next channel afterwards, so the processor can use its cache to best effect. This is called “cache coherency”; otherwise it is like reading two chapters of a book in parallel (the thing with paper and pages :wink: ).
    Another thing is FloatVectorOperations (which boils down to a processor feature called SIMD). If you copy a buffer, use AudioBuffer::copyFrom or addFrom; this makes a big difference, especially in a debug build, when optimisation is turned off. This alone could have caused your crackle.
    Your code would be rewritten like this:

void renderNextBlockMonoStartSample(AudioBuffer<float>& bufferIn, int startSample, int numSamples)
{
    float* chanbuf = bufferIn.getWritePointer(0);

    // shape only the chunk [startSample, startSample + numSamples)
    for (int j = startSample; j < startSample + numSamples; ++j)
        chanbuf[j] = getNextSample(chanbuf[j]);

    // copy the processed region of channel 0 into the other channels via SIMD
    for (int i = 1; i < bufferIn.getNumChannels(); ++i)
        bufferIn.copyFrom (i, startSample, bufferIn.getReadPointer (0, startSample), numSamples);
}

(But this still doesn’t fix your underlying problem: you are not using a buffer per voice to do the processing there.)

I don’t know how to explain it better; I hope it helps a bit.

Thank you guys for the clarification and thank you daniel for clarifying the correct way to copy the buffer among channels.

I understand then that the renderVoices function is not the best place for this type of processing, and why it runs into trouble.

But I’m still not sure of the best place to do the processing I was trying to do. The reason I tried doing some of it there is that I have a special kind of mono output compressor on my synth. This compressor works by summing together the output from a note-triggered ADSR envelope in each of the voices and using that mono sum, per sample, in the math applied to the mono synth output.

So each voice has an envelope called monoCompEnvelope and I need to sum their outputs together per sample so I can perform math on the mono output buffer.

I need to somehow get these envelopes all summed together for every sample of audio in a place where I have access to the individual voices.

In my Synthesiser::renderVoices override this was being done after Synthesiser::renderVoices, in a function called monoComp.render, like so:

//GET ENVELOPE SUMMED
envelopeSummed = 0;
for (int i = 0; i < mSynthVoice.size(); i++) {
	envelopeSummed = envelopeSummed + mSynthVoice[i]->getMonoCompEnvelopeValue();
}

//ACCESS BUFFER
for (int i = 0; i < outputAudio.getNumChannels(); i++) {
	float* buffer = outputAudio.getWritePointer(i);

	for (int j = 0; j < outputAudio.getNumSamples(); j++) {

		//--------COMPRESSION MATH HERE PER SAMPLE
		multiplier = /* function to calculate multiplier from envelopeSummed and some knob values */;
		//--------COMPRESSION MATH ENDS

		buffer[j] = buffer[j] * multiplier;
	}
}

But as we reviewed, things behave glitchily when mono sample-by-sample output effects are performed in this area, because renderVoices is not a perfectly linear start-to-finish pass over the buffer.

So then is there a better way for me to run envelopeSummed = envelopeSummed + mSynthVoice[i]->getMonoCompEnvelopeValue(); and use the value against the mono synth output per sample?

Thanks for any help. It’s hard to find any guidance on this type of situation, as I suppose it is not a very typical application, but it is important for my use case.

I think it’s resolved. Bruce Dawson, who’s been giving me some instruction, gave me this suggestion to try, and I did this in my Synthesiser-inherited class:

//CUSTOM BLOCK BASED PROCESS
void renderNextBlockCustom(AudioBuffer<float>& outputAudio,
	const MidiBuffer& inputMidi,
	int startSample,
	int numSamples)
{
	Synthesiser::renderNextBlock(outputAudio, inputMidi, startSample, numSamples);

	//MONOCOMP
	if (monoCompOnOff) {
		monoComp.render(outputAudio, startSample, numSamples);
	}

	//DISTORTION
	if (monoDistOnOff) {
		//monoDistortion.renderNextBlockMono(outputAudio);
		monoDistortion.renderNextBlockMonoStartSample(outputAudio, startSample, numSamples);
	}
	//DELAY
	if (delayOnOff) {
		monoDelay.renderNextBlockMono(outputAudio, delayTime, delayFeedback, delayPrePostMix, delayDWMix);
	}
}

And then in my PluginProcessor under processBlock I am calling:

mySynth.renderNextBlockCustom(buffer, midiMessages, 0, buffer.getNumSamples());

instead of:

mySynth.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());

Everything seems to work now (including my envelope-based monoComp), and it no longer matters whether I give my DSP functions for the monoComp, delay, or distortion a startSample and numSamples or not, because renderNextBlockCustom runs exactly once per block, after the base class has finished all of its chunked voice rendering, so it always sees the whole buffer with startSample at 0.

Thanks for the help guys.
