DSP filtering won't work on renderVoices method


#1

Hi!

So I followed the basic DSPAudioPlugin demo included with JUCE. It works great in an AudioProcessor, but I’m trying to get it to work on the individual voices of a Synthesiser, so I’ve inherited from the Synthesiser class and overridden the renderVoices method to do the DSP stuff in there. For some reason the DSP doesn’t behave the way it does in the AudioProcessor. :frowning:

I also tried inheriting from SamplerVoice and doing it inside its renderNextBlock, but the results are the same, and it feels unnecessary to do it there…

The sound plays back but is very distorted, as if 3/4 of the samples were completely missing from the buffer. I’ve confirmed everything works without these two lines in the renderVoices method, and a basic tempBuffer.applyGain, for example, confirms the rest of the chain is fine:

dsp::AudioBlock<float> block(tempBuffer);
process(dsp::ProcessContextReplacing<float>(block));		

Any ideas?

Code:

//Sampler is a public Synthesiser
Sampler::Sampler() : 
lowPassFilter(dsp::IIR::Coefficients<float>::makeFirstOrderLowPass(48000.0, 20000.f)) {}

void Sampler::renderVoices(AudioBuffer<float>& buffer, int startSample, int numSamples)
{
	int SampleCount = buffer.getNumSamples();
	int ChannelCount = buffer.getNumChannels();

	//DSP STUFF
	dsp::ProcessSpec spec{ getSampleRate(), static_cast<uint32> (numSamples), static_cast<uint32> (ChannelCount) };
	lowPassFilter.prepare(spec);
	lowPassFilter.reset();	

	AudioBuffer<float> tempBuffer;
	tempBuffer.setSize(ChannelCount, SampleCount, false, false, true);
	tempBuffer.clear();

	AudioBuffer<float> tempGatherBuffer; 
	tempGatherBuffer.setSize(ChannelCount, SampleCount, false, false, true);
	tempGatherBuffer.clear();

	for (auto* voice : voices) {

		voice->renderNextBlock(tempBuffer, startSample, SampleCount);
				
		updateParameters(500.0f); //low-pass of 500Hz

		dsp::AudioBlock<float> block(tempBuffer);

		process(dsp::ProcessContextReplacing<float>(block));		

		tempGatherBuffer.addFrom(0, startSample, tempBuffer, 0, startSample, numSamples, 1.0f);
		if (ChannelCount==2) tempGatherBuffer.addFrom(1, startSample, tempBuffer, 1, startSample, numSamples, 1.0f);
		tempBuffer.clear();
	}

	buffer = tempGatherBuffer;
}

void Sampler::process(dsp::ProcessContextReplacing<float> context) noexcept {
	ScopedNoDenormals noDenormals;	
	// Post-lowpass filtering
	lowPassFilter.process(context);
}

void Sampler::updateParameters(float LPHz) {
	*lowPassFilter.state = *dsp::IIR::Coefficients<float>::makeLowPass(getSampleRate(), LPHz);
}

Any help would be much appreciated! :slight_smile:


#2

OK, there were some bugs that made the plugin crash in some cases. I’ve updated the code and now the basics seem to work flawlessly, but the filtering still doesn’t work. The sound comes out “low-passed”, but with a certain crackle in it :frowning:

I’ve moved the preparing/resetting of the filter into a prepareToPlay method, which the AudioProcessor calls from its own prepareToPlay for each synthesiser. It didn’t affect the sound, though…

Sampler::Sampler() : 
	lowPassFilter(dsp::IIR::Coefficients<float>::makeFirstOrderLowPass(48000.0, 20000.f)) {}


void Sampler::prepareToPlay(int samplesPerBlock) {	
	dsp::ProcessSpec spec{ getSampleRate(), static_cast<uint32> (samplesPerBlock), 2 };
	lowPassFilter.prepare(spec);
	lowPassFilter.reset();
}


void Sampler::renderVoices(AudioBuffer<float>& buffer, int startSample, int numSamples)
{

	AudioBuffer<float>* subBuffer = new AudioBuffer<float>(buffer);

	AudioBuffer<float>* tempBuffer = new AudioBuffer<float>();
	tempBuffer->makeCopyOf(*subBuffer, false);
	tempBuffer->clear();

	for (auto* voice : voices) {
				
		voice->renderNextBlock(*tempBuffer, startSample, numSamples);

		float LPHz = 150.0f;
		updateParameters(LPHz);

		dsp::AudioBlock<float> block(*tempBuffer);
		
		process(dsp::ProcessContextReplacing<float>(block));

		buffer.addFrom(0, startSample, *tempBuffer, 0, startSample, numSamples, 0.5f);
		buffer.addFrom(1, startSample, *tempBuffer, 1, startSample, numSamples, 0.5f);
		tempBuffer->clear();
		
	}

	delete subBuffer;
	delete tempBuffer;

}

void Sampler::process(dsp::ProcessContextReplacing<float> context) noexcept {
	ScopedNoDenormals noDenormals;
	lowPassFilter.process(context);
}


void Sampler::updateParameters(float LPHz) {	
*lowPassFilter.state = *dsp::IIR::Coefficients<float>::makeLowPass(getSampleRate(), LPHz); 
}

#3

Something is wrong with your algorithm! You should either filter your signal only once at the end of your renderVoices method (outside the loop), or use a separate filter object for each voice. Don’t forget that filters keep internal state!

Moreover, you should avoid allocating memory on the audio thread at all costs. Instead, create your AudioBuffer objects once in prepareToPlay, and then use them in the process functions without ever changing their size!
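
A minimal sketch of the per-voice variant could look like this (the class and member names here are only placeholders, and the channel count is an assumption):

// Sketch: each voice owns its own filter state, and its scratch buffer is
// allocated once up front, never inside the render callback.
class FilteredSamplerVoice : public SamplerVoice
{
public:
    void prepare (double sampleRate, int maxBlockSize, int numChannels)
    {
        dsp::ProcessSpec spec { sampleRate, (uint32) maxBlockSize, (uint32) numChannels };
        lowPassFilter.prepare (spec);                     // one filter object per voice
        lowPassFilter.reset();
        tempBuffer.setSize (numChannels, maxBlockSize);   // allocated here, only reused later
    }

private:
    dsp::ProcessorDuplicator<dsp::IIR::Filter<float>, dsp::IIR::Coefficients<float>> lowPassFilter
        { dsp::IIR::Coefficients<float>::makeFirstOrderLowPass (48000.0, 20000.0f) };
    AudioBuffer<float> tempBuffer;
};

The prepare call would be made from the processor’s prepareToPlay (looping over the synthesiser’s voices), so nothing gets set up on the audio thread.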

Sorry in advance for the amount of (!) :slight_smile:


#4

Hello!

Thanks for the info! :slight_smile: I need per-voice filtering, so I think I’ll build the same mechanism inside the voice class and give the voices a similar prepareToPlay method etc. But what about the audio buffers: does the buffer size always stay the same once playback has started? I mean, if the size changes, buffers initialized in prepareToPlay might “overflow” and cause a crash?


#5

You should be getting the maximum buffer size that will be used in the prepareToPlay call and you can adjust your buffers according to that. If some host then uses a larger buffer size for the processBlock call, it’s a bug in the host and not really your problem. (Though you could resize your buffers as a last resort to prevent a crash…)

Also, don’t needlessly use AudioBuffer via pointers and “new”. If you do that, you are throwing away the memory management the AudioBuffer class does for you, and you risk getting your manual deletes wrong.
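
A minimal sketch of that, assuming the synthesiser keeps the buffer as a member called tempBuffer (the names are placeholders):

// Size the member buffer once, to the maximum block size reported by the host.
// renderVoices then only clears and reuses it; there is no new/delete anywhere.
void Sampler::prepareToPlay (int samplesPerBlock)
{
    tempBuffer.setSize (2, samplesPerBlock);
    dsp::ProcessSpec spec { getSampleRate(), (uint32) samplesPerBlock, 2 };
    lowPassFilter.prepare (spec);
    lowPassFilter.reset();
}

If a host ever passes a bigger block than it announced, you can enlarge the buffer inside renderVoices as a last resort; that still beats doing new/delete on every call.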


#6

Ah thanks for the tip!! :slight_smile:

I’ve just confirmed that IvanC is right: I moved the filtering to after all the voices have been rendered (applied to the full buffer) and it worked, no problems! :slight_smile: So I’ll just have to apply the same practice to the voices instead of the synthesiser to achieve what I’m looking for! :slight_smile: I’ll let you know how it works out!
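
For reference, the working version boils down to roughly this (a sketch of the idea rather than my exact code):

void Sampler::renderVoices (AudioBuffer<float>& buffer, int startSample, int numSamples)
{
    // Let the base class render and mix all the voices first...
    Synthesiser::renderVoices (buffer, startSample, numSamples);

    // ...then filter the mixed result once, over the sub-block that was just rendered.
    updateParameters (500.0f);
    dsp::AudioBlock<float> block (buffer);
    auto subBlock = block.getSubBlock ((size_t) startSample, (size_t) numSamples);
    process (dsp::ProcessContextReplacing<float> (subBlock));
}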


#7

Filtering is working now inside the voice class! But there’s one problem: when a voice is in its release phase (isInRelease == true), there’s a single click (roughly one sample in the buffer goes wrong for some reason)… I wonder why, since it all happens in the same renderNextBlock method that also contains the filter processing…


#8

Maybe the filter state for a given voice should be reset when the voice stops playing and is then played again?
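
Something like this in your voice class (SamplerVoiceInherit is just a placeholder name), assuming it owns its own filter and still calls the base class startNote; a quick sketch, not tested:

void SamplerVoiceInherit::startNote (int midiNoteNumber, float velocity,
                                     SynthesiserSound* sound, int pitchWheelPosition)
{
    lowPassFilter.reset();   // clear whatever state the previous note left behind
    SamplerVoice::startNote (midiNoteNumber, velocity, sound, pitchWheelPosition);
}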


#9

I mean the click happens during playback of the voice, right at the moment stopVoice is called and before the sound actually stops (when there’s a release tail). I’ve tried all sorts of fiddling, but the click doesn’t go away. The only time I got rid of it was by filtering a copy of the whole SamplerSound input buffer… on EACH block, hehe. So it destroyed performance…

I tried resetting the filter after each note, in releaseResources after playback, even at the beginning of renderNextBlock inside the voice class. Something else is going wrong. I’ve dug through the Synthesiser, SynthesiserVoice, SamplerSound etc. classes… My bet is on the buffers and their sizes, but the sizes seem to match, so I’m a bit clueless at the moment :frowning: I know the code probably isn’t optimally written yet, but it’d be nice to get it working first and then optimize… This is how the voice class’s renderNextBlock looks now:

void SamplerVoiceInherit::renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples)
{

    if (const SamplerSound* const playingSound = static_cast<SamplerSound*> (getCurrentlyPlayingSound().get()))
    {
		AudioSampleBuffer* tempDSPBuffer = new AudioSampleBuffer();
		tempDSPBuffer->makeCopyOf(outputBuffer, false);
		tempDSPBuffer->clear();
						
		int sampleblock = numSamples;
		
		int midinotenumber =  playingSound->midiRootNote;
		
        const float* const inL = playingSound->data->getReadPointer (0);
        const float* const inR = playingSound->data->getNumChannels() > 1
                                    ? playingSound->data->getReadPointer (1) : nullptr;

 
		float* outL = tempDSPBuffer->getWritePointer(0, startSample);
		float* outR = tempDSPBuffer->getNumChannels() > 1 ? tempDSPBuffer->getWritePointer(1, startSample) : nullptr;

        while (--numSamples >= 0)
        {
            const int pos = (int) sourceSamplePosition;
            const float alpha = (float) (sourceSamplePosition - pos);
            const float invAlpha = 1.0f - alpha;

            // just using a very simple linear interpolation here..
            float l = (inL [pos] * invAlpha + inL [pos + 1] * alpha);
            float r = (inR != nullptr) ? (inR [pos] * invAlpha + inR [pos + 1] * alpha)
                                       : l;
			

            l *= lgain;			
            r *= rgain;

            if (isInAttack)
            {
                l *= attackReleaseLevel;
                r *= attackReleaseLevel;

                attackReleaseLevel += attackDelta;

                if (attackReleaseLevel >= 1.0f)
                {
                    attackReleaseLevel = 1.0f;
                    isInAttack = false;
                }
            }
            else if (isInRelease)
            {				
				
                l *= attackReleaseLevel;
                r *= attackReleaseLevel;

                attackReleaseLevel += releaseDelta;

                if (attackReleaseLevel <= 0.0f)
                {
                    stopNote (0.0f, false);
                    break;
                }
            }

            if (outR != nullptr)
            {
                *outL++ += l;
                *outR++ += r;
            }
            else
            {
                *outL++ += (l + r) * 0.5f;
            }

			sourceSamplePosition += pitchRatio*PitchWheelRatio;

            if (sourceSamplePosition > playingSound->length)
            {
                stopNote (0.0f, false);
                break;
            }
        }

		
		float minLPHz = 200, maxLPHz = 22000; // velocity 1.0 = LP at 22000 Hz, velocity 0.0 = LP at 200 Hz
		float LPHz = lgain*lgain*lgain * (maxLPHz - minLPHz) + minLPHz;

		updateParameters(LPHz);
		dsp::AudioBlock<float> block(*tempDSPBuffer);

		process(dsp::ProcessContextReplacing<float>(block));
		outputBuffer.addFrom(0, startSample, *tempDSPBuffer, 0, startSample, sampleblock, 0.5f);
		if (outputBuffer.getNumChannels() == 2) outputBuffer.addFrom(1, startSample, *tempDSPBuffer, 1, startSample, sampleblock, 0.5f);

		delete tempDSPBuffer;
	   }
}

What I’m trying next is to take a block-sized snippet from the input buffer (the SamplerSound’s data) and filter it before it goes through renderNextBlock’s processing… I don’t know if it’s a good idea, but I’m clueless…


#10

I couldn’t manage to extract a proper portion of the playingSound’s data… Somehow it’s surprisingly difficult to do…

Well, I continued with the approach I’d started: there’s now a temporary buffer inside the voice class that gets sized in the prepareToPlay method, and the DSP filter gets prepared and reset there too. I tried doing it in startNote as well, but that had no effect. The playing sound still gets the “click” when note-off/stopVoice kicks in…

Any ideas? :frowning:

Here’s how my renderNextBlock looks now:

void SamplerVoiceInherit::renderNextBlock(AudioSampleBuffer& outputBuffer, int startSample, int numSamples)
{
	if (auto* playingSound = static_cast<SamplerSound*> (getCurrentlyPlayingSound().get()))
	{
		tempDSPBuffer.clear();		
						
		int sampleblock = numSamples;
				
		auto& data = *playingSound->data;
		const float* const inL = data.getReadPointer(0);
		const float* const inR = data.getNumChannels() > 1 ? data.getReadPointer(1) : nullptr;

		float* outL = tempDSPBuffer.getWritePointer(0, startSample);
		float* outR = tempDSPBuffer.getNumChannels() > 1 ? tempDSPBuffer.getWritePointer(1, startSample) : nullptr;

        while (--numSamples >= 0)		
        {
			auto pos = (int)sourceSamplePosition;
			auto alpha = (float)(sourceSamplePosition - pos);
			auto invAlpha = 1.0f - alpha;

			// just using a very simple linear interpolation here..
			float l = (inL[pos] * invAlpha + inL[pos + 1] * alpha);
			float r = (inR != nullptr) ? (inR[pos] * invAlpha + inR[pos + 1] * alpha)
				: l;

			l *= lgain;
			r *= rgain;

			if (isInAttack)
			{
				l *= attackReleaseLevel;
				r *= attackReleaseLevel;

				attackReleaseLevel += attackDelta;

				if (attackReleaseLevel >= 1.0f)
				{
					attackReleaseLevel = 1.0f;
					isInAttack = false;
				}
			}
			else if (isInRelease)
			{
				l *= attackReleaseLevel;
				r *= attackReleaseLevel;

				attackReleaseLevel += releaseDelta;

				if (attackReleaseLevel <= 0.0f)
				{
					stopNote(0.0f, false);
					break;
				}
			}

            if (outR != nullptr)
            {
				*outL++ += l;
				*outR++ += r;

			}
            else
            {
				*outL++ += (l + r) * 0.5f;
            }

			sourceSamplePosition += (double)(pitchRatio*PitchWheelRatio);
			
            if (sourceSamplePosition > playingSound->length)
            {
                stopNote (0.0f, false);
                break;
            }
        }
		
		float LPHz = 500; // low-pass at 500 Hz

		updateParameters(LPHz);
		dsp::AudioBlock<float> block(tempDSPBuffer);

		process(dsp::ProcessContextReplacing<float>(block));
		outputBuffer.addFrom(0, startSample, tempDSPBuffer, 0, startSample, sampleblock, 1.0f);
		if (outputBuffer.getNumChannels() == 2) outputBuffer.addFrom(1, startSample, tempDSPBuffer, 1, startSample, sampleblock, 1.0f);
		
	   }
}

#11

OK, I’ve got a clue about why this is happening. I traced the buffer size, startSample and numSamples, and it turns out that whenever there’s a fresh new note or a note-off, the synthesiser splits the buffer to be rendered. This happens because if the MIDI note hits mid-buffer (i.e. anywhere but sample position 0), rendering has to start immediately rather than at the start of the next buffer, so renderVoices does partial renders from the MIDI message onwards.

This in turn somehow messes up the DSP filtering. I tried re-preparing the DSP with the correct block size during rendering, but it doesn’t work. This is a bit complex. Do I need to “gather” the buffer inside the synthesiser into a full block from the bits and pieces before filtering the whole block? Or is there another way to do it inside SynthesiserVoice, since that would be easier to manage with per-voice DSP objects etc.?

I confirmed this with:

synthesiser->setMinimumRenderingSubdivisionSize(256, true); // 256 samples was the default block size in my DAW

and the clicking sound stopped on note releases.
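
For anyone following along, here is a sketch of where that call could live (the processor and member names are placeholders; the boolean argument is just carried over from the line above):

void MyAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    // Stop the synthesiser from splitting renders into tiny sub-blocks around
    // MIDI events; here the host block size is used instead of a hard-coded 256.
    sampler.setMinimumRenderingSubdivisionSize (samplesPerBlock, true);
    sampler.setCurrentPlaybackSampleRate (sampleRate);
    sampler.prepareToPlay (samplesPerBlock);
}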


#12

OK! This is solved. It was as I said in the previous post. All I needed to do was remove the tempDSPBuffer.clear() from the beginning of renderNextBlock and add a check to the DSP-filtering part, which makes sure the whole buffer has been run through renderNextBlock before filtering. Only then does it add the whole buffer length to the output at once. So basically the voice keeps accumulating into the temporary buffer and holds on to it until it’s filled. You can also clear() the buffer after it’s been added to the main output buffer, though I’m not sure that’s even necessary; you could just change the *outL++ += l; into *outL++ = l; so the buffer gets overwritten during the run. In case the buffer size changes, you can do the clearing in prepareToPlay after resizing. Phew, that took some time. :stuck_out_tongue:
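
In code, the check in the filtering part of renderNextBlock looks roughly like this (a sketch of the idea; the variable names follow the earlier snippets):

// tempDSPBuffer is no longer cleared at the top of renderNextBlock, so partial
// renders keep accumulating into it. Only when the last sub-block of the host
// buffer has been rendered do we filter and add the whole block at once.
if (startSample + sampleblock == outputBuffer.getNumSamples())
{
    updateParameters (500.0f);
    dsp::AudioBlock<float> block (tempDSPBuffer);
    process (dsp::ProcessContextReplacing<float> (block));

    const int fullLength = outputBuffer.getNumSamples();
    outputBuffer.addFrom (0, 0, tempDSPBuffer, 0, 0, fullLength, 1.0f);
    if (outputBuffer.getNumChannels() == 2)
        outputBuffer.addFrom (1, 0, tempDSPBuffer, 1, 0, fullLength, 1.0f);

    tempDSPBuffer.clear();   // start accumulating the next block from silence
}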


#13

It seems it’s useful to call .clear() on the temporary buffer after each DSP-filtering pass; it removes a very minor clicking sound that can happen with fast releases :thinking: