[SOLVED] JUCE::dsp filtering only the feedback signal in delay plugin - need help

Hello, can you help a newbie?

I’m working on adding a filter (HPF/LPF) to a delay VST audio plugin I’ve made. I’m using juce::dsp::ProcessorChain<juce::dsp::LadderFilter<float>> to do this. I’ve read the tutorial https://docs.juce.com/master/tutorial_dsp_introduction.html and managed to get a working implementation. My problem is that the full signal is filtered, and I only want to filter the wet signal (the delayed feedback signal). When I play a note, I want to hear the initial dry, unfiltered signal, followed by the filtered delayed feedback signal.

[image: DelayFilter]

Another problem is that I don’t fully understand this code:

/* setting up the ProcessorChain */
auto block = juce::dsp::AudioBlock<float> (buffer);
auto blockToUse = block.getSubBlock(0, buffer.getNumSamples());
auto contextToUse = juce::dsp::ProcessContextReplacing<float> (blockToUse);
processorChain.process (contextToUse);

In my first implementation (where the full signal is filtered) I used the incoming buffer in processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages). So the idea was to create a second AudioBuffer (called wetBuffer) that should contain the delayed feedback signal (wet), which is then summed with the dry signal from buffer. Here is my code trying to do that. I would really appreciate it if you could help me here.

#include "PluginProcessor.h"
#include "PluginEditor.h"

DelayAudioProcessor::DelayAudioProcessor()
#ifndef JucePlugin_PreferredChannelConfigurations
: AudioProcessor (BusesProperties()
#if ! JucePlugin_IsMidiEffect
#if ! JucePlugin_IsSynth
.withInput ("Input", AudioChannelSet::stereo(), true)
#endif
.withOutput ("Output", AudioChannelSet::stereo(), true)
#endif
)
#endif
{

addParameter (mDryWetParameter = new AudioParameterFloat ("drywet", // parameter ID
                                                        "Dry Wet", // parameter name
                                                        0.0f,   // minimum value
                                                        1.0f,   // maximum value
                                                        0.5f)); // default value

addParameter (mFeedbackParameter = new AudioParameterFloat ("feedback", // parameter ID
                                                            "Feedback", // parameter name
                                                            0.0f,   // minimum value
                                                            0.98f,   // maximum value
                                                            0.5f)); // default value


addParameter (mTimeParameter = new AudioParameterFloat ("delaytime", // parameter ID
                                                        "Delay Time", // parameter name
                                                        0.1f,   // minimum value
                                                        MAX_DELAY_TIME,   // maximum value
                                                        0.5f)); // default value

/*check constructor for parameter*/
addParameter (mNoteParameter = new AudioParameterInt ("delayNotetime", // parameter ID
                                                        "Delay NoteTime", // parameter name
                                                        1,   // minimum value
                                                        7,   // maximum value
                                                        2)); // default value

mTimeSmoothed = 0;
/* we set nullptr because we don't know the sample rate yet, and are not ready to instantiate the audio data / how big the buffer is */

mCircularBufferLeft = nullptr;
mCircularBufferRight = nullptr;



mCircularBufferWriteHead = 0;
mCircularBufferLength = 0;

mDelayTimesInSamples = 0;
mDelayReadHead = 0;

mFeedBackLeft = 0;
mFeedBackRight = 0;

mLFOPhase = 0;

/* For testing. Later to be removed, once the GUI is linked to the parameter */
auto& filter = processorChain.get<filterIndex>();
filter.setMode(dsp::LadderFilter<float>::Mode::LPF24 );
filter.setCutoffFrequencyHz (100.0f);
filter.setResonance (0.0f);

}

DelayAudioProcessor::~DelayAudioProcessor()
{

}

const String DelayAudioProcessor::getName() const
{
return JucePlugin_Name;
}

bool DelayAudioProcessor::acceptsMidi() const
{
#if JucePlugin_WantsMidiInput
return true;
#else
return false;
#endif
}

bool DelayAudioProcessor::producesMidi() const
{
#if JucePlugin_ProducesMidiOutput
return true;
#else
return false;
#endif
}

bool DelayAudioProcessor::isMidiEffect() const
{
#if JucePlugin_IsMidiEffect
return true;
#else
return false;
#endif
}

double DelayAudioProcessor::getTailLengthSeconds() const
{
return 0.0;
}

int DelayAudioProcessor::getNumPrograms()
{
return 1; // NB: some hosts don't cope very well if you tell them there are 0 programs,
// so this should be at least 1, even if you're not really implementing programs.
}

int DelayAudioProcessor::getCurrentProgram()
{
return 0;
}

void DelayAudioProcessor::setCurrentProgram (int index)
{
}

const String DelayAudioProcessor::getProgramName (int index)
{
return {};
}

void DelayAudioProcessor::changeProgramName (int index, const String& newName)
{
}

void DelayAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{

/* the delay time in samples: how far behind the write head we read, to add to the original audio buffer signal */
mDelayTimesInSamples = sampleRate * *mTimeParameter;

/* the time in samples we need to store */
mCircularBufferLength = sampleRate * MAX_DELAY_TIME;



/* Here we check for nullptr, then create our floating-point arrays at the correct length for our buffers */
if (mCircularBufferLeft == nullptr )
{
    mCircularBufferLeft.reset( new float[(int)mCircularBufferLength]);
}

zeromem(mCircularBufferLeft.get(), mCircularBufferLength * sizeof(float));

if (mCircularBufferRight == nullptr )
{
    mCircularBufferRight.reset(new float[(int)mCircularBufferLength]);
}

zeromem(mCircularBufferRight.get(), mCircularBufferLength * sizeof(float));


mCircularBufferWriteHead = 0;


mTimeSmoothed = *mTimeParameter;   // data from the delay time (GUI slider) knob

mLFOPhase = 0;




/* we create a ProcessSpec */
dsp::ProcessSpec spec;
spec.sampleRate = sampleRate;
spec.maximumBlockSize = uint32 (samplesPerBlock);
spec.numChannels = uint32 (getTotalNumOutputChannels ());

/*need to call prepare() with above spec */
processorChain.prepare(spec);

}

void DelayAudioProcessor::releaseResources()
{
// When playback stops, you can use this as an opportunity to free up any
// spare memory, etc.

/*need to call reset */
processorChain.reset();

}

#ifndef JucePlugin_PreferredChannelConfigurations
bool DelayAudioProcessor::isBusesLayoutSupported (const BusesLayout& layouts) const
{
#if JucePlugin_IsMidiEffect
ignoreUnused (layouts);
return true;
#else
// This is the place where you check if the layout is supported.
// In this template code we only support mono or stereo.
if (layouts.getMainOutputChannelSet() != AudioChannelSet::mono()
&& layouts.getMainOutputChannelSet() != AudioChannelSet::stereo())
return false;

// This checks if the input layout matches the output layout

#if ! JucePlugin_IsSynth
if (layouts.getMainOutputChannelSet() != layouts.getMainInputChannelSet())
return false;
#endif

return true;

#endif
}
#endif

void DelayAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{

/* We create an AudioBuffer to contain the filtered feedback (wet) signal */
AudioBuffer<float> wetBuffer (2, buffer.getNumSamples());
wetBuffer.clear();

ScopedNoDenormals noDenormals;
auto totalNumInputChannels  = getTotalNumInputChannels();
auto totalNumOutputChannels = getTotalNumOutputChannels();


for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
{
    buffer.clear (i, 0, buffer.getNumSamples());
}


/* setting up the ProcessorChain */
auto block = juce::dsp::AudioBlock<float> (wetBuffer);
auto blockToUse = block.getSubBlock(0, wetBuffer.getNumSamples());
auto contextToUse = juce::dsp::ProcessContextReplacing<float> (blockToUse);
processorChain.process (contextToUse);

/* get pointers to access the buffer */
float* leftChannel = buffer.getWritePointer(0);
float* RightChannel = buffer.getWritePointer(1);




/* get a pointer to the DAW's AudioPlayHead */
AudioPlayHead* phead = getPlayHead();
AudioPlayHead::CurrentPositionInfo playposinfo;

if (phead != nullptr)
{
    phead->getCurrentPosition(playposinfo);
}


/* Iterate through all samples in the audio buffer, and process the buffer data in the loop */
for (int i = 0; i < buffer.getNumSamples(); i++)
{
    
    
    /* Normal delay: we store data in the circular buffer + add the feedback data back, see below. See diagram in notes */
    //mCircularBufferLeft.get()[mCircularBufferWriteHead] = leftChannel[i] + mFeedBackLeft;
    //mCircularBufferRight.get()[mCircularBufferWriteHead] = RightChannel[i] + mFeedBackRight;

    /* add ping pong */
    mCircularBufferLeft.get() [mCircularBufferWriteHead] = leftChannel[i] + RightChannel[i] + mFeedBackRight;
    mCircularBufferRight.get()[mCircularBufferWriteHead] = mFeedBackLeft;
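
    /* Note: the cross-feeding above is what creates the ping-pong effect:
       the left write gets the summed input plus the right channel's feedback,
       while the right write only receives the left channel's delayed signal,
       so successive echoes alternate between the two channels. */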

    
    
    /* we track the BPM from the DAW and use it to set mDelayTimesInSamples, based on the note value */
    float beatPrSecond = playposinfo.bpm / 60;
    float sampleTimeOneBeat = getSampleRate() / beatPrSecond;
    mDelayTimesInSamples = sampleTimeOneBeat /  *mNoteParameter;
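
    /* Worked example (added): at 120 BPM and 44.1 kHz, 120 / 60 = 2 beats per
       second, so one beat lasts 44100 / 2 = 22050 samples; with mNoteParameter
       set to 2 the delay becomes 11025 samples, i.e. 0.25 s (an eighth note). */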
    
    
    
    /* we find the (delayed) data at the position/index in the buffer that we want to read, behind the write head (mCircularBufferWriteHead), to sum with the original buffer data */
    mDelayReadHead = mCircularBufferWriteHead - mDelayTimesInSamples;
   
    
    

    /* we check if mDelayReadHead gets less than zero. If so, we wrap around to the other end of the buffer */
    if(mDelayReadHead < 0)
    {
        
        mDelayReadHead += mCircularBufferLength;
    }
    
    
    
    /* here we set up the values to pass to lin_interp() */
    int readHead_x = (int)mDelayReadHead; // int is needed for mCircularBuffer array index access
    int readHead_x1 = readHead_x + 1;
    /* the fractional remainder in the interval between x and x1, where we want to compute an interpolated value */
    float readHeadFloat = mDelayReadHead - readHead_x;
    
    /*If we exceed our buffer*/
    if(readHead_x1 >= mCircularBufferLength)
    {
        readHead_x1 -= mCircularBufferLength;
    }
    
    

    
    
    /* the interpolated delayed output data we want to sum with the original audio buffer: it is scaled into the feedback (written back into the circular buffer above on the next iteration) and finally summed below with the original buffer */
    float delay_sample_left = lin_interp(mCircularBufferLeft.get()[readHead_x], mCircularBufferLeft.get()[readHead_x1], readHeadFloat);

    float delay_sample_right = lin_interp(mCircularBufferRight.get()[readHead_x], mCircularBufferRight.get()[readHead_x1], readHeadFloat);
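
    /* For reference (assumed, as lin_interp is defined elsewhere in this
       project): the usual linear-interpolation helper would be something like
       lin_interp(x, x1, frac) = (1.0f - frac) * x + frac * x1. */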
    
    
    
    
    /* here we scale our feedback audio data (* mFeedbackParameter), avoiding an overloaded volume in the DAW */
    mFeedBackLeft = delay_sample_left * *mFeedbackParameter;
    mFeedBackRight = delay_sample_right * *mFeedbackParameter;

    /* here we add the delay/feedback to the wetBuffer */
    wetBuffer.addSample(0, i, delay_sample_left);
    wetBuffer.addSample(1, i, delay_sample_right);

    // blockToUse.addSample(0, i, mFeedBackLeft);
    // blockToUse.addSample(1, i, mFeedBackRight);

    /* here we sum the delayed signal with the original buffer data.
       If DryWet is 0, only the dry (original) signal is heard */
    buffer.setSample(0, i, buffer.getSample(0, i) * (1 - *mDryWetParameter) +
                     wetBuffer.getSample(0, i) * *mDryWetParameter );
    buffer.setSample(1, i, buffer.getSample(1, i) * (1 - *mDryWetParameter) +
                     wetBuffer.getSample(1, i) * *mDryWetParameter );
    
    
    
    mCircularBufferWriteHead++;  // we increment the write head

    /* if we reach the end of the array, we set the write head back to the beginning - the circular buffer concept */
    if(mCircularBufferWriteHead >= mCircularBufferLength)
    {
        mCircularBufferWriteHead = 0;
    }
    
                   
}

}

PluginProcessor.cpp (14.1 KB)

Hi @MKauf,

The formatting of the code is broken, so it is hard to read. For now I can try to explain the code that you do not fully understand.

/* setting up the ProcessorChain */

Here an AudioBlock representation of buffer is created:

auto block = juce::dsp::AudioBlock<float> (buffer);

The next statement creates an AudioBlock which is a sub-block of buffer. But because the full range of samples is given, <0, buffer.getNumSamples()>, IMHO it points to the same data as block and could be deleted.

auto blockToUse = block.getSubBlock(0, buffer.getNumSamples());

Then a context object is prepared, so it can be passed to processorChain.

auto contextToUse = juce::dsp::ProcessContextReplacing<float> (blockToUse);
processorChain.process (contextToUse);
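
Put differently, since the sub-block spans the whole buffer anyway, those three statements collapse to this minimal equivalent (a sketch, assuming a float-based chain):

	// same effect, without the redundant sub-block
	auto block = juce::dsp::AudioBlock<float> (buffer);
	auto context = juce::dsp::ProcessContextReplacing<float> (block);
	processorChain.process (context);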

Unfortunately I cannot check more right now, so I will take a shot at where the problem could be. I would filter the block of audio after the delay, so the first delayed signal would be untouched. As far as I can see, LadderFilter does not have a public method for processing single samples, so it would need an additional buffer.

One more note: you are using the two-argument constructor of AudioBuffer in your processBlock. You should avoid allocations in this method. You can create the buffer in your prepareToPlay and then reuse it.
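
A minimal sketch of that idea (assuming wetBuffer becomes a member of the processor):

	// in prepareToPlay: allocate once, at the maximum block size the host reports
	wetBuffer.setSize (2, samplesPerBlock);

	// in processBlock: just clear and reuse, no allocation on the audio thread
	wetBuffer.clear();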

Please try to fix the formatting. I am somewhat interested in where this problem is, so if you or somebody else do not find it, I will try to. :slight_smile:

Best regards,
Mateusz

Thanks for your reply Mateusz. This is my first time posting here, so I'm not sure how to fix the formatting. I've uploaded the .cpp file instead; hope that helps. I will create the AudioBuffer in prepareToPlay like you said. The constructor might also be a good place to put it. Best regards, Michael

Does it produce silence, or is there some signal, but with frequencies below 100 Hz?

It looks like buffer (input) is cleared before reading.

	for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
	{
		buffer.clear(i, 0, buffer.getNumSamples());
	}

Then wetBuffer is filtered by the LadderFilter. But this buffer is empty IMHO.

	/* setting up the ProcessorChain */
	auto block = juce::dsp::AudioBlock<float>(wetBuffer);
	auto blockToUse = block.getSubBlock(0, wetBuffer.getNumSamples());
	auto contextToUse = juce::dsp::ProcessContextReplacing<float>(blockToUse);
	processorChain.process(contextToUse);

Then the delay buffer is filled, but IMHO with silence:

	float* leftChannel = buffer.getWritePointer(0);
	float* RightChannel = buffer.getWritePointer(1);
...
	for (int i = 0; i < buffer.getNumSamples(); i++)
	{
...
		mCircularBufferLeft.get()[mCircularBufferWriteHead] = leftChannel[i] + RightChannel[i] + mFeedBackRight;
		mCircularBufferRight.get()[mCircularBufferWriteHead] = mFeedBackLeft;

Then wetBuffer is filled:

		wetBuffer.addSample(0, i, delay_sample_left);
		wetBuffer.addSample(1, i, delay_sample_right);

Somewhere here I would try to move the filtering code. But outside the loop, so the wet buffer would be fully ready.

And at the end, the output is built:

		buffer.setSample(0, i, buffer.getSample(0, i) * (1 - *mDryWetParameter) +
			wetBuffer.getSample(0, i) * *mDryWetParameter);
		buffer.setSample(1, i, buffer.getSample(1, i) * (1 - *mDryWetParameter) +
			wetBuffer.getSample(1, i) * *mDryWetParameter);

Or am I missing something?
Sorry, I did not find much time to check on my own whether my understanding is correct. And I am afraid that I will not be able to check more until Tuesday. :frowning:

Have a nice weekend. :smiley:

Does it produce silence, or is there some signal, but with frequencies below 100 Hz?
No silence, but the full unfiltered signal, containing both the dry and the delayed feedback signal.

This .cpp file PluginProcessor_All Filtered.cpp (14.1 KB)
shows my first implementation, which passes buffer (not wetBuffer) to the processorChain. There the full signal is filtered. It works fine.

When using the wetBuffer (the AudioBuffer needed for the delayed feedback signal), it looks like the wetBuffer only gets the feedback delay samples but is never processed by the processorChain. I’ve tried using blockToUse.addSample and blockToUse.getSample (in the summing code) instead, but with the same result.

It looks like buffer (input) is cleared before reading.
Yes, it is a standard thing JUCE adds to your code when creating an audio plugin project. In PluginProcessor_All Filtered.cpp it works fine, so it should be OK that way.
JUCE writes this:
// In case we have more outputs than inputs, this code clears any output
// channels that didn't contain input data, (because these aren't
// guaranteed to be empty - they may contain garbage).
// This is here to avoid people getting screaming feedback
// when they first compile a plugin, but obviously you don't need to keep
// this code if your algorithm always overwrites all the output channels.

Then wetBuffer is filtered by the LadderFilter. But this buffer is empty IMHO.
Yes, but wetBuffer is filled later in the code (wetBuffer.addSample). In PluginProcessor_All Filtered.cpp this works fine, although there I don't use a second buffer to add/get samples; I simply add the feedback delay per sample (in the loop) to buffer (summing dry and wet).

Then the delay buffer is filled, but IMHO with silence?
No, it gets samples from the incoming buffer (processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)) + the feedback delay (initially 0 entering the loop). The idea is that we read/write a specific sample time behind the original signal in the circular buffer, then sum it with the dry signal later in the code.

Then wetBuffer is filled:
Yes.

Somewhere here I would try to move the filtering code. But outside the loop, so the wet buffer would be fully ready.
I don't know where to put it. It needs to stay in sync per sample with the dry signal (buffer), otherwise the summing of dry and wet will not work.

And at the end, the output is built:
Yes

I need to understand this: the audio thread can only send audio out to the DAW through one single audio buffer, right? Not simultaneously streaming from two buffers. Maybe using two buffers creates a conflict: the ProcessorChain tries to send audio out on the audio thread, and so does the incoming buffer. In PluginProcessor_All Filtered.cpp it worked fine because there was only the incoming buffer. The ProcessorChain should only apply the filter to a sample, and that sample should then be summed with the dry signal in buffer, which goes out on the audio thread.

Don’t worry about time - I’m just glad someone will help.

Have a nice weekend :smiley:

Hi @MKauf,

About

It looks like buffer (input) is cleared before reading.

I was blind :stuck_out_tongue:. Sorry, I thought that i was initialized to 0. Forget it, please.

The filtering happens when processorChain.process (contextToUse); is invoked. When wetBuffer.addSample(...) is invoked after that, it overwrites the content of the buffer.

IMHO the only way to do it is to write two separate for loops: the first for filling the wetBuffer; then do the filtering by invoking processorChain.process(contextToUse); and at the end a loop for summing both buffers.
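
A rough sketch of that structure, reusing the names from your code (just the shape of it, not a drop-in replacement):

	// Loop 1: run the delay line and collect only the wet signal
	for (int i = 0; i < buffer.getNumSamples(); ++i)
	{
		// ... circular-buffer write, read heads, interpolation and feedback as before ...
		wetBuffer.setSample (0, i, delay_sample_left);
		wetBuffer.setSample (1, i, delay_sample_right);
	}

	// Filter only the wet buffer, once per block
	juce::dsp::AudioBlock<float> wetBlock (wetBuffer);
	juce::dsp::ProcessContextReplacing<float> wetContext (wetBlock);
	processorChain.process (wetContext);

	// Loop 2: mix the dry input with the now-filtered wet signal
	for (int i = 0; i < buffer.getNumSamples(); ++i)
	{
		buffer.setSample (0, i, buffer.getSample (0, i) * (1.0f - *mDryWetParameter)
		                      + wetBuffer.getSample (0, i) * *mDryWetParameter);
		buffer.setSample (1, i, buffer.getSample (1, i) * (1.0f - *mDryWetParameter)
		                      + wetBuffer.getSample (1, i) * *mDryWetParameter);
	}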

I wrote a tape-like delay, and its "main" process method uses a lot of loops and looks like this:

	void process(const dsp::ProcessContextReplacing<float>& context) override
	{
		auto& inputBlock = context.getInputBlock();
		auto& outputBlock = context.getOutputBlock();

		inputBlock.copyTo(tmpInBuffer); // (1)

		AudioBuffer<SampleType> tmpDelayBufferSubBuffer{ tmpDelayBuffer.getArrayOfWritePointers(), (int)inputBlock.getNumChannels(), (int)inputBlock.getNumSamples() };
		tapeHead.copyNextBufferTo(tmpDelayBufferSubBuffer); // (2)

		dsp::AudioBlock<float> tmpDelayBlock(tmpDelayBufferSubBuffer);
		dsp::ProcessContextReplacing tmpDelayContext{ tmpDelayBlock };

		feedbackProcessor.process(tmpDelayContext); // (3)

		outputBlock.copy(tmpDelayBufferSubBuffer); // (4)

		feedbackGain.process(tmpDelayContext); // (5)

		tapeHead.copyNextBufferFrom(tmpInBuffer, tmpDelayBufferSubBuffer); // (6)
	}

(1) I must make a copy of the input, because this code uses ProcessContextReplacing, so the input and output buffers point to the same data.

(2) I need to prepare a sub-buffer, because some DAWs send a shorter buffer than was promised in prepareToPlay, for example FL Studio (see the sketch after this list).

(3) Here filtering is done.

(4) Writing output of a plugin.

(5) Applying the wet gain. It happens here because I want to have one delayed signal even when the wet gain is set to 0.

(6) At the end I put input and processed wet signals to the delay.
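
As a small aside on (2), the same trimming can also be done at the block level with getSubBlock; a sketch, where the names are assumed rather than taken from my code:

	// names assumed: wetBuffer preallocated in prepareToPlay,
	// numSamplesFromHost taken from the incoming buffer/block
	auto wetBlock = juce::dsp::AudioBlock<float> (wetBuffer)
	                    .getSubBlock (0, (size_t) numSamplesFromHost);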

Is it more efficient than processing sample by sample? Hmm, I do not know. I had a more compact implementation, but I do not feel a difference when using the delay. Maybe I will change it someday. In a profiler there are other hot spots, for example my interpolation code :stuck_out_tongue:

Have fun with coding. :slight_smile: And let me know, especially, if you find that it could be done better :smiley:

Best regards.
Mateusz


Hi @mateusz

Thanks again for helping me - much appreciated :smiley:

I have solved it. The key was what you wrote about overwriting the buffer and using a second loop. I use the first for loop to fill the wetBuffer. After that loop I set up the ProcessorChain. Then after that I run a for loop that sums the dry and wet buffers. And it works :smile: I also initialized the wetBuffer in prepareToPlay like you suggested. You can try it out yourself at my GitHub if you like (the .zip file contains all the JUCE project files).

https://github.com/kauffmann/Delay-Plugin

I’m still a newbie, so I'm not sure I can answer that. Can I see your full code, maybe a GitHub thing?

One more thing: I changed it to an HPF. It's more relevant.

Case is solved :smiley:

Best regards
Michael

Good to hear :smiley:

The code of my delay is not public. There is a lot of mess, which I plan to clean up before making it public. But I put a compiled version (64-bit VST3 for Windows) here: https://digitalsteam.pl. The web page is not finished either :stuck_out_tongue:

Best regards,
Mateusz

Thanks :grinning: