How to correctly handle DSP processing for a custom processor?

Hi, I’m trying to create a Gaussian filter implementation. I’ve managed to create a custom DSP processor that handles the audio blocks and applies a Gaussian filter to them (similar to a high-cut filter, but with a different approach). I perform a 1D convolution between the input signal and my Gaussian curve, then overwrite the output buffer with the convolved signal.

The filter is working, as I can hear it when testing in the AudioPluginHost. However, there are crackling artefacts, and they get worse as I turn my Gaussian parameters up.

I’m pretty new to audio programming, but I have seen the DF1/DF2 and TDF1 topics, and looking at the JUCE IIRFilter I can clearly see I’m missing something like this.

My problem is that I can’t figure out how to adapt DF1 (or any other filter structure) to my process.
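
For reference, this is roughly what I understand a Direct Form I biquad to look like (just a sketch I wrote while reading up on it, not something in my plugin; the struct and coefficient names are placeholders). The x/y history lives in the filter and carries over from sample to sample and from block to block, which is exactly what my block-by-block convolution doesn’t do:

// Sketch of a Direct Form I biquad as I understand it (not part of my processor).
// b0..b2 / a1..a2 are assumed to already be normalised coefficients (a0 == 1).
struct DF1Biquad
{
	float b0 = 1.0f, b1 = 0.0f, b2 = 0.0f, a1 = 0.0f, a2 = 0.0f;
	float x1 = 0.0f, x2 = 0.0f, y1 = 0.0f, y2 = 0.0f; // per-channel state

	void processBlock (float* samples, int numSamples) noexcept
	{
		for (int i = 0; i < numSamples; ++i)
		{
			const float x = samples[i];
			const float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;

			x2 = x1; x1 = x;   // shift input history
			y2 = y1; y1 = y;   // shift output history
			samples[i] = y;
		}
	}
};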

Here is my processor class.

#pragma once
#include <JuceHeader.h>
#include <cmath>

class GaussianProcessor : public juce::dsp::ProcessorBase
{
public:
	struct Parameters {
		float gaussianSigma = 0.2f;
	};

	const Parameters& getParameters() const noexcept { return parameters; }

	void setParameters(const Parameters& newParams) {
		this->parameters = newParams;
		this->sigma = newParams.gaussianSigma;
		this->setGaussian();
	}

	void prepare(const juce::dsp::ProcessSpec &specs) override {
		this->numSamples = (int) specs.maximumBlockSize;
		this->setGaussian();
	}

	void process(const juce::dsp::ProcessContextReplacing<float> &context) override
	{
		auto&& inBlock = context.getInputBlock();
		auto&& outBlock = context.getOutputBlock();
		auto numSamples = inBlock.getNumSamples();
		auto numChannels = inBlock.getNumChannels();

		jassert(inBlock.getNumChannels() == outBlock.getNumChannels());
		jassert(inBlock.getNumSamples() == outBlock.getNumSamples());

		outBlock.copyFrom(inBlock);

		if (numChannels == 1 && outBlock.getNumChannels() == 1) {
			this->processMono(outBlock.getChannelPointer(0), (int)numSamples);
		} else if (numChannels == 2 && outBlock.getNumChannels() == 2) {
			this->processStereo(
				outBlock.getChannelPointer(0),
				outBlock.getChannelPointer(1),
				(int)numSamples
			);
		}
		else {
			jassertfalse;
		}
	}

	void reset() override {
		this->sigma = this->defaultSigma;
		this->setGaussian();
	}

private:
	void processMono(float* const samples, const int numSamples) {
		std::vector<float> srcSignal{ samples, samples + numSamples };
		std::vector<float> fft = this->computeFFT(srcSignal);

		for (int i = 0; i < numSamples; i++) {
			samples[i] = fft[i];
		}
	}

	void processStereo(float* const left, float* const right, const int numSamples) {
		jassert(left != nullptr && right != nullptr);
		std::vector<float> srcSignalL{ left, left + numSamples };
		std::vector<float> srcSignalR{ right, right + numSamples };
		std::vector<float> fftL = this->computeFFT(srcSignalL);
		std::vector<float> fftR = this->computeFFT(srcSignalR);

		for (int i = 0; i < numSamples; ++i) {
			// biquad filter / One pole filter ?
			// How to make it work without crackling artefacts ?
			// Missing something here
			left[i] = fftL[i];
			right[i] = fftR[i];
		}
	}

	/*
		Direct (time-domain) convolution of the signal with the Gaussian distribution
	*/
	std::vector<float> computeFFT(std::vector<float> signal) {
		size_t signalSize = signal.size();
		size_t gaussianSize = this->gaussianDistribution.size();
		std::vector<float> fft(signalSize + gaussianSize - 1, 0.0f);

		for (size_t i = 0; i < signalSize; i++) {
			for (size_t j = 0; j < gaussianSize; j++) {
				fft[i + j] += signal[i] * this->gaussianDistribution[j];
			}
		}
		return this->extractSignal(fft);
	}

	std::vector<float> extractSignal(std::vector<float> signal) {
		std::vector<float>::const_iterator first = signal.begin() + (this->numSamples / 2);
		std::vector<float>::const_iterator last = signal.begin() + (this->numSamples / 2) + this->numSamples;
		std::vector<float> extractedSignal(first, last);

		return extractedSignal;
	}
 
	void generateGaussianFilter(float sigma, float alpha) {
		auto computeGx = [](int x, float sigma, float alpha) {
			return (1.0f / (sigma * std::sqrt(2.0f * juce::MathConstants<float>::pi)))
				* std::exp(-std::pow((float) x - alpha, 2.0f) / (2.0f * std::pow(sigma, 2.0f)));
		};
		auto splitSamples = this->numSamples / 2;

		for (int i = 0; i < this->numSamples; i++) {
			auto x = i - splitSamples;
			this->gaussianDistribution.push_back(computeGx(x, sigma, alpha));
		}
	}

	void setGaussian() {
		this->gaussianDistribution.clear();
		this->generateGaussianFilter(
			this->sigma,
			this->alpha
		);
	}


private:
	Parameters parameters;

	std::vector<float> gaussianDistribution;
	int numSamples = 0;

	const float alpha = 0.0f;
	const float defaultSigma = 0.2f;

	// Modulable sigma
	float sigma = defaultSigma;
};

My process block is fairly simple:

void GainExciterAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages)
{
    juce::ScopedNoDenormals noDenormals;
    auto totalNumInputChannels  = getTotalNumInputChannels();
    auto totalNumOutputChannels = getTotalNumOutputChannels();

    for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    auto chainSettings = getChainSettings(apvts);
    processorChain.get<PPBlur>().setParameters(chainSettings.blurParameters);

    juce::dsp::AudioBlock<float> block(buffer);
    juce::dsp::ProcessContextReplacing<float> context(block);

    processorChain.process(context);
}

Thanks in advance,

I didn’t look at your code in detail, but if your filter architecture is based on convolution you should try FIR filters instead of IIR ones. IIR is when some of the already-processed samples feed back into the signal.
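
As an untested sketch of what that could look like in JUCE: juce::dsp::FIR::Filter keeps its delay line between blocks, so you can hand it your Gaussian kernel as the taps instead of convolving each block in isolation (names like gaussianKernel and prepareGaussianFir are placeholders, not from your code):

// Hedged sketch: one FIR filter per channel via ProcessorDuplicator, taps = Gaussian kernel.
using GaussianFIR = juce::dsp::ProcessorDuplicator<juce::dsp::FIR::Filter<float>,
                                                   juce::dsp::FIR::Coefficients<float>>;
GaussianFIR gaussianFir;

void prepareGaussianFir (const juce::dsp::ProcessSpec& spec,
                         const std::vector<float>& gaussianKernel)
{
    // The kernel is assumed to be normalised (taps summing to ~1) so the overall gain stays sane.
    *gaussianFir.state = juce::dsp::FIR::Coefficients<float> (gaussianKernel.data(),
                                                              gaussianKernel.size());
    gaussianFir.prepare (spec);
}

void processGaussianFir (const juce::dsp::ProcessContextReplacing<float>& context)
{
    gaussianFir.process (context);   // state carries over, so no per-block edge artefacts
}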

For FFT processing it is advisable to use a dedicated thread, since you need to fill a buffer for the FFT analysis which differs in size from the audio buffer. There is an example of a spectrum analyser that uses this approach. Hope this helps.
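
The rough idea (my own sketch, loosely modelled on that demo; the FFT size and the names fifo, fftData and pushNextSampleIntoFifo are just placeholders): the audio thread only pushes samples into a fixed-size FIFO, and the actual FFT runs elsewhere once the FIFO is full:

// Sketch: decouple FFT analysis from the audio callback (lives inside the processor).
static constexpr int fftOrder = 10;            // 2^10 = 1024-point FFT
static constexpr int fftSize  = 1 << fftOrder;

std::array<float, fftSize>     fifo {};
std::array<float, fftSize * 2> fftData {};
int fifoIndex = 0;
std::atomic<bool> nextFFTBlockReady { false };

void pushNextSampleIntoFifo (float sample) noexcept   // called from processBlock
{
    if (fifoIndex == fftSize)                          // FIFO full: hand it over
    {
        if (! nextFFTBlockReady.load())
        {
            std::copy (fifo.begin(), fifo.end(), fftData.begin());
            nextFFTBlockReady.store (true);
        }
        fifoIndex = 0;
    }

    fifo[(size_t) fifoIndex++] = sample;
}

// On a timer / background thread:
//     if (nextFFTBlockReady) { forwardFFT.performFrequencyOnlyForwardTransform (fftData.data());
//                              nextFFTBlockReady = false; }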

Thanks, I’ll look at it. Do you know where I can find this threaded example?

It’s in JUCE’s examples\Audio folder. The snippet should be SimpleFFTDemo.h.
