Crackly, distorted output

I don’t understand why this is happening, but I’m getting a lot of crackly distortion from my filter. The same logic running in a WDL-OL VST works fine. I’ve tried many different things to fix it, but nothing works. If someone could take a look I’d really appreciate it. I’m using this filter:

http://www.martin-finke.de/blog/articles/audio-plugins-013-filter/

Looks like you are sharing just one filter across multiple channels. You’ll need a filter for each channel.

Hey, thanks for the reply. I thought I was processing the data in all channels:

    int numSamples = buffer.getNumSamples();
    for (int channel = 0; channel < totalNumInputChannels; ++channel)
    {
        float* channelData = buffer.getWritePointer(channel);

        for (int i = 0; i < numSamples; i++)
        {
            filter.setCutoff(b01);
            channelData[i] = filter.process(channelData[i]);
        }
    }

What would I need to do differently?

I figured out what you mean: a filterL for the left channel and a filterR for the right channel. I tried that, and it unexpectedly caused the left channel to get quieter and quieter as I moved the fader down. However, that idea led me to the solution: rather than looping through each channel independently, I process both channels at the same time in the same loop. This solved my problem.

	int numSamples = buffer.getNumSamples();
	float* channelDataL = buffer.getWritePointer(0);
	float* channelDataR = buffer.getWritePointer(1);

	for (int i = 0; i < numSamples; i++)
	{
		filter.setCutoff(b01);
		channelDataL[i] = filter.process(channelDataL[i]);
		channelDataR[i] = filter.process(channelDataR[i]);
	}

You’re still not doing it correctly. He means you’re using one filter instance for both channels. Think of the channels as separate signals, each needing its own filter. When you feed data into a filter, it has internal state ‘keeping up to date’ with the wave data. So if you feed different waveforms into it, you mess up the time continuity inside the filter (often resulting in crackling).

To put it simply: yes, you need a filterL and a filterR. If you test your ‘solution’ with a stereo input, you’ll notice the crackling and distortion depending on how different the two channels are. I have no idea what your fader does.
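
To make the “internal state” point concrete, here is a minimal one-pole lowpass sketch (illustrative only, not the tutorial filter): its single state variable depends on every sample it has processed so far, so pushing left and right samples alternately through the same instance corrupts that history.

    // Illustrative one-pole lowpass, not the actual tutorial filter.
    struct OnePole
    {
        float z1 = 0.0f;   // internal state: the previous output sample
        float a  = 0.1f;   // smoothing coefficient derived from the cutoff

        float process (float x)
        {
            z1 += a * (x - z1);   // new output depends on the stored history
            return z1;
        }
    };

    OnePole filterL, filterR;   // one stateful instance per channel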

Thanks for testing my code. The fader just sets the cutoff value for the filter.

I think I messed up the earlier attempt, which is why I lost audio in the left ear when the cutoff fader was down: I forgot to set the cutoff on both filters. This is what I have now:

	int numSamples = buffer.getNumSamples();
	float* channelDataL = buffer.getWritePointer(0);
	float* channelDataR = buffer.getWritePointer(1);

	for (int i = 0; i < numSamples; i++)
	{
		filterL.setCutoff(*filter1);
		filterR.setCutoff(*filter1);
		channelDataL[i] = filterL.process(channelDataL[i]);
		channelDataR[i] = filterR.process(channelDataR[i]);
	}

Tested it and it sounds good, although I’m probably not testing it properly if you found a problem with the last solution, because that one sounded fine to me too. I loaded up a stereo file, ran it through the filter, and it sounded fine. I guess I’m not understanding something. What do you mean by “stereo difference”?

Three side notes (a short sketch combining them follows below):
1.) The cutoff frequency won’t change during the process block, so you can (and should) move the filter.setCutoff() call outside the sample loop.

2.) Although the code might be easier to read with both filters in one loop, for the processor it is better to compute it channel by channel. The reason is so-called cache locality: many reads and writes from one address to the next are faster than alternating between the left and the right channel. Eventually the optimizer may sort that out for you, but it is better to write the code the way you want it to be processed.

Timur explains that here (youtube).

3.) Also, iterating over the channels makes the code cope with mono, stereo and other formats, and there is a good reason (see 2) to have the channels in the outer loop.
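
Putting these notes together, a rough sketch of the processBlock loop (assuming a per-channel container of filters, here simply called filters, with the same setCutoff()/process() interface as in your posts) might look like:

    // Sketch only: cutoff set once per block (note 1), channels in the
    // outer loop (notes 2 and 3), one filter instance per channel.
    // 'filters' is an assumed per-channel container set up elsewhere.
    const int numSamples  = buffer.getNumSamples();
    const int numChannels = buffer.getNumChannels();

    for (int channel = 0; channel < numChannels; ++channel)
    {
        float* channelData = buffer.getWritePointer (channel);

        filters[channel].setCutoff (*filter1);   // once per block is enough

        for (int i = 0; i < numSamples; ++i)
            channelData[i] = filters[channel].process (channelData[i]);
    }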

maybe still of interest:

HTH

Thanks, that is useful info, although I’m not sure how to go about it. I need to have one filter per channel and they need to persist. Is there a way to know how many channels there are ahead of time, other than prepareToPlay()? It doesn’t make much sense to instantiate my filters inside of prepareToPlay(). I would just be creating/destroying a bunch of filter objects every time I entered that function. It also doesn’t seem to make much sense to guess at how many filters I need (20, 50, 100, 1000?) and resize the array in prepareToPlay() either.

In Reaper, I can see that my plugin says “2 in 2 out”, which means it only supports 2 input channels, right? How do I make it support more? What is the maximum number of real-world channels I should support? If I knew ahead of time, I think it would be easier.

Should I just make 7 filters for 7 channel surround? Or do I also need to account for some weird case where I could have 100 channels of input?

That kind of setup task is exactly why prepareToPlay exists!

No, not even the host knows that. It will instantiate your plugin with the default constructor and then negotiate the actual setup.
All methods the host uses to query the behaviour of your plugin are virtual, so an instance must exist before any of that can happen. An exception is AudioProcessor::containsLayout(). So you can limit your plugin to certain layouts, but you cannot foresee which one will be chosen.

After that negotiation, prepareToPlay is called.

Have a look at AudioChannelSet; there you can see all the various formats that JUCE is aware of. You don’t need to support all of them: you can define which ones you will handle, and the host will hide your plugin if its layout doesn’t fit.

If you are starting with musical plugins you probably only want to support mono and stereo. But that’s all up to you.
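
For example, a minimal sketch of restricting a plugin to mono or stereo, using the isBusesLayoutSupported() override from current JUCE (adapt it to whatever your base class offers), could be:

    // Sketch: accept only mono or stereo, and require matching in/out layouts.
    bool isBusesLayoutSupported (const BusesLayout& layouts) const override
    {
        const auto mainOut = layouts.getMainOutputChannelSet();

        if (mainOut != juce::AudioChannelSet::mono()
             && mainOut != juce::AudioChannelSet::stereo())
            return false;

        return layouts.getMainInputChannelSet() == mainOut;
    }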

And as jules already answered, you cannot know the actual channel setup before prepareToPlay, and after that everything must be set, because then processBlock will be called from the real-time thread.

Best practice (IMHO) is to have an Array<MyFilter> myFilters; (or std::vector<MyFilter> myFilters;) member and resize it in prepareToPlay to the actual number of channels. You can (and should) also do the default setup of the filter coefficients there.
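
A sketch of that, assuming a std::vector<MyFilter> myFilters member and the setCutoff() interface from your code, might look like:

    // Member of the processor:  std::vector<MyFilter> myFilters;

    void prepareToPlay (double sampleRate, int samplesPerBlock) override
    {
        juce::ignoreUnused (sampleRate, samplesPerBlock);

        // One filter per channel. resize() keeps existing elements and only
        // adds or removes at the end, so nothing is rebuilt from scratch.
        myFilters.resize ((size_t) getTotalNumOutputChannels());

        // Default coefficient setup; *filter1 is the cutoff parameter
        // from the earlier posts.
        for (auto& f : myFilters)
            f.setCutoff (*filter1);
    }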

If you go surround at some point, you will realise that you need different filter settings for different channels anyway (there is no point in running a 1 kHz filter on an LFE channel, for example).

HTH

The reason I said that is that my UI Sliders are tied to my filters: when the user changes a Slider, the value is saved in the filter. Should I really be destroying and recreating my filters in that case?

Edit: Also I assumed it was expensive to do that, but I guess not if there are only a few being made. And my filter class is probably pretty cheap anyway.

Nevermind, Jules. I was confused about how std::vector resize works. I didn’t realize it does it all for you automatically. I was thinking I had to rebuild the array from scratch each time, but that’s not the case.

Well, I got everything working now. One filter for each channel, processed channel by channel, and the array is resized if it needs to be in prepareToPlay(). I seriously appreciate the help, guys. Thanks.

I have the same problem, but in my case I’m using the Context to filter the whole buffer instead of filtering sample by sample:

    for (int channel = 0; channel < totalNumInputChannels; ++channel) // iterate over each channel of audio
    {
        auto* channelData = buffer.getWritePointer (channel);
        const float* bufferData = buffer.getReadPointer(0);
        const float* delayBufferData = delayBuffer.getReadPointer(channel);
        float* ouputDryBuffer = buffer.getWritePointer(channel);

        juce::dsp::AudioBlock<float> block(buffer);
        juce::dsp::ProcessContextReplacing<float> context(block);
        filter.process(context);
    }

I think it’s because I’m just copying my whole buffer into the block each time. How could I make one block with just the left channel and another one with just the right channel?

Thanks so much,
Raul