Bitcrusher-Like Effect when doing anything in processBlock()

Hey hey guys,

I’ve made it as far as configuring JUCE and getting my GUI set up without asking any questions on the forum, but this has me totally stumped.

I’m trying to use the IIRFilter class to just test the waters and see how processBlock works, but I’m getting a bitcrusher-like effect on the output that gets worse as I decrease the buffer size in my host.

My processBlock function is pretty straightforward, so I don’t think this can be a matter of the code being too slow to process everything in time.

[code]void ReverbAudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    // This is the place where you'd normally do the guts of your plugin's
    // audio processing...
    for (int channel = 0; channel < getNumInputChannels(); ++channel)
        loPass->processSamples (buffer.getSampleData (channel), buffer.getNumSamples());

    // ...do something to the data...
}[/code]


I’ve called the setHighPass function in prepareToPlay. I thought perhaps the sample rate was changing slightly, causing the filter not to process the end of some of the blocks; but the effect just gets worse if I call setHighPass in processBlock().

I could post an audio example of this if it would be helpful, any ideas on how to get it sounding smooth?

I have solved my problem. A further search revealed that I need a separate filter for each channel. Sounds perfect now.

I will probably post more stupid questions like this shortly -.-

Could somebody point me in the direction of a tutorial/post on how to assign parameters to gui components?

Would be much appreciated. thanks.

Download and run the DSPFilters demo, and if it does what you want, grab the source code (do an SVN checkout, because the .zip sources are a little bit out of date):

I’ve had a look at that and I’m still stumped as to how it all works. The key point that I’m missing is how I get the AudioProcessor and the editor to interact with each other.

What do you mean stumped? Are you confused about how the GUI thread passes information into the audio callback thread? Can you be specific?

Yeah, that’s exactly it. I assumed at first, while reading through the tutorials, that all I needed to do to change parameters in the plugin was to implement a listener in the GUI. I quickly discovered that was the wrong approach, as there’s no way for the AudioProcessor and the GUI to “talk” to each other using this method.

Basically, what I want to do is to change the cutoff frequency of a filter by turning a rotary Slider on the GUI. Should be pretty straightforward right?

Thanks again for your time.

Glad to help.

Yes, your use-case is straightforward, and I address it directly in the DspFilters Demo.

The problem boils down to how do you communicate information from one thread to another? There are two schools of thought:

1) Protect access to shared variables using a mutex (i.e. juce::CriticalSection).

In your case, the shared variable is the cutoff frequency + filter coefficients. Basically the juce::IIRFilter. The general technique is to put a lock around any code that accesses the variable. So when you want to change the cutoff from the UI thread (also known as the Message thread in juce terms), take the mutex (juce::ScopedLock), recalculate the IIR filter, and release the mutex.

Locking always comes in pairs, so then in the audio callback when you apply the IIR filter you will need to take the mutex, apply the filter, and release the mutex.

This is the easiest way to implement “synchronization” (threads communicating) but unfortunately the worst-performing.

The second school of thought, which is my preference, is

2) Modify shared variables asynchronously using a thread-safe queue

Instead of changing the IIR filter directly, put a message into a thread-safe queue telling the other thread to recalculate the cutoff/coefficients of the IIR filter the next time it gets a chance. A simple implementation of the thread-safe queue will use a mutex to protect the queue, but since the mutex is held for constant time (the time for a linked list insertion usually), it is extremely CPU friendly.

This is the method used in the DspFilters demo:

Thread Queue using Functors:

Processing the Thread Queue from the audioDeviceIOCallback:

void AudioOutput::audioDeviceIOCallback (const float** inputChannelData,
                                         int numInputChannels,
                                         float** outputChannelData,
                                         int numOutputChannels,
                                         int numSamples)
{
    m_queue.process(); // <-- process the thread queue first

    // ...then do the audio processing as usual...
}

Posting a functor to the Thread Queue for asynchronous execution:

// Called from the UI thread (message thread)
void AudioOutput::setFilterParameters (Dsp::Params parameters)
{
    m_queue.call (bind (&AudioOutput::doSetFilterParameters, this, parameters));
}

If you don’t understand any part of it feel free to post your questions.

Sophisticated programmers will use a wait-free MPSC (multiple producer, single consumer) queue with a custom allocator to avoid the “ABA” problem. For more information on advanced synchronization, please visit:

You are deallocating objects in ThreadQueue.process() in the audio device callback.
Avoid doing this and always deallocate objects in the GUI (by marking them dirty, for example).

Otherwise you can use a growable/shrinkable fifo of preallocated objects…

Thanks guys, I got it working!

[quote=“kraken”]You are deallocating objects in ThreadQueue.process() in the audio device callback.
Avoid doing this and always deallocate objects in the GUI (by marking them dirty, for example).

Otherwise you can use a growable/shrinkable fifo of preallocated objects…[/quote]

Duh! I’m not going to give away my best implementation in an MIT-licensed project! I only included it as a starting point for people who want to learn how to develop concurrent systems.

Anyone interested in improving it to the highest level of performance can start here:

Glad to hear it!