Unit Delay (Z¯¹)

Is there something in JUCE for unit delays, commonly denoted Z¯¹? I want to try to create an audio filter, but I'm not sure how to make a simple unit delay. I'm new to JUCE, sorry.

Also, what would you use to get the audio stream through the filter? Like I say, I'm new to JUCE. I'm used to working in Reaktor, and it was much easier. I'm trying to figure out how to work with audio in JUCE.

There are some ready-to-use filters available in JUCE. Have a look at the documentation; otherwise you will need to program the DSP part yourself. A good starting point is a DSP book. You will find an overview of the common DSP topics here:


The thing is, I actually have a good bit of experience with DSP, especially filters. I was making them in Reaktor (Core), though, and that's quite different from doing it in C++ and JUCE. I think if I can find a very minimal example of a simple, self-contained filter, I'll be able to figure it out. At the moment I'm just trying to figure out how unit delays work in C++. It's extremely simple in Reaktor, and I imagine it is in C++ and JUCE too, but I'm just not sure how it's done.

A literal mono unit delay (that is, a delay of one sample, without a way to set any other delay length) could be something like:

class unit_delay
{
public:
    unit_delay() : m_previous(0.0f) {}
    float process(float input)
    {
        float temp = m_previous;
        m_previous = input;
        return temp;
    }
private:
    float m_previous;
};

You need an instance of this class in a scope where it will retain its state as long as your audio processing is going on. You need a separate instance for every audio channel you process. (That is, if you have a stereo signal, you need 2 independent object instances etc...)

The implementation shown above may not be optimal. It is common to make audio processing classes process buffers of audio, not single samples, to reduce possible overhead from function calls. (I'd suspect though that the class above is so simple the compiler can figure out a way to not make any function calls.) But you shouldn't be worrying about efficiency at this point. Just get things working in the cleanest and easiest way possible.
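To illustrate the buffer-based approach, here's a sketch of what a block-processing version of the unit delay above might look like. The class name and interface are my own for illustration; this is not a JUCE class:

```cpp
// Sketch of a buffer-based unit delay (hypothetical class, not part of JUCE).
// Instead of one call per sample, it processes a whole block in one call and
// keeps the last input sample of each block as state for the next block.
class BlockUnitDelay
{
public:
    void process(float* data, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            float current = data[i];
            data[i] = m_previous; // output the sample from one step ago
            m_previous = current; // remember this sample for the next step
        }
    }
private:
    float m_previous = 0.0f;
};
```

Processing in blocks like this amortizes the per-call overhead over the whole buffer, which matters more for heavier processing than for a single delay.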

So after thinking about it a little, I have a theory of how a unit delay can work. I'm thinking that if you create a variable (let's call it z) and assign it after you assign the output (then use it on the next sample), that would be a unit delay.

For example:

// Gets called every sample
void process(float& in, float& out, double& feedback)
{
    out = in + z;
    float fbk = out * feedback;
    z = fbk;
}

// Or even simpler
void process(float& in, float& out, double& feedback)
{
    out = in + z;
    z = out * feedback;
}

Do you think that would work? In my mind, it makes sense. 

Hi Jordan,

Yes, your code implements a very basic way to perform a unit delay.

That said, be careful about using a generic unit delay for more complex filters, because it's not very convenient when you need several delays.

If you have an N-point filter, you can use an array to store the current value (z^0) and all the delayed values (z^-1, z^-2, z^-3, ...); see the following simple FIR filter implementation:

// implements a generic FIR filter: B(z) = b[0] + b[1]*z^-1 + b[2]*z^-2 + ... + b[size-1]*z^-(size-1)
// out = x[0]*b[0] + x[1]*b[1] + x[2]*b[2] + ... + x[size-1]*b[size-1]

float processFIR(float input, float* x, const float* b, int size)
{
  float out = 0.0f;
  for (int n = size-1; n >= 0; n--)
  {
    if (n > 0) x[n] = x[n-1]; // shift old samples
    else       x[0] = input;  // sample 0 is the current sample (z^0)

    out += x[n]*b[n];
  }
  return out;
}

// signal, coeff, output and size are assumed to be member variables,
// e.g. float signal[size]; float coeff[size]; float output; with size = 3

void init()
{
  // initialize the input signal history
  // signal[0] => z^0, signal[1] => z^-1, signal[2] => z^-2, ...
  for (int i = 0; i < size; i++) signal[i] = 0.0f;

  // filter coefficients
  coeff[0] = 0.454664f;
  coeff[1] = 0.324342f;
  coeff[2] = 0.24365f;

  // initialize output
  output = 0.0f;
}

void process(float sample)
{
  output = processFIR(sample, signal, coeff, size);
}

Also, keep in mind that the feedback parameter (and filter parameters in general) depends on the actual sample rate, so you have to calculate its value in the prepareToPlay() method of your AudioProcessor class.
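As a sketch of what I mean (the function and variable names are my own, not JUCE API; the formula is the usual one-pole feedback coefficient a = e^(-2*pi*fc/fs)):

```cpp
#include <cmath>

// Hypothetical helper: a feedback coefficient derived from a cutoff frequency
// must be recomputed whenever the sample rate changes, which is why JUCE hands
// you the rate in prepareToPlay().
float computeOnePoleCoeff(double cutoffHz, double sampleRate)
{
    constexpr double kPi = 3.141592653589793;
    return static_cast<float>(std::exp(-2.0 * kPi * cutoffHz / sampleRate));
}

// In your AudioProcessor you might then write something like (sketch):
//
// void prepareToPlay(double sampleRate, int samplesPerBlock) override
// {
//     feedback = computeOnePoleCoeff(cutoffHz, sampleRate);
// }
```

The same cutoff gives a different coefficient at 44.1 kHz than at 96 kHz, so hard-coding it would detune the filter whenever the host changes sample rate.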

For C++ DSP algorithms you can start browsing here: http://www.musicdsp.org/archive.php?classid=3

and here https://github.com/vinniefalco/DSPFilters

Anyway, there are a lot of DSP libraries out there.




Thanks man! I think I'm starting to understand the C++ way of doing it. 


Do you think you could show me how you would actually implement a filter in JUCE? I'm guessing you do it in processBlock(). I'm trying to implement the filter in the link below, but I'm getting a crackling sound in the right channel for some reason. I have it doing some filtering, but that crackling is bad. I was thinking it might be clipping, but I don't think that makes much sense because it's only the right channel. I'll try looking at other people's code to see how they do it, but I'm curious how you would implement a filter.





Actually, I only start getting the crackling after I first change a parameter. I'm pretty sure I did the parameter handling well enough. It's weird that the left channel sounds just fine and the right channel doesn't. Here's my processBlock(). Please let me know if I'm doing something wrong.

// This is the place where you'd normally do the guts of your plugin's
// audio processing...
for (int channel = 0; channel < getNumInputChannels(); ++channel)
{
    float* channelData = buffer.getWritePointer(channel);

    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        const float in = channelData[i];
        channelData[i] = directFormFilter.process(in);
    }
}

// In case we have more outputs than inputs, this code clears any output
// channels that didn't contain input data (because these aren't
// guaranteed to be empty - they may contain garbage).
for (int i = getNumInputChannels(); i < getNumOutputChannels(); ++i)
    buffer.clear (i, 0, buffer.getNumSamples());

Another edit:

Actually, it might be because the filter is only set up to process one channel and I'm trying to process both with it. I created another process function, one for each channel, and it ended up working. :) I'm a little confused about how to properly set up stereo processing, but I came up with something that works.

if (getNumInputChannels() < 2)
{
    // (mono case)
}
else
{
    float* leftData = buffer.getWritePointer(0);
    float* rightData = buffer.getWritePointer(1);
    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        const float inL = leftData[i];
        const float inR = rightData[i];
        leftData[i] = directFormFilter.processLeft(inL);
        rightData[i] = directFormFilter.processRight(inR);
    }
}

Did you add copies of the relevant state variables in the filter class for the stereo processing? If you didn't, you still have a sound processing bug in the code. Try a sine wave to hear (or see in a wave editor) clearly if the processing still produces wrong results.

Instead of having 2 processing functions (and the absolutely required copies of the state variables) in the filter class, it would have been better to just leave the filter class as mono and have 2 instances of the filter for your stereo processing. 

Yeah I actually did remember to create copies of the state variables. :) I know it wouldn't make much sense to use the state of the other channel each sample. I think I'm starting to get the hang of it.

As Xenakios said, you do not need two identical processing functions, but two instances of a filter object with a single processing function; then you call that processing function on the proper object for the proper channel.

Look at the snippet below.

Note: it is assumed that you have a little knowledge of OOP (Object-Oriented Programming): how to create classes, include external files, and so on.


// ---- declared as members of the AudioProcessor
MyBiquadFilter directFormFilterLeft;
MyBiquadFilter directFormFilterRight;

// ---- in prepareToPlay()

// ---- in setParameter()

// ---- in processBlock()
float* leftData = nullptr;
float* rightData = nullptr;
if (getNumInputChannels() < 2) // MONO CASE
{
  leftData = buffer.getWritePointer(0);
  for (int i = 0; i < buffer.getNumSamples(); ++i)
    leftData[i] = directFormFilterLeft.process(leftData[i]);
}
else // STEREO CASE
{
  leftData = buffer.getWritePointer(0);
  rightData = buffer.getWritePointer(1);
  for (int i = 0; i < buffer.getNumSamples(); ++i)
  {
    leftData[i] = directFormFilterLeft.process(leftData[i]);
    rightData[i] = directFormFilterRight.process(rightData[i]);
  }
}


Thanks for the reply and the code example! Well, I was thinking it would be better to make two different process member functions for the filter class because you could calculate the coefficients just once whenever a parameter changes, instead of calculating them for each object. Plus, you would only have to call a parameter change once, on one object. I don't know which way is better, but I feel like the way I'm doing it is at least more efficient. Do you think it would work if I had one process member function, stored the states in an array (of size 2), and accessed the index of that array from the process member function's argument? I'm trying to think of the best way to do it.
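Something like this is what I have in mind (just a sketch; the class name is made up, and I'm assuming a simple one-pole feedback filter like the one earlier in the thread):

```cpp
// Sketch of the "state array indexed by channel" idea (hypothetical class).
// One process() function; the per-channel unit-delay state lives in a small
// array selected by the channel argument, and the coefficients are shared.
class StereoOnePole
{
public:
    void setFeedback(float fb) { feedback = fb; } // shared by both channels

    float process(float in, int channel) // channel: 0 = left, 1 = right
    {
        float out = in + z[channel];
        z[channel] = out * feedback;
        return out;
    }
private:
    float z[2] = { 0.0f, 0.0f }; // one unit-delay state per channel
    float feedback = 0.0f;
};
```

This keeps the coefficient computation in one place while still giving each channel independent state, which is the part that was causing the crackling before.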

I always prefer generality... some day you will need the same class to make an EQ that processes left and right channels independently, or you will apply the filter to 5.1 channels, or you will need a cascade of filters, and so on...

If you want to compute the coefficients just once, you can create a method to copy the coefficients from one filter to another:

leftFilter.setFilterType(kLPF);  // where kLPF = 0, kHPF = 1, and so on, defined elsewhere

leftFilter.setCutoff(250.0f);  // this will make the coefficients computation

rightFilter.copyCoeff(leftFilter);   // this just copies the coefficients to the rightFilter
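Here is a sketch of what such a copyCoeff() could look like for a biquad. All the names, and the Direct Form I structure, are just my assumptions about what a MyBiquadFilter class might contain:

```cpp
// Hypothetical sketch of MyBiquadFilter with a copyCoeff() method: the
// "source" filter computes its coefficients once, and the other channels
// copy them while keeping their own delay state.
class MyBiquadFilter
{
public:
    void setCoeffs(float b0_, float b1_, float b2_, float a1_, float a2_)
    {
        b0 = b0_; b1 = b1_; b2 = b2_; a1 = a1_; a2 = a2_;
    }

    // copies only the coefficients, NOT the per-channel delay state
    void copyCoeff(const MyBiquadFilter& other)
    {
        b0 = other.b0; b1 = other.b1; b2 = other.b2;
        a1 = other.a1; a2 = other.a2;
    }

    float process(float in) // Direct Form I
    {
        float out = b0*in + b1*x1 + b2*x2 - a1*y1 - a2*y2;
        x2 = x1; x1 = in;   // shift input history  (z^-1, z^-2)
        y2 = y1; y1 = out;  // shift output history (z^-1, z^-2)
        return out;
    }
private:
    float b0 = 1.0f, b1 = 0.0f, b2 = 0.0f, a1 = 0.0f, a2 = 0.0f;
    float x1 = 0.0f, x2 = 0.0f, y1 = 0.0f, y2 = 0.0f;
};
```

The important detail is that copyCoeff() leaves x1/x2/y1/y2 alone, so the two channels never share delay state even though they share the coefficient computation.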