I am working on rebuilding some distortion units from Reaktor based on Infinite Linear Oversampling, which is a technique for reducing aliasing. It involves using the integrals of the distortion equations together with unit delays. Here's an example schematic:

https://www.native-instruments.com/forum/attachments/ilo-tanh-png.54931/

I'm wondering what the simplest way is to write a function that delays its input by one sample.

I read Jordan Harris's posts here (he designed the State Variable Filter I'm currently enjoying very much, incidentally), but I'm not sure I follow his technique.

Here's what I think might be the same idea:

```
double state = 0.0; // previous sample; 0.0 gives a defined value on the first call

inline double getUnitDelay(double input) {
    double output = state; // grab the value stored on the previous call
    state = input;         // remember the current sample for next time
    return output;
}
```

So in principle, it takes an input but doesn't return it: it stashes the input in a state variable and hands back whatever was stored there on the previous call. I think the state needs to be initialized to 0.0 (a double can't hold nullptr) so there is something valid to return on the first sample request. Not sure if the state can be folded into the function itself.

Since C++ executes statements in order (and nothing after a return ever runs), the function has to read the stored value before overwriting it; that way each call returns the output from the prior sample.
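
In case it matters, here's a minimal sketch of the same idea wrapped in a small struct (the UnitDelay name is just mine), so each delay owns its own state and several can coexist. A static local inside the function would also fold the state in, but then every caller would share a single delay:

```
// One-sample delay as a tiny struct: each instance keeps its own state,
// so multiple delays don't clobber one another.
struct UnitDelay {
    double state = 0.0; // value from the previous call; 0.0 on the first call

    double process(double input) {
        double output = state;
        state = input;
        return output;
    }
};
```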

Then, for example, getUnitDelay can be used in expressions like this:

`integral - getUnitDelay(integral) ... ;`

Would that work? Is there a better way to do it?
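
For context, here's roughly how I picture it slotting into the schematic above, assuming first-order ILO evaluates (F(x[n]) - F(x[n-1])) / (x[n] - x[n-1]) with F the antiderivative of the shaper (for tanh, F(x) = log(cosh(x))); the ILOTanh name and the fallback threshold are my own guesses:

```
#include <cmath>

// Sketch of a first-order ILO tanh stage: the difference quotient of the
// antiderivative approximates tanh while suppressing aliasing.
struct ILOTanh {
    double lastInput = 0.0;    // x[n-1]
    double lastIntegral = 0.0; // F(x[n-1])

    double process(double input) {
        double integral = std::log(std::cosh(input)); // F(x[n])
        double dx = input - lastInput;

        // When consecutive samples are nearly equal, the quotient is
        // ill-conditioned, so fall back to tanh at the midpoint.
        double out = (std::fabs(dx) > 1e-9)
                         ? (integral - lastIntegral) / dx
                         : std::tanh(0.5 * (input + lastInput));

        lastInput = input;
        lastIntegral = integral;
        return out;
    }
};
```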

Thanks as always