Comparison of noise/random algorithms [code included]

Hello all.

I put together some code that collects various random number generator algorithms for testing as noise sources:

  • juce::Random
  • std::rand()
  • Linear congruential generator
  • Xorshift
  • custom implementation of the Mersenne Twister
  • std::mt19937
  • Taus88

Suffice it to say, they all sound the same! So, in my mind, the fastest to compute wins.

You can access the code here: https://github.com/Arifd/Noise-Algorithms

Enjoy, I hope this becomes of use to any of you.

Note: There are probably ways to get even greater performance. I'm thinking of generating a random wavetable every block, but I know that's hotly debated and beyond the scope of this particular project (although it would make a fine second project! :wink:). Let me know if you have any suggestions/ideas for me to investigate.

The differences are not audible when sampling numbers directly from the generators and using them as audio. But there may be differences when putting the numbers through transformations.

I personally use the Mersenne Twister (from the C++ standard library) because it annoys me on principle that the other ways to generate random numbers are "worse". :wink:

Yeah, I was going to suggest auditioning the noise through different filters and EQ. As Xenakios suggests, the transforms are where you'll notice differences that trip up things like IIR filters.

Also, in terms of comparing "fastest", be sure that you're also using the most optimal way of getting an RNG's value into the [-1, 1] range.

There's also SuperCollider's random number stuff, based on L'Ecuyer's 1996 three-component Tausworthe generator "taus88".

It uses some cunning bit tricks on IEEE floats to map the bit patterns from the RNG into the [-1, 1] range without using a float divide or multiply, for example.

This was written originally in the mid/late 90s, so compilers might be out-performing tricks like that nowadays. I haven't checked!

Ahh, thank you. I tested both Xorshift and SC's Taus, both with the cunning trick and with the * 2 - 1 method, and it appears (at least for me, today) that the * 2 - 1 method just eked out a slight advantage.

I have added the taus algorithm to my GitHub repo. And yes, although they are 'better', until I run into any problems I guess I'm going to stick to my trusty but shitty xorshift :slight_smile:

BTW, the fastest to compute on my MacBook Pro i7 is JUCE from that set. I tried a few times, randomising the order, and JUCE always wins.

That's not surprising - the juce::Random class was designed to be weak but fast!

If you only need white noise (not a general-purpose RNG), xorshift (and other LFSR-based approaches) can be modified to generate one random bit, which means you don't need to scale, just cast. It's counterintuitive, but to my ears noise that is just 0s and 1s sounds quite smooth (un-normalized, of course).

Hi @Holy_City, I like the idea! But if the values are only 0 and 1, you still want to scale them to -1 to +1, right? Or did you mean something different by 'scale'?

In any case, my google-fu is not returning anything on a 1-bit RNG. Do you have any code/links to share?

Here's the neatest that I found, and my current favourite: https://excamera.com/sphinx/article-xorshift.html

I've used the example on that Wikipedia article - just take the LSB of the linear feedback shift register:

    float sample = lfsr & 0x1 ? 1.0f : -1.0f;

I'm no expert on RNGs, but my understanding is that the LFSR technique (which, on first impressions, is what xorshift uses) generates a random bit with 50% probability of being 1 or 0 in the LSB. So if you map that to +1 or -1, you get what amounts to clipped white noise.

Disclaimer: I heard about this from a colleague a few years ago

If I understand correctly, the algorithm itself remains the same: it returns a 32-bit integer, and then (I think) with lfsr & 0x1 you're saying just look at the lowest bit of the returned integer.

Yup, I haven't dug in too deeply. IIRC the important property of the algorithm is how many samples it takes to repeat the sequence, which for noise generation isn't such a big deal. But mapping the algorithm's output to an int in a certain range and then casting to a float in the [-1, 1] range looks to be a bit more expensive than just taking the lowest bit. You may want to do some benchmarks to be sure - processors are weird.

Alright nice!

So, in terms of using it for noise: it does seem to make a slightly grittier noise (especially after transformations), and to my ears the frequency response is a little different as well - more mid range, I think - which may or may not suit your taste. But for "most" applications I'm going to say it's better to return float * 2 - 1.

However, for generating a random bool or a random sign it's a winner! So thank you - I have added it to my arsenal!

The frequency response is definitely white in my tests. Try putting it through a one-pole LPF or APF and see how it sounds.