Note: There are probably ways to get even greater performance (I'm thinking of generating a random wavetable every block, but I know that's hotly debated, and beyond the scope of this particular project, although a fine second project!). If you have any suggestions/ideas for me to investigate, let me know.
The differences are not audible when sampling numbers directly from the generators and using them as audio. But there may be differences when putting the numbers through transformations.
I personally use Mersenne Twister (from the C++ standard library) because it annoys me on principle that the other ways to generate random numbers are "worse".
Yeah, I was going to suggest auditioning the noise through different filters and EQ. As Xenakios suggests, the transforms are where you'll notice differences that trip up things like IIR filters.
Also, in terms of comparing which is "fastest", be sure that you're also using the most optimal way of getting an RNG's value into the [-1, 1] range.
There's also SuperCollider's random number stuff, based on L'Ecuyer's 1996 three-component Tausworthe generator, "taus88".
It uses some cunning bit tricks on IEEE floats to map the bit patterns from the RNG into the [-1, 1] range without using a float divide or multiply, for example.
This was written originally in the mid/late '90s, so compilers might be outperforming tricks like that nowadays. I haven't checked!
Ahh, thank you. I tested both Xorshift and SC's Taus, both with the cunning trick and with the * 2 - 1 method, and it appears (at least for me/today) that the * 2 - 1 method just eked out a slight advantage.
I have added the taus algorithm to my GitHub repo. And yes, although they are "better", until I run into any problems I guess I'm going to stick to my trusty but shitty xorshift.
If you only need white noise (not an RNG), the xorshift (and other LFSR-based approaches) can be modified to generate one random bit, which means you don't need to scale, just cast. It's counterintuitive, but to my ears, noise that is just 0s and 1s sounds quite smooth (un-normalized, of course).
Hi @Holy_City, I like the idea! But if the values are only 0 and 1, you still want to scale them to [-1, +1], right? Or did you mean something different by "scale"?
In any case, my google-fu is not returning anything on a 1-bit RNG; do you have any code/link to share?
I've used the example on that Wikipedia article; just take the LSB of the feedback shift register:
float sample = lfsr & 0x1 ? 1.0f : -1.0f;
I'm no expert on RNGs, but my understanding is that the LFSR technique (which on first impressions is what xorshift uses) generates a random bit with 50% probability of being 1 or 0 in the LSB. So if you map that to +1 or -1, then you get what amounts to clipped white noise.
Disclaimer: I heard about this from a colleague a few years ago.
The algorithm itself remains the same: it returns a 32-bit integer, and then (I think) with this, lfsr & 0x1, you're saying just look at the lowest bit of the returned integer.
Yup, I haven't dug in too deeply. IIRC the important property of the algorithm is how many samples it takes to repeat the sequence, which for noise generation isn't such a big deal. But mapping the algorithm's output to an int in a certain range and then casting to a float in the [-1, 1] range looks to be a bit more expensive than just taking the lowest bit. You may want to do some benchmarks to be sure; processors are weird.
So in terms of using it for noise, it does seem to make a slightly grittier noise (especially after transformations), and to my ears the frequency response is a little different as well, I think (more mid range), which may or may not suit your taste. But for "most" applications, I'm gonna say it's better to return float * 2 - 1.
However, for generating a random bool or random sign, it's a winner! So thank you. I have added it to my arsenal!