Hi all DSP experts!
I am trying to reduce the aliasing of four cascaded nonlinear saturating/wavefolding functions, and I would like to ask a few questions.
For each nonlinear block I am applying first-order antiderivative antialiasing (ADAA). My MATLAB prototype works fine. However, when I move to the C++/JUCE implementation, I have to increase my 'tolerance' value (the threshold that guards against the ill-conditioned case of ADAA, i.e. when consecutive input samples are nearly equal) or I hear strange artifacts. I also have to reduce the input gain/trim range, otherwise I get NaN values. Neither issue occurs without ADAA, even though I then get more aliasing.
Do these problems arise because of single-precision float representation?
Moreover, each nonlinear block with ADAA introduces a latency of 0.5 samples. Since I apply oversampling before the nonlinear processing, should I scale that latency by the oversampling ratio?
In other words, is my total latency

totLatencySamples = oversamplerLatency + 0.5*numberOfNLProcess

or

totLatencySamples = oversamplerLatency + 0.5*numberOfNLProcess*baseSamplingRate/oversampSamplingRate?
Thank you very much!