Hello again! Hello @jules

For the first exercise and general information about my talk at ADC17, see there:

## Exercise 2

Now for exercise two, I'll make you play with the Newton-Raphson algorithm all by yourself!

Iâll starting by explaining again (and better) the example provided previously for Newton-Raphsonâs algorithm with the numeric calculus for the equation x*x - 5 = 0. As I said during my talk, the purpose of that algorithm is to calculate the ârootsâ of a given equation g(x), which means the value(s) for x so g(x) = 0. It is quite useful for the so-called âimplicit equationsâ, which means equations for which there is no explicit solution that can be written output = âsomething depending only of inputsâ.

But it can also be used to calculate numeric approximations of some explicit equations / functions, as an alternate way to get the output of given std functions without the need to call them. It's like reverse-engineering the behaviour of some well-known std functions!

So if g(x) = x*x - 5, then the roots are sqrt(5) and -sqrt(5). But thanks to that algorithm, you will be able to get a numeric value for sqrt(5) without any call to any C++ sqrt function. This is an iterative algorithm, which must be initialized with a given value x0, and applied several times in succession to get the successive values x_k, improving the accuracy of the result at each iteration, using this formula:

x_{k+1} = x_k - (dg(x_k)/dx)^-1 * g(x_k)

dg(x)/dx is the derivative of g(x) with respect to x, and the ^-1 means a matrix inversion in the general case. When everything is one-dimensional, as here, the Newton-Raphson update simply divides g(x_k) by the value of dg(x_k)/dx; otherwise we would need to invert a square matrix of the right dimension.

So in our first example case, here are the expressions for all these functions:

• g(x) = x * x - 5
• dg(x)/dx = 2 * x (if you don't understand why, then you need to take some math lessons about derivatives of classic functions, I'm afraid!)

So letâs initialize the algorithm with x0 = 1. Using the algorithm formula, we can calculate the next iteration x1 :

x1 = x0 - g(x0) / (dg(x0)/dx) = 1 - (1 * 1 - 5) / (2 * 1) = 3.

If we do the same thing several times we get :

• x2 = 2.333333333333334
• x3 = 2.238095238095238
• x4 = 2.236068895643363
• x5 = 2.236067977499978
• x6 = 2.236067977499790

etc.
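The iteration above can be reproduced in a few lines of C++ (the function name is mine, just for illustration):

```cpp
#include <cmath>

// One Newton-Raphson step for g(x) = x*x - 5, with dg(x)/dx = 2*x.
// Starting from x0 = 1 and applying it repeatedly gives the sequence
// x1 = 3, x2 = 2.3333..., x3 = 2.238095..., converging to sqrt(5).
double newtonStepSqrt5 (double x)
{
    auto g  = x * x - 5.0;   // g(x)
    auto dg = 2.0 * x;       // dg(x)/dx
    return x - g / dg;       // x_{k+1} = x_k - g(x_k) / (dg(x_k)/dx)
}
```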

Tell me if you have any questions so far! The code for doing it in JUCE is in the zip archive provided in the first topic.


So, is anyone interested in the next part of the homework? Have you been able to run Newton-Raphson's algorithm by yourself to compute the square root of 5?


Thank you, all this is very interesting.
And yes, I would love the next part of the homework.
Well, I think I should send you a private message.


Hello! Have you been able to run the square root code already, to do some Newton-Raphson manually?

By the way, I discovered recently that you can run Newton-Raphson's algorithm in… the JUCE demo app!! Because there is a Javascript interpreter there:

```
/*
    Javascript! In this simple demo, the native
    code provides an object called 'Demo' which
    has a method 'print' that writes to the
    console below...
*/

Demo.print ("Newton-Raphson's algorithm in JUCE + Javascript!");
Demo.print ("Calculus of square root of five");
Demo.print ("");

function square_root_five_newton_raphson (x)
{
    var g = x*x - 5;    // expression of g in g(x) = 0 = x*x - 5
    var dg = 2*x;       // derivative of g(x)
    x = x - g / dg;     // Newton-Raphson's algorithm application
    return x;
}

var x_0 = 1;

Demo.print ("Initial value = " + x_0);
Demo.print ("");

var x = x_0;

for (var i = 1; i < 10; ++i)
{
    x = square_root_five_newton_raphson (x);
    Demo.print ("Iteration " + i + " : x = " + x);
}

Demo.print ("");
Demo.print ("Exact value : x = " + Math.sqrt(5));
```

A while back I chose tanh(x) as a distortion algorithm, but it's very interesting playing around with your proposed algorithm alternatives and implementations.

I can open JUCE projects, mess a bit with them and compile them, but unfortunately JUCE is still way out of my league, so I can't really develop anything from scratch with it. I use other environments I am more comfortable with to play with your findings.

Thanks.

Hey @IvanC, I think I've gotten the hang of the Newton-Raphson algorithm and I'm interested in the next part of the homework. Also, are you planning on posting the "tanh analog clipper" homework?

Hello!

Yes, I have a list of new topics to prepare for the JUCE forum in the next weeks, and I'll add the final parts + some answers in the 2 ADC17 topics as well.


So here is the next part of the homework! Let's use Newton-Raphson's algorithm for real, in an audio algorithm!

Letâs code that thing :

Basically, itâs just a tanh standard waveshaper, but we have also an additional feedback here. And the usual gain you can apply on the input signal.

How do we code this ? We need to write the relationship between the input and the output. If `w` is the input of the tanh waveshaper, then we can write this :

`out = tanh(w)`

But the value of w is :

`w = in - k * out`

So at the end we get :

`out = tanh(in - k*out)`

Here is our implicit equation! Even if we invert the tanh, we can't get any expression with `out` all alone on the left of the equals sign and only `in` on the right:

`tanh^-1(out) = in - k * out <=> k * out + tanh^-1(out) = in`

So how can we solve it? Using Newton-Raphson's algorithm, of course! We need to calculate the expression of our function `g(out)`, and the same for its derivative:

`g(out) = out - tanh(in - k * out)` and `dg(out)/dout = 1 + k * (1 + tanh(in - k * out)^2)`

Now from here, we can write the code that will do the audio processing :

```
// w is a state variable which keeps the value of the
// last result, initialized at zero in reset()
for (auto i = 0; i < numSamples; i++)
{
    auto in = samples[i];

    for (auto n = 0; n < 8; n++)
    {
        auto g  = w - std::tanh (in - k * w);
        auto dg = 1 + k * (1 + std::tanh (in - k * w) * std::tanh (in - k * w));
        w = w - g / dg;
    }

    samples[i] = w;
}
```

So guys, you can try this and see what happens for various k values and various input gains. The sound will be a little different from the standard tanh waveshaper: more rounded, or with less gain. Listen to the results and play with an oscilloscope.

A little note: I have used a fixed number of iterations there, but for a perfect result I should use a while statement which tests whether abs(g) is lower than a given acceptable tolerance. However, doing so might be impractical in a plug-in, because then the CPU load would be signal dependent.
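To sketch that alternative exit condition, here is a tolerance-based loop shown on the earlier x*x - 5 = 0 example (the function name and the iteration cap are my own illustrative choices, not from the original post):

```cpp
#include <cmath>

// Newton-Raphson for g(x) = x*x - 5 with a tolerance-based exit.
// maxIterations caps the worst case, so the CPU load stays bounded
// even when convergence is slow, addressing the concern above.
double solveSqrt5 (double x0, double tolerance, int maxIterations)
{
    auto x = x0;

    for (int n = 0; n < maxIterations; ++n)
    {
        auto g = x * x - 5.0;

        if (std::abs (g) < tolerance)   // exit as soon as g is "close enough" to zero
            break;

        x -= g / (2.0 * x);             // standard Newton-Raphson update
    }

    return x;
}
```

In a real-time context you would typically keep both conditions: the tolerance test for accuracy, and the iteration cap as a safety net.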


Hi Ivan! Thanks for all the information. But I have a question. Why
`dg(out)/dout = 1 + k * (1 + tanh(in - k * out)^2)`?
If we have a function `y = tanh(x)`, then `dy/dx = 1 - tanh^2(x)`.
So, for `g(out) = out - tanh(in - k * out)` we have `dg(out)/dout = 1 - [1 - tanh^2(in - k * out)] * (-k) = (1 + k) - k * tanh^2(in - k * out)`


Holy ****. Youâre right, I derive too many exponential functions nowadays I guess

Moreover, I didnât say anything about the types there. Quick question, do you think it would be better to have w / g / dg as a float or as a double variable ?

Hi Ivan. Well, itâs very difficult to answer this question, because we need get many factors keep in mind (arch of fpu, cost of casting operation etc). Obvious, using double as type of
an intermediate variables can increase accuracy of resultâŚ But in my tests of this tanh waveshaper I did not get a noticeable difference between float/double in performance or quality.

Hello!

When I have some time, Iâll deal with a new example based on a sinh waveshaper in the feedback loop, like what happens in the diode clipper example I showed in my previous talks. The double vs float question will have new consequences in this context.

In short, before I do this, I would like to point out two things to answer my own question:

• I didnât talk much about convergence before, because itâs a problematic topic related with Newton-Raphsonâs algorithm. The thing is, for some schemes involving that algorithm, some cases of divergence related with the amplitude of the input signal happen quite often. You think your algorithm is fine, then you put some gain in the input, and you get unexpected results because the iterative algorithm fails and returns infinity or NaN. The tanh algorithm only issue is that the convergence might be slow, but not that the algorithm might diverge.
• The exit condition for the iterative algorithm here is a fixed number of iterations, but that's not the only way to do it. You can also check the value of g and exit if its absolute value is under a given threshold.

As youâll see with the next example, using double instead of float will have two consequences : youâll get more room before divergence, and youâll be able to reduce the number of iterations to get a given accuracy, which will compensate a lot for the extra CPU load associated with the use of double variables operations

HmmmâŚ Very interesting!
I never considered whether the type (float/double) affects the number of iterations in a NR loop))
Iâll try to add logging to the algorithm and check this interesting effect. Thanks!