Catmull-Rom Interpolator based on the Lagrange Interpolator

Hello!

 

For the Radium music editor, I have used the source code of the effects/LagrangeInterpolator.cpp file in JUCE to create a Catmull-Rom interpolator (also known as a 4-point, 3rd-order Hermite interpolator):


//
// The Interpolator class below is based on the LagrangeInterpolator class in JUCE, written by Julian Storer.
//

// I've changed it to use the Catmull-Rom spline interpolator instead (since I found that it sounded a lot better),
// plus that the 'process' function takes two extra parameters:
//
// 1. 'numOut': The maximum number of samples the 'process' method is allowed to write to 'out'.
// 2. 'numConsumed': After 'process' is finished, this variable will contain the number of samples read from 'in'.
//
// In addition, the 'process' function now returns the number of frames put into 'out', while previously it returned the number
// of samples read from 'in'.
//
// My benchmarking shows that the degradation in performance compared to the original LagrangeInterpolator class is hardly measurable.
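As a rough sketch of the contract the comment describes (the class and member names here are hypothetical, and the body is a trivial nearest-sample stand-in rather than actual Catmull-Rom interpolation; see the linked SampleInterpolator.cpp for the real code):

```cpp
#include <cassert>

// Hypothetical sketch of the extended 'process' API described above.
struct CatmullRomInterpolator
{
    // Returns the number of frames written to 'out' (the original JUCE
    // class instead returned the number of samples read from 'in').
    // 'numOut' caps how many samples may be written to 'out';
    // 'numConsumed' receives the number of samples read from 'in'.
    int process (double speedRatio,
                 const float* in, float* out,
                 int numOut, int numIn, int* numConsumed) noexcept
    {
        // Stand-in body: nearest-sample copy, just to illustrate the contract.
        int written = 0;
        double pos = 0.0;

        while (written < numOut && (int) pos < numIn)
        {
            out[written++] = in[(int) pos];
            pos += speedRatio;
        }

        *numConsumed = ((int) pos < numIn) ? (int) pos : numIn;
        return written;
    }
};
```

The point of the two extra parameters is that the caller no longer has to guarantee that 'out' is large enough for whatever 'in' contains: the method stops at whichever limit is hit first and reports both counts.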


Here's the source code:  https://github.com/kmatheussen/radium/blob/master/audio/SampleInterpolator.cpp

(For practical reasons (I hadn't configured my text editor), I also changed the coding style.)

 

You might want to consider backporting the code to JUCE, since the sound quality seems to be a lot better, without adding extra CPU usage.

 

With the extended API, it's also possible to create a callback-API for it, which I did in another file:

https://github.com/kmatheussen/radium/blob/master/audio/Resampler.cpp#L73

 

Awesome!

It seems like you copied the header files from JUCE too. So is it really copyrighted by ROLI?... Or is it pure GPLv3? If I were you, and you are not after monetizing it, I would license your additions under the MIT license, to increase the chances that other JUCE users will also use (and potentially contribute to) this interpolator.

I kept the same copyright so that it could be merged back into JUCE if ROLI wants to, but in any case I guess Jules would probably rewrite everything, so it probably doesn't matter. Radium (the host project) is GPL, by the way, so it's natural to continue using GPL code (not that I have a choice either).

 

In order to use an MIT (or other more liberal) license, I would have to write everything from scratch, which I can't be bothered to do.

 

Thanks, sounds like something that might be good for us to absorb (and yes, rewrite!)

Does it really sound that much better?

For 44100Hz sounds, I can't hear any difference, but when playing soundtracker MOD files (which have sounds played back at around 15000Hz, I think), there are a lot fewer high-frequency artifacts with the Catmull-Rom interpolator.

 


I haven't tested too much though. It might be possible to hear a difference for higher-frequency sounds as well; I just went with the Catmull-Rom interpolator when I heard the big difference when playing back MOD files. The main reason for creating this class was to get the extended API for 'process', not to change the interpolation algorithm.

Well, it wouldn't be a bad thing for us to offer it alongside the Lagrange version, with the same class interface, so people can use whichever they like.

FYI I've added this now - testing would be welcomed!

It might be an idea to add this to the tests at http://src.infinitewave.ca (instructions at http://src.infinitewave.ca/faq.html)

Currently they have only an outdated JUCE 2.x interpolator, which bounces all over the place.

btw, I couldn't find Catmull-Rom in the documentation -- I found it from the GitHub repo: https://github.com/julianstorer/JUCE/tree/master/modules/juce_audio_basics/effects

Maybe https://www.juce.com/doc/classResamplingAudioSource#details should reference this interpolator under 'see also'...?

π


btw...

        return y1 + offset * ((0.5f * y2 - halfY0)
                                + (offset * (((y0 + 2.0f * y2) - (halfY3 + 2.5f * y1))
                                              + (offset * ((halfY3 + 1.5f * y1) - (halfY0 + 1.5f * y2))))));

eeeek!

        return y1 + offset/2 * (
            y2 - y0 + offset * (
                2*y0 - 5*y1 + 4*y2 - y3 + offset * ((y3-y0)+3*(y1-y2))
            ));

Or even:

    float C = -y0 + 3*y1 - 3*y2 + y3;
    C = 2*y0 - 5*y1 + 4*y2 - y3 + offset*C;
    C = y2-y0 + offset*C;
    C = y1 + offset/2*C;
    return C;

Check: http://coliru.stacked-crooked.com/a/ebb8824a193a7d56
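For anyone who wants to reproduce the check without the coliru link, here is a self-contained sketch of all three forms. The halfY0/halfY3 temporaries in the first function are an assumption about the surrounding code; the thread only quotes the return expression:

```cpp
#include <cassert>
#include <cmath>

// The original expression, with assumed halfY0/halfY3 temporaries.
static float originalForm (float y0, float y1, float y2, float y3, float offset)
{
    const float halfY0 = 0.5f * y0;
    const float halfY3 = 0.5f * y3;

    return y1 + offset * ((0.5f * y2 - halfY0)
                            + (offset * (((y0 + 2.0f * y2) - (halfY3 + 2.5f * y1))
                                          + (offset * ((halfY3 + 1.5f * y1) - (halfY0 + 1.5f * y2))))));
}

// The first proposed rewrite: the same polynomial in nested (Horner) form.
static float nestedForm (float y0, float y1, float y2, float y3, float offset)
{
    return y1 + offset / 2 * (
        y2 - y0 + offset * (
            2 * y0 - 5 * y1 + 4 * y2 - y3 + offset * ((y3 - y0) + 3 * (y1 - y2))));
}

// The second proposed rewrite: the same evaluation with a reused temporary.
static float hornerForm (float y0, float y1, float y2, float y3, float offset)
{
    float C = -y0 + 3 * y1 - 3 * y2 + y3;
    C = 2 * y0 - 5 * y1 + 4 * y2 - y3 + offset * C;
    C = y2 - y0 + offset * C;
    C = y1 + offset / 2 * C;
    return C;
}
```

All three are algebraically the Catmull-Rom basis y1 + t/2·(y2−y0) + t²/2·(2y0−5y1+4y2−y3) + t³/2·(−y0+3y1−3y2+y3); floating-point rounding can differ in the last bits, so a comparison should use a small tolerance rather than exact equality.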

π

What are you talking about?

Kind of hard to see what was going on with the original expression -- it looks like someone who isn't sure about operator precedence or int->float promotion lifted it from a formula.

Can you be less cryptic?

I tidied up the expression that's in the codebase myself, and I can assure you I'm pretty damned good at operator precedence and float conversion. If you're trying to say that I made a mistake in it, that's great, but explain exactly what's wrong and why.

There is no mistake, just obfuscation and redundancy.  I proposed a rewrite that is much cleaner:

  • removed multiple superfluous parentheses
  • 2.0f*someFloat -> 2*someFloat  etc. (as the 2 promotes)
  • tidied the algebra
  • removed temporary variables that weren't actually saving CPU cycles (I don't think so, though I haven't tested -- does anyone know of a profiling tool that counts CPU cycles? I suppose it would be architecture-dependent...)

I added a test which illustrates that the rewrite evaluates the same as the original. I'm guessing you saw the post before I updated it with the test and thought I was flagging a math error. Sorry about that; I sometimes post, then switch from my MBA (speech reco) to my Windows (dev) machine and edit the post to drop code in.

I use geordi (http://www.eelis.net/geordi/) to check precedence. It's a fantastic tool written by one of the guys on ##c++ (Eelis) to assist discussions.

If you go to http://webchat.freenode.net/ and put any old nickname, click Connect and then `/query geordi` you can interact with it.

e.g.

[15:43] <pi> precedence a-b+c
[15:43] <geordi> (a-b)+c
[15:44] <pi> struct S{S(){BARK;}}; int main() {S s;}
[15:44] <geordi> S::S()
[15:45] <pi>  {S s;}  struct S{S(){BARK;}};
[15:45] <geordi> S::S()

It means I never need to place a single redundant parenthesis on account of not knowing a particular precedence (and not wanting to look it up).

Also, wouldn't the function signature be better written:

static forcedinline float valueAtOffset (const float inputs[4], const float offset) noexcept

?

Test:

[16:16] <pi-x> {float x[50]; f(& x[19]);}  void f (const float inputs[4]) {}
[16:16] <geordi> <no output>
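The geordi test above works because an array bound in a parameter declaration is not enforced: `const float inputs[4]` is adjusted to `const float*`, so the `[4]` is documentation only. A minimal illustration (the helper name is hypothetical):

```cpp
#include <cassert>

// In a parameter list, 'const float inputs[4]' means exactly
// 'const float* inputs': the bound is never checked by the compiler,
// which is why passing a pointer into the middle of a larger array
// (as in the geordi test) compiles without complaint.
static float sumFour (const float inputs[4])
{
    return inputs[0] + inputs[1] + inputs[2] + inputs[3];
}
```

Whether that style is "better" than four separate parameters is arguably a matter of taste: it groups the inputs, but it also hides the fact that the callee receives an unchecked pointer.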

π

My version is better.

If you want to argue about that, I'd suggest you learn more about how optimisers work first - your suggestions sound extremely naive to me.

hmm I can't see how your version might optimise better. Isn't it just a bunch of + and * float ops whichever way you dice it...?

Doing a quick check of CPU cycles seems to indicate both of my alternatives perform pretty much identically to the original: http://coliru.stacked-crooked.com/a/7a506ad8e94db913

At least on that particular architecture with that particular compiler and a prevailing north-westerly breeze. Results come out different each time I run that link, but they are always consistent within a particular run.

I flagged it in the first place because it appeared anomalous: I've mainly only been interacting with JUCE close to the surface, where it is always easy to see why you have done something a particular way, and this jarred. I couldn't see why it was written in this way. The only thing I could think of was that maybe you had copied it from another source in a hurry.

Does the original really have some performance advantage?

If so that is fascinating and rather scary!

π

Yes, I obviously copied the expression from the source in this thread. But the way I wrote it is fine, it doesn't need fixing, and your suggestions are at best irrelevant.

Optimising simple mathematical expressions is what compilers do best; they don't need your help. And if you start doing naive things like re-using intermediate variables, it just restricts the compiler's freedom to re-order things.

gee, you're giving me a really hard time here...

My edit wasn't intended to further optimise performance but to improve clarity for the reader/maintainer at zero performance cost.

Anything that works doesn't need reworking. But we do it nevertheless: we go over our codebases cleaning and whittling all the way down to the last whitespace, do we not? It is nicer to have something tidy. It makes it easier to maintain, easier for someone to follow, and it is aesthetically pleasing...

So you lifted the expression from somewhere... That's all I was saying in the first place -- that with a little algebraic manipulation it can be reworked into something syntactically cleaner and more aesthetically pleasing at no additional computational cost.

My first version doesn't even introduce the possible contention of reusing intermediate variables (although I'm pretty sure that would get optimised to a CPU register on any compiler).

So I don't see how the edit could be described as naive. It really is as simple as it looks. There is no hidden gotcha.

And I don't see how something that tidies code can be described as "at best irrelevant". Can code-polishing be irrelevant? At worst, maybe time-wasting... not worth the time expenditure?

But it would take so little time to process. A few seconds to scan the initial proposed change, click the link to verify that it computes the same, and copy/paste if you like the look of it. A couple of minutes max.

So I can't see that this really applies.  This whole ensuing conversation feels like an exercise in futility, but I think the original post is decent enough, albeit a very minor edit, maybe on a par with pointing out a spelling mistake in a comment.

Would you prefer that we don't "clutter" the forums with such minor morsels?

It looks as though you read every word that gets entered into the forums, and they are getting busier, I think! It's a really, really good framework, massively undermarketed IMHO; things could get really busy over the next few years! So that might get tricky :)

π


Sorry, maybe sounded harsher than intended!

But you did suggest re-using temporary variables to improve performance, and I like to jump in when I see people posting stuff that's misleading to any beginners who might read it.

Sure, the code I wrote probably could be improved for readability, but not by adding variables with meaningless names like "C" or using integer literals instead of floats (which is a pet hate of mine that I'd criticise at length if I had time).

If you really wanted to improve a simple piece of code like that, the smart approach is to introduce const intermediate variables with long, expressive names, so that the code mirrors the internal SSA-form expressions that the compiler actually optimises. Or better, break it into well-named inline functions to avoid intermediate named variables.
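A sketch of that suggestion applied to this formula (the function and variable names are my own illustration, not actual JUCE code):

```cpp
#include <cassert>
#include <cmath>

// Catmull-Rom evaluation written with named const intermediates, as
// suggested above: each coefficient of the cubic gets an expressive name,
// and the polynomial is then evaluated in Horner form.
static float catmullRomValueAtOffset (float y0, float y1, float y2, float y3,
                                      float offset) noexcept
{
    const float cubicCoefficient     = 0.5f * (-y0 + 3.0f * y1 - 3.0f * y2 + y3);
    const float quadraticCoefficient = 0.5f * (2.0f * y0 - 5.0f * y1 + 4.0f * y2 - y3);
    const float linearCoefficient    = 0.5f * (y2 - y0);

    return y1 + offset * (linearCoefficient
                            + offset * (quadraticCoefficient
                                          + offset * cubicCoefficient));
}
```

One nice property of the Catmull-Rom spline that this form makes easy to check: at offset 0 it returns y1 exactly, and at offset 1 it returns y2, so the curve passes through the inner two sample points.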

Nope, I never suggested reusing temporary variables to improve performance.

If you look back to the original post, I proposed two alternatives without rationalisation. There is no subsequent suggestion of that.

I just chose C so as to have A the original and B & C the proposed alternatives. A temporary float could, I suppose, be called y, as x->y is generally understood by mathematicians as a domain->target mapping (y is therefore a bad choice for the inputs). Or tmp, but my experience is that single letters brain-parse more easily in formulae.

I can see that insisting on 1.0f over 1 is reasonable, and a healthy general practice. I personally wouldn't do it if I'm dealing with raw floats, as there is no contention -- I prefer to keep things concise and crisp, especially in convoluted expressions, and to transcribe as closely as possible from the mathematical formula.

But the above wasn't the intention of my original post.

My brain just jarred parsing that formula. It looked as though someone had copied across some cookbook formula, noticed p/2 and q/2 occurred more than once, and tried to optimise by storing them in temporaries, not noticing that some basic manipulation would accomplish equivalent performance more cleanly.

Also, multiple redundant parentheses that served no useful purpose and clarified nothing suggested that the author was uncertain of operator precedence.

i.e. That formula appears to be lacking the craftsmanship I see elsewhere in your work.

It seemed clear to me that you had just copied over the formula, I guessed maybe you had more pressing things to do.

So I took it apart and put it back together again, out of curiosity. And then shared the result just in case it might be useful, even though it was a little trite -- just a few minutes' worth of work.

And you did rather bite my head off :)

Those are decent reworkings. The first one at least is much cleaner than the original. The second was more a curiosity.

Re-reading, everything I've said on this thread has been on point. I think initially you reacted to something that wasn't actually there.

ok enough pedantry from me...

π
