A suggestion

While looking at some of the changes in the codebase, I came across several methods that contained large repetitive chains of tests, e.g.

...
if (abbr == "ACN15") return ambisonicACN15;
if (abbr == "ACN16") return ambisonicACN16;
if (abbr == "ACN17") return ambisonicACN17;
if (abbr == "ACN18") return ambisonicACN18;
if (abbr == "ACN19") return ambisonicACN19;
if (abbr == "ACN20") return ambisonicACN20;
if (abbr == "ACN21") return ambisonicACN21;
...

and the reverse, etc.

This is a rather painful linear search.

I’m wondering why neither a binary search nor a dictionary was used for this? Either of these would have been both much cleaner and much faster.

If someone decides to fix this, a simple regex could be used to convert this code into a suitable form to initialize a dictionary.
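
For example, a one-off conversion along these lines would do it (the pattern and identifier names here are just an illustration of the idea, not the actual code):

#include <iostream>
#include <regex>
#include <string>

int main()
{
    // One line of the original source...
    const std::string line = "if (abbr == \"ACN15\") return ambisonicACN15;";

    // ...rewritten into an entry suitable for initializing a dictionary.
    const std::regex pattern { R"re(if \(abbr == "(\w+)"\) return (\w+);)re" };
    std::cout << std::regex_replace (line, pattern, "{ \"$1\", $2 },") << '\n';
    // prints: { "ACN15", ambisonicACN15 },
}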


I totally disagree! But it’s an interesting seed for discussion, as it does illustrate how important it is to be clear about exactly which factors matter when you’re trying to improve code.

A “perfect” optimisation is one where you end up with shorter, more readable code that runs faster in all cases, uses less memory at runtime, and is a smaller binary size. But that’s rarely possible, so you have to pick your battles on a case-by-case basis.

In this case, speed is irrelevant. Although being faster is never a bad thing, it needs to be bottom of the priority list for this bit of code.

Compiled code size could possibly be something we care about here. Lots of people may never need this function, or will maybe call it just once, so if it took up megabytes of space in your binary, that’d be a bit annoying. (Though not a huge deal, really)

Runtime memory footprint may matter a little, but again it’s not a huge factor to worry about.

The only really important factors to think about if you were trying to improve this would be:

  • How likely is it that there are bugs lurking in it?
  • If the list of constants changes in the future, how likely is it that someone will make a mistake when updating the code?

You suggested using a regex or dictionary instead. But if you did that:

  • It would probably be slower (it depends on the number of items and cache locality, but my gut feeling is that you’d need enormous lists before a map beat a small chain of ‘if’ statements)
  • It would use more memory at runtime
  • It’d involve an extra step to initialise the table, presumably during static initialisation, and you’d have to think about thread safety between initialising the table and doing lookups.
  • It couldn’t be made constexpr, and even if the linker could see that the function is never called and could remove it, it’d still have to retain the code for initialising the map
  • A regex would need to be careful not to match patterns other than the valid ones. That to me smells like a source of non-obvious bugs.
  • Unless the list of items is gigantic, either a dictionary or regex would make the binary bigger, because those classes are going to be template specialisations, and maybe pull in all kinds of other library dependencies too.
  • If you don’t use a switch to do the enum match, then the compiler can’t warn you about new enum values that you might need to add.

The only bad aspect I can see in the long explicit lists is that there’s some repetition, so that adding a new item involves editing in two separate places. But there are a couple of tricks that could be used to avoid that, e.g.

  • use an X macro. It’s a macro, which is bad, but this would be the most bulletproof solution, since it’d generate code that retains the switch statement and can catch missing items (there’s a rough sketch of this just after this list).
  • a constexpr array of mappings, and a simple pair of functions to search it. This is pretty good. Short, clean, fast code. But no switch statement, unfortunately.
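
To make the X-macro idea concrete, here’s the kind of thing I mean. This is only a rough sketch with made-up names (not the actual JUCE identifiers), and real code would handle unknown abbreviations properly:

#include <string_view>

enum class Channel { ambisonicACN15, ambisonicACN16, ambisonicACN17 };

// The single list of mappings. Adding a new item means adding one line here.
#define CHANNEL_LIST(X) \
    X (ambisonicACN15, "ACN15") \
    X (ambisonicACN16, "ACN16") \
    X (ambisonicACN17, "ACN17")

constexpr Channel fromAbbreviation (std::string_view abbr)
{
    #define FROM_ABBR(id, name) if (abbr == name) return Channel::id;
    CHANNEL_LIST (FROM_ABBR)
    #undef FROM_ABBR

    return Channel::ambisonicACN15; // fallback: real code would signal "unknown"
}

constexpr std::string_view toAbbreviation (Channel c)
{
    switch (c) // the compiler can warn if a new enum value is missing from this switch
    {
        #define TO_ABBR(id, name) case Channel::id: return name;
        CHANNEL_LIST (TO_ABBR)
        #undef TO_ABBR
    }

    return {};
}

static_assert (fromAbbreviation ("ACN16") == Channel::ambisonicACN16);
static_assert (toAbbreviation (Channel::ambisonicACN17) == "ACN17");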

Not sure why you’re harping on about the dangers of the regex. The purpose of the regex was to automate the conversion of the original source code into the set of assignments for a dictionary, thereby reducing the risk of mistakes compared with doing it manually. There’s no regex involved in the ultimate code that would be compiled.

If you don’t care about speed, then the technique is even safer, because you could do the reverse lookup by searching the values rather than the keys. That would essentially be the same linear search, but you’d only have one list (dictionary) to manage.
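
A minimal sketch of that idea, with made-up names rather than the real ones:

#include <map>
#include <optional>
#include <string>

enum class Channel { ambisonicACN15, ambisonicACN16, ambisonicACN17 };

// The single list to maintain.
static const std::map<std::string, Channel> channelForAbbreviation {
    { "ACN15", Channel::ambisonicACN15 },
    { "ACN16", Channel::ambisonicACN16 },
    { "ACN17", Channel::ambisonicACN17 },
};

std::optional<Channel> fromAbbreviation (const std::string& abbr)
{
    auto it = channelForAbbreviation.find (abbr);

    if (it != channelForAbbreviation.end())
        return it->second;

    return std::nullopt;
}

std::optional<std::string> toAbbreviation (Channel c)
{
    // Reverse lookup: a linear scan over the values of the same map.
    for (const auto& entry : channelForAbbreviation)
        if (entry.second == c)
            return entry.first;

    return std::nullopt;
}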

Consequently, changing the list doesn’t require touching any functions, so it’s cleaner and safer to maintain (reducing bugs!), which would be near the top of my list of requirements.

While I agree that in this situation neither runtime nor size matters much, both are reduced in this case.

Sorry, I was trying to be helpful with my detailed answer, but you don’t seem to have either understood or believed what I was saying (not sure which!)

No… the whole point I was trying to explain was that runtime memory, exe size and speed would almost certainly all be considerably worse if you used a runtime-generated dictionary instead of a plain-old-list-of-ifs.

I’d have to disagree with this statement when it comes to speed.
A dictionary will likely use a hash table and be able to find an item very quickly, while a simple list of ifs is obviously O(n).

I’d also like to add that maintaining this kind of code is very hard: you don’t get to look things up in reverse unless you add another huge list of ifs, and don’t even get me started on those early returns :slight_smile:

This statement makes me suspect that you haven’t spent any time benchmarking similar bits of code. You’ll probably be surprised if you actually measure the performance of the two different approaches.
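
If anyone wants to try it, a rough-and-ready comparison might look something like this. It’s only a sketch (made-up names, a tiny fixed set of queries, no serious attempt to control for the optimiser), so treat any numbers it prints with suspicion:

#include <chrono>
#include <cstdio>
#include <string>
#include <unordered_map>

static int lookupWithIfs (const std::string& abbr)
{
    if (abbr == "L")     return 0;
    if (abbr == "R")     return 1;
    if (abbr == "ACN15") return 15;
    if (abbr == "ACN16") return 16;
    if (abbr == "ACN17") return 17;
    return -1;
}

static const std::unordered_map<std::string, int> table {
    { "L", 0 }, { "R", 1 }, { "ACN15", 15 }, { "ACN16", 16 }, { "ACN17", 17 },
};

static int lookupWithMap (const std::string& abbr)
{
    auto it = table.find (abbr);
    return it != table.end() ? it->second : -1;
}

template <typename Fn>
static double millisecondsFor (Fn&& lookup)
{
    const std::string queries[] = { "L", "R", "ACN16", "unknown" };
    volatile int sink = 0;

    auto start = std::chrono::steady_clock::now();

    for (int i = 0; i < 1000000; ++i)
        sink = sink + lookup (queries[i % 4]);

    auto elapsed = std::chrono::steady_clock::now() - start;
    return std::chrono::duration<double, std::milli> (elapsed).count();
}

int main()
{
    std::printf ("if chain:      %.2f ms\n", millisecondsFor (lookupWithIfs));
    std::printf ("unordered_map: %.2f ms\n", millisecondsFor (lookupWithMap));
}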

No!

Sorry, but I thought it was pretty widely understood that in practice, the fact that a map has a better big-O complexity only becomes relevant when you have large numbers of items to search.

There was a famous bit of benchmarking that Bjarne Stroustrup did a few years ago where he showed that until you had more than about 40,000 items, a std::vector was faster than any other container.

You’re right. I did not implement both versions and measure the times, but your comment makes me suspect that you haven’t done that either :slight_smile:

@jules @t0m Yes - I know about the overheads of calculating hashes etc, but the type of the items matters as well.

"ACN15"
"ACN16"
...
"ACN21"

These are strings, and to reach, say, the “ACN21” item we have to compare against all the similar ACN… entries before it, which means comparing the first few characters of each of them.

@t0m is right - I haven’t measured this, and frankly it really doesn’t matter that much in this particular scenario, which basically renders the argument that linear search is faster for small sets of items moot. The more important aspect is code readability and reusability. If you ever wanted to search the items the other way around, you’d have to create yet another one of these mammoth if chains in reverse.

@jules - do you have a reference to that Stroustrup test by any chance? 40,000 items is a lot, and I think it will come down to the TYPE of item you’re searching for, not just the type of container.

I’d love to see that study — I’m struggling to believe that a linear search for a string is faster than a dictionary lookup until you have more than 40,000 items.

In any case, the maintenance issue still stands.

@jules @t0m btw… JUCE is one of the best, if not THE best, written libraries I have ever used in my 30+ year programming career, so this is all just good fun and we’re nitpicking on purpose :slight_smile:

Not me — I think that unless the speed is critical, e.g. you’re inside a processBlock and instrumentation can really demonstrate that a linear search will be faster, the maintenance cost is much more relevant, and a single dictionary that can be searched by value (when necessary) is safer than managing two separately written functions with a long sequence of if tests.

Also, I remember learning (probably 40 years ago as an undergrad) that the cross-over point for linear search vs binary was a very small number, as low as 10-20 items, for example. Obviously, things have changed due to caching, lookahead, and other hardware optimizations, but I did find an article where somebody measured this precise case and found that the switchover from linear to binary was really low (the only amusing part being their surprise that linear search was faster for fewer than 8 items).

https://terrainformatica.com/2017/10/15/when-linear-search-is-faster-than-stdmapfind-and-stdunordered_mapfind/

By far the most common lookups in this case are “L” and “R”, which are right at the top of the list of ifs. That’s going to be hard to beat.


Not me

So you’re nitpicking on accident, then? :smiley:

More seriously, is there any tangible problem being caused by this piece of code?

This debate feels largely academic given the lack of actual problems described here. Personally, I’d rather the JUCE team not spend their time addressing non-issues. YAGNI, and all that.

Is there a problem you are experiencing that this change would solve?


I think the reason I felt it was worth jumping in here was that regardless of whether or not the code in question had a problem that needed solving, the OP’s suggestion illustrated some common misconceptions about performance and the factors that are genuinely important when you’re trying to improve, clean up or optimise code. And I think it’s worth pointing out that the suggestion given isn’t as sensible as it may seem to a lot of people.

No! Again, that’s completely wrong. Like I said in my original post, the most helpful thing you could do in a situation like this that would make it easily maintainable is to make sure there’s a switch statement containing all the cases. Then your compiler will warn you if the list grows. That’ll happen for the existing code, but not if you used a map.

So when I said that a runtime-populated map is worse than the current code in terms of speed, size and complexity, I should also have added maintenance to that list.

Literally the only possible advantage I can think of for a map would be to make the source code shorter. But I gave some other suggestions above for ways to make the code even more compact, without compromising in any of the other ways.

Well, in an audio callback you’re never going to be using strings! And probably no std containers either, since none of them are realtime safe. And you’ll be profiling anyway, so will hopefully be optimising for performance based on evidence rather than general best practice.

In other code where it’s not worth benchmarking, it does help to understand all the things that impact code quality, and to develop a good instinct for how to find the best balance.


OK - we’ll just agree to disagree on this. I didn’t expect this amount of debate; it was just something I noticed while reviewing the changes, and I’m always skeptical when I see lots of essentially repeated code.

Well, sure, but disagreeing with a fact doesn’t make it any less true!

There are lots of times where two different approaches to a piece of code are equally good and it’s purely a matter of taste/opinion which one you choose.

This isn’t one of them.

Like I tried to explain multiple times, a dictionary or map would make this code worse in literally every way I can think of.

You’re 100% right to be massively sceptical when you see repeated code! I’ve been banging on about D.R.Y for years. Maybe what makes this thread interesting is that it’s an example where every other “good programming” rule pulls in the opposite direction from D.R.Y., so they win. However, like I said above, there are other (very simple!) ways that it could be tweaked to make it D.R.Y without impairing its size, performance, maintainability or readability.

Clearly you’re not living in the US :face_with_symbols_over_mouth:

And I have to disagree — we could debate the speed question and apparently both of us can point to benchmarks that support our respective positions but since you’ve already pointed out that speed is not an issue in this particular case, then it’s hard to see why a map where you only need to define the keys once along with two functions that never have to change is not better than having to write two functions, each with N cases to test and then having to (remember to) modify (both of) them whenever a new item is added.

(P.S. Sir Humphrey would have been proud of that last sentence :slight_smile: )

Extra points for the Yes Minister reference :slight_smile:

But I never suggested that keeping the two functions with duplicated copies of the list was the best thing to do. What I said in my first post was that a best-of-all-worlds optimisation would be to boil it down to one copy of the list, but to implement it as either an X macro, or a constexpr static table, or a type pack, or really any kind of compile-time list, but absolutely NOT to use a dynamically-instantiated dictionary or map.
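
For illustration, the constexpr-table version might look something like this. Again it’s only a sketch with made-up names, not the actual code, and the “not found” handling is glossed over:

#include <array>
#include <string_view>
#include <utility>

enum class Channel { ambisonicACN15, ambisonicACN16, ambisonicACN17 };

// The single compile-time list of mappings.
constexpr std::array<std::pair<std::string_view, Channel>, 3> mappings { {
    { "ACN15", Channel::ambisonicACN15 },
    { "ACN16", Channel::ambisonicACN16 },
    { "ACN17", Channel::ambisonicACN17 },
} };

constexpr Channel fromAbbreviation (std::string_view abbr)
{
    for (const auto& m : mappings)
        if (m.first == abbr)
            return m.second;

    return Channel::ambisonicACN15; // fallback: real code would signal "unknown"
}

constexpr std::string_view toAbbreviation (Channel c)
{
    for (const auto& m : mappings)
        if (m.second == c)
            return m.first;

    return {};
}

static_assert (fromAbbreviation ("ACN17") == Channel::ambisonicACN17);
static_assert (toAbbreviation (Channel::ambisonicACN16) == "ACN16");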

And a common use case of this kind of test suite in the real world would be to add specific one-line tests that check for regressions of specific bugs, potentially with project-management ticket references and/or source-code commit references (which could in turn be automated to red-flag any future failures). Tweaking a regex to test for these edge cases loses that granularity, so +1 (or even more than +1) for Jules’ comment about speed being irrelevant.

I realised I should have added a brown-nose emoji there, should one exist :slight_smile:
