Instrumenting jasserts?

Hello everyone,

I’m nearing the release of my first plugin, and would like to be able to collect data whenever a jassert fails, likely through juce::Analytics. This would be very useful, as it would let me know not only when something goes wrong in my own code (instrumented via juce::Analytics), but also within JUCE classes themselves.

Given that I taught myself everything I know about C++ through JUCE, I’m at a loss as to how to do this. In Ruby (my language of choice before picking up C++), this could be done through monkey patching.

So I’m wondering, has anyone done something like this? How could this be done in C++?

I can think of another way around this, which is to somehow attach a listener to stderr (or wherever jassert logs end up) and trigger juce::Analytics events whenever the logs match a failed assertion. Not sure how easy that would be, but it doesn’t sound like it’ll require macros or (what seems to me like) C++ wizardry :slight_smile:

Curious to hear people’s thoughts! Tips on other techniques for collecting errors are welcome as well.


Jasserts are not compiled into release builds. They’re not meant for logging or diagnosing problems in already-released products, but rather for finding issues during development with debug builds.


Ah, I see. So I guess this entire approach is on the wrong track.

I’ll perhaps hand-roll my own global assertion function that uses juce::Analytics under the hood to more thoroughly instrument my own code. I just wish I were able to capture when something within JUCE itself fails an assertion, but that’s perhaps reaching too deep into a library abstraction :slight_smile:
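For what it’s worth, the hand-rolled approach could look something like this minimal sketch in plain C++. The reporter hook and the `REPORTED_ASSERT` name are entirely made up; in a real plugin the default lambda would presumably forward to juce::Analytics rather than print to stderr:

```cpp
#include <functional>
#include <iostream>
#include <string>

// Hypothetical global hook: swap this lambda for one that fires an
// analytics event in a real plugin.
inline std::function<void (const std::string&)>& assertionReporter()
{
    static std::function<void (const std::string&)> reporter =
        [] (const std::string& msg) { std::cerr << "assertion failed: " << msg << '\n'; };
    return reporter;
}

// A jassert-like macro that reports failures in every build configuration,
// rather than compiling away in release builds.
#define REPORTED_ASSERT(cond)                                              \
    do                                                                     \
    {                                                                      \
        if (! (cond))                                                      \
            assertionReporter() (std::string (#cond) + " at " + __FILE__   \
                                 + ":" + std::to_string (__LINE__));       \
    } while (false)
```

Replacing the default reporter with one that logs an analytics event turns every failed check into a trackable signal, without touching any call sites.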


What sort of thing are you hoping to catch with this approach?

To me it sounds like the area you need to invest time and energy into is testing. That includes automated testing (unit tests, integration tests, end-to-end tests) and manual testing (QA).

I’d also strongly suggest adopting the principle of Parse, Don’t Validate, which can turn a lot of would-be runtime errors into compilation errors, and thus often removes any need for an assertion or an exception to be thrown.
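To make that concrete with a made-up example (not from the plugin in question): instead of asserting everywhere that a raw int is a valid channel count, parse it once at the boundary into a type that can only hold valid values:

```cpp
#include <optional>

// A value of this type is valid by construction, so downstream code
// needs no assertions or re-checks. The 1..64 bound is arbitrary,
// purely for illustration.
class ChannelCount
{
public:
    static std::optional<ChannelCount> parse (int raw)
    {
        if (raw < 1 || raw > 64)
            return std::nullopt;   // invalid input rejected once, at the edge
        return ChannelCount (raw);
    }

    int get() const { return value; }

private:
    explicit ChannelCount (int v) : value (v) {}
    int value;
};
```

Any function taking a `ChannelCount` simply cannot receive an out-of-range value, so the check never needs repeating.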


Hello fellow rubyist!

This thread might be of interest to you re: crash reporting:

Just as another data point, my personal take is that runtime issues are far less interesting/common in modern C++ than in dynamic languages like Ruby, partly due to the confidence given by the type system and compiler, and partly due to the lack of all those sweet, sweet runtime monkey-patching abilities (which is def sad at times!).

I sort of see asserts as “casual wannabe tests” that help remind future-you what application conditions you expect to be met in code (rather than “code correctness”). I had a blog post somewhere about the usage of asserts vs. unit tests, but can’t find it now…

I plan on eventually adding some runtime logging, as there are certain types of issues it would be nice to monitor (fps, issues with callbacks, etc.)


Hey James :wave:t3: Thanks for the read. I’ll check it out.

I’m already investing quite a bit in unit tests, end-to-end tests (thanks to Focusrite/juce-end-to-end, set up in CI), and lots and lots of manual testing (:smiling_face_with_tear:). I plan on doing a lot more, but I think runtime assertions are a net-positive tool in a reliability toolkit.

I look at runtime assertions as a way to detect runtime conditions that are abstracted away by JUCE but are still interesting to uncover, since they point to behaviour that is unaccounted for even when handled gracefully. I expect these to arise from the sheer number of combinations of DAWs/hosts, plugin formats, operating systems, and other runtime conditions the plugin will meet in the real world. Ideally my testing would cover all of them, but that can never be the case in practice. So a mechanism to uncover any such conditions occurring out in the wild would complement automated, systematic testing.

For instance, I have this simple lambda somewhere in my code:

auto connectNodes = [this] (auto& node, auto& otherNode, int channel) {
  auto result = chain.addConnection ({ { node->nodeID, channel }, { otherNode->nodeID, channel } });
  jassert (result);
};

My plugin is tested in artificial conditions under which the assertion here fails, so that behaviour is handled gracefully (no crashes, no bad state as a result). However, should this happen in the wild, I want to be notified.
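One way to get that notification without giving up the debug-time jassert is a tiny wrapper that evaluates the condition once and hands failures to a reporting callback. This is just a sketch with invented names; in the plugin, the callback could fire a juce::Analytics event, and a jassert on the same condition could sit alongside the report call for debug builds:

```cpp
// Hypothetical helper: evaluates the condition in all build configurations
// and invokes the report callback on failure, returning the result so the
// caller can still branch and degrade gracefully.
template <typename Report>
bool checkAndReport (bool condition, const char* what, Report&& report)
{
    if (! condition)
        report (what);   // e.g. fire an analytics event in release builds
    return condition;
}
```

The lambda above could then do `if (! checkAndReport (result, "addConnection failed", reporter)) return;` and still handle the failure gracefully.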

I could go on and on about the nuances of all this above, but I thought I’d elaborate a little bit on why I’m trying to do such a thing in the first place :slight_smile:

Thanks for your take @sudara. Happy to see a fellow Rubyist here as well. I’ll check out the thread you linked and will keep an eye on any runtime logging work you will publish in the future.

Will try to find the blog post you allude to as well.

PS. Big fan of your work! I used pamplejuce when I first got started with JUCE and it’s been super helpful.

I agree about jasserts going hand-in-hand with unit tests. Often, my unit tests catch errors not by actual REQUIRE() statements failing, but by internal assertions failing, which can be just as useful.

In this context, a jassert logger would be somewhat useful, because usually you’d be running your unit tests in debug config anyway.

FWIW, I don’t think that a jassert logging mechanism for production code really makes sense. By the time you ship your code, no assertions should be hit under any circumstances.

There’s also an interesting discussion to be had about assertions vs throwing exceptions. Assertions are generally not recoverable at runtime, but exceptions can be.

My general rule is that assertions are assumptions, and exceptions are unknowns.

So I use assertions to say, “I’m assuming this to be true, so I’m not going to actually handle this error. If this occurs, you’re probably using the API wrong.”

Then I’ll throw an exception to say, “You did everything right when using this API, yet something still went wrong and I don’t know how to handle it, so I’m bailing out.”
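A tiny sketch of that rule (an invented function, not from any post above): the null buffer is an assumption about correct API use, so it gets an assert; the index might come from untrusted runtime data, so it gets an exception the caller can recover from:

```cpp
#include <cassert>
#include <stdexcept>

// assert = assumption (API misuse); exception = unknown (recoverable).
double readSample (const double* buffer, int numSamples, int index)
{
    assert (buffer != nullptr);   // callers must never pass null: assumption

    if (index < 0 || index >= numSamples)                // may come from outside:
        throw std::out_of_range ("sample index outside buffer"); // unknown

    return buffer[index];
}
```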