Extreme danger of perforating eardrum from audioDeviceIOCallback

EDIT: The original proposal was for the implementation of a (default-enabled) safety-guard. From discussing on #juce on TheAudioProgrammer discord, there is some contention regarding the proposal. Therefore, I reduce the proposal, and instead propose no more than that the JUCE team discuss the options for implementing such a safety guard.

I have damaged my eardrum with this code:

    void audioDeviceIOCallback(
        const float** inputChannelData, int numInputChannels,
        float** outputChannelData, int numOutputChannels,
        int numSamples
    ) override
    {
...

If I don’t explicitly zero the output buffer, occasionally (for me, 20%+ of runs) the output buffer is not zeros, and this makes such a loud static sound that it is dangerous even at macOS MINIMUM volume.

I had no idea it was possible to drive headphones/mac-speaker this hard. Surely it must risk destroying the speaker.

So, now it seems I have a perforated eardrum.
So please consider making this change:

(EDIT:)

myAudioDeviceManager.addAudioCallback(
    newCallback,
    false,  // zeroOutputBuffer : optional, default true
    false   // checkOutput : optional, default true
);

Or adding 2 flags to .initialise:

            myAudioDeviceManager.initialise(
                1,            // numInputChannelsNeeded
                2,            // numOutputChannelsNeeded
                nullptr,      // XML
                true,         // selectDefaultDeviceOnFailure
                device,       // preferredDefaultDeviceName
                &deviceSetup, // preferredSetupOptions
                false,  // zeroOutputBuffer : optional, default true
                false   // checkOutput : optional, default true
                );

Or even adding members to

            auto deviceSetup = AudioDeviceManager::AudioDeviceSetup();
            deviceSetup.sampleRate = 44100;
            deviceSetup.bufferSize = bufSize;
            deviceSetup._DANGER_zeroOutputBuffer = false;  // defaults to true
            deviceSetup._DANGER_checkOutput = false;  // defaults to true

Maybe the last pattern is cleanest, as it now becomes impossible to remove the (now) default protections without realizing what they are doing. Any code containing these overrides is explicit. Warning-via-code > warning-via-doc.

Now this is non-breaking, and an engineer concerned with optimizing performance could manually elect to disable the safeguards.

The key point is that they would need to explicitly write , false, false in order to disable the safeguards, which means they have read the API doc for the method and understand the implications.

Mattijs on TheAudioProgrammer Discord has offered a shim for preventing this.

I am smiling - and unsure how to interpret this… :smiley:

Love the concern… even if it feels like a joke.

But I would certainly not like a framework/library to clear buffers for me to “protect my ears”!

Not even using overridable default parameter values.

2 Likes

the lesson here is: always have a limiter after your audio apps or plugins. not only because of random data in buffers but also because of dsp bugs that can introduce infinite feedback and other crazy stuff
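Such a last-resort guard can be sketched in a few lines of plain C++ (names here are illustrative; a real brickwall limiter would add look-ahead and smoothed gain reduction rather than hard clipping):

```cpp
#include <algorithm>  // std::clamp
#include <cstddef>

// Hard-clip every sample to [-ceiling, +ceiling] so nothing louder can
// reach the device. Crude, but it bounds the damage from buggy DSP.
inline void safetyClip(float* samples, std::size_t numSamples, float ceiling = 1.0f)
{
    for (std::size_t i = 0; i < numSamples; ++i)
        samples[i] = std::clamp(samples[i], -ceiling, ceiling);
}
```

Dropping something like this in as the very last stage of the chain costs one cheap pass per block; note that std::clamp passes NaN through, so a fuller guard would also check std::isfinite.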

6 Likes

I completely agree with Florian! And, it’s our responsibility.

but I am absolutely against voting for the mentioned suggestion.

Still a big smile about the proposed “Shim” :smiley:

But there are sooo many more ways to mess with audio signals… this is just ridiculous.

The proposed function is a bit verbose, but I’m not sure what’s humorous about it. It is a good idea to have some sort of brickwall limiter, or a “go silent if any clipped or NaN samples” function at the end of your signal chain. Perhaps JUCE could provide one of these, but it’s not hard at all to create one.
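The “go silent” variant mentioned above might look like this in plain C++ (the function name and threshold are invented for illustration, not from any library):

```cpp
#include <cmath>    // std::isfinite, std::fabs
#include <cstring>  // std::memset
#include <cstddef>

// If any sample is NaN/inf or suspiciously loud, silence the whole block
// and report it, so the bug is audible as silence instead of as a scream.
inline bool muteIfSuspicious(float* samples, std::size_t n, float threshold = 2.0f)
{
    for (std::size_t i = 0; i < n; ++i)
        if (!std::isfinite(samples[i]) || std::fabs(samples[i]) > threshold)
        {
            std::memset(samples, 0, n * sizeof(float));
            return true;  // caller can log / assert on this
        }
    return false;
}
```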

My mindset is focused on plugins; maybe that’s not a good starting point for commenting, especially not in the way I did, making fun etc.

But, this Shim mentions debugging with headphones on…

I am sorry, but how stupid can you be to do that?
Did someone reply with the advice to simply not do that in the first place?

The fix is: don’t use earpieces or headphones connected to your computer while debugging.

Yes, that is a great piece of advice. Even when using speakers, I wouldn’t trust my DSP code to run through them without a brickwall limiter, at least before the DSP code has been tested, etc.

i can’t agree there. i often use headphones for debugging. it makes sense in my case because my workspace is also our living room. it’s all a bit DIY around here. a limiter is enough to deal with any potentially damaging audio data and it makes using headphones pretty safe as well.

i’m personally not against extending that method with a non-breaking new clear argument. it’s just a nice little reminder that can help prevent this from happening, just like the buffer-clearing code in JUCE’s default plugin template. and just as it comes with ScopedNoDenormals out of the box, it could come with a debug limiter at the end that would no longer exist in release, just like DBG

4 Likes

I’m sure a lot of people who get into audio programming don’t know or expect that buffers are not zeroed out by default, or that small mistakes can cause audio output to blow up. You only learn about this when it happens to you – especially when you have headphones on.

I once had a parameter that, due to rounding off of the slider position in the GUI, passed a frequency larger than Nyquist to a filter, which caused it to explode. That’s why I wrote that protectYourEars code.

I wouldn’t put this in a release build, but I definitely suggest using something like this (or a limiter) for debugging until you’re 100% sure you’ll never create screaming feedback. It is my nightmare that this kind of failure mode exists in plug-ins that ship to users. (And it happens: I know of at least one synth that had an issue like this.)

4 Likes

I get your points :+1:

And admit I am spoilt working in a former church tower with 40 cm thick walls (no reason for using headphones). And using speakers with built-in DSP/limiters. My monitor controller is at arm’s length.
Jeez… sorry guys :wink:

Cheers

One reason for using this function over a limiter is that it warns you when bad stuff happens. With a limiter you might be thinking your code is fine, until someone tries it without a limiter. :wink:

1 Like

IIRC Reaper mutes the channel at +18dBFS. This both protects your ears and gives you a warning as you’ll be forced to unmute (if it’s still +18dBFS I believe it just re-mutes immediately, no samples played!).
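That latching behaviour can be sketched as follows (plain C++; ChannelGuard is a hypothetical name, not a Reaper or JUCE API, and +18 dBFS converts to a linear amplitude of 10^(18/20) ≈ 7.94):

```cpp
#include <cmath>    // std::pow, std::fabs
#include <cstring>  // std::memset
#include <cstddef>

// Latch a mute when any sample exceeds the threshold; output stays silent
// until unmute() is called, and re-mutes on the next hot block.
struct ChannelGuard
{
    float thresholdLinear = std::pow(10.0f, 18.0f / 20.0f);  // +18 dBFS
    bool muted = false;

    void process(float* samples, std::size_t n)
    {
        if (! muted)
            for (std::size_t i = 0; i < n; ++i)
                if (std::fabs(samples[i]) > thresholdLinear)
                {
                    muted = true;
                    break;
                }
        if (muted)
            std::memset(samples, 0, n * sizeof(float));  // no samples played
    }

    void unmute() { muted = false; }
};
```

Because unmuting only clears the latch, a signal that is still hot simply trips it again on the next block, matching the described Reaper behaviour.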

2 Likes

Sorry, I wasn’t thinking clearly when I made that proposal. I’ll update the OP.

    void audioDeviceIOCallback (
        const float** inputChannelData, int numInputChannels,
        float** outputChannelData, int numOutputChannels,
        int numSamples,
        bool zeroOutputBuffer = true,
        bool checkOutput = true
    ) override
    {

Adding params to the callback doesn’t make sense. They would need to be given either when the callback is registered, like this:

void AudioDeviceManager::addAudioCallback (
    AudioIODeviceCallback* newCallback,
    bool zeroOutputBuffer = true,  // optional safeguard
    bool checkOutput = true        // optional safeguard
) { ...

… or (maybe better) during init:

            deviceManager.initialise(
                1,            // numInputChannelsNeeded
                2,            // numOutputChannelsNeeded
                nullptr,      // XML
                true,         // selectDefaultDeviceOnFailure
                device,       // preferredDefaultDeviceName
                &deviceSetup, // preferredSetupOptions
                false,  // zeroOutputBuffer : optional, defaults to true
                false   // checkOutput : optional, defaults to true
                );

With this solution:

  • any existing code still works, just with a minuscule performance hit.
  • it’s trivial for the developer to optimize production builds by adding , false, false to existing code.

So it is now safe by default.

@PeterRoos Your reaction is not sane. JUCE has surely attracted plenty of teenagers / uni students without the finances to kit themselves out with a pro rig or the experience to pre-empt such an issue. And surely you can see how a Mac user might imagine that their system is protected against causing severe and permanent physical damage. Can your advice be delivered to every developer ahead of time? How would you propose doing that?

Also some apps (such as mine that involves 3D positioning of audio) require headphones. It’s understandable that many would test with headphones. I always test first without. But this is an intermittent glitch. It only happened on the fourth run, and blindsided me. It depends on whether the memory region allocated for the output buffer is zeroed or not. You have to think outside of your own personal situation when you create a generic tool.

Maybe a malloc should be a calloc somewhere deep down in the system audio internals.

And this incident has done worse than perforate my eardrum. 20 years ago an assault left me with tinnitus, equivalent to a low conversational volume of > 10kHz static. Over the years I have come to barely notice it. But since this episode, it has returned to its original level of ferocity. FWIW it is a known technique to break the spirit of prisoners: tune a radio to static and leave it playing. It is a torture, to deny them their silence.

The observation that a limiter may not warn against (prefiltered) dangerous audio levels is pertinent.

AFAICS this is a strong proposal. I’m amazed that the framework has existed for over a decade without addressing this very fundamental concern.

Big red flag.

5 Likes

I already said “sorry guys”, maybe you missed it, so I repeat it here.

2 Likes

I just started programming audio applications recently, but thinking about a limiter was actually one of my first thoughts before I wrote the first line of code. But I have to say, talking to others that started around the same time as me, I was more or less the only one – and I think that was purely because, having been a musician far longer than a programmer, I had adopted “watching out for my hearing” into my so-called lifestyle.

With JUCE making a notable effort to target not only beginners in audio programming but also beginners in programming overall, I was actually very surprised that this wasn’t mentioned all over the place in the tutorials. It’s actually insane if you think about it that a typo, like one zero too many, or multiplying instead of dividing etc. (and of course listening with headphones) could cost you your good hearing forever.
And it was said before, but I feel this is also underestimated: for a newcomer I think it is very hard to conceptualise that macOS itself does not take care of this. Considering Apple’s high-profile efforts on making lives safer and developing health features, I think it is a (at least subconsciously) reasonable assumption that there would be some sort of protection built into your Mac, which is clearly not the case, to the point where you could even damage the hardware (possibly? not sure, never tried it, but it sure sounds like it). iPhone and iPad ship with exactly this feature.

And there is actually quite a lot JUCE can do about it. Mentioning it in the tutorials would at the very least raise awareness of the danger, which is half the way there, and it doesn’t affect the code base at all. In the code base, one could at least implement ready-to-use limiters (or checks/asserts) for the most important audio callbacks, like a wrapper for AudioIODeviceCallback that checks what really goes out into the world before sending it.

4 Likes

Man, that sucks. Hope the eardrum gets well soon… I know the tinnitus may take some time, after years of dumb experiments and dumber rehearsals.

I think there won’t be much support to have this enforced, at least for release builds. It’s a known issue, mentioned in the docs for audioDeviceIOCallback:

The initial contents of the array is undefined, so the callback function must fill all the channels with zeros if its output is silence. Failing to do this could cause quite an unpleasant noise!

and processBlock:

Note that if you have more outputs than inputs, then only those channels that correspond to an input channel are guaranteed to contain sensible data - e.g. in the case of 2 inputs and 4 outputs, the first two channels contain the input, but the last two channels may contain garbage, so you should be careful not to let this pass through without being overwritten or cleared.

It would be good to have these highlighted (not sure if that’s possible in Doxygen) and also mentioned in the tutorials.

The system can’t do much about it: system outs are integer, aren’t they? They’re already capped at 0 dBFS, so you’d need to shut them off, but the system wouldn’t have a way to separate a “bad” max_int from a “good” one.

1 Like

As the developer of an experimental fork of Pd, I’m also very concerned with limiting damage to users. After reading a thread about that on the Pd list, I decided to clip the output signal of my app, since it’s not clear whether it is limited on macOS, and I’m pretty sure it is not on Linux. As long as the framework is used for prototyping and/or by students, I think this is a highly serious request.

:+1:

2 Likes

There’s been a thorough discussion on #juce on TheAudioProgrammer Discord on this issue.

One partial solution that should be non-contentious is for JUCE to pre-zero the output audio buffer (the one that comes into audioDeviceIOCallback) when it is (re)allocated, which appears to happen only upon a route change. If it is initially zeroed, it can only be populated by user code within the callback, and this greatly reduces the likelihood of getting zapped.
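Until something like that lands in the framework, a similar guarantee can be had in user code with a thin wrapper. Here is a sketch using a hypothetical stand-in interface (deliberately not JUCE’s actual AudioIODeviceCallback, to keep it self-contained); it zeroes every block before the wrapped callback runs, which is the stronger per-callback variant of the zero-on-allocation idea:

```cpp
#include <cstring>  // std::memset
#include <cstddef>

// Hypothetical minimal callback interface (a stand-in, not a JUCE type).
struct IOCallback
{
    virtual ~IOCallback() = default;
    virtual void process(float** out, int numChannels, int numSamples) = 0;
};

// Wrapper guaranteeing user code always starts from silence, so an
// untouched output buffer can never emit allocator garbage.
struct ZeroingWrapper : IOCallback
{
    IOCallback* inner;
    explicit ZeroingWrapper(IOCallback* c) : inner(c) {}

    void process(float** out, int numChannels, int numSamples) override
    {
        for (int ch = 0; ch < numChannels; ++ch)
            std::memset(out[ch], 0, (std::size_t) numSamples * sizeof(float));
        inner->process(out, numChannels, numSamples);  // runs on silence
    }
};
```

The trade-off versus zero-on-allocation is one memset per block rather than one per route change, which is still cheap relative to typical DSP work.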

In my case, I used audioDeviceIOCallback only to catch mic-input. It did not occur to me to do anything with the output buffer. And as it is random (to a degree) whether the allocated memory is zeroed, I only got zapped on the fourth run.

This would have zero performance hit, and I don’t see a downside.

Protecting against user-generated dangerous audio output could be considered a separate issue.

1 Like

I’ve also been bitten by this, albeit not in such a severe way. The “problem” here is that the CoreAudio mixer appears to be float until it hits the actual hardware. So even if you turn down the volume of the Mac, you’re able to generate output higher than this “threshold” if you’re generating samples louder than 0 dBFS. I have a little loop at the end of my processing chain which jasserts that all output samples are neither NaN, nor inf, nor greater than 5 (their abs values, obviously). So the debugger halts if I’m generating fishy output. And this happens more often than you’d think. It’s enough to make an off-by-one mistake when calculating a delay line index and you have infinite feedback.

I actually have this check after each FX device so that it’s relatively easy to track down the culprit.

IMHO such a debug check would be a perfectly reasonable addition. If your code generates NaNs, Infs or excessively loud samples, it’s bound to be some sort of programming error. If you really intended to generate such loud signals then perhaps defining JUCE_DISABLE_OUTPUT_SANITY_CHECKS could be a way to disable this.

Of course this alone doesn’t protect anybody against issues that only appear in release builds, like uninitialised variables, but it’s still better than nothing and it doesn’t have any overhead in release builds.

So yeah, I think adding this to the framework is just the responsible thing to do.
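A sketch of that debug-only check in plain C++ (the limit of 5 follows the post above; JUCE_DISABLE_OUTPUT_SANITY_CHECKS is the hypothetical opt-out macro proposed here, and NDEBUG stands in for a release build):

```cpp
#include <cassert>
#include <cmath>    // std::isfinite, std::fabs
#include <cstddef>

// True iff every sample is finite and below an "obviously a bug" level.
inline bool outputLooksSane(const float* samples, std::size_t n, float limit = 5.0f)
{
    for (std::size_t i = 0; i < n; ++i)
        if (!std::isfinite(samples[i]) || std::fabs(samples[i]) > limit)
            return false;
    return true;
}

// Halts the debugger on fishy output; compiles to nothing in release.
#if !defined(NDEBUG) && !defined(JUCE_DISABLE_OUTPUT_SANITY_CHECKS)
  #define CHECK_OUTPUT(buf, n) assert(outputLooksSane((buf), (n)))
#else
  #define CHECK_OUTPUT(buf, n) ((void) 0)
#endif
```

Dropping CHECK_OUTPUT after each FX device, as suggested above, makes it easy to track down which stage produced the bad samples.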

4 Likes

Here’s some efficient (I hope) code to sanitize an AudioBlock:

#include <algorithm>  // std::min, std::max
#include <cmath>      // std::isfinite
#include <limits>     // std::numeric_limits

struct Codes {
    enum { ok, finiteRange, hasInf, COUNT };
};

int codeFor(const float* v, size_t N) {
    // expect most blocks to be correct & require
    // going no further than this first pass
    size_t nGood = 0;
    for(const float* p = v; p < v + N; ++p)
        nGood += (size_t)(*p >= -1.f && *p <= 1.f);

    if(nGood == N)
        return Codes::ok;

    // check for NaN / inf / HUGE_VALF
    size_t nFinites = 0;
    for(const float* p = v; p < v + N; ++p)
        nFinites += (size_t) std::isfinite(*p);

    return nFinites == N ? Codes::finiteRange : Codes::hasInf;
}

#ifdef DEBUG
#  define DEBUG_ONLY(x) x
#else
#  define DEBUG_ONLY(x)
#endif

// ^ Can't use JUCE_DEBUG for lines creating variables as it creates fresh scope

void sanitizeBlock(AudioBlock<float> blk, float clampAt = 2.f) {
    int results[Codes::COUNT] = {0};
    for(size_t ch = 0; ch < blk.getNumChannels(); ++ch)
        ++results[codeFor(blk.getChannelPointer(ch), blk.getNumSamples())];

    if(results[Codes::hasInf] > 0) {
        DBG("⚠️ ⚠️ infinite value(s) in AudioBlock, zeroing");
        blk.clear();
    }
    else if(results[Codes::ok] < (int) blk.getNumChannels()) {
        DEBUG_ONLY(float _min = std::numeric_limits<float>::max());
        DEBUG_ONLY(float _max = std::numeric_limits<float>::lowest());  // NB: lowest(), not min() (smallest positive float)
        DEBUG_ONLY(int nClamped = 0);
        for(size_t ch = 0; ch < blk.getNumChannels(); ++ch) {
            auto* v = blk.getChannelPointer(ch);
            for(size_t i = 0; i < blk.getNumSamples(); ++i) {
                float& x = v[i];
                DEBUG_ONLY(float _x = x);
                DEBUG_ONLY(_min = std::min(_min, x));
                DEBUG_ONLY(_max = std::max(_max, x));
                x = std::max(-clampAt, std::min(clampAt, x));
                DEBUG_ONLY(nClamped += (int)(_x != x));
            }
        }
        DBG("  ⚠️ Clamped " << nClamped << " samples, ranging [" << _min << ", " << _max << "]");
    }
}

Now I can use AudioBlocks in my audio render callback:

// EDIT: removed quitting flag as per rincewind's observation below
class Mic
: public AudioIODeviceCallback
{
    // bool quitting = false;
public:
    //~Mic() {
    //    quitting = true;  // avoid crashes in audio render callback
    //}
    void audioDeviceIOCallback (
        const float** inputChannelData, int numInputChannels,
        float** outputChannelData, int numOutputChannels,
        int numSamples
    ) override
    {
        // Note: outputChannelData[0] can be non-null even when numOutputChannels = 0
        if(numOutputChannels == 0)
            return;

        AudioBlock<const float> inBlock  (inputChannelData,  (size_t) numInputChannels,  (size_t) numSamples);
        AudioBlock<float>       outBlock (outputChannelData, (size_t) numOutputChannels, (size_t) numSamples);

        // Note:
        //   It is CRITICAL to ZERO the output buffer
        //   On occasional runs (30%?) outputChannelData is filled with noise
        outBlock.clear();

        // don't attempt to process any audio after d'tor is called,
        // as member objects may have already been destroyed.
        // (Yes, callback still fires after d'tor hits)
        //if(quitting)
        //    return;

        // TODO: Your processing here

        sanitizeBlock(outBlock);
    }
};