Profiling CPU peaks with Xcode Instruments


Hi everybody,

I am currently chasing down some drop-outs that occur in my plugin, even though the CPU meter averages around 20%.

I am using the Time Profiler from Instruments, and it works OK for showing me the average CPU usage of every function. But are there any tricks to get the peak usage (something like “only show me the function calls that lasted longer than the buffer duration”)? The spikes occur only every 20 seconds or so, which makes them kind of hard to track down…


In Tracktion we use a trick where we have a class called a GlitchDetector, which is a simple RAII class that measures the time spent while it’s in scope, and asserts if it lives for more than a few milliseconds.

Using a macro, we’ve scattered these in all the audio thread functions where a spike might occur, and although it’s normally disabled in a release build, we can turn them on or add more of them if we’re trying to track down a glitch. Very handy.
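For anyone wanting to try this, here is a minimal sketch of the RAII idea described above, using std::chrono instead of JUCE so it stands alone. All names (`GlitchDetector`, `ENABLE_GLITCH_DETECTION`, the 3 ms default) are made up for illustration, not Tracktion's actual code:

```cpp
#include <cassert>
#include <chrono>
#include <cstdio>
#include <thread>

// Records the time on construction and, on destruction, complains if the
// object lived longer than the allowed budget.
class GlitchDetector
{
public:
    explicit GlitchDetector (const char* name, double maxMs = 3.0)
        : name_ (name), maxMs_ (maxMs),
          start_ (std::chrono::steady_clock::now()) {}

    ~GlitchDetector()
    {
        const double elapsedMs = std::chrono::duration<double, std::milli>
                                     (std::chrono::steady_clock::now() - start_).count();
        if (elapsedMs > maxMs_)
        {
            ++glitchCount; // in a debug build you would assert here instead
            std::fprintf (stderr, "Glitch in %s: %.2f ms\n", name_, elapsedMs);
        }
    }

    static int glitchCount; // number of budget overruns seen so far

private:
    const char* name_;
    double maxMs_;
    std::chrono::steady_clock::time_point start_;
};

int GlitchDetector::glitchCount = 0;

// Scatter these via a macro so they compile away in deployment builds:
#if ENABLE_GLITCH_DETECTION
 #define ADD_GLITCH_DETECTOR(name) GlitchDetector detector_ (name)
#else
 #define ADD_GLITCH_DETECTOR(name)
#endif
```

The counter is only there to make the behaviour observable; the real value is the assert/log in the destructor.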


Hi Jules,

Interesting trick! Do you think that works reliably in a debug build? Doesn’t it need something that works in a development-release (profiling) build but is disabled in a deployment-release build?

(For example, in a sampler-type plugin I’ve found that some of the logic in controlling voice allocation can be “slow” in a debug build that is optimised away in release.)


Damn, that’s handy. Do you keep any information (something like a static pointer) about the most recently triggered GlitchDetector, to avoid getting multiple assertions down the stack trace?

I think I have some scattering to do :slight_smile:

@martin, I think you can use it in your release build and #ifdef it away before you build the deployment version (note that the detector must live in the enclosing function scope, not in braces of its own, or it would be destroyed immediately):

#if USE_GLITCH_DETECTION
 #define ADD_GLITCH_DETECTOR(name) GlitchDetector ge (name)
#else
 #define ADD_GLITCH_DETECTOR(name)
#endif

Then in your to-be-profiled function, just add

    ADD_GLITCH_DETECTOR ("MyFunction");

as the first line. I’ll write one myself and share it (it seems to be trivial), unless Jules wants to migrate this class to JUCE (hint, hint)…

BTW, what is the most recommended function for getting reasonably accurate timing at reasonable cost when measuring in the audio thread (not only for debugging, but in production code that has a CPU meter)?

I am using the following code (this is rather legacy code that was written two years ago):


but there are many other functions in the Time class which practically do the same but with different accuracy. I am guessing

static double Time::getMillisecondCounterHiRes()	

does the same thing?
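For a portable comparison point (outside JUCE), a monotonic high-resolution millisecond counter and the “fraction of the buffer budget” calculation a CPU meter needs can be sketched like this; `nowMs` and `cpuLoad` are hypothetical names, not JUCE API:

```cpp
#include <cassert>
#include <chrono>
#include <cmath>

// Monotonic millisecond counter, comparable in spirit to a hi-res counter:
// steady_clock is guaranteed never to jump backwards, which matters when
// timing an audio callback.
inline double nowMs()
{
    using namespace std::chrono;
    return duration<double, std::milli> (steady_clock::now().time_since_epoch()).count();
}

// CPU load as the fraction of the buffer period spent inside the callback:
// e.g. 5 ms of work against a 512-sample buffer at 44.1 kHz (~11.6 ms) is ~43%.
inline double cpuLoad (double callbackMs, double bufferSize, double sampleRate)
{
    const double budgetMs = (bufferSize / sampleRate) * 1000.0;
    return callbackMs / budgetMs;
}
```

Whatever counter you use, the important property for this job is monotonicity and sub-millisecond resolution, not wall-clock accuracy.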


Alright, here is my shot:

/** A small helper class that detects a timeout.
*
*   Use this to track down drop-outs by adding it to time-critical functions in the audio thread, and set a
*   global time-out value using either setMaxMilliSeconds() or, more conveniently, setMaxTimeOutFromBufferSize().
*   Then create an object of this class on the stack in your function; when it is destroyed as the function
*   returns, it measures the lifespan of the object and writes to the current system Logger if it exceeds the
*   time-out.
*
*   The Identifier you pass in should be unique. It is used to deactivate logging until a GlitchDetector with
*   the same ID is created again. This makes sure you only get one log per glitch (if you have multiple
*   GlitchDetectors in your stack trace, they would all fire when a glitch occurred).
*
*   You might want to use the macro ADD_GLITCH_DETECTOR(name) (where name is a simple C string literal that is
*   turned into a static Identifier), because it can be excluded from deployment builds using
*
*       #define USE_GLITCH_DETECTION 0
*
*   The macro can only be used once per function scope, but this should be OK...
*/
class ScopedGlitchDetector
{
public:
    /** Creates a new GlitchDetector with a given identifier.
    *   It uses the id to log only the last call in a stack trace, so you only get the GlitchDetector where
    *   the glitch actually occurred. */
    ScopedGlitchDetector (const Identifier& id)
        : identifier (id),
          startTime (Time::getMillisecondCounterHiRes())
    {
        if (lastPositiveId == id)
            lastPositiveId = Identifier::null; // resets the identifier if a GlitchDetector is recreated...
    }

    ~ScopedGlitchDetector()
    {
        const double stopTime = Time::getMillisecondCounterHiRes();
        const double interval = stopTime - startTime;

        if (lastPositiveId.isNull() && interval > maxMilliSeconds)
        {
            lastPositiveId = identifier;

            if (Logger::getCurrentLogger() != nullptr)
                Logger::getCurrentLogger()->writeToLog ("Time out in function: " + identifier.toString());
        }
    }

    // ================================================================================

    /** Changes the global time-out value. */
    static void setMaxMilliSeconds (double newMaxMilliSeconds) noexcept
    {
        maxMilliSeconds = newMaxMilliSeconds;
    }

    /** Same as setMaxMilliSeconds, but accepts the sample rate and buffer size as parameters to save you some calculation. */
    static void setMaxTimeOutFromBufferSize (double sampleRate, double bufferSize) noexcept
    {
        maxMilliSeconds = (bufferSize / sampleRate) * 1000.0;
    }

private:
    // ================================================================================
    const Identifier identifier;
    const double startTime;
    static double maxMilliSeconds;
    static Identifier lastPositiveId;
};

#define ADD_GLITCH_DETECTOR(x) static Identifier glitchId (x); ScopedGlitchDetector sgd (glitchId)

// Drop these somewhere in a source file (not a header) to avoid duplicate symbols...
double ScopedGlitchDetector::maxMilliSeconds = 3.0;
Identifier ScopedGlitchDetector::lastPositiveId = Identifier();

Feedback is welcome :slight_smile:


That’s actually a lot more complex than the one we use, but basically the same idea. We never added anything to make it log glitches; we only use it for our own debugging/testing and don’t want the extra overhead in a release build, but logging might make sense in your app.

In debug mode it can throw a few false positives, but that’s not a big deal, and it’s a handy safeguard to catch occasional mistakes that creep in.


Well, if you #define USE_GLITCH_DETECTION 0, it won’t waste any cycles, because the macro ADD_GLITCH_DETECTOR() expands to nothing.


You could also use this with some kind of singleton TimingStatistics class, to which your ScopedGlitchDetector destructor adds its measured interval together with the Identifier you supplied.
That class then keeps a simple running average, minimum, maximum and standard deviation for each of the identifiers you supplied (the calculation must be kept minimal, of course, so I guess keeping a running histogram would be a bit too much).
This way you also get a view of the timing behaviour of each identified function, and not just an assert when something passes a threshold.
Hmmm, I guess that’s what the IDE profiler already does too :wink:
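The running statistics suggested above can be kept cheap enough for an audio thread with Welford’s online algorithm: a few additions and one multiplication per update, deferring the square root until someone reads the value. A sketch (the `TimingStatistics` name and layout are illustrative, not an existing class):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Per-identifier running min/max/mean/variance using Welford's algorithm.
struct TimingStatistics
{
    int count = 0;
    double minMs = 1.0e9;
    double maxMs = 0.0;
    double mean = 0.0;
    double m2 = 0.0; // sum of squared deviations from the running mean

    void addInterval (double ms) noexcept
    {
        ++count;
        minMs = std::min (minMs, ms);
        maxMs = std::max (maxMs, ms);

        const double delta = ms - mean;
        mean += delta / count;          // update running mean
        m2 += delta * (ms - mean);      // update deviation accumulator
    }

    double variance() const noexcept { return count > 1 ? m2 / (count - 1) : 0.0; }
    double stdDev()   const noexcept { return std::sqrt (variance()); }
};
```

The destructor of the glitch detector would call `addInterval()` on the entry matching its Identifier; reading the statistics can happen on a non-realtime thread.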


I wanted to add a settable threshold level so you can make it fire when a function takes, e.g., 80% of the available time (because chances are good you’ll get a drop-out then too). You can then set it to a low value and subclass your StatisticsCounter from Logger without having to change too much. However, this class is not supposed to be a good starting point for an all-purpose profiler, but rather to catch the nasty edge cases that produce clicks…
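The fractional threshold idea boils down to scaling the buffer budget; a tiny helper under assumed names (`thresholdMs` is illustrative, not part of the class above):

```cpp
#include <cassert>
#include <cmath>

// Time-out as a fraction of the available buffer period, e.g. fire at 80%
// of the budget rather than waiting for an actual overrun. For 512 samples
// at 44.1 kHz the full budget is ~11.61 ms, so 80% is ~9.29 ms.
inline double thresholdMs (double sampleRate, double bufferSize, double fraction = 0.8)
{
    return (bufferSize / sampleRate) * 1000.0 * fraction;
}
```

This would slot into setMaxTimeOutFromBufferSize() by passing the fraction along instead of hard-coding 1.0.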