Using a std::function to call a realtime processing member function - any overhead?


This is more of a basic C++ question than a JUCE-specific one:

Let’s say I have some audio algorithm class

class AudioProcessingAlgorithm {
public:
    void processBlock (const float* inputData, float* outputData, int blockSize);
};

Now I do this:

class MyAudioProcessor : public AudioProcessor {
public:
    MyAudioProcessor (std::function<void (const float*, float*, int)> p) : process (p) {}

    void processBlock (AudioBuffer<float>& audioBuffer, MidiBuffer& midiBuffer) override
    {
        process (audioBuffer.getReadPointer (0), audioBuffer.getWritePointer (0), audioBuffer.getNumSamples());
    }

    std::function<void (const float*, float*, int)> process;
};

// ....

AudioProcessingAlgorithm audioProcessingAlgorithm;
MyAudioProcessor myAudioProcessor (std::bind (&AudioProcessingAlgorithm::processBlock,
                                              &audioProcessingAlgorithm,
                                              std::placeholders::_1,
                                              std::placeholders::_2,
                                              std::placeholders::_3));

I hope this quick example is syntactically correct; in any case, I think you get what I want to point out.

Now my question is: will calling process on the realtime thread this way cause any runtime overhead? I'm really not sure what happens behind std::function :wink:

std::function is awesome but it isn't free. In my profiling, wrapping a function pointer in a std::function, or using a lambda with an empty capture list (which can decay to a function pointer), is about as quick as you can get it… but at that point you may as well just use a function pointer.

In addition, depending on how you structure your DSP objects (e.g. if that MyAudioProcessor were actually nested inside another algorithm), the compiler can do some heavy inlining of your DSP process calls, especially when you use LTO. I've noticed that std::function can ruin that optimisation, but I'm not sure whether that's always going to be the case or whether it's inherent to std::function.

Well, it will have a little overhead, because the compiler doesn't get the chance to optimise the function call: the call target is only resolved at runtime.

When you use the function to process a whole block of data, this should be okay.
If, on the other hand, you call the function on a per-sample basis (funnily enough, the internal JUCE oscillator class does exactly that), this might be a bigger problem.

But the best thing, if you are not sure, is to use a profiler first. (Maybe you will find other, more important bottlenecks.)

Thank you. It's fine if it generates as much overhead as a usual function call.

Profiling is a good point; however, this particular question is part of a fundamental design decision for a much more complex project.

This means that by the time I'm able to profile the code, a lot of work will already have been done. So I'm looking for theoretical pros and cons of this design decision before starting to actually implement things.

Whether or not it’s a good decision depends on what the project is. Do you need that level of abstraction in your DSP code? Does the speed in development time outweigh the cost in CPU time for your customers?

And you don’t need to profile an entire project, nor should you wait until the project is too big to change to profile (do it as you go), nor should it be designed such that if you find a problem during profiling that it’s too complex to change. Create a minimum test case to profile, see if it helps or hurts, move on to a real situation, profile that, see if your first assumption held true.

I will say you can get the benefits of std::function without the overhead if you can make some assumptions about the function signature itself (basically, roll your own callable object). You can also get fancy with how the DSP objects work; for example, in some situations a DSP object won't hold its own state, and you pass the state into the callback as an argument. One thing worth doing with a large DSP object is to compile in release mode with LTO and debugging symbols, then check the assembly for function calls down the stack of your process callback. If you do things right, the compiler can inline almost the entire callback, which is optimal. It's not hard to check for that.
