We noticed that the leak detector tends to complain during application shutdown in the middle of a Timer::callAfterDelay() call. In our case we passed a shared_ptr, captured by value, into the lambda. It seems the shared_ptr just isn't released by the time the TimerThread is destroyed.
A closer look into juce_Timer.cpp reveals that under the hood, Timer::callAfterDelay() creates a LambdaInvoker with a raw `new` and simply discards the pointer! Looking further, I was somewhat reassured to see that LambdaInvoker adds itself to TimerThread::timers. However, the entries in that vector of live timers never get released if LambdaInvoker::timerCallback() has not been invoked by the time of shutdown.
I tried changing TimerThread::~TimerThread() to something like the following, and it seems to fix the leaks:

```cpp
~TimerThread() override
{
    jassert (instance == this || instance == nullptr);

    // Delete any timers still pending at shutdown; deleting one
    // removes it from the list, so the loop terminates.
    while (! timers.empty())
        delete timers.front().timer;

    if (instance == this)
        instance = nullptr;
}
```
The LambdaInvoker gets deleted in its own callback, so if the callback is never triggered, then yes, I'd imagine there would be a leak. What are you using the LambdaInvoker for? I think it was probably built on the assumption that in 99.9% of cases (or more) the callback would be called and the object deleted, and in the 0.1% where it isn't, the cost would be small and only incurred while the application is shutting down, at which point the memory is handed back to the OS anyway. I'm not saying it couldn't be improved, but I'm interested to understand in what case it would regularly leak.
```cpp
struct LambdaInvoker final : private Timer
{
    LambdaInvoker (int milliseconds, std::function<void()> f) : function (f)
    {
        startTimer (milliseconds);
    }

    void timerCallback() override
    {
        auto f = function;
        delete this;
        f();
    }

    std::function<void()> function;

    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (LambdaInvoker)
};

void JUCE_CALLTYPE Timer::callAfterDelay (int milliseconds, std::function<void()> f)
{
    new LambdaInvoker (milliseconds, f);
}
```
We use Timer::callAfterDelay() to implement setTimeout for our script engine. Now, if a script regularly calls setTimeout, it almost always triggers a leak-detector complaint during shutdown. Honestly, not a big issue at all in release builds, but it's a nuisance during debugging.