We’re seeing a problem that we’re hoping someone here can give us some guidance on.
We were having a major slowdown in our plugin, and in the DAWs, whenever our graphs were busy drawing, especially while scrolling (either by hand, or automatically to keep up with the playhead while the transport was running). So we added a thread pool, and in the graph regeneration function we now queue the content updates as a lambda, freeing the DAW from waiting on us. This recovered a lot of CPU, but introduced a new problem.
With a lot of scrolling or zooming, the mouse movements generate enough of these calls that the queue can get backed up, and the last mouse movement’s graph regeneration sometimes lags by a second or more. If the user closes the plugin while one of those jobs is still executing, the plugin can destroy the view component that contains the graph regeneration code out from under it. That leads to a crash on closing the plugin, because the lambda accesses objects that have been deleted since the job started.
We enter the lambda, the plugin closes, and we call removeAllJobs(true, 1000), but the job doesn’t finish in time: the timeout expires and destruction proceeds while the lambda is still running.
I’m wondering if there is a better way to end the job in this case, one that guarantees the lambda has finished before the rest of the plugin is destroyed (especially the Processor, which owns the data needed to generate the graph).
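For what it’s worth, the guarantee you’re after (destruction can’t complete until the lambda has returned) is easiest to get when the view itself owns the worker and joins it in its destructor, rather than relying on a timed removeAllJobs(). Here’s a minimal standalone sketch of that idea using std::thread, with hypothetical names (GraphView, startRegeneration), not your actual classes or the JUCE ThreadPool:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// Hypothetical sketch: the view owns the worker thread outright, so its
// destructor cannot return until the lambda has finished. Anything the
// lambda touches is therefore guaranteed to outlive it.
struct GraphView
{
    std::atomic<bool> shouldExit { false };
    std::thread worker;

    void startRegeneration (std::atomic<bool>& finishedFlag)
    {
        worker = std::thread ([this, &finishedFlag]
        {
            // Simulated chunked work: checking the exit flag between chunks
            // means shutdown never has to wait for the whole regeneration.
            for (int chunk = 0; chunk < 100 && ! shouldExit.load(); ++chunk)
                std::this_thread::sleep_for (std::chrono::milliseconds (1));

            finishedFlag.store (true);  // runs before the join below returns
        });
    }

    ~GraphView()
    {
        shouldExit.store (true);        // ask the lambda to bail out early
        if (worker.joinable())
            worker.join();              // blocks until the lambda has returned
    }
};
```

The key point is that the join is unconditional, not timed, so there is no window where destruction races the lambda; the cooperative exit flag just keeps the wait short.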
One idea would be to set that timeout on removeAllJobs() to 0 or something smaller, but I’m not sure whether that would actually fix the problem in all cases or simply make the crash rarer.
Or, since the graph only needs the latest data, maybe we should clear the queue before adding a new job, so that the queue only ever holds the currently running job plus one about to start. To do that, I assume we’d remove all jobs except the running one (if there is one), then call addJob() for the call we’re making now. That seems like it would help, but I don’t know for sure whether it would resolve the problem. I haven’t yet logged the number of waiting jobs to see whether the queue really is growing unreasonably and causing the delay, but if it is, this solution seems reasonable.
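The "only the latest snapshot matters" idea can also be pushed one step further: rather than removing stale jobs from a queue, keep a single pending slot that each new request overwrites, so the worker can never fall more than one update behind. Below is a standalone sketch of that pattern with a plain std::thread and a condition variable (hypothetical names, not JUCE’s ThreadPool; the int "snapshot" stands in for whatever state your regeneration needs):

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <optional>
#include <thread>

// Coalescing worker: every requestUpdate() overwrites the single pending
// slot, so a burst of mouse events collapses into at most one queued job.
class LatestOnlyWorker
{
public:
    explicit LatestOnlyWorker (std::function<void (int)> process)
        : processFn (std::move (process)),
          worker ([this] { run(); }) {}

    ~LatestOnlyWorker()
    {
        {
            std::lock_guard<std::mutex> lock (mutex);
            exiting = true;
        }
        condition.notify_one();
        worker.join();             // destructor waits for the worker to finish
    }

    // Called from the UI thread on every scroll/zoom event.
    void requestUpdate (int snapshot)
    {
        {
            std::lock_guard<std::mutex> lock (mutex);
            pending = snapshot;    // overwrite any not-yet-started request
        }
        condition.notify_one();
    }

private:
    void run()
    {
        for (;;)
        {
            int snapshot;
            {
                std::unique_lock<std::mutex> lock (mutex);
                condition.wait (lock, [this] { return pending.has_value() || exiting; });
                if (exiting)
                    return;        // drop any unstarted request on shutdown
                snapshot = *pending;
                pending.reset();
            }
            processFn (snapshot);  // regenerate the graph outside the lock
        }
    }

    std::function<void (int)> processFn;
    std::mutex mutex;
    std::condition_variable condition;
    std::optional<int> pending;
    bool exiting = false;
    std::thread worker;            // declared last so run() sees initialised members
};
```

With this shape the backlog (and with it the shutdown delay) can’t accumulate in the first place, which may make the question of removeAllJobs() timeouts moot.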