re: mostly thread-safe.
True, and Jules has a lot on his plate. It’s a pity parts of the project couldn’t be community developed.
Is anyone actually reporting this? Or is it a hypothetical?
Very good to know about Visual Studio 2008 statics. I’m going to have to look at my code to make sure I don’t have possible issues there. My recollection is that it’s thread-safe for any GCC that’s 3.x or up; I couldn’t find evidence, but I’m pretty sure of this…
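For reference, the construct in question is function-local static initialization - GCC emits guard code for it (that’s what -fno-threadsafe-statics switches off), while as far as I know MSVC didn’t guarantee it until well after 2008. A minimal sketch of the pattern, with an invented Settings class:

[code]
#include <string>

// The pattern whose thread-safety is in question: a function-local static
// (the classic Meyers singleton). With thread-safe statics, the first caller
// runs the constructor while other threads block on a compiler-generated
// guard; without them, two threads can race to construct 'instance'.
class Settings
{
public:
    static Settings& get()
    {
        static Settings instance;   // this initialization is the part that needs guarding
        return instance;
    }

    std::string name = "default";

private:
    Settings() = default;           // construction happens on the first call to get()
};
[/code]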
They are destroyed in reverse order of creation by an atexit() call that the compiler registers as each object finishes construction.
Yeah, but you don’t really know what that order of creation is, either! So it’s still risky.
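To make the ordering concrete, here’s a toy sketch (the Tracer class is invented). Within one translation unit the order is well defined; across different source files it isn’t, which is exactly the risk:

[code]
#include <cstdio>

// Toy illustration: each object announces its construction and destruction.
// Within one translation unit, a and b are constructed top to bottom and
// destroyed in the reverse order at exit; across different .cpp files the
// construction order is unspecified.
struct Tracer
{
    explicit Tracer(const char* n) : name(n) { std::printf("construct %s\n", name); }
    ~Tracer()                                { std::printf("destroy %s\n", name); }
    const char* name;
};

static Tracer a("a");   // constructed first...
static Tracer b("b");   // ...then b; at exit, b is destroyed before a

int main() { return 0; }
[/code]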
The yield should not really be necessary: as long as the operating system ensures that each thread makes a little bit of progress every so often, the algorithm is guaranteed to succeed.
I looked this up for Java threads, and apparently I was right - though in practice it’s better these days. But this isn’t Java.
It really comes down to how good the scheduler is at avoiding thread starvation. And from my reading, schedulers are pretty good at this, and if your threads yield politely they do even better.
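For what it’s worth, here’s the kind of retry loop I have in mind (a made-up clamped-add on an atomic counter). The yield isn’t needed for correctness; it just tells the scheduler we lost this round:

[code]
#include <algorithm>
#include <atomic>
#include <thread>

// A compare-and-swap retry loop: correctness only requires that some thread
// eventually wins the CAS, which holds as long as the scheduler keeps every
// thread moving. The yield just steps aside politely when we lose a round,
// which avoids needless spinning under contention.
std::atomic<int> counter{0};

void addClamped(int amount, int maxValue)
{
    int current = counter.load();

    for (;;)
    {
        int desired = std::min(current + amount, maxValue);

        if (counter.compare_exchange_weak(current, desired))
            return;                    // we won; 'current' was still up to date

        std::this_thread::yield();     // lost the race; let someone else run, then retry
    }
}
[/code]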
the most important building block for a concurrent system is a ‘thread queue’,
So, interestingly enough, I have something very much like this in my code, though it works a bit differently.
My “Data” are persistent, serialized data objects (which happen to be Google protocol buffers so you can assign, copy and swap them). You can “read” them - that is, get a coherent snapshot copy of the current value - but to “write” them you need to send an Operation (also serializable) to… a queue exactly like this thread queue, which will eventually propagate updates to all the listeners - I actually have one queue per type, mainly so that everyone doesn’t see every update.
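Roughly the shape of it is below - all the names (OperationQueue, Operation, Listener) are invented for the sketch, and here an Operation is just something callable that mutates the Data. The real code works on serialized protocol buffers; this only shows the queue-plus-listeners idea:

[code]
#include <functional>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

// Writers post Operations from any thread; one owner thread later applies
// them and hands each listener a coherent, post-update view of the Data.
template <typename Data, typename Operation>
class OperationQueue
{
public:
    using Listener = std::function<void (const Data&)>;

    // Listeners are assumed to be registered before any worker threads start.
    void addListener(Listener l)       { listeners.push_back(std::move(l)); }

    // Called from any thread: just enqueue the operation.
    void post(Operation op)
    {
        std::lock_guard<std::mutex> lock(mutex);
        pending.push(std::move(op));
    }

    // Called from the one thread that owns 'data': apply everything queued,
    // then notify the listeners with the updated value.
    void drain(Data& data)
    {
        std::queue<Operation> batch;
        {
            std::lock_guard<std::mutex> lock(mutex);
            std::swap(batch, pending);
        }

        for (; !batch.empty(); batch.pop())
            batch.front()(data);       // an Operation is callable on Data in this sketch

        for (auto& l : listeners)
            l(data);
    }

private:
    std::mutex mutex;
    std::queue<Operation> pending;
    std::vector<Listener> listeners;
};
[/code]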
So there are very few locks. There are some classes that are used by both the GUI and calculation threads, and they need to be synchronized, but there aren’t very many of these, and these synchronized methods aren’t called very often.
Nice to have reinvented the wheel! 
Changing the sample rate on a resampler is definitely NOT a heavyweight operation.
Hmm, well, I identify four levels of real-time on a computer.
[list]
[*]There’s the fastest time - sample time, where you’re processing one video or graphical pixel or one audio sample.
[*]One step below that is block time - where you’re processing a group of samples at once.
[*]Then there’s controller time - where you’re reading some continuous controller from the user, like a mouse drag, a slider move, key pressure from a musical keyboard.
[*]The slowest is switch time - where you are pressing a button on the display to switch things around.
[/list]
Now, the higher the real-time level, the worse locking is. If you lock at sample time, your code will be very slow. But if you have a background calculation thread, it seems impossible not to lock, or at least synchronize, once in block time in order to update the outside world with the intermediate progress of your calculation…
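Although for plain progress reporting you can often get away with an atomic rather than a lock - a sketch, assuming the progress is just a fraction that the block-time side polls:

[code]
#include <atomic>

// Lock-free progress reporting: the calculation thread stores its progress
// into an atomic, and the block-time (audio) code just loads it. No mutex
// is ever taken on the real-time path.
std::atomic<float> calculationProgress{0.0f};

// Background calculation thread: cheap relaxed store per chunk of work.
void reportProgress(float fractionDone)
{
    calculationProgress.store(fractionDone, std::memory_order_relaxed);
}

// Block-time code: read the latest value without blocking.
float readProgressForMeter()
{
    return calculationProgress.load(std::memory_order_relaxed);
}
[/code]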
Changing the sample rate on a resampler is probably either in controller time or switch time. So I don’t see why this is necessarily a bad place for a lock! In particular, changing this parameter might well set off some calculation that takes a long time to complete, so you might even want to lock twice - once before you start resampling with the new sample rate, and once when you’re done.
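Here’s a sketch of that double-lock idea, with an invented Resampler class (not anyone’s real code): the first lock publishes the new rate, the long filter recalculation runs unlocked, and the second lock swaps in the result. The per-block lock is only ever contended during a rate change:

[code]
#include <mutex>
#include <vector>

class Resampler
{
public:
    void setSampleRate(double newRate)
    {
        {
            std::lock_guard<std::mutex> lock(mutex);
            sampleRate = newRate;                              // lock #1: publish the new rate
        }

        std::vector<float> coeffs = designFilterFor(newRate);  // long-running, no lock held

        std::lock_guard<std::mutex> lock(mutex);
        coefficients = std::move(coeffs);                      // lock #2: swap in the result
    }

    void processBlock(float* samples, int numSamples)
    {
        std::lock_guard<std::mutex> lock(mutex);               // contended only during a rate change
        // ... resample 'samples' using 'coefficients' at 'sampleRate' ...
        (void) samples; (void) numSamples;
    }

private:
    static std::vector<float> designFilterFor(double rate)
    {
        return std::vector<float>(64, static_cast<float>(1.0 / rate));  // placeholder for real filter design
    }

    std::mutex mutex;
    double sampleRate = 44100.0;
    std::vector<float> coefficients;
};
[/code]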