CPU-hungry timer


I was experimenting a bit with Thread and Time to build a Clock for a MIDI sequencing app. It seems as if the waitForMillisecondCounter approach in a Thread eats up a lot of CPU when you run it very fast (3ms resolution).
Maybe my design has a flaw, but I was thinking of implementing a clock thread that runs at the smallest note division my app supports (say 1/64 or 1/128) and “steps” a MIDI manager class.
I could also adapt that clock so that it “knows” when the next MIDI event is due, and make the wait time dynamic to save CPU. But I think that’s an ugly design.
I usually use SuperCollider for audio work, and I’ve noticed that the timing/scheduling functions in there are extremely accurate and use only very little CPU, even if you run them at a high resolution. That raises the question: if it is possible to do that on my OS (OS X) with low CPU usage - how?
Maybe SuperCollider runs the timers in the callback of the audio interface or something, but I have no idea.

Here is my code:

void Clock::run()
{
    int startTime = Time::getMillisecondCounter();
    int nextStep = startTime + timerResolution;

    while (! threadShouldExit())
    {
        Time::waitForMillisecondCounter (nextStep);
        DBG (String (Time::getMillisecondCounter() - startTime));
        nextStep += timerResolution;
    }
}

Thanks for any ideas or help.



waitForMillisecondCounter runs much more efficiently if you give it longer periods - if you wait for e.g. 50ms, it’ll sleep for most of that, then spin the cpu for only the last couple of ms to get an accurate result. If you repeatedly call it with 3ms, it’ll be spinning most of the time, so your design’s really not going to be very efficient at all.
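The sleep-then-spin behaviour described here is easy to sketch in portable C++. This is just an illustration of the technique, not JUCE’s actual implementation - the waitUntil name and the 2ms spin margin are made up:

```cpp
#include <chrono>
#include <thread>

// Sketch of the sleep-then-spin idea: sleep coarsely until shortly before
// the target time, then busy-wait for the remainder to hit the deadline
// accurately. (Illustrative only - not JUCE's actual implementation.)
void waitUntil (std::chrono::steady_clock::time_point target)
{
    using namespace std::chrono;
    const auto spinMargin = milliseconds (2); // spin only for the last ~2ms

    // Coarse phase: let the OS scheduler do the bulk of the waiting cheaply.
    auto wakeEarly = target - spinMargin;
    if (steady_clock::now() < wakeEarly)
        std::this_thread::sleep_until (wakeEarly);

    // Fine phase: burn CPU briefly for sub-millisecond accuracy.
    while (steady_clock::now() < target)
        std::this_thread::yield();
}
```

With a 50ms period, the spin phase is only a small fraction of each cycle; with a 3ms period it dominates, which is exactly the inefficiency described above.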


hmm, I thought so… but then, how would I be able to trigger very short note intervals? Let’s say 1/32 notes at 170 bpm - that would be around 11ms. I was looking into the source code of SuperCollider to find out how the timers work in there. They use a pthread condition variable in combination with pthread_cond_timedwait(). Someone also referred to this approach here:

Here’s a link to the implementation in SuperCollider:
Have a look at TempoClock in there.
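For anyone following along, the primitive TempoClock builds on is a timed wait on a condition variable. Here’s a self-contained sketch in portable C++; std::condition_variable::wait_until is the standard-library counterpart of pthread_cond_timedwait(), and the TimedWaiter name is invented:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// Sketch of the timed-wait primitive that TempoClock-style schedulers build
// on: sleep until either an absolute deadline passes or another thread
// signals the condition (e.g. because a new, earlier event arrived).
// The wait itself consumes essentially no CPU.
struct TimedWaiter
{
    std::mutex mutex;
    std::condition_variable condition;
    bool signalled = false;

    // Returns true if woken by signal(), false if the deadline was reached.
    bool waitUntil (std::chrono::steady_clock::time_point deadline)
    {
        std::unique_lock<std::mutex> lock (mutex);
        bool wasSignalled = condition.wait_until (lock, deadline,
                                                  [this] { return signalled; });
        signalled = false;
        return wasSignalled;
    }

    void signal()
    {
        { std::lock_guard<std::mutex> lock (mutex); signalled = true; }
        condition.notify_one();
    }
};
```

The key property is that the OS puts the thread fully to sleep until the deadline, yet the wait can still be cut short the moment something changes.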

I’ll experiment a bit with it and post what I find out. As I said, if I debug-post at a 3ms interval in SC, I get incredible accuracy with only a fraction of the CPU usage JUCE would take. So there might be a hidden performance treasure in the code :wink:


Yeaah! You actually used pthread_cond_timedwait() in JUCE - I found it in WaitableEvent. If I use WaitableEvent::wait() instead of Time::waitForMillisecondCounter(), I can run the loop at a 1ms resolution and it barely affects my CPU at all (only when I post stuff at 1ms intervals, obviously). It’s not very accurate when it’s not balanced, though.

int startTime = Time::getMillisecondCounter();
int postInterval = 100;
int step = 0;
WaitableEvent waiter;

while (! threadShouldExit())
{
    waiter.wait (1); // times out after 1ms, so this acts as a cheap 1ms tick
    if (++step >= postInterval)
    {
        DBG (String (Time::getMillisecondCounter() - startTime));
        step = 0;
    }
}

To balance it, I can just calculate by how much it missed the desired interval (e.g. the shortest note I want to support) and compensate on the next wait. It actually works and gives me an accuracy below 1ms (as long as the interval is longer than the 1ms wait resolution). Still not perfect, but good enough for now.
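That balancing idea can be sketched as follows: derive every deadline from the original start time (start + n * interval) rather than from the previous wake-up, so a late tick shortens the next wait instead of accumulating drift. This is a portable C++ illustration with invented names - runTicks and sleep_until stand in for the JUCE thread loop and the 1ms WaitableEvent wait:

```cpp
#include <chrono>
#include <thread>
#include <vector>

// Sketch of drift compensation: each tick's deadline is computed from the
// original start time, so a late wake-up shortens the next wait instead of
// pushing every later tick back. Returns each tick's elapsed time in ms.
std::vector<long long> runTicks (int numTicks, std::chrono::milliseconds interval)
{
    using clock = std::chrono::steady_clock;
    std::vector<long long> tickTimesMs;
    auto start = clock::now();

    for (int n = 1; n <= numTicks; ++n)
    {
        auto deadline = start + n * interval;     // absolute, not relative
        std::this_thread::sleep_until (deadline); // stand-in for the 1ms wait loop
        tickTimesMs.push_back (std::chrono::duration_cast<std::chrono::milliseconds> (clock::now() - start).count());
    }

    return tickTimesMs;
}
```

Because the error of each tick is measured against the absolute schedule, per-tick jitter never compounds over the length of a sequence.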

If I don’t post much, the app uses under 2% of the CPU. Let’s see if it still works that efficiently once I start sending messages to another thread at short intervals.

You’re probably right that my design is not the best approach for writing a MIDI sequencer, but it’s the most logical one to me at the moment. If you’re willing to share some of your knowledge about how to design a sequencer more efficiently, please do so. My ears (actually my eyes in this case) are wide open.

Thanks a lot


well, waitForMillisecondCounter tries to be accurate at the expense of a little CPU, but it was designed to work on Windows too - it’d probably benefit from being made platform-specific and optimised differently for each OS. The accuracy of wait() will probably vary between platforms.

If you’ve got your events sorted into order and timestamped, surely it’s trivial to just wait until the next one is due to happen?
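That suggestion - sorted, timestamped events and a single wait until the next one is due - might look roughly like this. It’s a sketch with invented names, not a JUCE class; the condition variable is what lets addEvent() wake the wait when a new, earlier event arrives:

```cpp
#include <chrono>
#include <condition_variable>
#include <map>
#include <mutex>
#include <string>
#include <vector>

// Sketch of an event-driven sequencer core: events are kept sorted by
// timestamp, and the playback thread sleeps until the earliest one is due.
// Inserting an event wakes the wait so a new, earlier deadline takes effect.
class EventScheduler
{
public:
    using clock = std::chrono::steady_clock;

    void addEvent (clock::time_point when, std::string event)
    {
        { std::lock_guard<std::mutex> lock (mutex); events.emplace (when, std::move (event)); }
        condition.notify_one(); // wake the player: this may now be the earliest event
    }

    // Plays events in time order until the queue is empty; returns them.
    std::vector<std::string> playAll()
    {
        std::vector<std::string> played;
        std::unique_lock<std::mutex> lock (mutex);

        while (! events.empty())
        {
            auto next = events.begin();

            // Sleep until the earliest event is due, or until addEvent() wakes us.
            if (condition.wait_until (lock, next->first) == std::cv_status::timeout)
            {
                played.push_back (next->second);
                events.erase (next);
            }
            // On a wake-up, loop round and re-read the (possibly new) earliest event.
        }

        return played;
    }

private:
    std::mutex mutex;
    std::condition_variable condition;
    std::multimap<clock::time_point, std::string> events;
};
```

Between events the thread is fully asleep, so CPU usage is independent of the timing resolution - it only depends on how many events actually fire.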


True, it could be a disaster on Windows. Still, I’m wondering what happens if you use some automation, say a simple LFO - you’ll probably end up sending messages at a fixed resolution anyway. You also still have to move the playhead constantly, and wake the timer if a new message is added during its sleep interval.
On the other hand, apart from the performance question, the constant “step” approach has the downside that all events fall onto a fixed time grid, which would not allow accurate triplets or grooves, for example (even if it’s high-res).
I really appreciate that you find the time to talk to the JUCE users in this forum - thanks.


Hi guys! Did you finally find a solution to this problem? Just curious to know the end of the story, because I’m facing the same issue myself!

I found there’s a microsecond timer in boost: http://www.boost.org/doc/libs/1_35_0/boost/date_time/microsec_time_clock.hpp
That could be a suitable cross-platform solution to our problem!

Another (probably dumb) question: even though JUCE includes lots of MIDI stuff (like MidiMessageSequence), there’s no built-in MIDI player in JUCE, is there? Or did I totally miss it?


I haven’t made any changes since the post.

[quote] I found there’s a microsecond timer in boost : http://www.boost.org/doc/libs/1_35_0/bo … _clock.hpp
That could be a suitable cross-platform solution to our problem ! [/quote]

Juce has always had a high-resolution clock. Knowing the correct time is easy - the problem here is how to wait until a particular time as efficiently as possible.