Achieving sample accuracy for arbitrary function calls in renderNextBlock()

I have searched the forums and found quite a few topics about this issue from people asking about a sample-accurate timer. I know that you can 'count samples' in order to achieve sample accuracy of AUDIO PROCESSING, but how can I make arbitrary function calls with sample accuracy? For example, let's say I want to call some function `onBeat()` every time a beat happens. I might try this:

void renderNextBlock (AudioBuffer<float>& outputBuffer, int numSamples) {
    for (int i = 0; i < numSamples; ++i) {
        if (sampleCount == samplesPerBeat) {
            onBeat();
            sampleCount = 0;
        }
        ++sampleCount;
        // ... render sample i into outputBuffer ...
    }
}
This isn't really sample accurate though. It's only sample BLOCK accurate, since all that's happening here is that `onBeat()` is called whenever the sample that will be streamed on a downbeat is added to the audio output buffer, not when that sample is actually streamed. I must be missing something here. What is the 'Juce way' of implementing arbitrary function calls with sample accuracy?

What you've got there IS sample accurate. It's called at exactly the moment when you are processing that particular sample. When that happens in actual human time is both probably unknowable (your computer is doing a lot at once) and irrelevant (it's all buffered and finally arrives as a regular synchronised stream only at the sound card's output buffer).

What are you trying to achieve in the big picture?


PS. It's a side issue, but there are data quality problems with some sequencers anyway, so whilst you can hit a particular numbered sample, getting sample-accurate hits on beats is probably practically unachievable in some cases.



The audio processing is sample accurate, but unless I just don't understand how this all works (which very well may be the case), the actual processing of a sample (or any arbitrary executed code) that happens within the rendering callback will happen before you actually hear the audio. What I mean is: what if I want to execute some arbitrary code for some purpose other than audio processing, but I want the timing of that execution to be sample accurate (or as close to it as possible)?

Let's say the buffer size is 1024 samples. If I execute some function onBeat() whenever a sample is processed that lines up on a beat, the actual execution of that function (at a 44.1 kHz sample rate) will happen up to 1024/44100 seconds (~23 milliseconds) before that actual sample is streamed. It just doesn't seem super accurate.

Now that I think about it, is there any functionality besides audio processing that I need to be sample accurate? I suppose not.

Well, here's an example to consider.

Do you want a meter update on the screen to happen at exactly the same time as the audio comes out of the speakers?  Is that the kind of problem you're thinking about? 

Did you know that sound and vision are only simultaneous in consciousness for humans when the event happens a few meters away?

If the event happens close to your face, then the audio is perceived slightly before the vision.

It turns out that the vision processing latency inside your head is rather slower than the audio latency. Fortunately, at a couple of meters away, the fact that sound travels slower than light through air compensates for that.

If it had been the other way around you'd never be able to experience the sound from an event at the same time as you saw it happen.


Obviously that wasn't directly relevant. But I think it's such an interesting fact* ... ! :) 

Well, fringes of fact anyway ... different research suggests different people have different response times.  But ... I suppose the important point is that ... mmm what was the point? 


Interesting fact of the day! :)

I guess I didn't have a specific example in mind so much as I was comparing Juce to Kontakt's API, which has a super accurate timer with MICROsecond resolution. I suppose I was hoping I would be able to have the same sort of accuracy (though perhaps Kontakt isn't as accurate as it seems?)


Given a buffer of samples to work on, you can perform any operation at sample accuracy by rendering the sound at the correct place in the output. It doesn't matter when you call functions to render those samples. It'd be meaningless to talk about accuracy in microseconds in this context.

Kontakt is just a plugin like anything else, and operates on buffers too. If you were calling a function to perform e.g. a physical I/O operation in an embedded device then yes, the time at which the function is called matters, and you'd use an entirely different system to control it.

I think I'm doing a poor job explaining my thoughts.

I'm not talking about accuracy of sample processing. Obviously sample processing is sample accurate. I'm talking about arbitrary function calls. What if I actually care about WHEN a function is executed? I suppose the functionality I'm envisioning is more of an ultra high resolution timer, completely unrelated and useless to processing audio.

For instance, Kontakt has an API to suspend execution (like Thread::sleep()) for an arbitrary number of microseconds. I have no idea how to accomplish this with Juce. I was going to just count samples, but then realized the whole dilemma that counting samples is really only sample accurate for audio processing, not for arbitrary functions unrelated to hearing audio. The only reason I even brought up the rendering callback is because this is the only way I can conceive of creating that level of high-resolution timing.

Well, you might want to look at HighResolutionTimer, Time::waitForMillisecondCounter, etc. This is all basic OS-level threading stuff.