Why is the result of timeInSamples so unpredictable? I would have thought that, as the host was playing, this would increment by numSamples each buffer. Instead, it's somewhat erratic.
I need to sync or quantize an event based on the host reaching a bar number (bar 1, bar 2, etc.). I was trying to calculate the number of samples in a bar at a given sample rate and BPM, and to trigger when timeInSamples reached that number.
Anyone have any approach I could use to accomplish this?
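For the bar-length arithmetic itself, here is a minimal, host-free sketch. It assumes quarter-note beats and that the tempo, meter, and sample rate are constant; the function names are illustrative, not JUCE API. The key idea (which comes up later in the thread) is to test whether a bar boundary falls anywhere inside the current block's range, rather than comparing timeInSamples for exact equality with a precomputed value:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Samples per bar for a given tempo, meter, and sample rate.
// beatsPerBar is the time-signature numerator (assuming quarter-note beats).
double samplesPerBar (double bpm, int beatsPerBar, double sampleRate)
{
    const double secondsPerBeat = 60.0 / bpm;
    return secondsPerBeat * beatsPerBar * sampleRate;
}

// True if a bar boundary falls inside the half-open range
// [blockStart, blockStart + numSamples) of host-timeline samples.
bool barStartsInBlock (int64_t blockStart, int numSamples, double barLen)
{
    // Index of the first bar whose downbeat is at or after the block start.
    const auto bar = (int64_t) std::ceil ((double) blockStart / barLen);

    // Does that downbeat land before the block ends?
    return bar * barLen < (double) (blockStart + numSamples);
}
```

With timeInSamples as blockStart and buffer.getNumSamples() as numSamples, this fires exactly once per bar regardless of how the host sizes its blocks.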
The maximumExpectedSamplesPerBlock value is a strong hint about the
maximum number of samples that will be provided in each block. You may
want to use this value to resize internal buffers. You should program
defensively in case a buggy host exceeds this value. The actual block
sizes that the host uses may be different each time the callback
happens: completely variable block sizes can be expected from some hosts.
This also depends heavily on the platform/wrapper/host.
OK, scratch the word "constant". Is there any sample-accurate way to achieve host sync within processBlock other than using AudioPlayHead::CurrentPositionInfo, given that the values received from timeInSamples are not precise?
I've noticed that plugins like Maschine and FL Studio sync their play cursors to the host's play cursor. Playback also starts from the same location as the host, with sample accuracy. Playing back an inverted audio file in the host and in the plugin nulls, for example…
Where did you get that from?
The length of each buffer in processBlock will vary.
The timeInSamples is the position in samples of the beginning of the currently processed block since the start of edit, as the host reports it.
At least that’s what the documentation says. If you observe anything different, I would say that would be a bug…
Per the docs - “The number of samples in these buffers is NOT guaranteed to be the same for every callback…”
So I shouldn’t expect that, with a buffer size of 128 and starting at 0 in the edit, my callbacks would consistently occur at 0, 128, 256, 384, etc.…
This confuses me, because what IS accurate or the same during every callback? I mean, this is audio; surely we can’t be unpredictably dropping and adding samples irregularly?
…exactly, every sample that comes in must go out.
But your continuum can stop, jump, etc., which happens when the user presses pause. You can only detect that by observing what the playhead returns.
And you have only one chance to observe that, which is at position 0 of the buffer of the callback.
If the user presses pause after, say, 5 samples, you won’t notice until the next callback arrives.
So choosing a small buffer size makes your display “more accurate”. But we are talking about 512 / 44,100, i.e. about 12 ms. For a line scrolling over the display, I would say this is accurate enough. For listening and playing music, it’s a different story.
Exactly. The position is consistent with the running sum of all processed buffer.getNumSamples() values, but not with n * buffer.getNumSamples(), because the block size may differ at each callback.
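To make that concrete, here is a small host-free simulation (illustrative names, not JUCE API). It feeds variable block sizes through a loop where the reported position is the running sum of the actual block lengths, and collects every bar downbeat that falls inside any block; the range check stays sample-accurate even though no block boundary lines up with a bar:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Simulate a host delivering variable-sized blocks. The reported
// timeInSamples is the running sum of all previous block lengths,
// NOT n * someFixedBlockSize.
// Returns the indices of the bars whose downbeats fell inside any block.
std::vector<int64_t> barsCrossed (const std::vector<int>& blockSizes, double barLen)
{
    std::vector<int64_t> bars;
    int64_t timeInSamples = 0; // what the host would report at each callback

    for (int numSamples : blockSizes)
    {
        // First bar boundary at or after this block's start...
        auto bar = (int64_t) std::ceil ((double) timeInSamples / barLen);

        // ...collect every boundary inside [timeInSamples, timeInSamples + numSamples).
        while (bar * barLen < (double) (timeInSamples + numSamples))
            bars.push_back (bar++);

        timeInSamples += numSamples; // position advances by the ACTUAL block size
    }
    return bars;
}
```

With a bar length of 1000 samples and blocks of {480, 512, 700, 300, 64}, the downbeats of bars 0, 1, and 2 are all caught, even though none of them coincides with a callback boundary.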