I have found that if the audio buffer size is large (e.g. 4096), the edit->getTransport()->getPosition() method returns the same timeline position several times in a row before jumping ahead to the next position, repeating that value, and so on. If you tie a UI playhead location to this method, you get the illusion of a very slow frame rate.
For me, this behaviour presents on both iOS and macOS.
It would be great to get a solution or a work around to this!
With kind regards,
I haven’t done any Tracktion development, but that does seem strange. 4096 samples of audio data is around a tenth of a second at 44.1kHz, and even if the value were only being updated at that boundary, it’s hard to imagine a playhead delay would be visible. Do you see this same behavior in Waveform? Since I can’t help solve this from experience, I’m just suggesting a test to validate it’s not somehow an issue with your implementation.
Hey @cpr2323 thanks for the quick reply. I haven’t used Waveform but I’m using the same methodology demonstrated in the supplied RecordingDemo example.
Excluding the UI from the equation, just running a timer with
DBG (edit->getTransport()->getPosition().inSeconds()); will show a result repeated a few times, then a jump, then a new result repeated a few times, and so on.
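For reference, the polling setup described is roughly the following. This is a sketch only, not verified against the current API: the class name is mine, the `te` alias and the reference-style accessors are assumptions, and the call chain simply mirrors the one used above.

```cpp
// Sketch: poll the transport position from the message thread via a
// juce::Timer, as in the RecordingDemo pattern. `te::Edit` and the
// accessor chain are assumed from the snippet above.
struct PositionLogger : public juce::Timer
{
    te::Edit& edit;

    PositionLogger (te::Edit& e) : edit (e)
    {
        startTimerHz (30); // a typical UI refresh rate
    }

    void timerCallback() override
    {
        // With a 4096-sample buffer this prints the same value
        // several times in a row before jumping ahead.
        DBG (edit.getTransport().getPosition().inSeconds());
    }
};
```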
I believe @attila is aware of this issue as well. I’m willing to believe I’m missing something here so if so please do call me out on it!
I recommended checking Waveform as a sanity check. You can download it for free, and it is built on Tracktion. If it exhibits the issue, that would give weight to it being a Tracktion issue. If it doesn’t exhibit the issue, it points to either a workaround having been implemented in Waveform, or a problem in your implementation… or something else. Best of luck!
TransportControl::getPosition() is actually a cache of the current position that is flushed back from the audio playhead to the UI thread. I’ve been considering removing it for this reason or at least providing a way to query the current playhead position.
Can you try using this code instead and seeing if you get smoother results?
if (auto epc = transportControl.getCurrentPlaybackContext())
{
    auto position = epc->getPosition();
    // Do something with position
}
Thanks for the reply. Some time ago I believe I tried obtaining the value via getCurrentPlaybackContext(). Regardless, I gave your code a whirl and crossed my fingers.
Alas, I get the same issue on macOS/iOS with large buffer sizes. Here is a DBG of position.inSeconds() with a 2048 buffer size (the max I can set on the Mac). The renderOpenGL() method is what triggers each DBG, to give you an idea of frequency.
//buffer = 2048
If I reduce the buffer size to something like 128 the problem doesn’t present at all:
//buffer = 128
Please let me know if there is anything else I can try, or if there’s more information I can provide that might help.
That’s what you’d expect though, isn’t it?
Assuming you’re using a sample rate of 44.1kHz, each 2048-sample buffer is 46.4ms.
As the playhead is updated at the start of each block, a new position will only be available to read every 0.046s, i.e. at a rough rate of 21Hz.
I don’t know what frequency your OpenGL render is happening at, but let’s assume a repaint rate of 30Hz, a period of 0.033s (although 60Hz, a period of 0.017s, could be quite common).
As you’re reading the position from the UI thread more frequently than a new position is available, you’ll get a repeated time. Reducing the buffer size so a new position arrives faster than the repaint rate mitigates this.
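One possible workaround, my suggestion rather than anything from this thread: record the wall-clock time whenever a fresh position arrives, then extrapolate between updates when painting, so the playhead moves every frame even with a large buffer. A minimal engine-free sketch, assuming 1x playback speed:

```cpp
#include <chrono>

// Sketch: smooths a coarsely-updated playhead by extrapolating with
// wall-clock time between engine position updates.
struct SmoothedPlayhead
{
    using Clock = std::chrono::steady_clock;

    // Call whenever the engine reports a new position (in seconds).
    void onNewPosition (double seconds)
    {
        lastPosition = seconds;
        lastUpdate   = Clock::now();
    }

    // Call from the paint/render callback each frame.
    double estimateNow (bool playing) const
    {
        if (! playing)
            return lastPosition;

        auto elapsed = std::chrono::duration<double> (Clock::now() - lastUpdate).count();
        return lastPosition + elapsed; // assumes 1x playback speed
    }

    double lastPosition = 0.0;
    Clock::time_point lastUpdate = Clock::now();
};
```

This only hides the quantisation; the engine position remains the source of truth, so after a stop, locate, or tempo change you would feed in the real position again via onNewPosition().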
One other question I have though is why you’re using such a large buffer size? Do you have to do a lot of processing but don’t care about latency at all?
Thanks very much @dave96 for the thoughts and the breakdown. It makes total sense and matches what I was experiencing, hence my suspicion. I guess I just wasn’t expecting to see it so blatantly, because I could hear smooth audio and assumed something was calculating the positional equivalent.
To your question, yes that’s correct. On a mobile device the new time-stretching, though rather lovely, can quickly overload the CPU’s capability, giving very glitchy audio, certainly on older phones. Using these enormous buffer sizes mitigates that even as far back as the iPhone 7, which is fine, as in certain situations I’m not worried about latency.
That said, I am wondering about ways to fire off a low-priority background render, then switch the playback audio over to essentially freeze the stretched/warped source file. Then, if the track is dirtied by a tempo change or whatever, let the regular live-calculated playback come through and queue up another low-priority background render. This is out of scope for this thread, but hopefully it explains why I’m currently experimenting with these insane buffer sizes.
What time-stretching implementation are you using? (We support SoundTouch, RubberBand and Elastique).
And what audio file types are you reading?
Have you profiled to see if the cpu time is spent in the file reading or the time stretching?
Do you mean rendering just a time-stretched proxy file or a freeze file with effects etc. on it?
Ah yes, these tests have been with RubberBand.
The audio file types are WAV, the native/vanilla output from recording using TE2.0.
I’ve not yet done that profiling, no, largely because I haven’t found out how with AppCode (macOS M1/iOS). I’ll do more googling.
Correct. I believe that to get the nice time-stretching I’m getting now, useProxy = false is a prerequisite though, right? However, I’d rather opt not to freeze any post (insert) effects, so that they can be tweaked based on the re-timed source audio.