I’m digging into the Tracktion Engine, and trying to set up a way to move between different parts on the timeline in a musically/temporally useful manner. I have marker clips in the marker track set up to correspond to verse, chorus, bridge, etc., and have automated setting loopPoint1 and loopPoint2 in the transport to correspond to the start and end points of whatever marker clip I have selected, but the changes occur instantly (or as soon as the changes in the ValueTree propagate to the relevant listeners).
Is there a way to queue loop-point changes so they take effect at the end of the current loop, or at the end of the current bar, for example? I’m trying to get an equivalent of what Ableton Live calls scene launching, where the user can select the next scene but it only launches once the quantization threshold (a bar, etc.) has been reached.
Any help or insight will be greatly appreciated, thanks.
This isn’t really a supported use case at the moment but is on the roadmap: https://github.com/Tracktion/tracktion_engine/blob/develop/ROADMAP.md
The only way I can think of doing that right now is to run a timer, poll the current transport position, and set the loop points at the right moment. This won’t be sample-accurate though.
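To make the polling approach concrete, here’s a minimal sketch of the quantization arithmetic it needs. These are hypothetical helpers, not Tracktion Engine API: a `juce::Timer` callback would poll the transport’s beat position, compute the next bar boundary when the user queues a section change, and apply the new loop points once that boundary is crossed (or once the playhead wraps inside the current loop).

```cpp
#include <cmath>

// Hypothetical helper: given the transport position in beats and the number of
// beats per bar, return the beat position of the next bar boundary, i.e. the
// earliest quantized point at which queued loop points could be applied.
// (If the position is exactly on a boundary, that boundary is returned.)
double nextBarBoundaryInBeats (double currentBeats, double beatsPerBar)
{
    return std::ceil (currentBeats / beatsPerBar) * beatsPerBar;
}

// Hypothetical check run on each timer tick: returns true once the playhead
// has reached the scheduled boundary. A position that moved backwards is
// treated as a loop wrap, which also counts as reaching the boundary.
bool shouldApplyQueuedChange (double previousBeats, double currentBeats, double boundaryBeats)
{
    if (currentBeats < previousBeats)   // playhead wrapped back to loop start
        return true;

    return currentBeats >= boundaryBeats;
}
```

As Dave says, a timer firing every few milliseconds can only get within a block or two of the boundary, so this is a workaround rather than a sample-accurate solution.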
The other option would be for us to add some kind of PlayHeadRemappingAudioNode which converts the playhead time to another time. This isn’t trivial though, and I haven’t had a chance to think through exactly where such a node would need to go in the graph or what else it could affect (such as aux send/return nodes).
I think a task like that might have to wait until after the “Refactor Rendering Graph” item on our roadmap.
Many thanks for the quick response, Dave, even if the answer isn’t what I was hoping for. Back to the drawing board for me.
Just thinking out loud, could you point me to the code that constructs audio and MIDI buffers when the playhead needs to loop, i.e. where the end of the loop and then the beginning of the loop are played in the same block?
Any chance of users adding a check for nextLoopStart != loopPosition1 (for example), and then building the rest of the buffer from the next loop and resetting loop points? Or is there too much else going on under the hood for something like that to work?
It doesn’t really work like that I’m afraid. It’s handled on a per-AudioNode basis for those nodes that can handle it. Take a look at WaveAudioNode::renderAdding, which calls invokeSplitRender. This breaks the render block into two and calls WaveAudioNode::renderSection for each part. The renderSection method can then use the AudioNode::ContinuityFlags to determine how to treat each part.
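For readers unfamiliar with the split-render idea, here’s an illustrative sketch of the arithmetic behind it (simplified, not the actual invokeSplitRender implementation): when a render block straddles the loop end, it’s split into a first range that plays up to the loop end and a second range that continues from the loop start.

```cpp
#include <algorithm>
#include <utility>

// A contiguous run of samples on the timeline (illustrative type).
struct SampleRange { long start, length; };

// Split a render block [blockStart, blockStart + numSamples) that crosses the
// end of the loop [loopStart, loopEnd). The first range runs up to loopEnd;
// the second picks up at loopStart with whatever samples remain. If the block
// doesn't reach loopEnd, the second range simply has zero length.
std::pair<SampleRange, SampleRange> splitAtLoopEnd (long blockStart, long numSamples,
                                                    long loopStart, long loopEnd)
{
    const long firstLen = std::min (numSamples, loopEnd - blockStart);

    return { { blockStart, firstLen },
             { loopStart,  numSamples - firstLen } };
}
```

In the real engine each part is then rendered by renderSection with the appropriate ContinuityFlags, so the node knows whether it’s continuing playback or starting fresh after a jump.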
I think there would need to be a new higher-level audio node to do this, but that would probably require the rest of the audio nodes being able to render any number of samples (which is probably where we want to end up anyway).
@dave96 I believe a lot has changed under the hood of tracktion_engine. How would you recommend approaching this problem with the current state of tracktion_engine + tracktion_graph? Are all the needed constructs in place to write this?
It’s moving towards being able to support that but not quite there yet.
If you look at TracktionNodePlayer::process you’ll see how looping is now handled, which is closer to what we’d need for sample-accurate jumps. The second part would be to dispatch position changes in a similar way to EditPlaybackContext::NodePlaybackContext::postPosition, but with a timestamp.
I think the NodePlayer would then have to process a number of samples, set the new loop position and continue.
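The step Dave describes could be sketched roughly like this. This is hypothetical pseudocode-style C++, not the real TracktionNodePlayer: given a position change scheduled at a sample timestamp, the player processes up to that timestamp, applies the change, then processes the remainder of the block.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical: a queued transport jump, tagged with the sample time at which
// it should take effect.
struct PendingChange { long sampleTimestamp; long newLoopStart; };

// Returns the lengths of the sub-blocks a player would process: everything up
// to the change's timestamp, then (after the loop-point change is applied and
// playback continues from newLoopStart) the rest of the block. A change
// timestamped before the block applies immediately, giving a single sub-block.
std::vector<long> processWithScheduledChange (long blockStart, long numSamples,
                                              const PendingChange& change)
{
    std::vector<long> subBlocks;
    const long firstLen = std::clamp (change.sampleTimestamp - blockStart, 0L, numSamples);

    if (firstLen > 0)
        subBlocks.push_back (firstLen);             // samples before the jump

    // ...the real player would apply the loop-point change here...

    if (numSamples - firstLen > 0)
        subBlocks.push_back (numSamples - firstLen); // samples after the jump

    return subBlocks;
}
```

The key point is that the change carries a timestamp, so the split point is exact to the sample rather than landing wherever the next block boundary happens to fall.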
I’m still in the process of removing the old engine so can’t really look at this right now I’m afraid.