From my initial look, it seems that proxy file generation for audio clips in Tracktion Engine is unavoidable and baked into the engine's workflow. Is that correct?
I have a specific use case where I want a very basic media player (think one stereo audio clip playing at a time) that still benefits from using TE, e.g. so I can apply some DSP via plug-in instances on the master track.
Right now I just keep the same edit, remove the old audio clip if there is one, insert a new clip to play the next track, and adapt the transport's time range to the new clip. This works reasonably well for smaller files, but if the file is long enough, say 10 minutes or more, Tracktion's transport will happily start playing, yet you won't hear any audio for a few moments while the proxy file is (apparently) still being rendered.
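For reference, my clip-swapping approach looks roughly like this. This is a simplified sketch from memory, so the exact TE signatures (`insertWaveClip`, `ClipPosition`, `setLoopRange`) may differ slightly between engine versions:

```cpp
namespace te = tracktion_engine;

// Sketch: replace whatever clip is on the first audio track with a new
// file and re-fit the transport to it. API names approximate.
void playFile (te::Edit& edit, const juce::File& file)
{
    if (auto* track = te::getAudioTracks (edit)[0])
    {
        // Remove the previously loaded clip, if any
        for (auto* clip : track->getClips())
            clip->removeFromParentTrack();

        // Insert the new file as a wave clip starting at time 0
        const double length = te::AudioFile (edit.engine, file).getLength();
        track->insertWaveClip (file.getFileNameWithoutExtension(), file,
                               { { 0.0, length }, 0.0 }, false);

        // Adapt the transport's time range to the new clip
        auto& transport = edit.getTransport();
        transport.setLoopRange ({ 0.0, length });
        transport.position = 0.0;
        transport.play (false);
    }
}
```

It is after the `play()` call here that the silent gap appears on long files, presumably while the proxy renders.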
What is my best approach if I want to keep using Tracktion Engine for a basic media-file playback experience (controlled by Tracktion's transport), but without having to render proxy files, etc.?
My main idea right now is to create a built-in plug-in that is essentially just a glorified audio file player, say using a juce::AudioTransportSource for playback, and to synchronize the AudioTransportSource's transport state/position with Tracktion's.
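Concretely, I'm imagining something along these lines. This is only a sketch: the class and member names are mine (it would really derive from a TE built-in plug-in base class), and only juce::AudioFormatManager, juce::AudioFormatReaderSource and juce::AudioTransportSource are real JUCE classes here:

```cpp
// Hypothetical "glorified file player" plug-in, sketch only.
class FilePlayerPlugin // : public te::Plugin in the real thing
{
public:
    FilePlayerPlugin()
    {
        // wav, aiff, flac, ogg, mp3 (where the mp3 reader is enabled)
        formatManager.registerBasicFormats();
    }

    void loadFile (const juce::File& file)
    {
        transportSource.stop();
        transportSource.setSource (nullptr);
        readerSource.reset();

        if (auto* reader = formatManager.createReaderFor (file))
        {
            readerSource = std::make_unique<juce::AudioFormatReaderSource> (reader, true);
            transportSource.setSource (readerSource.get(), 0, nullptr, reader->sampleRate);
        }
    }

    // Keep the AudioTransportSource in step with Tracktion's transport,
    // called before rendering each block with the edit's current state.
    void syncToEditTransport (double editTimeSeconds, bool editIsPlaying)
    {
        // Re-seek only if we've drifted noticeably (threshold is a guess)
        if (std::abs (transportSource.getCurrentPosition() - editTimeSeconds) > 0.05)
            transportSource.setPosition (editTimeSeconds);

        if (editIsPlaying && ! transportSource.isPlaying())
            transportSource.start();
        else if (! editIsPlaying && transportSource.isPlaying())
            transportSource.stop();
    }

private:
    juce::AudioFormatManager formatManager;
    std::unique_ptr<juce::AudioFormatReaderSource> readerSource;
    juce::AudioTransportSource transportSource;
};
```

I'm aware the sync logic would need care around thread safety (seeking from the audio thread, etc.), which is partly why I'd rather avoid this route if the engine already offers something simpler.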
Is there an easier way? Is it possible to exclude an audio clip from proxies entirely?
The idea would be to support common audio formats like .wav, .mp3, .flac, etc.