Hi, I’m trying to use the Tracktion Engine API (without any GUI) to load external VST plugins and generate sound programmatically.
Project Setup:
- I’m building a standalone C++ console application on Windows.
- It links against the necessary Tracktion Engine modules.
- I call Tracktion Engine APIs directly from `main()` to drive sound generation.
My current understanding of the general workflow is:

1. Initialize the Tracktion Engine.
2. Scan and load VST plugins from disk.
3. Create an `Edit` object and set up tracks/clips according to Tracktion Engine’s structure. Regarding plugin use:
   - 3.1 For sampler-type plugins (e.g., Kontakt): I assume I should load an audio file via the API and control parameters based on what the plugin exposes?
   - 3.2 For synthesizer-type plugins (e.g., Serum, Vital): I assume I need to create a MIDI track and route it to the plugin instance to generate sound? Is there a recommended way to load a preset or patch in this case?
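To make the question concrete, here is a rough sketch of what I have in mind, pieced together from the engine’s bundled examples. I haven’t been able to verify it against the current API, so names and signatures such as `createEmptyEdit`, `ExternalPlugin::xmlTypeName`, `insertMIDIClip`, and the `addNote` parameters are my assumptions and may differ between Tracktion Engine versions:

```cpp
#include <tracktion_engine/tracktion_engine.h>

// The examples shipped with the engine alias the namespace like this;
// older versions use tracktion_engine instead.
namespace te = tracktion;

int main()
{
    // JUCE still needs its message manager, even in a console app.
    juce::ScopedJuceInitialiser_GUI juceInit;

    te::Engine engine { "HeadlessDemo" };

    // Steps 1/2: the plugin manager holds a juce::KnownPluginList; my
    // assumption is that it can be filled by a juce::PluginDirectoryScanner
    // over the VST3 directories, or restored from a previously saved scan.
    auto& knownPlugins = engine.getPluginManager().knownPluginList;

    // Step 3: create an empty Edit backed by a file on disk.
    auto editFile = juce::File::getSpecialLocation (juce::File::tempDirectory)
                        .getChildFile ("demo.tracktionedit");
    auto edit = te::createEmptyEdit (engine, editFile);
    edit->ensureNumberOfAudioTracks (1);
    auto* track = te::getAudioTracks (*edit)[0];

    // Instantiate the external plugin from its description and insert it
    // at the top of the track's plugin chain.
    if (auto types = knownPlugins.getTypes(); ! types.isEmpty())
        if (auto plugin = edit->getPluginCache()
                              .createNewPlugin (te::ExternalPlugin::xmlTypeName, types[0]))
            track->pluginList.insertPlugin (plugin, 0, nullptr);

    // Case 3.2 (synth): add a MIDI clip holding one note to trigger it.
    // (Assumption: clip/note signatures vary across engine versions.)
    if (auto clip = track->insertMIDIClip ({ te::TimePosition(),
                                             te::TimePosition::fromSeconds (4.0) }, nullptr))
        clip->getSequence().addNote (60,                      // middle C
                                     te::BeatPosition(),      // start of clip
                                     te::BeatDuration::fromBeats (2.0),
                                     100, 0, nullptr);        // velocity, colour, no undo

    // Start playback through the default device, then keep the process
    // alive long enough to hear it.
    edit->getTransport().play (false);
    juce::Thread::sleep (5000);
    return 0;
}
```

Is this roughly the right shape, or am I misusing the `Edit`/track/plugin relationship somewhere?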
Main question:
How does Tracktion Engine actually produce sound through VST plugins via its API? From loading the plugin, assigning it to a track, and setting parameters, through to triggering playback: what’s the correct or recommended way to do this in a non-GUI context?
Additional Note:
I understand that without a GUI, some plugin parameters may not be accessible or modifiable, depending on how the plugin exposes them. I’m okay with that limitation for now; my goal is to establish a working headless pipeline for basic testing.
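On the parameter point, my assumption is that headless code can still enumerate and set whatever the plugin publishes as automatable parameters, along the lines of the sketch below. Again unverified: `getAutomatableParameters`, `getCurrentValue`, and `setParameter` are my guesses at the relevant calls, and the 0.5f midpoint value is purely illustrative:

```cpp
#include <tracktion_engine/tracktion_engine.h>

// Assumed API: te::Plugin exposes a list of te::AutomatableParameter
// objects that can be read and written without any editor window.
void listAndSetParams (tracktion::engine::Plugin& plugin)
{
    for (auto param : plugin.getAutomatableParameters())
        DBG (param->getParameterName() << " = " << param->getCurrentValue());

    // e.g. set the first parameter to its midpoint (hypothetical choice)
    if (auto params = plugin.getAutomatableParameters(); ! params.isEmpty())
        params[0]->setParameter (0.5f, juce::sendNotification);
}
```

If a plugin only exposes its state through its own GUI or opaque preset blobs, I assume those parameters simply won’t appear in this list, which is fine for my testing purposes.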
I’d really appreciate any guidance, corrections to my assumptions, or even a minimal code example that demonstrates loading a plugin and playing back sound through it.
Thanks in advance!
