Node graph clarifications

Hi, I’m trying to understand tracktion::graph. My main goal is to recreate something like AudioPluginHost using the node graph (without importing all of tracktion::engine) to exploit the potential of multithreading.

Up to now it seems to me that each Node is something like an AudioProcessor, and that an AudioProcessorGraph is itself a Node (one that can be converted to a NodeGraph). Is that right?

If so, how do I represent an AudioPluginInstance with a tracktion::graph::Node? And how do I attach a player to the device manager as an audio callback?

I can’t find a starting point for understanding how to exploit tracktion::graph for this purpose… Am I completely wrong, or is my picture right? Is there some possible dialogue between a tracktion::graph::Node and an AudioProcessor? Maybe the AudioPluginInstance is something I need to wrap in a subclass of tracktion::graph::Node, as a decorator?

My idea for a starting point (pseudocode):

    sineNode.reset(new SinNode{220});
    nodePlayer.reset(new SimpleNodePlayer{std::move(sineNode), sampleRate, blockSize});
    
    deviceManager.initialise(2, 2, nullptr, true);
    deviceManager.addAudioCallback (&nodePlayer);

It’s a bit more complicated than that.

tracktion::graph is lower level than juce::AudioProcessorGraph. You build up a graph of Nodes to represent your processing graph similar to tracktion_EditNodeBuilder.cpp:createNodeForEdit then hand that to a NodePlayer for playing.

There’s no automatic connection to a device manager or plugin instance etc. You’d have to wrap that yourself and process the player by manually calling its process function.

If you want to process the graph in a multi-threaded way, you need to use a MultiThreadedNodePlayer or LockFreeMultiThreadedNodePlayer to play it back.
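To illustrate the processing model described above, here is a toy, self-contained sketch (ToyNode and playGraph are invented names, not the real tracktion::graph API): a player repeatedly sweeps the node list and processes whichever nodes report ready, which is also roughly how the multi-threaded players can hand independent branches to different worker threads.

```cpp
#include <string>
#include <vector>

// Toy model of the tracktion::graph idea (NOT the real API): a Node
// reports its direct inputs and whether it is ready, and a "player"
// repeatedly sweeps the node list, processing whatever became ready.
struct ToyNode
{
    std::string name;
    std::vector<ToyNode*> inputs;   // equivalent of getDirectInputNodes()
    bool processed = false;

    bool isReadyToProcess() const   // ready once every input has run
    {
        for (auto* in : inputs)
            if (! in->processed)
                return false;

        return true;
    }
};

// Single-threaded sweep (assumes the graph is acyclic); the
// multi-threaded players do conceptually the same thing but hand
// ready nodes to a pool of worker threads instead.
std::vector<std::string> playGraph (std::vector<ToyNode*>& nodes)
{
    std::vector<std::string> order;

    while (order.size() < nodes.size())
        for (auto* n : nodes)
            if (! n->processed && n->isReadyToProcess())
            {
                n->processed = true;        // "process" the node
                order.push_back (n->name);
            }

    return order;
}
```

Even with the nodes listed in the wrong order, a sine → gain → output chain ends up processed in dependency order, because readiness is driven purely by the inputs each node reports.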


Ok so, looking around the code (before using this stuff for my purpose): is an AudioPluginInstance in Tracktion something more similar to a PluginNode?

Yes, but that’s very much a tracktion::engine concept as it wraps a te::Plugin.


The Tracktion graph module has just a few utility-type nodes built in. If you want to host plugins without taking a dependency on the full Tracktion Engine, you need to write your own Tracktion graph Node subclass that wraps a juce::AudioPluginInstance. Things might be easier in the end if you just use the full engine, even if you don’t need all the features it has. (But the custom plugin-hosting node could be an interesting programming exercise, of course…)


What I always envisioned was a wrapper around a juce::AudioProcessorGraph that uses the topology of that and the plugins but converts the playback to use tracktion::graph. That would mean TG basically replaces the internal juce::GraphRenderSequence.

I think it’s doable, but whether it’s worthwhile is another question. You’d have to have a pretty big and complex graph to get decent utilisation of multiple cores.


Does this make any sense?

using namespace tracktion::graph;

static MidiBuffer toMidiBuffers(tracktion_engine::MidiMessageArray& midiMessageArray)
{
    MidiBuffer midiMessages;
    
    // NB: assumes the array's timestamps are already sample offsets within
    // the block; using the loop index as the sample position would scatter
    // the events at arbitrary offsets.
    for (auto& m : midiMessageArray)
        midiMessages.addEvent(m, (int) m.getTimeStamp());
    
    return midiMessages;
}

class NodeWrapper : public Node
{
    
public:
    
    NodeWrapper(std::unique_ptr<AudioProcessor> audioProcessor) : processor(std::move(audioProcessor)) {}
    
    std::vector<Node*> getDirectInputNodes() override { return inputNodes; }
    
    NodeProperties getNodeProperties() override
    {
        NodeProperties props;
        
        if (processor.get())
        {
            props.hasAudio = !processor->isMidiEffect();
            props.hasMidi = processor->acceptsMidi() || processor->producesMidi();
            props.numberOfChannels = processor->getTotalNumOutputChannels();
            props.nodeID = nodeID;
        }
        
        return props;
    }
    
    bool isReadyToProcess() override { return processor != nullptr; }
    
    void prepareToPlay (const PlaybackInitialisationInfo& infos) override
    {
        jassert(processor.get());
        processor->prepareToPlay(infos.sampleRate, infos.blockSize);
    }
    
    void process (ProcessContext& pc) override
    {
        jassert(processor.get());
        
        auto audioBuffer = toAudioBuffer(pc.buffers.audio);
        auto midiBuffer = toMidiBuffers(pc.buffers.midi);
        
        processor->processBlock(audioBuffer, midiBuffer);
    }
    
private:

    std::unique_ptr<AudioProcessor> processor;
    size_t nodeID = 0; // MARK: 🔶 With AudioProcessorGraph it was managed by the graph ... Here?
    std::vector<Node*> inputNodes; // MARK: 🔶 Who will tell this to the node?
};

And if yes: could I process it with a LockFreeMultiThreadedNodePlayer? And how can I pass the buffer from the juce device manager to be rendered?

It’s somewhat in the right direction but unfortunately needs quite a bit more work… You need to store the Node’s input node or nodes in your custom node, for starters. How the graph structure works in the Tracktion graph is a bit peculiar… I just today started looking more into it all, and things can get a bit hairy. (I also need custom nodes for plugins etc., as I can’t use the full Tracktion Engine for my purposes.)


Ok… I don’t understand yet how to connect nodes in tracktion::graph. My approach, and what I’d like to achieve with some helper classes, is to connect and manage my nodes the way AudioProcessorGraph currently does. So for now I imagine having a NodesManager that provides an abstract idea of a graph, with functions like connectNodeChannel or addNode etc. that at some point fill the input sources of each NodeWrapper (with the nodes connected to it) and give it a unique id.
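That NodesManager idea could be sketched roughly like this. It's a toy, self-contained sketch; NodesManager, ManagedNode, addNode and connect are all hypothetical names of mine, not tracktion::graph API. A real version would hold Node pointers and feed what it records into getDirectInputNodes() and NodeProperties::nodeID.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical helper types (these names are invented, not API): a
// manager that owns toy nodes, hands out unique ids, and records
// connections so each node ends up knowing its direct inputs -- the
// information a custom tracktion::graph::Node has to return from
// getDirectInputNodes() / getNodeProperties().
struct ManagedNode
{
    std::size_t nodeID = 0;
    std::vector<ManagedNode*> inputs;   // filled in by the manager
};

class NodesManager
{
public:
    ManagedNode* addNode()
    {
        auto node = std::make_unique<ManagedNode>();
        node->nodeID = nextID++;                // unique id per node
        nodes.push_back (std::move (node));
        return nodes.back().get();
    }

    void connect (ManagedNode& source, ManagedNode& dest)
    {
        dest.inputs.push_back (&source);        // dest reads from source
    }

private:
    std::vector<std::unique_ptr<ManagedNode>> nodes;  // owns the nodes
    std::size_t nextID = 1;
};
```

Usage would be along the lines of: create the manager, add two nodes, connect one into the other, and then each node can answer "what are my direct inputs?" on its own.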

AudioProcessorGraph has audio input and output I/O nodes (and the same for MIDI) to pass data to an AudioProcessorPlayer and so on to the device manager. Maybe by customizing those classes I can pass data to, and get data from, the main node given to the MultiThreadedNodePlayer?

Yes… Using the whole Tracktion Engine would really be overkill for mine too… Basically I want to recreate AudioPluginHost, but I want to manage the nodes in a multithreaded way, so… “a node graph with some nodes in it” :sweat_smile:

Can you elaborate on this a bit please? There might be ways I can improve the API or examples.

It was mainly developed to run the Tracktion Engine processing, so it isn’t fully fleshed out for other uses just yet. It can definitely be used for other tasks though. All the tests are good examples, and they don’t use the Engine.

I guess it’s just about me having to wrap my head around the concept of having to rebuild the graph when the signal routing needs to change. And avoiding pitfalls like what Ayra was getting into: in the code he/she posted, a unique_ptr to a juce AudioProcessor was taken into the node. That would not work with heavy-to-construct/destruct 3rd-party plugins. It’s not uncommon for plugins to take even several seconds to construct/destruct, so that should be avoided at all costs if not absolutely necessary. So, lifetimes of things like plugins need to be handled somehow separately from the Tracktion graph node lifetimes, I would assume.

There’s also the additional problem I have to make this all work with Clap plugins, so I also need to take into consideration stuff relating to that. Wanting to use Clap plugins with the special features they make possible was the main reason it didn’t look feasible to use the full Tracktion engine.


Hmm… But calling this (where nodeWrapperContainer is an OwnedArray that collects instances of NodeWrapper):

formatManager.createPluginInstanceAsync (desc.pluginDescription,
                                         sampleRate,
                                         blockSize,
                                         [this] (std::unique_ptr<AudioPluginInstance> instance, const String& error)
                                         {
                                             nodeWrapperContainer.add (new NodeWrapper (std::move (instance)));
                                         });

Couldn’t that be a solution to make sure the 3rd-party plugin has finished being instantiated?

Yes, I think the main problem here is that people expect tracktion::graph to be at a similar abstraction level to juce::AudioProcessorGraph. It’s not; it’s much more like the internal juce::GraphRenderSequence, as I mentioned above.

The way graph gets its speed is by being a static graph (or set of Nodes, if you will); these are also relatively lightweight (most just contain some smart pointers to some data and a couple of allocated audio/MIDI buffers).
Persistent data (plugins, automation, playheads, time-stretchers etc.) is stored with smart pointers and not re-created when the graph rebuilds.
I spoke about this at length in this ADC talk:

We do have a higher level concept in Tracktion Engine called Racks which are more similar to juce::AudioProcessorGraph. In my opinion separating the model (plugins and their connections) from how they are processed was one of the great wins of the tracktion::graph project. It makes things much easier to manipulate and optimise.


I think the bigger question with Clap plugins is: what unique features of them do you need? That might help drive the implementation decisions.


No, you need to keep your plugin instances completely separate from the graph. They can only be referenced by the graph, not owned by it.

Take a look at how tracktion::engine::ExternalPlugin owns the juce::AudioPluginInstance but the PluginNode of the graph just takes a reference to that Plugin.
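That ownership split can be illustrated with a toy, self-contained sketch (FakePlugin, PluginRefNode and buildGraph are invented names, not the real tracktion classes): an external list owns the plugin instances, graph nodes only hold non-owning references, and so rebuilding the graph never destroys a (possibly slow-to-construct) plugin.

```cpp
#include <memory>
#include <vector>

// Stand-in for an expensive plugin instance; counts its destructions
// so we can observe whether a graph rebuild destroyed it.
struct FakePlugin
{
    explicit FakePlugin (int& destroyedCounter) : destroyed (destroyedCounter) {}
    ~FakePlugin() { ++destroyed; }
    int& destroyed;
};

// Like tracktion's PluginNode, this node only references the plugin.
struct PluginRefNode
{
    FakePlugin& plugin;     // non-owning reference, lifetime handled outside
};

// Rebuilding produces fresh lightweight nodes over the same plugin objects.
std::vector<PluginRefNode> buildGraph (std::vector<std::unique_ptr<FakePlugin>>& plugins)
{
    std::vector<PluginRefNode> graph;

    for (auto& p : plugins)
        graph.push_back ({ *p });

    return graph;
}
```

Throwing the old node list away and building a new one leaves the plugin list untouched, which is exactly the property you want when a rebuild happens on every routing change.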


I don’t want to speak out of turn, but this stuff is extremely complex. It’s taken me many years of working on a DAW engine to properly grasp how all this ownership and processing semantics should work. There are a lot of very tricky problems to solve. tracktion::engine handles all of these by utilising tracktion::graph internally.

It’s going to take a lot of work to replicate what tracktion::engine does (playhead, tempo changes, automation etc.) if you start with tracktion::graph (the graph construction alone is a couple of thousand lines of code).

I’d think about your product first and then whether tracktion::graph is the most efficient use of time. It could be that just using juce::AudioProcessorGraph single-threaded is fine for your use case. Or it could be that just using tracktion::engine is fine; there’s some overhead to it, but not loads, and tons of benefit.

It’s just difficult to know what advice to give without knowing exactly what people are building.


Yes… But juce::AudioProcessorGraph is perfect for my purposes… Except it runs on a single thread (from what I understand…), so I can’t process the number of plugins I need in my scenario… Is there a way to make AudioProcessorGraph multithreaded? (Or maybe instantiate multiple AudioProcessorGraphs and give each one a separate thread?)

The most crucial one is the unified input and output event queues for the plugins that handle automation/modulation and note events (which are not completely equivalent to traditional MIDI events).

That’s really a question for you, but it seems a bit of a strange one. Yes, a fully multi-threaded system will likely outperform a single-threaded one, but what systems will your product be running on?

  • Is it a private tool, a commercial product etc.
  • What plugins will be running? Are they your own ones or any 3rd party plugin?
  • How will you know how many plugins users need to run? Have you benchmarked a single-threaded system (i.e. using juce::AudioProcessorGraph)?
  • Is this a plugin or a stand-alone product? If a plugin, what host will it be run in? Is that multi-threaded?

I don’t mean to be discouraging; it’s just that if you have a limited amount of time and resources, finding the quickest route to an acceptable solution is often the better path. If this is the start of a much bigger, longer-term project, however, then I can see the value in using a lower-level library like Tracktion Graph.

Just my 2 cents


Ok, that’s not something I’ve really looked into, but presumably you’re tying this into a larger project model with a concept of these events, and then want to generate event queues to pass to the Clap plugins?

I think if this was me, and you’re building a DAW-like thing (i.e. with a timeline and events), I’d probably fork the Engine and add the capability rather than trying to build the DAW from scratch around Tracktion Graph. But again, it’s hard to know without having some kind of spec, or at least a feature set, to look at.

I have already spent months on other aspects of the software; the event sequencing stuff works reasonably well and was designed from the beginning to support the needed Clap-specific things. I just haven’t been happy with the primitive and hacky single-threaded audio graph I made for it. The whole audio etc. routing stuff was a bit of an afterthought; I initially thought I could get by with just having a single Clap instrument hosted for the whole software… :sweat_smile: (It’s not a traditional DAW, really.)

Anyway, I already got quite far with the custom Tracktion graph stuff too. I haven’t yet tested with actual Clap plugins, or whether there would be problems when using the multithreaded players. I guess it’s still a possibility to fork the Engine too, but I will have to see about that.