Questions about app structure/state ValueTree using tracktion engine

Hi everyone,

I’m working on a personal project for which I’m trying to have the tracktion engine running on a Raspberry Pi using the ELK platform for low-latency, multi-channel audio/MIDI input/output. I have some questions about the best way to structure my app and how to follow the ValueTree state pattern discussed by @dave96 in a number of videos. Here’s some context about the project (which you can maybe skip to go straight to the questions section below):

— start context bit

The plan is to make some sort of sequencer + sampler which runs on the Pi and is connected via USB to an Ableton Push 2 that is used as the hardware interface. I already have some demo code that I put together which loads some tracks with samples using the tracktion engine (inside a plugin) and connects to the Push to display a simple graphical interface for adjusting track volumes and the steps of a step sequencer :slight_smile: I still need to get it working on the ELK platform, but that’s the topic of another thread.

Now I’m starting to plan the features I want the device to have in more detail. The plan is first to simply re-use things already implemented in the tracktion engine. Then I have some ideas that will most probably require extending some parts of the engine. Because I’m in this planning phase and I’ve got no experience building an app like this, I would like to get some input about what would be a good way of structuring my app.

— end context

My questions/comments:

  1. If I want to add extra stuff to the tracktion engine state ValueTree (e.g. my own app settings), how do I get/edit the state object? I guess there’s some method in te::Engine or te::Edit to do so (maybe simply editing the juce::ValueTree state public property in te::Edit)? Is there any example code where this is shown using the tracktion engine? Also, for saving and loading, I guess I just save/load the edit (like in the RecordingDemo example)?
  2. Also, is there any example code for setting change listeners to the tracktion engine ValueTree state?
  3. I want my UI to depend solely on the ValueTree state so that it does not need to know anything else about my app. I guess if in the future I want to extract the UI bits into a different app, then I’ll mostly only need a way to sync the state ValueTree contents. However, I also need to send messages from the UI to my audio engine (see point 6 below). Is this generally a good idea?
  4. The way my demo code currently deals with UI and state is that I have a state object (not yet using the tracktion engine state ValueTree, just a custom class) and an “update UI” function that runs at 60Hz, reads the state, generates an image frame for the Push display and sends the MIDI messages needed to set buttons, colors, etc. on the Push. I guess a key thing here is to optimize this function as much as possible so it doesn’t compute/send redundant info. Is having a function like this a good idea?
  5. For UI things that “change fast” (like track level meters and MIDI-in indicators), what I do now is have a “state update” function that runs at 10Hz, collects all the level meter values, etc., and saves them into the state object. Once I refactor my state object to use the tracktion engine’s state ValueTree, that would mean writing these things to the state ValueTree at 10Hz. However, I guess there’s a better way to do that. Maybe for these “volatile” things I should have a way to connect directly to the UI without writing to the state (and avoid having this “state update” function)? Or have some other ValueTree state only for the UI/volatile stuff?
  6. My UI also needs to send messages to the app “audio engine” (which is a class that basically talks to the tracktion engine using helper methods), for example when a pad is pressed or a knob is turned. What I do is that the UI class has a reference to my “audio engine” so it can call methods. If in the future I want to further separate the UI from the rest of the app, I guess I’ll need to use some sort of RPC protocol (e.g. gRPC) here. Is this a good pattern?

Sorry if the questions are not well formed, I can give more details if needed. My code is open source (I’ll share it once I get it into better shape), so hopefully once it’s refactored with a better structure it will be useful for others as an example.

Thank you very much in advance!!! If anyone is interested in participating, I can share the current code repository so you get more details about the current status.

I think most of what you have written here makes sense.

Yes, this is the reason these state members are exposed. In Waveform we add things like height to track states which aren’t included in the Engine.
Saving is done in the same way as in the RecordingDemo.
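
Something along these lines, as a minimal sketch (the “APP_STATE” and “padBrightness” names are just placeholders for whatever your app needs, and the save call mirrors what the RecordingDemo does):

```cpp
// A minimal sketch, assuming the usual te = tracktion_engine alias.
// "APP_STATE" and "padBrightness" are made-up names for this example.
#include <tracktion_engine/tracktion_engine.h>
namespace te = tracktion_engine;

static const juce::Identifier appStateType ("APP_STATE");
static const juce::Identifier padBrightness ("padBrightness");

void storeAppSettings (te::Edit& edit)
{
    // Edit::state is the public ValueTree holding the whole Edit;
    // getOrCreateChildWithName adds our child the first time and reuses it afterwards.
    auto appState = edit.state.getOrCreateChildWithName (appStateType, nullptr);
    appState.setProperty (padBrightness, 0.8f, nullptr);
}

void saveEdit (te::Edit& edit)
{
    // Same call the RecordingDemo uses: writes the whole state tree,
    // including the APP_STATE child, back to the edit file.
    te::EditFileOperations (edit).save (true, true, false);
}
```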

This just uses the ValueTree::Listener interface. You can watch my ADC talk on ValueTree apps which might be beneficial: GitHub - drowaudio/presentations: Resources for presentations and talks I've given
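
A bare-bones example of that might look like the following (recent JUCE versions give the other listener callbacks default implementations, so you only need to override the ones you care about):

```cpp
// Sketch: watching the Edit's state tree. A listener attached to a parent
// ValueTree also receives callbacks for changes in its sub-trees.
// (Assumes the usual `namespace te = tracktion_engine;` alias.)
struct EditStateWatcher : public juce::ValueTree::Listener
{
    explicit EditStateWatcher (te::Edit& e) : edit (e)
    {
        edit.state.addListener (this);
    }

    ~EditStateWatcher() override
    {
        edit.state.removeListener (this);
    }

    void valueTreePropertyChanged (juce::ValueTree& tree,
                                   const juce::Identifier& property) override
    {
        // React here, e.g. refresh the bit of UI that displays this property.
        juce::ignoreUnused (tree, property);
    }

    te::Edit& edit;
};
```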

Yes, this is similar to the React paradigm and generally a good one as it separates the Engine from the visual appearance of your app.
If you want to send messages back, you can abstract them into message instructions. Doing it this way will make it easier to convert to an RPC style in the future.
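
Roughly what I mean by message instructions is something like this (the action names and fields here are just placeholders):

```cpp
// Sketch: the UI only knows a small command vocabulary, not the engine itself.
enum class ActionType { setTrackVolume, toggleStep, startStop };

struct UIAction
{
    ActionType type;
    int trackIndex = 0;
    int stepIndex  = 0;
    float value    = 0.0f;
};

// The engine side exposes a single entry point. Swapping this direct call
// for an RPC/OSC/socket message later doesn't change the UI code at all.
struct EngineInterface
{
    virtual ~EngineInterface() = default;
    virtual void performAction (const UIAction& action) = 0;
};
```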

I think that sounds sensible. I’m not familiar with the Push but I would tend to follow their suggested practices here. With hardware a lot depends on the available data bandwidth and latency.

Yes, I would avoid keeping transient/volatile state in the Edit’s ValueTree. It doesn’t make much sense to store meter values in there as they’ll get saved with the Edit, and then when you load it back they’ll have some stale value before playback has even started.
There will also be a performance hit here, as the Edit will be listening to all state changes; you don’t really want to trash this with meter updates at 60Hz, so abstracting these into a “UI” state which you have more control over is a better approach.
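
For example, something as simple as this, where the tree never goes near the Edit (getCurrentLevelForTrack() is a placeholder for however you actually read your meters):

```cpp
// Sketch: volatile UI-only state, refreshed on a timer and never saved.
struct VolatileUIState : private juce::Timer
{
    VolatileUIState()            { startTimerHz (10); }
    ~VolatileUIState() override  { stopTimer(); }

    juce::ValueTree state { "UI_STATE" };   // completely separate from edit.state

private:
    void timerCallback() override
    {
        for (int i = 0; i < numTracks; ++i)
            state.getOrCreateChildWithName ("METER" + juce::String (i), nullptr)
                 .setProperty ("level", getCurrentLevelForTrack (i), nullptr);
    }

    // Placeholder for reading the real meter value of a track.
    float getCurrentLevelForTrack (int) const   { return 0.0f; }

    int numTracks = 8;
};
```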

Yes, this would be a better pattern than having concrete references.
You can generally use EditItemIDs to uniquely identify Edit elements.

I hope that helps. It generally sounds like you’re taking the right approach.


As always thanks @dave96 for your answer, very useful :slight_smile:
Just a couple of doubts about your answer to make sure I understood correctly:

By message instructions, do you mean that for the UI class to communicate with my audio engine there would be a single method like performAction(const String &serializedActionData) called by the UI, and in serializedActionData I’d put whatever info is required for the audio engine to interpret the message correctly (e.g. an action ID like “set volume”, a track number and a volume value)? That way the UI does not depend on any implementation detail of the audio engine, and this single message could easily be replaced by a gRPC call in the future. In fact, this sounds very similar to the ActionListener/ActionBroadcaster pattern in JUCE. Maybe for my first implementation (without gRPC) I should then use ActionListener/ActionBroadcaster so that my UI class does not even need a reference to the audio engine? Or are there important drawbacks to using this pattern here?

Good, so it sounds reasonable to have some sort of volatileState ValueTree that I update with a function that runs regularly. But to give another example, if I want to flash my screen every time a MIDI message is received, then I’d be better off with a way to tell the UI directly to flash the screen, without writing to an intermediate volatileState, right? Maybe I should use the ActionListener/ActionBroadcaster pattern here again? I guess the same strategy I use for sending messages from the UI to the AudioEngine can also be used from the AudioEngine to the UI? (That might mean that both the UI and AudioEngine are action listeners/broadcasters that send messages to each other.)

Where can I find more info about what EditItemIDs are and how to use them? Having a quick look at the engine source code, I assume EditItemID provides a way to easily access and set properties of the ValueTree state, but it’s not clear to me how I’d use it.

Thanks again!

Yes, something like that. It doesn’t have to be a string though, it could be some custom struct that’s serialisable over whatever interface you’re sending it (pipe/socket/MIDI etc.).
The main thing is that you don’t pass Engine references around.

Yes, that would probably work. I don’t generally use ActionListener / ActionBroadcaster as I prefer something more strongly typed but that’s really up to you.

EditItemID is basically just an int that uniquely identifies items. Have a look at the EditItemID class, the EditItem base class and then tracktion_EditUtilities.h for examples of how to retrieve Edit items by ID.
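
Something like this is the general idea (I’m going from memory on the exact helper name, so check tracktion_EditUtilities.h in your engine version for what’s actually available):

```cpp
// Sketch: refer to an Edit item by its ID instead of keeping a pointer to it.
// (Assumes the usual `namespace te = tracktion_engine;` alias.)
te::EditItemID rememberClip (const te::Clip& clip)
{
    return clip.itemID;   // EditItem exposes its ID as a public member
}

void printClipName (te::Edit& edit, te::EditItemID clipID)
{
    // findClipForID is my recollection of one of the EditUtilities helpers.
    if (auto* clip = te::findClipForID (edit, clipID))
        DBG ("Found clip: " + clip->getName());
}
```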

Hope that helps.


Many thanks, that definitely helps :slight_smile:

Just another quick question:

What would be a more strongly typed alternative to the ActionListener / ActionBroadcaster pattern? Should I subclass the ActionListener / ActionBroadcaster classes to make my own version which uses some custom struct instead of a String? Or what would be a good way to go about that?

It really depends on how you’re sending the message but something like ActionBroadcaster which takes a more strongly typed object would avoid spelling mistakes etc.
But you could always create an object that serialises to and from a string and pass it over that.
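
As one possible shape (just JSON via juce::var here, and the field names are only placeholders):

```cpp
// Sketch: a typed action that round-trips through a string, so the same
// struct works in-process now and over a socket/OSC connection later.
struct ActionMessage
{
    juce::String name;      // e.g. "setTrackVolume"
    juce::var parameters;   // e.g. a DynamicObject holding "track" and "value"

    juce::String toJson() const
    {
        auto* obj = new juce::DynamicObject();
        obj->setProperty ("name", name);
        obj->setProperty ("parameters", parameters);
        return juce::JSON::toString (juce::var (obj));
    }

    static ActionMessage fromJson (const juce::String& text)
    {
        auto parsed = juce::JSON::fromString (text);
        return { parsed.getProperty ("name", {}).toString(),
                 parsed.getProperty ("parameters", juce::var()) };
    }
};
```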

That could work nicely if you end up using OSC to pass messages across a network but for other protocols it might be easier/quicker to send the data in a binary format.

I see. I like the typed idea, so maybe I’ll just define a custom struct and some methods to serialize/deserialize it to/from a string, so I don’t need to subclass ActionBroadcaster/ActionListener. At this point I’m not really sure what my messages will look like, so the custom struct might change a lot in the first development phases. Messages will probably contain at least some action ID and a number of parameters (std::map?). Anyway, I think I have enough input for now, thanks!