OK, I understand your suggestion now. It’s a great idea: it would simplify things a lot and allow me to work with MIDI notes at a lower level. However, I also have some more questions about whether it will be feasible.
The reason I chose Tracktion in the first place was that there are a lot of different potential channels for audio in my application, and I want to make sure that they are all kept in sync. Right now, the application only generates MIDI output, but in the future I want it also to be able to play audio samples and to host other VST plugins. I don’t anticipate getting to the VST part any time soon, but I also don’t want to wall myself in for when I do get to it.
People over in this thread suggested that Tracktion would do a lot of the heavy lifting for me. If I abandon Tracktion now, will it become much harder to keep all of the different channels in sync with each other?
Even if the answer here is “yes”, I still think that your approach might be the right one. Maybe I can build my application as a simple plugin, then use Tracktion at a higher level to keep it in sync with everything else…
Don’t think I’d take the roll-your-own-sequencer approach and deal with tempo syncing here, especially if you’re going to put this in a plugin. But it’s really hard to understand exactly how you want it to behave.
If I were implementing an app where you program the sequence of notes, I think I’d probably take the approach that when you hit the generate button, you expect the new sequence to kick in at the next bar or beat, to avoid interruptions and keep things flowing in time (as I’m assuming you’d want all this for live performance). That doesn’t seem like it’d be beyond what the engine can comfortably do.
Also, don’t forget you could look at Dave’s new tracktion_graph API, which gives you lower-level access to the underlying playback graph. That’d let you insert nodes which play recorded sequences, and you could also add your own custom generator nodes to control synth plugins live.
Just to be clear, the “roll your own sequencer” approach is what @oxxyyd was suggesting, right?
This is useful information. I was looking at the MidiAudioNode class last night and wondering if I could use it to play a MidiMessageSequence. Can I ask for some clarification about which classes I should be looking at? I assume that the files located in modules/tracktion_graph/tracktion_graph/ are the right place to start. Do I also need to understand the AudioNode class that is located in modules/tracktion_engine/playback/audionodes/, or will this become obsolete in the new tracktion_graph branch?
I’ll try again, then. As I have described above, the user is presented with a grid of cells, which you can type commands into and also click on. The important commands are “play”, which plays a note, and “wait”, which forces a delay. With these commands, the user can construct sequences and melodies which will start playing as soon as the user clicks on a cell. The melodies may be infinite in length, and they can also be altered in real time by the user (as you mentioned, I have live performance in mind here).
What does the “play” command actually play? There are three possibilities (depending on the arguments of the command):
1. It plays MIDI notes which are forwarded to the host.
2. It plays audio samples (probably using tracktion_engine::SamplerPlugin).
3. It sends MIDI messages to other plugins which are hosted by the application.
This playback capability is probably quite ambitious; number 3 seems particularly complicated, as it would mean that my application would be a plugin which can host other plugins. I set this goal because I like the idea of being able to run my sequencer independently of another DAW or hosting environment; however, I’m willing to compromise or rethink this if it seems crazy. In any case, all of the playback elements would need to stay in sync with each other, and if my sequencer is running inside a host, it would need to stay in sync with the host. I had originally chosen Tracktion because it hopefully makes light work of this synchronization. But as you can see, I’m learning this as I go, so I might have misjudged something major here.
What does the “wait” command actually do? As far as the user is concerned, it’s simple: “wait” adds a delay between one “play” event and another. So without the “wait” command, all the notes would play at once. Under the hood, however, it looks quite different, and this is certainly the part that I misjudged the most when I started on this project. I had originally thought that I could just use a timer to administer the delays, but others have pointed out that this would wreak havoc with the audio-thread timing. So instead, the system takes active events and projects them onto a timeline. If the user changes a command which has already been placed on the timeline, the timeline event is removed and recalculated. But the user never has access to the timeline directly, and if they click on a new cell, they expect its action to be executed “now”.
The system which parses commands and decides when they should take place is working well, but I have been struggling for a long time with placing and replacing them on the timeline. I’ve been trying to use the tracktion_engine::MidiClip class for this, but I am increasingly convinced that this is the wrong thing to do. For one, I keep getting the skipped notes that I have described, and I have to go to ridiculous lengths to catch them. Secondly, it appears that I am triggering the graph to recalculate every single time a note plays, which seems very inefficient. So I am hoping that I can find another way of scheduling notes on the timeline, without having to disturb the ValueTree model.
If adding latency is the only way to get this thing to work then I’ll go with it, but I’m hoping that it’s not necessary. It also doesn’t seem like it would immediately solve the problem: obviously I would want the latency to be as low as possible, but there doesn’t seem to be a way to calculate the point below which notes start getting skipped.
I’m away at the moment so haven’t been able to keep up with this thread but I’ll just add a quick note to what Jules said…
If your project will evolve in the future to contain more elements that need to be played back and kept in sync, I’d still probably recommend Tracktion Engine for that. Without it, you’ll be looking at implementing a lot of code yourself, which is not easy or quick.
It does sound like you have a fairly specialised use case and maybe injecting MIDI messages using a MIDI Clip or live from the message thread isn’t suitable for you.
In that case, I’d suggest creating a te::Plugin subclass (or maybe your own concrete class that you wrap in a te::Plugin) which does your MIDI sequence playback. That way, you should have direct control over when the user triggers a sequence and how you want to play it back in the process callback.
That of course will mean storing all your future MIDI events and doing all the beats <-> time conversions and note off handling yourself and making sure it’s all thread safe and lock free.
It should mean that you can build out with other elements of the Engine when you need to though.
Thanks for taking time out of your holiday to reply. I think I have a clear idea of what I need to do now, and I’m feeling excited about it for the first time in months. I’ve done lots of research on lock-free programming, so I think I’m ready for the challenge.
This next question should be easy. Now that I’ve created my custom MIDI plugin, how do I insert it into the right place in the audio graph? So far I’ve just been using AudioTrack::pluginList.insertPlugin(), but for this job it looks like I’ll need more control, so that I can, for example, route my custom MIDI plugin to the correct output. Should I still be doing this on the AudioTrack, or is there another interface?
If someone can just point me towards the right classes, I can probably figure out the rest.
Also, if there is a distinction between connecting nodes at a “high” level (i.e. by modifying the state ValueTree) vs. a “low” level (i.e. calling AudioProcessorGraph::addConnection() or some equivalent), I’d be interested to know, so that I can study it properly.
I’m not quite sure what you mean by this? You add your plugin to a track and then the track outputs to a specific device. Is that not what you need?
There isn’t really a 1:1 mapping between the Edit model and the Node graph that is created to play it back. There are all kinds of internal latency-compensation, summing and send Nodes created to play back the high-level model. Also, there is no juce::AudioProcessorGraph in Tracktion Engine; the play graph gets created directly in EditNodeBuilder.cpp. Maybe have a look there first?
I think my question is simpler than that: I’m just trying to work out the order in which the plugins in a track are called. For instance, in Waveform, every audio track has a sequence of plugins on the right-hand side, which looks something like
[ 4OSC > Chorus > Volume ]
What determines the order of these plugins?
I notice that Track has a PluginList member. Does the order of this list correspond to the order in which the plugins are called?
Yes, it will render them in series.
You can use PluginList::insertPlugin (const Plugin::Ptr&, int index, SelectionManager* selectionManagerToSelect) with an index of 0 to insert at the start of the plugin chain.
I just need the plugin to have a pointer to the MidiMessageSequence that holds the master copy of all the notes. Casting the plugin and then supplying the pointer through a member function works fine; I was just wondering if there was another way, since supplying the pointer through the constructor would be preferable in terms of class design.
Yes, I’m being quite careful about thread safety. (In fact, I don’t know why I brought up MidiMessageSequence, because I’m not using it at all.)
What I’m actually doing on the audio thread is retrieving a shared_ptr to a container (using the method that Timur outlined in his talk, so it will never deallocate on the audio thread), then, through that shared_ptr, finding a tracktion_engine::MidiMessageArray and swapping it in with the buffer. The MidiMessageArray has been organized (on a different thread) so that it only contains notes starting at the same time.
Does this sound like a safe approach? (I know it might sound a bit overcomplicated with all the different layers, but they match the existing structure of my program very well.)