This is a follow-up to my old thread about building a sequencer, but since I’m now using Tracktion, I’ve started a new thread.
I’ve spent the last few days trying to learn enough about Tracktion Engine to build a prototype. I’ve been making progress, but need some help with how to access MIDI messages in the applyToBuffer() function.
Here’s what I’m trying to achieve in this prototype:
1. Keyboard input is encoded as a MidiMessage text meta-event.
2. That MidiMessage is buffered in a MidiClip for 3 seconds.
3. Three seconds later, the audio thread encounters the MidiClip and prints the text meta-event.
I’ve tried to accomplish this in Tracktion by the following approach:
1. Create a single MidiClip inside a single Track inside a single Edit.
2. Make the clip length large and loop the Transport around the clip.
3. Use Component::keyPressed() to add SysEx events to the MidiClip.
4. Create a custom Plugin which can receive Midi input, and add it to the track.
I’m stuck on number 4, as I can’t seem to access the cued Midi events inside applyToBuffer(); the bufferForMidiMessages is always empty. Can anyone see why?
Here is my code:
#pragma once

#include "../JuceLibraryCode/JuceHeader.h"

class MyMidiPlugin : public tracktion_engine::Plugin
{
public:
    MyMidiPlugin (tracktion_engine::PluginCreationInfo info) : Plugin (info) {}
    ~MyMidiPlugin() { notifyListenersOfDeletion(); }

    static const String xmlTypeName;

private:
    virtual void applyToBuffer (const tracktion_engine::AudioRenderContext& a) override
    {
        if (a.bufferForMidiMessages && a.bufferForMidiMessages->size()) // PROBLEM: this condition never gets met; size() is always 0.
        {
            auto message = (*a.bufferForMidiMessages)[0].getTextFromTextMetaEvent();
            DBG (message);
        }
    }

    virtual juce::String getSelectableDescription() override { return xmlTypeName; }
    virtual juce::String getName() override { return xmlTypeName; }
    virtual juce::String getPluginType() override { return xmlTypeName; }
    virtual void initialise (const tracktion_engine::PlaybackInitialisationInfo&) override {}
    virtual void deinitialise() override {}
    virtual bool needsConstantBufferSize() override { return false; }
    bool takesMidiInput() override { return true; }
};

const String MyMidiPlugin::xmlTypeName = "MidiPlugin";
class MyMidiSequencer : public Component
{
public:
    MyMidiSequencer()
        : engine ("MyMidiSequencer"),
          edit (engine, tracktion_engine::createEmptyEdit(), tracktion_engine::Edit::EditRole::forEditing, nullptr, 0),
          transport (edit.getTransport()),
          midiClip (getOrCreateMidiClip())
    {
        setSize (200, 300);
        setWantsKeyboardFocus (true);

        engine.getPluginManager().createBuiltInType<MyMidiPlugin>(); // Here's where I insert the new plugin.
        auto newPlugin = edit.getPluginCache().createNewPlugin ("MidiPlugin", {});
        tracktion_engine::getAudioTracks (edit)[0]->pluginList.insertPlugin (newPlugin, 0, nullptr);

        transport.setLoopRange (tracktion_engine::Edit::getMaximumEditTimeRange());
        transport.looping = true;
        transport.position = 0.0;
        transport.play (true);
    }

private:
    bool keyPressed (const KeyPress& key) override
    {
        const auto s = String ("message ") + String (key.getTextCharacter());
        const auto message = MidiMessage::textMetaEvent (1, s);
        midiClip.getSequence().addSysExEvent (message, transport.getCurrentPosition() + 3.0 /*seconds*/, nullptr); // Should I be using addNote() instead of addSysExEvent() here?
        return true;
    }

    tracktion_engine::MidiClip& getOrCreateMidiClip()
    {
        edit.ensureNumberOfAudioTracks (1);
        auto firstTrack = tracktion_engine::getAudioTracks (edit)[0];

        if (dynamic_cast<tracktion_engine::MidiClip*> (firstTrack->getClips()[0]) == nullptr)
            firstTrack->insertNewClip (tracktion_engine::TrackItem::Type::midi, { 0, tracktion_engine::Edit::maximumLength - 1 }, nullptr);

        return *static_cast<tracktion_engine::MidiClip*> (firstTrack->getClips()[0]);
    }

    tracktion_engine::Engine engine;
    tracktion_engine::Edit edit;
    tracktion_engine::TransportControl& transport;
    tracktion_engine::MidiClip& midiClip;

    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (MyMidiSequencer)
};
Can anyone help? Again, what I’m looking for here is that the cued MidiMessage should reach the audio thread inside applyToBuffer().
It’s a bit difficult to tell. One thing that I can see is that you’re passing a time in seconds to addSysExEvent, and that takes a beat number. You probably need to convert the transport position to a number of beats with the TempoSequence class (which you can get from the Edit).
If that doesn’t work, can you double-check that Edit::restartPlayback is being called when you add the sysex event, and maybe dig into MidiList::exportToPlaybackMidiSequence to see if the sysex event is actually being added to the playback sequence?
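The conversion would be something like this (an untested sketch, and it assumes your clip starts at time 0 so edit beats and clip beats line up):

const auto timeInSeconds = transport.getCurrentPosition() + 3.0;
const auto beat = edit.tempoSequence.timeToBeats (timeInSeconds); // the Edit owns a TempoSequence
midiClip.getSequence().addSysExEvent (message, beat, nullptr);    // addSysExEvent expects a beat number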
Thanks for looking into this @dave96. Converting the position from seconds to beats has helped get me on the right track. I think what was happening is that the message was being cued to a position that the playhead had already passed, hence the silence.
I have two more questions, both of which are likely to make you grind your teeth. The second question is essential to me; the first one is less important.
Is it possible to cue a ReferenceCountedObject on the timeline? Right now I’m cuing MidiMessages, which only loosely suit my needs; it would be much more convenient if I could build my own struct and then extract its members on the other end. I can imagine that this might create problems for the audio engine, though. I can make MidiMessage work; I’ll just have to overload the textMetaEvent quite a bit.
My plugin receives MidiMessages on the audio thread inside applyToBuffer(), but in my program it will have to use them as a trigger to cue further actions on the message thread. How would I accomplish that? Here’s a pseudo-code example:
virtual void applyToBuffer (const AudioRenderContext& a) override
{
    auto message = (*a.bufferForMidiMessages)[0].getTextFromTextMetaEvent();

    if (message.matchesSomeSpecialCondition())
        performSomeActionOnTheMessageThread();
}
I’m guessing that if I implemented this naively, I’d encounter race conditions. How would I get around them? It might seem like a strange situation, but it’s at the heart of what I’m trying to do, so I have to find a solution to this problem, with or without the model that I have already built.
I’ve done a bit more reading, and I’ve realized that I probably need to provide more information about what I am trying to accomplish.
When the Midi event is triggered, I need it to write to a ValueTree (a ValueTree which is also used by the message thread). There are two problems that I don’t know how to solve:
What’s the best way to ensure that a race condition doesn’t occur, i.e. if the audio thread and the message thread are trying to access the same ValueTree at the same time?
How can I find the correct ValueTree to edit? I have a potential solution ready for this which I’ve been using on the message thread: I give ValueTrees a path stored as a String. But what I’ve read so far suggests that you can’t use Strings on the audio thread.
I hope that this all makes sense. This is the first time that I’ve worked with threading, so please be patient if I’m going off track.
What feature are you actually trying to do?
Using the audio pipeline to schedule an event seems a little odd.
Would you be better having some timer that simply polls the playback position and triggers the event when it crosses your time?
If it must be in the audio render block, then you could post a message (I know technically this isn’t real-time safe but neither is MidiMessage::getTextFromTextMetaEvent).
I understand that it must sound a bit odd. I was originally planning to use a timer, but was convinced by other people that I actually need to sync my events to the audio playhead (see my original post).
Basically, I am building a sequencer that can service external musical events and also its own internal state. So for instance, on beat #1 it could send Midi note 55 through a Midi channel to an external device, and on beat #2 it could send an internal message to update its own state, so that next time around, beat #1 will send Midi note 56.
At the moment, I’m just trying to get my program to the point where it can send Midi and / or OSC messages through the ports to other devices, and it won’t have any other audio capabilities in this prototype phase. However, phase 2 would be to expand it so that it can store and play samples internally, and also hopefully host VST plugins. I’m worried that if I used timers, it would make my future goals very challenging, even though they seem like they would be easier in the short run. Nevertheless, I might be completely off the mark with the Tracktion Engine code I’ve drafted here; it’s really just a stab in the dark at this point.
Ok, just remember that you’re syncing events to when the transport passes over a specific time, not to a specific region. I.e. repositioning the timeline won’t trigger any events this way until it plays over the event time.
So if you post a message in response to your text event, is everything working as expected?
If you’re going with strings, you can just pull out the string from the text MIDI message and then post a message to the message queue to process it.
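Something along these lines (a rough sketch; not strictly real-time safe, but fine to get things going):

void applyToBuffer (const tracktion_engine::AudioRenderContext& a) override
{
    if (a.bufferForMidiMessages == nullptr)
        return;

    for (int i = 0; i < a.bufferForMidiMessages->size(); ++i)
    {
        auto& m = (*a.bufferForMidiMessages)[i];

        if (! m.isTextMetaEvent())
            continue;

        // Copy the text out of the MIDI message so the lambda owns its own data
        auto text = m.getTextFromTextMetaEvent();

        juce::MessageManager::callAsync ([text]
        {
            // This runs later on the message thread, so it's safe to touch
            // ValueTrees and other message-thread-only state here
            DBG ("Triggered: " + text);
        });
    }
}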
If you want it fully real-time safe though you’ll need to encode your changes in to some other MIDI messages (that don’t allocate internally like strings or sysex) and then use a lock-free FIFO to dispatch them on the message thread.
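Something like this, just to show the shape of it (a very rough sketch; the struct and class names are made up, and you’d replace the plain struct with whatever fixed-size encoding you settle on):

struct InternalEvent { int type = 0, value = 0; };  // plain data, no allocation

class InternalEventQueue  : private juce::Timer
{
public:
    InternalEventQueue()            { startTimerHz (30); }
    ~InternalEventQueue() override  { stopTimer(); }

    // Audio thread: never blocks, never allocates
    void push (InternalEvent e)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (1, start1, size1, start2, size2);

        if (size1 > 0)
            storage[start1] = e;

        fifo.finishedWrite (size1 + size2);
    }

private:
    // Message thread: drain whatever has arrived since the last tick
    void timerCallback() override
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (fifo.getNumReady(), start1, size1, start2, size2);

        for (int i = 0; i < size1; ++i)  handle (storage[start1 + i]);
        for (int i = 0; i < size2; ++i)  handle (storage[start2 + i]);

        fifo.finishedRead (size1 + size2);
    }

    void handle (const InternalEvent&)  { /* update your ValueTree etc. here */ }

    juce::AbstractFifo fifo { 128 };
    InternalEvent storage[128];
};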
I’d go with option one first though (posting a message) to check your feature set works like this.
Great, I think I can make this work. Thanks for your feedback.
I have a few more questions; hope you don’t mind. They should be a lot easier than the last ones.
Does Tracktion have OSC output capabilities? If not, is it easily compatible with the relevant JUCE OSC classes?
I was surprised to find that insertNewClip(clip, Edit::getMaximumEditTimeRange(), nullptr); asserts (line 104 in tracktion_CombiningAudioNode.cpp, which reads jassert (time.end < Edit::maximumLength);). I have to use { 0, Edit::maximumLength - 1 } for the EditTimeRange argument, which feels a bit strange. Is this on purpose? Or is there a chance that the assertion is supposed to use <= instead of <?
Since I’m planning to reserve some Midi messages for internal use, what is the best way of differentiating them? Should I reserve a single channel for internal use? (But there are only 16 channels, so this cuts down what the user can do.) Should I identify them as SysEx messages? (But I thought SysEx was manufacturer specific).
I’m trying to learn a bit more about the Midi format in general; does anyone have any good resources? I’ve found this, but I’m wondering if there are any standard texts I should read.
Yes, we have OSC output for the CustomControlSurface class. But you might want to open and manage an OSC port yourself if you need specialised control.
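If you do manage the port yourself, it’s only a few lines with the JUCE OSC classes, something like this (rough sketch; the host, port and address pattern are just placeholders):

juce::OSCSender sender;

if (sender.connect ("127.0.0.1", 9001))           // target host and port are placeholders
    sender.send ("/sequencer/noteTriggered", 55); // address pattern and payload are made up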
We can probably change that; it’s likely never been hit before, as adding a clip that’s 48 hours long is unusual.
It really depends on how many messages you have. Yes, MIDI only has 16 channels, but there are 127 messages you could encode. Other than that there are NRPNs, or SysEx.
SysEx stands for “system exclusive” and is usually used for large memory dumps like plugin states or for updating synth firmware. You can use it, it just might not be the best tool for the job.
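For example, one possible scheme (just a sketch; the controller number and helper names are mine to illustrate): reserve one of the undefined controller numbers for your internal commands and filter on it:

// CCs 102-119 are undefined in the MIDI spec, so one of those is a reasonable choice
constexpr int internalControllerNumber = 102;

static juce::MidiMessage makeInternalCommand (int channel, int commandId)
{
    return juce::MidiMessage::controllerEvent (channel, internalControllerNumber, commandId);
}

static bool isInternalCommand (const juce::MidiMessage& m)
{
    return m.isController() && m.getControllerNumber() == internalControllerNumber;
}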
Yeah, most people are lightweights. I hope what I’m doing is not quite as crazy as it sounds, though. It’s not based around the paradigm of a rolling playhead; instead, it’s perpetually on, waiting for instructions to play. I’ve tried to achieve this using the looping Clip method in Tracktion, since that’s the one presented in the tutorials. When wondering how long to make the Clip, I was just looking for some system constant rather than picking a number at random, but if you think that this will lead to problems, or that there is a better way of accomplishing what I’m trying to do, then please let me know.
This reminds me of another question (sorry for so many!). It’s likely that the software will spend long durations idle, without having anything to play. Is there an easy way of slowing down the sample rate, or stopping it altogether? Or should I just leave that kind of thing to the audio engine?