Can I roll my own version of these three Tracktion Engine features?

After a series of discussions on this forum several years ago, I decided to use Tracktion Engine for the sequencer app I’m developing. This got me off the ground, and my app is now working quite well (though it’s still under development). But as I’ve progressed, I’ve realized that Tracktion might not be the best fit for my app, and I’m now considering transitioning away from it. I’ve been thinking about how I might do this, and I’d like to present these thoughts here and gather some feedback from people with more experience than me. There are some quite specific questions below, so feel free to skip to that part.

But first, let me briefly mention the reasons why I’m thinking of transitioning away from Tracktion.

The main reason is that my app is not a DAW, so Tracktion’s workflow is somewhat at cross purposes with mine. At this point there are really only three things that I’m using Tracktion for (listed below), and I’m wondering if I could improve my binary size and runtime efficiency by building my own tools for these jobs. On the other hand, as I’ve been warned in the past, these might be quite tricky to implement, and there’s always the “if it ain’t broke, don’t fix it” argument. So if anyone can convince me that any of the three services are more complicated than I realize and that I’d be foolish to try to roll my own, this would be very useful feedback.

The second reason I’m considering moving away from Tracktion is that I’m a bit put off by the lack of a perpetual license, and I don’t like the idea of having to pay monthly subscriptions if I ever decide to sell my software. This is in no way meant as a criticism (and it’s not as important as the first consideration) but I feel like my account here would be incomplete if I didn’t mention it.

So there are three things I’m currently using Tracktion for, and for each of them, my question is, “what would it take to roll my own?”

The first use is as a plugin manager. My app has two built-in plugins, and in the future I intend to expand this so that the user can load their own plugins. Is it possible to accomplish this kind of thing in JUCE? I have a naïve sense that it shouldn’t be too difficult, but I’ve not really worked with this area of audio programming before, so I’m not sure.

The second use is for the Tracktion Sampler Plugin. In my app the user can load a bunch of samples and then trigger them using midi events. I’ve browsed through the Tracktion code here and it looks pretty complex—I’m not sure I’d want to write my own! But on the other hand this seems like a pretty generic tool, and I might be able to find some other open-source software for it, or even something within JUCE. Any ideas here?
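For what it’s worth, JUCE does ship a basic sampler: the `juce::Synthesiser` class together with `juce::SamplerSound` and `juce::SamplerVoice`. A rough sketch of how they fit together might look like the following (this is an untested outline, not working code from my app; the file path, voice count, and note range are placeholders):

```cpp
// Minimal sample-player sketch using JUCE's built-in Synthesiser classes.
juce::Synthesiser synth;
juce::AudioFormatManager formatManager;

void setup (double sampleRate)
{
    formatManager.registerBasicFormats();              // WAV, AIFF, etc.
    synth.setCurrentPlaybackSampleRate (sampleRate);   // normally done in prepareToPlay()

    for (int i = 0; i < 8; ++i)
        synth.addVoice (new juce::SamplerVoice());     // 8-voice polyphony

    juce::File sampleFile ("/path/to/sample.wav");     // placeholder path
    std::unique_ptr<juce::AudioFormatReader> reader (formatManager.createReaderFor (sampleFile));

    if (reader != nullptr)
    {
        juce::BigInteger midiNotes;
        midiNotes.setRange (0, 128, true);             // respond to all MIDI notes

        // root note 60 (middle C), no attack/release fade, up to 10 seconds of audio
        synth.addSound (new juce::SamplerSound ("sample", *reader, midiNotes, 60, 0.0, 0.0, 10.0));
    }
}

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi)
{
    synth.renderNextBlock (buffer, midi, 0, buffer.getNumSamples());
}
```

It’s deliberately bare-bones (no velocity layers, round-robins, etc.), but it may be enough if your needs are simple.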

The third use is the main one—I’m using Tracktion for its built-in timeline. Without going into too much detail, my app isn’t really based around a timeline, and it has its own solutions for scheduling MIDI notes on the audio thread. I’ve worked hard on this and it’s now working pretty well; the only point of contact with Tracktion is with repeated calls to edit.getCurrentPlaybackContext()->getUnloopedPosition() to find the current time. It’s this that I’m thinking of replacing. I’ve been warned repeatedly that this is harder than it sounds, but I might see a way of doing it, which I’ll outline below.

To get the current time, my plan was to get AudioPlayHead::PositionInfo from within the processBlock() function, and use getTimeInSeconds() to set an atomic&lt;double&gt;, which can then be read from another thread. The documentation for PositionInfo says that some hosts may not provide time information though. So as a backup, I could use a juce::HighResolutionTimer to constantly update the atomic&lt;double&gt;. This would probably not be very accurate, but my thought is that without a host timeline to sync to, the inaccuracy probably won’t be very noticeable. To summarize, here’s a pseudocode implementation of what I’m thinking:

std::atomic&lt;double&gt; currentTime { 0.0 };

double getCurrentTime()
{
    if (auto position = getPlayHead()->getPosition())
        if (auto seconds = position->getTimeInSeconds())
            return *seconds;

    return getTimeFromJucePrecisionTimer(); // fallback driven by a juce::HighResolutionTimer
}

void processBlock()
{
    currentTime.store (getCurrentTime());
    // do other stuff
}

void functionFromSomeOtherThread()
{
    double time = currentTime.load();
    // ...
}

Does this seem viable, or is there something I’m overlooking here? If I’m being stupid here, please tell me, because it will save me a lot of time!

I guess the answer depends on what you need the time for.

Can’t you just count the samples in processBlock(), something like

void processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiBuffer)
{
	// your code

	samplesSinceStart += buffer.getNumSamples();
}

double getCurrentTime()
{
	return samplesSinceStart / sampleRate;
}

void suspendProcessing (bool shouldBeSuspended)
{
	if (!shouldBeSuspended)
		samplesSinceStart = 0;
}

This time will of course be quantized to the buffer size, so if you want to show a moving timeline or something similar you’ll have to interpolate this time to avoid stuttering. Maybe you could use juce::HighResolutionTimer to fill in the gaps…

I hadn’t thought about using samples rather than seconds like this before. It does seem like it could work.

I don’t have a moving timeline or anything like that. I have my own event loop, which queries the time value frequently in order to decide which new MIDI events to process. In pseudo-code, it’s something like this:

void processMyMessageLoop() // my own thread (not audio or message thread)
{
     while (appIsRunning())
     {
          for (auto& midiMessage : pendingMidiMessages)
               if (midiMessage.executionTime() <= getCurrentTime())
                    scheduleOnLockFreeFifo (midiMessage);

          sleep (10);
     }
}

void processBlock (AudioBuffer&lt;double&gt;&, MidiBuffer& mBuffer)
{
     mBuffer.addEvents (popMidiBufferFromLockFreeFifo(), 0, -1, 0); // no locking or allocating
}

So my message thread processes a chunk of MIDI events at a time, and then forwards them to the audio thread. If I used sample blocks for timing as you suggest, would there be any loss of precision here?

Do you really need processMyMessageLoop? Can’t you put that code inside processBlock()?

Yes, I really need it. It’s doing way more than I’m showing here; I just wanted to keep the example brief.

Spontaneously, it looks a bit convoluted to me. (I haven’t seen the rest of your code, though)…

processBlock() is the routine that knows the exact time (for dispatching the MIDI events). It should be responsible for deciding when it’s time to dispatch the events, not a function in another thread (processMyMessageLoop()).

I would prepare the MIDI events and give them a time stamp (relative to samplesSinceStart above), possibly in another routine, put them e.g. in a MidiMessageSequence, and swap it out to processBlock() to use, e.g.

void processMyMessageLoop()
{
	while (!done)
	{
		// prepare midiEvents and give them their timestamps
		midiMessageSequence.addEvent (midiEvent);
	}

	const ScopedLock sl (getCallbackLock());
	midiMessageSequence.swapWith (processBlockMidiMessageSequence);
}

void processBlock (AudioBuffer&lt;double&gt;& audioBuffer, MidiBuffer& midiBuffer)
{
	auto numSamples = audioBuffer.getNumSamples();

	// check the time stamp of the next event in processBlockMidiMessageSequence
	auto* midiEvent = processBlockMidiMessageSequence.getEventPointer (nextEventID);
	auto midiSamples = midiEvent->message.getTimeStamp() * sampleRate;

	if (midiSamples < samplesSinceStart + numSamples)
	{
		midiBuffer.addEvent (midiEvent->message, (int) (midiSamples - samplesSinceStart));
		++nextEventID;
	}

	samplesSinceStart += numSamples;
}

The caveat in this example is that processBlock() might not have finished the content of its processBlockMidiMessageSequence before processMyMessageLoop() is ready to swap in the next sequence of events, so it may need a mechanism to alert processMyMessageLoop() that it’s OK to swap in another sequence.

Or you could use a MidiBuffer instead of the processBlockMidiMessageSequence and do something like this in processMyMessageLoop() when it has prepared a new event:

void processMyMessageLoop()
{
	MidiBuffer temp;

	// prepare MIDI events and put them in temp. When done, do:

	const ScopedLock sl (getCallbackLock());

	if (processorMidiEventsBuffer.isEmpty())
		processorMidiEventsBuffer.swapWith (temp);
	else
		processorMidiEventsBuffer.addEvents (temp, 0, -1, 0);
}

These are just my quick thoughts on the subject, as I understand it so far. The important thing, though, is that processBlock() should do the counting (aka samplesSinceStart) and decide when it’s time to let go of the next MIDI event, in order to ensure an accurate, tight, sample-correct dispatch of the MIDI events.

I understand what you’re saying. My message manager is there because the MIDI notes aren’t directly given. The user types in some JavaScript code, which is interpreted and then used to schedule the MIDI notes. The JS engine is pretty heavy, which is why I’ve given it its own thread.

I think what you’re saying still works though, and I’ll give it a shot with samples rather than seconds. One question here though: If I store the sample counter in say std::atomic<std::size_t>, what can I do to ensure it doesn’t overflow?

I don’t know if it’ll have to be atomic at all, but note that size_t is only 32 bits on 32-bit builds. A 32-bit counter would last 2^32 / 48000 seconds before it overflows (at a sample rate of 48,000), i.e. just over 24 hours if I haven’t miscalculated. If you need longer sessions than that, just make it an atomic&lt;int64&gt;

Great, I’ll definitely be giving this a shot. Thanks so much for the help.

Does anyone have any ideas about the other two points, namely the plugin manager and the sample player?

Take a look at AudioPluginHost in the extras folder. It’ll show you how to scan and load external plugins, i.e. a plugin manager.
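The core of what AudioPluginHost does can be boiled down to a few JUCE classes. A rough, untested sketch of the scanning and loading side (scan options are illustrative; real code would also persist the KnownPluginList as XML and probably scan on a background thread, as AudioPluginHost does):

```cpp
// Sketch of a minimal plugin manager in plain JUCE (no Tracktion).
juce::AudioPluginFormatManager formatManager;
juce::KnownPluginList knownPlugins;

void scanForPlugins()
{
    formatManager.addDefaultFormats(); // VST3, AU, etc., depending on platform

    for (auto* format : formatManager.getFormats())
    {
        juce::PluginDirectoryScanner scanner (knownPlugins, *format,
                                              format->getDefaultLocationsToSearch(),
                                              true,           // search recursively
                                              juce::File());  // "dead man's pedal" file (none here)

        juce::String pluginBeingScanned;
        while (scanner.scanNextFile (true, pluginBeingScanned)) {}
    }
}

std::unique_ptr<juce::AudioPluginInstance> loadPlugin (const juce::PluginDescription& desc,
                                                       double sampleRate, int blockSize)
{
    juce::String error;
    return formatManager.createPluginInstance (desc, sampleRate, blockSize, error);
}
```

The trickier parts in practice are crash-safe scanning (that’s what the “dead man’s pedal” file is for) and hosting the plugin editors, both of which AudioPluginHost demonstrates.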


Thinking about it a bit more, if I’m using samples to calculate timing, do I risk my plugin wandering away from the host’s timing? Is there anything I can do to synchronize them?

If you want to sync the plugin time with the DAW time, you could do it at the first process call, something like this:

void YourAudioSourceProcessor::suspendProcessing (bool shouldBeSuspended)
{
	if (!shouldBeSuspended)
		samplesSinceStart = -1; // Flag it to be set at first processBlock call

	AudioProcessor::suspendProcessing (shouldBeSuspended);
}

void processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiBuffer)
{
	if (samplesSinceStart < 0)
		samplesSinceStart = playHead->timeInSamples; // or whatever it's called
	else
		/* should stay in sync here if the host is updating the playhead according
		   to spec; could possibly differ by one bufferful */
		jassert (samplesSinceStart == playHead->timeInSamples);

	// your code

	samplesSinceStart += buffer.getNumSamples();
}

Hopefully, if the host is doing its job, the assert will never trigger, and if it does, it will hopefully be a constant offset, which you could subtract from samplesSinceStart. And if not even that’s the case, your user probably won’t notice if the MIDI events are a few samples off anyway.

No, your plugin will not wander away from the host’s timing (under normal conditions), just as no other (well-behaving) plugin will—otherwise they would lose audio samples every now and then, and they don’t.

The only thing that might happen is that there might be a constant offset between the host and the plugin if the host fails to report its timeInSamples at the beginning, but that will hopefully be unnoticeable.