Request: VST hosting Playhead support (not implemented yet)

I’m currently programming a VST host, and I noticed that the BPM of the PlayHead I pass to the VSTPluginInstance via setPlayHead() is not taken into account. So I modified the code a little:

[code]void VSTPluginInstance::processBlock (AudioSampleBuffer& buffer,
                                      MidiBuffer& midiMessages)
{
    const int numSamples = buffer.getNumSamples();

    if (initialised)
    {
        // Ask the host-supplied playhead for the current transport state
        // and copy it into the VSTTimeInfo handed to the plugin.
        AudioPlayHead* playHead = getPlayHead();

        if (playHead != 0)
        {
            AudioPlayHead::CurrentPositionInfo position;
            playHead->getCurrentPosition (position);

            vstHostTime.tempo = position.bpm;
            // and so forth...
        }[/code]

Now it works, but it would be cool if other people could enjoy this feature too :slight_smile:

Jules, could you update juce_VSTPluginFormat.cpp so that the information passed via setPlayHead() is taken into account? That would really be the icing on the cake of your plugin hosting code :stuck_out_tongue:

Sure - I actually didn’t realise that I never finished adding that bit. I’ll throw that in there shortly…

You updated your code, but you forgot to add ppqPosition and ppqPositionOfLastBarStart (and possibly others, I didn’t check), which are very important - for instance, if you want a plugin that plays a sequence synchronized to the sequencer’s main song, you absolutely need this information.
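
For reference, here’s a minimal sketch of the extra mapping being asked for, using the field names from JUCE’s AudioPlayHead::CurrentPositionInfo and the VST2 SDK’s VSTTimeInfo (the exact flag handling is my assumption, not the actual juce_VSTPluginFormat.cpp code):

[code]// Sketch: copy the playhead's musical position into the VSTTimeInfo
// that the plugin will later read via audioMasterGetTime.
vstHostTime.tempo              = position.bpm;
vstHostTime.ppqPos             = position.ppqPosition;
vstHostTime.barStartPos        = position.ppqPositionOfLastBarStart;
vstHostTime.timeSigNumerator   = position.timeSigNumerator;
vstHostTime.timeSigDenominator = position.timeSigDenominator;

// tell the plugin which of the fields are valid
vstHostTime.flags |= kVstTempoValid | kVstPpqPosValid
                       | kVstBarsValid | kVstTimeSigValid;[/code]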

Yes, sorry, I got interrupted while writing it and didn’t finish. I’ll add those params shortly…

I’m a bit confused as to how to use the AudioPlayHead. What component is ultimately responsible for filling in the AudioPlayHead::CurrentPositionInfo? In particular, who updates the sample position (VSTTimeInfo::samplePos)?

I’ve traced the plug-in host running the demo plugin through VSTPluginInstance::processBlock: the plugin calls AudioPlayHead::getCurrentPosition, which after a couple of callbacks ends up in VSTPluginInstance::handleCallback. The invocation chain ends in that method’s audioMasterGetTime case, which returns vstHostTime, an instance of VSTTimeInfo owned by the VSTPluginInstance.
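
In outline, that branch looks something like this (a simplified sketch - the real juce_VSTPluginFormat.cpp differs in names and detail):

[code]// Simplified sketch of the audioMaster callback dispatch described above.
VstIntPtr VSTPluginInstance::handleCallback (VstInt32 opcode, VstInt32 index,
                                             VstIntPtr value, void* ptr, float opt)
{
    switch (opcode)
    {
        case audioMasterGetTime:
            // the plugin gets a pointer to the host-side VSTTimeInfo
            return (VstIntPtr) &vstHostTime;

        // ... many other opcodes ...
    }

    return 0;
}[/code]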

Now vstHostTime is updated during VSTPluginInstance::processBlock, which, if the instance has an AudioPlayHead, queries it and fills in various fields of vstHostTime from the resulting AudioPlayHead::CurrentPositionInfo. Alas, VSTPluginInstance::getPlayHead returns 0.

The consequence of VSTPluginInstance not having an AudioPlayHead is that ppqPosition and related fields don’t get filled in. Only the nanoSeconds field is updated (from the Windows multimedia clock).

Finally, the original AudioPlayHead that gets passed in to the filter is the instance of the VSTWrapper class that gets created when the filter is loaded. Thus there is a different AudioPlayHead for each filter loaded by the host.

My questions (among others):

  1. Why is VSTPluginInstance::playHead, mentioned in the 3rd paragraph above, null?

  2. Who updates VSTTimeInfo::samplePos?

  3. How do you synchronize the multiple instances of AudioPlayHead?

  4. Should a more accurate clock (like the high performance timer) be used to update VSTTimeInfo::nanoSeconds?

My guess is that the answers involve the audioMasterCallback, which I thought would be a callback to the host. But this callback just goes back to the VSTPluginInstance.

Another piece of the puzzle is that some component needs to act as a transport and implement a play function that establishes a zero point for the sample position. The logical place would be the filter graph, since it touches all the loaded plugins.

Any help in clearing up the confusion will be greatly appreciated.

Well, it’s up to whatever is hosting the plugin to provide a playhead - it gets set using AudioProcessor::setPlayHead, so if you’re writing e.g. a VST plugin, the wrapper code provides the playhead. If you’re writing a host, then the host needs to provide the playhead, because only it can know that timing info. If it’s zero, then I guess the simple hosting demo just doesn’t set one (not surprising, as there’s no actual timeline in the demo). If you’re writing a real host with a timeline, you’d need to create your own playhead subclass and give it to the filters so they can ask for the info they need.
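
For illustration, a minimal sketch of such a playhead subclass, assuming a host whose audio callback advances a sample counter (the class and member names are made up for this example, not part of JUCE; needs <cmath>):

[code]// Hypothetical playhead for a host with its own timeline. The host's
// audio callback calls advance() after each block, and every loaded
// filter is handed this object via AudioProcessor::setPlayHead().
class HostPlayHead  : public AudioPlayHead
{
public:
    HostPlayHead()  : samplePos (0), sampleRate (44100.0), playing (false) {}

    bool getCurrentPosition (CurrentPositionInfo& info)
    {
        const double bpm = 120.0;                       // fixed tempo for the sketch
        const double timeSecs = samplePos / sampleRate;
        const double ppq = timeSecs * (bpm / 60.0);

        info.bpm = bpm;
        info.timeSigNumerator = 4;
        info.timeSigDenominator = 4;
        info.timeInSeconds = timeSecs;
        info.ppqPosition = ppq;
        info.ppqPositionOfLastBarStart = std::floor (ppq / 4.0) * 4.0; // 4/4 bars
        info.isPlaying = playing;
        info.isRecording = false;
        return true;
    }

    // called from the host's audio callback to keep sample-accurate time
    void advance (int numSamples)    { if (playing) samplePos += numSamples; }

    int64 samplePos;
    double sampleRate;
    bool playing;
};[/code]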

Jules, what you say makes sense, but I thought the JuceVSTWrapper itself is the AudioPlayHead passed to a loaded filter. Does this mean that the host has to derive its AudioPlayHead from the JuceVSTWrapper, or would the host give the filter a separate AudioPlayHead derivative after the filter has been instantiated by the wrapper?

Another question is, who should be driving the timing information? In the multitrack recorder I’ve implemented (not using the plug-in framework), timing is provided by the AudioTransportSource’s audioIOCallback method. Deriving timing information during audioIOCallback should be more accurate than, say, a separate clock (even the high-performance timer) running in a separate thread, because it is driven by the audio driver. By keeping track of the number of samples in the buffers passed into audioIOCallback, you can sync to sample accuracy and easily synchronize both streamed audio and audio coming in from the audio card, since it is all coordinated by a single method.
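
Concretely, the sample-counting approach might look like this (a sketch using the classic AudioIODeviceCallback signature; totalSamples and currentSampleRate are assumed members of the enclosing class):

[code]// Sketch: the timeline position is derived purely from the number of
// samples the driver has pulled, which keeps timing sample-accurate.
void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                            float** outputChannelData, int numOutputChannels,
                            int numSamples)
{
    const double blockStartSecs = totalSamples / currentSampleRate;

    // ... render/record this block using blockStartSecs as its position ...

    totalSamples += numSamples;
}[/code]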

Sequencers like Cakewalk have two clocking methods (not counting SMPTE or external MIDI clock sync): one driven by an internal clock and used for MIDI, the other driven by timing information from the audio driver. The internal MIDI clock, with a maximum resolution (at least in my version) of 480 ppq, cannot drive audio without speed variations and dropouts, which makes the timing information derived from the audio card critical for audio recording and playback.

Translating the above into Juce leads me to consider the AudioProcessorPlayer as the object to provide the timing information. It has both the audioIOCallback and midiCallback methods and seems to be what’s pushing audio through the FilterGraph in the Audio Plug-in Host example. AudioProcessorPlayer functions similarly to the AudioTransportSource in the sense that audio is pushed through it by the driver and it calls the block-processing method of its attached AudioProcessor. This makes it a natural candidate for building a sequencer object, but I’m not sure that’s its intended operation. Please comment.
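
For what it’s worth, the standard wiring looks roughly like this (a sketch - I’m using the later addAudioCallback name, and device setup and graph construction are omitted):

[code]// Sketch: the driver pushes audio into the player, which in turn calls
// processBlock() on the attached graph of filters.
AudioDeviceManager deviceManager;
AudioProcessorGraph graph;
AudioProcessorPlayer player;

deviceManager.initialise (2, 2, 0, true);  // 2 ins, 2 outs, no saved settings
player.setProcessor (&graph);
deviceManager.addAudioCallback (&player);[/code]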

Yes, AudioProcessorPlayer would be the logical place to implement a playhead, but it couldn’t fully specify the values without knowing about the stuff it’s playing - some of the timing info refers to a notional ‘edit time’ rather than just how long it’s been streaming for.

But the JuceVSTWrapper should be doing just what you say - in that case, the wrapper grabs info from the host and turns it into a playhead for the plugin to use. I thought you were writing a host, though?

I am writing a host but I want to know about data and timing flow through the whole system, including the plugins. Thanks for the info–I’ve got a pretty good idea of how to proceed.