Using the JUCE sample audio plugin example, I’m trying to understand where I can find an accurate timing source that is related to the tempo of the host DAW.
Am I on my own to detect MIDI clock events or is there some nice callback that gets invoked at some fractional interval of the current tempo?
Have a look at AudioPlayHead and AudioProcessor::getPlayHead ()
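For context, the playhead reports the tempo (BPM) and the position in quarter notes, and turning those into sample counts is simple arithmetic. A minimal standalone sketch (plain C++ rather than real JUCE code; the name ppqPosition is borrowed from AudioPlayHead’s position info, and the functions are made up for illustration):

```cpp
#include <cmath>

// Samples per quarter note at a given sample rate and tempo.
double samplesPerBeat (double sampleRate, double bpm)
{
    return sampleRate * 60.0 / bpm;
}

// ppqPosition is the playhead position in quarter notes (as a host
// would report it). Returns how many samples away the next beat
// boundary is. If we are exactly on a beat, this returns a full beat.
double samplesUntilNextBeat (double ppqPosition, double sampleRate, double bpm)
{
    const double beatPhase = ppqPosition - std::floor (ppqPosition); // 0..1 within the beat
    return (1.0 - beatPhase) * samplesPerBeat (sampleRate, bpm);
}
```

So at 48 kHz and 120 BPM a beat is 24000 samples, and a playhead sitting at ppqPosition 3.5 is 12000 samples away from the next beat.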
I don’t think that’s a callback. I want to ensure a particular piece of code is called periodically with very high accuracy.
I don’t really understand what you’re asking… The plugin’s audio callback is where you do your work. Inside it you can ask the host about its current position, and return any MIDI events that you want the host to send, but that’s all you get.
Suppose you want to produce very accurate echoes timed to the beat. I’d hate to have to poll to do that kind of thing.
Maybe I’m missing something fundamental about the way these plugins work…I’m new to this stuff.
Yes, you seem to be completely misunderstanding. Like I said, you process each block of data when the host calls your plugin and asks for it. Maybe you should take a good look at how the demo plugin works.
I actually used that audio plugin as my starting point, although I deleted most of the synth-producing pieces (though not the method calls), as I’m really trying to generate MIDI output to virtual MIDI devices in response to incoming MIDI data.
I finally took the time to understand how the Jucer GUI builder integrates, and now I’m able to instantiate that (GUI-modified) plugin in MainStage and send MIDI events into it (starting to play with the MidiInputCallback stuff). But now I want to produce generative MIDI output with precise timing, and I’m still just reading through the classes (on my iPad) to see what’s available.
In the “old” days (i.e., the last time I did this was probably 15 years ago) I would create a high-precision timer and generate what I needed each time I got a tick. So far, the only mechanism I’ve found lets me put MIDI events in a buffer with timestamps (sendBlockOfMessages). The problem with that is I don’t actually know what I want to send until I reach the desired point in the future, so I can’t prefill that buffer with anything (and if the tempo changes while there are still events pending, I’d want to be able to adjust their times anyway).
However, I am assuming that sendBlockOfMessages has very accurate timing in it, so it can grab each event at the appropriate time. That leads me to believe there is a way to get at a very precise timing source (in a cross-platform manner) that I could use to trigger my own processing.
Does that make sense?
No! You don’t use (or need) the MidiOutput or MidiInput classes in a plugin!
When your audio callback happens, the host also gives you some MIDI input, and you return whatever MIDI you want it to output. You don’t run your own thread or timers, or send messages yourself. The host pulls the data from you when it wants it; your plugin just has to sit there and respond to the callbacks.
The only reason I’m building this as a plugin is so that MainStage (or whatever DAW is being used) can recall settings automatically and allow hardware control surfaces to adjust parameters. There is no actual audio involved in what I’m doing. I realize I can get MIDI in through the audio callback, but it’s not clear that helps me, as the issue is still about generating MIDI data over time with high precision.
I did observe that I was getting continuous callbacks to processBlock even when I’m not “playing” anything, but it wasn’t clear to me whether they were coming in at precise intervals. If they are, then I can certainly use that. The output will be going to a virtual MIDI port.
So can I assume from the lack of responses that I’m on my own to produce a high-frequency accurate timer callback? I would have thought this was a pretty common need.
the audio callback IS your high resolution timer…
So you’re saying that processBlock gets called all the time at a fixed rate (or at least at a high enough rate that I can derive my own periodic timer from it), regardless of any ‘audio’ that might or might not be happening? And if so, is that true cross-platform?
If that’s the case, then I’m all set and should be able to move forward with no problems…exciting.
I was just starting to do some more experiments to determine this myself in fact.
No, it might not be regular (I’ve been observing Logic change buffer sizes recently while a plugin is running), but since each callback delivers a known number of samples, and the MIDI data is timed in samples, you can schedule things in a sample-accurate way by “counting samples”.
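A sketch of that sample-counting idea (standalone C++, no JUCE; BeatClock is a made-up name, and it assumes a constant tempo for simplicity): accumulate the sample count across successive blocks, whatever their sizes, and report the offset within each block at which a beat boundary lands.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical beat scheduler: counts samples across audio blocks and
// reports the offset, within each block, of every beat boundary.
struct BeatClock
{
    double sampleRate;
    double bpm;
    int64_t samplesSinceStart = 0;

    // Call once per audio callback with that block's size. Returns the
    // sample offsets (relative to the block start) where beats fall.
    std::vector<int> beatOffsetsInBlock (int numSamples)
    {
        std::vector<int> offsets;
        const double samplesPerBeat = sampleRate * 60.0 / bpm;
        const int64_t blockStart = samplesSinceStart;
        const int64_t blockEnd   = blockStart + numSamples;

        // First beat index at or after the start of this block.
        int64_t beat = (int64_t) std::ceil ((double) blockStart / samplesPerBeat);

        for (;; ++beat)
        {
            const int64_t beatSample = (int64_t) std::llround (beat * samplesPerBeat);
            if (beatSample >= blockEnd)
                break;
            offsets.push_back ((int) (beatSample - blockStart));
        }

        samplesSinceStart = blockEnd;
        return offsets;
    }
};
```

Those offsets are exactly the sample-position timestamps you would use when adding MIDI events to the block being processed; to follow tempo changes, you would re-read the tempo from the playhead each callback rather than hard-coding it.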
Yes — I now see how stuff works — I threw in a logger so I could monitor what’s going on in that processBlock and I see how the callback rate and sample size are connected to the audio preferences in MainStage. The whole thing makes sense now…Julian, this is a VERY nice piece of work.
I have the same kind of question: from what was said above, the audio callback gives us an accurate timer. Could we build an audio app rather than a plug-in, and use that app’s audio callback, so not necessarily AudioProcessor::getPlayHead(), which is used for building plug-ins? Is there an example of such an app around?
I’m not sure that this is an accurate statement
Would you mind either creating a new thread if you have a problem, or explaining a bit further what you think is wrong with that statement?
Otherwise this conversation goes into the void and leaves people puzzled at best or annoyed at worst.
Oh sorry, I didn’t notice the creation date. Well, I’m not sure that getNextAudioBlock is called at precise intervals, is it?
That is correct: processing time is never regular, as it depends on system resources.
The deadline by which each block of the signal must be delivered, however, is well determined (relative to the signal stream).
So you should synchronise your data to the audio clock, which is provided by the AudioPlayHead as described before.
That way you can say precisely that an event shall take effect at the n-th sample within a given block of samples.
Does that sound clearer?
This is a perfect answer, thank you