I want to make an audio plugin host in JUCE that you can make music with. I want to treat graphs in a way that’s analogous to individual tracks in a DAW, which means being able to start processing a graph at a given split second or sample number. I’m not sure how to do that.
It seems to me that if the sample offset I want to start at is a multiple of the block size, it’ll be easy. But if it’s not, I don’t know how to do it without processing a shorter block in the other tracks that are running simultaneously (can you even do that?) to sync up the block boundaries, and it seems like that could glitch the audio output during real-time playback.
Another way that occurs to me is that I could make a plugin that just delays a signal by a given amount of time and put it at the beginning of the graph. It could just store all the samples in the time difference between the input and the output in memory in a ring buffer. But the problem with that would be memory consumption: my rough guess, before actually calculating it, is that 3.5 minutes would take roughly 80 MB. And what if your track starts, say, an hour and 25 minutes into the song? I might be able to mitigate memory usage by run-length encoding the zeros at the end of the track, but then it seems I couldn’t use a ring buffer, which is a fixed size, so it would be very CPU-inefficient.
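For reference, my rough math checks out, assuming 48 kHz stereo 32-bit floats: 210 s × 48,000 samples/s × 2 channels × 4 bytes ≈ 80.6 MB. The delay line itself would be simple enough; here’s a minimal sketch of the ring-buffer idea (just a plain class for illustration, not a full AudioProcessor):

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

// Minimal fixed-delay line: output lags input by exactly `delay` samples.
class FixedDelay
{
public:
    void prepare (int numChannels, int delaySamples)
    {
        jassert (delaySamples > 0);
        delay = delaySamples;
        ring.setSize (numChannels, delaySamples);
        ring.clear();                       // the first `delay` output samples are silence
        writePos = 0;
    }

    void process (juce::AudioBuffer<float>& block)
    {
        for (int i = 0; i < block.getNumSamples(); ++i)
        {
            for (int ch = 0; ch < block.getNumChannels(); ++ch)
            {
                auto* data = block.getWritePointer (ch);
                const float in = data[i];
                data[i] = ring.getSample (ch, writePos);   // read the delayed sample
                ring.setSample (ch, writePos, in);         // overwrite it with fresh input
            }
            writePos = (writePos + 1) % delay;             // advance the ring position
        }
    }

private:
    juce::AudioBuffer<float> ring;
    int delay = 1, writePos = 0;
};
```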
So, what’s the best way? And also, if I do decide to do it via a delay plugin, do I have to make a separate project with juce_audio_plugin_client enabled and build the plugin as a .vst3, or can I just code the plugin as an intrinsic part of my audio plugin host project? How would I do that? I note that enabling juce_audio_plugin_client seems to break audio plugin hosting in the same project.
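To be concrete about what I’m hoping is possible: if AudioProcessorGraph accepts any juce::AudioProcessor subclass directly, something like this should work with no separate plugin target (MyDelayProcessor would be my own juce::AudioProcessor subclass, and instrumentNode some existing node in the graph; I haven’t verified any of this):

```cpp
juce::AudioProcessorGraph graph;

// Add the in-process delay directly as a node; no .vst3 involved.
auto delayNode = graph.addNode (std::make_unique<MyDelayProcessor>());

// Wire it ahead of the rest of the chain (channels 0 and 1).
graph.addConnection ({ { delayNode->nodeID, 0 }, { instrumentNode->nodeID, 0 } });
graph.addConnection ({ { delayNode->nodeID, 1 }, { instrumentNode->nodeID, 1 } });
```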
Edit: It occurs to me that I could take a hybrid approach: start the graph at a specific block number and have the delay plugin only delay by a fraction of a block to make up the difference, thereby taking only a tiny bit of memory.
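In other words, something like this (my own helper, nothing JUCE-specific):

```cpp
#include <cstdint>

// Split a desired start sample into whole blocks plus a sub-block remainder.
// The graph starts on startBlock; the small delay covers the remainder, so the
// ring buffer never needs more than blockSize - 1 samples per channel.
struct StartPoint
{
    std::int64_t startBlock;
    int subBlockDelaySamples;
};

StartPoint splitStart (std::int64_t startSample, int blockSize)
{
    return { startSample / blockSize,
             (int) (startSample % blockSize) };
}
```

With 512-sample blocks that’s at most 511 samples per channel, a few kilobytes instead of tens of megabytes.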
P.S. I still also need help with my previous post. =)
To line things up with sample accuracy you will need to have the whole sequence of the tracks/clips/events all updated to the graph at the same time, all in one shot, at the top of the graph’s processing cycle.
The only fine-grained limitation for time is the sample rate the graph is processing at. The block size only affects how quickly your change to the graph’s state gets picked up.
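For example, if your sequencer knows a note should sound at absolute sample position N, that note simply lands inside whichever block contains N, at an offset within that block. A rough sketch, assuming a juce::MidiBuffer and a playhead counter your engine maintains itself (blockStartSample and noteOnSample are hypothetical names):

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

// Schedule a note with sample accuracy inside the current block.
// blockStartSample = absolute position of this block's first sample (your playhead).
// noteOnSample     = absolute position where the note should sound.
void scheduleNote (juce::MidiBuffer& midi, juce::int64 blockStartSample,
                   juce::int64 noteOnSample, int blockSize)
{
    const auto offsetInBlock = (int) (noteOnSample - blockStartSample);

    if (offsetInBlock >= 0 && offsetInBlock < blockSize)      // it falls in this block
        midi.addEvent (juce::MidiMessage::noteOn (1, 60, (juce::uint8) 100),
                       offsetInBlock);                        // sample-accurate offset
}
```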
The thing is that I want the graph that starts at a certain time/sample to play alongside other graphs. It’ll be starting at a certain time/sample index relative to the other graphs. But I can only process a whole block at a time. So how would I line it up with sample-level accuracy? I’m not sure if I understood you correctly or if I explained my problem well enough in my initial post.
Btw, I realized I can’t use the custom delay plugin route, because I want the user to be able to send MIDI or change parameters as the song is playing, and that includes on a graph that starts at a specific time.
As far as I’m aware there’s no way around it. All graphs will have to sync to a common clock/playhead and your update mechanism will need to cover all positioning concerns during that update.
If you want to trigger things on the fly then you can of course do that but you then can’t make any guarantees that what you’ve triggered on the fly will land perfectly at a specific sample position.
Thanks. I guess I don’t really need more precision than the block size, now that I think about it. Even DAWs have some latency due to the processing speed and block size. I guess to delay a graph till the right time I’ll just start it at the nearest processing block to timeOffset * sampleRate.
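For example, at 48 kHz with 512-sample blocks, a 10-second offset is sample 480,000, which is block 937.5; snapping to the nearest block boundary puts me off by 256 samples, about 5.3 ms, and in general the error is at most half a block. That seems acceptable.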
Yeah I think you might still be running into trouble if you’re trying to line things up like that. But of course maybe I’m not picking up on what exactly you mean.
Since you mention you are building something DAW-like, have you looked at using Tracktion Engine? You could save yourself a lot of time and effort by building on that vs doing it from scratch.
I think maybe I see why you say I’d run into trouble. The graphs would have different block processing times, so I’d have to play back each block only when both graphs had finished processing it. But starting a new graph at time T would change the combined processing delay, so you’d hear a glitch in the audio output whenever I stop or start another graph. Is that it? I wonder how I could solve that.
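One idea for the start itself: give the new graph a short first block so it lands on the common clock, then full blocks from then on. I believe processBlock may be called with fewer samples than the prepared maximum, so something like this might work (all variables here are hypothetical, and a real callback would preallocate the buffer):

```cpp
const int offset   = startSampleWithinBlock;   // 0 .. blockSize - 1
const int firstLen = blockSize - offset;

juce::AudioBuffer<float> firstBlock (numChannels, firstLen);
firstBlock.clear();
juce::MidiBuffer midi;

newGraph.processBlock (firstBlock, midi);      // short first call aligns the graph's clock

// Mix the new graph into the master output at its exact start offset.
for (int ch = 0; ch < numChannels; ++ch)
    output.addFrom (ch, offset, firstBlock, ch, 0, firstLen);
```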
Thanks for telling me about Tracktion Engine. I’m not sure if it would be useful to me or not. There’s no detailed documentation (it’s supposed to have Doxygen-generated content, but the files seem to be missing), though it seems it’s used primarily for MIDI sequencing to plugins and the like. I want to be able to sequence, but I also want to be able to send MIDI commands at arbitrary times, i.e., “between beats” if the user wants.
I also want to be able to script changes in plugin parameters over time. And I want the user to be able to program all this in JavaScript or Python (I’m going with JavaScript, since I think using JUCE’s native JavaScript engine might be easier than figuring out Python bindings.)
I also want the user to be able to play back the song and change plugin parameters or enter MIDI notes in real time using input devices (like keyboard keys or controls on a MIDI keyboard), and the program will remember those inputs in data files alongside the scripted program. Which controls input which parameters/MIDI notes, and when those mappings start and stop, would be something the user controls programmatically as part of the song program.
So I’m not sure what Tracktion does that JUCE doesn’t already do, except apparently some kind of sequencing over discrete time units; my program would do a lot more than that, and would do it in a scripting language.
Could it be that you are picking the wrong tool for the job? Graphs are good at determining the order of a DSP chain, but not very flexible in working with time. Tracks are there to define a sequence of events through time, which can be applied to a graph. Instead of forcing a graph to do something it wasn’t designed for, why not use tracks for time sequencing?
Oh, I just figured I’d make my own routines to figure out exactly when to send MIDI notes to the instrument plugins in the tracks or change parameters on the effect plugins…wouldn’t that work?
I didn’t know JUCE had tracks, though. How do they work? Or do you mean Tracktion tracks? Can they involve changing parameters, and can events happen at arbitrary times, not just squarely on beats? Also, I don’t expect I could use tracks to input MIDI/parameters during playback.
Based on what you describe, Tracktion Engine is exactly what you want. It’s a production-grade DAW engine. You can sequence things exactly how you want; there’s no per-beat or MIDI limitation or anything like that. It might not have scripting built in, but that’s a layer you can build on top either way.
And if there is by chance some piece of functionality that you want that it doesn’t have, then just build that part yourself. You’ll be miles (probably meaning years) ahead of the game vs trying to build it all from scratch.
Cool, thanks. I’ll look into it more. But what about the other thing I mentioned: inputting MIDI and effect parameters from MIDI controls during playback? Don’t you have to define the whole track ahead of time?
Your engine will be a combination of predetermined events/sequences (playback content) and real-time inputs. For playback content everything can be lined up to be sample-accurate; that is what is predetermined ahead of time. For live inputs the accuracy is at block level. There’s no inherent positioning involved beyond that (except maybe in some cases when syncing with external clocks, but I don’t have much experience with that).
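So for live input, the practical approach is to stamp each event with the playhead position of the block it arrived in; on replay it then becomes part of the sample-accurate playback content. Roughly like this (playheadSample and recordedEvents are placeholders for whatever your engine uses; a lock-free FIFO would be better than the lock shown here):

```cpp
#include <juce_audio_devices/juce_audio_devices.h>
#include <atomic>
#include <vector>

// Sketch: capture live MIDI with the best timestamp available, i.e. the start
// of the audio block currently being processed.
struct LiveInputRecorder : juce::MidiInputCallback
{
    struct Event { juce::MidiMessage message; juce::int64 sample; };

    std::atomic<juce::int64> playheadSample { 0 };       // updated at the top of each block

    void handleIncomingMidiMessage (juce::MidiInput*, const juce::MidiMessage& m) override
    {
        const auto when = playheadSample.load();         // block-level accuracy, no better
        const juce::SpinLock::ScopedLockType lock (eventLock);
        recordedEvents.push_back ({ m, when });          // replayed sample-accurately later
    }

    juce::SpinLock eventLock;
    std::vector<Event> recordedEvents;
};
```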
Tracktion Engine solves a lot of these problems. JUCE is one layer of the stack, and Tracktion sits on top of it. The next layer above that is the GUI, and when you’ve done that, you’ve got a DAW.
Look at the Tracktion Engine demo and examples. There are ready-to-build examples of audio recording and playback, MIDI recording and playback, etc. Starting from those you can see how to build a whole DAW. I did.
The Waveform DAW is built on top of Tracktion Engine.
JUCE exists because its functionality was needed to create a DAW. It has been expanded to be much more versatile, but those are its roots.