I have an audio application that uses AudioProcessorGraph to handle a graph of ins and outs, hosted plugins, etc. As well as plugins, I want my app to have built-in sample playback and recording. I haven’t worked with sample playback before in juce, but I’ve looked through the tutorials and it seems like something I should be able to handle.
My question is this: Should I design it so that every sample takes a single node in the graph, or should I have just one node for a sample player, which handles all samples simultaneously? The first option (one node per sample) would fit very well with my existing design and I think it would be much easier to program, so really what I’m asking is if this design raises any red flags, or if it would scale significantly worse than having a single sample player to handle all samples.
I hope this question makes sense. It’s really a question about design rather than coding, since I haven’t even started writing this part of the code yet.
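For concreteness, here is a minimal plain-C++ sketch (no JUCE dependencies; all the names are made up for illustration) of what the single-node alternative boils down to: one processor owns all the playing samples as "voices" and mixes them into its output buffer itself, instead of giving each sample its own graph node. In JUCE this logic would live inside one AudioProcessor's processBlock().

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One voice = one currently-playing sample (illustrative names only).
struct Voice
{
    std::vector<float> sample; // the loaded sample data (mono)
    std::size_t position = 0;  // current read position
    bool active = true;

    // Add this voice's next frames into the output buffer.
    void renderAdding (float* out, int numSamples)
    {
        for (int i = 0; i < numSamples && position < sample.size(); ++i)
            out[i] += sample[position++];

        if (position >= sample.size())
            active = false; // one-shot: stop when the sample ends
    }
};

// A single "sampler node": mixes all of its voices itself.
struct SamplerNode
{
    std::vector<Voice> voices;

    void processBlock (float* out, int numSamples)
    {
        for (auto& v : voices)
            if (v.active)
                v.renderAdding (out, numSamples);
    }
};
```

With the one-node-per-sample design, each Voice would instead become its own graph node and the summing would be done by the graph's connections; the audible result is the same, the difference is who owns the iteration.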
IMHO, it's worthwhile investigating Tracktion as well as JUCE. JUCE handles a lot of low-level audio concerns, and Tracktion builds on it to add higher-level concepts such as recording/playback, tracks/edits, sessions, etc.
If you are just starting out building a sampler, soon enough you're going to start adding features that have already been thought about - and implemented, pretty well - in Tracktion. So, my advice is, before you get too entrenched in the design details of your own sampler classes, check out what the folks have done with Tracktion already. You might be surprised at what you get out of this approach …
I was using Tracktion for a while, but now I’m actually getting rid of it! I know what you’re talking about, but Tracktion wasn’t right for my particular project. The sampler is one place where it would be easier to use Tracktion, but it’s not worth it for me overall.
The short version is that Tracktion is designed for making DAWs, and I'm not really making a DAW, but rather an audio programming environment. Lots of things that Tracktion assumes, like tracks and clips, don't exist in my app. I could kind of make them fit, but it always felt like a square peg in a round hole. As I got better at coding, I found myself rolling back most of the Tracktion code I'd written, until at one point it just seemed better to get rid of the whole thing. Tracktion is great software, but it doesn't fit my project.
Since nobody has responded to the original question, is it safe to infer that making a one-node-per-sample sample player inside an AudioProcessorGraph is not an obviously bad idea?
That depends on your existing design and what you are aiming at. Is it just a few samples (one, two or three, up to an octave perhaps) that are triggered programmatically in some way, or do you aim at a fully fledged sampled piano?
A piano with 88 keys where each key is sampled in four or five velocity layers means a total of over 400 nodes/processors. All of these would then need to be connected to the MIDI input (if the instrument is to be played manually) or to some other triggering mechanism. The nodes would then be called one after another by the AudioProcessorGraph's player to determine which note(s) should play. That would probably amount to some non-trivial CPU usage, and it would be incurred whether or not any note is playing. Granted, you may not need a sample for each note; it might be sufficient to have a sample for every third semitone or so, which is then repitched, and that would reduce the number of nodes. But then you would have to distribute the same sample to three or four processors in some way.
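As a quick sanity check of those numbers, here is the back-of-envelope arithmetic (the layer counts are the ones assumed above; the repitch formula is the standard equal-temperament playback-rate ratio):

```cpp
#include <cassert>
#include <cmath>

// Illustrative node counts for the one-node-per-sample design.
constexpr int keys = 88;
constexpr int velocityLayers = 5;

// Fully sampled: one sample (and one node) per key per layer.
constexpr int fullNodes = keys * velocityLayers;            // 440

// Sample every third semitone and repitch the notes in between:
constexpr int sampledKeys = (keys + 2) / 3;                 // 30
constexpr int reducedNodes = sampledKeys * velocityLayers;  // 150

// Repitching by n semitones means changing the playback rate by 2^(n/12).
inline double repitchRatio (int semitones)
{
    return std::pow (2.0, semitones / 12.0);
}
```

So even the reduced scheme leaves you with well over a hundred nodes for a full keyboard, which is the scaling concern in a nutshell.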
And you have to think about how to change your instrument/sample set. Reloading a few hundred nodes followed by a graph rebuild might take some time and might even be audible. And say you want to change the pitch of the instrument or some other parameter: then you need to communicate that out to the perhaps hundreds of nodes.
On the other hand, if you just want your sample player to output an "Oumph" sample on every beat, then a node per sample might be the way to go.
Some additional points you might have to consider:
How many samples are you going to use? How will you switch between samples or sample sets/instruments? Does the switch have to be seamless/immediate? Do you want to be able to morph between different samples…
I think there's a sampler example somewhere in the JUCE framework, maybe even a sampler class. Why don't you start with that and see if it suits?
Thanks oxxyyd, this is really helpful as always. I can see what you mean that the system I'm proposing might not scale so well. But the way you spelled it out makes me think that this might not be such a big problem in my case. I'm thinking more of having a sparse set of samples, rather than the 400-node piano that you described. If I do find the need for something larger, then I think it makes sense to create or find a sampler plugin and link that in as a single node.
I'm following the JUCE tutorial on sample playback (though I haven't got very far with it yet). If anyone can share any other example work, it would be much appreciated.