We now live in an era of 64-core processors, yet individual core speed is not increasing significantly. This means that if we want to really push the limits of what's available, we need to use multithreading.
I am working on a synth where the complexity/quality of the rendering is scalable to an extent, and if I want to push it up, it would be nice to be able to switch it so that the voices all render on different cores. The parent synthesiser itself could take yet another core, or share one with the voices. So a 6-voice synth might use 6-7 cores.
I am wondering how I would do this in the context of an MPE synthesiser. Even more complex than that, the next question would be: is it possible to synchronize the processing of these cores sample-by-sample, so that I can feed the outputs from one voice back into the others for crosstalk effects?
Let's say my rendering right now is done in PluginProcessor.cpp as:
void AudioPlugInAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages) {
    const ScopedLock renderLock (lock);
    ScopedNoDenormals noDenormals;

    buffer.clear();
    mMpeSynth.renderNextBlockCustom (buffer, midiMessages, 0, buffer.getNumSamples());
}
Then in my MPE Synthesiser I have the following:
void renderNextBlockCustom (AudioBuffer<float>& outputAudio, const MidiBuffer& inputMidi, int startSample, int numSamples) {
    MPESynthesiser::renderNextBlock (outputAudio, inputMidi, startSample, numSamples);

    // custom block-based processing
    // ...
}
void renderNextSubBlock (AudioBuffer<float>& buffer, int startSample, int numSamples) override {
    // forces sample-by-sample processing inside renderNextBlock
    renderOneSample (buffer, startSample, numSamples);
}
void renderOneSample (AudioBuffer<float>& buffer, int startSample, int numSamples) {
    const juce::ScopedLock sl (voicesLock);

    for (auto i = 0; i != numSamples; ++i) {
        const auto sampleIndex = startSample + i;

        for (auto* voice : voices) {
            if (voice->isActive()) {
                auto* voiceInherited = static_cast<MPESynthesiserVoiceInherited*> (voice);

                // sample-by-sample retrieval of outputs from the voices
                // put output from the other voices back into each voice
                voiceInherited->renderNextBlock (buffer, sampleIndex, 1);
            }
        }
    }
}
I believe that works to get the sample-by-sample output of each voice (or each voice's internal values) into the synth and put them back into each other voice so they're shared. (I tested it and had it working with that approach in principle.)
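To show what I mean, the exchange inside that per-sample loop looks roughly like this. getLastOutputSample() and addCrosstalkInput() are methods I added to my own voice class, not part of the JUCE MPESynthesiserVoice API, and this is a sketch of the idea rather than my exact code:

// first pass: collect the most recent output sample from every active voice
float crosstalkSum = 0.0f;

for (auto* voice : voices)
    if (voice->isActive())
        crosstalkSum += static_cast<MPESynthesiserVoiceInherited*> (voice)->getLastOutputSample();

// second pass: feed each voice the sum of the *other* voices, then render one sample
for (auto* voice : voices) {
    if (voice->isActive()) {
        auto* v = static_cast<MPESynthesiserVoiceInherited*> (voice);
        v->addCrosstalkInput (crosstalkSum - v->getLastOutputSample());
        v->renderNextBlock (buffer, sampleIndex, 1);
    }
}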
With that general architecture, how would I go about specifying that a voice goes to a given core? Where do I create the threads, and how do I tell each voice to get its own?
I see some ideas for how it might work in this thread, but that's a bit over my head. I understand from it that I would need to create as many threads as I want and start them in prepareToPlay(), like this:
const int numThreads = 4;
OwnedArray<TimeSliceThread> threads;

for (int i = 0; i < numThreads; ++i) {
    // TimeSliceThread's constructor requires a thread name
    threads.add (new TimeSliceThread ("Voice thread " + String (i)))->startThread();
}
But the other things discussed in that thread seem very specific to what that person was asking about, and I'm not sure how to generalize a solution for my synth. I've never used the Thread classes in JUCE or written anything where I control the threading, what goes to which thread, or how many threads there are.
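The closest I can get to picturing a generalization is dispatching the voices per block from a juce::ThreadPool, vaguely like the sketch below. renderVoiceIntoBuffer() and voiceBuffers are just names I made up, the lambda overload of addJob() is something I believe exists but haven't used, and the busy-wait at the end is obviously not what a real version would do:

// members of my MPESynthesiser subclass (sketch only)
juce::ThreadPool voicePool { 4 };                        // created once, not per block
std::vector<juce::AudioBuffer<float>> voiceBuffers;      // one scratch buffer per voice

void renderVoicesInParallel (int startSample, int numSamples) {
    std::atomic<int> jobsRemaining { voices.size() };

    for (int v = 0; v < voices.size(); ++v) {
        voicePool.addJob ([this, v, startSample, numSamples, &jobsRemaining] {
            // each voice writes into its own buffer, so the worker threads never share memory
            renderVoiceIntoBuffer (*voices[v], voiceBuffers[(size_t) v], startSample, numSamples);
            --jobsRemaining;
        });
    }

    // crude wait until every voice job has finished before mixing on the audio thread
    while (jobsRemaining.load() > 0)
        juce::Thread::yield();

    // ...then sum voiceBuffers into the output buffer here
}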
Are there any basic points or example code you could provide that might help me understand how to do this?
Even if I can't synchronize the cores for the sample-to-sample voice feedback, I'd still be happy just to be able to force each voice onto a different thread as a starting point.
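For instance, even one persistent worker thread per voice, woken at the start of each block, would be fine as a first step. My guess at the shape of it is below (VoiceRenderThread is a class I'm imagining, not anything from JUCE), but I don't know if this is a sane way to hand blocks to the threads:

// one dedicated thread per voice, woken once per block (sketch / guess only)
struct VoiceRenderThread : public juce::Thread {
    explicit VoiceRenderThread (MPESynthesiserVoiceInherited& v)
        : juce::Thread ("Voice render thread"), voice (v) {}

    void run() override {
        while (! threadShouldExit()) {
            startWork.wait (-1);               // sleep until the audio thread signals a new block
            if (threadShouldExit())
                return;                        // (shutdown would call signalThreadShouldExit() and then startWork.signal())

            voice.renderNextBlock (localBuffer, 0, numSamplesToRender);
            workDone.signal();                 // tell the audio thread this voice is done
        }
    }

    MPESynthesiserVoiceInherited& voice;
    juce::AudioBuffer<float> localBuffer;      // per-voice scratch buffer
    int numSamplesToRender = 0;
    juce::WaitableEvent startWork, workDone;
};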
Thanks for any guidance.