Blocksize, multiple buffers and some brainstorming

Ok, so I'm working on my latest project, Wusik 4000, which uses modules for envelopes, OSCs, effects and such. The thing is, say I have multiple modules: each module has its own buffer to hand to the next module, and if we have a poly-voice module, it's buffer * voices (which can be 1 to 128 voices). So that's a heck of a lot of buffers. The block size could change from one call to another, so what I will do is check whether the current size is larger and resize the buffers then, but not when it's smaller, otherwise it would eat too much CPU when we have to resize, say, 20, 40 or even 80 modules. :shock: If anyone has a better idea, I'm open to suggestions, of course, and please, do suggest. :wink:
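For what it's worth, the grow-only idea can be sketched like this (all names here are made up for illustration): keep the allocated capacity separate from the in-use size, so a smaller block never triggers a reallocation and only a larger one does.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: a grow-only voice buffer. The storage only ever
// grows, so a smaller blockSize never causes a reallocation.
struct VoiceBuffer
{
    std::vector<float> data;   // allocated storage (never shrinks)
    std::size_t numInUse = 0;  // samples valid for the current block

    void prepare (std::size_t blockSize)
    {
        if (blockSize > data.size())
            data.resize (blockSize);  // reallocate only when growing
        numInUse = blockSize;         // shrinking just lowers the in-use count
    }
};
```

JUCE's own AudioSampleBuffer::setSize() takes an avoidReallocating flag that gives similar behaviour.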

Another thing is how I will handle variables that I want to share. How can I give a module (which is a JUCE VST plugin) a pointer to a variable? So far I'm using setStateInformation to send a class holding the info I want shared, so I don't have to use "expensive" getParameter/setParameter calls. Now, I'm pretty sure this is an idiotic way to do it, but to be honest I don't know how else I could handle it, so please don't make fun; I'm very exhausted and may be missing something simple. :oops: Since the modules will be open source, I want the code to look elegant. :mrgreen: So, please, any ideas on how I could handle this instead?
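One pattern that avoids abusing setStateInformation (every name below is hypothetical, not JUCE or module API): have the host hand each interested module a pointer to a shared-state struct once, through an extra interface, so the module reads the shared values directly with no per-access call overhead.

```cpp
// Hypothetical sketch of host-to-module shared state.
struct SharedEngineState
{
    double sampleRate = 44100.0;
    int    numVoices  = 8;
    float  masterTune = 0.0f;
};

// Modules that want direct access implement this extra interface.
class SharedStateClient
{
public:
    virtual ~SharedStateClient() = default;
    virtual void setSharedState (const SharedEngineState* s) { shared = s; }

protected:
    const SharedEngineState* shared = nullptr; // owned by the host, outlives the module
};

class MyModule : public SharedStateClient
{
public:
    int voicesToAllocate() const { return shared != nullptr ? shared->numVoices : 1; }
};
```

The host keeps ownership of the struct; the module only ever reads through the pointer, so there is nothing to serialise through the state-chunk mechanism.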

For voices I will just use a simple method: the AMP envelopes will determine voice allocation, as they will be the MASTER of the whole thing. So an AMP envelope will receive the basic notes and output notes laid out like this: Note.delta * (voicenumber * originalbuffer), so the output buffer will be originalbuffer * voices. It's silly, but it was the best I could do using the JUCE VST plugin format. This is then passed to another module that knows the input buffer carries the AMP envelope information in this layout, and that the notes are set this way.

But another problem arises: poly-voice modulations. For mono modulations I could just use setParameter, but it's still "expensive". So I was hoping to use a more direct route, handing each module a pointer to some internal variables instead. Still, each modulation needs its own buffer data. Oh well…

Anyway, just brainstorming out loud in the hope that someone will give me some hints… (pretty please?)

Best Regards, WilliamK

My advice: stop coding, get some rest and think about the whole picture.

Don’t try to overthink the whole thing by pre-optimising: let each voice of each module be a single instance of a VST.
Do not interleave stuff.

If you have 10 modules with a polyphony of 8, then you will have 80 instances.
You can reuse buffers if you process your voices one by one, so you’ll only need as many buffers as you have modules.

For modulation, just use VST parameters.
You can have a single set of parameters for each module to store its actual state (the model) and allow easy serialisation.

Want CPU efficiency? Code each module with very low overhead. Simple as that.

Thanks, but I still want to be able to optimise things, say, use SSE2 to process voices in parallel, as we are talking about 80 modules each called for, say, 8 voices, which is still 80 * 8. Even a simple patch, say 2 sampler OSCs + AMP envelope + filter = 4 modules, becomes 32 * 4 with 32 voices. Even worse, a piano sound using 128 voices. :shock: And most presets will use multiple layers… so it's a total mess; things can go pretty crazy fast. :cry: So I'm still brainstorming on how to handle this…

I don't care if the modules are not pure VSTs; I'm just using the JUCE VST format so it's cross-platform and I don't have to write my own nasty module-handling code. :oops:

Anyway, let's keep talking, and I will rest a bit more… I didn't code for a few hours so I could talk with my Dad… :mrgreen:

Best Regards, WilliamK

You’re doing it wrong.

Use SSE inside each module to process a single voice. See vDSP on OSX and IPP on Windows.

If you want to speed things up, use multiple threads across multiple cores to handle the voices in parallel.
You can check out Intel Threading Building Blocks, for example.

http://threadingbuildingblocks.org/
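The core "fire N threads and wait for all of them" pattern can be sketched with plain std::thread (tbb::parallel_for layers chunking and work-stealing on top of this idea; processVoice here is a stand-in for real voice DSP, and a real engine would reuse a thread pool rather than spawn per block):

```cpp
#include <atomic>
#include <thread>
#include <vector>

std::atomic<int> voicesDone { 0 };

void processVoice (int /*voiceIndex*/) { ++voicesDone; } // stand-in for real voice DSP

void processVoicesInParallel (int numVoices)
{
    std::vector<std::thread> workers;
    for (int v = 0; v < numVoices; ++v)
        workers.emplace_back (processVoice, v);  // fan out: one thread per voice

    for (auto& t : workers)
        t.join();                                // wait for every voice to finish
}
```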


Ok, let me think this through again. Say I have 60 modules, and each module processes 1 sample at a time, since I can't just run the whole sample buffer through each module when they interact with each other. So I wonder how many calls per module per second I would need; wouldn't this just kill the CPU? Now, if I could run more samples per call it would speed things up, as I wouldn't be making so many calls per module. BUT some modules wouldn't work correctly, as they need to interact with each other, especially envelopes and modulation stuff. But maybe I'm not getting the picture right… my brain hurts… :mrgreen:

I did take a look at TBB, but I still can't picture how handing all the module calls to TBB and hoping things work out would behave in real life. Not to mention that I couldn't figure out how to do even a simple task with TBB. I guess I'm just too used to JUCE's easy life… :oops:

I will keep reading up on TBB and trying to figure out what could work. In the end I could still process the AMP envelopes one buffer at a time (or some other way), get the voice-allocation data that I need, and then run everything else in multi-sample buffers, which would be nicer. But still, there's the overhead of calling getParameter, setParameter and processBlock over and over, times the number of voices, instead of a single call per module…

In any event, thanks for the ideas and hints. 8)

Best Regards, WilliamK

I think the question is: do you really need one sample per block even when blocks interact with each other? (Audio-rate modulation for everything?) Most of the time, they don’t.
An amplitude envelope and an oscillator can be processed by block.

What you are trying to achieve reminds me a lot of SynthEdit; I don’t think they process 1-sample blocks or interleave voice processing.

my 2 cents.

Indeed, it's just that I'm partially dyslexic and it takes me some time to figure things out. I'm mostly a genius for complicated stuff but a total dumbass for simple stuff, go figure, so be patient; I'm sure I will figure something out eventually. :oops:

But I still wonder about TBB: if I could just figure out how to fire up, say, 8 threads and wait for them all to finish, that would save me some time.

And yes, I don't really need one sample per module. There won't be any feedback between modules. Only the AMP envelopes and voice allocation are still a mystery, but I have some ideas…

Ok, more info. What I'm trying to do is not exactly the same as SynthEdit, but still, you are right, it's similar in some ways.

But I just realised one thing: when processing the envelopes, I don't need to go one sample at a time. I can go as far as the next note delta. PRESTO, that solves a lot of problems. Geeesh, I told you, it's a simple thing that I just didn't see… :oops: :roll:

So the AMP envelope, which is the one that needs to process MIDI input and output values + notes + voice information, can go much faster this way. All the other modules can indeed process one voice per instance. And since those are GUI-less modules, there won't be much memory used. I may still share some variables using some nasty stuff I created; it works, and it uses fewer resources.

So, modules that require input can take it via setParameter, but I still wonder how fast that will be: if you have, say, 20 parameters and the last one is dispatched from a switch() statement, wouldn't that eat CPU for nothing? I'm still brainstorming better ways to handle that… like shared variables for the MOD input on each parameter, something like that; it's hard to explain yet as I don't know how they will work… (I will let my brain figure it out while I'm sleeping, hehe)
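One way around the switch() dispatch cost (hypothetical names again, not the actual module API): keep the parameters in a flat array, so setParameter is a single indexed store, and a modulation source can even hold a pointer straight into the array and skip the call entirely.

```cpp
#include <array>

// Hypothetical sketch: O(1) parameter dispatch instead of a switch().
class ModuleParams
{
public:
    static constexpr int numParams = 20;

    void setParameter (int index, float value)  { values[index] = value; }
    float getParameter (int index) const        { return values[index]; }

    // A modulation source can write through this pointer every block,
    // bypassing the setParameter call entirely.
    float* getDirectPointer (int index)         { return &values[index]; }

private:
    std::array<float, numParams> values {};
};
```

The trade-off is that writes through the raw pointer bypass any smoothing or notification a real setParameter would do, so it fits the "fast direct path" idea in the post above rather than replacing the VST parameter mechanism.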

I will post more info as I find more stuff to improve.

Wk

Another option, in terms of modulation, is to keep setParameter for the rarely used, mostly non-real-time stuff, and let the things that are used constantly be set in another, faster and more direct way. E.g. filter envelope freq/rez inputs, OSC pitch/fine/amp inputs.

Still, if I figure out a better way to handle that direct stuff, and if it's nice, I could always use it for the whole thing.

Edit: I guess I was programming for microcontrollers for too long, and now that I'm back on computers I'm always worried about resources… :lol:

Wk

Quick question: since I'm splitting a single processBlock into multiple smaller ones, is there a way to append one buffer to another, or do I just have to code that myself? I ask because I have a temporary buffer that does a small amount of processing, and then I need to keep appending it to the main buffer until it's complete…

Wk

You can create an AudioSampleBuffer from a pre-existing block of memory (including the one inside the original AudioSampleBuffer), so you can just create your smaller ones on the stack to process smaller blocks. Take a look at the juce::Synthesiser class for some examples of how to split MIDI up into smaller chunks.

[code]
void processBlock (AudioSampleBuffer& audio, MidiBuffer& midi)
{
    // if audio is 512 samples and you want to process in blocks of 32
    int numSamples = audio.getNumSamples();
    int startSample = 0;

    while (numSamples > 0)
    {
        const int numThisTime = jmin (32, numSamples);
        AudioSampleBuffer section (audio.getArrayOfChannels(), audio.getNumChannels(), startSample, numThisTime);
        // and do a similar thing for the MIDI

        child->processBlock (section, midi);

        numSamples -= numThisTime;
        startSample += numThisTime;
    }
}
[/code]


Thanks for the code, that sure helps me out! :mrgreen:

I also see that I can resize a buffer without reallocating the data, which is nice. So if the buffer gets smaller it doesn't reallocate; if it gets bigger, it does.

Best Regards, WilliamK

Ok, it works, and it's actually nice, thanks again for all the ideas! 8)

Here's the current code. It's ugly, so any advice on making it better is welcome… please, I'm open to suggestions…

[code]
	ARRAY_Iterator(wLayers[layer]->envelopeArray)
	{
		modEnvelopes* env = wLayers[layer]->envelopeArray[index];
		if (env->isAmpEnvelope)
		{
			// Check buffer sizes first
			PROCESS_ENV_VOICES(layer)
			{
				env->envVoices[voice]->audioBuffer.setSize(1, buffer.getNumSamples(), false, false, true);
				env->envVoices[voice]->midiMessages.clear();
				env->envVoices[voice]->tempMidiMessages.clear();
			}

			if (layerMidiBuffer.isEmpty()) // If there are no midi events, we just process the whole thing directly
			{
				PROCESS_ENV_VOICES(layer) 
				{ 
					env->envVoices[voice]->instance->processBlock(env->envVoices[voice]->audioBuffer, layerMidiBuffer);
					env->envVoices[voice]->envLastOutput = env->envVoices[voice]->audioBuffer.getSampleData(0)[env->envVoices[voice]->audioBuffer.getNumSamples() - 1];
				}
			}
			else // Otherwise, we need to process each midi event until a new note on/off appears (we ignore the rest, for now, but copy pitchwheel for the OSCs)
			{
				// Seek the next MIDI Event
				int prevDelta = 0;
				int offsetPosition = 0;
				bool foundEvent = false;
				MidiBuffer::Iterator midiIterator(layerMidiBuffer);
				while (1)
				{
					foundEvent = midiIterator.getNextEvent(midiMessage, deltaPos);
					if (deltaPos > buffer.getNumSamples()) break;

					if (!foundEvent || (midiMessage.isNoteOnOrOff() && deltaPos != prevDelta))
					{
						if (!foundEvent) deltaPos = buffer.getNumSamples();
						if ((deltaPos - prevDelta) > 0)
						{
							PROCESS_ENV_VOICES(layer)
							{
								env->envVoices[voice]->tempAudioBuffer.setSize(1, deltaPos - prevDelta, false, false, true);
								env->envVoices[voice]->instance->processBlock(env->envVoices[voice]->tempAudioBuffer, env->envVoices[voice]->tempMidiMessages);
								env->envVoices[voice]->audioBuffer.copyFrom(0, offsetPosition, env->envVoices[voice]->tempAudioBuffer.getSampleData(0), deltaPos - prevDelta);
								env->envVoices[voice]->midiMessages.addEvents(env->envVoices[voice]->tempMidiMessages, 0, 0, 0);
								env->envVoices[voice]->tempMidiMessages.clear();
								env->envVoices[voice]->envLastOutput = env->envVoices[voice]->tempAudioBuffer.getSampleData(0)[env->envVoices[voice]->tempAudioBuffer.getNumSamples() - 1];
							}
						}
						offsetPosition += deltaPos - prevDelta;
						prevDelta = deltaPos;
						if (!foundEvent) break;
					}

					if (midiMessage.isPitchWheel()) { PROCESS_ENV_VOICES(layer) { env->envVoices[voice]->midiMessages.addEvent(midiMessage, deltaPos); } }
					else if (midiMessage.isNoteOnOrOff())
					{
						if (midiKeyNotes.isMono)
						{
						}
						else // notMono
						{
							if (midiMessage.isNoteOff()) env->envVoices[midiKeyNotes.noteOff(midiMessage.getNoteNumber())]->tempMidiMessages.addEvent(midiMessage, deltaPos);
							else env->envVoices[midiKeyNotes.noteOn(midiMessage.getNoteNumber(), env)]->tempMidiMessages.addEvent(midiMessage, deltaPos);
						}
					}
				} // while end
			}
		}
	}

[/code]

D.R.Y!! How often did you copy-and-paste “env->envVoices[voice]”!? Any repeated phrases or expressions in your code are a bad sign.

(A good thing to hold in your mind when writing code is “how much could this text be compressed by something like gzip?” If the answer is “a lot” then you need to D.R.Y it)

And use fewer macros!
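For instance, the repeated lookup can be hoisted into one local pointer (Voice here is a stand-in struct, just to show the DRY idea):

```cpp
#include <vector>

// Stand-in for the per-voice struct in the thread; only the DRY idea matters.
struct Voice { float lastOutput = 0.0f; };

void processVoices (std::vector<Voice*>& envVoices)
{
    for (auto* v : envVoices)   // 'v' replaces every "envVoices[voice]->" repetition
        v->lastOutput = 1.0f;   // every access now goes through the cached pointer
}
```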

Thanks Jules! I will rethink the whole thing and try to learn a bit more. So far I have learned a lot from your latest updates to JUCE, thanks for that. :mrgreen: I loved OwnedArray, for instance; it saves me a lot of time. I just need to learn more about organising code the way you do, as it's nicer that way. Even after all these years, there's still new stuff to learn… :oops:

Best Regards, WilliamK

Ok, I hope I understood what you were talking about… here's the new code. I like it: it looks better, it's easier to understand, and things that should be inside other classes are inside them now. :wink:

Edit: I updated the code a bit more.

[code]
	ARRAY_Iterator(wLayers[layer]->envelopeArray)
	{
		modEnvelopes* env = wLayers[layer]->envelopeArray[index];
		if (env->isAmpEnvelope)
		{
			env->checkBufferSizeAndClear(buffer.getNumSamples());

			if (layerMidiBuffer.isEmpty()) env->directBlockProcess();
			else
			{
				int prevDelta = 0;
				bool foundEvent = false;
				MidiBuffer::Iterator midiIterator(layerMidiBuffer);
				while (1)
				{
					foundEvent = midiIterator.getNextEvent(midiMessage, deltaPos);
					if (deltaPos > buffer.getNumSamples()) break;
					if (!foundEvent || (midiMessage.isNoteOnOrOff() && deltaPos != prevDelta))
					{
						if (!foundEvent) deltaPos = buffer.getNumSamples();
						env->partialBlockProcess(deltaPos-prevDelta);
						prevDelta = deltaPos;
						if (!foundEvent) break;
					}

					if (midiMessage.isPitchWheel()) env->addEventToAllVoices(midiMessage, deltaPos);
					else if (midiMessage.isNoteOnOrOff())
					{
						if (env->isMonophonic())
						{
						}
						else // notMono
						{
							env->addNoteEventToVoice(midiMessage, deltaPos);
						}
					}
				} // while end
			}
		}
	}

[/code]

Guys, does anyone know how to enable (if such a feature even exists) vertical branch lines in MSVC 2013, so I can see the matching { and } braces more easily? Thanks.