Getting real I/O buffer size when using Logic Pro


When loading my AU plugin in Logic Pro, the buffer size I get in the processBlock function (by calling buffer.getNumSamples()) is always 1024, no matter what value I set for “I/O Buffer Size” in Logic Pro’s preferences. Is this normal? If so, how can I get the value I set in Logic in my C++ code?


You need to work with whatever buffer size the host hands your plugin. It’s kind of curious that the size you set in Logic is not reflected in what you get in your plugin… it might be a bug or a feature in Logic itself. Nevertheless, you have to use the host-provided buffer size in your plugin while processing.


I am doing so — I mean I am using the buffer size that Logic provides. But it’s interesting to me why that happens, and I’m wondering if there is a way to access the exact value that is set in Logic.


Unfortunately it’s a quirk of Logic and some other DAWs that you may get differing buffer sizes between processBlock() calls. The value received in prepareToPlay() (which may be the buffer size you’re looking for - what are you getting there?) is just a “it’ll be equal to or less than this” value with no other guarantees.


Well, the original poster seems to be having the opposite problem: always getting 1024 samples regardless of what buffer size is set in Logic’s preferences. (That might actually make a certain sense, in order to keep plugins working consistently, but 1024 samples feels kind of large to be a universally used buffer size for processing plugins…)


Yeah, that’s kinda what I meant - the buffer size may be something totally unexpected (but should be within whatever prepareToPlay() reports).

@ehsanen1, what are you getting in the prepareToPlay() function? I’m pretty sure it should be called at least when you change the buffer size in Logic’s preferences…


I also get 1024 in prepareToPlay().


Modern hosts may change the block size during playback for different kinds of reasons:

  • Optimisation: audio which does not need to be calculated in real time is calculated in bigger blocks to save CPU load. Steinberg calls this ASIO Guard.
  • Automation precision.


…which worries me, because that way automation data is also affected. This feeds the need for sub-block automation data. Is there a technique for that? I seem to remember this was announced for VST3? (Sorry if that’s too OT.)


VST3 and, IIRC, AudioUnits and AAX have that, but JUCE doesn’t support it for any of those plugin formats.


Thank you @Xenakios, I thought so.
@jules and @fabian, is that in backlog? Will that come at any point?



This actually hadn’t been on our roadmap but it is something we should definitely add.

One way to do this might be to add an AudioProcessor base class method where you can set the maximum block size. Something like AudioProcessor::setMaximumBlockSize (int blockSize).

JUCE will then ensure that all your parameters are updated every time your processBlock function is called.
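For illustration, a plain C++ sketch (no JUCE types; processWithMaxBlockSize is an invented name, not part of the proposal) of what such a cap would effectively do under the hood: split any larger host buffer so the inner process callback, and therefore parameter updates, runs at least once every maxBlockSize samples.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Hypothetical wrapper: guarantees the inner callback never sees more than
// maxBlockSize samples, so parameters can be refreshed at least that often.
void processWithMaxBlockSize (float* samples, int numSamples, int maxBlockSize,
                              const std::function<void (float*, int)>& innerProcess)
{
    for (int start = 0; start < numSamples; start += maxBlockSize)
    {
        const int chunk = std::min (maxBlockSize, numSamples - start);
        // ...a real wrapper would refresh parameter values here, once per chunk...
        innerProcess (samples + start, chunk);
    }
}
```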

Of course, proper sub-block automation data would be better, but that would require quite a big overhaul to the parameter system.

What do people think?


@fabian I’m not sure why this would be an improvement over the current situation.

Some formats may provide ramp data, others maybe exact sample positions. I think the solution shouldn’t add any extra dependencies on future API decisions.

Just an idea: the wrapper could split up the block at the exact sample positions (if provided), and inside processBlock you could request the parameter value for the start and end positions:

parameter.getValueForPosition(0) // start
parameter.getValueForPosition(blockSize-1) //end

PS: for plugin formats without ramped parameters, position 0 would return the parameter value of the last call, and the last position the new parameter value.
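A sketch of how the suggested getValueForPosition could behave for a format without ramp data, per the PS above (everything here is hypothetical, not a JUCE API): linearly interpolate between the previous call’s value at position 0 and the newly received value at position blockSize-1.

```cpp
// Hypothetical per-parameter ramp state: the previous block's final value
// and the newly received target value.
struct RampedParameter
{
    float previousValue = 0.0f;
    float targetValue   = 0.0f;

    // Linear interpolation across the block, as described in the PS.
    float getValueForPosition (int position, int blockSize) const
    {
        if (blockSize <= 1)
            return targetValue;

        const float t = (float) position / (float) (blockSize - 1);
        return previousValue + t * (targetValue - previousValue);
    }
};
```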


I would handle it the exact same way as midi-events are handled. Each automation event should have a timestamp (in samples, relative to the current process buffer) and then some helper functions to get the “next” one etc.
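A minimal sketch of that idea in plain C++ (the types and function names are invented for illustration, not JUCE classes): automation events carry a sample offset relative to the current buffer, and the processor walks them like MIDI events, splitting its DSP at each timestamp.

```cpp
#include <vector>

// Hypothetical automation event: sampleOffset is in samples, relative to
// the start of the current process buffer, just like MIDI timestamps.
struct AutomationEvent
{
    int   sampleOffset;
    int   paramIndex;
    float value;
};

// Walk the events in timestamp order, processing the audio between them.
// renderSpan() stands in for the actual DSP; applyParam() for setParameter().
template <typename RenderFn, typename ApplyFn>
void processWithEvents (const std::vector<AutomationEvent>& events, int numSamples,
                        RenderFn renderSpan, ApplyFn applyParam)
{
    int pos = 0;

    for (const auto& e : events)              // assumed sorted by sampleOffset
    {
        if (e.sampleOffset > pos)
            renderSpan (pos, e.sampleOffset - pos);

        applyParam (e.paramIndex, e.value);
        pos = e.sampleOffset;
    }

    if (pos < numSamples)
        renderSpan (pos, numSamples - pos);   // tail after the last event
}
```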


+1 — and ensure that there is always at least one event at pos (0) and pos (blockSize-1), maybe?

Another approach could be inspired by the visual FxPlug API: there you can access the actual keyframes as an array and even get their tangents. Additionally they provide a getValueAt for arbitrary points.
That way the plugin developer can decide how accurate he/she needs the automation data to be.


This method has the same drawbacks as MIDI automation: you always need the future events (which may be outside the block) to calculate the right amount of continuous shift.
This should be considered!


This would solve that issue…


Incidentally, I was just doing some host automation code right now. I found no other way of doing this properly than to stick my knife into the flesh of the AudioProcessorGraph. Here’s my version of perform()

template <typename FloatType>
void perform (AudioBuffer<FloatType>& sharedBufferChans, const OwnedArray<MidiBuffer>& sharedMidiBuffers, const int numSamples)
{
    HeapBlock<FloatType*>& channels = audioChannels.get<FloatType>();

    for (int i = totalChans; --i >= 0;)
        channels[i] = sharedBufferChans.getWritePointer (audioChannelsToUse.getUnchecked (i), 0);

    AudioBuffer<FloatType> buffer (channels, totalChans, numSamples);

    if (processor->isSuspended())
    {
        buffer.clear();
    }
    else if (! processor->haveAutomation())
    {
        const ScopedLock lock (processor->getCallbackLock());
        callProcess (buffer, *sharedMidiBuffers.getUnchecked (midiBufferToUse));
    }
    else
    {
        //---- my code --------------------
        int numSamplesToAuto = 0;
        bool haveAutomation = false;
        int samplePos = AudioProcessorGraph::samplePos;   // >= 0 only if host is playing

        if (samplePos >= 0)
        {
            numSamplesToAuto = processor->automation.getNextAutomation (samplePos) - samplePos;

            if (numSamplesToAuto < numSamples)
            {
                haveAutomation = true;

                // play any events too close to the block start right away
                if (numSamplesToAuto < 16)
                    numSamplesToAuto = processor->automation.playAutomation() - samplePos;

                jassert (numSamplesToAuto >= 16);
            }
        }

        int startSample = 0;        // start position in AudioBuffer
        int numSamplesLeft = numSamples;
        int numSamplesNow = numSamples;

        do
        {
            if (haveAutomation)
            {
                numSamplesNow = jmin (numSamplesLeft, numSamplesToAuto);
                jassert (numSamplesNow >= 16);
                numSamplesLeft -= numSamplesNow;

                // leave any automation events near the end of the buffer to the next call
                if (numSamplesLeft < 16)
                {
                    numSamplesNow += numSamplesLeft;
                    numSamplesLeft = 0;
                }

                buffer.setDataToReferTo (channels, totalChans, startSample, numSamplesNow);
            }

            {
                const ScopedLock lock (processor->getCallbackLock());
                callProcess (buffer, *sharedMidiBuffers.getUnchecked (midiBufferToUse));
            }

            if (! haveAutomation || numSamplesLeft <= 0)
                break;

            startSample += numSamplesNow;
            int sp = processor->automation.playAutomation() - samplePos;
            numSamplesToAuto = jmax (16, sp - startSample);
        } while (true);
        //---- end my code --------------------
    }
}

I also added a class, automation, to the AudioProcessor, which handles the automation stuff.

A few notes.

processor->automation.getNextAutomation (samplePos) returns the time (in samples) of the next automation event. If it is not within the next numSamples, skip any automation and call processor->processBlock with the full buffer.

If, however, we do have automation within the next numSamples, call processor->processBlock with only the number of samples up to the next automation event, and after that play the automation via processor->automation.playAutomation(), which eventually calls processor->setParameter (event.param, event.value). Decrease the number of samples left and repeat until all of the AudioBuffer content is done.

To avoid silly processor->processBlock() calls with buffers of just a few samples, playAutomation() coalesces nearby automation events and plays all events within, say, 16 samples of the current one, and does likewise at the start and end of the sample buffer.

I might increase this number (16) even more, since the automation events are timed with the JUCE high-resolution timer, which isn’t more accurate than a millisecond anyway.
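The coalescing described above can be sketched like this in plain C++ (names invented, not the actual implementation): any event closer than minSpan samples to the previous split point, or to the end of the buffer, is played immediately rather than starting a new sub-block.

```cpp
#include <vector>

// Hypothetical helper: turn raw event times (in samples, sorted ascending)
// into sub-block split points, merging any event within `minSpan` samples of
// the previous split or the buffer end, so processBlock() is never called
// with a tiny buffer.
std::vector<int> coalesceSplitPoints (const std::vector<int>& eventTimes,
                                      int numSamples, int minSpan)
{
    std::vector<int> splits;
    int lastSplit = 0;

    for (int t : eventTimes)
    {
        if (t - lastSplit >= minSpan && numSamples - t >= minSpan)
        {
            splits.push_back (t);   // far enough from both neighbours: real split
            lastSplit = t;
        }
        // otherwise the event is coalesced into the adjacent sub-block
    }

    return splits;
}
```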

This is a work in progress…