Confused about FloatVectorOperations::multiply compared to regular multiply

Here are three pieces of code which I thought would produce exactly the same results, but they do not.

Note that in all three examples, between reading and writing samples, I am doing some “wavefolding”: processing signals that exceed -1 or 1 so that the overflow wraps back into the valid range.
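
To give an idea of that step, here is a simplified stand-in for it (an illustration only; foldSignal is a made-up name and the real code is more involved):

static float foldSignal (float x)
{
	// Reflective wavefolder: anything beyond [-1, 1] is folded back
	// into range, so loud signals wrap instead of clipping.
	while (x > 1.0f || x < -1.0f)
	{
		if (x > 1.0f)
			x = 2.0f - x;   // reflect around +1
		if (x < -1.0f)
			x = -2.0f - x;  // reflect around -1
	}
	return x;
}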

  1. The first one is what I need:
while (voiceSamples < numSamples)
{
	// Read sample
	signal = TGVoiceBuffer.getSample (0, voiceSamples) * multiplier;

	// Wrap signal here
	
	// Write sample
	TGVoiceBuffer.setSample (0, voiceSamples, signal);
	
	++voiceSamples;
}

  2. The second one uses AudioBuffer::applyGain:
TGVoiceBuffer.applyGain (multiplier);
while (voiceSamples < numSamples)
{
	// Read sample
	signal = TGVoiceBuffer.getSample (0, voiceSamples);

	// Wrap signal here
	
	// Write sample
	TGVoiceBuffer.setSample (0, voiceSamples, signal);
	
	++voiceSamples;
}

  3. The third one uses FloatVectorOperations::multiply:
FloatVectorOperations::multiply (TGVoiceBuffer.getWritePointer (0, 0), multiplier, numSamples);
while (voiceSamples < numSamples)
{
	// Read sample
	signal = TGVoiceBuffer.getSample (0, voiceSamples);

	// Wrap signal here
	
	// Write sample
	TGVoiceBuffer.setSample (0, voiceSamples, signal);
	
	++voiceSamples;
}

The samples produced by the gain in method 2 and by the multiplication in method 3 are different from those of method 1, which is the one that does what I need.

How, if it is possible at all, do I use FloatVectorOperations so that it does exactly what the first example does?

Are you sure the values are actually different? Have you debugged the code and inspected the values?
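
A quick isolated check would settle it. Something like this sketch (assuming JUCE is available; the buffer size and values are made up) compares a per-sample multiply against FloatVectorOperations::multiply on identical input:

#include <cmath>
#include <juce_audio_basics/juce_audio_basics.h>

// Fill two identical buffers, scale one per-sample and the other
// with FloatVectorOperations, then compare the results.
static bool scalingsMatch()
{
	constexpr int numSamples = 512;
	constexpr float multiplier = 1.8f;

	juce::AudioBuffer<float> a (1, numSamples), b (1, numSamples);

	for (int i = 0; i < numSamples; ++i)
	{
		const float s = std::sin (0.05f * (float) i);
		a.setSample (0, i, s);
		b.setSample (0, i, s);
	}

	// Method 1: plain per-sample multiply
	for (int i = 0; i < numSamples; ++i)
		a.setSample (0, i, a.getSample (0, i) * multiplier);

	// Method 3: vectorised multiply over the whole channel
	juce::FloatVectorOperations::multiply (b.getWritePointer (0), multiplier, numSamples);

	// Compare sample by sample
	for (int i = 0; i < numSamples; ++i)
		if (a.getSample (0, i) != b.getSample (0, i))
			return false;

	return true;
}

Since both paths perform a single float multiply per sample, I would expect the results to match exactly.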

I have not debugged it, but I can easily see it in the synth I am coding, as it has a continuous graph of the waveform (samples), kind of an oscilloscope, and I can hear it.

OK, I need to take the day off from programming! I made a mistake. Please move on, nothing to see here. Sorry for wasting your time.

If you are using the JUCE Synthesiser classes, you should not operate on the voice buffers with methods 2 and 3; only method 1 will work. The reason is that the buffer handed to the voice’s renderNextBlock already contains the audio from the other voices: the Synthesiser class uses a shared audio buffer for all of its voices. A voice should only add/mix its own audio into that buffer and do nothing else with it. With that in mind, your implementation of method 1 also looks wrong: you should not use setSample on the buffer; addSample should be used instead.
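
To illustrate, a voice’s render callback usually ends up shaped roughly like this (a sketch only; MyVoice, localBuffer and multiplier are made-up stand-ins for your own members):

#include <juce_audio_basics/juce_audio_basics.h>

// Sketch of the add-don't-overwrite pattern for a synthesiser voice.
struct MyVoice
{
	juce::AudioBuffer<float> localBuffer;
	float multiplier = 1.0f;

	void renderNextBlock (juce::AudioBuffer<float>& outputBuffer,
	                      int startSample, int numSamples)
	{
		// Work in a buffer that is private to this voice.
		localBuffer.setSize (1, numSamples, false, false, true);
		localBuffer.clear();

		// ... generate this voice's audio into localBuffer here ...

		// Gain, wavefolding etc. are safe here, because only this
		// voice's audio is in localBuffer.
		localBuffer.applyGain (multiplier);

		// Mix into the shared buffer: add, never set/copy, so the
		// other voices' audio is preserved.
		outputBuffer.addFrom (0, startSample, localBuffer, 0, 0, numSamples);
	}
};

The key point is the addFrom at the end: it sums into the shared buffer instead of replacing what the other voices have already written.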

Nah, it was me overlooking a bad mistake in the code. And those buffers are not the ones “coming” from the Processor; they are local to each voice.

OK, nice you got it working.

Two weeks straight of coding from morning to night, time for a break.