[Conceptual] How to inter-sample modulate variables of multiple Audio Sources from a single source? [SOLVED]

In my project I am pulling various AudioSources together into a MixerAudioSource, then just calling the MixerAudioSource’s getNextAudioBlock() from the getNextAudioBlock() of my “output bus” AudioSource (the one that has the callback attached).

They all have their own setXParameter() functions, whose values are normalised between 0 and 1, and the end user is going to control all of those parameters from a single slider.

Because they are all driven by the same slider value, it makes more sense to apply a single parameter-smoothing algorithm to that value and send the resulting per-sample interpolated values to each AudioSource.

I was thinking something like this:

void setSmoothedParameter(float value)
{
	audioSource1.setParameter(value);
	audioSource2.setParameter(value);
	audioSource3.setParameter(value);
	audioSource4.setParameter(value);
	audioSource5.setParameter(value);
}

void getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill) override
{
    // this is generating the smoothed values, but needs to be triggered once for every sample
    setSmoothedParameter(parameterSmoothing(slider, sampleRate));

    audioSourceMixer.getNextAudioBlock(bufferToFill);
}

But of course, it took me a bit of head scratching to work out why it wasn’t working right… the parameterSmoothing algorithm is only getting called and applied once per block.

The solution in my head, therefore, is to generate an array of smoothed parameter values (one for every sample in the block) and have the AudioSources, in their getNextAudioBlock functions, read from that array rather than from their local parameter variable.

Does that sound like the “right” approach? Is there a better approach?

How about just doing the simplest thing and worrying about optimizing later, if it’s really needed? 🙂 Also, there might be some kind of design problem anyway if multiple audio processing objects are always just going to use the same parameter value. It kind of sounds like the processing should maybe be handled by a single object. But of course it’s very difficult to say without knowing more details.

Right, honestly I think the simplest thing is the strategy I described, because otherwise I’d have to create an instance of the parameter smoothing in every source, initialise it, process it, etc., versus just reading from an array of values.

Yeah in this case they’re taking the same parameter value but doing different things with it.

But yeah, the reason I’m worrying about this now is that I want to start thinking about problems in the right way (the obvious ones at least), rather than just breezing along and then having to restructure later anyway. I like to use PureData for that happy-go-lucky, the-world-is-wonderful, quick production of ideas.

To me, that doesn’t sound simple at all. I would only go for that kind of scheme if performance was absolutely critical and I had tons of audio objects that really need to be using the same control data at the per-sample level. Also, if you are worried about having to restructure later, can you now be sure the objects won’t need different control data in the future anyway?

You are right, but…

I also wanted to do this because I know how to implement parameter smoothing, but I don’t really know how to implement this, so it’s like a cool badge of honour once I manage it.

Here’s how I did it (for anyone in the future)

So, in the getNextAudioBlock of my base class, I declare and fill the array, because that function knows the block size and the array only needs to exist for as long as the block does:

        // one smoothed value per sample in the block
        float SmoothedValuesArray[bufferToFill.numSamples];
        for (int i = 0; i < bufferToFill.numSamples; ++i)
        {
            SmoothedValuesArray[i] = parameterSmoothing.process(slider, sampleRate);
        }
        // send the address of the array to the various sources
        setArray(SmoothedValuesArray);

So my setArray function calls a function in every AudioSource, passing the address of the array:

void setArray(float* values)
{
    audioSource1.setArray(values);
    // ...and likewise for the other AudioSources
}

And then, in the private section of AudioSource1, declare a pointer to the array of floats: float* mixerArray;

And set it in its setArray function:

void setArray(float* array)
{
    mixerArray = array;
}

Now I should be able to use it in getNextAudioBlock: DBG(*(mixerArray + i)); Let’s just display it for now.
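Here’s a minimal sketch of how one of the sources could then consume the values in its getNextAudioBlock (treating the value as a per-sample gain is just a placeholder for whatever the source really does with its parameter):

    void getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill) override
    {
        // ... generate or fetch this source's audio into bufferToFill first ...

        if (mixerArray == nullptr)
            return; // setArray() hasn't been called for this block yet

        for (int ch = 0; ch < bufferToFill.buffer->getNumChannels(); ++ch)
        {
            float* data = bufferToFill.buffer->getWritePointer(ch, bufferToFill.startSample);

            for (int i = 0; i < bufferToFill.numSamples; ++i)
                data[i] *= mixerArray[i]; // one smoothed parameter value per sample
        }
    }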

The code looks more or less like I would have done it! 🙂 Except I would initialize float* mixerArray to a null pointer, just to get a more consistent failure in case I forgot to set it to a valid pointer first. (Raw pointers don’t get initialized to a null pointer automatically; you need to set them to that explicitly.)
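In the header that would just be:

    float* mixerArray = nullptr; // a forgotten setArray() call now fails consistently instead of reading garbage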

Edit: note that something like float myArray[bufferToFill.numSamples]; is not standard C++ (it’s a variable-length array). Your particular compiler may compile it OK, but it’s not universally available.
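If you want to stay in standard C++, one option (just a sketch, reusing the parameterSmoothing / slider / setArray / audioSourceMixer names from your snippets) is a std::vector member that gets sized up front:

    // #include <vector> at the top of the file

    // member of the "output bus" AudioSource
    std::vector<float> smoothedValuesArray;

    void prepareToPlay(int samplesPerBlockExpected, double newSampleRate) override
    {
        // size it here so the audio thread normally never allocates
        smoothedValuesArray.resize((size_t) samplesPerBlockExpected);
        // ...plus whatever other preparation the sources need
    }

    void getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill) override
    {
        // grow only if a larger block than expected arrives
        if ((int) smoothedValuesArray.size() < bufferToFill.numSamples)
            smoothedValuesArray.resize((size_t) bufferToFill.numSamples);

        for (int i = 0; i < bufferToFill.numSamples; ++i)
            smoothedValuesArray[i] = parameterSmoothing.process(slider, sampleRate);

        setArray(smoothedValuesArray.data());
        audioSourceMixer.getNextAudioBlock(bufferToFill);
    }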