LinearSmoothedValue to crossfade between buffers

Hi,

I’m working on my first plugin right now. I’m trying to crossfade between two buffers in a plugin over 10 ms. I’ve seen it mentioned that I should use the LinearSmoothedValue class, but I can’t figure out how to implement it in a plugin. I’ve tried calling .applyGain(), but I don’t think it should be called in every processBlock… so I’m a bit confused.

Really all I’m hoping for is some more reading on it, or an example of this being used as a crossfade.

Thanks, I’m just not sure where else to look.

Without code, details on how exactly you plan to apply the crossfade (linear, logarithmic?), or more context, I can only guess. It’s late, so please someone correct me if I’m wrong:

You say real time, so I bet you want to be precise about that 10 ms, so the best way is to count samples. For instance, if you have a 48 kHz sample rate (48,000 samples each second) and a 128-sample block size, you get 375 process calls each second. That’s approximately 2.67 ms between blocks, so if you want the crossfade to fit within 10 ms you need to do it in 3 processBlock calls (8 ms), or if you can relax it a bit, 4 calls totalling approximately 10.67 ms.
So from there, once you trigger the transform you can set a flag, and while it’s active you apply 1/n of the transform in each processBlock (where n is the number of calls you’ll use to complete it, in this case 4) until you reach 4 × 128 = 512 samples.
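Roughly, that counting could look like this (an untested sketch; fadeActive, blocksDone, numFadeBlocks and oldBuffer are all made-up names, and I’m using a plain fade-out of one buffer as the “transform”):

// members of your processor (placeholder names)
bool fadeActive = false;
int  blocksDone = 0;
const int numFadeBlocks = 4;   // 4 blocks of 128 samples at 48 kHz ~= 10.67 ms

// wherever the transform is triggered
fadeActive = true;
blocksDone = 0;

// inside processBlock
if (fadeActive)
{
    // apply 1/n of the fade per call: gain drops from 1.0 to 0.0 in numFadeBlocks steps
    const float startGain = 1.0f - (float) blocksDone       / (float) numFadeBlocks;
    const float endGain   = 1.0f - (float) (blocksDone + 1) / (float) numFadeBlocks;

    // ramp the outgoing buffer down smoothly over this block
    oldBuffer.applyGainRamp (0, oldBuffer.getNumSamples(), startGain, endGain);

    if (++blocksDone >= numFadeBlocks)
        fadeActive = false;    // 4 * 128 = 512 samples done; stop mixing oldBuffer in from here on
}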

Maybe it can be done with a HighResolutionTimer? Bear in mind that if you use it, it will run on its own thread (not on the main or audio thread).

No, not a good idea. You should almost never attempt to time audio processing with external timers or threads. (Just one thing that will go wrong: when rendering offline in a host, the processing has even less relation to real time than the usual “close to real time” processing does.)

The crossfade the original poster wants can be implemented sample accurately pretty easily. (Whether the easy method is CPU efficient enough, is of course another matter…)

Hmm, I suspected that, even though it seemed like the OP wanted it for visual effect/feedback rather than for processing purposes, à la the old TimersAndEvents demo (which applies colour transforms/effects on click). I may be wrong, so more details would enlighten us about the real use case.

Yeah, it’s not entirely clear what the poster is asking about. What do AudioVisualiserComponent and colors have to do with all this? :wink:

Oh my god that’s embarrassing, I forgot to change the title from my previous question… so sorry to waste your time trying to figure out what I meant!! (I’ve corrected the title now)

So it’s A LOT simpler than changing colors with the SmoothedValue class: it’s simply crossfading between two loaded buffers.

Here’s my pseudocode:

if (crossfadeToB)
{
    start fading out buffer A;
    start fading in buffer B;
}
else
{
    start fading out buffer B;
    start fading in buffer A;
}

I just don’t understand right now how this would be done in the processBlock.

Using logarithmic curves is a plan B, but I’m pretty sure linear will do just fine for now, considering the crossfades are so short. 10 ms does not need to be exact; that’s just the starting point, and I’ll tweak it to whatever works best once the plugin is finished. Somewhere between ~3 ms and 20 ms, likely.

But thanks for the help! I’ll try what you suggested @johngalt91 right now and let you guys know how it went. And I promise I’ll be a better forum citizen and provide more details in the original post in the future :slight_smile:

All fine, we’ve all been new here :smile:

I don’t know why I got the idea that you were talking about colours; it was late, so I guess I confused myself. It depends on what you are working with, but if it’s a plugin you probably have an AudioProcessor, and you can do it in the processBlock function (where you process all the audio/DSP code). It should be something like this (a brief idea, untested):

for (int i = 0; i < numSamplesToProcess; ++i)
{
    outputBuffer[i] = bufferA[i] * fadeCoefficient + bufferB[i] * (1.0f - fadeCoefficient);
}

Here fadeCoefficient must be a float from 0.0f to 1.0f, otherwise you will start clipping, since your outputBuffer values can end up above 1.0f (or below -1.0f). And you must take the sample counting into account: i.e. if you do the transition in 4 processBlock calls, your fadeCoefficient step would be 1/4 = 0.25f. That would give you fadeCoefficient = 1.0f in the first block, then 0.75f, then 0.5f, then 0.25f (or just 0.75f, 0.5f, 0.25f, 0.0f).

Then you add your outputBuffer to the audioBuffer of the processBlock.
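Something like this, roughly (untested; fading, fadeCoefficient and outputBuffer are placeholder names, and outputBuffer is assumed to be a juce::AudioBuffer here):

// members (placeholder names)
bool  fading = false;
float fadeCoefficient = 1.0f;
const float fadeStep = 0.25f;          // 1/4: complete the transition in 4 processBlock calls

// inside processBlock, after filling outputBuffer with the loop above
if (fading)
{
    fadeCoefficient = juce::jmax (0.0f, fadeCoefficient - fadeStep);

    if (fadeCoefficient <= 0.0f)
        fading = false;                // transition finished
}

// add the crossfaded result to the processBlock's audio buffer
for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
    buffer.addFrom (ch, 0, outputBuffer, ch, 0, buffer.getNumSamples());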

Or even easier: create two of these:
https://docs.juce.com/master/classSmoothedValue.html

call reset (sampleRate, rampLengthInSeconds) in your prepareToPlay.
Set one to gain 0, the other to 1, and swap both targets whenever you want to change over.
In each processBlock, call smoothA.applyGain (bufferA, numSamples) and smoothB.applyGain (bufferB, numSamples) on their respective buffers:
https://docs.juce.com/master/classSmoothedValueBase.html#af4a8c2b5a79277406ac6bc2d138efc78

Add both to the output buffer and you’re all set.
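Put together, something like this (a brief, untested sketch; smoothA, smoothB, bufferA, bufferB, buffer and the 10 ms ramp are just placeholder names/values):

// members
juce::SmoothedValue<float> smoothA, smoothB;

// prepareToPlay
smoothA.reset (sampleRate, 0.01);            // 10 ms ramp
smoothB.reset (sampleRate, 0.01);
smoothA.setCurrentAndTargetValue (1.0f);     // start with buffer A audible
smoothB.setCurrentAndTargetValue (0.0f);

// when you want to switch
smoothA.setTargetValue (0.0f);               // fade A out
smoothB.setTargetValue (1.0f);               // fade B in

// processBlock
const int numSamples = buffer.getNumSamples();
smoothA.applyGain (bufferA, numSamples);     // ramps the gain sample by sample, in place
smoothB.applyGain (bufferB, numSamples);

for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
{
    buffer.copyFrom (ch, 0, bufferA, ch, 0, numSamples);
    buffer.addFrom  (ch, 0, bufferB, ch, 0, numSamples);
}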

You guys are just making this too easy for me. I’ll post an update soon!

@johngalt91 the reason you thought I was talking about colors was because that’s what my title mistakenly said, you’re not crazy!

1 Like

After spending quite a lot of time on this, I’m still at a loss. I think I just fundamentally do not understand how the class works, and I can’t find a simple explanation anywhere. My plugin uses lookahead/latency involving a FIFO buffer, so it’s a little more involved than just calling applyGain on every processBlock call. I’ve read the class reference and I’ve looked at the code. Is there perhaps any example code of a crossfade using this class anywhere? Thanks for the help!

Just in case you’re still looking for a solution, I’ve just had to write something similar myself. Hope this helps:

// std::clamp needs <algorithm>; juce::AudioBuffer and juce::SmoothedValue come from
// the juce_audio_basics module (or your usual JuceHeader.h).
#include <algorithm>
#include <juce_audio_basics/juce_audio_basics.h>

class Crossfade
{
public:
    enum ActiveBuffer { leftBuffer, rightBuffer };

    Crossfade() = default;

    /**
        Resets the crossfade, setting the sample rate and ramp length.

        @param sampleRate           The current sample rate.
        @param rampLengthInSeconds  The duration of the ramp in seconds.
    */
    void reset (double sampleRate, double rampLengthInSeconds)
    {
        smoothedGain.reset (sampleRate, rampLengthInSeconds);
    }

    /**
        Sets the active buffer, i.e. which one should be written to the output.

        @param buffer   An enum value indicating which buffer to output.
    */
    void setActiveBuffer (ActiveBuffer buffer)
    {
        if (buffer == leftBuffer)
            setGain (1.0);
        else
            setGain (0.0);
    }

    /**
        Can be used to set a custom gain level to combine the two buffers.

        @param gain     The gain level of the left buffer.
    */
    void setGain (double gain)
    {
        smoothedGain.setTargetValue (std::clamp (gain, 0.0, 1.0));
    }

    /**
        Applies the crossfade.

        Output buffer can be the same buffer as either of the inputs.

        All buffers should have the same number of channels and samples as each
        other, but if not, then the minimum number of channels/samples will be
        used.

        @param leftBuffer   The left input buffer to read from.
        @param rightBuffer  The right input buffer to read from.
        @param outputBuffer The buffer in which to store the result of the crossfade.
    */
    template<typename LeftFloatType, typename RightFloatType, typename OutFloatType>
    void process (const juce::AudioBuffer<LeftFloatType>& leftBuffer,
                  const juce::AudioBuffer<RightFloatType>& rightBuffer,
                  juce::AudioBuffer<OutFloatType>& outputBuffer)
    {
        // find the lowest number of channels available across all buffers
        const auto channels = std::min ({ leftBuffer.getNumChannels(),
                                          rightBuffer.getNumChannels(),
                                          outputBuffer.getNumChannels() });
        // find the lowest number of samples available across all buffers
        const auto samples = std::min ({ leftBuffer.getNumSamples(),
                                         rightBuffer.getNumSamples(),
                                         outputBuffer.getNumSamples() });

        for (int sample = 0; sample < samples; ++sample)
        {
            // get the next gain value in the smoothed ramp towards the target.
            // Advance the ramp once per sample (not once per channel), otherwise
            // the fade runs too fast and drifts out of sync between channels.
            const auto gain = smoothedGain.getNextValue();

            for (int channel = 0; channel < channels; ++channel)
            {
                // obtain the input samples from their respective buffers
                const auto left = leftBuffer.getSample (channel, sample);
                const auto right = rightBuffer.getSample (channel, sample);

                // calculate the output sample as a mix of left and right
                const auto output = left * gain + right * (1.0 - gain);

                // store the output sample value
                outputBuffer.setSample (channel, sample, static_cast<OutFloatType> (output));
            }
        }
    }

private:
    juce::SmoothedValue<double, juce::ValueSmoothingTypes::Linear> smoothedGain;
};

Call reset() in prepareToPlay(), then setActiveBuffer() to switch between the left and right inputs when necessary.
Call process() every block with two input buffers to crossfade between and one to output to.
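In case it helps, wiring it into a processor could look roughly like this (sketch only; bufferA, bufferB and the 10 ms ramp length are placeholders):

// member
Crossfade crossfade;

// prepareToPlay
crossfade.reset (sampleRate, 0.01);                    // 10 ms crossfade

// whenever you want to switch sources
crossfade.setActiveBuffer (Crossfade::rightBuffer);

// processBlock: mix bufferA and bufferB straight into the host buffer
crossfade.process (bufferA, bufferB, buffer);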

3 Likes

Really helpful posts here, thanks a lot. I used a pair of SmoothedValue gain variables to implement a crossfaded bypass.

How would this ‘crossfaded bypass’ technique be implemented in a plugin that has latency? Suppose the input buffer is delayed by x samples and we also call setLatencySamples() with the same x, then surely the ‘bypass buffer’ would need to be delayed as well, is that right?

then surely the ‘bypass buffer’ would need to be delayed as well, is that right?

Correct :slight_smile: if your processing code has a latency of N samples, then processBlockBypassed should basically be a delay line of N samples.
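For what it’s worth, a rough sketch of such a bypass delay (untested; it needs the juce_dsp module, juce::dsp::DelayLine is just one convenient option, and bypassDelay / latencySamples are made-up member names):

// members
juce::dsp::DelayLine<float> bypassDelay { 48000 };     // max delay, sized generously
int latencySamples = 0;                                // the same N you pass to setLatencySamples()

// prepareToPlay
juce::dsp::ProcessSpec spec { sampleRate,
                              (juce::uint32) samplesPerBlock,
                              (juce::uint32) getTotalNumOutputChannels() };
bypassDelay.prepare (spec);
bypassDelay.setDelay ((float) latencySamples);

// processBlockBypassed
void processBlockBypassed (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
    {
        auto* data = buffer.getWritePointer (ch);

        for (int i = 0; i < buffer.getNumSamples(); ++i)
        {
            bypassDelay.pushSample (ch, data[i]);      // feed the dry sample in
            data[i] = bypassDelay.popSample (ch);      // take it out N samples later
        }
    }
}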

1 Like

Waking this…
Could this be done juce::dsp-style on an AudioBlock too?
I don’t see how, but perhaps… :thinking:
Could be useful to avoid clicks during sudden changes in realtime.

That is, without taking it apart and iterating, of course…
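If I’m remembering the AudioBlock API right, there are multiplyBy / replaceWithProductOf overloads that take a SmoothedValue directly, so the two-gain trick from earlier could stay block-based without hand-rolled sample loops. A rough, untested sketch (bufferA, bufferB, buffer and the 10 ms ramp are placeholders; worth double-checking the docs):

// members
juce::SmoothedValue<float> smoothA { 1.0f }, smoothB { 0.0f };

// prepareToPlay
smoothA.reset (sampleRate, 0.01);      // 10 ms ramps
smoothB.reset (sampleRate, 0.01);

// to trigger the crossfade, swap the targets
smoothA.setTargetValue (0.0f);
smoothB.setTargetValue (1.0f);

// processBlock
juce::dsp::AudioBlock<float> blockA (bufferA), blockB (bufferB), out (buffer);

blockA.multiplyBy (smoothA);           // ramped gain, applied block-wise
blockB.multiplyBy (smoothB);
out.replaceWithSumOf (blockA, blockB);

For the bypass/latency case specifically, juce::dsp::DryWetMixer might also be worth a look, since it works on AudioBlocks, smooths the mix change, and can delay the dry path to match the wet one.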