Convolution Engine Blend 2 impulses

Hi all, I’m still playing with the convolution engine and a few cabinet impulses. The goal is to have 2 impulses with a blend option that I can eventually manage with a slider, to set for example 70% of one and 30% of the other, or whatever value. Problem is, I have no idea where to start :stuck_out_tongue: I didn’t see any option in the convolution variable that contains the loaded impulse for setting that, so I thought I’d probably have to go and modify juce_Convolution.cpp, but before navigating blind I wanted to ask if someone can help me.
thx in advance :slight_smile:

I guess you have to mix the two impulses together with the amounts you want before processing. Does that make sense? I’m pretty sure modifying the engine is the wrong path.

The logic I was thinking of was to declare 2 convolutions, like
dsp::Convolution convolution1;
dsp::Convolution convolution2;

load them with 2 different impulses and then manage the wet and dry (which I saw is coded in the engine) for each of them. But like I said, I don’t know if that is really the way to go, which is why I wanted to ask here first.

It’s not a good idea to have two convolutions. You will also need twice the CPU power when you do that. Mixing them before the convolution should be fine.

So in a single convolution variable I can load 2 impulses at the same time, and eventually manage the quantities? I can’t see anything in the properties that can help.

I’m not a convolution expert, but I think an impulse is something like a float array from a WAV file that you load into the convolution engine. Can’t you just mix the impulses before processing them?

I can’t see anything that can help before launching the process, which is why I thought I should search in juce_Convolution.cpp, but like you said, it may not be the way to go.

If you want quick realtime control over the blending, then you will need to use 2 convolution engine instances and you will need to take the CPU hit from that.
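With the two-engine approach, the blending itself is just a sample-by-sample crossfade of the two convolved blocks. A minimal sketch of that step, assuming plain float vectors in place of JUCE audio buffers (the function name `blendBlocks` and the variable names are hypothetical; in a real `processBlock` you would run `convolution1` and `convolution2` on two copies of the input first):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Crossfade two already-convolved blocks:
// b = 1.0f -> all of engine 1, b = 0.0f -> all of engine 2.
std::vector<float> blendBlocks (const std::vector<float>& y1,
                                const std::vector<float>& y2,
                                float b)
{
    assert (y1.size() == y2.size());
    std::vector<float> out (y1.size());

    for (std::size_t n = 0; n < y1.size(); ++n)
        out[n] = b * y1[n] + (1.0f - b) * y2[n];

    return out;
}
```

Since this only touches the engines’ outputs, the slider can move every block without the engines having to re-preprocess anything.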

You could also keep the original IRs in memory in AudioBuffers and mix them into a 3rd AudioBuffer, which is the one you would pass into the convolution engine. It does however take some time for the convolution engine to preprocess the new IR, so it’s going to be tricky if you want it to respond to GUI slider moves…(Convolution is generally really bad for stuff that needs interactive control…)

I think you should look up a bit of the mathematical theory behind a convolution!

Convolution is an LTI (linear, time-invariant) operation that can in some ways be treated like a multiplication. Let * be the convolution operator, ・ the multiplication operator, i the input signal, IR1 the first impulse response, IR2 the second impulse response, b the balance factor (where b = 1 is only IR1, b = 0.5 an equal mix of IR1 and IR2, and b = 0 only IR2) and y the output signal. Then I think you want to do something like this:

y = b ・(i * IR1) + (1 - b) ・(i * IR2)

Now it’s mathematically equivalent to do it like this:
y = (i * (b ・IR1)) + (i * ((1 - b) ・IR2)) = i * ((b ・IR1) + ((1 - b) ・IR2))

So you see, you can strip it down to only one convolution and scale the “static” impulse response instead of the dynamic convolution result, while getting the same overall result.

Now, as the impulse responses in your case probably originate from files, and you might currently load them directly into the convolution engine through the loadImpulseResponse function, you instead have to load your two impulse responses into two AudioBuffers first, apply the scaling to each buffer and then add both buffers. Take care of the fact that one impulse response might be shorter than the other; in that case just leave the overhang of the longer buffer untouched and add the earlier samples.
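That mixing step could look something like the following. This is a sketch with plain float vectors standing in for AudioBuffers (the function name `mixImpulseResponses` is hypothetical); in JUCE you would do the equivalent with `AudioBuffer<float>` and then hand the mixed buffer to the convolution engine:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mix two impulse responses into one, scaled by the balance b
// (b = 1 -> only ir1, b = 0 -> only ir2). The result is as long as
// the longer IR; the overhang of the longer IR is kept, only scaled.
std::vector<float> mixImpulseResponses (const std::vector<float>& ir1,
                                        const std::vector<float>& ir2,
                                        float b)
{
    std::vector<float> mixed (std::max (ir1.size(), ir2.size()), 0.0f);

    for (std::size_t k = 0; k < ir1.size(); ++k)
        mixed[k] += b * ir1[k];

    for (std::size_t k = 0; k < ir2.size(); ++k)
        mixed[k] += (1.0f - b) * ir2[k];

    return mixed;
}
```

Keep in mind what was said above: every time b changes you have to rebuild this buffer and reload it into the engine, which is why this approach is awkward for continuous slider movement.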

All that will get a bit more tricky if the two impulse responses are sampled at different sample rates, as you’d have to resample one of them first.


Uhmmm… more complex than expected. I will try to play with the 2-engines solution; if I figure out a way I will update. Thx for the explanation

Thx for your detailed explanation, even if it sounds a bit complicated for my level :stuck_out_tongue: I was hoping there was natively some function that, for example, given 2 loaded impulse responses could handle the wet quantity for each, something like:
convolution2.setWet(0.3);
Hopefully something like that will be implemented in the future :stuck_out_tongue: Anyway, coming back to your explanation, I’ll try not to give up. I have 2 questions. First: if I don’t load the impulse directly with loadImpulseResponse, how can I get an audio buffer with the impulse in it, without going to fetch it in juce_Convolution.cpp? Second: to scale an impulse, what should be changed in the audio buffer? The number of samples, or what? Don’t laugh if my questions sound too noobish, I’m trying my best to catch up :stuck_out_tongue: