I am trying to understand how to create an “Envelope Effect” and modify it. I managed to create one with the maximilian library, but I had to use the synthesiser to trigger the envelope in the startNote and stopNote methods. However, I want the envelope to be triggered both when the audio level is higher than a certain threshold (-18 dB, for example) and when there is a button press event.
Is there a method that lets me read the dB level (or volume) of the audio, so that I can trigger the envelope in an if condition when the level is higher than the threshold value?
I have one more question. The audio plugin app and the synthesiser examples have different processing sections. I can see that they are connected: one is renderNextBlock in the synthesiser voice, and the other is processBlock in the AudioProcessor. renderNextBlock is where I create the oscillator signals and apply the envelope effect, and processBlock is where I normally create the channels and read and write the output data. My question is: how do I combine these two? I want my plugin to receive audio and apply the envelope effect when it is triggered by the threshold value, and when there is no sound file on my DAW’s track, I want it to apply the envelope effect when a MIDI button is pressed.
I hope I have expressed my thoughts clearly.
the AudioBuffer class: JUCE: AudioBuffer< Type > Class Template Reference
Thank you. I tried to trigger the gate with that method. It works, but I can only trigger it by pressing the button, so that startNote or stopNote runs; it didn’t work when I tried it in the DAW with a sound file. My problem is that I am not sure which method or section receives all the sound and processes it. Within renderNextBlock I was able to create a sine wave and apply the effects, but it does not affect the sound in the DAW. I tried to combine processBlock and renderNextBlock. Both get an AudioBuffer object, but while in processBlock I was able to change the volume of the sound in the DAW, in renderNextBlock it neither changes it nor receives it.
void renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples) override
{
    for (int sample = 0; sample < numSamples; ++sample)
    {
        double theWave = osc1.sinewave (frequency); // currently unused; setOscType() below supplies the oscillator output
        double theSound = adsr.process() * level * juce::Decibels::decibelsToGain (theGain) * setOscType();

        for (int channel = 0; channel < outputBuffer.getNumChannels(); ++channel)
            outputBuffer.addSample (channel, startSample + sample, theSound); // note: startSample + sample, otherwise every sample lands on the same index
    }
}
In previous tutorials it was working with getWritePointer(), but right now I am stuck. I need a tutorial but can’t find one. I know it sounds like a complaint, but it is a bit complicated and there isn’t a complete tutorial that covers everything in detail. What do you suggest for beginners like me?