Audio Processing of Effects

Displaying is supposed to be separate from processing audio. But I don’t understand how a parameter can control a buffer, such as an ADSR, without a direct command.

My understanding from what I just read in the docs is that everything gets sent to a block from different classes, unless there is a direct command to a single block; if classes are sent to different blocks, a copy block is required. That makes no sense to me, because there’s no real control: if this class, that class, and another class all get jammed into a single block with no direct relationship, how do you know what you’re controlling?

I have been working on creating an ADSR envelope and have been trying to connect everything into a single processBlock, which is apparently wrong; it does work, but it is overcomplicated. There is a flaw with the other technique, though: if the effects are not run through a single block, what determines the order in which the signal is processed through them?

Such a major flaw, and I don’t understand how it’s overlooked. You can’t just throw everything in the closet and expect it to be organized.

I have looked at many other plugins since this realization and, from a producing standpoint, I will now only use external effects (internal only when necessary).

By “user” do you mean the person operating the software, the “end user”? If you mean that, they know the order of the processing blocks by reading your manual or documentation, assuming you haven’t provided a way to change the processing order of the blocks in the plugin.

I am unsure how this is possible when the blocks are processed separately. My perspective is that within the processBlock an effect must be declared before the next effect takes place; the method would then read top to bottom.

Plugins should have control over the sequence of effects.

It’s up to you to implement that in some manner. It’s of course possible to do, nothing in JUCE/AudioProcessor prevents doing that.

Agreed. I am looking to develop the plugin further than just the envelope; if I set the ADSR envelope to process in its own class, there is no clear perspective on what will process first. I believe everything should be programmed into a single process block, etc.

You can do that all in one processBlock call.
From a DRY perspective it makes sense to separate parts into separate entities that each do a certain part of the process, especially if you want to re-use those parts. You would simply call these pieces from processBlock(). It also makes it easy to reorder the chain to experiment, like doing the ADSR before a reverb, doing it after, etc.

But they don’t have to be full AudioProcessor subclasses; any class with a function taking the AudioBuffer would be fine.
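As a minimal sketch of what that can look like (the class names and members here are made up for illustration; juce::ADSR, AudioBuffer::applyGain and the processBlock signature are the only JUCE pieces assumed):

struct EnvelopeSection
{
    juce::ADSR adsr; // keeps the envelope state between blocks

    void process (juce::AudioBuffer<float>& buffer)
    {
        // Apply the envelope across the whole buffer, advancing its state
        adsr.applyEnvelopeToBuffer (buffer, 0, buffer.getNumSamples());
    }
};

struct GainSection
{
    float gain = 0.5f;

    void process (juce::AudioBuffer<float>& buffer)
    {
        buffer.applyGain (gain); // simple stand-in for any other effect
    }
};

// In the AudioProcessor, the order of these calls *is* the signal chain:
void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    envelope.process (buffer);  // ADSR first...
    gainStage.process (buffer); // ...then gain; swap the two lines to reorder
}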


I don’t understand. If there is AudioProcessor::processBlock, SynthesiserVoice::renderNextBlock, etc., which occurs first?

Yeah if you look at a lot of plugins that use custom synths or effects IN the main processing block, it’s usually a hierarchical composite that is traversed top down to the branches.

Usually something like a tick() method advances the state of each audio leaf.

You end up having one call to synth.tick(), but then your whole processing logic is dictated by the node graph you created to process in the single block.

I thought using a timer would be the proper solution.

No Timers here; tick() is abstract, and the method doesn’t mean anything other than “advance your state by the number of samples given”.

It is the tool that moves your graph forward in time, one buffer at a time. The processing order is exactly the way you set up your traversal of the graph.
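As a rough sketch (none of this is a JUCE API; Node, Chain and tick() are just illustrative names), such a composite could look like:

struct Node
{
    virtual ~Node() = default;
    // Advance this node's state by numSamples, processing the given buffer
    virtual void tick (juce::AudioBuffer<float>& buffer, int numSamples) = 0;
};

struct Chain : public Node
{
    std::vector<std::unique_ptr<Node>> children;

    void tick (juce::AudioBuffer<float>& buffer, int numSamples) override
    {
        // Top-down traversal: the order of the children is the signal flow
        for (auto& child : children)
            child->tick (buffer, numSamples);
    }
};

// processBlock() then reduces to a single call on the root of the graph:
// root->tick (buffer, buffer.getNumSamples());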

The only thing that is known to the outside world is the AudioProcessor class, so only processBlock() is a given. The rest depends on how you set it up.
You have probably added a Synthesiser::renderNextBlock(), which calls all the active voices to deliver their audio.
You can add a global effect after the Synthesiser::renderNextBlock() by calling a processor that alters the buffer. Or you can add an effect inside each voice in SynthesiserVoice::renderNextBlock() (but be careful: that buffer already contains all the other voices).
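A rough sketch of that layout, assuming the processor owns a juce::Synthesiser member called synth and, purely as an example global effect, a juce::dsp::Reverb called reverb (both member names are made up here):

void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi)
{
    buffer.clear();

    // 1) The synth renders all of its active voices into the buffer...
    synth.renderNextBlock (buffer, midi, 0, buffer.getNumSamples());

    // 2) ...then a global effect processes the summed result
    //    (reverb.prepare() would have been called in prepareToPlay()).
    juce::dsp::AudioBlock<float> block (buffer);
    reverb.process (juce::dsp::ProcessContextReplacing<float> (block));
}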

A timer is not suitable for anything realtime. Only use it for GUI work.

The simplest way to implement a chain of DSP modules would be to have an array/vector that holds the processing order as integers and then loop over it, dispatching to functions or object methods. Obviously that isn’t as flexible as a full audio graph.

So, something like :

std::array<int, 3> m_processingOrder { 2, 1, 0 }; // as a member variable

void myPluginProcessor::processBlock (juce::AudioBuffer<float>& buf, juce::MidiBuffer&)
{
    // Dispatch the stages in whatever order m_processingOrder dictates
    for (int stage : m_processingOrder)
    {
        if (stage == 0)      processReverb (buf);
        else if (stage == 1) processEQ (buf);
        else if (stage == 2) processEnvelope (buf);
    }
}