And here are some remarks regarding the code and such:
auto myBlock = juce::dsp::AudioBlock<float>(buffer);
auto myContext = juce::dsp::ProcessContextReplacing<float>(myBlock);
I must say I’m a bit puzzled by the object instantiations and function calls: wrapping the buffer in an AudioBlock, wrapping that in a context, and finally handing it to the filter. From earlier reading I’d got the impression that function calls should be kept to a minimum in the processor functions, but I guess this is the way to do it. I’m just curious whether there are other options. Couldn’t all the context stuff go into some initial setup, with only the buffer handed to the filter?
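For what it’s worth, my understanding (hedged, based on the juce::dsp module docs) is that AudioBlock is a lightweight, non-owning view over the buffer’s channel pointers, so constructing it once per block is essentially free. It can’t really be moved into one-time setup, because the AudioBuffer handed to processBlock can have different pointers or sizes each callback. A typical processBlock then looks something like this sketch (myFilter is an assumed member, e.g. a juce::dsp filter prepared in prepareToPlay):

```cpp
// Sketch of the usual juce::dsp pattern; names like MyProcessor and
// myFilter are placeholders, not from the tutorial.
void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                juce::MidiBuffer&)
{
    juce::dsp::AudioBlock<float> block (buffer);           // cheap, non-owning view
    juce::dsp::ProcessContextReplacing<float> ctx (block); // process in place
    myFilter.process (ctx);                                // filter reads and writes the buffer
}
```

So the wrapping is two small stack objects per callback, not allocations, which is why the tutorials don’t treat it as something to hoist out.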
Also, this explanation in the tutorial is a bit strange:
The Audio Graph is coded with a couple of variables that we declare here. We use the auto variable type which lets the computer decide what data type we need. But we have to declare them here in the .cpp file because we can’t use auto in the header.
This is not only a declaration but an actual definition, so it could never go in a pure header file anyway.
I believe it’s worth mentioning that using the (deprecated?) GenericEditor leaves
PluginEditor.cpp/.h unused. They can be removed from the project, so nobody wastes time editing things there and wondering why nothing happens.
This is also an excellent place for your second tutorial - that would be about building a GUI right?
I had to add the juce-qualifier here, like:
myCutoffptr = dynamic_cast<juce::AudioParameterFloat*>(myValueTreeState.getParameter("cutoff"));
Typo here (“dynamic_casy”), it should be
myTypeptr = dynamic_cast<juce::AudioParameterChoice*>
I noticed there’s a discussion regarding the cast above; I’ll read it. My initial reaction on seeing this section was to wonder whether there’s a better way to do this than casting, though. I’m definitely no C++ expert, but I thought casting was generally something to avoid.