Access an AudioProcessorGraph::Node::Ptr

Hello,
I am referring to this Tutorial: Cascading plugin effects. I just picked the guts of the code to create a Graph with an OscillatorNode connected to a FilterNode, connected to a gainNode. I am able to pass it through the outputNode and listen to the filtered oscillator.
But now I want to change some characteristics of the nodes, or rather of the underlying processors. For example, I want to change the frequency of the oscillator or the volume of my gain node with a GUI element.
When I add a new node to the graph like this:
oscillatorNode = mainProcessor->addNode (std::make_unique<OscillatorProcessor>());
I am not able to set the frequency of that oscillator afterwards. My OscillatorProcessor Class looks like this, where the ProcessorBase inherits from AudioProcessor:

//==============================================================================
class OscillatorProcessor  : public ProcessorBase
{
public:
    OscillatorProcessor()
    {
        oscillator.setFrequency (220.0f);
        oscillator.initialise ([] (float x) { return x / MathConstants<float>::pi; });
    }

    void prepareToPlay (double sampleRate, int samplesPerBlock) override
    {
        dsp::ProcessSpec spec { sampleRate, static_cast<uint32> (samplesPerBlock) };
        oscillator.prepare (spec);
    }

    void processBlock (AudioSampleBuffer& buffer, MidiBuffer&) override
    {
        dsp::AudioBlock<float> block (buffer);
        dsp::ProcessContextReplacing<float> context (block);
        oscillator.process (context);
    }

    void reset() override
    {
        oscillator.reset();
    }

    void setFrequency (float frequency)
    {
        oscillator.setFrequency (frequency);
    }

    const String getName() const override { return "Oscillator"; }

private:
    dsp::Oscillator<float> oscillator;
};

It's almost the same as in the tutorial, except for the setFrequency function that I would use to change the frequency from “outside”. But I can't find a way to access this function. oscillatorNode->getProcessor()->setFrequency(newFreq) doesn't work and I can't understand why…

What am I missing? Or do I have a completely wrong approach? I appreciate any help!
Greets

With AudioProcessor subclasses you should preferably use parameters, and your current AudioProcessor doesn't appear to have any. Once you've added a frequency parameter, for example, you would read its value in your processBlock method and call your oscillator's setFrequency with it before calling the oscillator's process method. Outside code would then communicate the desired frequency change into the processor by setting the value of that parameter.
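A rough sketch of that processBlock pattern (assuming a frequency member of type AudioParameterFloat* registered with addParameter, which your current class doesn't have yet):

void processBlock (AudioSampleBuffer& buffer, MidiBuffer&) override
{
    oscillator.setFrequency (frequency->get()); // get() returns the actual value in Hz, not the normalized one

    dsp::AudioBlock<float> block (buffer);
    dsp::ProcessContextReplacing<float> context (block);
    oscillator.process (context);
}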

Thanks for the quick answer!
I created a Parameter for the frequency inside the oscillatorProcessor. But I am wondering how to access and change this value from outside that class. I can’t find any method in AudioProcessor to set the value of the parameter…

You can call getParameters() on the inner processor to get an array of pointers to the inner parameters.

Great! I call this method in my plugin's prepareToPlay function:
oscillatorNode->getProcessor()->getParameters().getFirst()->setValue(400.0f);
It seems to work, but the sound is not as expected: a very high beep occurs…

My modified oscillatorProcessorClass looks like this:

class OscillatorProcessor  : public ProcessorBase
{
public:
    OscillatorProcessor()
    {
        addParameter(frequency = new AudioParameterFloat (
                                                          "frequency",
                                                          "frequency",
                                                          50,
                                                          2000,
                                                          300
                                                          ));
        
        oscillator.setFrequency (*frequency);
        //oscillator.initialise ([] (float x) { return std::sin (x); });
        oscillator.initialise ([] (float x) { return x / MathConstants<float>::pi; });
    }
    
    void prepareToPlay (double sampleRate, int samplesPerBlock) override
    {
        dsp::ProcessSpec spec { sampleRate, static_cast<uint32> (samplesPerBlock) };
        oscillator.prepare (spec);
    }
    
    void processBlock (AudioSampleBuffer& buffer, MidiBuffer&) override
    {
        oscillator.setFrequency (*frequency);

        dsp::AudioBlock<float> block (buffer);
        dsp::ProcessContextReplacing<float> context (block);
        oscillator.process (context);
    }
    
    void reset() override
    {
        oscillator.reset();
    }
    
    const String getName() const override { return "Oscillator"; }
    
private:
    dsp::Oscillator<float> oscillator;
    
    AudioParameterFloat* frequency;
};

Now I printed the parameter value with:
cout << oscillatorNode->getProcessor()->getParameters().getFirst()->getValue() << endl;
and got this result: 0.128205
for a frequency of 300 Hz.

It seems that the whole range that I defined in addParameter is mapped to a different one. Why is this happening, and how is this other range defined?

Unfortunately the base class parameter API uses the normalized 0.0-1.0 range for the parameters. You could maybe dynamic_cast to an AudioParameterFloat in the outside code. But what is the reason the outside code would need to access the parameters in the first place?
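To put numbers on it: getValue() reports the parameter's position within its range, normalized to 0.0-1.0, so 300 Hz in a 50-2000 range comes out as (300 - 50) / (2000 - 50) ≈ 0.128205, which matches what you printed. Conversely, setValue() expects a normalized value, so passing 400.0f is far outside 0.0-1.0 and maps to a frequency far above 2000 Hz, which most likely explains the very high beep. With a dynamic_cast you can work in plain Hertz instead; a sketch, assuming the frequency parameter is the first (and only) one in the array:

if (auto* freqParam = dynamic_cast<AudioParameterFloat*> (oscillatorNode->getProcessor()->getParameters().getFirst()))
    *freqParam = 400.0f; // AudioParameterFloat's operator= takes the actual value in Hz and normalizes it internally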

I thought that I have to access the parameters in the first place (-> inside my OscillatorProcessor) to change them. Or is there a “second” place? As I said, I just want to set different frequencies from my GUI.
So as I understand it, I either have to scale, for example, a slider value to 0.0-1.0 and then pass it with oscillatorNode->....setValue(newRangedValue); or alternatively use AudioParameterFloat in the outside code.

The “normal” way to do GUIs for AudioProcessors is to inherit from AudioProcessorEditor, make your AudioProcessor return true from hasEditor and return the editor instance from createEditor. The custom editor should accept and store a reference or pointer to the AudioProcessor subclass, so it can access the methods or members of that subclass.
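A minimal sketch of that wiring, with placeholder names (OscillatorEditor is just an assumed name, and the actual GUI controls are omitted):

class OscillatorEditor  : public AudioProcessorEditor
{
public:
    explicit OscillatorEditor (OscillatorProcessor& p)
        : AudioProcessorEditor (p), processor (p)
    {
        setSize (200, 100);
        // Sliders or other controls added here would call processor.setFrequency() or set its parameter.
    }

private:
    OscillatorProcessor& processor; // lets the editor reach the processor's own methods and members
};

// Inside OscillatorProcessor:
bool hasEditor() const override               { return true; }
AudioProcessorEditor* createEditor() override { return new OscillatorEditor (*this); }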

Of course that design might not be what you want to do, in which case you will have to dynamic_cast the parameters or the AudioProcessor you get from the audio graph node getProcessor method.
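For the processor cast, something along these lines should work (a sketch only; setFrequency is the method from your first post, and newFreq is whatever your GUI produces):

if (auto* osc = dynamic_cast<OscillatorProcessor*> (oscillatorNode->getProcessor()))
    osc->setFrequency (newFreq); // calls your own method on the concrete type, bypassing the parameter system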

Yes, I'll do it with the dynamic_cast.
The “normal” way you described is also interesting and helps my understanding a lot! I think this is analogous to the way you can change parameters in a PluginProcessor from the PluginEditor.
Thanks a lot for your help and useful tips!

It's exactly the same: the plugin projects set up AudioProcessor and AudioProcessorEditor subclasses. (They're the files named PluginProcessor and PluginEditor.)