Hello @Xenakios or anyone else,
I’m trying to build sound synthesis the “right” way, from the ground up, but I’m running into some conceptual problems.
I tried to create my own class that inherits from AudioSource and instantiates AudioDeviceManager (with default settings) and AudioSourcePlayer objects, because to my understanding these three objects are what is needed in order to produce sound.
But it didn’t work. I ran into problems because AudioSource is inherited as protected somewhere deep in the class hierarchy, so I was trying to do things that aren’t permitted, plus problems with virtual and override, etc.
So I thought I would take a step back and just try to recreate my own AudioAppComponent-like functionality inside the MainComponent class.
I assumed that just inheriting from AudioSource and overriding the AudioSource functions from within MainComponent would do the trick:
void MainComponent::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) override
Maybe I needed to make my own setAudioChannels function that initialises the AudioDeviceManager, etc.:

void MainComponent::setAudioChannels (int numInputChannels, int numOutputChannels)
{
    String audioError = deviceManager.initialiseWithDefaultDevices (numInputChannels, numOutputChannels);
    jassert (audioError.isEmpty());
}
I created AudioDeviceManager and AudioSourcePlayer objects for good measure.
But now I just get those annoying override errors.
I still don’t really have a conceptual understanding of how the various audio-related classes fit together in order to feed some sound to my speakers. For now, I would just like to output some white noise.
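To be clear, the noise algorithm itself is the part I do understand. In plain standard C++ (no JUCE, just to illustrate what I mean by white noise, with a function name of my own invention) it would be something like:

```cpp
#include <random>
#include <vector>

// Hypothetical stand-in for the body of getNextAudioBlock: fill a buffer
// with white noise. Each sample is an independent uniform random value,
// scaled down so the output is not painfully loud.
void fillWithWhiteNoise (std::vector<float>& buffer, std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist (-0.25f, 0.25f);
    for (auto& sample : buffer)
        sample = dist (rng);
}
```

What I’m missing is the machinery around this: who calls it, and how often.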
The Building a White Noise Generator tutorial goes into the algorithm and where to put it (i.e. getNextAudioBlock), but not how to put together, from scratch, an application that is going to ask for getNextAudioBlock.
Can anyone outline, step by step, how to put together what is necessary in order to do this?
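To make the question concrete, here is roughly the shape I imagine the answer has, pieced together from the docs. Everything other than the JUCE class names (NoiseSource, the member names, the scaling constants) is my own guess, and this is exactly the part I can’t get to build and run:

```cpp
// A rough sketch using only AudioSource / AudioDeviceManager /
// AudioSourcePlayer, with no AudioAppComponent involved.
class NoiseSource : public juce::AudioSource
{
public:
    void prepareToPlay (int /*samplesPerBlockExpected*/, double /*sampleRate*/) override {}
    void releaseResources() override {}

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        for (int ch = 0; ch < bufferToFill.buffer->getNumChannels(); ++ch)
        {
            auto* out = bufferToFill.buffer->getWritePointer (ch, bufferToFill.startSample);
            for (int i = 0; i < bufferToFill.numSamples; ++i)
                out[i] = random.nextFloat() * 0.25f - 0.125f;   // quiet white noise
        }
    }

private:
    juce::Random random;
};

// Members with a suitable lifetime (e.g. in MainComponent):
//   juce::AudioDeviceManager deviceManager;
//   juce::AudioSourcePlayer  sourcePlayer;
//   NoiseSource              noise;
//
// Startup, in the order I believe is intended:
//   deviceManager.initialiseWithDefaultDevices (0, 2);  // 0 inputs, 2 outputs
//   sourcePlayer.setSource (&noise);
//   deviceManager.addAudioCallback (&sourcePlayer);
//
// Shutdown, the reverse:
//   deviceManager.removeAudioCallback (&sourcePlayer);
//   sourcePlayer.setSource (nullptr);
```

Is this the right mental model, or am I misunderstanding how these three classes are meant to be connected?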