Multiple AudioAppComponents


#1

Hi. I recently started my journey with audio programming and JUCE (which is a great audio and GUI library for C++; where have you been all my life, JUCE?), so bear with my obvious questions. I did some tutorials and I’m trying to understand the ecosystem of JUCE audio classes, and it gives me a headache.

I have a main component and a set of components, each with a different wave generator or audio player built according to the tutorials. Each of them extends AudioAppComponent, while MainComponent extends just Component. Here:
https://juce.com/doc/classAudioAppComponent
I’ve read that AudioAppComponent creates a basic AudioDeviceManager. But here:
https://juce.com/doc/classAudioDeviceManager
I’ve read there should be only one AudioDeviceManager per application. I instantiate each of the generators and players and add them to the MainComponent. Doesn’t that mean I now have one instance of AudioDeviceManager per component? Shouldn’t that crash the application? And why, when I activate many components at once, do I hear them all summed together without having to write anything myself? In the future I want to mix my audio sources and add effects to them, like reverb, but I don’t understand why they are already mixed. Understanding this will probably help me develop the app further.


#2

You probably just got lucky using multiple AudioDeviceManagers/AudioAppComponents on your particular system. It might very easily fail on another system, so you should immediately redesign your code to use only one shared instance. (That instance can, however, use multiple different audio callback objects, or you can take care of mixing your audio processors yourself within a single audio callback object.) That of course means you can no longer inherit from AudioAppComponent everywhere… you should only have one class inheriting it in your app.
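For illustration, a minimal sketch of the shared-instance idea with separate callback objects. The SharedAudioEngine class and the two ToneGeneratorAudioSources are just placeholders standing in for your generators, not a prescribed design:

```cpp
// Minimal sketch: one shared AudioDeviceManager, several audio callbacks.
#include <JuceHeader.h>

struct SharedAudioEngine
{
    SharedAudioEngine()
    {
        // One device manager for the whole application (0 inputs, 2 outputs).
        deviceManager.initialise (0, 2, nullptr, true);

        toneA.setFrequency (440.0);
        toneB.setFrequency (660.0);

        // Each AudioSourcePlayer is an AudioIODeviceCallback; the device
        // manager sums the output of every callback that is registered.
        playerA.setSource (&toneA);
        playerB.setSource (&toneB);
        deviceManager.addAudioCallback (&playerA);
        deviceManager.addAudioCallback (&playerB);
    }

    ~SharedAudioEngine()
    {
        deviceManager.removeAudioCallback (&playerB);
        deviceManager.removeAudioCallback (&playerA);
        playerA.setSource (nullptr);
        playerB.setSource (nullptr);
    }

    juce::AudioDeviceManager deviceManager;
    juce::ToneGeneratorAudioSource toneA, toneB;
    juce::AudioSourcePlayer playerA, playerB;
};
```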


#3

Wow, that was fast! Thank you.
I did this on Windows. From reading the docs and forum I’ve already started to come to the conclusion that my code structure is, well… suboptimal. :smiley: However, it would be nice if the tutorials warned about it. Otherwise somebody will eventually build a basic sine generator, make a few instances and be puzzled as to why their app crashed.

I now see two ways for me:

  • add audio callbacks to the MainComponent (which would implement AudioAppComponent) from child components that themselves do not implement AudioAppComponent - a low-level way
  • create a MixerAudioSource in the MainComponent, create AudioSources in the child components and plug them into the mixer in main - a higher-level way

In both cases I’ll functionally arrive at the same spot, but the second way seems to be easier, as it uses JUCE classes that are simpler to understand. A rough sketch of that second approach is below.
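A minimal sketch of how I imagine the MixerAudioSource approach could look (the two ToneGeneratorAudioSources stand in for the child components’ actual sources; this is just an illustration, not the only way to wire it up):

```cpp
// Higher-level way: one AudioAppComponent owning a MixerAudioSource,
// with the child sources plugged into the mixer.
#include <JuceHeader.h>

class MainComponent : public juce::AudioAppComponent
{
public:
    MainComponent()
    {
        toneA.setFrequency (440.0);
        toneB.setFrequency (660.0);

        // false = the mixer does not take ownership of the sources.
        mixer.addInputSource (&toneA, false);
        mixer.addInputSource (&toneB, false);

        setAudioChannels (0, 2);   // no inputs, stereo output
    }

    ~MainComponent() override
    {
        shutdownAudio();
    }

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        mixer.prepareToPlay (samplesPerBlockExpected, sampleRate);
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        mixer.getNextAudioBlock (bufferToFill);   // pulls and sums all inputs
    }

    void releaseResources() override
    {
        mixer.releaseResources();
    }

private:
    juce::MixerAudioSource mixer;
    juce::ToneGeneratorAudioSource toneA, toneB;
};
```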


#4

Yeah, the second way seems cleaner. There are of course a multitude of ways to implement what you are doing, including ways that don’t even involve the JUCE mixer and AudioSource classes. (That is, you could implement it all with your own classes, but since JUCE already has that stuff, it’s probably better to stick with it at this point.)


#5

Thank you again. Getting closer to the metal always gives more options, but at my stage it’s better to go for the high level. A total beginner like me is spoiled for choice when starting to get to know JUCE. It’s easy to get lost in the docs and try to understand things I don’t even need. Maybe that’s a neat idea for a tutorial or overview.


#6

The documentation definitely lacks educational documents that cover basic questions like:

  • What is an audio rendering pipeline?
    An audio rendering pipeline is a sequence of classes that produce, mix or alter a continuous signal, i.e. arrays of samples.

  • What drives the rendering pipeline?
    There are two models for pipelines in general: push or pull. Either a generator produces a signal and triggers the processing of the following processor (push), or the audio driver constantly pulls blocks of samples out of the pipeline. The latter is how JUCE works.
    The AudioIODevice calls an AudioIODeviceCallback to pull samples. There are several implementations: AudioSourcePlayer, AudioProcessorPlayer, SoundPlayer and StandalonePluginHolder. You can add several callbacks to the AudioIODevice (by calling AudioIODevice::start (AudioIODeviceCallback*)), which is why having several AudioAppComponents happened to work for you. But as @Xenakios wrote and you already figured out, that’s not how it is meant to be done…
    Pulling audio data usually happens in getNextAudioBlock() of AudioSource and its subclasses, and processing to alter audio data happens in AudioProcessor::processBlock().

  • What is an AudioAppComponent?
    An AudioAppComponent is a convenience class that combines a GUI Component and an AudioSource. That makes it a good starting point, but most of the time you end up with your own custom setup (see the sketch after this list):

    • As an AudioSource it has a getNextAudioBlock() to override, so you can feed in your audio data.
    • It aggregates an AudioSourcePlayer, which is used to feed this audio into an AudioIODevice.
    • To configure the AudioIODevice, it aggregates an AudioDeviceManager.
  • Why is there no AudioIODevice in a plugin?
    A plugin is not meant to communicate with any hardware. Instead, it adds its functionality to the host, so it gets fed data from, and returns it to, the host’s track. Any routing should be done inside the host. If the host doesn’t support a specific routing, you are out of luck. There are sometimes workarounds, but they are usually hard to teach to users, so they are fine for in-house solutions but not feasible for a commercial product.
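To make the AudioAppComponent point concrete, a minimal sketch of a single sine-generator app component. The base class already owns the AudioDeviceManager and AudioSourcePlayer; you only override the three AudioSource-style callbacks (the phase/frequency handling here is just illustrative):

```cpp
#include <JuceHeader.h>
#include <cmath>

class SineAppComponent : public juce::AudioAppComponent
{
public:
    SineAppComponent()            { setAudioChannels (0, 2); }
    ~SineAppComponent() override  { shutdownAudio(); }

    void prepareToPlay (int, double newSampleRate) override
    {
        sampleRate = newSampleRate;
        phase = 0.0;
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        // The audio device pulls blocks of samples through this callback.
        const double phaseDelta = juce::MathConstants<double>::twoPi * frequency / sampleRate;

        for (int i = 0; i < bufferToFill.numSamples; ++i)
        {
            const float sample = 0.2f * (float) std::sin (phase);
            phase += phaseDelta;

            for (int ch = 0; ch < bufferToFill.buffer->getNumChannels(); ++ch)
                bufferToFill.buffer->setSample (ch, bufferToFill.startSample + i, sample);
        }
    }

    void releaseResources() override {}

private:
    double sampleRate = 44100.0, frequency = 440.0, phase = 0.0;
};
```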

Maybe one fine day there will be a wiki where these kinds of questions remain accessible instead of ageing out of the forum…