Is there any documentation specific to application types? [SOLVED]

I’m about to make a second run at the tutorials, and yes, every tutorial does briefly say something along the lines of:

The Audio Application template is very similar to the GUI Application template, except that:

  • The MainContentComponent class inherits from the AudioAppComponent class rather than the Component class.
  • The juce_audio_utils module is added to the project, in addition to the other audio-related modules that are added to projects by default.

But it doesn’t explain why we inherit from AudioAppComponent and not just Component, or why the juce_audio_utils module isn’t included in the GUI Application template. So when I see these options in the Projucer, tbh, I don’t really know which one to choose.

Side question: if I wanted to contribute an effect to an existing application that already has JUCE incorporated (somehow), which application type would I want to start from?

Thanks again a billion you wonderful people!

1 Like

AudioAppComponent is just a Component and an AudioSource combined, which will be played by an AudioDeviceManager. I wouldn’t really recommend using AudioAppComponent. The way it combines GUI and audio-related things into a single class is not a really clean design. But it’s fine if you just want a GUI window and some basic audio playback happening. Code you write against AudioAppComponent is not compatible with, for example, audio plugin projects, which require separate AudioProcessor and AudioProcessorEditor subclasses.
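
For context, here’s a minimal sketch of what code written against AudioAppComponent typically looks like. This approximates the Projucer Audio Application template rather than quoting it, so details may differ between JUCE versions:

class MainComponent : public AudioAppComponent
{
public:
    MainComponent()
    {
        setSize (600, 400);
        setAudioChannels (0, 2); // no inputs, stereo output; opens the default audio device
    }

    ~MainComponent()
    {
        shutdownAudio(); // stops the device and disconnects this AudioSource
    }

    // AudioSource side: the audio work happens here
    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override {}
    void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) override
    {
        bufferToFill.clearActiveBufferRegion(); // silence for now
    }
    void releaseResources() override {}

    // Component side: the GUI work happens here
    void paint (Graphics& g) override { g.fillAll (Colours::black); }
    void resized() override {}
};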

juce_audio_utils is not included in the plain GUI Application template because it is not expected that you would need it. AudioAppComponent itself is part of juce_audio_utils, so projects using it will need that module added.

How to contribute to a 3rd party project depends completely on what that 3rd party project is. Many, but not all, JUCE-based projects will use AudioProcessor as the base class for their audio effects. Some may use AudioSource or the newer dsp::ProcessorBase, or may have a custom base class.
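
As a rough illustration, an effect written against dsp::ProcessorBase might look like this. It’s a hypothetical fixed-gain effect (the SimpleGain name and the 0.5 gain value are made up for the example), just to show the shape of the interface:

struct SimpleGain : public dsp::ProcessorBase
{
    void prepare (const dsp::ProcessSpec& spec) override
    {
        gain.prepare (spec);
        gain.setGainLinear (0.5f); // arbitrary fixed gain for the example
    }

    void process (const dsp::ProcessContextReplacing<float>& context) override
    {
        gain.process (context); // apply the gain in place to the buffer described by the context
    }

    void reset() override
    {
        gain.reset();
    }

    dsp::Gain<float> gain;
};

An AudioProcessor-based effect has a much larger interface to implement; there’s a sketch of that further down the thread.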

2 Likes

Thank you Xenakios,

I will take more time assimilating the various tutorials, and come back if I have any further questions.

Hello @Xenakios or anyone else,

I’m trying to build sound synthesis the “right” way / from the ground up. But I’m having some conceptual problems.

I tried to create my own class that inherits from AudioSource and instantiates AudioDeviceManager (with default settings) and AudioSourcePlayer objects, because to my understanding these three objects are necessary in order to produce sound.

But it didn’t work. I was getting problems because AudioSource is protected somewhere deep in the code and I was trying to do things that aren’t permitted, plus problems with virtual and override, etc.

So I thought I would take a step back and just try to recreate my own AudioAppComponent-like functionality in the MainComponent class.

I assumed that just inheriting from AudioSource and overriding the AudioSource functions, but from within MainComponent, would do the trick:

void MainComponent::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) override

Maybe I needed to make my own setAudioChannels function that initialises the AudioDeviceManager, etc.:

void MainComponent::setAudioChannels (int numInputChannels, int numOutputChannels)
{
    String audioError;

    audioError = deviceManager.initialiseWithDefaultDevices (numInputChannels, numOutputChannels);

    jassert (audioError.isEmpty());

    deviceManager.addAudioCallback (&audioSourcePlayer);
    audioSourcePlayer.setSource (this);
}

I created AudioDeviceManager and AudioSourcePlayer objects for good measure.

But now I just get those annoying override errors.

I still don’t really have a conceptual understanding of how the various audio related classes are put together in order to feed some sound to my speakers. I would just like to output some white noise, for now.

The Building a White Noise Generator tutorial goes into the algorithm and where to put it (i.e. getNextAudioBlock), but not how to put together an application from scratch that is going to ask for getNextAudioBlock.

Can anyone outline a step-by-step process for putting together what is necessary in order to do this?

Many thanks.

Here’s the simplest way to do a Component class that plays noise audio :

class MainComponent   : public Component, public AudioIODeviceCallback
{
public:
    //==============================================================================
    MainComponent()
    {
        // Open the default audio device with no inputs and stereo output,
        // then register this object as the audio callback.
        m_adman.initialiseWithDefaultDevices (0, 2);
        m_adman.addAudioCallback (this);
        setSize (600, 400);
    }

    ~MainComponent()
    {
        m_adman.removeAudioCallback (this);
    }

    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels, int numSamples) override
    {
        // Generate low-level white noise and copy the same sample to both output channels
        for (int i = 0; i < numSamples; ++i)
        {
            float sample = jmap (m_rnd.nextFloat(), 0.0f, 1.0f, -0.1f, 0.1f);
            outputChannelData[0][i] = sample;
            outputChannelData[1][i] = sample;
        }
    }

    void audioDeviceAboutToStart (AudioIODevice* device) override
    {
    }

    void audioDeviceStopped() override
    {
    }

    //==============================================================================
    void paint (Graphics& g) override
    {
        // (Our component is opaque, so we must completely fill the background with a solid colour)
        g.fillAll (getLookAndFeel().findColour (ResizableWindow::backgroundColourId));

        g.setFont (Font (16.0f));
        g.setColour (Colours::white);
        g.drawText ("Hello World!", getLocalBounds(), Justification::centred, true);
    }

    void resized() override
    {
    }

private:
    AudioDeviceManager m_adman;
    Random m_rnd;
    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (MainComponent)
};

It skips all the AudioSource stuff (since it isn’t strictly necessary) and just directly implements the audio IO callback for use with the AudioDeviceManager instance.

Note however that this kind of Component isn’t really a good idea. (Like the AudioAppComponent isn’t.) You really want to have the audio and GUI parts separated in your code.

If your goal is to contribute audio effects code to a 3rd party project, it isn’t likely your contribution is going to go into code like that. You need to figure out how their audio effects processing has been designed and then work from that.

1 Like

Thank you so much man, this is unbelievably helpful!

Note however that this kind of Component isn’t really a good idea. (Like the AudioAppComponent isn’t.) You really want to have the audio and GUI parts separated in your code.

Right, so if I then essentially remove all the Component stuff from your code and put the remainder into its own class that solely inherits from AudioIODeviceCallback,

then, from the constructor of MainWindow in main.cpp, instantiate my new audio object (with white noise built in) alongside the instantiation of MainComponent. I’ll then have my GUI stuff in MainComponent and my audio stuff in the newly created class (slowly, over time, getting more complex as I become more competent with JUCE/programming), and that would be the proper way of doing it, would it not?

Yeah, this is the longer term goal. For now I’m just trying to get my head around what JUCE is first and foremost!

Here’s a version of the above code where the audio callback has been separated into a different class :
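
(The code that was originally attached isn’t reproduced here; the following is a sketch reconstructed from that description, using the same audioDeviceIOCallback signature as above. The NoiseAudioCallback name is just for the example.)

class NoiseAudioCallback : public AudioIODeviceCallback
{
public:
    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels, int numSamples) override
    {
        // Same mono white noise as before, copied to every output channel
        for (int i = 0; i < numSamples; ++i)
        {
            float sample = jmap (m_rnd.nextFloat(), 0.0f, 1.0f, -0.1f, 0.1f);
            for (int chan = 0; chan < numOutputChannels; ++chan)
                outputChannelData[chan][i] = sample;
        }
    }

    void audioDeviceAboutToStart (AudioIODevice*) override {}
    void audioDeviceStopped() override {}

private:
    Random m_rnd;
};

class MainComponent : public Component
{
public:
    MainComponent()
    {
        m_adman.initialiseWithDefaultDevices (0, 2);
        m_adman.addAudioCallback (&m_noise);
        setSize (600, 400);
    }

    ~MainComponent()
    {
        m_adman.removeAudioCallback (&m_noise);
    }

    // paint()/resized() as in the earlier version

private:
    AudioDeviceManager m_adman;
    NoiseAudioCallback m_noise;
    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (MainComponent)
};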

In that version, the MainComponent object still owns the audio callback and the audio device manager objects, which can be fine if you know your application is always going to need the GUI anyway. But if you want, you could have the JUCE application subclass own them too. Whatever makes the most sense in your situation. You should mainly avoid having multiple AudioDeviceManager instances, as that can lead to problems.
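
For example, owning them in the application subclass might look roughly like this (a sketch only: MyNoiseApplication is a made-up name, MainWindow is assumed to be the window class the Projucer GUI template generates in Main.cpp, and unrelated JUCEApplication overrides are omitted):

class MyNoiseApplication : public JUCEApplication
{
public:
    const String getApplicationName() override    { return "NoiseApp"; }
    const String getApplicationVersion() override { return "1.0"; }

    void initialise (const String&) override
    {
        // The application object owns the audio device manager and the callback
        m_adman.initialiseWithDefaultDevices (0, 2);
        m_adman.addAudioCallback (&m_noise);
        mainWindow.reset (new MainWindow (getApplicationName()));
    }

    void shutdown() override
    {
        m_adman.removeAudioCallback (&m_noise);
        mainWindow = nullptr;
    }

private:
    AudioDeviceManager m_adman;
    NoiseAudioCallback m_noise; // the callback class from the sketch above
    std::unique_ptr<MainWindow> mainWindow;
};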

2 Likes

Version which uses AudioSource :
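
(Again a reconstruction rather than the exact code from the original attachment; the NoiseAudioSource name is just for the example. The noise source implements AudioSource, and an AudioSourcePlayer bridges it to the AudioDeviceManager.)

class NoiseAudioSource : public AudioSource
{
public:
    void prepareToPlay (int /*samplesPerBlockExpected*/, double /*sampleRate*/) override {}
    void releaseResources() override {}

    void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) override
    {
        // Fill each requested channel with low-level white noise
        for (int chan = 0; chan < bufferToFill.buffer->getNumChannels(); ++chan)
        {
            float* dest = bufferToFill.buffer->getWritePointer (chan, bufferToFill.startSample);
            for (int i = 0; i < bufferToFill.numSamples; ++i)
                dest[i] = jmap (m_rnd.nextFloat(), 0.0f, 1.0f, -0.1f, 0.1f);
        }
    }

private:
    Random m_rnd;
};

class MainComponent : public Component
{
public:
    MainComponent()
    {
        m_adman.initialiseWithDefaultDevices (0, 2);
        m_player.setSource (&m_noise);        // the player pulls blocks from the AudioSource
        m_adman.addAudioCallback (&m_player); // and the device manager drives the player
        setSize (600, 400);
    }

    ~MainComponent()
    {
        m_adman.removeAudioCallback (&m_player);
        m_player.setSource (nullptr);
    }

private:
    AudioDeviceManager m_adman;
    NoiseAudioSource m_noise;
    AudioSourcePlayer m_player;
    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (MainComponent)
};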


You don’t really get any particular benefit from using AudioSource in code like this; the code is actually a bit more complicated than just directly implementing the audio IO callback. Anyway, it can be useful in some cases to use the AudioSource approach. (For example, if you already have AudioSource classes that do something complicated and useful and you just want to be able to play those back with the AudioDeviceManager. JUCE has some such classes included, for example for playing back audio files and mixing together multiple AudioSources.)

To be complete, this could be done in yet another way by subclassing AudioProcessor and using AudioProcessorPlayer. That would be the most complicated (or verbose) approach, as AudioProcessor has lots of virtual methods to implement.

edit : here’s the version with AudioProcessor :wink:
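
(The attachment isn’t reproduced here, but a rough sketch of that approach might look like this; the NoiseProcessor name is made up, and the amount of required boilerplate shows why it’s the most verbose option.)

class NoiseProcessor : public AudioProcessor
{
public:
    NoiseProcessor()
        : AudioProcessor (BusesProperties().withOutput ("Output", AudioChannelSet::stereo())) {}

    void prepareToPlay (double, int) override {}
    void releaseResources() override {}

    void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
    {
        // Fill every output channel with low-level white noise
        for (int chan = 0; chan < buffer.getNumChannels(); ++chan)
        {
            float* dest = buffer.getWritePointer (chan);
            for (int i = 0; i < buffer.getNumSamples(); ++i)
                dest[i] = jmap (m_rnd.nextFloat(), 0.0f, 1.0f, -0.1f, 0.1f);
        }
    }

    // Boilerplate required by the AudioProcessor interface
    const String getName() const override                { return "NoiseProcessor"; }
    bool acceptsMidi() const override                    { return false; }
    bool producesMidi() const override                   { return false; }
    double getTailLengthSeconds() const override         { return 0.0; }
    int getNumPrograms() override                        { return 1; }
    int getCurrentProgram() override                     { return 0; }
    void setCurrentProgram (int) override                {}
    const String getProgramName (int) override           { return {}; }
    void changeProgramName (int, const String&) override {}
    bool hasEditor() const override                      { return false; }
    AudioProcessorEditor* createEditor() override        { return nullptr; }
    void getStateInformation (MemoryBlock&) override     {}
    void setStateInformation (const void*, int) override {}

private:
    Random m_rnd;
};

// Playing it back: an AudioProcessorPlayer is itself an AudioIODeviceCallback, so roughly:
//   m_adman.initialiseWithDefaultDevices (0, 2);
//   m_player.setProcessor (&m_processor);
//   m_adman.addAudioCallback (&m_player);
// ...and remove the callback / clear the processor again in the destructor.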

2 Likes

Xenakios, this is exactly what the doctor ordered!

Is there any documentation, any video, lecture, seminar, anything, on AudioIODeviceCallback, AudioSource, and AudioProcessor - the differences, the similarities, the philosophy, why all three exist as part of JUCE, etc.? Maybe a lecture series called ‘The Anatomy of JUCE’ or something.

I.e. how did you come to this knowledge if it isn’t already documented somewhere? How is someone like me, or any other new developer, supposed to learn this? Just by studying the code? :worried:

The tutorials and examples do a pretty good job at getting newbies up to speed with projects that do something. They could obviously be better in some regards.

This video may have something useful, but it’s been a while since I last watched it, so I can’t say for sure :

2 Likes

Wow. That was incredible and took me a long time (many days’ work) to absorb, but I think I’ve squeezed every bit of juice (:wink:) out of that video. I think I’m almost ready to wrap this thread up, so some last little bits, I think:

Here is how I look at the various collections of audio classes and their use cases:

    • In every case, the star of the show is the AudioDeviceManager. It will exist in every application in order to send your app’s output to the soundcard.
    • If I want to build some kind of simple self-contained audio app (one that doesn’t integrate with another audio app and doesn’t take audio inputs beyond the soundcard), I want to use only AudioSource classes to produce sound (MixerAudioSource to mix them, and AudioSourcePlayer to connect them to the AudioDeviceManager - see the wiring sketch after this list).
    • If I’m building a typical synth plugin/app, I want one AudioProcessor to handle all the audio/GUI/MIDI events and use AudioSource classes for sound generation/manipulation. (You may want to take a look at the Synthesiser class (and its related classes … SynthAudioSource, etc.) for handling the voices - see the related tutorial.)
    • If I’m building a DAW-like application, I would want an AudioProcessor for every track so that each could have its own channel-strip inserts, etc. (using AudioProcessorGraph for the under-the-hood mixing and routing of all the AudioProcessors - see the related tutorial).
    • If you’re creating any other kind of bespoke application, you could of course look into combining all of the above.
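
Here’s a minimal sketch of the wiring I mean in point 2 (toneSource and noiseSource are hypothetical AudioSource-derived generators, just for illustration):

MixerAudioSource mixer;
mixer.addInputSource (&toneSource, false);   // false: the mixer does not take ownership
mixer.addInputSource (&noiseSource, false);

AudioSourcePlayer player;
player.setSource (&mixer);                   // the player pulls mixed blocks from the mixer

AudioDeviceManager adman;
adman.initialiseWithDefaultDevices (0, 2);
adman.addAudioCallback (&player);            // the device manager drives the player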

Is that correct?

Comment on point 1 : With audio plugins, you don’t really want to be dealing with the AudioDeviceManager at all. (Except in the case of the standalone app builds of plugins, and even then you have to go to some trouble to actually be able to manipulate the instance JUCE creates for you.) If you are running as a plugin inside a 3rd party host, trying to use the AudioDeviceManager can easily fail. This is an important point, because people regularly think they can conveniently implement, for example, previewing of audio samples just by trying to play the sample through an AudioDeviceManager. It can work on some particular systems and hosts, but may very well fail on others. (Because the 3rd party host application is already using the audio hardware.)

As a more general comment on the other points : you don’t need to use anything in particular from JUCE to implement your audio. (The audio is really just buffers of floating-point values and you can process and generate them in any way you want.) The JUCE classes can be quite convenient in some cases, though.

1 Like

Good point. So everything else wasn’t pure gibberish?? :upside_down_face:

I edited my reply regarding the other points.

Regarding your point about “[not] need[ing] to use anything in particular from JUCE to implement your audio”: yes, but the beautiful thing about JUCE for a novice like me is that JUCE is all I need right now :slight_smile:

In closing, this thread has been a ride! Thank you for all your input.

No - if you’re building a DAW-like application, the best advice is to just use the Tracktion Engine. It’s free unless you’re generating proper revenue from your app, and it will save you years of work.