General questions about modular audio processing

Hey everyone!

I’m developing software that isn’t really audio-related, but now I have to add some audio functionality too. It’s a node-based system, so I have a class for every piece of functionality. For the audio part I have to write nodes like AudioInput (mic/line-in), AudioLoader (file), AudioBalance (pan), AudioGain, AudioMixer and AudioOutput.

I know and really love the architecture and conventions of JUCE, but now I’m lost. I just don’t understand the architecture of the audio classes. :slight_smile:

In my case I think all of my nodes should use AudioSources as input/output ports. But my AudioOutput node should stream audio all the time (even if its input is empty, in which case it’s silent), and if it’s connected to an AudioSource (the input is not a nullptr) it plays it. I have a Message thread and a Processing thread, and both can change values of my nodes.

I don’t need any code, but please give me some advice about how to build a system like this: which node should use which audio class?

Thank you!

(PS: Jules, and now the whole ROLI team too: JUCE is awesome!)

The output is anything that inherits AudioIODeviceCallback. In a proper doxygen doc you can see that these include AudioProcessorPlayer and AudioSourcePlayer. They behave exactly as you’d expect.
You hook the player up to your device via AudioDeviceManager::addAudioCallback().
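For illustration, a minimal sketch of that hookup (the variable names are my own; the calls are standard JUCE):

```cpp
#include <JuceHeader.h>

// e.g. in your app's initialisation:
juce::AudioDeviceManager deviceManager;
juce::AudioSourcePlayer sourcePlayer;

// Open the default device with no inputs and stereo output.
deviceManager.initialise (0, 2, nullptr, true);

// The player now receives the device's audio callbacks; with no
// source set, it simply outputs silence.
deviceManager.addAudioCallback (&sourcePlayer);

// Later, make any AudioSource audible:
// sourcePlayer.setSource (someAudioSource);
```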

Hope that helps


Hey Daniel!

Thank you for the suggestion. Now I have a node that loads audio from a file (like in the tutorial); its only output is a pointer to the loaded AudioSource. I also have an output node, which now inherits from AudioSourcePlayer. When I connect them together the audio starts to play as expected.

When I disconnect them, or I give an invalid file path to the loader, the AudioSource pointer becomes a nullptr and my app crashes in void AudioSourcePlayer::setSource (AudioSource* newSource) on the line oldSource->releaseResources(); with EXC_BAD_ACCESS.

What is the safe way to set the AudioSourcePlayer’s source to nullptr when NOT on the audio thread?

The next two nodes I have to implement are the Balance and the Mixer. The Balance looks simple: I’ll inherit from AudioSource and just multiply the channel data by my balance value. The Mixer looks a bit harder. I think I should inherit from MixerAudioSource and use the addInputSource() method. But how do I set the gain on each source? Should I multiply the channel data beforehand, or write a standalone Gain node?

(And sorry for the dumb questions, I’m new to the audio stuff, and I have to understand the whole concept first.)

Have a look at AudioSourcePlayer::setSource():

If there’s another source currently playing, its releaseResources()
method will be called after it has been swapped for the new one.

So it would end up being called twice; you don’t have to call releaseResources() yourself. And as I understand it, there should be no problem setting the audio source to nullptr.

As for your other question, I can’t give you general advice; there are a lot of options for achieving this. One, as you said, is to create an AudioSource that wraps another AudioSource and modifies the audio data as it passes through. That’s probably the easiest way to do it.
Another option is to create an AudioProcessor. So it depends on what direction you want your application to evolve in. But maybe somebody else wants to offer ideas or has a stronger opinion on how to do this…
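To make the wrapping idea concrete, here’s a minimal sketch of such a pass-through source applying a balance value. The class and member names are hypothetical; the overridden methods are the standard AudioSource interface:

```cpp
#include <JuceHeader.h>
#include <atomic>

class BalanceSource : public juce::AudioSource
{
public:
    explicit BalanceSource (juce::AudioSource* sourceToWrap) : input (sourceToWrap) {}

    // -1 = hard left, 0 = centre, +1 = hard right; atomic so the message
    // thread can change it while the audio thread reads it.
    void setBalance (float newBalance)
    {
        balance.store (juce::jlimit (-1.0f, 1.0f, newBalance));
    }

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        if (input != nullptr)
            input->prepareToPlay (samplesPerBlockExpected, sampleRate);
    }

    void releaseResources() override
    {
        if (input != nullptr)
            input->releaseResources();
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
    {
        if (input == nullptr)
        {
            info.clearActiveBufferRegion();  // no input connected: output silence
            return;
        }

        input->getNextAudioBlock (info);     // pull audio from the wrapped source

        const float b = balance.load();
        if (info.buffer->getNumChannels() >= 2)
        {
            // Simple linear pan: attenuate the channel opposite the balance.
            info.buffer->applyGain (0, info.startSample, info.numSamples, b > 0.0f ? 1.0f - b : 1.0f);
            info.buffer->applyGain (1, info.startSample, info.numSamples, b < 0.0f ? 1.0f + b : 1.0f);
        }
    }

private:
    juce::AudioSource* input;
    std::atomic<float> balance { 0.0f };
};
```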

Or I might create my own mixing AudioSource, with an additional struct per connected source holding the parameters gain, pan, mute… And don’t forget to store the gain of each channel after processing the last block, so you can use AudioBuffer::addFromWithRamp(…) to avoid jumps in the audio.
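A minimal sketch of that idea (names are hypothetical; locking and ownership are omitted for brevity): each input remembers the gain it was last rendered with, so addFromWithRamp() can ramp smoothly to the new value:

```cpp
#include <JuceHeader.h>
#include <vector>

class MixingSource : public juce::AudioSource
{
public:
    struct Input
    {
        juce::AudioSource* source = nullptr;
        float gain = 1.0f;       // target gain, set by the message thread
        float lastGain = 1.0f;   // gain applied at the end of the previous block
    };

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        tempBuffer.setSize (2, samplesPerBlockExpected);
        for (auto& in : inputs)
            in.source->prepareToPlay (samplesPerBlockExpected, sampleRate);
    }

    void releaseResources() override
    {
        for (auto& in : inputs)
            in.source->releaseResources();
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
    {
        info.clearActiveBufferRegion();  // start from silence, then add each input

        tempBuffer.setSize (info.buffer->getNumChannels(), info.numSamples,
                            false, false, true);

        for (auto& in : inputs)
        {
            // Render this input into a scratch buffer...
            juce::AudioSourceChannelInfo tempInfo (&tempBuffer, 0, info.numSamples);
            in.source->getNextAudioBlock (tempInfo);

            // ...then add it, ramping from last block's gain to the new one.
            for (int ch = 0; ch < info.buffer->getNumChannels(); ++ch)
                info.buffer->addFromWithRamp (ch, info.startSample,
                                              tempBuffer.getReadPointer (ch),
                                              info.numSamples,
                                              in.lastGain, in.gain);
            in.lastGain = in.gain;
        }
    }

    std::vector<Input> inputs;  // thread-safe add/remove left out of this sketch

private:
    juce::AudioBuffer<float> tempBuffer;
};
```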

OK, it was my fault (thank you for the double-release tip). I had to add a ScopedLock and change the ownership (not using a ScopedPointer to hold the actual source) when I load a new audio file; the nullptr (stopping the audio stream) part works like a dream now. (A million thanks to Jules and the JUCE team again!)
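For anyone finding this later, here’s a minimal sketch of how such a swap on the message thread might look. The member names are my own assumptions, not from the post above:

```cpp
#include <JuceHeader.h>

// Assumed members of the loader node:
//   juce::AudioFormatManager formatManager;   // registerBasicFormats() already called
//   juce::AudioSourcePlayer  sourcePlayer;    // already added as a device callback
//   std::unique_ptr<juce::AudioFormatReaderSource> currentSource;

void loadFile (const juce::File& file)
{
    std::unique_ptr<juce::AudioFormatReader> reader (formatManager.createReaderFor (file));

    if (reader == nullptr)
    {
        sourcePlayer.setSource (nullptr);  // invalid file: play silence
        currentSource.reset();             // safe to delete once the player has let go
        return;
    }

    auto newSource = std::make_unique<juce::AudioFormatReaderSource> (reader.release(), true);
    sourcePlayer.setSource (newSource.get());  // setSource does its own locking and release
    currentSource = std::move (newSource);     // old source is destroyed after the swap
}
```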

Now I need some kind of player functionality. As far as I can see, in my AudioPlayer node I need to convert my incoming AudioSource* to a PositionableAudioSource* and use its time-related functions. Can you suggest a way to achieve this?

You have it already: AudioFormatReaderSource inherits from PositionableAudioSource, so you can put it into an AudioTransportSource (not TransportAudioSource, for some reason). Then you can use start() and stop() on that one.
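A minimal sketch of that chain, with a placeholder file path for illustration:

```cpp
#include <JuceHeader.h>

// e.g. inside your player node's setup:
juce::AudioFormatManager formatManager;
formatManager.registerBasicFormats();

std::unique_ptr<juce::AudioFormatReader> reader (
    formatManager.createReaderFor (juce::File ("/path/to/audio.wav")));

juce::AudioTransportSource transport;
std::unique_ptr<juce::AudioFormatReaderSource> readerSource;

if (reader != nullptr)
{
    const double sourceSampleRate = reader->sampleRate;
    readerSource = std::make_unique<juce::AudioFormatReaderSource> (reader.release(), true);

    // AudioFormatReaderSource is a PositionableAudioSource, so the transport accepts it.
    transport.setSource (readerSource.get(), 0, nullptr, sourceSampleRate);

    transport.setPosition (0.0);  // seconds; the time-related API lives here
    transport.start();            // and later: transport.stop();
}
```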

This polymorphic stuff is so handy, but you can’t understand it without looking up the inheritance. That’s why it annoys me every day that the diagrams aren’t shown in the API docs; but thanks to @samuel there are up-to-date API docs in the default doxygen style with full functionality available.

Still, the stuff they code is awesome, I agree!

Fantastic! Now I have a really simple but working audio system, thank you!

Why is MP3 encoding surrounded by all this legal and licence stuff? Do I have to pay for a licence to use MP3 in my app? To whom? Seriously?

Rail

Thank you for your reply. I never thought it would be an issue. :slight_smile:

I’m not sure what “Per Unit Royalty” means. If I make one piece of software but publish it in different editions (Lite, Basic, Pro, etc.), is that one unit, or one unit per version? (Anyway, the $0.75/unit is absolutely OK, but I’d like to see things clearly.)

Did you look at the “PC Software” tab? http://mp3licensing.com/royalty/software.html
Especially the Minimum Royalties section at the bottom, i.e. a minimum of $15k.

Why not use APIs provided by the operating system to decode and encode MP3 and other patented formats? In that case the de/encoding is done by the OS, and the OS vendors Apple and Microsoft have already paid the royalties for this functionality. (Not available on a Linux distribution.)

The de/encoding is done in the OS and not in my application (no codec is present in the binary), so I don’t have to pay royalties at all.
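For what it’s worth, JUCE ships wrappers around those OS codecs. A minimal sketch of registering them explicitly (check per-platform availability; recent versions of registerBasicFormats() may already include them):

```cpp
#include <JuceHeader.h>

juce::AudioFormatManager formatManager;

#if JUCE_MAC || JUCE_IOS
// CoreAudioFormat wraps Apple's AudioToolbox codecs (MP3, AAC, ...).
formatManager.registerFormat (new juce::CoreAudioFormat(), false);
#elif JUCE_WINDOWS
// WindowsMediaAudioFormat wraps the Windows media codecs.
formatManager.registerFormat (new juce::WindowsMediaAudioFormat(), false);
#endif

// createReaderFor() now decodes via the OS, with no codec in your binary.
```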

I haven’t used it yet, but here’s an example of how to use these OS X and Windows APIs.
