Help with processBlock of a multi-channel app

I am trying to build a drum machine and I want to be able to add DSP effects on each channel. I can load and play different files in different channels, but I got stuck trying to figure out how to manipulate the audio buffer of a specific channel. I based my work on the “Build an audio player” tutorial, so at the moment I can only use the processBlock function in my PluginProcessor class, which only lets me manipulate the total output of all channels. I would like some hints on how to get access to each channel’s audio data.

So in PluginProcessor.h I have this
AudioDeviceManager audioDeviceManager;
AudioFormatManager formatManager;
OwnedArray<AudioOutEngine> audios;

And AudioOutEngine is defined like this:

    class AudioOutEngine : public Thread
    {
    public:
        AudioOutEngine (AudioFormatManager* _pFormatManager, AudioDeviceManager* _pAudioDeviceManager);

        bool loadURLIntoTransport (const URL& audioURL);
        void run() override;

        AudioSourcePlayer audioSourcePlayer;
        AudioTransportSource transportSource;
        TimeSliceThread tst { "Thread" };
        std::unique_ptr<AudioFormatReaderSource> currentAudioFileSource;
        AudioFormatManager* pFormatManager;
        AudioDeviceManager* pAudioDeviceManager;
    };

Now I can play files by calling audios[n]->transportSource.start(), but how do I proceed from here to process the audio blocks of audios[n]?

First, the question: is this an app hosting AudioProcessors, or an instrument plugin?
If it is a plugin, don’t use AudioDeviceManager, since you are not supposed to talk to the audio devices directly. Users expect all plugins to respect the routing in the host.

Avoid inheriting AudioOutEngine from Thread. It is good to offload the work to a background thread, but that’s what the TimeSliceThread is for. It would also be good to create a shared pool of TimeSliceThreads, since ideally you want as many threads as cores; in the current setup you end up with as many threads as instances of your AudioOutEngine.
Have a look at SharedResourcePointer to create that.

If you want to call processBlock (i.e. you are hosting), you can create an AudioBuffer, that is referencing a subset of channels of the multi channel AudioBuffer.
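For example, a minimal sketch of that idea, assuming JUCE’s AudioBuffer API: the aliasing constructor lets a small buffer refer to a subset of the channels of the buffer passed to processBlock, without copying any samples. The channel indices and the applyGain call below are just placeholders for real per-drum DSP.

```cpp
// Process only channels 2 and 3 of the full buffer via an aliasing sub-buffer.
void processDrumChannels (juce::AudioBuffer<float>& fullBuffer)
{
    const int firstChannel = 2;   // hypothetical: this drum's first output channel
    const int numSamples   = fullBuffer.getNumSamples();

    float* channelPtrs[] = { fullBuffer.getWritePointer (firstChannel),
                             fullBuffer.getWritePointer (firstChannel + 1) };

    // The sub-buffer references the same memory, so processing it
    // writes straight back into fullBuffer.
    juce::AudioBuffer<float> subBuffer (channelPtrs, 2, numSamples);
    subBuffer.applyGain (0.5f); // stand-in for a real effect chain
}
```

Because no samples are copied, this is cheap enough to do once per channel group on every processBlock call.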

Thank you, sir. Your input is much appreciated.

So actually, I am aiming at having this project in both standalone and DLL formats. How should I design the communication with audio devices in this case? Also, which Projucer template would you choose to begin with (or should I have two different projects)?

Regarding the shared pool of TimeSliceThreads – that’s an excellent point. Will do.

The audio plugin project template in the Projucer also allows building a standalone application, so that’s probably the easiest choice.

Yes, the audio plugin project template is the way to go, like _Xenakios wrote.

You don’t need to interact with the devices at all: the StandalonePluginHolder is a mini-host that will take care of the interaction with the audio devices, provides a built-in AudioDeviceSelectorComponent, and will call processBlock as if it were in any other host.

But if you need code specifically for the standalone-app case (usually you don’t), you can add code after testing JUCEApplication::isStandaloneApp(), and you can access StandalonePluginHolder::getInstance() and retrieve its AudioDeviceManager, which is just a public member variable.
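A minimal sketch of that check, assuming the standard JUCE standalone wrapper (the names come from the StandalonePluginHolder class in JUCE’s standalone filter window code):

```cpp
// Only touch the device manager when actually running as the standalone app.
if (juce::JUCEApplicationBase::isStandaloneApp())
{
    if (auto* holder = juce::StandalonePluginHolder::getInstance())
    {
        juce::AudioDeviceManager& dm = holder->deviceManager; // public member
        // e.g. inspect or tweak the current device setup here
    }
}
```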


So if I understand correctly: first I should wrap my project in a StandalonePluginHolder. This will give me access to the AudioDeviceManager in the standalone case and respect the host routing in the DLL case. While I’m at it, I should delete the AudioDeviceManager from my PluginProcessor.h file and from the AudioOutEngine constructor, since an AudioDeviceManager will already be provided by the StandalonePluginHolder. Next, I should access a subset of channels of the AudioBuffer, and that will allow me to process separate tracks of my drum machine as I wanted.

Is all of this correct? If so, is there an example project using a StandalonePluginHolder I can learn from? I spent a lot of time reading the documentation and still couldn’t figure out how to make the StandalonePluginHolder know about my OwnedArray<AudioOutEngine>. I must be missing something very basic here.

All (or most) of the JUCE example plugins will build or can be set to build as standalone applications with the Projucer.

For basic work you shouldn’t need to do anything with the StandalonePluginHolder. The standalone application code is generated by the Projucer into the IDE (Visual Studio, Xcode) project. Your code should just implement the AudioProcessor and AudioProcessorEditor subclasses. The standalone application and plugin wrappers will instantiate and manage those as needed.

Thanks. It’s much easier when you let JUCE handle the details…

But if I remove the AudioDeviceManager from my AudioProcessor subclass, then I can’t register my AudioSourcePlayer as a callback, so no audio will be played.

Considering that -

  1. I want to play audio files on-demand
  2. I do not want to process incoming audio
  3. I want to have a standalone and DLL versions

What would be the best design? How do I make an AudioProcessor load and play audio without an AudioDeviceManager?

Make your AudioSource render into the buffer provided to the processBlock function. (Or into an intermediate buffer, if needed.) Drop the AudioSourcePlayer, though; it’s not useful in the AudioProcessor context.

So, something like this, though there are obviously all kinds of other details if you need something more advanced:

void MyPluginAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{
    ScopedNoDenormals noDenormals;
    auto totalNumInputChannels  = getTotalNumInputChannels();
    auto totalNumOutputChannels = getTotalNumOutputChannels();

    // clear any output channels that have no corresponding input
    for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    // wrap the plugin's buffer and let your source fill it,
    // e.g. an AudioTransportSource member:
    AudioSourceChannelInfo cinfo (buffer);
    transportSource.getNextAudioBlock (cinfo);
}

Great. I got the processor to play without an AudioDeviceManager by using your example, and I also removed AudioSourcePlayer from the code. However, you did not mention how to actually “hit play” with your setup, so after a few tries I found out that calling AudioOutEngine::loadURLIntoTransport(URL) does the job nicely. Should I just leave it like this, or is there a more efficient technique? It feels like I shouldn’t be loading the same URL over and over again to play the same file, but transportSource.start() will not stream the audio to PluginProcessor::processBlock in this new setup.

Thanks a lot for your time and help.

Maybe just seek to the beginning of the file with AudioTransportSource::setPosition (double newPosition)? Note that it might not be thread-safe to do that from the GUI or some other thread while the audio is playing, without synchronization. (So you would need to use a mutex/CriticalSection or something like that.)
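A sketch of what that could look like, assuming a hypothetical CriticalSection member named lock that the audio callback also takes before touching the transport:

```cpp
// Called from the GUI thread to replay the already-loaded file.
void AudioOutEngine::restart()
{
    const juce::ScopedLock sl (lock);  // guard against the audio thread
    transportSource.setPosition (0.0); // seek back to the start of the file
    transportSource.start();
}
```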

I tried adding setPosition(0) before start(), but it did not work. The file only gets played when transportSource.setSource(currentAudioFileSource.get(),...) is called inside loadURLIntoTransport(URL). As I said, the program is now working, so if there isn’t a better solution I could just move on, but I do feel there is a “Start” option hiding in here somewhere.

When I scrolled all the way up, I read that you want to build a drum machine. In this case I would advise against using an AudioTransportSource, because it is designed to connect actions from the message thread (start() and stop()) to the audio thread. However, the message thread and the audio thread live in different dimensions. The message thread always runs in user time (if I call it real time it becomes messy; the literature calls it wall clock), whereas the audio thread has its own time continuum. It can be in sync with the wall clock (playback with zero latency), but most likely it is not, because:

  • latency could occur, the host will try to compensate
  • each processBlock happens only at certain times (discretises events to block size, which can be up to 20ms easily)
  • could happen in offline render mode

That means:

  • you will have to do synchronisation against the playhead
  • you have to consider the position of your sample inside each AudioBuffer of each processBlock call
  • make sure you don’t use asynchronous audio sources (the AudioTransportSource can use a background thread in certain configurations)
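The per-block bookkeeping in the second point boils down to simple sample arithmetic. Here is a plain C++ sketch (no JUCE needed for this part; the function name is made up for illustration):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Given the host-timeline sample position at which the current block starts,
// return the offset inside the block at which an event scheduled at the
// absolute sample position eventPos should fire, or nothing if the event
// does not fall inside this block.
std::optional<int> offsetInBlock (std::int64_t blockStart, int blockSize,
                                  std::int64_t eventPos)
{
    if (eventPos < blockStart || eventPos >= blockStart + blockSize)
        return std::nullopt;
    return static_cast<int> (eventPos - blockStart);
}
```

Each processBlock call you would advance blockStart by the block size (or read it from the host playhead) and trigger every sample whose offset lands inside the block.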

I am sorry, this probably raised more questions for you than it solved, but it seemed you were headed into a dead-end street.

I thought I remember somebody with a similar problem recently, but I can’t find a thread ad hoc. I think this would be worth a tutorial, but I am afraid, we don’t have anyone in house currently assigned for new tutorials.

Well, I can’t say that I am happy with the thought of reprogramming the whole thing just when I thought I had it wired correctly, but it’s better to know these things now, before I add more complexity to the project. Also, if you say that this issue could be worth a tutorial, it makes me feel better about not being able to figure it out from the existing documentation. I also searched the forum using some keywords to find examples of this kind of plugin but couldn’t find an exact match.
The real problem is that I have to start from scratch and that there is no suitable demo project to “copy” from. I need some technical instructions, something like “Start with template X, use object Y to stream the audio, process the signal in function Z”.
If anyone is interested in giving some basic clues, I will just repeat the specifications again:

  1. The project should be a Standalone and a Plugin.
  2. The project should have midi in and out, audio out, and no audio in.
  3. Need to play audio on demand.
  4. Need to (DSP) process separate audio channels.

I would suggest creating it as a Synthesiser using the SamplerSound class for each drum. Many of your problems are solved there, e.g. synchronising with MIDI and knowing where to start inside each block.
There is a tutorial for Synthesisers in general; maybe that’s a good place to start.
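A sketch of that setup, assuming JUCE’s Synthesiser, SamplerVoice and SamplerSound classes (the file, note number and envelope values below are placeholders):

```cpp
juce::Synthesiser synth;

void setupSampler (juce::AudioFormatManager& formatManager, const juce::File& kickFile)
{
    synth.addVoice (new juce::SamplerVoice()); // one voice per simultaneous note

    std::unique_ptr<juce::AudioFormatReader> reader (formatManager.createReaderFor (kickFile));
    if (reader != nullptr)
    {
        juce::BigInteger notes;
        notes.setBit (36); // trigger from MIDI note 36 (GM kick)

        // SamplerSound copies the sample data out of the reader into memory.
        synth.addSound (new juce::SamplerSound ("kick", *reader, notes,
                                                36,     // root note
                                                0.0,    // attack (seconds)
                                                0.1,    // release (seconds)
                                                5.0));  // max sample length (seconds)
    }
}
```

You would then call synth.setCurrentPlaybackSampleRate() in prepareToPlay() and synth.renderNextBlock() in processBlock(), feeding it the incoming MidiBuffer.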

The part where you create your rhythm patterns is something you have to design yourself. By default it would simply play MIDI from the host, but you probably want to create the patterns inside your plugin, I assume?
Especially considering that the StandalonePluginHolder has no notion of time, i.e. no PlayHead and no transport controls. So you will probably want to add start and stop buttons and synchronise to the progress of the processBlock() calls.

It’s obviously impossible for Roli to provide example code for every possible type of thing people might want to do. And the thing you are attempting to do here is quite complicated.

I apologize on my part for not recalling your initial post where you mentioned this is supposed to be a drum machine. Trying to stream samples from disk for that with the JUCE-provided classes is not going to work easily, if at all. To get it working without too much effort, you need to load the samples fully into memory buffers. As Daniel suggested above, the JUCE Synthesiser-related classes could be a starting point.

It’s obviously impossible for Roli to provide example code for every possible type of thing people might want to do

Right you are. I was really sure that I was aiming at a basic level here. If you say that this is getting complicated, I’ll take it as a compliment! Anyway, I will dig into the Synthesiser classes and rebuild my engine from there.

Not sure where you got the idea that a multisample sampler with internal sequencing would be a basic thing…

By the way, even the JUCE Synthesiser classes don’t directly solve getting the individual samples/tracks processed with different effects. You may need to resort to having multiple instances of the Synthesiser, or make your own sampler sound and voice subclasses that can route to multiple channels. The JUCE Synthesiser and the provided basic sampler classes only mix the voices into a single stereo output.
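One way to sketch the multiple-instances idea, assuming one Synthesiser per drum (the scratch buffer and the simple additive mixing are just one possible design, not the only one):

```cpp
// Render each drum's synth into a scratch buffer, apply that drum's effects,
// then mix the result into the plugin's output buffer.
void renderDrums (juce::AudioBuffer<float>& output, const juce::MidiBuffer& midi,
                  juce::OwnedArray<juce::Synthesiser>& drums,
                  juce::AudioBuffer<float>& scratch)
{
    const int numSamples = output.getNumSamples();

    for (auto* synth : drums)
    {
        scratch.clear();
        synth->renderNextBlock (scratch, midi, 0, numSamples);

        // ...apply this drum's effect chain to 'scratch' here...

        for (int ch = 0; ch < output.getNumChannels(); ++ch)
            output.addFrom (ch, 0, scratch, ch, 0, numSamples);
    }
}
```

The scratch buffer should be allocated once in prepareToPlay() rather than inside processBlock(), to keep the audio thread free of allocations.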

I “solved” this by putting the effects chains in a custom SamplerVoice class, and having SamplerSound structs for each sample with a set of properties to hand to the effects chain in the voice. That tells the voice which effects settings to use when processing. This is definitely not an easy thing to do in an efficient way!