I’m experimenting with using the AudioProcessorGraph to render files: that is, take one or more input files, process them through an AudioProcessorGraph, and then write the output to one or more output files.
My thought was to create an AudioProcessorGraph, load a filter graph with AudioProcessorGraph::loadFrom, and then call AudioProcessorGraph::processBlock while providing input samples from a buffer (read from the file).
{
    juce::AudioProcessorGraph graph;

    // Load the filter graph saved by AudioPluginHost.
    graph.loadFrom (fileToOpen, true);

    graph.setPlayConfigDetails (2, 2, 44100, 512);
    graph.setNonRealtime (true);   // set before prepareToPlay so the plugins see the offline flag
    graph.prepareToPlay (44100, 512);

    // The buffer must match the 2-in/2-out channel configuration above.
    juce::AudioBuffer<float> audio_buffers;
    audio_buffers.setSize (2, 512);

    juce::MidiBuffer midi_buffers;

    for (int32_t currentSample = 0; currentSample < 220000; currentSample += 512)
    {
        /*
            Read data into the audio_buffers buffer
        */

        graph.processBlock (audio_buffers, midi_buffers);

        /*
            Write data from the audio_buffers to a file
        */
    }
}
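To make the read/write placeholders concrete, here is a minimal sketch of the surrounding file I/O, assuming a WAV input and output; inputFile and outputFile are illustrative names, and the writer settings (44.1 kHz, stereo, 16-bit) are assumptions:

    juce::AudioFormatManager formatManager;
    formatManager.registerBasicFormats();

    // inputFile and outputFile are hypothetical juce::File objects.
    std::unique_ptr<juce::AudioFormatReader> reader (formatManager.createReaderFor (inputFile));

    juce::WavAudioFormat wavFormat;
    // The writer takes ownership of the output stream if creation succeeds.
    std::unique_ptr<juce::AudioFormatWriter> writer (
        wavFormat.createWriterFor (new juce::FileOutputStream (outputFile),
                                   44100.0, 2, 16, {}, 0));

    if (reader != nullptr && writer != nullptr)
    {
        juce::AudioBuffer<float> buffer (2, 512);
        juce::MidiBuffer midi;

        for (juce::int64 pos = 0; pos < reader->lengthInSamples; pos += 512)
        {
            auto numSamples = (int) juce::jmin ((juce::int64) 512,
                                                reader->lengthInSamples - pos);
            buffer.clear();
            reader->read (&buffer, 0, numSamples, pos, true, true);

            graph.processBlock (buffer, midi);

            writer->writeFromAudioSampleBuffer (buffer, 0, numSamples);
        }
    }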
What I’m finding is that the input buffer I pass into AudioProcessorGraph::processBlock never gets processed through to the plugins (I’m using the demo gain plugin to check this), and there is no audio in my buffer after AudioProcessorGraph::processBlock returns.
The filter graph was created using the AudioPluginHost application and only contains a gain plugin.
I’ve seen posts about using a graph in a plugin and calling graph.processBlock directly. I would have thought this would work in my simple example.
Is there something required in the configuration of the AudioProcessorGraph to get it to process the data coming into the processBlock?
Did you check that the plugins actually get instantiated in the graph and that connections are set between them? I don’t think you can use the AudioPluginHost project files directly as the state of an AudioProcessorGraph.
Yes, the plugins are getting instantiated (addNode is being called for them) and connections are being set between them.
I’m using the AudioPluginHost project files as state of the AudioProcessorGraph.
A question in my mind: if the graph state (filter graph) doesn’t have inputs or outputs assigned, does calling AudioProcessorGraph::processBlock feed the first plugin’s inputs directly, or are those inputs left “dangling”?
What do you mean by special input and output nodes?
Are these derived from a specific type?
Or are they just AudioProcessors with only outputs (for my source files) or only inputs (for my destination files)?
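For what it’s worth, the “special” nodes in JUCE are AudioProcessorGraph::AudioGraphIOProcessor instances, not something you derive yourself. A minimal sketch of wiring one of each around the plugin, assuming a recent JUCE version and that gainNode is the Node::Ptr returned by addNode for the gain plugin:

    using IOProcessor = juce::AudioProcessorGraph::AudioGraphIOProcessor;

    // Audio enters and leaves the graph through these built-in I/O nodes;
    // without them (and connections to them), processBlock has nowhere to
    // route the incoming buffer.
    auto inputNode  = graph.addNode (std::make_unique<IOProcessor> (IOProcessor::audioInputNode));
    auto outputNode = graph.addNode (std::make_unique<IOProcessor> (IOProcessor::audioOutputNode));

    for (int ch = 0; ch < 2; ++ch)
    {
        graph.addConnection ({ { inputNode->nodeID, ch }, { gainNode->nodeID,   ch } });
        graph.addConnection ({ { gainNode->nodeID,  ch }, { outputNode->nodeID, ch } });
    }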
If you process offline, make sure any pre-buffering sources are set to blocking mode. Otherwise the graph pulls samples faster than the buffering audio source can provide them, and it will deliver empty buffers or worse.
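For example, if the source is a juce::BufferingAudioSource, one way to get blocking behaviour offline is to wait until the background thread has actually filled the next block before pulling it; a sketch, where bufferingSource and audio_buffers are assumed members:

    juce::AudioSourceChannelInfo info (&audio_buffers, 0, audio_buffers.getNumSamples());

    // Block (up to 500 ms here) until the buffering thread has the samples
    // ready, instead of letting an offline render outrun it and read silence.
    if (bufferingSource.waitForNextAudioBlockReady (info, 500))
        bufferingSource.getNextAudioBlock (info);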
My situation is that I have a class inheriting from AudioAppComponent, which may or may not be the right choice. I want to be able to render audio either in real time or non-real time.
For now, I call graph->setNonRealtime(true);
Once the graph is set up and I want to render in non-real time, I call this->setAudioChannels(2, 2);
This leads to my audio app component’s getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill), in which I call graph.processBlock (audio_buffers, midi_buffers).
However, AudioProcessorGraph::processBlock causes some trouble: handleAsyncUpdate() is called, and then it sleeps forever in the lines below.
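For reference, a minimal sketch of the getNextAudioBlock forwarding described above, assuming graph is a member. processBlock expects the audio to start at sample 0, so the region that bufferToFill describes is wrapped rather than passing the underlying buffer wholesale:

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        // Wrap just the region bufferToFill describes, since processBlock
        // assumes the block starts at sample 0.
        juce::AudioBuffer<float> block (bufferToFill.buffer->getArrayOfWritePointers(),
                                        bufferToFill.buffer->getNumChannels(),
                                        bufferToFill.startSample,
                                        bufferToFill.numSamples);
        juce::MidiBuffer midi;
        graph.processBlock (block, midi);
    }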