I"m currently learning how to use JUCE and I wanted to use the audio recording functionality found in AudioRecordingDemo.cpp in a plugin environment, however I’m struggling to figure out how I would port this over.
When creating a new plugin project with Projucer, in the processBlock() there is the main for loop:
```cpp
// This is the place where you'd normally do the guts of your plugin's
// audio processing...
for (int channel = 0; channel < totalNumInputChannels; ++channel)
{
    float* channelData = buffer.getWritePointer (channel);
    // ..do something to the data...
}
```
Within the example, the closest thing I can find is:
```cpp
void audioDeviceIOCallback (const float** inputChannelData, int /*numInputChannels*/,
                            float** outputChannelData, int numOutputChannels,
                            int numSamples) override
{
    const ScopedLock sl (writerLock);

    if (activeWriter != nullptr)
    {
        activeWriter->write (inputChannelData, numSamples);

        // Create an AudioSampleBuffer to wrap our incoming data; note that this
        // does no allocations or copies, it simply references our input data
        const AudioSampleBuffer buffer (const_cast<float**> (inputChannelData),
                                        thumbnail.getNumChannels(), numSamples);
        thumbnail.addBlock (nextSampleNum, buffer, 0, numSamples);
        nextSampleNum += numSamples;
    }

    // We need to clear the output buffers, in case they're full of junk..
    for (int i = 0; i < numOutputChannels; ++i)
        if (outputChannelData[i] != nullptr)
            FloatVectorOperations::clear (outputChannelData[i], numSamples);
}
```
It looks like in the AudioRecordingDemo, the activeWriter->write (inputChannelData, numSamples); call is what writes the audio to the file. But there is a numSamples parameter that I don't think is exposed in the plugin interface.
Any pointers in the right direction would be really appreciated. Ultimately I just want to take the audio coming into the plugin and write it to a file (i.e. bounce the audio).
Many people come here and ask this, because it seems like the first thing to learn. But actually, a plugin is not meant to write or read audio files; the host does that.
The plugin receives audio, modifies it and sends it back into the track’s processing chain.
You can do that, but it is actually relatively hard, because you have to synchronise non-realtime processes like reading from or writing to disk with the realtime audio thread. It's not the first thing worth learning…
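To make that synchronisation point concrete, here is a stripped-down plain-C++ sketch of the pattern (class and file names are hypothetical): the audio thread only copies samples into a queue, and a background thread drains the queue to disk. For brevity this uses a mutex, which is not truly realtime-safe; a production implementation, and JUCE's AudioFormatWriter::ThreadedWriter, would use a lock-free FIFO so the audio thread never blocks.

```cpp
#include <condition_variable>
#include <cstddef>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical minimal version of the pattern: the (simulated) audio thread
// only buffers samples; a background thread does all the file I/O.
class BackgroundSampleWriter
{
public:
    explicit BackgroundSampleWriter (const char* path)
        : out (std::fopen (path, "wb")),
          worker ([this] { drainLoop(); }) {}

    ~BackgroundSampleWriter()
    {
        { std::lock_guard<std::mutex> lk (m); running = false; }
        cv.notify_one();
        worker.join();          // the worker flushes any remaining samples first
        if (out) std::fclose (out);
    }

    // Called from the "audio" thread: no file I/O here, just a buffered copy.
    void write (const float* samples, std::size_t n)
    {
        std::lock_guard<std::mutex> lk (m);
        queue.insert (queue.end(), samples, samples + n);
        cv.notify_one();
    }

private:
    void drainLoop()
    {
        std::vector<float> local;
        for (;;)
        {
            {
                std::unique_lock<std::mutex> lk (m);
                cv.wait (lk, [this] { return ! queue.empty() || ! running; });
                if (queue.empty() && ! running)
                    return;     // shut down only once everything is flushed
                local.swap (queue);
            }
            // Disk access happens here, off the audio thread.
            if (out) std::fwrite (local.data(), sizeof (float), local.size(), out);
            local.clear();
        }
    }

    std::FILE* out;
    bool running = true;
    std::mutex m;
    std::condition_variable cv;
    std::vector<float> queue;
    std::thread worker;
};
```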
Thanks for the info, I’ll look into these two classes.
Yea as I’ve been digging more into plugins, I’ve been realizing how limited the capabilities are. Unfortunately, the use case I’m trying to develop requires bouncing a track into the plugin and sending it off to an external web service/API.
Like Daniel already mentioned, AudioFormatWriter::ThreadedWriter is the class to use. That will also need an AudioFormatWriter and a TimeSliceThread to be passed into its constructor. I would also reiterate that recording from within a plugin's code is not trivial, even with the helper classes JUCE provides.
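For concreteness, the setup in the AudioRecordingDemo looks roughly like this (adapted from the demo; in a plugin these would become members of your AudioProcessor, and startRecording would be called from the message thread, not from processBlock):

```cpp
// Members (e.g. in your AudioProcessor subclass):
TimeSliceThread backgroundThread { "Audio Recorder Thread" };      // does the actual disk writes
std::unique_ptr<AudioFormatWriter::ThreadedWriter> threadedWriter; // FIFO that buffers incoming audio
CriticalSection writerLock;
std::atomic<AudioFormatWriter::ThreadedWriter*> activeWriter { nullptr };

void startRecording (const File& file, double sampleRate, unsigned int numChannels)
{
    backgroundThread.startThread();

    if (auto fileStream = std::unique_ptr<FileOutputStream> (file.createOutputStream()))
    {
        WavAudioFormat wavFormat;

        if (auto writer = wavFormat.createWriterFor (fileStream.get(), sampleRate,
                                                     numChannels, 16, {}, 0))
        {
            fileStream.release(); // the writer now owns (and will delete) the stream

            // Wrap the writer in a ThreadedWriter, which buffers audio in a FIFO
            // and writes it out on the background thread.
            threadedWriter.reset (new AudioFormatWriter::ThreadedWriter (writer, backgroundThread, 32768));

            const ScopedLock sl (writerLock);
            activeWriter = threadedWriter.get();
        }
    }
}
```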
A user of a DAW would simply bounce a track, if he/she wants a recording done.
If you write from a plugin, you get audio material with no context whatsoever: no information about synchronisation or where to put it, so later in the process the recording is useless.
It is not, there are plugins like Melodyne, Autotune, GRM Freeze, my PaulXStretch and some others that capture the incoming audio for analysis and/or random access playback purposes. (The idea isn’t to record on behalf of the DAW application but for the purposes of the plugin itself.)
Ok, fair point. But they do a lot behind the scenes to acquire that information and don’t hand out that recording to the user.
That’s why I asked about the use case, there might be a reason I am not aware, but simply for recording the output of a plugin nobody needs a plugin, just bounce the track, that is the clean solution.
Tell that to Melda Production who have a dedicated plugin for doing just that. It can be a useful tool for some situations, for example if one wants to record the output of some track (including the master) live if the DAW doesn’t provide a suitable “live record what you hear” feature itself.
@daniel, this actually sounds like what I need.
The idea is to:
- Record the input audio stream (pre-fader) and store it in a folder.
- Read and play back the recorded wav file during silences, detected via a noise-gate-like loudness threshold.
In other words, I don't need any synchronisation info, just the audio data. I would disable this during DAW playback/record, as I imagine it would be CPU-intensive. Does this sound more feasible?
Would it be feasible for you to capture into a memory buffer? It would simplify things somewhat. You would need a predetermined maximum length for the capture, though, because it's not a great idea to grow buffers during playback. Also, if you need the captured sound to be retained between quitting and restarting the DAW application, it gets complicated again: you would need to save into a file on disk anyway, or take the risk that storing the audio in the DAW's project state works OK.
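The fixed-maximum-length idea can be sketched in plain C++ like this (class name hypothetical): the storage is allocated up front on the message thread, so appending a block from the audio thread never allocates, and once the capacity is reached further samples are simply dropped.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical fixed-capacity capture buffer: preallocated, so append()
// is allocation-free and safe to call from the audio thread.
class FixedCaptureBuffer
{
public:
    explicit FixedCaptureBuffer (std::size_t maxSamples)
        : storage (maxSamples), used (0) {}

    // Copies as much of the block as fits; excess samples are dropped.
    void append (const float* samples, std::size_t n)
    {
        std::size_t space  = storage.size() - used;
        std::size_t toCopy = n < space ? n : space;

        for (std::size_t i = 0; i < toCopy; ++i)
            storage[used + i] = samples[i];

        used += toCopy;
    }

    std::size_t size() const  { return used; }
    const float* data() const { return storage.data(); }

private:
    std::vector<float> storage;
    std::size_t used;
};
```

(For a multichannel plugin you would keep one such buffer per channel, or interleave; and if you later want random-access playback while still recording, a circular buffer is the usual next step.)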
Ideally I would save to disk so I can recall files after closing out (not sure where exactly; I'll cross that bridge when I come to it).
I can definitely set a maximum length, no problem. I've spent the past few days trying to reverse engineer the AudioRecordingDemo to do this, but I'm a bit confused as to how to implement it in processBlock…
I’m taking @daniel’s old advice, looking into ThreadedWriter right now.
Also real quick: This is my first time posting in these forums, I’m incredibly grateful for your advice for a newbie like me!
Not considering any of the other details involved (how you initialise and stop the threaded writer safely, etc.), the code you would put into processBlock is very simple:
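Presumably something along these lines (a sketch, assuming an atomic activeWriter member guarded as in the AudioRecordingDemo; the numSamples the original poster was looking for is just buffer.getNumSamples()):

```cpp
void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
    const ScopedLock sl (writerLock);

    if (activeWriter.load() != nullptr)
        activeWriter.load()->write (buffer.getArrayOfReadPointers(),
                                    buffer.getNumSamples());

    // The audio passes through unchanged; numSamples in the demo's
    // audioDeviceIOCallback corresponds to buffer.getNumSamples() here.
}
```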