Method for writing AudioBuffer to a wav file

Hello,

I’m working on a plugin that does all its processing in double precision, as required by the algorithm. processBlock() stores the results in an AudioBuffer, and I want to output these results to a file for further processing. The number of output samples is numSamples/96, so only a few new samples are produced per processBlock() call. The new samples are then accumulated in a bigger AudioBuffer. I want to write these results into a wav file using AudioFormatWriter, but I’m not sure whether this is the easiest approach, so that’s my first question:

1. Question: should I write the results to a wav file, or is there a better method, like a text file?

I have to write all the results accumulated in the buffer for the whole stream of data, so I’m using a Timer to write the results to the wav file every 10 seconds. However, I also have another Timer attached to my Editor in order to update the GUI. And this is my second question:

2. Question: is it safe to run two Timers simultaneously, one attached to the Processor and the other attached to the Editor? I also wonder whether it’s better to store 10 s of results in a big AudioBuffer and write them to the wav file at once in timerCallback(), or to write the new results to the wav file directly in processBlock() every time it’s called, which I assume would not be thread safe.

I’m also struggling with the AudioFormatWriter class:

juce::AudioBuffer<float> mWavBuffer; // buffer to write the wav file
juce::AudioBuffer<double> mResultsBuffer; // buffer which stores the double precision (bit depth) results

juce::WavAudioFormat format;
std::unique_ptr<juce::AudioFormatWriter> writer;
juce::File resourceFile = juce::File::getCurrentWorkingDirectory().getChildFile ("Result.wav");

But I would like to set a specific directory for the file (I’m using Windows), like this:

juce::File resourceFile = ("C:\Users\user\Documents\AudioPlugins\Result.wav");

But then I get an error which says that the format is not right. Therefore:

3. Question: how should I write the location for the wav file?
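The error most likely comes from the unescaped backslashes: in a C++ string literal, a backslash starts an escape sequence (and `\U` in particular starts a Unicode escape), so the path never reaches juce::File intact. A minimal sketch of the two safe spellings, in plain C++ with hypothetical helper names:

```cpp
#include <string>

// Either escape every backslash, or use a raw string literal (C++11).
// Both produce the exact same character sequence.
std::string escapedPath() { return "C:\\Users\\user\\Documents\\AudioPlugins\\Result.wav"; }
std::string rawPath()     { return R"(C:\Users\user\Documents\AudioPlugins\Result.wav)"; }
```

Either spelling can then be passed to the juce::File constructor.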

When I set the file location with getCurrentWorkingDirectory(), the wav file is created only when I run the standalone version of the plugin; if I run the VST3 version in Reaper, no wav file is created in the VST3 directory.

4. Question: Am I restricted by Windows writing permissions when running the VST3 through Reaper?

Another problem is that the method writeFromAudioSampleBuffer() accepts only float AudioBuffers, so I’m trying to convert the double buffer holding the results to a float buffer before writing the wav file:

void AudioProcessor::timerCallback()
{
    mWavBuffer.makeCopyOf (mResultsBuffer, false);

    if (writer != nullptr)
        writer->writeFromAudioSampleBuffer (mWavBuffer, 0, mResultsBuffer.getNumSamples());
}

I’m also not sure if this type conversion is safe and how it could be avoided.

I would also like to point out that I found lots of topics related to output-file issues, but none of them addressed all my questions, so I decided to start this new one :slight_smile:

I appreciate any help/suggestion and I thank you all.

I think there is a misunderstanding. The number of samples is always the same determined by the host.

If you want double processing, your plugin has to advertise it by returning true from supportsDoublePrecisionProcessing() and implement processBlock with the AudioBuffer<double> as argument.

If the host doesn’t support double precision, it will not call that method.

You can still convert the single precision float buffer to double, if you think it benefits your algorithm.

About Timers: they are like a wall clock, and since the plugin processing speed is independent from the wall clock, the Timer has no meaning in processBlock. You need to count audio samples or use AudioPlayHead to determine the time.
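Counting samples as suggested can be sketched like this (plain C++, hypothetical SampleClock name; at a 48 kHz sample rate, a 10-second interval would be 480000 samples):

```cpp
// Counts processed samples to trigger a periodic action from the audio
// thread itself, instead of relying on a wall-clock Timer.
struct SampleClock
{
    explicit SampleClock (int samplesPerInterval) : interval (samplesPerInterval) {}

    // Call once per block with the block size; returns true each time the
    // interval has elapsed, carrying the remainder into the next interval.
    bool advance (int numSamples)
    {
        counter += numSamples;
        if (counter >= interval)
        {
            counter -= interval;
            return true;
        }
        return false;
    }

    int interval = 0;
    int counter  = 0;
};
```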

Hi Daniel,

thank you for your answer!

I think there is a misunderstanding. The number of samples is always the same determined by the host.

If you want double processing, your plugin has to advertise it by returning true from supportsDoublePrecisionProcessing() and implement processBlock with the AudioBuffer<double> as argument.

If the host doesn’t support double precision, it will not call that method.

You can still convert the single precision float buffer to double, if you think it benefits your algorithm.

I mean: the number of samples I get after processing the numSamples (given by the host) each time processBlock() is called is that numSamples divided by a fixed factor of 96 (downsampling).

I did implement the double version of processBlock() and made supportsDoublePrecisionProcessing() return true, as you suggested in my other topic. So my results are stored in a double AudioBuffer, but the wav writer function writeFromAudioSampleBuffer() takes only float AudioBuffers as input, so in the end I’m converting double → float, only for visualization purposes. I know this conversion shouldn’t make sense, but processing the algorithm in double precision is not my choice.

About Timers: they are like a wall clock, and since the plugin processing speed is independent from the wall clock, the Timer has no meaning in processBlock. You need to count audio samples or use AudioPlayHead to determine the time.

I do not necessarily need to determine the time with the Timer. I used this timed approach only to avoid writing a small piece of the wav file every time processBlock() is called. But in the end I’m not sure which is worse, as with the Timer I’m writing much more data at once, and there’s still this double → float conversion happening there.

Ah, you were talking about oversampling. That is something different from double precision processing.
With double precision people usually understand 64 bit floating point numbers, which some hosts support.

For oversampling you need to do the upsampling at the beginning as well as the downsampling at the end.
The number you receive is always the same as the number you return.

It only matters for your writing, as you seem to want to write the oversampled signal. Here you get more samples, not less.

Hi Daniel, thank you for your answer again!

No, I’m not talking about oversampling. I’m really talking about 64-bit floating point numbers, i.e. bit depth, the number of bits per sample. This is why I set the plugin to process in double precision and implemented both versions of processBlock(). You helped me with this setup in my other topic; I think you forgot xD I work with an algorithm that processes in double precision, and unfortunately this is not my choice. I cannot change the algorithm to process in float precision.

I’m not performing oversampling, only downsampling, by a fixed decimation factor of 96. For example, with 512 samples per block, each time processBlock() is called I get approximately 512/96 ≈ 5 new double-precision result samples stored in my AudioBuffer. I want to write these samples to a wav file, but it seems that AudioFormatWriter accepts only float AudioBuffers, which is why I’m converting double → float, just to visualize the data for now. These results are not audio, only data to be visualized; I think in the audio-plugin world this would be called an audio analyser. The results of processBlock() must still be stored in double precision.

Is it possible to avoid writing the (downsampled) data to disk? If the data is only for visualization purposes, I would guess double buffering would be perfect for your case.

Hello, thank you for your answer! Sorry, but I do not know what “double buffering” means. I’m using the term “visualize” because the data is not audio, but I still have to store it in a file (not necessarily .wav) for further processing. I would describe my implementation as a “real-time analyser” that shows the currently processed results in the GUI (this already works fine) but also outputs a file with all the current and past results for the whole duration of the input file.

Got it. I mention double buffering because I assume your plugin works as follows:

  • the audio thread writes downsampled data to mResultsBuffer
  • the timer thread saves mResultsBuffer to disk

In that case you may consider double buffering, since you are sharing a large piece of data between a real-time thread and a message thread. See https://www.youtube.com/watch?v=Q0vrQFyAdWI
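A minimal sketch of the idea (plain C++, hypothetical names; assumes a single producer and a single consumer, and glosses over the pre-allocation and memory-ordering details that the talk linked above covers properly):

```cpp
#include <array>
#include <atomic>
#include <vector>

// Two buffers: the producer fills the "front" one while the consumer reads
// the "back" one, so the two threads never touch the same buffer at once.
struct DoubleBuffer
{
    // Producer side (audio thread in this scenario). NB: push_back can
    // allocate; a real-time implementation would reserve capacity up front.
    void push (double sample)
    {
        buffers[writeIndex.load()].push_back (sample);
    }

    // Consumer side (timer thread): flip which buffer the producer writes
    // to, then return the one that was just filled.
    std::vector<double>& swapAndGetFilled()
    {
        const int old = writeIndex.exchange (1 - writeIndex.load());
        return buffers[old];
    }

    std::array<std::vector<double>, 2> buffers;
    std::atomic<int> writeIndex { 0 };
};
```

The consumer would write the returned buffer to disk and then clear it before the next swap.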

If you want a specific path for the file, the code may look like this:

juce::File resourceFile { R"(C:\Users\user\Documents\AudioPlugins\Result.wav)" };

But it will be problematic if you run multiple instances of this plugin.


You really got it. I will take a look at the video.

I also have the option of decreasing the size of mResultsBuffer to, let’s say, 1 second of processed data, and then increasing the frequency of timerCallback(). I do not know which would be better: using a bigger mResultsBuffer and writing a large chunk to disk at once in a single timerCallback(), or decreasing the size of mResultsBuffer and increasing the Timer frequency, so that I write smaller pieces of data to disk across multiple timerCallbacks.