How Reaper and other DAWs work with 32 bit float bit depth

Hello,

I’m new to plugins and I want to build one whose algorithm requires the input audio to be 32-bit float. I’m wondering how Reaper and other DAWs deliver the input audio to my plugin. From some searching I found that Reaper can record directly in 32-bit float and actually does all its processing in 64-bit float, but I haven’t found a specification of how the audio tracks are passed to the plugin. Could someone suggest good reference material on this topic?

Basically I want to know whether my tracks recorded in 32-bit float are passed as-is to my plugin, and whether there’s a built-in Reaper/DAW function that would convert, for example, a 24-bit track into 32-bit float so that my plugin could process it.

I appreciate any help or suggestion.

I don’t know about Reaper specifically, but practically all modern DAWs work in floating point nowadays, defaulting to 32-bit float.

Your plugin can tell the host that it can also process 64-bit floating point, a.k.a. double.
This usually means your plugin has an implementation for each case, often done with templates, so you write the code once and it works for both float and double.
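For illustration, the “write it once as a template” idea can look like this. applyGain here is a made-up DSP routine operating on raw channel pointers, not JUCE API; the point is just that a single template instantiates for both float and double:

```cpp
#include <cstddef>

// One templated implementation covers both sample precisions.
// applyGain is a hypothetical example routine, not part of any framework.
template <typename SampleType>
void applyGain (SampleType* const* channels, int numChannels,
                int numSamples, SampleType gain)
{
    for (int ch = 0; ch < numChannels; ++ch)
        for (int i = 0; i < numSamples; ++i)
            channels[ch][i] *= gain;
}
```

Your float and double processBlock overloads can then both forward into the same template, so the DSP code is maintained in one place.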

How the host stores the data is not your concern; what matters is only the format in which it is passed into your plugin.

You have to override AudioProcessor::supportsDoublePrecisionProcessing() to return true and implement both variants of processBlock():

void AudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages) override;

void AudioProcessor::processBlock (AudioBuffer<double>& buffer, MidiBuffer& midiMessages) override;

// juce namespace left out for brevity

Hi Daniel, many thanks for the clarification and the example code! Does that mean that if I have a track recorded in 16-bit fixed point, the DAW will send it to the plugin as 32-bit float?

I would like to learn more about the interaction between DAWs and plugins, but I’m not sure of the best place to start, as it might also be DAW-dependent. I took a look at the VST 3 standard documentation but I haven’t found this specified there.

Yes, that is the case.
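Conceptually, the host’s conversion from 16-bit integer samples to float is just a scale into the normalized range. A minimal sketch of what a host does internally (not actual Reaper code; the divisor convention of 32768 vs. 32767 varies slightly between implementations):

```cpp
#include <cstdint>

// Map a signed 16-bit PCM sample into the normalized float range.
// Dividing by 32768 maps the full int16 range onto [-1.0, 1.0).
inline float int16ToFloat (std::int16_t s)
{
    return static_cast<float> (s) / 32768.0f;
}
```

Because every 16-bit (and 24-bit) integer value is exactly representable in 32-bit float, this conversion is lossless, which is why the plugin never needs to know the track’s original bit depth.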

You can get quite far relying on JUCE’s abstraction before you need to look into the different plugin APIs, so my two pennies’ worth: don’t get hung up on the lower level.

(OT: muscle memory is weird, I just pressed CMD+Option+L instead of enter in the browser. Weirdly it didn’t reformat my post :joy:)


In VST3, the plugin implements IAudioProcessor.

The host calls IAudioProcessor::setupProcessing prior to playback. One of the fields in ProcessSetup is symbolicSampleSize which is either kSample32 or kSample64 for 32 or 64 bit float precision.

During playback the host calls IAudioProcessor::process with AudioBusBuffers for input and output. The sample data is a union of either floats or doubles, based on the value that was passed to setupProcessing.

JUCE handles the dispatch to your processBlock<float> and/or processBlock<double> implementations in your AudioProcessor.

tl;dr: in VST3, all audio processing is done in 32- or 64-bit floats. For a host to process audio with a VST3 plugin, it must convert whatever sample type its audio tracks use into floating point. Plugins don’t care about this; it’s the host’s job.


Many thanks I really appreciate your clarification!