Is a JUCE-powered VST always "on" in a DAW?

I use Ableton, and recently programmed my first VST3 using JUCE (super simple delay)! So cool. I’ve always wanted to do this.

Once, I didn’t zero out the buffer of floats I was using to hold the delay line, and upon loading the VST3 into Ableton (playback wasn’t running), the thing made a ton of noise and screeched horribly.

I’ve since fixed it by zeroing out the data, but what I don’t understand is why Live was pushing samples through my VST before playback started. I know that Live’s engine (and other DAWs’ engines) keeps running in case there is a feedback or reverb tail, but that’s after playback stops, not before it starts.

I’m just trying to understand how VSTs interact with DAWs like Ableton, and when the DAW host decides to start “pushing” audio data through.

As far as I know, there is no standard for host behaviour regarding whether or not audio buffers are run through a track’s audio chain.
The only somewhat related part of the plugin standards is the bypass status, where the host should call a specific plugin callback depending on whether the user has disabled the plugin or not.

The right thing for a DAW to do, IMHO, is to run audio:

  • if playing back, through all tracks that are enabled
  • if not playing back, through all tracks that are armed for recording and therefore need monitoring.
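That policy can be sketched as a simple predicate. This is purely illustrative (the struct and function names are made up, not any real host's API):

```cpp
#include <cassert>

// Hypothetical host-side track state (not a real DAW API).
struct Track
{
    bool enabled = true;            // track is active (not deactivated)
    bool armedForRecording = false; // input monitoring is needed
};

// Decide whether the host should push audio through this track's
// plugin chain, following the two bullet points above.
bool shouldProcessTrack (const Track& track, bool transportPlaying)
{
    if (transportPlaying)
        return track.enabled;                        // playing: all enabled tracks

    return track.enabled && track.armedForRecording; // stopped: only monitored tracks
}
```

A host that works this way would leave a plugin on a non-armed track completely idle while the transport is stopped.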

I think that Ableton is pretty optimised and should do that, but honestly I’m not 100% sure.

Makes sense as far as what it should do. Empirically, though, I know that a processBlock() that feeds garbage (un-zeroed) memory into the playback buffer does in fact play garbage when the plugin is loaded into a fresh Live 10 (latest version) set that has never had playback enabled.

Perhaps the Live forums are a better place to ask this?

The VST3 standard supports this via the Silence Flag

Processing can optionally be applied to Plug-ins only when audio signals are present on their respective inputs, so VST 3 Plug-ins can apply their processing economically and only when it is needed.
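As a self-contained sketch of how that works: in VST3, each bus carries a 64-bit silence-flag word with one bit per channel. The struct below is a stand-in for the real Steinberg::Vst::AudioBusBuffers (whose silenceFlags member behaves this way), so the example compiles without the VST3 SDK:

```cpp
#include <cstdint>

// Stand-in for Steinberg::Vst::AudioBusBuffers: bit n of silenceFlags
// set means channel n of this bus contains only silence.
struct AudioBusBuffers
{
    int32_t  numChannels  = 0;
    uint64_t silenceFlags = 0;
};

// True when every channel of the bus is flagged silent, i.e. the
// plugin may economise and skip its DSP for this block.
bool busIsSilent (const AudioBusBuffers& bus)
{
    if (bus.numChannels <= 0)
        return true;

    const uint64_t allChannels = (bus.numChannels >= 64)
                                     ? ~uint64_t (0)
                                     : ((uint64_t (1) << bus.numChannels) - 1);

    return (bus.silenceFlags & allChannels) == allChannels;
}
```

Note that a delay with a ringing tail must keep processing even while its input is silent, which is why the flag is only a hint and the plugin itself decides whether skipping is safe.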

Audio Units support this via the kAudioUnitRenderAction_OutputIsSilence flag.

This flag can be set in a render input callback (or in the audio unit’s render operation itself) and is used to indicate that the render buffer contains only silence. It can then be used by the caller as a hint to whether the buffer needs to be processed or not.

The SynthEdit API supports this through the isStreaming flag on audio signals, and supports removing the plugin from the audio processing chain (until audio arrives) via the ‘setSleep()’ method.

So it’s a feature JUCE hopefully will support too.

The reason you heard a ton of noise was that the buffer holding the delay line wasn’t zeroed out (as you worked out already, sorry for repeating what you already know!). It just so happened that on that occasion your buffer occupied memory that had previously been used for something else, probably not even float data, so when it was interpreted as float data you heard a god-awful noise. So it wasn’t Live pushing samples through before playback started, rather just (bad) luck that you were allocated a section of memory that wasn’t already empty that one time.
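A minimal illustration of the fix, in plain C++ rather than JUCE (the DelayLine class here is a made-up example, not the poster's actual plugin code): std::vector value-initializes its elements, so the delay line starts silent, whereas raw `new float[n]` storage holds whatever leftover bytes happen to be there and can decode to loud garbage floats. In a JUCE plugin the natural place to do this zeroing is prepareToPlay().

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A delay line that is guaranteed to start silent: std::vector
// zero-initializes every sample. (By contrast, `new float[n]`
// leaves the samples uninitialized.)
struct DelayLine
{
    explicit DelayLine (std::size_t lengthInSamples)
        : buffer (lengthInSamples, 0.0f) {}

    // Re-zero when the host prepares playback again, so a stale tail
    // never leaks into a fresh session.
    void reset() { std::fill (buffer.begin(), buffer.end(), 0.0f); }

    // Simple circular-buffer delay: returns the sample written
    // `lengthInSamples` calls ago.
    float processSample (float input)
    {
        const float delayed = buffer[writeIndex];
        buffer[writeIndex] = input;
        writeIndex = (writeIndex + 1) % buffer.size();
        return delayed;
    }

    std::vector<float> buffer;
    std::size_t writeIndex = 0;
};
```

With this setup the first block of output is silence instead of reinterpreted garbage, even if the host runs processBlock() before playback starts.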

It differs by DAW, and by each DAW’s settings. Cubase, for example, has a setting “Suspend VST 3 plug-in processing when no audio signals are received”. I use that when debugging and don’t want processing to take place until I start the transport.