Sending audio data from plugin -> standalone app

I’m trying to make a plugin that captures audio from the DAW and sends it to a standalone JUCE application, which then records it or plays it back in real time.

I have it “almost” working, but I’m not sure this is the correct approach. To summarize: I use the InterprocessConnection class to capture the audio buffer at the end of the plugin’s processBlock and send it to the standalone app, which then takes the buffer and plays it in its own processBlock method.

In the plugin, I do the following at the end of the processBlock method:

    WavAudioFormat format;
    MemoryBlock memBlock;
    {
        std::unique_ptr<AudioFormatWriter> writer (format.createWriterFor (new MemoryOutputStream (memBlock, false), 48000, 1, 16, {}, 0));
        if (writer != nullptr) // createWriterFor can return nullptr
            writer->writeFromAudioSampleBuffer (buffer, 0, buffer.getNumSamples());
    }
    processConnection->sendMessage (memBlock);

In the standalone app, when I receive a message I place it in a vector of memory blocks; then in the next iteration of processBlock I convert the message back into an AudioBuffer and add it to the output buffer.

Converting the MemoryBlock back to an AudioBuffer:

AudioBuffer<float> BcProcessConnection::getNextProcessBuffer()
{
    AudioBuffer<float> buffer;

    if (! memoryBlocks.empty())
    {
        WavAudioFormat wavFormat;
        std::unique_ptr<AudioFormatReader> reader (wavFormat.createReaderFor (
            new MemoryInputStream (memoryBlocks[0].getData(),
                                   memoryBlocks[0].getSize(),
                                   false),
            true));

        if (reader != nullptr)
        {
            auto numSamples = (int) reader->lengthInSamples; // lengthInSamples is int64
            buffer.setSize ((int) reader->numChannels, numSamples);
            reader->read (&buffer, 0, numSamples, 0, true, true);
        }

        memoryBlocks.erase (memoryBlocks.begin());
    }

    return buffer;
}

Adding the audio data to the main buffer:

        if (processConnection != nullptr)
        {
            AudioBuffer<float> buf = processConnection->getNextProcessBuffer();

            if (buf.getNumSamples() > 0)
            {
                // Guard against the received block being larger than the host's current block
                auto numToAdd = jmin (buffer.getNumSamples(), buf.getNumSamples());
                buffer.addFromWithRamp (0, 0, buf.getReadPointer (0), numToAdd, 1.0f, 1.0f);
            }
        }

I get audio and this works, but I can’t seem to process the messages fast enough: the samples play back in basically super slow motion and my vector of memory blocks keeps growing.

Is there a way to achieve what I’m trying to do, or am I on the wrong path here?

EDIT
I’m close, I can almost smell the finish line… I changed the InterprocessConnection to run on the audio thread, which greatly reduced the delay; there’s still a few milliseconds of delay, annoyingly, but it’s very close. I’m not sure how I could possibly sync up the clocks, though; I feel like I’ll always be one or two processBlocks behind. The relay plugin would need to wait for the host to process its message or something.

Just a few thoughts:

  • Looking at the code snippet that you put at the end of your processBlock function, I see a lot of calls that are allocating memory and are therefore definitely not realtime safe. To operate realtime-safe you’ll have to offload all that logic to another thread and push the sample blocks through some lock-free queue. Of course this won’t bring down your latency, but it will at least make sure that your plugin doesn’t cause audio dropouts when the system load is a bit higher or the block size is low.
  • Why are you converting your audio blocks into 16-bit WAV files? This seems like unnecessary overhead and will likely also reduce the audio quality quite a bit.
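On the second point, a minimal sketch of skipping the WAV encoding entirely and just copying the raw 32-bit samples into a block could look roughly like this (the helper names are made up, and std::vector<char> stands in for juce::MemoryBlock so the example compiles without JUCE):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Hypothetical helpers: serialise raw 32-bit floats instead of encoding a
// 16-bit WAV. No header, no quantisation, no AudioFormatWriter allocation.
std::vector<char> packSamples (const float* samples, int numSamples)
{
    std::vector<char> block (sizeof (float) * (size_t) numSamples);
    std::memcpy (block.data(), samples, block.size());
    return block;
}

std::vector<float> unpackSamples (const std::vector<char>& block)
{
    std::vector<float> samples (block.size() / sizeof (float));
    std::memcpy (samples.data(), block.data(), block.size());
    return samples;
}
```

The round trip is lossless, unlike going through 16-bit WAV. In real code you’d also reuse one preallocated block rather than constructing a fresh container per callback.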

What you are trying to achieve here ranges from challenging to impossible, especially as you cannot rely on the DAW processing blocks the way you expect it to in any form… But what’s your use case here?


Thanks for the comments. I’ve since optimized the code to just copy the raw floats into the memory blocks and capture them in the standalone. It’s very close: the sound is good, I just want to reduce the latency somewhat; it’s currently 5–10 processBlocks behind due to buffering.

As for the use case, I’ll try to explain: I’m trying to capture audio data from the DAW into the standalone app as close to realtime as possible. I saw that the Satellite Sessions plugin (music collaboration inside your DAW: Ableton Live, Logic, Pro Tools and more) works by having a master sequencer plugin as an instrument and then “capture” plugins to get the audio from different tracks and route it into the sequencer.

This got me thinking that the same method could probably be used to pipe that audio to a standalone app, which in my case would be really ideal for a bunch of reasons.

As PluginPenguin said, this doesn’t seem realtime safe at all, especially now that you’ve put the IPC callbacks on the audio thread. This renders the project pretty useless, as users running bigger sessions/at smaller block sizes will be hearing clicks and pops throughout.

Your best bet is a lock-free FIFO (JUCE has an option here, but you need to handle the memory yourself, and Boost has a pretty good one here). Don’t bother writing the audio to a file – that just wastes time. In each process call, simply copy the input block into the FIFO. When the standalone requests a block from the relay through the IPC, have the messageReceived method in the plugin push the first item in the FIFO to the standalone. In the standalone, use a similar FIFO to store the blocks from the plugin that are ready for processing.

You’ll need to be at least one block behind, but this is the only safe way to do it. Using the audio thread to allocate or take locks is only going to make your life harder.
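To make the “copy the input block into the FIFO” step concrete, here is a minimal single-producer/single-consumer ring buffer sketched in plain C++ (atomics only, no locks, no allocation after construction). In a real project something like juce::AbstractFifo or Boost’s SPSC queue would fill this role, and T would be a preallocated fixed-size sample block rather than anything heap-allocating:

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Sketch of a lock-free single-producer/single-consumer FIFO.
// push() may only be called from one thread (the audio thread),
// pop() only from another (the IPC/message thread).
template <typename T, size_t Capacity>
class SpscFifo
{
public:
    bool push (const T& item)
    {
        auto w = writeIndex.load (std::memory_order_relaxed);
        auto next = (w + 1) % Capacity;
        if (next == readIndex.load (std::memory_order_acquire))
            return false;                    // full: drop rather than block
        slots[w] = item;
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    std::optional<T> pop()
    {
        auto r = readIndex.load (std::memory_order_relaxed);
        if (r == writeIndex.load (std::memory_order_acquire))
            return std::nullopt;             // empty
        T item = slots[r];
        readIndex.store ((r + 1) % Capacity, std::memory_order_release);
        return item;
    }

private:
    std::array<T, Capacity> slots {};
    std::atomic<size_t> writeIndex { 0 }, readIndex { 0 };
};
```

Note that the audio thread never blocks: when the FIFO is full, push() simply fails, which is the price of staying realtime safe.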


But still, merely creating a MemoryBlock during processBlock will allocate heap memory, and sending something via IPC will block, so it’s still the wrong approach.

If you want to achieve such low latency figures, you should research how hosts that run plugins sandboxed in another process share audio buffers with no latency across process boundaries. I guess it will come down to allocating dedicated shared memory that is accessed simultaneously by both processes. But in contrast to the sandboxed-plugin scenario, your reader side has no knowledge of how the host will slice the buffers, so you’ll need at least a few hundred samples of safety margin. You’d probably need to place some suitable FIFO into that shared memory location that the plugin only writes to and the standalone only reads from.
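The shared-memory idea could be sketched like this (POSIX only, and entirely hypothetical layout). A real implementation would create the region with shm_open() under a well-known name and synchronise the positions with proper atomics; here fork() plus an anonymous MAP_SHARED mapping stands in for the two processes, just to show the “plugin writes / standalone reads” direction:

```cpp
#include <cstring>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical shared layout: a sample FIFO the plugin writes and the
// standalone reads. Positions would need std::atomic<int> in real code.
struct SharedAudioFifo
{
    static constexpr int capacity = 1024;   // the "few hundred samples" safety margin
    int writePos;                           // advanced by the plugin side only
    int readPos;                            // advanced by the standalone side only
    float samples[capacity];
};

bool sharedFifoRoundTrip()
{
    void* mem = mmap (nullptr, sizeof (SharedAudioFifo), PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return false;

    auto* fifo = static_cast<SharedAudioFifo*> (mem);
    std::memset (fifo, 0, sizeof (SharedAudioFifo));

    if (fork() == 0)                        // child plays the "plugin": write a block
    {
        for (int i = 0; i < 64; ++i)
            fifo->samples[fifo->writePos++] = (float) i / 64.0f;
        _exit (0);
    }

    wait (nullptr);                         // parent plays the "standalone": read it back
    bool ok = fifo->writePos == 64 && fifo->samples[32] == 32.0f / 64.0f;
    munmap (mem, sizeof (SharedAudioFifo));
    return ok;
}
```

The wait() is only for the demo; the real reader would keep consuming at its own audio rate, staying the safety margin behind the writer.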

I have never implemented something like that myself, so all of this is just theory, but it could become a fun low-level implementation challenge :smiley:


Interesting, thanks for the info, I’ll look into that – and you’re right about allocating the memory block each time; I should be careful about that stuff!

For my use case, if I could lock myself one block behind the plugin, that would be “good enough” for realtime purposes, and if I wanted to record I could always adjust the timing of what I receive appropriately.

Cheers for the thoughts guys