Reading audio buffers... just that!

Hi!

C++ developer here, completely new to audio and JUCE. I have a project where I need to do real-time analysis of an audio signal, whether it’s coming in from a microphone or playing out to an output device.

I looked at a few tutorials, and most of them seem to rely on calls to getNextAudioBlock to process what’s in the input buffers. I’d like to bind my analysis to the data arriving in the buffers, rather than being called whenever a device calls back to output something.

I feel like the answer lies somewhere around AudioFormatWriter::ThreadedWriter::IncomingDataReceiver, but I’m not sure I should use this class directly, as it’s quite deep in the API.

Any recommendation on how to “just” read audio whenever it’s written to an input or output buffer?

Cheers!

getNextAudioBlock and processBlock are the only places you can read/write live audio data.

If you’re trying to intercept audio data from another application, you need an audio driver that supports that; even then, your code would still be best placed in the proper audio callbacks.
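For a standalone app, the usual shape is something like this rough, untested sketch (the class name and the peak “analysis” are placeholders, and it assumes the input arrives on channel 0):

```cpp
#include <JuceHeader.h>
#include <cmath>

class AnalyserComponent : public juce::AudioAppComponent
{
public:
    AnalyserComponent()            { setAudioChannels (2, 2); }
    ~AnalyserComponent() override  { shutdownAudio(); }

    void prepareToPlay (int, double) override {}
    void releaseResources() override {}

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        // The device pulls this block from us; the freshest input samples are in here.
        auto* in = bufferToFill.buffer->getReadPointer (0, bufferToFill.startSample);

        float peak = 0.0f;
        for (int i = 0; i < bufferToFill.numSamples; ++i)
            peak = juce::jmax (peak, std::abs (in[i]));

        // ...hand `peak` (or the whole block) over to your analysis here...

        bufferToFill.clearActiveBufferRegion();   // analysis only, so output silence
    }
};
```

The point is that you don’t go looking for the buffers: the device calls you, and whatever analysis you want to bind to the data starts from inside that callback.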

JUCE uses a pulling pipeline model, i.e. a driver or plugin host pulls new samples out of your code.

What you want is a pushing pipeline, where the audio data is pushed towards the speakers / sound card as soon as it appears. JUCE won’t work that way. You will have to add a FIFO buffer in between and have a strategy in place for what to do if it runs empty or overflows. And of course it adds latency.
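The usual shape is something like this rough, untested sketch using juce::AbstractFifo (the class name, capacity and drop-on-overflow policy are just placeholders): the audio callback pushes, and your analysis thread pulls whenever it gets around to it.

```cpp
#include <JuceHeader.h>
#include <algorithm>
#include <array>

// Mono FIFO between the audio callback (producer) and an analysis thread (consumer).
class AnalysisFifo
{
public:
    void push (const float* input, int numSamples)           // audio thread (producer)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (numSamples, start1, size1, start2, size2);

        if (size1 > 0) std::copy_n (input,         size1, storage.data() + start1);
        if (size2 > 0) std::copy_n (input + size1, size2, storage.data() + start2);

        fifo.finishedWrite (size1 + size2);       // whatever didn't fit is simply dropped
    }

    int pop (float* output, int maxSamples)                   // analysis thread (consumer)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (juce::jmin (maxSamples, fifo.getNumReady()),
                            start1, size1, start2, size2);

        if (size1 > 0) std::copy_n (storage.data() + start1, size1, output);
        if (size2 > 0) std::copy_n (storage.data() + start2, size2, output + size1);

        fifo.finishedRead (size1 + size2);
        return size1 + size2;                     // may be 0 if the FIFO ran empty
    }

private:
    static constexpr int capacity = 1 << 15;      // roughly 0.7 s of mono audio at 44.1 kHz
    juce::AbstractFifo fifo { capacity };
    std::array<float, capacity> storage {};
};
```

You’d call push() from getNextAudioBlock / processBlock and pop() from whichever thread runs the analysis.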

Can you elaborate on this?

I was just reading the dsp tutorial which includes some helpful information about the signal processing lifecycle. E.g. the prepare, process, and reset functions and how they are called in series.

But what you are saying is, these don’t work on live data directly; they only operate on the data passed to them via the getNextAudioBlock / processBlock callbacks, not directly on the device buffers?

Sorry if this is a poorly phrased question; I’m just encountering some of the same confusion about dataflow and naming conventions as others have, and trying to understand.

Can you explain what exactly you are looking to do?

I will try my best to explain some of it. (beware, there may be errors!)

Prepare lets you know that the audio pipeline is about to start (or that you’re about to be included in said pipeline, if you’re a plugin). This is where you can create buffers or threads to use on the audio thread, and, if you’re a plugin, this is where you will find out what sample rate and block size you’re going to be working with.
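As a rough, untested sketch of how that lifecycle hangs together (standalone flavour shown; a plugin’s prepareToPlay gets the same information from the host, and the class name and dsp::Gain member here are just placeholders for “some processing”, assuming the juce_dsp module is available):

```cpp
#include <JuceHeader.h>

class LifecycleSketch : public juce::AudioAppComponent
{
public:
    LifecycleSketch()           { setAudioChannels (2, 2); }
    ~LifecycleSketch() override { shutdownAudio(); }

    void prepareToPlay (int samplesPerBlock, double sampleRate) override
    {
        // Prepare: we've just been told the block size / sample rate the device will use,
        // so allocate buffers, spin up threads, and prepare any dsp objects here.
        gain.prepare ({ sampleRate, (juce::uint32) samplesPerBlock, 2 });
        gain.setGainLinear (0.5f);
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        // Process: the dsp object only ever sees the buffer the callback hands it.
        juce::dsp::AudioBlock<float> block (*bufferToFill.buffer,
                                            (size_t) bufferToFill.startSample);
        gain.process (juce::dsp::ProcessContextReplacing<float> (block));
    }

    void releaseResources() override
    {
        gain.reset();   // Reset: clear any leftover state when the pipeline stops.
    }

private:
    juce::dsp::Gain<float> gain;
};
```

The device/host drives all three of those calls; you never invoke them on live data yourself.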

Most OS developers decided to separate memory between kernel-level and user-level code, and as a result, the buffers we get access to when we request audio come not directly from the hardware/driver but from the OS audio API (WASAPI, ASIO, CoreAudio, etc.).

The typical audio chain might look like this:

It’s the OS API’s job to give everyone a uniform interface and a fair time slot in which to supply it with some audio.

Some APIs do this with a ‘pulling’ interface: you provide a callback that the API calls whenever it needs more data. Sound familiar? :slight_smile:
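In JUCE that pull interface surfaces as the callbacks mentioned above; at the lowest user-facing level it’s juce::AudioIODeviceCallback. A rough sketch (recent JUCE versions, names are placeholders):

```cpp
#include <JuceHeader.h>

class PullCallback : public juce::AudioIODeviceCallback
{
public:
    void audioDeviceAboutToStart (juce::AudioIODevice* device) override
    {
        sampleRate = device->getCurrentSampleRate();    // our 'prepare' moment
    }

    void audioDeviceIOCallbackWithContext (const float* const* inputChannelData,
                                           int numInputChannels,
                                           float* const* outputChannelData,
                                           int numOutputChannels,
                                           int numSamples,
                                           const juce::AudioIODeviceCallbackContext&) override
    {
        // We're being pulled: read/analyse the inputs, then fill (or clear) the outputs.
        for (int ch = 0; ch < numOutputChannels; ++ch)
            juce::FloatVectorOperations::clear (outputChannelData[ch], numSamples);

        juce::ignoreUnused (inputChannelData, numInputChannels);
    }

    void audioDeviceStopped() override {}

private:
    double sampleRate = 0.0;
};
```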

Others use a ‘push’ interface: you request some (usually large) buffers from the API, fill them, and ‘queue’ them ready to be mixed/sent to the driver.
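For contrast, the push shape looks roughly like this toy example (none of this is a real audio API; FakeDevice is purely illustrative):

```cpp
#include <queue>
#include <utility>
#include <vector>

// Stand-in for whatever buffer-queueing interface the OS API would give you.
struct FakeDevice
{
    std::vector<float> acquireBuffer()               { return std::vector<float> (16384, 0.0f); }
    void queueBuffer (std::vector<float>&& buffer)   { queued.push (std::move (buffer)); }

    std::queue<std::vector<float>> queued;           // waiting to be mixed / sent to the driver
};

void pushSomeAudio (FakeDevice& device)
{
    // We decide when to act: grab a (usually large) buffer, fill it, queue it.
    auto buffer = device.acquireBuffer();

    for (auto& sample : buffer)
        sample = 0.0f;                               // ...your audio would be written here...

    device.queueBuffer (std::move (buffer));
}
```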

All this is a contrived way to say nobody gets ‘live’ audio, not in latency terms anyway (including ‘zero latency’ hardware). The data should be live (as in untouched) however.

I hope this helps!

Thanks @oli1 for the breakdown and graphic.

OOh, I know this one! It’s JUCE!

Thanks for explaining the two ways these terms are used. For anyone else, the processing audio input tutorial has some good notes about when to touch channels and when to leave them alone.