How to use Opus API (libopus 1.3.1) in a JUCE project?

Hello all!

I’m trying to set up my JUCE project to use Opus’ C API. I don’t know much about using APIs, so I was wondering whether you could help me use C functions in C++, and more precisely in a JUCE plugin.
Opus is an open-source audio codec and provides an API which basically enables developers to encode, decode and analyze audio signals. I would like to use some of these functions in my plugin.
Since most of the API’s guidelines explain how to use the functions in plain C, I must admit I’m a little lost.

Example from opus.h:

//Since Opus is a stateful codec, the encoding process starts with creating an encoder state.
//this can be done with:
@code
int          error;
OpusEncoder *enc;
enc = opus_encoder_create(Fs, channels, application, &error);
@endcode

The tutorials say I should develop most of my code inside ProjectNameAudioProcessor::processBlock but I’m not sure how to use the C functions in there.
Any way you could help me?
Thank you so much in advance! :slight_smile:

Warmly,
André

P.S.
I hope you’re shining in these dark Covid-19 times. But the bright side is we all slowly get to work from home, which isn’t so bad after all. :partying_face: :yellow_heart:

Opus functions can be called without problems from C++. In your AudioProcessor constructor, you call opus_encoder_create(Fs, channels, application, &error); and store the returned OpusEncoder* enc as a member variable. In the destructor, you have to call opus_encoder_destroy(enc);. In processBlock you then gather blocks of audio data (by default Opus uses chunks of 960 samples), and once you have enough samples you can call opus_encode_float() to encode them into binary data.

Same thing the other way for the decoder. You’ll also need to allocate some memory for all the needed buffers.
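
If it helps, here’s roughly what that could look like (just a sketch: the class name MyAudioProcessor, the 48 kHz / stereo settings and the member names are placeholders you’d adapt to your plugin):

    #include <opus.h>

    // In the header, as member variables:
    //     OpusEncoder* opusEncoder = nullptr;
    //     OpusDecoder* opusDecoder = nullptr;

    MyAudioProcessor::MyAudioProcessor()
    {
        int encError = OPUS_OK, decError = OPUS_OK;
        opusEncoder = opus_encoder_create (48000, 2, OPUS_APPLICATION_AUDIO, &encError);
        opusDecoder = opus_decoder_create (48000, 2, &decError);
        jassert (encError == OPUS_OK && decError == OPUS_OK);
    }

    MyAudioProcessor::~MyAudioProcessor()
    {
        if (opusEncoder != nullptr) opus_encoder_destroy (opusEncoder);
        if (opusDecoder != nullptr) opus_decoder_destroy (opusDecoder);
    }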

Have you already dealt with building the library? As you can probably tell, I’m working on a similar thing right now. I decided to build static libraries for Mac and Windows, but I’m having issues with the configure script on macOS. In short, I can’t get it to build universally, and for some reason I cannot get it to build without SSE4_1 being required by the result, although that should be configurable.

Hey Adi,

Thank you so much for your answer. It helped a lot in setting me on the right track for the development of this plugin.
So, it took me a couple of days to figure this out, but a few things are still unclear. Maybe you can shed some light on those as well.

  • I called opus_encoder_create(SAMPLE_RATE, CHANNELS, APPLICATION, &errorEnc); and opus_decoder_create(SAMPLE_RATE, CHANNELS, &errorDec); each with its own error handling and destroy call. Thank you again for the advice!
  • Now here comes the hardest part… I wasn’t really sure what you meant by that, so here’s what I imagined for the processBlock:
    //Declare buffers: copyBuffer holds one incoming block, audioData collects the copies
    AudioBuffer<float> copyBuffer;
    std::list<AudioBuffer<float>> audioData;

And for the main processing part:

        //If final buffer contains less than MAX_NUM_SAMPLES, then allocate memory for final buffer (+960 samples)
        if (audioData.size() + 960 <= MAX_NUM_SAMPLES)
        {
            //Store samples in buffer
            copyBuffer.makeCopyOf(buffer);
            audioData.push_back(copyBuffer);        
        }
        
        //Else it means that the storage is done. Start encoding for all the samples of the buffer
        //Then decode
        //Then store in a file.
        //Iterate through all the samples and store each sample in a buffer

Unfortunately, as you can tell, I’m not very proud of this and I think there are a few things I’ve missed about how to process those audio chunks :wink:. I think I see the idea but I’m unable to write the code for it…

Any way you could help with that?

Warmly,
André

I could tell, yes! :slight_smile:
Same boat here hahaha :wave:
I’m working on a PC and I’ve decided to compile the code on macOS if my code is needed there. But have you checked the answers to this topic?

So you want to write the Opus-encoded audio to disk in realtime? In that case you will need a worker thread for the file access, and you can probably do the encoding on that thread as well.
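
Something along these lines, for example (only a skeleton, not working code: the class name and output file are made up, and the queue and encoder are only hinted at in the comments):

    #include <JuceHeader.h>

    class EncodedWriterThread : public juce::Thread
    {
    public:
        EncodedWriterThread() : juce::Thread ("Opus writer") {}

        void run() override
        {
            juce::FileOutputStream out (juce::File::getSpecialLocation (juce::File::userDocumentsDirectory)
                                            .getChildFile ("encoded_output.bin"));
            while (! threadShouldExit())
            {
                // 1) pull raw samples from a lock-free queue filled by processBlock()
                // 2) encode them in 960-sample frames with opus_encode_float()
                // 3) write the resulting packets, e.g.:
                //        out.write (packetData, (size_t) packetNumBytes);
                wait (10);   // or use a juce::WaitableEvent signalled from the audio thread
            }
        }
    };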

I have a case with no file access, but I’m encoding and decoding in realtime. In the processBuffer() method I use a ring buffer to create chunks of 960 samples, and every time I have collected another 960 samples I do the encoding, which gives me some bytes of encoded data. Not the best solution, but it seems to work ok for now.
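
Roughly like this, if it helps (a sketch rather than my actual code: the struct and variable names are invented, it assumes interleaved float input, and the std::vector shuffling allocates on the audio thread, which is part of why I wouldn’t call it the best solution):

    #include <opus.h>
    #include <vector>

    struct OpusChunker
    {
        static constexpr int frameSize = 960;      // one 20 ms Opus frame at 48 kHz
        int numChannels = 2;
        std::vector<float> fifo;                   // samples waiting to be encoded
        std::vector<unsigned char> packet = std::vector<unsigned char> (4000); // 4000 bytes is the max packet size suggested in the Opus docs

        void pushAndEncode (const float* interleaved, int numSamples, OpusEncoder* enc)
        {
            fifo.insert (fifo.end(), interleaved, interleaved + numSamples * numChannels);

            while ((int) fifo.size() >= frameSize * numChannels)
            {
                // opus_encode_float() consumes exactly one frame: frameSize samples per channel
                opus_int32 numBytes = opus_encode_float (enc, fifo.data(), frameSize,
                                                         packet.data(), (opus_int32) packet.size());
                if (numBytes > 0)
                {
                    // the first numBytes bytes of 'packet' are one encoded Opus packet:
                    // feed it to the decoder, a writer thread, the network, ...
                }

                fifo.erase (fifo.begin(), fifo.begin() + frameSize * numChannels);
            }
        }
    };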

In the meantime I found a GitHub project where Opus is set up with CMake. This allows me to use Xcode to compile it and gives me all the configuration options I know how to deal with. Unfortunately it is using Opus 1.2.1. https://github.com/pokey909/opus-cmake

This is exactly what I’m looking for! Do you mean the processBlock() method?
Ok so you do your encoding chunk-by-chunk. Smart! That sure gives you real-time. Why are you saying it’s not the best solution?

Ok, well, at least something exists! So it’s either downgrading or looking for another solution…

Yes, sorry, processBlock(). The problem is the same as with large FFTs… If the host runs at low buffer sizes like 32 or 64, the processBlock() calls use very different amounts of CPU power when the chunks are 960 samples long: at a 64-sample buffer size, only every 15th call actually runs the encoder, and that one call does all the work. But to achieve the best low-latency performance, the processBlock() calls should produce a near-constant load. If they don’t, users will get dropouts before the CPU is maxed out.

A better solution would be to just collect the audio in the processBlock() call and put it into a lock-free queue. Then read the queue on another thread and do the processing there whenever enough audio is in the queue. Then do the reverse to get the audio back into processBlock(). This also requires some additional buffering to make sure there is always enough data.
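
For example, something along these lines with juce::AbstractFifo (a rough sketch: mono only, all names are invented, and the thread object that calls encodePending() as well as the reverse path back into processBlock() are left out):

    #include <JuceHeader.h>
    #include <opus.h>
    #include <algorithm>
    #include <vector>

    juce::AbstractFifo fifo { 8192 };
    std::vector<float> fifoBuffer = std::vector<float> (8192);

    // Audio thread: only copy the samples into the FIFO, no encoding here.
    // (A real version should check getFreeSpace() first and decide what to do on overflow.)
    void pushFromProcessBlock (const float* samples, int numSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (numSamples, start1, size1, start2, size2);
        std::copy (samples, samples + size1, fifoBuffer.begin() + start1);
        std::copy (samples + size1, samples + size1 + size2, fifoBuffer.begin() + start2);
        fifo.finishedWrite (size1 + size2);
    }

    // Worker thread: whenever a whole Opus frame has accumulated, encode it.
    void encodePending (OpusEncoder* enc)
    {
        constexpr int frameSize = 960;
        while (fifo.getNumReady() >= frameSize)
        {
            int start1, size1, start2, size2;
            fifo.prepareToRead (frameSize, start1, size1, start2, size2);

            std::vector<float> frame (frameSize);
            std::copy (fifoBuffer.begin() + start1, fifoBuffer.begin() + start1 + size1, frame.begin());
            std::copy (fifoBuffer.begin() + start2, fifoBuffer.begin() + start2 + size2, frame.begin() + size1);
            fifo.finishedRead (size1 + size2);

            unsigned char packet[4000];
            opus_int32 numBytes = opus_encode_float (enc, frame.data(), frameSize,
                                                     packet, (opus_int32) sizeof (packet));
            if (numBytes > 0)
            {
                // hand the encoded packet to the file writer / decoder here
            }
        }
    }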