Unity3D Game Engine with JUCE Audio Backend

Does anyone have any experience with this? I’m about to jump headfirst into this rabbit hole.

My end goal is to target AR & VR (Augmented Reality & Virtual Reality) apps while being able to leverage the power of the JUCE audio APIs.

My initial research suggests that it’s not as simple as I would have thought to use C++ from within Unity3D (Unity itself is written in C++, but most of the work is done in C# via the Mono runtime).

I’ve developed a simple tool (basically a small transpiler) that converts my audio backend (built on JUCE, obviously) into a plain C API, exposed through a header & a static library with no other dependencies. This has worked wonders for using JUCE as an audio backend in mobile apps and other projects where JUCE handles only the audio bits and everything else is done in another language.

From the basic C API, it’s pretty simple to auto-generate other language bindings. So far I’ve done Objective-C, Swift, Java, & Pure Data, and I’m working on the Julia, Python, & JavaScript bindings (JavaScript via Emscripten, which is a whole can of worms in itself).

My initial impression is that it would be best to generate a C# API on top of the C API and make a Unity3D plugin from that. It’s possible to link static & dynamic libraries in Unity, and I think having the C# bindings would make things simpler.
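To make that concrete, here is a minimal hand-written sketch of the kind of P/Invoke bindings such a generator might emit. The library name and all of the juce_engine_* functions are hypothetical placeholders, not a real API:

using System;
using System.Runtime.InteropServices;

// Sketch of generated bindings. "juceaudio" and the juce_engine_*
// symbols are made-up placeholders for whatever the C API exposes.
public sealed class JuceAudioBackend : IDisposable
{
#if UNITY_IOS
    const string Lib = "__Internal";   // plugins are statically linked on iOS
#else
    const string Lib = "juceaudio";    // juceaudio.dll / libjuceaudio.so / .bundle
#endif

    [DllImport(Lib)] static extern IntPtr juce_engine_create(double sampleRate, int blockSize);
    [DllImport(Lib)] static extern void juce_engine_destroy(IntPtr engine);
    [DllImport(Lib)] static extern void juce_engine_process(IntPtr engine, float[] interleaved, int frames, int channels);

    IntPtr handle;

    public JuceAudioBackend(double sampleRate, int blockSize)
    {
        handle = juce_engine_create(sampleRate, blockSize);
    }

    // The marshaller pins the array for the duration of the call,
    // so this can be invoked from Unity's audio thread.
    public void Process(float[] interleaved, int frames, int channels)
    {
        juce_engine_process(handle, interleaved, frames, channels);
    }

    public void Dispose()
    {
        if (handle != IntPtr.Zero)
        {
            juce_engine_destroy(handle);
            handle = IntPtr.Zero;
        }
    }
}

On iOS the native code has to be statically linked, hence the "__Internal" library name; on desktop platforms Unity loads the dynamic library by name.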

Does anyone have any experience with this sort of thing?

I understand it’ll be a bit of work no matter what, but I really feel like it would be worth it to develop a solid workflow that allows us to leverage both of these powerful tools together. If we are successful, this could be a godsend for developers everywhere.

FWIW, I plan on open-sourcing my JUCE -> C transpiler soon (as well as the C -> other languages generators), but it’s nothing fancy so far, just a simple home-brewed lexer & parser. I want to switch to the LLVM lexing & parsing tools eventually, but in the meantime I have something that works pretty well, and I have a feeling it could actually be pretty simple to hook the raw C API into Unity3D. I’ve only just started using Unity3D, though, so I could use all the advice I can get!

Thanks in advance!

I haven’t found a link to the Unity3D sources (just from quick browsing), but from what I saw in the documentation, it seems to also use OpenAL?
If so, I would create a subclass of AudioIODevice that invokes the AudioIODeviceCallback to fill an OpenAL buffer. I already suggested that here (but had no time to make a proof of concept):

Or did you actually want to go the other way round and feed the game sound into JUCE buffers?

That actually sounds perfect!

Really I just wanted to use JUCE as the audio engine, most of the work would be in Unity. I think it would be awesome to develop virtual reality apps with JUCE doing the audio.

I’ll update with how it works out.

Why use JUCE as the audio engine when Unity has its own audio engine? My advice would be to use Unity to handle audio IO and then send it to your JUCE-based binary through Unity’s OnAudioFilterRead() method.

// Called by Unity on its audio thread for each block of samples
void OnAudioFilterRead(float[] data, int channels)
{
    // hand the interleaved buffer to the native JUCE-based library
    yourBinaryObject.processBlock(data, channels);
}

You want to do everything in your power to avoid OnAudioFilterRead - since it runs in the C# domain, the thread is subject to garbage collection.

There are multiple other ways to do what you want directly: native plugins, audio mixer plugins, spatializer plugins… Note these are all “native”, so you can use C++ (and JUCE) directly from Unity3D.

Hi Mayae, if you write a native plugin, won’t you have to implement your own audio IO routines? I did this in the past but found it quite cumbersome, as my library and Unity would compete for the same audio IO. I could disable Unity’s audio IO, but it’s pretty good at finding and selecting the default devices, and I guess JUCE is just as capable of doing this. Anyhow, nowadays I create a class derived from AudioSource and send samples to and fro an external processing library. I’ve never had any issues with drop-outs or latency, and I’ve pushed it pretty hard. It also means I can use my custom audio objects alongside Unity’s. Afaik there is no way to do this without calling OnAudioFilterRead()? Or maybe there is, I’m certainly no expert.

I mean the native plugins inside Unity. Unity can load your DLLs, SOs or bundles. You can then either do the interfacing yourself or use the native audio plugin SDK (a C interface for integrating your audio routines). Note this is all happening inside Unity; it’s not external.

It’s not so much what you do inside OnAudioFilterRead, it’s what you do outside it: loading lots of assets, creating objects each frame, etc., will trigger the GC every so often, and that will impact the performance and headroom of the procedural audio generation.

Edit: Have a look:

https://docs.unity3d.com/500/Documentation/Manual/AudioMixerNativeAudioPlugin.html

Ah yes, now I follow. I also tried this route before, and their SDK is pretty easy to use. But the last time I tried it, it was only for audio effects, and in my case I was writing generative stuff, so it just wasn’t intuitive to use Unity’s native SDK. But for what @cpenny42 is doing, it could very well be a good solution.

The ideal solution would be if we could generate Unity native audio plugins using JUCE. I understand that the GUI would have to be written separately in C#, but the rest of the plugin interface seems quite similar to VST and others, so in principle that should be possible.

Yes, Unity has a great native audio SDK that would be ideal as a JUCE export option. The C# side creates Unity Editor controls that work with other aspects of the game engine scripting API. A few really good open-source references are:

https://github.com/mtytel/helm - a full JUCE-based synthesizer with a Unity asset that can load a binary of the audio engine and control the synth (the Unity asset is 80 dollars, though, and I think it might be closed source). A reference for how this works in practice is in his video tutorials: https://www.youtube.com/watch?v=1oNWX0igFMo&index=3&list=PLyQHewKAoFRKldoPGM6foug1g7RZTkPeO

https://github.com/grame-cncm/faust/tree/master-dev/architecture/unity - faust2unity generates a binary and a C# script to control it, but I flagged a few caveats about the nature of their script interface that I am helping to sort out.

and my favorite is the Heavy compiler for Pure Data:
https://enzienaudio.com/docs/index.html#06.unity
It compiles Pure Data patches into VST, a Unity native audio binary with a C# script for the interface, and many other formats, including C++ source code.

I think what you need to do is mostly look at the examples for the Unity native audio SDK and treat it the way JUCE treats the VST SDK: create a C# script interface generator that facilitates the connection to the compiled JUCE binary.
https://docs.unity3d.com/Manual/AudioMixerNativeAudioPlugin.html

Nice! This is what I love about JUCE.

Depending on what you want to do, you can attach an audio filter to a sprite and access the raw data:

void OnAudioFilterRead(float[] data, int channels)
{ /* process the data like in JUCE */ }

We built a whole project around this in the past and it works very well.

You can call any native DLL from the OnAudioFilterRead function. The slight headache comes when sending parameters to the native code. How did you accomplish that smoothly? I’ve found it a pain in the past and just resorted to sending floating-point blocks of data.
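For reference, a minimal sketch of the plain-setter approach. Everything named here ("juceaudio", juce_engine_set_param) is hypothetical; the native side would typically store the value in something like a std::atomic<float> that the audio callback reads:

using System.Runtime.InteropServices;
using UnityEngine;

// Hypothetical sketch; "juceaudio" and juce_engine_set_param are
// placeholder names, not a real exported API.
public class FilterController : MonoBehaviour
{
    [DllImport("juceaudio")]
    static extern void juce_engine_set_param(int paramIndex, float value);

    [Range(20f, 20000f)] public float cutoffHz = 1000f;

    void Update()
    {
        // Runs on the main thread; the native setter must be safe
        // to call while the audio thread is running.
        juce_engine_set_param(0, cutoffHz);
    }
}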

Well, in this use case we were just using that function in Unity to generate sine tones and LFO warbles for gamifying audiometry for kids: https://clik.audio

But I did a Flash DJ app a few years ago with sockets in much the same way, using a Flex app as the front end and a little JUCE app as the backend. It was purely for my own education, but it worked really well, and sockets are easy to work with in Unity. You just need to design a simple protocol (key/value pairs, JSON, XML, or just enumerated raw bytes), e.g. something like the sketch below.
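A toy sketch of that idea, with a made-up host, port, and message format: send key=value pairs over UDP to a JUCE app listening on the other end.

using System.Net.Sockets;
using System.Text;

// Toy example of a "simple protocol": key=value pairs over UDP.
// Host, port, and message format are all made up for illustration.
public class ParamSender : System.IDisposable
{
    readonly UdpClient udp = new UdpClient();

    public ParamSender(string host = "127.0.0.1", int port = 9000)
    {
        udp.Connect(host, port);
    }

    public void Send(string key, float value)
    {
        // e.g. "cutoff=1000" - the receiving app parses this
        byte[] msg = Encoding.UTF8.GetBytes(key + "=" + value);
        udp.Send(msg, msg.Length);
    }

    public void Dispose() => udp.Close();
}

On the JUCE side, something like juce::DatagramSocket could receive and parse these messages.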