AUv3 without JUCE GUI on iOS

gui
projucer
audio

#1

Hello.

I’m trying to use JUCE with native UI on iOS to develop an AUv3 extension. I don’t have any experience with JUCE, so this possibly trivial task is giving me a hard time.

The closest thing that I could do with Projucer was to create a new project based on the Audio Plugin template, enable the AUv3 format, and:

bool NewProjectAudioProcessor::hasEditor() const
{
    return false;
}

AudioProcessorEditor* NewProjectAudioProcessor::createEditor()
{
    return nullptr;
}

However, this results in a crash with an EXC_BAD_ACCESS in every host I’ve tried: getWidth is called inside a Component::setBounds triggered by MainContentComponent::resized.
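For what it’s worth, the only way I’ve found to keep the wrapper happy is not to return nullptr at all, but to hand back a minimal empty editor. This is a sketch, assuming the crash comes from the AUv3 wrapper expecting a non-null view on iOS (I haven’t verified that against the wrapper source):

```cpp
// Sketch of a workaround: keep hasEditor() true and return a blank
// editor, since the AUv3 wrapper appears to assume one exists on iOS.
bool NewProjectAudioProcessor::hasEditor() const
{
    return true;
}

juce::AudioProcessorEditor* NewProjectAudioProcessor::createEditor()
{
    // An empty, 1x1 editor that draws nothing; any native UI would
    // live elsewhere in the extension.
    auto* editor = new juce::AudioProcessorEditor (*this);
    editor->setSize (1, 1);
    return editor;
}
```

This obviously wastes a view, but it sidesteps the null-editor path entirely.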

It’s also not obvious to me how I could “eject” from the rather opaque project that Projucer has created into something closer to what I’m used to when building iOS apps. Where’s the main entry point, where are the view controllers, how can I use native iOS controls, and so on?

Fundamentally, what I’m after is just using JUCE for the audio capabilities. Is this incompatible with this framework’s architecture, or is there some example project that highlights how this can be done?

Thanks!


#2

AUv3 is designed in a way that heavily intertwines the audio processing part and the GUI part. I wouldn’t know how to split the two without making a lot of changes to JUCE’s AUv3 wrapper.

I think a better approach for you would be to start from an Apple AUv3 sample project and then add a JUCE module to the mix when you need it - for example, juce_audio_basics if you need an IIR filter.


#3

If you’re referring to AUv3 in general, I disagree that it dictates the audio processing parts and the GUI parts be heavily intertwined. I can happily (for example) set up a React Native app that’s entirely decoupled from the audio processing parts of an AUv3 extension, with communication happening over various abstraction channels driven by parameters and observers.

If this is an issue specific to JUCE and how it handles AUv3, then it’s understandable. Integrating JUCE into an existing project might be the way to go, I guess, but it doesn’t seem a trivial task for anything that isn’t fundamentally self-contained (like IIR filters, perhaps).

There’s a whole lot of plumbing required if one wants to use a larger chunk of the JUCE framework. After digging around, I’ve ended up in the juce_audio_plugin_client module, where one file in particular seems to deal with setting things up for iOS, essentially implementing pretty much everything itself to get things going.

That file, however, is thousands of lines long and difficult to digest for the most part, especially when it comes to figuring out how things are tied to other JUCE internals. The amount of hand-holding this framework does for its sample projects is quite remarkable, so much so that the ways of integrating it into an existing project become quite obscured (though I understand this is reasonable if all you ever wanted to use was JUCE).

I’d be quite happy if there were at least some kind of tutorial on how to spin things up in a way where I supply or specify some core primitives or entry points, and the framework takes it from there.
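The closest thing I’ve found so far to that “bring your own entry point” style is ScopedJuceInitialiser_GUI. This is only a sketch of how I imagine it slotting into an existing app; the wrapper class and its name are hypothetical, and I’m assuming juce_events is where the initialiser lives and that it’s linked into the extension:

```cpp
// Sketch: booting JUCE's subsystems from an app-owned object instead of
// letting a Projucer-generated wrapper own main(). The AudioEngine class
// here is hypothetical scaffolding, not a JUCE type.
#include <juce_events/juce_events.h>

class AudioEngine
{
public:
    AudioEngine() = default;   // other JUCE classes are safe to use
                               // once this object exists
private:
    // Must be constructed before any other JUCE object and destroyed
    // last: it spins up the MessageManager and core subsystems for the
    // lifetime of this member.
    juce::ScopedJuceInitialiser_GUI juceInit;
};
```

Whether that’s enough to drive the whole audio-plugin-client machinery without the rest of the generated project, I honestly don’t know.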