.jucer GUI project run on Linux without desktop... can I?

Ok, that is actually unrelated to the X11 question. You are accessing samples outside the allocated memory, which explains why it crashes sooner or later.
You have to work backwards through the call stack to figure out why you end up reading or writing out of bounds…

I think you’ll be able to get JUCE 6 working just fine. And yes, I think the first link I shared implies that so long as you don’t use the GUI modules you should be fine. The key sentence in that link from ed95 was “If you are building a JUCE application for Linux using JUCE 6 then the same executable can run on systems both with and without the X11 libraries.”

I guess I’m curious about the exact recommended structure, supposing that one is using CMake (are you, edsut?). I think it’s the way to go on Linux, without a doubt (although I have recently compiled Projucer and DemoRunner successfully on both desktop Linux and Raspberry Pi).

As far as I can understand, you would need two separate CMakeLists.txt files, or a single one with conditional logic to decide whether or not to include the GUI modules. If you used a cached variable, it could also be made visible to any code that needs to decide whether to do anything GUI-related.
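
To make that concrete, here's a minimal sketch of what a single CMakeLists.txt with a cached GUI option might look like. All target names, paths, and the `MYAPP_HAS_GUI` definition are illustrative, not from the actual project; `juce_add_gui_app` and `juce_add_console_app` are the functions provided by JUCE 6's CMake API.

```cmake
# Sketch: one CMakeLists.txt that builds either a GUI or a headless target,
# controlled by a cached option. Names here are hypothetical.
cmake_minimum_required(VERSION 3.15)
project(MyJuceApp VERSION 1.0.0)

option(BUILD_WITH_GUI "Build the GUI version of the app" ON)

add_subdirectory(JUCE)   # assumes the JUCE repo is checked out here

if(BUILD_WITH_GUI)
    juce_add_gui_app(MyJuceApp PRODUCT_NAME "MyJuceApp")
else()
    juce_add_console_app(MyJuceApp PRODUCT_NAME "MyJuceApp")
endif()

target_sources(MyJuceApp PRIVATE Source/Main.cpp)

# Audio works in both modes; GUI modules are only linked when requested.
target_link_libraries(MyJuceApp PRIVATE
    juce::juce_audio_basics
    juce::juce_audio_devices)

if(BUILD_WITH_GUI)
    target_link_libraries(MyJuceApp PRIVATE juce::juce_gui_extra)
    target_compile_definitions(MyJuceApp PRIVATE MYAPP_HAS_GUI=1)
else()
    target_compile_definitions(MyJuceApp PRIVATE MYAPP_HAS_GUI=0)
endif()
```

The `MYAPP_HAS_GUI` compile definition then plays the role of the "cached variable visible to code" mentioned above: any source file can `#if MYAPP_HAS_GUI` around GUI-related code.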

I guess an example project that showed the simplest way to build with-GUI and headless from the same CMake project would be useful!

As for your call to loadImpulseResponse(), I think that you may need to move your call into getNextAudioBlock() (the app equivalent of processBlock()).

As far as I know, the constructor for MainComponent doesn’t count as being on the audio thread.

@daniel,
Right… originally this was a “can I run a JUCE application with all gui-related stuff opted out in a headless system?”. That led me down the “update JUCE” path, and that’s when the crash came into the picture.

@kcoul,
Glad to hear that, so I’ll try to stick with JUCE 6.

Regarding CMake… I did use it to build JUCE examples/extras to include Projucer and DemoRunner among others. The application in question is just using the LinuxMake environment that Projucer originally created. The work was originally done on a Pi, but was moved over to Ubuntu a while back.

My hopeful plan is to have a single build with conditional logic that makes the GUI-or-no-GUI decision.

@kcoul,
I don’t understand why the thread context would matter, but I moved all calls to loadImpulseResponse() so they happen on the first invocation of getNextAudioBlock().
Didn’t make any difference.

You shouldn’t create any Component at all. When added to the desktop, a Component creates a ComponentPeer, which is the connection to the window system. From that point on the X libraries are loaded, and the application won’t run without X11.

You can use AudioDeviceManager to open the audio device and AudioSourcePlayer to play from any kind of AudioSource.
Hosting AudioProcessors is already tricky, I don’t know if it will work, since the createEditor() will return a Component. The juce_audio_processors depends on juce_gui_basics and juce_gui_extra for that reason.

@daniel,
ok, now that’s scaring me…
First, note that this is an already-created project (actually, it’s a few years old now) that just recently acquired the need to run headless. I’m not sure what you mean by “You shouldn’t create any Component at all”, but that sounds like you’re saying that creating the MainComponent implies that X11 is an underlying requirement. Is that the case?
By the way, just in case it isn’t already obvious, I am very new to JUCE, so I apologize if I’m asking some dumb questions here…

You might get away with it if the Component is not added to the desktop: Component::addToDesktop (windowFlags)

Alternatively, there is a wrapper you can use on Linux to run even on a headless system:
Xvfb (usually invoked via the xvfb-run command)

xvfb…
I’ll look into that, thanks…

@edsut it would be best if you move the issues that you are having with the Convolution class into a separate thread so that we can help you there and address your original question about headless Linux support in this thread.

If you run your application built with the latest version of JUCE, the headless support allows you to run the same executable on systems both with and without the X11 libraries installed, and you can use the Desktop::isHeadless() method to query this at runtime.

Ok, I’ll start a new topic for the crash…
Glad to hear I’m on the right track with JUCE 6.

Thanks for that answer, @reuk. I do have to say that the documentation is a bit confusing to me.

I see two use cases here:

  1. The ‘load’ function happens from the GUI/Preset load/etc.

In that case I’m totally fine with the sync mechanism of the convolution engine taking a bit of time and maybe still running the old impulse response on a couple of process calls until loading is done.

  2. The ‘load’ function happens as part of a linear process.
    For example, loading the impulse response and immediately doing the processing on it, as is the case with offline export or with a visualizer of the IR.

In that case, I need a way to be absolutely sure that the IR is loaded when I call process(), and I’m willing to take the cost of the extra waiting/memory allocation just to get predictable results.

What is the correct way to call this method for each use case?

In general, calls to loadImpulseResponse load the impulse response (IR) asynchronously. The IR will become active once it has been completely loaded and processed, which may take some time.

Calling prepare() will ensure that the IR supplied to the most recent call to loadImpulseResponse() is fully initialised. This IR will then be active during the next call to process(). You can call loadImpulseResponse() before prepare() if a specific IR must be active during the first process() call.

For case 1, you’d call loadImpulseResponse from within the context of the audio callback. The loading will then happen asynchronously.

For case 2, you’d call loadImpulseResponse before prepare, which would make the IR available for immediate use in the next audio callback. There is not currently a way to replace the IR “immediately” without calling prepare, so replacing the IR at a particular sample count during offline rendering (for example) is not possible.
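
Putting the two orderings side by side, here's a hedged sketch using juce::dsp::Convolution. The `irFile` member and function names are illustrative; it assumes JUCE 6's juce_dsp module, so it won't compile outside a JUCE project.

```
// Sketch of the two call orders described above.
juce::dsp::Convolution convolution;
juce::File irFile;  // hypothetical: wherever your IR lives

// Case 2: guarantee the IR is active from the very first process() call.
// Load first, then prepare -- prepare() finishes initialising the pending IR.
void prepareToPlay (double sampleRate, int samplesPerBlock)
{
    convolution.loadImpulseResponse (irFile,
                                     juce::dsp::Convolution::Stereo::yes,
                                     juce::dsp::Convolution::Trim::yes,
                                     0);   // 0 = use the whole IR

    juce::dsp::ProcessSpec spec { sampleRate, (juce::uint32) samplesPerBlock, 2 };
    convolution.prepare (spec);
}

// Case 1: swap the IR during playback. Loading happens asynchronously,
// and the new IR becomes active once background processing has finished.
void setNewImpulseResponse (const juce::File& newIr)
{
    convolution.loadImpulseResponse (newIr,
                                     juce::dsp::Convolution::Stereo::yes,
                                     juce::dsp::Convolution::Trim::yes,
                                     0);
}
```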

Thanks @reuk for the detailed answer!

Can you explain more about the need to call loadImpulseResponse from the audio thread instead of the message thread (for case 1)?

I was under the impression that this function would return immediately regardless of which thread it is called from, and would then sync the result on a background thread, to be ready for the audio thread whenever process() is called (or after a few process() calls).

Is that not the case?

If it’s indeed needed on the audio thread, then I need to add yet another sync mechanism to get the data from the GUI to the audio thread before making that call, which is not trivial for something like a file, buffer, etc.

How about making loadImpulseResponse() a new thread as well? This conversation here is all over the place :wink:

Yep, seconded. Please start a new thread for this question, and I’ll answer there.

@ed95,
Ok, I’m sticking to just the desktop issue in this thread…
I created an empty “Audio” application with Projucer and added a check for the desktop as you suggest. I assume it goes in the application’s initialise() in Main.cpp…

    if (!juce::Desktop::getInstance().isHeadless())
        mainWindow.reset (new MainWindow (getApplicationName()));

Assuming this is the correct place to put it, I immediately see that it causes MainComponent not to be instantiated when headless. That’s good, but then how do I get the equivalent of MainComponent::getNextAudioBlock()?

BTW, you’re welcome to just point me to docs; I’m not trying to be a mooch here, just trying to get over this hump quickly.

The Audio project template is just a quick way of setting up a window with some audio and MIDI I/O capabilities and isn’t really designed for more complex applications since the audio functionality is tightly coupled to the window itself.

I’d recommend starting a new project using the GUI template, adding the juce_audio_devices module, and taking a look at the docs for the AudioDeviceManager and AudioIODeviceCallback classes to set up your audio processing independently of the GUI.
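
A rough sketch of that recommendation might look like the following, with the audio engine implementing AudioIODeviceCallback directly (JUCE 6 signatures; the class name is illustrative, and it assumes juce_audio_devices is linked, so it's not standalone-compilable):

```
// Sketch: audio processing decoupled from any Component.
#include <juce_audio_devices/juce_audio_devices.h>

class AudioEngine : public juce::AudioIODeviceCallback
{
public:
    AudioEngine()
    {
        deviceManager.initialiseWithDefaultDevices (2, 2);
        deviceManager.addAudioCallback (this);
    }

    ~AudioEngine() override { deviceManager.removeAudioCallback (this); }

    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels,
                                int numSamples) override
    {
        // The equivalent of getNextAudioBlock(): fill the output buffers here.
        for (int ch = 0; ch < numOutputChannels; ++ch)
            juce::FloatVectorOperations::clear (outputChannelData[ch], numSamples);
    }

    void audioDeviceAboutToStart (juce::AudioIODevice*) override {}  // like prepareToPlay()
    void audioDeviceStopped() override {}                            // like releaseResources()

private:
    juce::AudioDeviceManager deviceManager;
};
```

Since this class never touches a Component, it can be constructed unconditionally, while window creation stays behind the Desktop::isHeadless() check shown earlier in the thread.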

Thanks for this tip! I think with this info it should be possible for me to make a template for dual headless/GUI modes in a single project. @edsut I’ll share it on GitHub and post it here; it sounds like you could refactor your codebase using the template once you have it.

@kcoul,
That would be great, thanks.
Ed