Good command-line workflows to postpone UI dev as much as possible?

Hi all,

I’ve been exploring JUCE and audio dev for the past 3-4 months, with the goal of getting a good overall grasp of the different tech and complexity involved in developing audio plugins. A big pain point for me is still UI and frontend dev, because compared to DSP and audio-related things there seem to be many equally valid ways of doing things, especially with webviews joining JUCE (and other new tools like Cmajor relying entirely on them). I’ve found myself spending a lot of time checking out frontend frameworks and wondering “would learning this one thing pay off better in the long run than that other thing?” (note that the JUCE UI tools count as a “frontend framework” for me)

I guess having learned about multiple technologies and practices in the field will pay off in the long run, because I’ll know more about how rendering works, what performs well and what doesn’t, etc. But for now I’m considering going the completely opposite way and trying to code as much as possible without any UI. I checked out the Vital code from Matt Tytel, and I noticed there is a “headless” build, which seemed crazy to me at first. Until now my “headless” builds have been plugins which expose their parameters to the DAW, but some functionality doesn’t translate this way (like assigning modulators to parameters): do a lot of you build command-line tools for plugin testing, or do you instead build mockups with JUCE components that “do the job” until you tackle the UI layout and aesthetics in a final stage?

Every UI is based on reading and updating “backend” values via some kind of API. So even if you have something like drag-and-drop of a modifier onto a parameter to create a modulation, expressed in code it would look something like:

{
  "target": "osc1",
  "modifier": "lfo1",
  "action": "connect",
  "value": etc…
}
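On the backend side, a message like that can be parsed with the stock juce::JSON / juce::var API. A minimal sketch (handleMessage and connectModulator are hypothetical names standing in for your own engine code, not part of JUCE):

#include <juce_core/juce_core.h>

// Hypothetical hook into your own modulation engine.
void connectModulator (const juce::String& target, const juce::String& modifier);

void handleMessage (const juce::String& text)
{
    // juce::JSON::parse() returns a juce::var (a void var on invalid input).
    auto msg = juce::JSON::parse (text);

    if (msg.getProperty ("action", {}).toString() == "connect")
        connectModulator (msg.getProperty ("target",   {}).toString(),
                          msg.getProperty ("modifier", {}).toString());
}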

So I would even advise staying away from UI and making sure first that the API is solid. You can organize some testing with apps like Postman; this is how I test our app backend, because I have no clue about frontend code :slight_smile:

3 Likes

Ok I see, thanks for this answer!
I don’t fully understand where Postman sits in all this. Is it a tool that lets you exercise the API at runtime (like sending stuff to the backend in a form similar to what you wrote above) to check that your backend works? This is precisely what I’m wondering: how do I test that my plugin works as expected without even having built a rudimentary UI?

Perhaps pluginval is also a good starting point for making your own testing tools (it only tests stability and compatibility).

1 Like

Yes, sorry, maybe I was moving too fast in my answer.

Basically, a lot of modern software is built in a fashion where you have some backend (C++/JUCE in our case) and a frontend that is usually built with web tools.

So your JUCE app will show a webview, the content of that webview is a webpage, and everything that happens on that webpage is the concern of the frontend developer(s). It’s typical to offload UI work to some kind of web technology because, even though frameworks such as JUCE or even Qt have their own ways of building UIs, it is almost impossible to keep up with how fast web UIs are progressing (and, with them, user expectations of what a good UI/UX is).

Now that we’ve established our stack to be JUCE on the backend plus, say, Vue.js on the frontend, we need a way for the frontend and backend to communicate. Typically this is done via HTTP requests or WebSocket messages, or both.

Now, HTTP/WebSocket is just a means of communication, i.e. “how” you send your data; you also have to answer the question of “what kind” of data you are sending. In the case of a JUCE app, this is most likely some kind of JSON message (like the one I posted in my first answer).
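As a sketch of what producing such a message looks like on the C++ side (the function name is hypothetical; the property names simply mirror the example message from the first answer):

juce::String makeConnectMessage()
{
    auto* obj = new juce::DynamicObject();
    obj->setProperty ("target",   "osc1");
    obj->setProperty ("modifier", "lfo1");
    obj->setProperty ("action",   "connect");

    // juce::var takes ownership of the reference-counted DynamicObject.
    return juce::JSON::toString (juce::var (obj));
}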

And this is where Postman becomes handy: Postman can create WebSocket and HTTP requests for you, giving JUCE the impression that you are actually talking to it from the frontend, when in reality it’s only the messages; you don’t need to draw any buttons or think about HTML/CSS and all that stuff.

If you want to future-proof yourself and are building this with something ambitious in mind, I suggest you look at this route, because this is more or less how all modern apps are built (from my limited experience, I might be wrong).

On the other hand, if you just want a couple of buttons and a few sliders, going the JUCE UI tutorial route would be infinitely simpler and faster.

1 Like

Yes, I agree that a command-line interface is important.

For us, 99% of our code is in JUCE-style modules.

That means that I can relatively easily spawn almost all of our DSP code in a very simple command line app and test it in isolation, and then perhaps use that testing code in a unit test.

Going through that process also improves the code itself: if the code is too difficult to spawn in a simple command-line app, that leads me to do some needed refactoring, which improves the non-command-line version of the code as a result.

3 Likes

@eyalamir That’s interesting. Could you give examples of how you would test DSP code with a command line tool?

At the risk of exposing my noobness (probably already exposed) :slightly_smiling_face:
@mchagneux You could also use juce::GenericAudioProcessorEditor to skip the UI setup while you’re working on your DSP;
you just need to set up your APVTS parameters first.
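For reference, hooking that up is a one-liner in the processor; a minimal sketch, assuming a typical MyProcessor subclass of juce::AudioProcessor:

// The stock generic editor builds sliders/boxes from the processor's parameters.
juce::AudioProcessorEditor* MyProcessor::createEditor()
{
    return new juce::GenericAudioProcessorEditor (*this);
}

bool MyProcessor::hasEditor() const { return true; }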

1 Like

I see, thanks for the detailed answer!

Yeah, this confirms what I was thinking then :slight_smile: I need to spend some time setting that up.

Oh, I never did it that way. I think I ended up using that generic editor somehow, because I sometimes rely on the AudioPluginHost, which creates some kind of interface based on the APVTS whenever the loaded plugin doesn’t have an editor :slight_smile: thanks!

Great tip on the GenericAudioProcessorEditor. However, you don’t even need the APVTS: the editor uses the lower-level interface to the AudioProcessor via getParameters() or getParameterTree().

That means it doesn’t matter how you added your parameters to the AudioProcessor.
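For instance, a parameter added directly with addParameter() shows up in the generic editor just the same; a sketch, assuming a hypothetical juce::AudioParameterFloat* gain member:

// In the processor's constructor: a raw parameter, no APVTS involved.
// The generic editor discovers it through getParameters().
addParameter (gain = new juce::AudioParameterFloat ("gain", "Gain",
                                                    0.0f, 1.0f, 0.5f));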

2 Likes

Sure, so:
Say you have some code set up in a module called mydsp; you can then set up a trivial command-line app like:

#include <mydsp/mydsp.h>

//Implement with some hard coded buffer, like an impulse response
//load wav from a file, etc.
juce::AudioBuffer<float> getTestingBuffer();

//Implement to reflect on the buffer somehow
//maybe phase invert the result, or log some samples, etc
void validate(const juce::AudioBuffer<float>& buffer);


int main()
{
    DSP::MyFilter filter;
    filter.prepare(44100.0, 128);

    //set some parameters I want to test on the filter, then:
    auto buffer = getTestingBuffer();
    filter.process(buffer);
    validate(buffer);

    return 0;
}
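As one possible filler for the getTestingBuffer() stub above, a unit impulse makes the filter’s impulse response directly visible in the processed buffer:

juce::AudioBuffer<float> getTestingBuffer()
{
    juce::AudioBuffer<float> buffer (1, 128); // mono, one block
    buffer.clear();
    buffer.setSample (0, 0, 1.0f);            // unit impulse at sample 0
    return buffer;
}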

In CMake land, that would look like:

project(TestDSP VERSION 0.1)

juce_add_console_app(TestDSP PRODUCT_NAME "TestDSP")
target_sources(TestDSP PRIVATE Main.cpp)

# If mydsp requires juce_audio_basics, there's no need to explicitly link JUCE here:
target_link_libraries(TestDSP PRIVATE mydsp)

Notice that this has a couple of preconditions:

  1. There is an actual DSP::MyFilter that exists as a simple standalone class.

You can also spawn your full-on plugin that way if you want, but that might lead to a slightly more complicated setup, which in many cases isn’t needed when you’re testing/developing a specific component.

  2. You need to make sure your class in a module can be easily spawned and built.
    Here’s an example of one of the simplest modules you can build, with a standalone white-noise generator:
    JUCECmakeRepoPrototype/Modules/shared_processing_code at master · eyalamirmusic/JUCECmakeRepoPrototype · GitHub

Hope that helps. :slight_smile:

1 Like

We have a set of features in our engine that allow us to create “Audio Effect” classes, and generate plugins with automated UIs from those classes to load into the DAW. All we have to do is write the DSP and set up the parameter mappings and ranges.

But it’s a fair amount of work.

@eyalamir is it possible to use the command-line tool in real time, including assigning inputs and outputs?

Sure, I don’t see why not.

You can look at this example by @xenakios:

I do have to say that if I need to have a full on audio loop that plays sound, I’d usually have a testing plugin (usually with an empty/generic editor UI) that also has the audio settings window and all that.

To me, a big advantage of the command-line workflow is having quick feedback loops and predictable I/O; otherwise it’s pretty much the same with and without a UI.

I would, however, keep that testing plugin (or several testing plugins) external to my main plugin, to have a cleaner test bench for whatever I’m working on right now.

For example with a plugin like Beat Scholar I have a standalone plugin that tests the sampler, without any UI and without the sequencer, etc.

3 Likes

Very helpful, I’ll try it out soon
Thanks for the detailed explanation!

I’m a big proponent of automated unit tests. You can use a framework like Catch2 or GoogleTest and then register your test cases with ctest, so the workflow basically becomes:

  • cmake (to configure)

  • cmake --build

  • ctest

And the goal is to have a wide range of tests that you can run quickly & continuously after making changes.

The real power of ctest is that you can register any command as a test, so you can have unit tests, but you can also set up things like pluginval, auval, etc to be run through ctest.
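To make that concrete, here is what a minimal Catch2 (v3) case might look like, reusing the hypothetical DSP::MyFilter from the earlier example; Catch2’s catch_discover_tests() CMake helper then registers each TEST_CASE with ctest:

#include <catch2/catch_test_macros.hpp>
#include <mydsp/mydsp.h>

TEST_CASE ("MyFilter leaves silence silent")
{
    DSP::MyFilter filter;
    filter.prepare (44100.0, 128);

    juce::AudioBuffer<float> buffer (2, 128);
    buffer.clear();
    filter.process (buffer);

    // A well-behaved filter shouldn't produce energy from silence.
    REQUIRE (buffer.getMagnitude (0, buffer.getNumSamples()) < 1.0e-6f);
}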

This is a great tool that sounds like what you’re looking for: GitHub - CrushedPixel/Plugalyzer: Command-line VST3, AU and LADSPA plugin host for easier debugging of audio plugins

It allows you to load a plugin and specify parameter values and input & output files. This could also be used in a ctest test case.

3 Likes

What do you mean by “quick feedback loops”?

A quick feedback loop is when you write some code, compile, and launch the plugin to test your modifications, then ‘loop’ back to making more code modifications based on running the plugin. The quicker you can compile and launch the plugin, the quicker you can make progress.
What can slow this down is when you make a small change but it results in having to recompile a bunch of unrelated files, e.g. you change something in the GUI and notice that the Processor is getting rebuilt because the code is too tightly coupled.
I worked on one plugin where they used a macro to remove most of the Processor from DEBUG builds in order to get a tighter development ‘loop’ when working on the GUI (see the sketch below). I’ve also done the opposite: use the plain JUCE default GUI while developing the DSP.
IMHO the best plugin frameworks make this easy by having a clean separation of concerns between the UI code and the Processor. JUCE is not one of these.
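A sketch of that macro trick; GUI_ONLY_BUILD, Engine and the engine member are hypothetical names, and the point is that the heavy engine headers never get compiled into GUI-only builds:

// PluginProcessor.h holds only a forward declaration and a pointer,
// so GUI-only builds never pull in the engine's headers.
class Engine;

void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
   #if GUI_ONLY_BUILD
    buffer.clear(); // DSP stubbed out: iterate on the GUI without rebuilding the engine
   #else
    engine->process (buffer); // e.g. a std::unique_ptr<Engine> member
   #endif
}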

3 Likes

The more code you can put in .cpp files instead of .h files, the better for this particular aspect.

On the other hand, it is much quicker to write short functions inline directly in the .h, but that comes at the cost of having to rebuild all the .cpp files that include that header every time one modifies the body of those functions even slightly.

I’m told that build times would also be improved if C++20 modules were available, but they are not supported in Xcode at the moment.
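In other words, with a hypothetical Filter class: keep the declaration in the header and the body in the .cpp, so editing the body recompiles a single translation unit instead of everything that includes the header:

// Filter.h — included by many files; only the declaration lives here.
class Filter
{
public:
    float processSample (float input); // body in Filter.cpp
private:
    float gain = 1.0f;
};

// Filter.cpp — editing this body rebuilds just this one file.
float Filter::processSample (float input)
{
    return input * gain;
}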

2 Likes