Unit tests and continuous integration


Hello guys!

I would like to know if any of you have experience with automated unit testing. One of my clients is working on a continuous integration system to create the installers for their plug-ins whenever someone updates the git repository, and we would also like to include some kind of automated testing, both for the code and for the plug-ins themselves, to check a few things with tools like PluginVal or directly in some DAWs to make sure everything is working properly. We don’t know how to start there yet, so any suggestions of tools, reports from your experience, or any source of information about the topic would be much appreciated. They use GitLab for both git and CI.

Thanks in advance!



@Anthony_Nicholls gave a great talk on this a few months ago: https://skillsmatter.com/skillscasts/12544-audio-developers-meet-up

Definitely worth watching that to get an overview.

In my experience, the best way to get up and running with CI is to think about cloning a fresh copy of the repo and running a single script which builds all your products, runs any tests and then builds installers etc. (and also runs any tests it needs to with these).

Once you’ve got that, setting up CI is really just a matter of translating your build system to the CI system.

Hope that helps but maybe some more specific questions would help you more?


Thanks @dave96, I think Dave has very concisely summed up my 1 hour talk above.

I would also recommend that you don’t write your build logic directly in whatever format your CI provides (so, via the YAML files in GitLab). Write scripts in whatever language suits you or your project best (I highly recommend Python) and call those scripts from the YAML file instead. It can be nice to add arguments to your script(s) so you only have, say, a build.py and a build_installers.py script (you could potentially add a test.py to that list), and pass in the formats/architectures you want to build/test. However, have them build everything they can by default, to make your life easier locally. At the end of the day, though, it’s whatever suits your project(s) best.
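As a rough illustration of that layout, a build.py sketch might look like this (the script name, the format list, and the build step are all hypothetical, not from any real project):

```python
# Hypothetical build.py: the GitLab YAML just calls this script.
# Everything is built by default; CI jobs pass a subset via --formats.
import argparse
import sys

FORMATS = ["vst3", "au", "aax", "standalone"]  # illustrative format names

def parse_args(argv):
    parser = argparse.ArgumentParser(description="Build plug-in targets")
    parser.add_argument("--formats", nargs="*", default=FORMATS,
                        help="subset of formats to build (default: all)")
    parser.add_argument("--config", default="Release",
                        choices=["Debug", "Release"])
    return parser.parse_args(argv)

def build(args):
    for fmt in args.formats:
        print(f"Building {fmt} ({args.config})...")
        # invoke xcodebuild / msbuild / cmake here
    return 0  # non-zero exit code signals failure to CI

if __name__ == "__main__":
    sys.exit(build(parse_args(sys.argv[1:])))
```

Locally you’d just run `python build.py`; a CI job might run `python build.py --formats vst3 au`.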

My rationale for the above recommendation is…

  1. It allows you to switch to another CI service in the future with relative ease
  2. Testing the build process becomes much easier
  3. Because testing is easier it’s also easier to spot when there is an issue with the CI and when it’s with your build. Without this there can be the temptation to blame the CI for issues that are really related to your build process!
  4. It encourages a clean and easy build process that makes it easy for any developer to check out and build

Another tip is to use environment variables to configure certain aspects of the scripting (e.g. which certificate to sign with, passwords, user names).
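For example, a sketch of reading that configuration from the environment (the variable names here are invented for illustration; in GitLab they would be set as protected CI variables):

```python
# Pull signing configuration from environment variables so secrets never
# live in the repo or the YAML. Variable names are made up for this sketch.
import os

def signing_config(env=None):
    env = os.environ if env is None else env
    required = ("SIGN_CERT_ID", "SIGN_USER", "SIGN_PASSWORD")
    missing = [k for k in required if k not in env]
    if missing:
        # fail early with a clear message rather than mid-build
        raise RuntimeError(f"Missing environment variables: {missing}")
    return {
        "cert": env["SIGN_CERT_ID"],
        "user": env["SIGN_USER"],
        "password": env["SIGN_PASSWORD"],
    }
```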

Regarding testing specifically: we used to test the plug-ins once after they were built (in the same job), then again before the installer was created, in the installer job. This helps catch problems caused by simply copying an artifact from one job to another (copying files across different file systems can break some signatures; to fix this, archive and then unarchive the artifacts. GitLab may even have an option for this nowadays, I can’t remember to be honest). You might choose to run a smaller subset of basic tests for the second pass.
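The archive-then-unarchive trick can be as simple as tarring the artifact directory before handing it to the next job. A minimal sketch using only the standard library:

```python
# Tar the build products before passing them between jobs, so metadata
# that a plain file copy can lose survives the transfer; untar in the
# receiving job before running tests or building the installer.
import tarfile
from pathlib import Path

def archive(src_dir: str, dest_tar: str) -> None:
    with tarfile.open(dest_tar, "w:gz") as tar:
        tar.add(src_dir, arcname=Path(src_dir).name)

def unarchive(src_tar: str, dest_dir: str) -> None:
    with tarfile.open(src_tar, "r:gz") as tar:
        tar.extractall(dest_dir)
```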

Another thing I would say is, much like optimising your code, don’t optimise your build times until you’ve shown it’s a problem! When you do need to optimise, prefer running simple scripts (that take arguments) as parallel builds over complex build scripts that try to be very clever. The cleverness is often a waste of time: at some point the complexity holds you up while trying to figure out what has gone wrong somewhere down the line, which can completely outweigh the speed-up you gained in the first place. Not to mention that once you have CI working well, you probably won’t even notice the wait between the commit and the build.

In regards to tests specifically, I believe GitLab now supports JUnit reports (https://docs.gitlab.com/ee/ci/junit_test_reports.html). This is a pretty easy format to generate in something like Python, as it’s just XML with some very simple attributes, and it should offer a clean way to see exactly what has passed/failed. I actually hadn’t realised GitLab had added this until I started writing this response, so I haven’t tried it myself yet.
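Generating such a report really is only a few lines of Python. A minimal sketch (the suite name and test data here are invented; the attributes follow the common JUnit schema):

```python
# Build a JUnit-style XML report that GitLab's test-report feature can parse.
import xml.etree.ElementTree as ET

def junit_report(suite_name, results):
    """results: list of (test_name, error_message_or_None) tuples."""
    failures = sum(1 for _, err in results if err)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            # a <failure> child marks the testcase as failed
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")
```

The resulting file just needs to be declared as a report artifact in the GitLab job so the UI can pick it up.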


I would also suggest starting small and adding things progressively: get PluginVal working first, tweak your CI, get the JUnit stuff working, and so on. auval is another easy win; there’s also the VST3 validator, and MrsWatson (VST2), and I believe a validator for AAX plug-ins is available from Avid, which I never actually got around to using. I never tried using any DAWs in CI: that always seemed far too complicated and error prone, and by the time you’ve done the above, the gain vs effort doesn’t pay off so well. VST3 also has a special class for adding unit tests, which I believe will be run by the validator, but again I never actually got around to doing that. JUCE unit tests or Google Test are obviously another win.

When to trigger the tests then becomes the question. If I remember correctly, we went down the route of generating a special build; for example, you could have the standalone app run the JUCE unit tests when opened from the command line with a particular argument. Ultimately you need to rely on logging and return codes, so we would return the number of failed tests as the exit code to indicate a failure, for example. JUnit should make that a whole lot easier, though, as you just need to generate the file(s) and tell GitLab where to look for them.
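For the return-code side of this, a small wrapper along these lines works for most validators. The pluginval flags in the comment are from its command-line help, but treat the exact invocation as an assumption:

```python
# Run an external validator and turn its exit code into a pass/fail,
# capturing the output for logging. Works for pluginval, auval, etc.,
# since they all report failure via a non-zero exit code.
import subprocess

def run_validator(cmd):
    """Run a validator command; return (passed, combined_output)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

# Example invocation (not run here; path and flags are illustrative):
# passed, log = run_validator(
#     ["pluginval", "--strictness-level", "5", "--validate", "MyPlugin.vst3"])
```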


Thanks a lot for all the answers!

I have passed all the information on to my clients.

GitLab is already being used to generate all the installers from the code, with compilation of all the formats and the protection scheme implementation already working. I’ll let you know if we have other questions :wink:


One thing that is tricky is code signing that requires USB dongles/cryptographic tokens. On Windows, signing the (un)installer is OK because only the Authenticode certificate is required, and it works really well in a VM (using VirtualBox). But PACE signing is more tedious: both Windows and macOS require access to the iLok dongle. As I only have one in my possession, I ended up using a real Mac as my build server, with some kind of orchestration to start/stop slave VMs for macOS/Windows in turn, so each one can use the iLok. I’d be happy to know if anyone has anything better? I don’t know if one can have two iLoks with access to the PACE signing tools.


I’ve found the only feasible way to sign with PACE is to have a physical dongle attached to each build machine. You can register multiple iLoks with the Eden tools if you email PACE.

It’s really not a very nice solution though as ideally I’d like to move all of this to the cloud…


Oh good to know, thanks! Yeah, I’d prefer not having to manage my own build machines either. But well, once it’s all set, it’s pretty stable, fun to watch and comfortable to work with.


This is true. Until you have to update to Xcode 10 and lose the ability to build your legacy 32-bit projects or go install Xcode 9 alongside it and add a call to xcode-select to all your legacy build scripts.


I have to admit I no longer want to consider making 32-bit builds on macOS. I’m not sure what small share of users that would represent; regarding 32-bit support, I’d guess there’s a bigger user base on Windows. But for the sake of curiosity, what exactly fails when you try to build your legacy 32-bit projects?


Well you can’t build 32-bit archs at all on Xcode 10, it’s just not a valid target.
So if you upgrade to Xcode 10, and all your legacy build scripts just use xcodebuild, then they’ll fail.

It means if you have shipping products that have 32-bit versions you can’t easily release update for them.

For new products this isn’t really an issue. It’s when you have customers on old OSes that this becomes a problem. And tbh, the biggest problem is explaining that there won’t be any updates for 32-bit versions, and future installers simply won’t include them. It’s more about the messaging than anything else. That and having to update dozens of legacy projects, installer project files, build scripts, tests etc. just because one architecture isn’t there anymore and you want your CI to pass.


IMO there are two or three sane ways to deal with PACE for CI…

The first is really for the simplest use cases (signing only): have a specific machine (well, one for Windows and one for macOS) dedicated to PACE signing, with the signing done by separate jobs. This means your builds can be done in parallel on multiple machines, and the artifact required for signing is then passed to the job that runs on the signing machine. It also means one less thing to install and maintain on your VMs/build machines.

The second method which is as good as required for more complex cases (wrap and sign) is to have a PACE dongle per VM or build node.

Achieving this second option is not easy. The most obvious approach is having physical machines or VMs that are constantly running and have the dongles allocated to them. If, however, you want the VMs to be spun up only when required and set to some known state before a job starts (like running in the cloud), then you need some way for each VM that is spun up to grab exactly one dongle that no other VM will also try to grab.

When I was at Sonnox we managed this with a nifty script I wrote using VirtualBox. VirtualBox allows filters for which USB devices it will grab. The only way to tell the dongles apart was the serial numbers if I remember correctly. The script would take a master VM image and then cycle through all the available iLoks and create a linked clone of the VM for each dongle. It then registered these clones with GitLab using GitLab runner. Due to tags applied to the VM clones the jobs could run on any one of the registered VMs. The script would be run after we had updated the master VM image. All the VMs in our case were running from a single Linux server with all the iLoks plugged in via powered hubs.
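A stripped-down sketch of that clone-per-dongle idea, just building the VBoxManage command lines (the real script did more, e.g. registering each clone as a GitLab runner; linked clones also need a snapshot to clone from, which is omitted here, and the naming scheme is invented):

```python
# For each iLok serial number, create a linked clone of a master VM and
# attach a USB filter so that clone grabs exactly one specific dongle.
# The VBoxManage subcommands (clonevm, usbfilter add) are real; the flow
# here is illustrative only, so we build the commands without running them.
def clone_commands(master_vm, serials):
    cmds = []
    for i, serial in enumerate(serials):
        clone = f"{master_vm}-runner-{i}"
        cmds.append(["VBoxManage", "clonevm", master_vm,
                     "--name", clone, "--options", "link", "--register"])
        # filter by serial number so no two clones fight over one dongle
        cmds.append(["VBoxManage", "usbfilter", "add", "0",
                     "--target", clone, "--name", f"iLok-{serial}",
                     "--serialnumber", serial])
    return cmds

# A real script would run each command with subprocess.run(cmd, check=True).
```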

I’ve heard of people running iLoks on another machine and having software emulate the iLok being plugged in locally, but this is not supported as far as I’m aware, and I’ve heard of nasty results from trying it on occasion.

There is also a third, hybrid approach, where you use the first approach for the signing, but for the wrapping you add network licence support to your own licences and run a machine that acts as a PACE licence server. You would need to have a good chat with PACE to go down this route. The nice thing about it is that each licence just has enough seats, so they get grabbed as required; I think this way you can keep the number of physical iLoks down too, as all the licences can be on the one iLok, I believe. However, it’s not possible to do this for the signing (unless things have changed in the last year or so) :frowning:


We’ve been using the JUnit parsing for a while, but it only shows the results in Merge Requests, not for every run of the pipeline which is a little bit annoying!


That’s a shame!


Regarding CI and automated testing, I can’t stress enough how right Ant is about writing scripts and simply calling them from the CI. This approach has made our lives easier a number of times!

Our current plug-in testing is using PluginVal and AAX Val, as well as PACE and OS digital signature checks, all using scripts written in python to hide away complexity.
I think we wrote a python script at some point to translate AU Val output to JUnit, but I don’t know what happened to it…

These tests have so far been enough to catch any general major plug-in bugs, but need to be used alongside more plug-in-specific and framework unit tests!


It’s also a shame they haven’t merged this yet!!!


There are a few fairly fundamental issues with GitLab’s CI:

  • Concurrent jobs on a single Runner sometimes run in the same CI_PROJECT_DIR
  • There’s no “fail fast” if a single job in a parallel build fails

Both of these are blockers from the JUCE team’s perspective


Can’t say I ever recall hitting that issue, but then again we would have always had concurrent jobs running inside individual VMs - to be honest I wouldn’t recommend running concurrent jobs in a single runner. It just sounds like it’s asking for problems.

That’s a good point!


I have a similar USB dongle sharing setup.
I’m cheap, so I have a single iLok dongle plugged into a mac mini that dual boots mac/windows. I have the Mac OS schedule restarts at midnight and the Windows OS schedule restarts at noon. Then I have the boot loader load Windows if it’s between 6pm and 6am and otherwise load Mac.

This gives me a machine that alternates mac and windows on an automated daily schedule. Then I just build and run my tests on each OS as a startup script.


Pros:

  • Cheap
  • The OS scheduled restart wakes from sleep so after the CI tests run I can put each OS to sleep and save power
  • No VM USB sharing to deal with


Cons:

  • It’s a hack
  • Daily builds instead of on push (though I have gitlab CI running tests on push)
  • The build computer is in the guest room and some of my integration tests connect to audio out. So some people that stay with me think they hear ghosts in the night.

The other thing we need that GitLab doesn’t support is some kind of DAG for working out dependencies. For example we have lots of things we want to build in parallel, across multiple different machines. One of these is the Projucer, after which we want to resave a lot of projects and build those in parallel too. With GitLab’s job descriptions we have to wait until all of the non-Projucer-dependent jobs complete before we can start on the post-Projucer jobs.