Thanks @dave96, I think Dave has very concisely summed up my 1 hour talk above.
I would also recommend you don’t rely on writing the scripts in whatever format your CI provides (so via the yaml files in GitLab). Write your scripts in whatever language suits you or your project best (I highly recommend Python) and call those scripts from the yaml file instead. It can be nice to add arguments to your script(s) so you only have, say, a build_installers.py script (could potentially add a test.py to that list) and pass in the formats / architectures you want to build/test (but have it build everything by default, to make your life easier locally). At the end of the day though, it’s whatever suits your project(s) best.
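To sketch what I mean (this is a made-up example, not a real project script; the formats, architectures, and the `build` body are placeholder assumptions):

```python
# Hypothetical build_installers.py -- a sketch of the "one script with
# arguments" idea. Everything here is illustrative, not a real build.
import argparse

ALL_FORMATS = ["vst3", "au", "aax"]   # assumed plugin formats
ALL_ARCHS = ["x86_64", "arm64"]       # assumed architectures


def build(fmt: str, arch: str) -> None:
    # Replace this with your real build invocation (cmake, xcodebuild, ...)
    print(f"Building {fmt} for {arch}")


def main(argv=None):
    parser = argparse.ArgumentParser(description="Build plugin installers")
    # Defaults build everything, so running it locally with no args "just works"
    parser.add_argument("--formats", nargs="*", default=ALL_FORMATS)
    parser.add_argument("--archs", nargs="*", default=ALL_ARCHS)
    args = parser.parse_args(argv)
    for fmt in args.formats:
        for arch in args.archs:
            build(fmt, arch)


if __name__ == "__main__":
    main()
```

The GitLab job body then reduces to a one-liner like `python build_installers.py --formats vst3 au`, and running the script with no arguments locally builds the lot.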
My rationale for the above recommendation is…
- It allows you to switch to another CI service in the future with relative ease
- Testing the build process becomes much easier
- Because testing is easier, it’s also easier to tell whether an issue lies with the CI or with your build. Without this there can be a temptation to blame the CI for issues that are really in your build process!
- It encourages a clean and easy build process that makes it easy for any developer to check out and build
Another tip is to use environment variables to configure certain aspects of the scripting (e.g. which certificate to sign with, passwords, usernames).
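Something like the following is what I have in mind (the variable names are assumptions, pick whatever suits your setup):

```python
# Sketch: pull signing configuration from environment variables so secrets
# never live in the repo or the yaml. The variable names here are made up.
import os


def signing_config():
    return {
        # os.environ[...] fails loudly with a KeyError if a required
        # secret is missing, which is what you want in CI
        "certificate": os.environ["SIGNING_CERT_ID"],
        "username": os.environ["SIGNING_USER"],
        # .get() with a default for genuinely optional settings
        "timestamp_url": os.environ.get(
            "TIMESTAMP_URL", "http://timestamp.example.com"
        ),
    }
```

In GitLab these can then be set as masked CI/CD variables in the project settings, and locally you just export them in your shell.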
Regarding testing specifically: we used to test the plugins once after they are built (in the same job), then again in the installer job before the installer is created. This helps catch issues where simply copying an artifact from one job to another causes breakage (copying files across different file systems can break some signatures; the fix is to archive, then unarchive, the artifacts. GitLab may even have an option for this nowadays, I can’t remember to be honest). You might choose to run a smaller subset of basic tests for the second pass though.
Another thing I would say is, much like optimising your code, don’t optimise your build times until you’ve shown they’re a problem! When you do need to optimise, prefer parallel builds with simple scripts (that take arguments) over complex build scripts that try to be very clever. Cleverness is often a waste of time: at some point the complexity holds you up while you’re trying to figure out what has gone wrong somewhere down the line, which can completely outweigh the speed-up you gained in the first place. Not to mention that once you have CI working well you probably won’t even notice the wait between the commit and the build.
In regards to tests specifically, I believe GitLab now supports JUnit reports (https://docs.gitlab.com/ee/ci/junit_test_reports.html). This is a pretty easy format to generate in something like Python, as it’s just XML with some very simple attributes. It should offer a clean way to see exactly what has passed/failed. I actually hadn’t realised GitLab had added this until I started writing this response, so I haven’t tried it myself yet.
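To show how simple the format is, here’s a rough stdlib-only sketch. I’m using the common `testsuite`/`testcase`/`failure` element names from the JUnit schema; the test data is obviously made up:

```python
# Sketch: emit a minimal JUnit-style XML report from plain Python.
# results is a list of (test_name, error_message_or_None) tuples.
import xml.etree.ElementTree as ET


def junit_report(results, path="report.xml"):
    failures = sum(1 for _, err in results if err)
    suite = ET.Element(
        "testsuite",
        name="plugin-tests",
        tests=str(len(results)),
        failures=str(failures),
    )
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name, classname="plugin")
        if err:
            # A <failure> child marks the case as failed in the report
            ET.SubElement(case, "failure", message=err).text = err
    ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)
```

Point the job’s `artifacts:reports:junit` at the generated file and GitLab should pick it up.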