Support for 3rd party modules in Projucer

I have to agree with Dave on this point: you really only want to update modules explicitly. I wouldn’t want them updated implicitly unless I’d explicitly asked for it.

However, the advantage as I see it is that these modules could be pulled in without you needing the whole repo, with its history and files unrelated to what you’re trying to build. They could also be pulled into a single folder, so the Projucer only has to reference that one folder rather than pointing at a bunch of different places to find the modules. It would also make inter-module dependencies simple - that is, until two modules want the same dependency at different versions.

Sure, it’ll take a while to get there, but it does look like this is where things are going to end up eventually, so we need to have a plan. And if the big 3 compilers all support it by next year (which seems likely) then it looks like it’s going to happen a lot sooner than we expected.

Yes, that’s why you need to have explicit version numbers in a dependency manager. As I stated above, that is one of the main issues that any package manager has to deal with - preventing broken dependencies.

What if, say, a module dependency, or even JUCE, has a breaking change, which means your module needs to adapt to its dependencies or risk breaking people’s projects? This is why dependency versioning is important.

I don’t think global modules are a good idea either. Each module should be referenced by name/location/url along with a version number.

Sure, Git is sufficient and it does the job, but, having used a variety of package managers for different languages and frameworks over the years, a dedicated package manager is pretty much always easier and cleaner to use than Git submodules. The whole point is that you don’t have to update things manually - you just declare a module’s name/url and version in the manifest (which in our case is the .jucer project, or the module’s manifest) and then run an install or update command, and the package manager fetches or updates the code for you. That way all the information is in one place - you can see the dependencies and what versions they are at - and the package manager automates the installation and update (or removal) process.

Keeping it to Git submodules makes it a lot more tedious to manage - say you need to update a module’s version: firstly, how do you know which version it is currently at? Git will only tell you the commit hash! Then you have to run `git checkout <branch/tag>` from within the module’s Git folder. And then, what if you want to make use of another third-party module in your module… nested Git submodules… :grimacing:

1 Like

Yeah, I guess you can go down the version number route. It just feels error prone, especially without long-established APIs.

If you allow different versions, you’ll almost certainly run into issues with modules requiring different versions of other modules, and end up with ODR violations or, at best, duplicate-symbol errors.
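Just to illustrate the duplicate-dependency problem (all names below are invented for the example): suppose module A bundles version 1 of some helper library and module B bundles version 2, and both end up in the same application.

    // helpers_v1.cpp - vendored inside module A
    int helperApiVersion()  { return 1; }

    // helpers_v2.cpp - vendored inside module B
    int helperApiVersion()  { return 2; }

    // Linking an app that uses both modules either fails with a duplicate
    // symbol error, or, if the definitions are inline (e.g. header-only
    // helpers), quietly keeps one of them - an ODR violation, so the
    // behaviour is undefined and which version you get is anyone's guess.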

The whole thing is a bit of a mess which is why it hasn’t been solved easily by the C++ committee. This discussion is really just mirroring their process and I’m of the firm belief they just need to start with the basics rather than try to figure out all the really hard problems.

I’m also trying to be pragmatic about the scope of what the JUCE team can do. Write a package manager with versioning and recursive dependency checking? Sounds like a tall order. Create a website to list 3rd party modules? Much more manageable.

The whole point of the talk I linked to was not to write your own package manager, or even to package your code, but to structure it in a way that makes packaging possible with existing solutions. I think that’s very doable with the JUCE module format, and it would be nice if people could just use Conan, Hunter, vcpkg etc. if they wanted to. Making a JUCE-specific package manager seems like a bit of a wasted effort. There’s a bunch of other stuff I’d rather see in JUCE first.

But again, these are just my thoughts.

10 Likes

I totally agree that writing a fully fledged package manager is a big task. As I suggested above, it might be better as a community effort rather than expecting the JUCE team to build it.

I know, there are many pitfalls and complexities that go along with dependency versioning… it’s a potential rabbit hole.

My proposal here is to make it easy to install 3rd party modules from within Projucer, by adding a git url (with a branch, tag or commit hash) or unique name, as an alternative to a local file path.

I think that, along with a web page listing available modules, plus a tool to create a module skeleton, would be enough to promote and support the creation and usage of 3rd party JUCE modules.

But on macOS it will probably come in an Xcode version that no longer builds for i386 (32-bit arch) :confused:

If there is a website listing all the modules, I guess a simple-ish extension to that could live in the Projucer: when you click to add a module you get a sub-menu for third-party modules. Click on one and it opens a dialog to select where you want it to be cloned. If that location is inside a Git repo it calls git submodule add <repo>, otherwise it calls git clone <repo>. Then leave all the versioning down to Git - don’t allow the Projucer to control this at all?

The only catch is that it would encourage every third-party JUCE module to be an individual repo (maybe that’s not a bad thing?) - it wouldn’t really work to have multiple modules in one repo, or you would end up with multiple copies of that repo. Maybe this feature would only work if Git is installed on the command line too? That way it stays fairly simple: the Projucer won’t be trying to manage the modules (that would be left to Git), but it might encourage more third-party JUCE module development thanks to the visibility, and it would let the JUCE team get on with more important things.

In other words, it would make it easy to add third-party modules for the majority of use cases (those using the Projucer and Git).
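As a very rough sketch of how little the Projucer would need to do here, assuming it just shells out to whatever git is on the PATH (fetchModule is a hypothetical helper, not an existing Projucer function):

    #include <juce_core/juce_core.h>

    // Clone (or add as a submodule) a third-party module repo into the chosen
    // folder, leaving all version management to git itself.
    static bool fetchModule (const juce::String& repoUrl,
                             const juce::File& targetFolder,
                             bool parentIsAGitRepo)
    {
        juce::StringArray args ("git");

        if (parentIsAGitRepo)
        {
            // NB: a real implementation would also need to run this from
            // inside the enclosing repository for "submodule add" to work.
            args.add ("submodule");
            args.add ("add");
        }
        else
        {
            args.add ("clone");
        }

        args.add (repoUrl);
        args.add (targetFolder.getFullPathName());

        juce::ChildProcess git;

        if (! git.start (args))        // fails if git isn't installed / on the PATH
            return false;

        git.waitForProcessToFinish (60 * 1000);
        return git.getExitCode() == 0;
    }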

There could be a simple pull-request process for getting modules included in the list, requiring them to meet some basic criteria, such as passing a simple module validator - and maybe in the future they would have to pass some basic CI that compiles the module and runs its unit tests (if there are any) before being added?

All of the above could be written and managed by the community, including the additions to the Projucer.

I attempted to convert some of my code into a JUCE module, but AFAIK this required me to #include all of my .cpp files into a single translation unit, i.e. to let the preprocessor concatenate all my cpp files into one big file. Now my codebase, being idiomatic C++, relies on keeping the symbols of each translation unit private, so my code cannot currently build as a JUCE module. (My codebase is several hundred thousand lines of code, so it’s not practical for me to alter it too much, nor would I want to be restricted from using normal C++ conventions, or to lose the ability to perform fast incremental builds.)
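To make the problem concrete, here’s a purely illustrative example (invented names): two cpp files that are perfectly legal as separate translation units, but stop compiling once the module format concatenates them into one.

    // player.cpp - fine as its own translation unit
    namespace                                   // TU-private constant
    {
        constexpr int defaultBlockSize = 512;
    }
    int playerBlockSize()  { return defaultBlockSize; }

    // mixer.cpp - also fine on its own
    namespace                                   // an unrelated constant with the same name
    {
        constexpr int defaultBlockSize = 1024;
    }
    int mixerBlockSize()   { return defaultBlockSize; }

    // mymodule.cpp - the module's single compile unit
    #include "player.cpp"
    #include "mixer.cpp"   // error: redefinition of 'defaultBlockSize' - both
                           // anonymous namespaces are now merged into the same
                           // translation unit, so the "private" names collide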

So, perhaps you guys will consider enhancing the JUCE modules specification to support this use-case (multiple, isolated .cpp files), which I think is pretty common.

1 Like

This should be perfectly possible with the existing format by having multiple files at the root, each prefixed with the module name. If need be, each of these could simply include a single cpp file further down in the source tree.

Module CPP files
----------------

A module consists of a single header file and zero or more .cpp files. Fewer is better!

Ideally, a module could be a header-only module, so that a project can use it by simply
including the master header file.

For various reasons it's usually necessary or preferable to have a simpler header and
some .cpp files that the user's project should compile as stand-alone compile units.
In this case you should ideally provide just a single cpp file in the module's root
folder, and this should internally include all your other cpps from their sub-folders,
so that only a single cpp needs to be added to the user's project in order to completely
compile the module.

In some cases (e.g. if your module internally relies on 3rd-party code which can't be
easily combined into a single compile-unit) then you may have more than one source file
here, but avoid this if possible, as it will add a burden for users who are manually
adding these files to their projects.

The names of these source files must begin with the name of the module, but they can have
a number or other suffix if there is more than one.

In order to specify that a source file should only be compiled on a specific platform,
then the filename can be suffixed with one of the following strings:

_OSX
_Windows
_Linux
_Android
_iOS

e.g.
juce_mymodule/juce_mymodule_1.cpp         <- compiled on all platforms
juce_mymodule/juce_mymodule_2.cpp         <- compiled on all platforms
juce_mymodule/juce_mymodule_OSX.cpp       <- compiled only on OSX
juce_mymodule/juce_mymodule_Windows.cpp   <- compiled only on Windows

Often this isn't necessary, as in most cases you can easily add checks inside the files
to do different things depending on the platform, but this may be handy just to avoid
clutter in user projects where files aren't needed.

To simplify the use of obj-C++ there's also a special-case rule: If the folder contains
both a .mm and a .cpp file whose names are otherwise identical, then on OSX/iOS the .mm
will be used and the cpp ignored. (And vice-versa for other platforms, of course).
1 Like

I should also add that although it’s true that, within your own cpp files, you might have access to things you wouldn’t expect to (since everything ends up in a single translation unit), this won’t impact users of your module. Also, having a single cpp file should decrease build times compared with building all the cpp files separately, and it makes life easier for users of the module who aren’t taking advantage of the Projucer’s ability to automatically select and compile all the relevant files.
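For anyone unfamiliar with the pattern, a hypothetical module laid out this way needs nothing more than a root compile unit like the following (module and file names invented for illustration); the implementation files can stay wherever they are in sub-folders:

    // foo_dsp/foo_dsp.cpp - the module's single root compile unit
    #include "foo_dsp.h"

    // The real implementation lives in ordinary .cpp files in sub-folders;
    // this file just pulls them in so that users only ever compile one file.
    #include "source/filters.cpp"
    #include "source/oscillators.cpp"
    #include "source/envelopes.cpp"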

2 Likes

Thanks for the info Anthony!
In my case, it would be impractical to rename all 2000 source files in my library to that naming convention, because although I would like to offer this library as a JUCE module, that is merely one of its targets. Renaming all the files would be disruptive to my other (non-JUCE) consumers of the library. And once again, I believe JUCE modules would be more attractive to developers if we didn’t need to perform disruptive wholesale rewrites of our existing libraries just to suit this unconventional format.

Appreciate your help nonetheless!

1 Like

I think there are two ways you could handle this. In either case, make a new repo that contains your current repo as a submodule; this new repo will be the JUCE module. In it, either create a single cpp that includes all the other cpps, or, if that really is a problem, create 2000 corresponding cpp files that each contain a single line including one of the cpp files from the library - no renaming required. However, asking users to compile 2000 cpp files just to add a library seems a wee bit excessive to me.
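The second option amounts to generating wrapper files like these (paths purely hypothetical), so the original library stays untouched:

    // mylib_module/mylib_0001.cpp - one-line shim around an existing library file
    #include "../mylib/src/audio/Compressor.cpp"

    // mylib_module/mylib_0002.cpp
    #include "../mylib/src/audio/Limiter.cpp"

    // ...and so on, one wrapper per original .cpp, all prefixed with the module name.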

1 Like

Well to be fair, if it didn’t require you to structure your code in a particular way then it wouldn’t actually be a “format” at all!

The reason the module format is done the way it is is to force a library author to do the work of splitting their code into an optimal number of top-level compile units, so that users get a fast build time, and to enforce naming and layout conventions so that there’s no need for build scripts in order to know how to build it.

I don’t know if you’ve looked at the Tracktion Engine, but that’s also several hundred kloc across many hundreds of source files; if you keep things clean, all that’s needed is a few top-level files to collect it all together:

6 Likes

Hey guys, I’ve been eagerly following this discussion and was compelled to introduce myself! I’m one of the developers of the Buckaroo Package Manager. We are now actively looking at how we can make Buckaroo more suitable for specific communities like JUCE and Arduino.

We examined the JUCE module format and are confident that we can package it without changing the source code.

In order to test this out, could you point me to some open-source JUCE apps that I can build?

Regarding Projucer, Buckaroo already has some IDE support via compile_commands.json etc. We would love to add support for Projucer too. How does it integrate with existing build systems?

Regarding the state of C++ package managers in general, I’d like to answer some of the above questions and contrast solutions.

Where are packages stored, what is a version, and where does a package registry live?

Buckaroo

Buckaroo packages live directly in source-control and the package versions are read from Git tags and branches. Buckaroo does not need its own registry, since it leverages the Git infrastructure that already exists. This has a few benefits:

  • The publishing process is just a git push
  • You can “live at head” by depending on branches, like with Git submodules
  • Everyone is working from the same set of packages
  • There is no registry server to be hacked or go down

Conan

Conan packages live on a Conan Server, which acts as a registry. There might be multiple servers, so the user should maintain a list of the ones they are interested in. You can also add packages to a local registry (they call this the Conan cache) manually. Conan was designed with internal company use in mind, and there seem to be some hurdles to supporting the open-source community with the same model.

Vcpkg and Hunter

Vcpkg and Hunter do not have registries in the sense of NPM, but instead use a “cookbook” file that defines all of the packages. A cookbook is a curated list of installable packages, their versions and where to fetch them. Adding new packages means forking the cookbook and making a pull request. The reason we moved away from this approach for Buckaroo v2 is that it fragments the community based on which cookbook they are using. To consume someone’s package, you must adopt / merge their cookbook.

we could avoid having to use Git submodules.

Some people use Buckaroo as an enhanced version of submodules. Submodules don’t solve a couple of problems that you might encounter in bigger projects; most crucially, they do not flatten the dependency graph or resolve version constraints.

Feel free to ask me any questions.

8 Likes

:smiley: This sounds great!

:confused: Did you have a real look at the JUCE repository? It contains 5 GUI applications (AudioPerformanceTest, AudioPluginHost, DemoRunner, NetworkGraphicsDemo, and Projucer itself) and 2 console applications (BinaryBuilder and UnitTestRunner). That should be enough material to get started.

Projucer is not a regular IDE. It generates build-system files for other IDEs: Android Studio, Code::Blocks, CLion, Visual Studio and Xcode. It doesn’t really integrate with existing build systems, except in the context of the Live Build Engine. However, I’m not sure I understand what you mean by “Buckaroo already has some IDE support”, so I might be missing something.

Looking forward to seeing how this develops. Please keep us updated!
I really don’t mind using a CLI tool to pull in dependencies. I’d be very happy if we could use Buckaroo for JUCE modules.

My main question is how do you see the use of Buckaroo alongside Projucer? I was imagining we’d use JUCE as we do currently, managing projects via Projucer, and pull in extra dependencies separately (in this case using Buckaroo), i.e. have 3rd party JUCE modules as Buck packages.

I had a peruse of your repositories and saw that you’re adding Buck manifests to the JUCE library modules. That’s an interesting approach which in some cases may negate the need for the Projucer at all (although it does have some very useful tools, such as the BinaryBuilder).

Is it possible for Buckaroo to refer to just part of a Git repo? I ask because some people dislike having to clone the entire JUCE repo, particularly when they don’t need the Projucer, all the tutorials and the extra projects. I think those people would be much happier if they could simply say which of the JUCE modules they want and get only those. However, that would mean a single repo (JUCE) providing multiple Buckaroo packages, each of which could be pulled independently of the others, without the back end essentially having to clone the entire repo multiple times.

Looking at the documentation, it looks promising…

  • Pull individual packages out of mono-repos

We created a PoC that shows how you can use the JUCE modules via Buckaroo:

The advantage of doing this is that you can easily integrate other Buckaroo packages (e.g. Boost, Eigen) into the project, using one cohesive tool.

However, I’m not sure I understand what you mean by “Buckaroo already has some IDE support”, so I might be missing something.

It is possible to take a Buckaroo project structure and generate an IDE project from it. This will give you auto-complete, refactoring and context-aware navigation, etc.

We think it is important that developers can use their preferred IDE with Buckaroo.

My main question is how do you see the use of Buckaroo alongside Projucer? I was imagining we’d use JUCE as we do currently, managing projects via Projucer, and pull in extra dependencies separately (in this case using Buckaroo), i.e. have 3rd party JUCE modules as Buck packages.

My understanding of JUCE is that the module format is quite simple (I like it!) but also inflexible. It will be infeasible to port other projects to the JUCE format, but porting JUCE to something more general (e.g. Buck) is quite straightforward.

So I think the only way to integrate non-JUCE packages is to not use Projucer.

I could be wrong here though, can someone more knowledgeable weigh in?

Is it possible for Buckaroo to refer to just part of a git repo?

There are two aspects to this: the dependency-graph and the build-graph.

Buckaroo will always fetch the entire repository from Git. We made this efficient by using caches and shallow clones. When you come to build, however, we only build the modules that you actually depend on.

In other words, there is some redundancy in the dependency-graph (we sometimes fetch code you won’t actually build), but there is zero redundancy in the build-graph (you only ever build the code you actually use).

We would like to improve this in the future, but given that compilation is the more expensive bit, this works well for now. Git “partial clones” are still experimental and not supported by GitHub.

However, that would mean a single repo (JUCE) providing multiple Buckaroo packages, each of which could be pulled independently of the others, without the back end essentially having to clone the entire repo multiple times.

To clarify how it would work, we would have one package (JUCE) that contains multiple targets (juce_core, juce_graphics, etc.)

We would only clone this once per machine since we use a cache.

It would be possible to split the JUCE package into multiple packages, but this makes maintenance more difficult. For one, we would have a different folder structure to JUCE upstream. Atomic releases would also no longer be possible.

3 Likes

This is exactly what Projucer does.

It’s an interesting prospect, but I think that a lot of JUCE users would want to keep using Projucer. If Buck could provide a way to include 3rd party JUCE modules in an existing JUCE project, I think that would be the path most users would take at the moment.

In terms of non-JUCE packages, generally people wrap them as a JUCE module. In this case, it would definitely make things easier if we could add packages to a project without wrapping them first.
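For context, “wrapping” here usually just means putting a module header with the standard declaration block in front of the bundled third-party sources, something along these lines (library name and field values invented for illustration):

    /*  somelib_wrapper/somelib_wrapper.h

        BEGIN_JUCE_MODULE_DECLARATION
          ID:            somelib_wrapper
          vendor:        yourcompany
          version:       1.0.0
          name:          Wrapper around the (hypothetical) SomeLib library
          dependencies:  juce_core
        END_JUCE_MODULE_DECLARATION
    */
    #pragma once
    #include "somelib/include/somelib.h"

    // somelib_wrapper/somelib_wrapper.cpp - compiles the bundled sources
    #include "somelib_wrapper.h"
    #include "somelib/src/somelib.cpp"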

In terms of non-JUCE packages, generally people wrap them as a JUCE module

Could you provide some examples? I’m not sure how this could be done with more complex projects.

It’s an interesting prospect, but I think that a lot of JUCE users would want to keep using Projucer.

What integration points does Projucer provide besides JUCE modules?
Most tools are centred around a compilation database, compile_commands.json.
We already support this.