Suggestions for how to convert a C74 Max 8.1 patch to JUCE

Hi folks, I’ve been following some of the threads here, and I’m going to be so bold as to ask directly for suggestions on how to go about this rather large project. The first stage is converting an existing C74 patch for a 32-voice synth that contains 123 GUI objects. Almost all of them feed directly into three compiled gen~ codebox objects:

  • a gen object for voice allocation (300 lines)
  • a gen~ object inside a poly~ object for the audio path (1,200 lines)
  • a gen~ object for mono output compression and limiting (200 lines)

That is to say, my audio path and voice-assignment code is already completely written. There is a working 32-bit Windows demo of the 32 polyphonic voices here:

I am looking at converting to JUCE to finish making the synth multitimbral. So far I have tried three different approaches in C74 Max: sending params to the poly~ object on voice and program changes, loading params from a cache in each poly~ instance, and reading the data from a shared buffer~ instead. All of them had performance problems, with the audio stuttering even at maximum audio latency with about 20 voices on a 2.6 GHz i5. It seems rather ludicrous that it is 2019 and I have to write C++ to get more than 20 multitimbral voices on a base system configuration, even with parallel processing, but that’s the way it still is. C74 can’t do it, and Native Instruments can’t do multiprocessing at all, because its engine was hard-coded to use SIMD MMX extensions long ago and there is no one left there who could recode it.

I am really more of a C programmer than a C++ programmer. Ideally I would just like JUCE to provide a shell with audio output and a UI around the transcribed code. I am thinking of using structs for the 123 params. When a note-on event arrives, it would pass through my LRU voice-allocator code, which would assign a voice and set that voice to use a particular program. In Omni mode, the user can send a program change with each new note: notes already sounding continue to play their prior program preset, while the new notes use the new one. In Multi mode, the synth can play 16 channels simultaneously. I also have a 16-channel step sequencer, in which each sequencer can modulate the others, to integrate as well; I was hoping to complete that this year, but I have already given up hope on it altogether.
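
To make that concrete, here is a minimal sketch of what I have in mind, in plain C++ (the names VoiceParams, Voice and LruAllocator are all hypothetical; the real logic currently lives in my gen codebox):

```cpp
#include <array>
#include <cstdint>

struct VoiceParams { float values[123] = {}; };  // the 123 panel params for one program

struct Voice
{
    VoiceParams params;          // snapshotted at note-on, so later program
                                 // changes don't retune a note already sounding
    int  midiChannel = -1;
    int  note        = -1;
    bool active      = false;
    std::uint64_t lastUsed = 0;  // monotonically increasing "time" stamp
};

class LruAllocator
{
public:
    // Assign the least-recently-used voice and stamp it with the program
    // preset for its MIDI channel.
    Voice& noteOn (int channel, int note, const VoiceParams& program)
    {
        Voice* best = &voices[0];
        for (auto& v : voices)
        {
            if (! v.active) { best = &v; break; }        // a free voice wins outright
            if (v.lastUsed < best->lastUsed) best = &v;  // otherwise steal the oldest
        }
        best->params      = program;
        best->midiChannel = channel;
        best->note        = note;
        best->active      = true;
        best->lastUsed    = ++now;
        return *best;
    }

private:
    std::array<Voice, 32> voices;
    std::uint64_t now = 0;
};
```

Copying the preset into the voice at note-on is exactly what makes Omni mode work: old notes keep playing their old program, and only new notes pick up the new one.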

This also presents a slight challenge for the UI, since there could be up to 16 settings for each panel object. In Max 8 I was setting the panel object values directly from pattrstorage dumps of program presets sent to a pattrhub object. Panel changes were then stored in a separate pattrstorage object with one slot for each MIDI channel. When the panel was set to a particular channel and voices were playing on that channel, I also tried various methods of transferring the panel changes to the playing voices.
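
In C++ the per-channel pattrstorage slots could become a plain array of presets, with a panel edit applied both to the stored program (for future notes) and to any voices already playing on that channel. A rough sketch, reusing the hypothetical types from above:

```cpp
#include <array>

std::array<VoiceParams, 16> channelPrograms;   // one program preset per MIDI channel

// Hypothetical handler, called when the user edits one of the 123 panel
// controls while the panel is showing `channel`.
void onPanelEdit (std::array<Voice, 32>& voices, int channel, int paramIndex, float value)
{
    channelPrograms[channel].values[paramIndex] = value;   // future notes on this channel

    for (auto& v : voices)                                 // notes already sounding
        if (v.active && v.midiChannel == channel)
            v.params.values[paramIndex] = value;
}
```

In a real plugin that second write crosses from the message thread into data the audio thread reads, so it needs a safe handoff; see the FIFO sketch below.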

When I originally developed the code, C74 could not compile code at anywhere near the required performance; JavaScript was the only built-in code environment, and when the JavaScript ran out of CPU it simply dropped notes. So with Max 8 I moved as much of that code as I could into natively compiled gen codebox. But now the ONLY way to share data between threads is audio buffers, because the C74 codebox cannot compile anything but numbers, and there are no batch operations.
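
This is one place where the move to C++ should pay off immediately: any struct can be shared between threads. For example, panel changes could be posted from the message thread to the audio thread through a lock-free FIFO. A minimal sketch using juce::AbstractFifo (the ParamChange message and all the names here are my own invention):

```cpp
#include <JuceHeader.h>
#include <array>

struct ParamChange { int channel, paramIndex; float value; };  // hypothetical message

class ParamQueue
{
public:
    // Message thread: enqueue one change (silently dropped if the FIFO is full).
    void push (const ParamChange& pc)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (1, start1, size1, start2, size2);
        if (size1 > 0)
            slots[(size_t) start1] = pc;
        fifo.finishedWrite (size1);
    }

    // Audio thread: apply everything queued since the last block.
    template <typename Fn>
    void drain (Fn&& apply)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (fifo.getNumReady(), start1, size1, start2, size2);
        for (int i = 0; i < size1; ++i) apply (slots[(size_t) (start1 + i)]);
        for (int i = 0; i < size2; ++i) apply (slots[(size_t) (start2 + i)]);
        fifo.finishedRead (size1 + size2);
    }

private:
    static constexpr int capacity = 1024;
    juce::AbstractFifo fifo { capacity };
    std::array<ParamChange, capacity> slots;
};
```

The audio thread would call drain() once at the top of each block, applying the queued changes to the channel programs and playing voices before rendering.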

It transpires that C74 now has a problem saving external data from standalone apps, due to the new security requirements in Catalina. Also, when I tried reading the params for each voice from audio buffers, I could only get about 16 voices on a 2.6 GHz i5, and that was with the maximum possible latency settings. This poor performance is entirely due to there being no way to access the program settings other than audio buffers.

Lastly, I have a problem with the target app type as well. I was building a standalone app so I could use multiprocessing, with a ReWire client interface. But from just glancing at the ReWire SDK, I know I am not a talented enough programmer to integrate ReWire into JUCE.

So now it appears I need to build a plugin that can access threads external to the DAW to get sufficient performance; that is, a plugin UI communicating with an external application. That means I really am starting rather differently than most people: the audio path is already coded. What I need is to set 123 parameters for each voice depending on what MIDI channel it is on, run the voices on different threads to make use of whatever parallel processing is available, and somehow deliver the audio output inside a DAW. I’d also want JUCE to detect the number of cores and configure itself for that automatically, and then to store the user’s program configuration for each voice.
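
On the core-count question: JUCE reports it via juce::SystemStats::getNumCpus(). Below is a deliberately naive sketch (hypothetical names throughout) of splitting the 32 voices across cores and mixing the results. Note that spawning threads inside an audio callback is not real-time safe, so a production version would keep persistent workers, but this shows the shape of the thing:

```cpp
#include <JuceHeader.h>
#include <thread>
#include <vector>

constexpr int kNumVoices = 32;

struct VoiceState { float params[123] = {}; /* plus runtime state */ };

// Stand-in for the transcribed gen~ audio path: renders one voice into dest.
static void renderVoice (VoiceState&, juce::AudioBuffer<float>& dest, int numSamples)
{
    juce::ignoreUnused (dest, numSamples);  // real DSP goes here
}

void renderBlock (std::vector<VoiceState>& voices, juce::AudioBuffer<float>& out)
{
    const int numThreads = juce::jmin (juce::SystemStats::getNumCpus(), kNumVoices);
    const int numSamples = out.getNumSamples();

    // One scratch buffer per worker so no two threads write the same memory.
    std::vector<juce::AudioBuffer<float>> scratch;
    for (int t = 0; t < numThreads; ++t)
        scratch.emplace_back (out.getNumChannels(), numSamples);

    // NB: thread-per-block is for illustration only (not real-time safe).
    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t)
        workers.emplace_back ([&, t]
        {
            scratch[(size_t) t].clear();
            for (int v = t; v < (int) voices.size(); v += numThreads)  // strided split
                renderVoice (voices[(size_t) v], scratch[(size_t) t], numSamples);
        });

    for (auto& w : workers)
        w.join();

    out.clear();                                    // mix the partial buffers
    for (int t = 0; t < numThreads; ++t)
        for (int ch = 0; ch < out.getNumChannels(); ++ch)
            out.addFrom (ch, 0, scratch[(size_t) t], ch, 0, numSamples);
}
```

As for storing the user’s program configuration, a JUCE plugin overrides AudioProcessor::getStateInformation() and setStateInformation(), which the host calls when the session is saved and restored.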

This really seems like more than most people attempt with JUCE, so I am very grateful for any suggestions on how to go about finishing this design, now 10 years in the making. I still can’t compile on macOS from C74, and I’ve given up hope, after 10 years, that C74 ever will make that possible. Thanks for reading my long post.