Since JUCE is a mature product with both a front-end GUI and backend MIDI and audio processing, it makes sense to start with JUCE and move the audio processing off the CPU. Defining the JUCE audio objects in IDL makes them more portable and transparent: other components/modules can call those audio-processing objects regardless of what the audio processing unit really is. The processing unit can be a DSP or a GPU. In the case of DSPs, there are many different vendors and no uniform API. With CORBA, we don't have to change the JUCE core code as long as we port an ORB to the new DSP.
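As a rough sketch of the idea (the module, interface, and operation names here are hypothetical, not from any existing JUCE binding), the processing contract could be declared in CORBA IDL along these lines, so the front end only ever sees the generated stubs:

```idl
// Hypothetical IDL: a processing object the JUCE front end calls
// without knowing whether the servant runs on a DSP, a GPU, or a CPU core.
module AudioEngine {
    typedef sequence<float> SampleBlock;

    interface Processor {
        // Process one block of samples; implemented by the backend servant.
        SampleBlock process(in SampleBlock input);

        // Generic parameter control (gain, cutoff, etc.).
        void setParameter(in string name, in float value);
    };
};
```

An IDL compiler (e.g. from TAO or omniORB) would turn this into client stubs for the JUCE side and servant skeletons for whichever backend implements it.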
In my day job I have worked on software-defined radio for 15 years. We worked with different DSPs and many legacy radio applications, each with different RF-processing needs, and we achieved this by porting an ORB onto the DSPs. Audio processing can be done the same way.
As for GPUs, traditional audio processing on a GPU may well be less efficient than on many mature DSPs. However, NSynth from Magenta gives a different perspective: it synthesizes the sounds of different instruments through machine learning, and the results are quite interesting. So the question is whether we can integrate the audio-synthesis power of NSynth with a mature plugin platform like JUCE. With the massively parallel processing of CUDA/GPUs, a lot can be done there even for audio.
With CORBA as the backend, we don't need to know whether the processor is a GPU, a DSP, or even a dedicated CPU core; CORBA gives us this flexibility.