Hello, I’m interested in building an audio engine component for a piece of software with JUCE, and I thought it might be a good idea to ask a few questions here before starting development.
The application I would like to develop a JUCE plug-in/component for is TouchDesigner, a graphical programming environment focused mostly on realtime graphics, data processing, and output. It has audio tools and functionality, but they are fairly limited: the audio runs inside the graphics loop, so the audio buffer must be at least one video frame long (at least ~16.6667 ms of latency at a typical 60 fps), and any frame drop longer than that buffer makes the audio unacceptable for output. So I would like to build a JUCE component for TouchDesigner that runs on its own thread and outputs audio directly to an audio device for monitoring and master output, while at the same time passing the audio back into Touch to be processed for various graphics uses.
I’m assuming I’ll use the Projucer to set up a DLL project (since I’ll need a DLL to load into a custom component for Touch). Are there any recommendations on which classes to use? I would like a JUCE GUI for loading files, hosting VSTs, controlling levels, etc., even though this will not be a standalone application. It will also be crucial that JUCE runs on its own thread: do I need to roll my own second thread, or does JUCE already provide threading functionality? If so, what would the workflow be regarding shared memory and locking?
I’m happy to get started on my own and find the best route, but any pointers or tips that set me down the right path from the start would be greatly appreciated.