Juce component for TouchDesigner

Hello, I’m interested in building an audio engine component for a piece of software with Juce and thought it might be a good idea to ask a few questions here before starting development.

The application I would like to develop a Juce plug-in/component for is TouchDesigner, a graphical programming environment mostly focused on realtime graphics and data processing and output. It has audio tools and functionality, but they are pretty limited: the audio runs in the graphics loop, so the buffer needs to be at least one frame long (typically at least 16.6667 ms of latency at 60 fps), and any frame drop longer than the buffer glitches the audio, which makes it unacceptable for output. That being said, I would like to build a Juce component for TouchDesigner that runs on its own thread and outputs audio directly to an audio device for monitoring and master output, while also passing audio back into Touch to be processed for various graphics uses.

I’m assuming I’ll use the Projucer to set up a .DLL project (since I’ll need a .DLL to load into a custom component for Touch). I’m wondering if there are any recommendations for which classes to use, since I would like to have a Juce GUI for loading files, VSTs, controlling levels etc., yet this will not be a standalone application. It will also be crucial that Juce runs in its own thread. Do I need to roll my own second thread, or is there threading functionality already built into Juce? If so, what would be the workflow regarding shared memory and locking?

I’m happy to get started on my own and find the best route but if there are any pointers or tips to go down the right path from the start it would be greatly appreciated.

Thanks
Keith

I think the Juce GUI/messagemanager objects will always need to run in the GUI/main thread of the process. However, the realtime audio and MIDI things run by design in other threads, at least when doing Juce standalone applications or VST etc plugins. But if your host application requires that the audio run in its GUI thread, that could be quite tricky to set up… If needed, you can of course start your own thread for the audio rendering, but you’d need to figure out how to safely and efficiently pass the rendered audio back into the host application’s GUI thread. And also, you’d need to figure out how to safely access the GUI/main thread state from your audio thread.
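If you do go the own-thread route, juce::Thread takes care of most of the boilerplate. Something like this, purely as a rough sketch; renderNextBlock() is a made-up placeholder, and the channel count and block size are arbitrary:

```cpp
// Rough sketch of a dedicated render thread. renderNextBlock() and the
// buffer sizes are made-up placeholders, not Juce or TouchDesigner API.
#include <JuceHeader.h>

class AudioRenderThread : public juce::Thread
{
public:
    AudioRenderThread() : juce::Thread ("Audio render") {}

    ~AudioRenderThread() override { stopThread (2000); }

    void run() override
    {
        juce::AudioBuffer<float> block (2, 512);

        while (! threadShouldExit())
        {
            renderNextBlock (block); // fill the block (placeholder)
            // ...push `block` into a lock-free FIFO for the host to read...
            wait (1); // real code would pace itself against the FIFO/device
        }
    }

private:
    void renderNextBlock (juce::AudioBuffer<float>& b) { b.clear(); }
};
```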

You mentioned opening the audio hardware device directly for your use. This is possible in Juce with the AudioDeviceManager, but some complications may arise when the host application is already using the audio hardware. And that way you would just be sending the audio directly to the OS sound system/hardware; I would guess there isn’t a way for TouchDesigner to “tap” into that itself.
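For reference, opening the device would look roughly like this (an untested sketch; note that in recent Juce versions the callback signature is audioDeviceIOCallbackWithContext() rather than audioDeviceIOCallback()):

```cpp
// Sketch of opening the hardware directly with AudioDeviceManager.
#include <JuceHeader.h>

struct DirectOutput : public juce::AudioIODeviceCallback
{
    void audioDeviceAboutToStart (juce::AudioIODevice* device) override
    {
        sampleRate = device->getCurrentSampleRate();
    }

    void audioDeviceIOCallback (const float** /*input*/, int /*numInputs*/,
                                float** output, int numOutputs,
                                int numSamples) override
    {
        // Pull rendered samples from your FIFO here; silence as a placeholder.
        for (int ch = 0; ch < numOutputs; ++ch)
            juce::FloatVectorOperations::clear (output[ch], numSamples);
    }

    void audioDeviceStopped() override {}

    double sampleRate = 0.0;
};

// Setup (the returned string is empty on success):
//   juce::AudioDeviceManager deviceManager;
//   auto error = deviceManager.initialise (0, 2, nullptr, true);
//   deviceManager.addAudioCallback (&directOutput);
```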

Having to render the audio in the GUI thread sounds like a really painful design in the application. Have you checked whether the audio really needs to be handled that way? Isn’t there some way to hook directly into a realtime audio callback? (I would assume the application has something like that internally anyway, since most audio systems work like that.)
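As for passing the rendered audio back into the host, the usual pattern is a single-producer/single-consumer ring buffer, and juce::AbstractFifo handles the index bookkeeping for you. A mono sketch (capacity and channel handling are up to you, and this version simply drops samples if the FIFO fills up):

```cpp
// Lock-free single-producer/single-consumer sample FIFO built on
// juce::AbstractFifo. Mono for simplicity; capacity is an assumption.
#include <JuceHeader.h>

class SampleFifo
{
public:
    explicit SampleFifo (int capacity) : fifo (capacity), buffer (1, capacity) {}

    // Called on the audio thread.
    void push (const float* data, int numSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (numSamples, start1, size1, start2, size2);
        if (size1 > 0) buffer.copyFrom (0, start1, data,         size1);
        if (size2 > 0) buffer.copyFrom (0, start2, data + size1, size2);
        fifo.finishedWrite (size1 + size2); // anything beyond this is dropped
    }

    // Called on the host's (GUI/graphics) thread; returns samples actually read.
    int pop (float* dest, int maxSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (maxSamples, start1, size1, start2, size2);
        if (size1 > 0) juce::FloatVectorOperations::copy (dest,         buffer.getReadPointer (0, start1), size1);
        if (size2 > 0) juce::FloatVectorOperations::copy (dest + size1, buffer.getReadPointer (0, start2), size2);
        fifo.finishedRead (size1 + size2);
        return size1 + size2;
    }

private:
    juce::AbstractFifo fifo;
    juce::AudioBuffer<float> buffer;
};
```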

Thanks for the feedback.

I think the Juce GUI/messagemanager objects will always need to run in the GUI/main thread of the process. However, the realtime audio and MIDI things run by design in other threads, at least when doing Juce standalone applications or VST etc plugins.

That actually seems like it could work well in my case, since I only need to make sure the audio is not blocked by the main thread.

also, you’d need to figure out how to safely access the GUI/main thread state from your audio thread.

I assume there must be some tools for this, given that Juce apps draw and display audio-based graphics in their GUIs? I’ll look into what Juce already has for realtime display of audio data.
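From a quick look, AudioVisualiserComponent in the juce_audio_utils module seems to be exactly this kind of tool. Something like the following sketch is what I’d try first (the channel count and buffer sizes are just guesses):

```cpp
// Minimal oscilloscope-style display. The component repaints itself on the
// message thread via an internal timer, while pushBuffer() can be fed from
// the audio side.
#include <JuceHeader.h>

class ScopeComponent : public juce::AudioVisualiserComponent
{
public:
    ScopeComponent() : juce::AudioVisualiserComponent (2) // 2 channels, assumed
    {
        setBufferSize (512);      // history length shown on screen
        setSamplesPerBlock (16);  // downsampling factor for the display
    }

    // Call this wherever the rendered audio is available.
    void pushBlock (const juce::AudioBuffer<float>& block)
    {
        pushBuffer (block);
    }
};
```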

You mentioned opening the audio hardware device directly for your use. This is possible in Juce with the AudioDeviceManager, but some complications may arise when the host application is already using the audio hardware

Yes, this could be an issue. Fortunately, a user would have to create and enable an audio device in Touch for the conflict to arise, so the workflow would be for the user not to create native audio device operators…

Having to render the audio in the GUI thread sounds like a really painful design in the application

Yes it is. Unfortunately it was developed in such a way that the audio is processed in the main graphics thread, which is basically a loop for processing logic, string and float/int data and drawing custom graphics in OpenGL; not necessarily GUI, but graphics nonetheless. I think the idea behind this method was that the audio would be easily available to the float-processing system for creation, processing, or affecting video. I’m pretty sure the main focus wasn’t on “monitoring” audio through speakers but on using it to affect visuals, which inherently run at a pretty slow framerate, i.e. 60 fps. Which is why I’m here!

Thanks for the tips.
