Custom UDP-based streaming device

Hello,

I have a device that streams sound via WiFi using UDP. Currently I have separate iOS and Android apps to control the device and record the sound.

I am exploring developing a JUCE-based VST3 plugin for iOS and Android that would essentially present the device to a DAW as a new instrument. Hitting Record in the DAW would send a “start” UDP command to the device’s IP address, the device would stream 16-bit, 44.1 kHz data via UDP, and the plugin would extract the raw sound bytes and provide them to the DAW. Is this feasible?

Based on my reading of the docs and the tutorials, I think this is what I need:

  1. Looks like I need to create a new AudioIODeviceType
  2. Then create the AudioIODeviceCallback
  3. Then create a new AudioIODevice
  4. Then add new device to the AudioDeviceManager

Am I on the right track? Any pointers would be much appreciated.

Regards,
Prince

It sounds like you’re a beginner, so I’ll give you some thoughts to chew on regarding the overall design of what it sounds like you’re trying to make.

Your idea is somewhat of a non-starter, because the latency of streaming over Wi-Fi will likely be unacceptably high for recording. Not to mention that since you’re using an unreliable transport (not just UDP, but Wi-Fi itself), you have no protection from glitches or dropouts during recording.

Additionally, you can’t dynamically populate a DAW’s plugin list with more or fewer plugins; plugins are scanned and populated at DAW load time based on what’s locally available in the plugins folder. You could, however, have a single “endpoint” plugin that connects to a specific instance of a streaming mobile device via some sort of P2P ad-hoc setup.

Also, Android and iOS devices don’t use VST3 plugins: Android uses inter-app audio, and iOS uses inter-app audio and AUv3. But what you’re describing sounds like a standalone app that connects to a desktop DAW plugin(?).

Regarding your streaming scheme, you also shouldn’t make assumptions about what audio format the host wants “streamed” to it (i.e. 44.1 kHz / 16-bit). You have two options there: either adjust your remote recording rate to provide what the host wants, or resample at your DAW-side endpoint to what the host wants. Not to mention you’ll need to buffer your UDP stream to the host’s block size, which will further add to your latency.

Don’t be discouraged: audio programming is unfortunately really hard, but luckily JUCE makes it easier. I recommend learning how to record audio locally first: https://juce.com/doc/tutorial_processing_audio_input


Thanks for the feedback.

The device sets up its own access point and allows only one smart device to connect to it. With only two devices on the WiFi network, the unreliability of UDP transmission is not an issue. As I said previously, this works: I have separate Android and iOS apps that record reliably over UDP sockets. My goal is to have a common JUCE-based app for both iOS and Android.

The latency is fixed because the device buffers 1024 bytes = 512 bytes per channel = 256 samples of 16 bits. At 44100 Hz sampling, that is 5.8 ms of latency.

Thanks for the tutorial link. I have read it, but it assumes the source is the default input device. How do I change it to use a custom device?

Apologies if all this sounds too basic/naive.

Well, if you already have the code to read from and write to this device, then yes: you’d just need to create your own AudioIODeviceType/AudioIODevice classes, and then the rest of the JUCE classes will be able to talk to it like any other audio device.