Audio routing from sensor input

Hello, I’m not an audio developer, but am posting here to determine if JUCE is an appropriate development platform for what may be a unique application.

Typically audio routing/multiplexing is done by human user input. I’m hoping to have audio routing automated using sensor input instead.

Imagine 64 audio output channels (speakers), each with a corresponding sensor to detect an object’s presence. If a sensor detects an object’s presence, then sound is routed only to that channel output/speaker. If the object moves from one speaker/sensor to an adjacent one, the sound is routed accordingly using panning laws.

My application is simpler than XY spatial routing, as the speakers are essentially only on one axis (left/right). So it is as if you had 64 channels from left to right, and the sensors determine which panned channel(s) receive output. However, there is some complexity, as there are multiple audio inputs, each of which could have a unique ID that each sensor is capable of identifying.
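For what it’s worth, the panning-law part of this is small: a standard equal-power crossfade between the two speakers nearest the object’s position. A plain C++ sketch of that idea (the function name and the 0..numSpeakers-1 position coordinate are my own illustrative assumptions, not from any particular library):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Equal-power pan of a mono source across a line of speakers.
// 'position' is a continuous coordinate in [0, numSpeakers - 1];
// only the two speakers adjacent to the position receive signal.
std::vector<float> linePanGains(float position, int numSpeakers)
{
    constexpr float kHalfPi = 1.57079632679f;
    std::vector<float> gains(static_cast<size_t>(numSpeakers), 0.0f);

    int left = static_cast<int>(std::floor(position));
    if (left >= numSpeakers - 1) // at (or past) the last speaker
    {
        gains[static_cast<size_t>(numSpeakers) - 1] = 1.0f;
        return gains;
    }

    float frac = position - static_cast<float>(left);
    // Equal-power law: gLeft^2 + gRight^2 == 1, so perceived loudness
    // stays roughly constant as the object moves between two zones.
    gains[static_cast<size_t>(left)]     = std::cos(frac * kHalfPi);
    gains[static_cast<size_t>(left) + 1] = std::sin(frac * kHalfPi);
    return gains;
}
```

An object halfway between speakers 0 and 1 gets ~0.707 gain on each, and an object sitting exactly on a sensor gets full gain on that one speaker only.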

As I already use a VST host for processing audio, I envision implementation as a VST plugin. Essentially, an audio multiplexer which could accept MIDI/OSC or other commands from a microprocessor (e.g., RaspPi/Arduino) to which the sensors are connected.

Perhaps software already exists with such capability (if so, please share!), but I have yet to find audio routing software which can accept inputs/commands to direct audio routing of outputs. Thus, the solution may be to outsource development. If such software does not exist, is Juce an appropriate development platform, preferably as a VST plugin?

The short answer is “yes, you could use JUCE for this”.
But like everything, there’s a “but it depends” clause…
Effectively what you are describing is a multi-channel panner with hardware controllers. I think something like Plogue Bidule or another flexible audio host could handle something like this with enough setup, and might be easier than programming from scratch.
I also wonder if you need panning logic at all - since panning is effectively volume control, maybe each sensor could just control the volume of its respective speaker. This would massively simplify things with pretty much the same result, to the point where you might be able to use any host that allows MIDI/OSC control of channel volume.
… but it depends on the details…
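To illustrate the volume-only suggestion above: each sensor just sets a target gain for its own speaker, and a short ramp avoids clicks when an object enters or leaves a zone. A minimal plain-C++ sketch (the struct name and smoothing constant are illustrative assumptions, not anything from JUCE or a host):

```cpp
#include <cassert>

// One-pole gain smoother, one per output channel: the sensor sets a
// target (1 = object present, 0 = absent) and the gain ramps toward
// it, avoiding clicks when an object enters or leaves a zone.
struct SmoothedGain
{
    float current = 0.0f;
    float target  = 0.0f;
    float coeff   = 0.01f; // per-sample smoothing amount (illustrative)

    void setActive(bool sensorTriggered)
    {
        target = sensorTriggered ? 1.0f : 0.0f;
    }

    // Call once per audio sample; multiply the channel's sample by this.
    float nextSample()
    {
        current += coeff * (target - current);
        return current;
    }
};
```

With 64 of these, “routing” is just 64 independent volume faders driven by sensor messages, exactly as suggested.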

icebreakeraudio

Thank-you for your insight! The good news is that I already have and use Plogue Bidule as a VST host. I was not aware it could interface with hardware inputs. More to research…

No - it doesn’t have to be panning at all. In my simplistic non-audio-developer mind, that was just one way to implement it. Rather than thinking of simpler concepts, I had been thinking of more complex ones such as 3D spatial audio. Your suggestion of independent volume controls is simpler.

Is it correct to assume the volume control would be digital (e.g., in the VST host) and not analog? If analog, multiple sources may complicate things, as each detected Object has its own ID correlated to a Source. Suppose Sources A & B and corresponding Objects A & B (e.g., Source A outputs to the speaker where Object A is located), and for simplicity 10 speakers, each with its own detection sensor (10 zones/blocks). If Object A is in Zone 4 and Object B is in Zone 6, then those Zones play the corresponding Sources. Suppose both Objects A & B move to Zone 5. Now Zone 5 plays both Sources A & B. Finally, suppose A moves to Zone 6; then Source A has to follow it (Source B keeps playing in Zone 5, as Object B has not moved).
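The scenario above is effectively a sources-by-zones gain matrix: when a sensor reports where an object is, the host moves that object’s source to the matching zone column. A hedged plain-C++ sketch of that bookkeeping (class and method names are my own, not from any existing software):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Routing matrix: gains[source][zone]. Moving an object just updates
// which zone column its source row feeds. Several sources can share a
// zone (e.g. Objects A and B both standing in Zone 5).
struct RoutingMatrix
{
    std::vector<std::vector<float>> gains; // [numSources][numZones]

    RoutingMatrix(int numSources, int numZones)
        : gains(static_cast<size_t>(numSources),
                std::vector<float>(static_cast<size_t>(numZones), 0.0f)) {}

    // The object correlated with 'source' was detected in 'zone'.
    void moveSourceToZone(size_t source, size_t zone)
    {
        for (auto& g : gains[source]) g = 0.0f; // leave the old zone
        gains[source][zone] = 1.0f;             // enter the new one
    }

    // Mix one sample from each source into per-zone output samples.
    std::vector<float> mix(const std::vector<float>& sourceSamples) const
    {
        std::vector<float> out(gains[0].size(), 0.0f);
        for (size_t s = 0; s < gains.size(); ++s)
            for (size_t z = 0; z < out.size(); ++z)
                out[z] += sourceSamples[s] * gains[s][z];
        return out;
    }
};
```

Replaying the example: A in Zone 4 and B in Zone 6, then both in Zone 5 (Zone 5 sums both sources), then A alone moving on to Zone 6 while B stays put.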

Does tracking each Object’s ID against its correlated Source complicate it such that custom code is required? I’m thinking not, but am prepared to be corrected. The microcontroller (e.g., Arduino with attached detection sensors) knows which Source correlates with each Object. So the VST host does not need to know the ID; it only needs to know which channel(s) to amplify. Initially I was thinking the VST host would have to have the intelligence to correlate Sources with Objects, but that can be done by the microcontroller reading the sensor inputs.
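If the microcontroller resolves Object IDs itself, the host only needs a tiny message per detection event. As a sketch, assume a made-up plain-text protocol of the form “&lt;source&gt;,&lt;zone&gt;” arriving over serial or OSC (this protocol is entirely my own assumption, just to show how little the host has to parse):

```cpp
#include <cassert>
#include <string>

// One hypothetical message from the microcontroller: "<source>,<zone>",
// e.g. "0,4" meaning the object correlated with source 0 was detected
// in zone 4. The host never sees the object's ID itself.
struct SensorEvent
{
    int source;
    int zone;
};

SensorEvent parseSensorMessage(const std::string& msg)
{
    // Sketch only: assumes a well-formed "<int>,<int>" message.
    auto comma = msg.find(',');
    return { std::stoi(msg.substr(0, comma)),
             std::stoi(msg.substr(comma + 1)) };
}
```

Each such event would then drive whatever gain/routing logic the host exposes (e.g., a channel-volume parameter mapped to MIDI CC or an OSC address).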

The attraction of JUCE is Arduino, MIDI, and OSC compatibility. I already have Plogue Bidule, Arduino, sensors, multichannel duplex ADC/DACs, and multichannel amplifiers - I just have not found a solution for how to trigger channel changes based on sensor input (sensor/trigger-based routing).

If there are multiple sound generation sources (or objects, as you call them), then something more complicated might be required, but again I think you might be able to get away with Bidule or Max/MSP or something like that.
With that said, if you already know how to program in C++ and have a clear concept in mind, it might be just as convenient to create something in JUCE. The only downside of running your idea as a plugin is that the host will also be a factor, and by the time you’ve set up a host that can handle the setup, you might be most of the way there already and not need a custom plugin.
There’s only so far I can take you with forum posts unfortunately.

Understood, thank-you for the help! I’m prepared to investigate this further with self-study. I have probably just been looking in the wrong places or using poor search terms.

Just for the clarity of anyone reading: the sound sources are not the same as the objects. The object is a person or thing being tracked with sensors. For example, if John (Object A) is listening to Beyonce (Source A), the sensors will detect where he is and only play audio through the adjacent speaker. If Sarah (Object B) is listening to Ciara (Source B), some logic is needed to ensure that John only hears Beyonce and Sarah only hears Ciara, unless they are at the same location (triggered sensor).

In other words, the source audio should follow the correlated object. Not all objects hear all sources when detected.