Suggestions needed for Swarm Intelligence plugin


#1

Hello, I’m a new user of the framework trying to port an app I built for my undergrad thesis from openFrameworks to JUCE. The app uses a swarm intelligence algorithm and was built for iOS; now we want to have it in an audio plugin (synth) format.

The swarm algorithm is implemented in a class named Swarm that inherits from AnimatedAppComponent, and I keep an object of that class as a member variable of the AudioProcessor in order to sonify the algorithm through a Synthesiser object. At the same time, the algorithm is visualised by adding the AudioProcessor’s Swarm member as a child of its corresponding AudioProcessorEditor.

For the sonification I pass a reference to the Swarm object to the SynthesiserVoice added to the AudioProcessor’s Synthesiser. Now I need to map the Swarm to the audio engine (exported from Faust and added as a member variable of the SynthesiserVoice). Should I perform the mapping in the renderNextBlock() function? Using the sample rate and block size, I could treat that function as a “timer” that samples the Swarm’s state at a specific rate, but is that a good idea? If you have had any similar experience, any suggestion would be greatly appreciated!
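To make the question more concrete, here is roughly what I have in mind. This is only a sketch, not real code from the app; the actual mapping and the Faust rendering are just comments:

```cpp
#include <JuceHeader.h>

// Rough sketch of the "renderNextBlock as a timer" idea.
// The mapping step and the Faust engine calls are placeholders.
struct SwarmVoice : public juce::SynthesiserVoice
{
    bool canPlaySound (juce::SynthesiserSound*) override { return true; }
    void startNote (int, float, juce::SynthesiserSound*, int) override {}
    void stopNote (float, bool) override { clearCurrentNote(); }
    void pitchWheelMoved (int) override {}
    void controllerMoved (int, int) override {}

    void renderNextBlock (juce::AudioBuffer<float>& output,
                          int startSample, int numSamples) override
    {
        // Use the sample count as a "timer": re-map the swarm state
        // to engine parameters roughly every 10 ms of audio.
        const auto interval = (int) (getSampleRate() * 0.01);

        for (int i = 0; i < numSamples; ++i)
        {
            if (++samplesSinceUpdate >= interval)
            {
                samplesSinceUpdate = 0;
                // ... read the Swarm state and set the Faust parameters ...
            }
            // ... compute one sample with the Faust engine and
            //     add it to output at startSample + i ...
        }
    }

    int samplesSinceUpdate = 0;
};
```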

I apologise beforehand for not knowing my way around the framework…

Thank you,
Andreas


#2

Here are my thoughts, which may be right or wrong (still new myself)…

If your audio depends on the positions of the “flies” in the swarm, then the update function for your swarm will need to run in renderNextBlock(), which is called from the processor’s processBlock() on the audio thread. The UI update for the swarm, however, needs to be driven separately from your editor. You definitely do not want to call repaint() from your update function in renderNextBlock(), as I believe this will block your audio thread. Not a solution, but hopefully helpful.
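For the UI half, the usual pattern is to poll from the editor side with a juce::Timer, something like this (SwarmView is a made-up name for a component that draws the swarm):

```cpp
#include <JuceHeader.h>

// Sketch: redraw the swarm at a fixed UI rate from the message
// thread, instead of calling repaint() from the audio thread.
class SwarmView : public juce::Component,
                  private juce::Timer
{
public:
    SwarmView() { startTimerHz (30); }   // ~30 fps is plenty for a UI

    void paint (juce::Graphics& g) override
    {
        g.fillAll (juce::Colours::black);
        // ... read a snapshot of the swarm positions and draw them ...
    }

private:
    void timerCallback() override
    {
        repaint();   // safe here: timerCallback runs on the message thread
    }
};
```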


#3

Thank you, Joshua, for your suggestion. Also thank you for the great work on your YouTube channel. It has been really helpful for me!!!


#4

I needed similar functionality for a plugin I wrote to animate sound sources.
You need to be aware of the thread boundaries, the ownership, and the time references.

The parameters you want to use for the sonification need to be available in the processor, and I would use atomics here. Alternatively, you can add an extra lock around the reading and the writing from the simulation thread.
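For example, something like this (the member names are invented):

```cpp
#include <atomic>

// Sketch: swarm-derived values the audio thread needs, published
// as atomics. The simulation writes, the audio thread reads.
struct SwarmParams
{
    std::atomic<float> centroidX { 0.0f };
    std::atomic<float> centroidY { 0.0f };
    std::atomic<float> spread    { 0.0f };
};

// simulation thread:  params.spread.store (s, std::memory_order_relaxed);
// audio thread:       const float s = params.spread.load (std::memory_order_relaxed);
```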

The next question is who updates the objects. It is best to use a separate thread that executes your swarm updates, and to keep the time spent on the atomics in the audio thread as short as possible.
If you decide to update on the audio thread instead, the update method needs to be as fast as possible and must not block.
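A minimal sketch of such a simulation thread, assuming the SwarmParams struct above; the update() and getCentroidX() methods on the Swarm class are invented for illustration:

```cpp
#include <JuceHeader.h>

// Sketch: advance the swarm on a dedicated thread and publish
// the values through the atomics.
class SwarmSimulationThread : public juce::Thread
{
public:
    SwarmSimulationThread (Swarm& s, SwarmParams& p)
        : juce::Thread ("Swarm Simulation"), swarm (s), params (p) {}

    void run() override
    {
        while (! threadShouldExit())
        {
            swarm.update();   // one simulation step, off the audio thread

            params.centroidX.store (swarm.getCentroidX(),
                                    std::memory_order_relaxed);

            wait (10);        // ~100 simulation steps per second
        }
    }

private:
    Swarm& swarm;
    SwarmParams& params;
};
```

Start it with startThread() in the processor’s constructor and stop it with stopThread (1000) in the destructor.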

In your processBlock(), use local copies of your parameters inside the method, so that a change in the middle of execution doesn’t give unexpected results. This also minimises the chance of collisions.
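Continuing the sketch (params and synth stand in for the real members):

```cpp
// Sketch: inside the AudioProcessor. One atomic load per parameter
// at the top; the values then stay consistent for the whole block.
void processBlock (juce::AudioBuffer<float>& buffer,
                   juce::MidiBuffer& midi) override
{
    const float x      = params.centroidX.load (std::memory_order_relaxed);
    const float spread = params.spread.load (std::memory_order_relaxed);

    // ... pass x and spread to the synth / Faust engine here ...

    buffer.clear();
    synth.renderNextBlock (buffer, midi, 0, buffer.getNumSamples());
}
```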

HTH


#5

Thanks that means a lot!


#6

Thank you Daniel!!


#7

Sounds interesting, home66home. Do you have any links to the original iOS app / more information about the project you can share?


#8

Hi remaininlight, thanks for your interest. Here is a video demo of the iOS app. The algorithm is based on Boids, but the entities communicate through sound signals. Each one emits a signal that the others can listen to in order to extract the location and velocity of their neighbouring individuals. That perceptual mechanism consists of four “ears” (in 2D) for each entity, which analyse the incoming signals with an FFT. The user interacts with the Swarm by modifying those signals, and the resulting emergent behaviour of the Swarm modifies the sound signal the user hears. So each participant in the interaction modifies the signals to be auditioned by the other.


#9

Awesome, thanks for the video, sounds intriguing. Do let us know when you’ve got something runnable - I’d love to have a play :slight_smile: