Number of Polyphonic Channels in a Synth?

So I have made a good start on a new synthesizer and I am hitting all the architectural barriers which come with making a synth.

My question is: why does every synth on the market seem to have a cap on polyphony, and if I choose to make it unlimited, what are the downsides? I can’t imagine anyone mashing all 128 MIDI notes at once.

At the moment I am running one oscillator just generating a sine wave. I have created events which are added to an array on MIDI note-on; the events keep their own ADSR state and exist in the array until the release has completed. With a longer release I can also layer the same note over itself this way. I have stress tested by setting a long release and mashing all the keys on my S88, and I barely break 2% CPU in a debug build.
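
Roughly the shape of it, as a simplified sketch (not my actual code, and names like `NoteEvent` are just placeholders):

```cpp
#include <vector>

enum class EnvStage { Attack, Decay, Sustain, Release, Done };

// Each note-on pushes an event that owns its own envelope state,
// and the event stays in the container until its release finishes.
struct NoteEvent
{
    int      note     = 60;
    double   phase    = 0.0;   // oscillator phase
    double   envLevel = 0.0;   // current ADSR level
    EnvStage stage    = EnvStage::Attack;
};

std::vector<NoteEvent> events;

void handleNoteOn (int note)
{
    events.push_back ({ note });          // event carries its own ADSR state
}

void handleNoteOff (int note)
{
    for (auto& e : events)
        if (e.note == note && e.stage != EnvStage::Release)
            e.stage = EnvStage::Release;  // let the release run to completion
}
```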

So this is promising. Should I continue with this architecture?

If you make the polyphony dynamic (“unlimited”), you will have to do heap allocations, which are a big no-no in realtime audio code, because the time an allocation takes is not known beforehand. If the polyphony is limited to some amount, the code can just preallocate that many voices before the audio processing starts.
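
Something like this is the usual fixed-polyphony pattern (just a rough sketch; names like `Voice` and `kMaxVoices` are made up):

```cpp
#include <array>

struct Voice
{
    bool   active = false;
    int    note   = -1;
    double phase  = 0.0;
};

constexpr int kMaxVoices = 32;

// Allocated once, lives for the plugin's lifetime; the audio thread
// only recycles voices, it never allocates.
std::array<Voice, kMaxVoices> voices;

Voice* findFreeVoice()
{
    for (auto& v : voices)
        if (! v.active)
            return &v;

    return nullptr;   // all voices busy: steal one or drop the note
}
```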

Well kinda,
I have a vector of events, where each event is just a simple struct of around 48 bytes, including ADSR state, phase and time. I reserve 128 in that vector and add and remove events outside the processing loop, so there is no allocation unless someone sets a long release and triggers more than 128 events in a short time, and I think anyone expecting that much of a plugin would be stupid at best.

Events are added with emplace_back and removed with erase, so the pre-allocated storage stays in place and I just iterate over the current size. So I don’t think there is any heap allocation going on, only heap reads.
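
Roughly what I mean (simplified sketch, struct and condition are placeholders): the capacity is reserved once up front, so emplace_back and erase only move elements around inside that buffer, and no reallocation happens until size() exceeds the reserved 128.

```cpp
#include <algorithm>
#include <vector>

struct Event { int note; double phase; double envLevel; /* ... ~48 bytes */ };

std::vector<Event> events;

void prepare()
{
    events.reserve (128);   // one allocation, done before audio starts
}

void noteOn (int note)
{
    // no reallocation as long as size() stays within the reserved capacity
    events.emplace_back (Event { note, 0.0, 0.0 });
}

void removeFinished()
{
    // erase() keeps the capacity; it only shrinks size()
    events.erase (std::remove_if (events.begin(), events.end(),
                                  [] (const Event& e) { return e.envLevel <= 0.0; }),
                  events.end());
}
```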

Don’t underestimate the stupidity of users :wink:
