Usage of SynthesiserSound

I've been looking in the JUCE demo and the class documentation, but I can't figure out how that class would come in handy. Is it supposed to handle voice allocation for a synth? I mean, I have 2 synth classes with some voices; in each synth one sound is mapped to part of the keyboard and the other takes the rest… am I doing it right? In which case would I allocate more than one sound for a single synth (probably a sampler with a wave playing across some keys)? Does anyone have some ideas to share about the possible uses?

Yes, sounds like you’re doing it right. It’s more like a descriptor of what should be played, and which keys should play it, rather than an algorithm for actually creating the sound.
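
That "descriptor" idea can be sketched with a sound that only answers which keys it covers. This is a self-contained stand-in, not JUCE itself, but the two queries mirror the real `juce::SynthesiserSound` interface (`appliesToNote()` and `appliesToChannel()`):

```cpp
// Stand-in mirroring juce::SynthesiserSound's two pure-virtual queries:
// the sound describes what should respond, not how it sounds.
struct SynthesiserSound
{
    virtual ~SynthesiserSound() = default;
    virtual bool appliesToNote (int midiNoteNumber) = 0;
    virtual bool appliesToChannel (int midiChannel) = 0;
};

// A sound covering one key range: the "split keyboard" case above.
struct KeyRangeSound : public SynthesiserSound
{
    int lowNote, highNote;

    KeyRangeSound (int low, int high) : lowNote (low), highNote (high) {}

    bool appliesToNote (int n) override    { return n >= lowNote && n <= highNote; }
    bool appliesToChannel (int) override   { return true; } // respond on any channel
};
```

Two of these (say, one for notes 0–59 and one for 60–127) give you the keyboard split described in the question, with the synth picking the matching sound per incoming note.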

I supposed that, so I can use it to route my synth instances to different channels and make them play with different key maps…
thanks :wink:
So I'm allocating only 1 sound per synth instance. I was just wondering when I would allocate 2 sounds for a single synth, and how I should treat that case on note-on (just an example)…

Up to you, really. If you put 2 sounds on a key, two voices will be used to play them.

Ah OK, now I get it. Thanks a lot.

Coming back to the Synthesiser, I've started hacking the findFreeVoice function to update the voice-stealing algorithm. What do you think about a new virtual function in the Voice class that returns the overall gain of that voice (mainly the volume-envelope gain)? In that case, as I've done a lot before, the stealing process checks for the longest-playing voice that is in its release stage, is playing the lowest or highest note (in hertz), and is playing more quietly than the others. That way there are far fewer cases where you can hear note stealing abruptly…
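
The heuristic above could be sketched as a scoring function over candidate voices. This is a self-contained sketch, not the JUCE implementation: `Voice` here is a plain stand-in for a SynthesiserVoice subclass, `overallGain` is the proposed virtual, and the weights are arbitrary:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Stand-in for a synthesiser voice; in JUCE this state would live on a
// SynthesiserVoice subclass.
struct Voice
{
    int64_t startTime = 0;    // when the note began (smaller = playing longer)
    bool inRelease = false;   // already received note-off, tailing off
    int midiNote = -1;        // -1 means the voice is free
    float overallGain = 1.0f; // the proposed virtual: current envelope gain
};

// Pick the voice whose theft is least likely to be audible: prefer
// releasing voices, then quieter ones, notes at the keyboard extremes,
// and older notes. Weights here are illustrative, not tuned.
static Voice* findVoiceToSteal (std::vector<Voice>& voices)
{
    Voice* best = nullptr;
    float bestScore = -1.0f;

    for (auto& v : voices)
    {
        if (v.midiNote < 0)
            return &v; // a genuinely free voice always wins

        float score = 0.0f;
        if (v.inRelease)
            score += 4.0f;                                  // releasing: best candidates
        score += 1.0f - v.overallGain;                      // quieter is better
        score += std::abs (v.midiNote - 60) / 64.0f;        // extremes of the keyboard
        score += 1.0f / (1.0f + (float) v.startTime);       // older notes score higher

        if (score > bestScore)
        {
            bestScore = score;
            best = &v;
        }
    }

    return best;
}
```

A voice type that has no meaningful envelope would just report `overallGain = 1.0f`, as suggested below, and simply never benefit from the quietness term.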

Maybe, though not all types of voice would necessarily have a value they could return.

In that case the voice should return 1.0 to indicate that it has full gain.

Ah, I see that you are doing:

for (int i = voices.size(); --i >= 0;)
    voices.getUnchecked (i)->renderNextBlock (outputBuffer, startSample, numThisTime);

in the process block. If you raise the overall number of voices, you are processing all of them without any check (it's up to the voice not to process the block, but you're still paying for the call). If you handle voice allocation, you know which voices are actually playing, so you could keep them in a list and save a lot of function calls in the process loop when only a few voices are playing…
Just some thoughts, I hope I'm not disturbing you :wink:
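
The saving being described might look like this. A self-contained sketch with stand-in types; in JUCE the cheap check would be something like `getCurrentlyPlayingNote() >= 0`, or a maintained list of active voices:

```cpp
#include <vector>

// Stand-in for a voice in the render loop; renderCalls counts how often
// renderNextBlock actually ran.
struct Voice
{
    int currentNote = -1; // -1 = idle
    int renderCalls = 0;

    bool isActive() const  { return currentNote >= 0; }
    void renderNextBlock() { ++renderCalls; } // would mix into the audio buffer
};

// Only call into voices that are actually sounding, instead of paying a
// (virtual) call per idle voice on every audio block.
static void renderActiveVoices (std::vector<Voice>& voices)
{
    for (auto& v : voices)
        if (v.isActive())
            v.renderNextBlock();
}
```

With, say, 64 allocated voices and 3 notes held, this turns 64 calls per block into 3 (plus 64 cheap flag checks, which an active-voice list would also remove).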

I've found a bug in the Synthesiser noteOff implementation.
That function only checks whether the note number matches and the voice is currently playing, but it doesn't check whether that voice has already received a noteOff message and is currently tailing off.
So if I play a note with a long release and then retrigger it a few more times, I get a lot of clicking weirdness…

        if (voice->getCurrentlyPlayingNote() == midiNoteNumber)

should be:

        if (voice->getCurrentlyPlayingNote() == midiNoteNumber && ! voice->isCurrentlyTailingOff())

OR you must make sure that you call


in stopNote even when tail-off is allowed. And you also have to modify findFreeVoice:

    for (int i = voices.size(); --i >= 0;)
        if (voices.getUnchecked (i)->getCurrentlyPlayingNote() < 0 &&
            ! voices.getUnchecked (i)->isCurrentlyTailingOff())
            return voices.getUnchecked (i);

so it doesn't hand out voices that are already tailing off…

I think there must be some mechanism for the voice to also report that it is in tail-off (release) mode; without it, the current implementation is pretty much unusable for building a real synth (i.e. lots of clicks and pops)…

No, I think it’s just your implementation that’s not behaving properly here.

If a voice is tailing off and its stopNote() method is called twice, it should just ignore the second call. If your code makes a clicking noise, don’t blame my framework for it!

Of course, stopNote may be called more than once for a good reason - e.g. if the first time it allows a tail-off, but then the user wants to kill all sound, and it gets called again with no tail-off, meaning that the voice should just stop immediately.

A voice shouldn’t call its clearNote method until it’s actually finished all sound and is ready for a new task.
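
That contract might look roughly like this. A self-contained sketch, not JUCE code: `playing` and `tailOff` stand in for whatever state a real SynthesiserVoice subclass keeps:

```cpp
// Sketch of the stopNote() contract described above.
struct Voice
{
    double tailOff = 0.0; // 0 = no tail-off yet, > 0 = fading out
    bool playing = false;

    void startNote()        { playing = true;  tailOff = 0.0; }
    void clearCurrentNote() { playing = false; tailOff = 0.0; }

    void stopNote (bool allowTailOff)
    {
        if (! playing)
            return;

        if (allowTailOff)
        {
            // A repeated stopNote during the tail must be ignored,
            // otherwise the fade restarts and you hear a click.
            if (tailOff == 0.0)
                tailOff = 1.0;
        }
        else
        {
            // Hard stop: the caller wants all sound killed now.
            clearCurrentNote();
        }
    }
};
```

The render callback would then multiply its output by `tailOff`, shrink it each sample, and call `clearCurrentNote()` once it reaches silence.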

[quote=“jules”]No, I think it’s just your implementation that’s not behaving properly here.
If a voice is tailing off and its stopNote() method is called twice, it should just ignore the second call. If your code makes a clicking noise, don’t blame my framework for it![/quote]

OK then, I'll check my implementation and see if I can work around this with other techniques (less clean than that), but remember… your demo implementation (even though it's the simplest sine oscillator) clicks just the same :wink:
And I don't want to blame your framework; I just want some ideas on how to do it right, so my synth voices sound good with your Synthesiser and its voice allocation (which differs a bit from my old one).

Doing it like this fixed it in the demo:

    if (allowTailOff)
    {
        // start a tail-off by setting this flag. The render callback will pick up on
        // this and do a fade out, calling clearCurrentNote() when it's finished.

        // remember to start the tail-off only if it hasn't already started
        if (tailOff == 0)
            tailOff = 1.0;
    }

Ah, good point about the demo, sorry about me setting a bad example there!

I’m about to do some work with synths for a new plugin I’m messing about with, so might have some improvements to this area of the code over the next few weeks too.

Very good. I'm also doing some research in that area, mainly trying to adapt my old synth implementations to the JUCE Synthesiser/AudioSource framework. And now my synth doesn't click anymore, as it used to before I abruptly changed the voice-allocation engine behind it…
What about your synth? Will you show something to the public, or will it be closed source?

My plugin will be closed-source, but there might be a few spin-off classes that end up in the library. Don’t know yet, as I’ve only just started…

Great, I'd really love it if this part of the framework evolved into something more powerful :wink:

After trying the Synthesiser class for a while, I have to admit it works very well, except for the note-stealing methodology. When the synth runs out of voices and note stealing is enabled, you have to cut your voice abruptly in stopNote, with no possibility of a fast fade (and so it clicks like hell).
And no, it's not my mistake or a misuse of the framework: when a voice is cut with stopNote (allowTailOff = false), it shouldn't stop abruptly. Have you ever heard a synth that clicks whenever it needs to steal a voice while playing with 2 voices max?
Instead, you should tell that voice to do a fast release (fading out over a few sample frames to avoid the click, e.g. with tailOff = 0.99, rather than stopping instantly), and the new noteOn should start on another free voice.
For that I usually create N voices, then specify the maximum polyphony in the synthesiser (which can be at most < N, but never equal to the number of voices): that way some free voices are always available to trigger new notes while the stolen voices finish their fast release (without any click)… Obviously you'll hear maxPolyphony + numberOfStolen voices while stealing is occurring, but that should only last a very short period (though, I repeat, not be instantaneous)…
You can reproduce this by making a Synthesiser play with note stealing enabled and 2 long-release voices as the maximum polyphony.
I could be wrong, but… what do you think?

Ah, just for the record: it was René Ceballos (rgc:audio) who made me understand this approach, a couple of years ago…

I know this is a relatively old thread (and I’m also relatively new to JUCE). But I think I have this soft stealing method working, although I might be making too many assumptions. Basically I have an ADSR envelope mechanism running in my synth voices with an extra STEAL phase which is effectively a 5ms (or whatever) fast release.

In my voice's startNote() method I check whether the voice is already in the ATTACK, DECAY, SUSTAIN or RELEASE phase of the envelope, and if so I assume this voice is being stolen (I also have an OFF phase so I know when a voice is really done). So instead of using the startNote parameters immediately, I take a copy of them in the voice and trigger the STEAL phase of the envelope.

When the STEAL phase has faded to silence and the envelope phase is set to OFF, I then call startNote() with the copies of the parameters I took earlier.

So a note that has stolen an existing voice will start 5ms late but this method doesn’t use extra voices.

I assume in this case I shouldn’t call clearCurrentNote() when the voice being stolen gets to its end since the Synthesiser assumes that it has already started the new voice.

(One thing is I can’t figure out under what normal circumstances stopNote() will be called with allowTailOff being false.)

Anyway does this sound like a sensible approach?
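
A bare-bones version of the deferred start, to make the sequencing concrete. Everything here (the phase names, `pendingNote`, `onStealFadeFinished`) is my own stand-in for what would live inside a SynthesiserVoice subclass's startNote() and render loop:

```cpp
// Envelope phases; STEAL is the extra fast-release phase described above.
enum class Phase { Off, Attack, Decay, Sustain, Release, Steal };

struct Voice
{
    Phase phase = Phase::Off;
    int note = -1;
    int pendingNote = -1; // parameters saved while the STEAL fade runs

    void startNote (int newNote)
    {
        if (phase != Phase::Off)
        {
            // Voice is being stolen: save the request and start the
            // short STEAL fade instead of restarting abruptly.
            pendingNote = newNote;
            phase = Phase::Steal;
            return;
        }

        note = newNote;
        phase = Phase::Attack;
    }

    // Called from the render loop once the STEAL fade reaches silence.
    void onStealFadeFinished()
    {
        phase = Phase::Off;

        if (pendingNote >= 0)
        {
            const int n = pendingNote;
            pendingNote = -1;
            startNote (n); // starts ~5 ms late, but click-free
        }
    }
};
```

As noted above, the voice deliberately doesn't report itself finished to the synth during the fade, since from the Synthesiser's point of view the new note has already been assigned to it.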


Yes, that sounds like a pretty sensible way of gracefully stopping the notes.

The reason for the allowTailOff param is so that you can shut down all the notes and know that there’s nothing left hanging, e.g. if your app needs to delete the entire synth or something.

Have a look at some analog synths and how they work. There are subtle differences in the way envelopes behave, like the Minimoog's climbing envelope, which are also present in polyphonic synths. If you always zero an envelope on a new note-on while it's already playing, you'll end up with a very thumpy sound, which isn't great and loses some of the character and dynamics of a hardware synth.
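
The "climbing" retrigger amounts to restarting the attack from the envelope's current level rather than from zero. A minimal sketch with a linear attack (the rate and member names are illustrative, not any particular synth's):

```cpp
// Envelope whose retrigger climbs from the current level instead of
// resetting to zero, avoiding the "thump" of an abrupt jump.
struct Envelope
{
    float level = 0.0f;            // current output, 0..1
    float attackPerSample = 0.01f; // linear attack increment
    bool rising = false;

    void trigger()
    {
        // Note the deliberate omission of `level = 0.0f` here: a voice
        // retriggered mid-note keeps its level and climbs from there.
        rising = true;
    }

    float nextSample()
    {
        if (rising)
        {
            level += attackPerSample;
            if (level >= 1.0f)
            {
                level = 1.0f;
                rising = false;
            }
        }
        return level;
    }
};
```

Making that one-line reset optional (a "retrigger mode" switch) would let users choose between the punchy zeroed restart and the smoother analog-style behaviour.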