I was studying the sources of the ADSR
class and noticed something: am I right to think it should be
attackRate = (parameters.attack > 0.0f ? static_cast<float> (1.0f / (parameters.attack * sr)) : -1.0f);
decayRate = (parameters.decay > 0.0f ? static_cast<float> ((1.0f - sustainLevel) / (parameters.decay * sr)) : -1.0f);
releaseRate = (parameters.release > 0.0f ? static_cast<float> (sustainLevel / (parameters.release * sr)) : -1.0f);
rather than:
attackRate = (parameters.attack > 0.0f ? static_cast<float> (1.0f / (parameters.attack * sr)) : -1.0f);
decayRate = (parameters.decay > 0.0f ? static_cast<float> (1.0f / (parameters.decay * sr)) : -1.0f);
releaseRate = (parameters.release > 0.0f ? static_cast<float> (1.0f / (parameters.release * sr)) : -1.0f);
?
That is: in the attack stage, the amplitude goes from 0 to 1, so we "split" the range (1 in this case) over the attack duration (expressed in number of samples). In the decay stage, the amplitude goes from 1 to the sustain level, so we split the range (1 - sustainLevel) over the decay duration, and so on… Instead, it looks like the original code always uses a range of 1 for every stage. I tried making the change locally and it seems to work just fine…