It seems to work in general (I think), but is this the correct way to do it?
Currently it sounds kinda uh…strange/glitchy for low delay values (say ~30 ms).
Thanks.
edit: Changed the code a bit and did some more tests.
I think it works correctly, but only for delay times above ~15 ms; below that I get distorted sound.
Why is that?
that’s probably because around 20 ms is the threshold below which a delayed signal can appear not to have been delayed at all. the haas effect also uses delays of up to 20 ms for that reason. so maybe it’s more of a perception thing than a bug
Thanks, but I don’t think that’s it.
If anyone wants to check:
I recorded a test signal with my plugin. Listen to how the left channel sounds distorted; this starts when I set delay times below ~15 ms.
(as zip in case the player doesn’t work: test.zip)
the code looks reasonable to me, even though i don’t use delay lines myself. you could change the delay time before or after the sample loop though, since you are not smoothing that parameter. and maybe it’s also a bit confusing that there is an outer channel loop when you deal with both channels inside the sample loop anyway
If anyone comes across this in the future, it’s solved.
@danielrudrich PM’ed me (thanks!) and told me I had to swap the inner and outer loops, so that the two delays (left and right channel) are ‘time synced’ at the sample level, which makes sense now that I think of it.
It now handles delay times >= 1 ms glitch-free.
yeah, that’s what i meant too. most of the time you either use the channel loop or handle the channels explicitly inside the sample loop. another example is mid/side conversion. stuff that only makes sense in stereo, basically. make sure to check numChannels before going into this, or it will crash on mono tracks