Delay output is distorting

If I've understood your code correctly, you should make sure that your delayBuffer has at least as many channels as your buffer, instead of clamping the channel index with "juce::min" when you obtain the delayData pointer.

Consider this scenario: buffer is stereo and delayBuffer is mono.

  • On the first pass of your for loop, for the first channel in buffer, you write into delayData (i.e. the first channel in delayBuffer) the samples produced by processing the first channel of buffer.
  • On the second pass, for the second channel in buffer, delayData again points to the first (and only) channel in delayBuffer, whose samples were just overwritten during the first pass. So the second channel of buffer is processed against delay data produced from the first channel in the current callback, rather than against the second channel's data from the previous callback, as it should be.

In this scenario, this is an instance of the most common programming mistake that we see on the forum.