Hello,
I’m trying to implement a time‑stretching feature using JUCE and signalsmith‑stretch, but I haven’t been able to get it working correctly yet.
I’ve extracted the parts of the implementation that seem relevant and included them below. stretchRatio is a predefined value (smaller = slower, larger = faster).
My expectation is that setting stretchRatio to 2 would give “same pitch, double speed,” and setting it to 0.5 would give “same pitch, half speed.”
However, in both cases the pitch drops while the tempo stays the same.
If anyone has any insights or suggestions, I would greatly appreciate your help.
This is a weird one. Nothing about the use of the SignalsmithStretch class looks obviously wrong to me from what's posted here, but I'm also not seeing the whole picture.
The weird part is that you get a drop in pitch whether the ratio is 2 or 0.5, which suggests that something else is off. Are there any other artefacts in the audio?
One thing that makes me a little nervous is that the number of channels in bufferToFill isn't checked against the number of channels in stretchOutputBuffer, which could cause issues if stretchOutputBuffer is mono and bufferToFill is stereo. That could be nothing, though, depending on the wider context of the project.
Since the copy of signalsmith-stretch had originally been added by the AI and I wasn’t sure where it came from, I re-added it via git submodule and checked again, but the issue didn’t improve.
I’m using signalsmith-stretch at tag 1.1.0, and its linear dependency is pinned to 0.3.0.
Are there any other artefacts in the audio?
You mean whether there are any other sources, right?
Are you processing samples or real-time audio? If you are processing an incoming signal, you have to account for the different buffer lengths. As far as I remember, I had many headaches trying to use stretch on real-time audio, while it works like a charm for samples.
I meant audio artifacts, like glitches, distortion, noise, that sort of thing. You say it plays at a lower pitch, but does it play perfectly at a lower pitch?
I can tell you now that putting DBG messages in an audio callback will cause glitches. I also find that any time/pitch effect running in realtime will suffer from glitches if you’re running a debug build, or if you haven’t got the settings right. Try running a release build and see how things go.
Ultimately how you approach this problem will depend heavily on your needs and end goals. When processing samples it might be better to render the time-stretching of the audio file on a background thread, then swap it in when it’s finished. That approach has many pitfalls that AI coding assistants tend to miss, and it isn’t great for long audio files either, but there are ways to deal with that.
The audio isn’t just playing at a lower pitch. It sounds slightly distorted, and it also feels like the pitch isn’t quite correct.
As you pointed out, I’m running it in a debug build. However, what concerns me is that when time‑stretching is disabled — meaning when the stretchRatio is 1.0 — I don’t hear any noticeable artifacts, despite it being a debug build.
I also can’t really explain the fact that the tempo doesn’t change.
I’ll try checking it in a release build for now.
That is, I changed the order of reset() and presetDefault(). Without that, the code crashes right away for me and doesn’t even get to the audio processing.
The offline processing code I wrote for reference:
icebreakeraudio, xenakios
Thank you very much for your replies.
I tried both of the suggestions you gave me — doing a release build, and changing the order of reset() and presetDefault() — each one separately.
Unfortunately, neither of them had any effect.
The behavior looks the same as before the fix.
The release build did make the audio slightly clearer, which is nice, but the major issues remain unchanged: the pitch shifts and the tempo does not.
Next, I plan to look at the code xenakios posted and compare it with mine to find elements that exist in his code but not in mine.
The other day, lcapozzi asked me whether I was processing samples or real‑time audio. I answered that I was processing samples, but it turns out that wasn’t correct. It seems I was actually processing real‑time audio.
Real‑time audio processing with signalsmith‑stretch looked difficult, just as lcapozzi mentioned, so I switched to bungee, which claims to support real‑time processing as well. However, the results sounded pretty much the same. Even when I tried to apply time‑stretching, the tempo didn’t change and the pitch dropped, making it sound out of tune.
It looks like there’s still quite a bit more investigation to do.