I’ve been developing an amp simulator for the better part of a year now, and when using the standalone plugin with the Windows Driver setting it sounds great, albeit only outputting from one speaker.
However, when I change it to ASIO (I’ve done all the Steinberg SDK setup) I lose a lot of top-end clarity, and a weird distortion appears, almost as if the signal is being overloaded. This also comes out of a single speaker, although the reverb comes out of both.
I use a number of filter banks to sculpt the guitar tone, each wrapped in a ProcessorDuplicator so they run on both channels. The only other elements in the chain are a WaveShaper using the function x / (std::abs(x) + 1), a Convolution module, JUCE’s Reverb class, and a few gain stages (input/output) which I’ve ranged appropriately.
I have no idea why it’s doing this. The result is perfect with the Windows driver, but exporting the VST to Ableton and switching to ASIO yields a woolly, lifeless, farty result.
Anyone have any ideas? The code is part of a wider project that I can get in trouble for sharing, so any help would be much appreciated. Thanks.
Your description doesn’t sound particularly like a block-size issue, but if you want to investigate that further you’d have to monitor the buffer sizes in each processBlock() call. You could keep track of the min, max, and average, for instance.
Thinking about your description again, it could be a phase-cancellation issue or maybe even comb filtering. Check whether a stereo signal is being summed to mono somewhere, or whether a delayed copy of the signal is being mixed back in.