Sample accuracy of Synthesiser

I’ve just spent three hours tracking down an accuracy problem, only to discover this function - did the Synthesiser class suddenly become less accurate at some point? Surely the default should be sample accurate?

/** Sets a minimum limit on the size to which audio sub-blocks will be divided when rendering.
    When rendering, the audio blocks that are passed into renderNextBlock() will be split up
    into smaller blocks that lie between all the incoming midi messages, and it is these smaller
    sub-blocks that are rendered with multiple calls to renderVoices().
    Obviously in a pathological case where there are midi messages on every sample, then
    renderVoices() could be called once per sample and lead to poor performance, so this
    setting allows you to set a lower limit on the block size.
    The default setting is 32, which means that midi messages are accurate to about < 1ms
    accuracy, which is probably fine for most purposes, but you may want to increase or
    decrease this value for your synth.
*/
void setMinimumRenderingSubdivisionSize (int numSamples) noexcept;

P.S. Sorry about the title - the forum didn’t want to accept the function name in the heading for some reason. I get “Title is invalid; try to be a bit more descriptive” if the function name appears in there!

Afaik this has been like that for quite a while. This is not really about the precision of the audio rendering itself; it’s about how precise the point in the buffer is at which the Synthesiser e.g. triggers a new note, as a result of incoming MIDI messages with a timestamp that falls within the current buffer. If you’re really sure you want that MIDI timing to be sample accurate, you can easily get it by setting numSamples to 1 (which may obviously lead to worse performance).
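For example, a minimal sketch - mySynth and configureSynth here are just placeholders for whatever juce::Synthesiser instance and setup code your plugin owns:

#include <juce_audio_basics/juce_audio_basics.h>

juce::Synthesiser mySynth;

void configureSynth (double sampleRate)
{
    mySynth.setCurrentPlaybackSampleRate (sampleRate);

    // Sub-blocks may now be as small as one sample, so every MIDI event
    // lands on its exact sample position, at the cost of potentially many
    // more renderVoices() calls per buffer.
    mySynth.setMinimumRenderingSubdivisionSize (1);
}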

I’d argue that if you are e.g. recording a performance played on a MIDI controller, all the other factors contributing to imprecise timing of those MIDI messages are much more significant than that numSamples parameter, so most of the time it doesn’t matter whether you make it lower than the default.

I understand what it’s doing. But I don’t think it’s a good idea to introduce changes that reduce accuracy like this.

Also worth knowing that lots of uses of MIDI don’t involve recording live performances.

The electronic music guys in a bunch of genres need and expect MIDI to be sample accurate. And even in the rock and pop world, I have four sides of analysis (the most comprehensive customer support request I’ve ever seen!) from a mix engineer using a beta product, describing how he had to manually correct the output from this change back into time.

Anyway - I’ve fixed it now. But sample accurate should be the default!!

Just to finish off: the problem is not so much with an individual performance, but if you layer two performances, you need the two synthesisers to have consistent timing to avoid phasing issues.
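To put a rough number on those phasing issues - a back-of-the-envelope sketch (the sample rate and offset are just illustrative values, not from any particular session): summing two copies of the same hit, one offset by 32 samples, gives a comb filter whose first notch sits well inside the audible range.

#include <cstdio>

int main()
{
    const double sampleRate    = 44100.0;
    const int    offsetSamples = 32;

    const double delaySeconds = offsetSamples / sampleRate;  // ~0.73 ms
    const double firstNotchHz = 1.0 / (2.0 * delaySeconds);  // ~689 Hz

    std::printf ("offset %.3f ms -> first comb-filter notch at ~%.0f Hz\n",
                 delaySeconds * 1000.0, firstNotchHz);
}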

Hm, I agree, but I can’t see such a change in the git history; as far as I can tell it has been like this for a while… where do you see that? There was a change by Jules more than a year ago that added the option to specify a different value, but I can’t see how that reduced any accuracy anywhere?

Yeah, but you’ll get that as long as you use the same minimumRenderingSubdivisionSize for both, right?

Both layers are not necessarily from the same plugin. You might have a bass from one synth playing over a drum from another, non-JUCE plugin. They have to have the same phase relationship every time they hit, otherwise it sounds horrible!

Was there an issue that someone had that led to this change?

@jules do you know anything about this?

We introduced the change because it makes a huge performance difference when you have very dense buffers of midi messages - which is exactly when phasing etc. doesn’t really matter, because you’re likely to have a whole bunch of notes playing at once.

The idea is that it won’t reduce accuracy at all, except in circumstances where notes are played very close together. So e.g. if you have one note played in a buffer, it will start at exactly the sample index it specifies. But if it’s followed by another note just a few samples later in the same buffer, then that one will either be played at the same time as the first, or delayed by 32 (or whatever) samples to be batched with the next one.
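To make that concrete, here’s a standalone toy re-implementation of the splitting loop - my own sketch for illustration, with made-up event positions and block size, not the actual JUCE code:

#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    const int blockSize = 256;
    const int minimumSubBlockSize = 32;

    // Two note-ons 10 samples apart - closer than the minimum sub-block size.
    std::vector<int> eventPositions { 100, 110 };

    int startSample = 0;
    int numSamples  = blockSize;
    std::size_t next = 0;

    while (numSamples > 0)
    {
        if (next >= eventPositions.size())
        {
            // No events left: render the remainder in one go.
            std::printf ("render [%d..%d)\n", startSample, startSample + numSamples);
            break;
        }

        const int samplesToNextMidiMessage = eventPositions[next] - startSample;

        if (samplesToNextMidiMessage < minimumSubBlockSize)
        {
            // The event is handled right away, i.e. batched with whatever
            // sub-block came before it.
            std::printf ("event at %d handled at sample %d\n",
                         eventPositions[next], startSample);
            ++next;
            continue;
        }

        std::printf ("render [%d..%d)\n", startSample,
                     startSample + samplesToNextMidiMessage);
        std::printf ("event at %d handled at sample %d\n",
                     eventPositions[next], eventPositions[next]);

        startSample += samplesToNextMidiMessage;
        numSamples  -= samplesToNextMidiMessage;
        ++next;
    }
}

// Prints:
//   render [0..100)
//   event at 100 handled at sample 100
//   event at 110 handled at sample 100   (batched 10 samples early)
//   render [100..256)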

Ah, so it seems I somewhat misunderstood how this stuff works. I thought all MIDI messages would be quantised to 32 samples. It turns out this only happens if you have notes that are closer together than 32 samples. Thanks for the explanation, @jules!

Thinking more about the number 32: with a sample rate of 44.1kHz, 32 samples is less than a millisecond. Compare that to the baud rate of the MIDI protocol, 31250 bits per second: at that speed, MIDI can’t really deliver more than one note-on or note-off message per millisecond anyway (and if it could, most likely no-one could hear the difference). So that number actually seems quite reasonable 🙂
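A quick sanity check of those numbers (assuming a standard 3-byte note-on message and the usual 10 bits per serial byte):

#include <cstdio>

int main()
{
    const double baudRate         = 31250.0;              // MIDI wire speed, bits/s
    const double bytesPerSecond   = baudRate / 10.0;      // start bit + 8 data bits + stop bit
    const double noteOnsPerSecond = bytesPerSecond / 3.0; // status + note + velocity
    const double msPerNoteOn      = 1000.0 / noteOnsPerSecond;

    const double subBlockMs = 32.0 / 44100.0 * 1000.0;

    std::printf ("one note-on per ~%.2f ms on the wire; 32 samples = ~%.2f ms\n",
                 msPerNoteOn, subBlockMs);   // ~0.96 ms vs ~0.73 ms
}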

But that isn’t how it’s implemented…? I’ll double-check - but I was only playing a note every half second or so.

Let’s assume samplesToNextMidiMessage is 30 and minimumSubBlockSize is 32, and we have one note in the block.

while (numSamples > 0)
{
    if (! midiIterator.getNextEvent (m, midiEventPos)) // false
    {
       ...
    }

    const int samplesToNextMidiMessage = midiEventPos - startSample;

    if (samplesToNextMidiMessage >= numSamples) // false
    {
        ...
    }

    if (samplesToNextMidiMessage < minimumSubBlockSize) // true
    {
        handleMidiEvent (m); // midi note starts 30 samples early!!!!
        continue;
    }

    renderVoices (outputAudio, startSample, samplesToNextMidiMessage);
    handleMidiEvent (m);
    startSample += samplesToNextMidiMessage;
    numSamples  -= samplesToNextMidiMessage;
}

OK, good point… For the first note, if it happens to be near the start of the buffer, then it would lose a bit of accuracy. I can fix that, e.g.

bool firstEvent = true;
int midiEventPos;
MidiMessage m;

const ScopedLock sl (lock);

while (numSamples > 0)
{
   ...

    if (samplesToNextMidiMessage < minimumSubBlockSize && ! firstEvent)
    {
        // Batch an event early only if it isn't the first one in the buffer,
        // so that a lone note keeps its exact sample position.
        handleMidiEvent (m);
        continue;
    }

    firstEvent = false;

    renderVoices (outputAudio, startSample, samplesToNextMidiMessage);
    handleMidiEvent (m);
    startSample += samplesToNextMidiMessage;
    numSamples  -= samplesToNextMidiMessage;
}

But I think that otherwise the logic is OK. You may still want to reduce the value in your own plugin though!


Ta - definitely better!!

Looks like this change broke something: