MIDI note-on and note-off at the same time?

There was a similar thread but the question was not answered.

In my current implementation of MIDI message processing, if a note is released and a new note started at the same position in the sample buffer, the new note-on gets eaten.

I'm just curious how you deal with this.

This is my current implementation:

// In processBlock
    MidiBuffer::Iterator MIDIMessagesIterator (midiMessages);
    MidiMessage currentMidiMessage;
    int midiMessageSamplePosition;

    for (int sample = 0; sample < buffer.getNumSamples(); ++sample)
    {
        MIDIMessagesIterator.setNextSamplePosition (sample);

        // Is there at least one more MIDI message in the buffer?
        if (MIDIMessagesIterator.getNextEvent (currentMidiMessage, midiMessageSamplePosition))
        {
            // Is the message we got a note-on, and is it in the
            // right place?
            if (currentMidiMessage.isNoteOn()
                && midiMessageSamplePosition == sample)
            {
                // Do whatever here.
            }
        }
    }

Any help is appreciated!


You're horribly misusing the midi buffer iterator! You shouldn't ever set its sample position, just iterate it and let it tell you the sample position of each event that it gives you. The juce::Synthesiser class does all this kind of thing for you - why not have a look at how that works?

I'm not really making a synthesizer. I'm making an audio effect that responds to MIDI.


I'm taking a look at the juce::Synthesiser class but I'm having a little bit of trouble understanding what's going on. I'd really like a MIDI implementation that deals purely with MIDI handling and nothing else. I haven't found anything like that yet, though.


I would basically like to take the incoming MidiBuffer from processBlock and call whatever I want with it (no concept of rendering an audio block or synth voices).

I think the main source of my confusion right now is how I'm structuring my DSP. I'm not sure which parts of my program should be aware of what a buffer is.


The way I'm currently doing it is:

- Start on the current sample

- Process each external processing aspect so their current sample is up to date

- Iterate through each channel and set the output accordingly


The way I see it done in the JUCE example:

- For each processing aspect:

        - Pass in the audio and MIDI buffer

        - Render through the entire buffer size individually for each channel


Should I be doing it this way? Should all of my aspects be capable of rendering a buffer, or should they simply be able to output the next sample?