Best way to make scrollable/zoomable sequencer GUI

I am planning to make a MIDI-like piano-roll sequencer layout where the user can click on a grid space to create a note there, click again to delete the note, and hopefully be able to drag and manipulate these notes in other ways.

I cannot use any MIDI-centric classes for this because my program is not using MIDI or even a traditional piano key setup. All other sequencer-based tutorials and guides I’ve found for JUCE only reference step sequencers, but I need something that can support dynamic grid sizes and scrolling at the very least to be viable.

So, here are the two approaches I came up with:

  1. It is easy to use for loops to generate a nice grid of rectangles, and I could simply change the drawing parameters and repaint to “fake” a scrolling effect. However, these rectangles, being just drawings, are not interactive in any way, and I don’t know how to make them so. This seems like it would be a more customizable and optimizable solution than the one I mention in 2, so if there is a way to do it I would love to know.

  2. My next thought was to make a grid of buttons which change color when toggled on and off, but that seems a little silly. Is that really the intended use of buttons? Will the application remain stable even with many hundreds of such buttons displayed at once? Besides, this approach would seem to create a hassle when it comes to having notes longer than a single grid square - would I just extend the button bounds, or…?

Sorry, I know this is a very open-ended question, but after reading all the tutorials I am still pretty unsure of how to tackle this. Any advice is appreciated.

Just a couple of quick thoughts to get you started.

I’m not quite sure what you mean when you say you cannot use any MIDI-centric classes. Why can’t you use MIDI internally for your sequencer? Many MIDI classes don’t require any connected MIDI hardware, but rather offer useful features for organising and manipulating the timing, pitch and intensity of notes, etc.

To make your rectangles interactive, you’ll need to have some kind of listener. Every JUCE Component has/is a MouseListener, so that’s a good place to start. If you have hundreds of such Components, you can drastically improve their performance by calling setPaintingIsUnclipped(true); for those that don’t explicitly require clipping at their boundaries.
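Something along these lines, as a very rough sketch (GridCell and its members are names I’ve made up, and I’m assuming the usual Projucer-generated JuceHeader.h):

#include <JuceHeader.h>

// A clickable grid cell: every Component is already a MouseListener,
// so overriding mouseDown is all the interactivity you need.
class GridCell : public juce::Component
{
public:
    GridCell()
    {
        // Skip clip-region work when painting - a big win with hundreds of cells.
        setPaintingIsUnclipped (true);
    }

    void paint (juce::Graphics& g) override
    {
        g.setColour (noteOn ? juce::Colours::orange : juce::Colours::darkgrey);
        g.fillRect (getLocalBounds());
    }

    void mouseDown (const juce::MouseEvent&) override
    {
        noteOn = ! noteOn;  // click to create, click again to delete
        repaint();
    }

private:
    bool noteOn = false;
};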


Thanks for that performance tip. I was able to get MouseListener to work (I didn’t realize it was included in all Components, not just buttons and such - silly me) and I can now toggle my rectangle’s color with a click. I am repainting tons of rectangles and it is getting really laggy, so I’ll have to try the unclipped-painting trick next.

Also, as an aside, I can’t use MIDI because I am working with a custom file format tailored towards microtonal music. Rather than specifying notes from a preset enumeration like MIDI does, I wanted this format to be extensible enough to contain all kinds of equal temperaments and tunings - so it allows a user to specify an exact frequency in Hertz, up to four decimal places (in memory it is hardly more than a list of ints). This is also why I can’t use a default piano-roll layout - the grid’s pitch divisions will have to be configurable to intervals other than semitones for it to be visually useful. The end goal of this project is to basically have a very watered-down “DAW” for editing and playback of these files. Unfortunately, since it is all so non-standard, it seems I will have to do a lot of things from scratch. :frowning:
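To give a rough picture, a single note boils down to something like this (a simplified sketch, not the exact layout):

// Simplified sketch of one note record - storing Hz * 10000 as an int
// keeps four decimal places of frequency exactly.
struct MicrotonalNote
{
    int frequencyTenThousandthsHz;  // e.g. 4400000 == 440.0000 Hz
    int startTicks;                 // relative position in time
    int lengthTicks;                // relative duration
};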


You’re welcome. Glad to hear you’re getting somewhere! :grinning:

Oh I see, that sounds very interesting indeed and good luck with your endeavours.

To save some headaches and build on what’s already available, perhaps you could create adapter-style layers around MIDI for a lot of the functionality. e.g. conversions between ultra-precise Hz value, MIDI ‘key’ and pitch + bend value, possibly even leveraging MPE in some way to work around the usual limitations of MIDI channels and pitchbend. Have you investigated those options? I guess a lot depends on what sort of sound generation you’re using too.
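The core Hz → key + bend conversion is only a few lines, for example. A rough sketch, assuming the common default bend range of ±2 semitones (this varies by synth):

#include <cmath>

struct KeyAndBend { int key; int bend; };

// Map an exact frequency to the nearest MIDI key plus a 14-bit
// pitch-bend value (0..16383, centred on 8192).
KeyAndBend hzToMidi (double hz)
{
    const double note = 69.0 + 12.0 * std::log2 (hz / 440.0);  // fractional MIDI note
    const int key = (int) std::round (note);                   // nearest equal-tempered key
    const double offset = note - key;                          // -0.5 .. +0.5 semitones

    const int bend = 8192 + (int) std::round (offset * 8192.0 / 2.0);  // 2 = assumed bend range
    return { key, bend };
}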

Just a comment here that I’d recommend avoiding MIDI like the plague in the data model of an app like this. MIDI is an outdated streaming format with tiny fixed limits on channel count. It’s totally unsuitable for any kind of musical data model where you expect to store + edit the data or scale up the complexity.

I say this from personal experience because pretty soon we’re going to have to re-write the whole of Tracktion’s internal audio engine so that MIDI is only used for physical input and output, with a more flexible data structure replacing it for all internal operations, and this is something it’d have been better to do from day one.


Yeah, I was hoping to implement at least MIDI import one day. I have looked into pitch bend, and it seems it shouldn’t be more difficult than writing out a conversion formula. The main trouble will be the logarithmic relation of frequency to pitch, whereas pitch bend is a linear number…

As for sound generation, I am using JUCE entirely for that too. At first I’ll probably just have very simple wave generators, since those will be easiest to tune to exact frequencies, but I hope to add something tweakable for the user. Thanks again for the advice!
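For the curious, the oscillator really can be that simple - a naive phase-accumulator sketch of the kind of thing I mean:

#include <cmath>

// A naive phase-accumulator sine oscillator that can be tuned to any
// exact frequency - no MIDI note numbers involved.
class SineVoice
{
public:
    void prepare (double rate)     { sampleRate = rate; }
    void setFrequency (double hz)  { increment = hz / sampleRate; }

    float nextSample()
    {
        const double twoPi = 6.283185307179586;
        const float s = (float) std::sin (twoPi * phase);

        phase += increment;   // advance by hz / sampleRate per sample
        if (phase >= 1.0)
            phase -= 1.0;     // wrap the normalised phase

        return s;
    }

private:
    double sampleRate = 44100.0, phase = 0.0, increment = 0.0;
};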

Wow, I never would have guessed a big audio application wouldn’t use MIDI at its core. Good to know it isn’t, like, entirely unfeasible to do without it. Thanks for the advice - I’ll make sure to keep MIDI out of the core operations.

Interesting stuff! I haven’t really dug into JUCE’s MIDI in depth yet but from the level of usefulness I see elsewhere I imagined there would be all kinds of helper stuff in there for exploration and inspiration, and to abstract away some of the outmoded awfulness.

Still, of course, modelling your internal application using concepts that are native to the problem domain is an excellent way to avoid coupling and friction.

I meant to refer to interface interactions with MIDI on some level: the majority of virtual instruments, for example, expect some kind of MIDI input. Of course, you may be generating your sounds using some kind of custom synthesis, as indeed it seems you are, in which case, fully-fledged bespoke solutions are the order of the day.

Sorry, I’d probably phrased some of this backwards the first time since I’m so often dealing with this specific MIDI translation layer as my essential problem domain… :smile:


And yes, from what you’ve just said, it sounds as if you’re on track for some great microtonal developments. Looking forward to future updates!


Any chance you or the ROLI team are members of the MIDI Manufacturers Association and know any details of that HD-MIDI spec that they’ve been talking about for the last decade?

Yes, we’re all familiar with HD MIDI and it’s a good format. Don’t expect to be using it any time soon though, it’ll be years if not decades before it gets enough momentum to be a viable replacement.

If you use doubles for everything, and define pitch in Hz, position as a sample number, and duration as another sample number, is that enough resolution? That seems simple enough for noteOns and noteOffs…

I wrote one a year ago just to see how hard it was.

I don’t know if it was a good idea or not, because it hasn’t made it into any production software. And it’s certainly not optimised (I’ve spotted a big linear hunt through the note data in there, and some things that should be passed by reference), but it worked well in testing. I stored the notes like this:

#include <vector>
#include <unordered_map>
#include <JuceHeader.h>  // for juce::SelectedItemSet

struct NoteData
{
    float vel;        // velocity, 0..1
    int pitch;        // note number / grid row
    double time;      // start time
    double duration;  // length, in the same units as time
};

class SequencerNotes
{
public:
    // True if the note overlaps both the visible time range and pitch range.
    static bool isNoteWithinRange (const NoteData& note,
                                   double viewLeftTime, double viewRightTime,
                                   int lowestVisible, int highestVisible)
    {
        return note.pitch >= lowestVisible && note.pitch <= highestVisible
                && ((viewLeftTime <= note.time && note.time < viewRightTime)  // starts inside the view
                     || (viewLeftTime > note.time
                          && note.time + note.duration > viewLeftTime));      // starts earlier but overlaps
    }

    // Returns the indices of all visible notes. Saved notes are encoded
    // as negative indices so both kinds can travel in one vector.
    std::vector<int> getNotesWithinRange (double viewLeftTime, double viewRightTime,
                                          int lowestVisible, int highestVisible,
                                          bool alsoShowSavedNotes)
    {
        std::vector<int> visibleNotes;

        for (int i = 0; i < (int) notes.size(); ++i)
            if (isNoteWithinRange (notes[(size_t) i], viewLeftTime, viewRightTime,
                                   lowestVisible, highestVisible))
                visibleNotes.push_back (i);

        if (alsoShowSavedNotes)
            for (const auto& s : savedNoteData)
                if (isNoteWithinRange (s.second, viewLeftTime, viewRightTime,
                                       lowestVisible, highestVisible))
                    visibleNotes.push_back ((1 + s.first) * -1);  // key k -> index -(k + 1)

        return visibleNotes;
    }

    // Accepts the indices produced above: >= 0 is a live note, < 0 a saved one.
    NoteData& get (int index)
    {
        if (index >= 0)
            return notes[(size_t) index];

        return savedNoteData[-1 * index - 1];  // undo the -(k + 1) encoding
    }

    NoteData& getSavedNote (int index)  { return savedNoteData[index]; }

    void insert (int pitch, double location)
    {
        NoteData d;
        d.pitch = pitch;
        d.time = location;
        d.vel = 1.0f;
        d.duration = 0.25;
        notes.push_back (d);
    }

    // Snapshot the selected notes so they can be restored after a drag-copy.
    void saveNoteData (const juce::SelectedItemSet<int>& notesToSave)
    {
        for (auto s : notesToSave)
            savedNoteData[s] = notes[(size_t) s];
    }

    void clearSavedData()          { savedNoteData.clear(); }
    bool isSavedDataEmpty() const  { return savedNoteData.empty(); }

    void reinsertSavedNoteData()
    {
        for (const auto& s : savedNoteData)
            notes.push_back (s.second);
    }

private:
    std::vector<NoteData> notes;
    std::unordered_map<int, NoteData> savedNoteData;
};

Saved notes were used for copy-while-dragging operations, so we could add them back at their original positions if the user pressed ALT when dragging.

Notes were displayed as components when required, though set not to intercept mouse clicks. So all the selection handling and mouse click handling was done by the parent class - the note Components just display a rectangle and not a lot else.

This parent class was contained within a viewport for scrolling. Only create the note components that are actually on-screen.
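The refresh looked roughly like this (a sketch, not the exact code - noteComponents was an OwnedArray<NoteComponent>, model is the SequencerNotes object above, and boundsForNote stands in for the time/pitch → pixel mapping):

// Rebuild the on-screen note components whenever the view scrolls or zooms.
void LinearSequencerView::updateVisibleNotes()
{
    noteComponents.clear();  // the OwnedArray deletes the old children

    for (int i : model.getNotesWithinRange (viewLeftTime, viewRightTime,
                                            lowestVisible, highestVisible, false))
    {
        auto* nc = noteComponents.add (new NoteComponent (model.get (i)));
        nc->setInterceptsMouseClicks (false, false);  // the parent handles all mouse events
        addAndMakeVisible (nc);
        nc->setBounds (boundsForNote (model.get (i)));
    }
}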

The whole thing wasn’t that complex at all really considering … but don’t forget to use juce::SelectedItemSet<> … I didn’t realise it existed and wasted a lot of time reimplementing it :slight_smile:


Yup, that is more or less how my file format works. The pitch resolution is so high that it is effectively continuous to human ears. I don’t use the sample rate to store the starts of notes, though. I use a more relative method for now, so I can’t say for sure how that would sound. But I imagine it would be fine…

Wow, thanks for posting all that! And especially thanks for the tip about SelectedItemSet. I’ve only done the main JUCE tutorials so my API knowledge is still super limited. Looks like I had better go read through basically all of it…

If you don’t mind my asking, I have one question: you say the mouse click handling was done by the parent class and that notes weren’t always represented as components… but I don’t see how one would be able to tell which note the user was clicking on that way. (Unless you did something like calculating where the notes should visually be using x and y coordinates, and then accessing the actual item in the vector based on the mouse’s coordinates when clicked - but this seems like a very indirect solution to a task that I imagine could be tackled more directly.)

// The visible notes are already Components with real bounds,
// so hit-testing is just a search through them:
NoteComponent* LinearSequencerView::getNoteAt (Point<int> p)
{
    for (auto* n : noteComponents)
        if (n->getBounds().contains (p))
            return n;

    return nullptr;
}

Hmm… why would a max channel count of 16 be such a plague? Doesn’t a typical DAW MIDI project consist of a bunch of tracks, each with just one instrument using one MIDI channel to feed either a virtual synth (plugin) or some external h/w?

In a sequencer I have been using, the whole internal engine works in float beats, and velocity is a float as well, although pitch is still defined in integers for the synths.

I think this works great because you have the internal engine at high resolution. You create an interface into the sequencer, and then OSC or MIDI becomes trivial to implement, since it’s just a matter of implementing the sequencer’s interface.

Likewise, all the controls are float, and the OSC/MIDI mediator just translates those values before they hit the sound engine.

It seems you can’t go wrong with a decoupled internal engine like this, because the external interface never gets coupled into your internal code.
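A minimal sketch of what I mean (all names made up):

// The sequencer's internal interface: high-resolution floats,
// nothing MIDI-specific.
struct SequencerListener
{
    virtual ~SequencerListener() = default;
    virtual void noteOn  (int pitch, float velocity, float beat) = 0;
    virtual void noteOff (int pitch, float beat) = 0;
    virtual void control (int id, float value) = 0;  // value is 0..1
};

// A MIDI mediator translates only at the boundary, so the engine
// never sees 7-bit values or channel numbers.
struct MidiMediator : public SequencerListener
{
    void noteOn (int pitch, float velocity, float beat) override
    {
        const int vel = (int) (velocity * 127.0f + 0.5f);  // scale to 7 bits here only
        // ...schedule a MIDI note-on for (pitch, vel) at 'beat'...
    }

    void noteOff (int pitch, float beat) override
    {
        // ...schedule the matching note-off...
    }

    void control (int id, float value) override
    {
        const int cc = (int) (value * 127.0f + 0.5f);
        // ...send controller 'id' with 7-bit value 'cc'...
    }
};

An OSC mediator would implement the same interface, mostly with no scaling at all.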
