Using Lua as MIDI scripting language


I am playing around with the sampler class and trying to implement a scripting engine that processes MIDI events (basically like KONTAKT does with KSP). I checked out both AngelScript and Lua and hacked something together, but stumbled upon performance issues: whenever I call the Lua function that handles the processing of the MIDI message from within the audio callback, it starts glitching as soon as I do more than "x = 2 + 4" :) I know the rule is to do nothing stupid in the audio callback (and I think the Lua GC falls under that category), but the only other solution I came up with was to use a second MidiBuffer and process all messages in a separate thread. That at least doubles the latency, though, since the processed MidiMessages only become available in the next processBlock() callback. That can't be the solution.

Has anyone tried messing with the Lua internals (e.g. replacing the allocator with the more realtime-suitable TLSF, or using LuaJIT), or is this unnecessary for the rather simple task of processing some MIDI messages (e.g. storing the last note and changing the velocity based on the tempo of the recent notes)?
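For what it's worth, Lua does let you supply your own allocator: `lua_newstate()` takes a function with the `lua_Alloc` signature, which is where a pool allocator such as TLSF would be plugged in. Below is a minimal sketch of such a function; plain `realloc`/`free` stand in for TLSF here, and the typedef is written out locally so the snippet compiles without the Lua headers:

```cpp
#include <cstddef>
#include <cstdlib>

// Signature matching Lua's lua_Alloc (see lua.h). With the real Lua headers
// you would create the state via lua_newstate(l_alloc, nullptr). Here,
// std::realloc/std::free stand in for a realtime-friendly pool like TLSF.
static void* l_alloc (void* /*ud*/, void* ptr, std::size_t /*osize*/, std::size_t nsize)
{
    if (nsize == 0)                 // Lua asks us to free the block
    {
        std::free (ptr);
        return nullptr;
    }
    // Lua asks us to allocate or resize; returning nullptr signals failure.
    return std::realloc (ptr, nsize);
}
```

The `ud` and `osize` parameters are unused here, but a real pool allocator would typically carry its pool handle in `ud`.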

What is the standard approach to this problem? (Max/MSP even allows JavaScript for message processing.) It would be really nice to have a scripting language to define the behaviour of virtual instruments.

I use Lua in my program to do MIDI processing and GUI stuff (some parts of JUCE are bound to Lua).

I use luabind (it depends on Boost). I don't allow Lua to be part of the audio thread or the MIDI input thread; all the Lua stuff happens in the message thread, so it's not realtime, but the program was never intended as a realtime processor for MIDI events.

But I do it that way and it works very well; even in this model Lua is very fast. I'm considering LuaJIT, but I saw some reports that exceptions might not work with it. I use exceptions to catch runtime errors, which helps when debugging scripts within the program, so losing exceptions would be a big deal; for now I have not made that step.

Thanks for the answer. But how is the MIDI stuff processed in the message thread? In a plugin you get all the MIDI events in the incoming MIDI buffer, which has to be processed immediately, so passing the MIDI data to the message thread introduces an unpredictable delay, doesn't it?

After taking a look at what you are doing (if I got it right), I can imagine that CC messages can live with a few milliseconds of delay, but for note-ons that seems wasteful given the effort that goes into keeping the overall latency low.

Calling a script interpreter introduces a huge overhead into your system, so the results are expected to become unpredictable. You can, however, play with the MIDI classes in JUCE to process your MIDI input before the process() call, but it's gonna be quite a bit of a jump to achieve this. In addition, it will change the way you need to handle your audio processing calls, since you cannot use some of the ready-made classes in JUCE anymore which would integrate the MIDI message collection for you. It's doable, but as I said, it requires a bit of a jump.

However, please do keep in mind that there are always many ways to do things.

That idea crossed my mind too, but how exactly is it possible? I always thought that the host calls the processBlock() callback as soon as it has all the data and the spare time to handle my plugin, so I have no idea where to look to get this data sooner. And even if I dig deeper into the JUCE classes, I see no way to get the MIDI messages BEFORE the processBlock() callback AND handle them in a lower-priority thread BEFORE processBlock() is called (since that is what we need to take the load off the audio callback, isn't it?).

Could you give me a starting point as to which classes come in handy for this problem, since I am clearly way out of my skillset right here :)

I’ll see if I find the time, but that might take a few hours since I have some development to do on my own.

Something I can answer right away, though…
The AudioDevice callback (which, in essence, is calling processBlock()) is implementation-wise independent of the MIDI input; some classes like AudioProcessorPlayer act as a convenience wrapper around that and will automagically collect the incoming MIDI messages into a MidiBuffer, which they pass along to the AudioProcessor instance's processBlock() calls.

You can, however, separate that out into its own thread, process incoming MIDI messages there, and pass them along to an AudioIODeviceCallback implementation which has its own MidiBuffer. This will introduce an average message delay of one processBlock() call for MIDI messages, and at best no delay at all; in exchange, you take the large overhead of calling the Lua interpreter away from the audio thread. Ideally, the MIDI processing thread should have a priority lower than the audio thread but higher than the rest of the threads in the process, so you can be fairly sure it gets scheduled unless you hog your CPU up to 100%, which would cause problems anyway.

I’ll dig for the classes and interfaces involved, but, as I said previously, might take a few.

The interfaces (i.e. abstract classes) you need to derive your solution from are AudioIODeviceCallback and MidiInputCallback. As you can see there, AudioIODeviceCallback::audioDeviceIOCallback() and MidiInputCallback::handleIncomingMidiMessage() are the methods to look for and implement.

MidiInputCallback::handleIncomingMidiMessage() is called for every (!) single incoming MIDI message, hence many, many times for a busy MIDI sender; thus, you should not process the MIDI directly there, but defer processing to a dedicated thread. AudioIODeviceCallback::audioDeviceIOCallback() is registered with the AudioIODevice and gets called whenever a buffer switch happens in the ASIO driver (or any other backend driver, for that matter). This is the topmost level of execution for your audio processing thread (and any attempt at introducing multi-threaded audio processing needs to start here).

As you can see from the documentation, these interfaces are independent of each other. If you implement the MIDI callback by deferring message processing to another thread, as suggested, keep in mind that this constitutes a producer-consumer pattern on both the MIDI callback and the audio callback side, so you will need synchronized buffers in between to isolate the two threads sufficiently. An empty MIDI message buffer, however, must never cause the audio processing thread to block!
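One common way to build such a synchronized buffer is a lock-free single-producer/single-consumer ring, so the audio callback never blocks on an empty queue. A minimal library-free sketch (the `MidiEvent` struct and `MidiFifo` name are made up for illustration; in JUCE you might instead look at AbstractFifo):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

// Hypothetical lock-free SPSC queue for the MIDI-thread -> audio-thread
// hand-off. push() is called only from the MIDI processing thread, pop()
// only from the audio callback; pop() never blocks, it simply returns
// false when no message is pending.
struct MidiEvent { std::uint8_t status, data1, data2; };

template <std::size_t Capacity>
class MidiFifo
{
public:
    bool push (const MidiEvent& e)
    {
        const auto w    = writePos.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;
        if (next == readPos.load (std::memory_order_acquire))
            return false;                         // queue full: drop or retry
        buffer[w] = e;
        writePos.store (next, std::memory_order_release);
        return true;
    }

    bool pop (MidiEvent& out)
    {
        const auto r = readPos.load (std::memory_order_relaxed);
        if (r == writePos.load (std::memory_order_acquire))
            return false;                         // empty: audio thread moves on
        out = buffer[r];
        readPos.store ((r + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    std::array<MidiEvent, Capacity> buffer {};
    std::atomic<std::size_t> writePos { 0 }, readPos { 0 };
};
```

The acquire/release pairing is what makes the single-producer/single-consumer case safe without locks; with more than one thread on either end you would need a different design.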

If there are any further questions about this and how to handle incoming MIDI and audio, you can have a peek at the AudioProcessorPlayer class and its implementation (although that one does the opposite of what we're talking about).

And, may I add, I am really sorry for the long, uninterrupted text - "Filtered HTML" does not work properly for me, so plain text it is, which means the forum software will create this bloaty block of text no matter how many line breaks I insert!

Whoah, thank you for this incredible amount of information (it will keep me busy for a while). I did some stuff with MidiInputCallback::handleIncomingMidiMessage() in another project, but I had discarded the class as uninteresting for plugin development, where you have the nice AudioProcessor class which handles all the communication with the outside world, aka the host.

As I said, I already managed an (unstable) multithreading concept, but it was based on whole MidiBuffers within the AudioProcessor, so going a step deeper might yield something usable.

You’re welcome.

OK, I looked into the classes and it should be quite straightforward to do this for a standalone app; however, I don't have a clue how to bypass the AudioProcessor architecture in a plugin context. Both the PluginDemo and the standard Introjucer plugin template derive from AudioProcessor and use its processBlock() thingy.

If I derive from AudioIODeviceCallback, I have to specify an audio device, but that is on the side of the host, isn't it?

In terms of passing MIDI data between threads there will always be some delay, but like I said, my program does not process MIDI, it only generates it (you have the option of getting the MIDI input and doing stuff to it, but it's not something I'm concerned with in this program).

You could create a Lua context inside the AudioProcessor thread and process the MIDI messages there; with the JUCE MIDI classes bound in that context you could do a lot of processing on those messages (or even without them, since MIDI is very simple, you'd just need some additional code to do bit-wise operations).
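To illustrate how little is needed for those bit-wise operations: a MIDI status byte packs the message type in its high nibble and the channel in its low nibble, so a few masks cover the common cases. A small sketch (helper names are made up here):

```cpp
#include <cstdint>

// MIDI packs the message type in the high nibble of the status byte and
// the channel in the low nibble, so plain bit-wise masks are enough.
inline std::uint8_t midiType    (std::uint8_t status) { return status & 0xF0; }
inline std::uint8_t midiChannel (std::uint8_t status) { return status & 0x0F; } // 0-based

inline bool isNoteOn (std::uint8_t status, std::uint8_t velocity)
{
    // A note-on with velocity 0 is conventionally treated as a note-off.
    return (status & 0xF0) == 0x90 && velocity > 0;
}
```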

I pass my MIDI data to the message thread and do all the Lua stuff there; since Lua also does the GUI in my program, it's simpler that way and I never need to lock the Lua context. I haven't had any reports of significant delays, but like I said, any time-sensitive MIDI data is generated inside the program and not passed in from the host.

I also do MIDI device handling in the plugin, so you can do MIDI-thru stuff: host->device, device->host, etc. This is not always possible but helpful (Windows won't allow you to open a MIDI device in more than one application, so if your host is using your MIDI device, the plugin won't be able to do the same).

You can't; the entire VST architecture is based on the fact that MIDI comes in and goes out in that one call (process()), so there is no way to bypass it. The fastest way to deal with MIDI in a plugin is to quickly modify the data inside the incoming MidiBuffer and return.
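As a library-free illustration of that "modify in place and return" approach, here is a sketch that scales down the velocity of every note-on in a block of raw 3-byte MIDI events; in JUCE the equivalent would be iterating the incoming MidiBuffer inside processBlock() and swapping in the rewritten messages (the `RawMidi` struct and function name here are invented for the example):

```cpp
#include <cstdint>
#include <vector>

// Raw 3-byte channel-voice message; stand-in for a JUCE MidiMessage.
struct RawMidi { std::uint8_t status, data1, data2; };

// Rewrite the block in place: halve the velocity of every note-on,
// leave all other messages untouched. Cheap enough for the audio callback.
inline void halveNoteOnVelocities (std::vector<RawMidi>& block)
{
    for (auto& m : block)
        if ((m.status & 0xF0) == 0x90 && m.data2 > 0)   // note-on, velocity > 0
            m.data2 = static_cast<std::uint8_t> (m.data2 / 2);
}
```

The point is that the transformation itself is trivial and allocation-free; the cost the thread is worried about comes from invoking a script interpreter for each message, not from touching the bytes.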

That's what I thought. Well, then I'll have to try to speed up Lua to make it more predictable. Or has anybody tried Squirrel? It's far less popular, but it seems to have a more realtime-friendly GC based on reference counting. The API is a bit quirky, so embedding is not as straightforward as with Lua, but maybe it's worth the effort; plus its syntax is more like C++, which is a huge advantage for me.

If you want C++-ish syntax in your scripting language, the closest thing you can get is AngelScript. Plus, there's a JUCE module for it around :)