(JUCE+FAUST) Synth plugin: gets MIDI from `MidiKeyboardComponent`, but not from host (piano roll etc.)


#1

I have a simple synth plugin that seems to work as expected when playing its built-in keyboard (a MidiKeyboardComponent); however, it ignores MIDI from the host (e.g. the DAW piano roll).

As mentioned in the title, I am working with FAUST and JUCE (I got interested in this after looking at @ncthom’s open source fx plugin Temper).

Basically, my editor inherits from MidiInputCallback and MidiKeyboardStateListener:

class SawtoothSynthAudioProcessorEditor  : 
  public AudioProcessorEditor, 
  private MidiInputCallback, 
  private MidiKeyboardStateListener
{

and then I implement the required methods:

  void handleNoteOn (MidiKeyboardState*, int midiChannel, int midiNoteNumber, float velocity) override;
  void handleNoteOff (MidiKeyboardState*, int midiChannel, int midiNoteNumber, float /*velocity*/) override;
  void handleIncomingMidiMessage (MidiInput* source, const MidiMessage& message) override;

The methods’ implementations are quite simple - they just call the FAUST-generated code:

void SawtoothSynthAudioProcessor::keyOn(int pitch, int velocity)
{
  dspFaust.keyOn(pitch,velocity);
}
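
For completeness, the editor-side handlers just forward to that method - something along these lines (note that MidiKeyboardStateListener reports velocity as a 0..1 float, while DspFaust’s keyOn expects a 0-127 MIDI value):

void SawtoothSynthAudioProcessorEditor::handleNoteOn (MidiKeyboardState*, int /*midiChannel*/,
                                                      int midiNoteNumber, float velocity)
{
    // Rescale velocity and forward to the processor’s FAUST wrapper.
    processor.keyOn (midiNoteNumber, (int) (velocity * 127.0f));
}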

(for those of you who are interested, I’ve been following this FAUST guide)

Now, there was a big smile on my face when I saw this working; however, I can only play on the on-screen keyboard, as MIDI from the host seems to be ignored.

I did a JUCE-only version before starting to look into FAUST, and that one worked properly. However, it used all of the Synthesiser/SynthesiserVoice/SynthesiserSound infrastructure, which I am no longer using here (as I suppose all of that work is now done by the code exported by FAUST).

Thanks for reading and if anybody has any advice related to the issue above it would be very much appreciated!


#2

Awesome :smiley: Glad to see that project is still helpful!

What you have set up here only responds to MIDI callbacks from the on-screen piano. If you want to receive MIDI events from the host you’ll want to look in the processBlock method of your main AudioProcessor: there you receive both a block of input/output samples and a buffer of midiMessages.

To handle MIDI events from the host you’ll want to iterate through those midi messages each block and call dspFaust.keyOn at the right time. (You can check the sample offset within the block for each MidiMessage in the buffer, so you’d do something like: process a few samples with Faust, call dspFaust.keyOn when you reach the sample offset of the next midi event, then process more samples until the following event, and so on.)
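
Something along these lines (an untested sketch - renderFaust() is a hypothetical helper that runs your Faust DSP over a sub-range of the block, and the range-for over MidiBuffer is the newer JUCE iteration API; older versions use MidiBuffer::Iterator):

void SawtoothSynthAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{
    int currentSample = 0;

    for (const auto metadata : midiMessages)   // events arrive sorted by sample position
    {
        const auto message     = metadata.getMessage();
        const int  eventSample = metadata.samplePosition;

        // Render audio up to this event’s offset within the block...
        renderFaust (buffer, currentSample, eventSample - currentSample);
        currentSample = eventSample;

        // ...then apply the event so it takes effect at the right sample.
        if (message.isNoteOn())
            dspFaust.keyOn (message.getNoteNumber(), message.getVelocity());
        else if (message.isNoteOff())
            dspFaust.keyOff (message.getNoteNumber());
    }

    // Render whatever remains after the last event.
    renderFaust (buffer, currentSample, buffer.getNumSamples() - currentSample);
}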

If your Faust patch handles voicing and polyphony and all that then you can probably get away without the Synthesiser class stuff.


#3

Hey, thanks for your reply! I found looking at the Temper code quite useful, thanks for sharing that (as well as your blog posts, etc.)! :slight_smile:
Are you planning to keep using FAUST for your next projects?

Ah right, that makes sense.
But in my previous “JUCE only” plugin I didn’t have to do this - is that because the Synthesiser class (and friends) was doing it behind the scenes?

Not sure I understand the details here (sample offset etc.). Is there any example/tutorial to see this in action?

Thanks again!


#4

You’re welcome! Really glad to hear that. I’m currently not using Faust for my projects, but I would still recommend it. I’m also greatly looking forward to playing around with SOUL after the ADC announcement.

Yeah, you’re right - the Synthesiser class does this for you. So if you want this to be handled automatically, you’ll want to use the Synthesiser stuff and call your dspFaust.keyOn in the noteOn handlers there. For reference, though, this is how the Synthesiser does it:
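
(condensed from Synthesiser::processNextBlock in recent JUCE sources - renderVoices() and handleMidiEvent() are Synthesiser’s own members, and I’ve trimmed a few details:)

auto midiIterator = midiData.findNextSamplePosition (startSample);

for (; numSamples > 0; ++midiIterator)
{
    if (midiIterator == midiData.cend())
    {
        renderVoices (outputAudio, startSample, numSamples);   // no more events: render the rest
        return;
    }

    const auto metadata = *midiIterator;
    const int samplesToNextMidiMessage = metadata.samplePosition - startSample;

    if (samplesToNextMidiMessage >= numSamples)
    {
        renderVoices (outputAudio, startSample, numSamples);   // event lands beyond this block
        handleMidiEvent (metadata.getMessage());
        break;
    }

    renderVoices (outputAudio, startSample, samplesToNextMidiMessage);  // render up to the event
    handleMidiEvent (metadata.getMessage());                            // then apply it

    startSample += samplesToNextMidiMessage;
    numSamples  -= samplesToNextMidiMessage;
}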

It’s the same approach: process forward a few samples until the next midi event, invoke the midi event handler, process forward until the following event, and so on. If you don’t want to use the Synthesiser classes, you’ll want to write something very similar to this loop.


#5

Thanks, I finally got it working!

Eventually I decided to reinstate the Synthesiser stuff that I had previously removed (and integrate it with the FAUST code), for these reasons:

  • I didn’t want to reinvent the wheel by replicating behavior that’s already available in Synthesiser (although it’s instructive to take a look at what it does!).
  • I figured that the point of using FAUST is to generate the actual DSP, but when it comes to voice handling and MIDI I’d probably rather keep that in JUCE (can’t explain exactly why, but it felt right :slight_smile:)

So I took these steps:

  • Re-export the C++ code from FAUST, this time without the “polyphonic” options. If I understand correctly, this generates a “DSP only” version, skipping all the voice-handling stuff.
  • Reinstate Synthesiser, SynthesiserVoice etc. in the JUCE plugin
  • Change things so that each SynthesiserVoice owns a FAUST dsp object and triggers it in the startNote method (roughly as in the sketch below).
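
Here’s roughly what my voice ended up looking like (a simplified, untested sketch - it assumes the FAUST export provides the usual mydsp class and MapUI, and the parameter paths depend on how your .dsp labels its freq/gain/gate controls):

class FaustVoice : public SynthesiserVoice
{
public:
    void setCurrentPlaybackSampleRate (double newRate) override
    {
        SynthesiserVoice::setCurrentPlaybackSampleRate (newRate);
        faustDsp.init ((int) newRate);
        faustDsp.buildUserInterface (&ui);   // makes the params addressable by path
    }

    bool canPlaySound (SynthesiserSound*) override  { return true; }

    void startNote (int midiNoteNumber, float velocity, SynthesiserSound*, int) override
    {
        ui.setParamValue ("/synth/freq", (float) MidiMessage::getMidiNoteInHertz (midiNoteNumber));
        ui.setParamValue ("/synth/gain", velocity);
        ui.setParamValue ("/synth/gate", 1.0f);   // trigger the Faust envelope
    }

    void stopNote (float, bool allowTailOff) override
    {
        ui.setParamValue ("/synth/gate", 0.0f);
        if (! allowTailOff)
            clearCurrentNote();   // a real voice would also clear itself once the tail dies out
    }

    void pitchWheelMoved (int) override {}
    void controllerMoved (int, int) override {}

    void renderNextBlock (AudioBuffer<float>& output, int startSample, int numSamples) override
    {
        // Run the Faust DSP into a mono scratch buffer, then mix into the output.
        float* outs[] = { scratch.getWritePointer (0) };
        faustDsp.compute (numSamples, nullptr, outs);

        for (int ch = 0; ch < output.getNumChannels(); ++ch)
            output.addFrom (ch, startSample, scratch, 0, 0, numSamples);
    }

private:
    mydsp faustDsp;                            // the Faust-generated DSP class
    MapUI ui;                                  // path -> parameter-zone map
    AudioBuffer<float> scratch { 1, 4096 };    // simplistic: size it in prepareToPlay in real code
};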

Now this solves my initial problem.

It is excruciatingly slow though! :confused:
There is noticeable latency when pressing a key on the keyboard, and even when playing notes from the piano roll there is a clearly audible delay. The synth is just 2 oscillators, simple amp and pitch envelopes, and a filter - nothing too big.
But I suppose that is a FAUST problem, so I will investigate that separately.

Thanks for the tips, have a good weekend!


#6

Excellent! I think that’s smart reasoning. Sounds like you took the right steps too. As for performance, I think all you can do is get the profiler out and see where things are slow, and start there. Good luck! :slight_smile:


#7

Concerning polyphony, we experimented with two approaches in the faust2juce “automatic” script: https://github.com/grame-cncm/faust/tree/master-dev/architecture/juce

  • using the Faust C++ polyphonic architecture (which takes a single Faust-compiled voice, duplicates it, does the MIDI handling, etc.)

  • using the JUCE polyphonic code

This can be seen in the file: https://github.com/grame-cncm/faust/blob/master-dev/architecture/juce/juce-plugin.cpp

I understand that you started from the example “Adding Faust DSP Support to Your JUCE Plug-ins” (which itself uses the faust2api tool: https://github.com/grame-cncm/faust/tree/master-dev/architecture/api), but the example in faust2juce may help in your case.


#8

Hey @sletz thanks for your reply!

That’s correct, I started from the “Adding Faust DSP Support to Your JUCE Plug-ins” that uses faust2api rather than faust2juce.

The reason for this choice is that the tutorial mentions:

Faust can be used to generate ready-to-use JUCE applications and plug-ins implementing the standard user interface (UI) described in the Faust code using faust2juce. However, it is sooo easy to make professional looking UIs from scratch in JUCE that you might want to use Faust to implement the DSP portion of your plug-in and build your own UI.

so I thought that faust2api would be a better choice for this specific case (since I already have a GUI that I’d like to reuse).

But thanks for the hint - I’m going to give faust2juce a go and see how it works (I’ve never tried it before). As I mentioned in a previous message, there seems to be some delay between triggering a MIDI message (keyboard, piano roll etc.) and hearing the sound, and I’m curious to see whether it also happens with the code generated by faust2juce. Interestingly, it doesn’t look like my CPU is overloaded or anything like that (it sits around 1%) - it’s just a delay between sending a note and hearing the sound.

Btw, if I use faust2juce rather than faust2api, is it still possible (and practical) to plug in my own GUI? Or do I have to stick with the default one?


#9

I’m not saying you should use faust2juce :wink: - rather, look at the https://github.com/grame-cncm/faust/blob/master-dev/architecture/juce/juce-plugin.cpp file as an example of using a single Faust-generated voice in the JUCE polyphonic model (you’ll have to adapt it a bit, of course…)


#10

Right, got it - thanks for the suggestion! :slight_smile:

I looked at https://github.com/grame-cncm/faust/blob/master-dev/architecture/juce/juce-plugin.cpp and I was trying to experiment with something similar.

I see that you use a hybrid JUCE/Faust voice:

class FaustVoice : public SynthesiserVoice, public dsp_voice { ... }

But when I tried to inherit from dsp_voice in my code, I got an error because dsp_voice cannot be found. This is because it is defined in DspFaust.cpp but not in the header (DspFaust.h), which is the one I include from my voice code. I could try to manually move the declaration into the header, but I’m not sure that’s the right thing to do - since those files are auto-generated, I shouldn’t touch them, right? (I’m relatively new to C++, so I’m not sure what the best approach is here.)

Also, out of curiosity, I tried compiling my Faust definition with faust2juce (well, actually using the online editor with the JUCE -> jsynth-midi-poly16 exporter); I then opened the solution in Visual Studio 2017 and successfully built both the VST and the Standalone. The standalone works well, but apparently the VST makes no sound :frowning:. I’m a bit lost here - do you have any thoughts? (For the record, here is the Faust definition I’m playing with: https://github.com/dfilaretti/patsynth/blob/master/Source/PatSynth.dsp)

Thanks!


#11

“Btw, if I use faust2juce rather than faust2api, is it still possible (and practical) to plug in my own GUI? Or do I have to stick with the default one?”

You can also try that, by hacking the FaustPlugInAudioProcessorEditor::FaustPlugInAudioProcessorEditor constructor: remove the use of the fJuceGUI object (which builds the “automatic” layout) and substitute your own. Then you are on your own - or, more precisely, in JUCE land…
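
Something like this, for example (a rough sketch - everything except fJuceGUI is a guess based on a typical JUCE editor, and how you wire your widgets to the Faust parameters is then up to you):

FaustPlugInAudioProcessorEditor::FaustPlugInAudioProcessorEditor (FaustPlugInAudioProcessor& p)
    : AudioProcessorEditor (&p), processor (p)
{
    // Was: addAndMakeVisible (fJuceGUI);   // builds the automatic Faust layout

    addAndMakeVisible (cutoffSlider);       // hypothetical custom component
    cutoffSlider.onValueChange = [this]
    {
        // Forward to the corresponding Faust parameter, e.g. via a setParamValue-style call.
        processor.setParamValue ("/synth/cutoff", (float) cutoffSlider.getValue());
    };

    setSize (400, 300);
}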


#12

Well, unfortunately it appears that the plugin generated with faust2juce is not making any sound. Initially I thought that maybe the online interface was based on an older version of Faust, so today I built from source on Linux, invoked faust2juce on my dsp file, and then moved the generated folder to Windows. Unfortunately, same problem. (The command I used was faust2juce -nvoices 8 -midi -jsynth myfile.dsp.)

Also, I noticed something: in the juce-plugin.cpp file there are conditional sections for the “JUCE voice model” that are compiled in only if JUCE_POLY is defined, such as

    #ifdef JUCE_POLY
        ScopedPointer<FaustSynthesiser> fSynth;
    #else

Now, my understanding is that the -jsynth option of faust2juce should make sure that JUCE_POLY is set, so that that particular code is included.
However, I looked at the generated code and it seems that JUCE_POLY is not defined (the code is greyed out in Visual Studio 2017). Is this an actual problem, or am I missing something?


#13

Yes, JUCE_POLY should be defined when the -jsynth option is used.

Have you tried faust2juce -nvoices 8 -midi myfile.dsp?


#14

Just tried faust2juce -nvoices 8 -midi myfile.dsp (i.e. removed the -jsynth) and it seems that JUCE_POLY is still not defined in the generated FaustPluginProcessor.cpp. It looks like it’s not defined regardless of the -jsynth option.


#15

JUCE_POLY is defined only when -jsynth is used - it works here. You can check it in the .jucer project, where you should see something like: extraDefs="JUCE_POLY MIDI"