Questions about structure of plug-in synths & VFLib

Hello all,

I’m trying to learn how to make VST/AU synthesisers with Juce. I’m getting along nicely with the GUI side of Juce, but have a few questions about the best way to structure a plug-in instrument, since the Juce audio plugin demo only shows an effect plugin.

  1. Obviously I’d like the parameters of my synth to be automatable. As far as I can see, the AudioProcessor class is the easiest way to get this functionality in Juce. So the obvious thing to do would be to attach an ‘editor’ to a subclass of AudioProcessor, then call the AudioProcessor’s processBlock method from my SynthesiserVoice object. But I want my synth to be polyphonic, so it will need an individual processor for each voice. So maybe it would be better to have the editor just modify variables in the AudioProcessor objects, which are then grabbed by each SynthesiserVoice’s renderNextBlock? Am I on the right track here?

  2. This also leads me onto the question of thread safety etc. If I have multiple voices reading those same variables, and the editor possibly modifying them at the same time, I’ll need to somehow ensure these variables are thread safe. I imagined that for simple variables, I could use std::atomic and I might get away with it. But if I wanted to start sending more data from the GUI to the audio threads, I realise I’m getting into the black art of FIFO queues and ring-buffers, which I freely admit I don’t really understand. What would it take to, say, safely pass a small struct (for example, representing a new modulation routing) to the audio threads? I have seen VFLib’s concurrency classes, and they look like a good framework for me to use, but I’m not really sure how to start setting up a system using them. Are there any examples of this floating around?
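For the simple-variable case, here’s roughly what I had in mind (just a sketch; the parameter names are made up):

#include <atomic>

// Hypothetical parameter block shared between the editor and the voices.
struct SynthParams {
  std::atomic<float> cutoff { 1000.0f };  // Hz, written by the GUI thread
  std::atomic<float> lfoRate { 1.0f };    // Hz, written by the GUI thread
};

// GUI thread, e.g. from a slider callback
void cutoffSliderMoved (SynthParams& params, float newValue) {
  params.cutoff.store (newValue, std::memory_order_relaxed);
}

// Audio thread, e.g. at the top of renderNextBlock
float currentCutoff (const SynthParams& params) {
  return params.cutoff.load (std::memory_order_relaxed);
}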

Thanks in advance for any pointers, this is tougher than I had anticipated! :oops:

You want to make your data structures “immutable”. This means that once a synth voice descriptor is constructed, it is never modified. How does the editor make changes? Well, it doesn’t. It creates a whole new object, starting from a copy of the current one, constructed with the new parameters.

When you have finished putting together this “replacement” descriptor, you use a vf::CallQueue to pass it to the audio thread and tell it to swap the voice descriptor in use with the new one.

These descriptors would need to be reference counted of course.

// Immutable descriptor holding one complete set of voice parameters.
struct VoiceInfo : ReferenceCountedObject {
  typedef ReferenceCountedObjectPtr <VoiceInfo> Ptr;
  //...
};

struct AudioThread : AudioDeviceIoCallback
{
  // called from GUI thread
  void setCurrentVoice (VoiceInfo::Ptr voice) {
    m_queue.call (&AudioThread::setCurrentVoiceAsync, this, voice);
  }

private:
  vf::ManualCallQueue m_queue;
  VoiceInfo::Ptr currentVoice;

  // called on audio device thread
  void setCurrentVoiceAsync (VoiceInfo::Ptr voice) {
    currentVoice = voice;
  }

  void handleAudio (AudioBufferInfo&) {
    m_queue.synchronize ();
    // process audio using currentVoice
  }
};
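
On the GUI side it would look something like this (just a sketch; ‘cutoff’ stands in for whatever parameters VoiceInfo actually holds):

// GUI thread: build a complete replacement descriptor, then hand it off.
void cutoffChanged (AudioThread& audioThread, VoiceInfo::Ptr current, double newCutoff)
{
  VoiceInfo::Ptr replacement = new VoiceInfo (*current); // start from a copy of the current settings
  replacement->cutoff = newCutoff;                       // hypothetical member, for illustration
  audioThread.setCurrentVoice (replacement);             // queued; the swap happens in handleAudio
}

The audio thread never sees a half-modified descriptor: it keeps using the old one until the queued swap runs at the top of handleAudio.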

Actually, SimpleDJ (link in my signature) uses the same technique to select tracks into the deck. Here’s the file in question:

https://github.com/vinniefalco/AppletJUCE/blob/master/Source/core/Deck.cpp

The problem is that SimpleDJ crashes on Windows and doesn’t build on MacOS…something happened when I updated to the latest JUCE tip. It is being looked at and hopefully will be working again soon.

Great, thanks for the tips, although that seems a bit inefficient. What if my voice structure is quite complex, with 2 LFOs, 2 envelopes, a sequencer and a bunch of modulation routings? So if a user is playing the synth and tweaking parameters, then behind the scenes I’m continually creating and destroying new voice objects?

I’ll take a look at the SimpleDJ source. Even if it won’t compile it should give me some tips on how to set up the CallQueues. Thanks very much Vinn!

I think that changing synth parameters in real time and hearing the changes as you move the knobs is a non-trivial exercise, especially when it comes to the DSP filtering part, because it is very easy to get discontinuities in the output, which lead to clicks and pops in the audio.

Yet it’s done by most commercially available plugins. I think it’s a fairly important feature for a good sound-design experience. Would be kind of annoying if you couldn’t, for example, adjust an LFO speed or filter cutoff whilst hearing its effect. Considering that this is done in most plugins I’ve ever used, how do you speculate it might be done?

This ventures into aspects of digital filter design that I am not very proficient at. If you visit the KVR forum there are discussions of the creation of filters that are stable in response to parameter changes (like cutoff frequency):

KVR DSP and Plugin Development Forum

There are many techniques for making IIR filters stable under parameter changes, and most of them are secret. I implemented some of them in my library DSPFilters (see signature). One is to choose the right realization (for example, Transposed Direct Form II). There are other ways, like interpolating the poles/zeroes or interpolating the filter coefficients.
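
As a rough illustration of coefficient interpolation (a generic sketch, not code from DSPFilters): recompute the target coefficients when a parameter changes, then ramp each coefficient across the block instead of jumping to the new value.

// Generic sketch: ramp the coefficients of a one-pole filter across one block.
// y[n] = b * x[n] + a * y[n-1]; z1 holds y[n-1] between blocks.
void processRamped (float* samples, int numSamples,
                    float& a, float& b, float aTarget, float bTarget, float& z1)
{
  const float aStep = (aTarget - a) / numSamples;
  const float bStep = (bTarget - b) / numSamples;

  for (int i = 0; i < numSamples; ++i)
  {
    a += aStep;
    b += bStep;
    z1 = b * samples[i] + a * z1;
    samples[i] = z1;
  }
}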

The best techniques are those which incorporate stability directly into the design. Here’s a discussion on a variation of a Moog ladder filter:

Another popular filter design which is quite tolerant of changes in filter parameters is the “state variable filter”. Here’s a good example design with working code:
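
For a rough idea of what that looks like, here is the textbook Chamberlin form (a generic sketch, not the specific design referred to above):

#include <cmath>

// Textbook Chamberlin state variable filter, one sample per call.
// fc = cutoff in Hz, fs = sample rate, q is roughly 1/Q (smaller = more resonant).
// The tuning is only reliable up to about fs/6; oversample or clamp fc above that.
struct StateVariableFilter
{
  float low = 0.0f, band = 0.0f;

  float processLowpass (float input, float fc, float fs, float q)
  {
    const float f = 2.0f * std::sin (3.14159265f * fc / fs);

    low += f * band;
    const float high = input - low - q * band;
    band += f * high;

    return low; // 'high' and 'band' are the highpass and bandpass outputs
  }
};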

If you want to search for reading materials the keywords are “time varying iir filter parameters” or “filter parameter modulation.”

For the concurrency and queue implementation, instead of bulk replacement as I described originally (which is still useful for many classes of problems), you would modify only the filter parameters that changed. If your filter design remains stable under parameter changes, no additional action is needed; otherwise, you would have to do some kind of crossfading or smoothing. Here’s an example of the implementation details for changing a filter’s parameters:

struct Filter {
  void setCutoffFrequency (double freq); // not thread-safe: only ever called on the audio thread
  void process (AudioBuffer&);
private:
  double cutoffFrequency;
};

struct AudioThread : AudioDeviceIoCallback
{
  // called from GUI thread
  void setCutoffFrequency (double freq) {
    m_queue.call (&Filter::setCutoffFrequency, &filter, freq);
  }

private:
  vf::ManualCallQueue m_queue;
  Filter filter;

  // called on the audio device thread
  void handleAudio (AudioBuffer& buffer) {
    m_queue.synchronize (); // executes any queued parameter changes first
    filter.process (buffer);
  }
};
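
If the filter itself does not tolerate jumps, a common approach (independent of VFLib, just a generic sketch) is to keep a smoothed copy of the parameter on the audio thread and move it a little towards the target each sample, so the value the filter sees never steps abruptly:

// Generic one-pole parameter smoother: the audio thread calls getNext()
// once per sample (or per block) instead of using the raw target directly.
struct ParamSmoother
{
  void setTarget (double newTarget) { target = newTarget; }

  double getNext ()
  {
    current += coeff * (target - current); // exponential approach to the target
    return current;
  }

  double current = 0.0;
  double target  = 0.0;
  double coeff   = 0.001; // smaller = slower, smoother changes
};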

It seems this topic is even deeper and more difficult than I’d first assumed. Many thanks for your tips, I’d better get reading :?

OK Vinn, I’ve been checking out your DSPFilters project. I gave it a miss previously because I thought STK would be sufficient, but your filters look much more practical and ‘ready-to-use’. In addition, your parameter smoothing algorithm appears to do a really good job. So, I’ll study it this week and try to build a simple open-source VA synth with a couple of oscillators, a filter, a couple of envelopes and a routable LFO. I think this could be a useful learning tool for others like me.

Once I’ve put it up on Github, any help with concurrency would be most welcome. Cheers.

Hi, how did you solve this problem?
I thought passing parameters to the SynthesiserVoice would be as easy as connecting the GUI with the processor, but it looks like it’s not. Any advice?
