How do additive synthesis plugins run so efficiently? E.g. Ableton Operator, FL Studio Harmor, or Pigments' harmonics engine. Running 32+ std::sin calls per sample for each voice seems pretty expensive, and even with a sine lookup table it doesn't seem to make a big difference.
Obviously, you could generate a new lookup table for each set of partials, but I’m wondering if there’s a better solution? Pigments, for example, lets you modulate the frequency and amplitudes of the partials, and have partials which are not multiples of the base frequency.
Maybe this is a complex question, but I wondered if anyone has any suggestions or knows where would be a good place to start looking.
What I think happens is that almost everything happens in the spectral / frequency domain, and it only gets turned into the time domain at the very end of the signal chain. If you happen to have Reaktor and Razor, you could inspect how it works on the inside. ZynAddSubFX is also an open-source synth that you could study.
Ableton’s Operator is more of an FM synth, but you can indeed customize per operator how much of each harmonic it has. The good thing about Operator is that you don’t usually change that very much while playing, meaning it’s not meant to be automatable if I’m not mistaken. So they can just compute the waveform once on mouse-up and be done.
I’ve been working on an additive for the last 3+ years and my (hopefully not too annoying) answer is:
There are plenty of faster options than std::sin, including JUCE’s FastMathApproximations::sin and vendor library functions such as Intel IPP or Apple’s vDSP, so it can be very fast.
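To give a flavour of how cheap a sine can get: here is one classic parabolic approximation (this is not JUCE's exact formula, which uses a different polynomial; it's just a well-known sketch of the trade-off between accuracy and cost):

```cpp
#include <cassert>
#include <cmath>

// Cheap parabolic sine approximation, valid for x in [-pi, pi].
// Max absolute error is roughly 0.056 -- fine for LFOs or rough
// modulation, too coarse for clean audio partials without refinement.
inline float fastSin(float x)
{
    constexpr float pi = 3.14159265f;
    constexpr float B  = 4.0f / pi;        //  4/pi
    constexpr float C  = -4.0f / (pi * pi); // -4/pi^2
    return B * x + C * x * std::abs(x);    // no trig, two multiplies
}
```

It happens to be exact at 0, ±pi/2 and ±pi, and a further polynomial correction pass can shrink the error substantially if needed.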
The lookup table route is interesting; the lines between “wavetable synth” and “additive synth” blur. It can make a lot of sense to approach additive as “copy, paste, and interpolate”: why bother recalculating when it’s the same values every time?
iFFT is definitely a way that some commercial products accomplish it. An additive synth is just one big Fourier machine. Even if you don’t literally use an off-the-shelf iFFT, you could see the task as constructing one from scratch and making optimizations/compromises as needed.
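As a sketch of the “Fourier machine” view: given per-partial amplitudes, one wavetable cycle can be rendered from the spectrum and then played back. The naive O(N·M) inverse transform below is only for illustration (a real product would use an iFFT, which is O(N log N)); the function name `renderCycle` is made up:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build one wavetable cycle from harmonic amplitudes. Index h holds
// the amplitude of harmonic (h + 1). Time-domain samples only appear
// here, at the very end -- everything upstream can stay spectral.
std::vector<float> renderCycle(const std::vector<float>& harmonicAmps,
                               int tableSize)
{
    const float twoPi = 6.28318530718f;
    std::vector<float> table(tableSize, 0.0f);
    for (int n = 0; n < tableSize; ++n)
        for (std::size_t h = 0; h < harmonicAmps.size(); ++h)
            table[n] += harmonicAmps[h]
                      * std::sin(twoPi * float(h + 1) * float(n) / float(tableSize));
    return table;
}
```

Modulating the partials then just means re-rendering (or crossfading between) cycles, rather than running one oscillator per partial per sample.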
I’ve rewritten my engine several times — each time the driving force was discovering that certain features weren’t possible or I had new needs (for example, like you mention, I really wanted solid per-harmonic pitch modulation).
The hard part of additive (in my opinion) is the mental model and therefore the UI. Depending on the fundamental, a voice might have 10 partials or 500 partials. How the UI exposes this and lets the user edit the harmonics will also sort of dictate what your “engine” needs are, what the constraints are, and what you can “get away with.”
If you are interested in building an additive, I’d recommend starting with a “dumb” dsp solution and optimizing it as you need features and run into challenges (via benchmarking alternatives, doing more research, etc). My additive engine runs 10k oscillators (500 per voice across 20 voices) with per-partial pitch/volume modulation, but it may not have the exact feature set you’d want in yours. It was whittled down and optimized as I was determining the feature set, which is definitely the path I’d recommend!
I totally agree with sudara that the main problem with additive oscillators is the UI.
Implementing an additive oscillator is the easy part, especially if you have only integer multiples of the fundamental frequency. In this case, it takes only a few lines of code and no calls to trigonometric functions to construct all partials from the fundamental. This will dramatically cut the number of required arithmetic operations compared to the naive approach of rendering each partial individually.
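One common trick along these lines (my guess at the approach being described; the struct and names here are illustrative, not from any particular codebase) is to keep a single complex phasor rotating at the fundamental and derive every integer partial by repeated complex multiplication, so the only trig calls happen at note-on:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Sketch: partial k's phase is k times the fundamental's phase, so its
// phasor is z^k. Two trig calls per note-on; afterwards each partial
// costs one complex multiply-add per sample, no trig at all.
struct AdditiveVoice
{
    std::complex<double> z { 1.0, 0.0 };  // current fundamental phasor
    std::complex<double> w { 1.0, 0.0 };  // per-sample rotation
    std::vector<double> amps;             // amplitude of each partial

    void noteOn(double freqHz, double sampleRate, std::vector<double> partialAmps)
    {
        constexpr double pi = 3.141592653589793;
        const double inc = 2.0 * pi * freqHz / sampleRate;
        w = { std::cos(inc), std::sin(inc) }; // the only trig calls
        z = { 1.0, 0.0 };
        amps = std::move(partialAmps);
    }

    double nextSample()
    {
        z *= w;                       // advance the fundamental
        std::complex<double> p = z;   // phasor of partial 1
        double out = 0.0;
        for (double a : amps)
        {
            out += a * p.imag();      // imag(z^k) == sin(k * phase)
            p *= z;                   // step to the next partial
        }
        // A real implementation should renormalise z occasionally,
        // since rounding makes |z| drift from 1 over time.
        return out;
    }
};
```

Note the trade-off: numerical error accumulates across both the per-sample rotation and the per-partial powers, which is one reason the partials-near-Nyquist behaviour deserves testing.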
I have implemented an additive oscillator along these lines earlier this year and immediately got acceptable performance (should be around 10-12 voices @ 48 kHz with 1024 partials each, no use of SIMD, only plain C++).
As for the UI, I can only speak for myself, but I do not enjoy working with additive softsynths that let you control the individual magnitude, and perhaps even pitch, of each partial.
So I have decided to keep my additive oscillator “under the hood”, and use it as a high-quality wavetable player.
The advantages of doing so (compared to a regular wavetable oscillator) should be obvious:
- Sharp cutoff of partials at the Nyquist frequency, so there is no need to worry about aliasing and no need for oversampling
- Inherent band-limiting and no interpolation artifacts, so the oscillator’s output need not be filtered, and we get perfect phase reproduction
- Low RAM footprint, because no oversampled lookup tables, mipmaps, etc. are needed, only the Fourier coefficients of the wavetable
- It is easy to add features associated with additive oscillators, such as an inharmonicity control or more advanced spectral warp operations (just think of Vital).
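As an example of how cheap such a feature is in this representation: a classic inharmonicity model is the stiff string, where partial n lands at f_n = n · f0 · sqrt(1 + B·n²), with B the inharmonicity coefficient (B = 0 gives a perfect harmonic series). A hypothetical “inharmonicity” knob could simply map to B:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Partial frequencies of a stiff string: f_n = n * f0 * sqrt(1 + B*n^2).
// With B = 0 this collapses to the ordinary harmonic series; small
// positive B stretches the upper partials sharp, as on a piano.
std::vector<double> stretchedPartials(double f0, double B, int count)
{
    std::vector<double> freqs;
    freqs.reserve(static_cast<std::size_t>(count));
    for (int n = 1; n <= count; ++n)
        freqs.push_back(n * f0 * std::sqrt(1.0 + B * double(n) * double(n)));
    return freqs;
}
```

In an additive engine this is just a different frequency per partial; a regular wavetable oscillator has no comparable handle, since its partials are locked to integer multiples of the playback rate.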
As a user, I prefer working with meaningful controls such as tone, inharmonicity, or more complex spectral warp controls. I certainly do not want to care about each individual partial; for me, this kills creativity. I am convinced that end users enjoy controls that have a more global effect on the resulting sound. (Sound designers may have different preferences, though.)
With this system you can also take advantage of some form of physical modeling, since you can add variables such as stiffness or damping. You could also add some form of impulse that simulates a blow or a bowed string, interactions between harmonics, randomness… I believe this path is more interesting and fun than manually setting the parameters of each harmonic, which in my opinion leads nowhere unless it is driven by a deep analysis.