Tips/pointers for making a polyphonic resonator

Hello Jucers,

I got into JUCE yesterday after some umming and ahhing, went through some tutorials on the website and found the Audio Programmer vids on YT. I’ve made a couple of simple delays based around a circular buffer and set the delay length to the period of a target frequency, hoping to make a tuned resonator type effect. This isn’t a Lagrange interpolation question, don’t worry! The problem I have is that it doesn’t sound great, or anything close to a resonator.

There’s a tutorial on Karplus–Strong strings that looks quite adaptable, although the layout of the tutorial files is slightly confusing (due to everything being in one header file). Do the .h files in the PIP tutorials contain all the same information as …Editor.cpp, Editor.h, Processor.cpp and Processor.h?

If I followed along with the tutorials in the DSP section, would I be able to enter the code from the page into a blank project, or is the PIP project a necessity?

In the Karplus–Strong tutorial there’s a chain of effects, a signal chain of Osc > Distortion/Waveshaper > Cab sim > String/delay thing (don’t quote me on that). Given I’m only a day deep, I apologise for what may be obvious to others, but is it simple enough to just replace the Osc section with a getNextAudioBlock type object? Does the project need to be rewritten from the ground up to include audio in, or can I drop a modular “audio in / AudioBlock” piece in place of the Osc without much headache?
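What I’m picturing, if this were a normal plugin rather than the PIP’s AudioAppComponent, is something like the sketch below (a total guess on my part, assuming a processorChain member that no longer has the Osc at the front):

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    juce::dsp::AudioBlock<float> block (buffer);                // wrap the incoming audio
    juce::dsp::ProcessContextReplacing<float> context (block);  // process it in place
    processorChain.process (context);                           // waveshaper > cab > delay, no Osc
}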

And then to make it polyphonic, i.e. to make a chord resonator, could I just have x Karplus delay lines in parallel, where x is the number of voices in a chord? And by parallel I mean the delays are not feeding into each other, something like the sketch below.
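In my head it would look roughly like this, where DelayVoice is a made-up placeholder for one tuned delay line with feedback:

std::vector<DelayVoice> voices; // DelayVoice = hypothetical tuned delay line, one per chord note

float processSample (float input)
{
    float out = 0.0f;

    for (auto& v : voices)
        out += v.processSample (input); // voices run side by side, never into each other

    return out / (float) voices.size(); // crude scaling to keep the sum from clipping
}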

The overall end goal is to make a microtonal chord resonator (I mainly use ratios/just intonation). I decided to start exploring how to develop plugins as there aren’t really any FX plugins that cater for the microtonal composer. I have a clear idea of what I want to make, so if anyone has any advice on the best way to go about it, I’d really appreciate hearing their thoughts.
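At least the tuning maths seems friendly: each ratio against a root frequency gives a delay length in samples. E.g. with a root of 220 Hz, the ratios 1/1, 5/4 and 3/2 give 220, 275 and 330 Hz (sampleRate here assumed to come from prepareToPlay):

std::array<double, 3> ratios { 1.0, 5.0 / 4.0, 3.0 / 2.0 }; // just-intonation chord
double rootHz = 220.0;

for (auto r : ratios)
{
    double freq = rootHz * r;                // 220, 275, 330 Hz
    double delaySamples = sampleRate / freq; // e.g. 44100 / 220 ≈ 200.45 samples
}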

Big thanks in advance!


I think the reason there aren’t a lot of microtonal plugins is that for composition you need MIDI, and MIDI out from VST3 has always been really limited. However, your idea is not a MIDI plugin, so I’m glad this will finally be something useful for the microtonal/xen community 🙂

Maybe it would help if you dropped some code examples along with a description of your expected results vs. what really happened.

BTW, it’s insane how much you’ve learned in just one day. Did you work with DSP coding before?


“Useful” is a very ambitious word that I’m not entirely sure I can live up to, but thank you for the kind words!!

So the tutorials I’m working to combine are the Processing Audio Input tutorial and the DSP delay line tutorial: https://docs.juce.com/master/tutorial_dsp_delay_line.html

Currently I’m trying to write a generic audio input function to use in place of CustomOsc. The tutorial example that shows “Audio In” in the DSP examples (which annoyingly doesn’t show how to send it to the processor chain) inherits (not a programmer, unsure on terminology) from AudioAppComponent, which allows setting the audio channels (setAudioChannels), and getNextAudioBlock sits inside that routine. So I figured it would have to look something like:

template <typename Type>
class InputComponent : public juce::AudioAppComponent
{
public:
    InputComponent()
    {
        setAudioChannels (2, 2); // set the number of input and output channels
    }
};

I think to map onto the DSP processor chain I’d need to write something like (edit: “map” is the wrong word, it’s instantiating and calling three processes over and over):

juce::dsp::ProcessorChain<InputComponent<Type>, juce::dsp::Gain<Type>> processorChain;

I think getNextAudioBlock would map across to the process section? Not sure about prepare, or heapBlock and tempBlock; might they have parallels to bufferIn and bufferOut?
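If I had to guess at the glue, it would be something like this (assuming a stereo AudioAppComponent, and I have no idea if this is the intended pattern):

void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
{
    juce::dsp::ProcessSpec spec { sampleRate,
                                  (juce::uint32) samplesPerBlockExpected,
                                  2 };  // stereo
    processorChain.prepare (spec);
}

void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
{
    juce::dsp::AudioBlock<float> block (*bufferToFill.buffer);
    auto sub = block.getSubBlock ((size_t) bufferToFill.startSample,
                                  (size_t) bufferToFill.numSamples);
    juce::dsp::ProcessContextReplacing<float> context (sub);
    processorChain.process (context); // the audio input flows straight through the chain
}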

On the other hand, maybe I’m coming at this from the wrong perspective: is there a way to take just the delay function out of this monolith of a PIP file and integrate it with a much simpler project that already has audio in facilitated? That would avoid using the processor chain for now.

I’m not sure I can agree that I’ve learned a lot; I mostly feel defeated. Small bits of DSP make sense, like working at the sample level; it’s how these containers all link together that’s not immediately obvious to me.

I’d love to extract just the delay section of the tutorial and rephrase it so I can write something like:

readAudioIn();
makeDelayHappen();
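i.e. something as bare as this mono sketch, with a plain circular buffer and no processor chain at all (delayBuffer, writePos, delaySamples and feedback would all be made-up members of the component):

void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
{
    auto* data = info.buffer->getWritePointer (0, info.startSample); // the "readAudioIn()" part
    const int bufSize = (int) delayBuffer.size();                    // e.g. sampleRate / frequency

    for (int i = 0; i < info.numSamples; ++i)                        // the "makeDelayHappen()" part
    {
        const int readPos = (writePos + bufSize - delaySamples) % bufSize;
        const float delayed = delayBuffer[(size_t) readPos];
        delayBuffer[(size_t) writePos] = data[i] + delayed * feedback; // write input + feedback
        data[i] += delayed;                                            // mix the delayed signal out
        writePos = (writePos + 1) % bufSize;
    }
}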

1068 lines of monolithic code does not make for an easy tutorial; I pulled out all the cab/waveshape/distortion code, and it was actually longer. It’s not easy reading, or quick to digest the logic, scrolling around such a beast. I’m not really familiar with C++ either, so I just feel like I’m swimming in the deep end constantly!