STK 4.5.0 modules

Just released my STK modules:

https://github.com/danlin/stk_module

stk_core:
STK library for audio synthesis and effects


stk_effects:
Perry's simple reverberator, 
simple pitch shifter effect, 
CCRMA's NRev reverberator, 
Pitch shifter effect based on the Lent algorithm, 
John Chowning's reverberator, 
Jezar at Dreampoint's FreeVerb, 
echo, 
chorus


stk_filters:
two-pole filter, 
two-zero filter, 
non-interpolating tapped delay line, 
one-pole, one-zero filter, 
one-zero filter, 
one-pole filter, 
general infinite impulse response filter, 
sweepable formant filter, 
general finite impulse response filter, 
linear interpolating delay line, 
allpass interpolating delay line, 
non-interpolating delay line, 
biquad (two-pole, two-zero) filter


stk_generators:
sinusoid oscillator, 
noise, 
granular synthesis, 
file looping / oscillator, 
linear line envelope, 
band-limited square wave, 
band-limited sawtooth wave, 
band-limited impulse train, 
asymptotic curve envelope, 
ADSR envelope


stk_generators_extra:
"singing" looped soundfile, 
periodic/random modulator
 

This sounds great, but your link seems to be broken!

link fixed

Great. I used your module for a delay effect where I was too lazy to do the groundwork myself, but I will check out your new stuff.

BTW, how do you use the vectorized versions of tick() within JUCE? Their input parameters are StkFrames, which is not compatible with the AudioSampleBuffer class, so some sort of conversion is needed, which could destroy the performance gain of the vectorized processing.

I try to convert the AudioSampleBuffer only two times:

[Juce]->[STK]->[STK] ... [STK]->[Juce]

I know about the problem with the STK buffers and am currently looking for a way to quickly convert or wrap the two buffer layouts.

Ah great. If I understand it right, the AudioSampleBuffer is channel-aligned (one contiguous array per channel), while the StkFrames are frame-aligned (channels interleaved per frame).
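For reference, a minimal conversion sketch between the two layouts (the function names are illustrative and not part of the module; the JUCE include depends on your project setup):

// Sketch only: JUCE keeps one contiguous float array per channel
// (channel-aligned), while stk::StkFrames interleaves the channels
// frame by frame, so the copy just swaps the loop roles.

#include "Stk.h"            // stk::StkFrames, stk::StkFloat
// #include <JuceHeader.h>  // or your project's JuceHeader path

static void juceToStk (const AudioSampleBuffer& in, stk::StkFrames& out)
{
    // "out" is assumed to be resized to (numSamples, numChannels) beforehand.
    for (int ch = 0; ch < in.getNumChannels(); ++ch)
    {
        const float* src = in.getReadPointer (ch);
        for (int i = 0; i < in.getNumSamples(); ++i)
            out ((size_t) i, (unsigned int) ch) = (stk::StkFloat) src[i];
    }
}

static void stkToJuce (stk::StkFrames& in, AudioSampleBuffer& out)
{
    for (int ch = 0; ch < out.getNumChannels(); ++ch)
    {
        float* dst = out.getWritePointer (ch);
        for (int i = 0; i < out.getNumSamples(); ++i)
            dst[i] = (float) in ((size_t) i, (unsigned int) ch);
    }
}

Done once at the input and once at the output of the STK chain, this matches the [Juce]->[STK]->…->[Juce] flow described above.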

On the other hand, in most vectorized tick() implementations the single-sample tick() is just called in a loop without any further optimization, so there is not much reason to use the vectorized versions anyway.

Yes, you can just use tick(float).

I'm currently writing stk::WvIn and stk::WvOut subclasses to convert the JUCE AudioSampleBuffer to StkFrames.

How do you use the WvIn & WvOut classes? I'm trying to use them to feed the ADSR class from a JUCE buffer. How do I do this?

Thanks in advance.
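One possible approach, sketched here purely as an illustration and not taken from the module: skip WvIn/WvOut entirely and multiply each sample by the next envelope value from the stock stk::ADSR (keyOn(), keyOff(), tick()). The function name and the buffer handling below are assumptions.

// Sketch only: apply an stk::ADSR envelope to a JUCE buffer sample by sample.
// The processor is assumed to own the ADSR and to call keyOn()/keyOff()
// (and e.g. adsr.setAllTimes (0.01, 0.1, 0.8, 0.3)) from its note handling.

#include "ADSR.h"

static void applyEnvelope (AudioSampleBuffer& buffer, stk::ADSR& adsr)
{
    const int numSamples  = buffer.getNumSamples();
    const int numChannels = buffer.getNumChannels();

    for (int i = 0; i < numSamples; ++i)
    {
        // One envelope value per frame, shared by all channels.
        const float env = (float) adsr.tick();

        for (int ch = 0; ch < numChannels; ++ch)
            buffer.getWritePointer (ch)[i] *= env;
    }
}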

This thing sounds great, but can I have an example of how to use a chorus effect?
My plugin is based on a set of template functions (from the plugin demo example)…

I’m trying something like this:

template <typename FloatType>
void MyAppAudioProcessor::process (AudioBuffer<FloatType>& buffer, AudioBuffer<FloatType>& delayBuffer)
{
    ignoreUnused(delayBuffer);
    const int numSamples = buffer.getNumSamples();
    
    stk::Chorus chorus = stk::Chorus(6000);
    chorus.setModFrequency(0.2f);
    chorus.setModDepth(0.8f);
    chorus.setEffectMix(1.0f);

    for (int channel = 0; channel < getTotalNumInputChannels(); ++channel)
    {
        FloatType* writeData = buffer.getWritePointer(channel);

        for (int i = 0; i < numSamples; ++i)
        {
            writeData[i] = chorus.tick(writeData[i]);
        }
    }

    //applyGain (buffer, delayBuffer);

    //applyChorus (buffer, delayBuffer);
    
    //applyDelay (buffer, delayBuffer);
    
    //applyReverb (buffer, delayBuffer);
    
    // In case we have more outputs than inputs, we'll clear any output
    // channels that didn't contain input data, (because these aren't
    // guaranteed to be empty - they may contain garbage).
    for (int i = getTotalNumInputChannels(); i < getTotalNumOutputChannels(); ++i)
        buffer.clear (i, 0, numSamples);
}

But it produces no sound…

Is there anybody out there? :wink:
I’ve set up the stk::Chorus effect in a blank audio plugin.
Here is the code:

void StkdemoAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    chorus = stk::Chorus(500);
    chorus.setSampleRate(sampleRate);
    chorus.setModDepth(0.8);
    chorus.setModFrequency(0.2);
    chorus.setEffectMix(0.8);
}

and:

void StkdemoAudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    const int totalNumInputChannels = getTotalNumInputChannels();
    const int totalNumOutputChannels = getTotalNumOutputChannels();

    // In case we have more outputs than inputs, this code clears any output
    // channels that didn't contain input data, (because these aren't
    // guaranteed to be empty - they may contain garbage).
    // This is here to avoid people getting screaming feedback
    // when they first compile a plugin, but obviously you don't need to keep
    // this code if your algorithm always overwrites all the output channels.
    for (int i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    // This is the place where you'd normally do the guts of your plugin's
    // audio processing...
    for (int channel = 0; channel < totalNumInputChannels; ++channel)
    {
        stk::StkFloat* channelData = buffer.getWritePointer (channel);

        for (int i = 0; i < buffer.getNumSamples(); ++i)
        {
            channelData[i] = chorus.tick(channelData[i]);
        }
    }
}

So now I hear the chorus effect (very cool!) but also a horrible cyclic noise…
I’m wondering why…
Have I missed some cast?

Can you upload your test code somewhere so I can look at it?

SOLVED!
The buffer must be treated as mono; the stereo output depends on the STK effect.

Code snippet here:

chorus = new stk::Chorus (delay);
chorus->setModDepth(depth);
chorus->setModFrequency(frequency);
chorus->setEffectMix(mix);

template <typename FloatType>
void STKDemoAudioProcessor::applyPitchShift (AudioBuffer<FloatType>& buffer)
{
    const int numSamples = buffer.getNumSamples();

    FloatType* channelData = buffer.getWritePointer (0);

    for (int i = 0; i < numSamples; ++i)
    {
        channelData[i] = pitchShift->tick(channelData[i]);
    }
}
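For reference, a hedged sketch of that mono-in/stereo-out pattern for the chorus, assuming the stock STK 4.5 Chorus API, which computes a stereo frame per tick() and exposes the second channel through lastOut(1). The function name and the stereo buffer are assumptions.

// Sketch only: mono input on channel 0, stereo chorus output.
// "chorus" is assumed to be a member created once (e.g. in prepareToPlay()),
// and the buffer is assumed to have a second output channel.

#include "Chorus.h"

template <typename FloatType>
void applyChorusMonoToStereo (AudioBuffer<FloatType>& buffer, stk::Chorus& chorus)
{
    const int numSamples = buffer.getNumSamples();
    FloatType* left  = buffer.getWritePointer (0);
    FloatType* right = buffer.getNumChannels() > 1 ? buffer.getWritePointer (1) : nullptr;

    for (int i = 0; i < numSamples; ++i)
    {
        // tick() advances the effect and returns channel 0 of the stereo frame;
        // lastOut(1) reads channel 1 of the same frame without advancing again.
        left[i] = (FloatType) chorus.tick ((stk::StkFloat) left[i], 0);

        if (right != nullptr)
            right[i] = (FloatType) chorus.lastOut (1);
    }
}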

These modules are great!!!


OK, now that I have my effects, how can I make them all work together?
I have a global template equal to the “audio plugin” example (as I mentioned earlier).
I think I need to copy the buffer for each effect and then add it to the main buffer…
But how?

RE-SOLVED!

The filters are to be treated as mono (so with a for loop over the channels);
the effects are to be treated as mono-to-stereo (so with a single write pointer).
The chain can be built with a ::process(AudioBuffer& buffer) function that calls the individual ::applyEffect(AudioBuffer& buffer) and ::applyFilter(AudioBuffer& buffer) functions.
Effects can use the single main buffer; the delay filter needs a second buffer to transfer data to (add AudioBuffer& delayBuffer as a parameter of the ::applyDelay() function and of the ::process() function).

If you use an effect mix parameter, which must always be > 0, use a bool flag to deactivate the effect in the ::process() function.

I suggest using this template when working on a plugin:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages) override
{
    jassert (! isUsingDoublePrecision());
    process (buffer, pitchBufferFloat, chorusBufferFloat, echoBufferFloat, delayBufferFloat, reverbBufferFloat);
}

void processBlock (AudioBuffer<double>& buffer, MidiBuffer& midiMessages) override
{
    jassert (isUsingDoublePrecision());
    process (buffer, pitchBufferDouble, chorusBufferDouble, echoBufferDouble, delayBufferDouble, reverbBufferDouble);
}

And the effect/filter functions:

template <typename FloatType>
void process (AudioBuffer<FloatType>&,
              AudioBuffer<FloatType>& pitchBuffer,
              AudioBuffer<FloatType>& chorusBuffer,
              AudioBuffer<FloatType>& echoBuffer,
              AudioBuffer<FloatType>& delayBuffer,
              AudioBuffer<FloatType>& reverbBuffer);

template <typename FloatType>
void applyGain (AudioBuffer<FloatType>&,
                AudioBuffer<FloatType>& pitchBuffer,
                AudioBuffer<FloatType>& chorusBuffer,
                AudioBuffer<FloatType>& echoBuffer,
                AudioBuffer<FloatType>& delayBuffer,
                AudioBuffer<FloatType>& reverbBuffer);
template <typename FloatType>
void applyPitchShift (AudioBuffer<FloatType>&,
                      AudioBuffer<FloatType>& pitchBuffer,
                      AudioBuffer<FloatType>& chorusBuffer,
                      AudioBuffer<FloatType>& echoBuffer,
                      AudioBuffer<FloatType>& delayBuffer,
                      AudioBuffer<FloatType>& reverbBuffer);
template <typename FloatType>
void applyChorus (AudioBuffer<FloatType>&,
                  AudioBuffer<FloatType>& pitchBuffer,
                  AudioBuffer<FloatType>& chorusBuffer,
                  AudioBuffer<FloatType>& echoBuffer,
                  AudioBuffer<FloatType>& delayBuffer,
                  AudioBuffer<FloatType>& reverbBuffer);

And so on…
Excuse me for not removing the unused extra buffers from the declarations.
All the functions must have the same number and type of audio buffers (which have to be declared in the private scope of the header file).
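For illustration, a minimal sketch of such a chain with bypass flags; the struct, the member names and the chosen effects are hypothetical and not part of the module.

// Sketch only: a tiny effect chain following the scheme above. The struct,
// member names and bypass flags are illustrative, not part of the module.

#include "Chorus.h"
#include "Echo.h"

struct EffectChain
{
    stk::Chorus chorus;              // default base delay
    stk::Echo   echo;                // default maximum delay
    bool chorusEnabled = true;       // bypass flag instead of a mix of exactly 0
    bool echoEnabled   = true;

    // Mono processing on channel 0, then duplicated to any extra outputs.
    void process (AudioSampleBuffer& buffer)
    {
        const int numSamples = buffer.getNumSamples();
        float* data = buffer.getWritePointer (0);

        for (int i = 0; i < numSamples; ++i)
        {
            stk::StkFloat s = data[i];
            if (chorusEnabled) s = chorus.tick (s);
            if (echoEnabled)   s = echo.tick (s);
            data[i] = (float) s;
        }

        for (int ch = 1; ch < buffer.getNumChannels(); ++ch)
            buffer.copyFrom (ch, 0, buffer, 0, 0, numSamples);
    }
};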

Going deeper, I’ve found another problem…
the implementation of the stk::TapDelay object.
Its tick() method requires a float input and an StkFrames object for the output;
it returns the StkFrames.
How can I convert them to an AudioBuffer?

I have this code:

const int numSamples = buffer.getNumSamples();
FloatType* const channelData = buffer.getWritePointer(0);
FloatType* const delayData = delayBuffer.getWritePointer(0);

for (int i = 0; i < numSamples; ++i)
{
    const FloatType in = channelData[i];
    channelData[i] += delayData[i];
    stk::StkFrames frames;
    frames = tapDelay->tick(delayData[i], frames);
}

Where am I wrong?
Maybe I need a pointer to frames?

Here is the documentation of this method:

StkFrames& tick (StkFloat input, StkFrames& outputs)
Input one sample to the delayline and return outputs at all tap positions.
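A hedged sketch of how that signature can be used from JUCE: size the StkFrames once outside the per-sample loop, with one channel per tap (an assumption about the expected layout, consistent with the quoted documentation), and mix the tap outputs back into the buffer. The function name, the numTaps parameter and the tap averaging are illustrative.

// Sketch only: one StkFrames of (1 frame x numTaps channels), created once
// outside the loop, receives the outputs at all tap positions per sample.

#include "TapDelay.h"

template <typename FloatType>
void applyTapDelay (AudioBuffer<FloatType>& buffer,
                    stk::TapDelay& tapDelay, unsigned int numTaps)
{
    const int numSamples = buffer.getNumSamples();
    FloatType* channelData = buffer.getWritePointer (0);

    stk::StkFrames taps (1, numTaps);   // sized once, reused for every sample

    for (int i = 0; i < numSamples; ++i)
    {
        tapDelay.tick ((stk::StkFloat) channelData[i], taps);

        // Mix all tap outputs back into the mono channel (simple average).
        stk::StkFloat sum = 0.0;
        for (unsigned int t = 0; t < numTaps; ++t)
            sum += taps (0, t);

        channelData[i] += (FloatType) (sum / (stk::StkFloat) numTaps);
    }
}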

Hi, it’s me again…
I have a problem with STK for JUCE…
I have an audio plugin with bus configurations mono>mono and stereo>stereo.
When I load it in Cubase on a mono audio track, it sounds good!
But if I use a stereo audio track, the sound is very noisy…

For the effects, I’m using this pattern:

void DigiFexAudioProcessor::applyChorus1(AudioSampleBuffer &buffer, int inputChannels)
{
    const int numSamples = buffer.getNumSamples();
    
    const float chorusDepth = params[11];
    const float chorusRate = params[12];
    const float chorusMix = params[13];
    
    chorus1.setModDepth(chorusDepth);
    chorus1.setModFrequency(chorusRate);
    chorus1.setEffectMix(chorusMix);
    
    for (int channel = 0; channel < inputChannels; ++channel)
    {
        float* channelData = buffer.getWritePointer(channel);
        
        for (int i = 0; i < numSamples; ++i)
        {
            channelData[i] = chorus1.tick(channelData[i]);
        }
    }
}

I also clear() the effect in prepareToPlay after I have instantiated the effect object…

What am I doing wrong?

You need a different chorus object for each channel.

I know, and that’s exactly what I want, but the input is always mono (guitar). I’ve tried that pattern with chorus1 on getWritePointer(0) and chorus2 on getWritePointer(1)… still noise…

If your input is always mono but the output is stereo, maybe you should copy the content of channel 0 into channel 1 before processing it?
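A hedged sketch of that suggestion, with illustrative names (chorus1/chorus2 are assumed to be members created in prepareToPlay(), and the function name is hypothetical):

// Sketch only: duplicate the mono guitar signal onto channel 1 first, then
// run a separate chorus per channel so the two delay lines never share state.

void DigiFexAudioProcessor::applyChorusStereo (AudioSampleBuffer& buffer)
{
    const int numSamples = buffer.getNumSamples();

    if (buffer.getNumChannels() > 1)
        buffer.copyFrom (1, 0, buffer, 0, 0, numSamples);   // channel 0 -> channel 1

    stk::Chorus* choruses[2] = { &chorus1, &chorus2 };
    const int channelsToProcess = jmin (2, buffer.getNumChannels());

    for (int channel = 0; channel < channelsToProcess; ++channel)
    {
        float* channelData = buffer.getWritePointer (channel);

        for (int i = 0; i < numSamples; ++i)
            channelData[i] = (float) choruses[channel]->tick (channelData[i]);
    }
}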