No Output, help me reason about this

My JUCE audio plugin’s task is to add odd and even harmonics to the signal. The user controls the blendRatio (the balance between odd and even harmonics) and the Mix (dry/wet).
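For reference, here is how the two controls are meant to combine per sample (the variable names are just placeholders; this mirrors the blend and mix stages shown in the code further down):

// blendRatio selects between the two engines' outputs,
// mix then crossfades that result against the dry input.
float blended = (1.0f - blendRatio) * oddSample + blendRatio * evenSample;
float output  = mix * blended + (1.0f - mix) * drySample;

So blendRatio = 0 should give the odd-harmonics signal only, and Mix = 100% should give the wet signal only.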

I would like some help reasoning about this implementation, because with blendRatio = 0 and Mix = 100% I do not get any output signal.

Even harmonics: $f(t) = \sum_{n=1}^{\infty} A_{2n} \cdot \sin(2n \cdot 2\pi f t + \phi_{2n})$

  • $f(t)$ is the resulting signal in time,
  • $A_{2n}$ is the amplitude of the n-th even harmonic,
  • $2n \cdot 2\pi f t$ represents the frequency of the n-th even harmonic (with $f$ as the fundamental frequency),
  • $\phi_{2n}$ is the phase of the n-th even harmonic.

Odd harmonics: To construct a signal that contains only odd harmonics, sinusoids are added at frequencies 1x, 3x, 5x, etc., where x is the fundamental frequency of the signal. The general formula for a signal with only odd harmonics is $f(t) = \sum_{n=0}^{\infty} A_{2n+1} \cdot \sin((2n+1) \cdot 2\pi f t + \phi_{2n+1})$, where $A_{2n+1}$ and $\phi_{2n+1}$ are the amplitude and phase of the (n+1)-th odd harmonic, respectively.
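To make the two series concrete, here is a minimal standalone sketch (not taken from the plugin; numPartials and the 1/k amplitude roll-off are illustrative choices, and the phases are set to zero):

#include <cmath>

constexpr float kPi = 3.14159265358979f;

// Even-harmonic series at time t: sum of A_2n * sin(2n * 2*pi*f*t)
float evenHarmonicsAt(float t, float f, int numPartials) {
    float sum = 0.0f;
    for (int n = 1; n <= numPartials; ++n) {
        float amp = 1.0f / (2 * n);   // illustrative 1/k roll-off
        sum += amp * std::sin(2 * n * 2.0f * kPi * f * t);
    }
    return sum;
}

// Odd-harmonic series at time t: sum of A_(2n+1) * sin((2n+1) * 2*pi*f*t)
float oddHarmonicsAt(float t, float f, int numPartials) {
    float sum = 0.0f;
    for (int n = 0; n < numPartials; ++n) {
        int k = 2 * n + 1;
        float amp = 1.0f / k;         // illustrative 1/k roll-off
        sum += amp * std::sin(k * 2.0f * kPi * f * t);
    }
    return sum;
}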

I have created:
OddEngine.h / OddEngine.cpp
adds odd harmonics via a Fourier series
EvenEngine.h / EvenEngine.cpp
adds even harmonics via a Fourier series
BlendEngine.h / BlendEngine.cpp
blends the odd and even harmonics into the final output.
Code Snippet:

Signal Analysis:

void Analysis::analyzeAudioSignal(const std::vector<float>& inputSignal, int sampleRate, int numChannels) {
    computeFundamentalFrequency(inputSignal, sampleRate, numChannels);
    computeHarmonicContent(inputSignal, sampleRate);
}

float Analysis::getFundamentalFrequency() const {
    return fundamentalFrequency;
}

std::vector<std::pair<float, float>> Analysis::getHarmonicContent() const {
    return harmonicContent;
}

int Analysis::nextPowerOfTwo(int number) {
    if (number <= 0) return 1;
    --number;
    number |= number >> 1;
    number |= number >> 2;
    number |= number >> 4;
    number |= number >> 8;
    number |= number >> 16;
    return number + 1;
}
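For example, nextPowerOfTwo(480) returns 512, so a 480-sample block gets zero-padded up to a 512-point FFT.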

void Analysis::computeFundamentalFrequency(const std::vector<float>& inputSignal, int sampleRate, int numChannels) {
    // Estimate the fundamental frequency using autocorrelation

    // Buffer length for the autocorrelation (samples per channel)
    size_t bufferSize = inputSignal.size() / numChannels;

    // Down-mix the interleaved input to mono
    std::vector<float> monoSignal(bufferSize, 0.0f);
    for (size_t i = 0; i < bufferSize; ++i) {
        for (int channel = 0; channel < numChannels; ++channel) {
            monoSignal[i] += inputSignal[i * numChannels + channel];
        }
        monoSignal[i] /= numChannels;
    }

    // Brute-force autocorrelation over every lag
    std::vector<float> autocorr(bufferSize, 0.0f);
    for (size_t lag = 0; lag < bufferSize; ++lag) {
        for (size_t i = 0; i < bufferSize - lag; ++i) {
            autocorr[lag] += monoSignal[i] * monoSignal[i + lag];
        }
    }

    // Find the lag with the largest autocorrelation after lag zero
    size_t peakIndex = std::distance(autocorr.begin(), std::max_element(autocorr.begin() + 1, autocorr.end()));

    // Convert that lag into a frequency
    fundamentalFrequency = sampleRate / static_cast<float>(peakIndex);
}
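One caveat here: taking std::max_element over every lag from 1 upwards tends to land on very small lags for strongly correlated material, because the autocorrelation decays slowly near lag zero. A common refinement (just a sketch, not part of the original code; the 50 Hz and 2000 Hz pitch bounds are assumptions) is to restrict the search to lags corresponding to a plausible pitch range:

// Search only lags for pitches between 50 Hz and 2000 Hz
size_t minLag = std::max<size_t>(1, static_cast<size_t>(sampleRate / 2000.0f));
size_t maxLag = std::min<size_t>(bufferSize - 1, static_cast<size_t>(sampleRate / 50.0f));
auto peakIt = std::max_element(autocorr.begin() + minLag, autocorr.begin() + maxLag + 1);
size_t peakIndex = std::distance(autocorr.begin(), peakIt);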

void Analysis::computeHarmonicContent(const std::vector<float>& inputSignal, int sampleRate) {
    int originalSize = static_cast<int>(inputSignal.size());
    int fftSize = nextPowerOfTwo(originalSize);

    juce::AudioBuffer<float> fftBuffer(1, fftSize);                  // 1 channel, fftSize samples
    juce::dsp::FFT forwardFFT(static_cast<int>(std::log2(fftSize))); // the FFT constructor takes the order (log2 of the size)

    // Copy the signal into the FFT buffer, zero-padding if necessary
    fftBuffer.clear();
    for (int i = 0; i < originalSize; ++i) {
        fftBuffer.setSample(0, i, inputSignal[i]);
    }

    // performFrequencyOnlyForwardTransform works in place on an array
    // of 2 * fftSize floats, so allocate twice the FFT size
    std::vector<float> fftData(2 * fftSize, 0.0f);
    memcpy(fftData.data(), fftBuffer.getReadPointer(0), sizeof(float) * fftSize);

    forwardFFT.performFrequencyOnlyForwardTransform(fftData.data());

    // Collect (frequency, magnitude) pairs from the first half of the spectrum
    harmonicContent.clear();
    for (int i = 1; i <= fftSize / 2; ++i) {
        float amplitude = fftData[i];
        float freq = i * sampleRate / static_cast<float>(fftSize);
        harmonicContent.push_back({ freq, amplitude });
    }
}
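Note that juce::dsp::FFT::performFrequencyOnlyForwardTransform works in place and requires an array of 2 * getSize() floats; it leaves the magnitudes in the first half of that array, which is why fftData is allocated at twice fftSize above and only the first fftSize / 2 bins are read back.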

BlendEngine:

BlendEngine::BlendEngine() {
}

BlendEngine::~BlendEngine() {
   
}

void BlendEngine::blendSignals(const std::vector<float>& oddSignal, const std::vector<float>& evenSignal, float blendRatio) {
    // Work out the longer of the two signals
    size_t maxSignalSize = std::max(oddSignal.size(), evenSignal.size());

    // Prepare temporary vectors of equal length
    std::vector<float> tempOddSignal = oddSignal;
    std::vector<float> tempEvenSignal = evenSignal;

    // Zero-pad where needed so the lengths match (resize only
    // initialises the newly added elements; existing samples are kept)
    tempOddSignal.resize(maxSignalSize, 0.0f);
    tempEvenSignal.resize(maxSignalSize, 0.0f);

    // Cross-blend the two signals
    blendedSignal.resize(maxSignalSize);
    for (size_t i = 0; i < maxSignalSize; ++i) {
        blendedSignal[i] = (1.0f - blendRatio) * tempOddSignal[i] + blendRatio * tempEvenSignal[i];
    }
}

std::vector<float> BlendEngine::getBlendedSignal() const {
    return blendedSignal;
}
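Worth tracing against the symptom described at the top: with blendRatio = 0 the loop reduces to blendedSignal[i] = tempOddSignal[i], so the blend output is exactly the (zero-padded) odd signal. If OddEngine ever returns an empty vector, the padding turns the entire wet signal into zeros, and Mix = 100% then produces silence.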

OddEngine.cpp

OddEngine::OddEngine() {
    // 
}

OddEngine::~OddEngine() {
    // 
}

void OddEngine::processSignal(const std::vector<float>& inputSignal, int sampleRate, float fundamentalFrequency) {
    if (fundamentalFrequency > 0.0f) {
        addOddHarmonics(inputSignal, sampleRate, fundamentalFrequency);
    }
    else {
        // No processing if the fundamental frequency is not available
        processedSignal.clear();
    }
}

std::vector<float> OddEngine::getProcessedSignal() const {
    return processedSignal;
}

void OddEngine::addOddHarmonics(const std::vector<float>& inputSignal, int sampleRate, float fundamentalFrequency) {
    size_t inputSize = inputSignal.size();
    processedSignal.resize(inputSize);
    float maxAmplitude = 0.0f;  // Used for normalisation

    for (size_t i = 0; i < inputSize; ++i) {
        float time = static_cast<float>(i) / sampleRate;
        float harmonicsSum = 0.0f;

        // Add the 1st, 3rd and 5th harmonics of the detected fundamental
        for (int harmonic = 1; harmonic <= 5; harmonic += 2) {
            float frequency = harmonic * fundamentalFrequency;
            float amplitude = 1.0f / (harmonic * 2); // roll the amplitude off with the harmonic order
            harmonicsSum += amplitude * std::sin(2.0f * PI * frequency * time);
        }

        processedSignal[i] = inputSignal[i] + harmonicsSum;
        maxAmplitude = std::max(maxAmplitude, std::abs(processedSignal[i]));
    }

    // Normalise the signal (if necessary)
    if (maxAmplitude > 1.0f) {
        for (float& sample : processedSignal) {
            sample /= maxAmplitude;
        }
    }
}
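EvenEngine.cpp isn't shown, but presumably it mirrors addOddHarmonics with even multiples of the fundamental. Under that assumption, its inner loop would look something like this sketch:

// Assumed even-harmonic counterpart: 2nd and 4th harmonics
for (int harmonic = 2; harmonic <= 4; harmonic += 2) {
    float frequency = harmonic * fundamentalFrequency;
    float amplitude = 1.0f / (harmonic * 2);  // same roll-off as the odd engine
    harmonicsSum += amplitude * std::sin(2.0f * PI * frequency * time);
}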

processBlock:

void HotColdAudioProcessor::processBlock(juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages) {
    juce::ScopedNoDenormals noDenormals;
    auto totalNumInputChannels = getTotalNumInputChannels();

    // Clear any output channels that don't have input data
    for (auto i = totalNumInputChannels; i < buffer.getNumChannels(); ++i)
        buffer.clear(i, 0, buffer.getNumSamples());

    // Fetch the parameters
    float blendRatio = *parameters.getRawParameterValue("blendRatio");
    float mix = *parameters.getRawParameterValue("mix");
    float fundamentalFrequency = analysis.getFundamentalFrequency();

    DBG("blendRatio: " << blendRatio);
    DBG("mix: " << mix);
    DBG("fundamentalFrequency: " << fundamentalFrequency);

    for (int channel = 0; channel < totalNumInputChannels; ++channel) {
        auto* channelData = buffer.getWritePointer(channel);

        std::vector<float> channelSamples(buffer.getNumSamples());
        for (int i = 0; i < buffer.getNumSamples(); ++i) {
            channelSamples[i] = channelData[i];
        }

        // Process the signal with OddEngine
        oddEngine.processSignal(channelSamples, getSampleRate(), fundamentalFrequency);
        auto oddSignal = oddEngine.getProcessedSignal();

        // Process the signal with EvenEngine
        evenEngine.processSignal(channelSamples, getSampleRate(), fundamentalFrequency);
        auto evenSignal = evenEngine.getProcessedSignal();

        // Blend the two signals
        blendEngine.blendSignals(oddSignal, evenSignal, blendRatio);

        // Apply the final dry/wet mix
        auto processedSignal = blendEngine.getBlendedSignal();
        for (int i = 0; i < buffer.getNumSamples(); ++i) {
            channelData[i] = mix * processedSignal[i] + (1.0f - mix) * channelData[i];
        }
    }
}

I’m a newbie to JUCE and no expert in C++ (anymore), but I think your problem is around the two vector resize() calls in blendSignals(). resize(maxSignalSize, 0.0f) only zero-fills the elements it adds, so if one of the engines handed back an empty vector (OddEngine clears its output whenever the fundamental frequency isn’t available), the whole padded buffer ends up as zeros. With blendRatio = 0 the blend takes 100% of that zeroed odd signal, and if you have the mix set to 1.0 then you’re only going to get the wet signal (which is now zero). I’d check what getProcessedSignal() is actually returning before the blend.

As an aside, it looks like you’re also adding the harmonics on top of the dry signal in your odd/even engine code, which means your mix range is actually going to run from 50% to 0% rather than acting as a true dry/wet.
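To illustrate that aside: if the engines stored only the generated harmonics and left the dry path entirely to the mix stage, the Mix knob would behave as a true dry/wet. A one-line sketch of that change in addOddHarmonics:

// Store only the harmonics; the dry signal is re-added
// by the mix loop in processBlock instead.
processedSignal[i] = harmonicsSum;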
