How does the output stage work in the Audio Unit?

Hello :slight_smile:

A bit new to coding here. I have been following a lot of tutorials lately and learned about JUCE. Very nice :slight_smile: But yesterday I wanted to try making my own setup, and ran into some issues:

I am sure it's something very basic that I am missing, I just can't seem to figure it out.

I can't seem to get audio output from my very simple noise/saw oscillator. All I get from the setup below is a loud pop when I open the app and a loud pop when I close the app. No errors or anything.

I have watched many tutorials, but they all have some kind of twist and don't really show exactly how to set up the output stage of a synthesizer plug-in in a simple way, where the audio signal is created internally in the synth. Anyway, here is my project. Am I missing something in the prepareToPlay block?

(Sorry if this is a basic question)

Jakob

PluginProcessor.cpp:

/*

This file was auto-generated!

It contains the basic framework code for a JUCE plugin processor.

==============================================================================
*/

#include "PluginProcessor.h"
#include "PluginEditor.h"

//==============================================================================
OutPutTutorialAudioProcessor::OutPutTutorialAudioProcessor()
#ifndef JucePlugin_PreferredChannelConfigurations
: AudioProcessor (BusesProperties()
#if ! JucePlugin_IsMidiEffect
#if ! JucePlugin_IsSynth
.withInput ("Input", AudioChannelSet::stereo(), true)
#endif
.withOutput ("Output", AudioChannelSet::stereo(), true)
#endif
)
#endif
{
}

OutPutTutorialAudioProcessor::~OutPutTutorialAudioProcessor()
{
}

//==============================================================================
const String OutPutTutorialAudioProcessor::getName() const
{
return JucePlugin_Name;
}

bool OutPutTutorialAudioProcessor::acceptsMidi() const
{
#if JucePlugin_WantsMidiInput
return true;
#else
return false;
#endif
}

bool OutPutTutorialAudioProcessor::producesMidi() const
{
#if JucePlugin_ProducesMidiOutput
return true;
#else
return false;
#endif
}

bool OutPutTutorialAudioProcessor::isMidiEffect() const
{
#if JucePlugin_IsMidiEffect
return true;
#else
return false;
#endif
}

double OutPutTutorialAudioProcessor::getTailLengthSeconds() const
{
return 0.0;
}

int OutPutTutorialAudioProcessor::getNumPrograms()
{
return 1; // NB: some hosts don't cope very well if you tell them there are 0 programs,
// so this should be at least 1, even if you're not really implementing programs.
}

int OutPutTutorialAudioProcessor::getCurrentProgram()
{
return 0;
}

void OutPutTutorialAudioProcessor::setCurrentProgram (int index)
{
}

const String OutPutTutorialAudioProcessor::getProgramName (int index)
{
return {};
}

void OutPutTutorialAudioProcessor::changeProgramName (int index, const String& newName)
{
}

//==============================================================================
void OutPutTutorialAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
// Use this method as the place to do any pre-playback
// initialisation that you need..
}

void OutPutTutorialAudioProcessor::releaseResources()
{
// When playback stops, you can use this as an opportunity to free up any
// spare memory, etc.
}

#ifndef JucePlugin_PreferredChannelConfigurations
bool OutPutTutorialAudioProcessor::isBusesLayoutSupported (const BusesLayout& layouts) const
{
#if JucePlugin_IsMidiEffect
ignoreUnused (layouts);
return true;
#else
// This is the place where you check if the layout is supported.
// In this template code we only support mono or stereo.
if (layouts.getMainOutputChannelSet() != AudioChannelSet::mono()
&& layouts.getMainOutputChannelSet() != AudioChannelSet::stereo())
return false;

// This checks if the input layout matches the output layout

#if ! JucePlugin_IsSynth
if (layouts.getMainOutputChannelSet() != layouts.getMainInputChannelSet())
return false;
#endif

return true;

#endif
}
#endif

void OutPutTutorialAudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
ScopedNoDenormals noDenormals;
const int totalNumInputChannels = getTotalNumInputChannels();
const int totalNumOutputChannels = getTotalNumOutputChannels();

// In case we have more outputs than inputs, this code clears any output
// channels that didn't contain input data, (because these aren't
// guaranteed to be empty - they may contain garbage).
// This is here to avoid people getting screaming feedback
// when they first compile a plugin, but obviously you don't need to keep
// this code if your algorithm always overwrites all the output channels.
for (int i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
    buffer.clear (i, 0, buffer.getNumSamples());


// This is the place where you'd normally do the guts of your plugin's
// audio processing...
for (int channel = 0; channel < totalNumInputChannels; ++channel)
{
    // ..do something to the data...
    // Input
    // float* InputData = buffer.getWritePointer (channel);
    // Output
    float* OutputData = buffer.getWritePointer (channel);
    
    
    // Scale pitch parameter
    // pitchlimit = clampf(params[PITCH_PARAM].value, -5.0, 5.0);
    freq = 440 * powf(2.0, 3.0f); // pitchlimit
    
    // BASIC OSCILLATOR
    phase += freq * deltaTime;
    if (phase >= 1.0)
        phase -= 1.0;
    
    saw = phase;

    
    for(int sample = 0; sample < buffer.getNumSamples(); ++sample)
    {
     
        // InputData[sample] = ((saw * 2.0 ) -1.0) * 0.25;
        // OutputData[sample] = ((saw * 2.0 ) -1.0) * 0.25;
        // OutputData[sample] = InputData[sample];
        OutputData[sample] = (Random().nextFloat() * 2 -1) * 0.1;

    }
}

}

//==============================================================================
bool OutPutTutorialAudioProcessor::hasEditor() const
{
return true; // (change this to false if you choose to not supply an editor)
}

AudioProcessorEditor* OutPutTutorialAudioProcessor::createEditor()
{
return new OutPutTutorialAudioProcessorEditor (*this);
}

//==============================================================================
void OutPutTutorialAudioProcessor::getStateInformation (MemoryBlock& destData)
{
// You should use this method to store your parameters in the memory block.
// You could do that either as raw data, or use the XML or ValueTree classes
// as intermediaries to make it easy to save and load complex data.
}

void OutPutTutorialAudioProcessor::setStateInformation (const void* data, int sizeInBytes)
{
// You should use this method to restore your parameters from this memory block,
// whose contents will have been created by the getStateInformation() call.
}

//==============================================================================
// This creates new instances of the plugin..
AudioProcessor* JUCE_CALLTYPE createPluginFilter()
{
return new OutPutTutorialAudioProcessor();
}

PluginProcessor.h:

/*

This file was auto-generated!

It contains the basic framework code for a JUCE plugin processor.

==============================================================================
*/

#pragma once

#include "../JuceLibraryCode/JuceHeader.h"

//==============================================================================
/**
*/
class OutPutTutorialAudioProcessor : public AudioProcessor
{
public:
//==============================================================================
OutPutTutorialAudioProcessor();
~OutPutTutorialAudioProcessor();

//==============================================================================
void prepareToPlay (double sampleRate, int samplesPerBlock) override;
void releaseResources() override;

#ifndef JucePlugin_PreferredChannelConfigurations
bool isBusesLayoutSupported (const BusesLayout& layouts) const override;
#endif

void processBlock (AudioSampleBuffer&, MidiBuffer&) override;

//==============================================================================
AudioProcessorEditor* createEditor() override;
bool hasEditor() const override;

//==============================================================================
const String getName() const override;

bool acceptsMidi() const override;
bool producesMidi() const override;
bool isMidiEffect () const override;
double getTailLengthSeconds() const override;

//==============================================================================
int getNumPrograms() override;
int getCurrentProgram() override;
void setCurrentProgram (int index) override;
const String getProgramName (int index) override;
void changeProgramName (int index, const String& newName) override;

//==============================================================================
void getStateInformation (MemoryBlock& destData) override;
void setStateInformation (const void* data, int sizeInBytes) override;

private:

// Basic osc
float saw;
float freq;
float deltaTime = 1.0 / getSampleRate();
float phase;
// float pitchlimit;
static const int KChannels = 2;



//==============================================================================
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (OutPutTutorialAudioProcessor)

};

The code works and outputs white noise when compiled with Visual Studio on Windows, so I suspect there is something wrong in your audio setup. What platform are you building and testing on?

The oscillator code probably does not work; you can't, for example, just initialise your member variable like this:

float deltaTime = 1.0 / getSampleRate();

There is no valid sample rate yet when the object is being constructed; you need to do the initialisation based on the sample rate passed into prepareToPlay. (The documentation suggests getSampleRate() only returns a valid result when called from processBlock.)
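To illustrate the idea, here is a plain-C++ sketch (no JUCE; the `SawOsc` struct and its names are invented for the example): compute the phase increment only once a valid sample rate is available in the prepare callback, and note that the phase should also advance once per sample, not once per block as in the posted processBlock.

```cpp
#include <cmath>

// Minimal sketch, assuming a prepare/process split like JUCE's:
// deltaTime is only valid after prepare() has been called with the
// host-supplied sample rate.
struct SawOsc
{
    double deltaTime = 0.0; // seconds per sample; invalid until prepare()
    double phase = 0.0;     // 0..1 ramp
    double freq = 440.0;    // Hz

    void prepare (double sampleRate)
    {
        deltaTime = 1.0 / sampleRate; // safe: sampleRate is valid here
        phase = 0.0;
    }

    float nextSample()
    {
        // Advance the phase per sample, wrapping at 1.0.
        phase += freq * deltaTime;
        if (phase >= 1.0)
            phase -= 1.0;

        // Map the 0..1 ramp to a bipolar saw, scaled down to -0.25..0.25.
        return (float) (phase * 2.0 - 1.0) * 0.25f;
    }
};
```

In a JUCE processor you would call the equivalent of `prepare()` from `prepareToPlay()` and `nextSample()` inside the per-sample loop of `processBlock()`.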

If the code doesn't even produce the white noise for you, this line in the code is a bit suspicious:

OutputData[sample] = (Random().nextFloat() * 2 - 1) * 0.1;

You shouldn't really use random-generator objects like that, constructing a new one for each sample. Make the Random object a member variable instead.
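The same idea in plain standard C++ (no JUCE; the `NoiseSource` type is invented for illustration): seed one generator up front and reuse it for every sample, which is the analogue of making `juce::Random` a member variable.

```cpp
#include <random>

// Sketch: one RNG that lives as long as the object, instead of
// constructing a fresh generator per sample.
struct NoiseSource
{
    std::mt19937 rng { std::random_device{}() };           // seeded once
    std::uniform_real_distribution<float> dist { -1.0f, 1.0f };

    void fill (float* out, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
            out[i] = dist (rng) * 0.1f;                    // quiet white noise
    }
};
```

Constructing the generator once matters both for performance and for randomness quality: a generator re-seeded every sample can produce correlated (or identical) values.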

Hm, strange. I tested it in the Plugin Host and in Logic X, and the only thing I get is a loud click, then no sound until I remove the plug-in again, when I get another loud click.

I am using OS X and Xcode. I have already built many small projects and they all work: audio in > process > audio out. But for most of them I was following tutorials. I also built a synth, following tutorials…

Ahh okay, I'll try getting the sample rate from another place. About the oscillator: I tested it on another platform, so I know it works, but I did change it to the noise just for testing purposes… and I get the same result from both.

I know, this was just a test to see if it would produce any audio output, and it didn't. I saw another video where this worked, but he was working on an audio app, not an audio plug-in, so the setup was different and I couldn't really convert it into what I needed.

So I am still stuck. If you say it works for you, I really don't understand what the problem is here… Getting frustrated after spending 4 hours trying to sort this out last night.

I still have an idea that I am missing something in the prepareToPlay block. But if you get sound, that is obviously not my problem… Scratching my head.

Anyway, thank you for the reply. I'll try moving the sample-rate call to processBlock, but since the noise isn't working either, I don't think that alone will solve the problem.

Plugins that only generate audio directly, without taking MIDI input or audio input, are a special case that isn't necessarily supported well in all hosts. Did you try putting a dummy audio file on the track in Logic and starting playback? By the way, I only tested your code as a standalone application so far, because your thread title said "app". Edit: it does also work as a VST plugin in Reaper; the track it is loaded on needs to be record-enabled to get the noise playing if there is no audio file on the track.

Well, my goal is to make a MIDI instrument eventually. I just wanted to get the very basics working, audio output, so I could start experimenting with some algorithms for filters, oscillators, etc.

In an hour or so I have some time to sort this out. Thanks again.

About the Random object, there is a shortcut available: Random::getSystemRandom()

Random::getSystemRandom().nextFloat()

Another note: which host are you testing in? If you want to generate audio in Logic, you need to build a generator, not an effect, because processBlock will not be called on an effect if there is no audio on the track…

If you are planning to make a MIDI instrument eventually, I would define it directly as a synth (there is an option in the Jucer). Logic will then call processBlock and play the sound when the plugin receives MIDI.

Hey!

Sorry for the delayed answer.

Yeah, I think I missed the point that there is a difference in framework between a VST effect and a VST instrument. I went ahead and built a VST effect instead to start with, since it seemed a bit easier.