Running plugin through Live 9.6.1 opens Xcode, crashes Live and stops system audio (Mac)

Hi,

I’m studying an audio software development module at university and have begun to prototype the different elements I will need for my final project, an LUFS meter. I have been watching The Audio Programmer’s tutorials to get an idea of the first part of the LUFS meter: the filtering.

https://www.youtube.com/watch?v=YJ4YbV6TDo0&t=626s (Tutorial I followed)

Unlike the tutorial, I did not implement a means for the user to change the cutoff frequency and Q, as these are not user-controlled; they are fixed processes the audio buffer goes through before the loudness calculation.

I want these to happen as "background" processes which do not affect the signal coming from the speakers (the user can hear their track whilst viewing a UI detailing the LUFS level). I may have to implement the filter in a different place, or create two copies of the audio (which will be uploaded into the app): one which goes through the LUFS stages and one which gets played directly.
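Something like this rough sketch is the idea; highShelf and highPass are placeholder names for two already-prepared filters, and a real version would preallocate the copy in prepareToPlay rather than in the audio callback:

```
void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
    // copy the incoming audio so only the copy gets K-weighted
    AudioBuffer<float> analysisBuffer;
    analysisBuffer.makeCopyOf (buffer); // NB: allocates; preallocate in a real version

    // run the analysis copy through the two LUFS pre-filters
    dsp::AudioBlock<float> block (analysisBuffer);
    dsp::ProcessContextReplacing<float> context (block);
    highShelf.process (context);
    highPass.process (context);

    // ... mean-square / LUFS measurement on analysisBuffer goes here ...
    // "buffer" itself is untouched, so playback stays unfiltered
}
```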

However, when I try to test the implementation of the filters (LUFS uses a high shelf and then a high pass), the plugin will not run in Live: when I drag it onto an audio track it opens Xcode and then crashes Live and my audio drivers. I am also unable to open it as a standalone app… it says audio is muted to prevent a feedback loop.

Please find attached my code… any help would be appreciated! :slight_smile:

https://drive.google.com/drive/folders/14RewXPbF8pkTqQkUM-otVbecKUi9AG67?usp=sharing

What do you mean by “opens Xcode”? Are you running Live via the debugger? If so, what is Xcode showing? That might help to narrow down the problem.

Well what happens if you un-mute it? Does the plug-in work as expected?

Hey Tom,

When I drag the plugin onto a track, the code opens in Xcode rather than an instance of the plugin loading in Live.

I’m not able to; it pinwheels the entire time it is open.

JUCE Assertion failure in juce_ProcessorDuplicator.h:70

is the error when it’s run as a standalone.

Well, that assertion the Xcode debugger has stopped on is telling you that the number of channels in your processing context is not the same as your number of duplicated processors.

Are you making a mistake with your number of channels?

Hmm, ah right. Well, here is my prepareToPlay, where I state the spec for dsp:

and then my processBlock channel definitions:

[screenshots of the prepareToPlay and processBlock code]

To use the filters I have created a ProcessorDuplicator; could that be causing the issues?

RE: prepareToPlay:
But you are not using the spec there. When the method ends, it goes out of scope, as if nothing had happened.

The missing lines are:

```
HighPass.prepare (spec);
HighShelf.prepare (spec);
```

BTW, please prefer posting text instead of screenshots; that allows search engines to read your code, and it helps me copy and paste when answering your questions.
By putting three backticks ``` before and after the code, you will get proper formatting.
Thanks


Thank you so much Daniel!

My mistake, will do in the future :slight_smile:

How would I define the coefficients for each of these filters? I have looked at the dsp::IIR::Coefficients methods but I’m unsure where they have to be placed within the JUCE project.

Is it within the constructor?

It depends; I think in the LUFS measurement algorithm they are fixed, so it would be enough to set them once in prepareToPlay (you need the sampleRate to calculate the coefficients, and that is not yet known in the constructor).

But you can also set them each time a parameter is changed, using a parameter listener on the AudioProcessorValueTreeState.
IIRC they are designed to be thread-safe, so you don’t need to care which thread you change them from.
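A minimal sketch of that listener approach might look like this; the parameter ID "shelfGain" and the plain dsp::IIR::Filter<float> member highShelf are illustrative assumptions, not from this thread:

```
// in a processor that inherits AudioProcessorValueTreeState::Listener
// and has registered itself with addParameterListener
void parameterChanged (const String& parameterID, float newValue) override
{
    // recompute the shelf coefficients whenever the (hypothetical) parameter moves;
    // assumes prepareToPlay has already run, so getSampleRate() is valid
    if (parameterID == "shelfGain")
        *highShelf.coefficients = *dsp::IIR::Coefficients<float>::makeHighShelf (getSampleRate(),
                                                                                 1000.0f, 1.0f, newValue);
}
```

(With a ProcessorDuplicator the member is called state rather than coefficients, as comes up further down.)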

I have found a spec with the coefficients for both of the filters in it, and yes, they are fixed. However, they are for 48k (the spec is for 5.1 film-based stuff), so I’m just summing two channels. But I take it the conversion is not as simple as 48,000 / coefficient * 44,100??

The only parameters I plan to have are a gain knob, an upload and play/stop feature (track player) and some form of graphical UI. Is it worth resetting the filters after each of these changes?

OK, I have a limited understanding of threads… so that’s perfect! :wink:

You don’t need to care; just take the right factory:
dsp::IIR::Coefficients<NumericType>::makeHighShelf()
It takes sampleRate, cutOff, Q and gain as parameters…

I care about what, the coefficients?

Surely I have to use the spec’s filters for the later mean-square summing to produce the correct LUFS readings?



```
HighShelf (dsp::IIR::Coefficients<float>::makeHighShelf (44100.00, 1000.0f, 1.0f, 4.0f)),
HighPass  (dsp::IIR::Coefficients<float>::makeHighPass  (44100.00, 90.0f, 1.0f))
```

is already in my constructor (I have used the correct frequency, Q and gain).

OK, but in the constructor you don’t know the sample rate yet, so you can’t set them just yet.

In prepareToPlay you have all the information needed. It will be called before processing starts…

Ok, is this as simple as

double CurrentSampleRate = SampleRate();

or is it best to put 44,100?

Also, I’m aware that I will later have to change the buffer to read from an array of samples (when the user uploads their track), but I’m unsure how I’d separate it so that the filtered signal is the one which drives the UI meter (measures LUFS) while the original signal is played back with no filtering, so the user can hear their track whilst the app is running.

Thanks again for your help, sir!

You must not assume 44.1!

In an AudioProcessor the sample rate is unknown at the time it is created, so calling getSampleRate() will probably return 0 or something meaningless.

The API is designed to call prepareToPlay with the proper setup before processing starts, so everything can be prepared accordingly (once processing has started, you cannot allocate memory for example).

That means you should leave the Coefficients empty in the constructor (i.e. no need to call their constructor at all) and instead assign them in prepareToPlay:

```
void prepareToPlay (double sampleRate, int samplesExpected) override
{
    dsp::ProcessSpec spec;
    spec.sampleRate = sampleRate;
    spec.maximumBlockSize = samplesExpected;
    spec.numChannels = getTotalNumOutputChannels();

    *HighShelf.coefficients = *dsp::IIR::Coefficients<float>::makeHighShelf (sampleRate, 1000.0f, 1.0f, 4.0f);
    HighShelf.prepare (spec);

    *HighPass.coefficients = *dsp::IIR::Coefficients<float>::makeHighPass (sampleRate, 90.0f, 1.0f);
    HighPass.prepare (spec);
}
```

that should do it…

EDIT: checking the docs: “It’s up to the caller to ensure that these coefficients are modified in a thread-safe way.”
But in prepareToPlay you are in the clear!

Right, so some implementation that checks the sample rate of the uploaded audio and sets it appropriately?

Ok I see.

With that implementation I then get an error from my processor.h: there is no member named "coefficients" in:

```
dsp::ProcessorDuplicator<dsp::IIR::Filter<float>, dsp::IIR::Coefficients<float>> HighShelf, HighPass;
```

Is this due to the fact that this line sets the coefficients to predetermined JUCE ones?

Ok sweet.

OK, that’s due to the ProcessorDuplicator. It has a templated member called state, which for IIR is the coefficients.

In that case it is written like this:

```
void prepareToPlay (double sampleRate, int samplesExpected) override
{
    dsp::ProcessSpec spec;
    spec.sampleRate = sampleRate;
    spec.maximumBlockSize = samplesExpected;
    spec.numChannels = getTotalNumOutputChannels();

    *HighShelf.state = *dsp::IIR::Coefficients<float>::makeHighShelf (sampleRate, 1000.0f, 1.0f, 4.0f);
    HighShelf.prepare (spec);

    *HighPass.state = *dsp::IIR::Coefficients<float>::makeHighPass (sampleRate, 90.0f, 1.0f);
    HighPass.prepare (spec);
}
```

About files: the AudioProcessor is designed to manipulate a signal as it is passing through. The host is responsible for sending you the samples at the proper sample rate.

If you want to implement that as an offline console tool for analysis, you don’t need an AudioProcessor at all.

You can use the AudioFormatReader directly, which will tell you the file’s sample rate.
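A minimal sketch of that offline approach; the file path and the surrounding setup are assumptions, not from this thread:

```
AudioFormatManager formatManager;
formatManager.registerBasicFormats(); // WAV, AIFF, ...

File file ("/path/to/track.wav"); // hypothetical path to the user's uploaded track
std::unique_ptr<AudioFormatReader> reader (formatManager.createReaderFor (file));

if (reader != nullptr)
{
    const double fileSampleRate = reader->sampleRate; // use this to build the coefficients

    // read the whole file into memory (fine for short tracks)
    AudioBuffer<float> fileBuffer ((int) reader->numChannels, (int) reader->lengthInSamples);
    reader->read (&fileBuffer, 0, (int) reader->lengthInSamples, 0, true, true);

    // ... K-weighting filters and mean-square / LUFS calculation on fileBuffer ...
}
```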


Right, cheers

I will look into the AudioFormatReader

In terms of implementing the coefficients, the only docs I’ve seen (or can make sense of) are for declaring them in the constructor using:


```
dsp::IIR::Coefficients<NumericType>::Coefficients (NumericType b0,
                                                   NumericType b1,
                                                   NumericType b2,
                                                   NumericType a0,
                                                   NumericType a1,
                                                   NumericType a2)
```

The spec gives b0, b1, b2, a1 and a2 for both filters?
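For what it’s worth, a hedged sketch of feeding raw values into that constructor, reusing the ProcessorDuplicator members from above; the numbers are placeholders, not the published BS.1770 coefficients:

```
// placeholder values, NOT the real BS.1770 coefficients;
// a0 is simply 1 when a spec lists only b0, b1, b2, a1, a2
const float b0 = 1.0f, b1 = 0.0f, b2 = 0.0f, a1 = 0.0f, a2 = 0.0f;

*HighShelf.state = dsp::IIR::Coefficients<float> (b0, b1, b2, 1.0f, a1, a2);
```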