BUG: createPluginInstance creates plugins at 44100 only

My host app plays sound at a sample rate of 44100.
But I have loaded the plugin with 48000:

formatManager.createPluginInstanceAsync(desc, 48000, 512, [this](std::unique_ptr<AudioPluginInstance> instance, const String& error)

This is a huge blocker. Please help!

I believe your plugin must run at the same sample rate as the host. If you internally want to run at a specific sample rate, you need to do sample rate conversion on the way in and out of your plugin…


I do run my host at 48000.
At least I try: I set up my device with 48000, but my soundcard does not switch sample rate.
Only if I run
deviceManager.initialiseWithDefaultDevices(2, 2);
does it switch back to 44100, whatever it was set to before.

But I have one more question: when do I need deviceManager? What if I don’t want to hear sound, but only process, and save a wav file to disk?

You don’t need the AudioDeviceManager if you are making an app that doesn’t use audio devices.

I don’t understand how to pass MIDI-generated data to my loaded plugin, and get the audio result afterwards.

The only way I can do it is if I attach my processor to an AudioProcessorPlayer and provide the callback for my audio device:

deviceManager.addAudioCallback (&player);
player.setProcessor(mimiProcessor -> mainProcessorGraph.get());

Only then can I access the data in processBlock in my mainProcessor.

My goal is to pass generated MIDI data to the loaded VST plugin, then catch the audio data and save it to a wav file.

You have to call processBlock() yourself in a loop and write the result with an AudioFormatWriter.

Caveat: tell the instrument you are working offline using setNonRealtime (true); otherwise calling processBlock as fast as possible will make many instruments skip, because they were not given time to finish loading samples or other heavy tasks.

Thank you.
If I make a timer to run processBlock in a loop, what is a good way to create the audioBuffer and midiBuffer to pass to the function?

No need for a Timer: a Timer runs on the message thread, blocking your GUI.
I would use a ThreadPoolJob or just inherit from Thread.

You just need to do that in a loop, something along those lines (untested):

// note: 'override' belongs on the declaration inside the class,
// not on the out-of-class definition
void MyThread::run()
{
    const double sampleRate { 48000.0 };

    AudioBuffer<float> buffer (2, 512);
    int64 pos = 0;
    WavAudioFormat format;

    // createWriterFor() takes ownership of the stream on success
    std::unique_ptr<AudioFormatWriter> writer (
        format.createWriterFor (new FileOutputStream (filename), sampleRate,
                                (unsigned int) buffer.getNumChannels(),
                                16, {}, 0));
    if (writer == nullptr)
        return;

    processor->setNonRealtime (true);
    processor->prepareToPlay (sampleRate, buffer.getNumSamples());

    while (! threadShouldExit() && pos < totalLength)
    {
        buffer.clear();
        MidiBuffer midi;
        // fill with your midi events of that block

        processor->processBlock (buffer, midi);
        writer->writeFromAudioSampleBuffer (buffer, 0, buffer.getNumSamples());
        pos += buffer.getNumSamples();
    }

    processor->releaseResources();
}

Thank you very much for your help.
I did the loop, but I have problems with sound quality itself.
I made a video where I described the problem:
Process block loop problem

When the device manager takes care of processBlock, everything is fine.
But when I go away from the device manager and make my own loop calling processBlock, I have problems with the sound from the plugin. I am starting to pull my hair out. Almost bald now…

Thanks for the video, that helps to understand it.

It still sounds to me like the Kontakt player does something asynchronous and doesn’t respect the setNonRealtime flag. Usually that flag should switch anything asynchronous to blocking, so it can render as fast as possible.
You could also simulate the audio device and put a Thread::sleep (10) inside the loop (roughly the duration of 512 samples at 48 kHz).

Another idea is to test with another instrument that doesn’t rely on Kontakt, to see if it is a general problem with your implementation, but my hunch is it’s Kontakt.

I found only one reference on the forum, but unfortunately it is unresolved:


Yes, thank you. Putting the thread to sleep helps, but the output is still damaged with dropouts. sleep(10) is the best value 🙂 but still…

I have tested with other plugins – it is the same. Setting setNonRealtime does not help.
And I even think there is no workaround without the device manager… Should it be reported to the JUCE team?

I just wonder how the device manager manages its callback while avoiding these dropouts? The card’s processor, maybe… And I wonder how Vienna Ensemble Pro overcame this problem…

UPDATE: Actually, it renders files properly. These dropouts are some bytes inserted while the thread sleeps for 10 ms… That is my guess…

UPDATE 2: it renders files properly with Thread::sleep(10). But when I remove this sleep, the wav file is in slow motion, if I can say so. It plays at the correct pitch and bitrate, but 10 seconds of recorded piano are stretched into 20 minutes in the wav file. This is a mystery for now… Do you have any idea why?

I am wondering about your code, since you showed playing live while rendering to file.

You cannot call processBlock() from different places. If you still play back the audio while rendering, the signal is chopped: one 512-sample block goes to the file, the next 512 samples to the audio device, and so forth.

For your setup you will have to pre-record the MIDI messages and fill them in during the loop that writes to the file.

Just a thought…

No, I don’t play it in the code where I am looping processBlock. Inside processBlock I send the data via a socket to my DAW plugin, and it plays there. That is why I am so worried about getting solid data in processBlock.

I was thinking about why the device manager handles its callback to the player correctly, so that solid sound comes from the processor. Here are my thoughts:

  • if the sample rate is 48000 and the frame is 512, then the callback happens every 512/48000 * 1000 ms (≈ 10.67 ms). That is why Thread::sleep(10) is close to optimal.

  • But I measured the callback period from the device manager – it is not always the exact value (± 100 microseconds); that is why, if I constantly sleep for 10 ms, the output is chopped.

So now I have to come up with a solution for reconstructing a solid signal from the processor. One idea is to set a short sleep interval and add a mask to the signal indicating the start of a new sound chunk. Then I have to analyse the collected chunks and cut the overlapping ones.

Update: Issue solved 🙂

Thank you for your help!