I believe your plugin must run at the same sample rate as the host. If you want to run internally at a specific sample rate, you need to do sample-rate conversion on the way in and out of your plugin…
I do run my host at 48000.
At least I try: I set up my device with 48000, but my soundcard does not switch sample rate.
Only if I run deviceManager.initialiseWithDefaultDevices(2, 2);
does it switch back to 44100, or whatever it was set to before.
But I have one more question: when do I need deviceManager? What if I don’t want to hear sound, but only process audio and save a wav to disk?
You have to call processBlock() yourself in a loop and write the result to an AudioFormatWriter.
Caveat: tell the instrument you are working offline using setNonRealtime (true), because calling it as fast as possible will make many instruments skip, since they were not allowed to finish loading samples or other heavy tasks.
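A minimal sketch of such an offline render loop, assuming JUCE, might look like this. `plugin` is an already-loaded AudioPluginInstance, `writer` is an AudioFormatWriter created for the same sample rate and channel count, and `totalSamples` and the MIDI filling are placeholders you would supply yourself:

```cpp
// Sketch only: `plugin`, `writer` and `totalSamples` are assumed to exist.
const double sampleRate  = 48000.0;
const int    blockSize   = 512;
const int    numChannels = 2;

plugin->setNonRealtime (true);                 // we render offline, not live
plugin->prepareToPlay (sampleRate, blockSize);

juce::AudioBuffer<float> buffer (numChannels, blockSize);
juce::MidiBuffer midi;

for (int pos = 0; pos < totalSamples; pos += blockSize)
{
    buffer.clear();
    midi.clear();
    // ...fill `midi` with the events that fall into [pos, pos + blockSize)...

    plugin->processBlock (buffer, midi);
    writer->writeFromAudioSampleBuffer (buffer, 0, blockSize);
}

writer->flush();
```

The key points are calling setNonRealtime (true) before the loop and handing every rendered block to the writer in order; nothing else should be calling processBlock() on the same instance meanwhile.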
Thank you very much for your help.
I did the loop, but I have problems with the sound quality itself.
I made a video where I described the problem: Process block loop problem
When the device manager takes care of processBlock, everything is fine.
But when I move away from the device manager and make my own loop calling processBlock, then I have problems with the sound from the plugin. I am starting to pull out my hair. Almost bald now…
Thanks for the video, that helps to understand it.
It still sounds to me like the Kontakt player does something asynchronous and doesn’t respect the setNonRealtime flag. Usually that flag should switch anything asynchronous to blocking, so it can render as fast as possible.
You could also simulate the audio device and put a Thread::sleep (10) inside the loop (~ 512 samples @48 kHz).
Another idea is to test with another instrument that doesn’t rely on Kontakt, to see if it is a general problem with your implementation, but my hunch is it’s Kontakt.
I found only one reference on the forum, but it’s unfortunately unresolved:
Yes, thank you. Putting the thread to sleep helps. But the output is still damaged by dropouts. sleep(10) is the best value, but still…
I have tested with other plugins – it is the same. Setting setNonRealtime does not help.
And I even think there is no workaround without the device manager… Should this be reported to the JUCE team?
I just wonder how the device manager manages its callback while avoiding these dropouts. The card’s processor maybe… And I wonder how Vienna Ensemble Pro overcame this problem…
UPDATE: Actually, it renders files properly. These dropouts are some bytes inserted while the thread sleeps for 10 ms… That is my guess…
UPDATE 2: it renders files properly with Thread::sleep(10). But when I remove this sleep, the wav file is in slow motion, if I can say so. It plays at the correct pitch and bit rate, but 10 seconds of recorded piano are stretched to 20 minutes in the wav file. This is a mystery for now… Do you have any idea why?
I am wondering about your code, since you showed playing live while rendering to file.
You cannot call processBlock() from different places. If you still play back the audio while rendering, the signal is chopped: one 512-sample block goes into the file, the next 512 samples to the audio device, and so forth.
For your setup you will have to prerecord the MIDI messages and feed them in during the loop that writes to the file.
No, I don’t play it in the code where I am looping processBlock. Inside processBlock I send the data via a socket to my DAW plugin, and it plays there. That is why I am so concerned about getting solid data out of processBlock.
I was thinking about why the device manager delivers its callback to the player correctly, so that solid sound comes out of the processor. Here are my thoughts:
If the sample rate is 48000 and the block size is 512, then the callback happens every 512/48000 * 1000 ms (10.66… ms). That is why Thread::sleep(10) is close to optimal.
But I measured the callback period from the device manager – it is not always that exact value (± 100 microseconds), which is why, if I constantly sleep for 10 ms, the output is chopped.
So now I have to come up with a solution for reconstructing a solid signal from the processor. One idea is to use a short sleep interval and add a marker to the signal indicating the start of a new chunk. Then I would analyse the collected chunks and cut out the overlapping ones.