Change audio buffer size

#1

Hi, I'm developing an offline plugin that needs to send audio data to a server. The server then replies with the processed data, and the plugin writes the new samples back to the audio buffer. The problem is that the server application has a minimum audio data size limit (around 2K).
How can I change the audio buffer size in a plugin? Or does JUCE have additional processing functions, something like pre-processing and post-processing of the audio stream?


#2

Plugins can't control their own buffer size, but that shouldn't be a problem: you can't use networking on the audio thread anyway, so there has to be some kind of FIFO. Then it's just a question of how you deal with back pressure.
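
Not trying to prescribe your design, but here is a minimal sketch of that idea, assuming a mono stream, a Projucer-style JuceHeader.h include, and the ~2K server minimum from the original post. All the names (NetworkSender, sendToServer, minServerBlock) are made up for illustration: the audio thread pushes into a juce::AbstractFifo, and a background thread drains it and only contacts the server once a full chunk is available. If the FIFO fills up, the extra samples are dropped, which is the crude back-pressure policy chosen here.

#include <algorithm>
#include <vector>
#include <JuceHeader.h>

class NetworkSender : private juce::Thread
{
public:
    NetworkSender() : juce::Thread ("NetworkSender")  { startThread(); }
    ~NetworkSender() override                         { stopThread (2000); }

    // Called on the audio thread: lock-free, never blocks. If there isn't enough
    // free space, the extra samples are dropped (the back-pressure policy).
    void push (const float* samples, int numSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (numSamples, start1, size1, start2, size2);

        if (size1 > 0) juce::FloatVectorOperations::copy (storage.data() + start1, samples,         size1);
        if (size2 > 0) juce::FloatVectorOperations::copy (storage.data() + start2, samples + size1, size2);

        fifo.finishedWrite (size1 + size2);
        notify();                                   // wake the networking thread
    }

private:
    void run() override
    {
        std::vector<float> chunk ((size_t) minServerBlock);

        while (! threadShouldExit())
        {
            if (fifo.getNumReady() >= minServerBlock)
            {
                int start1, size1, start2, size2;
                fifo.prepareToRead (minServerBlock, start1, size1, start2, size2);

                if (size1 > 0) std::copy_n (storage.data() + start1, size1, chunk.data());
                if (size2 > 0) std::copy_n (storage.data() + start2, size2, chunk.data() + size1);

                fifo.finishedRead (size1 + size2);
                sendToServer (chunk);               // blocking network I/O is fine on this thread
            }
            else
            {
                wait (50);                          // nothing to send yet
            }
        }
    }

    void sendToServer (const std::vector<float>&)   { /* your networking code goes here */ }

    static constexpr int minServerBlock = 2048;     // the server's ~2K minimum
    static constexpr int fifoCapacity   = 1 << 16;

    juce::AbstractFifo fifo { fifoCapacity };
    std::vector<float> storage = std::vector<float> ((size_t) fifoCapacity);
};

Responses coming back from the server would need a second FIFO going the other way, which is where the back-pressure question really shows up.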


#3

Thanks for the reply.
When I use a multi-pass approach (the first pass starts a thread that writes the samples to a file and sends them to the server for processing, and the second pass reads the processed samples back from the file and writes them to the audio buffer), it works fine. But I think that is not convenient for users, so I'm trying to find a way to do it in one pass.
Maybe I can use the SDK functions directly? For example:

#if AAX_PLUGIN_BUILD
PreRender()
RenderAudio()
// etc
#endif

#4

The prepareToPlay method is called before the stream begins, and that's where you can do that kind of setup work. If you care about idiomatic/“modern” C++, then it should happen in the constructor, along with any other initialization for a single instance.
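
As a small sketch of that (the member names are invented): whichever place you pick for the rest of the initialization, the sample rate only arrives in prepareToPlay, so sizing the FIFO's backing storage there avoids allocating later on the audio thread.

void prepareToPlay (double sampleRate, int samplesPerBlock) override
{
    juce::ignoreUnused (samplesPerBlock);

    // Keep a couple of seconds of headroom for the network round trip (arbitrary choice).
    const int fifoSize = juce::nextPowerOfTwo ((int) sampleRate * 2);

    inputFifo.setTotalSize (fifoSize);          // a juce::AbstractFifo member
    inputStorage.resize ((size_t) fifoSize);    // a std::vector<float> member
}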

But what I meant is that to pipe data from the audio thread to the networking thread you need a FIFO or another lock-free/wait-free inter-thread communication technique. Since you can't count on the buffer size being constant, you'll need to buffer on the networking thread anyway before sending your data to the server, and buffer again when you receive it back. So controlling the buffer size isn't really the issue; designing a lock-free channel with back pressure is.
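
A sketch of what that looks like on the audio-thread side, under the same assumptions as before (mono only; sender, returnFifo and returnStorage are hypothetical members mirroring the NetworkSender idea above): the input is pushed out through one FIFO, whatever processed audio has already come back is popped from a second one, and any shortfall stays silent while the round trip is reported to the host as latency via setLatencySamples().

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    const int numSamples = buffer.getNumSamples();

    // Hand the input to the networking side (lock-free push, never blocks).
    sender.push (buffer.getReadPointer (0), numSamples);

    // Pop whatever processed audio the server has returned so far.
    int start1, size1, start2, size2;
    returnFifo.prepareToRead (numSamples, start1, size1, start2, size2);

    if (size1 > 0) buffer.copyFrom (0, 0,     returnStorage.data() + start1, size1);
    if (size2 > 0) buffer.copyFrom (0, size1, returnStorage.data() + start2, size2);

    returnFifo.finishedRead (size1 + size2);

    // Anything not filled yet stays silent; the host compensates for the delay
    // because the plugin reported it with setLatencySamples().
    const int received = size1 + size2;
    if (received < numSamples)
        buffer.clear (0, received, numSamples - received);
}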

There may be more complexity still: if your plugin is rendering offline (as opposed to in real time; I'm not sure which you meant in your original post), you may need to block the audio callback to deal with the network latency.
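
JUCE does expose the offline case: AudioProcessor::isNonRealtime() tells you whether the host is doing an offline render, and only then is blocking in the callback tolerable. A rough sketch along those lines (waitForProcessedBlock() and popProcessedOrSilence() are invented helpers, not JUCE API):

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    sender.push (buffer.getReadPointer (0), buffer.getNumSamples());

    if (isNonRealtime())
    {
        // Offline render: no real-time deadline, so waiting for the server's reply
        // is acceptable, but use a timeout so a dead connection can't hang the host.
        if (! waitForProcessedBlock (buffer, 5000 /* ms */))
            buffer.clear();                 // give up on this block rather than deadlock
    }
    else
    {
        // Real-time playback: never block; fall back to the FIFO + latency approach.
        popProcessedOrSilence (buffer);
    }
}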

All told, unless your processing can only be done on the server, this idea is very questionable.


#5

Yes, all the threads do their work fine. When the user starts playback I start buffering all the samples, and when the user stops playback I run the processing thread. That works well. But after that I can't update the samples in the DAW. In the AAX SDK, for example, this is no problem, because I can use the PreRender, Render, Post and other AudioSuite functions, so in the AAX SDK I can do it in one pass. I want to replicate that with JUCE in one pass.
Right now it works fine in JUCE, but only in two passes.

Ways I have tried to solve it:
1 - Block the audio callback, send the data, receive it and update the samples in the audio buffer. But the JUCE audio buffer is very small (which is why I asked about it in this topic).
2 - Maybe JUCE has additional functions to support several passes, such as PreRender, Render, etc., or maybe I can use the AAX SDK functions directly from JUCE.


#6

I think your problem is not a technical one but a design problem. What are you trying to achieve by using server-side rendering? If you are developing an application, why not do the processing in the application itself?
