We are evaluating whether JUCE is a viable framework for an application that is not a typical audio application and will most likely not use any of the audio devices available on the OS, so the limitations of the OS's audio APIs are not a concern.
Right now, our main concern is whether JUCE can handle data at multiple sampling rates. Specifically, I'm looking into whether the various AudioProcessors in an AudioProcessorGraph instance can each process data at a different sampling rate, anywhere from 1 kHz to 44.1 kHz.
We are open to modifying JUCE's source to achieve this, since on a first look I don't see anything that supports it. How would you suggest we go about this with performance in mind? We will also have anywhere from 128 to 256 channels of data, rather than the usual channel counts that some classes in the framework might expect.
Thanks in advance.